diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index bedb27756ff..087406b56f3 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -237,6 +237,13 @@ existing resources, but you still get to implement something completely new. covering their behavior. See [Writing Acceptance Tests](#writing-acceptance-tests) below for a detailed guide on how to approach these. + - [ ] __Naming__: Resources should be named `aws_<service>_<name>` where + `service` is the AWS short service name and `name` is a short, preferably + single word, description of the resource. Use `_` as a separator. + We therefore encourage you to only submit **1 resource at a time**. + - [ ] __Arguments_and_Attributes__: The HCL for arguments and attributes should + mimic the types and structs presented by the AWS API. API arguments should be + converted from `CamelCase` to `camel_case`. - [ ] __Documentation__: Each resource gets a page in the Terraform documentation. The [Terraform website][website] source is in this repo and includes instructions for getting a local copy of the site up and @@ -389,7 +396,7 @@ ok github.com/terraform-providers/terraform-provider-aws/aws 55.619s Terraform has a framework for writing acceptance tests which minimises the amount of boilerplate code necessary to use common testing patterns. The entry -point to the framework is the `resource.Test()` function. +point to the framework is the `resource.ParallelTest()` function. Tests are divided into `TestStep`s. Each `TestStep` proceeds by applying some Terraform configuration using the provider under test, and then verifying that @@ -403,14 +410,14 @@ to a single resource. Most tests follow a similar structure. to running acceptance tests. This is common to all tests exercising a single provider. -Each `TestStep` is defined in the call to `resource.Test()`. Most assertion +Each `TestStep` is defined in the call to `resource.ParallelTest()`. Most assertion functions are defined out of band with the tests. This keeps the tests readable, and allows reuse of assertion functions across different tests of the same type of resource. The definition of a complete test looks like this: ```go func TestAccAzureRMPublicIpStatic_update(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testCheckAzureRMPublicIpDestroy, diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md index 8066d3921e4..b376ed08b0c 100644 --- a/.github/ISSUE_TEMPLATE.md +++ b/.github/ISSUE_TEMPLATE.md @@ -1,43 +1,6 @@ -Hi there, + \ No newline at end of file diff --git a/.github/ISSUE_TEMPLATE/Bug_Report.md b/.github/ISSUE_TEMPLATE/Bug_Report.md new file mode 100644 index 00000000000..e320d8a9ac0 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/Bug_Report.md @@ -0,0 +1,87 @@ +--- +name: 🐛 Bug Report +about: If something isn't working as expected 🤔.
+ +--- + + + + + +### Community Note + +* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request +* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request +* If you are interested in working on this issue or have submitted a pull request, please leave a comment + + + +### Terraform Version + + + +### Affected Resource(s) + + + +* aws_XXXXX + +### Terraform Configuration Files + + + +```hcl +# Copy-paste your Terraform configurations here - for large Terraform configs, +# please use a service like Dropbox and share a link to the ZIP file. For +# security, you can also encrypt the files using our GPG public key: https://keybase.io/hashicorp +``` + +### Debug Output + + + +### Panic Output + + + +### Expected Behavior + + + +### Actual Behavior + + + +### Steps to Reproduce + + + +1. `terraform apply` + +### Important Factoids + + + +### References + + + +* #0000 diff --git a/.github/ISSUE_TEMPLATE/Feature_Request.md b/.github/ISSUE_TEMPLATE/Feature_Request.md new file mode 100644 index 00000000000..f52c8ab0956 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/Feature_Request.md @@ -0,0 +1,47 @@ +--- +name: 🚀 Feature Request +about: I have a suggestion (and might want to implement myself 🙂)! + +--- + + + +### Community Note + +* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request +* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request +* If you are interested in working on this issue or have submitted a pull request, please leave a comment + + + +### Description + + + +### New or Affected Resource(s) + + + +* aws_XXXXX + +### Potential Terraform Configuration + + + +```hcl +# Copy-paste your Terraform configurations here - for large Terraform configs, +# please use a service like Dropbox and share a link to the ZIP file. For +# security, you can also encrypt the files using our GPG public key. +``` + +### References + + + +* #0000 diff --git a/.github/ISSUE_TEMPLATE/Question.md b/.github/ISSUE_TEMPLATE/Question.md new file mode 100644 index 00000000000..a3f9a38394f --- /dev/null +++ b/.github/ISSUE_TEMPLATE/Question.md @@ -0,0 +1,15 @@ +--- +name: 💬 Question +about: If you have a question, please check out our other community resources! + +--- + +Issues on GitHub are intended to be related to bugs or feature requests with provider codebase, +so we recommend using our other community resources instead of asking here 👍. + +--- + +If you have a support request or question please submit them to one of these resources: + +* [Terraform community resources](https://www.terraform.io/docs/extend/community/index.html) +* [HashiCorp support](https://support.hashicorp.com) (Terraform Enterprise customers) diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md new file mode 100644 index 00000000000..448b34f20a5 --- /dev/null +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -0,0 +1,15 @@ + +Fixes #0000 + +Changes proposed in this pull request: + +* Change 1 +* Change 2 + +Output from acceptance testing: + +``` +$ make testacc TESTARGS='-run=TestAccAWSAvailabilityZones' + +... 
+``` diff --git a/.gitignore b/.gitignore index 5aaec9d3378..62a3c224895 100644 --- a/.gitignore +++ b/.gitignore @@ -32,4 +32,5 @@ website/vendor # Keep windows files with windows line endings *.winfile eol=crlf -/.vs \ No newline at end of file +/.vs +node_modules diff --git a/.gometalinter.json b/.gometalinter.json new file mode 100644 index 00000000000..3192049c4d2 --- /dev/null +++ b/.gometalinter.json @@ -0,0 +1,29 @@ +{ + "Deadline": "5m", + "Enable": [ + "deadcode", + "errcheck", + "gofmt", + "ineffassign", + "interfacer", + "misspell", + "structcheck", + "unconvert", + "unparam", + "unused", + "varcheck", + "vet" + ], + "EnableGC": true, + "Linters": { + "errcheck": { + "Command": "errcheck -abspath {not_tests=-ignoretests} -ignore github.com/hashicorp/terraform/helper/schema:ForceNew|Set -ignore io:Close" + } + }, + "Sort": [ + "path", + "line" + ], + "Vendor": true, + "WarnUnmatchedDirective": true +} \ No newline at end of file diff --git a/.travis.yml b/.travis.yml index 83247c3b830..467221131d2 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,8 +1,13 @@ dist: trusty -sudo: false +sudo: required +services: + - docker language: go go: -- 1.9.1 +- "1.11.1" + +git: + depth: 1 install: # This script is used by the Travis build to install a cookie for @@ -10,12 +15,14 @@ install: # packages that live there. # See: https://github.com/golang/go/issues/12933 - bash scripts/gogetcookie.sh -- go get github.com/kardianos/govendor +- make tools script: +- make lint - make test - make vendor-status -- make vet +- make website-lint +- make website-test branches: only: diff --git a/CHANGELOG.md b/CHANGELOG.md index 31b446ea9b3..fc7c60243c7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,47 +1,1242 @@ -## 1.12.0 (Unreleased) +## 1.46.0 (November 20, 2018) + +FEATURES: + +* **New Data Source:** `aws_api_gateway_api_key` ([#6449](https://github.com/terraform-providers/terraform-provider-aws/issues/6449)) + +ENHANCEMENTS: + +* data-source/aws_eip: Add `association_id`, `domain`, `instance_id`, `network_interface_id`, `network_interface_owner_id`, `private_ip`, and `public_ipv4_pool` attributes ([#6463](https://github.com/terraform-providers/terraform-provider-aws/issues/6463)] / [[#6518](https://github.com/terraform-providers/terraform-provider-aws/issues/6518)) +* resource/aws_ecs_cluster: Add `tags` argument ([#6486](https://github.com/terraform-providers/terraform-provider-aws/issues/6486)) +* resource/aws_ecs_service: Add `tags` argument ([#6486](https://github.com/terraform-providers/terraform-provider-aws/issues/6486)) +* resource/aws_ecs_task_definition: Add `tags` argument ([#6486](https://github.com/terraform-providers/terraform-provider-aws/issues/6486)) +* resource/aws_ecs_task_definition: Add `ipc_mode` and `pid_mode` arguments ([#6515](https://github.com/terraform-providers/terraform-provider-aws/issues/6515)) +* resource/aws_eip: Add `public_ipv4_pool` argument ([#6518](https://github.com/terraform-providers/terraform-provider-aws/issues/6518)) +* resource/aws_iam_role: Add `tags` argument ([#6499](https://github.com/terraform-providers/terraform-provider-aws/issues/6499)) +* resource/aws_iam_user: Add `tags` argument ([#6497](https://github.com/terraform-providers/terraform-provider-aws/issues/6497)) +* resource/aws_sns_topic: Add `kms_master_key_id` argument (support server-side encryption) 
([#6502](https://github.com/terraform-providers/terraform-provider-aws/issues/6502)) + +BUG FIXES: + +* resource/aws_kinesis_analytics_application: Properly handle `processing_configuration` argument ([#6495](https://github.com/terraform-providers/terraform-provider-aws/issues/6495)) + +## 1.45.0 (November 15, 2018) + +ENHANCEMENTS: + +* resource/aws_autoscaling_group: Mixed Instances Policy support ([#6465](https://github.com/terraform-providers/terraform-provider-aws/issues/6465)) + +## 1.44.0 (November 14, 2018) + +FEATURES: + +* **New Resource:** `aws_gamelift_game_session_queue` ([#6335](https://github.com/terraform-providers/terraform-provider-aws/issues/6335)) +* **New Resource:** `aws_glacier_vault_lock` ([#6432](https://github.com/terraform-providers/terraform-provider-aws/issues/6432)) + +ENHANCEMENTS: + +* data-source/aws_eip: Add `filter` argument ([#3525](https://github.com/terraform-providers/terraform-provider-aws/issues/3525)) +* data-source/aws_eip: Add `tags` argument ([#3505](https://github.com/terraform-providers/terraform-provider-aws/issues/3505)) +* data-source/aws_eip: Support EC2-Classic Elastic IPs ([#3522](https://github.com/terraform-providers/terraform-provider-aws/issues/3522)) +* resource/aws_codebuild_project: Support `source` `report_build_status` for Bitbucket ([#6426](https://github.com/terraform-providers/terraform-provider-aws/issues/6426)) +* resource/aws_dlm_lifecycle_policy: Add `copy_tags` argument ([#6445](https://github.com/terraform-providers/terraform-provider-aws/issues/6445)) +* resource/aws_ebs_snapshot: Allow retries for `SnapshotCreationPerVolumeRateExceeded` errors on creation ([#6414](https://github.com/terraform-providers/terraform-provider-aws/issues/6414)) +* resource/aws_ebs_volume: Switch to tagging on creation ([#6396](https://github.com/terraform-providers/terraform-provider-aws/issues/6396)) +* resource/aws_elastic_transcoder_pipeline: Support resource import ([#6388](https://github.com/terraform-providers/terraform-provider-aws/issues/6388)) +* resource/aws_elastic_transcoder_preset: Support resource import ([#6388](https://github.com/terraform-providers/terraform-provider-aws/issues/6388)) +* resource/aws_lambda_event_source_mapping: Add `starting_position_timestamp` argument ([#6437](https://github.com/terraform-providers/terraform-provider-aws/issues/6437)) +* resource/aws_route53_health_check: Provide plan-time validation for `type` ([#6460](https://github.com/terraform-providers/terraform-provider-aws/issues/6460)) +* resource/aws_ses_receipt_rule: Support resource import ([#6237](https://github.com/terraform-providers/terraform-provider-aws/issues/6237)) +* resource/aws_ssm_maintenance_window_task: Add `description` and `name` arguments ([#5762](https://github.com/terraform-providers/terraform-provider-aws/issues/5762)) + +BUG FIXES: + +* data-source/aws_ebs_snapshot: Fix `most_recent` ordering ([#6414](https://github.com/terraform-providers/terraform-provider-aws/issues/6414)) +* resource/aws_cloudwatch_log_metric_filter: Properly leave `default_value` empty when unset ([#5933](https://github.com/terraform-providers/terraform-provider-aws/issues/5933)) +* resource/aws_route53_health_check: Properly read `child_healthchecks` into Terraform state ([#6460](https://github.com/terraform-providers/terraform-provider-aws/issues/6460)) +* 
resource/aws_security_group_rule: Support all non-zero `from_port` and `to_port` configurations with `protocol` ALL/-1 ([#6423](https://github.com/terraform-providers/terraform-provider-aws/issues/6423)) +* resource/aws_sns_platform_application: Properly trigger resource recreation when deleted outside Terraform ([#6436](https://github.com/terraform-providers/terraform-provider-aws/issues/6436)) +* service/ec2: Allow `tags` and `volume_tags` updates to retry based on SDK retries instead of time bounds for EC2 throttling ([#3586](https://github.com/terraform-providers/terraform-provider-aws/issues/3586)) + +## 1.43.2 (November 10, 2018) + +BUG FIXES: + +* resource/aws_security_group_rule: Prevent crash when reading rules from groups containing an `ALL`/`-1` `protocol` rule ([#6419](https://github.com/terraform-providers/terraform-provider-aws/issues/6419)) + +## 1.43.1 (November 09, 2018) + +BUG FIXES: + +* resource/aws_cloudwatch_metric_alarm: Accept EC2 automate reboot ARN ([#6405](https://github.com/terraform-providers/terraform-provider-aws/issues/6405)) +* resource/aws_lambda_function: Handle slower code uploads on creation with configurable timeout ([#6409](https://github.com/terraform-providers/terraform-provider-aws/issues/6409)) +* resource/aws_rds_cluster: Prevent `InvalidParameterCombination` error with `engine_version` and `snapshot_identifier` on creation ([#6391](https://github.com/terraform-providers/terraform-provider-aws/issues/6391)) +* resource/aws_security_group_rule: Properly handle updating description when `protocol` is -1/ALL ([#6407](https://github.com/terraform-providers/terraform-provider-aws/issues/6407)) +* resource/aws_vpc: Always set `assign_generated_ipv6_cidr_block`, `ipv6_association_id`, and `ipv6_cidr_block` attributes in Terraform state ([#2103](https://github.com/terraform-providers/terraform-provider-aws/issues/2103)) +* resource/aws_vpc: Always wait for IPv6 CIDR block association on resource creation if `assign_generated_ipv6_cidr_block` is set ([#6394](https://github.com/terraform-providers/terraform-provider-aws/issues/6394)) +* service/ec2: Properly ignore sending existing tags during updates ([#5108](https://github.com/terraform-providers/terraform-provider-aws/issues/5108)] / [[#6370](https://github.com/terraform-providers/terraform-provider-aws/issues/6370)) + +## 1.43.0 (November 07, 2018) + +NOTES: + +* resource/aws_lb_listener: This resource will now sort the API response based on action ordering. If necessary, sorting your configuration based on `order` should resolve any plan difference. +* resource/aws_lb_listener_rule: This resource will now sort the API response based on action ordering. If necessary, sorting your configuration based on `order` should resolve any plan difference. 
+ +FEATURES: + +* **New Resource:** `aws_dlm_lifecycle_policy` ([#5558](https://github.com/terraform-providers/terraform-provider-aws/issues/5558)) +* **New Resource:** `aws_kinesis_analytics_application` ([#5456](https://github.com/terraform-providers/terraform-provider-aws/issues/5456)) + +ENHANCEMENTS: + +* data-source/aws_efs_file_system: Add `arn` attribute ([#6371](https://github.com/terraform-providers/terraform-provider-aws/issues/6371)) +* data-source/aws_efs_mount_target: Add `file_system_arn` attribute ([#6371](https://github.com/terraform-providers/terraform-provider-aws/issues/6371)) +* data-source/aws_mq_broker: Add `logs` attribute ([#6122](https://github.com/terraform-providers/terraform-provider-aws/issues/6122)) +* resource/aws_efs_file_system: Add `arn` attribute ([#6371](https://github.com/terraform-providers/terraform-provider-aws/issues/6371)) +* resource/aws_efs_mount_target: Add `file_system_arn` attribute ([#6371](https://github.com/terraform-providers/terraform-provider-aws/issues/6371)) +* resource/aws_launch_configuration: Add `capacity_reservation_specification` argument ([#6325](https://github.com/terraform-providers/terraform-provider-aws/issues/6325)) +* resource/aws_mq_broker: Add `logs` argument ([#6122](https://github.com/terraform-providers/terraform-provider-aws/issues/6122)) + +BUG FIXES: + +* resource/aws_ecs_service: Continue supporting replica `deployment_minimum_healthy_percent = 0` and `deployment_maximum_percent = 100` ([#6316](https://github.com/terraform-providers/terraform-provider-aws/issues/6316)) +* resource/aws_flow_log: Automatically trim `:*` suffix from `log_destination` argument ([#6377](https://github.com/terraform-providers/terraform-provider-aws/issues/6377)) +* resource/aws_iam_user: Delete SSH keys with `force_delete` ([#6337](https://github.com/terraform-providers/terraform-provider-aws/issues/6337)) +* resource/aws_lb_listener: Prevent panics with actions deleted outside Terraform ([#6319](https://github.com/terraform-providers/terraform-provider-aws/issues/6319)) +* resource/aws_lb_listener_rule: Prevent panics with actions deleted outside Terraform ([#6319](https://github.com/terraform-providers/terraform-provider-aws/issues/6319)) +* resource/aws_opsworks_application: Properly recreate resource on `short_name` updates ([#6359](https://github.com/terraform-providers/terraform-provider-aws/issues/6359)) +* resource/aws_s3_bucket: Prevent `MalformedXML` error when using cross-region replication V1 with an empty `prefix` ([#6344](https://github.com/terraform-providers/terraform-provider-aws/issues/6344)) + +## 1.42.0 (October 31, 2018) + +NOTES: + +* resource/aws_route53_zone: The `vpc_id` and `vpc_region` arguments have been deprecated in favor of `vpc` configuration block(s). To upgrade, wrap existing `vpc_id` and `vpc_region` arguments with `vpc { ... }`. Since `vpc` is an exclusive set of VPC associations, you may need to define other `vpc` configuration blocks to match the infrastructure, or use lifecycle configuration `ignore_changes` to suppress the plan difference. +* resource/aws_route53_zone_association: Due to the multiple VPC association support now available in the `aws_route53_zone` resource, we recommend removing usage of this resource unless necessary for ordering. To remove this resource from management (without disassociating VPCs), you can use `terraform state rm`. 
If necessary to keep this resource for ordering, you can use the lifecycle `ignore_changes` in the `aws_route53_zone` resource to suppress plan differences. + +FEATURES: + +* **New Resource:** `aws_ec2_capacity_reservation` ([#6291](https://github.com/terraform-providers/terraform-provider-aws/issues/6291)) +* **New Resource:** `aws_glue_security_configuration` ([#6288](https://github.com/terraform-providers/terraform-provider-aws/issues/6288)) +* **New Resource:** `aws_iot_policy_attachment` ([#5864](https://github.com/terraform-providers/terraform-provider-aws/issues/5864)) +* **New Resource:** `aws_iot_thing_principal_attachment` ([#5868](https://github.com/terraform-providers/terraform-provider-aws/issues/5868)) +* **New Resource:** `aws_pinpoint_apns_sandbox_channel` ([#6233](https://github.com/terraform-providers/terraform-provider-aws/issues/6233)) +* **New Resource:** `aws_pinpoint_apns_voip_channel` ([#6234](https://github.com/terraform-providers/terraform-provider-aws/issues/6234)) +* **New Resource:** `aws_pinpoint_apns_voip_sandbox_channel` ([#6235](https://github.com/terraform-providers/terraform-provider-aws/issues/6235)) + +ENHANCEMENTS: + +* data-source/aws_iot_endpoint: Add `endpoint_type` argument ([#6215](https://github.com/terraform-providers/terraform-provider-aws/issues/6215)) +* data-source/aws_nat_gateway: Support `tags` as argument and attribute ([#6231](https://github.com/terraform-providers/terraform-provider-aws/issues/6231)) +* resource/aws_budgets_budget: Support resource import ([#6226](https://github.com/terraform-providers/terraform-provider-aws/issues/6226)) +* resource/aws_cloudwatch_event_permission: Add `condition` argument (support Organizations access) ([#6261](https://github.com/terraform-providers/terraform-provider-aws/issues/6261)) +* resource/aws_codepipeline_webhook: Support resource import ([#6202](https://github.com/terraform-providers/terraform-provider-aws/issues/6202)) +* resource/aws_cognito_user_pool_domain: Add `certificate_arn` argument (support custom domains) ([#6185](https://github.com/terraform-providers/terraform-provider-aws/issues/6185)) +* resource/aws_dx_hosted_private_virtual_interface: Add `mtu` argument and `jumbo_frame_capable` attribute ([#6142](https://github.com/terraform-providers/terraform-provider-aws/issues/6142)) +* resource/aws_dx_private_virtual_interface: Add `mtu` argument and `jumbo_frame_capable` attribute ([#6141](https://github.com/terraform-providers/terraform-provider-aws/issues/6141)) +* resource/aws_ecs_service: Support `deployment_minimum_healthy_percent` for `DAEMON` strategy ([#6150](https://github.com/terraform-providers/terraform-provider-aws/issues/6150)) +* resource/aws_flow_log: Add `log_destination` and `log_destination_type` arguments (support sending to S3) ([#5509](https://github.com/terraform-providers/terraform-provider-aws/issues/5509)) +* resource/aws_glue_job: Add `security_configuration` argument ([#6232](https://github.com/terraform-providers/terraform-provider-aws/issues/6232)) +* resource/aws_lb_target_group: Improve `name` and `name_prefix` argument plan-time validation ([#6168](https://github.com/terraform-providers/terraform-provider-aws/issues/6168)) +* resource/aws_s3_bucket: Support S3 Cross-Region Replication filtering based on S3 object tags 
([#6095](https://github.com/terraform-providers/terraform-provider-aws/issues/6095)) +* resource/aws_secretsmanager_secret: Add `name_prefix` argument ([#6277](https://github.com/terraform-providers/terraform-provider-aws/issues/6277)) +* resource/aws_secretsmanager_secret: Add plan-time validation for `name` argument ([#6277](https://github.com/terraform-providers/terraform-provider-aws/issues/6277)) +* resource/aws_route53_zone: Add `vpc` argument, deprecate `vpc_id` and `vpc_region` arguments (support multiple VPC associations) ([#6299](https://github.com/terraform-providers/terraform-provider-aws/issues/6299)) +* resource/aws_waf_rule: Support resource import ([#6247](https://github.com/terraform-providers/terraform-provider-aws/issues/6247)) + +BUG FIXES: + +* data-source/aws_network_interface: Properly handle reading `private_ip` into Terraform state ([#6284](https://github.com/terraform-providers/terraform-provider-aws/issues/6284)) +* resource/aws_ami_launch_permission: Prevent panic reading public permissions ([#6224](https://github.com/terraform-providers/terraform-provider-aws/issues/6224)) +* resource/aws_budgets_budget: Properly read `time_period_start` and `time_period_end` into Terraform state ([#6226](https://github.com/terraform-providers/terraform-provider-aws/issues/6226)) +* resource/aws_cloudwatch_metric_alarm: Allow EC2 Automate ARNs with `alarm_actions` ([#6206](https://github.com/terraform-providers/terraform-provider-aws/issues/6206)) +* resource/aws_dx_gateway: Allow legacy `amazon_side_asn` in plan-time validation ([#6253](https://github.com/terraform-providers/terraform-provider-aws/issues/6253)) +* resource/aws_egress_only_internet_gateway: Improve eventual consistency logic during creation ([#6190](https://github.com/terraform-providers/terraform-provider-aws/issues/6190)) +* resource/aws_glue_crawler: Suppress `role` difference when using ARN ([#6293](https://github.com/terraform-providers/terraform-provider-aws/issues/6293)) +* resource/aws_iam_role_policy: Properly handle reading attributes into Terraform state after creation and update ([#6304](https://github.com/terraform-providers/terraform-provider-aws/issues/6304)) +* resource/aws_kinesis_firehose_delivery_stream: Properly recreate resource when updating `elasticsearch_configuration` `s3_backup_mode` ([#6305](https://github.com/terraform-providers/terraform-provider-aws/issues/6305)) +* resource/aws_nat_gateway: Remove `network_interface_id`, `private_ip`, and `public_ip` as configurable (they continue to be available as read-only attributes) ([#6225](https://github.com/terraform-providers/terraform-provider-aws/issues/6225)) +* resource/aws_network_acl: Properly handle ICMP code and type with IPv6 ICMP (protocol 58) ([#6264](https://github.com/terraform-providers/terraform-provider-aws/issues/6264)) +* resource/aws_network_acl_rule: Suppress `protocol` differences between name and number ([#2454](https://github.com/terraform-providers/terraform-provider-aws/issues/2454)) +* resource/aws_network_acl_rule: Properly handle ICMP code and type with IPv6 ICMP (protocol 58) ([#6263](https://github.com/terraform-providers/terraform-provider-aws/issues/6263)) +* resource/aws_rds_cluster_parameter_group: Properly read `parameter` `apply_method` into Terraform state ([#6295](https://github.com/terraform-providers/terraform-provider-aws/issues/6295)) + +## 1.41.0 
(October 18, 2018) + +FEATURES: + +* **New Data Source:** `aws_cloudhsm_v2_cluster` ([#4125](https://github.com/terraform-providers/terraform-provider-aws/issues/4125)) +* **New Resource:** `aws_cloudhsm_v2_cluster` ([#4125](https://github.com/terraform-providers/terraform-provider-aws/issues/4125)) +* **New Resource:** `aws_cloudhsm_v2_hsm` ([#4125](https://github.com/terraform-providers/terraform-provider-aws/issues/4125)) +* **New Resource:** `aws_codepipeline_webhook` ([#5875](https://github.com/terraform-providers/terraform-provider-aws/issues/5875)) +* **New Resource:** `aws_pinpoint_apns_channel` ([#6194](https://github.com/terraform-providers/terraform-provider-aws/issues/6194)) +* **New Resource:** `aws_redshift_event_subscription` ([#6146](https://github.com/terraform-providers/terraform-provider-aws/issues/6146)) + +ENHANCEMENTS: + +* resource/aws_appsync_datasource: Support resource import ([#6139](https://github.com/terraform-providers/terraform-provider-aws/issues/6139)) +* resource/aws_appsync_datasource: Support `HTTP` `type` and add `http_config` argument ([#6139](https://github.com/terraform-providers/terraform-provider-aws/issues/6139)) +* resource/aws_appsync_datasource: Make `dynamodb_config` and `elasticsearch_config` `region` configuration optional based on resource current region ([#6139](https://github.com/terraform-providers/terraform-provider-aws/issues/6139)) +* resource/aws_appsync_graphql_api: Add `log_config` argument ([#6138](https://github.com/terraform-providers/terraform-provider-aws/issues/6138)) +* resource/aws_appsync_graphql_api: Add `openid_connect_config` argument ([#6138](https://github.com/terraform-providers/terraform-provider-aws/issues/6138)) +* resource/aws_appsync_graphql_api: Add `uris` attribute ([#6138](https://github.com/terraform-providers/terraform-provider-aws/issues/6138)) +* resource/aws_appsync_graphql_api: Make `user_pool_config` `aws_region` configuration optional based on resource current region ([#6138](https://github.com/terraform-providers/terraform-provider-aws/issues/6138)) +* resource/aws_athena_database: Add `encryption_configuration` argument ([#6117](https://github.com/terraform-providers/terraform-provider-aws/issues/6117)) +* resource/aws_cloudwatch_metric_alarm: Validate `alarm_actions` ([#6151](https://github.com/terraform-providers/terraform-provider-aws/issues/6151)) +* resource/aws_codebuild_project: Support `NO_SOURCE` in `source` `type` ([#6140](https://github.com/terraform-providers/terraform-provider-aws/issues/6140)) +* resource/aws_db_instance: Directly restore snapshot with `parameter_group_name` set ([#6200](https://github.com/terraform-providers/terraform-provider-aws/issues/6200)) +* resource/aws_dx_connection: Add `jumbo_frame_capable` attribute ([#6143](https://github.com/terraform-providers/terraform-provider-aws/issues/6143)) +* resource/aws_dynamodb_table: Prevent error `UnknownOperationException: Tagging is not currently supported in DynamoDB Local` ([#6149](https://github.com/terraform-providers/terraform-provider-aws/issues/6149)) +* resource/aws_lb_listener: Allow `default_action` `order` to be based on Terraform configuration ordering ([#6124](https://github.com/terraform-providers/terraform-provider-aws/issues/6124)) +* resource/aws_lb_listener_rule: Allow `action` `order` to be based on Terraform configuration ordering 
([#6124](https://github.com/terraform-providers/terraform-provider-aws/issues/6124)) +* resource/aws_rds_cluster: Directly restore snapshot with `db_cluster_parameter_group_name` set ([#6200](https://github.com/terraform-providers/terraform-provider-aws/issues/6200)) + +BUG FIXES: + +* resource/aws_appsync_graphql_api: Properly handle updates by passing all parameters ([#6138](https://github.com/terraform-providers/terraform-provider-aws/issues/6138)) +* resource/aws_ecs_service: Properly handle `random` placement strategy ([#6176](https://github.com/terraform-providers/terraform-provider-aws/issues/6176)) +* resource/aws_lb_listener: Prevent unconfigured `default_action` `order` from showing difference ([#6119](https://github.com/terraform-providers/terraform-provider-aws/issues/6119)) +* resource/aws_lb_listener_rule: Prevent unconfigured `action` `order` from showing difference ([#6119](https://github.com/terraform-providers/terraform-provider-aws/issues/6119)) +* resource/aws_lb_listener_rule: Retry read for eventual consistency after resource creation ([#6154](https://github.com/terraform-providers/terraform-provider-aws/issues/6154)) + +## 1.40.0 (October 10, 2018) + +FEATURES: + +* **New Data Source:** `aws_launch_template` ([#6064](https://github.com/terraform-providers/terraform-provider-aws/issues/6064)) +* **New Data Source:** `aws_workspaces_bundle` ([#3243](https://github.com/terraform-providers/terraform-provider-aws/issues/3243)) +* **New Guide:** [`AWS IAM Policy Documents`](https://www.terraform.io/docs/providers/aws/guides/iam-policy-documents.html) ([#6016](https://github.com/terraform-providers/terraform-provider-aws/issues/6016)) +* **New Resource:** `aws_ebs_snapshot_copy` ([#3086](https://github.com/terraform-providers/terraform-provider-aws/issues/3086)) +* **New Resource:** `aws_pinpoint_adm_channel` ([#6038](https://github.com/terraform-providers/terraform-provider-aws/issues/6038)) +* **New Resource:** `aws_pinpoint_baidu_channel` ([#6111](https://github.com/terraform-providers/terraform-provider-aws/issues/6111)) +* **New Resource:** `aws_pinpoint_email_channel` ([#6110](https://github.com/terraform-providers/terraform-provider-aws/issues/6110)) +* **New Resource:** `aws_pinpoint_event_stream` ([#6069](https://github.com/terraform-providers/terraform-provider-aws/issues/6069)) +* **New Resource:** `aws_pinpoint_gcm_channel` ([#6089](https://github.com/terraform-providers/terraform-provider-aws/issues/6089)) +* **New Resource:** `aws_pinpoint_sms_channel` ([#6088](https://github.com/terraform-providers/terraform-provider-aws/issues/6088)) +* **New Resource:** `aws_redshift_snapshot_copy_grant` ([#5134](https://github.com/terraform-providers/terraform-provider-aws/issues/5134)) + +ENHANCEMENTS: + +* data-source/aws_iam_policy_document: Make `statement` argument optional ([#6052](https://github.com/terraform-providers/terraform-provider-aws/issues/6052)) +* data-source/aws_secretsmanager_secret: Add `policy` attribute ([#6091](https://github.com/terraform-providers/terraform-provider-aws/issues/6091)) +* data-source/aws_secretsmanager_secret_version: Add `secret_binary` attribute ([#6070](https://github.com/terraform-providers/terraform-provider-aws/issues/6070)) +* resource/aws_codebuild_project: Add `environment` `certificate` argument 
([#6087](https://github.com/terraform-providers/terraform-provider-aws/issues/6087)) +* resource/aws_ecr_repository: Add configurable `delete` timeout ([#3910](https://github.com/terraform-providers/terraform-provider-aws/issues/3910)) +* resource/aws_elastic_beanstalk_environment: Add `platform_arn` argument (support custom platforms) ([#6093](https://github.com/terraform-providers/terraform-provider-aws/issues/6093)) +* resource/aws_lb_listener: Support Cognito and OIDC authentication ([#6094](https://github.com/terraform-providers/terraform-provider-aws/issues/6094)) +* resource/aws_lb_listener_rule: Support Cognito and OIDC authentication ([#6094](https://github.com/terraform-providers/terraform-provider-aws/issues/6094)) +* resource/aws_mq_broker: Add `instances` `ip_address` attribute ([#6103](https://github.com/terraform-providers/terraform-provider-aws/issues/6103)) +* resource/aws_rds_cluster: Support `engine_version` updates ([#5010](https://github.com/terraform-providers/terraform-provider-aws/issues/5010)) +* resource/aws_s3_bucket: Add replication `access_control_translation` and `account_id` arguments (support cross-account replication ownership) ([#3577](https://github.com/terraform-providers/terraform-provider-aws/issues/3577)) +* resource/aws_secretsmanager_secret_version: Add `secret_binary` argument ([#6070](https://github.com/terraform-providers/terraform-provider-aws/issues/6070)) +* resource/aws_security_group_rule: Support resource import ([#6027](https://github.com/terraform-providers/terraform-provider-aws/issues/6027)) + +BUG FIXES: + +* resource/aws_appautoscaling_policy: Properly handle negative values in step scaling metric intervals ([#3480](https://github.com/terraform-providers/terraform-provider-aws/issues/3480)) +* resource/aws_appsync_datasource: Properly pass all attributes during update ([#5814](https://github.com/terraform-providers/terraform-provider-aws/issues/5814)) +* resource/aws_batch_job_queue: Prevent error during read of non-existent Job Queue ([#6085](https://github.com/terraform-providers/terraform-provider-aws/issues/6085)) +* resource/aws_ecr_repository: Retry read for eventual consistency after resource creation ([#3910](https://github.com/terraform-providers/terraform-provider-aws/issues/3910)) +* resource/aws_ecs_service: Properly remove non-existent services from Terraform state ([#6039](https://github.com/terraform-providers/terraform-provider-aws/issues/6039)) +* resource/aws_iam_instance_profile: Retry for eventual consistency when adding a role ([#6079](https://github.com/terraform-providers/terraform-provider-aws/issues/6079)) +* resource/aws_lb_listener: Retry read for eventual consistency after resource creation ([#5167](https://github.com/terraform-providers/terraform-provider-aws/issues/5167)) + +## 1.39.0 (October 03, 2018) + +FEATURES: + +* **New Resource:** `aws_ec2_fleet` ([#5960](https://github.com/terraform-providers/terraform-provider-aws/issues/5960)) +* **New Resource:** `aws_pinpoint_app` ([#5956](https://github.com/terraform-providers/terraform-provider-aws/issues/5956)) + +ENHANCEMENTS: + +* resource/aws_cloudwatch_event_target: Support additional ECS target arguments ([#5982](https://github.com/terraform-providers/terraform-provider-aws/issues/5982)) +* resource/aws_codedeploy_app: Support resource import 
([#6025](https://github.com/terraform-providers/terraform-provider-aws/issues/6025)) +* resource/aws_codedeploy_deployment_config: Support resource import ([#6025](https://github.com/terraform-providers/terraform-provider-aws/issues/6025)) +* resource/aws_codedeploy_deployment_group: Support resource import ([#6025](https://github.com/terraform-providers/terraform-provider-aws/issues/6025)) +* resource/aws_db_instance: Add `deletion_protection` argument ([#6011](https://github.com/terraform-providers/terraform-provider-aws/issues/6011)) +* resource/aws_dx_connection: Support 50Mbps, 100Mbps, 200Mbps, 300Mbps, 400Mbps, 500Mbps as valid `bandwidth` values ([#6057](https://github.com/terraform-providers/terraform-provider-aws/issues/6057)) +* resource/aws_dx_lag: Support 50Mbps, 100Mbps, 200Mbps, 300Mbps, 400Mbps, 500Mbps as valid `connections_bandwidth` values ([#6057](https://github.com/terraform-providers/terraform-provider-aws/issues/6057)) +* resource/aws_elasticsearch_domain: Add `node_to_node_encryption` argument ([#5997](https://github.com/terraform-providers/terraform-provider-aws/issues/5997)) +* resource/aws_rds_cluster: Add `deletion_protection` argument ([#6010](https://github.com/terraform-providers/terraform-provider-aws/issues/6010)) +* resource/aws_sns_topic_subscription: Add `delivery_policy` argument ([#3289](https://github.com/terraform-providers/terraform-provider-aws/issues/3289)) +* resource/aws_spot_fleet_request: Add `instance_pools_to_use_count` argument ([#5955](https://github.com/terraform-providers/terraform-provider-aws/issues/5955)) + +BUG FIXES: + +* resource/aws_api_gateway_deployment: Do not delete stage if it is in use by another deployment ([#3896](https://github.com/terraform-providers/terraform-provider-aws/issues/3896)) +* resource/aws_codedeploy_deployment_group: Include autoscaling groups when updating blue green config ([#5827](https://github.com/terraform-providers/terraform-provider-aws/issues/5827)) +* resource/aws_codedeploy_deployment_group: Properly read `autoscaling_groups` into Terraform state ([#6025](https://github.com/terraform-providers/terraform-provider-aws/issues/6025)) +* resource/aws_ecs_task_definition: Properly handle task scoped docker volume configurations ([#5907](https://github.com/terraform-providers/terraform-provider-aws/issues/5907)) +* resource/aws_network_interface_sg_attachment: Properly handle `InvalidNetworkInterfaceID.NotFound` errors ([#6048](https://github.com/terraform-providers/terraform-provider-aws/issues/6048)) +* resource/aws_rds_cluster: Properly handle `kms_key_id` when restoring from snapshot ([#6012](https://github.com/terraform-providers/terraform-provider-aws/issues/6012)) +* resource/aws_s3_bucket_object: Mark `version_id` as recomputed on `etag` updates ([#3861](https://github.com/terraform-providers/terraform-provider-aws/issues/3861)) +* resource/aws_security_group: Prevent `InvalidNetworkInterfaceID.NotFound` errors when deleting lingering network interfaces ([#6037](https://github.com/terraform-providers/terraform-provider-aws/issues/6037)) +* resource/aws_sns_topic_subscription: Properly read all attributes into Terraform state on reads ([#6023](https://github.com/terraform-providers/terraform-provider-aws/issues/6023)) +* resource/aws_sns_topic_subscription: Properly handle `filter_policy` removal 
([#6023](https://github.com/terraform-providers/terraform-provider-aws/issues/6023)) +* resource/aws_subnet: Prevent `InvalidNetworkInterfaceID.NotFound` errors when deleting lingering network interfaces ([#6037](https://github.com/terraform-providers/terraform-provider-aws/issues/6037)) + +## 1.38.0 (September 26, 2018) + +FEATURES: + +* **New Data Source:** `aws_db_event_categories` ([#5514](https://github.com/terraform-providers/terraform-provider-aws/issues/5514)) + +ENHANCEMENTS: + +* data-source/aws_autoscaling_groups: Add `arns` attribute ([#5766](https://github.com/terraform-providers/terraform-provider-aws/issues/5766)) +* resource/aws_ami: Support resource import ([#5990](https://github.com/terraform-providers/terraform-provider-aws/issues/5990)) +* resource/aws_codebuild_project: Add `secondary_artifacts` and `secondary_sources` arguments ([#5939](https://github.com/terraform-providers/terraform-provider-aws/issues/5939)) +* resource/aws_codebuild_project: Add `arn` attribute ([#5973](https://github.com/terraform-providers/terraform-provider-aws/issues/5973)) +* resource/aws_launch_template: Support `credit_specification` configuration of T3 instance types ([#5922](https://github.com/terraform-providers/terraform-provider-aws/issues/5922)) +* resource/aws_launch_template: Allow `network_interface` `ipv6_address_count` configuration ([#5771](https://github.com/terraform-providers/terraform-provider-aws/issues/5771)) +* resource/aws_rds_cluster: Support `parallelquery` `engine_mode` argument ([#5980](https://github.com/terraform-providers/terraform-provider-aws/issues/5980)) + +BUG FIXES: + +* data-source/aws_ami: Prevent panics with AMIs in failed image state ([#5968](https://github.com/terraform-providers/terraform-provider-aws/issues/5968)) +* resource/aws_db_instance: Properly set `backup_retention_period = 0` with `snapshot_identifier` ([#5970](https://github.com/terraform-providers/terraform-provider-aws/issues/5970)) +* resource/aws_dms_replication_instance: Properly handle `engine_version` updates ([#5948](https://github.com/terraform-providers/terraform-provider-aws/issues/5948)) +* resource/aws_launch_template: Prevent `Auto Scaling only supports the 'one-time' Spot instance type with no duration.` error when using `instance_market_options` and AutoScaling Groups ([#5957](https://github.com/terraform-providers/terraform-provider-aws/issues/5957)) +* resource/aws_launch_template: Properly recreate existing resource when deleted ([#5967](https://github.com/terraform-providers/terraform-provider-aws/issues/5967)) +* resource/aws_launch_template: Continue accepting string `"true"` and `"false"` values for `ebs_optimized` argument ([#5995](https://github.com/terraform-providers/terraform-provider-aws/issues/5995)) +* resource/aws_load_balancer_policy: Properly handle resource when ELB is deleted ([#5972](https://github.com/terraform-providers/terraform-provider-aws/issues/5972)) +* resource/aws_rds_cluster_instance: Properly handle `publicly_accessible` updates ([#5991](https://github.com/terraform-providers/terraform-provider-aws/issues/5991)) +* resource/aws_security_group: Properly handle lingering ENIs from Lambda and similar services ([#4884](https://github.com/terraform-providers/terraform-provider-aws/issues/4884)) +* resource/aws_subnet: Properly handle lingering ENIs from Lambda and similar services 
([#4884](https://github.com/terraform-providers/terraform-provider-aws/issues/4884)) + +## 1.37.0 (September 19, 2018) + +FEATURES: + +* **New Resource:** `aws_dx_bgp_peer` ([#5886](https://github.com/terraform-providers/terraform-provider-aws/issues/5886)) + +ENHANCEMENTS: + +* data-source/aws_ami_ids: Add `sort_ascending` argument ([#5912](https://github.com/terraform-providers/terraform-provider-aws/issues/5912)) +* resource/aws_iam_role_policy_attachment: Support resource import ([#5910](https://github.com/terraform-providers/terraform-provider-aws/issues/5910)) +* resource/aws_s3_bucket_inventory: Allow SSE-S3 encryption ([#5870](https://github.com/terraform-providers/terraform-provider-aws/issues/5870)) +* resource/aws_security_group: Add `prefix_list_ids` argument for `ingress` rules ([#5916](https://github.com/terraform-providers/terraform-provider-aws/issues/5916)) + +BUG FIXES: + +* resource/aws_config_config_rule: Prevent panic when specifying empty `scope` ([#5852](https://github.com/terraform-providers/terraform-provider-aws/issues/5852)) +* resource/aws_iam_policy: Ensure `description` is properly read into Terraform state during resource creation ([#5884](https://github.com/terraform-providers/terraform-provider-aws/issues/5884)) +* resource/aws_instance: Properly handle `credit_specifications` with T3 instance types ([#5805](https://github.com/terraform-providers/terraform-provider-aws/issues/5805)) +* resource/aws_launch_template: Fix handling of `network_interface` `ipv6_addresses` ([#5883](https://github.com/terraform-providers/terraform-provider-aws/issues/5883)) +* resource/aws_redshift_cluster: Properly disable logging when using `logging` nested argument ([#5895](https://github.com/terraform-providers/terraform-provider-aws/issues/5895)) +* resource/aws_s3_bucket: Prevent panics with various API read failures ([#5842](https://github.com/terraform-providers/terraform-provider-aws/issues/5842)) +* resource/aws_s3_bucket: Prevent `NoSuchBucket` error on deletion ([#5842](https://github.com/terraform-providers/terraform-provider-aws/issues/5842)) +* resource/aws_wafregional_byte_match_set: Properly read `byte_match_tuple` into Terraform state ([#5902](https://github.com/terraform-providers/terraform-provider-aws/issues/5902)) + +## 1.36.0 (September 13, 2018) + +FEATURES: + +* **New Resource:** `aws_cloudfront_public_key` ([#5737](https://github.com/terraform-providers/terraform-provider-aws/issues/5737)) + +ENHANCEMENTS: + +* data-source/aws_db_instance: Add `enabled_cloudwatch_logs_exports` attribute ([#5801](https://github.com/terraform-providers/terraform-provider-aws/issues/5801)) +* resource/aws_api_gateway_stage: Add `xray_tracing_enabled` argument ([#5817](https://github.com/terraform-providers/terraform-provider-aws/issues/5817)) +* resource/aws_cloudfront_distribution: Add `lambda_function_association` `include_body` argument ([#5681](https://github.com/terraform-providers/terraform-provider-aws/issues/5681)) +* resource/aws_db_instance: Add `domain` and `domain_iam_role_name` arguments (support for domain joining RDS instances) ([#5378](https://github.com/terraform-providers/terraform-provider-aws/issues/5378)) +* resource/aws_ecs_task_definition: Suppress `container_definition` differences for equivalent port and host mappings 
([#5833](https://github.com/terraform-providers/terraform-provider-aws/issues/5833)) +* resource/aws_ecs_task_definition: Add docker volume configuration ([#5727](https://github.com/terraform-providers/terraform-provider-aws/issues/5727)) +* resource/aws_iam_user: Allow empty string (`""`) value for `permissions_boundary` argument ([#5859](https://github.com/terraform-providers/terraform-provider-aws/issues/5859)) +* resource/aws_iot_topic_rule: Add `firehose` `seperator` argument ([#5734](https://github.com/terraform-providers/terraform-provider-aws/issues/5734)) +* resource/aws_launch_template: Allow `network_interface` `ipv4_address_count` configuration ([#5830](https://github.com/terraform-providers/terraform-provider-aws/issues/5830)) +* resource/aws_ssm_document: Add support for `Session` `document_type` ([#5850](https://github.com/terraform-providers/terraform-provider-aws/issues/5850)) + +BUG FIXES: + +* resource/aws_iam_policy: Ensure `description` is available as an attribute when empty ([#5815](https://github.com/terraform-providers/terraform-provider-aws/issues/5815)) +* resource/aws_iam_user: Remove extraneous `DeleteUserPermissionsBoundary` API call during deletion ([#5857](https://github.com/terraform-providers/terraform-provider-aws/issues/5857)) +* resource/aws_lambda_function: Retry on `InvalidParameterValueException` errors relating to KMS-backed environment variables ([#5849](https://github.com/terraform-providers/terraform-provider-aws/issues/5849)) +* resource/aws_launch_template: Ensure `ebs_optimized` argument accepts "unspecified" value ([#5627](https://github.com/terraform-providers/terraform-provider-aws/issues/5627)) + +## 1.35.0 (September 06, 2018) + +ENHANCEMENTS: + +* data-source/aws_eks_cluster: Add `platform_version` attribute ([#5797](https://github.com/terraform-providers/terraform-provider-aws/issues/5797)) +* resource/aws_eks_cluster: Add `platform_version` attribute ([#5797](https://github.com/terraform-providers/terraform-provider-aws/issues/5797)) +* resource/aws_lambda_function: Allow empty lists for `vpc_config` `security_group_ids` and `subnet_ids` arguments to unconfigure VPC ([#1341](https://github.com/terraform-providers/terraform-provider-aws/issues/1341)) +* resource/aws_iam_role: Allow empty string (`""`) value for `permissions_boundary` argument ([#5740](https://github.com/terraform-providers/terraform-provider-aws/issues/5740)) + +BUG FIXES: + +* resource/aws_ecr_repository: Use `RepositoryUri` instead of our building our own URI for the `repository_url` attribute (AWS China fix) ([#5748](https://github.com/terraform-providers/terraform-provider-aws/issues/5748)) +* resource/aws_lambda_function: Properly handle `vpc_config` removal ([#5798](https://github.com/terraform-providers/terraform-provider-aws/issues/5798)) +* resource/aws_redshift_cluster: Properly force new resource when updating `availability_zone` argument ([#5758](https://github.com/terraform-providers/terraform-provider-aws/issues/5758)) + +## 1.34.0 (August 30, 2018) + +NOTES: + +* provider: This is the first release tested against and built with Go 1.11, which required `go fmt` changes to the code. If you are building a custom version of this provider or running tests using the repository Make targets (e.g. `make build`) when using a previous version of Go, you will receive errors. You can use the underlying `go` commands (e.g. 
`go build`) to workaround the `go fmt` check in the Make targets until you are able to upgrade Go. + +ENHANCEMENTS: + +* provider: `NO_PROXY` environment variable can accept CIDR notation and port +* data-source/aws_ip_ranges: Add `ipv6_cidr_blocks` attribute ([#5675](https://github.com/terraform-providers/terraform-provider-aws/issues/5675)) +* resource/aws_codebuild_project: Add `artifacts` `encryption_disabled` argument ([#5678](https://github.com/terraform-providers/terraform-provider-aws/issues/5678)) +* resource/aws_route: Support route import ([#5687](https://github.com/terraform-providers/terraform-provider-aws/issues/5687)) + +BUG FIXES: + +* data-source/aws_rds_cluster: Prevent error setting `engine_mode` and `scaling_configuration` ([#5660](https://github.com/terraform-providers/terraform-provider-aws/issues/5660)) +* resource/aws_autoscaling_group: Retry creation for eventual consistency with launch template IAM instance profile ([#5633](https://github.com/terraform-providers/terraform-provider-aws/issues/5633)) +* resource/aws_dax_cluster: Properly recreate cluster when updating `server_side_encryption` ([#5664](https://github.com/terraform-providers/terraform-provider-aws/issues/5664)) +* resource/aws_db_instance: Prevent double apply when using `replicate_source_db` parameters that require `ModifyDBInstance` during resource creation ([#5672](https://github.com/terraform-providers/terraform-provider-aws/issues/5672)) +* resource/aws_db_instance: Prevent `pending-reboot` parameter group status on creation with `parameter_group_name` ([#5672](https://github.com/terraform-providers/terraform-provider-aws/issues/5672)) +* resource/aws_lambda_event_source_mapping: Prevent perpetual difference when using function name with `function_name` (argument accepts both name and ARN) ([#5454](https://github.com/terraform-providers/terraform-provider-aws/issues/5454)) +* resource/aws_launch_template: Prevent encrypted flag cannot be specified error with `block_device_mappings` `ebs` argument ([#5632](https://github.com/terraform-providers/terraform-provider-aws/issues/5632)) +* resource/aws_key_pair: Ensure `fingerprint` attribute is saved in Terraform state during creation ([#5732](https://github.com/terraform-providers/terraform-provider-aws/issues/5732)) +* resource/aws_ssm_association: Properly handle updates when multiple arguments are used ([#5537](https://github.com/terraform-providers/terraform-provider-aws/issues/5537)) +* resource/aws_ssm_document: Properly handle deletion of privately shared documents ([#5668](https://github.com/terraform-providers/terraform-provider-aws/issues/5668)) +* resource/aws_ssm_document: Properly update `permissions.account_ids` ([#5685](https://github.com/terraform-providers/terraform-provider-aws/issues/5685)) + +## 1.33.0 (August 22, 2018) + +FEATURES: + +* **New Data Source:** `aws_api_gateway_resource` ([#5629](https://github.com/terraform-providers/terraform-provider-aws/issues/5629)) + +ENHANCEMENTS: + +* data-source/aws_storagegateway_local_disk: Add `disk_node` argument ([#5595](https://github.com/terraform-providers/terraform-provider-aws/issues/5595)) +* resource/aws_api_gateway_base_path_mapping: Support resource import ([#5566](https://github.com/terraform-providers/terraform-provider-aws/issues/5566)) +* resource/aws_api_gateway_gateway_response: Support resource import 
([#5567](https://github.com/terraform-providers/terraform-provider-aws/issues/5567)) +* resource/aws_api_gateway_integration: Support resource import ([#5568](https://github.com/terraform-providers/terraform-provider-aws/issues/5568)) +* resource/aws_api_gateway_integration_response: Support resource import ([#5569](https://github.com/terraform-providers/terraform-provider-aws/issues/5569)) +* resource/aws_api_gateway_method: Support resource import ([#5571](https://github.com/terraform-providers/terraform-provider-aws/issues/5571)) +* resource/aws_api_gateway_method_response: Support resource import ([#5570](https://github.com/terraform-providers/terraform-provider-aws/issues/5570)) +* resource/aws_api_gateway_model: Support resource import ([#5572](https://github.com/terraform-providers/terraform-provider-aws/issues/5572)) +* resource/aws_api_gateway_request_validator: Support resource import ([#5573](https://github.com/terraform-providers/terraform-provider-aws/issues/5573)) +* resource/aws_api_gateway_resource: Support resource import ([#5574](https://github.com/terraform-providers/terraform-provider-aws/issues/5574)) +* resource/aws_api_gateway_rest_api: Support resource import ([#5564](https://github.com/terraform-providers/terraform-provider-aws/issues/5564)) +* resource/aws_api_gateway_stage: Support resource import ([#5575](https://github.com/terraform-providers/terraform-provider-aws/issues/5575)) +* resource/aws_dax_cluster: Add `server_side_encryption` argument (support encryption at rest) ([#5508](https://github.com/terraform-providers/terraform-provider-aws/issues/5508)) +* resource/aws_ecs_service: Add retries for target group attachment ([#3535](https://github.com/terraform-providers/terraform-provider-aws/issues/3535)) +* resource/aws_lb_listener: Add support for 'redirect' and 'fixed-response' actions ([#5430](https://github.com/terraform-providers/terraform-provider-aws/issues/5430)) +* resource/aws_lb_listener_rule: Add support for 'redirect' and 'fixed-response' actions ([#5430](https://github.com/terraform-providers/terraform-provider-aws/issues/5430)) +* resource/aws_rds_cluster: Add `scaling_configuration` argument ([#5531](https://github.com/terraform-providers/terraform-provider-aws/issues/5531)) +* resource/aws_secretsmanager_secret: Support `ForceDeleteWithoutRecovery` (via `recovery_window_in_days = 0`) and secret recreation after immediate deletion ([#5583](https://github.com/terraform-providers/terraform-provider-aws/issues/5583)) + +BUG FIXES: + +* provider: Disable AWS SDK retries faster by default for `connection refused` errors ([#5614](https://github.com/terraform-providers/terraform-provider-aws/issues/5614)) +* resource/aws_api_gateway_integration: Properly read `integration_http_method` into Terraform state ([#5568](https://github.com/terraform-providers/terraform-provider-aws/issues/5568)) +* resource/aws_api_gateway_integration_response: Properly read `content_handling` into Terraform state ([#5569](https://github.com/terraform-providers/terraform-provider-aws/issues/5569)) +* resource/aws_api_gateway_integration_response: Properly read `response_templates` into Terraform state ([#5569](https://github.com/terraform-providers/terraform-provider-aws/issues/5569)) +* resource/aws_cloudfront_distribution: Import into `ordered_cache_behavior` instead of deprecated `cache_behavior` 
([#5586](https://github.com/terraform-providers/terraform-provider-aws/issues/5586)) +* resource/aws_db_instance: Prevent error when using `snapshot_identifier` with `multi_az` enabled and sqlserver `engine` ([#5613](https://github.com/terraform-providers/terraform-provider-aws/issues/5613)) +* resource/aws_db_instance: Prevent double apply when using `snapshot_identifier` parameters that require `ModifyDBInstance` during resource creation ([#5613](https://github.com/terraform-providers/terraform-provider-aws/issues/5613)] / [[#5621](https://github.com/terraform-providers/terraform-provider-aws/issues/5621)) +* resource/aws_db_instance: Prevent `is already being deleted` error on deletion and wait for deletion completion ([#5624](https://github.com/terraform-providers/terraform-provider-aws/issues/5624)) +* resource/aws_ecs_task_definition: Treat `INACTIVE` task definitions as removed ([#5565](https://github.com/terraform-providers/terraform-provider-aws/issues/5565)) +* resource/aws_elasticache_cluster: Allow `availability_zone` to be specified with `replication_group_id` ([#5585](https://github.com/terraform-providers/terraform-provider-aws/issues/5585)) +* resource/aws_instance: Ignore change of `user_data` from omission to empty string ([#5467](https://github.com/terraform-providers/terraform-provider-aws/issues/5467)) +* resource/aws_service_discovery_public_dns_namespace: Prevent creation error with names longer than 34 characters ([#5610](https://github.com/terraform-providers/terraform-provider-aws/issues/5610)) +* resource/aws_waf_ipset: Properly handle updates and deletions over 1000 IP set descriptors ([#5588](https://github.com/terraform-providers/terraform-provider-aws/issues/5588)) +* resource/aws_wafregional_ipset: Properly handle updates and deletions over 1000 IP set descriptors ([#5588](https://github.com/terraform-providers/terraform-provider-aws/issues/5588)) + +## 1.32.0 (August 16, 2018) + +FEATURES: + +* **New Resource:** `aws_neptune_cluster_snapshot` ([#5492](https://github.com/terraform-providers/terraform-provider-aws/issues/5492)) +* **New Resource:** `aws_storagegateway_cached_iscsi_volume` ([#5476](https://github.com/terraform-providers/terraform-provider-aws/issues/5476)) + +ENHANCEMENTS: + +* data-source/aws_secretsmanager_secret_version: Add `arn` attribute ([#5488](https://github.com/terraform-providers/terraform-provider-aws/issues/5488)) +* data-source/aws_subnet: Add `arn` attribute ([#5486](https://github.com/terraform-providers/terraform-provider-aws/issues/5486)) +* resource/aws_cloudwatch_metric_alarm: Add `arn` attribute ([#5487](https://github.com/terraform-providers/terraform-provider-aws/issues/5487)) +* resource/aws_db_instance: Allow `alert`, `listener`, and `trace` for `enabled_cloudwatch_logs_exports` (e.g. 
Oracle specific log exports) ([#5494](https://github.com/terraform-providers/terraform-provider-aws/issues/5494)) +* resource/aws_emr_cluster: Support `st1` type EBS volumes ([#5534](https://github.com/terraform-providers/terraform-provider-aws/issues/5534)) +* resource/aws_neptune_event_subscription: Support resource import ([#5491](https://github.com/terraform-providers/terraform-provider-aws/issues/5491)) +* resource/aws_rds_cluster: Add `engine_mode` argument (support RDS Aurora Serverless) ([#5507](https://github.com/terraform-providers/terraform-provider-aws/issues/5507)) +* resource/aws_rds_cluster: Allow `aurora` (MySQL 5.6) `engine_type` to enable Performance Insights ([#5468](https://github.com/terraform-providers/terraform-provider-aws/issues/5468)) +* resource/aws_secretsmanager_secret_version: Add `arn` attribute ([#5488](https://github.com/terraform-providers/terraform-provider-aws/issues/5488)) +* resource/aws_subnet: Add `arn` attribute ([#5486](https://github.com/terraform-providers/terraform-provider-aws/issues/5486)) + +BUG FIXES: + +* storagegateway: Retry API calls on busy gateway proxy connection errors ([#5476](https://github.com/terraform-providers/terraform-provider-aws/issues/5476)) +* resource/aws_cloudtrail: Increase IAM retry threshold from 15 seconds to 1 minute ([#5499](https://github.com/terraform-providers/terraform-provider-aws/issues/5499)) +* resource/aws_cognito_user_pool: Properly pass all attributes during update (prevent perpetual flip-flop apply) ([#3458](https://github.com/terraform-providers/terraform-provider-aws/issues/3458)) +* resource/aws_cognito_user_pool_client: Properly pass all attributes during update (prevent perpetual flip-flop apply) ([#5478](https://github.com/terraform-providers/terraform-provider-aws/issues/5478)) +* resource/aws_db_instance: During S3 restore, lower retry threshold for IAM eventual consistency from 5 minutes to 2 minutes and retry on additional error ([#5536](https://github.com/terraform-providers/terraform-provider-aws/issues/5536)) +* resource/aws_dynamodb_table: Allow simultaneous region deletion retry of 5 minutes to better handle global table deletions ([#5518](https://github.com/terraform-providers/terraform-provider-aws/issues/5518)) +* resource/aws_glue_crawler: Additional IAM eventual consistency retry logic for create and update ([#5502](https://github.com/terraform-providers/terraform-provider-aws/issues/5502)) +* resource/aws_iam_role: Remove extraneous `DeleteRolePermissionsBoundary` API call when deleting IAM role ([#5544](https://github.com/terraform-providers/terraform-provider-aws/issues/5544)) +* resource/aws_kinesis_firehose_delivery_stream: Retry on additional IAM eventual consistency error with ElasticSearch destinations ([#5541](https://github.com/terraform-providers/terraform-provider-aws/issues/5541)) +* resource/aws_storagegateway_cache: Prevent resource recreation due to disk identifier changes after creation ([#5476](https://github.com/terraform-providers/terraform-provider-aws/issues/5476)) + +## 1.31.0 (August 09, 2018) + +FEATURES: + +* **New Data Source:** `aws_db_cluster_snapshot` ([#4526](https://github.com/terraform-providers/terraform-provider-aws/issues/4526)) +* **New Resource:** `aws_db_cluster_snapshot` ([#4526](https://github.com/terraform-providers/terraform-provider-aws/issues/4526)) +* **New Resource:** 
`aws_neptune_event_subscription` ([#5480](https://github.com/terraform-providers/terraform-provider-aws/issues/5480)) +* **New Resource:** `aws_storagegateway_cache` ([#5282](https://github.com/terraform-providers/terraform-provider-aws/issues/5282)) +* **New Resource:** `aws_storagegateway_smb_file_share` ([#5276](https://github.com/terraform-providers/terraform-provider-aws/issues/5276)) + +ENHANCEMENTS: + +* provider: Allow provider configuration AssumeRoleARN and sts:GetCallerIdentity credential validation call to shortcut account ID and partition lookup ([#5177](https://github.com/terraform-providers/terraform-provider-aws/issues/5177)) +* provider: Improved output for multiple error handler ([#5442](https://github.com/terraform-providers/terraform-provider-aws/issues/5442)) +* data-source/aws_instance: Add `arn` attribute ([#5432](https://github.com/terraform-providers/terraform-provider-aws/issues/5432)) +* resource/aws_elasticsearch_domain: Support `ES_APPLICATION_LOGS` `log_type` in plan-time validation ([#5474](https://github.com/terraform-providers/terraform-provider-aws/issues/5474)) +* resource/aws_instance: Add `arn` attribute ([#5432](https://github.com/terraform-providers/terraform-provider-aws/issues/5432)) +* resource/aws_storagegateway_gateway: Add `smb_active_directory_settings` and `smb_guest_password` arguments ([#5269](https://github.com/terraform-providers/terraform-provider-aws/issues/5269)) + +BUG FIXES: + +* provider: Prefer `USERPROFILE` over `HOMEPATH` for home directory expansion on Windows ([#5443](https://github.com/terraform-providers/terraform-provider-aws/issues/5443)) +* resource/aws_ami_copy: Prevent `ena_support` attribute incorrectly reporting force new resource ([#5433](https://github.com/terraform-providers/terraform-provider-aws/issues/5433)) +* resource/aws_ami_from_instance: Prevent `ena_support` attribute incorrectly reporting force new resource ([#5433](https://github.com/terraform-providers/terraform-provider-aws/issues/5433)) +* resource/aws_elasticsearch_domain: Prevent crash when missing `AutomatedSnapshotStartHour` in API response ([#5451](https://github.com/terraform-providers/terraform-provider-aws/issues/5451)) +* resource/aws_elasticsearch_domain: Suppress plan differences for `dedicated_master_count` and `dedicated_master_type` when `dedicated_master_enabled` is disabled ([#5423](https://github.com/terraform-providers/terraform-provider-aws/issues/5423)) +* resource/aws_rds_cluster: Prevent error when restoring cluster from snapshot with tagging enabled ([#5479](https://github.com/terraform-providers/terraform-provider-aws/issues/5479)) +* resource/aws_ssm_maintenance_window: Properly recreate resource when deleted outside Terraform ([#5416](https://github.com/terraform-providers/terraform-provider-aws/issues/5416)) +* resource/aws_ssm_patch_baseline: Properly recreate resource when deleted outside Terraform ([#5438](https://github.com/terraform-providers/terraform-provider-aws/issues/5438)) +* resource/aws_vpn_gateway: Allow legacy `amazon_side_asn` in plan-time validation (ASNs 10124 and 17493) ([#5441](https://github.com/terraform-providers/terraform-provider-aws/issues/5441)) + +## 1.30.0 (August 02, 2018) + +FEATURES: + +* **New Data Source:** `aws_storagegateway_local_disk` ([#5279](https://github.com/terraform-providers/terraform-provider-aws/issues/5279)) +* **New Resource:** 
`aws_macie_member_account_association` ([#5283](https://github.com/terraform-providers/terraform-provider-aws/issues/5283)) +* **New Resource:** `aws_neptune_cluster_instance` ([#5376](https://github.com/terraform-providers/terraform-provider-aws/issues/5376)) +* **New Resource:** `aws_storagegateway_nfs_file_share` ([#5255](https://github.com/terraform-providers/terraform-provider-aws/issues/5255)) +* **New Resource:** `aws_storagegateway_upload_buffer` ([#5284](https://github.com/terraform-providers/terraform-provider-aws/issues/5284)) +* **New Resource:** `aws_storagegateway_working_storage` ([#5285](https://github.com/terraform-providers/terraform-provider-aws/issues/5285)) + +ENHANCEMENTS: + +* data-source/aws_rds_cluster: Add `arn` attribute ([#5221](https://github.com/terraform-providers/terraform-provider-aws/issues/5221)) +* resource/aws_ami: Add `ena_support` argument ([#5395](https://github.com/terraform-providers/terraform-provider-aws/issues/5395)) +* resource/aws_api_gateway_domain_name: Support resource import ([#5368](https://github.com/terraform-providers/terraform-provider-aws/issues/5368)) +* resource/aws_efs_file_system: Add `provisioned_throughput_in_mibps` and `throughput_mode` arguments ([#5210](https://github.com/terraform-providers/terraform-provider-aws/issues/5210)) +* resource/aws_elasticsearch_domain: Add `cognito_options` arguments (support Cognito authentication) ([#5346](https://github.com/terraform-providers/terraform-provider-aws/issues/5346)) +* resource/aws_glue_crawler: Add `dynamodb_target` argument ([#5152](https://github.com/terraform-providers/terraform-provider-aws/issues/5152)) +* resource/aws_iam_role: Add `permissions_boundary` argument ([#5184](https://github.com/terraform-providers/terraform-provider-aws/issues/5184)) +* resource/aws_iam_user: Add `permissions_boundary` argument ([#5183](https://github.com/terraform-providers/terraform-provider-aws/issues/5183)) +* resource/aws_neptune_cluster: Support resource import ([#5227](https://github.com/terraform-providers/terraform-provider-aws/issues/5227)) +* resource/aws_rds_cluster: Add `arn` attribute ([#5221](https://github.com/terraform-providers/terraform-provider-aws/issues/5221)) +* resource/aws_ssm_patch_baseline: Add `AMAZON_LINUX_2` and `SUSE` to `operating_system` plan time validation ([#5371](https://github.com/terraform-providers/terraform-provider-aws/issues/5371)) + +BUG FIXES: + +* resource/aws_codebuild_project: Handle additional IAM retry condition during update ([#5238](https://github.com/terraform-providers/terraform-provider-aws/issues/5238)) +* resource/aws_codebuild_project: Remove extraneous UpdateProject API call after CreateProject API call ([#5238](https://github.com/terraform-providers/terraform-provider-aws/issues/5238)) +* resource/aws_db_instance: Prevent error when restoring database from snapshot with tagging enabled ([#5370](https://github.com/terraform-providers/terraform-provider-aws/issues/5370)) +* resource/aws_db_option_group: Prevent error when creating options with new IAM role ([#5389](https://github.com/terraform-providers/terraform-provider-aws/issues/5389)) +* resource/aws_eip: Properly handle if multiple EIPs are returned during API read ([#5331](https://github.com/terraform-providers/terraform-provider-aws/issues/5331)) +* resource/aws_emr_cluster: Add `configurations_json` argument 
(handles drift detection as compared to `configurations` argument) ([#5191](https://github.com/terraform-providers/terraform-provider-aws/issues/5191)) +* resource/aws_emr_cluster: Ensure `keep_job_flow_alive_when_no_step = false` automatically terminates cluster ([#5415](https://github.com/terraform-providers/terraform-provider-aws/issues/5415)) +* resource/aws_lambda_event_source_mapping: Properly read `enabled` into Terraform state ([#5292](https://github.com/terraform-providers/terraform-provider-aws/issues/5292)) +* resource/aws_launch_template: Exclude `network_interfaces` `associate_public_ip_address` when conflicting `network_interface_id` is set ([#5314](https://github.com/terraform-providers/terraform-provider-aws/issues/5314)) +* resource/aws_launch_template: Set `latest_version` as re-computed on updates (prevent need for double apply) ([#5250](https://github.com/terraform-providers/terraform-provider-aws/issues/5250)) +* resource/aws_lb_listener: Prevent crash from new `fixed-response` and `redirect` actions ([#5367](https://github.com/terraform-providers/terraform-provider-aws/issues/5367)) +* resource/aws_lb_listener_rule: Prevent crash from new `fixed-response` and `redirect` actions ([#5367](https://github.com/terraform-providers/terraform-provider-aws/issues/5367)) +* resource/aws_vpn_gateway: Allow legacy `amazon_side_asn` in plan-time validation (ASNs 7224 and 9059) ([#5291](https://github.com/terraform-providers/terraform-provider-aws/issues/5291)) +* resource/aws_waf_web_acl: Properly read `rules` into Terraform state ([#5342](https://github.com/terraform-providers/terraform-provider-aws/issues/5342)) +* resource/aws_waf_web_acl: Properly update `rules` ([#5380](https://github.com/terraform-providers/terraform-provider-aws/issues/5380)) +* resource/aws_wafregional_rate_based_rule: Fix `rate_limit` updates ([#5356](https://github.com/terraform-providers/terraform-provider-aws/issues/5356)) +* resource/aws_wafregional_web_acl: Properly read `rules` into Terraform state ([#5342](https://github.com/terraform-providers/terraform-provider-aws/issues/5342)) + +## 1.29.0 (July 26, 2018) + +NOTES: + +* data-source/aws_kms_secret: This data source has been deprecated and will be removed in the next major version. This is required to support the upcoming Terraform 0.12. A new `aws_kms_secrets` data source is available that allows for the same multiple KMS secret decryption functionality, but requires different attribute references. Full migration information is available in the [AWS Provider Version 2 Upgrade Guide](https://www.terraform.io/docs/providers/aws/guides/version-2-upgrade.html#data-source-aws_kms_secret). 
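+
+  As a rough illustration of the new multi-secret form (the resource and secret names below are placeholders and the ciphertext is truncated; see the upgrade guide linked above for the authoritative migration steps):
+
+  ```hcl
+  # Before (deprecated): data.aws_kms_secret.db.master_password
+  # After: aws_kms_secrets decrypts one or more payloads and exposes them in a
+  # plaintext map keyed by each secret's name.
+  data "aws_kms_secrets" "db" {
+    secret {
+      name    = "master_password"
+      payload = "AQECAHh..." # base64-encoded KMS ciphertext (illustrative, truncated)
+    }
+  }
+
+  resource "aws_rds_cluster" "example" {
+    # ... other cluster arguments elided ...
+    master_password = "${data.aws_kms_secrets.db.plaintext["master_password"]}"
+  }
+  ```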
+ +FEATURES: + +* **New Data Source:** `aws_kms_secrets` ([#5195](https://github.com/terraform-providers/terraform-provider-aws/issues/5195)) +* **New Data Source:** `aws_network_interfaces` ([#5324](https://github.com/terraform-providers/terraform-provider-aws/issues/5324)) +* **New Guide:** [`AWS Provider Version 2 Upgrade`](https://www.terraform.io/docs/providers/aws/guides/version-2-upgrade.html) ([#5195](https://github.com/terraform-providers/terraform-provider-aws/issues/5195)) + +ENHANCEMENTS: + +* data-source/aws_iam_role: Add `permissions_boundary` attribute ([#5186](https://github.com/terraform-providers/terraform-provider-aws/issues/5186)) +* data-source/aws_vpc: Add `arn` attribute ([#5300](https://github.com/terraform-providers/terraform-provider-aws/issues/5300)) +* resource/aws_default_vpc: Add `arn` attribute ([#5300](https://github.com/terraform-providers/terraform-provider-aws/issues/5300)) +* resource/aws_instance: Add `cpu_core_count` and `cpu_threads_per_core` arguments ([#5159](https://github.com/terraform-providers/terraform-provider-aws/issues/5159)) +* resource/aws_lambda_permission: Add `event_source_token` argument (support Alexa Skills) ([#5264](https://github.com/terraform-providers/terraform-provider-aws/issues/5264)) +* resource/aws_launch_template: Add `arn` attribute ([#5306](https://github.com/terraform-providers/terraform-provider-aws/issues/5306)) +* resource/aws_secretsmanager_secret: Add `policy` argument ([#5290](https://github.com/terraform-providers/terraform-provider-aws/issues/5290)) +* resource/aws_vpc: Add `arn` attribute ([#5300](https://github.com/terraform-providers/terraform-provider-aws/issues/5300)) +* resource/aws_waf_web_acl: Support resource import ([#5337](https://github.com/terraform-providers/terraform-provider-aws/issues/5337)) + +BUG FIXES: + +* data-source/aws_vpc_endpoint_service: Perform client side filtering to workaround server side filtering issues in AWS China and AWS GovCloud (US) ([#4592](https://github.com/terraform-providers/terraform-provider-aws/issues/4592)) +* resource/aws_kinesis_firehose_delivery_stream: Force new resource for `kinesis_source_configuration` argument changes ([#5332](https://github.com/terraform-providers/terraform-provider-aws/issues/5332)) +* resource/aws_route53_record: Prevent DomainLabelEmpty errors when expanding record names with trailing period ([#5312](https://github.com/terraform-providers/terraform-provider-aws/issues/5312)) +* resource/aws_ses_identity_notification_topic: Prevent panic when API returns no attributes ([#5327](https://github.com/terraform-providers/terraform-provider-aws/issues/5327)) +* resource/aws_ssm_parameter: Reduce DescribeParameters API calls by switching filtering logic ([#5325](https://github.com/terraform-providers/terraform-provider-aws/issues/5325)) + +## 1.28.0 (July 18, 2018) + +FEATURES: + +* **New Resource:** `aws_macie_s3_bucket_association` ([#5201](https://github.com/terraform-providers/terraform-provider-aws/issues/5201)) +* **New Resource:** `aws_neptune_cluster` ([#5050](https://github.com/terraform-providers/terraform-provider-aws/issues/5050)) +* **New Resource:** `aws_storagegateway_gateway` ([#5208](https://github.com/terraform-providers/terraform-provider-aws/issues/5208)) + +ENHANCEMENTS: + +* data-source/aws_iam_user: Add `permissions_boundary` attribute 
([#5187](https://github.com/terraform-providers/terraform-provider-aws/issues/5187)) +* resource/aws_api_gateway_integration: Add `timeout_milliseconds` argument ([#5199](https://github.com/terraform-providers/terraform-provider-aws/issues/5199)) +* resource/aws_cloudwatch_log_group: Allow `tags` handling in AWS GovCloud (US) and AWS China ([#5175](https://github.com/terraform-providers/terraform-provider-aws/issues/5175)) +* resource/aws_codebuild_project: Add `report_build_status` argument under `source` (support report build status for GitHub source type) ([#5156](https://github.com/terraform-providers/terraform-provider-aws/issues/5156)) +* resource/aws_launch_template: Ignore `credit_specification` when not using T2 `instance_type` ([#5190](https://github.com/terraform-providers/terraform-provider-aws/issues/5190)) +* resource/aws_rds_cluster_instance: Add `arn` attribute ([#5220](https://github.com/terraform-providers/terraform-provider-aws/issues/5220)) +* resource/aws_route: Print more useful error message when missing valid target type ([#5198](https://github.com/terraform-providers/terraform-provider-aws/issues/5198)) +* resource/aws_vpc_endpoint: Add configurable timeouts ([#3418](https://github.com/terraform-providers/terraform-provider-aws/issues/3418)) +* resource/aws_vpc_endpoint_subnet_association: Add configurable timeouts ([#3418](https://github.com/terraform-providers/terraform-provider-aws/issues/3418)) + +BUG FIXES: + +* resource/aws_glue_crawler: Prevent error when deleted outside Terraform ([#5158](https://github.com/terraform-providers/terraform-provider-aws/issues/5158)) +* resource/aws_vpc_endpoint_subnet_association: Add mutex to prevent errors with concurrent `ModifyVpcEndpoint` calls ([#3418](https://github.com/terraform-providers/terraform-provider-aws/issues/3418)) + +## 1.27.0 (July 11, 2018) + +NOTES: + +* resource/aws_codebuild_project: The `service_role` argument is now required to match the API behavior and provide plan time validation. Additional details from AWS Support can be found in: https://github.com/terraform-providers/terraform-provider-aws/pull/4826 +* resource/aws_wafregional_byte_match_set: The `byte_match_tuple` argument name has been deprecated in preference of a new `byte_match_tuples` argument name, for consistency with the `aws_waf_byte_match_set` resource to reduce any confusion working between the two resources and to denote its multiple value support. Its behavior is exactly the same as the old argument. Simply changing the argument name (adding the `s`) to configurations should upgrade without other changes. 
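+
+  For the `aws_wafregional_byte_match_set` change, only the block name differs; a minimal sketch, assuming the tuple attributes mirror `aws_waf_byte_match_set` (the match values shown are placeholders):
+
+  ```hcl
+  resource "aws_wafregional_byte_match_set" "example" {
+    name = "example"
+
+    # Previously written as byte_match_tuple { ... }; the block contents are unchanged.
+    byte_match_tuples {
+      field_to_match {
+        type = "URI"
+      }
+
+      target_string         = "/admin"
+      positional_constraint = "STARTS_WITH"
+      text_transformation   = "NONE"
+    }
+  }
+  ```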
+ +FEATURES: + +* **New Resource:** `aws_appsync_api_key` ([#3827](https://github.com/terraform-providers/terraform-provider-aws/issues/3827)) +* **New Resource:** `aws_swf_domain` ([#2803](https://github.com/terraform-providers/terraform-provider-aws/issues/2803)) + +ENHANCEMENTS: + +* data-source/aws_region: Add `description` attribute ([#5077](https://github.com/terraform-providers/terraform-provider-aws/issues/5077)) +* data-source/aws_vpc: Add `cidr_block_associations` attribute ([#5098](https://github.com/terraform-providers/terraform-provider-aws/issues/5098)) +* resource/aws_cloudwatch_metric_alarm: Add `datapoints_to_alarm` and `evaluation_period` plan time validation ([#5095](https://github.com/terraform-providers/terraform-provider-aws/issues/5095)) +* resource/aws_db_parameter_group: Clarify naming validation error messages ([#5090](https://github.com/terraform-providers/terraform-provider-aws/issues/5090)) +* resource/aws_glue_connection: Add `physical_connection_requirements` argument `availability_zone` (currently required by the API) ([#5039](https://github.com/terraform-providers/terraform-provider-aws/issues/5039)) +* resource/aws_instance: Ignore `credit_specifications` when not using T2 `instance_type` ([#5114](https://github.com/terraform-providers/terraform-provider-aws/issues/5114)) +* resource/aws_instance: Allow AWS GovCloud (US) to perform tagging on creation ([#5106](https://github.com/terraform-providers/terraform-provider-aws/issues/5106)) +* resource/aws_lambda_function: Support `dotnetcore2.1` in `runtime` validation ([#5150](https://github.com/terraform-providers/terraform-provider-aws/issues/5150)) +* resource/aws_route_table: Ignore propagated routes during resource import ([#5100](https://github.com/terraform-providers/terraform-provider-aws/issues/5100)) +* resource/aws_security_group: Authorize and revoke only changed individual `ingress`/`egress` rules despite their configuration grouping (e.g. 
replacing an individual element in a multiple element `cidr_blocks` list) ([#4726](https://github.com/terraform-providers/terraform-provider-aws/issues/4726)) +* resource/aws_ses_receipt_rule: Add plan time validation for `s3_action` argument `position` ([#5092](https://github.com/terraform-providers/terraform-provider-aws/issues/5092)) +* resource/aws_vpc_ipv4_cidr_block_association: Support resource import ([#5069](https://github.com/terraform-providers/terraform-provider-aws/issues/5069)) +* resource/aws_waf_web_acl: Add `rules` `override_action` argument and support `GROUP` type ([#5053](https://github.com/terraform-providers/terraform-provider-aws/issues/5053)) +* resource/aws_wafregional_web_acl: Add `rules` `override_action` argument and support `GROUP` type ([#5053](https://github.com/terraform-providers/terraform-provider-aws/issues/5053)) + +BUG FIXES: + +* resource/aws_codebuild_project: Prevent panic when empty `vpc_config` block is configured ([#5070](https://github.com/terraform-providers/terraform-provider-aws/issues/5070)) +* resource/aws_codebuild_project: Mark `service_role` as required ([#4826](https://github.com/terraform-providers/terraform-provider-aws/issues/4826)) +* resource/aws_glue_catalog_database: Properly return error when missing colon during import ([#5123](https://github.com/terraform-providers/terraform-provider-aws/issues/5123)) +* resource/aws_glue_catalog_database: Prevent error when deleted outside Terraform ([#5141](https://github.com/terraform-providers/terraform-provider-aws/issues/5141)) +* resource/aws_instance: Allow AWS China to perform volume tagging post-creation on first apply ([#5106](https://github.com/terraform-providers/terraform-provider-aws/issues/5106)) +* resource/aws_kms_grant: Properly return error when listing KMS grants ([#5063](https://github.com/terraform-providers/terraform-provider-aws/issues/5063)) +* resource/aws_rds_cluster_instance: Support `configuring-log-exports` status ([#5124](https://github.com/terraform-providers/terraform-provider-aws/issues/5124)) +* resource/aws_s3_bucket: Prevent extraneous ACL update during resource creation ([#5107](https://github.com/terraform-providers/terraform-provider-aws/issues/5107)) +* resource/aws_wafregional_byte_match_set: Deprecate `byte_match_tuple` argument for `byte_match_tuples` ([#5043](https://github.com/terraform-providers/terraform-provider-aws/issues/5043)) + +## 1.26.0 (July 04, 2018) + +FEATURES: + +* **New Data Source:** `aws_launch_configuration` ([#3624](https://github.com/terraform-providers/terraform-provider-aws/issues/3624)) +* **New Data Source:** `aws_pricing_product` ([#5057](https://github.com/terraform-providers/terraform-provider-aws/issues/5057)) +* **New Resource:** `aws_s3_bucket_inventory` ([#5019](https://github.com/terraform-providers/terraform-provider-aws/issues/5019)) +* **New Resource:** `aws_vpc_ipv4_cidr_block_association` ([#3723](https://github.com/terraform-providers/terraform-provider-aws/issues/3723)) + +ENHANCEMENTS: + +* data-source/aws_elasticache_replication_group: Add `member_clusters` attribute ([#5056](https://github.com/terraform-providers/terraform-provider-aws/issues/5056)) +* data-source/aws_instances: Add `instance_state_names` argument (support non-`running` instances) ([#4950](https://github.com/terraform-providers/terraform-provider-aws/issues/4950)) +* 
data-source/aws_route_tables: Add `filter` argument ([#5035](https://github.com/terraform-providers/terraform-provider-aws/issues/5035)) +* data-source/aws_subnet_ids: Add `filter` argument ([#5038](https://github.com/terraform-providers/terraform-provider-aws/issues/5038)) +* resource/aws_eip_association: Support resource import ([#5006](https://github.com/terraform-providers/terraform-provider-aws/issues/5006)) +* resource/aws_elasticache_replication_group: Add `member_clusters` attribute ([#5056](https://github.com/terraform-providers/terraform-provider-aws/issues/5056)) +* resource/aws_lambda_alias: Add `routing_config` argument (support traffic shifting) ([#3316](https://github.com/terraform-providers/terraform-provider-aws/issues/3316)) +* resource/aws_lambda_event_source_mapping: Make `starting_position` optional and allow `batch_size` to support default of 10 for SQS ([#5024](https://github.com/terraform-providers/terraform-provider-aws/issues/5024)) +* resource/aws_network_acl_rule: Add plan time conflict validation with `cidr_block` and `ipv6_cidr_block` ([#3951](https://github.com/terraform-providers/terraform-provider-aws/issues/3951)) +* resource/aws_spot_fleet_request: Add `fleet_type` argument ([#5032](https://github.com/terraform-providers/terraform-provider-aws/issues/5032)) +* resource/aws_ssm_document: Add `tags` argument (support tagging) ([#5020](https://github.com/terraform-providers/terraform-provider-aws/issues/5020)) + +BUG FIXES: + +* resource/aws_codebuild_project: Prevent panic with missing environment variable type ([#5052](https://github.com/terraform-providers/terraform-provider-aws/issues/5052)) +* resource/aws_kms_alias: Fix perpetual plan when `target_key_id` is ARN ([#4010](https://github.com/terraform-providers/terraform-provider-aws/issues/4010)) + +## 1.25.0 (June 27, 2018) + +NOTES: + +* resource/aws_instance: Starting around June 21, 2018, the EC2 API began responding with an empty string value for user data for some instances instead of a completely empty response. In Terraform, it would show as a difference of `user_data: "da39a3ee5e6b4b0d3255bfef95601890afd80709" => "" (forces new resource)` if the `user_data` argument was not defined in the Terraform configuration for the resource. This release ignores that difference as equivalent. 
+ +FEATURES: + +* **New Data Source:** `aws_codecommit_repository` ([#4934](https://github.com/terraform-providers/terraform-provider-aws/issues/4934)) +* **New Data Source:** `aws_dx_gateway` ([#4988](https://github.com/terraform-providers/terraform-provider-aws/issues/4988)) +* **New Data Source:** `aws_network_acls` ([#4966](https://github.com/terraform-providers/terraform-provider-aws/issues/4966)) +* **New Data Source:** `aws_route_tables` ([#4841](https://github.com/terraform-providers/terraform-provider-aws/issues/4841)) +* **New Data Source:** `aws_security_groups` ([#2947](https://github.com/terraform-providers/terraform-provider-aws/issues/2947)) +* **New Resource:** `aws_dx_hosted_private_virtual_interface` ([#3255](https://github.com/terraform-providers/terraform-provider-aws/issues/3255)) +* **New Resource:** `aws_dx_hosted_private_virtual_interface_accepter` ([#3255](https://github.com/terraform-providers/terraform-provider-aws/issues/3255)) +* **New Resource:** `aws_dx_hosted_public_virtual_interface` ([#3254](https://github.com/terraform-providers/terraform-provider-aws/issues/3254)) +* **New Resource:** `aws_dx_hosted_public_virtual_interface_accepter` ([#3254](https://github.com/terraform-providers/terraform-provider-aws/issues/3254)) +* **New Resource:** `aws_dx_private_virtual_interface` ([#3253](https://github.com/terraform-providers/terraform-provider-aws/issues/3253)) +* **New Resource:** `aws_dx_public_virtual_interface` ([#3252](https://github.com/terraform-providers/terraform-provider-aws/issues/3252)) +* **New Resource:** `aws_media_store_container_policy` ([#3507](https://github.com/terraform-providers/terraform-provider-aws/issues/3507)) + +ENHANCEMENTS: + +* provider: Support custom endpoint for `autoscaling` ([#4970](https://github.com/terraform-providers/terraform-provider-aws/issues/4970)) +* resource/aws_codebuild_project: Support `WINDOWS_CONTAINER` as valid environment type ([#4960](https://github.com/terraform-providers/terraform-provider-aws/issues/4960)) +* resource/aws_codebuild_project: Support resource import ([#4976](https://github.com/terraform-providers/terraform-provider-aws/issues/4976)) +* resource/aws_ecs_service: Add `scheduling_strategy` argument (support `DAEMON` scheduling strategy) ([#4825](https://github.com/terraform-providers/terraform-provider-aws/issues/4825)) +* resource/aws_iam_instance_profile: Add `create_date` attribute ([#4932](https://github.com/terraform-providers/terraform-provider-aws/issues/4932)) +* resource/aws_media_store_container: Support resource import ([#3501](https://github.com/terraform-providers/terraform-provider-aws/issues/3501)) +* resource/aws_network_acl: Add full mapping of protocol names to protocol numbers ([#4956](https://github.com/terraform-providers/terraform-provider-aws/issues/4956)) +* resource/aws_network_acl_rule: Add full mapping of protocol names to protocol numbers ([#4956](https://github.com/terraform-providers/terraform-provider-aws/issues/4956)) +* resource/aws_sqs_queue: Add .fifo suffix for FIFO queues using `name_prefix` ([#4929](https://github.com/terraform-providers/terraform-provider-aws/issues/4929)) +* resource/aws_vpc: Support update of `instance_tenancy` from `dedicated` to `default` ([#2514](https://github.com/terraform-providers/terraform-provider-aws/issues/2514)) +* resource/aws_waf_ipset: Support 
resource import ([#4979](https://github.com/terraform-providers/terraform-provider-aws/issues/4979)) +* resource/aws_wafregional_web_acl: Add rule `type` argument (support rate limited rules) ([#4307](https://github.com/terraform-providers/terraform-provider-aws/issues/4307)] / [[#4978](https://github.com/terraform-providers/terraform-provider-aws/issues/4978)) + +BUG FIXES: + +* data-source/aws_rds_cluster: Prevent panic with new CloudWatch logs support (`enabled_cloudwatch_logs_exports`) introduced in 1.23.0 ([#4927](https://github.com/terraform-providers/terraform-provider-aws/issues/4927)) +* resource/aws_codebuild_webhook: Prevent panic when webhook is missing during read ([#4917](https://github.com/terraform-providers/terraform-provider-aws/issues/4917)) +* resource/aws_db_instance: Properly raise any `ListTagsForResource` error instead of presenting a perpetual difference with `tags` ([#4943](https://github.com/terraform-providers/terraform-provider-aws/issues/4943)) +* resource/aws_instance: Prevent extraneous ModifyInstanceAttribute call for `disable_api_termination` on resource creation ([#4941](https://github.com/terraform-providers/terraform-provider-aws/issues/4941)) +* resource/aws_instance: Ignore empty string SHA (`da39a3ee5e6b4b0d3255bfef95601890afd80709`) `user_data` difference due to EC2 API response changes ([#4991](https://github.com/terraform-providers/terraform-provider-aws/issues/4991)) +* resource/aws_launch_template: Prevent error when using `valid_until` ([#4952](https://github.com/terraform-providers/terraform-provider-aws/issues/4952)) +* resource/aws_route: Properly force resource recreation when updating `route_table_id` ([#4946](https://github.com/terraform-providers/terraform-provider-aws/issues/4946)) +* resource/aws_route53_zone: Further prevent HostedZoneAlreadyExists with specified caller reference errors ([#4903](https://github.com/terraform-providers/terraform-provider-aws/issues/4903)) +* resource/aws_ses_receipt_rule: Prevent error with `s3_action` when `kms_key_arn` is not specified ([#4965](https://github.com/terraform-providers/terraform-provider-aws/issues/4965)) + +## 1.24.0 (June 21, 2018) + +FEATURES: + +* **New Data Source:** `aws_cloudformation_export` ([#2180](https://github.com/terraform-providers/terraform-provider-aws/issues/2180)) +* **New Data Source:** `aws_vpc_dhcp_options` ([#4878](https://github.com/terraform-providers/terraform-provider-aws/issues/4878)) +* **New Resource:** `aws_dx_gateway` ([#4896](https://github.com/terraform-providers/terraform-provider-aws/issues/4896)) +* **New Resource:** `aws_dx_gateway_association` ([#4896](https://github.com/terraform-providers/terraform-provider-aws/issues/4896)) +* **New Resource:** `aws_glue_crawler` ([#4484](https://github.com/terraform-providers/terraform-provider-aws/issues/4484)) +* **New Resource:** `aws_neptune_cluster_parameter_group` ([#4860](https://github.com/terraform-providers/terraform-provider-aws/issues/4860)) +* **New Resource:** `aws_neptune_subnet_group` ([#4782](https://github.com/terraform-providers/terraform-provider-aws/issues/4782)) + +ENHANCEMENTS: + +* resource/aws_api_gateway_rest_api: Support `PRIVATE` endpoint type ([#4888](https://github.com/terraform-providers/terraform-provider-aws/issues/4888)) +* resource/aws_codedeploy_app: Add `compute_platform` argument 
([#4811](https://github.com/terraform-providers/terraform-provider-aws/issues/4811)) +* resource/aws_kinesis_firehose_delivery_stream: Support extended S3 destination `data_format_conversion_configuration` ([#4842](https://github.com/terraform-providers/terraform-provider-aws/issues/4842)) +* resource/aws_kms_grant: Support ARN for `key_id` argument (external CMKs) ([#4886](https://github.com/terraform-providers/terraform-provider-aws/issues/4886)) +* resource/aws_neptune_parameter_group: Add `tags` argument and `arn` attribute ([#4873](https://github.com/terraform-providers/terraform-provider-aws/issues/4873)) +* resource/aws_rds_cluster: Add `enabled_cloudwatch_logs_exports` argument ([#4875](https://github.com/terraform-providers/terraform-provider-aws/issues/4875)) + +BUG FIXES: + +* resource/aws_batch_job_definition: Force resource recreation on retry_strategy attempts updates ([#4854](https://github.com/terraform-providers/terraform-provider-aws/issues/4854)) +* resource/aws_cognito_user_pool_client: Prevent panic with updating `refresh_token_validity` ([#4868](https://github.com/terraform-providers/terraform-provider-aws/issues/4868)) +* resource/aws_instance: Prevent extraneous ModifyInstanceCreditSpecification call on resource creation ([#4898](https://github.com/terraform-providers/terraform-provider-aws/issues/4898)) +* resource/aws_s3_bucket: Properly detect `cors_rule` drift when it is deleted outside Terraform ([#4887](https://github.com/terraform-providers/terraform-provider-aws/issues/4887)) +* resource/aws_vpn_gateway_attachment: Fix error handling for missing VPN gateway ([#4895](https://github.com/terraform-providers/terraform-provider-aws/issues/4895)) + +## 1.23.0 (June 14, 2018) + +NOTES: + +* resource/aws_elasticache_cluster: The `availability_zones` argument has been deprecated in favor of a new `preferred_availability_zones` argument to allow specifying the same Availability Zone more than once in larger Memcached clusters that also need to specifically set Availability Zones. The argument is still optional and the API will continue to automatically choose Availability Zones for nodes if not specified. The new argument will also continue to match the APIs required behavior that the length of the list must be the same as `num_cache_nodes`. Migration will require recreating the resource or using the resource [lifecycle configuration](https://www.terraform.io/docs/configuration/resources.html#lifecycle) of `ignore_changes = ["availability_zones"]` to prevent recreation. See the resource documentation for additional details. 
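+
+  A minimal sketch of the new argument (cluster values are illustrative); alternatively, existing configurations can keep `availability_zones` and use the `ignore_changes` lifecycle setting mentioned above to avoid recreation:
+
+  ```hcl
+  resource "aws_elasticache_cluster" "example" {
+    cluster_id      = "example-memcached"
+    engine          = "memcached"
+    node_type       = "cache.m4.large"
+    num_cache_nodes = 3
+
+    # Replaces the deprecated availability_zones argument; duplicate zones are
+    # allowed and the list length must equal num_cache_nodes.
+    preferred_availability_zones = ["us-west-2a", "us-west-2a", "us-west-2b"]
+  }
+  ```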
+ +FEATURES: + +* **New Data Source:** `aws_vpcs` ([#4736](https://github.com/terraform-providers/terraform-provider-aws/issues/4736)) +* **New Resource:** `aws_neptune_parameter_group` ([#4724](https://github.com/terraform-providers/terraform-provider-aws/issues/4724)) + +ENHANCEMENTS: + +* resource/aws_db_instance: Display input arguments when receiving InvalidParameterValue error on resource creation ([#4803](https://github.com/terraform-providers/terraform-provider-aws/issues/4803)) +* resource/aws_elasticache_cluster: Migrate from `availability_zones` TypeSet attribute to `preferred_availability_zones` TypeList attribute (allow duplicate Availability Zone elements) ([#4741](https://github.com/terraform-providers/terraform-provider-aws/issues/4741)) +* resource/aws_launch_template: Add `tags` argument (support tagging the resource itself) ([#4763](https://github.com/terraform-providers/terraform-provider-aws/issues/4763)) +* resource/aws_launch_template: Add plan time validation for tag_specifications `resource_type` ([#4765](https://github.com/terraform-providers/terraform-provider-aws/issues/4765)) +* resource/aws_waf_ipset: Add `arn` attribute ([#4784](https://github.com/terraform-providers/terraform-provider-aws/issues/4784)) +* resource/aws_wafregional_ipset: Add `arn` attribute ([#4816](https://github.com/terraform-providers/terraform-provider-aws/issues/4816)) + +BUG FIXES: + +* resource/aws_codebuild_webhook: Properly export `secret` (the CodeBuild API only provides its value during resource creation) ([#4775](https://github.com/terraform-providers/terraform-provider-aws/issues/4775)) +* resource/aws_codecommit_repository: Prevent error and trigger recreation when not found during read ([#4761](https://github.com/terraform-providers/terraform-provider-aws/issues/4761)) +* resource/aws_eks_cluster: Properly export `arn` attribute ([#4766](https://github.com/terraform-providers/terraform-provider-aws/issues/4766)] / [[#4767](https://github.com/terraform-providers/terraform-provider-aws/issues/4767)) +* resource/aws_elasticsearch_domain: Skip EBS options update/refresh if EBS is not enabled ([#4802](https://github.com/terraform-providers/terraform-provider-aws/issues/4802)) + +## 1.22.0 (June 05, 2018) + +FEATURES: + +* **New Data Source:** `aws_ecs_service` ([#3617](https://github.com/terraform-providers/terraform-provider-aws/issues/3617)) +* **New Data Source:** `aws_eks_cluster` ([#4749](https://github.com/terraform-providers/terraform-provider-aws/issues/4749)) +* **New Guide:** EKS Getting Started +* **New Resource:** `aws_config_aggregate_authorization` ([#4263](https://github.com/terraform-providers/terraform-provider-aws/issues/4263)) +* **New Resource:** `aws_config_configuration_aggregator` ([#4262](https://github.com/terraform-providers/terraform-provider-aws/issues/4262)) +* **New Resource:** `aws_eks_cluster` ([#4749](https://github.com/terraform-providers/terraform-provider-aws/issues/4749)) + +ENHANCEMENTS: + +* provider: Support custom endpoint for EFS ([#4716](https://github.com/terraform-providers/terraform-provider-aws/issues/4716)) +* resource/aws_api_gateway_method: Add `authorization_scopes` argument ([#4533](https://github.com/terraform-providers/terraform-provider-aws/issues/4533)) +* resource/aws_api_gateway_rest_api: Add `api_key_source` argument 
([#4717](https://github.com/terraform-providers/terraform-provider-aws/issues/4717)) +* resource/aws_cloudfront_distribution: Allow create and update retries on InvalidViewerCertificate for eventual consistency with ACM/IAM services ([#4698](https://github.com/terraform-providers/terraform-provider-aws/issues/4698)) +* resource/aws_cognito_identity_pool: Add `arn` attribute ([#4719](https://github.com/terraform-providers/terraform-provider-aws/issues/4719)) +* resource/aws_cognito_user_pool: Add `endpoint` attribute ([#4718](https://github.com/terraform-providers/terraform-provider-aws/issues/4718)) + +BUG FIXES: + +* resource/aws_service_discovery_private_dns_namespace: Prevent creation error with names longer than 34 characters ([#4702](https://github.com/terraform-providers/terraform-provider-aws/issues/4702)) +* resource/aws_vpn_connection: Allow period in `tunnel[1-2]_preshared_key` validation ([#4731](https://github.com/terraform-providers/terraform-provider-aws/issues/4731)) + +## 1.21.0 (May 31, 2018) + +FEATURES: + +* **New Data Source:** `aws_route` ([#4529](https://github.com/terraform-providers/terraform-provider-aws/issues/4529)) +* **New Resource:** `aws_codebuild_webhook` ([#4473](https://github.com/terraform-providers/terraform-provider-aws/issues/4473)) +* **New Resource:** `aws_cognito_identity_provider` ([#3601](https://github.com/terraform-providers/terraform-provider-aws/issues/3601)) +* **New Resource:** `aws_cognito_resource_server` ([#4530](https://github.com/terraform-providers/terraform-provider-aws/issues/4530)) +* **New Resource:** `aws_glue_classifier` ([#4472](https://github.com/terraform-providers/terraform-provider-aws/issues/4472)) + +ENHANCEMENTS: + +* provider: Support custom endpoint for SSM ([#4670](https://github.com/terraform-providers/terraform-provider-aws/issues/4670)) +* resource/aws_codebuild_project: Add `badge_enabled` argument and `badge_url` attribute ([#3504](https://github.com/terraform-providers/terraform-provider-aws/issues/3504)) +* resource/aws_codebuild_project: Add `environment_variable` argument `type` (support parameter store environment variables) ([#2811](https://github.com/terraform-providers/terraform-provider-aws/issues/2811)] / [[#4021](https://github.com/terraform-providers/terraform-provider-aws/issues/4021)) +* resource/aws_codebuild_project: Add `source` argument `git_clone_depth` and `insecure_ssl` ([#3929](https://github.com/terraform-providers/terraform-provider-aws/issues/3929)) +* resource/aws_elasticache_replication_group: Support `number_cache_nodes` updates ([#4504](https://github.com/terraform-providers/terraform-provider-aws/issues/4504)) +* resource/aws_lb_target_group: Add `slow_start` argument ([#4661](https://github.com/terraform-providers/terraform-provider-aws/issues/4661)) +* resource/aws_redshift_cluster: Add `dns_name` attribute ([#4582](https://github.com/terraform-providers/terraform-provider-aws/issues/4582)) +* resource/aws_s3_bucket: Add `bucket_regional_domain_name` attribute ([#4556](https://github.com/terraform-providers/terraform-provider-aws/issues/4556)) + +BUG FIXES: + +* data-source/aws_lambda_function: Qualifiers explicitly set are now honoured ([#4654](https://github.com/terraform-providers/terraform-provider-aws/issues/4654)) +* resource/aws_batch_job_definition: Properly force new resource when updating timeout 
`attempt_duration_seconds` argument ([#4697](https://github.com/terraform-providers/terraform-provider-aws/issues/4697)) +* resource/aws_budgets_budget: Force new resource when updating `name` ([#4656](https://github.com/terraform-providers/terraform-provider-aws/issues/4656)) +* resource/aws_dms_endpoint: Additionally specify MongoDB connection info in the top-level API namespace to prevent issues connecting ([#4636](https://github.com/terraform-providers/terraform-provider-aws/issues/4636)) +* resource/aws_rds_cluster: Prevent additional retry error during S3 import for IAM/S3 eventual consistency ([#4683](https://github.com/terraform-providers/terraform-provider-aws/issues/4683)) +* resource/aws_sns_sms_preferences: Properly add SNS preferences to website docs ([#4694](https://github.com/terraform-providers/terraform-provider-aws/issues/4694)) + +## 1.20.0 (May 23, 2018) + +NOTES: + +* resource/aws_guardduty_member: Terraform will now try to properly detect if a member account has been invited based on its relationship status (`Disabled`/`Enabled`/`Invited`) and appropriately flag the new `invite` argument for update. You will want to set `invite = true` in your Terraform configuration if you previously handled the invitation process for a member, otherwise the resource will attempt to disassociate the member upon updating the provider to this version. + +FEATURES: + +* **New Data Source:** `aws_glue_script` ([#4481](https://github.com/terraform-providers/terraform-provider-aws/issues/4481)) +* **New Resource:** `aws_glue_trigger` ([#4464](https://github.com/terraform-providers/terraform-provider-aws/issues/4464)) + +ENHANCEMENTS: + +* resource/aws_api_gateway_domain_name: Add `endpoint_configuration` argument, `regional_certificate_arn` argument, `regional_certificate_name` argument, `regional_domain_name` attribute, and `regional_zone_id` attribute (support regional domain names) ([#2866](https://github.com/terraform-providers/terraform-provider-aws/issues/2866)) +* resource/aws_api_gateway_rest_api: Add `endpoint_configuration` argument (support regional endpoint type) ([#2866](https://github.com/terraform-providers/terraform-provider-aws/issues/2866)) +* resource/aws_appautoscaling_policy: Add retry logic for rate exceeded errors during read, update and delete ([#4594](https://github.com/terraform-providers/terraform-provider-aws/issues/4594)) +* resource/aws_ecs_service: Add `container_name` and `container_port` arguments for `service_registry` (support bridge and host network mode for service registry) ([#4623](https://github.com/terraform-providers/terraform-provider-aws/issues/4623)) +* resource/aws_emr_cluster: Add `additional_info` argument ([#4590](https://github.com/terraform-providers/terraform-provider-aws/issues/4590)) +* resource/aws_guardduty_member: Support member account invitation on creation ([#4357](https://github.com/terraform-providers/terraform-provider-aws/issues/4357)) +* resource/aws_guardduty_member: Support `invite` argument updates (invite or disassociate on update) ([#4604](https://github.com/terraform-providers/terraform-provider-aws/issues/4604)) +* resource/aws_ssm_patch_baseline: Add `approval_rule` `enable_non_security` argument ([#4546](https://github.com/terraform-providers/terraform-provider-aws/issues/4546)) + +BUG FIXES: + +* resource/aws_api_gateway_rest_api: Prevent error with `policy` containing special characters (e.g. 
forward slashes in CIDRs) ([#4606](https://github.com/terraform-providers/terraform-provider-aws/issues/4606)) +* resource/aws_cloudwatch_event_rule: Prevent multiple names on creation ([#4579](https://github.com/terraform-providers/terraform-provider-aws/issues/4579)) +* resource/aws_dynamodb_table: Prevent error with APIs that do not support point in time recovery (e.g. AWS China) ([#4573](https://github.com/terraform-providers/terraform-provider-aws/issues/4573)) +* resource/aws_glue_catalog_table: Prevent multiple potential panic scenarios ([#4621](https://github.com/terraform-providers/terraform-provider-aws/issues/4621)) +* resource/aws_kinesis_stream: Handle tag additions/removals of more than 10 tags ([#4574](https://github.com/terraform-providers/terraform-provider-aws/issues/4574)) +* resource/aws_kinesis_stream: Prevent perpetual `encryption_type` difference with APIs that do not support encryption (e.g. AWS China) ([#4575](https://github.com/terraform-providers/terraform-provider-aws/issues/4575)) +* resource/aws_s3_bucket: Prevent panic from CORS reading errors ([#4603](https://github.com/terraform-providers/terraform-provider-aws/issues/4603)) +* resource/aws_spot_fleet_request: Prevent empty `iam_instance_profile_arn` from overwriting `iam_instance_profile` ([#4591](https://github.com/terraform-providers/terraform-provider-aws/issues/4591)) + +## 1.19.0 (May 16, 2018) + +NOTES: + +* data-source/aws_iam_policy_document: Please note there is a behavior change in the rendering of `principal`/`not_principal` in the case of `type = "AWS"` and `identifiers = ["*"]`. This will now render as `Principal": {"AWS": "*"}` instead of `"Principal": "*"`. This change is required for IAM role trust policy support as well as differentiating between anonymous access versus AWS access in policies. To keep the old behavior of anonymous access, use `type = "*"` and `identifiers = ["*"]`, which will continue to render as `"Principal": "*"`. For additional information, see the [`aws_iam_policy_document` documentation](https://www.terraform.io/docs/providers/aws/d/iam_policy_document.html). 
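+
+  A short sketch contrasting the two principal forms (the actions shown are placeholders):
+
+  ```hcl
+  # Renders "Principal": {"AWS": "*"} - usable in an IAM role trust policy
+  data "aws_iam_policy_document" "aws_wildcard" {
+    statement {
+      actions = ["sts:AssumeRole"]
+
+      principals {
+        type        = "AWS"
+        identifiers = ["*"]
+      }
+    }
+  }
+
+  # Renders "Principal": "*" - the previous anonymous-access behavior
+  data "aws_iam_policy_document" "anonymous" {
+    statement {
+      actions = ["s3:GetObject"]
+
+      principals {
+        type        = "*"
+        identifiers = ["*"]
+      }
+    }
+  }
+  ```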
+ +FEATURES: + +* **New Data Source:** `aws_arn` ([#3996](https://github.com/terraform-providers/terraform-provider-aws/issues/3996)) +* **New Data Source:** `aws_lambda_invocation` ([#4222](https://github.com/terraform-providers/terraform-provider-aws/issues/4222)) +* **New Resource:** `aws_sns_sms_preferences` ([#3858](https://github.com/terraform-providers/terraform-provider-aws/issues/3858)) + +ENHANCEMENTS: + +* data-source/aws_iam_policy_document: Allow rendering of `"Principal": {"AWS": "*"}` (required for IAM role trust policies) ([#4248](https://github.com/terraform-providers/terraform-provider-aws/issues/4248)) +* resource/aws_api_gateway_rest_api: Add `execution_arn` attribute ([#3968](https://github.com/terraform-providers/terraform-provider-aws/issues/3968)) +* resource/aws_db_event_subscription: Add `name_prefix` argument ([#2754](https://github.com/terraform-providers/terraform-provider-aws/issues/2754)) +* resource/aws_dms_endpoint: Add `azuredb` for `engine_name` validation ([#4506](https://github.com/terraform-providers/terraform-provider-aws/issues/4506)) +* resource/aws_rds_cluster: Add `backtrack_window` argument and wait for updates to complete ([#4524](https://github.com/terraform-providers/terraform-provider-aws/issues/4524)) +* resource/aws_spot_fleet_request: Add `launch_specification` `iam_instance_profile_arn` argument ([#4511](https://github.com/terraform-providers/terraform-provider-aws/issues/4511)) + +BUG FIXES: + +* data-source/aws_autoscaling_groups: Use pagination function for DescribeTags filtering ([#4535](https://github.com/terraform-providers/terraform-provider-aws/issues/4535)) +* resource/aws_elb: Ensure `bucket_prefix` for access logging can be updated to `""` ([#4383](https://github.com/terraform-providers/terraform-provider-aws/issues/4383)) +* resource/aws_kinesis_firehose_delivery_stream: Retry on Elasticsearch destination IAM role errors and update IAM errors ([#4518](https://github.com/terraform-providers/terraform-provider-aws/issues/4518)) +* resource/aws_launch_template: Allow `network_interfaces` `device_index` to be set to 0 ([#4367](https://github.com/terraform-providers/terraform-provider-aws/issues/4367)) +* resource/aws_lb: Ensure `bucket_prefix` for access logging can be updated to `""` ([#4383](https://github.com/terraform-providers/terraform-provider-aws/issues/4383)) +* resource/aws_lb: Ensure `access_logs` is properly set into Terraform state ([#4517](https://github.com/terraform-providers/terraform-provider-aws/issues/4517)) +* resource/aws_security_group: Fix rule description handling when gathering multiple rules with same permissions ([#4416](https://github.com/terraform-providers/terraform-provider-aws/issues/4416)) + +## 1.18.0 (May 10, 2018) + +FEATURES: + +* **New Data Source:** `aws_acmpca_certificate_authority` ([#4458](https://github.com/terraform-providers/terraform-provider-aws/issues/4458)) +* **New Resource:** `aws_acmpca_certificate_authority` ([#4458](https://github.com/terraform-providers/terraform-provider-aws/issues/4458)) +* **New Resource:** `aws_glue_catalog_table` ([#4368](https://github.com/terraform-providers/terraform-provider-aws/issues/4368)) + +ENHANCEMENTS: + +* provider: Lower retry threshold for DNS resolution failures ([#4459](https://github.com/terraform-providers/terraform-provider-aws/issues/4459)) +* resource/aws_dms_endpoint: Support 
`s3` `engine_name` and add `s3_settings` argument ([#1685](https://github.com/terraform-providers/terraform-provider-aws/issues/1685), [#4447](https://github.com/terraform-providers/terraform-provider-aws/issues/4447)) +* resource/aws_glue_job: Add `timeout` argument ([#4460](https://github.com/terraform-providers/terraform-provider-aws/issues/4460)) +* resource/aws_lb_target_group: Add `proxy_protocol_v2` argument ([#4365](https://github.com/terraform-providers/terraform-provider-aws/issues/4365)) +* resource/aws_spot_fleet_request: Mark `spot_price` optional (defaults to on-demand price) ([#4424](https://github.com/terraform-providers/terraform-provider-aws/issues/4424)) +* resource/aws_spot_fleet_request: Add plan time validation for `valid_from` and `valid_until` arguments ([#4463](https://github.com/terraform-providers/terraform-provider-aws/issues/4463)) +* resource/aws_spot_instance_request: Mark `spot_price` optional (defaults to on-demand price) ([#4424](https://github.com/terraform-providers/terraform-provider-aws/issues/4424)) + +BUG FIXES: + +* data-source/aws_autoscaling_groups: Correctly paginate through over 50 results ([#4433](https://github.com/terraform-providers/terraform-provider-aws/issues/4433)) +* resource/aws_elastic_beanstalk_environment: Correctly handle `cname_prefix` attribute in China partition ([#4485](https://github.com/terraform-providers/terraform-provider-aws/issues/4485)) +* resource/aws_glue_job: Remove `allocated_capacity` and `max_concurrent_runs` upper plan time validation limits ([#4461](https://github.com/terraform-providers/terraform-provider-aws/issues/4461)) +* resource/aws_instance: Fix `root_device_mapping` matching of expected root device name with multiple block devices
([#4489](https://github.com/terraform-providers/terraform-provider-aws/issues/4489)) +* resource/aws_launch_template: Prevent `parameter iops is not supported for gp2 volumes` error ([#4344](https://github.com/terraform-providers/terraform-provider-aws/issues/4344)) +* resource/aws_launch_template: Prevent `'iamInstanceProfile.name' may not be used in combination with 'iamInstanceProfile.arn'` error ([#4344](https://github.com/terraform-providers/terraform-provider-aws/issues/4344)) +* resource/aws_launch_template: Prevent `parameter groupName cannot be used with the parameter subnet` error ([#4344](https://github.com/terraform-providers/terraform-provider-aws/issues/4344)) +* resource/aws_launch_template: Separate usage of `ipv4_address_count`/`ipv6_address_count` from `ipv4_addresses`/`ipv6_addresses` ([#4344](https://github.com/terraform-providers/terraform-provider-aws/issues/4344)) +* resource/aws_redshift_cluster: Properly send all required parameters when resizing ([#3127](https://github.com/terraform-providers/terraform-provider-aws/issues/3127)) +* resource/aws_s3_bucket: Prevent crash from empty string CORS arguments ([#4465](https://github.com/terraform-providers/terraform-provider-aws/issues/4465)) +* resource/aws_ssm_document: Add missing account ID to `arn` attribute ([#4436](https://github.com/terraform-providers/terraform-provider-aws/issues/4436)) + +## 1.17.0 (May 02, 2018) + +NOTES: + +* resource/aws_ecs_service: Please note the `placement_strategy` argument (an unordered list) has been marked deprecated in favor of the `ordered_placement_strategy` argument (an ordered list based on the Terraform configuration ordering). + +FEATURES: + +* **New Data Source:** `aws_mq_broker` ([#3163](https://github.com/terraform-providers/terraform-provider-aws/issues/3163)) +* **New Resource:** `aws_budgets_budget` ([#1879](https://github.com/terraform-providers/terraform-provider-aws/issues/1879)) +* **New Resource:** `aws_iam_user_group_membership` ([#3365](https://github.com/terraform-providers/terraform-provider-aws/issues/3365)) +* **New Resource:** `aws_vpc_peering_connection_options` ([#3909](https://github.com/terraform-providers/terraform-provider-aws/issues/3909)) + +ENHANCEMENTS: + +* data-source/aws_route53_zone: Add `name_servers` attribute ([#4336](https://github.com/terraform-providers/terraform-provider-aws/issues/4336)) +* resource/aws_api_gateway_stage: Add `access_log_settings` argument (Support access logging) ([#4369](https://github.com/terraform-providers/terraform-provider-aws/issues/4369)) +* resource/aws_autoscaling_group: Add `launch_template` argument ([#4305](https://github.com/terraform-providers/terraform-provider-aws/issues/4305)) +* resource/aws_batch_job_definition: Add `timeout` argument ([#4386](https://github.com/terraform-providers/terraform-provider-aws/issues/4386)) +* resource/aws_cloudwatch_event_rule: Add `name_prefix` argument ([#2752](https://github.com/terraform-providers/terraform-provider-aws/issues/2752)) +* resource/aws_cloudwatch_event_rule: Make `name` optional (Terraform can generate unique ID) ([#2752](https://github.com/terraform-providers/terraform-provider-aws/issues/2752)) +* resource/aws_codedeploy_deployment_group: Add `ec2_tag_set` argument (tag group support) ([#4324](https://github.com/terraform-providers/terraform-provider-aws/issues/4324)) +* resource/aws_default_subnet: 
Allow `map_public_ip_on_launch` updates ([#4396](https://github.com/terraform-providers/terraform-provider-aws/issues/4396)) +* resource/aws_dms_endpoint: Support `mongodb` engine_name and `mongodb_settings` argument ([#4406](https://github.com/terraform-providers/terraform-provider-aws/issues/4406)) +* resource/aws_dynamodb_table: Add `point_in_time_recovery` argument ([#4063](https://github.com/terraform-providers/terraform-provider-aws/issues/4063)) +* resource/aws_ecs_service: Add `ordered_placement_strategy` argument, deprecate `placement_strategy` argument ([#4390](https://github.com/terraform-providers/terraform-provider-aws/issues/4390)) +* resource/aws_ecs_service: Allow `health_check_grace_period_seconds` up to 7200 seconds ([#4420](https://github.com/terraform-providers/terraform-provider-aws/issues/4420)) +* resource/aws_lambda_permission: Add `statement_id_prefix` argument ([#2743](https://github.com/terraform-providers/terraform-provider-aws/issues/2743)) +* resource/aws_lambda_permission: Make `statement_id` optional (Terraform can generate unique ID) ([#2743](https://github.com/terraform-providers/terraform-provider-aws/issues/2743)) +* resource/aws_rds_cluster: Add `s3_import` argument (Support MySQL Backup Restore from S3) ([#4366](https://github.com/terraform-providers/terraform-provider-aws/issues/4366)) +* resource/aws_vpc_peering_connection: Support configurable timeouts ([#3909](https://github.com/terraform-providers/terraform-provider-aws/issues/3909)) + +BUG FIXES: + +* data-source/aws_instance: Bypass `UnsupportedOperation` errors with `DescribeInstanceCreditSpecifications` call ([#4362](https://github.com/terraform-providers/terraform-provider-aws/issues/4362)) +* resource/aws_iam_group_policy: Properly handle generated policy name updates ([#4379](https://github.com/terraform-providers/terraform-provider-aws/issues/4379)) +* resource/aws_instance: Bypass `UnsupportedOperation` errors with `DescribeInstanceCreditSpecifications` call ([#4362](https://github.com/terraform-providers/terraform-provider-aws/issues/4362)) +* resource/aws_launch_template: Appropriately set `security_groups` in network interfaces ([#4364](https://github.com/terraform-providers/terraform-provider-aws/issues/4364)) +* resource/aws_rds_cluster: Add retries for IAM eventual consistency ([#4371](https://github.com/terraform-providers/terraform-provider-aws/issues/4371)) +* resource/aws_rds_cluster_instance: Add retries for IAM eventual consistency ([#4370](https://github.com/terraform-providers/terraform-provider-aws/issues/4370)) +* resource/aws_route53_zone: Add domain name to CallerReference to prevent creation issues with count greater than one ([#4341](https://github.com/terraform-providers/terraform-provider-aws/issues/4341)) + +## 1.16.0 (April 25, 2018) + +FEATURES: + +* **New Data Source:** `aws_batch_compute_environment` ([#4270](https://github.com/terraform-providers/terraform-provider-aws/issues/4270)) +* **New Data Source:** `aws_batch_job_queue` ([#4288](https://github.com/terraform-providers/terraform-provider-aws/issues/4288)) +* **New Data Source:** `aws_iot_endpoint` ([#4303](https://github.com/terraform-providers/terraform-provider-aws/issues/4303)) +* **New Data Source:** `aws_lambda_function` ([#2984](https://github.com/terraform-providers/terraform-provider-aws/issues/2984)) +* **New Data Source:** 
`aws_redshift_cluster` ([#2603](https://github.com/terraform-providers/terraform-provider-aws/issues/2603)) +* **New Data Source:** `aws_secretsmanager_secret` ([#4272](https://github.com/terraform-providers/terraform-provider-aws/issues/4272)) +* **New Data Source:** `aws_secretsmanager_secret_version` ([#4272](https://github.com/terraform-providers/terraform-provider-aws/issues/4272)) +* **New Resource:** `aws_dax_parameter_group` ([#4299](https://github.com/terraform-providers/terraform-provider-aws/issues/4299)) +* **New Resource:** `aws_dax_subnet_group` ([#4302](https://github.com/terraform-providers/terraform-provider-aws/issues/4302)) +* **New Resource:** `aws_organizations_policy` ([#4249](https://github.com/terraform-providers/terraform-provider-aws/issues/4249)) +* **New Resource:** `aws_organizations_policy_attachment` ([#4253](https://github.com/terraform-providers/terraform-provider-aws/issues/4253)) +* **New Resource:** `aws_secretsmanager_secret` ([#4272](https://github.com/terraform-providers/terraform-provider-aws/issues/4272)) +* **New Resource:** `aws_secretsmanager_secret_version` ([#4272](https://github.com/terraform-providers/terraform-provider-aws/issues/4272)) + +ENHANCEMENTS: + +* data-source/aws_cognito_user_pools: Add `arns` attribute ([#4256](https://github.com/terraform-providers/terraform-provider-aws/issues/4256)) +* data-source/aws_ecs_cluster Return error on multiple clusters ([#4286](https://github.com/terraform-providers/terraform-provider-aws/issues/4286)) +* data-source/aws_iam_instance_profile: Add `role_arn` and `role_name` attributes ([#4300](https://github.com/terraform-providers/terraform-provider-aws/issues/4300)) +* data-source/aws_instance: Add `disable_api_termination` attribute ([#4314](https://github.com/terraform-providers/terraform-provider-aws/issues/4314)) +* resource/aws_api_gateway_rest_api: Add `policy` argument ([#4211](https://github.com/terraform-providers/terraform-provider-aws/issues/4211)) +* resource/aws_api_gateway_stage: Add `tags` argument ([#2858](https://github.com/terraform-providers/terraform-provider-aws/issues/2858)) +* resource/aws_api_gateway_stage: Add `execution_arn` and `invoke_url` attributes ([#3469](https://github.com/terraform-providers/terraform-provider-aws/issues/3469)) +* resource/aws_api_gateway_vpc_link: Support import ([#4306](https://github.com/terraform-providers/terraform-provider-aws/issues/4306)) +* resource/aws_cloudwatch_event_target: Add `batch_target` argument ([#4312](https://github.com/terraform-providers/terraform-provider-aws/issues/4312)) +* resource/aws_cloudwatch_event_target: Add `kinesis_target` and `sqs_target` arguments ([#4323](https://github.com/terraform-providers/terraform-provider-aws/issues/4323)) +* resource/aws_cognito_user_pool: Support `user_migration` in `lambda_config` ([#4301](https://github.com/terraform-providers/terraform-provider-aws/issues/4301)) +* resource/aws_db_instance: Add `s3_import` argument ([#2728](https://github.com/terraform-providers/terraform-provider-aws/issues/2728)) +* resource/aws_elastic_beanstalk_application: Add `appversion_lifecycle` argument ([#1907](https://github.com/terraform-providers/terraform-provider-aws/issues/1907)) +* resource/aws_instance: Add `credit_specification` argument (e.g. 
t2.unlimited support) ([#2619](https://github.com/terraform-providers/terraform-provider-aws/issues/2619)) +* resource/aws_kinesis_firehose_delivery_stream: Support Redshift `processing_configuration` ([#4251](https://github.com/terraform-providers/terraform-provider-aws/issues/4251)) +* resource/aws_launch_configuration: Add `user_data_base64` argument ([#4257](https://github.com/terraform-providers/terraform-provider-aws/issues/4257)) +* resource/aws_s3_bucket: Add support for `ONEZONE_IA` storage class ([#4287](https://github.com/terraform-providers/terraform-provider-aws/issues/4287)) +* resource/aws_s3_bucket_object: Add support for `ONEZONE_IA` storage class ([#4287](https://github.com/terraform-providers/terraform-provider-aws/issues/4287)) +* resource/aws_spot_instance_request: Add `valid_from` and `valid_until` arguments ([#4018](https://github.com/terraform-providers/terraform-provider-aws/issues/4018)) +* resource/aws_ssm_patch_baseline: Support `CENTOS` `operating_system` argument ([#4268](https://github.com/terraform-providers/terraform-provider-aws/issues/4268)) + +BUG FIXES: + +* data-source/aws_iam_policy_document: Prevent crash with multiple value principal identifiers ([#4277](https://github.com/terraform-providers/terraform-provider-aws/issues/4277)) +* data-source/aws_lb_listener: Ensure attributes are properly set when not used as arguments ([#4317](https://github.com/terraform-providers/terraform-provider-aws/issues/4317)) +* resource/aws_codebuild_project: Mark auth resource attribute as sensitive ([#4284](https://github.com/terraform-providers/terraform-provider-aws/issues/4284)) +* resource/aws_cognito_user_pool_client: Fix import to include user pool ID ([#3762](https://github.com/terraform-providers/terraform-provider-aws/issues/3762)) +* resource/aws_elasticache_cluster: Remove extraneous plan-time validation for `node_type` and `subnet_group_name` ([#4333](https://github.com/terraform-providers/terraform-provider-aws/issues/4333)) +* resource/aws_launch_template: Allow dashes in `name` and `name_prefix` arguments ([#4321](https://github.com/terraform-providers/terraform-provider-aws/issues/4321)) +* resource/aws_launch_template: Properly set `block_device_mappings` EBS information into Terraform state ([#4321](https://github.com/terraform-providers/terraform-provider-aws/issues/4321)) +* resource/aws_launch_template: Properly pass `block_device_mappings` information to EC2 API ([#4321](https://github.com/terraform-providers/terraform-provider-aws/issues/4321)) +* resource/aws_s3_bucket: Prevent panic on lifecycle rule reading errors ([#4282](https://github.com/terraform-providers/terraform-provider-aws/issues/4282)) + +## 1.15.0 (April 18, 2018) + +NOTES: + +* resource/aws_cloudfront_distribution: Please note the `cache_behavior` argument (an unordered list) has been marked deprecated in favor of the `ordered_cache_behavior` argument (an ordered list based on the Terraform configuration ordering). This is to support proper cache behavior precedence within a CloudFront distribution. 
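+
+For illustration only, a minimal sketch of the new ordered block (block and argument names follow the `aws_cloudfront_distribution` cache behavior schema; the values, and the required distribution arguments left as comments, are placeholders):
+
+```hcl
+resource "aws_cloudfront_distribution" "example" {
+  # origin, enabled, default_cache_behavior, restrictions, viewer_certificate,
+  # and other required arguments omitted for brevity.
+
+  # Previously expressed with the now-deprecated, unordered `cache_behavior`
+  # blocks; precedence now follows the order of these blocks in configuration.
+  ordered_cache_behavior {
+    path_pattern           = "/static/*"
+    allowed_methods        = ["GET", "HEAD"]
+    cached_methods         = ["GET", "HEAD"]
+    target_origin_id       = "primary"
+    viewer_protocol_policy = "redirect-to-https"
+
+    forwarded_values {
+      query_string = false
+
+      cookies {
+        forward = "none"
+      }
+    }
+  }
+}
+```
+
+Multiple `ordered_cache_behavior` blocks are evaluated in the order they appear in the configuration, which is the precedence behavior the deprecation above is meant to make explicit.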
+ +FEATURES: + +* **New Data Source:** `aws_api_gateway_rest_api` ([#4172](https://github.com/terraform-providers/terraform-provider-aws/issues/4172)) +* **New Data Source:** `aws_cloudwatch_log_group` ([#4167](https://github.com/terraform-providers/terraform-provider-aws/issues/4167)) +* **New Data Source:** `aws_cognito_user_pools` ([#4212](https://github.com/terraform-providers/terraform-provider-aws/issues/4212)) +* **New Data Source:** `aws_sqs_queue` ([#2311](https://github.com/terraform-providers/terraform-provider-aws/issues/2311)) +* **New Resource:** `aws_directory_service_conditional_forwarder` ([#4071](https://github.com/terraform-providers/terraform-provider-aws/issues/4071)) +* **New Resource:** `aws_glue_connection` ([#4016](https://github.com/terraform-providers/terraform-provider-aws/issues/4016)) +* **New Resource:** `aws_glue_job` ([#4028](https://github.com/terraform-providers/terraform-provider-aws/issues/4028)) +* **New Resource:** `aws_iam_service_linked_role` ([#2985](https://github.com/terraform-providers/terraform-provider-aws/issues/2985)) +* **New Resource:** `aws_launch_template` ([#2927](https://github.com/terraform-providers/terraform-provider-aws/issues/2927)) +* **New Resource:** `aws_ses_domain_identity_verification` ([#4108](https://github.com/terraform-providers/terraform-provider-aws/issues/4108)) + +ENHANCEMENTS: + +* data-source/aws_iam_server_certificate: Filter by `path_prefix` ([#3801](https://github.com/terraform-providers/terraform-provider-aws/issues/3801)) +* resource/aws_api_gateway_integration: Support VPC connection ([#3428](https://github.com/terraform-providers/terraform-provider-aws/issues/3428)) +* resource/aws_cloudfront_distribution: Added `ordered_cache_behavior` argument, deprecate `cache_behavior` ([#4117](https://github.com/terraform-providers/terraform-provider-aws/issues/4117)) +* resource/aws_db_instance: Support `enabled_cloudwatch_logs_exports` argument ([#4111](https://github.com/terraform-providers/terraform-provider-aws/issues/4111)) +* resource/aws_db_option_group: Support option version argument ([#2590](https://github.com/terraform-providers/terraform-provider-aws/issues/2590)) +* resource/aws_ecs_service: Support ServiceRegistries ([#3906](https://github.com/terraform-providers/terraform-provider-aws/issues/3906)) +* resource/aws_iam_service_linked_role: Support `custom_suffix` and `description` arguments ([#4188](https://github.com/terraform-providers/terraform-provider-aws/issues/4188)) +* resource/aws_service_discovery_service: Support `health_check_custom_config` argument ([#4083](https://github.com/terraform-providers/terraform-provider-aws/issues/4083)) +* resource/aws_spot_fleet_request: Support configurable delete timeout ([#3940](https://github.com/terraform-providers/terraform-provider-aws/issues/3940)) +* resource/aws_spot_instance_request: Support optionally fetching password data ([#4189](https://github.com/terraform-providers/terraform-provider-aws/issues/4189)) +* resource/aws_waf_rate_based_rule: Support `RegexMatch` predicate type ([#4069](https://github.com/terraform-providers/terraform-provider-aws/issues/4069)) +* resource/aws_waf_rule: Support `RegexMatch` predicate type ([#4069](https://github.com/terraform-providers/terraform-provider-aws/issues/4069)) +* resource/aws_wafregional_rate_based_rule: Support `RegexMatch` 
predicate type ([#4069](https://github.com/terraform-providers/terraform-provider-aws/issues/4069)) + +BUG FIXES: + +* resource/aws_athena_database: Handle database names with uppercase and underscores ([#4133](https://github.com/terraform-providers/terraform-provider-aws/issues/4133)) +* resource/aws_codebuild_project: Retry UpdateProject for IAM eventual consistency ([#4238](https://github.com/terraform-providers/terraform-provider-aws/issues/4238)) +* resource/aws_codedeploy_deployment_config: Force new resource for `minimum_healthy_hosts` updates ([#4194](https://github.com/terraform-providers/terraform-provider-aws/issues/4194)) +* resource/aws_cognito_user_group: Fix `role_arn` updates ([#4237](https://github.com/terraform-providers/terraform-provider-aws/issues/4237)) +* resource/aws_elasticache_replication_group: Increase default create timeout to 60 minutes ([#4093](https://github.com/terraform-providers/terraform-provider-aws/issues/4093)) +* resource/aws_emr_cluster: Force new resource if any of the `ec2_attributes` change ([#4218](https://github.com/terraform-providers/terraform-provider-aws/issues/4218)) +* resource/aws_iam_role: Suppress `NoSuchEntity` errors while detaching policies from role during deletion ([#4209](https://github.com/terraform-providers/terraform-provider-aws/issues/4209)) +* resource/aws_lb: Force new resource if any of the `subnet_mapping` attributes change ([#4086](https://github.com/terraform-providers/terraform-provider-aws/issues/4086)) +* resource/aws_rds_cluster: Properly handle `engine_version` with `snapshot_identifier` ([#4215](https://github.com/terraform-providers/terraform-provider-aws/issues/4215)) +* resource/aws_route53_record: Improved handling of non-alphanumeric record names ([#4183](https://github.com/terraform-providers/terraform-provider-aws/issues/4183)) +* resource/aws_spot_instance_request: Fix `instance_interuption_behaviour` hibernate and stop handling with placement ([#1986](https://github.com/terraform-providers/terraform-provider-aws/issues/1986)) +* resource/aws_vpc_dhcp_options: Handle plural and non-plural `InvalidDhcpOptionsID.NotFound` errors ([#4136](https://github.com/terraform-providers/terraform-provider-aws/issues/4136)) + +## 1.14.1 (April 11, 2018) + +ENHANCEMENTS: + +* resource/aws_db_event_subscription: Add `arn` attribute ([#4151](https://github.com/terraform-providers/terraform-provider-aws/issues/4151)) +* resource/aws_db_event_subscription: Support configurable timeouts ([#4151](https://github.com/terraform-providers/terraform-provider-aws/issues/4151)) + +BUG FIXES: + +* resource/aws_codebuild_project: Properly handle setting cache type `NO_CACHE` ([#4134](https://github.com/terraform-providers/terraform-provider-aws/issues/4134)) +* resource/aws_db_event_subscription: Fix `tag` ARN handling ([#4151](https://github.com/terraform-providers/terraform-provider-aws/issues/4151)) +* resource/aws_dynamodb_table_item: Trigger destructive update if range_key has changed ([#3821](https://github.com/terraform-providers/terraform-provider-aws/issues/3821)) +* resource/aws_elb: Return any errors when updating listeners ([#4159](https://github.com/terraform-providers/terraform-provider-aws/issues/4159)) +* resource/aws_emr_cluster: Prevent crash with missing StateChangeReason ([#4165](https://github.com/terraform-providers/terraform-provider-aws/issues/4165)) +* 
resource/aws_iam_user: Retry user login profile deletion on `EntityTemporarilyUnmodifiable` ([#4143](https://github.com/terraform-providers/terraform-provider-aws/issues/4143)) +* resource/aws_kinesis_firehose_delivery_stream: Prevent crash with missing CloudWatch logging options ([#4148](https://github.com/terraform-providers/terraform-provider-aws/issues/4148)) +* resource/aws_lambda_alias: Force new resource on `name` change ([#4106](https://github.com/terraform-providers/terraform-provider-aws/issues/4106)) +* resource/aws_lambda_function: Prevent perpetual difference when removing `dead_letter_config` ([#2684](https://github.com/terraform-providers/terraform-provider-aws/issues/2684)) +* resource/aws_launch_configuration: Properly read `security_groups`, `user_data`, and `vpc_classic_link_security_groups` attributes into Terraform state ([#2800](https://github.com/terraform-providers/terraform-provider-aws/issues/2800)) +* resource/aws_network_acl: Prevent error on deletion with already deleted subnets ([#4119](https://github.com/terraform-providers/terraform-provider-aws/issues/4119)) +* resource/aws_network_acl: Prevent error on update with removing associations for already deleted subnets ([#4119](https://github.com/terraform-providers/terraform-provider-aws/issues/4119)) +* resource/aws_rds_cluster: Properly handle `engine_version` during regular creation ([#4139](https://github.com/terraform-providers/terraform-provider-aws/issues/4139)) +* resource/aws_rds_cluster: Set `port` updates to force new resource ([#4144](https://github.com/terraform-providers/terraform-provider-aws/issues/4144)) +* resource/aws_route53_zone: Suppress `name` difference with trailing period ([#3982](https://github.com/terraform-providers/terraform-provider-aws/issues/3982)) +* resource/aws_vpc_peering_connection: Allow active pending state during deletion for eventual consistency ([#4140](https://github.com/terraform-providers/terraform-provider-aws/issues/4140)) + +## 1.14.0 (April 06, 2018) + +NOTES: + +* resource/aws_organizations_account: As noted in the resource documentation, resource deletion from Terraform will _not_ automatically close AWS accounts due to the behavior of the AWS Organizations service. There are also various manual steps required by AWS before the account can be removed from an organization and made into a standalone account, then manually closed if desired. 
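+
+As an illustrative sketch only (the `name` and `email` arguments are assumed from the resource documentation and the values are placeholders), the caveat above applies when a configuration like the following is later destroyed:
+
+```hcl
+resource "aws_organizations_account" "member" {
+  # Placeholder values; the member account is created inside the organization
+  # that the provider's credentials belong to.
+  name  = "example-member-account"
+  email = "aws-admin@example.com"
+
+  # Running `terraform destroy` will not close this AWS account; see the
+  # note above about the manual steps AWS requires first.
+}
+```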
+ +FEATURES: + +* **New Resource:** `aws_organizations_account` ([#3524](https://github.com/terraform-providers/terraform-provider-aws/issues/3524)) +* **New Resource:** `aws_ses_identity_notification_topic` ([#2640](https://github.com/terraform-providers/terraform-provider-aws/issues/2640)) + +ENHANCEMENTS: + +* provider: Fallback to SDK default credential chain if credentials not found using provider credential chain ([#2883](https://github.com/terraform-providers/terraform-provider-aws/issues/2883)) +* data-source/aws_iam_role: Add `max_session_duration` attribute ([#4092](https://github.com/terraform-providers/terraform-provider-aws/issues/4092)) +* resource/aws_cloudfront_distribution: Add cache_behavior `field_level_encryption_id` attribute ([#4102](https://github.com/terraform-providers/terraform-provider-aws/issues/4102)) +* resource/aws_codebuild_project: Support `cache` configuration ([#2860](https://github.com/terraform-providers/terraform-provider-aws/issues/2860)) +* resource/aws_elasticache_replication_group: Support Cluster Mode Enabled online shard reconfiguration ([#3932](https://github.com/terraform-providers/terraform-provider-aws/issues/3932)) +* resource/aws_elasticache_replication_group: Configurable create, update, and delete timeouts ([#3932](https://github.com/terraform-providers/terraform-provider-aws/issues/3932)) +* resource/aws_iam_role: Add `max_session_duration` argument ([#3977](https://github.com/terraform-providers/terraform-provider-aws/issues/3977)) +* resource/aws_kinesis_firehose_delivery_stream: Add Elasticsearch destination processing configuration support ([#3621](https://github.com/terraform-providers/terraform-provider-aws/issues/3621)) +* resource/aws_kinesis_firehose_delivery_stream: Add Extended S3 destination backup mode support ([#2987](https://github.com/terraform-providers/terraform-provider-aws/issues/2987)) +* resource/aws_kinesis_firehose_delivery_stream: Add Splunk destination processing configuration support ([#3944](https://github.com/terraform-providers/terraform-provider-aws/issues/3944)) +* resource/aws_lambda_function: Support `nodejs8.10` runtime ([#4020](https://github.com/terraform-providers/terraform-provider-aws/issues/4020)) +* resource/aws_launch_configuration: Add support for `ebs_block_device.*.no_device` ([#4070](https://github.com/terraform-providers/terraform-provider-aws/issues/4070)) +* resource/aws_ssm_maintenance_window_target: Make resource updatable ([#4074](https://github.com/terraform-providers/terraform-provider-aws/issues/4074)) +* resource/aws_wafregional_rule: Validate all predicate types ([#4046](https://github.com/terraform-providers/terraform-provider-aws/issues/4046)) + +BUG FIXES: + +* resource/aws_cognito_user_pool: Trim `custom:` prefix of `developer_only_attribute = false` schema attributes ([#4041](https://github.com/terraform-providers/terraform-provider-aws/issues/4041)) +* resource/aws_cognito_user_pool: Fix `email_message_by_link` max length validation ([#4051](https://github.com/terraform-providers/terraform-provider-aws/issues/4051)) +* resource/aws_elasticache_replication_group: Properly set `cluster_mode` in state ([#3932](https://github.com/terraform-providers/terraform-provider-aws/issues/3932)) +* resource/aws_iam_user_login_profile: Changed password generation to use `crypto/rand` 
([#3989](https://github.com/terraform-providers/terraform-provider-aws/issues/3989))
+* resource/aws_kinesis_firehose_delivery_stream: Prevent additional crash scenarios with optional configurations ([#4047](https://github.com/terraform-providers/terraform-provider-aws/issues/4047))
+* resource/aws_lambda_function: IAM retry for "The role defined for the function cannot be assumed by Lambda" on update ([#3988](https://github.com/terraform-providers/terraform-provider-aws/issues/3988))
+* resource/aws_lb: Suppress differences for non-applicable attributes ([#4032](https://github.com/terraform-providers/terraform-provider-aws/issues/4032))
+* resource/aws_rds_cluster_instance: Prevent crash on importing non-cluster instances ([#3961](https://github.com/terraform-providers/terraform-provider-aws/issues/3961))
+* resource/aws_route53_record: Fix ListResourceRecordSet pagination ([#3900](https://github.com/terraform-providers/terraform-provider-aws/issues/3900))
+
+## 1.13.0 (March 28, 2018)
+
+NOTES:
+
+This release is happening outside the normal release schedule to accommodate a crash fix for the `aws_lb_target_group` resource. It appears an ELBv2 service update rolling out currently is the root cause. The potential for this crash has been present since the initial resource in Terraform 0.7.7 and all versions of the AWS provider up to v1.13.0.
+
+FEATURES:
+
+* **New Resource:** `aws_appsync_datasource` ([#2758](https://github.com/terraform-providers/terraform-provider-aws/issues/2758))
+* **New Resource:** `aws_waf_regex_match_set` ([#3947](https://github.com/terraform-providers/terraform-provider-aws/issues/3947))
+* **New Resource:** `aws_waf_regex_pattern_set` ([#3913](https://github.com/terraform-providers/terraform-provider-aws/issues/3913))
+* **New Resource:** `aws_waf_rule_group` ([#3898](https://github.com/terraform-providers/terraform-provider-aws/issues/3898))
+* **New Resource:** `aws_wafregional_geo_match_set` ([#3915](https://github.com/terraform-providers/terraform-provider-aws/issues/3915))
+* **New Resource:** `aws_wafregional_rate_based_rule` ([#3871](https://github.com/terraform-providers/terraform-provider-aws/issues/3871))
+* **New Resource:** `aws_wafregional_regex_match_set` ([#3950](https://github.com/terraform-providers/terraform-provider-aws/issues/3950))
+* **New Resource:** `aws_wafregional_regex_pattern_set` ([#3933](https://github.com/terraform-providers/terraform-provider-aws/issues/3933))
+* **New Resource:** `aws_wafregional_rule_group` ([#3948](https://github.com/terraform-providers/terraform-provider-aws/issues/3948))
+
+ENHANCEMENTS:
+
+* provider: Support custom Elasticsearch endpoint ([#3941](https://github.com/terraform-providers/terraform-provider-aws/issues/3941))
+* resource/aws_appsync_graphql_api: Support import ([#3500](https://github.com/terraform-providers/terraform-provider-aws/issues/3500))
+* resource/aws_elasticache_cluster: Allow port to be optional ([#3835](https://github.com/terraform-providers/terraform-provider-aws/issues/3835))
+* resource/aws_elasticache_cluster: Add `replication_group_id` argument ([#3869](https://github.com/terraform-providers/terraform-provider-aws/issues/3869))
+* resource/aws_elasticache_replication_group: Allow port to be optional ([#3835](https://github.com/terraform-providers/terraform-provider-aws/issues/3835))
+
+BUG FIXES:
+
+* 
resource/aws_autoscaling_group: Fix updating of `service_linked_role` ([#3942](https://github.com/terraform-providers/terraform-provider-aws/issues/3942)) +* resource/aws_autoscaling_group: Properly set empty `enabled_metrics` in the state during read ([#3899](https://github.com/terraform-providers/terraform-provider-aws/issues/3899)) +* resource/aws_autoscaling_policy: Fix conditional logic based on `policy_type` ([#3739](https://github.com/terraform-providers/terraform-provider-aws/issues/3739)) +* resource/aws_batch_compute_environment: Correctly set `compute_resources` in state ([#3824](https://github.com/terraform-providers/terraform-provider-aws/issues/3824)) +* resource/aws_cognito_user_pool: Correctly set `schema` in state ([#3789](https://github.com/terraform-providers/terraform-provider-aws/issues/3789)) +* resource/aws_iam_user_login_profile: Fix `password_length` validation function regression from 1.12.0 ([#3919](https://github.com/terraform-providers/terraform-provider-aws/issues/3919)) +* resource/aws_lb: Store correct state for http2 and ensure attributes are set on create ([#3854](https://github.com/terraform-providers/terraform-provider-aws/issues/3854)) +* resource/aws_lb: Correctly set `subnet_mappings` in state ([#3822](https://github.com/terraform-providers/terraform-provider-aws/issues/3822)) +* resource/aws_lb_listener: Retry CertificateNotFound errors on update for IAM eventual consistency ([#3901](https://github.com/terraform-providers/terraform-provider-aws/issues/3901)) +* resource/aws_lb_target_group: Prevent crash from missing matcher during read ([#3954](https://github.com/terraform-providers/terraform-provider-aws/issues/3954)) +* resource/aws_security_group: Retry read on creation for EC2 eventual consistency ([#3892](https://github.com/terraform-providers/terraform-provider-aws/issues/3892)) + + +## 1.12.0 (March 23, 2018) NOTES: -* provider: For resources implementing the IAM policy equivalence library (https://github.com/jen20/awspolicyequivalence/) on an attribute via `suppressEquivalentAwsPolicyDiffs`, the dependency has been updated, which should mark additional IAM policies as equivalent. [GH-3832] +* provider: For resources implementing the IAM policy equivalence library (https://github.com/jen20/awspolicyequivalence/) on an attribute via `suppressEquivalentAwsPolicyDiffs`, the dependency has been updated, which should mark additional IAM policies as equivalent. 
([#3832](https://github.com/terraform-providers/terraform-provider-aws/issues/3832)) FEATURES: -* **New Resource:** `aws_waf_geo_match_set` [GH-3275] -* **New Resource:** `aws_wafregional_rule` [GH-3756] -* **New Resource:** `aws_wafregional_sql_injection_match_set` [GH-1013] -* **New Resource:** `aws_wafregional_web_acl` [GH-3754] -* **New Resource:** `aws_wafregional_xss_match_set` [GH-1014] -* **New Resource:** `aws_kms_grant` [GH-3038] +* **New Resource:** `aws_kms_grant` ([#3038](https://github.com/terraform-providers/terraform-provider-aws/issues/3038)) +* **New Resource:** `aws_waf_geo_match_set` ([#3275](https://github.com/terraform-providers/terraform-provider-aws/issues/3275)) +* **New Resource:** `aws_wafregional_rule` ([#3756](https://github.com/terraform-providers/terraform-provider-aws/issues/3756)) +* **New Resource:** `aws_wafregional_size_constraint_set` ([#3796](https://github.com/terraform-providers/terraform-provider-aws/issues/3796)) +* **New Resource:** `aws_wafregional_sql_injection_match_set` ([#1013](https://github.com/terraform-providers/terraform-provider-aws/issues/1013)) +* **New Resource:** `aws_wafregional_web_acl` ([#3754](https://github.com/terraform-providers/terraform-provider-aws/issues/3754)) +* **New Resource:** `aws_wafregional_web_acl_association` ([#3755](https://github.com/terraform-providers/terraform-provider-aws/issues/3755)) +* **New Resource:** `aws_wafregional_xss_match_set` ([#1014](https://github.com/terraform-providers/terraform-provider-aws/issues/1014)) ENHANCEMENTS: -* provider: Treat IAM policies with account ID principals as equivalent to IAM account root ARN [GH-3832] -* provider: Treat additional IAM policy scenarios with empty principal trees as equivalent [GH-3832] -* resource/aws_cloudfront_distribution: Validate origin `domain_name` and `origin_id` at plan time [GH-3767] -* resource/aws_eip: Support configurable timeouts [GH-3769] -* resource/aws_emr_cluster: Add step support [GH-3673] -* resource/aws_instance: Support optionally fetching encrypted Windows password data [GH-2219] -* resource/aws_launch_configuration: Validate `user_data` length during plan [GH-2973] -* resource/aws_lb_target_group: Validate health check threshold for TCP protocol during plan [GH-3782] -* resource/aws_security_group: Add arn attribute [GH-3751] -* resource/aws_ses_domain_identity: Support trailing period in domain name [GH-3840] -* resource/aws_sqs_queue: Support lack of ListQueueTags for all non-standard AWS implementations [GH-3794] -* resource/aws_ssm_document: Add `document_format` argument to support YAML [GH-3814] -* resource/aws_api_gateway_rest_api: Add support for content encoding [GH-3642] -* resource/aws_s3_bucket_object: New `content_base64` argument allows uploading raw binary data created in-memory, rather than reading from disk as with `source`. 
[GH-3788] +* provider: Treat IAM policies with account ID principals as equivalent to IAM account root ARN ([#3832](https://github.com/terraform-providers/terraform-provider-aws/issues/3832)) +* provider: Treat additional IAM policy scenarios with empty principal trees as equivalent ([#3832](https://github.com/terraform-providers/terraform-provider-aws/issues/3832)) +* resource/aws_acm_certificate: Retry on ResourceInUseException during deletion for eventual consistency ([#3868](https://github.com/terraform-providers/terraform-provider-aws/issues/3868)) +* resource/aws_api_gateway_rest_api: Add support for content encoding ([#3642](https://github.com/terraform-providers/terraform-provider-aws/issues/3642)) +* resource/aws_autoscaling_group: Add `service_linked_role_arn` argument ([#3812](https://github.com/terraform-providers/terraform-provider-aws/issues/3812)) +* resource/aws_cloudfront_distribution: Validate origin `domain_name` and `origin_id` at plan time ([#3767](https://github.com/terraform-providers/terraform-provider-aws/issues/3767)) +* resource/aws_eip: Support configurable timeouts ([#3769](https://github.com/terraform-providers/terraform-provider-aws/issues/3769)) +* resource/aws_elasticache_cluster: Support plan time validation of az_mode ([#3857](https://github.com/terraform-providers/terraform-provider-aws/issues/3857)) +* resource/aws_elasticache_cluster: Support plan time validation of node_type requiring VPC for cache.t2 instances ([#3857](https://github.com/terraform-providers/terraform-provider-aws/issues/3857)) +* resource/aws_elasticache_cluster: Support plan time validation of num_cache_nodes > 1 for redis ([#3857](https://github.com/terraform-providers/terraform-provider-aws/issues/3857)) +* resource/aws_elasticache_cluster: ForceNew on node_type changes for memcached engine ([#3857](https://github.com/terraform-providers/terraform-provider-aws/issues/3857)) +* resource/aws_elasticache_cluster: ForceNew on engine_version downgrades ([#3857](https://github.com/terraform-providers/terraform-provider-aws/issues/3857)) +* resource/aws_emr_cluster: Add step support ([#3673](https://github.com/terraform-providers/terraform-provider-aws/issues/3673)) +* resource/aws_instance: Support optionally fetching encrypted Windows password data ([#2219](https://github.com/terraform-providers/terraform-provider-aws/issues/2219)) +* resource/aws_launch_configuration: Validate `user_data` length during plan ([#2973](https://github.com/terraform-providers/terraform-provider-aws/issues/2973)) +* resource/aws_lb_target_group: Validate health check threshold for TCP protocol during plan ([#3782](https://github.com/terraform-providers/terraform-provider-aws/issues/3782)) +* resource/aws_security_group: Add arn attribute ([#3751](https://github.com/terraform-providers/terraform-provider-aws/issues/3751)) +* resource/aws_ses_domain_identity: Support trailing period in domain name ([#3840](https://github.com/terraform-providers/terraform-provider-aws/issues/3840)) +* resource/aws_sqs_queue: Support lack of ListQueueTags for all non-standard AWS implementations ([#3794](https://github.com/terraform-providers/terraform-provider-aws/issues/3794)) +* resource/aws_ssm_document: Add `document_format` argument to support YAML ([#3814](https://github.com/terraform-providers/terraform-provider-aws/issues/3814)) +* resource/aws_s3_bucket_object: 
New `content_base64` argument allows uploading raw binary data created in-memory, rather than reading from disk as with `source`. ([#3788](https://github.com/terraform-providers/terraform-provider-aws/issues/3788)) BUG FIXES: -* resource/aws_api_gateway_client_certificate: Export `*_date` fields correctly [GH-3805] -* resource/aws_cognito_user_pool: Detect `auto_verified_attributes` changes [GH-3786] -* resource/aws_cognito_user_pool_client: Fix `callback_urls` updates [GH-3404] -* resource/aws_db_instance: Support `incompatible-parameters` and `storage-full` state [GH-3708] -* resource/aws_ecs_task_definition: Correctly read `volume` attribute into Terraform state [GH-3823] -* resource/aws_kinesis_firehose_delivery_stream: Prevent crash on malformed ID for import [GH-3834] -* resource/aws_lambda_function: Only retry IAM eventual consistency errors for one minute [GH-3765] -* resource/aws_ssm_association: Prevent AssociationDoesNotExist error [GH-3776] -* resource/aws_vpc_endpoint: Prevent perpertual diff in non-standard partitions [GH-3317] -* resource/aws_dynamodb_table: Update and validate attributes correctly [GH-3194] +* resource/aws_api_gateway_client_certificate: Export `*_date` fields correctly ([#3805](https://github.com/terraform-providers/terraform-provider-aws/issues/3805)) +* resource/aws_cognito_user_pool: Detect `auto_verified_attributes` changes ([#3786](https://github.com/terraform-providers/terraform-provider-aws/issues/3786)) +* resource/aws_cognito_user_pool_client: Fix `callback_urls` updates ([#3404](https://github.com/terraform-providers/terraform-provider-aws/issues/3404)) +* resource/aws_db_instance: Support `incompatible-parameters` and `storage-full` state ([#3708](https://github.com/terraform-providers/terraform-provider-aws/issues/3708)) +* resource/aws_dynamodb_table: Update and validate attributes correctly ([#3194](https://github.com/terraform-providers/terraform-provider-aws/issues/3194)) +* resource/aws_ecs_task_definition: Correctly read `volume` attribute into Terraform state ([#3823](https://github.com/terraform-providers/terraform-provider-aws/issues/3823)) +* resource/aws_kinesis_firehose_delivery_stream: Prevent crash on malformed ID for import ([#3834](https://github.com/terraform-providers/terraform-provider-aws/issues/3834)) +* resource/aws_lambda_function: Only retry IAM eventual consistency errors for one minute ([#3765](https://github.com/terraform-providers/terraform-provider-aws/issues/3765)) +* resource/aws_ssm_association: Prevent AssociationDoesNotExist error ([#3776](https://github.com/terraform-providers/terraform-provider-aws/issues/3776)) +* resource/aws_vpc_endpoint: Prevent perpertual diff in non-standard partitions ([#3317](https://github.com/terraform-providers/terraform-provider-aws/issues/3317)) ## 1.11.0 (March 09, 2018) diff --git a/Dockerfile b/Dockerfile new file mode 100644 index 00000000000..d485a7dcafa --- /dev/null +++ b/Dockerfile @@ -0,0 +1,24 @@ +# This Dockerfile builds on golang:alpine by building Terraform from source +# using the current working directory. +# +# This produces a docker image that contains a working Terraform binary along +# with all of its source code, which is what gets released on hub.docker.com +# as terraform:full. 
The main releases (terraform:latest, terraform:light and +# the release tags) are lighter images including only the officially-released +# binary from releases.hashicorp.com; these are built instead from +# scripts/docker-release/Dockerfile-release. + +FROM golang:1.11-alpine +LABEL maintainer="HashiCorp Terraform Team " + +RUN apk add --update git bash openssh make gcc musl-dev + +ENV TF_DEV=true +ENV TF_RELEASE=1 + +WORKDIR $GOPATH/src/github.com/terraform-providers/terraform-provider-aws +COPY . . +RUN make fmt && make build && make test + +WORKDIR $GOPATH +ENTRYPOINT ["terraform"] diff --git a/GNUmakefile b/GNUmakefile index eca08ffa76f..8e1c2b7f45e 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -1,6 +1,8 @@ SWEEP?=us-east-1,us-west-2 TEST?=./... GOFMT_FILES?=$$(find . -name '*.go' |grep -v vendor) +PKG_NAME=aws +WEBSITE_REPO=github.com/hashicorp/terraform-website default: build @@ -15,25 +17,27 @@ test: fmtcheck go test $(TEST) -timeout=30s -parallel=4 testacc: fmtcheck - TF_ACC=1 go test $(TEST) -v $(TESTARGS) -timeout 120m - -vet: - @echo "go vet ." - @go vet $$(go list ./... | grep -v vendor/) ; if [ $$? -eq 1 ]; then \ - echo ""; \ - echo "Vet found suspicious constructs. Please check the reported constructs"; \ - echo "and fix them if necessary before submitting the code for review."; \ - exit 1; \ - fi + TF_ACC=1 go test $(TEST) -v -parallel 20 $(TESTARGS) -timeout 120m fmt: - gofmt -w $(GOFMT_FILES) + @echo "==> Fixing source code with gofmt..." + gofmt -s -w ./$(PKG_NAME) +# Currently required by tf-deploy compile fmtcheck: @sh -c "'$(CURDIR)/scripts/gofmtcheck.sh'" -errcheck: - @sh -c "'$(CURDIR)/scripts/errcheck.sh'" +websitefmtcheck: + @sh -c "'$(CURDIR)/scripts/websitefmtcheck.sh'" + +lint: + @echo "==> Checking source code against linters..." + @gometalinter ./$(PKG_NAME) + +tools: + go get -u github.com/kardianos/govendor + go get -u github.com/alecthomas/gometalinter + gometalinter --install vendor-status: @govendor status @@ -41,10 +45,28 @@ vendor-status: test-compile: @if [ "$(TEST)" = "./..." ]; then \ echo "ERROR: Set TEST to a specific package. For example,"; \ - echo " make test-compile TEST=./aws"; \ + echo " make test-compile TEST=./$(PKG_NAME)"; \ exit 1; \ fi go test -c $(TEST) $(TESTARGS) -.PHONY: build sweep test testacc vet fmt fmtcheck errcheck vendor-status test-compile +website: +ifeq (,$(wildcard $(GOPATH)/src/$(WEBSITE_REPO))) + echo "$(WEBSITE_REPO) not found in your GOPATH (necessary for layouts and assets), get-ting..." + git clone https://$(WEBSITE_REPO) $(GOPATH)/src/$(WEBSITE_REPO) +endif + @$(MAKE) -C $(GOPATH)/src/$(WEBSITE_REPO) website-provider PROVIDER_PATH=$(shell pwd) PROVIDER_NAME=$(PKG_NAME) + +website-lint: + @echo "==> Checking website against linters..." + @misspell -error -source=text website/ + +website-test: +ifeq (,$(wildcard $(GOPATH)/src/$(WEBSITE_REPO))) + echo "$(WEBSITE_REPO) not found in your GOPATH (necessary for layouts and assets), get-ting..." 
+ git clone https://$(WEBSITE_REPO) $(GOPATH)/src/$(WEBSITE_REPO) +endif + @$(MAKE) -C $(GOPATH)/src/$(WEBSITE_REPO) website-provider-test PROVIDER_PATH=$(shell pwd) PROVIDER_NAME=$(PKG_NAME) + +.PHONY: build sweep test testacc fmt fmtcheck lint tools vendor-status test-compile website website-lint website-test diff --git a/README.md b/README.md index 763c7fd89aa..28dfaee8682 100644 --- a/README.md +++ b/README.md @@ -10,8 +10,8 @@ Terraform Provider Requirements ------------ -- [Terraform](https://www.terraform.io/downloads.html) 0.10.x -- [Go](https://golang.org/doc/install) 1.9 (to build the provider plugin) +- [Terraform](https://www.terraform.io/downloads.html) 0.10+ +- [Go](https://golang.org/doc/install) 1.11 (to build the provider plugin) Building The Provider --------------------- @@ -32,12 +32,12 @@ $ make build Using the provider ---------------------- -If you're building the provider, follow the instructions to [install it as a plugin.](https://www.terraform.io/docs/plugins/basics.html#installing-a-plugin) After placing it into your plugins directory, run `terraform init` to initialize it. +If you're building the provider, follow the instructions to [install it as a plugin.](https://www.terraform.io/docs/plugins/basics.html#installing-a-plugin) After placing it into your plugins directory, run `terraform init` to initialize it. Documentation about the provider specific configuration options can be found on the [provider's website](https://www.terraform.io/docs/providers/aws/index.html). Developing the Provider --------------------------- -If you wish to work on the provider, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.9+ is *required*). You'll also need to correctly setup a [GOPATH](http://golang.org/doc/code.html#GOPATH), as well as adding `$GOPATH/bin` to your `$PATH`. +If you wish to work on the provider, you'll first need [Go](http://www.golang.org) installed on your machine (version 1.11+ is *required*). You'll also need to correctly setup a [GOPATH](http://golang.org/doc/code.html#GOPATH), as well as adding `$GOPATH/bin` to your `$PATH`. To compile the provider, run `make build`. This will build the provider and put the provider binary in the `$GOPATH/bin` directory. 
diff --git a/aws/arn.go b/aws/arn.go deleted file mode 100644 index de72a23eb70..00000000000 --- a/aws/arn.go +++ /dev/null @@ -1,26 +0,0 @@ -package aws - -import ( - "github.com/aws/aws-sdk-go/aws/arn" - "github.com/aws/aws-sdk-go/service/iam" -) - -func arnString(partition, region, service, accountId, resource string) string { - return arn.ARN{ - Partition: partition, - Region: region, - Service: service, - AccountID: accountId, - Resource: resource, - }.String() -} - -// See http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-iam -func iamArnString(partition, accountId, resource string) string { - return arnString( - partition, - "", - iam.ServiceName, - accountId, - resource) -} diff --git a/aws/arn_test.go b/aws/arn_test.go deleted file mode 100644 index 9d0505f033f..00000000000 --- a/aws/arn_test.go +++ /dev/null @@ -1,13 +0,0 @@ -package aws - -import ( - "testing" -) - -func TestArn_iamRootUser(t *testing.T) { - arn := iamArnString("aws", "1234567890", "root") - expectedArn := "arn:aws:iam::1234567890:root" - if arn != expectedArn { - t.Fatalf("Expected ARN: %s, got: %s", expectedArn, arn) - } -} diff --git a/aws/auth_helpers.go b/aws/auth_helpers.go index 50221f56f43..f0a8cbf3a1c 100644 --- a/aws/auth_helpers.go +++ b/aws/auth_helpers.go @@ -18,91 +18,137 @@ import ( "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/sts" - "github.com/hashicorp/errwrap" "github.com/hashicorp/go-cleanhttp" "github.com/hashicorp/go-multierror" ) -func GetAccountID(iamconn *iam.IAM, stsconn *sts.STS, authProviderName string) (string, error) { - var errors error - // If we have creds from instance profile, we can use metadata API +func GetAccountIDAndPartition(iamconn *iam.IAM, stsconn *sts.STS, authProviderName string) (string, string, error) { + var accountID, partition string + var err, errors error + if authProviderName == ec2rolecreds.ProviderName { - log.Println("[DEBUG] Trying to get account ID via AWS Metadata API") + accountID, partition, err = GetAccountIDAndPartitionFromEC2Metadata() + } else { + accountID, partition, err = GetAccountIDAndPartitionFromIAMGetUser(iamconn) + } + if accountID != "" { + return accountID, partition, nil + } + errors = multierror.Append(errors, err) - cfg := &aws.Config{} - setOptionalEndpoint(cfg) - sess, err := session.NewSession(cfg) - if err != nil { - return "", errwrap.Wrapf("Error creating AWS session: {{err}}", err) - } + accountID, partition, err = GetAccountIDAndPartitionFromSTSGetCallerIdentity(stsconn) + if accountID != "" { + return accountID, partition, nil + } + errors = multierror.Append(errors, err) - metadataClient := ec2metadata.New(sess) - info, err := metadataClient.IAMInfo() - if err == nil { - return parseAccountIDFromArn(info.InstanceProfileArn) - } - log.Printf("[DEBUG] Failed to get account info from metadata service: %s", err) - errors = multierror.Append(errors, err) + accountID, partition, err = GetAccountIDAndPartitionFromIAMListRoles(iamconn) + if accountID != "" { + return accountID, partition, nil + } + errors = multierror.Append(errors, err) + + return accountID, partition, errors +} + +func GetAccountIDAndPartitionFromEC2Metadata() (string, string, error) { + log.Println("[DEBUG] Trying to get account information via EC2 Metadata") + + cfg := &aws.Config{} + setOptionalEndpoint(cfg) + sess, err := session.NewSession(cfg) + if err != nil { + return "", "", 
fmt.Errorf("error creating EC2 Metadata session: %s", err) + } + + metadataClient := ec2metadata.New(sess) + info, err := metadataClient.IAMInfo() + if err != nil { // We can end up here if there's an issue with the instance metadata service // or if we're getting credentials from AdRoll's Hologram (in which case IAMInfo will - // error out). In any event, if we can't get account info here, we should try - // the other methods available. - // If we have creds from something that looks like an IAM instance profile, but - // we were unable to retrieve account info from the instance profile, it's probably - // a safe assumption that we're not an IAM user - } else { - // Creds aren't from an IAM instance profile, so try try iam:GetUser - log.Println("[DEBUG] Trying to get account ID via iam:GetUser") - outUser, err := iamconn.GetUser(nil) - if err == nil { - return parseAccountIDFromArn(*outUser.User.Arn) - } - errors = multierror.Append(errors, err) - awsErr, ok := err.(awserr.Error) + // error out). + err = fmt.Errorf("failed getting account information via EC2 Metadata IAM information: %s", err) + log.Printf("[DEBUG] %s", err) + return "", "", err + } + + return parseAccountIDAndPartitionFromARN(info.InstanceProfileArn) +} + +func GetAccountIDAndPartitionFromIAMGetUser(iamconn *iam.IAM) (string, string, error) { + log.Println("[DEBUG] Trying to get account information via iam:GetUser") + + output, err := iamconn.GetUser(&iam.GetUserInput{}) + if err != nil { // AccessDenied and ValidationError can be raised // if credentials belong to federated profile, so we ignore these - if !ok || (awsErr.Code() != "AccessDenied" && awsErr.Code() != "ValidationError" && awsErr.Code() != "InvalidClientTokenId") { - return "", fmt.Errorf("Failed getting account ID via 'iam:GetUser': %s", err) + if isAWSErr(err, "AccessDenied", "") { + return "", "", nil } - log.Printf("[DEBUG] Getting account ID via iam:GetUser failed: %s", err) + if isAWSErr(err, "InvalidClientTokenId", "") { + return "", "", nil + } + if isAWSErr(err, "ValidationError", "") { + return "", "", nil + } + err = fmt.Errorf("failed getting account information via iam:GetUser: %s", err) + log.Printf("[DEBUG] %s", err) + return "", "", err } - // Then try STS GetCallerIdentity - log.Println("[DEBUG] Trying to get account ID via sts:GetCallerIdentity") - outCallerIdentity, err := stsconn.GetCallerIdentity(&sts.GetCallerIdentityInput{}) - if err == nil { - return parseAccountIDFromArn(*outCallerIdentity.Arn) + if output == nil || output.User == nil { + err = errors.New("empty iam:GetUser response") + log.Printf("[DEBUG] %s", err) + return "", "", err } - log.Printf("[DEBUG] Getting account ID via sts:GetCallerIdentity failed: %s", err) - errors = multierror.Append(errors, err) - // Then try IAM ListRoles - log.Println("[DEBUG] Trying to get account ID via iam:ListRoles") - outRoles, err := iamconn.ListRoles(&iam.ListRolesInput{ + return parseAccountIDAndPartitionFromARN(aws.StringValue(output.User.Arn)) +} + +func GetAccountIDAndPartitionFromIAMListRoles(iamconn *iam.IAM) (string, string, error) { + log.Println("[DEBUG] Trying to get account information via iam:ListRoles") + + output, err := iamconn.ListRoles(&iam.ListRolesInput{ MaxItems: aws.Int64(int64(1)), }) if err != nil { - log.Printf("[DEBUG] Failed to get account ID via iam:ListRoles: %s", err) - errors = multierror.Append(errors, err) - return "", fmt.Errorf("Failed getting account ID via all available methods. 
Errors: %s", errors) + err = fmt.Errorf("failed getting account information via iam:ListRoles: %s", err) + log.Printf("[DEBUG] %s", err) + return "", "", err + } + + if output == nil || len(output.Roles) < 1 { + err = fmt.Errorf("empty iam:ListRoles response") + log.Printf("[DEBUG] %s", err) + return "", "", err + } + + return parseAccountIDAndPartitionFromARN(aws.StringValue(output.Roles[0].Arn)) +} + +func GetAccountIDAndPartitionFromSTSGetCallerIdentity(stsconn *sts.STS) (string, string, error) { + log.Println("[DEBUG] Trying to get account information via sts:GetCallerIdentity") + + output, err := stsconn.GetCallerIdentity(&sts.GetCallerIdentityInput{}) + if err != nil { + return "", "", fmt.Errorf("error calling sts:GetCallerIdentity: %s", err) } - if len(outRoles.Roles) < 1 { - err = fmt.Errorf("Failed to get account ID via iam:ListRoles: No roles available") + if output == nil || output.Arn == nil { + err = errors.New("empty sts:GetCallerIdentity response") log.Printf("[DEBUG] %s", err) - errors = multierror.Append(errors, err) - return "", fmt.Errorf("Failed getting account ID via all available methods. Errors: %s", errors) + return "", "", err } - return parseAccountIDFromArn(*outRoles.Roles[0].Arn) + return parseAccountIDAndPartitionFromARN(aws.StringValue(output.Arn)) } -func parseAccountIDFromArn(inputARN string) (string, error) { +func parseAccountIDAndPartitionFromARN(inputARN string) (string, string, error) { arn, err := arn.Parse(inputARN) if err != nil { - return "", fmt.Errorf("Unable to parse ID from invalid ARN: %q", arn) + return "", "", fmt.Errorf("error parsing ARN (%s): %s", inputARN, err) } - return arn.AccountID, nil + return arn.AccountID, arn.Partition, nil } // This function is responsible for reading credentials from the diff --git a/aws/auth_helpers_test.go b/aws/auth_helpers_test.go index de7d8162aa6..606e47de24e 100644 --- a/aws/auth_helpers_test.go +++ b/aws/auth_helpers_test.go @@ -10,304 +10,413 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws/awserr" - "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" - "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/iam" "github.com/aws/aws-sdk-go/service/sts" ) -func TestAWSGetAccountID_shouldBeValid_fromEC2Role(t *testing.T) { - resetEnv := unsetEnv(t) - defer resetEnv() - // capture the test server's close method, to call after the test returns - awsTs := awsMetadataApiMock(append(securityCredentialsEndpoints, instanceIdEndpoint, iamInfoEndpoint)) - defer awsTs() - - closeEmpty, emptySess, err := getMockedAwsApiSession("zero", []*awsMockEndpoint{}) - defer closeEmpty() - if err != nil { - t.Fatal(err) - } - - iamConn := iam.New(emptySess) - stsConn := sts.New(emptySess) - - id, err := GetAccountID(iamConn, stsConn, ec2rolecreds.ProviderName) - if err != nil { - t.Fatalf("Getting account ID from EC2 metadata API failed: %s", err) - } - - expectedAccountId := "123456789013" - if id != expectedAccountId { - t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id) - } -} - -func TestAWSGetAccountID_shouldBeValid_EC2RoleHasPriority(t *testing.T) { - resetEnv := unsetEnv(t) - defer resetEnv() - // capture the test server's close method, to call after the test returns - awsTs := awsMetadataApiMock(append(securityCredentialsEndpoints, instanceIdEndpoint, iamInfoEndpoint)) - defer awsTs() - - iamEndpoints := []*awsMockEndpoint{ +func TestGetAccountIDAndPartition(t *testing.T) { + var 
testCases = []struct { + Description string + AuthProviderName string + EC2MetadataEndpoints []*endpoint + IAMEndpoints []*awsMockEndpoint + STSEndpoints []*awsMockEndpoint + ErrCount int + ExpectedAccountID string + ExpectedPartition string + }{ { - Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &awsMockResponse{200, iamResponse_GetUser_valid, "text/xml"}, + Description: "EC2 Metadata over iam:GetUser when using EC2 Instance Profile", + AuthProviderName: ec2rolecreds.ProviderName, + EC2MetadataEndpoints: append(ec2metadata_securityCredentialsEndpoints, ec2metadata_instanceIdEndpoint, ec2metadata_iamInfoEndpoint), + IAMEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_GetUser_valid, "text/xml"}, + }, + }, + ExpectedAccountID: ec2metadata_iamInfoEndpoint_expectedAccountID, + ExpectedPartition: ec2metadata_iamInfoEndpoint_expectedPartition, }, - } - closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) - defer closeIam() - if err != nil { - t.Fatal(err) - } - iamConn := iam.New(iamSess) - closeSts, stsSess, err := getMockedAwsApiSession("STS", []*awsMockEndpoint{}) - defer closeSts() - if err != nil { - t.Fatal(err) - } - stsConn := sts.New(stsSess) - - id, err := GetAccountID(iamConn, stsConn, ec2rolecreds.ProviderName) - if err != nil { - t.Fatalf("Getting account ID from EC2 metadata API failed: %s", err) - } - - expectedAccountId := "123456789013" - if id != expectedAccountId { - t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id) - } -} - -func TestAWSGetAccountID_shouldBeValid_fromIamUser(t *testing.T) { - iamEndpoints := []*awsMockEndpoint{ { - Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &awsMockResponse{200, iamResponse_GetUser_valid, "text/xml"}, + Description: "Mimic the metadata service mocked by Hologram (https://github.com/AdRoll/hologram)", + AuthProviderName: ec2rolecreds.ProviderName, + EC2MetadataEndpoints: ec2metadata_securityCredentialsEndpoints, + IAMEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, + }, + }, + STSEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, + Response: &awsMockResponse{200, stsResponse_GetCallerIdentity_valid, "text/xml"}, + }, + }, + ExpectedAccountID: stsResponse_GetCallerIdentity_valid_expectedAccountID, + ExpectedPartition: stsResponse_GetCallerIdentity_valid_expectedPartition, }, - } - - closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) - defer closeIam() - if err != nil { - t.Fatal(err) - } - closeSts, stsSess, err := getMockedAwsApiSession("STS", []*awsMockEndpoint{}) - defer closeSts() - if err != nil { - t.Fatal(err) - } - - iamConn := iam.New(iamSess) - stsConn := sts.New(stsSess) - - id, err := GetAccountID(iamConn, stsConn, "") - if err != nil { - t.Fatalf("Getting account ID via GetUser failed: %s", err) - } - - expectedAccountId := "123456789012" - if id != expectedAccountId { - t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id) - } -} - -func TestAWSGetAccountID_shouldBeValid_fromGetCallerIdentity(t *testing.T) { - iamEndpoints := []*awsMockEndpoint{ { - Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: 
&awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, + Description: "iam:ListRoles if iam:GetUser AccessDenied and sts:GetCallerIdentity fails", + IAMEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, + }, + { + Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_ListRoles_valid, "text/xml"}, + }, + }, + STSEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, + Response: &awsMockResponse{403, stsResponse_GetCallerIdentity_unauthorized, "text/xml"}, + }, + }, + ExpectedAccountID: iamResponse_ListRoles_valid_expectedAccountID, + ExpectedPartition: iamResponse_ListRoles_valid_expectedPartition, }, - } - closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) - defer closeIam() - if err != nil { - t.Fatal(err) - } - - stsEndpoints := []*awsMockEndpoint{ { - Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, - Response: &awsMockResponse{200, stsResponse_GetCallerIdentity_valid, "text/xml"}, + Description: "iam:ListRoles if iam:GetUser ValidationError and sts:GetCallerIdentity fails", + IAMEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{400, iamResponse_GetUser_federatedFailure, "text/xml"}, + }, + { + Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_ListRoles_valid, "text/xml"}, + }, + }, + STSEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, + Response: &awsMockResponse{403, stsResponse_GetCallerIdentity_unauthorized, "text/xml"}, + }, + }, + ExpectedAccountID: iamResponse_ListRoles_valid_expectedAccountID, + ExpectedPartition: iamResponse_ListRoles_valid_expectedPartition, + }, + { + Description: "Error when all endpoints fail", + IAMEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{400, iamResponse_GetUser_federatedFailure, "text/xml"}, + }, + { + Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_ListRoles_unauthorized, "text/xml"}, + }, + }, + STSEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, + Response: &awsMockResponse{403, stsResponse_GetCallerIdentity_unauthorized, "text/xml"}, + }, + }, + ErrCount: 1, }, - } - closeSts, stsSess, err := getMockedAwsApiSession("STS", stsEndpoints) - defer closeSts() - if err != nil { - t.Fatal(err) } - testGetAccountID(t, iamSess, stsSess, credentials.SharedCredsProviderName) -} + for _, testCase := range testCases { + t.Run(testCase.Description, func(t *testing.T) { + resetEnv := unsetEnv(t) + defer resetEnv() + // capture the test server's close method, to call after the test returns + awsTs := awsMetadataApiMock(testCase.EC2MetadataEndpoints) + defer awsTs() -func TestAWSGetAccountID_shouldBeValid_EC2RoleFallsBackToCallerIdentity(t *testing.T) { - // This mimics the metadata service mocked by Hologram (https://github.com/AdRoll/hologram) - resetEnv := unsetEnv(t) - defer resetEnv() + closeIam, 
iamSess, err := getMockedAwsApiSession("IAM", testCase.IAMEndpoints) + defer closeIam() + if err != nil { + t.Fatal(err) + } - awsTs := awsMetadataApiMock(securityCredentialsEndpoints) - defer awsTs() + closeSts, stsSess, err := getMockedAwsApiSession("STS", testCase.STSEndpoints) + defer closeSts() + if err != nil { + t.Fatal(err) + } - iamEndpoints := []*awsMockEndpoint{ - { - Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, - }, - } - closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) - defer closeIam() - if err != nil { - t.Fatal(err) - } + iamConn := iam.New(iamSess) + stsConn := sts.New(stsSess) - stsEndpoints := []*awsMockEndpoint{ - { - Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, - Response: &awsMockResponse{200, stsResponse_GetCallerIdentity_valid, "text/xml"}, - }, - } - closeSts, stsSess, err := getMockedAwsApiSession("STS", stsEndpoints) - defer closeSts() - if err != nil { - t.Fatal(err) + accountID, partition, err := GetAccountIDAndPartition(iamConn, stsConn, testCase.AuthProviderName) + if err != nil && testCase.ErrCount == 0 { + t.Fatalf("Expected no error, received error: %s", err) + } + if err == nil && testCase.ErrCount > 0 { + t.Fatalf("Expected %d error(s), received none", testCase.ErrCount) + } + if accountID != testCase.ExpectedAccountID { + t.Fatalf("Parsed account ID doesn't match with expected (%q != %q)", accountID, testCase.ExpectedAccountID) + } + if partition != testCase.ExpectedPartition { + t.Fatalf("Parsed partition doesn't match with expected (%q != %q)", partition, testCase.ExpectedPartition) + } + }) } +} + +func TestGetAccountIDAndPartitionFromEC2Metadata(t *testing.T) { + t.Run("EC2 metadata success", func(t *testing.T) { + resetEnv := unsetEnv(t) + defer resetEnv() + // capture the test server's close method, to call after the test returns + awsTs := awsMetadataApiMock(append(ec2metadata_securityCredentialsEndpoints, ec2metadata_instanceIdEndpoint, ec2metadata_iamInfoEndpoint)) + defer awsTs() + + id, partition, err := GetAccountIDAndPartitionFromEC2Metadata() + if err != nil { + t.Fatalf("Getting account ID from EC2 metadata API failed: %s", err) + } - testGetAccountID(t, iamSess, stsSess, ec2rolecreds.ProviderName) + if id != ec2metadata_iamInfoEndpoint_expectedAccountID { + t.Fatalf("Expected account ID: %s, given: %s", ec2metadata_iamInfoEndpoint_expectedAccountID, id) + } + if partition != ec2metadata_iamInfoEndpoint_expectedPartition { + t.Fatalf("Expected partition: %s, given: %s", ec2metadata_iamInfoEndpoint_expectedPartition, partition) + } + }) } -func TestAWSGetAccountID_shouldBeValid_fromIamListRoles(t *testing.T) { - iamEndpoints := []*awsMockEndpoint{ +func TestGetAccountIDAndPartitionFromIAMGetUser(t *testing.T) { + var testCases = []struct { + Description string + MockEndpoints []*awsMockEndpoint + ErrCount int + ExpectedAccountID string + ExpectedPartition string + }{ { - Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, + Description: "Ignore iam:GetUser failure with federated user", + MockEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{400, iamResponse_GetUser_federatedFailure, "text/xml"}, + }, + }, + ErrCount: 0, }, { - Request: &awsMockRequest{"POST", "/", 
"Action=ListRoles&MaxItems=1&Version=2010-05-08"}, - Response: &awsMockResponse{200, iamResponse_ListRoles_valid, "text/xml"}, + Description: "Ignore iam:GetUser failure with unauthorized user", + MockEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, + }, + }, + ErrCount: 0, }, - } - closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) - defer closeIam() - if err != nil { - t.Fatal(err) - } - - stsEndpoints := []*awsMockEndpoint{ { - Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, - Response: &awsMockResponse{403, stsResponse_GetCallerIdentity_unauthorized, "text/xml"}, + Description: "iam:GetUser success", + MockEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_GetUser_valid, "text/xml"}, + }, + }, + ExpectedAccountID: iamResponse_GetUser_valid_expectedAccountID, + ExpectedPartition: iamResponse_GetUser_valid_expectedPartition, }, } - closeSts, stsSess, err := getMockedAwsApiSession("STS", stsEndpoints) - defer closeSts() - if err != nil { - t.Fatal(err) - } - iamConn := iam.New(iamSess) - stsConn := sts.New(stsSess) + for _, testCase := range testCases { + t.Run(testCase.Description, func(t *testing.T) { + closeIam, iamSess, err := getMockedAwsApiSession("IAM", testCase.MockEndpoints) + defer closeIam() + if err != nil { + t.Fatal(err) + } - id, err := GetAccountID(iamConn, stsConn, "") - if err != nil { - t.Fatalf("Getting account ID via ListRoles failed: %s", err) - } + iamConn := iam.New(iamSess) - expectedAccountId := "123456789012" - if id != expectedAccountId { - t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id) + accountID, partition, err := GetAccountIDAndPartitionFromIAMGetUser(iamConn) + if err != nil && testCase.ErrCount == 0 { + t.Fatalf("Expected no error, received error: %s", err) + } + if err == nil && testCase.ErrCount > 0 { + t.Fatalf("Expected %d error(s), received none", testCase.ErrCount) + } + if accountID != testCase.ExpectedAccountID { + t.Fatalf("Parsed account ID doesn't match with expected (%q != %q)", accountID, testCase.ExpectedAccountID) + } + if partition != testCase.ExpectedPartition { + t.Fatalf("Parsed partition doesn't match with expected (%q != %q)", partition, testCase.ExpectedPartition) + } + }) } } -func TestAWSGetAccountID_shouldBeValid_federatedRole(t *testing.T) { - iamEndpoints := []*awsMockEndpoint{ +func TestGetAccountIDAndPartitionFromIAMListRoles(t *testing.T) { + var testCases = []struct { + Description string + MockEndpoints []*awsMockEndpoint + ErrCount int + ExpectedAccountID string + ExpectedPartition string + }{ { - Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &awsMockResponse{400, iamResponse_GetUser_federatedFailure, "text/xml"}, + Description: "iam:ListRoles unauthorized", + MockEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, + Response: &awsMockResponse{403, iamResponse_ListRoles_unauthorized, "text/xml"}, + }, + }, + ErrCount: 1, }, { - Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, - Response: &awsMockResponse{200, iamResponse_ListRoles_valid, "text/xml"}, + Description: "iam:ListRoles success", + MockEndpoints: []*awsMockEndpoint{ + { + Request: 
&awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, + Response: &awsMockResponse{200, iamResponse_ListRoles_valid, "text/xml"}, + }, + }, + ExpectedAccountID: iamResponse_ListRoles_valid_expectedAccountID, + ExpectedPartition: iamResponse_ListRoles_valid_expectedPartition, }, } - closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) - defer closeIam() - if err != nil { - t.Fatal(err) - } - - closeSts, stsSess, err := getMockedAwsApiSession("STS", []*awsMockEndpoint{}) - defer closeSts() - if err != nil { - t.Fatal(err) - } - iamConn := iam.New(iamSess) - stsConn := sts.New(stsSess) + for _, testCase := range testCases { + t.Run(testCase.Description, func(t *testing.T) { + closeIam, iamSess, err := getMockedAwsApiSession("IAM", testCase.MockEndpoints) + defer closeIam() + if err != nil { + t.Fatal(err) + } - id, err := GetAccountID(iamConn, stsConn, "") - if err != nil { - t.Fatalf("Getting account ID via ListRoles failed: %s", err) - } + iamConn := iam.New(iamSess) - expectedAccountId := "123456789012" - if id != expectedAccountId { - t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id) + accountID, partition, err := GetAccountIDAndPartitionFromIAMListRoles(iamConn) + if err != nil && testCase.ErrCount == 0 { + t.Fatalf("Expected no error, received error: %s", err) + } + if err == nil && testCase.ErrCount > 0 { + t.Fatalf("Expected %d error(s), received none", testCase.ErrCount) + } + if accountID != testCase.ExpectedAccountID { + t.Fatalf("Parsed account ID doesn't match with expected (%q != %q)", accountID, testCase.ExpectedAccountID) + } + if partition != testCase.ExpectedPartition { + t.Fatalf("Parsed partition doesn't match with expected (%q != %q)", partition, testCase.ExpectedPartition) + } + }) } } -func TestAWSGetAccountID_shouldError_unauthorizedFromIam(t *testing.T) { - iamEndpoints := []*awsMockEndpoint{ +func TestGetAccountIDAndPartitionFromSTSGetCallerIdentity(t *testing.T) { + var testCases = []struct { + Description string + MockEndpoints []*awsMockEndpoint + ErrCount int + ExpectedAccountID string + ExpectedPartition string + }{ { - Request: &awsMockRequest{"POST", "/", "Action=GetUser&Version=2010-05-08"}, - Response: &awsMockResponse{403, iamResponse_GetUser_unauthorized, "text/xml"}, + Description: "sts:GetCallerIdentity unauthorized", + MockEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, + Response: &awsMockResponse{403, stsResponse_GetCallerIdentity_unauthorized, "text/xml"}, + }, + }, + ErrCount: 1, }, { - Request: &awsMockRequest{"POST", "/", "Action=ListRoles&MaxItems=1&Version=2010-05-08"}, - Response: &awsMockResponse{403, iamResponse_ListRoles_unauthorized, "text/xml"}, + Description: "sts:GetCallerIdentity success", + MockEndpoints: []*awsMockEndpoint{ + { + Request: &awsMockRequest{"POST", "/", "Action=GetCallerIdentity&Version=2011-06-15"}, + Response: &awsMockResponse{200, stsResponse_GetCallerIdentity_valid, "text/xml"}, + }, + }, + ExpectedAccountID: stsResponse_GetCallerIdentity_valid_expectedAccountID, + ExpectedPartition: stsResponse_GetCallerIdentity_valid_expectedPartition, }, } - closeIam, iamSess, err := getMockedAwsApiSession("IAM", iamEndpoints) - defer closeIam() - if err != nil { - t.Fatal(err) - } - - closeSts, stsSess, err := getMockedAwsApiSession("STS", []*awsMockEndpoint{}) - defer closeSts() - if err != nil { - t.Fatal(err) - } - iamConn := iam.New(iamSess) - stsConn := sts.New(stsSess) + for _, testCase 
:= range testCases { + t.Run(testCase.Description, func(t *testing.T) { + closeSts, stsSess, err := getMockedAwsApiSession("STS", testCase.MockEndpoints) + defer closeSts() + if err != nil { + t.Fatal(err) + } - id, err := GetAccountID(iamConn, stsConn, "") - if err == nil { - t.Fatal("Expected error when getting account ID") - } + stsConn := sts.New(stsSess) - if id != "" { - t.Fatalf("Expected no account ID, given: %s", id) + accountID, partition, err := GetAccountIDAndPartitionFromSTSGetCallerIdentity(stsConn) + if err != nil && testCase.ErrCount == 0 { + t.Fatalf("Expected no error, received error: %s", err) + } + if err == nil && testCase.ErrCount > 0 { + t.Fatalf("Expected %d error(s), received none", testCase.ErrCount) + } + if accountID != testCase.ExpectedAccountID { + t.Fatalf("Parsed account ID doesn't match with expected (%q != %q)", accountID, testCase.ExpectedAccountID) + } + if partition != testCase.ExpectedPartition { + t.Fatalf("Parsed partition doesn't match with expected (%q != %q)", partition, testCase.ExpectedPartition) + } + }) } } -func TestAWSParseAccountIDFromArn(t *testing.T) { - validArn := "arn:aws:iam::101636750127:instance-profile/aws-elasticbeanstalk-ec2-role" - expectedId := "101636750127" - id, err := parseAccountIDFromArn(validArn) - if err != nil { - t.Fatalf("Expected no error when parsing valid ARN: %s", err) - } - if id != expectedId { - t.Fatalf("Parsed id doesn't match with expected (%q != %q)", id, expectedId) +func TestAWSParseAccountIDAndPartitionFromARN(t *testing.T) { + var testCases = []struct { + InputARN string + ErrCount int + ExpectedAccountID string + ExpectedPartition string + }{ + { + InputARN: "invalid-arn", + ErrCount: 1, + }, + { + InputARN: "arn:aws:iam::123456789012:instance-profile/name", + ExpectedAccountID: "123456789012", + ExpectedPartition: "aws", + }, + { + InputARN: "arn:aws:iam::123456789012:user/name", + ExpectedAccountID: "123456789012", + ExpectedPartition: "aws", + }, + { + InputARN: "arn:aws:sts::123456789012:assumed-role/name", + ExpectedAccountID: "123456789012", + ExpectedPartition: "aws", + }, + { + InputARN: "arn:aws-us-gov:sts::123456789012:assumed-role/name", + ExpectedAccountID: "123456789012", + ExpectedPartition: "aws-us-gov", + }, } - invalidArn := "blablah" - id, err = parseAccountIDFromArn(invalidArn) - if err == nil { - t.Fatalf("Expected error when parsing invalid ARN (%q)", invalidArn) + for _, testCase := range testCases { + t.Run(testCase.InputARN, func(t *testing.T) { + accountID, partition, err := parseAccountIDAndPartitionFromARN(testCase.InputARN) + if err != nil && testCase.ErrCount == 0 { + t.Fatalf("Expected no error when parsing ARN, received error: %s", err) + } + if err == nil && testCase.ErrCount > 0 { + t.Fatalf("Expected %d error(s) when parsing ARN, received none", testCase.ErrCount) + } + if accountID != testCase.ExpectedAccountID { + t.Fatalf("Parsed account ID doesn't match with expected (%q != %q)", accountID, testCase.ExpectedAccountID) + } + if partition != testCase.ExpectedPartition { + t.Fatalf("Parsed partition doesn't match with expected (%q != %q)", partition, testCase.ExpectedPartition) + } + }) } } @@ -388,7 +497,7 @@ func TestAWSGetCredentials_shouldIAM(t *testing.T) { defer resetEnv() // capture the test server's close method, to call after the test returns - ts := awsMetadataApiMock(append(securityCredentialsEndpoints, instanceIdEndpoint, iamInfoEndpoint)) + ts := awsMetadataApiMock(append(ec2metadata_securityCredentialsEndpoints, ec2metadata_instanceIdEndpoint, 
ec2metadata_iamInfoEndpoint)) defer ts() // An empty config, no key supplied @@ -424,7 +533,7 @@ func TestAWSGetCredentials_shouldIgnoreIAM(t *testing.T) { resetEnv := unsetEnv(t) defer resetEnv() // capture the test server's close method, to call after the test returns - ts := awsMetadataApiMock(append(securityCredentialsEndpoints, instanceIdEndpoint, iamInfoEndpoint)) + ts := awsMetadataApiMock(append(ec2metadata_securityCredentialsEndpoints, ec2metadata_instanceIdEndpoint, ec2metadata_iamInfoEndpoint)) defer ts() simple := []struct { Key, Secret, Token string @@ -531,7 +640,7 @@ func TestAWSGetCredentials_shouldCatchEC2RoleProvider(t *testing.T) { resetEnv := unsetEnv(t) defer resetEnv() // capture the test server's close method, to call after the test returns - ts := awsMetadataApiMock(append(securityCredentialsEndpoints, instanceIdEndpoint, iamInfoEndpoint)) + ts := awsMetadataApiMock(append(ec2metadata_securityCredentialsEndpoints, ec2metadata_instanceIdEndpoint, ec2metadata_iamInfoEndpoint)) defer ts() creds, err := GetCredentials(&Config{}) @@ -638,22 +747,6 @@ func TestAWSGetCredentials_shouldBeENV(t *testing.T) { } } -func testGetAccountID(t *testing.T, iamSess, stsSess *session.Session, credProviderName string) { - - iamConn := iam.New(iamSess) - stsConn := sts.New(stsSess) - - id, err := GetAccountID(iamConn, stsConn, credProviderName) - if err != nil { - t.Fatalf("Getting account ID failed: %s", err) - } - - expectedAccountId := "123456789012" - if id != expectedAccountId { - t.Fatalf("Expected account ID: %s, given: %s", expectedAccountId, id) - } -} - // unsetEnv unsets environment variables for testing a "clean slate" with no // credentials in the environment func unsetEnv(t *testing.T) func() { @@ -791,34 +884,37 @@ type endpoint struct { Body string `json:"body"` } -var instanceIdEndpoint = &endpoint{ +var ec2metadata_instanceIdEndpoint = &endpoint{ Uri: "/latest/meta-data/instance-id", Body: "mock-instance-id", } -var securityCredentialsEndpoints = []*endpoint{ - &endpoint{ - Uri: "/latest/meta-data/iam/security-credentials", +var ec2metadata_securityCredentialsEndpoints = []*endpoint{ + { + Uri: "/latest/meta-data/iam/security-credentials/", Body: "test_role", }, - &endpoint{ + { Uri: "/latest/meta-data/iam/security-credentials/test_role", Body: "{\"Code\":\"Success\",\"LastUpdated\":\"2015-12-11T17:17:25Z\",\"Type\":\"AWS-HMAC\",\"AccessKeyId\":\"somekey\",\"SecretAccessKey\":\"somesecret\",\"Token\":\"sometoken\"}", }, } -var iamInfoEndpoint = &endpoint{ +var ec2metadata_iamInfoEndpoint = &endpoint{ Uri: "/latest/meta-data/iam/info", - Body: "{\"Code\": \"Success\",\"LastUpdated\": \"2016-03-17T12:27:32Z\",\"InstanceProfileArn\": \"arn:aws:iam::123456789013:instance-profile/my-instance-profile\",\"InstanceProfileId\": \"AIPAABCDEFGHIJKLMN123\"}", + Body: "{\"Code\": \"Success\",\"LastUpdated\": \"2016-03-17T12:27:32Z\",\"InstanceProfileArn\": \"arn:aws:iam::000000000000:instance-profile/my-instance-profile\",\"InstanceProfileId\": \"AIPAABCDEFGHIJKLMN123\"}", } +const ec2metadata_iamInfoEndpoint_expectedAccountID = `000000000000` +const ec2metadata_iamInfoEndpoint_expectedPartition = `aws` + const iamResponse_GetUser_valid = ` AIDACKCEVSQ6C2EXAMPLE /division_abc/subdivision_xyz/ Bob - arn:aws:iam::123456789012:user/division_abc/subdivision_xyz/Bob + arn:aws:iam::111111111111:user/division_abc/subdivision_xyz/Bob 2013-10-02T17:01:44Z 2014-10-10T14:37:51Z @@ -828,6 +924,9 @@ const iamResponse_GetUser_valid = ` Sender @@ -839,15 +938,18 @@ const 
iamResponse_GetUser_unauthorized = ` - arn:aws:iam::123456789012:user/Alice + arn:aws:iam::222222222222:user/Alice AKIAI44QH8DHBEXAMPLE - 123456789012 + 222222222222 01234567-89ab-cdef-0123-456789abcdef ` +const stsResponse_GetCallerIdentity_valid_expectedAccountID = `222222222222` +const stsResponse_GetCallerIdentity_valid_expectedPartition = `aws` + const stsResponse_GetCallerIdentity_unauthorized = ` Sender @@ -876,7 +978,7 @@ const iamResponse_ListRoles_valid = ` ` +const iamResponse_ListRoles_valid_expectedAccountID = `444444444444` +const iamResponse_ListRoles_valid_expectedPartition = `aws` + const iamResponse_ListRoles_unauthorized = ` Sender diff --git a/aws/autoscaling_tags.go b/aws/autoscaling_tags.go index 5c091150536..d94eb4a9464 100644 --- a/aws/autoscaling_tags.go +++ b/aws/autoscaling_tags.go @@ -20,17 +20,17 @@ func autoscalingTagSchema() *schema.Schema { Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "key": &schema.Schema{ + "key": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Required: true, }, - "propagate_at_launch": &schema.Schema{ + "propagate_at_launch": { Type: schema.TypeBool, Required: true, }, @@ -58,8 +58,8 @@ func setAutoscalingTags(conn *autoscaling.AutoScaling, d *schema.ResourceData) e if d.HasChange("tag") || d.HasChange("tags") { oraw, nraw := d.GetChange("tag") - o := setToMapByKey(oraw.(*schema.Set), "key") - n := setToMapByKey(nraw.(*schema.Set), "key") + o := setToMapByKey(oraw.(*schema.Set)) + n := setToMapByKey(nraw.(*schema.Set)) old, err := autoscalingTagsFromMap(o, resourceID) if err != nil { @@ -248,36 +248,6 @@ func autoscalingTagFromMap(attr map[string]interface{}, resourceID string) (*aut return t, nil } -// autoscalingTagsToMap turns the list of tags into a map. -func autoscalingTagsToMap(ts []*autoscaling.Tag) map[string]interface{} { - tags := make(map[string]interface{}) - for _, t := range ts { - tag := map[string]interface{}{ - "key": *t.Key, - "value": *t.Value, - "propagate_at_launch": *t.PropagateAtLaunch, - } - tags[*t.Key] = tag - } - - return tags -} - -// autoscalingTagDescriptionsToMap turns the list of tags into a map. -func autoscalingTagDescriptionsToMap(ts *[]*autoscaling.TagDescription) map[string]map[string]interface{} { - tags := make(map[string]map[string]interface{}) - for _, t := range *ts { - tag := map[string]interface{}{ - "key": *t.Key, - "value": *t.Value, - "propagate_at_launch": *t.PropagateAtLaunch, - } - tags[*t.Key] = tag - } - - return tags -} - // autoscalingTagDescriptionsToSlice turns the list of tags into a slice. 
func autoscalingTagDescriptionsToSlice(ts []*autoscaling.TagDescription) []map[string]interface{} { tags := make([]map[string]interface{}, 0, len(ts)) @@ -292,11 +262,11 @@ func autoscalingTagDescriptionsToSlice(ts []*autoscaling.TagDescription) []map[s return tags } -func setToMapByKey(s *schema.Set, key string) map[string]interface{} { +func setToMapByKey(s *schema.Set) map[string]interface{} { result := make(map[string]interface{}) for _, rawData := range s.List() { data := rawData.(map[string]interface{}) - result[data[key].(string)] = data + result[data["key"].(string)] = data } return result diff --git a/aws/autoscaling_tags_test.go b/aws/autoscaling_tags_test.go index 0107764d149..81024f6c504 100644 --- a/aws/autoscaling_tags_test.go +++ b/aws/autoscaling_tags_test.go @@ -156,3 +156,33 @@ func TestIgnoringTagsAutoscaling(t *testing.T) { } } } + +// autoscalingTagsToMap turns the list of tags into a map. +func autoscalingTagsToMap(ts []*autoscaling.Tag) map[string]interface{} { + tags := make(map[string]interface{}) + for _, t := range ts { + tag := map[string]interface{}{ + "key": *t.Key, + "value": *t.Value, + "propagate_at_launch": *t.PropagateAtLaunch, + } + tags[*t.Key] = tag + } + + return tags +} + +// autoscalingTagDescriptionsToMap turns the list of tags into a map. +func autoscalingTagDescriptionsToMap(ts *[]*autoscaling.TagDescription) map[string]map[string]interface{} { + tags := make(map[string]map[string]interface{}) + for _, t := range *ts { + tag := map[string]interface{}{ + "key": *t.Key, + "value": *t.Value, + "propagate_at_launch": *t.PropagateAtLaunch, + } + tags[*t.Key] = tag + } + + return tags +} diff --git a/aws/awserr.go b/aws/awserr.go index fb12edd6a0a..7c944b923e0 100644 --- a/aws/awserr.go +++ b/aws/awserr.go @@ -8,6 +8,10 @@ import ( "github.com/hashicorp/terraform/helper/resource" ) +// Returns true if the error matches all these conditions: +// * err is of type awserr.Error +// * Error.Code() matches code +// * Error.Message() contains message func isAWSErr(err error, code string, message string) bool { if err, ok := err.(awserr.Error); ok { return err.Code() == code && strings.Contains(err.Message(), message) @@ -15,6 +19,19 @@ func isAWSErr(err error, code string, message string) bool { return false } +// IsAWSErrExtended returns true if the error matches all conditions +// * err is of type awserr.Error +// * Error.Code() matches code +// * Error.Message() contains message +// * Error.OrigErr() contains origErrMessage +// Note: This function will be moved out of the aws package in the future. +func IsAWSErrExtended(err error, code string, message string, origErrMessage string) bool { + if !isAWSErr(err, code, message) { + return false + } + return strings.Contains(err.(awserr.Error).OrigErr().Error(), origErrMessage) +} + func retryOnAwsCode(code string, f func() (interface{}, error)) (interface{}, error) { var resp interface{} err := resource.Retry(1*time.Minute, func() *resource.RetryError { @@ -32,7 +49,9 @@ func retryOnAwsCode(code string, f func() (interface{}, error)) (interface{}, er return resp, err } -func retryOnAwsCodes(codes []string, f func() (interface{}, error)) (interface{}, error) { +// RetryOnAwsCodes retries AWS error codes for one minute +// Note: This function will be moved out of the aws package in the future. 
+func RetryOnAwsCodes(codes []string, f func() (interface{}, error)) (interface{}, error) { var resp interface{} err := resource.Retry(1*time.Minute, func() *resource.RetryError { var err error diff --git a/aws/cloudfront_distribution_configuration_structure.go b/aws/cloudfront_distribution_configuration_structure.go index 0a4baf88a5e..f2214189531 100644 --- a/aws/cloudfront_distribution_configuration_structure.go +++ b/aws/cloudfront_distribution_configuration_structure.go @@ -42,7 +42,6 @@ func (p StringPtrSlice) Swap(i, j int) { p[i], p[j] = p[j], p[i] } // Used by the aws_cloudfront_distribution Create and Update functions. func expandDistributionConfig(d *schema.ResourceData) *cloudfront.DistributionConfig { distributionConfig := &cloudfront.DistributionConfig{ - CacheBehaviors: expandCacheBehaviors(d.Get("cache_behavior").(*schema.Set)), CustomErrorResponses: expandCustomErrorResponses(d.Get("custom_error_response").(*schema.Set)), DefaultCacheBehavior: expandDefaultCacheBehavior(d.Get("default_cache_behavior").(*schema.Set).List()[0].(map[string]interface{})), Enabled: aws.Bool(d.Get("enabled").(bool)), @@ -51,8 +50,13 @@ func expandDistributionConfig(d *schema.ResourceData) *cloudfront.DistributionCo Origins: expandOrigins(d.Get("origin").(*schema.Set)), PriceClass: aws.String(d.Get("price_class").(string)), } + if v, ok := d.GetOk("ordered_cache_behavior"); ok { + distributionConfig.CacheBehaviors = expandCacheBehaviors(v.([]interface{})) + } else { + distributionConfig.CacheBehaviors = expandCacheBehaviorsDeprecated(d.Get("cache_behavior").(*schema.Set)) + } // This sets CallerReference if it's still pending computation (ie: new resource) - if v, ok := d.GetOk("caller_reference"); ok == false { + if v, ok := d.GetOk("caller_reference"); !ok { distributionConfig.CallerReference = aws.String(time.Now().Format(time.RFC3339Nano)) } else { distributionConfig.CallerReference = aws.String(v.(string)) @@ -140,7 +144,12 @@ func flattenDistributionConfig(d *schema.ResourceData, distributionConfig *cloud } } if distributionConfig.CacheBehaviors != nil { - err = d.Set("cache_behavior", flattenCacheBehaviors(distributionConfig.CacheBehaviors)) + if _, ok := d.GetOk("cache_behavior"); ok { + err = d.Set("cache_behavior", flattenCacheBehaviorsDeprecated(distributionConfig.CacheBehaviors)) + } else { + err = d.Set("ordered_cache_behavior", flattenCacheBehaviors(distributionConfig.CacheBehaviors)) + } + if err != nil { return err } @@ -178,7 +187,7 @@ func flattenDistributionConfig(d *schema.ResourceData, distributionConfig *cloud } func expandDefaultCacheBehavior(m map[string]interface{}) *cloudfront.DefaultCacheBehavior { - cb := expandCacheBehavior(m) + cb := expandCacheBehaviorDeprecated(m) var dcb cloudfront.DefaultCacheBehavior simpleCopyStruct(cb, &dcb) @@ -186,11 +195,10 @@ func expandDefaultCacheBehavior(m map[string]interface{}) *cloudfront.DefaultCac } func flattenDefaultCacheBehavior(dcb *cloudfront.DefaultCacheBehavior) *schema.Set { - m := make(map[string]interface{}) var cb cloudfront.CacheBehavior simpleCopyStruct(dcb, &cb) - m = flattenCacheBehavior(&cb) + m := flattenCacheBehaviorDeprecated(&cb) return schema.NewSet(defaultCacheBehaviorHash, []interface{}{m}) } @@ -204,6 +212,9 @@ func defaultCacheBehaviorHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%s-", m["target_origin_id"].(string))) buf.WriteString(fmt.Sprintf("%d-", forwardedValuesHash(m["forwarded_values"].(*schema.Set).List()[0].(map[string]interface{})))) buf.WriteString(fmt.Sprintf("%d-", m["min_ttl"].(int))) 
+ if d, ok := m["field_level_encryption_id"]; ok && d.(string) != "" { + buf.WriteString(fmt.Sprintf("%s-", d.(string))) + } if d, ok := m["trusted_signers"]; ok { for _, e := range sortInterfaceSlice(d.([]interface{})) { buf.WriteString(fmt.Sprintf("%s-", e.(string))) @@ -243,11 +254,11 @@ func defaultCacheBehaviorHash(v interface{}) int { return hashcode.String(buf.String()) } -func expandCacheBehaviors(s *schema.Set) *cloudfront.CacheBehaviors { +func expandCacheBehaviorsDeprecated(s *schema.Set) *cloudfront.CacheBehaviors { var qty int64 var items []*cloudfront.CacheBehavior for _, v := range s.List() { - items = append(items, expandCacheBehavior(v.(map[string]interface{}))) + items = append(items, expandCacheBehaviorDeprecated(v.(map[string]interface{}))) qty++ } return &cloudfront.CacheBehaviors{ @@ -256,23 +267,83 @@ func expandCacheBehaviors(s *schema.Set) *cloudfront.CacheBehaviors { } } -func flattenCacheBehaviors(cbs *cloudfront.CacheBehaviors) *schema.Set { +func flattenCacheBehaviorsDeprecated(cbs *cloudfront.CacheBehaviors) *schema.Set { s := []interface{}{} for _, v := range cbs.Items { - s = append(s, flattenCacheBehavior(v)) + s = append(s, flattenCacheBehaviorDeprecated(v)) } return schema.NewSet(cacheBehaviorHash, s) } +func expandCacheBehaviors(lst []interface{}) *cloudfront.CacheBehaviors { + var qty int64 + var items []*cloudfront.CacheBehavior + for _, v := range lst { + items = append(items, expandCacheBehavior(v.(map[string]interface{}))) + qty++ + } + return &cloudfront.CacheBehaviors{ + Quantity: aws.Int64(qty), + Items: items, + } +} + +func flattenCacheBehaviors(cbs *cloudfront.CacheBehaviors) []interface{} { + lst := []interface{}{} + for _, v := range cbs.Items { + lst = append(lst, flattenCacheBehavior(v)) + } + return lst +} + +// Deprecated. 
+func expandCacheBehaviorDeprecated(m map[string]interface{}) *cloudfront.CacheBehavior { + cb := &cloudfront.CacheBehavior{ + Compress: aws.Bool(m["compress"].(bool)), + FieldLevelEncryptionId: aws.String(m["field_level_encryption_id"].(string)), + ViewerProtocolPolicy: aws.String(m["viewer_protocol_policy"].(string)), + TargetOriginId: aws.String(m["target_origin_id"].(string)), + ForwardedValues: expandForwardedValues(m["forwarded_values"].(*schema.Set).List()[0].(map[string]interface{})), + DefaultTTL: aws.Int64(int64(m["default_ttl"].(int))), + MaxTTL: aws.Int64(int64(m["max_ttl"].(int))), + MinTTL: aws.Int64(int64(m["min_ttl"].(int))), + } + + if v, ok := m["trusted_signers"]; ok { + cb.TrustedSigners = expandTrustedSigners(v.([]interface{})) + } else { + cb.TrustedSigners = expandTrustedSigners([]interface{}{}) + } + + if v, ok := m["lambda_function_association"]; ok { + cb.LambdaFunctionAssociations = expandLambdaFunctionAssociations(v.(*schema.Set).List()) + } + + if v, ok := m["smooth_streaming"]; ok { + cb.SmoothStreaming = aws.Bool(v.(bool)) + } + if v, ok := m["allowed_methods"]; ok { + cb.AllowedMethods = expandAllowedMethodsDeprecated(v.([]interface{})) + } + if v, ok := m["cached_methods"]; ok { + cb.AllowedMethods.CachedMethods = expandCachedMethodsDeprecated(v.([]interface{})) + } + if v, ok := m["path_pattern"]; ok { + cb.PathPattern = aws.String(v.(string)) + } + return cb +} + func expandCacheBehavior(m map[string]interface{}) *cloudfront.CacheBehavior { cb := &cloudfront.CacheBehavior{ - Compress: aws.Bool(m["compress"].(bool)), - ViewerProtocolPolicy: aws.String(m["viewer_protocol_policy"].(string)), - TargetOriginId: aws.String(m["target_origin_id"].(string)), - ForwardedValues: expandForwardedValues(m["forwarded_values"].(*schema.Set).List()[0].(map[string]interface{})), - DefaultTTL: aws.Int64(int64(m["default_ttl"].(int))), - MaxTTL: aws.Int64(int64(m["max_ttl"].(int))), - MinTTL: aws.Int64(int64(m["min_ttl"].(int))), + Compress: aws.Bool(m["compress"].(bool)), + FieldLevelEncryptionId: aws.String(m["field_level_encryption_id"].(string)), + ViewerProtocolPolicy: aws.String(m["viewer_protocol_policy"].(string)), + TargetOriginId: aws.String(m["target_origin_id"].(string)), + ForwardedValues: expandForwardedValues(m["forwarded_values"].(*schema.Set).List()[0].(map[string]interface{})), + DefaultTTL: aws.Int64(int64(m["default_ttl"].(int))), + MaxTTL: aws.Int64(int64(m["max_ttl"].(int))), + MinTTL: aws.Int64(int64(m["min_ttl"].(int))), } if v, ok := m["trusted_signers"]; ok { @@ -289,10 +360,10 @@ func expandCacheBehavior(m map[string]interface{}) *cloudfront.CacheBehavior { cb.SmoothStreaming = aws.Bool(v.(bool)) } if v, ok := m["allowed_methods"]; ok { - cb.AllowedMethods = expandAllowedMethods(v.([]interface{})) + cb.AllowedMethods = expandAllowedMethods(v.(*schema.Set)) } if v, ok := m["cached_methods"]; ok { - cb.AllowedMethods.CachedMethods = expandCachedMethods(v.([]interface{})) + cb.AllowedMethods.CachedMethods = expandCachedMethods(v.(*schema.Set)) } if v, ok := m["path_pattern"]; ok { cb.PathPattern = aws.String(v.(string)) @@ -300,10 +371,48 @@ func expandCacheBehavior(m map[string]interface{}) *cloudfront.CacheBehavior { return cb } +func flattenCacheBehaviorDeprecated(cb *cloudfront.CacheBehavior) map[string]interface{} { + m := make(map[string]interface{}) + + m["compress"] = *cb.Compress + m["field_level_encryption_id"] = aws.StringValue(cb.FieldLevelEncryptionId) + m["viewer_protocol_policy"] = *cb.ViewerProtocolPolicy + m["target_origin_id"] = 
*cb.TargetOriginId + m["forwarded_values"] = schema.NewSet(forwardedValuesHash, []interface{}{flattenForwardedValues(cb.ForwardedValues)}) + m["min_ttl"] = int(*cb.MinTTL) + + if len(cb.TrustedSigners.Items) > 0 { + m["trusted_signers"] = flattenTrustedSigners(cb.TrustedSigners) + } + if len(cb.LambdaFunctionAssociations.Items) > 0 { + m["lambda_function_association"] = flattenLambdaFunctionAssociations(cb.LambdaFunctionAssociations) + } + if cb.MaxTTL != nil { + m["max_ttl"] = int(*cb.MaxTTL) + } + if cb.SmoothStreaming != nil { + m["smooth_streaming"] = *cb.SmoothStreaming + } + if cb.DefaultTTL != nil { + m["default_ttl"] = int(*cb.DefaultTTL) + } + if cb.AllowedMethods != nil { + m["allowed_methods"] = flattenAllowedMethodsDeprecated(cb.AllowedMethods) + } + if cb.AllowedMethods.CachedMethods != nil { + m["cached_methods"] = flattenCachedMethodsDeprecated(cb.AllowedMethods.CachedMethods) + } + if cb.PathPattern != nil { + m["path_pattern"] = *cb.PathPattern + } + return m +} + func flattenCacheBehavior(cb *cloudfront.CacheBehavior) map[string]interface{} { m := make(map[string]interface{}) m["compress"] = *cb.Compress + m["field_level_encryption_id"] = aws.StringValue(cb.FieldLevelEncryptionId) m["viewer_protocol_policy"] = *cb.ViewerProtocolPolicy m["target_origin_id"] = *cb.TargetOriginId m["forwarded_values"] = schema.NewSet(forwardedValuesHash, []interface{}{flattenForwardedValues(cb.ForwardedValues)}) @@ -346,6 +455,9 @@ func cacheBehaviorHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%s-", m["target_origin_id"].(string))) buf.WriteString(fmt.Sprintf("%d-", forwardedValuesHash(m["forwarded_values"].(*schema.Set).List()[0].(map[string]interface{})))) buf.WriteString(fmt.Sprintf("%d-", m["min_ttl"].(int))) + if d, ok := m["field_level_encryption_id"]; ok && d.(string) != "" { + buf.WriteString(fmt.Sprintf("%s-", d.(string))) + } if d, ok := m["trusted_signers"]; ok { for _, e := range sortInterfaceSlice(d.([]interface{})) { buf.WriteString(fmt.Sprintf("%s-", e.(string))) @@ -412,7 +524,8 @@ func lambdaFunctionAssociationHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) buf.WriteString(fmt.Sprintf("%s-", m["event_type"].(string))) - buf.WriteString(fmt.Sprintf("%s", m["lambda_arn"].(string))) + buf.WriteString(m["lambda_arn"].(string)) + buf.WriteString(fmt.Sprintf("%t", m["include_body"].(bool))) return hashcode.String(buf.String()) } @@ -441,6 +554,9 @@ func expandLambdaFunctionAssociation(lf map[string]interface{}) *cloudfront.Lamb if v, ok := lf["lambda_arn"]; ok { lfa.LambdaFunctionARN = aws.String(v.(string)) } + if v, ok := lf["include_body"]; ok { + lfa.IncludeBody = aws.Bool(v.(bool)) + } return &lfa } @@ -457,6 +573,7 @@ func flattenLambdaFunctionAssociation(lfa *cloudfront.LambdaFunctionAssociation) if lfa != nil { m["event_type"] = *lfa.EventType m["lambda_arn"] = *lfa.LambdaFunctionARN + m["include_body"] = *lfa.IncludeBody } return m } @@ -589,28 +706,56 @@ func flattenCookieNames(cn *cloudfront.CookieNames) []interface{} { return []interface{}{} } -func expandAllowedMethods(s []interface{}) *cloudfront.AllowedMethods { +func expandAllowedMethods(s *schema.Set) *cloudfront.AllowedMethods { + return &cloudfront.AllowedMethods{ + Quantity: aws.Int64(int64(s.Len())), + Items: expandStringList(s.List()), + } +} + +func flattenAllowedMethods(am *cloudfront.AllowedMethods) *schema.Set { + if am.Items != nil { + return schema.NewSet(schema.HashString, flattenStringList(am.Items)) + } + return nil +} + +func expandAllowedMethodsDeprecated(s 
[]interface{}) *cloudfront.AllowedMethods { return &cloudfront.AllowedMethods{ Quantity: aws.Int64(int64(len(s))), Items: expandStringList(s), } } -func flattenAllowedMethods(am *cloudfront.AllowedMethods) []interface{} { +func flattenAllowedMethodsDeprecated(am *cloudfront.AllowedMethods) []interface{} { if am.Items != nil { return flattenStringList(am.Items) } return []interface{}{} } -func expandCachedMethods(s []interface{}) *cloudfront.CachedMethods { +func expandCachedMethods(s *schema.Set) *cloudfront.CachedMethods { + return &cloudfront.CachedMethods{ + Quantity: aws.Int64(int64(s.Len())), + Items: expandStringList(s.List()), + } +} + +func flattenCachedMethods(cm *cloudfront.CachedMethods) *schema.Set { + if cm.Items != nil { + return schema.NewSet(schema.HashString, flattenStringList(cm.Items)) + } + return nil +} + +func expandCachedMethodsDeprecated(s []interface{}) *cloudfront.CachedMethods { return &cloudfront.CachedMethods{ Quantity: aws.Int64(int64(len(s))), Items: expandStringList(s), } } -func flattenCachedMethods(cm *cloudfront.CachedMethods) []interface{} { +func flattenCachedMethodsDeprecated(cm *cloudfront.CachedMethods) []interface{} { if cm.Items != nil { return flattenStringList(cm.Items) } @@ -1114,7 +1259,7 @@ func simpleCopyStruct(src, dst interface{}) { d := reflect.ValueOf(dst).Elem() for i := 0; i < s.NumField(); i++ { - if s.Field(i).CanSet() == true { + if s.Field(i).CanSet() { if s.Field(i).Interface() != nil { for j := 0; j < d.NumField(); j++ { if d.Type().Field(j).Name == s.Type().Field(i).Name { diff --git a/aws/cloudfront_distribution_configuration_structure_test.go b/aws/cloudfront_distribution_configuration_structure_test.go index a05300a6051..76e5144b7dd 100644 --- a/aws/cloudfront_distribution_configuration_structure_test.go +++ b/aws/cloudfront_distribution_configuration_structure_test.go @@ -23,6 +23,7 @@ func defaultCacheBehaviorConf() map[string]interface{} { "allowed_methods": allowedMethodsConf(), "cached_methods": cachedMethodsConf(), "compress": true, + "field_level_encryption_id": "", } } @@ -49,12 +50,14 @@ func trustedSignersConf() []interface{} { func lambdaFunctionAssociationsConf() *schema.Set { x := []interface{}{ map[string]interface{}{ - "event_type": "viewer-request", - "lambda_arn": "arn:aws:lambda:us-east-1:999999999:function1:alias", + "event_type": "viewer-request", + "lambda_arn": "arn:aws:lambda:us-east-1:999999999:function1:alias", + "include_body": true, }, map[string]interface{}{ - "event_type": "origin-response", - "lambda_arn": "arn:aws:lambda:us-east-1:999999999:function2:alias", + "event_type": "origin-response", + "lambda_arn": "arn:aws:lambda:us-east-1:999999999:function2:alias", + "include_body": true, }, } @@ -258,7 +261,7 @@ func TestCloudFrontStructure_expandDefaultCacheBehavior(t *testing.T) { if dcb == nil { t.Fatalf("ExpandDefaultCacheBehavior returned nil") } - if *dcb.Compress != true { + if !*dcb.Compress { t.Fatalf("Expected Compress to be true, got %v", *dcb.Compress) } if *dcb.ViewerProtocolPolicy != "allow-all" { @@ -267,19 +270,19 @@ func TestCloudFrontStructure_expandDefaultCacheBehavior(t *testing.T) { if *dcb.TargetOriginId != "myS3Origin" { t.Fatalf("Expected TargetOriginId to be allow-all, got %v", *dcb.TargetOriginId) } - if reflect.DeepEqual(dcb.ForwardedValues.Headers.Items, expandStringList(headersConf())) != true { + if !reflect.DeepEqual(dcb.ForwardedValues.Headers.Items, expandStringList(headersConf())) { t.Fatalf("Expected Items to be %v, got %v", headersConf(), 
dcb.ForwardedValues.Headers.Items) } if *dcb.MinTTL != 0 { t.Fatalf("Expected MinTTL to be 0, got %v", *dcb.MinTTL) } - if reflect.DeepEqual(dcb.TrustedSigners.Items, expandStringList(trustedSignersConf())) != true { + if !reflect.DeepEqual(dcb.TrustedSigners.Items, expandStringList(trustedSignersConf())) { t.Fatalf("Expected TrustedSigners.Items to be %v, got %v", trustedSignersConf(), dcb.TrustedSigners.Items) } if *dcb.MaxTTL != 31536000 { t.Fatalf("Expected MaxTTL to be 31536000, got %v", *dcb.MaxTTL) } - if *dcb.SmoothStreaming != false { + if *dcb.SmoothStreaming { t.Fatalf("Expected SmoothStreaming to be false, got %v", *dcb.SmoothStreaming) } if *dcb.DefaultTTL != 86400 { @@ -288,11 +291,11 @@ func TestCloudFrontStructure_expandDefaultCacheBehavior(t *testing.T) { if *dcb.LambdaFunctionAssociations.Quantity != 2 { t.Fatalf("Expected LambdaFunctionAssociations to be 2, got %v", *dcb.LambdaFunctionAssociations.Quantity) } - if reflect.DeepEqual(dcb.AllowedMethods.Items, expandStringList(allowedMethodsConf())) != true { - t.Fatalf("Expected TrustedSigners.Items to be %v, got %v", allowedMethodsConf(), dcb.AllowedMethods.Items) + if !reflect.DeepEqual(dcb.AllowedMethods.Items, expandStringList(allowedMethodsConf())) { + t.Fatalf("Expected AllowedMethods.Items to be %v, got %v", allowedMethodsConf(), dcb.AllowedMethods.Items) } - if reflect.DeepEqual(dcb.AllowedMethods.CachedMethods.Items, expandStringList(cachedMethodsConf())) != true { - t.Fatalf("Expected TrustedSigners.Items to be %v, got %v", cachedMethodsConf(), dcb.AllowedMethods.CachedMethods.Items) + if !reflect.DeepEqual(dcb.AllowedMethods.CachedMethods.Items, expandStringList(cachedMethodsConf())) { + t.Fatalf("Expected AllowedMethods.CachedMethods.Items to be %v, got %v", cachedMethodsConf(), dcb.AllowedMethods.CachedMethods.Items) } } @@ -309,8 +312,8 @@ func TestCloudFrontStructure_flattenDefaultCacheBehavior(t *testing.T) { func TestCloudFrontStructure_expandCacheBehavior(t *testing.T) { data := cacheBehaviorConf1() - cb := expandCacheBehavior(data) - if *cb.Compress != true { + cb := expandCacheBehaviorDeprecated(data) + if !*cb.Compress { t.Fatalf("Expected Compress to be true, got %v", *cb.Compress) } if *cb.ViewerProtocolPolicy != "allow-all" { @@ -319,19 +322,19 @@ func TestCloudFrontStructure_expandCacheBehavior(t *testing.T) { if *cb.TargetOriginId != "myS3Origin" { t.Fatalf("Expected TargetOriginId to be myS3Origin, got %v", *cb.TargetOriginId) } - if reflect.DeepEqual(cb.ForwardedValues.Headers.Items, expandStringList(headersConf())) != true { + if !reflect.DeepEqual(cb.ForwardedValues.Headers.Items, expandStringList(headersConf())) { t.Fatalf("Expected Items to be %v, got %v", headersConf(), cb.ForwardedValues.Headers.Items) } if *cb.MinTTL != 0 { t.Fatalf("Expected MinTTL to be 0, got %v", *cb.MinTTL) } - if reflect.DeepEqual(cb.TrustedSigners.Items, expandStringList(trustedSignersConf())) != true { + if !reflect.DeepEqual(cb.TrustedSigners.Items, expandStringList(trustedSignersConf())) { t.Fatalf("Expected TrustedSigners.Items to be %v, got %v", trustedSignersConf(), cb.TrustedSigners.Items) } if *cb.MaxTTL != 31536000 { t.Fatalf("Expected MaxTTL to be 31536000, got %v", *cb.MaxTTL) } - if *cb.SmoothStreaming != false { + if *cb.SmoothStreaming { t.Fatalf("Expected SmoothStreaming to be false, got %v", *cb.SmoothStreaming) } if *cb.DefaultTTL != 86400 { @@ -340,10 +343,10 @@ func TestCloudFrontStructure_expandCacheBehavior(t *testing.T) { if *cb.LambdaFunctionAssociations.Quantity != 2 { t.Fatalf("Expected 
LambdaFunctionAssociations to be 2, got %v", *cb.LambdaFunctionAssociations.Quantity) } - if reflect.DeepEqual(cb.AllowedMethods.Items, expandStringList(allowedMethodsConf())) != true { + if !reflect.DeepEqual(cb.AllowedMethods.Items, expandStringList(allowedMethodsConf())) { t.Fatalf("Expected AllowedMethods.Items to be %v, got %v", allowedMethodsConf(), cb.AllowedMethods.Items) } - if reflect.DeepEqual(cb.AllowedMethods.CachedMethods.Items, expandStringList(cachedMethodsConf())) != true { + if !reflect.DeepEqual(cb.AllowedMethods.CachedMethods.Items, expandStringList(cachedMethodsConf())) { t.Fatalf("Expected AllowedMethods.CachedMethods.Items to be %v, got %v", cachedMethodsConf(), cb.AllowedMethods.CachedMethods.Items) } if *cb.PathPattern != "/path1" { @@ -353,10 +356,10 @@ func TestCloudFrontStructure_expandCacheBehavior(t *testing.T) { func TestCloudFrontStructure_flattenCacheBehavior(t *testing.T) { in := cacheBehaviorConf1() - cb := expandCacheBehavior(in) - out := flattenCacheBehavior(cb) + cb := expandCacheBehaviorDeprecated(in) + out := flattenCacheBehaviorDeprecated(cb) var diff *schema.Set - if out["compress"] != true { + if !out["compress"].(bool) { t.Fatalf("Expected out[compress] to be true, got %v", out["compress"]) } if out["viewer_protocol_policy"] != "allow-all" { @@ -387,22 +390,22 @@ func TestCloudFrontStructure_flattenCacheBehavior(t *testing.T) { if out["min_ttl"] != int(0) { t.Fatalf("Expected out[min_ttl] to be 0 (int), got %v", out["min_ttl"]) } - if reflect.DeepEqual(out["trusted_signers"], in["trusted_signers"]) != true { + if !reflect.DeepEqual(out["trusted_signers"], in["trusted_signers"]) { t.Fatalf("Expected out[trusted_signers] to be %v, got %v", in["trusted_signers"], out["trusted_signers"]) } if out["max_ttl"] != int(31536000) { t.Fatalf("Expected out[max_ttl] to be 31536000 (int), got %v", out["max_ttl"]) } - if out["smooth_streaming"] != false { + if out["smooth_streaming"].(bool) { t.Fatalf("Expected out[smooth_streaming] to be false, got %v", out["smooth_streaming"]) } if out["default_ttl"] != int(86400) { t.Fatalf("Expected out[default_ttl] to be 86400 (int), got %v", out["default_ttl"]) } - if reflect.DeepEqual(out["allowed_methods"], in["allowed_methods"]) != true { + if !reflect.DeepEqual(out["allowed_methods"], in["allowed_methods"]) { t.Fatalf("Expected out[allowed_methods] to be %v, got %v", in["allowed_methods"], out["allowed_methods"]) } - if reflect.DeepEqual(out["cached_methods"], in["cached_methods"]) != true { + if !reflect.DeepEqual(out["cached_methods"], in["cached_methods"]) { t.Fatalf("Expected out[cached_methods] to be %v, got %v", in["cached_methods"], out["cached_methods"]) } if out["path_pattern"] != "/path1" { @@ -412,7 +415,7 @@ func TestCloudFrontStructure_flattenCacheBehavior(t *testing.T) { func TestCloudFrontStructure_expandCacheBehaviors(t *testing.T) { data := cacheBehaviorsConf() - cbs := expandCacheBehaviors(data) + cbs := expandCacheBehaviorsDeprecated(data) if *cbs.Quantity != 2 { t.Fatalf("Expected Quantity to be 2, got %v", *cbs.Quantity) } @@ -423,8 +426,8 @@ func TestCloudFrontStructure_expandCacheBehaviors(t *testing.T) { func TestCloudFrontStructure_flattenCacheBehaviors(t *testing.T) { in := cacheBehaviorsConf() - cbs := expandCacheBehaviors(in) - out := flattenCacheBehaviors(cbs) + cbs := expandCacheBehaviorsDeprecated(in) + out := flattenCacheBehaviorsDeprecated(cbs) diff := in.Difference(out) if len(diff.List()) > 0 { @@ -438,10 +441,10 @@ func TestCloudFrontStructure_expandTrustedSigners(t *testing.T) { if 
*ts.Quantity != 2 { t.Fatalf("Expected Quantity to be 2, got %v", *ts.Quantity) } - if *ts.Enabled != true { + if !*ts.Enabled { t.Fatalf("Expected Enabled to be true, got %v", *ts.Enabled) } - if reflect.DeepEqual(ts.Items, expandStringList(data)) != true { + if !reflect.DeepEqual(ts.Items, expandStringList(data)) { t.Fatalf("Expected Items to be %v, got %v", data, ts.Items) } } @@ -451,7 +454,7 @@ func TestCloudFrontStructure_flattenTrustedSigners(t *testing.T) { ts := expandTrustedSigners(in) out := flattenTrustedSigners(ts) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -462,7 +465,7 @@ func TestCloudFrontStructure_expandTrustedSigners_empty(t *testing.T) { if *ts.Quantity != 0 { t.Fatalf("Expected Quantity to be 0, got %v", *ts.Quantity) } - if *ts.Enabled != false { + if *ts.Enabled { t.Fatalf("Expected Enabled to be true, got %v", *ts.Enabled) } if ts.Items != nil { @@ -492,7 +495,7 @@ func TestCloudFrontStructure_flattenlambdaFunctionAssociations(t *testing.T) { lfa := expandLambdaFunctionAssociations(in.List()) out := flattenLambdaFunctionAssociations(lfa) - if reflect.DeepEqual(in.List(), out.List()) != true { + if !reflect.DeepEqual(in.List(), out.List()) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -506,7 +509,7 @@ func TestCloudFrontStructure_expandlambdaFunctionAssociations_empty(t *testing.T if len(lfa.Items) != 0 { t.Fatalf("Expected Items to be len 0, got %v", len(lfa.Items)) } - if reflect.DeepEqual(lfa.Items, []*cloudfront.LambdaFunctionAssociation{}) != true { + if !reflect.DeepEqual(lfa.Items, []*cloudfront.LambdaFunctionAssociation{}) { t.Fatalf("Expected Items to be empty, got %v", lfa.Items) } } @@ -514,13 +517,13 @@ func TestCloudFrontStructure_expandlambdaFunctionAssociations_empty(t *testing.T func TestCloudFrontStructure_expandForwardedValues(t *testing.T) { data := forwardedValuesConf() fv := expandForwardedValues(data) - if *fv.QueryString != true { + if !*fv.QueryString { t.Fatalf("Expected QueryString to be true, got %v", *fv.QueryString) } - if reflect.DeepEqual(fv.Cookies.WhitelistedNames.Items, expandStringList(cookieNamesConf())) != true { + if !reflect.DeepEqual(fv.Cookies.WhitelistedNames.Items, expandStringList(cookieNamesConf())) { t.Fatalf("Expected Cookies.WhitelistedNames.Items to be %v, got %v", cookieNamesConf(), fv.Cookies.WhitelistedNames.Items) } - if reflect.DeepEqual(fv.Headers.Items, expandStringList(headersConf())) != true { + if !reflect.DeepEqual(fv.Headers.Items, expandStringList(headersConf())) { t.Fatalf("Expected Headers.Items to be %v, got %v", headersConf(), fv.Headers.Items) } } @@ -530,13 +533,13 @@ func TestCloudFrontStructure_flattenForwardedValues(t *testing.T) { fv := expandForwardedValues(in) out := flattenForwardedValues(fv) - if out["query_string"] != true { + if !out["query_string"].(bool) { t.Fatalf("Expected out[query_string] to be true, got %v", out["query_string"]) } - if out["cookies"].(*schema.Set).Equal(in["cookies"].(*schema.Set)) != true { + if !out["cookies"].(*schema.Set).Equal(in["cookies"].(*schema.Set)) { t.Fatalf("Expected out[cookies] to be %v, got %v", in["cookies"], out["cookies"]) } - if reflect.DeepEqual(out["headers"], in["headers"]) != true { + if !reflect.DeepEqual(out["headers"], in["headers"]) { t.Fatalf("Expected out[headers] to be %v, got %v", in["headers"], out["headers"]) } } @@ -547,7 +550,7 @@ func TestCloudFrontStructure_expandHeaders(t *testing.T) { if *h.Quantity != 2 { 
t.Fatalf("Expected Quantity to be 2, got %v", *h.Quantity) } - if reflect.DeepEqual(h.Items, expandStringList(data)) != true { + if !reflect.DeepEqual(h.Items, expandStringList(data)) { t.Fatalf("Expected Items to be %v, got %v", data, h.Items) } } @@ -557,7 +560,7 @@ func TestCloudFrontStructure_flattenHeaders(t *testing.T) { h := expandHeaders(in) out := flattenHeaders(h) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -568,7 +571,7 @@ func TestCloudFrontStructure_expandQueryStringCacheKeys(t *testing.T) { if *k.Quantity != 2 { t.Fatalf("Expected Quantity to be 2, got %v", *k.Quantity) } - if reflect.DeepEqual(k.Items, expandStringList(data)) != true { + if !reflect.DeepEqual(k.Items, expandStringList(data)) { t.Fatalf("Expected Items to be %v, got %v", data, k.Items) } } @@ -578,7 +581,7 @@ func TestCloudFrontStructure_flattenQueryStringCacheKeys(t *testing.T) { k := expandQueryStringCacheKeys(in) out := flattenQueryStringCacheKeys(k) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -589,7 +592,7 @@ func TestCloudFrontStructure_expandCookiePreference(t *testing.T) { if *cp.Forward != "whitelist" { t.Fatalf("Expected Forward to be whitelist, got %v", *cp.Forward) } - if reflect.DeepEqual(cp.WhitelistedNames.Items, expandStringList(cookieNamesConf())) != true { + if !reflect.DeepEqual(cp.WhitelistedNames.Items, expandStringList(cookieNamesConf())) { t.Fatalf("Expected WhitelistedNames.Items to be %v, got %v", cookieNamesConf(), cp.WhitelistedNames.Items) } } @@ -599,7 +602,7 @@ func TestCloudFrontStructure_flattenCookiePreference(t *testing.T) { cp := expandCookiePreference(in) out := flattenCookiePreference(cp) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -610,7 +613,7 @@ func TestCloudFrontStructure_expandCookieNames(t *testing.T) { if *cn.Quantity != 2 { t.Fatalf("Expected Quantity to be 2, got %v", *cn.Quantity) } - if reflect.DeepEqual(cn.Items, expandStringList(data)) != true { + if !reflect.DeepEqual(cn.Items, expandStringList(data)) { t.Fatalf("Expected Items to be %v, got %v", data, cn.Items) } } @@ -620,49 +623,49 @@ func TestCloudFrontStructure_flattenCookieNames(t *testing.T) { cn := expandCookieNames(in) out := flattenCookieNames(cn) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } func TestCloudFrontStructure_expandAllowedMethods(t *testing.T) { data := allowedMethodsConf() - am := expandAllowedMethods(data) + am := expandAllowedMethodsDeprecated(data) if *am.Quantity != 7 { t.Fatalf("Expected Quantity to be 7, got %v", *am.Quantity) } - if reflect.DeepEqual(am.Items, expandStringList(data)) != true { + if !reflect.DeepEqual(am.Items, expandStringList(data)) { t.Fatalf("Expected Items to be %v, got %v", data, am.Items) } } func TestCloudFrontStructure_flattenAllowedMethods(t *testing.T) { in := allowedMethodsConf() - am := expandAllowedMethods(in) - out := flattenAllowedMethods(am) + am := expandAllowedMethodsDeprecated(in) + out := flattenAllowedMethodsDeprecated(am) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } func TestCloudFrontStructure_expandCachedMethods(t *testing.T) { data := cachedMethodsConf() - cm := expandCachedMethods(data) + cm := 
expandCachedMethodsDeprecated(data) if *cm.Quantity != 3 { t.Fatalf("Expected Quantity to be 3, got %v", *cm.Quantity) } - if reflect.DeepEqual(cm.Items, expandStringList(data)) != true { + if !reflect.DeepEqual(cm.Items, expandStringList(data)) { t.Fatalf("Expected Items to be %v, got %v", data, cm.Items) } } func TestCloudFrontStructure_flattenCachedMethods(t *testing.T) { in := cachedMethodsConf() - cm := expandCachedMethods(in) - out := flattenCachedMethods(cm) + cm := expandCachedMethodsDeprecated(in) + out := flattenCachedMethodsDeprecated(cm) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -723,7 +726,7 @@ func TestCloudFrontStructure_flattenOrigin(t *testing.T) { if out["origin_path"] != "/" { t.Fatalf("Expected out[origin_path] to be /, got %v", out["origin_path"]) } - if out["custom_origin_config"].(*schema.Set).Equal(in["custom_origin_config"].(*schema.Set)) != true { + if !out["custom_origin_config"].(*schema.Set).Equal(in["custom_origin_config"].(*schema.Set)) { t.Fatalf("Expected out[custom_origin_config] to be %v, got %v", in["custom_origin_config"], out["custom_origin_config"]) } } @@ -800,7 +803,7 @@ func TestCloudFrontStructure_flattenCustomOriginConfig(t *testing.T) { co := expandCustomOriginConfig(in) out := flattenCustomOriginConfig(co) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -821,7 +824,7 @@ func TestCloudFrontStructure_flattenCustomOriginConfigSSL(t *testing.T) { ocs := expandCustomOriginConfigSSL(in) out := flattenCustomOriginConfigSSL(ocs) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -839,7 +842,7 @@ func TestCloudFrontStructure_flattenS3OriginConfig(t *testing.T) { s3o := expandS3OriginConfig(in) out := flattenS3OriginConfig(s3o) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -860,7 +863,7 @@ func TestCloudFrontStructure_flattenCustomErrorResponses(t *testing.T) { ers := expandCustomErrorResponses(in) out := flattenCustomErrorResponses(ers) - if in.Equal(out) != true { + if !in.Equal(out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -898,7 +901,7 @@ func TestCloudFrontStructure_flattenCustomErrorResponse(t *testing.T) { er := expandCustomErrorResponse(in) out := flattenCustomErrorResponse(er) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -907,7 +910,7 @@ func TestCloudFrontStructure_expandLoggingConfig(t *testing.T) { data := loggingConfigConf() lc := expandLoggingConfig(data) - if *lc.Enabled != true { + if !*lc.Enabled { t.Fatalf("Expected Enabled to be true, got %v", *lc.Enabled) } if *lc.Prefix != "myprefix" { @@ -916,14 +919,14 @@ func TestCloudFrontStructure_expandLoggingConfig(t *testing.T) { if *lc.Bucket != "mylogs.s3.amazonaws.com" { t.Fatalf("Expected Bucket to be mylogs.s3.amazonaws.com, got %v", *lc.Bucket) } - if *lc.IncludeCookies != false { + if *lc.IncludeCookies { t.Fatalf("Expected IncludeCookies to be false, got %v", *lc.IncludeCookies) } } func TestCloudFrontStructure_expandLoggingConfig_nilValue(t *testing.T) { lc := expandLoggingConfig(nil) - if *lc.Enabled != false { + if *lc.Enabled { t.Fatalf("Expected Enabled to be false, got %v", *lc.Enabled) } if *lc.Prefix != "" { @@ -932,7 
+935,7 @@ func TestCloudFrontStructure_expandLoggingConfig_nilValue(t *testing.T) { if *lc.Bucket != "" { t.Fatalf("Expected Bucket to be blank, got %v", *lc.Bucket) } - if *lc.IncludeCookies != false { + if *lc.IncludeCookies { t.Fatalf("Expected IncludeCookies to be false, got %v", *lc.IncludeCookies) } } @@ -954,7 +957,7 @@ func TestCloudFrontStructure_expandAliases(t *testing.T) { if *a.Quantity != 2 { t.Fatalf("Expected Quantity to be 2, got %v", *a.Quantity) } - if reflect.DeepEqual(a.Items, expandStringList(data.List())) != true { + if !reflect.DeepEqual(a.Items, expandStringList(data.List())) { t.Fatalf("Expected Items to be [example.com www.example.com], got %v", a.Items) } } @@ -998,7 +1001,7 @@ func TestCloudFrontStructure_expandGeoRestriction_whitelist(t *testing.T) { if *gr.Quantity != 3 { t.Fatalf("Expected Quantity to be 3, got %v", *gr.Quantity) } - if reflect.DeepEqual(gr.Items, aws.StringSlice([]string{"CA", "GB", "US"})) != true { + if !reflect.DeepEqual(gr.Items, aws.StringSlice([]string{"CA", "GB", "US"})) { t.Fatalf("Expected Items be [CA, GB, US], got %v", gr.Items) } } @@ -1008,7 +1011,7 @@ func TestCloudFrontStructure_flattenGeoRestriction_whitelist(t *testing.T) { gr := expandGeoRestriction(in) out := flattenGeoRestriction(gr) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -1032,7 +1035,7 @@ func TestCloudFrontStructure_flattenGeoRestriction_no_items(t *testing.T) { gr := expandGeoRestriction(in) out := flattenGeoRestriction(gr) - if reflect.DeepEqual(in, out) != true { + if !reflect.DeepEqual(in, out) { t.Fatalf("Expected out to be %v, got %v", in, out) } } @@ -1043,7 +1046,7 @@ func TestCloudFrontStructure_expandViewerCertificate_cloudfront_default_certific if vc.ACMCertificateArn != nil { t.Fatalf("Expected ACMCertificateArn to be unset, got %v", *vc.ACMCertificateArn) } - if *vc.CloudFrontDefaultCertificate != true { + if !*vc.CloudFrontDefaultCertificate { t.Fatalf("Expected CloudFrontDefaultCertificate to be true, got %v", *vc.CloudFrontDefaultCertificate) } if vc.IAMCertificateId != nil { diff --git a/aws/config.go b/aws/config.go index d357fb40635..8380a90dede 100644 --- a/aws/config.go +++ b/aws/config.go @@ -16,15 +16,18 @@ import ( "github.com/aws/aws-sdk-go/aws/request" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/acm" + "github.com/aws/aws-sdk-go/service/acmpca" "github.com/aws/aws-sdk-go/service/apigateway" "github.com/aws/aws-sdk-go/service/applicationautoscaling" "github.com/aws/aws-sdk-go/service/appsync" "github.com/aws/aws-sdk-go/service/athena" "github.com/aws/aws-sdk-go/service/autoscaling" "github.com/aws/aws-sdk-go/service/batch" + "github.com/aws/aws-sdk-go/service/budgets" "github.com/aws/aws-sdk-go/service/cloud9" "github.com/aws/aws-sdk-go/service/cloudformation" "github.com/aws/aws-sdk-go/service/cloudfront" + "github.com/aws/aws-sdk-go/service/cloudhsmv2" "github.com/aws/aws-sdk-go/service/cloudtrail" "github.com/aws/aws-sdk-go/service/cloudwatch" "github.com/aws/aws-sdk-go/service/cloudwatchevents" @@ -41,11 +44,13 @@ import ( "github.com/aws/aws-sdk-go/service/devicefarm" "github.com/aws/aws-sdk-go/service/directconnect" "github.com/aws/aws-sdk-go/service/directoryservice" + "github.com/aws/aws-sdk-go/service/dlm" 
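Each service added in the `aws/config.go` changes here (ACM PCA, Budgets, CloudHSMv2, DLM, EKS, and so on) follows the same three-step wiring: import the SDK package above, add a typed connection field to `AWSClient`, and construct the client from the shared session inside `Client()`. A condensed sketch of that shape outside the provider, using the real `eks` package as the example service and canonical `github.com` import paths:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

// awsClients mirrors the shape of the provider's AWSClient struct:
// one typed connection per service, all built from a single session.
type awsClients struct {
	eksconn *eks.EKS
}

func newClients(region string) (*awsClients, error) {
	sess, err := session.NewSession(&aws.Config{Region: aws.String(region)})
	if err != nil {
		return nil, fmt.Errorf("Error creating AWS session: %s", err)
	}
	return &awsClients{
		// Constructing a service client is cheap and makes no API calls.
		eksconn: eks.New(sess),
	}, nil
}

func main() {
	clients, err := newClients("us-east-1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(clients.eksconn.ClientInfo.ServiceName)
}
```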
"github.com/aws/aws-sdk-go/service/dynamodb" "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/ecr" "github.com/aws/aws-sdk-go/service/ecs" "github.com/aws/aws-sdk-go/service/efs" + "github.com/aws/aws-sdk-go/service/eks" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/aws/aws-sdk-go/service/elasticbeanstalk" elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" @@ -54,6 +59,7 @@ import ( "github.com/aws/aws-sdk-go/service/elbv2" "github.com/aws/aws-sdk-go/service/emr" "github.com/aws/aws-sdk-go/service/firehose" + "github.com/aws/aws-sdk-go/service/fms" "github.com/aws/aws-sdk-go/service/gamelift" "github.com/aws/aws-sdk-go/service/glacier" "github.com/aws/aws-sdk-go/service/glue" @@ -62,17 +68,24 @@ import ( "github.com/aws/aws-sdk-go/service/inspector" "github.com/aws/aws-sdk-go/service/iot" "github.com/aws/aws-sdk-go/service/kinesis" + "github.com/aws/aws-sdk-go/service/kinesisanalytics" "github.com/aws/aws-sdk-go/service/kms" "github.com/aws/aws-sdk-go/service/lambda" + "github.com/aws/aws-sdk-go/service/lexmodelbuildingservice" "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go/service/macie" "github.com/aws/aws-sdk-go/service/mediastore" "github.com/aws/aws-sdk-go/service/mq" + "github.com/aws/aws-sdk-go/service/neptune" "github.com/aws/aws-sdk-go/service/opsworks" "github.com/aws/aws-sdk-go/service/organizations" + "github.com/aws/aws-sdk-go/service/pinpoint" + "github.com/aws/aws-sdk-go/service/pricing" "github.com/aws/aws-sdk-go/service/rds" "github.com/aws/aws-sdk-go/service/redshift" "github.com/aws/aws-sdk-go/service/route53" "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go/service/secretsmanager" "github.com/aws/aws-sdk-go/service/servicecatalog" "github.com/aws/aws-sdk-go/service/servicediscovery" "github.com/aws/aws-sdk-go/service/ses" @@ -81,11 +94,13 @@ import ( "github.com/aws/aws-sdk-go/service/sns" "github.com/aws/aws-sdk-go/service/sqs" "github.com/aws/aws-sdk-go/service/ssm" + "github.com/aws/aws-sdk-go/service/storagegateway" "github.com/aws/aws-sdk-go/service/sts" + "github.com/aws/aws-sdk-go/service/swf" "github.com/aws/aws-sdk-go/service/waf" "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/aws/aws-sdk-go/service/workspaces" "github.com/davecgh/go-spew/spew" - "github.com/hashicorp/errwrap" "github.com/hashicorp/go-cleanhttp" "github.com/hashicorp/terraform/helper/logging" "github.com/hashicorp/terraform/terraform" @@ -118,10 +133,14 @@ type Config struct { DeviceFarmEndpoint string Ec2Endpoint string EcsEndpoint string + AutoscalingEndpoint string EcrEndpoint string + EfsEndpoint string + EsEndpoint string ElbEndpoint string IamEndpoint string KinesisEndpoint string + KinesisAnalyticsEndpoint string KmsEndpoint string LambdaEndpoint string RdsEndpoint string @@ -130,6 +149,7 @@ type Config struct { SnsEndpoint string SqsEndpoint string StsEndpoint string + SsmEndpoint string Insecure bool SkipCredsValidation bool @@ -144,6 +164,7 @@ type AWSClient struct { cfconn *cloudformation.CloudFormation cloud9conn *cloud9.Cloud9 
cloudfrontconn *cloudfront.CloudFront + cloudhsmv2conn *cloudhsmv2.CloudHSMV2 cloudtrailconn *cloudtrail.CloudTrail cloudwatchconn *cloudwatch.CloudWatch cloudwatchlogsconn *cloudwatchlogs.CloudWatchLogs @@ -153,6 +174,7 @@ type AWSClient struct { configconn *configservice.ConfigService daxconn *dax.DAX devicefarmconn *devicefarm.DeviceFarm + dlmconn *dlm.DLM dmsconn *databasemigrationservice.DatabaseMigrationService dsconn *directoryservice.DirectoryService dynamodbconn *dynamodb.DynamoDB @@ -160,15 +182,18 @@ type AWSClient struct { ecrconn *ecr.ECR ecsconn *ecs.ECS efsconn *efs.EFS + eksconn *eks.EKS elbconn *elb.ELB elbv2conn *elbv2.ELBV2 emrconn *emr.EMR esconn *elasticsearch.ElasticsearchService acmconn *acm.ACM + acmpcaconn *acmpca.ACMPCA apigateway *apigateway.APIGateway appautoscalingconn *applicationautoscaling.ApplicationAutoScaling autoscalingconn *autoscaling.AutoScaling s3conn *s3.S3 + secretsmanagerconn *secretsmanager.SecretsManager scconn *servicecatalog.ServiceCatalog sesConn *ses.SES simpledbconn *simpledb.SimpleDB @@ -184,15 +209,18 @@ type AWSClient struct { rdsconn *rds.RDS iamconn *iam.IAM kinesisconn *kinesis.Kinesis + kinesisanalyticsconn *kinesisanalytics.KinesisAnalytics kmsconn *kms.KMS gameliftconn *gamelift.GameLift firehoseconn *firehose.Firehose + fmsconn *fms.FMS inspectorconn *inspector.Inspector elasticacheconn *elasticache.ElastiCache elasticbeanstalkconn *elasticbeanstalk.ElasticBeanstalk elastictranscoderconn *elastictranscoder.ElasticTranscoder lambdaconn *lambda.Lambda lightsailconn *lightsail.Lightsail + macieconn *macie.Macie mqconn *mq.MQ opsworksconn *opsworks.OpsWorks organizationsconn *organizations.Organizations @@ -205,6 +233,8 @@ type AWSClient struct { sdconn *servicediscovery.ServiceDiscovery sfnconn *sfn.SFN ssmconn *ssm.SSM + storagegatewayconn *storagegateway.StorageGateway + swfconn *swf.SWF wafconn *waf.WAF wafregionalconn *wafregional.WAFRegional iotconn *iot.IoT @@ -214,6 +244,12 @@ type AWSClient struct { dxconn *directconnect.DirectConnect mediastoreconn *mediastore.MediaStore appsyncconn *appsync.AppSync + lexmodelconn *lexmodelbuildingservice.LexModelBuildingService + budgetconn *budgets.Budgets + neptuneconn *neptune.Neptune + pricingconn *pricing.Pricing + pinpointconn *pinpoint.Pinpoint + workspacesconn *workspaces.WorkSpaces } func (c *AWSClient) S3() *s3.S3 { @@ -224,11 +260,6 @@ func (c *AWSClient) DynamoDB() *dynamodb.DynamoDB { return c.dynamodbconn } -func (c *AWSClient) IsGovCloud() bool { - _, isGovCloud := endpoints.PartitionForRegion([]endpoints.Partition{endpoints.AwsUsGovPartition()}, c.region) - return isGovCloud -} - func (c *AWSClient) IsChinaCloud() bool { _, isChinaCloud := endpoints.PartitionForRegion([]endpoints.Partition{endpoints.AwsCnPartition()}, c.region) return isChinaCloud @@ -276,16 +307,27 @@ func (c *Config) Client() (interface{}, error) { cp, err := creds.Get() if err != nil { if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoCredentialProviders" { - // If a profile wasn't specified then error out + // If a profile wasn't specified, the session may still be able to resolve credentials from shared config. if c.Profile == "" { - return nil, errors.New(`No valid credential sources found for AWS Provider. - Please see https://terraform.io/docs/providers/aws/index.html for more information on - providing credentials for the AWS Provider`) + sess, err := session.NewSession() + if err != nil { + return nil, errors.New(`No valid credential sources found for AWS Provider. 
+ Please see https://terraform.io/docs/providers/aws/index.html for more information on + providing credentials for the AWS Provider`) + } + _, err = sess.Config.Credentials.Get() + if err != nil { + return nil, errors.New(`No valid credential sources found for AWS Provider. + Please see https://terraform.io/docs/providers/aws/index.html for more information on + providing credentials for the AWS Provider`) + } + log.Printf("[INFO] Using session-derived AWS Auth") + opt.Config.Credentials = sess.Config.Credentials + } else { + log.Printf("[INFO] AWS Auth using Profile: %q", c.Profile) + opt.Profile = c.Profile + opt.SharedConfigState = session.SharedConfigEnable } - // add the profile and enable share config file usage - log.Printf("[INFO] AWS Auth using Profile: %q", c.Profile) - opt.Profile = c.Profile - opt.SharedConfigState = session.SharedConfigEnable } else { return nil, fmt.Errorf("Error loading credentials for AWS Provider: %s", err) } @@ -315,7 +357,7 @@ func (c *Config) Client() (interface{}, error) { Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider`) } - return nil, errwrap.Wrapf("Error creating AWS session: {{err}}", err) + return nil, fmt.Errorf("Error creating AWS session: %s", err) } sess.Handlers.Build.PushBackNamed(addTerraformVersionToUserAgent) @@ -329,6 +371,32 @@ func (c *Config) Client() (interface{}, error) { sess = sess.Copy(&aws.Config{MaxRetries: aws.Int(c.MaxRetries)}) } + // Generally, we want to configure a lower retry theshold for networking issues + // as the session retry threshold is very high by default and can mask permanent + // networking failures, such as a non-existent service endpoint. + // MaxRetries will override this logic if it has a lower retry threshold. + // NOTE: This logic can be fooled by other request errors raising the retry count + // before any networking error occurs + sess.Handlers.Retry.PushBack(func(r *request.Request) { + // We currently depend on the DefaultRetryer exponential backoff here. + // ~10 retries gives a fair backoff of a few seconds. + if r.RetryCount < 9 { + return + } + // RequestError: send request failed + // caused by: Post https://FQDN/: dial tcp: lookup FQDN: no such host + if IsAWSErrExtended(r.Error, "RequestError", "send request failed", "no such host") { + log.Printf("[WARN] Disabling retries after next request due to networking issue") + r.Retryable = aws.Bool(false) + } + // RequestError: send request failed + // caused by: Post https://FQDN/: dial tcp IPADDRESS:443: connect: connection refused + if IsAWSErrExtended(r.Error, "RequestError", "send request failed", "connection refused") { + log.Printf("[WARN] Disabling retries after next request due to networking issue") + r.Retryable = aws.Bool(false) + } + }) + // This restriction should only be used for Route53 sessions. // Other resources that have restrictions should allow the API to fail, rather // than Terraform abstracting the region for the user. 
This can lead to breaking @@ -344,12 +412,16 @@ func (c *Config) Client() (interface{}, error) { awsCwlSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.CloudWatchLogsEndpoint)}) awsDynamoSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.DynamoDBEndpoint)}) awsEc2Sess := sess.Copy(&aws.Config{Endpoint: aws.String(c.Ec2Endpoint)}) + awsAutoscalingSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.AutoscalingEndpoint)}) awsEcrSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.EcrEndpoint)}) awsEcsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.EcsEndpoint)}) + awsEfsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.EfsEndpoint)}) awsElbSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.ElbEndpoint)}) + awsEsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.EsEndpoint)}) awsIamSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.IamEndpoint)}) awsLambdaSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.LambdaEndpoint)}) awsKinesisSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.KinesisEndpoint)}) + awsKinesisAnalyticsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.KinesisAnalyticsEndpoint)}) awsKmsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.KmsEndpoint)}) awsRdsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.RdsEndpoint)}) awsS3Sess := sess.Copy(&aws.Config{Endpoint: aws.String(c.S3Endpoint)}) @@ -357,31 +429,48 @@ func (c *Config) Client() (interface{}, error) { awsSqsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.SqsEndpoint)}) awsStsSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.StsEndpoint)}) awsDeviceFarmSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.DeviceFarmEndpoint)}) + awsSsmSess := sess.Copy(&aws.Config{Endpoint: aws.String(c.SsmEndpoint)}) log.Println("[INFO] Initializing DeviceFarm SDK connection") client.devicefarmconn = devicefarm.New(awsDeviceFarmSess) - // These two services need to be set up early so we can check on AccountID + // Beyond verifying credentials (if enabled), we use the next set of logic + // to determine two pieces of information required for manually assembling + // resource ARNs when they are not available in the service API: + // * client.accountid + // * client.partition client.iamconn = iam.New(awsIamSess) client.stsconn = sts.New(awsStsSess) + if c.AssumeRoleARN != "" { + client.accountid, client.partition, _ = parseAccountIDAndPartitionFromARN(c.AssumeRoleARN) + } + + // Validate credentials early and fail before we do any graph walking. if !c.SkipCredsValidation { - err = c.ValidateCredentials(client.stsconn) + var err error + client.accountid, client.partition, err = GetAccountIDAndPartitionFromSTSGetCallerIdentity(client.stsconn) if err != nil { - return nil, err + return nil, fmt.Errorf("error validating provider credentials: %s", err) } } - // Infer AWS partition from configured region - if partition, ok := endpoints.PartitionForRegion(endpoints.DefaultPartitions(), client.region); ok { - client.partition = partition.ID() + if client.accountid == "" && !c.SkipRequestingAccountId { + var err error + client.accountid, client.partition, err = GetAccountIDAndPartition(client.iamconn, client.stsconn, cp.ProviderName) + if err != nil { + // DEPRECATED: Next major version of the provider should return the error instead of logging + // if skip_request_account_id is not enabled. + log.Printf("[WARN] %s", fmt.Sprintf( + "AWS account ID not previously found and failed retrieving via all available methods. "+ + "This will return an error in the next major version of the AWS provider. 
"+ + "See https://www.terraform.io/docs/providers/aws/index.html#skip_requesting_account_id for workaround and implications. "+ + "Errors: %s", err)) + } } - if !c.SkipRequestingAccountId { - accountID, err := GetAccountID(client.iamconn, client.stsconn, cp.ProviderName) - if err == nil { - client.accountid = accountID - } + if client.accountid == "" { + log.Printf("[WARN] AWS account ID not found for provider. See https://www.terraform.io/docs/providers/aws/index.html#skip_requesting_account_id for implications.") } authErr := c.ValidateAccountId(client.accountid) @@ -389,6 +478,13 @@ func (c *Config) Client() (interface{}, error) { return nil, authErr } + // Infer AWS partition from configured region if we still need it + if client.partition == "" { + if partition, ok := endpoints.PartitionForRegion(endpoints.DefaultPartitions(), client.region); ok { + client.partition = partition.ID() + } + } + client.ec2conn = ec2.New(awsEc2Sess) if !c.SkipGetEC2Platforms { @@ -402,13 +498,16 @@ func (c *Config) Client() (interface{}, error) { } } + client.budgetconn = budgets.New(sess) client.acmconn = acm.New(awsAcmSess) + client.acmpcaconn = acmpca.New(sess) client.apigateway = apigateway.New(awsApigatewaySess) client.appautoscalingconn = applicationautoscaling.New(sess) - client.autoscalingconn = autoscaling.New(sess) + client.autoscalingconn = autoscaling.New(awsAutoscalingSess) client.cloud9conn = cloud9.New(sess) client.cfconn = cloudformation.New(awsCfSess) client.cloudfrontconn = cloudfront.New(sess) + client.cloudhsmv2conn = cloudhsmv2.New(sess) client.cloudtrailconn = cloudtrail.New(sess) client.cloudwatchconn = cloudwatch.New(awsCwSess) client.cloudwatcheventsconn = cloudwatchevents.New(awsCweSess) @@ -421,30 +520,37 @@ func (c *Config) Client() (interface{}, error) { client.cognitoidpconn = cognitoidentityprovider.New(sess) client.codepipelineconn = codepipeline.New(sess) client.daxconn = dax.New(awsDynamoSess) + client.dlmconn = dlm.New(sess) client.dmsconn = databasemigrationservice.New(sess) client.dsconn = directoryservice.New(sess) client.dynamodbconn = dynamodb.New(awsDynamoSess) client.ecrconn = ecr.New(awsEcrSess) client.ecsconn = ecs.New(awsEcsSess) - client.efsconn = efs.New(sess) + client.efsconn = efs.New(awsEfsSess) + client.eksconn = eks.New(sess) client.elasticacheconn = elasticache.New(sess) client.elasticbeanstalkconn = elasticbeanstalk.New(sess) client.elastictranscoderconn = elastictranscoder.New(sess) client.elbconn = elb.New(awsElbSess) client.elbv2conn = elbv2.New(awsElbSess) client.emrconn = emr.New(sess) - client.esconn = elasticsearch.New(sess) + client.esconn = elasticsearch.New(awsEsSess) client.firehoseconn = firehose.New(sess) + client.fmsconn = fms.New(sess) client.inspectorconn = inspector.New(sess) client.gameliftconn = gamelift.New(sess) client.glacierconn = glacier.New(sess) client.guarddutyconn = guardduty.New(sess) client.iotconn = iot.New(sess) client.kinesisconn = kinesis.New(awsKinesisSess) + client.kinesisanalyticsconn = kinesisanalytics.New(awsKinesisAnalyticsSess) client.kmsconn = kms.New(awsKmsSess) client.lambdaconn = lambda.New(awsLambdaSess) + client.lexmodelconn = lexmodelbuildingservice.New(sess) client.lightsailconn = lightsail.New(sess) + client.macieconn = macie.New(sess) client.mqconn = mq.New(sess) + client.neptuneconn = neptune.New(sess) client.opsworksconn = opsworks.New(sess) client.organizationsconn = organizations.New(sess) client.r53conn = route53.New(r53Sess) @@ -455,10 +561,13 @@ func (c *Config) Client() (interface{}, error) { 
client.scconn = servicecatalog.New(sess) client.sdconn = servicediscovery.New(sess) client.sesConn = ses.New(sess) + client.secretsmanagerconn = secretsmanager.New(sess) client.sfnconn = sfn.New(sess) client.snsconn = sns.New(awsSnsSess) client.sqsconn = sqs.New(awsSqsSess) - client.ssmconn = ssm.New(sess) + client.ssmconn = ssm.New(awsSsmSess) + client.storagegatewayconn = storagegateway.New(sess) + client.swfconn = swf.New(sess) client.wafconn = waf.New(sess) client.wafregionalconn = wafregional.New(sess) client.batchconn = batch.New(sess) @@ -467,6 +576,10 @@ func (c *Config) Client() (interface{}, error) { client.dxconn = directconnect.New(sess) client.mediastoreconn = mediastore.New(sess) client.appsyncconn = appsync.New(sess) + client.neptuneconn = neptune.New(sess) + client.pricingconn = pricing.New(sess) + client.pinpointconn = pinpoint.New(sess) + client.workspacesconn = workspaces.New(sess) // Workaround for https://github.com/aws/aws-sdk-go/issues/1376 client.kinesisconn.Handlers.Retry.PushBack(func(r *request.Request) { @@ -519,6 +632,13 @@ func (c *Config) Client() (interface{}, error) { } }) + client.storagegatewayconn.Handlers.Retry.PushBack(func(r *request.Request) { + // InvalidGatewayRequestException: The specified gateway proxy network connection is busy. + if isAWSErr(r.Error, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified gateway proxy network connection is busy") { + r.Retryable = aws.Bool(true) + } + }) + return &client, nil } @@ -545,12 +665,6 @@ func (c *Config) ValidateRegion() error { return fmt.Errorf("Not a valid region: %s", c.Region) } -// Validate credentials early and fail before we do any graph walking. -func (c *Config) ValidateCredentials(stsconn *sts.STS) error { - _, err := stsconn.GetCallerIdentity(&sts.GetCallerIdentityInput{}) - return err -} - // ValidateAccountId returns a context-specific error if the configured account // id is explicitly forbidden or not authorised; and nil if it is authorised. 
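The Storage Gateway handler just above and the session-level networking-error handler earlier in `Client()` both rely on the same SDK mechanism: push a function onto the `Retry` handler list and set `r.Retryable` to force or suppress further attempts. Below is a minimal sketch of that mechanism against a plain session; the string matching here is a deliberate simplification, not the provider's `isAWSErr`/`IsAWSErrExtended` helpers:

```go
package main

import (
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))

	// Give up quickly on errors that look like a permanently unreachable
	// endpoint instead of exhausting the SDK's default retry budget.
	// (Setting r.Retryable to true instead would force extra retries,
	// as the Storage Gateway handler above does.)
	sess.Handlers.Retry.PushBack(func(r *request.Request) {
		if r.Error == nil {
			return
		}
		msg := r.Error.Error()
		if strings.Contains(msg, "no such host") || strings.Contains(msg, "connection refused") {
			log.Printf("[WARN] disabling retries after networking error: %s", msg)
			r.Retryable = aws.Bool(false)
		}
	})

	client := sts.New(sess)
	_, _ = client.GetCallerIdentity(&sts.GetCallerIdentityInput{})
}
```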
func (c *Config) ValidateAccountId(accountId string) error { diff --git a/aws/config_test.go b/aws/config_test.go index 50b175c1ed4..1843791f03b 100644 --- a/aws/config_test.go +++ b/aws/config_test.go @@ -18,7 +18,7 @@ import ( func TestGetSupportedEC2Platforms(t *testing.T) { ec2Endpoints := []*awsMockEndpoint{ - &awsMockEndpoint{ + { Request: &awsMockRequest{"POST", "/", "Action=DescribeAccountAttributes&" + "AttributeName.1=supported-platforms&Version=2016-11-15"}, Response: &awsMockResponse{200, test_ec2_describeAccountAttributes_response, "text/xml"}, diff --git a/aws/core_acceptance_test.go b/aws/core_acceptance_test.go index 800ef0a00ed..2082c066d22 100644 --- a/aws/core_acceptance_test.go +++ b/aws/core_acceptance_test.go @@ -10,12 +10,12 @@ import ( func TestAccAWSVpc_coreMismatchedDiffs(t *testing.T) { var vpc ec2.Vpc - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testMatchedDiffs, Check: resource.ComposeTestCheckFunc( testAccCheckVpcExists("aws_vpc.test", &vpc), diff --git a/aws/data_source_aws_acm_certificate_test.go b/aws/data_source_aws_acm_certificate_test.go index c6745c167ce..87e7c0c87de 100644 --- a/aws/data_source_aws_acm_certificate_test.go +++ b/aws/data_source_aws_acm_certificate_test.go @@ -34,7 +34,7 @@ func TestAccAWSAcmCertificateDataSource_singleIssued(t *testing.T) { resourceName := "data.aws_acm_certificate.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -100,7 +100,7 @@ func TestAccAWSAcmCertificateDataSource_multipleIssued(t *testing.T) { resourceName := "data.aws_acm_certificate.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -145,7 +145,7 @@ func TestAccAWSAcmCertificateDataSource_noMatchReturnsError(t *testing.T) { domain := fmt.Sprintf("tf-acc-nonexistent.%s", os.Getenv("ACM_CERTIFICATE_ROOT_DOMAIN")) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_acmpca_certificate_authority.go b/aws/data_source_aws_acmpca_certificate_authority.go new file mode 100644 index 00000000000..b64902e2ef1 --- /dev/null +++ b/aws/data_source_aws_acmpca_certificate_authority.go @@ -0,0 +1,176 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/acmpca" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsAcmpcaCertificateAuthority() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsAcmpcaCertificateAuthorityRead, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Required: true, + }, + "certificate": { + Type: schema.TypeString, + Computed: true, + }, + "certificate_chain": { + Type: schema.TypeString, + Computed: true, + }, + "certificate_signing_request": { + Type: schema.TypeString, + Computed: true, + }, + "not_after": { + Type: schema.TypeString, + Computed: true, + }, + "not_before": { + Type: schema.TypeString, + Computed: true, + }, + // 
https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_RevocationConfiguration.html + "revocation_configuration": { + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + // https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_CrlConfiguration.html + "crl_configuration": { + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_cname": { + Type: schema.TypeString, + Computed: true, + }, + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "expiration_in_days": { + Type: schema.TypeInt, + Computed: true, + }, + "s3_bucket_name": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + "serial": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tagsSchemaComputed(), + "type": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsAcmpcaCertificateAuthorityRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).acmpcaconn + certificateAuthorityArn := d.Get("arn").(string) + + describeCertificateAuthorityInput := &acmpca.DescribeCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(certificateAuthorityArn), + } + + log.Printf("[DEBUG] Reading ACMPCA Certificate Authority: %s", describeCertificateAuthorityInput) + + describeCertificateAuthorityOutput, err := conn.DescribeCertificateAuthority(describeCertificateAuthorityInput) + if err != nil { + return fmt.Errorf("error reading ACMPCA Certificate Authority: %s", err) + } + + if describeCertificateAuthorityOutput.CertificateAuthority == nil { + return fmt.Errorf("error reading ACMPCA Certificate Authority: not found") + } + certificateAuthority := describeCertificateAuthorityOutput.CertificateAuthority + + d.Set("arn", certificateAuthority.Arn) + d.Set("not_after", certificateAuthority.NotAfter) + d.Set("not_before", certificateAuthority.NotBefore) + + if err := d.Set("revocation_configuration", flattenAcmpcaRevocationConfiguration(certificateAuthority.RevocationConfiguration)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + d.Set("serial", certificateAuthority.Serial) + d.Set("status", certificateAuthority.Status) + d.Set("type", certificateAuthority.Type) + + getCertificateAuthorityCertificateInput := &acmpca.GetCertificateAuthorityCertificateInput{ + CertificateAuthorityArn: aws.String(certificateAuthorityArn), + } + + log.Printf("[DEBUG] Reading ACMPCA Certificate Authority Certificate: %s", getCertificateAuthorityCertificateInput) + + getCertificateAuthorityCertificateOutput, err := conn.GetCertificateAuthorityCertificate(getCertificateAuthorityCertificateInput) + if err != nil { + // Returned when in PENDING_CERTIFICATE status + // InvalidStateException: The certificate authority XXXXX is not in the correct state to have a certificate signing request. 
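The read path continues below by tolerating `InvalidStateException` through the provider's `isAWSErr` helper. That helper is not part of this diff; its behaviour can be approximated with the SDK's `awserr` package, as in this hypothetical sketch (the helper name and exact semantics are assumptions):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/acmpca"
)

// isAWSErrSketch approximates the helper used below: it reports whether err
// is an awserr.Error with the given code and, when message is non-empty,
// whether the error message contains it.
func isAWSErrSketch(err error, code string, message string) bool {
	if aerr, ok := err.(awserr.Error); ok {
		return aerr.Code() == code && strings.Contains(aerr.Message(), message)
	}
	return false
}

func main() {
	err := awserr.New(acmpca.ErrCodeInvalidStateException, "not in the correct state", nil)
	fmt.Println(isAWSErrSketch(err, acmpca.ErrCodeInvalidStateException, "")) // true
}
```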
+ if !isAWSErr(err, acmpca.ErrCodeInvalidStateException, "") { + return fmt.Errorf("error reading ACMPCA Certificate Authority Certificate: %s", err) + } + } + + d.Set("certificate", "") + d.Set("certificate_chain", "") + if getCertificateAuthorityCertificateOutput != nil { + d.Set("certificate", getCertificateAuthorityCertificateOutput.Certificate) + d.Set("certificate_chain", getCertificateAuthorityCertificateOutput.CertificateChain) + } + + getCertificateAuthorityCsrInput := &acmpca.GetCertificateAuthorityCsrInput{ + CertificateAuthorityArn: aws.String(certificateAuthorityArn), + } + + log.Printf("[DEBUG] Reading ACMPCA Certificate Authority Certificate Signing Request: %s", getCertificateAuthorityCsrInput) + + getCertificateAuthorityCsrOutput, err := conn.GetCertificateAuthorityCsr(getCertificateAuthorityCsrInput) + if err != nil { + return fmt.Errorf("error reading ACMPCA Certificate Authority Certificate Signing Request: %s", err) + } + + d.Set("certificate_signing_request", "") + if getCertificateAuthorityCsrOutput != nil { + d.Set("certificate_signing_request", getCertificateAuthorityCsrOutput.Csr) + } + + tags, err := listAcmpcaTags(conn, certificateAuthorityArn) + if err != nil { + return fmt.Errorf("error reading ACMPCA Certificate Authority %q tags: %s", certificateAuthorityArn, err) + } + + if err := d.Set("tags", tagsToMapACMPCA(tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + d.SetId(certificateAuthorityArn) + + return nil +} diff --git a/aws/data_source_aws_acmpca_certificate_authority_test.go b/aws/data_source_aws_acmpca_certificate_authority_test.go new file mode 100644 index 00000000000..431cdffe300 --- /dev/null +++ b/aws/data_source_aws_acmpca_certificate_authority_test.go @@ -0,0 +1,109 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsAcmpcaCertificateAuthority_Basic(t *testing.T) { + resourceName := "aws_acmpca_certificate_authority.test" + datasourceName := "data.aws_acmpca_certificate_authority.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsAcmpcaCertificateAuthorityConfig_NonExistent, + ExpectError: regexp.MustCompile(`ResourceNotFoundException`), + }, + { + Config: testAccDataSourceAwsAcmpcaCertificateAuthorityConfig_ARN, + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsAcmpcaCertificateAuthorityCheck(datasourceName, resourceName), + ), + }, + }, + }) +} + +func testAccDataSourceAwsAcmpcaCertificateAuthorityCheck(datasourceName, resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + resource, ok := s.RootModule().Resources[datasourceName] + if !ok { + return fmt.Errorf("root module has no resource called %s", datasourceName) + } + + dataSource, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("root module has no resource called %s", resourceName) + } + + attrNames := []string{ + "arn", + "certificate", + "certificate_chain", + "certificate_signing_request", + "not_after", + "not_before", + "revocation_configuration.#", + "revocation_configuration.0.crl_configuration.#", + "revocation_configuration.0.crl_configuration.0.enabled", + "serial", + "status", + "tags.%", + "type", + } + + for _, attrName := range attrNames { + if resource.Primary.Attributes[attrName] != 
dataSource.Primary.Attributes[attrName] { + return fmt.Errorf( + "%s is %s; want %s", + attrName, + resource.Primary.Attributes[attrName], + dataSource.Primary.Attributes[attrName], + ) + } + } + + return nil + } +} + +const testAccDataSourceAwsAcmpcaCertificateAuthorityConfig_ARN = ` +resource "aws_acmpca_certificate_authority" "wrong" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } +} + +resource "aws_acmpca_certificate_authority" "test" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } +} + +data "aws_acmpca_certificate_authority" "test" { + arn = "${aws_acmpca_certificate_authority.test.arn}" +} +` + +const testAccDataSourceAwsAcmpcaCertificateAuthorityConfig_NonExistent = ` +data "aws_acmpca_certificate_authority" "test" { + arn = "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/tf-acc-test-does-not-exist" +} +` diff --git a/aws/data_source_aws_ami.go b/aws/data_source_aws_ami.go index 686b6c731e9..f5b8a01d0ee 100644 --- a/aws/data_source_aws_ami.go +++ b/aws/data_source_aws_ami.go @@ -5,7 +5,10 @@ import ( "fmt" "log" "regexp" + "sort" + "time" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -203,6 +206,27 @@ func dataSourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error { } } + // Deprecated: pre-2.0.0 warning logging + if !ownersOk { + log.Print("[WARN] The \"owners\" argument will become required in the next major version.") + log.Print("[WARN] Documentation can be found at: https://www.terraform.io/docs/providers/aws/d/ami.html#owners") + + missingOwnerFilter := true + + if filtersOk { + for _, filter := range params.Filters { + if aws.StringValue(filter.Name) == "owner-alias" || aws.StringValue(filter.Name) == "owner-id" { + missingOwnerFilter = false + break + } + } + } + + if missingOwnerFilter { + log.Print("[WARN] Potential security issue: missing \"owners\" filtering for AMI. Check AMI to ensure it came from trusted source.") + } + } + log.Printf("[DEBUG] Reading AMI: %s", params) resp, err := conn.DescribeImages(params) if err != nil { @@ -230,32 +254,23 @@ func dataSourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error { filteredImages = resp.Images[:] } - var image *ec2.Image if len(filteredImages) < 1 { return fmt.Errorf("Your query returned no results. Please change your search criteria and try again.") } if len(filteredImages) > 1 { - recent := d.Get("most_recent").(bool) - log.Printf("[DEBUG] aws_ami - multiple results found and `most_recent` is set to: %t", recent) - if recent { - image = mostRecentAmi(filteredImages) - } else { + if !d.Get("most_recent").(bool) { return fmt.Errorf("Your query returned more than one result. Please try a more " + "specific search criteria, or set `most_recent` attribute to true.") } - } else { - // Query returned single result. 
- image = filteredImages[0] + sort.Slice(filteredImages, func(i, j int) bool { + itime, _ := time.Parse(time.RFC3339, aws.StringValue(filteredImages[i].CreationDate)) + jtime, _ := time.Parse(time.RFC3339, aws.StringValue(filteredImages[j].CreationDate)) + return itime.Unix() > jtime.Unix() + }) } - log.Printf("[DEBUG] aws_ami - Single AMI found: %s", *image.ImageId) - return amiDescriptionAttributes(d, image) -} - -// Returns the most recent AMI out of a slice of images. -func mostRecentAmi(images []*ec2.Image) *ec2.Image { - return sortImages(images)[0] + return amiDescriptionAttributes(d, filteredImages[0]) } // populate the numerous fields that the image description returns. @@ -319,31 +334,23 @@ func amiBlockDeviceMappings(m []*ec2.BlockDeviceMapping) *schema.Set { } for _, v := range m { mapping := map[string]interface{}{ - "device_name": *v.DeviceName, + "device_name": aws.StringValue(v.DeviceName), + "virtual_name": aws.StringValue(v.VirtualName), } + if v.Ebs != nil { ebs := map[string]interface{}{ - "delete_on_termination": fmt.Sprintf("%t", *v.Ebs.DeleteOnTermination), - "encrypted": fmt.Sprintf("%t", *v.Ebs.Encrypted), - "volume_size": fmt.Sprintf("%d", *v.Ebs.VolumeSize), - "volume_type": *v.Ebs.VolumeType, - } - // Iops is not always set - if v.Ebs.Iops != nil { - ebs["iops"] = fmt.Sprintf("%d", *v.Ebs.Iops) - } else { - ebs["iops"] = "0" - } - // snapshot id may not be set - if v.Ebs.SnapshotId != nil { - ebs["snapshot_id"] = *v.Ebs.SnapshotId + "delete_on_termination": fmt.Sprintf("%t", aws.BoolValue(v.Ebs.DeleteOnTermination)), + "encrypted": fmt.Sprintf("%t", aws.BoolValue(v.Ebs.Encrypted)), + "iops": fmt.Sprintf("%d", aws.Int64Value(v.Ebs.Iops)), + "volume_size": fmt.Sprintf("%d", aws.Int64Value(v.Ebs.VolumeSize)), + "snapshot_id": aws.StringValue(v.Ebs.SnapshotId), + "volume_type": aws.StringValue(v.Ebs.VolumeType), } mapping["ebs"] = ebs } - if v.VirtualName != nil { - mapping["virtual_name"] = *v.VirtualName - } + log.Printf("[DEBUG] aws_ami - adding block device mapping: %v", mapping) s.Add(mapping) } @@ -357,8 +364,8 @@ func amiProductCodes(m []*ec2.ProductCode) *schema.Set { } for _, v := range m { code := map[string]interface{}{ - "product_code_id": *v.ProductCodeId, - "product_code_type": *v.ProductCodeType, + "product_code_id": aws.StringValue(v.ProductCodeId), + "product_code_type": aws.StringValue(v.ProductCodeType), } s.Add(code) } @@ -385,8 +392,8 @@ func amiRootSnapshotId(image *ec2.Image) string { func amiStateReason(m *ec2.StateReason) map[string]interface{} { s := make(map[string]interface{}) if m != nil { - s["code"] = *m.Code - s["message"] = *m.Message + s["code"] = aws.StringValue(m.Code) + s["message"] = aws.StringValue(m.Message) } else { s["code"] = "UNSET" s["message"] = "UNSET" diff --git a/aws/data_source_aws_ami_ids.go b/aws/data_source_aws_ami_ids.go index af9e11ca2a2..e75df9c7360 100644 --- a/aws/data_source_aws_ami_ids.go +++ b/aws/data_source_aws_ami_ids.go @@ -4,7 +4,10 @@ import ( "fmt" "log" "regexp" + "sort" + "time" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -40,6 +43,11 @@ func dataSourceAwsAmiIds() *schema.Resource { Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + "sort_ascending": { + Type: schema.TypeBool, + Default: false, + Optional: true, + }, }, } } @@ -51,6 +59,7 @@ func dataSourceAwsAmiIdsRead(d *schema.ResourceData, meta interface{}) error { 
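Both AMI data sources in these hunks drop the old `mostRecentAmi`/`sortImages` helpers in favour of an inline `sort.Slice` over the RFC3339 `CreationDate` strings. A standalone sketch of that comparison, with hypothetical fixture data in place of real `DescribeImages` output:

```go
package main

import (
	"fmt"
	"sort"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Hypothetical fixtures standing in for DescribeImages results.
	images := []*ec2.Image{
		{ImageId: aws.String("ami-older"), CreationDate: aws.String("2016-10-28T21:08:59.000Z")},
		{ImageId: aws.String("ami-newer"), CreationDate: aws.String("2018-08-11T02:30:11.000Z")},
	}

	// Sort newest first by parsing CreationDate; a parse error leaves the
	// zero time, which simply sorts that entry last.
	sort.Slice(images, func(i, j int) bool {
		itime, _ := time.Parse(time.RFC3339, aws.StringValue(images[i].CreationDate))
		jtime, _ := time.Parse(time.RFC3339, aws.StringValue(images[j].CreationDate))
		return itime.Unix() > jtime.Unix()
	})

	fmt.Println(aws.StringValue(images[0].ImageId)) // ami-newer
}
```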
filters, filtersOk := d.GetOk("filter") nameRegex, nameRegexOk := d.GetOk("name_regex") owners, ownersOk := d.GetOk("owners") + sortAscending := d.Get("sort_ascending").(bool) if executableUsersOk == false && filtersOk == false && nameRegexOk == false && ownersOk == false { return fmt.Errorf("One of executable_users, filters, name_regex, or owners must be assigned") @@ -72,6 +81,27 @@ func dataSourceAwsAmiIdsRead(d *schema.ResourceData, meta interface{}) error { } } + // Deprecated: pre-2.0.0 warning logging + if !ownersOk { + log.Print("[WARN] The \"owners\" argument will become required in the next major version.") + log.Print("[WARN] Documentation can be found at: https://www.terraform.io/docs/providers/aws/d/ami.html#owners") + + missingOwnerFilter := true + + if filtersOk { + for _, filter := range params.Filters { + if aws.StringValue(filter.Name) == "owner-alias" || aws.StringValue(filter.Name) == "owner-id" { + missingOwnerFilter = false + break + } + } + } + + if missingOwnerFilter { + log.Print("[WARN] Potential security issue: missing \"owners\" filtering for AMI. Check AMI to ensure it came from trusted source.") + } + } + log.Printf("[DEBUG] Reading AMI IDs: %s", params) resp, err := conn.DescribeImages(params) if err != nil { @@ -101,7 +131,15 @@ func dataSourceAwsAmiIdsRead(d *schema.ResourceData, meta interface{}) error { filteredImages = resp.Images[:] } - for _, image := range sortImages(filteredImages) { + sort.Slice(filteredImages, func(i, j int) bool { + itime, _ := time.Parse(time.RFC3339, aws.StringValue(filteredImages[i].CreationDate)) + jtime, _ := time.Parse(time.RFC3339, aws.StringValue(filteredImages[j].CreationDate)) + if sortAscending { + return itime.Unix() < jtime.Unix() + } + return itime.Unix() > jtime.Unix() + }) + for _, image := range filteredImages { imageIds = append(imageIds, *image.ImageId) } diff --git a/aws/data_source_aws_ami_ids_test.go b/aws/data_source_aws_ami_ids_test.go index 52582eaba1e..1a38b56c6af 100644 --- a/aws/data_source_aws_ami_ids_test.go +++ b/aws/data_source_aws_ami_ids_test.go @@ -5,11 +5,10 @@ import ( "testing" "github.com/hashicorp/terraform/helper/resource" - "github.com/satori/uuid" ) func TestAccDataSourceAwsAmiIds_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -24,30 +23,35 @@ func TestAccDataSourceAwsAmiIds_basic(t *testing.T) { } func TestAccDataSourceAwsAmiIds_sorted(t *testing.T) { - uuid := uuid.NewV4().String() - - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceAwsAmiIdsConfig_sorted1(uuid), + Config: testAccDataSourceAwsAmiIdsConfig_sorted(false), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttrSet("aws_ami_from_instance.a", "id"), - resource.TestCheckResourceAttrSet("aws_ami_from_instance.b", "id"), + testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ami_ids.test"), + resource.TestCheckResourceAttr("data.aws_ami_ids.test", "ids.#", "2"), + resource.TestCheckResourceAttrPair( + "data.aws_ami_ids.test", "ids.0", + "data.aws_ami.amzn_linux_2018_03", "id"), + resource.TestCheckResourceAttrPair( + "data.aws_ami_ids.test", "ids.1", + "data.aws_ami.amzn_linux_2016_09_0", "id"), ), }, + // Make sure when sort_ascending is set, they're sorted in the inverse order + // it 
uses the same config / dataset as above so no need to verify the other + // bits { - Config: testAccDataSourceAwsAmiIdsConfig_sorted2(uuid), + Config: testAccDataSourceAwsAmiIdsConfig_sorted(true), Check: resource.ComposeTestCheckFunc( - testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ami_ids.test"), - resource.TestCheckResourceAttr("data.aws_ami_ids.test", "ids.#", "2"), resource.TestCheckResourceAttrPair( "data.aws_ami_ids.test", "ids.0", - "aws_ami_from_instance.b", "id"), + "data.aws_ami.amzn_linux_2016_09_0", "id"), resource.TestCheckResourceAttrPair( "data.aws_ami_ids.test", "ids.1", - "aws_ami_from_instance.a", "id"), + "data.aws_ami.amzn_linux_2018_03", "id"), ), }, }, @@ -55,7 +59,7 @@ func TestAccDataSourceAwsAmiIds_sorted(t *testing.T) { } func TestAccDataSourceAwsAmiIds_empty(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -81,41 +85,33 @@ data "aws_ami_ids" "ubuntu" { } ` -func testAccDataSourceAwsAmiIdsConfig_sorted1(uuid string) string { +func testAccDataSourceAwsAmiIdsConfig_sorted(sort_ascending bool) string { return fmt.Sprintf(` -resource "aws_instance" "test" { - ami = "ami-efd0428f" - instance_type = "m3.medium" - - count = 2 -} - -resource "aws_ami_from_instance" "a" { - name = "tf-test-%s-a" - source_instance_id = "${aws_instance.test.*.id[0]}" - snapshot_without_reboot = true +data "aws_ami" "amzn_linux_2016_09_0" { + owners = ["amazon"] + filter { + name = "name" + values = ["amzn-ami-hvm-2016.09.0.20161028-x86_64-gp2"] + } } -resource "aws_ami_from_instance" "b" { - name = "tf-test-%s-b" - source_instance_id = "${aws_instance.test.*.id[1]}" - snapshot_without_reboot = true - - // We want to ensure that 'aws_ami_from_instance.a.creation_date' is less - // than 'aws_ami_from_instance.b.creation_date' so that we can ensure that - // the images are being sorted correctly. 
- depends_on = ["aws_ami_from_instance.a"] -} -`, uuid, uuid) +data "aws_ami" "amzn_linux_2018_03" { + owners = ["amazon"] + filter { + name = "name" + values = ["amzn-ami-hvm-2018.03.0.20180811-x86_64-gp2"] + } } -func testAccDataSourceAwsAmiIdsConfig_sorted2(uuid string) string { - return testAccDataSourceAwsAmiIdsConfig_sorted1(uuid) + fmt.Sprintf(` data "aws_ami_ids" "test" { - owners = ["self"] - name_regex = "^tf-test-%s-" + owners = ["amazon"] + filter { + name = "name" + values = ["amzn-ami-hvm-2018.03.0.20180811-x86_64-gp2", "amzn-ami-hvm-2016.09.0.20161028-x86_64-gp2"] + } + sort_ascending = "%t" } -`, uuid) +`, sort_ascending) } const testAccDataSourceAwsAmiIdsConfig_empty = ` diff --git a/aws/data_source_aws_ami_test.go b/aws/data_source_aws_ami_test.go index ab484c0474b..1288b17a9a2 100644 --- a/aws/data_source_aws_ami_test.go +++ b/aws/data_source_aws_ami_test.go @@ -10,7 +10,7 @@ import ( ) func TestAccAWSAmiDataSource_natInstance(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -54,7 +54,7 @@ func TestAccAWSAmiDataSource_natInstance(t *testing.T) { }) } func TestAccAWSAmiDataSource_windowsInstance(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -65,7 +65,7 @@ func TestAccAWSAmiDataSource_windowsInstance(t *testing.T) { resource.TestCheckResourceAttr("data.aws_ami.windows_ami", "architecture", "x86_64"), resource.TestCheckResourceAttr("data.aws_ami.windows_ami", "block_device_mappings.#", "27"), resource.TestMatchResourceAttr("data.aws_ami.windows_ami", "creation_date", regexp.MustCompile("^20[0-9]{2}-")), - resource.TestMatchResourceAttr("data.aws_ami.windows_ami", "description", regexp.MustCompile("^Microsoft Windows Server 2012")), + resource.TestMatchResourceAttr("data.aws_ami.windows_ami", "description", regexp.MustCompile("^Microsoft Windows Server")), resource.TestCheckResourceAttr("data.aws_ami.windows_ami", "hypervisor", "xen"), resource.TestMatchResourceAttr("data.aws_ami.windows_ami", "image_id", regexp.MustCompile("^ami-")), resource.TestMatchResourceAttr("data.aws_ami.windows_ami", "image_location", regexp.MustCompile("^amazon/")), @@ -93,7 +93,7 @@ func TestAccAWSAmiDataSource_windowsInstance(t *testing.T) { } func TestAccAWSAmiDataSource_instanceStore(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -128,7 +128,7 @@ func TestAccAWSAmiDataSource_instanceStore(t *testing.T) { } func TestAccAWSAmiDataSource_owners(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -144,7 +144,7 @@ func TestAccAWSAmiDataSource_owners(t *testing.T) { // Acceptance test for: https://github.com/hashicorp/terraform/issues/10758 func TestAccAWSAmiDataSource_ownersEmpty(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -159,7 +159,7 @@ func TestAccAWSAmiDataSource_ownersEmpty(t *testing.T) { } func 
TestAccAWSAmiDataSource_localNameFilter(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -174,10 +174,6 @@ func TestAccAWSAmiDataSource_localNameFilter(t *testing.T) { }) } -func testAccCheckAwsAmiDataSourceDestroy(s *terraform.State) error { - return nil -} - func testAccCheckAwsAmiDataSourceID(n string) resource.TestCheckFunc { // Wait for IAM role return func(s *terraform.State) error { @@ -285,6 +281,23 @@ const testAccCheckAwsAmiDataSourceEmptyOwnersConfig = ` data "aws_ami" "amazon_ami" { most_recent = true owners = [""] + + # we need to test the owners = [""] for regressions but we want to filter the results + # beyond all public AWS AMIs :) + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-minimal-hvm-*"] + } + + filter { + name = "root-device-type" + values = ["ebs"] + } } ` diff --git a/aws/data_source_aws_api_gateway_api_key.go b/aws/data_source_aws_api_gateway_api_key.go new file mode 100644 index 00000000000..e9a7d7a9733 --- /dev/null +++ b/aws/data_source_aws_api_gateway_api_key.go @@ -0,0 +1,45 @@ +package aws + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsApiGatewayApiKey() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsApiGatewayApiKeyRead, + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Required: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "value": { + Type: schema.TypeString, + Computed: true, + Sensitive: true, + }, + }, + } +} + +func dataSourceAwsApiGatewayApiKeyRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + apiKey, err := conn.GetApiKey(&apigateway.GetApiKeyInput{ + ApiKey: aws.String(d.Get("id").(string)), + IncludeValue: aws.Bool(true), + }) + + if err != nil { + return err + } + + d.SetId(aws.StringValue(apiKey.Id)) + d.Set("name", apiKey.Name) + d.Set("value", apiKey.Value) + return nil +} diff --git a/aws/data_source_aws_api_gateway_api_key_test.go b/aws/data_source_aws_api_gateway_api_key_test.go new file mode 100644 index 00000000000..4bd140c62a7 --- /dev/null +++ b/aws/data_source_aws_api_gateway_api_key_test.go @@ -0,0 +1,42 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsApiGatewayApiKey(t *testing.T) { + rName := acctest.RandString(8) + resourceName1 := "aws_api_gateway_api_key.example_key" + dataSourceName1 := "data.aws_api_gateway_api_key.test_key" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsApiGatewayApiKeyConfig(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName1, "id", dataSourceName1, "id"), + resource.TestCheckResourceAttrPair(resourceName1, "name", dataSourceName1, "name"), + resource.TestCheckResourceAttrPair(resourceName1, "value", dataSourceName1, "value"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsApiGatewayApiKeyConfig(r string) string { + return fmt.Sprintf(` +resource "aws_api_gateway_api_key" "example_key" { + name = "%s" 
+} + +data "aws_api_gateway_api_key" "test_key" { + id = "${aws_api_gateway_api_key.example_key.id}" +} +`, r) +} diff --git a/aws/data_source_aws_api_gateway_resource.go b/aws/data_source_aws_api_gateway_resource.go new file mode 100644 index 00000000000..5092285fcae --- /dev/null +++ b/aws/data_source_aws_api_gateway_resource.go @@ -0,0 +1,67 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsApiGatewayResource() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsApiGatewayResourceRead, + Schema: map[string]*schema.Schema{ + "rest_api_id": { + Type: schema.TypeString, + Required: true, + }, + "path": { + Type: schema.TypeString, + Required: true, + }, + "path_part": { + Type: schema.TypeString, + Computed: true, + }, + "parent_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsApiGatewayResourceRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + + restApiId := d.Get("rest_api_id").(string) + target := d.Get("path").(string) + params := &apigateway.GetResourcesInput{RestApiId: aws.String(restApiId)} + + var match *apigateway.Resource + log.Printf("[DEBUG] Reading API Gateway Resources: %s", params) + err := conn.GetResourcesPages(params, func(page *apigateway.GetResourcesOutput, lastPage bool) bool { + for _, resource := range page.Items { + if aws.StringValue(resource.Path) == target { + match = resource + return false + } + } + return !lastPage + }) + if err != nil { + return fmt.Errorf("error describing API Gateway Resources: %s", err) + } + + if match == nil { + return fmt.Errorf("no Resources with path %q found for rest api %q", target, restApiId) + } + + d.SetId(*match.Id) + d.Set("path_part", match.PathPart) + d.Set("parent_id", match.ParentId) + + return nil +} diff --git a/aws/data_source_aws_api_gateway_resource_test.go b/aws/data_source_aws_api_gateway_resource_test.go new file mode 100644 index 00000000000..ebbb6a97fca --- /dev/null +++ b/aws/data_source_aws_api_gateway_resource_test.go @@ -0,0 +1,65 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsApiGatewayResource(t *testing.T) { + rName := acctest.RandString(8) + resourceName1 := "aws_api_gateway_resource.example_v1" + dataSourceName1 := "data.aws_api_gateway_resource.example_v1" + resourceName2 := "aws_api_gateway_resource.example_v1_endpoint" + dataSourceName2 := "data.aws_api_gateway_resource.example_v1_endpoint" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsApiGatewayResourceConfig(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName1, "id", dataSourceName1, "id"), + resource.TestCheckResourceAttrPair(resourceName1, "parent_id", dataSourceName1, "parent_id"), + resource.TestCheckResourceAttrPair(resourceName1, "path_part", dataSourceName1, "path_part"), + resource.TestCheckResourceAttrPair(resourceName2, "id", dataSourceName2, "id"), + resource.TestCheckResourceAttrPair(resourceName2, "parent_id", dataSourceName2, "parent_id"), + resource.TestCheckResourceAttrPair(resourceName2, "path_part", dataSourceName2, "path_part"), 
+ ), + }, + }, + }) +} + +func testAccDataSourceAwsApiGatewayResourceConfig(r string) string { + return fmt.Sprintf(` +resource "aws_api_gateway_rest_api" "example" { + name = "%s_example" +} + +resource "aws_api_gateway_resource" "example_v1" { + rest_api_id = "${aws_api_gateway_rest_api.example.id}" + parent_id = "${aws_api_gateway_rest_api.example.root_resource_id}" + path_part = "v1" +} + +resource "aws_api_gateway_resource" "example_v1_endpoint" { + rest_api_id = "${aws_api_gateway_rest_api.example.id}" + parent_id = "${aws_api_gateway_resource.example_v1.id}" + path_part = "endpoint" +} + +data "aws_api_gateway_resource" "example_v1" { + rest_api_id = "${aws_api_gateway_rest_api.example.id}" + path = "/${aws_api_gateway_resource.example_v1.path_part}" +} + +data "aws_api_gateway_resource" "example_v1_endpoint" { + rest_api_id = "${aws_api_gateway_rest_api.example.id}" + path = "/${aws_api_gateway_resource.example_v1.path_part}/${aws_api_gateway_resource.example_v1_endpoint.path_part}" +} +`, r) +} diff --git a/aws/data_source_aws_api_gateway_rest_api.go b/aws/data_source_aws_api_gateway_rest_api.go new file mode 100644 index 00000000000..cc71d87d2c6 --- /dev/null +++ b/aws/data_source_aws_api_gateway_rest_api.go @@ -0,0 +1,73 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsApiGatewayRestApi() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsApiGatewayRestApiRead, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "root_resource_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsApiGatewayRestApiRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).apigateway + params := &apigateway.GetRestApisInput{} + + target := d.Get("name") + var matchedApis []*apigateway.RestApi + log.Printf("[DEBUG] Reading API Gateway REST APIs: %s", params) + err := conn.GetRestApisPages(params, func(page *apigateway.GetRestApisOutput, lastPage bool) bool { + for _, api := range page.Items { + if aws.StringValue(api.Name) == target { + matchedApis = append(matchedApis, api) + } + } + return !lastPage + }) + if err != nil { + return fmt.Errorf("error describing API Gateway REST APIs: %s", err) + } + + if len(matchedApis) == 0 { + return fmt.Errorf("no REST APIs with name %q found in this region", target) + } + if len(matchedApis) > 1 { + return fmt.Errorf("multiple REST APIs with name %q found in this region", target) + } + + match := matchedApis[0] + + d.SetId(*match.Id) + + resp, err := conn.GetResources(&apigateway.GetResourcesInput{ + RestApiId: aws.String(d.Id()), + }) + if err != nil { + return err + } + + for _, item := range resp.Items { + if *item.Path == "/" { + d.Set("root_resource_id", item.Id) + break + } + } + + return nil +} diff --git a/aws/data_source_aws_api_gateway_rest_api_test.go b/aws/data_source_aws_api_gateway_rest_api_test.go new file mode 100644 index 00000000000..bfa89d0efe6 --- /dev/null +++ b/aws/data_source_aws_api_gateway_rest_api_test.go @@ -0,0 +1,80 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsApiGatewayRestApi(t *testing.T) { + rName := acctest.RandString(8) + 
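+	// rName prefixes the REST API names below; tf_wrong1 and tf_wrong2 verify the data source matches only the exact name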
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsApiGatewayRestApiConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsApiGatewayRestApiCheck("data.aws_api_gateway_rest_api.by_name"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsApiGatewayRestApiCheck(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + resources, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("root module has no resource called %s", name) + } + + apiGatewayRestApiResources, ok := s.RootModule().Resources["aws_api_gateway_rest_api.tf_test"] + if !ok { + return fmt.Errorf("can't find aws_api_gateway_rest_api.tf_test in state") + } + + attr := resources.Primary.Attributes + + if attr["name"] != apiGatewayRestApiResources.Primary.Attributes["name"] { + return fmt.Errorf( + "name is %s; want %s", + attr["name"], + apiGatewayRestApiResources.Primary.Attributes["name"], + ) + } + + if attr["root_resource_id"] != apiGatewayRestApiResources.Primary.Attributes["root_resource_id"] { + return fmt.Errorf( + "root_resource_id is %s; want %s", + attr["root_resource_id"], + apiGatewayRestApiResources.Primary.Attributes["root_resource_id"], + ) + } + + return nil + } +} + +func testAccDataSourceAwsApiGatewayRestApiConfig(r string) string { + return fmt.Sprintf(` +resource "aws_api_gateway_rest_api" "tf_wrong1" { +name = "%s_wrong1" +} + +resource "aws_api_gateway_rest_api" "tf_test" { +name = "%s_correct" +} + +resource "aws_api_gateway_rest_api" "tf_wrong2" { +name = "%s_wrong1" +} + +data "aws_api_gateway_rest_api" "by_name" { +name = "${aws_api_gateway_rest_api.tf_test.name}" +} +`, r, r, r) +} diff --git a/aws/data_source_aws_arn.go b/aws/data_source_aws_arn.go new file mode 100644 index 00000000000..93a36935551 --- /dev/null +++ b/aws/data_source_aws_arn.go @@ -0,0 +1,59 @@ +package aws + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsArn() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsArnRead, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArn, + }, + "partition": { + Type: schema.TypeString, + Computed: true, + }, + "service": { + Type: schema.TypeString, + Computed: true, + }, + "region": { + Type: schema.TypeString, + Computed: true, + }, + "account": { + Type: schema.TypeString, + Computed: true, + }, + "resource": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsArnRead(d *schema.ResourceData, meta interface{}) error { + v := d.Get("arn").(string) + arn, err := arn.Parse(v) + if err != nil { + return fmt.Errorf("Error parsing '%s': %s", v, err.Error()) + } + + d.SetId(arn.String()) + d.Set("partition", arn.Partition) + d.Set("service", arn.Service) + d.Set("region", arn.Region) + d.Set("account", arn.AccountID) + d.Set("resource", arn.Resource) + + return nil +} diff --git a/aws/data_source_aws_arn_test.go b/aws/data_source_aws_arn_test.go new file mode 100644 index 00000000000..68c7ac7c0ad --- /dev/null +++ b/aws/data_source_aws_arn_test.go @@ -0,0 +1,48 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsArn_basic(t *testing.T) { + resourceName := 
"data.aws_arn.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsArnConfig, + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsArn(resourceName), + resource.TestCheckResourceAttr(resourceName, "partition", "aws"), + resource.TestCheckResourceAttr(resourceName, "service", "rds"), + resource.TestCheckResourceAttr(resourceName, "region", "eu-west-1"), + resource.TestCheckResourceAttr(resourceName, "account", "123456789012"), + resource.TestCheckResourceAttr(resourceName, "resource", "db:mysql-db"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsArn(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("root module has no resource called %s", name) + } + + return nil + } +} + +const testAccDataSourceAwsArnConfig = ` +data "aws_arn" "test" { + arn = "arn:aws:rds:eu-west-1:123456789012:db:mysql-db" +} +` diff --git a/aws/data_source_aws_autoscaling_groups.go b/aws/data_source_aws_autoscaling_groups.go index 012195e1799..a3b8bb709fd 100644 --- a/aws/data_source_aws_autoscaling_groups.go +++ b/aws/data_source_aws_autoscaling_groups.go @@ -21,6 +21,11 @@ func dataSourceAwsAutoscalingGroups() *schema.Resource { Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + "arns": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "filter": { Type: schema.TypeSet, Optional: true, @@ -49,40 +54,66 @@ func dataSourceAwsAutoscalingGroupsRead(d *schema.ResourceData, meta interface{} log.Printf("[DEBUG] Reading Autoscaling Groups.") d.SetId(time.Now().UTC().String()) - var raw []string + var rawName []string + var rawArn []string + var err error tf := d.Get("filter").(*schema.Set) if tf.Len() > 0 { - out, err := conn.DescribeTags(&autoscaling.DescribeTagsInput{ + input := &autoscaling.DescribeTagsInput{ Filters: expandAsgTagFilters(tf.List()), - }) - if err != nil { - return err } + err = conn.DescribeTagsPages(input, func(resp *autoscaling.DescribeTagsOutput, lastPage bool) bool { + for _, v := range resp.Tags { + rawName = append(rawName, aws.StringValue(v.ResourceId)) + } + return !lastPage + }) - raw = make([]string, len(out.Tags)) - for i, v := range out.Tags { - raw[i] = *v.ResourceId + maxAutoScalingGroupNames := 1600 + for i := 0; i < len(rawName); i += maxAutoScalingGroupNames { + end := i + maxAutoScalingGroupNames + + if end > len(rawName) { + end = len(rawName) + } + + nameInput := &autoscaling.DescribeAutoScalingGroupsInput{ + AutoScalingGroupNames: aws.StringSlice(rawName[i:end]), + MaxRecords: aws.Int64(100), + } + + err = conn.DescribeAutoScalingGroupsPages(nameInput, func(resp *autoscaling.DescribeAutoScalingGroupsOutput, lastPage bool) bool { + for _, group := range resp.AutoScalingGroups { + rawArn = append(rawArn, aws.StringValue(group.AutoScalingGroupARN)) + } + return !lastPage + }) } } else { - - resp, err := conn.DescribeAutoScalingGroups(&autoscaling.DescribeAutoScalingGroupsInput{}) - if err != nil { - return fmt.Errorf("Error fetching Autoscaling Groups: %s", err) - } - - raw = make([]string, len(resp.AutoScalingGroups)) - for i, v := range resp.AutoScalingGroups { - raw[i] = *v.AutoScalingGroupName - } + err = conn.DescribeAutoScalingGroupsPages(&autoscaling.DescribeAutoScalingGroupsInput{}, func(resp *autoscaling.DescribeAutoScalingGroupsOutput, lastPage bool) bool { + for 
_, group := range resp.AutoScalingGroups { + rawName = append(rawName, aws.StringValue(group.AutoScalingGroupName)) + rawArn = append(rawArn, aws.StringValue(group.AutoScalingGroupARN)) + } + return !lastPage + }) + } + if err != nil { + return fmt.Errorf("Error fetching Autoscaling Groups: %s", err) } - sort.Strings(raw) + sort.Strings(rawName) + sort.Strings(rawArn) - if err := d.Set("names", raw); err != nil { + if err := d.Set("names", rawName); err != nil { return fmt.Errorf("[WARN] Error setting Autoscaling Group Names: %s", err) } + if err := d.Set("arns", rawArn); err != nil { + return fmt.Errorf("[WARN] Error setting Autoscaling Group Arns: %s", err) + } + return nil } diff --git a/aws/data_source_aws_autoscaling_groups_test.go b/aws/data_source_aws_autoscaling_groups_test.go index 3a6ba764480..b4a787b1ad3 100644 --- a/aws/data_source_aws_autoscaling_groups_test.go +++ b/aws/data_source_aws_autoscaling_groups_test.go @@ -13,7 +13,7 @@ import ( ) func TestAccAWSAutoscalingGroups_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -25,6 +25,7 @@ func TestAccAWSAutoscalingGroups_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAwsAutoscalingGroups("data.aws_autoscaling_groups.group_list"), resource.TestCheckResourceAttr("data.aws_autoscaling_groups.group_list", "names.#", "3"), + resource.TestCheckResourceAttr("data.aws_autoscaling_groups.group_list", "arns.#", "3"), ), }, }, @@ -81,13 +82,29 @@ func testAccCheckAwsAutoscalingGroupsAvailable(attrs map[string]string) ([]strin func testAccCheckAwsAutoscalingGroupsConfig(rInt1, rInt2, rInt3 int) string { return fmt.Sprintf(` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +data "aws_availability_zones" "available" {} + resource "aws_launch_configuration" "foobar" { - image_id = "ami-21f78e11" + image_id = "${data.aws_ami.test_ami.id}" instance_type = "t1.micro" } resource "aws_autoscaling_group" "bar" { - availability_zones = ["us-west-2a"] + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] name = "test-asg-%d" max_size = 1 min_size = 0 @@ -105,7 +122,7 @@ resource "aws_autoscaling_group" "bar" { } resource "aws_autoscaling_group" "foo" { - availability_zones = ["us-west-2b"] + availability_zones = ["${data.aws_availability_zones.available.names[1]}"] name = "test-asg-%d" max_size = 1 min_size = 0 @@ -123,7 +140,7 @@ resource "aws_autoscaling_group" "foo" { } resource "aws_autoscaling_group" "barbaz" { - availability_zones = ["us-west-2c"] + availability_zones = ["${data.aws_availability_zones.available.names[2]}"] name = "test-asg-%d" max_size = 1 min_size = 0 @@ -143,13 +160,29 @@ resource "aws_autoscaling_group" "barbaz" { func testAccCheckAwsAutoscalingGroupsConfigWithDataSource(rInt1, rInt2, rInt3 int) string { return fmt.Sprintf(` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +data "aws_availability_zones" "available" {} + resource "aws_launch_configuration" "foobar" { - image_id = "ami-21f78e11" + image_id = "${data.aws_ami.test_ami.id}" instance_type = "t1.micro" } resource "aws_autoscaling_group" "bar" { - availability_zones = ["us-west-2a"] + 
availability_zones = ["${data.aws_availability_zones.available.names[0]}"] name = "test-asg-%d" max_size = 1 min_size = 0 @@ -167,7 +200,7 @@ resource "aws_autoscaling_group" "bar" { } resource "aws_autoscaling_group" "foo" { - availability_zones = ["us-west-2b"] + availability_zones = ["${data.aws_availability_zones.available.names[1]}"] name = "test-asg-%d" max_size = 1 min_size = 0 @@ -185,7 +218,7 @@ resource "aws_autoscaling_group" "foo" { } resource "aws_autoscaling_group" "barbaz" { - availability_zones = ["us-west-2c"] + availability_zones = ["${data.aws_availability_zones.available.names[2]}"] name = "test-asg-%d" max_size = 1 min_size = 0 diff --git a/aws/data_source_aws_availability_zone_test.go b/aws/data_source_aws_availability_zone_test.go index 8808011dbc1..e06f2f57d76 100644 --- a/aws/data_source_aws_availability_zone_test.go +++ b/aws/data_source_aws_availability_zone_test.go @@ -9,11 +9,11 @@ import ( ) func TestAccDataSourceAwsAvailabilityZone(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsAvailabilityZoneConfig, Check: resource.ComposeTestCheckFunc( testAccDataSourceAwsAvailabilityZoneCheck("data.aws_availability_zone.by_name"), diff --git a/aws/data_source_aws_availability_zones.go b/aws/data_source_aws_availability_zones.go index 3a6f3876433..3d033de6ddb 100644 --- a/aws/data_source_aws_availability_zones.go +++ b/aws/data_source_aws_availability_zones.go @@ -67,7 +67,7 @@ func dataSourceAwsAvailabilityZonesRead(d *schema.ResourceData, meta interface{} sort.Strings(raw) if err := d.Set("names", raw); err != nil { - return fmt.Errorf("[WARN] Error setting Availability Zones: %s", err) + return fmt.Errorf("Error setting Availability Zones: %s", err) } return nil diff --git a/aws/data_source_aws_availability_zones_test.go b/aws/data_source_aws_availability_zones_test.go index 7f2bf00b7cf..9a3be5cbeba 100644 --- a/aws/data_source_aws_availability_zones_test.go +++ b/aws/data_source_aws_availability_zones_test.go @@ -12,7 +12,7 @@ import ( ) func TestAccAWSAvailabilityZones_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -27,7 +27,7 @@ func TestAccAWSAvailabilityZones_basic(t *testing.T) { } func TestAccAWSAvailabilityZones_stateFilter(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_batch_compute_environment.go b/aws/data_source_aws_batch_compute_environment.go new file mode 100644 index 00000000000..11c363150be --- /dev/null +++ b/aws/data_source_aws_batch_compute_environment.go @@ -0,0 +1,93 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/batch" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsBatchComputeEnvironment() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsBatchComputeEnvironmentRead, + + Schema: map[string]*schema.Schema{ + "compute_environment_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + 
"ecs_cluster_arn": { + Type: schema.TypeString, + Computed: true, + }, + + "service_role": { + Type: schema.TypeString, + Computed: true, + }, + + "type": { + Type: schema.TypeString, + Computed: true, + }, + + "status": { + Type: schema.TypeString, + Computed: true, + }, + + "status_reason": { + Type: schema.TypeString, + Computed: true, + }, + + "state": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsBatchComputeEnvironmentRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).batchconn + + params := &batch.DescribeComputeEnvironmentsInput{ + ComputeEnvironments: []*string{aws.String(d.Get("compute_environment_name").(string))}, + } + log.Printf("[DEBUG] Reading Batch Compute Environment: %s", params) + desc, err := conn.DescribeComputeEnvironments(params) + + if err != nil { + return err + } + + if len(desc.ComputeEnvironments) == 0 { + return fmt.Errorf("no matches found for name: %s", d.Get("compute_environment_name").(string)) + } + + if len(desc.ComputeEnvironments) > 1 { + return fmt.Errorf("multiple matches found for name: %s", d.Get("compute_environment_name").(string)) + } + + computeEnvironment := desc.ComputeEnvironments[0] + d.SetId(aws.StringValue(computeEnvironment.ComputeEnvironmentArn)) + d.Set("arn", computeEnvironment.ComputeEnvironmentArn) + d.Set("compute_environment_name", computeEnvironment.ComputeEnvironmentName) + d.Set("ecs_cluster_arn", computeEnvironment.EcsClusterArn) + d.Set("service_role", computeEnvironment.ServiceRole) + d.Set("type", computeEnvironment.Type) + d.Set("status", computeEnvironment.Status) + d.Set("status_reason", computeEnvironment.StatusReason) + d.Set("state", computeEnvironment.State) + return nil +} diff --git a/aws/data_source_aws_batch_compute_environment_test.go b/aws/data_source_aws_batch_compute_environment_test.go new file mode 100644 index 00000000000..919c1584a1f --- /dev/null +++ b/aws/data_source_aws_batch_compute_environment_test.go @@ -0,0 +1,177 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsBatchComputeEnvironment(t *testing.T) { + rName := acctest.RandomWithPrefix("tf_acc_test_") + resourceName := "aws_batch_compute_environment.test" + datasourceName := "data.aws_batch_compute_environment.by_name" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsBatchComputeEnvironmentConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsBatchComputeEnvironmentCheck(datasourceName, resourceName), + ), + }, + }, + }) +} + +func testAccDataSourceAwsBatchComputeEnvironmentCheck(datasourceName, resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + ds, ok := s.RootModule().Resources[datasourceName] + if !ok { + return fmt.Errorf("root module has no data source called %s", datasourceName) + } + + batchCeRs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("root module has no resource called %s", resourceName) + } + + attrNames := []string{ + "arn", + "compute_environment_name", + } + + for _, attrName := range attrNames { + if ds.Primary.Attributes[attrName] != batchCeRs.Primary.Attributes[attrName] { + return fmt.Errorf( + "%s is %s; want %s", + attrName, + 
ds.Primary.Attributes[attrName], + batchCeRs.Primary.Attributes[attrName], + ) + } + } + + return nil + } +} + +func testAccDataSourceAwsBatchComputeEnvironmentConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "ecs_instance_role" { + name = "ecs_%[1]s" + assume_role_policy = < 1 { + return fmt.Errorf("multiple matches found for name: %s", d.Get("name").(string)) + } + + jobQueue := desc.JobQueues[0] + d.SetId(aws.StringValue(jobQueue.JobQueueArn)) + d.Set("arn", jobQueue.JobQueueArn) + d.Set("name", jobQueue.JobQueueName) + d.Set("status", jobQueue.Status) + d.Set("status_reason", jobQueue.StatusReason) + d.Set("state", jobQueue.State) + d.Set("priority", jobQueue.Priority) + + ceos := make([]map[string]interface{}, 0) + for _, v := range jobQueue.ComputeEnvironmentOrder { + ceo := map[string]interface{}{} + ceo["compute_environment"] = aws.StringValue(v.ComputeEnvironment) + ceo["order"] = int(aws.Int64Value(v.Order)) + ceos = append(ceos, ceo) + } + if err := d.Set("compute_environment_order", ceos); err != nil { + return fmt.Errorf("error setting compute_environment_order: %s", err) + } + + return nil +} diff --git a/aws/data_source_aws_batch_job_queue_test.go b/aws/data_source_aws_batch_job_queue_test.go new file mode 100644 index 00000000000..b6d1fb674ed --- /dev/null +++ b/aws/data_source_aws_batch_job_queue_test.go @@ -0,0 +1,171 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsBatchJobQueue(t *testing.T) { + rName := acctest.RandomWithPrefix("tf_acc_test_") + resourceName := "aws_batch_job_queue.test" + datasourceName := "data.aws_batch_job_queue.by_name" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsBatchJobQueueConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsBatchJobQueueCheck(datasourceName, resourceName), + ), + }, + }, + }) +} + +func testAccDataSourceAwsBatchJobQueueCheck(datasourceName, resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + ds, ok := s.RootModule().Resources[datasourceName] + if !ok { + return fmt.Errorf("root module has no data source called %s", datasourceName) + } + + jobQueueRs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("root module has no resource called %s", resourceName) + } + + attrNames := []string{ + "arn", + "name", + "state", + "priority", + } + + for _, attrName := range attrNames { + if ds.Primary.Attributes[attrName] != jobQueueRs.Primary.Attributes[attrName] { + return fmt.Errorf( + "%s is %s; want %s", + attrName, + ds.Primary.Attributes[attrName], + jobQueueRs.Primary.Attributes[attrName], + ) + } + } + + return nil + } +} + +func testAccDataSourceAwsBatchJobQueueConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "ecs_instance_role" { + name = "ecs_%[1]s" + assume_role_policy = < 0 { + input.Filters["states"] = states + } + out, err := conn.DescribeClusters(input) + + if err != nil { + return fmt.Errorf("error describing CloudHSM v2 Cluster: %s", err) + } + + var cluster *cloudhsmv2.Cluster + for _, c := range out.Clusters { + if aws.StringValue(c.ClusterId) == clusterId { + cluster = c + break + } + } + + if cluster == nil { + return fmt.Errorf("cluster 
with id %s not found", clusterId) + } + + d.SetId(clusterId) + d.Set("vpc_id", cluster.VpcId) + d.Set("security_group_id", cluster.SecurityGroup) + d.Set("cluster_state", cluster.State) + if err := d.Set("cluster_certificates", readCloudHsm2ClusterCertificates(cluster)); err != nil { + return fmt.Errorf("error setting cluster_certificates: %s", err) + } + + var subnets []string + for _, sn := range cluster.SubnetMapping { + subnets = append(subnets, *sn) + } + + if err := d.Set("subnet_ids", subnets); err != nil { + return fmt.Errorf("[DEBUG] Error saving Subnet IDs to state for CloudHSM v2 Cluster (%s): %s", d.Id(), err) + } + + return nil +} diff --git a/aws/data_source_aws_cloudhsm2_cluster_test.go b/aws/data_source_aws_cloudhsm2_cluster_test.go new file mode 100644 index 00000000000..d9b608eeed8 --- /dev/null +++ b/aws/data_source_aws_cloudhsm2_cluster_test.go @@ -0,0 +1,73 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceCloudHsm2Cluster_basic(t *testing.T) { + resourceName := "aws_cloudhsm_v2_cluster.cluster" + dataSourceName := "data.aws_cloudhsm_v2_cluster.default" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckCloudHsm2ClusterDataSourceConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "cluster_state", "UNINITIALIZED"), + resource.TestCheckResourceAttrPair(dataSourceName, "cluster_id", resourceName, "cluster_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "cluster_state", resourceName, "cluster_state"), + resource.TestCheckResourceAttrPair(dataSourceName, "security_group_id", resourceName, "security_group_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "subnet_ids.#", resourceName, "subnet_ids.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "vpc_id", resourceName, "vpc_id"), + ), + }, + }, + }) +} + +var testAccCheckCloudHsm2ClusterDataSourceConfig = fmt.Sprintf(` +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "cloudhsm2_test_vpc" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-aws_cloudhsm_v2_cluster-data-source-basic" + } +} + +resource "aws_subnet" "cloudhsm2_test_subnets" { + count = 2 + vpc_id = "${aws_vpc.cloudhsm2_test_vpc.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = false + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-aws_cloudhsm_v2_cluster-data-source-basic" + } +} + +resource "aws_cloudhsm_v2_cluster" "cluster" { + hsm_type = "hsm1.medium" + subnet_ids = ["${aws_subnet.cloudhsm2_test_subnets.*.id}"] + tags { + Name = "tf-acc-aws_cloudhsm_v2_cluster-data-source-basic-%d" + } +} + +data "aws_cloudhsm_v2_cluster" "default" { + cluster_id = "${aws_cloudhsm_v2_cluster.cluster.cluster_id}" +} +`, acctest.RandInt()) diff --git a/aws/data_source_aws_cloudtrail_service_account.go b/aws/data_source_aws_cloudtrail_service_account.go index b7129bbfa97..33574a033f3 100644 --- a/aws/data_source_aws_cloudtrail_service_account.go +++ b/aws/data_source_aws_cloudtrail_service_account.go @@ -3,6 +3,7 @@ package aws import ( "fmt" + "github.com/aws/aws-sdk-go/aws/arn" 
"github.com/hashicorp/terraform/helper/schema" ) @@ -51,7 +52,14 @@ func dataSourceAwsCloudTrailServiceAccountRead(d *schema.ResourceData, meta inte if accid, ok := cloudTrailServiceAccountPerRegionMap[region]; ok { d.SetId(accid) - d.Set("arn", iamArnString(meta.(*AWSClient).partition, accid, "root")) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "iam", + AccountID: accid, + Resource: "root", + }.String() + d.Set("arn", arn) + return nil } diff --git a/aws/data_source_aws_cloudtrail_service_account_test.go b/aws/data_source_aws_cloudtrail_service_account_test.go index 0e5cdd4bc4d..6eb9ea90221 100644 --- a/aws/data_source_aws_cloudtrail_service_account_test.go +++ b/aws/data_source_aws_cloudtrail_service_account_test.go @@ -7,18 +7,18 @@ import ( ) func TestAccAWSCloudTrailServiceAccount_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAwsCloudTrailServiceAccountConfig, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("data.aws_cloudtrail_service_account.main", "id", "113285607260"), resource.TestCheckResourceAttr("data.aws_cloudtrail_service_account.main", "arn", "arn:aws:iam::113285607260:root"), ), }, - resource.TestStep{ + { Config: testAccCheckAwsCloudTrailServiceAccountExplicitRegionConfig, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("data.aws_cloudtrail_service_account.regional", "id", "282025262664"), diff --git a/aws/data_source_aws_cloudwatch_log_group.go b/aws/data_source_aws_cloudwatch_log_group.go new file mode 100644 index 00000000000..134f7f5dce7 --- /dev/null +++ b/aws/data_source_aws_cloudwatch_log_group.go @@ -0,0 +1,47 @@ +package aws + +import ( + "fmt" + + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsCloudwatchLogGroup() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsCloudwatchLogGroupRead, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "creation_time": { + Type: schema.TypeInt, + Computed: true, + }, + }, + } +} + +func dataSourceAwsCloudwatchLogGroupRead(d *schema.ResourceData, meta interface{}) error { + name := d.Get("name").(string) + conn := meta.(*AWSClient).cloudwatchlogsconn + + logGroup, err := lookupCloudWatchLogGroup(conn, name) + if err != nil { + return err + } + if logGroup == nil { + return fmt.Errorf("No log group named %s found\n", name) + } + + d.SetId(name) + d.Set("arn", logGroup.Arn) + d.Set("creation_time", logGroup.CreationTime) + + return nil +} diff --git a/aws/data_source_aws_cloudwatch_log_group_test.go b/aws/data_source_aws_cloudwatch_log_group_test.go new file mode 100644 index 00000000000..53a13c1587c --- /dev/null +++ b/aws/data_source_aws_cloudwatch_log_group_test.go @@ -0,0 +1,41 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSCloudwatchLogGroupDataSource(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAWSCloudwatchLogGroupDataSourceConfig(rName), + Check: 
resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_cloudwatch_log_group.test", "name", rName), + resource.TestCheckResourceAttrSet("data.aws_cloudwatch_log_group.test", "arn"), + resource.TestCheckResourceAttrSet("data.aws_cloudwatch_log_group.test", "creation_time"), + ), + }, + }, + }) + return +} + +func testAccCheckAWSCloudwatchLogGroupDataSourceConfig(rName string) string { + return fmt.Sprintf(` +resource aws_cloudwatch_log_group "test" { + name = "%s" +} + +data aws_cloudwatch_log_group "test" { + name = "${aws_cloudwatch_log_group.test.name}" +} +`, rName) +} diff --git a/aws/data_source_aws_codecommit_repository.go b/aws/data_source_aws_codecommit_repository.go new file mode 100644 index 00000000000..aec07a0e018 --- /dev/null +++ b/aws/data_source_aws_codecommit_repository.go @@ -0,0 +1,79 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/codecommit" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func dataSourceAwsCodeCommitRepository() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsCodeCommitRepositoryRead, + + Schema: map[string]*schema.Schema{ + "repository_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 100), + }, + + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "repository_id": { + Type: schema.TypeString, + Computed: true, + }, + + "clone_url_http": { + Type: schema.TypeString, + Computed: true, + }, + + "clone_url_ssh": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsCodeCommitRepositoryRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).codecommitconn + + repositoryName := d.Get("repository_name").(string) + input := &codecommit.GetRepositoryInput{ + RepositoryName: aws.String(repositoryName), + } + + out, err := conn.GetRepository(input) + if err != nil { + if isAWSErr(err, codecommit.ErrCodeRepositoryDoesNotExistException, "") { + log.Printf("[WARN] CodeCommit Repository (%s) not found, removing from state", d.Id()) + d.SetId("") + return fmt.Errorf("Resource codecommit repository not found for %s", repositoryName) + } else { + return fmt.Errorf("Error reading CodeCommit Repository: %s", err.Error()) + } + } + + if out.RepositoryMetadata == nil { + return fmt.Errorf("no matches found for repository name: %s", repositoryName) + } + + d.SetId(aws.StringValue(out.RepositoryMetadata.RepositoryName)) + d.Set("arn", out.RepositoryMetadata.Arn) + d.Set("clone_url_http", out.RepositoryMetadata.CloneUrlHttp) + d.Set("clone_url_ssh", out.RepositoryMetadata.CloneUrlSsh) + d.Set("repository_name", out.RepositoryMetadata.RepositoryName) + d.Set("repository_id", out.RepositoryMetadata.RepositoryId) + + return nil +} diff --git a/aws/data_source_aws_codecommit_repository_test.go b/aws/data_source_aws_codecommit_repository_test.go new file mode 100644 index 00000000000..76e438035b3 --- /dev/null +++ b/aws/data_source_aws_codecommit_repository_test.go @@ -0,0 +1,43 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSCodeCommitRepositoryDataSource_basic(t *testing.T) { + rName := fmt.Sprintf("tf-acctest-%d", acctest.RandInt()) + resourceName := "aws_codecommit_repository.default" + datasourceName := 
"data.aws_codecommit_repository.default" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsCodeCommitRepositoryDataSourceConfig(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(datasourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(datasourceName, "clone_url_http", resourceName, "clone_url_http"), + resource.TestCheckResourceAttrPair(datasourceName, "clone_url_ssh", resourceName, "clone_url_ssh"), + resource.TestCheckResourceAttrPair(datasourceName, "repository_name", resourceName, "repository_name"), + ), + }, + }, + }) +} + +func testAccCheckAwsCodeCommitRepositoryDataSourceConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_codecommit_repository" "default" { + repository_name = "%s" +} + +data "aws_codecommit_repository" "default" { + repository_name = "${aws_codecommit_repository.default.repository_name}" +} +`, rName) +} diff --git a/aws/data_source_aws_cognito_user_pools.go b/aws/data_source_aws_cognito_user_pools.go new file mode 100644 index 00000000000..700f9f4958d --- /dev/null +++ b/aws/data_source_aws_cognito_user_pools.go @@ -0,0 +1,96 @@ +package aws + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsCognitoUserPools() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsCognitoUserPoolsRead, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "arns": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + } +} + +func dataSourceAwsCognitoUserPoolsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + name := d.Get("name").(string) + var ids []string + var arns []string + + pools, err := getAllCognitoUserPools(conn) + if err != nil { + return fmt.Errorf("Error listing cognito user pools: %s", err) + } + for _, pool := range pools { + if name == aws.StringValue(pool.Name) { + id := aws.StringValue(pool.Id) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "cognito-idp", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("userpool/%s", id), + }.String() + + ids = append(ids, id) + arns = append(arns, arn) + } + } + + if len(ids) == 0 { + return fmt.Errorf("No cognito user pool found with name: %s", name) + } + + d.SetId(name) + d.Set("ids", ids) + d.Set("arns", arns) + + return nil +} + +func getAllCognitoUserPools(conn *cognitoidentityprovider.CognitoIdentityProvider) ([]*cognitoidentityprovider.UserPoolDescriptionType, error) { + var pools []*cognitoidentityprovider.UserPoolDescriptionType + var nextToken string + + for { + input := &cognitoidentityprovider.ListUserPoolsInput{ + // MaxResults Valid Range: Minimum value of 1. Maximum value of 60 + MaxResults: aws.Int64(int64(60)), + } + if nextToken != "" { + input.NextToken = aws.String(nextToken) + } + out, err := conn.ListUserPools(input) + if err != nil { + return pools, err + } + pools = append(pools, out.UserPools...) 
+ + if out.NextToken == nil { + break + } + nextToken = aws.StringValue(out.NextToken) + } + + return pools, nil +} diff --git a/aws/data_source_aws_cognito_user_pools_test.go b/aws/data_source_aws_cognito_user_pools_test.go new file mode 100644 index 00000000000..d75c356ee4e --- /dev/null +++ b/aws/data_source_aws_cognito_user_pools_test.go @@ -0,0 +1,52 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsCognitoUserPools_basic(t *testing.T) { + rName := fmt.Sprintf("tf_acc_ds_cognito_user_pools_%s", acctest.RandString(7)) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsCognitoUserPoolsConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_cognito_user_pools.selected", "ids.#", "2"), + resource.TestCheckResourceAttr("data.aws_cognito_user_pools.selected", "arns.#", "2"), + ), + }, + { + Config: testAccDataSourceAwsCognitoUserPoolsConfig_notFound(rName), + ExpectError: regexp.MustCompile(`No cognito user pool found with name:`), + }, + }, + }) +} + +func testAccDataSourceAwsCognitoUserPoolsConfig_basic(rName string) string { + return fmt.Sprintf(` +resource "aws_cognito_user_pool" "main" { + count = 2 + name = "%s" +} + +data "aws_cognito_user_pools" "selected" { + name = "${aws_cognito_user_pool.main.*.name[0]}" +} +`, rName) +} + +func testAccDataSourceAwsCognitoUserPoolsConfig_notFound(rName string) string { + return fmt.Sprintf(` +data "aws_cognito_user_pools" "selected" { + name = "%s-not-found" +} +`, rName) +} diff --git a/aws/data_source_aws_db_cluster_snapshot.go b/aws/data_source_aws_db_cluster_snapshot.go new file mode 100644 index 00000000000..d756b4806ea --- /dev/null +++ b/aws/data_source_aws_db_cluster_snapshot.go @@ -0,0 +1,203 @@ +package aws + +import ( + "errors" + "fmt" + "log" + "sort" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsDbClusterSnapshot() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsDbClusterSnapshotRead, + + Schema: map[string]*schema.Schema{ + //selection criteria + "db_cluster_identifier": { + Type: schema.TypeString, + Optional: true, + }, + + "db_cluster_snapshot_identifier": { + Type: schema.TypeString, + Optional: true, + }, + + "snapshot_type": { + Type: schema.TypeString, + Optional: true, + }, + + "include_shared": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "include_public": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "most_recent": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + //Computed values returned + "allocated_storage": { + Type: schema.TypeInt, + Computed: true, + }, + "availability_zones": { + Type: schema.TypeList, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + }, + "db_cluster_snapshot_arn": { + Type: schema.TypeString, + Computed: true, + }, + "storage_encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + "engine": { + Type: schema.TypeString, + Computed: true, + }, + "engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "license_model": { + Type: 
schema.TypeString, + Computed: true, + }, + "port": { + Type: schema.TypeInt, + Computed: true, + }, + "source_db_cluster_snapshot_arn": { + Type: schema.TypeString, + Computed: true, + }, + "snapshot_create_time": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsDbClusterSnapshotRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + clusterIdentifier, clusterIdentifierOk := d.GetOk("db_cluster_identifier") + snapshotIdentifier, snapshotIdentifierOk := d.GetOk("db_cluster_snapshot_identifier") + + if !clusterIdentifierOk && !snapshotIdentifierOk { + return errors.New("One of db_cluster_snapshot_identifier or db_cluster_identifier must be assigned") + } + + params := &rds.DescribeDBClusterSnapshotsInput{ + IncludePublic: aws.Bool(d.Get("include_public").(bool)), + IncludeShared: aws.Bool(d.Get("include_shared").(bool)), + } + if v, ok := d.GetOk("snapshot_type"); ok { + params.SnapshotType = aws.String(v.(string)) + } + if clusterIdentifierOk { + params.DBClusterIdentifier = aws.String(clusterIdentifier.(string)) + } + if snapshotIdentifierOk { + params.DBClusterSnapshotIdentifier = aws.String(snapshotIdentifier.(string)) + } + + log.Printf("[DEBUG] Reading DB Cluster Snapshot: %s", params) + resp, err := conn.DescribeDBClusterSnapshots(params) + if err != nil { + return err + } + + if len(resp.DBClusterSnapshots) < 1 { + return errors.New("Your query returned no results. Please change your search criteria and try again.") + } + + var snapshot *rds.DBClusterSnapshot + if len(resp.DBClusterSnapshots) > 1 { + recent := d.Get("most_recent").(bool) + log.Printf("[DEBUG] aws_db_cluster_snapshot - multiple results found and `most_recent` is set to: %t", recent) + if recent { + snapshot = mostRecentDbClusterSnapshot(resp.DBClusterSnapshots) + } else { + return errors.New("Your query returned more than one result. 
Please try a more specific search criteria.") + } + } else { + snapshot = resp.DBClusterSnapshots[0] + } + + d.SetId(aws.StringValue(snapshot.DBClusterSnapshotIdentifier)) + d.Set("allocated_storage", snapshot.AllocatedStorage) + if err := d.Set("availability_zones", flattenStringList(snapshot.AvailabilityZones)); err != nil { + return fmt.Errorf("error setting availability_zones: %s", err) + } + d.Set("db_cluster_identifier", snapshot.DBClusterIdentifier) + d.Set("db_cluster_snapshot_arn", snapshot.DBClusterSnapshotArn) + d.Set("db_cluster_snapshot_identifier", snapshot.DBClusterSnapshotIdentifier) + d.Set("engine", snapshot.Engine) + d.Set("engine_version", snapshot.EngineVersion) + d.Set("kms_key_id", snapshot.KmsKeyId) + d.Set("license_model", snapshot.LicenseModel) + d.Set("port", snapshot.Port) + if snapshot.SnapshotCreateTime != nil { + d.Set("snapshot_create_time", snapshot.SnapshotCreateTime.Format(time.RFC3339)) + } + d.Set("snapshot_type", snapshot.SnapshotType) + d.Set("source_db_cluster_snapshot_arn", snapshot.SourceDBClusterSnapshotArn) + d.Set("status", snapshot.Status) + d.Set("storage_encrypted", snapshot.StorageEncrypted) + d.Set("vpc_id", snapshot.VpcId) + + return nil +} + +type rdsClusterSnapshotSort []*rds.DBClusterSnapshot + +func (a rdsClusterSnapshotSort) Len() int { return len(a) } +func (a rdsClusterSnapshotSort) Swap(i, j int) { a[i], a[j] = a[j], a[i] } +func (a rdsClusterSnapshotSort) Less(i, j int) bool { + // Snapshot creation can be in progress + if a[i].SnapshotCreateTime == nil { + return true + } + if a[j].SnapshotCreateTime == nil { + return false + } + + return (*a[i].SnapshotCreateTime).Before(*a[j].SnapshotCreateTime) +} + +func mostRecentDbClusterSnapshot(snapshots []*rds.DBClusterSnapshot) *rds.DBClusterSnapshot { + sortedSnapshots := snapshots + sort.Sort(rdsClusterSnapshotSort(sortedSnapshots)) + return sortedSnapshots[len(sortedSnapshots)-1] +} diff --git a/aws/data_source_aws_db_cluster_snapshot_test.go b/aws/data_source_aws_db_cluster_snapshot_test.go new file mode 100644 index 00000000000..fd76c25d2db --- /dev/null +++ b/aws/data_source_aws_db_cluster_snapshot_test.go @@ -0,0 +1,265 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSDbClusterSnapshotDataSource_DbClusterSnapshotIdentifier(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + dataSourceName := "data.aws_db_cluster_snapshot.test" + resourceName := "aws_db_cluster_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsDbClusterSnapshotDataSourceConfig_DbClusterSnapshotIdentifier(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDbClusterSnapshotDataSourceExists(dataSourceName), + resource.TestCheckResourceAttrPair(dataSourceName, "allocated_storage", resourceName, "allocated_storage"), + resource.TestCheckResourceAttrPair(dataSourceName, "availability_zones.#", resourceName, "availability_zones.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_identifier", resourceName, "db_cluster_identifier"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_snapshot_arn", resourceName, "db_cluster_snapshot_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_snapshot_identifier", 
resourceName, "db_cluster_snapshot_identifier"), + resource.TestCheckResourceAttrPair(dataSourceName, "engine", resourceName, "engine"), + resource.TestCheckResourceAttrPair(dataSourceName, "engine_version", resourceName, "engine_version"), + resource.TestCheckResourceAttrPair(dataSourceName, "kms_key_id", resourceName, "kms_key_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "license_model", resourceName, "license_model"), + resource.TestCheckResourceAttrPair(dataSourceName, "port", resourceName, "port"), + resource.TestCheckResourceAttrSet(dataSourceName, "snapshot_create_time"), + resource.TestCheckResourceAttrPair(dataSourceName, "snapshot_type", resourceName, "snapshot_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "source_db_cluster_snapshot_arn", resourceName, "source_db_cluster_snapshot_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "status", resourceName, "status"), + resource.TestCheckResourceAttrPair(dataSourceName, "storage_encrypted", resourceName, "storage_encrypted"), + resource.TestCheckResourceAttrPair(dataSourceName, "vpc_id", resourceName, "vpc_id"), + ), + }, + }, + }) +} + +func TestAccAWSDbClusterSnapshotDataSource_DbClusterIdentifier(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + dataSourceName := "data.aws_db_cluster_snapshot.test" + resourceName := "aws_db_cluster_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsDbClusterSnapshotDataSourceConfig_DbClusterIdentifier(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDbClusterSnapshotDataSourceExists(dataSourceName), + resource.TestCheckResourceAttrPair(dataSourceName, "allocated_storage", resourceName, "allocated_storage"), + resource.TestCheckResourceAttrPair(dataSourceName, "availability_zones.#", resourceName, "availability_zones.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_identifier", resourceName, "db_cluster_identifier"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_snapshot_arn", resourceName, "db_cluster_snapshot_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_snapshot_identifier", resourceName, "db_cluster_snapshot_identifier"), + resource.TestCheckResourceAttrPair(dataSourceName, "engine", resourceName, "engine"), + resource.TestCheckResourceAttrPair(dataSourceName, "engine_version", resourceName, "engine_version"), + resource.TestCheckResourceAttrPair(dataSourceName, "kms_key_id", resourceName, "kms_key_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "license_model", resourceName, "license_model"), + resource.TestCheckResourceAttrPair(dataSourceName, "port", resourceName, "port"), + resource.TestCheckResourceAttrSet(dataSourceName, "snapshot_create_time"), + resource.TestCheckResourceAttrPair(dataSourceName, "snapshot_type", resourceName, "snapshot_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "source_db_cluster_snapshot_arn", resourceName, "source_db_cluster_snapshot_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "status", resourceName, "status"), + resource.TestCheckResourceAttrPair(dataSourceName, "storage_encrypted", resourceName, "storage_encrypted"), + resource.TestCheckResourceAttrPair(dataSourceName, "vpc_id", resourceName, "vpc_id"), + ), + }, + }, + }) +} + +func TestAccAWSDbClusterSnapshotDataSource_MostRecent(t *testing.T) { + rName := 
acctest.RandomWithPrefix("tf-acc-test") + dataSourceName := "data.aws_db_cluster_snapshot.test" + resourceName := "aws_db_cluster_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsDbClusterSnapshotDataSourceConfig_MostRecent(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDbClusterSnapshotDataSourceExists(dataSourceName), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_snapshot_arn", resourceName, "db_cluster_snapshot_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_snapshot_identifier", resourceName, "db_cluster_snapshot_identifier"), + ), + }, + }, + }) +} + +func testAccCheckAwsDbClusterSnapshotDataSourceExists(dataSourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[dataSourceName] + if !ok { + return fmt.Errorf("Can't find data source: %s", dataSourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("Snapshot data source ID not set") + } + return nil + } +} + +func testAccCheckAwsDbClusterSnapshotDataSourceConfig_DbClusterSnapshotIdentifier(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "192.168.${count.index}.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_db_subnet_group" "test" { + name = %q + subnet_ids = ["${aws_subnet.test.*.id}"] +} + +resource "aws_rds_cluster" "test" { + cluster_identifier = %q + db_subnet_group_name = "${aws_db_subnet_group.test.name}" + master_password = "barbarbarbar" + master_username = "foo" + skip_final_snapshot = true +} + +resource "aws_db_cluster_snapshot" "test" { + db_cluster_identifier = "${aws_rds_cluster.test.id}" + db_cluster_snapshot_identifier = %q +} + +data "aws_db_cluster_snapshot" "test" { + db_cluster_snapshot_identifier = "${aws_db_cluster_snapshot.test.id}" +} +`, rName, rName, rName, rName, rName) +} + +func testAccCheckAwsDbClusterSnapshotDataSourceConfig_DbClusterIdentifier(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "192.168.${count.index}.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_db_subnet_group" "test" { + name = %q + subnet_ids = ["${aws_subnet.test.*.id}"] +} + +resource "aws_rds_cluster" "test" { + cluster_identifier = %q + db_subnet_group_name = "${aws_db_subnet_group.test.name}" + master_password = "barbarbarbar" + master_username = "foo" + skip_final_snapshot = true +} + +resource "aws_db_cluster_snapshot" "test" { + db_cluster_identifier = "${aws_rds_cluster.test.id}" + db_cluster_snapshot_identifier = %q +} + +data "aws_db_cluster_snapshot" "test" { + db_cluster_identifier = "${aws_db_cluster_snapshot.test.db_cluster_identifier}" +} +`, rName, rName, rName, rName, rName) +} + +func testAccCheckAwsDbClusterSnapshotDataSourceConfig_MostRecent(rName string) string { + return fmt.Sprintf(` +data 
"aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "192.168.${count.index}.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_db_subnet_group" "test" { + name = %q + subnet_ids = ["${aws_subnet.test.*.id}"] +} + +resource "aws_rds_cluster" "test" { + cluster_identifier = %q + db_subnet_group_name = "${aws_db_subnet_group.test.name}" + master_password = "barbarbarbar" + master_username = "foo" + skip_final_snapshot = true +} + +resource "aws_db_cluster_snapshot" "incorrect" { + db_cluster_identifier = "${aws_rds_cluster.test.id}" + db_cluster_snapshot_identifier = "%s-incorrect" +} + +resource "aws_db_cluster_snapshot" "test" { + db_cluster_identifier = "${aws_db_cluster_snapshot.incorrect.db_cluster_identifier}" + db_cluster_snapshot_identifier = %q +} + +data "aws_db_cluster_snapshot" "test" { + db_cluster_identifier = "${aws_db_cluster_snapshot.test.db_cluster_identifier}" + most_recent = true +} +`, rName, rName, rName, rName, rName, rName) +} diff --git a/aws/data_source_aws_db_event_categories.go b/aws/data_source_aws_db_event_categories.go new file mode 100644 index 00000000000..688e1deb33f --- /dev/null +++ b/aws/data_source_aws_db_event_categories.go @@ -0,0 +1,66 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsDbEventCategories() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsDbEventCategoriesRead, + + Schema: map[string]*schema.Schema{ + "source_type": { + Type: schema.TypeString, + Optional: true, + }, + "event_categories": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func dataSourceAwsDbEventCategoriesRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + req := &rds.DescribeEventCategoriesInput{} + + if sourceType := d.Get("source_type").(string); sourceType != "" { + req.SourceType = aws.String(sourceType) + } + + log.Printf("[DEBUG] Describe Event Categories %s\n", req) + resp, err := conn.DescribeEventCategories(req) + if err != nil { + return err + } + + if resp == nil || len(resp.EventCategoriesMapList) == 0 { + return fmt.Errorf("Event Categories not found") + } + + eventCategories := make([]string, 0) + + for _, eventMap := range resp.EventCategoriesMapList { + for _, v := range eventMap.EventCategories { + eventCategories = append(eventCategories, aws.StringValue(v)) + } + } + + d.SetId(resource.UniqueId()) + if err := d.Set("event_categories", eventCategories); err != nil { + return fmt.Errorf("Error setting Event Categories: %s", err) + } + + return nil + +} diff --git a/aws/data_source_aws_db_event_categories_test.go b/aws/data_source_aws_db_event_categories_test.go new file mode 100644 index 00000000000..859977fc5b9 --- /dev/null +++ b/aws/data_source_aws_db_event_categories_test.go @@ -0,0 +1,130 @@ +package aws + +import ( + "fmt" + "reflect" + "regexp" + "sort" + "strconv" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func 
TestAccAWSDbEventCategories_basic(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsDbEventCategoriesConfig, + Check: resource.ComposeTestCheckFunc( + testAccAwsDbEventCategoriesAttrCheck("data.aws_db_event_categories.example", + completeEventCategoriesList), + ), + }, + }, + }) +} + +func TestAccAWSDbEventCategories_sourceType(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsDbEventCategoriesConfig_sourceType, + Check: resource.ComposeTestCheckFunc( + testAccAwsDbEventCategoriesAttrCheck("data.aws_db_event_categories.example", + DbSnapshotEventCategoriesList), + ), + }, + }, + }) +} + +func testAccAwsDbEventCategoriesAttrCheck(n string, expected []string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Can't find DB Event Categories: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("DB Event Categories resource ID not set.") + } + + actual, err := testAccCheckAwsDbEventCategoriesBuild(rs.Primary.Attributes) + if err != nil { + return err + } + + sort.Strings(actual) + sort.Strings(expected) + if reflect.DeepEqual(expected, actual) != true { + return fmt.Errorf("DB Event Categories not matched: expected %v, got %v", expected, actual) + } + + return nil + } +} + +func testAccCheckAwsDbEventCategoriesBuild(attrs map[string]string) ([]string, error) { + v, ok := attrs["event_categories.#"] + if !ok { + return nil, fmt.Errorf("DB Event Categories list is missing.") + } + + qty, err := strconv.Atoi(v) + if err != nil { + return nil, err + } + if qty < 1 { + return nil, fmt.Errorf("No DB Event Categories found.") + } + + var eventCategories []string + for k, v := range attrs { + matched, _ := regexp.MatchString("event_categories.[0-9]+", k) + if matched { + eventCategories = append(eventCategories, v) + } + } + + return eventCategories, nil +} + +var testAccCheckAwsDbEventCategoriesConfig = ` +data "aws_db_event_categories" "example" {} +` + +var completeEventCategoriesList = []string{ + "notification", + "deletion", + "failover", + "maintenance", + "availability", + "read replica", + "failure", + "configuration change", + "recovery", + "low storage", + "backup", + "creation", + "backtrack", + "restoration", +} + +var testAccCheckAwsDbEventCategoriesConfig_sourceType = ` +data "aws_db_event_categories" "example" { + source_type = "db-snapshot" +} +` + +var DbSnapshotEventCategoriesList = []string{ + "notification", + "deletion", + "creation", + "restoration", +} diff --git a/aws/data_source_aws_db_instance.go b/aws/data_source_aws_db_instance.go index bf8df80ec92..fe3fb8898d1 100644 --- a/aws/data_source_aws_db_instance.go +++ b/aws/data_source_aws_db_instance.go @@ -87,6 +87,12 @@ func dataSourceAwsDbInstance() *schema.Resource { Computed: true, }, + "enabled_cloudwatch_logs_exports": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "endpoint": { Type: schema.TypeString, Computed: true, @@ -241,7 +247,7 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error parameterGroups = append(parameterGroups, *v.DBParameterGroupName) } if err := d.Set("db_parameter_groups", parameterGroups); err != nil { - return fmt.Errorf("[DEBUG] Error 
setting db_parameter_groups attribute: %#v, error: %#v", parameterGroups, err) + return fmt.Errorf("Error setting db_parameter_groups attribute: %#v, error: %#v", parameterGroups, err) } var dbSecurityGroups []string @@ -249,7 +255,7 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error dbSecurityGroups = append(dbSecurityGroups, *v.DBSecurityGroupName) } if err := d.Set("db_security_groups", dbSecurityGroups); err != nil { - return fmt.Errorf("[DEBUG] Error setting db_security_groups attribute: %#v, error: %#v", dbSecurityGroups, err) + return fmt.Errorf("Error setting db_security_groups attribute: %#v, error: %#v", dbSecurityGroups, err) } if dbInstance.DBSubnetGroup != nil { @@ -272,12 +278,16 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error d.Set("hosted_zone_id", dbInstance.Endpoint.HostedZoneId) d.Set("endpoint", fmt.Sprintf("%s:%d", *dbInstance.Endpoint.Address, *dbInstance.Endpoint.Port)) + if err := d.Set("enabled_cloudwatch_logs_exports", aws.StringValueSlice(dbInstance.EnabledCloudwatchLogsExports)); err != nil { + return fmt.Errorf("error setting enabled_cloudwatch_logs_exports: %#v", err) + } + var optionGroups []string for _, v := range dbInstance.OptionGroupMemberships { optionGroups = append(optionGroups, *v.OptionGroupName) } if err := d.Set("option_group_memberships", optionGroups); err != nil { - return fmt.Errorf("[DEBUG] Error setting option_group_memberships attribute: %#v, error: %#v", optionGroups, err) + return fmt.Errorf("Error setting option_group_memberships attribute: %#v, error: %#v", optionGroups, err) } d.Set("preferred_backup_window", dbInstance.PreferredBackupWindow) @@ -294,7 +304,7 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error vpcSecurityGroups = append(vpcSecurityGroups, *v.VpcSecurityGroupId) } if err := d.Set("vpc_security_groups", vpcSecurityGroups); err != nil { - return fmt.Errorf("[DEBUG] Error setting vpc_security_groups attribute: %#v, error: %#v", vpcSecurityGroups, err) + return fmt.Errorf("Error setting vpc_security_groups attribute: %#v, error: %#v", vpcSecurityGroups, err) } return nil diff --git a/aws/data_source_aws_db_instance_test.go b/aws/data_source_aws_db_instance_test.go index 1879f0f27fd..75e721af55c 100644 --- a/aws/data_source_aws_db_instance_test.go +++ b/aws/data_source_aws_db_instance_test.go @@ -11,7 +11,7 @@ import ( func TestAccAWSDbInstanceDataSource_basic(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -28,6 +28,8 @@ func TestAccAWSDbInstanceDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "hosted_zone_id"), resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "master_username"), resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "port"), + resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "enabled_cloudwatch_logs_exports.0"), + resource.TestCheckResourceAttrSet("data.aws_db_instance.bar", "enabled_cloudwatch_logs_exports.1"), ), }, }, @@ -41,7 +43,7 @@ func TestAccAWSDbInstanceDataSource_ec2Classic(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -69,6 +71,11 @@ 
resource "aws_db_instance" "bar" { backup_retention_period = 0 skip_final_snapshot = true + + enabled_cloudwatch_logs_exports = [ + "audit", + "error", + ] } data "aws_db_instance" "bar" { diff --git a/aws/data_source_aws_db_snapshot_test.go b/aws/data_source_aws_db_snapshot_test.go index c0a0255d902..dd6620d64e2 100644 --- a/aws/data_source_aws_db_snapshot_test.go +++ b/aws/data_source_aws_db_snapshot_test.go @@ -11,7 +11,7 @@ import ( func TestAccAWSDbSnapshotDataSource_basic(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_dx_gateway.go b/aws/data_source_aws_dx_gateway.go new file mode 100644 index 00000000000..f80025c9e3a --- /dev/null +++ b/aws/data_source_aws_dx_gateway.go @@ -0,0 +1,66 @@ +package aws + +import ( + "fmt" + "strconv" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsDxGateway() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsDxGatewayRead, + + Schema: map[string]*schema.Schema{ + "amazon_side_asn": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + }, + } +} + +func dataSourceAwsDxGatewayRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + name := d.Get("name").(string) + + gateways := make([]*directconnect.Gateway, 0) + // DescribeDirectConnectGatewaysInput does not have a name parameter for filtering + input := &directconnect.DescribeDirectConnectGatewaysInput{} + for { + output, err := conn.DescribeDirectConnectGateways(input) + if err != nil { + return fmt.Errorf("error reading Direct Connect Gateway: %s", err) + } + for _, gateway := range output.DirectConnectGateways { + if aws.StringValue(gateway.DirectConnectGatewayName) == name { + gateways = append(gateways, gateway) + } + } + if output.NextToken == nil { + break + } + input.NextToken = output.NextToken + } + + if len(gateways) == 0 { + return fmt.Errorf("Direct Connect Gateway not found for name: %s", name) + } + + if len(gateways) > 1 { + return fmt.Errorf("Multiple Direct Connect Gateways found for name: %s", name) + } + + gateway := gateways[0] + + d.SetId(aws.StringValue(gateway.DirectConnectGatewayId)) + d.Set("amazon_side_asn", strconv.FormatInt(aws.Int64Value(gateway.AmazonSideAsn), 10)) + + return nil +} diff --git a/aws/data_source_aws_dx_gateway_test.go b/aws/data_source_aws_dx_gateway_test.go new file mode 100644 index 00000000000..b73bca3b56b --- /dev/null +++ b/aws/data_source_aws_dx_gateway_test.go @@ -0,0 +1,58 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsDxGateway_Basic(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_dx_gateway.test" + datasourceName := "data.aws_dx_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsDxGatewayConfig_NonExistent, + ExpectError: regexp.MustCompile(`Direct Connect Gateway not found`), + }, + { + Config: testAccDataSourceAwsDxGatewayConfig_Name(rName, 
randIntRange(64512, 65534)), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(datasourceName, "amazon_side_asn", resourceName, "amazon_side_asn"), + resource.TestCheckResourceAttrPair(datasourceName, "id", resourceName, "id"), + resource.TestCheckResourceAttrPair(datasourceName, "name", resourceName, "name"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsDxGatewayConfig_Name(rName string, rBgpAsn int) string { + return fmt.Sprintf(` +resource "aws_dx_gateway" "wrong" { + amazon_side_asn = "%d" + name = "%s-wrong" +} +resource "aws_dx_gateway" "test" { + amazon_side_asn = "%d" + name = "%s" +} + +data "aws_dx_gateway" "test" { + name = "${aws_dx_gateway.test.name}" +} +`, rBgpAsn+1, rName, rBgpAsn, rName) +} + +const testAccDataSourceAwsDxGatewayConfig_NonExistent = ` +data "aws_dx_gateway" "test" { + name = "tf-acc-test-does-not-exist" +} +` diff --git a/aws/data_source_aws_dynamodb_table_test.go b/aws/data_source_aws_dynamodb_table_test.go index 1f90d59dae4..211ad4f1fbd 100644 --- a/aws/data_source_aws_dynamodb_table_test.go +++ b/aws/data_source_aws_dynamodb_table_test.go @@ -11,7 +11,7 @@ import ( func TestAccDataSourceAwsDynamoDbTable_basic(t *testing.T) { tableName := fmt.Sprintf("testaccawsdynamodbtable-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_ebs_snapshot.go b/aws/data_source_aws_ebs_snapshot.go index 7b7cb29d2df..cbcd941ec4e 100644 --- a/aws/data_source_aws_ebs_snapshot.go +++ b/aws/data_source_aws_ebs_snapshot.go @@ -3,7 +3,9 @@ package aws import ( "fmt" "log" + "sort" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/schema" ) @@ -117,29 +119,22 @@ func dataSourceAwsEbsSnapshotRead(d *schema.ResourceData, meta interface{}) erro return err } - var snapshot *ec2.Snapshot if len(resp.Snapshots) < 1 { return fmt.Errorf("Your query returned no results. Please change your search criteria and try again.") } if len(resp.Snapshots) > 1 { - recent := d.Get("most_recent").(bool) - log.Printf("[DEBUG] aws_ebs_snapshot - multiple results found and `most_recent` is set to: %t", recent) - if recent { - snapshot = mostRecentSnapshot(resp.Snapshots) - } else { - return fmt.Errorf("Your query returned more than one result. Please try a more specific search criteria.") + if !d.Get("most_recent").(bool) { + return fmt.Errorf("Your query returned more than one result. 
Please try a more " + + "specific search criteria, or set `most_recent` attribute to true.") } - } else { - snapshot = resp.Snapshots[0] + sort.Slice(resp.Snapshots, func(i, j int) bool { + return aws.TimeValue(resp.Snapshots[i].StartTime).Unix() > aws.TimeValue(resp.Snapshots[j].StartTime).Unix() + }) } //Single Snapshot found so set to state - return snapshotDescriptionAttributes(d, snapshot) -} - -func mostRecentSnapshot(snapshots []*ec2.Snapshot) *ec2.Snapshot { - return sortSnapshots(snapshots)[0] + return snapshotDescriptionAttributes(d, resp.Snapshots[0]) } func snapshotDescriptionAttributes(d *schema.ResourceData, snapshot *ec2.Snapshot) error { diff --git a/aws/data_source_aws_ebs_snapshot_ids.go b/aws/data_source_aws_ebs_snapshot_ids.go index 65a8ab10123..5041e25b316 100644 --- a/aws/data_source_aws_ebs_snapshot_ids.go +++ b/aws/data_source_aws_ebs_snapshot_ids.go @@ -3,7 +3,9 @@ package aws import ( "fmt" "log" + "sort" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -67,7 +69,10 @@ func dataSourceAwsEbsSnapshotIdsRead(d *schema.ResourceData, meta interface{}) e snapshotIds := make([]string, 0) - for _, snapshot := range sortSnapshots(resp.Snapshots) { + sort.Slice(resp.Snapshots, func(i, j int) bool { + return aws.TimeValue(resp.Snapshots[i].StartTime).Unix() > aws.TimeValue(resp.Snapshots[j].StartTime).Unix() + }) + for _, snapshot := range resp.Snapshots { snapshotIds = append(snapshotIds, *snapshot.SnapshotId) } diff --git a/aws/data_source_aws_ebs_snapshot_ids_test.go b/aws/data_source_aws_ebs_snapshot_ids_test.go index 0c5f3ec4d4d..c0e112848b3 100644 --- a/aws/data_source_aws_ebs_snapshot_ids_test.go +++ b/aws/data_source_aws_ebs_snapshot_ids_test.go @@ -4,12 +4,12 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" - "github.com/satori/uuid" ) func TestAccDataSourceAwsEbsSnapshotIds_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -24,21 +24,21 @@ func TestAccDataSourceAwsEbsSnapshotIds_basic(t *testing.T) { } func TestAccDataSourceAwsEbsSnapshotIds_sorted(t *testing.T) { - uuid := uuid.NewV4().String() + rName := acctest.RandomWithPrefix("tf-acc-test") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid), + Config: testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(rName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet("aws_ebs_snapshot.a", "id"), resource.TestCheckResourceAttrSet("aws_ebs_snapshot.b", "id"), ), }, { - Config: testAccDataSourceAwsEbsSnapshotIdsConfig_sorted2(uuid), + Config: testAccDataSourceAwsEbsSnapshotIdsConfig_sorted2(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.test"), resource.TestCheckResourceAttr("data.aws_ebs_snapshot_ids.test", "ids.#", "2"), @@ -55,7 +55,7 @@ func TestAccDataSourceAwsEbsSnapshotIds_sorted(t *testing.T) { } func TestAccDataSourceAwsEbsSnapshotIds_empty(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, 
resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -85,7 +85,7 @@ data "aws_ebs_snapshot_ids" "test" { } ` -func testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid string) string { +func testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(rName string) string { return fmt.Sprintf(` resource "aws_ebs_volume" "test" { availability_zone = "us-west-2a" @@ -96,32 +96,32 @@ resource "aws_ebs_volume" "test" { resource "aws_ebs_snapshot" "a" { volume_id = "${aws_ebs_volume.test.*.id[0]}" - description = "tf-test-%s" + description = %q } resource "aws_ebs_snapshot" "b" { volume_id = "${aws_ebs_volume.test.*.id[1]}" - description = "tf-test-%s" + description = %q // We want to ensure that 'aws_ebs_snapshot.a.creation_date' is less than // 'aws_ebs_snapshot.b.creation_date'/ so that we can ensure that the // snapshots are being sorted correctly. depends_on = ["aws_ebs_snapshot.a"] } -`, uuid, uuid) +`, rName, rName) } -func testAccDataSourceAwsEbsSnapshotIdsConfig_sorted2(uuid string) string { - return testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(uuid) + fmt.Sprintf(` +func testAccDataSourceAwsEbsSnapshotIdsConfig_sorted2(rName string) string { + return testAccDataSourceAwsEbsSnapshotIdsConfig_sorted1(rName) + fmt.Sprintf(` data "aws_ebs_snapshot_ids" "test" { owners = ["self"] filter { name = "description" - values = ["tf-test-%s"] + values = [%q] } } -`, uuid) +`, rName) } const testAccDataSourceAwsEbsSnapshotIdsConfig_empty = ` diff --git a/aws/data_source_aws_ebs_snapshot_test.go b/aws/data_source_aws_ebs_snapshot_test.go index b9113a65453..ebe24cd5954 100644 --- a/aws/data_source_aws_ebs_snapshot_test.go +++ b/aws/data_source_aws_ebs_snapshot_test.go @@ -9,34 +9,64 @@ import ( ) func TestAccAWSEbsSnapshotDataSource_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + dataSourceName := "data.aws_ebs_snapshot.test" + resourceName := "aws_ebs_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccCheckAwsEbsSnapshotDataSourceConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot.snapshot"), - resource.TestCheckResourceAttr("data.aws_ebs_snapshot.snapshot", "volume_size", "40"), - resource.TestCheckResourceAttr("data.aws_ebs_snapshot.snapshot", "tags.%", "0"), + testAccCheckAwsEbsSnapshotDataSourceID(dataSourceName), + resource.TestCheckResourceAttrPair(dataSourceName, "id", resourceName, "id"), + resource.TestCheckResourceAttrPair(dataSourceName, "description", resourceName, "description"), + resource.TestCheckResourceAttrPair(dataSourceName, "encrypted", resourceName, "encrypted"), + resource.TestCheckResourceAttrPair(dataSourceName, "kms_key_id", resourceName, "kms_key_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "owner_alias", resourceName, "owner_alias"), + resource.TestCheckResourceAttrPair(dataSourceName, "owner_id", resourceName, "owner_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "tags.%", resourceName, "tags.%"), + resource.TestCheckResourceAttrPair(dataSourceName, "volume_id", resourceName, "volume_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "volume_size", resourceName, "volume_size"), ), }, }, }) } -func TestAccAWSEbsSnapshotDataSource_multipleFilters(t *testing.T) { - resource.Test(t, resource.TestCase{ +func TestAccAWSEbsSnapshotDataSource_Filter(t *testing.T) { + dataSourceName := 
"data.aws_ebs_snapshot.test" + resourceName := "aws_ebs_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { - Config: testAccCheckAwsEbsSnapshotDataSourceConfigWithMultipleFilters, + Config: testAccCheckAwsEbsSnapshotDataSourceConfigFilter, Check: resource.ComposeTestCheckFunc( - testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot.snapshot"), - resource.TestCheckResourceAttr("data.aws_ebs_snapshot.snapshot", "volume_size", "10"), - resource.TestCheckResourceAttr("data.aws_ebs_snapshot.snapshot", "tags.%", "1"), - resource.TestCheckResourceAttr("data.aws_ebs_snapshot.snapshot", "tags.Name", "TF ACC Snapshot"), + testAccCheckAwsEbsSnapshotDataSourceID(dataSourceName), + resource.TestCheckResourceAttrPair(dataSourceName, "id", resourceName, "id"), + ), + }, + }, + }) +} + +func TestAccAWSEbsSnapshotDataSource_MostRecent(t *testing.T) { + dataSourceName := "data.aws_ebs_snapshot.test" + resourceName := "aws_ebs_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsEbsSnapshotDataSourceConfigMostRecent, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsEbsSnapshotDataSourceID(dataSourceName), + resource.TestCheckResourceAttrPair(dataSourceName, "id", resourceName, "id"), ), }, }, @@ -58,48 +88,75 @@ func testAccCheckAwsEbsSnapshotDataSourceID(n string) resource.TestCheckFunc { } const testAccCheckAwsEbsSnapshotDataSourceConfig = ` -resource "aws_ebs_volume" "example" { - availability_zone = "us-west-2a" +data "aws_availability_zones" "available" {} + +resource "aws_ebs_volume" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "gp2" - size = 40 - tags { - Name = "External Volume" - } + size = 1 } -resource "aws_ebs_snapshot" "snapshot" { - volume_id = "${aws_ebs_volume.example.id}" +resource "aws_ebs_snapshot" "test" { + volume_id = "${aws_ebs_volume.test.id}" } -data "aws_ebs_snapshot" "snapshot" { - most_recent = true - snapshot_ids = ["${aws_ebs_snapshot.snapshot.id}"] +data "aws_ebs_snapshot" "test" { + snapshot_ids = ["${aws_ebs_snapshot.test.id}"] } ` -const testAccCheckAwsEbsSnapshotDataSourceConfigWithMultipleFilters = ` -resource "aws_ebs_volume" "external1" { - availability_zone = "us-west-2a" +const testAccCheckAwsEbsSnapshotDataSourceConfigFilter = ` +data "aws_availability_zones" "available" {} + +resource "aws_ebs_volume" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "gp2" - size = 10 - tags { - Name = "External Volume 1" + size = 1 +} + +resource "aws_ebs_snapshot" "test" { + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_ebs_snapshot" "test" { + filter { + name = "snapshot-id" + values = ["${aws_ebs_snapshot.test.id}"] } } +` -resource "aws_ebs_snapshot" "snapshot" { - volume_id = "${aws_ebs_volume.external1.id}" - tags { - Name = "TF ACC Snapshot" +const testAccCheckAwsEbsSnapshotDataSourceConfigMostRecent = ` +data "aws_availability_zones" "available" {} + +resource "aws_ebs_volume" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + type = "gp2" + size = 1 +} + +resource "aws_ebs_snapshot" "incorrect" { + volume_id = "${aws_ebs_volume.test.id}" + + tags = { + Name = "tf-acc-test-ec2-ebs-snapshot-data-source-most-recent" + } +} + +resource "aws_ebs_snapshot" "test" { + volume_id = 
"${aws_ebs_snapshot.incorrect.volume_id}" + + tags = { + Name = "tf-acc-test-ec2-ebs-snapshot-data-source-most-recent" } } -data "aws_ebs_snapshot" "snapshot" { +data "aws_ebs_snapshot" "test" { most_recent = true - snapshot_ids = ["${aws_ebs_snapshot.snapshot.id}"] + filter { - name = "volume-size" - values = ["10"] + name = "tag:Name" + values = ["${aws_ebs_snapshot.test.tags.Name}"] } } ` diff --git a/aws/data_source_aws_ebs_volume_test.go b/aws/data_source_aws_ebs_volume_test.go index 072b471910e..31f359569a4 100644 --- a/aws/data_source_aws_ebs_volume_test.go +++ b/aws/data_source_aws_ebs_volume_test.go @@ -9,7 +9,7 @@ import ( ) func TestAccAWSEbsVolumeDataSource_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -28,7 +28,7 @@ func TestAccAWSEbsVolumeDataSource_basic(t *testing.T) { } func TestAccAWSEbsVolumeDataSource_multipleFilters(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -61,8 +61,10 @@ func testAccCheckAwsEbsVolumeDataSourceID(n string) resource.TestCheckFunc { } const testAccCheckAwsEbsVolumeDataSourceConfig = ` +data "aws_availability_zones" "available" {} + resource "aws_ebs_volume" "example" { - availability_zone = "us-west-2a" + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "gp2" size = 40 tags { @@ -84,8 +86,10 @@ data "aws_ebs_volume" "ebs_volume" { ` const testAccCheckAwsEbsVolumeDataSourceConfigWithMultipleFilters = ` +data "aws_availability_zones" "available" {} + resource "aws_ebs_volume" "external1" { - availability_zone = "us-west-2a" + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "gp2" size = 10 tags { diff --git a/aws/data_source_aws_ecr_repository_test.go b/aws/data_source_aws_ecr_repository_test.go index b8280c406a2..4c2f049a00b 100644 --- a/aws/data_source_aws_ecr_repository_test.go +++ b/aws/data_source_aws_ecr_repository_test.go @@ -10,7 +10,7 @@ import ( ) func TestAccAWSEcrDataSource_ecrRepository(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_ecs_cluster.go b/aws/data_source_aws_ecs_cluster.go index b6dbc63f4c9..308bdb9a95a 100644 --- a/aws/data_source_aws_ecs_cluster.go +++ b/aws/data_source_aws_ecs_cluster.go @@ -61,21 +61,21 @@ func dataSourceAwsEcsClusterRead(d *schema.ResourceData, meta interface{}) error return err } - for _, cluster := range desc.Clusters { - if aws.StringValue(cluster.ClusterName) != d.Get("cluster_name").(string) { - continue - } - d.SetId(aws.StringValue(cluster.ClusterArn)) - d.Set("arn", cluster.ClusterArn) - d.Set("status", cluster.Status) - d.Set("pending_tasks_count", cluster.PendingTasksCount) - d.Set("running_tasks_count", cluster.RunningTasksCount) - d.Set("registered_container_instances_count", cluster.RegisteredContainerInstancesCount) + if len(desc.Clusters) == 0 { + return fmt.Errorf("no matches found for name: %s", d.Get("cluster_name").(string)) } - if d.Id() == "" { - return fmt.Errorf("cluster with name %q not found", d.Get("cluster_name").(string)) + if len(desc.Clusters) > 1 { + return fmt.Errorf("multiple matches found for name: %s", 
d.Get("cluster_name").(string)) } + cluster := desc.Clusters[0] + d.SetId(aws.StringValue(cluster.ClusterArn)) + d.Set("arn", cluster.ClusterArn) + d.Set("status", cluster.Status) + d.Set("pending_tasks_count", cluster.PendingTasksCount) + d.Set("running_tasks_count", cluster.RunningTasksCount) + d.Set("registered_container_instances_count", cluster.RegisteredContainerInstancesCount) + return nil } diff --git a/aws/data_source_aws_ecs_cluster_test.go b/aws/data_source_aws_ecs_cluster_test.go index 3060b0715bc..53caa633702 100644 --- a/aws/data_source_aws_ecs_cluster_test.go +++ b/aws/data_source_aws_ecs_cluster_test.go @@ -9,7 +9,7 @@ import ( ) func TestAccAWSEcsDataSource_ecsCluster(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_ecs_container_definition.go b/aws/data_source_aws_ecs_container_definition.go index 9bb99cdaeda..914d582649a 100644 --- a/aws/data_source_aws_ecs_container_definition.go +++ b/aws/data_source_aws_ecs_container_definition.go @@ -53,12 +53,12 @@ func dataSourceAwsEcsContainerDefinition() *schema.Resource { "docker_labels": { Type: schema.TypeMap, Computed: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, "environment": { Type: schema.TypeMap, Computed: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, }, } diff --git a/aws/data_source_aws_ecs_container_definition_test.go b/aws/data_source_aws_ecs_container_definition_test.go index 25f3a996cac..a9196d1ec0d 100644 --- a/aws/data_source_aws_ecs_container_definition_test.go +++ b/aws/data_source_aws_ecs_container_definition_test.go @@ -14,11 +14,11 @@ func TestAccAWSEcsDataSource_ecsContainerDefinition(t *testing.T) { svcName := fmt.Sprintf("tf_acc_svc_td_ds_ecs_containter_definition_%s", rString) tdName := fmt.Sprintf("tf_acc_td_ds_ecs_containter_definition_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAwsEcsContainerDefinitionDataSourceConfig(clusterName, tdName, svcName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("data.aws_ecs_container_definition.mongo", "image", "mongo:latest"), diff --git a/aws/data_source_aws_ecs_service.go b/aws/data_source_aws_ecs_service.go new file mode 100644 index 00000000000..524b47f7334 --- /dev/null +++ b/aws/data_source_aws_ecs_service.go @@ -0,0 +1,89 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ecs" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsEcsService() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsEcsServiceRead, + + Schema: map[string]*schema.Schema{ + "service_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "cluster_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "desired_count": { + Type: schema.TypeInt, + Computed: true, + }, + "launch_type": { + Type: schema.TypeString, + Computed: true, + }, + "scheduling_strategy": { + Type: schema.TypeString, + Computed: true, + }, + "task_definition": { + Type: schema.TypeString, + Computed: true, + }, + }, 
+ } +} + +func dataSourceAwsEcsServiceRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecsconn + + clusterArn := d.Get("cluster_arn").(string) + serviceName := d.Get("service_name").(string) + + params := &ecs.DescribeServicesInput{ + Cluster: aws.String(clusterArn), + Services: []*string{aws.String(serviceName)}, + } + + log.Printf("[DEBUG] Reading ECS Service: %s", params) + desc, err := conn.DescribeServices(params) + + if err != nil { + return err + } + + if desc == nil || len(desc.Services) == 0 { + return fmt.Errorf("service with name %q in cluster %q not found", serviceName, clusterArn) + } + + if len(desc.Services) > 1 { + return fmt.Errorf("multiple services with name %q found in cluster %q", serviceName, clusterArn) + } + + service := desc.Services[0] + d.SetId(aws.StringValue(service.ServiceArn)) + + d.Set("service_name", service.ServiceName) + d.Set("arn", service.ServiceArn) + d.Set("cluster_arn", service.ClusterArn) + d.Set("desired_count", service.DesiredCount) + d.Set("launch_type", service.LaunchType) + d.Set("scheduling_strategy", service.SchedulingStrategy) + d.Set("task_definition", service.TaskDefinition) + + return nil +} diff --git a/aws/data_source_aws_ecs_service_test.go b/aws/data_source_aws_ecs_service_test.go new file mode 100644 index 00000000000..b295bb44447 --- /dev/null +++ b/aws/data_source_aws_ecs_service_test.go @@ -0,0 +1,66 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSEcsServiceDataSource_basic(t *testing.T) { + dataSourceName := "data.aws_ecs_service.test" + resourceName := "aws_ecs_service.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsEcsServiceDataSourceConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName, "id", dataSourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "desired_count", dataSourceName, "desired_count"), + resource.TestCheckResourceAttrPair(resourceName, "launch_type", dataSourceName, "launch_type"), + resource.TestCheckResourceAttrPair(resourceName, "scheduling_strategy", dataSourceName, "scheduling_strategy"), + resource.TestCheckResourceAttrPair(resourceName, "name", dataSourceName, "service_name"), + resource.TestCheckResourceAttrPair(resourceName, "task_definition", dataSourceName, "task_definition"), + ), + }, + }, + }) +} + +var testAccCheckAwsEcsServiceDataSourceConfig = fmt.Sprintf(` +resource "aws_ecs_cluster" "test" { + name = "tf-acc-%d" +} + +resource "aws_ecs_task_definition" "test" { + family = "mongodb" + container_definitions = < 0 { + role := instanceProfile.Roles[0] + d.Set("role_arn", role.Arn) + d.Set("role_id", role.RoleId) + d.Set("role_name", role.RoleName) } return nil diff --git a/aws/data_source_aws_iam_instance_profile_test.go b/aws/data_source_aws_iam_instance_profile_test.go index eabbb5d450b..b957eaf5e56 100644 --- a/aws/data_source_aws_iam_instance_profile_test.go +++ b/aws/data_source_aws_iam_instance_profile_test.go @@ -10,20 +10,29 @@ import ( ) func TestAccAWSDataSourceIAMInstanceProfile_basic(t *testing.T) { - roleName := fmt.Sprintf("test-datasource-user-%d", acctest.RandInt()) - profileName := fmt.Sprintf("test-datasource-user-%d", acctest.RandInt()) + roleName := fmt.Sprintf("tf-acc-ds-instance-profile-role-%d", 
acctest.RandInt()) + profileName := fmt.Sprintf("tf-acc-ds-instance-profile-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccDatasourceAwsIamInstanceProfileConfig(roleName, profileName), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttrSet("data.aws_iam_instance_profile.test", "role_id"), + resource.TestMatchResourceAttr( + "data.aws_iam_instance_profile.test", + "arn", + regexp.MustCompile("^arn:[^:]+:iam::[0-9]{12}:instance-profile/testpath/"+profileName+"$"), + ), resource.TestCheckResourceAttr("data.aws_iam_instance_profile.test", "path", "/testpath/"), - resource.TestMatchResourceAttr("data.aws_iam_instance_profile.test", "arn", - regexp.MustCompile("^arn:aws:iam::[0-9]{12}:instance-profile/testpath/"+profileName+"$")), + resource.TestMatchResourceAttr( + "data.aws_iam_instance_profile.test", + "role_arn", + regexp.MustCompile("^arn:[^:]+:iam::[0-9]{12}:role/"+roleName+"$"), + ), + resource.TestCheckResourceAttrSet("data.aws_iam_instance_profile.test", "role_id"), + resource.TestCheckResourceAttr("data.aws_iam_instance_profile.test", "role_name", roleName), ), }, }, diff --git a/aws/data_source_aws_iam_policy_document.go b/aws/data_source_aws_iam_policy_document.go index 4f5c644317b..776dbe19d87 100644 --- a/aws/data_source_aws_iam_policy_document.go +++ b/aws/data_source_aws_iam_policy_document.go @@ -39,7 +39,7 @@ func dataSourceAwsIamPolicyDocument() *schema.Resource { }, "statement": { Type: schema.TypeList, - Required: true, + Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "sid": { @@ -96,8 +96,8 @@ func dataSourceAwsIamPolicyDocumentRead(d *schema.ResourceData, meta interface{} mergedDoc := &IAMPolicyDoc{} // populate mergedDoc directly with any source_json - if sourceJson, hasSourceJson := d.GetOk("source_json"); hasSourceJson { - if err := json.Unmarshal([]byte(sourceJson.(string)), mergedDoc); err != nil { + if sourceJSON, hasSourceJSON := d.GetOk("source_json"); hasSourceJSON { + if err := json.Unmarshal([]byte(sourceJSON.(string)), mergedDoc); err != nil { return err } } @@ -107,64 +107,67 @@ func dataSourceAwsIamPolicyDocumentRead(d *schema.ResourceData, meta interface{} doc.Version = "2012-10-17" - if policyId, hasPolicyId := d.GetOk("policy_id"); hasPolicyId { - doc.Id = policyId.(string) + if policyID, hasPolicyID := d.GetOk("policy_id"); hasPolicyID { + doc.Id = policyID.(string) } - var cfgStmts = d.Get("statement").([]interface{}) - stmts := make([]*IAMPolicyStatement, len(cfgStmts)) - for i, stmtI := range cfgStmts { - cfgStmt := stmtI.(map[string]interface{}) - stmt := &IAMPolicyStatement{ - Effect: cfgStmt["effect"].(string), - } + if cfgStmts, hasCfgStmts := d.GetOk("statement"); hasCfgStmts { + var cfgStmtIntf = cfgStmts.([]interface{}) + stmts := make([]*IAMPolicyStatement, len(cfgStmtIntf)) + for i, stmtI := range cfgStmtIntf { + cfgStmt := stmtI.(map[string]interface{}) + stmt := &IAMPolicyStatement{ + Effect: cfgStmt["effect"].(string), + } - if sid, ok := cfgStmt["sid"]; ok { - stmt.Sid = sid.(string) - } + if sid, ok := cfgStmt["sid"]; ok { + stmt.Sid = sid.(string) + } - if actions := cfgStmt["actions"].(*schema.Set).List(); len(actions) > 0 { - stmt.Actions = iamPolicyDecodeConfigStringList(actions) - } - if actions := cfgStmt["not_actions"].(*schema.Set).List(); len(actions) > 0 { - stmt.NotActions = 
iamPolicyDecodeConfigStringList(actions) - } + if actions := cfgStmt["actions"].(*schema.Set).List(); len(actions) > 0 { + stmt.Actions = iamPolicyDecodeConfigStringList(actions) + } + if actions := cfgStmt["not_actions"].(*schema.Set).List(); len(actions) > 0 { + stmt.NotActions = iamPolicyDecodeConfigStringList(actions) + } - if resources := cfgStmt["resources"].(*schema.Set).List(); len(resources) > 0 { - stmt.Resources = dataSourceAwsIamPolicyDocumentReplaceVarsInList( - iamPolicyDecodeConfigStringList(resources), - ) - } - if resources := cfgStmt["not_resources"].(*schema.Set).List(); len(resources) > 0 { - stmt.NotResources = dataSourceAwsIamPolicyDocumentReplaceVarsInList( - iamPolicyDecodeConfigStringList(resources), - ) - } + if resources := cfgStmt["resources"].(*schema.Set).List(); len(resources) > 0 { + stmt.Resources = dataSourceAwsIamPolicyDocumentReplaceVarsInList( + iamPolicyDecodeConfigStringList(resources), + ) + } + if resources := cfgStmt["not_resources"].(*schema.Set).List(); len(resources) > 0 { + stmt.NotResources = dataSourceAwsIamPolicyDocumentReplaceVarsInList( + iamPolicyDecodeConfigStringList(resources), + ) + } - if principals := cfgStmt["principals"].(*schema.Set).List(); len(principals) > 0 { - stmt.Principals = dataSourceAwsIamPolicyDocumentMakePrincipals(principals) - } + if principals := cfgStmt["principals"].(*schema.Set).List(); len(principals) > 0 { + stmt.Principals = dataSourceAwsIamPolicyDocumentMakePrincipals(principals) + } - if principals := cfgStmt["not_principals"].(*schema.Set).List(); len(principals) > 0 { - stmt.NotPrincipals = dataSourceAwsIamPolicyDocumentMakePrincipals(principals) - } + if principals := cfgStmt["not_principals"].(*schema.Set).List(); len(principals) > 0 { + stmt.NotPrincipals = dataSourceAwsIamPolicyDocumentMakePrincipals(principals) + } - if conditions := cfgStmt["condition"].(*schema.Set).List(); len(conditions) > 0 { - stmt.Conditions = dataSourceAwsIamPolicyDocumentMakeConditions(conditions) + if conditions := cfgStmt["condition"].(*schema.Set).List(); len(conditions) > 0 { + stmt.Conditions = dataSourceAwsIamPolicyDocumentMakeConditions(conditions) + } + + stmts[i] = stmt } - stmts[i] = stmt - } + doc.Statements = stmts - doc.Statements = stmts + } // merge our current document into mergedDoc mergedDoc.Merge(doc) // merge in override_json - if overrideJson, hasOverrideJson := d.GetOk("override_json"); hasOverrideJson { + if overrideJSON, hasOverrideJSON := d.GetOk("override_json"); hasOverrideJSON { overrideDoc := &IAMPolicyDoc{} - if err := json.Unmarshal([]byte(overrideJson.(string)), overrideDoc); err != nil { + if err := json.Unmarshal([]byte(overrideJSON.(string)), overrideDoc); err != nil { return err } diff --git a/aws/data_source_aws_iam_policy_document_test.go b/aws/data_source_aws_iam_policy_document_test.go index bd7367f62bc..abceafbfad5 100644 --- a/aws/data_source_aws_iam_policy_document_test.go +++ b/aws/data_source_aws_iam_policy_document_test.go @@ -12,7 +12,7 @@ func TestAccAWSDataSourceIAMPolicyDocument_basic(t *testing.T) { // This really ought to be able to be a unit test rather than an // acceptance test, but just instantiating the AWS provider requires // some AWS API calls, and so this needs valid AWS credentials to work. 
- resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -32,7 +32,7 @@ func TestAccAWSDataSourceIAMPolicyDocument_source(t *testing.T) { // This really ought to be able to be a unit test rather than an // acceptance test, but just instantiating the AWS provider requires // some AWS API calls, and so this needs valid AWS credentials to work. - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -57,7 +57,7 @@ func TestAccAWSDataSourceIAMPolicyDocument_source(t *testing.T) { } func TestAccAWSDataSourceIAMPolicyDocument_sourceConflicting(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -74,7 +74,7 @@ func TestAccAWSDataSourceIAMPolicyDocument_sourceConflicting(t *testing.T) { } func TestAccAWSDataSourceIAMPolicyDocument_override(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -90,6 +90,40 @@ func TestAccAWSDataSourceIAMPolicyDocument_override(t *testing.T) { }) } +func TestAccAWSDataSourceIAMPolicyDocument_noStatementMerge(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMPolicyDocumentNoStatementMergeConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckStateValue("data.aws_iam_policy_document.yak_politik", "json", + testAccAWSIAMPolicyDocumentNoStatementMergeExpectedJSON, + ), + ), + }, + }, + }) +} + +func TestAccAWSDataSourceIAMPolicyDocument_noStatementOverride(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMPolicyDocumentNoStatementOverrideConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckStateValue("data.aws_iam_policy_document.yak_politik", "json", + testAccAWSIAMPolicyDocumentNoStatementOverrideExpectedJSON, + ), + ), + }, + }, + }) +} + func testAccCheckStateValue(id, name, value string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[id] @@ -239,7 +273,9 @@ var testAccAWSIAMPolicyDocumentExpectedJSON = `{ "Sid": "", "Effect": "Allow", "Action": "kinesis:*", - "Principal": "*" + "Principal": { + "AWS": "*" + } }, { "Sid": "", @@ -296,7 +332,10 @@ data "aws_iam_policy_document" "test" { ] principals { type = "AWS" - identifiers = ["arn:blahblah:example"] + identifiers = [ + "arn:blahblah:example", + "arn:blahblahblah:example", + ] } } @@ -376,7 +415,10 @@ var testAccAWSIAMPolicyDocumentSourceExpectedJSON = `{ "arn:aws:s3:::foo/home/${aws:username}" ], "Principal": { - "AWS": "arn:blahblah:example" + "AWS": [ + "arn:blahblahblah:example", + "arn:blahblah:example" + ] } }, { @@ -389,7 +431,9 @@ var testAccAWSIAMPolicyDocumentSourceExpectedJSON = `{ "Sid": "", "Effect": "Allow", "Action": "kinesis:*", - "Principal": "*" + "Principal": { + "AWS": "*" + } }, { "Sid": "", @@ -510,3 +554,79 @@ var testAccAWSIAMPolicyDocumentOverrideExpectedJSON = `{ } ] }` + +var 
testAccAWSIAMPolicyDocumentNoStatementMergeConfig = ` +data "aws_iam_policy_document" "source" { + statement { + sid = "" + actions = ["ec2:DescribeAccountAttributes"] + resources = ["*"] + } +} + +data "aws_iam_policy_document" "override" { + statement { + sid = "OverridePlaceholder" + actions = ["s3:GetObject"] + resources = ["*"] + } +} + +data "aws_iam_policy_document" "yak_politik" { + source_json = "${data.aws_iam_policy_document.source.json}" + override_json = "${data.aws_iam_policy_document.override.json}" +} +` + +var testAccAWSIAMPolicyDocumentNoStatementMergeExpectedJSON = `{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "", + "Effect": "Allow", + "Action": "ec2:DescribeAccountAttributes", + "Resource": "*" + }, + { + "Sid": "OverridePlaceholder", + "Effect": "Allow", + "Action": "s3:GetObject", + "Resource": "*" + } + ] +}` + +var testAccAWSIAMPolicyDocumentNoStatementOverrideConfig = ` +data "aws_iam_policy_document" "source" { + statement { + sid = "OverridePlaceholder" + actions = ["ec2:DescribeAccountAttributes"] + resources = ["*"] + } +} + +data "aws_iam_policy_document" "override" { + statement { + sid = "OverridePlaceholder" + actions = ["s3:GetObject"] + resources = ["*"] + } +} + +data "aws_iam_policy_document" "yak_politik" { + source_json = "${data.aws_iam_policy_document.source.json}" + override_json = "${data.aws_iam_policy_document.override.json}" +} +` + +var testAccAWSIAMPolicyDocumentNoStatementOverrideExpectedJSON = `{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "OverridePlaceholder", + "Effect": "Allow", + "Action": "s3:GetObject", + "Resource": "*" + } + ] +}` diff --git a/aws/data_source_aws_iam_policy_test.go b/aws/data_source_aws_iam_policy_test.go index a8f5c753e5d..cd63298684b 100644 --- a/aws/data_source_aws_iam_policy_test.go +++ b/aws/data_source_aws_iam_policy_test.go @@ -12,7 +12,7 @@ import ( func TestAccAWSDataSourceIAMPolicy_basic(t *testing.T) { policyName := fmt.Sprintf("test-policy-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_iam_role.go b/aws/data_source_aws_iam_role.go index 7de37708b24..26af5a2fb01 100644 --- a/aws/data_source_aws_iam_role.go +++ b/aws/data_source_aws_iam_role.go @@ -2,6 +2,11 @@ package aws import ( "fmt" + "net/url" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" "github.com/hashicorp/terraform/helper/schema" ) @@ -27,6 +32,10 @@ func dataSourceAwsIAMRole() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "permissions_boundary": { + Type: schema.TypeString, + Computed: true, + }, "role_id": { Type: schema.TypeString, Computed: true, @@ -53,11 +62,17 @@ func dataSourceAwsIAMRole() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "max_session_duration": { + Type: schema.TypeInt, + Computed: true, + }, }, } } func dataSourceAwsIAMRoleRead(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + name, hasName := d.GetOk("name") roleName, hasRoleName := d.GetOk("role_name") @@ -73,10 +88,40 @@ func dataSourceAwsIAMRoleRead(d *schema.ResourceData, meta interface{}) error { } d.SetId(id) - data := resourceAwsIamRoleRead(d, meta) + input := &iam.GetRoleInput{ + RoleName: aws.String(d.Id()), + } + + output, err := iamconn.GetRole(input) + if err != nil { + return fmt.Errorf("Error 
reading IAM Role %s: %s", d.Id(), err) + } + + d.Set("arn", output.Role.Arn) + if err := d.Set("create_date", output.Role.CreateDate.Format(time.RFC3339)); err != nil { + return err + } + d.Set("description", output.Role.Description) + d.Set("max_session_duration", output.Role.MaxSessionDuration) + d.Set("name", output.Role.RoleName) + d.Set("path", output.Role.Path) + d.Set("permissions_boundary", "") + if output.Role.PermissionsBoundary != nil { + d.Set("permissions_boundary", output.Role.PermissionsBoundary.PermissionsBoundaryArn) + } + d.Set("unique_id", output.Role.RoleId) + + assumeRolePolicy, err := url.QueryUnescape(aws.StringValue(output.Role.AssumeRolePolicyDocument)) + if err != nil { + return err + } + if err := d.Set("assume_role_policy", assumeRolePolicy); err != nil { + return err + } + // Keep backward compatibility with previous attributes - d.Set("role_id", d.Get("unique_id").(string)) - d.Set("assume_role_policy_document", d.Get("assume_role_policy").(string)) + d.Set("role_id", output.Role.RoleId) + d.Set("assume_role_policy_document", assumeRolePolicy) - return data + return nil } diff --git a/aws/data_source_aws_iam_role_test.go b/aws/data_source_aws_iam_role_test.go index e0f34481d62..ead7f4263d1 100644 --- a/aws/data_source_aws_iam_role_test.go +++ b/aws/data_source_aws_iam_role_test.go @@ -12,7 +12,7 @@ import ( func TestAccAWSDataSourceIAMRole_basic(t *testing.T) { roleName := fmt.Sprintf("test-role-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -22,6 +22,7 @@ func TestAccAWSDataSourceIAMRole_basic(t *testing.T) { resource.TestCheckResourceAttrSet("data.aws_iam_role.test", "unique_id"), resource.TestCheckResourceAttrSet("data.aws_iam_role.test", "assume_role_policy"), resource.TestCheckResourceAttr("data.aws_iam_role.test", "path", "/testpath/"), + resource.TestCheckResourceAttr("data.aws_iam_role.test", "permissions_boundary", ""), resource.TestCheckResourceAttr("data.aws_iam_role.test", "name", roleName), resource.TestCheckResourceAttrSet("data.aws_iam_role.test", "create_date"), resource.TestMatchResourceAttr("data.aws_iam_role.test", "arn", diff --git a/aws/data_source_aws_iam_server_certificate.go b/aws/data_source_aws_iam_server_certificate.go index 29321ef43be..16403fbd15d 100644 --- a/aws/data_source_aws_iam_server_certificate.go +++ b/aws/data_source_aws_iam_server_certificate.go @@ -9,9 +9,9 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/iam" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func dataSourceAwsIAMServerCertificate() *schema.Resource { @@ -25,7 +25,7 @@ func dataSourceAwsIAMServerCertificate() *schema.Resource { Computed: true, ForceNew: true, ConflictsWith: []string{"name_prefix"}, - ValidateFunc: validateMaxLength(128), + ValidateFunc: validation.StringLenBetween(0, 128), }, "name_prefix": { @@ -33,7 +33,13 @@ func dataSourceAwsIAMServerCertificate() *schema.Resource { Optional: true, ForceNew: true, ConflictsWith: []string{"name"}, - ValidateFunc: validateMaxLength(128 - resource.UniqueIDSuffixLength), + ValidateFunc: validation.StringLenBetween(0, 128-resource.UniqueIDSuffixLength), + }, + + "path_prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: 
true, }, "latest": { @@ -103,8 +109,12 @@ func dataSourceAwsIAMServerCertificateRead(d *schema.ResourceData, meta interfac } var metadatas []*iam.ServerCertificateMetadata + input := &iam.ListServerCertificatesInput{} + if v, ok := d.GetOk("path_prefix"); ok { + input.PathPrefix = aws.String(v.(string)) + } log.Printf("[DEBUG] Reading IAM Server Certificate") - err := iamconn.ListServerCertificatesPages(&iam.ListServerCertificatesInput{}, func(p *iam.ListServerCertificatesOutput, lastPage bool) bool { + err := iamconn.ListServerCertificatesPages(input, func(p *iam.ListServerCertificatesOutput, lastPage bool) bool { for _, cert := range p.ServerCertificateMetadataList { if matcher(cert) { metadatas = append(metadatas, cert) @@ -113,7 +123,7 @@ func dataSourceAwsIAMServerCertificateRead(d *schema.ResourceData, meta interfac return true }) if err != nil { - return errwrap.Wrapf("Error describing certificates: {{err}}", err) + return fmt.Errorf("Error describing certificates: %s", err) } if len(metadatas) == 0 { diff --git a/aws/data_source_aws_iam_server_certificate_test.go b/aws/data_source_aws_iam_server_certificate_test.go index 07712b0ebf5..c1c55da6589 100644 --- a/aws/data_source_aws_iam_server_certificate_test.go +++ b/aws/data_source_aws_iam_server_certificate_test.go @@ -41,14 +41,11 @@ func TestResourceSortByExpirationDate(t *testing.T) { func TestAccAWSDataSourceIAMServerCertificate_basic(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProvidersWithTLS, CheckDestroy: testAccCheckIAMServerCertificateDestroy, Steps: []resource.TestStep{ - { - Config: testAccIAMServerCertConfig(rInt), - }, { Config: testAccAwsDataIAMServerCertConfig(rInt), Check: resource.ComposeTestCheckFunc( @@ -67,7 +64,7 @@ func TestAccAWSDataSourceIAMServerCertificate_basic(t *testing.T) { } func TestAccAWSDataSourceIAMServerCertificate_matchNamePrefix(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckIAMServerCertificateDestroy, @@ -80,6 +77,26 @@ func TestAccAWSDataSourceIAMServerCertificate_matchNamePrefix(t *testing.T) { }) } +func TestAccAWSDataSourceIAMServerCertificate_path(t *testing.T) { + rInt := acctest.RandInt() + path := "/test-path/" + pathPrefix := "/test-path/" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckIAMServerCertificateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsDataIAMServerCertConfigPath(rInt, path, pathPrefix), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_iam_server_certificate.test", "path", path), + ), + }, + }, + }) +} + func testAccAwsDataIAMServerCertConfig(rInt int) string { return fmt.Sprintf(` %s @@ -91,6 +108,18 @@ data "aws_iam_server_certificate" "test" { `, testAccIAMServerCertConfig(rInt)) } +func testAccAwsDataIAMServerCertConfigPath(rInt int, path, pathPrefix string) string { + return fmt.Sprintf(` +%s + +data "aws_iam_server_certificate" "test" { + name = "${aws_iam_server_certificate.test_cert.name}" + path_prefix = "%s" + latest = true +} +`, testAccIAMServerCertConfig_path(rInt, path), pathPrefix) +} + var testAccAwsDataIAMServerCertConfigMatchNamePrefix = ` data "aws_iam_server_certificate" "test" { name_prefix = "MyCert" diff 
--git a/aws/data_source_aws_iam_user.go b/aws/data_source_aws_iam_user.go index 72d3e47d938..2a627b918bb 100644 --- a/aws/data_source_aws_iam_user.go +++ b/aws/data_source_aws_iam_user.go @@ -1,11 +1,11 @@ package aws import ( + "fmt" "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/iam" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -22,6 +22,10 @@ func dataSourceAwsIAMUser() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "permissions_boundary": { + Type: schema.TypeString, + Computed: true, + }, "user_id": { Type: schema.TypeString, Computed: true, @@ -44,13 +48,17 @@ func dataSourceAwsIAMUserRead(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] Reading IAM User: %s", req) resp, err := iamconn.GetUser(req) if err != nil { - return errwrap.Wrapf("error getting user: {{err}}", err) + return fmt.Errorf("error getting user: %s", err) } user := resp.User - d.SetId(*user.UserId) + d.SetId(aws.StringValue(user.UserId)) d.Set("arn", user.Arn) d.Set("path", user.Path) + d.Set("permissions_boundary", "") + if user.PermissionsBoundary != nil { + d.Set("permissions_boundary", user.PermissionsBoundary.PermissionsBoundaryArn) + } d.Set("user_id", user.UserId) return nil diff --git a/aws/data_source_aws_iam_user_test.go b/aws/data_source_aws_iam_user_test.go index df96b0e7f92..5af9809e493 100644 --- a/aws/data_source_aws_iam_user_test.go +++ b/aws/data_source_aws_iam_user_test.go @@ -12,7 +12,7 @@ import ( func TestAccAWSDataSourceIAMUser_basic(t *testing.T) { userName := fmt.Sprintf("test-datasource-user-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -21,8 +21,9 @@ func TestAccAWSDataSourceIAMUser_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet("data.aws_iam_user.test", "user_id"), resource.TestCheckResourceAttr("data.aws_iam_user.test", "path", "/"), + resource.TestCheckResourceAttr("data.aws_iam_user.test", "permissions_boundary", ""), resource.TestCheckResourceAttr("data.aws_iam_user.test", "user_name", userName), - resource.TestMatchResourceAttr("data.aws_iam_user.test", "arn", regexp.MustCompile("^arn:aws:iam::[0-9]{12}:user/"+userName)), + resource.TestMatchResourceAttr("data.aws_iam_user.test", "arn", regexp.MustCompile("^arn:[^:]+:iam::[0-9]{12}:user/"+userName)), ), }, }, diff --git a/aws/data_source_aws_inspector_rules_packages_test.go b/aws/data_source_aws_inspector_rules_packages_test.go index 159f8defc29..31bdb5b688a 100644 --- a/aws/data_source_aws_inspector_rules_packages_test.go +++ b/aws/data_source_aws_inspector_rules_packages_test.go @@ -7,7 +7,7 @@ import ( ) func TestAccAWSInspectorRulesPackages_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_instance.go b/aws/data_source_aws_instance.go index 23a8fac24c3..0c49381cb5f 100644 --- a/aws/data_source_aws_instance.go +++ b/aws/data_source_aws_instance.go @@ -5,6 +5,7 @@ import ( "log" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/schema" ) @@ -26,6 +27,10 @@ func 
dataSourceAwsInstance() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, "instance_type": { Type: schema.TypeString, Computed: true, @@ -222,6 +227,22 @@ func dataSourceAwsInstance() *schema.Resource { }, }, }, + "credit_specification": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cpu_credits": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "disable_api_termination": { + Type: schema.TypeBool, + Computed: true, + }, }, } } @@ -301,6 +322,16 @@ func dataSourceAwsInstanceRead(d *schema.ResourceData, meta interface{}) error { d.Set("password_data", passwordData) } + // ARN + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "ec2", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("instance/%s", d.Id()), + } + d.Set("arn", arn.String()) + return nil } @@ -389,6 +420,15 @@ func instanceDescriptionAttributes(d *schema.ResourceData, instance *ec2.Instanc d.Set("user_data", userDataHashSum(*attr.UserData.Value)) } } + { + creditSpecifications, err := getCreditSpecifications(conn, d.Id()) + if err != nil && !isAWSErr(err, "UnsupportedOperation", "") { + return err + } + if err := d.Set("credit_specification", creditSpecifications); err != nil { + return fmt.Errorf("error setting credit_specification: %s", err) + } + } return nil } diff --git a/aws/data_source_aws_instance_test.go b/aws/data_source_aws_instance_test.go index a569c31b6d0..a0014d4aa2a 100644 --- a/aws/data_source_aws_instance_test.go +++ b/aws/data_source_aws_instance_test.go @@ -4,13 +4,14 @@ import ( "testing" "fmt" + "regexp" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" ) func TestAccAWSInstanceDataSource_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -20,6 +21,7 @@ func TestAccAWSInstanceDataSource_basic(t *testing.T) { resource.TestCheckResourceAttr("data.aws_instance.web-instance", "ami", "ami-4fccb37f"), resource.TestCheckResourceAttr("data.aws_instance.web-instance", "tags.%", "1"), resource.TestCheckResourceAttr("data.aws_instance.web-instance", "instance_type", "m1.small"), + resource.TestMatchResourceAttr("data.aws_instance.web-instance", "arn", regexp.MustCompile(`^arn:[^:]+:ec2:[^:]+:\d{12}:instance/i-.+`)), ), }, }, @@ -28,7 +30,7 @@ func TestAccAWSInstanceDataSource_basic(t *testing.T) { func TestAccAWSInstanceDataSource_tags(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -45,7 +47,7 @@ func TestAccAWSInstanceDataSource_tags(t *testing.T) { } func TestAccAWSInstanceDataSource_AzUserData(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -64,7 +66,7 @@ func TestAccAWSInstanceDataSource_AzUserData(t *testing.T) { } func TestAccAWSInstanceDataSource_gp2IopsDevice(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: 
[]resource.TestStep{ @@ -84,7 +86,7 @@ func TestAccAWSInstanceDataSource_gp2IopsDevice(t *testing.T) { } func TestAccAWSInstanceDataSource_blockDevices(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -105,7 +107,7 @@ func TestAccAWSInstanceDataSource_blockDevices(t *testing.T) { } func TestAccAWSInstanceDataSource_rootInstanceStore(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -124,7 +126,7 @@ func TestAccAWSInstanceDataSource_rootInstanceStore(t *testing.T) { } func TestAccAWSInstanceDataSource_privateIP(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -142,7 +144,7 @@ func TestAccAWSInstanceDataSource_privateIP(t *testing.T) { func TestAccAWSInstanceDataSource_keyPair(t *testing.T) { rName := fmt.Sprintf("tf-test-key-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -160,7 +162,7 @@ func TestAccAWSInstanceDataSource_keyPair(t *testing.T) { } func TestAccAWSInstanceDataSource_VPC(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -181,7 +183,7 @@ func TestAccAWSInstanceDataSource_VPC(t *testing.T) { func TestAccAWSInstanceDataSource_PlacementGroup(t *testing.T) { rStr := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -197,7 +199,7 @@ func TestAccAWSInstanceDataSource_PlacementGroup(t *testing.T) { func TestAccAWSInstanceDataSource_SecurityGroups(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -216,7 +218,7 @@ func TestAccAWSInstanceDataSource_SecurityGroups(t *testing.T) { } func TestAccAWSInstanceDataSource_VPCSecurityGroups(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -236,7 +238,7 @@ func TestAccAWSInstanceDataSource_VPCSecurityGroups(t *testing.T) { func TestAccAWSInstanceDataSource_getPasswordData_trueToFalse(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -261,7 +263,7 @@ func TestAccAWSInstanceDataSource_getPasswordData_trueToFalse(t *testing.T) { func TestAccAWSInstanceDataSource_getPasswordData_falseToTrue(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -283,6 +285,24 @@ func 
TestAccAWSInstanceDataSource_getPasswordData_falseToTrue(t *testing.T) { }) } +func TestAccAWSInstanceDataSource_creditSpecification(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + + Config: testAccInstanceDataSourceConfig_creditSpecification, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_instance.foo", "instance_type", "t2.micro"), + resource.TestCheckResourceAttr("data.aws_instance.foo", "credit_specification.#", "1"), + resource.TestCheckResourceAttr("data.aws_instance.foo", "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + }, + }) +} + // Lookup based on InstanceID const testAccInstanceDataSourceConfig = ` resource "aws_instance" "web" { @@ -658,3 +678,27 @@ func testAccInstanceDataSourceConfig_getPasswordData(val bool, rInt int) string } `, rInt, val) } + +const testAccInstanceDataSourceConfig_creditSpecification = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_instance" "foo" { + ami = "ami-bf4193c7" + instance_type = "t2.micro" + subnet_id = "${aws_subnet.foo.id}" + credit_specification { + cpu_credits = "unlimited" + } +} + +data "aws_instance" "foo" { + instance_id = "${aws_instance.foo.id}" +} +` diff --git a/aws/data_source_aws_instances.go b/aws/data_source_aws_instances.go index 7196e3189e7..83def2a8f07 100644 --- a/aws/data_source_aws_instances.go +++ b/aws/data_source_aws_instances.go @@ -8,6 +8,7 @@ import ( "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func dataSourceAwsInstances() *schema.Resource { @@ -17,6 +18,21 @@ func dataSourceAwsInstances() *schema.Resource { Schema: map[string]*schema.Schema{ "filter": dataSourceFiltersSchema(), "instance_tags": tagsSchemaComputed(), + "instance_state_names": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{ + ec2.InstanceStateNamePending, + ec2.InstanceStateNameRunning, + ec2.InstanceStateNameShuttingDown, + ec2.InstanceStateNameStopped, + ec2.InstanceStateNameStopping, + ec2.InstanceStateNameTerminated, + }, false), + }, + }, "ids": { Type: schema.TypeList, @@ -47,14 +63,19 @@ func dataSourceAwsInstancesRead(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("One of filters or instance_tags must be assigned") } + instanceStateNames := []*string{aws.String(ec2.InstanceStateNameRunning)} + if v, ok := d.GetOk("instance_state_names"); ok && len(v.(*schema.Set).List()) > 0 { + instanceStateNames = expandStringSet(v.(*schema.Set)) + } params := &ec2.DescribeInstancesInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("instance-state-name"), - Values: []*string{aws.String("running")}, + Values: instanceStateNames, }, }, } + if filtersOk { params.Filters = append(params.Filters, buildAwsDataSourceFilters(filters.(*schema.Set))...) 
diff --git a/aws/data_source_aws_instances_test.go b/aws/data_source_aws_instances_test.go index 2dd92bfa5b1..a5d734c8c42 100644 --- a/aws/data_source_aws_instances_test.go +++ b/aws/data_source_aws_instances_test.go @@ -9,7 +9,7 @@ import ( ) func TestAccAWSInstancesDataSource_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -27,7 +27,7 @@ func TestAccAWSInstancesDataSource_basic(t *testing.T) { func TestAccAWSInstancesDataSource_tags(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -43,6 +43,22 @@ func TestAccAWSInstancesDataSource_tags(t *testing.T) { }) } +func TestAccAWSInstancesDataSource_instance_state_names(t *testing.T) { + rInt := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccInstancesDataSourceConfig_instance_state_names(rInt), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_instances.test", "ids.#", "2"), + ), + }, + }, + }) +} + const testAccInstancesDataSourceConfig_ids = ` data "aws_ami" "ubuntu" { most_recent = true @@ -113,3 +129,41 @@ data "aws_instances" "test" { } `, rInt) } + +func testAccInstancesDataSourceConfig_instance_state_names(rInt int) string { + return fmt.Sprintf(` +data "aws_ami" "ubuntu" { + most_recent = true + + filter { + name = "name" + values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } + + owners = ["099720109477"] # Canonical +} + +resource "aws_instance" "test" { + count = 2 + ami = "${data.aws_ami.ubuntu.id}" + instance_type = "t2.micro" + tags { + Name = "TfAccTest-HelloWorld" + TestSeed = "%[1]d" + } +} + +data "aws_instances" "test" { + instance_tags { + Name = "${aws_instance.test.0.tags["Name"]}" + } + + instance_state_names = [ "pending", "running" ] +} +`, rInt) +} diff --git a/aws/data_source_aws_internet_gateway_test.go b/aws/data_source_aws_internet_gateway_test.go index 2fa4bbcfc79..a5af5e53527 100644 --- a/aws/data_source_aws_internet_gateway_test.go +++ b/aws/data_source_aws_internet_gateway_test.go @@ -9,7 +9,7 @@ import ( ) func TestAccDataSourceAwsInternetGateway_typical(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_iot_endpoint.go b/aws/data_source_aws_iot_endpoint.go new file mode 100644 index 00000000000..2c5e943726e --- /dev/null +++ b/aws/data_source_aws_iot_endpoint.go @@ -0,0 +1,52 @@ +package aws + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iot" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func dataSourceAwsIotEndpoint() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsIotEndpointRead, + Schema: map[string]*schema.Schema{ + "endpoint_address": { + Type: schema.TypeString, + Computed: true, + }, + "endpoint_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: 
validation.StringInSlice([]string{ + "iot:CredentialProvider", + "iot:Data", + "iot:Data-ATS", + "iot:Jobs", + }, false), + }, + }, + } +} + +func dataSourceAwsIotEndpointRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iotconn + input := &iot.DescribeEndpointInput{} + + if v, ok := d.GetOk("endpoint_type"); ok { + input.EndpointType = aws.String(v.(string)) + } + + output, err := conn.DescribeEndpoint(input) + if err != nil { + return fmt.Errorf("error while describing iot endpoint: %s", err) + } + endpointAddress := aws.StringValue(output.EndpointAddress) + d.SetId(endpointAddress) + if err := d.Set("endpoint_address", endpointAddress); err != nil { + return fmt.Errorf("error setting endpoint_address: %s", err) + } + return nil +} diff --git a/aws/data_source_aws_iot_endpoint_test.go b/aws/data_source_aws_iot_endpoint_test.go new file mode 100644 index 00000000000..6dfca1658a9 --- /dev/null +++ b/aws/data_source_aws_iot_endpoint_test.go @@ -0,0 +1,106 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSIotEndpointDataSource_basic(t *testing.T) { + dataSourceName := "data.aws_iot_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSIotEndpointConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestMatchResourceAttr(dataSourceName, "endpoint_address", regexp.MustCompile(fmt.Sprintf("^[a-z0-9]+(-ats)?.iot.%s.amazonaws.com$", testAccGetRegion()))), + ), + }, + }, + }) +} + +func TestAccAWSIotEndpointDataSource_EndpointType_IOTCredentialProvider(t *testing.T) { + dataSourceName := "data.aws_iot_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSIotEndpointConfigEndpointType("iot:CredentialProvider"), + Check: resource.ComposeTestCheckFunc( + resource.TestMatchResourceAttr(dataSourceName, "endpoint_address", regexp.MustCompile(fmt.Sprintf("^[a-z0-9]+.credentials.iot.%s.amazonaws.com$", testAccGetRegion()))), + ), + }, + }, + }) +} + +func TestAccAWSIotEndpointDataSource_EndpointType_IOTData(t *testing.T) { + dataSourceName := "data.aws_iot_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSIotEndpointConfigEndpointType("iot:Data"), + Check: resource.ComposeTestCheckFunc( + resource.TestMatchResourceAttr(dataSourceName, "endpoint_address", regexp.MustCompile(fmt.Sprintf("^[a-z0-9]+.iot.%s.amazonaws.com$", testAccGetRegion()))), + ), + }, + }, + }) +} + +func TestAccAWSIotEndpointDataSource_EndpointType_IOTDataATS(t *testing.T) { + dataSourceName := "data.aws_iot_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSIotEndpointConfigEndpointType("iot:Data-ATS"), + Check: resource.ComposeTestCheckFunc( + resource.TestMatchResourceAttr(dataSourceName, "endpoint_address", regexp.MustCompile(fmt.Sprintf("^[a-z0-9]+-ats.iot.%s.amazonaws.com$", testAccGetRegion()))), + ), + }, + }, + }) +} + +func TestAccAWSIotEndpointDataSource_EndpointType_IOTJobs(t *testing.T) { + dataSourceName := "data.aws_iot_endpoint.test" + + 
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSIotEndpointConfigEndpointType("iot:Jobs"), + Check: resource.ComposeTestCheckFunc( + resource.TestMatchResourceAttr(dataSourceName, "endpoint_address", regexp.MustCompile(fmt.Sprintf("^[a-z0-9]+.jobs.iot.%s.amazonaws.com$", testAccGetRegion()))), + ), + }, + }, + }) +} + +const testAccAWSIotEndpointConfig = ` +data "aws_iot_endpoint" "test" {} +` + +func testAccAWSIotEndpointConfigEndpointType(endpointType string) string { + return fmt.Sprintf(` +data "aws_iot_endpoint" "test" { + endpoint_type = %q +} +`, endpointType) +} diff --git a/aws/data_source_aws_ip_ranges.go b/aws/data_source_aws_ip_ranges.go index 3e1aa7d1a96..aec2529e16f 100644 --- a/aws/data_source_aws_ip_ranges.go +++ b/aws/data_source_aws_ip_ranges.go @@ -14,9 +14,10 @@ import ( ) type dataSourceAwsIPRangesResult struct { - CreateDate string - Prefixes []dataSourceAwsIPRangesPrefix - SyncToken string + CreateDate string + Prefixes []dataSourceAwsIPRangesPrefix + Ipv6Prefixes []dataSourceAwsIPRangesIpv6Prefix `json:"ipv6_prefixes"` + SyncToken string } type dataSourceAwsIPRangesPrefix struct { @@ -25,6 +26,12 @@ type dataSourceAwsIPRangesPrefix struct { Service string } +type dataSourceAwsIPRangesIpv6Prefix struct { + Ipv6Prefix string `json:"ipv6_prefix"` + Region string + Service string +} + func dataSourceAwsIPRanges() *schema.Resource { return &schema.Resource{ Read: dataSourceAwsIPRangesRead, @@ -39,6 +46,11 @@ func dataSourceAwsIPRanges() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "ipv6_cidr_blocks": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "regions": { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, @@ -120,30 +132,42 @@ func dataSourceAwsIPRangesRead(d *schema.ResourceData, meta interface{}) error { regions = get("regions") services = get("services") noRegionFilter = regions.Len() == 0 - prefixes []string + ipPrefixes []string + ipv6Prefixes []string ) - for _, e := range result.Prefixes { + matchFilter := func(region, service string) bool { + matchRegion := noRegionFilter || regions.Contains(strings.ToLower(region)) + matchService := services.Contains(strings.ToLower(service)) + return matchRegion && matchService + } - var ( - matchRegion = noRegionFilter || regions.Contains(strings.ToLower(e.Region)) - matchService = services.Contains(strings.ToLower(e.Service)) - ) + for _, e := range result.Prefixes { + if matchFilter(e.Region, e.Service) { + ipPrefixes = append(ipPrefixes, e.IpPrefix) + } + } - if matchRegion && matchService { - prefixes = append(prefixes, e.IpPrefix) + for _, e := range result.Ipv6Prefixes { + if matchFilter(e.Region, e.Service) { + ipv6Prefixes = append(ipv6Prefixes, e.Ipv6Prefix) } + } + if len(ipPrefixes) == 0 && len(ipv6Prefixes) == 0 { + return fmt.Errorf("No IP ranges result from filters") } - if len(prefixes) == 0 { - return fmt.Errorf(" No IP ranges result from filters") + sort.Strings(ipPrefixes) + + if err := d.Set("cidr_blocks", ipPrefixes); err != nil { + return fmt.Errorf("Error setting cidr_blocks: %s", err) } - sort.Strings(prefixes) + sort.Strings(ipv6Prefixes) - if err := d.Set("cidr_blocks", prefixes); err != nil { - return fmt.Errorf("Error setting ip ranges: %s", err) + if err := d.Set("ipv6_cidr_blocks", ipv6Prefixes); err != nil { + return fmt.Errorf("Error setting ipv6_cidr_blocks: %s", err) } return nil 
diff --git a/aws/data_source_aws_ip_ranges_test.go b/aws/data_source_aws_ip_ranges_test.go index da7de0757cf..2b54869adc0 100644 --- a/aws/data_source_aws_ip_ranges_test.go +++ b/aws/data_source_aws_ip_ranges_test.go @@ -14,41 +14,34 @@ import ( ) func TestAccAWSIPRanges(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSIPRangesConfig, Check: resource.ComposeTestCheckFunc( - testAccAWSIPRanges("data.aws_ip_ranges.some"), + testAccAWSIPRangesCheckAttributes("data.aws_ip_ranges.some"), + testAccAWSIPRangesCheckCidrBlocksAttribute("data.aws_ip_ranges.some", "cidr_blocks"), + testAccAWSIPRangesCheckCidrBlocksAttribute("data.aws_ip_ranges.some", "ipv6_cidr_blocks"), ), }, }, }) } -func testAccAWSIPRanges(n string) resource.TestCheckFunc { +func testAccAWSIPRangesCheckAttributes(n string) resource.TestCheckFunc { return func(s *terraform.State) error { r := s.RootModule().Resources[n] a := r.Primary.Attributes var ( - cidrBlockSize int - createDate time.Time - err error - syncToken int + createDate time.Time + err error + syncToken int ) - if cidrBlockSize, err = strconv.Atoi(a["cidr_blocks.#"]); err != nil { - return err - } - - if cidrBlockSize < 10 { - return fmt.Errorf("cidr_blocks for eu-west-1 seem suspiciously low: %d", cidrBlockSize) - } - if createDate, err = time.Parse("2006-01-02-15-04-05", a["create_date"]); err != nil { return err } @@ -61,24 +54,6 @@ func testAccAWSIPRanges(n string) resource.TestCheckFunc { return fmt.Errorf("sync_token %d does not match create_date %s", syncToken, createDate) } - var cidrBlocks sort.StringSlice = make([]string, cidrBlockSize) - - for i := range make([]string, cidrBlockSize) { - - block := a[fmt.Sprintf("cidr_blocks.%d", i)] - - if _, _, err := net.ParseCIDR(block); err != nil { - return fmt.Errorf("malformed CIDR block %s: %s", block, err) - } - - cidrBlocks[i] = block - - } - - if !sort.IsSorted(cidrBlocks) { - return fmt.Errorf("unexpected order of cidr_blocks: %s", cidrBlocks) - } - var ( regionMember = regexp.MustCompile(`regions\.\d+`) regions, services int @@ -120,6 +95,46 @@ func testAccAWSIPRanges(n string) resource.TestCheckFunc { } } +func testAccAWSIPRangesCheckCidrBlocksAttribute(name, attribute string) resource.TestCheckFunc { + return func(s *terraform.State) error { + r := s.RootModule().Resources[name] + a := r.Primary.Attributes + + var ( + cidrBlockSize int + cidrBlocks sort.StringSlice + err error + ) + + if cidrBlockSize, err = strconv.Atoi(a[fmt.Sprintf("%s.#", attribute)]); err != nil { + return err + } + + if cidrBlockSize < 5 { + return fmt.Errorf("%s for eu-west-1 seem suspiciously low: %d", attribute, cidrBlockSize) + } + + cidrBlocks = make([]string, cidrBlockSize) + + for i := range cidrBlocks { + cidrBlock := a[fmt.Sprintf("%s.%d", attribute, i)] + + _, _, err := net.ParseCIDR(cidrBlock) + if err != nil { + return fmt.Errorf("malformed CIDR block %s in %s: %s", cidrBlock, attribute, err) + } + + cidrBlocks[i] = cidrBlock + } + + if !sort.IsSorted(cidrBlocks) { + return fmt.Errorf("unexpected order of %s: %s", attribute, cidrBlocks) + } + + return nil + } +} + const testAccAWSIPRangesConfig = ` data "aws_ip_ranges" "some" { regions = [ "eu-west-1", "eu-central-1" ] diff --git a/aws/data_source_aws_kinesis_stream_test.go b/aws/data_source_aws_kinesis_stream_test.go index 49108d846ec..9103e33aeb6 100644 --- 
a/aws/data_source_aws_kinesis_stream_test.go +++ b/aws/data_source_aws_kinesis_stream_test.go @@ -32,7 +32,7 @@ func TestAccAWSKinesisStreamDataSource(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisStreamDestroy, diff --git a/aws/data_source_aws_kms_alias.go b/aws/data_source_aws_kms_alias.go index 91dfacb7ee6..c002d1da8f8 100644 --- a/aws/data_source_aws_kms_alias.go +++ b/aws/data_source_aws_kms_alias.go @@ -7,7 +7,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kms" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -54,7 +53,7 @@ func dataSourceAwsKmsAliasRead(d *schema.ResourceData, meta interface{}) error { return true }) if err != nil { - return errwrap.Wrapf("Error fetch KMS alias list: {{err}}", err) + return fmt.Errorf("Error fetch KMS alias list: %s", err) } if alias == nil { @@ -80,7 +79,7 @@ func dataSourceAwsKmsAliasRead(d *schema.ResourceData, meta interface{}) error { } resp, err := conn.DescribeKey(req) if err != nil { - return errwrap.Wrapf("Error calling KMS DescribeKey: {{err}}", err) + return fmt.Errorf("Error calling KMS DescribeKey: %s", err) } d.Set("target_key_arn", resp.KeyMetadata.Arn) diff --git a/aws/data_source_aws_kms_alias_test.go b/aws/data_source_aws_kms_alias_test.go index fe375d2b7da..20c1936a939 100644 --- a/aws/data_source_aws_kms_alias_test.go +++ b/aws/data_source_aws_kms_alias_test.go @@ -15,11 +15,11 @@ func TestAccDataSourceAwsKmsAlias_AwsService(t *testing.T) { name := "alias/aws/s3" resourceName := "data.aws_kms_alias.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsKmsAlias_name(name), Check: resource.ComposeTestCheckFunc( testAccDataSourceAwsKmsAliasCheckExists(resourceName), @@ -38,11 +38,11 @@ func TestAccDataSourceAwsKmsAlias_CMK(t *testing.T) { aliasResourceName := "aws_kms_alias.test" datasourceAliasResourceName := "data.aws_kms_alias.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsKmsAlias_CMK(rInt), Check: resource.ComposeTestCheckFunc( testAccDataSourceAwsKmsAliasCheckExists(datasourceAliasResourceName), diff --git a/aws/data_source_aws_kms_ciphertext_test.go b/aws/data_source_aws_kms_ciphertext_test.go index f871acc0340..4703d176e64 100644 --- a/aws/data_source_aws_kms_ciphertext_test.go +++ b/aws/data_source_aws_kms_ciphertext_test.go @@ -7,7 +7,7 @@ import ( ) func TestAccDataSourceAwsKmsCiphertext_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -23,7 +23,7 @@ func TestAccDataSourceAwsKmsCiphertext_basic(t *testing.T) { } func TestAccDataSourceAwsKmsCiphertext_validate(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -43,7 +43,7 @@ func TestAccDataSourceAwsKmsCiphertext_validate(t *testing.T) { } 
func TestAccDataSourceAwsKmsCiphertext_validate_withContext(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_kms_key.go b/aws/data_source_aws_kms_key.go index 35c231ead10..a05f8edfd6e 100644 --- a/aws/data_source_aws_kms_key.go +++ b/aws/data_source_aws_kms_key.go @@ -2,10 +2,11 @@ package aws import ( "fmt" + "time" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kms" "github.com/hashicorp/terraform/helper/schema" - "time" ) func dataSourceAwsKmsKey() *schema.Resource { diff --git a/aws/data_source_aws_kms_key_test.go b/aws/data_source_aws_kms_key_test.go index 6bb239ceb27..cdf9eed13fd 100644 --- a/aws/data_source_aws_kms_key_test.go +++ b/aws/data_source_aws_kms_key_test.go @@ -2,15 +2,15 @@ package aws import ( "fmt" + "regexp" "testing" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "regexp" ) func TestAccDataSourceAwsKmsKey_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_kms_secret.go b/aws/data_source_aws_kms_secret.go index 3b022dfb5f2..16772633e52 100644 --- a/aws/data_source_aws_kms_secret.go +++ b/aws/data_source_aws_kms_secret.go @@ -13,7 +13,8 @@ import ( func dataSourceAwsKmsSecret() *schema.Resource { return &schema.Resource{ - Read: dataSourceAwsKmsSecretRead, + DeprecationMessage: "This data source will be removed in Terraform AWS provider version 2.0. Please see migration information available in: https://www.terraform.io/docs/providers/aws/guides/version-2-upgrade.html#data-source-aws_kms_secret", + Read: dataSourceAwsKmsSecretRead, Schema: map[string]*schema.Schema{ "secret": { @@ -69,7 +70,7 @@ func dataSourceAwsKmsSecretRead(d *schema.ResourceData, meta interface{}) error // build the kms decrypt params params := &kms.DecryptInput{ - CiphertextBlob: []byte(payload), + CiphertextBlob: payload, } if context, exists := secret["context"]; exists { params.EncryptionContext = make(map[string]*string) diff --git a/aws/data_source_aws_kms_secrets.go b/aws/data_source_aws_kms_secrets.go new file mode 100644 index 00000000000..63caf704130 --- /dev/null +++ b/aws/data_source_aws_kms_secrets.go @@ -0,0 +1,108 @@ +package aws + +import ( + "encoding/base64" + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kms" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsKmsSecrets() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsKmsSecretsRead, + + Schema: map[string]*schema.Schema{ + "secret": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "payload": { + Type: schema.TypeString, + Required: true, + }, + "context": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "grant_tokens": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "plaintext": { + Type: schema.TypeMap, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + Sensitive: true, + 
}, + }, + }, + } +} + +func dataSourceAwsKmsSecretsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).kmsconn + + secrets := d.Get("secret").(*schema.Set) + plaintext := make(map[string]string, len(secrets.List())) + + for _, v := range secrets.List() { + secret := v.(map[string]interface{}) + + // base64 decode the payload + payload, err := base64.StdEncoding.DecodeString(secret["payload"].(string)) + if err != nil { + return fmt.Errorf("Invalid base64 value for secret '%s': %v", secret["name"].(string), err) + } + + // build the kms decrypt params + params := &kms.DecryptInput{ + CiphertextBlob: payload, + } + if context, exists := secret["context"]; exists { + params.EncryptionContext = make(map[string]*string) + for k, v := range context.(map[string]interface{}) { + params.EncryptionContext[k] = aws.String(v.(string)) + } + } + if grant_tokens, exists := secret["grant_tokens"]; exists { + params.GrantTokens = make([]*string, 0) + for _, v := range grant_tokens.([]interface{}) { + params.GrantTokens = append(params.GrantTokens, aws.String(v.(string))) + } + } + + // decrypt + resp, err := conn.Decrypt(params) + if err != nil { + return fmt.Errorf("Failed to decrypt '%s': %s", secret["name"].(string), err) + } + + // Set the secret via the name + log.Printf("[DEBUG] aws_kms_secret - successfully decrypted secret: %s", secret["name"].(string)) + plaintext[secret["name"].(string)] = string(resp.Plaintext) + } + + if err := d.Set("plaintext", plaintext); err != nil { + return fmt.Errorf("error setting plaintext: %s", err) + } + + d.SetId(time.Now().UTC().String()) + + return nil +} diff --git a/aws/data_source_aws_kms_secrets_test.go b/aws/data_source_aws_kms_secrets_test.go new file mode 100644 index 00000000000..29c2d035afb --- /dev/null +++ b/aws/data_source_aws_kms_secrets_test.go @@ -0,0 +1,105 @@ +package aws + +import ( + "encoding/base64" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kms" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSKmsSecretsDataSource_basic(t *testing.T) { + var encryptedPayload string + var key kms.KeyMetadata + + plaintext := "my-plaintext-string" + resourceName := "aws_kms_key.test" + + // Run a resource test to setup our KMS key + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsKmsSecretsDataSourceKey, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSKmsKeyExists(resourceName, &key), + testAccDataSourceAwsKmsSecretsEncrypt(&key, plaintext, &encryptedPayload), + // We need to dereference the encryptedPayload in a test Terraform configuration + testAccDataSourceAwsKmsSecretsDecrypt(t, plaintext, &encryptedPayload), + ), + }, + }, + }) +} + +func testAccDataSourceAwsKmsSecretsEncrypt(key *kms.KeyMetadata, plaintext string, encryptedPayload *string) resource.TestCheckFunc { + return func(s *terraform.State) error { + kmsconn := testAccProvider.Meta().(*AWSClient).kmsconn + + input := &kms.EncryptInput{ + KeyId: key.Arn, + Plaintext: []byte(plaintext), + EncryptionContext: map[string]*string{ + "name": aws.String("value"), + }, + } + + resp, err := kmsconn.Encrypt(input) + if err != nil { + return fmt.Errorf("failed encrypting string: %s", err) + } + + *encryptedPayload = base64.StdEncoding.EncodeToString(resp.CiphertextBlob) + + return nil + } +} + +func 
testAccDataSourceAwsKmsSecretsDecrypt(t *testing.T, plaintext string, encryptedPayload *string) resource.TestCheckFunc { + return func(s *terraform.State) error { + dataSourceName := "data.aws_kms_secrets.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccCheckAwsKmsSecretsDataSourceSecret(*encryptedPayload), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "plaintext.%", "1"), + resource.TestCheckResourceAttr(dataSourceName, "plaintext.secret1", plaintext), + ), + }, + }, + }) + + return nil + } +} + +const testAccCheckAwsKmsSecretsDataSourceKey = ` +resource "aws_kms_key" "test" { + deletion_window_in_days = 7 + description = "Testing the Terraform AWS KMS Secrets data_source" +} +` + +func testAccCheckAwsKmsSecretsDataSourceSecret(payload string) string { + return testAccCheckAwsKmsSecretsDataSourceKey + fmt.Sprintf(` +data "aws_kms_secrets" "test" { + secret { + name = "secret1" + payload = %q + + context { + name = "value" + } + } +} +`, payload) +} diff --git a/aws/data_source_aws_lambda_function.go b/aws/data_source_aws_lambda_function.go new file mode 100644 index 00000000000..392a0e4442b --- /dev/null +++ b/aws/data_source_aws_lambda_function.go @@ -0,0 +1,154 @@ +package aws + +import ( + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsLambdaFunction() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsLambdaFunctionRead, + + Schema: map[string]*schema.Schema{ + "function_name": { + Type: schema.TypeString, + Required: true, + }, + "qualifier": { + Type: schema.TypeString, + Optional: true, + Default: "$LATEST", + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "dead_letter_config": { + Type: schema.TypeList, + Computed: true, + MinItems: 0, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "target_arn": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "handler": { + Type: schema.TypeString, + Computed: true, + }, + "memory_size": { + Type: schema.TypeInt, + Computed: true, + }, + "reserved_concurrent_executions": { + Type: schema.TypeInt, + Computed: true, + }, + "role": { + Type: schema.TypeString, + Computed: true, + }, + "runtime": { + Type: schema.TypeString, + Computed: true, + }, + "timeout": { + Type: schema.TypeInt, + Computed: true, + }, + "version": { + Type: schema.TypeString, + Computed: true, + }, + "vpc_config": { + Type: schema.TypeList, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "subnet_ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "security_group_ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "qualified_arn": { + Type: schema.TypeString, + Computed: true, + }, + "invoke_arn": { + Type: schema.TypeString, + Computed: true, + }, + "last_modified": { + Type: schema.TypeString, + Computed: true, + }, + "source_code_hash": { + Type: schema.TypeString, + Computed: true, + }, + "source_code_size": { + Type: schema.TypeInt, + Computed: true, + }, + "environment": { + Type: schema.TypeList, + Computed: true, + MaxItems: 1, + Elem: 
&schema.Resource{ + Schema: map[string]*schema.Schema{ + "variables": { + Type: schema.TypeMap, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "tracing_config": { + Type: schema.TypeList, + MaxItems: 1, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "mode": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "kms_key_arn": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) error { + d.SetId(d.Get("function_name").(string)) + return resourceAwsLambdaFunctionRead(d, meta) +} diff --git a/aws/data_source_aws_lambda_function_test.go b/aws/data_source_aws_lambda_function_test.go new file mode 100644 index 00000000000..50198777eeb --- /dev/null +++ b/aws/data_source_aws_lambda_function_test.go @@ -0,0 +1,340 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAWSLambdaFunction_basic(t *testing.T) { + rString := acctest.RandString(7) + roleName := fmt.Sprintf("tf-acctest-d-lambda-function-basic-role-%s", rString) + policyName := fmt.Sprintf("tf-acctest-d-lambda-function-basic-policy-%s", rString) + sgName := fmt.Sprintf("tf-acctest-d-lambda-function-basic-sg-%s", rString) + funcName := fmt.Sprintf("tf-acctest-d-lambda-function-basic-func-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAWSLambdaFunctionConfigBasic(roleName, policyName, sgName, funcName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "arn"), + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "role"), + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "source_code_hash"), + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "source_code_size"), + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "last_modified"), + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "qualified_arn"), + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "invoke_arn"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "function_name", funcName), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "description", funcName), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "qualifier", "$LATEST"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "handler", "exports.example"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "memory_size", "128"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "runtime", "nodejs4.3"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "timeout", "3"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "version", "$LATEST"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "reserved_concurrent_executions", "0"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "dead_letter_config.#", "0"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "tracing_config.#", "1"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", 
"tracing_config.0.mode", "PassThrough"), + ), + }, + }, + }) +} + +func TestAccDataSourceAWSLambdaFunction_version(t *testing.T) { + rString := acctest.RandString(7) + roleName := fmt.Sprintf("tf-acctest-d-lambda-function-version-role-%s", rString) + policyName := fmt.Sprintf("tf-acctest-d-lambda-function-version-policy-%s", rString) + sgName := fmt.Sprintf("tf-acctest-d-lambda-function-version-sg-%s", rString) + funcName := fmt.Sprintf("tf-acctest-d-lambda-function-version-func-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAWSLambdaFunctionConfigVersion(roleName, policyName, sgName, funcName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "arn"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "function_name", funcName), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "qualifier", "1"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "version", "1"), + ), + }, + }, + }) +} + +func TestAccDataSourceAWSLambdaFunction_alias(t *testing.T) { + rString := acctest.RandString(7) + roleName := fmt.Sprintf("tf-acctest-d-lambda-function-alias-role-%s", rString) + policyName := fmt.Sprintf("tf-acctest-d-lambda-function-alias-policy-%s", rString) + sgName := fmt.Sprintf("tf-acctest-d-lambda-function-alias-sg-%s", rString) + funcName := fmt.Sprintf("tf-acctest-d-lambda-function-alias-func-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAWSLambdaFunctionConfigAlias(roleName, policyName, sgName, funcName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "arn"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "function_name", funcName), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "qualifier", "alias-name"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "version", "1"), + ), + }, + }, + }) +} + +func TestAccDataSourceAWSLambdaFunction_vpc(t *testing.T) { + rString := acctest.RandString(7) + roleName := fmt.Sprintf("tf-acctest-d-lambda-function-vpc-role-%s", rString) + policyName := fmt.Sprintf("tf-acctest-d-lambda-function-vpc-policy-%s", rString) + sgName := fmt.Sprintf("tf-acctest-d-lambda-function-vpc-sg-%s", rString) + funcName := fmt.Sprintf("tf-acctest-d-lambda-function-vpc-func-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAWSLambdaFunctionConfigVPC(roleName, policyName, sgName, funcName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "arn"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "vpc_config.#", "1"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "vpc_config.0.security_group_ids.#", "1"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "vpc_config.0.subnet_ids.#", "1"), + ), + }, + }, + }) +} + +func TestAccDataSourceAWSLambdaFunction_environment(t *testing.T) { + rString := acctest.RandString(7) + roleName := 
fmt.Sprintf("tf-acctest-d-lambda-function-environment-role-%s", rString) + policyName := fmt.Sprintf("tf-acctest-d-lambda-function-environment-policy-%s", rString) + sgName := fmt.Sprintf("tf-acctest-d-lambda-function-environment-sg-%s", rString) + funcName := fmt.Sprintf("tf-acctest-d-lambda-function-environment-func-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAWSLambdaFunctionConfigEnvironment(roleName, policyName, sgName, funcName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_lambda_function.acctest", "arn"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "environment.#", "1"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "environment.0.variables.%", "2"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "environment.0.variables.key1", "value1"), + resource.TestCheckResourceAttr("data.aws_lambda_function.acctest", "environment.0.variables.key2", "value2"), + ), + }, + }, + }) +} + +func testAccDataSourceAWSLambdaFunctionConfigBase(roleName, policyName, sgName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "lambda" { + name = "%s" + + assume_role_policy = < 1 { + return errors.New("Multiple matching Launch Configurations found") + } + + lc := describConfs.LaunchConfigurations[0] + + d.Set("key_name", lc.KeyName) + d.Set("image_id", lc.ImageId) + d.Set("instance_type", lc.InstanceType) + d.Set("name", lc.LaunchConfigurationName) + d.Set("user_data", lc.UserData) + d.Set("iam_instance_profile", lc.IamInstanceProfile) + d.Set("ebs_optimized", lc.EbsOptimized) + d.Set("spot_price", lc.SpotPrice) + d.Set("associate_public_ip_address", lc.AssociatePublicIpAddress) + d.Set("vpc_classic_link_id", lc.ClassicLinkVPCId) + d.Set("enable_monitoring", false) + + if lc.InstanceMonitoring != nil { + d.Set("enable_monitoring", lc.InstanceMonitoring.Enabled) + } + + vpcSGs := make([]string, 0, len(lc.SecurityGroups)) + for _, sg := range lc.SecurityGroups { + vpcSGs = append(vpcSGs, *sg) + } + if err := d.Set("security_groups", vpcSGs); err != nil { + return fmt.Errorf("error setting security_groups: %s", err) + } + + classicSGs := make([]string, 0, len(lc.ClassicLinkVPCSecurityGroups)) + for _, sg := range lc.ClassicLinkVPCSecurityGroups { + classicSGs = append(classicSGs, *sg) + } + if err := d.Set("vpc_classic_link_security_groups", classicSGs); err != nil { + return fmt.Errorf("error setting vpc_classic_link_security_groups: %s", err) + } + + if err := readLCBlockDevices(d, lc, ec2conn); err != nil { + return err + } + + return nil +} diff --git a/aws/data_source_aws_launch_configuration_test.go b/aws/data_source_aws_launch_configuration_test.go new file mode 100644 index 00000000000..f9018760853 --- /dev/null +++ b/aws/data_source_aws_launch_configuration_test.go @@ -0,0 +1,109 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSLaunchConfigurationDataSource_basic(t *testing.T) { + rInt := acctest.RandInt() + rName := "data.aws_launch_configuration.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccLaunchConfigurationDataSourceConfig_basic(rInt), + Check: 
resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(rName, "image_id"), + resource.TestCheckResourceAttrSet(rName, "instance_type"), + resource.TestCheckResourceAttrSet(rName, "associate_public_ip_address"), + resource.TestCheckResourceAttrSet(rName, "user_data"), + resource.TestCheckResourceAttr(rName, "root_block_device.#", "1"), + resource.TestCheckResourceAttr(rName, "ebs_block_device.#", "1"), + resource.TestCheckResourceAttr(rName, "ephemeral_block_device.#", "1"), + ), + }, + }, + }) +} +func TestAccAWSLaunchConfigurationDataSource_securityGroups(t *testing.T) { + rInt := acctest.RandInt() + rName := "data.aws_launch_configuration.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccLaunchConfigurationDataSourceConfig_securityGroups(rInt), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(rName, "security_groups.#", "1"), + ), + }, + }, + }) +} + +func testAccLaunchConfigurationDataSourceConfig_basic(rInt int) string { + return fmt.Sprintf(` +resource "aws_launch_configuration" "foo" { + name = "terraform-test-%d" + image_id = "ami-21f78e11" + instance_type = "m1.small" + associate_public_ip_address = true + user_data = "foobar-user-data" + + root_block_device { + volume_type = "gp2" + volume_size = 11 + } + ebs_block_device { + device_name = "/dev/sdb" + volume_size = 9 + } + ebs_block_device { + device_name = "/dev/sdc" + volume_size = 10 + volume_type = "io1" + iops = 100 + } + ephemeral_block_device { + device_name = "/dev/sde" + virtual_name = "ephemeral0" + } +} + +data "aws_launch_configuration" "foo" { + name = "${aws_launch_configuration.foo.name}" +} +`, rInt) +} + +func testAccLaunchConfigurationDataSourceConfig_securityGroups(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_security_group" "test" { + name = "terraform-test_%d" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_launch_configuration" "test" { + name = "terraform-test-%d" + image_id = "ami-21f78e11" + instance_type = "m1.small" + security_groups = ["${aws_security_group.test.id}"] +} + +data "aws_launch_configuration" "foo" { + name = "${aws_launch_configuration.test.name}" +} +`, rInt, rInt) +} diff --git a/aws/data_source_aws_launch_template.go b/aws/data_source_aws_launch_template.go new file mode 100644 index 00000000000..c27f248150b --- /dev/null +++ b/aws/data_source_aws_launch_template.go @@ -0,0 +1,461 @@ +package aws + +import ( + "fmt" + "log" + "strconv" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsLaunchTemplate() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsLaunchTemplateRead, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "default_version": { + Type: schema.TypeInt, + Computed: true, + }, + "latest_version": { + Type: schema.TypeInt, + Computed: true, + }, + "block_device_mappings": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": { + Type: schema.TypeString, + Computed: true, + }, + "no_device": { + 
Type: schema.TypeString, + Computed: true, + }, + "virtual_name": { + Type: schema.TypeString, + Computed: true, + }, + "ebs": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "delete_on_termination": { + Type: schema.TypeString, + Computed: true, + }, + "encrypted": { + Type: schema.TypeString, + Computed: true, + }, + "iops": { + Type: schema.TypeInt, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "snapshot_id": { + Type: schema.TypeString, + Computed: true, + }, + "volume_size": { + Type: schema.TypeInt, + Computed: true, + }, + "volume_type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + "credit_specification": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cpu_credits": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "disable_api_termination": { + Type: schema.TypeBool, + Computed: true, + }, + "ebs_optimized": { + Type: schema.TypeString, + Computed: true, + }, + "elastic_gpu_specifications": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "iam_instance_profile": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "image_id": { + Type: schema.TypeString, + Computed: true, + }, + + "instance_initiated_shutdown_behavior": { + Type: schema.TypeString, + Computed: true, + }, + "instance_market_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "market_type": { + Type: schema.TypeString, + Computed: true, + }, + "spot_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "block_duration_minutes": { + Type: schema.TypeInt, + Computed: true, + }, + "instance_interruption_behavior": { + Type: schema.TypeString, + Computed: true, + }, + "max_price": { + Type: schema.TypeString, + Computed: true, + }, + "spot_instance_type": { + Type: schema.TypeString, + Computed: true, + }, + "valid_until": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + "instance_type": { + Type: schema.TypeString, + Computed: true, + }, + "kernel_id": { + Type: schema.TypeString, + Computed: true, + }, + "key_name": { + Type: schema.TypeString, + Computed: true, + }, + "monitoring": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + "network_interfaces": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "associate_public_ip_address": { + Type: schema.TypeBool, + Computed: true, + }, + "delete_on_termination": { + Type: schema.TypeBool, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "device_index": { + Type: schema.TypeInt, + Computed: true, + }, + "security_groups": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "ipv6_address_count": { + Type: schema.TypeInt, + Computed: true, + }, + "ipv6_addresses": { + 
Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "network_interface_id": { + Type: schema.TypeString, + Computed: true, + }, + "private_ip_address": { + Type: schema.TypeString, + Computed: true, + }, + "ipv4_addresses": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "ipv4_address_count": { + Type: schema.TypeInt, + Computed: true, + }, + "subnet_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "placement": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "affinity": { + Type: schema.TypeString, + Computed: true, + }, + "availability_zone": { + Type: schema.TypeString, + Computed: true, + }, + "group_name": { + Type: schema.TypeString, + Computed: true, + }, + "host_id": { + Type: schema.TypeString, + Computed: true, + }, + "spread_domain": { + Type: schema.TypeString, + Computed: true, + }, + "tenancy": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "ram_disk_id": { + Type: schema.TypeString, + Computed: true, + }, + "security_group_names": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "vpc_security_group_ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "tag_specifications": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "resource_type": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tagsSchemaComputed(), + }, + }, + }, + "user_data": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tagsSchemaComputed(), + }, + } +} + +func dataSourceAwsLaunchTemplateRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + log.Printf("[DEBUG] Reading launch template %s", d.Get("name")) + + dlt, err := conn.DescribeLaunchTemplates(&ec2.DescribeLaunchTemplatesInput{ + LaunchTemplateNames: []*string{aws.String(d.Get("name").(string))}, + }) + + if isAWSErr(err, ec2.LaunchTemplateErrorCodeLaunchTemplateIdDoesNotExist, "") { + log.Printf("[WARN] launch template (%s) not found - removing from state", d.Id()) + d.SetId("") + return nil + } + + // AWS SDK constant above is currently incorrect + if isAWSErr(err, "InvalidLaunchTemplateId.NotFound", "") { + log.Printf("[WARN] launch template (%s) not found - removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("Error getting launch template: %s", err) + } + + if dlt == nil || len(dlt.LaunchTemplates) == 0 { + log.Printf("[WARN] launch template (%s) not found - removing from state", d.Id()) + d.SetId("") + return nil + } + + log.Printf("[DEBUG] Found launch template %s", d.Id()) + + lt := dlt.LaunchTemplates[0] + d.SetId(*lt.LaunchTemplateId) + d.Set("name", lt.LaunchTemplateName) + d.Set("latest_version", lt.LatestVersionNumber) + d.Set("default_version", lt.DefaultVersionNumber) + d.Set("tags", tagsToMap(lt.Tags)) + + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "ec2", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("launch-template/%s", d.Id()), + }.String() + d.Set("arn", arn) + + version := strconv.Itoa(int(*lt.LatestVersionNumber)) + dltv, err := conn.DescribeLaunchTemplateVersions(&ec2.DescribeLaunchTemplateVersionsInput{ + LaunchTemplateId: aws.String(d.Id()), + Versions: 
[]*string{aws.String(version)}, + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] Received launch template version %q (version %d)", d.Id(), *lt.LatestVersionNumber) + + ltData := dltv.LaunchTemplateVersions[0].LaunchTemplateData + + d.Set("disable_api_termination", ltData.DisableApiTermination) + d.Set("image_id", ltData.ImageId) + d.Set("instance_initiated_shutdown_behavior", ltData.InstanceInitiatedShutdownBehavior) + d.Set("instance_type", ltData.InstanceType) + d.Set("kernel_id", ltData.KernelId) + d.Set("key_name", ltData.KeyName) + d.Set("ram_disk_id", ltData.RamDiskId) + d.Set("security_group_names", aws.StringValueSlice(ltData.SecurityGroups)) + d.Set("user_data", ltData.UserData) + d.Set("vpc_security_group_ids", aws.StringValueSlice(ltData.SecurityGroupIds)) + d.Set("ebs_optimized", "") + + if ltData.EbsOptimized != nil { + d.Set("ebs_optimized", strconv.FormatBool(aws.BoolValue(ltData.EbsOptimized))) + } + + if err := d.Set("block_device_mappings", getBlockDeviceMappings(ltData.BlockDeviceMappings)); err != nil { + return err + } + + if strings.HasPrefix(aws.StringValue(ltData.InstanceType), "t2") || strings.HasPrefix(aws.StringValue(ltData.InstanceType), "t3") { + if err := d.Set("credit_specification", getCreditSpecification(ltData.CreditSpecification)); err != nil { + return err + } + } + + if err := d.Set("elastic_gpu_specifications", getElasticGpuSpecifications(ltData.ElasticGpuSpecifications)); err != nil { + return err + } + + if err := d.Set("iam_instance_profile", getIamInstanceProfile(ltData.IamInstanceProfile)); err != nil { + return err + } + + if err := d.Set("instance_market_options", getInstanceMarketOptions(ltData.InstanceMarketOptions)); err != nil { + return err + } + + if err := d.Set("monitoring", getMonitoring(ltData.Monitoring)); err != nil { + return err + } + + if err := d.Set("network_interfaces", getNetworkInterfaces(ltData.NetworkInterfaces)); err != nil { + return err + } + + if err := d.Set("placement", getPlacement(ltData.Placement)); err != nil { + return err + } + + if err := d.Set("tag_specifications", getTagSpecifications(ltData.TagSpecifications)); err != nil { + return err + } + + return nil +} diff --git a/aws/data_source_aws_launch_template_test.go b/aws/data_source_aws_launch_template_test.go new file mode 100644 index 00000000000..d79c7cc8a7e --- /dev/null +++ b/aws/data_source_aws_launch_template_test.go @@ -0,0 +1,44 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSLaunchTemplateDataSource_basic(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + dataSourceName := "data.aws_launch_template.test" + resourceName := "aws_launch_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateDataSourceConfig_Basic(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName, "arn", dataSourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "default_version", dataSourceName, "default_version"), + resource.TestCheckResourceAttrPair(resourceName, "latest_version", dataSourceName, "latest_version"), + resource.TestCheckResourceAttrPair(resourceName, "name", dataSourceName, "name"), + ), + }, + }, + }) +} + +func 
testAccAWSLaunchTemplateDataSourceConfig_Basic(rName string) string { + return fmt.Sprintf(` +resource "aws_launch_template" "test" { + name = %q +} + +data "aws_launch_template" "test" { + name = "${aws_launch_template.test.name}" +} +`, rName) +} diff --git a/aws/data_source_aws_lb.go b/aws/data_source_aws_lb.go index 0d46b4e2b00..6a7395a59f7 100644 --- a/aws/data_source_aws_lb.go +++ b/aws/data_source_aws_lb.go @@ -6,7 +6,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -140,12 +139,12 @@ func dataSourceAwsLbRead(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] Reading Load Balancer: %s", describeLbOpts) describeResp, err := elbconn.DescribeLoadBalancers(describeLbOpts) if err != nil { - return errwrap.Wrapf("Error retrieving LB: {{err}}", err) + return fmt.Errorf("Error retrieving LB: %s", err) } if len(describeResp.LoadBalancers) != 1 { return fmt.Errorf("Search returned %d results, please revise so only one is returned", len(describeResp.LoadBalancers)) } - d.SetId(*describeResp.LoadBalancers[0].LoadBalancerArn) + d.SetId(aws.StringValue(describeResp.LoadBalancers[0].LoadBalancerArn)) return flattenAwsLbResource(d, meta, describeResp.LoadBalancers[0]) } diff --git a/aws/data_source_aws_lb_listener.go b/aws/data_source_aws_lb_listener.go index f0357a5b705..0e54e0862d5 100644 --- a/aws/data_source_aws_lb_listener.go +++ b/aws/data_source_aws_lb_listener.go @@ -17,17 +17,20 @@ func dataSourceAwsLbListener() *schema.Resource { "arn": { Type: schema.TypeString, Optional: true, + Computed: true, ConflictsWith: []string{"load_balancer_arn", "port"}, }, "load_balancer_arn": { Type: schema.TypeString, Optional: true, + Computed: true, ConflictsWith: []string{"arn"}, }, "port": { Type: schema.TypeInt, Optional: true, + Computed: true, ConflictsWith: []string{"arn"}, }, @@ -86,7 +89,7 @@ func dataSourceAwsLbListenerRead(d *schema.ResourceData, meta interface{}) error return err } if len(resp.Listeners) == 0 { - return fmt.Errorf("[DEBUG] no listener exists for load balancer: %s", lbArn) + return fmt.Errorf("no listener exists for load balancer: %s", lbArn) } for _, listener := range resp.Listeners { if *listener.Port == int64(port.(int)) { diff --git a/aws/data_source_aws_lb_listener_test.go b/aws/data_source_aws_lb_listener_test.go index 95b24db3f4d..930b9dbab9e 100644 --- a/aws/data_source_aws_lb_listener_test.go +++ b/aws/data_source_aws_lb_listener_test.go @@ -2,9 +2,7 @@ package aws import ( "fmt" - "math/rand" "testing" - "time" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" @@ -14,7 +12,7 @@ func TestAccDataSourceAWSLBListener_basic(t *testing.T) { lbName := fmt.Sprintf("testlistener-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -45,7 +43,7 @@ func TestAccDataSourceAWSLBListenerBackwardsCompatibility(t *testing.T) { lbName := fmt.Sprintf("testlistener-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) 
- resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -76,7 +74,7 @@ func TestAccDataSourceAWSLBListener_https(t *testing.T) { lbName := fmt.Sprintf("testlistener-https-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProvidersWithTLS, Steps: []resource.TestStep{ @@ -211,6 +209,18 @@ data "aws_lb_listener" "from_lb_and_port" { load_balancer_arn = "${aws_lb.alb_test.arn}" port = "${aws_lb_listener.front_end.port}" } + +output "front_end_load_balancer_arn" { + value = "${data.aws_lb_listener.front_end.load_balancer_arn}" +} + +output "front_end_port" { + value = "${data.aws_lb_listener.front_end.port}" +} + +output "from_lb_and_port_arn" { + value = "${data.aws_lb_listener.from_lb_and_port.arn}" +} `, lbName, targetGroupName) } @@ -347,6 +357,8 @@ resource "aws_lb" "alb_test" { tags { TestName = "TestAccAWSALB_basic" } + + depends_on = ["aws_internet_gateway.gw"] } resource "aws_lb_target_group" "test" { @@ -462,5 +474,5 @@ data "aws_lb_listener" "front_end" { data "aws_lb_listener" "from_lb_and_port" { load_balancer_arn = "${aws_lb.alb_test.arn}" port = "${aws_lb_listener.front_end.port}" -}`, lbName, targetGroupName, rand.New(rand.NewSource(time.Now().UnixNano())).Int()) +}`, lbName, targetGroupName, acctest.RandInt()) } diff --git a/aws/data_source_aws_lb_target_group.go b/aws/data_source_aws_lb_target_group.go index f109194489c..5f7f20f728b 100644 --- a/aws/data_source_aws_lb_target_group.go +++ b/aws/data_source_aws_lb_target_group.go @@ -6,7 +6,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -50,6 +49,11 @@ func dataSourceAwsLbTargetGroup() *schema.Resource { Computed: true, }, + "slow_start": { + Type: schema.TypeInt, + Computed: true, + }, + "stickiness": { Type: schema.TypeList, Computed: true, @@ -142,7 +146,7 @@ func dataSourceAwsLbTargetGroupRead(d *schema.ResourceData, meta interface{}) er log.Printf("[DEBUG] Reading Load Balancer Target Group: %s", describeTgOpts) describeResp, err := elbconn.DescribeTargetGroups(describeTgOpts) if err != nil { - return errwrap.Wrapf("Error retrieving LB Target Group: {{err}}", err) + return fmt.Errorf("Error retrieving LB Target Group: %s", err) } if len(describeResp.TargetGroups) != 1 { return fmt.Errorf("Search returned %d results, please revise so only one is returned", len(describeResp.TargetGroups)) @@ -150,6 +154,6 @@ func dataSourceAwsLbTargetGroupRead(d *schema.ResourceData, meta interface{}) er targetGroup := describeResp.TargetGroups[0] - d.SetId(*targetGroup.TargetGroupArn) + d.SetId(aws.StringValue(targetGroup.TargetGroupArn)) return flattenAwsLbTargetGroupResource(d, meta, targetGroup) } diff --git a/aws/data_source_aws_lb_target_group_test.go b/aws/data_source_aws_lb_target_group_test.go index d0907537f08..db984eb9915 100644 --- a/aws/data_source_aws_lb_target_group_test.go +++ b/aws/data_source_aws_lb_target_group_test.go @@ -12,7 +12,7 @@ func TestAccDataSourceAWSALBTargetGroup_basic(t *testing.T) { lbName := fmt.Sprintf("testlb-%s", acctest.RandStringFromCharSet(13, 
acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -27,6 +27,7 @@ func TestAccDataSourceAWSALBTargetGroup_basic(t *testing.T) { resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_arn", "protocol", "HTTP"), resource.TestCheckResourceAttrSet("data.aws_lb_target_group.alb_tg_test_with_arn", "vpc_id"), resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_arn", "deregistration_delay", "300"), + resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_arn", "slow_start", "0"), resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_arn", "tags.%", "1"), resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_arn", "tags.TestName", "TestAccDataSourceAWSALBTargetGroup_basic"), resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_arn", "stickiness.#", "1"), @@ -46,6 +47,7 @@ func TestAccDataSourceAWSALBTargetGroup_basic(t *testing.T) { resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_name", "protocol", "HTTP"), resource.TestCheckResourceAttrSet("data.aws_lb_target_group.alb_tg_test_with_name", "vpc_id"), resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_name", "deregistration_delay", "300"), + resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_name", "slow_start", "0"), resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_name", "tags.%", "1"), resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_name", "tags.TestName", "TestAccDataSourceAWSALBTargetGroup_basic"), resource.TestCheckResourceAttr("data.aws_lb_target_group.alb_tg_test_with_name", "stickiness.#", "1"), @@ -67,7 +69,7 @@ func TestAccDataSourceAWSLBTargetGroupBackwardsCompatibility(t *testing.T) { lbName := fmt.Sprintf("testlb-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -82,6 +84,7 @@ func TestAccDataSourceAWSLBTargetGroupBackwardsCompatibility(t *testing.T) { resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_arn", "protocol", "HTTP"), resource.TestCheckResourceAttrSet("data.aws_alb_target_group.alb_tg_test_with_arn", "vpc_id"), resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_arn", "deregistration_delay", "300"), + resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_arn", "slow_start", "0"), resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_arn", "tags.%", "1"), resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_arn", "tags.TestName", "TestAccDataSourceAWSALBTargetGroup_basic"), resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_arn", "stickiness.#", "1"), @@ -101,6 +104,7 @@ func TestAccDataSourceAWSLBTargetGroupBackwardsCompatibility(t *testing.T) { resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_name", "protocol", "HTTP"), 
resource.TestCheckResourceAttrSet("data.aws_alb_target_group.alb_tg_test_with_name", "vpc_id"), resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_name", "deregistration_delay", "300"), + resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_name", "slow_start", "0"), resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_name", "tags.%", "1"), resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_name", "tags.TestName", "TestAccDataSourceAWSALBTargetGroup_basic"), resource.TestCheckResourceAttr("data.aws_alb_target_group.alb_tg_test_with_name", "stickiness.#", "1"), diff --git a/aws/data_source_aws_lb_test.go b/aws/data_source_aws_lb_test.go index b87cdb01566..3308bb33a6a 100644 --- a/aws/data_source_aws_lb_test.go +++ b/aws/data_source_aws_lb_test.go @@ -11,7 +11,7 @@ import ( func TestAccDataSourceAWSLB_basic(t *testing.T) { lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -51,7 +51,7 @@ func TestAccDataSourceAWSLB_basic(t *testing.T) { func TestAccDataSourceAWSLBBackwardsCompatibility(t *testing.T) { lbName := fmt.Sprintf("testaccawsalb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_mq_broker.go b/aws/data_source_aws_mq_broker.go new file mode 100644 index 00000000000..0fe5ca6b6e0 --- /dev/null +++ b/aws/data_source_aws_mq_broker.go @@ -0,0 +1,208 @@ +package aws + +import ( + "errors" + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/mq" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsMqBroker() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsmQBrokerRead, + + Schema: map[string]*schema.Schema{ + "broker_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ConflictsWith: []string{"broker_name"}, + }, + "broker_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ConflictsWith: []string{"broker_id"}, + }, + "auto_minor_version_upgrade": { + Type: schema.TypeBool, + Computed: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "configuration": { + Type: schema.TypeList, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Computed: true, + }, + "revision": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + }, + "deployment_mode": { + Type: schema.TypeString, + Computed: true, + }, + "engine_type": { + Type: schema.TypeString, + Computed: true, + }, + "engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "host_instance_type": { + Type: schema.TypeString, + Computed: true, + }, + "instances": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "console_url": { + Type: schema.TypeString, + Computed: true, + }, + "ip_address": { + Type: schema.TypeString, + Computed: true, + }, + "endpoints": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + 
"logs": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + // Ignore missing configuration block + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "general": { + Type: schema.TypeBool, + Computed: true, + }, + "audit": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + "maintenance_window_start_time": { + Type: schema.TypeList, + MaxItems: 1, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "day_of_week": { + Type: schema.TypeString, + Computed: true, + }, + "time_of_day": { + Type: schema.TypeString, + Computed: true, + }, + "time_zone": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "publicly_accessible": { + Type: schema.TypeBool, + Computed: true, + }, + "security_groups": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + }, + "subnet_ids": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + }, + "user": { + Type: schema.TypeSet, + Computed: true, + Set: resourceAwsMqUserHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "console_access": { + Type: schema.TypeBool, + Computed: true, + }, + "groups": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Computed: true, + }, + "username": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + } +} + +func dataSourceAwsmQBrokerRead(d *schema.ResourceData, meta interface{}) error { + if brokerId, ok := d.GetOk("broker_id"); ok { + d.SetId(brokerId.(string)) + } else { + conn := meta.(*AWSClient).mqconn + brokerName := d.Get("broker_name").(string) + var nextToken string + for { + out, err := conn.ListBrokers(&mq.ListBrokersInput{NextToken: aws.String(nextToken)}) + if err != nil { + return errors.New("Failed to list mq brokers") + } + for _, broker := range out.BrokerSummaries { + if aws.StringValue(broker.BrokerName) == brokerName { + brokerId := aws.StringValue(broker.BrokerId) + d.Set("broker_id", brokerId) + d.SetId(brokerId) + } + } + if out.NextToken == nil { + break + } + nextToken = *out.NextToken + } + + if d.Id() == "" { + return fmt.Errorf("Failed to determine mq broker: %s", brokerName) + } + } + + return resourceAwsMqBrokerRead(d, meta) +} diff --git a/aws/data_source_aws_mq_broker_test.go b/aws/data_source_aws_mq_broker_test.go new file mode 100644 index 00000000000..19996e7ba37 --- /dev/null +++ b/aws/data_source_aws_mq_broker_test.go @@ -0,0 +1,204 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAWSMqBroker_basic(t *testing.T) { + rString := acctest.RandString(7) + prefix := "tf-acctest-d-mq-broker" + brokerName := fmt.Sprintf("%s-%s", prefix, rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAWSMqBrokerConfig_byId(brokerName, prefix), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "arn", + "aws_mq_broker.acctest", "arn"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "broker_name", + "aws_mq_broker.acctest", "broker_name"), + 
resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "auto_minor_version_upgrade", + "aws_mq_broker.acctest", "auto_minor_version_upgrade"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "deployment_mode", + "aws_mq_broker.acctest", "deployment_mode"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "configuration.#", + "aws_mq_broker.acctest", "configuration.#"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "engine_type", + "aws_mq_broker.acctest", "engine_type"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "engine_version", + "aws_mq_broker.acctest", "engine_version"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "host_instance_type", + "aws_mq_broker.acctest", "host_instance_type"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "instances.#", + "aws_mq_broker.acctest", "instances.#"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "logs.#", + "aws_mq_broker.acctest", "logs.#"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "maintenance_window_start_time.#", + "aws_mq_broker.acctest", "maintenance_window_start_time.#"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "publicly_accessible", + "aws_mq_broker.acctest", "publicly_accessible"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "security_groups.#", + "aws_mq_broker.acctest", "security_groups.#"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "subnet_ids.#", + "aws_mq_broker.acctest", "subnet_ids.#"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_id", "user.#", + "aws_mq_broker.acctest", "user.#"), + ), + }, + { + Config: testAccDataSourceAWSMqBrokerConfig_byName(brokerName, prefix), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_name", "broker_id", + "aws_mq_broker.acctest", "id"), + resource.TestCheckResourceAttrPair( + "data.aws_mq_broker.by_name", "broker_name", + "aws_mq_broker.acctest", "broker_name"), + ), + }, + }, + }) +} + +func testAccDataSourceAWSMqBrokerConfig_base(brokerName, prefix string) string { + return fmt.Sprintf(` +variable "prefix" { + default = "%s" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "acctest" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "${var.prefix}" + } +} + +resource "aws_internet_gateway" "acctest" { + vpc_id = "${aws_vpc.acctest.id}" +} + +resource "aws_route_table" "acctest" { + vpc_id = "${aws_vpc.acctest.id}" + + route { + cidr_block = "0.0.0.0/0" + gateway_id = "${aws_internet_gateway.acctest.id}" + } +} + +resource "aws_subnet" "acctest" { + count = 2 + cidr_block = "10.0.${count.index}.0/24" + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + vpc_id = "${aws_vpc.acctest.id}" + + tags { + Name = "${var.prefix}" + } +} + +resource "aws_route_table_association" "acctest" { + count = 2 + subnet_id = "${aws_subnet.acctest.*.id[count.index]}" + route_table_id = "${aws_route_table.acctest.id}" +} + +resource "aws_security_group" "acctest" { + count = 2 + name = "${var.prefix}-${count.index}" + vpc_id = "${aws_vpc.acctest.id}" +} + +resource "aws_mq_configuration" "acctest" { + name = "${var.prefix}" + engine_type = "ActiveMQ" + engine_version = "5.15.0" + + data = <<DATA +<?xml version="1.0" encoding="UTF-8" standalone="yes"?> +<broker xmlns="http://activemq.apache.org/schema/core"></broker> +DATA +} + +resource "aws_mq_broker" "acctest" { + auto_minor_version_upgrade = true + apply_immediately = true + broker_name = 
"%s" + + configuration { + id = "${aws_mq_configuration.acctest.id}" + revision = "${aws_mq_configuration.acctest.latest_revision}" + } + + deployment_mode = "ACTIVE_STANDBY_MULTI_AZ" + engine_type = "ActiveMQ" + engine_version = "5.15.0" + host_instance_type = "mq.t2.micro" + + maintenance_window_start_time { + day_of_week = "TUESDAY" + time_of_day = "02:00" + time_zone = "CET" + } + + publicly_accessible = true + security_groups = ["${aws_security_group.acctest.*.id}"] + subnet_ids = ["${aws_subnet.acctest.*.id}"] + + user { + username = "Ender" + password = "AndrewWiggin" + } + + user { + username = "Petra" + password = "PetraArkanian" + console_access = true + groups = ["dragon", "salamander", "leopard"] + } + + depends_on = ["aws_internet_gateway.acctest"] +} +`, prefix, brokerName) +} + +func testAccDataSourceAWSMqBrokerConfig_byId(brokerName, prefix string) string { + return testAccDataSourceAWSMqBrokerConfig_base(brokerName, prefix) + fmt.Sprintf(` +data "aws_mq_broker" "by_id" { + broker_id = "${aws_mq_broker.acctest.id}" +} +`) +} + +func testAccDataSourceAWSMqBrokerConfig_byName(brokerName, prefix string) string { + return testAccDataSourceAWSMqBrokerConfig_base(brokerName, prefix) + fmt.Sprintf(` +data "aws_mq_broker" "by_name" { + broker_name = "${aws_mq_broker.acctest.broker_name}" +}`) +} diff --git a/aws/data_source_aws_nat_gateway.go b/aws/data_source_aws_nat_gateway.go index 98947f397c2..a805694473d 100644 --- a/aws/data_source_aws_nat_gateway.go +++ b/aws/data_source_aws_nat_gateway.go @@ -89,6 +89,12 @@ func dataSourceAwsNatGatewayRead(d *schema.ResourceData, meta interface{}) error )...) } + if tags, ok := d.GetOk("tags"); ok { + req.Filter = append(req.Filter, buildEC2TagFilterList( + tagsFromMap(tags.(map[string]interface{})), + )...) + } + req.Filter = append(req.Filter, buildEC2CustomFilterList( d.Get("filter").(*schema.Set), )...) @@ -116,6 +122,7 @@ func dataSourceAwsNatGatewayRead(d *schema.ResourceData, meta interface{}) error d.Set("state", ngw.State) d.Set("subnet_id", ngw.SubnetId) d.Set("vpc_id", ngw.VpcId) + d.Set("tags", tagsToMap(ngw.Tags)) for _, address := range ngw.NatGatewayAddresses { if *address.AllocationId != "" { diff --git a/aws/data_source_aws_nat_gateway_test.go b/aws/data_source_aws_nat_gateway_test.go index 1657fe8ac56..92cad96d391 100644 --- a/aws/data_source_aws_nat_gateway_test.go +++ b/aws/data_source_aws_nat_gateway_test.go @@ -12,11 +12,11 @@ func TestAccDataSourceAwsNatGateway(t *testing.T) { // This is used as a portion of CIDR network addresses. 
rInt := acctest.RandIntRange(4, 254) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsNatGatewayConfig(rInt), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrPair( @@ -25,12 +25,16 @@ func TestAccDataSourceAwsNatGateway(t *testing.T) { resource.TestCheckResourceAttrPair( "data.aws_nat_gateway.test_by_subnet_id", "subnet_id", "aws_nat_gateway.test", "subnet_id"), + resource.TestCheckResourceAttrPair( + "data.aws_nat_gateway.test_by_tags", "tags.Name", + "aws_nat_gateway.test", "tags.Name"), resource.TestCheckResourceAttrSet("data.aws_nat_gateway.test_by_id", "state"), resource.TestCheckResourceAttrSet("data.aws_nat_gateway.test_by_id", "allocation_id"), resource.TestCheckResourceAttrSet("data.aws_nat_gateway.test_by_id", "network_interface_id"), resource.TestCheckResourceAttrSet("data.aws_nat_gateway.test_by_id", "public_ip"), resource.TestCheckResourceAttrSet("data.aws_nat_gateway.test_by_id", "private_ip"), resource.TestCheckNoResourceAttr("data.aws_nat_gateway.test_by_id", "attached_vpc_id"), + resource.TestCheckResourceAttrSet("data.aws_nat_gateway.test_by_id", "tags.OtherTag"), ), }, }, @@ -73,13 +77,13 @@ resource "aws_internet_gateway" "test" { } } -# NGWs are not taggable, either resource "aws_nat_gateway" "test" { subnet_id = "${aws_subnet.test.id}" allocation_id = "${aws_eip.test.id}" tags { - Name = "terraform-testacc-nat-gw-data-source" + Name = "terraform-testacc-nat-gw-data-source-%d" + OtherTag = "some-value" } depends_on = ["aws_internet_gateway.test"] @@ -93,5 +97,11 @@ data "aws_nat_gateway" "test_by_subnet_id" { subnet_id = "${aws_nat_gateway.test.subnet_id}" } -`, rInt, rInt, rInt) +data "aws_nat_gateway" "test_by_tags" { + tags { + Name = "${aws_nat_gateway.test.tags["Name"]}" + } +} + +`, rInt, rInt, rInt, rInt) } diff --git a/aws/data_source_aws_network_acls.go b/aws/data_source_aws_network_acls.go new file mode 100644 index 00000000000..bd99ed18a65 --- /dev/null +++ b/aws/data_source_aws_network_acls.go @@ -0,0 +1,92 @@ +package aws + +import ( + "errors" + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsNetworkAcls() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsNetworkAclsRead, + Schema: map[string]*schema.Schema{ + "filter": ec2CustomFiltersSchema(), + + "tags": tagsSchemaComputed(), + + "vpc_id": { + Type: schema.TypeString, + Optional: true, + }, + + "ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func dataSourceAwsNetworkAclsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + req := &ec2.DescribeNetworkAclsInput{} + + if v, ok := d.GetOk("vpc_id"); ok { + req.Filters = buildEC2AttributeFilterList( + map[string]string{ + "vpc-id": v.(string), + }, + ) + } + + filters, filtersOk := d.GetOk("filter") + tags, tagsOk := d.GetOk("tags") + + if tagsOk { + req.Filters = append(req.Filters, buildEC2TagFilterList( + tagsFromMap(tags.(map[string]interface{})), + )...) + } + + if filtersOk { + req.Filters = append(req.Filters, buildEC2CustomFilterList( + filters.(*schema.Set), + )...) 
+ } + + if len(req.Filters) == 0 { + // Don't send an empty filters list; the EC2 API won't accept it. + req.Filters = nil + } + + log.Printf("[DEBUG] DescribeNetworkAcls %s\n", req) + resp, err := conn.DescribeNetworkAcls(req) + if err != nil { + return err + } + + if resp == nil || len(resp.NetworkAcls) == 0 { + return errors.New("no matching network ACLs found") + } + + networkAcls := make([]string, 0) + + for _, networkAcl := range resp.NetworkAcls { + networkAcls = append(networkAcls, aws.StringValue(networkAcl.NetworkAclId)) + } + + d.SetId(resource.UniqueId()) + if err := d.Set("ids", networkAcls); err != nil { + return fmt.Errorf("Error setting network ACL ids: %s", err) + } + + return nil +} diff --git a/aws/data_source_aws_network_acls_test.go b/aws/data_source_aws_network_acls_test.go new file mode 100644 index 00000000000..63de06b487b --- /dev/null +++ b/aws/data_source_aws_network_acls_test.go @@ -0,0 +1,141 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsNetworkAcls_basic(t *testing.T) { + rName := acctest.RandString(5) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + // Ensure at least 1 network ACL exists. We cannot use depends_on. + Config: testAccDataSourceAwsNetworkAclsConfig_Base(rName), + }, + { + Config: testAccDataSourceAwsNetworkAclsConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + // At least 1 + resource.TestMatchResourceAttr("data.aws_network_acls.test", "ids.#", regexp.MustCompile(`^[1-9][0-9]*`)), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsNetworkAcls_Filter(t *testing.T) { + rName := acctest.RandString(5) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsNetworkAclsConfig_Filter(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_network_acls.test", "ids.#", "1"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsNetworkAcls_Tags(t *testing.T) { + rName := acctest.RandString(5) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsNetworkAclsConfig_Tags(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_network_acls.test", "ids.#", "2"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsNetworkAcls_VpcID(t *testing.T) { + rName := acctest.RandString(5) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsNetworkAclsConfig_VpcID(rName), + Check: resource.ComposeTestCheckFunc( + // The VPC will have a default network ACL + resource.TestCheckResourceAttr("data.aws_network_acls.test", "ids.#", "3"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsNetworkAclsConfig_Base(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "testacc-acl-%s" + } +} + +resource "aws_network_acl" "acl" { + 
count = 2 + + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "testacc-acl-%s" + } +} +`, rName, rName) +} + +func testAccDataSourceAwsNetworkAclsConfig_basic(rName string) string { + return testAccDataSourceAwsNetworkAclsConfig_Base(rName) + ` +data "aws_network_acls" "test" {} +` +} + +func testAccDataSourceAwsNetworkAclsConfig_Filter(rName string) string { + return testAccDataSourceAwsNetworkAclsConfig_Base(rName) + ` +data "aws_network_acls" "test" { + filter { + name = "network-acl-id" + values = ["${aws_network_acl.acl.0.id}"] + } +} +` +} + +func testAccDataSourceAwsNetworkAclsConfig_Tags(rName string) string { + return testAccDataSourceAwsNetworkAclsConfig_Base(rName) + ` +data "aws_network_acls" "test" { + tags { + Name = "${aws_network_acl.acl.0.tags.Name}" + } +} +` +} + +func testAccDataSourceAwsNetworkAclsConfig_VpcID(rName string) string { + return testAccDataSourceAwsNetworkAclsConfig_Base(rName) + ` +data "aws_network_acls" "test" { + vpc_id = "${aws_network_acl.acl.0.vpc_id}" +} +` +} diff --git a/aws/data_source_aws_network_interface.go b/aws/data_source_aws_network_interface.go index 85834577041..51b91b49a15 100644 --- a/aws/data_source_aws_network_interface.go +++ b/aws/data_source_aws_network_interface.go @@ -175,7 +175,7 @@ func dataSourceAwsNetworkInterfaceRead(d *schema.ResourceData, meta interface{}) d.Set("mac_address", eni.MacAddress) d.Set("owner_id", eni.OwnerId) d.Set("private_dns_name", eni.PrivateDnsName) - d.Set("private_id", eni.PrivateIpAddress) + d.Set("private_ip", eni.PrivateIpAddress) d.Set("private_ips", flattenNetworkInterfacesPrivateIPAddresses(eni.PrivateIpAddresses)) d.Set("requester_id", eni.RequesterId) d.Set("subnet_id", eni.SubnetId) diff --git a/aws/data_source_aws_network_interface_test.go b/aws/data_source_aws_network_interface_test.go index 933c6225ea3..32fe393338b 100644 --- a/aws/data_source_aws_network_interface_test.go +++ b/aws/data_source_aws_network_interface_test.go @@ -10,7 +10,7 @@ import ( func TestAccDataSourceAwsNetworkInterface_basic(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -19,6 +19,13 @@ func TestAccDataSourceAwsNetworkInterface_basic(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttr("data.aws_network_interface.test", "private_ips.#", "1"), resource.TestCheckResourceAttr("data.aws_network_interface.test", "security_groups.#", "1"), + resource.TestCheckResourceAttrPair("data.aws_network_interface.test", "private_ip", "aws_network_interface.test", "private_ip"), + resource.TestCheckResourceAttrSet("data.aws_network_interface.test", "availability_zone"), + resource.TestCheckResourceAttrPair("data.aws_network_interface.test", "description", "aws_network_interface.test", "description"), + resource.TestCheckResourceAttrSet("data.aws_network_interface.test", "interface_type"), + resource.TestCheckResourceAttrPair("data.aws_network_interface.test", "private_dns_name", "aws_network_interface.test", "private_dns_name"), + resource.TestCheckResourceAttrPair("data.aws_network_interface.test", "subnet_id", "aws_network_interface.test", "subnet_id"), + resource.TestCheckResourceAttrSet("data.aws_network_interface.test", "vpc_id"), ), }, }, @@ -64,7 +71,7 @@ data "aws_network_interface" "test" { func TestAccDataSourceAwsNetworkInterface_filters(t *testing.T) { rName := acctest.RandString(5) - 
resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_network_interfaces.go b/aws/data_source_aws_network_interfaces.go new file mode 100644 index 00000000000..316c598032f --- /dev/null +++ b/aws/data_source_aws_network_interfaces.go @@ -0,0 +1,79 @@ +package aws + +import ( + "errors" + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsNetworkInterfaces() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsNetworkInterfacesRead, + Schema: map[string]*schema.Schema{ + + "filter": ec2CustomFiltersSchema(), + + "tags": tagsSchemaComputed(), + + "ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func dataSourceAwsNetworkInterfacesRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + req := &ec2.DescribeNetworkInterfacesInput{} + + filters, filtersOk := d.GetOk("filter") + tags, tagsOk := d.GetOk("tags") + + if tagsOk { + req.Filters = buildEC2TagFilterList( + tagsFromMap(tags.(map[string]interface{})), + ) + } + + if filtersOk { + req.Filters = append(req.Filters, buildEC2CustomFilterList( + filters.(*schema.Set), + )...) + } + + if len(req.Filters) == 0 { + req.Filters = nil + } + + log.Printf("[DEBUG] DescribeNetworkInterfaces %s\n", req) + resp, err := conn.DescribeNetworkInterfaces(req) + if err != nil { + return err + } + + if resp == nil || len(resp.NetworkInterfaces) == 0 { + return errors.New("no matching network interfaces found") + } + + networkInterfaces := make([]string, 0) + + for _, networkInterface := range resp.NetworkInterfaces { + networkInterfaces = append(networkInterfaces, aws.StringValue(networkInterface.NetworkInterfaceId)) + } + + d.SetId(resource.UniqueId()) + if err := d.Set("ids", networkInterfaces); err != nil { + return fmt.Errorf("Error setting network interfaces ids: %s", err) + } + + return nil +} diff --git a/aws/data_source_aws_network_interfaces_test.go b/aws/data_source_aws_network_interfaces_test.go new file mode 100644 index 00000000000..8b693615cc0 --- /dev/null +++ b/aws/data_source_aws_network_interfaces_test.go @@ -0,0 +1,95 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsNetworkInterfaces_Filter(t *testing.T) { + rName := acctest.RandString(5) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsNetworkInterfacesConfig_Filter(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_network_interfaces.test", "ids.#", "2"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsNetworkInterfaces_Tags(t *testing.T) { + rName := acctest.RandString(5) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsNetworkInterfacesConfig_Tags(rName), + 
Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_network_interfaces.test", "ids.#", "1"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsNetworkInterfacesConfig_Base(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-eni-data-source-basic-%s" + } +} + +resource "aws_subnet" "test" { + cidr_block = "10.0.0.0/24" + vpc_id = "${aws_vpc.test.id}" + tags { + Name = "terraform-testacc-eni-data-source-basic-%s" + } +} + +resource "aws_network_interface" "test" { + subnet_id = "${aws_subnet.test.id}" +} + +resource "aws_network_interface" "test1" { + subnet_id = "${aws_subnet.test.id}" + tags { + Name = "${aws_vpc.test.tags.Name}" + } +} + +`, rName, rName) +} + +func testAccDataSourceAwsNetworkInterfacesConfig_Filter(rName string) string { + return testAccDataSourceAwsNetworkInterfacesConfig_Base(rName) + ` +data "aws_network_interfaces" "test" { + filter { + name = "subnet-id" + values = ["${aws_network_interface.test.subnet_id}", "${aws_network_interface.test1.subnet_id}"] + } +} +` +} + +func testAccDataSourceAwsNetworkInterfacesConfig_Tags(rName string) string { + return testAccDataSourceAwsNetworkInterfacesConfig_Base(rName) + ` +data "aws_network_interfaces" "test" { + tags { + Name = "${aws_network_interface.test1.tags.Name}" + } +} +` +} diff --git a/aws/data_source_aws_partition_test.go b/aws/data_source_aws_partition_test.go index 5610a0b968e..57c0ab2bfbb 100644 --- a/aws/data_source_aws_partition_test.go +++ b/aws/data_source_aws_partition_test.go @@ -9,7 +9,7 @@ import ( ) func TestAccAWSPartition_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_prefix_list_test.go b/aws/data_source_aws_prefix_list_test.go index c9ad308d092..0839788bd53 100644 --- a/aws/data_source_aws_prefix_list_test.go +++ b/aws/data_source_aws_prefix_list_test.go @@ -10,11 +10,11 @@ import ( ) func TestAccDataSourceAwsPrefixList(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsPrefixListConfig, Check: resource.ComposeTestCheckFunc( testAccDataSourceAwsPrefixListCheck("data.aws_prefix_list.s3_by_id"), diff --git a/aws/data_source_aws_pricing_product.go b/aws/data_source_aws_pricing_product.go new file mode 100644 index 00000000000..393acef12a6 --- /dev/null +++ b/aws/data_source_aws_pricing_product.go @@ -0,0 +1,92 @@ +package aws + +import ( + "log" + + "encoding/json" + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/pricing" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsPricingProduct() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsPricingProductRead, + Schema: map[string]*schema.Schema{ + "service_code": { + Type: schema.TypeString, + Required: true, + }, + "filters": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "field": { + Type: schema.TypeString, + Required: true, + }, + "value": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "result": { + 
Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsPricingProductRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).pricingconn + + params := &pricing.GetProductsInput{ + ServiceCode: aws.String(d.Get("service_code").(string)), + Filters: []*pricing.Filter{}, + } + + filters := d.Get("filters") + for _, v := range filters.([]interface{}) { + m := v.(map[string]interface{}) + params.Filters = append(params.Filters, &pricing.Filter{ + Field: aws.String(m["field"].(string)), + Value: aws.String(m["value"].(string)), + Type: aws.String(pricing.FilterTypeTermMatch), + }) + } + + log.Printf("[DEBUG] Reading pricing of products: %s", params) + resp, err := conn.GetProducts(params) + if err != nil { + return fmt.Errorf("Error reading pricing of products: %s", err) + } + + numberOfElements := len(resp.PriceList) + if numberOfElements == 0 { + return fmt.Errorf("Pricing product query did not return any elements") + } else if numberOfElements > 1 { + priceListBytes, err := json.Marshal(resp.PriceList) + priceListString := string(priceListBytes) + if err != nil { + priceListString = err.Error() + } + return fmt.Errorf("Pricing product query not precise enough. Returned more than one element: %s", priceListString) + } + + pricingResult, err := json.Marshal(resp.PriceList[0]) + if err != nil { + return fmt.Errorf("Invalid JSON value returned by AWS: %s", err) + } + + d.SetId(fmt.Sprintf("%d", hashcode.String(params.String()))) + d.Set("result", string(pricingResult)) + return nil +} diff --git a/aws/data_source_aws_pricing_product_test.go b/aws/data_source_aws_pricing_product_test.go new file mode 100644 index 00000000000..d4d6993995c --- /dev/null +++ b/aws/data_source_aws_pricing_product_test.go @@ -0,0 +1,124 @@ +package aws + +import ( + "encoding/json" + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsPricingProduct_ec2(t *testing.T) { + oldRegion := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldRegion) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsPricingProductConfigEc2("test", "c5.large"), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_pricing_product.test", "result"), + testAccPricingCheckValueIsJSON("data.aws_pricing_product.test"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsPricingProduct_redshift(t *testing.T) { + oldRegion := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldRegion) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsPricingProductConfigRedshift(), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_pricing_product.test", "result"), + testAccPricingCheckValueIsJSON("data.aws_pricing_product.test"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsPricingProductConfigEc2(dataName string, instanceType string) string { + return fmt.Sprintf(`data "aws_pricing_product" "%s" { + service_code = "AmazonEC2" + + filters = [ + { + field = "instanceType" + value = "%s" + }, + { + field = 
"operatingSystem" + value = "Linux" + }, + { + field = "location" + value = "US East (N. Virginia)" + }, + { + field = "preInstalledSw" + value = "NA" + }, + { + field = "licenseModel" + value = "No License required" + }, + { + field = "tenancy" + value = "Shared" + }, + ] +} +`, dataName, instanceType) +} + +func testAccDataSourceAwsPricingProductConfigRedshift() string { + return fmt.Sprintf(`data "aws_pricing_product" "test" { + service_code = "AmazonRedshift" + + filters = [ + { + field = "instanceType" + value = "ds1.xlarge" + }, + { + field = "location" + value = "US East (N. Virginia)" + }, + ] +} +`) +} + +func testAccPricingCheckValueIsJSON(data string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[data] + + if !ok { + return fmt.Errorf("Can't find resource: %s", data) + } + + result := rs.Primary.Attributes["result"] + var objmap map[string]*json.RawMessage + + if err := json.Unmarshal([]byte(result), &objmap); err != nil { + return fmt.Errorf("%s result value (%s) is not JSON: %s", data, result, err) + } + + if len(objmap) == 0 { + return fmt.Errorf("%s result value (%s) unmarshalling resulted in an empty map", data, result) + } + + return nil + } +} diff --git a/aws/data_source_aws_rds_cluster.go b/aws/data_source_aws_rds_cluster.go index 6fea1159112..27f2d44f0f8 100644 --- a/aws/data_source_aws_rds_cluster.go +++ b/aws/data_source_aws_rds_cluster.go @@ -1,12 +1,12 @@ package aws import ( + "fmt" "log" "strings" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/rds" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -14,6 +14,11 @@ func dataSourceAwsRdsCluster() *schema.Resource { return &schema.Resource{ Read: dataSourceAwsRdsClusterRead, Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "cluster_identifier": { Type: schema.TypeString, Required: true, @@ -58,6 +63,12 @@ func dataSourceAwsRdsCluster() *schema.Resource { Computed: true, }, + "enabled_cloudwatch_logs_exports": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "endpoint": { Type: schema.TypeString, Computed: true, @@ -151,17 +162,106 @@ func dataSourceAwsRdsCluster() *schema.Resource { func dataSourceAwsRdsClusterRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).rdsconn + dbClusterIdentifier := d.Get("cluster_identifier").(string) + params := &rds.DescribeDBClustersInput{ - DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + DBClusterIdentifier: aws.String(dbClusterIdentifier), } log.Printf("[DEBUG] Reading RDS Cluster: %s", params) resp, err := conn.DescribeDBClusters(params) if err != nil { - return errwrap.Wrapf("Error retrieving RDS cluster: {{err}}", err) + return fmt.Errorf("Error retrieving RDS cluster: %s", err) + } + + if resp == nil { + return fmt.Errorf("Error retrieving RDS cluster: empty response for: %s", params) + } + + var dbc *rds.DBCluster + for _, c := range resp.DBClusters { + if aws.StringValue(c.DBClusterIdentifier) == dbClusterIdentifier { + dbc = c + break + } + } + + if dbc == nil { + return fmt.Errorf("Error retrieving RDS cluster: cluster not found in response for: %s", params) + } + + d.SetId(aws.StringValue(dbc.DBClusterIdentifier)) + + if err := d.Set("availability_zones", aws.StringValueSlice(dbc.AvailabilityZones)); err != nil { + return fmt.Errorf("error setting availability_zones: %s", err) + } 
+ + d.Set("arn", dbc.DBClusterArn) + d.Set("backtrack_window", int(aws.Int64Value(dbc.BacktrackWindow))) + d.Set("backup_retention_period", dbc.BackupRetentionPeriod) + d.Set("cluster_identifier", dbc.DBClusterIdentifier) + + var cm []string + for _, m := range dbc.DBClusterMembers { + cm = append(cm, aws.StringValue(m.DBInstanceIdentifier)) + } + if err := d.Set("cluster_members", cm); err != nil { + return fmt.Errorf("error setting cluster_members: %s", err) + } + + d.Set("cluster_resource_id", dbc.DbClusterResourceId) + + // Only set the DatabaseName if it is not nil. There is a known API bug where + // RDS accepts a DatabaseName but does not return it, causing a perpetual + // diff. + // See https://github.com/hashicorp/terraform/issues/4671 for backstory + if dbc.DatabaseName != nil { + d.Set("database_name", dbc.DatabaseName) } - d.SetId(*resp.DBClusters[0].DBClusterIdentifier) + d.Set("db_cluster_parameter_group_name", dbc.DBClusterParameterGroup) + d.Set("db_subnet_group_name", dbc.DBSubnetGroup) + + if err := d.Set("enabled_cloudwatch_logs_exports", aws.StringValueSlice(dbc.EnabledCloudwatchLogsExports)); err != nil { + return fmt.Errorf("error setting enabled_cloudwatch_logs_exports: %s", err) + } + + d.Set("endpoint", dbc.Endpoint) + d.Set("engine_version", dbc.EngineVersion) + d.Set("engine", dbc.Engine) + d.Set("hosted_zone_id", dbc.HostedZoneId) + d.Set("iam_database_authentication_enabled", dbc.IAMDatabaseAuthenticationEnabled) + + var roles []string + for _, r := range dbc.AssociatedRoles { + roles = append(roles, aws.StringValue(r.RoleArn)) + } + if err := d.Set("iam_roles", roles); err != nil { + return fmt.Errorf("error setting iam_roles: %s", err) + } + + d.Set("kms_key_id", dbc.KmsKeyId) + d.Set("master_username", dbc.MasterUsername) + d.Set("port", dbc.Port) + d.Set("preferred_backup_window", dbc.PreferredBackupWindow) + d.Set("preferred_maintenance_window", dbc.PreferredMaintenanceWindow) + d.Set("reader_endpoint", dbc.ReaderEndpoint) + d.Set("replication_source_identifier", dbc.ReplicationSourceIdentifier) + + d.Set("storage_encrypted", dbc.StorageEncrypted) + + var vpcg []string + for _, g := range dbc.VpcSecurityGroups { + vpcg = append(vpcg, aws.StringValue(g.VpcSecurityGroupId)) + } + if err := d.Set("vpc_security_group_ids", vpcg); err != nil { + return fmt.Errorf("error setting vpc_security_group_ids: %s", err) + } + + // Fetch and save tags + if err := saveTagsRDS(conn, d, aws.StringValue(dbc.DBClusterArn)); err != nil { + log.Printf("[WARN] Failed to save tags for RDS Cluster (%s): %s", aws.StringValue(dbc.DBClusterIdentifier), err) + } - return flattenAwsRdsClusterResource(d, meta, resp.DBClusters[0]) + return nil } diff --git a/aws/data_source_aws_rds_cluster_test.go b/aws/data_source_aws_rds_cluster_test.go index dfae1478f01..7d0355486f7 100644 --- a/aws/data_source_aws_rds_cluster_test.go +++ b/aws/data_source_aws_rds_cluster_test.go @@ -8,23 +8,26 @@ import ( "github.com/hashicorp/terraform/helper/resource" ) -func TestAccDataSourceAwsRdsCluster_basic(t *testing.T) { +func TestAccDataSourceAWSRDSCluster_basic(t *testing.T) { clusterName := fmt.Sprintf("testaccawsrdscluster-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + dataSourceName := "data.aws_rds_cluster.test" + resourceName := "aws_rds_cluster.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: 
testAccDataSourceAwsRdsClusterConfigBasic(clusterName), Check: resource.ComposeAggregateTestCheckFunc( - resource.TestCheckResourceAttr("data.aws_rds_cluster.rds_cluster_test", "cluster_identifier", clusterName), - resource.TestCheckResourceAttr("data.aws_rds_cluster.rds_cluster_test", "database_name", "mydb"), - resource.TestCheckResourceAttr("data.aws_rds_cluster.rds_cluster_test", "db_cluster_parameter_group_name", "default.aurora5.6"), - resource.TestCheckResourceAttr("data.aws_rds_cluster.rds_cluster_test", "db_subnet_group_name", clusterName), - resource.TestCheckResourceAttr("data.aws_rds_cluster.rds_cluster_test", "master_username", "foo"), - resource.TestCheckResourceAttr("data.aws_rds_cluster.rds_cluster_test", "tags.%", "1"), - resource.TestCheckResourceAttr("data.aws_rds_cluster.rds_cluster_test", "tags.Environment", "test"), + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "cluster_identifier", resourceName, "cluster_identifier"), + resource.TestCheckResourceAttrPair(dataSourceName, "database_name", resourceName, "database_name"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_cluster_parameter_group_name", resourceName, "db_cluster_parameter_group_name"), + resource.TestCheckResourceAttrPair(dataSourceName, "db_subnet_group_name", resourceName, "db_subnet_group_name"), + resource.TestCheckResourceAttrPair(dataSourceName, "master_username", resourceName, "master_username"), + resource.TestCheckResourceAttrPair(dataSourceName, "tags.%", resourceName, "tags.%"), + resource.TestCheckResourceAttrPair(dataSourceName, "tags.Environment", resourceName, "tags.Environment"), ), }, }, @@ -32,7 +35,7 @@ func TestAccDataSourceAwsRdsCluster_basic(t *testing.T) { } func testAccDataSourceAwsRdsClusterConfigBasic(clusterName string) string { - return fmt.Sprintf(`resource "aws_rds_cluster" "rds_cluster_test" { + return fmt.Sprintf(`resource "aws_rds_cluster" "test" { cluster_identifier = "%s" database_name = "mydb" db_cluster_parameter_group_name = "default.aurora5.6" @@ -74,7 +77,7 @@ resource "aws_db_subnet_group" "test" { subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"] } -data "aws_rds_cluster" "rds_cluster_test" { - cluster_identifier = "${aws_rds_cluster.rds_cluster_test.cluster_identifier}" +data "aws_rds_cluster" "test" { + cluster_identifier = "${aws_rds_cluster.test.cluster_identifier}" }`, clusterName, clusterName) } diff --git a/aws/data_source_aws_redshift_cluster.go b/aws/data_source_aws_redshift_cluster.go new file mode 100644 index 00000000000..4aaeeb251ed --- /dev/null +++ b/aws/data_source_aws_redshift_cluster.go @@ -0,0 +1,278 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsRedshiftCluster() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsRedshiftClusterRead, + + Schema: map[string]*schema.Schema{ + + "cluster_identifier": { + Type: schema.TypeString, + Required: true, + }, + + "allow_version_upgrade": { + Type: schema.TypeBool, + Computed: true, + }, + + "automated_snapshot_retention_period": { + Type: schema.TypeInt, + Computed: true, + }, + + "availability_zone": { + Type: schema.TypeString, + Computed: true, + }, + + "bucket_name": { + Type: schema.TypeString, + Computed: true, + }, + + "cluster_parameter_group_name": { + Type: schema.TypeString, + Computed: true, + }, + + 
"cluster_public_key": { + Type: schema.TypeString, + Computed: true, + }, + + "cluster_revision_number": { + Type: schema.TypeString, + Computed: true, + }, + + "cluster_security_groups": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "cluster_subnet_group_name": { + Type: schema.TypeString, + Computed: true, + }, + + "cluster_type": { + Type: schema.TypeString, + Computed: true, + }, + + "cluster_version": { + Type: schema.TypeString, + Computed: true, + }, + + "database_name": { + Type: schema.TypeString, + Computed: true, + }, + + "elastic_ip": { + Type: schema.TypeString, + Computed: true, + }, + + "enable_logging": { + Type: schema.TypeBool, + Computed: true, + }, + + "encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + + "endpoint": { + Type: schema.TypeString, + Computed: true, + }, + + "enhanced_vpc_routing": { + Type: schema.TypeBool, + Computed: true, + }, + + "iam_roles": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "kms_key_id": { + Type: schema.TypeString, + Computed: true, + }, + + "master_username": { + Type: schema.TypeString, + Computed: true, + }, + + "node_type": { + Type: schema.TypeString, + Computed: true, + }, + + "number_of_nodes": { + Type: schema.TypeInt, + Computed: true, + }, + + "port": { + Type: schema.TypeInt, + Computed: true, + }, + + "preferred_maintenance_window": { + Type: schema.TypeString, + Computed: true, + }, + + "publicly_accessible": { + Type: schema.TypeBool, + Computed: true, + }, + + "s3_key_prefix": { + Type: schema.TypeString, + Computed: true, + }, + + "tags": tagsSchema(), + + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + + "vpc_security_group_ids": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + } +} + +func dataSourceAwsRedshiftClusterRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + cluster := d.Get("cluster_identifier").(string) + + log.Printf("[INFO] Reading Redshift Cluster Information: %s", cluster) + + resp, err := conn.DescribeClusters(&redshift.DescribeClustersInput{ + ClusterIdentifier: aws.String(cluster), + }) + + if err != nil { + return fmt.Errorf("Error describing Redshift Cluster: %s, error: %s", cluster, err) + } + + if resp.Clusters == nil || len(resp.Clusters) == 0 { + return fmt.Errorf("Error describing Redshift Cluster: %s, cluster information not found", cluster) + } + + rsc := *resp.Clusters[0] + + d.SetId(cluster) + d.Set("allow_version_upgrade", rsc.AllowVersionUpgrade) + d.Set("automated_snapshot_retention_period", rsc.AutomatedSnapshotRetentionPeriod) + d.Set("availability_zone", rsc.AvailabilityZone) + d.Set("cluster_identifier", rsc.ClusterIdentifier) + + if len(rsc.ClusterParameterGroups) > 0 { + d.Set("cluster_parameter_group_name", rsc.ClusterParameterGroups[0].ParameterGroupName) + } + + d.Set("cluster_public_key", rsc.ClusterPublicKey) + d.Set("cluster_revision_number", rsc.ClusterRevisionNumber) + + var csg []string + for _, g := range rsc.ClusterSecurityGroups { + csg = append(csg, *g.ClusterSecurityGroupName) + } + if err := d.Set("cluster_security_groups", csg); err != nil { + return fmt.Errorf("Error saving Cluster Security Group Names to state for Redshift Cluster (%s): %s", cluster, err) + } + + d.Set("cluster_subnet_group_name", rsc.ClusterSubnetGroupName) + + if len(rsc.ClusterNodes) > 1 { + d.Set("cluster_type", "multi-node") + } else { + 
d.Set("cluster_type", "single-node") + } + + d.Set("cluster_version", rsc.ClusterVersion) + d.Set("database_name", rsc.DBName) + + if rsc.ElasticIpStatus != nil { + d.Set("elastic_ip", rsc.ElasticIpStatus.ElasticIp) + } + + d.Set("encrypted", rsc.Encrypted) + + if rsc.Endpoint != nil { + d.Set("endpoint", rsc.Endpoint.Address) + } + + d.Set("enhanced_vpc_routing", rsc.EnhancedVpcRouting) + + var iamRoles []string + for _, i := range rsc.IamRoles { + iamRoles = append(iamRoles, *i.IamRoleArn) + } + if err := d.Set("iam_roles", iamRoles); err != nil { + return fmt.Errorf("Error saving IAM Roles to state for Redshift Cluster (%s): %s", cluster, err) + } + + d.Set("kms_key_id", rsc.KmsKeyId) + d.Set("master_username", rsc.MasterUsername) + d.Set("node_type", rsc.NodeType) + d.Set("number_of_nodes", rsc.NumberOfNodes) + d.Set("port", rsc.Endpoint.Port) + d.Set("preferred_maintenance_window", rsc.PreferredMaintenanceWindow) + d.Set("publicly_accessible", rsc.PubliclyAccessible) + d.Set("tags", tagsToMapRedshift(rsc.Tags)) + d.Set("vpc_id", rsc.VpcId) + + var vpcg []string + for _, g := range rsc.VpcSecurityGroups { + vpcg = append(vpcg, *g.VpcSecurityGroupId) + } + if err := d.Set("vpc_security_group_ids", vpcg); err != nil { + return fmt.Errorf("Error saving VPC Security Group IDs to state for Redshift Cluster (%s): %s", cluster, err) + } + + log.Printf("[INFO] Reading Redshift Cluster Logging Status: %s", cluster) + loggingStatus, loggingErr := conn.DescribeLoggingStatus(&redshift.DescribeLoggingStatusInput{ + ClusterIdentifier: aws.String(cluster), + }) + + if loggingErr != nil { + return loggingErr + } + + if loggingStatus != nil && aws.BoolValue(loggingStatus.LoggingEnabled) { + d.Set("enable_logging", loggingStatus.LoggingEnabled) + d.Set("bucket_name", loggingStatus.BucketName) + d.Set("s3_key_prefix", loggingStatus.S3KeyPrefix) + } + + return nil +} diff --git a/aws/data_source_aws_redshift_cluster_test.go b/aws/data_source_aws_redshift_cluster_test.go new file mode 100644 index 00000000000..d3b28dd49c0 --- /dev/null +++ b/aws/data_source_aws_redshift_cluster_test.go @@ -0,0 +1,195 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSDataSourceRedshiftCluster_basic(t *testing.T) { + rInt := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSDataSourceRedshiftClusterConfig(rInt), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "allow_version_upgrade"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "automated_snapshot_retention_period"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "availability_zone"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "cluster_identifier"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "cluster_parameter_group_name"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "cluster_public_key"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "cluster_revision_number"), + resource.TestCheckResourceAttr("data.aws_redshift_cluster.test", "cluster_type", "single-node"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "cluster_version"), + 
resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "database_name"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "encrypted"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "endpoint"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "master_username"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "node_type"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "number_of_nodes"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "port"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "preferred_maintenance_window"), + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "publicly_accessible"), + ), + }, + }, + }) +} + +func TestAccAWSDataSourceRedshiftCluster_vpc(t *testing.T) { + rInt := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSDataSourceRedshiftClusterConfigWithVpc(rInt), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet("data.aws_redshift_cluster.test", "vpc_id"), + resource.TestCheckResourceAttr("data.aws_redshift_cluster.test", "vpc_security_group_ids.#", "1"), + resource.TestCheckResourceAttr("data.aws_redshift_cluster.test", "cluster_type", "multi-node"), + resource.TestCheckResourceAttr("data.aws_redshift_cluster.test", "cluster_subnet_group_name", fmt.Sprintf("tf-redshift-subnet-group-%d", rInt)), + ), + }, + }, + }) +} + +func TestAccAWSDataSourceRedshiftCluster_logging(t *testing.T) { + rInt := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSDataSourceRedshiftClusterConfigWithLogging(rInt), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_redshift_cluster.test", "enable_logging", "true"), + resource.TestCheckResourceAttr("data.aws_redshift_cluster.test", "bucket_name", fmt.Sprintf("tf-redshift-logging-%d", rInt)), + resource.TestCheckResourceAttr("data.aws_redshift_cluster.test", "s3_key_prefix", "cluster-logging/"), + ), + }, + }, + }) +} + +func testAccAWSDataSourceRedshiftClusterConfig(rInt int) string { + return fmt.Sprintf(` +resource "aws_redshift_cluster" "test" { + cluster_identifier = "tf-redshift-cluster-%d" + + database_name = "testdb" + master_username = "foo" + master_password = "Password1" + node_type = "dc1.large" + cluster_type = "single-node" + skip_final_snapshot = true +} + +data "aws_redshift_cluster" "test" { + cluster_identifier = "${aws_redshift_cluster.test.cluster_identifier}" +} +`, rInt) +} + +func testAccAWSDataSourceRedshiftClusterConfigWithVpc(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + availability_zone = "us-west-2a" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_subnet" "bar" { + cidr_block = "10.1.2.0/24" + availability_zone = "us-west-2b" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_redshift_subnet_group" "test" { + name = "tf-redshift-subnet-group-%d" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] +} + +resource "aws_security_group" "test" { + name = "tf-redshift-sg-%d" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_redshift_cluster" 
"test" { + cluster_identifier = "tf-redshift-cluster-%d" + + database_name = "testdb" + master_username = "foo" + master_password = "Password1" + node_type = "dc1.large" + cluster_type = "multi-node" + number_of_nodes = 2 + publicly_accessible = false + cluster_subnet_group_name = "${aws_redshift_subnet_group.test.name}" + vpc_security_group_ids = ["${aws_security_group.test.id}"] + skip_final_snapshot = true +} + +data "aws_redshift_cluster" "test" { + cluster_identifier = "${aws_redshift_cluster.test.cluster_identifier}" +} +`, rInt, rInt, rInt) +} + +func testAccAWSDataSourceRedshiftClusterConfigWithLogging(rInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "tf-redshift-logging-%d" + force_destroy = true + policy = < 1 { + return fmt.Errorf("Your query returned more than one route table. Please change your search criteria and try again.") + } + + results := getRoutes(resp.RouteTables[0], d) + + if len(results) == 0 { + return fmt.Errorf("No routes matching supplied arguments found in table(s)") + } + if len(results) > 1 { + return fmt.Errorf("Multiple routes matched; use additional constraints to reduce matches to a single route") + } + route := results[0] + + d.SetId(resourceAwsRouteID(d, route)) // using function from "resource_aws_route.go" + d.Set("destination_cidr_block", route.DestinationCidrBlock) + d.Set("destination_ipv6_cidr_block", route.DestinationIpv6CidrBlock) + d.Set("egress_only_gateway_id", route.EgressOnlyInternetGatewayId) + d.Set("gateway_id", route.GatewayId) + d.Set("instance_id", route.InstanceId) + d.Set("nat_gateway_id", route.NatGatewayId) + d.Set("vpc_peering_connection_id", route.VpcPeeringConnectionId) + d.Set("network_interface_id", route.NetworkInterfaceId) + + return nil +} + +func getRoutes(table *ec2.RouteTable, d *schema.ResourceData) []*ec2.Route { + ec2Routes := table.Routes + routes := make([]*ec2.Route, 0, len(ec2Routes)) + // Loop through the routes and add them to the set + for _, r := range ec2Routes { + + if r.Origin != nil && *r.Origin == "EnableVgwRoutePropagation" { + continue + } + + if r.DestinationPrefixListId != nil { + // Skipping because VPC endpoint routes are handled separately + // See aws_vpc_endpoint + continue + } + + if v, ok := d.GetOk("destination_cidr_block"); ok { + if r.DestinationCidrBlock == nil || *r.DestinationCidrBlock != v.(string) { + continue + } + } + + if v, ok := d.GetOk("destination_ipv6_cidr_block"); ok { + if r.DestinationIpv6CidrBlock == nil || *r.DestinationIpv6CidrBlock != v.(string) { + continue + } + } + + if v, ok := d.GetOk("egress_only_gateway_id"); ok { + if r.EgressOnlyInternetGatewayId == nil || *r.EgressOnlyInternetGatewayId != v.(string) { + continue + } + } + + if v, ok := d.GetOk("gateway_id"); ok { + if r.GatewayId == nil || *r.GatewayId != v.(string) { + continue + } + } + + if v, ok := d.GetOk("instance_id"); ok { + if r.InstanceId == nil || *r.InstanceId != v.(string) { + continue + } + } + + if v, ok := d.GetOk("nat_gateway_id"); ok { + if r.NatGatewayId == nil || *r.NatGatewayId != v.(string) { + continue + } + } + + if v, ok := d.GetOk("vpc_peering_connection_id"); ok { + if r.VpcPeeringConnectionId == nil || *r.VpcPeeringConnectionId != v.(string) { + continue + } + } + + if v, ok := d.GetOk("network_interface_id"); ok { + if r.NetworkInterfaceId == nil || *r.NetworkInterfaceId != v.(string) { + continue + } + } + routes = append(routes, r) + } + return routes +} diff --git a/aws/data_source_aws_route53_zone.go b/aws/data_source_aws_route53_zone.go 
index 55629f5def5..34104295a9c 100644 --- a/aws/data_source_aws_route53_zone.go +++ b/aws/data_source_aws_route53_zone.go @@ -51,6 +51,11 @@ func dataSourceAwsRoute53Zone() *schema.Resource { Optional: true, Computed: true, }, + "name_servers": { + Type: schema.TypeList, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + }, }, } } @@ -135,7 +140,6 @@ func dataSourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) erro break } } - } if matchingTags && matchingVPC { @@ -146,7 +150,6 @@ func dataSourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) erro hostedZoneFound = hostedZone } } - } if *resp.IsTruncated { nextMarker = resp.NextMarker @@ -166,6 +169,13 @@ func dataSourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) erro d.Set("private_zone", hostedZoneFound.Config.PrivateZone) d.Set("caller_reference", hostedZoneFound.CallerReference) d.Set("resource_record_set_count", hostedZoneFound.ResourceRecordSetCount) + + nameServers, err := hostedZoneNameServers(idHostedZone, conn) + if err != nil { + return fmt.Errorf("Error finding Route 53 Hosted Zone: %v", err) + } + d.Set("name_servers", nameServers) + return nil } @@ -177,3 +187,26 @@ func hostedZoneName(name string) string { return name + "." } + +// used to retrieve name servers +func hostedZoneNameServers(id string, conn *route53.Route53) ([]string, error) { + req := &route53.GetHostedZoneInput{} + req.Id = aws.String(id) + + resp, err := conn.GetHostedZone(req) + if err != nil { + return []string{}, err + } + + if resp.DelegationSet == nil { + return []string{}, nil + } + + servers := []string{} + for _, server := range resp.DelegationSet.NameServers { + if server != nil { + servers = append(servers, *server) + } + } + return servers, nil +} diff --git a/aws/data_source_aws_route53_zone_test.go b/aws/data_source_aws_route53_zone_test.go index 0eb8439a93a..c2da728aa18 100644 --- a/aws/data_source_aws_route53_zone_test.go +++ b/aws/data_source_aws_route53_zone_test.go @@ -16,9 +16,10 @@ func TestAccDataSourceAwsRoute53Zone(t *testing.T) { privateResourceName := "aws_route53_zone.test_private" privateDomain := fmt.Sprintf("test.acc-%d.", rInt) - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, Steps: []resource.TestStep{ { Config: testAccDataSourceAwsRoute53ZoneConfig(rInt), @@ -54,17 +55,17 @@ func testAccDataSourceAwsRoute53ZoneCheck(rsName, dsName, zName string) resource attr := rs.Primary.Attributes if attr["id"] != hostedZone.Primary.Attributes["id"] { - return fmt.Errorf( - "id is %s; want %s", - attr["id"], - hostedZone.Primary.Attributes["id"], - ) + return fmt.Errorf("Route53 Zone id is %s; want %s", attr["id"], hostedZone.Primary.Attributes["id"]) } if attr["name"] != zName { return fmt.Errorf("Route53 Zone name is %q; want %q", attr["name"], zName) } + if attr["private_zone"] == "false" && len(attr["name_servers"]) == 0 { + return fmt.Errorf("Route53 Zone %s has no name_servers", zName) + } + return nil } } diff --git a/aws/data_source_aws_route_table_test.go b/aws/data_source_aws_route_table_test.go index d3c7e30f795..594cfc565b0 100644 --- a/aws/data_source_aws_route_table_test.go +++ b/aws/data_source_aws_route_table_test.go @@ -9,7 +9,7 @@ import ( ) func TestAccDataSourceAwsRouteTable_basic(t *testing.T) { - resource.Test(t, 
resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -28,7 +28,7 @@ func TestAccDataSourceAwsRouteTable_basic(t *testing.T) { } func TestAccDataSourceAwsRouteTable_main(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_route_tables.go b/aws/data_source_aws_route_tables.go new file mode 100644 index 00000000000..ef34987c37f --- /dev/null +++ b/aws/data_source_aws_route_tables.go @@ -0,0 +1,80 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsRouteTables() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsRouteTablesRead, + Schema: map[string]*schema.Schema{ + + "filter": ec2CustomFiltersSchema(), + + "tags": tagsSchemaComputed(), + + "vpc_id": { + Type: schema.TypeString, + Optional: true, + }, + + "ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func dataSourceAwsRouteTablesRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + req := &ec2.DescribeRouteTablesInput{} + + if v, ok := d.GetOk("vpc_id"); ok { + req.Filters = buildEC2AttributeFilterList( + map[string]string{ + "vpc-id": v.(string), + }, + ) + } + + req.Filters = append(req.Filters, buildEC2TagFilterList( + tagsFromMap(d.Get("tags").(map[string]interface{})), + )...) + + req.Filters = append(req.Filters, buildEC2CustomFilterList( + d.Get("filter").(*schema.Set), + )...) 
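+
+	// The vpc-id, tag, and custom filters built above are ANDed together by the
+	// EC2 API, so the DescribeRouteTables call below only returns route tables
+	// that match all of them.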
+ + log.Printf("[DEBUG] DescribeRouteTables %s\n", req) + resp, err := conn.DescribeRouteTables(req) + if err != nil { + return err + } + + if resp == nil || len(resp.RouteTables) == 0 { + return fmt.Errorf("no matching route tables found for vpc with id %s", d.Get("vpc_id").(string)) + } + + routeTables := make([]string, 0) + + for _, routeTable := range resp.RouteTables { + routeTables = append(routeTables, aws.StringValue(routeTable.RouteTableId)) + } + + d.SetId(resource.UniqueId()) + if err = d.Set("ids", routeTables); err != nil { + return fmt.Errorf("error setting ids: %s", err) + } + + return nil +} diff --git a/aws/data_source_aws_route_tables_test.go b/aws/data_source_aws_route_tables_test.go new file mode 100644 index 00000000000..951c2fd5b3b --- /dev/null +++ b/aws/data_source_aws_route_tables_test.go @@ -0,0 +1,168 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsRouteTables(t *testing.T) { + rInt := acctest.RandIntRange(0, 256) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsRouteTablesConfig(rInt), + }, + { + Config: testAccDataSourceAwsRouteTablesConfigWithDataSource(rInt), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_route_tables.test", "ids.#", "5"), + resource.TestCheckResourceAttr("data.aws_route_tables.private", "ids.#", "3"), + resource.TestCheckResourceAttr("data.aws_route_tables.test2", "ids.#", "1"), + resource.TestCheckResourceAttr("data.aws_route_tables.filter_test", "ids.#", "2"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsRouteTablesConfigWithDataSource(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "172.%d.0.0/16" + + tags { + Name = "terraform-testacc-route-tables-data-source" + } +} + +resource "aws_vpc" "test2" { + cidr_block = "172.%d.0.0/16" + + tags { + Name = "terraform-test2acc-route-tables-data-source" + } +} + +resource "aws_route_table" "test_public_a" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-route-tables-data-source-public-a" + Tier = "Public" + Component = "Frontend" + } +} + +resource "aws_route_table" "test_private_a" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-route-tables-data-source-private-a" + Tier = "Private" + Component = "Database" + } +} + +resource "aws_route_table" "test_private_b" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-route-tables-data-source-private-b" + Tier = "Private" + Component = "Backend-1" + } +} + +resource "aws_route_table" "test_private_c" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-route-tables-data-source-private-c" + Tier = "Private" + Component = "Backend-2" + } +} + +data "aws_route_tables" "test" { + vpc_id = "${aws_vpc.test.id}" +} + +data "aws_route_tables" "test2" { + vpc_id = "${aws_vpc.test2.id}" +} + +data "aws_route_tables" "private" { + vpc_id = "${aws_vpc.test.id}" + tags { + Tier = "Private" + } +} + +data "aws_route_tables" "filter_test" { + vpc_id = "${aws_vpc.test.id}" + + filter { + name = "tag:Component" + values = ["Backend*"] + } +} +`, rInt, rInt) +} + +func testAccDataSourceAwsRouteTablesConfig(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "172.%d.0.0/16" + + tags { + Name = 
"terraform-testacc-route-tables-data-source" + } +} + +resource "aws_route_table" "test_public_a" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-route-tables-data-source-public-a" + Tier = "Public" + Component = "Frontend" + } +} + +resource "aws_route_table" "test_private_a" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-route-tables-data-source-private-a" + Tier = "Private" + Component = "Database" + } +} + +resource "aws_route_table" "test_private_b" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-route-tables-data-source-private-b" + Tier = "Private" + Component = "Backend-1" + } +} + +resource "aws_route_table" "test_private_c" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-route-tables-data-source-private-c" + Tier = "Private" + Component = "Backend-2" + } +} +`, rInt) +} diff --git a/aws/data_source_aws_route_test.go b/aws/data_source_aws_route_test.go new file mode 100644 index 00000000000..3ea69c5f8f7 --- /dev/null +++ b/aws/data_source_aws_route_test.go @@ -0,0 +1,174 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsRoute_basic(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsRouteGroupConfig, + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsRouteCheck("data.aws_route.by_destination_cidr_block"), + testAccDataSourceAwsRouteCheck("data.aws_route.by_instance_id"), + testAccDataSourceAwsRouteCheck("data.aws_route.by_peering_connection_id"), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccDataSourceAwsRouteCheck(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + + if !ok { + return fmt.Errorf("root module has no resource called %s", name) + } + + r, ok := s.RootModule().Resources["aws_route.test"] + if !ok { + return fmt.Errorf("can't find aws_route.test in state") + } + rts, ok := s.RootModule().Resources["aws_route_table.test"] + if !ok { + return fmt.Errorf("can't find aws_route_table.test in state") + } + + attr := rs.Primary.Attributes + + if attr["route_table_id"] != r.Primary.Attributes["route_table_id"] { + return fmt.Errorf( + "route_table_id is %s; want %s", + attr["route_table_id"], + r.Primary.Attributes["route_table_id"], + ) + } + + if attr["route_table_id"] != rts.Primary.Attributes["id"] { + return fmt.Errorf( + "route_table_id is %s; want %s", + attr["route_table_id"], + rts.Primary.Attributes["id"], + ) + } + + return nil + } +} + +const testAccDataSourceAwsRouteGroupConfig = ` +provider "aws" { + region = "ap-southeast-2" +} + +resource "aws_vpc" "test" { + cidr_block = "172.16.0.0/16" + + tags { + Name = "terraform-testacc-route-table-data-source" + } +} + +resource "aws_vpc" "dest" { + cidr_block = "172.17.0.0/16" + + tags { + Name = "terraform-testacc-route-table-data-source" + } +} + +resource "aws_vpc_peering_connection" "test" { + peer_vpc_id = "${aws_vpc.dest.id}" + vpc_id = "${aws_vpc.test.id}" + auto_accept = true +} + +resource "aws_subnet" "test" { + cidr_block = "172.16.0.0/24" + vpc_id = "${aws_vpc.test.id}" + tags { + Name = "tf-acc-route-table-data-source" + } +} + +resource "aws_route_table" "test" { + vpc_id = "${aws_vpc.test.id}" + tags { + Name = "terraform-testacc-routetable-data-source" + } +} 
+ +resource "aws_route" "pcx" { + route_table_id = "${aws_route_table.test.id}" + vpc_peering_connection_id = "${aws_vpc_peering_connection.test.id}" + destination_cidr_block = "10.0.2.0/24" +} + +resource "aws_route_table_association" "a" { + subnet_id = "${aws_subnet.test.id}" + route_table_id = "${aws_route_table.test.id}" +} + +data "aws_ami" "ubuntu" { + most_recent = true + + filter { + name = "name" + values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } + + owners = ["099720109477"] # Canonical + } + + resource "aws_instance" "web" { + ami = "${data.aws_ami.ubuntu.id}" + instance_type = "t2.micro" + subnet_id = "${aws_subnet.test.id}" + tags { + Name = "HelloWorld" + } + } + + +resource "aws_route" "test" { + route_table_id = "${aws_route_table.test.id}" + destination_cidr_block = "10.0.1.0/24" + instance_id = "${aws_instance.web.id}" + timeouts { + create ="5m" + } +} + +data "aws_route" "by_peering_connection_id"{ + route_table_id = "${aws_route_table.test.id}" + vpc_peering_connection_id = "${aws_route.pcx.vpc_peering_connection_id}" +} + +data "aws_route" "by_destination_cidr_block"{ + route_table_id = "${aws_route_table.test.id}" + destination_cidr_block = "10.0.1.0/24" + depends_on = ["aws_route.test"] +} + +data "aws_route" "by_instance_id"{ + route_table_id = "${aws_route_table.test.id}" + instance_id = "${aws_instance.web.id}" + depends_on = ["aws_route.test"] +} + + +` diff --git a/aws/data_source_aws_s3_bucket.go b/aws/data_source_aws_s3_bucket.go index 1303ae35a67..6a678a75b5b 100644 --- a/aws/data_source_aws_s3_bucket.go +++ b/aws/data_source_aws_s3_bucket.go @@ -5,6 +5,7 @@ import ( "log" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/s3" "github.com/hashicorp/terraform/helper/schema" ) @@ -63,7 +64,12 @@ func dataSourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { } d.SetId(bucket) - d.Set("arn", fmt.Sprintf("arn:%s:s3:::%s", meta.(*AWSClient).partition, bucket)) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "s3", + Resource: bucket, + }.String() + d.Set("arn", arn) d.Set("bucket_domain_name", bucketDomainName(bucket)) if err := bucketLocation(d, bucket, conn); err != nil { diff --git a/aws/data_source_aws_s3_bucket_object_test.go b/aws/data_source_aws_s3_bucket_object_test.go index 8bfe6c73f2d..ce6b87bf1f7 100644 --- a/aws/data_source_aws_s3_bucket_object_test.go +++ b/aws/data_source_aws_s3_bucket_object_test.go @@ -19,18 +19,18 @@ func TestAccDataSourceAWSS3BucketObject_basic(t *testing.T) { var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: resourceOnlyConf, Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object", &rObj), ), }, - resource.TestStep{ + { Config: conf, Check: resource.ComposeTestCheckFunc( testAccCheckAwsS3ObjectDataSourceExists("data.aws_s3_bucket_object.obj", &dsObj), @@ -53,18 +53,18 @@ func TestAccDataSourceAWSS3BucketObject_readableBody(t *testing.T) { var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, 
Providers: testAccProviders, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: resourceOnlyConf, Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object", &rObj), ), }, - resource.TestStep{ + { Config: conf, Check: resource.ComposeTestCheckFunc( testAccCheckAwsS3ObjectDataSourceExists("data.aws_s3_bucket_object.obj", &dsObj), @@ -87,18 +87,18 @@ func TestAccDataSourceAWSS3BucketObject_kmsEncrypted(t *testing.T) { var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: resourceOnlyConf, Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object", &rObj), ), }, - resource.TestStep{ + { Config: conf, Check: resource.ComposeTestCheckFunc( testAccCheckAwsS3ObjectDataSourceExists("data.aws_s3_bucket_object.obj", &dsObj), @@ -124,18 +124,18 @@ func TestAccDataSourceAWSS3BucketObject_allParams(t *testing.T) { var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, PreventPostDestroyRefresh: true, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: resourceOnlyConf, Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketObjectExists("aws_s3_bucket_object.object", &rObj), ), }, - resource.TestStep{ + { Config: conf, Check: resource.ComposeTestCheckFunc( testAccCheckAwsS3ObjectDataSourceExists("data.aws_s3_bucket_object.obj", &dsObj), diff --git a/aws/data_source_aws_s3_bucket_test.go b/aws/data_source_aws_s3_bucket_test.go index 12e1ac1b4e1..1858affaec6 100644 --- a/aws/data_source_aws_s3_bucket_test.go +++ b/aws/data_source_aws_s3_bucket_test.go @@ -11,11 +11,11 @@ import ( func TestAccDataSourceS3Bucket_basic(t *testing.T) { rInt := acctest.RandInt() - arnRegexp := regexp.MustCompile( - "^arn:aws:s3:::") - hostedZoneID, _ := HostedZoneIDForRegion("us-west-2") + arnRegexp := regexp.MustCompile(`^arn:aws[\w-]*:s3:::`) + region := testAccGetRegion() + hostedZoneID, _ := HostedZoneIDForRegion(region) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -24,7 +24,7 @@ func TestAccDataSourceS3Bucket_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketExists("data.aws_s3_bucket.bucket"), resource.TestMatchResourceAttr("data.aws_s3_bucket.bucket", "arn", arnRegexp), - resource.TestCheckResourceAttr("data.aws_s3_bucket.bucket", "region", "us-west-2"), + resource.TestCheckResourceAttr("data.aws_s3_bucket.bucket", "region", region), resource.TestCheckResourceAttr( "data.aws_s3_bucket.bucket", "bucket_domain_name", testAccBucketDomainName(rInt)), resource.TestCheckResourceAttr( @@ -38,8 +38,9 @@ func TestAccDataSourceS3Bucket_basic(t *testing.T) { func TestAccDataSourceS3Bucket_website(t *testing.T) { rInt := acctest.RandInt() + region := testAccGetRegion() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -50,7 +51,7 @@ func TestAccDataSourceS3Bucket_website(t *testing.T) 
{ testAccCheckAWSS3BucketWebsite( "data.aws_s3_bucket.bucket", "index.html", "error.html", "", ""), resource.TestCheckResourceAttr( - "data.aws_s3_bucket.bucket", "website_endpoint", testAccWebsiteEndpoint(rInt)), + "data.aws_s3_bucket.bucket", "website_endpoint", testAccWebsiteEndpoint(rInt, region)), ), }, }, diff --git a/aws/data_source_aws_secretsmanager_secret.go b/aws/data_source_aws_secretsmanager_secret.go new file mode 100644 index 00000000000..1cba15188c3 --- /dev/null +++ b/aws/data_source_aws_secretsmanager_secret.go @@ -0,0 +1,139 @@ +package aws + +import ( + "errors" + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/secretsmanager" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/structure" +) + +func dataSourceAwsSecretsManagerSecret() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsSecretsManagerSecretRead, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validateArn, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "policy": { + Type: schema.TypeString, + Computed: true, + }, + "rotation_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "rotation_lambda_arn": { + Type: schema.TypeString, + Computed: true, + }, + "rotation_rules": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "automatically_after_days": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + }, + "tags": { + Type: schema.TypeMap, + Computed: true, + }, + }, + } +} + +func dataSourceAwsSecretsManagerSecretRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).secretsmanagerconn + var secretID string + if v, ok := d.GetOk("arn"); ok { + secretID = v.(string) + } + if v, ok := d.GetOk("name"); ok { + if secretID != "" { + return errors.New("specify only arn or name") + } + secretID = v.(string) + } + + if secretID == "" { + return errors.New("must specify either arn or name") + } + + input := &secretsmanager.DescribeSecretInput{ + SecretId: aws.String(secretID), + } + + log.Printf("[DEBUG] Reading Secrets Manager Secret: %s", input) + output, err := conn.DescribeSecret(input) + if err != nil { + if isAWSErr(err, secretsmanager.ErrCodeResourceNotFoundException, "") { + return fmt.Errorf("Secrets Manager Secret %q not found", secretID) + } + return fmt.Errorf("error reading Secrets Manager Secret: %s", err) + } + + if output.ARN == nil { + return fmt.Errorf("Secrets Manager Secret %q not found", secretID) + } + + d.SetId(aws.StringValue(output.ARN)) + d.Set("arn", output.ARN) + d.Set("description", output.Description) + d.Set("kms_key_id", output.KmsKeyId) + d.Set("name", output.Name) + d.Set("rotation_enabled", output.RotationEnabled) + d.Set("rotation_lambda_arn", output.RotationLambdaARN) + d.Set("policy", "") + + pIn := &secretsmanager.GetResourcePolicyInput{ + SecretId: aws.String(d.Id()), + } + log.Printf("[DEBUG] Reading Secrets Manager Secret policy: %s", pIn) + pOut, err := conn.GetResourcePolicy(pIn) + if err != nil { + return fmt.Errorf("error reading Secrets Manager Secret policy: %s", err) + } + + if pOut != nil && pOut.ResourcePolicy != nil { + policy, err := 
structure.NormalizeJsonString(aws.StringValue(pOut.ResourcePolicy)) + if err != nil { + return fmt.Errorf("policy contains an invalid JSON: %s", err) + } + d.Set("policy", policy) + } + + if err := d.Set("rotation_rules", flattenSecretsManagerRotationRules(output.RotationRules)); err != nil { + return fmt.Errorf("error setting rotation_rules: %s", err) + } + + if err := d.Set("tags", tagsToMapSecretsManager(output.Tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + return nil +} diff --git a/aws/data_source_aws_secretsmanager_secret_test.go b/aws/data_source_aws_secretsmanager_secret_test.go new file mode 100644 index 00000000000..224dc025c04 --- /dev/null +++ b/aws/data_source_aws_secretsmanager_secret_test.go @@ -0,0 +1,204 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsSecretsManagerSecret_Basic(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsSecretsManagerSecretConfig_MissingRequired, + ExpectError: regexp.MustCompile(`must specify either arn or name`), + }, + { + Config: testAccDataSourceAwsSecretsManagerSecretConfig_MultipleSpecified, + ExpectError: regexp.MustCompile(`specify only arn or name`), + }, + { + Config: testAccDataSourceAwsSecretsManagerSecretConfig_NonExistent, + ExpectError: regexp.MustCompile(`not found`), + }, + }, + }) +} + +func TestAccDataSourceAwsSecretsManagerSecret_ARN(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + datasourceName := "data.aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsSecretsManagerSecretConfig_ARN(rName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsSecretsManagerSecretCheck(datasourceName, resourceName), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsSecretsManagerSecret_Name(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + datasourceName := "data.aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsSecretsManagerSecretConfig_Name(rName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsSecretsManagerSecretCheck(datasourceName, resourceName), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsSecretsManagerSecret_Policy(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + datasourceName := "data.aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsSecretsManagerSecretConfig_Policy(rName), + Check: resource.ComposeTestCheckFunc( + testAccDataSourceAwsSecretsManagerSecretCheck(datasourceName, resourceName), + ), + }, + }, + }) +} + +func testAccDataSourceAwsSecretsManagerSecretCheck(datasourceName, resourceName string) 
resource.TestCheckFunc { + return func(s *terraform.State) error { + resource, ok := s.RootModule().Resources[datasourceName] + if !ok { + return fmt.Errorf("root module has no resource called %s", datasourceName) + } + + dataSource, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("root module has no resource called %s", resourceName) + } + + attrNames := []string{ + "arn", + "description", + "kms_key_id", + "name", + "policy", + "rotation_enabled", + "rotation_lambda_arn", + "rotation_rules.#", + "tags.#", + } + + for _, attrName := range attrNames { + if resource.Primary.Attributes[attrName] != dataSource.Primary.Attributes[attrName] { + return fmt.Errorf( + "%s is %s; want %s", + attrName, + resource.Primary.Attributes[attrName], + dataSource.Primary.Attributes[attrName], + ) + } + } + + return nil + } +} + +func testAccDataSourceAwsSecretsManagerSecretConfig_ARN(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "wrong" { + name = "%[1]s-wrong" +} +resource "aws_secretsmanager_secret" "test" { + name = "%[1]s" +} + +data "aws_secretsmanager_secret" "test" { + arn = "${aws_secretsmanager_secret.test.arn}" +} +`, rName) +} + +const testAccDataSourceAwsSecretsManagerSecretConfig_MissingRequired = ` +data "aws_secretsmanager_secret" "test" {} +` + +const testAccDataSourceAwsSecretsManagerSecretConfig_MultipleSpecified = ` +data "aws_secretsmanager_secret" "test" { + arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:tf-acc-test-does-not-exist" + name = "tf-acc-test-does-not-exist" +} +` + +func testAccDataSourceAwsSecretsManagerSecretConfig_Name(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "wrong" { + name = "%[1]s-wrong" +} +resource "aws_secretsmanager_secret" "test" { + name = "%[1]s" +} + +data "aws_secretsmanager_secret" "test" { + name = "${aws_secretsmanager_secret.test.name}" +} +`, rName) +} + +func testAccDataSourceAwsSecretsManagerSecretConfig_Policy(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + name = "%[1]s" + + policy = < 0 { - return fmt.Errorf("[ERROR] SSM Parameter %s is invalid", name) + return fmt.Errorf("Error describing SSM parameter: %s", err) } - param := resp.Parameters[0] + param := resp.Parameter d.SetId(*param.Name) arn := arn.ARN{ @@ -76,7 +69,6 @@ func dataAwsSsmParameterRead(d *schema.ResourceData, meta interface{}) error { Resource: fmt.Sprintf("parameter/%s", strings.TrimPrefix(d.Id(), "/")), } d.Set("arn", arn.String()) - d.Set("name", param.Name) d.Set("type", param.Type) d.Set("value", param.Value) diff --git a/aws/data_source_aws_ssm_parameter_test.go b/aws/data_source_aws_ssm_parameter_test.go index 5d77b3844d2..3d6ba4edb04 100644 --- a/aws/data_source_aws_ssm_parameter_test.go +++ b/aws/data_source_aws_ssm_parameter_test.go @@ -12,7 +12,7 @@ func TestAccAWSSsmParameterDataSource_basic(t *testing.T) { resourceName := "data.aws_ssm_parameter.test" name := "test.parameter" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -48,7 +48,7 @@ func TestAccAWSSsmParameterDataSource_fullPath(t *testing.T) { resourceName := "data.aws_ssm_parameter.test" name := "/path/parameter" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, diff --git a/aws/data_source_aws_storagegateway_local_disk.go b/aws/data_source_aws_storagegateway_local_disk.go new file mode 100644 
index 00000000000..ed86c4a4b5c --- /dev/null +++ b/aws/data_source_aws_storagegateway_local_disk.go @@ -0,0 +1,85 @@ +package aws + +import ( + "errors" + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsStorageGatewayLocalDisk() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsStorageGatewayLocalDiskRead, + + Schema: map[string]*schema.Schema{ + "disk_id": { + Type: schema.TypeString, + Computed: true, + }, + "disk_node": { + Type: schema.TypeString, + Optional: true, + }, + "disk_path": { + Type: schema.TypeString, + Optional: true, + }, + "gateway_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArn, + }, + }, + } +} + +func dataSourceAwsStorageGatewayLocalDiskRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.ListLocalDisksInput{ + GatewayARN: aws.String(d.Get("gateway_arn").(string)), + } + + log.Printf("[DEBUG] Reading Storage Gateway Local Disk: %s", input) + output, err := conn.ListLocalDisks(input) + if err != nil { + return fmt.Errorf("error reading Storage Gateway Local Disk: %s", err) + } + + if output == nil || len(output.Disks) == 0 { + return errors.New("no results found for query, try adjusting your search criteria") + } + + var matchingDisks []*storagegateway.Disk + + for _, disk := range output.Disks { + if v, ok := d.GetOk("disk_node"); ok && v.(string) == aws.StringValue(disk.DiskNode) { + matchingDisks = append(matchingDisks, disk) + continue + } + if v, ok := d.GetOk("disk_path"); ok && v.(string) == aws.StringValue(disk.DiskPath) { + matchingDisks = append(matchingDisks, disk) + continue + } + } + + if len(matchingDisks) == 0 { + return errors.New("no results found for query, try adjusting your search criteria") + } + + if len(matchingDisks) > 1 { + return errors.New("multiple results found for query, try adjusting your search criteria") + } + + disk := matchingDisks[0] + + d.SetId(aws.StringValue(disk.DiskId)) + d.Set("disk_id", disk.DiskId) + d.Set("disk_node", disk.DiskNode) + d.Set("disk_path", disk.DiskPath) + + return nil +} diff --git a/aws/data_source_aws_storagegateway_local_disk_test.go b/aws/data_source_aws_storagegateway_local_disk_test.go new file mode 100644 index 00000000000..38dff768a8f --- /dev/null +++ b/aws/data_source_aws_storagegateway_local_disk_test.go @@ -0,0 +1,172 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSStorageGatewayLocalDiskDataSource_DiskNode(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + dataSourceName := "data.aws_storagegateway_local_disk.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayLocalDiskDataSourceConfig_DiskNode(rName), + Check: resource.ComposeTestCheckFunc( + testAccAWSStorageGatewayLocalDiskDataSourceExists(dataSourceName), + resource.TestCheckResourceAttrSet(dataSourceName, "disk_id"), + ), + }, + { + Config: testAccAWSStorageGatewayLocalDiskDataSourceConfig_DiskNode_NonExistent(rName), + ExpectError: regexp.MustCompile(`no results found`), + }, + }, + }) +} + +func 
TestAccAWSStorageGatewayLocalDiskDataSource_DiskPath(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + dataSourceName := "data.aws_storagegateway_local_disk.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayLocalDiskDataSourceConfig_DiskPath(rName), + Check: resource.ComposeTestCheckFunc( + testAccAWSStorageGatewayLocalDiskDataSourceExists(dataSourceName), + resource.TestCheckResourceAttrSet(dataSourceName, "disk_id"), + ), + }, + { + Config: testAccAWSStorageGatewayLocalDiskDataSourceConfig_DiskPath_NonExistent(rName), + ExpectError: regexp.MustCompile(`no results found`), + }, + }, + }) +} + +func testAccAWSStorageGatewayLocalDiskDataSourceExists(dataSourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[dataSourceName] + if !ok { + return fmt.Errorf("not found: %s", dataSourceName) + } + + return nil + } +} + +func testAccAWSStorageGatewayLocalDiskDataSourceConfig_DiskNode(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = "10" + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/sdb" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_node = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +`, rName) +} + +func testAccAWSStorageGatewayLocalDiskDataSourceConfig_DiskNode_NonExistent(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = "10" + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/sdb" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_node = "/dev/sdz" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +`, rName) +} + +func testAccAWSStorageGatewayLocalDiskDataSourceConfig_DiskPath(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = "10" + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdb" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +`, rName) +} + +func testAccAWSStorageGatewayLocalDiskDataSourceConfig_DiskPath_NonExistent(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = "10" + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdb" + 
force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "/dev/xvdz" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +`, rName) +} diff --git a/aws/data_source_aws_subnet.go b/aws/data_source_aws_subnet.go index bdea59c4688..f8fffa26478 100644 --- a/aws/data_source_aws_subnet.go +++ b/aws/data_source_aws_subnet.go @@ -5,6 +5,7 @@ import ( "log" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/schema" ) @@ -74,6 +75,11 @@ func dataSourceAwsSubnet() *schema.Resource { Type: schema.TypeString, Computed: true, }, + + "arn": { + Type: schema.TypeString, + Computed: true, + }, }, } } @@ -155,5 +161,14 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error { } } + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "ec2", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("subnet/%s", d.Id()), + } + d.Set("arn", arn.String()) + return nil } diff --git a/aws/data_source_aws_subnet_ids.go b/aws/data_source_aws_subnet_ids.go index a2f5f0c46aa..21339f1ab91 100644 --- a/aws/data_source_aws_subnet_ids.go +++ b/aws/data_source_aws_subnet_ids.go @@ -12,6 +12,7 @@ func dataSourceAwsSubnetIDs() *schema.Resource { return &schema.Resource{ Read: dataSourceAwsSubnetIDsRead, Schema: map[string]*schema.Schema{ + "filter": ec2CustomFiltersSchema(), "tags": tagsSchemaComputed(), @@ -35,15 +36,29 @@ func dataSourceAwsSubnetIDsRead(d *schema.ResourceData, meta interface{}) error req := &ec2.DescribeSubnetsInput{} - req.Filters = buildEC2AttributeFilterList( - map[string]string{ - "vpc-id": d.Get("vpc_id").(string), - }, - ) + if vpc, vpcOk := d.GetOk("vpc_id"); vpcOk { + req.Filters = buildEC2AttributeFilterList( + map[string]string{ + "vpc-id": vpc.(string), + }, + ) + } + + if tags, tagsOk := d.GetOk("tags"); tagsOk { + req.Filters = append(req.Filters, buildEC2TagFilterList( + tagsFromMap(tags.(map[string]interface{})), + )...) + } + + if filters, filtersOk := d.GetOk("filter"); filtersOk { + req.Filters = append(req.Filters, buildEC2CustomFilterList( + filters.(*schema.Set), + )...) + } - req.Filters = append(req.Filters, buildEC2TagFilterList( - tagsFromMap(d.Get("tags").(map[string]interface{})), - )...) 
+ if len(req.Filters) == 0 { + req.Filters = nil + } log.Printf("[DEBUG] DescribeSubnets %s\n", req) resp, err := conn.DescribeSubnets(req) diff --git a/aws/data_source_aws_subnet_ids_test.go b/aws/data_source_aws_subnet_ids_test.go index c4826446b2e..9d022588261 100644 --- a/aws/data_source_aws_subnet_ids_test.go +++ b/aws/data_source_aws_subnet_ids_test.go @@ -10,7 +10,7 @@ import ( func TestAccDataSourceAwsSubnetIDs(t *testing.T) { rInt := acctest.RandIntRange(0, 256) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -29,6 +29,25 @@ func TestAccDataSourceAwsSubnetIDs(t *testing.T) { }) } +func TestAccDataSourceAwsSubnetIDs_filter(t *testing.T) { + rInt := acctest.RandIntRange(0, 256) + rName := "data.aws_subnet_ids.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsSubnetIDs_filter(rInt), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(rName, "ids.#", "2"), + ), + }, + }, + }) +} + func testAccDataSourceAwsSubnetIDsConfigWithDataSource(rInt int) string { return fmt.Sprintf(` resource "aws_vpc" "test" { @@ -129,3 +148,40 @@ resource "aws_subnet" "test_private_b" { } `, rInt, rInt, rInt, rInt) } + +func testAccDataSourceAwsSubnetIDs_filter(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "172.%d.0.0/16" + tags { + Name = "terraform-testacc-subnet-ids-data-source" + } +} + +resource "aws_subnet" "test_a_one" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "172.%d.1.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_subnet" "test_a_two" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "172.%d.2.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_subnet" "test_b" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "172.%d.3.0/24" + availability_zone = "us-west-2b" +} + +data "aws_subnet_ids" "test" { + vpc_id = "${aws_subnet.test_a_two.vpc_id}" + filter { + name = "availabilityZone" + values = ["${aws_subnet.test_a_one.availability_zone}"] + } +} +`, rInt, rInt, rInt, rInt) +} diff --git a/aws/data_source_aws_subnet_test.go b/aws/data_source_aws_subnet_test.go index 39eb35e4163..c8d12335250 100644 --- a/aws/data_source_aws_subnet_test.go +++ b/aws/data_source_aws_subnet_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "regexp" "testing" "github.com/hashicorp/terraform/helper/acctest" @@ -12,7 +13,7 @@ import ( func TestAccDataSourceAwsSubnet_basic(t *testing.T) { rInt := acctest.RandIntRange(0, 256) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -33,7 +34,7 @@ func TestAccDataSourceAwsSubnet_basic(t *testing.T) { func TestAccDataSourceAwsSubnet_ipv6ByIpv6Filter(t *testing.T) { rInt := acctest.RandIntRange(0, 256) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -55,7 +56,7 @@ func TestAccDataSourceAwsSubnet_ipv6ByIpv6Filter(t *testing.T) { func TestAccDataSourceAwsSubnet_ipv6ByIpv6CidrBlock(t *testing.T) { rInt := acctest.RandIntRange(0, 256) - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -117,6 +118,13 @@ func testAccDataSourceAwsSubnetCheck(name string, rInt int) resource.TestCheckFu return fmt.Errorf("bad Name tag %s", attr["tags.Name"]) } + arnformat := `^arn:[^:]+:ec2:[^:]+:\d{12}:subnet/subnet-.+` + arnregex := regexp.MustCompile(arnformat) + + if !arnregex.MatchString(attr["arn"]) { + return fmt.Errorf("arn doesn't match format %s", attr["arn"]) + } + return nil } } diff --git a/aws/data_source_aws_vpc.go b/aws/data_source_aws_vpc.go index 583d69333fb..87701c2f631 100644 --- a/aws/data_source_aws_vpc.go +++ b/aws/data_source_aws_vpc.go @@ -5,6 +5,7 @@ import ( "log" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/schema" ) @@ -14,21 +15,57 @@ func dataSourceAwsVpc() *schema.Resource { Read: dataSourceAwsVpcRead, Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "cidr_block": { Type: schema.TypeString, Optional: true, Computed: true, }, + "cidr_block_associations": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "association_id": { + Type: schema.TypeString, + Computed: true, + }, + "cidr_block": { + Type: schema.TypeString, + Computed: true, + }, + "state": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + + "default": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "dhcp_options_id": { Type: schema.TypeString, Optional: true, Computed: true, }, - "default": { + "enable_dns_hostnames": { + Type: schema.TypeBool, + Computed: true, + }, + + "enable_dns_support": { Type: schema.TypeBool, - Optional: true, Computed: true, }, @@ -55,19 +92,14 @@ func dataSourceAwsVpc() *schema.Resource { Computed: true, }, - "state": { + "main_route_table_id": { Type: schema.TypeString, - Optional: true, - Computed: true, - }, - - "enable_dns_hostnames": { - Type: schema.TypeBool, Computed: true, }, - "enable_dns_support": { - Type: schema.TypeBool, + "state": { + Type: schema.TypeString, + Optional: true, Computed: true, }, @@ -133,7 +165,7 @@ func dataSourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error { vpc := resp.Vpcs[0] - d.SetId(*vpc.VpcId) + d.SetId(aws.StringValue(vpc.VpcId)) d.Set("cidr_block", vpc.CidrBlock) d.Set("dhcp_options_id", vpc.DhcpOptionsId) d.Set("instance_tenancy", vpc.InstanceTenancy) @@ -141,22 +173,50 @@ func dataSourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error { d.Set("state", vpc.State) d.Set("tags", tagsToMap(vpc.Tags)) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "ec2", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("vpc/%s", d.Id()), + }.String() + d.Set("arn", arn) + + cidrAssociations := []interface{}{} + for _, associationSet := range vpc.CidrBlockAssociationSet { + association := map[string]interface{}{ + "association_id": aws.StringValue(associationSet.AssociationId), + "cidr_block": aws.StringValue(associationSet.CidrBlock), + "state": aws.StringValue(associationSet.CidrBlockState.State), + } + cidrAssociations = append(cidrAssociations, association) + } + if err := d.Set("cidr_block_associations", cidrAssociations); err != nil { + return fmt.Errorf("error setting cidr_block_associations: %s", err) + } + if 
vpc.Ipv6CidrBlockAssociationSet != nil { d.Set("ipv6_association_id", vpc.Ipv6CidrBlockAssociationSet[0].AssociationId) d.Set("ipv6_cidr_block", vpc.Ipv6CidrBlockAssociationSet[0].Ipv6CidrBlock) } - attResp, err := awsVpcDescribeVpcAttribute("enableDnsSupport", *vpc.VpcId, conn) + attResp, err := awsVpcDescribeVpcAttribute("enableDnsSupport", aws.StringValue(vpc.VpcId), conn) if err != nil { return err } d.Set("enable_dns_support", attResp.EnableDnsSupport.Value) - attResp, err = awsVpcDescribeVpcAttribute("enableDnsHostnames", *vpc.VpcId, conn) + attResp, err = awsVpcDescribeVpcAttribute("enableDnsHostnames", aws.StringValue(vpc.VpcId), conn) if err != nil { return err } d.Set("enable_dns_hostnames", attResp.EnableDnsHostnames.Value) + routeTableId, err := resourceAwsVpcSetMainRouteTable(conn, aws.StringValue(vpc.VpcId)) + if err != nil { + log.Printf("[WARN] Unable to set Main Route Table: %s", err) + } + d.Set("main_route_table_id", routeTableId) + return nil } diff --git a/aws/data_source_aws_vpc_dhcp_options.go b/aws/data_source_aws_vpc_dhcp_options.go new file mode 100644 index 00000000000..00be63080b5 --- /dev/null +++ b/aws/data_source_aws_vpc_dhcp_options.go @@ -0,0 +1,126 @@ +package aws + +import ( + "errors" + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsVpcDhcpOptions() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsVpcDhcpOptionsRead, + + Schema: map[string]*schema.Schema{ + "dhcp_options_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "domain_name": { + Type: schema.TypeString, + Computed: true, + }, + "domain_name_servers": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "filter": ec2CustomFiltersSchema(), + "netbios_name_servers": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "netbios_node_type": { + Type: schema.TypeString, + Computed: true, + }, + "ntp_servers": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "tags": tagsSchemaComputed(), + }, + } +} + +func dataSourceAwsVpcDhcpOptionsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + input := &ec2.DescribeDhcpOptionsInput{} + + if v, ok := d.GetOk("dhcp_options_id"); ok && v.(string) != "" { + input.DhcpOptionsIds = []*string{aws.String(v.(string))} + } + + input.Filters = append(input.Filters, buildEC2CustomFilterList( + d.Get("filter").(*schema.Set), + )...) + if len(input.Filters) == 0 { + // Don't send an empty filters list; the EC2 API won't accept it. 
+ input.Filters = nil + } + + log.Printf("[DEBUG] Reading EC2 DHCP Options: %s", input) + output, err := conn.DescribeDhcpOptions(input) + if err != nil { + if isNoSuchDhcpOptionIDErr(err) { + return errors.New("No matching EC2 DHCP Options found") + } + return fmt.Errorf("error reading EC2 DHCP Options: %s", err) + } + + if len(output.DhcpOptions) == 0 { + return errors.New("No matching EC2 DHCP Options found") + } + + if len(output.DhcpOptions) > 1 { + return errors.New("Multiple matching EC2 DHCP Options found") + } + + dhcpOptionID := aws.StringValue(output.DhcpOptions[0].DhcpOptionsId) + d.SetId(dhcpOptionID) + d.Set("dhcp_options_id", dhcpOptionID) + + dhcpConfigurations := output.DhcpOptions[0].DhcpConfigurations + + for _, dhcpConfiguration := range dhcpConfigurations { + key := aws.StringValue(dhcpConfiguration.Key) + tfKey := strings.Replace(key, "-", "_", -1) + + if len(dhcpConfiguration.Values) == 0 { + continue + } + + switch key { + case "domain-name": + d.Set(tfKey, aws.StringValue(dhcpConfiguration.Values[0].Value)) + case "domain-name-servers": + if err := d.Set(tfKey, flattenEc2AttributeValues(dhcpConfiguration.Values)); err != nil { + return fmt.Errorf("error setting %s: %s", tfKey, err) + } + case "netbios-name-servers": + if err := d.Set(tfKey, flattenEc2AttributeValues(dhcpConfiguration.Values)); err != nil { + return fmt.Errorf("error setting %s: %s", tfKey, err) + } + case "netbios-node-type": + d.Set(tfKey, aws.StringValue(dhcpConfiguration.Values[0].Value)) + case "ntp-servers": + if err := d.Set(tfKey, flattenEc2AttributeValues(dhcpConfiguration.Values)); err != nil { + return fmt.Errorf("error setting %s: %s", tfKey, err) + } + } + } + + // Set tags directly; nesting a second d.Set call here would store its error return value instead of the tag map. + if err := d.Set("tags", tagsToMap(output.DhcpOptions[0].Tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + return nil +} diff --git a/aws/data_source_aws_vpc_dhcp_options_test.go b/aws/data_source_aws_vpc_dhcp_options_test.go new file mode 100644 index 00000000000..5fd7cea9de2 --- /dev/null +++ b/aws/data_source_aws_vpc_dhcp_options_test.go @@ -0,0 +1,139 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsVpcDhcpOptions_basic(t *testing.T) { + resourceName := "aws_vpc_dhcp_options.test" + datasourceName := "data.aws_vpc_dhcp_options.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsVpcDhcpOptionsConfig_Missing, + ExpectError: regexp.MustCompile(`No matching EC2 DHCP Options found`), + }, + { + Config: testAccDataSourceAwsVpcDhcpOptionsConfig_DhcpOptionsID, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(datasourceName, "dhcp_options_id", resourceName, "id"), + resource.TestCheckResourceAttrPair(datasourceName, "domain_name", resourceName, "domain_name"), + resource.TestCheckResourceAttrPair(datasourceName, "domain_name_servers.#", resourceName, "domain_name_servers.#"), + resource.TestCheckResourceAttrPair(datasourceName, "domain_name_servers.0", resourceName, "domain_name_servers.0"), + resource.TestCheckResourceAttrPair(datasourceName, "domain_name_servers.1", resourceName, "domain_name_servers.1"), + resource.TestCheckResourceAttrPair(datasourceName, "netbios_name_servers.#", resourceName, "netbios_name_servers.#"), +
resource.TestCheckResourceAttrPair(datasourceName, "netbios_name_servers.0", resourceName, "netbios_name_servers.0"), + resource.TestCheckResourceAttrPair(datasourceName, "netbios_node_type", resourceName, "netbios_node_type"), + resource.TestCheckResourceAttrPair(datasourceName, "ntp_servers.#", resourceName, "ntp_servers.#"), + resource.TestCheckResourceAttrPair(datasourceName, "ntp_servers.0", resourceName, "ntp_servers.0"), + resource.TestCheckResourceAttrPair(datasourceName, "tags.%", resourceName, "tags.%"), + resource.TestCheckResourceAttrPair(datasourceName, "tags.Name", resourceName, "tags.Name"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsVpcDhcpOptions_Filter(t *testing.T) { + rInt := acctest.RandInt() + resourceName := "aws_vpc_dhcp_options.test" + datasourceName := "data.aws_vpc_dhcp_options.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsVpcDhcpOptionsConfig_Filter(rInt, 1), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(datasourceName, "dhcp_options_id", resourceName, "id"), + resource.TestCheckResourceAttrPair(datasourceName, "domain_name", resourceName, "domain_name"), + resource.TestCheckResourceAttrPair(datasourceName, "domain_name_servers.#", resourceName, "domain_name_servers.#"), + resource.TestCheckResourceAttrPair(datasourceName, "domain_name_servers.0", resourceName, "domain_name_servers.0"), + resource.TestCheckResourceAttrPair(datasourceName, "domain_name_servers.1", resourceName, "domain_name_servers.1"), + resource.TestCheckResourceAttrPair(datasourceName, "netbios_name_servers.#", resourceName, "netbios_name_servers.#"), + resource.TestCheckResourceAttrPair(datasourceName, "netbios_name_servers.0", resourceName, "netbios_name_servers.0"), + resource.TestCheckResourceAttrPair(datasourceName, "netbios_node_type", resourceName, "netbios_node_type"), + resource.TestCheckResourceAttrPair(datasourceName, "ntp_servers.#", resourceName, "ntp_servers.#"), + resource.TestCheckResourceAttrPair(datasourceName, "ntp_servers.0", resourceName, "ntp_servers.0"), + resource.TestCheckResourceAttrPair(datasourceName, "tags.%", resourceName, "tags.%"), + resource.TestCheckResourceAttrPair(datasourceName, "tags.Name", resourceName, "tags.Name"), + ), + }, + { + Config: testAccDataSourceAwsVpcDhcpOptionsConfig_Filter(rInt, 2), + ExpectError: regexp.MustCompile(`Multiple matching EC2 DHCP Options found`), + }, + }, + }) +} + +const testAccDataSourceAwsVpcDhcpOptionsConfig_Missing = ` +data "aws_vpc_dhcp_options" "test" { + dhcp_options_id = "does-not-exist" +} +` + +const testAccDataSourceAwsVpcDhcpOptionsConfig_DhcpOptionsID = ` +resource "aws_vpc_dhcp_options" "incorrect" { + domain_name = "tf-acc-test-incorrect.example.com" +} + +resource "aws_vpc_dhcp_options" "test" { + domain_name = "service.consul" + domain_name_servers = ["127.0.0.1", "10.0.0.2"] + netbios_name_servers = ["127.0.0.1"] + netbios_node_type = 2 + ntp_servers = ["127.0.0.1"] + + tags { + Name = "tf-acc-test" + } +} + +data "aws_vpc_dhcp_options" "test" { + dhcp_options_id = "${aws_vpc_dhcp_options.test.id}" +} +` + +func testAccDataSourceAwsVpcDhcpOptionsConfig_Filter(rInt, count int) string { + return fmt.Sprintf(` +resource "aws_vpc_dhcp_options" "incorrect" { + domain_name = "tf-acc-test-incorrect.example.com" +} + +resource "aws_vpc_dhcp_options" "test" { + count = %d + + domain_name = "tf-acc-test-%d.example.com" + 
domain_name_servers = ["127.0.0.1", "10.0.0.2"] + netbios_name_servers = ["127.0.0.1"] + netbios_node_type = 2 + ntp_servers = ["127.0.0.1"] + + tags { + Name = "tf-acc-test-%d" + } +} + +data "aws_vpc_dhcp_options" "test" { + filter { + name = "key" + values = ["domain-name"] + } + + filter { + name = "value" + values = ["${aws_vpc_dhcp_options.test.0.domain_name}"] + } +} +`, count, rInt, rInt) +} diff --git a/aws/data_source_aws_vpc_endpoint_service.go b/aws/data_source_aws_vpc_endpoint_service.go index 62a5d342b52..b4eaa5e7dd9 100644 --- a/aws/data_source_aws_vpc_endpoint_service.go +++ b/aws/data_source_aws_vpc_endpoint_service.go @@ -86,20 +86,28 @@ func dataSourceAwsVpcEndpointServiceRead(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error fetching VPC Endpoint Services: %s", err) } - if resp == nil || len(resp.ServiceNames) == 0 { + if resp == nil || (len(resp.ServiceNames) == 0 && len(resp.ServiceDetails) == 0) { return fmt.Errorf("no matching VPC Endpoint Service found") } - if len(resp.ServiceNames) > 1 { - return fmt.Errorf("multiple VPC Endpoint Services matched; use additional constraints to reduce matches to a single VPC Endpoint Service") - } - // Note: AWS Commercial now returns a response with `ServiceNames` and // `ServiceDetails`, but GovCloud responses only include `ServiceNames` if len(resp.ServiceDetails) == 0 { - d.SetId(strconv.Itoa(hashcode.String(*resp.ServiceNames[0]))) - d.Set("service_name", resp.ServiceNames[0]) - return nil + // GovCloud doesn't respect the filter. + names := aws.StringValueSlice(resp.ServiceNames) + for _, name := range names { + if name == serviceName { + d.SetId(strconv.Itoa(hashcode.String(name))) + d.Set("service_name", name) + return nil + } + } + + return fmt.Errorf("no matching VPC Endpoint Service found") + } + + if len(resp.ServiceDetails) > 1 { + return fmt.Errorf("multiple VPC Endpoint Services matched; use additional constraints to reduce matches to a single VPC Endpoint Service") } sd := resp.ServiceDetails[0] diff --git a/aws/data_source_aws_vpc_endpoint_service_test.go b/aws/data_source_aws_vpc_endpoint_service_test.go index f9eb639be6c..8a5af21e563 100644 --- a/aws/data_source_aws_vpc_endpoint_service_test.go +++ b/aws/data_source_aws_vpc_endpoint_service_test.go @@ -10,11 +10,11 @@ import ( ) func TestAccDataSourceAwsVpcEndpointService_gateway(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsVpcEndpointServiceGatewayConfig, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -48,11 +48,11 @@ func TestAccDataSourceAwsVpcEndpointService_gateway(t *testing.T) { } func TestAccDataSourceAwsVpcEndpointService_interface(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsVpcEndpointServiceInterfaceConfig, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -88,11 +88,11 @@ func TestAccDataSourceAwsVpcEndpointService_interface(t *testing.T) { func TestAccDataSourceAwsVpcEndpointService_custom(t *testing.T) { lbName := fmt.Sprintf("testaccawsnlb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, 
resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsVpcEndpointServiceCustomConfig(lbName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -135,16 +135,6 @@ data "aws_vpc_endpoint_service" "ec2" { } ` -const testAccDataSourceAwsVpcEndpointServiceConfig_custom = ` -provider "aws" { - region = "us-west-2" -} - -data "aws_vpc_endpoint_service" "ec2" { - service = "ec2" -} -` - func testAccDataSourceAwsVpcEndpointServiceCustomConfig(lbName string) string { return fmt.Sprintf( ` diff --git a/aws/data_source_aws_vpc_endpoint_test.go b/aws/data_source_aws_vpc_endpoint_test.go index 1dd2d7ac4e5..9925ef29765 100644 --- a/aws/data_source_aws_vpc_endpoint_test.go +++ b/aws/data_source_aws_vpc_endpoint_test.go @@ -9,7 +9,7 @@ import ( ) func TestAccDataSourceAwsVpcEndpoint_gatewayBasic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -32,7 +32,7 @@ func TestAccDataSourceAwsVpcEndpoint_gatewayBasic(t *testing.T) { } func TestAccDataSourceAwsVpcEndpoint_byId(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -47,7 +47,7 @@ func TestAccDataSourceAwsVpcEndpoint_byId(t *testing.T) { } func TestAccDataSourceAwsVpcEndpoint_gatewayWithRouteTable(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -66,7 +66,7 @@ func TestAccDataSourceAwsVpcEndpoint_gatewayWithRouteTable(t *testing.T) { } func TestAccDataSourceAwsVpcEndpoint_interface(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/data_source_aws_vpc_peering_connection.go b/aws/data_source_aws_vpc_peering_connection.go index a78e35f03e7..b3f1cca8e21 100644 --- a/aws/data_source_aws_vpc_peering_connection.go +++ b/aws/data_source_aws_vpc_peering_connection.go @@ -67,12 +67,12 @@ func dataSourceAwsVpcPeeringConnection() *schema.Resource { "accepter": { Type: schema.TypeMap, Computed: true, - Elem: schema.TypeBool, + Elem: &schema.Schema{Type: schema.TypeBool}, }, "requester": { Type: schema.TypeMap, Computed: true, - Elem: schema.TypeBool, + Elem: &schema.Schema{Type: schema.TypeBool}, }, "filter": ec2CustomFiltersSchema(), "tags": tagsSchemaComputed(), @@ -140,13 +140,13 @@ func dataSourceAwsVpcPeeringConnectionRead(d *schema.ResourceData, meta interfac d.Set("tags", tagsToMap(pcx.Tags)) if pcx.AccepterVpcInfo.PeeringOptions != nil { - if err := d.Set("accepter", flattenPeeringOptions(pcx.AccepterVpcInfo.PeeringOptions)[0]); err != nil { + if err := d.Set("accepter", flattenVpcPeeringConnectionOptions(pcx.AccepterVpcInfo.PeeringOptions)[0]); err != nil { return err } } if pcx.RequesterVpcInfo.PeeringOptions != nil { - if err := d.Set("requester", flattenPeeringOptions(pcx.RequesterVpcInfo.PeeringOptions)[0]); err != nil { + if err := d.Set("requester", flattenVpcPeeringConnectionOptions(pcx.RequesterVpcInfo.PeeringOptions)[0]); err != nil { return err } } diff --git 
a/aws/data_source_aws_vpc_peering_connection_test.go b/aws/data_source_aws_vpc_peering_connection_test.go index 9d2b7c9a4bb..867ca87889b 100644 --- a/aws/data_source_aws_vpc_peering_connection_test.go +++ b/aws/data_source_aws_vpc_peering_connection_test.go @@ -9,11 +9,11 @@ import ( ) func TestAccDataSourceAwsVpcPeeringConnection_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsVpcPeeringConnectionConfig, Check: resource.ComposeTestCheckFunc( testAccDataSourceAwsVpcPeeringConnectionCheck("data.aws_vpc_peering_connection.test_by_id"), diff --git a/aws/data_source_aws_vpc_test.go b/aws/data_source_aws_vpc_test.go index a3784c7124d..1e66c49f3c1 100644 --- a/aws/data_source_aws_vpc_test.go +++ b/aws/data_source_aws_vpc_test.go @@ -15,7 +15,7 @@ func TestAccDataSourceAwsVpc_basic(t *testing.T) { rInt := rand.Intn(16) cidr := fmt.Sprintf("172.%d.0.0/16", rInt) tag := fmt.Sprintf("terraform-testacc-vpc-data-source-basic-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -30,6 +30,10 @@ func TestAccDataSourceAwsVpc_basic(t *testing.T) { "data.aws_vpc.by_id", "enable_dns_support", "true"), resource.TestCheckResourceAttr( "data.aws_vpc.by_id", "enable_dns_hostnames", "false"), + resource.TestCheckResourceAttrSet( + "data.aws_vpc.by_id", "arn"), + resource.TestCheckResourceAttrPair( + "data.aws_vpc.by_id", "main_route_table_id", "aws_vpc.test", "main_route_table_id"), ), }, }, @@ -41,7 +45,7 @@ func TestAccDataSourceAwsVpc_ipv6Associated(t *testing.T) { rInt := rand.Intn(16) cidr := fmt.Sprintf("172.%d.0.0/16", rInt) tag := fmt.Sprintf("terraform-testacc-vpc-data-source-ipv6-associated-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -59,6 +63,25 @@ func TestAccDataSourceAwsVpc_ipv6Associated(t *testing.T) { }) } +func TestAccDataSourceAwsVpc_multipleCidr(t *testing.T) { + rInt := rand.Intn(16) + rName := "data.aws_vpc.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsVpcConfigMultipleCidr(rInt), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(rName, "cidr_block_associations.#", "2"), + ), + }, + }, + }) +} + func testAccDataSourceAwsVpcCheck(name, cidr, tag string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[name] @@ -147,3 +170,23 @@ data "aws_vpc" "by_filter" { } }`, cidr, tag) } + +func testAccDataSourceAwsVpcConfigMultipleCidr(octet int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.%d.0.0/16" +} + +resource "aws_vpc_ipv4_cidr_block_association" "test" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "172.%d.0.0/16" +} + +data "aws_vpc" "test" { + filter { + name = "cidr-block-association.cidr-block" + values = ["${aws_vpc_ipv4_cidr_block_association.test.cidr_block}"] + } +} +`, octet, octet) +} diff --git a/aws/data_source_aws_vpcs.go b/aws/data_source_aws_vpcs.go new file mode 100644 index 
00000000000..b51053d8cd8 --- /dev/null +++ b/aws/data_source_aws_vpcs.go @@ -0,0 +1,77 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsVpcs() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsVpcsRead, + Schema: map[string]*schema.Schema{ + "filter": ec2CustomFiltersSchema(), + + "tags": tagsSchemaComputed(), + + "ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func dataSourceAwsVpcsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + filters, filtersOk := d.GetOk("filter") + tags, tagsOk := d.GetOk("tags") + + req := &ec2.DescribeVpcsInput{} + + if tagsOk { + req.Filters = buildEC2TagFilterList( + tagsFromMap(tags.(map[string]interface{})), + ) + } + + if filtersOk { + req.Filters = append(req.Filters, buildEC2CustomFilterList( + filters.(*schema.Set), + )...) + } + if len(req.Filters) == 0 { + // Don't send an empty filters list; the EC2 API won't accept it. + req.Filters = nil + } + + log.Printf("[DEBUG] DescribeVpcs %s\n", req) + resp, err := conn.DescribeVpcs(req) + if err != nil { + return err + } + + if resp == nil || len(resp.Vpcs) == 0 { + return fmt.Errorf("no matching VPC found") + } + + vpcs := make([]string, 0) + + for _, vpc := range resp.Vpcs { + vpcs = append(vpcs, aws.StringValue(vpc.VpcId)) + } + + d.SetId(time.Now().UTC().String()) + if err := d.Set("ids", vpcs); err != nil { + return fmt.Errorf("Error setting vpc ids: %s", err) + } + + return nil +} diff --git a/aws/data_source_aws_vpcs_test.go b/aws/data_source_aws_vpcs_test.go new file mode 100644 index 00000000000..6cd0fce4a4d --- /dev/null +++ b/aws/data_source_aws_vpcs_test.go @@ -0,0 +1,151 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDataSourceAwsVpcs_basic(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsVpcsConfig(), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsVpcsDataSourceExists("data.aws_vpcs.all"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsVpcs_tags(t *testing.T) { + rName := acctest.RandString(5) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsVpcsConfig_tags(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsVpcsDataSourceExists("data.aws_vpcs.selected"), + resource.TestCheckResourceAttr("data.aws_vpcs.selected", "ids.#", "1"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsVpcs_filters(t *testing.T) { + rName := acctest.RandString(5) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsVpcsConfig_filters(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsVpcsDataSourceExists("data.aws_vpcs.selected"), + testCheckResourceAttrGreaterThanValue("data.aws_vpcs.selected", "ids.#", "0"), + ), + }, + }, + }) +} + +func 
testCheckResourceAttrGreaterThanValue(name, key, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + ms := s.RootModule() + rs, ok := ms.Resources[name] + if !ok { + return fmt.Errorf("Not found: %s in %s", name, ms.Path) + } + + is := rs.Primary + if is == nil { + return fmt.Errorf("No primary instance: %s in %s", name, ms.Path) + } + + if v, ok := is.Attributes[key]; !ok || !(v > value) { + if !ok { + return fmt.Errorf("%s: Attribute '%s' not found", name, key) + } + + return fmt.Errorf( + "%s: Attribute '%s' is not greater than %#v, got %#v", + name, + key, + value, + v) + } + return nil + + } +} + +func testAccCheckAwsVpcsDataSourceExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Can't find aws_vpcs data source: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("aws_vpcs data source ID not set") + } + return nil + } +} + +func testAccDataSourceAwsVpcsConfig() string { + return fmt.Sprintf(` + resource "aws_vpc" "test-vpc" { + cidr_block = "10.0.0.0/24" + } + + data "aws_vpcs" "all" {} + `) +} + +func testAccDataSourceAwsVpcsConfig_tags(rName string) string { + return fmt.Sprintf(` + resource "aws_vpc" "test-vpc" { + cidr_block = "10.0.0.0/24" + + tags { + Name = "testacc-vpc-%s" + Service = "testacc-test" + } + } + + data "aws_vpcs" "selected" { + tags { + Name = "testacc-vpc-%s" + Service = "${aws_vpc.test-vpc.tags["Service"]}" + } + } + `, rName, rName) +} + +func testAccDataSourceAwsVpcsConfig_filters(rName string) string { + return fmt.Sprintf(` + resource "aws_vpc" "test-vpc" { + cidr_block = "192.168.0.0/25" + tags { + Name = "testacc-vpc-%s" + } + } + + data "aws_vpcs" "selected" { + filter { + name = "cidr" + values = ["${aws_vpc.test-vpc.cidr_block}"] + } + } + `, rName) +} diff --git a/aws/data_source_aws_vpn_gateway_test.go b/aws/data_source_aws_vpn_gateway_test.go index 3643c2c56e4..3591b2dbdbd 100644 --- a/aws/data_source_aws_vpn_gateway_test.go +++ b/aws/data_source_aws_vpn_gateway_test.go @@ -12,11 +12,11 @@ import ( func TestAccDataSourceAwsVpnGateway_unattached(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsVpnGatewayUnattachedConfig(rInt), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrPair( @@ -41,11 +41,11 @@ func TestAccDataSourceAwsVpnGateway_unattached(t *testing.T) { func TestAccDataSourceAwsVpnGateway_attached(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsVpnGatewayAttachedConfig(rInt), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrPair( diff --git a/aws/data_source_aws_workspaces_bundle.go b/aws/data_source_aws_workspaces_bundle.go new file mode 100644 index 00000000000..0da1e52a1d8 --- /dev/null +++ b/aws/data_source_aws_workspaces_bundle.go @@ -0,0 +1,126 @@ +package aws + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsWorkspaceBundle() *schema.Resource { + return 
&schema.Resource{ + Read: dataSourceAwsWorkspaceBundleRead, + + Schema: map[string]*schema.Schema{ + "bundle_id": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "owner": { + Type: schema.TypeString, + Computed: true, + }, + "compute_type": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "user_storage": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "capacity": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "root_storage": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "capacity": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + } +} + +func dataSourceAwsWorkspaceBundleRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).workspacesconn + + bundleID := d.Get("bundle_id").(string) + input := &workspaces.DescribeWorkspaceBundlesInput{ + BundleIds: []*string{aws.String(bundleID)}, + } + + resp, err := conn.DescribeWorkspaceBundles(input) + if err != nil { + return err + } + + if len(resp.Bundles) != 1 { + return fmt.Errorf("The number of Workspace Bundle (%s) should be 1, but %d", bundleID, len(resp.Bundles)) + } + + bundle := resp.Bundles[0] + d.SetId(bundleID) + d.Set("description", bundle.Description) + d.Set("name", bundle.Name) + d.Set("owner", bundle.Owner) + + computeType := make([]map[string]interface{}, 1) + if bundle.ComputeType != nil { + computeType[0] = map[string]interface{}{ + "name": aws.StringValue(bundle.ComputeType.Name), + } + } + if err := d.Set("compute_type", computeType); err != nil { + return fmt.Errorf("error setting compute_type: %s", err) + } + + rootStorage := make([]map[string]interface{}, 1) + if bundle.RootStorage != nil { + rootStorage[0] = map[string]interface{}{ + "capacity": aws.StringValue(bundle.RootStorage.Capacity), + } + } + if err := d.Set("root_storage", rootStorage); err != nil { + return fmt.Errorf("error setting root_storage: %s", err) + } + + userStorage := make([]map[string]interface{}, 1) + if bundle.UserStorage != nil { + userStorage[0] = map[string]interface{}{ + "capacity": aws.StringValue(bundle.UserStorage.Capacity), + } + } + if err := d.Set("user_storage", userStorage); err != nil { + return fmt.Errorf("error setting user_storage: %s", err) + } + + return nil +} diff --git a/aws/data_source_aws_workspaces_bundle_test.go b/aws/data_source_aws_workspaces_bundle_test.go new file mode 100644 index 00000000000..95ac48dd1a7 --- /dev/null +++ b/aws/data_source_aws_workspaces_bundle_test.go @@ -0,0 +1,42 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsWorkspaceBundle_basic(t *testing.T) { + dataSourceName := "data.aws_workspaces_bundle.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsWorkspaceBundleConfig("wsb-b0s22j3d7"), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "bundle_id", "wsb-b0s22j3d7"), + resource.TestCheckResourceAttr(dataSourceName, "compute_type.#", "1"), + 
resource.TestCheckResourceAttr(dataSourceName, "compute_type.0.name", "PERFORMANCE"), + resource.TestCheckResourceAttr(dataSourceName, "description", "Windows 7 Experience provided by Windows Server 2008 R2, 2 vCPU, 7.5GiB Memory, 100GB Storage"), + resource.TestCheckResourceAttr(dataSourceName, "name", "Performance with Windows 7"), + resource.TestCheckResourceAttr(dataSourceName, "owner", "Amazon"), + resource.TestCheckResourceAttr(dataSourceName, "root_storage.#", "1"), + resource.TestCheckResourceAttr(dataSourceName, "root_storage.0.capacity", "80"), + resource.TestCheckResourceAttr(dataSourceName, "user_storage.#", "1"), + resource.TestCheckResourceAttr(dataSourceName, "user_storage.0.capacity", "100"), + ), + }, + }, + }) +} + +func testAccDataSourceAwsWorkspaceBundleConfig(bundleID string) string { + return fmt.Sprintf(` +data "aws_workspaces_bundle" "test" { + bundle_id = %q +} +`, bundleID) +} diff --git a/aws/diff_suppress_funcs.go b/aws/diff_suppress_funcs.go index d891708b918..b994d2a9a23 100644 --- a/aws/diff_suppress_funcs.go +++ b/aws/diff_suppress_funcs.go @@ -20,6 +20,19 @@ func suppressEquivalentAwsPolicyDiffs(k, old, new string, d *schema.ResourceData return equivalent } +// suppressEquivalentTypeStringBoolean provides custom difference suppression for TypeString booleans +// Some arguments require three values: true, false, and "" (unspecified), but +// confusing behavior exists when converting bare true/false values with state. +func suppressEquivalentTypeStringBoolean(k, old, new string, d *schema.ResourceData) bool { + if old == "false" && new == "0" { + return true + } + if old == "true" && new == "1" { + return true + } + return false +} + // Suppresses minor version changes to the db_instance engine_version attribute func suppressAwsDbEngineVersionDiffs(k, old, new string, d *schema.ResourceData) bool { // First check if the old/new values are nil. 
@@ -85,3 +98,7 @@ func suppressAutoscalingGroupAvailabilityZoneDiffs(k, old, new string, d *schema return false } + +func suppressRoute53ZoneNameWithTrailingDot(k, old, new string, d *schema.ResourceData) bool { + return strings.TrimSuffix(old, ".") == strings.TrimSuffix(new, ".") +} diff --git a/aws/diff_suppress_funcs_test.go b/aws/diff_suppress_funcs_test.go index 0727a1042b5..196616aeae5 100644 --- a/aws/diff_suppress_funcs_test.go +++ b/aws/diff_suppress_funcs_test.go @@ -29,3 +29,44 @@ func TestSuppressEquivalentJsonDiffsWhitespaceAndNoWhitespace(t *testing.T) { t.Errorf("Expected suppressEquivalentJsonDiffs to return false for %s == %s", noWhitespaceDiff, whitespaceDiff) } } + +func TestSuppressEquivalentTypeStringBoolean(t *testing.T) { + testCases := []struct { + old string + new string + equivalent bool + }{ + { + old: "false", + new: "0", + equivalent: true, + }, + { + old: "true", + new: "1", + equivalent: true, + }, + { + old: "", + new: "0", + equivalent: false, + }, + { + old: "", + new: "1", + equivalent: false, + }, + } + + for i, tc := range testCases { + value := suppressEquivalentTypeStringBoolean("test_property", tc.old, tc.new, nil) + + if tc.equivalent && !value { + t.Fatalf("expected test case %d to be equivalent", i) + } + + if !tc.equivalent && value { + t.Fatalf("expected test case %d to not be equivalent", i) + } + } +} diff --git a/aws/dx_vif.go b/aws/dx_vif.go new file mode 100644 index 00000000000..ebf0a54e045 --- /dev/null +++ b/aws/dx_vif.go @@ -0,0 +1,132 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func dxVirtualInterfaceRead(id string, conn *directconnect.DirectConnect) (*directconnect.VirtualInterface, error) { + resp, state, err := dxVirtualInterfaceStateRefresh(conn, id)() + if err != nil { + return nil, fmt.Errorf("Error reading Direct Connect virtual interface: %s", err) + } + if state == directconnect.VirtualInterfaceStateDeleted { + return nil, nil + } + + return resp.(*directconnect.VirtualInterface), nil +} + +func dxVirtualInterfaceUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + req := &directconnect.UpdateVirtualInterfaceAttributesInput{ + VirtualInterfaceId: aws.String(d.Id()), + } + + requestUpdate := false + if d.HasChange("mtu") { + req.Mtu = aws.Int64(int64(d.Get("mtu").(int))) + requestUpdate = true + } + + if requestUpdate { + log.Printf("[DEBUG] Modifying Direct Connect virtual interface attributes: %#v", req) + _, err := conn.UpdateVirtualInterfaceAttributes(req) + if err != nil { + return fmt.Errorf("Error modifying Direct Connect virtual interface (%s) attributes, error: %s", d.Id(), err) + } + } + + if err := setTagsDX(conn, d, d.Get("arn").(string)); err != nil { + return err + } + + return nil +} + +func dxVirtualInterfaceDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + log.Printf("[DEBUG] Deleting Direct Connect virtual interface: %s", d.Id()) + _, err := conn.DeleteVirtualInterface(&directconnect.DeleteVirtualInterfaceInput{ + VirtualInterfaceId: aws.String(d.Id()), + }) + if err != nil { + if isAWSErr(err, directconnect.ErrCodeClientException, "does not exist") { + return nil + } + return fmt.Errorf("Error deleting Direct Connect virtual interface: %s", err) + } + + deleteStateConf := 
&resource.StateChangeConf{ + Pending: []string{ + directconnect.VirtualInterfaceStateAvailable, + directconnect.VirtualInterfaceStateConfirming, + directconnect.VirtualInterfaceStateDeleting, + directconnect.VirtualInterfaceStateDown, + directconnect.VirtualInterfaceStatePending, + directconnect.VirtualInterfaceStateRejected, + directconnect.VirtualInterfaceStateVerifying, + }, + Target: []string{ + directconnect.VirtualInterfaceStateDeleted, + }, + Refresh: dxVirtualInterfaceStateRefresh(conn, d.Id()), + Timeout: d.Timeout(schema.TimeoutDelete), + Delay: 10 * time.Second, + MinTimeout: 5 * time.Second, + } + _, err = deleteStateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for Direct Connect virtual interface (%s) to be deleted: %s", d.Id(), err) + } + + return nil +} + +func dxVirtualInterfaceStateRefresh(conn *directconnect.DirectConnect, vifId string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + resp, err := conn.DescribeVirtualInterfaces(&directconnect.DescribeVirtualInterfacesInput{ + VirtualInterfaceId: aws.String(vifId), + }) + if err != nil { + return nil, "", err + } + + n := len(resp.VirtualInterfaces) + switch n { + case 0: + return "", directconnect.VirtualInterfaceStateDeleted, nil + + case 1: + vif := resp.VirtualInterfaces[0] + return vif, aws.StringValue(vif.VirtualInterfaceState), nil + + default: + return nil, "", fmt.Errorf("Found %d Direct Connect virtual interfaces for %s, expected 1", n, vifId) + } + } +} + +func dxVirtualInterfaceWaitUntilAvailable(conn *directconnect.DirectConnect, vifId string, timeout time.Duration, pending, target []string) error { + stateConf := &resource.StateChangeConf{ + Pending: pending, + Target: target, + Refresh: dxVirtualInterfaceStateRefresh(conn, vifId), + Timeout: timeout, + Delay: 10 * time.Second, + MinTimeout: 5 * time.Second, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf("Error waiting for Direct Connect virtual interface (%s) to become available: %s", vifId, err) + } + + return nil +} diff --git a/aws/ecs_task_definition_equivalency.go b/aws/ecs_task_definition_equivalency.go index 52c5e38fe7e..72791b76dd3 100644 --- a/aws/ecs_task_definition_equivalency.go +++ b/aws/ecs_task_definition_equivalency.go @@ -13,13 +13,15 @@ import ( "github.com/mitchellh/copystructure" ) -func ecsContainerDefinitionsAreEquivalent(def1, def2 string) (bool, error) { +// EcsContainerDefinitionsAreEquivalent determines equality between two ECS container definition JSON strings +// Note: This function will be moved out of the aws package in the future. 
+func EcsContainerDefinitionsAreEquivalent(def1, def2 string, isAWSVPC bool) (bool, error) { var obj1 containerDefinitions err := json.Unmarshal([]byte(def1), &obj1) if err != nil { return false, err } - err = obj1.Reduce() + err = obj1.Reduce(isAWSVPC) if err != nil { return false, err } @@ -33,7 +35,7 @@ func ecsContainerDefinitionsAreEquivalent(def1, def2 string) (bool, error) { if err != nil { return false, err } - err = obj2.Reduce() + err = obj2.Reduce(isAWSVPC) if err != nil { return false, err } @@ -53,7 +55,7 @@ func ecsContainerDefinitionsAreEquivalent(def1, def2 string) (bool, error) { type containerDefinitions []*ecs.ContainerDefinition -func (cd containerDefinitions) Reduce() error { +func (cd containerDefinitions) Reduce(isAWSVPC bool) error { for i, def := range cd { // Deal with special fields which have defaults if def.Cpu != nil && *def.Cpu == 0 { @@ -69,6 +71,9 @@ func (cd containerDefinitions) Reduce() error { if pm.HostPort != nil && *pm.HostPort == 0 { cd[i].PortMappings[j].HostPort = nil } + if isAWSVPC && cd[i].PortMappings[j].HostPort == nil { + cd[i].PortMappings[j].HostPort = cd[i].PortMappings[j].ContainerPort + } } // Deal with fields which may be re-ordered in the API diff --git a/aws/ecs_task_definition_equivalency_test.go b/aws/ecs_task_definition_equivalency_test.go index a05517bca03..e2d4194a5b3 100644 --- a/aws/ecs_task_definition_equivalency_test.go +++ b/aws/ecs_task_definition_equivalency_test.go @@ -78,7 +78,7 @@ func TestAwsEcsContainerDefinitionsAreEquivalent_basic(t *testing.T) { } ]` - equal, err := ecsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation) + equal, err := EcsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation, false) if err != nil { t.Fatal(err) } @@ -125,7 +125,57 @@ func TestAwsEcsContainerDefinitionsAreEquivalent_portMappings(t *testing.T) { } ]` - equal, err := ecsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation) + equal, err := EcsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation, false) + if err != nil { + t.Fatal(err) + } + if !equal { + t.Fatal("Expected definitions to be equal.") + } +} + +func TestAwsEcsContainerDefinitionsAreEquivalent_portMappingsIgnoreHostPort(t *testing.T) { + cfgRepresention := ` +[ + { + "name": "wordpress", + "image": "wordpress", + "portMappings": [ + { + "containerPort": 80, + "hostPort": 80 + } + ] + } +]` + + apiRepresentation := ` +[ + { + "name": "wordpress", + "image": "wordpress", + "portMappings": [ + { + "containerPort": 80 + } + ] + } +]` + + var ( + equal bool + err error + ) + + equal, err = EcsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation, false) + if err != nil { + t.Fatal(err) + } + if equal { + t.Fatal("Expected definitions to differ.") + } + + equal, err = EcsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation, true) if err != nil { t.Fatal(err) } @@ -378,7 +428,7 @@ func TestAwsEcsContainerDefinitionsAreEquivalent_arrays(t *testing.T) { ] ` - equal, err := ecsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation) + equal, err := EcsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation, false) if err != nil { t.Fatal(err) } @@ -416,7 +466,7 @@ func TestAwsEcsContainerDefinitionsAreEquivalent_negative(t *testing.T) { } ]` - equal, err := ecsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation) + equal, err := EcsContainerDefinitionsAreEquivalent(cfgRepresention, apiRepresentation, false) if err != nil { t.Fatal(err) } diff 
--git a/aws/iam_policy_model.go b/aws/iam_policy_model.go index 9c74e297a54..08ea5c78e2e 100644 --- a/aws/iam_policy_model.go +++ b/aws/iam_policy_model.go @@ -73,13 +73,14 @@ func (self *IAMPolicyDoc) Merge(newDoc *IAMPolicyDoc) { func (ps IAMPolicyStatementPrincipalSet) MarshalJSON() ([]byte, error) { raw := map[string]interface{}{} - // As a special case, IAM considers the string value "*" to be - // equivalent to "AWS": "*", and normalizes policies as such. - // We'll follow their lead and do the same normalization here. - // IAM also considers {"*": "*"} to be equivalent to this. + // Although the IAM documentation says that "*" and {"AWS": "*"} are equivalent + // (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html), + // in practice they are not for IAM roles. IAM will return an error if a trust + // policy has "*" or {"*": "*"} as its principal, but will accept {"AWS": "*"}. + // Only {"*": "*"} should be normalized to "*". if len(ps) == 1 { p := ps[0] - if p.Type == "AWS" || p.Type == "*" { + if p.Type == "*" { if sv, ok := p.Identifiers.(string); ok && sv == "*" { return []byte(`"*"`), nil } @@ -101,7 +102,7 @@ func (ps IAMPolicyStatementPrincipalSet) MarshalJSON() ([]byte, error) { case string: raw[p.Type] = i default: - panic("Unsupported data type for IAMPolicyStatementPrincipalSet") + return []byte{}, fmt.Errorf("Unsupported data type %T for IAMPolicyStatementPrincipalSet", i) } } @@ -121,10 +122,21 @@ func (ps *IAMPolicyStatementPrincipalSet) UnmarshalJSON(b []byte) error { out = append(out, IAMPolicyStatementPrincipal{Type: "*", Identifiers: []string{"*"}}) case map[string]interface{}: for key, value := range data.(map[string]interface{}) { - out = append(out, IAMPolicyStatementPrincipal{Type: key, Identifiers: value}) + switch vt := value.(type) { + case string: + out = append(out, IAMPolicyStatementPrincipal{Type: key, Identifiers: value.(string)}) + case []interface{}: + values := []string{} + for _, v := range value.([]interface{}) { + values = append(values, v.(string)) + } + out = append(out, IAMPolicyStatementPrincipal{Type: key, Identifiers: values}) + default: + return fmt.Errorf("Unsupported data type %T for IAMPolicyStatementPrincipalSet.Identifiers", vt) + } } default: - return fmt.Errorf("Unsupported data type %s for IAMPolicyStatementPrincipalSet", t) + return fmt.Errorf("Unsupported data type %T for IAMPolicyStatementPrincipalSet", t) } *ps = out diff --git a/aws/import_aws_api_gateway_account_test.go b/aws/import_aws_api_gateway_account_test.go deleted file mode 100644 index cb60a4929af..00000000000 --- a/aws/import_aws_api_gateway_account_test.go +++ /dev/null @@ -1,28 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSAPIGatewayAccount_importBasic(t *testing.T) { - resourceName := "aws_api_gateway_account.test" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSAPIGatewayAccountDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAPIGatewayAccountConfig_empty, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_api_gateway_key_test.go b/aws/import_aws_api_gateway_key_test.go deleted file mode 100644 index 2fd3d4cef53..00000000000 --- a/aws/import_aws_api_gateway_key_test.go +++ /dev/null @@ -1,28 +0,0 @@ -package aws - -import ( -
"testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSAPIGatewayApiKey_importBasic(t *testing.T) { - resourceName := "aws_api_gateway_api_key.test" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSAPIGatewayApiKeyDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAPIGatewayApiKeyConfig, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_api_gateway_usage_plan_test.go b/aws/import_aws_api_gateway_usage_plan_test.go deleted file mode 100644 index 76a58e0c5d5..00000000000 --- a/aws/import_aws_api_gateway_usage_plan_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSAPIGatewayUsagePlan_importBasic(t *testing.T) { - resourceName := "aws_api_gateway_usage_plan.main" - rName := acctest.RandString(10) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, - Steps: []resource.TestStep{ - { - Config: testAccAWSApiGatewayUsagePlanBasicConfig(rName), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_autoscaling_group_test.go b/aws/import_aws_autoscaling_group_test.go deleted file mode 100644 index 666563b506b..00000000000 --- a/aws/import_aws_autoscaling_group_test.go +++ /dev/null @@ -1,33 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSAutoScalingGroup_importBasic(t *testing.T) { - resourceName := "aws_autoscaling_group.bar" - randName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAutoScalingGroupImport(randName), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{ - "force_delete", "metrics_granularity", "wait_for_capacity_timeout"}, - }, - }, - }) -} diff --git a/aws/import_aws_cloudfront_distribution_test.go b/aws/import_aws_cloudfront_distribution_test.go deleted file mode 100644 index 787d913a591..00000000000 --- a/aws/import_aws_cloudfront_distribution_test.go +++ /dev/null @@ -1,32 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudFrontDistribution_importBasic(t *testing.T) { - ri := acctest.RandInt() - testConfig := fmt.Sprintf(testAccAWSCloudFrontDistributionS3Config, ri, originBucket, logBucket, testAccAWSCloudFrontDistributionRetainConfig()) - - resourceName := "aws_cloudfront_distribution.s3_distribution" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFrontDistributionDestroy, - Steps: []resource.TestStep{ - { - Config: testConfig, - }, - { - 
ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_cloudfront_origin_access_identity_test.go b/aws/import_aws_cloudfront_origin_access_identity_test.go deleted file mode 100644 index dd45cc7861a..00000000000 --- a/aws/import_aws_cloudfront_origin_access_identity_test.go +++ /dev/null @@ -1,28 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudFrontOriginAccessIdentity_importBasic(t *testing.T) { - resourceName := "aws_cloudfront_origin_access_identity.origin_access_identity" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCloudFrontOriginAccessIdentityDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCloudFrontOriginAccessIdentityConfig, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_cloudtrail_test.go b/aws/import_aws_cloudtrail_test.go deleted file mode 100644 index b5b3aba3bed..00000000000 --- a/aws/import_aws_cloudtrail_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudTrail_importBasic(t *testing.T) { - resourceName := "aws_cloudtrail.foobar" - cloudTrailRandInt := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCloudTrailDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCloudTrailConfig(cloudTrailRandInt), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"enable_log_file_validation", "is_multi_region_trail", "include_global_service_events", "enable_logging"}, - }, - }, - }) -} diff --git a/aws/import_aws_cloudwatch_dashboard_test.go b/aws/import_aws_cloudwatch_dashboard_test.go deleted file mode 100644 index 67618bdf5ac..00000000000 --- a/aws/import_aws_cloudwatch_dashboard_test.go +++ /dev/null @@ -1,29 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudWatchDashboard_importBasic(t *testing.T) { - resourceName := "aws_cloudwatch_dashboard.foobar" - rInt := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCloudWatchDashboardDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCloudWatchDashboardConfig(rInt), - }, - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_cloudwatch_event_rule_test.go b/aws/import_aws_cloudwatch_event_rule_test.go deleted file mode 100644 index ac200dddfc7..00000000000 --- a/aws/import_aws_cloudwatch_event_rule_test.go +++ /dev/null @@ -1,29 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudWatchEventRule_importBasic(t *testing.T) { - resourceName := "aws_cloudwatch_event_rule.foo" - - resource.Test(t, resource.TestCase{ - PreCheck: 
func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCloudWatchEventRuleDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCloudWatchEventRuleConfig, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"is_enabled"}, //this has a default value - }, - }, - }) -} diff --git a/aws/import_aws_cloudwatch_log_destination_policy_test.go b/aws/import_aws_cloudwatch_log_destination_policy_test.go deleted file mode 100644 index f7c4a7f35ee..00000000000 --- a/aws/import_aws_cloudwatch_log_destination_policy_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudwatchLogDestinationPolicy_importBasic(t *testing.T) { - resourceName := "aws_cloudwatch_log_destination_policy.test" - - rstring := acctest.RandString(5) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCloudwatchLogDestinationPolicyDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCloudwatchLogDestinationPolicyConfig(rstring), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_cloudwatch_log_destination_test.go b/aws/import_aws_cloudwatch_log_destination_test.go deleted file mode 100644 index b0c1d253567..00000000000 --- a/aws/import_aws_cloudwatch_log_destination_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudwatchLogDestination_importBasic(t *testing.T) { - resourceName := "aws_cloudwatch_log_destination.test" - - rstring := acctest.RandString(5) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCloudwatchLogDestinationDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCloudwatchLogDestinationConfig(rstring), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_cloudwatch_log_group_test.go b/aws/import_aws_cloudwatch_log_group_test.go deleted file mode 100644 index b218ab2864d..00000000000 --- a/aws/import_aws_cloudwatch_log_group_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudWatchLogGroup_importBasic(t *testing.T) { - resourceName := "aws_cloudwatch_log_group.foobar" - rInt := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCloudWatchLogGroupDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCloudWatchLogGroupConfig(rInt), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"retention_in_days"}, //this has a default value - }, - }, - }) -} diff --git a/aws/import_aws_cloudwatch_metric_alarm_test.go 
b/aws/import_aws_cloudwatch_metric_alarm_test.go deleted file mode 100644 index 1cb30254c14..00000000000 --- a/aws/import_aws_cloudwatch_metric_alarm_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCloudWatchMetricAlarm_importBasic(t *testing.T) { - rInt := acctest.RandInt() - resourceName := "aws_cloudwatch_metric_alarm.foobar" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCloudWatchMetricAlarmDestroy, - Steps: []resource.TestStep{ - { - Config: testAccAWSCloudWatchMetricAlarmConfig(rInt), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_codecommit_repository_test.go b/aws/import_aws_codecommit_repository_test.go deleted file mode 100644 index ea203c9c14a..00000000000 --- a/aws/import_aws_codecommit_repository_test.go +++ /dev/null @@ -1,29 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCodeCommitRepository_importBasic(t *testing.T) { - resName := "aws_codecommit_repository.test" - rInt := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCodeCommitRepositoryDestroy, - Steps: []resource.TestStep{ - { - Config: testAccCodeCommitRepository_basic(rInt), - }, - { - ResourceName: resName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_codepipeline_test.go b/aws/import_aws_codepipeline_test.go deleted file mode 100644 index 5025fcddc92..00000000000 --- a/aws/import_aws_codepipeline_test.go +++ /dev/null @@ -1,34 +0,0 @@ -package aws - -import ( - "os" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCodePipeline_Import_basic(t *testing.T) { - if os.Getenv("GITHUB_TOKEN") == "" { - t.Skip("Environment variable GITHUB_TOKEN is not set") - } - - name := acctest.RandString(10) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCodePipelineDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCodePipelineConfig_basic(name), - }, - - resource.TestStep{ - ResourceName: "aws_codepipeline.bar", - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_cognito_identity_pool_test.go b/aws/import_aws_cognito_identity_pool_test.go deleted file mode 100644 index bdd2caec809..00000000000 --- a/aws/import_aws_cognito_identity_pool_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCognitoIdentityPool_importBasic(t *testing.T) { - resourceName := "aws_cognito_identity_pool.main" - rName := acctest.RandString(10) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSAPIGatewayAccountDestroy, - Steps: []resource.TestStep{ - { - Config: 
testAccAWSCognitoIdentityPoolConfig_basic(rName), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_cognito_user_pool_test.go b/aws/import_aws_cognito_user_pool_test.go deleted file mode 100644 index 2cd078f2431..00000000000 --- a/aws/import_aws_cognito_user_pool_test.go +++ /dev/null @@ -1,29 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCognitoUserPool_importBasic(t *testing.T) { - resourceName := "aws_cognito_user_pool.pool" - name := acctest.RandString(5) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSCloudWatchDashboardDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCognitoUserPoolConfig_basic(name), - }, - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_customer_gateway_test.go b/aws/import_aws_customer_gateway_test.go deleted file mode 100644 index 96e791ce866..00000000000 --- a/aws/import_aws_customer_gateway_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSCustomerGateway_importBasic(t *testing.T) { - resourceName := "aws_customer_gateway.foo" - rInt := acctest.RandInt() - rBgpAsn := acctest.RandIntRange(64512, 65534) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckCustomerGatewayDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccCustomerGatewayConfig(rInt, rBgpAsn), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_dax_cluster_test.go b/aws/import_aws_dax_cluster_test.go deleted file mode 100644 index aae8e95a386..00000000000 --- a/aws/import_aws_dax_cluster_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDAXCluster_importBasic(t *testing.T) { - resourceName := "aws_dax_cluster.test" - rString := acctest.RandString(10) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDAXClusterDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSDAXClusterConfig(rString), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_db_event_subscription_test.go b/aws/import_aws_db_event_subscription_test.go deleted file mode 100644 index 2aa85073f67..00000000000 --- a/aws/import_aws_db_event_subscription_test.go +++ /dev/null @@ -1,33 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDBEventSubscription_importBasic(t *testing.T) { - resourceName := "aws_db_event_subscription.bar" - rInt := acctest.RandInt() - subscriptionName := 
fmt.Sprintf("tf-acc-test-rds-event-subs-%d", rInt) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDBEventSubscriptionDestroy, - Steps: []resource.TestStep{ - { - Config: testAccAWSDBEventSubscriptionConfig(rInt), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateId: subscriptionName, - }, - }, - }) -} diff --git a/aws/import_aws_db_instance_test.go b/aws/import_aws_db_instance_test.go deleted file mode 100644 index 5fea3c2e0d0..00000000000 --- a/aws/import_aws_db_instance_test.go +++ /dev/null @@ -1,33 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDBInstance_importBasic(t *testing.T) { - resourceName := "aws_db_instance.bar" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDBInstanceDestroy, - Steps: []resource.TestStep{ - { - Config: testAccAWSDBInstanceConfig, - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{ - "password", - "skip_final_snapshot", - "final_snapshot_identifier", - }, - }, - }, - }) -} diff --git a/aws/import_aws_db_option_group_test.go b/aws/import_aws_db_option_group_test.go deleted file mode 100644 index 3025ff9e87c..00000000000 --- a/aws/import_aws_db_option_group_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDBOptionGroup_importBasic(t *testing.T) { - resourceName := "aws_db_option_group.bar" - rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSDBOptionGroupBasicConfig(rName), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_db_parameter_group_group_test.go b/aws/import_aws_db_parameter_group_group_test.go deleted file mode 100644 index d9806e5cf81..00000000000 --- a/aws/import_aws_db_parameter_group_group_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDBParameterGroup_importBasic(t *testing.T) { - resourceName := "aws_db_parameter_group.bar" - groupName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSDBParameterGroupConfig(groupName), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_db_security_group_test.go b/aws/import_aws_db_security_group_test.go deleted file mode 100644 index 51694ba5356..00000000000 --- a/aws/import_aws_db_security_group_test.go +++ /dev/null @@ -1,36 +0,0 @@ -package aws - 
-import ( - "fmt" - "os" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDBSecurityGroup_importBasic(t *testing.T) { - oldvar := os.Getenv("AWS_DEFAULT_REGION") - os.Setenv("AWS_DEFAULT_REGION", "us-east-1") - defer os.Setenv("AWS_DEFAULT_REGION", oldvar) - - rName := fmt.Sprintf("tf-acc-%s", acctest.RandString(5)) - resourceName := "aws_db_security_group.bar" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDBSecurityGroupDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSDBSecurityGroupConfig(rName), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_db_subnet_group_test.go b/aws/import_aws_db_subnet_group_test.go deleted file mode 100644 index e9ab51b8288..00000000000 --- a/aws/import_aws_db_subnet_group_test.go +++ /dev/null @@ -1,33 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDBSubnetGroup_importBasic(t *testing.T) { - resourceName := "aws_db_subnet_group.foo" - - rName := fmt.Sprintf("tf-test-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckDBSubnetGroupDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccDBSubnetGroupConfig(rName), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{ - "description"}, - }, - }, - }) -} diff --git a/aws/import_aws_dx_connection_test.go b/aws/import_aws_dx_connection_test.go deleted file mode 100644 index 99d02da2514..00000000000 --- a/aws/import_aws_dx_connection_test.go +++ /dev/null @@ -1,29 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDxConnection_importBasic(t *testing.T) { - resourceName := "aws_dx_connection.hoge" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAwsDxConnectionDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccDxConnectionConfig(acctest.RandString(5)), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_dx_gateway.go b/aws/import_aws_dx_gateway.go new file mode 100644 index 00000000000..067e704e208 --- /dev/null +++ b/aws/import_aws_dx_gateway.go @@ -0,0 +1,48 @@ +package aws + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/schema" +) + +// Direct Connect Gateway import also imports all associations +func resourceAwsDxGatewayImportState(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + conn := meta.(*AWSClient).dxconn + + id := d.Id() + resp, err := conn.DescribeDirectConnectGateways(&directconnect.DescribeDirectConnectGatewaysInput{ + DirectConnectGatewayId: aws.String(id), + }) + if err != nil { + return nil, err + } + if 
len(resp.DirectConnectGateways) < 1 || resp.DirectConnectGateways[0] == nil { + return nil, fmt.Errorf("Direct Connect Gateway %s was not found", id) + } + results := make([]*schema.ResourceData, 1) + results[0] = d + + { + subResource := resourceAwsDxGatewayAssociation() + resp, err := conn.DescribeDirectConnectGatewayAssociations(&directconnect.DescribeDirectConnectGatewayAssociationsInput{ + DirectConnectGatewayId: aws.String(id), + }) + if err != nil { + return nil, err + } + + for _, assoc := range resp.DirectConnectGatewayAssociations { + d := subResource.Data(nil) + d.SetType("aws_dx_gateway_association") + d.Set("dx_gateway_id", assoc.DirectConnectGatewayId) + d.Set("vpn_gateway_id", assoc.VirtualGatewayId) + d.SetId(dxGatewayAssociationId(aws.StringValue(assoc.DirectConnectGatewayId), aws.StringValue(assoc.VirtualGatewayId))) + results = append(results, d) + } + } + + return results, nil +} diff --git a/aws/import_aws_dx_lag_test.go b/aws/import_aws_dx_lag_test.go deleted file mode 100644 index 17792d5259d..00000000000 --- a/aws/import_aws_dx_lag_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDxLag_importBasic(t *testing.T) { - resourceName := "aws_dx_lag.hoge" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAwsDxLagDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccDxLagConfig(acctest.RandString(5)), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"force_destroy"}, - }, - }, - }) -} diff --git a/aws/import_aws_dynamodb_table_test.go b/aws/import_aws_dynamodb_table_test.go deleted file mode 100644 index 666573c71b2..00000000000 --- a/aws/import_aws_dynamodb_table_test.go +++ /dev/null @@ -1,73 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSDynamoDbTable_importBasic(t *testing.T) { - resourceName := "aws_dynamodb_table.basic-dynamodb-table" - - rName := acctest.RandomWithPrefix("TerraformTestTable-") - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, - Steps: []resource.TestStep{ - { - Config: testAccAWSDynamoDbConfigInitialState(rName), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func TestAccAWSDynamoDbTable_importTags(t *testing.T) { - resourceName := "aws_dynamodb_table.basic-dynamodb-table" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, - Steps: []resource.TestStep{ - { - Config: testAccAWSDynamoDbConfigTags(), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func TestAccAWSDynamoDbTable_importTimeToLive(t *testing.T) { - resourceName := "aws_dynamodb_table.basic-dynamodb-table" - rName := acctest.RandomWithPrefix("TerraformTestTable-") - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, - Steps: 
[]resource.TestStep{ - { - Config: testAccAWSDynamoDbConfigAddTimeToLive(rName), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_ebs_volume_test.go b/aws/import_aws_ebs_volume_test.go deleted file mode 100644 index fd15bc2418a..00000000000 --- a/aws/import_aws_ebs_volume_test.go +++ /dev/null @@ -1,27 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSEBSVolume_importBasic(t *testing.T) { - resourceName := "aws_ebs_volume.test" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAwsEbsVolumeConfig, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_ecr_repository_test.go b/aws/import_aws_ecr_repository_test.go deleted file mode 100644 index 0c0d99ae90d..00000000000 --- a/aws/import_aws_ecr_repository_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSEcrRepository_importBasic(t *testing.T) { - resourceName := "aws_ecr_repository.default" - randString := acctest.RandString(10) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSEcrRepositoryDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSEcrRepository(randString), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_efs_file_system_test.go b/aws/import_aws_efs_file_system_test.go deleted file mode 100644 index 885ee9ddda8..00000000000 --- a/aws/import_aws_efs_file_system_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSEFSFileSystem_importBasic(t *testing.T) { - resourceName := "aws_efs_file_system.foo-with-tags" - rInt := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckEfsFileSystemDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSEFSFileSystemConfigWithTags(rInt), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"reference_name", "creation_token"}, - }, - }, - }) -} diff --git a/aws/import_aws_efs_mount_target_test.go b/aws/import_aws_efs_mount_target_test.go deleted file mode 100644 index 607938e43dd..00000000000 --- a/aws/import_aws_efs_mount_target_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSEFSMountTarget_importBasic(t *testing.T) { - resourceName := "aws_efs_mount_target.alpha" - - ct := fmt.Sprintf("createtoken-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckEfsMountTargetDestroy, 
- Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSEFSMountTargetConfig(ct), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_elastic_beanstalk_application_test.go b/aws/import_aws_elastic_beanstalk_application_test.go deleted file mode 100644 index 8d322abb47f..00000000000 --- a/aws/import_aws_elastic_beanstalk_application_test.go +++ /dev/null @@ -1,38 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAWSElasticBeanstalkApplication_importBasic(t *testing.T) { - resourceName := "aws_elastic_beanstalk_application.tftest" - config := fmt.Sprintf("tf-test-name-%d", acctest.RandInt()) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckBeanstalkAppDestroy, - Steps: []resource.TestStep{ - { - Config: testAccBeanstalkAppImportConfig(config), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func testAccBeanstalkAppImportConfig(name string) string { - return fmt.Sprintf(`resource "aws_elastic_beanstalk_application" "tftest" { - name = "%s" - description = "tf-test-desc" - }`, name) -} diff --git a/aws/import_aws_elastic_beanstalk_environment_test.go b/aws/import_aws_elastic_beanstalk_environment_test.go deleted file mode 100644 index 559df2e3e33..00000000000 --- a/aws/import_aws_elastic_beanstalk_environment_test.go +++ /dev/null @@ -1,46 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAWSElasticBeanstalkEnvironment_importBasic(t *testing.T) { - resourceName := "aws_elastic_beanstalk_application.tftest" - - applicationName := fmt.Sprintf("tf-test-name-%d", acctest.RandInt()) - environmentName := fmt.Sprintf("tf-test-env-name-%d", acctest.RandInt()) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckBeanstalkAppDestroy, - Steps: []resource.TestStep{ - { - Config: testAccBeanstalkEnvImportConfig(applicationName, environmentName), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func testAccBeanstalkEnvImportConfig(appName, envName string) string { - return fmt.Sprintf(`resource "aws_elastic_beanstalk_application" "tftest" { - name = "%s" - description = "tf-test-desc" - } - - resource "aws_elastic_beanstalk_environment" "tfenvtest" { - name = "%s" - application = "${aws_elastic_beanstalk_application.tftest.name}" - solution_stack_name = "64bit Amazon Linux running Python" - }`, appName, envName) -} diff --git a/aws/import_aws_elasticache_cluster_test.go b/aws/import_aws_elasticache_cluster_test.go deleted file mode 100644 index 6128ddf9503..00000000000 --- a/aws/import_aws_elasticache_cluster_test.go +++ /dev/null @@ -1,36 +0,0 @@ -package aws - -import ( - "os" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSElasticacheCluster_importBasic(t *testing.T) { - oldvar := os.Getenv("AWS_DEFAULT_REGION") - os.Setenv("AWS_DEFAULT_REGION", "us-east-1") - defer os.Setenv("AWS_DEFAULT_REGION", oldvar) - - name := 
acctest.RandString(10) - - resourceName := "aws_elasticache_cluster.bar" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSElasticacheClusterConfigBasic(name), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_elasticache_parameter_group_test.go b/aws/import_aws_elasticache_parameter_group_test.go deleted file mode 100644 index 11c9334eda6..00000000000 --- a/aws/import_aws_elasticache_parameter_group_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSElasticacheParameterGroup_importBasic(t *testing.T) { - resourceName := "aws_elasticache_parameter_group.bar" - rName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSElasticacheParameterGroupDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSElasticacheParameterGroupConfig(rName), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_elasticache_replication_group_test.go b/aws/import_aws_elasticache_replication_group_test.go deleted file mode 100644 index 372eff849ca..00000000000 --- a/aws/import_aws_elasticache_replication_group_test.go +++ /dev/null @@ -1,37 +0,0 @@ -package aws - -import ( - "os" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSElasticacheReplicationGroup_importBasic(t *testing.T) { - oldvar := os.Getenv("AWS_DEFAULT_REGION") - os.Setenv("AWS_DEFAULT_REGION", "us-east-1") - defer os.Setenv("AWS_DEFAULT_REGION", oldvar) - - name := acctest.RandString(10) - - resourceName := "aws_elasticache_replication_group.bar" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSElasticacheReplicationGroupConfig(name), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"apply_immediately"}, //not in the API - }, - }, - }) -} diff --git a/aws/import_aws_elasticache_subnet_group_test.go b/aws/import_aws_elasticache_subnet_group_test.go deleted file mode 100644 index 2ce15611039..00000000000 --- a/aws/import_aws_elasticache_subnet_group_test.go +++ /dev/null @@ -1,34 +0,0 @@ -package aws - -import ( - "testing" - - "fmt" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSElasticacheSubnetGroup_importBasic(t *testing.T) { - resourceName := "aws_elasticache_subnet_group.bar" - config := fmt.Sprintf(testAccAWSElasticacheSubnetGroupConfig, acctest.RandInt()) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: 
testAccCheckAWSElasticacheSubnetGroupDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: config, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{ - "description"}, - }, - }, - }) -} diff --git a/aws/import_aws_elb_test.go b/aws/import_aws_elb_test.go deleted file mode 100644 index f4d90dcef99..00000000000 --- a/aws/import_aws_elb_test.go +++ /dev/null @@ -1,28 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSELB_importBasic(t *testing.T) { - resourceName := "aws_elb.bar" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSELBDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSELBConfig, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_emr_security_configuration_test.go b/aws/import_aws_emr_security_configuration_test.go deleted file mode 100644 index 72ddddf5136..00000000000 --- a/aws/import_aws_emr_security_configuration_test.go +++ /dev/null @@ -1,28 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSEmrSecurityConfiguration_importBasic(t *testing.T) { - resourceName := "aws_emr_security_configuration.foo" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckEmrSecurityConfigurationDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccEmrSecurityConfigurationConfig, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_flow_log_test.go b/aws/import_aws_flow_log_test.go deleted file mode 100644 index 97ccebb689d..00000000000 --- a/aws/import_aws_flow_log_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSFlowLog_importBasic(t *testing.T) { - resourceName := "aws_flow_log.test_flow_log" - - rInt := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckFlowLogDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccFlowLogConfig_basic(rInt), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_glacier_vault_test.go b/aws/import_aws_glacier_vault_test.go deleted file mode 100644 index f7c20666e45..00000000000 --- a/aws/import_aws_glacier_vault_test.go +++ /dev/null @@ -1,30 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSGlacierVault_importBasic(t *testing.T) { - resourceName := "aws_glacier_vault.full" - rInt := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckGlacierVaultDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGlacierVault_full(rInt), - }, - 
- resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_glue_catalog_database_test.go b/aws/import_aws_glue_catalog_database_test.go deleted file mode 100644 index a84f2d670f3..00000000000 --- a/aws/import_aws_glue_catalog_database_test.go +++ /dev/null @@ -1,29 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSGlueCatalogDatabase_importBasic(t *testing.T) { - resourceName := "aws_glue_catalog_database.test" - rInt := acctest.RandInt() - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckGlueDatabaseDestroy, - Steps: []resource.TestStep{ - { - Config: testAccGlueCatalogDatabase_full(rInt, "A test catalog from terraform"), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_iam_account_alias_test.go b/aws/import_aws_iam_account_alias_test.go deleted file mode 100644 index 28829a4194f..00000000000 --- a/aws/import_aws_iam_account_alias_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func testAccAWSIAMAccountAlias_importBasic(t *testing.T) { - resourceName := "aws_iam_account_alias.test" - - rstring := acctest.RandString(5) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSIAMAccountAliasDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSIAMAccountAliasConfig(rstring), - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_iam_account_password_policy_test.go b/aws/import_aws_iam_account_password_policy_test.go deleted file mode 100644 index b5fec9eca83..00000000000 --- a/aws/import_aws_iam_account_password_policy_test.go +++ /dev/null @@ -1,28 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSIAMAccountPasswordPolicy_importBasic(t *testing.T) { - resourceName := "aws_iam_account_password_policy.default" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSIAMAccountPasswordPolicyDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSIAMAccountPasswordPolicy, - }, - - resource.TestStep{ - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_iam_group_test.go b/aws/import_aws_iam_group_test.go deleted file mode 100644 index 74c7e701f7b..00000000000 --- a/aws/import_aws_iam_group_test.go +++ /dev/null @@ -1,33 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform/helper/acctest" - "github.com/hashicorp/terraform/helper/resource" -) - -func TestAccAWSIAMGroup_importBasic(t *testing.T) { - resourceName := "aws_iam_group.group" - - rString := acctest.RandString(8) - groupName := fmt.Sprintf("tf-acc-group-import-%s", rString) - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, 
- CheckDestroy: testAccCheckAWSGroupDestroy, - Steps: []resource.TestStep{ - { - Config: testAccAWSGroupConfig(groupName), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} diff --git a/aws/import_aws_iam_policy_test.go b/aws/import_aws_iam_policy_test.go deleted file mode 100644 index d40145b5874..00000000000 --- a/aws/import_aws_iam_policy_test.go +++ /dev/null @@ -1,88 +0,0 @@ -package aws - -import ( - "fmt" - "testing" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" - "github.com/aws/aws-sdk-go/service/iam" - - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" -) - -func testAccAwsIamPolicyConfig(suffix string) string { - return fmt.Sprintf(` -resource "aws_iam_policy" "test_%[1]s" { - name = "test_policy_%[1]s" - path = "/" - description = "My test policy" - policy = < LB Rename "aws_lb": dataSourceAwsLb(), @@ -241,333 +280,431 @@ func Provider() terraform.ResourceProvider { }, ResourcesMap: map[string]*schema.Resource{ - "aws_acm_certificate": resourceAwsAcmCertificate(), - "aws_acm_certificate_validation": resourceAwsAcmCertificateValidation(), - "aws_ami": resourceAwsAmi(), - "aws_ami_copy": resourceAwsAmiCopy(), - "aws_ami_from_instance": resourceAwsAmiFromInstance(), - "aws_ami_launch_permission": resourceAwsAmiLaunchPermission(), - "aws_api_gateway_account": resourceAwsApiGatewayAccount(), - "aws_api_gateway_api_key": resourceAwsApiGatewayApiKey(), - "aws_api_gateway_authorizer": resourceAwsApiGatewayAuthorizer(), - "aws_api_gateway_base_path_mapping": resourceAwsApiGatewayBasePathMapping(), - "aws_api_gateway_client_certificate": resourceAwsApiGatewayClientCertificate(), - "aws_api_gateway_deployment": resourceAwsApiGatewayDeployment(), - "aws_api_gateway_documentation_part": resourceAwsApiGatewayDocumentationPart(), - "aws_api_gateway_documentation_version": resourceAwsApiGatewayDocumentationVersion(), - "aws_api_gateway_domain_name": resourceAwsApiGatewayDomainName(), - "aws_api_gateway_gateway_response": resourceAwsApiGatewayGatewayResponse(), - "aws_api_gateway_integration": resourceAwsApiGatewayIntegration(), - "aws_api_gateway_integration_response": resourceAwsApiGatewayIntegrationResponse(), - "aws_api_gateway_method": resourceAwsApiGatewayMethod(), - "aws_api_gateway_method_response": resourceAwsApiGatewayMethodResponse(), - "aws_api_gateway_method_settings": resourceAwsApiGatewayMethodSettings(), - "aws_api_gateway_model": resourceAwsApiGatewayModel(), - "aws_api_gateway_request_validator": resourceAwsApiGatewayRequestValidator(), - "aws_api_gateway_resource": resourceAwsApiGatewayResource(), - "aws_api_gateway_rest_api": resourceAwsApiGatewayRestApi(), - "aws_api_gateway_stage": resourceAwsApiGatewayStage(), - "aws_api_gateway_usage_plan": resourceAwsApiGatewayUsagePlan(), - "aws_api_gateway_usage_plan_key": resourceAwsApiGatewayUsagePlanKey(), - "aws_api_gateway_vpc_link": resourceAwsApiGatewayVpcLink(), - "aws_app_cookie_stickiness_policy": resourceAwsAppCookieStickinessPolicy(), - "aws_appautoscaling_target": resourceAwsAppautoscalingTarget(), - "aws_appautoscaling_policy": resourceAwsAppautoscalingPolicy(), - "aws_appautoscaling_scheduled_action": resourceAwsAppautoscalingScheduledAction(), - "aws_appsync_graphql_api": resourceAwsAppsyncGraphqlApi(), - "aws_athena_database": resourceAwsAthenaDatabase(), - "aws_athena_named_query": resourceAwsAthenaNamedQuery(), - "aws_autoscaling_attachment": 
resourceAwsAutoscalingAttachment(), - "aws_autoscaling_group": resourceAwsAutoscalingGroup(), - "aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(), - "aws_autoscaling_notification": resourceAwsAutoscalingNotification(), - "aws_autoscaling_policy": resourceAwsAutoscalingPolicy(), - "aws_autoscaling_schedule": resourceAwsAutoscalingSchedule(), - "aws_cloud9_environment_ec2": resourceAwsCloud9EnvironmentEc2(), - "aws_cloudformation_stack": resourceAwsCloudFormationStack(), - "aws_cloudfront_distribution": resourceAwsCloudFrontDistribution(), - "aws_cloudfront_origin_access_identity": resourceAwsCloudFrontOriginAccessIdentity(), - "aws_cloudtrail": resourceAwsCloudTrail(), - "aws_cloudwatch_event_permission": resourceAwsCloudWatchEventPermission(), - "aws_cloudwatch_event_rule": resourceAwsCloudWatchEventRule(), - "aws_cloudwatch_event_target": resourceAwsCloudWatchEventTarget(), - "aws_cloudwatch_log_destination": resourceAwsCloudWatchLogDestination(), - "aws_cloudwatch_log_destination_policy": resourceAwsCloudWatchLogDestinationPolicy(), - "aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(), - "aws_cloudwatch_log_metric_filter": resourceAwsCloudWatchLogMetricFilter(), - "aws_cloudwatch_log_resource_policy": resourceAwsCloudWatchLogResourcePolicy(), - "aws_cloudwatch_log_stream": resourceAwsCloudWatchLogStream(), - "aws_cloudwatch_log_subscription_filter": resourceAwsCloudwatchLogSubscriptionFilter(), - "aws_config_config_rule": resourceAwsConfigConfigRule(), - "aws_config_configuration_recorder": resourceAwsConfigConfigurationRecorder(), - "aws_config_configuration_recorder_status": resourceAwsConfigConfigurationRecorderStatus(), - "aws_config_delivery_channel": resourceAwsConfigDeliveryChannel(), - "aws_cognito_identity_pool": resourceAwsCognitoIdentityPool(), - "aws_cognito_identity_pool_roles_attachment": resourceAwsCognitoIdentityPoolRolesAttachment(), - "aws_cognito_user_group": resourceAwsCognitoUserGroup(), - "aws_cognito_user_pool": resourceAwsCognitoUserPool(), - "aws_cognito_user_pool_client": resourceAwsCognitoUserPoolClient(), - "aws_cognito_user_pool_domain": resourceAwsCognitoUserPoolDomain(), - "aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(), - "aws_cloudwatch_dashboard": resourceAwsCloudWatchDashboard(), - "aws_codedeploy_app": resourceAwsCodeDeployApp(), - "aws_codedeploy_deployment_config": resourceAwsCodeDeployDeploymentConfig(), - "aws_codedeploy_deployment_group": resourceAwsCodeDeployDeploymentGroup(), - "aws_codecommit_repository": resourceAwsCodeCommitRepository(), - "aws_codecommit_trigger": resourceAwsCodeCommitTrigger(), - "aws_codebuild_project": resourceAwsCodeBuildProject(), - "aws_codepipeline": resourceAwsCodePipeline(), - "aws_customer_gateway": resourceAwsCustomerGateway(), - "aws_dax_cluster": resourceAwsDaxCluster(), - "aws_db_event_subscription": resourceAwsDbEventSubscription(), - "aws_db_instance": resourceAwsDbInstance(), - "aws_db_option_group": resourceAwsDbOptionGroup(), - "aws_db_parameter_group": resourceAwsDbParameterGroup(), - "aws_db_security_group": resourceAwsDbSecurityGroup(), - "aws_db_snapshot": resourceAwsDbSnapshot(), - "aws_db_subnet_group": resourceAwsDbSubnetGroup(), - "aws_devicefarm_project": resourceAwsDevicefarmProject(), - "aws_directory_service_directory": resourceAwsDirectoryServiceDirectory(), - "aws_dms_certificate": resourceAwsDmsCertificate(), - "aws_dms_endpoint": resourceAwsDmsEndpoint(), - "aws_dms_replication_instance": resourceAwsDmsReplicationInstance(), - 
"aws_dms_replication_subnet_group": resourceAwsDmsReplicationSubnetGroup(), - "aws_dms_replication_task": resourceAwsDmsReplicationTask(), - "aws_dx_lag": resourceAwsDxLag(), - "aws_dx_connection": resourceAwsDxConnection(), - "aws_dx_connection_association": resourceAwsDxConnectionAssociation(), - "aws_dynamodb_table": resourceAwsDynamoDbTable(), - "aws_dynamodb_table_item": resourceAwsDynamoDbTableItem(), - "aws_dynamodb_global_table": resourceAwsDynamoDbGlobalTable(), - "aws_ebs_snapshot": resourceAwsEbsSnapshot(), - "aws_ebs_volume": resourceAwsEbsVolume(), - "aws_ecr_lifecycle_policy": resourceAwsEcrLifecyclePolicy(), - "aws_ecr_repository": resourceAwsEcrRepository(), - "aws_ecr_repository_policy": resourceAwsEcrRepositoryPolicy(), - "aws_ecs_cluster": resourceAwsEcsCluster(), - "aws_ecs_service": resourceAwsEcsService(), - "aws_ecs_task_definition": resourceAwsEcsTaskDefinition(), - "aws_efs_file_system": resourceAwsEfsFileSystem(), - "aws_efs_mount_target": resourceAwsEfsMountTarget(), - "aws_egress_only_internet_gateway": resourceAwsEgressOnlyInternetGateway(), - "aws_eip": resourceAwsEip(), - "aws_eip_association": resourceAwsEipAssociation(), - "aws_elasticache_cluster": resourceAwsElasticacheCluster(), - "aws_elasticache_parameter_group": resourceAwsElasticacheParameterGroup(), - "aws_elasticache_replication_group": resourceAwsElasticacheReplicationGroup(), - "aws_elasticache_security_group": resourceAwsElasticacheSecurityGroup(), - "aws_elasticache_subnet_group": resourceAwsElasticacheSubnetGroup(), - "aws_elastic_beanstalk_application": resourceAwsElasticBeanstalkApplication(), - "aws_elastic_beanstalk_application_version": resourceAwsElasticBeanstalkApplicationVersion(), - "aws_elastic_beanstalk_configuration_template": resourceAwsElasticBeanstalkConfigurationTemplate(), - "aws_elastic_beanstalk_environment": resourceAwsElasticBeanstalkEnvironment(), - "aws_elasticsearch_domain": resourceAwsElasticSearchDomain(), - "aws_elasticsearch_domain_policy": resourceAwsElasticSearchDomainPolicy(), - "aws_elastictranscoder_pipeline": resourceAwsElasticTranscoderPipeline(), - "aws_elastictranscoder_preset": resourceAwsElasticTranscoderPreset(), - "aws_elb": resourceAwsElb(), - "aws_elb_attachment": resourceAwsElbAttachment(), - "aws_emr_cluster": resourceAwsEMRCluster(), - "aws_emr_instance_group": resourceAwsEMRInstanceGroup(), - "aws_emr_security_configuration": resourceAwsEMRSecurityConfiguration(), - "aws_flow_log": resourceAwsFlowLog(), - "aws_gamelift_alias": resourceAwsGameliftAlias(), - "aws_gamelift_build": resourceAwsGameliftBuild(), - "aws_gamelift_fleet": resourceAwsGameliftFleet(), - "aws_glacier_vault": resourceAwsGlacierVault(), - "aws_glue_catalog_database": resourceAwsGlueCatalogDatabase(), - "aws_guardduty_detector": resourceAwsGuardDutyDetector(), - "aws_guardduty_ipset": resourceAwsGuardDutyIpset(), - "aws_guardduty_member": resourceAwsGuardDutyMember(), - "aws_guardduty_threatintelset": resourceAwsGuardDutyThreatintelset(), - "aws_iam_access_key": resourceAwsIamAccessKey(), - "aws_iam_account_alias": resourceAwsIamAccountAlias(), - "aws_iam_account_password_policy": resourceAwsIamAccountPasswordPolicy(), - "aws_iam_group_policy": resourceAwsIamGroupPolicy(), - "aws_iam_group": resourceAwsIamGroup(), - "aws_iam_group_membership": resourceAwsIamGroupMembership(), - "aws_iam_group_policy_attachment": resourceAwsIamGroupPolicyAttachment(), - "aws_iam_instance_profile": resourceAwsIamInstanceProfile(), - "aws_iam_openid_connect_provider": 
resourceAwsIamOpenIDConnectProvider(), - "aws_iam_policy": resourceAwsIamPolicy(), - "aws_iam_policy_attachment": resourceAwsIamPolicyAttachment(), - "aws_iam_role_policy_attachment": resourceAwsIamRolePolicyAttachment(), - "aws_iam_role_policy": resourceAwsIamRolePolicy(), - "aws_iam_role": resourceAwsIamRole(), - "aws_iam_saml_provider": resourceAwsIamSamlProvider(), - "aws_iam_server_certificate": resourceAwsIAMServerCertificate(), - "aws_iam_user_policy_attachment": resourceAwsIamUserPolicyAttachment(), - "aws_iam_user_policy": resourceAwsIamUserPolicy(), - "aws_iam_user_ssh_key": resourceAwsIamUserSshKey(), - "aws_iam_user": resourceAwsIamUser(), - "aws_iam_user_login_profile": resourceAwsIamUserLoginProfile(), - "aws_inspector_assessment_target": resourceAWSInspectorAssessmentTarget(), - "aws_inspector_assessment_template": resourceAWSInspectorAssessmentTemplate(), - "aws_inspector_resource_group": resourceAWSInspectorResourceGroup(), - "aws_instance": resourceAwsInstance(), - "aws_internet_gateway": resourceAwsInternetGateway(), - "aws_iot_certificate": resourceAwsIotCertificate(), - "aws_iot_policy": resourceAwsIotPolicy(), - "aws_iot_thing": resourceAwsIotThing(), - "aws_iot_thing_type": resourceAwsIotThingType(), - "aws_iot_topic_rule": resourceAwsIotTopicRule(), - "aws_key_pair": resourceAwsKeyPair(), - "aws_kinesis_firehose_delivery_stream": resourceAwsKinesisFirehoseDeliveryStream(), - "aws_kinesis_stream": resourceAwsKinesisStream(), - "aws_kms_alias": resourceAwsKmsAlias(), - "aws_kms_grant": resourceAwsKmsGrant(), - "aws_kms_key": resourceAwsKmsKey(), - "aws_lambda_function": resourceAwsLambdaFunction(), - "aws_lambda_event_source_mapping": resourceAwsLambdaEventSourceMapping(), - "aws_lambda_alias": resourceAwsLambdaAlias(), - "aws_lambda_permission": resourceAwsLambdaPermission(), - "aws_launch_configuration": resourceAwsLaunchConfiguration(), - "aws_lightsail_domain": resourceAwsLightsailDomain(), - "aws_lightsail_instance": resourceAwsLightsailInstance(), - "aws_lightsail_key_pair": resourceAwsLightsailKeyPair(), - "aws_lightsail_static_ip": resourceAwsLightsailStaticIp(), - "aws_lightsail_static_ip_attachment": resourceAwsLightsailStaticIpAttachment(), - "aws_lb_cookie_stickiness_policy": resourceAwsLBCookieStickinessPolicy(), - "aws_load_balancer_policy": resourceAwsLoadBalancerPolicy(), - "aws_load_balancer_backend_server_policy": resourceAwsLoadBalancerBackendServerPolicies(), - "aws_load_balancer_listener_policy": resourceAwsLoadBalancerListenerPolicies(), - "aws_lb_ssl_negotiation_policy": resourceAwsLBSSLNegotiationPolicy(), - "aws_main_route_table_association": resourceAwsMainRouteTableAssociation(), - "aws_mq_broker": resourceAwsMqBroker(), - "aws_mq_configuration": resourceAwsMqConfiguration(), - "aws_media_store_container": resourceAwsMediaStoreContainer(), - "aws_nat_gateway": resourceAwsNatGateway(), - "aws_network_acl": resourceAwsNetworkAcl(), - "aws_default_network_acl": resourceAwsDefaultNetworkAcl(), - "aws_network_acl_rule": resourceAwsNetworkAclRule(), - "aws_network_interface": resourceAwsNetworkInterface(), - "aws_network_interface_attachment": resourceAwsNetworkInterfaceAttachment(), - "aws_opsworks_application": resourceAwsOpsworksApplication(), - "aws_opsworks_stack": resourceAwsOpsworksStack(), - "aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(), - "aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(), - "aws_opsworks_static_web_layer": resourceAwsOpsworksStaticWebLayer(), - "aws_opsworks_php_app_layer": 
resourceAwsOpsworksPhpAppLayer(), - "aws_opsworks_rails_app_layer": resourceAwsOpsworksRailsAppLayer(), - "aws_opsworks_nodejs_app_layer": resourceAwsOpsworksNodejsAppLayer(), - "aws_opsworks_memcached_layer": resourceAwsOpsworksMemcachedLayer(), - "aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(), - "aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(), - "aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(), - "aws_opsworks_instance": resourceAwsOpsworksInstance(), - "aws_opsworks_user_profile": resourceAwsOpsworksUserProfile(), - "aws_opsworks_permission": resourceAwsOpsworksPermission(), - "aws_opsworks_rds_db_instance": resourceAwsOpsworksRdsDbInstance(), - "aws_organizations_organization": resourceAwsOrganizationsOrganization(), - "aws_placement_group": resourceAwsPlacementGroup(), - "aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(), - "aws_rds_cluster": resourceAwsRDSCluster(), - "aws_rds_cluster_instance": resourceAwsRDSClusterInstance(), - "aws_rds_cluster_parameter_group": resourceAwsRDSClusterParameterGroup(), - "aws_redshift_cluster": resourceAwsRedshiftCluster(), - "aws_redshift_security_group": resourceAwsRedshiftSecurityGroup(), - "aws_redshift_parameter_group": resourceAwsRedshiftParameterGroup(), - "aws_redshift_subnet_group": resourceAwsRedshiftSubnetGroup(), - "aws_route53_delegation_set": resourceAwsRoute53DelegationSet(), - "aws_route53_query_log": resourceAwsRoute53QueryLog(), - "aws_route53_record": resourceAwsRoute53Record(), - "aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(), - "aws_route53_zone": resourceAwsRoute53Zone(), - "aws_route53_health_check": resourceAwsRoute53HealthCheck(), - "aws_route": resourceAwsRoute(), - "aws_route_table": resourceAwsRouteTable(), - "aws_default_route_table": resourceAwsDefaultRouteTable(), - "aws_route_table_association": resourceAwsRouteTableAssociation(), - "aws_ses_active_receipt_rule_set": resourceAwsSesActiveReceiptRuleSet(), - "aws_ses_domain_identity": resourceAwsSesDomainIdentity(), - "aws_ses_domain_dkim": resourceAwsSesDomainDkim(), - "aws_ses_domain_mail_from": resourceAwsSesDomainMailFrom(), - "aws_ses_receipt_filter": resourceAwsSesReceiptFilter(), - "aws_ses_receipt_rule": resourceAwsSesReceiptRule(), - "aws_ses_receipt_rule_set": resourceAwsSesReceiptRuleSet(), - "aws_ses_configuration_set": resourceAwsSesConfigurationSet(), - "aws_ses_event_destination": resourceAwsSesEventDestination(), - "aws_ses_template": resourceAwsSesTemplate(), - "aws_s3_bucket": resourceAwsS3Bucket(), - "aws_s3_bucket_policy": resourceAwsS3BucketPolicy(), - "aws_s3_bucket_object": resourceAwsS3BucketObject(), - "aws_s3_bucket_notification": resourceAwsS3BucketNotification(), - "aws_s3_bucket_metric": resourceAwsS3BucketMetric(), - "aws_security_group": resourceAwsSecurityGroup(), - "aws_network_interface_sg_attachment": resourceAwsNetworkInterfaceSGAttachment(), - "aws_default_security_group": resourceAwsDefaultSecurityGroup(), - "aws_security_group_rule": resourceAwsSecurityGroupRule(), - "aws_servicecatalog_portfolio": resourceAwsServiceCatalogPortfolio(), - "aws_service_discovery_private_dns_namespace": resourceAwsServiceDiscoveryPrivateDnsNamespace(), - "aws_service_discovery_public_dns_namespace": resourceAwsServiceDiscoveryPublicDnsNamespace(), - "aws_service_discovery_service": resourceAwsServiceDiscoveryService(), - "aws_simpledb_domain": resourceAwsSimpleDBDomain(), - "aws_ssm_activation": resourceAwsSsmActivation(), - "aws_ssm_association": resourceAwsSsmAssociation(), - 
"aws_ssm_document": resourceAwsSsmDocument(), - "aws_ssm_maintenance_window": resourceAwsSsmMaintenanceWindow(), - "aws_ssm_maintenance_window_target": resourceAwsSsmMaintenanceWindowTarget(), - "aws_ssm_maintenance_window_task": resourceAwsSsmMaintenanceWindowTask(), - "aws_ssm_patch_baseline": resourceAwsSsmPatchBaseline(), - "aws_ssm_patch_group": resourceAwsSsmPatchGroup(), - "aws_ssm_parameter": resourceAwsSsmParameter(), - "aws_ssm_resource_data_sync": resourceAwsSsmResourceDataSync(), - "aws_spot_datafeed_subscription": resourceAwsSpotDataFeedSubscription(), - "aws_spot_instance_request": resourceAwsSpotInstanceRequest(), - "aws_spot_fleet_request": resourceAwsSpotFleetRequest(), - "aws_sqs_queue": resourceAwsSqsQueue(), - "aws_sqs_queue_policy": resourceAwsSqsQueuePolicy(), - "aws_snapshot_create_volume_permission": resourceAwsSnapshotCreateVolumePermission(), - "aws_sns_platform_application": resourceAwsSnsPlatformApplication(), - "aws_sns_topic": resourceAwsSnsTopic(), - "aws_sns_topic_policy": resourceAwsSnsTopicPolicy(), - "aws_sns_topic_subscription": resourceAwsSnsTopicSubscription(), - "aws_sfn_activity": resourceAwsSfnActivity(), - "aws_sfn_state_machine": resourceAwsSfnStateMachine(), - "aws_default_subnet": resourceAwsDefaultSubnet(), - "aws_subnet": resourceAwsSubnet(), - "aws_volume_attachment": resourceAwsVolumeAttachment(), - "aws_vpc_dhcp_options_association": resourceAwsVpcDhcpOptionsAssociation(), - "aws_default_vpc_dhcp_options": resourceAwsDefaultVpcDhcpOptions(), - "aws_vpc_dhcp_options": resourceAwsVpcDhcpOptions(), - "aws_vpc_peering_connection": resourceAwsVpcPeeringConnection(), - "aws_vpc_peering_connection_accepter": resourceAwsVpcPeeringConnectionAccepter(), - "aws_default_vpc": resourceAwsDefaultVpc(), - "aws_vpc": resourceAwsVpc(), - "aws_vpc_endpoint": resourceAwsVpcEndpoint(), - "aws_vpc_endpoint_connection_notification": resourceAwsVpcEndpointConnectionNotification(), - "aws_vpc_endpoint_route_table_association": resourceAwsVpcEndpointRouteTableAssociation(), - "aws_vpc_endpoint_subnet_association": resourceAwsVpcEndpointSubnetAssociation(), - "aws_vpc_endpoint_service": resourceAwsVpcEndpointService(), - "aws_vpc_endpoint_service_allowed_principal": resourceAwsVpcEndpointServiceAllowedPrincipal(), - "aws_vpn_connection": resourceAwsVpnConnection(), - "aws_vpn_connection_route": resourceAwsVpnConnectionRoute(), - "aws_vpn_gateway": resourceAwsVpnGateway(), - "aws_vpn_gateway_attachment": resourceAwsVpnGatewayAttachment(), - "aws_vpn_gateway_route_propagation": resourceAwsVpnGatewayRoutePropagation(), - "aws_waf_byte_match_set": resourceAwsWafByteMatchSet(), - "aws_waf_ipset": resourceAwsWafIPSet(), - "aws_waf_rule": resourceAwsWafRule(), - "aws_waf_rate_based_rule": resourceAwsWafRateBasedRule(), - "aws_waf_size_constraint_set": resourceAwsWafSizeConstraintSet(), - "aws_waf_web_acl": resourceAwsWafWebAcl(), - "aws_waf_xss_match_set": resourceAwsWafXssMatchSet(), - "aws_waf_sql_injection_match_set": resourceAwsWafSqlInjectionMatchSet(), - "aws_waf_geo_match_set": resourceAwsWafGeoMatchSet(), - "aws_wafregional_byte_match_set": resourceAwsWafRegionalByteMatchSet(), - "aws_wafregional_ipset": resourceAwsWafRegionalIPSet(), - "aws_wafregional_sql_injection_match_set": resourceAwsWafRegionalSqlInjectionMatchSet(), - "aws_wafregional_xss_match_set": resourceAwsWafRegionalXssMatchSet(), - "aws_wafregional_rule": resourceAwsWafRegionalRule(), - "aws_wafregional_web_acl": resourceAwsWafRegionalWebAcl(), - "aws_batch_compute_environment": 
resourceAwsBatchComputeEnvironment(), - "aws_batch_job_definition": resourceAwsBatchJobDefinition(), - "aws_batch_job_queue": resourceAwsBatchJobQueue(), + "aws_acm_certificate": resourceAwsAcmCertificate(), + "aws_acm_certificate_validation": resourceAwsAcmCertificateValidation(), + "aws_acmpca_certificate_authority": resourceAwsAcmpcaCertificateAuthority(), + "aws_ami": resourceAwsAmi(), + "aws_ami_copy": resourceAwsAmiCopy(), + "aws_ami_from_instance": resourceAwsAmiFromInstance(), + "aws_ami_launch_permission": resourceAwsAmiLaunchPermission(), + "aws_api_gateway_account": resourceAwsApiGatewayAccount(), + "aws_api_gateway_api_key": resourceAwsApiGatewayApiKey(), + "aws_api_gateway_authorizer": resourceAwsApiGatewayAuthorizer(), + "aws_api_gateway_base_path_mapping": resourceAwsApiGatewayBasePathMapping(), + "aws_api_gateway_client_certificate": resourceAwsApiGatewayClientCertificate(), + "aws_api_gateway_deployment": resourceAwsApiGatewayDeployment(), + "aws_api_gateway_documentation_part": resourceAwsApiGatewayDocumentationPart(), + "aws_api_gateway_documentation_version": resourceAwsApiGatewayDocumentationVersion(), + "aws_api_gateway_domain_name": resourceAwsApiGatewayDomainName(), + "aws_api_gateway_gateway_response": resourceAwsApiGatewayGatewayResponse(), + "aws_api_gateway_integration": resourceAwsApiGatewayIntegration(), + "aws_api_gateway_integration_response": resourceAwsApiGatewayIntegrationResponse(), + "aws_api_gateway_method": resourceAwsApiGatewayMethod(), + "aws_api_gateway_method_response": resourceAwsApiGatewayMethodResponse(), + "aws_api_gateway_method_settings": resourceAwsApiGatewayMethodSettings(), + "aws_api_gateway_model": resourceAwsApiGatewayModel(), + "aws_api_gateway_request_validator": resourceAwsApiGatewayRequestValidator(), + "aws_api_gateway_resource": resourceAwsApiGatewayResource(), + "aws_api_gateway_rest_api": resourceAwsApiGatewayRestApi(), + "aws_api_gateway_stage": resourceAwsApiGatewayStage(), + "aws_api_gateway_usage_plan": resourceAwsApiGatewayUsagePlan(), + "aws_api_gateway_usage_plan_key": resourceAwsApiGatewayUsagePlanKey(), + "aws_api_gateway_vpc_link": resourceAwsApiGatewayVpcLink(), + "aws_app_cookie_stickiness_policy": resourceAwsAppCookieStickinessPolicy(), + "aws_appautoscaling_target": resourceAwsAppautoscalingTarget(), + "aws_appautoscaling_policy": resourceAwsAppautoscalingPolicy(), + "aws_appautoscaling_scheduled_action": resourceAwsAppautoscalingScheduledAction(), + "aws_appsync_api_key": resourceAwsAppsyncApiKey(), + "aws_appsync_datasource": resourceAwsAppsyncDatasource(), + "aws_appsync_graphql_api": resourceAwsAppsyncGraphqlApi(), + "aws_athena_database": resourceAwsAthenaDatabase(), + "aws_athena_named_query": resourceAwsAthenaNamedQuery(), + "aws_autoscaling_attachment": resourceAwsAutoscalingAttachment(), + "aws_autoscaling_group": resourceAwsAutoscalingGroup(), + "aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(), + "aws_autoscaling_notification": resourceAwsAutoscalingNotification(), + "aws_autoscaling_policy": resourceAwsAutoscalingPolicy(), + "aws_autoscaling_schedule": resourceAwsAutoscalingSchedule(), + "aws_budgets_budget": resourceAwsBudgetsBudget(), + "aws_cloud9_environment_ec2": resourceAwsCloud9EnvironmentEc2(), + "aws_cloudformation_stack": resourceAwsCloudFormationStack(), + "aws_cloudfront_distribution": resourceAwsCloudFrontDistribution(), + "aws_cloudfront_origin_access_identity": resourceAwsCloudFrontOriginAccessIdentity(), + "aws_cloudfront_public_key": 
resourceAwsCloudFrontPublicKey(), + "aws_cloudtrail": resourceAwsCloudTrail(), + "aws_cloudwatch_event_permission": resourceAwsCloudWatchEventPermission(), + "aws_cloudwatch_event_rule": resourceAwsCloudWatchEventRule(), + "aws_cloudwatch_event_target": resourceAwsCloudWatchEventTarget(), + "aws_cloudwatch_log_destination": resourceAwsCloudWatchLogDestination(), + "aws_cloudwatch_log_destination_policy": resourceAwsCloudWatchLogDestinationPolicy(), + "aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(), + "aws_cloudwatch_log_metric_filter": resourceAwsCloudWatchLogMetricFilter(), + "aws_cloudwatch_log_resource_policy": resourceAwsCloudWatchLogResourcePolicy(), + "aws_cloudwatch_log_stream": resourceAwsCloudWatchLogStream(), + "aws_cloudwatch_log_subscription_filter": resourceAwsCloudwatchLogSubscriptionFilter(), + "aws_config_aggregate_authorization": resourceAwsConfigAggregateAuthorization(), + "aws_config_config_rule": resourceAwsConfigConfigRule(), + "aws_config_configuration_aggregator": resourceAwsConfigConfigurationAggregator(), + "aws_config_configuration_recorder": resourceAwsConfigConfigurationRecorder(), + "aws_config_configuration_recorder_status": resourceAwsConfigConfigurationRecorderStatus(), + "aws_config_delivery_channel": resourceAwsConfigDeliveryChannel(), + "aws_cognito_identity_pool": resourceAwsCognitoIdentityPool(), + "aws_cognito_identity_pool_roles_attachment": resourceAwsCognitoIdentityPoolRolesAttachment(), + "aws_cognito_identity_provider": resourceAwsCognitoIdentityProvider(), + "aws_cognito_user_group": resourceAwsCognitoUserGroup(), + "aws_cognito_user_pool": resourceAwsCognitoUserPool(), + "aws_cognito_user_pool_client": resourceAwsCognitoUserPoolClient(), + "aws_cognito_user_pool_domain": resourceAwsCognitoUserPoolDomain(), + "aws_cloudhsm_v2_cluster": resourceAwsCloudHsm2Cluster(), + "aws_cloudhsm_v2_hsm": resourceAwsCloudHsm2Hsm(), + "aws_cognito_resource_server": resourceAwsCognitoResourceServer(), + "aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(), + "aws_cloudwatch_dashboard": resourceAwsCloudWatchDashboard(), + "aws_codedeploy_app": resourceAwsCodeDeployApp(), + "aws_codedeploy_deployment_config": resourceAwsCodeDeployDeploymentConfig(), + "aws_codedeploy_deployment_group": resourceAwsCodeDeployDeploymentGroup(), + "aws_codecommit_repository": resourceAwsCodeCommitRepository(), + "aws_codecommit_trigger": resourceAwsCodeCommitTrigger(), + "aws_codebuild_project": resourceAwsCodeBuildProject(), + "aws_codebuild_webhook": resourceAwsCodeBuildWebhook(), + "aws_codepipeline": resourceAwsCodePipeline(), + "aws_codepipeline_webhook": resourceAwsCodePipelineWebhook(), + "aws_customer_gateway": resourceAwsCustomerGateway(), + "aws_dax_cluster": resourceAwsDaxCluster(), + "aws_dax_parameter_group": resourceAwsDaxParameterGroup(), + "aws_dax_subnet_group": resourceAwsDaxSubnetGroup(), + "aws_db_cluster_snapshot": resourceAwsDbClusterSnapshot(), + "aws_db_event_subscription": resourceAwsDbEventSubscription(), + "aws_db_instance": resourceAwsDbInstance(), + "aws_db_option_group": resourceAwsDbOptionGroup(), + "aws_db_parameter_group": resourceAwsDbParameterGroup(), + "aws_db_security_group": resourceAwsDbSecurityGroup(), + "aws_db_snapshot": resourceAwsDbSnapshot(), + "aws_db_subnet_group": resourceAwsDbSubnetGroup(), + "aws_devicefarm_project": resourceAwsDevicefarmProject(), + "aws_directory_service_directory": resourceAwsDirectoryServiceDirectory(), + "aws_directory_service_conditional_forwarder": 
resourceAwsDirectoryServiceConditionalForwarder(), + "aws_dlm_lifecycle_policy": resourceAwsDlmLifecyclePolicy(), + "aws_dms_certificate": resourceAwsDmsCertificate(), + "aws_dms_endpoint": resourceAwsDmsEndpoint(), + "aws_dms_replication_instance": resourceAwsDmsReplicationInstance(), + "aws_dms_replication_subnet_group": resourceAwsDmsReplicationSubnetGroup(), + "aws_dms_replication_task": resourceAwsDmsReplicationTask(), + "aws_dx_bgp_peer": resourceAwsDxBgpPeer(), + "aws_dx_connection": resourceAwsDxConnection(), + "aws_dx_connection_association": resourceAwsDxConnectionAssociation(), + "aws_dx_gateway": resourceAwsDxGateway(), + "aws_dx_gateway_association": resourceAwsDxGatewayAssociation(), + "aws_dx_hosted_private_virtual_interface": resourceAwsDxHostedPrivateVirtualInterface(), + "aws_dx_hosted_private_virtual_interface_accepter": resourceAwsDxHostedPrivateVirtualInterfaceAccepter(), + "aws_dx_hosted_public_virtual_interface": resourceAwsDxHostedPublicVirtualInterface(), + "aws_dx_hosted_public_virtual_interface_accepter": resourceAwsDxHostedPublicVirtualInterfaceAccepter(), + "aws_dx_lag": resourceAwsDxLag(), + "aws_dx_private_virtual_interface": resourceAwsDxPrivateVirtualInterface(), + "aws_dx_public_virtual_interface": resourceAwsDxPublicVirtualInterface(), + "aws_dynamodb_table": resourceAwsDynamoDbTable(), + "aws_dynamodb_table_item": resourceAwsDynamoDbTableItem(), + "aws_dynamodb_global_table": resourceAwsDynamoDbGlobalTable(), + "aws_ec2_capacity_reservation": resourceAwsEc2CapacityReservation(), + "aws_ec2_fleet": resourceAwsEc2Fleet(), + "aws_ebs_snapshot": resourceAwsEbsSnapshot(), + "aws_ebs_snapshot_copy": resourceAwsEbsSnapshotCopy(), + "aws_ebs_volume": resourceAwsEbsVolume(), + "aws_ecr_lifecycle_policy": resourceAwsEcrLifecyclePolicy(), + "aws_ecr_repository": resourceAwsEcrRepository(), + "aws_ecr_repository_policy": resourceAwsEcrRepositoryPolicy(), + "aws_ecs_cluster": resourceAwsEcsCluster(), + "aws_ecs_service": resourceAwsEcsService(), + "aws_ecs_task_definition": resourceAwsEcsTaskDefinition(), + "aws_efs_file_system": resourceAwsEfsFileSystem(), + "aws_efs_mount_target": resourceAwsEfsMountTarget(), + "aws_egress_only_internet_gateway": resourceAwsEgressOnlyInternetGateway(), + "aws_eip": resourceAwsEip(), + "aws_eip_association": resourceAwsEipAssociation(), + "aws_eks_cluster": resourceAwsEksCluster(), + "aws_elasticache_cluster": resourceAwsElasticacheCluster(), + "aws_elasticache_parameter_group": resourceAwsElasticacheParameterGroup(), + "aws_elasticache_replication_group": resourceAwsElasticacheReplicationGroup(), + "aws_elasticache_security_group": resourceAwsElasticacheSecurityGroup(), + "aws_elasticache_subnet_group": resourceAwsElasticacheSubnetGroup(), + "aws_elastic_beanstalk_application": resourceAwsElasticBeanstalkApplication(), + "aws_elastic_beanstalk_application_version": resourceAwsElasticBeanstalkApplicationVersion(), + "aws_elastic_beanstalk_configuration_template": resourceAwsElasticBeanstalkConfigurationTemplate(), + "aws_elastic_beanstalk_environment": resourceAwsElasticBeanstalkEnvironment(), + "aws_elasticsearch_domain": resourceAwsElasticSearchDomain(), + "aws_elasticsearch_domain_policy": resourceAwsElasticSearchDomainPolicy(), + "aws_elastictranscoder_pipeline": resourceAwsElasticTranscoderPipeline(), + "aws_elastictranscoder_preset": resourceAwsElasticTranscoderPreset(), + "aws_elb": resourceAwsElb(), + "aws_elb_attachment": resourceAwsElbAttachment(), + "aws_emr_cluster": resourceAwsEMRCluster(), + "aws_emr_instance_group": 
resourceAwsEMRInstanceGroup(), + "aws_emr_security_configuration": resourceAwsEMRSecurityConfiguration(), + "aws_flow_log": resourceAwsFlowLog(), + "aws_gamelift_alias": resourceAwsGameliftAlias(), + "aws_gamelift_build": resourceAwsGameliftBuild(), + "aws_gamelift_fleet": resourceAwsGameliftFleet(), + "aws_gamelift_game_session_queue": resourceAwsGameliftGameSessionQueue(), + "aws_glacier_vault": resourceAwsGlacierVault(), + "aws_glacier_vault_lock": resourceAwsGlacierVaultLock(), + "aws_glue_catalog_database": resourceAwsGlueCatalogDatabase(), + "aws_glue_catalog_table": resourceAwsGlueCatalogTable(), + "aws_glue_classifier": resourceAwsGlueClassifier(), + "aws_glue_connection": resourceAwsGlueConnection(), + "aws_glue_crawler": resourceAwsGlueCrawler(), + "aws_glue_job": resourceAwsGlueJob(), + "aws_glue_security_configuration": resourceAwsGlueSecurityConfiguration(), + "aws_glue_trigger": resourceAwsGlueTrigger(), + "aws_guardduty_detector": resourceAwsGuardDutyDetector(), + "aws_guardduty_ipset": resourceAwsGuardDutyIpset(), + "aws_guardduty_member": resourceAwsGuardDutyMember(), + "aws_guardduty_threatintelset": resourceAwsGuardDutyThreatintelset(), + "aws_iam_access_key": resourceAwsIamAccessKey(), + "aws_iam_account_alias": resourceAwsIamAccountAlias(), + "aws_iam_account_password_policy": resourceAwsIamAccountPasswordPolicy(), + "aws_iam_group_policy": resourceAwsIamGroupPolicy(), + "aws_iam_group": resourceAwsIamGroup(), + "aws_iam_group_membership": resourceAwsIamGroupMembership(), + "aws_iam_group_policy_attachment": resourceAwsIamGroupPolicyAttachment(), + "aws_iam_instance_profile": resourceAwsIamInstanceProfile(), + "aws_iam_openid_connect_provider": resourceAwsIamOpenIDConnectProvider(), + "aws_iam_policy": resourceAwsIamPolicy(), + "aws_iam_policy_attachment": resourceAwsIamPolicyAttachment(), + "aws_iam_role_policy_attachment": resourceAwsIamRolePolicyAttachment(), + "aws_iam_role_policy": resourceAwsIamRolePolicy(), + "aws_iam_role": resourceAwsIamRole(), + "aws_iam_saml_provider": resourceAwsIamSamlProvider(), + "aws_iam_server_certificate": resourceAwsIAMServerCertificate(), + "aws_iam_service_linked_role": resourceAwsIamServiceLinkedRole(), + "aws_iam_user_group_membership": resourceAwsIamUserGroupMembership(), + "aws_iam_user_policy_attachment": resourceAwsIamUserPolicyAttachment(), + "aws_iam_user_policy": resourceAwsIamUserPolicy(), + "aws_iam_user_ssh_key": resourceAwsIamUserSshKey(), + "aws_iam_user": resourceAwsIamUser(), + "aws_iam_user_login_profile": resourceAwsIamUserLoginProfile(), + "aws_inspector_assessment_target": resourceAWSInspectorAssessmentTarget(), + "aws_inspector_assessment_template": resourceAWSInspectorAssessmentTemplate(), + "aws_inspector_resource_group": resourceAWSInspectorResourceGroup(), + "aws_instance": resourceAwsInstance(), + "aws_internet_gateway": resourceAwsInternetGateway(), + "aws_iot_certificate": resourceAwsIotCertificate(), + "aws_iot_policy": resourceAwsIotPolicy(), + "aws_iot_policy_attachment": resourceAwsIotPolicyAttachment(), + "aws_iot_thing": resourceAwsIotThing(), + "aws_iot_thing_principal_attachment": resourceAwsIotThingPrincipalAttachment(), + "aws_iot_thing_type": resourceAwsIotThingType(), + "aws_iot_topic_rule": resourceAwsIotTopicRule(), + "aws_key_pair": resourceAwsKeyPair(), + "aws_kinesis_firehose_delivery_stream": resourceAwsKinesisFirehoseDeliveryStream(), + "aws_kinesis_stream": resourceAwsKinesisStream(), + "aws_kinesis_analytics_application": resourceAwsKinesisAnalyticsApplication(), + "aws_kms_alias": 
resourceAwsKmsAlias(), + "aws_kms_grant": resourceAwsKmsGrant(), + "aws_kms_key": resourceAwsKmsKey(), + "aws_lambda_function": resourceAwsLambdaFunction(), + "aws_lambda_event_source_mapping": resourceAwsLambdaEventSourceMapping(), + "aws_lambda_alias": resourceAwsLambdaAlias(), + "aws_lambda_permission": resourceAwsLambdaPermission(), + "aws_launch_configuration": resourceAwsLaunchConfiguration(), + "aws_launch_template": resourceAwsLaunchTemplate(), + "aws_lightsail_domain": resourceAwsLightsailDomain(), + "aws_lightsail_instance": resourceAwsLightsailInstance(), + "aws_lightsail_key_pair": resourceAwsLightsailKeyPair(), + "aws_lightsail_static_ip": resourceAwsLightsailStaticIp(), + "aws_lightsail_static_ip_attachment": resourceAwsLightsailStaticIpAttachment(), + "aws_lb_cookie_stickiness_policy": resourceAwsLBCookieStickinessPolicy(), + "aws_load_balancer_policy": resourceAwsLoadBalancerPolicy(), + "aws_load_balancer_backend_server_policy": resourceAwsLoadBalancerBackendServerPolicies(), + "aws_load_balancer_listener_policy": resourceAwsLoadBalancerListenerPolicies(), + "aws_lb_ssl_negotiation_policy": resourceAwsLBSSLNegotiationPolicy(), + "aws_macie_member_account_association": resourceAwsMacieMemberAccountAssociation(), + "aws_macie_s3_bucket_association": resourceAwsMacieS3BucketAssociation(), + "aws_main_route_table_association": resourceAwsMainRouteTableAssociation(), + "aws_mq_broker": resourceAwsMqBroker(), + "aws_mq_configuration": resourceAwsMqConfiguration(), + "aws_media_store_container": resourceAwsMediaStoreContainer(), + "aws_media_store_container_policy": resourceAwsMediaStoreContainerPolicy(), + "aws_nat_gateway": resourceAwsNatGateway(), + "aws_network_acl": resourceAwsNetworkAcl(), + "aws_default_network_acl": resourceAwsDefaultNetworkAcl(), + "aws_neptune_cluster": resourceAwsNeptuneCluster(), + "aws_neptune_cluster_instance": resourceAwsNeptuneClusterInstance(), + "aws_neptune_cluster_parameter_group": resourceAwsNeptuneClusterParameterGroup(), + "aws_neptune_cluster_snapshot": resourceAwsNeptuneClusterSnapshot(), + "aws_neptune_event_subscription": resourceAwsNeptuneEventSubscription(), + "aws_neptune_parameter_group": resourceAwsNeptuneParameterGroup(), + "aws_neptune_subnet_group": resourceAwsNeptuneSubnetGroup(), + "aws_network_acl_rule": resourceAwsNetworkAclRule(), + "aws_network_interface": resourceAwsNetworkInterface(), + "aws_network_interface_attachment": resourceAwsNetworkInterfaceAttachment(), + "aws_opsworks_application": resourceAwsOpsworksApplication(), + "aws_opsworks_stack": resourceAwsOpsworksStack(), + "aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(), + "aws_opsworks_haproxy_layer": resourceAwsOpsworksHaproxyLayer(), + "aws_opsworks_static_web_layer": resourceAwsOpsworksStaticWebLayer(), + "aws_opsworks_php_app_layer": resourceAwsOpsworksPhpAppLayer(), + "aws_opsworks_rails_app_layer": resourceAwsOpsworksRailsAppLayer(), + "aws_opsworks_nodejs_app_layer": resourceAwsOpsworksNodejsAppLayer(), + "aws_opsworks_memcached_layer": resourceAwsOpsworksMemcachedLayer(), + "aws_opsworks_mysql_layer": resourceAwsOpsworksMysqlLayer(), + "aws_opsworks_ganglia_layer": resourceAwsOpsworksGangliaLayer(), + "aws_opsworks_custom_layer": resourceAwsOpsworksCustomLayer(), + "aws_opsworks_instance": resourceAwsOpsworksInstance(), + "aws_opsworks_user_profile": resourceAwsOpsworksUserProfile(), + "aws_opsworks_permission": resourceAwsOpsworksPermission(), + "aws_opsworks_rds_db_instance": resourceAwsOpsworksRdsDbInstance(), + 
"aws_organizations_organization": resourceAwsOrganizationsOrganization(), + "aws_organizations_account": resourceAwsOrganizationsAccount(), + "aws_organizations_policy": resourceAwsOrganizationsPolicy(), + "aws_organizations_policy_attachment": resourceAwsOrganizationsPolicyAttachment(), + "aws_placement_group": resourceAwsPlacementGroup(), + "aws_proxy_protocol_policy": resourceAwsProxyProtocolPolicy(), + "aws_rds_cluster": resourceAwsRDSCluster(), + "aws_rds_cluster_instance": resourceAwsRDSClusterInstance(), + "aws_rds_cluster_parameter_group": resourceAwsRDSClusterParameterGroup(), + "aws_redshift_cluster": resourceAwsRedshiftCluster(), + "aws_redshift_security_group": resourceAwsRedshiftSecurityGroup(), + "aws_redshift_parameter_group": resourceAwsRedshiftParameterGroup(), + "aws_redshift_subnet_group": resourceAwsRedshiftSubnetGroup(), + "aws_redshift_snapshot_copy_grant": resourceAwsRedshiftSnapshotCopyGrant(), + "aws_redshift_event_subscription": resourceAwsRedshiftEventSubscription(), + "aws_route53_delegation_set": resourceAwsRoute53DelegationSet(), + "aws_route53_query_log": resourceAwsRoute53QueryLog(), + "aws_route53_record": resourceAwsRoute53Record(), + "aws_route53_zone_association": resourceAwsRoute53ZoneAssociation(), + "aws_route53_zone": resourceAwsRoute53Zone(), + "aws_route53_health_check": resourceAwsRoute53HealthCheck(), + "aws_route": resourceAwsRoute(), + "aws_route_table": resourceAwsRouteTable(), + "aws_default_route_table": resourceAwsDefaultRouteTable(), + "aws_route_table_association": resourceAwsRouteTableAssociation(), + "aws_secretsmanager_secret": resourceAwsSecretsManagerSecret(), + "aws_secretsmanager_secret_version": resourceAwsSecretsManagerSecretVersion(), + "aws_ses_active_receipt_rule_set": resourceAwsSesActiveReceiptRuleSet(), + "aws_ses_domain_identity": resourceAwsSesDomainIdentity(), + "aws_ses_domain_identity_verification": resourceAwsSesDomainIdentityVerification(), + "aws_ses_domain_dkim": resourceAwsSesDomainDkim(), + "aws_ses_domain_mail_from": resourceAwsSesDomainMailFrom(), + "aws_ses_receipt_filter": resourceAwsSesReceiptFilter(), + "aws_ses_receipt_rule": resourceAwsSesReceiptRule(), + "aws_ses_receipt_rule_set": resourceAwsSesReceiptRuleSet(), + "aws_ses_configuration_set": resourceAwsSesConfigurationSet(), + "aws_ses_event_destination": resourceAwsSesEventDestination(), + "aws_ses_identity_notification_topic": resourceAwsSesNotificationTopic(), + "aws_ses_template": resourceAwsSesTemplate(), + "aws_s3_bucket": resourceAwsS3Bucket(), + "aws_s3_bucket_policy": resourceAwsS3BucketPolicy(), + "aws_s3_bucket_object": resourceAwsS3BucketObject(), + "aws_s3_bucket_notification": resourceAwsS3BucketNotification(), + "aws_s3_bucket_metric": resourceAwsS3BucketMetric(), + "aws_s3_bucket_inventory": resourceAwsS3BucketInventory(), + "aws_security_group": resourceAwsSecurityGroup(), + "aws_network_interface_sg_attachment": resourceAwsNetworkInterfaceSGAttachment(), + "aws_default_security_group": resourceAwsDefaultSecurityGroup(), + "aws_security_group_rule": resourceAwsSecurityGroupRule(), + "aws_servicecatalog_portfolio": resourceAwsServiceCatalogPortfolio(), + "aws_service_discovery_private_dns_namespace": resourceAwsServiceDiscoveryPrivateDnsNamespace(), + "aws_service_discovery_public_dns_namespace": resourceAwsServiceDiscoveryPublicDnsNamespace(), + "aws_service_discovery_service": resourceAwsServiceDiscoveryService(), + "aws_simpledb_domain": resourceAwsSimpleDBDomain(), + "aws_ssm_activation": resourceAwsSsmActivation(), + 
"aws_ssm_association": resourceAwsSsmAssociation(), + "aws_ssm_document": resourceAwsSsmDocument(), + "aws_ssm_maintenance_window": resourceAwsSsmMaintenanceWindow(), + "aws_ssm_maintenance_window_target": resourceAwsSsmMaintenanceWindowTarget(), + "aws_ssm_maintenance_window_task": resourceAwsSsmMaintenanceWindowTask(), + "aws_ssm_patch_baseline": resourceAwsSsmPatchBaseline(), + "aws_ssm_patch_group": resourceAwsSsmPatchGroup(), + "aws_ssm_parameter": resourceAwsSsmParameter(), + "aws_ssm_resource_data_sync": resourceAwsSsmResourceDataSync(), + "aws_storagegateway_cache": resourceAwsStorageGatewayCache(), + "aws_storagegateway_cached_iscsi_volume": resourceAwsStorageGatewayCachedIscsiVolume(), + "aws_storagegateway_gateway": resourceAwsStorageGatewayGateway(), + "aws_storagegateway_nfs_file_share": resourceAwsStorageGatewayNfsFileShare(), + "aws_storagegateway_smb_file_share": resourceAwsStorageGatewaySmbFileShare(), + "aws_storagegateway_upload_buffer": resourceAwsStorageGatewayUploadBuffer(), + "aws_storagegateway_working_storage": resourceAwsStorageGatewayWorkingStorage(), + "aws_spot_datafeed_subscription": resourceAwsSpotDataFeedSubscription(), + "aws_spot_instance_request": resourceAwsSpotInstanceRequest(), + "aws_spot_fleet_request": resourceAwsSpotFleetRequest(), + "aws_sqs_queue": resourceAwsSqsQueue(), + "aws_sqs_queue_policy": resourceAwsSqsQueuePolicy(), + "aws_snapshot_create_volume_permission": resourceAwsSnapshotCreateVolumePermission(), + "aws_sns_platform_application": resourceAwsSnsPlatformApplication(), + "aws_sns_sms_preferences": resourceAwsSnsSmsPreferences(), + "aws_sns_topic": resourceAwsSnsTopic(), + "aws_sns_topic_policy": resourceAwsSnsTopicPolicy(), + "aws_sns_topic_subscription": resourceAwsSnsTopicSubscription(), + "aws_sfn_activity": resourceAwsSfnActivity(), + "aws_sfn_state_machine": resourceAwsSfnStateMachine(), + "aws_default_subnet": resourceAwsDefaultSubnet(), + "aws_subnet": resourceAwsSubnet(), + "aws_swf_domain": resourceAwsSwfDomain(), + "aws_volume_attachment": resourceAwsVolumeAttachment(), + "aws_vpc_dhcp_options_association": resourceAwsVpcDhcpOptionsAssociation(), + "aws_default_vpc_dhcp_options": resourceAwsDefaultVpcDhcpOptions(), + "aws_vpc_dhcp_options": resourceAwsVpcDhcpOptions(), + "aws_vpc_peering_connection": resourceAwsVpcPeeringConnection(), + "aws_vpc_peering_connection_accepter": resourceAwsVpcPeeringConnectionAccepter(), + "aws_vpc_peering_connection_options": resourceAwsVpcPeeringConnectionOptions(), + "aws_default_vpc": resourceAwsDefaultVpc(), + "aws_vpc": resourceAwsVpc(), + "aws_vpc_endpoint": resourceAwsVpcEndpoint(), + "aws_vpc_endpoint_connection_notification": resourceAwsVpcEndpointConnectionNotification(), + "aws_vpc_endpoint_route_table_association": resourceAwsVpcEndpointRouteTableAssociation(), + "aws_vpc_endpoint_subnet_association": resourceAwsVpcEndpointSubnetAssociation(), + "aws_vpc_endpoint_service": resourceAwsVpcEndpointService(), + "aws_vpc_endpoint_service_allowed_principal": resourceAwsVpcEndpointServiceAllowedPrincipal(), + "aws_vpc_ipv4_cidr_block_association": resourceAwsVpcIpv4CidrBlockAssociation(), + "aws_vpn_connection": resourceAwsVpnConnection(), + "aws_vpn_connection_route": resourceAwsVpnConnectionRoute(), + "aws_vpn_gateway": resourceAwsVpnGateway(), + "aws_vpn_gateway_attachment": resourceAwsVpnGatewayAttachment(), + "aws_vpn_gateway_route_propagation": resourceAwsVpnGatewayRoutePropagation(), + "aws_waf_byte_match_set": resourceAwsWafByteMatchSet(), + "aws_waf_ipset": resourceAwsWafIPSet(), 
+ "aws_waf_rate_based_rule": resourceAwsWafRateBasedRule(), + "aws_waf_regex_match_set": resourceAwsWafRegexMatchSet(), + "aws_waf_regex_pattern_set": resourceAwsWafRegexPatternSet(), + "aws_waf_rule": resourceAwsWafRule(), + "aws_waf_rule_group": resourceAwsWafRuleGroup(), + "aws_waf_size_constraint_set": resourceAwsWafSizeConstraintSet(), + "aws_waf_web_acl": resourceAwsWafWebAcl(), + "aws_waf_xss_match_set": resourceAwsWafXssMatchSet(), + "aws_waf_sql_injection_match_set": resourceAwsWafSqlInjectionMatchSet(), + "aws_waf_geo_match_set": resourceAwsWafGeoMatchSet(), + "aws_wafregional_byte_match_set": resourceAwsWafRegionalByteMatchSet(), + "aws_wafregional_geo_match_set": resourceAwsWafRegionalGeoMatchSet(), + "aws_wafregional_ipset": resourceAwsWafRegionalIPSet(), + "aws_wafregional_rate_based_rule": resourceAwsWafRegionalRateBasedRule(), + "aws_wafregional_regex_match_set": resourceAwsWafRegionalRegexMatchSet(), + "aws_wafregional_regex_pattern_set": resourceAwsWafRegionalRegexPatternSet(), + "aws_wafregional_rule": resourceAwsWafRegionalRule(), + "aws_wafregional_rule_group": resourceAwsWafRegionalRuleGroup(), + "aws_wafregional_size_constraint_set": resourceAwsWafRegionalSizeConstraintSet(), + "aws_wafregional_sql_injection_match_set": resourceAwsWafRegionalSqlInjectionMatchSet(), + "aws_wafregional_xss_match_set": resourceAwsWafRegionalXssMatchSet(), + "aws_wafregional_web_acl": resourceAwsWafRegionalWebAcl(), + "aws_wafregional_web_acl_association": resourceAwsWafRegionalWebAclAssociation(), + "aws_batch_compute_environment": resourceAwsBatchComputeEnvironment(), + "aws_batch_job_definition": resourceAwsBatchJobDefinition(), + "aws_batch_job_queue": resourceAwsBatchJobQueue(), + "aws_pinpoint_app": resourceAwsPinpointApp(), + "aws_pinpoint_adm_channel": resourceAwsPinpointADMChannel(), + "aws_pinpoint_apns_channel": resourceAwsPinpointAPNSChannel(), + "aws_pinpoint_apns_sandbox_channel": resourceAwsPinpointAPNSSandboxChannel(), + "aws_pinpoint_apns_voip_channel": resourceAwsPinpointAPNSVoipChannel(), + "aws_pinpoint_apns_voip_sandbox_channel": resourceAwsPinpointAPNSVoipSandboxChannel(), + "aws_pinpoint_baidu_channel": resourceAwsPinpointBaiduChannel(), + "aws_pinpoint_email_channel": resourceAwsPinpointEmailChannel(), + "aws_pinpoint_event_stream": resourceAwsPinpointEventStream(), + "aws_pinpoint_gcm_channel": resourceAwsPinpointGCMChannel(), + "aws_pinpoint_sms_channel": resourceAwsPinpointSMSChannel(), // ALBs are actually LBs because they can be type `network` or `application` // To avoid regressions, we will add a new resource for each and they both point // back to the old ALB version. 
IF the Terraform supported aliases for resources - // this would be a whole lot simplier + // this would be a whole lot simpler "aws_alb": resourceAwsLb(), "aws_lb": resourceAwsLb(), "aws_alb_listener": resourceAwsLbListener(), @@ -629,6 +766,8 @@ func init() { "kinesis_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n" + "It's typically used to connect to kinesalite.", + "kinesis_analytics_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", + "kms_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", "iam_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", @@ -637,8 +776,14 @@ func init() { "ec2_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", + "autoscaling_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", + + "efs_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", + "elb_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", + "es_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", + "rds_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", "s3_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", @@ -647,6 +792,8 @@ func init() { "sqs_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", + "ssm_endpoint": "Use this to override the default endpoint URL constructed from the `region`.\n", + "insecure": "Explicitly allow the provider to perform \"insecure\" SSL requests. 
If omitted," + "default value is `false`", @@ -738,11 +885,15 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { config.DeviceFarmEndpoint = endpoints["devicefarm"].(string) config.DynamoDBEndpoint = endpoints["dynamodb"].(string) config.Ec2Endpoint = endpoints["ec2"].(string) + config.AutoscalingEndpoint = endpoints["autoscaling"].(string) config.EcrEndpoint = endpoints["ecr"].(string) config.EcsEndpoint = endpoints["ecs"].(string) + config.EfsEndpoint = endpoints["efs"].(string) config.ElbEndpoint = endpoints["elb"].(string) + config.EsEndpoint = endpoints["es"].(string) config.IamEndpoint = endpoints["iam"].(string) config.KinesisEndpoint = endpoints["kinesis"].(string) + config.KinesisAnalyticsEndpoint = endpoints["kinesis_analytics"].(string) config.KmsEndpoint = endpoints["kms"].(string) config.LambdaEndpoint = endpoints["lambda"].(string) config.R53Endpoint = endpoints["r53"].(string) @@ -751,6 +902,7 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) { config.SnsEndpoint = endpoints["sns"].(string) config.SqsEndpoint = endpoints["sqs"].(string) config.StsEndpoint = endpoints["sts"].(string) + config.SsmEndpoint = endpoints["ssm"].(string) } if v, ok := d.GetOk("allowed_account_ids"); ok { @@ -870,6 +1022,13 @@ func endpointsSchema() *schema.Schema { Description: descriptions["ec2_endpoint"], }, + "autoscaling": { + Type: schema.TypeString, + Optional: true, + Default: "", + Description: descriptions["autoscaling_endpoint"], + }, + "ecr": { Type: schema.TypeString, Optional: true, @@ -884,18 +1043,37 @@ func endpointsSchema() *schema.Schema { Description: descriptions["ecs_endpoint"], }, + "efs": { + Type: schema.TypeString, + Optional: true, + Default: "", + Description: descriptions["efs_endpoint"], + }, + "elb": { Type: schema.TypeString, Optional: true, Default: "", Description: descriptions["elb_endpoint"], }, + "es": { + Type: schema.TypeString, + Optional: true, + Default: "", + Description: descriptions["es_endpoint"], + }, "kinesis": { Type: schema.TypeString, Optional: true, Default: "", Description: descriptions["kinesis_endpoint"], }, + "kinesis_analytics": { + Type: schema.TypeString, + Optional: true, + Default: "", + Description: descriptions["kinesis_analytics_endpoint"], + }, "kms": { Type: schema.TypeString, Optional: true, @@ -944,6 +1122,12 @@ func endpointsSchema() *schema.Schema { Default: "", Description: descriptions["sts_endpoint"], }, + "ssm": { + Type: schema.TypeString, + Optional: true, + Default: "", + Description: descriptions["ssm_endpoint"], + }, }, }, Set: endpointsToHash, @@ -962,6 +1146,8 @@ func endpointsToHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%s-", m["dynamodb"].(string))) buf.WriteString(fmt.Sprintf("%s-", m["iam"].(string))) buf.WriteString(fmt.Sprintf("%s-", m["ec2"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["autoscaling"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["efs"].(string))) buf.WriteString(fmt.Sprintf("%s-", m["elb"].(string))) buf.WriteString(fmt.Sprintf("%s-", m["kinesis"].(string))) buf.WriteString(fmt.Sprintf("%s-", m["kms"].(string))) diff --git a/aws/provider_test.go b/aws/provider_test.go index 61c206eab08..927fbdb1bd4 100644 --- a/aws/provider_test.go +++ b/aws/provider_test.go @@ -1,10 +1,18 @@ package aws import ( + "fmt" "log" "os" + "regexp" + "strings" "testing" + "github.com/aws/aws-sdk-go/service/organizations" + + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/aws/endpoints" + 
"github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" @@ -73,6 +81,78 @@ func testAccPreCheck(t *testing.T) { } } +// testAccAwsProviderAccountID returns the account ID of an AWS provider +func testAccAwsProviderAccountID(provider *schema.Provider) string { + if provider == nil { + log.Print("[DEBUG] Unable to read account ID from test provider: empty provider") + return "" + } + if provider.Meta() == nil { + log.Print("[DEBUG] Unable to read account ID from test provider: unconfigured provider") + return "" + } + client, ok := provider.Meta().(*AWSClient) + if !ok { + log.Print("[DEBUG] Unable to read account ID from test provider: non-AWS or unconfigured AWS provider") + return "" + } + return client.accountid +} + +// testAccCheckResourceAttrRegionalARN ensures the Terraform state exactly matches a formatted ARN with region +func testAccCheckResourceAttrRegionalARN(resourceName, attributeName, arnService, arnResource string) resource.TestCheckFunc { + return func(s *terraform.State) error { + attributeValue := arn.ARN{ + AccountID: testAccGetAccountID(), + Partition: testAccGetPartition(), + Region: testAccGetRegion(), + Resource: arnResource, + Service: arnService, + }.String() + return resource.TestCheckResourceAttr(resourceName, attributeName, attributeValue)(s) + } +} + +// testAccCheckResourceAttrRegionalARN ensures the Terraform state regexp matches a formatted ARN with region +func testAccMatchResourceAttrRegionalARN(resourceName, attributeName, arnService string, arnResourceRegexp *regexp.Regexp) resource.TestCheckFunc { + return func(s *terraform.State) error { + arnRegexp := arn.ARN{ + AccountID: testAccGetAccountID(), + Partition: testAccGetPartition(), + Region: testAccGetRegion(), + Resource: arnResourceRegexp.String(), + Service: arnService, + }.String() + + attributeMatch, err := regexp.Compile(arnRegexp) + + if err != nil { + return fmt.Errorf("Unable to compile ARN regexp (%s): %s", arnRegexp, err) + } + + return resource.TestMatchResourceAttr(resourceName, attributeName, attributeMatch)(s) + } +} + +// testAccCheckResourceAttrGlobalARN ensures the Terraform state exactly matches a formatted ARN without region +func testAccCheckResourceAttrGlobalARN(resourceName, attributeName, arnService, arnResource string) resource.TestCheckFunc { + return func(s *terraform.State) error { + attributeValue := arn.ARN{ + AccountID: testAccGetAccountID(), + Partition: testAccGetPartition(), + Resource: arnResource, + Service: arnService, + }.String() + return resource.TestCheckResourceAttr(resourceName, attributeName, attributeValue)(s) + } +} + +// testAccGetAccountID returns the account ID of testAccProvider +// Must be used returned within a resource.TestCheckFunc +func testAccGetAccountID() string { + return testAccAwsProviderAccountID(testAccProvider) +} + func testAccGetRegion() string { v := os.Getenv("AWS_DEFAULT_REGION") if v == "" { @@ -81,6 +161,21 @@ func testAccGetRegion() string { return v } +func testAccGetPartition() string { + if partition, ok := endpoints.PartitionForRegion(endpoints.DefaultPartitions(), testAccGetRegion()); ok { + return partition.ID() + } + return "aws" +} + +func testAccGetServiceEndpoint(service string) string { + endpoint, err := endpoints.DefaultResolver().EndpointFor(service, testAccGetRegion()) + if err != nil { + return "" + } + return strings.TrimPrefix(endpoint.URL, "https://") +} + func testAccEC2ClassicPreCheck(t *testing.T) { 
 	client := testAccProvider.Meta().(*AWSClient)
 	platforms := client.supportedplatforms
@@ -91,6 +186,35 @@ func testAccEC2ClassicPreCheck(t *testing.T) {
 	}
 }
+func testAccHasServicePreCheck(service string, t *testing.T) {
+	if partition, ok := endpoints.PartitionForRegion(endpoints.DefaultPartitions(), testAccGetRegion()); ok {
+		if _, ok := partition.Services()[service]; !ok {
+			t.Skip(fmt.Sprintf("skipping tests; partition does not support %s service", service))
+		}
+	}
+}
+
+func testAccMultipleRegionsPreCheck(t *testing.T) {
+	if partition, ok := endpoints.PartitionForRegion(endpoints.DefaultPartitions(), testAccGetRegion()); ok {
+		if len(partition.Regions()) < 2 {
+			t.Skip("skipping tests; partition only includes a single region")
+		}
+	}
+}
+
+func testAccOrganizationsAccountPreCheck(t *testing.T) {
+	conn := testAccProvider.Meta().(*AWSClient).organizationsconn
+	input := &organizations.DescribeOrganizationInput{}
+	_, err := conn.DescribeOrganization(input)
+	if isAWSErr(err, organizations.ErrCodeAWSOrganizationsNotInUseException, "") {
+		return
+	}
+	if err != nil {
+		t.Fatalf("error describing AWS Organization: %s", err)
+	}
+	t.Skip("skipping tests; this AWS account must not be an existing member of an AWS Organization")
+}
+
 func testAccAwsRegionProviderFunc(region string, providers *[]*schema.Provider) func() *schema.Provider {
 	return func() *schema.Provider {
 		if region == "" {
@@ -146,3 +270,22 @@ func testAccCheckWithProviders(f func(*terraform.State, *schema.Provider) error,
 		return nil
 	}
 }
+
+// Check sweeper API call error for reasons to skip sweeping
+// These include missing API endpoints and unsupported API calls
+func testSweepSkipSweepError(err error) bool {
+	// Ignore missing API endpoints
+	if isAWSErr(err, "RequestError", "send request failed") {
+		return true
+	}
+	// Ignore unsupported API calls
+	if isAWSErr(err, "UnsupportedOperation", "") {
+		return true
+	}
+	// Ignore more unsupported API calls
+	// InvalidParameterValue: Use of cache security groups is not permitted in this API version for your account.
+ if isAWSErr(err, "InvalidParameterValue", "not permitted in this API version for your account") { + return true + } + return false +} diff --git a/aws/resource_aws_acm_certificate.go b/aws/resource_aws_acm_certificate.go index 4b2c06b3624..db49e7d0973 100644 --- a/aws/resource_aws_acm_certificate.go +++ b/aws/resource_aws_acm_certificate.go @@ -156,6 +156,9 @@ func resourceAwsAcmCertificateRead(d *schema.ResourceData, meta interface{}) err } tagResp, err := acmconn.ListTagsForCertificate(params) + if err != nil { + return resource.NonRetryableError(fmt.Errorf("error listing tags for certificate (%s): %s", d.Id(), err)) + } if err := d.Set("tags", tagsToMapACM(tagResp.Tags)); err != nil { return resource.NonRetryableError(err) } @@ -229,16 +232,27 @@ func convertValidationOptions(certificate *acm.CertificateDetail) ([]map[string] func resourceAwsAcmCertificateDelete(d *schema.ResourceData, meta interface{}) error { acmconn := meta.(*AWSClient).acmconn + log.Printf("[INFO] Deleting ACM Certificate: %s", d.Id()) + params := &acm.DeleteCertificateInput{ CertificateArn: aws.String(d.Id()), } - _, err := acmconn.DeleteCertificate(params) + err := resource.Retry(10*time.Minute, func() *resource.RetryError { + _, err := acmconn.DeleteCertificate(params) + if err != nil { + if isAWSErr(err, acm.ErrCodeResourceInUseException, "") { + log.Printf("[WARN] Conflict deleting certificate in use: %s, retrying", err.Error()) + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) if err != nil && !isAWSErr(err, acm.ErrCodeResourceNotFoundException, "") { return fmt.Errorf("Error deleting certificate: %s", err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_acm_certificate_test.go b/aws/resource_aws_acm_certificate_test.go index 24f7b956441..89214144020 100644 --- a/aws/resource_aws_acm_certificate_test.go +++ b/aws/resource_aws_acm_certificate_test.go @@ -38,12 +38,12 @@ func TestAccAWSAcmCertificate_emailValidation(t *testing.T) { domain := fmt.Sprintf("tf-acc-%d.%s", rInt1, rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig(domain, acm.ValidationMethodEmail), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate.cert", "arn", certificateArnRegex), @@ -54,7 +54,7 @@ func TestAccAWSAcmCertificate_emailValidation(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "validation_method", acm.ValidationMethodEmail), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, @@ -71,12 +71,12 @@ func TestAccAWSAcmCertificate_dnsValidation(t *testing.T) { domain := fmt.Sprintf("tf-acc-%d.%s", rInt1, rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig(domain, acm.ValidationMethodDns), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate.cert", "arn", certificateArnRegex), @@ -91,7 +91,7 @@ func TestAccAWSAcmCertificate_dnsValidation(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", 
"validation_method", acm.ValidationMethodDns), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, @@ -103,12 +103,12 @@ func TestAccAWSAcmCertificate_dnsValidation(t *testing.T) { func TestAccAWSAcmCertificate_root(t *testing.T) { rootDomain := testAccAwsAcmCertificateDomainFromEnv(t) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig(rootDomain, acm.ValidationMethodDns), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate.cert", "arn", certificateArnRegex), @@ -123,7 +123,7 @@ func TestAccAWSAcmCertificate_root(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "validation_method", acm.ValidationMethodDns), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, @@ -136,12 +136,12 @@ func TestAccAWSAcmCertificate_rootAndWildcardSan(t *testing.T) { rootDomain := testAccAwsAcmCertificateDomainFromEnv(t) wildcardDomain := fmt.Sprintf("*.%s", rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig_subjectAlternativeNames(rootDomain, strconv.Quote(wildcardDomain), acm.ValidationMethodDns), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate.cert", "arn", certificateArnRegex), @@ -161,7 +161,7 @@ func TestAccAWSAcmCertificate_rootAndWildcardSan(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "validation_method", acm.ValidationMethodDns), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, @@ -178,12 +178,12 @@ func TestAccAWSAcmCertificate_san_single(t *testing.T) { domain := fmt.Sprintf("tf-acc-%d.%s", rInt1, rootDomain) sanDomain := fmt.Sprintf("tf-acc-%d-san.%s", rInt1, rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig_subjectAlternativeNames(domain, strconv.Quote(sanDomain), acm.ValidationMethodDns), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate.cert", "arn", certificateArnRegex), @@ -203,7 +203,7 @@ func TestAccAWSAcmCertificate_san_single(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "validation_method", acm.ValidationMethodDns), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, @@ -221,12 +221,12 @@ func TestAccAWSAcmCertificate_san_multiple(t *testing.T) { sanDomain1 := fmt.Sprintf("tf-acc-%d-san1.%s", rInt1, rootDomain) sanDomain2 := fmt.Sprintf("tf-acc-%d-san2.%s", rInt1, rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: 
[]resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig_subjectAlternativeNames(domain, fmt.Sprintf("%q, %q", sanDomain1, sanDomain2), acm.ValidationMethodDns), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate.cert", "arn", certificateArnRegex), @@ -251,7 +251,7 @@ func TestAccAWSAcmCertificate_san_multiple(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "validation_method", acm.ValidationMethodDns), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, @@ -264,12 +264,12 @@ func TestAccAWSAcmCertificate_wildcard(t *testing.T) { rootDomain := testAccAwsAcmCertificateDomainFromEnv(t) wildcardDomain := fmt.Sprintf("*.%s", rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig(wildcardDomain, acm.ValidationMethodDns), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate.cert", "arn", certificateArnRegex), @@ -284,7 +284,7 @@ func TestAccAWSAcmCertificate_wildcard(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "validation_method", acm.ValidationMethodDns), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, @@ -297,12 +297,12 @@ func TestAccAWSAcmCertificate_wildcardAndRootSan(t *testing.T) { rootDomain := testAccAwsAcmCertificateDomainFromEnv(t) wildcardDomain := fmt.Sprintf("*.%s", rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig_subjectAlternativeNames(wildcardDomain, strconv.Quote(rootDomain), acm.ValidationMethodDns), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate.cert", "arn", certificateArnRegex), @@ -322,7 +322,7 @@ func TestAccAWSAcmCertificate_wildcardAndRootSan(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "validation_method", acm.ValidationMethodDns), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, @@ -338,18 +338,18 @@ func TestAccAWSAcmCertificate_tags(t *testing.T) { domain := fmt.Sprintf("tf-acc-%d.%s", rInt1, rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateConfig(domain, acm.ValidationMethodDns), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_acm_certificate.cert", "tags.%", "0"), ), }, - resource.TestStep{ + { Config: testAccAcmCertificateConfig_twoTags(domain, acm.ValidationMethodDns, "Hello", "World", "Foo", "Bar"), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_acm_certificate.cert", "tags.%", "2"), @@ -357,7 +357,7 @@ func TestAccAWSAcmCertificate_tags(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "tags.Foo", "Bar"), ), }, - 
resource.TestStep{ + { Config: testAccAcmCertificateConfig_twoTags(domain, acm.ValidationMethodDns, "Hello", "World", "Foo", "Baz"), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_acm_certificate.cert", "tags.%", "2"), @@ -365,14 +365,14 @@ func TestAccAWSAcmCertificate_tags(t *testing.T) { resource.TestCheckResourceAttr("aws_acm_certificate.cert", "tags.Foo", "Baz"), ), }, - resource.TestStep{ + { Config: testAccAcmCertificateConfig_oneTag(domain, acm.ValidationMethodDns, "Environment", "Test"), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_acm_certificate.cert", "tags.%", "1"), resource.TestCheckResourceAttr("aws_acm_certificate.cert", "tags.Environment", "Test"), ), }, - resource.TestStep{ + { ResourceName: "aws_acm_certificate.cert", ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_acm_certificate_validation.go b/aws/resource_aws_acm_certificate_validation.go index 024b2689700..9cca55ef8c7 100644 --- a/aws/resource_aws_acm_certificate_validation.go +++ b/aws/resource_aws_acm_certificate_validation.go @@ -90,7 +90,7 @@ func resourceAwsAcmCertificateCheckValidationRecords(validationRecordFqdns []int CertificateArn: cert.CertificateArn, } err := resource.Retry(1*time.Minute, func() *resource.RetryError { - log.Printf("[DEBUG] Certificate domain validation options empty for %q, retrying", cert.CertificateArn) + log.Printf("[DEBUG] Certificate domain validation options empty for %q, retrying", *cert.CertificateArn) output, err := conn.DescribeCertificate(input) if err != nil { return resource.NonRetryableError(err) @@ -160,6 +160,5 @@ func resourceAwsAcmCertificateValidationRead(d *schema.ResourceData, meta interf func resourceAwsAcmCertificateValidationDelete(d *schema.ResourceData, meta interface{}) error { // No need to do anything, certificate will be deleted when acm_certificate is deleted - d.SetId("") return nil } diff --git a/aws/resource_aws_acm_certificate_validation_test.go b/aws/resource_aws_acm_certificate_validation_test.go index 3bdc9c56f5a..07dcadbebc4 100644 --- a/aws/resource_aws_acm_certificate_validation_test.go +++ b/aws/resource_aws_acm_certificate_validation_test.go @@ -18,13 +18,13 @@ func TestAccAWSAcmCertificateValidation_basic(t *testing.T) { domain := fmt.Sprintf("tf-acc-%d.%s", rInt1, rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ // Test that validation succeeds - resource.TestStep{ + { Config: testAccAcmCertificateValidation_basic(rootDomain, domain), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate_validation.cert", "certificate_arn", certificateArnRegex), @@ -41,12 +41,12 @@ func TestAccAWSAcmCertificateValidation_timeout(t *testing.T) { domain := fmt.Sprintf("tf-acc-%d.%s", rInt1, rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateValidation_timeout(domain), ExpectError: regexp.MustCompile("Expected certificate to be issued but was in state PENDING_VALIDATION"), }, @@ -61,23 +61,23 @@ func TestAccAWSAcmCertificateValidation_validationRecordFqdns(t *testing.T) { domain := fmt.Sprintf("tf-acc-%d.%s", rInt1, 
rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ // Test that validation fails if given validation_fqdns don't match - resource.TestStep{ + { Config: testAccAcmCertificateValidation_validationRecordFqdnsWrongFqdn(domain), ExpectError: regexp.MustCompile("missing .+ DNS validation record: .+"), }, // Test that validation fails if not DNS validation - resource.TestStep{ + { Config: testAccAcmCertificateValidation_validationRecordFqdnsEmailValidation(domain), ExpectError: regexp.MustCompile("validation_record_fqdns is only valid for DNS validation"), }, // Test that validation succeeds with validation - resource.TestStep{ + { Config: testAccAcmCertificateValidation_validationRecordFqdnsOneRoute53Record(rootDomain, domain), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate_validation.cert", "certificate_arn", certificateArnRegex), @@ -90,12 +90,12 @@ func TestAccAWSAcmCertificateValidation_validationRecordFqdns(t *testing.T) { func TestAccAWSAcmCertificateValidation_validationRecordFqdnsRoot(t *testing.T) { rootDomain := testAccAwsAcmCertificateDomainFromEnv(t) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateValidation_validationRecordFqdnsOneRoute53Record(rootDomain, rootDomain), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate_validation.cert", "certificate_arn", certificateArnRegex), @@ -109,12 +109,12 @@ func TestAccAWSAcmCertificateValidation_validationRecordFqdnsRootAndWildcard(t * rootDomain := testAccAwsAcmCertificateDomainFromEnv(t) wildcardDomain := fmt.Sprintf("*.%s", rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateValidation_validationRecordFqdnsTwoRoute53Records(rootDomain, rootDomain, strconv.Quote(wildcardDomain)), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate_validation.cert", "certificate_arn", certificateArnRegex), @@ -132,12 +132,12 @@ func TestAccAWSAcmCertificateValidation_validationRecordFqdnsSan(t *testing.T) { domain := fmt.Sprintf("tf-acc-%d.%s", rInt1, rootDomain) sanDomain := fmt.Sprintf("tf-acc-%d-san.%s", rInt1, rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateValidation_validationRecordFqdnsTwoRoute53Records(rootDomain, domain, strconv.Quote(sanDomain)), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate_validation.cert", "certificate_arn", certificateArnRegex), @@ -151,12 +151,12 @@ func TestAccAWSAcmCertificateValidation_validationRecordFqdnsWildcard(t *testing rootDomain := testAccAwsAcmCertificateDomainFromEnv(t) wildcardDomain := fmt.Sprintf("*.%s", rootDomain) - resource.Test(t, 
resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateValidation_validationRecordFqdnsOneRoute53Record(rootDomain, wildcardDomain), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate_validation.cert", "certificate_arn", certificateArnRegex), @@ -170,12 +170,12 @@ func TestAccAWSAcmCertificateValidation_validationRecordFqdnsWildcardAndRoot(t * rootDomain := testAccAwsAcmCertificateDomainFromEnv(t) wildcardDomain := fmt.Sprintf("*.%s", rootDomain) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAcmCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAcmCertificateValidation_validationRecordFqdnsTwoRoute53Records(rootDomain, wildcardDomain, strconv.Quote(rootDomain)), Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr("aws_acm_certificate_validation.cert", "certificate_arn", certificateArnRegex), diff --git a/aws/resource_aws_acmpca_certificate_authority.go b/aws/resource_aws_acmpca_certificate_authority.go new file mode 100644 index 00000000000..75bbc1460fc --- /dev/null +++ b/aws/resource_aws_acmpca_certificate_authority.go @@ -0,0 +1,715 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/acmpca" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsAcmpcaCertificateAuthority() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsAcmpcaCertificateAuthorityCreate, + Read: resourceAwsAcmpcaCertificateAuthorityRead, + Update: resourceAwsAcmpcaCertificateAuthorityUpdate, + Delete: resourceAwsAcmpcaCertificateAuthorityDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(1 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "certificate": { + Type: schema.TypeString, + Computed: true, + }, + // https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_CertificateAuthorityConfiguration.html + "certificate_authority_configuration": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key_algorithm": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + acmpca.KeyAlgorithmEcPrime256v1, + acmpca.KeyAlgorithmEcSecp384r1, + acmpca.KeyAlgorithmRsa2048, + acmpca.KeyAlgorithmRsa4096, + }, false), + }, + "signing_algorithm": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + acmpca.SigningAlgorithmSha256withecdsa, + acmpca.SigningAlgorithmSha256withrsa, + acmpca.SigningAlgorithmSha384withecdsa, + acmpca.SigningAlgorithmSha384withrsa, + acmpca.SigningAlgorithmSha512withecdsa, + acmpca.SigningAlgorithmSha512withrsa, + }, false), + }, + // https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_ASN1Subject.html + "subject": { + Type: schema.TypeList, + 
Required: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "common_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 64), + }, + "country": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 2), + }, + "distinguished_name_qualifier": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 64), + }, + "generation_qualifier": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 3), + }, + "given_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 16), + }, + "initials": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 5), + }, + "locality": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 128), + }, + "organization": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 64), + }, + "organizational_unit": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 64), + }, + "pseudonym": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 128), + }, + "state": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 128), + }, + "surname": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 40), + }, + "title": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 64), + }, + }, + }, + }, + }, + }, + }, + "certificate_chain": { + Type: schema.TypeString, + Computed: true, + }, + "certificate_signing_request": { + Type: schema.TypeString, + Computed: true, + }, + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "not_after": { + Type: schema.TypeString, + Computed: true, + }, + "not_before": { + Type: schema.TypeString, + Computed: true, + }, + // https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_RevocationConfiguration.html + "revocation_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + // https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_CrlConfiguration.html + "crl_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_cname": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 253), + }, + "enabled": { + Type: schema.TypeBool, + Optional: true, + }, + // ValidationException: 1 validation error detected: Value null or empty at 'expirationInDays' failed to satisfy constraint: Member must not be null or empty. + // InvalidParameter: 1 validation error(s) found. 
minimum field value of 1, CreateCertificateAuthorityInput.RevocationConfiguration.CrlConfiguration.ExpirationInDays. + "expiration_in_days": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(1, 5000), + }, + "s3_bucket_name": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 255), + }, + }, + }, + }, + }, + }, + }, + "serial": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tagsSchema(), + "type": { + Type: schema.TypeString, + Optional: true, + Default: acmpca.CertificateAuthorityTypeSubordinate, + ValidateFunc: validation.StringInSlice([]string{ + acmpca.CertificateAuthorityTypeSubordinate, + }, false), + }, + }, + } +} + +func resourceAwsAcmpcaCertificateAuthorityCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).acmpcaconn + + input := &acmpca.CreateCertificateAuthorityInput{ + CertificateAuthorityConfiguration: expandAcmpcaCertificateAuthorityConfiguration(d.Get("certificate_authority_configuration").([]interface{})), + CertificateAuthorityType: aws.String(d.Get("type").(string)), + RevocationConfiguration: expandAcmpcaRevocationConfiguration(d.Get("revocation_configuration").([]interface{})), + } + + log.Printf("[DEBUG] Creating ACMPCA Certificate Authority: %s", input) + var output *acmpca.CreateCertificateAuthorityOutput + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + output, err = conn.CreateCertificateAuthority(input) + if err != nil { + // ValidationException: The ACM Private CA service account 'acm-pca-prod-pdx' requires getBucketAcl permissions for your S3 bucket 'tf-acc-test-5224996536060125340'. Check your S3 bucket permissions and try again. 
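+ // Hypothetical clarifying comment: retry on this permissions error, which can occur while a just-created S3 bucket policy granting ACM PCA access is still propagating.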
+ if isAWSErr(err, "ValidationException", "Check your S3 bucket permissions and try again") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("error creating ACMPCA Certificate Authority: %s", err) + } + + d.SetId(aws.StringValue(output.CertificateAuthorityArn)) + + if v, ok := d.GetOk("tags"); ok { + input := &acmpca.TagCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(d.Id()), + Tags: tagsFromMapACMPCA(v.(map[string]interface{})), + } + + log.Printf("[DEBUG] Tagging ACMPCA Certificate Authority: %s", input) + _, err := conn.TagCertificateAuthority(input) + if err != nil { + return fmt.Errorf("error tagging ACMPCA Certificate Authority %q: %s", d.Id(), input) + } + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{ + "", + acmpca.CertificateAuthorityStatusCreating, + }, + Target: []string{ + acmpca.CertificateAuthorityStatusActive, + acmpca.CertificateAuthorityStatusPendingCertificate, + }, + Refresh: acmpcaCertificateAuthorityRefreshFunc(conn, d.Id()), + Timeout: d.Timeout(schema.TimeoutCreate), + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for ACMPCA Certificate Authority %q to be active or pending certificate: %s", d.Id(), err) + } + + return resourceAwsAcmpcaCertificateAuthorityRead(d, meta) +} + +func resourceAwsAcmpcaCertificateAuthorityRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).acmpcaconn + + describeCertificateAuthorityInput := &acmpca.DescribeCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading ACMPCA Certificate Authority: %s", describeCertificateAuthorityInput) + + describeCertificateAuthorityOutput, err := conn.DescribeCertificateAuthority(describeCertificateAuthorityInput) + if err != nil { + if isAWSErr(err, acmpca.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] ACMPCA Certificate Authority %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading ACMPCA Certificate Authority: %s", err) + } + + if describeCertificateAuthorityOutput.CertificateAuthority == nil { + log.Printf("[WARN] ACMPCA Certificate Authority %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + certificateAuthority := describeCertificateAuthorityOutput.CertificateAuthority + + d.Set("arn", certificateAuthority.Arn) + + if err := d.Set("certificate_authority_configuration", flattenAcmpcaCertificateAuthorityConfiguration(certificateAuthority.CertificateAuthorityConfiguration)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + d.Set("enabled", (aws.StringValue(certificateAuthority.Status) != acmpca.CertificateAuthorityStatusDisabled)) + d.Set("not_after", certificateAuthority.NotAfter) + d.Set("not_before", certificateAuthority.NotBefore) + + if err := d.Set("revocation_configuration", flattenAcmpcaRevocationConfiguration(certificateAuthority.RevocationConfiguration)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + d.Set("serial", certificateAuthority.Serial) + d.Set("status", certificateAuthority.Status) + d.Set("type", certificateAuthority.Type) + + getCertificateAuthorityCertificateInput := &acmpca.GetCertificateAuthorityCertificateInput{ + CertificateAuthorityArn: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading ACMPCA Certificate Authority Certificate: %s", 
getCertificateAuthorityCertificateInput) + + getCertificateAuthorityCertificateOutput, err := conn.GetCertificateAuthorityCertificate(getCertificateAuthorityCertificateInput) + if err != nil { + if isAWSErr(err, acmpca.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] ACMPCA Certificate Authority %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + // Returned when in PENDING_CERTIFICATE status + // InvalidStateException: The certificate authority XXXXX is not in the correct state to have a certificate signing request. + if !isAWSErr(err, acmpca.ErrCodeInvalidStateException, "") { + return fmt.Errorf("error reading ACMPCA Certificate Authority Certificate: %s", err) + } + } + + d.Set("certificate", "") + d.Set("certificate_chain", "") + if getCertificateAuthorityCertificateOutput != nil { + d.Set("certificate", getCertificateAuthorityCertificateOutput.Certificate) + d.Set("certificate_chain", getCertificateAuthorityCertificateOutput.CertificateChain) + } + + getCertificateAuthorityCsrInput := &acmpca.GetCertificateAuthorityCsrInput{ + CertificateAuthorityArn: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading ACMPCA Certificate Authority Certificate Signing Request: %s", getCertificateAuthorityCsrInput) + + getCertificateAuthorityCsrOutput, err := conn.GetCertificateAuthorityCsr(getCertificateAuthorityCsrInput) + if err != nil { + if isAWSErr(err, acmpca.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] ACMPCA Certificate Authority %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading ACMPCA Certificate Authority Certificate Signing Request: %s", err) + } + + d.Set("certificate_signing_request", "") + if getCertificateAuthorityCsrOutput != nil { + d.Set("certificate_signing_request", getCertificateAuthorityCsrOutput.Csr) + } + + tags, err := listAcmpcaTags(conn, d.Id()) + if err != nil { + return fmt.Errorf("error reading ACMPCA Certificate Authority %q tags: %s", d.Id(), err) + } + + if err := d.Set("tags", tagsToMapACMPCA(tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + return nil +} + +func resourceAwsAcmpcaCertificateAuthorityUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).acmpcaconn + updateCertificateAuthority := false + + input := &acmpca.UpdateCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(d.Id()), + } + + if d.HasChange("enabled") { + input.Status = aws.String(acmpca.CertificateAuthorityStatusActive) + if !d.Get("enabled").(bool) { + input.Status = aws.String(acmpca.CertificateAuthorityStatusDisabled) + } + updateCertificateAuthority = true + } + + if d.HasChange("revocation_configuration") { + input.RevocationConfiguration = expandAcmpcaRevocationConfiguration(d.Get("revocation_configuration").([]interface{})) + updateCertificateAuthority = true + } + + if updateCertificateAuthority { + log.Printf("[DEBUG] Updating ACMPCA Certificate Authority: %s", input) + _, err := conn.UpdateCertificateAuthority(input) + if err != nil { + return fmt.Errorf("error updating ACMPCA Certificate Authority: %s", err) + } + } + + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsACMPCA(tagsFromMapACMPCA(o), tagsFromMapACMPCA(n)) + + if len(remove) > 0 { + log.Printf("[DEBUG] Removing ACMPCA Certificate Authority %q tags: %#v", d.Id(), remove) + _, err := 
conn.UntagCertificateAuthority(&acmpca.UntagCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(d.Id()), + Tags: remove, + }) + if err != nil { + return fmt.Errorf("error updating ACMPCA Certificate Authority %q tags: %s", d.Id(), err) + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating ACMPCA Certificate Authority %q tags: %#v", d.Id(), create) + _, err := conn.TagCertificateAuthority(&acmpca.TagCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(d.Id()), + Tags: create, + }) + if err != nil { + return fmt.Errorf("error updating ACMPCA Certificate Authority %q tags: %s", d.Id(), err) + } + } + } + + return resourceAwsAcmpcaCertificateAuthorityRead(d, meta) +} + +func resourceAwsAcmpcaCertificateAuthorityDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).acmpcaconn + + input := &acmpca.DeleteCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting ACMPCA Certificate Authority: %s", input) + _, err := conn.DeleteCertificateAuthority(input) + if err != nil { + if isAWSErr(err, acmpca.ErrCodeResourceNotFoundException, "") { + return nil + } + return fmt.Errorf("error deleting ACMPCA Certificate Authority: %s", err) + } + + return nil +} + +func acmpcaCertificateAuthorityRefreshFunc(conn *acmpca.ACMPCA, certificateAuthorityArn string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + input := &acmpca.DescribeCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(certificateAuthorityArn), + } + + log.Printf("[DEBUG] Reading ACMPCA Certificate Authority: %s", input) + output, err := conn.DescribeCertificateAuthority(input) + if err != nil { + if isAWSErr(err, acmpca.ErrCodeResourceNotFoundException, "") { + return nil, "", nil + } + return nil, "", err + } + + if output == nil || output.CertificateAuthority == nil { + return nil, "", nil + } + + return output.CertificateAuthority, aws.StringValue(output.CertificateAuthority.Status), nil + } +} + +func expandAcmpcaASN1Subject(l []interface{}) *acmpca.ASN1Subject { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + subject := &acmpca.ASN1Subject{} + if v, ok := m["common_name"]; ok && v.(string) != "" { + subject.CommonName = aws.String(v.(string)) + } + if v, ok := m["country"]; ok && v.(string) != "" { + subject.Country = aws.String(v.(string)) + } + if v, ok := m["distinguished_name_qualifier"]; ok && v.(string) != "" { + subject.DistinguishedNameQualifier = aws.String(v.(string)) + } + if v, ok := m["generation_qualifier"]; ok && v.(string) != "" { + subject.GenerationQualifier = aws.String(v.(string)) + } + if v, ok := m["given_name"]; ok && v.(string) != "" { + subject.GivenName = aws.String(v.(string)) + } + if v, ok := m["initials"]; ok && v.(string) != "" { + subject.Initials = aws.String(v.(string)) + } + if v, ok := m["locality"]; ok && v.(string) != "" { + subject.Locality = aws.String(v.(string)) + } + if v, ok := m["organization"]; ok && v.(string) != "" { + subject.Organization = aws.String(v.(string)) + } + if v, ok := m["organizational_unit"]; ok && v.(string) != "" { + subject.OrganizationalUnit = aws.String(v.(string)) + } + if v, ok := m["pseudonym"]; ok && v.(string) != "" { + subject.Pseudonym = aws.String(v.(string)) + } + if v, ok := m["state"]; ok && v.(string) != "" { + subject.State = aws.String(v.(string)) + } + if v, ok := m["surname"]; ok && v.(string) != "" { + subject.Surname = aws.String(v.(string)) + } + if v, ok := m["title"]; 
ok && v.(string) != "" { + subject.Title = aws.String(v.(string)) + } + + return subject +} + +func expandAcmpcaCertificateAuthorityConfiguration(l []interface{}) *acmpca.CertificateAuthorityConfiguration { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + config := &acmpca.CertificateAuthorityConfiguration{ + KeyAlgorithm: aws.String(m["key_algorithm"].(string)), + SigningAlgorithm: aws.String(m["signing_algorithm"].(string)), + Subject: expandAcmpcaASN1Subject(m["subject"].([]interface{})), + } + + return config +} + +func expandAcmpcaCrlConfiguration(l []interface{}) *acmpca.CrlConfiguration { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + config := &acmpca.CrlConfiguration{ + Enabled: aws.Bool(m["enabled"].(bool)), + } + + if v, ok := m["custom_cname"]; ok && v.(string) != "" { + config.CustomCname = aws.String(v.(string)) + } + if v, ok := m["expiration_in_days"]; ok && v.(int) > 0 { + config.ExpirationInDays = aws.Int64(int64(v.(int))) + } + if v, ok := m["s3_bucket_name"]; ok && v.(string) != "" { + config.S3BucketName = aws.String(v.(string)) + } + + return config +} + +func expandAcmpcaRevocationConfiguration(l []interface{}) *acmpca.RevocationConfiguration { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + config := &acmpca.RevocationConfiguration{ + CrlConfiguration: expandAcmpcaCrlConfiguration(m["crl_configuration"].([]interface{})), + } + + return config +} + +func flattenAcmpcaASN1Subject(subject *acmpca.ASN1Subject) []interface{} { + if subject == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "common_name": aws.StringValue(subject.CommonName), + "country": aws.StringValue(subject.Country), + "distinguished_name_qualifier": aws.StringValue(subject.DistinguishedNameQualifier), + "generation_qualifier": aws.StringValue(subject.GenerationQualifier), + "given_name": aws.StringValue(subject.GivenName), + "initials": aws.StringValue(subject.Initials), + "locality": aws.StringValue(subject.Locality), + "organization": aws.StringValue(subject.Organization), + "organizational_unit": aws.StringValue(subject.OrganizationalUnit), + "pseudonym": aws.StringValue(subject.Pseudonym), + "state": aws.StringValue(subject.State), + "surname": aws.StringValue(subject.Surname), + "title": aws.StringValue(subject.Title), + } + + return []interface{}{m} +} + +func flattenAcmpcaCertificateAuthorityConfiguration(config *acmpca.CertificateAuthorityConfiguration) []interface{} { + if config == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "key_algorithm": aws.StringValue(config.KeyAlgorithm), + "signing_algorithm": aws.StringValue(config.SigningAlgorithm), + "subject": flattenAcmpcaASN1Subject(config.Subject), + } + + return []interface{}{m} +} + +func flattenAcmpcaCrlConfiguration(config *acmpca.CrlConfiguration) []interface{} { + if config == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "custom_cname": aws.StringValue(config.CustomCname), + "enabled": aws.BoolValue(config.Enabled), + "expiration_in_days": int(aws.Int64Value(config.ExpirationInDays)), + "s3_bucket_name": aws.StringValue(config.S3BucketName), + } + + return []interface{}{m} +} + +func flattenAcmpcaRevocationConfiguration(config *acmpca.RevocationConfiguration) []interface{} { + if config == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "crl_configuration": flattenAcmpcaCrlConfiguration(config.CrlConfiguration), + } + + return []interface{}{m} +} + 
+func listAcmpcaTags(conn *acmpca.ACMPCA, certificateAuthorityArn string) ([]*acmpca.Tag, error) { + tags := []*acmpca.Tag{} + input := &acmpca.ListTagsInput{ + CertificateAuthorityArn: aws.String(certificateAuthorityArn), + } + + for { + output, err := conn.ListTags(input) + if err != nil { + return tags, err + } + for _, tag := range output.Tags { + tags = append(tags, tag) + } + if output.NextToken == nil { + break + } + input.NextToken = output.NextToken + } + + return tags, nil +} diff --git a/aws/resource_aws_acmpca_certificate_authority_test.go b/aws/resource_aws_acmpca_certificate_authority_test.go new file mode 100644 index 00000000000..771f255b4f9 --- /dev/null +++ b/aws/resource_aws_acmpca_certificate_authority_test.go @@ -0,0 +1,668 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/acmpca" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_acmpca_certificate_authority", &resource.Sweeper{ + Name: "aws_acmpca_certificate_authority", + F: testSweepAcmpcaCertificateAuthorities, + }) +} + +func testSweepAcmpcaCertificateAuthorities(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).acmpcaconn + + certificateAuthorities, err := listAcmpcaCertificateAuthorities(conn) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping ACMPCA Certificate Authorities sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving ACMPCA Certificate Authorities: %s", err) + } + if len(certificateAuthorities) == 0 { + log.Print("[DEBUG] No ACMPCA Certificate Authorities to sweep") + return nil + } + + for _, certificateAuthority := range certificateAuthorities { + arn := aws.StringValue(certificateAuthority.Arn) + log.Printf("[INFO] Deleting ACMPCA Certificate Authority: %s", arn) + input := &acmpca.DeleteCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(arn), + } + + _, err := conn.DeleteCertificateAuthority(input) + if err != nil { + if isAWSErr(err, acmpca.ErrCodeResourceNotFoundException, "") { + continue + } + log.Printf("[ERROR] Failed to delete ACMPCA Certificate Authority (%s): %s", arn, err) + } + } + + return nil +} + +func TestAccAwsAcmpcaCertificateAuthority_Basic(t *testing.T) { + var certificateAuthority acmpca.CertificateAuthority + resourceName := "aws_acmpca_certificate_authority.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsAcmpcaCertificateAuthorityDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Required, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:acm-pca:[^:]+:[^:]+:certificate-authority/.+$`)), + resource.TestCheckResourceAttr(resourceName, "certificate_authority_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "certificate_authority_configuration.0.key_algorithm", "RSA_4096"), + resource.TestCheckResourceAttr(resourceName, 
"certificate_authority_configuration.0.signing_algorithm", "SHA512WITHRSA"), + resource.TestCheckResourceAttr(resourceName, "certificate_authority_configuration.0.subject.#", "1"), + resource.TestCheckResourceAttr(resourceName, "certificate_authority_configuration.0.subject.0.common_name", "terraformtesting.com"), + resource.TestCheckResourceAttr(resourceName, "certificate", ""), + resource.TestCheckResourceAttr(resourceName, "certificate_chain", ""), + resource.TestCheckResourceAttrSet(resourceName, "certificate_signing_request"), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "not_after", ""), + resource.TestCheckResourceAttr(resourceName, "not_before", ""), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "serial", ""), + resource.TestCheckResourceAttr(resourceName, "status", "PENDING_CERTIFICATE"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "type", "SUBORDINATE"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAwsAcmpcaCertificateAuthority_Enabled(t *testing.T) { + var certificateAuthority acmpca.CertificateAuthority + resourceName := "aws_acmpca_certificate_authority.test" + + // error updating ACMPCA Certificate Authority: InvalidStateException: The certificate authority must be in the Active or DISABLED state to be updated + t.Skip("We need to fully sign the certificate authority CSR from another CA in order to test this functionality, which requires another resource") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsAcmpcaCertificateAuthorityDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Enabled(true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "status", "PENDING_CERTIFICATE"), + ), + }, + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Enabled(false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "status", "DISABLED"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAwsAcmpcaCertificateAuthority_RevocationConfiguration_CrlConfiguration_CustomCname(t *testing.T) { + var certificateAuthority acmpca.CertificateAuthority + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_acmpca_certificate_authority.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsAcmpcaCertificateAuthorityDestroy, + Steps: []resource.TestStep{ + // Test creating revocation configuration on resource creation + { + Config: 
testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_CustomCname(rName, "crl.terraformtesting.com"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.custom_cname", "crl.terraformtesting.com"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + ), + }, + // Test importing revocation configuration + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + // Test updating revocation configuration + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_CustomCname(rName, "crl2.terraformtesting.com"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.custom_cname", "crl2.terraformtesting.com"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + ), + }, + // Test removing custom cname on resource update + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_Enabled(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.custom_cname", ""), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + ), + }, + // Test adding custom cname on resource update + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_CustomCname(rName, "crl.terraformtesting.com"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, 
"revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.custom_cname", "crl.terraformtesting.com"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + ), + }, + // Test removing revocation configuration on resource update + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Required, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "false"), + ), + }, + }, + }) +} + +func TestAccAwsAcmpcaCertificateAuthority_RevocationConfiguration_CrlConfiguration_Enabled(t *testing.T) { + var certificateAuthority acmpca.CertificateAuthority + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_acmpca_certificate_authority.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsAcmpcaCertificateAuthorityDestroy, + Steps: []resource.TestStep{ + // Test creating revocation configuration on resource creation + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_Enabled(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.custom_cname", ""), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + ), + }, + // Test importing revocation configuration + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + // Test disabling revocation configuration + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_Enabled(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "false"), + ), + }, + // Test enabling revocation configuration + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_Enabled(rName, true), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.custom_cname", ""), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + ), + }, + // Test removing revocation configuration on resource update + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Required, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "false"), + ), + }, + }, + }) +} + +func TestAccAwsAcmpcaCertificateAuthority_RevocationConfiguration_CrlConfiguration_ExpirationInDays(t *testing.T) { + var certificateAuthority acmpca.CertificateAuthority + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_acmpca_certificate_authority.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsAcmpcaCertificateAuthorityDestroy, + Steps: []resource.TestStep{ + // Test creating revocation configuration on resource creation + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_ExpirationInDays(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.custom_cname", ""), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + ), + }, + // Test importing revocation configuration + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + // Test updating revocation configuration + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_ExpirationInDays(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, 
"revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "2"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + ), + }, + // Test removing revocation configuration on resource update + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Required, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "false"), + ), + }, + }, + }) +} + +func TestAccAwsAcmpcaCertificateAuthority_Tags(t *testing.T) { + var certificateAuthority acmpca.CertificateAuthority + resourceName := "aws_acmpca_certificate_authority.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsAcmpcaCertificateAuthorityDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Tags_Single, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + ), + }, + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Tags_SingleUpdated, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value-updated"), + ), + }, + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Tags_Multiple, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + resource.TestCheckResourceAttr(resourceName, "tags.tag2", "tag2value"), + ), + }, + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_Tags_Single, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsAcmpcaCertificateAuthorityDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).acmpcaconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_acmpca_certificate_authority" { + continue + } + + input := &acmpca.DescribeCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(rs.Primary.ID), + } + + output, err := conn.DescribeCertificateAuthority(input) + + if err != nil { + if isAWSErr(err, acmpca.ErrCodeResourceNotFoundException, "") { + return nil + } + return err + } + + if output != nil { + return fmt.Errorf("ACMPCA Certificate Authority %q still exists", rs.Primary.ID) + } + } + + return nil + 
+} + +func testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName string, certificateAuthority *acmpca.CertificateAuthority) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).acmpcaconn + input := &acmpca.DescribeCertificateAuthorityInput{ + CertificateAuthorityArn: aws.String(rs.Primary.ID), + } + + output, err := conn.DescribeCertificateAuthority(input) + + if err != nil { + return err + } + + if output == nil || output.CertificateAuthority == nil { + return fmt.Errorf("ACMPCA Certificate Authority %q does not exist", rs.Primary.ID) + } + + *certificateAuthority = *output.CertificateAuthority + + return nil + } +} + +func listAcmpcaCertificateAuthorities(conn *acmpca.ACMPCA) ([]*acmpca.CertificateAuthority, error) { + certificateAuthorities := []*acmpca.CertificateAuthority{} + input := &acmpca.ListCertificateAuthoritiesInput{} + + for { + output, err := conn.ListCertificateAuthorities(input) + if err != nil { + return certificateAuthorities, err + } + for _, certificateAuthority := range output.CertificateAuthorities { + certificateAuthorities = append(certificateAuthorities, certificateAuthority) + } + if output.NextToken == nil { + break + } + input.NextToken = output.NextToken + } + + return certificateAuthorities, nil +} + +func testAccAwsAcmpcaCertificateAuthorityConfig_Enabled(enabled bool) string { + return fmt.Sprintf(` +resource "aws_acmpca_certificate_authority" "test" { + enabled = %t + + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } +} +`, enabled) +} + +const testAccAwsAcmpcaCertificateAuthorityConfig_Required = ` +resource "aws_acmpca_certificate_authority" "test" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } +} +` + +func testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_CustomCname(rName, customCname string) string { + return fmt.Sprintf(` +%s + +resource "aws_acmpca_certificate_authority" "test" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } + + revocation_configuration { + crl_configuration { + custom_cname = "%s" + enabled = true + expiration_in_days = 1 + s3_bucket_name = "${aws_s3_bucket.test.id}" + } + } + + depends_on = ["aws_s3_bucket_policy.test"] +} +`, testAccAwsAcmpcaCertificateAuthorityConfig_S3Bucket(rName), customCname) +} + +func testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_Enabled(rName string, enabled bool) string { + return fmt.Sprintf(` +%s + +resource "aws_acmpca_certificate_authority" "test" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } + + revocation_configuration { + crl_configuration { + enabled = %t + expiration_in_days = 1 + s3_bucket_name = "${aws_s3_bucket.test.id}" + } + } +} +`, testAccAwsAcmpcaCertificateAuthorityConfig_S3Bucket(rName), enabled) +} + +func testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_ExpirationInDays(rName string, expirationInDays int) string { + 
return fmt.Sprintf(` +%s + +resource "aws_acmpca_certificate_authority" "test" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } + + revocation_configuration { + crl_configuration { + enabled = true + expiration_in_days = %d + s3_bucket_name = "${aws_s3_bucket.test.id}" + } + } +} +`, testAccAwsAcmpcaCertificateAuthorityConfig_S3Bucket(rName), expirationInDays) +} + +func testAccAwsAcmpcaCertificateAuthorityConfig_S3Bucket(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "%s" + force_destroy = true +} + +data "aws_iam_policy_document" "acmpca_bucket_access" { + statement { + actions = [ + "s3:GetBucketAcl", + "s3:GetBucketLocation", + "s3:PutObject", + "s3:PutObjectAcl", + ] + + resources = [ + "${aws_s3_bucket.test.arn}", + "${aws_s3_bucket.test.arn}/*", + ] + + principals { + identifiers = ["acm-pca.amazonaws.com"] + type = "Service" + } + } +} + +resource "aws_s3_bucket_policy" "test" { + bucket = "${aws_s3_bucket.test.id}" + policy = "${data.aws_iam_policy_document.acmpca_bucket_access.json}" +} +`, rName) +} + +const testAccAwsAcmpcaCertificateAuthorityConfig_Tags_Single = ` +resource "aws_acmpca_certificate_authority" "test" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } + + tags = { + tag1 = "tag1value" + } +} +` + +const testAccAwsAcmpcaCertificateAuthorityConfig_Tags_SingleUpdated = ` +resource "aws_acmpca_certificate_authority" "test" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } + + tags = { + tag1 = "tag1value-updated" + } +} +` + +const testAccAwsAcmpcaCertificateAuthorityConfig_Tags_Multiple = ` +resource "aws_acmpca_certificate_authority" "test" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } + + tags = { + tag1 = "tag1value" + tag2 = "tag2value" + } +} +` diff --git a/aws/resource_aws_alb_target_group_test.go b/aws/resource_aws_alb_target_group_test.go index d80981a6ed1..60fc45fc968 100644 --- a/aws/resource_aws_alb_target_group_test.go +++ b/aws/resource_aws_alb_target_group_test.go @@ -8,7 +8,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -49,7 +48,7 @@ func TestAccAWSALBTargetGroup_basic(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -65,6 +64,7 @@ func TestAccAWSALBTargetGroup_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_alb_target_group.test", "protocol", "HTTPS"), resource.TestCheckResourceAttrSet("aws_alb_target_group.test", "vpc_id"), resource.TestCheckResourceAttr("aws_alb_target_group.test", "deregistration_delay", "200"), + 
resource.TestCheckResourceAttr("aws_alb_target_group.test", "slow_start", "0"), resource.TestCheckResourceAttr("aws_alb_target_group.test", "target_type", "instance"), resource.TestCheckResourceAttr("aws_alb_target_group.test", "stickiness.#", "1"), resource.TestCheckResourceAttr("aws_alb_target_group.test", "stickiness.0.enabled", "true"), @@ -90,7 +90,7 @@ func TestAccAWSALBTargetGroup_basic(t *testing.T) { func TestAccAWSALBTargetGroup_namePrefix(t *testing.T) { var conf elbv2.TargetGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -110,7 +110,7 @@ func TestAccAWSALBTargetGroup_namePrefix(t *testing.T) { func TestAccAWSALBTargetGroup_generatedName(t *testing.T) { var conf elbv2.TargetGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -131,7 +131,7 @@ func TestAccAWSALBTargetGroup_changeNameForceNew(t *testing.T) { targetGroupNameBefore := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) targetGroupNameAfter := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(4, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -159,7 +159,7 @@ func TestAccAWSALBTargetGroup_changeProtocolForceNew(t *testing.T) { var before, after elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -187,7 +187,7 @@ func TestAccAWSALBTargetGroup_changePortForceNew(t *testing.T) { var before, after elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -215,7 +215,7 @@ func TestAccAWSALBTargetGroup_changeVpcForceNew(t *testing.T) { var before, after elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -241,7 +241,7 @@ func TestAccAWSALBTargetGroup_tags(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -272,7 +272,7 @@ func TestAccAWSALBTargetGroup_updateHealthCheck(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, 
resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -334,7 +334,7 @@ func TestAccAWSALBTargetGroup_updateSticknessEnabled(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -415,6 +415,34 @@ func TestAccAWSALBTargetGroup_updateSticknessEnabled(t *testing.T) { }) } +func TestAccAWSALBTargetGroup_setAndUpdateSlowStart(t *testing.T) { + var before, after elbv2.TargetGroup + targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_alb_target_group.test", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSALBTargetGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSALBTargetGroupConfig_updateSlowStart(targetGroupName, 30), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSALBTargetGroupExists("aws_alb_target_group.test", &before), + resource.TestCheckResourceAttr("aws_alb_target_group.test", "slow_start", "30"), + ), + }, + { + Config: testAccAWSALBTargetGroupConfig_updateSlowStart(targetGroupName, 60), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSALBTargetGroupExists("aws_alb_target_group.test", &after), + resource.TestCheckResourceAttr("aws_alb_target_group.test", "slow_start", "60"), + ), + }, + }, + }) +} + func testAccCheckAWSALBTargetGroupExists(n string, res *elbv2.TargetGroup) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -466,10 +494,10 @@ func testAccCheckAWSALBTargetGroupDestroy(s *terraform.State) error { } // Verify the error - if isTargetGroupNotFound(err) { + if isAWSErr(err, elbv2.ErrCodeTargetGroupNotFoundException, "") { return nil } else { - return errwrap.Wrapf("Unexpected error checking ALB destroyed: {{err}}", err) + return fmt.Errorf("Unexpected error checking ALB destroyed: %s", err) } } @@ -757,6 +785,46 @@ resource "aws_vpc" "test" { }`, targetGroupName, stickinessBlock) } +func testAccAWSALBTargetGroupConfig_updateSlowStart(targetGroupName string, slowStartDuration int) string { + return fmt.Sprintf(`resource "aws_alb_target_group" "test" { + name = "%s" + port = 443 + protocol = "HTTP" + vpc_id = "${aws_vpc.test.id}" + + deregistration_delay = 200 + slow_start = %d + + stickiness { + type = "lb_cookie" + cookie_duration = 10000 + } + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } + + tags { + TestName = "TestAccAWSALBTargetGroup_SlowStart" + } +} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-alb-target-group-slowstart" + } +}`, targetGroupName, slowStartDuration) +} + const testAccAWSALBTargetGroupConfig_namePrefix = ` resource "aws_alb_target_group" "test" { name_prefix = "tf-" diff --git a/aws/resource_aws_ami.go b/aws/resource_aws_ami.go index bab340bf761..f1e0ca9a3e3 100644 --- a/aws/resource_aws_ami.go +++ b/aws/resource_aws_ami.go @@ -25,11 +25,18 @@ const ( ) 
func resourceAwsAmi() *schema.Resource { - // Our schema is shared also with aws_ami_copy and aws_ami_from_instance - resourceSchema := resourceAwsAmiCommonSchema(false) - return &schema.Resource{ Create: resourceAwsAmiCreate, + // The Read, Update and Delete operations are shared with aws_ami_copy + // and aws_ami_from_instance, since they differ only in how the image + // is created. + Read: resourceAwsAmiRead, + Update: resourceAwsAmiUpdate, + Delete: resourceAwsAmiDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(AWSAMIRetryTimeout), @@ -37,14 +44,167 @@ func resourceAwsAmi() *schema.Resource { Delete: schema.DefaultTimeout(AWSAMIDeleteRetryTimeout), }, - Schema: resourceSchema, - - // The Read, Update and Delete operations are shared with aws_ami_copy - // and aws_ami_from_instance, since they differ only in how the image - // is created. - Read: resourceAwsAmiRead, - Update: resourceAwsAmiUpdate, - Delete: resourceAwsAmiDelete, + Schema: map[string]*schema.Schema{ + "image_location": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "architecture": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "x86_64", + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + // The following block device attributes intentionally mimick the + // corresponding attributes on aws_instance, since they have the + // same meaning. + // However, we don't use root_block_device here because the constraint + // on which root device attributes can be overridden for an instance to + // not apply when registering an AMI. + "ebs_block_device": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "delete_on_termination": { + Type: schema.TypeBool, + Optional: true, + Default: true, + ForceNew: true, + }, + + "device_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "encrypted": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + + "iops": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + }, + + "snapshot_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "volume_size": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "volume_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "standard", + }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["snapshot_id"].(string))) + return hashcode.String(buf.String()) + }, + }, + "ena_support": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + "ephemeral_block_device": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": { + Type: schema.TypeString, + Required: true, + }, + + "virtual_name": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["virtual_name"].(string))) + return hashcode.String(buf.String()) + }, + }, + "kernel_id": { + Type: schema.TypeString, + Optional: true, 
+ ForceNew: true, + }, + // Not a public attribute; used to let the aws_ami_copy and aws_ami_from_instance + // resources record that they implicitly created new EBS snapshots that we should + // now manage. Not set by aws_ami, since the snapshots used there are presumed to + // be independently managed. + "manage_ebs_snapshots": { + Type: schema.TypeBool, + Computed: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "ramdisk_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "root_device_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "root_snapshot_id": { + Type: schema.TypeString, + Computed: true, + }, + "sriov_net_support": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "simple", + }, + "tags": tagsSchema(), + "virtualization_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "paravirtual", + }, + }, } } @@ -59,6 +219,7 @@ func resourceAwsAmiCreate(d *schema.ResourceData, meta interface{}) error { RootDeviceName: aws.String(d.Get("root_device_name").(string)), SriovNetSupport: aws.String(d.Get("sriov_net_support").(string)), VirtualizationType: aws.String(d.Get("virtualization_type").(string)), + EnaSupport: aws.Bool(d.Get("ena_support").(bool)), } if kernelId := d.Get("kernel_id").(string); kernelId != "" { @@ -196,6 +357,7 @@ func resourceAwsAmiRead(d *schema.ResourceData, meta interface{}) error { d.Set("root_snapshot_id", amiRootSnapshotId(image)) d.Set("sriov_net_support", image.SriovNetSupport) d.Set("virtualization_type", image.VirtualizationType) + d.Set("ena_support", image.EnaSupport) var ebsBlockDevs []map[string]interface{} var ephemeralBlockDevs []map[string]interface{} @@ -307,8 +469,6 @@ func resourceAwsAmiDelete(d *schema.ResourceData, meta interface{}) error { return err } - // No error, ami was deleted successfully - d.SetId("") return nil } @@ -374,202 +534,3 @@ func resourceAwsAmiWaitForAvailable(timeout time.Duration, id string, client *ec } return info.(*ec2.Image), nil } - -func resourceAwsAmiCommonSchema(computed bool) map[string]*schema.Schema { - // The "computed" parameter controls whether we're making - // a schema for an AMI that's been implicitly registered (aws_ami_copy, aws_ami_from_instance) - // or whether we're making a schema for an explicit registration (aws_ami). - // When set, almost every attribute is marked as "computed". - // When not set, only the "id" attribute is computed. - // "name" and "description" are never computed, since they must always - // be provided by the user. 
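// To make the effect of that flag concrete, here is an illustrative sketch
// (not code from this change) of what the helper below produced: with
// computed=false an attribute was user-settable and ForceNew for aws_ami,
// and with computed=true it became read-only for aws_ami_copy and
// aws_ami_from_instance.
//
//	"architecture": &schema.Schema{
//		Type:     schema.TypeString,
//		Optional: !computed, // true for aws_ami, false for the derived resources
//		Computed: computed,  // the derived resources only read this back
//		ForceNew: !computed,
//	}
//
// The fully expanded per-resource schemas added elsewhere in this diff spell
// those two outcomes out literally, which also lets them diverge; for
// example, ena_support is settable on aws_ami but Computed on aws_ami_copy
// and aws_ami_from_instance.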
- - var virtualizationTypeDefault interface{} - var deleteEbsOnTerminationDefault interface{} - var sriovNetSupportDefault interface{} - var architectureDefault interface{} - var volumeTypeDefault interface{} - if !computed { - virtualizationTypeDefault = "paravirtual" - deleteEbsOnTerminationDefault = true - sriovNetSupportDefault = "simple" - architectureDefault = "x86_64" - volumeTypeDefault = "standard" - } - - return map[string]*schema.Schema{ - "image_location": { - Type: schema.TypeString, - Optional: !computed, - Computed: true, - ForceNew: !computed, - }, - "architecture": { - Type: schema.TypeString, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - Default: architectureDefault, - }, - "description": { - Type: schema.TypeString, - Optional: true, - }, - "kernel_id": { - Type: schema.TypeString, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - "ramdisk_id": { - Type: schema.TypeString, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - }, - "root_device_name": { - Type: schema.TypeString, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - }, - "root_snapshot_id": { - Type: schema.TypeString, - Computed: true, - }, - "sriov_net_support": { - Type: schema.TypeString, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - Default: sriovNetSupportDefault, - }, - "virtualization_type": { - Type: schema.TypeString, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - Default: virtualizationTypeDefault, - }, - - // The following block device attributes intentionally mimick the - // corresponding attributes on aws_instance, since they have the - // same meaning. - // However, we don't use root_block_device here because the constraint - // on which root device attributes can be overridden for an instance to - // not apply when registering an AMI. 
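// As additional orientation for the block-device sets that follow (a sketch,
// not part of the diff): each element of these schema.TypeSet attributes is
// identified by hashing device_name together with snapshot_id (virtual_name
// for ephemeral devices), so re-ordering entries in configuration does not
// produce a spurious diff. Roughly, with a hypothetical device and snapshot:
//
//	var buf bytes.Buffer
//	buf.WriteString(fmt.Sprintf("%s-", "/dev/sda1"))              // device_name
//	buf.WriteString(fmt.Sprintf("%s-", "snap-0123456789abcdef0")) // snapshot_id (hypothetical)
//	key := hashcode.String(buf.String())                          // stable int identity for the element
//	_ = key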
- - "ebs_block_device": { - Type: schema.TypeSet, - Optional: true, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "delete_on_termination": { - Type: schema.TypeBool, - Optional: !computed, - Default: deleteEbsOnTerminationDefault, - ForceNew: !computed, - Computed: computed, - }, - - "device_name": { - Type: schema.TypeString, - Required: !computed, - ForceNew: !computed, - Computed: computed, - }, - - "encrypted": { - Type: schema.TypeBool, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - }, - - "iops": { - Type: schema.TypeInt, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - }, - - "snapshot_id": { - Type: schema.TypeString, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - }, - - "volume_size": { - Type: schema.TypeInt, - Optional: !computed, - Computed: true, - ForceNew: !computed, - }, - - "volume_type": { - Type: schema.TypeString, - Optional: !computed, - Computed: computed, - ForceNew: !computed, - Default: volumeTypeDefault, - }, - }, - }, - Set: func(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) - buf.WriteString(fmt.Sprintf("%s-", m["snapshot_id"].(string))) - return hashcode.String(buf.String()) - }, - }, - - "ephemeral_block_device": { - Type: schema.TypeSet, - Optional: true, - Computed: true, - ForceNew: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "device_name": { - Type: schema.TypeString, - Required: !computed, - Computed: computed, - }, - - "virtual_name": { - Type: schema.TypeString, - Required: !computed, - Computed: computed, - }, - }, - }, - Set: func(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) - buf.WriteString(fmt.Sprintf("%s-", m["virtual_name"].(string))) - return hashcode.String(buf.String()) - }, - }, - - "tags": tagsSchema(), - - // Not a public attribute; used to let the aws_ami_copy and aws_ami_from_instance - // resources record that they implicitly created new EBS snapshots that we should - // now manage. Not set by aws_ami, since the snapshots used there are presumed to - // be independently managed. - "manage_ebs_snapshots": { - Type: schema.TypeBool, - Computed: true, - ForceNew: true, - }, - } -} diff --git a/aws/resource_aws_ami_copy.go b/aws/resource_aws_ami_copy.go index efd27146fac..e74d74f864e 100644 --- a/aws/resource_aws_ami_copy.go +++ b/aws/resource_aws_ami_copy.go @@ -1,44 +1,17 @@ package aws import ( + "bytes" + "fmt" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" ) func resourceAwsAmiCopy() *schema.Resource { - // Inherit all of the common AMI attributes from aws_ami, since we're - // implicitly creating an aws_ami resource. - resourceSchema := resourceAwsAmiCommonSchema(true) - - // Additional attributes unique to the copy operation. 
- resourceSchema["source_ami_id"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, - } - resourceSchema["source_ami_region"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, - } - - resourceSchema["encrypted"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: false, - ForceNew: true, - } - - resourceSchema["kms_key_id"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validateArn, - } - return &schema.Resource{ Create: resourceAwsAmiCopyCreate, @@ -48,7 +21,168 @@ func resourceAwsAmiCopy() *schema.Resource { Delete: schema.DefaultTimeout(AWSAMIDeleteRetryTimeout), }, - Schema: resourceSchema, + Schema: map[string]*schema.Schema{ + "architecture": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + // The following block device attributes intentionally mimick the + // corresponding attributes on aws_instance, since they have the + // same meaning. + // However, we don't use root_block_device here because the constraint + // on which root device attributes can be overridden for an instance to + // not apply when registering an AMI. + "ebs_block_device": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "delete_on_termination": { + Type: schema.TypeBool, + Computed: true, + }, + + "device_name": { + Type: schema.TypeString, + Computed: true, + }, + + "encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + + "iops": { + Type: schema.TypeInt, + Computed: true, + }, + + "snapshot_id": { + Type: schema.TypeString, + Computed: true, + }, + + "volume_size": { + Type: schema.TypeInt, + Computed: true, + }, + + "volume_type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["snapshot_id"].(string))) + return hashcode.String(buf.String()) + }, + }, + "ephemeral_block_device": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": { + Type: schema.TypeString, + Computed: true, + }, + + "virtual_name": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["virtual_name"].(string))) + return hashcode.String(buf.String()) + }, + }, + "ena_support": { + Type: schema.TypeBool, + Computed: true, + }, + "encrypted": { + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + "image_location": { + Type: schema.TypeString, + Computed: true, + }, + "kernel_id": { + Type: schema.TypeString, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validateArn, + }, + // Not a public attribute; used to let the aws_ami_copy and aws_ami_from_instance + // resources record that they implicitly created new EBS snapshots that we should + // now manage. Not set by aws_ami, since the snapshots used there are presumed to + // be independently managed. 
+ "manage_ebs_snapshots": { + Type: schema.TypeBool, + Computed: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "ramdisk_id": { + Type: schema.TypeString, + Computed: true, + }, + "root_device_name": { + Type: schema.TypeString, + Computed: true, + }, + "root_snapshot_id": { + Type: schema.TypeString, + Computed: true, + }, + "source_ami_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "source_ami_region": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "sriov_net_support": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tagsSchema(), + "virtualization_type": { + Type: schema.TypeString, + Computed: true, + }, + }, // The remaining operations are shared with the generic aws_ami resource, // since the aws_ami_copy resource only differs in how it's created. diff --git a/aws/resource_aws_ami_copy_test.go b/aws/resource_aws_ami_copy_test.go index 2099640dad8..7e16ba66d9d 100644 --- a/aws/resource_aws_ami_copy_test.go +++ b/aws/resource_aws_ami_copy_test.go @@ -1,143 +1,141 @@ package aws import ( - "errors" "fmt" - "strings" "testing" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) -func TestAccAWSAMICopy(t *testing.T) { - var amiId string - snapshots := []string{} +func TestAccAWSAMICopy_basic(t *testing.T) { + var image ec2.Image + resourceName := "aws_ami_copy.test" - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAMICopyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAMICopyConfig, - Check: func(state *terraform.State) error { - rs, ok := state.RootModule().Resources["aws_ami_copy.test"] - if !ok { - return fmt.Errorf("AMI resource not found") - } - - amiId = rs.Primary.ID - - if amiId == "" { - return fmt.Errorf("AMI id is not set") - } - - conn := testAccProvider.Meta().(*AWSClient).ec2conn - req := &ec2.DescribeImagesInput{ - ImageIds: []*string{aws.String(amiId)}, - } - describe, err := conn.DescribeImages(req) - if err != nil { - return err - } - - if len(describe.Images) != 1 || - *describe.Images[0].ImageId != rs.Primary.ID { - return fmt.Errorf("AMI not found") - } - - image := describe.Images[0] - if expected := "available"; *image.State != expected { - return fmt.Errorf("invalid image state; expected %v, got %v", expected, image.State) - } - if expected := "machine"; *image.ImageType != expected { - return fmt.Errorf("wrong image type; expected %v, got %v", expected, image.ImageType) - } - if expected := "terraform-acc-ami-copy"; *image.Name != expected { - return fmt.Errorf("wrong name; expected %v, got %v", expected, image.Name) - } - - for _, bdm := range image.BlockDeviceMappings { - // The snapshot ID might not be set, - // even for a block device that is an - // EBS volume. 
- if bdm.Ebs != nil && bdm.Ebs.SnapshotId != nil { - snapshots = append(snapshots, *bdm.Ebs.SnapshotId) - } - } - - if expected := 1; len(snapshots) != expected { - return fmt.Errorf("wrong number of snapshots; expected %v, got %v", expected, len(snapshots)) - } - - return nil - }, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAMICopyExists(resourceName, &image), + testAccCheckAWSAMICopyAttributes(&image), + ), }, }, - CheckDestroy: func(state *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).ec2conn - diReq := &ec2.DescribeImagesInput{ - ImageIds: []*string{aws.String(amiId)}, - } - diRes, err := conn.DescribeImages(diReq) - if err != nil { - return err - } + }) +} - if len(diRes.Images) > 0 { - state := diRes.Images[0].State - return fmt.Errorf("AMI %v remains in state %v", amiId, state) - } +func TestAccAWSAMICopy_EnaSupport(t *testing.T) { + var image ec2.Image + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_ami_copy.test" - stillExist := make([]string, 0, len(snapshots)) - checkErrors := make(map[string]error) - for _, snapshotId := range snapshots { - dsReq := &ec2.DescribeSnapshotsInput{ - SnapshotIds: []*string{aws.String(snapshotId)}, - } - _, err := conn.DescribeSnapshots(dsReq) - if err == nil { - stillExist = append(stillExist, snapshotId) - continue - } - - awsErr, ok := err.(awserr.Error) - if !ok { - checkErrors[snapshotId] = err - continue - } - - if awsErr.Code() != "InvalidSnapshot.NotFound" { - checkErrors[snapshotId] = err - continue - } - } + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAMICopyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAMICopyConfig_ENASupport(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAMICopyExists(resourceName, &image), + resource.TestCheckResourceAttr(resourceName, "ena_support", "true"), + ), + }, + }, + }) +} + +func testAccCheckAWSAMICopyExists(resourceName string, image *ec2.Image) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID set for %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + input := &ec2.DescribeImagesInput{ + ImageIds: []*string{aws.String(rs.Primary.ID)}, + } + output, err := conn.DescribeImages(input) + if err != nil { + return err + } + + if len(output.Images) == 0 || aws.StringValue(output.Images[0].ImageId) != rs.Primary.ID { + return fmt.Errorf("AMI %q not found", rs.Primary.ID) + } + + *image = *output.Images[0] + + return nil + } +} + +func testAccCheckAWSAMICopyDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_ami_copy" { + continue + } + + input := &ec2.DescribeImagesInput{ + ImageIds: []*string{aws.String(rs.Primary.ID)}, + } + output, err := conn.DescribeImages(input) + if err != nil { + return err + } + + if output != nil && len(output.Images) > 0 && aws.StringValue(output.Images[0].ImageId) == rs.Primary.ID { + return fmt.Errorf("AMI %q still exists in state: %s", rs.Primary.ID, aws.StringValue(output.Images[0].State)) + } + } + + // Check for managed EBS snapshots + return testAccCheckAWSEbsSnapshotDestroy(s) +} - if len(stillExist) > 0 || len(checkErrors) > 0 { - errParts := []string{ - 
"Expected all snapshots to be gone, but:", - } - for _, snapshotId := range stillExist { - errParts = append( - errParts, - fmt.Sprintf("- %v still exists", snapshotId), - ) - } - for snapshotId, err := range checkErrors { - errParts = append( - errParts, - fmt.Sprintf("- checking %v gave error: %v", snapshotId, err), - ) - } - return errors.New(strings.Join(errParts, "\n")) +func testAccCheckAWSAMICopyAttributes(image *ec2.Image) resource.TestCheckFunc { + return func(s *terraform.State) error { + if expected := "available"; aws.StringValue(image.State) != expected { + return fmt.Errorf("invalid image state; expected %s, got %s", expected, aws.StringValue(image.State)) + } + if expected := "machine"; aws.StringValue(image.ImageType) != expected { + return fmt.Errorf("wrong image type; expected %s, got %s", expected, aws.StringValue(image.ImageType)) + } + if expected := "terraform-acc-ami-copy"; aws.StringValue(image.Name) != expected { + return fmt.Errorf("wrong name; expected %s, got %s", expected, aws.StringValue(image.Name)) + } + + snapshots := []string{} + for _, bdm := range image.BlockDeviceMappings { + // The snapshot ID might not be set, + // even for a block device that is an + // EBS volume. + if bdm.Ebs != nil && bdm.Ebs.SnapshotId != nil { + snapshots = append(snapshots, aws.StringValue(bdm.Ebs.SnapshotId)) } + } - return nil - }, - }) + if expected := 1; len(snapshots) != expected { + return fmt.Errorf("wrong number of snapshots; expected %v, got %v", expected, len(snapshots)) + } + + return nil + } } var testAccAWSAMICopyConfig = ` @@ -202,3 +200,45 @@ resource "aws_ami_copy" "test" { source_ami_region = "us-east-1" } ` + +func testAccAWSAMICopyConfig_ENASupport(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} +data "aws_region" "current" {} + +resource "aws_ebs_volume" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + size = 1 + + tags { + Name = %q + } +} + +resource "aws_ebs_snapshot" "test" { + volume_id = "${aws_ebs_volume.test.id}" + + tags { + Name = %q + } +} + +resource "aws_ami" "test" { + ena_support = true + name = "%s-source" + virtualization_type = "hvm" + root_device_name = "/dev/sda1" + + ebs_block_device { + device_name = "/dev/sda1" + snapshot_id = "${aws_ebs_snapshot.test.id}" + } +} + +resource "aws_ami_copy" "test" { + name = "%s-copy" + source_ami_id = "${aws_ami.test.id}" + source_ami_region = "${data.aws_region.current.name}" +} +`, rName, rName, rName, rName) +} diff --git a/aws/resource_aws_ami_from_instance.go b/aws/resource_aws_ami_from_instance.go index 0db9d336346..af05db1f1ff 100644 --- a/aws/resource_aws_ami_from_instance.go +++ b/aws/resource_aws_ami_from_instance.go @@ -1,29 +1,17 @@ package aws import ( + "bytes" + "fmt" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" ) func resourceAwsAmiFromInstance() *schema.Resource { - // Inherit all of the common AMI attributes from aws_ami, since we're - // implicitly creating an aws_ami resource. - resourceSchema := resourceAwsAmiCommonSchema(true) - - // Additional attributes unique to the copy operation. 
- resourceSchema["source_instance_id"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, - } - resourceSchema["snapshot_without_reboot"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - } - return &schema.Resource{ Create: resourceAwsAmiFromInstanceCreate, @@ -33,7 +21,155 @@ func resourceAwsAmiFromInstance() *schema.Resource { Delete: schema.DefaultTimeout(AWSAMIDeleteRetryTimeout), }, - Schema: resourceSchema, + Schema: map[string]*schema.Schema{ + "architecture": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + // The following block device attributes intentionally mimick the + // corresponding attributes on aws_instance, since they have the + // same meaning. + // However, we don't use root_block_device here because the constraint + // on which root device attributes can be overridden for an instance to + // not apply when registering an AMI. + "ebs_block_device": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "delete_on_termination": { + Type: schema.TypeBool, + Computed: true, + }, + + "device_name": { + Type: schema.TypeString, + Computed: true, + }, + + "encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + + "iops": { + Type: schema.TypeInt, + Computed: true, + }, + + "snapshot_id": { + Type: schema.TypeString, + Computed: true, + }, + + "volume_size": { + Type: schema.TypeInt, + Computed: true, + }, + + "volume_type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["snapshot_id"].(string))) + return hashcode.String(buf.String()) + }, + }, + "ena_support": { + Type: schema.TypeBool, + Computed: true, + }, + "ephemeral_block_device": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": { + Type: schema.TypeString, + Computed: true, + }, + + "virtual_name": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["device_name"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["virtual_name"].(string))) + return hashcode.String(buf.String()) + }, + }, + "image_location": { + Type: schema.TypeString, + Computed: true, + }, + "kernel_id": { + Type: schema.TypeString, + Computed: true, + }, + // Not a public attribute; used to let the aws_ami_copy and aws_ami_from_instance + // resources record that they implicitly created new EBS snapshots that we should + // now manage. Not set by aws_ami, since the snapshots used there are presumed to + // be independently managed. 
+ "manage_ebs_snapshots": { + Type: schema.TypeBool, + Computed: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "ramdisk_id": { + Type: schema.TypeString, + Computed: true, + }, + "root_device_name": { + Type: schema.TypeString, + Computed: true, + }, + "root_snapshot_id": { + Type: schema.TypeString, + Computed: true, + }, + "source_instance_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "snapshot_without_reboot": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + "sriov_net_support": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tagsSchema(), + "virtualization_type": { + Type: schema.TypeString, + Computed: true, + }, + }, // The remaining operations are shared with the generic aws_ami resource, // since the aws_ami_copy resource only differs in how it's created. diff --git a/aws/resource_aws_ami_from_instance_test.go b/aws/resource_aws_ami_from_instance_test.go index e130a6cbc5a..acbdfa509ae 100644 --- a/aws/resource_aws_ami_from_instance_test.go +++ b/aws/resource_aws_ami_from_instance_test.go @@ -19,7 +19,7 @@ func TestAccAWSAMIFromInstance(t *testing.T) { snapshots := []string{} rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/resource_aws_ami_launch_permission.go b/aws/resource_aws_ami_launch_permission.go index 278e9d9abf4..9d075c3a424 100644 --- a/aws/resource_aws_ami_launch_permission.go +++ b/aws/resource_aws_ami_launch_permission.go @@ -19,12 +19,12 @@ func resourceAwsAmiLaunchPermission() *schema.Resource { Delete: resourceAwsAmiLaunchPermissionDelete, Schema: map[string]*schema.Schema{ - "image_id": &schema.Schema{ + "image_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "account_id": &schema.Schema{ + "account_id": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -52,7 +52,7 @@ func resourceAwsAmiLaunchPermissionCreate(d *schema.ResourceData, meta interface Attribute: aws.String("launchPermission"), LaunchPermission: &ec2.LaunchPermissionModifications{ Add: []*ec2.LaunchPermission{ - &ec2.LaunchPermission{UserId: aws.String(account_id)}, + {UserId: aws.String(account_id)}, }, }, }) @@ -79,7 +79,7 @@ func resourceAwsAmiLaunchPermissionDelete(d *schema.ResourceData, meta interface Attribute: aws.String("launchPermission"), LaunchPermission: &ec2.LaunchPermissionModifications{ Remove: []*ec2.LaunchPermission{ - &ec2.LaunchPermission{UserId: aws.String(account_id)}, + {UserId: aws.String(account_id)}, }, }, }) @@ -106,7 +106,7 @@ func hasLaunchPermission(conn *ec2.EC2, image_id string, account_id string) (boo } for _, lp := range attrs.LaunchPermissions { - if *lp.UserId == account_id { + if aws.StringValue(lp.UserId) == account_id { return true, nil } } diff --git a/aws/resource_aws_ami_launch_permission_test.go b/aws/resource_aws_ami_launch_permission_test.go index 23de0c66f9d..d33fb81b7a4 100644 --- a/aws/resource_aws_ami_launch_permission_test.go +++ b/aws/resource_aws_ami_launch_permission_test.go @@ -2,56 +2,101 @@ package aws import ( "fmt" - "os" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" - r "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" 
"github.com/hashicorp/terraform/terraform" ) func TestAccAWSAMILaunchPermission_Basic(t *testing.T) { - imageID := "" - accountID := os.Getenv("AWS_ACCOUNT_ID") - - r.Test(t, r.TestCase{ - PreCheck: func() { - testAccPreCheck(t) - if os.Getenv("AWS_ACCOUNT_ID") == "" { - t.Fatal("AWS_ACCOUNT_ID must be set") - } + resourceName := "aws_ami_launch_permission.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAMILaunchPermissionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAMILaunchPermissionConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAMILaunchPermissionExists(resourceName), + ), + }, }, - Providers: testAccProviders, - Steps: []r.TestStep{ - // Scaffold everything - r.TestStep{ - Config: testAccAWSAMILaunchPermissionConfig(accountID, true), - Check: r.ComposeTestCheckFunc( - testCheckResourceGetAttr("aws_ami_copy.test", "id", &imageID), - testAccAWSAMILaunchPermissionExists(accountID, &imageID), + }) +} + +func TestAccAWSAMILaunchPermission_Disappears_LaunchPermission(t *testing.T) { + resourceName := "aws_ami_launch_permission.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAMILaunchPermissionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAMILaunchPermissionConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAMILaunchPermissionExists(resourceName), + testAccCheckAWSAMILaunchPermissionDisappears(resourceName), ), + ExpectNonEmptyPlan: true, }, - // Drop just launch permission to test destruction - r.TestStep{ - Config: testAccAWSAMILaunchPermissionConfig(accountID, false), - Check: r.ComposeTestCheckFunc( - testAccAWSAMILaunchPermissionDestroyed(accountID, &imageID), + }, + }) +} + +// Bug reference: https://github.com/terraform-providers/terraform-provider-aws/issues/6222 +// Images with all will not have and can cause a panic +func TestAccAWSAMILaunchPermission_Disappears_LaunchPermission_Public(t *testing.T) { + resourceName := "aws_ami_launch_permission.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAMILaunchPermissionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAMILaunchPermissionConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAMILaunchPermissionExists(resourceName), + testAccCheckAWSAMILaunchPermissionAddPublic(resourceName), + testAccCheckAWSAMILaunchPermissionDisappears(resourceName), ), + ExpectNonEmptyPlan: true, }, - // Re-add everything so we can test when AMI disappears - r.TestStep{ - Config: testAccAWSAMILaunchPermissionConfig(accountID, true), - Check: r.ComposeTestCheckFunc( - testCheckResourceGetAttr("aws_ami_copy.test", "id", &imageID), - testAccAWSAMILaunchPermissionExists(accountID, &imageID), + }, + }) +} + +func TestAccAWSAMILaunchPermission_Disappears_AMI(t *testing.T) { + imageID := "" + resourceName := "aws_ami_launch_permission.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSAMILaunchPermissionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAMILaunchPermissionConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAMILaunchPermissionExists(resourceName), ), }, // Here we delete the AMI to verify the follow-on refresh after this step // should not error. - r.TestStep{ - Config: testAccAWSAMILaunchPermissionConfig(accountID, true), - Check: r.ComposeTestCheckFunc( + { + Config: testAccAWSAMILaunchPermissionConfig(rName), + Check: resource.ComposeTestCheckFunc( + testCheckResourceGetAttr("aws_ami_copy.test", "id", &imageID), testAccAWSAMIDisappears(&imageID), ), ExpectNonEmptyPlan: true, @@ -60,7 +105,7 @@ func TestAccAWSAMILaunchPermission_Basic(t *testing.T) { }) } -func testCheckResourceGetAttr(name, key string, value *string) r.TestCheckFunc { +func testCheckResourceGetAttr(name, key string, value *string) resource.TestCheckFunc { return func(s *terraform.State) error { ms := s.RootModule() rs, ok := ms.Resources[name] @@ -78,26 +123,115 @@ func testCheckResourceGetAttr(name, key string, value *string) r.TestCheckFunc { } } -func testAccAWSAMILaunchPermissionExists(accountID string, imageID *string) r.TestCheckFunc { +func testAccCheckAWSAMILaunchPermissionExists(resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No resource ID is set") + } + conn := testAccProvider.Meta().(*AWSClient).ec2conn - if has, err := hasLaunchPermission(conn, *imageID, accountID); err != nil { + accountID := rs.Primary.Attributes["account_id"] + imageID := rs.Primary.Attributes["image_id"] + + if has, err := hasLaunchPermission(conn, imageID, accountID); err != nil { return err } else if !has { - return fmt.Errorf("launch permission does not exist for '%s' on '%s'", accountID, *imageID) + return fmt.Errorf("launch permission does not exist for '%s' on '%s'", accountID, imageID) } return nil } } -func testAccAWSAMILaunchPermissionDestroyed(accountID string, imageID *string) r.TestCheckFunc { - return func(s *terraform.State) error { +func testAccCheckAWSAMILaunchPermissionDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_ami_launch_permission" { + continue + } + conn := testAccProvider.Meta().(*AWSClient).ec2conn - if has, err := hasLaunchPermission(conn, *imageID, accountID); err != nil { + accountID := rs.Primary.Attributes["account_id"] + imageID := rs.Primary.Attributes["image_id"] + + if has, err := hasLaunchPermission(conn, imageID, accountID); err != nil { return err } else if has { - return fmt.Errorf("launch permission still exists for '%s' on '%s'", accountID, *imageID) + return fmt.Errorf("launch permission still exists for '%s' on '%s'", accountID, imageID) + } + } + + return nil +} + +func testAccCheckAWSAMILaunchPermissionAddPublic(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) } + + if rs.Primary.ID == "" { + return fmt.Errorf("No resource ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + imageID := rs.Primary.Attributes["image_id"] + + input := &ec2.ModifyImageAttributeInput{ + ImageId: aws.String(imageID), + Attribute: aws.String("launchPermission"), + LaunchPermission: 
&ec2.LaunchPermissionModifications{ + Add: []*ec2.LaunchPermission{ + {Group: aws.String("all")}, + }, + }, + } + + _, err := conn.ModifyImageAttribute(input) + + if err != nil { + return err + } + + return nil + } +} + +func testAccCheckAWSAMILaunchPermissionDisappears(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No resource ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + accountID := rs.Primary.Attributes["account_id"] + imageID := rs.Primary.Attributes["image_id"] + + input := &ec2.ModifyImageAttributeInput{ + ImageId: aws.String(imageID), + Attribute: aws.String("launchPermission"), + LaunchPermission: &ec2.LaunchPermissionModifications{ + Remove: []*ec2.LaunchPermission{ + {UserId: aws.String(accountID)}, + }, + }, + } + + _, err := conn.ModifyImageAttribute(input) + + if err != nil { + return err + } + return nil } } @@ -105,7 +239,7 @@ func testAccAWSAMILaunchPermissionDestroyed(accountID string, imageID *string) r // testAccAWSAMIDisappears is technically a "test check function" but really it // exists to perform a side effect of deleting an AMI out from under a resource // so we can test that Terraform will react properly -func testAccAWSAMIDisappears(imageID *string) r.TestCheckFunc { +func testAccAWSAMIDisappears(imageID *string) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ec2conn req := &ec2.DeregisterImageInput{ @@ -124,24 +258,36 @@ func testAccAWSAMIDisappears(imageID *string) r.TestCheckFunc { } } -func testAccAWSAMILaunchPermissionConfig(accountID string, includeLaunchPermission bool) string { - base := ` -resource "aws_ami_copy" "test" { - name = "launch-permission-test" - description = "Launch Permission Test Copy" - source_ami_id = "ami-7172b611" - source_ami_region = "us-west-2" +func testAccAWSAMILaunchPermissionConfig(rName string) string { + return fmt.Sprintf(` +data "aws_ami" "amzn-ami-minimal-hvm" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn-ami-minimal-hvm-*"] + } + filter { + name = "root-device-type" + values = ["ebs"] + } } -` - if !includeLaunchPermission { - return base - } +data "aws_caller_identity" "current" {} + +data "aws_region" "current" {} + +resource "aws_ami_copy" "test" { + description = %q + name = %q + source_ami_id = "${data.aws_ami.amzn-ami-minimal-hvm.id}" + source_ami_region = "${data.aws_region.current.name}" +} - return base + fmt.Sprintf(` -resource "aws_ami_launch_permission" "self-test" { - image_id = "${aws_ami_copy.test.id}" - account_id = "%s" +resource "aws_ami_launch_permission" "test" { + account_id = "${data.aws_caller_identity.current.account_id}" + image_id = "${aws_ami_copy.test.id}" } -`, accountID) +`, rName, rName) } diff --git a/aws/resource_aws_ami_test.go b/aws/resource_aws_ami_test.go index 14fdf0c3c05..ef16a7f8d04 100644 --- a/aws/resource_aws_ami_test.go +++ b/aws/resource_aws_ami_test.go @@ -17,23 +17,28 @@ import ( func TestAccAWSAMI_basic(t *testing.T) { var ami ec2.Image - rInt := acctest.RandInt() + resourceName := "aws_ami.test" + rName := acctest.RandomWithPrefix("tf-acc-test") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAmiDestroy, 
Steps: []resource.TestStep{ { - Config: testAccAmiConfig_basic(rInt), + Config: testAccAmiConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAmiExists("aws_ami.foo", &ami), - resource.TestCheckResourceAttr( - "aws_ami.foo", "name", fmt.Sprintf("tf-testing-%d", rInt)), - resource.TestMatchResourceAttr( - "aws_ami.foo", "root_snapshot_id", regexp.MustCompile("^snap-")), + testAccCheckAmiExists(resourceName, &ami), + resource.TestCheckResourceAttr(resourceName, "ena_support", "true"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestMatchResourceAttr(resourceName, "root_snapshot_id", regexp.MustCompile("^snap-")), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -41,7 +46,8 @@ func TestAccAWSAMI_basic(t *testing.T) { func TestAccAWSAMI_snapshotSize(t *testing.T) { var ami ec2.Image var bd ec2.BlockDeviceMapping - rInt := acctest.RandInt() + resourceName := "aws_ami.test" + rName := acctest.RandomWithPrefix("tf-acc-test") expectedDevice := &ec2.EbsBlockDevice{ DeleteOnTermination: aws.Bool(true), @@ -51,23 +57,26 @@ func TestAccAWSAMI_snapshotSize(t *testing.T) { VolumeType: aws.String("standard"), } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAmiDestroy, Steps: []resource.TestStep{ { - Config: testAccAmiConfig_snapshotSize(rInt), + Config: testAccAmiConfig_snapshotSize(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAmiExists("aws_ami.foo", &ami), + testAccCheckAmiExists(resourceName, &ami), testAccCheckAmiBlockDevice(&ami, &bd, "/dev/sda1"), testAccCheckAmiEbsBlockDevice(&bd, expectedDevice), - resource.TestCheckResourceAttr( - "aws_ami.foo", "name", fmt.Sprintf("tf-testing-%d", rInt)), - resource.TestCheckResourceAttr( - "aws_ami.foo", "architecture", "x86_64"), + resource.TestCheckResourceAttr(resourceName, "architecture", "x86_64"), + resource.TestCheckResourceAttr(resourceName, "name", rName), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -203,7 +212,7 @@ func testAccCheckAmiEbsBlockDevice(bd *ec2.BlockDeviceMapping, ed *ec2.EbsBlockD } } -func testAccAmiConfig_basic(rInt int) string { +func testAccAmiConfig_basic(rName string) string { return fmt.Sprintf(` data "aws_availability_zones" "available" {} @@ -223,19 +232,21 @@ resource "aws_ebs_snapshot" "foo" { } } -resource "aws_ami" "foo" { - name = "tf-testing-%d" +resource "aws_ami" "test" { + ena_support = true + name = %q + root_device_name = "/dev/sda1" virtualization_type = "hvm" - root_device_name = "/dev/sda1" + ebs_block_device { device_name = "/dev/sda1" snapshot_id = "${aws_ebs_snapshot.foo.id}" } } - `, rInt) +`, rName) } -func testAccAmiConfig_snapshotSize(rInt int) string { +func testAccAmiConfig_snapshotSize(rName string) string { return fmt.Sprintf(` data "aws_availability_zones" "available" {} @@ -255,14 +266,15 @@ resource "aws_ebs_snapshot" "foo" { } } -resource "aws_ami" "foo" { - name = "tf-testing-%d" +resource "aws_ami" "test" { + name = %q + root_device_name = "/dev/sda1" virtualization_type = "hvm" - root_device_name = "/dev/sda1" + ebs_block_device { device_name = "/dev/sda1" snapshot_id = "${aws_ebs_snapshot.foo.id}" } } - `, rInt) +`, rName) } diff --git a/aws/resource_aws_api_gateway_account.go b/aws/resource_aws_api_gateway_account.go index 7b786270a53..29365633b2a 100644 --- 
a/aws/resource_aws_api_gateway_account.go +++ b/aws/resource_aws_api_gateway_account.go @@ -22,21 +22,21 @@ func resourceAwsApiGatewayAccount() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "cloudwatch_role_arn": &schema.Schema{ + "cloudwatch_role_arn": { Type: schema.TypeString, Optional: true, }, - "throttle_settings": &schema.Schema{ + "throttle_settings": { Type: schema.TypeList, Computed: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "burst_limit": &schema.Schema{ + "burst_limit": { Type: schema.TypeInt, Computed: true, }, - "rate_limit": &schema.Schema{ + "rate_limit": { Type: schema.TypeFloat, Computed: true, }, @@ -122,6 +122,5 @@ func resourceAwsApiGatewayAccountUpdate(d *schema.ResourceData, meta interface{} func resourceAwsApiGatewayAccountDelete(d *schema.ResourceData, meta interface{}) error { // There is no API for "deleting" account or resetting it to "default" settings - d.SetId("") return nil } diff --git a/aws/resource_aws_api_gateway_account_test.go b/aws/resource_aws_api_gateway_account_test.go index 49e4ae72b94..948603179c6 100644 --- a/aws/resource_aws_api_gateway_account_test.go +++ b/aws/resource_aws_api_gateway_account_test.go @@ -11,6 +11,27 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSAPIGatewayAccount_importBasic(t *testing.T) { + resourceName := "aws_api_gateway_account.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayAccountDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayAccountConfig_empty, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSAPIGatewayAccount_basic(t *testing.T) { var conf apigateway.Account @@ -21,12 +42,12 @@ func TestAccAWSAPIGatewayAccount_basic(t *testing.T) { expectedRoleArn_first := regexp.MustCompile(":role/" + firstName + "$") expectedRoleArn_second := regexp.MustCompile(":role/" + secondName + "$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayAccountDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayAccountConfig_updated(firstName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayAccountExists("aws_api_gateway_account.test", &conf), @@ -34,7 +55,7 @@ func TestAccAWSAPIGatewayAccount_basic(t *testing.T) { resource.TestMatchResourceAttr("aws_api_gateway_account.test", "cloudwatch_role_arn", expectedRoleArn_first), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayAccountConfig_updated2(secondName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayAccountExists("aws_api_gateway_account.test", &conf), @@ -42,7 +63,7 @@ func TestAccAWSAPIGatewayAccount_basic(t *testing.T) { resource.TestMatchResourceAttr("aws_api_gateway_account.test", "cloudwatch_role_arn", expectedRoleArn_second), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayAccountConfig_empty, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayAccountExists("aws_api_gateway_account.test", &conf), diff --git a/aws/resource_aws_api_gateway_api_key.go b/aws/resource_aws_api_gateway_api_key.go index ec8b151a646..7748a74ec7f 100644 --- a/aws/resource_aws_api_gateway_api_key.go +++ b/aws/resource_aws_api_gateway_api_key.go @@ -6,7 +6,6 @@ import ( 
"time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/apigateway" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -98,7 +97,7 @@ func resourceAwsApiGatewayApiKeyCreate(d *schema.ResourceData, meta interface{}) return fmt.Errorf("Error creating API Gateway: %s", err) } - d.SetId(*apiKey.Id) + d.SetId(aws.StringValue(apiKey.Id)) return resourceAwsApiGatewayApiKeyRead(d, meta) } @@ -112,7 +111,7 @@ func resourceAwsApiGatewayApiKeyRead(d *schema.ResourceData, meta interface{}) e IncludeValue: aws.Bool(true), }) if err != nil { - if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NotFoundException" { + if isAWSErr(err, apigateway.ErrCodeNotFoundException, "") { log.Printf("[WARN] API Gateway API Key (%s) not found, removing from state", d.Id()) d.SetId("") return nil @@ -195,8 +194,10 @@ func resourceAwsApiGatewayApiKeyDelete(d *schema.ResourceData, meta interface{}) return nil } - if apigatewayErr, ok := err.(awserr.Error); ok && apigatewayErr.Code() == "NotFoundException" { - return nil + if err != nil { + if isAWSErr(err, apigateway.ErrCodeNotFoundException, "") { + return nil + } } return resource.NonRetryableError(err) diff --git a/aws/resource_aws_api_gateway_api_key_test.go b/aws/resource_aws_api_gateway_api_key_test.go index a7d519ae689..29a534e5826 100644 --- a/aws/resource_aws_api_gateway_api_key_test.go +++ b/aws/resource_aws_api_gateway_api_key_test.go @@ -12,10 +12,31 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSAPIGatewayApiKey_importBasic(t *testing.T) { + resourceName := "aws_api_gateway_api_key.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayApiKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayApiKeyConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSAPIGatewayApiKey_basic(t *testing.T) { var conf apigateway.ApiKey - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayApiKeyDestroy, diff --git a/aws/resource_aws_api_gateway_authorizer_test.go b/aws/resource_aws_api_gateway_authorizer_test.go index e4945ec4042..42835dcdfe6 100644 --- a/aws/resource_aws_api_gateway_authorizer_test.go +++ b/aws/resource_aws_api_gateway_authorizer_test.go @@ -24,12 +24,12 @@ func TestAccAWSAPIGatewayAuthorizer_basic(t *testing.T) { "arn:aws:lambda:[a-z0-9-]+:[0-9]{12}:function:" + lambdaName + "/invocations") expectedCreds := regexp.MustCompile("arn:aws:iam::[0-9]{12}:role/" + apiGatewayName + "_auth_invocation_role") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayAuthorizerDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_lambda(apiGatewayName, authorizerName, lambdaName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayAuthorizerExists("aws_api_gateway_authorizer.acctest", &conf), @@ -49,7 +49,7 @@ func TestAccAWSAPIGatewayAuthorizer_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_authorizer.acctest", 
"identity_validation_expression", ""), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_lambdaUpdate(apiGatewayName, authorizerName, lambdaName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayAuthorizerExists("aws_api_gateway_authorizer.acctest", &conf), @@ -79,19 +79,19 @@ func TestAccAWSAPIGatewayAuthorizer_cognito(t *testing.T) { authorizerName := "tf-acctest-igw-authorizer-" + rString cognitoName := "tf-acctest-cognito-user-pool-" + rString - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayAuthorizerDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_cognito(apiGatewayName, authorizerName, cognitoName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_api_gateway_authorizer.acctest", "name", authorizerName+"-cognito"), resource.TestCheckResourceAttr("aws_api_gateway_authorizer.acctest", "provider_arns.#", "2"), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_cognitoUpdate(apiGatewayName, authorizerName, cognitoName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_api_gateway_authorizer.acctest", "name", authorizerName+"-cognito-update"), @@ -113,12 +113,12 @@ func TestAccAWSAPIGatewayAuthorizer_switchAuthType(t *testing.T) { "arn:aws:lambda:[a-z0-9-]+:[0-9]{12}:function:" + lambdaName + "/invocations") expectedCreds := regexp.MustCompile("arn:aws:iam::[0-9]{12}:role/" + apiGatewayName + "_auth_invocation_role") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayAuthorizerDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_lambda(apiGatewayName, authorizerName, lambdaName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_api_gateway_authorizer.acctest", "name", authorizerName), @@ -127,7 +127,7 @@ func TestAccAWSAPIGatewayAuthorizer_switchAuthType(t *testing.T) { resource.TestMatchResourceAttr("aws_api_gateway_authorizer.acctest", "authorizer_credentials", expectedCreds), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_cognito(apiGatewayName, authorizerName, cognitoName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_api_gateway_authorizer.acctest", "name", authorizerName+"-cognito"), @@ -135,7 +135,7 @@ func TestAccAWSAPIGatewayAuthorizer_switchAuthType(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_authorizer.acctest", "provider_arns.#", "2"), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_lambdaUpdate(apiGatewayName, authorizerName, lambdaName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("aws_api_gateway_authorizer.acctest", "name", authorizerName+"_modified"), @@ -155,20 +155,20 @@ func TestAccAWSAPIGatewayAuthorizer_authTypeValidation(t *testing.T) { cognitoName := "tf-acctest-cognito-user-pool-" + rString lambdaName := "tf-acctest-igw-auth-lambda-" + rString - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayAuthorizerDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { 
Config: testAccAWSAPIGatewayAuthorizerConfig_authTypeValidationDefaultToken(apiGatewayName, authorizerName, lambdaName), ExpectError: regexp.MustCompile(`authorizer_uri must be set non-empty when authorizer type is TOKEN`), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_authTypeValidationRequest(apiGatewayName, authorizerName, lambdaName), ExpectError: regexp.MustCompile(`authorizer_uri must be set non-empty when authorizer type is REQUEST`), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayAuthorizerConfig_authTypeValidationCognito(apiGatewayName, authorizerName, cognitoName), ExpectError: regexp.MustCompile(`provider_arns must be set non-empty when authorizer type is COGNITO_USER_POOLS`), }, diff --git a/aws/resource_aws_api_gateway_base_path_mapping.go b/aws/resource_aws_api_gateway_base_path_mapping.go index ed31dc67b5f..dfbcd704136 100644 --- a/aws/resource_aws_api_gateway_base_path_mapping.go +++ b/aws/resource_aws_api_gateway_base_path_mapping.go @@ -3,6 +3,7 @@ package aws import ( "fmt" "log" + "strings" "time" "github.com/aws/aws-sdk-go/aws" @@ -19,6 +20,9 @@ func resourceAwsApiGatewayBasePathMapping() *schema.Resource { Create: resourceAwsApiGatewayBasePathMappingCreate, Read: resourceAwsApiGatewayBasePathMappingRead, Delete: resourceAwsApiGatewayBasePathMappingDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ "api_id": { @@ -82,15 +86,9 @@ func resourceAwsApiGatewayBasePathMappingCreate(d *schema.ResourceData, meta int func resourceAwsApiGatewayBasePathMappingRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).apigateway - domainName := d.Get("domain_name").(string) - basePath := d.Get("base_path").(string) - - if domainName == "" { - return nil - } - - if basePath == "" { - basePath = emptyBasePathMappingValue + domainName, basePath, err := decodeApiGatewayBasePathMappingId(d.Id()) + if err != nil { + return err } mapping, err := conn.GetBasePathMapping(&apigateway.GetBasePathMappingInput{ @@ -114,6 +112,7 @@ func resourceAwsApiGatewayBasePathMappingRead(d *schema.ResourceData, meta inter } d.Set("base_path", mappingBasePath) + d.Set("domain_name", domainName) d.Set("api_id", mapping.RestApiId) d.Set("stage_name", mapping.Stage) @@ -123,14 +122,13 @@ func resourceAwsApiGatewayBasePathMappingRead(d *schema.ResourceData, meta inter func resourceAwsApiGatewayBasePathMappingDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).apigateway - basePath := d.Get("base_path").(string) - - if basePath == "" { - basePath = emptyBasePathMappingValue + domainName, basePath, err := decodeApiGatewayBasePathMappingId(d.Id()) + if err != nil { + return err } - _, err := conn.DeleteBasePathMapping(&apigateway.DeleteBasePathMappingInput{ - DomainName: aws.String(d.Get("domain_name").(string)), + _, err = conn.DeleteBasePathMapping(&apigateway.DeleteBasePathMappingInput{ + DomainName: aws.String(domainName), BasePath: aws.String(basePath), }) @@ -144,3 +142,25 @@ func resourceAwsApiGatewayBasePathMappingDelete(d *schema.ResourceData, meta int return nil } + +func decodeApiGatewayBasePathMappingId(id string) (string, string, error) { + idFormatErr := fmt.Errorf("Unexpected format of ID (%q), expected DOMAIN/BASEPATH", id) + + parts := strings.SplitN(id, "/", 2) + if len(parts) != 2 { + return "", "", idFormatErr + } + + domainName := parts[0] + basePath := parts[1] + + if domainName == "" { + return "", "", idFormatErr + } + + 
if basePath == "" { + basePath = emptyBasePathMappingValue + } + + return domainName, basePath, nil +} diff --git a/aws/resource_aws_api_gateway_base_path_mapping_test.go b/aws/resource_aws_api_gateway_base_path_mapping_test.go index 9cd09b8886f..ca58efb35ae 100644 --- a/aws/resource_aws_api_gateway_base_path_mapping_test.go +++ b/aws/resource_aws_api_gateway_base_path_mapping_test.go @@ -12,13 +12,65 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestAccAWSAPIGatewayBasePath_basic(t *testing.T) { +func TestDecodeApiGatewayBasePathMappingId(t *testing.T) { + var testCases = []struct { + Input string + DomainName string + BasePath string + ErrCount int + }{ + { + Input: "no-slash", + ErrCount: 1, + }, + { + Input: "/missing-domain-name", + ErrCount: 1, + }, + { + Input: "domain-name/base-path", + DomainName: "domain-name", + BasePath: "base-path", + ErrCount: 0, + }, + { + Input: "domain-name/base/path", + DomainName: "domain-name", + BasePath: "base/path", + ErrCount: 0, + }, + { + Input: "domain-name/", + DomainName: "domain-name", + BasePath: emptyBasePathMappingValue, + ErrCount: 0, + }, + } + + for _, tc := range testCases { + domainName, basePath, err := decodeApiGatewayBasePathMappingId(tc.Input) + if tc.ErrCount == 0 && err != nil { + t.Fatalf("expected %q not to trigger an error, received: %s", tc.Input, err) + } + if tc.ErrCount > 0 && err == nil { + t.Fatalf("expected %q to trigger an error", tc.Input) + } + if domainName != tc.DomainName { + t.Fatalf("expected domain name %q to be %q", domainName, tc.DomainName) + } + if basePath != tc.BasePath { + t.Fatalf("expected base path %q to be %q", basePath, tc.BasePath) + } + } +} + +func TestAccAWSAPIGatewayBasePathMapping_basic(t *testing.T) { var conf apigateway.BasePathMapping // Our test cert is for a wildcard on this domain name := fmt.Sprintf("tf-acc-%s.terraformtest.com", acctest.RandString(8)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProvidersWithTLS, CheckDestroy: testAccCheckAWSAPIGatewayBasePathDestroy(name), @@ -29,18 +81,23 @@ func TestAccAWSAPIGatewayBasePath_basic(t *testing.T) { testAccCheckAWSAPIGatewayBasePathExists("aws_api_gateway_base_path_mapping.test", name, &conf), ), }, + { + ResourceName: "aws_api_gateway_base_path_mapping.test", + ImportState: true, + ImportStateVerify: true, + }, }, }) } // https://github.com/hashicorp/terraform/issues/9212 -func TestAccAWSAPIGatewayEmptyBasePath_basic(t *testing.T) { +func TestAccAWSAPIGatewayBasePathMapping_BasePath_Empty(t *testing.T) { var conf apigateway.BasePathMapping // Our test cert is for a wildcard on this domain name := fmt.Sprintf("tf-acc-%s.terraformtest.com", acctest.RandString(8)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProvidersWithTLS, CheckDestroy: testAccCheckAWSAPIGatewayBasePathDestroy(name), @@ -48,9 +105,14 @@ func TestAccAWSAPIGatewayEmptyBasePath_basic(t *testing.T) { { Config: testAccAWSAPIGatewayEmptyBasePathConfig(name), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSAPIGatewayEmptyBasePathExists("aws_api_gateway_base_path_mapping.test", name, &conf), + testAccCheckAWSAPIGatewayBasePathExists("aws_api_gateway_base_path_mapping.test", name, &conf), ), }, + { + ResourceName: "aws_api_gateway_base_path_mapping.test", + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -68,41 +130,14 @@ func 
testAccCheckAWSAPIGatewayBasePathExists(n string, name string, res *apigate conn := testAccProvider.Meta().(*AWSClient).apigateway - req := &apigateway.GetBasePathMappingInput{ - DomainName: aws.String(name), - BasePath: aws.String("tf-acc"), - } - describe, err := conn.GetBasePathMapping(req) + domainName, basePath, err := decodeApiGatewayBasePathMappingId(rs.Primary.ID) if err != nil { return err } - if *describe.BasePath != "tf-acc" { - return fmt.Errorf("base path mapping not found") - } - - *res = *describe - - return nil - } -} - -func testAccCheckAWSAPIGatewayEmptyBasePathExists(n string, name string, res *apigateway.BasePathMapping) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("No API Gateway ID is set") - } - - conn := testAccProvider.Meta().(*AWSClient).apigateway - req := &apigateway.GetBasePathMappingInput{ - DomainName: aws.String(name), - BasePath: aws.String(""), + DomainName: aws.String(domainName), + BasePath: aws.String(basePath), } describe, err := conn.GetBasePathMapping(req) if err != nil { @@ -120,14 +155,20 @@ func testAccCheckAWSAPIGatewayBasePathDestroy(name string) resource.TestCheckFun conn := testAccProvider.Meta().(*AWSClient).apigateway for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_api_gateway_rest_api" { + if rs.Type != "aws_api_gateway_base_path_mapping" { continue } - req := &apigateway.GetBasePathMappingsInput{ - DomainName: aws.String(name), + domainName, basePath, err := decodeApiGatewayBasePathMappingId(rs.Primary.ID) + if err != nil { + return err + } + + req := &apigateway.GetBasePathMappingInput{ + DomainName: aws.String(domainName), + BasePath: aws.String(basePath), } - _, err := conn.GetBasePathMappings(req) + _, err = conn.GetBasePathMapping(req) if err != nil { if err, ok := err.(awserr.Error); ok && err.Code() == "NotFoundException" { diff --git a/aws/resource_aws_api_gateway_client_certificate_test.go b/aws/resource_aws_api_gateway_client_certificate_test.go index 96f57fdff4a..4f75de94531 100644 --- a/aws/resource_aws_api_gateway_client_certificate_test.go +++ b/aws/resource_aws_api_gateway_client_certificate_test.go @@ -14,19 +14,19 @@ import ( func TestAccAWSAPIGatewayClientCertificate_basic(t *testing.T) { var conf apigateway.ClientCertificate - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayClientCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayClientCertificateConfig_basic, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayClientCertificateExists("aws_api_gateway_client_certificate.cow", &conf), resource.TestCheckResourceAttr("aws_api_gateway_client_certificate.cow", "description", "Hello from TF acceptance test"), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayClientCertificateConfig_basic_updated, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayClientCertificateExists("aws_api_gateway_client_certificate.cow", &conf), @@ -40,16 +40,16 @@ func TestAccAWSAPIGatewayClientCertificate_basic(t *testing.T) { func TestAccAWSAPIGatewayClientCertificate_importBasic(t *testing.T) { resourceName := "aws_api_gateway_client_certificate.cow" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: 
func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayClientCertificateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayClientCertificateConfig_basic, }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_api_gateway_deployment.go b/aws/resource_aws_api_gateway_deployment.go index 4e291b780e8..b11f387065c 100644 --- a/aws/resource_aws_api_gateway_deployment.go +++ b/aws/resource_aws_api_gateway_deployment.go @@ -6,9 +6,9 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/apigateway" - "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -47,7 +47,7 @@ func resourceAwsApiGatewayDeployment() *schema.Resource { Type: schema.TypeMap, Optional: true, ForceNew: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, "created_date": { @@ -121,12 +121,14 @@ func resourceAwsApiGatewayDeploymentRead(d *schema.ResourceData, meta interface{ d.Set("invoke_url", buildApiGatewayInvokeURL(restApiId, region, stageName)) - accountId := meta.(*AWSClient).accountid - arn, err := buildApiGatewayExecutionARN(restApiId, region, accountId) - if err != nil { - return err - } - d.Set("execution_arn", arn+"/"+stageName) + executionArn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "execute-api", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("%s/%s", restApiId, stageName), + }.String() + d.Set("execution_arn", executionArn) if err := d.Set("created_date", out.CreatedDate.Format(time.RFC3339)); err != nil { log.Printf("[DEBUG] Error setting created_date: %s", err) @@ -170,32 +172,44 @@ func resourceAwsApiGatewayDeploymentDelete(d *schema.ResourceData, meta interfac conn := meta.(*AWSClient).apigateway log.Printf("[DEBUG] Deleting API Gateway Deployment: %s", d.Id()) - return resource.Retry(5*time.Minute, func() *resource.RetryError { - log.Printf("[DEBUG] schema is %#v", d) + // If the stage has been updated to point at a different deployment, then + // the stage should not be removed when this deployment is deleted.
+ shouldDeleteStage := false + + stage, err := conn.GetStage(&apigateway.GetStageInput{ + StageName: aws.String(d.Get("stage_name").(string)), + RestApiId: aws.String(d.Get("rest_api_id").(string)), + }) + + if err != nil && !isAWSErr(err, apigateway.ErrCodeNotFoundException, "") { + return fmt.Errorf("error getting referenced stage: %s", err) + } + + if stage != nil && aws.StringValue(stage.DeploymentId) == d.Id() { + shouldDeleteStage = true + } + + if shouldDeleteStage { if _, err := conn.DeleteStage(&apigateway.DeleteStageInput{ StageName: aws.String(d.Get("stage_name").(string)), RestApiId: aws.String(d.Get("rest_api_id").(string)), }); err == nil { return nil } + } - _, err := conn.DeleteDeployment(&apigateway.DeleteDeploymentInput{ - DeploymentId: aws.String(d.Id()), - RestApiId: aws.String(d.Get("rest_api_id").(string)), - }) - if err == nil { - return nil - } + _, err = conn.DeleteDeployment(&apigateway.DeleteDeploymentInput{ + DeploymentId: aws.String(d.Id()), + RestApiId: aws.String(d.Get("rest_api_id").(string)), + }) - apigatewayErr, ok := err.(awserr.Error) - if apigatewayErr.Code() == "NotFoundException" { - return nil - } + if isAWSErr(err, apigateway.ErrCodeNotFoundException, "") { + return nil + } - if !ok { - return resource.NonRetryableError(err) - } + if err != nil { + return fmt.Errorf("error deleting API Gateway Deployment (%s): %s", d.Id(), err) + } - return resource.NonRetryableError(err) - }) + return nil } diff --git a/aws/resource_aws_api_gateway_deployment_test.go b/aws/resource_aws_api_gateway_deployment_test.go index 7cf58a4e47e..183f19706cd 100644 --- a/aws/resource_aws_api_gateway_deployment_test.go +++ b/aws/resource_aws_api_gateway_deployment_test.go @@ -14,7 +14,7 @@ import ( func TestAccAWSAPIGatewayDeployment_basic(t *testing.T) { var conf apigateway.Deployment - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDeploymentDestroy, @@ -37,6 +37,49 @@ func TestAccAWSAPIGatewayDeployment_basic(t *testing.T) { }) } +func TestAccAWSAPIGatewayDeployment_createBeforeDestoryUpdate(t *testing.T) { + var conf apigateway.Deployment + var stage apigateway.Stage + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayDeploymentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayDeploymentCreateBeforeDestroyConfig("description1", "https://www.google.com"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayDeploymentExists("aws_api_gateway_deployment.test", &conf), + resource.TestCheckResourceAttr( + "aws_api_gateway_deployment.test", "stage_name", "test"), + resource.TestCheckResourceAttr( + "aws_api_gateway_deployment.test", "description", "description1"), + resource.TestCheckResourceAttr( + "aws_api_gateway_deployment.test", "variables.a", "2"), + resource.TestCheckResourceAttrSet( + "aws_api_gateway_deployment.test", "created_date"), + testAccCheckAWSAPIGatewayDeploymentStageExists("aws_api_gateway_deployment.test", &stage), + ), + }, + { + Config: testAccAWSAPIGatewayDeploymentCreateBeforeDestroyConfig("description2", "https://www.google.de"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayDeploymentExists("aws_api_gateway_deployment.test", &conf), + resource.TestCheckResourceAttr( + "aws_api_gateway_deployment.test", "stage_name", "test"), + 
resource.TestCheckResourceAttr( + "aws_api_gateway_deployment.test", "description", "description2"), + resource.TestCheckResourceAttr( + "aws_api_gateway_deployment.test", "variables.a", "2"), + resource.TestCheckResourceAttrSet( + "aws_api_gateway_deployment.test", "created_date"), + testAccCheckAWSAPIGatewayDeploymentStageExists("aws_api_gateway_deployment.test", &stage), + ), + }, + }, + }) +} + func testAccCheckAWSAPIGatewayDeploymentExists(n string, res *apigateway.Deployment) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -69,6 +112,30 @@ func testAccCheckAWSAPIGatewayDeploymentExists(n string, res *apigateway.Deploym } } +func testAccCheckAWSAPIGatewayDeploymentStageExists(resourceName string, res *apigateway.Stage) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).apigateway + + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Deployment not found: %s", resourceName) + } + + req := &apigateway.GetStageInput{ + StageName: aws.String(rs.Primary.Attributes["stage_name"]), + RestApiId: aws.String(rs.Primary.Attributes["rest_api_id"]), + } + stage, err := conn.GetStage(req) + if err != nil { + return err + } + + *res = *stage + + return nil + } +} + func testAccCheckAWSAPIGatewayDeploymentDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).apigateway @@ -103,7 +170,8 @@ func testAccCheckAWSAPIGatewayDeploymentDestroy(s *terraform.State) error { return nil } -const testAccAWSAPIGatewayDeploymentConfig = ` +func buildAPIGatewayDeploymentConfig(description, url, extras string) string { + return fmt.Sprintf(` resource "aws_api_gateway_rest_api" "test" { name = "test" } @@ -134,7 +202,7 @@ resource "aws_api_gateway_integration" "test" { http_method = "${aws_api_gateway_method.test.http_method}" type = "HTTP" - uri = "https://www.google.de" + uri = "%s" integration_http_method = "GET" } @@ -150,10 +218,24 @@ resource "aws_api_gateway_deployment" "test" { rest_api_id = "${aws_api_gateway_rest_api.test.id}" stage_name = "test" - description = "This is a test" + description = "%s" + stage_description = "%s" + + %s variables = { "a" = "2" } } -` +`, url, description, description, extras) +} + +var testAccAWSAPIGatewayDeploymentConfig = buildAPIGatewayDeploymentConfig("This is a test", "https://www.google.de", "") + +func testAccAWSAPIGatewayDeploymentCreateBeforeDestroyConfig(description string, url string) string { + return buildAPIGatewayDeploymentConfig(description, url, ` + lifecycle { + create_before_destroy = true + } + `) +} diff --git a/aws/resource_aws_api_gateway_documentation_part_test.go b/aws/resource_aws_api_gateway_documentation_part_test.go index 2277a9c4493..fb1cf8543b3 100644 --- a/aws/resource_aws_api_gateway_documentation_part_test.go +++ b/aws/resource_aws_api_gateway_documentation_part_test.go @@ -22,12 +22,12 @@ func TestAccAWSAPIGatewayDocumentationPart_basic(t *testing.T) { resourceName := "aws_api_gateway_documentation_part.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDocumentationPartDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationPartConfig(apiName, strconv.Quote(properties)), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationPartExists(resourceName, &conf), 
@@ -37,7 +37,7 @@ func TestAccAWSAPIGatewayDocumentationPart_basic(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "rest_api_id"), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationPartConfig(apiName, strconv.Quote(uProperties)), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationPartExists(resourceName, &conf), @@ -61,12 +61,12 @@ func TestAccAWSAPIGatewayDocumentationPart_method(t *testing.T) { resourceName := "aws_api_gateway_documentation_part.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDocumentationPartDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationPartMethodConfig(apiName, strconv.Quote(properties)), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationPartExists(resourceName, &conf), @@ -78,7 +78,7 @@ func TestAccAWSAPIGatewayDocumentationPart_method(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "rest_api_id"), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationPartMethodConfig(apiName, strconv.Quote(uProperties)), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationPartExists(resourceName, &conf), @@ -104,12 +104,12 @@ func TestAccAWSAPIGatewayDocumentationPart_responseHeader(t *testing.T) { resourceName := "aws_api_gateway_documentation_part.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDocumentationPartDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationPartResponseHeaderConfig(apiName, strconv.Quote(properties)), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationPartExists(resourceName, &conf), @@ -123,7 +123,7 @@ func TestAccAWSAPIGatewayDocumentationPart_responseHeader(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "rest_api_id"), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationPartResponseHeaderConfig(apiName, strconv.Quote(uProperties)), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationPartExists(resourceName, &conf), @@ -150,12 +150,12 @@ func TestAccAWSAPIGatewayDocumentationPart_importBasic(t *testing.T) { resourceName := "aws_api_gateway_documentation_part.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDocumentationPartDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationPartConfig(apiName, strconv.Quote(properties)), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationPartExists(resourceName, &conf), @@ -165,7 +165,7 @@ func TestAccAWSAPIGatewayDocumentationPart_importBasic(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "rest_api_id"), ), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_api_gateway_documentation_version_test.go b/aws/resource_aws_api_gateway_documentation_version_test.go index 0e32f4a5715..b856e19d9b0 100644 --- a/aws/resource_aws_api_gateway_documentation_version_test.go +++ 
b/aws/resource_aws_api_gateway_documentation_version_test.go @@ -20,12 +20,12 @@ func TestAccAWSAPIGatewayDocumentationVersion_basic(t *testing.T) { resourceName := "aws_api_gateway_documentation_version.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDocumentationVersionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationVersionBasicConfig(version, apiName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationVersionExists(resourceName, &conf), @@ -49,12 +49,12 @@ func TestAccAWSAPIGatewayDocumentationVersion_allFields(t *testing.T) { resourceName := "aws_api_gateway_documentation_version.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDocumentationVersionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationVersionAllFieldsConfig(version, apiName, stageName, description), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationVersionExists(resourceName, &conf), @@ -63,7 +63,7 @@ func TestAccAWSAPIGatewayDocumentationVersion_allFields(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "rest_api_id"), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationVersionAllFieldsConfig(version, apiName, stageName, uDescription), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDocumentationVersionExists(resourceName, &conf), @@ -83,15 +83,15 @@ func TestAccAWSAPIGatewayDocumentationVersion_importBasic(t *testing.T) { resourceName := "aws_api_gateway_documentation_version.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDocumentationVersionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationVersionBasicConfig(version, apiName), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -109,15 +109,15 @@ func TestAccAWSAPIGatewayDocumentationVersion_importAllFields(t *testing.T) { resourceName := "aws_api_gateway_documentation_version.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayDocumentationVersionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayDocumentationVersionAllFieldsConfig(version, apiName, stageName, description), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_api_gateway_domain_name.go b/aws/resource_aws_api_gateway_domain_name.go index d06298503ee..b7669899fb7 100644 --- a/aws/resource_aws_api_gateway_domain_name.go +++ b/aws/resource_aws_api_gateway_domain_name.go @@ -10,6 +10,7 @@ import ( "github.com/aws/aws-sdk-go/service/apigateway" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsApiGatewayDomainName() *schema.Resource { @@ -18,6 +19,9 @@ 
func resourceAwsApiGatewayDomainName() *schema.Resource { Read: resourceAwsApiGatewayDomainNameRead, Update: resourceAwsApiGatewayDomainNameUpdate, Delete: resourceAwsApiGatewayDomainNameDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ @@ -28,20 +32,20 @@ func resourceAwsApiGatewayDomainName() *schema.Resource { Type: schema.TypeString, ForceNew: true, Optional: true, - ConflictsWith: []string{"certificate_arn"}, + ConflictsWith: []string{"certificate_arn", "regional_certificate_arn"}, }, "certificate_chain": { Type: schema.TypeString, ForceNew: true, Optional: true, - ConflictsWith: []string{"certificate_arn"}, + ConflictsWith: []string{"certificate_arn", "regional_certificate_arn"}, }, "certificate_name": { Type: schema.TypeString, Optional: true, - ConflictsWith: []string{"certificate_arn"}, + ConflictsWith: []string{"certificate_arn", "regional_certificate_arn", "regional_certificate_name"}, }, "certificate_private_key": { @@ -49,7 +53,7 @@ func resourceAwsApiGatewayDomainName() *schema.Resource { ForceNew: true, Optional: true, Sensitive: true, - ConflictsWith: []string{"certificate_arn"}, + ConflictsWith: []string{"certificate_arn", "regional_certificate_arn"}, }, "domain_name": { @@ -61,7 +65,7 @@ func resourceAwsApiGatewayDomainName() *schema.Resource { "certificate_arn": { Type: schema.TypeString, Optional: true, - ConflictsWith: []string{"certificate_body", "certificate_chain", "certificate_name", "certificate_private_key"}, + ConflictsWith: []string{"certificate_body", "certificate_chain", "certificate_name", "certificate_private_key", "regional_certificate_arn", "regional_certificate_name"}, }, "cloudfront_domain_name": { @@ -78,6 +82,54 @@ func resourceAwsApiGatewayDomainName() *schema.Resource { Type: schema.TypeString, Computed: true, }, + + "endpoint_configuration": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MinItems: 1, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "types": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + // BadRequestException: Cannot create an api with multiple Endpoint Types + MaxItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{ + apigateway.EndpointTypeEdge, + apigateway.EndpointTypeRegional, + }, false), + }, + }, + }, + }, + }, + + "regional_certificate_arn": { + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"certificate_arn", "certificate_body", "certificate_chain", "certificate_name", "certificate_private_key", "regional_certificate_name"}, + }, + + "regional_certificate_name": { + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"certificate_arn", "certificate_name", "regional_certificate_arn"}, + }, + + "regional_domain_name": { + Type: schema.TypeString, + Computed: true, + }, + + "regional_zone_id": { + Type: schema.TypeString, + Computed: true, + }, }, } } @@ -110,14 +162,24 @@ func resourceAwsApiGatewayDomainNameCreate(d *schema.ResourceData, meta interfac params.CertificatePrivateKey = aws.String(v.(string)) } + if v, ok := d.GetOk("endpoint_configuration"); ok { + params.EndpointConfiguration = expandApiGatewayEndpointConfiguration(v.([]interface{})) + } + + if v, ok := d.GetOk("regional_certificate_arn"); ok && v.(string) != "" { + params.RegionalCertificateArn = aws.String(v.(string)) + } + + if v, ok := d.GetOk("regional_certificate_name"); ok && v.(string) != "" { + 
params.RegionalCertificateName = aws.String(v.(string)) + } + domainName, err := conn.CreateDomainName(params) if err != nil { return fmt.Errorf("Error creating API Gateway Domain Name: %s", err) } d.SetId(*domainName.DomainName) - d.Set("cloudfront_domain_name", domainName.DistributionDomainName) - d.Set("cloudfront_zone_id", cloudFrontRoute53ZoneID) return resourceAwsApiGatewayDomainNameRead(d, meta) } @@ -139,13 +201,23 @@ func resourceAwsApiGatewayDomainNameRead(d *schema.ResourceData, meta interface{ return err } + d.Set("certificate_arn", domainName.CertificateArn) d.Set("certificate_name", domainName.CertificateName) if err := d.Set("certificate_upload_date", domainName.CertificateUploadDate.Format(time.RFC3339)); err != nil { log.Printf("[DEBUG] Error setting certificate_upload_date: %s", err) } d.Set("cloudfront_domain_name", domainName.DistributionDomainName) + d.Set("cloudfront_zone_id", cloudFrontRoute53ZoneID) d.Set("domain_name", domainName.DomainName) - d.Set("certificate_arn", domainName.CertificateArn) + + if err := d.Set("endpoint_configuration", flattenApiGatewayEndpointConfiguration(domainName.EndpointConfiguration)); err != nil { + return fmt.Errorf("error setting endpoint_configuration: %s", err) + } + + d.Set("regional_certificate_arn", domainName.RegionalCertificateArn) + d.Set("regional_certificate_name", domainName.RegionalCertificateName) + d.Set("regional_domain_name", domainName.RegionalDomainName) + d.Set("regional_zone_id", domainName.RegionalHostedZoneId) return nil } @@ -169,6 +241,36 @@ func resourceAwsApiGatewayDomainNameUpdateOperations(d *schema.ResourceData) []* }) } + if d.HasChange("regional_certificate_name") { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/regionalCertificateName"), + Value: aws.String(d.Get("regional_certificate_name").(string)), + }) + } + + if d.HasChange("regional_certificate_arn") { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/regionalCertificateArn"), + Value: aws.String(d.Get("regional_certificate_arn").(string)), + }) + } + + if d.HasChange("endpoint_configuration.0.types") { + // The domain name must have an endpoint type. + // If attempting to remove the configuration, do nothing. 
+ if v, ok := d.GetOk("endpoint_configuration"); ok && len(v.([]interface{})) > 0 { + m := v.([]interface{})[0].(map[string]interface{}) + + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/endpointConfiguration/types/0"), + Value: aws.String(m["types"].([]interface{})[0].(string)), + }) + } + } + return operations } diff --git a/aws/resource_aws_api_gateway_domain_name_test.go b/aws/resource_aws_api_gateway_domain_name_test.go index 59d0163b775..92bf731b5e3 100644 --- a/aws/resource_aws_api_gateway_domain_name_test.go +++ b/aws/resource_aws_api_gateway_domain_name_test.go @@ -2,18 +2,56 @@ package aws import ( "fmt" + "os" "regexp" "testing" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/apigateway" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) -func TestAccAWSAPIGatewayDomainName_basic(t *testing.T) { +func TestAccAWSAPIGatewayDomainName_CertificateArn(t *testing.T) { + // This test must always run in us-east-1 + // BadRequestException: Invalid certificate ARN: arn:aws:acm:us-west-2:123456789012:certificate/xxxxx. Certificate must be in 'us-east-1'. + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + // For now, use an environment variable to limit running this test + certificateArn := os.Getenv("AWS_API_GATEWAY_DOMAIN_NAME_CERTIFICATE_ARN") + if certificateArn == "" { + t.Skip( + "Environment variable AWS_API_GATEWAY_DOMAIN_NAME_CERTIFICATE_ARN is not set. " + + "This environment variable must be set to the ARN of " + + "an ISSUED ACM certificate in us-east-1 to enable this test.") + } + + var domainName apigateway.DomainName + resourceName := "aws_api_gateway_domain_name.test" + rName := fmt.Sprintf("tf-acc-%s.terraformtest.com", acctest.RandString(8)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSAPIGatewayDomainNameDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayDomainNameConfig_CertificateArn(rName, certificateArn), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayDomainNameExists(resourceName, &domainName), + resource.TestCheckResourceAttrSet(resourceName, "cloudfront_domain_name"), + resource.TestCheckResourceAttr(resourceName, "cloudfront_zone_id", "Z2FDTNDATAQYW2"), + resource.TestCheckResourceAttr(resourceName, "domain_name", rName), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayDomainName_CertificateName(t *testing.T) { var conf apigateway.DomainName rString := acctest.RandString(8) @@ -23,35 +61,123 @@ func TestAccAWSAPIGatewayDomainName_basic(t *testing.T) { certRe := regexp.MustCompile("^-----BEGIN CERTIFICATE-----\n") keyRe := regexp.MustCompile("^-----BEGIN RSA PRIVATE KEY-----\n") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProvidersWithTLS, CheckDestroy: testAccCheckAWSAPIGatewayDomainNameDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSAPIGatewayDomainNameConfig(name, commonName), + Config: testAccAWSAPIGatewayDomainNameConfig_CertificateName(name, commonName), Check: resource.ComposeTestCheckFunc( 
testAccCheckAWSAPIGatewayDomainNameExists("aws_api_gateway_domain_name.test", &conf), resource.TestMatchResourceAttr("aws_api_gateway_domain_name.test", "certificate_body", certRe), resource.TestMatchResourceAttr("aws_api_gateway_domain_name.test", "certificate_chain", certRe), resource.TestCheckResourceAttr("aws_api_gateway_domain_name.test", "certificate_name", "tf-acc-apigateway-domain-name"), resource.TestMatchResourceAttr("aws_api_gateway_domain_name.test", "certificate_private_key", keyRe), + resource.TestCheckResourceAttrSet("aws_api_gateway_domain_name.test", "cloudfront_domain_name"), + resource.TestCheckResourceAttr("aws_api_gateway_domain_name.test", "cloudfront_zone_id", "Z2FDTNDATAQYW2"), resource.TestCheckResourceAttr("aws_api_gateway_domain_name.test", "domain_name", name), resource.TestCheckResourceAttrSet("aws_api_gateway_domain_name.test", "certificate_upload_date"), ), }, { - Config: testAccAWSAPIGatewayDomainNameConfig(nameModified, commonName), + Config: testAccAWSAPIGatewayDomainNameConfig_CertificateName(nameModified, commonName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayDomainNameExists("aws_api_gateway_domain_name.test", &conf), resource.TestMatchResourceAttr("aws_api_gateway_domain_name.test", "certificate_body", certRe), resource.TestMatchResourceAttr("aws_api_gateway_domain_name.test", "certificate_chain", certRe), resource.TestCheckResourceAttr("aws_api_gateway_domain_name.test", "certificate_name", "tf-acc-apigateway-domain-name"), resource.TestMatchResourceAttr("aws_api_gateway_domain_name.test", "certificate_private_key", keyRe), + resource.TestCheckResourceAttrSet("aws_api_gateway_domain_name.test", "cloudfront_domain_name"), + resource.TestCheckResourceAttr("aws_api_gateway_domain_name.test", "cloudfront_zone_id", "Z2FDTNDATAQYW2"), resource.TestCheckResourceAttr("aws_api_gateway_domain_name.test", "domain_name", nameModified), resource.TestCheckResourceAttrSet("aws_api_gateway_domain_name.test", "certificate_upload_date"), ), }, + { + ResourceName: "aws_api_gateway_domain_name.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"certificate_body", "certificate_chain", "certificate_private_key"}, + }, + }, + }) +} + +func TestAccAWSAPIGatewayDomainName_RegionalCertificateArn(t *testing.T) { + // For now, use an environment variable to limit running this test + regionalCertificateArn := os.Getenv("AWS_API_GATEWAY_DOMAIN_NAME_REGIONAL_CERTIFICATE_ARN") + if regionalCertificateArn == "" { + t.Skip( + "Environment variable AWS_API_GATEWAY_DOMAIN_NAME_REGIONAL_CERTIFICATE_ARN is not set. 
" + + "This environment variable must be set to the ARN of " + + "an ISSUED ACM certificate in the region where this test " + + "is running to enable the test.") + } + + var domainName apigateway.DomainName + resourceName := "aws_api_gateway_domain_name.test" + rName := fmt.Sprintf("tf-acc-%s.terraformtest.com", acctest.RandString(8)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSAPIGatewayDomainNameDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayDomainNameConfig_RegionalCertificateArn(rName, regionalCertificateArn), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayDomainNameExists(resourceName, &domainName), + resource.TestCheckResourceAttr(resourceName, "domain_name", rName), + resource.TestMatchResourceAttr(resourceName, "regional_domain_name", regexp.MustCompile(`.*\.execute-api\..*`)), + resource.TestMatchResourceAttr(resourceName, "regional_zone_id", regexp.MustCompile(`^Z`)), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayDomainName_RegionalCertificateName(t *testing.T) { + // For now, use an environment variable to limit running this test + // BadRequestException: Uploading certificates is not supported for REGIONAL. + // See Remarks section of https://docs.aws.amazon.com/apigateway/api-reference/link-relation/domainname-create/ + // which suggests this configuration should be possible somewhere, e.g. AWS China? + regionalCertificateArn := os.Getenv("AWS_API_GATEWAY_DOMAIN_NAME_REGIONAL_CERTIFICATE_NAME_ENABLED") + if regionalCertificateArn == "" { + t.Skip( + "Environment variable AWS_API_GATEWAY_DOMAIN_NAME_REGIONAL_CERTIFICATE_NAME_ENABLED is not set. " + + "This environment variable must be set to any non-empty value " + + "in a region where uploading REGIONAL certificates is allowed " + + "to enable the test.") + } + + var domainName apigateway.DomainName + resourceName := "aws_api_gateway_domain_name.test" + + rName := fmt.Sprintf("tf-acc-%s.terraformtest.com", acctest.RandString(8)) + commonName := "*.terraformtest.com" + certRe := regexp.MustCompile("^-----BEGIN CERTIFICATE-----\n") + keyRe := regexp.MustCompile("^-----BEGIN RSA PRIVATE KEY-----\n") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSAPIGatewayDomainNameDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayDomainNameConfig_RegionalCertificateName(rName, commonName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayDomainNameExists(resourceName, &domainName), + resource.TestMatchResourceAttr(resourceName, "certificate_body", certRe), + resource.TestMatchResourceAttr(resourceName, "certificate_chain", certRe), + resource.TestCheckResourceAttr(resourceName, "certificate_name", "tf-acc-apigateway-domain-name"), + resource.TestMatchResourceAttr(resourceName, "certificate_private_key", keyRe), + resource.TestCheckResourceAttrSet(resourceName, "certificate_upload_date"), + resource.TestCheckResourceAttr(resourceName, "domain_name", rName), + resource.TestMatchResourceAttr(resourceName, "regional_domain_name", regexp.MustCompile(`.*\.execute-api\..*`)), + resource.TestMatchResourceAttr(resourceName, "regional_zone_id", regexp.MustCompile(`^Z`)), + ), + }, }, }) } @@ -91,28 +217,22 @@ func testAccCheckAWSAPIGatewayDomainNameDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).apigateway 
for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_api_gateway_api_key" { + if rs.Type != "aws_api_gateway_domain_name" { continue } - describe, err := conn.GetDomainNames(&apigateway.GetDomainNamesInput{}) + _, err := conn.GetDomainName(&apigateway.GetDomainNameInput{ + DomainName: aws.String(rs.Primary.ID), + }) - if err == nil { - if len(describe.Items) != 0 && - *describe.Items[0].DomainName == rs.Primary.ID { - return fmt.Errorf("API Gateway DomainName still exists") + if err != nil { + if isAWSErr(err, apigateway.ErrCodeNotFoundException, "") { + return nil } - } - - aws2err, ok := err.(awserr.Error) - if !ok { - return err - } - if aws2err.Code() != "NotFoundException" { return err } - return nil + return fmt.Errorf("API Gateway Domain Name still exists: %s", rs.Primary.ID) } return nil @@ -168,7 +288,20 @@ resource "tls_locally_signed_cert" "leaf" { `, commonName) } -func testAccAWSAPIGatewayDomainNameConfig(domainName, commonName string) string { +func testAccAWSAPIGatewayDomainNameConfig_CertificateArn(domainName, certificateArn string) string { + return fmt.Sprintf(` +resource "aws_api_gateway_domain_name" "test" { + domain_name = "%s" + certificate_arn = "%s" + + endpoint_configuration { + types = ["EDGE"] + } +} +`, domainName, certificateArn) +} + +func testAccAWSAPIGatewayDomainNameConfig_CertificateName(domainName, commonName string) string { return fmt.Sprintf(` resource "aws_api_gateway_domain_name" "test" { domain_name = "%s" @@ -180,3 +313,33 @@ resource "aws_api_gateway_domain_name" "test" { %s `, domainName, testAccAWSAPIGatewayCerts(commonName)) } + +func testAccAWSAPIGatewayDomainNameConfig_RegionalCertificateArn(domainName, regionalCertificateArn string) string { + return fmt.Sprintf(` +resource "aws_api_gateway_domain_name" "test" { + domain_name = "%s" + regional_certificate_arn = "%s" + + endpoint_configuration { + types = ["REGIONAL"] + } +} +`, domainName, regionalCertificateArn) +} + +func testAccAWSAPIGatewayDomainNameConfig_RegionalCertificateName(domainName, commonName string) string { + return fmt.Sprintf(` +resource "aws_api_gateway_domain_name" "test" { + certificate_body = "${tls_locally_signed_cert.leaf.cert_pem}" + certificate_chain = "${tls_self_signed_cert.ca.cert_pem}" + certificate_private_key = "${tls_private_key.test.private_key_pem}" + domain_name = "%s" + regional_certificate_name = "tf-acc-apigateway-domain-name" + + endpoint_configuration { + types = ["REGIONAL"] + } +} +%s +`, domainName, testAccAWSAPIGatewayCerts(commonName)) +} diff --git a/aws/resource_aws_api_gateway_gateway_response.go b/aws/resource_aws_api_gateway_gateway_response.go index 105d1227f24..cddf6e16105 100644 --- a/aws/resource_aws_api_gateway_gateway_response.go +++ b/aws/resource_aws_api_gateway_gateway_response.go @@ -3,6 +3,7 @@ package aws import ( "fmt" "log" + "strings" "time" "github.com/aws/aws-sdk-go/aws" @@ -18,6 +19,20 @@ func resourceAwsApiGatewayGatewayResponse() *schema.Resource { Read: resourceAwsApiGatewayGatewayResponseRead, Update: resourceAwsApiGatewayGatewayResponsePut, Delete: resourceAwsApiGatewayGatewayResponseDelete, + Importer: &schema.ResourceImporter{ + State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + idParts := strings.Split(d.Id(), "/") + if len(idParts) != 2 || idParts[0] == "" || idParts[1] == "" { + return nil, fmt.Errorf("Unexpected format of ID (%q), expected REST-API-ID/RESPONSE-TYPE", d.Id()) + } + restApiID := idParts[0] + responseType := idParts[1] + 
d.Set("response_type", responseType) + d.Set("rest_api_id", restApiID) + d.SetId(fmt.Sprintf("aggr-%s-%s", restApiID, responseType)) + return []*schema.ResourceData{d}, nil + }, + }, Schema: map[string]*schema.Schema{ "rest_api_id": { @@ -39,13 +54,13 @@ func resourceAwsApiGatewayGatewayResponse() *schema.Resource { "response_templates": { Type: schema.TypeMap, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, }, "response_parameters": { Type: schema.TypeMap, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, }, }, diff --git a/aws/resource_aws_api_gateway_gateway_response_test.go b/aws/resource_aws_api_gateway_gateway_response_test.go index c2dc74362f3..6d19dbfaeb0 100644 --- a/aws/resource_aws_api_gateway_gateway_response_test.go +++ b/aws/resource_aws_api_gateway_gateway_response_test.go @@ -17,7 +17,7 @@ func TestAccAWSAPIGatewayGatewayResponse_basic(t *testing.T) { rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayGatewayResponseDestroy, @@ -43,6 +43,12 @@ func TestAccAWSAPIGatewayGatewayResponse_basic(t *testing.T) { resource.TestCheckNoResourceAttr("aws_api_gateway_gateway_response.test", "response_parameters.gatewayresponse.header.Authorization"), ), }, + { + ResourceName: "aws_api_gateway_gateway_response.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayGatewayResponseImportStateIdFunc("aws_api_gateway_gateway_response.test"), + ImportStateVerify: true, + }, }, }) } @@ -107,6 +113,17 @@ func testAccCheckAWSAPIGatewayGatewayResponseDestroy(s *terraform.State) error { return nil } +func testAccAWSAPIGatewayGatewayResponseImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s", rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["response_type"]), nil + } +} + func testAccAWSAPIGatewayGatewayResponseConfig(rName string) string { return fmt.Sprintf(` resource "aws_api_gateway_rest_api" "main" { diff --git a/aws/resource_aws_api_gateway_integration.go b/aws/resource_aws_api_gateway_integration.go index bfbfb05dd54..259d5d69136 100644 --- a/aws/resource_aws_api_gateway_integration.go +++ b/aws/resource_aws_api_gateway_integration.go @@ -4,6 +4,7 @@ import ( "encoding/json" "fmt" "log" + "strconv" "strings" "time" @@ -21,6 +22,22 @@ func resourceAwsApiGatewayIntegration() *schema.Resource { Read: resourceAwsApiGatewayIntegrationRead, Update: resourceAwsApiGatewayIntegrationUpdate, Delete: resourceAwsApiGatewayIntegrationDelete, + Importer: &schema.ResourceImporter{ + State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + idParts := strings.Split(d.Id(), "/") + if len(idParts) != 3 || idParts[0] == "" || idParts[1] == "" || idParts[2] == "" { + return nil, fmt.Errorf("Unexpected format of ID (%q), expected REST-API-ID/RESOURCE-ID/HTTP-METHOD", d.Id()) + } + restApiID := idParts[0] + resourceID := idParts[1] + httpMethod := idParts[2] + d.Set("http_method", httpMethod) + d.Set("resource_id", resourceID) + d.Set("rest_api_id", restApiID) + d.SetId(fmt.Sprintf("agi-%s-%s-%s", restApiID, resourceID, httpMethod)) + return []*schema.ResourceData{d}, nil + }, + }, 
Schema: map[string]*schema.Schema{ "rest_api_id": { @@ -55,6 +72,21 @@ func resourceAwsApiGatewayIntegration() *schema.Resource { }, false), }, + "connection_type": { + Type: schema.TypeString, + Optional: true, + Default: apigateway.ConnectionTypeInternet, + ValidateFunc: validation.StringInSlice([]string{ + apigateway.ConnectionTypeInternet, + apigateway.ConnectionTypeVpcLink, + }, false), + }, + + "connection_id": { + Type: schema.TypeString, + Optional: true, + }, + "uri": { Type: schema.TypeString, Optional: true, @@ -76,12 +108,12 @@ func resourceAwsApiGatewayIntegration() *schema.Resource { "request_templates": { Type: schema.TypeMap, Optional: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, "request_parameters": { Type: schema.TypeMap, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, ConflictsWith: []string{"request_parameters_in_json"}, }, @@ -123,6 +155,13 @@ func resourceAwsApiGatewayIntegration() *schema.Resource { Optional: true, Computed: true, }, + + "timeout_milliseconds": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(50, 29000), + Default: 29000, + }, }, } } @@ -131,14 +170,26 @@ func resourceAwsApiGatewayIntegrationCreate(d *schema.ResourceData, meta interfa conn := meta.(*AWSClient).apigateway log.Print("[DEBUG] Creating API Gateway Integration") + + connectionType := aws.String(d.Get("connection_type").(string)) + var connectionId *string + if *connectionType == apigateway.ConnectionTypeVpcLink { + if _, ok := d.GetOk("connection_id"); !ok { + return fmt.Errorf("connection_id required when connection_type set to VPC_LINK") + } + connectionId = aws.String(d.Get("connection_id").(string)) + } + var integrationHttpMethod *string if v, ok := d.GetOk("integration_http_method"); ok { integrationHttpMethod = aws.String(v.(string)) } + var uri *string if v, ok := d.GetOk("uri"); ok { uri = aws.String(v.(string)) } + templates := make(map[string]string) for k, v := range d.Get("request_templates").(map[string]interface{}) { templates[k] = v.(string) @@ -186,20 +237,28 @@ func resourceAwsApiGatewayIntegrationCreate(d *schema.ResourceData, meta interfa cacheNamespace = aws.String(v.(string)) } + var timeoutInMillis *int64 + if v, ok := d.GetOk("timeout_milliseconds"); ok { + timeoutInMillis = aws.Int64(int64(v.(int))) + } + _, err := conn.PutIntegration(&apigateway.PutIntegrationInput{ - HttpMethod: aws.String(d.Get("http_method").(string)), - ResourceId: aws.String(d.Get("resource_id").(string)), - RestApiId: aws.String(d.Get("rest_api_id").(string)), - Type: aws.String(d.Get("type").(string)), + HttpMethod: aws.String(d.Get("http_method").(string)), + ResourceId: aws.String(d.Get("resource_id").(string)), + RestApiId: aws.String(d.Get("rest_api_id").(string)), + Type: aws.String(d.Get("type").(string)), IntegrationHttpMethod: integrationHttpMethod, - Uri: uri, - RequestParameters: aws.StringMap(parameters), - RequestTemplates: aws.StringMap(templates), - Credentials: credentials, - CacheNamespace: cacheNamespace, - CacheKeyParameters: cacheKeyParameters, - PassthroughBehavior: passthroughBehavior, - ContentHandling: contentHandling, + Uri: uri, + RequestParameters: aws.StringMap(parameters), + RequestTemplates: aws.StringMap(templates), + Credentials: credentials, + CacheNamespace: cacheNamespace, + CacheKeyParameters: cacheKeyParameters, + PassthroughBehavior: passthroughBehavior, + ContentHandling: contentHandling, + ConnectionType: connectionType, + 
ConnectionId: connectionId, + TimeoutInMillis: timeoutInMillis, }) if err != nil { return fmt.Errorf("Error creating API Gateway Integration: %s", err) @@ -228,37 +287,44 @@ func resourceAwsApiGatewayIntegrationRead(d *schema.ResourceData, meta interface return err } log.Printf("[DEBUG] Received API Gateway Integration: %s", integration) - d.SetId(fmt.Sprintf("agi-%s-%s-%s", d.Get("rest_api_id").(string), d.Get("resource_id").(string), d.Get("http_method").(string))) - // AWS converts "" to null on their side, convert it back - if v, ok := integration.RequestTemplates["application/json"]; ok && v == nil { - integration.RequestTemplates["application/json"] = aws.String("") + if err := d.Set("cache_key_parameters", flattenStringList(integration.CacheKeyParameters)); err != nil { + return fmt.Errorf("error setting cache_key_parameters: %s", err) } - - d.Set("request_templates", aws.StringValueMap(integration.RequestTemplates)) - d.Set("type", integration.Type) - d.Set("request_parameters", aws.StringValueMap(integration.RequestParameters)) - d.Set("request_parameters_in_json", aws.StringValueMap(integration.RequestParameters)) + d.Set("cache_namespace", integration.CacheNamespace) + d.Set("connection_id", integration.ConnectionId) + d.Set("connection_type", apigateway.ConnectionTypeInternet) + if integration.ConnectionType != nil { + d.Set("connection_type", integration.ConnectionType) + } + d.Set("content_handling", integration.ContentHandling) + d.Set("credentials", integration.Credentials) + d.Set("integration_http_method", integration.HttpMethod) d.Set("passthrough_behavior", integration.PassthroughBehavior) - if integration.Uri != nil { - d.Set("uri", integration.Uri) + // KNOWN ISSUE: This next d.Set() is broken as it should be a JSON string of the map, + // however leaving as-is since this attribute has been deprecated + // for a very long time and will be removed soon in the next major release. + // Not worth the effort of fixing, acceptance testing, and potential JSON equivalence bugs. 
+ if _, ok := d.GetOk("request_parameters_in_json"); ok { + d.Set("request_parameters_in_json", aws.StringValueMap(integration.RequestParameters)) } - if integration.Credentials != nil { - d.Set("credentials", integration.Credentials) - } + d.Set("request_parameters", aws.StringValueMap(integration.RequestParameters)) - if integration.ContentHandling != nil { - d.Set("content_handling", integration.ContentHandling) + // We need to explicitly convert key = nil values into key = "", which aws.StringValueMap() removes + requestTemplateMap := make(map[string]string) + for key, valuePointer := range integration.RequestTemplates { + requestTemplateMap[key] = aws.StringValue(valuePointer) } - - d.Set("cache_key_parameters", flattenStringList(integration.CacheKeyParameters)) - - if integration.CacheNamespace != nil { - d.Set("cache_namespace", integration.CacheNamespace) + if err := d.Set("request_templates", requestTemplateMap); err != nil { + return fmt.Errorf("error setting request_templates: %s", err) } + d.Set("timeout_milliseconds", integration.TimeoutInMillis) + d.Set("type", integration.Type) + d.Set("uri", integration.Uri) + return nil } @@ -398,6 +464,30 @@ func resourceAwsApiGatewayIntegrationUpdate(d *schema.ResourceData, meta interfa }) } + if d.HasChange("connection_type") { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/connectionType"), + Value: aws.String(d.Get("connection_type").(string)), + }) + } + + if d.HasChange("connection_id") { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/connectionId"), + Value: aws.String(d.Get("connection_id").(string)), + }) + } + + if d.HasChange("timeout_milliseconds") { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/timeoutInMillis"), + Value: aws.String(strconv.Itoa(d.Get("timeout_milliseconds").(int))), + }) + } + params := &apigateway.UpdateIntegrationInput{ HttpMethod: aws.String(d.Get("http_method").(string)), ResourceId: aws.String(d.Get("resource_id").(string)), diff --git a/aws/resource_aws_api_gateway_integration_response.go b/aws/resource_aws_api_gateway_integration_response.go index 5462d92d7aa..1da6a179955 100644 --- a/aws/resource_aws_api_gateway_integration_response.go +++ b/aws/resource_aws_api_gateway_integration_response.go @@ -4,6 +4,7 @@ import ( "encoding/json" "fmt" "log" + "strings" "time" "github.com/aws/aws-sdk-go/aws" @@ -19,6 +20,24 @@ func resourceAwsApiGatewayIntegrationResponse() *schema.Resource { Read: resourceAwsApiGatewayIntegrationResponseRead, Update: resourceAwsApiGatewayIntegrationResponseCreate, Delete: resourceAwsApiGatewayIntegrationResponseDelete, + Importer: &schema.ResourceImporter{ + State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + idParts := strings.Split(d.Id(), "/") + if len(idParts) != 4 || idParts[0] == "" || idParts[1] == "" || idParts[2] == "" || idParts[3] == "" { + return nil, fmt.Errorf("Unexpected format of ID (%q), expected REST-API-ID/RESOURCE-ID/HTTP-METHOD/STATUS-CODE", d.Id()) + } + restApiID := idParts[0] + resourceID := idParts[1] + httpMethod := idParts[2] + statusCode := idParts[3] + d.Set("http_method", httpMethod) + d.Set("status_code", statusCode) + d.Set("resource_id", resourceID) + d.Set("rest_api_id", restApiID) + d.SetId(fmt.Sprintf("agir-%s-%s-%s-%s", restApiID, resourceID, httpMethod, statusCode)) + return []*schema.ResourceData{d}, 
nil + }, + }, Schema: map[string]*schema.Schema{ "rest_api_id": { @@ -53,12 +72,12 @@ func resourceAwsApiGatewayIntegrationResponse() *schema.Resource { "response_templates": { Type: schema.TypeMap, Optional: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, "response_parameters": { Type: schema.TypeMap, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, ConflictsWith: []string{"response_parameters_in_json"}, }, @@ -148,11 +167,31 @@ func resourceAwsApiGatewayIntegrationResponseRead(d *schema.ResourceData, meta i log.Printf("[DEBUG] Received API Gateway Integration Response: %s", integrationResponse) - d.SetId(fmt.Sprintf("agir-%s-%s-%s-%s", d.Get("rest_api_id").(string), d.Get("resource_id").(string), d.Get("http_method").(string), d.Get("status_code").(string))) - d.Set("response_templates", integrationResponse.ResponseTemplates) + d.Set("content_handling", integrationResponse.ContentHandling) + + if err := d.Set("response_parameters", aws.StringValueMap(integrationResponse.ResponseParameters)); err != nil { + return fmt.Errorf("error setting response_parameters: %s", err) + } + + // KNOWN ISSUE: This next d.Set() is broken as it should be a JSON string of the map, + // however leaving as-is since this attribute has been deprecated + // for a very long time and will be removed soon in the next major release. + // Not worth the effort of fixing, acceptance testing, and potential JSON equivalence bugs. + if _, ok := d.GetOk("response_parameters_in_json"); ok { + d.Set("response_parameters_in_json", aws.StringValueMap(integrationResponse.ResponseParameters)) + } + + // We need to explicitly convert key = nil values into key = "", which aws.StringValueMap() removes + responseTemplateMap := make(map[string]string) + for key, valuePointer := range integrationResponse.ResponseTemplates { + responseTemplateMap[key] = aws.StringValue(valuePointer) + } + if err := d.Set("response_templates", responseTemplateMap); err != nil { + return fmt.Errorf("error setting response_templates: %s", err) + } + d.Set("selection_pattern", integrationResponse.SelectionPattern) - d.Set("response_parameters", aws.StringValueMap(integrationResponse.ResponseParameters)) - d.Set("response_parameters_in_json", aws.StringValueMap(integrationResponse.ResponseParameters)) + return nil } diff --git a/aws/resource_aws_api_gateway_integration_response_test.go b/aws/resource_aws_api_gateway_integration_response_test.go index 5eef59cd88c..5756787a1ab 100644 --- a/aws/resource_aws_api_gateway_integration_response_test.go +++ b/aws/resource_aws_api_gateway_integration_response_test.go @@ -14,12 +14,12 @@ import ( func TestAccAWSAPIGatewayIntegrationResponse_basic(t *testing.T) { var conf apigateway.IntegrationResponse - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayIntegrationResponseDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayIntegrationResponseConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayIntegrationResponseExists("aws_api_gateway_integration_response.test", &conf), @@ -28,12 +28,12 @@ func TestAccAWSAPIGatewayIntegrationResponse_basic(t *testing.T) { "aws_api_gateway_integration_response.test", "response_templates.application/json", ""), resource.TestCheckResourceAttr( "aws_api_gateway_integration_response.test", 
"response_templates.application/xml", "#set($inputRoot = $input.path('$'))\n{ }"), - resource.TestCheckNoResourceAttr( - "aws_api_gateway_integration_response.test", "content_handling"), + resource.TestCheckResourceAttr( + "aws_api_gateway_integration_response.test", "content_handling", ""), ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayIntegrationResponseConfigUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayIntegrationResponseExists("aws_api_gateway_integration_response.test", &conf), @@ -46,6 +46,12 @@ func TestAccAWSAPIGatewayIntegrationResponse_basic(t *testing.T) { "aws_api_gateway_integration_response.test", "content_handling", "CONVERT_TO_BINARY"), ), }, + { + ResourceName: "aws_api_gateway_integration_response.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayIntegrationResponseImportStateIdFunc("aws_api_gateway_integration_response.test"), + ImportStateVerify: true, + }, }, }) } @@ -157,6 +163,17 @@ func testAccCheckAWSAPIGatewayIntegrationResponseDestroy(s *terraform.State) err return nil } +func testAccAWSAPIGatewayIntegrationResponseImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s/%s/%s", rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["resource_id"], rs.Primary.Attributes["http_method"], rs.Primary.Attributes["status_code"]), nil + } +} + const testAccAWSAPIGatewayIntegrationResponseConfig = ` resource "aws_api_gateway_rest_api" "test" { name = "test" diff --git a/aws/resource_aws_api_gateway_integration_test.go b/aws/resource_aws_api_gateway_integration_test.go index 37b55313801..a919d4c0ff7 100644 --- a/aws/resource_aws_api_gateway_integration_test.go +++ b/aws/resource_aws_api_gateway_integration_test.go @@ -2,11 +2,13 @@ package aws import ( "fmt" + "regexp" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -14,7 +16,7 @@ import ( func TestAccAWSAPIGatewayIntegration_basic(t *testing.T) { var conf apigateway.Integration - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayIntegrationDestroy, @@ -28,13 +30,14 @@ func TestAccAWSAPIGatewayIntegration_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", "CONVERT_TO_TEXT"), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Authorization", "'static'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", 
"request_parameters.integration.request.header.X-Foo", "'Bar'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/json", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/xml", "#set($inputRoot = $input.path('$'))\n{ }"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "timeout_milliseconds", "29000"), ), }, @@ -47,13 +50,14 @@ func TestAccAWSAPIGatewayIntegration_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", "CONVERT_TO_TEXT"), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Authorization", "'updated'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-FooBar", "'Baz'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/json", "{'foobar': 'bar}"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.text/html", "Foo"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "timeout_milliseconds", "2000"), ), }, @@ -66,13 +70,14 @@ func TestAccAWSAPIGatewayIntegration_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de/updated"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", "CONVERT_TO_TEXT"), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Authorization", "'static'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Foo", "'Bar'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/json", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/xml", "#set($inputRoot = $input.path('$'))\n{ }"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "timeout_milliseconds", "2000"), ), }, @@ -85,9 +90,10 @@ func TestAccAWSAPIGatewayIntegration_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de"), 
resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", "CONVERT_TO_TEXT"), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "0"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.%", "0"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "timeout_milliseconds", "2000"), ), }, @@ -100,14 +106,21 @@ func TestAccAWSAPIGatewayIntegration_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", "CONVERT_TO_TEXT"), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Authorization", "'static'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/json", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/xml", "#set($inputRoot = $input.path('$'))\n{ }"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "timeout_milliseconds", "29000"), ), }, + { + ResourceName: "aws_api_gateway_integration.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayIntegrationImportStateIdFunc("aws_api_gateway_integration.test"), + ImportStateVerify: true, + }, }, }) } @@ -115,7 +128,7 @@ func TestAccAWSAPIGatewayIntegration_basic(t *testing.T) { func TestAccAWSAPIGatewayIntegration_contentHandling(t *testing.T) { var conf apigateway.Integration - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayIntegrationDestroy, @@ -129,7 +142,7 @@ func TestAccAWSAPIGatewayIntegration_contentHandling(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", "CONVERT_TO_TEXT"), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Authorization", "'static'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Foo", "'Bar'"), @@ -148,7 
+161,7 @@ func TestAccAWSAPIGatewayIntegration_contentHandling(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", "CONVERT_TO_BINARY"), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Authorization", "'static'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Foo", "'Bar'"), @@ -167,7 +180,7 @@ func TestAccAWSAPIGatewayIntegration_contentHandling(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", ""), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "2"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Authorization", "'static'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Foo", "'Bar'"), @@ -176,6 +189,12 @@ func TestAccAWSAPIGatewayIntegration_contentHandling(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/xml", "#set($inputRoot = $input.path('$'))\n{ }"), ), }, + { + ResourceName: "aws_api_gateway_integration.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayIntegrationImportStateIdFunc("aws_api_gateway_integration.test"), + ImportStateVerify: true, + }, }, }) } @@ -183,7 +202,7 @@ func TestAccAWSAPIGatewayIntegration_contentHandling(t *testing.T) { func TestAccAWSAPIGatewayIntegration_cache_key_parameters(t *testing.T) { var conf apigateway.Integration - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayIntegrationDestroy, @@ -197,7 +216,7 @@ func TestAccAWSAPIGatewayIntegration_cache_key_parameters(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "uri", "https://www.google.de"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "passthrough_behavior", "WHEN_NO_MATCH"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "content_handling", "CONVERT_TO_TEXT"), - resource.TestCheckNoResourceAttr("aws_api_gateway_integration.test", "credentials"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "credentials", ""), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.%", "3"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", 
"request_parameters.integration.request.header.X-Authorization", "'static'"), resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_parameters.integration.request.header.X-Foo", "'Bar'"), @@ -210,6 +229,56 @@ func TestAccAWSAPIGatewayIntegration_cache_key_parameters(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "request_templates.application/xml", "#set($inputRoot = $input.path('$'))\n{ }"), ), }, + { + ResourceName: "aws_api_gateway_integration.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayIntegrationImportStateIdFunc("aws_api_gateway_integration.test"), + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSAPIGatewayIntegration_integrationType(t *testing.T) { + var conf apigateway.Integration + + rName := fmt.Sprintf("tf-acctest-apigw-int-%s", acctest.RandString(7)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayIntegrationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayIntegrationConfig_IntegrationTypeInternet(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayIntegrationExists("aws_api_gateway_integration.test", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "connection_type", "INTERNET"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "connection_id", ""), + ), + }, + { + Config: testAccAWSAPIGatewayIntegrationConfig_IntegrationTypeVpcLink(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayIntegrationExists("aws_api_gateway_integration.test", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "connection_type", "VPC_LINK"), + resource.TestMatchResourceAttr("aws_api_gateway_integration.test", "connection_id", regexp.MustCompile("^[0-9a-z]+$")), + ), + }, + { + Config: testAccAWSAPIGatewayIntegrationConfig_IntegrationTypeInternet(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayIntegrationExists("aws_api_gateway_integration.test", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "connection_type", "INTERNET"), + resource.TestCheckResourceAttr("aws_api_gateway_integration.test", "connection_id", ""), + ), + }, + { + ResourceName: "aws_api_gateway_integration.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayIntegrationImportStateIdFunc("aws_api_gateway_integration.test"), + ImportStateVerify: true, + }, }, }) } @@ -276,6 +345,17 @@ func testAccCheckAWSAPIGatewayIntegrationDestroy(s *terraform.State) error { return nil } +func testAccAWSAPIGatewayIntegrationImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s/%s", rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["resource_id"], rs.Primary.Attributes["http_method"]), nil + } +} + const testAccAWSAPIGatewayIntegrationConfig = ` resource "aws_api_gateway_rest_api" "test" { name = "test" @@ -317,7 +397,7 @@ resource "aws_api_gateway_integration" "test" { uri = "https://www.google.de" integration_http_method = "GET" passthrough_behavior = "WHEN_NO_MATCH" - content_handling = "CONVERT_TO_TEXT" + content_handling = "CONVERT_TO_TEXT" } ` @@ -362,7 +442,8 @@ resource "aws_api_gateway_integration" 
"test" { uri = "https://www.google.de" integration_http_method = "GET" passthrough_behavior = "WHEN_NO_MATCH" - content_handling = "CONVERT_TO_TEXT" + content_handling = "CONVERT_TO_TEXT" + timeout_milliseconds = 2000 } ` @@ -407,7 +488,8 @@ resource "aws_api_gateway_integration" "test" { uri = "https://www.google.de/updated" integration_http_method = "GET" passthrough_behavior = "WHEN_NO_MATCH" - content_handling = "CONVERT_TO_TEXT" + content_handling = "CONVERT_TO_TEXT" + timeout_milliseconds = 2000 } ` @@ -452,7 +534,8 @@ resource "aws_api_gateway_integration" "test" { uri = "https://www.google.de" integration_http_method = "GET" passthrough_behavior = "WHEN_NO_MATCH" - content_handling = "CONVERT_TO_BINARY" + content_handling = "CONVERT_TO_BINARY" + timeout_milliseconds = 2000 } ` @@ -496,7 +579,8 @@ resource "aws_api_gateway_integration" "test" { type = "HTTP" uri = "https://www.google.de" integration_http_method = "GET" - passthrough_behavior = "WHEN_NO_MATCH" + passthrough_behavior = "WHEN_NO_MATCH" + timeout_milliseconds = 2000 } ` @@ -531,7 +615,8 @@ resource "aws_api_gateway_integration" "test" { uri = "https://www.google.de" integration_http_method = "GET" passthrough_behavior = "WHEN_NO_MATCH" - content_handling = "CONVERT_TO_TEXT" + content_handling = "CONVERT_TO_TEXT" + timeout_milliseconds = 2000 } ` @@ -584,6 +669,99 @@ resource "aws_api_gateway_integration" "test" { uri = "https://www.google.de" integration_http_method = "GET" passthrough_behavior = "WHEN_NO_MATCH" - content_handling = "CONVERT_TO_TEXT" + content_handling = "CONVERT_TO_TEXT" + timeout_milliseconds = 2000 } ` + +func testAccAWSAPIGatewayIntegrationConfig_IntegrationTypeBase(rName string) string { + return fmt.Sprintf(` +variable "name" { + default = "%s" +} + +data "aws_availability_zones" "test" {} + +resource "aws_vpc" "test" { + cidr_block = "10.10.0.0/16" + + tags { + Name = "${var.name}" + } +} + +resource "aws_subnet" "test" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.10.0.0/24" + availability_zone = "${data.aws_availability_zones.test.names[0]}" +} + +resource "aws_api_gateway_rest_api" "test" { + name = "${var.name}" +} + +resource "aws_api_gateway_resource" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + parent_id = "${aws_api_gateway_rest_api.test.root_resource_id}" + path_part = "test" +} + +resource "aws_api_gateway_method" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "GET" + authorization = "NONE" + + request_models = { + "application/json" = "Error" + } +} + +resource "aws_lb" "test" { + name = "${var.name}" + internal = true + load_balancer_type = "network" + subnets = ["${aws_subnet.test.id}"] +} + +resource "aws_api_gateway_vpc_link" "test" { + name = "${var.name}" + target_arns = ["${aws_lb.test.arn}"] +} +`, rName) +} + +func testAccAWSAPIGatewayIntegrationConfig_IntegrationTypeVpcLink(rName string) string { + return testAccAWSAPIGatewayIntegrationConfig_IntegrationTypeBase(rName) + fmt.Sprintf(` +resource "aws_api_gateway_integration" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_method.test.http_method}" + + type = "HTTP" + uri = "https://www.google.de" + integration_http_method = "GET" + passthrough_behavior = "WHEN_NO_MATCH" + content_handling = "CONVERT_TO_TEXT" + + connection_type = "VPC_LINK" + connection_id = "${aws_api_gateway_vpc_link.test.id}" +} +`) +} + +func 
testAccAWSAPIGatewayIntegrationConfig_IntegrationTypeInternet(rName string) string { + return testAccAWSAPIGatewayIntegrationConfig_IntegrationTypeBase(rName) + fmt.Sprintf(` +resource "aws_api_gateway_integration" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_method.test.http_method}" + + type = "HTTP" + uri = "https://www.google.de" + integration_http_method = "GET" + passthrough_behavior = "WHEN_NO_MATCH" + content_handling = "CONVERT_TO_TEXT" +} +`) +} diff --git a/aws/resource_aws_api_gateway_method.go b/aws/resource_aws_api_gateway_method.go index d35fb20ebc2..9c469078ecc 100644 --- a/aws/resource_aws_api_gateway_method.go +++ b/aws/resource_aws_api_gateway_method.go @@ -5,6 +5,7 @@ import ( "fmt" "log" "strconv" + "strings" "time" "github.com/aws/aws-sdk-go/aws" @@ -20,64 +21,87 @@ func resourceAwsApiGatewayMethod() *schema.Resource { Read: resourceAwsApiGatewayMethodRead, Update: resourceAwsApiGatewayMethodUpdate, Delete: resourceAwsApiGatewayMethodDelete, + Importer: &schema.ResourceImporter{ + State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + idParts := strings.Split(d.Id(), "/") + if len(idParts) != 3 || idParts[0] == "" || idParts[1] == "" || idParts[2] == "" { + return nil, fmt.Errorf("Unexpected format of ID (%q), expected REST-API-ID/RESOURCE-ID/HTTP-METHOD", d.Id()) + } + restApiID := idParts[0] + resourceID := idParts[1] + httpMethod := idParts[2] + d.Set("http_method", httpMethod) + d.Set("resource_id", resourceID) + d.Set("rest_api_id", restApiID) + d.SetId(fmt.Sprintf("agm-%s-%s-%s", restApiID, resourceID, httpMethod)) + return []*schema.ResourceData{d}, nil + }, + }, Schema: map[string]*schema.Schema{ - "rest_api_id": &schema.Schema{ + "rest_api_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "resource_id": &schema.Schema{ + "resource_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "http_method": &schema.Schema{ + "http_method": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validateHTTPMethod(), }, - "authorization": &schema.Schema{ + "authorization": { Type: schema.TypeString, Required: true, }, - "authorizer_id": &schema.Schema{ + "authorizer_id": { Type: schema.TypeString, Optional: true, }, - "api_key_required": &schema.Schema{ + "authorization_scopes": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Optional: true, + }, + + "api_key_required": { Type: schema.TypeBool, Optional: true, Default: false, }, - "request_models": &schema.Schema{ + "request_models": { Type: schema.TypeMap, Optional: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, - "request_parameters": &schema.Schema{ + "request_parameters": { Type: schema.TypeMap, - Elem: schema.TypeBool, + Elem: &schema.Schema{Type: schema.TypeBool}, Optional: true, ConflictsWith: []string{"request_parameters_in_json"}, }, - "request_parameters_in_json": &schema.Schema{ + "request_parameters_in_json": { Type: schema.TypeString, Optional: true, ConflictsWith: []string{"request_parameters"}, Deprecated: "Use field request_parameters instead", }, - "request_validator_id": &schema.Schema{ + "request_validator_id": { Type: schema.TypeString, Optional: true, }, @@ -126,6 +150,10 @@ func resourceAwsApiGatewayMethodCreate(d *schema.ResourceData, meta interface{}) input.AuthorizerId = aws.String(v.(string)) } + if v, ok 
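The `ImportStateIdFunc` helpers added to the tests and the importer `State` functions added to the resources are two halves of one convention: the tests join the identifying attributes with `/`, and the importer splits that string back apart before synthesizing the provider's internal ID. Below is a self-contained sketch of the round trip; `composeMethodImportID` and `parseMethodImportID` are illustrative names, not provider helpers.

```go
package main

import (
	"fmt"
	"strings"
)

// composeMethodImportID mirrors the ImportStateIdFunc test helpers:
// rest_api_id, resource_id and http_method joined with "/".
func composeMethodImportID(restAPIID, resourceID, httpMethod string) string {
	return fmt.Sprintf("%s/%s/%s", restAPIID, resourceID, httpMethod)
}

// parseMethodImportID mirrors the importer State function above: exactly three
// non-empty, slash-separated parts, otherwise the import is rejected.
func parseMethodImportID(id string) (restAPIID, resourceID, httpMethod string, err error) {
	parts := strings.Split(id, "/")
	if len(parts) != 3 || parts[0] == "" || parts[1] == "" || parts[2] == "" {
		return "", "", "", fmt.Errorf("unexpected format of ID (%q), expected REST-API-ID/RESOURCE-ID/HTTP-METHOD", id)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	id := composeMethodImportID("abc123", "def456", "GET")
	restAPIID, resourceID, httpMethod, err := parseMethodImportID(id)
	fmt.Println(id, "->", restAPIID, resourceID, httpMethod, err)

	// The importer then stores the same "agm-" prefixed internal ID the
	// resource uses elsewhere.
	fmt.Printf("agm-%s-%s-%s\n", restAPIID, resourceID, httpMethod)
}
```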
:= d.GetOk("authorization_scopes"); ok { + input.AuthorizationScopes = expandStringList(v.(*schema.Set).List()) + } + if v, ok := d.GetOk("request_validator_id"); ok { input.RequestValidatorId = aws.String(v.(string)) } @@ -159,13 +187,32 @@ func resourceAwsApiGatewayMethodRead(d *schema.ResourceData, meta interface{}) e return err } log.Printf("[DEBUG] Received API Gateway Method: %s", out) - d.SetId(fmt.Sprintf("agm-%s-%s-%s", d.Get("rest_api_id").(string), d.Get("resource_id").(string), d.Get("http_method").(string))) - d.Set("request_parameters", aws.BoolValueMap(out.RequestParameters)) - d.Set("request_parameters_in_json", aws.BoolValueMap(out.RequestParameters)) + d.Set("api_key_required", out.ApiKeyRequired) + + if err := d.Set("authorization_scopes", flattenStringList(out.AuthorizationScopes)); err != nil { + return fmt.Errorf("error setting authorization_scopes: %s", err) + } + d.Set("authorization", out.AuthorizationType) d.Set("authorizer_id", out.AuthorizerId) - d.Set("request_models", aws.StringValueMap(out.RequestModels)) + + if err := d.Set("request_models", aws.StringValueMap(out.RequestModels)); err != nil { + return fmt.Errorf("error setting request_models: %s", err) + } + + // KNOWN ISSUE: This next d.Set() is broken as it should be a JSON string of the map, + // however leaving as-is since this attribute has been deprecated + // for a very long time and will be removed soon in the next major release. + // Not worth the effort of fixing, acceptance testing, and potential JSON equivalence bugs. + if _, ok := d.GetOk("request_parameters_in_json"); ok { + d.Set("request_parameters_in_json", aws.BoolValueMap(out.RequestParameters)) + } + + if err := d.Set("request_parameters", aws.BoolValueMap(out.RequestParameters)); err != nil { + return fmt.Errorf("error setting request_models: %s", err) + } + d.Set("request_validator_id", out.RequestValidatorId) return nil @@ -229,6 +276,32 @@ func resourceAwsApiGatewayMethodUpdate(d *schema.ResourceData, meta interface{}) }) } + if d.HasChange("authorization_scopes") { + old, new := d.GetChange("authorization_scopes") + path := "/authorizationScopes" + + os := old.(*schema.Set) + ns := new.(*schema.Set) + + additionList := ns.Difference(os) + for _, v := range additionList.List() { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("add"), + Path: aws.String(path), + Value: aws.String(v.(string)), + }) + } + + removalList := os.Difference(ns) + for _, v := range removalList.List() { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("remove"), + Path: aws.String(path), + Value: aws.String(v.(string)), + }) + } + } + if d.HasChange("api_key_required") { operations = append(operations, &apigateway.PatchOperation{ Op: aws.String("replace"), diff --git a/aws/resource_aws_api_gateway_method_response.go b/aws/resource_aws_api_gateway_method_response.go index 8c4ab7af3cc..9bd7cf8c90e 100644 --- a/aws/resource_aws_api_gateway_method_response.go +++ b/aws/resource_aws_api_gateway_method_response.go @@ -5,6 +5,8 @@ import ( "fmt" "log" "strconv" + "strings" + "sync" "time" "github.com/aws/aws-sdk-go/aws" @@ -12,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/apigateway" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" - "sync" ) var resourceAwsApiGatewayMethodResponseMutex = &sync.Mutex{} @@ -23,46 +24,64 @@ func resourceAwsApiGatewayMethodResponse() *schema.Resource { Read: 
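The `authorization_scopes` update logic above turns a set difference into API Gateway patch operations: scopes present only in the new set become `add` operations on `/authorizationScopes`, and scopes present only in the old set become `remove` operations. Here is a stripped-down sketch of that computation without the `schema.Set` machinery; `scopePatchOperations` and the `patchOperation` struct are illustrative stand-ins, not SDK or provider types.

```go
package main

import "fmt"

type patchOperation struct {
	Op, Path, Value string
}

// scopePatchOperations computes the same add/remove operations that the
// HasChange("authorization_scopes") block above derives from schema.Set
// differences, using plain string slices for illustration.
func scopePatchOperations(oldScopes, newScopes []string) []patchOperation {
	oldSet := make(map[string]bool, len(oldScopes))
	for _, s := range oldScopes {
		oldSet[s] = true
	}
	newSet := make(map[string]bool, len(newScopes))
	for _, s := range newScopes {
		newSet[s] = true
	}

	var ops []patchOperation
	for _, s := range newScopes {
		if !oldSet[s] { // present now, absent before -> add
			ops = append(ops, patchOperation{Op: "add", Path: "/authorizationScopes", Value: s})
		}
	}
	for _, s := range oldScopes {
		if !newSet[s] { // present before, absent now -> remove
			ops = append(ops, patchOperation{Op: "remove", Path: "/authorizationScopes", Value: s})
		}
	}
	return ops
}

func main() {
	oldScopes := []string{"email", "openid"}
	newScopes := []string{"email", "profile", "aws.cognito.signin.user.admin"}
	for _, op := range scopePatchOperations(oldScopes, newScopes) {
		fmt.Printf("%s %s %s\n", op.Op, op.Path, op.Value)
	}
}
```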
resourceAwsApiGatewayMethodResponseRead, Update: resourceAwsApiGatewayMethodResponseUpdate, Delete: resourceAwsApiGatewayMethodResponseDelete, + Importer: &schema.ResourceImporter{ + State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + idParts := strings.Split(d.Id(), "/") + if len(idParts) != 4 || idParts[0] == "" || idParts[1] == "" || idParts[2] == "" || idParts[3] == "" { + return nil, fmt.Errorf("Unexpected format of ID (%q), expected REST-API-ID/RESOURCE-ID/HTTP-METHOD/STATUS-CODE", d.Id()) + } + restApiID := idParts[0] + resourceID := idParts[1] + httpMethod := idParts[2] + statusCode := idParts[3] + d.Set("http_method", httpMethod) + d.Set("status_code", statusCode) + d.Set("resource_id", resourceID) + d.Set("rest_api_id", restApiID) + d.SetId(fmt.Sprintf("agmr-%s-%s-%s-%s", restApiID, resourceID, httpMethod, statusCode)) + return []*schema.ResourceData{d}, nil + }, + }, Schema: map[string]*schema.Schema{ - "rest_api_id": &schema.Schema{ + "rest_api_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "resource_id": &schema.Schema{ + "resource_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "http_method": &schema.Schema{ + "http_method": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validateHTTPMethod(), }, - "status_code": &schema.Schema{ + "status_code": { Type: schema.TypeString, Required: true, }, - "response_models": &schema.Schema{ + "response_models": { Type: schema.TypeMap, Optional: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, - "response_parameters": &schema.Schema{ + "response_parameters": { Type: schema.TypeMap, - Elem: schema.TypeBool, + Elem: &schema.Schema{Type: schema.TypeBool}, Optional: true, ConflictsWith: []string{"response_parameters_in_json"}, }, - "response_parameters_in_json": &schema.Schema{ + "response_parameters_in_json": { Type: schema.TypeString, Optional: true, ConflictsWith: []string{"response_parameters"}, @@ -140,10 +159,22 @@ func resourceAwsApiGatewayMethodResponseRead(d *schema.ResourceData, meta interf } log.Printf("[DEBUG] Received API Gateway Method Response: %s", methodResponse) - d.Set("response_models", aws.StringValueMap(methodResponse.ResponseModels)) - d.Set("response_parameters", aws.BoolValueMap(methodResponse.ResponseParameters)) - d.Set("response_parameters_in_json", aws.BoolValueMap(methodResponse.ResponseParameters)) - d.SetId(fmt.Sprintf("agmr-%s-%s-%s-%s", d.Get("rest_api_id").(string), d.Get("resource_id").(string), d.Get("http_method").(string), d.Get("status_code").(string))) + + if err := d.Set("response_models", aws.StringValueMap(methodResponse.ResponseModels)); err != nil { + return fmt.Errorf("error setting response_models: %s", err) + } + + if err := d.Set("response_parameters", aws.BoolValueMap(methodResponse.ResponseParameters)); err != nil { + return fmt.Errorf("error setting response_parameters: %s", err) + } + + // KNOWN ISSUE: This next d.Set() is broken as it should be a JSON string of the map, + // however leaving as-is since this attribute has been deprecated + // for a very long time and will be removed soon in the next major release. + // Not worth the effort of fixing, acceptance testing, and potential JSON equivalence bugs. 
+ if _, ok := d.GetOk("response_parameters_in_json"); ok { + d.Set("response_parameters_in_json", aws.BoolValueMap(methodResponse.ResponseParameters)) + } return nil } diff --git a/aws/resource_aws_api_gateway_method_response_test.go b/aws/resource_aws_api_gateway_method_response_test.go index 514cb1db14a..43f982b424b 100644 --- a/aws/resource_aws_api_gateway_method_response_test.go +++ b/aws/resource_aws_api_gateway_method_response_test.go @@ -14,12 +14,12 @@ import ( func TestAccAWSAPIGatewayMethodResponse_basic(t *testing.T) { var conf apigateway.MethodResponse - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayMethodResponseDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAPIGatewayMethodResponseConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayMethodResponseExists("aws_api_gateway_method_response.error", &conf), @@ -31,7 +31,7 @@ func TestAccAWSAPIGatewayMethodResponse_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAPIGatewayMethodResponseConfigUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAPIGatewayMethodResponseExists("aws_api_gateway_method_response.error", &conf), @@ -42,6 +42,12 @@ func TestAccAWSAPIGatewayMethodResponse_basic(t *testing.T) { "aws_api_gateway_method_response.error", "response_models.application/json", "Empty"), ), }, + { + ResourceName: "aws_api_gateway_method_response.error", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayMethodResponseImportStateIdFunc("aws_api_gateway_method_response.error"), + ImportStateVerify: true, + }, }, }) } @@ -152,6 +158,17 @@ func testAccCheckAWSAPIGatewayMethodResponseDestroy(s *terraform.State) error { return nil } +func testAccAWSAPIGatewayMethodResponseImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s/%s/%s", rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["resource_id"], rs.Primary.Attributes["http_method"], rs.Primary.Attributes["status_code"]), nil + } +} + const testAccAWSAPIGatewayMethodResponseConfig = ` resource "aws_api_gateway_rest_api" "test" { name = "test" diff --git a/aws/resource_aws_api_gateway_method_settings_test.go b/aws/resource_aws_api_gateway_method_settings_test.go index 9372a6748cc..30633ff2b11 100644 --- a/aws/resource_aws_api_gateway_method_settings_test.go +++ b/aws/resource_aws_api_gateway_method_settings_test.go @@ -16,7 +16,7 @@ func TestAccAWSAPIGatewayMethodSettings_basic(t *testing.T) { var stage apigateway.Stage rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayMethodSettingsDestroy, diff --git a/aws/resource_aws_api_gateway_method_test.go b/aws/resource_aws_api_gateway_method_test.go index a6ca2d832d1..e699d387348 100644 --- a/aws/resource_aws_api_gateway_method_test.go +++ b/aws/resource_aws_api_gateway_method_test.go @@ -17,7 +17,7 @@ func TestAccAWSAPIGatewayMethod_basic(t *testing.T) { var conf apigateway.Method rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayMethodDestroy, @@ -35,6 +35,12 @@ func TestAccAWSAPIGatewayMethod_basic(t *testing.T) { "aws_api_gateway_method.test", "request_models.application/json", "Error"), ), }, + { + ResourceName: "aws_api_gateway_method.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayMethodImportStateIdFunc("aws_api_gateway_method.test"), + ImportStateVerify: true, + }, { Config: testAccAWSAPIGatewayMethodConfigUpdate(rInt), @@ -51,7 +57,7 @@ func TestAccAWSAPIGatewayMethod_customauthorizer(t *testing.T) { var conf apigateway.Method rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayMethodDestroy, @@ -71,6 +77,12 @@ func TestAccAWSAPIGatewayMethod_customauthorizer(t *testing.T) { "aws_api_gateway_method.test", "request_models.application/json", "Error"), ), }, + { + ResourceName: "aws_api_gateway_method.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayMethodImportStateIdFunc("aws_api_gateway_method.test"), + ImportStateVerify: true, + }, { Config: testAccAWSAPIGatewayMethodConfigUpdate(rInt), @@ -87,11 +99,63 @@ func TestAccAWSAPIGatewayMethod_customauthorizer(t *testing.T) { }) } +func TestAccAWSAPIGatewayMethod_cognitoauthorizer(t *testing.T) { + var conf apigateway.Method + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayMethodDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayMethodConfigWithCognitoAuthorizer(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayMethodExists("aws_api_gateway_method.test", &conf), + testAccCheckAWSAPIGatewayMethodAttributes(&conf), + resource.TestCheckResourceAttr( + "aws_api_gateway_method.test", "http_method", "GET"), + resource.TestCheckResourceAttr( + "aws_api_gateway_method.test", "authorization", "COGNITO_USER_POOLS"), + resource.TestMatchResourceAttr( + "aws_api_gateway_method.test", "authorizer_id", regexp.MustCompile("^[a-z0-9]{6}$")), + resource.TestCheckResourceAttr( + "aws_api_gateway_method.test", "request_models.application/json", "Error"), + resource.TestCheckResourceAttr( + "aws_api_gateway_method.test", "authorization_scopes.#", "2"), + ), + }, + + { + Config: testAccAWSAPIGatewayMethodConfigWithCognitoAuthorizerUpdate(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayMethodExists("aws_api_gateway_method.test", &conf), + testAccCheckAWSAPIGatewayMethodAttributesUpdate(&conf), + resource.TestCheckResourceAttr( + "aws_api_gateway_method.test", "authorization", "COGNITO_USER_POOLS"), + resource.TestMatchResourceAttr( + "aws_api_gateway_method.test", "authorizer_id", regexp.MustCompile("^[a-z0-9]{6}$")), + resource.TestCheckResourceAttr( + "aws_api_gateway_method.test", "request_models.application/json", "Error"), + resource.TestCheckResourceAttr( + "aws_api_gateway_method.test", "authorization_scopes.#", "3"), + ), + }, + { + ResourceName: "aws_api_gateway_method.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayMethodImportStateIdFunc("aws_api_gateway_method.test"), + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSAPIGatewayMethod_customrequestvalidator(t *testing.T) { var conf apigateway.Method rInt := acctest.RandInt() - 
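A note on check addressing in these steps: Terraform's flat attribute state exposes collection sizes as `<attr>.#` (lists and sets) or `<attr>.%` (maps), and map entries directly by key, which is why the cognito test below asserts `authorization_scopes.#` alongside `request_models.application/json`. A toy illustration using a plain map in place of the real state:

```go
package main

import "fmt"

// A rough illustration of how acceptance-test checks address collection
// attributes: "<attr>.#" is the element count for lists and sets, "<attr>.%"
// the key count for maps, and "<attr>.<key>" a map entry. Set elements are
// stored under hashed indexes, which is why the tests assert only the ".#"
// count for authorization_scopes.
func main() {
	state := map[string]string{
		"authorization_scopes.#":          "2",
		"request_models.%":                "1",
		"request_models.application/json": "Error",
	}

	fmt.Println("scopes count:", state["authorization_scopes.#"])
	fmt.Println("model for application/json:", state["request_models.application/json"])
}
```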
resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayMethodDestroy, @@ -111,6 +175,12 @@ func TestAccAWSAPIGatewayMethod_customrequestvalidator(t *testing.T) { "aws_api_gateway_method.test", "request_validator_id", regexp.MustCompile("^[a-z0-9]{6}$")), ), }, + { + ResourceName: "aws_api_gateway_method.test", + ImportState: true, + ImportStateIdFunc: testAccAWSAPIGatewayMethodImportStateIdFunc("aws_api_gateway_method.test"), + ImportStateVerify: true, + }, { Config: testAccAWSAPIGatewayMethodConfigWithCustomRequestValidatorUpdate(rInt), @@ -130,7 +200,7 @@ func testAccCheckAWSAPIGatewayMethodAttributes(conf *apigateway.Method) resource if *conf.HttpMethod != "GET" { return fmt.Errorf("Wrong HttpMethod: %q", *conf.HttpMethod) } - if *conf.AuthorizationType != "NONE" && *conf.AuthorizationType != "CUSTOM" { + if *conf.AuthorizationType != "NONE" && *conf.AuthorizationType != "CUSTOM" && *conf.AuthorizationType != "COGNITO_USER_POOLS" { return fmt.Errorf("Wrong Authorization: %q", *conf.AuthorizationType) } @@ -235,6 +305,17 @@ func testAccCheckAWSAPIGatewayMethodDestroy(s *terraform.State) error { return nil } +func testAccAWSAPIGatewayMethodImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s/%s", rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["resource_id"], rs.Primary.Attributes["http_method"]), nil + } +} + func testAccAWSAPIGatewayMethodConfigWithCustomAuthorizer(rInt int) string { return fmt.Sprintf(` resource "aws_api_gateway_rest_api" "test" { @@ -337,6 +418,171 @@ resource "aws_api_gateway_method" "test" { }`, rInt, rInt, rInt, rInt, rInt) } +func testAccAWSAPIGatewayMethodConfigWithCognitoAuthorizer(rInt int) string { + return fmt.Sprintf(` +resource "aws_api_gateway_rest_api" "test" { + name = "tf-acc-test-cognito-auth-%d" +} + +resource "aws_iam_role" "invocation_role" { + name = "tf_acc_api_gateway_auth_invocation_role-%d" + path = "/" + assume_role_policy = < 0 { + m := v.([]interface{})[0].(map[string]interface{}) + + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String("replace"), + Path: aws.String("/endpointConfiguration/types/0"), + Value: aws.String(m["types"].([]interface{})[0].(string)), + }) + } + } + return operations } @@ -239,7 +352,7 @@ func resourceAwsApiGatewayRestApiUpdate(d *schema.ResourceData, meta interface{} Body: []byte(body.(string)), }) if err != nil { - return errwrap.Wrapf("Error updating API Gateway specification: {{err}}", err) + return fmt.Errorf("error updating API Gateway specification: %s", err) } } } @@ -269,10 +382,36 @@ func resourceAwsApiGatewayRestApiDelete(d *schema.ResourceData, meta interface{} return nil } - if apigatewayErr, ok := err.(awserr.Error); ok && apigatewayErr.Code() == "NotFoundException" { + if isAWSErr(err, apigateway.ErrCodeNotFoundException, "") { return nil } return resource.NonRetryableError(err) }) } + +func expandApiGatewayEndpointConfiguration(l []interface{}) *apigateway.EndpointConfiguration { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + endpointConfiguration := &apigateway.EndpointConfiguration{ + Types: expandStringList(m["types"].([]interface{})), + } + + return endpointConfiguration +} + +func 
flattenApiGatewayEndpointConfiguration(endpointConfiguration *apigateway.EndpointConfiguration) []interface{} { + if endpointConfiguration == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "types": flattenStringList(endpointConfiguration.Types), + } + + return []interface{}{m} +} diff --git a/aws/resource_aws_api_gateway_rest_api_test.go b/aws/resource_aws_api_gateway_rest_api_test.go index 343f0704cac..e169a3b4023 100644 --- a/aws/resource_aws_api_gateway_rest_api_test.go +++ b/aws/resource_aws_api_gateway_rest_api_test.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -71,6 +72,10 @@ func testSweepAPIGatewayRestApis(region string) error { return !lastPage }) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping API Gateway REST API sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error retrieving API Gateway REST APIs: %s", err) } @@ -80,7 +85,7 @@ func testSweepAPIGatewayRestApis(region string) error { func TestAccAWSAPIGatewayRestApi_basic(t *testing.T) { var conf apigateway.RestApi - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayRestAPIDestroy, @@ -93,11 +98,19 @@ func TestAccAWSAPIGatewayRestApi_basic(t *testing.T) { testAccCheckAWSAPIGatewayRestAPIMinimumCompressionSizeAttribute(&conf, 0), resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "name", "bar"), resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "description", ""), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "api_key_source", "HEADER"), resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "minimum_compression_size", "0"), resource.TestCheckResourceAttrSet("aws_api_gateway_rest_api.test", "created_date"), + resource.TestCheckResourceAttrSet("aws_api_gateway_rest_api.test", "execution_arn"), resource.TestCheckNoResourceAttr("aws_api_gateway_rest_api.test", "binary_media_types"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.#", "1"), ), }, + { + ResourceName: "aws_api_gateway_rest_api.test", + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccAWSAPIGatewayRestAPIUpdateConfig, @@ -110,6 +123,7 @@ func TestAccAWSAPIGatewayRestApi_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "description", "test"), resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "minimum_compression_size", "10485760"), resource.TestCheckResourceAttrSet("aws_api_gateway_rest_api.test", "created_date"), + resource.TestCheckResourceAttrSet("aws_api_gateway_rest_api.test", "execution_arn"), resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "binary_media_types.#", "1"), resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "binary_media_types.0", "application/octet-stream"), ), @@ -127,10 +141,206 @@ func TestAccAWSAPIGatewayRestApi_basic(t *testing.T) { }) } +func TestAccAWSAPIGatewayRestApi_EndpointConfiguration(t *testing.T) { + var restApi apigateway.RestApi + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: 
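`expandApiGatewayEndpointConfiguration` and its `flatten` counterpart follow the usual pattern for a `MaxItems: 1` nested block: the configuration arrives as a zero- or one-element `[]interface{}` wrapping a `map[string]interface{}`, and is written back to state in the same shape. Below is a toy version of the same round trip, using a local struct in place of `apigateway.EndpointConfiguration`; the function and type names are illustrative only.

```go
package main

import "fmt"

// endpointConfiguration stands in for apigateway.EndpointConfiguration here;
// only the shape of the conversion matters for this sketch.
type endpointConfiguration struct {
	Types []string
}

func expandEndpointConfiguration(l []interface{}) *endpointConfiguration {
	if len(l) == 0 {
		return nil // block omitted in configuration
	}
	m := l[0].(map[string]interface{})

	raw := m["types"].([]interface{})
	types := make([]string, 0, len(raw))
	for _, t := range raw {
		types = append(types, t.(string))
	}
	return &endpointConfiguration{Types: types}
}

func flattenEndpointConfiguration(ec *endpointConfiguration) []interface{} {
	if ec == nil {
		return []interface{}{}
	}
	types := make([]interface{}, 0, len(ec.Types))
	for _, t := range ec.Types {
		types = append(types, t)
	}
	return []interface{}{map[string]interface{}{"types": types}}
}

func main() {
	cfg := []interface{}{map[string]interface{}{
		"types": []interface{}{"REGIONAL"},
	}}
	ec := expandEndpointConfiguration(cfg)
	fmt.Printf("%+v\n", ec)
	fmt.Println(flattenEndpointConfiguration(ec))
}
```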
testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayRestAPIDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayRestAPIConfig_EndpointConfiguration(rName, "REGIONAL"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayRestAPIExists("aws_api_gateway_rest_api.test", &restApi), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.#", "1"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.0.types.#", "1"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.0.types.0", "REGIONAL"), + ), + }, + { + ResourceName: "aws_api_gateway_rest_api.test", + ImportState: true, + ImportStateVerify: true, + }, + // For backwards compatibility, test removing endpoint_configuration, which should do nothing + { + Config: testAccAWSAPIGatewayRestAPIConfig_Name(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayRestAPIExists("aws_api_gateway_rest_api.test", &restApi), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.#", "1"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.0.types.#", "1"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.0.types.0", "REGIONAL"), + ), + }, + // Test updating endpoint type + { + PreConfig: func() { + // Ensure region supports EDGE endpoint + // This can eventually be moved to a PreCheck function + // If the region does not support EDGE endpoint type, this test will either show + // SKIP (if REGIONAL passed) or FAIL (if REGIONAL failed) + conn := testAccProvider.Meta().(*AWSClient).apigateway + output, err := conn.CreateRestApi(&apigateway.CreateRestApiInput{ + Name: aws.String(acctest.RandomWithPrefix("tf-acc-test-edge-endpoint-precheck")), + EndpointConfiguration: &apigateway.EndpointConfiguration{ + Types: []*string{aws.String("EDGE")}, + }, + }) + if err != nil { + if isAWSErr(err, apigateway.ErrCodeBadRequestException, "Endpoint Configuration type EDGE is not supported in this region") { + t.Skip("Region does not support EDGE endpoint type") + } + t.Fatal(err) + } + + // Be kind and rewind. 
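The `PreConfig` probes in these endpoint tests lean on the provider's `isAWSErr` helper, which is defined elsewhere in the repository rather than in this diff. As a rough sketch of what such a helper checks, the code below matches an error by code and message substring; `awsError`, `apiError`, and `isAWSErrSketch` are local stand-ins, not the provider's or the SDK's actual types.

```go
package main

import (
	"fmt"
	"strings"
)

// awsError is a minimal stand-in for the awserr.Error interface from the AWS
// SDK: just enough surface for this sketch.
type awsError interface {
	error
	Code() string
	Message() string
}

type apiError struct {
	code, message string
}

func (e apiError) Error() string   { return e.code + ": " + e.message }
func (e apiError) Code() string    { return e.code }
func (e apiError) Message() string { return e.message }

// isAWSErrSketch shows the shape of the check used by the PreConfig blocks
// above: match the error code exactly and, optionally, a substring of the
// error message.
func isAWSErrSketch(err error, code, message string) bool {
	if awsErr, ok := err.(awsError); ok {
		return awsErr.Code() == code && strings.Contains(awsErr.Message(), message)
	}
	return false
}

func main() {
	err := apiError{
		code:    "BadRequestException",
		message: "Endpoint Configuration type EDGE is not supported in this region",
	}

	fmt.Println(isAWSErrSketch(err, "BadRequestException", "EDGE is not supported")) // true
	fmt.Println(isAWSErrSketch(err, "NotFoundException", ""))                        // false
}
```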
:) + _, err = conn.DeleteRestApi(&apigateway.DeleteRestApiInput{ + RestApiId: output.Id, + }) + if err != nil { + t.Fatal(err) + } + }, + Config: testAccAWSAPIGatewayRestAPIConfig_EndpointConfiguration(rName, "EDGE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayRestAPIExists("aws_api_gateway_rest_api.test", &restApi), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.#", "1"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.0.types.#", "1"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.0.types.0", "EDGE"), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayRestApi_EndpointConfiguration_Private(t *testing.T) { + var restApi apigateway.RestApi + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayRestAPIDestroy, + Steps: []resource.TestStep{ + { + PreConfig: func() { + // Ensure region supports PRIVATE endpoint + // This can eventually be moved to a PreCheck function + conn := testAccProvider.Meta().(*AWSClient).apigateway + output, err := conn.CreateRestApi(&apigateway.CreateRestApiInput{ + Name: aws.String(acctest.RandomWithPrefix("tf-acc-test-private-endpoint-precheck")), + EndpointConfiguration: &apigateway.EndpointConfiguration{ + Types: []*string{aws.String("PRIVATE")}, + }, + }) + if err != nil { + if isAWSErr(err, apigateway.ErrCodeBadRequestException, "Endpoint Configuration type PRIVATE is not supported in this region") { + t.Skip("Region does not support PRIVATE endpoint type") + } + t.Fatal(err) + } + + // Be kind and rewind. :) + _, err = conn.DeleteRestApi(&apigateway.DeleteRestApiInput{ + RestApiId: output.Id, + }) + if err != nil { + t.Fatal(err) + } + }, + Config: testAccAWSAPIGatewayRestAPIConfig_EndpointConfiguration(rName, "PRIVATE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayRestAPIExists("aws_api_gateway_rest_api.test", &restApi), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.#", "1"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.0.types.#", "1"), + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "endpoint_configuration.0.types.0", "PRIVATE"), + ), + }, + { + ResourceName: "aws_api_gateway_rest_api.test", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSAPIGatewayRestApi_api_key_source(t *testing.T) { + expectedAPIKeySource := "HEADER" + expectedUpdateAPIKeySource := "AUTHORIZER" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayRestAPIDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayRestAPIConfigWithAPIKeySource, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "api_key_source", expectedAPIKeySource), + ), + }, + { + ResourceName: "aws_api_gateway_rest_api.test", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAPIGatewayRestAPIConfigWithUpdateAPIKeySource, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "api_key_source", expectedUpdateAPIKeySource), + ), + }, + { + Config: testAccAWSAPIGatewayRestAPIConfig, + Check: 
resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "api_key_source", expectedAPIKeySource), + ), + }, + }, + }) +} + +func TestAccAWSAPIGatewayRestApi_policy(t *testing.T) { + expectedPolicyText := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"*"},"Action":"execute-api:Invoke","Resource":"*","Condition":{"IpAddress":{"aws:SourceIp":"123.123.123.123/32"}}}]}` + expectedUpdatePolicyText := `{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Principal":{"AWS":"*"},"Action":"execute-api:Invoke","Resource":"*"}]}` + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayRestAPIDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayRestAPIConfigWithPolicy, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "policy", expectedPolicyText), + ), + }, + { + ResourceName: "aws_api_gateway_rest_api.test", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAPIGatewayRestAPIConfigUpdatePolicy, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "policy", expectedUpdatePolicyText), + ), + }, + { + Config: testAccAWSAPIGatewayRestAPIConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "policy", ""), + ), + }, + }, + }) +} + func TestAccAWSAPIGatewayRestApi_openapi(t *testing.T) { var conf apigateway.RestApi - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayRestAPIDestroy, @@ -144,10 +354,16 @@ func TestAccAWSAPIGatewayRestApi_openapi(t *testing.T) { resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "name", "test"), resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "description", ""), resource.TestCheckResourceAttrSet("aws_api_gateway_rest_api.test", "created_date"), + resource.TestCheckResourceAttrSet("aws_api_gateway_rest_api.test", "execution_arn"), resource.TestCheckNoResourceAttr("aws_api_gateway_rest_api.test", "binary_media_types"), ), }, - + { + ResourceName: "aws_api_gateway_rest_api.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"body"}, + }, { Config: testAccAWSAPIGatewayRestAPIUpdateConfigOpenAPI, Check: resource.ComposeTestCheckFunc( @@ -156,6 +372,7 @@ func TestAccAWSAPIGatewayRestApi_openapi(t *testing.T) { testAccCheckAWSAPIGatewayRestAPIRoutes(&conf, []string{"/", "/update"}), resource.TestCheckResourceAttr("aws_api_gateway_rest_api.test", "name", "test"), resource.TestCheckResourceAttrSet("aws_api_gateway_rest_api.test", "created_date"), + resource.TestCheckResourceAttrSet("aws_api_gateway_rest_api.test", "execution_arn"), ), }, }, @@ -298,6 +515,88 @@ resource "aws_api_gateway_rest_api" "test" { } ` +func testAccAWSAPIGatewayRestAPIConfig_EndpointConfiguration(rName, endpointType string) string { + return fmt.Sprintf(` +resource "aws_api_gateway_rest_api" "test" { + name = "%s" + + endpoint_configuration { + types = ["%s"] + } +} +`, rName, endpointType) +} + +func testAccAWSAPIGatewayRestAPIConfig_Name(rName string) string { + return fmt.Sprintf(` +resource "aws_api_gateway_rest_api" "test" { + name = "%s" +} +`, rName) +} + +const testAccAWSAPIGatewayRestAPIConfigWithAPIKeySource = ` 
+resource "aws_api_gateway_rest_api" "test" { + name = "bar" + api_key_source = "HEADER" +} +` +const testAccAWSAPIGatewayRestAPIConfigWithUpdateAPIKeySource = ` +resource "aws_api_gateway_rest_api" "test" { + name = "bar" + api_key_source = "AUTHORIZER" +} +` + +const testAccAWSAPIGatewayRestAPIConfigWithPolicy = ` +resource "aws_api_gateway_rest_api" "test" { + name = "bar" + minimum_compression_size = 0 + policy = < $context.identity.sourceIp $context.identity.caller $context.identity.user $context.requestTime $context.httpMethod $context.resourcePath $context.status $context.protocol $context.responseLength ` + csv := `$context.identity.sourceIp,$context.identity.caller,$context.identity.user,$context.requestTime,$context.httpMethod,$context.resourcePath,$context.protocol,$context.status,$context.responseLength,$context.requestId` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayStageDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAPIGatewayStageConfig_accessLogSettings(rName, clf), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayStageExists("aws_api_gateway_stage.test", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.#", "1"), + resource.TestMatchResourceAttr("aws_api_gateway_stage.test", "access_log_settings.0.destination_arn", logGroupArnRegex), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.0.format", clf), + ), + }, + + { + Config: testAccAWSAPIGatewayStageConfig_accessLogSettings(rName, json), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayStageExists("aws_api_gateway_stage.test", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.#", "1"), + resource.TestMatchResourceAttr("aws_api_gateway_stage.test", "access_log_settings.0.destination_arn", logGroupArnRegex), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.0.format", json), + ), + }, + { + Config: testAccAWSAPIGatewayStageConfig_accessLogSettings(rName, xml), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayStageExists("aws_api_gateway_stage.test", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.#", "1"), + resource.TestMatchResourceAttr("aws_api_gateway_stage.test", "access_log_settings.0.destination_arn", logGroupArnRegex), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.0.format", xml), + ), + }, + { + Config: testAccAWSAPIGatewayStageConfig_accessLogSettings(rName, csv), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayStageExists("aws_api_gateway_stage.test", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.#", "1"), + resource.TestMatchResourceAttr("aws_api_gateway_stage.test", "access_log_settings.0.destination_arn", logGroupArnRegex), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.0.format", csv), + ), + }, + { + Config: testAccAWSAPIGatewayStageConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAPIGatewayStageExists("aws_api_gateway_stage.test", &conf), + resource.TestCheckResourceAttr("aws_api_gateway_stage.test", "access_log_settings.#", "0"), ), }, }, @@ -108,9 +189,21 @@ func testAccCheckAWSAPIGatewayStageDestroy(s *terraform.State) error { return nil } -const 
testAccAWSAPIGatewayStageConfig_base = ` +func testAccAWSAPIGatewayStageImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s", rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["stage_name"]), nil + } +} + +func testAccAWSAPIGatewayStageConfig_base(rName string) string { + return fmt.Sprintf(` resource "aws_api_gateway_rest_api" "test" { - name = "tf-acc-test" + name = "tf-acc-test-%s" } resource "aws_api_gateway_resource" "test" { @@ -161,36 +254,73 @@ resource "aws_api_gateway_deployment" "dev" { "a" = "2" } } -` +`, rName) +} -func testAccAWSAPIGatewayStageConfig_basic() string { - return testAccAWSAPIGatewayStageConfig_base + ` +func testAccAWSAPIGatewayStageConfig_basic(rName string) string { + return testAccAWSAPIGatewayStageConfig_base(rName) + ` resource "aws_api_gateway_stage" "test" { rest_api_id = "${aws_api_gateway_rest_api.test.id}" stage_name = "prod" deployment_id = "${aws_api_gateway_deployment.dev.id}" cache_cluster_enabled = true cache_cluster_size = "0.5" + xray_tracing_enabled = true variables { one = "1" two = "2" } + tags { + Name = "tf-test" + } } ` } -func testAccAWSAPIGatewayStageConfig_updated() string { - return testAccAWSAPIGatewayStageConfig_base + ` +func testAccAWSAPIGatewayStageConfig_updated(rName string) string { + return testAccAWSAPIGatewayStageConfig_base(rName) + ` resource "aws_api_gateway_stage" "test" { rest_api_id = "${aws_api_gateway_rest_api.test.id}" stage_name = "prod" deployment_id = "${aws_api_gateway_deployment.dev.id}" cache_cluster_enabled = false description = "Hello world" + xray_tracing_enabled = false variables { one = "1" three = "3" } + tags { + Name = "tf-test" + ExtraName = "tf-test" + } } ` } + +func testAccAWSAPIGatewayStageConfig_accessLogSettings(rName string, format string) string { + return testAccAWSAPIGatewayStageConfig_base(rName) + fmt.Sprintf(` +resource "aws_cloudwatch_log_group" "test" { + name = "foo-bar-%s" +} + +resource "aws_api_gateway_stage" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "prod" + deployment_id = "${aws_api_gateway_deployment.dev.id}" + cache_cluster_enabled = true + cache_cluster_size = "0.5" + variables { + one = "1" + two = "2" + } + tags { + Name = "tf-test" + } + access_log_settings { + destination_arn = "${aws_cloudwatch_log_group.test.arn}" + format = %q + } +} +`, rName, format) +} diff --git a/aws/resource_aws_api_gateway_usage_plan.go b/aws/resource_aws_api_gateway_usage_plan.go index e7786b0458d..cf4653825cb 100644 --- a/aws/resource_aws_api_gateway_usage_plan.go +++ b/aws/resource_aws_api_gateway_usage_plan.go @@ -1,12 +1,12 @@ package aws import ( + "errors" "fmt" "log" "strconv" "time" - "errors" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/apigateway" @@ -251,19 +251,19 @@ func resourceAwsApiGatewayUsagePlanRead(d *schema.ResourceData, meta interface{} if up.ApiStages != nil { if err := d.Set("api_stages", flattenApiGatewayUsageApiStages(up.ApiStages)); err != nil { - return fmt.Errorf("[DEBUG] Error setting api_stages error: %#v", err) + return fmt.Errorf("Error setting api_stages error: %#v", err) } } if up.Throttle != nil { if err := d.Set("throttle_settings", flattenApiGatewayUsagePlanThrottling(up.Throttle)); err != nil { - return 
fmt.Errorf("[DEBUG] Error setting throttle_settings error: %#v", err) + return fmt.Errorf("Error setting throttle_settings error: %#v", err) } } if up.Quota != nil { if err := d.Set("quota_settings", flattenApiGatewayUsagePlanQuota(up.Quota)); err != nil { - return fmt.Errorf("[DEBUG] Error setting quota_settings error: %#v", err) + return fmt.Errorf("Error setting quota_settings error: %#v", err) } } diff --git a/aws/resource_aws_api_gateway_usage_plan_key_test.go b/aws/resource_aws_api_gateway_usage_plan_key_test.go index 608a88fd2a4..740ad8ed011 100644 --- a/aws/resource_aws_api_gateway_usage_plan_key_test.go +++ b/aws/resource_aws_api_gateway_usage_plan_key_test.go @@ -18,7 +18,7 @@ func TestAccAWSAPIGatewayUsagePlanKey_basic(t *testing.T) { name := acctest.RandString(10) updatedName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanKeyDestroy, diff --git a/aws/resource_aws_api_gateway_usage_plan_test.go b/aws/resource_aws_api_gateway_usage_plan_test.go index b0054db34cb..91f851a7123 100644 --- a/aws/resource_aws_api_gateway_usage_plan_test.go +++ b/aws/resource_aws_api_gateway_usage_plan_test.go @@ -12,12 +12,34 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSAPIGatewayUsagePlan_importBasic(t *testing.T) { + resourceName := "aws_api_gateway_usage_plan.main" + rName := acctest.RandString(10) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSApiGatewayUsagePlanBasicConfig(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSAPIGatewayUsagePlan_basic(t *testing.T) { var conf apigateway.UsagePlan name := acctest.RandString(10) updatedName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, @@ -53,7 +75,7 @@ func TestAccAWSAPIGatewayUsagePlan_description(t *testing.T) { var conf apigateway.UsagePlan name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, @@ -100,7 +122,7 @@ func TestAccAWSAPIGatewayUsagePlan_productCode(t *testing.T) { var conf apigateway.UsagePlan name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, @@ -147,7 +169,7 @@ func TestAccAWSAPIGatewayUsagePlan_throttling(t *testing.T) { var conf apigateway.UsagePlan name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, @@ -195,7 +217,7 @@ func TestAccAWSAPIGatewayUsagePlan_throttlingInitialRateLimit(t *testing.T) { var conf apigateway.UsagePlan name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, @@ -215,7 +237,7 @@ func TestAccAWSAPIGatewayUsagePlan_quota(t *testing.T) { var conf apigateway.UsagePlan name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, @@ -264,7 +286,7 @@ func TestAccAWSAPIGatewayUsagePlan_apiStages(t *testing.T) { var conf apigateway.UsagePlan name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAPIGatewayUsagePlanDestroy, diff --git a/aws/resource_aws_api_gateway_vpc_link.go b/aws/resource_aws_api_gateway_vpc_link.go index 62f10d8a502..3d4f2676bd7 100644 --- a/aws/resource_aws_api_gateway_vpc_link.go +++ b/aws/resource_aws_api_gateway_vpc_link.go @@ -17,6 +17,9 @@ func resourceAwsApiGatewayVpcLink() *schema.Resource { Read: resourceAwsApiGatewayVpcLinkRead, Update: resourceAwsApiGatewayVpcLinkUpdate, Delete: resourceAwsApiGatewayVpcLinkDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ "name": { @@ -67,7 +70,7 @@ func resourceAwsApiGatewayVpcLinkCreate(d *schema.ResourceData, meta interface{} _, err = stateConf.WaitForState() if err != nil { d.SetId("") - return fmt.Errorf("[WARN] Error waiting for APIGateway Vpc Link status to be \"%s\": %s", apigateway.VpcLinkStatusAvailable, err) + return fmt.Errorf("Error waiting for APIGateway Vpc Link status to be \"%s\": %s", apigateway.VpcLinkStatusAvailable, err) } return nil @@ -142,7 +145,7 @@ func resourceAwsApiGatewayVpcLinkUpdate(d *schema.ResourceData, meta interface{} _, err = stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for APIGateway Vpc Link status to be \"%s\": %s", apigateway.VpcLinkStatusAvailable, err) + return fmt.Errorf("Error waiting for APIGateway Vpc Link status to be \"%s\": %s", apigateway.VpcLinkStatusAvailable, err) } return nil diff --git a/aws/resource_aws_api_gateway_vpc_link_test.go b/aws/resource_aws_api_gateway_vpc_link_test.go index 6d8e046980b..fddfa65824a 100644 --- a/aws/resource_aws_api_gateway_vpc_link_test.go +++ b/aws/resource_aws_api_gateway_vpc_link_test.go @@ -13,7 +13,7 @@ import ( func TestAccAWSAPIGatewayVpcLink_basic(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsAPIGatewayVpcLinkDestroy, @@ -40,6 +40,26 @@ func TestAccAWSAPIGatewayVpcLink_basic(t *testing.T) { }) } +func TestAccAWSAPIGatewayVpcLink_importBasic(t *testing.T) { + rName := acctest.RandString(5) + resourceName := "aws_api_gateway_vpc_link.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAPIGatewayVpcLinkConfig(rName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccCheckAwsAPIGatewayVpcLinkDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).apigateway diff --git 
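// Reviewer note: aws_api_gateway_vpc_link can use the stock passthrough importer added
// above because its ID alone identifies the resource; contrast this with
// aws_api_gateway_stage earlier in the diff, which needs an ImportStateIdFunc to build a
// composite rest_api_id/stage_name ID. The passthrough wiring is simply:
//
//	Importer: &schema.ResourceImporter{
//		State: schema.ImportStatePassthrough,
//	},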
a/aws/resource_aws_app_cookie_stickiness_policy.go b/aws/resource_aws_app_cookie_stickiness_policy.go index 76deaa08e68..7f0f5d04861 100644 --- a/aws/resource_aws_app_cookie_stickiness_policy.go +++ b/aws/resource_aws_app_cookie_stickiness_policy.go @@ -22,7 +22,7 @@ func resourceAwsAppCookieStickinessPolicy() *schema.Resource { Delete: resourceAwsAppCookieStickinessPolicyDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -36,19 +36,19 @@ func resourceAwsAppCookieStickinessPolicy() *schema.Resource { }, }, - "load_balancer": &schema.Schema{ + "load_balancer": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "lb_port": &schema.Schema{ + "lb_port": { Type: schema.TypeInt, Required: true, ForceNew: true, }, - "cookie_name": &schema.Schema{ + "cookie_name": { Type: schema.TypeString, Required: true, ForceNew: true, diff --git a/aws/resource_aws_app_cookie_stickiness_policy_test.go b/aws/resource_aws_app_cookie_stickiness_policy_test.go index ed0d25a4699..ef07e1574a6 100644 --- a/aws/resource_aws_app_cookie_stickiness_policy_test.go +++ b/aws/resource_aws_app_cookie_stickiness_policy_test.go @@ -16,12 +16,12 @@ import ( func TestAccAWSAppCookieStickinessPolicy_basic(t *testing.T) { lbName := fmt.Sprintf("tf-test-lb-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAppCookieStickinessPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAppCookieStickinessPolicyConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAppCookieStickinessPolicy( @@ -30,7 +30,7 @@ func TestAccAWSAppCookieStickinessPolicy_basic(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccAppCookieStickinessPolicyConfigUpdate(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAppCookieStickinessPolicy( @@ -57,12 +57,12 @@ func TestAccAWSAppCookieStickinessPolicy_missingLB(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAppCookieStickinessPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAppCookieStickinessPolicyConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAppCookieStickinessPolicy( @@ -71,7 +71,7 @@ func TestAccAWSAppCookieStickinessPolicy_missingLB(t *testing.T) { ), ), }, - resource.TestStep{ + { PreConfig: removeLB, Config: testAccAppCookieStickinessPolicyConfigDestroy(lbName), }, @@ -159,12 +159,12 @@ func TestAccAWSAppCookieStickinessPolicy_drift(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAppCookieStickinessPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAppCookieStickinessPolicyConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAppCookieStickinessPolicy( @@ -173,7 +173,7 @@ func TestAccAWSAppCookieStickinessPolicy_drift(t *testing.T) { ), ), }, - resource.TestStep{ + { PreConfig: removePolicy, Config: testAccAppCookieStickinessPolicyConfig(lbName), Check: resource.ComposeTestCheckFunc( diff --git a/aws/resource_aws_appautoscaling_policy.go b/aws/resource_aws_appautoscaling_policy.go index 
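// Reviewer note: aside from the resource.Test -> resource.ParallelTest switch, the changes
// in this file are pure composite-literal simplification (gofmt -s style); the element type
// is already implied by the enclosing map or slice, so behavior is unchanged:
//
//	"name": &schema.Schema{ ... }   // before
//	"name": { ... }                 // after
//
//	Steps: []resource.TestStep{
//		resource.TestStep{ ... },   // before
//		{ ... },                    // after
//	}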
8e885662ebf..324223b122d 100644 --- a/aws/resource_aws_appautoscaling_policy.go +++ b/aws/resource_aws_appautoscaling_policy.go @@ -23,32 +23,32 @@ func resourceAwsAppautoscalingPolicy() *schema.Resource { Delete: resourceAwsAppautoscalingPolicyDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, // https://github.com/boto/botocore/blob/9f322b1/botocore/data/autoscaling/2011-01-01/service-2.json#L1862-L1873 - ValidateFunc: validateMaxLength(255), + ValidateFunc: validation.StringLenBetween(0, 255), }, - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "policy_type": &schema.Schema{ + "policy_type": { Type: schema.TypeString, Optional: true, Default: "StepScaling", }, - "resource_id": &schema.Schema{ + "resource_id": { Type: schema.TypeString, Required: true, }, - "scalable_dimension": &schema.Schema{ + "scalable_dimension": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "service_namespace": &schema.Schema{ + "service_namespace": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -62,34 +62,32 @@ func resourceAwsAppautoscalingPolicy() *schema.Resource { Type: schema.TypeString, Optional: true, }, - "cooldown": &schema.Schema{ + "cooldown": { Type: schema.TypeInt, Optional: true, }, - "metric_aggregation_type": &schema.Schema{ + "metric_aggregation_type": { Type: schema.TypeString, Optional: true, }, - "min_adjustment_magnitude": &schema.Schema{ + "min_adjustment_magnitude": { Type: schema.TypeInt, Optional: true, }, - "step_adjustment": &schema.Schema{ + "step_adjustment": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "metric_interval_lower_bound": &schema.Schema{ - Type: schema.TypeFloat, + "metric_interval_lower_bound": { + Type: schema.TypeString, Optional: true, - Default: -1, }, - "metric_interval_upper_bound": &schema.Schema{ - Type: schema.TypeFloat, + "metric_interval_upper_bound": { + Type: schema.TypeString, Optional: true, - Default: -1, }, - "scaling_adjustment": &schema.Schema{ + "scaling_adjustment": { Type: schema.TypeInt, Required: true, }, @@ -99,49 +97,49 @@ func resourceAwsAppautoscalingPolicy() *schema.Resource { }, }, }, - "alarms": &schema.Schema{ + "alarms": { Type: schema.TypeList, Optional: true, ForceNew: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "adjustment_type": &schema.Schema{ + "adjustment_type": { Type: schema.TypeString, Optional: true, Deprecated: "Use step_scaling_policy_configuration -> adjustment_type instead", }, - "cooldown": &schema.Schema{ + "cooldown": { Type: schema.TypeInt, Optional: true, Deprecated: "Use step_scaling_policy_configuration -> cooldown instead", }, - "metric_aggregation_type": &schema.Schema{ + "metric_aggregation_type": { Type: schema.TypeString, Optional: true, Deprecated: "Use step_scaling_policy_configuration -> metric_aggregation_type instead", }, - "min_adjustment_magnitude": &schema.Schema{ + "min_adjustment_magnitude": { Type: schema.TypeInt, Optional: true, Deprecated: "Use step_scaling_policy_configuration -> min_adjustment_magnitude instead", }, - "step_adjustment": &schema.Schema{ + "step_adjustment": { Type: schema.TypeSet, Optional: true, Deprecated: "Use step_scaling_policy_configuration -> step_adjustment instead", Set: resourceAwsAppautoscalingAdjustmentHash, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "metric_interval_lower_bound": &schema.Schema{ + 
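// Reviewer note (illustrative, not part of the patch): two schema changes worth calling out
// in this resource:
//   - validateMaxLength(255) becomes validation.StringLenBetween(0, 255), the equivalent
//     check from the SDK's validation package;
//   - metric_interval_lower_bound / metric_interval_upper_bound switch from TypeFloat with
//     Default: -1 to TypeString with no default, so an empty string can mean "unbounded"
//     and genuinely negative bounds (needed for the scale-in steps tested further below)
//     become representable; the old float path only set a bound when it was >= 0.
//     The expand function later in this diff parses the strings roughly like:
//
//	if v, ok := data["metric_interval_lower_bound"].(string); ok && v != "" {
//		f, err := strconv.ParseFloat(v, 64)
//		if err != nil {
//			return nil, fmt.Errorf("metric_interval_lower_bound must be a float value represented as a string")
//		}
//		a.MetricIntervalLowerBound = aws.Float64(f)
//	}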
"metric_interval_lower_bound": { Type: schema.TypeString, Optional: true, }, - "metric_interval_upper_bound": &schema.Schema{ + "metric_interval_upper_bound": { Type: schema.TypeString, Optional: true, }, - "scaling_adjustment": &schema.Schema{ + "scaling_adjustment": { Type: schema.TypeInt, Required: true, }, @@ -154,14 +152,14 @@ func resourceAwsAppautoscalingPolicy() *schema.Resource { Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "customized_metric_specification": &schema.Schema{ + "customized_metric_specification": { Type: schema.TypeList, MaxItems: 1, Optional: true, ConflictsWith: []string{"target_tracking_scaling_policy_configuration.0.predefined_metric_specification"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "dimensions": &schema.Schema{ + "dimensions": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ @@ -177,15 +175,15 @@ func resourceAwsAppautoscalingPolicy() *schema.Resource { }, }, }, - "metric_name": &schema.Schema{ + "metric_name": { Type: schema.TypeString, Required: true, }, - "namespace": &schema.Schema{ + "namespace": { Type: schema.TypeString, Required: true, }, - "statistic": &schema.Schema{ + "statistic": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringInSlice([]string{ @@ -196,46 +194,46 @@ func resourceAwsAppautoscalingPolicy() *schema.Resource { applicationautoscaling.MetricStatisticSum, }, false), }, - "unit": &schema.Schema{ + "unit": { Type: schema.TypeString, Optional: true, }, }, }, }, - "predefined_metric_specification": &schema.Schema{ + "predefined_metric_specification": { Type: schema.TypeList, MaxItems: 1, Optional: true, ConflictsWith: []string{"target_tracking_scaling_policy_configuration.0.customized_metric_specification"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "predefined_metric_type": &schema.Schema{ + "predefined_metric_type": { Type: schema.TypeString, Required: true, }, - "resource_label": &schema.Schema{ + "resource_label": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(1023), + ValidateFunc: validation.StringLenBetween(0, 1023), }, }, }, }, - "disable_scale_in": &schema.Schema{ + "disable_scale_in": { Type: schema.TypeBool, Default: false, Optional: true, }, - "scale_in_cooldown": &schema.Schema{ + "scale_in_cooldown": { Type: schema.TypeInt, Optional: true, }, - "scale_out_cooldown": &schema.Schema{ + "scale_out_cooldown": { Type: schema.TypeInt, Optional: true, }, - "target_value": &schema.Schema{ + "target_value": { Type: schema.TypeFloat, Required: true, }, @@ -256,7 +254,7 @@ func resourceAwsAppautoscalingPolicyCreate(d *schema.ResourceData, meta interfac log.Printf("[DEBUG] ApplicationAutoScaling PutScalingPolicy: %#v", params) var resp *applicationautoscaling.PutScalingPolicyOutput - err = resource.Retry(1*time.Minute, func() *resource.RetryError { + err = resource.Retry(2*time.Minute, func() *resource.RetryError { var err error resp, err = conn.PutScalingPolicy(¶ms) if err != nil { @@ -285,10 +283,23 @@ func resourceAwsAppautoscalingPolicyCreate(d *schema.ResourceData, meta interfac } func resourceAwsAppautoscalingPolicyRead(d *schema.ResourceData, meta interface{}) error { - p, err := getAwsAppautoscalingPolicy(d, meta) + var p *applicationautoscaling.ScalingPolicy + + err := resource.Retry(2*time.Minute, func() *resource.RetryError { + var err error + p, err = getAwsAppautoscalingPolicy(d, meta) + if err != nil { + if isAWSErr(err, applicationautoscaling.ErrCodeFailedResourceAccessException, 
"") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) if err != nil { - return err + return fmt.Errorf("Failed to read scaling policy: %s", err) } + if p == nil { log.Printf("[WARN] Application AutoScaling Policy (%s) not found, removing from state", d.Id()) d.SetId("") @@ -320,7 +331,16 @@ func resourceAwsAppautoscalingPolicyUpdate(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] Application Autoscaling Update Scaling Policy: %#v", params) - _, err := conn.PutScalingPolicy(¶ms) + err := resource.Retry(2*time.Minute, func() *resource.RetryError { + _, err := conn.PutScalingPolicy(¶ms) + if err != nil { + if isAWSErr(err, applicationautoscaling.ErrCodeFailedResourceAccessException, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) if err != nil { return fmt.Errorf("Failed to update scaling policy: %s", err) } @@ -345,11 +365,19 @@ func resourceAwsAppautoscalingPolicyDelete(d *schema.ResourceData, meta interfac ServiceNamespace: aws.String(d.Get("service_namespace").(string)), } log.Printf("[DEBUG] Deleting Application AutoScaling Policy opts: %#v", params) - if _, err := conn.DeleteScalingPolicy(¶ms); err != nil { - return fmt.Errorf("Failed to delete autoscaling policy: %s", err) + err = resource.Retry(2*time.Minute, func() *resource.RetryError { + _, err = conn.DeleteScalingPolicy(¶ms) + if err != nil { + if isAWSErr(err, applicationautoscaling.ErrCodeFailedResourceAccessException, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Failed to delete scaling policy: %s", err) } - - d.SetId("") return nil } @@ -372,10 +400,6 @@ func expandAppautoscalingStepAdjustments(configured []interface{}) ([]*applicati if data["metric_interval_lower_bound"] != "" { bound := data["metric_interval_lower_bound"] switch bound := bound.(type) { - case float64: - if bound >= 0 { - a.MetricIntervalLowerBound = aws.Float64(bound) - } case string: f, err := strconv.ParseFloat(bound, 64) if err != nil { @@ -391,10 +415,6 @@ func expandAppautoscalingStepAdjustments(configured []interface{}) ([]*applicati if data["metric_interval_upper_bound"] != "" { bound := data["metric_interval_upper_bound"] switch bound := bound.(type) { - case float64: - if bound >= 0 { - a.MetricIntervalUpperBound = aws.Float64(bound) - } case string: f, err := strconv.ParseFloat(bound, 64) if err != nil { diff --git a/aws/resource_aws_appautoscaling_policy_test.go b/aws/resource_aws_appautoscaling_policy_test.go index 8400ced6eb4..2b6f77f1c5a 100644 --- a/aws/resource_aws_appautoscaling_policy_test.go +++ b/aws/resource_aws_appautoscaling_policy_test.go @@ -17,12 +17,12 @@ func TestAccAWSAppautoScalingPolicy_basic(t *testing.T) { randClusterName := fmt.Sprintf("cluster%s", acctest.RandString(10)) randPolicyName := fmt.Sprintf("terraform-test-foobar-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAppautoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAppautoscalingPolicyConfig(randClusterName, randPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAppautoscalingPolicyExists("aws_appautoscaling_policy.foobar_simple", &policy), @@ -48,21 +48,21 @@ func TestAccAWSAppautoScalingPolicy_nestedSchema(t 
*testing.T) { randClusterName := fmt.Sprintf("cluster%s", acctest.RandString(10)) randPolicyName := fmt.Sprintf("terraform-test-foobar-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAppautoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAppautoscalingPolicyNestedSchemaConfig(randClusterName, randPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAppautoscalingPolicyExists("aws_appautoscaling_policy.foobar_simple", &policy), resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.adjustment_type", "PercentChangeInCapacity"), resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.cooldown", "60"), resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.step_adjustment.#", "1"), - resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.step_adjustment.2252990027.scaling_adjustment", "1"), - resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.step_adjustment.2252990027.metric_interval_lower_bound", "1"), - resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.step_adjustment.2252990027.metric_interval_upper_bound", "-1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.step_adjustment.1704088838.scaling_adjustment", "1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.step_adjustment.1704088838.metric_interval_lower_bound", "1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "step_scaling_policy_configuration.0.step_adjustment.1704088838.metric_interval_upper_bound", ""), resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "name", randPolicyName), resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "policy_type", "StepScaling"), resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_simple", "resource_id", fmt.Sprintf("service/%s/foobar", randClusterName)), @@ -74,17 +74,73 @@ func TestAccAWSAppautoScalingPolicy_nestedSchema(t *testing.T) { }) } +func TestAccAWSAppautoScalingPolicy_scaleOutAndIn(t *testing.T) { + var policy applicationautoscaling.ScalingPolicy + + randClusterName := fmt.Sprintf("cluster%s", acctest.RandString(10)) + randPolicyNamePrefix := fmt.Sprintf("terraform-test-foobar-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAppautoscalingPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAppautoscalingPolicyScaleOutAndInConfig(randClusterName, randPolicyNamePrefix), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAppautoscalingPolicyExists("aws_appautoscaling_policy.foobar_out", &policy), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.adjustment_type", "PercentChangeInCapacity"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.cooldown", 
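// Reviewer note (illustrative; not the provider's exact hash function): the numeric keys
// such as "1704088838" in the checks above and below are schema.Set element hashes
// (step_adjustment uses Set: resourceAwsAppautoscalingAdjustmentHash), so they change
// whenever an element's attributes change type or value, which is why the expected indices
// move when the bounds become strings. Such a hash is typically derived from the element's
// fields, along these lines:
//
//	func exampleAdjustmentHash(v interface{}) int {
//		m := v.(map[string]interface{})
//		var buf bytes.Buffer
//		buf.WriteString(fmt.Sprintf("%s-", m["metric_interval_lower_bound"].(string)))
//		buf.WriteString(fmt.Sprintf("%s-", m["metric_interval_upper_bound"].(string)))
//		buf.WriteString(fmt.Sprintf("%d-", m["scaling_adjustment"].(int)))
//		return hashcode.String(buf.String())
//	}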
"60"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.#", "3"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.2218643358.metric_interval_lower_bound", "3"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.2218643358.metric_interval_upper_bound", ""), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.2218643358.scaling_adjustment", "3"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.594919880.metric_interval_lower_bound", "1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.594919880.metric_interval_upper_bound", "3"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.594919880.scaling_adjustment", "2"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.2601972131.metric_interval_lower_bound", "0"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.2601972131.metric_interval_upper_bound", "1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "step_scaling_policy_configuration.0.step_adjustment.2601972131.scaling_adjustment", "1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "name", fmt.Sprintf("%s-out", randPolicyNamePrefix)), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "policy_type", "StepScaling"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "resource_id", fmt.Sprintf("service/%s/foobar", randClusterName)), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "service_namespace", "ecs"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_out", "scalable_dimension", "ecs:service:DesiredCount"), + testAccCheckAWSAppautoscalingPolicyExists("aws_appautoscaling_policy.foobar_in", &policy), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.adjustment_type", "PercentChangeInCapacity"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.cooldown", "60"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.#", "3"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.3898905432.metric_interval_lower_bound", "-1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.3898905432.metric_interval_upper_bound", "0"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.3898905432.scaling_adjustment", "-1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.386467692.metric_interval_lower_bound", "-3"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", 
"step_scaling_policy_configuration.0.step_adjustment.386467692.metric_interval_upper_bound", "-1"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.386467692.scaling_adjustment", "-2"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.602910043.metric_interval_lower_bound", ""), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.602910043.metric_interval_upper_bound", "-3"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "step_scaling_policy_configuration.0.step_adjustment.602910043.scaling_adjustment", "-3"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "name", fmt.Sprintf("%s-in", randPolicyNamePrefix)), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "policy_type", "StepScaling"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "resource_id", fmt.Sprintf("service/%s/foobar", randClusterName)), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "service_namespace", "ecs"), + resource.TestCheckResourceAttr("aws_appautoscaling_policy.foobar_in", "scalable_dimension", "ecs:service:DesiredCount"), + ), + }, + }, + }) +} + func TestAccAWSAppautoScalingPolicy_spotFleetRequest(t *testing.T) { var policy applicationautoscaling.ScalingPolicy randPolicyName := fmt.Sprintf("test-appautoscaling-policy-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAppautoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAppautoscalingPolicySpotFleetRequestConfig(randPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAppautoscalingPolicyExists("aws_appautoscaling_policy.test", &policy), @@ -104,12 +160,12 @@ func TestAccAWSAppautoScalingPolicy_dynamoDb(t *testing.T) { randPolicyName := fmt.Sprintf("test-appautoscaling-policy-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAppautoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAppautoscalingPolicyDynamoDB(randPolicyName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAppautoscalingPolicyExists("aws_appautoscaling_policy.dynamo_test", &policy), @@ -130,12 +186,12 @@ func TestAccAWSAppautoScalingPolicy_multiplePoliciesSameName(t *testing.T) { tableName2 := fmt.Sprintf("tf-autoscaled-table-%s", acctest.RandString(5)) namePrefix := fmt.Sprintf("tf-appautoscaling-policy-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAppautoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAppautoscalingPolicy_multiplePoliciesSameName(tableName1, tableName2, namePrefix), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAppautoscalingPolicyExists("aws_appautoscaling_policy.read1", &readPolicy1), @@ -162,12 +218,12 @@ func TestAccAWSAppautoScalingPolicy_multiplePoliciesSameResource(t *testing.T) { tableName := 
fmt.Sprintf("tf-autoscaled-table-%s", acctest.RandString(5)) namePrefix := fmt.Sprintf("tf-appautoscaling-policy-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAppautoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAppautoscalingPolicy_multiplePoliciesSameResource(tableName, namePrefix), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAppautoscalingPolicyExists("aws_appautoscaling_policy.read", &readPolicy), @@ -451,7 +507,7 @@ resource "aws_appautoscaling_policy" "dynamo_test" { predefined_metric_specification { predefined_metric_type = "DynamoDBWriteCapacityUtilization" } - + scale_in_cooldown = 10 scale_out_cooldown = 10 target_value = 70 @@ -604,3 +660,108 @@ resource "aws_appautoscaling_policy" "read" { } `, tableName, namePrefix, namePrefix) } + +func testAccAWSAppautoscalingPolicyScaleOutAndInConfig( + randClusterName string, + randPolicyNamePrefix string) string { + return fmt.Sprintf(` +resource "aws_ecs_cluster" "foo" { + name = "%s" +} + +resource "aws_ecs_task_definition" "task" { + family = "foobar" + container_definitions = < 0 { + encryptionConfig.KmsKey = aws.String(keyID) + } + + resultConfig.EncryptionConfiguration = &encryptionConfig + + return &resultConfig, nil +} + func resourceAwsAthenaDatabaseCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).athenaconn + resultConfig, err := expandAthenaResultConfiguration(d.Get("bucket").(string), d.Get("encryption_configuration").([]interface{})) + if err != nil { + return err + } + input := &athena.StartQueryExecutionInput{ - QueryString: aws.String(fmt.Sprintf("create database %s;", d.Get("name").(string))), - ResultConfiguration: &athena.ResultConfiguration{ - OutputLocation: aws.String("s3://" + d.Get("bucket").(string)), - }, + QueryString: aws.String(fmt.Sprintf("create database `%s`;", d.Get("name").(string))), + ResultConfiguration: resultConfig, } resp, err := conn.StartQueryExecution(input) @@ -53,7 +108,7 @@ func resourceAwsAthenaDatabaseCreate(d *schema.ResourceData, meta interface{}) e return err } - if err := executeAndExpectNoRowsWhenCreate(*resp.QueryExecutionId, d, conn); err != nil { + if err := executeAndExpectNoRowsWhenCreate(*resp.QueryExecutionId, conn); err != nil { return err } d.SetId(d.Get("name").(string)) @@ -63,12 +118,14 @@ func resourceAwsAthenaDatabaseCreate(d *schema.ResourceData, meta interface{}) e func resourceAwsAthenaDatabaseRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).athenaconn - bucket := d.Get("bucket").(string) + resultConfig, err := expandAthenaResultConfiguration(d.Get("bucket").(string), d.Get("encryption_configuration").([]interface{})) + if err != nil { + return err + } + input := &athena.StartQueryExecutionInput{ - QueryString: aws.String(fmt.Sprint("show databases;")), - ResultConfiguration: &athena.ResultConfiguration{ - OutputLocation: aws.String("s3://" + bucket), - }, + QueryString: aws.String(fmt.Sprint("show databases;")), + ResultConfiguration: resultConfig, } resp, err := conn.StartQueryExecution(input) @@ -89,20 +146,22 @@ func resourceAwsAthenaDatabaseUpdate(d *schema.ResourceData, meta interface{}) e func resourceAwsAthenaDatabaseDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).athenaconn + resultConfig, err := 
expandAthenaResultConfiguration(d.Get("bucket").(string), d.Get("encryption_configuration").([]interface{})) + if err != nil { + return err + } + name := d.Get("name").(string) - bucket := d.Get("bucket").(string) - queryString := fmt.Sprintf("drop database %s", name) + queryString := fmt.Sprintf("drop database `%s`", name) if d.Get("force_destroy").(bool) { queryString += " cascade" } queryString += ";" input := &athena.StartQueryExecutionInput{ - QueryString: aws.String(queryString), - ResultConfiguration: &athena.ResultConfiguration{ - OutputLocation: aws.String("s3://" + bucket), - }, + QueryString: aws.String(queryString), + ResultConfiguration: resultConfig, } resp, err := conn.StartQueryExecution(input) @@ -110,19 +169,19 @@ func resourceAwsAthenaDatabaseDelete(d *schema.ResourceData, meta interface{}) e return err } - if err := executeAndExpectNoRowsWhenDrop(*resp.QueryExecutionId, d, conn); err != nil { + if err := executeAndExpectNoRowsWhenDrop(*resp.QueryExecutionId, conn); err != nil { return err } return nil } -func executeAndExpectNoRowsWhenCreate(qeid string, d *schema.ResourceData, conn *athena.Athena) error { +func executeAndExpectNoRowsWhenCreate(qeid string, conn *athena.Athena) error { rs, err := queryExecutionResult(qeid, conn) if err != nil { return err } if len(rs.Rows) != 0 { - return fmt.Errorf("[ERROR] Athena create database, unexpected query result: %s", flattenAthenaResultSet(rs)) + return fmt.Errorf("Athena create database, unexpected query result: %s", flattenAthenaResultSet(rs)) } return nil } @@ -139,16 +198,16 @@ func executeAndExpectMatchingRow(qeid string, dbName string, conn *athena.Athena } } } - return fmt.Errorf("[ERROR] Athena not found database: %s, query result: %s", dbName, flattenAthenaResultSet(rs)) + return fmt.Errorf("Athena not found database: %s, query result: %s", dbName, flattenAthenaResultSet(rs)) } -func executeAndExpectNoRowsWhenDrop(qeid string, d *schema.ResourceData, conn *athena.Athena) error { +func executeAndExpectNoRowsWhenDrop(qeid string, conn *athena.Athena) error { rs, err := queryExecutionResult(qeid, conn) if err != nil { return err } if len(rs.Rows) != 0 { - return fmt.Errorf("[ERROR] Athena drop database, unexpected query result: %s", flattenAthenaResultSet(rs)) + return fmt.Errorf("Athena drop database, unexpected query result: %s", flattenAthenaResultSet(rs)) } return nil } diff --git a/aws/resource_aws_athena_database_test.go b/aws/resource_aws_athena_database_test.go index dca9a236e07..f5ec357646d 100644 --- a/aws/resource_aws_athena_database_test.go +++ b/aws/resource_aws_athena_database_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "regexp" "testing" "github.com/aws/aws-sdk-go/aws" @@ -15,13 +16,13 @@ import ( func TestAccAWSAthenaDatabase_basic(t *testing.T) { rInt := acctest.RandInt() dbName := acctest.RandString(8) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAthenaDatabaseDestroy, Steps: []resource.TestStep{ { - Config: testAccAthenaDatabaseConfig(rInt, dbName), + Config: testAccAthenaDatabaseConfig(rInt, dbName, false), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAthenaDatabaseExists("aws_athena_database.hoge"), ), @@ -30,16 +31,70 @@ func TestAccAWSAthenaDatabase_basic(t *testing.T) { }) } +func TestAccAWSAthenaDatabase_encryption(t *testing.T) { + rInt := acctest.RandInt() + dbName := acctest.RandString(8) + resource.ParallelTest(t, 
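// Reviewer note: two related cleanups in this resource: the executeAndExpect* helpers drop
// their unused *schema.ResourceData parameter, and create, read and delete now all build
// their ResultConfiguration through the shared expandAthenaResultConfiguration above, so
// the optional encryption_configuration (for example SSE_KMS with a KMS key, as exercised
// by the new acceptance test below) is honored for every query the resource issues.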
resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAthenaDatabaseDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAthenaDatabaseWithKMSConfig(rInt, dbName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAthenaDatabaseExists("aws_athena_database.hoge"), + resource.TestCheckResourceAttr("aws_athena_database.hoge", "encryption_configuration.0.encryption_option", "SSE_KMS"), + ), + }, + }, + }) +} + +func TestAccAWSAthenaDatabase_nameStartsWithUnderscore(t *testing.T) { + rInt := acctest.RandInt() + dbName := "_" + acctest.RandString(8) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAthenaDatabaseDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAthenaDatabaseConfig(rInt, dbName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAthenaDatabaseExists("aws_athena_database.hoge"), + resource.TestCheckResourceAttr("aws_athena_database.hoge", "name", dbName), + ), + }, + }, + }) +} + +func TestAccAWSAthenaDatabase_nameCantHaveUppercase(t *testing.T) { + rInt := acctest.RandInt() + dbName := "A" + acctest.RandString(8) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAthenaDatabaseDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAthenaDatabaseConfig(rInt, dbName, false), + ExpectError: regexp.MustCompile(`see .*\.com`), + }, + }, + }) +} + func TestAccAWSAthenaDatabase_destroyFailsIfTablesExist(t *testing.T) { rInt := acctest.RandInt() dbName := acctest.RandString(8) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAthenaDatabaseDestroy, Steps: []resource.TestStep{ { - Config: testAccAthenaDatabaseConfig(rInt, dbName), + Config: testAccAthenaDatabaseConfig(rInt, dbName, false), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAthenaDatabaseExists("aws_athena_database.hoge"), testAccAWSAthenaDatabaseCreateTables(dbName), @@ -54,13 +109,13 @@ func TestAccAWSAthenaDatabase_destroyFailsIfTablesExist(t *testing.T) { func TestAccAWSAthenaDatabase_forceDestroyAlwaysSucceeds(t *testing.T) { rInt := acctest.RandInt() dbName := acctest.RandString(8) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAthenaDatabaseDestroy, Steps: []resource.TestStep{ { - Config: testAccAthenaDatabaseConfigForceDestroy(rInt, dbName), + Config: testAccAthenaDatabaseConfig(rInt, dbName, true), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAthenaDatabaseExists("aws_athena_database.hoge"), testAccAWSAthenaDatabaseCreateTables(dbName), @@ -81,7 +136,7 @@ func testAccCheckAWSAthenaDatabaseDestroy(s *terraform.State) error { } rInt := acctest.RandInt() - bucketName := fmt.Sprintf("tf-athena-db-%s-%d", rs.Primary.Attributes["name"], rInt) + bucketName := fmt.Sprintf("tf-athena-db-%d", rInt) _, err := s3conn.CreateBucket(&s3.CreateBucketInput{ Bucket: aws.String(bucketName), }) @@ -248,7 +303,7 @@ func testAccCheckAWSAthenaDatabaseDropFails(dbName string) resource.TestCheckFun QueryExecutionContext: &athena.QueryExecutionContext{ Database: aws.String(dbName), }, - QueryString: aws.String(fmt.Sprintf("drop database %s;", 
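// Reviewer note (illustrative sketch, not part of the patch): the DDL statements now
// backtick-quote the database identifier ("create database `%s`;" / "drop database `%s`;"),
// which allows names Athena would otherwise reject unquoted, such as ones starting with an
// underscore; TestAccAWSAthenaDatabase_nameStartsWithUnderscore above covers exactly that.
// For example:
//
//	queryString := fmt.Sprintf("drop database `%s` cascade;", "_example_db") // hypothetical name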
dbName)), + QueryString: aws.String(fmt.Sprintf("drop database `%s`;", dbName)), ResultConfiguration: &athena.ResultConfiguration{ OutputLocation: aws.String("s3://" + bucketName), }, @@ -283,31 +338,50 @@ func testAccAthenaDatabaseFindBucketName(s *terraform.State, dbName string) (buc return bucket, err } -func testAccAthenaDatabaseConfig(randInt int, dbName string) string { +func testAccAthenaDatabaseConfig(randInt int, dbName string, forceDestroy bool) string { return fmt.Sprintf(` resource "aws_s3_bucket" "hoge" { - bucket = "tf-athena-db-%s-%d" + bucket = "tf-athena-db-%[1]d" force_destroy = true } resource "aws_athena_database" "hoge" { - name = "%s" - bucket = "${aws_s3_bucket.hoge.bucket}" + name = "%[2]s" + bucket = "${aws_s3_bucket.hoge.bucket}" + force_destroy = %[3]t } - `, dbName, randInt, dbName) + `, randInt, dbName, forceDestroy) } -func testAccAthenaDatabaseConfigForceDestroy(randInt int, dbName string) string { +func testAccAthenaDatabaseWithKMSConfig(randInt int, dbName string, forceDestroy bool) string { return fmt.Sprintf(` - resource "aws_s3_bucket" "hoge" { - bucket = "tf-athena-db-%s-%d" - force_destroy = true - } +resource "aws_kms_key" "hoge" { + deletion_window_in_days = 10 +} - resource "aws_athena_database" "hoge" { - name = "%s" - bucket = "${aws_s3_bucket.hoge.bucket}" - force_destroy = true +resource "aws_s3_bucket" "hoge" { + bucket = "tf-athena-db-%[1]d" + force_destroy = true + + server_side_encryption_configuration { + rule { + apply_server_side_encryption_by_default { + kms_master_key_id = "${aws_kms_key.hoge.arn}" + sse_algorithm = "aws:kms" + } } - `, dbName, randInt, dbName) + } +} + +resource "aws_athena_database" "hoge" { + name = "%[2]s" + bucket = "${aws_s3_bucket.hoge.bucket}" + force_destroy = %[3]t + + encryption_configuration { + encryption_option = "SSE_KMS" + kms_key = "${aws_kms_key.hoge.arn}" + } +} + `, randInt, dbName, forceDestroy) } diff --git a/aws/resource_aws_athena_named_query_test.go b/aws/resource_aws_athena_named_query_test.go index ed090c879ba..7aa7646bea5 100644 --- a/aws/resource_aws_athena_named_query_test.go +++ b/aws/resource_aws_athena_named_query_test.go @@ -12,7 +12,7 @@ import ( ) func TestAccAWSAthenaNamedQuery_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAthenaNamedQueryDestroy, @@ -30,16 +30,16 @@ func TestAccAWSAthenaNamedQuery_basic(t *testing.T) { func TestAccAWSAthenaNamedQuery_import(t *testing.T) { resourceName := "aws_athena_named_query.foo" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAthenaNamedQueryDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAthenaNamedQueryConfig(acctest.RandInt(), acctest.RandString(5)), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_autoscaling_attachment.go b/aws/resource_aws_autoscaling_attachment.go index 321a98c75bb..8563abd0e01 100644 --- a/aws/resource_aws_autoscaling_attachment.go +++ b/aws/resource_aws_autoscaling_attachment.go @@ -6,7 +6,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/autoscaling" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" 
"github.com/hashicorp/terraform/helper/schema" ) @@ -52,7 +51,7 @@ func resourceAwsAutoscalingAttachmentCreate(d *schema.ResourceData, meta interfa log.Printf("[INFO] registering asg %s with ELBs %s", asgName, v.(string)) if _, err := asgconn.AttachLoadBalancers(attachOpts); err != nil { - return errwrap.Wrapf(fmt.Sprintf("Failure attaching AutoScaling Group %s with Elastic Load Balancer: %s: {{err}}", asgName, v.(string)), err) + return fmt.Errorf("Failure attaching AutoScaling Group %s with Elastic Load Balancer: %s: %s", asgName, v.(string), err) } } @@ -65,7 +64,7 @@ func resourceAwsAutoscalingAttachmentCreate(d *schema.ResourceData, meta interfa log.Printf("[INFO] registering asg %s with ALB Target Group %s", asgName, v.(string)) if _, err := asgconn.AttachLoadBalancerTargetGroups(attachOpts); err != nil { - return errwrap.Wrapf(fmt.Sprintf("Failure attaching AutoScaling Group %s with ALB Target Group: %s: {{err}}", asgName, v.(string)), err) + return fmt.Errorf("Failure attaching AutoScaling Group %s with ALB Target Group: %s: %s", asgName, v.(string), err) } } @@ -78,7 +77,7 @@ func resourceAwsAutoscalingAttachmentRead(d *schema.ResourceData, meta interface asgconn := meta.(*AWSClient).autoscalingconn asgName := d.Get("autoscaling_group_name").(string) - // Retrieve the ASG properites to get list of associated ELBs + // Retrieve the ASG properties to get list of associated ELBs asg, err := getAwsAutoscalingGroup(asgName, asgconn) if err != nil { @@ -101,7 +100,7 @@ func resourceAwsAutoscalingAttachmentRead(d *schema.ResourceData, meta interface } if !found { - log.Printf("[WARN] Association for %s was not found in ASG assocation", v.(string)) + log.Printf("[WARN] Association for %s was not found in ASG association", v.(string)) d.SetId("") } } @@ -117,7 +116,7 @@ func resourceAwsAutoscalingAttachmentRead(d *schema.ResourceData, meta interface } if !found { - log.Printf("[WARN] Association for %s was not found in ASG assocation", v.(string)) + log.Printf("[WARN] Association for %s was not found in ASG association", v.(string)) d.SetId("") } } @@ -137,7 +136,7 @@ func resourceAwsAutoscalingAttachmentDelete(d *schema.ResourceData, meta interfa log.Printf("[INFO] Deleting ELB %s association from: %s", v.(string), asgName) if _, err := asgconn.DetachLoadBalancers(detachOpts); err != nil { - return errwrap.Wrapf(fmt.Sprintf("Failure detaching AutoScaling Group %s with Elastic Load Balancer: %s: {{err}}", asgName, v.(string)), err) + return fmt.Errorf("Failure detaching AutoScaling Group %s with Elastic Load Balancer: %s: %s", asgName, v.(string), err) } } @@ -149,7 +148,7 @@ func resourceAwsAutoscalingAttachmentDelete(d *schema.ResourceData, meta interfa log.Printf("[INFO] Deleting ALB Target Group %s association from: %s", v.(string), asgName) if _, err := asgconn.DetachLoadBalancerTargetGroups(detachOpts); err != nil { - return errwrap.Wrapf(fmt.Sprintf("Failure detaching AutoScaling Group %s with ALB Target Group: %s: {{err}}", asgName, v.(string)), err) + return fmt.Errorf("Failure detaching AutoScaling Group %s with ALB Target Group: %s: %s", asgName, v.(string), err) } } diff --git a/aws/resource_aws_autoscaling_attachment_test.go b/aws/resource_aws_autoscaling_attachment_test.go index 90ea56c516a..c7a9df4fe58 100644 --- a/aws/resource_aws_autoscaling_attachment_test.go +++ b/aws/resource_aws_autoscaling_attachment_test.go @@ -15,7 +15,7 @@ func TestAccAWSAutoscalingAttachment_elb(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -57,7 +57,7 @@ func TestAccAWSAutoscalingAttachment_albTargetGroup(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -136,7 +136,7 @@ func testAccCheckAWSAutocalingAlbAttachmentExists(asgname string, targetGroupCou }) if err != nil { - return fmt.Errorf("Recieved an error when attempting to load %s: %s", asg, err) + return fmt.Errorf("Received an error when attempting to load %s: %s", asg, err) } if targetGroupCount != len(actual.AutoScalingGroups[0].TargetGroupARNs) { diff --git a/aws/resource_aws_autoscaling_group.go b/aws/resource_aws_autoscaling_group.go index 4f4c3f2b1fc..2871ef4fc13 100644 --- a/aws/resource_aws_autoscaling_group.go +++ b/aws/resource_aws_autoscaling_group.go @@ -6,9 +6,10 @@ import ( "strings" "time" - "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform/helper/customdiff" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -32,24 +33,165 @@ func resourceAwsAutoscalingGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, ConflictsWith: []string{"name_prefix"}, - ValidateFunc: validateMaxLength(255), + ValidateFunc: validation.StringLenBetween(0, 255), }, - "name_prefix": &schema.Schema{ + "name_prefix": { Type: schema.TypeString, Optional: true, ForceNew: true, - ValidateFunc: validateMaxLength(255 - resource.UniqueIDSuffixLength), + ValidateFunc: validation.StringLenBetween(0, 255-resource.UniqueIDSuffixLength), }, "launch_configuration": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"launch_template"}, + }, + + "launch_template": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ConflictsWith: []string{"launch_configuration"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ConflictsWith: []string{"launch_template.0.name"}, + ValidateFunc: validateLaunchTemplateId, + }, + "name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ConflictsWith: []string{"launch_template.0.id"}, + ValidateFunc: validateLaunchTemplateName, + }, + "version": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + }, + }, + }, + + "mixed_instances_policy": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "instances_distribution": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + // Ignore missing configuration block + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "on_demand_allocation_strategy": { + Type: schema.TypeString, + Optional: true, + Default: "prioritized", + ValidateFunc: validation.StringInSlice([]string{ + 
"prioritized", + }, false), + }, + "on_demand_base_capacity": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(0), + }, + "on_demand_percentage_above_base_capacity": { + Type: schema.TypeInt, + Optional: true, + Default: 100, + ValidateFunc: validation.IntBetween(0, 100), + }, + "spot_allocation_strategy": { + Type: schema.TypeString, + Optional: true, + Default: "lowest-price", + ValidateFunc: validation.StringInSlice([]string{ + "lowest-price", + }, false), + }, + "spot_instance_pools": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ValidateFunc: validation.IntAtLeast(0), + }, + "spot_max_price": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "launch_template": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "launch_template_specification": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "launch_template_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "launch_template_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "version": { + Type: schema.TypeString, + Optional: true, + Default: "$Default", + }, + }, + }, + }, + "override": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "instance_type": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + }, + }, + }, + }, + }, }, "desired_capacity": { @@ -236,13 +378,28 @@ func resourceAwsAutoscalingGroup() *schema.Resource { "tag": autoscalingTagSchema(), - "tags": &schema.Schema{ + "tags": { Type: schema.TypeList, Optional: true, Elem: &schema.Schema{Type: schema.TypeMap}, ConflictsWith: []string{"tag"}, }, + + "service_linked_role_arn": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, }, + + CustomizeDiff: customdiff.Sequence( + customdiff.ComputedIf("launch_template.0.id", func(diff *schema.ResourceDiff, meta interface{}) bool { + return diff.HasChange("launch_template.0.name") + }), + customdiff.ComputedIf("launch_template.0.name", func(diff *schema.ResourceDiff, meta interface{}) bool { + return diff.HasChange("launch_template.0.id") + }), + ), } } @@ -304,7 +461,7 @@ func resourceAwsAutoscalingGroupCreate(d *schema.ResourceData, meta interface{}) createOpts := autoscaling.CreateAutoScalingGroupInput{ AutoScalingGroupName: aws.String(asgName), - LaunchConfigurationName: aws.String(d.Get("launch_configuration").(string)), + MixedInstancesPolicy: expandAutoScalingMixedInstancesPolicy(d.Get("mixed_instances_policy").([]interface{})), NewInstancesProtectedFromScaleIn: aws.Bool(d.Get("protect_from_scale_in").(bool)), } updateOpts := autoscaling.UpdateAutoScalingGroupInput{ @@ -336,6 +493,25 @@ func resourceAwsAutoscalingGroupCreate(d *schema.ResourceData, meta interface{}) } } + launchConfigurationValue, launchConfigurationOk := d.GetOk("launch_configuration") + launchTemplateValue, launchTemplateOk := d.GetOk("launch_template") + + if createOpts.MixedInstancesPolicy == nil && !launchConfigurationOk && !launchTemplateOk { + return fmt.Errorf("One of `launch_configuration`, `launch_template`, or `mixed_instances_policy` must be set for an autoscaling group") + } + + if launchConfigurationOk { + createOpts.LaunchConfigurationName = aws.String(launchConfigurationValue.(string)) + } + + if launchTemplateOk { + var err error + 
createOpts.LaunchTemplate, err = expandLaunchTemplateSpecification(launchTemplateValue.([]interface{})) + if err != nil { + return err + } + } + // Availability Zones are optional if VPC Zone Identifer(s) are specified if v, ok := d.GetOk("availability_zones"); ok && v.(*schema.Set).Len() > 0 { createOpts.AvailabilityZones = expandStringList(v.(*schema.Set).List()) @@ -345,7 +521,7 @@ func resourceAwsAutoscalingGroupCreate(d *schema.ResourceData, meta interface{}) if v, ok := d.GetOk("tag"); ok { var err error createOpts.Tags, err = autoscalingTagsFromMap( - setToMapByKey(v.(*schema.Set), "key"), resourceID) + setToMapByKey(v.(*schema.Set)), resourceID) if err != nil { return err } @@ -393,8 +569,27 @@ func resourceAwsAutoscalingGroupCreate(d *schema.ResourceData, meta interface{}) createOpts.TargetGroupARNs = expandStringList(v.(*schema.Set).List()) } + if v, ok := d.GetOk("service_linked_role_arn"); ok { + createOpts.ServiceLinkedRoleARN = aws.String(v.(string)) + } + log.Printf("[DEBUG] AutoScaling Group create configuration: %#v", createOpts) - _, err := conn.CreateAutoScalingGroup(&createOpts) + + // Retry for IAM eventual consistency + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.CreateAutoScalingGroup(&createOpts) + + // ValidationError: You must use a valid fully-formed launch template. Value (tf-acc-test-6643732652421074386) for parameter iamInstanceProfile.name is invalid. Invalid IAM Instance Profile name + if isAWSErr(err, "ValidationError", "Invalid IAM Instance Profile") { + return resource.RetryableError(err) + } + + if err != nil { + return resource.NonRetryableError(err) + } + + return nil + }) if err != nil { return fmt.Errorf("Error creating AutoScaling Group: %s", err) } @@ -455,9 +650,20 @@ func resourceAwsAutoscalingGroupRead(d *schema.ResourceData, meta interface{}) e d.Set("desired_capacity", g.DesiredCapacity) d.Set("health_check_grace_period", g.HealthCheckGracePeriod) d.Set("health_check_type", g.HealthCheckType) - d.Set("launch_configuration", g.LaunchConfigurationName) d.Set("load_balancers", flattenStringList(g.LoadBalancerNames)) + d.Set("launch_configuration", g.LaunchConfigurationName) + + if g.LaunchTemplate != nil { + d.Set("launch_template", flattenLaunchTemplateSpecification(g.LaunchTemplate)) + } else { + d.Set("launch_template", nil) + } + + if err := d.Set("mixed_instances_policy", flattenAutoScalingMixedInstancesPolicy(g.MixedInstancesPolicy)); err != nil { + return fmt.Errorf("error setting mixed_instances_policy: %s", err) + } + if err := d.Set("suspended_processes", flattenAsgSuspendedProcesses(g.SuspendedProcesses)); err != nil { log.Printf("[WARN] Error setting suspended_processes for %q: %s", d.Id(), err) } @@ -468,13 +674,14 @@ func resourceAwsAutoscalingGroupRead(d *schema.ResourceData, meta interface{}) e d.Set("max_size", g.MaxSize) d.Set("placement_group", g.PlacementGroup) d.Set("name", g.AutoScalingGroupName) + d.Set("service_linked_role_arn", g.ServiceLinkedRoleARN) var tagList, tagsList []*autoscaling.TagDescription var tagOk, tagsOk bool var v interface{} if v, tagOk = d.GetOk("tag"); tagOk { - tags := setToMapByKey(v.(*schema.Set), "key") + tags := setToMapByKey(v.(*schema.Set)) for _, t := range g.Tags { if _, ok := tags[*t.Key]; ok { tagList = append(tagList, t) @@ -534,6 +741,8 @@ func resourceAwsAutoscalingGroupRead(d *schema.ResourceData, meta interface{}) e log.Printf("[WARN] Error setting metrics for (%s): %s", d.Id(), err) } d.Set("metrics_granularity", g.EnabledMetrics[0].Granularity) 
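// Reviewer note: two smaller fixes in this hunk: the read function now explicitly sets
// launch_template (and, just below, enabled_metrics) to nil when the group no longer
// reports them, so removing them from the configuration converges instead of leaving stale
// values in state; and CreateAutoScalingGroup is retried for up to a minute on the
// "Invalid IAM Instance Profile" ValidationError to absorb IAM eventual consistency.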
+ } else { + d.Set("enabled_metrics", nil) } return nil @@ -559,7 +768,19 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) } if d.HasChange("launch_configuration") { - opts.LaunchConfigurationName = aws.String(d.Get("launch_configuration").(string)) + if v, ok := d.GetOk("launch_configuration"); ok { + opts.LaunchConfigurationName = aws.String(v.(string)) + } + } + + if d.HasChange("launch_template") { + if v, ok := d.GetOk("launch_template"); ok && len(v.([]interface{})) > 0 { + opts.LaunchTemplate, _ = expandLaunchTemplateSpecification(v.([]interface{})) + } + } + + if d.HasChange("mixed_instances_policy") { + opts.MixedInstancesPolicy = expandAutoScalingMixedInstancesPolicy(d.Get("mixed_instances_policy").([]interface{})) } if d.HasChange("min_size") { @@ -605,6 +826,10 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) } } + if d.HasChange("service_linked_role_arn") { + opts.ServiceLinkedRoleARN = aws.String(d.Get("service_linked_role_arn").(string)) + } + if err := setAutoscalingTags(conn, d); err != nil { return err } @@ -645,7 +870,7 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) LoadBalancerNames: remove, }) if err != nil { - return fmt.Errorf("[WARN] Error updating Load Balancers for AutoScaling Group (%s), error: %s", d.Id(), err) + return fmt.Errorf("Error updating Load Balancers for AutoScaling Group (%s), error: %s", d.Id(), err) } } @@ -655,7 +880,7 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) LoadBalancerNames: add, }) if err != nil { - return fmt.Errorf("[WARN] Error updating Load Balancers for AutoScaling Group (%s), error: %s", d.Id(), err) + return fmt.Errorf("Error updating Load Balancers for AutoScaling Group (%s), error: %s", d.Id(), err) } } } @@ -681,7 +906,7 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) TargetGroupARNs: remove, }) if err != nil { - return fmt.Errorf("[WARN] Error updating Load Balancers Target Groups for AutoScaling Group (%s), error: %s", d.Id(), err) + return fmt.Errorf("Error updating Load Balancers Target Groups for AutoScaling Group (%s), error: %s", d.Id(), err) } } @@ -691,26 +916,26 @@ func resourceAwsAutoscalingGroupUpdate(d *schema.ResourceData, meta interface{}) TargetGroupARNs: add, }) if err != nil { - return fmt.Errorf("[WARN] Error updating Load Balancers Target Groups for AutoScaling Group (%s), error: %s", d.Id(), err) + return fmt.Errorf("Error updating Load Balancers Target Groups for AutoScaling Group (%s), error: %s", d.Id(), err) } } } if shouldWaitForCapacity { if err := waitForASGCapacity(d, meta, capacitySatisfiedUpdate); err != nil { - return errwrap.Wrapf("Error waiting for AutoScaling Group Capacity: {{err}}", err) + return fmt.Errorf("Error waiting for AutoScaling Group Capacity: %s", err) } } if d.HasChange("enabled_metrics") { if err := updateASGMetricsCollection(d, conn); err != nil { - return errwrap.Wrapf("Error updating AutoScaling Group Metrics collection: {{err}}", err) + return fmt.Errorf("Error updating AutoScaling Group Metrics collection: %s", err) } } if d.HasChange("suspended_processes") { if err := updateASGSuspendedProcesses(d, conn); err != nil { - return errwrap.Wrapf("Error updating AutoScaling Group Suspended Processes: {{err}}", err) + return fmt.Errorf("Error updating AutoScaling Group Suspended Processes: %s", err) } } @@ -729,7 +954,6 @@ func resourceAwsAutoscalingGroupDelete(d *schema.ResourceData, meta interface{}) 
} if g == nil { log.Printf("[WARN] Autoscaling Group (%s) not found, removing from state", d.Id()) - d.SetId("") return nil } if len(g.Instances) > 0 || *g.DesiredCapacity > 0 { @@ -1028,3 +1252,203 @@ func expandVpcZoneIdentifiers(list []interface{}) *string { } return aws.String(strings.Join(strs, ",")) } + +func expandAutoScalingInstancesDistribution(l []interface{}) *autoscaling.InstancesDistribution { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + instancesDistribution := &autoscaling.InstancesDistribution{} + + if v, ok := m["on_demand_allocation_strategy"]; ok && v.(string) != "" { + instancesDistribution.OnDemandAllocationStrategy = aws.String(v.(string)) + } + + if v, ok := m["on_demand_base_capacity"]; ok && v.(int) != 0 { + instancesDistribution.OnDemandBaseCapacity = aws.Int64(int64(v.(int))) + } + + if v, ok := m["on_demand_percentage_above_base_capacity"]; ok { + instancesDistribution.OnDemandPercentageAboveBaseCapacity = aws.Int64(int64(v.(int))) + } + + if v, ok := m["spot_allocation_strategy"]; ok && v.(string) != "" { + instancesDistribution.SpotAllocationStrategy = aws.String(v.(string)) + } + + if v, ok := m["spot_instance_pools"]; ok && v.(int) != 0 { + instancesDistribution.SpotInstancePools = aws.Int64(int64(v.(int))) + } + + if v, ok := m["spot_max_price"]; ok && v.(string) != "" { + instancesDistribution.SpotMaxPrice = aws.String(v.(string)) + } + + return instancesDistribution +} + +func expandAutoScalingLaunchTemplate(l []interface{}) *autoscaling.LaunchTemplate { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + launchTemplate := &autoscaling.LaunchTemplate{ + LaunchTemplateSpecification: expandAutoScalingLaunchTemplateSpecification(m["launch_template_specification"].([]interface{})), + } + + if v, ok := m["override"]; ok { + launchTemplate.Overrides = expandAutoScalingLaunchTemplateOverrides(v.([]interface{})) + } + + return launchTemplate +} + +func expandAutoScalingLaunchTemplateOverrides(l []interface{}) []*autoscaling.LaunchTemplateOverrides { + if len(l) == 0 { + return nil + } + + launchTemplateOverrides := make([]*autoscaling.LaunchTemplateOverrides, len(l)) + for i, m := range l { + if m == nil { + launchTemplateOverrides[i] = &autoscaling.LaunchTemplateOverrides{} + continue + } + + launchTemplateOverrides[i] = expandAutoScalingLaunchTemplateOverride(m.(map[string]interface{})) + } + return launchTemplateOverrides +} + +func expandAutoScalingLaunchTemplateOverride(m map[string]interface{}) *autoscaling.LaunchTemplateOverrides { + launchTemplateOverrides := &autoscaling.LaunchTemplateOverrides{} + + if v, ok := m["instance_type"]; ok && v.(string) != "" { + launchTemplateOverrides.InstanceType = aws.String(v.(string)) + } + + return launchTemplateOverrides +} + +func expandAutoScalingLaunchTemplateSpecification(l []interface{}) *autoscaling.LaunchTemplateSpecification { + launchTemplateSpecification := &autoscaling.LaunchTemplateSpecification{} + + if len(l) == 0 || l[0] == nil { + return launchTemplateSpecification + } + + m := l[0].(map[string]interface{}) + + if v, ok := m["launch_template_id"]; ok && v.(string) != "" { + launchTemplateSpecification.LaunchTemplateId = aws.String(v.(string)) + } + + // API returns both ID and name, which Terraform saves to state. Next update returns: + // ValidationError: Valid requests must contain either launchTemplateId or LaunchTemplateName + // Prefer the ID if we have both. 
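+ // Only set the launch template name when no ID was set above, so the request never carries both values.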
+ if v, ok := m["launch_template_name"]; ok && v.(string) != "" && launchTemplateSpecification.LaunchTemplateId == nil { + launchTemplateSpecification.LaunchTemplateName = aws.String(v.(string)) + } + + if v, ok := m["version"]; ok && v.(string) != "" { + launchTemplateSpecification.Version = aws.String(v.(string)) + } + + return launchTemplateSpecification +} + +func expandAutoScalingMixedInstancesPolicy(l []interface{}) *autoscaling.MixedInstancesPolicy { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + mixedInstancesPolicy := &autoscaling.MixedInstancesPolicy{ + LaunchTemplate: expandAutoScalingLaunchTemplate(m["launch_template"].([]interface{})), + } + + if v, ok := m["instances_distribution"]; ok { + mixedInstancesPolicy.InstancesDistribution = expandAutoScalingInstancesDistribution(v.([]interface{})) + } + + return mixedInstancesPolicy +} + +func flattenAutoScalingInstancesDistribution(instancesDistribution *autoscaling.InstancesDistribution) []interface{} { + if instancesDistribution == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "on_demand_allocation_strategy": aws.StringValue(instancesDistribution.OnDemandAllocationStrategy), + "on_demand_base_capacity": aws.Int64Value(instancesDistribution.OnDemandBaseCapacity), + "on_demand_percentage_above_base_capacity": aws.Int64Value(instancesDistribution.OnDemandPercentageAboveBaseCapacity), + "spot_allocation_strategy": aws.StringValue(instancesDistribution.SpotAllocationStrategy), + "spot_instance_pools": aws.Int64Value(instancesDistribution.SpotInstancePools), + "spot_max_price": aws.StringValue(instancesDistribution.SpotMaxPrice), + } + + return []interface{}{m} +} + +func flattenAutoScalingLaunchTemplate(launchTemplate *autoscaling.LaunchTemplate) []interface{} { + if launchTemplate == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "launch_template_specification": flattenAutoScalingLaunchTemplateSpecification(launchTemplate.LaunchTemplateSpecification), + "override": flattenAutoScalingLaunchTemplateOverrides(launchTemplate.Overrides), + } + + return []interface{}{m} +} + +func flattenAutoScalingLaunchTemplateOverrides(launchTemplateOverrides []*autoscaling.LaunchTemplateOverrides) []interface{} { + l := make([]interface{}, len(launchTemplateOverrides)) + + for i, launchTemplateOverride := range launchTemplateOverrides { + if launchTemplateOverride == nil { + l[i] = map[string]interface{}{} + continue + } + m := map[string]interface{}{ + "instance_type": aws.StringValue(launchTemplateOverride.InstanceType), + } + l[i] = m + } + + return l +} + +func flattenAutoScalingLaunchTemplateSpecification(launchTemplateSpecification *autoscaling.LaunchTemplateSpecification) []interface{} { + if launchTemplateSpecification == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "launch_template_id": aws.StringValue(launchTemplateSpecification.LaunchTemplateId), + "launch_template_name": aws.StringValue(launchTemplateSpecification.LaunchTemplateName), + "version": aws.StringValue(launchTemplateSpecification.Version), + } + + return []interface{}{m} +} + +func flattenAutoScalingMixedInstancesPolicy(mixedInstancesPolicy *autoscaling.MixedInstancesPolicy) []interface{} { + if mixedInstancesPolicy == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "instances_distribution": flattenAutoScalingInstancesDistribution(mixedInstancesPolicy.InstancesDistribution), + "launch_template": 
flattenAutoScalingLaunchTemplate(mixedInstancesPolicy.LaunchTemplate), + } + + return []interface{}{m} +} diff --git a/aws/resource_aws_autoscaling_group_test.go b/aws/resource_aws_autoscaling_group_test.go index a472fc88435..32ae0a7610d 100644 --- a/aws/resource_aws_autoscaling_group_test.go +++ b/aws/resource_aws_autoscaling_group_test.go @@ -36,6 +36,10 @@ func testSweepAutoscalingGroups(region string) error { resp, err := conn.DescribeAutoScalingGroups(&autoscaling.DescribeAutoScalingGroupsInput{}) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping AutoScaling Group sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error retrieving AutoScaling Groups in Sweeper: %s", err) } @@ -46,7 +50,7 @@ func testSweepAutoscalingGroups(region string) error { for _, asg := range resp.AutoScalingGroups { var testOptGroup bool - for _, testName := range []string{"foobar", "terraform-", "tf-test"} { + for _, testName := range []string{"foobar", "terraform-", "tf-test", "tf-asg-"} { if strings.HasPrefix(*asg.AutoScalingGroupName, testName) { testOptGroup = true break @@ -87,20 +91,44 @@ func testSweepAutoscalingGroups(region string) error { return nil } +func TestAccAWSAutoScalingGroup_importBasic(t *testing.T) { + resourceName := "aws_autoscaling_group.bar" + randName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupImport(randName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", "metrics_granularity", "wait_for_capacity_timeout"}, + }, + }, + }) +} + func TestAccAWSAutoScalingGroup_basic(t *testing.T) { var group autoscaling.Group var lc autoscaling.LaunchConfiguration randName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_autoscaling_group.bar", IDRefreshIgnore: []string{"force_delete", "metrics_granularity", "wait_for_capacity_timeout"}, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig(randName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -128,10 +156,12 @@ func TestAccAWSAutoScalingGroup_basic(t *testing.T) { "aws_autoscaling_group.bar", "termination_policies.1", "ClosestToNextInstanceHour"), resource.TestCheckResourceAttr( "aws_autoscaling_group.bar", "protect_from_scale_in", "false"), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "enabled_metrics.#", "0"), ), }, - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfigUpdate(randName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -162,14 +192,14 @@ func TestAccAWSAutoScalingGroup_basic(t *testing.T) { } func TestAccAWSAutoScalingGroup_namePrefix(t *testing.T) { - nameRegexp := regexp.MustCompile("^test-") + nameRegexp := regexp.MustCompile("^tf-test-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: 
testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_namePrefix, Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr( @@ -185,12 +215,12 @@ func TestAccAWSAutoScalingGroup_namePrefix(t *testing.T) { func TestAccAWSAutoScalingGroup_autoGeneratedName(t *testing.T) { asgNameRegexp := regexp.MustCompile("^tf-asg-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_autoGeneratedName, Check: resource.ComposeTestCheckFunc( resource.TestMatchResourceAttr( @@ -204,12 +234,12 @@ func TestAccAWSAutoScalingGroup_autoGeneratedName(t *testing.T) { } func TestAccAWSAutoScalingGroup_terminationPolicies(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_terminationPoliciesEmpty, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -217,7 +247,7 @@ func TestAccAWSAutoScalingGroup_terminationPolicies(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_terminationPoliciesUpdate, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -227,7 +257,7 @@ func TestAccAWSAutoScalingGroup_terminationPolicies(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_terminationPoliciesExplicitDefault, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -237,7 +267,7 @@ func TestAccAWSAutoScalingGroup_terminationPolicies(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_terminationPoliciesEmpty, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr( @@ -251,14 +281,14 @@ func TestAccAWSAutoScalingGroup_terminationPolicies(t *testing.T) { func TestAccAWSAutoScalingGroup_tags(t *testing.T) { var group autoscaling.Group - randName := fmt.Sprintf("tfautotags-%s", acctest.RandString(5)) + randName := fmt.Sprintf("tf-test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig(randName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -277,7 +307,7 @@ func TestAccAWSAutoScalingGroup_tags(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfigUpdate(randName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -303,12 +333,12 @@ func TestAccAWSAutoScalingGroup_tags(t *testing.T) { func TestAccAWSAutoScalingGroup_VpcUpdates(t *testing.T) { var group autoscaling.Group - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: 
[]resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfigWithAZ, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -321,7 +351,7 @@ func TestAccAWSAutoScalingGroup_VpcUpdates(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfigWithVPCIdent, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -341,12 +371,12 @@ func TestAccAWSAutoScalingGroup_VpcUpdates(t *testing.T) { func TestAccAWSAutoScalingGroup_WithLoadBalancer(t *testing.T) { var group autoscaling.Group - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfigWithLoadBalancer, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -360,13 +390,13 @@ func TestAccAWSAutoScalingGroup_WithLoadBalancer(t *testing.T) { func TestAccAWSAutoScalingGroup_withPlacementGroup(t *testing.T) { var group autoscaling.Group - randName := fmt.Sprintf("tf_placement_test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + randName := fmt.Sprintf("tf-test-%s", acctest.RandString(5)) + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_withPlacementGroup(randName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -382,12 +412,12 @@ func TestAccAWSAutoScalingGroup_enablingMetrics(t *testing.T) { var group autoscaling.Group randName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig(randName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -396,7 +426,7 @@ func TestAccAWSAutoScalingGroup_enablingMetrics(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoscalingMetricsCollectionConfig_updatingMetricsCollected, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -412,7 +442,7 @@ func TestAccAWSAutoScalingGroup_suspendingProcesses(t *testing.T) { var group autoscaling.Group randName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, @@ -448,12 +478,12 @@ func TestAccAWSAutoScalingGroup_suspendingProcesses(t *testing.T) { func TestAccAWSAutoScalingGroup_withMetrics(t *testing.T) { var group autoscaling.Group - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, 
Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoscalingMetricsCollectionConfig_allMetricsCollected, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -462,7 +492,7 @@ func TestAccAWSAutoScalingGroup_withMetrics(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoscalingMetricsCollectionConfig_updatingMetricsCollected, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -474,6 +504,26 @@ func TestAccAWSAutoScalingGroup_withMetrics(t *testing.T) { }) } +func TestAccAWSAutoScalingGroup_serviceLinkedRoleARN(t *testing.T) { + var group autoscaling.Group + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_withServiceLinkedRoleARN, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + resource.TestCheckResourceAttrSet( + "aws_autoscaling_group.bar", "service_linked_role_arn"), + ), + }, + }, + }) +} + func TestAccAWSAutoScalingGroup_ALB_TargetGroups(t *testing.T) { var group autoscaling.Group var tg elbv2.TargetGroup @@ -501,12 +551,12 @@ func TestAccAWSAutoScalingGroup_ALB_TargetGroups(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_pre, Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -516,7 +566,7 @@ func TestAccAWSAutoScalingGroup_ALB_TargetGroups(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_post_duo, Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -528,7 +578,7 @@ func TestAccAWSAutoScalingGroup_ALB_TargetGroups(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_post, Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -547,14 +597,14 @@ func TestAccAWSAutoScalingGroup_initialLifecycleHook(t *testing.T) { randName := fmt.Sprintf("terraform-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_autoscaling_group.bar", IDRefreshIgnore: []string{"force_delete", "metrics_granularity", "wait_for_capacity_timeout"}, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupWithHookConfig(randName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -579,12 +629,12 @@ func TestAccAWSAutoScalingGroup_ALB_TargetGroups_ELBCapacity(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_ALB_TargetGroup_ELBCapacity(rInt), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), @@ -841,13 +891,13 @@ func testAccCheckAWSALBTargetGroupHealthy(res *elbv2.TargetGroup) resource.TestC func TestAccAWSAutoScalingGroup_classicVpcZoneIdentifier(t *testing.T) { var group autoscaling.Group - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_autoscaling_group.test", Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_classicVpcZoneIdentifier, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.test", &group), @@ -861,13 +911,13 @@ func TestAccAWSAutoScalingGroup_classicVpcZoneIdentifier(t *testing.T) { func TestAccAWSAutoScalingGroup_emptyAvailabilityZones(t *testing.T) { var group autoscaling.Group - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_autoscaling_group.test", Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoScalingGroupConfig_emptyAvailabilityZones, Check: resource.ComposeTestCheckFunc( testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.test", &group), @@ -878,116 +928,615 @@ func TestAccAWSAutoScalingGroup_emptyAvailabilityZones(t *testing.T) { }) } -const testAccAWSAutoScalingGroupConfig_autoGeneratedName = ` -data "aws_ami" "test_ami" { - most_recent = true - - filter { - name = "owner-alias" - values = ["amazon"] - } - - filter { - name = "name" - values = ["amzn-ami-hvm-*-x86_64-gp2"] - } -} +func TestAccAWSAutoScalingGroup_launchTemplate(t *testing.T) { + var group autoscaling.Group -resource "aws_launch_configuration" "foobar" { - image_id = "${data.aws_ami.test_ami.id}" - instance_type = "t2.micro" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_withLaunchTemplate, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + resource.TestCheckResourceAttrSet( + "aws_autoscaling_group.bar", "launch_template.0.id"), + ), + }, + }, + }) } -resource "aws_autoscaling_group" "bar" { - availability_zones = ["us-west-2a"] - desired_capacity = 0 - max_size = 0 - min_size = 0 - launch_configuration = "${aws_launch_configuration.foobar.name}" -} -` +func TestAccAWSAutoScalingGroup_launchTemplate_update(t *testing.T) { + var group autoscaling.Group -const testAccAWSAutoScalingGroupConfig_namePrefix = ` -data "aws_ami" "test_ami" { - most_recent = true + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_withLaunchTemplate, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + 
resource.TestCheckResourceAttrSet( + "aws_autoscaling_group.bar", "launch_template.0.name"), + ), + }, - filter { - name = "owner-alias" - values = ["amazon"] - } + { + Config: testAccAWSAutoScalingGroupConfig_withLaunchTemplate_toLaunchConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + resource.TestCheckResourceAttrSet( + "aws_autoscaling_group.bar", "launch_configuration"), + resource.TestCheckNoResourceAttr( + "aws_autoscaling_group.bar", "launch_template"), + ), + }, - filter { - name = "name" - values = ["amzn-ami-hvm-*-x86_64-gp2"] - } -} + { + Config: testAccAWSAutoScalingGroupConfig_withLaunchTemplate_toLaunchTemplateName, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "launch_configuration", ""), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "launch_template.0.name", "foobar2"), + resource.TestCheckResourceAttrSet( + "aws_autoscaling_group.bar", "launch_template.0.id"), + ), + }, -resource "aws_launch_configuration" "test" { - image_id = "${data.aws_ami.test_ami.id}" - instance_type = "t2.micro" -} + { + Config: testAccAWSAutoScalingGroupConfig_withLaunchTemplate_toLaunchTemplateVersion, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "launch_template.0.version", "$Latest"), + ), + }, -resource "aws_autoscaling_group" "test" { - availability_zones = ["us-west-2a"] - desired_capacity = 0 - max_size = 0 - min_size = 0 - name_prefix = "test-" - launch_configuration = "${aws_launch_configuration.test.name}" + { + Config: testAccAWSAutoScalingGroupConfig_withLaunchTemplate, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists("aws_autoscaling_group.bar", &group), + resource.TestCheckResourceAttrSet( + "aws_autoscaling_group.bar", "launch_template.0.name"), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "launch_template.0.version", "1"), + ), + }, + }, + }) } -` - -const testAccAWSAutoScalingGroupConfig_terminationPoliciesEmpty = ` -data "aws_ami" "test_ami" { - most_recent = true - filter { - name = "owner-alias" - values = ["amazon"] - } +func TestAccAWSAutoScalingGroup_LaunchTemplate_IAMInstanceProfile(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") - filter { - name = "name" - values = ["amzn-ami-hvm-*-x86_64-gp2"] - } + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_LaunchTemplate_IAMInstanceProfile(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + ), + }, + }, + }) } -resource "aws_launch_configuration" "foobar" { - image_id = "${data.aws_ami.test_ami.id}" - instance_type = "t2.micro" +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.launch_template_specification.0.version", "$Default"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.#", "2"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.0.instance_type", "t2.micro"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.1.instance_type", "t3.small"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + }, + }) } -resource "aws_autoscaling_group" "bar" { - availability_zones = ["us-west-2a"] - max_size = 0 - min_size = 0 - desired_capacity = 0 +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_InstancesDistribution_OnDemandAllocationStrategy(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") - launch_configuration = "${aws_launch_configuration.foobar.name}" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_OnDemandAllocationStrategy(rName, "prioritized"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.on_demand_allocation_strategy", "prioritized"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + }, + }) } -` - -const testAccAWSAutoScalingGroupConfig_terminationPoliciesExplicitDefault = ` -data "aws_ami" "test_ami" { - most_recent = true - filter { - name = "owner-alias" - values = ["amazon"] - } +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_InstancesDistribution_OnDemandBaseCapacity(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") - filter { - name = "name" - values = ["amzn-ami-hvm-*-x86_64-gp2"] - } + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_OnDemandBaseCapacity(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.on_demand_base_capacity", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_OnDemandBaseCapacity(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.on_demand_base_capacity", "2"), + ), + }, + }, + }) } -resource "aws_launch_configuration" "foobar" { - image_id = "${data.aws_ami.test_ami.id}" - instance_type = "t2.micro" -} +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_InstancesDistribution_OnDemandPercentageAboveBaseCapacity(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") -resource "aws_autoscaling_group" "bar" { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_OnDemandPercentageAboveBaseCapacity(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.on_demand_percentage_above_base_capacity", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_OnDemandPercentageAboveBaseCapacity(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.on_demand_percentage_above_base_capacity", "2"), + ), + }, + }, + }) +} + +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_InstancesDistribution_SpotAllocationStrategy(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := 
acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_SpotAllocationStrategy(rName, "lowest-price"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.spot_allocation_strategy", "lowest-price"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + }, + }) +} + +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_InstancesDistribution_SpotInstancePools(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_SpotInstancePools(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.spot_instance_pools", "2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_SpotInstancePools(rName, 3), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.spot_instance_pools", "3"), + ), + }, + }, + }) +} + +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_InstancesDistribution_SpotMaxPrice(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_SpotMaxPrice(rName, "0.50"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + 
resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.spot_max_price", "0.50"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_SpotMaxPrice(rName, "0.51"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.instances_distribution.0.spot_max_price", "0.51"), + ), + }, + }, + }) +} + +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_LaunchTemplate_LaunchTemplateSpecification_LaunchTemplateName(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_LaunchTemplate_LaunchTemplateSpecification_LaunchTemplateName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "mixed_instances_policy.0.launch_template.0.launch_template_specification.0.launch_template_name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + }, + }) +} + +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_LaunchTemplate_LaunchTemplateSpecification_Version(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_LaunchTemplate_LaunchTemplateSpecification_Version(rName, "1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.launch_template_specification.0.version", "1"), + ), + }, + { + 
ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_LaunchTemplate_LaunchTemplateSpecification_Version(rName, "$$Latest"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.launch_template_specification.0.version", "$Latest"), + ), + }, + }, + }) +} + +func TestAccAWSAutoScalingGroup_MixedInstancesPolicy_LaunchTemplate_Override_InstanceType(t *testing.T) { + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoScalingGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_LaunchTemplate_Override_InstanceType(rName, "t3.small"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.#", "2"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.0.instance_type", "t2.micro"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.1.instance_type", "t3.small"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_delete", + "metrics_granularity", + "wait_for_capacity_timeout", + }, + }, + { + Config: testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_LaunchTemplate_Override_InstanceType(rName, "t3.medium"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAutoScalingGroupExists(resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.#", "1"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.#", "2"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.0.instance_type", "t2.micro"), + resource.TestCheckResourceAttr(resourceName, "mixed_instances_policy.0.launch_template.0.override.1.instance_type", "t3.medium"), + ), + }, + }, + }) +} + +const testAccAWSAutoScalingGroupConfig_autoGeneratedName = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_configuration" "foobar" { + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource 
"aws_autoscaling_group" "bar" { + availability_zones = ["us-west-2a"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + launch_configuration = "${aws_launch_configuration.foobar.name}" +} +` + +const testAccAWSAutoScalingGroupConfig_namePrefix = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_configuration" "test" { + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_autoscaling_group" "test" { + availability_zones = ["us-west-2a"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name_prefix = "tf-test-" + launch_configuration = "${aws_launch_configuration.test.name}" +} +` + +const testAccAWSAutoScalingGroupConfig_terminationPoliciesEmpty = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_configuration" "foobar" { + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["us-west-2a"] + max_size = 0 + min_size = 0 + desired_capacity = 0 + + launch_configuration = "${aws_launch_configuration.foobar.name}" +} +` + +const testAccAWSAutoScalingGroupConfig_terminationPoliciesExplicitDefault = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_configuration" "foobar" { + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_autoscaling_group" "bar" { availability_zones = ["us-west-2a"] max_size = 0 min_size = 0 @@ -1268,10 +1817,20 @@ resource "aws_elb" "bar" { depends_on = ["aws_internet_gateway.gw"] } +// need an AMI that listens on :80 at boot, this is: +data "aws_ami" "test_ami" { + most_recent = true + + owners = ["979382823631"] + + filter { + name = "name" + values = ["bitnami-nginxstack-*-linux-debian-9-x86_64-hvm-ebs"] + } +} + resource "aws_launch_configuration" "foobar" { - // need an AMI that listens on :80 at boot, this is: - // bitnami-nginxstack-1.6.1-0-linux-ubuntu-14.04.1-x86_64-hvm-ebs-ami-99f5b1a9-3 - image_id = "ami-b5b3fc85" + image_id = "${data.aws_ami.test_ami.id}" instance_type = "t2.micro" security_groups = ["${aws_security_group.foo.id}"] } @@ -1431,8 +1990,42 @@ resource "aws_autoscaling_group" "bar" { propagate_at_launch = true } } -`, name, name) +`, name, name) +} + +const testAccAWSAutoScalingGroupConfig_withServiceLinkedRoleARN = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +data "aws_iam_role" "autoscaling_service_linked_role" { + name = "AWSServiceRoleForAutoScaling" +} + +resource "aws_launch_configuration" "foobar" { + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["us-west-2a"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + launch_configuration = "${aws_launch_configuration.foobar.name}" + service_linked_role_arn = "${data.aws_iam_role.autoscaling_service_linked_role.arn}" } +` const testAccAWSAutoscalingMetricsCollectionConfig_allMetricsCollected = ` data 
"aws_ami" "test_ami" { @@ -2167,3 +2760,531 @@ resource "aws_launch_configuration" "test" { instance_type = "t2.micro" } ` + +const testAccAWSAutoScalingGroupConfig_withLaunchTemplate = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "foobar" { + name_prefix = "foobar" + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +data "aws_availability_zones" "available" {} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + launch_template = { + id = "${aws_launch_template.foobar.id}" + version = "${aws_launch_template.foobar.default_version}" + } +} +` + +const testAccAWSAutoScalingGroupConfig_withLaunchTemplate_toLaunchConfig = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "foobar" { + name_prefix = "foobar" + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_launch_configuration" "test" { + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +data "aws_availability_zones" "available" {} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + launch_configuration = "${aws_launch_configuration.test.name}" +} +` + +const testAccAWSAutoScalingGroupConfig_withLaunchTemplate_toLaunchTemplateName = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "foobar" { + name_prefix = "foobar" + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_launch_configuration" "test" { + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_launch_template" "foobar2" { + name = "foobar2" + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +data "aws_availability_zones" "available" {} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + launch_template = { + name = "foobar2" + } +} +` + +const testAccAWSAutoScalingGroupConfig_withLaunchTemplate_toLaunchTemplateVersion = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "foobar" { + name_prefix = "foobar" + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_launch_configuration" "test" { + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +resource "aws_launch_template" "foobar2" { + name = "foobar2" + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.micro" +} + +data "aws_availability_zones" "available" {} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 
0 + min_size = 0 + launch_template = { + id = "${aws_launch_template.foobar.id}" + version = "$$Latest" + } +} +` + +func testAccAWSAutoScalingGroupConfig_LaunchTemplate_IAMInstanceProfile(rName string) string { + return fmt.Sprintf(` +data "aws_ami" "test" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +data "aws_availability_zones" "available" {} + +resource "aws_iam_role" "test" { + assume_role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":[\"ec2.amazonaws.com\"]},\"Action\":[\"sts:AssumeRole\"]}]}" + name = %q +} + +resource "aws_iam_instance_profile" "test" { + name = %q + roles = ["${aws_iam_role.test.name}"] +} + +resource "aws_launch_template" "test" { + image_id = "${data.aws_ami.test.id}" + instance_type = "t2.micro" + name = %q + + iam_instance_profile { + name = "${aws_iam_instance_profile.test.id}" + } +} + +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + launch_template { + id = "${aws_launch_template.test.id}" + } +} +`, rName, rName, rName, rName) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName string) string { + return fmt.Sprintf(` +data "aws_ami" "test" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +data "aws_availability_zones" "available" {} + +resource "aws_launch_template" "test" { + image_id = "${data.aws_ami.test.id}" + instance_type = "t3.micro" + name = %q +} +`, rName) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy(rName string) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_OnDemandAllocationStrategy(rName, onDemandAllocationStrategy string) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + instances_distribution { + on_demand_allocation_strategy = %q + } + + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName, onDemandAllocationStrategy) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_OnDemandBaseCapacity(rName string, onDemandBaseCapacity int) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 2 + min_size = 0 + name = %q + + 
mixed_instances_policy { + instances_distribution { + on_demand_base_capacity = %d + } + + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName, onDemandBaseCapacity) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_OnDemandPercentageAboveBaseCapacity(rName string, onDemandPercentageAboveBaseCapacity int) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + instances_distribution { + on_demand_percentage_above_base_capacity = %d + } + + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName, onDemandPercentageAboveBaseCapacity) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_SpotAllocationStrategy(rName, spotAllocationStrategy string) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + instances_distribution { + spot_allocation_strategy = %q + } + + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName, spotAllocationStrategy) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_SpotInstancePools(rName string, spotInstancePools int) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + instances_distribution { + spot_instance_pools = %d + } + + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName, spotInstancePools) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_InstancesDistribution_SpotMaxPrice(rName, spotMaxPrice string) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + instances_distribution { + spot_max_price = %q + } + + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName, spotMaxPrice) +} + +func 
testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_LaunchTemplate_LaunchTemplateSpecification_LaunchTemplateName(rName string) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + launch_template { + launch_template_specification { + launch_template_name = "${aws_launch_template.test.name}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_LaunchTemplate_LaunchTemplateSpecification_Version(rName, version string) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = %q + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = "t3.small" + } + } + } +} +`, rName, version) +} + +func testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_LaunchTemplate_Override_InstanceType(rName, instanceType string) string { + return testAccAWSAutoScalingGroupConfig_MixedInstancesPolicy_Base(rName) + fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + mixed_instances_policy { + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + } + + override { + instance_type = "t2.micro" + } + override { + instance_type = %q + } + } + } +} +`, rName, instanceType) +} diff --git a/aws/resource_aws_autoscaling_group_waiting.go b/aws/resource_aws_autoscaling_group_waiting.go index daa8faa8aae..8bde39baba9 100644 --- a/aws/resource_aws_autoscaling_group_waiting.go +++ b/aws/resource_aws_autoscaling_group_waiting.go @@ -8,7 +8,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/autoscaling" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -46,6 +45,9 @@ func waitForASGCapacity( return nil } elbis, err := getELBInstanceStates(g, meta) + if err != nil { + return resource.NonRetryableError(err) + } albis, err := getTargetGroupInstanceStates(g, meta) if err != nil { return resource.NonRetryableError(err) @@ -121,8 +123,7 @@ func waitForASGCapacity( recentStatus = fmt.Sprintf("(Failed to describe scaling activities: %s)", aErr) } - msg := fmt.Sprintf("{{err}}. Most recent activity: %s", recentStatus) - return errwrap.Wrapf(msg, err) + return fmt.Errorf("%s. 
Most recent activity: %s", err, recentStatus) } type capacitySatisfiedFunc func(*schema.ResourceData, int, int) (bool, string) diff --git a/aws/resource_aws_autoscaling_lifecycle_hook.go b/aws/resource_aws_autoscaling_lifecycle_hook.go index 2684a4ef343..500924eac48 100644 --- a/aws/resource_aws_autoscaling_lifecycle_hook.go +++ b/aws/resource_aws_autoscaling_lifecycle_hook.go @@ -1,6 +1,7 @@ package aws import ( + "fmt" "log" "strings" "time" @@ -8,7 +9,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/autoscaling" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -67,10 +67,10 @@ func resourceAwsAutoscalingLifecycleHookPutOp(conn *autoscaling.AutoScaling, par if err != nil { if awsErr, ok := err.(awserr.Error); ok { if strings.Contains(awsErr.Message(), "Unable to publish test message to notification target") { - return resource.RetryableError(errwrap.Wrapf("[DEBUG] Retrying AWS AutoScaling Lifecycle Hook: {{err}}", awsErr)) + return resource.RetryableError(fmt.Errorf("Retrying AWS AutoScaling Lifecycle Hook: %s", awsErr)) } } - return resource.NonRetryableError(errwrap.Wrapf("Error putting lifecycle hook: {{err}}", err)) + return resource.NonRetryableError(fmt.Errorf("Error putting lifecycle hook: %s", err)) } return nil }) @@ -128,10 +128,9 @@ func resourceAwsAutoscalingLifecycleHookDelete(d *schema.ResourceData, meta inte LifecycleHookName: aws.String(d.Get("name").(string)), } if _, err := autoscalingconn.DeleteLifecycleHook(&params); err != nil { - return errwrap.Wrapf("Autoscaling Lifecycle Hook: {{err}}", err) + return fmt.Errorf("Autoscaling Lifecycle Hook: %s", err) } - d.SetId("") return nil } @@ -179,7 +178,7 @@ func getAwsAutoscalingLifecycleHook(d *schema.ResourceData, meta interface{}) (* log.Printf("[DEBUG] AutoScaling Lifecycle Hook Describe Params: %#v", params) resp, err := autoscalingconn.DescribeLifecycleHooks(&params) if err != nil { - return nil, errwrap.Wrapf("Error retrieving lifecycle hooks: {{err}}", err) + return nil, fmt.Errorf("Error retrieving lifecycle hooks: %s", err) } // find lifecycle hooks diff --git a/aws/resource_aws_autoscaling_lifecycle_hook_test.go b/aws/resource_aws_autoscaling_lifecycle_hook_test.go index 580c2ed55f3..30eedf0151f 100644 --- a/aws/resource_aws_autoscaling_lifecycle_hook_test.go +++ b/aws/resource_aws_autoscaling_lifecycle_hook_test.go @@ -14,7 +14,7 @@ import ( func TestAccAWSAutoscalingLifecycleHook_basic(t *testing.T) { resourceName := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingLifecycleHookDestroy, @@ -36,7 +36,7 @@ func TestAccAWSAutoscalingLifecycleHook_basic(t *testing.T) { func TestAccAWSAutoscalingLifecycleHook_omitDefaultResult(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingLifecycleHookDestroy, diff --git a/aws/resource_aws_autoscaling_notification.go b/aws/resource_aws_autoscaling_notification.go index 5afcc664921..77fb744feed 100644 --- a/aws/resource_aws_autoscaling_notification.go +++
b/aws/resource_aws_autoscaling_notification.go @@ -18,20 +18,20 @@ func resourceAwsAutoscalingNotification() *schema.Resource { Delete: resourceAwsAutoscalingNotificationDelete, Schema: map[string]*schema.Schema{ - "topic_arn": &schema.Schema{ + "topic_arn": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "group_names": &schema.Schema{ + "group_names": { Type: schema.TypeSet, Required: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "notifications": &schema.Schema{ + "notifications": { Type: schema.TypeSet, Required: true, Elem: &schema.Schema{Type: schema.TypeString}, @@ -95,13 +95,13 @@ func resourceAwsAutoscalingNotificationRead(d *schema.ResourceData, meta interfa // Grab the keys here as the list of Groups var gList []string - for k, _ := range gRaw { + for k := range gRaw { gList = append(gList, k) } // Grab the keys here as the list of Types var nList []string - for k, _ := range nRaw { + for k := range nRaw { nList = append(nList, k) } @@ -166,7 +166,7 @@ func addNotificationConfigToGroupsWithTopic(conn *autoscaling.AutoScaling, group _, err := conn.PutNotificationConfiguration(opts) if err != nil { if awsErr, ok := err.(awserr.Error); ok { - return fmt.Errorf("[WARN] Error creating Autoscaling Group Notification for Group %s, error: \"%s\", code: \"%s\"", *a, awsErr.Message(), awsErr.Code()) + return fmt.Errorf("Error creating Autoscaling Group Notification for Group %s, error: \"%s\", code: \"%s\"", *a, awsErr.Message(), awsErr.Code()) } return err } @@ -183,7 +183,7 @@ func removeNotificationConfigToGroupsWithTopic(conn *autoscaling.AutoScaling, gr _, err := conn.DeleteNotificationConfiguration(opts) if err != nil { - return fmt.Errorf("[WARN] Error deleting notification configuration for ASG \"%s\", Topic ARN \"%s\"", *r, topic) + return fmt.Errorf("Error deleting notification configuration for ASG \"%s\", Topic ARN \"%s\"", *r, topic) } } return nil diff --git a/aws/resource_aws_autoscaling_notification_test.go b/aws/resource_aws_autoscaling_notification_test.go index b2ca2b67839..ff059eaa65c 100644 --- a/aws/resource_aws_autoscaling_notification_test.go +++ b/aws/resource_aws_autoscaling_notification_test.go @@ -17,12 +17,12 @@ func TestAccAWSASGNotification_basic(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckASGNDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccASGNotificationConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckASGNotificationExists("aws_autoscaling_notification.example", []string{"foobar1-terraform-test-" + rName}, &asgn), @@ -38,12 +38,12 @@ func TestAccAWSASGNotification_update(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckASGNDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccASGNotificationConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckASGNotificationExists("aws_autoscaling_notification.example", []string{"foobar1-terraform-test-" + rName}, &asgn), @@ -51,7 +51,7 @@ func TestAccAWSASGNotification_update(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccASGNotificationConfig_update(rName), Check: resource.ComposeTestCheckFunc( 
testAccCheckASGNotificationExists("aws_autoscaling_notification.example", []string{"foobar1-terraform-test-" + rName, "barfoo-terraform-test-" + rName}, &asgn), @@ -65,12 +65,12 @@ func TestAccAWSASGNotification_update(t *testing.T) { func TestAccAWSASGNotification_Pagination(t *testing.T) { var asgn autoscaling.DescribeNotificationConfigurationsOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckASGNDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccASGNotificationConfig_pagination, Check: resource.ComposeTestCheckFunc( testAccCheckASGNotificationExists("aws_autoscaling_notification.example", @@ -184,13 +184,13 @@ func testAccCheckAWSASGNotificationAttributes(n string, asgn *autoscaling.Descri // Grab the keys here as the list of Groups var gList []string - for k, _ := range gRaw { + for k := range gRaw { gList = append(gList, k) } // Grab the keys here as the list of Types var nList []string - for k, _ := range nRaw { + for k := range nRaw { nList = append(nList, k) } diff --git a/aws/resource_aws_autoscaling_policy.go b/aws/resource_aws_autoscaling_policy.go index 303c863b179..aafccced10e 100644 --- a/aws/resource_aws_autoscaling_policy.go +++ b/aws/resource_aws_autoscaling_policy.go @@ -21,73 +21,78 @@ func resourceAwsAutoscalingPolicy() *schema.Resource { Delete: resourceAwsAutoscalingPolicyDelete, Schema: map[string]*schema.Schema{ - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "adjustment_type": &schema.Schema{ + "adjustment_type": { Type: schema.TypeString, Optional: true, }, - "autoscaling_group_name": &schema.Schema{ + "autoscaling_group_name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy_type": &schema.Schema{ + "policy_type": { Type: schema.TypeString, Optional: true, Default: "SimpleScaling", // preserve AWS's default to make validation easier. 
+ ValidateFunc: validation.StringInSlice([]string{ + "SimpleScaling", + "StepScaling", + "TargetTrackingScaling", + }, false), }, - "cooldown": &schema.Schema{ + "cooldown": { Type: schema.TypeInt, Optional: true, }, - "estimated_instance_warmup": &schema.Schema{ + "estimated_instance_warmup": { Type: schema.TypeInt, Optional: true, }, - "metric_aggregation_type": &schema.Schema{ + "metric_aggregation_type": { Type: schema.TypeString, Optional: true, Computed: true, }, - "min_adjustment_magnitude": &schema.Schema{ + "min_adjustment_magnitude": { Type: schema.TypeInt, Optional: true, ValidateFunc: validation.IntAtLeast(1), }, - "min_adjustment_step": &schema.Schema{ + "min_adjustment_step": { Type: schema.TypeInt, Optional: true, Deprecated: "Use min_adjustment_magnitude instead, otherwise you may see a perpetual diff on this resource.", ConflictsWith: []string{"min_adjustment_magnitude"}, }, - "scaling_adjustment": &schema.Schema{ + "scaling_adjustment": { Type: schema.TypeInt, Optional: true, ConflictsWith: []string{"step_adjustment"}, }, - "step_adjustment": &schema.Schema{ + "step_adjustment": { Type: schema.TypeSet, Optional: true, ConflictsWith: []string{"scaling_adjustment"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "metric_interval_lower_bound": &schema.Schema{ + "metric_interval_lower_bound": { Type: schema.TypeString, Optional: true, }, - "metric_interval_upper_bound": &schema.Schema{ + "metric_interval_upper_bound": { Type: schema.TypeString, Optional: true, }, - "scaling_adjustment": &schema.Schema{ + "scaling_adjustment": { Type: schema.TypeInt, Required: true, }, @@ -95,77 +100,77 @@ func resourceAwsAutoscalingPolicy() *schema.Resource { }, Set: resourceAwsAutoscalingScalingAdjustmentHash, }, - "target_tracking_configuration": &schema.Schema{ + "target_tracking_configuration": { Type: schema.TypeList, Optional: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "predefined_metric_specification": &schema.Schema{ + "predefined_metric_specification": { Type: schema.TypeList, Optional: true, MaxItems: 1, ConflictsWith: []string{"target_tracking_configuration.0.customized_metric_specification"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "predefined_metric_type": &schema.Schema{ + "predefined_metric_type": { Type: schema.TypeString, Required: true, }, - "resource_label": &schema.Schema{ + "resource_label": { Type: schema.TypeString, Optional: true, }, }, }, }, - "customized_metric_specification": &schema.Schema{ + "customized_metric_specification": { Type: schema.TypeList, Optional: true, MaxItems: 1, ConflictsWith: []string{"target_tracking_configuration.0.predefined_metric_specification"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "metric_dimension": &schema.Schema{ + "metric_dimension": { Type: schema.TypeList, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Required: true, }, }, }, }, - "metric_name": &schema.Schema{ + "metric_name": { Type: schema.TypeString, Required: true, }, - "namespace": &schema.Schema{ + "namespace": { Type: schema.TypeString, Required: true, }, - "statistic": &schema.Schema{ + "statistic": { Type: schema.TypeString, Required: true, }, - "unit": &schema.Schema{ + "unit": { Type: schema.TypeString, Optional: true, }, }, }, }, - "target_value": &schema.Schema{ + "target_value": { Type: 
schema.TypeFloat, Required: true, }, - "disable_scale_in": &schema.Schema{ + "disable_scale_in": { Type: schema.TypeBool, Optional: true, Default: false, @@ -181,11 +186,11 @@ func resourceAwsAutoscalingPolicyCreate(d *schema.ResourceData, meta interface{} autoscalingconn := meta.(*AWSClient).autoscalingconn params, err := getAwsAutoscalingPutScalingPolicyInput(d) + log.Printf("[DEBUG] AutoScaling PutScalingPolicy on Create: %#v", params) if err != nil { return err } - log.Printf("[DEBUG] AutoScaling PutScalingPolicy: %#v", params) resp, err := autoscalingconn.PutScalingPolicy(&params) if err != nil { return fmt.Errorf("Error putting scaling policy: %s", err) @@ -236,11 +241,11 @@ func resourceAwsAutoscalingPolicyUpdate(d *schema.ResourceData, meta interface{} autoscalingconn := meta.(*AWSClient).autoscalingconn params, inputErr := getAwsAutoscalingPutScalingPolicyInput(d) + log.Printf("[DEBUG] AutoScaling PutScalingPolicy on Update: %#v", params) if inputErr != nil { return inputErr } - log.Printf("[DEBUG] Autoscaling Update Scaling Policy: %#v", params) _, err := autoscalingconn.PutScalingPolicy(&params) if err != nil { return err @@ -268,7 +273,6 @@ func resourceAwsAutoscalingPolicyDelete(d *schema.ResourceData, meta interface{} return fmt.Errorf("Autoscaling Scaling Policy: %s ", err) } - d.SetId("") return nil } @@ -281,95 +285,84 @@ func getAwsAutoscalingPutScalingPolicyInput(d *schema.ResourceData) (autoscaling PolicyName: aws.String(d.Get("name").(string)), } - if v, ok := d.GetOk("adjustment_type"); ok { + // get policy_type first as parameter support depends on policy type + policyType := d.Get("policy_type") + params.PolicyType = aws.String(policyType.(string)) + + // This parameter is supported if the policy type is SimpleScaling or StepScaling. + if v, ok := d.GetOk("adjustment_type"); ok && (policyType == "SimpleScaling" || policyType == "StepScaling") { params.AdjustmentType = aws.String(v.(string)) } + // This parameter is supported if the policy type is SimpleScaling. if v, ok := d.GetOkExists("cooldown"); ok { + // 0 is allowed as placeholder even if policyType is not supported params.Cooldown = aws.Int64(int64(v.(int))) + if v.(int) != 0 && policyType != "SimpleScaling" { + return params, fmt.Errorf("cooldown is only supported for policy type SimpleScaling") + } } + // This parameter is supported if the policy type is StepScaling or TargetTrackingScaling. if v, ok := d.GetOkExists("estimated_instance_warmup"); ok { - params.EstimatedInstanceWarmup = aws.Int64(int64(v.(int))) + // 0 is NOT allowed as placeholder if policyType is not supported + if policyType == "StepScaling" || policyType == "TargetTrackingScaling" { + params.EstimatedInstanceWarmup = aws.Int64(int64(v.(int))) + } + if v.(int) != 0 && policyType != "StepScaling" && policyType != "TargetTrackingScaling" { + return params, fmt.Errorf("estimated_instance_warmup is only supported for policy type StepScaling and TargetTrackingScaling") + } } - if v, ok := d.GetOk("metric_aggregation_type"); ok { + // This parameter is supported if the policy type is StepScaling. + if v, ok := d.GetOk("metric_aggregation_type"); ok && policyType == "StepScaling" { params.MetricAggregationType = aws.String(v.(string)) } - if v, ok := d.GetOk("policy_type"); ok { - params.PolicyType = aws.String(v.(string)) + // MinAdjustmentMagnitude is supported if the policy type is SimpleScaling or StepScaling. + // MinAdjustmentStep is available for backward compatibility. Use MinAdjustmentMagnitude instead.
+ if v, ok := d.GetOkExists("min_adjustment_magnitude"); ok && v.(int) != 0 && (policyType == "SimpleScaling" || policyType == "StepScaling") { + params.MinAdjustmentMagnitude = aws.Int64(int64(v.(int))) + } else if v, ok := d.GetOkExists("min_adjustment_step"); ok && v.(int) != 0 && (policyType == "SimpleScaling" || policyType == "StepScaling") { + params.MinAdjustmentStep = aws.Int64(int64(v.(int))) } + // This parameter is required if the policy type is SimpleScaling and not supported otherwise. //if policy_type=="SimpleScaling" then scaling_adjustment is required and 0 is allowed - if v, ok := d.GetOkExists("scaling_adjustment"); ok || *params.PolicyType == "SimpleScaling" { - params.ScalingAdjustment = aws.Int64(int64(v.(int))) + if v, ok := d.GetOkExists("scaling_adjustment"); ok { + // 0 is NOT allowed as placeholder if policyType is not supported + if policyType == "SimpleScaling" { + params.ScalingAdjustment = aws.Int64(int64(v.(int))) + } + if v.(int) != 0 && policyType != "SimpleScaling" { + return params, fmt.Errorf("scaling_adjustment is only supported for policy type SimpleScaling") + } + } else if !ok && policyType == "SimpleScaling" { + return params, fmt.Errorf("scaling_adjustment is required for policy type SimpleScaling") } + // This parameter is required if the policy type is StepScaling and not supported otherwise. if v, ok := d.GetOk("step_adjustment"); ok { steps, err := expandStepAdjustments(v.(*schema.Set).List()) if err != nil { return params, fmt.Errorf("metric_interval_lower_bound and metric_interval_upper_bound must be strings!") } params.StepAdjustments = steps + if len(steps) != 0 && policyType != "StepScaling" { + return params, fmt.Errorf("step_adjustment is only supported for policy type StepScaling") + } + } else if !ok && policyType == "StepScaling" { + return params, fmt.Errorf("step_adjustment is required for policy type StepScaling") } - if v, ok := d.GetOkExists("min_adjustment_magnitude"); ok { - // params.MinAdjustmentMagnitude = aws.Int64(int64(d.Get("min_adjustment_magnitude").(int))) - params.MinAdjustmentMagnitude = aws.Int64(int64(v.(int))) - } else if v, ok := d.GetOkExists("min_adjustment_step"); ok { - // params.MinAdjustmentStep = aws.Int64(int64(d.Get("min_adjustment_step").(int))) - params.MinAdjustmentStep = aws.Int64(int64(v.(int))) - } - + // This parameter is required if the policy type is TargetTrackingScaling and not supported otherwise. if v, ok := d.GetOk("target_tracking_configuration"); ok { params.TargetTrackingConfiguration = expandTargetTrackingConfiguration(v.([]interface{})) - } - - // Validate our final input to confirm it won't error when sent to AWS. - // First, SimpleScaling policy types... - if *params.PolicyType == "SimpleScaling" && params.StepAdjustments != nil { - return params, fmt.Errorf("SimpleScaling policy types cannot use step_adjustments!") - } - if *params.PolicyType == "SimpleScaling" && params.MetricAggregationType != nil { - return params, fmt.Errorf("SimpleScaling policy types cannot use metric_aggregation_type!") - } - if *params.PolicyType == "SimpleScaling" && params.EstimatedInstanceWarmup != nil { - return params, fmt.Errorf("SimpleScaling policy types cannot use estimated_instance_warmup!") - } - if *params.PolicyType == "SimpleScaling" && params.TargetTrackingConfiguration != nil { - return params, fmt.Errorf("SimpleScaling policy types cannot use target_tracking_configuration!") - } - - // Second, StepScaling policy types... 
- if *params.PolicyType == "StepScaling" && params.ScalingAdjustment != nil { - return params, fmt.Errorf("StepScaling policy types cannot use scaling_adjustment!") - } - if *params.PolicyType == "StepScaling" && params.Cooldown != nil { - return params, fmt.Errorf("StepScaling policy types cannot use cooldown!") - } - if *params.PolicyType == "StepScaling" && params.TargetTrackingConfiguration != nil { - return params, fmt.Errorf("StepScaling policy types cannot use target_tracking_configuration!") - } - - // Third, TargetTrackingScaling policy types... - if *params.PolicyType == "TargetTrackingScaling" && params.AdjustmentType != nil { - return params, fmt.Errorf("TargetTrackingScaling policy types cannot use adjustment_type!") - } - if *params.PolicyType == "TargetTrackingScaling" && params.Cooldown != nil { - return params, fmt.Errorf("TargetTrackingScaling policy types cannot use cooldown!") - } - if *params.PolicyType == "TargetTrackingScaling" && params.MetricAggregationType != nil { - return params, fmt.Errorf("TargetTrackingScaling policy types cannot use metric_aggregation_type!") - } - if *params.PolicyType == "TargetTrackingScaling" && params.MinAdjustmentMagnitude != nil { - return params, fmt.Errorf("TargetTrackingScaling policy types cannot use min_adjustment_magnitude!") - } - if *params.PolicyType == "TargetTrackingScaling" && params.ScalingAdjustment != nil { - return params, fmt.Errorf("TargetTrackingScaling policy types cannot use scaling_adjustment!") - } - if *params.PolicyType == "TargetTrackingScaling" && params.StepAdjustments != nil { - return params, fmt.Errorf("TargetTrackingScaling policy types cannot use step_adjustments!") + if policyType != "TargetTrackingScaling" { + return params, fmt.Errorf("target_tracking_configuration is only supported for policy type TargetTrackingScaling") + } + } else if !ok && policyType == "TargetTrackingScaling" { + return params, fmt.Errorf("target_tracking_configuration is required for policy type TargetTrackingScaling") } return params, nil diff --git a/aws/resource_aws_autoscaling_policy_test.go b/aws/resource_aws_autoscaling_policy_test.go index 3e4ad5d3cf9..60229d9b3bc 100644 --- a/aws/resource_aws_autoscaling_policy_test.go +++ b/aws/resource_aws_autoscaling_policy_test.go @@ -18,30 +18,59 @@ import ( func TestAccAWSAutoscalingPolicy_basic(t *testing.T) { var policy autoscaling.ScalingPolicy - name := fmt.Sprintf("terraform-test-foobar-%s", acctest.RandString(5)) + name := fmt.Sprintf("terraform-testacc-asp-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAutoscalingPolicyConfig(name), + { + Config: testAccAWSAutoscalingPolicyConfig_basic(name), Check: resource.ComposeTestCheckFunc( testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_simple", &policy), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "adjustment_type", "ChangeInCapacity"), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "policy_type", "SimpleScaling"), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "cooldown", "300"), - resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "name", "foobar_simple"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "name", name+"-foobar_simple"), 
resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "scaling_adjustment", "2"), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "autoscaling_group_name", name), testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_step", &policy), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "adjustment_type", "ChangeInCapacity"), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "policy_type", "StepScaling"), - resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "name", "foobar_step"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "name", name+"-foobar_step"), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "metric_aggregation_type", "Minimum"), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "estimated_instance_warmup", "200"), resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "autoscaling_group_name", name), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "step_adjustment.2042107634.scaling_adjustment", "1"), + testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_target_tracking", &policy), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "policy_type", "TargetTrackingScaling"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "name", name+"-foobar_target_tracking"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "autoscaling_group_name", name), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.#", "1"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.0.customized_metric_specification.#", "0"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.0.predefined_metric_specification.#", "1"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.0.predefined_metric_specification.0.predefined_metric_type", "ASGAverageCPUUtilization"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.0.target_value", "40"), + ), + }, + { + Config: testAccAWSAutoscalingPolicyConfig_basicUpdate(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_simple", &policy), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "policy_type", "SimpleScaling"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "cooldown", "30"), + testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_step", &policy), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "policy_type", "StepScaling"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "estimated_instance_warmup", "20"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_step", "step_adjustment.997979330.scaling_adjustment", "10"), + testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_target_tracking", &policy), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "policy_type", "TargetTrackingScaling"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.#", "1"), + 
resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.0.customized_metric_specification.#", "1"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.0.customized_metric_specification.0.statistic", "Average"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.0.predefined_metric_specification.#", "0"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_target_tracking", "target_tracking_configuration.0.target_value", "70"), ), }, }, @@ -51,15 +80,15 @@ func TestAccAWSAutoscalingPolicy_basic(t *testing.T) { func TestAccAWSAutoscalingPolicy_disappears(t *testing.T) { var policy autoscaling.ScalingPolicy - name := fmt.Sprintf("terraform-test-foobar-%s", acctest.RandString(5)) + name := fmt.Sprintf("terraform-testacc-asp-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAutoscalingPolicyConfig(name), + { + Config: testAccAWSAutoscalingPolicyConfig_basic(name), Check: resource.ComposeTestCheckFunc( testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_simple", &policy), testAccCheckScalingPolicyDisappears(&policy), @@ -111,14 +140,14 @@ func testAccCheckScalingPolicyDisappears(conf *autoscaling.ScalingPolicy) resour func TestAccAWSAutoscalingPolicy_upgrade(t *testing.T) { var policy autoscaling.ScalingPolicy - name := acctest.RandString(5) + name := fmt.Sprintf("terraform-testacc-asp-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoscalingPolicyConfig_upgrade_614(name), Check: resource.ComposeTestCheckFunc( testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_simple", &policy), @@ -127,8 +156,7 @@ func TestAccAWSAutoscalingPolicy_upgrade(t *testing.T) { ), ExpectNonEmptyPlan: true, }, - - resource.TestStep{ + { Config: testAccAWSAutoscalingPolicyConfig_upgrade_615(name), Check: resource.ComposeTestCheckFunc( testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_simple", &policy), @@ -140,17 +168,39 @@ func TestAccAWSAutoscalingPolicy_upgrade(t *testing.T) { }) } +func TestAccAWSAutoscalingPolicy_SimpleScalingStepAdjustment(t *testing.T) { + var policy autoscaling.ScalingPolicy + + name := fmt.Sprintf("terraform-testacc-asp-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAutoscalingPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAutoscalingPolicyConfig_SimpleScalingStepAdjustment(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_simple", &policy), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "adjustment_type", "ExactCapacity"), + resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "scaling_adjustment", "0"), + ), + }, + }, + }) +} + func TestAccAWSAutoscalingPolicy_TargetTrack_Predefined(t *testing.T) { var 
policy autoscaling.ScalingPolicy - name := acctest.RandString(5) + name := fmt.Sprintf("terraform-testacc-asp-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAwsAutoscalingPolicyConfig_TargetTracking_Predefined(name), Check: resource.ComposeTestCheckFunc( testAccCheckScalingPolicyExists("aws_autoscaling_policy.test", &policy), @@ -163,14 +213,14 @@ func TestAccAWSAutoscalingPolicy_TargetTrack_Predefined(t *testing.T) { func TestAccAWSAutoscalingPolicy_TargetTrack_Custom(t *testing.T) { var policy autoscaling.ScalingPolicy - name := acctest.RandString(5) + name := fmt.Sprintf("terraform-testacc-asp-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAwsAutoscalingPolicyConfig_TargetTracking_Custom(name), Check: resource.ComposeTestCheckFunc( testAccCheckScalingPolicyExists("aws_autoscaling_policy.test", &policy), @@ -184,12 +234,12 @@ func TestAccAWSAutoscalingPolicy_zerovalue(t *testing.T) { var simplepolicy autoscaling.ScalingPolicy var steppolicy autoscaling.ScalingPolicy - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAutoscalingPolicyConfig_zerovalue(acctest.RandString(5)), Check: resource.ComposeTestCheckFunc( testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_simple", &simplepolicy), @@ -256,342 +306,237 @@ func testAccCheckAWSAutoscalingPolicyDestroy(s *terraform.State) error { return nil } -func testAccAWSAutoscalingPolicyConfig(name string) string { +func testAccAWSAutoscalingPolicyConfig_base(name string) string { return fmt.Sprintf(` -resource "aws_launch_configuration" "foobar" { - name = "%s" - image_id = "ami-21f78e11" - instance_type = "t1.micro" -} - -resource "aws_autoscaling_group" "foobar" { - availability_zones = ["us-west-2a"] - name = "%s" - max_size = 5 - min_size = 2 - health_check_grace_period = 300 - health_check_type = "ELB" - force_delete = true - termination_policies = ["OldestInstance"] - launch_configuration = "${aws_launch_configuration.foobar.name}" - tag { - key = "Foo" - value = "foo-bar" - propagate_at_launch = true - } +data "aws_ami" "amzn" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn2-ami-hvm-*"] + } } -resource "aws_autoscaling_policy" "foobar_simple" { - name = "foobar_simple" - adjustment_type = "ChangeInCapacity" - cooldown = 300 - policy_type = "SimpleScaling" - scaling_adjustment = 2 - autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" +data "aws_availability_zones" "available" {} + +resource "aws_launch_configuration" "test" { + name = "%s" + image_id = "${data.aws_ami.amzn.id}" + instance_type = "t2.micro" } -resource "aws_autoscaling_policy" "foobar_step" { - name = "foobar_step" - adjustment_type = "ChangeInCapacity" - policy_type = "StepScaling" - estimated_instance_warmup = 200 - metric_aggregation_type = "Minimum" - step_adjustment { - 
scaling_adjustment = 1 - metric_interval_lower_bound = 2.0 - } - autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names}"] + name = "%s" + max_size = 0 + min_size = 0 + force_delete = true + launch_configuration = "${aws_launch_configuration.test.name}" } `, name, name) } -func testAccAWSAutoscalingPolicyConfig_upgrade_614(name string) string { - return fmt.Sprintf(` -resource "aws_launch_configuration" "foobar" { - name = "tf-test-%s" - image_id = "ami-21f78e11" - instance_type = "t1.micro" -} - -resource "aws_autoscaling_group" "foobar" { - availability_zones = ["us-west-2a"] - name = "terraform-test-%s" - max_size = 5 - min_size = 1 - health_check_grace_period = 300 - health_check_type = "ELB" - force_delete = true - termination_policies = ["OldestInstance"] - launch_configuration = "${aws_launch_configuration.foobar.name}" - - tag { - key = "Foo" - value = "foo-bar" - propagate_at_launch = true - } -} - +func testAccAWSAutoscalingPolicyConfig_basic(name string) string { + return testAccAWSAutoscalingPolicyConfig_base(name) + fmt.Sprintf(` resource "aws_autoscaling_policy" "foobar_simple" { - name = "foobar_simple_%s" - adjustment_type = "PercentChangeInCapacity" + name = "%s-foobar_simple" + adjustment_type = "ChangeInCapacity" cooldown = 300 policy_type = "SimpleScaling" scaling_adjustment = 2 - min_adjustment_step = 1 - autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" -} -`, name, name, name) + autoscaling_group_name = "${aws_autoscaling_group.test.name}" } -func testAccAWSAutoscalingPolicyConfig_upgrade_615(name string) string { - return fmt.Sprintf(` -resource "aws_launch_configuration" "foobar" { - name = "tf-test-%s" - image_id = "ami-21f78e11" - instance_type = "t1.micro" -} - -resource "aws_autoscaling_group" "foobar" { - availability_zones = ["us-west-2a"] - name = "terraform-test-%s" - max_size = 5 - min_size = 1 - health_check_grace_period = 300 - health_check_type = "ELB" - force_delete = true - termination_policies = ["OldestInstance"] - launch_configuration = "${aws_launch_configuration.foobar.name}" - - tag { - key = "Foo" - value = "foo-bar" - propagate_at_launch = true +resource "aws_autoscaling_policy" "foobar_step" { + name = "%s-foobar_step" + adjustment_type = "ChangeInCapacity" + policy_type = "StepScaling" + estimated_instance_warmup = 200 + metric_aggregation_type = "Minimum" + + step_adjustment { + scaling_adjustment = 1 + metric_interval_lower_bound = 2.0 } + + autoscaling_group_name = "${aws_autoscaling_group.test.name}" } -resource "aws_autoscaling_policy" "foobar_simple" { - name = "foobar_simple_%s" - adjustment_type = "PercentChangeInCapacity" - cooldown = 300 - policy_type = "SimpleScaling" - scaling_adjustment = 2 - min_adjustment_magnitude = 1 - autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" +resource "aws_autoscaling_policy" "foobar_target_tracking" { + name = "%s-foobar_target_tracking" + policy_type = "TargetTrackingScaling" + autoscaling_group_name = "${aws_autoscaling_group.test.name}" + + target_tracking_configuration { + predefined_metric_specification { + predefined_metric_type = "ASGAverageCPUUtilization" + } + + target_value = 40.0 + } } `, name, name, name) } -func TestAccAWSAutoscalingPolicy_SimpleScalingStepAdjustment(t *testing.T) { - var policy autoscaling.ScalingPolicy +func testAccAWSAutoscalingPolicyConfig_basicUpdate(name string) string { + return 
testAccAWSAutoscalingPolicyConfig_base(name) + fmt.Sprintf(` +resource "aws_autoscaling_policy" "foobar_simple" { + name = "%s-foobar_simple" + adjustment_type = "ChangeInCapacity" + cooldown = 30 + policy_type = "SimpleScaling" + scaling_adjustment = 2 + autoscaling_group_name = "${aws_autoscaling_group.test.name}" +} - name := acctest.RandString(5) +resource "aws_autoscaling_policy" "foobar_step" { + name = "%s-foobar_step" + adjustment_type = "ChangeInCapacity" + policy_type = "StepScaling" + estimated_instance_warmup = 20 + metric_aggregation_type = "Minimum" + + step_adjustment { + scaling_adjustment = 10 + metric_interval_lower_bound = 2.0 + } - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSAutoscalingPolicyDestroy, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAutoscalingPolicyConfig_SimpleScalingStepAdjustment(name), - Check: resource.ComposeTestCheckFunc( - testAccCheckScalingPolicyExists("aws_autoscaling_policy.foobar_simple", &policy), - resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "adjustment_type", "ExactCapacity"), - resource.TestCheckResourceAttr("aws_autoscaling_policy.foobar_simple", "scaling_adjustment", "0"), - ), - }, - }, - }) + autoscaling_group_name = "${aws_autoscaling_group.test.name}" } -func testAccAWSAutoscalingPolicyConfig_SimpleScalingStepAdjustment(name string) string { - return fmt.Sprintf(` -resource "aws_launch_configuration" "foobar" { - name = "tf-test-%s" - image_id = "ami-21f78e11" - instance_type = "t1.micro" -} - -resource "aws_autoscaling_group" "foobar" { - availability_zones = ["us-west-2a"] - name = "terraform-test-%s" - max_size = 5 - min_size = 0 - health_check_grace_period = 300 - health_check_type = "ELB" - force_delete = true - termination_policies = ["OldestInstance"] - launch_configuration = "${aws_launch_configuration.foobar.name}" - - tag { - key = "Foo" - value = "foo-bar" - propagate_at_launch = true +resource "aws_autoscaling_policy" "foobar_target_tracking" { + name = "%s-foobar_target_tracking" + policy_type = "TargetTrackingScaling" + autoscaling_group_name = "${aws_autoscaling_group.test.name}" + + target_tracking_configuration { + customized_metric_specification { + metric_dimension { + name = "fuga" + value = "fuga" + } + + metric_name = "hoge" + namespace = "hoge" + statistic = "Average" + } + + target_value = 70.0 } } +`, name, name, name) +} +func testAccAWSAutoscalingPolicyConfig_upgrade_614(name string) string { + return testAccAWSAutoscalingPolicyConfig_base(name) + fmt.Sprintf(` resource "aws_autoscaling_policy" "foobar_simple" { - name = "foobar_simple_%s" - adjustment_type = "ExactCapacity" - cooldown = 300 - policy_type = "SimpleScaling" - scaling_adjustment = 0 - autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" + name = "%s-foobar_simple" + adjustment_type = "PercentChangeInCapacity" + cooldown = 300 + policy_type = "SimpleScaling" + scaling_adjustment = 2 + min_adjustment_step = 1 + autoscaling_group_name = "${aws_autoscaling_group.test.name}" } -`, name, name, name) +`, name) } -func testAccAwsAutoscalingPolicyConfig_TargetTracking_Predefined(rName string) string { - return fmt.Sprintf(` -data "aws_ami" "amzn" { - most_recent = true - owners = ["amazon"] - - filter { - name = "name" - values = ["amzn-ami-hvm-*-x86_64-gp2"] - } +func testAccAWSAutoscalingPolicyConfig_upgrade_615(name string) string { + return testAccAWSAutoscalingPolicyConfig_base(name) + 
fmt.Sprintf(` +resource "aws_autoscaling_policy" "foobar_simple" { + name = "%s-foobar_simple" + adjustment_type = "PercentChangeInCapacity" + cooldown = 300 + policy_type = "SimpleScaling" + scaling_adjustment = 2 + min_adjustment_magnitude = 1 + autoscaling_group_name = "${aws_autoscaling_group.test.name}" } - -data "aws_availability_zones" "test" {} - -resource "aws_launch_configuration" "test" { - name = "tf-test-%s" - image_id = "${data.aws_ami.amzn.id}" - instance_type = "t2.micro" +`, name) } -resource "aws_autoscaling_group" "test" { - availability_zones = ["${data.aws_availability_zones.test.names[0]}"] - name = "tf-test-%s" - max_size = 5 - min_size = 0 - health_check_grace_period = 300 - health_check_type = "ELB" - force_delete = true - termination_policies = ["OldestInstance"] - launch_configuration = "${aws_launch_configuration.test.name}" +func testAccAWSAutoscalingPolicyConfig_SimpleScalingStepAdjustment(name string) string { + return testAccAWSAutoscalingPolicyConfig_base(name) + fmt.Sprintf(` +resource "aws_autoscaling_policy" "foobar_simple" { + name = "%s-foobar_simple" + adjustment_type = "ExactCapacity" + cooldown = 300 + policy_type = "SimpleScaling" + scaling_adjustment = 0 + autoscaling_group_name = "${aws_autoscaling_group.test.name}" +} +`, name) } +func testAccAwsAutoscalingPolicyConfig_TargetTracking_Predefined(name string) string { + return testAccAWSAutoscalingPolicyConfig_base(name) + fmt.Sprintf(` resource "aws_autoscaling_policy" "test" { - name = "tf-as-%s" - policy_type = "TargetTrackingScaling" - autoscaling_group_name = "${aws_autoscaling_group.test.name}" + name = "%s-test" + policy_type = "TargetTrackingScaling" + autoscaling_group_name = "${aws_autoscaling_group.test.name}" target_tracking_configuration { predefined_metric_specification { predefined_metric_type = "ASGAverageCPUUtilization" } - target_value = 40.0 - } -} -`, rName, rName, rName) -} - -func testAccAwsAutoscalingPolicyConfig_TargetTracking_Custom(rName string) string { - return fmt.Sprintf(` -data "aws_ami" "amzn" { - most_recent = true - owners = ["amazon"] - filter { - name = "name" - values = ["amzn-ami-hvm-*-x86_64-gp2"] + target_value = 40.0 } } - -data "aws_availability_zones" "test" {} - -resource "aws_launch_configuration" "test" { - name = "tf-test-%s" - image_id = "${data.aws_ami.amzn.id}" - instance_type = "t2.micro" -} - -resource "aws_autoscaling_group" "test" { - availability_zones = ["${data.aws_availability_zones.test.names[0]}"] - name = "tf-test-%s" - max_size = 5 - min_size = 0 - health_check_grace_period = 300 - health_check_type = "ELB" - force_delete = true - termination_policies = ["OldestInstance"] - launch_configuration = "${aws_launch_configuration.test.name}" +`, name) } +func testAccAwsAutoscalingPolicyConfig_TargetTracking_Custom(name string) string { + return testAccAWSAutoscalingPolicyConfig_base(name) + fmt.Sprintf(` resource "aws_autoscaling_policy" "test" { - name = "tf-as-%s" - policy_type = "TargetTrackingScaling" - autoscaling_group_name = "${aws_autoscaling_group.test.name}" + name = "%s-test" + policy_type = "TargetTrackingScaling" + autoscaling_group_name = "${aws_autoscaling_group.test.name}" target_tracking_configuration { customized_metric_specification { metric_dimension { - name = "fuga" + name = "fuga" value = "fuga" } + metric_name = "hoge" - namespace = "hoge" - statistic = "Average" + namespace = "hoge" + statistic = "Average" } + target_value = 40.0 } } -`, rName, rName, rName) +`, name) } func 
testAccAWSAutoscalingPolicyConfig_zerovalue(name string) string { - return fmt.Sprintf(` -data "aws_ami" "amzn" { - most_recent = true - owners = ["amazon"] - filter { - name = "name" - values = ["amzn-ami-hvm-*-x86_64-gp2"] - } -} - -resource "aws_launch_configuration" "foobar" { - name = "tf-test-%s" - image_id = "${data.aws_ami.amzn.id}" - instance_type = "t2.micro" -} - -data "aws_availability_zones" "test" {} - -resource "aws_autoscaling_group" "foobar" { - availability_zones = ["${data.aws_availability_zones.test.names[0]}"] - name = "terraform-test-%s" - max_size = 5 - min_size = 0 - health_check_grace_period = 300 - health_check_type = "ELB" - force_delete = true - termination_policies = ["OldestInstance"] - launch_configuration = "${aws_launch_configuration.foobar.name}" -} - + return testAccAWSAutoscalingPolicyConfig_base(name) + fmt.Sprintf(` resource "aws_autoscaling_policy" "foobar_simple" { - name = "foobar_simple_%s" - adjustment_type = "ExactCapacity" - cooldown = 0 - policy_type = "SimpleScaling" - scaling_adjustment = 0 - autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" + name = "%s-foobar_simple" + adjustment_type = "ExactCapacity" + cooldown = 0 + policy_type = "SimpleScaling" + scaling_adjustment = 0 + autoscaling_group_name = "${aws_autoscaling_group.test.name}" } resource "aws_autoscaling_policy" "foobar_step" { - name = "foobar_step_%s" - adjustment_type = "PercentChangeInCapacity" - policy_type = "StepScaling" + name = "%s-foobar_step" + adjustment_type = "PercentChangeInCapacity" + policy_type = "StepScaling" estimated_instance_warmup = 0 - metric_aggregation_type = "Minimum" - step_adjustment { - scaling_adjustment = 1 + metric_aggregation_type = "Minimum" + + step_adjustment { + scaling_adjustment = 1 metric_interval_lower_bound = 2.0 - } + } + min_adjustment_magnitude = 1 - autoscaling_group_name = "${aws_autoscaling_group.foobar.name}" + autoscaling_group_name = "${aws_autoscaling_group.test.name}" } -`, name, name, name, name) +`, name, name) } diff --git a/aws/resource_aws_autoscaling_schedule_test.go b/aws/resource_aws_autoscaling_schedule_test.go index c84b526480e..c84a1d8090a 100644 --- a/aws/resource_aws_autoscaling_schedule_test.go +++ b/aws/resource_aws_autoscaling_schedule_test.go @@ -18,7 +18,7 @@ func TestAccAWSAutoscalingSchedule_basic(t *testing.T) { start := testAccAWSAutoscalingScheduleValidStart(t) end := testAccAWSAutoscalingScheduleValidEnd(t) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingScheduleDestroy, @@ -39,7 +39,7 @@ func TestAccAWSAutoscalingSchedule_disappears(t *testing.T) { start := testAccAWSAutoscalingScheduleValidStart(t) end := testAccAWSAutoscalingScheduleValidEnd(t) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingScheduleDestroy, @@ -74,7 +74,7 @@ func TestAccAWSAutoscalingSchedule_recurrence(t *testing.T) { var schedule autoscaling.ScheduledUpdateGroupAction rName := fmt.Sprintf("tf-test-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingScheduleDestroy, @@ -97,7 +97,7 @@ func TestAccAWSAutoscalingSchedule_zeroValues(t *testing.T) { start := 
testAccAWSAutoscalingScheduleValidStart(t) end := testAccAWSAutoscalingScheduleValidEnd(t) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingScheduleDestroy, @@ -119,7 +119,7 @@ func TestAccAWSAutoscalingSchedule_negativeOne(t *testing.T) { start := testAccAWSAutoscalingScheduleValidStart(t) end := testAccAWSAutoscalingScheduleValidEnd(t) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAutoscalingScheduleDestroy, diff --git a/aws/resource_aws_batch_compute_environment.go b/aws/resource_aws_batch_compute_environment.go index 06955e0a6c1..3d2e7c719c0 100644 --- a/aws/resource_aws_batch_compute_environment.go +++ b/aws/resource_aws_batch_compute_environment.go @@ -228,7 +228,7 @@ func resourceAwsBatchComputeEnvironmentCreate(d *schema.ResourceData, meta inter stateConf := &resource.StateChangeConf{ Pending: []string{batch.CEStatusCreating}, Target: []string{batch.CEStatusValid}, - Refresh: resourceAwsBatchComputeEnvironmentStatusRefreshFunc(d, meta), + Refresh: resourceAwsBatchComputeEnvironmentStatusRefreshFunc(computeEnvironmentName, conn), Timeout: d.Timeout(schema.TimeoutCreate), MinTimeout: 5 * time.Second, } @@ -266,8 +266,10 @@ func resourceAwsBatchComputeEnvironmentRead(d *schema.ResourceData, meta interfa d.Set("state", computeEnvironment.State) d.Set("type", computeEnvironment.Type) - if *(computeEnvironment.Type) == "MANAGED" { - d.Set("compute_resources", flattenComputeResources(computeEnvironment.ComputeResources)) + if aws.StringValue(computeEnvironment.Type) == batch.CETypeManaged { + if err := d.Set("compute_resources", flattenBatchComputeResources(computeEnvironment.ComputeResources)); err != nil { + return fmt.Errorf("error setting compute_resources: %s", err) + } } d.Set("arn", computeEnvironment.ComputeEnvironmentArn) @@ -279,23 +281,23 @@ func resourceAwsBatchComputeEnvironmentRead(d *schema.ResourceData, meta interfa return nil } -func flattenComputeResources(computeResource *batch.ComputeResource) []map[string]interface{} { +func flattenBatchComputeResources(computeResource *batch.ComputeResource) []map[string]interface{} { result := make([]map[string]interface{}, 0) m := make(map[string]interface{}) - m["bid_percentage"] = computeResource.BidPercentage - m["desired_vcpus"] = computeResource.DesiredvCpus - m["ec2_key_pair"] = computeResource.Ec2KeyPair - m["image_id"] = computeResource.ImageId - m["instance_role"] = computeResource.InstanceRole + m["bid_percentage"] = int(aws.Int64Value(computeResource.BidPercentage)) + m["desired_vcpus"] = int(aws.Int64Value(computeResource.DesiredvCpus)) + m["ec2_key_pair"] = aws.StringValue(computeResource.Ec2KeyPair) + m["image_id"] = aws.StringValue(computeResource.ImageId) + m["instance_role"] = aws.StringValue(computeResource.InstanceRole) m["instance_type"] = schema.NewSet(schema.HashString, flattenStringList(computeResource.InstanceTypes)) - m["max_vcpus"] = computeResource.MaxvCpus - m["min_vcpus"] = computeResource.MinvCpus + m["max_vcpus"] = int(aws.Int64Value(computeResource.MaxvCpus)) + m["min_vcpus"] = int(aws.Int64Value(computeResource.MinvCpus)) m["security_group_ids"] = schema.NewSet(schema.HashString, flattenStringList(computeResource.SecurityGroupIds)) - m["spot_iam_fleet_role"] = computeResource.SpotIamFleetRole + m["spot_iam_fleet_role"] = 
aws.StringValue(computeResource.SpotIamFleetRole) m["subnets"] = schema.NewSet(schema.HashString, flattenStringList(computeResource.Subnets)) m["tags"] = tagsToMapGeneric(computeResource.Tags) - m["type"] = computeResource.Type + m["type"] = aws.StringValue(computeResource.Type) result = append(result, m) return result @@ -303,52 +305,20 @@ func flattenComputeResources(computeResource *batch.ComputeResource) []map[strin func resourceAwsBatchComputeEnvironmentDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).batchconn - computeEnvironmentName := d.Get("compute_environment_name").(string) - updateInput := &batch.UpdateComputeEnvironmentInput{ - ComputeEnvironment: aws.String(computeEnvironmentName), - State: aws.String(batch.CEStateDisabled), - } - - log.Printf("[DEBUG] Delete compute environment %s.\n", updateInput) - - if _, err := conn.UpdateComputeEnvironment(updateInput); err != nil { - return err - } - - stateConf := &resource.StateChangeConf{ - Pending: []string{batch.CEStatusUpdating}, - Target: []string{batch.CEStatusValid}, - Refresh: resourceAwsBatchComputeEnvironmentStatusRefreshFunc(d, meta), - Timeout: d.Timeout(schema.TimeoutDelete), - MinTimeout: 5 * time.Second, - } - if _, err := stateConf.WaitForState(); err != nil { - return err - } - - input := &batch.DeleteComputeEnvironmentInput{ - ComputeEnvironment: aws.String(computeEnvironmentName), - } - - if _, err := conn.DeleteComputeEnvironment(input); err != nil { - return err + log.Printf("[DEBUG] Disabling Batch Compute Environment: %s", computeEnvironmentName) + err := disableBatchComputeEnvironment(computeEnvironmentName, d.Timeout(schema.TimeoutDelete), conn) + if err != nil { + return fmt.Errorf("error disabling Batch Compute Environment (%s): %s", computeEnvironmentName, err) } - stateConfForDelete := &resource.StateChangeConf{ - Pending: []string{batch.CEStatusDeleting}, - Target: []string{batch.CEStatusDeleted}, - Refresh: resourceAwsBatchComputeEnvironmentDeleteRefreshFunc(d, meta), - Timeout: d.Timeout(schema.TimeoutDelete), - MinTimeout: 5 * time.Second, - } - if _, err := stateConfForDelete.WaitForState(); err != nil { - return err + log.Printf("[DEBUG] Deleting Batch Compute Environment: %s", computeEnvironmentName) + err = deleteBatchComputeEnvironment(computeEnvironmentName, d.Timeout(schema.TimeoutDelete), conn) + if err != nil { + return fmt.Errorf("error deleting Batch Compute Environment (%s): %s", computeEnvironmentName, err) } - d.SetId("") - return nil } @@ -390,12 +360,8 @@ func resourceAwsBatchComputeEnvironmentUpdate(d *schema.ResourceData, meta inter return resourceAwsBatchComputeEnvironmentRead(d, meta) } -func resourceAwsBatchComputeEnvironmentStatusRefreshFunc(d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { +func resourceAwsBatchComputeEnvironmentStatusRefreshFunc(computeEnvironmentName string, conn *batch.Batch) resource.StateRefreshFunc { return func() (interface{}, string, error) { - conn := meta.(*AWSClient).batchconn - - computeEnvironmentName := d.Get("compute_environment_name").(string) - result, err := conn.DescribeComputeEnvironments(&batch.DescribeComputeEnvironmentsInput{ ComputeEnvironments: []*string{ aws.String(computeEnvironmentName), @@ -414,12 +380,8 @@ func resourceAwsBatchComputeEnvironmentStatusRefreshFunc(d *schema.ResourceData, } } -func resourceAwsBatchComputeEnvironmentDeleteRefreshFunc(d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { +func 
resourceAwsBatchComputeEnvironmentDeleteRefreshFunc(computeEnvironmentName string, conn *batch.Batch) resource.StateRefreshFunc { return func() (interface{}, string, error) { - conn := meta.(*AWSClient).batchconn - - computeEnvironmentName := d.Get("compute_environment_name").(string) - result, err := conn.DescribeComputeEnvironments(&batch.DescribeComputeEnvironmentsInput{ ComputeEnvironments: []*string{ aws.String(computeEnvironmentName), @@ -437,3 +399,44 @@ func resourceAwsBatchComputeEnvironmentDeleteRefreshFunc(d *schema.ResourceData, return result, *(computeEnvironment.Status), nil } } + +func deleteBatchComputeEnvironment(computeEnvironment string, timeout time.Duration, conn *batch.Batch) error { + input := &batch.DeleteComputeEnvironmentInput{ + ComputeEnvironment: aws.String(computeEnvironment), + } + + if _, err := conn.DeleteComputeEnvironment(input); err != nil { + return err + } + + stateChangeConf := &resource.StateChangeConf{ + Pending: []string{batch.CEStatusDeleting}, + Target: []string{batch.CEStatusDeleted}, + Refresh: resourceAwsBatchComputeEnvironmentDeleteRefreshFunc(computeEnvironment, conn), + Timeout: timeout, + MinTimeout: 5 * time.Second, + } + _, err := stateChangeConf.WaitForState() + return err +} + +func disableBatchComputeEnvironment(computeEnvironment string, timeout time.Duration, conn *batch.Batch) error { + input := &batch.UpdateComputeEnvironmentInput{ + ComputeEnvironment: aws.String(computeEnvironment), + State: aws.String(batch.CEStateDisabled), + } + + if _, err := conn.UpdateComputeEnvironment(input); err != nil { + return err + } + + stateChangeConf := &resource.StateChangeConf{ + Pending: []string{batch.CEStatusUpdating}, + Target: []string{batch.CEStatusValid}, + Refresh: resourceAwsBatchComputeEnvironmentStatusRefreshFunc(computeEnvironment, conn), + Timeout: timeout, + MinTimeout: 5 * time.Second, + } + _, err := stateChangeConf.WaitForState() + return err +} diff --git a/aws/resource_aws_batch_compute_environment_test.go b/aws/resource_aws_batch_compute_environment_test.go index 031131caf4e..9fa10c61cd6 100644 --- a/aws/resource_aws_batch_compute_environment_test.go +++ b/aws/resource_aws_batch_compute_environment_test.go @@ -2,8 +2,11 @@ package aws import ( "fmt" + "log" "regexp" + "strings" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/batch" @@ -12,10 +15,70 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_batch_compute_environment", &resource.Sweeper{ + Name: "aws_batch_compute_environment", + Dependencies: []string{ + "aws_batch_job_queue", + }, + F: testSweepBatchComputeEnvironments, + }) +} + +func testSweepBatchComputeEnvironments(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).batchconn + + prefixes := []string{ + "tf_acc", + } + + out, err := conn.DescribeComputeEnvironments(&batch.DescribeComputeEnvironmentsInput{}) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Batch Compute Environment sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Batch Compute Environments: %s", err) + } + for _, computeEnvironment := range out.ComputeEnvironments { + name := computeEnvironment.ComputeEnvironmentName + skip := true + for _, prefix := range prefixes { + if strings.HasPrefix(*name, prefix) { + skip = false + break + } + } + if 
skip { + log.Printf("[INFO] Skipping Batch Compute Environment: %s", *name) + continue + } + + log.Printf("[INFO] Disabling Batch Compute Environment: %s", *name) + err := disableBatchComputeEnvironment(*name, 20*time.Minute, conn) + if err != nil { + log.Printf("[ERROR] Failed to disable Batch Compute Environment %s: %s", *name, err) + continue + } + + log.Printf("[INFO] Deleting Batch Compute Environment: %s", *name) + err = deleteBatchComputeEnvironment(*name, 20*time.Minute, conn) + if err != nil { + log.Printf("[ERROR] Failed to delete Batch Compute Environment %s: %s", *name, err) + } + } + + return nil +} + func TestAccAWSBatchComputeEnvironment_createEc2(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -33,7 +96,7 @@ func TestAccAWSBatchComputeEnvironment_createEc2(t *testing.T) { func TestAccAWSBatchComputeEnvironment_createEc2WithTags(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -53,7 +116,7 @@ func TestAccAWSBatchComputeEnvironment_createEc2WithTags(t *testing.T) { func TestAccAWSBatchComputeEnvironment_createSpot(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -71,7 +134,7 @@ func TestAccAWSBatchComputeEnvironment_createSpot(t *testing.T) { func TestAccAWSBatchComputeEnvironment_createUnmanaged(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -89,7 +152,7 @@ func TestAccAWSBatchComputeEnvironment_createUnmanaged(t *testing.T) { func TestAccAWSBatchComputeEnvironment_updateMaxvCpus(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -115,7 +178,7 @@ func TestAccAWSBatchComputeEnvironment_updateMaxvCpus(t *testing.T) { func TestAccAWSBatchComputeEnvironment_updateInstanceType(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -143,7 +206,7 @@ func TestAccAWSBatchComputeEnvironment_updateComputeEnvironmentName(t *testing.T expectedName := fmt.Sprintf("tf_acc_test_%d", rInt) expectedUpdatedName := fmt.Sprintf("tf_acc_test_updated_%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -169,7 +232,7 @@ func TestAccAWSBatchComputeEnvironment_updateComputeEnvironmentName(t *testing.T func TestAccAWSBatchComputeEnvironment_createEc2WithoutComputeResources(t *testing.T) { rInt := 
acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -185,7 +248,7 @@ func TestAccAWSBatchComputeEnvironment_createEc2WithoutComputeResources(t *testi func TestAccAWSBatchComputeEnvironment_createUnmanagedWithComputeResources(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -204,7 +267,7 @@ func TestAccAWSBatchComputeEnvironment_createUnmanagedWithComputeResources(t *te func TestAccAWSBatchComputeEnvironment_createSpotWithoutBidPercentage(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -220,7 +283,7 @@ func TestAccAWSBatchComputeEnvironment_createSpotWithoutBidPercentage(t *testing func TestAccAWSBatchComputeEnvironment_updateState(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchComputeEnvironmentDestroy, @@ -258,7 +321,7 @@ func testAccCheckBatchComputeEnvironmentDestroy(s *terraform.State) error { }) if err != nil { - return fmt.Errorf("Error occured when get compute environment information.") + return fmt.Errorf("Error occurred when getting compute environment information.") } if len(result.ComputeEnvironments) == 1 { return fmt.Errorf("Compute environment still exists.") @@ -285,7 +348,7 @@ func testAccCheckAwsBatchComputeEnvironmentExists() resource.TestCheckFunc { }) if err != nil { - return fmt.Errorf("Error occured when get compute environment information.") + return fmt.Errorf("Error occurred when getting compute environment information.") } if len(result.ComputeEnvironments) == 0 { return fmt.Errorf("Compute environment doesn't exists.") diff --git a/aws/resource_aws_batch_job_definition.go b/aws/resource_aws_batch_job_definition.go index 1bef71db799..755559501dc 100644 --- a/aws/resource_aws_batch_job_definition.go +++ b/aws/resource_aws_batch_job_definition.go @@ -40,7 +40,7 @@ func resourceAwsBatchJobDefinition() *schema.Resource { Type: schema.TypeMap, Optional: true, ForceNew: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, "retry_strategy": { Type: schema.TypeList, @@ -50,8 +50,26 @@ func resourceAwsBatchJobDefinition() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "attempts": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 10), + }, + }, + }, + }, + "timeout": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "attempt_duration_seconds": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + ValidateFunc: validation.IntAtLeast(60), }, }, }, @@ -99,6 +117,10 @@ func resourceAwsBatchJobDefinitionCreate(d *schema.ResourceData, meta interface{ input.RetryStrategy = expandJobDefinitionRetryStrategy(v.([]interface{})) } + if v, ok := d.GetOk("timeout"); ok { + 
input.Timeout = expandJobDefinitionTimeout(v.([]interface{})) + } + out, err := conn.RegisterJobDefinition(input) if err != nil { return fmt.Errorf("%s %q", err, name) @@ -122,7 +144,15 @@ func resourceAwsBatchJobDefinitionRead(d *schema.ResourceData, meta interface{}) d.Set("arn", job.JobDefinitionArn) d.Set("container_properties", job.ContainerProperties) d.Set("parameters", aws.StringValueMap(job.Parameters)) - d.Set("retry_strategy", flattenRetryStrategy(job.RetryStrategy)) + + if err := d.Set("retry_strategy", flattenBatchRetryStrategy(job.RetryStrategy)); err != nil { + return fmt.Errorf("error setting retry_strategy: %s", err) + } + + if err := d.Set("timeout", flattenBatchJobTimeout(job.Timeout)); err != nil { + return fmt.Errorf("error setting timeout: %s", err) + } + d.Set("revision", job.Revision) d.Set("type", job.Type) return nil @@ -137,7 +167,7 @@ func resourceAwsBatchJobDefinitionDelete(d *schema.ResourceData, meta interface{ if err != nil { return fmt.Errorf("%s %q", err, arn) } - d.SetId("") + return nil } @@ -195,17 +225,42 @@ func expandJobDefinitionParameters(params map[string]interface{}) map[string]*st } func expandJobDefinitionRetryStrategy(item []interface{}) *batch.RetryStrategy { + retryStrategy := &batch.RetryStrategy{} + data := item[0].(map[string]interface{}) + + if v, ok := data["attempts"].(int); ok && v > 0 && v <= 10 { + retryStrategy.Attempts = aws.Int64(int64(v)) + } + + return retryStrategy +} + +func flattenBatchRetryStrategy(item *batch.RetryStrategy) []map[string]interface{} { + data := []map[string]interface{}{} + if item != nil && item.Attempts != nil { + data = append(data, map[string]interface{}{ + "attempts": int(aws.Int64Value(item.Attempts)), + }) + } + return data +} + +func expandJobDefinitionTimeout(item []interface{}) *batch.JobTimeout { + timeout := &batch.JobTimeout{} data := item[0].(map[string]interface{}) - return &batch.RetryStrategy{ - Attempts: aws.Int64(int64(data["attempts"].(int))), + + if v, ok := data["attempt_duration_seconds"].(int); ok && v >= 60 { + timeout.AttemptDurationSeconds = aws.Int64(int64(v)) } + + return timeout } -func flattenRetryStrategy(item *batch.RetryStrategy) []map[string]interface{} { +func flattenBatchJobTimeout(item *batch.JobTimeout) []map[string]interface{} { data := []map[string]interface{}{} - if item != nil { + if item != nil && item.AttemptDurationSeconds != nil { data = append(data, map[string]interface{}{ - "attempts": item.Attempts, + "attempt_duration_seconds": int(aws.Int64Value(item.AttemptDurationSeconds)), }) } return data diff --git a/aws/resource_aws_batch_job_definition_test.go b/aws/resource_aws_batch_job_definition_test.go index eba4d9ca3de..6f03de72a94 100644 --- a/aws/resource_aws_batch_job_definition_test.go +++ b/aws/resource_aws_batch_job_definition_test.go @@ -24,6 +24,9 @@ func TestAccAWSBatchJobDefinition_basic(t *testing.T) { RetryStrategy: &batch.RetryStrategy{ Attempts: aws.Int64(int64(1)), }, + Timeout: &batch.JobTimeout{ + AttemptDurationSeconds: aws.Int64(int64(60)), + }, ContainerProperties: &batch.ContainerProperties{ Command: []*string{aws.String("ls"), aws.String("-la")}, Environment: []*batch.KeyValuePair{ @@ -48,7 +51,7 @@ func TestAccAWSBatchJobDefinition_basic(t *testing.T) { } ri := acctest.RandInt() config := fmt.Sprintf(testAccBatchJobDefinitionBaseConfig, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckBatchJobDefinitionDestroy, @@ -70,7 +73,7 @@ func TestAccAWSBatchJobDefinition_updateForcesNewResource(t *testing.T) { ri := acctest.RandInt() config := fmt.Sprintf(testAccBatchJobDefinitionBaseConfig, ri) updateConfig := fmt.Sprintf(testAccBatchJobDefinitionUpdateConfig, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckBatchJobDefinitionDestroy, @@ -185,6 +188,9 @@ resource "aws_batch_job_definition" "test" { retry_strategy = { attempts = 1 } + timeout = { + attempt_duration_seconds = 60 + } container_properties = < 0 { + targetState = cloudhsmv2.ClusterStateActive + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{cloudhsmv2.ClusterStateCreateInProgress, cloudhsmv2.ClusterStateInitializeInProgress}, + Target: []string{targetState}, + Refresh: resourceAwsCloudHsm2ClusterRefreshFunc(cloudhsm2, d.Id()), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 30 * time.Second, + Delay: 30 * time.Second, + } + + // Wait, catching any errors + _, errWait := stateConf.WaitForState() + if errWait != nil { + if len(backupId) == 0 { + return fmt.Errorf("[WARN] Error waiting for CloudHSMv2 Cluster state to be \"UNINITIALIZED\": %s", errWait) + } else { + return fmt.Errorf("[WARN] Error waiting for CloudHSMv2 Cluster state to be \"ACTIVE\": %s", errWait) + } + } + + if err := setTagsAwsCloudHsm2Cluster(cloudhsm2, d); err != nil { + return err + } + + return resourceAwsCloudHsm2ClusterRead(d, meta) +} + +func resourceAwsCloudHsm2ClusterRead(d *schema.ResourceData, meta interface{}) error { + + cluster, err := describeCloudHsm2Cluster(meta.(*AWSClient).cloudhsmv2conn, d.Id()) + + if cluster == nil { + log.Printf("[WARN] CloudHSMv2 Cluster (%s) not found", d.Id()) + d.SetId("") + return err + } + + log.Printf("[INFO] Reading CloudHSMv2 Cluster Information: %s", d.Id()) + + d.Set("cluster_id", cluster.ClusterId) + d.Set("cluster_state", cluster.State) + d.Set("security_group_id", cluster.SecurityGroup) + d.Set("vpc_id", cluster.VpcId) + d.Set("source_backup_identifier", cluster.SourceBackupId) + d.Set("hsm_type", cluster.HsmType) + if err := d.Set("cluster_certificates", readCloudHsm2ClusterCertificates(cluster)); err != nil { + return fmt.Errorf("error setting cluster_certificates: %s", err) + } + + var subnets []string + for _, sn := range cluster.SubnetMapping { + subnets = append(subnets, aws.StringValue(sn)) + } + if err := d.Set("subnet_ids", subnets); err != nil { + return fmt.Errorf("Error saving Subnet IDs to state for CloudHSMv2 Cluster (%s): %s", d.Id(), err) + } + + return nil +} + +func resourceAwsCloudHsm2ClusterUpdate(d *schema.ResourceData, meta interface{}) error { + cloudhsm2 := meta.(*AWSClient).cloudhsmv2conn + + if err := setTagsAwsCloudHsm2Cluster(cloudhsm2, d); err != nil { + return err + } + + return resourceAwsCloudHsm2ClusterRead(d, meta) +} + +func resourceAwsCloudHsm2ClusterDelete(d *schema.ResourceData, meta interface{}) error { + cloudhsm2 := meta.(*AWSClient).cloudhsmv2conn + + log.Printf("[DEBUG] CloudHSMv2 Delete cluster: %s", d.Id()) + err := resource.Retry(180*time.Second, func() *resource.RetryError { + var err error + _, err = cloudhsm2.DeleteCluster(&cloudhsmv2.DeleteClusterInput{ + ClusterId: aws.String(d.Id()), + }) + if err != nil { + if isAWSErr(err, cloudhsmv2.ErrCodeCloudHsmInternalFailureException, "request was rejected because of an AWS CloudHSM internal failure") { + log.Printf("[DEBUG] 
CloudHSMv2 Cluster re-try deleting %s", d.Id()) + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + + if err != nil { + return err + } + log.Println("[INFO] Waiting for CloudHSMv2 Cluster to be deleted") + + stateConf := &resource.StateChangeConf{ + Pending: []string{cloudhsmv2.ClusterStateDeleteInProgress}, + Target: []string{cloudhsmv2.ClusterStateDeleted}, + Refresh: resourceAwsCloudHsm2ClusterRefreshFunc(cloudhsm2, d.Id()), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 30 * time.Second, + Delay: 30 * time.Second, + } + + // Wait, catching any errors + _, errWait := stateConf.WaitForState() + if errWait != nil { + return fmt.Errorf("Error waiting for CloudHSMv2 Cluster state to be \"DELETED\": %s", errWait) + } + + return nil +} + +func setTagsAwsCloudHsm2Cluster(conn *cloudhsmv2.CloudHSMV2, d *schema.ResourceData) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + create, remove := diffTagsGeneric(oraw.(map[string]interface{}), nraw.(map[string]interface{})) + + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + keys := make([]*string, 0, len(remove)) + for k := range remove { + keys = append(keys, aws.String(k)) + } + + _, err := conn.UntagResource(&cloudhsmv2.UntagResourceInput{ + ResourceId: aws.String(d.Id()), + TagKeyList: keys, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + tagList := make([]*cloudhsmv2.Tag, 0, len(create)) + for k, v := range create { + tagList = append(tagList, &cloudhsmv2.Tag{ + Key: &k, + Value: v, + }) + } + _, err := conn.TagResource(&cloudhsmv2.TagResourceInput{ + ResourceId: aws.String(d.Id()), + TagList: tagList, + }) + if err != nil { + return err + } + } + } + + return nil +} + +func readCloudHsm2ClusterCertificates(cluster *cloudhsmv2.Cluster) []map[string]interface{} { + certs := map[string]interface{}{} + if cluster.Certificates != nil { + if aws.StringValue(cluster.State) == "UNINITIALIZED" { + certs["cluster_csr"] = aws.StringValue(cluster.Certificates.ClusterCsr) + certs["aws_hardware_certificate"] = aws.StringValue(cluster.Certificates.AwsHardwareCertificate) + certs["hsm_certificate"] = aws.StringValue(cluster.Certificates.HsmCertificate) + certs["manufacturer_hardware_certificate"] = aws.StringValue(cluster.Certificates.ManufacturerHardwareCertificate) + } else if aws.StringValue(cluster.State) == "ACTIVE" { + certs["cluster_certificate"] = aws.StringValue(cluster.Certificates.ClusterCertificate) + } + } + if len(certs) > 0 { + return []map[string]interface{}{certs} + } + return []map[string]interface{}{} +} diff --git a/aws/resource_aws_cloudhsm2_cluster_test.go b/aws/resource_aws_cloudhsm2_cluster_test.go new file mode 100644 index 00000000000..917310ca05a --- /dev/null +++ b/aws/resource_aws_cloudhsm2_cluster_test.go @@ -0,0 +1,113 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSCloudHsm2Cluster_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudHsm2ClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudHsm2Cluster(), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSCloudHsm2ClusterExists("aws_cloudhsm_v2_cluster.cluster"), + resource.TestCheckResourceAttrSet("aws_cloudhsm_v2_cluster.cluster", "cluster_id"), + resource.TestCheckResourceAttrSet("aws_cloudhsm_v2_cluster.cluster", "vpc_id"), + resource.TestCheckResourceAttrSet("aws_cloudhsm_v2_cluster.cluster", "security_group_id"), + resource.TestCheckResourceAttrSet("aws_cloudhsm_v2_cluster.cluster", "cluster_state"), + ), + }, + { + ResourceName: "aws_cloudhsm_v2_cluster.cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"cluster_certificates", "tags"}, + }, + }, + }) +} + +func testAccAWSCloudHsm2Cluster() string { + return fmt.Sprintf(` +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "cloudhsm2_test_vpc" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-aws_cloudhsm_v2_cluster-resource-basic" + } +} + +resource "aws_subnet" "cloudhsm2_test_subnets" { + count = 2 + vpc_id = "${aws_vpc.cloudhsm2_test_vpc.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = false + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-aws_cloudhsm_v2_cluster-resource-basic" + } +} + +resource "aws_cloudhsm_v2_cluster" "cluster" { + hsm_type = "hsm1.medium" + subnet_ids = ["${aws_subnet.cloudhsm2_test_subnets.*.id}"] + tags { + Name = "tf-acc-aws_cloudhsm_v2_cluster-resource-basic-%d" + } +} +`, acctest.RandInt()) +} + +func testAccCheckAWSCloudHsm2ClusterDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_cloudhsm_v2_cluster" { + continue + } + cluster, err := describeCloudHsm2Cluster(testAccProvider.Meta().(*AWSClient).cloudhsmv2conn, rs.Primary.ID) + + if err != nil { + return err + } + + if cluster != nil && aws.StringValue(cluster.State) != "DELETED" { + return fmt.Errorf("CloudHSM cluster still exists %s", cluster) + } + } + + return nil +} + +func testAccCheckAWSCloudHsm2ClusterExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).cloudhsmv2conn + it, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + _, err := describeCloudHsm2Cluster(conn, it.Primary.ID) + + if err != nil { + return fmt.Errorf("CloudHSM cluster not found: %s", err) + } + + return nil + } +} diff --git a/aws/resource_aws_cloudhsm2_hsm.go b/aws/resource_aws_cloudhsm2_hsm.go new file mode 100644 index 00000000000..0bb93ff97ec --- /dev/null +++ b/aws/resource_aws_cloudhsm2_hsm.go @@ -0,0 +1,261 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/hashicorp/terraform/helper/schema" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/cloudhsmv2" + "github.com/hashicorp/terraform/helper/resource" +) + +func resourceAwsCloudHsm2Hsm() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsCloudHsm2HsmCreate, + Read: resourceAwsCloudHsm2HsmRead, + Delete: resourceAwsCloudHsm2HsmDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsCloudHsm2HsmImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(120 * time.Minute), + Update: schema.DefaultTimeout(120 * time.Minute), + Delete: schema.DefaultTimeout(120 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "cluster_id": { + 
Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "subnet_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "availability_zone": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "ip_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "hsm_id": { + Type: schema.TypeString, + Computed: true, + }, + + "hsm_state": { + Type: schema.TypeString, + Computed: true, + }, + + "hsm_eni_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsCloudHsm2HsmImport( + d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + d.Set("hsm_id", d.Id()) + return []*schema.ResourceData{d}, nil +} + +func describeHsm(conn *cloudhsmv2.CloudHSMV2, hsmId string) (*cloudhsmv2.Hsm, error) { + out, err := conn.DescribeClusters(&cloudhsmv2.DescribeClustersInput{}) + if err != nil { + log.Printf("[WARN] Error describing CloudHSM v2 Clusters: %s", err) + return nil, err + } + + var hsm *cloudhsmv2.Hsm + + for _, c := range out.Clusters { + for _, h := range c.Hsms { + if aws.StringValue(h.HsmId) == hsmId { + hsm = h + break + } + } + } + + return hsm, nil +} + +func resourceAwsCloudHsm2HsmRefreshFunc( + d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + hsm, err := describeHsm(meta.(*AWSClient).cloudhsmv2conn, d.Id()) + + if hsm == nil { + return 42, "destroyed", nil + } + + if hsm.State != nil { + log.Printf("[DEBUG] CloudHSMv2 HSM status (%s): %s", d.Id(), *hsm.State) + } + + return hsm, aws.StringValue(hsm.State), err + } +} + +func resourceAwsCloudHsm2HsmCreate(d *schema.ResourceData, meta interface{}) error { + cloudhsm2 := meta.(*AWSClient).cloudhsmv2conn + + clusterId := d.Get("cluster_id").(string) + + cluster, err := describeCloudHsm2Cluster(cloudhsm2, clusterId) + + if cluster == nil { + log.Printf("[WARN] Error on retrieving CloudHSMv2 Cluster: %s %s", clusterId, err) + return err + } + + availabilityZone := d.Get("availability_zone").(string) + if len(availabilityZone) == 0 { + subnetId := d.Get("subnet_id").(string) + for az, sn := range cluster.SubnetMapping { + if aws.StringValue(sn) == subnetId { + availabilityZone = az + } + } + } + + input := &cloudhsmv2.CreateHsmInput{ + ClusterId: aws.String(clusterId), + AvailabilityZone: aws.String(availabilityZone), + } + + ipAddress := d.Get("ip_address").(string) + if len(ipAddress) != 0 { + input.IpAddress = aws.String(ipAddress) + } + + log.Printf("[DEBUG] CloudHSMv2 HSM create %s", input) + + var output *cloudhsmv2.CreateHsmOutput + + errRetry := resource.Retry(180*time.Second, func() *resource.RetryError { + var err error + output, err = cloudhsm2.CreateHsm(input) + if err != nil { + if isAWSErr(err, cloudhsmv2.ErrCodeCloudHsmInternalFailureException, "request was rejected because of an AWS CloudHSM internal failure") { + log.Printf("[DEBUG] CloudHSMv2 HSM re-try creating %s", input) + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + + if errRetry != nil { + return fmt.Errorf("error creating CloudHSM v2 HSM module: %s", errRetry) + } + + d.SetId(aws.StringValue(output.Hsm.HsmId)) + log.Printf("[INFO] CloudHSMv2 HSM Id: %s", d.Id()) + log.Println("[INFO] Waiting for CloudHSMv2 HSM to be available") + + stateConf := &resource.StateChangeConf{ + Pending: []string{cloudhsmv2.HsmStateCreateInProgress, "destroyed"}, + Target: 
[]string{cloudhsmv2.HsmStateActive}, + Refresh: resourceAwsCloudHsm2HsmRefreshFunc(d, meta), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 30 * time.Second, + Delay: 30 * time.Second, + } + + // Wait, catching any errors + _, errWait := stateConf.WaitForState() + if errWait != nil { + return fmt.Errorf("Error waiting for CloudHSMv2 HSM state to be \"ACTIVE\": %s", errWait) + } + + return resourceAwsCloudHsm2HsmRead(d, meta) +} + +func resourceAwsCloudHsm2HsmRead(d *schema.ResourceData, meta interface{}) error { + + hsm, err := describeHsm(meta.(*AWSClient).cloudhsmv2conn, d.Id()) + + if hsm == nil { + log.Printf("[WARN] CloudHSMv2 HSM (%s) not found", d.Id()) + d.SetId("") + return err + } + + log.Printf("[INFO] Reading CloudHSMv2 HSM Information: %s", d.Id()) + + d.Set("cluster_id", hsm.ClusterId) + d.Set("subnet_id", hsm.SubnetId) + d.Set("availability_zone", hsm.AvailabilityZone) + d.Set("ip_address", hsm.EniIp) + d.Set("hsm_id", hsm.HsmId) + d.Set("hsm_state", hsm.State) + d.Set("hsm_eni_id", hsm.EniId) + + return nil +} + +func resourceAwsCloudHsm2HsmDelete(d *schema.ResourceData, meta interface{}) error { + cloudhsm2 := meta.(*AWSClient).cloudhsmv2conn + clusterId := d.Get("cluster_id").(string) + + log.Printf("[DEBUG] CloudHSMv2 HSM delete %s %s", clusterId, d.Id()) + + errRetry := resource.Retry(180*time.Second, func() *resource.RetryError { + var err error + _, err = cloudhsm2.DeleteHsm(&cloudhsmv2.DeleteHsmInput{ + ClusterId: aws.String(clusterId), + HsmId: aws.String(d.Id()), + }) + if err != nil { + if isAWSErr(err, cloudhsmv2.ErrCodeCloudHsmInternalFailureException, "request was rejected because of an AWS CloudHSM internal failure") { + log.Printf("[DEBUG] CloudHSMv2 HSM re-try deleting %s", d.Id()) + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + + if errRetry != nil { + return fmt.Errorf("error deleting CloudHSM v2 HSM module (%s): %s", d.Id(), errRetry) + } + log.Println("[INFO] Waiting for CloudHSMv2 HSM to be deleted") + + stateConf := &resource.StateChangeConf{ + Pending: []string{cloudhsmv2.HsmStateDeleteInProgress}, + Target: []string{"destroyed"}, + Refresh: resourceAwsCloudHsm2HsmRefreshFunc(d, meta), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 30 * time.Second, + Delay: 30 * time.Second, + } + + // Wait, catching any errors + _, errWait := stateConf.WaitForState() + if errWait != nil { + return fmt.Errorf("Error waiting for CloudHSMv2 HSM state to be \"DELETED\": %s", errWait) + } + + return nil +} diff --git a/aws/resource_aws_cloudhsm2_hsm_test.go b/aws/resource_aws_cloudhsm2_hsm_test.go new file mode 100644 index 00000000000..310d8169f59 --- /dev/null +++ b/aws/resource_aws_cloudhsm2_hsm_test.go @@ -0,0 +1,120 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSCloudHsm2Hsm_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudHsm2HsmDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudHsm2Hsm(), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCloudHsm2HsmExists("aws_cloudhsm_v2_hsm.hsm"), + resource.TestCheckResourceAttrSet("aws_cloudhsm_v2_hsm.hsm", "hsm_id"), + 
resource.TestCheckResourceAttrSet("aws_cloudhsm_v2_hsm.hsm", "hsm_state"), + resource.TestCheckResourceAttrSet("aws_cloudhsm_v2_hsm.hsm", "hsm_eni_id"), + resource.TestCheckResourceAttrSet("aws_cloudhsm_v2_hsm.hsm", "ip_address"), + ), + }, + { + ResourceName: "aws_cloudhsm_v2_hsm.hsm", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccAWSCloudHsm2Hsm() string { + return fmt.Sprintf(` +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "cloudhsm2_test_vpc" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-aws_cloudhsm_v2_hsm-resource-basic" + } +} + +resource "aws_subnet" "cloudhsm2_test_subnets" { + count = 2 + vpc_id = "${aws_vpc.cloudhsm2_test_vpc.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = false + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-aws_cloudhsm_v2_hsm-resource-basic" + } +} + +resource "aws_cloudhsm_v2_cluster" "cloudhsm_v2_cluster" { + hsm_type = "hsm1.medium" + subnet_ids = ["${aws_subnet.cloudhsm2_test_subnets.*.id}"] + tags { + Name = "tf-acc-aws_cloudhsm_v2_hsm-resource-basic-%d" + } +} + +resource "aws_cloudhsm_v2_hsm" "hsm" { + subnet_id = "${aws_subnet.cloudhsm2_test_subnets.0.id}" + cluster_id = "${aws_cloudhsm_v2_cluster.cloudhsm_v2_cluster.cluster_id}" +} +`, acctest.RandInt()) +} + +func testAccCheckAWSCloudHsm2HsmDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).cloudhsmv2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_cloudhsm_v2_hsm" { + continue + } + + hsm, err := describeHsm(conn, rs.Primary.ID) + + if err != nil { + return err + } + + if hsm != nil && aws.StringValue(hsm.State) != "DELETED" { + return fmt.Errorf("HSM still exists:\n%s", hsm) + } + } + + return nil +} + +func testAccCheckAWSCloudHsm2HsmExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).cloudhsmv2conn + + it, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + _, err := describeHsm(conn, it.Primary.ID) + if err != nil { + return fmt.Errorf("CloudHSM cluster not found: %s", err) + } + + return nil + } +} diff --git a/aws/resource_aws_cloudtrail.go b/aws/resource_aws_cloudtrail.go index ded9e0be917..8c6a1ee4260 100644 --- a/aws/resource_aws_cloudtrail.go +++ b/aws/resource_aws_cloudtrail.go @@ -165,7 +165,7 @@ func resourceAwsCloudTrailCreate(d *schema.ResourceData, meta interface{}) error } var t *cloudtrail.CreateTrailOutput - err := resource.Retry(15*time.Second, func() *resource.RetryError { + err := resource.Retry(1*time.Minute, func() *resource.RetryError { var err error t, err = conn.CreateTrail(&input) if err != nil { @@ -332,7 +332,7 @@ func resourceAwsCloudTrailUpdate(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] Updating CloudTrail: %s", input) var t *cloudtrail.UpdateTrailOutput - err := resource.Retry(30*time.Second, func() *resource.RetryError { + err := resource.Retry(1*time.Minute, func() *resource.RetryError { var err error t, err = conn.UpdateTrail(&input) if err != nil { diff --git a/aws/resource_aws_cloudtrail_test.go b/aws/resource_aws_cloudtrail_test.go index 292db065a2e..4b0cccb166d 100644 --- a/aws/resource_aws_cloudtrail_test.go +++ b/aws/resource_aws_cloudtrail_test.go @@ -41,6 +41,29 @@ func 
TestAccAWSCloudTrail(t *testing.T) { } } +func TestAccAWSCloudTrail_importBasic(t *testing.T) { + resourceName := "aws_cloudtrail.foobar" + cloudTrailRandInt := acctest.RandInt() + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudTrailDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudTrailConfig(cloudTrailRandInt), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"enable_log_file_validation", "is_multi_region_trail", "include_global_service_events", "enable_logging"}, + }, + }, + }) +} + func testAccAWSCloudTrail_basic(t *testing.T) { var trail cloudtrail.Trail cloudTrailRandInt := acctest.RandInt() diff --git a/aws/resource_aws_cloudwatch_dashboard.go b/aws/resource_aws_cloudwatch_dashboard.go index dea7c2fa1fa..2a60e595fd1 100644 --- a/aws/resource_aws_cloudwatch_dashboard.go +++ b/aws/resource_aws_cloudwatch_dashboard.go @@ -8,6 +8,7 @@ import ( "github.com/aws/aws-sdk-go/service/cloudwatch" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/structure" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsCloudWatchDashboard() *schema.Resource { @@ -33,7 +34,7 @@ func resourceAwsCloudWatchDashboard() *schema.Resource { "dashboard_body": { Type: schema.TypeString, Required: true, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, StateFunc: func(v interface{}) string { json, _ := structure.NormalizeJsonString(v) return json @@ -102,15 +103,12 @@ func resourceAwsCloudWatchDashboardDelete(d *schema.ResourceData, meta interface if _, err := conn.DeleteDashboards(¶ms); err != nil { if isCloudWatchDashboardNotFoundErr(err) { - log.Printf("[WARN] CloudWatch Dashboard %s is already gone", d.Id()) - d.SetId("") return nil } return fmt.Errorf("Error deleting CloudWatch Dashboard: %s", err) } log.Printf("[INFO] CloudWatch Dashboard %s deleted", d.Id()) - d.SetId("") return nil } diff --git a/aws/resource_aws_cloudwatch_dashboard_test.go b/aws/resource_aws_cloudwatch_dashboard_test.go index 70aa34b5d42..749d1594eca 100644 --- a/aws/resource_aws_cloudwatch_dashboard_test.go +++ b/aws/resource_aws_cloudwatch_dashboard_test.go @@ -14,10 +14,31 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSCloudWatchDashboard_importBasic(t *testing.T) { + resourceName := "aws_cloudwatch_dashboard.foobar" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudWatchDashboardDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudWatchDashboardConfig(rInt), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSCloudWatchDashboard_basic(t *testing.T) { var dashboard cloudwatch.GetDashboardOutput rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchDashboardDestroy, @@ -36,7 +57,7 @@ func TestAccAWSCloudWatchDashboard_basic(t *testing.T) { func TestAccAWSCloudWatchDashboard_update(t *testing.T) { var dashboard cloudwatch.GetDashboardOutput rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchDashboardDestroy, diff --git a/aws/resource_aws_cloudwatch_event_permission.go b/aws/resource_aws_cloudwatch_event_permission.go index 7383c647524..8457bdf0e98 100644 --- a/aws/resource_aws_cloudwatch_event_permission.go +++ b/aws/resource_aws_cloudwatch_event_permission.go @@ -12,6 +12,7 @@ import ( events "github.com/aws/aws-sdk-go/service/cloudwatchevents" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsCloudWatchEventPermission() *schema.Resource { @@ -31,6 +32,30 @@ func resourceAwsCloudWatchEventPermission() *schema.Resource { Default: "events:PutEvents", ValidateFunc: validateCloudWatchEventPermissionAction, }, + "condition": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"aws:PrincipalOrgID"}, false), + }, + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"StringEquals"}, false), + }, + "value": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + }, + }, + }, "principal": { Type: schema.TypeString, Required: true, @@ -53,6 +78,7 @@ func resourceAwsCloudWatchEventPermissionCreate(d *schema.ResourceData, meta int input := events.PutPermissionInput{ Action: aws.String(d.Get("action").(string)), + Condition: expandCloudWatchEventsCondition(d.Get("condition").([]interface{})), Principal: aws.String(d.Get("principal").(string)), StatementId: aws.String(statementID), } @@ -108,6 +134,10 @@ func resourceAwsCloudWatchEventPermissionRead(d *schema.ResourceData, meta inter d.Set("action", policyStatement.Action) + if err := d.Set("condition", flattenCloudWatchEventPermissionPolicyStatementCondition(policyStatement.Condition)); err != nil { + return fmt.Errorf("error setting condition: %s", err) + } + principalString, ok := policyStatement.Principal.(string) if ok && (principalString == "*") { d.Set("principal", "*") @@ -129,6 +159,7 @@ func resourceAwsCloudWatchEventPermissionUpdate(d *schema.ResourceData, meta int input := events.PutPermissionInput{ Action: aws.String(d.Get("action").(string)), + Condition: expandCloudWatchEventsCondition(d.Get("condition").([]interface{})), Principal: aws.String(d.Get("principal").(string)), StatementId: aws.String(d.Get("statement_id").(string)), } @@ -205,10 +236,42 @@ type CloudWatchEventPermissionPolicyStatement struct { Sid string Effect string Action string - Principal interface{} // "*" or {"AWS": "arn:aws:iam::111111111111:root"} + Condition *CloudWatchEventPermissionPolicyStatementCondition `json:"Condition,omitempty"` + Principal interface{} // "*" or {"AWS": "arn:aws:iam::111111111111:root"} Resource string } +// CloudWatchEventPermissionPolicyStatementCondition represents the Condition attribute of CloudWatchEventPermissionPolicyStatement +// See also: https://docs.aws.amazon.com/AmazonCloudWatchEvents/latest/APIReference/API_DescribeEventBus.html +type CloudWatchEventPermissionPolicyStatementCondition struct { + Key string + Type string + Value string +} + +func (condition *CloudWatchEventPermissionPolicyStatementCondition) UnmarshalJSON(b []byte) error { + var out 
CloudWatchEventPermissionPolicyStatementCondition + + // JSON representation: \"Condition\":{\"StringEquals\":{\"aws:PrincipalOrgID\":\"o-0123456789\"}} + var data map[string]map[string]string + if err := json.Unmarshal(b, &data); err != nil { + return err + } + + for typeKey, typeValue := range data { + for conditionKey, conditionValue := range typeValue { + out = CloudWatchEventPermissionPolicyStatementCondition{ + Key: conditionKey, + Type: typeKey, + Value: conditionValue, + } + } + } + + *condition = out + return nil +} + func findCloudWatchEventPermissionPolicyStatementByID(policy *CloudWatchEventPermissionPolicyDoc, id string) ( *CloudWatchEventPermissionPolicyStatement, error) { @@ -225,3 +288,33 @@ func findCloudWatchEventPermissionPolicyStatementByID(policy *CloudWatchEventPer Message: fmt.Sprintf("Failed to find statement %q in CloudWatch Events permission policy:\n%s", id, policy.Statements), } } + +func expandCloudWatchEventsCondition(l []interface{}) *events.Condition { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + condition := &events.Condition{ + Key: aws.String(m["key"].(string)), + Type: aws.String(m["type"].(string)), + Value: aws.String(m["value"].(string)), + } + + return condition +} + +func flattenCloudWatchEventPermissionPolicyStatementCondition(condition *CloudWatchEventPermissionPolicyStatementCondition) []interface{} { + if condition == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "key": condition.Key, + "type": condition.Type, + "value": condition.Value, + } + + return []interface{}{m} +} diff --git a/aws/resource_aws_cloudwatch_event_permission_test.go b/aws/resource_aws_cloudwatch_event_permission_test.go index c00d3337057..f5e03861663 100644 --- a/aws/resource_aws_cloudwatch_event_permission_test.go +++ b/aws/resource_aws_cloudwatch_event_permission_test.go @@ -3,10 +3,13 @@ package aws import ( "encoding/json" "fmt" + "log" "regexp" + "strings" "testing" "time" + "github.com/aws/aws-sdk-go/aws" events "github.com/aws/aws-sdk-go/service/cloudwatchevents" "github.com/hashicorp/terraform/helper/acctest" @@ -14,16 +17,71 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_cloudwatch_event_permission", &resource.Sweeper{ + Name: "aws_cloudwatch_event_permission", + F: testSweepCloudWatchEventPermissions, + }) +} + +func testSweepCloudWatchEventPermissions(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("Error getting client: %s", err) + } + conn := client.(*AWSClient).cloudwatcheventsconn + + output, err := conn.DescribeEventBus(&events.DescribeEventBusInput{}) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping CloudWatch Event Permission sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving CloudWatch Event Permissions: %s", err) + } + + policy := aws.StringValue(output.Policy) + + if policy == "" { + log.Print("[DEBUG] No CloudWatch Event Permissions to sweep") + return nil + } + + var policyDoc CloudWatchEventPermissionPolicyDoc + err = json.Unmarshal([]byte(policy), &policyDoc) + if err != nil { + return fmt.Errorf("Parsing CloudWatch Event Permissions policy %q failed: %s", policy, err) + } + + for _, statement := range policyDoc.Statements { + sid := statement.Sid + + if !strings.HasPrefix(sid, "TestAcc") { + continue + } + + log.Printf("[INFO] Deleting CloudWatch Event 
Permission %s", sid) + _, err := conn.RemovePermission(&events.RemovePermissionInput{ + StatementId: aws.String(sid), + }) + if err != nil { + return fmt.Errorf("Error deleting CloudWatch Event Permission %s: %s", sid, err) + } + } + + return nil +} + func TestAccAWSCloudWatchEventPermission_Basic(t *testing.T) { principal1 := "111111111111" principal2 := "*" statementID := acctest.RandomWithPrefix(t.Name()) resourceName := "aws_cloudwatch_event_permission.test1" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAWSEcsServiceDestroy, + CheckDestroy: testAccCheckCloudWatchEventPermissionDestroy, Steps: []resource.TestStep{ { Config: testAccCheckAwsCloudWatchEventPermissionResourceConfigBasic("", statementID), @@ -58,6 +116,7 @@ func TestAccAWSCloudWatchEventPermission_Basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckCloudWatchEventPermissionExists(resourceName), resource.TestCheckResourceAttr(resourceName, "action", "events:PutEvents"), + resource.TestCheckResourceAttr(resourceName, "condition.#", "0"), resource.TestCheckResourceAttr(resourceName, "principal", principal1), resource.TestCheckResourceAttr(resourceName, "statement_id", statementID), ), @@ -69,6 +128,11 @@ func TestAccAWSCloudWatchEventPermission_Basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "principal", principal2), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -78,10 +142,10 @@ func TestAccAWSCloudWatchEventPermission_Action(t *testing.T) { statementID := acctest.RandomWithPrefix(t.Name()) resourceName := "aws_cloudwatch_event_permission.test1" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAWSEcsServiceDestroy, + CheckDestroy: testAccCheckCloudWatchEventPermissionDestroy, Steps: []resource.TestStep{ { Config: testAccCheckAwsCloudWatchEventPermissionResourceConfigAction("", principal, statementID), @@ -106,28 +170,45 @@ func TestAccAWSCloudWatchEventPermission_Action(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "action", "events:PutEvents"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func TestAccAWSCloudWatchEventPermission_Import(t *testing.T) { - principal := "123456789012" - statementID := acctest.RandomWithPrefix(t.Name()) - resourceName := "aws_cloudwatch_event_permission.test1" +func TestAccAWSCloudWatchEventPermission_Condition(t *testing.T) { + statementID := acctest.RandomWithPrefix("TestAcc") + resourceName := "aws_cloudwatch_event_permission.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckCloudWatchEventPermissionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccCheckAwsCloudWatchEventPermissionResourceConfigBasic(principal, statementID), + { + Config: testAccCheckAwsCloudWatchEventPermissionResourceConfigConditionOrganization(statementID, "o-1234567890"), Check: resource.ComposeTestCheckFunc( testAccCheckCloudWatchEventPermissionExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "condition.#", "1"), + resource.TestCheckResourceAttr(resourceName, "condition.0.key", "aws:PrincipalOrgID"), + 
resource.TestCheckResourceAttr(resourceName, "condition.0.type", "StringEquals"), + resource.TestCheckResourceAttr(resourceName, "condition.0.value", "o-1234567890"), ), }, - - resource.TestStep{ + { + Config: testAccCheckAwsCloudWatchEventPermissionResourceConfigConditionOrganization(statementID, "o-0123456789"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudWatchEventPermissionExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "condition.#", "1"), + resource.TestCheckResourceAttr(resourceName, "condition.0.key", "aws:PrincipalOrgID"), + resource.TestCheckResourceAttr(resourceName, "condition.0.type", "StringEquals"), + resource.TestCheckResourceAttr(resourceName, "condition.0.value", "o-0123456789"), + ), + }, + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -144,10 +225,10 @@ func TestAccAWSCloudWatchEventPermission_Multiple(t *testing.T) { resourceName1 := "aws_cloudwatch_event_permission.test1" resourceName2 := "aws_cloudwatch_event_permission.test2" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAWSEcsServiceDestroy, + CheckDestroy: testAccCheckCloudWatchEventPermissionDestroy, Steps: []resource.TestStep{ { Config: testAccCheckAwsCloudWatchEventPermissionResourceConfigBasic(principal1, statementID1), @@ -268,6 +349,21 @@ resource "aws_cloudwatch_event_permission" "test1" { `, action, principal, statementID) } +func testAccCheckAwsCloudWatchEventPermissionResourceConfigConditionOrganization(statementID, value string) string { + return fmt.Sprintf(` +resource "aws_cloudwatch_event_permission" "test" { + principal = "*" + statement_id = %q + + condition { + key = "aws:PrincipalOrgID" + type = "StringEquals" + value = %q + } +} +`, statementID, value) +} + func testAccCheckAwsCloudWatchEventPermissionResourceConfigMultiple(principal1, statementID1, principal2, statementID2 string) string { return fmt.Sprintf(` resource "aws_cloudwatch_event_permission" "test1" { diff --git a/aws/resource_aws_cloudwatch_event_rule.go b/aws/resource_aws_cloudwatch_event_rule.go index e0c087097a1..16977cb1f87 100644 --- a/aws/resource_aws_cloudwatch_event_rule.go +++ b/aws/resource_aws_cloudwatch_event_rule.go @@ -9,10 +9,10 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" events "github.com/aws/aws-sdk-go/service/cloudwatchevents" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/structure" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsCloudWatchEventRule() *schema.Resource { @@ -27,20 +27,28 @@ func resourceAwsCloudWatchEventRule() *schema.Resource { Schema: map[string]*schema.Schema{ "name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, + ValidateFunc: validateCloudWatchEventRuleName, + }, + "name_prefix": { Type: schema.TypeString, - Required: true, + Optional: true, ForceNew: true, ValidateFunc: validateCloudWatchEventRuleName, }, "schedule_expression": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(256), + ValidateFunc: validation.StringLenBetween(0, 256), }, "event_pattern": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateEventPatternValue(2048), 
+ ValidateFunc: validateEventPatternValue(), StateFunc: func(v interface{}) string { json, _ := structure.NormalizeJsonString(v.(string)) return json @@ -49,12 +57,12 @@ func resourceAwsCloudWatchEventRule() *schema.Resource { "description": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(512), + ValidateFunc: validation.StringLenBetween(0, 512), }, "role_arn": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(1600), + ValidateFunc: validation.StringLenBetween(0, 1600), }, "is_enabled": { Type: schema.TypeBool, @@ -72,9 +80,18 @@ func resourceAwsCloudWatchEventRule() *schema.Resource { func resourceAwsCloudWatchEventRuleCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).cloudwatcheventsconn - input, err := buildPutRuleInputStruct(d) + var name string + if v, ok := d.GetOk("name"); ok { + name = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + name = resource.PrefixedUniqueId(v.(string)) + } else { + name = resource.UniqueId() + } + + input, err := buildPutRuleInputStruct(d, name) if err != nil { - return errwrap.Wrapf("Creating CloudWatch Event Rule failed: {{err}}", err) + return fmt.Errorf("Creating CloudWatch Event Rule failed: %s", err) } log.Printf("[DEBUG] Creating CloudWatch Event Rule: %s", input) @@ -96,11 +113,11 @@ func resourceAwsCloudWatchEventRuleCreate(d *schema.ResourceData, meta interface return nil }) if err != nil { - return errwrap.Wrapf("Creating CloudWatch Event Rule failed: {{err}}", err) + return fmt.Errorf("Creating CloudWatch Event Rule failed: %s", err) } d.Set("arn", out.RuleArn) - d.SetId(d.Get("name").(string)) + d.SetId(*input.Name) log.Printf("[INFO] CloudWatch Event Rule %q created", *out.RuleArn) @@ -132,7 +149,7 @@ func resourceAwsCloudWatchEventRuleRead(d *schema.ResourceData, meta interface{} if out.EventPattern != nil { pattern, err := structure.NormalizeJsonString(*out.EventPattern) if err != nil { - return errwrap.Wrapf("event pattern contains an invalid JSON: {{err}}", err) + return fmt.Errorf("event pattern contains an invalid JSON: %s", err) } d.Set("event_pattern", pattern) } @@ -164,9 +181,9 @@ func resourceAwsCloudWatchEventRuleUpdate(d *schema.ResourceData, meta interface log.Printf("[DEBUG] CloudWatch Event Rule (%q) enabled", d.Id()) } - input, err := buildPutRuleInputStruct(d) + input, err := buildPutRuleInputStruct(d, d.Id()) if err != nil { - return errwrap.Wrapf("Updating CloudWatch Event Rule failed: {{err}}", err) + return fmt.Errorf("Updating CloudWatch Event Rule failed: %s", err) } log.Printf("[DEBUG] Updating CloudWatch Event Rule: %s", input) @@ -186,7 +203,7 @@ func resourceAwsCloudWatchEventRuleUpdate(d *schema.ResourceData, meta interface return nil }) if err != nil { - return errwrap.Wrapf("Updating CloudWatch Event Rule failed: {{err}}", err) + return fmt.Errorf("Updating CloudWatch Event Rule failed: %s", err) } if d.HasChange("is_enabled") && !d.Get("is_enabled").(bool) { @@ -215,14 +232,12 @@ func resourceAwsCloudWatchEventRuleDelete(d *schema.ResourceData, meta interface } log.Println("[INFO] CloudWatch Event Rule deleted") - d.SetId("") - return nil } -func buildPutRuleInputStruct(d *schema.ResourceData) (*events.PutRuleInput, error) { +func buildPutRuleInputStruct(d *schema.ResourceData, name string) (*events.PutRuleInput, error) { input := events.PutRuleInput{ - Name: aws.String(d.Get("name").(string)), + Name: aws.String(name), } if v, ok := d.GetOk("description"); ok { input.Description = aws.String(v.(string)) @@ 
-230,7 +245,7 @@ func buildPutRuleInputStruct(d *schema.ResourceData) (*events.PutRuleInput, erro if v, ok := d.GetOk("event_pattern"); ok { pattern, err := structure.NormalizeJsonString(v) if err != nil { - return nil, errwrap.Wrapf("event pattern contains an invalid JSON: {{err}}", err) + return nil, fmt.Errorf("event pattern contains an invalid JSON: %s", err) } input.EventPattern = aws.String(pattern) } @@ -266,7 +281,7 @@ func getStringStateFromBoolean(isEnabled bool) string { return "DISABLED" } -func validateEventPatternValue(length int) schema.SchemaValidateFunc { +func validateEventPatternValue() schema.SchemaValidateFunc { return func(v interface{}, k string) (ws []string, errors []error) { json, err := structure.NormalizeJsonString(v) if err != nil { @@ -279,9 +294,9 @@ func validateEventPatternValue(length int) schema.SchemaValidateFunc { } // Check whether the normalized JSON is within the given length. - if len(json) > length { + if len(json) > 2048 { errors = append(errors, fmt.Errorf( - "%q cannot be longer than %d characters: %q", k, length, json)) + "%q cannot be longer than %d characters: %q", k, 2048, json)) } return } diff --git a/aws/resource_aws_cloudwatch_event_rule_test.go b/aws/resource_aws_cloudwatch_event_rule_test.go index e69489777f8..e2cc318385c 100644 --- a/aws/resource_aws_cloudwatch_event_rule_test.go +++ b/aws/resource_aws_cloudwatch_event_rule_test.go @@ -2,6 +2,9 @@ package aws import ( "fmt" + "log" + "regexp" + "strings" "testing" "github.com/aws/aws-sdk-go/aws" @@ -11,22 +14,100 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_cloudwatch_event_rule", &resource.Sweeper{ + Name: "aws_cloudwatch_event_rule", + F: testSweepCloudWatchEventRules, + }) +} + +func testSweepCloudWatchEventRules(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("Error getting client: %s", err) + } + conn := client.(*AWSClient).cloudwatcheventsconn + + input := &events.ListRulesInput{} + + for { + output, err := conn.ListRules(input) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping CloudWatch Event Rule sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving CloudWatch Event Rules: %s", err) + } + + if len(output.Rules) == 0 { + log.Print("[DEBUG] No CloudWatch Event Rules to sweep") + return nil + } + + for _, rule := range output.Rules { + name := aws.StringValue(rule.Name) + + if !strings.HasPrefix(name, "tf") { + continue + } + + log.Printf("[INFO] Deleting CloudWatch Event Rule %s", name) + _, err := conn.DeleteRule(&events.DeleteRuleInput{ + Name: aws.String(name), + }) + if err != nil { + return fmt.Errorf("Error deleting CloudWatch Event Rule %s: %s", name, err) + } + } + + if output.NextToken == nil { + break + } + input.NextToken = output.NextToken + } + + return nil +} + +func TestAccAWSCloudWatchEventRule_importBasic(t *testing.T) { + resourceName := "aws_cloudwatch_event_rule.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudWatchEventRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudWatchEventRuleConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"is_enabled"}, //this has a default value + }, + }, + }) +} + func TestAccAWSCloudWatchEventRule_basic(t 
*testing.T) { var rule events.DescribeRuleOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSCloudWatchEventRuleConfig, Check: resource.ComposeTestCheckFunc( testAccCheckCloudWatchEventRuleExists("aws_cloudwatch_event_rule.foo", &rule), resource.TestCheckResourceAttr("aws_cloudwatch_event_rule.foo", "name", "tf-acc-cw-event-rule"), ), }, - resource.TestStep{ + { Config: testAccAWSCloudWatchEventRuleConfigModified, Check: resource.ComposeTestCheckFunc( testAccCheckCloudWatchEventRuleExists("aws_cloudwatch_event_rule.foo", &rule), @@ -37,15 +118,35 @@ func TestAccAWSCloudWatchEventRule_basic(t *testing.T) { }) } +func TestAccAWSCloudWatchEventRule_prefix(t *testing.T) { + var rule events.DescribeRuleOutput + startsWithPrefix := regexp.MustCompile("^tf-acc-cw-event-rule-prefix-") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudWatchEventRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudWatchEventRuleConfig_prefix, + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudWatchEventRuleExists("aws_cloudwatch_event_rule.moobar", &rule), + resource.TestMatchResourceAttr("aws_cloudwatch_event_rule.moobar", "name", startsWithPrefix), + ), + }, + }, + }) +} + func TestAccAWSCloudWatchEventRule_full(t *testing.T) { var rule events.DescribeRuleOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSCloudWatchEventRuleConfig_full, Check: resource.ComposeTestCheckFunc( testAccCheckCloudWatchEventRuleExists("aws_cloudwatch_event_rule.moobar", &rule), @@ -64,26 +165,26 @@ func TestAccAWSCloudWatchEventRule_full(t *testing.T) { func TestAccAWSCloudWatchEventRule_enable(t *testing.T) { var rule events.DescribeRuleOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSCloudWatchEventRuleConfigEnabled, Check: resource.ComposeTestCheckFunc( testAccCheckCloudWatchEventRuleExists("aws_cloudwatch_event_rule.moo", &rule), testAccCheckCloudWatchEventRuleEnabled("aws_cloudwatch_event_rule.moo", "ENABLED", &rule), ), }, - resource.TestStep{ + { Config: testAccAWSCloudWatchEventRuleConfigDisabled, Check: resource.ComposeTestCheckFunc( testAccCheckCloudWatchEventRuleExists("aws_cloudwatch_event_rule.moo", &rule), testAccCheckCloudWatchEventRuleEnabled("aws_cloudwatch_event_rule.moo", "DISABLED", &rule), ), }, - resource.TestStep{ + { Config: testAccAWSCloudWatchEventRuleConfigEnabled, Check: resource.ComposeTestCheckFunc( testAccCheckCloudWatchEventRuleExists("aws_cloudwatch_event_rule.moo", &rule), @@ -168,31 +269,27 @@ func testAccCheckAWSCloudWatchEventRuleDestroy(s *terraform.State) error { func TestResourceAWSCloudWatchEventRule_validateEventPatternValue(t *testing.T) { type testCases struct { - Length int Value string ErrCount int } invalidCases := []testCases{ { - Length: 8, - Value: 
acctest.RandString(16), + Value: acctest.RandString(2049), ErrCount: 1, }, { - Length: 123, - Value: `{"abc":}`, + Value: `not-json`, ErrCount: 1, }, { - Length: 1, - Value: `{"abc":["1","2"]}`, + Value: fmt.Sprintf("{%q:[1, 2]}", acctest.RandString(2049)), ErrCount: 1, }, } for _, tc := range invalidCases { - _, errors := validateEventPatternValue(tc.Length)(tc.Value, "event_pattern") + _, errors := validateEventPatternValue()(tc.Value, "event_pattern") if len(errors) != tc.ErrCount { t.Fatalf("Expected %q to trigger a validation error.", tc.Value) } @@ -200,24 +297,21 @@ func TestResourceAWSCloudWatchEventRule_validateEventPatternValue(t *testing.T) validCases := []testCases{ { - Length: 0, Value: ``, ErrCount: 0, }, { - Length: 2, Value: `{}`, ErrCount: 0, }, { - Length: 18, Value: `{"abc":["1","2"]}`, ErrCount: 0, }, } for _, tc := range validCases { - _, errors := validateEventPatternValue(tc.Length)(tc.Value, "event_pattern") + _, errors := validateEventPatternValue()(tc.Value, "event_pattern") if len(errors) != tc.ErrCount { t.Fatalf("Expected %q not to trigger a validation error.", tc.Value) } @@ -252,6 +346,18 @@ resource "aws_cloudwatch_event_rule" "foo" { } ` +var testAccAWSCloudWatchEventRuleConfig_prefix = ` +resource "aws_cloudwatch_event_rule" "moobar" { + name_prefix = "tf-acc-cw-event-rule-prefix-" + schedule_expression = "rate(5 minutes)" + event_pattern = < 1 && v <= 10000 { + arrayProperties := &events.BatchArrayProperties{} + arrayProperties.Size = aws.Int64(int64(v)) + batchParameters.ArrayProperties = arrayProperties + } + if v, ok := param["job_attempts"].(int); ok && v > 0 && v <= 10 { + retryStrategy := &events.BatchRetryStrategy{} + retryStrategy.Attempts = aws.Int64(int64(v)) + batchParameters.RetryStrategy = retryStrategy + } + } + + return batchParameters +} + +func expandAwsCloudWatchEventTargetKinesisParameters(config []interface{}) *events.KinesisParameters { + kinesisParameters := &events.KinesisParameters{} + for _, c := range config { + param := c.(map[string]interface{}) + if v, ok := param["partition_key_path"].(string); ok && v != "" { + kinesisParameters.PartitionKeyPath = aws.String(v) + } + } + + return kinesisParameters +} + +func expandAwsCloudWatchEventTargetSqsParameters(config []interface{}) *events.SqsParameters { + sqsParameters := &events.SqsParameters{} + for _, c := range config { + param := c.(map[string]interface{}) + if v, ok := param["message_group_id"].(string); ok && v != "" { + sqsParameters.MessageGroupId = aws.String(v) + } + } + + return sqsParameters +} func expandAwsCloudWatchEventTransformerParameters(config []interface{}) *events.InputTransformer { transformerParameters := &events.InputTransformer{} @@ -379,11 +580,64 @@ func flattenAwsCloudWatchEventTargetRunParameters(runCommand *events.RunCommandP } func flattenAwsCloudWatchEventTargetEcsParameters(ecsParameters *events.EcsParameters) []map[string]interface{} { config := make(map[string]interface{}) + if ecsParameters.Group != nil { + config["group"] = *ecsParameters.Group + } + if ecsParameters.LaunchType != nil { + config["launch_type"] = *ecsParameters.LaunchType + } + config["network_configuration"] = flattenAwsCloudWatchEventTargetEcsParametersNetworkConfiguration(ecsParameters.NetworkConfiguration) + if ecsParameters.PlatformVersion != nil { + config["platform_version"] = *ecsParameters.PlatformVersion + } config["task_count"] = *ecsParameters.TaskCount config["task_definition_arn"] = *ecsParameters.TaskDefinitionArn result := []map[string]interface{}{config} return 
result } +func flattenAwsCloudWatchEventTargetEcsParametersNetworkConfiguration(nc *events.NetworkConfiguration) []interface{} { + if nc == nil { + return nil + } + + result := make(map[string]interface{}) + result["security_groups"] = schema.NewSet(schema.HashString, flattenStringList(nc.AwsvpcConfiguration.SecurityGroups)) + result["subnets"] = schema.NewSet(schema.HashString, flattenStringList(nc.AwsvpcConfiguration.Subnets)) + + if nc.AwsvpcConfiguration.AssignPublicIp != nil { + result["assign_public_ip"] = *nc.AwsvpcConfiguration.AssignPublicIp == events.AssignPublicIpEnabled + } + + return []interface{}{result} +} + +func flattenAwsCloudWatchEventTargetBatchParameters(batchParameters *events.BatchParameters) []map[string]interface{} { + config := make(map[string]interface{}) + config["job_definition"] = aws.StringValue(batchParameters.JobDefinition) + config["job_name"] = aws.StringValue(batchParameters.JobName) + if batchParameters.ArrayProperties != nil { + config["array_size"] = int(aws.Int64Value(batchParameters.ArrayProperties.Size)) + } + if batchParameters.RetryStrategy != nil { + config["job_attempts"] = int(aws.Int64Value(batchParameters.RetryStrategy.Attempts)) + } + result := []map[string]interface{}{config} + return result +} + +func flattenAwsCloudWatchEventTargetKinesisParameters(kinesisParameters *events.KinesisParameters) []map[string]interface{} { + config := make(map[string]interface{}) + config["partition_key_path"] = *kinesisParameters.PartitionKeyPath + result := []map[string]interface{}{config} + return result +} + +func flattenAwsCloudWatchEventTargetSqsParameters(sqsParameters *events.SqsParameters) []map[string]interface{} { + config := make(map[string]interface{}) + config["message_group_id"] = *sqsParameters.MessageGroupId + result := []map[string]interface{}{config} + return result +} func flattenAwsCloudWatchInputTransformer(inputTransformer *events.InputTransformer) []map[string]interface{} { config := make(map[string]interface{}) diff --git a/aws/resource_aws_cloudwatch_event_target_test.go b/aws/resource_aws_cloudwatch_event_target_test.go index ef69f943d1b..96955c17d4c 100644 --- a/aws/resource_aws_cloudwatch_event_target_test.go +++ b/aws/resource_aws_cloudwatch_event_target_test.go @@ -21,7 +21,7 @@ func TestAccAWSCloudWatchEventTarget_basic(t *testing.T) { targetID1 := fmt.Sprintf("tf-acc-cw-target-%s", rName1) targetID2 := fmt.Sprintf("tf-acc-cw-target-%s", rName2) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, @@ -56,7 +56,7 @@ func TestAccAWSCloudWatchEventTarget_missingTargetId(t *testing.T) { ruleName := fmt.Sprintf("tf-acc-cw-event-rule-missing-target-id-%s", rName) snsTopicName := fmt.Sprintf("tf-acc-%s", rName) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, @@ -81,7 +81,7 @@ func TestAccAWSCloudWatchEventTarget_full(t *testing.T) { ssmDocumentName := acctest.RandomWithPrefix("tf_ssm_Document") targetID := fmt.Sprintf("tf-acc-cw-target-full-%s", rName) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, @@ -106,7 +106,7 @@ func 
TestAccAWSCloudWatchEventTarget_ssmDocument(t *testing.T) { var target events.Target rName := acctest.RandomWithPrefix("tf_ssm_Document") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, @@ -125,7 +125,7 @@ func TestAccAWSCloudWatchEventTarget_ecs(t *testing.T) { var target events.Target rName := acctest.RandomWithPrefix("tf_ecs_target") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, @@ -139,11 +139,69 @@ func TestAccAWSCloudWatchEventTarget_ecs(t *testing.T) { }, }) } + +func TestAccAWSCloudWatchEventTarget_batch(t *testing.T) { + var target events.Target + rName := acctest.RandomWithPrefix("tf_batch_target") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudWatchEventTargetConfigBatch(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudWatchEventTargetExists("aws_cloudwatch_event_target.test", &target), + ), + }, + }, + }) +} + +func TestAccAWSCloudWatchEventTarget_kinesis(t *testing.T) { + var target events.Target + rName := acctest.RandomWithPrefix("tf_kinesis_target") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudWatchEventTargetConfigKinesis(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudWatchEventTargetExists("aws_cloudwatch_event_target.test", &target), + ), + }, + }, + }) +} + +func TestAccAWSCloudWatchEventTarget_sqs(t *testing.T) { + var target events.Target + rName := acctest.RandomWithPrefix("tf_sqs_target") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudWatchEventTargetConfigSqs(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudWatchEventTargetExists("aws_cloudwatch_event_target.test", &target), + ), + }, + }, + }) +} + func TestAccAWSCloudWatchEventTarget_input_transformer(t *testing.T) { var target events.Target rName := acctest.RandomWithPrefix("tf_input_transformer") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCloudWatchEventTargetDestroy, @@ -400,6 +458,15 @@ resource "aws_cloudwatch_event_rule" "schedule" { schedule_expression = "rate(5 minutes)" } +resource "aws_vpc" "vpc" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "subnet" { + vpc_id = "${aws_vpc.vpc.id}" + cidr_block = "10.1.1.0/24" +} + resource "aws_cloudwatch_event_target" "test" { arn = "${aws_ecs_cluster.test.id}" rule = "${aws_cloudwatch_event_rule.schedule.id}" @@ -408,6 +475,10 @@ resource "aws_cloudwatch_event_target" "test" { ecs_target { task_count = 1 task_definition_arn = "${aws_ecs_task_definition.task.arn}" + launch_type = "FARGATE" + network_configuration { + subnets = ["${aws_subnet.subnet.id}"] + } } 
} @@ -459,6 +530,11 @@ resource "aws_ecs_cluster" "test" { resource "aws_ecs_task_definition" "task" { family = "%s" + cpu = 256 + memory = 512 + requires_compatibilities = ["FARGATE"] + network_mode = "awsvpc" + container_definitions = < 0 { @@ -373,6 +657,10 @@ func expandProjectEnvironment(d *schema.ResourceData) *codebuild.ProjectEnvironm projectEnvironmentVar.Value = &v } + if v := config["type"].(string); v != "" { + projectEnvironmentVar.Type = &v + } + projectEnvironmentVariables = append(projectEnvironmentVariables, projectEnvironmentVar) } @@ -385,44 +673,73 @@ func expandProjectEnvironment(d *schema.ResourceData) *codebuild.ProjectEnvironm func expandCodeBuildVpcConfig(rawVpcConfig []interface{}) *codebuild.VpcConfig { vpcConfig := codebuild.VpcConfig{} - if len(rawVpcConfig) == 0 { + if len(rawVpcConfig) == 0 || rawVpcConfig[0] == nil { return &vpcConfig - } else { + } - data := rawVpcConfig[0].(map[string]interface{}) - vpcConfig.VpcId = aws.String(data["vpc_id"].(string)) - vpcConfig.Subnets = expandStringList(data["subnets"].(*schema.Set).List()) - vpcConfig.SecurityGroupIds = expandStringList(data["security_group_ids"].(*schema.Set).List()) + data := rawVpcConfig[0].(map[string]interface{}) + vpcConfig.VpcId = aws.String(data["vpc_id"].(string)) + vpcConfig.Subnets = expandStringList(data["subnets"].(*schema.Set).List()) + vpcConfig.SecurityGroupIds = expandStringList(data["security_group_ids"].(*schema.Set).List()) - return &vpcConfig + return &vpcConfig +} + +func expandProjectSecondarySources(d *schema.ResourceData) []*codebuild.ProjectSource { + configs := d.Get("secondary_sources").(*schema.Set).List() + + if len(configs) == 0 { + return nil } + + sources := make([]*codebuild.ProjectSource, 0) + + for _, config := range configs { + source := expandProjectSourceData(config.(map[string]interface{})) + sources = append(sources, &source) + } + + return sources } func expandProjectSource(d *schema.ResourceData) codebuild.ProjectSource { configs := d.Get("source").(*schema.Set).List() - projectSource := codebuild.ProjectSource{} - for _, configRaw := range configs { - data := configRaw.(map[string]interface{}) + data := configs[0].(map[string]interface{}) + return expandProjectSourceData(data) +} - sourceType := data["type"].(string) - location := data["location"].(string) - buildspec := data["buildspec"].(string) +func expandProjectSourceData(data map[string]interface{}) codebuild.ProjectSource { + sourceType := data["type"].(string) - projectSource = codebuild.ProjectSource{ - Type: &sourceType, - Location: &location, - Buildspec: &buildspec, - } + projectSource := codebuild.ProjectSource{ + Buildspec: aws.String(data["buildspec"].(string)), + GitCloneDepth: aws.Int64(int64(data["git_clone_depth"].(int))), + InsecureSsl: aws.Bool(data["insecure_ssl"].(bool)), + Type: aws.String(sourceType), + } + + if data["source_identifier"] != nil { + projectSource.SourceIdentifier = aws.String(data["source_identifier"].(string)) + } - if v, ok := data["auth"]; ok { - if len(v.(*schema.Set).List()) > 0 { - auth := v.(*schema.Set).List()[0].(map[string]interface{}) + if data["location"].(string) != "" { + projectSource.Location = aws.String(data["location"].(string)) + } - projectSource.Auth = &codebuild.SourceAuth{ - Type: aws.String(auth["type"].(string)), - Resource: aws.String(auth["resource"].(string)), - } + // Only valid for BITBUCKET and GITHUB source type, e.g. 
+ // InvalidInputException: Source type GITHUB_ENTERPRISE does not support ReportBuildStatus + if sourceType == codebuild.SourceTypeBitbucket || sourceType == codebuild.SourceTypeGithub { + projectSource.ReportBuildStatus = aws.Bool(data["report_build_status"].(bool)) + } + + if v, ok := data["auth"]; ok { + if len(v.(*schema.Set).List()) > 0 { + auth := v.(*schema.Set).List()[0].(map[string]interface{}) + + projectSource.Auth = &codebuild.SourceAuth{ + Type: aws.String(auth["type"].(string)), + Resource: aws.String(auth["resource"].(string)), } } } @@ -440,7 +757,7 @@ func resourceAwsCodeBuildProjectRead(d *schema.ResourceData, meta interface{}) e }) if err != nil { - return fmt.Errorf("[ERROR] Error retreiving Projects: %q", err) + return fmt.Errorf("Error retreiving Projects: %q", err) } // if nothing was found, then return no state @@ -460,6 +777,18 @@ func resourceAwsCodeBuildProjectRead(d *schema.ResourceData, meta interface{}) e return err } + if err := d.Set("cache", flattenAwsCodebuildProjectCache(project.Cache)); err != nil { + return err + } + + if err := d.Set("secondary_artifacts", flattenAwsCodeBuildProjectSecondaryArtifacts(project.SecondaryArtifacts)); err != nil { + return err + } + + if err := d.Set("secondary_sources", flattenAwsCodeBuildProjectSecondarySources(project.SecondarySources)); err != nil { + return err + } + if err := d.Set("source", flattenAwsCodeBuildProjectSource(project.Source)); err != nil { return err } @@ -468,11 +797,19 @@ func resourceAwsCodeBuildProjectRead(d *schema.ResourceData, meta interface{}) e return err } + d.Set("arn", project.Arn) d.Set("description", project.Description) d.Set("encryption_key", project.EncryptionKey) d.Set("name", project.Name) d.Set("service_role", project.ServiceRole) d.Set("build_timeout", project.TimeoutInMinutes) + if project.Badge != nil { + d.Set("badge_enabled", project.Badge.BadgeEnabled) + d.Set("badge_url", project.Badge.BadgeRequestUrl) + } else { + d.Set("badge_enabled", false) + d.Set("badge_url", "") + } if err := d.Set("tags", tagsToMapCodeBuild(project.Tags)); err != nil { return err @@ -503,10 +840,30 @@ func resourceAwsCodeBuildProjectUpdate(d *schema.ResourceData, meta interface{}) params.Artifacts = &projectArtifacts } + if d.HasChange("secondary_sources") { + projectSecondarySources := expandProjectSecondarySources(d) + params.SecondarySources = projectSecondarySources + } + + if d.HasChange("secondary_artifacts") { + projectSecondaryArtifacts := expandProjectSecondaryArtifacts(d) + params.SecondaryArtifacts = projectSecondaryArtifacts + } + if d.HasChange("vpc_config") { params.VpcConfig = expandCodeBuildVpcConfig(d.Get("vpc_config").([]interface{})) } + if d.HasChange("cache") { + if v, ok := d.GetOk("cache"); ok { + params.Cache = expandProjectCache(v.([]interface{})) + } else { + params.Cache = &codebuild.ProjectCache{ + Type: aws.String("NO_CACHE"), + } + } + } + if d.HasChange("description") { params.Description = aws.String(d.Get("description").(string)) } @@ -523,11 +880,32 @@ func resourceAwsCodeBuildProjectUpdate(d *schema.ResourceData, meta interface{}) params.TimeoutInMinutes = aws.Int64(int64(d.Get("build_timeout").(int))) } + if d.HasChange("badge_enabled") { + params.BadgeEnabled = aws.Bool(d.Get("badge_enabled").(bool)) + } + // The documentation clearly says "The replacement set of tags for this build project." // But its a slice of pointers so if not set for every update, they get removed. 
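// To avoid clearing them, the full tag set from configuration is always sent on every update.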
params.Tags = tagsFromMapCodeBuild(d.Get("tags").(map[string]interface{})) - _, err := conn.UpdateProject(params) + // Handle IAM eventual consistency + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + + _, err = conn.UpdateProject(params) + if err != nil { + // InvalidInputException: CodeBuild is not authorized to perform + // InvalidInputException: Not authorized to perform DescribeSecurityGroups + if isAWSErr(err, "InvalidInputException", "ot authorized to perform") { + return resource.RetryableError(err) + } + + return resource.NonRetryableError(err) + } + + return nil + + }) if err != nil { return fmt.Errorf( @@ -549,21 +927,45 @@ func resourceAwsCodeBuildProjectDelete(d *schema.ResourceData, meta interface{}) return err } - d.SetId("") - return nil } +func flattenAwsCodeBuildProjectSecondaryArtifacts(artifactsList []*codebuild.ProjectArtifacts) *schema.Set { + artifactSet := schema.Set{ + F: resourceAwsCodeBuildProjectArtifactsHash, + } + + for _, artifacts := range artifactsList { + artifactSet.Add(flattenAwsCodeBuildProjectArtifactsData(*artifacts)) + } + return &artifactSet +} + func flattenAwsCodeBuildProjectArtifacts(artifacts *codebuild.ProjectArtifacts) *schema.Set { artifactSet := schema.Set{ F: resourceAwsCodeBuildProjectArtifactsHash, } + values := flattenAwsCodeBuildProjectArtifactsData(*artifacts) + + artifactSet.Add(values) + + return &artifactSet +} + +func flattenAwsCodeBuildProjectArtifactsData(artifacts codebuild.ProjectArtifacts) map[string]interface{} { values := map[string]interface{}{} values["type"] = *artifacts.Type + if artifacts.ArtifactIdentifier != nil { + values["artifact_identifier"] = *artifacts.ArtifactIdentifier + } + + if artifacts.EncryptionDisabled != nil { + values["encryption_disabled"] = *artifacts.EncryptionDisabled + } if artifacts.Location != nil { values["location"] = *artifacts.Location } @@ -583,10 +985,20 @@ func flattenAwsCodeBuildProjectArtifacts(artifacts *codebuild.ProjectArtifacts) if artifacts.Path != nil { values["path"] = *artifacts.Path } + return values +} - artifactSet.Add(values) +func flattenAwsCodebuildProjectCache(cache *codebuild.ProjectCache) []interface{} { + if cache == nil { + return []interface{}{} + } - return &artifactSet + values := map[string]interface{}{ + "location": aws.StringValue(cache.Location), + "type": aws.StringValue(cache.Type), + } + + return []interface{}{values} } func flattenAwsCodeBuildProjectEnvironment(environment *codebuild.ProjectEnvironment) []interface{} { @@ -595,6 +1007,7 @@ func flattenAwsCodeBuildProjectEnvironment(environment *codebuild.ProjectEnviron envConfig["type"] = *environment.Type envConfig["compute_type"] = *environment.ComputeType envConfig["image"] = *environment.Image + envConfig["certificate"] = aws.StringValue(environment.Certificate) envConfig["privileged_mode"] = *environment.PrivilegedMode if environment.EnvironmentVariables != nil { @@ -605,27 +1018,43 @@ func flattenAwsCodeBuildProjectEnvironment(environment *codebuild.ProjectEnviron } +func flattenAwsCodeBuildProjectSecondarySources(sourceList []*codebuild.ProjectSource) []interface{} { + l := make([]interface{}, 0) + + for _, source := range sourceList { + l = append(l, flattenAwsCodeBuildProjectSourceData(source)) + } + + return l +} + func flattenAwsCodeBuildProjectSource(source *codebuild.ProjectSource) []interface{} { l := make([]interface{}, 1) - m := map[string]interface{}{} - m["type"] = *source.Type + l[0] = flattenAwsCodeBuildProjectSourceData(source) - if source.Auth != 
nil { - m["auth"] = schema.NewSet(resourceAwsCodeBuildProjectSourceAuthHash, []interface{}{sourceAuthToMap(source.Auth)}) - } + return l +} - if source.Buildspec != nil { - m["buildspec"] = *source.Buildspec +func flattenAwsCodeBuildProjectSourceData(source *codebuild.ProjectSource) interface{} { + m := map[string]interface{}{ + "buildspec": aws.StringValue(source.Buildspec), + "location": aws.StringValue(source.Location), + "git_clone_depth": int(aws.Int64Value(source.GitCloneDepth)), + "insecure_ssl": aws.BoolValue(source.InsecureSsl), + "report_build_status": aws.BoolValue(source.ReportBuildStatus), + "type": aws.StringValue(source.Type), } - if source.Location != nil { - m["location"] = *source.Location + if source.Auth != nil { + m["auth"] = schema.NewSet(resourceAwsCodeBuildProjectSourceAuthHash, []interface{}{sourceAuthToMap(source.Auth)}) } - l[0] = m + if source.SourceIdentifier != nil { + m["source_identifier"] = aws.StringValue(source.SourceIdentifier) + } - return l + return m } func flattenAwsCodeBuildVpcConfig(vpcConfig *codebuild.VpcConfig) []interface{} { @@ -645,10 +1074,11 @@ func resourceAwsCodeBuildProjectArtifactsHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) - artifactType := m["type"].(string) - - buf.WriteString(fmt.Sprintf("%s-", artifactType)) + buf.WriteString(fmt.Sprintf("%s-", m["type"].(string))) + if v, ok := m["artifact_identifier"]; ok { + buf.WriteString(fmt.Sprintf("%s:", v.(string))) + } return hashcode.String(buf.String()) } @@ -665,10 +1095,18 @@ func resourceAwsCodeBuildProjectEnvironmentHash(v interface{}) int { buf.WriteString(fmt.Sprintf("%s-", computeType)) buf.WriteString(fmt.Sprintf("%s-", image)) buf.WriteString(fmt.Sprintf("%t-", privilegedMode)) + if v, ok := m["certificate"]; ok && v.(string) != "" { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } for _, e := range environmentVariables { if e != nil { // Old statefiles might have nil values in them ev := e.(map[string]interface{}) - buf.WriteString(fmt.Sprintf("%s:%s-", ev["name"].(string), ev["value"].(string))) + buf.WriteString(fmt.Sprintf("%s:", ev["name"].(string))) + // type is sometimes not returned by the API + if v, ok := ev["type"]; ok { + buf.WriteString(fmt.Sprintf("%s:", v.(string))) + } + buf.WriteString(fmt.Sprintf("%s-", ev["value"].(string))) } } @@ -680,12 +1118,24 @@ func resourceAwsCodeBuildProjectSourceHash(v interface{}) int { m := v.(map[string]interface{}) buf.WriteString(fmt.Sprintf("%s-", m["type"].(string))) + if v, ok := m["source_identifier"]; ok { + buf.WriteString(fmt.Sprintf("%s-", strconv.Itoa(v.(int)))) + } if v, ok := m["buildspec"]; ok { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } if v, ok := m["location"]; ok { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } + if v, ok := m["git_clone_depth"]; ok { + buf.WriteString(fmt.Sprintf("%s-", strconv.Itoa(v.(int)))) + } + if v, ok := m["insecure_ssl"]; ok { + buf.WriteString(fmt.Sprintf("%s-", strconv.FormatBool(v.(bool)))) + } + if v, ok := m["report_build_status"]; ok { + buf.WriteString(fmt.Sprintf("%s-", strconv.FormatBool(v.(bool)))) + } return hashcode.String(buf.String()) } @@ -711,6 +1161,9 @@ func environmentVariablesToMap(environmentVariables []*codebuild.EnvironmentVari item := map[string]interface{}{} item["name"] = *env.Name item["value"] = *env.Value + if env.Type != nil { + item["type"] = *env.Type + } envVariables = append(envVariables, item) } } diff --git a/aws/resource_aws_codebuild_project_migrate.go b/aws/resource_aws_codebuild_project_migrate.go 
deleted file mode 100644 index 97d7a9ff245..00000000000 --- a/aws/resource_aws_codebuild_project_migrate.go +++ /dev/null @@ -1,36 +0,0 @@ -package aws - -import ( - "fmt" - "log" - "strings" - - "github.com/hashicorp/terraform/terraform" -) - -func resourceAwsCodebuildMigrateState( - v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { - switch v { - case 0: - log.Println("[INFO] Found AWS Codebuild State v0; migrating to v1") - return migrateCodebuildStateV0toV1(is) - default: - return is, fmt.Errorf("Unexpected schema version: %d", v) - } -} - -func migrateCodebuildStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { - if is.Empty() { - log.Println("[DEBUG] Empty InstanceState; nothing to migrate.") - return is, nil - } - - log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) - - if is.Attributes["timeout"] != "" { - is.Attributes["build_timeout"] = strings.TrimSpace(is.Attributes["timeout"]) - } - - log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes) - return is, nil -} diff --git a/aws/resource_aws_codebuild_project_migrate_test.go b/aws/resource_aws_codebuild_project_migrate_test.go deleted file mode 100644 index 2ae6b4e5363..00000000000 --- a/aws/resource_aws_codebuild_project_migrate_test.go +++ /dev/null @@ -1,53 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/terraform" -) - -func TestAWSCodebuildMigrateState(t *testing.T) { - cases := map[string]struct { - StateVersion int - ID string - Attributes map[string]string - Expected string - Meta interface{} - }{ - "v0_1": { - StateVersion: 0, - ID: "tf-testing-file", - Attributes: map[string]string{ - "description": "some description", - "timeout": "5", - }, - Expected: "5", - }, - "v0_2": { - StateVersion: 0, - ID: "tf-testing-file", - Attributes: map[string]string{ - "description": "some description", - "build_timeout": "5", - }, - Expected: "5", - }, - } - - for tn, tc := range cases { - is := &terraform.InstanceState{ - ID: tc.ID, - Attributes: tc.Attributes, - } - is, err := resourceAwsCodebuildMigrateState( - tc.StateVersion, is, tc.Meta) - - if err != nil { - t.Fatalf("bad: %s, err: %#v", tn, err) - } - - if is.Attributes["build_timeout"] != tc.Expected { - t.Fatalf("Bad build_timeout migration: %s\n\n expected: %s", is.Attributes["build_timeout"], tc.Expected) - } - } -} diff --git a/aws/resource_aws_codebuild_project_test.go b/aws/resource_aws_codebuild_project_test.go index e4a5ee3a657..0a9ed416e8a 100644 --- a/aws/resource_aws_codebuild_project_test.go +++ b/aws/resource_aws_codebuild_project_test.go @@ -2,10 +2,9 @@ package aws import ( "fmt" + "os" "regexp" - "strings" "testing" - "unicode" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/codebuild" @@ -14,143 +13,780 @@ import ( "github.com/hashicorp/terraform/terraform" ) +// This is used for testing aws_codebuild_webhook as well as aws_codebuild_project. +// The Terraform AWS user must have done the manual Bitbucket OAuth dance for this +// functionality to work. Additionally, the Bitbucket user that the Terraform AWS +// user logs in as must have access to the Bitbucket repository. 
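+// Set AWS_CODEBUILD_BITBUCKET_SOURCE_LOCATION to override the default repository used by these tests.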
+func testAccAWSCodeBuildBitbucketSourceLocationFromEnv() string { + sourceLocation := os.Getenv("AWS_CODEBUILD_BITBUCKET_SOURCE_LOCATION") + if sourceLocation == "" { + return "https://terraform@bitbucket.org/terraform/aws-test.git" + } + return sourceLocation +} + +// This is used for testing aws_codebuild_webhook as well as aws_codebuild_project. +// The Terraform AWS user must have done the manual GitHub OAuth dance for this +// functionality to work. Additionally, the GitHub user that the Terraform AWS +// user logs in as must have access to the GitHub repository. +func testAccAWSCodeBuildGitHubSourceLocationFromEnv() string { + sourceLocation := os.Getenv("AWS_CODEBUILD_GITHUB_SOURCE_LOCATION") + if sourceLocation == "" { + return "https://github.com/hashibot-test/aws-test.git" + } + return sourceLocation +} + +func TestAccAWSCodeBuildProject_importBasic(t *testing.T) { + resourceName := "aws_codebuild_project.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_basic(rName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSCodeBuildProject_basic(t *testing.T) { - name := acctest.RandString(10) + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + testAccCheckResourceAttrRegionalARN(resourceName, "arn", "codebuild", fmt.Sprintf("project/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "artifacts.#", "1"), + resource.TestCheckResourceAttr(resourceName, "artifacts.1178773975.encryption_disabled", "false"), + resource.TestCheckResourceAttr(resourceName, "artifacts.1178773975.location", ""), + resource.TestCheckResourceAttr(resourceName, "artifacts.1178773975.name", ""), + resource.TestCheckResourceAttr(resourceName, "artifacts.1178773975.namespace_type", ""), + resource.TestCheckResourceAttr(resourceName, "artifacts.1178773975.packaging", ""), + resource.TestCheckResourceAttr(resourceName, "artifacts.1178773975.path", ""), + resource.TestCheckResourceAttr(resourceName, "artifacts.1178773975.type", "NO_ARTIFACTS"), + resource.TestCheckResourceAttr(resourceName, "badge_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "build_timeout", "60"), + resource.TestCheckResourceAttr(resourceName, "cache.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cache.0.type", "NO_CACHE"), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestMatchResourceAttr(resourceName, "encryption_key", regexp.MustCompile(`^arn:[^:]+:kms:[^:]+:[^:]+:alias/aws/s3$`)), + resource.TestCheckResourceAttr(resourceName, "environment.#", "1"), + resource.TestCheckResourceAttr(resourceName, "environment.1974383098.compute_type", "BUILD_GENERAL1_SMALL"), + resource.TestCheckResourceAttr(resourceName, "environment.1974383098.environment_variable.#", "0"), + 
resource.TestCheckResourceAttr(resourceName, "environment.1974383098.image", "2"), + resource.TestCheckResourceAttr(resourceName, "environment.1974383098.privileged_mode", "false"), + resource.TestCheckResourceAttr(resourceName, "environment.1974383098.type", "LINUX_CONTAINER"), + resource.TestMatchResourceAttr(resourceName, "service_role", regexp.MustCompile(`^arn:[^:]+:iam::[^:]+:role/tf-acc-test-[0-9]+$`)), + resource.TestCheckResourceAttr(resourceName, "source.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source.1441597390.auth.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source.1441597390.git_clone_depth", "0"), + resource.TestCheckResourceAttr(resourceName, "source.1441597390.insecure_ssl", "false"), + resource.TestCheckResourceAttr(resourceName, "source.1441597390.location", "https://github.com/hashibot-test/aws-test.git"), + resource.TestCheckResourceAttr(resourceName, "source.1441597390.report_build_status", "false"), + resource.TestCheckResourceAttr(resourceName, "source.1441597390.type", "GITHUB"), + resource.TestCheckResourceAttr(resourceName, "vpc_config.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_BadgeEnabled(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodebuildProjectConfig_BadgeEnabled(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "badge_enabled", "true"), + resource.TestMatchResourceAttr(resourceName, "badge_url", regexp.MustCompile(`\b(https?).*\b`)), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_BuildTimeout(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_BuildTimeout(rName, 120), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "build_timeout", "120"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_BuildTimeout(rName, 240), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "build_timeout", "240"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Cache(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Cache(rName, "", "S3"), + ExpectError: regexp.MustCompile(`cache location is required when cache type is "S3"`), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Cache(rName, "", 
"NO_CACHE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "cache.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cache.0.type", "NO_CACHE"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "cache.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cache.0.type", "NO_CACHE"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Cache(rName, "some-bucket", "S3"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "cache.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cache.0.location", "some-bucket"), + resource.TestCheckResourceAttr(resourceName, "cache.0.type", "S3"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Cache(rName, "some-new-bucket", "S3"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "cache.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cache.0.location", "some-new-bucket"), + resource.TestCheckResourceAttr(resourceName, "cache.0.type", "S3"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "cache.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cache.0.type", "NO_CACHE"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Description(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Description(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Description(rName, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_EncryptionKey(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_EncryptionKey(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestMatchResourceAttr(resourceName, "encryption_key", regexp.MustCompile(`.+`)), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Environment_EnvironmentVariable_Type(t *testing.T) { + var project codebuild.Project + rName := 
acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Environment_EnvironmentVariable_Type(rName, "PLAINTEXT"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "environment.3925601246.environment_variable.0.type", "PLAINTEXT"), + resource.TestCheckResourceAttr(resourceName, "environment.3925601246.environment_variable.1.type", "PLAINTEXT"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Environment_EnvironmentVariable_Type(rName, "PARAMETER_STORE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "environment.2414912039.environment_variable.0.type", "PLAINTEXT"), + resource.TestCheckResourceAttr(resourceName, "environment.2414912039.environment_variable.1.type", "PARAMETER_STORE"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Environment_Certificate(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + bName := acctest.RandomWithPrefix("tf-acc-test-bucket") + oName := "certificate.pem" + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Environment_Certificate(rName, bName, oName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + testAccCheckAWSCodeBuildProjectCertificate(&project, fmt.Sprintf("%s/%s", bName, oName)), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_Auth(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_Auth(rName, "FAKERESOURCE1", "INVALID"), + ExpectError: regexp.MustCompile(`expected source.0.auth.0.type to be one of`), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Source_Auth(rName, "FAKERESOURCE1", "OAUTH"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.3680505372.auth.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source.3680505372.auth.2706882902.resource", "FAKERESOURCE1"), + resource.TestCheckResourceAttr(resourceName, "source.3680505372.auth.2706882902.type", "OAUTH"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_GitCloneDepth(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSCodeBuildProjectConfig_Source_GitCloneDepth(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.1181740906.git_clone_depth", "1"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Source_GitCloneDepth(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.974047921.git_clone_depth", "2"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_InsecureSSL(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCodeBuildProjectConfig_basic(name, "", ""), + Config: testAccAWSCodeBuildProjectConfig_Source_InsecureSSL(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSCodeBuildProjectExists("aws_codebuild_project.foo"), - resource.TestCheckResourceAttr( - "aws_codebuild_project.foo", "build_timeout", "5"), + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.1976396802.insecure_ssl", "true"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Source_InsecureSSL(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.3680505372.insecure_ssl", "false"), ), }, }, }) } -func TestAccAWSCodeBuildProject_vpc(t *testing.T) { - name := acctest.RandString(10) +func TestAccAWSCodeBuildProject_Source_ReportBuildStatus_Bitbucket(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCodeBuildProjectConfig_basic(name, - testAccAWSCodeBuildProjectConfig_vpcConfig("\"${aws_subnet.codebuild_subnet.id}\",\"${aws_subnet.codebuild_subnet_2.id}\""), testAccAWSCodeBuildProjectConfig_vpcResources()), + Config: testAccAWSCodeBuildProjectConfig_Source_ReportBuildStatus_Bitbucket(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSCodeBuildProjectExists("aws_codebuild_project.foo"), - resource.TestCheckResourceAttr( - "aws_codebuild_project.foo", "build_timeout", "5"), - resource.TestCheckResourceAttrSet("aws_codebuild_project.foo", "vpc_config.0.vpc_id"), - resource.TestCheckResourceAttr("aws_codebuild_project.foo", "vpc_config.0.subnets.#", "2"), - resource.TestCheckResourceAttr("aws_codebuild_project.foo", "vpc_config.0.security_group_ids.#", "1"), + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.2876219937.report_build_status", "true"), ), }, { - Config: testAccAWSCodeBuildProjectConfig_basic(name, testAccAWSCodeBuildProjectConfig_vpcConfig("\"${aws_subnet.codebuild_subnet.id}\""), testAccAWSCodeBuildProjectConfig_vpcResources()), + Config: 
testAccAWSCodeBuildProjectConfig_Source_ReportBuildStatus_Bitbucket(rName, false), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSCodeBuildProjectExists("aws_codebuild_project.foo"), - resource.TestCheckResourceAttr( - "aws_codebuild_project.foo", "build_timeout", "5"), - resource.TestCheckResourceAttrSet("aws_codebuild_project.foo", "vpc_config.0.vpc_id"), - resource.TestCheckResourceAttr("aws_codebuild_project.foo", "vpc_config.0.subnets.#", "1"), - resource.TestCheckResourceAttr("aws_codebuild_project.foo", "vpc_config.0.security_group_ids.#", "1"), + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.3210444828.report_build_status", "false"), ), }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_ReportBuildStatus_GitHub(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_ReportBuildStatus_GitHub(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.4215890488.report_build_status", "true"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Source_ReportBuildStatus_GitHub(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.3680505372.report_build_status", "false"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_Type_Bitbucket(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_Type_Bitbucket(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.3210444828.type", "BITBUCKET"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_Type_CodeCommit(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_Type_CodeCommit(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.3715340088.type", "CODECOMMIT"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_Type_CodePipeline(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_Type_CodePipeline(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.2280907000.type", "CODEPIPELINE"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_Type_GitHubEnterprise(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_Type_GitHubEnterprise(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.553628638.type", "GITHUB_ENTERPRISE"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_Type_S3(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_Type_S3(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.2751363124.type", "S3"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_Type_NoSource(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + rBuildspec := ` +version: 0.2 +phases: + build: + commands: + - rspec hello_world_spec.rb` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_Type_NoSource(rName, "", rBuildspec), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.2726343112.type", "NO_SOURCE"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Source_Type_NoSourceInvalid(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + rBuildspec := ` +version: 0.2 +phases: + build: + commands: + - rspec hello_world_spec.rb` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Source_Type_NoSource(rName, "", ""), + ExpectError: regexp.MustCompile("`build_spec` must be set when source's `type` is `NO_SOURCE`"), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Source_Type_NoSource(rName, "location", rBuildspec), + ExpectError: regexp.MustCompile("`location` must be empty when source's `type` is `NO_SOURCE`"), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Tags(t *testing.T) { + var project codebuild.Project + rName := 
acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_Tags(rName, "tag2", "tag2value"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + resource.TestCheckResourceAttr(resourceName, "tags.tag2", "tag2value"), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_Tags(rName, "tag2", "tag2value-updated"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + resource.TestCheckResourceAttr(resourceName, "tags.tag2", "tag2value-updated"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_VpcConfig(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_VpcConfig(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "vpc_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "vpc_config.0.security_group_ids.#", "1"), + resource.TestCheckResourceAttr(resourceName, "vpc_config.0.subnets.#", "2"), + resource.TestMatchResourceAttr(resourceName, "vpc_config.0.vpc_id", regexp.MustCompile(`^vpc-`)), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_VpcConfig(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "vpc_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "vpc_config.0.security_group_ids.#", "1"), + resource.TestCheckResourceAttr(resourceName, "vpc_config.0.subnets.#", "1"), + resource.TestMatchResourceAttr(resourceName, "vpc_config.0.vpc_id", regexp.MustCompile(`^vpc-`)), + ), + }, + { + Config: testAccAWSCodeBuildProjectConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "vpc_config.#", "0"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_WindowsContainer(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeBuildProjectConfig_WindowsContainer(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "environment.#", "1"), + resource.TestCheckResourceAttr(resourceName, 
"environment.3935046469.compute_type", "BUILD_GENERAL1_MEDIUM"), + resource.TestCheckResourceAttr(resourceName, "environment.3935046469.environment_variable.#", "0"), + resource.TestCheckResourceAttr(resourceName, "environment.3935046469.image", "2"), + resource.TestCheckResourceAttr(resourceName, "environment.3935046469.privileged_mode", "false"), + resource.TestCheckResourceAttr(resourceName, "environment.3935046469.type", "WINDOWS_CONTAINER"), + ), + }, + }, + }) +} + +func TestAccAWSCodeBuildProject_Artifacts_EncryptionDisabled(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + bName := acctest.RandomWithPrefix("tf-acc-test-bucket") + resourceName := "aws_codebuild_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, + Steps: []resource.TestStep{ { - Config: testAccAWSCodeBuildProjectConfig_basicUpdated(name), + Config: testAccAWSCodebuildProjectConfig_Artifacts_EncryptionDisabled(rName, bName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSCodeBuildProjectExists("aws_codebuild_project.foo"), - resource.TestCheckResourceAttr( - "aws_codebuild_project.foo", "build_timeout", "50"), - resource.TestCheckNoResourceAttr("aws_codebuild_project.foo", "vpc_config"), + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "artifacts.#", "1"), + resource.TestCheckResourceAttr(resourceName, "artifacts.537882814.encryption_disabled", "true"), ), }, }, }) } -func TestAccAWSCodeBuildProject_sourceAuth(t *testing.T) { - authResource := "FAKERESOURCE1" - authType := "OAUTH" - name := acctest.RandString(10) - resourceName := "aws_codebuild_project.foo" +func TestAccAWSCodeBuildProject_SecondaryArtifacts(t *testing.T) { + var project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + bName := acctest.RandomWithPrefix("tf-acc-test-bucket") + resourceName := "aws_codebuild_project.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCodeBuildProjectConfig_sourceAuth(name, authResource, "INVALID"), - ExpectError: regexp.MustCompile(`expected source.0.auth.0.type to be one of`), - }, - { - Config: testAccAWSCodeBuildProjectConfig_sourceAuth(name, authResource, authType), + Config: testAccAWSCodebuildProjectConfig_SecondaryArtifacts(rName, bName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSCodeBuildProjectExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "source.1060593600.auth.2706882902.resource", authResource), - resource.TestCheckResourceAttr(resourceName, "source.1060593600.auth.2706882902.type", authType), + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "secondary_artifacts.#", "2"), + resource.TestCheckResourceAttr(resourceName, "secondary_artifacts.2341549664.artifact_identifier", "secondaryArtifact1"), + resource.TestCheckResourceAttr(resourceName, "secondary_artifacts.2696701347.artifact_identifier", "secondaryArtifact2"), ), }, }, }) } -func TestAccAWSCodeBuildProject_default_build_timeout(t *testing.T) { - name := acctest.RandString(10) +func TestAccAWSCodeBuildProject_SecondarySources_CodeCommit(t *testing.T) { + var 
project codebuild.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_codebuild_project.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeBuildProjectDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCodeBuildProjectConfig_default_timeout(name), - Check: resource.ComposeTestCheckFunc( - testAccCheckAWSCodeBuildProjectExists("aws_codebuild_project.foo"), - resource.TestCheckResourceAttr( - "aws_codebuild_project.foo", "build_timeout", "60"), - ), - }, - { - Config: testAccAWSCodeBuildProjectConfig_basicUpdated(name), + Config: testAccAWSCodeBuildProjectConfig_SecondarySources_CodeCommit(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSCodeBuildProjectExists("aws_codebuild_project.foo"), - resource.TestCheckResourceAttr( - "aws_codebuild_project.foo", "build_timeout", "50"), + testAccCheckAWSCodeBuildProjectExists(resourceName, &project), + resource.TestCheckResourceAttr(resourceName, "source.3715340088.type", "CODECOMMIT"), + resource.TestCheckResourceAttr(resourceName, "secondary_sources.3525046785.source_identifier", "secondarySource1"), + resource.TestCheckResourceAttr(resourceName, "secondary_sources.2644986630.source_identifier", "secondarySource2"), ), }, }, }) } -func longTestData() string { - data := ` - test-test-test-test-test-test-test-test-test-test- - test-test-test-test-test-test-test-test-test-test- - test-test-test-test-test-test-test-test-test-test- - test-test-test-test-test-test-test-test-test-test- - test-test-test-test-test-test-test-test-test-test- - test-test-test-test-test-test-test-test-test-test- - ` - - return strings.Map(func(r rune) rune { - if unicode.IsSpace(r) { - return -1 - } - return r - }, data) -} - func TestAWSCodeBuildProject_nameValidation(t *testing.T) { cases := []struct { Value string @@ -160,7 +796,7 @@ func TestAWSCodeBuildProject_nameValidation(t *testing.T) { {Value: "test", ErrCount: 0}, {Value: "1_test", ErrCount: 0}, {Value: "test**1", ErrCount: 1}, - {Value: longTestData(), ErrCount: 1}, + {Value: acctest.RandString(256), ErrCount: 1}, } for _, tc := range cases { @@ -172,7 +808,7 @@ func TestAWSCodeBuildProject_nameValidation(t *testing.T) { } } -func testAccCheckAWSCodeBuildProjectExists(n string) resource.TestCheckFunc { +func testAccCheckAWSCodeBuildProjectExists(n string, project *codebuild.Project) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -199,6 +835,8 @@ func testAccCheckAWSCodeBuildProjectExists(n string) resource.TestCheckFunc { return fmt.Errorf("No project found") } + *project = *out.Projects[0] + return nil } } @@ -228,13 +866,30 @@ func testAccCheckAWSCodeBuildProjectDestroy(s *terraform.State) error { return nil } - return fmt.Errorf("Default error in CodeBuild Test") + return nil +} + +func testAccCheckAWSCodeBuildProjectCertificate(project *codebuild.Project, expectedCertificate string) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.StringValue(project.Environment.Certificate) != expectedCertificate { + return fmt.Errorf("CodeBuild Project certificate (%s) did not match: %s", aws.StringValue(project.Environment.Certificate), expectedCertificate) + } + return nil + } +} + +func testAccAWSCodeBuildProjectConfig_Base_Bucket(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "%s" + force_destroy 
= true +}`, rName) } -func testAccAWSCodeBuildProjectConfig_basic(rName, vpcConfig, vpcResources string) string { +func testAccAWSCodeBuildProjectConfig_Base_ServiceRole(rName string) string { return fmt.Sprintf(` -resource "aws_iam_role" "codebuild_role" { - name = "codebuild-role-%s" +resource "aws_iam_role" "test" { + name = "%s" assume_role_policy = <> (64 - 7))) ^ uint64(i) ^ uint64(resourceAwsCodeDeployTagFilterHash(filter)) + } + return int(x) +} + func resourceAwsCodeDeployTriggerConfigHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) diff --git a/aws/resource_aws_codedeploy_deployment_group_test.go b/aws/resource_aws_codedeploy_deployment_group_test.go index addde663f67..f35bb7be888 100644 --- a/aws/resource_aws_codedeploy_deployment_group_test.go +++ b/aws/resource_aws_codedeploy_deployment_group_test.go @@ -1,6 +1,7 @@ package aws import ( + "errors" "fmt" "reflect" "regexp" @@ -22,13 +23,13 @@ func TestAccAWSCodeDeployDeploymentGroup_basic(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCodeDeployDeploymentGroup(rName), + { + Config: testAccAWSCodeDeployDeploymentGroup(rName, false), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo", &group), resource.TestCheckResourceAttr( @@ -41,6 +42,8 @@ func TestAccAWSCodeDeployDeploymentGroup_basic(t *testing.T) { "aws_codedeploy_deployment_group.foo", "service_role_arn", regexp.MustCompile("arn:aws:iam::[0-9]{12}:role/foo_role_.*")), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.#", "0"), resource.TestCheckResourceAttr( "aws_codedeploy_deployment_group.foo", "ec2_tag_filter.#", "1"), resource.TestCheckResourceAttr( @@ -58,8 +61,8 @@ func TestAccAWSCodeDeployDeploymentGroup_basic(t *testing.T) { "aws_codedeploy_deployment_group.foo", "trigger_configuration.#", "0"), ), }, - resource.TestStep{ - Config: testAccAWSCodeDeployDeploymentGroupModified(rName), + { + Config: testAccAWSCodeDeployDeploymentGroupModified(rName, false), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo", &group), resource.TestCheckResourceAttr( @@ -72,6 +75,8 @@ func TestAccAWSCodeDeployDeploymentGroup_basic(t *testing.T) { "aws_codedeploy_deployment_group.foo", "service_role_arn", regexp.MustCompile("arn:aws:iam::[0-9]{12}:role/bar_role_.*")), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.#", "0"), resource.TestCheckResourceAttr( "aws_codedeploy_deployment_group.foo", "ec2_tag_filter.#", "1"), resource.TestCheckResourceAttr( @@ -89,6 +94,102 @@ func TestAccAWSCodeDeployDeploymentGroup_basic(t *testing.T) { "aws_codedeploy_deployment_group.foo", "trigger_configuration.#", "0"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo"), + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSCodeDeployDeploymentGroup_basic_tagSet(t *testing.T) { + var group codedeploy.DeploymentGroupInfo + + rName := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { 
testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodeDeployDeploymentGroup(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo", &group), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "app_name", "foo_app_"+rName), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "deployment_group_name", "foo_"+rName), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "deployment_config_name", "CodeDeployDefault.OneAtATime"), + resource.TestMatchResourceAttr( + "aws_codedeploy_deployment_group.foo", "service_role_arn", + regexp.MustCompile("arn:aws:iam::[0-9]{12}:role/foo_role_.*")), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.2916377593.ec2_tag_filter.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.2916377593.ec2_tag_filter.2916377465.key", "filterkey"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.2916377593.ec2_tag_filter.2916377465.type", "KEY_AND_VALUE"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.2916377593.ec2_tag_filter.2916377465.value", "filtervalue"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_filter.#", "0"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "alarm_configuration.#", "0"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "auto_rollback_configuration.#", "0"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "trigger_configuration.#", "0"), + ), + }, + { + Config: testAccAWSCodeDeployDeploymentGroupModified(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo", &group), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "app_name", "foo_app_"+rName), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "deployment_group_name", "bar_"+rName), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "deployment_config_name", "CodeDeployDefault.OneAtATime"), + resource.TestMatchResourceAttr( + "aws_codedeploy_deployment_group.foo", "service_role_arn", + regexp.MustCompile("arn:aws:iam::[0-9]{12}:role/bar_role_.*")), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.2369538847.ec2_tag_filter.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.2369538847.ec2_tag_filter.2369538975.key", "filterkey"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.2369538847.ec2_tag_filter.2369538975.type", "KEY_AND_VALUE"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_set.2369538847.ec2_tag_filter.2369538975.value", "anotherfiltervalue"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "ec2_tag_filter.#", "0"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", 
"alarm_configuration.#", "0"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "auto_rollback_configuration.#", "0"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo", "trigger_configuration.#", "0"), + ), + }, + { + ResourceName: "aws_codedeploy_deployment_group.foo", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo"), + ImportStateVerify: true, + }, }, }) } @@ -98,12 +199,12 @@ func TestAccAWSCodeDeployDeploymentGroup_onPremiseTag(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSCodeDeployDeploymentGroupOnPremiseTags(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo", &group), @@ -124,6 +225,12 @@ func TestAccAWSCodeDeployDeploymentGroup_onPremiseTag(t *testing.T) { "aws_codedeploy_deployment_group.foo", "on_premises_instance_tag_filter.2916377465.value", "filtervalue"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo"), + ImportStateVerify: true, + }, }, }) } @@ -132,13 +239,13 @@ func TestAccAWSCodeDeployDeploymentGroup_disappears(t *testing.T) { var group codedeploy.DeploymentGroupInfo rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSCodeDeployDeploymentGroup(rName), + { + Config: testAccAWSCodeDeployDeploymentGroup(rName, false), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo", &group), testAccAWSCodeDeployDeploymentGroupDisappears(&group), @@ -154,12 +261,12 @@ func TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration_basic(t *testing.T rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSCodeDeployDeploymentGroup_triggerConfiguration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -172,7 +279,7 @@ func TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration_basic(t *testing.T }), ), }, - resource.TestStep{ + { Config: testAccAWSCodeDeployDeploymentGroup_triggerConfiguration_update(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -186,6 +293,12 @@ func TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration_basic(t *testing.T }), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: 
testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -195,12 +308,12 @@ func TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration_multiple(t *testin rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSCodeDeployDeploymentGroup_triggerConfiguration_createMultiple(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -218,7 +331,7 @@ func TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration_multiple(t *testin regexp.MustCompile("^arn:aws:sns:[^:]+:[0-9]{12}:bar-topic-"+rName+"$")), ), }, - resource.TestStep{ + { Config: testAccAWSCodeDeployDeploymentGroup_triggerConfiguration_updateMultiple(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -239,6 +352,12 @@ func TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration_multiple(t *testin regexp.MustCompile("^arn:aws:sns:[^:]+:[0-9]{12}:baz-topic-"+rName+"$")), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -248,12 +367,12 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_create(t *tes rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_auto_rollback_configuration_delete(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -261,7 +380,7 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_create(t *tes "aws_codedeploy_deployment_group.foo_group", "auto_rollback_configuration.#", "0"), ), }, - resource.TestStep{ + { Config: test_config_auto_rollback_configuration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -275,6 +394,12 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_create(t *tes "aws_codedeploy_deployment_group.foo_group", "auto_rollback_configuration.0.events.135881253", "DEPLOYMENT_FAILURE"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -284,12 +409,12 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_update(t *tes rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: 
test_config_auto_rollback_configuration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -303,7 +428,7 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_update(t *tes "aws_codedeploy_deployment_group.foo_group", "auto_rollback_configuration.0.events.135881253", "DEPLOYMENT_FAILURE"), ), }, - resource.TestStep{ + { Config: test_config_auto_rollback_configuration_update(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -319,6 +444,12 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_update(t *tes "aws_codedeploy_deployment_group.foo_group", "auto_rollback_configuration.0.events.135881253", "DEPLOYMENT_FAILURE"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -328,12 +459,12 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_delete(t *tes rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_auto_rollback_configuration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -347,7 +478,7 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_delete(t *tes "aws_codedeploy_deployment_group.foo_group", "auto_rollback_configuration.0.events.135881253", "DEPLOYMENT_FAILURE"), ), }, - resource.TestStep{ + { Config: test_config_auto_rollback_configuration_delete(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -355,6 +486,12 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_delete(t *tes "aws_codedeploy_deployment_group.foo_group", "auto_rollback_configuration.#", "0"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -364,12 +501,12 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_disable(t *te rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_auto_rollback_configuration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -383,7 +520,7 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_disable(t *te "aws_codedeploy_deployment_group.foo_group", "auto_rollback_configuration.0.events.135881253", "DEPLOYMENT_FAILURE"), ), }, - resource.TestStep{ + { Config: test_config_auto_rollback_configuration_disable(rName), Check: 
resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -397,6 +534,12 @@ func TestAccAWSCodeDeployDeploymentGroup_autoRollbackConfiguration_disable(t *te "aws_codedeploy_deployment_group.foo_group", "auto_rollback_configuration.0.events.135881253", "DEPLOYMENT_FAILURE"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -406,12 +549,12 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_create(t *testing.T) rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_alarm_configuration_delete(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -419,7 +562,7 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_create(t *testing.T) "aws_codedeploy_deployment_group.foo_group", "alarm_configuration.#", "0"), ), }, - resource.TestStep{ + { Config: test_config_alarm_configuration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -435,6 +578,12 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_create(t *testing.T) "aws_codedeploy_deployment_group.foo_group", "alarm_configuration.0.ignore_poll_alarm_failure", "false"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -444,12 +593,12 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_update(t *testing.T) rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_alarm_configuration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -465,7 +614,7 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_update(t *testing.T) "aws_codedeploy_deployment_group.foo_group", "alarm_configuration.0.ignore_poll_alarm_failure", "false"), ), }, - resource.TestStep{ + { Config: test_config_alarm_configuration_update(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -483,6 +632,12 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_update(t *testing.T) "aws_codedeploy_deployment_group.foo_group", "alarm_configuration.0.ignore_poll_alarm_failure", "true"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } 
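
Each of the new import steps above passes `testAccAWSCodeDeployDeploymentGroupImportStateIdFunc` as the `ImportStateIdFunc`; the helper itself is defined outside this hunk. Below is a minimal, non-authoritative sketch of how such a helper is typically written, assuming the deployment group importer accepts a composite `<app_name>:<deployment_group_name>` identifier (the exact helper added by this change may differ):

```go
package aws

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

// Hypothetical sketch of the ImportStateIdFunc helper referenced by the new
// import-verification steps; it looks the resource up in state and builds the
// import ID from attributes already stored there.
func testAccAWSCodeDeployDeploymentGroupImportStateIdFunc(resourceName string) resource.ImportStateIdFunc {
	return func(s *terraform.State) (string, error) {
		rs, ok := s.RootModule().Resources[resourceName]
		if !ok {
			return "", fmt.Errorf("not found: %s", resourceName)
		}

		// Assumption: the importer accepts "<app_name>:<deployment_group_name>".
		return fmt.Sprintf("%s:%s",
			rs.Primary.Attributes["app_name"],
			rs.Primary.Attributes["deployment_group_name"]), nil
	}
}
```

Because the function returns a `resource.ImportStateIdFunc` closure, each test step can reuse it against any resource address, which is why the same call appears verbatim in every `ImportState: true` step in this file.
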
@@ -492,12 +647,12 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_delete(t *testing.T) rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_alarm_configuration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -513,7 +668,7 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_delete(t *testing.T) "aws_codedeploy_deployment_group.foo_group", "alarm_configuration.0.ignore_poll_alarm_failure", "false"), ), }, - resource.TestStep{ + { Config: test_config_alarm_configuration_delete(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -521,6 +676,12 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_delete(t *testing.T) "aws_codedeploy_deployment_group.foo_group", "alarm_configuration.#", "0"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -530,12 +691,12 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_disable(t *testing.T rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_alarm_configuration_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -551,7 +712,7 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_disable(t *testing.T "aws_codedeploy_deployment_group.foo_group", "alarm_configuration.0.ignore_poll_alarm_failure", "false"), ), }, - resource.TestStep{ + { Config: test_config_alarm_configuration_disable(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -567,6 +728,12 @@ func TestAccAWSCodeDeployDeploymentGroup_alarmConfiguration_disable(t *testing.T "aws_codedeploy_deployment_group.foo_group", "alarm_configuration.0.ignore_poll_alarm_failure", "false"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -577,12 +744,12 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_default(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_deployment_style_default(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ 
-594,6 +761,12 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_default(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -603,12 +776,12 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_create(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_deployment_style_default(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -620,7 +793,7 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_create(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type"), ), }, - resource.TestStep{ + { Config: test_config_deployment_style_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -639,6 +812,12 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_create(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -648,12 +827,12 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_update(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_deployment_style_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -665,7 +844,7 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_update(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type", "BLUE_GREEN"), ), }, - resource.TestStep{ + { Config: test_config_deployment_style_update(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -677,6 +856,12 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_update(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type", "IN_PLACE"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -688,12 +873,12 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_delete(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_deployment_style_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -705,7 +890,7 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_delete(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type", "BLUE_GREEN"), ), }, - resource.TestStep{ + { Config: test_config_deployment_style_default(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -717,6 +902,12 @@ func TestAccAWSCodeDeployDeploymentGroup_deploymentStyle_delete(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -726,12 +917,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_create(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_load_balancer_info_default(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -739,7 +930,7 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_create(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.#", "0"), ), }, - resource.TestStep{ + { Config: test_config_load_balancer_info_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -751,6 +942,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_create(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -760,12 +957,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_update(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_load_balancer_info_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -777,7 +974,7 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_update(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), ), }, - 
resource.TestStep{ + { Config: test_config_load_balancer_info_update(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -789,23 +986,29 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_update(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.4206303396.name", "bar-elb"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } // Without "Computed: true" on load_balancer_info, removing the resource -// from configuration causes an error, becuase the remote resource still exists. +// from configuration causes an error, because the remote resource still exists. func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_delete(t *testing.T) { var group codedeploy.DeploymentGroupInfo rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_load_balancer_info_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -817,7 +1020,7 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_delete(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), ), }, - resource.TestStep{ + { Config: test_config_load_balancer_info_delete(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -825,6 +1028,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_delete(t *testing.T) { "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.#", "1"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -834,12 +1043,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_create rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_load_balancer_info_default(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -847,7 +1056,7 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_create "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.#", "0"), ), }, - resource.TestStep{ + { Config: test_config_load_balancer_info_target_group_info_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -860,6 +1069,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_create 
"aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.target_group_info.4178177480.name", "foo-tg"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -869,12 +1084,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_update rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_load_balancer_info_target_group_info_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -886,7 +1101,7 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_update "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.target_group_info.4178177480.name", "foo-tg"), ), }, - resource.TestStep{ + { Config: test_config_load_balancer_info_target_group_info_update(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -898,6 +1113,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_update "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.target_group_info.2940009368.name", "bar-tg"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -907,12 +1128,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_delete rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_load_balancer_info_target_group_info_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -924,7 +1145,7 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_delete "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.target_group_info.4178177480.name", "foo-tg"), ), }, - resource.TestStep{ + { Config: test_config_load_balancer_info_target_group_info_delete(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -932,6 +1153,12 @@ func TestAccAWSCodeDeployDeploymentGroup_loadBalancerInfo_targetGroupInfo_delete "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.#", "1"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -941,12 +1168,12 @@ func TestAccAWSCodeDeployDeploymentGroup_in_place_deployment_with_traffic_contro rName := acctest.RandString(5) - 
resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_deployment_style_default(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -960,8 +1187,46 @@ func TestAccAWSCodeDeployDeploymentGroup_in_place_deployment_with_traffic_contro "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.#", "0"), ), }, + { + Config: test_config_in_place_deployment_with_traffic_control_create(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "deployment_style.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_option", "WITH_TRAFFIC_CONTROL"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type", "IN_PLACE"), - resource.TestStep{ + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), + ), + }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSCodeDeployDeploymentGroup_in_place_deployment_with_traffic_control_update(t *testing.T) { + var group codedeploy.DeploymentGroupInfo + + rName := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, + Steps: []resource.TestStep{ + { Config: test_config_in_place_deployment_with_traffic_control_create(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -977,33 +1242,147 @@ func TestAccAWSCodeDeployDeploymentGroup_in_place_deployment_with_traffic_contro resource.TestCheckResourceAttr( "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.#", "1"), resource.TestCheckResourceAttr( - "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), + ), + }, + { + Config: test_config_in_place_deployment_with_traffic_control_update(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "deployment_style.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_option", "WITH_TRAFFIC_CONTROL"), + resource.TestCheckResourceAttr( + 
"aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type", "BLUE_GREEN"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.0.action_on_timeout", "CONTINUE_DEPLOYMENT"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.green_fleet_provisioning_option.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.green_fleet_provisioning_option.0.action", "DISCOVER_EXISTING"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.action", "KEEP_ALIVE"), + ), + }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_create(t *testing.T) { + var group codedeploy.DeploymentGroupInfo + + rName := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, + Steps: []resource.TestStep{ + { + Config: test_config_blue_green_deployment_config_default(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.#", "0"), + ), + }, + { + Config: test_config_blue_green_deployment_config_create_with_asg(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.#", "1"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.0.action_on_timeout", "STOP_DEPLOYMENT"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.0.wait_time_in_minutes", "60"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.green_fleet_provisioning_option.#", "1"), + resource.TestCheckResourceAttr( + 
"aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.green_fleet_provisioning_option.0.action", "COPY_AUTO_SCALING_GROUP"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.action", "TERMINATE"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.termination_wait_time_in_minutes", "120"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } -func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_create(t *testing.T) { +func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_update_with_asg(t *testing.T) { var group codedeploy.DeploymentGroupInfo rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: test_config_blue_green_deployment_config_default(rName), + { + Config: test_config_blue_green_deployment_config_create_with_asg(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), resource.TestCheckResourceAttr( - "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.#", "0"), + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.#", "1"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.0.action_on_timeout", "STOP_DEPLOYMENT"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.0.wait_time_in_minutes", "60"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.green_fleet_provisioning_option.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.green_fleet_provisioning_option.0.action", "COPY_AUTO_SCALING_GROUP"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.action", "TERMINATE"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.termination_wait_time_in_minutes", "120"), ), }, - resource.TestStep{ - Config: test_config_blue_green_deployment_config_create_with_asg(rName), + { + Config: 
test_config_blue_green_deployment_config_update_with_asg(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), resource.TestCheckResourceAttr( @@ -1024,9 +1403,7 @@ func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_create resource.TestCheckResourceAttr( "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.#", "1"), resource.TestCheckResourceAttr( - "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.action", "TERMINATE"), - resource.TestCheckResourceAttr( - "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.termination_wait_time_in_minutes", "120"), + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.action", "KEEP_ALIVE"), ), }, }, @@ -1038,12 +1415,12 @@ func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_update rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_blue_green_deployment_config_create_no_asg(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -1070,7 +1447,7 @@ func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_update "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.termination_wait_time_in_minutes", "120"), ), }, - resource.TestStep{ + { Config: test_config_blue_green_deployment_config_update_no_asg(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -1098,18 +1475,18 @@ func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_update } // Without "Computed: true" on blue_green_deployment_config, removing the resource -// from configuration causes an error, becuase the remote resource still exists. +// from configuration causes an error, because the remote resource still exists. 
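+//
+// Marking the block "Computed: true" lets Terraform accept whatever value the
+// API reports once the block is removed from configuration, which is why the
+// final step below still expects "blue_green_deployment_config.#" to be "1".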
func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_delete(t *testing.T) { var group codedeploy.DeploymentGroupInfo rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_blue_green_deployment_config_create_no_asg(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -1136,7 +1513,7 @@ func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_delete "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.termination_wait_time_in_minutes", "120"), ), }, - resource.TestStep{ + { Config: test_config_blue_green_deployment_config_default(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -1144,6 +1521,12 @@ func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeploymentConfiguration_delete "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.#", "1"), ), }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -1153,12 +1536,12 @@ func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeployment_complete(t *testing rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodeDeployDeploymentGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: test_config_blue_green_deployment_complete(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), @@ -1200,6 +1583,52 @@ func TestAccAWSCodeDeployDeploymentGroup_blueGreenDeployment_complete(t *testing "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.termination_wait_time_in_minutes", "0"), ), }, + { + Config: test_config_blue_green_deployment_complete_updated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodeDeployDeploymentGroupExists("aws_codedeploy_deployment_group.foo_group", &group), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "deployment_style.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_option", "WITH_TRAFFIC_CONTROL"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "deployment_style.0.deployment_type", "BLUE_GREEN"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "load_balancer_info.0.elb_info.2441772102.name", "foo-elb"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", 
"blue_green_deployment_config.#", "1"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.deployment_ready_option.0.action_on_timeout", "CONTINUE_DEPLOYMENT"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.green_fleet_provisioning_option.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.green_fleet_provisioning_option.0.action", "DISCOVER_EXISTING"), + + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.#", "1"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.action", "KEEP_ALIVE"), + resource.TestCheckResourceAttr( + "aws_codedeploy_deployment_group.foo_group", "blue_green_deployment_config.0.terminate_blue_instances_on_deployment_success.0.termination_wait_time_in_minutes", "0"), + ), + }, + { + ResourceName: "aws_codedeploy_deployment_group.foo_group", + ImportState: true, + ImportStateIdFunc: testAccAWSCodeDeployDeploymentGroupImportStateIdFunc("aws_codedeploy_deployment_group.foo_group"), + ImportStateVerify: true, + }, }, }) } @@ -1216,7 +1645,7 @@ func TestAWSCodeDeployDeploymentGroup_buildTriggerConfigs(t *testing.T) { } expected := []*codedeploy.TriggerConfig{ - &codedeploy.TriggerConfig{ + { TriggerEvents: []*string{ aws.String("DeploymentFailure"), }, @@ -1235,7 +1664,7 @@ func TestAWSCodeDeployDeploymentGroup_buildTriggerConfigs(t *testing.T) { func TestAWSCodeDeployDeploymentGroup_triggerConfigsToMap(t *testing.T) { input := []*codedeploy.TriggerConfig{ - &codedeploy.TriggerConfig{ + { TriggerEvents: []*string{ aws.String("DeploymentFailure"), aws.String("InstanceFailure"), @@ -1480,7 +1909,7 @@ func TestAWSCodeDeployDeploymentGroup_expandBlueGreenDeploymentConfig(t *testing "terminate_blue_instances_on_deployment_success": []interface{}{ map[string]interface{}{ - "action": "TERMINATE", + "action": "TERMINATE", "termination_wait_time_in_minutes": 90, }, }, @@ -1498,7 +1927,7 @@ func TestAWSCodeDeployDeploymentGroup_expandBlueGreenDeploymentConfig(t *testing }, TerminateBlueInstancesOnDeploymentSuccess: &codedeploy.BlueInstanceTerminationOption{ - Action: aws.String("TERMINATE"), + Action: aws.String("TERMINATE"), TerminationWaitTimeInMinutes: aws.Int64(90), }, } @@ -1523,28 +1952,28 @@ func TestAWSCodeDeployDeploymentGroup_flattenBlueGreenDeploymentConfig(t *testin }, TerminateBlueInstancesOnDeploymentSuccess: &codedeploy.BlueInstanceTerminationOption{ - Action: aws.String("KEEP_ALIVE"), + Action: aws.String("KEEP_ALIVE"), TerminationWaitTimeInMinutes: aws.Int64(90), }, } expected := map[string]interface{}{ "deployment_ready_option": []map[string]interface{}{ - map[string]interface{}{ + { "action_on_timeout": "STOP_DEPLOYMENT", "wait_time_in_minutes": 120, }, }, "green_fleet_provisioning_option": []map[string]interface{}{ - map[string]interface{}{ + { "action": "DISCOVER_EXISTING", }, }, "terminate_blue_instances_on_deployment_success": []map[string]interface{}{ - map[string]interface{}{ - "action": "KEEP_ALIVE", + { + "action": "KEEP_ALIVE", "termination_wait_time_in_minutes": 90, }, }, @@ -1786,7 +2215,83 @@ 
func testAccCheckAWSCodeDeployDeploymentGroupExists(name string, group *codedepl } } -func testAccAWSCodeDeployDeploymentGroup(rName string) string { +func TestAWSCodeDeployDeploymentGroup_handleCodeDeployApiError(t *testing.T) { + notAnAwsError := errors.New("Not an awserr") + invalidRoleException := awserr.New("InvalidRoleException", "Invalid role exception", nil) + invalidTriggerConfigExceptionNoMatch := awserr.New("InvalidTriggerConfigException", "Some other error message", nil) + invalidTriggerConfigExceptionMatch := awserr.New("InvalidTriggerConfigException", "Topic ARN foo is not valid", nil) + fakeAwsError := awserr.New("FakeAwsException", "Not a real AWS error", nil) + + testCases := []struct { + Input error + Expected *resource.RetryError + }{ + { + Input: nil, + Expected: nil, + }, + { + Input: notAnAwsError, + Expected: resource.NonRetryableError(notAnAwsError), + }, + { + Input: invalidRoleException, + Expected: resource.RetryableError(invalidRoleException), + }, + { + Input: invalidTriggerConfigExceptionNoMatch, + Expected: resource.NonRetryableError(invalidTriggerConfigExceptionNoMatch), + }, + { + Input: invalidTriggerConfigExceptionMatch, + Expected: resource.RetryableError(invalidTriggerConfigExceptionMatch), + }, + { + Input: fakeAwsError, + Expected: resource.NonRetryableError(fakeAwsError), + }, + } + + for _, tc := range testCases { + actual := handleCodeDeployApiError(tc.Input, "test") + if !reflect.DeepEqual(actual, tc.Expected) { + t.Fatalf(`handleCodeDeployApiError output is not correct. Expected: %v, Actual: %v.`, + tc.Expected, actual) + } + } +} + +func testAccAWSCodeDeployDeploymentGroupImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not found: %s", resourceName) + } + + return fmt.Sprintf("%s:%s", rs.Primary.Attributes["app_name"], rs.Primary.Attributes["deployment_group_name"]), nil + } +} + +func testAccAWSCodeDeployDeploymentGroup(rName string, tagGroup bool) string { + var tagGroupOrFilter string + if tagGroup { + tagGroupOrFilter = `ec2_tag_set { + ec2_tag_filter { + key = "filterkey" + type = "KEY_AND_VALUE" + value = "filtervalue" + } + } +` + } else { + tagGroupOrFilter = `ec2_tag_filter { + key = "filterkey" + type = "KEY_AND_VALUE" + value = "filtervalue" + } +` + } + return fmt.Sprintf(` resource "aws_codedeploy_app" "foo_app" { name = "foo_app_%s" @@ -1845,15 +2350,30 @@ resource "aws_codedeploy_deployment_group" "foo" { app_name = "${aws_codedeploy_app.foo_app.name}" deployment_group_name = "foo_%s" service_role_arn = "${aws_iam_role.foo_role.arn}" - ec2_tag_filter { + %s +}`, rName, rName, rName, rName, tagGroupOrFilter) +} + +func testAccAWSCodeDeployDeploymentGroupModified(rName string, tagGroup bool) string { + var tagGroupOrFilter string + if tagGroup { + tagGroupOrFilter = `ec2_tag_set { + ec2_tag_filter { + key = "filterkey" + type = "KEY_AND_VALUE" + value = "anotherfiltervalue" + } + } +` + } else { + tagGroupOrFilter = `ec2_tag_filter { key = "filterkey" type = "KEY_AND_VALUE" - value = "filtervalue" + value = "anotherfiltervalue" } -}`, rName, rName, rName, rName) -} +` + } -func testAccAWSCodeDeployDeploymentGroupModified(rName string) string { return fmt.Sprintf(` resource "aws_codedeploy_app" "foo_app" { name = "foo_app_%s" @@ -1912,12 +2432,8 @@ resource "aws_codedeploy_deployment_group" "foo" { app_name = "${aws_codedeploy_app.foo_app.name}" deployment_group_name = "bar_%s" 
service_role_arn = "${aws_iam_role.bar_role.arn}" - ec2_tag_filter { - key = "filterkey" - type = "KEY_AND_VALUE" - value = "anotherfiltervalue" - } -}`, rName, rName, rName, rName) + %s +}`, rName, rName, rName, rName, tagGroupOrFilter) } func testAccAWSCodeDeployDeploymentGroupOnPremiseTags(rName string) string { @@ -2453,6 +2969,43 @@ resource "aws_codedeploy_deployment_group" "foo_group" { }`, baseCodeDeployConfig(rName), rName) } +func test_config_in_place_deployment_with_traffic_control_update(rName string) string { + return fmt.Sprintf(` + + %s + +resource "aws_codedeploy_deployment_group" "foo_group" { + app_name = "${aws_codedeploy_app.foo_app.name}" + deployment_group_name = "foo-group-%s" + service_role_arn = "${aws_iam_role.foo_role.arn}" + + deployment_style { + deployment_option = "WITH_TRAFFIC_CONTROL" + deployment_type = "BLUE_GREEN" + } + + load_balancer_info { + elb_info { + name = "foo-elb" + } + } + + blue_green_deployment_config { + deployment_ready_option { + action_on_timeout = "CONTINUE_DEPLOYMENT" + } + + green_fleet_provisioning_option { + action = "DISCOVER_EXISTING" + } + + terminate_blue_instances_on_deployment_success { + action = "KEEP_ALIVE" + } + } +}`, baseCodeDeployConfig(rName), rName) +} + func test_config_blue_green_deployment_config_default(rName string) string { return fmt.Sprintf(` @@ -2520,6 +3073,60 @@ resource "aws_codedeploy_deployment_group" "foo_group" { }`, baseCodeDeployConfig(rName), rName, rName) } +func test_config_blue_green_deployment_config_update_with_asg(rName string) string { + return fmt.Sprintf(` + + %s + +resource "aws_launch_configuration" "foo_lc" { + image_id = "ami-21f78e11" + instance_type = "t1.micro" + "name_prefix" = "foo-lc-" + + lifecycle { + create_before_destroy = true + } +} + +resource "aws_autoscaling_group" "foo_asg" { + name = "foo-asg-%s" + max_size = 2 + min_size = 0 + desired_capacity = 1 + + availability_zones = ["us-west-2a"] + + launch_configuration = "${aws_launch_configuration.foo_lc.name}" + + lifecycle { + create_before_destroy = true + } +} + +resource "aws_codedeploy_deployment_group" "foo_group" { + app_name = "${aws_codedeploy_app.foo_app.name}" + deployment_group_name = "foo-group-%s" + service_role_arn = "${aws_iam_role.foo_role.arn}" + + autoscaling_groups = ["${aws_autoscaling_group.foo_asg.name}"] + + blue_green_deployment_config { + deployment_ready_option { + action_on_timeout = "STOP_DEPLOYMENT" + wait_time_in_minutes = 60 + } + + green_fleet_provisioning_option { + action = "COPY_AUTO_SCALING_GROUP" + } + + terminate_blue_instances_on_deployment_success { + action = "KEEP_ALIVE" + } + } +}`, baseCodeDeployConfig(rName), rName, rName) +} + func test_config_blue_green_deployment_config_create_no_asg(rName string) string { return fmt.Sprintf(` @@ -2611,3 +3218,40 @@ resource "aws_codedeploy_deployment_group" "foo_group" { } }`, baseCodeDeployConfig(rName), rName) } + +func test_config_blue_green_deployment_complete_updated(rName string) string { + return fmt.Sprintf(` + + %s + +resource "aws_codedeploy_deployment_group" "foo_group" { + app_name = "${aws_codedeploy_app.foo_app.name}" + deployment_group_name = "foo-group-%s" + service_role_arn = "${aws_iam_role.foo_role.arn}" + + deployment_style { + deployment_option = "WITH_TRAFFIC_CONTROL" + deployment_type = "BLUE_GREEN" + } + + load_balancer_info { + elb_info { + name = "foo-elb" + } + } + + blue_green_deployment_config { + deployment_ready_option { + action_on_timeout = "CONTINUE_DEPLOYMENT" + } + + green_fleet_provisioning_option { + 
action = "DISCOVER_EXISTING" + } + + terminate_blue_instances_on_deployment_success { + action = "KEEP_ALIVE" + } + } +}`, baseCodeDeployConfig(rName), rName) +} diff --git a/aws/resource_aws_codepipeline.go b/aws/resource_aws_codepipeline.go index ff35e5b8f13..654ef12c4ba 100644 --- a/aws/resource_aws_codepipeline.go +++ b/aws/resource_aws_codepipeline.go @@ -165,15 +165,6 @@ func resourceAwsCodePipeline() *schema.Resource { } } -func validateAwsCodePipelineStageActionConfiguration(v interface{}, k string) (ws []string, errors []error) { - for k := range v.(map[string]interface{}) { - if k == "OAuthToken" { - errors = append(errors, fmt.Errorf("CodePipeline: OAuthToken should be set as environment variable 'GITHUB_TOKEN'")) - } - } - return -} - func resourceAwsCodePipelineCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).codepipelineconn params := &codepipeline.CreatePipelineInput{ @@ -193,10 +184,10 @@ func resourceAwsCodePipelineCreate(d *schema.ResourceData, meta interface{}) err return resource.NonRetryableError(err) }) if err != nil { - return fmt.Errorf("[ERROR] Error creating CodePipeline: %s", err) + return fmt.Errorf("Error creating CodePipeline: %s", err) } if resp.Pipeline == nil { - return fmt.Errorf("[ERROR] Error creating CodePipeline: invalid response from AWS") + return fmt.Errorf("Error creating CodePipeline: invalid response from AWS") } d.SetId(*resp.Pipeline.Name) @@ -440,7 +431,7 @@ func resourceAwsCodePipelineRead(d *schema.ResourceData, meta interface{}) error d.SetId("") return nil } - return fmt.Errorf("[ERROR] Error retreiving Pipeline: %q", err) + return fmt.Errorf("Error retreiving Pipeline: %q", err) } metadata := resp.Metadata pipeline := resp.Pipeline @@ -488,7 +479,5 @@ func resourceAwsCodePipelineDelete(d *schema.ResourceData, meta interface{}) err return err } - d.SetId("") - return nil } diff --git a/aws/resource_aws_codepipeline_test.go b/aws/resource_aws_codepipeline_test.go index bd83c6f47a4..762637c696a 100644 --- a/aws/resource_aws_codepipeline_test.go +++ b/aws/resource_aws_codepipeline_test.go @@ -13,6 +13,31 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSCodePipeline_Import_basic(t *testing.T) { + if os.Getenv("GITHUB_TOKEN") == "" { + t.Skip("Environment variable GITHUB_TOKEN is not set") + } + + name := acctest.RandString(10) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodePipelineDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodePipelineConfig_basic(name), + }, + + { + ResourceName: "aws_codepipeline.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSCodePipeline_basic(t *testing.T) { if os.Getenv("GITHUB_TOKEN") == "" { t.Skip("Environment variable GITHUB_TOKEN is not set") @@ -20,7 +45,7 @@ func TestAccAWSCodePipeline_basic(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodePipelineDestroy, @@ -56,7 +81,7 @@ func TestAccAWSCodePipeline_emptyArtifacts(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodePipelineDestroy, @@ -90,7 +115,7 @@ func 
TestAccAWSCodePipeline_deployWithServiceRole(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCodePipelineDestroy, diff --git a/aws/resource_aws_codepipeline_webhook.go b/aws/resource_aws_codepipeline_webhook.go new file mode 100644 index 00000000000..99357ca1bba --- /dev/null +++ b/aws/resource_aws_codepipeline_webhook.go @@ -0,0 +1,287 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/codepipeline" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsCodePipelineWebhook() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsCodePipelineWebhookCreate, + Read: resourceAwsCodePipelineWebhookRead, + Update: nil, + Delete: resourceAwsCodePipelineWebhookDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "authentication": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + codepipeline.WebhookAuthenticationTypeGithubHmac, + codepipeline.WebhookAuthenticationTypeIp, + codepipeline.WebhookAuthenticationTypeUnauthenticated, + }, false), + }, + "authentication_configuration": { + Type: schema.TypeList, + MaxItems: 1, + MinItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "secret_token": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + "allowed_ip_range": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.CIDRNetwork(0, 32), + }, + }, + }, + }, + "filter": { + Type: schema.TypeSet, + ForceNew: true, + Required: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "json_path": { + Type: schema.TypeString, + Required: true, + }, + + "match_equals": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "url": { + Type: schema.TypeString, + Computed: true, + }, + + "target_action": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + "target_pipeline": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + }, + } +} + +func extractCodePipelineWebhookRules(filters *schema.Set) []*codepipeline.WebhookFilterRule { + var rules []*codepipeline.WebhookFilterRule + + for _, f := range filters.List() { + r := f.(map[string]interface{}) + filter := codepipeline.WebhookFilterRule{ + JsonPath: aws.String(r["json_path"].(string)), + MatchEquals: aws.String(r["match_equals"].(string)), + } + + rules = append(rules, &filter) + } + + return rules +} + +func extractCodePipelineWebhookAuthConfig(authType string, authConfig map[string]interface{}) *codepipeline.WebhookAuthConfiguration { + var conf codepipeline.WebhookAuthConfiguration + switch authType { + case codepipeline.WebhookAuthenticationTypeIp: + conf.AllowedIPRange = aws.String(authConfig["allowed_ip_range"].(string)) + break + case codepipeline.WebhookAuthenticationTypeGithubHmac: + conf.SecretToken = aws.String(authConfig["secret_token"].(string)) + break + } + + return &conf +} + +func resourceAwsCodePipelineWebhookCreate(d 
*schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).codepipelineconn + authType := d.Get("authentication").(string) + + var authConfig map[string]interface{} + if v, ok := d.GetOk("authentication_configuration"); ok { + l := v.([]interface{}) + authConfig = l[0].(map[string]interface{}) + } + + request := &codepipeline.PutWebhookInput{ + Webhook: &codepipeline.WebhookDefinition{ + Authentication: aws.String(authType), + Filters: extractCodePipelineWebhookRules(d.Get("filter").(*schema.Set)), + Name: aws.String(d.Get("name").(string)), + TargetAction: aws.String(d.Get("target_action").(string)), + TargetPipeline: aws.String(d.Get("target_pipeline").(string)), + AuthenticationConfiguration: extractCodePipelineWebhookAuthConfig(authType, authConfig), + }, + } + + webhook, err := conn.PutWebhook(request) + if err != nil { + return fmt.Errorf("Error creating webhook: %s", err) + } + + d.SetId(aws.StringValue(webhook.Webhook.Arn)) + + return resourceAwsCodePipelineWebhookRead(d, meta) +} + +func getCodePipelineWebhook(conn *codepipeline.CodePipeline, arn string) (*codepipeline.ListWebhookItem, error) { + var nextToken string + + for { + input := &codepipeline.ListWebhooksInput{ + MaxResults: aws.Int64(int64(60)), + } + if nextToken != "" { + input.NextToken = aws.String(nextToken) + } + + out, err := conn.ListWebhooks(input) + if err != nil { + return nil, err + } + + for _, w := range out.Webhooks { + if arn == aws.StringValue(w.Arn) { + return w, nil + } + } + + if out.NextToken == nil { + break + } + + nextToken = aws.StringValue(out.NextToken) + } + + return nil, &resource.NotFoundError{ + Message: fmt.Sprintf("No webhook with ARN %s found", arn), + } +} + +func flattenCodePipelineWebhookFilters(filters []*codepipeline.WebhookFilterRule) []interface{} { + results := []interface{}{} + for _, filter := range filters { + f := map[string]interface{}{ + "json_path": aws.StringValue(filter.JsonPath), + "match_equals": aws.StringValue(filter.MatchEquals), + } + results = append(results, f) + } + + return results +} + +func flattenCodePipelineWebhookAuthenticationConfiguration(authConfig *codepipeline.WebhookAuthConfiguration) []interface{} { + conf := map[string]interface{}{} + if authConfig.AllowedIPRange != nil { + conf["allowed_ip_range"] = aws.StringValue(authConfig.AllowedIPRange) + } + + if authConfig.SecretToken != nil { + conf["secret_token"] = aws.StringValue(authConfig.SecretToken) + } + + var results []interface{} + if len(conf) > 0 { + results = append(results, conf) + } + + return results +} + +func resourceAwsCodePipelineWebhookRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).codepipelineconn + + arn := d.Id() + webhook, err := getCodePipelineWebhook(conn, arn) + + if isResourceNotFoundError(err) { + log.Printf("[WARN] CodePipeline Webhook (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error getting CodePipeline Webhook (%s): %s", d.Id(), err) + } + + name := aws.StringValue(webhook.Definition.Name) + if name == "" { + return fmt.Errorf("Webhook not found: %s", arn) + } + + d.Set("name", name) + d.Set("url", aws.StringValue(webhook.Url)) + + if err := d.Set("target_action", aws.StringValue(webhook.Definition.TargetAction)); err != nil { + return err + } + + if err := d.Set("target_pipeline", aws.StringValue(webhook.Definition.TargetPipeline)); err != nil { + return err + } + + if err := d.Set("authentication", 
aws.StringValue(webhook.Definition.Authentication)); err != nil { + return err + } + + if err := d.Set("authentication_configuration", flattenCodePipelineWebhookAuthenticationConfiguration(webhook.Definition.AuthenticationConfiguration)); err != nil { + return fmt.Errorf("error setting authentication_configuration: %s", err) + } + + if err := d.Set("filter", flattenCodePipelineWebhookFilters(webhook.Definition.Filters)); err != nil { + return fmt.Errorf("error setting filter: %s", err) + } + + return nil +} + +func resourceAwsCodePipelineWebhookDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).codepipelineconn + name := d.Get("name").(string) + + input := codepipeline.DeleteWebhookInput{ + Name: &name, + } + _, err := conn.DeleteWebhook(&input) + + if err != nil { + return fmt.Errorf("Could not delete webhook: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_codepipeline_webhook_test.go b/aws/resource_aws_codepipeline_webhook_test.go new file mode 100644 index 00000000000..33d317ac664 --- /dev/null +++ b/aws/resource_aws_codepipeline_webhook_test.go @@ -0,0 +1,300 @@ +package aws + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSCodePipelineWebhook_basic(t *testing.T) { + if os.Getenv("GITHUB_TOKEN") == "" { + t.Skip("Environment variable GITHUB_TOKEN is not set") + } + + resourceName := "aws_codepipeline_webhook.bar" + name := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodePipelineDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodePipelineWebhookConfig_basic(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodePipelineExists("aws_codepipeline.bar"), + testAccCheckAWSCodePipelineWebhookExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "id"), + resource.TestCheckResourceAttrSet(resourceName, "url"), + resource.TestCheckResourceAttr(resourceName, "authentication_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "authentication_configuration.0.secret_token", "super-secret"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSCodePipelineWebhook_ipAuth(t *testing.T) { + if os.Getenv("GITHUB_TOKEN") == "" { + t.Skip("Environment variable GITHUB_TOKEN is not set") + } + + resourceName := "aws_codepipeline_webhook.bar" + name := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodePipelineDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodePipelineWebhookConfig_ipAuth(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodePipelineExists("aws_codepipeline.bar"), + testAccCheckAWSCodePipelineWebhookExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "id"), + resource.TestCheckResourceAttrSet(resourceName, "url"), + resource.TestCheckResourceAttr(resourceName, "authentication_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "authentication_configuration.0.allowed_ip_range", "0.0.0.0/0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func 
TestAccAWSCodePipelineWebhook_unauthenticated(t *testing.T) { + if os.Getenv("GITHUB_TOKEN") == "" { + t.Skip("Environment variable GITHUB_TOKEN is not set") + } + + resourceName := "aws_codepipeline_webhook.bar" + name := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCodePipelineDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCodePipelineWebhookConfig_unauthenticated(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSCodePipelineExists("aws_codepipeline.bar"), + testAccCheckAWSCodePipelineWebhookExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "id"), + resource.TestCheckResourceAttrSet(resourceName, "url"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSCodePipelineWebhookExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No webhook ARN is set as ID") + } + + conn := testAccProvider.Meta().(*AWSClient).codepipelineconn + + _, err := getCodePipelineWebhook(conn, rs.Primary.ID) + if err != nil { + return err + } + + return nil + } +} + +func testAccAWSCodePipelineWebhookConfig_basic(rName string) string { + return testAccAWSCodePipelineWebhookConfig_codePipeline(rName, fmt.Sprintf(` +resource "aws_codepipeline_webhook" "bar" { + name = "test-webhook-%s" + authentication = "GITHUB_HMAC" + target_action = "Source" + target_pipeline = "${aws_codepipeline.bar.name}" + + authentication_configuration { + secret_token = "super-secret" + } + + filter { + json_path = "$.ref" + match_equals = "refs/head/{Branch}" + } +} +`, rName)) +} + +func testAccAWSCodePipelineWebhookConfig_ipAuth(rName string) string { + return testAccAWSCodePipelineWebhookConfig_codePipeline(rName, fmt.Sprintf(` +resource "aws_codepipeline_webhook" "bar" { + name = "test-webhook-%s" + authentication = "IP" + target_action = "Source" + target_pipeline = "${aws_codepipeline.bar.name}" + + authentication_configuration { + allowed_ip_range = "0.0.0.0/0" + } + + filter { + json_path = "$.ref" + match_equals = "refs/head/{Branch}" + } +} +`, rName)) +} + +func testAccAWSCodePipelineWebhookConfig_unauthenticated(rName string) string { + return testAccAWSCodePipelineWebhookConfig_codePipeline(rName, fmt.Sprintf(` +resource "aws_codepipeline_webhook" "bar" { + name = "test-webhook-%s" + authentication = "UNAUTHENTICATED" + target_action = "Source" + target_pipeline = "${aws_codepipeline.bar.name}" + + filter { + json_path = "$.ref" + match_equals = "refs/head/{Branch}" + } +} +`, rName)) +} + +func testAccAWSCodePipelineWebhookConfig_codePipeline(rName string, webhook string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "foo" { + bucket = "tf-test-pipeline-%s" + acl = "private" +} + +resource "aws_iam_role" "codepipeline_role" { + name = "codepipeline-role-%s" + + assume_role_policy = < 0 { + if errors := validateCognitoRoles(d.Get("roles").(map[string]interface{})); len(errors) > 0 { return fmt.Errorf("Error validating Roles: %v", errors) } @@ -168,11 +165,11 @@ func resourceAwsCognitoIdentityPoolRolesAttachmentRead(d *schema.ResourceData, m } if err := d.Set("roles", flattenCognitoIdentityPoolRoles(ip.Roles)); err != nil { - return fmt.Errorf("[DEBUG] Error setting roles error: %#v", err) + 
return fmt.Errorf("Error setting roles error: %#v", err) } if err := d.Set("role_mapping", flattenCognitoIdentityPoolRoleMappingsAttachment(ip.RoleMappings)); err != nil { - return fmt.Errorf("[DEBUG] Error setting role mappings error: %#v", err) + return fmt.Errorf("Error setting role mappings error: %#v", err) } return nil @@ -183,7 +180,7 @@ func resourceAwsCognitoIdentityPoolRolesAttachmentUpdate(d *schema.ResourceData, // Validates role keys to be either authenticated or unauthenticated, // since ValidateFunc validates only the value not the key. - if errors := validateCognitoRoles(d.Get("roles").(map[string]interface{}), "roles"); len(errors) > 0 { + if errors := validateCognitoRoles(d.Get("roles").(map[string]interface{})); len(errors) > 0 { return fmt.Errorf("Error validating Roles: %v", errors) } @@ -263,35 +260,3 @@ func validateRoleMappings(roleMappings []interface{}) []error { return errors } - -func cognitoRoleMappingHash(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - buf.WriteString(fmt.Sprintf("%s-", m["identity_provider"].(string))) - - return hashcode.String(buf.String()) -} - -func cognitoRoleMappingValueHash(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - buf.WriteString(fmt.Sprintf("%s-", m["type"].(string))) - if d, ok := m["ambiguous_role_resolution"]; ok { - buf.WriteString(fmt.Sprintf("%s-", d.(string))) - } - - return hashcode.String(buf.String()) -} - -func cognitoRoleMappingRulesConfigurationHash(v interface{}) int { - var buf bytes.Buffer - for _, rule := range v.([]interface{}) { - r := rule.(map[string]interface{}) - buf.WriteString(fmt.Sprintf("%s-", r["claim"].(string))) - buf.WriteString(fmt.Sprintf("%s-", r["match_type"].(string))) - buf.WriteString(fmt.Sprintf("%s-", r["role_arn"].(string))) - buf.WriteString(fmt.Sprintf("%s-", r["value"].(string))) - } - - return hashcode.String(buf.String()) -} diff --git a/aws/resource_aws_cognito_identity_pool_roles_attachment_test.go b/aws/resource_aws_cognito_identity_pool_roles_attachment_test.go index 8d0a9ba5fee..6a98cb6597e 100644 --- a/aws/resource_aws_cognito_identity_pool_roles_attachment_test.go +++ b/aws/resource_aws_cognito_identity_pool_roles_attachment_test.go @@ -18,7 +18,7 @@ func TestAccAWSCognitoIdentityPoolRolesAttachment_basic(t *testing.T) { name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) updatedName := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCognitoIdentityPoolRolesAttachmentDestroy, @@ -46,7 +46,7 @@ func TestAccAWSCognitoIdentityPoolRolesAttachment_basic(t *testing.T) { func TestAccAWSCognitoIdentityPoolRolesAttachment_roleMappings(t *testing.T) { name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCognitoIdentityPoolRolesAttachmentDestroy, @@ -94,7 +94,7 @@ func TestAccAWSCognitoIdentityPoolRolesAttachment_roleMappings(t *testing.T) { func TestAccAWSCognitoIdentityPoolRolesAttachment_roleMappingsWithAmbiguousRoleResolutionError(t *testing.T) { name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, 
resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
 		CheckDestroy: testAccCheckAWSCognitoIdentityPoolRolesAttachmentDestroy,
@@ -110,7 +110,7 @@ func TestAccAWSCognitoIdentityPoolRolesAttachment_roleMappingsWithAmbiguousRoleR
 func TestAccAWSCognitoIdentityPoolRolesAttachment_roleMappingsWithRulesTypeError(t *testing.T) {
 	name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
 
-	resource.Test(t, resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
 		CheckDestroy: testAccCheckAWSCognitoIdentityPoolRolesAttachmentDestroy,
@@ -126,7 +126,7 @@ func TestAccAWSCognitoIdentityPoolRolesAttachment_roleMappingsWithRulesTypeError
 func TestAccAWSCognitoIdentityPoolRolesAttachment_roleMappingsWithTokenTypeError(t *testing.T) {
 	name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
 
-	resource.Test(t, resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
 		CheckDestroy: testAccCheckAWSCognitoIdentityPoolRolesAttachmentDestroy,
diff --git a/aws/resource_aws_cognito_identity_pool_test.go b/aws/resource_aws_cognito_identity_pool_test.go
index c7cf0c34617..685c234f9fd 100644
--- a/aws/resource_aws_cognito_identity_pool_test.go
+++ b/aws/resource_aws_cognito_identity_pool_test.go
@@ -3,6 +3,7 @@ package aws
 import (
 	"errors"
 	"fmt"
+	"regexp"
 	"testing"
 
 	"github.com/aws/aws-sdk-go/aws"
@@ -13,11 +14,33 @@ import (
 	"github.com/hashicorp/terraform/terraform"
 )
 
+func TestAccAWSCognitoIdentityPool_importBasic(t *testing.T) {
+	resourceName := "aws_cognito_identity_pool.main"
+	rName := acctest.RandString(10)
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccAWSCognitoIdentityPoolConfig_basic(rName),
+			},
+
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
 func TestAccAWSCognitoIdentityPool_basic(t *testing.T) {
 	name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
 	updatedName := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
 
-	resource.Test(t, resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
 		Providers:    testAccProviders,
 		CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
@@ -27,6 +50,8 @@ func TestAccAWSCognitoIdentityPool_basic(t *testing.T) {
 			Check: resource.ComposeAggregateTestCheckFunc(
 				testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
 				resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
+				resource.TestMatchResourceAttr("aws_cognito_identity_pool.main", "arn",
+					regexp.MustCompile("^arn:aws:cognito-identity:[^:]+:[0-9]{12}:identitypool/[^:]+:([0-9a-f]){8}-([0-9a-f]){4}-([0-9a-f]){4}-([0-9a-f]){4}-([0-9a-f]){12}$")),
 				resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "allow_unauthenticated_identities", "false"),
 			),
 		},
@@ -44,7 +69,7 @@ func TestAccAWSCognitoIdentityPool_basic(t *testing.T) {
 func TestAccAWSCognitoIdentityPool_supportedLoginProviders(t *testing.T) {
 	name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, 
acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, @@ -80,7 +105,7 @@ func TestAccAWSCognitoIdentityPool_supportedLoginProviders(t *testing.T) { func TestAccAWSCognitoIdentityPool_openidConnectProviderArns(t *testing.T) { name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, @@ -115,7 +140,7 @@ func TestAccAWSCognitoIdentityPool_openidConnectProviderArns(t *testing.T) { func TestAccAWSCognitoIdentityPool_samlProviderArns(t *testing.T) { name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, @@ -151,7 +176,7 @@ func TestAccAWSCognitoIdentityPool_samlProviderArns(t *testing.T) { func TestAccAWSCognitoIdentityPool_cognitoIdentityProviders(t *testing.T) { name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, diff --git a/aws/resource_aws_cognito_identity_provider.go b/aws/resource_aws_cognito_identity_provider.go new file mode 100644 index 00000000000..57b25f8ca96 --- /dev/null +++ b/aws/resource_aws_cognito_identity_provider.go @@ -0,0 +1,209 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsCognitoIdentityProvider() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsCognitoIdentityProviderCreate, + Read: resourceAwsCognitoIdentityProviderRead, + Update: resourceAwsCognitoIdentityProviderUpdate, + Delete: resourceAwsCognitoIdentityProviderDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "attribute_mapping": { + Type: schema.TypeMap, + Optional: true, + }, + + "idp_identifiers": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + + "provider_details": { + Type: schema.TypeMap, + Required: true, + }, + + "provider_name": { + Type: schema.TypeString, + Required: true, + }, + + "provider_type": { + Type: schema.TypeString, + Required: true, + }, + + "user_pool_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsCognitoIdentityProviderCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + log.Print("[DEBUG] Creating Cognito Identity Provider") + + providerName := d.Get("provider_name").(string) + userPoolID := d.Get("user_pool_id").(string) + params := &cognitoidentityprovider.CreateIdentityProviderInput{ + ProviderName: aws.String(providerName), + ProviderType: aws.String(d.Get("provider_type").(string)), + UserPoolId: aws.String(userPoolID), + } + + 
if v, ok := d.GetOk("attribute_mapping"); ok { + params.AttributeMapping = stringMapToPointers(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("provider_details"); ok { + params.ProviderDetails = stringMapToPointers(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("idp_identifiers"); ok { + params.IdpIdentifiers = expandStringList(v.([]interface{})) + } + + _, err := conn.CreateIdentityProvider(params) + if err != nil { + return fmt.Errorf("Error creating Cognito Identity Provider: %s", err) + } + + d.SetId(fmt.Sprintf("%s:%s", userPoolID, providerName)) + + return resourceAwsCognitoIdentityProviderRead(d, meta) +} + +func resourceAwsCognitoIdentityProviderRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + log.Printf("[DEBUG] Reading Cognito Identity Provider: %s", d.Id()) + + userPoolID, providerName, err := decodeCognitoIdentityProviderID(d.Id()) + if err != nil { + return err + } + + ret, err := conn.DescribeIdentityProvider(&cognitoidentityprovider.DescribeIdentityProviderInput{ + ProviderName: aws.String(providerName), + UserPoolId: aws.String(userPoolID), + }) + + if err != nil { + if isAWSErr(err, cognitoidentityprovider.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] Cognito Identity Provider %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + if ret == nil || ret.IdentityProvider == nil { + log.Printf("[WARN] Cognito Identity Provider %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + ip := ret.IdentityProvider + d.Set("provider_name", ip.ProviderName) + d.Set("provider_type", ip.ProviderType) + d.Set("user_pool_id", ip.UserPoolId) + + if err := d.Set("attribute_mapping", aws.StringValueMap(ip.AttributeMapping)); err != nil { + return fmt.Errorf("error setting attribute_mapping error: %s", err) + } + + if err := d.Set("provider_details", aws.StringValueMap(ip.ProviderDetails)); err != nil { + return fmt.Errorf("error setting provider_details error: %s", err) + } + + if err := d.Set("idp_identifiers", flattenStringList(ip.IdpIdentifiers)); err != nil { + return fmt.Errorf("error setting idp_identifiers error: %s", err) + } + + return nil +} + +func resourceAwsCognitoIdentityProviderUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + log.Print("[DEBUG] Updating Cognito Identity Provider") + + userPoolID, providerName, err := decodeCognitoIdentityProviderID(d.Id()) + if err != nil { + return err + } + + params := &cognitoidentityprovider.UpdateIdentityProviderInput{ + ProviderName: aws.String(providerName), + UserPoolId: aws.String(userPoolID), + } + + if d.HasChange("attribute_mapping") { + params.AttributeMapping = stringMapToPointers(d.Get("attribute_mapping").(map[string]interface{})) + } + + if d.HasChange("provider_details") { + params.ProviderDetails = stringMapToPointers(d.Get("provider_details").(map[string]interface{})) + } + + if d.HasChange("idp_identifiers") { + params.IdpIdentifiers = expandStringList(d.Get("supported_login_providers").([]interface{})) + } + + _, err = conn.UpdateIdentityProvider(params) + if err != nil { + return fmt.Errorf("Error updating Cognito Identity Provider: %s", err) + } + + return resourceAwsCognitoIdentityProviderRead(d, meta) +} + +func resourceAwsCognitoIdentityProviderDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + log.Printf("[DEBUG] Deleting Cognito Identity Provider: %s", d.Id()) + + 
userPoolID, providerName, err := decodeCognitoIdentityProviderID(d.Id()) + if err != nil { + return err + } + + _, err = conn.DeleteIdentityProvider(&cognitoidentityprovider.DeleteIdentityProviderInput{ + ProviderName: aws.String(providerName), + UserPoolId: aws.String(userPoolID), + }) + + if err != nil { + if isAWSErr(err, cognitoidentityprovider.ErrCodeResourceNotFoundException, "") { + return nil + } + return err + } + + return nil +} + +func decodeCognitoIdentityProviderID(id string) (string, string, error) { + idParts := strings.Split(id, ":") + if len(idParts) != 2 { + return "", "", fmt.Errorf("expected ID in format UserPoolID:ProviderName, received: %s", id) + } + return idParts[0], idParts[1], nil +} diff --git a/aws/resource_aws_cognito_identity_provider_test.go b/aws/resource_aws_cognito_identity_provider_test.go new file mode 100644 index 00000000000..84aab1794dc --- /dev/null +++ b/aws/resource_aws_cognito_identity_provider_test.go @@ -0,0 +1,144 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSCognitoIdentityProvider_basic(t *testing.T) { + var identityProvider cognitoidentityprovider.IdentityProviderType + resourceName := "aws_cognito_identity_provider.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCognitoIdentityProviderDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoIdentityProviderConfig_basic(), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityProviderExists(resourceName, &identityProvider), + resource.TestCheckResourceAttr(resourceName, "provider_details.%", "9"), + resource.TestCheckResourceAttr(resourceName, "provider_details.authorize_scopes", "email"), + resource.TestCheckResourceAttr(resourceName, "provider_details.authorize_url", "https://accounts.google.com/o/oauth2/v2/auth"), + resource.TestCheckResourceAttr(resourceName, "provider_details.client_id", "test-url.apps.googleusercontent.com"), + resource.TestCheckResourceAttr(resourceName, "provider_details.client_secret", "client_secret"), + resource.TestCheckResourceAttr(resourceName, "provider_details.attributes_url", "https://people.googleapis.com/v1/people/me?personFields="), + resource.TestCheckResourceAttr(resourceName, "provider_details.attributes_url_add_attributes", "true"), + resource.TestCheckResourceAttr(resourceName, "provider_details.token_request_method", "POST"), + resource.TestCheckResourceAttr(resourceName, "provider_details.token_url", "https://www.googleapis.com/oauth2/v4/token"), + resource.TestCheckResourceAttr(resourceName, "provider_details.oidc_issuer", "https://accounts.google.com"), + resource.TestCheckResourceAttr(resourceName, "provider_name", "Google"), + resource.TestCheckResourceAttr(resourceName, "provider_type", "Google"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSCognitoIdentityProviderDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).cognitoidpconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_cognito_identity_provider" { + continue + } + + userPoolID, providerName, err := decodeCognitoIdentityProviderID(rs.Primary.ID) + if err 
!= nil { + return err + } + + _, err = conn.DescribeIdentityProvider(&cognitoidentityprovider.DescribeIdentityProviderInput{ + ProviderName: aws.String(providerName), + UserPoolId: aws.String(userPoolID), + }) + + if err != nil { + if isAWSErr(err, cognitoidentityprovider.ErrCodeResourceNotFoundException, "") { + return nil + } + return err + } + } + + return nil +} + +func testAccCheckAWSCognitoIdentityProviderExists(resourceName string, identityProvider *cognitoidentityprovider.IdentityProviderType) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + userPoolID, providerName, err := decodeCognitoIdentityProviderID(rs.Primary.ID) + if err != nil { + return err + } + + conn := testAccProvider.Meta().(*AWSClient).cognitoidpconn + + input := &cognitoidentityprovider.DescribeIdentityProviderInput{ + ProviderName: aws.String(providerName), + UserPoolId: aws.String(userPoolID), + } + + output, err := conn.DescribeIdentityProvider(input) + + if err != nil { + return err + } + + if output == nil || output.IdentityProvider == nil { + return fmt.Errorf("Cognito Identity Provider %q does not exist", rs.Primary.ID) + } + + *identityProvider = *output.IdentityProvider + + return nil + } +} + +func testAccAWSCognitoIdentityProviderConfig_basic() string { + return ` + +resource "aws_cognito_user_pool" "test" { + name = "tfmytestpool" + auto_verified_attributes = ["email"] +} + +resource "aws_cognito_identity_provider" "test" { + user_pool_id = "${aws_cognito_user_pool.test.id}" + provider_name = "Google" + provider_type = "Google" + + provider_details { + attributes_url = "https://people.googleapis.com/v1/people/me?personFields=" + attributes_url_add_attributes = "true" + authorize_scopes = "email" + authorize_url = "https://accounts.google.com/o/oauth2/v2/auth" + client_id = "test-url.apps.googleusercontent.com" + client_secret = "client_secret" + oidc_issuer = "https://accounts.google.com" + token_request_method = "POST" + token_url = "https://www.googleapis.com/oauth2/v4/token" + } + + attribute_mapping { + email = "email" + username = "sub" + } +} +` +} diff --git a/aws/resource_aws_cognito_resource_server.go b/aws/resource_aws_cognito_resource_server.go new file mode 100644 index 00000000000..5870d99847b --- /dev/null +++ b/aws/resource_aws_cognito_resource_server.go @@ -0,0 +1,211 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsCognitoResourceServer() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsCognitoResourceServerCreate, + Read: resourceAwsCognitoResourceServerRead, + Update: resourceAwsCognitoResourceServerUpdate, + Delete: resourceAwsCognitoResourceServerDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + // https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateResourceServer.html + Schema: map[string]*schema.Schema{ + "identifier": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "scope": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 25, + Elem: &schema.Resource{ + Schema: 
map[string]*schema.Schema{ + "scope_description": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + "scope_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateCognitoResourceServerScopeName, + }, + }, + }, + }, + "user_pool_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "scope_identifiers": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + } +} + +func resourceAwsCognitoResourceServerCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + + identifier := d.Get("identifier").(string) + userPoolID := d.Get("user_pool_id").(string) + + params := &cognitoidentityprovider.CreateResourceServerInput{ + Identifier: aws.String(identifier), + Name: aws.String(d.Get("name").(string)), + UserPoolId: aws.String(userPoolID), + } + + if v, ok := d.GetOk("scope"); ok { + configs := v.(*schema.Set).List() + params.Scopes = expandCognitoResourceServerScope(configs) + } + + log.Printf("[DEBUG] Creating Cognito Resource Server: %s", params) + + _, err := conn.CreateResourceServer(params) + + if err != nil { + return fmt.Errorf("Error creating Cognito Resource Server: %s", err) + } + + d.SetId(fmt.Sprintf("%s|%s", userPoolID, identifier)) + + return resourceAwsCognitoResourceServerRead(d, meta) +} + +func resourceAwsCognitoResourceServerRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + + userPoolID, identifier, err := decodeCognitoResourceServerID(d.Id()) + if err != nil { + return err + } + + params := &cognitoidentityprovider.DescribeResourceServerInput{ + Identifier: aws.String(identifier), + UserPoolId: aws.String(userPoolID), + } + + log.Printf("[DEBUG] Reading Cognito Resource Server: %s", params) + + resp, err := conn.DescribeResourceServer(params) + + if err != nil { + if isAWSErr(err, cognitoidentityprovider.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] Cognito Resource Server %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + if resp == nil || resp.ResourceServer == nil { + log.Printf("[WARN] Cognito Resource Server %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("identifier", resp.ResourceServer.Identifier) + d.Set("name", resp.ResourceServer.Name) + d.Set("user_pool_id", resp.ResourceServer.UserPoolId) + + scopes := flattenCognitoResourceServerScope(resp.ResourceServer.Scopes) + if err := d.Set("scope", scopes); err != nil { + return fmt.Errorf("Failed setting schema: %s", err) + } + + var scopeIdentifiers []string + for _, elem := range scopes { + + scopeIdentifier := fmt.Sprintf("%s/%s", aws.StringValue(resp.ResourceServer.Identifier), elem["scope_name"].(string)) + scopeIdentifiers = append(scopeIdentifiers, scopeIdentifier) + } + d.Set("scope_identifiers", scopeIdentifiers) + return nil +} + +func resourceAwsCognitoResourceServerUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + + userPoolID, identifier, err := decodeCognitoResourceServerID(d.Id()) + if err != nil { + return err + } + + params := &cognitoidentityprovider.UpdateResourceServerInput{ + Identifier: aws.String(identifier), + Name: aws.String(d.Get("name").(string)), + Scopes: expandCognitoResourceServerScope(d.Get("scope").(*schema.Set).List()), + UserPoolId: aws.String(userPoolID), + } + + log.Printf("[DEBUG] Updating 
Cognito Resource Server: %s", params) + + _, err = conn.UpdateResourceServer(params) + if err != nil { + return fmt.Errorf("Error updating Cognito Resource Server: %s", err) + } + + return resourceAwsCognitoResourceServerRead(d, meta) +} + +func resourceAwsCognitoResourceServerDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoidpconn + + userPoolID, identifier, err := decodeCognitoResourceServerID(d.Id()) + if err != nil { + return err + } + + params := &cognitoidentityprovider.DeleteResourceServerInput{ + Identifier: aws.String(identifier), + UserPoolId: aws.String(userPoolID), + } + + log.Printf("[DEBUG] Deleting Resource Server: %s", params) + + _, err = conn.DeleteResourceServer(params) + + if err != nil { + if isAWSErr(err, cognitoidentityprovider.ErrCodeResourceNotFoundException, "") { + return nil + } + return fmt.Errorf("Error deleting Resource Server: %s", err) + } + + return nil +} + +func decodeCognitoResourceServerID(id string) (string, string, error) { + idParts := strings.Split(id, "|") + if len(idParts) != 2 { + return "", "", fmt.Errorf("expected ID in format UserPoolID|Identifier, received: %s", id) + } + return idParts[0], idParts[1], nil +} diff --git a/aws/resource_aws_cognito_resource_server_test.go b/aws/resource_aws_cognito_resource_server_test.go new file mode 100644 index 00000000000..83e9b7611ae --- /dev/null +++ b/aws/resource_aws_cognito_resource_server_test.go @@ -0,0 +1,226 @@ +package aws + +import ( + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSCognitoResourceServer_basic(t *testing.T) { + var resourceServer cognitoidentityprovider.ResourceServerType + identifier := fmt.Sprintf("tf-acc-test-resource-server-id-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + name1 := fmt.Sprintf("tf-acc-test-resource-server-name-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + name2 := fmt.Sprintf("tf-acc-test-resource-server-name-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + poolName := fmt.Sprintf("tf-acc-test-pool-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + resourceName := "aws_cognito_resource_server.main" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCognitoResourceServerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoResourceServerConfig_basic(identifier, name1, poolName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoResourceServerExists(resourceName, &resourceServer), + resource.TestCheckResourceAttr(resourceName, "identifier", identifier), + resource.TestCheckResourceAttr(resourceName, "name", name1), + resource.TestCheckResourceAttr(resourceName, "scope.#", "0"), + resource.TestCheckResourceAttr(resourceName, "scope_identifiers.#", "0"), + ), + }, + { + Config: testAccAWSCognitoResourceServerConfig_basic(identifier, name2, poolName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoResourceServerExists(resourceName, &resourceServer), + resource.TestCheckResourceAttr(resourceName, "identifier", identifier), + resource.TestCheckResourceAttr(resourceName, "name", name2), + 
resource.TestCheckResourceAttr(resourceName, "scope.#", "0"), + resource.TestCheckResourceAttr(resourceName, "scope_identifiers.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSCognitoResourceServer_scope(t *testing.T) { + var resourceServer cognitoidentityprovider.ResourceServerType + identifier := fmt.Sprintf("tf-acc-test-resource-server-id-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + name := fmt.Sprintf("tf-acc-test-resource-server-name-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + poolName := fmt.Sprintf("tf-acc-test-pool-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + resourceName := "aws_cognito_resource_server.main" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCognitoResourceServerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoResourceServerConfig_scope(identifier, name, poolName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoResourceServerExists(resourceName, &resourceServer), + resource.TestCheckResourceAttr(resourceName, "scope.#", "2"), + resource.TestCheckResourceAttr(resourceName, "scope_identifiers.#", "2"), + ), + }, + { + Config: testAccAWSCognitoResourceServerConfig_scope_update(identifier, name, poolName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoResourceServerExists(resourceName, &resourceServer), + resource.TestCheckResourceAttr(resourceName, "scope.#", "1"), + resource.TestCheckResourceAttr(resourceName, "scope_identifiers.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + // Ensure we can remove scope completely + { + Config: testAccAWSCognitoResourceServerConfig_basic(identifier, name, poolName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoResourceServerExists(resourceName, &resourceServer), + resource.TestCheckResourceAttr(resourceName, "scope.#", "0"), + resource.TestCheckResourceAttr(resourceName, "scope_identifiers.#", "0"), + ), + }, + }, + }) +} + +func testAccCheckAWSCognitoResourceServerExists(n string, resourceServer *cognitoidentityprovider.ResourceServerType) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return errors.New("No Cognito Resource Server ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).cognitoidpconn + + userPoolID, identifier, err := decodeCognitoResourceServerID(rs.Primary.ID) + if err != nil { + return err + } + + output, err := conn.DescribeResourceServer(&cognitoidentityprovider.DescribeResourceServerInput{ + Identifier: aws.String(identifier), + UserPoolId: aws.String(userPoolID), + }) + + if err != nil { + return err + } + + if output == nil || output.ResourceServer == nil { + return fmt.Errorf("Cognito Resource Server %q information not found", rs.Primary.ID) + } + + *resourceServer = *output.ResourceServer + + return nil + } +} + +func testAccCheckAWSCognitoResourceServerDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).cognitoidpconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_cognito_resource_server" { + continue + } + + userPoolID, identifier, err := decodeCognitoResourceServerID(rs.Primary.ID) 
+ if err != nil { + return err + } + + _, err = conn.DescribeResourceServer(&cognitoidentityprovider.DescribeResourceServerInput{ + Identifier: aws.String(identifier), + UserPoolId: aws.String(userPoolID), + }) + + if err != nil { + if isAWSErr(err, cognitoidentityprovider.ErrCodeResourceNotFoundException, "") { + return nil + } + return err + } + } + + return nil +} + +func testAccAWSCognitoResourceServerConfig_basic(identifier string, name string, poolName string) string { + return fmt.Sprintf(` +resource "aws_cognito_resource_server" "main" { + identifier = "%s" + name = "%s" + user_pool_id = "${aws_cognito_user_pool.main.id}" +} + +resource "aws_cognito_user_pool" "main" { + name = "%s" +} +`, identifier, name, poolName) +} + +func testAccAWSCognitoResourceServerConfig_scope(identifier string, name string, poolName string) string { + return fmt.Sprintf(` +resource "aws_cognito_resource_server" "main" { + identifier = "%s" + name = "%s" + + scope = { + scope_name = "scope_1_name" + scope_description = "scope_1_description" + } + + scope = { + scope_name = "scope_2_name" + scope_description = "scope_2_description" + } + + user_pool_id = "${aws_cognito_user_pool.main.id}" +} + +resource "aws_cognito_user_pool" "main" { + name = "%s" +} +`, identifier, name, poolName) +} + +func testAccAWSCognitoResourceServerConfig_scope_update(identifier string, name string, poolName string) string { + return fmt.Sprintf(` +resource "aws_cognito_resource_server" "main" { + identifier = "%s" + name = "%s" + + scope = { + scope_name = "scope_1_name_updated" + scope_description = "scope_1_description" + } + + user_pool_id = "${aws_cognito_user_pool.main.id}" +} + +resource "aws_cognito_user_pool" "main" { + name = "%s" +} +`, identifier, name, poolName) +} diff --git a/aws/resource_aws_cognito_user_group.go b/aws/resource_aws_cognito_user_group.go index f8f4464f004..c294665cd2e 100644 --- a/aws/resource_aws_cognito_user_group.go +++ b/aws/resource_aws_cognito_user_group.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsCognitoUserGroup() *schema.Resource { @@ -27,7 +28,7 @@ func resourceAwsCognitoUserGroup() *schema.Resource { "description": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(2048), + ValidateFunc: validation.StringLenBetween(0, 2048), }, "name": { Type: schema.TypeString, @@ -130,7 +131,7 @@ func resourceAwsCognitoUserGroupUpdate(d *schema.ResourceData, meta interface{}) } if d.HasChange("role_arn") { - params.RoleArn = aws.String(d.Get("description").(string)) + params.RoleArn = aws.String(d.Get("role_arn").(string)) } log.Print("[DEBUG] Updating Cognito User Group") diff --git a/aws/resource_aws_cognito_user_group_test.go b/aws/resource_aws_cognito_user_group_test.go index 9ed6ef96753..6b2765cddb2 100644 --- a/aws/resource_aws_cognito_user_group_test.go +++ b/aws/resource_aws_cognito_user_group_test.go @@ -18,7 +18,7 @@ func TestAccAWSCognitoUserGroup_basic(t *testing.T) { groupName := fmt.Sprintf("tf-acc-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) updatedGroupName := fmt.Sprintf("tf-acc-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckAWSCognitoUserGroupDestroy, @@ -46,7 +46,7 @@ func TestAccAWSCognitoUserGroup_complex(t *testing.T) { groupName := fmt.Sprintf("tf-acc-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) updatedGroupName := fmt.Sprintf("tf-acc-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCognitoUserGroupDestroy, @@ -75,12 +75,39 @@ func TestAccAWSCognitoUserGroup_complex(t *testing.T) { }) } +func TestAccAWSCognitoUserGroup_RoleArn(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc") + resourceName := "aws_cognito_user_group.main" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCognitoUserGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoUserGroupConfig_RoleArn(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoUserGroupExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "role_arn"), + ), + }, + { + Config: testAccAWSCognitoUserGroupConfig_RoleArn_Updated(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoUserGroupExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "role_arn"), + ), + }, + }, + }) +} + func TestAccAWSCognitoUserGroup_importBasic(t *testing.T) { resourceName := "aws_cognito_user_group.main" poolName := fmt.Sprintf("tf-acc-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) groupName := fmt.Sprintf("tf-acc-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSCognitoUserGroupDestroy, @@ -218,3 +245,69 @@ resource "aws_cognito_user_group" "main" { } `, poolName, groupName, groupName, groupDescription, precedence) } + +func testAccAWSCognitoUserGroupConfig_RoleArn(rName string) string { + return fmt.Sprintf(` +resource "aws_cognito_user_pool" "main" { + name = "%[1]s" +} + +resource "aws_iam_role" "group_role" { + name = "%[1]s" + assume_role_policy = < 0 { - ruleInput.Scope = expandConfigRuleScope(scopes[0].(map[string]interface{})) - } - if v, ok := d.GetOk("description"); ok { ruleInput.Description = aws.String(v.(string)) } @@ -295,7 +291,6 @@ func resourceAwsConfigConfigRuleDelete(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] AWS Config config rule %q deleted", name) - d.SetId("") return nil } diff --git a/aws/resource_aws_config_config_rule_test.go b/aws/resource_aws_config_config_rule_test.go index 42f3047bd72..52e0802c195 100644 --- a/aws/resource_aws_config_config_rule_test.go +++ b/aws/resource_aws_config_config_rule_test.go @@ -122,11 +122,11 @@ func testAccConfigConfigRule_importAws(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckConfigConfigRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccConfigConfigRuleConfig_ownerAws(rInt), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -146,11 +146,11 @@ func testAccConfigConfigRule_importLambda(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckConfigConfigRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: 
testAccConfigConfigRuleConfig_customLambda(rInt, path), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -159,6 +159,86 @@ func testAccConfigConfigRule_importLambda(t *testing.T) { }) } +func testAccConfigConfigRule_Scope_TagKey(t *testing.T) { + var configRule configservice.ConfigRule + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_config_config_rule.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckConfigConfigRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccConfigConfigRuleConfig_Scope_TagKey(rName, "key1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckConfigConfigRuleExists(resourceName, &configRule), + resource.TestCheckResourceAttr(resourceName, "scope.#", "1"), + resource.TestCheckResourceAttr(resourceName, "scope.0.tag_key", "key1"), + ), + }, + { + Config: testAccConfigConfigRuleConfig_Scope_TagKey(rName, "key2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckConfigConfigRuleExists(resourceName, &configRule), + resource.TestCheckResourceAttr(resourceName, "scope.#", "1"), + resource.TestCheckResourceAttr(resourceName, "scope.0.tag_key", "key2"), + ), + }, + }, + }) +} + +func testAccConfigConfigRule_Scope_TagKey_Empty(t *testing.T) { + var configRule configservice.ConfigRule + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_config_config_rule.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckConfigConfigRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccConfigConfigRuleConfig_Scope_TagKey(rName, ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckConfigConfigRuleExists(resourceName, &configRule), + ), + }, + }, + }) +} + +func testAccConfigConfigRule_Scope_TagValue(t *testing.T) { + var configRule configservice.ConfigRule + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_config_config_rule.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckConfigConfigRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccConfigConfigRuleConfig_Scope_TagValue(rName, "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckConfigConfigRuleExists(resourceName, &configRule), + resource.TestCheckResourceAttr(resourceName, "scope.#", "1"), + resource.TestCheckResourceAttr(resourceName, "scope.0.tag_value", "value1"), + ), + }, + { + Config: testAccConfigConfigRuleConfig_Scope_TagValue(rName, "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckConfigConfigRuleExists(resourceName, &configRule), + resource.TestCheckResourceAttr(resourceName, "scope.#", "1"), + resource.TestCheckResourceAttr(resourceName, "scope.0.tag_value", "value2"), + ), + }, + }, + }) +} + func testAccCheckConfigConfigRuleName(n, desired string, obj *configservice.ConfigRule) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -224,6 +304,42 @@ func testAccCheckConfigConfigRuleDestroy(s *terraform.State) error { return nil } +func testAccConfigConfigRuleConfig_base(rName string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_config_configuration_recorder" "test" { + name = %q + role_arn = "${aws_iam_role.test.arn}" +} + +resource "aws_iam_role" "test" 
{ + name = %q + + assume_role_policy = < 0 + }), + customdiff.ForceNewIfChange("organization_aggregation_source", func(old, new, meta interface{}) bool { + return len(old.([]interface{})) == 0 && len(new.([]interface{})) > 0 + }), + ), + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 256), + }, + "account_aggregation_source": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"organization_aggregation_source"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "account_ids": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateAwsAccountId, + }, + }, + "all_regions": { + Type: schema.TypeBool, + Default: false, + Optional: true, + }, + "regions": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + }, + }, + "organization_aggregation_source": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"account_aggregation_source"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "all_regions": { + Type: schema.TypeBool, + Default: false, + Optional: true, + }, + "regions": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "role_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArn, + }, + }, + }, + }, + }, + } +} + +func resourceAwsConfigConfigurationAggregatorPut(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).configconn + + name := d.Get("name").(string) + + req := &configservice.PutConfigurationAggregatorInput{ + ConfigurationAggregatorName: aws.String(name), + } + + account_aggregation_sources := d.Get("account_aggregation_source").([]interface{}) + if len(account_aggregation_sources) > 0 { + req.AccountAggregationSources = expandConfigAccountAggregationSources(account_aggregation_sources) + } + + organization_aggregation_sources := d.Get("organization_aggregation_source").([]interface{}) + if len(organization_aggregation_sources) > 0 { + req.OrganizationAggregationSource = expandConfigOrganizationAggregationSource(organization_aggregation_sources[0].(map[string]interface{})) + } + + _, err := conn.PutConfigurationAggregator(req) + if err != nil { + return fmt.Errorf("Error creating aggregator: %s", err) + } + + d.SetId(strings.ToLower(name)) + + return resourceAwsConfigConfigurationAggregatorRead(d, meta) +} + +func resourceAwsConfigConfigurationAggregatorRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).configconn + req := &configservice.DescribeConfigurationAggregatorsInput{ + ConfigurationAggregatorNames: []*string{aws.String(d.Id())}, + } + + res, err := conn.DescribeConfigurationAggregators(req) + if err != nil { + if isAWSErr(err, configservice.ErrCodeNoSuchConfigurationAggregatorException, "") { + log.Printf("[WARN] No such configuration aggregator (%s), removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + if res == nil || len(res.ConfigurationAggregators) == 0 { + log.Printf("[WARN] No aggregators returned (%s), removing from state", d.Id()) + d.SetId("") + return nil + } + + aggregator := res.ConfigurationAggregators[0] + d.Set("arn", aggregator.ConfigurationAggregatorArn) 
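+ // Mirror the remaining top-level fields, then flatten the nested aggregation source blocks back into state.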
+ d.Set("name", aggregator.ConfigurationAggregatorName) + + if err := d.Set("account_aggregation_source", flattenConfigAccountAggregationSources(aggregator.AccountAggregationSources)); err != nil { + return fmt.Errorf("error setting account_aggregation_source: %s", err) + } + + if err := d.Set("organization_aggregation_source", flattenConfigOrganizationAggregationSource(aggregator.OrganizationAggregationSource)); err != nil { + return fmt.Errorf("error setting organization_aggregation_source: %s", err) + } + + return nil +} + +func resourceAwsConfigConfigurationAggregatorDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).configconn + + req := &configservice.DeleteConfigurationAggregatorInput{ + ConfigurationAggregatorName: aws.String(d.Id()), + } + _, err := conn.DeleteConfigurationAggregator(req) + if err != nil { + return err + } + + d.SetId("") + return nil +} diff --git a/aws/resource_aws_config_configuration_aggregator_test.go b/aws/resource_aws_config_configuration_aggregator_test.go new file mode 100644 index 00000000000..748c4c4c186 --- /dev/null +++ b/aws/resource_aws_config_configuration_aggregator_test.go @@ -0,0 +1,273 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/configservice" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_config_configuration_aggregator", &resource.Sweeper{ + Name: "aws_config_configuration_aggregator", + F: testSweepConfigConfigurationAggregators, + }) +} + +func testSweepConfigConfigurationAggregators(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("Error getting client: %s", err) + } + conn := client.(*AWSClient).configconn + + resp, err := conn.DescribeConfigurationAggregators(&configservice.DescribeConfigurationAggregatorsInput{}) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Config Configuration Aggregators sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving config configuration aggregators: %s", err) + } + + if len(resp.ConfigurationAggregators) == 0 { + log.Print("[DEBUG] No config configuration aggregators to sweep") + return nil + } + + log.Printf("[INFO] Found %d config configuration aggregators", len(resp.ConfigurationAggregators)) + + for _, agg := range resp.ConfigurationAggregators { + if !strings.HasPrefix(*agg.ConfigurationAggregatorName, "tf-") { + continue + } + + log.Printf("[INFO] Deleting config configuration aggregator %s", *agg.ConfigurationAggregatorName) + _, err := conn.DeleteConfigurationAggregator(&configservice.DeleteConfigurationAggregatorInput{ + ConfigurationAggregatorName: agg.ConfigurationAggregatorName, + }) + + if err != nil { + return fmt.Errorf("Error deleting config configuration aggregator %s: %s", *agg.ConfigurationAggregatorName, err) + } + } + + return nil +} + +func TestAccAWSConfigConfigurationAggregator_account(t *testing.T) { + var ca configservice.ConfigurationAggregator + rString := acctest.RandString(10) + expectedName := fmt.Sprintf("tf-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSConfigConfigurationAggregatorDestroy, + Steps: 
[]resource.TestStep{ + { + Config: testAccAWSConfigConfigurationAggregatorConfig_account(rString), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSConfigConfigurationAggregatorExists("aws_config_configuration_aggregator.example", &ca), + testAccCheckAWSConfigConfigurationAggregatorName("aws_config_configuration_aggregator.example", expectedName, &ca), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "name", expectedName), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "account_aggregation_source.#", "1"), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "account_aggregation_source.0.account_ids.#", "1"), + resource.TestMatchResourceAttr("aws_config_configuration_aggregator.example", "account_aggregation_source.0.account_ids.0", regexp.MustCompile("^\\d{12}$")), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "account_aggregation_source.0.regions.#", "1"), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "account_aggregation_source.0.regions.0", "us-west-2"), + ), + }, + { + ResourceName: "aws_config_configuration_aggregator.example", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSConfigConfigurationAggregator_organization(t *testing.T) { + var ca configservice.ConfigurationAggregator + rString := acctest.RandString(10) + expectedName := fmt.Sprintf("tf-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSConfigConfigurationAggregatorDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSConfigConfigurationAggregatorConfig_organization(rString), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSConfigConfigurationAggregatorExists("aws_config_configuration_aggregator.example", &ca), + testAccCheckAWSConfigConfigurationAggregatorName("aws_config_configuration_aggregator.example", expectedName, &ca), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "name", expectedName), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "organization_aggregation_source.#", "1"), + resource.TestMatchResourceAttr("aws_config_configuration_aggregator.example", "organization_aggregation_source.0.role_arn", regexp.MustCompile("^arn:aws:iam::\\d+:role/")), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "organization_aggregation_source.0.all_regions", "true"), + ), + }, + { + ResourceName: "aws_config_configuration_aggregator.example", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSConfigConfigurationAggregator_switch(t *testing.T) { + rString := acctest.RandString(10) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSConfigConfigurationAggregatorDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSConfigConfigurationAggregatorConfig_account(rString), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "account_aggregation_source.#", "1"), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "organization_aggregation_source.#", "0"), + ), + }, + { + Config: 
testAccAWSConfigConfigurationAggregatorConfig_organization(rString), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "account_aggregation_source.#", "0"), + resource.TestCheckResourceAttr("aws_config_configuration_aggregator.example", "organization_aggregation_source.#", "1"), + ), + }, + }, + }) +} + +func testAccCheckAWSConfigConfigurationAggregatorName(n, desired string, obj *configservice.ConfigurationAggregator) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + if rs.Primary.Attributes["name"] != *obj.ConfigurationAggregatorName { + return fmt.Errorf("Expected name: %q, given: %q", desired, *obj.ConfigurationAggregatorName) + } + return nil + } +} + +func testAccCheckAWSConfigConfigurationAggregatorExists(n string, obj *configservice.ConfigurationAggregator) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not Found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No config configuration aggregator ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).configconn + out, err := conn.DescribeConfigurationAggregators(&configservice.DescribeConfigurationAggregatorsInput{ + ConfigurationAggregatorNames: []*string{aws.String(rs.Primary.Attributes["name"])}, + }) + if err != nil { + return fmt.Errorf("Failed to describe config configuration aggregator: %s", err) + } + if len(out.ConfigurationAggregators) < 1 { + return fmt.Errorf("No config configuration aggregator found when describing %q", rs.Primary.Attributes["name"]) + } + + ca := out.ConfigurationAggregators[0] + *obj = *ca + + return nil + } +} + +func testAccCheckAWSConfigConfigurationAggregatorDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).configconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_config_configuration_aggregator" { + continue + } + + resp, err := conn.DescribeConfigurationAggregators(&configservice.DescribeConfigurationAggregatorsInput{ + ConfigurationAggregatorNames: []*string{aws.String(rs.Primary.Attributes["name"])}, + }) + + if err == nil { + if len(resp.ConfigurationAggregators) != 0 && + *resp.ConfigurationAggregators[0].ConfigurationAggregatorName == rs.Primary.Attributes["name"] { + return fmt.Errorf("config configuration aggregator still exists: %s", rs.Primary.Attributes["name"]) + } + } + } + + return nil +} + +func testAccAWSConfigConfigurationAggregatorConfig_account(rString string) string { + return fmt.Sprintf(` +resource "aws_config_configuration_aggregator" "example" { + name = "tf-%s" + + account_aggregation_source { + account_ids = ["${data.aws_caller_identity.current.account_id}"] + regions = ["us-west-2"] + } +} + +data "aws_caller_identity" "current" {} +`, rString) +} + +func testAccAWSConfigConfigurationAggregatorConfig_organization(rString string) string { + return fmt.Sprintf(` +resource "aws_organizations_organization" "test" {} + +resource "aws_config_configuration_aggregator" "example" { + depends_on = ["aws_iam_role_policy_attachment.example"] + + name = "tf-%s" + + organization_aggregation_source { + all_regions = true + role_arn = "${aws_iam_role.example.arn}" + } +} + +resource "aws_iam_role" "example" { + name = "tf-%s" + + assume_role_policy = < 0 { + options := v.([]interface{}) + s := options[0].(map[string]interface{}) + 
req.SSESpecification = expandDaxEncryptAtRestOptions(s) + } + // IAM roles take some time to propagate var resp *dax.CreateClusterOutput err := resource.Retry(30*time.Second, func() *resource.RetryError { @@ -288,39 +314,33 @@ func resourceAwsDaxClusterRead(d *schema.ResourceData, meta interface{}) error { return err } + if err := d.Set("server_side_encryption", flattenDaxEncryptAtRestOptions(c.SSEDescription)); err != nil { + return fmt.Errorf("error setting server_side_encryption: %s", err) + } + // list tags for resource - // set tags - arn, err := buildDaxArn(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - log.Printf("[DEBUG] Error building ARN for DAX Cluster, not setting Tags for cluster %s", *c.ClusterName) - } else { - resp, err := conn.ListTags(&dax.ListTagsInput{ - ResourceName: aws.String(arn), - }) + resp, err := conn.ListTags(&dax.ListTagsInput{ + ResourceName: aws.String(aws.StringValue(c.ClusterArn)), + }) - if err != nil { - log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) - } + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", aws.StringValue(c.ClusterArn)) + } - var dt []*dax.Tag - if len(resp.Tags) > 0 { - dt = resp.Tags - } - d.Set("tags", tagsToMapDax(dt)) + var dt []*dax.Tag + if len(resp.Tags) > 0 { + dt = resp.Tags } + d.Set("tags", tagsToMapDax(dt)) return nil } func resourceAwsDaxClusterUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).daxconn - arn, err := buildDaxArn(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - log.Printf("[DEBUG] Error building ARN for DAX Cluster, not updating Tags for cluster %s", d.Id()) - } else { - if err := setTagsDax(conn, d, arn); err != nil { - return err - } + + if err := setTagsDax(conn, d, d.Get("arn").(string)); err != nil { + return err } req := &dax.UpdateClusterInput{ @@ -365,7 +385,7 @@ func resourceAwsDaxClusterUpdate(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] Modifying DAX Cluster (%s), opts:\n%s", d.Id(), req) _, err := conn.UpdateCluster(req) if err != nil { - return fmt.Errorf("[WARN] Error updating DAX cluster (%s), error: %s", d.Id(), err) + return fmt.Errorf("Error updating DAX cluster (%s), error: %s", d.Id(), err) } awaitUpdate = true } @@ -381,7 +401,7 @@ func resourceAwsDaxClusterUpdate(d *schema.ResourceData, meta interface{}) error NewReplicationFactor: aws.Int64(int64(nraw.(int))), }) if err != nil { - return fmt.Errorf("[WARN] Error increasing nodes in DAX cluster %s, error: %s", d.Id(), err) + return fmt.Errorf("Error increasing nodes in DAX cluster %s, error: %s", d.Id(), err) } awaitUpdate = true } @@ -392,7 +412,7 @@ func resourceAwsDaxClusterUpdate(d *schema.ResourceData, meta interface{}) error NewReplicationFactor: aws.Int64(int64(nraw.(int))), }) if err != nil { - return fmt.Errorf("[WARN] Error increasing nodes in DAX cluster %s, error: %s", d.Id(), err) + return fmt.Errorf("Error increasing nodes in DAX cluster %s, error: %s", d.Id(), err) } awaitUpdate = true } @@ -485,8 +505,6 @@ func resourceAwsDaxClusterDelete(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("Error waiting for DAX (%s) to delete: %s", d.Id(), sterr) } - d.SetId("") - return nil } @@ -506,7 +524,7 @@ func daxClusterStateRefreshFunc(conn *dax.DAX, clusterID, givenState string, pen } if len(resp.Clusters) == 0 { - return nil, "", fmt.Errorf("[WARN] Error: no DAX clusters found for id (%s)", 
clusterID) + return nil, "", fmt.Errorf("Error: no DAX clusters found for id (%s)", clusterID) } var c *dax.Cluster @@ -518,7 +536,7 @@ func daxClusterStateRefreshFunc(conn *dax.DAX, clusterID, givenState string, pen } if c == nil { - return nil, "", fmt.Errorf("[WARN] Error: no matching DAX cluster for id (%s)", clusterID) + return nil, "", fmt.Errorf("Error: no matching DAX cluster for id (%s)", clusterID) } // DescribeCluster returns a response without status late on in the @@ -567,22 +585,3 @@ func daxClusterStateRefreshFunc(conn *dax.DAX, clusterID, givenState string, pen return c, *c.Status, nil } } - -func buildDaxArn(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct DAX ARN because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct DAX ARN because of missing AWS Account ID") - } - - arn := arn.ARN{ - Partition: partition, - Service: "dax", - Region: region, - AccountID: accountid, - Resource: fmt.Sprintf("cache/%s", identifier), - } - - return arn.String(), nil -} diff --git a/aws/resource_aws_dax_cluster_test.go b/aws/resource_aws_dax_cluster_test.go index cd5015cb057..edf5ee46e0d 100644 --- a/aws/resource_aws_dax_cluster_test.go +++ b/aws/resource_aws_dax_cluster_test.go @@ -30,6 +30,12 @@ func testSweepDAXClusters(region string) error { resp, err := conn.DescribeClusters(&dax.DescribeClustersInput{}) if err != nil { + // GovCloud (with no DAX support) has an endpoint that responds with: + // InvalidParameterValueException: Access Denied to API Version: DAX_V3 + if testSweepSkipSweepError(err) || isAWSErr(err, "InvalidParameterValueException", "Access Denied to API Version: DAX_V3") { + log.Printf("[WARN] Skipping DAX Cluster sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error retrieving DAX clusters: %s", err) } @@ -57,10 +63,32 @@ func testSweepDAXClusters(region string) error { return nil } +func TestAccAWSDAXCluster_importBasic(t *testing.T) { + resourceName := "aws_dax_cluster.test" + rString := acctest.RandString(10) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDAXClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDAXClusterConfig(rString), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSDAXCluster_basic(t *testing.T) { var dc dax.Cluster rString := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDAXClusterDestroy, @@ -76,7 +104,7 @@ func TestAccAWSDAXCluster_basic(t *testing.T) { resource.TestMatchResourceAttr( "aws_dax_cluster.test", "iam_role_arn", regexp.MustCompile("^arn:aws:iam::\\d+:role/")), resource.TestCheckResourceAttr( - "aws_dax_cluster.test", "node_type", "dax.r3.large"), + "aws_dax_cluster.test", "node_type", "dax.t2.small"), resource.TestCheckResourceAttr( "aws_dax_cluster.test", "replication_factor", "1"), resource.TestCheckResourceAttr( @@ -95,6 +123,10 @@ func TestAccAWSDAXCluster_basic(t *testing.T) { "aws_dax_cluster.test", "cluster_address"), resource.TestMatchResourceAttr( "aws_dax_cluster.test", "port", regexp.MustCompile("^\\d+$")), + resource.TestCheckResourceAttr( + "aws_dax_cluster.test", "server_side_encryption.#", "1"), + 
resource.TestCheckResourceAttr( + "aws_dax_cluster.test", "server_side_encryption.0.enabled", "false"), ), }, }, @@ -104,7 +136,7 @@ func TestAccAWSDAXCluster_basic(t *testing.T) { func TestAccAWSDAXCluster_resize(t *testing.T) { var dc dax.Cluster rString := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDAXClusterDestroy, @@ -137,6 +169,58 @@ func TestAccAWSDAXCluster_resize(t *testing.T) { }) } +func TestAccAWSDAXCluster_encryption_disabled(t *testing.T) { + var dc dax.Cluster + rString := acctest.RandString(10) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDAXClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDAXClusterConfigWithEncryption(rString, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDAXClusterExists("aws_dax_cluster.test", &dc), + resource.TestCheckResourceAttr("aws_dax_cluster.test", "server_side_encryption.#", "1"), + resource.TestCheckResourceAttr("aws_dax_cluster.test", "server_side_encryption.0.enabled", "false"), + ), + }, + // Ensure it shows no difference when removing server_side_encryption configuration + { + Config: testAccAWSDAXClusterConfig(rString), + PlanOnly: true, + ExpectNonEmptyPlan: false, + }, + }, + }) +} + +func TestAccAWSDAXCluster_encryption_enabled(t *testing.T) { + var dc dax.Cluster + rString := acctest.RandString(10) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDAXClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDAXClusterConfigWithEncryption(rString, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDAXClusterExists("aws_dax_cluster.test", &dc), + resource.TestCheckResourceAttr("aws_dax_cluster.test", "server_side_encryption.#", "1"), + resource.TestCheckResourceAttr("aws_dax_cluster.test", "server_side_encryption.0.enabled", "true"), + ), + }, + // Ensure it shows a difference when removing server_side_encryption configuration + { + Config: testAccAWSDAXClusterConfig(rString), + PlanOnly: true, + ExpectNonEmptyPlan: true, + }, + }, + }) +} + func testAccCheckAWSDAXClusterDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).daxconn @@ -233,9 +317,9 @@ EOF func testAccAWSDAXClusterConfig(rString string) string { return fmt.Sprintf(`%s resource "aws_dax_cluster" "test" { - cluster_name = "tf-%s" + cluster_name = "tf-%s" iam_role_arn = "${aws_iam_role.test.arn}" - node_type = "dax.r3.large" + node_type = "dax.t2.small" replication_factor = 1 description = "test cluster" @@ -246,6 +330,26 @@ func testAccAWSDAXClusterConfig(rString string) string { `, baseConfig, rString) } +func testAccAWSDAXClusterConfigWithEncryption(rString string, enabled bool) string { + return fmt.Sprintf(`%s + resource "aws_dax_cluster" "test" { + cluster_name = "tf-%s" + iam_role_arn = "${aws_iam_role.test.arn}" + node_type = "dax.t2.small" + replication_factor = 1 + description = "test cluster" + + tags { + foo = "bar" + } + + server_side_encryption { + enabled = %t + } + } + `, baseConfig, rString, enabled) +} + func testAccAWSDAXClusterConfigResize_singleNode(rString string) string { return fmt.Sprintf(`%s resource "aws_dax_cluster" "test" { diff --git a/aws/resource_aws_dax_parameter_group.go 
b/aws/resource_aws_dax_parameter_group.go new file mode 100644 index 00000000000..27dc0489103 --- /dev/null +++ b/aws/resource_aws_dax_parameter_group.go @@ -0,0 +1,160 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dax" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsDaxParameterGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDaxParameterGroupCreate, + Read: resourceAwsDaxParameterGroupRead, + Update: resourceAwsDaxParameterGroupUpdate, + Delete: resourceAwsDaxParameterGroupDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "parameters": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "value": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + } +} + +func resourceAwsDaxParameterGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).daxconn + + input := &dax.CreateParameterGroupInput{ + ParameterGroupName: aws.String(d.Get("name").(string)), + } + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + _, err := conn.CreateParameterGroup(input) + if err != nil { + return err + } + + d.SetId(d.Get("name").(string)) + + if len(d.Get("parameters").(*schema.Set).List()) > 0 { + return resourceAwsDaxParameterGroupUpdate(d, meta) + } + return resourceAwsDaxParameterGroupRead(d, meta) +} + +func resourceAwsDaxParameterGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).daxconn + + resp, err := conn.DescribeParameterGroups(&dax.DescribeParameterGroupsInput{ + ParameterGroupNames: []*string{aws.String(d.Id())}, + }) + if err != nil { + if isAWSErr(err, dax.ErrCodeParameterGroupNotFoundFault, "") { + log.Printf("[WARN] DAX ParameterGroup %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + if len(resp.ParameterGroups) == 0 { + log.Printf("[WARN] DAX ParameterGroup %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + pg := resp.ParameterGroups[0] + + paramresp, err := conn.DescribeParameters(&dax.DescribeParametersInput{ + ParameterGroupName: aws.String(d.Id()), + }) + if err != nil { + if isAWSErr(err, dax.ErrCodeParameterGroupNotFoundFault, "") { + log.Printf("[WARN] DAX ParameterGroup %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + d.Set("name", pg.ParameterGroupName) + desc := pg.Description + // default description is " " + if desc != nil && *desc == " " { + *desc = "" + } + d.Set("description", desc) + d.Set("parameters", flattenDaxParameterGroupParameters(paramresp.Parameters)) + return nil +} + +func resourceAwsDaxParameterGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).daxconn + + input := &dax.UpdateParameterGroupInput{ + ParameterGroupName: aws.String(d.Id()), + } + + if d.HasChange("parameters") { + input.ParameterNameValues = expandDaxParameterGroupParameterNameValue( + d.Get("parameters").(*schema.Set).List(), + ) + } + + _, err := conn.UpdateParameterGroup(input) + if err != nil { + 
return err + } + + return resourceAwsDaxParameterGroupRead(d, meta) +} + +func resourceAwsDaxParameterGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).daxconn + + input := &dax.DeleteParameterGroupInput{ + ParameterGroupName: aws.String(d.Id()), + } + + _, err := conn.DeleteParameterGroup(input) + if err != nil { + if isAWSErr(err, dax.ErrCodeParameterGroupNotFoundFault, "") { + return nil + } + return err + } + + return nil +} diff --git a/aws/resource_aws_dax_parameter_group_test.go b/aws/resource_aws_dax_parameter_group_test.go new file mode 100644 index 00000000000..912e1f310ee --- /dev/null +++ b/aws/resource_aws_dax_parameter_group_test.go @@ -0,0 +1,124 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dax" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsDaxParameterGroup_basic(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDaxParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDaxParameterGroupConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDaxParameterGroupExists("aws_dax_parameter_group.test"), + resource.TestCheckResourceAttr("aws_dax_parameter_group.test", "parameters.#", "2"), + ), + }, + { + Config: testAccDaxParameterGroupConfig_parameters(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDaxParameterGroupExists("aws_dax_parameter_group.test"), + resource.TestCheckResourceAttr("aws_dax_parameter_group.test", "parameters.#", "2"), + ), + }, + }, + }) +} + +func TestAccAwsDaxParameterGroup_import(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_dax_parameter_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDaxParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDaxParameterGroupConfig(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsDaxParameterGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).daxconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_dax_parameter_group" { + continue + } + + _, err := conn.DescribeParameterGroups(&dax.DescribeParameterGroupsInput{ + ParameterGroupNames: []*string{aws.String(rs.Primary.ID)}, + }) + if err != nil { + if isAWSErr(err, dax.ErrCodeParameterGroupNotFoundFault, "") { + return nil + } + return err + } + } + return nil +} + +func testAccCheckAwsDaxParameterGroupExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + conn := testAccProvider.Meta().(*AWSClient).daxconn + + _, err := conn.DescribeParameterGroups(&dax.DescribeParameterGroupsInput{ + ParameterGroupNames: []*string{aws.String(rs.Primary.ID)}, + }) + if err != nil { + return err + } + + return nil + } +} + +func testAccDaxParameterGroupConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_dax_parameter_group" "test" 
{ + name = "%s" +} +`, rName) +} + +func testAccDaxParameterGroupConfig_parameters(rName string) string { + return fmt.Sprintf(` +resource "aws_dax_parameter_group" "test" { + name = "%s" + parameters { + name = "query-ttl-millis" + value = "100000" + } + parameters { + name = "record-ttl-millis" + value = "100000" + } +} +`, rName) +} diff --git a/aws/resource_aws_dax_subnet_group.go b/aws/resource_aws_dax_subnet_group.go new file mode 100644 index 00000000000..15f15fcd323 --- /dev/null +++ b/aws/resource_aws_dax_subnet_group.go @@ -0,0 +1,132 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dax" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsDaxSubnetGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDaxSubnetGroupCreate, + Read: resourceAwsDaxSubnetGroupRead, + Update: resourceAwsDaxSubnetGroupUpdate, + Delete: resourceAwsDaxSubnetGroupDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "subnet_ids": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsDaxSubnetGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).daxconn + + input := &dax.CreateSubnetGroupInput{ + SubnetGroupName: aws.String(d.Get("name").(string)), + SubnetIds: expandStringSet(d.Get("subnet_ids").(*schema.Set)), + } + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + _, err := conn.CreateSubnetGroup(input) + if err != nil { + return err + } + + d.SetId(d.Get("name").(string)) + return resourceAwsDaxSubnetGroupRead(d, meta) +} + +func resourceAwsDaxSubnetGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).daxconn + + resp, err := conn.DescribeSubnetGroups(&dax.DescribeSubnetGroupsInput{ + SubnetGroupNames: []*string{aws.String(d.Id())}, + }) + if err != nil { + if isAWSErr(err, dax.ErrCodeSubnetGroupNotFoundFault, "") { + log.Printf("[WARN] DAX SubnetGroup %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + sg := resp.SubnetGroups[0] + + d.Set("name", sg.SubnetGroupName) + d.Set("description", sg.Description) + subnetIDs := make([]*string, 0, len(sg.Subnets)) + for _, v := range sg.Subnets { + subnetIDs = append(subnetIDs, v.SubnetIdentifier) + } + d.Set("subnet_ids", flattenStringList(subnetIDs)) + d.Set("vpc_id", sg.VpcId) + return nil +} + +func resourceAwsDaxSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).daxconn + + input := &dax.UpdateSubnetGroupInput{ + SubnetGroupName: aws.String(d.Id()), + } + + if d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } + + if d.HasChange("subnet_ids") { + input.SubnetIds = expandStringSet(d.Get("subnet_ids").(*schema.Set)) + } + + _, err := conn.UpdateSubnetGroup(input) + if err != nil { + return err + } + + return resourceAwsDaxSubnetGroupRead(d, meta) +} + +func resourceAwsDaxSubnetGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).daxconn + + input := 
&dax.DeleteSubnetGroupInput{ + SubnetGroupName: aws.String(d.Id()), + } + + _, err := conn.DeleteSubnetGroup(input) + if err != nil { + if isAWSErr(err, dax.ErrCodeSubnetGroupNotFoundFault, "") { + return nil + } + return err + } + + return nil +} diff --git a/aws/resource_aws_dax_subnet_group_test.go b/aws/resource_aws_dax_subnet_group_test.go new file mode 100644 index 00000000000..288d3a3280b --- /dev/null +++ b/aws/resource_aws_dax_subnet_group_test.go @@ -0,0 +1,147 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dax" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsDaxSubnetGroup_basic(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_dax_subnet_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDaxSubnetGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDaxSubnetGroupConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDaxSubnetGroupExists("aws_dax_subnet_group.test"), + resource.TestCheckResourceAttr("aws_dax_subnet_group.test", "subnet_ids.#", "2"), + resource.TestCheckResourceAttrSet("aws_dax_subnet_group.test", "vpc_id"), + ), + }, + { + Config: testAccDaxSubnetGroupConfig_update(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDaxSubnetGroupExists("aws_dax_subnet_group.test"), + resource.TestCheckResourceAttr("aws_dax_subnet_group.test", "description", "update"), + resource.TestCheckResourceAttr("aws_dax_subnet_group.test", "subnet_ids.#", "3"), + resource.TestCheckResourceAttrSet("aws_dax_subnet_group.test", "vpc_id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsDaxSubnetGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).daxconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_dax_subnet_group" { + continue + } + + _, err := conn.DescribeSubnetGroups(&dax.DescribeSubnetGroupsInput{ + SubnetGroupNames: []*string{aws.String(rs.Primary.ID)}, + }) + if err != nil { + if isAWSErr(err, dax.ErrCodeSubnetGroupNotFoundFault, "") { + return nil + } + return err + } + } + return nil +} + +func testAccCheckAwsDaxSubnetGroupExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + conn := testAccProvider.Meta().(*AWSClient).daxconn + + _, err := conn.DescribeSubnetGroups(&dax.DescribeSubnetGroupsInput{ + SubnetGroupNames: []*string{aws.String(rs.Primary.ID)}, + }) + if err != nil { + return err + } + + return nil + } +} + +func testAccDaxSubnetGroupConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "test1" { + cidr_block = "10.0.1.0/24" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_subnet" "test2" { + cidr_block = "10.0.2.0/24" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_dax_subnet_group" "test" { + name = "%s" + subnet_ids = [ + "${aws_subnet.test1.id}", + "${aws_subnet.test2.id}", + ] +} +`, rName) +} + +func testAccDaxSubnetGroupConfig_update(rName string) string { + return 
fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_subnet" "test1" { + cidr_block = "10.0.1.0/24" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_subnet" "test2" { + cidr_block = "10.0.2.0/24" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_subnet" "test3" { + cidr_block = "10.0.3.0/24" + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_dax_subnet_group" "test" { + name = "%s" + description = "update" + subnet_ids = [ + "${aws_subnet.test1.id}", + "${aws_subnet.test2.id}", + "${aws_subnet.test3.id}", + ] +} +`, rName) +} diff --git a/aws/resource_aws_db_cluster_snapshot.go b/aws/resource_aws_db_cluster_snapshot.go new file mode 100644 index 00000000000..089e4fa715a --- /dev/null +++ b/aws/resource_aws_db_cluster_snapshot.go @@ -0,0 +1,214 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsDbClusterSnapshot() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDbClusterSnapshotCreate, + Read: resourceAwsDbClusterSnapshotRead, + Delete: resourceAwsDbClusterSnapshotDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(20 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "db_cluster_snapshot_identifier": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "db_cluster_identifier": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "allocated_storage": { + Type: schema.TypeInt, + Computed: true, + }, + "availability_zones": { + Type: schema.TypeList, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + }, + "db_cluster_snapshot_arn": { + Type: schema.TypeString, + Computed: true, + }, + "storage_encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + "engine": { + Type: schema.TypeString, + Computed: true, + }, + "engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "license_model": { + Type: schema.TypeString, + Computed: true, + }, + "port": { + Type: schema.TypeInt, + Computed: true, + }, + "source_db_cluster_snapshot_arn": { + Type: schema.TypeString, + Computed: true, + }, + "snapshot_type": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsDbClusterSnapshotCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + params := &rds.CreateDBClusterSnapshotInput{ + DBClusterIdentifier: aws.String(d.Get("db_cluster_identifier").(string)), + DBClusterSnapshotIdentifier: aws.String(d.Get("db_cluster_snapshot_identifier").(string)), + } + + _, err := conn.CreateDBClusterSnapshot(params) + if err != nil { + return fmt.Errorf("error creating RDS DB Cluster Snapshot: %s", err) + } + d.SetId(d.Get("db_cluster_snapshot_identifier").(string)) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating"}, + Target: []string{"available"}, + Refresh: resourceAwsDbClusterSnapshotStateRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 10 * time.Second, + Delay: 5 * time.Second, + } + + // Wait, catching 
any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for RDS DB Cluster Snapshot %q to create: %s", d.Id(), err) + } + + return resourceAwsDbClusterSnapshotRead(d, meta) +} + +func resourceAwsDbClusterSnapshotRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + params := &rds.DescribeDBClusterSnapshotsInput{ + DBClusterSnapshotIdentifier: aws.String(d.Id()), + } + resp, err := conn.DescribeDBClusterSnapshots(params) + if err != nil { + if isAWSErr(err, rds.ErrCodeDBClusterSnapshotNotFoundFault, "") { + log.Printf("[WARN] RDS DB Cluster Snapshot %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading RDS DB Cluster Snapshot %q: %s", d.Id(), err) + } + + if resp == nil || len(resp.DBClusterSnapshots) == 0 || resp.DBClusterSnapshots[0] == nil || aws.StringValue(resp.DBClusterSnapshots[0].DBClusterSnapshotIdentifier) != d.Id() { + log.Printf("[WARN] RDS DB Cluster Snapshot %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + snapshot := resp.DBClusterSnapshots[0] + + d.Set("allocated_storage", snapshot.AllocatedStorage) + if err := d.Set("availability_zones", flattenStringList(snapshot.AvailabilityZones)); err != nil { + return fmt.Errorf("error setting availability_zones: %s", err) + } + d.Set("db_cluster_identifier", snapshot.DBClusterIdentifier) + d.Set("db_cluster_snapshot_arn", snapshot.DBClusterSnapshotArn) + d.Set("db_cluster_snapshot_identifier", snapshot.DBClusterSnapshotIdentifier) + d.Set("engine_version", snapshot.EngineVersion) + d.Set("engine", snapshot.Engine) + d.Set("kms_key_id", snapshot.KmsKeyId) + d.Set("license_model", snapshot.LicenseModel) + d.Set("port", snapshot.Port) + d.Set("snapshot_type", snapshot.SnapshotType) + d.Set("source_db_cluster_snapshot_arn", snapshot.SourceDBClusterSnapshotArn) + d.Set("status", snapshot.Status) + d.Set("storage_encrypted", snapshot.StorageEncrypted) + d.Set("vpc_id", snapshot.VpcId) + + return nil +} + +func resourceAwsDbClusterSnapshotDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).rdsconn + + params := &rds.DeleteDBClusterSnapshotInput{ + DBClusterSnapshotIdentifier: aws.String(d.Id()), + } + _, err := conn.DeleteDBClusterSnapshot(params) + if err != nil { + if isAWSErr(err, rds.ErrCodeDBClusterSnapshotNotFoundFault, "") { + return nil + } + return fmt.Errorf("error deleting RDS DB Cluster Snapshot %q: %s", d.Id(), err) + } + + return nil +} + +func resourceAwsDbClusterSnapshotStateRefreshFunc(dbClusterSnapshotIdentifier string, conn *rds.RDS) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + opts := &rds.DescribeDBClusterSnapshotsInput{ + DBClusterSnapshotIdentifier: aws.String(dbClusterSnapshotIdentifier), + } + + log.Printf("[DEBUG] DB Cluster Snapshot describe configuration: %#v", opts) + + resp, err := conn.DescribeDBClusterSnapshots(opts) + if err != nil { + if isAWSErr(err, rds.ErrCodeDBClusterSnapshotNotFoundFault, "") { + return nil, "", nil + } + return nil, "", fmt.Errorf("Error retrieving DB Cluster Snapshots: %s", err) + } + + if resp == nil || len(resp.DBClusterSnapshots) == 0 || resp.DBClusterSnapshots[0] == nil { + return nil, "", fmt.Errorf("No snapshots returned for %s", dbClusterSnapshotIdentifier) + } + + snapshot := resp.DBClusterSnapshots[0] + + return resp, aws.StringValue(snapshot.Status), nil + } +} diff --git a/aws/resource_aws_db_cluster_snapshot_test.go 
b/aws/resource_aws_db_cluster_snapshot_test.go new file mode 100644 index 00000000000..3e4518dbd3b --- /dev/null +++ b/aws/resource_aws_db_cluster_snapshot_test.go @@ -0,0 +1,155 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSDBClusterSnapshot_basic(t *testing.T) { + var dbClusterSnapshot rds.DBClusterSnapshot + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_db_cluster_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDbClusterSnapshotDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsDbClusterSnapshotConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDbClusterSnapshotExists(resourceName, &dbClusterSnapshot), + resource.TestCheckResourceAttrSet(resourceName, "allocated_storage"), + resource.TestCheckResourceAttrSet(resourceName, "availability_zones.#"), + resource.TestMatchResourceAttr(resourceName, "db_cluster_snapshot_arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:\d{12}:cluster-snapshot:.+`)), + resource.TestCheckResourceAttrSet(resourceName, "engine"), + resource.TestCheckResourceAttrSet(resourceName, "engine_version"), + resource.TestCheckResourceAttr(resourceName, "kms_key_id", ""), + resource.TestCheckResourceAttrSet(resourceName, "license_model"), + resource.TestCheckResourceAttrSet(resourceName, "port"), + resource.TestCheckResourceAttr(resourceName, "snapshot_type", "manual"), + resource.TestCheckResourceAttr(resourceName, "source_db_cluster_snapshot_arn", ""), + resource.TestCheckResourceAttr(resourceName, "status", "available"), + resource.TestCheckResourceAttr(resourceName, "storage_encrypted", "false"), + resource.TestMatchResourceAttr(resourceName, "vpc_id", regexp.MustCompile(`^vpc-.+`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckDbClusterSnapshotDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).rdsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_db_cluster_snapshot" { + continue + } + + input := &rds.DescribeDBClusterSnapshotsInput{ + DBClusterSnapshotIdentifier: aws.String(rs.Primary.ID), + } + + output, err := conn.DescribeDBClusterSnapshots(input) + if err != nil { + if isAWSErr(err, rds.ErrCodeDBClusterSnapshotNotFoundFault, "") { + continue + } + return err + } + + if output != nil && len(output.DBClusterSnapshots) > 0 && output.DBClusterSnapshots[0] != nil && aws.StringValue(output.DBClusterSnapshots[0].DBClusterSnapshotIdentifier) == rs.Primary.ID { + return fmt.Errorf("RDS DB Cluster Snapshot %q still exists", rs.Primary.ID) + } + } + + return nil +} + +func testAccCheckDbClusterSnapshotExists(resourceName string, dbClusterSnapshot *rds.DBClusterSnapshot) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set for %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).rdsconn + + request := &rds.DescribeDBClusterSnapshotsInput{ + DBClusterSnapshotIdentifier: 
aws.String(rs.Primary.ID), + } + + response, err := conn.DescribeDBClusterSnapshots(request) + if err != nil { + return err + } + + if response == nil || len(response.DBClusterSnapshots) == 0 || response.DBClusterSnapshots[0] == nil || aws.StringValue(response.DBClusterSnapshots[0].DBClusterSnapshotIdentifier) != rs.Primary.ID { + return fmt.Errorf("RDS DB Cluster Snapshot %q not found", rs.Primary.ID) + } + + *dbClusterSnapshot = *response.DBClusterSnapshots[0] + + return nil + } +} + +func testAccAwsDbClusterSnapshotConfig(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "192.168.${count.index}.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_db_subnet_group" "test" { + name = %q + subnet_ids = ["${aws_subnet.test.*.id}"] +} + +resource "aws_rds_cluster" "test" { + cluster_identifier = %q + db_subnet_group_name = "${aws_db_subnet_group.test.name}" + master_password = "barbarbarbar" + master_username = "foo" + skip_final_snapshot = true +} + +resource "aws_db_cluster_snapshot" "test" { + db_cluster_identifier = "${aws_rds_cluster.test.id}" + db_cluster_snapshot_identifier = %q +} +`, rName, rName, rName, rName, rName) +} diff --git a/aws/resource_aws_db_event_subscription.go b/aws/resource_aws_db_event_subscription.go index 4415bbcc1b2..e3838695e26 100644 --- a/aws/resource_aws_db_event_subscription.go +++ b/aws/resource_aws_db_event_subscription.go @@ -6,7 +6,6 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/rds" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -21,12 +20,30 @@ func resourceAwsDbEventSubscription() *schema.Resource { Importer: &schema.ResourceImporter{ State: resourceAwsDbEventSubscriptionImport, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(40 * time.Minute), + Delete: schema.DefaultTimeout(40 * time.Minute), + Update: schema.DefaultTimeout(40 * time.Minute), + }, Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validateDbEventSubscriptionName, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, + ValidateFunc: validateDbEventSubscriptionName, + }, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateDbEventSubscriptionName, }, "sns_topic": { Type: schema.TypeString, @@ -65,8 +82,16 @@ func resourceAwsDbEventSubscription() *schema.Resource { } func resourceAwsDbEventSubscriptionCreate(d *schema.ResourceData, meta interface{}) error { - rdsconn := meta.(*AWSClient).rdsconn - name := d.Get("name").(string) + conn := meta.(*AWSClient).rdsconn + var name string + if v, ok := d.GetOk("name"); ok { + name = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + name = resource.PrefixedUniqueId(v.(string)) + } else { + name = resource.UniqueId() + } + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) sourceIdsSet := d.Get("source_ids").(*schema.Set) @@ -93,19 
+118,25 @@ func resourceAwsDbEventSubscriptionCreate(d *schema.ResourceData, meta interface log.Println("[DEBUG] Create RDS Event Subscription:", request) - _, err := rdsconn.CreateEventSubscription(request) - if err != nil { + output, err := conn.CreateEventSubscription(request) + if err != nil || output.EventSubscription == nil { return fmt.Errorf("Error creating RDS Event Subscription %s: %s", name, err) } + d.SetId(aws.StringValue(output.EventSubscription.CustSubscriptionId)) + + if err := setTagsRDS(conn, d, aws.StringValue(output.EventSubscription.EventSubscriptionArn)); err != nil { + return fmt.Errorf("Error creating RDS Event Subscription (%s) tags: %s", d.Id(), err) + } + log.Println( "[INFO] Waiting for RDS Event Subscription to be ready") stateConf := &resource.StateChangeConf{ Pending: []string{"creating"}, Target: []string{"active"}, - Refresh: resourceAwsDbEventSubscriptionRefreshFunc(d, meta.(*AWSClient).rdsconn), - Timeout: 40 * time.Minute, + Refresh: resourceAwsDbEventSubscriptionRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutCreate), MinTimeout: 10 * time.Second, Delay: 30 * time.Second, // Wait 30 secs before starting } @@ -120,16 +151,19 @@ func resourceAwsDbEventSubscriptionCreate(d *schema.ResourceData, meta interface } func resourceAwsDbEventSubscriptionRead(d *schema.ResourceData, meta interface{}) error { - sub, err := resourceAwsDbEventSubscriptionRetrieve(d.Get("name").(string), meta.(*AWSClient).rdsconn) + conn := meta.(*AWSClient).rdsconn + + sub, err := resourceAwsDbEventSubscriptionRetrieve(d.Id(), conn) if err != nil { return fmt.Errorf("Error retrieving RDS Event Subscription %s: %s", d.Id(), err) } if sub == nil { + log.Printf("[WARN] RDS Event Subscription (%s) not found - removing from state", d.Id()) d.SetId("") return nil } - d.SetId(*sub.CustSubscriptionId) + d.Set("arn", sub.EventSubscriptionArn) if err := d.Set("name", sub.CustSubscriptionId); err != nil { return err } @@ -153,39 +187,34 @@ func resourceAwsDbEventSubscriptionRead(d *schema.ResourceData, meta interface{} } // list tags for resource - // set tags - conn := meta.(*AWSClient).rdsconn - if arn, err := buildRDSEventSubscriptionARN(d.Get("customer_aws_id").(string), d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).region); err != nil { - log.Printf("[DEBUG] Error building ARN for RDS Event Subscription, not setting Tags for Event Subscription %s", *sub.CustSubscriptionId) - } else { - resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ - ResourceName: aws.String(arn), - }) + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: sub.EventSubscriptionArn, + }) - if err != nil { - log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) - } + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", aws.StringValue(sub.EventSubscriptionArn)) + } - var dt []*rds.Tag - if len(resp.TagList) > 0 { - dt = resp.TagList - } - d.Set("tags", tagsToMapRDS(dt)) + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList + } + if err := d.Set("tags", tagsToMapRDS(dt)); err != nil { + return fmt.Errorf("error setting tags: %s", err) } return nil } -func resourceAwsDbEventSubscriptionRetrieve( - name string, rdsconn *rds.RDS) (*rds.EventSubscription, error) { +func resourceAwsDbEventSubscriptionRetrieve(name string, conn *rds.RDS) (*rds.EventSubscription, error) { request := &rds.DescribeEventSubscriptionsInput{ SubscriptionName: aws.String(name), } - describeResp, err := 
rdsconn.DescribeEventSubscriptions(request) + describeResp, err := conn.DescribeEventSubscriptions(request) if err != nil { - if rdserr, ok := err.(awserr.Error); ok && rdserr.Code() == "SubscriptionNotFound" { + if isAWSErr(err, rds.ErrCodeSubscriptionNotFoundFault, "") { log.Printf("[WARN] No RDS Event Subscription by name (%s) found", name) return nil, nil } @@ -200,7 +229,7 @@ func resourceAwsDbEventSubscriptionRetrieve( } func resourceAwsDbEventSubscriptionUpdate(d *schema.ResourceData, meta interface{}) error { - rdsconn := meta.(*AWSClient).rdsconn + conn := meta.(*AWSClient).rdsconn d.Partial(true) requestUpdate := false @@ -237,7 +266,7 @@ func resourceAwsDbEventSubscriptionUpdate(d *schema.ResourceData, meta interface log.Printf("[DEBUG] Send RDS Event Subscription modification request: %#v", requestUpdate) if requestUpdate { log.Printf("[DEBUG] RDS Event Subscription modification request: %#v", req) - _, err := rdsconn.ModifyEventSubscription(req) + _, err := conn.ModifyEventSubscription(req) if err != nil { return fmt.Errorf("Modifying RDS Event Subscription %s failed: %s", d.Id(), err) } @@ -248,8 +277,8 @@ func resourceAwsDbEventSubscriptionUpdate(d *schema.ResourceData, meta interface stateConf := &resource.StateChangeConf{ Pending: []string{"modifying"}, Target: []string{"active"}, - Refresh: resourceAwsDbEventSubscriptionRefreshFunc(d, meta.(*AWSClient).rdsconn), - Timeout: 40 * time.Minute, + Refresh: resourceAwsDbEventSubscriptionRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutUpdate), MinTimeout: 10 * time.Second, Delay: 30 * time.Second, // Wait 30 secs before starting } @@ -265,12 +294,10 @@ func resourceAwsDbEventSubscriptionUpdate(d *schema.ResourceData, meta interface d.SetPartial("source_type") } - if arn, err := buildRDSEventSubscriptionARN(d.Get("customer_aws_id").(string), d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(rdsconn, d, arn); err != nil { - return err - } else { - d.SetPartial("tags") - } + if err := setTagsRDS(conn, d, d.Get("arn").(string)); err != nil { + return err + } else { + d.SetPartial("tags") } if d.HasChange("source_ids") { @@ -290,7 +317,7 @@ func resourceAwsDbEventSubscriptionUpdate(d *schema.ResourceData, meta interface if len(remove) > 0 { for _, removing := range remove { log.Printf("[INFO] Removing %s as a Source Identifier from %q", *removing, d.Id()) - _, err := rdsconn.RemoveSourceIdentifierFromSubscription(&rds.RemoveSourceIdentifierFromSubscriptionInput{ + _, err := conn.RemoveSourceIdentifierFromSubscription(&rds.RemoveSourceIdentifierFromSubscriptionInput{ SourceIdentifier: removing, SubscriptionName: aws.String(d.Id()), }) @@ -303,7 +330,7 @@ func resourceAwsDbEventSubscriptionUpdate(d *schema.ResourceData, meta interface if len(add) > 0 { for _, adding := range add { log.Printf("[INFO] Adding %s as a Source Identifier to %q", *adding, d.Id()) - _, err := rdsconn.AddSourceIdentifierToSubscription(&rds.AddSourceIdentifierToSubscriptionInput{ + _, err := conn.AddSourceIdentifierToSubscription(&rds.AddSourceIdentifierToSubscriptionInput{ SourceIdentifier: adding, SubscriptionName: aws.String(d.Id()), }) @@ -321,28 +348,23 @@ func resourceAwsDbEventSubscriptionUpdate(d *schema.ResourceData, meta interface } func resourceAwsDbEventSubscriptionDelete(d *schema.ResourceData, meta interface{}) error { - rdsconn := meta.(*AWSClient).rdsconn + conn := meta.(*AWSClient).rdsconn deleteOpts := rds.DeleteEventSubscriptionInput{ SubscriptionName: aws.String(d.Id()), } - if 
_, err := rdsconn.DeleteEventSubscription(&deleteOpts); err != nil { - rdserr, ok := err.(awserr.Error) - if !ok { - return fmt.Errorf("Error deleting RDS Event Subscription %s: %s", d.Id(), err) - } - - if rdserr.Code() != "DBEventSubscriptionNotFoundFault" { - log.Printf("[WARN] RDS Event Subscription %s missing during delete", d.Id()) - return fmt.Errorf("Error deleting RDS Event Subscription %s: %s", d.Id(), err) + if _, err := conn.DeleteEventSubscription(&deleteOpts); err != nil { + if isAWSErr(err, rds.ErrCodeSubscriptionNotFoundFault, "") { + return nil } + return fmt.Errorf("Error deleting RDS Event Subscription %s: %s", d.Id(), err) } stateConf := &resource.StateChangeConf{ Pending: []string{"deleting"}, Target: []string{}, - Refresh: resourceAwsDbEventSubscriptionRefreshFunc(d, meta.(*AWSClient).rdsconn), - Timeout: 40 * time.Minute, + Refresh: resourceAwsDbEventSubscriptionRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutDelete), MinTimeout: 10 * time.Second, Delay: 30 * time.Second, // Wait 30 secs before starting } @@ -353,12 +375,10 @@ func resourceAwsDbEventSubscriptionDelete(d *schema.ResourceData, meta interface return err } -func resourceAwsDbEventSubscriptionRefreshFunc( - d *schema.ResourceData, - rdsconn *rds.RDS) resource.StateRefreshFunc { +func resourceAwsDbEventSubscriptionRefreshFunc(name string, conn *rds.RDS) resource.StateRefreshFunc { return func() (interface{}, string, error) { - sub, err := resourceAwsDbEventSubscriptionRetrieve(d.Get("name").(string), rdsconn) + sub, err := resourceAwsDbEventSubscriptionRetrieve(name, conn) if err != nil { log.Printf("Error on retrieving DB Event Subscription when waiting: %s", err) @@ -370,17 +390,9 @@ func resourceAwsDbEventSubscriptionRefreshFunc( } if sub.Status != nil { - log.Printf("[DEBUG] DB Event Subscription status for %s: %s", d.Id(), *sub.Status) + log.Printf("[DEBUG] DB Event Subscription status for %s: %s", name, *sub.Status) } return sub, *sub.Status, nil } } - -func buildRDSEventSubscriptionARN(customerAwsId, subscriptionId, partition, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS partition") - } - arn := fmt.Sprintf("arn:%s:rds:%s:%s:es:%s", partition, region, customerAwsId, subscriptionId) - return arn, nil -} diff --git a/aws/resource_aws_db_event_subscription_test.go b/aws/resource_aws_db_event_subscription_test.go index 97bd8df1d60..fa306256407 100644 --- a/aws/resource_aws_db_event_subscription_test.go +++ b/aws/resource_aws_db_event_subscription_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "regexp" "testing" "github.com/aws/aws-sdk-go/aws" @@ -12,11 +13,36 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSDBEventSubscription_importBasic(t *testing.T) { + resourceName := "aws_db_event_subscription.bar" + rInt := acctest.RandInt() + subscriptionName := fmt.Sprintf("tf-acc-test-rds-event-subs-%d", rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBEventSubscriptionConfig(rInt), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateId: subscriptionName, + }, + }, + }) +} + func TestAccAWSDBEventSubscription_basicUpdate(t *testing.T) { var v rds.EventSubscription rInt := acctest.RandInt() + rName := 
fmt.Sprintf("tf-acc-test-rds-event-subs-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBEventSubscriptionDestroy, @@ -25,26 +51,50 @@ func TestAccAWSDBEventSubscription_basicUpdate(t *testing.T) { Config: testAccAWSDBEventSubscriptionConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBEventSubscriptionExists("aws_db_event_subscription.bar", &v), - resource.TestCheckResourceAttr( - "aws_db_event_subscription.bar", "enabled", "true"), - resource.TestCheckResourceAttr( - "aws_db_event_subscription.bar", "source_type", "db-instance"), - resource.TestCheckResourceAttr( - "aws_db_event_subscription.bar", "name", fmt.Sprintf("tf-acc-test-rds-event-subs-%d", rInt)), - resource.TestCheckResourceAttr( - "aws_db_event_subscription.bar", "tags.Name", "name"), + resource.TestMatchResourceAttr("aws_db_event_subscription.bar", "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:rds:[^:]+:[^:]+:es:%s$", rName))), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "enabled", "true"), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "source_type", "db-instance"), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "name", rName), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "tags.Name", "name"), ), }, { Config: testAccAWSDBEventSubscriptionConfigUpdate(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBEventSubscriptionExists("aws_db_event_subscription.bar", &v), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "enabled", "false"), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "source_type", "db-parameter-group"), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_db_event_subscription.bar", "tags.Name", "new-name"), + ), + }, + }, + }) +} + +func TestAccAWSDBEventSubscription_withPrefix(t *testing.T) { + var v rds.EventSubscription + rInt := acctest.RandInt() + startsWithPrefix := regexp.MustCompile("^tf-acc-test-rds-event-subs-") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBEventSubscriptionConfigWithPrefix(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBEventSubscriptionExists("aws_db_event_subscription.bar", &v), resource.TestCheckResourceAttr( - "aws_db_event_subscription.bar", "enabled", "false"), + "aws_db_event_subscription.bar", "enabled", "true"), resource.TestCheckResourceAttr( - "aws_db_event_subscription.bar", "source_type", "db-parameter-group"), + "aws_db_event_subscription.bar", "source_type", "db-instance"), + resource.TestMatchResourceAttr( + "aws_db_event_subscription.bar", "name", startsWithPrefix), resource.TestCheckResourceAttr( - "aws_db_event_subscription.bar", "tags.Name", "new-name"), + "aws_db_event_subscription.bar", "tags.Name", "name"), ), }, }, @@ -55,7 +105,7 @@ func TestAccAWSDBEventSubscription_withSourceIds(t *testing.T) { var v rds.EventSubscription rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckAWSDBEventSubscriptionDestroy, @@ -96,7 +146,7 @@ func TestAccAWSDBEventSubscription_categoryUpdate(t *testing.T) { var v rds.EventSubscription rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBEventSubscriptionDestroy, @@ -221,6 +271,29 @@ resource "aws_db_event_subscription" "bar" { }`, rInt, rInt) } +func testAccAWSDBEventSubscriptionConfigWithPrefix(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-rds-event-subs-sns-topic-%d" +} + +resource "aws_db_event_subscription" "bar" { + name_prefix = "tf-acc-test-rds-event-subs-" + sns_topic = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "db-instance" + event_categories = [ + "availability", + "backup", + "creation", + "deletion", + "maintenance" + ] + tags { + Name = "name" + } +}`, rInt) +} + func testAccAWSDBEventSubscriptionConfigUpdate(rInt int) string { return fmt.Sprintf(` resource "aws_sns_topic" "aws_sns_topic" { diff --git a/aws/resource_aws_db_instance.go b/aws/resource_aws_db_instance.go index 3ac4bcb8f6e..b7cd598ca24 100644 --- a/aws/resource_aws_db_instance.go +++ b/aws/resource_aws_db_instance.go @@ -8,11 +8,11 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/rds" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsDbInstance() *schema.Resource { @@ -57,6 +57,11 @@ func resourceAwsDbInstance() *schema.Resource { Sensitive: true, }, + "deletion_protection": { + Type: schema.TypeBool, + Optional: true, + }, + "engine": { Type: schema.TypeString, Optional: true, @@ -218,6 +223,46 @@ func resourceAwsDbInstance() *schema.Resource { }, }, + "s3_import": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "snapshot_identifier", + "replicate_source_db", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "bucket_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "bucket_prefix": { + Type: schema.TypeString, + Required: false, + Optional: true, + ForceNew: true, + }, + "ingestion_role": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "source_engine": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "source_engine_version": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + }, + }, + "skip_final_snapshot": { Type: schema.TypeBool, Optional: true, @@ -350,6 +395,33 @@ func resourceAwsDbInstance() *schema.Resource { Computed: true, }, + "enabled_cloudwatch_logs_exports": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{ + "alert", + "audit", + "error", + "general", + "listener", + "slowquery", + "trace", + }, false), + }, + }, + + "domain": { + Type: schema.TypeString, + Optional: true, + }, + + "domain_iam_role_name": { + Type: schema.TypeString, + Optional: true, + }, + "tags": tagsSchema(), }, } @@ -357,6 +429,23 @@ func resourceAwsDbInstance() *schema.Resource { func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).rdsconn + + // Some API calls (e.g. 
CreateDBInstanceReadReplica and + // RestoreDBInstanceFromDBSnapshot do not support all parameters to + // correctly apply all settings in one pass. For missing parameters or + // unsupported configurations, we may need to call ModifyDBInstance + // afterwards to prevent Terraform operators from API errors or needing + // to double apply. + var requiresModifyDbInstance bool + modifyDbInstanceInput := &rds.ModifyDBInstanceInput{ + ApplyImmediately: aws.Bool(true), + } + + // Some ModifyDBInstance parameters (e.g. DBParameterGroupName) require + // a database instance reboot to take affect. During resource creation, + // we expect everything to be in sync before returning completion. + var requiresRebootDbInstance bool + tags := tagsFromMapRDS(d.Get("tags").(map[string]interface{})) var identifier string @@ -380,33 +469,51 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error if v, ok := d.GetOk("replicate_source_db"); ok { opts := rds.CreateDBInstanceReadReplicaInput{ - SourceDBInstanceIdentifier: aws.String(v.(string)), + AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), CopyTagsToSnapshot: aws.Bool(d.Get("copy_tags_to_snapshot").(bool)), + DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), DBInstanceClass: aws.String(d.Get("instance_class").(string)), DBInstanceIdentifier: aws.String(identifier), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), + SourceDBInstanceIdentifier: aws.String(v.(string)), Tags: tags, } - if attr, ok := d.GetOk("iops"); ok { - opts.Iops = aws.Int64(int64(attr.(int))) - } - if attr, ok := d.GetOk("port"); ok { - opts.Port = aws.Int64(int64(attr.(int))) + if attr, ok := d.GetOk("allocated_storage"); ok { + modifyDbInstanceInput.AllocatedStorage = aws.Int64(int64(attr.(int))) + requiresModifyDbInstance = true } if attr, ok := d.GetOk("availability_zone"); ok { opts.AvailabilityZone = aws.String(attr.(string)) } - if attr, ok := d.GetOk("storage_type"); ok { - opts.StorageType = aws.String(attr.(string)) + if attr, ok := d.GetOk("backup_retention_period"); ok { + modifyDbInstanceInput.BackupRetentionPeriod = aws.Int64(int64(attr.(int))) + requiresModifyDbInstance = true + } + + if attr, ok := d.GetOk("backup_window"); ok { + modifyDbInstanceInput.PreferredBackupWindow = aws.String(attr.(string)) + requiresModifyDbInstance = true } if attr, ok := d.GetOk("db_subnet_group_name"); ok { opts.DBSubnetGroupName = aws.String(attr.(string)) } + if attr, ok := d.GetOk("enabled_cloudwatch_logs_exports"); ok && len(attr.([]interface{})) > 0 { + opts.EnableCloudwatchLogsExports = expandStringList(attr.([]interface{})) + } + + if attr, ok := d.GetOk("iam_database_authentication_enabled"); ok { + opts.EnableIAMDatabaseAuthentication = aws.Bool(attr.(bool)) + } + + if attr, ok := d.GetOk("iops"); ok { + opts.Iops = aws.Int64(int64(attr.(int))) + } + if attr, ok := d.GetOk("kms_key_id"); ok { opts.KmsKeyId = aws.String(attr.(string)) if arnParts := strings.Split(v.(string), ":"); len(arnParts) >= 4 { @@ -414,32 +521,244 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error } } - if attr, ok := d.GetOk("monitoring_role_arn"); ok { - opts.MonitoringRoleArn = aws.String(attr.(string)) + if attr, ok := d.GetOk("maintenance_window"); ok { + modifyDbInstanceInput.PreferredMaintenanceWindow = aws.String(attr.(string)) + requiresModifyDbInstance = true } if attr, ok := d.GetOk("monitoring_interval"); ok { opts.MonitoringInterval = aws.Int64(int64(attr.(int))) } + if 
attr, ok := d.GetOk("monitoring_role_arn"); ok { + opts.MonitoringRoleArn = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("multi_az"); ok { + opts.MultiAZ = aws.Bool(attr.(bool)) + } + if attr, ok := d.GetOk("option_group_name"); ok { opts.OptionGroupName = aws.String(attr.(string)) } + if attr, ok := d.GetOk("parameter_group_name"); ok { + modifyDbInstanceInput.DBParameterGroupName = aws.String(attr.(string)) + requiresModifyDbInstance = true + requiresRebootDbInstance = true + } + + if attr, ok := d.GetOk("password"); ok { + modifyDbInstanceInput.MasterUserPassword = aws.String(attr.(string)) + requiresModifyDbInstance = true + } + + if attr, ok := d.GetOk("port"); ok { + opts.Port = aws.Int64(int64(attr.(int))) + } + + if attr := d.Get("security_group_names").(*schema.Set); attr.Len() > 0 { + modifyDbInstanceInput.DBSecurityGroups = expandStringSet(attr) + requiresModifyDbInstance = true + } + + if attr, ok := d.GetOk("storage_type"); ok { + opts.StorageType = aws.String(attr.(string)) + } + + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + modifyDbInstanceInput.VpcSecurityGroupIds = expandStringSet(attr) + requiresModifyDbInstance = true + } + log.Printf("[DEBUG] DB Instance Replica create configuration: %#v", opts) _, err := conn.CreateDBInstanceReadReplica(&opts) if err != nil { return fmt.Errorf("Error creating DB Instance: %s", err) } + } else if v, ok := d.GetOk("s3_import"); ok { + + if _, ok := d.GetOk("allocated_storage"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "allocated_storage": required field is not set`, d.Get("name").(string)) + } + if _, ok := d.GetOk("engine"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "engine": required field is not set`, d.Get("name").(string)) + } + if _, ok := d.GetOk("password"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "password": required field is not set`, d.Get("name").(string)) + } + if _, ok := d.GetOk("username"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "username": required field is not set`, d.Get("name").(string)) + } + + s3_bucket := v.([]interface{})[0].(map[string]interface{}) + opts := rds.RestoreDBInstanceFromS3Input{ + AllocatedStorage: aws.Int64(int64(d.Get("allocated_storage").(int))), + AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), + CopyTagsToSnapshot: aws.Bool(d.Get("copy_tags_to_snapshot").(bool)), + DBName: aws.String(d.Get("name").(string)), + DBInstanceClass: aws.String(d.Get("instance_class").(string)), + DBInstanceIdentifier: aws.String(d.Get("identifier").(string)), + DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), + Engine: aws.String(d.Get("engine").(string)), + EngineVersion: aws.String(d.Get("engine_version").(string)), + S3BucketName: aws.String(s3_bucket["bucket_name"].(string)), + S3Prefix: aws.String(s3_bucket["bucket_prefix"].(string)), + S3IngestionRoleArn: aws.String(s3_bucket["ingestion_role"].(string)), + MasterUsername: aws.String(d.Get("username").(string)), + MasterUserPassword: aws.String(d.Get("password").(string)), + PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), + StorageEncrypted: aws.Bool(d.Get("storage_encrypted").(bool)), + SourceEngine: aws.String(s3_bucket["source_engine"].(string)), + SourceEngineVersion: aws.String(s3_bucket["source_engine_version"].(string)), + Tags: tags, + } + + if attr, ok := d.GetOk("multi_az"); ok { + opts.MultiAZ = aws.Bool(attr.(bool)) + + } + + if _, ok := 
d.GetOk("character_set_name"); ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "character_set_name" doesn't work with with restores"`, d.Get("name").(string)) + } + if _, ok := d.GetOk("timezone"); ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "timezone" doesn't work with with restores"`, d.Get("name").(string)) + } + + attr := d.Get("backup_retention_period") + opts.BackupRetentionPeriod = aws.Int64(int64(attr.(int))) + + if attr, ok := d.GetOk("maintenance_window"); ok { + opts.PreferredMaintenanceWindow = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("backup_window"); ok { + opts.PreferredBackupWindow = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("license_model"); ok { + opts.LicenseModel = aws.String(attr.(string)) + } + if attr, ok := d.GetOk("parameter_group_name"); ok { + opts.DBParameterGroupName = aws.String(attr.(string)) + } + + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + var s []*string + for _, v := range attr.List() { + s = append(s, aws.String(v.(string))) + } + opts.VpcSecurityGroupIds = s + } + + if attr := d.Get("security_group_names").(*schema.Set); attr.Len() > 0 { + var s []*string + for _, v := range attr.List() { + s = append(s, aws.String(v.(string))) + } + opts.DBSecurityGroups = s + } + if attr, ok := d.GetOk("storage_type"); ok { + opts.StorageType = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("db_subnet_group_name"); ok { + opts.DBSubnetGroupName = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("iops"); ok { + opts.Iops = aws.Int64(int64(attr.(int))) + } + + if attr, ok := d.GetOk("port"); ok { + opts.Port = aws.Int64(int64(attr.(int))) + } + + if attr, ok := d.GetOk("availability_zone"); ok { + opts.AvailabilityZone = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("monitoring_role_arn"); ok { + opts.MonitoringRoleArn = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("monitoring_interval"); ok { + opts.MonitoringInterval = aws.Int64(int64(attr.(int))) + } + + if attr, ok := d.GetOk("option_group_name"); ok { + opts.OptionGroupName = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("kms_key_id"); ok { + opts.KmsKeyId = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("iam_database_authentication_enabled"); ok { + opts.EnableIAMDatabaseAuthentication = aws.Bool(attr.(bool)) + } + + log.Printf("[DEBUG] DB Instance S3 Restore configuration: %#v", opts) + var err error + // Retry for IAM eventual consistency + err = resource.Retry(2*time.Minute, func() *resource.RetryError { + _, err = conn.RestoreDBInstanceFromS3(&opts) + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "ENHANCED_MONITORING") { + return resource.RetryableError(err) + } + if isAWSErr(err, "InvalidParameterValue", "S3_SNAPSHOT_INGESTION") { + return resource.RetryableError(err) + } + if isAWSErr(err, "InvalidParameterValue", "S3 bucket cannot be found") { + return resource.RetryableError(err) + } + // InvalidParameterValue: Files from the specified Amazon S3 bucket cannot be downloaded. Make sure that you have created an AWS Identity and Access Management (IAM) role that lets Amazon RDS access Amazon S3 for you. 
+ if isAWSErr(err, "InvalidParameterValue", "Files from the specified Amazon S3 bucket cannot be downloaded") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Error creating DB Instance: %s", err) + } + + d.SetId(d.Get("identifier").(string)) + + log.Printf("[INFO] DB Instance ID: %s", d.Id()) + + log.Println( + "[INFO] Waiting for DB Instance to be available") + + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsDbInstanceCreatePendingStates, + Target: []string{"available", "storage-optimization"}, + Refresh: resourceAwsDbInstanceStateRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, // Wait 30 secs before starting + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + return resourceAwsDbInstanceRead(d, meta) } else if _, ok := d.GetOk("snapshot_identifier"); ok { opts := rds.RestoreDBInstanceFromDBSnapshotInput{ + AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), + CopyTagsToSnapshot: aws.Bool(d.Get("copy_tags_to_snapshot").(bool)), DBInstanceClass: aws.String(d.Get("instance_class").(string)), DBInstanceIdentifier: aws.String(d.Get("identifier").(string)), DBSnapshotIdentifier: aws.String(d.Get("snapshot_identifier").(string)), - AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), + DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), Tags: tags, - CopyTagsToSnapshot: aws.Bool(d.Get("copy_tags_to_snapshot").(bool)), } if attr, ok := d.GetOk("name"); ok { @@ -453,18 +772,49 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error } } + if attr, ok := d.GetOk("allocated_storage"); ok { + modifyDbInstanceInput.AllocatedStorage = aws.Int64(int64(attr.(int))) + requiresModifyDbInstance = true + } + if attr, ok := d.GetOk("availability_zone"); ok { opts.AvailabilityZone = aws.String(attr.(string)) } + if attr, ok := d.GetOkExists("backup_retention_period"); ok { + modifyDbInstanceInput.BackupRetentionPeriod = aws.Int64(int64(attr.(int))) + requiresModifyDbInstance = true + } + + if attr, ok := d.GetOk("backup_window"); ok { + modifyDbInstanceInput.PreferredBackupWindow = aws.String(attr.(string)) + requiresModifyDbInstance = true + } + if attr, ok := d.GetOk("db_subnet_group_name"); ok { opts.DBSubnetGroupName = aws.String(attr.(string)) } + if attr, ok := d.GetOk("domain"); ok { + opts.Domain = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("domain_iam_role_name"); ok { + opts.DomainIAMRoleName = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("enabled_cloudwatch_logs_exports"); ok && len(attr.([]interface{})) > 0 { + opts.EnableCloudwatchLogsExports = expandStringList(attr.([]interface{})) + } + if attr, ok := d.GetOk("engine"); ok { opts.Engine = aws.String(attr.(string)) } + if attr, ok := d.GetOk("iam_database_authentication_enabled"); ok { + opts.EnableIAMDatabaseAuthentication = aws.Bool(attr.(bool)) + } + if attr, ok := d.GetOk("iops"); ok { opts.Iops = aws.Int64(int64(attr.(int))) } @@ -473,77 +823,91 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error opts.LicenseModel = aws.String(attr.(string)) } + if attr, ok := d.GetOk("maintenance_window"); ok { + modifyDbInstanceInput.PreferredMaintenanceWindow = aws.String(attr.(string)) 
+ requiresModifyDbInstance = true + } + + if attr, ok := d.GetOk("monitoring_interval"); ok { + modifyDbInstanceInput.MonitoringInterval = aws.Int64(int64(attr.(int))) + requiresModifyDbInstance = true + } + + if attr, ok := d.GetOk("monitoring_role_arn"); ok { + modifyDbInstanceInput.MonitoringRoleArn = aws.String(attr.(string)) + requiresModifyDbInstance = true + } + if attr, ok := d.GetOk("multi_az"); ok { - opts.MultiAZ = aws.Bool(attr.(bool)) + // When using SQL Server engine with MultiAZ enabled, its not + // possible to immediately enable mirroring since + // BackupRetentionPeriod is not available as a parameter to + // RestoreDBInstanceFromDBSnapshot and you receive an error. e.g. + // InvalidParameterValue: Mirroring cannot be applied to instances with backup retention set to zero. + // If we know the engine, prevent the error upfront. + if v, ok := d.GetOk("engine"); ok && strings.HasPrefix(strings.ToLower(v.(string)), "sqlserver") { + modifyDbInstanceInput.MultiAZ = aws.Bool(attr.(bool)) + requiresModifyDbInstance = true + } else { + opts.MultiAZ = aws.Bool(attr.(bool)) + } } if attr, ok := d.GetOk("option_group_name"); ok { opts.OptionGroupName = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("parameter_group_name"); ok { + opts.DBParameterGroupName = aws.String(attr.(string)) + } + if attr, ok := d.GetOk("password"); ok { + modifyDbInstanceInput.MasterUserPassword = aws.String(attr.(string)) + requiresModifyDbInstance = true } if attr, ok := d.GetOk("port"); ok { opts.Port = aws.Int64(int64(attr.(int))) } - if attr, ok := d.GetOk("tde_credential_arn"); ok { - opts.TdeCredentialArn = aws.String(attr.(string)) + if attr := d.Get("security_group_names").(*schema.Set); attr.Len() > 0 { + modifyDbInstanceInput.DBSecurityGroups = expandStringSet(attr) + requiresModifyDbInstance = true } if attr, ok := d.GetOk("storage_type"); ok { opts.StorageType = aws.String(attr.(string)) } - log.Printf("[DEBUG] DB Instance restore from snapshot configuration: %s", opts) - _, err := conn.RestoreDBInstanceFromDBSnapshot(&opts) - if err != nil { - return fmt.Errorf("Error creating DB Instance: %s", err) - } - - var sgUpdate bool - var passwordUpdate bool - - if _, ok := d.GetOk("password"); ok { - passwordUpdate = true + if attr, ok := d.GetOk("tde_credential_arn"); ok { + opts.TdeCredentialArn = aws.String(attr.(string)) } if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { - sgUpdate = true - } - if attr := d.Get("security_group_names").(*schema.Set); attr.Len() > 0 { - sgUpdate = true + modifyDbInstanceInput.VpcSecurityGroupIds = expandStringSet(attr) + requiresModifyDbInstance = true } - if sgUpdate || passwordUpdate { - log.Printf("[INFO] DB is restoring from snapshot with default security, but custom security should be set, will now update after snapshot is restored!") - - // wait for instance to get up and then modify security - d.SetId(d.Get("identifier").(string)) - - log.Printf("[INFO] DB Instance ID: %s", d.Id()) - - log.Println( - "[INFO] Waiting for DB Instance to be available") - stateConf := &resource.StateChangeConf{ - Pending: resourceAwsDbInstanceCreatePendingStates, - Target: []string{"available", "storage-optimization"}, - Refresh: resourceAwsDbInstanceStateRefreshFunc(d.Id(), conn), - Timeout: d.Timeout(schema.TimeoutCreate), - MinTimeout: 10 * time.Second, - Delay: 30 * time.Second, // Wait 30 secs before starting - } - - // Wait, catching any errors - _, err := stateConf.WaitForState() - if err != nil { - return err - } + log.Printf("[DEBUG] 
DB Instance restore from snapshot configuration: %s", opts) + _, err := conn.RestoreDBInstanceFromDBSnapshot(&opts) - err = resourceAwsDbInstanceUpdate(d, meta) - if err != nil { - return err - } + // When using SQL Server engine with MultiAZ enabled, its not + // possible to immediately enable mirroring since + // BackupRetentionPeriod is not available as a parameter to + // RestoreDBInstanceFromDBSnapshot and you receive an error. e.g. + // InvalidParameterValue: Mirroring cannot be applied to instances with backup retention set to zero. + // Since engine is not a required argument when using snapshot_identifier + // and the RDS API determines this condition, we catch the error + // and remove the invalid configuration for it to be fixed afterwards. + if isAWSErr(err, "InvalidParameterValue", "Mirroring cannot be applied to instances with backup retention set to zero") { + opts.MultiAZ = aws.Bool(false) + modifyDbInstanceInput.MultiAZ = aws.Bool(true) + requiresModifyDbInstance = true + _, err = conn.RestoreDBInstanceFromDBSnapshot(&opts) + } + if err != nil { + return fmt.Errorf("Error creating DB Instance: %s", err) } } else { if _, ok := d.GetOk("allocated_storage"); !ok { @@ -563,6 +927,7 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error DBName: aws.String(d.Get("name").(string)), DBInstanceClass: aws.String(d.Get("instance_class").(string)), DBInstanceIdentifier: aws.String(d.Get("identifier").(string)), + DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), MasterUsername: aws.String(d.Get("username").(string)), MasterUserPassword: aws.String(d.Get("password").(string)), Engine: aws.String(d.Get("engine").(string)), @@ -627,6 +992,10 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error opts.DBSubnetGroupName = aws.String(attr.(string)) } + if attr, ok := d.GetOk("enabled_cloudwatch_logs_exports"); ok && len(attr.([]interface{})) > 0 { + opts.EnableCloudwatchLogsExports = expandStringList(attr.([]interface{})) + } + if attr, ok := d.GetOk("iops"); ok { opts.Iops = aws.Int64(int64(attr.(int))) } @@ -659,32 +1028,37 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error opts.EnableIAMDatabaseAuthentication = aws.Bool(attr.(bool)) } + if attr, ok := d.GetOk("domain"); ok { + opts.Domain = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("domain_iam_role_name"); ok { + opts.DomainIAMRoleName = aws.String(attr.(string)) + } + log.Printf("[DEBUG] DB Instance create configuration: %#v", opts) var err error err = resource.Retry(5*time.Minute, func() *resource.RetryError { _, err = conn.CreateDBInstance(&opts) if err != nil { - if awsErr, ok := err.(awserr.Error); ok { - if awsErr.Code() == "InvalidParameterValue" && strings.Contains(awsErr.Message(), "ENHANCED_MONITORING") { - return resource.RetryableError(awsErr) - } + if isAWSErr(err, "InvalidParameterValue", "ENHANCED_MONITORING") { + return resource.RetryableError(err) } return resource.NonRetryableError(err) } return nil }) if err != nil { + if isAWSErr(err, "InvalidParameterValue", "") { + return fmt.Errorf("Error creating DB Instance: %s, %+v", err, opts) + } return fmt.Errorf("Error creating DB Instance: %s", err) + } } d.SetId(d.Get("identifier").(string)) - log.Printf("[INFO] DB Instance ID: %s", d.Id()) - - log.Println( - "[INFO] Waiting for DB Instance to be available") - stateConf := &resource.StateChangeConf{ Pending: resourceAwsDbInstanceCreatePendingStates, Target: []string{"available", 
"storage-optimization"}, @@ -694,12 +1068,46 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error Delay: 30 * time.Second, // Wait 30 secs before starting } - // Wait, catching any errors + log.Printf("[INFO] Waiting for DB Instance (%s) to be available", d.Id()) _, err := stateConf.WaitForState() if err != nil { return err } + if requiresModifyDbInstance { + modifyDbInstanceInput.DBInstanceIdentifier = aws.String(d.Id()) + + log.Printf("[INFO] DB Instance (%s) configuration requires ModifyDBInstance: %s", d.Id(), modifyDbInstanceInput) + _, err := conn.ModifyDBInstance(modifyDbInstanceInput) + if err != nil { + return fmt.Errorf("error modifying DB Instance (%s): %s", d.Id(), err) + } + + log.Printf("[INFO] Waiting for DB Instance (%s) to be available", d.Id()) + err = waitUntilAwsDbInstanceIsAvailableAfterUpdate(d.Id(), conn, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for DB Instance (%s) to be available: %s", d.Id(), err) + } + } + + if requiresRebootDbInstance { + rebootDbInstanceInput := &rds.RebootDBInstanceInput{ + DBInstanceIdentifier: aws.String(d.Id()), + } + + log.Printf("[INFO] DB Instance (%s) configuration requires RebootDBInstance: %s", d.Id(), rebootDbInstanceInput) + _, err := conn.RebootDBInstance(rebootDbInstanceInput) + if err != nil { + return fmt.Errorf("error rebooting DB Instance (%s): %s", d.Id(), err) + } + + log.Printf("[INFO] Waiting for DB Instance (%s) to be available", d.Id()) + err = waitUntilAwsDbInstanceIsAvailableAfterUpdate(d.Id(), conn, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for DB Instance (%s) to be available: %s", d.Id(), err) + } + } + return resourceAwsDbInstanceRead(d, meta) } @@ -718,6 +1126,7 @@ func resourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error { d.Set("identifier", v.DBInstanceIdentifier) d.Set("resource_id", v.DbiResourceId) d.Set("username", v.MasterUsername) + d.Set("deletion_protection", v.DeletionProtection) d.Set("engine", v.Engine) d.Set("engine_version", v.EngineVersion) d.Set("allocated_storage", v.AllocatedStorage) @@ -774,32 +1183,36 @@ func resourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error { d.Set("monitoring_role_arn", v.MonitoringRoleArn) } + if err := d.Set("enabled_cloudwatch_logs_exports", flattenStringList(v.EnabledCloudwatchLogsExports)); err != nil { + return fmt.Errorf("error setting enabled_cloudwatch_logs_exports: %s", err) + } + + d.Set("domain", "") + d.Set("domain_iam_role_name", "") + if len(v.DomainMemberships) > 0 && v.DomainMemberships[0] != nil { + d.Set("domain", v.DomainMemberships[0].Domain) + d.Set("domain_iam_role_name", v.DomainMemberships[0].IAMRoleName) + } + // list tags for resource // set tags conn := meta.(*AWSClient).rdsconn - arn, err := buildRDSARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - name := "" - if v.DBName != nil && *v.DBName != "" { - name = *v.DBName - } - log.Printf("[DEBUG] Error building ARN for DB Instance, not setting Tags for DB %s", name) - } else { - d.Set("arn", arn) - resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ - ResourceName: aws.String(arn), - }) - if err != nil { - log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) - } + arn := aws.StringValue(v.DBInstanceArn) + d.Set("arn", arn) + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) - var dt 
[]*rds.Tag - if len(resp.TagList) > 0 { - dt = resp.TagList - } - d.Set("tags", tagsToMapRDS(dt)) + if err != nil { + return fmt.Errorf("Error retrieving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList } + d.Set("tags", tagsToMapRDS(dt)) // Create an empty schema.Set to hold all vpc security group ids ids := &schema.Set{ @@ -825,7 +1238,7 @@ func resourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error { replicas = append(replicas, *v) } if err := d.Set("replicas", replicas); err != nil { - return fmt.Errorf("[DEBUG] Error setting replicas attribute: %#v, error: %#v", replicas, err) + return fmt.Errorf("Error setting replicas attribute: %#v, error: %#v", replicas, err) } d.Set("replicate_source_db", v.ReadReplicaSourceDBInstanceIdentifier) @@ -854,14 +1267,30 @@ func resourceAwsDbInstanceDelete(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] DB Instance destroy configuration: %v", opts) - if _, err := conn.DeleteDBInstance(&opts); err != nil { - return err + _, err := conn.DeleteDBInstance(&opts) + + // InvalidDBInstanceState: Instance XXX is already being deleted. + if err != nil && !isAWSErr(err, rds.ErrCodeInvalidDBInstanceStateFault, "is already being deleted") { + return fmt.Errorf("error deleting Database Instance %q: %s", d.Id(), err) } log.Println("[INFO] Waiting for DB Instance to be destroyed") return waitUntilAwsDbInstanceIsDeleted(d.Id(), conn, d.Timeout(schema.TimeoutDelete)) } +func waitUntilAwsDbInstanceIsAvailableAfterUpdate(id string, conn *rds.RDS, timeout time.Duration) error { + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsDbInstanceUpdatePendingStates, + Target: []string{"available", "storage-optimization"}, + Refresh: resourceAwsDbInstanceStateRefreshFunc(id, conn), + Timeout: timeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, // Wait 30 secs before starting + } + _, err := stateConf.WaitForState() + return err +} + func waitUntilAwsDbInstanceIsDeleted(id string, conn *rds.RDS, timeout time.Duration) error { stateConf := &resource.StateChangeConf{ Pending: resourceAwsDbInstanceDeletePendingStates, @@ -884,9 +1313,10 @@ func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error ApplyImmediately: aws.Bool(d.Get("apply_immediately").(bool)), DBInstanceIdentifier: aws.String(d.Id()), } + d.SetPartial("apply_immediately") - if !d.Get("apply_immediately").(bool) { + if !aws.BoolValue(req.ApplyImmediately) { log.Println("[INFO] Only settings updating, instance changes will be applied in next maintenance window") } @@ -913,6 +1343,11 @@ func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error req.CopyTagsToSnapshot = aws.Bool(d.Get("copy_tags_to_snapshot").(bool)) requestUpdate = true } + if d.HasChange("deletion_protection") { + d.SetPartial("deletion_protection") + req.DeletionProtection = aws.Bool(d.Get("deletion_protection").(bool)) + requestUpdate = true + } if d.HasChange("instance_class") { d.SetPartial("instance_class") req.DBInstanceClass = aws.String(d.Get("instance_class").(string)) @@ -983,22 +1418,14 @@ func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error if d.HasChange("vpc_security_group_ids") { if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { - var s []*string - for _, v := range attr.List() { - s = append(s, aws.String(v.(string))) - } - req.VpcSecurityGroupIds = s + req.VpcSecurityGroupIds = expandStringSet(attr) } requestUpdate = true } 
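+	// expandStringSet converts the *schema.Set into the []*string slice the RDS
+	// API expects, replacing the manual append loops removed above; the same
+	// helper is used for security_group_names below.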
if d.HasChange("security_group_names") { if attr := d.Get("security_group_names").(*schema.Set); attr.Len() > 0 { - var s []*string - for _, v := range attr.List() { - s = append(s, aws.String(v.(string))) - } - req.DBSecurityGroups = s + req.DBSecurityGroups = expandStringSet(attr) } requestUpdate = true } @@ -1014,17 +1441,31 @@ func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error req.DBPortNumber = aws.Int64(int64(d.Get("port").(int))) requestUpdate = true } - if d.HasChange("db_subnet_group_name") && !d.IsNewResource() { + if d.HasChange("db_subnet_group_name") { d.SetPartial("db_subnet_group_name") req.DBSubnetGroupName = aws.String(d.Get("db_subnet_group_name").(string)) requestUpdate = true } + if d.HasChange("enabled_cloudwatch_logs_exports") { + d.SetPartial("enabled_cloudwatch_logs_exports") + req.CloudwatchLogsExportConfiguration = buildCloudwatchLogsExportConfiguration(d) + requestUpdate = true + } + if d.HasChange("iam_database_authentication_enabled") { req.EnableIAMDatabaseAuthentication = aws.Bool(d.Get("iam_database_authentication_enabled").(bool)) requestUpdate = true } + if d.HasChange("domain") || d.HasChange("domain_iam_role_name") { + d.SetPartial("domain") + d.SetPartial("domain_iam_role_name") + req.Domain = aws.String(d.Get("domain").(string)) + req.DomainIAMRoleName = aws.String(d.Get("domain_iam_role_name").(string)) + requestUpdate = true + } + log.Printf("[DEBUG] Send DB Instance Modification request: %t", requestUpdate) if requestUpdate { log.Printf("[DEBUG] DB Instance Modification request: %s", req) @@ -1033,21 +1474,10 @@ func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("Error modifying DB Instance %s: %s", d.Id(), err) } - log.Println("[INFO] Waiting for DB Instance to be available") - - stateConf := &resource.StateChangeConf{ - Pending: resourceAwsDbInstanceUpdatePendingStates, - Target: []string{"available", "storage-optimization"}, - Refresh: resourceAwsDbInstanceStateRefreshFunc(d.Id(), conn), - Timeout: d.Timeout(schema.TimeoutUpdate), - MinTimeout: 10 * time.Second, - Delay: 30 * time.Second, // Wait 30 secs before starting - } - - // Wait, catching any errors - _, dbStateErr := stateConf.WaitForState() - if dbStateErr != nil { - return dbStateErr + log.Printf("[DEBUG] Waiting for DB Instance (%s) to be available", d.Id()) + err = waitUntilAwsDbInstanceIsAvailableAfterUpdate(d.Id(), conn, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for DB Instance (%s) to be available: %s", d.Id(), err) } } @@ -1073,13 +1503,14 @@ func resourceAwsDbInstanceUpdate(d *schema.ResourceData, meta interface{}) error } } - if arn, err := buildRDSARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(conn, d, arn); err != nil { + if d.HasChange("tags") { + if err := setTagsRDS(conn, d, d.Get("arn").(string)); err != nil { return err } else { d.SetPartial("tags") } } + d.Partial(false) return resourceAwsDbInstanceRead(d, meta) @@ -1098,8 +1529,7 @@ func resourceAwsDbInstanceRetrieve(id string, conn *rds.RDS) (*rds.DBInstance, e resp, err := conn.DescribeDBInstances(&opts) if err != nil { - dbinstanceerr, ok := err.(awserr.Error) - if ok && dbinstanceerr.Code() == "DBInstanceNotFound" { + if isAWSErr(err, rds.ErrCodeDBInstanceNotFoundFault, "") { return nil, nil } return nil, fmt.Errorf("Error retrieving DB Instances: %s", err) @@ -1145,21 +1575,44 @@ func 
resourceAwsDbInstanceStateRefreshFunc(id string, conn *rds.RDS) resource.St } } -func buildRDSARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS partition") +func buildCloudwatchLogsExportConfiguration(d *schema.ResourceData) *rds.CloudwatchLogsExportConfiguration { + + oraw, nraw := d.GetChange("enabled_cloudwatch_logs_exports") + o := oraw.([]interface{}) + n := nraw.([]interface{}) + + create, disable := diffCloudwatchLogsExportConfiguration(o, n) + + return &rds.CloudwatchLogsExportConfiguration{ + EnableLogTypes: expandStringList(create), + DisableLogTypes: expandStringList(disable), } - if accountid == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS Account ID") +} + +func diffCloudwatchLogsExportConfiguration(old, new []interface{}) ([]interface{}, []interface{}) { + create := make([]interface{}, 0) + disable := make([]interface{}, 0) + + for _, n := range new { + if _, contains := sliceContainsString(old, n.(string)); !contains { + create = append(create, n) + } } - arn := fmt.Sprintf("arn:%s:rds:%s:%s:db:%s", partition, region, accountid, identifier) - return arn, nil + + for _, o := range old { + if _, contains := sliceContainsString(new, o.(string)); !contains { + disable = append(disable, o) + } + } + + return create, disable } // Database instance status: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Status.html var resourceAwsDbInstanceCreatePendingStates = []string{ "backing-up", "configuring-enhanced-monitoring", + "configuring-log-exports", "creating", "maintenance", "modifying", @@ -1175,6 +1628,7 @@ var resourceAwsDbInstanceDeletePendingStates = []string{ "available", "backing-up", "configuring-enhanced-monitoring", + "configuring-log-exports", "creating", "deleting", "incompatible-parameters", @@ -1188,6 +1642,7 @@ var resourceAwsDbInstanceDeletePendingStates = []string{ var resourceAwsDbInstanceUpdatePendingStates = []string{ "backing-up", "configuring-enhanced-monitoring", + "configuring-log-exports", "creating", "maintenance", "modifying", diff --git a/aws/resource_aws_db_instance_test.go b/aws/resource_aws_db_instance_test.go index 3dddda15f11..ea156a16cbd 100644 --- a/aws/resource_aws_db_instance_test.go +++ b/aws/resource_aws_db_instance_test.go @@ -3,7 +3,6 @@ package aws import ( "fmt" "log" - "math/rand" "os" "regexp" "strings" @@ -15,7 +14,6 @@ import ( "github.com/hashicorp/terraform/terraform" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/rds" ) @@ -73,6 +71,10 @@ func testSweepDbInstances(region string) error { return !lastPage }) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping RDS DB Instance sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error retrieving DB instances: %s", err) } @@ -80,9 +82,10 @@ func testSweepDbInstances(region string) error { } func TestAccAWSDBInstance_basic(t *testing.T) { - var v rds.DBInstance + var dbInstance1 rds.DBInstance + resourceName := "aws_db_instance.bar" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, @@ -90,28 +93,49 @@ func TestAccAWSDBInstance_basic(t *testing.T) { { Config: testAccAWSDBInstanceConfig, Check: resource.ComposeTestCheckFunc( - 
testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), - testAccCheckAWSDBInstanceAttributes(&v), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "allocated_storage", "10"), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "engine", "mysql"), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "license_model", "general-public-license"), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "instance_class", "db.t2.micro"), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "name", "baz"), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "username", "foo"), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "parameter_group_name", "default.mysql5.6"), - resource.TestCheckResourceAttrSet("aws_db_instance.bar", "hosted_zone_id"), - resource.TestCheckResourceAttrSet("aws_db_instance.bar", "ca_cert_identifier"), - resource.TestCheckResourceAttrSet( - "aws_db_instance.bar", "resource_id"), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance1), + testAccCheckAWSDBInstanceAttributes(&dbInstance1), + resource.TestCheckResourceAttr(resourceName, "allocated_storage", "10"), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "true"), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "rds", regexp.MustCompile(`db:.+`)), + resource.TestCheckResourceAttrSet(resourceName, "availability_zone"), + resource.TestCheckResourceAttr(resourceName, "backup_retention_period", "0"), + resource.TestCheckResourceAttrSet(resourceName, "backup_window"), + resource.TestCheckResourceAttrSet(resourceName, "ca_cert_identifier"), + resource.TestCheckResourceAttr(resourceName, "copy_tags_to_snapshot", "false"), + resource.TestCheckResourceAttr(resourceName, "db_subnet_group_name", "default"), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "false"), + resource.TestCheckResourceAttr(resourceName, "enabled_cloudwatch_logs_exports.#", "0"), + resource.TestCheckResourceAttrSet(resourceName, "endpoint"), + resource.TestCheckResourceAttr(resourceName, "engine", "mysql"), + resource.TestCheckResourceAttrSet(resourceName, "engine_version"), + resource.TestCheckResourceAttrSet(resourceName, "hosted_zone_id"), + resource.TestCheckResourceAttr(resourceName, "iam_database_authentication_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "instance_class", "db.t2.micro"), + resource.TestCheckResourceAttr(resourceName, "license_model", "general-public-license"), + resource.TestCheckResourceAttrSet(resourceName, "maintenance_window"), + resource.TestCheckResourceAttr(resourceName, "name", "baz"), + resource.TestCheckResourceAttr(resourceName, "option_group_name", "default:mysql-5-6"), + resource.TestCheckResourceAttr(resourceName, "parameter_group_name", "default.mysql5.6"), + resource.TestCheckResourceAttr(resourceName, "port", "3306"), + resource.TestCheckResourceAttr(resourceName, "publicly_accessible", "false"), + resource.TestCheckResourceAttrSet(resourceName, "resource_id"), + resource.TestCheckResourceAttr(resourceName, "status", "available"), + resource.TestCheckResourceAttr(resourceName, "storage_encrypted", "false"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "username", "foo"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "password", + "skip_final_snapshot", + "final_snapshot_identifier", + }, + }, }, }) } @@ -119,7 +143,7 @@ func 
TestAccAWSDBInstance_basic(t *testing.T) { func TestAccAWSDBInstance_namePrefix(t *testing.T) { var v rds.DBInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, @@ -140,7 +164,7 @@ func TestAccAWSDBInstance_namePrefix(t *testing.T) { func TestAccAWSDBInstance_generatedName(t *testing.T) { var v rds.DBInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, @@ -160,10 +184,10 @@ func TestAccAWSDBInstance_kmsKey(t *testing.T) { var v rds.DBInstance keyRegex := regexp.MustCompile("^arn:aws:kms:") - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + ri := acctest.RandInt() config := fmt.Sprintf(testAccAWSDBInstanceConfigKmsKeyId, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, @@ -185,7 +209,7 @@ func TestAccAWSDBInstance_subnetGroup(t *testing.T) { var v rds.DBInstance rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, @@ -215,7 +239,7 @@ func TestAccAWSDBInstance_optionGroup(t *testing.T) { rName := fmt.Sprintf("tf-option-test-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, @@ -236,7 +260,7 @@ func TestAccAWSDBInstance_optionGroup(t *testing.T) { func TestAccAWSDBInstance_iamAuth(t *testing.T) { var v rds.DBInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, @@ -254,36 +278,59 @@ func TestAccAWSDBInstance_iamAuth(t *testing.T) { }) } -func TestAccAWSDBInstance_replica(t *testing.T) { - var s, r rds.DBInstance +func TestAccAWSDBInstance_DeletionProtection(t *testing.T) { + var dbInstance rds.DBInstance + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_db_instance.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: testAccReplicaInstanceConfig(rand.New(rand.NewSource(time.Now().UnixNano())).Int()), + Config: testAccAWSDBInstanceConfig_DeletionProtection(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &s), - testAccCheckAWSDBInstanceExists("aws_db_instance.replica", &r), - testAccCheckAWSDBInstanceReplicaAttributes(&s, &r), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "final_snapshot_identifier", + "password", + "skip_final_snapshot", + }, + }, + { + Config: 
testAccAWSDBInstanceConfig_DeletionProtection(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "false"), ), }, }, }) } -func TestAccAWSDBInstance_noSnapshot(t *testing.T) { +func TestAccAWSDBInstance_FinalSnapshotIdentifier(t *testing.T) { var snap rds.DBInstance + rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDBInstanceNoSnapshot, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + // testAccCheckAWSDBInstanceSnapshot verifies a database snapshot is + // created, and subequently deletes it + CheckDestroy: testAccCheckAWSDBInstanceSnapshot, Steps: []resource.TestStep{ { - Config: testAccSnapshotInstanceConfig(), + Config: testAccAWSDBInstanceConfig_FinalSnapshotIdentifier(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBInstanceExists("aws_db_instance.snapshot", &snap), ), @@ -292,19 +339,16 @@ func TestAccAWSDBInstance_noSnapshot(t *testing.T) { }) } -func TestAccAWSDBInstance_snapshot(t *testing.T) { +func TestAccAWSDBInstance_FinalSnapshotIdentifier_SkipFinalSnapshot(t *testing.T) { var snap rds.DBInstance - rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - // testAccCheckAWSDBInstanceSnapshot verifies a database snapshot is - // created, and subequently deletes it - CheckDestroy: testAccCheckAWSDBInstanceSnapshot(rInt), + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceNoSnapshot, Steps: []resource.TestStep{ { - Config: testAccSnapshotInstanceConfigWithSnapshot(rInt), + Config: testAccAWSDBInstanceConfig_FinalSnapshotIdentifier_SkipFinalSnapshot(), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBInstanceExists("aws_db_instance.snapshot", &snap), ), @@ -313,1164 +357,3995 @@ func TestAccAWSDBInstance_snapshot(t *testing.T) { }) } -func TestAccAWSDBInstance_enhancedMonitoring(t *testing.T) { +func TestAccAWSDBInstance_IsAlreadyBeingDeleted(t *testing.T) { var dbInstance rds.DBInstance - rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAWSDBInstanceNoSnapshot, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: testAccSnapshotInstanceConfig_enhancedMonitoring(rName), + Config: testAccAWSDBInstanceConfig_MariaDB(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.enhanced_monitoring", &dbInstance), - resource.TestCheckResourceAttr( - "aws_db_instance.enhanced_monitoring", "monitoring_interval", "5"), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), ), }, + { + PreConfig: func() { + // Get Database Instance into deleting state + conn := testAccProvider.Meta().(*AWSClient).rdsconn + input := &rds.DeleteDBInstanceInput{ + DBInstanceIdentifier: aws.String(rName), + SkipFinalSnapshot: aws.Bool(true), + } + _, err := conn.DeleteDBInstance(input) + if err != nil { + t.Fatalf("error deleting Database 
Instance: %s", err) + } + }, + Config: testAccAWSDBInstanceConfig_MariaDB(rName), + Destroy: true, + }, }, }) } -// Regression test for https://github.com/hashicorp/terraform/issues/3760 . -// We apply a plan, then change just the iops. If the apply succeeds, we -// consider this a pass, as before in 3760 the request would fail -func TestAccAWSDBInstance_separate_iops_update(t *testing.T) { - var v rds.DBInstance +func TestAccAWSDBInstance_ReplicateSourceDb(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - rName := acctest.RandString(5) + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: testAccSnapshotInstanceConfig_iopsUpdate(rName, 1000), + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), - testAccCheckAWSDBInstanceAttributes(&v), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), ), }, + }, + }) +} + +func TestAccAWSDBInstance_ReplicateSourceDb_AllocatedStorage(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ { - Config: testAccSnapshotInstanceConfig_iopsUpdate(rName, 2000), + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_AllocatedStorage(rName, 10), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), - testAccCheckAWSDBInstanceAttributes(&v), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "allocated_storage", "10"), ), }, }, }) } -func TestAccAWSDBInstance_portUpdate(t *testing.T) { - var v rds.DBInstance +func TestAccAWSDBInstance_ReplicateSourceDb_AutoMinorVersionUpgrade(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - rName := acctest.RandString(5) + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: testAccSnapshotInstanceConfig_mysqlPort(rName), + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_AutoMinorVersionUpgrade(rName, false), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "port", "3306"), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + 
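Every `ReplicateSourceDb` case in this stretch asserts `testAccCheckAWSDBInstanceReplicaAttributes`, whose body appears in this diff only as removed lines further down. A sketch of what it checks, assuming the helper keeps that previous behavior; the error wording here is illustrative:

```go
package aws

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/rds"
	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

// Sketch: a read replica created from replicate_source_db must report the
// source instance as its ReadReplicaSourceDBInstanceIdentifier.
func testAccCheckAWSDBInstanceReplicaAttributes(source, replica *rds.DBInstance) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		if replica.ReadReplicaSourceDBInstanceIdentifier != nil &&
			*replica.ReadReplicaSourceDBInstanceIdentifier != *source.DBInstanceIdentifier {
			// Illustrative message, not the provider's exact wording.
			return fmt.Errorf("bad replica source identifier: expected %s, got %s",
				*source.DBInstanceIdentifier, *replica.ReadReplicaSourceDBInstanceIdentifier)
		}
		return nil
	}
}
```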
testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "false"), ), }, + }, + }) +} + +func TestAccAWSDBInstance_ReplicateSourceDb_AvailabilityZone(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ { - Config: testAccSnapshotInstanceConfig_updateMysqlPort(rName), + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_AvailabilityZone(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), - resource.TestCheckResourceAttr( - "aws_db_instance.bar", "port", "3305"), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), ), }, }, }) } -func TestAccAWSDBInstance_MSSQL_TZ(t *testing.T) { - var v rds.DBInstance - rInt := acctest.RandInt() +func TestAccAWSDBInstance_ReplicateSourceDb_BackupRetentionPeriod(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSDBMSSQL_timezone(rInt), + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_BackupRetentionPeriod(rName, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.mssql", &v), - testAccCheckAWSDBInstanceAttributes_MSSQL(&v, ""), - resource.TestCheckResourceAttr( - "aws_db_instance.mssql", "allocated_storage", "20"), - resource.TestCheckResourceAttr( - "aws_db_instance.mssql", "engine", "sqlserver-ex"), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "backup_retention_period", "1"), ), }, + }, + }) +} + +func TestAccAWSDBInstance_ReplicateSourceDb_BackupWindow(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ { - Config: testAccAWSDBMSSQL_timezone_AKST(rInt), + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_BackupWindow(rName, "00:00-08:00"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.mssql", &v), - testAccCheckAWSDBInstanceAttributes_MSSQL(&v, "Alaskan Standard Time"), - resource.TestCheckResourceAttr( - "aws_db_instance.mssql", "allocated_storage", "20"), - 
resource.TestCheckResourceAttr( - "aws_db_instance.mssql", "engine", "sqlserver-ex"), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "backup_window", "00:00-08:00"), ), }, }, }) } -func TestAccAWSDBInstance_MinorVersion(t *testing.T) { - var v rds.DBInstance - - resource.Test(t, resource.TestCase{ +func TestAccAWSDBInstance_ReplicateSourceDb_DeletionProtection(t *testing.T) { + t.Skip("CreateDBInstanceReadReplica API currently ignores DeletionProtection=true with SourceDBInstanceIdentifier set") + // --- FAIL: TestAccAWSDBInstance_ReplicateSourceDb_DeletionProtection (1624.88s) + // testing.go:527: Step 0 error: Check failed: Check 4/4 error: aws_db_instance.test: Attribute 'deletion_protection' expected "true", got "false" + // + // Action=CreateDBInstanceReadReplica&AutoMinorVersionUpgrade=true&CopyTagsToSnapshot=false&DBInstanceClass=db.t2.micro&DBInstanceIdentifier=tf-acc-test-6591588621809891413&DeletionProtection=true&PubliclyAccessible=false&SourceDBInstanceIdentifier=tf-acc-test-6591588621809891413-source&Tags=&Version=2014-10-31 + // + // + // + // false + // + // AWS Support has confirmed this issue and noted that it will be fixed in the future. + + var dbInstance, sourceDbInstance rds.DBInstance + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSDBInstanceConfigAutoMinorVersion, + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_DeletionProtection(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "true"), + ), + }, + // Ensure we disable deletion protection before attempting to delete :) + { + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_DeletionProtection(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "false"), ), }, }, }) } -// See https://github.com/hashicorp/terraform/issues/11881 -func TestAccAWSDBInstance_diffSuppressInitialState(t *testing.T) { - var v rds.DBInstance - rInt := acctest.RandInt() +func TestAccAWSDBInstance_ReplicateSourceDb_IamDatabaseAuthenticationEnabled(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - resource.Test(t, resource.TestCase{ + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: 
testAccAWSDBInstanceConfigSuppressInitialState(rInt), + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_IamDatabaseAuthenticationEnabled(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "iam_database_authentication_enabled", "true"), ), }, }, }) } -func TestAccAWSDBInstance_ec2Classic(t *testing.T) { - var v rds.DBInstance - - oldvar := os.Getenv("AWS_DEFAULT_REGION") - os.Setenv("AWS_DEFAULT_REGION", "us-east-1") - defer os.Setenv("AWS_DEFAULT_REGION", oldvar) +func TestAccAWSDBInstance_ReplicateSourceDb_MaintenanceWindow(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - rInt := acctest.RandInt() + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSDBInstanceConfigEc2Classic(rInt), + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_MaintenanceWindow(rName, "sun:01:00-sun:01:30"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "maintenance_window", "sun:01:00-sun:01:30"), ), }, }, }) } -func testAccCheckAWSDBInstanceDestroy(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).rdsconn - - for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_db_instance" { - continue - } +func TestAccAWSDBInstance_ReplicateSourceDb_Monitoring(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - // Try to find the Group - var err error - resp, err := conn.DescribeDBInstances( - &rds.DescribeDBInstancesInput{ - DBInstanceIdentifier: aws.String(rs.Primary.ID), - }) + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" - if ae, ok := err.(awserr.Error); ok && ae.Code() == "DBInstanceNotFound" { - continue - } + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_Monitoring(rName, 5), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "monitoring_interval", "5"), + ), + }, + }, + }) +} - if err == nil { - if len(resp.DBInstances) != 0 && - *resp.DBInstances[0].DBInstanceIdentifier == rs.Primary.ID { - return fmt.Errorf("DB Instance still exists") - } - } +func 
TestAccAWSDBInstance_ReplicateSourceDb_MultiAZ(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - // Verify the error - newerr, ok := err.(awserr.Error) - if !ok { - return err - } - if newerr.Code() != "DBInstanceNotFound" { - return err - } - } + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" - return nil + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_MultiAZ(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "multi_az", "true"), + ), + }, + }, + }) } -func testAccCheckAWSDBInstanceAttributes(v *rds.DBInstance) resource.TestCheckFunc { - return func(s *terraform.State) error { - - if *v.Engine != "mysql" { - return fmt.Errorf("bad engine: %#v", *v.Engine) - } - - if *v.EngineVersion == "" { - return fmt.Errorf("bad engine_version: %#v", *v.EngineVersion) - } +func TestAccAWSDBInstance_ReplicateSourceDb_ParameterGroupName(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - if *v.BackupRetentionPeriod != 0 { - return fmt.Errorf("bad backup_retention_period: %#v", *v.BackupRetentionPeriod) - } + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" - return nil - } + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_ParameterGroupName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "parameter_group_name", rName), + testAccCheckAWSDBInstanceParameterApplyStatusInSync(&dbInstance), + ), + }, + }, + }) } -func testAccCheckAWSDBInstanceAttributes_MSSQL(v *rds.DBInstance, tz string) resource.TestCheckFunc { - return func(s *terraform.State) error { +func TestAccAWSDBInstance_ReplicateSourceDb_Port(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - if *v.Engine != "sqlserver-ex" { - return fmt.Errorf("bad engine: %#v", *v.Engine) - } + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" - rtz := "" - if v.Timezone != nil { - rtz = *v.Timezone - } + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_Port(rName, 9999), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, 
&dbInstance), + resource.TestCheckResourceAttr(resourceName, "port", "9999"), + ), + }, + }, + }) +} - if tz != rtz { - return fmt.Errorf("Expected (%s) Timezone for MSSQL test, got (%s)", tz, rtz) - } +func TestAccAWSDBInstance_ReplicateSourceDb_VpcSecurityGroupIds(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance - return nil - } + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceResourceName := "aws_db_instance.source" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_ReplicateSourceDb_VpcSecurityGroupIds(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceResourceName, &sourceDbInstance), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + testAccCheckAWSDBInstanceReplicaAttributes(&sourceDbInstance, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "vpc_security_group_ids.#", "1"), + ), + }, + }, + }) } -func testAccCheckAWSDBInstanceReplicaAttributes(source, replica *rds.DBInstance) resource.TestCheckFunc { - return func(s *terraform.State) error { +func TestAccAWSDBInstance_S3Import(t *testing.T) { + var snap rds.DBInstance + bucket := acctest.RandomWithPrefix("tf-acc-test") + uniqueId := acctest.RandomWithPrefix("tf-acc-s3-import-test") + bucketPrefix := acctest.RandString(5) - if replica.ReadReplicaSourceDBInstanceIdentifier != nil && *replica.ReadReplicaSourceDBInstanceIdentifier != *source.DBInstanceIdentifier { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_S3Import(bucket, bucketPrefix, uniqueId), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.s3", &snap), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_AllocatedStorage(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSDBInstanceConfig_SnapshotIdentifier_AllocatedStorage(rName, 10), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "allocated_storage", "10"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_AutoMinorVersionUpgrade(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_AutoMinorVersionUpgrade(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "false"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_AvailabilityZone(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_AvailabilityZone(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_BackupRetentionPeriod(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_BackupRetentionPeriod(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "backup_retention_period", "1"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_BackupRetentionPeriod_Unset(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := 
acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_BackupRetentionPeriod_Unset(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "backup_retention_period", "0"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_BackupWindow(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_BackupWindow(rName, "00:00-08:00"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "backup_window", "00:00-08:00"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_DeletionProtection(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_DeletionProtection(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "true"), + ), + }, + // Ensure we disable deletion protection before attempting to delete :) + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_DeletionProtection(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "false"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_IamDatabaseAuthenticationEnabled(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := 
acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_IamDatabaseAuthenticationEnabled(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "iam_database_authentication_enabled", "true"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_MaintenanceWindow(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_MaintenanceWindow(rName, "sun:01:00-sun:01:30"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "maintenance_window", "sun:01:00-sun:01:30"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_Monitoring(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_Monitoring(rName, 5), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "monitoring_interval", "5"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_MultiAZ(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_MultiAZ(rName, true), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "multi_az", "true"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_MultiAZ_SQLServer(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_MultiAZ_SQLServer(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "multi_az", "true"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_ParameterGroupName(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_ParameterGroupName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "parameter_group_name", rName), + testAccCheckAWSDBInstanceParameterApplyStatusInSync(&dbInstance), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_Port(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_Port(rName, 9999), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "port", "9999"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_Tags(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := 
"aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_Tags(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_Tags_Unset(t *testing.T) { + t.Skip("To be fixed: https://github.com/terraform-providers/terraform-provider-aws/issues/5959") + // --- FAIL: TestAccAWSDBInstance_SnapshotIdentifier_Tags_Unset (1086.15s) + // testing.go:527: Step 0 error: Check failed: Check 4/4 error: aws_db_instance.test: Attribute 'tags.%' expected "0", got "1" + + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_Tags_Unset(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_SnapshotIdentifier_VpcSecurityGroupIds(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_VpcSecurityGroupIds(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + ), + }, + }, + }) +} + +// Regression reference: https://github.com/terraform-providers/terraform-provider-aws/issues/5360 +// This acceptance test explicitly tests when snapshot_identifer is set, +// vpc_security_group_ids is set (which triggered the resource update function), +// and tags is set which was missing its ARN used for tagging +func TestAccAWSDBInstance_SnapshotIdentifier_VpcSecurityGroupIds_Tags(t *testing.T) { + var dbInstance, sourceDbInstance rds.DBInstance + var dbSnapshot rds.DBSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + 
sourceDbResourceName := "aws_db_instance.source" + snapshotResourceName := "aws_db_snapshot.test" + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_SnapshotIdentifier_VpcSecurityGroupIds_Tags(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(sourceDbResourceName, &sourceDbInstance), + testAccCheckDbSnapshotExists(snapshotResourceName, &dbSnapshot), + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_enhancedMonitoring(t *testing.T) { + var dbInstance rds.DBInstance + rName := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccSnapshotInstanceConfig_enhancedMonitoring(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.enhanced_monitoring", &dbInstance), + resource.TestCheckResourceAttr( + "aws_db_instance.enhanced_monitoring", "monitoring_interval", "5"), + ), + }, + }, + }) +} + +// Regression test for https://github.com/hashicorp/terraform/issues/3760 . +// We apply a plan, then change just the iops. If the apply succeeds, we +// consider this a pass, as before in 3760 the request would fail +func TestAccAWSDBInstance_separate_iops_update(t *testing.T) { + var v rds.DBInstance + + rName := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccSnapshotInstanceConfig_iopsUpdate(rName, 1000), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + testAccCheckAWSDBInstanceAttributes(&v), + ), + }, + + { + Config: testAccSnapshotInstanceConfig_iopsUpdate(rName, 2000), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + testAccCheckAWSDBInstanceAttributes(&v), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_portUpdate(t *testing.T) { + var v rds.DBInstance + + rName := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccSnapshotInstanceConfig_mysqlPort(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "port", "3306"), + ), + }, + + { + Config: testAccSnapshotInstanceConfig_updateMysqlPort(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "port", "3305"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_MSSQL_TZ(t *testing.T) { + var v rds.DBInstance + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: 
testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBMSSQL_timezone(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.mssql", &v), + testAccCheckAWSDBInstanceAttributes_MSSQL(&v, ""), + resource.TestCheckResourceAttr( + "aws_db_instance.mssql", "allocated_storage", "20"), + resource.TestCheckResourceAttr( + "aws_db_instance.mssql", "engine", "sqlserver-ex"), + ), + }, + + { + Config: testAccAWSDBMSSQL_timezone_AKST(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.mssql", &v), + testAccCheckAWSDBInstanceAttributes_MSSQL(&v, "Alaskan Standard Time"), + resource.TestCheckResourceAttr( + "aws_db_instance.mssql", "allocated_storage", "20"), + resource.TestCheckResourceAttr( + "aws_db_instance.mssql", "engine", "sqlserver-ex"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_MSSQL_Domain(t *testing.T) { + var vBefore, vAfter rds.DBInstance + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBMSSQLDomain(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.mssql", &vBefore), + testAccCheckAWSDBInstanceDomainAttributes("foo.somedomain.com", &vBefore), + resource.TestCheckResourceAttrSet( + "aws_db_instance.mssql", "domain"), + resource.TestCheckResourceAttrSet( + "aws_db_instance.mssql", "domain_iam_role_name"), + ), + }, + { + Config: testAccAWSDBMSSQLUpdateDomain(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.mssql", &vAfter), + testAccCheckAWSDBInstanceDomainAttributes("bar.somedomain.com", &vAfter), + resource.TestCheckResourceAttrSet( + "aws_db_instance.mssql", "domain"), + resource.TestCheckResourceAttrSet( + "aws_db_instance.mssql", "domain_iam_role_name"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_MSSQL_DomainSnapshotRestore(t *testing.T) { + var v, vRestoredInstance rds.DBInstance + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBMSSQLDomainSnapshotRestore(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.mssql_restore", &vRestoredInstance), + testAccCheckAWSDBInstanceExists("aws_db_instance.mssql", &v), + testAccCheckAWSDBInstanceDomainAttributes("foo.somedomain.com", &vRestoredInstance), + resource.TestCheckResourceAttrSet( + "aws_db_instance.mssql_restore", "domain"), + resource.TestCheckResourceAttrSet( + "aws_db_instance.mssql_restore", "domain_iam_role_name"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_MinorVersion(t *testing.T) { + var v rds.DBInstance + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfigAutoMinorVersion, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + ), + }, + }, + }) +} + +// See https://github.com/hashicorp/terraform/issues/11881 +func TestAccAWSDBInstance_diffSuppressInitialState(t 
*testing.T) { + var v rds.DBInstance + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfigSuppressInitialState(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_ec2Classic(t *testing.T) { + var v rds.DBInstance + + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfigEc2Classic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_cloudwatchLogsExportConfiguration(t *testing.T) { + var v rds.DBInstance + + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfigCloudwatchLogsExportConfiguration(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + ), + }, + { + ResourceName: "aws_db_instance.bar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"password"}, + }, + }, + }) +} + +func TestAccAWSDBInstance_cloudwatchLogsExportConfigurationUpdate(t *testing.T) { + var v rds.DBInstance + + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfigCloudwatchLogsExportConfiguration(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "enabled_cloudwatch_logs_exports.0", "audit"), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "enabled_cloudwatch_logs_exports.1", "error"), + ), + }, + { + Config: testAccAWSDBInstanceConfigCloudwatchLogsExportConfigurationAdd(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "enabled_cloudwatch_logs_exports.0", "audit"), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "enabled_cloudwatch_logs_exports.1", "error"), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "enabled_cloudwatch_logs_exports.2", "general"), + ), + }, + { + Config: testAccAWSDBInstanceConfigCloudwatchLogsExportConfigurationModify(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "enabled_cloudwatch_logs_exports.0", "audit"), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "enabled_cloudwatch_logs_exports.1", "general"), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", 
"enabled_cloudwatch_logs_exports.2", "slowquery"), + ), + }, + { + Config: testAccAWSDBInstanceConfigCloudwatchLogsExportConfigurationDelete(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists("aws_db_instance.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_instance.bar", "enabled_cloudwatch_logs_exports.#", "0"), + ), + }, + }, + }) +} + +func TestAccAWSDBInstance_EnabledCloudwatchLogsExports_Oracle(t *testing.T) { + var dbInstance rds.DBInstance + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_db_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBInstanceConfig_EnabledCloudwatchLogsExports_Oracle(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "enabled_cloudwatch_logs_exports.#", "3"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"password"}, + }, + }, + }) +} + +func testAccCheckAWSDBInstanceDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).rdsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_db_instance" { + continue + } + + // Try to find the Group + var err error + resp, err := conn.DescribeDBInstances( + &rds.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + if isAWSErr(err, rds.ErrCodeDBInstanceNotFoundFault, "") { + continue + } + return err + } + + if len(resp.DBInstances) != 0 && + *resp.DBInstances[0].DBInstanceIdentifier == rs.Primary.ID { + return fmt.Errorf("DB Instance still exists") + } + } + + return nil +} + +func testAccCheckAWSDBInstanceAttributes(v *rds.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if *v.Engine != "mysql" { + return fmt.Errorf("bad engine: %#v", *v.Engine) + } + + if *v.EngineVersion == "" { + return fmt.Errorf("bad engine_version: %#v", *v.EngineVersion) + } + + if *v.BackupRetentionPeriod != 0 { + return fmt.Errorf("bad backup_retention_period: %#v", *v.BackupRetentionPeriod) + } + + return nil + } +} + +func testAccCheckAWSDBInstanceAttributes_MSSQL(v *rds.DBInstance, tz string) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if *v.Engine != "sqlserver-ex" { + return fmt.Errorf("bad engine: %#v", *v.Engine) + } + + rtz := "" + if v.Timezone != nil { + rtz = *v.Timezone + } + + if tz != rtz { + return fmt.Errorf("Expected (%s) Timezone for MSSQL test, got (%s)", tz, rtz) + } + + return nil + } +} + +func testAccCheckAWSDBInstanceDomainAttributes(domain string, v *rds.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + for _, dm := range v.DomainMemberships { + if *dm.FQDN != domain { + continue + } + + return nil + } + + return fmt.Errorf("Domain %s not found in domain memberships", domain) + } +} + +func testAccCheckAWSDBInstanceParameterApplyStatusInSync(dbInstance *rds.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + for _, dbParameterGroup := range dbInstance.DBParameterGroups { + parameterApplyStatus := aws.StringValue(dbParameterGroup.ParameterApplyStatus) + if parameterApplyStatus != "in-sync" { + id := aws.StringValue(dbInstance.DBInstanceIdentifier) + parameterGroupName := 
aws.StringValue(dbParameterGroup.DBParameterGroupName) + return fmt.Errorf("expected DB Instance (%s) Parameter Group (%s) apply status to be: \"in-sync\", got: %q", id, parameterGroupName, parameterApplyStatus) + } + } + + return nil + } +} + +func testAccCheckAWSDBInstanceReplicaAttributes(source, replica *rds.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if replica.ReadReplicaSourceDBInstanceIdentifier != nil && *replica.ReadReplicaSourceDBInstanceIdentifier != *source.DBInstanceIdentifier { return fmt.Errorf("bad source identifier for replica, expected: '%s', got: '%s'", *source.DBInstanceIdentifier, *replica.ReadReplicaSourceDBInstanceIdentifier) } - return nil - } + return nil + } +} + +func testAccCheckAWSDBInstanceSnapshot(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_db_instance" { + continue + } + + awsClient := testAccProvider.Meta().(*AWSClient) + conn := awsClient.rdsconn + + log.Printf("[INFO] Trying to locate the DBInstance Final Snapshot") + snapOutput, err := conn.DescribeDBSnapshots( + &rds.DescribeDBSnapshotsInput{ + DBSnapshotIdentifier: aws.String(rs.Primary.Attributes["final_snapshot_identifier"]), + }) + + if err != nil { + return err + } + + if snapOutput == nil || len(snapOutput.DBSnapshots) == 0 { + return fmt.Errorf("Snapshot %s not found", rs.Primary.Attributes["final_snapshot_identifier"]) + } + + // verify we have the tags copied to the snapshot + tagsARN := aws.StringValue(snapOutput.DBSnapshots[0].DBSnapshotArn) + listTagsOutput, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(tagsARN), + }) + if err != nil { + return fmt.Errorf("Error retrieving tags for ARN (%s): %s", tagsARN, err) + } + + if listTagsOutput.TagList == nil || len(listTagsOutput.TagList) == 0 { + return fmt.Errorf("Tag list is nil or zero: %s", listTagsOutput.TagList) + } + + var found bool + for _, t := range listTagsOutput.TagList { + if *t.Key == "Name" && *t.Value == "tf-tags-db" { + found = true + } + } + if !found { + return fmt.Errorf("Expected to find tag Name (%s), but wasn't found. 
Tags: %s", "tf-tags-db", listTagsOutput.TagList) + } + // end tag search + + log.Printf("[INFO] Deleting the Snapshot %s", rs.Primary.Attributes["final_snapshot_identifier"]) + _, err = conn.DeleteDBSnapshot( + &rds.DeleteDBSnapshotInput{ + DBSnapshotIdentifier: aws.String(rs.Primary.Attributes["final_snapshot_identifier"]), + }) + if err != nil { + return err + } + + resp, err := conn.DescribeDBInstances( + &rds.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + if isAWSErr(err, rds.ErrCodeDBInstanceNotFoundFault, "") { + continue + } + return err + + } + + if len(resp.DBInstances) != 0 && aws.StringValue(resp.DBInstances[0].DBInstanceIdentifier) == rs.Primary.ID { + return fmt.Errorf("DB Instance still exists") + } + } + + return nil +} + +func testAccCheckAWSDBInstanceNoSnapshot(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).rdsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_db_instance" { + continue + } + + resp, err := conn.DescribeDBInstances( + &rds.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil && !isAWSErr(err, rds.ErrCodeDBInstanceNotFoundFault, "") { + return err + } + + if len(resp.DBInstances) != 0 && aws.StringValue(resp.DBInstances[0].DBInstanceIdentifier) == rs.Primary.ID { + return fmt.Errorf("DB Instance still exists") + } + + _, err = conn.DescribeDBSnapshots( + &rds.DescribeDBSnapshotsInput{ + DBSnapshotIdentifier: aws.String(rs.Primary.Attributes["final_snapshot_identifier"]), + }) + + if err != nil && !isAWSErr(err, rds.ErrCodeDBSnapshotNotFoundFault, "") { + return err + } + } + + return nil +} + +func testAccCheckAWSDBInstanceExists(n string, v *rds.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No DB Instance ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).rdsconn + + opts := rds.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeDBInstances(&opts) + + if err != nil { + return err + } + + if len(resp.DBInstances) != 1 || + *resp.DBInstances[0].DBInstanceIdentifier != rs.Primary.ID { + return fmt.Errorf("DB Instance not found") + } + + *v = *resp.DBInstances[0] + + return nil + } +} + +// Database names cannot collide, and deletion takes so long, that making the +// name a bit random helps so able we can kill a test that's just waiting for a +// delete and not be blocked on kicking off another one. +var testAccAWSDBInstanceConfig = ` +resource "aws_db_instance" "bar" { + allocated_storage = 10 + engine = "MySQL" + engine_version = "5.6.35" + instance_class = "db.t2.micro" + name = "baz" + password = "barbarbarbar" + username = "foo" + + + # Maintenance Window is stored in lower case in the API, though not strictly + # documented. Terraform will downcase this to match (as opposed to throw a + # validation error). 
+ maintenance_window = "Fri:09:00-Fri:09:30" + skip_final_snapshot = true + + backup_retention_period = 0 + + parameter_group_name = "default.mysql5.6" + + timeouts { + create = "30m" + } +}` + +const testAccAWSDBInstanceConfig_namePrefix = ` +resource "aws_db_instance" "test" { + allocated_storage = 10 + engine = "MySQL" + identifier_prefix = "tf-test-" + instance_class = "db.t2.micro" + password = "password" + username = "root" + publicly_accessible = true + skip_final_snapshot = true + + timeouts { + create = "30m" + } +}` + +const testAccAWSDBInstanceConfig_generatedName = ` +resource "aws_db_instance" "test" { + allocated_storage = 10 + engine = "MySQL" + instance_class = "db.t2.micro" + password = "password" + username = "root" + publicly_accessible = true + skip_final_snapshot = true + + timeouts { + create = "30m" + } +}` + +var testAccAWSDBInstanceConfigKmsKeyId = ` +resource "aws_kms_key" "foo" { + description = "Terraform acc test %s" + policy = < 0 { - dt = resp.TagList - } - d.Set("tags", tagsToMapRDS(dt)) + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList } + d.Set("tags", tagsToMapRDS(dt)) return nil } @@ -276,20 +273,31 @@ func resourceAwsDbOptionGroupUpdate(d *schema.ResourceData, meta interface{}) er } log.Printf("[DEBUG] Modify DB Option Group: %s", modifyOpts) - _, err = rdsconn.ModifyOptionGroup(modifyOpts) + + err = resource.Retry(2*time.Minute, func() *resource.RetryError { + var err error + + _, err = rdsconn.ModifyOptionGroup(modifyOpts) + if err != nil { + // InvalidParameterValue: IAM role ARN value is invalid or does not include the required permissions for: SQLSERVER_BACKUP_RESTORE + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { return fmt.Errorf("Error modifying DB Option Group: %s", err) } d.SetPartial("option") - } - if arn, err := buildRDSOptionGroupARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(rdsconn, d, arn); err != nil { - return err - } else { - d.SetPartial("tags") - } + if err := setTagsRDS(rdsconn, d, d.Get("arn").(string)); err != nil { + return err + } else { + d.SetPartial("tags") } return resourceAwsDbOptionGroupRead(d, meta) @@ -306,11 +314,9 @@ func resourceAwsDbOptionGroupDelete(d *schema.ResourceData, meta interface{}) er ret := resource.Retry(d.Timeout(schema.TimeoutDelete), func() *resource.RetryError { _, err := rdsconn.DeleteOptionGroup(deleteOpts) if err != nil { - if awsErr, ok := err.(awserr.Error); ok { - if awsErr.Code() == "InvalidOptionGroupStateFault" { - log.Printf("[DEBUG] AWS believes the RDS Option Group is still in use, retrying") - return resource.RetryableError(awsErr) - } + if isAWSErr(err, rds.ErrCodeInvalidOptionGroupStateFault, "") { + log.Printf("[DEBUG] AWS believes the RDS Option Group is still in use, retrying") + return resource.RetryableError(err) } return resource.NonRetryableError(err) } @@ -353,16 +359,10 @@ func resourceAwsDbOptionHash(v interface{}) int { for _, sgRaw := range m["db_security_group_memberships"].(*schema.Set).List() { buf.WriteString(fmt.Sprintf("%s-", sgRaw.(string))) } - return hashcode.String(buf.String()) -} -func buildRDSOptionGroupARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct RDS Option Group ARN 
because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct RDS Option Group ARN because of missing AWS Account ID") + if v, ok := m["version"]; ok && v.(string) != "" { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) } - arn := fmt.Sprintf("arn:%s:rds:%s:%s:og:%s", partition, region, accountid, identifier) - return arn, nil + + return hashcode.String(buf.String()) } diff --git a/aws/resource_aws_db_option_group_test.go b/aws/resource_aws_db_option_group_test.go index da90ab21880..9fd7093a56d 100644 --- a/aws/resource_aws_db_option_group_test.go +++ b/aws/resource_aws_db_option_group_test.go @@ -1,6 +1,7 @@ package aws import ( + "errors" "fmt" "log" "regexp" @@ -9,7 +10,6 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/rds" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" @@ -34,6 +34,10 @@ func testSweepDbOptionGroups(region string) error { opts := rds.DescribeOptionGroupsInput{} resp, err := conn.DescribeOptionGroups(&opts) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping RDS DB Option Group sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("error describing DB Option Groups in Sweeper: %s", err) } @@ -56,11 +60,9 @@ func testSweepDbOptionGroups(region string) error { ret := resource.Retry(1*time.Minute, func() *resource.RetryError { _, err := conn.DeleteOptionGroup(deleteOpts) if err != nil { - if awsErr, ok := err.(awserr.Error); ok { - if awsErr.Code() == "InvalidOptionGroupStateFault" { - log.Printf("[DEBUG] AWS believes the RDS Option Group is still in use, retrying") - return resource.RetryableError(awsErr) - } + if isAWSErr(err, rds.ErrCodeInvalidOptionGroupStateFault, "") { + log.Printf("[DEBUG] AWS believes the RDS Option Group is still in use, retrying") + return resource.RetryableError(err) } return resource.NonRetryableError(err) } @@ -74,11 +76,33 @@ func testSweepDbOptionGroups(region string) error { return nil } +func TestAccAWSDBOptionGroup_importBasic(t *testing.T) { + resourceName := "aws_db_option_group.bar" + rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBOptionGroupBasicConfig(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSDBOptionGroup_basic(t *testing.T) { var v rds.OptionGroup rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -88,6 +112,7 @@ func TestAccAWSDBOptionGroup_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBOptionGroupExists("aws_db_option_group.bar", &v), testAccCheckAWSDBOptionGroupAttributes(&v), + resource.TestMatchResourceAttr("aws_db_option_group.bar", "arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:\d{12}:og:.+`)), resource.TestCheckResourceAttr( "aws_db_option_group.bar", "name", rName), ), @@ -100,7 +125,7 @@ func TestAccAWSDBOptionGroup_timeoutBlock(t *testing.T) { var v 
rds.OptionGroup rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -121,7 +146,7 @@ func TestAccAWSDBOptionGroup_timeoutBlock(t *testing.T) { func TestAccAWSDBOptionGroup_namePrefix(t *testing.T) { var v rds.OptionGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -142,7 +167,7 @@ func TestAccAWSDBOptionGroup_namePrefix(t *testing.T) { func TestAccAWSDBOptionGroup_generatedName(t *testing.T) { var v rds.OptionGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -161,7 +186,7 @@ func TestAccAWSDBOptionGroup_generatedName(t *testing.T) { func TestAccAWSDBOptionGroup_defaultDescription(t *testing.T) { var v rds.OptionGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -181,7 +206,7 @@ func TestAccAWSDBOptionGroup_defaultDescription(t *testing.T) { func TestAccAWSDBOptionGroup_basicDestroyWithInstance(t *testing.T) { rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -197,7 +222,7 @@ func TestAccAWSDBOptionGroup_OptionSettings(t *testing.T) { var v rds.OptionGroup rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -230,11 +255,35 @@ func TestAccAWSDBOptionGroup_OptionSettings(t *testing.T) { }) } +func TestAccAWSDBOptionGroup_OptionSettingsIAMRole(t *testing.T) { + var v rds.OptionGroup + rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBOptionGroupOptionSettingsIAMRole(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBOptionGroupExists("aws_db_option_group.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_option_group.bar", "name", rName), + resource.TestCheckResourceAttr( + "aws_db_option_group.bar", "option.#", "1"), + testAccCheckAWSDBOptionGroupOptionSettingsIAMRole(&v), + ), + }, + }, + }) +} + func TestAccAWSDBOptionGroup_sqlServerOptionsUpdate(t *testing.T) { var v rds.OptionGroup rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -262,11 +311,47 @@ func TestAccAWSDBOptionGroup_sqlServerOptionsUpdate(t 
*testing.T) { }) } +func TestAccAWSDBOptionGroup_OracleOptionsUpdate(t *testing.T) { + var v rds.OptionGroup + rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBOptionGroupOracleEEOptionSettings(rName, "12.1.0.4.v1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBOptionGroupExists("aws_db_option_group.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_option_group.bar", "name", rName), + resource.TestCheckResourceAttr( + "aws_db_option_group.bar", "option.#", "1"), + testAccCheckAWSDBOptionGroupOptionVersionAttribute(&v, "12.1.0.4.v1"), + ), + }, + + { + Config: testAccAWSDBOptionGroupOracleEEOptionSettings(rName, "12.1.0.5.v1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBOptionGroupExists("aws_db_option_group.bar", &v), + resource.TestCheckResourceAttr( + "aws_db_option_group.bar", "name", rName), + resource.TestCheckResourceAttr( + "aws_db_option_group.bar", "option.#", "1"), + testAccCheckAWSDBOptionGroupOptionVersionAttribute(&v, "12.1.0.5.v1"), + ), + }, + }, + }) +} + func TestAccAWSDBOptionGroup_multipleOptions(t *testing.T) { var v rds.OptionGroup rName := fmt.Sprintf("option-group-test-terraform-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBOptionGroupDestroy, @@ -304,6 +389,48 @@ func testAccCheckAWSDBOptionGroupAttributes(v *rds.OptionGroup) resource.TestChe } } +func testAccCheckAWSDBOptionGroupOptionSettingsIAMRole(optionGroup *rds.OptionGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + if optionGroup == nil { + return errors.New("Option Group does not exist") + } + if len(optionGroup.Options) == 0 { + return errors.New("Option Group does not have any options") + } + if len(optionGroup.Options[0].OptionSettings) == 0 { + return errors.New("Option Group does not have any option settings") + } + + settingName := aws.StringValue(optionGroup.Options[0].OptionSettings[0].Name) + if settingName != "IAM_ROLE_ARN" { + return fmt.Errorf("Expected option setting IAM_ROLE_ARN and received %s", settingName) + } + + settingValue := aws.StringValue(optionGroup.Options[0].OptionSettings[0].Value) + iamArnRegExp := regexp.MustCompile(`^arn:aws:iam::\d{12}:role/.+`) + if !iamArnRegExp.MatchString(settingValue) { + return fmt.Errorf("Expected option setting to be a valid IAM role but received %s", settingValue) + } + return nil + } +} + +func testAccCheckAWSDBOptionGroupOptionVersionAttribute(optionGroup *rds.OptionGroup, optionVersion string) resource.TestCheckFunc { + return func(s *terraform.State) error { + if optionGroup == nil { + return errors.New("Option Group does not exist") + } + if len(optionGroup.Options) == 0 { + return errors.New("Option Group does not have any options") + } + foundOptionVersion := aws.StringValue(optionGroup.Options[0].OptionVersion) + if foundOptionVersion != optionVersion { + return fmt.Errorf("Expected option version %q and received %q", optionVersion, foundOptionVersion) + } + return nil + } +} + func testAccCheckAWSDBOptionGroupExists(n string, v *rds.OptionGroup) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -359,11 
+486,7 @@ func testAccCheckAWSDBOptionGroupDestroy(s *terraform.State) error { } // Verify the error - newerr, ok := err.(awserr.Error) - if !ok { - return err - } - if newerr.Code() != "OptionGroupNotFoundFault" { + if !isAWSErr(err, rds.ErrCodeOptionGroupNotFoundFault, "") { return err } } @@ -448,6 +571,40 @@ resource "aws_db_option_group" "bar" { `, r) } +func testAccAWSDBOptionGroupOptionSettingsIAMRole(r string) string { + return fmt.Sprintf(` +data "aws_iam_policy_document" "rds_assume_role" { + statement { + actions = ["sts:AssumeRole"] + principals { + type = "Service" + identifiers = ["rds.amazonaws.com"] + } + } +} + +resource "aws_iam_role" "sql_server_backup" { + name = "rds-backup-%s" + assume_role_policy = "${data.aws_iam_policy_document.rds_assume_role.json}" +} + +resource "aws_db_option_group" "bar" { + name = "%s" + option_group_description = "Test option group for terraform" + engine_name = "sqlserver-ex" + major_engine_version = "14.00" + + option { + option_name = "SQLSERVER_BACKUP_RESTORE" + option_settings { + name = "IAM_ROLE_ARN" + value = "${aws_iam_role.sql_server_backup.arn}" + } + } +} +`, r, r) +} + func testAccAWSDBOptionGroupOptionSettings_update(r string) string { return fmt.Sprintf(` resource "aws_db_option_group" "bar" { @@ -493,6 +650,44 @@ resource "aws_db_option_group" "bar" { `, r) } +func testAccAWSDBOptionGroupOracleEEOptionSettings(r, optionVersion string) string { + return fmt.Sprintf(` +resource "aws_security_group" "foo" { + name = "%[1]s" +} + +resource "aws_db_option_group" "bar" { + name = "%[1]s" + option_group_description = "Test option group for terraform issue 748" + engine_name = "oracle-ee" + major_engine_version = "12.1" + + option { + option_name = "OEM_AGENT" + port = "3872" + version = "%[2]s" + + vpc_security_group_memberships = ["${aws_security_group.foo.id}"] + + option_settings { + name = "OMS_PORT" + value = "4903" + } + + option_settings { + name = "OMS_HOST" + value = "oem.host.value" + } + + option_settings { + name = "AGENT_REGISTRATION_PASSWORD" + value = "password" + } + } +} +`, r, optionVersion) +} + func testAccAWSDBOptionGroupMultipleOptions(r string) string { return fmt.Sprintf(` resource "aws_db_option_group" "bar" { diff --git a/aws/resource_aws_db_parameter_group.go b/aws/resource_aws_db_parameter_group.go index cbc00b4cfcb..e95d0b4d4ed 100644 --- a/aws/resource_aws_db_parameter_group.go +++ b/aws/resource_aws_db_parameter_group.go @@ -27,11 +27,11 @@ func resourceAwsDbParameterGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Optional: true, Computed: true, @@ -39,39 +39,40 @@ func resourceAwsDbParameterGroup() *schema.Resource { ConflictsWith: []string{"name_prefix"}, ValidateFunc: validateDbParamGroupName, }, - "name_prefix": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validateDbParamGroupNamePrefix, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateDbParamGroupNamePrefix, }, - "family": &schema.Schema{ + "family": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, ForceNew: true, Default: "Managed by Terraform", }, - "parameter": &schema.Schema{ + "parameter": { Type: 
schema.TypeSet, Optional: true, ForceNew: false, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Required: true, }, - "apply_method": &schema.Schema{ + "apply_method": { Type: schema.TypeString, Optional: true, Default: "immediate", @@ -108,7 +109,7 @@ func resourceAwsDbParameterGroupCreate(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Create DB Parameter Group: %#v", createOpts) - _, err := rdsconn.CreateDBParameterGroup(&createOpts) + resp, err := rdsconn.CreateDBParameterGroup(&createOpts) if err != nil { return fmt.Errorf("Error creating DB Parameter Group: %s", err) } @@ -119,7 +120,8 @@ func resourceAwsDbParameterGroupCreate(d *schema.ResourceData, meta interface{}) d.SetPartial("description") d.Partial(false) - d.SetId(*createOpts.DBParameterGroupName) + d.SetId(aws.StringValue(resp.DBParameterGroup.DBParameterGroupName)) + d.Set("arn", resp.DBParameterGroup.DBParameterGroupArn) log.Printf("[INFO] DB Parameter Group ID: %s", d.Id()) return resourceAwsDbParameterGroupUpdate(d, meta) @@ -226,30 +228,22 @@ func resourceAwsDbParameterGroupRead(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("error setting 'parameter' in state: %#v", err) } - paramGroup := describeResp.DBParameterGroups[0] - arn, err := buildRDSPGARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - name := "" - if paramGroup.DBParameterGroupName != nil && *paramGroup.DBParameterGroupName != "" { - name = *paramGroup.DBParameterGroupName - } - log.Printf("[DEBUG] Error building ARN for DB Parameter Group, not setting Tags for Param Group %s", name) - } else { - d.Set("arn", arn) - resp, err := rdsconn.ListTagsForResource(&rds.ListTagsForResourceInput{ - ResourceName: aws.String(arn), - }) + arn := aws.StringValue(describeResp.DBParameterGroups[0].DBParameterGroupArn) + d.Set("arn", arn) - if err != nil { - log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) - } + resp, err := rdsconn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) - var dt []*rds.Tag - if len(resp.TagList) > 0 { - dt = resp.TagList - } - d.Set("tags", tagsToMapRDS(dt)) + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList } + d.Set("tags", tagsToMapRDS(dt)) return nil } @@ -282,7 +276,7 @@ func resourceAwsDbParameterGroupUpdate(d *schema.ResourceData, meta interface{}) // we've got them all. 
maxParams := 20 for parameters != nil { - paramsToModify := make([]*rds.Parameter, 0) + var paramsToModify []*rds.Parameter if len(parameters) <= maxParams { paramsToModify, parameters = parameters[:], nil } else { @@ -303,12 +297,10 @@ func resourceAwsDbParameterGroupUpdate(d *schema.ResourceData, meta interface{}) } } - if arn, err := buildRDSPGARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(rdsconn, d, arn); err != nil { - return err - } else { - d.SetPartial("tags") - } + if err := setTagsRDS(rdsconn, d, d.Get("arn").(string)); err != nil { + return err + } else { + d.SetPartial("tags") } d.Partial(false) @@ -346,15 +338,3 @@ func resourceAwsDbParameterHash(v interface{}) int { return hashcode.String(buf.String()) } - -func buildRDSPGARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS Account ID") - } - arn := fmt.Sprintf("arn:%s:rds:%s:%s:pg:%s", partition, region, accountid, identifier) - return arn, nil - -} diff --git a/aws/resource_aws_db_parameter_group_test.go b/aws/resource_aws_db_parameter_group_test.go index dc726c166ed..4102746b9e8 100644 --- a/aws/resource_aws_db_parameter_group_test.go +++ b/aws/resource_aws_db_parameter_group_test.go @@ -15,17 +15,39 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSDBParameterGroup_importBasic(t *testing.T) { + resourceName := "aws_db_parameter_group.bar" + groupName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBParameterGroupConfig(groupName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSDBParameterGroup_limit(t *testing.T) { var v rds.DBParameterGroup groupName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: createAwsDbParameterGroupsExceedDefaultAwsLimit(groupName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.large", &v), @@ -118,7 +140,7 @@ func TestAccAWSDBParameterGroup_limit(t *testing.T) { resource.TestCheckResourceAttr("aws_db_parameter_group.large", "parameter.748684209.value", "repeatable-read"), ), }, - resource.TestStep{ + { Config: updateAwsDbParameterGroupsExceedDefaultAwsLimit(groupName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.large", &v), @@ -220,12 +242,12 @@ func TestAccAWSDBParameterGroup_basic(t *testing.T) { groupName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + 
{ Config: testAccAWSDBParameterGroupConfig(groupName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.bar", &v), @@ -250,9 +272,11 @@ func TestAccAWSDBParameterGroup_basic(t *testing.T) { "aws_db_parameter_group.bar", "parameter.2478663599.value", "utf8"), resource.TestCheckResourceAttr( "aws_db_parameter_group.bar", "tags.%", "1"), + resource.TestMatchResourceAttr( + "aws_db_parameter_group.bar", "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:rds:[^:]+:\\d{12}:pg:%s", groupName))), ), }, - resource.TestStep{ + { Config: testAccAWSDBParameterGroupAddParametersConfig(groupName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.bar", &v), @@ -285,6 +309,8 @@ func TestAccAWSDBParameterGroup_basic(t *testing.T) { "aws_db_parameter_group.bar", "parameter.2478663599.value", "utf8"), resource.TestCheckResourceAttr( "aws_db_parameter_group.bar", "tags.%", "2"), + resource.TestMatchResourceAttr( + "aws_db_parameter_group.bar", "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:rds:[^:]+:\\d{12}:pg:%s", groupName))), ), }, }, @@ -294,12 +320,12 @@ func TestAccAWSDBParameterGroup_basic(t *testing.T) { func TestAccAWSDBParameterGroup_Disappears(t *testing.T) { var v rds.DBParameterGroup groupName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSDBParameterGroupConfig(groupName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.bar", &v), @@ -314,12 +340,12 @@ func TestAccAWSDBParameterGroup_Disappears(t *testing.T) { func TestAccAWSDBParameterGroup_namePrefix(t *testing.T) { var v rds.DBParameterGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDBParameterGroupConfig_namePrefix, Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.test", &v), @@ -334,12 +360,12 @@ func TestAccAWSDBParameterGroup_namePrefix(t *testing.T) { func TestAccAWSDBParameterGroup_generatedName(t *testing.T) { var v rds.DBParameterGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDBParameterGroupConfig_generatedName, Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.test", &v), @@ -354,12 +380,12 @@ func TestAccAWSDBParameterGroup_withApplyMethod(t *testing.T) { groupName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSDBParameterGroupConfigWithApplyMethod(groupName), Check: resource.ComposeTestCheckFunc( 
testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.bar", &v), @@ -392,12 +418,12 @@ func TestAccAWSDBParameterGroup_Only(t *testing.T) { var v rds.DBParameterGroup groupName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSDBParameterGroupOnlyConfig(groupName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.bar", &v), @@ -416,12 +442,12 @@ func TestAccAWSDBParameterGroup_MatchDefault(t *testing.T) { var v rds.DBParameterGroup groupName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSDBParameterGroupIncludeDefaultConfig(groupName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBParameterGroupExists("aws_db_parameter_group.bar", &v), diff --git a/aws/resource_aws_db_security_group.go b/aws/resource_aws_db_security_group.go index b9e73f2fb40..04d4ef1f9f8 100644 --- a/aws/resource_aws_db_security_group.go +++ b/aws/resource_aws_db_security_group.go @@ -7,8 +7,8 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" @@ -26,47 +26,47 @@ func resourceAwsDbSecurityGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, ForceNew: true, Default: "Managed by Terraform", }, - "ingress": &schema.Schema{ + "ingress": { Type: schema.TypeSet, Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "cidr": &schema.Schema{ + "cidr": { Type: schema.TypeString, Optional: true, }, - "security_group_name": &schema.Schema{ + "security_group_name": { Type: schema.TypeString, Optional: true, Computed: true, }, - "security_group_id": &schema.Schema{ + "security_group_id": { Type: schema.TypeString, Optional: true, Computed: true, }, - "security_group_owner_id": &schema.Schema{ + "security_group_owner_id": { Type: schema.TypeString, Optional: true, Computed: true, @@ -91,7 +91,7 @@ func resourceAwsDbSecurityGroupCreate(d *schema.ResourceData, meta interface{}) opts := rds.CreateDBSecurityGroupInput{ DBSecurityGroupName: aws.String(d.Get("name").(string)), DBSecurityGroupDescription: aws.String(d.Get("description").(string)), - Tags: tags, + Tags: tags, } log.Printf("[DEBUG] DB Security Group create configuration: %#v", opts) @@ -146,8 +146,8 @@ func resourceAwsDbSecurityGroupRead(d *schema.ResourceData, meta interface{}) er return err } - d.Set("name", *sg.DBSecurityGroupName) - d.Set("description", *sg.DBSecurityGroupDescription) + d.Set("name", sg.DBSecurityGroupName) + d.Set("description", 
sg.DBSecurityGroupDescription) // Create an empty schema.Set to hold all ingress rules rules := &schema.Set{ @@ -176,29 +176,22 @@ func resourceAwsDbSecurityGroupRead(d *schema.ResourceData, meta interface{}) er d.Set("ingress", rules) conn := meta.(*AWSClient).rdsconn - arn, err := buildRDSSecurityGroupARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - name := "" - if sg.DBSecurityGroupName != nil && *sg.DBSecurityGroupName != "" { - name = *sg.DBSecurityGroupName - } - log.Printf("[DEBUG] Error building ARN for DB Security Group, not setting Tags for DB Security Group %s", name) - } else { - d.Set("arn", arn) - resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ - ResourceName: aws.String(arn), - }) - if err != nil { - log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) - } + arn := aws.StringValue(sg.DBSecurityGroupArn) + d.Set("arn", arn) + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) - var dt []*rds.Tag - if len(resp.TagList) > 0 { - dt = resp.TagList - } - d.Set("tags", tagsToMapRDS(dt)) + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } + + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList } + d.Set("tags", tagsToMapRDS(dt)) return nil } @@ -207,12 +200,11 @@ func resourceAwsDbSecurityGroupUpdate(d *schema.ResourceData, meta interface{}) conn := meta.(*AWSClient).rdsconn d.Partial(true) - if arn, err := buildRDSSecurityGroupARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(conn, d, arn); err != nil { - return err - } else { - d.SetPartial("tags") - } + + if err := setTagsRDS(conn, d, d.Get("arn").(string)); err != nil { + return err + } else { + d.SetPartial("tags") } if d.HasChange("ingress") { @@ -266,8 +258,7 @@ func resourceAwsDbSecurityGroupDelete(d *schema.ResourceData, meta interface{}) _, err := conn.DeleteDBSecurityGroup(&opts) if err != nil { - newerr, ok := err.(awserr.Error) - if ok && newerr.Code() == "InvalidDBSecurityGroup.NotFound" { + if isAWSErr(err, "InvalidDBSecurityGroup.NotFound", "") { return nil } return err @@ -420,15 +411,3 @@ func resourceAwsDbSecurityGroupStateRefreshFunc( return v, "authorized", nil } } - -func buildRDSSecurityGroupARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS Account ID") - } - arn := fmt.Sprintf("arn:%s:rds:%s:%s:secgrp:%s", partition, region, accountid, identifier) - return arn, nil - -} diff --git a/aws/resource_aws_db_security_group_test.go b/aws/resource_aws_db_security_group_test.go index a222e2ee40f..02d53526141 100644 --- a/aws/resource_aws_db_security_group_test.go +++ b/aws/resource_aws_db_security_group_test.go @@ -3,6 +3,7 @@ package aws import ( "fmt" "os" + "regexp" "testing" "github.com/aws/aws-sdk-go/aws" @@ -13,6 +14,32 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSDBSecurityGroup_importBasic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + rName := fmt.Sprintf("tf-acc-%s", acctest.RandString(5)) + resourceName := "aws_db_security_group.bar" + + 
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBSecurityGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBSecurityGroupConfig(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSDBSecurityGroup_basic(t *testing.T) { var v rds.DBSecurityGroup @@ -22,16 +49,17 @@ func TestAccAWSDBSecurityGroup_basic(t *testing.T) { rName := fmt.Sprintf("tf-acc-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSDBSecurityGroupConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBSecurityGroupExists("aws_db_security_group.bar", &v), testAccCheckAWSDBSecurityGroupAttributes(&v), + resource.TestMatchResourceAttr("aws_db_security_group.bar", "arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:\d{12}:secgrp:.+`)), resource.TestCheckResourceAttr( "aws_db_security_group.bar", "name", rName), resource.TestCheckResourceAttr( diff --git a/aws/resource_aws_db_snapshot_test.go b/aws/resource_aws_db_snapshot_test.go index 97c857b2850..e7b0d62c77e 100644 --- a/aws/resource_aws_db_snapshot_test.go +++ b/aws/resource_aws_db_snapshot_test.go @@ -14,7 +14,7 @@ import ( func TestAccAWSDBSnapshot_basic(t *testing.T) { var v rds.DBSnapshot rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ diff --git a/aws/resource_aws_db_subnet_group.go b/aws/resource_aws_db_subnet_group.go index c4e437beeb0..c329dfae08c 100644 --- a/aws/resource_aws_db_subnet_group.go +++ b/aws/resource_aws_db_subnet_group.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -24,12 +25,12 @@ func resourceAwsDbSubnetGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Optional: true, Computed: true, @@ -37,21 +38,22 @@ func resourceAwsDbSubnetGroup() *schema.Resource { ConflictsWith: []string{"name_prefix"}, ValidateFunc: validateDbSubnetGroupName, }, - "name_prefix": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validateDbSubnetGroupNamePrefix, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateDbSubnetGroupNamePrefix, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, Default: "Managed by Terraform", }, - "subnet_ids": &schema.Schema{ + "subnet_ids": { Type: schema.TypeSet, Required: true, Elem: &schema.Schema{Type: schema.TypeString}, @@ -147,26 +149,23 @@ func resourceAwsDbSubnetGroupRead(d *schema.ResourceData, meta interface{}) erro // list tags for resource // set tags conn := meta.(*AWSClient).rdsconn - arn, err := buildRDSsubgrpARN(d.Id(), 
meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - log.Printf("[DEBUG] Error building ARN for DB Subnet Group, not setting Tags for group %s", *subnetGroup.DBSubnetGroupName) - } else { - d.Set("arn", arn) - resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ - ResourceName: aws.String(arn), - }) - if err != nil { - log.Printf("[DEBUG] Error retreiving tags for ARN: %s", arn) - } + arn := aws.StringValue(subnetGroup.DBSubnetGroupArn) + d.Set("arn", arn) + resp, err := conn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) - var dt []*rds.Tag - if len(resp.TagList) > 0 { - dt = resp.TagList - } - d.Set("tags", tagsToMapRDS(dt)) + if err != nil { + log.Printf("[DEBUG] Error retreiving tags for ARN: %s", arn) } + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList + } + d.Set("tags", tagsToMapRDS(dt)) + return nil } @@ -195,12 +194,11 @@ func resourceAwsDbSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) er } } - if arn, err := buildRDSsubgrpARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(conn, d, arn); err != nil { - return err - } else { - d.SetPartial("tags") - } + arn := d.Get("arn").(string) + if err := setTagsRDS(conn, d, arn); err != nil { + return err + } else { + d.SetPartial("tags") } return resourceAwsDbSubnetGroupRead(d, meta) @@ -243,15 +241,3 @@ func resourceAwsDbSubnetGroupDeleteRefreshFunc( return d, "destroyed", nil } } - -func buildRDSsubgrpARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct RDS ARN because of missing AWS Account ID") - } - arn := fmt.Sprintf("arn:%s:rds:%s:%s:subgrp:%s", partition, region, accountid, identifier) - return arn, nil - -} diff --git a/aws/resource_aws_db_subnet_group_test.go b/aws/resource_aws_db_subnet_group_test.go index 260e1ab36a4..7522b206ec8 100644 --- a/aws/resource_aws_db_subnet_group_test.go +++ b/aws/resource_aws_db_subnet_group_test.go @@ -14,6 +14,30 @@ import ( "github.com/aws/aws-sdk-go/service/rds" ) +func TestAccAWSDBSubnetGroup_importBasic(t *testing.T) { + resourceName := "aws_db_subnet_group.foo" + + rName := fmt.Sprintf("tf-test-%d", acctest.RandInt()) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDBSubnetGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDBSubnetGroupConfig(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "description"}, + }, + }, + }) +} + func TestAccAWSDBSubnetGroup_basic(t *testing.T) { var v rds.DBSubnetGroup @@ -23,12 +47,12 @@ func TestAccAWSDBSubnetGroup_basic(t *testing.T) { rName := fmt.Sprintf("tf-test-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDBSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDBSubnetGroupConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckDBSubnetGroupExists( @@ -37,6 +61,8 @@ func TestAccAWSDBSubnetGroup_basic(t *testing.T) { 
"aws_db_subnet_group.foo", "name", rName), resource.TestCheckResourceAttr( "aws_db_subnet_group.foo", "description", "Managed by Terraform"), + resource.TestMatchResourceAttr( + "aws_db_subnet_group.foo", "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:rds:[^:]+:\\d{12}:subgrp:%s", rName))), testCheck, ), }, @@ -47,12 +73,12 @@ func TestAccAWSDBSubnetGroup_basic(t *testing.T) { func TestAccAWSDBSubnetGroup_namePrefix(t *testing.T) { var v rds.DBSubnetGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDBSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDBSubnetGroupConfig_namePrefix, Check: resource.ComposeTestCheckFunc( testAccCheckDBSubnetGroupExists( @@ -68,12 +94,12 @@ func TestAccAWSDBSubnetGroup_namePrefix(t *testing.T) { func TestAccAWSDBSubnetGroup_generatedName(t *testing.T) { var v rds.DBSubnetGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDBSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDBSubnetGroupConfig_generatedName, Check: resource.ComposeTestCheckFunc( testAccCheckDBSubnetGroupExists( @@ -93,12 +119,12 @@ func TestAccAWSDBSubnetGroup_withUndocumentedCharacters(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDBSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDBSubnetGroupConfig_withUnderscoresAndPeriodsAndSpaces, Check: resource.ComposeTestCheckFunc( testAccCheckDBSubnetGroupExists( @@ -118,12 +144,12 @@ func TestAccAWSDBSubnetGroup_updateDescription(t *testing.T) { var v rds.DBSubnetGroup rName := fmt.Sprintf("tf-test-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDBSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDBSubnetGroupConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckDBSubnetGroupExists( @@ -133,7 +159,7 @@ func TestAccAWSDBSubnetGroup_updateDescription(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccDBSubnetGroupConfig_updatedDescription(rName), Check: resource.ComposeTestCheckFunc( testAccCheckDBSubnetGroupExists( diff --git a/aws/resource_aws_default_network_acl.go b/aws/resource_aws_default_network_acl.go index 7a11dee72e8..16b4ae0c41f 100644 --- a/aws/resource_aws_default_network_acl.go +++ b/aws/resource_aws_default_network_acl.go @@ -243,7 +243,6 @@ func resourceAwsDefaultNetworkAclUpdate(d *schema.ResourceData, meta interface{} func resourceAwsDefaultNetworkAclDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] Cannot destroy Default Network ACL. 
Terraform will remove this resource from the state file, however resources may remain.") - d.SetId("") return nil } @@ -262,7 +261,7 @@ func revokeAllNetworkACLEntries(netaclId string, meta interface{}) error { } if resp == nil { - return fmt.Errorf("[ERR] Error looking up Default Network ACL Entries: No results") + return fmt.Errorf("Error looking up Default Network ACL Entries: No results") } networkAcl := resp.NetworkAcls[0] diff --git a/aws/resource_aws_default_network_acl_test.go b/aws/resource_aws_default_network_acl_test.go index e22902c71e9..fac6d15a00f 100644 --- a/aws/resource_aws_default_network_acl_test.go +++ b/aws/resource_aws_default_network_acl_test.go @@ -28,7 +28,7 @@ var ipv6IngressAcl = &ec2.NetworkAclEntry{ func TestAccAWSDefaultNetworkAcl_basic(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy, @@ -47,7 +47,7 @@ func TestAccAWSDefaultNetworkAcl_basic(t *testing.T) { func TestAccAWSDefaultNetworkAcl_basicIpv6Vpc(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy, @@ -69,7 +69,7 @@ func TestAccAWSDefaultNetworkAcl_deny_ingress(t *testing.T) { // additional Egress. var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy, @@ -88,7 +88,7 @@ func TestAccAWSDefaultNetworkAcl_deny_ingress(t *testing.T) { func TestAccAWSDefaultNetworkAcl_withIpv6Ingress(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy, @@ -107,7 +107,7 @@ func TestAccAWSDefaultNetworkAcl_withIpv6Ingress(t *testing.T) { func TestAccAWSDefaultNetworkAcl_SubnetRemoval(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy, @@ -138,7 +138,7 @@ func TestAccAWSDefaultNetworkAcl_SubnetRemoval(t *testing.T) { func TestAccAWSDefaultNetworkAcl_SubnetReassign(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultNetworkAclDestroy, diff --git a/aws/resource_aws_default_route_table.go b/aws/resource_aws_default_route_table.go index 987dd4a7df3..c372ed347a7 100644 --- a/aws/resource_aws_default_route_table.go +++ b/aws/resource_aws_default_route_table.go @@ -155,7 +155,6 @@ func resourceAwsDefaultRouteTableRead(d *schema.ResourceData, meta interface{}) func resourceAwsDefaultRouteTableDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] Cannot destroy Default Route Table. 
Terraform will remove this resource from the state file, however resources may remain.") - d.SetId("") return nil } diff --git a/aws/resource_aws_default_route_table_test.go b/aws/resource_aws_default_route_table_test.go index 55e1d9d0cc8..6d42046c175 100644 --- a/aws/resource_aws_default_route_table_test.go +++ b/aws/resource_aws_default_route_table_test.go @@ -14,7 +14,7 @@ import ( func TestAccAWSDefaultRouteTable_basic(t *testing.T) { var v ec2.RouteTable - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_default_route_table.foo", Providers: testAccProviders, @@ -34,7 +34,7 @@ func TestAccAWSDefaultRouteTable_basic(t *testing.T) { func TestAccAWSDefaultRouteTable_swap(t *testing.T) { var v ec2.RouteTable - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_default_route_table.foo", Providers: testAccProviders, @@ -68,7 +68,7 @@ func TestAccAWSDefaultRouteTable_swap(t *testing.T) { func TestAccAWSDefaultRouteTable_vpc_endpoint(t *testing.T) { var v ec2.RouteTable - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_default_route_table.foo", Providers: testAccProviders, @@ -118,11 +118,6 @@ func testAccCheckDefaultRouteTableDestroy(s *terraform.State) error { return nil } -func testAccCheckDefaultRouteTableExists(s *terraform.State) error { - // We can't destroy this resource; it comes and goes with the VPC itself. - return nil -} - const testAccDefaultRouteTableConfig = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" diff --git a/aws/resource_aws_default_security_group.go b/aws/resource_aws_default_security_group.go index f4fb748bbda..0b3af038a6c 100644 --- a/aws/resource_aws_default_security_group.go +++ b/aws/resource_aws_default_security_group.go @@ -6,7 +6,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -38,7 +37,7 @@ func resourceAwsDefaultSecurityGroupCreate(d *schema.ResourceData, meta interfac conn := meta.(*AWSClient).ec2conn securityGroupOpts := &ec2.DescribeSecurityGroupsInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("group-name"), Values: []*string{aws.String("default")}, }, @@ -67,7 +66,7 @@ func resourceAwsDefaultSecurityGroupCreate(d *schema.ResourceData, meta interfac // returned, as default is a protected name for each VPC, and for each // Region on EC2 Classic if len(resp.SecurityGroups) != 1 { - return fmt.Errorf("[ERR] Error finding default security group; found (%d) groups: %s", len(resp.SecurityGroups), resp) + return fmt.Errorf("Error finding default security group; found (%d) groups: %s", len(resp.SecurityGroups), resp) } g = resp.SecurityGroups[0] } else { @@ -81,7 +80,7 @@ func resourceAwsDefaultSecurityGroupCreate(d *schema.ResourceData, meta interfac } if g == nil { - return fmt.Errorf("[ERR] Error finding default security group: no matching group found") + return fmt.Errorf("Error finding default security group: no matching group found") } d.SetId(*g.GroupId) @@ -93,7 +92,7 @@ func resourceAwsDefaultSecurityGroupCreate(d *schema.ResourceData, meta interfac } if err := revokeDefaultSecurityGroupRules(meta, g); err != nil { - return errwrap.Wrapf("{{err}}", err) + return fmt.Errorf("%s", err) } return 
resourceAwsSecurityGroupUpdate(d, meta) @@ -101,7 +100,6 @@ func resourceAwsDefaultSecurityGroupCreate(d *schema.ResourceData, meta interfac func resourceAwsDefaultSecurityGroupDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] Cannot destroy Default Security Group. Terraform will remove this resource from the state file, however resources may remain.") - d.SetId("") return nil } diff --git a/aws/resource_aws_default_security_group_test.go b/aws/resource_aws_default_security_group_test.go index c70a012eb6d..860fe5c0375 100644 --- a/aws/resource_aws_default_security_group_test.go +++ b/aws/resource_aws_default_security_group_test.go @@ -14,13 +14,13 @@ import ( func TestAccAWSDefaultSecurityGroup_basic(t *testing.T) { var group ec2.SecurityGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_default_security_group.web", Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSDefaultSecurityGroupConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSDefaultSecurityGroupExists("aws_default_security_group.web", &group), @@ -46,13 +46,13 @@ func TestAccAWSDefaultSecurityGroup_basic(t *testing.T) { func TestAccAWSDefaultSecurityGroup_classic(t *testing.T) { var group ec2.SecurityGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_default_security_group.web", Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSDefaultSecurityGroupConfig_classic, Check: resource.ComposeTestCheckFunc( testAccCheckAWSDefaultSecurityGroupExists("aws_default_security_group.web", &group), @@ -115,7 +115,7 @@ func testAccCheckAWSDefaultSecurityGroupAttributes(group *ec2.SecurityGroup) res FromPort: aws.Int64(80), ToPort: aws.Int64(8000), IpProtocol: aws.String("tcp"), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("10.0.0.0/8")}}, + IpRanges: []*ec2.IpRange{{CidrIp: aws.String("10.0.0.0/8")}}, } if *group.GroupName != "default" { diff --git a/aws/resource_aws_default_subnet.go b/aws/resource_aws_default_subnet.go index fc10723dbe6..932d8d23df0 100644 --- a/aws/resource_aws_default_subnet.go +++ b/aws/resource_aws_default_subnet.go @@ -38,6 +38,7 @@ func resourceAwsDefaultSubnet() *schema.Resource { // map_public_ip_on_launch is a computed value for Default Subnets dsubnet.Schema["map_public_ip_on_launch"] = &schema.Schema{ Type: schema.TypeBool, + Optional: true, Computed: true, } // assign_ipv6_address_on_creation is a computed value for Default Subnets @@ -51,35 +52,29 @@ func resourceAwsDefaultSubnet() *schema.Resource { func resourceAwsDefaultSubnetCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ec2conn - req := &ec2.DescribeSubnetsInput{ - Filters: []*ec2.Filter{ - &ec2.Filter{ - Name: aws.String("availabilityZone"), - Values: aws.StringSlice([]string{d.Get("availability_zone").(string)}), - }, - &ec2.Filter{ - Name: aws.String("defaultForAz"), - Values: aws.StringSlice([]string{"true"}), - }, + + req := &ec2.DescribeSubnetsInput{} + req.Filters = buildEC2AttributeFilterList( + map[string]string{ + "availabilityZone": d.Get("availability_zone").(string), + "defaultForAz": "true", }, - } + ) + log.Printf("[DEBUG] Reading Default Subnet: %s", 
req) resp, err := conn.DescribeSubnets(req) if err != nil { return err } - if len(resp.Subnets) != 1 || resp.Subnets[0] == nil { return fmt.Errorf("Default subnet not found") } d.SetId(aws.StringValue(resp.Subnets[0].SubnetId)) - return resourceAwsSubnetUpdate(d, meta) } func resourceAwsDefaultSubnetDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] Cannot destroy Default Subnet. Terraform will remove this resource from the state file, however resources may remain.") - d.SetId("") return nil } diff --git a/aws/resource_aws_default_subnet_test.go b/aws/resource_aws_default_subnet_test.go index a6651221d2c..fb078426bb6 100644 --- a/aws/resource_aws_default_subnet_test.go +++ b/aws/resource_aws_default_subnet_test.go @@ -11,13 +11,39 @@ import ( func TestAccAWSDefaultSubnet_basic(t *testing.T) { var v ec2.Subnet - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultSubnetDestroy, Steps: []resource.TestStep{ { Config: testAccAWSDefaultSubnetConfigBasic, + Check: resource.ComposeTestCheckFunc( + testAccCheckSubnetExists("aws_default_subnet.foo", &v), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "availability_zone", "us-west-2a"), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "assign_ipv6_address_on_creation", "false"), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "tags.%", "1"), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "tags.Name", "terraform-testacc-default-subnet"), + ), + }, + }, + }) +} + +func TestAccAWSDefaultSubnet_publicIp(t *testing.T) { + var v ec2.Subnet + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDefaultSubnetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDefaultSubnetConfigPublicIp, Check: resource.ComposeTestCheckFunc( testAccCheckSubnetExists("aws_default_subnet.foo", &v), resource.TestCheckResourceAttr( @@ -29,7 +55,23 @@ func TestAccAWSDefaultSubnet_basic(t *testing.T) { resource.TestCheckResourceAttr( "aws_default_subnet.foo", "tags.%", "1"), resource.TestCheckResourceAttr( - "aws_default_subnet.foo", "tags.Name", "Default subnet for us-west-2a"), + "aws_default_subnet.foo", "tags.Name", "terraform-testacc-default-subnet"), + ), + }, + { + Config: testAccAWSDefaultSubnetConfigNoPublicIp, + Check: resource.ComposeTestCheckFunc( + testAccCheckSubnetExists("aws_default_subnet.foo", &v), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "availability_zone", "us-west-2a"), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "map_public_ip_on_launch", "false"), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "assign_ipv6_address_on_creation", "false"), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "tags.%", "1"), + resource.TestCheckResourceAttr( + "aws_default_subnet.foo", "tags.Name", "terraform-testacc-default-subnet"), ), }, }, @@ -42,14 +84,30 @@ func testAccCheckAWSDefaultSubnetDestroy(s *terraform.State) error { } const testAccAWSDefaultSubnetConfigBasic = ` -provider "aws" { - region = "us-west-2" +resource "aws_default_subnet" "foo" { + availability_zone = "us-west-2a" + tags { + Name = "terraform-testacc-default-subnet" + } } +` + +const testAccAWSDefaultSubnetConfigPublicIp = ` +resource "aws_default_subnet" "foo" { + availability_zone = "us-west-2a" + 
map_public_ip_on_launch = true + tags { + Name = "terraform-testacc-default-subnet" + } +} +` +const testAccAWSDefaultSubnetConfigNoPublicIp = ` resource "aws_default_subnet" "foo" { - availability_zone = "us-west-2a" - tags { - Name = "Default subnet for us-west-2a" - } + availability_zone = "us-west-2a" + map_public_ip_on_launch = false + tags { + Name = "terraform-testacc-default-subnet" + } } ` diff --git a/aws/resource_aws_default_vpc.go b/aws/resource_aws_default_vpc.go index 8953534a017..345325fb83c 100644 --- a/aws/resource_aws_default_vpc.go +++ b/aws/resource_aws_default_vpc.go @@ -61,6 +61,5 @@ func resourceAwsDefaultVpcCreate(d *schema.ResourceData, meta interface{}) error func resourceAwsDefaultVpcDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] Cannot destroy Default VPC. Terraform will remove this resource from the state file, however resources may remain.") - d.SetId("") return nil } diff --git a/aws/resource_aws_default_vpc_dhcp_options.go b/aws/resource_aws_default_vpc_dhcp_options.go index cb433ff4bb9..93d2763d937 100644 --- a/aws/resource_aws_default_vpc_dhcp_options.go +++ b/aws/resource_aws_default_vpc_dhcp_options.go @@ -46,19 +46,19 @@ func resourceAwsDefaultVpcDhcpOptionsCreate(d *schema.ResourceData, meta interfa } req := &ec2.DescribeDhcpOptionsInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("key"), Values: aws.StringSlice([]string{"domain-name"}), }, - &ec2.Filter{ + { Name: aws.String("value"), Values: aws.StringSlice([]string{domainName}), }, - &ec2.Filter{ + { Name: aws.String("key"), Values: aws.StringSlice([]string{"domain-name-servers"}), }, - &ec2.Filter{ + { Name: aws.String("value"), Values: aws.StringSlice([]string{"AmazonProvidedDNS"}), }, @@ -85,6 +85,5 @@ func resourceAwsDefaultVpcDhcpOptionsCreate(d *schema.ResourceData, meta interfa func resourceAwsDefaultVpcDhcpOptionsDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] Cannot destroy Default DHCP Options Set. 
Terraform will remove this resource from the state file, however resources may remain.") - d.SetId("") return nil } diff --git a/aws/resource_aws_default_vpc_dhcp_options_test.go b/aws/resource_aws_default_vpc_dhcp_options_test.go index 110d2e8c726..12db0cd7aea 100644 --- a/aws/resource_aws_default_vpc_dhcp_options_test.go +++ b/aws/resource_aws_default_vpc_dhcp_options_test.go @@ -11,7 +11,7 @@ import ( func TestAccAWSDefaultVpcDhcpOptions_basic(t *testing.T) { var d ec2.DhcpOptions - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultVpcDhcpOptionsDestroy, diff --git a/aws/resource_aws_default_vpc_test.go b/aws/resource_aws_default_vpc_test.go index 8d77f250427..0d93ba90d11 100644 --- a/aws/resource_aws_default_vpc_test.go +++ b/aws/resource_aws_default_vpc_test.go @@ -11,7 +11,7 @@ import ( func TestAccAWSDefaultVpc_basic(t *testing.T) { var vpc ec2.Vpc - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDefaultVpcDestroy, @@ -27,6 +27,8 @@ func TestAccAWSDefaultVpc_basic(t *testing.T) { "aws_default_vpc.foo", "tags.%", "1"), resource.TestCheckResourceAttr( "aws_default_vpc.foo", "tags.Name", "Default VPC"), + resource.TestCheckResourceAttrSet( + "aws_default_vpc.foo", "arn"), resource.TestCheckNoResourceAttr( "aws_default_vpc.foo", "assign_generated_ipv6_cidr_block"), resource.TestCheckNoResourceAttr( diff --git a/aws/resource_aws_devicefarm_project.go b/aws/resource_aws_devicefarm_project.go index e7e377eafe3..f35e3fe2bbf 100644 --- a/aws/resource_aws_devicefarm_project.go +++ b/aws/resource_aws_devicefarm_project.go @@ -17,12 +17,12 @@ func resourceAwsDevicefarmProject() *schema.Resource { Delete: resourceAwsDevicefarmProjectDelete, Schema: map[string]*schema.Schema{ - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, diff --git a/aws/resource_aws_devicefarm_project_test.go b/aws/resource_aws_devicefarm_project_test.go index dc3a92c0412..0a765ff9f07 100644 --- a/aws/resource_aws_devicefarm_project_test.go +++ b/aws/resource_aws_devicefarm_project_test.go @@ -17,7 +17,7 @@ func TestAccAWSDeviceFarmProject_basic(t *testing.T) { beforeInt := acctest.RandInt() afterInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDeviceFarmProjectDestroy, diff --git a/aws/resource_aws_directory_service_conditional_forwarder.go b/aws/resource_aws_directory_service_conditional_forwarder.go new file mode 100644 index 00000000000..8927fc9bd3b --- /dev/null +++ b/aws/resource_aws_directory_service_conditional_forwarder.go @@ -0,0 +1,164 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "strings" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directoryservice" +) + +func resourceAwsDirectoryServiceConditionalForwarder() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDirectoryServiceConditionalForwarderCreate, + Read: resourceAwsDirectoryServiceConditionalForwarderRead, + Update: 
resourceAwsDirectoryServiceConditionalForwarderUpdate, + Delete: resourceAwsDirectoryServiceConditionalForwarderDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "directory_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "dns_ips": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + + "remote_domain_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile("^([a-zA-Z0-9]+[\\.-])+([a-zA-Z0-9])+[.]?$"), "invalid value, see the RemoteDomainName attribute documentation: https://docs.aws.amazon.com/directoryservice/latest/devguide/API_ConditionalForwarder.html"), + }, + }, + } +} + +func resourceAwsDirectoryServiceConditionalForwarderCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dsconn + + dnsIps := expandStringList(d.Get("dns_ips").([]interface{})) + + directoryId := d.Get("directory_id").(string) + domainName := d.Get("remote_domain_name").(string) + + _, err := conn.CreateConditionalForwarder(&directoryservice.CreateConditionalForwarderInput{ + DirectoryId: aws.String(directoryId), + DnsIpAddrs: dnsIps, + RemoteDomainName: aws.String(domainName), + }) + + if err != nil { + return err + } + + d.SetId(fmt.Sprintf("%s:%s", directoryId, domainName)) + + return nil +} + +func resourceAwsDirectoryServiceConditionalForwarderRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dsconn + + directoryId, domainName, err := parseDSConditionalForwarderId(d.Id()) + if err != nil { + return err + } + + res, err := conn.DescribeConditionalForwarders(&directoryservice.DescribeConditionalForwardersInput{ + DirectoryId: aws.String(directoryId), + RemoteDomainNames: []*string{aws.String(domainName)}, + }) + + if err != nil { + if isAWSErr(err, directoryservice.ErrCodeEntityDoesNotExistException, "") { + log.Printf("[WARN] Directory Service Conditional Forwarder (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + if len(res.ConditionalForwarders) == 0 { + log.Printf("[WARN] Directory Service Conditional Forwarder (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + cfd := res.ConditionalForwarders[0] + + d.Set("dns_ips", flattenStringList(cfd.DnsIpAddrs)) + d.Set("directory_id", directoryId) + d.Set("remote_domain_name", cfd.RemoteDomainName) + + return nil +} + +func resourceAwsDirectoryServiceConditionalForwarderUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dsconn + + directoryId, domainName, err := parseDSConditionalForwarderId(d.Id()) + if err != nil { + return err + } + + dnsIps := expandStringList(d.Get("dns_ips").([]interface{})) + + _, err = conn.UpdateConditionalForwarder(&directoryservice.UpdateConditionalForwarderInput{ + DirectoryId: aws.String(directoryId), + DnsIpAddrs: dnsIps, + RemoteDomainName: aws.String(domainName), + }) + + if err != nil { + return err + } + + return resourceAwsDirectoryServiceConditionalForwarderRead(d, meta) +} + +func resourceAwsDirectoryServiceConditionalForwarderDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dsconn + + directoryId, domainName, err := parseDSConditionalForwarderId(d.Id()) + if err != nil { + return err + } + + _, err = 
conn.DeleteConditionalForwarder(&directoryservice.DeleteConditionalForwarderInput{ + DirectoryId: aws.String(directoryId), + RemoteDomainName: aws.String(domainName), + }) + + if err != nil && !isAWSErr(err, directoryservice.ErrCodeEntityDoesNotExistException, "") { + return err + } + + return nil +} + +func parseDSConditionalForwarderId(id string) (directoryId, domainName string, err error) { + parts := strings.SplitN(id, ":", 2) + + if len(parts) != 2 { + return "", "", fmt.Errorf("please make sure ID is in format DIRECTORY_ID:DOMAIN_NAME") + } + + return parts[0], parts[1], nil +} diff --git a/aws/resource_aws_directory_service_conditional_forwarder_test.go b/aws/resource_aws_directory_service_conditional_forwarder_test.go new file mode 100644 index 00000000000..dd2ec407fdf --- /dev/null +++ b/aws/resource_aws_directory_service_conditional_forwarder_test.go @@ -0,0 +1,192 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directoryservice" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSDirectoryServiceConditionForwarder_basic(t *testing.T) { + resourceName := "aws_directory_service_conditional_forwarder.fwd" + + ip1, ip2, ip3 := "8.8.8.8", "1.1.1.1", "8.8.4.4" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDirectoryServiceConditionalForwarderDestroy, + Steps: []resource.TestStep{ + // test create + { + Config: testAccDirectoryServiceConditionalForwarderConfig(ip1, ip2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDirectoryServiceConditionalForwarderExists( + resourceName, + []string{ip1, ip2}, + ), + ), + }, + // test update + { + Config: testAccDirectoryServiceConditionalForwarderConfig(ip1, ip3), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDirectoryServiceConditionalForwarderExists( + resourceName, + []string{ip1, ip3}, + ), + ), + }, + // test import + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsDirectoryServiceConditionalForwarderDestroy(s *terraform.State) error { + dsconn := testAccProvider.Meta().(*AWSClient).dsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_directory_service_conditional_forwarder" { + continue + } + + directoryId, domainName, err := parseDSConditionalForwarderId(rs.Primary.ID) + if err != nil { + return err + } + + res, err := dsconn.DescribeConditionalForwarders(&directoryservice.DescribeConditionalForwardersInput{ + DirectoryId: aws.String(directoryId), + RemoteDomainNames: []*string{aws.String(domainName)}, + }) + + if err != nil { + if isAWSErr(err, directoryservice.ErrCodeEntityDoesNotExistException, "") { + return nil + } + return err + } + + if len(res.ConditionalForwarders) > 0 { + return fmt.Errorf("Expected AWS Directory Service Conditional Forwarder to be gone, but was still found") + } + + return nil + } + + return fmt.Errorf("Default error in Service Directory Test") +} + +func testAccCheckAwsDirectoryServiceConditionalForwarderExists(name string, dnsIps []string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + directoryId, domainName, err := 
parseDSConditionalForwarderId(rs.Primary.ID)
+ if err != nil {
+ return err
+ }
+
+ dsconn := testAccProvider.Meta().(*AWSClient).dsconn
+
+ res, err := dsconn.DescribeConditionalForwarders(&directoryservice.DescribeConditionalForwardersInput{
+ DirectoryId: aws.String(directoryId),
+ RemoteDomainNames: []*string{aws.String(domainName)},
+ })
+
+ if err != nil {
+ return err
+ }
+
+ if len(res.ConditionalForwarders) == 0 {
+ return fmt.Errorf("No Conditional Forwarder found")
+ }
+
+ cfd := res.ConditionalForwarders[0]
+
+ if dnsIps != nil {
+ if len(dnsIps) != len(cfd.DnsIpAddrs) {
+ return fmt.Errorf("DnsIpAddrs length mismatch")
+ }
+
+ for k, v := range cfd.DnsIpAddrs {
+ if *v != dnsIps[k] {
+ return fmt.Errorf("DnsIp mismatch, '%s' != '%s' at index '%d'", *v, dnsIps[k], k)
+ }
+ }
+ }
+
+ return nil
+ }
+}
+
+func testAccDirectoryServiceConditionalForwarderConfig(ip1, ip2 string) string {
+ return fmt.Sprintf(`
+resource "aws_directory_service_directory" "bar" {
+ name = "corp.notexample.com"
+ password = "SuperSecretPassw0rd"
+ type = "MicrosoftAD"
+ edition = "Standard"
+
+ vpc_settings {
+ vpc_id = "${aws_vpc.main.id}"
+ subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"]
+ }
+
+ tags {
+ Name = "terraform-testacc-directory-service-conditional-forwarder"
+ }
+}
+
+resource "aws_vpc" "main" {
+ cidr_block = "10.0.0.0/16"
+ tags {
+ Name = "terraform-testacc-directory-service-conditional-forwarder"
+ }
+}
+
+resource "aws_subnet" "foo" {
+ vpc_id = "${aws_vpc.main.id}"
+ availability_zone = "us-west-2a"
+ cidr_block = "10.0.1.0/24"
+ tags {
+ Name = "terraform-testacc-directory-service-conditional-forwarder"
+ }
+}
+
+resource "aws_subnet" "bar" {
+ vpc_id = "${aws_vpc.main.id}"
+ availability_zone = "us-west-2b"
+ cidr_block = "10.0.2.0/24"
+ tags {
+ Name = "terraform-testacc-directory-service-conditional-forwarder"
+ }
+}
+
+resource "aws_directory_service_conditional_forwarder" "fwd" {
+ directory_id = "${aws_directory_service_directory.bar.id}"
+
+ remote_domain_name = "test.example.com"
+
+ dns_ips = [
+ "%s",
+ "%s",
+ ]
+}
+`, ip1, ip2)
+}
diff --git a/aws/resource_aws_directory_service_directory.go b/aws/resource_aws_directory_service_directory.go
index 8d4b8eae0e4..d440d126710 100644
--- a/aws/resource_aws_directory_service_directory.go
+++ b/aws/resource_aws_directory_service_directory.go
@@ -475,14 +475,22 @@ func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta int
 DirectoryId: aws.String(d.Id()),
 }
- log.Printf("[DEBUG] Delete Directory input: %s", input)
+ log.Printf("[DEBUG] Deleting Directory Service Directory: %s", input)
 _, err := dsconn.DeleteDirectory(&input)
 if err != nil {
- return err
+ return fmt.Errorf("error deleting Directory Service Directory (%s): %s", d.Id(), err)
+ }
+
+ log.Printf("[DEBUG] Waiting for Directory Service Directory (%q) to be deleted", d.Id())
+ err = waitForDirectoryServiceDirectoryDeletion(dsconn, d.Id())
+ if err != nil {
+ return fmt.Errorf("error waiting for Directory Service (%s) to be deleted: %s", d.Id(), err)
 }
- // Wait for deletion
- log.Printf("[DEBUG] Waiting for DS (%q) to be deleted", d.Id())
+ return nil
+}
+
+func waitForDirectoryServiceDirectoryDeletion(conn *directoryservice.DirectoryService, directoryID string) error {
 stateConf := &resource.StateChangeConf{
 Pending: []string{
 directoryservice.DirectoryStageActive,
@@ -490,8 +498,8 @@ func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta int
 },
 Target: []string{directoryservice.DirectoryStageDeleted},
 Refresh:
func() (interface{}, string, error) { - resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ - DirectoryIds: []*string{aws.String(d.Id())}, + resp, err := conn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{ + DirectoryIds: []*string{aws.String(directoryID)}, }) if err != nil { if isAWSErr(err, directoryservice.ErrCodeEntityDoesNotExistException, "") { @@ -500,22 +508,17 @@ func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta int return nil, "error", err } - if len(resp.DirectoryDescriptions) == 0 { + if len(resp.DirectoryDescriptions) == 0 || resp.DirectoryDescriptions[0] == nil { return 42, directoryservice.DirectoryStageDeleted, nil } ds := resp.DirectoryDescriptions[0] - log.Printf("[DEBUG] Deletion of DS %q is in following stage: %q.", - d.Id(), *ds.Stage) - return ds, *ds.Stage, nil + log.Printf("[DEBUG] Deletion of Directory Service Directory %q is in following stage: %q.", directoryID, aws.StringValue(ds.Stage)) + return ds, aws.StringValue(ds.Stage), nil }, Timeout: 60 * time.Minute, } - if _, err := stateConf.WaitForState(); err != nil { - return fmt.Errorf( - "Error waiting for Directory Service (%s) to be deleted: %q", - d.Id(), err.Error()) - } + _, err := stateConf.WaitForState() - return nil + return err } diff --git a/aws/resource_aws_directory_service_directory_test.go b/aws/resource_aws_directory_service_directory_test.go index d6c782f39cb..dbbf0142ef2 100644 --- a/aws/resource_aws_directory_service_directory_test.go +++ b/aws/resource_aws_directory_service_directory_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "log" "reflect" "testing" @@ -14,6 +15,69 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_directory_service_directory", &resource.Sweeper{ + Name: "aws_directory_service_directory", + F: testSweepDirectoryServiceDirectories, + }) +} + +func testSweepDirectoryServiceDirectories(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).dsconn + + input := &directoryservice.DescribeDirectoriesInput{} + for { + resp, err := conn.DescribeDirectories(input) + + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Directory Service Directory sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing Directory Service Directories: %s", err) + } + + for _, directory := range resp.DirectoryDescriptions { + id := aws.StringValue(directory.DirectoryId) + name := aws.StringValue(directory.Name) + + if name != "corp.notexample.com" && name != "terraformtesting.com" { + log.Printf("[INFO] Skipping Directory Service Directory: %s / %s", id, name) + continue + } + + deleteDirectoryInput := directoryservice.DeleteDirectoryInput{ + DirectoryId: directory.DirectoryId, + } + + log.Printf("[INFO] Deleting Directory Service Directory: %s", deleteDirectoryInput) + _, err := conn.DeleteDirectory(&deleteDirectoryInput) + if err != nil { + return fmt.Errorf("error deleting Directory Service Directory (%s): %s", id, err) + } + + log.Printf("[INFO] Waiting for Directory Service Directory (%q) to be deleted", id) + err = waitForDirectoryServiceDirectoryDeletion(conn, id) + if err != nil { + return fmt.Errorf("error waiting for Directory Service (%s) to be deleted: %s", id, err) + } + } + + if resp.NextToken == nil { + break + } + + input.NextToken = resp.NextToken + } + + return nil 
+} + func TestDiffTagsDirectoryService(t *testing.T) { cases := []struct { Old, New map[string]interface{} @@ -68,7 +132,7 @@ func TestDiffTagsDirectoryService(t *testing.T) { func TestAccAWSDirectoryServiceDirectory_importBasic(t *testing.T) { resourceName := "aws_directory_service_directory.bar" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, @@ -89,7 +153,7 @@ func TestAccAWSDirectoryServiceDirectory_importBasic(t *testing.T) { } func TestAccAWSDirectoryServiceDirectory_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, @@ -106,7 +170,7 @@ func TestAccAWSDirectoryServiceDirectory_basic(t *testing.T) { } func TestAccAWSDirectoryServiceDirectory_tags(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, @@ -123,7 +187,7 @@ func TestAccAWSDirectoryServiceDirectory_tags(t *testing.T) { } func TestAccAWSDirectoryServiceDirectory_microsoft(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, @@ -140,7 +204,7 @@ func TestAccAWSDirectoryServiceDirectory_microsoft(t *testing.T) { } func TestAccAWSDirectoryServiceDirectory_microsoftStandard(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, @@ -157,7 +221,7 @@ func TestAccAWSDirectoryServiceDirectory_microsoftStandard(t *testing.T) { } func TestAccAWSDirectoryServiceDirectory_connector(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, @@ -173,7 +237,7 @@ func TestAccAWSDirectoryServiceDirectory_connector(t *testing.T) { } func TestAccAWSDirectoryServiceDirectory_withAliasAndSso(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDirectoryServiceDirectoryDestroy, diff --git a/aws/resource_aws_dlm_lifecycle_policy.go b/aws/resource_aws_dlm_lifecycle_policy.go new file mode 100644 index 00000000000..7211e3bfd9e --- /dev/null +++ b/aws/resource_aws_dlm_lifecycle_policy.go @@ -0,0 +1,367 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dlm" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsDlmLifecyclePolicy() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDlmLifecyclePolicyCreate, + Read: resourceAwsDlmLifecyclePolicyRead, + Update: resourceAwsDlmLifecyclePolicyUpdate, + Delete: resourceAwsDlmLifecyclePolicyDelete, + Importer: &schema.ResourceImporter{ + State: 
schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "description": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile("^[0-9A-Za-z _-]+$"), "see https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html"), + // TODO: https://docs.aws.amazon.com/dlm/latest/APIReference/API_LifecyclePolicy.html#dlm-Type-LifecyclePolicy-Description says it has max length of 500 but doesn't mention the regex but SDK and CLI docs only mention the regex and not max length. Check this + }, + "execution_role_arn": { + // TODO: Make this not required and if it's not provided then use the default service role, creating it if necessary + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArn, + }, + "policy_details": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "resource_types": { + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "schedule": { + Type: schema.TypeList, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "copy_tags": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + ForceNew: true, + }, + "create_rule": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "interval": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validateIntegerInSlice([]int{ + 12, + 24, + }), + }, + "interval_unit": { + Type: schema.TypeString, + Optional: true, + Default: dlm.IntervalUnitValuesHours, + ValidateFunc: validation.StringInSlice([]string{ + dlm.IntervalUnitValuesHours, + }, false), + }, + "times": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringMatch(regexp.MustCompile("^([0-9]|0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]$"), "see https://docs.aws.amazon.com/dlm/latest/APIReference/API_CreateRule.html#dlm-Type-CreateRule-Times"), + }, + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(0, 500), + }, + "retain_rule": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "count": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(1, 1000), + }, + }, + }, + }, + "tags_to_add": { + Type: schema.TypeMap, + Optional: true, + }, + }, + }, + }, + "target_tags": { + Type: schema.TypeMap, + Required: true, + }, + }, + }, + }, + "state": { + Type: schema.TypeString, + Optional: true, + Default: dlm.SettablePolicyStateValuesEnabled, + ValidateFunc: validation.StringInSlice([]string{ + dlm.SettablePolicyStateValuesDisabled, + dlm.SettablePolicyStateValuesEnabled, + }, false), + }, + }, + } +} + +func resourceAwsDlmLifecyclePolicyCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dlmconn + + input := dlm.CreateLifecyclePolicyInput{ + Description: aws.String(d.Get("description").(string)), + ExecutionRoleArn: aws.String(d.Get("execution_role_arn").(string)), + PolicyDetails: expandDlmPolicyDetails(d.Get("policy_details").([]interface{})), + State: aws.String(d.Get("state").(string)), + } + + log.Printf("[INFO] Creating DLM lifecycle policy: %s", input) + out, err := conn.CreateLifecyclePolicy(&input) + if err != nil { + return fmt.Errorf("error creating DLM 
Lifecycle Policy: %s", err) + } + + d.SetId(*out.PolicyId) + + return resourceAwsDlmLifecyclePolicyRead(d, meta) +} + +func resourceAwsDlmLifecyclePolicyRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dlmconn + + log.Printf("[INFO] Reading DLM lifecycle policy: %s", d.Id()) + out, err := conn.GetLifecyclePolicy(&dlm.GetLifecyclePolicyInput{ + PolicyId: aws.String(d.Id()), + }) + + if isAWSErr(err, dlm.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] DLM Lifecycle Policy (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading DLM Lifecycle Policy (%s): %s", d.Id(), err) + } + + d.Set("description", out.Policy.Description) + d.Set("execution_role_arn", out.Policy.ExecutionRoleArn) + d.Set("state", out.Policy.State) + if err := d.Set("policy_details", flattenDlmPolicyDetails(out.Policy.PolicyDetails)); err != nil { + return fmt.Errorf("error setting policy details %s", err) + } + + return nil +} + +func resourceAwsDlmLifecyclePolicyUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dlmconn + + input := dlm.UpdateLifecyclePolicyInput{ + PolicyId: aws.String(d.Id()), + } + + if d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } + if d.HasChange("execution_role_arn") { + input.ExecutionRoleArn = aws.String(d.Get("execution_role_arn").(string)) + } + if d.HasChange("state") { + input.State = aws.String(d.Get("state").(string)) + } + if d.HasChange("policy_details") { + input.PolicyDetails = expandDlmPolicyDetails(d.Get("policy_details").([]interface{})) + } + + log.Printf("[INFO] Updating lifecycle policy %s", d.Id()) + _, err := conn.UpdateLifecyclePolicy(&input) + if err != nil { + return fmt.Errorf("error updating DLM Lifecycle Policy (%s): %s", d.Id(), err) + } + + return resourceAwsDlmLifecyclePolicyRead(d, meta) +} + +func resourceAwsDlmLifecyclePolicyDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dlmconn + + log.Printf("[INFO] Deleting DLM lifecycle policy: %s", d.Id()) + _, err := conn.DeleteLifecyclePolicy(&dlm.DeleteLifecyclePolicyInput{ + PolicyId: aws.String(d.Id()), + }) + if err != nil { + return fmt.Errorf("error deleting DLM Lifecycle Policy (%s): %s", d.Id(), err) + } + + return nil +} + +func expandDlmPolicyDetails(cfg []interface{}) *dlm.PolicyDetails { + if len(cfg) == 0 || cfg[0] == nil { + return nil + } + + policyDetails := &dlm.PolicyDetails{} + m := cfg[0].(map[string]interface{}) + if v, ok := m["resource_types"]; ok { + policyDetails.ResourceTypes = expandStringList(v.([]interface{})) + } + if v, ok := m["schedule"]; ok { + policyDetails.Schedules = expandDlmSchedules(v.([]interface{})) + } + if v, ok := m["target_tags"]; ok { + policyDetails.TargetTags = expandDlmTags(v.(map[string]interface{})) + } + + return policyDetails +} + +func flattenDlmPolicyDetails(policyDetails *dlm.PolicyDetails) []map[string]interface{} { + result := make(map[string]interface{}, 0) + result["resource_types"] = flattenStringList(policyDetails.ResourceTypes) + result["schedule"] = flattenDlmSchedules(policyDetails.Schedules) + result["target_tags"] = flattenDlmTags(policyDetails.TargetTags) + + return []map[string]interface{}{result} +} + +func expandDlmSchedules(cfg []interface{}) []*dlm.Schedule { + schedules := make([]*dlm.Schedule, len(cfg)) + for i, c := range cfg { + schedule := &dlm.Schedule{} + m := c.(map[string]interface{}) + if v, ok := 
m["copy_tags"]; ok { + schedule.CopyTags = aws.Bool(v.(bool)) + } + if v, ok := m["create_rule"]; ok { + schedule.CreateRule = expandDlmCreateRule(v.([]interface{})) + } + if v, ok := m["name"]; ok { + schedule.Name = aws.String(v.(string)) + } + if v, ok := m["retain_rule"]; ok { + schedule.RetainRule = expandDlmRetainRule(v.([]interface{})) + } + if v, ok := m["tags_to_add"]; ok { + schedule.TagsToAdd = expandDlmTags(v.(map[string]interface{})) + } + schedules[i] = schedule + } + + return schedules +} + +func flattenDlmSchedules(schedules []*dlm.Schedule) []map[string]interface{} { + result := make([]map[string]interface{}, len(schedules)) + for i, s := range schedules { + m := make(map[string]interface{}) + m["copy_tags"] = aws.BoolValue(s.CopyTags) + m["create_rule"] = flattenDlmCreateRule(s.CreateRule) + m["name"] = aws.StringValue(s.Name) + m["retain_rule"] = flattenDlmRetainRule(s.RetainRule) + m["tags_to_add"] = flattenDlmTags(s.TagsToAdd) + result[i] = m + } + + return result +} + +func expandDlmCreateRule(cfg []interface{}) *dlm.CreateRule { + if len(cfg) == 0 || cfg[0] == nil { + return nil + } + c := cfg[0].(map[string]interface{}) + createRule := &dlm.CreateRule{ + Interval: aws.Int64(int64(c["interval"].(int))), + IntervalUnit: aws.String(c["interval_unit"].(string)), + } + if v, ok := c["times"]; ok { + createRule.Times = expandStringList(v.([]interface{})) + } + + return createRule +} + +func flattenDlmCreateRule(createRule *dlm.CreateRule) []map[string]interface{} { + if createRule == nil { + return []map[string]interface{}{} + } + + result := make(map[string]interface{}) + result["interval"] = aws.Int64Value(createRule.Interval) + result["interval_unit"] = aws.StringValue(createRule.IntervalUnit) + result["times"] = flattenStringList(createRule.Times) + + return []map[string]interface{}{result} +} + +func expandDlmRetainRule(cfg []interface{}) *dlm.RetainRule { + if len(cfg) == 0 || cfg[0] == nil { + return nil + } + m := cfg[0].(map[string]interface{}) + return &dlm.RetainRule{ + Count: aws.Int64(int64(m["count"].(int))), + } +} + +func flattenDlmRetainRule(retainRule *dlm.RetainRule) []map[string]interface{} { + result := make(map[string]interface{}) + result["count"] = aws.Int64Value(retainRule.Count) + + return []map[string]interface{}{result} +} + +func expandDlmTags(m map[string]interface{}) []*dlm.Tag { + var result []*dlm.Tag + for k, v := range m { + result = append(result, &dlm.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +func flattenDlmTags(tags []*dlm.Tag) map[string]string { + result := make(map[string]string) + for _, t := range tags { + result[aws.StringValue(t.Key)] = aws.StringValue(t.Value) + } + + return result +} diff --git a/aws/resource_aws_dlm_lifecycle_policy_test.go b/aws/resource_aws_dlm_lifecycle_policy_test.go new file mode 100644 index 00000000000..120165ae4da --- /dev/null +++ b/aws/resource_aws_dlm_lifecycle_policy_test.go @@ -0,0 +1,313 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/dlm" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSDlmLifecyclePolicy_Basic(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.basic" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, 
+ Providers: testAccProviders, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyBasicConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "description", "tf-acc-basic"), + resource.TestCheckResourceAttrSet(resourceName, "execution_role_arn"), + resource.TestCheckResourceAttr(resourceName, "state", "ENABLED"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.resource_types.0", "VOLUME"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.name", "tf-acc-basic"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.interval", "12"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.interval_unit", "HOURS"), + resource.TestCheckResourceAttrSet(resourceName, "policy_details.0.schedule.0.create_rule.0.times.0"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.retain_rule.0.count", "10"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.target_tags.tf-acc-test", "basic"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSDlmLifecyclePolicy_Full(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.full" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyFullConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "description", "tf-acc-full"), + resource.TestCheckResourceAttrSet(resourceName, "execution_role_arn"), + resource.TestCheckResourceAttr(resourceName, "state", "ENABLED"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.resource_types.0", "VOLUME"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.name", "tf-acc-full"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.interval", "12"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.interval_unit", "HOURS"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.times.0", "21:42"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.retain_rule.0.count", "10"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.tags_to_add.tf-acc-test-added", "full"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.copy_tags", "false"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.target_tags.tf-acc-test", "full"), + ), + }, + { + Config: dlmLifecyclePolicyFullUpdateConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "description", "tf-acc-full-updated"), + resource.TestCheckResourceAttrSet(resourceName, "execution_role_arn"), + resource.TestCheckResourceAttr(resourceName, "state", "DISABLED"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.resource_types.0", "VOLUME"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.name", "tf-acc-full-updated"), + 
resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.interval", "24"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.interval_unit", "HOURS"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.times.0", "09:42"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.retain_rule.0.count", "100"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.tags_to_add.tf-acc-test-added", "full-updated"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.copy_tags", "true"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.target_tags.tf-acc-test", "full-updated"), + ), + }, + }, + }) +} + +func dlmLifecyclePolicyDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).dlmconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_dlm_lifecycle_policy" { + continue + } + + input := dlm.GetLifecyclePolicyInput{ + PolicyId: aws.String(rs.Primary.ID), + } + + out, err := conn.GetLifecyclePolicy(&input) + + if isAWSErr(err, dlm.ErrCodeResourceNotFoundException, "") { + return nil + } + + if err != nil { + return fmt.Errorf("error getting DLM Lifecycle Policy (%s): %s", rs.Primary.ID, err) + } + + if out.Policy != nil { + return fmt.Errorf("DLM lifecycle policy still exists: %#v", out) + } + } + + return nil +} + +func checkDlmLifecyclePolicyExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + conn := testAccProvider.Meta().(*AWSClient).dlmconn + + input := dlm.GetLifecyclePolicyInput{ + PolicyId: aws.String(rs.Primary.ID), + } + + _, err := conn.GetLifecyclePolicy(&input) + + if err != nil { + return fmt.Errorf("error getting DLM Lifecycle Policy (%s): %s", rs.Primary.ID, err) + } + + return nil + } +} + +func dlmLifecyclePolicyBasicConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "dlm_lifecycle_role" { + name = %q + + assume_role_policy = < 0 { + return fmt.Errorf("Direct Connect Gateway (%s) is not dissociated from VGW %s", rs.Primary.Attributes["dx_gateway_id"], rs.Primary.Attributes["vpn_gateway_id"]) + } + } + return nil +} + +func testAccCheckAwsDxGatewayAssociationExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + return nil + } +} + +func testAccDxGatewayAssociationConfig(rName string, rBgpAsn int) string { + return fmt.Sprintf(` +resource "aws_dx_gateway" "test" { + name = "terraform-testacc-dxgwassoc-%s" + amazon_side_asn = "%d" +} + +resource "aws_vpc" "test" { + cidr_block = "10.255.255.0/28" + tags { + Name = "terraform-testacc-dxgwassoc-%s" + } +} + +resource "aws_vpn_gateway" "test" { + tags { + Name = "terraform-testacc-dxgwassoc-%s" + } +} + +resource "aws_vpn_gateway_attachment" "test" { + vpc_id = "${aws_vpc.test.id}" + vpn_gateway_id = "${aws_vpn_gateway.test.id}" +} + +resource "aws_dx_gateway_association" "test" { + dx_gateway_id = "${aws_dx_gateway.test.id}" + vpn_gateway_id = "${aws_vpn_gateway_attachment.test.vpn_gateway_id}" +} +`, rName, rBgpAsn, rName, rName) +} + +func testAccDxGatewayAssociationConfig_multiVgws(rName string, rBgpAsn int) string { + return fmt.Sprintf(` +resource "aws_dx_gateway" "test" { + name = 
"terraform-testacc-dxgwassoc-%s" + amazon_side_asn = "%d" +} + +resource "aws_vpc" "test1" { + cidr_block = "10.255.255.16/28" + tags { + Name = "terraform-testacc-dxgwassoc-%s-1" + } +} + +resource "aws_vpn_gateway" "test1" { + tags { + Name = "terraform-testacc-dxgwassoc-%s-1" + } +} + +resource "aws_vpn_gateway_attachment" "test1" { + vpc_id = "${aws_vpc.test1.id}" + vpn_gateway_id = "${aws_vpn_gateway.test1.id}" +} + +resource "aws_dx_gateway_association" "test1" { + dx_gateway_id = "${aws_dx_gateway.test.id}" + vpn_gateway_id = "${aws_vpn_gateway_attachment.test1.vpn_gateway_id}" +} + +resource "aws_vpc" "test2" { + cidr_block = "10.255.255.32/28" + tags { + Name = "terraform-testacc-dxgwassoc-%s-2" + } +} + +resource "aws_vpn_gateway" "test2" { + tags { + Name = "terraform-testacc-dxgwassoc-%s-2" + } +} + +resource "aws_vpn_gateway_attachment" "test2" { + vpc_id = "${aws_vpc.test2.id}" + vpn_gateway_id = "${aws_vpn_gateway.test2.id}" +} + +resource "aws_dx_gateway_association" "test2" { + dx_gateway_id = "${aws_dx_gateway.test.id}" + vpn_gateway_id = "${aws_vpn_gateway_attachment.test2.vpn_gateway_id}" +} +`, rName, rBgpAsn, rName, rName, rName, rName) +} diff --git a/aws/resource_aws_dx_gateway_test.go b/aws/resource_aws_dx_gateway_test.go new file mode 100644 index 00000000000..18a9feebe4a --- /dev/null +++ b/aws/resource_aws_dx_gateway_test.go @@ -0,0 +1,132 @@ +package aws + +import ( + "fmt" + "math/rand" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsDxGateway_importBasic(t *testing.T) { + resourceName := "aws_dx_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxGatewayConfig(acctest.RandString(5), randIntRange(64512, 65534)), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAwsDxGateway_importComplex(t *testing.T) { + checkFn := func(s []*terraform.InstanceState) error { + if len(s) != 3 { + return fmt.Errorf("Got %d resources, expected 3. 
State: %#v", len(s), s) + } + return nil + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxGatewayAssociationConfig_multiVgws(acctest.RandString(5), randIntRange(64512, 65534)), + }, + + { + ResourceName: "aws_dx_gateway.test", + ImportState: true, + ImportStateCheck: checkFn, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAwsDxGateway_basic(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxGatewayConfig(acctest.RandString(5), randIntRange(64512, 65534)), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxGatewayExists("aws_dx_gateway.test"), + ), + }, + }, + }) +} + +func testAccCheckAwsDxGatewayDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).dxconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_dx_gateway" { + continue + } + + input := &directconnect.DescribeDirectConnectGatewaysInput{ + DirectConnectGatewayId: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeDirectConnectGateways(input) + if err != nil { + return err + } + for _, v := range resp.DirectConnectGateways { + if *v.DirectConnectGatewayId == rs.Primary.ID && !(*v.DirectConnectGatewayState == directconnect.GatewayStateDeleted) { + return fmt.Errorf("[DESTROY ERROR] DX Gateway (%s) not deleted", rs.Primary.ID) + } + } + } + return nil +} + +func testAccCheckAwsDxGatewayExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + return nil + } +} + +func testAccDxGatewayConfig(rName string, rBgpAsn int) string { + return fmt.Sprintf(` + resource "aws_dx_gateway" "test" { + name = "terraform-testacc-dxgw-%s" + amazon_side_asn = "%d" + } + `, rName, rBgpAsn) +} + +// Local copy of acctest.RandIntRange until https://github.com/hashicorp/terraform/pull/17438 is merged. 
+func randIntRange(min int, max int) int { + rand.Seed(time.Now().UTC().UnixNano()) + source := rand.New(rand.NewSource(time.Now().UnixNano())) + rangeMax := max - min + + return int(source.Int31n(int32(rangeMax))) + min +} diff --git a/aws/resource_aws_dx_hosted_private_virtual_interface.go b/aws/resource_aws_dx_hosted_private_virtual_interface.go new file mode 100644 index 00000000000..56fc6f85a60 --- /dev/null +++ b/aws/resource_aws_dx_hosted_private_virtual_interface.go @@ -0,0 +1,209 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsDxHostedPrivateVirtualInterface() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDxHostedPrivateVirtualInterfaceCreate, + Read: resourceAwsDxHostedPrivateVirtualInterfaceRead, + Delete: resourceAwsDxHostedPrivateVirtualInterfaceDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsDxHostedPrivateVirtualInterfaceImport, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "connection_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "vlan": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 4094), + }, + "bgp_asn": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "bgp_auth_key": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "address_family": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{directconnect.AddressFamilyIpv4, directconnect.AddressFamilyIpv6}, false), + }, + "customer_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "amazon_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "owner_account_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateAwsAccountId, + }, + "mtu": { + Type: schema.TypeInt, + Default: 1500, + Optional: true, + ForceNew: true, + ValidateFunc: validateIntegerInSlice([]int{1500, 9001}), + }, + "jumbo_frame_capable": { + Type: schema.TypeBool, + Computed: true, + }, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + } +} + +func resourceAwsDxHostedPrivateVirtualInterfaceCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + req := &directconnect.AllocatePrivateVirtualInterfaceInput{ + ConnectionId: aws.String(d.Get("connection_id").(string)), + OwnerAccount: aws.String(d.Get("owner_account_id").(string)), + NewPrivateVirtualInterfaceAllocation: &directconnect.NewPrivateVirtualInterfaceAllocation{ + VirtualInterfaceName: aws.String(d.Get("name").(string)), + Vlan: aws.Int64(int64(d.Get("vlan").(int))), + Asn: aws.Int64(int64(d.Get("bgp_asn").(int))), + AddressFamily: aws.String(d.Get("address_family").(string)), + Mtu: aws.Int64(int64(d.Get("mtu").(int))), + }, + } + if v, ok := d.GetOk("bgp_auth_key"); ok && v.(string) != "" { + 
req.NewPrivateVirtualInterfaceAllocation.AuthKey = aws.String(v.(string)) + } + if v, ok := d.GetOk("customer_address"); ok && v.(string) != "" { + req.NewPrivateVirtualInterfaceAllocation.CustomerAddress = aws.String(v.(string)) + } + if v, ok := d.GetOk("amazon_address"); ok && v.(string) != "" { + req.NewPrivateVirtualInterfaceAllocation.AmazonAddress = aws.String(v.(string)) + } + if v, ok := d.GetOk("mtu"); ok && v.(int) != 0 { + req.NewPrivateVirtualInterfaceAllocation.Mtu = aws.Int64(int64(v.(int))) + } + + log.Printf("[DEBUG] Creating Direct Connect hosted private virtual interface: %#v", req) + resp, err := conn.AllocatePrivateVirtualInterface(req) + if err != nil { + return fmt.Errorf("Error creating Direct Connect hosted private virtual interface: %s", err.Error()) + } + + d.SetId(aws.StringValue(resp.VirtualInterfaceId)) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + if err := dxHostedPrivateVirtualInterfaceWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return err + } + + return resourceAwsDxHostedPrivateVirtualInterfaceRead(d, meta) +} + +func resourceAwsDxHostedPrivateVirtualInterfaceRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vif, err := dxVirtualInterfaceRead(d.Id(), conn) + if err != nil { + return err + } + if vif == nil { + log.Printf("[WARN] Direct Connect virtual interface (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("connection_id", vif.ConnectionId) + d.Set("name", vif.VirtualInterfaceName) + d.Set("vlan", vif.Vlan) + d.Set("bgp_asn", vif.Asn) + d.Set("bgp_auth_key", vif.AuthKey) + d.Set("address_family", vif.AddressFamily) + d.Set("customer_address", vif.CustomerAddress) + d.Set("amazon_address", vif.AmazonAddress) + d.Set("owner_account_id", vif.OwnerAccount) + d.Set("mtu", vif.Mtu) + d.Set("jumbo_frame_capable", vif.JumboFrameCapable) + + return nil +} + +func resourceAwsDxHostedPrivateVirtualInterfaceDelete(d *schema.ResourceData, meta interface{}) error { + return dxVirtualInterfaceDelete(d, meta) +} + +func resourceAwsDxHostedPrivateVirtualInterfaceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + return []*schema.ResourceData{d}, nil +} + +func dxHostedPrivateVirtualInterfaceWaitUntilAvailable(conn *directconnect.DirectConnect, vifId string, timeout time.Duration) error { + return dxVirtualInterfaceWaitUntilAvailable( + conn, + vifId, + timeout, + []string{ + directconnect.VirtualInterfaceStatePending, + }, + []string{ + directconnect.VirtualInterfaceStateAvailable, + directconnect.VirtualInterfaceStateConfirming, + directconnect.VirtualInterfaceStateDown, + }) +} diff --git a/aws/resource_aws_dx_hosted_private_virtual_interface_accepter.go b/aws/resource_aws_dx_hosted_private_virtual_interface_accepter.go new file mode 100644 index 00000000000..87ed7673b19 --- /dev/null +++ b/aws/resource_aws_dx_hosted_private_virtual_interface_accepter.go @@ -0,0 +1,169 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + 
"github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsDxHostedPrivateVirtualInterfaceAccepter() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDxHostedPrivateVirtualInterfaceAccepterCreate, + Read: resourceAwsDxHostedPrivateVirtualInterfaceAccepterRead, + Update: resourceAwsDxHostedPrivateVirtualInterfaceAccepterUpdate, + Delete: resourceAwsDxHostedPrivateVirtualInterfaceAccepterDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsDxHostedPrivateVirtualInterfaceAccepterImport, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "virtual_interface_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "vpn_gateway_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"dx_gateway_id"}, + }, + "dx_gateway_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"vpn_gateway_id"}, + }, + "tags": tagsSchema(), + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + } +} + +func resourceAwsDxHostedPrivateVirtualInterfaceAccepterCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vgwIdRaw, vgwOk := d.GetOk("vpn_gateway_id") + dxgwIdRaw, dxgwOk := d.GetOk("dx_gateway_id") + if vgwOk == dxgwOk { + return fmt.Errorf( + "One of ['vpn_gateway_id', 'dx_gateway_id'] must be set to create a Direct Connect private virtual interface accepter") + } + + vifId := d.Get("virtual_interface_id").(string) + req := &directconnect.ConfirmPrivateVirtualInterfaceInput{ + VirtualInterfaceId: aws.String(vifId), + } + if vgwOk && vgwIdRaw.(string) != "" { + req.VirtualGatewayId = aws.String(vgwIdRaw.(string)) + } + if dxgwOk && dxgwIdRaw.(string) != "" { + req.DirectConnectGatewayId = aws.String(dxgwIdRaw.(string)) + } + + log.Printf("[DEBUG] Accepting Direct Connect hosted private virtual interface: %#v", req) + _, err := conn.ConfirmPrivateVirtualInterface(req) + if err != nil { + return fmt.Errorf("Error accepting Direct Connect hosted private virtual interface: %s", err.Error()) + } + + d.SetId(vifId) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + if err := dxHostedPrivateVirtualInterfaceAccepterWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return err + } + + return resourceAwsDxHostedPrivateVirtualInterfaceAccepterUpdate(d, meta) +} + +func resourceAwsDxHostedPrivateVirtualInterfaceAccepterRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vif, err := dxVirtualInterfaceRead(d.Id(), conn) + if err != nil { + return err + } + if vif == nil { + log.Printf("[WARN] Direct Connect virtual interface (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + vifState := aws.StringValue(vif.VirtualInterfaceState) + if vifState != directconnect.VirtualInterfaceStateAvailable && + vifState != directconnect.VirtualInterfaceStateDown { + log.Printf("[WARN] Direct Connect virtual interface (%s) is '%s', removing from state", vifState, d.Id()) + d.SetId("") + return nil + } + + 
d.Set("virtual_interface_id", vif.VirtualInterfaceId) + d.Set("vpn_gateway_id", vif.VirtualGatewayId) + d.Set("dx_gateway_id", vif.DirectConnectGatewayId) + if err := getTagsDX(conn, d, d.Get("arn").(string)); err != nil { + return err + } + + return nil +} + +func resourceAwsDxHostedPrivateVirtualInterfaceAccepterUpdate(d *schema.ResourceData, meta interface{}) error { + if err := dxVirtualInterfaceUpdate(d, meta); err != nil { + return err + } + + return resourceAwsDxHostedPrivateVirtualInterfaceAccepterRead(d, meta) +} + +func resourceAwsDxHostedPrivateVirtualInterfaceAccepterDelete(d *schema.ResourceData, meta interface{}) error { + log.Printf("[WARN] Will not delete Direct Connect virtual interface. Terraform will remove this resource from the state file, however resources may remain.") + return nil +} + +func resourceAwsDxHostedPrivateVirtualInterfaceAccepterImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + return []*schema.ResourceData{d}, nil +} + +func dxHostedPrivateVirtualInterfaceAccepterWaitUntilAvailable(conn *directconnect.DirectConnect, vifId string, timeout time.Duration) error { + return dxVirtualInterfaceWaitUntilAvailable( + conn, + vifId, + timeout, + []string{ + directconnect.VirtualInterfaceStateConfirming, + directconnect.VirtualInterfaceStatePending, + }, + []string{ + directconnect.VirtualInterfaceStateAvailable, + directconnect.VirtualInterfaceStateDown, + }) +} diff --git a/aws/resource_aws_dx_hosted_private_virtual_interface_test.go b/aws/resource_aws_dx_hosted_private_virtual_interface_test.go new file mode 100644 index 00000000000..6817623d312 --- /dev/null +++ b/aws/resource_aws_dx_hosted_private_virtual_interface_test.go @@ -0,0 +1,164 @@ +package aws + +import ( + "fmt" + "os" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsDxHostedPrivateVirtualInterface_basic(t *testing.T) { + key := "DX_CONNECTION_ID" + connectionId := os.Getenv(key) + if connectionId == "" { + t.Skipf("Environment variable %s is not set", key) + } + key = "DX_HOSTED_VIF_OWNER_ACCOUNT" + ownerAccountId := os.Getenv(key) + if ownerAccountId == "" { + t.Skipf("Environment variable %s is not set", key) + } + vifName := fmt.Sprintf("terraform-testacc-dxvif-%s", acctest.RandString(5)) + bgpAsn := randIntRange(64512, 65534) + vlan := randIntRange(2049, 4094) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxHostedPrivateVirtualInterfaceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxHostedPrivateVirtualInterfaceConfig_basic(connectionId, ownerAccountId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxHostedPrivateVirtualInterfaceExists("aws_dx_hosted_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_hosted_private_virtual_interface.foo", "name", vifName), + ), + }, + // Test import. 
+ { + ResourceName: "aws_dx_hosted_private_virtual_interface.foo", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAwsDxHostedPrivateVirtualInterface_mtuUpdate(t *testing.T) { + key := "DX_CONNECTION_ID" + connectionId := os.Getenv(key) + if connectionId == "" { + t.Skipf("Environment variable %s is not set", key) + } + key = "DX_HOSTED_VIF_OWNER_ACCOUNT" + ownerAccountId := os.Getenv(key) + if ownerAccountId == "" { + t.Skipf("Environment variable %s is not set", key) + } + vifName := fmt.Sprintf("terraform-testacc-dxvif-%s", acctest.RandString(5)) + bgpAsn := randIntRange(64512, 65534) + vlan := randIntRange(2049, 4094) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxHostedPrivateVirtualInterfaceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxHostedPrivateVirtualInterfaceConfig_basic(connectionId, ownerAccountId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxHostedPrivateVirtualInterfaceExists("aws_dx_hosted_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_hosted_private_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_hosted_private_virtual_interface.foo", "mtu", "1500"), + resource.TestCheckResourceAttr("aws_dx_hosted_private_virtual_interface.foo", "jumbo_frame_capable", "true"), + ), + }, + { + Config: testAccDxHostedPrivateVirtualInterfaceConfig_JumboFrames(connectionId, ownerAccountId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxHostedPrivateVirtualInterfaceExists("aws_dx_hosted_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_hosted_private_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_hosted_private_virtual_interface.foo", "mtu", "9001"), + ), + }, + { + Config: testAccDxHostedPrivateVirtualInterfaceConfig_basic(connectionId, ownerAccountId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxHostedPrivateVirtualInterfaceExists("aws_dx_hosted_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_hosted_private_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_hosted_private_virtual_interface.foo", "mtu", "1500"), + ), + }, + }, + }) +} + +func testAccCheckAwsDxHostedPrivateVirtualInterfaceDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).dxconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_dx_hosted_private_virtual_interface" { + continue + } + + input := &directconnect.DescribeVirtualInterfacesInput{ + VirtualInterfaceId: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeVirtualInterfaces(input) + if err != nil { + return err + } + for _, v := range resp.VirtualInterfaces { + if *v.VirtualInterfaceId == rs.Primary.ID && !(*v.VirtualInterfaceState == directconnect.VirtualInterfaceStateDeleted) { + return fmt.Errorf("[DESTROY ERROR] Dx Private VIF (%s) not deleted", rs.Primary.ID) + } + } + } + return nil +} + +func testAccCheckAwsDxHostedPrivateVirtualInterfaceExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + return nil + } +} + +func testAccDxHostedPrivateVirtualInterfaceConfig_basic(cid, ownerAcctId, n string, bgpAsn, vlan int) string { + return 
fmt.Sprintf(` +resource "aws_dx_hosted_private_virtual_interface" "foo" { + connection_id = "%s" + owner_account_id = "%s" + + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d +} +`, cid, ownerAcctId, n, vlan, bgpAsn) +} + +func testAccDxHostedPrivateVirtualInterfaceConfig_JumboFrames(cid, ownerAcctId, n string, bgpAsn, vlan int) string { + return fmt.Sprintf(` +resource "aws_dx_hosted_private_virtual_interface" "foo" { + connection_id = "%s" + owner_account_id = "%s" + + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d + mtu = 9001 +} +`, cid, ownerAcctId, n, vlan, bgpAsn) +} diff --git a/aws/resource_aws_dx_hosted_public_virtual_interface.go b/aws/resource_aws_dx_hosted_public_virtual_interface.go new file mode 100644 index 00000000000..850acd898f4 --- /dev/null +++ b/aws/resource_aws_dx_hosted_public_virtual_interface.go @@ -0,0 +1,215 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsDxHostedPublicVirtualInterface() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDxHostedPublicVirtualInterfaceCreate, + Read: resourceAwsDxHostedPublicVirtualInterfaceRead, + Delete: resourceAwsDxHostedPublicVirtualInterfaceDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsDxHostedPublicVirtualInterfaceImport, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "connection_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "vlan": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 4094), + }, + "bgp_asn": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "bgp_auth_key": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "address_family": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{directconnect.AddressFamilyIpv4, directconnect.AddressFamilyIpv6}, false), + }, + "customer_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "amazon_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "owner_account_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateAwsAccountId, + }, + "route_filter_prefixes": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + MinItems: 1, + }, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + } +} + +func resourceAwsDxHostedPublicVirtualInterfaceCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + addressFamily := d.Get("address_family").(string) + caRaw, caOk := d.GetOk("customer_address") + aaRaw, aaOk := d.GetOk("amazon_address") + if addressFamily == directconnect.AddressFamilyIpv4 { + if !caOk { + return fmt.Errorf("'customer_address' must be set when 'address_family' is '%s'", addressFamily) + } + if !aaOk { + return fmt.Errorf("'amazon_address' must be 
set when 'address_family' is '%s'", addressFamily) + } + } + + req := &directconnect.AllocatePublicVirtualInterfaceInput{ + ConnectionId: aws.String(d.Get("connection_id").(string)), + OwnerAccount: aws.String(d.Get("owner_account_id").(string)), + NewPublicVirtualInterfaceAllocation: &directconnect.NewPublicVirtualInterfaceAllocation{ + VirtualInterfaceName: aws.String(d.Get("name").(string)), + Vlan: aws.Int64(int64(d.Get("vlan").(int))), + Asn: aws.Int64(int64(d.Get("bgp_asn").(int))), + AddressFamily: aws.String(addressFamily), + }, + } + if v, ok := d.GetOk("bgp_auth_key"); ok && v.(string) != "" { + req.NewPublicVirtualInterfaceAllocation.AuthKey = aws.String(v.(string)) + } + if caOk && caRaw.(string) != "" { + req.NewPublicVirtualInterfaceAllocation.CustomerAddress = aws.String(caRaw.(string)) + } + if aaOk && aaRaw.(string) != "" { + req.NewPublicVirtualInterfaceAllocation.AmazonAddress = aws.String(aaRaw.(string)) + } + if v, ok := d.GetOk("route_filter_prefixes"); ok { + req.NewPublicVirtualInterfaceAllocation.RouteFilterPrefixes = expandDxRouteFilterPrefixes(v.(*schema.Set).List()) + } + + log.Printf("[DEBUG] Allocating Direct Connect hosted public virtual interface: %#v", req) + resp, err := conn.AllocatePublicVirtualInterface(req) + if err != nil { + return fmt.Errorf("Error allocating Direct Connect hosted public virtual interface: %s", err.Error()) + } + + d.SetId(aws.StringValue(resp.VirtualInterfaceId)) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + if err := dxHostedPublicVirtualInterfaceWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return err + } + + return resourceAwsDxHostedPublicVirtualInterfaceRead(d, meta) +} + +func resourceAwsDxHostedPublicVirtualInterfaceRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vif, err := dxVirtualInterfaceRead(d.Id(), conn) + if err != nil { + return err + } + if vif == nil { + log.Printf("[WARN] Direct Connect virtual interface (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("connection_id", vif.ConnectionId) + d.Set("name", vif.VirtualInterfaceName) + d.Set("vlan", vif.Vlan) + d.Set("bgp_asn", vif.Asn) + d.Set("bgp_auth_key", vif.AuthKey) + d.Set("address_family", vif.AddressFamily) + d.Set("customer_address", vif.CustomerAddress) + d.Set("amazon_address", vif.AmazonAddress) + d.Set("route_filter_prefixes", flattenDxRouteFilterPrefixes(vif.RouteFilterPrefixes)) + d.Set("owner_account_id", vif.OwnerAccount) + + return nil +} + +func resourceAwsDxHostedPublicVirtualInterfaceDelete(d *schema.ResourceData, meta interface{}) error { + return dxVirtualInterfaceDelete(d, meta) +} + +func resourceAwsDxHostedPublicVirtualInterfaceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + return []*schema.ResourceData{d}, nil +} + +func dxHostedPublicVirtualInterfaceWaitUntilAvailable(conn *directconnect.DirectConnect, vifId string, timeout time.Duration) error { + return dxVirtualInterfaceWaitUntilAvailable( + conn, + vifId, + timeout, + []string{ + 
directconnect.VirtualInterfaceStatePending, + }, + []string{ + directconnect.VirtualInterfaceStateAvailable, + directconnect.VirtualInterfaceStateConfirming, + directconnect.VirtualInterfaceStateDown, + directconnect.VirtualInterfaceStateVerifying, + }) +} diff --git a/aws/resource_aws_dx_hosted_public_virtual_interface_accepter.go b/aws/resource_aws_dx_hosted_public_virtual_interface_accepter.go new file mode 100644 index 00000000000..b3cfc696358 --- /dev/null +++ b/aws/resource_aws_dx_hosted_public_virtual_interface_accepter.go @@ -0,0 +1,144 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsDxHostedPublicVirtualInterfaceAccepter() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDxHostedPublicVirtualInterfaceAccepterCreate, + Read: resourceAwsDxHostedPublicVirtualInterfaceAccepterRead, + Update: resourceAwsDxHostedPublicVirtualInterfaceAccepterUpdate, + Delete: resourceAwsDxHostedPublicVirtualInterfaceAccepterDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsDxHostedPublicVirtualInterfaceAccepterImport, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "virtual_interface_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "tags": tagsSchema(), + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + } +} + +func resourceAwsDxHostedPublicVirtualInterfaceAccepterCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vifId := d.Get("virtual_interface_id").(string) + req := &directconnect.ConfirmPublicVirtualInterfaceInput{ + VirtualInterfaceId: aws.String(vifId), + } + + log.Printf("[DEBUG] Accepting Direct Connect hosted public virtual interface: %#v", req) + _, err := conn.ConfirmPublicVirtualInterface(req) + if err != nil { + return fmt.Errorf("Error accepting Direct Connect hosted public virtual interface: %s", err.Error()) + } + + d.SetId(vifId) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + if err := dxHostedPublicVirtualInterfaceAccepterWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return err + } + + return resourceAwsDxHostedPublicVirtualInterfaceAccepterUpdate(d, meta) +} + +func resourceAwsDxHostedPublicVirtualInterfaceAccepterRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vif, err := dxVirtualInterfaceRead(d.Id(), conn) + if err != nil { + return err + } + if vif == nil { + log.Printf("[WARN] Direct Connect virtual interface (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + vifState := aws.StringValue(vif.VirtualInterfaceState) + if vifState != directconnect.VirtualInterfaceStateAvailable && + vifState != directconnect.VirtualInterfaceStateDown && + vifState != directconnect.VirtualInterfaceStateVerifying { + log.Printf("[WARN] Direct Connect virtual interface (%s) is '%s', removing from state", vifState, d.Id()) + d.SetId("") + return nil + } + + d.Set("virtual_interface_id", 
vif.VirtualInterfaceId) + if err := getTagsDX(conn, d, d.Get("arn").(string)); err != nil { + return err + } + + return nil +} + +func resourceAwsDxHostedPublicVirtualInterfaceAccepterUpdate(d *schema.ResourceData, meta interface{}) error { + if err := dxVirtualInterfaceUpdate(d, meta); err != nil { + return err + } + + return resourceAwsDxHostedPublicVirtualInterfaceAccepterRead(d, meta) +} + +func resourceAwsDxHostedPublicVirtualInterfaceAccepterDelete(d *schema.ResourceData, meta interface{}) error { + log.Printf("[WARN] Will not delete Direct Connect virtual interface. Terraform will remove this resource from the state file, however resources may remain.") + return nil +} + +func resourceAwsDxHostedPublicVirtualInterfaceAccepterImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + return []*schema.ResourceData{d}, nil +} + +func dxHostedPublicVirtualInterfaceAccepterWaitUntilAvailable(conn *directconnect.DirectConnect, vifId string, timeout time.Duration) error { + return dxVirtualInterfaceWaitUntilAvailable( + conn, + vifId, + timeout, + []string{ + directconnect.VirtualInterfaceStateConfirming, + directconnect.VirtualInterfaceStatePending, + }, + []string{ + directconnect.VirtualInterfaceStateAvailable, + directconnect.VirtualInterfaceStateDown, + directconnect.VirtualInterfaceStateVerifying, + }) +} diff --git a/aws/resource_aws_dx_hosted_public_virtual_interface_test.go b/aws/resource_aws_dx_hosted_public_virtual_interface_test.go new file mode 100644 index 00000000000..354e987e34e --- /dev/null +++ b/aws/resource_aws_dx_hosted_public_virtual_interface_test.go @@ -0,0 +1,107 @@ +package aws + +import ( + "fmt" + "os" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsDxHostedPublicVirtualInterface_basic(t *testing.T) { + key := "DX_CONNECTION_ID" + connectionId := os.Getenv(key) + if connectionId == "" { + t.Skipf("Environment variable %s is not set", key) + } + key = "DX_HOSTED_VIF_OWNER_ACCOUNT" + ownerAccountId := os.Getenv(key) + if ownerAccountId == "" { + t.Skipf("Environment variable %s is not set", key) + } + vifName := fmt.Sprintf("terraform-testacc-dxvif-%s", acctest.RandString(5)) + bgpAsn := randIntRange(64512, 65534) + vlan := randIntRange(2049, 4094) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxHostedPublicVirtualInterfaceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxHostedPublicVirtualInterfaceConfig_basic(connectionId, ownerAccountId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxHostedPublicVirtualInterfaceExists("aws_dx_hosted_public_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_hosted_public_virtual_interface.foo", "name", vifName), + ), + }, + // Test import. 
+ { + ResourceName: "aws_dx_hosted_public_virtual_interface.foo", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsDxHostedPublicVirtualInterfaceDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).dxconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_dx_hosted_public_virtual_interface" { + continue + } + + input := &directconnect.DescribeVirtualInterfacesInput{ + VirtualInterfaceId: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeVirtualInterfaces(input) + if err != nil { + return err + } + for _, v := range resp.VirtualInterfaces { + if *v.VirtualInterfaceId == rs.Primary.ID && !(*v.VirtualInterfaceState == directconnect.VirtualInterfaceStateDeleted) { + return fmt.Errorf("[DESTROY ERROR] Dx Public VIF (%s) not deleted", rs.Primary.ID) + } + } + } + return nil +} + +func testAccCheckAwsDxHostedPublicVirtualInterfaceExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + return nil + } +} + +func testAccDxHostedPublicVirtualInterfaceConfig_basic(cid, ownerAcctId, n string, bgpAsn, vlan int) string { + return fmt.Sprintf(` +resource "aws_dx_hosted_public_virtual_interface" "foo" { + connection_id = "%s" + owner_account_id = "%s" + + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d + + customer_address = "175.45.176.1/30" + amazon_address = "175.45.176.2/30" + route_filter_prefixes = [ + "210.52.109.0/24", + "175.45.176.0/22" + ] +} +`, cid, ownerAcctId, n, vlan, bgpAsn) +} diff --git a/aws/resource_aws_dx_lag.go b/aws/resource_aws_dx_lag.go index 6c0fbb2524d..14e3b9a3823 100644 --- a/aws/resource_aws_dx_lag.go +++ b/aws/resource_aws_dx_lag.go @@ -111,11 +111,11 @@ func resourceAwsDxLagRead(d *schema.ResourceData, meta interface{}) error { return nil } if len(resp.Lags) != 1 { - return fmt.Errorf("[ERROR] Number of Direct Connect LAGs (%s) isn't one, got %d", d.Id(), len(resp.Lags)) + return fmt.Errorf("Number of Direct Connect LAGs (%s) isn't one, got %d", d.Id(), len(resp.Lags)) } lag := resp.Lags[0] if d.Id() != aws.StringValue(lag.LagId) { - return fmt.Errorf("[ERROR] Direct Connect LAG (%s) not found", d.Id()) + return fmt.Errorf("Direct Connect LAG (%s) not found", d.Id()) } if aws.StringValue(lag.LagState) == directconnect.LagStateDeleted { diff --git a/aws/resource_aws_dx_lag_test.go b/aws/resource_aws_dx_lag_test.go index 964d939e358..e8d21862d20 100644 --- a/aws/resource_aws_dx_lag_test.go +++ b/aws/resource_aws_dx_lag_test.go @@ -11,11 +11,33 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSDxLag_importBasic(t *testing.T) { + resourceName := "aws_dx_lag.hoge" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxLagDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxLagConfig(acctest.RandString(5)), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, + }, + }) +} + func TestAccAWSDxLag_basic(t *testing.T) { lagName1 := fmt.Sprintf("tf-dx-lag-%s", acctest.RandString(5)) lagName2 := fmt.Sprintf("tf-dx-lag-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, 
CheckDestroy: testAccCheckAwsDxLagDestroy, @@ -47,7 +69,7 @@ func TestAccAWSDxLag_basic(t *testing.T) { func TestAccAWSDxLag_tags(t *testing.T) { lagName := fmt.Sprintf("tf-dx-lag-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsDxLagDestroy, diff --git a/aws/resource_aws_dx_private_virtual_interface.go b/aws/resource_aws_dx_private_virtual_interface.go new file mode 100644 index 00000000000..f5851bd9da4 --- /dev/null +++ b/aws/resource_aws_dx_private_virtual_interface.go @@ -0,0 +1,240 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsDxPrivateVirtualInterface() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDxPrivateVirtualInterfaceCreate, + Read: resourceAwsDxPrivateVirtualInterfaceRead, + Update: resourceAwsDxPrivateVirtualInterfaceUpdate, + Delete: resourceAwsDxPrivateVirtualInterfaceDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsDxPrivateVirtualInterfaceImport, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "connection_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "vpn_gateway_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"dx_gateway_id"}, + }, + "dx_gateway_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"vpn_gateway_id"}, + }, + "vlan": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 4094), + }, + "bgp_asn": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "bgp_auth_key": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "address_family": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{directconnect.AddressFamilyIpv4, directconnect.AddressFamilyIpv6}, false), + }, + "customer_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "amazon_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "mtu": { + Type: schema.TypeInt, + Default: 1500, + Optional: true, + ValidateFunc: validateIntegerInSlice([]int{1500, 9001}), + }, + "jumbo_frame_capable": { + Type: schema.TypeBool, + Computed: true, + }, + "tags": tagsSchema(), + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + } +} + +func resourceAwsDxPrivateVirtualInterfaceCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vgwIdRaw, vgwOk := d.GetOk("vpn_gateway_id") + dxgwIdRaw, dxgwOk := d.GetOk("dx_gateway_id") + if vgwOk == dxgwOk { + return fmt.Errorf( + "One of ['vpn_gateway_id', 'dx_gateway_id'] must be set to create a Direct Connect private virtual interface") + } + + req := 
&directconnect.CreatePrivateVirtualInterfaceInput{ + ConnectionId: aws.String(d.Get("connection_id").(string)), + NewPrivateVirtualInterface: &directconnect.NewPrivateVirtualInterface{ + VirtualInterfaceName: aws.String(d.Get("name").(string)), + Vlan: aws.Int64(int64(d.Get("vlan").(int))), + Asn: aws.Int64(int64(d.Get("bgp_asn").(int))), + AddressFamily: aws.String(d.Get("address_family").(string)), + Mtu: aws.Int64(int64(d.Get("mtu").(int))), + }, + } + if vgwOk && vgwIdRaw.(string) != "" { + req.NewPrivateVirtualInterface.VirtualGatewayId = aws.String(vgwIdRaw.(string)) + } + if dxgwOk && dxgwIdRaw.(string) != "" { + req.NewPrivateVirtualInterface.DirectConnectGatewayId = aws.String(dxgwIdRaw.(string)) + } + if v, ok := d.GetOk("bgp_auth_key"); ok { + req.NewPrivateVirtualInterface.AuthKey = aws.String(v.(string)) + } + if v, ok := d.GetOk("customer_address"); ok && v.(string) != "" { + req.NewPrivateVirtualInterface.CustomerAddress = aws.String(v.(string)) + } + if v, ok := d.GetOk("amazon_address"); ok && v.(string) != "" { + req.NewPrivateVirtualInterface.AmazonAddress = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Creating Direct Connect private virtual interface: %#v", req) + resp, err := conn.CreatePrivateVirtualInterface(req) + if err != nil { + return fmt.Errorf("Error creating Direct Connect private virtual interface: %s", err.Error()) + } + + d.SetId(aws.StringValue(resp.VirtualInterfaceId)) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + if err := dxPrivateVirtualInterfaceWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return err + } + + return resourceAwsDxPrivateVirtualInterfaceUpdate(d, meta) +} + +func resourceAwsDxPrivateVirtualInterfaceRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vif, err := dxVirtualInterfaceRead(d.Id(), conn) + if err != nil { + return err + } + if vif == nil { + log.Printf("[WARN] Direct Connect virtual interface (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("connection_id", vif.ConnectionId) + d.Set("name", vif.VirtualInterfaceName) + d.Set("vlan", vif.Vlan) + d.Set("bgp_asn", vif.Asn) + d.Set("bgp_auth_key", vif.AuthKey) + d.Set("address_family", vif.AddressFamily) + d.Set("customer_address", vif.CustomerAddress) + d.Set("amazon_address", vif.AmazonAddress) + d.Set("vpn_gateway_id", vif.VirtualGatewayId) + d.Set("dx_gateway_id", vif.DirectConnectGatewayId) + d.Set("mtu", vif.Mtu) + d.Set("jumbo_frame_capable", vif.JumboFrameCapable) + if err := getTagsDX(conn, d, d.Get("arn").(string)); err != nil { + return err + } + + return nil +} + +func resourceAwsDxPrivateVirtualInterfaceUpdate(d *schema.ResourceData, meta interface{}) error { + if err := dxVirtualInterfaceUpdate(d, meta); err != nil { + return err + } + + if err := dxPrivateVirtualInterfaceWaitUntilAvailable(meta.(*AWSClient).dxconn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return err + } + + return resourceAwsDxPrivateVirtualInterfaceRead(d, meta) +} + +func resourceAwsDxPrivateVirtualInterfaceDelete(d *schema.ResourceData, meta interface{}) error { + return dxVirtualInterfaceDelete(d, meta) +} + +func resourceAwsDxPrivateVirtualInterfaceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + arn := arn.ARN{ + Partition: 
meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + return []*schema.ResourceData{d}, nil +} + +func dxPrivateVirtualInterfaceWaitUntilAvailable(conn *directconnect.DirectConnect, vifId string, timeout time.Duration) error { + return dxVirtualInterfaceWaitUntilAvailable( + conn, + vifId, + timeout, + []string{ + directconnect.VirtualInterfaceStatePending, + }, + []string{ + directconnect.VirtualInterfaceStateAvailable, + directconnect.VirtualInterfaceStateDown, + }) +} diff --git a/aws/resource_aws_dx_private_virtual_interface_test.go b/aws/resource_aws_dx_private_virtual_interface_test.go new file mode 100644 index 00000000000..8a9eba3584d --- /dev/null +++ b/aws/resource_aws_dx_private_virtual_interface_test.go @@ -0,0 +1,246 @@ +package aws + +import ( + "fmt" + "os" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsDxPrivateVirtualInterface_basic(t *testing.T) { + key := "DX_CONNECTION_ID" + connectionId := os.Getenv(key) + if connectionId == "" { + t.Skipf("Environment variable %s is not set", key) + } + vifName := fmt.Sprintf("terraform-testacc-dxvif-%s", acctest.RandString(5)) + bgpAsn := randIntRange(64512, 65534) + vlan := randIntRange(2049, 4094) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxPrivateVirtualInterfaceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxPrivateVirtualInterfaceConfig_noTags(connectionId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxPrivateVirtualInterfaceExists("aws_dx_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "tags.%", "0"), + ), + }, + { + Config: testAccDxPrivateVirtualInterfaceConfig_tags(connectionId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxPrivateVirtualInterfaceExists("aws_dx_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "tags.Environment", "test"), + ), + }, + // Test import. 
+ { + ResourceName: "aws_dx_private_virtual_interface.foo", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAwsDxPrivateVirtualInterface_dxGateway(t *testing.T) { + key := "DX_CONNECTION_ID" + connectionId := os.Getenv(key) + if connectionId == "" { + t.Skipf("Environment variable %s is not set", key) + } + vifName := fmt.Sprintf("terraform-testacc-dxvif-%s", acctest.RandString(5)) + amzAsn := randIntRange(64512, 65534) + bgpAsn := randIntRange(64512, 65534) + vlan := randIntRange(2049, 4094) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxPrivateVirtualInterfaceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxPrivateVirtualInterfaceConfig_dxGateway(connectionId, vifName, amzAsn, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxPrivateVirtualInterfaceExists("aws_dx_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "name", vifName), + ), + }, + }, + }) +} + +func TestAccAwsDxPrivateVirtualInterface_mtuUpdate(t *testing.T) { + key := "DX_CONNECTION_ID" + connectionId := os.Getenv(key) + if connectionId == "" { + t.Skipf("Environment variable %s is not set", key) + } + vifName := fmt.Sprintf("terraform-testacc-dxvif-%s", acctest.RandString(5)) + bgpAsn := randIntRange(64512, 65534) + vlan := randIntRange(2049, 4094) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxPrivateVirtualInterfaceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxPrivateVirtualInterfaceConfig_noTags(connectionId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxPrivateVirtualInterfaceExists("aws_dx_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "mtu", "1500"), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "jumbo_frame_capable", "true"), + ), + }, + { + Config: testAccDxPrivateVirtualInterfaceConfig_jumboFrames(connectionId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxPrivateVirtualInterfaceExists("aws_dx_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "mtu", "9001"), + ), + }, + { + Config: testAccDxPrivateVirtualInterfaceConfig_noTags(connectionId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxPrivateVirtualInterfaceExists("aws_dx_private_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_private_virtual_interface.foo", "mtu", "1500"), + ), + }, + }, + }) +} + +func testAccCheckAwsDxPrivateVirtualInterfaceDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).dxconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_dx_private_virtual_interface" { + continue + } + + input := &directconnect.DescribeVirtualInterfacesInput{ + VirtualInterfaceId: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeVirtualInterfaces(input) + if err != nil { + return err + } + for _, v := range 
resp.VirtualInterfaces { + if *v.VirtualInterfaceId == rs.Primary.ID && !(*v.VirtualInterfaceState == directconnect.VirtualInterfaceStateDeleted) { + return fmt.Errorf("[DESTROY ERROR] Dx Private VIF (%s) not deleted", rs.Primary.ID) + } + } + } + return nil +} + +func testAccCheckAwsDxPrivateVirtualInterfaceExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + return nil + } +} + +func testAccDxPrivateVirtualInterfaceConfig_noTags(cid, n string, bgpAsn, vlan int) string { + return fmt.Sprintf(` +resource "aws_vpn_gateway" "foo" { + tags { + Name = "%s" + } +} + +resource "aws_dx_private_virtual_interface" "foo" { + connection_id = "%s" + + vpn_gateway_id = "${aws_vpn_gateway.foo.id}" + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d +} +`, n, cid, n, vlan, bgpAsn) +} + +func testAccDxPrivateVirtualInterfaceConfig_tags(cid, n string, bgpAsn, vlan int) string { + return fmt.Sprintf(` +resource "aws_vpn_gateway" "foo" { + tags { + Name = "%s" + } +} + +resource "aws_dx_private_virtual_interface" "foo" { + connection_id = "%s" + + vpn_gateway_id = "${aws_vpn_gateway.foo.id}" + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d + + tags { + Environment = "test" + } +} +`, n, cid, n, vlan, bgpAsn) +} + +func testAccDxPrivateVirtualInterfaceConfig_dxGateway(cid, n string, amzAsn, bgpAsn, vlan int) string { + return fmt.Sprintf(` +resource "aws_dx_gateway" "foo" { + name = "%s" + amazon_side_asn = %d +} + +resource "aws_dx_private_virtual_interface" "foo" { + connection_id = "%s" + + dx_gateway_id = "${aws_dx_gateway.foo.id}" + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d +} +`, n, amzAsn, cid, n, vlan, bgpAsn) +} + +func testAccDxPrivateVirtualInterfaceConfig_jumboFrames(cid, n string, bgpAsn, vlan int) string { + return fmt.Sprintf(` +resource "aws_vpn_gateway" "foo" { + tags { + Name = "%s" + } +} + +resource "aws_dx_private_virtual_interface" "foo" { + connection_id = "%s" + + vpn_gateway_id = "${aws_vpn_gateway.foo.id}" + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d + mtu = 9001 +} +`, n, cid, n, vlan, bgpAsn) +} diff --git a/aws/resource_aws_dx_public_virtual_interface.go b/aws/resource_aws_dx_public_virtual_interface.go new file mode 100644 index 00000000000..26d3cb3a1a3 --- /dev/null +++ b/aws/resource_aws_dx_public_virtual_interface.go @@ -0,0 +1,224 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsDxPublicVirtualInterface() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsDxPublicVirtualInterfaceCreate, + Read: resourceAwsDxPublicVirtualInterfaceRead, + Update: resourceAwsDxPublicVirtualInterfaceUpdate, + Delete: resourceAwsDxPublicVirtualInterfaceDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsDxPublicVirtualInterfaceImport, + }, + CustomizeDiff: resourceAwsDxPublicVirtualInterfaceCustomizeDiff, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "connection_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + 
}, + "vlan": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 4094), + }, + "bgp_asn": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "bgp_auth_key": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "address_family": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{directconnect.AddressFamilyIpv4, directconnect.AddressFamilyIpv6}, false), + }, + "customer_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "amazon_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "route_filter_prefixes": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + MinItems: 1, + }, + "tags": tagsSchema(), + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + } +} + +func resourceAwsDxPublicVirtualInterfaceCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + req := &directconnect.CreatePublicVirtualInterfaceInput{ + ConnectionId: aws.String(d.Get("connection_id").(string)), + NewPublicVirtualInterface: &directconnect.NewPublicVirtualInterface{ + VirtualInterfaceName: aws.String(d.Get("name").(string)), + Vlan: aws.Int64(int64(d.Get("vlan").(int))), + Asn: aws.Int64(int64(d.Get("bgp_asn").(int))), + AddressFamily: aws.String(d.Get("address_family").(string)), + }, + } + if v, ok := d.GetOk("bgp_auth_key"); ok && v.(string) != "" { + req.NewPublicVirtualInterface.AuthKey = aws.String(v.(string)) + } + if v, ok := d.GetOk("customer_address"); ok && v.(string) != "" { + req.NewPublicVirtualInterface.CustomerAddress = aws.String(v.(string)) + } + if v, ok := d.GetOk("amazon_address"); ok && v.(string) != "" { + req.NewPublicVirtualInterface.AmazonAddress = aws.String(v.(string)) + } + if v, ok := d.GetOk("route_filter_prefixes"); ok { + req.NewPublicVirtualInterface.RouteFilterPrefixes = expandDxRouteFilterPrefixes(v.(*schema.Set).List()) + } + + log.Printf("[DEBUG] Creating Direct Connect public virtual interface: %#v", req) + resp, err := conn.CreatePublicVirtualInterface(req) + if err != nil { + return fmt.Errorf("Error creating Direct Connect public virtual interface: %s", err) + } + + d.SetId(aws.StringValue(resp.VirtualInterfaceId)) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + if err := dxPublicVirtualInterfaceWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return err + } + + return resourceAwsDxPublicVirtualInterfaceUpdate(d, meta) +} + +func resourceAwsDxPublicVirtualInterfaceRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).dxconn + + vif, err := dxVirtualInterfaceRead(d.Id(), conn) + if err != nil { + return err + } + if vif == nil { + log.Printf("[WARN] Direct Connect virtual interface (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("connection_id", vif.ConnectionId) + d.Set("name", vif.VirtualInterfaceName) + d.Set("vlan", vif.Vlan) + d.Set("bgp_asn", vif.Asn) + d.Set("bgp_auth_key", vif.AuthKey) + d.Set("address_family", 
vif.AddressFamily) + d.Set("customer_address", vif.CustomerAddress) + d.Set("amazon_address", vif.AmazonAddress) + d.Set("route_filter_prefixes", flattenDxRouteFilterPrefixes(vif.RouteFilterPrefixes)) + if err := getTagsDX(conn, d, d.Get("arn").(string)); err != nil { + return err + } + + return nil +} + +func resourceAwsDxPublicVirtualInterfaceUpdate(d *schema.ResourceData, meta interface{}) error { + if err := dxVirtualInterfaceUpdate(d, meta); err != nil { + return err + } + + return resourceAwsDxPublicVirtualInterfaceRead(d, meta) +} + +func resourceAwsDxPublicVirtualInterfaceDelete(d *schema.ResourceData, meta interface{}) error { + return dxVirtualInterfaceDelete(d, meta) +} + +func resourceAwsDxPublicVirtualInterfaceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "directconnect", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("dxvif/%s", d.Id()), + }.String() + d.Set("arn", arn) + + return []*schema.ResourceData{d}, nil +} + +func resourceAwsDxPublicVirtualInterfaceCustomizeDiff(diff *schema.ResourceDiff, meta interface{}) error { + if diff.Id() == "" { + // New resource. + if addressFamily := diff.Get("address_family").(string); addressFamily == directconnect.AddressFamilyIpv4 { + if _, ok := diff.GetOk("customer_address"); !ok { + return fmt.Errorf("'customer_address' must be set when 'address_family' is '%s'", addressFamily) + } + if _, ok := diff.GetOk("amazon_address"); !ok { + return fmt.Errorf("'amazon_address' must be set when 'address_family' is '%s'", addressFamily) + } + } + } + + return nil +} + +func dxPublicVirtualInterfaceWaitUntilAvailable(conn *directconnect.DirectConnect, vifId string, timeout time.Duration) error { + return dxVirtualInterfaceWaitUntilAvailable( + conn, + vifId, + timeout, + []string{ + directconnect.VirtualInterfaceStatePending, + }, + []string{ + directconnect.VirtualInterfaceStateAvailable, + directconnect.VirtualInterfaceStateDown, + directconnect.VirtualInterfaceStateVerifying, + }) +} diff --git a/aws/resource_aws_dx_public_virtual_interface_test.go b/aws/resource_aws_dx_public_virtual_interface_test.go new file mode 100644 index 00000000000..e9bf6b4bf04 --- /dev/null +++ b/aws/resource_aws_dx_public_virtual_interface_test.go @@ -0,0 +1,135 @@ +package aws + +import ( + "fmt" + "os" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsDxPublicVirtualInterface_basic(t *testing.T) { + key := "DX_CONNECTION_ID" + connectionId := os.Getenv(key) + if connectionId == "" { + t.Skipf("Environment variable %s is not set", key) + } + vifName := fmt.Sprintf("terraform-testacc-dxvif-%s", acctest.RandString(5)) + bgpAsn := randIntRange(64512, 65534) + vlan := randIntRange(2049, 4094) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsDxPublicVirtualInterfaceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDxPublicVirtualInterfaceConfig_noTags(connectionId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxPublicVirtualInterfaceExists("aws_dx_public_virtual_interface.foo"), + 
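+					// Verify the stored name and that no tags are present on the initial create.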
resource.TestCheckResourceAttr("aws_dx_public_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_public_virtual_interface.foo", "tags.%", "0"), + ), + }, + { + Config: testAccDxPublicVirtualInterfaceConfig_tags(connectionId, vifName, bgpAsn, vlan), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsDxPublicVirtualInterfaceExists("aws_dx_public_virtual_interface.foo"), + resource.TestCheckResourceAttr("aws_dx_public_virtual_interface.foo", "name", vifName), + resource.TestCheckResourceAttr("aws_dx_public_virtual_interface.foo", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_dx_public_virtual_interface.foo", "tags.Environment", "test"), + ), + }, + // Test import. + { + ResourceName: "aws_dx_public_virtual_interface.foo", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsDxPublicVirtualInterfaceDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).dxconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_dx_public_virtual_interface" { + continue + } + + input := &directconnect.DescribeVirtualInterfacesInput{ + VirtualInterfaceId: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeVirtualInterfaces(input) + if err != nil { + return err + } + for _, v := range resp.VirtualInterfaces { + if *v.VirtualInterfaceId == rs.Primary.ID && !(*v.VirtualInterfaceState == directconnect.VirtualInterfaceStateDeleted) { + return fmt.Errorf("[DESTROY ERROR] Dx Public VIF (%s) not deleted", rs.Primary.ID) + } + } + } + return nil +} + +func testAccCheckAwsDxPublicVirtualInterfaceExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + return nil + } +} + +func testAccDxPublicVirtualInterfaceConfig_noTags(cid, n string, bgpAsn, vlan int) string { + return fmt.Sprintf(` +resource "aws_dx_public_virtual_interface" "foo" { + connection_id = "%s" + + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d + + customer_address = "175.45.176.1/30" + amazon_address = "175.45.176.2/30" + route_filter_prefixes = [ + "210.52.109.0/24", + "175.45.176.0/22" + ] +} +`, cid, n, vlan, bgpAsn) +} + +func testAccDxPublicVirtualInterfaceConfig_tags(cid, n string, bgpAsn, vlan int) string { + return fmt.Sprintf(` +resource "aws_dx_public_virtual_interface" "foo" { + connection_id = "%s" + + name = "%s" + vlan = %d + address_family = "ipv4" + bgp_asn = %d + + customer_address = "175.45.176.1/30" + amazon_address = "175.45.176.2/30" + route_filter_prefixes = [ + "210.52.109.0/24", + "175.45.176.0/22" + ] + + tags { + Environment = "test" + } +} +`, cid, n, vlan, bgpAsn) +} diff --git a/aws/resource_aws_dynamodb_global_table_test.go b/aws/resource_aws_dynamodb_global_table_test.go index ec19f2e6db5..6eedb381982 100644 --- a/aws/resource_aws_dynamodb_global_table_test.go +++ b/aws/resource_aws_dynamodb_global_table_test.go @@ -16,7 +16,7 @@ func TestAccAWSDynamoDbGlobalTable_basic(t *testing.T) { resourceName := "aws_dynamodb_global_table.test" tableName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsDynamoDbGlobalTableDestroy, @@ -51,7 +51,7 @@ func TestAccAWSDynamoDbGlobalTable_multipleRegions(t *testing.T) { resourceName := "aws_dynamodb_global_table.test" 
tableName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsDynamoDbGlobalTableDestroy, @@ -93,16 +93,16 @@ func TestAccAWSDynamoDbGlobalTable_import(t *testing.T) { resourceName := "aws_dynamodb_global_table.test" tableName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSesTemplateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDynamoDbGlobalTableConfig_basic(tableName), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_dynamodb_table.go b/aws/resource_aws_dynamodb_table.go index 25bd461e86d..2fcc37d58c3 100644 --- a/aws/resource_aws_dynamodb_table.go +++ b/aws/resource_aws_dynamodb_table.go @@ -2,6 +2,7 @@ package aws import ( "bytes" + "errors" "fmt" "log" "strings" @@ -42,12 +43,21 @@ func resourceAwsDynamoDbTable() *schema.Resource { func(diff *schema.ResourceDiff, v interface{}) error { if diff.Id() != "" && diff.HasChange("server_side_encryption") { o, n := diff.GetChange("server_side_encryption") - if isDynamoDbTableSSEDisabled(o) && isDynamoDbTableSSEDisabled(n) { + if isDynamoDbTableOptionDisabled(o) && isDynamoDbTableOptionDisabled(n) { return diff.Clear("server_side_encryption") } } return nil }, + func(diff *schema.ResourceDiff, v interface{}) error { + if diff.Id() != "" && diff.HasChange("point_in_time_recovery") { + o, n := diff.GetChange("point_in_time_recovery") + if isDynamoDbTableOptionDisabled(o) && isDynamoDbTableOptionDisabled(n) { + return diff.Clear("point_in_time_recovery") + } + } + return nil + }, ), SchemaVersion: 1, @@ -238,6 +248,20 @@ func resourceAwsDynamoDbTable() *schema.Resource { }, }, "tags": tagsSchema(), + "point_in_time_recovery": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, }, } } @@ -448,6 +472,15 @@ func resourceAwsDynamoDbTableUpdate(d *schema.ResourceData, meta interface{}) er } } + if d.HasChange("point_in_time_recovery") { + _, enabled := d.GetChange("point_in_time_recovery.0.enabled") + if !d.IsNewResource() || enabled.(bool) { + if err := updateDynamoDbPITR(d, conn); err != nil { + return err + } + } + } + return resourceAwsDynamoDbTableRead(d, meta) } @@ -491,6 +524,14 @@ func resourceAwsDynamoDbTableRead(d *schema.ResourceData, meta interface{}) erro } d.Set("tags", tags) + pitrOut, err := conn.DescribeContinuousBackups(&dynamodb.DescribeContinuousBackupsInput{ + TableName: aws.String(d.Id()), + }) + if err != nil && !isAWSErr(err, "UnknownOperationException", "") { + return err + } + d.Set("point_in_time_recovery", flattenDynamoDbPitr(pitrOut)) + return nil } @@ -539,7 +580,7 @@ func deleteAwsDynamoDbTable(tableName string, conn *dynamodb.DynamoDB) error { TableName: aws.String(tableName), } - return resource.Retry(1*time.Minute, func() *resource.RetryError { + return resource.Retry(5*time.Minute, func() *resource.RetryError { _, err := conn.DeleteTable(input) if err != nil { // Subscriber limit exceeded: Only 10 tables can be created, updated, or deleted simultaneously @@ -608,11 +649,48 
@@ func updateDynamoDbTimeToLive(d *schema.ResourceData, conn *dynamodb.DynamoDB) e return nil } +func updateDynamoDbPITR(d *schema.ResourceData, conn *dynamodb.DynamoDB) error { + toEnable := d.Get("point_in_time_recovery.0.enabled").(bool) + + input := &dynamodb.UpdateContinuousBackupsInput{ + TableName: aws.String(d.Id()), + PointInTimeRecoverySpecification: &dynamodb.PointInTimeRecoverySpecification{ + PointInTimeRecoveryEnabled: aws.Bool(toEnable), + }, + } + + log.Printf("[DEBUG] Updating DynamoDB point in time recovery status to %v", toEnable) + + err := resource.Retry(20*time.Minute, func() *resource.RetryError { + _, err := conn.UpdateContinuousBackups(input) + if err != nil { + // Backups are still being enabled for this newly created table + if isAWSErr(err, dynamodb.ErrCodeContinuousBackupsUnavailableException, "Backups are being enabled") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + + if err != nil { + return err + } + + if err := waitForDynamoDbBackupUpdateToBeCompleted(d.Id(), toEnable, conn); err != nil { + return fmt.Errorf("Error waiting for DynamoDB PITR update: %s", err) + } + + return nil +} + func readDynamoDbTableTags(arn string, conn *dynamodb.DynamoDB) (map[string]string, error) { output, err := conn.ListTagsOfResource(&dynamodb.ListTagsOfResourceInput{ ResourceArn: aws.String(arn), }) - if err != nil { + + // Do not fail if interfacing with dynamodb-local + if err != nil && !isAWSErr(err, "UnknownOperationException", "Tagging is not currently supported in DynamoDB Local.") { return nil, fmt.Errorf("Error reading tags from dynamodb resource: %s", err) } @@ -720,6 +798,41 @@ func waitForDynamoDbTableToBeActive(tableName string, timeout time.Duration, con return err } +func waitForDynamoDbBackupUpdateToBeCompleted(tableName string, toEnable bool, conn *dynamodb.DynamoDB) error { + var pending []string + target := []string{dynamodb.TimeToLiveStatusDisabled} + + if toEnable { + pending = []string{ + "ENABLING", + } + target = []string{dynamodb.PointInTimeRecoveryStatusEnabled} + } + + stateConf := &resource.StateChangeConf{ + Pending: pending, + Target: target, + Timeout: 10 * time.Second, + Refresh: func() (interface{}, string, error) { + result, err := conn.DescribeContinuousBackups(&dynamodb.DescribeContinuousBackupsInput{ + TableName: aws.String(tableName), + }) + if err != nil { + return 42, "", err + } + + if result.ContinuousBackupsDescription == nil || result.ContinuousBackupsDescription.PointInTimeRecoveryDescription == nil { + return 42, "", errors.New("Error reading backup status from dynamodb resource: empty description") + } + pitr := result.ContinuousBackupsDescription.PointInTimeRecoveryDescription + + return result, *pitr.PointInTimeRecoveryStatus, nil + }, + } + _, err := stateConf.WaitForState() + return err +} + func waitForDynamoDbTtlUpdateToBeCompleted(tableName string, toEnable bool, conn *dynamodb.DynamoDB) error { pending := []string{ dynamodb.TimeToLiveStatusEnabled, @@ -757,7 +870,7 @@ func waitForDynamoDbTtlUpdateToBeCompleted(tableName string, toEnable bool, conn return err } -func isDynamoDbTableSSEDisabled(v interface{}) bool { +func isDynamoDbTableOptionDisabled(v interface{}) bool { options := v.([]interface{}) if len(options) == 0 { return true diff --git a/aws/resource_aws_dynamodb_table_item.go b/aws/resource_aws_dynamodb_table_item.go index 764bf8b12d9..8ceb8a23224 100644 --- a/aws/resource_aws_dynamodb_table_item.go +++ b/aws/resource_aws_dynamodb_table_item.go @@ 
-105,9 +105,9 @@ func resourceAwsDynamoDbTableItemUpdate(d *schema.ResourceData, meta interface{} updates := map[string]*dynamodb.AttributeValueUpdate{} for key, value := range attributes { - // Hash keys are not updatable, so we'll basically create + // Hash keys and range keys are not updatable, so we'll basically create // a new record and delete the old one below - if key == hashKey { + if key == hashKey || key == rangeKey { continue } updates[key] = &dynamodb.AttributeValueUpdate{ @@ -224,7 +224,7 @@ func resourceAwsDynamoDbTableItemDelete(d *schema.ResourceData, meta interface{} func buildDynamoDbExpressionAttributeNames(attrs map[string]*dynamodb.AttributeValue) map[string]*string { names := map[string]*string{} - for key, _ := range attrs { + for key := range attrs { names["#a_"+key] = aws.String(key) } @@ -233,7 +233,7 @@ func buildDynamoDbExpressionAttributeNames(attrs map[string]*dynamodb.AttributeV func buildDynamoDbProjectionExpression(attrs map[string]*dynamodb.AttributeValue) *string { keys := []string{} - for key, _ := range attrs { + for key := range attrs { keys = append(keys, key) } return aws.String("#a_" + strings.Join(keys, ", #a_")) diff --git a/aws/resource_aws_dynamodb_table_item_test.go b/aws/resource_aws_dynamodb_table_item_test.go index d18c2b7771e..152e7fb84eb 100644 --- a/aws/resource_aws_dynamodb_table_item_test.go +++ b/aws/resource_aws_dynamodb_table_item_test.go @@ -24,7 +24,7 @@ func TestAccAWSDynamoDbTableItem_basic(t *testing.T) { "four": {"N": "44444"} }` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbItemDestroy, @@ -58,7 +58,7 @@ func TestAccAWSDynamoDbTableItem_rangeKey(t *testing.T) { "four": {"N": "44444"} }` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbItemDestroy, @@ -101,7 +101,7 @@ func TestAccAWSDynamoDbTableItem_withMultipleItems(t *testing.T) { "four": {"S": "four"} }` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbItemDestroy, @@ -148,7 +148,7 @@ func TestAccAWSDynamoDbTableItem_update(t *testing.T) { "new": {"S": "shiny new one"} }` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbItemDestroy, @@ -177,6 +177,55 @@ func TestAccAWSDynamoDbTableItem_update(t *testing.T) { }) } +func TestAccAWSDynamoDbTableItem_updateWithRangeKey(t *testing.T) { + var conf dynamodb.GetItemOutput + + tableName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + hashKey := "hashKey" + rangeKey := "rangeKey" + + itemBefore := `{ + "hashKey": {"S": "before"}, + "rangeKey": {"S": "rangeBefore"}, + "value": {"S": "valueBefore"} +}` + itemAfter := `{ + "hashKey": {"S": "before"}, + "rangeKey": {"S": "rangeAfter"}, + "value": {"S": "valueAfter"} +}` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDynamoDbItemDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDynamoDbItemConfigWithRangeKey(tableName, hashKey, rangeKey, itemBefore), + Check: resource.ComposeTestCheckFunc( + 
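+					// First step: create the item with the initial range key value before changing it in the next step.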
testAccCheckAWSDynamoDbTableItemExists("aws_dynamodb_table_item.test", &conf), + testAccCheckAWSDynamoDbTableItemCount(tableName, 1), + resource.TestCheckResourceAttr("aws_dynamodb_table_item.test", "hash_key", hashKey), + resource.TestCheckResourceAttr("aws_dynamodb_table_item.test", "range_key", rangeKey), + resource.TestCheckResourceAttr("aws_dynamodb_table_item.test", "table_name", tableName), + resource.TestCheckResourceAttr("aws_dynamodb_table_item.test", "item", itemBefore+"\n"), + ), + }, + { + Config: testAccAWSDynamoDbItemConfigWithRangeKey(tableName, hashKey, rangeKey, itemAfter), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDynamoDbTableItemExists("aws_dynamodb_table_item.test", &conf), + testAccCheckAWSDynamoDbTableItemCount(tableName, 1), + resource.TestCheckResourceAttr("aws_dynamodb_table_item.test", "hash_key", hashKey), + resource.TestCheckResourceAttr("aws_dynamodb_table_item.test", "range_key", rangeKey), + resource.TestCheckResourceAttr("aws_dynamodb_table_item.test", "table_name", tableName), + resource.TestCheckResourceAttr("aws_dynamodb_table_item.test", "item", itemAfter+"\n"), + ), + }, + }, + }) +} + func testAccCheckAWSDynamoDbItemDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).dynamodbconn @@ -241,7 +290,7 @@ func testAccCheckAWSDynamoDbTableItemExists(n string, item *dynamodb.GetItemOutp ExpressionAttributeNames: buildDynamoDbExpressionAttributeNames(attributes), }) if err != nil { - return fmt.Errorf("[ERROR] Problem getting table item '%s': %s", rs.Primary.ID, err) + return fmt.Errorf("Problem getting table item '%s': %s", rs.Primary.ID, err) } *item = *result diff --git a/aws/resource_aws_dynamodb_table_migrate.go b/aws/resource_aws_dynamodb_table_migrate.go index 59865effc89..29eb38de472 100644 --- a/aws/resource_aws_dynamodb_table_migrate.go +++ b/aws/resource_aws_dynamodb_table_migrate.go @@ -3,10 +3,10 @@ package aws import ( "fmt" "log" + "strings" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" - "strings" ) func resourceAwsDynamoDbTableMigrateState( diff --git a/aws/resource_aws_dynamodb_table_test.go b/aws/resource_aws_dynamodb_table_test.go index ab43f68a229..af533962902 100644 --- a/aws/resource_aws_dynamodb_table_test.go +++ b/aws/resource_aws_dynamodb_table_test.go @@ -54,6 +54,10 @@ func testSweepDynamoDbTables(region string) error { return !lastPage }) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping DynamoDB Table sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error retrieving DynamoDB Tables: %s", err) } @@ -327,12 +331,77 @@ func TestDiffDynamoDbGSI(t *testing.T) { } } +func TestAccAWSDynamoDbTable_importBasic(t *testing.T) { + resourceName := "aws_dynamodb_table.basic-dynamodb-table" + + rName := acctest.RandomWithPrefix("TerraformTestTable-") + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDynamoDbConfigInitialState(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSDynamoDbTable_importTags(t *testing.T) { + resourceName := "aws_dynamodb_table.basic-dynamodb-table" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSDynamoDbTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDynamoDbConfigTags(), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSDynamoDbTable_importTimeToLive(t *testing.T) { + resourceName := "aws_dynamodb_table.basic-dynamodb-table" + rName := acctest.RandomWithPrefix("TerraformTestTable-") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDynamoDbConfigAddTimeToLive(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSDynamoDbTable_basic(t *testing.T) { var conf dynamodb.DescribeTableOutput rName := acctest.RandomWithPrefix("TerraformTestTable-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -357,7 +426,7 @@ func TestAccAWSDynamoDbTable_extended(t *testing.T) { rName := acctest.RandomWithPrefix("TerraformTestTable-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -379,12 +448,41 @@ func TestAccAWSDynamoDbTable_extended(t *testing.T) { }) } +func TestAccAWSDynamoDbTable_enablePitr(t *testing.T) { + var conf dynamodb.DescribeTableOutput + + rName := acctest.RandomWithPrefix("TerraformTestTable-") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDynamoDbConfigInitialState(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckInitialAWSDynamoDbTableExists("aws_dynamodb_table.basic-dynamodb-table", &conf), + testAccCheckInitialAWSDynamoDbTableConf("aws_dynamodb_table.basic-dynamodb-table"), + ), + }, + { + Config: testAccAWSDynamoDbConfig_backup(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDynamoDbTableHasPointInTimeRecoveryEnabled("aws_dynamodb_table.basic-dynamodb-table"), + resource.TestCheckResourceAttr("aws_dynamodb_table.basic-dynamodb-table", "point_in_time_recovery.#", "1"), + resource.TestCheckResourceAttr("aws_dynamodb_table.basic-dynamodb-table", "point_in_time_recovery.0.enabled", "true"), + ), + }, + }, + }) +} + func TestAccAWSDynamoDbTable_streamSpecification(t *testing.T) { var conf dynamodb.DescribeTableOutput tableName := fmt.Sprintf("TerraformTestStreamTable-%s", acctest.RandString(8)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -414,14 +512,14 @@ func TestAccAWSDynamoDbTable_streamSpecification(t *testing.T) { } func TestAccAWSDynamoDbTable_streamSpecificationValidation(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, Steps: []resource.TestStep{ { Config: testAccAWSDynamoDbConfigStreamSpecification("anything", true, ""), - ExpectError: 
regexp.MustCompile(`stream_view_type is required when stream_enabled = true$`), + ExpectError: regexp.MustCompile(`stream_view_type is required when stream_enabled = true`), }, }, }) @@ -430,7 +528,7 @@ func TestAccAWSDynamoDbTable_streamSpecificationValidation(t *testing.T) { func TestAccAWSDynamoDbTable_tags(t *testing.T) { var conf dynamodb.DescribeTableOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -453,7 +551,7 @@ func TestAccAWSDynamoDbTable_gsiUpdateCapacity(t *testing.T) { var conf dynamodb.DescribeTableOutput name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -492,7 +590,7 @@ func TestAccAWSDynamoDbTable_gsiUpdateOtherAttributes(t *testing.T) { var conf dynamodb.DescribeTableOutput name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -563,7 +661,7 @@ func TestAccAWSDynamoDbTable_gsiUpdateNonKeyAttributes(t *testing.T) { var conf dynamodb.DescribeTableOutput name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -636,7 +734,7 @@ func TestAccAWSDynamoDbTable_ttl(t *testing.T) { rName := acctest.RandomWithPrefix("TerraformTestTable-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -662,7 +760,7 @@ func TestAccAWSDynamoDbTable_attributeUpdate(t *testing.T) { rName := acctest.RandomWithPrefix("TerraformTestTable-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -698,18 +796,18 @@ func TestAccAWSDynamoDbTable_attributeUpdate(t *testing.T) { func TestAccAWSDynamoDbTable_attributeUpdateValidation(t *testing.T) { rName := acctest.RandomWithPrefix("TerraformTestTable-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, Steps: []resource.TestStep{ { Config: testAccAWSDynamoDbConfigOneAttribute(rName, "firstKey", "unusedKey", "S"), - ExpectError: regexp.MustCompile(`All attributes must be indexed. Unused attributes: \["unusedKey"\]$`), + ExpectError: regexp.MustCompile(`All attributes must be indexed. Unused attributes: \["unusedKey"\]`), }, { Config: testAccAWSDynamoDbConfigTwoAttributes(rName, "firstKey", "secondKey", "firstUnused", "N", "secondUnused", "S"), - ExpectError: regexp.MustCompile(`All attributes must be indexed. Unused attributes: \["firstUnused"\ \"secondUnused\"]$`), + ExpectError: regexp.MustCompile(`All attributes must be indexed. 
Unused attributes: \["firstUnused"\ \"secondUnused\"]`), }, }, }) @@ -736,7 +834,7 @@ func testAccCheckDynamoDbTableTimeToLiveWasUpdated(n string) resource.TestCheckF resp, err := conn.DescribeTimeToLive(params) if err != nil { - return fmt.Errorf("[ERROR] Problem describing time to live for table '%s': %s", rs.Primary.ID, err) + return fmt.Errorf("Problem describing time to live for table '%s': %s", rs.Primary.ID, err) } ttlDescription := resp.TimeToLiveDescription @@ -760,7 +858,7 @@ func TestAccAWSDynamoDbTable_encryption(t *testing.T) { rName := acctest.RandomWithPrefix("TerraformTestTable-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDynamoDbTableDestroy, @@ -780,7 +878,7 @@ func TestAccAWSDynamoDbTable_encryption(t *testing.T) { resource.TestCheckResourceAttr("aws_dynamodb_table.basic-dynamodb-table", "server_side_encryption.#", "0"), func(s *terraform.State) error { if confEncDisabled.Table.CreationDateTime.Equal(*confEncEnabled.Table.CreationDateTime) { - return fmt.Errorf("[ERROR] DynamoDB table not recreated when changing SSE") + return fmt.Errorf("DynamoDB table not recreated when changing SSE") } return nil }, @@ -793,7 +891,7 @@ func TestAccAWSDynamoDbTable_encryption(t *testing.T) { resource.TestCheckResourceAttr("aws_dynamodb_table.basic-dynamodb-table", "server_side_encryption.#", "0"), func(s *terraform.State) error { if !confBasic.Table.CreationDateTime.Equal(*confEncDisabled.Table.CreationDateTime) { - return fmt.Errorf("[ERROR] DynamoDB table was recreated unexpectedly") + return fmt.Errorf("DynamoDB table was recreated unexpectedly") } return nil }, @@ -854,7 +952,7 @@ func testAccCheckInitialAWSDynamoDbTableExists(n string, table *dynamodb.Describ resp, err := conn.DescribeTable(params) if err != nil { - return fmt.Errorf("[ERROR] Problem describing table '%s': %s", rs.Primary.ID, err) + return fmt.Errorf("Problem describing table '%s': %s", rs.Primary.ID, err) } *table = *resp @@ -884,7 +982,7 @@ func testAccCheckInitialAWSDynamoDbTableConf(n string) resource.TestCheckFunc { resp, err := conn.DescribeTable(params) if err != nil { - return fmt.Errorf("[ERROR] Problem describing table '%s': %s", rs.Primary.ID, err) + return fmt.Errorf("Problem describing table '%s': %s", rs.Primary.ID, err) } table := resp.Table @@ -937,6 +1035,37 @@ func testAccCheckInitialAWSDynamoDbTableConf(n string) resource.TestCheckFunc { } } +func testAccCheckDynamoDbTableHasPointInTimeRecoveryEnabled(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No DynamoDB table name specified!") + } + + conn := testAccProvider.Meta().(*AWSClient).dynamodbconn + + resp, err := conn.DescribeContinuousBackups(&dynamodb.DescribeContinuousBackupsInput{ + TableName: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + pitr := resp.ContinuousBackupsDescription.PointInTimeRecoveryDescription + status := *pitr.PointInTimeRecoveryStatus + if status != dynamodb.PointInTimeRecoveryStatusEnabled { + return fmt.Errorf("Point in time backup had a status of %s rather than enabled", status) + } + + return nil + } +} + func testAccCheckDynamoDbTableWasUpdated(n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -1038,6 +1167,25 @@ 
resource "aws_dynamodb_table" "basic-dynamodb-table" { `, rName) } +func testAccAWSDynamoDbConfig_backup(rName string) string { + return fmt.Sprintf(` +resource "aws_dynamodb_table" "basic-dynamodb-table" { + name = "%s" + read_capacity = 1 + write_capacity = 1 + hash_key = "TestTableHashKey" + + attribute { + name = "TestTableHashKey" + type = "S" + } + point_in_time_recovery { + enabled = true + } +} +`, rName) +} + func testAccAWSDynamoDbConfigInitialState(rName string) string { return fmt.Sprintf(` resource "aws_dynamodb_table" "basic-dynamodb-table" { diff --git a/aws/resource_aws_ebs_snapshot.go b/aws/resource_aws_ebs_snapshot.go index 8e6eaba7fcd..7821ece7296 100644 --- a/aws/resource_aws_ebs_snapshot.go +++ b/aws/resource_aws_ebs_snapshot.go @@ -73,9 +73,24 @@ func resourceAwsEbsSnapshotCreate(d *schema.ResourceData, meta interface{}) erro request.Description = aws.String(v.(string)) } - res, err := conn.CreateSnapshot(request) + var res *ec2.Snapshot + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + res, err = conn.CreateSnapshot(request) + + if isAWSErr(err, "SnapshotCreationPerVolumeRateExceeded", "The maximum per volume CreateSnapshot request rate has been exceeded") { + return resource.RetryableError(err) + } + + if err != nil { + return resource.NonRetryableError(err) + } + + return nil + }) + if err != nil { - return err + return fmt.Errorf("error creating EC2 EBS Snapshot: %s", err) } d.SetId(*res.SnapshotId) diff --git a/aws/resource_aws_ebs_snapshot_copy.go b/aws/resource_aws_ebs_snapshot_copy.go new file mode 100644 index 00000000000..f1595412888 --- /dev/null +++ b/aws/resource_aws_ebs_snapshot_copy.go @@ -0,0 +1,176 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsEbsSnapshotCopy() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsEbsSnapshotCopyCreate, + Read: resourceAwsEbsSnapshotCopyRead, + Delete: resourceAwsEbsSnapshotCopyDelete, + + Schema: map[string]*schema.Schema{ + "volume_id": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "owner_id": { + Type: schema.TypeString, + Computed: true, + }, + "owner_alias": { + Type: schema.TypeString, + Computed: true, + }, + "encrypted": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + "volume_size": { + Type: schema.TypeInt, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "data_encryption_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "source_region": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "source_snapshot_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "tags": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsEbsSnapshotCopyCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + request := &ec2.CopySnapshotInput{ + SourceRegion: aws.String(d.Get("source_region").(string)), + SourceSnapshotId: aws.String(d.Get("source_snapshot_id").(string)), + } + if v, ok := d.GetOk("description"); ok { + request.Description = aws.String(v.(string)) + } + if v, ok := 
d.GetOk("encrypted"); ok { + request.Encrypted = aws.Bool(v.(bool)) + } + if v, ok := d.GetOk("kms_key_id"); ok { + request.KmsKeyId = aws.String(v.(string)) + } + + res, err := conn.CopySnapshot(request) + if err != nil { + return err + } + + d.SetId(*res.SnapshotId) + + err = resourceAwsEbsSnapshotCopyWaitForAvailable(d.Id(), conn) + if err != nil { + return err + } + + if err := setTags(conn, d); err != nil { + log.Printf("[WARN] error setting tags: %s", err) + } + + return resourceAwsEbsSnapshotCopyRead(d, meta) +} + +func resourceAwsEbsSnapshotCopyRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + req := &ec2.DescribeSnapshotsInput{ + SnapshotIds: []*string{aws.String(d.Id())}, + } + res, err := conn.DescribeSnapshots(req) + if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidSnapshotID.NotFound" { + log.Printf("Snapshot %q Not found - removing from state", d.Id()) + d.SetId("") + return nil + } + + snapshot := res.Snapshots[0] + + d.Set("description", snapshot.Description) + d.Set("owner_id", snapshot.OwnerId) + d.Set("encrypted", snapshot.Encrypted) + d.Set("owner_alias", snapshot.OwnerAlias) + d.Set("volume_id", snapshot.VolumeId) + d.Set("data_encryption_key_id", snapshot.DataEncryptionKeyId) + d.Set("kms_key_id", snapshot.KmsKeyId) + d.Set("volume_size", snapshot.VolumeSize) + + if err := d.Set("tags", tagsToMap(snapshot.Tags)); err != nil { + log.Printf("[WARN] error saving tags to state: %s", err) + } + + return nil +} + +func resourceAwsEbsSnapshotCopyDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + return resource.Retry(5*time.Minute, func() *resource.RetryError { + request := &ec2.DeleteSnapshotInput{ + SnapshotId: aws.String(d.Id()), + } + _, err := conn.DeleteSnapshot(request) + if err == nil { + return nil + } + + ebsErr, ok := err.(awserr.Error) + if ebsErr.Code() == "SnapshotInUse" { + return resource.RetryableError(fmt.Errorf("EBS SnapshotInUse - trying again while it detaches")) + } + + if !ok { + return resource.NonRetryableError(err) + } + + return resource.NonRetryableError(err) + }) +} + +func resourceAwsEbsSnapshotCopyWaitForAvailable(id string, conn *ec2.EC2) error { + log.Printf("Waiting for Snapshot %s to become available...", id) + + req := &ec2.DescribeSnapshotsInput{ + SnapshotIds: []*string{aws.String(id)}, + } + err := conn.WaitUntilSnapshotCompleted(req) + return err +} diff --git a/aws/resource_aws_ebs_snapshot_copy_test.go b/aws/resource_aws_ebs_snapshot_copy_test.go new file mode 100644 index 00000000000..b1aee2370ac --- /dev/null +++ b/aws/resource_aws_ebs_snapshot_copy_test.go @@ -0,0 +1,268 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSEbsSnapshotCopy_basic(t *testing.T) { + var v ec2.Snapshot + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAwsEbsSnapshotCopyConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckEbsSnapshotCopyExists("aws_ebs_snapshot_copy.test", &v), + testAccCheckTags(&v.Tags, "Name", "testAccAwsEbsSnapshotCopyConfig"), + ), + }, + }, + }) +} + +func TestAccAWSEbsSnapshotCopy_withDescription(t *testing.T) 
{ + var v ec2.Snapshot + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAwsEbsSnapshotCopyConfigWithDescription, + Check: resource.ComposeTestCheckFunc( + testAccCheckEbsSnapshotCopyExists("aws_ebs_snapshot_copy.description_test", &v), + resource.TestCheckResourceAttr("aws_ebs_snapshot_copy.description_test", "description", "Copy Snapshot Acceptance Test"), + ), + }, + }, + }) +} + +func TestAccAWSEbsSnapshotCopy_withRegions(t *testing.T) { + var v ec2.Snapshot + + // record the initialized providers so that we can use them to + // check for the instances in each region + var providers []*schema.Provider + providerFactories := map[string]terraform.ResourceProviderFactory{ + "aws": func() (terraform.ResourceProvider, error) { + p := Provider() + providers = append(providers, p.(*schema.Provider)) + return p, nil + }, + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: providerFactories, + Steps: []resource.TestStep{ + { + Config: testAccAwsEbsSnapshotCopyConfigWithRegions, + Check: resource.ComposeTestCheckFunc( + testAccCheckEbsSnapshotCopyExistsWithProviders("aws_ebs_snapshot_copy.region_test", &v, &providers), + ), + }, + }, + }) + +} + +func TestAccAWSEbsSnapshotCopy_withKms(t *testing.T) { + var v ec2.Snapshot + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAwsEbsSnapshotCopyConfigWithKms, + Check: resource.ComposeTestCheckFunc( + testAccCheckEbsSnapshotCopyExists("aws_ebs_snapshot_copy.kms_test", &v), + resource.TestMatchResourceAttr("aws_ebs_snapshot_copy.kms_test", "kms_key_id", + regexp.MustCompile("^arn:aws:kms:[a-z]{2}-[a-z]+-\\d{1}:[0-9]{12}:key/[a-z0-9-]{36}$")), + ), + }, + }, + }) +} + +func testAccCheckEbsSnapshotCopyExists(n string, v *ec2.Snapshot) resource.TestCheckFunc { + providers := []*schema.Provider{testAccProvider} + return testAccCheckEbsSnapshotCopyExistsWithProviders(n, v, &providers) +} + +func testAccCheckEbsSnapshotCopyExistsWithProviders(n string, v *ec2.Snapshot, providers *[]*schema.Provider) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + for _, provider := range *providers { + // Ignore if Meta is empty, this can happen for validation providers + if provider.Meta() == nil { + continue + } + + conn := provider.Meta().(*AWSClient).ec2conn + + request := &ec2.DescribeSnapshotsInput{ + SnapshotIds: []*string{aws.String(rs.Primary.ID)}, + } + + response, err := conn.DescribeSnapshots(request) + if err == nil { + if response.Snapshots != nil && len(response.Snapshots) > 0 { + *v = *response.Snapshots[0] + return nil + } + } + } + return fmt.Errorf("Error finding EC2 Snapshot %s", rs.Primary.ID) + } +} + +const testAccAwsEbsSnapshotCopyConfig = ` +resource "aws_ebs_volume" "test" { + availability_zone = "us-west-2a" + size = 1 +} + +resource "aws_ebs_snapshot" "test" { + volume_id = "${aws_ebs_volume.test.id}" + + tags { + Name = "testAccAwsEbsSnapshotCopyConfig" + } +} + +resource "aws_ebs_snapshot_copy" "test" { + source_snapshot_id = "${aws_ebs_snapshot.test.id}" + source_region = "us-west-2" + + tags { + Name = "testAccAwsEbsSnapshotCopyConfig" + } +} +` + +const 
testAccAwsEbsSnapshotCopyConfigWithDescription = ` +resource "aws_ebs_volume" "description_test" { + availability_zone = "us-west-2a" + size = 1 + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithDescription" + } +} + +resource "aws_ebs_snapshot" "description_test" { + volume_id = "${aws_ebs_volume.description_test.id}" + description = "EBS Snapshot Acceptance Test" + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithDescription" + } +} + +resource "aws_ebs_snapshot_copy" "description_test" { + description = "Copy Snapshot Acceptance Test" + source_snapshot_id = "${aws_ebs_snapshot.description_test.id}" + source_region = "us-west-2" + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithDescription" + } +} +` + +const testAccAwsEbsSnapshotCopyConfigWithRegions = ` +provider "aws" { + region = "us-west-2" + alias = "uswest2" +} + +provider "aws" { + region = "us-east-1" + alias = "useast1" +} + +resource "aws_ebs_volume" "region_test" { + provider = "aws.uswest2" + availability_zone = "us-west-2a" + size = 1 + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithRegions" + } +} + +resource "aws_ebs_snapshot" "region_test" { + provider = "aws.uswest2" + volume_id = "${aws_ebs_volume.region_test.id}" + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithRegions" + } +} + +resource "aws_ebs_snapshot_copy" "region_test" { + provider = "aws.useast1" + source_snapshot_id = "${aws_ebs_snapshot.region_test.id}" + source_region = "us-west-2" + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithRegions" + } +} +` + +const testAccAwsEbsSnapshotCopyConfigWithKms = ` +provider "aws" { + region = "us-west-2" +} + +resource "aws_kms_key" "kms_test" { + description = "testAccAwsEbsSnapshotCopyConfigWithKms" + deletion_window_in_days = 7 +} + +resource "aws_ebs_volume" "kms_test" { + availability_zone = "us-west-2a" + size = 1 + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithKms" + } +} + +resource "aws_ebs_snapshot" "kms_test" { + volume_id = "${aws_ebs_volume.kms_test.id}" + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithKms" + } +} + +resource "aws_ebs_snapshot_copy" "kms_test" { + source_snapshot_id = "${aws_ebs_snapshot.kms_test.id}" + source_region = "us-west-2" + encrypted = true + kms_key_id = "${aws_kms_key.kms_test.arn}" + + tags { + Name = "testAccAwsEbsSnapshotCopyConfigWithKms" + } +} +` diff --git a/aws/resource_aws_ebs_snapshot_test.go b/aws/resource_aws_ebs_snapshot_test.go index 254ab40d0ff..2bfa0c683b8 100644 --- a/aws/resource_aws_ebs_snapshot_test.go +++ b/aws/resource_aws_ebs_snapshot_test.go @@ -39,9 +39,10 @@ func TestAccAWSEBSSnapshot_basic(t *testing.T) { } } - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEbsSnapshotDestroy, Steps: []resource.TestStep{ { Config: testAccAwsEbsSnapshotConfigBasic(rName), @@ -66,9 +67,10 @@ func TestAccAWSEBSSnapshot_withDescription(t *testing.T) { var v ec2.Snapshot rName := fmt.Sprintf("tf-acc-ebs-snapshot-desc-%s", acctest.RandString(7)) - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEbsSnapshotDestroy, Steps: []resource.TestStep{ { Config: testAccAwsEbsSnapshotConfigWithDescription(rName), @@ 
-85,9 +87,10 @@ func TestAccAWSEBSSnapshot_withKms(t *testing.T) { var v ec2.Snapshot rName := fmt.Sprintf("tf-acc-ebs-snapshot-kms-%s", acctest.RandString(7)) - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEbsSnapshotDestroy, Steps: []resource.TestStep{ { Config: testAccAwsEbsSnapshotConfigWithKms(rName), @@ -129,6 +132,32 @@ func testAccCheckSnapshotExists(n string, v *ec2.Snapshot) resource.TestCheckFun } } +func testAccCheckAWSEbsSnapshotDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_ebs_snapshot" { + continue + } + input := &ec2.DescribeSnapshotsInput{ + SnapshotIds: []*string{aws.String(rs.Primary.ID)}, + } + + output, err := conn.DescribeSnapshots(input) + if err != nil { + if isAWSErr(err, "InvalidSnapshot.NotFound", "") { + continue + } + return err + } + if output != nil && len(output.Snapshots) > 0 && aws.StringValue(output.Snapshots[0].SnapshotId) == rs.Primary.ID { + return fmt.Errorf("EBS Snapshot %q still exists", rs.Primary.ID) + } + } + + return nil +} + func testAccAwsEbsSnapshotConfigBasic(rName string) string { return fmt.Sprintf(` data "aws_region" "current" {} diff --git a/aws/resource_aws_ebs_volume.go b/aws/resource_aws_ebs_volume.go index 6b8d59775d1..45dc65a89ba 100644 --- a/aws/resource_aws_ebs_volume.go +++ b/aws/resource_aws_ebs_volume.go @@ -10,7 +10,6 @@ import ( "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -92,6 +91,14 @@ func resourceAwsEbsVolumeCreate(d *schema.ResourceData, meta interface{}) error if value, ok := d.GetOk("snapshot_id"); ok { request.SnapshotId = aws.String(value.(string)) } + if value, ok := d.GetOk("tags"); ok { + request.TagSpecifications = []*ec2.TagSpecification{ + { + ResourceType: aws.String(ec2.ResourceTypeVolume), + Tags: tagsFromMap(value.(map[string]interface{})), + }, + } + } // IOPs are only valid, and required for, storage type io1. The current minimu // is 100. 
Instead of a hard validation we we only apply the IOPs to the @@ -139,12 +146,6 @@ func resourceAwsEbsVolumeCreate(d *schema.ResourceData, meta interface{}) error d.SetId(*result.VolumeId) - if _, ok := d.GetOk("tags"); ok { - if err := setTags(conn, d); err != nil { - return errwrap.Wrapf("Error setting tags for EBS Volume: {{err}}", err) - } - } - return resourceAwsEbsVolumeRead(d, meta) } @@ -152,7 +153,7 @@ func resourceAWSEbsVolumeUpdate(d *schema.ResourceData, meta interface{}) error conn := meta.(*AWSClient).ec2conn if _, ok := d.GetOk("tags"); ok { if err := setTags(conn, d); err != nil { - return errwrap.Wrapf("Error updating tags for EBS Volume: {{err}}", err) + return fmt.Errorf("Error updating tags for EBS Volume: %s", err) } } diff --git a/aws/resource_aws_ebs_volume_test.go b/aws/resource_aws_ebs_volume_test.go index fbee38d767e..2d484625ce2 100644 --- a/aws/resource_aws_ebs_volume_test.go +++ b/aws/resource_aws_ebs_volume_test.go @@ -15,41 +15,61 @@ import ( func TestAccAWSEBSVolume_basic(t *testing.T) { var v ec2.Volume - resource.Test(t, resource.TestCase{ + resourceName := "aws_ebs_volume.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_ebs_volume.test", + IDRefreshName: resourceName, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccAwsEbsVolumeConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttrSet("aws_ebs_volume.test", "arn"), + testAccCheckVolumeExists(resourceName, &v), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "ec2", regexp.MustCompile(`volume/vol-.+`)), + resource.TestCheckResourceAttr(resourceName, "encrypted", "false"), + resource.TestCheckNoResourceAttr(resourceName, "iops"), + resource.TestCheckNoResourceAttr(resourceName, "kms_key_id"), + resource.TestCheckResourceAttr(resourceName, "size", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "type", "gp2"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSEBSVolume_updateAttachedEbsVolume(t *testing.T) { var v ec2.Volume - resource.Test(t, resource.TestCase{ + resourceName := "aws_ebs_volume.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_ebs_volume.test", + IDRefreshName: resourceName, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccAwsEbsAttachedVolumeConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "size", "10"), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "size", "10"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccAwsEbsAttachedVolumeConfigUpdateSize, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "size", "20"), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "size", "20"), ), }, }, @@ -58,23 +78,30 @@ func TestAccAWSEBSVolume_updateAttachedEbsVolume(t *testing.T) { func TestAccAWSEBSVolume_updateSize(t *testing.T) { var v ec2.Volume - resource.Test(t, resource.TestCase{ + resourceName := "aws_ebs_volume.test" + + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_ebs_volume.test", + IDRefreshName: resourceName, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccAwsEbsVolumeConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "size", "1"), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "size", "1"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccAwsEbsVolumeConfigUpdateSize, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "size", "10"), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "size", "10"), ), }, }, @@ -83,23 +110,30 @@ func TestAccAWSEBSVolume_updateSize(t *testing.T) { func TestAccAWSEBSVolume_updateType(t *testing.T) { var v ec2.Volume - resource.Test(t, resource.TestCase{ + resourceName := "aws_ebs_volume.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_ebs_volume.test", + IDRefreshName: resourceName, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccAwsEbsVolumeConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "type", "gp2"), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "type", "gp2"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccAwsEbsVolumeConfigUpdateType, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "type", "sc1"), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "type", "sc1"), ), }, }, @@ -108,23 +142,30 @@ func TestAccAWSEBSVolume_updateType(t *testing.T) { func TestAccAWSEBSVolume_updateIops(t *testing.T) { var v ec2.Volume - resource.Test(t, resource.TestCase{ + resourceName := "aws_ebs_volume.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_ebs_volume.test", + IDRefreshName: resourceName, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccAwsEbsVolumeConfigWithIops, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "iops", "100"), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "iops", "100"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccAwsEbsVolumeConfigWithIopsUpdated, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "iops", "200"), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "iops", "200"), ), }, }, @@ -135,54 +176,77 @@ func TestAccAWSEBSVolume_kmsKey(t *testing.T) { var v ec2.Volume ri := acctest.RandInt() config := fmt.Sprintf(testAccAwsEbsVolumeConfigWithKmsKey, ri) - keyRegex := 
regexp.MustCompile("^arn:aws:([a-zA-Z0-9\\-])+:([a-z]{2}-[a-z]+-\\d{1})?:(\\d{12})?:(.*)$") + kmsKeyResourceName := "aws_kms_key.test" + resourceName := "aws_ebs_volume.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_ebs_volume.test", + IDRefreshName: resourceName, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: config, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.test", &v), - resource.TestCheckResourceAttr("aws_ebs_volume.test", "encrypted", "true"), - resource.TestMatchResourceAttr("aws_ebs_volume.test", "kms_key_id", keyRegex), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "encrypted", "true"), + resource.TestCheckResourceAttrPair(resourceName, "kms_key_id", kmsKeyResourceName, "arn"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSEBSVolume_NoIops(t *testing.T) { var v ec2.Volume - resource.Test(t, resource.TestCase{ + resourceName := "aws_ebs_volume.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccAwsEbsVolumeConfigWithNoIops, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.iops_test", &v), + testAccCheckVolumeExists(resourceName, &v), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"iops"}, + }, }, }) } func TestAccAWSEBSVolume_withTags(t *testing.T) { var v ec2.Volume - resource.Test(t, resource.TestCase{ + resourceName := "aws_ebs_volume.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_ebs_volume.tags_test", + IDRefreshName: resourceName, Providers: testAccProviders, Steps: []resource.TestStep{ { Config: testAccAwsEbsVolumeConfigWithTags, Check: resource.ComposeTestCheckFunc( - testAccCheckVolumeExists("aws_ebs_volume.tags_test", &v), + testAccCheckVolumeExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.Name", "TerraformTest"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -216,13 +280,12 @@ func testAccCheckVolumeExists(n string, v *ec2.Volume) resource.TestCheckFunc { } const testAccAwsEbsVolumeConfig = ` +data "aws_availability_zones" "available" {} + resource "aws_ebs_volume" "test" { - availability_zone = "us-west-2a" + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "gp2" size = 1 - tags { - Name = "tf-acc-test-ebs-volume-test" - } } ` @@ -232,7 +295,7 @@ data "aws_ami" "debian_jessie_latest" { filter { name = "name" - values = ["debian-jessie-*"] + values = ["amzn-ami-*"] } filter { @@ -250,13 +313,11 @@ data "aws_ami" "debian_jessie_latest" { values = ["ebs"] } - owners = ["379101102735"] # Debian + owners = ["amazon"] } resource "aws_instance" "test" { - ami = "${data.aws_ami.debian_jessie_latest.id}" - associate_public_ip_address = true - count = 1 + ami = "${data.aws_ami.debian_jessie_latest.id}" instance_type = "t2.medium" root_block_device { @@ -270,6 +331,8 @@ resource "aws_instance" "test" { } } +data "aws_availability_zones" "available" {} + resource "aws_ebs_volume" "test" { depends_on = ["aws_instance.test"] 
availability_zone = "${aws_instance.test.availability_zone}" @@ -291,7 +354,7 @@ data "aws_ami" "debian_jessie_latest" { filter { name = "name" - values = ["debian-jessie-*"] + values = ["amzn-ami-*"] } filter { @@ -309,13 +372,11 @@ data "aws_ami" "debian_jessie_latest" { values = ["ebs"] } - owners = ["379101102735"] # Debian + owners = ["amazon"] } resource "aws_instance" "test" { - ami = "${data.aws_ami.debian_jessie_latest.id}" - associate_public_ip_address = true - count = 1 + ami = "${data.aws_ami.debian_jessie_latest.id}" instance_type = "t2.medium" root_block_device { @@ -345,8 +406,10 @@ resource "aws_volume_attachment" "test" { ` const testAccAwsEbsVolumeConfigUpdateSize = ` +data "aws_availability_zones" "available" {} + resource "aws_ebs_volume" "test" { - availability_zone = "us-west-2a" + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "gp2" size = 10 tags { @@ -356,8 +419,10 @@ resource "aws_ebs_volume" "test" { ` const testAccAwsEbsVolumeConfigUpdateType = ` +data "aws_availability_zones" "available" {} + resource "aws_ebs_volume" "test" { - availability_zone = "us-west-2a" + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "sc1" size = 500 tags { @@ -367,8 +432,10 @@ resource "aws_ebs_volume" "test" { ` const testAccAwsEbsVolumeConfigWithIops = ` +data "aws_availability_zones" "available" {} + resource "aws_ebs_volume" "test" { - availability_zone = "us-west-2a" + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "io1" size = 4 iops = 100 @@ -379,8 +446,10 @@ resource "aws_ebs_volume" "test" { ` const testAccAwsEbsVolumeConfigWithIopsUpdated = ` +data "aws_availability_zones" "available" {} + resource "aws_ebs_volume" "test" { - availability_zone = "us-west-2a" + availability_zone = "${data.aws_availability_zones.available.names[0]}" type = "io1" size = 4 iops = 200 @@ -391,7 +460,7 @@ resource "aws_ebs_volume" "test" { ` const testAccAwsEbsVolumeConfigWithKmsKey = ` -resource "aws_kms_key" "foo" { +resource "aws_kms_key" "test" { description = "Terraform acc test %d" policy = < 0 { + opts.TagSpecifications = []*ec2.TagSpecification{ + { + // There is no constant in the SDK for this resource type + ResourceType: aws.String("capacity-reservation"), + Tags: tagsFromMap(v.(map[string]interface{})), + }, + } + } + + log.Printf("[DEBUG] Capacity reservation: %s", opts) + + out, err := conn.CreateCapacityReservation(opts) + if err != nil { + return fmt.Errorf("Error creating EC2 Capacity Reservation: %s", err) + } + d.SetId(*out.CapacityReservation.CapacityReservationId) + return resourceAwsEc2CapacityReservationRead(d, meta) +} + +func resourceAwsEc2CapacityReservationRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + resp, err := conn.DescribeCapacityReservations(&ec2.DescribeCapacityReservationsInput{ + CapacityReservationIds: []*string{aws.String(d.Id())}, + }) + + if err != nil { + return fmt.Errorf("Error describing EC2 Capacity Reservations: %s", err) + } + + // If nothing was found, then return no state + if len(resp.CapacityReservations) == 0 { + log.Printf("[WARN] EC2 Capacity Reservation (%s) not found, removing from state", d.Id()) + d.SetId("") + } + + reservation := resp.CapacityReservations[0] + + if aws.StringValue(reservation.State) == ec2.CapacityReservationStateCancelled || aws.StringValue(reservation.State) == ec2.CapacityReservationStateExpired { + log.Printf("[WARN] EC2 Capacity Reservation (%s) no longer active, removing 
from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("availability_zone", reservation.AvailabilityZone) + d.Set("ebs_optimized", reservation.EbsOptimized) + + d.Set("end_date", "") + if reservation.EndDate != nil { + d.Set("end_date", aws.TimeValue(reservation.EndDate).Format(time.RFC3339)) + } + + d.Set("end_date_type", reservation.EndDateType) + d.Set("ephemeral_storage", reservation.EphemeralStorage) + d.Set("instance_count", reservation.TotalInstanceCount) + d.Set("instance_match_criteria", reservation.InstanceMatchCriteria) + d.Set("instance_platform", reservation.InstancePlatform) + d.Set("instance_type", reservation.InstanceType) + + if err := d.Set("tags", tagsToMap(reservation.Tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + d.Set("tenancy", reservation.Tenancy) + + return nil +} + +func resourceAwsEc2CapacityReservationUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + d.Partial(true) + + if d.HasChange("tags") { + if err := setTags(conn, d); err != nil { + return err + } else { + d.SetPartial("tags") + } + } + + d.Partial(false) + + opts := &ec2.ModifyCapacityReservationInput{ + CapacityReservationId: aws.String(d.Id()), + EndDateType: aws.String(d.Get("end_date_type").(string)), + InstanceCount: aws.Int64(int64(d.Get("instance_count").(int))), + } + + if v, ok := d.GetOk("end_date"); ok { + t, err := time.Parse(time.RFC3339, v.(string)) + if err != nil { + return fmt.Errorf("Error parsing EC2 Capacity Reservation end date: %s", err.Error()) + } + opts.EndDate = aws.Time(t) + } + + log.Printf("[DEBUG] Capacity reservation: %s", opts) + + _, err := conn.ModifyCapacityReservation(opts) + if err != nil { + return fmt.Errorf("Error modifying EC2 Capacity Reservation: %s", err) + } + return resourceAwsEc2CapacityReservationRead(d, meta) +} + +func resourceAwsEc2CapacityReservationDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + opts := &ec2.CancelCapacityReservationInput{ + CapacityReservationId: aws.String(d.Id()), + } + + _, err := conn.CancelCapacityReservation(opts) + if err != nil { + return fmt.Errorf("Error cancelling EC2 Capacity Reservation: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_ec2_capacity_reservation_test.go b/aws/resource_aws_ec2_capacity_reservation_test.go new file mode 100644 index 00000000000..3eb27ca9870 --- /dev/null +++ b/aws/resource_aws_ec2_capacity_reservation_test.go @@ -0,0 +1,599 @@ +package aws + +import ( + "fmt" + "log" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_ec2_capacity_reservation", &resource.Sweeper{ + Name: "aws_ec2_capacity_reservation", + F: testSweepEc2CapacityReservations, + }) +} + +func testSweepEc2CapacityReservations(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).ec2conn + + resp, err := conn.DescribeCapacityReservations(&ec2.DescribeCapacityReservationsInput{}) + + if err != nil { + return fmt.Errorf("Error retrieving EC2 Capacity Reservations: %s", err) + } + + if len(resp.CapacityReservations) == 0 { + log.Print("[DEBUG] No EC2 Capacity Reservations to sweep") + return nil + } + + for _, r := range resp.CapacityReservations 
{ + if aws.StringValue(r.State) != ec2.CapacityReservationStateCancelled && aws.StringValue(r.State) != ec2.CapacityReservationStateExpired { + id := aws.StringValue(r.CapacityReservationId) + + log.Printf("[INFO] Cancelling EC2 Capacity Reservation EC2 Instance: %s", id) + + opts := &ec2.CancelCapacityReservationInput{ + CapacityReservationId: aws.String(id), + } + + _, err := conn.CancelCapacityReservation(opts) + + if err != nil { + log.Printf("[ERROR] Error cancelling EC2 Capacity Reservation (%s): %s", id, err) + } + } + } + + return nil +} + +func TestAccAWSEc2CapacityReservation_basic(t *testing.T) { + var cr ec2.CapacityReservation + availabilityZonesDataSourceName := "data.aws_availability_zones.available" + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttrPair(resourceName, "availability_zone", availabilityZonesDataSourceName, "names.0"), + resource.TestCheckResourceAttr(resourceName, "ebs_optimized", "false"), + resource.TestCheckResourceAttr(resourceName, "end_date", ""), + resource.TestCheckResourceAttr(resourceName, "end_date_type", "unlimited"), + resource.TestCheckResourceAttr(resourceName, "ephemeral_storage", "false"), + resource.TestCheckResourceAttr(resourceName, "instance_count", "1"), + resource.TestCheckResourceAttr(resourceName, "instance_match_criteria", "open"), + resource.TestCheckResourceAttr(resourceName, "instance_platform", "Linux/UNIX"), + resource.TestCheckResourceAttr(resourceName, "instance_type", "t2.micro"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "tenancy", "default"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_ebsOptimized(t *testing.T) { + var cr ec2.CapacityReservation + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_ebsOptimized(true), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "ebs_optimized", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_endDate(t *testing.T) { + var cr ec2.CapacityReservation + endDate1 := time.Now().UTC().Add(1 * time.Hour).Format(time.RFC3339) + endDate2 := time.Now().UTC().Add(2 * time.Hour).Format(time.RFC3339) + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_endDate(endDate1), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + 
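+            // end_date is written back by the Read function in RFC3339 form, so the literal timestamp supplied in the config should round-trip unchanged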
resource.TestCheckResourceAttr(resourceName, "end_date", endDate1), + resource.TestCheckResourceAttr(resourceName, "end_date_type", "limited"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccEc2CapacityReservationConfig_endDate(endDate2), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "end_date", endDate2), + resource.TestCheckResourceAttr(resourceName, "end_date_type", "limited"), + ), + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_endDateType(t *testing.T) { + var cr ec2.CapacityReservation + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_endDateType("unlimited"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "end_date_type", "unlimited"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccEc2CapacityReservationConfig_endDate("2019-10-31T07:39:57Z"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "end_date", "2019-10-31T07:39:57Z"), + resource.TestCheckResourceAttr(resourceName, "end_date_type", "limited"), + ), + }, + { + Config: testAccEc2CapacityReservationConfig_endDateType("unlimited"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "end_date_type", "unlimited"), + ), + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_ephemeralStorage(t *testing.T) { + var cr ec2.CapacityReservation + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_ephemeralStorage(true), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "ephemeral_storage", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_instanceCount(t *testing.T) { + var cr ec2.CapacityReservation + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_instanceCount(1), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "instance_count", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccEc2CapacityReservationConfig_instanceCount(2), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), 
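+                    // the second step bumps instance_count from 1 to 2; the update path passes the new InstanceCount to ModifyCapacityReservation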
+ resource.TestCheckResourceAttr(resourceName, "instance_count", "2"), + ), + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_instanceMatchCriteria(t *testing.T) { + var cr ec2.CapacityReservation + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_instanceMatchCriteria("targeted"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "instance_match_criteria", "targeted"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_instanceType(t *testing.T) { + var cr ec2.CapacityReservation + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_instanceType("t2.micro"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "instance_type", "t2.micro"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccEc2CapacityReservationConfig_instanceType("t2.small"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "instance_type", "t2.small"), + ), + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_tags(t *testing.T) { + var cr ec2.CapacityReservation + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_tags_single("key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccEc2CapacityReservationConfig_tags_multiple("key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccEc2CapacityReservationConfig_tags_single("key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func TestAccAWSEc2CapacityReservation_tenancy(t *testing.T) { + // Error creating EC2 Capacity Reservation: Unsupported: The requested 
configuration is currently not supported. Please check the documentation for supported configurations. + t.Skip("EC2 Capacity Reservations do not currently support dedicated tenancy.") + var cr ec2.CapacityReservation + resourceName := "aws_ec2_capacity_reservation.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckEc2CapacityReservationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccEc2CapacityReservationConfig_tenancy("dedicated"), + Check: resource.ComposeTestCheckFunc( + testAccCheckEc2CapacityReservationExists(resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "tenancy", "dedicated"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckEc2CapacityReservationExists(resourceName string, cr *ec2.CapacityReservation) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + resp, err := conn.DescribeCapacityReservations(&ec2.DescribeCapacityReservationsInput{ + CapacityReservationIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err != nil { + return fmt.Errorf("Error retrieving EC2 Capacity Reservations: %s", err) + } + + if len(resp.CapacityReservations) == 0 { + return fmt.Errorf("EC2 Capacity Reservation (%s) not found", rs.Primary.ID) + } + + reservation := resp.CapacityReservations[0] + + if aws.StringValue(reservation.State) != ec2.CapacityReservationStateActive && aws.StringValue(reservation.State) != ec2.CapacityReservationStatePending { + return fmt.Errorf("EC2 Capacity Reservation (%s) found in unexpected state: %s", rs.Primary.ID, aws.StringValue(reservation.State)) + } + + *cr = *reservation + return nil + } +} + +func testAccCheckEc2CapacityReservationDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_ec2_capacity_reservation" { + continue + } + + // Try to find the resource + resp, err := conn.DescribeCapacityReservations(&ec2.DescribeCapacityReservationsInput{ + CapacityReservationIds: []*string{aws.String(rs.Primary.ID)}, + }) + if err == nil { + for _, r := range resp.CapacityReservations { + if aws.StringValue(r.State) != ec2.CapacityReservationStateCancelled && aws.StringValue(r.State) != ec2.CapacityReservationStateExpired { + return fmt.Errorf("Found uncancelled EC2 Capacity Reservation: %s", r) + } + } + } + + return err + } + + return nil + +} + +const testAccEc2CapacityReservationConfig = ` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "t2.micro" +} +` + +func testAccEc2CapacityReservationConfig_ebsOptimized(ebsOptimized bool) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + ebs_optimized = %t + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "m4.large" +} +`, ebsOptimized) +} + +func 
testAccEc2CapacityReservationConfig_endDate(endDate string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + end_date = %q + end_date_type = "limited" + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "t2.micro" +} +`, endDate) +} + +func testAccEc2CapacityReservationConfig_endDateType(endDateType string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + end_date_type = %q + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "t2.micro" +} +`, endDateType) +} + +func testAccEc2CapacityReservationConfig_ephemeralStorage(ephemeralStorage bool) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + ephemeral_storage = %t + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "m3.medium" +} +`, ephemeralStorage) +} + +func testAccEc2CapacityReservationConfig_instanceCount(instanceCount int) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + instance_count = %d + instance_platform = "Linux/UNIX" + instance_type = "t2.micro" +} +`, instanceCount) +} + +func testAccEc2CapacityReservationConfig_instanceMatchCriteria(instanceMatchCriteria string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_match_criteria = %q + instance_type = "t2.micro" +} +`, instanceMatchCriteria) +} + +func testAccEc2CapacityReservationConfig_instanceType(instanceType string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = %q +} +`, instanceType) +} + +func testAccEc2CapacityReservationConfig_tags_single(tag1Key, tag1Value string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "t2.micro" + + tags { + %q = %q + } +} +`, tag1Key, tag1Value) +} + +func testAccEc2CapacityReservationConfig_tags_multiple(tag1Key, tag1Value, tag2Key, tag2Value string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "t2.micro" + + tags { + %q = %q + %q = %q + } +} +`, tag1Key, tag1Value, tag2Key, tag2Value) +} + +func testAccEc2CapacityReservationConfig_tenancy(tenancy string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource 
"aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "t2.micro" + tenancy = %q +} +`, tenancy) +} diff --git a/aws/resource_aws_ec2_fleet.go b/aws/resource_aws_ec2_fleet.go new file mode 100644 index 00000000000..7e816cddef9 --- /dev/null +++ b/aws/resource_aws_ec2_fleet.go @@ -0,0 +1,840 @@ +package aws + +import ( + "fmt" + "log" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsEc2Fleet() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsEc2FleetCreate, + Read: resourceAwsEc2FleetRead, + Update: resourceAwsEc2FleetUpdate, + Delete: resourceAwsEc2FleetDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "excess_capacity_termination_policy": { + Type: schema.TypeString, + Optional: true, + Default: ec2.FleetExcessCapacityTerminationPolicyTermination, + ValidateFunc: validation.StringInSlice([]string{ + ec2.FleetExcessCapacityTerminationPolicyNoTermination, + ec2.FleetExcessCapacityTerminationPolicyTermination, + }, false), + }, + "launch_template_config": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + MinItems: 1, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "launch_template_specification": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + MinItems: 1, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "launch_template_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "launch_template_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "version": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + }, + }, + "override": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 50, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zone": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "instance_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "max_price": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "priority": { + Type: schema.TypeFloat, + Optional: true, + ForceNew: true, + }, + "subnet_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "weighted_capacity": { + Type: schema.TypeFloat, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + }, + }, + }, + "on_demand_options": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allocation_strategy": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "lowestPrice", + ValidateFunc: validation.StringInSlice([]string{ + // AWS SDK constant incorrect + // ec2.FleetOnDemandAllocationStrategyLowestPrice, + 
"lowestPrice", + ec2.FleetOnDemandAllocationStrategyPrioritized, + }, false), + }, + }, + }, + }, + "replace_unhealthy_instances": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + "spot_options": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allocation_strategy": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "lowestPrice", + ValidateFunc: validation.StringInSlice([]string{ + ec2.SpotAllocationStrategyDiversified, + // AWS SDK constant incorrect + // ec2.SpotAllocationStrategyLowestPrice, + "lowestPrice", + }, false), + }, + "instance_interruption_behavior": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: ec2.SpotInstanceInterruptionBehaviorTerminate, + ValidateFunc: validation.StringInSlice([]string{ + ec2.SpotInstanceInterruptionBehaviorHibernate, + ec2.SpotInstanceInterruptionBehaviorStop, + ec2.SpotInstanceInterruptionBehaviorTerminate, + }, false), + }, + "instance_pools_to_use_count": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Default: 1, + ValidateFunc: validation.IntAtLeast(1), + }, + }, + }, + }, + "tags": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "target_capacity_specification": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "default_target_capacity_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + ec2.DefaultTargetCapacityTypeOnDemand, + ec2.DefaultTargetCapacityTypeSpot, + }, false), + }, + "on_demand_target_capacity": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // Show difference for new resources + if d.Id() == "" { + return false + } + // Show difference if value is configured + if new != "0" { + return false + } + // Show difference if existing state reflects different default type + defaultTargetCapacityTypeO, _ := d.GetChange("target_capacity_specification.0.default_target_capacity_type") + if defaultTargetCapacityTypeO.(string) != ec2.DefaultTargetCapacityTypeOnDemand { + return false + } + // Show difference if existing state reflects different total capacity + oldInt, err := strconv.Atoi(old) + if err != nil { + log.Printf("[WARN] %s DiffSuppressFunc error converting %s to integer: %s", k, old, err) + return false + } + totalTargetCapacityO, _ := d.GetChange("target_capacity_specification.0.total_target_capacity") + if oldInt != totalTargetCapacityO.(int) { + return false + } + return true + }, + }, + "spot_target_capacity": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // Show difference for new resources + if d.Id() == "" { + return false + } + // Show difference if value is configured + if new != "0" { + return false + } + // Show difference if existing state reflects different default type + defaultTargetCapacityTypeO, _ := d.GetChange("target_capacity_specification.0.default_target_capacity_type") + if defaultTargetCapacityTypeO.(string) != ec2.DefaultTargetCapacityTypeSpot { + return false + } + // Show difference if 
existing state reflects different total capacity + oldInt, err := strconv.Atoi(old) + if err != nil { + log.Printf("[WARN] %s DiffSuppressFunc error converting %s to integer: %s", k, old, err) + return false + } + totalTargetCapacityO, _ := d.GetChange("target_capacity_specification.0.total_target_capacity") + if oldInt != totalTargetCapacityO.(int) { + return false + } + return true + }, + }, + "total_target_capacity": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "terminate_instances": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "terminate_instances_with_expiration": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: ec2.FleetTypeMaintain, + ValidateFunc: validation.StringInSlice([]string{ + ec2.FleetTypeMaintain, + ec2.FleetTypeRequest, + }, false), + }, + }, + } +} + +func resourceAwsEc2FleetCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + input := &ec2.CreateFleetInput{ + ExcessCapacityTerminationPolicy: aws.String(d.Get("excess_capacity_termination_policy").(string)), + LaunchTemplateConfigs: expandEc2FleetLaunchTemplateConfigRequests(d.Get("launch_template_config").([]interface{})), + OnDemandOptions: expandEc2OnDemandOptionsRequest(d.Get("on_demand_options").([]interface{})), + ReplaceUnhealthyInstances: aws.Bool(d.Get("replace_unhealthy_instances").(bool)), + SpotOptions: expandEc2SpotOptionsRequest(d.Get("spot_options").([]interface{})), + TargetCapacitySpecification: expandEc2TargetCapacitySpecificationRequest(d.Get("target_capacity_specification").([]interface{})), + TerminateInstancesWithExpiration: aws.Bool(d.Get("terminate_instances_with_expiration").(bool)), + TagSpecifications: expandEc2TagSpecifications(d.Get("tags").(map[string]interface{})), + Type: aws.String(d.Get("type").(string)), + } + + log.Printf("[DEBUG] Creating EC2 Fleet: %s", input) + output, err := conn.CreateFleet(input) + if err != nil { + return fmt.Errorf("error creating EC2 Fleet: %s", err) + } + + d.SetId(aws.StringValue(output.FleetId)) + + // If a request type is fulfilled immediately, we can miss the transition from active to deleted + // Instead of an error here, allow the Read function to trigger recreation + target := []string{ec2.FleetStateCodeActive} + if d.Get("type").(string) == ec2.FleetTypeRequest { + target = append(target, ec2.FleetStateCodeDeleted) + target = append(target, ec2.FleetStateCodeDeletedRunning) + target = append(target, ec2.FleetStateCodeDeletedTerminating) + // AWS SDK constants incorrect + target = append(target, "deleted_running") + target = append(target, "deleted_terminating") + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{ec2.FleetStateCodeSubmitted}, + Target: target, + Refresh: ec2FleetRefreshFunc(conn, d.Id()), + Timeout: d.Timeout(schema.TimeoutCreate), + } + + log.Printf("[DEBUG] Waiting for EC2 Fleet (%s) activation", d.Id()) + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for EC2 Fleet (%s) activation: %s", d.Id(), err) + } + + return resourceAwsEc2FleetRead(d, meta) +} + +func resourceAwsEc2FleetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + input := &ec2.DescribeFleetsInput{ + FleetIds: []*string{aws.String(d.Id())}, + } + + log.Printf("[DEBUG] Reading EC2 Fleet (%s): %s", d.Id(), input) + output, err := conn.DescribeFleets(input) + + if isAWSErr(err, 
"InvalidFleetId.NotFound", "") { + log.Printf("[WARN] EC2 Fleet (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading EC2 Fleet: %s", err) + } + + if output == nil || len(output.Fleets) == 0 { + log.Printf("[WARN] EC2 Fleet (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + var fleet *ec2.FleetData + for _, fleetData := range output.Fleets { + if fleetData == nil { + continue + } + if aws.StringValue(fleetData.FleetId) != d.Id() { + continue + } + fleet = fleetData + break + } + + if fleet == nil { + log.Printf("[WARN] EC2 Fleet (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + deletedStates := []string{ + ec2.FleetStateCodeDeleted, + ec2.FleetStateCodeDeletedRunning, + ec2.FleetStateCodeDeletedTerminating, + // AWS SDK constants are incorrect + "deleted_running", + "deleted_terminating", + } + for _, deletedState := range deletedStates { + if aws.StringValue(fleet.FleetState) == deletedState { + log.Printf("[WARN] EC2 Fleet (%s) in deleted state (%s), removing from state", d.Id(), aws.StringValue(fleet.FleetState)) + d.SetId("") + return nil + } + } + + d.Set("excess_capacity_termination_policy", fleet.ExcessCapacityTerminationPolicy) + + if err := d.Set("launch_template_config", flattenEc2FleetLaunchTemplateConfigs(fleet.LaunchTemplateConfigs)); err != nil { + return fmt.Errorf("error setting launch_template_config: %s", err) + } + + if err := d.Set("on_demand_options", flattenEc2OnDemandOptions(fleet.OnDemandOptions)); err != nil { + return fmt.Errorf("error setting on_demand_options: %s", err) + } + + d.Set("replace_unhealthy_instances", fleet.ReplaceUnhealthyInstances) + + if err := d.Set("spot_options", flattenEc2SpotOptions(fleet.SpotOptions)); err != nil { + return fmt.Errorf("error setting spot_options: %s", err) + } + + if err := d.Set("target_capacity_specification", flattenEc2TargetCapacitySpecification(fleet.TargetCapacitySpecification)); err != nil { + return fmt.Errorf("error setting target_capacity_specification: %s", err) + } + + d.Set("terminate_instances_with_expiration", fleet.TerminateInstancesWithExpiration) + d.Set("type", fleet.Type) + + if err := d.Set("tags", tagsToMap(fleet.Tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + return nil +} + +func resourceAwsEc2FleetUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + input := &ec2.ModifyFleetInput{ + ExcessCapacityTerminationPolicy: aws.String(d.Get("excess_capacity_termination_policy").(string)), + FleetId: aws.String(d.Id()), + // InvalidTargetCapacitySpecification: Currently we only support total target capacity modification. 
+ // TargetCapacitySpecification: expandEc2TargetCapacitySpecificationRequest(d.Get("target_capacity_specification").([]interface{})), + TargetCapacitySpecification: &ec2.TargetCapacitySpecificationRequest{ + TotalTargetCapacity: aws.Int64(int64(d.Get("target_capacity_specification.0.total_target_capacity").(int))), + }, + } + + log.Printf("[DEBUG] Modifying EC2 Fleet (%s): %s", d.Id(), input) + _, err := conn.ModifyFleet(input) + + if err != nil { + return fmt.Errorf("error modifying EC2 Fleet: %s", err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{ec2.FleetStateCodeModifying}, + Target: []string{ec2.FleetStateCodeActive}, + Refresh: ec2FleetRefreshFunc(conn, d.Id()), + Timeout: d.Timeout(schema.TimeoutUpdate), + } + + log.Printf("[DEBUG] Waiting for EC2 Fleet (%s) modification", d.Id()) + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for EC2 Fleet (%s) modification: %s", d.Id(), err) + } + + return resourceAwsEc2FleetRead(d, meta) +} + +func resourceAwsEc2FleetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + input := &ec2.DeleteFleetsInput{ + FleetIds: []*string{aws.String(d.Id())}, + TerminateInstances: aws.Bool(d.Get("terminate_instances").(bool)), + } + + log.Printf("[DEBUG] Deleting EC2 Fleet (%s): %s", d.Id(), input) + _, err := conn.DeleteFleets(input) + + if isAWSErr(err, "InvalidFleetId.NotFound", "") { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting EC2 Fleet: %s", err) + } + + pending := []string{ec2.FleetStateCodeActive} + target := []string{ec2.FleetStateCodeDeleted} + if d.Get("terminate_instances").(bool) { + pending = append(pending, ec2.FleetStateCodeDeletedTerminating) + // AWS SDK constant is incorrect: unexpected state 'deleted_terminating', wanted target 'deleted, deleted-terminating' + pending = append(pending, "deleted_terminating") + } else { + target = append(target, ec2.FleetStateCodeDeletedRunning) + // AWS SDK constant is incorrect: unexpected state 'deleted_running', wanted target 'deleted, deleted-running' + target = append(target, "deleted_running") + } + + stateConf := &resource.StateChangeConf{ + Pending: pending, + Target: target, + Refresh: ec2FleetRefreshFunc(conn, d.Id()), + Timeout: d.Timeout(schema.TimeoutDelete), + } + + log.Printf("[DEBUG] Waiting for EC2 Fleet (%s) deletion", d.Id()) + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for EC2 Fleet (%s) deletion: %s", d.Id(), err) + } + + return nil +} + +func ec2FleetRefreshFunc(conn *ec2.EC2, fleetID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + input := &ec2.DescribeFleetsInput{ + FleetIds: []*string{aws.String(fleetID)}, + } + + log.Printf("[DEBUG] Reading EC2 Fleet (%s): %s", fleetID, input) + output, err := conn.DescribeFleets(input) + + if isAWSErr(err, "InvalidFleetId.NotFound", "") { + return nil, ec2.FleetStateCodeDeleted, nil + } + + if err != nil { + return nil, "", fmt.Errorf("error reading EC2 Fleet: %s", err) + } + + if output == nil || len(output.Fleets) == 0 { + return nil, ec2.FleetStateCodeDeleted, nil + } + + var fleet *ec2.FleetData + for _, fleetData := range output.Fleets { + if fleetData == nil { + continue + } + if aws.StringValue(fleetData.FleetId) == fleetID { + fleet = fleetData + break + } + } + + if fleet == nil { + return nil, ec2.FleetStateCodeDeleted, nil + } + + return fleet, aws.StringValue(fleet.FleetState), nil + } +} + +func 
expandEc2FleetLaunchTemplateConfigRequests(l []interface{}) []*ec2.FleetLaunchTemplateConfigRequest { + fleetLaunchTemplateConfigRequests := make([]*ec2.FleetLaunchTemplateConfigRequest, len(l)) + for i, m := range l { + if m == nil { + fleetLaunchTemplateConfigRequests[i] = &ec2.FleetLaunchTemplateConfigRequest{} + continue + } + + fleetLaunchTemplateConfigRequests[i] = expandEc2FleetLaunchTemplateConfigRequest(m.(map[string]interface{})) + } + return fleetLaunchTemplateConfigRequests +} + +func expandEc2FleetLaunchTemplateConfigRequest(m map[string]interface{}) *ec2.FleetLaunchTemplateConfigRequest { + fleetLaunchTemplateConfigRequest := &ec2.FleetLaunchTemplateConfigRequest{ + LaunchTemplateSpecification: expandEc2LaunchTemplateSpecificationRequest(m["launch_template_specification"].([]interface{})), + } + + if v, ok := m["override"]; ok { + fleetLaunchTemplateConfigRequest.Overrides = expandEc2FleetLaunchTemplateOverridesRequests(v.([]interface{})) + } + + return fleetLaunchTemplateConfigRequest +} + +func expandEc2FleetLaunchTemplateOverridesRequests(l []interface{}) []*ec2.FleetLaunchTemplateOverridesRequest { + if len(l) == 0 { + return nil + } + + fleetLaunchTemplateOverridesRequests := make([]*ec2.FleetLaunchTemplateOverridesRequest, len(l)) + for i, m := range l { + if m == nil { + fleetLaunchTemplateOverridesRequests[i] = &ec2.FleetLaunchTemplateOverridesRequest{} + continue + } + + fleetLaunchTemplateOverridesRequests[i] = expandEc2FleetLaunchTemplateOverridesRequest(m.(map[string]interface{})) + } + return fleetLaunchTemplateOverridesRequests +} + +func expandEc2FleetLaunchTemplateOverridesRequest(m map[string]interface{}) *ec2.FleetLaunchTemplateOverridesRequest { + fleetLaunchTemplateOverridesRequest := &ec2.FleetLaunchTemplateOverridesRequest{} + + if v, ok := m["availability_zone"]; ok && v.(string) != "" { + fleetLaunchTemplateOverridesRequest.AvailabilityZone = aws.String(v.(string)) + } + + if v, ok := m["instance_type"]; ok && v.(string) != "" { + fleetLaunchTemplateOverridesRequest.InstanceType = aws.String(v.(string)) + } + + if v, ok := m["max_price"]; ok && v.(string) != "" { + fleetLaunchTemplateOverridesRequest.MaxPrice = aws.String(v.(string)) + } + + if v, ok := m["priority"]; ok && v.(float64) != 0.0 { + fleetLaunchTemplateOverridesRequest.Priority = aws.Float64(v.(float64)) + } + + if v, ok := m["subnet_id"]; ok && v.(string) != "" { + fleetLaunchTemplateOverridesRequest.SubnetId = aws.String(v.(string)) + } + + if v, ok := m["weighted_capacity"]; ok && v.(float64) != 0.0 { + fleetLaunchTemplateOverridesRequest.WeightedCapacity = aws.Float64(v.(float64)) + } + + return fleetLaunchTemplateOverridesRequest +} + +func expandEc2LaunchTemplateSpecificationRequest(l []interface{}) *ec2.FleetLaunchTemplateSpecificationRequest { + fleetLaunchTemplateSpecificationRequest := &ec2.FleetLaunchTemplateSpecificationRequest{} + + if len(l) == 0 || l[0] == nil { + return fleetLaunchTemplateSpecificationRequest + } + + m := l[0].(map[string]interface{}) + + if v, ok := m["launch_template_id"]; ok && v.(string) != "" { + fleetLaunchTemplateSpecificationRequest.LaunchTemplateId = aws.String(v.(string)) + } + + if v, ok := m["launch_template_name"]; ok && v.(string) != "" { + fleetLaunchTemplateSpecificationRequest.LaunchTemplateName = aws.String(v.(string)) + } + + if v, ok := m["version"]; ok && v.(string) != "" { + fleetLaunchTemplateSpecificationRequest.Version = aws.String(v.(string)) + } + + return fleetLaunchTemplateSpecificationRequest +} + +func 
expandEc2OnDemandOptionsRequest(l []interface{}) *ec2.OnDemandOptionsRequest { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + return &ec2.OnDemandOptionsRequest{ + AllocationStrategy: aws.String(m["allocation_strategy"].(string)), + } +} + +func expandEc2SpotOptionsRequest(l []interface{}) *ec2.SpotOptionsRequest { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + spotOptionsRequest := &ec2.SpotOptionsRequest{ + AllocationStrategy: aws.String(m["allocation_strategy"].(string)), + InstanceInterruptionBehavior: aws.String(m["instance_interruption_behavior"].(string)), + } + + // InvalidFleetConfig: InstancePoolsToUseCount option is only available with the lowestPrice allocation strategy. + if aws.StringValue(spotOptionsRequest.AllocationStrategy) == "lowestPrice" { + spotOptionsRequest.InstancePoolsToUseCount = aws.Int64(int64(m["instance_pools_to_use_count"].(int))) + } + + return spotOptionsRequest +} + +func expandEc2TagSpecifications(m map[string]interface{}) []*ec2.TagSpecification { + if len(m) == 0 { + return nil + } + + return []*ec2.TagSpecification{ + { + ResourceType: aws.String("fleet"), + Tags: tagsFromMap(m), + }, + } +} + +func expandEc2TargetCapacitySpecificationRequest(l []interface{}) *ec2.TargetCapacitySpecificationRequest { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + targetCapacitySpecificationrequest := &ec2.TargetCapacitySpecificationRequest{ + TotalTargetCapacity: aws.Int64(int64(m["total_target_capacity"].(int))), + } + + if v, ok := m["default_target_capacity_type"]; ok && v.(string) != "" { + targetCapacitySpecificationrequest.DefaultTargetCapacityType = aws.String(v.(string)) + } + + if v, ok := m["on_demand_target_capacity"]; ok && v.(int) != 0 { + targetCapacitySpecificationrequest.OnDemandTargetCapacity = aws.Int64(int64(v.(int))) + } + + if v, ok := m["spot_target_capacity"]; ok && v.(int) != 0 { + targetCapacitySpecificationrequest.SpotTargetCapacity = aws.Int64(int64(v.(int))) + } + + return targetCapacitySpecificationrequest +} + +func flattenEc2FleetLaunchTemplateConfigs(fleetLaunchTemplateConfigs []*ec2.FleetLaunchTemplateConfig) []interface{} { + l := make([]interface{}, len(fleetLaunchTemplateConfigs)) + + for i, fleetLaunchTemplateConfig := range fleetLaunchTemplateConfigs { + if fleetLaunchTemplateConfig == nil { + l[i] = map[string]interface{}{} + continue + } + m := map[string]interface{}{ + "launch_template_specification": flattenEc2FleetLaunchTemplateSpecification(fleetLaunchTemplateConfig.LaunchTemplateSpecification), + "override": flattenEc2FleetLaunchTemplateOverrides(fleetLaunchTemplateConfig.Overrides), + } + l[i] = m + } + + return l +} + +func flattenEc2FleetLaunchTemplateOverrides(fleetLaunchTemplateOverrides []*ec2.FleetLaunchTemplateOverrides) []interface{} { + l := make([]interface{}, len(fleetLaunchTemplateOverrides)) + + for i, fleetLaunchTemplateOverride := range fleetLaunchTemplateOverrides { + if fleetLaunchTemplateOverride == nil { + l[i] = map[string]interface{}{} + continue + } + m := map[string]interface{}{ + "availability_zone": aws.StringValue(fleetLaunchTemplateOverride.AvailabilityZone), + "instance_type": aws.StringValue(fleetLaunchTemplateOverride.InstanceType), + "max_price": aws.StringValue(fleetLaunchTemplateOverride.MaxPrice), + "priority": aws.Float64Value(fleetLaunchTemplateOverride.Priority), + "subnet_id": aws.StringValue(fleetLaunchTemplateOverride.SubnetId), + 
"weighted_capacity": aws.Float64Value(fleetLaunchTemplateOverride.WeightedCapacity), + } + l[i] = m + } + + return l +} + +func flattenEc2FleetLaunchTemplateSpecification(fleetLaunchTemplateSpecification *ec2.FleetLaunchTemplateSpecification) []interface{} { + if fleetLaunchTemplateSpecification == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "launch_template_id": aws.StringValue(fleetLaunchTemplateSpecification.LaunchTemplateId), + "launch_template_name": aws.StringValue(fleetLaunchTemplateSpecification.LaunchTemplateName), + "version": aws.StringValue(fleetLaunchTemplateSpecification.Version), + } + + return []interface{}{m} +} + +func flattenEc2OnDemandOptions(onDemandOptions *ec2.OnDemandOptions) []interface{} { + if onDemandOptions == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "allocation_strategy": aws.StringValue(onDemandOptions.AllocationStrategy), + } + + return []interface{}{m} +} + +func flattenEc2SpotOptions(spotOptions *ec2.SpotOptions) []interface{} { + if spotOptions == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "allocation_strategy": aws.StringValue(spotOptions.AllocationStrategy), + "instance_interruption_behavior": aws.StringValue(spotOptions.InstanceInterruptionBehavior), + "instance_pools_to_use_count": aws.Int64Value(spotOptions.InstancePoolsToUseCount), + } + + // API will omit InstancePoolsToUseCount if AllocationStrategy is diversified, which breaks our Default: 1 + // Here we just reset it to 1 to prevent removing the Default and setting up a special DiffSuppressFunc + if spotOptions.InstancePoolsToUseCount == nil && aws.StringValue(spotOptions.AllocationStrategy) == ec2.SpotAllocationStrategyDiversified { + m["instance_pools_to_use_count"] = 1 + } + + return []interface{}{m} +} + +func flattenEc2TargetCapacitySpecification(targetCapacitySpecification *ec2.TargetCapacitySpecification) []interface{} { + if targetCapacitySpecification == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "default_target_capacity_type": aws.StringValue(targetCapacitySpecification.DefaultTargetCapacityType), + "on_demand_target_capacity": aws.Int64Value(targetCapacitySpecification.OnDemandTargetCapacity), + "spot_target_capacity": aws.Int64Value(targetCapacitySpecification.SpotTargetCapacity), + "total_target_capacity": aws.Int64Value(targetCapacitySpecification.TotalTargetCapacity), + } + + return []interface{}{m} +} diff --git a/aws/resource_aws_ec2_fleet_test.go b/aws/resource_aws_ec2_fleet_test.go new file mode 100644 index 00000000000..9238d91c004 --- /dev/null +++ b/aws/resource_aws_ec2_fleet_test.go @@ -0,0 +1,1698 @@ +package aws + +import ( + "errors" + "fmt" + "strconv" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSEc2Fleet_basic(t *testing.T) { + var fleet1 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_TargetCapacitySpecification_DefaultTargetCapacityType(rName, "spot"), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "excess_capacity_termination_policy", "termination"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "launch_template_config.0.launch_template_specification.0.launch_template_id"), + resource.TestCheckResourceAttrSet(resourceName, "launch_template_config.0.launch_template_specification.0.version"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "0"), + resource.TestCheckResourceAttr(resourceName, "on_demand_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "on_demand_options.0.allocation_strategy", "lowestPrice"), + resource.TestCheckResourceAttr(resourceName, "replace_unhealthy_instances", "false"), + resource.TestCheckResourceAttr(resourceName, "spot_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.allocation_strategy", "lowestPrice"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.instance_interruption_behavior", "terminate"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.instance_pools_to_use_count", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.0.default_target_capacity_type", "spot"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.0.total_target_capacity", "0"), + resource.TestCheckResourceAttr(resourceName, "terminate_instances", "false"), + resource.TestCheckResourceAttr(resourceName, "terminate_instances_with_expiration", "false"), + resource.TestCheckResourceAttr(resourceName, "type", "maintain"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + }, + }) +} + +func TestAccAWSEc2Fleet_disappears(t *testing.T) { + var fleet1 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_TargetCapacitySpecification_DefaultTargetCapacityType(rName, "spot"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetDisappears(&fleet1), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSEc2Fleet_ExcessCapacityTerminationPolicy(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_ExcessCapacityTerminationPolicy(rName, "no-termination"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "excess_capacity_termination_policy", "no-termination"), + ), + }, + { + ResourceName: resourceName, + 
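+                // terminate_instances only controls behavior at delete time and is never returned by DescribeFleets, so it cannot be verified after import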
ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_ExcessCapacityTerminationPolicy(rName, "termination"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetNotRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "excess_capacity_termination_policy", "termination"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_LaunchTemplateSpecification_LaunchTemplateId(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + launchTemplateResourceName1 := "aws_launch_template.test1" + launchTemplateResourceName2 := "aws_launch_template.test2" + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_LaunchTemplateId(rName, launchTemplateResourceName1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.launch_template_id", launchTemplateResourceName1, "id"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.version", launchTemplateResourceName1, "latest_version"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_LaunchTemplateId(rName, launchTemplateResourceName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.launch_template_id", launchTemplateResourceName2, "id"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.version", launchTemplateResourceName2, "latest_version"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_LaunchTemplateSpecification_LaunchTemplateName(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + launchTemplateResourceName1 := "aws_launch_template.test1" + launchTemplateResourceName2 := "aws_launch_template.test2" + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_LaunchTemplateName(rName, launchTemplateResourceName1), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.launch_template_name", launchTemplateResourceName1, "name"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.version", launchTemplateResourceName1, "latest_version"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_LaunchTemplateName(rName, launchTemplateResourceName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.launch_template_name", launchTemplateResourceName2, "name"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.version", launchTemplateResourceName2, "latest_version"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_LaunchTemplateSpecification_Version(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + var launchTemplate ec2.LaunchTemplate + launchTemplateResourceName := "aws_launch_template.test" + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_Version(rName, "t3.micro"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(launchTemplateResourceName, &launchTemplate), + resource.TestCheckResourceAttr(launchTemplateResourceName, "instance_type", "t3.micro"), + resource.TestCheckResourceAttr(launchTemplateResourceName, "latest_version", "1"), + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.launch_template_id", launchTemplateResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.version", launchTemplateResourceName, "latest_version"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_Version(rName, "t3.small"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(launchTemplateResourceName, &launchTemplate), + 
resource.TestCheckResourceAttr(launchTemplateResourceName, "instance_type", "t3.small"), + resource.TestCheckResourceAttr(launchTemplateResourceName, "latest_version", "2"), + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.launch_template_specification.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.launch_template_id", launchTemplateResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.launch_template_specification.0.version", launchTemplateResourceName, "latest_version"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_Override_AvailabilityZone(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + availabilityZonesDataSourceName := "data.aws_availability_zones.available" + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_AvailabilityZone(rName, 0), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.override.0.availability_zone", availabilityZonesDataSourceName, "names.0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_AvailabilityZone(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.override.0.availability_zone", availabilityZonesDataSourceName, "names.1"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_Override_InstanceType(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_InstanceType(rName, "t3.small"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.instance_type", "t3.small"), + ), + }, + { + ResourceName: resourceName, + ImportState: 
true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_InstanceType(rName, "t3.medium"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.instance_type", "t3.medium"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_Override_MaxPrice(t *testing.T) { + t.Skip("EC2 API is not correctly returning MaxPrice override") + + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_MaxPrice(rName, "1.01"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.max_price", "1.01"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_MaxPrice(rName, "1.02"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.max_price", "1.02"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_Override_Priority(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_Priority(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.priority", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_Priority(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + 
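The MaxPrice test above is guarded with `t.Skip` rather than deleted, which keeps the coverage gap visible while the underlying EC2 API behaviour is broken. The same guard pattern in isolation (the test name here is hypothetical):

```go
func TestAccAWSExampleResource_KnownApiIssue(t *testing.T) { // illustrative name
	// t.Skip keeps the test compiled and listed in -run/-v output, and the
	// message records why it is disabled so it can be re-enabled once the
	// upstream issue is fixed.
	t.Skip("EC2 API is not correctly returning MaxPrice override")
}
```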
testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.priority", "2"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_Override_Priority_Multiple(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_Priority_Multiple(rName, 1, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "2"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.priority", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.1.priority", "2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_Priority_Multiple(rName, 2, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "2"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.priority", "2"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.1.priority", "1"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_Override_SubnetId(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + subnetResourceName1 := "aws_subnet.test.0" + subnetResourceName2 := "aws_subnet.test.1" + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_SubnetId(rName, 0), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.override.0.subnet_id", subnetResourceName1, "id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_SubnetId(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + 
resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "launch_template_config.0.override.0.subnet_id", subnetResourceName2, "id"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_Override_WeightedCapacity(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_WeightedCapacity(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.weighted_capacity", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_WeightedCapacity(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.weighted_capacity", "2"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_LaunchTemplateConfig_Override_WeightedCapacity_Multiple(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_WeightedCapacity_Multiple(rName, 1, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "2"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.weighted_capacity", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.1.weighted_capacity", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_WeightedCapacity_Multiple(rName, 1, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.#", "2"), + 
resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.weighted_capacity", "1"), + resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.1.weighted_capacity", "2"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_OnDemandOptions_AllocationStrategy(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_OnDemandOptions_AllocationStrategy(rName, "prioritized"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "on_demand_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "on_demand_options.0.allocation_strategy", "prioritized"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_OnDemandOptions_AllocationStrategy(rName, "lowestPrice"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "on_demand_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "on_demand_options.0.allocation_strategy", "lowestPrice"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_ReplaceUnhealthyInstances(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_ReplaceUnhealthyInstances(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "replace_unhealthy_instances", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_ReplaceUnhealthyInstances(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "replace_unhealthy_instances", "false"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_SpotOptions_AllocationStrategy(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_SpotOptions_AllocationStrategy(rName, "diversified"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "spot_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.allocation_strategy", "diversified"), + ), + }, + 
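One detail worth flagging in the launch-template tests above (from `_LaunchTemplateSpecification_LaunchTemplateId` through `_Override_WeightedCapacity_Multiple`): the second config step still captures the fleet into `&fleet1`, so `fleet2` keeps its zero value and `testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2)` passes trivially, because a zero `CreateTime` never equals the real one. The on-demand, spot, replace-unhealthy and tag tests capture into `&fleet2` as intended. A sketch of the intended second step, shown as the bare `TestStep` for brevity:

```go
// Sketch: capture the post-update fleet into fleet2 so the CreateTime
// comparison in testAccCheckAWSEc2FleetRecreated is meaningful.
{
	Config: testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_InstanceType(rName, "t3.medium"),
	Check: resource.ComposeTestCheckFunc(
		testAccCheckAWSEc2FleetExists(resourceName, &fleet2), // was &fleet1
		testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2),
		resource.TestCheckResourceAttr(resourceName, "launch_template_config.0.override.0.instance_type", "t3.medium"),
	),
},
```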
{ + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_SpotOptions_AllocationStrategy(rName, "lowestPrice"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "spot_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.allocation_strategy", "lowestPrice"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_SpotOptions_InstanceInterruptionBehavior(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_SpotOptions_InstanceInterruptionBehavior(rName, "stop"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "spot_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.instance_interruption_behavior", "stop"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_SpotOptions_InstanceInterruptionBehavior(rName, "terminate"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "spot_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.instance_interruption_behavior", "terminate"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_SpotOptions_InstancePoolsToUseCount(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_SpotOptions_InstancePoolsToUseCount(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "spot_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.instance_pools_to_use_count", "2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_SpotOptions_InstancePoolsToUseCount(rName, 3), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "spot_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "spot_options.0.instance_pools_to_use_count", "3"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_Tags(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() 
{ testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_Tags(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_Tags(rName, "key1", "value1updated"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_TargetCapacitySpecification_DefaultTargetCapacityType(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_TargetCapacitySpecification_DefaultTargetCapacityType(rName, "on-demand"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.0.default_target_capacity_type", "on-demand"), + ), + }, + { + Config: testAccAWSEc2FleetConfig_TargetCapacitySpecification_DefaultTargetCapacityType(rName, "spot"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.0.default_target_capacity_type", "spot"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_TargetCapacitySpecification_DefaultTargetCapacityType_OnDemand(t *testing.T) { + var fleet1 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_TargetCapacitySpecification_DefaultTargetCapacityType(rName, "on-demand"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.0.default_target_capacity_type", "on-demand"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + }, + }) +} + +func TestAccAWSEc2Fleet_TargetCapacitySpecification_DefaultTargetCapacityType_Spot(t *testing.T) { + var fleet1 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := 
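The tag assertions use the flatmap addressing that helper/schema writes to state: `tags.%` holds the element count and `tags.<key>` holds each value. A small sketch, using a hypothetical helper name that is not in the source:

```go
// Sketch: map attributes are flattened into state as "<attr>.%" (count)
// plus one "<attr>.<key>" entry per element, which is what the
// TestCheckResourceAttr calls above address.
func checkFleetSingleTag(resourceName, key, value string) resource.TestCheckFunc { // hypothetical helper
	return resource.ComposeTestCheckFunc(
		resource.TestCheckResourceAttr(resourceName, "tags.%", "1"),
		resource.TestCheckResourceAttr(resourceName, fmt.Sprintf("tags.%s", key), value),
	)
}
```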
acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_TargetCapacitySpecification_DefaultTargetCapacityType(rName, "spot"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.0.default_target_capacity_type", "spot"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + }, + }) +} + +func TestAccAWSEc2Fleet_TargetCapacitySpecification_TotalTargetCapacity(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_TargetCapacitySpecification_TotalTargetCapacity(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.0.total_target_capacity", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_TargetCapacitySpecification_TotalTargetCapacity(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetNotRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_capacity_specification.0.total_target_capacity", "2"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_TerminateInstancesWithExpiration(t *testing.T) { + var fleet1, fleet2 ec2.FleetData + resourceName := "aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_TerminateInstancesWithExpiration(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "terminate_instances_with_expiration", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + { + Config: testAccAWSEc2FleetConfig_TerminateInstancesWithExpiration(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + resource.TestCheckResourceAttr(resourceName, "terminate_instances_with_expiration", "false"), + ), + }, + }, + }) +} + +func TestAccAWSEc2Fleet_Type(t *testing.T) { + var fleet1 ec2.FleetData + resourceName := 
"aws_ec2_fleet.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEc2FleetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEc2FleetConfig_Type(rName, "maintain"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEc2FleetExists(resourceName, &fleet1), + resource.TestCheckResourceAttr(resourceName, "type", "maintain"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"terminate_instances"}, + }, + // This configuration will fulfill immediately, skip until ValidFrom is implemented + // { + // Config: testAccAWSEc2FleetConfig_Type(rName, "request"), + // Check: resource.ComposeTestCheckFunc( + // testAccCheckAWSEc2FleetExists(resourceName, &fleet2), + // testAccCheckAWSEc2FleetRecreated(&fleet1, &fleet2), + // resource.TestCheckResourceAttr(resourceName, "type", "request"), + // ), + // }, + }, + }) +} + +func testAccCheckAWSEc2FleetExists(resourceName string, fleet *ec2.FleetData) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No EC2 Fleet ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + input := &ec2.DescribeFleetsInput{ + FleetIds: []*string{aws.String(rs.Primary.ID)}, + } + + output, err := conn.DescribeFleets(input) + + if err != nil { + return err + } + + if output == nil { + return fmt.Errorf("EC2 Fleet not found") + } + + for _, fleetData := range output.Fleets { + if fleetData == nil { + continue + } + if aws.StringValue(fleetData.FleetId) != rs.Primary.ID { + continue + } + *fleet = *fleetData + break + } + + if fleet == nil { + return fmt.Errorf("EC2 Fleet not found") + } + + return nil + } +} + +func testAccCheckAWSEc2FleetDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_ec2_fleet" { + continue + } + + input := &ec2.DescribeFleetsInput{ + FleetIds: []*string{aws.String(rs.Primary.ID)}, + } + + output, err := conn.DescribeFleets(input) + + if isAWSErr(err, "InvalidFleetId.NotFound", "") { + continue + } + + if err != nil { + return err + } + + if output == nil { + continue + } + + for _, fleetData := range output.Fleets { + if fleetData == nil { + continue + } + if aws.StringValue(fleetData.FleetId) != rs.Primary.ID { + continue + } + if aws.StringValue(fleetData.FleetState) == ec2.FleetStateCodeDeleted { + break + } + terminateInstances, err := strconv.ParseBool(rs.Primary.Attributes["terminate_instances"]) + if err != nil { + return fmt.Errorf("error converting terminate_instances (%s) to bool: %s", rs.Primary.Attributes["terminate_instances"], err) + } + if !terminateInstances && aws.StringValue(fleetData.FleetState) == ec2.FleetStateCodeDeletedRunning { + break + } + // AWS SDK constant is incorrect + if !terminateInstances && aws.StringValue(fleetData.FleetState) == "deleted_running" { + break + } + return fmt.Errorf("EC2 Fleet (%s) still exists in non-deleted (%s) state", rs.Primary.ID, aws.StringValue(fleetData.FleetState)) + } + } + + return nil +} + +func testAccCheckAWSEc2FleetDisappears(fleet *ec2.FleetData) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := 
testAccProvider.Meta().(*AWSClient).ec2conn + + input := &ec2.DeleteFleetsInput{ + FleetIds: []*string{fleet.FleetId}, + TerminateInstances: aws.Bool(false), + } + + _, err := conn.DeleteFleets(input) + + return err + } +} + +func testAccCheckAWSEc2FleetNotRecreated(i, j *ec2.FleetData) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.TimeValue(i.CreateTime) != aws.TimeValue(j.CreateTime) { + return errors.New("EC2 Fleet was recreated") + } + + return nil + } +} + +func testAccCheckAWSEc2FleetRecreated(i, j *ec2.FleetData) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.TimeValue(i.CreateTime) == aws.TimeValue(j.CreateTime) { + return errors.New("EC2 Fleet was not recreated") + } + + return nil + } +} + +func testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName string) string { + return fmt.Sprintf(` +data "aws_ami" "test" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "test" { + image_id = "${data.aws_ami.test.id}" + instance_type = "t3.micro" + name = %q +} +`, rName) +} + +func testAccAWSEc2FleetConfig_ExcessCapacityTerminationPolicy(rName, excessCapacityTerminationPolicy string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + excess_capacity_termination_policy = %q + + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, excessCapacityTerminationPolicy) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_LaunchTemplateId(rName, launchTemplateResourceName string) string { + return fmt.Sprintf(` +data "aws_ami" "test" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "test1" { + image_id = "${data.aws_ami.test.id}" + instance_type = "t3.micro" + name = "%s1" +} + +resource "aws_launch_template" "test2" { + image_id = "${data.aws_ami.test.id}" + instance_type = "t3.micro" + name = "%s2" +} + +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${%s.id}" + version = "${%s.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, rName, rName, launchTemplateResourceName, launchTemplateResourceName) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_LaunchTemplateName(rName, launchTemplateResourceName string) string { + return fmt.Sprintf(` +data "aws_ami" "test" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "test1" { + image_id = "${data.aws_ami.test.id}" + instance_type = "t3.micro" + name = "%s1" +} + +resource "aws_launch_template" "test2" { + image_id = "${data.aws_ami.test.id}" + instance_type = "t3.micro" + name = "%s2" +} + +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_name = "${%s.name}" + version = "${%s.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + 
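Most of the configuration generators below concatenate `testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName)` (the Amazon Linux AMI lookup plus a launch template) with a `fmt.Sprintf` fragment that varies only the arguments under test. A sketch of a hypothetical generator in the same style; both arguments appear elsewhere in this file, but combining them here is purely illustrative:

```go
// Hypothetical generator following the file's convention: shared base config
// plus a formatted fragment for the arguments being exercised.
func testAccAWSEc2FleetConfig_TypeAndExpiration(rName, fleetType string, expire bool) string {
	return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(`
resource "aws_ec2_fleet" "test" {
  type                                = %q
  terminate_instances_with_expiration = %t

  launch_template_config {
    launch_template_specification {
      launch_template_id = "${aws_launch_template.test.id}"
      version            = "${aws_launch_template.test.latest_version}"
    }
  }

  target_capacity_specification {
    default_target_capacity_type = "spot"
    total_target_capacity        = 0
  }
}
`, fleetType, expire)
}
```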
total_target_capacity = 0 + } +} +`, rName, rName, launchTemplateResourceName, launchTemplateResourceName) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_LaunchTemplateSpecification_Version(rName, instanceType string) string { + return fmt.Sprintf(` +data "aws_ami" "test" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "test" { + image_id = "${data.aws_ami.test.id}" + instance_type = %q + name = %q +} + +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, instanceType, rName) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_AvailabilityZone(rName string, availabilityZoneIndex int) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + + override { + availability_zone = "${data.aws_availability_zones.available.names[%d]}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, availabilityZoneIndex) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_InstanceType(rName, instanceType string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + + override { + instance_type = %q + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, instanceType) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_MaxPrice(rName, maxPrice string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + + override { + max_price = %q + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, maxPrice) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_Priority(rName string, priority int) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + + override { + priority = %d + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, priority) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_Priority_Multiple(rName string, priority1, priority2 int) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + 
launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + + override { + instance_type = "${aws_launch_template.test.instance_type}" + priority = %d + } + + override { + instance_type = "t3.small" + priority = %d + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, priority1, priority2) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_SubnetId(rName string, subnetIndex int) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +variable "TestAccNameTag" { + default = "tf-acc-test-ec2-fleet-launchtemplateconfig-override-subnetid" +} + +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" + + tags { + Name = "${var.TestAccNameTag}" + } +} + +resource "aws_subnet" "test" { + count = 2 + + cidr_block = "10.1.${count.index}.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "${var.TestAccNameTag}" + } +} + +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + + override { + subnet_id = "${aws_subnet.test.*.id[%d]}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, subnetIndex) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_WeightedCapacity(rName string, weightedCapacity int) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + + override { + weighted_capacity = %d + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, weightedCapacity) +} + +func testAccAWSEc2FleetConfig_LaunchTemplateConfig_Override_WeightedCapacity_Multiple(rName string, weightedCapacity1, weightedCapacity2 int) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + + override { + instance_type = "${aws_launch_template.test.instance_type}" + weighted_capacity = %d + } + + override { + instance_type = "t3.small" + weighted_capacity = %d + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, weightedCapacity1, weightedCapacity2) +} + +func testAccAWSEc2FleetConfig_OnDemandOptions_AllocationStrategy(rName, allocationStrategy string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + on_demand_options { + allocation_strategy = %q + } + + target_capacity_specification { + default_target_capacity_type = "on-demand" + total_target_capacity = 0 + } +} +`, allocationStrategy) +} + +func testAccAWSEc2FleetConfig_ReplaceUnhealthyInstances(rName string, replaceUnhealthyInstances 
bool) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + replace_unhealthy_instances = %t + + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, replaceUnhealthyInstances) +} + +func testAccAWSEc2FleetConfig_SpotOptions_AllocationStrategy(rName, allocationStrategy string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + spot_options { + allocation_strategy = %q + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, allocationStrategy) +} + +func testAccAWSEc2FleetConfig_SpotOptions_InstanceInterruptionBehavior(rName, instanceInterruptionBehavior string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + spot_options { + instance_interruption_behavior = %q + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, instanceInterruptionBehavior) +} + +func testAccAWSEc2FleetConfig_SpotOptions_InstancePoolsToUseCount(rName string, instancePoolsToUseCount int) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + spot_options { + instance_pools_to_use_count = %d + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, instancePoolsToUseCount) +} + +func testAccAWSEc2FleetConfig_Tags(rName, key1, value1 string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + tags { + %q = %q + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, key1, value1) +} + +func testAccAWSEc2FleetConfig_TargetCapacitySpecification_DefaultTargetCapacityType(rName, defaultTargetCapacityType string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = %q + total_target_capacity = 0 + } +} +`, defaultTargetCapacityType) +} + +func testAccAWSEc2FleetConfig_TargetCapacitySpecification_TotalTargetCapacity(rName string, totalTargetCapacity int) string { + 
return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + terminate_instances = true + + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = %d + } +} +`, totalTargetCapacity) +} + +func testAccAWSEc2FleetConfig_TerminateInstancesWithExpiration(rName string, terminateInstancesWithExpiration bool) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + terminate_instances_with_expiration = %t + + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, terminateInstancesWithExpiration) +} + +func testAccAWSEc2FleetConfig_Type(rName, fleetType string) string { + return testAccAWSEc2FleetConfig_BaseLaunchTemplate(rName) + fmt.Sprintf(` +resource "aws_ec2_fleet" "test" { + type = %q + + launch_template_config { + launch_template_specification { + launch_template_id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 0 + } +} +`, fleetType) +} diff --git a/aws/resource_aws_ecr_lifecycle_policy.go b/aws/resource_aws_ecr_lifecycle_policy.go index 4f5f4e65852..31b73e64889 100644 --- a/aws/resource_aws_ecr_lifecycle_policy.go +++ b/aws/resource_aws_ecr_lifecycle_policy.go @@ -4,6 +4,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ecr" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsEcrLifecyclePolicy() *schema.Resource { @@ -17,19 +18,19 @@ func resourceAwsEcrLifecyclePolicy() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "repository": &schema.Schema{ + "repository": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy": &schema.Schema{ + "policy": { Type: schema.TypeString, Required: true, ForceNew: true, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, DiffSuppressFunc: suppressEquivalentJsonDiffs, }, - "registry_id": &schema.Schema{ + "registry_id": { Type: schema.TypeString, Computed: true, }, @@ -91,11 +92,9 @@ func resourceAwsEcrLifecyclePolicyDelete(d *schema.ResourceData, meta interface{ _, err := conn.DeleteLifecyclePolicy(input) if err != nil { if isAWSErr(err, ecr.ErrCodeRepositoryNotFoundException, "") { - d.SetId("") return nil } if isAWSErr(err, ecr.ErrCodeLifecyclePolicyNotFoundException, "") { - d.SetId("") return nil } return err diff --git a/aws/resource_aws_ecr_lifecycle_policy_test.go b/aws/resource_aws_ecr_lifecycle_policy_test.go index 1f54b58ab47..fa6b2864c31 100644 --- a/aws/resource_aws_ecr_lifecycle_policy_test.go +++ b/aws/resource_aws_ecr_lifecycle_policy_test.go @@ -15,7 +15,7 @@ func TestAccAWSEcrLifecyclePolicy_basic(t *testing.T) { randString := acctest.RandString(10) rName := fmt.Sprintf("tf-acc-test-lifecycle-%s", randString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) 
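The ECR lifecycle-policy change drops the `d.SetId("")` calls from the delete path: when a Delete function returns nil, helper/schema already removes the resource from state, so clearing the ID there is redundant. A condensed sketch of the resulting not-found handling, using only helpers and error codes already present in this hunk (`conn` and `input` as in the surrounding function):

```go
// Sketch of the delete-path error handling after the change: treat
// "already gone" as success and let helper/schema drop the state entry.
_, err := conn.DeleteLifecyclePolicy(input)
if isAWSErr(err, ecr.ErrCodeRepositoryNotFoundException, "") ||
	isAWSErr(err, ecr.ErrCodeLifecyclePolicyNotFoundException, "") {
	return nil // nothing left to delete; returning nil removes it from state
}
if err != nil {
	return err
}
return nil
```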
}, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcrLifecyclePolicyDestroy, @@ -35,16 +35,16 @@ func TestAccAWSEcrLifecyclePolicy_import(t *testing.T) { randString := acctest.RandString(10) rName := fmt.Sprintf("tf-acc-test-lifecycle-%s", randString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcrLifecyclePolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccEcrLifecyclePolicyConfig(rName), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_ecr_repository.go b/aws/resource_aws_ecr_repository.go index 3a244743522..8b540432b09 100644 --- a/aws/resource_aws_ecr_repository.go +++ b/aws/resource_aws_ecr_repository.go @@ -22,21 +22,25 @@ func resourceAwsEcrRepository() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Delete: schema.DefaultTimeout(20 * time.Minute), + }, + Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "registry_id": &schema.Schema{ + "registry_id": { Type: schema.TypeString, Computed: true, }, - "repository_url": &schema.Schema{ + "repository_url": { Type: schema.TypeString, Computed: true, }, @@ -72,37 +76,43 @@ func resourceAwsEcrRepositoryRead(d *schema.ResourceData, meta interface{}) erro conn := meta.(*AWSClient).ecrconn log.Printf("[DEBUG] Reading repository %s", d.Id()) - out, err := conn.DescribeRepositories(&ecr.DescribeRepositoriesInput{ + var out *ecr.DescribeRepositoriesOutput + input := &ecr.DescribeRepositoriesInput{ RepositoryNames: []*string{aws.String(d.Id())}, + } + + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + out, err = conn.DescribeRepositories(input) + if d.IsNewResource() && isAWSErr(err, ecr.ErrCodeRepositoryNotFoundException, "") { + return resource.RetryableError(err) + } + if err != nil { + return resource.NonRetryableError(err) + } + return nil }) + + if isAWSErr(err, ecr.ErrCodeRepositoryNotFoundException, "") { + log.Printf("[WARN] ECR Repository (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + if err != nil { - if ecrerr, ok := err.(awserr.Error); ok && ecrerr.Code() == "RepositoryNotFoundException" { - d.SetId("") - return nil - } return err } repository := out.Repositories[0] - log.Printf("[DEBUG] Received repository %s", out) - - d.SetId(*repository.RepositoryName) d.Set("arn", repository.RepositoryArn) - d.Set("registry_id", repository.RegistryId) d.Set("name", repository.RepositoryName) - - repositoryUrl := buildRepositoryUrl(repository, meta.(*AWSClient).region) - log.Printf("[INFO] Setting the repository url to be %s", repositoryUrl) - d.Set("repository_url", repositoryUrl) + d.Set("registry_id", repository.RegistryId) + d.Set("repository_url", repository.RepositoryUri) return nil } -func buildRepositoryUrl(repo *ecr.Repository, region string) string { - return fmt.Sprintf("%s.dkr.ecr.%s.amazonaws.com/%s", *repo.RegistryId, region, *repo.RepositoryName) -} - func resourceAwsEcrRepositoryDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ecrconn @@ -113,14 +123,13 @@ func resourceAwsEcrRepositoryDelete(d *schema.ResourceData, meta interface{}) er }) if err != nil { if 
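The ECR repository resource now declares a `Timeouts` block and (further down) reads it back with `d.Timeout(schema.TimeoutDelete)`, so the 20-minute deletion wait becomes a default rather than a hard-coded value. A small sketch of the declaration side, wrapped in an illustrative function so it stands alone:

```go
// Sketch: declaring Timeouts on the resource enables a user-facing
// `timeouts` block; d.Timeout(schema.TimeoutDelete) in the delete path then
// returns either the practitioner's value or this 20-minute default.
func resourceAwsEcrRepositoryTimeoutsSketch() *schema.Resource { // illustrative wrapper
	return &schema.Resource{
		Timeouts: &schema.ResourceTimeout{
			Delete: schema.DefaultTimeout(20 * time.Minute),
		},
	}
}
```

With that in place, a configuration carrying `timeouts { delete = "45m" }` on the repository should be honored by the delete retry loop instead of the default.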
ecrerr, ok := err.(awserr.Error); ok && ecrerr.Code() == "RepositoryNotFoundException" { - d.SetId("") return nil } return err } log.Printf("[DEBUG] Waiting for ECR Repository %q to be deleted", d.Id()) - err = resource.Retry(20*time.Minute, func() *resource.RetryError { + err = resource.Retry(d.Timeout(schema.TimeoutDelete), func() *resource.RetryError { _, err := conn.DescribeRepositories(&ecr.DescribeRepositoriesInput{ RepositoryNames: []*string{aws.String(d.Id())}, }) @@ -145,7 +154,6 @@ func resourceAwsEcrRepositoryDelete(d *schema.ResourceData, meta interface{}) er return err } - d.SetId("") log.Printf("[DEBUG] repository %q deleted.", d.Get("name").(string)) return nil diff --git a/aws/resource_aws_ecr_repository_policy.go b/aws/resource_aws_ecr_repository_policy.go index a58a5d60b10..e2824ea2b73 100644 --- a/aws/resource_aws_ecr_repository_policy.go +++ b/aws/resource_aws_ecr_repository_policy.go @@ -19,16 +19,16 @@ func resourceAwsEcrRepositoryPolicy() *schema.Resource { Delete: resourceAwsEcrRepositoryPolicyDelete, Schema: map[string]*schema.Schema{ - "repository": &schema.Schema{ + "repository": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy": &schema.Schema{ + "policy": { Type: schema.TypeString, Required: true, }, - "registry_id": &schema.Schema{ + "registry_id": { Type: schema.TypeString, Computed: true, }, @@ -153,7 +153,6 @@ func resourceAwsEcrRepositoryPolicyDelete(d *schema.ResourceData, meta interface if ecrerr, ok := err.(awserr.Error); ok { switch ecrerr.Code() { case "RepositoryNotFoundException", "RepositoryPolicyNotFoundException": - d.SetId("") return nil default: return err diff --git a/aws/resource_aws_ecr_repository_policy_test.go b/aws/resource_aws_ecr_repository_policy_test.go index 260a8ba13e1..8b3313a7070 100644 --- a/aws/resource_aws_ecr_repository_policy_test.go +++ b/aws/resource_aws_ecr_repository_policy_test.go @@ -15,12 +15,12 @@ import ( func TestAccAWSEcrRepositoryPolicy_basic(t *testing.T) { randString := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcrRepositoryPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEcrRepositoryPolicy(randString), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcrRepositoryPolicyExists("aws_ecr_repository_policy.default"), @@ -33,12 +33,12 @@ func TestAccAWSEcrRepositoryPolicy_basic(t *testing.T) { func TestAccAWSEcrRepositoryPolicy_iam(t *testing.T) { randString := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcrRepositoryPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEcrRepositoryPolicyWithIAMRole(randString), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcrRepositoryPolicyExists("aws_ecr_repository_policy.default"), diff --git a/aws/resource_aws_ecr_repository_test.go b/aws/resource_aws_ecr_repository_test.go index 23d5e3bcfed..85dbfa2a616 100644 --- a/aws/resource_aws_ecr_repository_test.go +++ b/aws/resource_aws_ecr_repository_test.go @@ -5,7 +5,6 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ecr" "github.com/hashicorp/terraform/helper/acctest" 
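Several of these hunks still use the older `err.(awserr.Error)` type assertion, while newer code in the same PR (for example the repository destroy check below) switches to the provider's `isAWSErr` helper. A sketch of the equivalent not-found handling for the repository-policy delete written with the helper; the error-code strings are the ones matched by the switch in the hunk above, and the surrounding `err` is assumed to come from the existing delete call:

```go
// Sketch: same behaviour as the awserr switch above, expressed with the
// provider's isAWSErr helper.
if isAWSErr(err, "RepositoryNotFoundException", "") ||
	isAWSErr(err, "RepositoryPolicyNotFoundException", "") {
	return nil // returning nil from Delete drops the resource from state
}
if err != nil {
	return err
}
return nil
```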
"github.com/hashicorp/terraform/helper/resource" @@ -13,19 +12,29 @@ import ( ) func TestAccAWSEcrRepository_basic(t *testing.T) { - randString := acctest.RandString(10) + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_ecr_repository.default" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcrRepositoryDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEcrRepository(randString), + Config: testAccAWSEcrRepositoryConfig(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSEcrRepositoryExists("aws_ecr_repository.default"), + testAccCheckAWSEcrRepositoryExists(resourceName), + testAccCheckResourceAttrRegionalARN(resourceName, "arn", "ecr", fmt.Sprintf("repository/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "name", rName), + testAccCheckAWSEcrRepositoryRegistryID(resourceName), + testAccCheckAWSEcrRepositoryRepositoryURL(resourceName, rName), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -44,16 +53,17 @@ func testAccCheckAWSEcrRepositoryDestroy(s *terraform.State) error { out, err := conn.DescribeRepositories(&input) + if isAWSErr(err, ecr.ErrCodeRepositoryNotFoundException, "") { + return nil + } + if err != nil { - if ecrerr, ok := err.(awserr.Error); ok && ecrerr.Code() == "RepositoryNotFoundException" { - return nil - } return err } for _, repository := range out.Repositories { - if repository.RepositoryName == aws.String(rs.Primary.Attributes["name"]) { - return fmt.Errorf("ECR repository still exists:\n%#v", repository) + if aws.StringValue(repository.RepositoryName) == rs.Primary.Attributes["name"] { + return fmt.Errorf("ECR repository still exists: %s", rs.Primary.Attributes["name"]) } } } @@ -72,10 +82,24 @@ func testAccCheckAWSEcrRepositoryExists(name string) resource.TestCheckFunc { } } -func testAccAWSEcrRepository(randString string) string { +func testAccCheckAWSEcrRepositoryRegistryID(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + attributeValue := testAccGetAccountID() + return resource.TestCheckResourceAttr(resourceName, "registry_id", attributeValue)(s) + } +} + +func testAccCheckAWSEcrRepositoryRepositoryURL(resourceName, repositoryName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + attributeValue := fmt.Sprintf("%s.dkr.%s/%s", testAccGetAccountID(), testAccGetServiceEndpoint("ecr"), repositoryName) + return resource.TestCheckResourceAttr(resourceName, "repository_url", attributeValue)(s) + } +} + +func testAccAWSEcrRepositoryConfig(rName string) string { return fmt.Sprintf(` resource "aws_ecr_repository" "default" { - name = "tf-acc-test-ecr-%s" + name = %q } -`, randString) +`, rName) } diff --git a/aws/resource_aws_ecs_cluster.go b/aws/resource_aws_ecs_cluster.go index 5b25b9e6aa2..032318ed24c 100644 --- a/aws/resource_aws_ecs_cluster.go +++ b/aws/resource_aws_ecs_cluster.go @@ -17,6 +17,7 @@ func resourceAwsEcsCluster() *schema.Resource { return &schema.Resource{ Create: resourceAwsEcsClusterCreate, Read: resourceAwsEcsClusterRead, + Update: resourceAwsEcsClusterUpdate, Delete: resourceAwsEcsClusterDelete, Importer: &schema.ResourceImporter{ State: resourceAwsEcsClusterImport, @@ -28,7 +29,7 @@ func resourceAwsEcsCluster() *schema.Resource { Required: true, ForceNew: true, }, - + "tags": tagsSchema(), "arn": { Type: schema.TypeString, Computed: 
true, @@ -57,6 +58,7 @@ func resourceAwsEcsClusterCreate(d *schema.ResourceData, meta interface{}) error out, err := conn.CreateCluster(&ecs.CreateClusterInput{ ClusterName: aws.String(clusterName), + Tags: tagsFromMapECS(d.Get("tags").(map[string]interface{})), }) if err != nil { return err @@ -64,42 +66,118 @@ func resourceAwsEcsClusterCreate(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] ECS cluster %s created", *out.Cluster.ClusterArn) d.SetId(*out.Cluster.ClusterArn) - d.Set("arn", out.Cluster.ClusterArn) - d.Set("name", out.Cluster.ClusterName) - return nil + + return resourceAwsEcsClusterRead(d, meta) } func resourceAwsEcsClusterRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ecsconn - clusterName := d.Get("name").(string) - log.Printf("[DEBUG] Reading ECS cluster %s", clusterName) - out, err := conn.DescribeClusters(&ecs.DescribeClustersInput{ - Clusters: []*string{aws.String(clusterName)}, + input := &ecs.DescribeClustersInput{ + Clusters: []*string{aws.String(d.Id())}, + Include: []*string{aws.String(ecs.ClusterFieldTags)}, + } + + log.Printf("[DEBUG] Reading ECS Cluster: %s", input) + var out *ecs.DescribeClustersOutput + err := resource.Retry(2*time.Minute, func() *resource.RetryError { + var err error + out, err = conn.DescribeClusters(input) + + if err != nil { + return resource.NonRetryableError(err) + } + + if out == nil || len(out.Failures) > 0 { + if d.IsNewResource() { + return resource.RetryableError(&resource.NotFoundError{}) + } + return resource.NonRetryableError(&resource.NotFoundError{}) + } + + return nil }) + + if isResourceNotFoundError(err) { + log.Printf("[WARN] ECS Cluster (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + if err != nil { - return err + return fmt.Errorf("error reading ECS Cluster (%s): %s", d.Id(), err) } - log.Printf("[DEBUG] Received ECS clusters: %s", out.Clusters) + var cluster *ecs.Cluster for _, c := range out.Clusters { - if *c.ClusterName == clusterName { - // Status==INACTIVE means deleted cluster - if *c.Status == "INACTIVE" { - log.Printf("[DEBUG] Removing ECS cluster %q because it's INACTIVE", *c.ClusterArn) - d.SetId("") - return nil + if aws.StringValue(c.ClusterArn) == d.Id() { + cluster = c + break + } + } + + if cluster == nil { + log.Printf("[WARN] ECS Cluster (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + // Status==INACTIVE means deleted cluster + if aws.StringValue(cluster.Status) == "INACTIVE" { + log.Printf("[WARN] ECS Cluster (%s) deleted, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("arn", cluster.ClusterArn) + d.Set("name", cluster.ClusterName) + + if err := d.Set("tags", tagsToMapECS(cluster.Tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + return nil +} + +func resourceAwsEcsClusterUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ecsconn + + if d.HasChange("tags") { + oldTagsRaw, newTagsRaw := d.GetChange("tags") + oldTagsMap := oldTagsRaw.(map[string]interface{}) + newTagsMap := newTagsRaw.(map[string]interface{}) + createTags, removeTags := diffTagsECS(tagsFromMapECS(oldTagsMap), tagsFromMapECS(newTagsMap)) + + if len(removeTags) > 0 { + removeTagKeys := make([]*string, len(removeTags)) + for i, removeTag := range removeTags { + removeTagKeys[i] = removeTag.Key } - d.SetId(*c.ClusterArn) - d.Set("arn", c.ClusterArn) - d.Set("name", c.ClusterName) - return nil + input := &ecs.UntagResourceInput{ + 
ResourceArn: aws.String(d.Id()), + TagKeys: removeTagKeys, + } + + log.Printf("[DEBUG] Untagging ECS Cluster: %s", input) + if _, err := conn.UntagResource(input); err != nil { + return fmt.Errorf("error untagging ECS Cluster (%s): %s", d.Id(), err) + } + } + + if len(createTags) > 0 { + input := &ecs.TagResourceInput{ + ResourceArn: aws.String(d.Id()), + Tags: createTags, + } + + log.Printf("[DEBUG] Tagging ECS Cluster: %s", input) + if _, err := conn.TagResource(input); err != nil { + return fmt.Errorf("error tagging ECS Cluster (%s): %s", d.Id(), err) + } } } - log.Printf("[ERR] No matching ECS Cluster found for (%s)", d.Id()) - d.SetId("") return nil } diff --git a/aws/resource_aws_ecs_cluster_test.go b/aws/resource_aws_ecs_cluster_test.go index b9d25e6af56..bddaee5ab48 100644 --- a/aws/resource_aws_ecs_cluster_test.go +++ b/aws/resource_aws_ecs_cluster_test.go @@ -2,7 +2,6 @@ package aws import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/aws" @@ -13,46 +12,97 @@ import ( ) func TestAccAWSEcsCluster_basic(t *testing.T) { - rString := acctest.RandString(8) - clusterName := fmt.Sprintf("tf-acc-cluster-basic-%s", rString) + var cluster1 ecs.Cluster + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_ecs_cluster.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsClusterDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSEcsCluster(clusterName), + { + Config: testAccAWSEcsClusterConfig(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSEcsClusterExists("aws_ecs_cluster.foo"), - resource.TestMatchResourceAttr("aws_ecs_cluster.foo", "arn", - regexp.MustCompile("^arn:aws:ecs:[a-z0-9-]+:[0-9]{12}:cluster/"+clusterName+"$")), + testAccCheckAWSEcsClusterExists(resourceName, &cluster1), + testAccCheckResourceAttrRegionalARN(resourceName, "arn", "ecs", fmt.Sprintf("cluster/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, + { + ResourceName: resourceName, + ImportStateId: rName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func TestAccAWSEcsCluster_importBasic(t *testing.T) { - rString := acctest.RandString(8) - clusterName := fmt.Sprintf("tf-acc-cluster-import-%s", rString) +func TestAccAWSEcsCluster_disappears(t *testing.T) { + var cluster1 ecs.Cluster + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_ecs_cluster.test" - resourceName := "aws_ecs_cluster.foo" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsClusterConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsClusterExists(resourceName, &cluster1), + testAccCheckAWSEcsClusterDisappears(&cluster1), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} - resource.Test(t, resource.TestCase{ +func TestAccAWSEcsCluster_Tags(t *testing.T) { + var cluster1 ecs.Cluster + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_ecs_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsClusterDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: 
testAccAWSEcsCluster(clusterName), + { + Config: testAccAWSEcsClusterConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsClusterExists(resourceName, &cluster1), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), }, - resource.TestStep{ + { ResourceName: resourceName, - ImportStateId: clusterName, + ImportStateId: rName, ImportState: true, ImportStateVerify: true, }, + { + Config: testAccAWSEcsClusterConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsClusterExists(resourceName, &cluster1), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSEcsClusterConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsClusterExists(resourceName, &cluster1), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, }, }) } @@ -83,21 +133,82 @@ func testAccCheckAWSEcsClusterDestroy(s *terraform.State) error { return nil } -func testAccCheckAWSEcsClusterExists(name string) resource.TestCheckFunc { +func testAccCheckAWSEcsClusterExists(resourceName string, cluster *ecs.Cluster) resource.TestCheckFunc { return func(s *terraform.State) error { - _, ok := s.RootModule().Resources[name] + rs, ok := s.RootModule().Resources[resourceName] if !ok { - return fmt.Errorf("Not found: %s", name) + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).ecsconn + + input := &ecs.DescribeClustersInput{ + Clusters: []*string{aws.String(rs.Primary.ID)}, + Include: []*string{aws.String(ecs.ClusterFieldTags)}, + } + + output, err := conn.DescribeClusters(input) + + if err != nil { + return fmt.Errorf("error reading ECS Cluster (%s): %s", rs.Primary.ID, err) + } + + for _, c := range output.Clusters { + if aws.StringValue(c.ClusterArn) == rs.Primary.ID && aws.StringValue(c.Status) != "INACTIVE" { + *cluster = *c + return nil + } + } + + return fmt.Errorf("ECS Cluster (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSEcsClusterDisappears(cluster *ecs.Cluster) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ecsconn + + input := &ecs.DeleteClusterInput{ + Cluster: cluster.ClusterArn, + } + + if _, err := conn.DeleteCluster(input); err != nil { + return fmt.Errorf("error deleting ECS Cluster (%s): %s", aws.StringValue(cluster.ClusterArn), err) } return nil } } -func testAccAWSEcsCluster(clusterName string) string { +func testAccAWSEcsClusterConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_ecs_cluster" "test" { + name = %q +} +`, rName) +} + +func testAccAWSEcsClusterConfigTags1(rName, tag1Key, tag1Value string) string { return fmt.Sprintf(` -resource "aws_ecs_cluster" "foo" { - name = "%s" +resource "aws_ecs_cluster" "test" { + name = %q + + tags { + %q = %q + } +} +`, rName, tag1Key, tag1Value) +} + +func testAccAWSEcsClusterConfigTags2(rName, tag1Key, tag1Value, tag2Key, tag2Value string) string { + return fmt.Sprintf(` +resource "aws_ecs_cluster" "test" { + name = %q + + tags { + %q = %q + %q = %q + } } -`, clusterName) +`, rName, tag1Key, tag1Value, tag2Key, tag2Value) 
} diff --git a/aws/resource_aws_ecs_service.go b/aws/resource_aws_ecs_service.go index 7d8b74aba70..01ee9659ddc 100644 --- a/aws/resource_aws_ecs_service.go +++ b/aws/resource_aws_ecs_service.go @@ -4,19 +4,18 @@ import ( "bytes" "fmt" "log" - "regexp" "strings" "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/ecs" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) -var taskDefinitionRE = regexp.MustCompile("^([a-zA-Z0-9_-]+):([0-9]+)$") - func resourceAwsEcsService() *schema.Resource { return &schema.Resource{ Create: resourceAwsEcsServiceCreate, @@ -49,12 +48,18 @@ func resourceAwsEcsService() *schema.Resource { "desired_count": { Type: schema.TypeInt, Optional: true, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if d.Get("scheduling_strategy").(string) == ecs.SchedulingStrategyDaemon { + return true + } + return false + }, }, "health_check_grace_period_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateAwsEcsServiceHealthCheckGracePeriodSeconds, + ValidateFunc: validation.IntBetween(0, 7200), }, "launch_type": { @@ -64,6 +69,17 @@ func resourceAwsEcsService() *schema.Resource { Default: "EC2", }, + "scheduling_strategy": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: ecs.SchedulingStrategyReplica, + ValidateFunc: validation.StringInSlice([]string{ + ecs.SchedulingStrategyDaemon, + ecs.SchedulingStrategyReplica, + }, false), + }, + "iam_role": { Type: schema.TypeString, ForceNew: true, @@ -75,12 +91,24 @@ func resourceAwsEcsService() *schema.Resource { Type: schema.TypeInt, Optional: true, Default: 200, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if d.Get("scheduling_strategy").(string) == ecs.SchedulingStrategyDaemon && new == "200" { + return true + } + return false + }, }, "deployment_minimum_healthy_percent": { Type: schema.TypeInt, Optional: true, Default: 100, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if d.Get("scheduling_strategy").(string) == ecs.SchedulingStrategyDaemon && new == "100" { + return true + } + return false + }, }, "load_balancer": { @@ -144,10 +172,12 @@ func resourceAwsEcsService() *schema.Resource { }, }, "placement_strategy": { - Type: schema.TypeSet, - Optional: true, - ForceNew: true, - MaxItems: 5, + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + MaxItems: 5, + ConflictsWith: []string{"ordered_placement_strategy"}, + Deprecated: "Use `ordered_placement_strategy` instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "type": { @@ -183,7 +213,40 @@ func resourceAwsEcsService() *schema.Resource { return hashcode.String(buf.String()) }, }, - + "ordered_placement_strategy": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 5, + ConflictsWith: []string{"placement_strategy"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + "field": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, + StateFunc: func(v interface{}) string { + value := v.(string) + if value == "host" { + return "instanceId" + } + return value + }, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if 
strings.ToLower(old) == strings.ToLower(new) { + return true + } + return false + }, + }, + }, + }, + }, "placement_constraints": { Type: schema.TypeSet, Optional: true, @@ -204,26 +267,57 @@ func resourceAwsEcsService() *schema.Resource { }, }, }, + + "service_registries": { + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "container_name": { + Type: schema.TypeString, + Optional: true, + }, + "container_port": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(0, 65536), + }, + "port": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(0, 65536), + }, + "registry_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArn, + }, + }, + }, + }, + "tags": tagsSchema(), }, } } func resourceAwsEcsServiceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { if len(strings.Split(d.Id(), "/")) != 2 { - return []*schema.ResourceData{}, fmt.Errorf("[ERR] Wrong format of resource: %s. Please follow 'cluster-name/service-name'", d.Id()) + return []*schema.ResourceData{}, fmt.Errorf("Wrong format of resource: %s. Please follow 'cluster-name/service-name'", d.Id()) } cluster := strings.Split(d.Id(), "/")[0] name := strings.Split(d.Id(), "/")[1] log.Printf("[DEBUG] Importing ECS service %s from cluster %s", name, cluster) d.SetId(name) - clusterArn := arnString( - meta.(*AWSClient).partition, - meta.(*AWSClient).region, - "ecs", - meta.(*AWSClient).accountid, - fmt.Sprintf("cluster/%s", cluster), - ) + clusterArn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "ecs", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("cluster/%s", cluster), + }.String() d.Set("cluster", clusterArn) return []*schema.ResourceData{d}, nil } @@ -231,15 +325,27 @@ func resourceAwsEcsServiceImport(d *schema.ResourceData, meta interface{}) ([]*s func resourceAwsEcsServiceCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ecsconn + deploymentMinimumHealthyPercent := d.Get("deployment_minimum_healthy_percent").(int) + schedulingStrategy := d.Get("scheduling_strategy").(string) + input := ecs.CreateServiceInput{ - ServiceName: aws.String(d.Get("name").(string)), - TaskDefinition: aws.String(d.Get("task_definition").(string)), - DesiredCount: aws.Int64(int64(d.Get("desired_count").(int))), - ClientToken: aws.String(resource.UniqueId()), - DeploymentConfiguration: &ecs.DeploymentConfiguration{ + ClientToken: aws.String(resource.UniqueId()), + SchedulingStrategy: aws.String(schedulingStrategy), + ServiceName: aws.String(d.Get("name").(string)), + Tags: tagsFromMapECS(d.Get("tags").(map[string]interface{})), + TaskDefinition: aws.String(d.Get("task_definition").(string)), + } + + if schedulingStrategy == ecs.SchedulingStrategyDaemon && deploymentMinimumHealthyPercent != 100 { + input.DeploymentConfiguration = &ecs.DeploymentConfiguration{ + MinimumHealthyPercent: aws.Int64(int64(deploymentMinimumHealthyPercent)), + } + } else if schedulingStrategy == ecs.SchedulingStrategyReplica { + input.DeploymentConfiguration = &ecs.DeploymentConfiguration{ MaximumPercent: aws.Int64(int64(d.Get("deployment_maximum_percent").(int))), - MinimumHealthyPercent: aws.Int64(int64(d.Get("deployment_minimum_healthy_percent").(int))), - }, + MinimumHealthyPercent: aws.Int64(int64(deploymentMinimumHealthyPercent)), + } + input.DesiredCount = 
aws.Int64(int64(d.Get("desired_count").(int))) } if v, ok := d.GetOk("cluster"); ok { @@ -265,20 +371,16 @@ func resourceAwsEcsServiceCreate(d *schema.ResourceData, meta interface{}) error input.NetworkConfiguration = expandEcsNetworkConfiguration(d.Get("network_configuration").([]interface{})) - strategies := d.Get("placement_strategy").(*schema.Set).List() - if len(strategies) > 0 { - var ps []*ecs.PlacementStrategy - for _, raw := range strategies { - p := raw.(map[string]interface{}) - t := p["type"].(string) - f := p["field"].(string) - if err := validateAwsEcsPlacementStrategy(t, f); err != nil { - return err - } - ps = append(ps, &ecs.PlacementStrategy{ - Type: aws.String(p["type"].(string)), - Field: aws.String(p["field"].(string)), - }) + if v, ok := d.GetOk("ordered_placement_strategy"); ok { + ps, err := expandPlacementStrategy(v.([]interface{})) + if err != nil { + return err + } + input.PlacementStrategy = ps + } else { + ps, err := expandPlacementStrategyDeprecated(d.Get("placement_strategy").(*schema.Set)) + if err != nil { + return err } input.PlacementStrategy = ps } @@ -305,6 +407,29 @@ func resourceAwsEcsServiceCreate(d *schema.ResourceData, meta interface{}) error input.PlacementConstraints = pc } + serviceRegistries := d.Get("service_registries").(*schema.Set).List() + if len(serviceRegistries) > 0 { + srs := make([]*ecs.ServiceRegistry, 0, len(serviceRegistries)) + for _, v := range serviceRegistries { + raw := v.(map[string]interface{}) + sr := &ecs.ServiceRegistry{ + RegistryArn: aws.String(raw["registry_arn"].(string)), + } + if port, ok := raw["port"].(int); ok && port != 0 { + sr.Port = aws.Int64(int64(port)) + } + if raw, ok := raw["container_port"].(int); ok && raw != 0 { + sr.ContainerPort = aws.Int64(int64(raw)) + } + if raw, ok := raw["container_name"].(string); ok && raw != "" { + sr.ContainerName = aws.String(raw) + } + + srs = append(srs, sr) + } + input.ServiceRegistries = srs + } + log.Printf("[DEBUG] Creating ECS service: %s", input) // Retry due to AWS IAM & ECS eventual consistency @@ -320,6 +445,9 @@ func resourceAwsEcsServiceCreate(d *schema.ResourceData, meta interface{}) error if isAWSErr(err, ecs.ErrCodeInvalidParameterException, "Please verify that the ECS service role being passed has the proper permissions.") { return resource.RetryableError(err) } + if isAWSErr(err, ecs.ErrCodeInvalidParameterException, "does not have an associated load balancer") { + return resource.RetryableError(err) + } return resource.NonRetryableError(err) } @@ -342,8 +470,9 @@ func resourceAwsEcsServiceRead(d *schema.ResourceData, meta interface{}) error { log.Printf("[DEBUG] Reading ECS service %s", d.Id()) input := ecs.DescribeServicesInput{ - Services: []*string{aws.String(d.Id())}, Cluster: aws.String(d.Get("cluster").(string)), + Include: []*string{aws.String(ecs.ServiceFieldTags)}, + Services: []*string{aws.String(d.Id())}, } var out *ecs.DescribeServicesOutput @@ -361,7 +490,9 @@ func resourceAwsEcsServiceRead(d *schema.ResourceData, meta interface{}) error { if d.IsNewResource() { return resource.RetryableError(fmt.Errorf("ECS service not created yet: %q", d.Id())) } - return resource.NonRetryableError(fmt.Errorf("No ECS service found: %q", d.Id())) + log.Printf("[WARN] ECS Service %s not found, removing from state.", d.Id()) + d.SetId("") + return nil } service := out.Services[0] @@ -403,6 +534,7 @@ func resourceAwsEcsServiceRead(d *schema.ResourceData, meta interface{}) error { d.Set("task_definition", taskDefinition) } + d.Set("scheduling_strategy", 
service.SchedulingStrategy) d.Set("desired_count", service.DesiredCount) d.Set("health_check_grace_period_seconds", service.HealthCheckGracePeriodSeconds) d.Set("launch_type", service.LaunchType) @@ -434,15 +566,29 @@ func resourceAwsEcsServiceRead(d *schema.ResourceData, meta interface{}) error { d.Set("load_balancer", flattenEcsLoadBalancers(service.LoadBalancers)) } - if err := d.Set("placement_strategy", flattenPlacementStrategy(service.PlacementStrategy)); err != nil { - log.Printf("[ERR] Error setting placement_strategy for (%s): %s", d.Id(), err) + if _, ok := d.GetOk("placement_strategy"); ok { + if err := d.Set("placement_strategy", flattenPlacementStrategyDeprecated(service.PlacementStrategy)); err != nil { + return fmt.Errorf("error setting placement_strategy: %s", err) + } + } else { + if err := d.Set("ordered_placement_strategy", flattenPlacementStrategy(service.PlacementStrategy)); err != nil { + return fmt.Errorf("error setting ordered_placement_strategy: %s", err) + } } if err := d.Set("placement_constraints", flattenServicePlacementConstraints(service.PlacementConstraints)); err != nil { log.Printf("[ERR] Error setting placement_constraints for (%s): %s", d.Id(), err) } if err := d.Set("network_configuration", flattenEcsNetworkConfiguration(service.NetworkConfiguration)); err != nil { - return fmt.Errorf("[ERR] Error setting network_configuration for (%s): %s", d.Id(), err) + return fmt.Errorf("Error setting network_configuration for (%s): %s", d.Id(), err) + } + + if err := d.Set("service_registries", flattenServiceRegistries(service.ServiceRegistries)); err != nil { + return fmt.Errorf("Error setting service_registries for (%s): %s", d.Id(), err) + } + + if err := d.Set("tags", tagsToMapECS(service.Tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) } return nil @@ -501,7 +647,7 @@ func flattenServicePlacementConstraints(pcs []*ecs.PlacementConstraint) []map[st return results } -func flattenPlacementStrategy(pss []*ecs.PlacementStrategy) []map[string]interface{} { +func flattenPlacementStrategyDeprecated(pss []*ecs.PlacementStrategy) []map[string]interface{} { if len(pss) == 0 { return nil } @@ -509,11 +655,14 @@ func flattenPlacementStrategy(pss []*ecs.PlacementStrategy) []map[string]interfa for _, ps := range pss { c := make(map[string]interface{}) c["type"] = *ps.Type - c["field"] = *ps.Field - // for some fields the API requires lowercase for creation but will return uppercase on query - if *ps.Field == "MEMORY" || *ps.Field == "CPU" { - c["field"] = strings.ToLower(*ps.Field) + if ps.Field != nil { + c["field"] = *ps.Field + + // for some fields the API requires lowercase for creation but will return uppercase on query + if *ps.Field == "MEMORY" || *ps.Field == "CPU" { + c["field"] = strings.ToLower(*ps.Field) + } } results = append(results, c) @@ -521,54 +670,205 @@ func flattenPlacementStrategy(pss []*ecs.PlacementStrategy) []map[string]interfa return results } +func expandPlacementStrategy(s []interface{}) ([]*ecs.PlacementStrategy, error) { + if len(s) == 0 { + return nil, nil + } + pss := make([]*ecs.PlacementStrategy, 0) + for _, raw := range s { + p := raw.(map[string]interface{}) + t := p["type"].(string) + f := p["field"].(string) + if err := validateAwsEcsPlacementStrategy(t, f); err != nil { + return nil, err + } + ps := &ecs.PlacementStrategy{ + Type: aws.String(t), + } + if f != "" { + // Field must be omitted (i.e. 
not empty string) for random strategy + ps.Field = aws.String(f) + } + pss = append(pss, ps) + } + return pss, nil +} + +func expandPlacementStrategyDeprecated(s *schema.Set) ([]*ecs.PlacementStrategy, error) { + if len(s.List()) == 0 { + return nil, nil + } + pss := make([]*ecs.PlacementStrategy, 0) + for _, raw := range s.List() { + p := raw.(map[string]interface{}) + t := p["type"].(string) + f := p["field"].(string) + if err := validateAwsEcsPlacementStrategy(t, f); err != nil { + return nil, err + } + ps := &ecs.PlacementStrategy{ + Type: aws.String(t), + } + if f != "" { + // Field must be omitted (i.e. not empty string) for random strategy + ps.Field = aws.String(f) + } + pss = append(pss, ps) + } + return pss, nil +} + +func flattenPlacementStrategy(pss []*ecs.PlacementStrategy) []interface{} { + if len(pss) == 0 { + return nil + } + results := make([]interface{}, 0, len(pss)) + for _, ps := range pss { + c := make(map[string]interface{}) + c["type"] = *ps.Type + + if ps.Field != nil { + c["field"] = *ps.Field + + // for some fields the API requires lowercase for creation but will return uppercase on query + if *ps.Field == "MEMORY" || *ps.Field == "CPU" { + c["field"] = strings.ToLower(*ps.Field) + } + } + + results = append(results, c) + } + return results +} + +func flattenServiceRegistries(srs []*ecs.ServiceRegistry) []map[string]interface{} { + if len(srs) == 0 { + return nil + } + results := make([]map[string]interface{}, 0) + for _, sr := range srs { + c := map[string]interface{}{ + "registry_arn": aws.StringValue(sr.RegistryArn), + } + if sr.Port != nil { + c["port"] = int(aws.Int64Value(sr.Port)) + } + if sr.ContainerPort != nil { + c["container_port"] = int(aws.Int64Value(sr.ContainerPort)) + } + if sr.ContainerName != nil { + c["container_name"] = aws.StringValue(sr.ContainerName) + } + results = append(results, c) + } + return results +} + func resourceAwsEcsServiceUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ecsconn + updateService := false - log.Printf("[DEBUG] Updating ECS service %s", d.Id()) input := ecs.UpdateServiceInput{ Service: aws.String(d.Id()), Cluster: aws.String(d.Get("cluster").(string)), } - if d.HasChange("desired_count") { - _, n := d.GetChange("desired_count") - input.DesiredCount = aws.Int64(int64(n.(int))) + schedulingStrategy := d.Get("scheduling_strategy").(string) + + if schedulingStrategy == ecs.SchedulingStrategyDaemon { + if d.HasChange("deployment_minimum_healthy_percent") { + updateService = true + input.DeploymentConfiguration = &ecs.DeploymentConfiguration{ + MinimumHealthyPercent: aws.Int64(int64(d.Get("deployment_minimum_healthy_percent").(int))), + } + } + } else if schedulingStrategy == ecs.SchedulingStrategyReplica { + if d.HasChange("desired_count") { + updateService = true + input.DesiredCount = aws.Int64(int64(d.Get("desired_count").(int))) + } + + if d.HasChange("deployment_maximum_percent") || d.HasChange("deployment_minimum_healthy_percent") { + updateService = true + input.DeploymentConfiguration = &ecs.DeploymentConfiguration{ + MaximumPercent: aws.Int64(int64(d.Get("deployment_maximum_percent").(int))), + MinimumHealthyPercent: aws.Int64(int64(d.Get("deployment_minimum_healthy_percent").(int))), + } + } } + if d.HasChange("health_check_grace_period_seconds") { - _, n := d.GetChange("health_check_grace_period_seconds") - input.HealthCheckGracePeriodSeconds = aws.Int64(int64(n.(int))) - } - if d.HasChange("task_definition") { - _, n := d.GetChange("task_definition") - input.TaskDefinition 
= aws.String(n.(string)) + updateService = true + input.HealthCheckGracePeriodSeconds = aws.Int64(int64(d.Get("health_check_grace_period_seconds").(int))) } - if d.HasChange("deployment_maximum_percent") || d.HasChange("deployment_minimum_healthy_percent") { - input.DeploymentConfiguration = &ecs.DeploymentConfiguration{ - MaximumPercent: aws.Int64(int64(d.Get("deployment_maximum_percent").(int))), - MinimumHealthyPercent: aws.Int64(int64(d.Get("deployment_minimum_healthy_percent").(int))), - } + if d.HasChange("task_definition") { + updateService = true + input.TaskDefinition = aws.String(d.Get("task_definition").(string)) } if d.HasChange("network_configuration") { + updateService = true input.NetworkConfiguration = expandEcsNetworkConfiguration(d.Get("network_configuration").([]interface{})) } - // Retry due to IAM eventual consistency - err := resource.Retry(2*time.Minute, func() *resource.RetryError { - out, err := conn.UpdateService(&input) + if updateService { + log.Printf("[DEBUG] Updating ECS Service (%s): %s", d.Id(), input) + // Retry due to IAM eventual consistency + err := resource.Retry(2*time.Minute, func() *resource.RetryError { + out, err := conn.UpdateService(&input) + if err != nil { + if isAWSErr(err, ecs.ErrCodeInvalidParameterException, "Please verify that the ECS service role being passed has the proper permissions.") { + return resource.RetryableError(err) + } + if isAWSErr(err, ecs.ErrCodeInvalidParameterException, "does not have an associated load balancer") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + + log.Printf("[DEBUG] Updated ECS service %s", out.Service) + return nil + }) if err != nil { - if isAWSErr(err, ecs.ErrCodeInvalidParameterException, "Please verify that the ECS service role being passed has the proper permissions.") { - return resource.RetryableError(err) + return fmt.Errorf("error updating ECS Service (%s): %s", d.Id(), err) + } + } + + if d.HasChange("tags") { + oldTagsRaw, newTagsRaw := d.GetChange("tags") + oldTagsMap := oldTagsRaw.(map[string]interface{}) + newTagsMap := newTagsRaw.(map[string]interface{}) + createTags, removeTags := diffTagsECS(tagsFromMapECS(oldTagsMap), tagsFromMapECS(newTagsMap)) + + if len(removeTags) > 0 { + removeTagKeys := make([]*string, len(removeTags)) + for i, removeTag := range removeTags { + removeTagKeys[i] = removeTag.Key + } + + input := &ecs.UntagResourceInput{ + ResourceArn: aws.String(d.Id()), + TagKeys: removeTagKeys, + } + + log.Printf("[DEBUG] Untagging ECS Cluster: %s", input) + if _, err := conn.UntagResource(input); err != nil { + return fmt.Errorf("error untagging ECS Cluster (%s): %s", d.Id(), err) } - return resource.NonRetryableError(err) } - log.Printf("[DEBUG] Updated ECS service %s", out.Service) - return nil - }) - if err != nil { - return err + if len(createTags) > 0 { + input := &ecs.TagResourceInput{ + ResourceArn: aws.String(d.Id()), + Tags: createTags, + } + + log.Printf("[DEBUG] Tagging ECS Cluster: %s", input) + if _, err := conn.TagResource(input); err != nil { + return fmt.Errorf("error tagging ECS Cluster (%s): %s", d.Id(), err) + } + } } return resourceAwsEcsServiceRead(d, meta) @@ -585,7 +885,6 @@ func resourceAwsEcsServiceDelete(d *schema.ResourceData, meta interface{}) error if err != nil { if isAWSErr(err, ecs.ErrCodeServiceNotFoundException, "") { log.Printf("[DEBUG] Removing ECS Service from state, %q is already gone", d.Id()) - d.SetId("") return nil } return err @@ -593,7 +892,6 @@ func resourceAwsEcsServiceDelete(d 
*schema.ResourceData, meta interface{}) error if len(resp.Services) == 0 { log.Printf("[DEBUG] Removing ECS Service from state, %q is already gone", d.Id()) - d.SetId("") return nil } @@ -604,7 +902,7 @@ func resourceAwsEcsServiceDelete(d *schema.ResourceData, meta interface{}) error } // Drain the ECS service - if *resp.Services[0].Status != "DRAINING" { + if *resp.Services[0].Status != "DRAINING" && aws.StringValue(resp.Services[0].SchedulingStrategy) != ecs.SchedulingStrategyDaemon { log.Printf("[DEBUG] Draining ECS service %s", d.Id()) _, err = conn.UpdateService(&ecs.UpdateServiceInput{ Service: aws.String(d.Id()), @@ -692,23 +990,3 @@ func buildFamilyAndRevisionFromARN(arn string) string { func getNameFromARN(arn string) string { return strings.Split(arn, "/")[1] } - -func parseTaskDefinition(taskDefinition string) (string, string, error) { - matches := taskDefinitionRE.FindAllStringSubmatch(taskDefinition, 2) - - if len(matches) == 0 || len(matches[0]) != 3 { - return "", "", fmt.Errorf( - "Invalid task definition format, family:rev or ARN expected (%#v)", - taskDefinition) - } - - return matches[0][1], matches[0][2], nil -} - -func validateAwsEcsServiceHealthCheckGracePeriodSeconds(v interface{}, k string) (ws []string, errors []error) { - value := v.(int) - if (value < 0) || (value > 1800) { - errors = append(errors, fmt.Errorf("%q must be between 0 and 1800", k)) - } - return -} diff --git a/aws/resource_aws_ecs_service_test.go b/aws/resource_aws_ecs_service_test.go index 5d12bcc5088..6c448838819 100644 --- a/aws/resource_aws_ecs_service_test.go +++ b/aws/resource_aws_ecs_service_test.go @@ -13,78 +13,6 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestParseTaskDefinition(t *testing.T) { - cases := map[string]map[string]interface{}{ - "invalid": { - "family": "", - "revision": "", - "isValid": false, - }, - "invalidWithColon:": { - "family": "", - "revision": "", - "isValid": false, - }, - "1234": { - "family": "", - "revision": "", - "isValid": false, - }, - "invalid:aaa": { - "family": "", - "revision": "", - "isValid": false, - }, - "invalid=family:1": { - "family": "", - "revision": "", - "isValid": false, - }, - "invalid:name:1": { - "family": "", - "revision": "", - "isValid": false, - }, - "valid:1": { - "family": "valid", - "revision": "1", - "isValid": true, - }, - "abc12-def:54": { - "family": "abc12-def", - "revision": "54", - "isValid": true, - }, - "lorem_ip-sum:123": { - "family": "lorem_ip-sum", - "revision": "123", - "isValid": true, - }, - "lorem-ipsum:1": { - "family": "lorem-ipsum", - "revision": "1", - "isValid": true, - }, - } - - for input, expectedOutput := range cases { - family, revision, err := parseTaskDefinition(input) - isValid := expectedOutput["isValid"].(bool) - if !isValid && err == nil { - t.Fatalf("Task definition %s should fail", input) - } - - expectedFamily := expectedOutput["family"].(string) - if family != expectedFamily { - t.Fatalf("Unexpected family (%#v) for task definition %s\n%#v", family, input, err) - } - expectedRevision := expectedOutput["revision"].(string) - if revision != expectedRevision { - t.Fatalf("Unexpected revision (%#v) for task definition %s\n%#v", revision, input, err) - } - } -} - func TestAccAWSEcsService_withARN(t *testing.T) { var service ecs.Service rString := acctest.RandString(8) @@ -93,7 +21,7 @@ func TestAccAWSEcsService_withARN(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-arn-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-arn-%s", rString) - resource.Test(t, 
resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -102,6 +30,8 @@ func TestAccAWSEcsService_withARN(t *testing.T) { Config: testAccAWSEcsService(clusterName, tdName, svcName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo", &service), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "service_registries.#", "0"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "scheduling_strategy", "REPLICA"), ), }, @@ -109,6 +39,8 @@ func TestAccAWSEcsService_withARN(t *testing.T) { Config: testAccAWSEcsServiceModified(clusterName, tdName, svcName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo", &service), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "service_registries.#", "0"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "scheduling_strategy", "REPLICA"), ), }, }, @@ -126,7 +58,7 @@ func TestAccAWSEcsService_basicImport(t *testing.T) { resourceName := "aws_ecs_service.jenkins" importInput := fmt.Sprintf("%s/%s", clusterName, svcName) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -150,7 +82,32 @@ func TestAccAWSEcsService_basicImport(t *testing.T) { ImportStateId: fmt.Sprintf("%s/nonexistent", clusterName), ImportState: true, ImportStateVerify: false, - ExpectError: regexp.MustCompile(`No ECS service found`), + ExpectError: regexp.MustCompile(`Please verify the ID is correct`), + }, + }, + }) +} + +func TestAccAWSEcsService_disappears(t *testing.T) { + var service ecs.Service + rString := acctest.RandString(8) + + clusterName := fmt.Sprintf("tf-acc-cluster-svc-w-arn-%s", rString) + tdName := fmt.Sprintf("tf-acc-td-svc-w-arn-%s", rString) + svcName := fmt.Sprintf("tf-acc-svc-w-arn-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsService(clusterName, tdName, svcName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo", &service), + testAccCheckAWSEcsServiceDisappears(&service), + ), + ExpectNonEmptyPlan: true, }, }, }) @@ -164,7 +121,7 @@ func TestAccAWSEcsService_withUnnormalizedPlacementStrategy(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-ups-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-ups-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -187,7 +144,7 @@ func TestAccAWSEcsService_withFamilyAndRevision(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-far-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-far-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -224,7 +181,7 @@ func TestAccAWSEcsService_withRenamedCluster(t *testing.T) { modifiedRegexp := regexp.MustCompile( "^arn:aws:ecs:[^:]+:[0-9]+:cluster/" + uClusterName + "$") - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -259,30 +216,29 @@ func TestAccAWSEcsService_healthCheckGracePeriodSeconds(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-hcgps-%s", rString) roleName := fmt.Sprintf("tf-acc-role-svc-w-hcgps-%s", rString) policyName := fmt.Sprintf("tf-acc-policy-svc-w-hcgps-%s", rString) - tgName := fmt.Sprintf("tf-acc-tg-svc-w-hcgps-%s", rString) lbName := fmt.Sprintf("tf-acc-lb-svc-w-hcgps-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-hcgps-%s", rString) resourceName := "aws_ecs_service.with_alb" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, Steps: []resource.TestStep{ { Config: testAccAWSEcsService_healthCheckGracePeriodSeconds(vpcNameTag, clusterName, tdName, - roleName, policyName, tgName, lbName, svcName, -1), - ExpectError: regexp.MustCompile(`must be between 0 and 1800`), + roleName, policyName, lbName, svcName, -1), + ExpectError: regexp.MustCompile(`expected health_check_grace_period_seconds to be in the range`), }, { Config: testAccAWSEcsService_healthCheckGracePeriodSeconds(vpcNameTag, clusterName, tdName, - roleName, policyName, tgName, lbName, svcName, 1801), - ExpectError: regexp.MustCompile(`must be between 0 and 1800`), + roleName, policyName, lbName, svcName, 7201), + ExpectError: regexp.MustCompile(`expected health_check_grace_period_seconds to be in the range`), }, { Config: testAccAWSEcsService_healthCheckGracePeriodSeconds(vpcNameTag, clusterName, tdName, - roleName, policyName, tgName, lbName, svcName, 300), + roleName, policyName, lbName, svcName, 300), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists(resourceName, &service), resource.TestCheckResourceAttr(resourceName, "health_check_grace_period_seconds", "300"), @@ -290,7 +246,7 @@ func TestAccAWSEcsService_healthCheckGracePeriodSeconds(t *testing.T) { }, { Config: testAccAWSEcsService_healthCheckGracePeriodSeconds(vpcNameTag, clusterName, tdName, - roleName, policyName, tgName, lbName, svcName, 600), + roleName, policyName, lbName, svcName, 600), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists(resourceName, &service), resource.TestCheckResourceAttr(resourceName, "health_check_grace_period_seconds", "600"), @@ -310,7 +266,7 @@ func TestAccAWSEcsService_withIamRole(t *testing.T) { policyName := fmt.Sprintf("tf-acc-policy-svc-w-iam-role-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-iam-role-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -333,7 +289,7 @@ func TestAccAWSEcsService_withDeploymentValues(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-dv-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-dv-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -352,6 +308,29 @@ func TestAccAWSEcsService_withDeploymentValues(t *testing.T) { }) } +// Regression for https://github.com/terraform-providers/terraform-provider-aws/issues/6315 +func TestAccAWSEcsService_withDeploymentMinimumZeroMaximumOneHundred(t *testing.T) 
{ + var service ecs.Service + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_ecs_service.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsServiceConfigDeploymentPercents(rName, 0, 100), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists(resourceName, &service), + resource.TestCheckResourceAttr(resourceName, "deployment_maximum_percent", "100"), + resource.TestCheckResourceAttr(resourceName, "deployment_minimum_healthy_percent", "0"), + ), + }, + }, + }) +} + // Regression for https://github.com/hashicorp/terraform/issues/3444 func TestAccAWSEcsService_withLbChanges(t *testing.T) { var service ecs.Service @@ -363,7 +342,7 @@ func TestAccAWSEcsService_withLbChanges(t *testing.T) { policyName := fmt.Sprintf("tf-acc-policy-svc-w-lbc-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-lbc-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -393,7 +372,7 @@ func TestAccAWSEcsService_withEcsClusterName(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-cluster-name-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-cluster-name-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -418,17 +397,16 @@ func TestAccAWSEcsService_withAlb(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-alb-%s", rString) roleName := fmt.Sprintf("tf-acc-role-svc-w-alb-%s", rString) policyName := fmt.Sprintf("tf-acc-policy-svc-w-alb-%s", rString) - tgName := fmt.Sprintf("tf-acc-tg-svc-w-alb-%s", rString) lbName := fmt.Sprintf("tf-acc-lb-svc-w-alb-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-alb-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEcsServiceWithAlb(clusterName, tdName, roleName, policyName, tgName, lbName, svcName), + Config: testAccAWSEcsServiceWithAlb(clusterName, tdName, roleName, policyName, lbName, svcName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.with_alb", &service), resource.TestCheckResourceAttr("aws_ecs_service.with_alb", "load_balancer.#", "1"), @@ -446,7 +424,7 @@ func TestAccAWSEcsService_withPlacementStrategy(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-ps-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-ps-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -455,14 +433,36 @@ func TestAccAWSEcsService_withPlacementStrategy(t *testing.T) { Config: testAccAWSEcsService(clusterName, tdName, svcName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo", &service), - resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_strategy.#", "0"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.#", "0"), 
), }, { Config: testAccAWSEcsServiceWithPlacementStrategy(clusterName, tdName, svcName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo", &service), - resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_strategy.#", "1"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.#", "1"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.0.type", "binpack"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.0.field", "memory"), + ), + }, + { + Config: testAccAWSEcsServiceWithRandomPlacementStrategy(clusterName, tdName, svcName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo", &service), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.#", "1"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.0.type", "random"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.0.field", ""), + ), + }, + { + Config: testAccAWSEcsServiceWithMultiPlacementStrategy(clusterName, tdName, svcName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo", &service), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.#", "2"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.0.type", "binpack"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.0.field", "memory"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.1.type", "spread"), + resource.TestCheckResourceAttr("aws_ecs_service.mongo", "ordered_placement_strategy.1.field", "instanceId"), ), }, }, @@ -477,7 +477,7 @@ func TestAccAWSEcsService_withPlacementConstraints(t *testing.T) { tdName := fmt.Sprintf("tf-acc-td-svc-w-pc-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-pc-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -501,7 +501,7 @@ func TestAccAWSEcsService_withPlacementConstraints_emptyExpression(t *testing.T) tdName := fmt.Sprintf("tf-acc-td-svc-w-pc-ee-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-pc-ee-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -584,7 +584,7 @@ func TestAccAWSEcsService_withLaunchTypeEC2AndNetworkConfiguration(t *testing.T) tdName := fmt.Sprintf("tf-acc-td-svc-w-nc-%s", rString) svcName := fmt.Sprintf("tf-acc-svc-w-nc-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, @@ -611,6 +611,173 @@ func TestAccAWSEcsService_withLaunchTypeEC2AndNetworkConfiguration(t *testing.T) }) } +func TestAccAWSEcsService_withDaemonSchedulingStrategy(t *testing.T) { + var service ecs.Service + rString := acctest.RandString(8) + + clusterName := fmt.Sprintf("tf-acc-cluster-svc-w-ss-daemon-%s", rString) + tdName := fmt.Sprintf("tf-acc-td-svc-w-ss-daemon-%s", rString) + svcName := fmt.Sprintf("tf-acc-svc-w-ss-daemon-%s", rString) + + 
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsServiceWithDaemonSchedulingStrategy(clusterName, tdName, svcName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.ghost", &service), + resource.TestCheckResourceAttr("aws_ecs_service.ghost", "scheduling_strategy", "DAEMON"), + ), + }, + }, + }) +} + +func TestAccAWSEcsService_withDaemonSchedulingStrategySetDeploymentMinimum(t *testing.T) { + var service ecs.Service + rString := acctest.RandString(8) + + clusterName := fmt.Sprintf("tf-acc-cluster-svc-w-ss-daemon-%s", rString) + tdName := fmt.Sprintf("tf-acc-td-svc-w-ss-daemon-%s", rString) + svcName := fmt.Sprintf("tf-acc-svc-w-ss-daemon-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsServiceWithDaemonSchedulingStrategySetDeploymentMinimum(clusterName, tdName, svcName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.ghost", &service), + resource.TestCheckResourceAttr("aws_ecs_service.ghost", "scheduling_strategy", "DAEMON"), + ), + }, + }, + }) +} + +func TestAccAWSEcsService_withReplicaSchedulingStrategy(t *testing.T) { + var service ecs.Service + rString := acctest.RandString(8) + + clusterName := fmt.Sprintf("tf-acc-cluster-svc-w-ss-replica-%s", rString) + tdName := fmt.Sprintf("tf-acc-td-svc-w-ss-replica-%s", rString) + svcName := fmt.Sprintf("tf-acc-svc-w-ss-replica-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsServiceWithReplicaSchedulingStrategy(clusterName, tdName, svcName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.ghost", &service), + resource.TestCheckResourceAttr("aws_ecs_service.ghost", "scheduling_strategy", "REPLICA"), + ), + }, + }, + }) +} + +func TestAccAWSEcsService_withServiceRegistries(t *testing.T) { + var service ecs.Service + rString := acctest.RandString(8) + + clusterName := fmt.Sprintf("tf-acc-cluster-svc-w-ups-%s", rString) + tdName := fmt.Sprintf("tf-acc-td-svc-w-ups-%s", rString) + svcName := fmt.Sprintf("tf-acc-svc-w-ups-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsService_withServiceRegistries(rString, clusterName, tdName, svcName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.test", &service), + resource.TestCheckResourceAttr("aws_ecs_service.test", "service_registries.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSEcsService_withServiceRegistries_container(t *testing.T) { + var service ecs.Service + rString := acctest.RandString(8) + + clusterName := fmt.Sprintf("tf-acc-cluster-svc-w-ups-%s", rString) + tdName := fmt.Sprintf("tf-acc-td-svc-w-ups-%s", rString) + svcName := fmt.Sprintf("tf-acc-svc-w-ups-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: 
testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsService_withServiceRegistries_container(rString, clusterName, tdName, svcName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists("aws_ecs_service.test", &service), + resource.TestCheckResourceAttr("aws_ecs_service.test", "service_registries.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSEcsService_Tags(t *testing.T) { + var service ecs.Service + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_ecs_service.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsServiceConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists(resourceName, &service), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportStateId: fmt.Sprintf("%s/%s", rName, rName), + ImportState: true, + ImportStateVerify: true, + // Resource currently defaults to importing task_definition as family:revision + ImportStateVerifyIgnore: []string{"task_definition"}, + }, + { + Config: testAccAWSEcsServiceConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists(resourceName, &service), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSEcsServiceConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsServiceExists(resourceName, &service), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + func testAccCheckAWSEcsServiceDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ecsconn @@ -692,6 +859,47 @@ func testAccCheckAWSEcsServiceExists(name string, service *ecs.Service) resource } } +func testAccCheckAWSEcsServiceDisappears(service *ecs.Service) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ecsconn + + input := &ecs.DeleteServiceInput{ + Cluster: service.ClusterArn, + Service: service.ServiceName, + Force: aws.Bool(true), + } + + _, err := conn.DeleteService(input) + + if err != nil { + return err + } + + // Wait until it's deleted + wait := resource.StateChangeConf{ + Pending: []string{"ACTIVE", "DRAINING"}, + Target: []string{"INACTIVE"}, + Timeout: 10 * time.Minute, + MinTimeout: 1 * time.Second, + Refresh: func() (interface{}, string, error) { + resp, err := conn.DescribeServices(&ecs.DescribeServicesInput{ + Cluster: service.ClusterArn, + Services: []*string{service.ServiceName}, + }) + if err != nil { + return resp, "FAILED", err + } + + return resp, aws.StringValue(resp.Services[0].Status), nil + }, + } + + _, err = wait.WaitForState() + + return err + } +} + func testAccAWSEcsService(clusterName, tdName, svcName string) string { return fmt.Sprintf(` resource "aws_ecs_cluster" "default" { @@ -722,7 +930,71 @@ resource "aws_ecs_service" "mongo" { `, clusterName, tdName, svcName) } 
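// The update paths in this patch (ECS cluster, service and task definition) all
// follow the same tagging flow: diff the old and new `tags` maps, call
// UntagResource for keys that went away, and TagResource for keys that were
// added or changed. The diffTagsECS/tagsFromMapECS helpers they rely on are not
// part of the hunks shown here, so the snippet below is only a simplified,
// self-contained stand-in for that logic. The package name, the function name
// and the assumption that TagResource overwrites the value of an existing key
// are illustrative and not taken from this pull request.
package tagsketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

// diffECSTagsSketch returns the tags to create (or re-tag with a new value)
// and the tag keys to remove when moving from oldTags to newTags.
func diffECSTagsSketch(oldTags, newTags map[string]interface{}) (create []*ecs.Tag, remove []*string) {
	for k, v := range newTags {
		// New key, or same key with a different value: (re)tag it.
		if old, ok := oldTags[k]; !ok || old.(string) != v.(string) {
			create = append(create, &ecs.Tag{
				Key:   aws.String(k),
				Value: aws.String(v.(string)),
			})
		}
	}

	for k := range oldTags {
		// Key no longer present in the configuration: schedule it for removal.
		if _, ok := newTags[k]; !ok {
			remove = append(remove, aws.String(k))
		}
	}

	// The two results feed straight into the API calls used above:
	// ecs.UntagResourceInput{ResourceArn: arn, TagKeys: remove} and
	// ecs.TagResourceInput{ResourceArn: arn, Tags: create}.
	return create, remove
}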
-func testAccAWSEcsServiceModified(clusterName, tdName, svcName string) string { +func testAccAWSEcsServiceModified(clusterName, tdName, svcName string) string { + return fmt.Sprintf(` +resource "aws_ecs_cluster" "default" { + name = "%s" +} + +resource "aws_ecs_task_definition" "mongo" { + family = "%s" + container_definitions = < 0 { + removeTagKeys := make([]*string, len(removeTags)) + for i, removeTag := range removeTags { + removeTagKeys[i] = removeTag.Key + } + + input := &ecs.UntagResourceInput{ + ResourceArn: aws.String(d.Get("arn").(string)), + TagKeys: removeTagKeys, + } + + log.Printf("[DEBUG] Untagging ECS Cluster: %s", input) + if _, err := conn.UntagResource(input); err != nil { + return fmt.Errorf("error untagging ECS Cluster (%s): %s", d.Get("arn").(string), err) + } + } + + if len(createTags) > 0 { + input := &ecs.TagResourceInput{ + ResourceArn: aws.String(d.Get("arn").(string)), + Tags: createTags, + } + + log.Printf("[DEBUG] Tagging ECS Cluster: %s", input) + if _, err := conn.TagResource(input); err != nil { + return fmt.Errorf("error tagging ECS Cluster (%s): %s", d.Get("arn").(string), err) + } + } + } + + return nil +} + func resourceAwsEcsTaskDefinitionDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ecsconn @@ -327,6 +454,5 @@ func resourceAwsEcsTaskDefinitionVolumeHash(v interface{}) int { m := v.(map[string]interface{}) buf.WriteString(fmt.Sprintf("%s-", m["name"].(string))) buf.WriteString(fmt.Sprintf("%s-", m["host_path"].(string))) - return hashcode.String(buf.String()) } diff --git a/aws/resource_aws_ecs_task_definition_test.go b/aws/resource_aws_ecs_task_definition_test.go index 5784876efe2..6a0872dc20b 100644 --- a/aws/resource_aws_ecs_task_definition_test.go +++ b/aws/resource_aws_ecs_task_definition_test.go @@ -17,7 +17,7 @@ func TestAccAWSEcsTaskDefinition_basic(t *testing.T) { rString := acctest.RandString(8) tdName := fmt.Sprintf("tf_acc_td_basic_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -45,7 +45,7 @@ func TestAccAWSEcsTaskDefinition_withScratchVolume(t *testing.T) { rString := acctest.RandString(8) tdName := fmt.Sprintf("tf_acc_td_with_scratch_volume_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -60,6 +60,108 @@ func TestAccAWSEcsTaskDefinition_withScratchVolume(t *testing.T) { }) } +func TestAccAWSEcsTaskDefinition_withDockerVolume(t *testing.T) { + var def ecs.TaskDefinition + + rString := acctest.RandString(8) + tdName := fmt.Sprintf("tf_acc_td_with_docker_volume_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsTaskDefinitionWithDockerVolumes(tdName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.sleep", &def), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.#", "1"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.#", "1"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", 
"volume.584193650.docker_volume_configuration.0.scope", "shared"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.autoprovision", "true"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.driver", "local"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.driver_opts.%", "2"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.driver_opts.uid", "1000"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.driver_opts.device", "tmpfs"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.labels.%", "2"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.labels.stack", "april"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.labels.environment", "test"), + ), + }, + }, + }) +} + +func TestAccAWSEcsTaskDefinition_withDockerVolumeMinimalConfig(t *testing.T) { + var def ecs.TaskDefinition + + rString := acctest.RandString(8) + tdName := fmt.Sprintf("tf_acc_td_with_docker_volume_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsTaskDefinitionWithDockerVolumesMinimalConfig(tdName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.sleep", &def), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.#", "1"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.#", "1"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.scope", "task"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.driver", "local"), + ), + }, + }, + }) +} + +func TestAccAWSEcsTaskDefinition_withTaskScopedDockerVolume(t *testing.T) { + var def ecs.TaskDefinition + + rString := acctest.RandString(8) + tdName := fmt.Sprintf("tf_acc_td_with_docker_volume_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsTaskDefinitionWithTaskScopedDockerVolume(tdName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.sleep", &def), + testAccCheckAWSTaskDefinitionDockerVolumeConfigurationAutoprovisionNil(&def), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.#", "1"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.#", "1"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.scope", "task"), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "volume.584193650.docker_volume_configuration.0.driver", "local"), + ), + }, + }, + 
}) +} + // Regression for https://github.com/hashicorp/terraform/issues/2694 func TestAccAWSEcsTaskDefinition_withEcsService(t *testing.T) { var def ecs.TaskDefinition @@ -70,7 +172,7 @@ func TestAccAWSEcsTaskDefinition_withEcsService(t *testing.T) { svcName := fmt.Sprintf("tf_acc_td_with_ecs_service_%s", rString) tdName := fmt.Sprintf("tf_acc_td_with_ecs_service_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -101,7 +203,7 @@ func TestAccAWSEcsTaskDefinition_withTaskRoleArn(t *testing.T) { policyName := fmt.Sprintf("tf-acc-policy-ecs-td-with-task-role-arn-%s", rString) tdName := fmt.Sprintf("tf_acc_td_with_task_role_arn_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -124,7 +226,7 @@ func TestAccAWSEcsTaskDefinition_withNetworkMode(t *testing.T) { policyName := fmt.Sprintf("tf_acc_ecs_td_with_network_mode_%s", rString) tdName := fmt.Sprintf("tf_acc_td_with_network_mode_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -141,13 +243,63 @@ func TestAccAWSEcsTaskDefinition_withNetworkMode(t *testing.T) { }) } +func TestAccAWSEcsTaskDefinition_withIPCMode(t *testing.T) { + var def ecs.TaskDefinition + + rString := acctest.RandString(8) + roleName := fmt.Sprintf("tf_acc_ecs_td_with_ipc_mode_%s", rString) + policyName := fmt.Sprintf("tf_acc_ecs_td_with_ipc_mode_%s", rString) + tdName := fmt.Sprintf("tf_acc_td_with_ipc_mode_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsTaskDefinitionWithIpcMode(roleName, policyName, tdName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.sleep", &def), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "ipc_mode", "host"), + ), + }, + }, + }) +} + +func TestAccAWSEcsTaskDefinition_withPidMode(t *testing.T) { + var def ecs.TaskDefinition + + rString := acctest.RandString(8) + roleName := fmt.Sprintf("tf_acc_ecs_td_with_pid_mode_%s", rString) + policyName := fmt.Sprintf("tf_acc_ecs_td_with_pid_mode_%s", rString) + tdName := fmt.Sprintf("tf_acc_td_with_pid_mode_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsTaskDefinitionWithPidMode(roleName, policyName, tdName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.sleep", &def), + resource.TestCheckResourceAttr( + "aws_ecs_task_definition.sleep", "pid_mode", "host"), + ), + }, + }, + }) +} + func TestAccAWSEcsTaskDefinition_constraint(t *testing.T) { var def ecs.TaskDefinition rString := acctest.RandString(8) tdName := fmt.Sprintf("tf_acc_td_constraint_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: 
func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -171,7 +323,7 @@ func TestAccAWSEcsTaskDefinition_changeVolumesForcesNewResource(t *testing.T) { rString := acctest.RandString(8) tdName := fmt.Sprintf("tf_acc_td_change_vol_forces_new_resource_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -200,7 +352,7 @@ func TestAccAWSEcsTaskDefinition_arrays(t *testing.T) { rString := acctest.RandString(8) tdName := fmt.Sprintf("tf_acc_td_arrays_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -221,13 +373,13 @@ func TestAccAWSEcsTaskDefinition_Fargate(t *testing.T) { rString := acctest.RandString(8) tdName := fmt.Sprintf("tf_acc_td_fargate_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEcsTaskDefinitionFargate(tdName), + Config: testAccAWSEcsTaskDefinitionFargate(tdName, `[{"protocol": "tcp", "containerPort": 8000}]`), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.fargate", &conf), resource.TestCheckResourceAttr("aws_ecs_task_definition.fargate", "requires_compatibilities.#", "1"), @@ -235,6 +387,11 @@ func TestAccAWSEcsTaskDefinition_Fargate(t *testing.T) { resource.TestCheckResourceAttr("aws_ecs_task_definition.fargate", "memory", "512"), ), }, + { + ExpectNonEmptyPlan: false, + PlanOnly: true, + Config: testAccAWSEcsTaskDefinitionFargate(tdName, `[{"protocol": "tcp", "containerPort": 8000, "hostPort": 8000}]`), + }, }, }) } @@ -247,7 +404,7 @@ func TestAccAWSEcsTaskDefinition_ExecutionRole(t *testing.T) { policyName := fmt.Sprintf("tf-acc-policy-ecs-td-execution-role-%s", rString) tdName := fmt.Sprintf("tf_acc_td_execution_role_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, @@ -262,6 +419,79 @@ func TestAccAWSEcsTaskDefinition_ExecutionRole(t *testing.T) { }) } +// Regression for https://github.com/hashicorp/terraform/issues/3582#issuecomment-286409786 +func TestAccAWSEcsTaskDefinition_Inactive(t *testing.T) { + var def ecs.TaskDefinition + + rString := acctest.RandString(8) + tdName := fmt.Sprintf("tf_acc_td_basic_%s", rString) + + markTaskDefinitionInactive := func() { + conn := testAccProvider.Meta().(*AWSClient).ecsconn + conn.DeregisterTaskDefinition(&ecs.DeregisterTaskDefinitionInput{ + TaskDefinition: aws.String(fmt.Sprintf("%s:1", tdName)), + }) + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsTaskDefinition(tdName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists("aws_ecs_task_definition.jenkins", &def), + ), + }, + { + Config: testAccAWSEcsTaskDefinition(tdName), + PreConfig: 
markTaskDefinitionInactive, + Check: resource.TestCheckResourceAttr("aws_ecs_task_definition.jenkins", "revision", "2"), // should get re-created + }, + }, + }) +} + +func TestAccAWSEcsTaskDefinition_Tags(t *testing.T) { + var taskDefinition ecs.TaskDefinition + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_ecs_task_definition.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEcsTaskDefinitionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEcsTaskDefinitionConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists(resourceName, &taskDefinition), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + Config: testAccAWSEcsTaskDefinitionConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists(resourceName, &taskDefinition), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSEcsTaskDefinitionConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEcsTaskDefinitionExists(resourceName, &taskDefinition), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + func testAccCheckEcsTaskDefinitionRecreated(t *testing.T, before, after *ecs.TaskDefinition) resource.TestCheckFunc { return func(s *terraform.State) error { @@ -280,31 +510,6 @@ func testAccCheckAWSTaskDefinitionConstraintsAttrs(def *ecs.TaskDefinition) reso return nil } } -func TestValidateAwsEcsTaskDefinitionNetworkMode(t *testing.T) { - validNames := []string{ - "bridge", - "host", - "none", - } - for _, v := range validNames { - _, errors := validateAwsEcsTaskDefinitionNetworkMode(v, "network_mode") - if len(errors) != 0 { - t.Fatalf("%q should be a valid AWS ECS Task Definition Network Mode: %q", v, errors) - } - } - - invalidNames := []string{ - "bridged", - "-docker", - } - for _, v := range invalidNames { - _, errors := validateAwsEcsTaskDefinitionNetworkMode(v, "network_mode") - if len(errors) == 0 { - t.Fatalf("%q should be an invalid AWS ECS Task Definition Network Mode", v) - } - } -} - func TestValidateAwsEcsTaskDefinitionContainerDefinitions(t *testing.T) { validDefinitions := []string{ testValidateAwsEcsTaskDefinitionValidContainerDefinitions, @@ -375,6 +580,22 @@ func testAccCheckAWSEcsTaskDefinitionExists(name string, def *ecs.TaskDefinition } } +func testAccCheckAWSTaskDefinitionDockerVolumeConfigurationAutoprovisionNil(def *ecs.TaskDefinition) resource.TestCheckFunc { + return func(s *terraform.State) error { + if len(def.Volumes) != 1 { + return fmt.Errorf("Expected (1) volumes, got (%d)", len(def.Volumes)) + } + config := def.Volumes[0].DockerVolumeConfiguration + if config == nil { + return fmt.Errorf("Expected docker_volume_configuration, got nil") + } + if config.Autoprovision != nil { + return fmt.Errorf("Expected autoprovision to be nil, got %t", *config.Autoprovision) + } + return nil + } +} + func testAccAWSEcsTaskDefinition_constraint(tdName string) string { return fmt.Sprintf(` resource 
"aws_ecs_task_definition" "jenkins" { @@ -640,7 +861,7 @@ TASK_DEFINITION `, tdName) } -func testAccAWSEcsTaskDefinitionFargate(tdName string) string { +func testAccAWSEcsTaskDefinitionFargate(tdName, portMappings string) string { return fmt.Sprintf(` resource "aws_ecs_task_definition" "fargate" { family = "%s" @@ -656,12 +877,13 @@ resource "aws_ecs_task_definition" "fargate" { "cpu": 10, "command": ["sleep","360"], "memory": 10, - "essential": true + "essential": true, + "portMappings": %s } ] TASK_DEFINITION } -`, tdName) +`, tdName, portMappings) } func testAccAWSEcsTaskDefinitionExecutionRole(roleName, policyName, tdName string) string { @@ -757,6 +979,97 @@ TASK_DEFINITION `, tdName) } +func testAccAWSEcsTaskDefinitionWithDockerVolumes(tdName string) string { + return fmt.Sprintf(` +resource "aws_ecs_task_definition" "sleep" { + family = "%s" + container_definitions = < 0 { + for _, igw := range resp.EgressOnlyInternetGateways { + if aws.StringValue(igw.EgressOnlyInternetGatewayId) == d.Id() { + found = true + break + } + } + } + if d.IsNewResource() && !found { + return resource.RetryableError(fmt.Errorf("Egress Only Internet Gateway (%s) not found.", d.Id())) } + return nil + }) + + if err != nil { + return fmt.Errorf("Error describing egress internet gateway: %s", err) } if !found { diff --git a/aws/resource_aws_egress_only_internet_gateway_test.go b/aws/resource_aws_egress_only_internet_gateway_test.go index d81ff6b5742..f1c10bbdb49 100644 --- a/aws/resource_aws_egress_only_internet_gateway_test.go +++ b/aws/resource_aws_egress_only_internet_gateway_test.go @@ -12,7 +12,7 @@ import ( func TestAccAWSEgressOnlyInternetGateway_basic(t *testing.T) { var igw ec2.EgressOnlyInternetGateway - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEgressOnlyInternetGatewayDestroy, diff --git a/aws/resource_aws_eip.go b/aws/resource_aws_eip.go index 94a951fb196..c3f8a1d580c 100644 --- a/aws/resource_aws_eip.go +++ b/aws/resource_aws_eip.go @@ -31,55 +31,62 @@ func resourceAwsEip() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "vpc": &schema.Schema{ + "vpc": { Type: schema.TypeBool, Optional: true, ForceNew: true, Computed: true, }, - "instance": &schema.Schema{ + "instance": { Type: schema.TypeString, Optional: true, Computed: true, }, - "network_interface": &schema.Schema{ + "network_interface": { Type: schema.TypeString, Optional: true, Computed: true, }, - "allocation_id": &schema.Schema{ + "allocation_id": { Type: schema.TypeString, Computed: true, }, - "association_id": &schema.Schema{ + "association_id": { Type: schema.TypeString, Computed: true, }, - "domain": &schema.Schema{ + "domain": { Type: schema.TypeString, Computed: true, }, - "public_ip": &schema.Schema{ + "public_ip": { Type: schema.TypeString, Computed: true, }, - "private_ip": &schema.Schema{ + "private_ip": { Type: schema.TypeString, Computed: true, }, - "associate_with_private_ip": &schema.Schema{ + "associate_with_private_ip": { Type: schema.TypeString, Optional: true, }, + "public_ipv4_pool": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + "tags": tagsSchema(), }, } @@ -98,6 +105,10 @@ func resourceAwsEipCreate(d *schema.ResourceData, meta interface{}) error { Domain: aws.String(domainOpt), } + if v, ok := d.GetOk("public_ipv4_pool"); ok { + allocOpts.PublicIpv4Pool = aws.String(v.(string)) + } + log.Printf("[DEBUG] EIP create 
configuration: %#v", allocOpts) allocResp, err := ec2conn.AllocateAddress(allocOpts) if err != nil { @@ -181,16 +192,22 @@ func resourceAwsEipRead(d *schema.ResourceData, meta interface{}) error { } } - // Verify AWS returned our EIP - if len(describeAddresses.Addresses) != 1 || - domain == "vpc" && *describeAddresses.Addresses[0].AllocationId != id || - *describeAddresses.Addresses[0].PublicIp != id { - if err != nil { - return fmt.Errorf("Unable to find EIP: %#v", describeAddresses.Addresses) + var address *ec2.Address + + // In the case that AWS returns more EIPs than we intend it to, we loop + // over the returned addresses to see if it's in the list of results + for _, addr := range describeAddresses.Addresses { + if (domain == "vpc" && aws.StringValue(addr.AllocationId) == id) || aws.StringValue(addr.PublicIp) == id { + address = addr + break } } - address := describeAddresses.Addresses[0] + if address == nil { + log.Printf("[WARN] EIP %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } d.Set("association_id", address.AssociationId) if address.InstanceId != nil { @@ -205,6 +222,7 @@ func resourceAwsEipRead(d *schema.ResourceData, meta interface{}) error { } d.Set("private_ip", address.PrivateIpAddress) d.Set("public_ip", address.PublicIp) + d.Set("public_ipv4_pool", address.PublicIpv4Pool) // On import (domain never set, which it must've been if we created), // set the 'vpc' attribute depending on if we're in a VPC. diff --git a/aws/resource_aws_eip_association.go b/aws/resource_aws_eip_association.go index e5a051631b0..c7485e1c656 100644 --- a/aws/resource_aws_eip_association.go +++ b/aws/resource_aws_eip_association.go @@ -18,43 +18,46 @@ func resourceAwsEipAssociation() *schema.Resource { Create: resourceAwsEipAssociationCreate, Read: resourceAwsEipAssociationRead, Delete: resourceAwsEipAssociationDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ - "allocation_id": &schema.Schema{ + "allocation_id": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, - "allow_reassociation": &schema.Schema{ + "allow_reassociation": { Type: schema.TypeBool, Optional: true, ForceNew: true, }, - "instance_id": &schema.Schema{ + "instance_id": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, - "network_interface_id": &schema.Schema{ + "network_interface_id": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, - "private_ip_address": &schema.Schema{ + "private_ip_address": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, }, - "public_ip": &schema.Schema{ + "public_ip": { Type: schema.TypeString, Optional: true, Computed: true, @@ -212,11 +215,11 @@ func describeAddressesById(id string, supportedPlatforms []string) (*ec2.Describ return &ec2.DescribeAddressesInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("public-ip"), Values: []*string{aws.String(id)}, }, - &ec2.Filter{ + { Name: aws.String("domain"), Values: []*string{aws.String("standard")}, }, @@ -226,7 +229,7 @@ func describeAddressesById(id string, supportedPlatforms []string) (*ec2.Describ return &ec2.DescribeAddressesInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("association-id"), Values: []*string{aws.String(id)}, }, diff --git a/aws/resource_aws_eip_association_test.go b/aws/resource_aws_eip_association_test.go index 71c873f1c2e..536bc412f11 100644 --- a/aws/resource_aws_eip_association_test.go +++ 
b/aws/resource_aws_eip_association_test.go @@ -12,10 +12,50 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSEIPAssociation_importInstance(t *testing.T) { + resourceName := "aws_eip_association.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEIPAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEIPAssociationConfig_instance, + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSEIPAssociation_importNetworkInterface(t *testing.T) { + resourceName := "aws_eip_association.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEIPAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEIPAssociationConfig_networkInterface, + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSEIPAssociation_basic(t *testing.T) { var a ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPAssociationDestroy, @@ -48,7 +88,7 @@ func TestAccAWSEIPAssociation_ec2Classic(t *testing.T) { os.Setenv("AWS_DEFAULT_REGION", "us-east-1") defer os.Setenv("AWS_DEFAULT_REGION", oldvar) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPAssociationDestroy, @@ -71,7 +111,7 @@ func TestAccAWSEIPAssociation_spotInstance(t *testing.T) { var a ec2.Address rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPAssociationDestroy, @@ -92,7 +132,7 @@ func TestAccAWSEIPAssociation_spotInstance(t *testing.T) { func TestAccAWSEIPAssociation_disappears(t *testing.T) { var a ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPAssociationDestroy, @@ -191,7 +231,7 @@ func testAccCheckAWSEIPAssociationDestroy(s *terraform.State) error { request := &ec2.DescribeAddressesInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("association-id"), Values: []*string{aws.String(rs.Primary.ID)}, }, @@ -345,3 +385,44 @@ resource "aws_eip_association" "test" { } `, testAccAWSSpotInstanceRequestConfig(rInt)) } + +const testAccAWSEIPAssociationConfig_instance = ` +resource "aws_instance" "test" { + # us-west-2 + ami = "ami-4fccb37f" + instance_type = "m1.small" +} + +resource "aws_eip" "test" {} + +resource "aws_eip_association" "test" { + allocation_id = "${aws_eip.test.id}" + instance_id = "${aws_instance.test.id}" +} +` + +const testAccAWSEIPAssociationConfig_networkInterface = ` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "test" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.1.1.0/24" +} + +resource "aws_internet_gateway" "test" { + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_network_interface" "test" { + subnet_id = "${aws_subnet.test.id}" +} + +resource "aws_eip" 
"test" {} + +resource "aws_eip_association" "test" { + allocation_id = "${aws_eip.test.id}" + network_interface_id = "${aws_network_interface.test.id}" +} +` diff --git a/aws/resource_aws_eip_test.go b/aws/resource_aws_eip_test.go index d767fa533a8..76c4229f263 100644 --- a/aws/resource_aws_eip_test.go +++ b/aws/resource_aws_eip_test.go @@ -21,7 +21,7 @@ func TestAccAWSEIP_importEc2Classic(t *testing.T) { resourceName := "aws_eip.bar" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, @@ -41,7 +41,7 @@ func TestAccAWSEIP_importEc2Classic(t *testing.T) { func TestAccAWSEIP_importVpc(t *testing.T) { resourceName := "aws_eip.bar" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, @@ -61,13 +61,13 @@ func TestAccAWSEIP_importVpc(t *testing.T) { func TestAccAWSEIP_basic(t *testing.T) { var conf ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_eip.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEIPConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &conf), @@ -81,13 +81,13 @@ func TestAccAWSEIP_basic(t *testing.T) { func TestAccAWSEIP_instance(t *testing.T) { var conf ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_eip.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEIPInstanceConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &conf), @@ -95,7 +95,7 @@ func TestAccAWSEIP_instance(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSEIPInstanceConfig2, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &conf), @@ -109,13 +109,13 @@ func TestAccAWSEIP_instance(t *testing.T) { func TestAccAWSEIP_network_interface(t *testing.T) { var conf ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_eip.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEIPNetworkInterfaceConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &conf), @@ -130,13 +130,13 @@ func TestAccAWSEIP_network_interface(t *testing.T) { func TestAccAWSEIP_twoEIPsOneNetworkInterface(t *testing.T) { var one, two ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_eip.one", Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEIPMultiNetworkInterfaceConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.one", &one), @@ -156,13 +156,13 @@ func TestAccAWSEIP_twoEIPsOneNetworkInterface(t *testing.T) { func TestAccAWSEIP_associated_user_private_ip(t *testing.T) { var one 
ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_eip.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEIPInstanceConfig_associated, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &one), @@ -171,7 +171,7 @@ func TestAccAWSEIP_associated_user_private_ip(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSEIPInstanceConfig_associated_switch, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &one), @@ -186,13 +186,17 @@ func TestAccAWSEIP_associated_user_private_ip(t *testing.T) { // Regression test for https://github.com/hashicorp/terraform/issues/3429 (now // https://github.com/terraform-providers/terraform-provider-aws/issues/42) func TestAccAWSEIP_classic_disassociate(t *testing.T) { - resource.Test(t, resource.TestCase{ + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSEIP_classic_disassociate("ami-408c7f28"), + { + Config: testAccAWSEIP_classic_disassociate("instance-store"), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet( "aws_eip.ip.0", @@ -202,8 +206,8 @@ func TestAccAWSEIP_classic_disassociate(t *testing.T) { "instance"), ), }, - resource.TestStep{ - Config: testAccAWSEIP_classic_disassociate("ami-8c6ea9e4"), + { + Config: testAccAWSEIP_classic_disassociate("ebs"), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet( "aws_eip.ip.0", @@ -220,12 +224,12 @@ func TestAccAWSEIP_classic_disassociate(t *testing.T) { func TestAccAWSEIP_disappears(t *testing.T) { var conf ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEIPConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &conf), @@ -240,13 +244,13 @@ func TestAccAWSEIP_disappears(t *testing.T) { func TestAccAWSEIPAssociate_not_associated(t *testing.T) { var conf ec2.Address - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_eip.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEIPAssociate_not_associated, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &conf), @@ -254,7 +258,7 @@ func TestAccAWSEIPAssociate_not_associated(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSEIPAssociate_associated, Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists("aws_eip.bar", &conf), @@ -272,13 +276,13 @@ func TestAccAWSEIP_tags(t *testing.T) { rName1 := fmt.Sprintf("%s-%d", t.Name(), acctest.RandInt()) rName2 := fmt.Sprintf("%s-%d", t.Name(), acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) 
}, IDRefreshName: "aws_eip.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSEIPDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSEIPConfig_tags(rName1, t.Name()), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists(resourceName, &conf), @@ -288,7 +292,7 @@ func TestAccAWSEIP_tags(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "tags.TestName", t.Name()), ), }, - resource.TestStep{ + { Config: testAccAWSEIPConfig_tags(rName2, t.Name()), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEIPExists(resourceName, &conf), @@ -302,6 +306,56 @@ func TestAccAWSEIP_tags(t *testing.T) { }) } +func TestAccAWSEIP_PublicIpv4Pool_default(t *testing.T) { + var conf ec2.Address + resourceName := "aws_eip.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: resourceName, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEIPDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEIPConfig_PublicIpv4Pool_default, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEIPExists(resourceName, &conf), + testAccCheckAWSEIPAttributes(&conf), + resource.TestCheckResourceAttr(resourceName, "public_ipv4_pool", "amazon"), + ), + }, + }, + }) +} + +func TestAccAWSEIP_PublicIpv4Pool_custom(t *testing.T) { + if os.Getenv("AWS_EC2_EIP_PUBLIC_IPV4_POOL") == "" { + t.Skip("Environment variable AWS_EC2_EIP_PUBLIC_IPV4_POOL is not set") + } + + var conf ec2.Address + resourceName := "aws_eip.bar" + + poolName := fmt.Sprintf("%s", os.Getenv("AWS_EC2_EIP_PUBLIC_IPV4_POOL")) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: resourceName, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEIPDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEIPConfig_PublicIpv4Pool_custom(poolName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEIPExists(resourceName, &conf), + testAccCheckAWSEIPAttributes(&conf), + resource.TestCheckResourceAttr(resourceName, "public_ipv4_pool", poolName), + ), + }, + }, + }) +} + func testAccCheckAWSEIPDisappears(v *ec2.Address) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ec2conn @@ -450,12 +504,44 @@ resource "aws_eip" "bar" { `, rName, testName) } +const testAccAWSEIPConfig_PublicIpv4Pool_default = ` +resource "aws_eip" "bar" { + vpc = true +} +` + +func testAccAWSEIPConfig_PublicIpv4Pool_custom(poolName string) string { + return fmt.Sprintf(` +resource "aws_eip" "bar" { + vpc = true + public_ipv4_pool = "%s" +} +`, poolName) +} + const testAccAWSEIPInstanceEc2Classic = ` provider "aws" { region = "us-east-1" } + +data "aws_ami" "amzn-ami-minimal-pv" { + most_recent = true + filter { + name = "owner-alias" + values = ["amazon"] + } + filter { + name = "name" + values = ["amzn-ami-minimal-pv-*"] + } + filter { + name = "root-device-type" + values = ["instance-store"] + } +} + resource "aws_instance" "foo" { - ami = "ami-5469ae3c" + ami = "${data.aws_ami.amzn-ami-minimal-pv.id}" instance_type = "m1.small" tags { Name = "testAccAWSEIPInstanceEc2Classic" @@ -468,9 +554,24 @@ resource "aws_eip" "bar" { ` const testAccAWSEIPInstanceConfig = ` +data "aws_ami" "amzn-ami-minimal-pv" { + most_recent = true + filter { + name = "owner-alias" + values = ["amazon"] + } + filter { + name = "name" + values = ["amzn-ami-minimal-pv-*"] + } + filter { + name = "root-device-type" + values = ["instance-store"] + } +} + 
resource "aws_instance" "foo" { - # us-west-2 - ami = "ami-4fccb37f" + ami = "${data.aws_ami.amzn-ami-minimal-pv.id}" instance_type = "m1.small" } @@ -480,9 +581,24 @@ resource "aws_eip" "bar" { ` const testAccAWSEIPInstanceConfig2 = ` +data "aws_ami" "amzn-ami-minimal-pv" { + most_recent = true + filter { + name = "owner-alias" + values = ["amazon"] + } + filter { + name = "name" + values = ["amzn-ami-minimal-pv-*"] + } + filter { + name = "root-device-type" + values = ["instance-store"] + } +} + resource "aws_instance" "bar" { - # us-west-2 - ami = "ami-4fccb37f" + ami = "${data.aws_ami.amzn-ami-minimal-pv.id}" instance_type = "m1.small" } @@ -492,6 +608,22 @@ resource "aws_eip" "bar" { ` const testAccAWSEIPInstanceConfig_associated = ` +data "aws_ami" "amzn-ami-minimal-hvm" { + most_recent = true + filter { + name = "owner-alias" + values = ["amazon"] + } + filter { + name = "name" + values = ["amzn-ami-minimal-hvm-*"] + } + filter { + name = "root-device-type" + values = ["ebs"] + } +} + resource "aws_vpc" "default" { cidr_block = "10.0.0.0/16" enable_dns_hostnames = true @@ -522,8 +654,7 @@ resource "aws_subnet" "tf_test_subnet" { } resource "aws_instance" "foo" { - # us-west-2 - ami = "ami-5189a661" + ami = "${data.aws_ami.amzn-ami-minimal-hvm.id}" instance_type = "t2.micro" private_ip = "10.0.0.12" @@ -535,9 +666,7 @@ resource "aws_instance" "foo" { } resource "aws_instance" "bar" { - # us-west-2 - - ami = "ami-5189a661" + ami = "${data.aws_ami.amzn-ami-minimal-hvm.id}" instance_type = "t2.micro" @@ -557,6 +686,22 @@ resource "aws_eip" "bar" { } ` const testAccAWSEIPInstanceConfig_associated_switch = ` +data "aws_ami" "amzn-ami-minimal-hvm" { + most_recent = true + filter { + name = "owner-alias" + values = ["amazon"] + } + filter { + name = "name" + values = ["amzn-ami-minimal-hvm-*"] + } + filter { + name = "root-device-type" + values = ["ebs"] + } +} + resource "aws_vpc" "default" { cidr_block = "10.0.0.0/16" enable_dns_hostnames = true @@ -587,8 +732,7 @@ resource "aws_subnet" "tf_test_subnet" { } resource "aws_instance" "foo" { - # us-west-2 - ami = "ami-5189a661" + ami = "${data.aws_ami.amzn-ami-minimal-hvm.id}" instance_type = "t2.micro" private_ip = "10.0.0.12" @@ -600,9 +744,7 @@ resource "aws_instance" "foo" { } resource "aws_instance" "bar" { - # us-west-2 - - ami = "ami-5189a661" + ami = "${data.aws_ami.amzn-ami-minimal-hvm.id}" instance_type = "t2.micro" @@ -622,18 +764,6 @@ resource "aws_eip" "bar" { } ` -const testAccAWSEIPInstanceConfig_associated_update = ` -resource "aws_instance" "bar" { - # us-west-2 - ami = "ami-4fccb37f" - instance_type = "m1.small" -} - -resource "aws_eip" "bar" { - instance = "${aws_instance.bar.id}" -} -` - const testAccAWSEIPNetworkInterfaceConfig = ` resource "aws_vpc" "bar" { cidr_block = "10.0.0.0/24" @@ -651,7 +781,7 @@ resource "aws_subnet" "bar" { availability_zone = "us-west-2a" cidr_block = "10.0.0.0/24" tags { - Name = "tf-acc-eip-network-interface" + Name = "tf-acc-eip-network-interface" } } @@ -685,7 +815,7 @@ resource "aws_subnet" "bar" { availability_zone = "us-west-2a" cidr_block = "10.0.0.0/24" tags { - Name = "tf-acc-eip-multi-network-interface" + Name = "tf-acc-eip-multi-network-interface" } } @@ -710,7 +840,7 @@ resource "aws_eip" "two" { } ` -func testAccAWSEIP_classic_disassociate(ami string) string { +func testAccAWSEIP_classic_disassociate(rootDeviceType string) string { return fmt.Sprintf(` provider "aws" { region = "us-east-1" @@ -720,6 +850,22 @@ variable "server_count" { default = 2 } +data "aws_ami" 
"amzn-ami-minimal-pv" { + most_recent = true + filter { + name = "owner-alias" + values = ["amazon"] + } + filter { + name = "name" + values = ["amzn-ami-minimal-pv-*"] + } + filter { + name = "root-device-type" + values = [%q] + } +} + resource "aws_eip" "ip" { count = "${var.server_count}" instance = "${element(aws_instance.example.*.id, count.index)}" @@ -729,7 +875,7 @@ resource "aws_eip" "ip" { resource "aws_instance" "example" { count = "${var.server_count}" - ami = "%s" + ami = "${data.aws_ami.amzn-ami-minimal-pv.id}" instance_type = "m1.small" associate_public_ip_address = true subnet_id = "${aws_subnet.us-east-1b-public.id}" @@ -777,13 +923,28 @@ resource "aws_route_table" "us-east-1-public" { resource "aws_route_table_association" "us-east-1b-public" { subnet_id = "${aws_subnet.us-east-1b-public.id}" route_table_id = "${aws_route_table.us-east-1-public.id}" -}`, ami) +}`, rootDeviceType) } const testAccAWSEIPAssociate_not_associated = ` +data "aws_ami" "amzn-ami-minimal-pv" { + most_recent = true + filter { + name = "owner-alias" + values = ["amazon"] + } + filter { + name = "name" + values = ["amzn-ami-minimal-pv-*"] + } + filter { + name = "root-device-type" + values = ["instance-store"] + } +} + resource "aws_instance" "foo" { - # us-west-2 - ami = "ami-4fccb37f" + ami = "${data.aws_ami.amzn-ami-minimal-pv.id}" instance_type = "m1.small" } @@ -792,9 +953,24 @@ resource "aws_eip" "bar" { ` const testAccAWSEIPAssociate_associated = ` +data "aws_ami" "amzn-ami-minimal-pv" { + most_recent = true + filter { + name = "owner-alias" + values = ["amazon"] + } + filter { + name = "name" + values = ["amzn-ami-minimal-pv-*"] + } + filter { + name = "root-device-type" + values = ["instance-store"] + } +} + resource "aws_instance" "foo" { - # us-west-2 - ami = "ami-4fccb37f" + ami = "${data.aws_ami.amzn-ami-minimal-pv.id}" instance_type = "m1.small" } diff --git a/aws/resource_aws_eks_cluster.go b/aws/resource_aws_eks_cluster.go new file mode 100644 index 00000000000..f2d1118893d --- /dev/null +++ b/aws/resource_aws_eks_cluster.go @@ -0,0 +1,331 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/eks" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsEksCluster() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsEksClusterCreate, + Read: resourceAwsEksClusterRead, + Delete: resourceAwsEksClusterDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(15 * time.Minute), + Delete: schema.DefaultTimeout(15 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "certificate_authority": { + Type: schema.TypeList, + MaxItems: 1, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "created_at": { + Type: schema.TypeString, + Computed: true, + }, + "endpoint": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.NoZeroValues, + }, + "platform_version": { + Type: schema.TypeString, + Computed: true, + }, + "role_arn": { + Type: schema.TypeString, + Required: true, + 
ForceNew: true, + ValidateFunc: validateArn, + }, + "version": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "vpc_config": { + Type: schema.TypeList, + MinItems: 1, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "security_group_ids": { + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "subnet_ids": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + MinItems: 1, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + } +} + +func resourceAwsEksClusterCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).eksconn + name := d.Get("name").(string) + + input := &eks.CreateClusterInput{ + Name: aws.String(name), + RoleArn: aws.String(d.Get("role_arn").(string)), + ResourcesVpcConfig: expandEksVpcConfigRequest(d.Get("vpc_config").([]interface{})), + } + + if v, ok := d.GetOk("version"); ok && v.(string) != "" { + input.Version = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Creating EKS Cluster: %s", input) + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.CreateCluster(input) + if err != nil { + // InvalidParameterException: roleArn, arn:aws:iam::123456789012:role/XXX, does not exist + if isAWSErr(err, eks.ErrCodeInvalidParameterException, "does not exist") { + return resource.RetryableError(err) + } + if isAWSErr(err, eks.ErrCodeInvalidParameterException, "Role could not be assumed because the trusted entity is not correct") { + return resource.RetryableError(err) + } + // InvalidParameterException: The provided role doesn't have the Amazon EKS Managed Policies associated with it. 
Please ensure the following policies [arn:aws:iam::aws:policy/AmazonEKSClusterPolicy, arn:aws:iam::aws:policy/AmazonEKSServicePolicy] are attached + if isAWSErr(err, eks.ErrCodeInvalidParameterException, "The provided role doesn't have the Amazon EKS Managed Policies associated with it") { + return resource.RetryableError(err) + } + // InvalidParameterException: IAM role's policy must include the `ec2:DescribeSubnets` action + if isAWSErr(err, eks.ErrCodeInvalidParameterException, "IAM role's policy must include") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + + if err != nil { + return fmt.Errorf("error creating EKS Cluster (%s): %s", name, err) + } + + d.SetId(name) + + stateConf := resource.StateChangeConf{ + Pending: []string{eks.ClusterStatusCreating}, + Target: []string{eks.ClusterStatusActive}, + Timeout: d.Timeout(schema.TimeoutCreate), + Refresh: refreshEksClusterStatus(conn, name), + } + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + return resourceAwsEksClusterRead(d, meta) +} + +func resourceAwsEksClusterRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).eksconn + + input := &eks.DescribeClusterInput{ + Name: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading EKS Cluster: %s", input) + output, err := conn.DescribeCluster(input) + if err != nil { + if isAWSErr(err, eks.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] EKS Cluster (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading EKS Cluster (%s): %s", d.Id(), err) + } + + cluster := output.Cluster + if cluster == nil { + log.Printf("[WARN] EKS Cluster (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("arn", cluster.Arn) + + if err := d.Set("certificate_authority", flattenEksCertificate(cluster.CertificateAuthority)); err != nil { + return fmt.Errorf("error setting certificate_authority: %s", err) + } + + d.Set("created_at", aws.TimeValue(cluster.CreatedAt).String()) + d.Set("endpoint", cluster.Endpoint) + d.Set("name", cluster.Name) + d.Set("platform_version", cluster.PlatformVersion) + d.Set("role_arn", cluster.RoleArn) + d.Set("version", cluster.Version) + + if err := d.Set("vpc_config", flattenEksVpcConfigResponse(cluster.ResourcesVpcConfig)); err != nil { + return fmt.Errorf("error setting vpc_config: %s", err) + } + + return nil +} + +func resourceAwsEksClusterDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).eksconn + + log.Printf("[DEBUG] Deleting EKS Cluster: %s", d.Id()) + err := deleteEksCluster(conn, d.Id()) + if err != nil { + return fmt.Errorf("error deleting EKS Cluster (%s): %s", d.Id(), err) + } + + err = waitForDeleteEksCluster(conn, d.Id(), d.Timeout(schema.TimeoutDelete)) + if err != nil { + return fmt.Errorf("error waiting for EKS Cluster (%s) deletion: %s", d.Id(), err) + } + + return nil +} + +func deleteEksCluster(conn *eks.EKS, clusterName string) error { + input := &eks.DeleteClusterInput{ + Name: aws.String(clusterName), + } + + _, err := conn.DeleteCluster(input) + if err != nil { + if isAWSErr(err, eks.ErrCodeResourceNotFoundException, "") { + return nil + } + // Sometimes the EKS API returns the ResourceNotFound error in this form: + // ClientException: No cluster found for name: tf-acc-test-0o1f8 + if isAWSErr(err, eks.ErrCodeClientException, "No cluster found for name:") { + return nil + } + return err + } + + return nil +} + +func 
expandEksVpcConfigRequest(l []interface{}) *eks.VpcConfigRequest { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + return &eks.VpcConfigRequest{ + SecurityGroupIds: expandStringSet(m["security_group_ids"].(*schema.Set)), + SubnetIds: expandStringSet(m["subnet_ids"].(*schema.Set)), + } +} + +func flattenEksCertificate(certificate *eks.Certificate) []map[string]interface{} { + if certificate == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "data": aws.StringValue(certificate.Data), + } + + return []map[string]interface{}{m} +} + +func flattenEksVpcConfigResponse(vpcConfig *eks.VpcConfigResponse) []map[string]interface{} { + if vpcConfig == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "security_group_ids": schema.NewSet(schema.HashString, flattenStringList(vpcConfig.SecurityGroupIds)), + "subnet_ids": schema.NewSet(schema.HashString, flattenStringList(vpcConfig.SubnetIds)), + "vpc_id": aws.StringValue(vpcConfig.VpcId), + } + + return []map[string]interface{}{m} +} + +func refreshEksClusterStatus(conn *eks.EKS, clusterName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := conn.DescribeCluster(&eks.DescribeClusterInput{ + Name: aws.String(clusterName), + }) + if err != nil { + return 42, "", err + } + cluster := output.Cluster + if cluster == nil { + return cluster, "", fmt.Errorf("EKS Cluster (%s) missing", clusterName) + } + return cluster, aws.StringValue(cluster.Status), nil + } +} + +func waitForDeleteEksCluster(conn *eks.EKS, clusterName string, timeout time.Duration) error { + stateConf := resource.StateChangeConf{ + Pending: []string{ + eks.ClusterStatusActive, + eks.ClusterStatusDeleting, + }, + Target: []string{""}, + Timeout: timeout, + Refresh: refreshEksClusterStatus(conn, clusterName), + } + cluster, err := stateConf.WaitForState() + if err != nil { + if isAWSErr(err, eks.ErrCodeResourceNotFoundException, "") { + return nil + } + // Sometimes the EKS API returns the ResourceNotFound error in this form: + // ClientException: No cluster found for name: tf-acc-test-0o1f8 + if isAWSErr(err, eks.ErrCodeClientException, "No cluster found for name:") { + return nil + } + } + if cluster == nil { + return nil + } + return err +} diff --git a/aws/resource_aws_eks_cluster_test.go b/aws/resource_aws_eks_cluster_test.go new file mode 100644 index 00000000000..dd5312d0c93 --- /dev/null +++ b/aws/resource_aws_eks_cluster_test.go @@ -0,0 +1,354 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "strings" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/eks" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_eks_cluster", &resource.Sweeper{ + Name: "aws_eks_cluster", + F: testSweepEksClusters, + }) +} + +func testSweepEksClusters(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).eksconn + + input := &eks.ListClustersInput{} + for { + out, err := conn.ListClusters(input) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EKS Clusters sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving EKS Clusters: %s", err) + } + + if out == 
nil || len(out.Clusters) == 0 { + log.Printf("[INFO] No EKS Clusters to sweep") + return nil + } + + for _, cluster := range out.Clusters { + name := aws.StringValue(cluster) + + if !strings.HasPrefix(name, "tf-acc-test-") { + log.Printf("[INFO] Skipping EKS Cluster: %s", name) + continue + } + + log.Printf("[INFO] Deleting EKS Cluster: %s", name) + err := deleteEksCluster(conn, name) + if err != nil { + log.Printf("[ERROR] Failed to delete EKS Cluster %s: %s", name, err) + continue + } + err = waitForDeleteEksCluster(conn, name, 15*time.Minute) + if err != nil { + log.Printf("[ERROR] Failed to wait for EKS Cluster %s deletion: %s", name, err) + } + } + + if out.NextToken == nil { + break + } + + input.NextToken = out.NextToken + } + + return nil +} + +func TestAccAWSEksCluster_basic(t *testing.T) { + var cluster eks.Cluster + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_eks_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEksClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEksClusterConfig_Required(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEksClusterExists(resourceName, &cluster), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:eks:[^:]+:[^:]+:cluster/%s$", rName))), + resource.TestCheckResourceAttr(resourceName, "certificate_authority.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "certificate_authority.0.data"), + resource.TestMatchResourceAttr(resourceName, "endpoint", regexp.MustCompile(`^https://`)), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestMatchResourceAttr(resourceName, "platform_version", regexp.MustCompile(`^eks\.\d+$`)), + resource.TestMatchResourceAttr(resourceName, "role_arn", regexp.MustCompile(fmt.Sprintf("%s$", rName))), + resource.TestMatchResourceAttr(resourceName, "version", regexp.MustCompile(`^\d+\.\d+$`)), + resource.TestCheckResourceAttr(resourceName, "vpc_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "vpc_config.0.security_group_ids.#", "0"), + resource.TestCheckResourceAttr(resourceName, "vpc_config.0.subnet_ids.#", "2"), + resource.TestMatchResourceAttr(resourceName, "vpc_config.0.vpc_id", regexp.MustCompile(`^vpc-.+`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSEksCluster_Version(t *testing.T) { + var cluster eks.Cluster + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_eks_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEksClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEksClusterConfig_Version(rName, "1.10"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEksClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "version", "1.10"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSEksCluster_VpcConfig_SecurityGroupIds(t *testing.T) { + var cluster eks.Cluster + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_eks_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: 
testAccProviders, + CheckDestroy: testAccCheckAWSEksClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEksClusterConfig_VpcConfig_SecurityGroupIds(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEksClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "vpc_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "vpc_config.0.security_group_ids.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSEksClusterExists(resourceName string, cluster *eks.Cluster) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No EKS Cluster ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).eksconn + output, err := conn.DescribeCluster(&eks.DescribeClusterInput{ + Name: aws.String(rs.Primary.ID), + }) + if err != nil { + return err + } + + if output == nil || output.Cluster == nil { + return fmt.Errorf("EKS Cluster (%s) not found", rs.Primary.ID) + } + + if aws.StringValue(output.Cluster.Name) != rs.Primary.ID { + return fmt.Errorf("EKS Cluster (%s) not found", rs.Primary.ID) + } + + *cluster = *output.Cluster + + return nil + } +} + +func testAccCheckAWSEksClusterDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_eks_cluster" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).eksconn + + // Handle eventual consistency + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + output, err := conn.DescribeCluster(&eks.DescribeClusterInput{ + Name: aws.String(rs.Primary.ID), + }) + + if err != nil { + if isAWSErr(err, eks.ErrCodeResourceNotFoundException, "") { + return nil + } + return resource.NonRetryableError(err) + } + + if output != nil && output.Cluster != nil && aws.StringValue(output.Cluster.Name) == rs.Primary.ID { + return resource.RetryableError(fmt.Errorf("EKS Cluster %s still exists", rs.Primary.ID)) + } + + return nil + }) + + return err + } + + return nil +} + +func testAccAWSEksClusterConfig_Base(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_iam_role" "test" { + name = "%s" + + assume_role_policy = < 1`) + }, + func(diff *schema.ResourceDiff, v interface{}) error { + // Engine memcached does not currently support vertical scaling + // InvalidParameterCombination: Scaling is not supported for engine memcached + // https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Scaling.Memcached.html#Scaling.Memcached.Vertically + if diff.Id() == "" || !diff.HasChange("node_type") { + return nil + } + if v, ok := diff.GetOk("engine"); !ok || v.(string) == "redis" { + return nil + } + return diff.ForceNew("node_type") + }, + ), } } -func resourceAwsElasticacheCluster() *schema.Resource { - resourceSchema := resourceAwsElastiCacheCommonSchema() - - resourceSchema["cluster_id"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, - StateFunc: func(val interface{}) string { - // Elasticache normalizes cluster ids to lowercase, - // so we have to do this too or else we can end up - // with non-converging diffs. 
- return strings.ToLower(val.(string)) - }, - ValidateFunc: validateElastiCacheClusterId, - } +func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).elasticacheconn - resourceSchema["num_cache_nodes"] = &schema.Schema{ - Type: schema.TypeInt, - Required: true, - } + req := &elasticache.CreateCacheClusterInput{} - resourceSchema["az_mode"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - } + if v, ok := d.GetOk("replication_group_id"); ok { + req.ReplicationGroupId = aws.String(v.(string)) + } else { + securityNameSet := d.Get("security_group_names").(*schema.Set) + securityIdSet := d.Get("security_group_ids").(*schema.Set) + securityNames := expandStringList(securityNameSet.List()) + securityIds := expandStringList(securityIdSet.List()) + tags := tagsFromMapEC(d.Get("tags").(map[string]interface{})) - resourceSchema["availability_zone"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, + req.CacheSecurityGroupNames = securityNames + req.SecurityGroupIds = securityIds + req.Tags = tags } - resourceSchema["configuration_endpoint"] = &schema.Schema{ - Type: schema.TypeString, - Computed: true, + if v, ok := d.GetOk("cluster_id"); ok { + req.CacheClusterId = aws.String(v.(string)) } - resourceSchema["cluster_address"] = &schema.Schema{ - Type: schema.TypeString, - Computed: true, + if v, ok := d.GetOk("node_type"); ok { + req.CacheNodeType = aws.String(v.(string)) } - resourceSchema["replication_group_id"] = &schema.Schema{ - Type: schema.TypeString, - Computed: true, + if v, ok := d.GetOk("num_cache_nodes"); ok { + req.NumCacheNodes = aws.Int64(int64(v.(int))) } - resourceSchema["cache_nodes"] = &schema.Schema{ - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "id": { - Type: schema.TypeString, - Computed: true, - }, - "address": { - Type: schema.TypeString, - Computed: true, - }, - "port": { - Type: schema.TypeInt, - Computed: true, - }, - "availability_zone": { - Type: schema.TypeString, - Computed: true, - }, - }, - }, + if v, ok := d.GetOk("engine"); ok { + req.Engine = aws.String(v.(string)) } - return &schema.Resource{ - Create: resourceAwsElasticacheClusterCreate, - Read: resourceAwsElasticacheClusterRead, - Update: resourceAwsElasticacheClusterUpdate, - Delete: resourceAwsElasticacheClusterDelete, - Importer: &schema.ResourceImporter{ - State: schema.ImportStatePassthrough, - }, - - Schema: resourceSchema, + if v, ok := d.GetOk("engine_version"); ok { + req.EngineVersion = aws.String(v.(string)) } -} -func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{}) error { - conn := meta.(*AWSClient).elasticacheconn + if v, ok := d.GetOk("port"); ok { + req.Port = aws.Int64(int64(v.(int))) + } - clusterId := d.Get("cluster_id").(string) - nodeType := d.Get("node_type").(string) // e.g) cache.m1.small - numNodes := int64(d.Get("num_cache_nodes").(int)) // 2 - engine := d.Get("engine").(string) // memcached - engineVersion := d.Get("engine_version").(string) // 1.4.14 - port := int64(d.Get("port").(int)) // e.g) 11211 - subnetGroupName := d.Get("subnet_group_name").(string) - securityNameSet := d.Get("security_group_names").(*schema.Set) - securityIdSet := d.Get("security_group_ids").(*schema.Set) - - securityNames := expandStringList(securityNameSet.List()) - securityIds := expandStringList(securityIdSet.List()) - tags := 
tagsFromMapEC(d.Get("tags").(map[string]interface{})) - - req := &elasticache.CreateCacheClusterInput{ - CacheClusterId: aws.String(clusterId), - CacheNodeType: aws.String(nodeType), - NumCacheNodes: aws.Int64(numNodes), - Engine: aws.String(engine), - EngineVersion: aws.String(engineVersion), - Port: aws.Int64(port), - CacheSubnetGroupName: aws.String(subnetGroupName), - CacheSecurityGroupNames: securityNames, - SecurityGroupIds: securityIds, - Tags: tags, + if v, ok := d.GetOk("subnet_group_name"); ok { + req.CacheSubnetGroupName = aws.String(v.(string)) } // parameter groups are optional and can be defaulted by AWS @@ -283,41 +385,26 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{ req.PreferredAvailabilityZone = aws.String(v.(string)) } - preferred_azs := d.Get("availability_zones").(*schema.Set).List() - if len(preferred_azs) > 0 { - azs := expandStringList(preferred_azs) - req.PreferredAvailabilityZones = azs - } - - if v, ok := d.GetOk("replication_group_id"); ok { - req.ReplicationGroupId = aws.String(v.(string)) + if v, ok := d.GetOk("preferred_availability_zones"); ok && len(v.([]interface{})) > 0 { + req.PreferredAvailabilityZones = expandStringList(v.([]interface{})) + } else { + preferred_azs := d.Get("availability_zones").(*schema.Set).List() + if len(preferred_azs) > 0 { + azs := expandStringList(preferred_azs) + req.PreferredAvailabilityZones = azs + } } - resp, err := conn.CreateCacheCluster(req) + id, err := createElasticacheCacheCluster(conn, req) if err != nil { - return fmt.Errorf("Error creating Elasticache: %s", err) + return fmt.Errorf("error creating Elasticache Cache Cluster: %s", err) } - // Assign the cluster id as the resource ID - // Elasticache always retains the id in lower case, so we have to - // mimic that or else we won't be able to refresh a resource whose - // name contained uppercase characters. 
- d.SetId(strings.ToLower(*resp.CacheCluster.CacheClusterId)) - - pending := []string{"creating", "modifying", "restoring", "snapshotting"} - stateConf := &resource.StateChangeConf{ - Pending: pending, - Target: []string{"available"}, - Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "available", pending), - Timeout: 40 * time.Minute, - MinTimeout: 10 * time.Second, - Delay: 30 * time.Second, - } + d.SetId(id) - log.Printf("[DEBUG] Waiting for state to become available: %v", d.Id()) - _, sterr := stateConf.WaitForState() - if sterr != nil { - return fmt.Errorf("Error waiting for elasticache (%s) to be created: %s", d.Id(), sterr) + err = waitForCreateElasticacheCacheCluster(conn, d.Id(), 40*time.Minute) + if err != nil { + return fmt.Errorf("error waiting for Elasticache Cache Cluster (%s) to be created: %s", d.Id(), err) } return resourceAwsElasticacheClusterRead(d, meta) @@ -332,7 +419,7 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{}) res, err := conn.DescribeCacheClusters(req) if err != nil { - if eccErr, ok := err.(awserr.Error); ok && eccErr.Code() == "CacheClusterNotFound" { + if isAWSErr(err, elasticache.ErrCodeCacheClusterNotFoundFault, "") { log.Printf("[WARN] ElastiCache Cluster (%s) not found", d.Id()) d.SetId("") return nil @@ -352,6 +439,8 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{}) d.Set("port", c.ConfigurationEndpoint.Port) d.Set("configuration_endpoint", aws.String(fmt.Sprintf("%s:%d", *c.ConfigurationEndpoint.Address, *c.ConfigurationEndpoint.Port))) d.Set("cluster_address", aws.String(fmt.Sprintf("%s", *c.ConfigurationEndpoint.Address))) + } else if len(c.CacheNodes) > 0 { + d.Set("port", int(aws.Int64Value(c.CacheNodes[0].Endpoint.Port))) } if c.ReplicationGroupId != nil { @@ -384,24 +473,26 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{}) } // list tags for resource // set tags - arn, err := buildECARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - log.Printf("[DEBUG] Error building ARN for ElastiCache Cluster, not setting Tags for cluster %s", *c.CacheClusterId) - } else { - resp, err := conn.ListTagsForResource(&elasticache.ListTagsForResourceInput{ - ResourceName: aws.String(arn), - }) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "elasticache", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("cluster:%s", d.Id()), + }.String() + resp, err := conn.ListTagsForResource(&elasticache.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) - if err != nil { - log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) - } + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } - var et []*elasticache.Tag - if len(resp.TagList) > 0 { - et = resp.TagList - } - d.Set("tags", tagsToMapEC(et)) + var et []*elasticache.Tag + if len(resp.TagList) > 0 { + et = resp.TagList } + d.Set("tags", tagsToMapEC(et)) } return nil @@ -409,13 +500,16 @@ func resourceAwsElasticacheClusterRead(d *schema.ResourceData, meta interface{}) func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).elasticacheconn - arn, err := buildECARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - log.Printf("[DEBUG] Error building ARN for ElastiCache Cluster, not updating Tags for cluster %s", 
d.Id()) - } else { - if err := setTagsEC(conn, d, arn); err != nil { - return err - } + + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "elasticache", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("cluster:%s", d.Id()), + }.String() + if err := setTagsEC(conn, d, arn); err != nil { + return err } req := &elasticache.ModifyCacheClusterInput{ @@ -480,13 +574,26 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{ oraw, nraw := d.GetChange("num_cache_nodes") o := oraw.(int) n := nraw.(int) - if v, ok := d.GetOk("az_mode"); ok && v.(string) == "cross-az" && n == 1 { - return fmt.Errorf("[WARN] Error updateing Elasticache cluster (%s), error: Cross-AZ mode is not supported in a single cache node.", d.Id()) - } if n < o { log.Printf("[INFO] Cluster %s is marked for Decreasing cache nodes from %d to %d", d.Id(), o, n) - nodesToRemove := getCacheNodesToRemove(d, o, o-n) + nodesToRemove := getCacheNodesToRemove(o, o-n) req.CacheNodeIdsToRemove = nodesToRemove + } else { + log.Printf("[INFO] Cluster %s is marked for increasing cache nodes from %d to %d", d.Id(), o, n) + // SDK documentation for NewAvailabilityZones states: + // The list of Availability Zones where the new Memcached cache nodes are created. + // + // This parameter is only valid when NumCacheNodes in the request is greater + // than the sum of the number of active cache nodes and the number of cache + // nodes pending creation (which may be zero). The number of Availability Zones + // supplied in this list must match the cache nodes being added in this request. + if v, ok := d.GetOk("preferred_availability_zones"); ok && len(v.([]interface{})) > 0 { + // Here we check the list length to prevent a potential panic :) + if len(v.([]interface{})) != n { + return fmt.Errorf("length of preferred_availability_zones (%d) must match num_cache_nodes (%d)", len(v.([]interface{})), n) + } + req.NewAvailabilityZones = expandStringList(v.([]interface{})[o:]) + } } req.NumCacheNodes = aws.Int64(int64(d.Get("num_cache_nodes").(int))) @@ -498,7 +605,7 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{ log.Printf("[DEBUG] Modifying ElastiCache Cluster (%s), opts:\n%s", d.Id(), req) _, err := conn.ModifyCacheCluster(req) if err != nil { - return fmt.Errorf("[WARN] Error updating ElastiCache cluster (%s), error: %s", d.Id(), err) + return fmt.Errorf("Error updating ElastiCache cluster (%s), error: %s", d.Id(), err) } log.Printf("[DEBUG] Waiting for update: %s", d.Id()) @@ -521,7 +628,7 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{ return resourceAwsElasticacheClusterRead(d, meta) } -func getCacheNodesToRemove(d *schema.ResourceData, oldNumberOfNodes int, cacheNodesToRemove int) []*string { +func getCacheNodesToRemove(oldNumberOfNodes int, cacheNodesToRemove int) []*string { nodesIdsToRemove := []*string{} for i := oldNumberOfNodes; i > oldNumberOfNodes-cacheNodesToRemove && i > 0; i-- { s := fmt.Sprintf("%04d", i) @@ -565,42 +672,18 @@ func (b byCacheNodeId) Less(i, j int) bool { func resourceAwsElasticacheClusterDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).elasticacheconn - req := &elasticache.DeleteCacheClusterInput{ - CacheClusterId: aws.String(d.Id()), - } - err := resource.Retry(5*time.Minute, func() *resource.RetryError { - _, err := conn.DeleteCacheCluster(req) - if err != nil { - awsErr, ok := err.(awserr.Error) - // The cluster 
may be just snapshotting, so we retry until it's ready for deletion - if ok && awsErr.Code() == "InvalidCacheClusterState" { - return resource.RetryableError(err) - } - return resource.NonRetryableError(err) - } - return nil - }) + err := deleteElasticacheCacheCluster(conn, d.Id()) if err != nil { - return err - } - - log.Printf("[DEBUG] Waiting for deletion: %v", d.Id()) - stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "available", "deleting", "incompatible-parameters", "incompatible-network", "restore-failed", "snapshotting"}, - Target: []string{}, - Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "", []string{}), - Timeout: 40 * time.Minute, - MinTimeout: 10 * time.Second, - Delay: 30 * time.Second, + if isAWSErr(err, elasticache.ErrCodeCacheClusterNotFoundFault, "") { + return nil + } + return fmt.Errorf("error deleting Elasticache Cache Cluster (%s): %s", d.Id(), err) } - - _, sterr := stateConf.WaitForState() - if sterr != nil { - return fmt.Errorf("Error waiting for elasticache (%s) to delete: %s", d.Id(), sterr) + err = waitForDeleteElasticacheCacheCluster(conn, d.Id(), 40*time.Minute) + if err != nil { + return fmt.Errorf("error waiting for Elasticache Cache Cluster (%s) to be deleted: %s", d.Id(), err) } - d.SetId("") - return nil } @@ -611,9 +694,7 @@ func cacheClusterStateRefreshFunc(conn *elasticache.ElastiCache, clusterID, give ShowCacheNodeInfo: aws.Bool(true), }) if err != nil { - apierr := err.(awserr.Error) - log.Printf("[DEBUG] message: %v, code: %v", apierr.Message(), apierr.Code()) - if apierr.Message() == fmt.Sprintf("CacheCluster not found: %v", clusterID) { + if isAWSErr(err, elasticache.ErrCodeCacheClusterNotFoundFault, "") { log.Printf("[DEBUG] Detect deletion") return nil, "", nil } @@ -623,7 +704,7 @@ func cacheClusterStateRefreshFunc(conn *elasticache.ElastiCache, clusterID, give } if len(resp.CacheClusters) == 0 { - return nil, "", fmt.Errorf("[WARN] Error: no Cache Clusters found for id (%s)", clusterID) + return nil, "", fmt.Errorf("Error: no Cache Clusters found for id (%s)", clusterID) } var c *elasticache.CacheCluster @@ -635,7 +716,7 @@ func cacheClusterStateRefreshFunc(conn *elasticache.ElastiCache, clusterID, give } if c == nil { - return nil, "", fmt.Errorf("[WARN] Error: no matching Elastic Cache cluster for id (%s)", clusterID) + return nil, "", fmt.Errorf("Error: no matching Elastic Cache cluster for id (%s)", clusterID) } log.Printf("[DEBUG] ElastiCache Cluster (%s) status: %v", clusterID, *c.CacheClusterStatus) @@ -677,14 +758,71 @@ func cacheClusterStateRefreshFunc(conn *elasticache.ElastiCache, clusterID, give } } -func buildECARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct ElastiCache ARN because of missing AWS partition") +func createElasticacheCacheCluster(conn *elasticache.ElastiCache, input *elasticache.CreateCacheClusterInput) (string, error) { + log.Printf("[DEBUG] Creating Elasticache Cache Cluster: %s", input) + output, err := conn.CreateCacheCluster(input) + if err != nil { + return "", err + } + if output == nil || output.CacheCluster == nil { + return "", errors.New("missing cluster ID after creation") } - if accountid == "" { - return "", fmt.Errorf("Unable to construct ElastiCache ARN because of missing AWS Account ID") + // Elasticache always retains the id in lower case, so we have to + // mimic that or else we won't be able to refresh a resource whose + // name contained uppercase characters. 
+ return strings.ToLower(aws.StringValue(output.CacheCluster.CacheClusterId)), nil +} + +func waitForCreateElasticacheCacheCluster(conn *elasticache.ElastiCache, cacheClusterID string, timeout time.Duration) error { + pending := []string{"creating", "modifying", "restoring", "snapshotting"} + stateConf := &resource.StateChangeConf{ + Pending: pending, + Target: []string{"available"}, + Refresh: cacheClusterStateRefreshFunc(conn, cacheClusterID, "available", pending), + Timeout: timeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + log.Printf("[DEBUG] Waiting for Elasticache Cache Cluster (%s) to be created", cacheClusterID) + _, err := stateConf.WaitForState() + return err +} + +func deleteElasticacheCacheCluster(conn *elasticache.ElastiCache, cacheClusterID string) error { + input := &elasticache.DeleteCacheClusterInput{ + CacheClusterId: aws.String(cacheClusterID), + } + log.Printf("[DEBUG] Deleting Elasticache Cache Cluster: %s", input) + err := resource.Retry(5*time.Minute, func() *resource.RetryError { + _, err := conn.DeleteCacheCluster(input) + if err != nil { + // This will not be fixed by retrying + if isAWSErr(err, elasticache.ErrCodeInvalidCacheClusterStateFault, "serving as primary") { + return resource.NonRetryableError(err) + } + // The cluster may be just snapshotting, so we retry until it's ready for deletion + if isAWSErr(err, elasticache.ErrCodeInvalidCacheClusterStateFault, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + return err +} + +func waitForDeleteElasticacheCacheCluster(conn *elasticache.ElastiCache, cacheClusterID string, timeout time.Duration) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating", "available", "deleting", "incompatible-parameters", "incompatible-network", "restore-failed", "snapshotting"}, + Target: []string{}, + Refresh: cacheClusterStateRefreshFunc(conn, cacheClusterID, "", []string{}), + Timeout: timeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, } - arn := fmt.Sprintf("arn:%s:elasticache:%s:%s:cluster:%s", partition, region, accountid, identifier) - return arn, nil + log.Printf("[DEBUG] Waiting for Elasticache Cache Cluster deletion: %v", cacheClusterID) + _, err := stateConf.WaitForState() + return err } diff --git a/aws/resource_aws_elasticache_cluster_test.go b/aws/resource_aws_elasticache_cluster_test.go index 8dbe2da6d32..6482dc0f7bc 100644 --- a/aws/resource_aws_elasticache_cluster_test.go +++ b/aws/resource_aws_elasticache_cluster_test.go @@ -1,9 +1,15 @@ package aws import ( + "errors" "fmt" + "log" + "os" + "regexp" + "strconv" "strings" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -13,15 +19,208 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestAccAWSElasticacheCluster_basic(t *testing.T) { +func init() { + resource.AddTestSweepers("aws_elasticache_cluster", &resource.Sweeper{ + Name: "aws_elasticache_cluster", + F: testSweepElasticacheClusters, + Dependencies: []string{ + "aws_elasticache_replication_group", + }, + }) +} + +func testSweepElasticacheClusters(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).elasticacheconn + + prefixes := []string{ + "tf-", + "tf-test-", + "tf-acc-test-", + } + + err = conn.DescribeCacheClustersPages(&elasticache.DescribeCacheClustersInput{}, func(page 
*elasticache.DescribeCacheClustersOutput, isLast bool) bool {
+		if len(page.CacheClusters) == 0 {
+			log.Print("[DEBUG] No Elasticache Clusters to sweep")
+			return false
+		}
+
+		for _, cluster := range page.CacheClusters {
+			id := aws.StringValue(cluster.CacheClusterId)
+			skip := true
+			for _, prefix := range prefixes {
+				if strings.HasPrefix(id, prefix) {
+					skip = false
+					break
+				}
+			}
+			if skip {
+				log.Printf("[INFO] Skipping Elasticache Cluster: %s", id)
+				continue
+			}
+			log.Printf("[INFO] Deleting Elasticache Cluster: %s", id)
+			err := deleteElasticacheCacheCluster(conn, id)
+			if err != nil {
+				log.Printf("[ERROR] Failed to delete Elasticache Cache Cluster (%s): %s", id, err)
+			}
+			err = waitForDeleteElasticacheCacheCluster(conn, id, 40*time.Minute)
+			if err != nil {
+				log.Printf("[ERROR] Failed waiting for Elasticache Cache Cluster (%s) to be deleted: %s", id, err)
+			}
+		}
+		return !isLast
+	})
+	if err != nil {
+		if testSweepSkipSweepError(err) {
+			log.Printf("[WARN] Skipping Elasticache Cluster sweep for %s: %s", region, err)
+			return nil
+		}
+		return fmt.Errorf("Error retrieving Elasticache Clusters: %s", err)
+	}
+	return nil
+}
+
+func TestAccAWSElasticacheCluster_Engine_Memcached_Ec2Classic(t *testing.T) {
+	oldvar := os.Getenv("AWS_DEFAULT_REGION")
+	os.Setenv("AWS_DEFAULT_REGION", "us-east-1")
+	defer os.Setenv("AWS_DEFAULT_REGION", oldvar)
+
+	var ec elasticache.CacheCluster
+	rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8))
+	resourceName := "aws_elasticache_cluster.bar"
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSElasticacheClusterDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccAWSElasticacheClusterConfig_Engine_Memcached(rName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSElasticacheClusterExists(resourceName, &ec),
+					resource.TestCheckResourceAttr(resourceName, "cache_nodes.0.id", "0001"),
+					resource.TestCheckResourceAttrSet(resourceName, "configuration_endpoint"),
+					resource.TestCheckResourceAttrSet(resourceName, "cluster_address"),
+					resource.TestCheckResourceAttr(resourceName, "engine", "memcached"),
+					resource.TestCheckResourceAttr(resourceName, "port", "11211"),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccAWSElasticacheCluster_Engine_Redis_Ec2Classic(t *testing.T) {
+	oldvar := os.Getenv("AWS_DEFAULT_REGION")
+	os.Setenv("AWS_DEFAULT_REGION", "us-east-1")
+	defer os.Setenv("AWS_DEFAULT_REGION", oldvar)
+
+	var ec elasticache.CacheCluster
+	rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8))
+	resourceName := "aws_elasticache_cluster.bar"
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAWSElasticacheClusterDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccAWSElasticacheClusterConfig_Engine_Redis(rName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSElasticacheClusterExists(resourceName, &ec),
+					resource.TestCheckResourceAttr(resourceName, "cache_nodes.0.id", "0001"),
+					resource.TestCheckResourceAttr(resourceName, "engine", "redis"),
+					resource.TestCheckResourceAttr(resourceName, "port", "6379"),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func 
TestAccAWSElasticacheCluster_ParameterGroupName_Default(t *testing.T) { + var ec elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_ParameterGroupName(rName, "memcached", "1.4.34", "default.memcached1.4"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "engine", "memcached"), + resource.TestCheckResourceAttr(resourceName, "engine_version", "1.4.34"), + resource.TestCheckResourceAttr(resourceName, "parameter_group_name", "default.memcached1.4"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_Port_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + var ec elasticache.CacheCluster - resource.Test(t, resource.TestCase{ + port := 11212 + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_Port(rName, port), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "cache_nodes.0.id", "0001"), + resource.TestCheckResourceAttrSet(resourceName, "configuration_endpoint"), + resource.TestCheckResourceAttrSet(resourceName, "cluster_address"), + resource.TestCheckResourceAttr(resourceName, "engine", "memcached"), + resource.TestCheckResourceAttr(resourceName, "port", strconv.Itoa(port)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_SecurityGroup(t *testing.T) { + var ec elasticache.CacheCluster + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSElasticacheClusterConfig, + Config: testAccAWSElasticacheClusterConfig_SecurityGroup, Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"), testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), @@ -42,7 +241,7 @@ func TestAccAWSElasticacheCluster_snapshotsWithUpdates(t *testing.T) { preConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshots, ri, ri, acctest.RandString(10)) postConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfig_snapshotsUpdated, ri, ri, acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, @@ -74,35 +273,86 @@ func TestAccAWSElasticacheCluster_snapshotsWithUpdates(t *testing.T) { }) } -func 
TestAccAWSElasticacheCluster_decreasingCacheNodes(t *testing.T) { +func TestAccAWSElasticacheCluster_NumCacheNodes_Decrease(t *testing.T) { var ec elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" - ri := acctest.RandInt() - preConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfigDecreasingNodes, ri, ri, acctest.RandString(10)) - postConfig := fmt.Sprintf(testAccAWSElasticacheClusterConfigDecreasingNodes_update, ri, ri, acctest.RandString(10)) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_NumCacheNodes(rName, 3), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "num_cache_nodes", "3"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_NumCacheNodes(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "num_cache_nodes", "1"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_NumCacheNodes_Increase(t *testing.T) { + var ec elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, Steps: []resource.TestStep{ { - Config: preConfig, + Config: testAccAWSElasticacheClusterConfig_NumCacheNodes(rName, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"), - testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), - resource.TestCheckResourceAttr( - "aws_elasticache_cluster.bar", "num_cache_nodes", "3"), + testAccCheckAWSElasticacheClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "num_cache_nodes", "1"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_NumCacheNodes(rName, 3), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "num_cache_nodes", "3"), ), }, + }, + }) +} +func TestAccAWSElasticacheCluster_NumCacheNodes_IncreaseWithPreferredAvailabilityZones(t *testing.T) { + var ec elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ { - Config: postConfig, + Config: testAccAWSElasticacheClusterConfig_NumCacheNodesWithPreferredAvailabilityZones(rName, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"), - testAccCheckAWSElasticacheClusterExists("aws_elasticache_cluster.bar", &ec), - resource.TestCheckResourceAttr( - "aws_elasticache_cluster.bar", "num_cache_nodes", "1"), + testAccCheckAWSElasticacheClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "num_cache_nodes", "1"), + 
resource.TestCheckResourceAttr(resourceName, "preferred_availability_zones.#", "1"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_NumCacheNodesWithPreferredAvailabilityZones(rName, 3), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "num_cache_nodes", "3"), + resource.TestCheckResourceAttr(resourceName, "preferred_availability_zones.#", "3"), ), }, }, @@ -112,7 +362,7 @@ func TestAccAWSElasticacheCluster_decreasingCacheNodes(t *testing.T) { func TestAccAWSElasticacheCluster_vpc(t *testing.T) { var csg elasticache.CacheSubnetGroup var ec elasticache.CacheCluster - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, @@ -134,7 +384,7 @@ func TestAccAWSElasticacheCluster_vpc(t *testing.T) { func TestAccAWSElasticacheCluster_multiAZInVpc(t *testing.T) { var csg elasticache.CacheSubnetGroup var ec elasticache.CacheCluster - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, @@ -152,6 +402,424 @@ func TestAccAWSElasticacheCluster_multiAZInVpc(t *testing.T) { }) } +func TestAccAWSElasticacheCluster_AZMode_Memcached_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var cluster elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_AZMode_Memcached_Ec2Classic(rName, "unknown"), + ExpectError: regexp.MustCompile(`expected az_mode to be one of .*, got unknown`), + }, + { + Config: testAccAWSElasticacheClusterConfig_AZMode_Memcached_Ec2Classic(rName, "cross-az"), + ExpectError: regexp.MustCompile(`az_mode "cross-az" is not supported with num_cache_nodes = 1`), + }, + { + Config: testAccAWSElasticacheClusterConfig_AZMode_Memcached_Ec2Classic(rName, "single-az"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "az_mode", "single-az"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_AZMode_Memcached_Ec2Classic(rName, "cross-az"), + ExpectError: regexp.MustCompile(`az_mode "cross-az" is not supported with num_cache_nodes = 1`), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_AZMode_Redis_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var cluster elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSElasticacheClusterConfig_AZMode_Redis_Ec2Classic(rName, "unknown"), + ExpectError: regexp.MustCompile(`expected az_mode to be one of .*, got unknown`), + }, + { + Config: testAccAWSElasticacheClusterConfig_AZMode_Redis_Ec2Classic(rName, "cross-az"), + ExpectError: regexp.MustCompile(`az_mode "cross-az" is not supported with num_cache_nodes = 1`), + }, + { + Config: testAccAWSElasticacheClusterConfig_AZMode_Redis_Ec2Classic(rName, "single-az"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "az_mode", "single-az"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_EngineVersion_Memcached_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var pre, mid, post elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_EngineVersion_Memcached_Ec2Classic(rName, "1.4.33"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &pre), + resource.TestCheckResourceAttr(resourceName, "engine_version", "1.4.33"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_EngineVersion_Memcached_Ec2Classic(rName, "1.4.24"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &mid), + testAccCheckAWSElasticacheClusterRecreated(&pre, &mid), + resource.TestCheckResourceAttr(resourceName, "engine_version", "1.4.24"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_EngineVersion_Memcached_Ec2Classic(rName, "1.4.34"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &post), + testAccCheckAWSElasticacheClusterNotRecreated(&mid, &post), + resource.TestCheckResourceAttr(resourceName, "engine_version", "1.4.34"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_EngineVersion_Redis_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var pre, mid, post elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_EngineVersion_Redis_Ec2Classic(rName, "3.2.6"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &pre), + resource.TestCheckResourceAttr(resourceName, "engine_version", "3.2.6"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_EngineVersion_Redis_Ec2Classic(rName, "3.2.4"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &mid), + testAccCheckAWSElasticacheClusterRecreated(&pre, &mid), + resource.TestCheckResourceAttr(resourceName, "engine_version", "3.2.4"), + ), + }, + { + 
Config: testAccAWSElasticacheClusterConfig_EngineVersion_Redis_Ec2Classic(rName, "3.2.10"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &post), + testAccCheckAWSElasticacheClusterNotRecreated(&mid, &post), + resource.TestCheckResourceAttr(resourceName, "engine_version", "3.2.10"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_NodeTypeResize_Memcached_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var pre, post elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_NodeType_Memcached_Ec2Classic(rName, "cache.m3.medium"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &pre), + resource.TestCheckResourceAttr(resourceName, "node_type", "cache.m3.medium"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_NodeType_Memcached_Ec2Classic(rName, "cache.m3.large"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &post), + testAccCheckAWSElasticacheClusterRecreated(&pre, &post), + resource.TestCheckResourceAttr(resourceName, "node_type", "cache.m3.large"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_NodeTypeResize_Redis_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var pre, post elasticache.CacheCluster + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + resourceName := "aws_elasticache_cluster.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_NodeType_Redis_Ec2Classic(rName, "cache.m3.medium"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &pre), + resource.TestCheckResourceAttr(resourceName, "node_type", "cache.m3.medium"), + ), + }, + { + Config: testAccAWSElasticacheClusterConfig_NodeType_Redis_Ec2Classic(rName, "cache.m3.large"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheClusterExists(resourceName, &post), + testAccCheckAWSElasticacheClusterNotRecreated(&pre, &post), + resource.TestCheckResourceAttr(resourceName, "node_type", "cache.m3.large"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_NumCacheNodes_Redis_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_NumCacheNodes_Redis_Ec2Classic(rName, 2), + 
ExpectError: regexp.MustCompile(`engine "redis" does not support num_cache_nodes > 1`), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_ReplicationGroupID_InvalidAttributes(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(8)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "availability_zones", "${list(\"us-east-1a\", \"us-east-1c\")}"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with availability_zones`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "az_mode", "single-az"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with az_mode`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "engine_version", "3.2.10"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with engine_version`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "engine", "redis"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with engine`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "maintenance_window", "sun:05:00-sun:09:00"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with maintenance_window`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "node_type", "cache.m3.medium"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with node_type`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "notification_topic_arn", "arn:aws:sns:us-east-1:123456789012:topic/non-existent"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with notification_topic_arn`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "num_cache_nodes", "1"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with num_cache_nodes`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "parameter_group_name", "non-existent"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with parameter_group_name`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "port", "6379"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with port`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "security_group_ids", "${list(\"sg-12345678\", \"sg-87654321\")}"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with security_group_ids`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "security_group_names", "${list(\"group1\", \"group2\")}"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with security_group_names`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "snapshot_arns", 
"${list(\"arn:aws:s3:::my_bucket/snapshot1.rdb\")}"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with snapshot_arns`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "snapshot_name", "arn:aws:s3:::my_bucket/snapshot1.rdb"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with snapshot_name`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "snapshot_retention_limit", "0"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with snapshot_retention_limit`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "snapshot_window", "05:00-09:00"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with snapshot_window`), + }, + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, "subnet_group_name", "group1"), + ExpectError: regexp.MustCompile(`"replication_group_id": conflicts with subnet_group_name`), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_ReplicationGroupID_AvailabilityZone_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var cluster elasticache.CacheCluster + var replicationGroup elasticache.ReplicationGroup + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(7)) + clusterResourceName := "aws_elasticache_cluster.replica" + replicationGroupResourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_AvailabilityZone_Ec2Classic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(replicationGroupResourceName, &replicationGroup), + testAccCheckAWSElasticacheClusterExists(clusterResourceName, &cluster), + testAccCheckAWSElasticacheClusterReplicationGroupIDAttribute(&cluster, &replicationGroup), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_ReplicationGroupID_SingleReplica_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var cluster elasticache.CacheCluster + var replicationGroup elasticache.ReplicationGroup + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(7)) + clusterResourceName := "aws_elasticache_cluster.replica" + replicationGroupResourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_Replica_Ec2Classic(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(replicationGroupResourceName, &replicationGroup), + testAccCheckAWSElasticacheClusterExists(clusterResourceName, &cluster), + testAccCheckAWSElasticacheClusterReplicationGroupIDAttribute(&cluster, &replicationGroup), + resource.TestCheckResourceAttr(clusterResourceName, "engine", "redis"), + 
resource.TestCheckResourceAttr(clusterResourceName, "node_type", "cache.m3.medium"), + resource.TestCheckResourceAttr(clusterResourceName, "port", "6379"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheCluster_ReplicationGroupID_MultipleReplica_Ec2Classic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + var cluster1, cluster2 elasticache.CacheCluster + var replicationGroup elasticache.ReplicationGroup + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(7)) + clusterResourceName1 := "aws_elasticache_cluster.replica.0" + clusterResourceName2 := "aws_elasticache_cluster.replica.1" + replicationGroupResourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheClusterConfig_ReplicationGroupID_Replica_Ec2Classic(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(replicationGroupResourceName, &replicationGroup), + testAccCheckAWSElasticacheClusterExists(clusterResourceName1, &cluster1), + testAccCheckAWSElasticacheClusterExists(clusterResourceName2, &cluster2), + testAccCheckAWSElasticacheClusterReplicationGroupIDAttribute(&cluster1, &replicationGroup), + testAccCheckAWSElasticacheClusterReplicationGroupIDAttribute(&cluster2, &replicationGroup), + resource.TestCheckResourceAttr(clusterResourceName1, "engine", "redis"), + resource.TestCheckResourceAttr(clusterResourceName1, "node_type", "cache.m3.medium"), + resource.TestCheckResourceAttr(clusterResourceName1, "port", "6379"), + resource.TestCheckResourceAttr(clusterResourceName2, "engine", "redis"), + resource.TestCheckResourceAttr(clusterResourceName2, "node_type", "cache.m3.medium"), + resource.TestCheckResourceAttr(clusterResourceName2, "port", "6379"), + ), + }, + }, + }) +} + func testAccCheckAWSElasticacheClusterAttributes(v *elasticache.CacheCluster) resource.TestCheckFunc { return func(s *terraform.State) error { if v.NotificationConfiguration == nil { @@ -166,6 +834,40 @@ func testAccCheckAWSElasticacheClusterAttributes(v *elasticache.CacheCluster) re } } +func testAccCheckAWSElasticacheClusterReplicationGroupIDAttribute(cluster *elasticache.CacheCluster, replicationGroup *elasticache.ReplicationGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + if cluster.ReplicationGroupId == nil { + return errors.New("expected cluster ReplicationGroupId to be set") + } + + if aws.StringValue(cluster.ReplicationGroupId) != aws.StringValue(replicationGroup.ReplicationGroupId) { + return errors.New("expected cluster ReplicationGroupId to equal replication group ID") + } + + return nil + } +} + +func testAccCheckAWSElasticacheClusterNotRecreated(i, j *elasticache.CacheCluster) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.TimeValue(i.CacheClusterCreateTime) != aws.TimeValue(j.CacheClusterCreateTime) { + return errors.New("Elasticache Cluster was recreated") + } + + return nil + } +} + +func testAccCheckAWSElasticacheClusterRecreated(i, j *elasticache.CacheCluster) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.TimeValue(i.CacheClusterCreateTime) == aws.TimeValue(j.CacheClusterCreateTime) { + return errors.New("Elasticache Cluster was 
not recreated") + } + + return nil + } +} + func testAccCheckAWSElasticacheClusterDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).elasticacheconn @@ -219,24 +921,54 @@ func testAccCheckAWSElasticacheClusterExists(n string, v *elasticache.CacheClust } } -func testAccAWSElasticacheClusterConfigBasic(clusterId string) string { +func testAccAWSElasticacheClusterConfig_Engine_Memcached(rName string) string { return fmt.Sprintf(` -provider "aws" { - region = "us-east-1" +resource "aws_elasticache_cluster" "bar" { + cluster_id = "%s" + engine = "memcached" + node_type = "cache.m1.small" + num_cache_nodes = 1 +} +`, rName) } +func testAccAWSElasticacheClusterConfig_Engine_Redis(rName string) string { + return fmt.Sprintf(` resource "aws_elasticache_cluster" "bar" { - cluster_id = "tf-%s" - engine = "memcached" - node_type = "cache.m1.small" - num_cache_nodes = 1 - port = 11211 - parameter_group_name = "default.memcached1.4" + cluster_id = "%s" + engine = "redis" + node_type = "cache.m1.small" + num_cache_nodes = 1 +} +`, rName) +} + +func testAccAWSElasticacheClusterConfig_ParameterGroupName(rName, engine, engineVersion, parameterGroupName string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "test" { + cluster_id = %q + engine = %q + engine_version = %q + node_type = "cache.m1.small" + num_cache_nodes = 1 + parameter_group_name = %q +} +`, rName, engine, engineVersion, parameterGroupName) +} + +func testAccAWSElasticacheClusterConfig_Port(rName string, port int) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "bar" { + cluster_id = "%s" + engine = "memcached" + node_type = "cache.m1.small" + num_cache_nodes = 1 + port = %d } -`, clusterId) +`, rName, port) } -var testAccAWSElasticacheClusterConfig = fmt.Sprintf(` +var testAccAWSElasticacheClusterConfig_SecurityGroup = fmt.Sprintf(` provider "aws" { region = "us-east-1" } @@ -267,7 +999,6 @@ resource "aws_elasticache_cluster" "bar" { node_type = "cache.m1.small" num_cache_nodes = 1 port = 11211 - parameter_group_name = "default.memcached1.4" security_group_names = ["${aws_elasticache_security_group.bar.name}"] } `, acctest.RandInt(), acctest.RandInt(), acctest.RandString(10)) @@ -299,7 +1030,6 @@ resource "aws_elasticache_cluster" "bar" { node_type = "cache.m1.small" num_cache_nodes = 1 port = 6379 - parameter_group_name = "default.redis3.2" security_group_names = ["${aws_elasticache_security_group.bar.name}"] snapshot_window = "05:00-09:00" snapshot_retention_limit = 3 @@ -333,7 +1063,6 @@ resource "aws_elasticache_cluster" "bar" { node_type = "cache.m1.small" num_cache_nodes = 1 port = 6379 - parameter_group_name = "default.redis3.2" security_group_names = ["${aws_elasticache_security_group.bar.name}"] snapshot_window = "07:00-09:00" snapshot_retention_limit = 7 @@ -341,70 +1070,37 @@ resource "aws_elasticache_cluster" "bar" { } ` -var testAccAWSElasticacheClusterConfigDecreasingNodes = ` -provider "aws" { - region = "us-east-1" -} -resource "aws_security_group" "bar" { - name = "tf-test-security-group-%03d" - description = "tf-test-security-group-descr" - ingress { - from_port = -1 - to_port = -1 - protocol = "icmp" - cidr_blocks = ["0.0.0.0/0"] - } -} - -resource "aws_elasticache_security_group" "bar" { - name = "tf-test-security-group-%03d" - description = "tf-test-security-group-descr" - security_group_names = ["${aws_security_group.bar.name}"] -} - +func testAccAWSElasticacheClusterConfig_NumCacheNodes(rName string, numCacheNodes int) string { + return fmt.Sprintf(` 
resource "aws_elasticache_cluster" "bar" { - cluster_id = "tf-%s" - engine = "memcached" - node_type = "cache.m1.small" - num_cache_nodes = 3 - port = 11211 - parameter_group_name = "default.memcached1.4" - security_group_names = ["${aws_elasticache_security_group.bar.name}"] + apply_immediately = true + cluster_id = "%s" + engine = "memcached" + node_type = "cache.m1.small" + num_cache_nodes = %d } -` - -var testAccAWSElasticacheClusterConfigDecreasingNodes_update = ` -provider "aws" { - region = "us-east-1" -} -resource "aws_security_group" "bar" { - name = "tf-test-security-group-%03d" - description = "tf-test-security-group-descr" - ingress { - from_port = -1 - to_port = -1 - protocol = "icmp" - cidr_blocks = ["0.0.0.0/0"] - } +`, rName, numCacheNodes) } -resource "aws_elasticache_security_group" "bar" { - name = "tf-test-security-group-%03d" - description = "tf-test-security-group-descr" - security_group_names = ["${aws_security_group.bar.name}"] -} +func testAccAWSElasticacheClusterConfig_NumCacheNodesWithPreferredAvailabilityZones(rName string, numCacheNodes int) string { + preferredAvailabilityZones := make([]string, numCacheNodes) + for i := range preferredAvailabilityZones { + preferredAvailabilityZones[i] = `"${data.aws_availability_zones.available.names[0]}"` + } + + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} resource "aws_elasticache_cluster" "bar" { - cluster_id = "tf-%s" - engine = "memcached" - node_type = "cache.m1.small" - num_cache_nodes = 1 - port = 11211 - parameter_group_name = "default.memcached1.4" - security_group_names = ["${aws_elasticache_security_group.bar.name}"] - apply_immediately = true + apply_immediately = true + cluster_id = "%s" + engine = "memcached" + node_type = "cache.m1.small" + num_cache_nodes = %d + preferred_availability_zones = [%s] +} +`, rName, numCacheNodes, strings.Join(preferredAvailabilityZones, ",")) } -` var testAccAWSElasticacheClusterInVPCConfig = fmt.Sprintf(` resource "aws_vpc" "foo" { @@ -518,11 +1214,162 @@ resource "aws_elasticache_cluster" "bar" { port = 11211 subnet_group_name = "${aws_elasticache_subnet_group.bar.name}" security_group_ids = ["${aws_security_group.bar.id}"] - parameter_group_name = "default.memcached1.4" az_mode = "cross-az" - availability_zones = [ + preferred_availability_zones = [ "us-west-2a", "us-west-2b" ] } `, acctest.RandInt(), acctest.RandInt(), acctest.RandString(10)) + +func testAccAWSElasticacheClusterConfig_AZMode_Memcached_Ec2Classic(rName, azMode string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "bar" { + apply_immediately = true + az_mode = "%[2]s" + cluster_id = "%[1]s" + engine = "memcached" + node_type = "cache.m3.medium" + num_cache_nodes = 1 + port = 11211 +} +`, rName, azMode) +} + +func testAccAWSElasticacheClusterConfig_AZMode_Redis_Ec2Classic(rName, azMode string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "bar" { + apply_immediately = true + az_mode = "%[2]s" + cluster_id = "%[1]s" + engine = "redis" + node_type = "cache.m3.medium" + num_cache_nodes = 1 + port = 6379 +} +`, rName, azMode) +} + +func testAccAWSElasticacheClusterConfig_EngineVersion_Memcached_Ec2Classic(rName, engineVersion string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "bar" { + apply_immediately = true + cluster_id = "%[1]s" + engine = "memcached" + engine_version = "%[2]s" + node_type = "cache.m3.medium" + num_cache_nodes = 1 + port = 11211 +} +`, rName, engineVersion) +} + +func 
testAccAWSElasticacheClusterConfig_EngineVersion_Redis_Ec2Classic(rName, engineVersion string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "bar" { + apply_immediately = true + cluster_id = "%[1]s" + engine = "redis" + engine_version = "%[2]s" + node_type = "cache.m3.medium" + num_cache_nodes = 1 + port = 6379 +} +`, rName, engineVersion) +} + +func testAccAWSElasticacheClusterConfig_NodeType_Memcached_Ec2Classic(rName, nodeType string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "bar" { + apply_immediately = true + cluster_id = "%[1]s" + engine = "memcached" + node_type = "%[2]s" + num_cache_nodes = 1 + port = 11211 +} +`, rName, nodeType) +} + +func testAccAWSElasticacheClusterConfig_NodeType_Redis_Ec2Classic(rName, nodeType string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "bar" { + apply_immediately = true + cluster_id = "%[1]s" + engine = "redis" + node_type = "%[2]s" + num_cache_nodes = 1 + port = 6379 +} +`, rName, nodeType) +} + +func testAccAWSElasticacheClusterConfig_NumCacheNodes_Redis_Ec2Classic(rName string, numCacheNodes int) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "bar" { + apply_immediately = true + cluster_id = "%[1]s" + engine = "redis" + node_type = "cache.m3.medium" + num_cache_nodes = %[2]d + port = 6379 +} +`, rName, numCacheNodes) +} + +func testAccAWSElasticacheClusterConfig_ReplicationGroupID_InvalidAttribute(rName, attrName, attrValue string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "replica" { + cluster_id = "%[1]s" + replication_group_id = "non-existent-id" + %[2]s = "%[3]s" +} +`, rName, attrName, attrValue) +} + +func testAccAWSElasticacheClusterConfig_ReplicationGroupID_AvailabilityZone_Ec2Classic(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_elasticache_replication_group" "test" { + replication_group_description = "Terraform Acceptance Testing" + replication_group_id = "%[1]s" + node_type = "cache.m3.medium" + number_cache_clusters = 1 + port = 6379 + + lifecycle { + ignore_changes = ["number_cache_clusters"] + } +} + +resource "aws_elasticache_cluster" "replica" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + cluster_id = "%[1]s1" + replication_group_id = "${aws_elasticache_replication_group.test.id}" +} +`, rName) +} + +func testAccAWSElasticacheClusterConfig_ReplicationGroupID_Replica_Ec2Classic(rName string, count int) string { + return fmt.Sprintf(` +resource "aws_elasticache_replication_group" "test" { + replication_group_description = "Terraform Acceptance Testing" + replication_group_id = "%[1]s" + node_type = "cache.m3.medium" + number_cache_clusters = 1 + port = 6379 + + lifecycle { + ignore_changes = ["number_cache_clusters"] + } +} + +resource "aws_elasticache_cluster" "replica" { + count = %[2]d + + cluster_id = "%[1]s${count.index}" + replication_group_id = "${aws_elasticache_replication_group.test.id}" +} +`, rName, count) +} diff --git a/aws/resource_aws_elasticache_parameter_group.go b/aws/resource_aws_elasticache_parameter_group.go index 3c84ae070ce..3b7b2f4caf9 100644 --- a/aws/resource_aws_elasticache_parameter_group.go +++ b/aws/resource_aws_elasticache_parameter_group.go @@ -26,7 +26,7 @@ func resourceAwsElasticacheParameterGroup() *schema.Resource { State: schema.ImportStatePassthrough, }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, ForceNew: true, Required: true, 
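// Editor's note — illustrative sketch only, not lines of this patch. The hunks in
// this file drop the redundant `&schema.Schema` element types from the schema-map
// composite literals; `gofmt -s` applies the same simplification automatically.
// A minimal before/after, assuming the github.com/hashicorp/terraform/helper/schema
// package already imported by this resource (names below are hypothetical):

package example // hypothetical package, for illustration only

import "github.com/hashicorp/terraform/helper/schema"

// verbose: the element type is spelled out for every entry
var verbose = map[string]*schema.Schema{
	"name": &schema.Schema{
		Type:     schema.TypeString,
		Required: true,
	},
}

// simplified: *schema.Schema is inferred from the map's element type,
// which is the style these hunks convert the resource schemas to
var simplified = map[string]*schema.Schema{
	"name": {
		Type:     schema.TypeString,
		Required: true,
	},
}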
@@ -34,27 +34,27 @@ func resourceAwsElasticacheParameterGroup() *schema.Resource { return strings.ToLower(val.(string)) }, }, - "family": &schema.Schema{ + "family": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, ForceNew: true, Default: "Managed by Terraform", }, - "parameter": &schema.Schema{ + "parameter": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Required: true, }, @@ -166,7 +166,7 @@ func resourceAwsElasticacheParameterGroupUpdate(d *schema.ResourceData, meta int maxParams := 20 for len(toRemove) > 0 { - paramsToModify := make([]*elasticache.ParameterNameValue, 0) + var paramsToModify []*elasticache.ParameterNameValue if len(toRemove) <= maxParams { paramsToModify, toRemove = toRemove[:], nil } else { @@ -194,7 +194,7 @@ func resourceAwsElasticacheParameterGroupUpdate(d *schema.ResourceData, meta int } for len(toAdd) > 0 { - paramsToModify := make([]*elasticache.ParameterNameValue, 0) + var paramsToModify []*elasticache.ParameterNameValue if len(toAdd) <= maxParams { paramsToModify, toAdd = toAdd[:], nil } else { @@ -231,7 +231,6 @@ func resourceAwsElasticacheParameterGroupDelete(d *schema.ResourceData, meta int if err != nil { awsErr, ok := err.(awserr.Error) if ok && awsErr.Code() == "CacheParameterGroupNotFoundFault" { - d.SetId("") return nil } if ok && awsErr.Code() == "InvalidCacheParameterGroupState" { diff --git a/aws/resource_aws_elasticache_parameter_group_test.go b/aws/resource_aws_elasticache_parameter_group_test.go index b10dfbfa68b..e3c992bea78 100644 --- a/aws/resource_aws_elasticache_parameter_group_test.go +++ b/aws/resource_aws_elasticache_parameter_group_test.go @@ -12,16 +12,38 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSElasticacheParameterGroup_importBasic(t *testing.T) { + resourceName := "aws_elasticache_parameter_group.bar" + rName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheParameterGroupConfig(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSElasticacheParameterGroup_basic(t *testing.T) { var v elasticache.CacheParameterGroup rName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSElasticacheParameterGroupConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheParameterGroupExists("aws_elasticache_parameter_group.bar", &v), @@ -38,7 +60,7 @@ func TestAccAWSElasticacheParameterGroup_basic(t *testing.T) { "aws_elasticache_parameter_group.bar", "parameter.283487565.value", "yes"), ), }, - resource.TestStep{ + { Config: testAccAWSElasticacheParameterGroupAddParametersConfig(rName), Check: resource.ComposeTestCheckFunc( 
testAccCheckAWSElasticacheParameterGroupExists("aws_elasticache_parameter_group.bar", &v), @@ -67,12 +89,12 @@ func TestAccAWSElasticacheParameterGroup_only(t *testing.T) { var v elasticache.CacheParameterGroup rName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSElasticacheParameterGroupOnlyConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheParameterGroupExists("aws_elasticache_parameter_group.bar", &v), @@ -92,7 +114,7 @@ func TestAccAWSElasticacheParameterGroup_removeParam(t *testing.T) { var v elasticache.CacheParameterGroup rName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheParameterGroupDestroy, @@ -128,12 +150,12 @@ func TestAccAWSElasticacheParameterGroup_UppercaseName(t *testing.T) { rInt := acctest.RandInt() rName := fmt.Sprintf("TF-ELASTIPG-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheParameterGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSElasticacheParameterGroupConfig_UppercaseName(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheParameterGroupExists("aws_elasticache_parameter_group.bar", &v), diff --git a/aws/resource_aws_elasticache_replication_group.go b/aws/resource_aws_elasticache_replication_group.go index 03d387e3c61..5a127aafd6c 100644 --- a/aws/resource_aws_elasticache_replication_group.go +++ b/aws/resource_aws_elasticache_replication_group.go @@ -8,107 +8,13 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsElasticacheReplicationGroup() *schema.Resource { - - resourceSchema := resourceAwsElastiCacheCommonSchema() - - resourceSchema["replication_group_id"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validateAwsElastiCacheReplicationGroupId, - StateFunc: func(val interface{}) string { - return strings.ToLower(val.(string)) - }, - } - - resourceSchema["automatic_failover_enabled"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: false, - } - - resourceSchema["auto_minor_version_upgrade"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: true, - } - - resourceSchema["replication_group_description"] = &schema.Schema{ - Type: schema.TypeString, - Required: true, - } - - resourceSchema["number_cache_clusters"] = &schema.Schema{ - Type: schema.TypeInt, - Computed: true, - Optional: true, - ForceNew: true, - } - - resourceSchema["primary_endpoint_address"] = &schema.Schema{ - Type: schema.TypeString, - Computed: true, - } - - resourceSchema["configuration_endpoint_address"] = &schema.Schema{ - Type: 
schema.TypeString, - Computed: true, - } - - resourceSchema["cluster_mode"] = &schema.Schema{ - Type: schema.TypeSet, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "replicas_per_node_group": { - Type: schema.TypeInt, - Required: true, - ForceNew: true, - }, - "num_node_groups": { - Type: schema.TypeInt, - Required: true, - ForceNew: true, - }, - }, - }, - } - - resourceSchema["engine"].Required = false - resourceSchema["engine"].Optional = true - resourceSchema["engine"].Default = "redis" - resourceSchema["engine"].ValidateFunc = validateAwsElastiCacheReplicationGroupEngine - - resourceSchema["at_rest_encryption_enabled"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: false, - ForceNew: true, - } - - resourceSchema["transit_encryption_enabled"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: false, - ForceNew: true, - } - - resourceSchema["auth_token"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Sensitive: true, - ForceNew: true, - ValidateFunc: validateAwsElastiCacheReplicationGroupAuthToken, - } - return &schema.Resource{ Create: resourceAwsElasticacheReplicationGroupCreate, Read: resourceAwsElasticacheReplicationGroupRead, @@ -118,7 +24,211 @@ func resourceAwsElasticacheReplicationGroup() *schema.Resource { State: schema.ImportStatePassthrough, }, - Schema: resourceSchema, + Schema: map[string]*schema.Schema{ + "apply_immediately": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "at_rest_encryption_enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + "auth_token": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + ForceNew: true, + ValidateFunc: validateAwsElastiCacheReplicationGroupAuthToken, + }, + "auto_minor_version_upgrade": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "automatic_failover_enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "availability_zones": { + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "cluster_mode": { + Type: schema.TypeList, + Optional: true, + // We allow Computed: true here since using number_cache_clusters + // and a cluster mode enabled parameter_group_name will create + // a single shard replication group with number_cache_clusters - 1 + // read replicas. Otherwise, the resource is marked ForceNew. 
+ Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "replicas_per_node_group": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + "num_node_groups": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "configuration_endpoint_address": { + Type: schema.TypeString, + Computed: true, + }, + "engine": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "redis", + ValidateFunc: validateAwsElastiCacheReplicationGroupEngine, + }, + "engine_version": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "maintenance_window": { + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: func(val interface{}) string { + // Elasticache always changes the maintenance + // to lowercase + return strings.ToLower(val.(string)) + }, + ValidateFunc: validateOnceAWeekWindowFormat, + }, + "member_clusters": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "node_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "notification_topic_arn": { + Type: schema.TypeString, + Optional: true, + }, + "number_cache_clusters": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + }, + "parameter_group_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "port": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // Suppress default memcached/redis ports when not defined + if !d.IsNewResource() && new == "0" && (old == "6379" || old == "11211") { + return true + } + return false + }, + }, + "primary_endpoint_address": { + Type: schema.TypeString, + Computed: true, + }, + "replication_group_description": { + Type: schema.TypeString, + Required: true, + }, + "replication_group_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateAwsElastiCacheReplicationGroupId, + StateFunc: func(val interface{}) string { + return strings.ToLower(val.(string)) + }, + }, + "security_group_names": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "security_group_ids": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + // A single-element string list containing an Amazon Resource Name (ARN) that + // uniquely identifies a Redis RDB snapshot file stored in Amazon S3. The snapshot + // file will be used to populate the node group. 
+ // + // See also: + // https://github.com/aws/aws-sdk-go/blob/4862a174f7fc92fb523fc39e68f00b87d91d2c3d/service/elasticache/api.go#L2079 + "snapshot_arns": { + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "snapshot_retention_limit": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtMost(35), + }, + "snapshot_window": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validateOnceADayWindowFormat, + }, + "snapshot_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "subnet_group_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "tags": tagsSchema(), + "transit_encryption_enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + }, + SchemaVersion: 1, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(60 * time.Minute), + Delete: schema.DefaultTimeout(40 * time.Minute), + Update: schema.DefaultTimeout(40 * time.Minute), + }, } } @@ -133,7 +243,6 @@ func resourceAwsElasticacheReplicationGroupCreate(d *schema.ResourceData, meta i AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), CacheNodeType: aws.String(d.Get("node_type").(string)), Engine: aws.String(d.Get("engine").(string)), - Port: aws.Int64(int64(d.Get("port").(int))), Tags: tags, } @@ -151,6 +260,10 @@ func resourceAwsElasticacheReplicationGroupCreate(d *schema.ResourceData, meta i params.CacheParameterGroupName = aws.String(v.(string)) } + if v, ok := d.GetOk("port"); ok { + params.Port = aws.Int64(int64(v.(int))) + } + if v, ok := d.GetOk("subnet_group_name"); ok { params.CacheSubnetGroupName = aws.String(v.(string)) } @@ -210,8 +323,8 @@ func resourceAwsElasticacheReplicationGroupCreate(d *schema.ResourceData, meta i } if clusterModeOk { - clusterModeAttributes := clusterMode.(*schema.Set).List() - attributes := clusterModeAttributes[0].(map[string]interface{}) + clusterModeList := clusterMode.([]interface{}) + attributes := clusterModeList[0].(map[string]interface{}) if v, ok := attributes["num_node_groups"]; ok { params.NumNodeGroups = aws.Int64(int64(v.(int))) @@ -237,8 +350,8 @@ func resourceAwsElasticacheReplicationGroupCreate(d *schema.ResourceData, meta i stateConf := &resource.StateChangeConf{ Pending: pending, Target: []string{"available"}, - Refresh: cacheReplicationGroupStateRefreshFunc(conn, d.Id(), "available", pending), - Timeout: 50 * time.Minute, + Refresh: cacheReplicationGroupStateRefreshFunc(conn, d.Id(), pending), + Timeout: d.Timeout(schema.TimeoutCreate), MinTimeout: 10 * time.Second, Delay: 30 * time.Second, } @@ -260,7 +373,7 @@ func resourceAwsElasticacheReplicationGroupRead(d *schema.ResourceData, meta int res, err := conn.DescribeReplicationGroups(req) if err != nil { - if eccErr, ok := err.(awserr.Error); ok && eccErr.Code() == "ReplicationGroupNotFoundFault" { + if isAWSErr(err, elasticache.ErrCodeReplicationGroupNotFoundFault, "") { log.Printf("[WARN] Elasticache Replication Group (%s) not found", d.Id()) d.SetId("") return nil @@ -301,6 +414,12 @@ func resourceAwsElasticacheReplicationGroupRead(d *schema.ResourceData, meta int d.Set("replication_group_description", rgp.Description) d.Set("number_cache_clusters", len(rgp.MemberClusters)) + if err := d.Set("member_clusters", flattenStringList(rgp.MemberClusters)); err != nil { + return fmt.Errorf("error setting member_clusters: %s", err) 
+ } + if err := d.Set("cluster_mode", flattenElasticacheNodeGroupsToClusterMode(aws.BoolValue(rgp.ClusterEnabled), rgp.NodeGroups)); err != nil { + return fmt.Errorf("error setting cluster_mode attribute: %s", err) + } d.Set("replication_group_id", rgp.ReplicationGroupId) if rgp.NodeGroups != nil { @@ -361,6 +480,200 @@ func resourceAwsElasticacheReplicationGroupRead(d *schema.ResourceData, meta int func resourceAwsElasticacheReplicationGroupUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).elasticacheconn + if d.HasChange("cluster_mode.0.num_node_groups") { + o, n := d.GetChange("cluster_mode.0.num_node_groups") + oldNumNodeGroups := o.(int) + newNumNodeGroups := n.(int) + + input := &elasticache.ModifyReplicationGroupShardConfigurationInput{ + ApplyImmediately: aws.Bool(true), + NodeGroupCount: aws.Int64(int64(newNumNodeGroups)), + ReplicationGroupId: aws.String(d.Id()), + } + + if oldNumNodeGroups > newNumNodeGroups { + // Node Group IDs are 1 indexed: 0001 through 0015 + // Loop from highest old ID until we reach highest new ID + nodeGroupsToRemove := []string{} + for i := oldNumNodeGroups; i > newNumNodeGroups; i-- { + nodeGroupID := fmt.Sprintf("%04d", i) + nodeGroupsToRemove = append(nodeGroupsToRemove, nodeGroupID) + } + input.NodeGroupsToRemove = aws.StringSlice(nodeGroupsToRemove) + } + + log.Printf("[DEBUG] Modifying Elasticache Replication Group (%s) shard configuration: %s", d.Id(), input) + _, err := conn.ModifyReplicationGroupShardConfiguration(input) + if err != nil { + return fmt.Errorf("error modifying Elasticache Replication Group shard configuration: %s", err) + } + + err = waitForModifyElasticacheReplicationGroup(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for Elasticache Replication Group (%s) shard reconfiguration completion: %s", d.Id(), err) + } + } + + if d.HasChange("number_cache_clusters") { + o, n := d.GetChange("number_cache_clusters") + oldNumberCacheClusters := o.(int) + newNumberCacheClusters := n.(int) + + // We will try to use similar naming to the console, which are 1 indexed: RGID-001 through RGID-006 + var addClusterIDs, removeClusterIDs []string + for clusterID := oldNumberCacheClusters + 1; clusterID <= newNumberCacheClusters; clusterID++ { + addClusterIDs = append(addClusterIDs, fmt.Sprintf("%s-%03d", d.Id(), clusterID)) + } + for clusterID := oldNumberCacheClusters; clusterID >= (newNumberCacheClusters + 1); clusterID-- { + removeClusterIDs = append(removeClusterIDs, fmt.Sprintf("%s-%03d", d.Id(), clusterID)) + } + + if len(addClusterIDs) > 0 { + // Kick off all the Cache Cluster creations + for _, cacheClusterID := range addClusterIDs { + input := &elasticache.CreateCacheClusterInput{ + CacheClusterId: aws.String(cacheClusterID), + ReplicationGroupId: aws.String(d.Id()), + } + _, err := createElasticacheCacheCluster(conn, input) + if err != nil { + // Future enhancement: we could retry creation with random ID on naming collision + // if isAWSErr(err, elasticache.ErrCodeCacheClusterAlreadyExistsFault, "") { ... 
} + return fmt.Errorf("error creating Elasticache Cache Cluster (adding replica): %s", err) + } + } + + // Wait for all Cache Cluster creations + for _, cacheClusterID := range addClusterIDs { + err := waitForCreateElasticacheCacheCluster(conn, cacheClusterID, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for Elasticache Cache Cluster (%s) to be created (adding replica): %s", cacheClusterID, err) + } + } + } + + if len(removeClusterIDs) > 0 { + // Cannot reassign primary cluster ID while automatic failover is enabled + // If we temporarily disable automatic failover, ensure we re-enable it + reEnableAutomaticFailover := false + + // Kick off all the Cache Cluster deletions + for _, cacheClusterID := range removeClusterIDs { + err := deleteElasticacheCacheCluster(conn, cacheClusterID) + if err != nil { + // Future enhancement: we could retry deletion with random existing ID on missing name + // if isAWSErr(err, elasticache.ErrCodeCacheClusterNotFoundFault, "") { ... } + if !isAWSErr(err, elasticache.ErrCodeInvalidCacheClusterStateFault, "serving as primary") { + return fmt.Errorf("error deleting Elasticache Cache Cluster (%s) (removing replica): %s", cacheClusterID, err) + } + + // Use Replication Group MemberClusters to find a new primary cache cluster ID + // that is not in removeClusterIDs + newPrimaryClusterID := "" + + describeReplicationGroupInput := &elasticache.DescribeReplicationGroupsInput{ + ReplicationGroupId: aws.String(d.Id()), + } + log.Printf("[DEBUG] Reading Elasticache Replication Group: %s", describeReplicationGroupInput) + output, err := conn.DescribeReplicationGroups(describeReplicationGroupInput) + if err != nil { + return fmt.Errorf("error reading Elasticache Replication Group (%s) to determine new primary: %s", d.Id(), err) + } + if output == nil || len(output.ReplicationGroups) == 0 || len(output.ReplicationGroups[0].MemberClusters) == 0 { + return fmt.Errorf("error reading Elasticache Replication Group (%s) to determine new primary: missing replication group information", d.Id()) + } + + for _, memberClusterPtr := range output.ReplicationGroups[0].MemberClusters { + memberCluster := aws.StringValue(memberClusterPtr) + memberClusterInRemoveClusterIDs := false + for _, removeClusterID := range removeClusterIDs { + if memberCluster == removeClusterID { + memberClusterInRemoveClusterIDs = true + break + } + } + if !memberClusterInRemoveClusterIDs { + newPrimaryClusterID = memberCluster + break + } + } + if newPrimaryClusterID == "" { + return fmt.Errorf("error reading Elasticache Replication Group (%s) to determine new primary: unable to assign new primary", d.Id()) + } + + // Disable automatic failover if enabled + // Must be applied previous to trying to set new primary + // InvalidReplicationGroupState: Cannot manually promote a new master cache cluster while autofailover is enabled + if aws.StringValue(output.ReplicationGroups[0].AutomaticFailover) == elasticache.AutomaticFailoverStatusEnabled { + // Be kind and rewind + if d.Get("automatic_failover_enabled").(bool) { + reEnableAutomaticFailover = true + } + + modifyReplicationGroupInput := &elasticache.ModifyReplicationGroupInput{ + ApplyImmediately: aws.Bool(true), + AutomaticFailoverEnabled: aws.Bool(false), + ReplicationGroupId: aws.String(d.Id()), + } + log.Printf("[DEBUG] Modifying Elasticache Replication Group: %s", modifyReplicationGroupInput) + _, err = conn.ModifyReplicationGroup(modifyReplicationGroupInput) + if err != nil { + return fmt.Errorf("error modifying 
Elasticache Replication Group (%s) to set new primary: %s", d.Id(), err) + } + err = waitForModifyElasticacheReplicationGroup(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for Elasticache Replication Group (%s) to be available: %s", d.Id(), err) + } + } + + // Set new primary + modifyReplicationGroupInput := &elasticache.ModifyReplicationGroupInput{ + ApplyImmediately: aws.Bool(true), + PrimaryClusterId: aws.String(newPrimaryClusterID), + ReplicationGroupId: aws.String(d.Id()), + } + log.Printf("[DEBUG] Modifying Elasticache Replication Group: %s", modifyReplicationGroupInput) + _, err = conn.ModifyReplicationGroup(modifyReplicationGroupInput) + if err != nil { + return fmt.Errorf("error modifying Elasticache Replication Group (%s) to set new primary: %s", d.Id(), err) + } + err = waitForModifyElasticacheReplicationGroup(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for Elasticache Replication Group (%s) to be available: %s", d.Id(), err) + } + + // Finally retry deleting the cache cluster + err = deleteElasticacheCacheCluster(conn, cacheClusterID) + if err != nil { + return fmt.Errorf("error deleting Elasticache Cache Cluster (%s) (removing replica after setting new primary): %s", cacheClusterID, err) + } + } + } + + // Wait for all Cache Cluster deletions + for _, cacheClusterID := range removeClusterIDs { + err := waitForDeleteElasticacheCacheCluster(conn, cacheClusterID, d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for Elasticache Cache Cluster (%s) to be deleted (removing replica): %s", cacheClusterID, err) + } + } + + // Re-enable automatic failover if we needed to temporarily disable it + if reEnableAutomaticFailover { + input := &elasticache.ModifyReplicationGroupInput{ + ApplyImmediately: aws.Bool(true), + AutomaticFailoverEnabled: aws.Bool(true), + ReplicationGroupId: aws.String(d.Id()), + } + log.Printf("[DEBUG] Modifying Elasticache Replication Group: %s", input) + _, err := conn.ModifyReplicationGroup(input) + if err != nil { + return fmt.Errorf("error modifying Elasticache Replication Group (%s) to re-enable automatic failover: %s", d.Id(), err) + } + } + } + } + requestUpdate := false params := &elasticache.ModifyReplicationGroupInput{ ApplyImmediately: aws.Bool(d.Get("apply_immediately").(bool)), @@ -440,23 +753,12 @@ func resourceAwsElasticacheReplicationGroupUpdate(d *schema.ResourceData, meta i if requestUpdate { _, err := conn.ModifyReplicationGroup(params) if err != nil { - return fmt.Errorf("Error updating Elasticache replication group: %s", err) - } - - pending := []string{"creating", "modifying", "snapshotting"} - stateConf := &resource.StateChangeConf{ - Pending: pending, - Target: []string{"available"}, - Refresh: cacheReplicationGroupStateRefreshFunc(conn, d.Id(), "available", pending), - Timeout: 40 * time.Minute, - MinTimeout: 10 * time.Second, - Delay: 30 * time.Second, + return fmt.Errorf("error updating Elasticache Replication Group (%s): %s", d.Id(), err) } - log.Printf("[DEBUG] Waiting for state to become available: %v", d.Id()) - _, sterr := stateConf.WaitForState() - if sterr != nil { - return fmt.Errorf("Error waiting for elasticache replication group (%s) to be created: %s", d.Id(), sterr) + err = waitForModifyElasticacheReplicationGroup(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for Elasticache Replication Group (%s) to be updated: %s", d.Id(), err) 
} } return resourceAwsElasticacheReplicationGroupRead(d, meta) @@ -465,45 +767,21 @@ func resourceAwsElasticacheReplicationGroupUpdate(d *schema.ResourceData, meta i func resourceAwsElasticacheReplicationGroupDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).elasticacheconn - req := &elasticache.DeleteReplicationGroupInput{ - ReplicationGroupId: aws.String(d.Id()), - } - - _, err := conn.DeleteReplicationGroup(req) + err := deleteElasticacheReplicationGroup(d.Id(), conn) if err != nil { - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "ReplicationGroupNotFoundFault" { - d.SetId("") - return nil - } - - return fmt.Errorf("Error deleting Elasticache replication group: %s", err) - } - - log.Printf("[DEBUG] Waiting for deletion: %v", d.Id()) - stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "available", "deleting"}, - Target: []string{}, - Refresh: cacheReplicationGroupStateRefreshFunc(conn, d.Id(), "", []string{}), - Timeout: 40 * time.Minute, - MinTimeout: 10 * time.Second, - Delay: 30 * time.Second, - } - - _, sterr := stateConf.WaitForState() - if sterr != nil { - return fmt.Errorf("Error waiting for replication group (%s) to delete: %s", d.Id(), sterr) + return fmt.Errorf("error deleting Elasticache Replication Group (%s): %s", d.Id(), err) } return nil } -func cacheReplicationGroupStateRefreshFunc(conn *elasticache.ElastiCache, replicationGroupId, givenState string, pending []string) resource.StateRefreshFunc { +func cacheReplicationGroupStateRefreshFunc(conn *elasticache.ElastiCache, replicationGroupId string, pending []string) resource.StateRefreshFunc { return func() (interface{}, string, error) { resp, err := conn.DescribeReplicationGroups(&elasticache.DescribeReplicationGroupsInput{ ReplicationGroupId: aws.String(replicationGroupId), }) if err != nil { - if eccErr, ok := err.(awserr.Error); ok && eccErr.Code() == "ReplicationGroupNotFoundFault" { + if isAWSErr(err, elasticache.ErrCodeReplicationGroupNotFoundFault, "") { log.Printf("[DEBUG] Replication Group Not Found") return nil, "", nil } @@ -513,7 +791,7 @@ func cacheReplicationGroupStateRefreshFunc(conn *elasticache.ElastiCache, replic } if len(resp.ReplicationGroups) == 0 { - return nil, "", fmt.Errorf("[WARN] Error: no Cache Replication Groups found for id (%s)", replicationGroupId) + return nil, "", fmt.Errorf("Error: no Cache Replication Groups found for id (%s)", replicationGroupId) } var rg *elasticache.ReplicationGroup @@ -525,7 +803,7 @@ func cacheReplicationGroupStateRefreshFunc(conn *elasticache.ElastiCache, replic } if rg == nil { - return nil, "", fmt.Errorf("[WARN] Error: no matching ElastiCache Replication Group for id (%s)", replicationGroupId) + return nil, "", fmt.Errorf("Error: no matching ElastiCache Replication Group for id (%s)", replicationGroupId) } log.Printf("[DEBUG] ElastiCache Replication Group (%s) status: %v", replicationGroupId, *rg.Status) @@ -544,6 +822,80 @@ func cacheReplicationGroupStateRefreshFunc(conn *elasticache.ElastiCache, replic } } +func deleteElasticacheReplicationGroup(replicationGroupID string, conn *elasticache.ElastiCache) error { + input := &elasticache.DeleteReplicationGroupInput{ + ReplicationGroupId: aws.String(replicationGroupID), + } + + // 10 minutes should give any creating/deleting cache clusters or snapshots time to complete + err := resource.Retry(10*time.Minute, func() *resource.RetryError { + _, err := conn.DeleteReplicationGroup(input) + if err != nil { + if isAWSErr(err, 
elasticache.ErrCodeReplicationGroupNotFoundFault, "") { + return nil + } + // Cache Cluster is creating/deleting or Replication Group is snapshotting + // InvalidReplicationGroupState: Cache cluster tf-acc-test-uqhe-003 is not in a valid state to be deleted + if isAWSErr(err, elasticache.ErrCodeInvalidReplicationGroupStateFault, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] Waiting for deletion: %s", replicationGroupID) + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating", "available", "deleting"}, + Target: []string{}, + Refresh: cacheReplicationGroupStateRefreshFunc(conn, replicationGroupID, []string{}), + Timeout: 40 * time.Minute, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + _, err = stateConf.WaitForState() + return err +} + +func flattenElasticacheNodeGroupsToClusterMode(clusterEnabled bool, nodeGroups []*elasticache.NodeGroup) []map[string]interface{} { + if !clusterEnabled { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "num_node_groups": 0, + "replicas_per_node_group": 0, + } + + if len(nodeGroups) == 0 { + return []map[string]interface{}{m} + } + + m["num_node_groups"] = len(nodeGroups) + m["replicas_per_node_group"] = (len(nodeGroups[0].NodeGroupMembers) - 1) + return []map[string]interface{}{m} +} + +func waitForModifyElasticacheReplicationGroup(conn *elasticache.ElastiCache, replicationGroupID string, timeout time.Duration) error { + pending := []string{"creating", "modifying", "snapshotting"} + stateConf := &resource.StateChangeConf{ + Pending: pending, + Target: []string{"available"}, + Refresh: cacheReplicationGroupStateRefreshFunc(conn, replicationGroupID, pending), + Timeout: timeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + log.Printf("[DEBUG] Waiting for Elasticache Replication Group (%s) to become available", replicationGroupID) + _, err := stateConf.WaitForState() + return err +} + func validateAwsElastiCacheReplicationGroupEngine(v interface{}, k string) (ws []string, errors []error) { if strings.ToLower(v.(string)) != "redis" { errors = append(errors, fmt.Errorf("The only acceptable Engine type when using Replication Groups is Redis")) diff --git a/aws/resource_aws_elasticache_replication_group_test.go b/aws/resource_aws_elasticache_replication_group_test.go index eb441b00c06..2c89659dc52 100644 --- a/aws/resource_aws_elasticache_replication_group_test.go +++ b/aws/resource_aws_elasticache_replication_group_test.go @@ -2,8 +2,12 @@ package aws import ( "fmt" + "log" + "os" "regexp" + "strings" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -13,10 +17,95 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_elasticache_replication_group", &resource.Sweeper{ + Name: "aws_elasticache_replication_group", + F: testSweepElasticacheReplicationGroups, + }) +} + +func testSweepElasticacheReplicationGroups(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).elasticacheconn + + prefixes := []string{ + "tf-", + "tf-test-", + "tf-acc-test-", + } + + err = conn.DescribeReplicationGroupsPages(&elasticache.DescribeReplicationGroupsInput{}, func(page *elasticache.DescribeReplicationGroupsOutput, isLast bool) bool { + if 
len(page.ReplicationGroups) == 0 { + log.Print("[DEBUG] No Elasticache Replicaton Groups to sweep") + return false + } + + for _, replicationGroup := range page.ReplicationGroups { + id := aws.StringValue(replicationGroup.ReplicationGroupId) + skip := true + for _, prefix := range prefixes { + if strings.HasPrefix(id, prefix) { + skip = false + break + } + } + if skip { + log.Printf("[INFO] Skipping Elasticache Replication Group: %s", id) + continue + } + log.Printf("[INFO] Deleting Elasticache Replication Group: %s", id) + err := deleteElasticacheReplicationGroup(id, conn) + if err != nil { + log.Printf("[ERROR] Failed to delete Elasticache Replication Group (%s): %s", id, err) + } + } + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Elasticache Replication Group sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Elasticache Replication Groups: %s", err) + } + return nil +} + +func TestAccAWSElasticacheReplicationGroup_importBasic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + name := acctest.RandString(10) + + resourceName := "aws_elasticache_replication_group.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheReplicationGroupConfig(name), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, //not in the API + }, + }, + }) +} + func TestAccAWSElasticacheReplicationGroup_basic(t *testing.T) { var rg elasticache.ReplicationGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -25,8 +114,12 @@ func TestAccAWSElasticacheReplicationGroup_basic(t *testing.T) { Config: testAccAWSElasticacheReplicationGroupConfig(acctest.RandString(10)), Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheReplicationGroupExists("aws_elasticache_replication_group.bar", &rg), + resource.TestCheckResourceAttr( + "aws_elasticache_replication_group.bar", "cluster_mode.#", "0"), resource.TestCheckResourceAttr( "aws_elasticache_replication_group.bar", "number_cache_clusters", "2"), + resource.TestCheckResourceAttr( + "aws_elasticache_replication_group.bar", "member_clusters.#", "2"), resource.TestCheckResourceAttr( "aws_elasticache_replication_group.bar", "auto_minor_version_upgrade", "false"), ), @@ -39,7 +132,7 @@ func TestAccAWSElasticacheReplicationGroup_Uppercase(t *testing.T) { var rg elasticache.ReplicationGroup rStr := acctest.RandString(5) rgName := fmt.Sprintf("TF-ELASTIRG-%s", rStr) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -59,7 +152,7 @@ func TestAccAWSElasticacheReplicationGroup_Uppercase(t *testing.T) { func TestAccAWSElasticacheReplicationGroup_updateDescription(t *testing.T) { var rg elasticache.ReplicationGroup rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -96,7 +189,7 @@ func TestAccAWSElasticacheReplicationGroup_updateDescription(t *testing.T) { func TestAccAWSElasticacheReplicationGroup_updateMaintenanceWindow(t *testing.T) { var rg elasticache.ReplicationGroup rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -124,7 +217,7 @@ func TestAccAWSElasticacheReplicationGroup_updateMaintenanceWindow(t *testing.T) func TestAccAWSElasticacheReplicationGroup_updateNodeSize(t *testing.T) { var rg elasticache.ReplicationGroup rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -159,7 +252,7 @@ func TestAccAWSElasticacheReplicationGroup_updateParameterGroup(t *testing.T) { var rg elasticache.ReplicationGroup rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -168,8 +261,8 @@ func TestAccAWSElasticacheReplicationGroup_updateParameterGroup(t *testing.T) { Config: testAccAWSElasticacheReplicationGroupConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheReplicationGroupExists("aws_elasticache_replication_group.bar", &rg), - resource.TestCheckResourceAttr( - "aws_elasticache_replication_group.bar", "parameter_group_name", "default.redis3.2"), + resource.TestMatchResourceAttr( + "aws_elasticache_replication_group.bar", "parameter_group_name", regexp.MustCompile(`^default.redis.+`)), ), }, @@ -187,7 +280,7 @@ func TestAccAWSElasticacheReplicationGroup_updateParameterGroup(t *testing.T) { func TestAccAWSElasticacheReplicationGroup_vpc(t *testing.T) { var rg elasticache.ReplicationGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -208,7 +301,7 @@ func TestAccAWSElasticacheReplicationGroup_vpc(t *testing.T) { func TestAccAWSElasticacheReplicationGroup_multiAzInVpc(t *testing.T) { var rg elasticache.ReplicationGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -235,7 +328,7 @@ func TestAccAWSElasticacheReplicationGroup_multiAzInVpc(t *testing.T) { func TestAccAWSElasticacheReplicationGroup_redisClusterInVpc2(t *testing.T) { var rg elasticache.ReplicationGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -247,45 +340,83 @@ func TestAccAWSElasticacheReplicationGroup_redisClusterInVpc2(t *testing.T) { resource.TestCheckResourceAttr( "aws_elasticache_replication_group.bar", "number_cache_clusters", "2"), resource.TestCheckResourceAttr( - "aws_elasticache_replication_group.bar", "automatic_failover_enabled", 
"true"), + "aws_elasticache_replication_group.bar", "automatic_failover_enabled", "false"), resource.TestCheckResourceAttr( "aws_elasticache_replication_group.bar", "snapshot_window", "02:00-03:00"), resource.TestCheckResourceAttr( "aws_elasticache_replication_group.bar", "snapshot_retention_limit", "7"), resource.TestCheckResourceAttrSet( - "aws_elasticache_replication_group.bar", "configuration_endpoint_address"), + "aws_elasticache_replication_group.bar", "primary_endpoint_address"), ), }, }, }) } -func TestAccAWSElasticacheReplicationGroup_nativeRedisCluster(t *testing.T) { +func TestAccAWSElasticacheReplicationGroup_ClusterMode_Basic(t *testing.T) { var rg elasticache.ReplicationGroup - rInt := acctest.RandInt() rName := acctest.RandString(10) + resourceName := "aws_elasticache_replication_group.bar" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSElasticacheReplicationGroupNativeRedisClusterConfig(rInt, rName), + Config: testAccAWSElasticacheReplicationGroupNativeRedisClusterConfig(rName, 2, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSElasticacheReplicationGroupExists("aws_elasticache_replication_group.bar", &rg), - resource.TestCheckResourceAttr( - "aws_elasticache_replication_group.bar", "number_cache_clusters", "4"), - resource.TestCheckResourceAttr( - "aws_elasticache_replication_group.bar", "cluster_mode.#", "1"), - resource.TestCheckResourceAttr( - "aws_elasticache_replication_group.bar", "cluster_mode.4170186206.num_node_groups", "2"), - resource.TestCheckResourceAttr( - "aws_elasticache_replication_group.bar", "cluster_mode.4170186206.replicas_per_node_group", "1"), - resource.TestCheckResourceAttr( - "aws_elasticache_replication_group.bar", "port", "6379"), - resource.TestCheckResourceAttrSet( - "aws_elasticache_replication_group.bar", "configuration_endpoint_address"), + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "4"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.0.num_node_groups", "2"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.0.replicas_per_node_group", "1"), + resource.TestCheckResourceAttr(resourceName, "port", "6379"), + resource.TestCheckResourceAttrSet(resourceName, "configuration_endpoint_address"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheReplicationGroup_ClusterMode_NumNodeGroups(t *testing.T) { + var rg elasticache.ReplicationGroup + rName := acctest.RandString(10) + resourceName := "aws_elasticache_replication_group.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheReplicationGroupNativeRedisClusterConfig(rName, 3, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "6"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.0.num_node_groups", "3"), + resource.TestCheckResourceAttr(resourceName, 
"cluster_mode.0.replicas_per_node_group", "1"), + ), + }, + { + Config: testAccAWSElasticacheReplicationGroupNativeRedisClusterConfig(rName, 1, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "2"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.0.num_node_groups", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.0.replicas_per_node_group", "1"), + ), + }, + { + Config: testAccAWSElasticacheReplicationGroupNativeRedisClusterConfig(rName, 2, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "4"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.0.num_node_groups", "2"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.0.replicas_per_node_group", "1"), ), }, }, @@ -296,7 +427,7 @@ func TestAccAWSElasticacheReplicationGroup_clusteringAndCacheNodesCausesError(t rInt := acctest.RandInt() rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -313,7 +444,7 @@ func TestAccAWSElasticacheReplicationGroup_enableSnapshotting(t *testing.T) { var rg elasticache.ReplicationGroup rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -341,7 +472,7 @@ func TestAccAWSElasticacheReplicationGroup_enableSnapshotting(t *testing.T) { func TestAccAWSElasticacheReplicationGroup_enableAuthTokenTransitEncryption(t *testing.T) { var rg elasticache.ReplicationGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -360,7 +491,7 @@ func TestAccAWSElasticacheReplicationGroup_enableAuthTokenTransitEncryption(t *t func TestAccAWSElasticacheReplicationGroup_enableAtRestEncryption(t *testing.T) { var rg elasticache.ReplicationGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, @@ -377,6 +508,163 @@ func TestAccAWSElasticacheReplicationGroup_enableAtRestEncryption(t *testing.T) }) } +func TestAccAWSElasticacheReplicationGroup_NumberCacheClusters(t *testing.T) { + var replicationGroup elasticache.ReplicationGroup + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(4)) + resourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheReplicationGroupConfig_NumberCacheClusters(rName, 2, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, 
&replicationGroup), + resource.TestCheckResourceAttr(resourceName, "automatic_failover_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "2"), + ), + }, + { + Config: testAccAWSElasticacheReplicationGroupConfig_NumberCacheClusters(rName, 4, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &replicationGroup), + resource.TestCheckResourceAttr(resourceName, "automatic_failover_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "4"), + ), + }, + { + Config: testAccAWSElasticacheReplicationGroupConfig_NumberCacheClusters(rName, 2, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &replicationGroup), + resource.TestCheckResourceAttr(resourceName, "automatic_failover_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "2"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheReplicationGroup_NumberCacheClusters_Failover_AutoFailoverDisabled(t *testing.T) { + var replicationGroup elasticache.ReplicationGroup + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(4)) + resourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheReplicationGroupConfig_NumberCacheClusters(rName, 3, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &replicationGroup), + resource.TestCheckResourceAttr(resourceName, "automatic_failover_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "3"), + ), + }, + { + PreConfig: func() { + // Simulate failover so primary is on node we are trying to delete + conn := testAccProvider.Meta().(*AWSClient).elasticacheconn + input := &elasticache.ModifyReplicationGroupInput{ + ApplyImmediately: aws.Bool(true), + PrimaryClusterId: aws.String(fmt.Sprintf("%s-003", rName)), + ReplicationGroupId: aws.String(rName), + } + if _, err := conn.ModifyReplicationGroup(input); err != nil { + t.Fatalf("error setting new primary cache cluster: %s", err) + } + if err := waitForModifyElasticacheReplicationGroup(conn, rName, 40*time.Minute); err != nil { + t.Fatalf("error waiting for new primary cache cluster: %s", err) + } + }, + Config: testAccAWSElasticacheReplicationGroupConfig_NumberCacheClusters(rName, 2, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &replicationGroup), + resource.TestCheckResourceAttr(resourceName, "automatic_failover_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "2"), + ), + }, + }, + }) +} + +func TestAccAWSElasticacheReplicationGroup_NumberCacheClusters_Failover_AutoFailoverEnabled(t *testing.T) { + var replicationGroup elasticache.ReplicationGroup + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(4)) + resourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheReplicationGroupConfig_NumberCacheClusters(rName, 3, true), + 
Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &replicationGroup), + resource.TestCheckResourceAttr(resourceName, "automatic_failover_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "3"), + ), + }, + { + PreConfig: func() { + // Simulate failover so primary is on node we are trying to delete + conn := testAccProvider.Meta().(*AWSClient).elasticacheconn + var input *elasticache.ModifyReplicationGroupInput + + // Must disable automatic failover first + input = &elasticache.ModifyReplicationGroupInput{ + ApplyImmediately: aws.Bool(true), + AutomaticFailoverEnabled: aws.Bool(false), + ReplicationGroupId: aws.String(rName), + } + if _, err := conn.ModifyReplicationGroup(input); err != nil { + t.Fatalf("error disabling automatic failover: %s", err) + } + if err := waitForModifyElasticacheReplicationGroup(conn, rName, 40*time.Minute); err != nil { + t.Fatalf("error waiting for disabling automatic failover: %s", err) + } + + // Failover + input = &elasticache.ModifyReplicationGroupInput{ + ApplyImmediately: aws.Bool(true), + PrimaryClusterId: aws.String(fmt.Sprintf("%s-003", rName)), + ReplicationGroupId: aws.String(rName), + } + if _, err := conn.ModifyReplicationGroup(input); err != nil { + t.Fatalf("error setting new primary cache cluster: %s", err) + } + if err := waitForModifyElasticacheReplicationGroup(conn, rName, 40*time.Minute); err != nil { + t.Fatalf("error waiting for new primary cache cluster: %s", err) + } + + // Re-enable automatic failover like nothing ever happened + input = &elasticache.ModifyReplicationGroupInput{ + ApplyImmediately: aws.Bool(true), + AutomaticFailoverEnabled: aws.Bool(true), + ReplicationGroupId: aws.String(rName), + } + if _, err := conn.ModifyReplicationGroup(input); err != nil { + t.Fatalf("error enabled automatic failover: %s", err) + } + if err := waitForModifyElasticacheReplicationGroup(conn, rName, 40*time.Minute); err != nil { + t.Fatalf("error waiting for enabled automatic failover: %s", err) + } + }, + Config: testAccAWSElasticacheReplicationGroupConfig_NumberCacheClusters(rName, 2, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheReplicationGroupExists(resourceName, &replicationGroup), + resource.TestCheckResourceAttr(resourceName, "automatic_failover_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "2"), + ), + }, + }, + }) +} + func TestResourceAWSElastiCacheReplicationGroupIdValidation(t *testing.T) { cases := []struct { Value string @@ -526,7 +814,6 @@ resource "aws_elasticache_replication_group" "bar" { node_type = "cache.m1.small" number_cache_clusters = 2 port = 6379 - parameter_group_name = "default.redis3.2" security_group_names = ["${aws_elasticache_security_group.bar.name}"] apply_immediately = true auto_minor_version_upgrade = false @@ -574,7 +861,6 @@ resource "aws_elasticache_replication_group" "bar" { node_type = "cache.m1.small" number_cache_clusters = 2 port = 6379 - parameter_group_name = "default.redis3.2" security_group_names = ["${aws_elasticache_security_group.bar.name}"] apply_immediately = true auto_minor_version_upgrade = false @@ -608,7 +894,9 @@ resource "aws_elasticache_security_group" "bar" { resource "aws_elasticache_parameter_group" "bar" { name = "allkeys-lru-%d" - family = "redis3.2" + # We do not have a data source for "latest" Elasticache family + # so unfortunately we must hardcode this for now + family = "redis4.0" parameter { name = 
"maxmemory-policy" @@ -656,7 +944,6 @@ resource "aws_elasticache_replication_group" "bar" { node_type = "cache.m1.small" number_cache_clusters = 2 port = 6379 - parameter_group_name = "default.redis3.2" security_group_names = ["${aws_elasticache_security_group.bar.name}"] apply_immediately = true auto_minor_version_upgrade = true @@ -691,7 +978,6 @@ resource "aws_elasticache_replication_group" "bar" { node_type = "cache.m1.small" number_cache_clusters = 2 port = 6379 - parameter_group_name = "default.redis3.2" security_group_names = ["${aws_elasticache_security_group.bar.name}"] apply_immediately = true auto_minor_version_upgrade = true @@ -728,7 +1014,6 @@ resource "aws_elasticache_replication_group" "bar" { node_type = "cache.m1.medium" number_cache_clusters = 2 port = 6379 - parameter_group_name = "default.redis3.2" security_group_names = ["${aws_elasticache_security_group.bar.name}"] apply_immediately = true }`, rName, rName, rName) @@ -777,7 +1062,6 @@ resource "aws_elasticache_replication_group" "bar" { port = 6379 subnet_group_name = "${aws_elasticache_subnet_group.bar.name}" security_group_ids = ["${aws_security_group.bar.id}"] - parameter_group_name = "default.redis3.2" availability_zones = ["us-west-2a"] auto_minor_version_upgrade = false } @@ -839,7 +1123,6 @@ resource "aws_elasticache_replication_group" "bar" { port = 6379 subnet_group_name = "${aws_elasticache_subnet_group.bar.name}" security_group_ids = ["${aws_security_group.bar.id}"] - parameter_group_name = "default.redis3.2" availability_zones = ["us-west-2a","us-west-2b"] automatic_failover_enabled = true snapshot_window = "02:00-03:00" @@ -897,14 +1180,13 @@ resource "aws_security_group" "bar" { resource "aws_elasticache_replication_group" "bar" { replication_group_id = "tf-%s" replication_group_description = "test description" - node_type = "cache.t2.micro" + node_type = "cache.m3.medium" number_cache_clusters = "2" port = 6379 subnet_group_name = "${aws_elasticache_subnet_group.bar.name}" security_group_ids = ["${aws_security_group.bar.id}"] - parameter_group_name = "default.redis3.2.cluster.on" availability_zones = ["us-west-2a","us-west-2b"] - automatic_failover_enabled = true + automatic_failover_enabled = false snapshot_window = "02:00-03:00" snapshot_retention_limit = 7 engine_version = "3.2.4" @@ -967,7 +1249,6 @@ resource "aws_elasticache_replication_group" "bar" { port = 6379 subnet_group_name = "${aws_elasticache_subnet_group.bar.name}" security_group_ids = ["${aws_security_group.bar.id}"] - parameter_group_name = "default.redis3.2.cluster.on" automatic_failover_enabled = true cluster_mode { replicas_per_node_group = 1 @@ -977,7 +1258,7 @@ resource "aws_elasticache_replication_group" "bar" { }`, rInt, rInt, rName) } -func testAccAWSElasticacheReplicationGroupNativeRedisClusterConfig(rInt int, rName string) string { +func testAccAWSElasticacheReplicationGroupNativeRedisClusterConfig(rName string, numNodeGroups, replicasPerNodeGroup int) string { return fmt.Sprintf(` resource "aws_vpc" "foo" { cidr_block = "192.168.0.0/16" @@ -1005,7 +1286,7 @@ resource "aws_subnet" "bar" { } resource "aws_elasticache_subnet_group" "bar" { - name = "tf-test-cache-subnet-%03d" + name = "tf-test-%[1]s" description = "tf-test-cache-subnet-group-descr" subnet_ids = [ "${aws_subnet.foo.id}", @@ -1014,7 +1295,7 @@ resource "aws_elasticache_subnet_group" "bar" { } resource "aws_security_group" "bar" { - name = "tf-test-security-group-%03d" + name = "tf-test-%[1]s" description = "tf-test-security-group-descr" vpc_id = 
"${aws_vpc.foo.id}" ingress { @@ -1026,19 +1307,18 @@ resource "aws_security_group" "bar" { } resource "aws_elasticache_replication_group" "bar" { - replication_group_id = "tf-%s" + replication_group_id = "tf-%[1]s" replication_group_description = "test description" node_type = "cache.t2.micro" port = 6379 subnet_group_name = "${aws_elasticache_subnet_group.bar.name}" security_group_ids = ["${aws_security_group.bar.id}"] - parameter_group_name = "default.redis3.2.cluster.on" automatic_failover_enabled = true cluster_mode { - replicas_per_node_group = 1 - num_node_groups = 2 + num_node_groups = %d + replicas_per_node_group = %d } -}`, rInt, rInt, rName) +}`, rName, numNodeGroups, replicasPerNodeGroup) } func testAccAWSElasticacheReplicationGroup_EnableAtRestEncryptionConfig(rInt int, rString string) string { @@ -1149,3 +1429,42 @@ resource "aws_elasticache_replication_group" "bar" { } `, rInt, rInt, rString10, rString16) } + +func testAccAWSElasticacheReplicationGroupConfig_NumberCacheClusters(rName string, numberCacheClusters int, autoFailover bool) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/16" + tags { + Name = "terraform-testacc-elasticache-replication-group-number-cache-clusters" + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "192.168.${count.index}.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "tf-acc-elasticache-replication-group-number-cache-clusters" + } +} + +resource "aws_elasticache_subnet_group" "test" { + name = "%[1]s" + subnet_ids = ["${aws_subnet.test.*.id}"] +} + +resource "aws_elasticache_replication_group" "test" { + # InvalidParameterCombination: Automatic failover is not supported for T1 and T2 cache node types. 
+ automatic_failover_enabled = %[2]t + node_type = "cache.m3.medium" + number_cache_clusters = %[3]d + replication_group_id = "%[1]s" + replication_group_description = "Terraform Acceptance Testing - number_cache_clusters" + subnet_group_name = "${aws_elasticache_subnet_group.test.name}" +}`, rName, autoFailover, numberCacheClusters) +} diff --git a/aws/resource_aws_elasticache_security_group.go b/aws/resource_aws_elasticache_security_group.go index 1f282f9a6b3..41fc03ce924 100644 --- a/aws/resource_aws_elasticache_security_group.go +++ b/aws/resource_aws_elasticache_security_group.go @@ -22,18 +22,18 @@ func resourceAwsElasticacheSecurityGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, ForceNew: true, Default: "Managed by Terraform", }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "security_group_names": &schema.Schema{ + "security_group_names": { Type: schema.TypeSet, Required: true, ForceNew: true, diff --git a/aws/resource_aws_elasticache_security_group_test.go b/aws/resource_aws_elasticache_security_group_test.go index 553e9fada73..4b918024ec8 100644 --- a/aws/resource_aws_elasticache_security_group_test.go +++ b/aws/resource_aws_elasticache_security_group_test.go @@ -2,7 +2,9 @@ package aws import ( "fmt" + "log" "os" + "strings" "testing" "github.com/aws/aws-sdk-go/aws" @@ -13,13 +15,76 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_elasticache_security_group", &resource.Sweeper{ + Name: "aws_elasticache_security_group", + F: testSweepElasticacheCacheSecurityGroups, + Dependencies: []string{ + "aws_elasticache_cluster", + "aws_elasticache_replication_group", + }, + }) +} + +func testSweepElasticacheCacheSecurityGroups(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).elasticacheconn + + prefixes := []string{ + "tf-", + "tf-test-", + "tf-acc-test-", + } + + err = conn.DescribeCacheSecurityGroupsPages(&elasticache.DescribeCacheSecurityGroupsInput{}, func(page *elasticache.DescribeCacheSecurityGroupsOutput, isLast bool) bool { + if len(page.CacheSecurityGroups) == 0 { + log.Print("[DEBUG] No Elasticache Cache Security Groups to sweep") + return false + } + + for _, securityGroup := range page.CacheSecurityGroups { + name := aws.StringValue(securityGroup.CacheSecurityGroupName) + skip := true + for _, prefix := range prefixes { + if strings.HasPrefix(name, prefix) { + skip = false + break + } + } + if skip { + log.Printf("[INFO] Skipping Elasticache Cache Security Group: %s", name) + continue + } + log.Printf("[INFO] Deleting Elasticache Cache Security Group: %s", name) + _, err := conn.DeleteCacheSecurityGroup(&elasticache.DeleteCacheSecurityGroupInput{ + CacheSecurityGroupName: aws.String(name), + }) + if err != nil { + log.Printf("[ERROR] Failed to delete Elasticache Cache Security Group (%s): %s", name, err) + } + } + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Elasticache Cache Security Group sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Elasticache Cache Security Groups: %s", err) + } + return nil +} + func TestAccAWSElasticacheSecurityGroup_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, 
resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSElasticacheSecurityGroupConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheSecurityGroupExists("aws_elasticache_security_group.bar"), @@ -37,16 +102,16 @@ func TestAccAWSElasticacheSecurityGroup_Import(t *testing.T) { os.Setenv("AWS_DEFAULT_REGION", "us-east-1") defer os.Setenv("AWS_DEFAULT_REGION", oldRegion) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheSecurityGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSElasticacheSecurityGroupConfig, }, - resource.TestStep{ + { ResourceName: "aws_elasticache_security_group.bar", ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_elasticache_subnet_group.go b/aws/resource_aws_elasticache_subnet_group.go index efae2e703fa..fce1e9d53a7 100644 --- a/aws/resource_aws_elasticache_subnet_group.go +++ b/aws/resource_aws_elasticache_subnet_group.go @@ -24,12 +24,12 @@ func resourceAwsElasticacheSubnetGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, Default: "Managed by Terraform", }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -40,7 +40,7 @@ func resourceAwsElasticacheSubnetGroup() *schema.Resource { return strings.ToLower(val.(string)) }, }, - "subnet_ids": &schema.Schema{ + "subnet_ids": { Type: schema.TypeSet, Required: true, Elem: &schema.Schema{Type: schema.TypeString}, diff --git a/aws/resource_aws_elasticache_subnet_group_test.go b/aws/resource_aws_elasticache_subnet_group_test.go index 4f87d6ba398..c7cbcb5f9d1 100644 --- a/aws/resource_aws_elasticache_subnet_group_test.go +++ b/aws/resource_aws_elasticache_subnet_group_test.go @@ -12,16 +12,40 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSElasticacheSubnetGroup_importBasic(t *testing.T) { + resourceName := "aws_elasticache_subnet_group.bar" + config := fmt.Sprintf(testAccAWSElasticacheSubnetGroupConfig, acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheSubnetGroupDestroy, + Steps: []resource.TestStep{ + { + Config: config, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "description"}, + }, + }, + }) +} + func TestAccAWSElasticacheSubnetGroup_basic(t *testing.T) { var csg elasticache.CacheSubnetGroup config := fmt.Sprintf(testAccAWSElasticacheSubnetGroupConfig, acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: config, Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheSubnetGroupExists("aws_elasticache_subnet_group.bar", &csg), @@ -40,12 +64,12 @@ func TestAccAWSElasticacheSubnetGroup_update(t *testing.T) { preConfig := fmt.Sprintf(testAccAWSElasticacheSubnetGroupUpdateConfigPre, ri) postConfig := 
fmt.Sprintf(testAccAWSElasticacheSubnetGroupUpdateConfigPost, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSElasticacheSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: preConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheSubnetGroupExists(rn, &csg), @@ -53,7 +77,7 @@ func TestAccAWSElasticacheSubnetGroup_update(t *testing.T) { ), }, - resource.TestStep{ + { Config: postConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSElasticacheSubnetGroupExists(rn, &csg), diff --git a/aws/resource_aws_elasticsearch_domain.go b/aws/resource_aws_elasticsearch_domain.go index 53173acb69c..06502feea26 100644 --- a/aws/resource_aws_elasticsearch_domain.go +++ b/aws/resource_aws_elasticsearch_domain.go @@ -11,7 +11,6 @@ import ( "github.com/aws/aws-sdk-go/aws/awserr" elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/aws/aws-sdk-go/service/iam" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/structure" @@ -33,7 +32,7 @@ func resourceAwsElasticSearchDomain() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs, }, "advanced_options": { @@ -74,6 +73,7 @@ func resourceAwsElasticSearchDomain() *schema.Resource { Type: schema.TypeList, Optional: true, Computed: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "ebs_enabled": { @@ -118,15 +118,32 @@ func resourceAwsElasticSearchDomain() *schema.Resource { }, }, }, + "node_to_node_encryption": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + }, + }, + }, "cluster_config": { Type: schema.TypeList, Optional: true, Computed: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "dedicated_master_count": { - Type: schema.TypeInt, - Optional: true, + Type: schema.TypeInt, + Optional: true, + DiffSuppressFunc: isDedicatedMasterDisabled, }, "dedicated_master_enabled": { Type: schema.TypeBool, @@ -134,8 +151,9 @@ func resourceAwsElasticSearchDomain() *schema.Resource { Default: false, }, "dedicated_master_type": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: isDedicatedMasterDisabled, }, "instance_count": { Type: schema.TypeInt, @@ -157,6 +175,13 @@ func resourceAwsElasticSearchDomain() *schema.Resource { "snapshot_options": { Type: schema.TypeList, Optional: true, + MaxItems: 1, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false + }, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "automated_snapshot_start_hour": { @@ -209,6 +234,7 @@ func resourceAwsElasticSearchDomain() *schema.Resource { ValidateFunc: validation.StringInSlice([]string{ elasticsearch.LogTypeIndexSlowLogs, elasticsearch.LogTypeSearchSlowLogs, + elasticsearch.LogTypeEsApplicationLogs, }, false), }, "cloudwatch_log_group_arn": { @@ -229,6 +255,34 @@ func 
resourceAwsElasticSearchDomain() *schema.Resource { Default: "1.5", ForceNew: true, }, + "cognito_options": { + Type: schema.TypeList, + Optional: true, + ForceNew: false, + MaxItems: 1, + DiffSuppressFunc: esCognitoOptionsDiffSuppress, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "user_pool_id": { + Type: schema.TypeString, + Required: true, + }, + "identity_pool_id": { + Type: schema.TypeString, + Required: true, + }, + "role_arn": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, "tags": tagsSchema(), }, @@ -282,7 +336,7 @@ func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface DomainName: aws.String(d.Get("domain_name").(string)), }) if err == nil { - return fmt.Errorf("ElasticSearch domain %q already exists", *resp.DomainStatus.DomainName) + return fmt.Errorf("ElasticSearch domain %s already exists", aws.StringValue(resp.DomainStatus.DomainName)) } input := elasticsearch.CreateElasticsearchDomainInput{ @@ -301,9 +355,7 @@ func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface if v, ok := d.GetOk("ebs_options"); ok { options := v.([]interface{}) - if len(options) > 1 { - return fmt.Errorf("Only a single ebs_options block is expected") - } else if len(options) == 1 { + if len(options) == 1 { if options[0] == nil { return fmt.Errorf("At least one field is expected inside ebs_options") } @@ -326,9 +378,7 @@ func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface if v, ok := d.GetOk("cluster_config"); ok { config := v.([]interface{}) - if len(config) > 1 { - return fmt.Errorf("Only a single cluster_config block is expected") - } else if len(config) == 1 { + if len(config) == 1 { if config[0] == nil { return fmt.Errorf("At least one field is expected inside cluster_config") } @@ -337,12 +387,17 @@ func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface } } + if v, ok := d.GetOk("node_to_node_encryption"); ok { + options := v.([]interface{}) + + s := options[0].(map[string]interface{}) + input.NodeToNodeEncryptionOptions = expandESNodeToNodeEncryptionOptions(s) + } + if v, ok := d.GetOk("snapshot_options"); ok { options := v.([]interface{}) - if len(options) > 1 { - return fmt.Errorf("Only a single snapshot_options block is expected") - } else if len(options) == 1 { + if len(options) == 1 { if options[0] == nil { return fmt.Errorf("At least one field is expected inside snapshot_options") } @@ -384,6 +439,10 @@ func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface } } + if v, ok := d.GetOk("cognito_options"); ok { + input.CognitoOptions = expandESCognitoOptions(v.([]interface{})) + } + log.Printf("[DEBUG] Creating ElasticSearch domain: %s", input) // IAM Roles can take some time to propagate if set in AccessPolicies and created in the same terraform @@ -393,7 +452,7 @@ func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface out, err = conn.CreateElasticsearchDomain(&input) if err != nil { if isAWSErr(err, "InvalidTypeException", "Error setting policy") { - log.Printf("[DEBUG] Retrying creation of ElasticSearch domain %s", *input.DomainName) + log.Printf("[DEBUG] Retrying creation of ElasticSearch domain %s", aws.StringValue(input.DomainName)) return resource.RetryableError(err) } if isAWSErr(err, "ValidationException", "enable a service-linked role to give Amazon ES permissions") { @@ -402,6 +461,12 @@ func 
resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface if isAWSErr(err, "ValidationException", "Domain is still being deleted") { return resource.RetryableError(err) } + if isAWSErr(err, "ValidationException", "Amazon Elasticsearch must be allowed to use the passed role") { + return resource.RetryableError(err) + } + if isAWSErr(err, "ValidationException", "The passed role has not propagated yet") { + return resource.RetryableError(err) + } return resource.NonRetryableError(err) } @@ -412,7 +477,7 @@ func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface return err } - d.SetId(*out.DomainStatus.ARN) + d.SetId(aws.StringValue(out.DomainStatus.ARN)) // Whilst the domain is being created, we can initialise the tags. // This should mean that if the creation fails (eg because your token expired @@ -420,7 +485,7 @@ func resourceAwsElasticSearchDomainCreate(d *schema.ResourceData, meta interface // the resources. tags := tagsFromMapElasticsearchService(d.Get("tags").(map[string]interface{})) - if err := setTagsElasticsearchService(conn, d, *out.DomainStatus.ARN); err != nil { + if err := setTagsElasticsearchService(conn, d, aws.StringValue(out.DomainStatus.ARN)); err != nil { return err } @@ -476,10 +541,10 @@ func resourceAwsElasticSearchDomainRead(d *schema.ResourceData, meta interface{} ds := out.DomainStatus - if ds.AccessPolicies != nil && *ds.AccessPolicies != "" { - policies, err := structure.NormalizeJsonString(*ds.AccessPolicies) + if ds.AccessPolicies != nil && aws.StringValue(ds.AccessPolicies) != "" { + policies, err := structure.NormalizeJsonString(aws.StringValue(ds.AccessPolicies)) if err != nil { - return errwrap.Wrapf("access policies contain an invalid JSON: {{err}}", err) + return fmt.Errorf("access policies contain an invalid JSON: %s", err) } d.Set("access_policies", policies) } @@ -487,7 +552,7 @@ func resourceAwsElasticSearchDomainRead(d *schema.ResourceData, meta interface{} if err != nil { return err } - d.SetId(*ds.ARN) + d.SetId(aws.StringValue(ds.ARN)) d.Set("domain_id", ds.DomainId) d.Set("domain_name", ds.DomainName) d.Set("elasticsearch_version", ds.ElasticsearchVersion) @@ -504,11 +569,19 @@ func resourceAwsElasticSearchDomainRead(d *schema.ResourceData, meta interface{} if err != nil { return err } - if ds.SnapshotOptions != nil { - d.Set("snapshot_options", map[string]interface{}{ - "automated_snapshot_start_hour": *ds.SnapshotOptions.AutomatedSnapshotStartHour, - }) + err = d.Set("cognito_options", flattenESCognitoOptions(ds.CognitoOptions)) + if err != nil { + return err + } + err = d.Set("node_to_node_encryption", flattenESNodeToNodeEncryptionOptions(ds.NodeToNodeEncryptionOptions)) + if err != nil { + return err } + + if err := d.Set("snapshot_options", flattenESSnapshotOptions(ds.SnapshotOptions)); err != nil { + return fmt.Errorf("error setting snapshot_options: %s", err) + } + if ds.VPCOptions != nil { err = d.Set("vpc_options", flattenESVPCDerivedInfo(ds.VPCOptions)) if err != nil { @@ -525,7 +598,7 @@ func resourceAwsElasticSearchDomainRead(d *schema.ResourceData, meta interface{} } } else { if ds.Endpoint != nil { - d.Set("endpoint", *ds.Endpoint) + d.Set("endpoint", aws.StringValue(ds.Endpoint)) d.Set("kibana_endpoint", getKibanaEndpoint(d)) } if ds.Endpoints != nil { @@ -539,9 +612,9 @@ func resourceAwsElasticSearchDomainRead(d *schema.ResourceData, meta interface{} mm := map[string]interface{}{} mm["log_type"] = k if val.CloudWatchLogsLogGroupArn != nil { - mm["cloudwatch_log_group_arn"] = 
*val.CloudWatchLogsLogGroupArn + mm["cloudwatch_log_group_arn"] = aws.StringValue(val.CloudWatchLogsLogGroupArn) } - mm["enabled"] = *val.Enabled + mm["enabled"] = aws.BoolValue(val.Enabled) m = append(m, mm) } d.Set("log_publishing_options", m) @@ -592,9 +665,7 @@ func resourceAwsElasticSearchDomainUpdate(d *schema.ResourceData, meta interface if d.HasChange("ebs_options") || d.HasChange("cluster_config") { options := d.Get("ebs_options").([]interface{}) - if len(options) > 1 { - return fmt.Errorf("Only a single ebs_options block is expected") - } else if len(options) == 1 { + if len(options) == 1 { s := options[0].(map[string]interface{}) input.EBSOptions = expandESEBSOptions(s) } @@ -602,9 +673,7 @@ func resourceAwsElasticSearchDomainUpdate(d *schema.ResourceData, meta interface if d.HasChange("cluster_config") { config := d.Get("cluster_config").([]interface{}) - if len(config) > 1 { - return fmt.Errorf("Only a single cluster_config block is expected") - } else if len(config) == 1 { + if len(config) == 1 { m := config[0].(map[string]interface{}) input.ElasticsearchClusterConfig = expandESClusterConfig(m) } @@ -615,9 +684,7 @@ func resourceAwsElasticSearchDomainUpdate(d *schema.ResourceData, meta interface if d.HasChange("snapshot_options") { options := d.Get("snapshot_options").([]interface{}) - if len(options) > 1 { - return fmt.Errorf("Only a single snapshot_options block is expected") - } else if len(options) == 1 { + if len(options) == 1 { o := options[0].(map[string]interface{}) snapshotOptions := elasticsearch.SnapshotOptions{ @@ -634,6 +701,11 @@ func resourceAwsElasticSearchDomainUpdate(d *schema.ResourceData, meta interface input.VPCOptions = expandESVPCOptions(s) } + if d.HasChange("cognito_options") { + options := d.Get("cognito_options").([]interface{}) + input.CognitoOptions = expandESCognitoOptions(options) + } + if d.HasChange("log_publishing_options") { input.LogPublishingOptions = make(map[string]*elasticsearch.LogPublishingOption) options := d.Get("log_publishing_options").(*schema.Set).List() @@ -730,3 +802,41 @@ func suppressEquivalentKmsKeyIds(k, old, new string, d *schema.ResourceData) boo func getKibanaEndpoint(d *schema.ResourceData) string { return d.Get("endpoint").(string) + "/_plugin/kibana/" } + +func esCognitoOptionsDiffSuppress(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false +} + +func isDedicatedMasterDisabled(k, old, new string, d *schema.ResourceData) bool { + v, ok := d.GetOk("cluster_config") + if ok { + clusterConfig := v.([]interface{})[0].(map[string]interface{}) + return !clusterConfig["dedicated_master_enabled"].(bool) + } + return false +} + +func expandESNodeToNodeEncryptionOptions(s map[string]interface{}) *elasticsearch.NodeToNodeEncryptionOptions { + options := elasticsearch.NodeToNodeEncryptionOptions{} + + if v, ok := s["enabled"]; ok { + options.Enabled = aws.Bool(v.(bool)) + } + return &options +} + +func flattenESNodeToNodeEncryptionOptions(o *elasticsearch.NodeToNodeEncryptionOptions) []map[string]interface{} { + if o == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{} + if o.Enabled != nil { + m["enabled"] = aws.BoolValue(o.Enabled) + } + + return []map[string]interface{}{m} +} diff --git a/aws/resource_aws_elasticsearch_domain_policy.go b/aws/resource_aws_elasticsearch_domain_policy.go index 7918ec5845d..da4e588160d 100644 --- a/aws/resource_aws_elasticsearch_domain_policy.go +++ b/aws/resource_aws_elasticsearch_domain_policy.go @@ 
-122,6 +122,5 @@ func resourceAwsElasticSearchDomainPolicyDelete(d *schema.ResourceData, meta int return err } - d.SetId("") return nil } diff --git a/aws/resource_aws_elasticsearch_domain_policy_test.go b/aws/resource_aws_elasticsearch_domain_policy_test.go index 5efd3eb994d..3e995234341 100644 --- a/aws/resource_aws_elasticsearch_domain_policy_test.go +++ b/aws/resource_aws_elasticsearch_domain_policy_test.go @@ -43,12 +43,12 @@ func TestAccAWSElasticSearchDomainPolicy_basic(t *testing.T) { }` name := fmt.Sprintf("tf-test-%d", ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccESDomainPolicyConfig(ri, policy), Check: resource.ComposeTestCheckFunc( testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), diff --git a/aws/resource_aws_elasticsearch_domain_test.go b/aws/resource_aws_elasticsearch_domain_test.go index e3c787b59b6..aa9bddfc42e 100644 --- a/aws/resource_aws_elasticsearch_domain_test.go +++ b/aws/resource_aws_elasticsearch_domain_test.go @@ -36,6 +36,10 @@ func testSweepElasticSearchDomains(region string) error { out, err := conn.ListDomainNames(&elasticsearch.ListDomainNamesInput{}) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Elasticsearch Domain sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error retrieving Elasticsearch Domains: %s", err) } for _, domain := range out.DomainNames { @@ -72,7 +76,7 @@ func TestAccAWSElasticSearchDomain_basic(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -90,12 +94,43 @@ func TestAccAWSElasticSearchDomain_basic(t *testing.T) { }) } +func TestAccAWSElasticSearchDomain_withDedicatedMaster(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccESDomainConfig_WithDedicatedClusterMaster(ri, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + ), + }, + { + Config: testAccESDomainConfig_WithDedicatedClusterMaster(ri, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + ), + }, + { + Config: testAccESDomainConfig_WithDedicatedClusterMaster(ri, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + ), + }, + }, + }) +} + func TestAccAWSElasticSearchDomain_duplicate(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus ri := acctest.RandInt() name := fmt.Sprintf("tf-test-%d", ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: func(s *terraform.State) error { @@ -132,7 +167,7 @@ func TestAccAWSElasticSearchDomain_duplicate(t *testing.T) { resource.TestCheckResourceAttr( "aws_elasticsearch_domain.example", "elasticsearch_version", 
"1.5"), ), - ExpectError: regexp.MustCompile(`domain "[^"]+" already exists`), + ExpectError: regexp.MustCompile(`domain .+ already exists`), }, }, }) @@ -143,7 +178,7 @@ func TestAccAWSElasticSearchDomain_importBasic(t *testing.T) { ri := acctest.RandInt() resourceId := fmt.Sprintf("tf-test-%d", ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -165,7 +200,7 @@ func TestAccAWSElasticSearchDomain_v23(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -186,7 +221,7 @@ func TestAccAWSElasticSearchDomain_complex(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -205,7 +240,7 @@ func TestAccAWSElasticSearchDomain_vpc(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -224,7 +259,7 @@ func TestAccAWSElasticSearchDomain_vpc_update(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -251,7 +286,7 @@ func TestAccAWSElasticSearchDomain_internetToVpcEndpoint(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -274,7 +309,7 @@ func TestAccAWSElasticSearchDomain_internetToVpcEndpoint(t *testing.T) { func TestAccAWSElasticSearchDomain_LogPublishingOptions(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -289,6 +324,60 @@ func TestAccAWSElasticSearchDomain_LogPublishingOptions(t *testing.T) { }) } +func TestAccAWSElasticSearchDomain_CognitoOptionsCreateAndRemove(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccESDomainConfig_CognitoOptions(ri, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + testAccCheckESCognitoOptions(true, &domain), + ), + }, + { + Config: testAccESDomainConfig_CognitoOptions(ri, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + 
testAccCheckESCognitoOptions(false, &domain), + ), + }, + }, + }) +} + +func TestAccAWSElasticSearchDomain_CognitoOptionsUpdate(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccESDomainConfig_CognitoOptions(ri, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + testAccCheckESCognitoOptions(false, &domain), + ), + }, + { + Config: testAccESDomainConfig_CognitoOptions(ri, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + testAccCheckESCognitoOptions(true, &domain), + ), + }, + }, + }) +} + func testAccCheckESNumberOfSecurityGroups(numberOfSecurityGroups int, status *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { return func(s *terraform.State) error { count := len(status.VPCOptions.SecurityGroupIds) @@ -302,7 +391,7 @@ func testAccCheckESNumberOfSecurityGroups(numberOfSecurityGroups int, status *el func TestAccAWSElasticSearchDomain_policy(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -320,7 +409,7 @@ func TestAccAWSElasticSearchDomain_policy(t *testing.T) { func TestAccAWSElasticSearchDomain_encrypt_at_rest_default_key(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -339,7 +428,7 @@ func TestAccAWSElasticSearchDomain_encrypt_at_rest_default_key(t *testing.T) { func TestAccAWSElasticSearchDomain_encrypt_at_rest_specify_key(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -355,12 +444,31 @@ func TestAccAWSElasticSearchDomain_encrypt_at_rest_specify_key(t *testing.T) { }) } +func TestAccAWSElasticSearchDomain_NodeToNodeEncryption(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccESDomainConfigwithNodeToNodeEncryption(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &domain), + testAccCheckESNodetoNodeEncrypted(true, &domain), + ), + }, + }, + }) +} + func TestAccAWSElasticSearchDomain_tags(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus var td elasticsearch.ListTagsOutput ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, @@ -389,7 +497,7 @@ func TestAccAWSElasticSearchDomain_update(t *testing.T) { var input 
elasticsearch.ElasticsearchDomainStatus ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckESDomainDestroy, @@ -413,6 +521,60 @@ func TestAccAWSElasticSearchDomain_update(t *testing.T) { }}) } +func TestAccAWSElasticSearchDomain_update_volume_type(t *testing.T) { + var input elasticsearch.ElasticsearchDomainStatus + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckESDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccESDomainConfig_ClusterUpdateEBSVolume(ri, 24), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &input), + testAccCheckESEBSVolumeEnabled(true, &input), + testAccCheckESEBSVolumeSize(24, &input), + ), + }, + { + Config: testAccESDomainConfig_ClusterUpdateInstanceStore(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &input), + testAccCheckESEBSVolumeEnabled(false, &input), + ), + }, + { + Config: testAccESDomainConfig_ClusterUpdateEBSVolume(ri, 12), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainExists("aws_elasticsearch_domain.example", &input), + testAccCheckESEBSVolumeEnabled(true, &input), + testAccCheckESEBSVolumeSize(12, &input), + ), + }, + }}) +} + +func testAccCheckESEBSVolumeSize(ebsVolumeSize int, status *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.EBSOptions + if *conf.VolumeSize != int64(ebsVolumeSize) { + return fmt.Errorf("EBS volume size differ. Given: %d, Expected: %d", *conf.VolumeSize, ebsVolumeSize) + } + return nil + } +} +func testAccCheckESEBSVolumeEnabled(ebsEnabled bool, status *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.EBSOptions + if *conf.EBSEnabled != ebsEnabled { + return fmt.Errorf("EBS volume enabled. Given: %t, Expected: %t", *conf.EBSEnabled, ebsEnabled) + } + return nil + } +} + func testAccCheckESSnapshotHour(snapshotHour int, status *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { return func(s *terraform.State) error { conf := status.SnapshotOptions @@ -443,6 +605,26 @@ func testAccCheckESEncrypted(encrypted bool, status *elasticsearch.Elasticsearch } } +func testAccCheckESNodetoNodeEncrypted(encrypted bool, status *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + options := status.NodeToNodeEncryptionOptions + if aws.BoolValue(options.Enabled) != encrypted { + return fmt.Errorf("Node-to-Node Encryption not set properly. Given: %t, Expected: %t", aws.BoolValue(options.Enabled), encrypted) + } + return nil + } +} + +func testAccCheckESCognitoOptions(enabled bool, status *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.CognitoOptions + if *conf.Enabled != enabled { + return fmt.Errorf("CognitoOptions not set properly. 
Given: %t, Expected: %t", *conf.Enabled, enabled) + } + return nil + } +} + func testAccLoadESTags(conf *elasticsearch.ElasticsearchDomainStatus, td *elasticsearch.ListTagsOutput) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).esconn @@ -524,6 +706,27 @@ resource "aws_elasticsearch_domain" "example" { `, randInt) } +func testAccESDomainConfig_WithDedicatedClusterMaster(randInt int, enabled bool) string { + return fmt.Sprintf(` +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-%d" + + cluster_config { + instance_type = "t2.micro.elasticsearch" + instance_count = "1" + dedicated_master_enabled = %t + dedicated_master_count = "3" + dedicated_master_type = "t2.micro.elasticsearch" + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, randInt, enabled) +} + func testAccESDomainConfig_ClusterUpdate(randInt, instanceInt, snapshotInt int) string { return fmt.Sprintf(` resource "aws_elasticsearch_domain" "example" { @@ -552,6 +755,55 @@ resource "aws_elasticsearch_domain" "example" { `, randInt, instanceInt, snapshotInt) } +func testAccESDomainConfig_ClusterUpdateEBSVolume(randInt, volumeSize int) string { + return fmt.Sprintf(` +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-%d" + + elasticsearch_version = "6.0" + + advanced_options { + "indices.fielddata.cache.size" = 80 + } + + ebs_options { + ebs_enabled = true + volume_size = %d + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "t2.small.elasticsearch" + } +} +`, randInt, volumeSize) +} + +func testAccESDomainConfig_ClusterUpdateInstanceStore(randInt int) string { + return fmt.Sprintf(` +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-%d" + + elasticsearch_version = "6.0" + + advanced_options { + "indices.fielddata.cache.size" = 80 + } + + ebs_options { + ebs_enabled = false + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "i3.large.elasticsearch" + } +} +`, randInt) +} + func testAccESDomainConfig_TagUpdate(randInt int) string { return fmt.Sprintf(` resource "aws_elasticsearch_domain" "example" { @@ -664,6 +916,30 @@ resource "aws_elasticsearch_domain" "example" { `, randESId, randESId) } +func testAccESDomainConfigwithNodeToNodeEncryption(randInt int) string { + return fmt.Sprintf(` + +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-%d" + + elasticsearch_version = "6.0" + + cluster_config { + instance_type = "m4.large.elasticsearch" + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + node_to_node_encryption { + enabled = true + } +} +`, randInt) +} + func testAccESDomainConfig_complex(randInt int) string { return fmt.Sprintf(` resource "aws_elasticsearch_domain" "example" { @@ -957,3 +1233,81 @@ resource "aws_elasticsearch_domain" "example" { } `, randInt, randInt, randInt) } + +func testAccESDomainConfig_CognitoOptions(randInt int, includeCognitoOptions bool) string { + + var cognitoOptions string + if includeCognitoOptions { + cognitoOptions = ` + cognito_options { + enabled = true + user_pool_id = "${aws_cognito_user_pool.example.id}" + identity_pool_id = "${aws_cognito_identity_pool.example.id}" + role_arn = "${aws_iam_role.example.arn}" + }` + } else { + cognitoOptions = "" + } + + return fmt.Sprintf(` +resource "aws_cognito_user_pool" "example" { + name = "tf-test-%d" +} + +resource "aws_cognito_user_pool_domain" "example" { + domain = "tf-test-%d" 
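+  # (descriptive note, assumption from AWS docs) Amazon ES Cognito authentication requires the user pool to have a domain configured, which is why this resource is part of the test config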
+ user_pool_id = "${aws_cognito_user_pool.example.id}" +} + +resource "aws_cognito_identity_pool" "example" { + identity_pool_name = "tf_test_%d" + allow_unauthenticated_identities = false + + lifecycle { + ignore_changes = ["cognito_identity_providers"] + } +} + +resource "aws_iam_role" "example" { + name = "tf-test-%d" + path = "/service-role/" + assume_role_policy = "${data.aws_iam_policy_document.assume-role-policy.json}" +} + +data "aws_iam_policy_document" "assume-role-policy" { + statement { + sid = "" + actions = ["sts:AssumeRole"] + effect = "Allow" + + principals { + type = "Service" + identifiers = ["es.amazonaws.com"] + } + } +} + +resource "aws_iam_role_policy_attachment" "example" { + role = "${aws_iam_role.example.name}" + policy_arn = "arn:aws:iam::aws:policy/AmazonESCognitoAccess" +} + +resource "aws_elasticsearch_domain" "example" { + domain_name = "tf-test-%d" + + elasticsearch_version = "6.0" + + %s + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + depends_on = [ + "aws_iam_role.example", + "aws_iam_role_policy_attachment.example" + ] +} +`, randInt, randInt, randInt, randInt, randInt, cognitoOptions) +} diff --git a/aws/resource_aws_elb.go b/aws/resource_aws_elb.go index caf28742c20..38ca61e265a 100644 --- a/aws/resource_aws_elb.go +++ b/aws/resource_aws_elb.go @@ -17,6 +17,7 @@ import ( "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsElb() *schema.Resource { @@ -30,7 +31,7 @@ func resourceAwsElb() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Optional: true, Computed: true, @@ -38,32 +39,33 @@ func resourceAwsElb() *schema.Resource { ConflictsWith: []string{"name_prefix"}, ValidateFunc: validateElbName, }, - "name_prefix": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: validateElbNamePrefix, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateElbNamePrefix, }, - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "internal": &schema.Schema{ + "internal": { Type: schema.TypeBool, Optional: true, ForceNew: true, Computed: true, }, - "cross_zone_load_balancing": &schema.Schema{ + "cross_zone_load_balancing": { Type: schema.TypeBool, Optional: true, Default: true, }, - "availability_zones": &schema.Schema{ + "availability_zones": { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, @@ -71,7 +73,7 @@ func resourceAwsElb() *schema.Resource { Set: schema.HashString, }, - "instances": &schema.Schema{ + "instances": { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, @@ -79,7 +81,7 @@ func resourceAwsElb() *schema.Resource { Set: schema.HashString, }, - "security_groups": &schema.Schema{ + "security_groups": { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, @@ -87,18 +89,18 @@ func resourceAwsElb() *schema.Resource { Set: schema.HashString, }, - "source_security_group": &schema.Schema{ + "source_security_group": { Type: schema.TypeString, Optional: true, Computed: true, }, - "source_security_group_id": &schema.Schema{ + "source_security_group_id": { Type: schema.TypeString, Computed: true, }, - "subnets": &schema.Schema{ + 
"subnets": { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, @@ -106,46 +108,46 @@ func resourceAwsElb() *schema.Resource { Set: schema.HashString, }, - "idle_timeout": &schema.Schema{ + "idle_timeout": { Type: schema.TypeInt, Optional: true, Default: 60, - ValidateFunc: validateIntegerInRange(1, 4000), + ValidateFunc: validation.IntBetween(1, 4000), }, - "connection_draining": &schema.Schema{ + "connection_draining": { Type: schema.TypeBool, Optional: true, Default: false, }, - "connection_draining_timeout": &schema.Schema{ + "connection_draining_timeout": { Type: schema.TypeInt, Optional: true, Default: 300, }, - "access_logs": &schema.Schema{ + "access_logs": { Type: schema.TypeList, Optional: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "interval": &schema.Schema{ + "interval": { Type: schema.TypeInt, Optional: true, Default: 60, ValidateFunc: validateAccessLogsInterval, }, - "bucket": &schema.Schema{ + "bucket": { Type: schema.TypeString, Required: true, }, - "bucket_prefix": &schema.Schema{ + "bucket_prefix": { Type: schema.TypeString, Optional: true, }, - "enabled": &schema.Schema{ + "enabled": { Type: schema.TypeBool, Optional: true, Default: true, @@ -154,36 +156,36 @@ func resourceAwsElb() *schema.Resource { }, }, - "listener": &schema.Schema{ + "listener": { Type: schema.TypeSet, Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "instance_port": &schema.Schema{ + "instance_port": { Type: schema.TypeInt, Required: true, - ValidateFunc: validateIntegerInRange(1, 65535), + ValidateFunc: validation.IntBetween(1, 65535), }, - "instance_protocol": &schema.Schema{ + "instance_protocol": { Type: schema.TypeString, Required: true, - ValidateFunc: validateListenerProtocol, + ValidateFunc: validateListenerProtocol(), }, - "lb_port": &schema.Schema{ + "lb_port": { Type: schema.TypeInt, Required: true, - ValidateFunc: validateIntegerInRange(1, 65535), + ValidateFunc: validation.IntBetween(1, 65535), }, - "lb_protocol": &schema.Schema{ + "lb_protocol": { Type: schema.TypeString, Required: true, - ValidateFunc: validateListenerProtocol, + ValidateFunc: validateListenerProtocol(), }, - "ssl_certificate_id": &schema.Schema{ + "ssl_certificate_id": { Type: schema.TypeString, Optional: true, ValidateFunc: validateArn, @@ -193,52 +195,52 @@ func resourceAwsElb() *schema.Resource { Set: resourceAwsElbListenerHash, }, - "health_check": &schema.Schema{ + "health_check": { Type: schema.TypeList, Optional: true, Computed: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "healthy_threshold": &schema.Schema{ + "healthy_threshold": { Type: schema.TypeInt, Required: true, - ValidateFunc: validateIntegerInRange(2, 10), + ValidateFunc: validation.IntBetween(2, 10), }, - "unhealthy_threshold": &schema.Schema{ + "unhealthy_threshold": { Type: schema.TypeInt, Required: true, - ValidateFunc: validateIntegerInRange(2, 10), + ValidateFunc: validation.IntBetween(2, 10), }, - "target": &schema.Schema{ + "target": { Type: schema.TypeString, Required: true, ValidateFunc: validateHeathCheckTarget, }, - "interval": &schema.Schema{ + "interval": { Type: schema.TypeInt, Required: true, - ValidateFunc: validateIntegerInRange(5, 300), + ValidateFunc: validation.IntBetween(5, 300), }, - "timeout": &schema.Schema{ + "timeout": { Type: schema.TypeInt, Required: true, - ValidateFunc: validateIntegerInRange(2, 60), + ValidateFunc: validation.IntBetween(2, 60), }, }, }, }, - "dns_name": &schema.Schema{ 
+ "dns_name": { Type: schema.TypeString, Computed: true, }, - "zone_id": &schema.Schema{ + "zone_id": { Type: schema.TypeString, Computed: true, }, @@ -302,7 +304,7 @@ func resourceAwsElbCreate(d *schema.ResourceData, meta interface{}) error { // Check for IAM SSL Cert error, eventual consistancy issue if awsErr.Code() == "CertificateNotFound" { return resource.RetryableError( - fmt.Errorf("[WARN] Error creating ELB Listener with SSL Cert, retrying: %s", err)) + fmt.Errorf("Error creating ELB Listener with SSL Cert, retrying: %s", err)) } } return resource.NonRetryableError(err) @@ -405,7 +407,7 @@ func flattenAwsELbResource(d *schema.ResourceData, ec2conn *ec2.EC2, elbconn *el elbVpc = *lb.VPCId sgId, err := sourceSGIdByName(ec2conn, *lb.SourceSecurityGroup.GroupName, elbVpc) if err != nil { - return fmt.Errorf("[WARN] Error looking up ELB Security Group ID: %s", err) + return fmt.Errorf("Error looking up ELB Security Group ID: %s", err) } else { d.Set("source_security_group_id", sgId) } @@ -446,6 +448,9 @@ func flattenAwsELbResource(d *schema.ResourceData, ec2conn *ec2.EC2, elbconn *el resp, err := elbconn.DescribeTags(&elb.DescribeTagsInput{ LoadBalancerNames: []*string{lb.LoadBalancerName}, }) + if err != nil { + return fmt.Errorf("error describing tags for ELB (%s): %s", d.Id(), err) + } var et []*elb.Tag if len(resp.TagDescriptions) > 0 { @@ -473,7 +478,10 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error { ns := n.(*schema.Set) remove, _ := expandListeners(os.Difference(ns).List()) - add, _ := expandListeners(ns.Difference(os).List()) + add, err := expandListeners(ns.Difference(os).List()) + if err != nil { + return err + } if len(remove) > 0 { ports := make([]*int64, 0, len(remove)) @@ -581,17 +589,12 @@ func resourceAwsElbUpdate(d *schema.ResourceData, meta interface{}) error { logs := d.Get("access_logs").([]interface{}) if len(logs) == 1 { l := logs[0].(map[string]interface{}) - accessLog := &elb.AccessLog{ - Enabled: aws.Bool(l["enabled"].(bool)), - EmitInterval: aws.Int64(int64(l["interval"].(int))), - S3BucketName: aws.String(l["bucket"].(string)), - } - - if l["bucket_prefix"] != "" { - accessLog.S3BucketPrefix = aws.String(l["bucket_prefix"].(string)) + attrs.LoadBalancerAttributes.AccessLog = &elb.AccessLog{ + Enabled: aws.Bool(l["enabled"].(bool)), + EmitInterval: aws.Int64(int64(l["interval"].(int))), + S3BucketName: aws.String(l["bucket"].(string)), + S3BucketPrefix: aws.String(l["bucket_prefix"].(string)), } - - attrs.LoadBalancerAttributes.AccessLog = accessLog } else if len(logs) == 0 { // disable access logs attrs.LoadBalancerAttributes.AccessLog = &elb.AccessLog{ @@ -962,18 +965,6 @@ func validateHeathCheckTarget(v interface{}, k string) (ws []string, errors []er return } -func validateListenerProtocol(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - - if !isValidProtocol(value) { - errors = append(errors, fmt.Errorf( - "%q contains an invalid Listener protocol %q. 
"+ - "Valid protocols are either %q, %q, %q, or %q.", - k, value, "TCP", "SSL", "HTTP", "HTTPS")) - } - return -} - func isValidProtocol(s string) bool { if s == "" { return false @@ -994,6 +985,15 @@ func isValidProtocol(s string) bool { return true } +func validateListenerProtocol() schema.SchemaValidateFunc { + return validation.StringInSlice([]string{ + "HTTP", + "HTTPS", + "SSL", + "TCP", + }, true) +} + // ELB automatically creates ENI(s) on creation // but the cleanup is asynchronous and may take time // which then blocks IGW, SG or VPC on deletion diff --git a/aws/resource_aws_elb_attachment.go b/aws/resource_aws_elb_attachment.go index 401544ad7b1..119c4e663a4 100644 --- a/aws/resource_aws_elb_attachment.go +++ b/aws/resource_aws_elb_attachment.go @@ -17,13 +17,13 @@ func resourceAwsElbAttachment() *schema.Resource { Delete: resourceAwsElbAttachmentDelete, Schema: map[string]*schema.Schema{ - "elb": &schema.Schema{ + "elb": { Type: schema.TypeString, ForceNew: true, Required: true, }, - "instance": &schema.Schema{ + "instance": { Type: schema.TypeString, ForceNew: true, Required: true, diff --git a/aws/resource_aws_elb_attachment_test.go b/aws/resource_aws_elb_attachment_test.go index 154f17c0161..b699c4b6eed 100644 --- a/aws/resource_aws_elb_attachment_test.go +++ b/aws/resource_aws_elb_attachment_test.go @@ -22,13 +22,13 @@ func TestAccAWSELBAttachment_basic(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBAttachmentConfig1, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -36,7 +36,7 @@ func TestAccAWSELBAttachment_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBAttachmentConfig2, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -44,7 +44,7 @@ func TestAccAWSELBAttachment_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBAttachmentConfig3, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -52,7 +52,7 @@ func TestAccAWSELBAttachment_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSELBAttachmentConfig4, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -93,13 +93,13 @@ func TestAccAWSELBAttachment_drift(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELBAttachmentConfig1, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.bar", &conf), @@ -108,7 +108,7 @@ func TestAccAWSELBAttachment_drift(t *testing.T) { }, // remove an instance from the ELB, and make sure it gets re-added - resource.TestStep{ + { Config: testAccAWSELBAttachmentConfig1, PreConfig: deregInstance, Check: resource.ComposeTestCheckFunc( diff --git a/aws/resource_aws_elb_test.go b/aws/resource_aws_elb_test.go index b6b1b0f82c3..f0abe4aac12 100644 --- a/aws/resource_aws_elb_test.go +++ b/aws/resource_aws_elb_test.go @@ -2,10 +2,12 @@ package aws import ( "fmt" + "log" "math/rand" "reflect" "regexp" "sort" + "strings" "testing" "time" @@ -17,10 
+19,93 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_elb", &resource.Sweeper{ + Name: "aws_elb", + F: testSweepELBs, + }) +} + +func testSweepELBs(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).elbconn + + prefixes := []string{ + "test-elb-", + } + + err = conn.DescribeLoadBalancersPages(&elb.DescribeLoadBalancersInput{}, func(out *elb.DescribeLoadBalancersOutput, isLast bool) bool { + if len(out.LoadBalancerDescriptions) == 0 { + log.Println("[INFO] No ELBs found for sweeping") + return false + } + + for _, lb := range out.LoadBalancerDescriptions { + skip := true + for _, prefix := range prefixes { + if strings.HasPrefix(*lb.LoadBalancerName, prefix) { + skip = false + break + } + } + if skip { + log.Printf("[INFO] Skipping ELB: %s", *lb.LoadBalancerName) + continue + } + log.Printf("[INFO] Deleting ELB: %s", *lb.LoadBalancerName) + + _, err := conn.DeleteLoadBalancer(&elb.DeleteLoadBalancerInput{ + LoadBalancerName: lb.LoadBalancerName, + }) + if err != nil { + log.Printf("[ERROR] Failed to delete ELB %s: %s", *lb.LoadBalancerName, err) + continue + } + err = cleanupELBNetworkInterfaces(client.(*AWSClient).ec2conn, *lb.LoadBalancerName) + if err != nil { + log.Printf("[WARN] Failed to cleanup ENIs for ELB %q: %s", *lb.LoadBalancerName, err) + } + } + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping ELB sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving ELBs: %s", err) + } + return nil +} + +func TestAccAWSELB_importBasic(t *testing.T) { + resourceName := "aws_elb.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSELBDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSELBConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSELB_basic(t *testing.T) { var conf elb.LoadBalancerDescription - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -59,13 +144,33 @@ func TestAccAWSELB_basic(t *testing.T) { }) } +func TestAccAWSELB_disappears(t *testing.T) { + var loadBalancer elb.LoadBalancerDescription + resourceName := "aws_elb.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSELBDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSELBConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists(resourceName, &loadBalancer), + testAccCheckAWSELBDisappears(&loadBalancer), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + func TestAccAWSELB_fullCharacterRange(t *testing.T) { var conf elb.LoadBalancerDescription - lbName := fmt.Sprintf("Tf-%d", - rand.New(rand.NewSource(time.Now().UnixNano())).Int()) + lbName := fmt.Sprintf("Tf-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.foo", Providers: testAccProviders, @@ -88,7 +193,7 @@ func TestAccAWSELB_AccessLogs_enabled(t *testing.T) { rName := fmt.Sprintf("terraform-access-logs-bucket-%d", 
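// Note on the aws_elb sweeper registered above (describing the shared
// helper/resource sweeper harness as an assumption, not something added in this
// diff): sweepers registered with resource.AddTestSweepers only execute when the
// test binary runs through resource.TestMain with the -sweep flag, e.g. roughly
//
//	go test ./aws -v -sweep=us-west-2 -sweep-run=aws_elb
//
// so normal acceptance-test runs are unaffected, and the "test-elb-" prefix list
// limits deletion to load balancers created by these tests.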
acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.foo", Providers: testAccProviders, @@ -133,7 +238,7 @@ func TestAccAWSELB_AccessLogs_disabled(t *testing.T) { rName := fmt.Sprintf("terraform-access-logs-bucket-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.foo", Providers: testAccProviders, @@ -177,13 +282,13 @@ func TestAccAWSELB_namePrefix(t *testing.T) { var conf elb.LoadBalancerDescription nameRegex := regexp.MustCompile("^test-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.test", Providers: testAccProviders, CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSELB_namePrefix, Check: resource.ComposeTestCheckFunc( testAccCheckAWSELBExists("aws_elb.test", &conf), @@ -199,7 +304,7 @@ func TestAccAWSELB_generatedName(t *testing.T) { var conf elb.LoadBalancerDescription generatedNameRegexp := regexp.MustCompile("^tf-lb-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.foo", Providers: testAccProviders, @@ -221,7 +326,7 @@ func TestAccAWSELB_generatesNameForZeroValue(t *testing.T) { var conf elb.LoadBalancerDescription generatedNameRegexp := regexp.MustCompile("^tf-lb-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.foo", Providers: testAccProviders, @@ -242,7 +347,7 @@ func TestAccAWSELB_generatesNameForZeroValue(t *testing.T) { func TestAccAWSELB_availabilityZones(t *testing.T) { var conf elb.LoadBalancerDescription - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -283,7 +388,7 @@ func TestAccAWSELB_tags(t *testing.T) { var conf elb.LoadBalancerDescription var td elb.TagDescription - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -313,9 +418,11 @@ func TestAccAWSELB_tags(t *testing.T) { }) } -func TestAccAWSELB_iam_server_cert(t *testing.T) { +func TestAccAWSELB_Listener_SSLCertificateID_IAMServerCertificate(t *testing.T) { var conf elb.LoadBalancerDescription - // var td elb.TagDescription + rName := fmt.Sprintf("tf-acctest-%s", acctest.RandString(10)) + resourceName := "aws_elb.bar" + testCheck := func(*terraform.State) error { if len(conf.ListenerDescriptions) != 1 { return fmt.Errorf( @@ -324,20 +431,27 @@ func TestAccAWSELB_iam_server_cert(t *testing.T) { } return nil } - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_elb.bar", - Providers: testAccProvidersWithTLS, - CheckDestroy: testAccCheckAWSELBDestroy, + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSELBDestroy, Steps: []resource.TestStep{ { - Config: testAccELBIAMServerCertConfig( - fmt.Sprintf("tf-acctest-%s", acctest.RandString(10))), + Config: 
testAccELBConfig_Listener_IAMServerCertificate(rName, "tcp"), + ExpectError: regexp.MustCompile(`ssl_certificate_id may be set only when protocol is 'https' or 'ssl'`), + }, + { + Config: testAccELBConfig_Listener_IAMServerCertificate(rName, "https"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSELBExists("aws_elb.bar", &conf), + testAccCheckAWSELBExists(resourceName, &conf), testCheck, ), }, + { + Config: testAccELBConfig_Listener_IAMServerCertificate_AddInvalidListener(rName), + ExpectError: regexp.MustCompile(`ssl_certificate_id may be set only when protocol is 'https' or 'ssl'`), + }, }, }) } @@ -345,7 +459,7 @@ func TestAccAWSELB_iam_server_cert(t *testing.T) { func TestAccAWSELB_swap_subnets(t *testing.T) { var conf elb.LoadBalancerDescription - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.ourapp", Providers: testAccProviders, @@ -402,7 +516,7 @@ func TestAccAWSELB_InstanceAttaching(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -431,7 +545,7 @@ func TestAccAWSELB_listener(t *testing.T) { var conf elb.LoadBalancerDescription resourceName := "aws_elb.bar" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: resourceName, Providers: testAccProviders, @@ -514,7 +628,7 @@ func TestAccAWSELB_listener(t *testing.T) { input := &elb.CreateLoadBalancerListenersInput{ LoadBalancerName: conf.LoadBalancerName, Listeners: []*elb.Listener{ - &elb.Listener{ + { InstancePort: aws.Int64(int64(22)), InstanceProtocol: aws.String("tcp"), LoadBalancerPort: aws.Int64(int64(22)), @@ -543,7 +657,7 @@ func TestAccAWSELB_listener(t *testing.T) { func TestAccAWSELB_HealthCheck(t *testing.T) { var conf elb.LoadBalancerDescription - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -571,7 +685,7 @@ func TestAccAWSELB_HealthCheck(t *testing.T) { } func TestAccAWSELBUpdate_HealthCheck(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -598,7 +712,7 @@ func TestAccAWSELBUpdate_HealthCheck(t *testing.T) { func TestAccAWSELB_Timeout(t *testing.T) { var conf elb.LoadBalancerDescription - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -618,7 +732,7 @@ func TestAccAWSELB_Timeout(t *testing.T) { } func TestAccAWSELBUpdate_Timeout(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -645,7 +759,7 @@ func TestAccAWSELBUpdate_Timeout(t *testing.T) { } func TestAccAWSELB_ConnectionDraining(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -667,7 +781,7 @@ func TestAccAWSELB_ConnectionDraining(t *testing.T) { } func TestAccAWSELBUpdate_ConnectionDraining(t 
*testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -708,7 +822,7 @@ func TestAccAWSELBUpdate_ConnectionDraining(t *testing.T) { } func TestAccAWSELB_SecurityGroups(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_elb.bar", Providers: testAccProviders, @@ -844,57 +958,6 @@ func TestResourceAWSELB_validateAccessLogsInterval(t *testing.T) { } -func TestResourceAWSELB_validateListenerProtocol(t *testing.T) { - type testCases struct { - Value string - ErrCount int - } - - invalidCases := []testCases{ - { - Value: "", - ErrCount: 1, - }, - { - Value: "incorrect", - ErrCount: 1, - }, - { - Value: "HTTP:", - ErrCount: 1, - }, - } - - for _, tc := range invalidCases { - _, errors := validateListenerProtocol(tc.Value, "protocol") - if len(errors) != tc.ErrCount { - t.Fatalf("Expected %q to trigger a validation error.", tc.Value) - } - } - - validCases := []testCases{ - { - Value: "TCP", - ErrCount: 0, - }, - { - Value: "ssl", - ErrCount: 0, - }, - { - Value: "HTTP", - ErrCount: 0, - }, - } - - for _, tc := range validCases { - _, errors := validateListenerProtocol(tc.Value, "protocol") - if len(errors) != tc.ErrCount { - t.Fatalf("Expected %q not to trigger a validation error.", tc.Value) - } - } -} - func TestResourceAWSELB_validateHealthCheckTarget(t *testing.T) { type testCase struct { Value string @@ -1024,6 +1087,19 @@ func testAccCheckAWSELBDestroy(s *terraform.State) error { return nil } +func testAccCheckAWSELBDisappears(loadBalancer *elb.LoadBalancerDescription) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).elbconn + + input := elb.DeleteLoadBalancerInput{ + LoadBalancerName: loadBalancer.LoadBalancerName, + } + _, err := conn.DeleteLoadBalancer(&input) + + return err + } +} + func testAccCheckAWSELBAttributes(conf *elb.LoadBalancerDescription) resource.TestCheckFunc { return func(s *terraform.State) error { zones := []string{"us-west-2a", "us-west-2b", "us-west-2c"} @@ -1374,20 +1450,6 @@ resource "aws_instance" "foo" { } ` -const testAccAWSELBConfigListenerSSLCertificateId = ` -resource "aws_elb" "bar" { - availability_zones = ["us-west-2a"] - - listener { - instance_port = 8000 - instance_protocol = "http" - ssl_certificate_id = "%s" - lb_port = 443 - lb_protocol = "https" - } -} -` - const testAccAWSELBConfigHealthCheck = ` resource "aws_elb" "bar" { availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] @@ -1568,33 +1630,63 @@ resource "aws_security_group" "bar" { } ` -func testAccELBIAMServerCertConfig(certName string) string { +func testAccELBConfig_Listener_IAMServerCertificate(certName, lbProtocol string) string { return fmt.Sprintf(` -%s +data "aws_availability_zones" "available" {} + +%[1]s resource "aws_iam_server_certificate" "test_cert" { - name = "%s" + name = "%[2]s" certificate_body = "${tls_self_signed_cert.example.cert_pem}" private_key = "${tls_private_key.example.private_key_pem}" } resource "aws_elb" "bar" { - availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] listener { - instance_port = 8000 - instance_protocol = "https" - lb_port = 80 - // Protocol should be case insensitive - lb_protocol = "HttPs" + instance_port = 443 + instance_protocol = 
"%[3]s" + lb_port = 443 + lb_protocol = "%[3]s" ssl_certificate_id = "${aws_iam_server_certificate.test_cert.arn}" } +} +`, testAccTLSServerCert, certName, lbProtocol) +} - tags { - bar = "baz" - } +func testAccELBConfig_Listener_IAMServerCertificate_AddInvalidListener(certName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} - cross_zone_load_balancing = true +%[1]s + +resource "aws_iam_server_certificate" "test_cert" { + name = "%[2]s" + certificate_body = "${tls_self_signed_cert.example.cert_pem}" + private_key = "${tls_private_key.example.private_key_pem}" +} + +resource "aws_elb" "bar" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + + listener { + instance_port = 443 + instance_protocol = "https" + lb_port = 443 + lb_protocol = "https" + ssl_certificate_id = "${aws_iam_server_certificate.test_cert.arn}" + } + + # lb_protocol tcp and ssl_certificate_id is not valid + listener { + instance_port = 8443 + instance_protocol = "tcp" + lb_port = 8443 + lb_protocol = "tcp" + ssl_certificate_id = "${aws_iam_server_certificate.test_cert.arn}" + } } `, testAccTLSServerCert, certName) } diff --git a/aws/resource_aws_emr_cluster.go b/aws/resource_aws_emr_cluster.go index b57e6aea024..366b2ad7e75 100644 --- a/aws/resource_aws_emr_cluster.go +++ b/aws/resource_aws_emr_cluster.go @@ -12,6 +12,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/private/protocol/json/jsonutil" "github.com/aws/aws-sdk-go/service/emr" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -42,6 +43,17 @@ func resourceAwsEMRCluster() *schema.Resource { Optional: true, ForceNew: true, }, + "additional_info": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.ValidateJsonString, + DiffSuppressFunc: suppressEquivalentJsonDiffs, + StateFunc: func(v interface{}) string { + json, _ := structure.NormalizeJsonString(v) + return json + }, + }, "core_instance_type": { Type: schema.TypeString, Optional: true, @@ -100,34 +112,42 @@ func resourceAwsEMRCluster() *schema.Resource { "key_name": { Type: schema.TypeString, Optional: true, + ForceNew: true, }, "subnet_id": { Type: schema.TypeString, Optional: true, + ForceNew: true, }, "additional_master_security_groups": { Type: schema.TypeString, Optional: true, + ForceNew: true, }, "additional_slave_security_groups": { Type: schema.TypeString, Optional: true, + ForceNew: true, }, "emr_managed_master_security_group": { Type: schema.TypeString, Optional: true, + ForceNew: true, }, "emr_managed_slave_security_group": { Type: schema.TypeString, Optional: true, + ForceNew: true, }, "instance_profile": { Type: schema.TypeString, Required: true, + ForceNew: true, }, "service_access_security_group": { Type: schema.TypeString, Optional: true, + ForceNew: true, }, }, }, @@ -217,7 +237,7 @@ func resourceAwsEMRCluster() *schema.Resource { Type: schema.TypeString, Optional: true, DiffSuppressFunc: suppressEquivalentJsonDiffs, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, StateFunc: func(v interface{}) string { jsonString, _ := structure.NormalizeJsonString(v) return jsonString @@ -327,9 +347,22 @@ func resourceAwsEMRCluster() *schema.Resource { }, "tags": tagsSchema(), "configurations": { - Type: schema.TypeString, - ForceNew: true, - Optional: true, + Type: schema.TypeString, + ForceNew: true, + 
Optional: true, + ConflictsWith: []string{"configurations_json"}, + }, + "configurations_json": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"configurations"}, + ValidateFunc: validation.ValidateJsonString, + DiffSuppressFunc: suppressEquivalentJsonDiffs, + StateFunc: func(v interface{}) string { + json, _ := structure.NormalizeJsonString(v) + return json + }, }, "service_role": { Type: schema.TypeString, @@ -351,7 +384,7 @@ func resourceAwsEMRCluster() *schema.Resource { ForceNew: true, Optional: true, }, - "autoscaling_role": &schema.Schema{ + "autoscaling_role": { Type: schema.TypeString, ForceNew: true, Optional: true, @@ -383,7 +416,7 @@ func resourceAwsEMRClusterCreate(d *schema.ResourceData, meta interface{}) error applications := d.Get("applications").(*schema.Set).List() keepJobFlowAliveWhenNoSteps := true - if v, ok := d.GetOk("keep_job_flow_alive_when_no_steps"); ok { + if v, ok := d.GetOkExists("keep_job_flow_alive_when_no_steps"); ok { keepJobFlowAliveWhenNoSteps = v.(bool) } @@ -455,7 +488,13 @@ func resourceAwsEMRClusterCreate(d *schema.ResourceData, meta interface{}) error } if v, ok := d.GetOk("instance_group"); ok { instanceGroupConfigs := v.(*schema.Set).List() - instanceConfig.InstanceGroups = expandInstanceGroupConfigs(instanceGroupConfigs) + instanceGroups, err := expandInstanceGroupConfigs(instanceGroupConfigs) + + if err != nil { + return fmt.Errorf("error parsing EMR instance groups configuration: %s", err) + } + + instanceConfig.InstanceGroups = instanceGroups } emrApps := expandApplications(applications) @@ -470,6 +509,14 @@ func resourceAwsEMRClusterCreate(d *schema.ResourceData, meta interface{}) error VisibleToAllUsers: aws.Bool(d.Get("visible_to_all_users").(bool)), } + if v, ok := d.GetOk("additional_info"); ok { + info, err := structure.NormalizeJsonString(v) + if err != nil { + return fmt.Errorf("Additional Info contains an invalid JSON: %v", err) + } + params.AdditionalInfo = aws.String(info) + } + if v, ok := d.GetOk("log_uri"); ok { params.LogUri = aws.String(v.(string)) } @@ -515,6 +562,17 @@ func resourceAwsEMRClusterCreate(d *schema.ResourceData, meta interface{}) error params.Configurations = expandConfigures(confUrl) } + if v, ok := d.GetOk("configurations_json"); ok { + info, err := structure.NormalizeJsonString(v) + if err != nil { + return fmt.Errorf("configurations_json contains an invalid JSON: %v", err) + } + params.Configurations, err = expandConfigurationJson(info) + if err != nil { + return fmt.Errorf("Error reading EMR configurations_json: %s", err) + } + } + if v, ok := d.GetOk("kerberos_attributes"); ok { kerberosAttributesList := v.([]interface{}) kerberosAttributesMap := kerberosAttributesList[0].(map[string]interface{}) @@ -563,7 +621,7 @@ func resourceAwsEMRClusterCreate(d *schema.ResourceData, meta interface{}) error _, err = stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for EMR Cluster state to be \"WAITING\" or \"RUNNING\": %s", err) + return fmt.Errorf("Error waiting for EMR Cluster state to be \"WAITING\" or \"RUNNING\": %s", err) } return resourceAwsEMRClusterRead(d, meta) @@ -603,7 +661,7 @@ func resourceAwsEMRClusterRead(d *schema.ResourceData, meta interface{}) error { instanceGroups, err := fetchAllEMRInstanceGroups(emrconn, d.Id()) if err == nil { - coreGroup := findGroup(instanceGroups, "CORE") + coreGroup := emrCoreInstanceGroup(instanceGroups) if coreGroup != nil { d.Set("core_instance_type", coreGroup.InstanceType) } @@ -639,6 +697,16 
@@ func resourceAwsEMRClusterRead(d *schema.ResourceData, meta interface{}) error { log.Printf("[ERR] Error setting EMR configurations for cluster (%s): %s", d.Id(), err) } + if _, ok := d.GetOk("configurations_json"); ok { + configOut, err := flattenConfigurationJson(cluster.Configurations) + if err != nil { + return fmt.Errorf("Error reading EMR cluster configurations: %s", err) + } + if err := d.Set("configurations_json", configOut); err != nil { + return fmt.Errorf("Error setting EMR configurations_json for cluster (%s): %s", d.Id(), err) + } + } + if err := d.Set("ec2_attributes", flattenEc2Attributes(cluster.Ec2InstanceAttributes)); err != nil { log.Printf("[ERR] Error setting EMR Ec2 Attributes: %s", err) } @@ -676,6 +744,15 @@ func resourceAwsEMRClusterRead(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("error setting step: %s", err) } + // AWS provides no other way to read back the additional_info + if v, ok := d.GetOk("additional_info"); ok { + info, err := structure.NormalizeJsonString(v) + if err != nil { + return fmt.Errorf("Additional Info contains an invalid JSON: %v", err) + } + d.Set("additional_info", info) + } + return nil } @@ -694,9 +771,9 @@ func resourceAwsEMRClusterUpdate(d *schema.ResourceData, meta interface{}) error } coreInstanceCount := d.Get("core_instance_count").(int) - coreGroup := findGroup(groups, "CORE") + coreGroup := emrCoreInstanceGroup(groups) if coreGroup == nil { - return fmt.Errorf("[ERR] Error finding core group") + return fmt.Errorf("Error finding core group") } params := &emr.ModifyInstanceGroupsInput{ @@ -734,7 +811,7 @@ func resourceAwsEMRClusterUpdate(d *schema.ResourceData, meta interface{}) error _, err = stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for EMR Cluster state to be \"WAITING\" or \"RUNNING\" after modification: %s", err) + return fmt.Errorf("Error waiting for EMR Cluster state to be \"WAITING\" or \"RUNNING\" after modification: %s", err) } } @@ -820,14 +897,13 @@ func resourceAwsEMRClusterDelete(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] All (%d) EMR Cluster (%s) Instances terminated", instanceCount, d.Id()) return nil } - return resource.RetryableError(fmt.Errorf("[DEBUG] EMR Cluster (%s) has (%d) Instances remaining, retrying", d.Id(), len(resp.Instances))) + return resource.RetryableError(fmt.Errorf("EMR Cluster (%s) has (%d) Instances remaining, retrying", d.Id(), len(resp.Instances))) }) if err != nil { log.Printf("[ERR] Error waiting for EMR Cluster (%s) Instances to drain", d.Id()) } - d.SetId("") return nil } @@ -1023,25 +1099,10 @@ func flattenBootstrapArguments(actions []*emr.Command) []map[string]interface{} return result } -func loadGroups(d *schema.ResourceData, meta interface{}) ([]*emr.InstanceGroup, error) { - emrconn := meta.(*AWSClient).emrconn - reqGrps := &emr.ListInstanceGroupsInput{ - ClusterId: aws.String(d.Id()), - } - - respGrps, errGrps := emrconn.ListInstanceGroups(reqGrps) - if errGrps != nil { - return nil, fmt.Errorf("Error reading EMR cluster: %s", errGrps) - } - return respGrps.InstanceGroups, nil -} - -func findGroup(grps []*emr.InstanceGroup, typ string) *emr.InstanceGroup { +func emrCoreInstanceGroup(grps []*emr.InstanceGroup) *emr.InstanceGroup { for _, grp := range grps { - if grp.InstanceGroupType != nil { - if *grp.InstanceGroupType == typ { - return grp - } + if aws.StringValue(grp.InstanceGroupType) == emr.InstanceGroupTypeCore { + return grp } } return nil @@ -1223,7 +1284,7 @@ func 
expandEmrStepConfigs(l []interface{}) []*emr.StepConfig { return stepConfigs } -func expandInstanceGroupConfigs(instanceGroupConfigs []interface{}) []*emr.InstanceGroupConfig { +func expandInstanceGroupConfigs(instanceGroupConfigs []interface{}) ([]*emr.InstanceGroupConfig, error) { instanceGroupConfig := []*emr.InstanceGroupConfig{} for _, raw := range instanceGroupConfigs { @@ -1241,12 +1302,23 @@ func expandInstanceGroupConfigs(instanceGroupConfigs []interface{}) []*emr.Insta applyBidPrice(config, configAttributes) applyEbsConfig(configAttributes, config) - applyAutoScalingPolicy(configAttributes, config) + + if v, ok := configAttributes["autoscaling_policy"]; ok && v.(string) != "" { + var autoScalingPolicy *emr.AutoScalingPolicy + + err := json.Unmarshal([]byte(v.(string)), &autoScalingPolicy) + + if err != nil { + return []*emr.InstanceGroupConfig{}, fmt.Errorf("error parsing EMR Auto Scaling Policy JSON: %s", err) + } + + config.AutoScalingPolicy = autoScalingPolicy + } instanceGroupConfig = append(instanceGroupConfig, config) } - return instanceGroupConfig + return instanceGroupConfig, nil } func applyBidPrice(config *emr.InstanceGroupConfig, configAttributes map[string]interface{}) { @@ -1285,22 +1357,23 @@ func applyEbsConfig(configAttributes map[string]interface{}, config *emr.Instanc } } -func applyAutoScalingPolicy(configAttributes map[string]interface{}, config *emr.InstanceGroupConfig) { - if rawAutoScalingPolicy, ok := configAttributes["autoscaling_policy"]; ok { - autoScalingConfig, _ := expandAutoScalingPolicy(rawAutoScalingPolicy.(string)) - config.AutoScalingPolicy = autoScalingConfig +func expandConfigurationJson(input string) ([]*emr.Configuration, error) { + configsOut := []*emr.Configuration{} + err := json.Unmarshal([]byte(input), &configsOut) + if err != nil { + return nil, err } -} + log.Printf("[DEBUG] Expanded EMR Configurations %s", configsOut) -func expandAutoScalingPolicy(rawDefinitions string) (*emr.AutoScalingPolicy, error) { - var policy *emr.AutoScalingPolicy + return configsOut, nil +} - err := json.Unmarshal([]byte(rawDefinitions), &policy) +func flattenConfigurationJson(config []*emr.Configuration) (string, error) { + out, err := jsonutil.BuildJSON(config) if err != nil { - return nil, fmt.Errorf("Error decoding JSON: %s", err) + return "", err } - - return policy, nil + return string(out), nil } func expandConfigures(input string) []*emr.Configuration { @@ -1374,23 +1447,25 @@ func resourceAwsEMRClusterStateRefreshFunc(d *schema.ResourceData, meta interfac return nil, "", err } - emrc := resp.Cluster - - if emrc == nil { + if resp.Cluster == nil { return 42, "destroyed", nil } - if resp.Cluster.Status != nil { - log.Printf("[DEBUG] EMR Cluster status (%s): %s", d.Id(), *resp.Cluster.Status) + if resp.Cluster.Status == nil { + return resp.Cluster, "", fmt.Errorf("cluster status not provided") } - status := emrc.Status - if *status.State == "TERMINATING" || *status.State == "TERMINATED_WITH_ERRORS" { - reason := *status.StateChangeReason - return emrc, *status.State, fmt.Errorf("%s: %s", - *reason.Code, *reason.Message) + state := aws.StringValue(resp.Cluster.Status.State) + log.Printf("[DEBUG] EMR Cluster status (%s): %s", d.Id(), state) + + if state == emr.ClusterStateTerminating || state == emr.ClusterStateTerminatedWithErrors { + reason := resp.Cluster.Status.StateChangeReason + if reason == nil { + return resp.Cluster, state, fmt.Errorf("%s: reason code and message not provided", state) + } + return resp.Cluster, state, fmt.Errorf("%s: %s: %s", 
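// Sketch (hypothetical helper, assuming the surrounding aws package) of the
// configurations_json round trip introduced above: expandConfigurationJson parses
// the raw JSON document into EMR API structs on create, flattenConfigurationJson
// re-serialises what DescribeCluster returns on read, and the schema's
// suppressEquivalentJsonDiffs keeps formatting-only differences out of the plan.
func exampleEmrConfigurationsJsonRoundTrip() error {
	in := `[{"Classification":"hadoop-env","Properties":{"JAVA_HOME":"/usr/lib/jvm/java-1.8.0"}}]`

	configs, err := expandConfigurationJson(in)
	if err != nil {
		return fmt.Errorf("error expanding configurations_json: %s", err)
	}

	out, err := flattenConfigurationJson(configs)
	if err != nil {
		return fmt.Errorf("error flattening configurations: %s", err)
	}

	// out carries the same Classification/Properties content as in, though the
	// whitespace and key ordering may differ.
	_ = out
	return nil
}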
state, aws.StringValue(reason.Code), aws.StringValue(reason.Message)) } - return emrc, *status.State, nil + return resp.Cluster, state, nil } } diff --git a/aws/resource_aws_emr_cluster_test.go b/aws/resource_aws_emr_cluster_test.go index ca7e7c1759d..ae2961fcb3c 100644 --- a/aws/resource_aws_emr_cluster_test.go +++ b/aws/resource_aws_emr_cluster_test.go @@ -4,6 +4,7 @@ import ( "fmt" "log" "reflect" + "regexp" "testing" "github.com/aws/aws-sdk-go/aws" @@ -17,7 +18,7 @@ import ( func TestAccAWSEMRCluster_basic(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -34,10 +35,50 @@ func TestAccAWSEMRCluster_basic(t *testing.T) { }) } +func TestAccAWSEMRCluster_additionalInfo(t *testing.T) { + var cluster emr.Cluster + r := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEmrDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEmrClusterConfigAdditionalInfo(r), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster), + resource.TestCheckResourceAttr("aws_emr_cluster.tf-test-cluster", "scale_down_behavior", "TERMINATE_AT_TASK_COMPLETION"), + resource.TestCheckResourceAttr("aws_emr_cluster.tf-test-cluster", "step.#", "0"), + ), + }, + }, + }) +} + +func TestAccAWSEMRCluster_configurationsJson(t *testing.T) { + var cluster emr.Cluster + r := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEmrDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEmrClusterConfigConfigurationsJson(r), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster), + resource.TestMatchResourceAttr("aws_emr_cluster.tf-test-cluster", "configurations_json", + regexp.MustCompile("{\"JAVA_HOME\":\"/usr/lib/jvm/java-1.8.0\".+")), + ), + }, + }, + }) +} + func TestAccAWSEMRCluster_instance_group(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -54,12 +95,32 @@ func TestAccAWSEMRCluster_instance_group(t *testing.T) { }) } +func TestAccAWSEMRCluster_instance_group_EBSVolumeType_st1(t *testing.T) { + var cluster emr.Cluster + r := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEmrDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEmrClusterConfigInstanceGroups_st1(r), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster), + resource.TestCheckResourceAttr( + "aws_emr_cluster.tf-test-cluster", "instance_group.#", "2"), + ), + }, + }, + }) +} + func TestAccAWSEMRCluster_Kerberos_ClusterDedicatedKdc(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() password := fmt.Sprintf("NeverKeepPasswordsInPlainText%d!", r) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) 
}, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -80,7 +141,7 @@ func TestAccAWSEMRCluster_Kerberos_ClusterDedicatedKdc(t *testing.T) { func TestAccAWSEMRCluster_security_config(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -98,7 +159,7 @@ func TestAccAWSEMRCluster_Step_Basic(t *testing.T) { rInt := acctest.RandInt() resourceName := "aws_emr_cluster.tf-test-cluster" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -125,7 +186,7 @@ func TestAccAWSEMRCluster_Step_Multiple(t *testing.T) { rInt := acctest.RandInt() resourceName := "aws_emr_cluster.tf-test-cluster" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -172,7 +233,7 @@ func TestAccAWSEMRCluster_bootstrap_ordering(t *testing.T) { "echo running on master node", } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -191,7 +252,7 @@ func TestAccAWSEMRCluster_bootstrap_ordering(t *testing.T) { func TestAccAWSEMRCluster_terminationProtected(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -225,10 +286,30 @@ func TestAccAWSEMRCluster_terminationProtected(t *testing.T) { }) } +func TestAccAWSEMRCluster_keepJob(t *testing.T) { + var cluster emr.Cluster + r := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEmrDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEmrClusterConfig_keepJop(r, "false"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEmrClusterExists("aws_emr_cluster.tf-test-cluster", &cluster), + resource.TestCheckResourceAttr( + "aws_emr_cluster.tf-test-cluster", "keep_job_flow_alive_when_no_steps", "false"), + ), + }, + }, + }) +} + func TestAccAWSEMRCluster_visibleToAllUsers(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -258,7 +339,7 @@ func TestAccAWSEMRCluster_s3Logging(t *testing.T) { r := acctest.RandInt() bucketName := fmt.Sprintf("s3n://tf-acc-test-%d/", r) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -277,7 +358,7 @@ func TestAccAWSEMRCluster_s3Logging(t *testing.T) { func TestAccAWSEMRCluster_tags(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckAWSEmrDestroy, @@ -316,7 +397,7 @@ func TestAccAWSEMRCluster_tags(t *testing.T) { func TestAccAWSEMRCluster_root_volume_size(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -342,7 +423,7 @@ func TestAccAWSEMRCluster_root_volume_size(t *testing.T) { func TestAccAWSEMRCluster_custom_ami_id(t *testing.T) { var cluster emr.Cluster r := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEmrDestroy, @@ -368,7 +449,7 @@ func testAccCheck_bootstrap_order(cluster *emr.Cluster, argsInts, argsStrings [] resp, err := emrconn.ListBootstrapActions(&req) if err != nil { - return fmt.Errorf("[ERR] Error listing boostrap actions in test: %s", err) + return fmt.Errorf("Error listing boostrap actions in test: %s", err) } // make sure we actually checked something @@ -1070,140 +1151,20 @@ resource "aws_iam_role_policy_attachment" "emr-autoscaling-role" { `, r) } -func testAccAWSEmrClusterConfig_Kerberos_ClusterDedicatedKdc(r int, password string) string { +func testAccAWSEmrClusterConfigAdditionalInfo(r int) string { return fmt.Sprintf(` -provider "aws" { - region = "us-west-2" -} - -data "aws_availability_zones" "available" {} - -resource "aws_emr_security_configuration" "foo" { - configuration = < 0 { + return resource.RetryableError(fmt.Errorf("gamelift Session Queue still exists")) + } + + return nil + }) + + if err != nil { + return err + } + } + + return nil +} + +func testAccAWSGameliftGameSessionQueueBasicConfig(queueName string, + playerLatencyPolicies []gamelift.PlayerLatencyPolicy, timeoutInSeconds int64) string { + return fmt.Sprintf(` +resource "aws_gamelift_game_session_queue" "test" { + name = "%s" + destinations = [] + player_latency_policy { + maximum_individual_player_latency_milliseconds = %d + policy_duration_seconds = %d + } + player_latency_policy { + maximum_individual_player_latency_milliseconds = %d + } + timeout_in_seconds = %d +} +`, + queueName, + *playerLatencyPolicies[0].MaximumIndividualPlayerLatencyMilliseconds, + *playerLatencyPolicies[0].PolicyDurationSeconds, + *playerLatencyPolicies[1].MaximumIndividualPlayerLatencyMilliseconds, + timeoutInSeconds) +} diff --git a/aws/resource_aws_glacier_vault.go b/aws/resource_aws_glacier_vault.go index 55de0a8fd1c..587402f228f 100644 --- a/aws/resource_aws_glacier_vault.go +++ b/aws/resource_aws_glacier_vault.go @@ -6,13 +6,13 @@ import ( "log" "regexp" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/glacier" "github.com/hashicorp/terraform/helper/structure" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsGlacierVault() *schema.Resource { @@ -58,7 +58,7 @@ func resourceAwsGlacierVault() *schema.Resource { "access_policy": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, StateFunc: func(v interface{}) string { json, _ := structure.NormalizeJsonString(v) return json @@ -167,7 +167,7 @@ func resourceAwsGlacierVaultRead(d *schema.ResourceData, meta interface{}) error } else if pol != 
nil { policy, err := structure.NormalizeJsonString(*pol.Policy.Policy) if err != nil { - return errwrap.Wrapf("access policy contains an invalid JSON: {{err}}", err) + return fmt.Errorf("access policy contains an invalid JSON: %s", err) } d.Set("access_policy", policy) } else { diff --git a/aws/resource_aws_glacier_vault_lock.go b/aws/resource_aws_glacier_vault_lock.go new file mode 100644 index 00000000000..c181510cbbb --- /dev/null +++ b/aws/resource_aws_glacier_vault_lock.go @@ -0,0 +1,188 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glacier" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsGlacierVaultLock() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsGlacierVaultLockCreate, + Read: resourceAwsGlacierVaultLockRead, + // Allow ignore_deletion_error update + Update: schema.Noop, + Delete: resourceAwsGlacierVaultLockDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "complete_lock": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + "ignore_deletion_error": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "policy": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs, + ValidateFunc: validateIAMPolicyJson, + }, + "vault_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.NoZeroValues, + }, + }, + } +} + +func resourceAwsGlacierVaultLockCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glacierconn + vaultName := d.Get("vault_name").(string) + + input := &glacier.InitiateVaultLockInput{ + AccountId: aws.String("-"), + Policy: &glacier.VaultLockPolicy{ + Policy: aws.String(d.Get("policy").(string)), + }, + VaultName: aws.String(vaultName), + } + + log.Printf("[DEBUG] Initiating Glacier Vault Lock: %s", input) + output, err := conn.InitiateVaultLock(input) + if err != nil { + return fmt.Errorf("error initiating Glacier Vault Lock: %s", err) + } + + d.SetId(vaultName) + + if !d.Get("complete_lock").(bool) { + return resourceAwsGlacierVaultLockRead(d, meta) + } + + completeLockInput := &glacier.CompleteVaultLockInput{ + LockId: output.LockId, + VaultName: aws.String(vaultName), + } + + log.Printf("[DEBUG] Completing Glacier Vault (%s) Lock: %s", vaultName, completeLockInput) + if _, err := conn.CompleteVaultLock(completeLockInput); err != nil { + return fmt.Errorf("error completing Glacier Vault (%s) Lock: %s", vaultName, err) + } + + if err := waitForGlacierVaultLockCompletion(conn, vaultName); err != nil { + return fmt.Errorf("error waiting for Glacier Vault Lock (%s) completion: %s", d.Id(), err) + } + + return resourceAwsGlacierVaultLockRead(d, meta) +} + +func resourceAwsGlacierVaultLockRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glacierconn + + input := &glacier.GetVaultLockInput{ + AccountId: aws.String("-"), + VaultName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading Glacier Vault Lock (%s): %s", d.Id(), input) + output, err := conn.GetVaultLock(input) + + if isAWSErr(err, glacier.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] Glacier Vault Lock (%s) not found, removing from state", d.Id()) + 
d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading Glacier Vault Lock (%s): %s", d.Id(), err) + } + + if output == nil { + log.Printf("[WARN] Glacier Vault Lock (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("complete_lock", aws.StringValue(output.State) == "Locked") + d.Set("policy", output.Policy) + d.Set("vault_name", d.Id()) + + return nil +} + +func resourceAwsGlacierVaultLockDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glacierconn + + input := &glacier.AbortVaultLockInput{ + VaultName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Aborting Glacier Vault Lock (%s): %s", d.Id(), input) + _, err := conn.AbortVaultLock(input) + + if isAWSErr(err, glacier.ErrCodeResourceNotFoundException, "") { + return nil + } + + if err != nil && !d.Get("ignore_deletion_error").(bool) { + return fmt.Errorf("error aborting Glacier Vault Lock (%s): %s", d.Id(), err) + } + + return nil +} + +func glacierVaultLockRefreshFunc(conn *glacier.Glacier, vaultName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + input := &glacier.GetVaultLockInput{ + AccountId: aws.String("-"), + VaultName: aws.String(vaultName), + } + + log.Printf("[DEBUG] Reading Glacier Vault Lock (%s): %s", vaultName, input) + output, err := conn.GetVaultLock(input) + + if isAWSErr(err, glacier.ErrCodeResourceNotFoundException, "") { + return nil, "", nil + } + + if err != nil { + return nil, "", fmt.Errorf("error reading Glacier Vault Lock (%s): %s", vaultName, err) + } + + if output == nil { + return nil, "", nil + } + + return output, aws.StringValue(output.State), nil + } +} + +func waitForGlacierVaultLockCompletion(conn *glacier.Glacier, vaultName string) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{"InProgress"}, + Target: []string{"Locked"}, + Refresh: glacierVaultLockRefreshFunc(conn, vaultName), + Timeout: 5 * time.Minute, + } + + log.Printf("[DEBUG] Waiting for Glacier Vault Lock (%s) completion", vaultName) + _, err := stateConf.WaitForState() + + return err +} diff --git a/aws/resource_aws_glacier_vault_lock_test.go b/aws/resource_aws_glacier_vault_lock_test.go new file mode 100644 index 00000000000..e0d6c04a582 --- /dev/null +++ b/aws/resource_aws_glacier_vault_lock_test.go @@ -0,0 +1,173 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glacier" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSGlacierVaultLock_basic(t *testing.T) { + var vaultLock1 glacier.GetVaultLockOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + glacierVaultResourceName := "aws_glacier_vault.test" + resourceName := "aws_glacier_vault_lock.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultLockDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlacierVaultLockConfigCompleteLock(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultLockExists(resourceName, &vaultLock1), + resource.TestCheckResourceAttr(resourceName, "complete_lock", "false"), + resource.TestCheckResourceAttr(resourceName, "ignore_deletion_error", "false"), + resource.TestCheckResourceAttrSet(resourceName, "policy"), + 
resource.TestCheckResourceAttrPair(resourceName, "vault_name", glacierVaultResourceName, "name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"ignore_deletion_error"}, + }, + }, + }) +} + +func TestAccAWSGlacierVaultLock_CompleteLock(t *testing.T) { + var vaultLock1 glacier.GetVaultLockOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + glacierVaultResourceName := "aws_glacier_vault.test" + resourceName := "aws_glacier_vault_lock.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultLockDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlacierVaultLockConfigCompleteLock(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckGlacierVaultLockExists(resourceName, &vaultLock1), + resource.TestCheckResourceAttr(resourceName, "complete_lock", "true"), + resource.TestCheckResourceAttr(resourceName, "ignore_deletion_error", "true"), + resource.TestCheckResourceAttrSet(resourceName, "policy"), + resource.TestCheckResourceAttrPair(resourceName, "vault_name", glacierVaultResourceName, "name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"ignore_deletion_error"}, + }, + }, + }) +} + +func testAccCheckGlacierVaultLockExists(resourceName string, getVaultLockOutput *glacier.GetVaultLockOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).glacierconn + + input := &glacier.GetVaultLockInput{ + VaultName: aws.String(rs.Primary.ID), + } + output, err := conn.GetVaultLock(input) + + if err != nil { + return fmt.Errorf("error reading Glacier Vault Lock (%s): %s", rs.Primary.ID, err) + } + + if output == nil { + return fmt.Errorf("error reading Glacier Vault Lock (%s): empty response", rs.Primary.ID) + } + + *getVaultLockOutput = *output + + return nil + } +} + +func testAccCheckGlacierVaultLockDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).glacierconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_glacier_vault_lock" { + continue + } + + input := &glacier.GetVaultLockInput{ + VaultName: aws.String(rs.Primary.ID), + } + output, err := conn.GetVaultLock(input) + + if isAWSErr(err, glacier.ErrCodeResourceNotFoundException, "") { + continue + } + + if err != nil { + return fmt.Errorf("error reading Glacier Vault Lock (%s): %s", rs.Primary.ID, err) + } + + if output != nil { + return fmt.Errorf("Glacier Vault Lock (%s) still exists", rs.Primary.ID) + } + } + + return nil +} + +func testAccGlacierVaultLockConfigCompleteLock(rName string, completeLock bool) string { + return fmt.Sprintf(` +resource "aws_glacier_vault" "test" { + name = %q +} + +data "aws_caller_identity" "current" {} +data "aws_partition" "current" {} + +data "aws_iam_policy_document" "test" { + statement { + # Allow for testing purposes + actions = ["glacier:DeleteArchive"] + effect = "Allow" + resources = ["${aws_glacier_vault.test.arn}"] + + condition { + test = "NumericLessThanEquals" + variable = "glacier:ArchiveAgeinDays" + values = ["0"] + } + + principals { + identifiers = 
["arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:root"] + type = "AWS" + } + } +} + +resource "aws_glacier_vault_lock" "test" { + complete_lock = %t + ignore_deletion_error = %t + policy = "${data.aws_iam_policy_document.test.json}" + vault_name = "${aws_glacier_vault.test.name}" +} +`, rName, completeLock, completeLock) +} diff --git a/aws/resource_aws_glacier_vault_test.go b/aws/resource_aws_glacier_vault_test.go index 011284c1d52..5aa8621af8c 100644 --- a/aws/resource_aws_glacier_vault_test.go +++ b/aws/resource_aws_glacier_vault_test.go @@ -14,14 +14,36 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSGlacierVault_importBasic(t *testing.T) { + resourceName := "aws_glacier_vault.full" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlacierVaultDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlacierVault_full(rInt), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSGlacierVault_basic(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGlacierVaultDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGlacierVault_basic(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckGlacierVaultExists("aws_glacier_vault.test"), @@ -33,12 +55,12 @@ func TestAccAWSGlacierVault_basic(t *testing.T) { func TestAccAWSGlacierVault_full(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGlacierVaultDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGlacierVault_full(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckGlacierVaultExists("aws_glacier_vault.full"), @@ -50,18 +72,18 @@ func TestAccAWSGlacierVault_full(t *testing.T) { func TestAccAWSGlacierVault_RemoveNotifications(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGlacierVaultDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGlacierVault_full(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckGlacierVaultExists("aws_glacier_vault.full"), ), }, - resource.TestStep{ + { Config: testAccGlacierVault_withoutNotification(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckGlacierVaultExists("aws_glacier_vault.full"), diff --git a/aws/resource_aws_glue_catalog_database.go b/aws/resource_aws_glue_catalog_database.go index 1f040be1b61..8e9478dea13 100644 --- a/aws/resource_aws_glue_catalog_database.go +++ b/aws/resource_aws_glue_catalog_database.go @@ -44,7 +44,7 @@ func resourceAwsGlueCatalogDatabase() *schema.Resource { }, "parameters": { Type: schema.TypeMap, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, }, }, @@ -76,7 +76,10 @@ func resourceAwsGlueCatalogDatabaseCreate(d *schema.ResourceData, meta interface func resourceAwsGlueCatalogDatabaseUpdate(d *schema.ResourceData, meta interface{}) error { glueconn := 
meta.(*AWSClient).glueconn - catalogID, name := readAwsGlueCatalogID(d.Id()) + catalogID, name, err := readAwsGlueCatalogID(d.Id()) + if err != nil { + return err + } dbUpdateInput := &glue.UpdateDatabaseInput{ CatalogId: aws.String(catalogID), @@ -117,7 +120,10 @@ func resourceAwsGlueCatalogDatabaseUpdate(d *schema.ResourceData, meta interface func resourceAwsGlueCatalogDatabaseRead(d *schema.ResourceData, meta interface{}) error { glueconn := meta.(*AWSClient).glueconn - catalogID, name := readAwsGlueCatalogID(d.Id()) + catalogID, name, err := readAwsGlueCatalogID(d.Id()) + if err != nil { + return err + } input := &glue.GetDatabaseInput{ CatalogId: aws.String(catalogID), @@ -130,6 +136,7 @@ func resourceAwsGlueCatalogDatabaseRead(d *schema.ResourceData, meta interface{} if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { log.Printf("[WARN] Glue Catalog Database (%s) not found, removing from state", d.Id()) d.SetId("") + return nil } return fmt.Errorf("Error reading Glue Catalog Database: %s", err.Error()) @@ -153,10 +160,13 @@ func resourceAwsGlueCatalogDatabaseRead(d *schema.ResourceData, meta interface{} func resourceAwsGlueCatalogDatabaseDelete(d *schema.ResourceData, meta interface{}) error { glueconn := meta.(*AWSClient).glueconn - catalogID, name := readAwsGlueCatalogID(d.Id()) + catalogID, name, err := readAwsGlueCatalogID(d.Id()) + if err != nil { + return err + } log.Printf("[DEBUG] Glue Catalog Database: %s:%s", catalogID, name) - _, err := glueconn.DeleteDatabase(&glue.DeleteDatabaseInput{ + _, err = glueconn.DeleteDatabase(&glue.DeleteDatabaseInput{ Name: aws.String(name), }) if err != nil { @@ -167,20 +177,32 @@ func resourceAwsGlueCatalogDatabaseDelete(d *schema.ResourceData, meta interface func resourceAwsGlueCatalogDatabaseExists(d *schema.ResourceData, meta interface{}) (bool, error) { glueconn := meta.(*AWSClient).glueconn - catalogID, name := readAwsGlueCatalogID(d.Id()) + catalogID, name, err := readAwsGlueCatalogID(d.Id()) + if err != nil { + return false, err + } input := &glue.GetDatabaseInput{ CatalogId: aws.String(catalogID), Name: aws.String(name), } - _, err := glueconn.GetDatabase(input) - return err == nil, err + _, err = glueconn.GetDatabase(input) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return false, nil + } + return false, err + } + return true, nil } -func readAwsGlueCatalogID(id string) (catalogID string, name string) { +func readAwsGlueCatalogID(id string) (catalogID string, name string, err error) { idParts := strings.Split(id, ":") - return idParts[0], idParts[1] + if len(idParts) != 2 { + return "", "", fmt.Errorf("Unexpected format of ID (%q), expected CATALOG-ID:DATABASE-NAME", id) + } + return idParts[0], idParts[1], nil } func createAwsGlueCatalogID(d *schema.ResourceData, accountid string) (catalogID string) { diff --git a/aws/resource_aws_glue_catalog_database_test.go b/aws/resource_aws_glue_catalog_database_test.go index 14ddcca007f..0eefd001098 100644 --- a/aws/resource_aws_glue_catalog_database_test.go +++ b/aws/resource_aws_glue_catalog_database_test.go @@ -12,9 +12,30 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSGlueCatalogDatabase_importBasic(t *testing.T) { + resourceName := "aws_glue_catalog_database.test" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlueDatabaseDestroy, + Steps: []resource.TestStep{ + { + Config: 
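// Sketch (hypothetical helper, assuming the surrounding aws package) of the
// stricter ID parsing above: the resource/import ID format is
// "CATALOG-ID:DATABASE-NAME", and malformed IDs now return an error instead of
// panicking with an index-out-of-range on the missing second segment.
func exampleReadGlueCatalogIDSketch() error {
	catalogID, dbName, err := readAwsGlueCatalogID("123456789012:my_test_catalog_database")
	if err != nil {
		return err
	}
	if catalogID != "123456789012" || dbName != "my_test_catalog_database" {
		return fmt.Errorf("unexpected parse result: %s / %s", catalogID, dbName)
	}

	// An ID without the catalog segment is rejected rather than crashing the provider.
	if _, _, err := readAwsGlueCatalogID("my_test_catalog_database"); err == nil {
		return fmt.Errorf("expected an error for an ID without a catalog prefix")
	}

	return nil
}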
testAccGlueCatalogDatabase_full(rInt, "A test catalog from terraform"), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSGlueCatalogDatabase_full(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckGlueDatabaseDestroy, @@ -113,6 +134,41 @@ func TestAccAWSGlueCatalogDatabase_full(t *testing.T) { }) } +func TestAccAWSGlueCatalogDatabase_recreates(t *testing.T) { + resourceName := "aws_glue_catalog_database.test" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlueDatabaseDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCatalogDatabase_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckGlueCatalogDatabaseExists(resourceName), + ), + }, + { + // Simulate deleting the database outside Terraform + PreConfig: func() { + conn := testAccProvider.Meta().(*AWSClient).glueconn + input := &glue.DeleteDatabaseInput{ + Name: aws.String(fmt.Sprintf("my_test_catalog_database_%d", rInt)), + } + _, err := conn.DeleteDatabase(input) + if err != nil { + t.Fatalf("error deleting Glue Catalog Database: %s", err) + } + }, + Config: testAccGlueCatalogDatabase_basic(rInt), + ExpectNonEmptyPlan: true, + PlanOnly: true, + }, + }, + }) +} + func testAccCheckGlueDatabaseDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).glueconn @@ -121,7 +177,10 @@ func testAccCheckGlueDatabaseDestroy(s *terraform.State) error { continue } - catalogId, dbName := readAwsGlueCatalogID(rs.Primary.ID) + catalogId, dbName, err := readAwsGlueCatalogID(rs.Primary.ID) + if err != nil { + return err + } input := &glue.GetDatabaseInput{ CatalogId: aws.String(catalogId), @@ -174,7 +233,10 @@ func testAccCheckGlueCatalogDatabaseExists(name string) resource.TestCheckFunc { return fmt.Errorf("No ID is set") } - catalogId, dbName := readAwsGlueCatalogID(rs.Primary.ID) + catalogId, dbName, err := readAwsGlueCatalogID(rs.Primary.ID) + if err != nil { + return err + } glueconn := testAccProvider.Meta().(*AWSClient).glueconn out, err := glueconn.GetDatabase(&glue.GetDatabaseInput{ diff --git a/aws/resource_aws_glue_catalog_table.go b/aws/resource_aws_glue_catalog_table.go new file mode 100644 index 00000000000..3f9c19b1369 --- /dev/null +++ b/aws/resource_aws_glue_catalog_table.go @@ -0,0 +1,667 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsGlueCatalogTable() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsGlueCatalogTableCreate, + Read: resourceAwsGlueCatalogTableRead, + Update: resourceAwsGlueCatalogTableUpdate, + Delete: resourceAwsGlueCatalogTableDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "catalog_id": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Computed: true, + }, + "database_name": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "name": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + "owner": { + 
Type: schema.TypeString, + Optional: true, + }, + "parameters": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "partition_keys": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "comment": { + Type: schema.TypeString, + Optional: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "retention": { + Type: schema.TypeInt, + Optional: true, + }, + "storage_descriptor": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "bucket_columns": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "columns": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "comment": { + Type: schema.TypeString, + Optional: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "compressed": { + Type: schema.TypeBool, + Optional: true, + }, + "input_format": { + Type: schema.TypeString, + Optional: true, + }, + "location": { + Type: schema.TypeString, + Optional: true, + }, + "number_of_buckets": { + Type: schema.TypeInt, + Optional: true, + }, + "output_format": { + Type: schema.TypeString, + Optional: true, + }, + "parameters": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "ser_de_info": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Optional: true, + }, + "parameters": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "serialization_library": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "skewed_info": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "skewed_column_names": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "skewed_column_values": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "skewed_column_value_location_maps": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "sort_columns": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "column": { + Type: schema.TypeString, + Required: true, + }, + "sort_order": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "stored_as_sub_directories": { + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + }, + "table_type": { + Type: schema.TypeString, + Optional: true, + }, + "view_original_text": { + Type: schema.TypeString, + Optional: true, + }, + "view_expanded_text": { + Type: schema.TypeString, + Optional: true, + }, + }, + } +} + +func readAwsGlueTableID(id string) (catalogID string, dbName string, name string, error error) { + idParts := strings.Split(id, ":") + if len(idParts) != 3 { + return "", "", "", fmt.Errorf("expected ID in format catalog-id:database-name:table-name, received: %s", id) + } + return idParts[0], idParts[1], idParts[2], nil +} + +func resourceAwsGlueCatalogTableCreate(d *schema.ResourceData, 
meta interface{}) error { + glueconn := meta.(*AWSClient).glueconn + catalogID := createAwsGlueCatalogID(d, meta.(*AWSClient).accountid) + dbName := d.Get("database_name").(string) + name := d.Get("name").(string) + + input := &glue.CreateTableInput{ + CatalogId: aws.String(catalogID), + DatabaseName: aws.String(dbName), + TableInput: expandGlueTableInput(d), + } + + _, err := glueconn.CreateTable(input) + if err != nil { + return fmt.Errorf("Error creating Catalog Table: %s", err) + } + + d.SetId(fmt.Sprintf("%s:%s:%s", catalogID, dbName, name)) + + return resourceAwsGlueCatalogTableRead(d, meta) +} + +func resourceAwsGlueCatalogTableRead(d *schema.ResourceData, meta interface{}) error { + glueconn := meta.(*AWSClient).glueconn + + catalogID, dbName, name, err := readAwsGlueTableID(d.Id()) + if err != nil { + return err + } + + input := &glue.GetTableInput{ + CatalogId: aws.String(catalogID), + DatabaseName: aws.String(dbName), + Name: aws.String(name), + } + + out, err := glueconn.GetTable(input) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + log.Printf("[WARN] Glue Catalog Table (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return fmt.Errorf("Error reading Glue Catalog Table: %s", err) + } + + d.Set("name", out.Table.Name) + d.Set("catalog_id", catalogID) + d.Set("database_name", dbName) + d.Set("description", out.Table.Description) + d.Set("owner", out.Table.Owner) + d.Set("retention", out.Table.Retention) + + if err := d.Set("storage_descriptor", flattenGlueStorageDescriptor(out.Table.StorageDescriptor)); err != nil { + return fmt.Errorf("error setting storage_descriptor: %s", err) + } + + if err := d.Set("partition_keys", flattenGlueColumns(out.Table.PartitionKeys)); err != nil { + return fmt.Errorf("error setting partition_keys: %s", err) + } + + d.Set("view_original_text", out.Table.ViewOriginalText) + d.Set("view_expanded_text", out.Table.ViewExpandedText) + d.Set("table_type", out.Table.TableType) + + if err := d.Set("parameters", aws.StringValueMap(out.Table.Parameters)); err != nil { + return fmt.Errorf("error setting parameters: %s", err) + } + + return nil +} + +func resourceAwsGlueCatalogTableUpdate(d *schema.ResourceData, meta interface{}) error { + glueconn := meta.(*AWSClient).glueconn + + catalogID, dbName, _, err := readAwsGlueTableID(d.Id()) + if err != nil { + return err + } + + updateTableInput := &glue.UpdateTableInput{ + CatalogId: aws.String(catalogID), + DatabaseName: aws.String(dbName), + TableInput: expandGlueTableInput(d), + } + + if _, err := glueconn.UpdateTable(updateTableInput); err != nil { + return fmt.Errorf("Error updating Glue Catalog Table: %s", err) + } + + return resourceAwsGlueCatalogTableRead(d, meta) +} + +func resourceAwsGlueCatalogTableDelete(d *schema.ResourceData, meta interface{}) error { + glueconn := meta.(*AWSClient).glueconn + + catalogID, dbName, name, tableIdErr := readAwsGlueTableID(d.Id()) + if tableIdErr != nil { + return tableIdErr + } + + log.Printf("[DEBUG] Glue Catalog Table: %s:%s:%s", catalogID, dbName, name) + _, err := glueconn.DeleteTable(&glue.DeleteTableInput{ + CatalogId: aws.String(catalogID), + Name: aws.String(name), + DatabaseName: aws.String(dbName), + }) + if err != nil { + return fmt.Errorf("Error deleting Glue Catalog Table: %s", err.Error()) + } + return nil +} + +func expandGlueTableInput(d *schema.ResourceData) *glue.TableInput { + tableInput := &glue.TableInput{ + Name: aws.String(d.Get("name").(string)), + } + + if v, ok := d.GetOk("description"); 
ok { + tableInput.Description = aws.String(v.(string)) + } + + if v, ok := d.GetOk("owner"); ok { + tableInput.Owner = aws.String(v.(string)) + } + + if v, ok := d.GetOk("retention"); ok { + tableInput.Retention = aws.Int64(int64(v.(int))) + } + + if v, ok := d.GetOk("storage_descriptor"); ok { + tableInput.StorageDescriptor = expandGlueStorageDescriptor(v.([]interface{})) + } + + if v, ok := d.GetOk("partition_keys"); ok { + columns := expandGlueColumns(v.([]interface{})) + tableInput.PartitionKeys = columns + } + + if v, ok := d.GetOk("view_original_text"); ok { + tableInput.ViewOriginalText = aws.String(v.(string)) + } + + if v, ok := d.GetOk("view_expanded_text"); ok { + tableInput.ViewExpandedText = aws.String(v.(string)) + } + + if v, ok := d.GetOk("table_type"); ok { + tableInput.TableType = aws.String(v.(string)) + } + + if v, ok := d.GetOk("parameters"); ok { + paramsMap := map[string]string{} + for key, value := range v.(map[string]interface{}) { + paramsMap[key] = value.(string) + } + tableInput.Parameters = aws.StringMap(paramsMap) + } + + return tableInput +} + +func expandGlueStorageDescriptor(l []interface{}) *glue.StorageDescriptor { + if len(l) == 0 { + return nil + } + + s := l[0].(map[string]interface{}) + storageDescriptor := &glue.StorageDescriptor{} + + if v, ok := s["columns"]; ok { + columns := expandGlueColumns(v.([]interface{})) + storageDescriptor.Columns = columns + } + + if v, ok := s["location"]; ok { + storageDescriptor.Location = aws.String(v.(string)) + } + + if v, ok := s["input_format"]; ok { + storageDescriptor.InputFormat = aws.String(v.(string)) + } + + if v, ok := s["output_format"]; ok { + storageDescriptor.OutputFormat = aws.String(v.(string)) + } + + if v, ok := s["compressed"]; ok { + storageDescriptor.Compressed = aws.Bool(v.(bool)) + } + + if v, ok := s["number_of_buckets"]; ok { + storageDescriptor.NumberOfBuckets = aws.Int64(int64(v.(int))) + } + + if v, ok := s["ser_de_info"]; ok { + storageDescriptor.SerdeInfo = expandGlueSerDeInfo(v.([]interface{})) + } + + if v, ok := s["bucket_columns"]; ok { + bucketColumns := make([]string, len(v.([]interface{}))) + for i, item := range v.([]interface{}) { + bucketColumns[i] = fmt.Sprint(item) + } + storageDescriptor.BucketColumns = aws.StringSlice(bucketColumns) + } + + if v, ok := s["sort_columns"]; ok { + storageDescriptor.SortColumns = expandGlueSortColumns(v.([]interface{})) + } + + if v, ok := s["skewed_info"]; ok { + storageDescriptor.SkewedInfo = expandGlueSkewedInfo(v.([]interface{})) + } + + if v, ok := s["parameters"]; ok { + paramsMap := map[string]string{} + for key, value := range v.(map[string]interface{}) { + paramsMap[key] = value.(string) + } + storageDescriptor.Parameters = aws.StringMap(paramsMap) + } + + if v, ok := s["stored_as_sub_directories"]; ok { + storageDescriptor.StoredAsSubDirectories = aws.Bool(v.(bool)) + } + + return storageDescriptor +} + +func expandGlueColumns(columns []interface{}) []*glue.Column { + columnSlice := []*glue.Column{} + for _, element := range columns { + elementMap := element.(map[string]interface{}) + + column := &glue.Column{ + Name: aws.String(elementMap["name"].(string)), + } + + if v, ok := elementMap["comment"]; ok { + column.Comment = aws.String(v.(string)) + } + + if v, ok := elementMap["type"]; ok { + column.Type = aws.String(v.(string)) + } + + columnSlice = append(columnSlice, column) + } + + return columnSlice +} + +func expandGlueSerDeInfo(l []interface{}) *glue.SerDeInfo { + if len(l) == 0 { + return nil + } + + s := 
l[0].(map[string]interface{}) + serDeInfo := &glue.SerDeInfo{} + + if v, ok := s["name"]; ok { + serDeInfo.Name = aws.String(v.(string)) + } + + if v, ok := s["parameters"]; ok { + paramsMap := map[string]string{} + for key, value := range v.(map[string]interface{}) { + paramsMap[key] = value.(string) + } + serDeInfo.Parameters = aws.StringMap(paramsMap) + } + + if v, ok := s["serialization_library"]; ok { + serDeInfo.SerializationLibrary = aws.String(v.(string)) + } + + return serDeInfo +} + +func expandGlueSortColumns(columns []interface{}) []*glue.Order { + orderSlice := make([]*glue.Order, len(columns)) + + for i, element := range columns { + elementMap := element.(map[string]interface{}) + + order := &glue.Order{ + Column: aws.String(elementMap["column"].(string)), + } + + if v, ok := elementMap["sort_order"]; ok { + order.SortOrder = aws.Int64(int64(v.(int))) + } + + orderSlice[i] = order + } + + return orderSlice +} + +func expandGlueSkewedInfo(l []interface{}) *glue.SkewedInfo { + if len(l) == 0 { + return nil + } + + s := l[0].(map[string]interface{}) + skewedInfo := &glue.SkewedInfo{} + + if v, ok := s["skewed_column_names"]; ok { + columnsSlice := make([]string, len(v.([]interface{}))) + for i, item := range v.([]interface{}) { + columnsSlice[i] = fmt.Sprint(item) + } + skewedInfo.SkewedColumnNames = aws.StringSlice(columnsSlice) + } + + if v, ok := s["skewed_column_value_location_maps"]; ok { + typeMap := map[string]string{} + for key, value := range v.(map[string]interface{}) { + typeMap[key] = value.(string) + } + skewedInfo.SkewedColumnValueLocationMaps = aws.StringMap(typeMap) + } + + if v, ok := s["skewed_column_values"]; ok { + columnsSlice := make([]string, len(v.([]interface{}))) + for i, item := range v.([]interface{}) { + columnsSlice[i] = fmt.Sprint(item) + } + skewedInfo.SkewedColumnValues = aws.StringSlice(columnsSlice) + } + + return skewedInfo +} + +func flattenGlueStorageDescriptor(s *glue.StorageDescriptor) []map[string]interface{} { + if s == nil { + storageDescriptors := make([]map[string]interface{}, 0) + return storageDescriptors + } + + storageDescriptors := make([]map[string]interface{}, 1) + + storageDescriptor := make(map[string]interface{}) + + storageDescriptor["columns"] = flattenGlueColumns(s.Columns) + storageDescriptor["location"] = aws.StringValue(s.Location) + storageDescriptor["input_format"] = aws.StringValue(s.InputFormat) + storageDescriptor["output_format"] = aws.StringValue(s.OutputFormat) + storageDescriptor["compressed"] = aws.BoolValue(s.Compressed) + storageDescriptor["number_of_buckets"] = aws.Int64Value(s.NumberOfBuckets) + storageDescriptor["ser_de_info"] = flattenGlueSerDeInfo(s.SerdeInfo) + storageDescriptor["bucket_columns"] = flattenStringList(s.BucketColumns) + storageDescriptor["sort_columns"] = flattenGlueOrders(s.SortColumns) + storageDescriptor["parameters"] = aws.StringValueMap(s.Parameters) + storageDescriptor["skewed_info"] = flattenGlueSkewedInfo(s.SkewedInfo) + storageDescriptor["stored_as_sub_directories"] = aws.BoolValue(s.StoredAsSubDirectories) + + storageDescriptors[0] = storageDescriptor + + return storageDescriptors +} + +func flattenGlueColumns(cs []*glue.Column) []map[string]string { + columnsSlice := make([]map[string]string, len(cs)) + if len(cs) > 0 { + for i, v := range cs { + columnsSlice[i] = flattenGlueColumn(v) + } + } + + return columnsSlice +} + +func flattenGlueColumn(c *glue.Column) map[string]string { + column := make(map[string]string) + + if c == nil { + return column + } + + if v := 
aws.StringValue(c.Name); v != "" { + column["name"] = v + } + + if v := aws.StringValue(c.Type); v != "" { + column["type"] = v + } + + if v := aws.StringValue(c.Comment); v != "" { + column["comment"] = v + } + + return column +} + +func flattenGlueSerDeInfo(s *glue.SerDeInfo) []map[string]interface{} { + if s == nil { + serDeInfos := make([]map[string]interface{}, 0) + return serDeInfos + } + + serDeInfos := make([]map[string]interface{}, 1) + serDeInfo := make(map[string]interface{}) + + serDeInfo["name"] = aws.StringValue(s.Name) + serDeInfo["parameters"] = aws.StringValueMap(s.Parameters) + serDeInfo["serialization_library"] = aws.StringValue(s.SerializationLibrary) + + serDeInfos[0] = serDeInfo + return serDeInfos +} + +func flattenGlueOrders(os []*glue.Order) []map[string]interface{} { + orders := make([]map[string]interface{}, len(os)) + for i, v := range os { + order := make(map[string]interface{}) + order["column"] = aws.StringValue(v.Column) + order["sort_order"] = int(aws.Int64Value(v.SortOrder)) + orders[i] = order + } + + return orders +} + +func flattenGlueSkewedInfo(s *glue.SkewedInfo) []map[string]interface{} { + if s == nil { + skewedInfoSlice := make([]map[string]interface{}, 0) + return skewedInfoSlice + } + + skewedInfoSlice := make([]map[string]interface{}, 1) + + skewedInfo := make(map[string]interface{}) + skewedInfo["skewed_column_names"] = flattenStringList(s.SkewedColumnNames) + skewedInfo["skewed_column_value_location_maps"] = aws.StringValueMap(s.SkewedColumnValueLocationMaps) + skewedInfo["skewed_column_values"] = flattenStringList(s.SkewedColumnValues) + skewedInfoSlice[0] = skewedInfo + + return skewedInfoSlice +} diff --git a/aws/resource_aws_glue_catalog_table_test.go b/aws/resource_aws_glue_catalog_table_test.go new file mode 100644 index 00000000000..014afa7b2b9 --- /dev/null +++ b/aws/resource_aws_glue_catalog_table_test.go @@ -0,0 +1,557 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSGlueCatalogTable_importBasic(t *testing.T) { + resourceName := "aws_glue_catalog_table.test" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlueTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCatalogTable_full(rInt, "A test table from terraform"), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCatalogTable_basic(t *testing.T) { + rInt := acctest.RandInt() + tableName := "aws_glue_catalog_table.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlueTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCatalogTable_basic(rInt), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlueCatalogTableExists("aws_glue_catalog_table.test"), + resource.TestCheckResourceAttr( + "aws_glue_catalog_table.test", + "name", + fmt.Sprintf("my_test_catalog_table_%d", rInt), + ), + resource.TestCheckResourceAttr( + tableName, + "database_name", + fmt.Sprintf("my_test_catalog_database_%d", rInt), + ), + ), + }, + }, + }) +} + +func 
TestAccAWSGlueCatalogTable_full(t *testing.T) { + rInt := acctest.RandInt() + description := "A test table from terraform" + tableName := "aws_glue_catalog_table.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlueTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCatalogTable_full(rInt, description), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlueCatalogTableExists(tableName), + resource.TestCheckResourceAttr(tableName, "name", fmt.Sprintf("my_test_catalog_table_%d", rInt)), + resource.TestCheckResourceAttr(tableName, "database_name", fmt.Sprintf("my_test_catalog_database_%d", rInt)), + resource.TestCheckResourceAttr(tableName, "description", description), + resource.TestCheckResourceAttr(tableName, "owner", "my_owner"), + resource.TestCheckResourceAttr(tableName, "retention", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.name", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.type", "int"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.comment", "my_column1_comment"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.name", "my_column_2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.type", "string"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.comment", "my_column2_comment"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.location", "my_location"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.input_format", "SequenceFileInputFormat"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.output_format", "SequenceFileInputFormat"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.compressed", "false"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.number_of_buckets", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.name", "ser_de_name"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.parameters.param1", "param_val_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.serialization_library", "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.bucket_columns.0", "bucket_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.sort_columns.0.column", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.sort_columns.0.sort_order", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.parameters.param1", "param1_val"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_names.0", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_value_location_maps.my_column_1", "my_column_1_val_loc_map"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_values.0", "skewed_val_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.stored_as_sub_directories", "false"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.name", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.type", "int"), + resource.TestCheckResourceAttr(tableName, 
"partition_keys.0.comment", "my_column_1_comment"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.name", "my_column_2"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.type", "string"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.comment", "my_column_2_comment"), + resource.TestCheckResourceAttr(tableName, "view_original_text", "view_original_text_1"), + resource.TestCheckResourceAttr(tableName, "view_expanded_text", "view_expanded_text_1"), + resource.TestCheckResourceAttr(tableName, "table_type", "VIRTUAL_VIEW"), + resource.TestCheckResourceAttr(tableName, "parameters.param1", "param1_val"), + ), + }, + }, + }) +} + +func TestAccAWSGlueCatalogTable_update_addValues(t *testing.T) { + rInt := acctest.RandInt() + description := "A test table from terraform" + tableName := "aws_glue_catalog_table.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlueTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCatalogTable_basic(rInt), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlueCatalogTableExists("aws_glue_catalog_table.test"), + resource.TestCheckResourceAttr( + "aws_glue_catalog_table.test", + "name", + fmt.Sprintf("my_test_catalog_table_%d", rInt), + ), + resource.TestCheckResourceAttr( + tableName, + "database_name", + fmt.Sprintf("my_test_catalog_database_%d", rInt), + ), + ), + }, + { + Config: testAccGlueCatalogTable_full(rInt, description), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlueCatalogTableExists(tableName), + resource.TestCheckResourceAttr(tableName, "name", fmt.Sprintf("my_test_catalog_table_%d", rInt)), + resource.TestCheckResourceAttr(tableName, "database_name", fmt.Sprintf("my_test_catalog_database_%d", rInt)), + resource.TestCheckResourceAttr(tableName, "description", description), + resource.TestCheckResourceAttr(tableName, "owner", "my_owner"), + resource.TestCheckResourceAttr(tableName, "retention", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.name", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.type", "int"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.comment", "my_column1_comment"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.name", "my_column_2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.type", "string"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.comment", "my_column2_comment"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.location", "my_location"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.input_format", "SequenceFileInputFormat"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.output_format", "SequenceFileInputFormat"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.compressed", "false"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.number_of_buckets", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.name", "ser_de_name"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.parameters.param1", "param_val_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.serialization_library", 
"org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.bucket_columns.0", "bucket_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.sort_columns.0.column", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.sort_columns.0.sort_order", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.parameters.param1", "param1_val"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_names.0", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_value_location_maps.my_column_1", "my_column_1_val_loc_map"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_values.0", "skewed_val_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.stored_as_sub_directories", "false"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.name", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.type", "int"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.comment", "my_column_1_comment"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.name", "my_column_2"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.type", "string"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.comment", "my_column_2_comment"), + resource.TestCheckResourceAttr(tableName, "view_original_text", "view_original_text_1"), + resource.TestCheckResourceAttr(tableName, "view_expanded_text", "view_expanded_text_1"), + resource.TestCheckResourceAttr(tableName, "table_type", "VIRTUAL_VIEW"), + resource.TestCheckResourceAttr(tableName, "parameters.param1", "param1_val"), + ), + }, + }, + }) +} + +func TestAccAWSGlueCatalogTable_update_replaceValues(t *testing.T) { + rInt := acctest.RandInt() + description := "A test table from terraform" + tableName := "aws_glue_catalog_table.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckGlueTableDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCatalogTable_full(rInt, description), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlueCatalogTableExists(tableName), + resource.TestCheckResourceAttr(tableName, "name", fmt.Sprintf("my_test_catalog_table_%d", rInt)), + resource.TestCheckResourceAttr(tableName, "database_name", fmt.Sprintf("my_test_catalog_database_%d", rInt)), + resource.TestCheckResourceAttr(tableName, "description", description), + resource.TestCheckResourceAttr(tableName, "owner", "my_owner"), + resource.TestCheckResourceAttr(tableName, "retention", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.name", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.type", "int"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.comment", "my_column1_comment"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.name", "my_column_2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.type", "string"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.comment", "my_column2_comment"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.location", "my_location"), + 
resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.input_format", "SequenceFileInputFormat"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.output_format", "SequenceFileInputFormat"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.compressed", "false"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.number_of_buckets", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.name", "ser_de_name"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.parameters.param1", "param_val_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.serialization_library", "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.bucket_columns.0", "bucket_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.sort_columns.0.column", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.sort_columns.0.sort_order", "1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.parameters.param1", "param1_val"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_names.0", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_value_location_maps.my_column_1", "my_column_1_val_loc_map"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_values.0", "skewed_val_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.stored_as_sub_directories", "false"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.name", "my_column_1"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.type", "int"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.comment", "my_column_1_comment"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.name", "my_column_2"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.type", "string"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.comment", "my_column_2_comment"), + resource.TestCheckResourceAttr(tableName, "view_original_text", "view_original_text_1"), + resource.TestCheckResourceAttr(tableName, "view_expanded_text", "view_expanded_text_1"), + resource.TestCheckResourceAttr(tableName, "table_type", "VIRTUAL_VIEW"), + resource.TestCheckResourceAttr(tableName, "parameters.param1", "param1_val"), + ), + }, + { + Config: testAccGlueCatalogTable_full_replacedValues(rInt), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckGlueCatalogTableExists(tableName), + resource.TestCheckResourceAttr(tableName, "name", fmt.Sprintf("my_test_catalog_table_%d", rInt)), + resource.TestCheckResourceAttr(tableName, "database_name", fmt.Sprintf("my_test_catalog_database_%d", rInt)), + resource.TestCheckResourceAttr(tableName, "description", "A test table from terraform2"), + resource.TestCheckResourceAttr(tableName, "owner", "my_owner2"), + resource.TestCheckResourceAttr(tableName, "retention", "2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.name", "my_column_12"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.type", "date"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.0.comment", "my_column1_comment2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.name", 
"my_column_22"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.type", "timestamp"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.columns.1.comment", "my_column2_comment2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.location", "my_location2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.input_format", "TextInputFormat"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.output_format", "TextInputFormat"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.compressed", "true"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.number_of_buckets", "12"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.name", "ser_de_name2"), + resource.TestCheckNoResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.parameters.param1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.parameters.param2", "param_val_12"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.ser_de_info.0.serialization_library", "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.bucket_columns.0", "bucket_column_12"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.bucket_columns.1", "bucket_column_2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.sort_columns.0.column", "my_column_12"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.sort_columns.0.sort_order", "0"), + resource.TestCheckNoResourceAttr(tableName, "storage_descriptor.0.parameters.param1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.parameters.param12", "param1_val2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_names.0", "my_column_12"), + resource.TestCheckNoResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_value_location_maps.my_column_1"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_value_location_maps.my_column_12", "my_column_1_val_loc_map2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_values.0", "skewed_val_12"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.skewed_info.0.skewed_column_values.1", "skewed_val_2"), + resource.TestCheckResourceAttr(tableName, "storage_descriptor.0.stored_as_sub_directories", "true"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.name", "my_column_12"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.type", "date"), + resource.TestCheckResourceAttr(tableName, "partition_keys.0.comment", "my_column_1_comment2"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.name", "my_column_22"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.type", "timestamp"), + resource.TestCheckResourceAttr(tableName, "partition_keys.1.comment", "my_column_2_comment2"), + resource.TestCheckResourceAttr(tableName, "view_original_text", "view_original_text_12"), + resource.TestCheckResourceAttr(tableName, "view_expanded_text", "view_expanded_text_12"), + resource.TestCheckResourceAttr(tableName, "table_type", "EXTERNAL_TABLE"), + //resource.TestCheckResourceAttr(tableName, "parameters.param1", "param1_val"), + resource.TestCheckResourceAttr(tableName, "parameters.param2", "param1_val2"), + ), + }, + }, + }) +} + +func 
testAccGlueCatalogTable_basic(rInt int) string { + return fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = "my_test_catalog_database_%d" +} + +resource "aws_glue_catalog_table" "test" { + name = "my_test_catalog_table_%d" + database_name = "${aws_glue_catalog_database.test.name}" +} +`, rInt, rInt) +} + +func testAccGlueCatalogTable_full(rInt int, desc string) string { + return fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = "my_test_catalog_database_%d" +} + +resource "aws_glue_catalog_table" "test" { + name = "my_test_catalog_table_%d" + database_name = "${aws_glue_catalog_database.test.name}" + description = "%s" + owner = "my_owner" + retention = 1 + storage_descriptor { + columns = [ + { + name = "my_column_1" + type = "int" + comment = "my_column1_comment" + }, + { + name = "my_column_2" + type = "string" + comment = "my_column2_comment" + } + ] + location = "my_location" + input_format = "SequenceFileInputFormat" + output_format = "SequenceFileInputFormat" + compressed = false + number_of_buckets = 1 + ser_de_info { + name = "ser_de_name" + parameters { + param1 = "param_val_1" + } + serialization_library = "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe" + } + bucket_columns = ["bucket_column_1"] + sort_columns = [ + { + column = "my_column_1" + sort_order = 1 + } + ] + parameters { + param1 = "param1_val" + } + skewed_info { + skewed_column_names = [ + "my_column_1" + ] + skewed_column_value_location_maps { + my_column_1 = "my_column_1_val_loc_map" + } + skewed_column_values = [ + "skewed_val_1" + ] + } + stored_as_sub_directories = false + } + partition_keys = [ + { + name = "my_column_1" + type = "int" + comment = "my_column_1_comment" + }, + { + name = "my_column_2" + type = "string" + comment = "my_column_2_comment" + } + ] + view_original_text = "view_original_text_1" + view_expanded_text = "view_expanded_text_1" + table_type = "VIRTUAL_VIEW" + parameters { + param1 = "param1_val" + } +} +`, rInt, rInt, desc) +} + +func testAccGlueCatalogTable_full_replacedValues(rInt int) string { + return fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = "my_test_catalog_database_%d" +} + +resource "aws_glue_catalog_table" "test" { + name = "my_test_catalog_table_%d" + database_name = "${aws_glue_catalog_database.test.name}" + description = "A test table from terraform2" + owner = "my_owner2" + retention = 2 + storage_descriptor { + columns = [ + { + name = "my_column_12" + type = "date" + comment = "my_column1_comment2" + }, + { + name = "my_column_22" + type = "timestamp" + comment = "my_column2_comment2" + } + ] + location = "my_location2" + input_format = "TextInputFormat" + output_format = "TextInputFormat" + compressed = true + number_of_buckets = 12 + ser_de_info { + name = "ser_de_name2" + parameters { + param2 = "param_val_12" + } + serialization_library = "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe2" + } + bucket_columns = [ + "bucket_column_12", + "bucket_column_2" + ] + sort_columns = [ + { + column = "my_column_12" + sort_order = 0 + } + ] + parameters { + param12 = "param1_val2" + } + skewed_info { + skewed_column_names = [ + "my_column_12" + ] + skewed_column_value_location_maps { + my_column_12 = "my_column_1_val_loc_map2" + } + skewed_column_values = [ + "skewed_val_12", + "skewed_val_2" + ] + } + stored_as_sub_directories = true + } + partition_keys = [ + { + name = "my_column_12" + type = "date" + comment = "my_column_1_comment2" + }, + { + name = "my_column_22" + type = "timestamp" + comment = 
"my_column_2_comment2" + } + ] + view_original_text = "view_original_text_12" + view_expanded_text = "view_expanded_text_12" + table_type = "EXTERNAL_TABLE" + parameters { + param2 = "param1_val2" + } +} +`, rInt, rInt) +} + +func testAccCheckGlueTableDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).glueconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_glue_catalog_table" { + continue + } + + catalogId, dbName, tableName, err := readAwsGlueTableID(rs.Primary.ID) + if err != nil { + return err + } + + input := &glue.GetTableInput{ + DatabaseName: aws.String(dbName), + CatalogId: aws.String(catalogId), + Name: aws.String(tableName), + } + if _, err := conn.GetTable(input); err != nil { + //Verify the error is what we want + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + continue + } + + return err + } + return fmt.Errorf("still exists") + } + return nil +} + +func testAccCheckGlueCatalogTableExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + catalogId, dbName, tableName, err := readAwsGlueTableID(rs.Primary.ID) + if err != nil { + return err + } + + glueconn := testAccProvider.Meta().(*AWSClient).glueconn + out, err := glueconn.GetTable(&glue.GetTableInput{ + CatalogId: aws.String(catalogId), + DatabaseName: aws.String(dbName), + Name: aws.String(tableName), + }) + + if err != nil { + return err + } + + if out.Table == nil { + return fmt.Errorf("No Glue Table Found") + } + + if *out.Table.Name != tableName { + return fmt.Errorf("Glue Table Mismatch - existing: %q, state: %q", + *out.Table.Name, tableName) + } + + return nil + } +} diff --git a/aws/resource_aws_glue_classifier.go b/aws/resource_aws_glue_classifier.go new file mode 100644 index 00000000000..9505201cc27 --- /dev/null +++ b/aws/resource_aws_glue_classifier.go @@ -0,0 +1,348 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform/helper/customdiff" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsGlueClassifier() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsGlueClassifierCreate, + Read: resourceAwsGlueClassifierRead, + Update: resourceAwsGlueClassifierUpdate, + Delete: resourceAwsGlueClassifierDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + CustomizeDiff: customdiff.Sequence( + func(diff *schema.ResourceDiff, v interface{}) error { + // ForceNew when changing classifier type + // InvalidInputException: UpdateClassifierRequest can't change the type of the classifier + if diff.HasChange("grok_classifier") && diff.HasChange("json_classifier") { + diff.ForceNew("grok_classifier") + diff.ForceNew("json_classifier") + } + if diff.HasChange("grok_classifier") && diff.HasChange("xml_classifier") { + diff.ForceNew("grok_classifier") + diff.ForceNew("xml_classifier") + } + if diff.HasChange("json_classifier") && diff.HasChange("xml_classifier") { + diff.ForceNew("json_classifier") + diff.ForceNew("xml_classifier") + } + return nil + }, + ), + + Schema: map[string]*schema.Schema{ + "grok_classifier": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: 
[]string{"json_classifier", "xml_classifier"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "classification": { + Type: schema.TypeString, + Required: true, + }, + "custom_patterns": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 16000), + }, + "grok_pattern": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + }, + }, + }, + "json_classifier": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"grok_classifier", "xml_classifier"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "json_path": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "xml_classifier": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"grok_classifier", "json_classifier"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "classification": { + Type: schema.TypeString, + Required: true, + }, + "row_tag": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + } +} + +func resourceAwsGlueClassifierCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glueconn + name := d.Get("name").(string) + + input := &glue.CreateClassifierInput{} + + if v, ok := d.GetOk("grok_classifier"); ok { + m := v.([]interface{})[0].(map[string]interface{}) + input.GrokClassifier = expandGlueGrokClassifierCreate(name, m) + } + + if v, ok := d.GetOk("json_classifier"); ok { + m := v.([]interface{})[0].(map[string]interface{}) + input.JsonClassifier = expandGlueJsonClassifierCreate(name, m) + } + + if v, ok := d.GetOk("xml_classifier"); ok { + m := v.([]interface{})[0].(map[string]interface{}) + input.XMLClassifier = expandGlueXmlClassifierCreate(name, m) + } + + log.Printf("[DEBUG] Creating Glue Classifier: %s", input) + _, err := conn.CreateClassifier(input) + if err != nil { + return fmt.Errorf("error creating Glue Classifier (%s): %s", name, err) + } + + d.SetId(name) + + return resourceAwsGlueClassifierRead(d, meta) +} + +func resourceAwsGlueClassifierRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glueconn + + input := &glue.GetClassifierInput{ + Name: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading Glue Classifier: %s", input) + output, err := conn.GetClassifier(input) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + log.Printf("[WARN] Glue Classifier (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading Glue Classifier (%s): %s", d.Id(), err) + } + + classifier := output.Classifier + if classifier == nil { + log.Printf("[WARN] Glue Classifier (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err := d.Set("grok_classifier", flattenGlueGrokClassifier(classifier.GrokClassifier)); err != nil { + return fmt.Errorf("error setting match_criteria: %s", err) + } + + if err := d.Set("json_classifier", flattenGlueJsonClassifier(classifier.JsonClassifier)); err != nil { + return fmt.Errorf("error setting json_classifier: %s", err) + } + + d.Set("name", d.Id()) + + if err := d.Set("xml_classifier", flattenGlueXmlClassifier(classifier.XMLClassifier)); err != nil { + return fmt.Errorf("error setting xml_classifier: %s", err) + } + + return nil +} + +func 
resourceAwsGlueClassifierUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glueconn + + input := &glue.UpdateClassifierInput{} + + if v, ok := d.GetOk("grok_classifier"); ok { + m := v.([]interface{})[0].(map[string]interface{}) + input.GrokClassifier = expandGlueGrokClassifierUpdate(d.Id(), m) + } + + if v, ok := d.GetOk("json_classifier"); ok { + m := v.([]interface{})[0].(map[string]interface{}) + input.JsonClassifier = expandGlueJsonClassifierUpdate(d.Id(), m) + } + + if v, ok := d.GetOk("xml_classifier"); ok { + m := v.([]interface{})[0].(map[string]interface{}) + input.XMLClassifier = expandGlueXmlClassifierUpdate(d.Id(), m) + } + + log.Printf("[DEBUG] Updating Glue Classifier: %s", input) + _, err := conn.UpdateClassifier(input) + if err != nil { + return fmt.Errorf("error updating Glue Classifier (%s): %s", d.Id(), err) + } + + return nil +} + +func resourceAwsGlueClassifierDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glueconn + + log.Printf("[DEBUG] Deleting Glue Classifier: %s", d.Id()) + err := deleteGlueClassifier(conn, d.Id()) + if err != nil { + return fmt.Errorf("error deleting Glue Classifier (%s): %s", d.Id(), err) + } + + return nil +} + +func deleteGlueClassifier(conn *glue.Glue, name string) error { + input := &glue.DeleteClassifierInput{ + Name: aws.String(name), + } + + _, err := conn.DeleteClassifier(input) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return nil + } + return err + } + + return nil +} + +func expandGlueGrokClassifierCreate(name string, m map[string]interface{}) *glue.CreateGrokClassifierRequest { + grokClassifier := &glue.CreateGrokClassifierRequest{ + Classification: aws.String(m["classification"].(string)), + GrokPattern: aws.String(m["grok_pattern"].(string)), + Name: aws.String(name), + } + + if v, ok := m["custom_patterns"]; ok && v.(string) != "" { + grokClassifier.CustomPatterns = aws.String(v.(string)) + } + + return grokClassifier +} + +func expandGlueGrokClassifierUpdate(name string, m map[string]interface{}) *glue.UpdateGrokClassifierRequest { + grokClassifier := &glue.UpdateGrokClassifierRequest{ + Classification: aws.String(m["classification"].(string)), + GrokPattern: aws.String(m["grok_pattern"].(string)), + Name: aws.String(name), + } + + if v, ok := m["custom_patterns"]; ok && v.(string) != "" { + grokClassifier.CustomPatterns = aws.String(v.(string)) + } + + return grokClassifier +} + +func expandGlueJsonClassifierCreate(name string, m map[string]interface{}) *glue.CreateJsonClassifierRequest { + jsonClassifier := &glue.CreateJsonClassifierRequest{ + JsonPath: aws.String(m["json_path"].(string)), + Name: aws.String(name), + } + + return jsonClassifier +} + +func expandGlueJsonClassifierUpdate(name string, m map[string]interface{}) *glue.UpdateJsonClassifierRequest { + jsonClassifier := &glue.UpdateJsonClassifierRequest{ + JsonPath: aws.String(m["json_path"].(string)), + Name: aws.String(name), + } + + return jsonClassifier +} + +func expandGlueXmlClassifierCreate(name string, m map[string]interface{}) *glue.CreateXMLClassifierRequest { + xmlClassifier := &glue.CreateXMLClassifierRequest{ + Classification: aws.String(m["classification"].(string)), + Name: aws.String(name), + RowTag: aws.String(m["row_tag"].(string)), + } + + return xmlClassifier +} + +func expandGlueXmlClassifierUpdate(name string, m map[string]interface{}) *glue.UpdateXMLClassifierRequest { + xmlClassifier := &glue.UpdateXMLClassifierRequest{ + 
Classification: aws.String(m["classification"].(string)), + Name: aws.String(name), + RowTag: aws.String(m["row_tag"].(string)), + } + + if v, ok := m["row_tag"]; ok && v.(string) != "" { + xmlClassifier.RowTag = aws.String(v.(string)) + } + + return xmlClassifier +} + +func flattenGlueGrokClassifier(grokClassifier *glue.GrokClassifier) []map[string]interface{} { + if grokClassifier == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "classification": aws.StringValue(grokClassifier.Classification), + "custom_patterns": aws.StringValue(grokClassifier.CustomPatterns), + "grok_pattern": aws.StringValue(grokClassifier.GrokPattern), + } + + return []map[string]interface{}{m} +} + +func flattenGlueJsonClassifier(jsonClassifier *glue.JsonClassifier) []map[string]interface{} { + if jsonClassifier == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "json_path": aws.StringValue(jsonClassifier.JsonPath), + } + + return []map[string]interface{}{m} +} + +func flattenGlueXmlClassifier(xmlClassifier *glue.XMLClassifier) []map[string]interface{} { + if xmlClassifier == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "classification": aws.StringValue(xmlClassifier.Classification), + "row_tag": aws.StringValue(xmlClassifier.RowTag), + } + + return []map[string]interface{}{m} +} diff --git a/aws/resource_aws_glue_classifier_test.go b/aws/resource_aws_glue_classifier_test.go new file mode 100644 index 00000000000..1d4b091d679 --- /dev/null +++ b/aws/resource_aws_glue_classifier_test.go @@ -0,0 +1,437 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_glue_classifier", &resource.Sweeper{ + Name: "aws_glue_classifier", + F: testSweepGlueClassifiers, + }) +} + +func testSweepGlueClassifiers(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).glueconn + + prefixes := []string{ + "tf-acc-test-", + } + + input := &glue.GetClassifiersInput{} + err = conn.GetClassifiersPages(input, func(page *glue.GetClassifiersOutput, lastPage bool) bool { + if len(page.Classifiers) == 0 { + log.Printf("[INFO] No Glue Classifiers to sweep") + return false + } + for _, classifier := range page.Classifiers { + skip := true + + var name string + if classifier.GrokClassifier != nil { + name = aws.StringValue(classifier.GrokClassifier.Name) + } else if classifier.JsonClassifier != nil { + name = aws.StringValue(classifier.JsonClassifier.Name) + } else if classifier.XMLClassifier != nil { + name = aws.StringValue(classifier.XMLClassifier.Name) + } + if name == "" { + log.Printf("[WARN] Unable to determine Glue Classifier name: %#v", classifier) + continue + } + + for _, prefix := range prefixes { + if strings.HasPrefix(name, prefix) { + skip = false + break + } + } + if skip { + log.Printf("[INFO] Skipping Glue Classifier: %s", name) + continue + } + + log.Printf("[INFO] Deleting Glue Classifier: %s", name) + err := deleteGlueClassifier(conn, name) + if err != nil { + log.Printf("[ERROR] Failed to delete Glue Classifier %s: %s", name, err) + } + } + return !lastPage + }) + if err != nil { + if 
testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Glue Classifier sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Glue Classifiers: %s", err) + } + + return nil +} + +func TestAccAWSGlueClassifier_GrokClassifier(t *testing.T) { + var classifier glue.Classifier + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_classifier.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueClassifierDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueClassifierConfig_GrokClassifier(rName, "classification1", "pattern1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.classification", "classification1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.custom_patterns", ""), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.grok_pattern", "pattern1"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + { + Config: testAccAWSGlueClassifierConfig_GrokClassifier(rName, "classification2", "pattern2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.classification", "classification2"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.custom_patterns", ""), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.grok_pattern", "pattern2"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueClassifier_GrokClassifier_CustomPatterns(t *testing.T) { + var classifier glue.Classifier + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_classifier.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueClassifierDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueClassifierConfig_GrokClassifier_CustomPatterns(rName, "custompattern1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.classification", "classification"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.custom_patterns", "custompattern1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.grok_pattern", "pattern"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + { + Config: 
testAccAWSGlueClassifierConfig_GrokClassifier_CustomPatterns(rName, "custompattern2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.classification", "classification"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.custom_patterns", "custompattern2"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.grok_pattern", "pattern"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueClassifier_JsonClassifier(t *testing.T) { + var classifier glue.Classifier + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_classifier.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueClassifierDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueClassifierConfig_JsonClassifier(rName, "jsonpath1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.0.json_path", "jsonpath1"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + { + Config: testAccAWSGlueClassifierConfig_JsonClassifier(rName, "jsonpath2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.0.json_path", "jsonpath2"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueClassifier_TypeChange(t *testing.T) { + var classifier glue.Classifier + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_classifier.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueClassifierDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueClassifierConfig_GrokClassifier(rName, "classification1", "pattern1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.classification", "classification1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.custom_patterns", ""), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.grok_pattern", "pattern1"), + resource.TestCheckResourceAttr(resourceName, 
"json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + { + Config: testAccAWSGlueClassifierConfig_JsonClassifier(rName, "jsonpath1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.0.json_path", "jsonpath1"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + { + Config: testAccAWSGlueClassifierConfig_XmlClassifier(rName, "classification1", "rowtag1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.0.classification", "classification1"), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.0.row_tag", "rowtag1"), + ), + }, + { + Config: testAccAWSGlueClassifierConfig_GrokClassifier(rName, "classification1", "pattern1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.classification", "classification1"), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.custom_patterns", ""), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.0.grok_pattern", "pattern1"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "0"), + ), + }, + }, + }) +} + +func TestAccAWSGlueClassifier_XmlClassifier(t *testing.T) { + var classifier glue.Classifier + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_classifier.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueClassifierDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueClassifierConfig_XmlClassifier(rName, "classification1", "rowtag1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.0.classification", "classification1"), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.0.row_tag", "rowtag1"), + ), + }, + { + Config: testAccAWSGlueClassifierConfig_XmlClassifier(rName, "classification2", "rowtag2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueClassifierExists(resourceName, &classifier), + 
resource.TestCheckResourceAttr(resourceName, "grok_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "json_classifier.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.0.classification", "classification2"), + resource.TestCheckResourceAttr(resourceName, "xml_classifier.0.row_tag", "rowtag2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSGlueClassifierExists(resourceName string, classifier *glue.Classifier) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Glue Classifier ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).glueconn + + output, err := conn.GetClassifier(&glue.GetClassifierInput{ + Name: aws.String(rs.Primary.ID), + }) + if err != nil { + return err + } + + if output.Classifier == nil { + return fmt.Errorf("Glue Classifier (%s) not found", rs.Primary.ID) + } + + *classifier = *output.Classifier + return nil + } +} + +func testAccCheckAWSGlueClassifierDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_glue_classifier" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).glueconn + + output, err := conn.GetClassifier(&glue.GetClassifierInput{ + Name: aws.String(rs.Primary.ID), + }) + + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return nil + } + + } + + classifier := output.Classifier + if classifier != nil { + return fmt.Errorf("Glue Classifier %s still exists", rs.Primary.ID) + } + + return err + } + + return nil +} + +func testAccAWSGlueClassifierConfig_GrokClassifier(rName, classification, grokPattern string) string { + return fmt.Sprintf(` +resource "aws_glue_classifier" "test" { + name = "%s" + + grok_classifier { + classification = "%s" + grok_pattern = "%s" + } +} +`, rName, classification, grokPattern) +} + +func testAccAWSGlueClassifierConfig_GrokClassifier_CustomPatterns(rName, customPatterns string) string { + return fmt.Sprintf(` +resource "aws_glue_classifier" "test" { + name = "%s" + + grok_classifier { + classification = "classification" + custom_patterns = "%s" + grok_pattern = "pattern" + } +} +`, rName, customPatterns) +} + +func testAccAWSGlueClassifierConfig_JsonClassifier(rName, jsonPath string) string { + return fmt.Sprintf(` +resource "aws_glue_classifier" "test" { + name = "%s" + + json_classifier { + json_path = "%s" + } +} +`, rName, jsonPath) +} + +func testAccAWSGlueClassifierConfig_XmlClassifier(rName, classification, rowTag string) string { + return fmt.Sprintf(` +resource "aws_glue_classifier" "test" { + name = "%s" + + xml_classifier { + classification = "%s" + row_tag = "%s" + } +} +`, rName, classification, rowTag) +} diff --git a/aws/resource_aws_glue_connection.go b/aws/resource_aws_glue_connection.go new file mode 100644 index 00000000000..546f3e9f4a5 --- /dev/null +++ b/aws/resource_aws_glue_connection.go @@ -0,0 +1,283 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" 
+) + +func resourceAwsGlueConnection() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsGlueConnectionCreate, + Read: resourceAwsGlueConnectionRead, + Update: resourceAwsGlueConnectionUpdate, + Delete: resourceAwsGlueConnectionDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "catalog_id": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Computed: true, + }, + "connection_properties": { + Type: schema.TypeMap, + Required: true, + }, + "connection_type": { + Type: schema.TypeString, + Optional: true, + Default: glue.ConnectionTypeJdbc, + ValidateFunc: validation.StringInSlice([]string{ + glue.ConnectionTypeJdbc, + glue.ConnectionTypeSftp, + }, false), + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "match_criteria": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.NoZeroValues, + }, + "physical_connection_requirements": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zone": { + Type: schema.TypeString, + Optional: true, + }, + "security_group_id_list": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "subnet_id": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + }, + } +} + +func resourceAwsGlueConnectionCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glueconn + var catalogID string + if v, ok := d.GetOkExists("catalog_id"); ok { + catalogID = v.(string) + } else { + catalogID = meta.(*AWSClient).accountid + } + name := d.Get("name").(string) + + input := &glue.CreateConnectionInput{ + CatalogId: aws.String(catalogID), + ConnectionInput: expandGlueConnectionInput(d), + } + + log.Printf("[DEBUG] Creating Glue Connection: %s", input) + _, err := conn.CreateConnection(input) + if err != nil { + return fmt.Errorf("error creating Glue Connection (%s): %s", name, err) + } + + d.SetId(fmt.Sprintf("%s:%s", catalogID, name)) + + return resourceAwsGlueConnectionRead(d, meta) +} + +func resourceAwsGlueConnectionRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glueconn + + catalogID, connectionName, err := decodeGlueConnectionID(d.Id()) + if err != nil { + return err + } + + input := &glue.GetConnectionInput{ + CatalogId: aws.String(catalogID), + Name: aws.String(connectionName), + } + + log.Printf("[DEBUG] Reading Glue Connection: %s", input) + output, err := conn.GetConnection(input) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + log.Printf("[WARN] Glue Connection (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading Glue Connection (%s): %s", d.Id(), err) + } + + connection := output.Connection + if connection == nil { + log.Printf("[WARN] Glue Connection (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("catalog_id", catalogID) + if err := d.Set("connection_properties", aws.StringValueMap(connection.ConnectionProperties)); err != nil { + return fmt.Errorf("error setting connection_properties: %s", err) + } + d.Set("connection_type", connection.ConnectionType) + d.Set("description", connection.Description) + if err := d.Set("match_criteria", 
flattenStringList(connection.MatchCriteria)); err != nil { + return fmt.Errorf("error setting match_criteria: %s", err) + } + d.Set("name", connection.Name) + if err := d.Set("physical_connection_requirements", flattenGluePhysicalConnectionRequirements(connection.PhysicalConnectionRequirements)); err != nil { + return fmt.Errorf("error setting physical_connection_requirements: %s", err) + } + + return nil +} + +func resourceAwsGlueConnectionUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glueconn + + catalogID, connectionName, err := decodeGlueConnectionID(d.Id()) + if err != nil { + return err + } + + input := &glue.UpdateConnectionInput{ + CatalogId: aws.String(catalogID), + ConnectionInput: expandGlueConnectionInput(d), + Name: aws.String(connectionName), + } + + log.Printf("[DEBUG] Updating Glue Connection: %s", input) + _, err = conn.UpdateConnection(input) + if err != nil { + return fmt.Errorf("error updating Glue Connection (%s): %s", d.Id(), err) + } + + return nil +} + +func resourceAwsGlueConnectionDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).glueconn + + catalogID, connectionName, err := decodeGlueConnectionID(d.Id()) + if err != nil { + return err + } + + log.Printf("[DEBUG] Deleting Glue Connection: %s", d.Id()) + err = deleteGlueConnection(conn, catalogID, connectionName) + if err != nil { + return fmt.Errorf("error deleting Glue Connection (%s): %s", d.Id(), err) + } + + return nil +} + +func decodeGlueConnectionID(id string) (string, string, error) { + idParts := strings.Split(id, ":") + if len(idParts) != 2 { + return "", "", fmt.Errorf("expected ID in format CATALOG-ID:NAME, provided: %s", id) + } + return idParts[0], idParts[1], nil +} + +func deleteGlueConnection(conn *glue.Glue, catalogID, connectionName string) error { + input := &glue.DeleteConnectionInput{ + CatalogId: aws.String(catalogID), + ConnectionName: aws.String(connectionName), + } + + _, err := conn.DeleteConnection(input) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return nil + } + return err + } + + return nil +} + +func expandGlueConnectionInput(d *schema.ResourceData) *glue.ConnectionInput { + connectionProperties := make(map[string]string) + for k, v := range d.Get("connection_properties").(map[string]interface{}) { + connectionProperties[k] = v.(string) + } + + connectionInput := &glue.ConnectionInput{ + ConnectionProperties: aws.StringMap(connectionProperties), + ConnectionType: aws.String(d.Get("connection_type").(string)), + Name: aws.String(d.Get("name").(string)), + } + + if v, ok := d.GetOk("description"); ok { + connectionInput.Description = aws.String(v.(string)) + } + + if v, ok := d.GetOk("match_criteria"); ok { + connectionInput.MatchCriteria = expandStringList(v.([]interface{})) + } + + if v, ok := d.GetOk("physical_connection_requirements"); ok { + physicalConnectionRequirementsList := v.([]interface{}) + physicalConnectionRequirementsMap := physicalConnectionRequirementsList[0].(map[string]interface{}) + connectionInput.PhysicalConnectionRequirements = expandGluePhysicalConnectionRequirements(physicalConnectionRequirementsMap) + } + + return connectionInput +} + +func expandGluePhysicalConnectionRequirements(m map[string]interface{}) *glue.PhysicalConnectionRequirements { + physicalConnectionRequirements := &glue.PhysicalConnectionRequirements{} + + if v, ok := m["availability_zone"]; ok { + physicalConnectionRequirements.AvailabilityZone = aws.String(v.(string)) + } + + if 
v, ok := m["security_group_id_list"]; ok { + physicalConnectionRequirements.SecurityGroupIdList = expandStringList(v.([]interface{})) + } + + if v, ok := m["subnet_id"]; ok { + physicalConnectionRequirements.SubnetId = aws.String(v.(string)) + } + + return physicalConnectionRequirements +} + +func flattenGluePhysicalConnectionRequirements(physicalConnectionRequirements *glue.PhysicalConnectionRequirements) []map[string]interface{} { + if physicalConnectionRequirements == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "availability_zone": aws.StringValue(physicalConnectionRequirements.AvailabilityZone), + "security_group_id_list": flattenStringList(physicalConnectionRequirements.SecurityGroupIdList), + "subnet_id": aws.StringValue(physicalConnectionRequirements.SubnetId), + } + + return []map[string]interface{}{m} +} diff --git a/aws/resource_aws_glue_connection_test.go b/aws/resource_aws_glue_connection_test.go new file mode 100644 index 00000000000..85eda1e3042 --- /dev/null +++ b/aws/resource_aws_glue_connection_test.go @@ -0,0 +1,450 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_glue_connection", &resource.Sweeper{ + Name: "aws_glue_connection", + F: testSweepGlueConnections, + }) +} + +func testSweepGlueConnections(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).glueconn + catalogID := client.(*AWSClient).accountid + + prefixes := []string{ + "tf-acc-test-", + } + + input := &glue.GetConnectionsInput{ + CatalogId: aws.String(catalogID), + } + err = conn.GetConnectionsPages(input, func(page *glue.GetConnectionsOutput, lastPage bool) bool { + if len(page.ConnectionList) == 0 { + log.Printf("[INFO] No Glue Connections to sweep") + return false + } + for _, connection := range page.ConnectionList { + skip := true + name := connection.Name + for _, prefix := range prefixes { + if strings.HasPrefix(*name, prefix) { + skip = false + break + } + } + if skip { + log.Printf("[INFO] Skipping Glue Connection: %s", *name) + continue + } + + log.Printf("[INFO] Deleting Glue Connection: %s", *name) + err := deleteGlueConnection(conn, catalogID, *name) + if err != nil { + log.Printf("[ERROR] Failed to delete Glue Connection %s: %s", *name, err) + } + } + return !lastPage + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Glue Connection sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Glue Connections: %s", err) + } + + return nil +} + +func TestAccAWSGlueConnection_Basic(t *testing.T) { + var connection glue.Connection + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_connection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueConnectionConfig_Required(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueConnectionExists(resourceName, &connection), + 
resource.TestCheckResourceAttr(resourceName, "connection_properties.%", "3"), + resource.TestCheckResourceAttr(resourceName, "connection_properties.JDBC_CONNECTION_URL", "jdbc:mysql://terraformacctesting.com/testdatabase"), + resource.TestCheckResourceAttr(resourceName, "connection_properties.PASSWORD", "testpassword"), + resource.TestCheckResourceAttr(resourceName, "connection_properties.USERNAME", "testusername"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "physical_connection_requirements.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueConnection_Description(t *testing.T) { + var connection glue.Connection + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_connection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueConnectionConfig_Description(rName, "First Description"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueConnectionExists(resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "description", "First Description"), + ), + }, + { + Config: testAccAWSGlueConnectionConfig_Description(rName, "Second Description"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueConnectionExists(resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "description", "Second Description"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueConnection_MatchCriteria(t *testing.T) { + var connection glue.Connection + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_connection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueConnectionConfig_MatchCriteria_First(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueConnectionExists(resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "match_criteria.#", "4"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.0", "criteria1"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.1", "criteria2"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.2", "criteria3"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.3", "criteria4"), + ), + }, + { + Config: testAccAWSGlueConnectionConfig_MatchCriteria_Second(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueConnectionExists(resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "match_criteria.#", "1"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.0", "criteria1"), + ), + }, + { + Config: testAccAWSGlueConnectionConfig_MatchCriteria_Third(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueConnectionExists(resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "match_criteria.#", "3"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.0", "criteria2"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.1", "criteria3"), + 
resource.TestCheckResourceAttr(resourceName, "match_criteria.2", "criteria4"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueConnection_PhysicalConnectionRequirements(t *testing.T) { + var connection glue.Connection + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_connection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueConnectionConfig_PhysicalConnectionRequirements(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueConnectionExists(resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "connection_properties.%", "3"), + resource.TestCheckResourceAttrSet(resourceName, "connection_properties.JDBC_CONNECTION_URL"), + resource.TestCheckResourceAttrSet(resourceName, "connection_properties.PASSWORD"), + resource.TestCheckResourceAttrSet(resourceName, "connection_properties.USERNAME"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "physical_connection_requirements.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "physical_connection_requirements.0.availability_zone"), + resource.TestCheckResourceAttr(resourceName, "physical_connection_requirements.0.security_group_id_list.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "physical_connection_requirements.0.subnet_id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSGlueConnectionExists(resourceName string, connection *glue.Connection) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Glue Connection ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).glueconn + catalogID, connectionName, err := decodeGlueConnectionID(rs.Primary.ID) + if err != nil { + return err + } + + output, err := conn.GetConnection(&glue.GetConnectionInput{ + CatalogId: aws.String(catalogID), + Name: aws.String(connectionName), + }) + if err != nil { + return err + } + + if output.Connection == nil { + return fmt.Errorf("Glue Connection (%s) not found", rs.Primary.ID) + } + + if aws.StringValue(output.Connection.Name) == connectionName { + *connection = *output.Connection + return nil + } + + return fmt.Errorf("Glue Connection (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSGlueConnectionDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_glue_connection" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).glueconn + catalogID, connectionName, err := decodeGlueConnectionID(rs.Primary.ID) + if err != nil { + return err + } + + output, err := conn.GetConnection(&glue.GetConnectionInput{ + CatalogId: aws.String(catalogID), + Name: aws.String(connectionName), + }) + + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return nil + } + + } + + connection := output.Connection + if connection != nil && aws.StringValue(connection.Name) == connectionName { + return fmt.Errorf("Glue 
Connection %s still exists", rs.Primary.ID) + } + + return err + } + + return nil +} + +func testAccAWSGlueConnectionConfig_Description(rName, description string) string { + return fmt.Sprintf(` +resource "aws_glue_connection" "test" { + connection_properties = { + JDBC_CONNECTION_URL = "jdbc:mysql://terraformacctesting.com/testdatabase" + PASSWORD = "testpassword" + USERNAME = "testusername" + } + + description = "%[1]s" + name = "%[2]s" +} +`, description, rName) +} + +func testAccAWSGlueConnectionConfig_MatchCriteria_First(rName string) string { + return fmt.Sprintf(` +resource "aws_glue_connection" "test" { + connection_properties = { + JDBC_CONNECTION_URL = "jdbc:mysql://terraformacctesting.com/testdatabase" + PASSWORD = "testpassword" + USERNAME = "testusername" + } + + match_criteria = ["criteria1", "criteria2", "criteria3", "criteria4"] + name = "%s" +} +`, rName) +} + +func testAccAWSGlueConnectionConfig_MatchCriteria_Second(rName string) string { + return fmt.Sprintf(` +resource "aws_glue_connection" "test" { + connection_properties = { + JDBC_CONNECTION_URL = "jdbc:mysql://terraformacctesting.com/testdatabase" + PASSWORD = "testpassword" + USERNAME = "testusername" + } + + match_criteria = ["criteria1"] + name = "%s" +} +`, rName) +} + +func testAccAWSGlueConnectionConfig_MatchCriteria_Third(rName string) string { + return fmt.Sprintf(` +resource "aws_glue_connection" "test" { + connection_properties = { + JDBC_CONNECTION_URL = "jdbc:mysql://terraformacctesting.com/testdatabase" + PASSWORD = "testpassword" + USERNAME = "testusername" + } + + match_criteria = ["criteria2", "criteria3", "criteria4"] + name = "%s" +} +`, rName) +} + +func testAccAWSGlueConnectionConfig_PhysicalConnectionRequirements(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-glue-connection-base" + } +} + +resource "aws_security_group" "test" { + name = "%[1]s" + vpc_id = "${aws_vpc.test.id}" + + ingress { + from_port = 1 + protocol = "tcp" + self = true + to_port = 65535 + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "10.0.${count.index}.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "terraform-testacc-glue-connection-base" + } +} + +resource "aws_db_subnet_group" "test" { + name = "%[1]s" + subnet_ids = ["${aws_subnet.test.0.id}", "${aws_subnet.test.1.id}"] +} + +resource "aws_rds_cluster" "test" { + cluster_identifier = "%[1]s" + database_name = "gluedatabase" + db_cluster_parameter_group_name = "default.aurora-mysql5.7" + db_subnet_group_name = "${aws_db_subnet_group.test.name}" + engine = "aurora-mysql" + master_password = "gluepassword" + master_username = "glueusername" + skip_final_snapshot = true + vpc_security_group_ids = ["${aws_security_group.test.id}"] +} + +resource "aws_rds_cluster_instance" "test" { + cluster_identifier = "${aws_rds_cluster.test.id}" + engine = "aurora-mysql" + identifier = "%[1]s" + instance_class = "db.t2.medium" +} + +resource "aws_glue_connection" "test" { + connection_properties = { + JDBC_CONNECTION_URL = "jdbc:mysql://${aws_rds_cluster.test.endpoint}/${aws_rds_cluster.test.database_name}" + PASSWORD = "${aws_rds_cluster.test.master_password}" + USERNAME = "${aws_rds_cluster.test.master_username}" + } + + name = "%[1]s" + + physical_connection_requirements { + availability_zone = 
"${aws_subnet.test.0.availability_zone}" + security_group_id_list = ["${aws_security_group.test.id}"] + subnet_id = "${aws_subnet.test.0.id}" + } +} +`, rName) +} + +func testAccAWSGlueConnectionConfig_Required(rName string) string { + return fmt.Sprintf(` +resource "aws_glue_connection" "test" { + connection_properties = { + JDBC_CONNECTION_URL = "jdbc:mysql://terraformacctesting.com/testdatabase" + PASSWORD = "testpassword" + USERNAME = "testusername" + } + + name = "%s" +} +`, rName) +} diff --git a/aws/resource_aws_glue_crawler.go b/aws/resource_aws_glue_crawler.go new file mode 100644 index 00000000000..573e0c61ce7 --- /dev/null +++ b/aws/resource_aws_glue_crawler.go @@ -0,0 +1,522 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/structure" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsGlueCrawler() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsGlueCrawlerCreate, + Read: resourceAwsGlueCrawlerRead, + Update: resourceAwsGlueCrawlerUpdate, + Delete: resourceAwsGlueCrawlerDelete, + Exists: resourceAwsGlueCrawlerExists, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + "database_name": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + "role": { + Type: schema.TypeString, + Required: true, + // Glue API always returns name + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + newARN, err := arn.Parse(new) + + if err != nil { + return false + } + + return old == strings.TrimPrefix(newARN.Resource, "role/") + }, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "schedule": { + Type: schema.TypeString, + Optional: true, + }, + "classifiers": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "schema_change_policy": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == "1" && new == "0" { + return true + } + return false + }, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "delete_behavior": { + Type: schema.TypeString, + Optional: true, + Default: glue.DeleteBehaviorDeprecateInDatabase, + ValidateFunc: validation.StringInSlice([]string{ + glue.DeleteBehaviorDeleteFromDatabase, + glue.DeleteBehaviorDeprecateInDatabase, + glue.DeleteBehaviorLog, + }, false), + }, + "update_behavior": { + Type: schema.TypeString, + Optional: true, + Default: glue.UpdateBehaviorUpdateInDatabase, + ValidateFunc: validation.StringInSlice([]string{ + glue.UpdateBehaviorLog, + glue.UpdateBehaviorUpdateInDatabase, + }, false), + }, + }, + }, + }, + "table_prefix": { + Type: schema.TypeString, + Optional: true, + }, + "s3_target": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "path": { + Type: schema.TypeString, + Required: true, + }, + "exclusions": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "dynamodb_target": { + 
Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "path": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "jdbc_target": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "connection_name": { + Type: schema.TypeString, + Required: true, + }, + "path": { + Type: schema.TypeString, + Required: true, + }, + "exclusions": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "configuration": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: suppressEquivalentJsonDiffs, + StateFunc: func(v interface{}) string { + json, _ := structure.NormalizeJsonString(v) + return json + }, + ValidateFunc: validation.ValidateJsonString, + }, + }, + } +} + +func resourceAwsGlueCrawlerCreate(d *schema.ResourceData, meta interface{}) error { + glueConn := meta.(*AWSClient).glueconn + name := d.Get("name").(string) + + crawlerInput, err := createCrawlerInput(name, d) + if err != nil { + return err + } + + // Retry for IAM eventual consistency + err = resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err = glueConn.CreateCrawler(crawlerInput) + if err != nil { + if isAWSErr(err, glue.ErrCodeInvalidInputException, "Service is unable to assume role") { + return resource.RetryableError(err) + } + // InvalidInputException: Unable to retrieve connection tf-acc-test-8656357591012534997: User: arn:aws:sts::*******:assumed-role/tf-acc-test-8656357591012534997/AWS-Crawler is not authorized to perform: glue:GetConnection on resource: * (Service: AmazonDataCatalog; Status Code: 400; Error Code: AccessDeniedException; Request ID: 4d72b66f-9c75-11e8-9faf-5b526c7be968) + if isAWSErr(err, glue.ErrCodeInvalidInputException, "is not authorized") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + + if err != nil { + return fmt.Errorf("error creating Glue crawler: %s", err) + } + d.SetId(name) + + return resourceAwsGlueCrawlerRead(d, meta) +} + +func createCrawlerInput(crawlerName string, d *schema.ResourceData) (*glue.CreateCrawlerInput, error) { + crawlerTargets, err := expandGlueCrawlerTargets(d) + if err != nil { + return nil, err + } + crawlerInput := &glue.CreateCrawlerInput{ + Name: aws.String(crawlerName), + DatabaseName: aws.String(d.Get("database_name").(string)), + Role: aws.String(d.Get("role").(string)), + Targets: crawlerTargets, + } + if description, ok := d.GetOk("description"); ok { + crawlerInput.Description = aws.String(description.(string)) + } + if schedule, ok := d.GetOk("schedule"); ok { + crawlerInput.Schedule = aws.String(schedule.(string)) + } + if classifiers, ok := d.GetOk("classifiers"); ok { + crawlerInput.Classifiers = expandStringList(classifiers.([]interface{})) + } + + crawlerInput.SchemaChangePolicy = expandGlueSchemaChangePolicy(d.Get("schema_change_policy").([]interface{})) + + if tablePrefix, ok := d.GetOk("table_prefix"); ok { + crawlerInput.TablePrefix = aws.String(tablePrefix.(string)) + } + if configuration, ok := d.GetOk("configuration"); ok { + crawlerInput.Configuration = aws.String(configuration.(string)) + } + + if v, ok := d.GetOk("configuration"); ok { + configuration, err := structure.NormalizeJsonString(v) + if err != nil { + return nil, fmt.Errorf("Configuration contains an invalid JSON: %v", err) + } + crawlerInput.Configuration = aws.String(configuration) + } + + 
return crawlerInput, nil +} + +func expandGlueSchemaChangePolicy(v []interface{}) *glue.SchemaChangePolicy { + if len(v) == 0 { + return nil + } + + schemaPolicy := &glue.SchemaChangePolicy{} + + member := v[0].(map[string]interface{}) + + if updateBehavior, ok := member["update_behavior"]; ok && updateBehavior.(string) != "" { + schemaPolicy.UpdateBehavior = aws.String(updateBehavior.(string)) + } + + if deleteBehavior, ok := member["delete_behavior"]; ok && deleteBehavior.(string) != "" { + schemaPolicy.DeleteBehavior = aws.String(deleteBehavior.(string)) + } + return schemaPolicy +} + +func expandGlueCrawlerTargets(d *schema.ResourceData) (*glue.CrawlerTargets, error) { + crawlerTargets := &glue.CrawlerTargets{} + + dynamodbTargets, dynamodbTargetsOk := d.GetOk("dynamodb_target") + jdbcTargets, jdbcTargetsOk := d.GetOk("jdbc_target") + s3Targets, s3TargetsOk := d.GetOk("s3_target") + if !dynamodbTargetsOk && !jdbcTargetsOk && !s3TargetsOk { + return nil, fmt.Errorf("One of the following configurations is required: dynamodb_target, jdbc_target, s3_target") + } + + log.Print("[DEBUG] Creating crawler target") + crawlerTargets.DynamoDBTargets = expandGlueDynamoDBTargets(dynamodbTargets.([]interface{})) + crawlerTargets.JdbcTargets = expandGlueJdbcTargets(jdbcTargets.([]interface{})) + crawlerTargets.S3Targets = expandGlueS3Targets(s3Targets.([]interface{})) + + return crawlerTargets, nil +} + +func expandGlueDynamoDBTargets(targets []interface{}) []*glue.DynamoDBTarget { + if len(targets) < 1 { + return []*glue.DynamoDBTarget{} + } + + perms := make([]*glue.DynamoDBTarget, len(targets), len(targets)) + for i, rawCfg := range targets { + cfg := rawCfg.(map[string]interface{}) + perms[i] = expandGlueDynamoDBTarget(cfg) + } + return perms +} + +func expandGlueDynamoDBTarget(cfg map[string]interface{}) *glue.DynamoDBTarget { + target := &glue.DynamoDBTarget{ + Path: aws.String(cfg["path"].(string)), + } + + return target +} + +func expandGlueS3Targets(targets []interface{}) []*glue.S3Target { + if len(targets) < 1 { + return []*glue.S3Target{} + } + + perms := make([]*glue.S3Target, len(targets), len(targets)) + for i, rawCfg := range targets { + cfg := rawCfg.(map[string]interface{}) + perms[i] = expandGlueS3Target(cfg) + } + return perms +} + +func expandGlueS3Target(cfg map[string]interface{}) *glue.S3Target { + target := &glue.S3Target{ + Path: aws.String(cfg["path"].(string)), + } + + if exclusions, ok := cfg["exclusions"]; ok { + target.Exclusions = expandStringList(exclusions.([]interface{})) + } + return target +} + +func expandGlueJdbcTargets(targets []interface{}) []*glue.JdbcTarget { + if len(targets) < 1 { + return []*glue.JdbcTarget{} + } + + perms := make([]*glue.JdbcTarget, len(targets), len(targets)) + for i, rawCfg := range targets { + cfg := rawCfg.(map[string]interface{}) + perms[i] = expandGlueJdbcTarget(cfg) + } + return perms +} + +func expandGlueJdbcTarget(cfg map[string]interface{}) *glue.JdbcTarget { + target := &glue.JdbcTarget{ + Path: aws.String(cfg["path"].(string)), + ConnectionName: aws.String(cfg["connection_name"].(string)), + } + + if exclusions, ok := cfg["exclusions"]; ok { + target.Exclusions = expandStringList(exclusions.([]interface{})) + } + return target +} + +func resourceAwsGlueCrawlerUpdate(d *schema.ResourceData, meta interface{}) error { + glueConn := meta.(*AWSClient).glueconn + name := d.Get("name").(string) + + crawlerInput, err := createCrawlerInput(name, d) + if err != nil { + return err + } + updateCrawlerInput := 
glue.UpdateCrawlerInput(*crawlerInput) + + // Retry for IAM eventual consistency + err = resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := glueConn.UpdateCrawler(&updateCrawlerInput) + if err != nil { + if isAWSErr(err, glue.ErrCodeInvalidInputException, "Service is unable to assume role") { + return resource.RetryableError(err) + } + // InvalidInputException: Unable to retrieve connection tf-acc-test-8656357591012534997: User: arn:aws:sts::*******:assumed-role/tf-acc-test-8656357591012534997/AWS-Crawler is not authorized to perform: glue:GetConnection on resource: * (Service: AmazonDataCatalog; Status Code: 400; Error Code: AccessDeniedException; Request ID: 4d72b66f-9c75-11e8-9faf-5b526c7be968) + if isAWSErr(err, glue.ErrCodeInvalidInputException, "is not authorized") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + + if err != nil { + return fmt.Errorf("error updating Glue crawler: %s", err) + } + + return resourceAwsGlueCrawlerRead(d, meta) +} + +func resourceAwsGlueCrawlerRead(d *schema.ResourceData, meta interface{}) error { + glueConn := meta.(*AWSClient).glueconn + + input := &glue.GetCrawlerInput{ + Name: aws.String(d.Id()), + } + + crawlerOutput, err := glueConn.GetCrawler(input) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + log.Printf("[WARN] Glue Crawler (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return fmt.Errorf("error reading Glue crawler: %s", err.Error()) + } + + if crawlerOutput.Crawler == nil { + log.Printf("[WARN] Glue Crawler (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("name", crawlerOutput.Crawler.Name) + d.Set("database_name", crawlerOutput.Crawler.DatabaseName) + d.Set("role", crawlerOutput.Crawler.Role) + d.Set("configuration", crawlerOutput.Crawler.Configuration) + d.Set("description", crawlerOutput.Crawler.Description) + d.Set("schedule", "") + if crawlerOutput.Crawler.Schedule != nil { + d.Set("schedule", crawlerOutput.Crawler.Schedule.ScheduleExpression) + } + if err := d.Set("classifiers", flattenStringList(crawlerOutput.Crawler.Classifiers)); err != nil { + return fmt.Errorf("error setting classifiers: %s", err) + } + d.Set("table_prefix", crawlerOutput.Crawler.TablePrefix) + + if crawlerOutput.Crawler.SchemaChangePolicy != nil { + schemaPolicy := map[string]string{ + "delete_behavior": aws.StringValue(crawlerOutput.Crawler.SchemaChangePolicy.DeleteBehavior), + "update_behavior": aws.StringValue(crawlerOutput.Crawler.SchemaChangePolicy.UpdateBehavior), + } + + if err := d.Set("schema_change_policy", []map[string]string{schemaPolicy}); err != nil { + return fmt.Errorf("error setting schema_change_policy: %s", schemaPolicy) + } + } + + if crawlerOutput.Crawler.Targets != nil { + if err := d.Set("dynamodb_target", flattenGlueDynamoDBTargets(crawlerOutput.Crawler.Targets.DynamoDBTargets)); err != nil { + return fmt.Errorf("error setting dynamodb_target: %s", err) + } + + if err := d.Set("jdbc_target", flattenGlueJdbcTargets(crawlerOutput.Crawler.Targets.JdbcTargets)); err != nil { + return fmt.Errorf("error setting jdbc_target: %s", err) + } + + if err := d.Set("s3_target", flattenGlueS3Targets(crawlerOutput.Crawler.Targets.S3Targets)); err != nil { + return fmt.Errorf("error setting s3_target: %s", err) + } + } + + return nil +} + +func flattenGlueS3Targets(s3Targets []*glue.S3Target) []map[string]interface{} { + result := make([]map[string]interface{}, 0) + + for _, 
s3Target := range s3Targets { + attrs := make(map[string]interface{}) + attrs["exclusions"] = flattenStringList(s3Target.Exclusions) + attrs["path"] = aws.StringValue(s3Target.Path) + + result = append(result, attrs) + } + return result +} + +func flattenGlueDynamoDBTargets(dynamodbTargets []*glue.DynamoDBTarget) []map[string]interface{} { + result := make([]map[string]interface{}, 0) + + for _, dynamodbTarget := range dynamodbTargets { + attrs := make(map[string]interface{}) + attrs["path"] = aws.StringValue(dynamodbTarget.Path) + + result = append(result, attrs) + } + return result +} + +func flattenGlueJdbcTargets(jdbcTargets []*glue.JdbcTarget) []map[string]interface{} { + result := make([]map[string]interface{}, 0) + + for _, jdbcTarget := range jdbcTargets { + attrs := make(map[string]interface{}) + attrs["connection_name"] = aws.StringValue(jdbcTarget.ConnectionName) + attrs["exclusions"] = flattenStringList(jdbcTarget.Exclusions) + attrs["path"] = aws.StringValue(jdbcTarget.Path) + + result = append(result, attrs) + } + return result +} + +func resourceAwsGlueCrawlerDelete(d *schema.ResourceData, meta interface{}) error { + glueConn := meta.(*AWSClient).glueconn + + log.Printf("[DEBUG] deleting Glue crawler: %s", d.Id()) + _, err := glueConn.DeleteCrawler(&glue.DeleteCrawlerInput{ + Name: aws.String(d.Id()), + }) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return nil + } + return fmt.Errorf("error deleting Glue crawler: %s", err.Error()) + } + return nil +} + +func resourceAwsGlueCrawlerExists(d *schema.ResourceData, meta interface{}) (bool, error) { + glueConn := meta.(*AWSClient).glueconn + + input := &glue.GetCrawlerInput{ + Name: aws.String(d.Id()), + } + + _, err := glueConn.GetCrawler(input) + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return false, nil + } + return false, err + } + return true, nil +} diff --git a/aws/resource_aws_glue_crawler_test.go b/aws/resource_aws_glue_crawler_test.go new file mode 100644 index 00000000000..63181c36cf2 --- /dev/null +++ b/aws/resource_aws_glue_crawler_test.go @@ -0,0 +1,1355 @@ +package aws + +import ( + "bytes" + "encoding/json" + "fmt" + "log" + "strconv" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_glue_crawler", &resource.Sweeper{ + Name: "aws_glue_crawler", + F: testSweepGlueCrawlers, + }) +} + +func testSweepGlueCrawlers(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).glueconn + + input := &glue.GetCrawlersInput{} + err = conn.GetCrawlersPages(input, func(page *glue.GetCrawlersOutput, lastPage bool) bool { + if len(page.Crawlers) == 0 { + log.Printf("[INFO] No Glue Crawlers to sweep") + return false + } + for _, crawler := range page.Crawlers { + name := aws.StringValue(crawler.Name) + if !strings.HasPrefix(name, "tf-acc-test-") { + log.Printf("[INFO] Skipping Glue Crawler: %s", name) + continue + } + + log.Printf("[INFO] Deleting Glue Crawler: %s", name) + _, err := conn.DeleteCrawler(&glue.DeleteCrawlerInput{ + Name: aws.String(name), + }) + if err != nil { + log.Printf("[ERROR] Failed to delete Glue Crawler %s: %s", name, err) + } + } + 
return !lastPage + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Glue Crawler sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Glue Crawlers: %s", err) + } + + return nil +} + +func TestAccAWSGlueCrawler_DynamodbTarget(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_DynamodbTarget(rName, "table1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "configuration", ""), + resource.TestCheckResourceAttr(resourceName, "database_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.0.path", "table1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "role", rName), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "schedule", ""), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", "DEPRECATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", "UPDATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "table_prefix", ""), + ), + }, + { + Config: testAccGlueCrawlerConfig_DynamodbTarget(rName, "table2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "configuration", ""), + resource.TestCheckResourceAttr(resourceName, "database_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.0.path", "table2"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "role", rName), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "schedule", ""), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", "DEPRECATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", "UPDATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "table_prefix", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_JdbcTarget(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" 
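+
+	// The steps below create the crawler with one jdbc_target path, update it
+	// in place with a second path, then verify that the resource imports cleanly.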
+ + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_JdbcTarget(rName, "database-name/%"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "configuration", ""), + resource.TestCheckResourceAttr(resourceName, "database_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.path", "database-name/%"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "role", rName), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "schedule", ""), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", "DEPRECATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", "UPDATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "table_prefix", ""), + ), + }, + { + Config: testAccGlueCrawlerConfig_JdbcTarget(rName, "database-name/table-name"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "configuration", ""), + resource.TestCheckResourceAttr(resourceName, "database_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.path", "database-name/table-name"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "role", rName), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "schedule", ""), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", "DEPRECATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", "UPDATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "table_prefix", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_JdbcTarget_Exclusions(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, 
resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_JdbcTarget_Exclusions(rName, "exclusion1", "exclusion2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "2"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.0", "exclusion1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.1", "exclusion2"), + ), + }, + { + Config: testAccGlueCrawlerConfig_JdbcTarget_Exclusions(rName, "exclusion1", ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.0", "exclusion1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_JdbcTarget_Multiple(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_JdbcTarget_Multiple(rName, "database-name/table1", "database-name/table2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "2"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.path", "database-name/table1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.1.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.1.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.1.path", "database-name/table2"), + ), + }, + { + Config: testAccGlueCrawlerConfig_JdbcTarget(rName, "database-name/table1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.path", "database-name/table1"), + ), + }, + { + Config: testAccGlueCrawlerConfig_JdbcTarget_Multiple(rName, "database-name/table1", "database-name/table2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "2"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.path", "database-name/table1"), + 
resource.TestCheckResourceAttr(resourceName, "jdbc_target.1.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.1.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.1.path", "database-name/table2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_S3Target(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_S3Target(rName, "s3://bucket1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "configuration", ""), + resource.TestCheckResourceAttr(resourceName, "database_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "role", rName), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.path", "s3://bucket1"), + resource.TestCheckResourceAttr(resourceName, "schedule", ""), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", "DEPRECATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", "UPDATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "table_prefix", ""), + ), + }, + { + Config: testAccGlueCrawlerConfig_S3Target(rName, "s3://bucket2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "configuration", ""), + resource.TestCheckResourceAttr(resourceName, "database_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "role", rName), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.path", "s3://bucket2"), + resource.TestCheckResourceAttr(resourceName, "schedule", ""), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", "DEPRECATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", "UPDATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "table_prefix", ""), + ), + }, + 
{ + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_S3Target_Exclusions(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_S3Target_Exclusions(rName, "exclusion1", "exclusion2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.#", "2"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.0", "exclusion1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.1", "exclusion2"), + ), + }, + { + Config: testAccGlueCrawlerConfig_S3Target_Exclusions(rName, "exclusion1", ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.#", "1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.0", "exclusion1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_S3Target_Multiple(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_S3Target_Multiple(rName, "s3://bucket1", "s3://bucket2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "2"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.path", "s3://bucket1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.1.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "s3_target.1.path", "s3://bucket2"), + ), + }, + { + Config: testAccGlueCrawlerConfig_S3Target(rName, "s3://bucket1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.path", "s3://bucket1"), + ), + }, + { + Config: testAccGlueCrawlerConfig_S3Target_Multiple(rName, "s3://bucket1", "s3://bucket2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "2"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "s3_target.0.path", "s3://bucket1"), + resource.TestCheckResourceAttr(resourceName, "s3_target.1.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "s3_target.1.path", 
"s3://bucket2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_recreates(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_S3Target(rName, "s3://bucket1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + ), + }, + { + // Simulate deleting the crawler outside Terraform + PreConfig: func() { + conn := testAccProvider.Meta().(*AWSClient).glueconn + input := &glue.DeleteCrawlerInput{ + Name: aws.String(rName), + } + _, err := conn.DeleteCrawler(input) + if err != nil { + t.Fatalf("error deleting Glue Crawler: %s", err) + } + }, + Config: testAccGlueCrawlerConfig_S3Target(rName, "s3://bucket1"), + ExpectNonEmptyPlan: true, + PlanOnly: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_Classifiers(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_Classifiers_Single(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "1"), + resource.TestCheckResourceAttr(resourceName, "classifiers.0", rName+"1"), + ), + }, + { + Config: testAccGlueCrawlerConfig_Classifiers_Multiple(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "2"), + resource.TestCheckResourceAttr(resourceName, "classifiers.0", rName+"1"), + resource.TestCheckResourceAttr(resourceName, "classifiers.1", rName+"2"), + ), + }, + { + Config: testAccGlueCrawlerConfig_Classifiers_Single(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "1"), + resource.TestCheckResourceAttr(resourceName, "classifiers.0", rName+"1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_Configuration(t *testing.T) { + var crawler glue.Crawler + configuration1 := `{"Version": 1.0, "CrawlerOutput": {"Tables": { "AddOrUpdateBehavior": "MergeNewColumns" }}}` + configuration2 := `{"Version": 1.0, "CrawlerOutput": {"Partitions": { "AddOrUpdateBehavior": "InheritFromTable" }}}` + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_Configuration(rName, configuration1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + testAccCheckAWSGlueCrawlerConfiguration(&crawler, configuration1), + ), + }, + { + Config: 
testAccGlueCrawlerConfig_Configuration(rName, configuration2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + testAccCheckAWSGlueCrawlerConfiguration(&crawler, configuration2), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_Description(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_Description(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + Config: testAccGlueCrawlerConfig_Description(rName, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_Role_ARN_NoPath(t *testing.T) { + var crawler glue.Crawler + iamRoleResourceName := "aws_iam_role.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_Role_ARN_NoPath(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttrPair(resourceName, "role", iamRoleResourceName, "name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_Role_ARN_Path(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_Role_ARN_Path(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "role", fmt.Sprintf("path/%s", rName)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_Role_Name_Path(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_Role_Name_Path(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "role", fmt.Sprintf("path/%s", rName)), + ), + }, + { + ResourceName: 
resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_Schedule(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_Schedule(rName, "cron(0 1 * * ? *)"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "schedule", "cron(0 1 * * ? *)"), + ), + }, + { + Config: testAccGlueCrawlerConfig_Schedule(rName, "cron(0 2 * * ? *)"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "schedule", "cron(0 2 * * ? *)"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_SchemaChangePolicy(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_SchemaChangePolicy(rName, glue.DeleteBehaviorDeleteFromDatabase, glue.UpdateBehaviorUpdateInDatabase), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", glue.DeleteBehaviorDeleteFromDatabase), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", glue.UpdateBehaviorUpdateInDatabase), + ), + }, + { + Config: testAccGlueCrawlerConfig_SchemaChangePolicy(rName, glue.DeleteBehaviorLog, glue.UpdateBehaviorLog), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", glue.DeleteBehaviorLog), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", glue.UpdateBehaviorLog), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueCrawler_TablePrefix(t *testing.T) { + var crawler glue.Crawler + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_glue_crawler.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueCrawlerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGlueCrawlerConfig_TablePrefix(rName, "prefix1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "table_prefix", "prefix1"), + ), + }, + { + Config: testAccGlueCrawlerConfig_TablePrefix(rName, "prefix2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueCrawlerExists(resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, 
"table_prefix", "prefix2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSGlueCrawlerExists(resourceName string, crawler *glue.Crawler) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("no ID is set") + } + + glueConn := testAccProvider.Meta().(*AWSClient).glueconn + out, err := glueConn.GetCrawler(&glue.GetCrawlerInput{ + Name: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if out.Crawler == nil { + return fmt.Errorf("no Glue Crawler found") + } + + *crawler = *out.Crawler + + return nil + } +} + +func testAccCheckAWSGlueCrawlerDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_glue_crawler" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).glueconn + output, err := conn.GetCrawler(&glue.GetCrawlerInput{ + Name: aws.String(rs.Primary.ID), + }) + + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return nil + } + return err + } + + crawler := output.Crawler + if crawler != nil && aws.StringValue(crawler.Name) == rs.Primary.ID { + return fmt.Errorf("Glue Crawler %s still exists", rs.Primary.ID) + } + + return nil + } + + return nil +} + +func testAccCheckAWSGlueCrawlerConfiguration(crawler *glue.Crawler, acctestJSON string) resource.TestCheckFunc { + return func(s *terraform.State) error { + apiJSON := aws.StringValue(crawler.Configuration) + apiJSONBuffer := bytes.NewBufferString("") + if err := json.Compact(apiJSONBuffer, []byte(apiJSON)); err != nil { + return fmt.Errorf("unable to compact API configuration JSON: %s", err) + } + + acctestJSONBuffer := bytes.NewBufferString("") + if err := json.Compact(acctestJSONBuffer, []byte(acctestJSON)); err != nil { + return fmt.Errorf("unable to compact acceptance test configuration JSON: %s", err) + } + + if !jsonBytesEqual(apiJSONBuffer.Bytes(), acctestJSONBuffer.Bytes()) { + return fmt.Errorf("expected configuration JSON to match %v, received JSON: %v", acctestJSON, apiJSON) + } + return nil + } +} + +func testAccGlueCrawlerConfig_Base(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %q + assume_role_policy = < 0 { + action.Timeout = aws.Int64(int64(v.(int))) + } + + actions = append(actions, action) + } + + return actions +} + +func expandGlueConditions(l []interface{}) []*glue.Condition { + conditions := []*glue.Condition{} + + for _, mRaw := range l { + m := mRaw.(map[string]interface{}) + + condition := &glue.Condition{ + JobName: aws.String(m["job_name"].(string)), + LogicalOperator: aws.String(m["logical_operator"].(string)), + State: aws.String(m["state"].(string)), + } + + conditions = append(conditions, condition) + } + + return conditions +} + +func expandGluePredicate(l []interface{}) *glue.Predicate { + m := l[0].(map[string]interface{}) + + predicate := &glue.Predicate{ + Conditions: expandGlueConditions(m["conditions"].([]interface{})), + } + + if v, ok := m["logical"]; ok && v.(string) != "" { + predicate.Logical = aws.String(v.(string)) + } + + return predicate +} + +func flattenGlueActions(actions []*glue.Action) []interface{} { + l := []interface{}{} + + for _, action := range actions { + m := map[string]interface{}{ + "arguments": aws.StringValueMap(action.Arguments), + "job_name": 
aws.StringValue(action.JobName), + "timeout": int(aws.Int64Value(action.Timeout)), + } + l = append(l, m) + } + + return l +} + +func flattenGlueConditions(conditions []*glue.Condition) []interface{} { + l := []interface{}{} + + for _, condition := range conditions { + m := map[string]interface{}{ + "job_name": aws.StringValue(condition.JobName), + "logical_operator": aws.StringValue(condition.LogicalOperator), + "state": aws.StringValue(condition.State), + } + l = append(l, m) + } + + return l +} + +func flattenGluePredicate(predicate *glue.Predicate) []map[string]interface{} { + if predicate == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "conditions": flattenGlueConditions(predicate.Conditions), + "logical": aws.StringValue(predicate.Logical), + } + + return []map[string]interface{}{m} +} diff --git a/aws/resource_aws_glue_trigger_test.go b/aws/resource_aws_glue_trigger_test.go new file mode 100644 index 00000000000..2e6e624c403 --- /dev/null +++ b/aws/resource_aws_glue_trigger_test.go @@ -0,0 +1,416 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_glue_trigger", &resource.Sweeper{ + Name: "aws_glue_trigger", + F: testSweepGlueTriggers, + }) +} + +func testSweepGlueTriggers(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).glueconn + + prefixes := []string{ + "tf-acc-test-", + } + + input := &glue.GetTriggersInput{} + err = conn.GetTriggersPages(input, func(page *glue.GetTriggersOutput, lastPage bool) bool { + if page == nil || len(page.Triggers) == 0 { + log.Printf("[INFO] No Glue Triggers to sweep") + return false + } + for _, trigger := range page.Triggers { + skip := true + name := aws.StringValue(trigger.Name) + for _, prefix := range prefixes { + if strings.HasPrefix(name, prefix) { + skip = false + break + } + } + if skip { + log.Printf("[INFO] Skipping Glue Trigger: %s", name) + continue + } + + log.Printf("[INFO] Deleting Glue Trigger: %s", name) + err := deleteGlueJob(conn, name) + if err != nil { + log.Printf("[ERROR] Failed to delete Glue Trigger %s: %s", name, err) + } + } + return !lastPage + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Glue Trigger sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Glue Triggers: %s", err) + } + + return nil +} + +func TestAccAWSGlueTrigger_Basic(t *testing.T) { + var trigger glue.Trigger + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_trigger.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueTriggerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueTriggerConfig_OnDemand(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "actions.#", "1"), + resource.TestCheckResourceAttr(resourceName, "actions.0.job_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + 
resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "predicate.#", "0"), + resource.TestCheckResourceAttr(resourceName, "schedule", ""), + resource.TestCheckResourceAttr(resourceName, "type", "ON_DEMAND"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueTrigger_Description(t *testing.T) { + var trigger glue.Trigger + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_trigger.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueTriggerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueTriggerConfig_Description(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + Config: testAccAWSGlueTriggerConfig_Description(rName, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueTrigger_Enabled(t *testing.T) { + var trigger glue.Trigger + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_trigger.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueTriggerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueTriggerConfig_Enabled(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + ), + }, + { + Config: testAccAWSGlueTriggerConfig_Enabled(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "enabled", "false"), + ), + }, + { + Config: testAccAWSGlueTriggerConfig_Enabled(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueTrigger_Predicate(t *testing.T) { + var trigger glue.Trigger + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_trigger.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueTriggerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueTriggerConfig_Predicate(rName, "SUCCEEDED"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "predicate.#", "1"), + resource.TestCheckResourceAttr(resourceName, "predicate.0.conditions.#", "1"), + resource.TestCheckResourceAttr(resourceName, "predicate.0.conditions.0.job_name", rName), + resource.TestCheckResourceAttr(resourceName, 
"predicate.0.conditions.0.state", "SUCCEEDED"), + resource.TestCheckResourceAttr(resourceName, "type", "CONDITIONAL"), + ), + }, + { + Config: testAccAWSGlueTriggerConfig_Predicate(rName, "FAILED"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "predicate.#", "1"), + resource.TestCheckResourceAttr(resourceName, "predicate.0.conditions.#", "1"), + resource.TestCheckResourceAttr(resourceName, "predicate.0.conditions.0.job_name", rName), + resource.TestCheckResourceAttr(resourceName, "predicate.0.conditions.0.state", "FAILED"), + resource.TestCheckResourceAttr(resourceName, "type", "CONDITIONAL"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSGlueTrigger_Schedule(t *testing.T) { + var trigger glue.Trigger + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_trigger.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueTriggerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueTriggerConfig_Schedule(rName, "cron(1 2 * * ? *)"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "schedule", "cron(1 2 * * ? *)"), + ), + }, + { + Config: testAccAWSGlueTriggerConfig_Schedule(rName, "cron(2 3 * * ? *)"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueTriggerExists(resourceName, &trigger), + resource.TestCheckResourceAttr(resourceName, "schedule", "cron(2 3 * * ? *)"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSGlueTriggerExists(resourceName string, trigger *glue.Trigger) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Glue Trigger ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).glueconn + + output, err := conn.GetTrigger(&glue.GetTriggerInput{ + Name: aws.String(rs.Primary.ID), + }) + if err != nil { + return err + } + + if output.Trigger == nil { + return fmt.Errorf("Glue Trigger (%s) not found", rs.Primary.ID) + } + + if aws.StringValue(output.Trigger.Name) == rs.Primary.ID { + *trigger = *output.Trigger + return nil + } + + return fmt.Errorf("Glue Trigger (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSGlueTriggerDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_glue_trigger" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).glueconn + + output, err := conn.GetTrigger(&glue.GetTriggerInput{ + Name: aws.String(rs.Primary.ID), + }) + + if err != nil { + if isAWSErr(err, glue.ErrCodeEntityNotFoundException, "") { + return nil + } + + } + + trigger := output.Trigger + if trigger != nil && aws.StringValue(trigger.Name) == rs.Primary.ID { + return fmt.Errorf("Glue Trigger %s still exists", rs.Primary.ID) + } + + return err + } + + return nil +} + +func testAccAWSGlueTriggerConfig_Description(rName, description string) string { + return fmt.Sprintf(` +%s + +resource "aws_glue_trigger" "test" { + description = "%s" + name = "%s" + type = "ON_DEMAND" + + actions { + job_name = 
"${aws_glue_job.test.name}" + } +} +`, testAccAWSGlueJobConfig_Required(rName), description, rName) +} + +func testAccAWSGlueTriggerConfig_Enabled(rName string, enabled bool) string { + return fmt.Sprintf(` +%s + +resource "aws_glue_trigger" "test" { + enabled = %t + name = "%s" + schedule = "cron(15 12 * * ? *)" + type = "SCHEDULED" + + actions { + job_name = "${aws_glue_job.test.name}" + } +} +`, testAccAWSGlueJobConfig_Required(rName), enabled, rName) +} + +func testAccAWSGlueTriggerConfig_OnDemand(rName string) string { + return fmt.Sprintf(` +%s + +resource "aws_glue_trigger" "test" { + name = "%s" + type = "ON_DEMAND" + + actions { + job_name = "${aws_glue_job.test.name}" + } +} +`, testAccAWSGlueJobConfig_Required(rName), rName) +} + +func testAccAWSGlueTriggerConfig_Predicate(rName, state string) string { + return fmt.Sprintf(` +%s + +resource "aws_glue_job" "test2" { + name = "%s2" + role_arn = "${aws_iam_role.test.arn}" + + command { + script_location = "testscriptlocation" + } + + depends_on = ["aws_iam_role_policy_attachment.test"] +} + +resource "aws_glue_trigger" "test" { + name = "%s" + type = "CONDITIONAL" + + actions { + job_name = "${aws_glue_job.test2.name}" + } + + predicate { + conditions { + job_name = "${aws_glue_job.test.name}" + state = "%s" + } + } +} +`, testAccAWSGlueJobConfig_Required(rName), rName, rName, state) +} + +func testAccAWSGlueTriggerConfig_Schedule(rName, schedule string) string { + return fmt.Sprintf(` +%s + +resource "aws_glue_trigger" "test" { + name = "%s" + schedule = "%s" + type = "SCHEDULED" + + actions { + job_name = "${aws_glue_job.test.name}" + } +} +`, testAccAWSGlueJobConfig_Required(rName), rName, schedule) +} diff --git a/aws/resource_aws_guardduty_detector_test.go b/aws/resource_aws_guardduty_detector_test.go index 228d7f1f87b..c19401e3bba 100644 --- a/aws/resource_aws_guardduty_detector_test.go +++ b/aws/resource_aws_guardduty_detector_test.go @@ -52,11 +52,11 @@ func testAccAwsGuardDutyDetector_import(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAwsGuardDutyDetectorDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGuardDutyDetectorConfig_basic1, }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_guardduty_ipset.go b/aws/resource_aws_guardduty_ipset.go index 6627936b679..b6e0ab64c8d 100644 --- a/aws/resource_aws_guardduty_ipset.go +++ b/aws/resource_aws_guardduty_ipset.go @@ -86,7 +86,7 @@ func resourceAwsGuardDutyIpsetCreate(d *schema.ResourceData, meta interface{}) e _, err = stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for GuardDuty IpSet status to be \"%s\" or \"%s\": %s", guardduty.IpSetStatusActive, guardduty.IpSetStatusInactive, err) + return fmt.Errorf("Error waiting for GuardDuty IpSet status to be \"%s\" or \"%s\": %s", guardduty.IpSetStatusActive, guardduty.IpSetStatusInactive, err) } d.SetId(fmt.Sprintf("%s:%s", detectorID, *resp.IpSetId)) @@ -186,7 +186,7 @@ func resourceAwsGuardDutyIpsetDelete(d *schema.ResourceData, meta interface{}) e _, err = stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for GuardDuty IpSet status to be \"%s\": %s", guardduty.IpSetStatusDeleted, err) + return fmt.Errorf("Error waiting for GuardDuty IpSet status to be \"%s\": %s", guardduty.IpSetStatusDeleted, err) } return nil diff --git a/aws/resource_aws_guardduty_ipset_test.go b/aws/resource_aws_guardduty_ipset_test.go index 
5373c0f2dc9..453d6b728dd 100644 --- a/aws/resource_aws_guardduty_ipset_test.go +++ b/aws/resource_aws_guardduty_ipset_test.go @@ -60,11 +60,11 @@ func testAccAwsGuardDutyIpset_import(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAwsGuardDutyIpsetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGuardDutyIpsetConfig_basic(bucketName, keyName, ipsetName, true), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_guardduty_member.go b/aws/resource_aws_guardduty_member.go index 90cfbdc83d0..49ca3beb9ca 100644 --- a/aws/resource_aws_guardduty_member.go +++ b/aws/resource_aws_guardduty_member.go @@ -4,9 +4,11 @@ import ( "fmt" "log" "strings" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/guardduty" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -14,6 +16,7 @@ func resourceAwsGuardDutyMember() *schema.Resource { return &schema.Resource{ Create: resourceAwsGuardDutyMemberCreate, Read: resourceAwsGuardDutyMemberRead, + Update: resourceAwsGuardDutyMemberUpdate, Delete: resourceAwsGuardDutyMemberDelete, Importer: &schema.ResourceImporter{ @@ -37,6 +40,28 @@ func resourceAwsGuardDutyMember() *schema.Resource { Required: true, ForceNew: true, }, + "relationship_status": { + Type: schema.TypeString, + Computed: true, + }, + "invite": { + Type: schema.TypeBool, + Optional: true, + }, + "disable_email_notification": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + "invitation_message": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(60 * time.Second), + Update: schema.DefaultTimeout(60 * time.Second), }, } } @@ -59,8 +84,31 @@ func resourceAwsGuardDutyMemberCreate(d *schema.ResourceData, meta interface{}) if err != nil { return fmt.Errorf("Creating GuardDuty Member failed: %s", err.Error()) } + d.SetId(fmt.Sprintf("%s:%s", detectorID, accountID)) + if !d.Get("invite").(bool) { + return resourceAwsGuardDutyMemberRead(d, meta) + } + + imi := &guardduty.InviteMembersInput{ + DetectorId: aws.String(detectorID), + AccountIds: []*string{aws.String(accountID)}, + DisableEmailNotification: aws.Bool(d.Get("disable_email_notification").(bool)), + Message: aws.String(d.Get("invitation_message").(string)), + } + + log.Printf("[INFO] Inviting GuardDuty Member: %s", input) + _, err = conn.InviteMembers(imi) + if err != nil { + return fmt.Errorf("error inviting GuardDuty Member %q: %s", d.Id(), err) + } + + err = inviteGuardDutyMemberWaiter(accountID, detectorID, d.Timeout(schema.TimeoutUpdate), conn) + if err != nil { + return fmt.Errorf("error waiting for GuardDuty Member %q invite: %s", d.Id(), err) + } + return resourceAwsGuardDutyMemberRead(d, meta) } @@ -93,14 +141,72 @@ func resourceAwsGuardDutyMemberRead(d *schema.ResourceData, meta interface{}) er d.SetId("") return nil } + member := gmo.Members[0] d.Set("account_id", member.AccountId) d.Set("detector_id", detectorID) d.Set("email", member.Email) + status := aws.StringValue(member.RelationshipStatus) + d.Set("relationship_status", status) + + // https://docs.aws.amazon.com/guardduty/latest/ug/list-members.html + d.Set("invite", false) + if status == "Disabled" || status == "Enabled" || status == "Invited" || status == "EmailVerificationInProgress" { + d.Set("invite", true) + } + return nil } +func 
resourceAwsGuardDutyMemberUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).guarddutyconn + + accountID, detectorID, err := decodeGuardDutyMemberID(d.Id()) + if err != nil { + return err + } + + if d.HasChange("invite") { + if d.Get("invite").(bool) { + input := &guardduty.InviteMembersInput{ + DetectorId: aws.String(detectorID), + AccountIds: []*string{aws.String(accountID)}, + DisableEmailNotification: aws.Bool(d.Get("disable_email_notification").(bool)), + Message: aws.String(d.Get("invitation_message").(string)), + } + + log.Printf("[INFO] Inviting GuardDuty Member: %s", input) + output, err := conn.InviteMembers(input) + if err != nil { + return fmt.Errorf("error inviting GuardDuty Member %q: %s", d.Id(), err) + } + + // {"unprocessedAccounts":[{"result":"The request is rejected because the current account has already invited or is already the GuardDuty master of the given member account ID.","accountId":"067819342479"}]} + if len(output.UnprocessedAccounts) > 0 { + return fmt.Errorf("error inviting GuardDuty Member %q: %s", d.Id(), aws.StringValue(output.UnprocessedAccounts[0].Result)) + } + + err = inviteGuardDutyMemberWaiter(accountID, detectorID, d.Timeout(schema.TimeoutUpdate), conn) + if err != nil { + return fmt.Errorf("error waiting for GuardDuty Member %q invite: %s", d.Id(), err) + } + } else { + input := &guardduty.DisassociateMembersInput{ + AccountIds: []*string{aws.String(accountID)}, + DetectorId: aws.String(detectorID), + } + log.Printf("[INFO] Disassociating GuardDuty Member: %s", input) + _, err := conn.DisassociateMembers(input) + if err != nil { + return fmt.Errorf("error disassociating GuardDuty Member %q: %s", d.Id(), err) + } + } + } + + return resourceAwsGuardDutyMemberRead(d, meta) +} + func resourceAwsGuardDutyMemberDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).guarddutyconn @@ -122,6 +228,40 @@ func resourceAwsGuardDutyMemberDelete(d *schema.ResourceData, meta interface{}) return nil } +func inviteGuardDutyMemberWaiter(accountID, detectorID string, timeout time.Duration, conn *guardduty.GuardDuty) error { + input := guardduty.GetMembersInput{ + DetectorId: aws.String(detectorID), + AccountIds: []*string{aws.String(accountID)}, + } + + // wait until e-mail verification finishes + return resource.Retry(timeout, func() *resource.RetryError { + log.Printf("[DEBUG] Reading GuardDuty Member: %s", input) + gmo, err := conn.GetMembers(&input) + + if err != nil { + return resource.NonRetryableError(fmt.Errorf("error reading GuardDuty Member %q: %s", accountID, err)) + } + + if gmo == nil || len(gmo.Members) == 0 { + return resource.RetryableError(fmt.Errorf("error reading GuardDuty Member %q: member missing from response", accountID)) + } + + member := gmo.Members[0] + status := aws.StringValue(member.RelationshipStatus) + + if status == "Disabled" || status == "Enabled" || status == "Invited" { + return nil + } + + if status == "Created" || status == "EmailVerificationInProgress" { + return resource.RetryableError(fmt.Errorf("Expected member to be invited but was in state: %s", status)) + } + + return resource.NonRetryableError(fmt.Errorf("error inviting GuardDuty Member %q: invalid status: %s", accountID, status)) + }) +} + func decodeGuardDutyMemberID(id string) (accountID, detectorID string, err error) { parts := strings.Split(id, ":") if len(parts) != 2 { diff --git a/aws/resource_aws_guardduty_member_test.go b/aws/resource_aws_guardduty_member_test.go index 512a192bf77..c32073d19d8 100644 
--- a/aws/resource_aws_guardduty_member_test.go +++ b/aws/resource_aws_guardduty_member_test.go @@ -27,28 +27,125 @@ func testAccAwsGuardDutyMember_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "account_id", accountID), resource.TestCheckResourceAttrSet(resourceName, "detector_id"), resource.TestCheckResourceAttr(resourceName, "email", email), + resource.TestCheckResourceAttr(resourceName, "relationship_status", "Created"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func testAccAwsGuardDutyMember_import(t *testing.T) { +func testAccAwsGuardDutyMember_invite_disassociate(t *testing.T) { resourceName := "aws_guardduty_member.test" + accountID, email := testAccAWSGuardDutyMemberFromEnv(t) resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsGuardDutyMemberDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccGuardDutyMemberConfig_basic("111111111111", "required@example.com"), + { + Config: testAccGuardDutyMemberConfig_invite(accountID, email, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsGuardDutyMemberExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "invite", "true"), + resource.TestCheckResourceAttr(resourceName, "relationship_status", "Invited"), + ), + }, + // Disassociate member + { + Config: testAccGuardDutyMemberConfig_invite(accountID, email, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsGuardDutyMemberExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "invite", "false"), + resource.TestCheckResourceAttr(resourceName, "relationship_status", "Removed"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "disable_email_notification", + }, + }, + }, + }) +} + +func testAccAwsGuardDutyMember_invite_onUpdate(t *testing.T) { + resourceName := "aws_guardduty_member.test" + accountID, email := testAccAWSGuardDutyMemberFromEnv(t) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsGuardDutyMemberDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGuardDutyMemberConfig_invite(accountID, email, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsGuardDutyMemberExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "invite", "false"), + resource.TestCheckResourceAttr(resourceName, "relationship_status", "Created"), + ), + }, + // Invite member + { + Config: testAccGuardDutyMemberConfig_invite(accountID, email, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsGuardDutyMemberExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "invite", "true"), + resource.TestCheckResourceAttr(resourceName, "relationship_status", "Invited"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "disable_email_notification", + }, }, + }, + }) +} + +func testAccAwsGuardDutyMember_invitationMessage(t *testing.T) { + resourceName := "aws_guardduty_member.test" + accountID, email := testAccAWSGuardDutyMemberFromEnv(t) + invitationMessage := "inviting" - resource.TestStep{ + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsGuardDutyMemberDestroy, + Steps: 
[]resource.TestStep{ + { + Config: testAccGuardDutyMemberConfig_invitationMessage(accountID, email, invitationMessage), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsGuardDutyMemberExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "account_id", accountID), + resource.TestCheckResourceAttrSet(resourceName, "detector_id"), + resource.TestCheckResourceAttr(resourceName, "disable_email_notification", "true"), + resource.TestCheckResourceAttr(resourceName, "email", email), + resource.TestCheckResourceAttr(resourceName, "invite", "true"), + resource.TestCheckResourceAttr(resourceName, "invitation_message", invitationMessage), + resource.TestCheckResourceAttr(resourceName, "relationship_status", "Invited"), + ), + }, + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "disable_email_notification", + "invitation_message", + }, }, }, }) @@ -132,3 +229,32 @@ resource "aws_guardduty_member" "test" { } `, testAccGuardDutyDetectorConfig_basic1, accountID, email) } + +func testAccGuardDutyMemberConfig_invite(accountID, email string, invite bool) string { + return fmt.Sprintf(` +%[1]s + +resource "aws_guardduty_member" "test" { + account_id = "%[2]s" + detector_id = "${aws_guardduty_detector.test.id}" + disable_email_notification = true + email = "%[3]s" + invite = %[4]t +} +`, testAccGuardDutyDetectorConfig_basic1, accountID, email, invite) +} + +func testAccGuardDutyMemberConfig_invitationMessage(accountID, email, invitationMessage string) string { + return fmt.Sprintf(` +%[1]s + +resource "aws_guardduty_member" "test" { + account_id = "%[2]s" + detector_id = "${aws_guardduty_detector.test.id}" + disable_email_notification = true + email = "%[3]s" + invitation_message = "%[4]s" + invite = true +} +`, testAccGuardDutyDetectorConfig_basic1, accountID, email, invitationMessage) +} diff --git a/aws/resource_aws_guardduty_test.go b/aws/resource_aws_guardduty_test.go index 992beda06d5..93a4fbfbe03 100644 --- a/aws/resource_aws_guardduty_test.go +++ b/aws/resource_aws_guardduty_test.go @@ -1,6 +1,7 @@ package aws import ( + "os" "testing" ) @@ -19,8 +20,10 @@ func TestAccAWSGuardDuty(t *testing.T) { "import": testAccAwsGuardDutyThreatintelset_import, }, "Member": { - "basic": testAccAwsGuardDutyMember_basic, - "import": testAccAwsGuardDutyMember_import, + "basic": testAccAwsGuardDutyMember_basic, + "inviteOnUpdate": testAccAwsGuardDutyMember_invite_onUpdate, + "inviteDisassociate": testAccAwsGuardDutyMember_invite_disassociate, + "invitationMessage": testAccAwsGuardDutyMember_invitationMessage, }, } @@ -36,3 +39,21 @@ func TestAccAWSGuardDuty(t *testing.T) { }) } } + +func testAccAWSGuardDutyMemberFromEnv(t *testing.T) (string, string) { + accountID := os.Getenv("AWS_GUARDDUTY_MEMBER_ACCOUNT_ID") + if accountID == "" { + t.Skip( + "Environment variable AWS_GUARDDUTY_MEMBER_ACCOUNT_ID is not set. " + + "To properly test inviting GuardDuty member accounts, " + + "a valid AWS account ID must be provided.") + } + email := os.Getenv("AWS_GUARDDUTY_MEMBER_EMAIL") + if email == "" { + t.Skip( + "Environment variable AWS_GUARDDUTY_MEMBER_EMAIL is not set. 
" + + "To properly test inviting GuardDuty member accounts, " + + "a valid email associated with the AWS_GUARDDUTY_MEMBER_ACCOUNT_ID must be provided.") + } + return accountID, email +} diff --git a/aws/resource_aws_guardduty_threatintelset.go b/aws/resource_aws_guardduty_threatintelset.go index 16351a51ee1..75622cef5b7 100644 --- a/aws/resource_aws_guardduty_threatintelset.go +++ b/aws/resource_aws_guardduty_threatintelset.go @@ -86,7 +86,7 @@ func resourceAwsGuardDutyThreatintelsetCreate(d *schema.ResourceData, meta inter _, err = stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for GuardDuty ThreatIntelSet status to be \"%s\" or \"%s\": %s", + return fmt.Errorf("Error waiting for GuardDuty ThreatIntelSet status to be \"%s\" or \"%s\": %s", guardduty.ThreatIntelSetStatusActive, guardduty.ThreatIntelSetStatusInactive, err) } @@ -187,7 +187,7 @@ func resourceAwsGuardDutyThreatintelsetDelete(d *schema.ResourceData, meta inter _, err = stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for GuardDuty ThreatIntelSet status to be \"%s\": %s", guardduty.ThreatIntelSetStatusDeleted, err) + return fmt.Errorf("Error waiting for GuardDuty ThreatIntelSet status to be \"%s\": %s", guardduty.ThreatIntelSetStatusDeleted, err) } return nil diff --git a/aws/resource_aws_guardduty_threatintelset_test.go b/aws/resource_aws_guardduty_threatintelset_test.go index 1ba1652c0ce..6b7259ab7dc 100644 --- a/aws/resource_aws_guardduty_threatintelset_test.go +++ b/aws/resource_aws_guardduty_threatintelset_test.go @@ -60,11 +60,11 @@ func testAccAwsGuardDutyThreatintelset_import(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAwsGuardDutyThreatintelsetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccGuardDutyThreatintelsetConfig_basic(bucketName, keyName, threatintelsetName, true), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_iam_access_key.go b/aws/resource_aws_iam_access_key.go index 515069c036c..2f9cb681f56 100644 --- a/aws/resource_aws_iam_access_key.go +++ b/aws/resource_aws_iam_access_key.go @@ -21,21 +21,21 @@ func resourceAwsIamAccessKey() *schema.Resource { Delete: resourceAwsIamAccessKeyDelete, Schema: map[string]*schema.Schema{ - "user": &schema.Schema{ + "user": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "status": &schema.Schema{ + "status": { Type: schema.TypeString, Computed: true, }, - "secret": &schema.Schema{ + "secret": { Type: schema.TypeString, Computed: true, Deprecated: "Please use a PGP key to encrypt", }, - "ses_smtp_password": &schema.Schema{ + "ses_smtp_password": { Type: schema.TypeString, Computed: true, }, @@ -75,7 +75,7 @@ func resourceAwsIamAccessKeyCreate(d *schema.ResourceData, meta interface{}) err d.SetId(*createResp.AccessKey.AccessKeyId) if createResp.AccessKey == nil || createResp.AccessKey.SecretAccessKey == nil { - return fmt.Errorf("[ERR] CreateAccessKey response did not contain a Secret Access Key as expected") + return fmt.Errorf("CreateAccessKey response did not contain a Secret Access Key as expected") } if v, ok := d.GetOk("pgp_key"); ok { @@ -122,7 +122,7 @@ func resourceAwsIamAccessKeyRead(d *schema.ResourceData, meta interface{}) error d.SetId("") return nil } - return fmt.Errorf("Error reading IAM acces key: %s", err) + return fmt.Errorf("Error reading IAM access key: %s", err) } for _, key := range getResp.AccessKeyMetadata { diff --git 
a/aws/resource_aws_iam_access_key_test.go b/aws/resource_aws_iam_access_key_test.go index 4cf8a1dd71c..877a9f7d287 100644 --- a/aws/resource_aws_iam_access_key_test.go +++ b/aws/resource_aws_iam_access_key_test.go @@ -19,12 +19,12 @@ func TestAccAWSAccessKey_basic(t *testing.T) { var conf iam.AccessKeyMetadata rName := fmt.Sprintf("test-user-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAccessKeyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSAccessKeyConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAccessKeyExists("aws_iam_access_key.a_key", &conf), @@ -40,13 +40,13 @@ func TestAccAWSAccessKey_encrypted(t *testing.T) { var conf iam.AccessKeyMetadata rName := fmt.Sprintf("test-user-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSAccessKeyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSAccessKeyConfig_encrypted(rName, testPubAccessKey1), + { + Config: testAccAWSAccessKeyConfig_encrypted(rName, testPubKey1), Check: resource.ComposeTestCheckFunc( testAccCheckAWSAccessKeyExists("aws_iam_access_key.a_key", &conf), testAccCheckAWSAccessKeyAttributes(&conf), @@ -207,80 +207,3 @@ func TestSesSmtpPasswordFromSecretKey(t *testing.T) { } } } - -const testPubAccessKey1 = `mQENBFXbjPUBCADjNjCUQwfxKL+RR2GA6pv/1K+zJZ8UWIF9S0lk7cVIEfJiprzzwiMwBS5cD0da -rGin1FHvIWOZxujA7oW0O2TUuatqI3aAYDTfRYurh6iKLC+VS+F7H+/mhfFvKmgr0Y5kDCF1j0T/ -063QZ84IRGucR/X43IY7kAtmxGXH0dYOCzOe5UBX1fTn3mXGe2ImCDWBH7gOViynXmb6XNvXkP0f -sF5St9jhO7mbZU9EFkv9O3t3EaURfHopsCVDOlCkFCw5ArY+DUORHRzoMX0PnkyQb5OzibkChzpg -8hQssKeVGpuskTdz5Q7PtdW71jXd4fFVzoNH8fYwRpziD2xNvi6HABEBAAG0EFZhdWx0IFRlc3Qg -S2V5IDGJATgEEwECACIFAlXbjPUCGy8GCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEOfLr44B -HbeTo+sH/i7bapIgPnZsJ81hmxPj4W12uvunksGJiC7d4hIHsG7kmJRTJfjECi+AuTGeDwBy84TD -cRaOB6e79fj65Fg6HgSahDUtKJbGxj/lWzmaBuTzlN3CEe8cMwIPqPT2kajJVdOyrvkyuFOdPFOE -A7bdCH0MqgIdM2SdF8t40k/ATfuD2K1ZmumJ508I3gF39jgTnPzD4C8quswrMQ3bzfvKC3klXRlB -C0yoArn+0QA3cf2B9T4zJ2qnvgotVbeK/b1OJRNj6Poeo+SsWNc/A5mw7lGScnDgL3yfwCm1gQXa -QKfOt5x+7GqhWDw10q+bJpJlI10FfzAnhMF9etSqSeURBRW5AQ0EVduM9QEIAL53hJ5bZJ7oEDCn -aY+SCzt9QsAfnFTAnZJQrvkvusJzrTQ088eUQmAjvxkfRqnv981fFwGnh2+I1Ktm698UAZS9Jt8y -jak9wWUICKQO5QUt5k8cHwldQXNXVXFa+TpQWQR5yW1a9okjh5o/3d4cBt1yZPUJJyLKY43Wvptb -6EuEsScO2DnRkh5wSMDQ7dTooddJCmaq3LTjOleRFQbu9ij386Do6jzK69mJU56TfdcydkxkWF5N -ZLGnED3lq+hQNbe+8UI5tD2oP/3r5tXKgMy1R/XPvR/zbfwvx4FAKFOP01awLq4P3d/2xOkMu4Lu -9p315E87DOleYwxk+FoTqXEAEQEAAYkCPgQYAQIACQUCVduM9QIbLgEpCRDny6+OAR23k8BdIAQZ -AQIABgUCVduM9QAKCRAID0JGyHtSGmqYB/4m4rJbbWa7dBJ8VqRU7ZKnNRDR9CVhEGipBmpDGRYu -lEimOPzLUX/ZXZmTZzgemeXLBaJJlWnopVUWuAsyjQuZAfdd8nHkGRHG0/DGum0l4sKTta3OPGHN -C1z1dAcQ1RCr9bTD3PxjLBczdGqhzw71trkQRBRdtPiUchltPMIyjUHqVJ0xmg0hPqFic0fICsr0 -YwKoz3h9+QEcZHvsjSZjgydKvfLYcm+4DDMCCqcHuJrbXJKUWmJcXR0y/+HQONGrGJ5xWdO+6eJi -oPn2jVMnXCm4EKc7fcLFrz/LKmJ8seXhxjM3EdFtylBGCrx3xdK0f+JDNQaC/rhUb5V2XuX6VwoH -/AtY+XsKVYRfNIupLOUcf/srsm3IXT4SXWVomOc9hjGQiJ3rraIbADsc+6bCAr4XNZS7moViAAcI -PXFv3m3WfUlnG/om78UjQqyVACRZqqAGmuPq+TSkRUCpt9h+A39LQWkojHqyob3cyLgy6z9Q557O -9uK3lQozbw2gH9zC0RqnePl+rsWIUU/ga16fH6pWc1uJiEBt8UZGypQ/E56/343epmYAe0a87sHx -8iDV+dNtDVKfPRENiLOOc19MmS+phmUyrbHqI91c0pmysYcJZCD3a502X1gpjFbPZcRtiTmGnUKd 
-OIu60YPNE4+h7u2CfYyFPu3AlUaGNMBlvy6PEpU=` - -const testPrivAccessKey1 = `lQOYBFXbjPUBCADjNjCUQwfxKL+RR2GA6pv/1K+zJZ8UWIF9S0lk7cVIEfJiprzzwiMwBS5cD0da -rGin1FHvIWOZxujA7oW0O2TUuatqI3aAYDTfRYurh6iKLC+VS+F7H+/mhfFvKmgr0Y5kDCF1j0T/ -063QZ84IRGucR/X43IY7kAtmxGXH0dYOCzOe5UBX1fTn3mXGe2ImCDWBH7gOViynXmb6XNvXkP0f -sF5St9jhO7mbZU9EFkv9O3t3EaURfHopsCVDOlCkFCw5ArY+DUORHRzoMX0PnkyQb5OzibkChzpg -8hQssKeVGpuskTdz5Q7PtdW71jXd4fFVzoNH8fYwRpziD2xNvi6HABEBAAEAB/wL+KX0mdeISEpX -oDgt766Key1Kthe8nbEs5dOXIsP7OR7ZPcnE2hy6gftgVFnBGEZnWVN70vmJd6Z5y9d1mI+GecXj -UL0EpI0EmohyYDJsHUnght/5ecRNFA+VeNmGPYNQGCeHJyZOiFunGGENpHU7BbubAht8delz37Mx -JQgvMyR6AKvg8HKBoQeqV1uMWNJE/vKwV/z1dh1sjK/GFxu05Qaq0GTfAjVLuFOyJTS95yq6gblD -jUdbHLp7tBeqIKo9voWCJF5mGOlq3973vVoWETy9b0YYPCE/M7fXmK9dJITHqkROLMW6TgcFeIw4 -yL5KOBCHk+QGPSvyQN7R7Fd5BADwuT1HZmvg7Y9GjarKXDjxdNemUiHtba2rUzfH6uNmKNQvwQek -nma5palNUJ4/dz1aPB21FUBXJF5yWwXEdApl+lIDU0J5m4UD26rqEVRq9Kx3GsX+yfcwObkrSzW6 -kmnQSB5KI0fIuegMTM+Jxo3pB/mIRwDTMmk+vfzIGyW+7QQA8aFwFLMdKdfLgSGbl5Z6etmOAVQ2 -Oe2ebegU9z/ewi/Rdt2s9yQiAdGVM8+q15Saz8a+kyS/l1CjNPzr3VpYx1OdZ3gb7i2xoy9GdMYR -ZpTq3TuST95kx/9DqA97JrP23G47U0vwF/cg8ixCYF8Fz5dG4DEsxgMwKqhGdW58wMMD/iytkfMk -Vk6Z958Rpy7lhlC6L3zpO38767bSeZ8gRRi/NMFVOSGYepKFarnfxcTiNa+EoSVA6hUo1N64nALE -sJBpyOoTfKIpz7WwTF1+WogkiYrfM6lHon1+3qlziAcRW0IohM3g2C1i3GWdON4Cl8/PDO3R0E52 -N6iG/ctNNeMiPe60EFZhdWx0IFRlc3QgS2V5IDGJATgEEwECACIFAlXbjPUCGy8GCwkIBwMCBhUI -AgkKCwQWAgMBAh4BAheAAAoJEOfLr44BHbeTo+sH/i7bapIgPnZsJ81hmxPj4W12uvunksGJiC7d -4hIHsG7kmJRTJfjECi+AuTGeDwBy84TDcRaOB6e79fj65Fg6HgSahDUtKJbGxj/lWzmaBuTzlN3C -Ee8cMwIPqPT2kajJVdOyrvkyuFOdPFOEA7bdCH0MqgIdM2SdF8t40k/ATfuD2K1ZmumJ508I3gF3 -9jgTnPzD4C8quswrMQ3bzfvKC3klXRlBC0yoArn+0QA3cf2B9T4zJ2qnvgotVbeK/b1OJRNj6Poe -o+SsWNc/A5mw7lGScnDgL3yfwCm1gQXaQKfOt5x+7GqhWDw10q+bJpJlI10FfzAnhMF9etSqSeUR -BRWdA5gEVduM9QEIAL53hJ5bZJ7oEDCnaY+SCzt9QsAfnFTAnZJQrvkvusJzrTQ088eUQmAjvxkf -Rqnv981fFwGnh2+I1Ktm698UAZS9Jt8yjak9wWUICKQO5QUt5k8cHwldQXNXVXFa+TpQWQR5yW1a -9okjh5o/3d4cBt1yZPUJJyLKY43Wvptb6EuEsScO2DnRkh5wSMDQ7dTooddJCmaq3LTjOleRFQbu -9ij386Do6jzK69mJU56TfdcydkxkWF5NZLGnED3lq+hQNbe+8UI5tD2oP/3r5tXKgMy1R/XPvR/z -bfwvx4FAKFOP01awLq4P3d/2xOkMu4Lu9p315E87DOleYwxk+FoTqXEAEQEAAQAH+wVyQXaNwnjQ -xfW+M8SJNo0C7e+0d7HsuBTA/d/eP4bj6+X8RaRFVwiMvSAoxsqBNCLJP00qzzKfRQWJseD1H35z -UjM7rNVUEL2k1yppyp61S0qj0TdhVUfJDYZqRYonVgRMvzfDTB1ryKrefKenQYL/jGd9VYMnKmWZ -6GVk4WWXXx61iOt2HNcmSXKetMM1Mg67woPZkA3fJaXZ+zW0zMu4lTSB7yl3+vLGIFYILkCFnREr -drQ+pmIMwozUAt+pBq8dylnkHh6g/FtRfWmLIMDqM1NlyuHRp3dyLDFdTA93osLG0QJblfX54W34 -byX7a4HASelGi3nPjjOAsTFDkuEEANV2viaWk1CV4ryDrXGmy4Xo32Md+laGPRcVfbJ0mjZjhQsO -gWC1tjMs1qZMPhcrKIBCjjdAcAIrGV9h3CXc0uGuez4XxLO+TPBKaS0B8rKhnKph1YZuf+HrOhzS -astDnOjNIT+qucCL/qSbdYpj9of3yY61S59WphPOBjoVM3BFBADka6ZCk81gx8jA2E1e9UqQDmdM -FZaVA1E7++kqVSFRDJGnq+5GrBTwCJ+sevi+Rvf8Nx4AXvpCdtMBPX9RogsUFcR0pMrKBrgRo/Vg -EpuodY2Ef1VtqXR24OxtRf1UwvHKydIsU05rzMAy5uGgQvTzRTXxZFLGUY31wjWqmo9VPQP+PnwA -K83EV2kk2bsXwZ9MXg05iXqGQYR4bEc/12v04BtaNaDS53hBDO4JIa3Bnz+5oUoYhb8FgezUKA9I -n6RdKTTP1BLAu8titeozpNF07V++dPiSE2wrIVsaNHL1pUwW0ql50titVwe+EglWiCKPtJBcCPUA -3oepSPchiDjPqrNCYIkCPgQYAQIACQUCVduM9QIbLgEpCRDny6+OAR23k8BdIAQZAQIABgUCVduM -9QAKCRAID0JGyHtSGmqYB/4m4rJbbWa7dBJ8VqRU7ZKnNRDR9CVhEGipBmpDGRYulEimOPzLUX/Z -XZmTZzgemeXLBaJJlWnopVUWuAsyjQuZAfdd8nHkGRHG0/DGum0l4sKTta3OPGHNC1z1dAcQ1RCr -9bTD3PxjLBczdGqhzw71trkQRBRdtPiUchltPMIyjUHqVJ0xmg0hPqFic0fICsr0YwKoz3h9+QEc -ZHvsjSZjgydKvfLYcm+4DDMCCqcHuJrbXJKUWmJcXR0y/+HQONGrGJ5xWdO+6eJioPn2jVMnXCm4 -EKc7fcLFrz/LKmJ8seXhxjM3EdFtylBGCrx3xdK0f+JDNQaC/rhUb5V2XuX6VwoH/AtY+XsKVYRf 
-NIupLOUcf/srsm3IXT4SXWVomOc9hjGQiJ3rraIbADsc+6bCAr4XNZS7moViAAcIPXFv3m3WfUln -G/om78UjQqyVACRZqqAGmuPq+TSkRUCpt9h+A39LQWkojHqyob3cyLgy6z9Q557O9uK3lQozbw2g -H9zC0RqnePl+rsWIUU/ga16fH6pWc1uJiEBt8UZGypQ/E56/343epmYAe0a87sHx8iDV+dNtDVKf -PRENiLOOc19MmS+phmUyrbHqI91c0pmysYcJZCD3a502X1gpjFbPZcRtiTmGnUKdOIu60YPNE4+h -7u2CfYyFPu3AlUaGNMBlvy6PEpU=` diff --git a/aws/resource_aws_iam_account_alias.go b/aws/resource_aws_iam_account_alias.go index 3307ae1c29b..717f8187e48 100644 --- a/aws/resource_aws_iam_account_alias.go +++ b/aws/resource_aws_iam_account_alias.go @@ -88,7 +88,5 @@ func resourceAwsIamAccountAliasDelete(d *schema.ResourceData, meta interface{}) return fmt.Errorf("Error deleting account alias with name '%s': %s", account_alias, err) } - d.SetId("") - return nil } diff --git a/aws/resource_aws_iam_account_alias_test.go b/aws/resource_aws_iam_account_alias_test.go index 455af4a9796..aacb98c84fc 100644 --- a/aws/resource_aws_iam_account_alias_test.go +++ b/aws/resource_aws_iam_account_alias_test.go @@ -34,6 +34,29 @@ func TestAccAWSIAMAccountAlias(t *testing.T) { } } +func testAccAWSIAMAccountAlias_importBasic(t *testing.T) { + resourceName := "aws_iam_account_alias.test" + + rstring := acctest.RandString(5) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMAccountAliasDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMAccountAliasConfig(rstring), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccAWSIAMAccountAlias_basic_with_datasource(t *testing.T) { var account_alias string @@ -59,7 +82,7 @@ func testAccAWSIAMAccountAlias_basic_with_datasource(t *testing.T) { // We expect a non-empty plan due to the way data sources and depends_on // work, or don't work. 
See https://github.com/hashicorp/terraform/issues/11139#issuecomment-275121893 // We accept this limitation and feel this test is OK because of the - // explicity check above + // explicitly check above ExpectNonEmptyPlan: true, }, }, @@ -132,21 +155,6 @@ func testAccCheckAWSIAMAccountAliasExists(n string, a *string) resource.TestChec } } -func testAccCheckAwsIamAccountAlias(n string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Can't find Account Alias resource: %s", n) - } - - if rs.Primary.Attributes["account_alias"] == "" { - return fmt.Errorf("Missing Account Alias") - } - - return nil - } -} - func testAccAWSIAMAccountAliasConfig_with_datasource(rstring string) string { return fmt.Sprintf(` resource "aws_iam_account_alias" "test" { diff --git a/aws/resource_aws_iam_account_password_policy.go b/aws/resource_aws_iam_account_password_policy.go index 71dfbf0c843..c3b63cc6a13 100644 --- a/aws/resource_aws_iam_account_password_policy.go +++ b/aws/resource_aws_iam_account_password_policy.go @@ -22,51 +22,51 @@ func resourceAwsIamAccountPasswordPolicy() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "allow_users_to_change_password": &schema.Schema{ + "allow_users_to_change_password": { Type: schema.TypeBool, Optional: true, Default: true, }, - "expire_passwords": &schema.Schema{ + "expire_passwords": { Type: schema.TypeBool, Computed: true, }, - "hard_expiry": &schema.Schema{ + "hard_expiry": { Type: schema.TypeBool, Optional: true, Computed: true, }, - "max_password_age": &schema.Schema{ + "max_password_age": { Type: schema.TypeInt, Optional: true, Computed: true, }, - "minimum_password_length": &schema.Schema{ + "minimum_password_length": { Type: schema.TypeInt, Optional: true, Default: 6, }, - "password_reuse_prevention": &schema.Schema{ + "password_reuse_prevention": { Type: schema.TypeInt, Optional: true, Computed: true, }, - "require_lowercase_characters": &schema.Schema{ + "require_lowercase_characters": { Type: schema.TypeBool, Optional: true, Computed: true, }, - "require_numbers": &schema.Schema{ + "require_numbers": { Type: schema.TypeBool, Optional: true, Computed: true, }, - "require_symbols": &schema.Schema{ + "require_symbols": { Type: schema.TypeBool, Optional: true, Computed: true, }, - "require_uppercase_characters": &schema.Schema{ + "require_uppercase_characters": { Type: schema.TypeBool, Optional: true, Computed: true, @@ -161,7 +161,6 @@ func resourceAwsIamAccountPasswordPolicyDelete(d *schema.ResourceData, meta inte if _, err := iamconn.DeleteAccountPasswordPolicy(input); err != nil { return fmt.Errorf("Error deleting IAM Password Policy: %s", err) } - d.SetId("") log.Println("[DEBUG] Deleted IAM account password policy") return nil diff --git a/aws/resource_aws_iam_account_password_policy_test.go b/aws/resource_aws_iam_account_password_policy_test.go index b909fc05a25..af11ae4a24b 100644 --- a/aws/resource_aws_iam_account_password_policy_test.go +++ b/aws/resource_aws_iam_account_password_policy_test.go @@ -10,22 +10,43 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSIAMAccountPasswordPolicy_importBasic(t *testing.T) { + resourceName := "aws_iam_account_password_policy.default" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMAccountPasswordPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSIAMAccountPasswordPolicy, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSIAMAccountPasswordPolicy_basic(t *testing.T) { var policy iam.GetAccountPasswordPolicyOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSIAMAccountPasswordPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSIAMAccountPasswordPolicy, Check: resource.ComposeTestCheckFunc( testAccCheckAWSIAMAccountPasswordPolicyExists("aws_iam_account_password_policy.default", &policy), resource.TestCheckResourceAttr("aws_iam_account_password_policy.default", "minimum_password_length", "8"), ), }, - resource.TestStep{ + { Config: testAccAWSIAMAccountPasswordPolicy_modified, Check: resource.ComposeTestCheckFunc( testAccCheckAWSIAMAccountPasswordPolicyExists("aws_iam_account_password_policy.default", &policy), diff --git a/aws/resource_aws_iam_group.go b/aws/resource_aws_iam_group.go index 967f055cdd3..694a9aa0eab 100644 --- a/aws/resource_aws_iam_group.go +++ b/aws/resource_aws_iam_group.go @@ -22,20 +22,20 @@ func resourceAwsIamGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "unique_id": &schema.Schema{ + "unique_id": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ValidateFunc: validateAwsIamGroupName, }, - "path": &schema.Schema{ + "path": { Type: schema.TypeString, Optional: true, Default: "/", diff --git a/aws/resource_aws_iam_group_membership.go b/aws/resource_aws_iam_group_membership.go index 7977bbfb74e..34a8f0b271a 100644 --- a/aws/resource_aws_iam_group_membership.go +++ b/aws/resource_aws_iam_group_membership.go @@ -17,20 +17,20 @@ func resourceAwsIamGroupMembership() *schema.Resource { Delete: resourceAwsIamGroupMembershipDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "users": &schema.Schema{ + "users": { Type: schema.TypeSet, Required: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "group": &schema.Schema{ + "group": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -88,7 +88,7 @@ func resourceAwsIamGroupMembershipRead(d *schema.ResourceData, meta interface{}) } if err := d.Set("users", ul); err != nil { - return fmt.Errorf("[WARN] Error setting user list from IAM Group Membership (%s), error: %s", group, err) + return fmt.Errorf("Error setting user list from IAM Group Membership (%s), error: %s", group, err) } return nil diff --git a/aws/resource_aws_iam_group_membership_test.go b/aws/resource_aws_iam_group_membership_test.go index 901d94a49c8..b6cab75e39a 100644 --- a/aws/resource_aws_iam_group_membership_test.go +++ b/aws/resource_aws_iam_group_membership_test.go @@ -23,12 +23,12 @@ func TestAccAWSGroupMembership_basic(t *testing.T) { userName3 := fmt.Sprintf("tf-acc-user-gm-basic-three-%s", rString) membershipName := fmt.Sprintf("tf-acc-membership-gm-basic-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSGroupMembershipDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: 
testAccAWSGroupMemberConfig(groupName, userName, membershipName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSGroupMembershipExists("aws_iam_group_membership.team", &group), @@ -36,7 +36,7 @@ func TestAccAWSGroupMembership_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSGroupMemberConfigUpdate(groupName, userName, userName2, userName3, membershipName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSGroupMembershipExists("aws_iam_group_membership.team", &group), @@ -44,7 +44,7 @@ func TestAccAWSGroupMembership_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSGroupMemberConfigUpdateDown(groupName, userName3, membershipName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSGroupMembershipExists("aws_iam_group_membership.team", &group), @@ -63,12 +63,12 @@ func TestAccAWSGroupMembership_paginatedUserList(t *testing.T) { membershipName := fmt.Sprintf("tf-acc-membership-gm-pul-%s", rString) userNamePrefix := fmt.Sprintf("tf-acc-user-gm-pul-%s-", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSGroupMembershipDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSGroupMemberConfigPaginatedUserList(groupName, membershipName, userNamePrefix), Check: resource.ComposeTestCheckFunc( testAccCheckAWSGroupMembershipExists("aws_iam_group_membership.team", &group), diff --git a/aws/resource_aws_iam_group_policy.go b/aws/resource_aws_iam_group_policy.go index 1bdf7254514..cb35b0ca7e2 100644 --- a/aws/resource_aws_iam_group_policy.go +++ b/aws/resource_aws_iam_group_policy.go @@ -23,23 +23,24 @@ func resourceAwsIamGroupPolicy() *schema.Resource { Delete: resourceAwsIamGroupPolicyDelete, Schema: map[string]*schema.Schema{ - "policy": &schema.Schema{ + "policy": { Type: schema.TypeString, Required: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, ConflictsWith: []string{"name_prefix"}, }, - "name_prefix": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ForceNew: true, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, }, - "group": &schema.Schema{ + "group": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -102,7 +103,12 @@ func resourceAwsIamGroupPolicyRead(d *schema.ResourceData, meta interface{}) err if err != nil { return err } - return d.Set("policy", policy) + + d.Set("group", group) + d.Set("name", name) + d.Set("policy", policy) + + return nil } func resourceAwsIamGroupPolicyDelete(d *schema.ResourceData, meta interface{}) error { diff --git a/aws/resource_aws_iam_group_policy_attachment.go b/aws/resource_aws_iam_group_policy_attachment.go index cf9595232a4..26e620bacee 100644 --- a/aws/resource_aws_iam_group_policy_attachment.go +++ b/aws/resource_aws_iam_group_policy_attachment.go @@ -18,12 +18,12 @@ func resourceAwsIamGroupPolicyAttachment() *schema.Resource { Delete: resourceAwsIamGroupPolicyAttachmentDelete, Schema: map[string]*schema.Schema{ - "group": &schema.Schema{ + "group": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy_arn": &schema.Schema{ + "policy_arn": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -40,7 +40,7 @@ func resourceAwsIamGroupPolicyAttachmentCreate(d *schema.ResourceData, meta inte err := attachPolicyToGroup(conn, group, arn) if err != nil { - 
return fmt.Errorf("[WARN] Error attaching policy %s to IAM group %s: %v", arn, group, err) + return fmt.Errorf("Error attaching policy %s to IAM group %s: %v", arn, group, err) } d.SetId(resource.PrefixedUniqueId(fmt.Sprintf("%s-", group))) @@ -96,7 +96,7 @@ func resourceAwsIamGroupPolicyAttachmentDelete(d *schema.ResourceData, meta inte err := detachPolicyFromGroup(conn, group, arn) if err != nil { - return fmt.Errorf("[WARN] Error removing policy %s from IAM Group %s: %v", arn, group, err) + return fmt.Errorf("Error removing policy %s from IAM Group %s: %v", arn, group, err) } return nil } diff --git a/aws/resource_aws_iam_group_policy_attachment_test.go b/aws/resource_aws_iam_group_policy_attachment_test.go index ca813d7ef9a..7f2561c72d1 100644 --- a/aws/resource_aws_iam_group_policy_attachment_test.go +++ b/aws/resource_aws_iam_group_policy_attachment_test.go @@ -21,19 +21,19 @@ func TestAccAWSIAMGroupPolicyAttachment_basic(t *testing.T) { policyName2 := fmt.Sprintf("tf-acc-policy-gpa-basic-2-%s", rString) policyName3 := fmt.Sprintf("tf-acc-policy-gpa-basic-3-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSGroupPolicyAttachmentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSGroupPolicyAttachConfig(groupName, policyName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSGroupPolicyAttachmentExists("aws_iam_group_policy_attachment.test-attach", 1, &out), testAccCheckAWSGroupPolicyAttachmentAttributes([]string{policyName}, &out), ), }, - resource.TestStep{ + { Config: testAccAWSGroupPolicyAttachConfigUpdate(groupName, policyName, policyName2, policyName3), Check: resource.ComposeTestCheckFunc( testAccCheckAWSGroupPolicyAttachmentExists("aws_iam_group_policy_attachment.test-attach", 2, &out), diff --git a/aws/resource_aws_iam_group_policy_test.go b/aws/resource_aws_iam_group_policy_test.go index 6e33cd48445..9a24dd7167d 100644 --- a/aws/resource_aws_iam_group_policy_test.go +++ b/aws/resource_aws_iam_group_policy_test.go @@ -1,6 +1,7 @@ package aws import ( + "errors" "fmt" "testing" @@ -13,8 +14,9 @@ import ( ) func TestAccAWSIAMGroupPolicy_basic(t *testing.T) { + var groupPolicy1, groupPolicy2 iam.GetGroupPolicyOutput rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckIAMGroupPolicyDestroy, @@ -22,19 +24,22 @@ func TestAccAWSIAMGroupPolicy_basic(t *testing.T) { { Config: testAccIAMGroupPolicyConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckIAMGroupPolicy( + testAccCheckIAMGroupPolicyExists( "aws_iam_group.group", "aws_iam_group_policy.foo", + &groupPolicy1, ), ), }, { Config: testAccIAMGroupPolicyConfigUpdate(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckIAMGroupPolicy( + testAccCheckIAMGroupPolicyExists( "aws_iam_group.group", "aws_iam_group_policy.bar", + &groupPolicy2, ), + testAccCheckAWSIAMGroupPolicyNameChanged(&groupPolicy1, &groupPolicy2), ), }, }, @@ -42,41 +47,67 @@ func TestAccAWSIAMGroupPolicy_basic(t *testing.T) { } func TestAccAWSIAMGroupPolicy_namePrefix(t *testing.T) { + var groupPolicy1, groupPolicy2 iam.GetGroupPolicyOutput rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: 
"aws_iam_group_policy.test", Providers: testAccProviders, CheckDestroy: testAccCheckIAMGroupPolicyDestroy, Steps: []resource.TestStep{ { - Config: testAccIAMGroupPolicyConfig_namePrefix(rInt), + Config: testAccIAMGroupPolicyConfig_namePrefix(rInt, "*"), Check: resource.ComposeTestCheckFunc( - testAccCheckIAMGroupPolicy( + testAccCheckIAMGroupPolicyExists( "aws_iam_group.test", "aws_iam_group_policy.test", + &groupPolicy1, ), ), }, + { + Config: testAccIAMGroupPolicyConfig_namePrefix(rInt, "ec2:*"), + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMGroupPolicyExists( + "aws_iam_group.test", + "aws_iam_group_policy.test", + &groupPolicy2, + ), + testAccCheckAWSIAMGroupPolicyNameMatches(&groupPolicy1, &groupPolicy2), + ), + }, }, }) } func TestAccAWSIAMGroupPolicy_generatedName(t *testing.T) { + var groupPolicy1, groupPolicy2 iam.GetGroupPolicyOutput rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_iam_group_policy.test", Providers: testAccProviders, CheckDestroy: testAccCheckIAMGroupPolicyDestroy, Steps: []resource.TestStep{ { - Config: testAccIAMGroupPolicyConfig_generatedName(rInt), + Config: testAccIAMGroupPolicyConfig_generatedName(rInt, "*"), + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMGroupPolicyExists( + "aws_iam_group.test", + "aws_iam_group_policy.test", + &groupPolicy1, + ), + ), + }, + { + Config: testAccIAMGroupPolicyConfig_generatedName(rInt, "ec2:*"), Check: resource.ComposeTestCheckFunc( - testAccCheckIAMGroupPolicy( + testAccCheckIAMGroupPolicyExists( "aws_iam_group.test", "aws_iam_group_policy.test", + &groupPolicy2, ), + testAccCheckAWSIAMGroupPolicyNameMatches(&groupPolicy1, &groupPolicy2), ), }, }, @@ -113,9 +144,10 @@ func testAccCheckIAMGroupPolicyDestroy(s *terraform.State) error { return nil } -func testAccCheckIAMGroupPolicy( +func testAccCheckIAMGroupPolicyExists( iamGroupResource string, - iamGroupPolicyResource string) resource.TestCheckFunc { + iamGroupPolicyResource string, + groupPolicy *iam.GetGroupPolicyOutput) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[iamGroupResource] if !ok { @@ -133,7 +165,7 @@ func testAccCheckIAMGroupPolicy( iamconn := testAccProvider.Meta().(*AWSClient).iamconn group, name := resourceAwsIamGroupPolicyParseId(policy.Primary.ID) - _, err := iamconn.GetGroupPolicy(&iam.GetGroupPolicyInput{ + output, err := iamconn.GetGroupPolicy(&iam.GetGroupPolicyInput{ GroupName: aws.String(group), PolicyName: aws.String(name), }) @@ -142,6 +174,28 @@ func testAccCheckIAMGroupPolicy( return err } + *groupPolicy = *output + + return nil + } +} + +func testAccCheckAWSIAMGroupPolicyNameChanged(i, j *iam.GetGroupPolicyOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.StringValue(i.PolicyName) == aws.StringValue(j.PolicyName) { + return errors.New("IAM Group Policy name did not change") + } + + return nil + } +} + +func testAccCheckAWSIAMGroupPolicyNameMatches(i, j *iam.GetGroupPolicyOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.StringValue(i.PolicyName) != aws.StringValue(j.PolicyName) { + return errors.New("IAM Group Policy name did not match") + } + return nil } } @@ -169,7 +223,7 @@ EOF }`, rInt, rInt) } -func testAccIAMGroupPolicyConfig_namePrefix(rInt int) string { +func testAccIAMGroupPolicyConfig_namePrefix(rInt int, policyAction string) string { return fmt.Sprintf(` resource 
"aws_iam_group" "test" { name = "test_group_%d" @@ -184,15 +238,15 @@ func testAccIAMGroupPolicyConfig_namePrefix(rInt int) string { "Version": "2012-10-17", "Statement": { "Effect": "Allow", - "Action": "*", + "Action": "%s", "Resource": "*" } } EOF - }`, rInt, rInt) + }`, rInt, rInt, policyAction) } -func testAccIAMGroupPolicyConfig_generatedName(rInt int) string { +func testAccIAMGroupPolicyConfig_generatedName(rInt int, policyAction string) string { return fmt.Sprintf(` resource "aws_iam_group" "test" { name = "test_group_%d" @@ -206,12 +260,12 @@ func testAccIAMGroupPolicyConfig_generatedName(rInt int) string { "Version": "2012-10-17", "Statement": { "Effect": "Allow", - "Action": "*", + "Action": "%s", "Resource": "*" } } EOF - }`, rInt) + }`, rInt, policyAction) } func testAccIAMGroupPolicyConfigUpdate(rInt int) string { diff --git a/aws/resource_aws_iam_group_test.go b/aws/resource_aws_iam_group_test.go index 16f0dceaaca..35b0ef223c8 100644 --- a/aws/resource_aws_iam_group_test.go +++ b/aws/resource_aws_iam_group_test.go @@ -50,6 +50,30 @@ func TestValidateIamGroupName(t *testing.T) { } } +func TestAccAWSIAMGroup_importBasic(t *testing.T) { + resourceName := "aws_iam_group.group" + + rString := acctest.RandString(8) + groupName := fmt.Sprintf("tf-acc-group-import-%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGroupConfig(groupName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSIAMGroup_basic(t *testing.T) { var conf iam.GetGroupOutput @@ -57,7 +81,7 @@ func TestAccAWSIAMGroup_basic(t *testing.T) { groupName := fmt.Sprintf("tf-acc-group-basic-%s", rString) groupName2 := fmt.Sprintf("tf-acc-group-basic-2-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSGroupDestroy, diff --git a/aws/resource_aws_iam_instance_profile.go b/aws/resource_aws_iam_instance_profile.go index 837a3a51325..7414ddd29da 100644 --- a/aws/resource_aws_iam_instance_profile.go +++ b/aws/resource_aws_iam_instance_profile.go @@ -3,6 +3,7 @@ package aws import ( "fmt" "regexp" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -60,9 +61,10 @@ func resourceAwsIamInstanceProfile() *schema.Resource { }, "name_prefix": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { // https://github.com/boto/botocore/blob/2485f5c/botocore/data/iam/2010-05-08/service-2.json#L8196-L8201 value := v.(string) @@ -158,7 +160,24 @@ func instanceProfileAddRole(iamconn *iam.IAM, profileName, roleName string) erro RoleName: aws.String(roleName), } - _, err := iamconn.AddRoleToInstanceProfile(request) + err := resource.Retry(30*time.Second, func() *resource.RetryError { + var err error + _, err = iamconn.AddRoleToInstanceProfile(request) + // IAM unfortunately does not provide a better error code or message for eventual consistency + // InvalidParameterValue: Value (XXX) for parameter iamInstanceProfile.name is invalid. 
Invalid IAM Instance Profile name + // NoSuchEntity: The request was rejected because it referenced an entity that does not exist. The error message describes the entity. HTTP Status Code: 404 + if isAWSErr(err, "InvalidParameterValue", "Invalid IAM Instance Profile name") || isAWSErr(err, "NoSuchEntity", "The role with name") { + return resource.RetryableError(err) + } + if err != nil { + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Error adding IAM Role %s to Instance Profile %s: %s", roleName, profileName, err) + } + return err } @@ -295,7 +314,7 @@ func resourceAwsIamInstanceProfileDelete(d *schema.ResourceData, meta interface{ if err != nil { return fmt.Errorf("Error deleting IAM instance profile %s: %s", d.Id(), err) } - d.SetId("") + return nil } @@ -307,6 +326,9 @@ func instanceProfileReadResult(d *schema.ResourceData, result *iam.InstanceProfi if err := d.Set("arn", result.Arn); err != nil { return err } + if err := d.Set("create_date", result.CreateDate.Format(time.RFC3339)); err != nil { + return err + } if err := d.Set("path", result.Path); err != nil { return err } diff --git a/aws/resource_aws_iam_instance_profile_test.go b/aws/resource_aws_iam_instance_profile_test.go index f60c4584fbc..ca840b25b56 100644 --- a/aws/resource_aws_iam_instance_profile_test.go +++ b/aws/resource_aws_iam_instance_profile_test.go @@ -18,7 +18,7 @@ func TestAccAWSIAMInstanceProfile_importBasic(t *testing.T) { resourceName := "aws_iam_instance_profile.test" rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSInstanceProfileDestroy, @@ -41,7 +41,7 @@ func TestAccAWSIAMInstanceProfile_basic(t *testing.T) { var conf iam.GetInstanceProfileOutput rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -59,7 +59,7 @@ func TestAccAWSIAMInstanceProfile_withRoleNotRoles(t *testing.T) { var conf iam.GetInstanceProfileOutput rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -75,7 +75,7 @@ func TestAccAWSIAMInstanceProfile_withRoleNotRoles(t *testing.T) { func TestAccAWSIAMInstanceProfile_missingRoleThrowsError(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -91,7 +91,7 @@ func TestAccAWSIAMInstanceProfile_namePrefix(t *testing.T) { var conf iam.GetInstanceProfileOutput rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_iam_instance_profile.test", IDRefreshIgnore: []string{"name_prefix"}, diff --git a/aws/resource_aws_iam_openid_connect_provider.go b/aws/resource_aws_iam_openid_connect_provider.go index 1791da4ecb2..bf7b8232710 100644 --- a/aws/resource_aws_iam_openid_connect_provider.go +++ b/aws/resource_aws_iam_openid_connect_provider.go @@ -22,11 +22,11 @@ func resourceAwsIamOpenIDConnectProvider() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "arn": 
&schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "url": &schema.Schema{ + "url": { Type: schema.TypeString, Computed: false, Required: true, @@ -34,13 +34,13 @@ func resourceAwsIamOpenIDConnectProvider() *schema.Resource { ValidateFunc: validateOpenIdURL, DiffSuppressFunc: suppressOpenIdURL, }, - "client_id_list": &schema.Schema{ + "client_id_list": { Elem: &schema.Schema{Type: schema.TypeString}, Type: schema.TypeList, Required: true, ForceNew: true, }, - "thumbprint_list": &schema.Schema{ + "thumbprint_list": { Elem: &schema.Schema{Type: schema.TypeString}, Type: schema.TypeList, Required: true, diff --git a/aws/resource_aws_iam_openid_connect_provider_test.go b/aws/resource_aws_iam_openid_connect_provider_test.go index 6cf10d8b80c..be2b8237b08 100644 --- a/aws/resource_aws_iam_openid_connect_provider_test.go +++ b/aws/resource_aws_iam_openid_connect_provider_test.go @@ -16,12 +16,12 @@ func TestAccAWSIAMOpenIDConnectProvider_basic(t *testing.T) { rString := acctest.RandString(5) url := "accounts.google.com/" + rString - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckIAMOpenIDConnectProviderDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccIAMOpenIDConnectProviderConfig(rString), Check: resource.ComposeTestCheckFunc( testAccCheckIAMOpenIDConnectProvider("aws_iam_openid_connect_provider.goog"), @@ -32,7 +32,7 @@ func TestAccAWSIAMOpenIDConnectProvider_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_iam_openid_connect_provider.goog", "thumbprint_list.#", "0"), ), }, - resource.TestStep{ + { Config: testAccIAMOpenIDConnectProviderConfig_modified(rString), Check: resource.ComposeTestCheckFunc( testAccCheckIAMOpenIDConnectProvider("aws_iam_openid_connect_provider.goog"), @@ -53,16 +53,16 @@ func TestAccAWSIAMOpenIDConnectProvider_importBasic(t *testing.T) { resourceName := "aws_iam_openid_connect_provider.goog" rString := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckIAMOpenIDConnectProviderDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccIAMOpenIDConnectProviderConfig_modified(rString), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -74,12 +74,12 @@ func TestAccAWSIAMOpenIDConnectProvider_importBasic(t *testing.T) { func TestAccAWSIAMOpenIDConnectProvider_disappears(t *testing.T) { rString := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckIAMOpenIDConnectProviderDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccIAMOpenIDConnectProviderConfig(rString), Check: resource.ComposeTestCheckFunc( testAccCheckIAMOpenIDConnectProvider("aws_iam_openid_connect_provider.goog"), diff --git a/aws/resource_aws_iam_policy.go b/aws/resource_aws_iam_policy.go index 11c4fbf4d00..a5c8449ba3a 100644 --- a/aws/resource_aws_iam_policy.go +++ b/aws/resource_aws_iam_policy.go @@ -2,13 +2,13 @@ package aws import ( "fmt" + "log" "net/url" "regexp" + "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" - 
"github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -62,9 +62,10 @@ func resourceAwsIamPolicy() *schema.Resource { }, }, "name_prefix": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { // https://github.com/boto/botocore/blob/2485f5c/botocore/data/iam/2010-05-08/service-2.json#L8329-L8334 value := v.(string) @@ -111,7 +112,9 @@ func resourceAwsIamPolicyCreate(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("Error creating IAM policy %s: %s", name, err) } - return readIamPolicy(d, response.Policy) + d.SetId(*response.Policy.Arn) + + return resourceAwsIamPolicyRead(d, meta) } func resourceAwsIamPolicyRead(d *schema.ResourceData, meta interface{}) error { @@ -120,39 +123,93 @@ func resourceAwsIamPolicyRead(d *schema.ResourceData, meta interface{}) error { getPolicyRequest := &iam.GetPolicyInput{ PolicyArn: aws.String(d.Id()), } + log.Printf("[DEBUG] Getting IAM Policy: %s", getPolicyRequest) - getPolicyResponse, err := iamconn.GetPolicy(getPolicyRequest) - if err != nil { - if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" { - d.SetId("") - return nil + // Handle IAM eventual consistency + var getPolicyResponse *iam.GetPolicyOutput + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + getPolicyResponse, err = iamconn.GetPolicy(getPolicyRequest) + + if d.IsNewResource() && isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + return resource.RetryableError(err) + } + + if err != nil { + return resource.NonRetryableError(err) } + + return nil + }) + + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + log.Printf("[WARN] IAM Policy (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { return fmt.Errorf("Error reading IAM policy %s: %s", d.Id(), err) } + if getPolicyResponse == nil || getPolicyResponse.Policy == nil { + log.Printf("[WARN] IAM Policy (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("arn", getPolicyResponse.Policy.Arn) + d.Set("description", getPolicyResponse.Policy.Description) + d.Set("name", getPolicyResponse.Policy.PolicyName) + d.Set("path", getPolicyResponse.Policy.Path) + + // Retrieve policy + getPolicyVersionRequest := &iam.GetPolicyVersionInput{ PolicyArn: aws.String(d.Id()), VersionId: getPolicyResponse.Policy.DefaultVersionId, } + log.Printf("[DEBUG] Getting IAM Policy Version: %s", getPolicyVersionRequest) - getPolicyVersionResponse, err := iamconn.GetPolicyVersion(getPolicyVersionRequest) - if err != nil { - if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" { - d.SetId("") - return nil + // Handle IAM eventual consistency + var getPolicyVersionResponse *iam.GetPolicyVersionOutput + err = resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + getPolicyVersionResponse, err = iamconn.GetPolicyVersion(getPolicyVersionRequest) + + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + return resource.RetryableError(err) } - return fmt.Errorf("Error reading IAM policy version %s: %s", d.Id(), err) + + if err != nil { + return resource.NonRetryableError(err) + } + + return nil + }) + + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + log.Printf("[WARN] IAM Policy (%s) not found, removing from 
state", d.Id()) + d.SetId("") + return nil } - policy, err := url.QueryUnescape(*getPolicyVersionResponse.PolicyVersion.Document) if err != nil { - return err + return fmt.Errorf("Error reading IAM policy version %s: %s", d.Id(), err) } - if err := d.Set("policy", policy); err != nil { - return err + + policy := "" + if getPolicyVersionResponse != nil && getPolicyVersionResponse.PolicyVersion != nil { + var err error + policy, err = url.QueryUnescape(aws.StringValue(getPolicyVersionResponse.PolicyVersion.Document)) + if err != nil { + return fmt.Errorf("error parsing policy: %s", err) + } } - return readIamPolicy(d, getPolicyResponse.Policy) + d.Set("policy", policy) + + return nil } func resourceAwsIamPolicyUpdate(d *schema.ResourceData, meta interface{}) error { @@ -162,9 +219,6 @@ func resourceAwsIamPolicyUpdate(d *schema.ResourceData, meta interface{}) error return err } - if !d.HasChange("policy") { - return nil - } request := &iam.CreatePolicyVersionInput{ PolicyArn: aws.String(d.Id()), PolicyDocument: aws.String(d.Get("policy").(string)), @@ -174,7 +228,8 @@ func resourceAwsIamPolicyUpdate(d *schema.ResourceData, meta interface{}) error if _, err := iamconn.CreatePolicyVersion(request); err != nil { return fmt.Errorf("Error updating IAM policy %s: %s", d.Id(), err) } - return nil + + return resourceAwsIamPolicyRead(d, meta) } func resourceAwsIamPolicyDelete(d *schema.ResourceData, meta interface{}) error { @@ -189,12 +244,14 @@ func resourceAwsIamPolicyDelete(d *schema.ResourceData, meta interface{}) error } _, err := iamconn.DeletePolicy(request) + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + return nil + } + if err != nil { - if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" { - return nil - } return fmt.Errorf("Error deleting IAM policy %s: %#v", d.Id(), err) } + return nil } @@ -274,23 +331,3 @@ func iamPolicyListVersions(arn string, iamconn *iam.IAM) ([]*iam.PolicyVersion, } return response.Versions, nil } - -func readIamPolicy(d *schema.ResourceData, policy *iam.Policy) error { - d.SetId(*policy.Arn) - if policy.Description != nil { - // the description isn't present in the response to CreatePolicy. 
- if err := d.Set("description", policy.Description); err != nil { - return err - } - } - if err := d.Set("path", policy.Path); err != nil { - return err - } - if err := d.Set("name", policy.PolicyName); err != nil { - return err - } - if err := d.Set("arn", policy.Arn); err != nil { - return err - } - return nil -} diff --git a/aws/resource_aws_iam_policy_attachment.go b/aws/resource_aws_iam_policy_attachment.go index a973460286c..5e1c922734d 100644 --- a/aws/resource_aws_iam_policy_attachment.go +++ b/aws/resource_aws_iam_policy_attachment.go @@ -22,31 +22,31 @@ func resourceAwsIamPolicyAttachment() *schema.Resource { Delete: resourceAwsIamPolicyAttachmentDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validation.NoZeroValues, }, - "users": &schema.Schema{ + "users": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "roles": &schema.Schema{ + "roles": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "groups": &schema.Schema{ + "groups": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "policy_arn": &schema.Schema{ + "policy_arn": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -65,7 +65,7 @@ func resourceAwsIamPolicyAttachmentCreate(d *schema.ResourceData, meta interface groups := expandStringList(d.Get("groups").(*schema.Set).List()) if len(users) == 0 && len(roles) == 0 && len(groups) == 0 { - return fmt.Errorf("[WARN] No Users, Roles, or Groups specified for IAM Policy Attachment %s", name) + return fmt.Errorf("No Users, Roles, or Groups specified for IAM Policy Attachment %s", name) } else { var userErr, roleErr, groupErr error if users != nil { @@ -146,13 +146,13 @@ func resourceAwsIamPolicyAttachmentUpdate(d *schema.ResourceData, meta interface var userErr, roleErr, groupErr error if d.HasChange("users") { - userErr = updateUsers(conn, d, meta) + userErr = updateUsers(conn, d) } if d.HasChange("roles") { - roleErr = updateRoles(conn, d, meta) + roleErr = updateRoles(conn, d) } if d.HasChange("groups") { - groupErr = updateGroups(conn, d, meta) + groupErr = updateGroups(conn, d) } if userErr != nil || roleErr != nil || groupErr != nil { return composeErrors(fmt.Sprint("[WARN] Error updating user, role, or group list from IAM Policy Attachment ", name, ":"), userErr, roleErr, groupErr) @@ -264,7 +264,7 @@ func attachPolicyToGroups(conn *iam.IAM, groups []*string, arn string) error { } return nil } -func updateUsers(conn *iam.IAM, d *schema.ResourceData, meta interface{}) error { +func updateUsers(conn *iam.IAM, d *schema.ResourceData) error { arn := d.Get("policy_arn").(string) o, n := d.GetChange("users") if o == nil { @@ -286,7 +286,7 @@ func updateUsers(conn *iam.IAM, d *schema.ResourceData, meta interface{}) error } return nil } -func updateRoles(conn *iam.IAM, d *schema.ResourceData, meta interface{}) error { +func updateRoles(conn *iam.IAM, d *schema.ResourceData) error { arn := d.Get("policy_arn").(string) o, n := d.GetChange("roles") if o == nil { @@ -308,7 +308,7 @@ func updateRoles(conn *iam.IAM, d *schema.ResourceData, meta interface{}) error } return nil } -func updateGroups(conn *iam.IAM, d *schema.ResourceData, meta interface{}) error { +func updateGroups(conn *iam.IAM, d *schema.ResourceData) error { arn := d.Get("policy_arn").(string) o, n := 
d.GetChange("groups") if o == nil { diff --git a/aws/resource_aws_iam_policy_attachment_test.go b/aws/resource_aws_iam_policy_attachment_test.go index e50a5124db2..198006ef630 100644 --- a/aws/resource_aws_iam_policy_attachment_test.go +++ b/aws/resource_aws_iam_policy_attachment_test.go @@ -27,19 +27,19 @@ func TestAccAWSIAMPolicyAttachment_basic(t *testing.T) { policyName := fmt.Sprintf("tf-acc-policy-pa-basic-%s", rString) attachmentName := fmt.Sprintf("tf-acc-attachment-pa-basic-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSPolicyAttachmentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSPolicyAttachConfig(userName, roleName, groupName, policyName, attachmentName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSPolicyAttachmentExists("aws_iam_policy_attachment.test-attach", 3, &out), testAccCheckAWSPolicyAttachmentAttributes([]string{userName}, []string{roleName}, []string{groupName}, &out), ), }, - resource.TestStep{ + { Config: testAccAWSPolicyAttachConfigUpdate(userName, userName2, userName3, roleName, roleName2, roleName3, groupName, groupName2, groupName3, @@ -62,12 +62,12 @@ func TestAccAWSIAMPolicyAttachment_paginatedEntities(t *testing.T) { policyName := fmt.Sprintf("tf-acc-policy-pa-pe-%s-", rString) attachmentName := fmt.Sprintf("tf-acc-attachment-pa-pe-%s-", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSPolicyAttachmentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSPolicyPaginatedAttachConfig(userNamePrefix, policyName, attachmentName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSPolicyAttachmentExists("aws_iam_policy_attachment.test-paginated-attach", 101, &out), diff --git a/aws/resource_aws_iam_policy_test.go b/aws/resource_aws_iam_policy_test.go index a63a75eb695..59b069b8e91 100644 --- a/aws/resource_aws_iam_policy_test.go +++ b/aws/resource_aws_iam_policy_test.go @@ -3,50 +3,164 @@ package aws import ( "fmt" "regexp" - "strings" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) -func TestAWSPolicy_namePrefix(t *testing.T) { +func TestAccAWSIAMPolicy_basic(t *testing.T) { var out iam.GetPolicyOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_iam_policy.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAWSPolicyDestroy, + CheckDestroy: testAccCheckAWSIAMPolicyDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSPolicyPrefixNameConfig, + Config: testAccAWSIAMPolicyConfigName(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSPolicyExists("aws_iam_policy.policy", &out), - testAccCheckAWSPolicyGeneratedNamePrefix( - "aws_iam_policy.policy", "test-policy-"), + testAccCheckAWSIAMPolicyExists(resourceName, &out), + testAccCheckResourceAttrGlobalARN(resourceName, "arn", "iam", fmt.Sprintf("policy/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "description", ""), + 
resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "path", "/"), + resource.TestCheckResourceAttr(resourceName, "policy", `{"Version":"2012-10-17","Statement":[{"Action":["ec2:Describe*"],"Effect":"Allow","Resource":"*"}]}`), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSIAMPolicy_description(t *testing.T) { + var out iam.GetPolicyOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_iam_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMPolicyConfigDescription(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMPolicyExists(resourceName, &out), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSIAMPolicy_namePrefix(t *testing.T) { + var out iam.GetPolicyOutput + namePrefix := "tf-acc-test-" + resourceName := "aws_iam_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMPolicyConfigNamePrefix(namePrefix), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMPolicyExists(resourceName, &out), + resource.TestMatchResourceAttr(resourceName, "name", regexp.MustCompile(fmt.Sprintf("^%s", namePrefix))), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name_prefix"}, + }, + }, + }) +} + +func TestAccAWSIAMPolicy_path(t *testing.T) { + var out iam.GetPolicyOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_iam_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMPolicyConfigPath(rName, "/path1/"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMPolicyExists(resourceName, &out), + resource.TestCheckResourceAttr(resourceName, "path", "/path1/"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func TestAWSPolicy_invalidJson(t *testing.T) { - resource.Test(t, resource.TestCase{ +func TestAccAWSIAMPolicy_policy(t *testing.T) { + var out iam.GetPolicyOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_iam_policy.test" + policy1 := "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Action\":[\"ec2:Describe*\"],\"Effect\":\"Allow\",\"Resource\":\"*\"}]}" + policy2 := "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Action\":[\"ec2:*\"],\"Effect\":\"Allow\",\"Resource\":\"*\"}]}" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAWSPolicyDestroy, + CheckDestroy: testAccCheckAWSIAMPolicyDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSPolicyInvalidJsonConfig, + Config: testAccAWSIAMPolicyConfigPolicy(rName, "not-json"), ExpectError: regexp.MustCompile("invalid JSON"), }, + { + Config: 
testAccAWSIAMPolicyConfigPolicy(rName, policy1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMPolicyExists(resourceName, &out), + resource.TestCheckResourceAttr(resourceName, "policy", policy1), + ), + }, + { + Config: testAccAWSIAMPolicyConfigPolicy(rName, policy2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMPolicyExists(resourceName, &out), + resource.TestCheckResourceAttr(resourceName, "policy", policy2), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func testAccCheckAWSPolicyExists(resource string, res *iam.GetPolicyOutput) resource.TestCheckFunc { +func testAccCheckAWSIAMPolicyExists(resource string, res *iam.GetPolicyOutput) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resource] if !ok { @@ -72,60 +186,75 @@ func testAccCheckAWSPolicyExists(resource string, res *iam.GetPolicyOutput) reso } } -func testAccCheckAWSPolicyGeneratedNamePrefix(resource, prefix string) resource.TestCheckFunc { - return func(s *terraform.State) error { - r, ok := s.RootModule().Resources[resource] - if !ok { - return fmt.Errorf("Resource not found") +func testAccCheckAWSIAMPolicyDestroy(s *terraform.State) error { + iamconn := testAccProvider.Meta().(*AWSClient).iamconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_iam_policy" { + continue } - name, ok := r.Primary.Attributes["name"] - if !ok { - return fmt.Errorf("Name attr not found: %#v", r.Primary.Attributes) + + _, err := iamconn.GetPolicy(&iam.GetPolicyInput{ + PolicyArn: aws.String(rs.Primary.ID), + }) + + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + continue } - if !strings.HasPrefix(name, prefix) { - return fmt.Errorf("Name: %q, does not have prefix: %q", name, prefix) + + if err != nil { + return err } - return nil + + return fmt.Errorf("IAM Policy (%s) still exists", rs.Primary.ID) } + + return nil } -const testAccAWSPolicyPrefixNameConfig = ` -resource "aws_iam_policy" "policy" { - name_prefix = "test-policy-" - path = "/" - policy = < 0 { + _, err := iamconn.UntagRole(&iam.UntagRoleInput{ + RoleName: aws.String(d.Id()), + TagKeys: tagKeysIam(r), + }) + if err != nil { + return fmt.Errorf("error deleting IAM role tags: %s", err) + } + } + + if len(c) > 0 { + input := &iam.TagRoleInput{ + RoleName: aws.String(d.Id()), + Tags: c, + } + _, err := iamconn.TagRole(input) + if err != nil { + return fmt.Errorf("error update IAM role tags: %s", err) + } + } + } + + return resourceAwsIamRoleRead(d, meta) } func resourceAwsIamRoleDelete(d *schema.ResourceData, meta interface{}) error { @@ -287,7 +382,11 @@ func resourceAwsIamRoleDelete(d *schema.ResourceData, meta interface{}) error { RoleName: aws.String(d.Id()), }) if err != nil { - return fmt.Errorf("Error deleting IAM Role %s: %s", d.Id(), err) + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + log.Printf("[WARN] Role policy attachment (%s) was already removed from role (%s)", aws.StringValue(parn), d.Id()) + } else { + return fmt.Errorf("Error deleting IAM Role %s: %s", d.Id(), err) + } } } } @@ -312,22 +411,25 @@ func resourceAwsIamRoleDelete(d *schema.ResourceData, meta interface{}) error { RoleName: aws.String(d.Id()), }) if err != nil { - return fmt.Errorf("Error deleting inline policy of IAM Role %s: %s", d.Id(), err) + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + log.Printf("[WARN] Inline role policy (%s) was already removed from role (%s)", aws.StringValue(pname), d.Id()) + } else { + 
return fmt.Errorf("Error deleting inline policy of IAM Role %s: %s", d.Id(), err) + } } } } } - request := &iam.DeleteRoleInput{ + deleteRoleInput := &iam.DeleteRoleInput{ RoleName: aws.String(d.Id()), } // IAM is eventually consistent and deletion of attached policies may take time return resource.Retry(30*time.Second, func() *resource.RetryError { - _, err := iamconn.DeleteRole(request) + _, err := iamconn.DeleteRole(deleteRoleInput) if err != nil { - awsErr, ok := err.(awserr.Error) - if ok && awsErr.Code() == "DeleteConflict" { + if isAWSErr(err, iam.ErrCodeDeleteConflictException, "") { return resource.RetryableError(err) } diff --git a/aws/resource_aws_iam_role_policy.go b/aws/resource_aws_iam_role_policy.go index ec05a225953..5ca6096f4f2 100644 --- a/aws/resource_aws_iam_role_policy.go +++ b/aws/resource_aws_iam_role_policy.go @@ -41,10 +41,11 @@ func resourceAwsIamRolePolicy() *schema.Resource { ValidateFunc: validateIamRolePolicyName, }, "name_prefix": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: validateIamRolePolicyNamePrefix, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateIamRolePolicyNamePrefix, }, "role": { Type: schema.TypeString, @@ -78,7 +79,7 @@ func resourceAwsIamRolePolicyPut(d *schema.ResourceData, meta interface{}) error } d.SetId(fmt.Sprintf("%s:%s", *request.RoleName, *request.PolicyName)) - return nil + return resourceAwsIamRolePolicyRead(d, meta) } func resourceAwsIamRolePolicyRead(d *schema.ResourceData, meta interface{}) error { diff --git a/aws/resource_aws_iam_role_policy_attachment.go b/aws/resource_aws_iam_role_policy_attachment.go index bb72f879a0b..48ac3e91a85 100644 --- a/aws/resource_aws_iam_role_policy_attachment.go +++ b/aws/resource_aws_iam_role_policy_attachment.go @@ -3,6 +3,7 @@ package aws import ( "fmt" "log" + "strings" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -16,14 +17,17 @@ func resourceAwsIamRolePolicyAttachment() *schema.Resource { Create: resourceAwsIamRolePolicyAttachmentCreate, Read: resourceAwsIamRolePolicyAttachmentRead, Delete: resourceAwsIamRolePolicyAttachmentDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsIamRolePolicyAttachmentImport, + }, Schema: map[string]*schema.Schema{ - "role": &schema.Schema{ + "role": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy_arn": &schema.Schema{ + "policy_arn": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -40,7 +44,7 @@ func resourceAwsIamRolePolicyAttachmentCreate(d *schema.ResourceData, meta inter err := attachPolicyToRole(conn, role, arn) if err != nil { - return fmt.Errorf("[WARN] Error attaching policy %s to IAM Role %s: %v", arn, role, err) + return fmt.Errorf("Error attaching policy %s to IAM Role %s: %v", arn, role, err) } d.SetId(resource.PrefixedUniqueId(fmt.Sprintf("%s-", role))) @@ -98,11 +102,27 @@ func resourceAwsIamRolePolicyAttachmentDelete(d *schema.ResourceData, meta inter err := detachPolicyFromRole(conn, role, arn) if err != nil { - return fmt.Errorf("[WARN] Error removing policy %s from IAM Role %s: %v", arn, role, err) + return fmt.Errorf("Error removing policy %s from IAM Role %s: %v", arn, role, err) } return nil } +func resourceAwsIamRolePolicyAttachmentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + idParts := strings.SplitN(d.Id(), "/", 2) + if len(idParts) != 2 || idParts[0] == "" || idParts[1] == "" { + return nil, 
fmt.Errorf("unexpected format of ID (%q), expected /", d.Id()) + } + + roleName := idParts[0] + policyARN := idParts[1] + + d.Set("role", roleName) + d.Set("policy_arn", policyARN) + d.SetId(fmt.Sprintf("%s-%s", roleName, policyARN)) + + return []*schema.ResourceData{d}, nil +} + func attachPolicyToRole(conn *iam.IAM, role string, arn string) error { _, err := conn.AttachRolePolicy(&iam.AttachRolePolicyInput{ RoleName: aws.String(role), diff --git a/aws/resource_aws_iam_role_policy_attachment_test.go b/aws/resource_aws_iam_role_policy_attachment_test.go index 7a723bc077f..6f0918629fd 100644 --- a/aws/resource_aws_iam_role_policy_attachment_test.go +++ b/aws/resource_aws_iam_role_policy_attachment_test.go @@ -19,7 +19,7 @@ func TestAccAWSRolePolicyAttachment_basic(t *testing.T) { testPolicy2 := fmt.Sprintf("tf-acctest2-%d", rInt) testPolicy3 := fmt.Sprintf("tf-acctest3-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRolePolicyAttachmentDestroy, @@ -31,6 +31,27 @@ func TestAccAWSRolePolicyAttachment_basic(t *testing.T) { testAccCheckAWSRolePolicyAttachmentAttributes([]string{testPolicy}, &out), ), }, + { + ResourceName: "aws_iam_role_policy_attachment.test-attach", + ImportState: true, + ImportStateIdFunc: testAccAWSIAMRolePolicyAttachmentImportStateIdFunc("aws_iam_role_policy_attachment.test-attach"), + // We do not have a way to align IDs since the Create function uses resource.PrefixedUniqueId() + // Failed state verification, resource with ID ROLE-POLICYARN not found + // ImportStateVerify: true, + ImportStateCheck: func(s []*terraform.InstanceState) error { + if len(s) != 1 { + return fmt.Errorf("expected 1 state: %#v", s) + } + + rs := s[0] + + if !strings.HasPrefix(rs.Attributes["policy_arn"], "arn:") { + return fmt.Errorf("expected policy_arn attribute to be set and begin with arn:, received: %s", rs.Attributes["policy_arn"]) + } + + return nil + }, + }, { Config: testAccAWSRolePolicyAttachConfigUpdate(rInt), Check: resource.ComposeTestCheckFunc( @@ -93,6 +114,17 @@ func testAccCheckAWSRolePolicyAttachmentAttributes(policies []string, out *iam.L } } +func testAccAWSIAMRolePolicyAttachmentImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s", rs.Primary.Attributes["role"], rs.Primary.Attributes["policy_arn"]), nil + } +} + func testAccAWSRolePolicyAttachConfig(rInt int) string { return fmt.Sprintf(` resource "aws_iam_role" "role" { diff --git a/aws/resource_aws_iam_role_policy_test.go b/aws/resource_aws_iam_role_policy_test.go index ea9ceb35659..ef25b3f3e49 100644 --- a/aws/resource_aws_iam_role_policy_test.go +++ b/aws/resource_aws_iam_role_policy_test.go @@ -1,6 +1,7 @@ package aws import ( + "errors" "fmt" "regexp" "testing" @@ -13,12 +14,35 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSIAMRolePolicy_importBasic(t *testing.T) { + suffix := randomString(10) + resourceName := fmt.Sprintf("aws_iam_role_policy.foo_%s", suffix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckIAMRolePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsIamRolePolicyConfig(suffix), + }, + + { + 
ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSIAMRolePolicy_basic(t *testing.T) { + var rolePolicy1, rolePolicy2, rolePolicy3 iam.GetRolePolicyOutput role := acctest.RandString(10) policy1 := acctest.RandString(10) policy2 := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckIAMRolePolicyDestroy, @@ -26,19 +50,28 @@ func TestAccAWSIAMRolePolicy_basic(t *testing.T) { { Config: testAccIAMRolePolicyConfig(role, policy1), Check: resource.ComposeTestCheckFunc( - testAccCheckIAMRolePolicy( + testAccCheckIAMRolePolicyExists( "aws_iam_role.role", "aws_iam_role_policy.foo", + &rolePolicy1, ), ), }, { Config: testAccIAMRolePolicyConfigUpdate(role, policy1, policy2), Check: resource.ComposeTestCheckFunc( - testAccCheckIAMRolePolicy( + testAccCheckIAMRolePolicyExists( + "aws_iam_role.role", + "aws_iam_role_policy.foo", + &rolePolicy2, + ), + testAccCheckIAMRolePolicyExists( "aws_iam_role.role", "aws_iam_role_policy.bar", + &rolePolicy3, ), + testAccCheckAWSIAMRolePolicyNameMatches(&rolePolicy1, &rolePolicy2), + testAccCheckAWSIAMRolePolicyNameChanged(&rolePolicy1, &rolePolicy3), ), }, }, @@ -46,21 +79,36 @@ func TestAccAWSIAMRolePolicy_basic(t *testing.T) { } func TestAccAWSIAMRolePolicy_namePrefix(t *testing.T) { + var rolePolicy1, rolePolicy2 iam.GetRolePolicyOutput role := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_iam_role_policy.test", Providers: testAccProviders, CheckDestroy: testAccCheckIAMRolePolicyDestroy, Steps: []resource.TestStep{ { - Config: testAccIAMRolePolicyConfig_namePrefix(role), + Config: testAccIAMRolePolicyConfig_namePrefix(role, "*"), Check: resource.ComposeTestCheckFunc( - testAccCheckIAMRolePolicy( + testAccCheckIAMRolePolicyExists( "aws_iam_role.test", "aws_iam_role_policy.test", + &rolePolicy1, ), + resource.TestCheckResourceAttrSet("aws_iam_role_policy.test", "name"), + ), + }, + { + Config: testAccIAMRolePolicyConfig_namePrefix(role, "ec2:*"), + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMRolePolicyExists( + "aws_iam_role.test", + "aws_iam_role_policy.test", + &rolePolicy2, + ), + testAccCheckAWSIAMRolePolicyNameMatches(&rolePolicy1, &rolePolicy2), + resource.TestCheckResourceAttrSet("aws_iam_role_policy.test", "name"), ), }, }, @@ -68,21 +116,36 @@ func TestAccAWSIAMRolePolicy_namePrefix(t *testing.T) { } func TestAccAWSIAMRolePolicy_generatedName(t *testing.T) { + var rolePolicy1, rolePolicy2 iam.GetRolePolicyOutput role := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_iam_role_policy.test", Providers: testAccProviders, CheckDestroy: testAccCheckIAMRolePolicyDestroy, Steps: []resource.TestStep{ { - Config: testAccIAMRolePolicyConfig_generatedName(role), + Config: testAccIAMRolePolicyConfig_generatedName(role, "*"), + Check: resource.ComposeTestCheckFunc( + testAccCheckIAMRolePolicyExists( + "aws_iam_role.test", + "aws_iam_role_policy.test", + &rolePolicy1, + ), + resource.TestCheckResourceAttrSet("aws_iam_role_policy.test", "name"), + ), + }, + { + Config: testAccIAMRolePolicyConfig_generatedName(role, "ec2:*"), Check: resource.ComposeTestCheckFunc( - testAccCheckIAMRolePolicy( + 
testAccCheckIAMRolePolicyExists( "aws_iam_role.test", "aws_iam_role_policy.test", + &rolePolicy2, ), + testAccCheckAWSIAMRolePolicyNameMatches(&rolePolicy1, &rolePolicy2), + resource.TestCheckResourceAttrSet("aws_iam_role_policy.test", "name"), ), }, }, @@ -92,7 +155,7 @@ func TestAccAWSIAMRolePolicy_generatedName(t *testing.T) { func TestAccAWSIAMRolePolicy_invalidJSON(t *testing.T) { role := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckIAMRolePolicyDestroy, @@ -140,9 +203,10 @@ func testAccCheckIAMRolePolicyDestroy(s *terraform.State) error { return nil } -func testAccCheckIAMRolePolicy( +func testAccCheckIAMRolePolicyExists( iamRoleResource string, - iamRolePolicyResource string) resource.TestCheckFunc { + iamRolePolicyResource string, + rolePolicy *iam.GetRolePolicyOutput) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[iamRoleResource] if !ok { @@ -164,7 +228,7 @@ func testAccCheckIAMRolePolicy( return err } - _, err = iamconn.GetRolePolicy(&iam.GetRolePolicyInput{ + output, err := iamconn.GetRolePolicy(&iam.GetRolePolicyInput{ RoleName: aws.String(role), PolicyName: aws.String(name), }) @@ -172,10 +236,57 @@ func testAccCheckIAMRolePolicy( return err } + *rolePolicy = *output + return nil } } +func testAccCheckAWSIAMRolePolicyNameChanged(i, j *iam.GetRolePolicyOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.StringValue(i.PolicyName) == aws.StringValue(j.PolicyName) { + return errors.New("IAM Role Policy name did not change") + } + + return nil + } +} + +func testAccCheckAWSIAMRolePolicyNameMatches(i, j *iam.GetRolePolicyOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.StringValue(i.PolicyName) != aws.StringValue(j.PolicyName) { + return errors.New("IAM Role Policy name did not match") + } + + return nil + } +} + +func testAccAwsIamRolePolicyConfig(suffix string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "role_%[1]s" { + name = "tf_test_role_test_%[1]s" + path = "/" + assume_role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"ec2.amazonaws.com\"},\"Action\":\"sts:AssumeRole\"}]}" +} + +resource "aws_iam_role_policy" "foo_%[1]s" { + name = "tf_test_policy_test_%[1]s" + role = "${aws_iam_role.role_%[1]s.name}" + policy = < 0 && err == nil { + t.Fatalf("expected %q to trigger an error", tc.Input) + } + if serviceName != tc.ServiceName { + t.Fatalf("expected service name %q to be %q", serviceName, tc.ServiceName) + } + if roleName != tc.RoleName { + t.Fatalf("expected role name %q to be %q", roleName, tc.RoleName) + } + if customSuffix != tc.CustomSuffix { + t.Fatalf("expected custom suffix %q to be %q", customSuffix, tc.CustomSuffix) + } + } +} + +func TestAccAWSIAMServiceLinkedRole_basic(t *testing.T) { + resourceName := "aws_iam_service_linked_role.test" + awsServiceName := "elasticbeanstalk.amazonaws.com" + name := "AWSServiceRoleForElasticBeanstalk" + path := fmt.Sprintf("/aws-service-role/%s/", awsServiceName) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMServiceLinkedRoleDestroy, + Steps: []resource.TestStep{ + { + PreConfig: func() { + // Remove existing if possible + conn := 
testAccProvider.Meta().(*AWSClient).iamconn + deletionID, err := deleteIamServiceLinkedRole(conn, name) + if err != nil { + t.Fatalf("Error deleting service-linked role %s: %s", name, err) + } + if deletionID == "" { + return + } + + err = deleteIamServiceLinkedRoleWaiter(conn, deletionID) + if err != nil { + t.Fatalf("Error waiting for role (%s) to be deleted: %s", name, err) + } + }, + Config: testAccAWSIAMServiceLinkedRoleConfig(awsServiceName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMServiceLinkedRoleExists(resourceName), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:iam::[^:]+:role%s%s$", path, name))), + resource.TestCheckResourceAttr(resourceName, "aws_service_name", awsServiceName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "name", name), + resource.TestCheckResourceAttr(resourceName, "path", path), + resource.TestCheckResourceAttrSet(resourceName, "unique_id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSIAMServiceLinkedRole_CustomSuffix(t *testing.T) { + resourceName := "aws_iam_service_linked_role.test" + awsServiceName := "autoscaling.amazonaws.com" + customSuffix := acctest.RandomWithPrefix("tf-acc-test") + name := fmt.Sprintf("AWSServiceRoleForAutoScaling_%s", customSuffix) + path := fmt.Sprintf("/aws-service-role/%s/", awsServiceName) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMServiceLinkedRoleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMServiceLinkedRoleConfig_CustomSuffix(awsServiceName, customSuffix), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMServiceLinkedRoleExists(resourceName), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:iam::[^:]+:role%s%s$", path, name))), + resource.TestCheckResourceAttr(resourceName, "name", name), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSIAMServiceLinkedRole_Description(t *testing.T) { + resourceName := "aws_iam_service_linked_role.test" + awsServiceName := "autoscaling.amazonaws.com" + customSuffix := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIAMServiceLinkedRoleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSIAMServiceLinkedRoleConfig_Description(awsServiceName, customSuffix, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMServiceLinkedRoleExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + Config: testAccAWSIAMServiceLinkedRoleConfig_Description(awsServiceName, customSuffix, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIAMServiceLinkedRoleExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSIAMServiceLinkedRoleDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).iamconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != 
"aws_iam_service_linked_role" { + continue + } + + _, roleName, _, err := decodeIamServiceLinkedRoleID(rs.Primary.ID) + if err != nil { + return err + } + + params := &iam.GetRoleInput{ + RoleName: aws.String(roleName), + } + + _, err = conn.GetRole(params) + + if err == nil { + return fmt.Errorf("Service-Linked Role still exists: %q", rs.Primary.ID) + } + + if !isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + return err + } + } + + return nil + +} + +func testAccCheckAWSIAMServiceLinkedRoleExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + conn := testAccProvider.Meta().(*AWSClient).iamconn + _, roleName, _, err := decodeIamServiceLinkedRoleID(rs.Primary.ID) + if err != nil { + return err + } + + params := &iam.GetRoleInput{ + RoleName: aws.String(roleName), + } + + _, err = conn.GetRole(params) + + if err != nil { + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + return fmt.Errorf("Service-Linked Role doesn't exists: %q", rs.Primary.ID) + } + return err + } + + return nil + } +} + +func testAccAWSIAMServiceLinkedRoleConfig(awsServiceName string) string { + return fmt.Sprintf(` +resource "aws_iam_service_linked_role" "test" { + aws_service_name = "%s" +} +`, awsServiceName) +} + +func testAccAWSIAMServiceLinkedRoleConfig_CustomSuffix(awsServiceName, customSuffix string) string { + return fmt.Sprintf(` +resource "aws_iam_service_linked_role" "test" { + aws_service_name = "%s" + custom_suffix = "%s" +} +`, awsServiceName, customSuffix) +} + +func testAccAWSIAMServiceLinkedRoleConfig_Description(awsServiceName, customSuffix, description string) string { + return fmt.Sprintf(` +resource "aws_iam_service_linked_role" "test" { + aws_service_name = "%s" + custom_suffix = "%s" + description = "%s" +} +`, awsServiceName, customSuffix, description) +} diff --git a/aws/resource_aws_iam_user.go b/aws/resource_aws_iam_user.go index 082e9d4dd9d..c3a622a9214 100644 --- a/aws/resource_aws_iam_user.go +++ b/aws/resource_aws_iam_user.go @@ -4,12 +4,14 @@ import ( "fmt" "log" "regexp" + "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsIamUser() *schema.Resource { @@ -23,7 +25,7 @@ func resourceAwsIamUser() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, @@ -35,26 +37,32 @@ func resourceAwsIamUser() *schema.Resource { and inefficient. Still, there are other reasons one might want the UniqueID, so we can make it available. 
*/ - "unique_id": &schema.Schema{ + "unique_id": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ValidateFunc: validateAwsIamUserName, }, - "path": &schema.Schema{ + "path": { Type: schema.TypeString, Optional: true, Default: "/", }, - "force_destroy": &schema.Schema{ + "permissions_boundary": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 2048), + }, + "force_destroy": { Type: schema.TypeBool, Optional: true, Default: false, Description: "Delete user even if it has non-Terraform-managed IAM access keys, login profile or MFA devices", }, + "tags": tagsSchema(), }, } } @@ -69,13 +77,24 @@ func resourceAwsIamUserCreate(d *schema.ResourceData, meta interface{}) error { UserName: aws.String(name), } + if v, ok := d.GetOk("permissions_boundary"); ok && v.(string) != "" { + request.PermissionsBoundary = aws.String(v.(string)) + } + + if v, ok := d.GetOk("tags"); ok { + tags := tagsFromMapIAM(v.(map[string]interface{})) + request.Tags = tags + } + log.Println("[DEBUG] Create IAM User request:", request) createResp, err := iamconn.CreateUser(request) if err != nil { return fmt.Errorf("Error creating IAM User %s: %s", name, err) } - d.SetId(*createResp.User.UserName) - return resourceAwsIamUserReadResult(d, createResp.User) + + d.SetId(aws.StringValue(createResp.User.UserName)) + + return resourceAwsIamUserRead(d, meta) } func resourceAwsIamUserRead(d *schema.ResourceData, meta interface{}) error { @@ -85,37 +104,40 @@ func resourceAwsIamUserRead(d *schema.ResourceData, meta interface{}) error { UserName: aws.String(d.Id()), } - getResp, err := iamconn.GetUser(request) + output, err := iamconn.GetUser(request) if err != nil { - if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" { // XXX test me - log.Printf("[WARN] No IAM user by name (%s) found", d.Id()) + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + log.Printf("[WARN] No IAM user by name (%s) found, removing from state", d.Id()) d.SetId("") return nil } return fmt.Errorf("Error reading IAM User %s: %s", d.Id(), err) } - return resourceAwsIamUserReadResult(d, getResp.User) -} -func resourceAwsIamUserReadResult(d *schema.ResourceData, user *iam.User) error { - if err := d.Set("name", user.UserName); err != nil { - return err - } - if err := d.Set("arn", user.Arn); err != nil { - return err + if output == nil || output.User == nil { + log.Printf("[WARN] No IAM user by name (%s) found, removing from state", d.Id()) + d.SetId("") + return nil } - if err := d.Set("path", user.Path); err != nil { - return err + + d.Set("arn", output.User.Arn) + d.Set("name", output.User.UserName) + d.Set("path", output.User.Path) + if output.User.PermissionsBoundary != nil { + d.Set("permissions_boundary", output.User.PermissionsBoundary.PermissionsBoundaryArn) } - if err := d.Set("unique_id", user.UserId); err != nil { - return err + d.Set("unique_id", output.User.UserId) + if err := d.Set("tags", tagsToMapIAM(output.User.Tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) } + return nil } func resourceAwsIamUserUpdate(d *schema.ResourceData, meta interface{}) error { + iamconn := meta.(*AWSClient).iamconn + if d.HasChange("name") || d.HasChange("path") { - iamconn := meta.(*AWSClient).iamconn on, nn := d.GetChange("name") _, np := d.GetChange("path") @@ -128,7 +150,7 @@ func resourceAwsIamUserUpdate(d *schema.ResourceData, meta interface{}) error { log.Println("[DEBUG] Update IAM User 
request:", request) _, err := iamconn.UpdateUser(request) if err != nil { - if iamerr, ok := err.(awserr.Error); ok && iamerr.Code() == "NoSuchEntity" { + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { log.Printf("[WARN] No IAM user by name (%s) found", d.Id()) d.SetId("") return nil @@ -137,9 +159,60 @@ func resourceAwsIamUserUpdate(d *schema.ResourceData, meta interface{}) error { } d.SetId(nn.(string)) - return resourceAwsIamUserRead(d, meta) } - return nil + + if d.HasChange("permissions_boundary") { + permissionsBoundary := d.Get("permissions_boundary").(string) + if permissionsBoundary != "" { + input := &iam.PutUserPermissionsBoundaryInput{ + PermissionsBoundary: aws.String(permissionsBoundary), + UserName: aws.String(d.Id()), + } + _, err := iamconn.PutUserPermissionsBoundary(input) + if err != nil { + return fmt.Errorf("error updating IAM User permissions boundary: %s", err) + } + } else { + input := &iam.DeleteUserPermissionsBoundaryInput{ + UserName: aws.String(d.Id()), + } + _, err := iamconn.DeleteUserPermissionsBoundary(input) + if err != nil { + return fmt.Errorf("error deleting IAM User permissions boundary: %s", err) + } + } + } + + if d.HasChange("tags") { + // Reset all tags to empty set + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + c, r := diffTagsIAM(tagsFromMapIAM(o), tagsFromMapIAM(n)) + + if len(r) > 0 { + _, err := iamconn.UntagUser(&iam.UntagUserInput{ + UserName: aws.String(d.Id()), + TagKeys: tagKeysIam(r), + }) + if err != nil { + return fmt.Errorf("error deleting IAM user tags: %s", err) + } + } + + if len(c) > 0 { + input := &iam.TagUserInput{ + UserName: aws.String(d.Id()), + Tags: c, + } + _, err := iamconn.TagUser(input) + if err != nil { + return fmt.Errorf("error update IAM user tags: %s", err) + } + } + } + + return resourceAwsIamUserRead(d, meta) } func resourceAwsIamUserDelete(d *schema.ResourceData, meta interface{}) error { @@ -170,72 +243,41 @@ func resourceAwsIamUserDelete(d *schema.ResourceData, meta interface{}) error { // All access keys, MFA devices and login profile for the user must be removed if d.Get("force_destroy").(bool) { - var accessKeys []string - listAccessKeys := &iam.ListAccessKeysInput{ - UserName: aws.String(d.Id()), - } - pageOfAccessKeys := func(page *iam.ListAccessKeysOutput, lastPage bool) (shouldContinue bool) { - for _, k := range page.AccessKeyMetadata { - accessKeys = append(accessKeys, *k.AccessKeyId) - } - return !lastPage - } - err = iamconn.ListAccessKeysPages(listAccessKeys, pageOfAccessKeys) + + err = deleteAwsIamUserAccessKeys(iamconn, d.Id()) if err != nil { - return fmt.Errorf("Error removing access keys of user %s: %s", d.Id(), err) - } - for _, k := range accessKeys { - _, err := iamconn.DeleteAccessKey(&iam.DeleteAccessKeyInput{ - UserName: aws.String(d.Id()), - AccessKeyId: aws.String(k), - }) - if err != nil { - return fmt.Errorf("Error deleting access key %s: %s", k, err) - } + return err } - var MFADevices []string - listMFADevices := &iam.ListMFADevicesInput{ - UserName: aws.String(d.Id()), - } - pageOfMFADevices := func(page *iam.ListMFADevicesOutput, lastPage bool) (shouldContinue bool) { - for _, m := range page.MFADevices { - MFADevices = append(MFADevices, *m.SerialNumber) - } - return !lastPage - } - err = iamconn.ListMFADevicesPages(listMFADevices, pageOfMFADevices) + err = deleteAwsIamUserSSHKeys(iamconn, d.Id()) if err != nil { - return fmt.Errorf("Error removing MFA devices of user %s: %s", d.Id(), err) + return err } - 
for _, m := range MFADevices { - _, err := iamconn.DeactivateMFADevice(&iam.DeactivateMFADeviceInput{ - UserName: aws.String(d.Id()), - SerialNumber: aws.String(m), - }) - if err != nil { - return fmt.Errorf("Error deactivating MFA device %s: %s", m, err) - } + + err = deleteAwsIamUserMFADevices(iamconn, d.Id()) + if err != nil { + return err } - _, err = iamconn.DeleteLoginProfile(&iam.DeleteLoginProfileInput{ - UserName: aws.String(d.Id()), - }) + err = deleteAwsIamUserLoginProfile(iamconn, d.Id()) if err != nil { - if iamerr, ok := err.(awserr.Error); !ok || iamerr.Code() != "NoSuchEntity" { - return fmt.Errorf("Error deleting Account Login Profile: %s", err) - } + return err } } - request := &iam.DeleteUserInput{ + deleteUserInput := &iam.DeleteUserInput{ UserName: aws.String(d.Id()), } - log.Println("[DEBUG] Delete IAM User request:", request) - if _, err := iamconn.DeleteUser(request); err != nil { + log.Println("[DEBUG] Delete IAM User request:", deleteUserInput) + _, err = iamconn.DeleteUser(deleteUserInput) + if err != nil { + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + return nil + } return fmt.Errorf("Error deleting IAM User %s: %s", d.Id(), err) } + return nil } @@ -248,3 +290,118 @@ func validateAwsIamUserName(v interface{}, k string) (ws []string, errors []erro } return } + +func deleteAwsIamUserSSHKeys(svc *iam.IAM, username string) error { + var publicKeys []string + var err error + + listSSHPublicKeys := &iam.ListSSHPublicKeysInput{ + UserName: aws.String(username), + } + pageOfListSSHPublicKeys := func(page *iam.ListSSHPublicKeysOutput, lastPage bool) (shouldContinue bool) { + for _, k := range page.SSHPublicKeys { + publicKeys = append(publicKeys, *k.SSHPublicKeyId) + } + return !lastPage + } + err = svc.ListSSHPublicKeysPages(listSSHPublicKeys, pageOfListSSHPublicKeys) + if err != nil { + return fmt.Errorf("Error removing public SSH keys of user %s: %s", username, err) + } + for _, k := range publicKeys { + _, err := svc.DeleteSSHPublicKey(&iam.DeleteSSHPublicKeyInput{ + UserName: aws.String(username), + SSHPublicKeyId: aws.String(k), + }) + if err != nil { + return fmt.Errorf("Error deleting public SSH key %s: %s", k, err) + } + } + + return nil +} + +func deleteAwsIamUserMFADevices(svc *iam.IAM, username string) error { + var MFADevices []string + var err error + + listMFADevices := &iam.ListMFADevicesInput{ + UserName: aws.String(username), + } + pageOfMFADevices := func(page *iam.ListMFADevicesOutput, lastPage bool) (shouldContinue bool) { + for _, m := range page.MFADevices { + MFADevices = append(MFADevices, *m.SerialNumber) + } + return !lastPage + } + err = svc.ListMFADevicesPages(listMFADevices, pageOfMFADevices) + if err != nil { + return fmt.Errorf("Error removing MFA devices of user %s: %s", username, err) + } + for _, m := range MFADevices { + _, err := svc.DeactivateMFADevice(&iam.DeactivateMFADeviceInput{ + UserName: aws.String(username), + SerialNumber: aws.String(m), + }) + if err != nil { + return fmt.Errorf("Error deactivating MFA device %s: %s", m, err) + } + } + + return nil +} + +func deleteAwsIamUserLoginProfile(svc *iam.IAM, username string) error { + var err error + err = resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err = svc.DeleteLoginProfile(&iam.DeleteLoginProfileInput{ + UserName: aws.String(username), + }) + if err != nil { + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + return nil + } + // EntityTemporarilyUnmodifiable: Login Profile for User XXX cannot be modified while login profile is 
being created. + if isAWSErr(err, iam.ErrCodeEntityTemporarilyUnmodifiableException, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + + if err != nil { + return fmt.Errorf("Error deleting Account Login Profile: %s", err) + } + + return nil +} + +func deleteAwsIamUserAccessKeys(svc *iam.IAM, username string) error { + var accessKeys []string + var err error + listAccessKeys := &iam.ListAccessKeysInput{ + UserName: aws.String(username), + } + pageOfAccessKeys := func(page *iam.ListAccessKeysOutput, lastPage bool) (shouldContinue bool) { + for _, k := range page.AccessKeyMetadata { + accessKeys = append(accessKeys, *k.AccessKeyId) + } + return !lastPage + } + err = svc.ListAccessKeysPages(listAccessKeys, pageOfAccessKeys) + if err != nil { + return fmt.Errorf("Error removing access keys of user %s: %s", username, err) + } + for _, k := range accessKeys { + _, err := svc.DeleteAccessKey(&iam.DeleteAccessKeyInput{ + UserName: aws.String(username), + AccessKeyId: aws.String(k), + }) + if err != nil { + return fmt.Errorf("Error deleting access key %s: %s", k, err) + } + } + + return nil +} diff --git a/aws/resource_aws_iam_user_group_membership.go b/aws/resource_aws_iam_user_group_membership.go new file mode 100644 index 00000000000..e5333a38475 --- /dev/null +++ b/aws/resource_aws_iam_user_group_membership.go @@ -0,0 +1,167 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/service/iam" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsIamUserGroupMembership() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsIamUserGroupMembershipCreate, + Read: resourceAwsIamUserGroupMembershipRead, + Update: resourceAwsIamUserGroupMembershipUpdate, + Delete: resourceAwsIamUserGroupMembershipDelete, + + Schema: map[string]*schema.Schema{ + "user": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "groups": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + } +} + +func resourceAwsIamUserGroupMembershipCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iamconn + + user := d.Get("user").(string) + groupList := expandStringList(d.Get("groups").(*schema.Set).List()) + + if err := addUserToGroups(conn, user, groupList); err != nil { + return err + } + + d.SetId(resource.UniqueId()) + + return resourceAwsIamUserGroupMembershipRead(d, meta) +} + +func resourceAwsIamUserGroupMembershipRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iamconn + + user := d.Get("user").(string) + groups := d.Get("groups").(*schema.Set) + var gl []string + var marker *string + + for { + resp, err := conn.ListGroupsForUser(&iam.ListGroupsForUserInput{ + UserName: &user, + Marker: marker, + }) + if err != nil { + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + // no such user + log.Printf("[WARN] Groups not found for user (%s), removing from state", user) + d.SetId("") + return nil + } + return err + } + + for _, g := range resp.Groups { + // only read in the groups we care about + if groups.Contains(*g.GroupName) { + gl = append(gl, *g.GroupName) + } + } + + if !*resp.IsTruncated { + break + } + + marker = resp.Marker + } + + if err := d.Set("groups", gl); err != nil { + return fmt.Errorf("Error setting group list from IAM (%s), error: %s", user, err) + } + + return nil +} + +func 
resourceAwsIamUserGroupMembershipUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iamconn + + if d.HasChange("groups") { + user := d.Get("user").(string) + + o, n := d.GetChange("groups") + if o == nil { + o = new(schema.Set) + } + if n == nil { + n = new(schema.Set) + } + + os := o.(*schema.Set) + ns := n.(*schema.Set) + remove := expandStringList(os.Difference(ns).List()) + add := expandStringList(ns.Difference(os).List()) + + if err := removeUserFromGroups(conn, user, remove); err != nil { + return err + } + + if err := addUserToGroups(conn, user, add); err != nil { + return err + } + } + + return resourceAwsIamUserGroupMembershipRead(d, meta) +} + +func resourceAwsIamUserGroupMembershipDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iamconn + user := d.Get("user").(string) + groups := expandStringList(d.Get("groups").(*schema.Set).List()) + + if err := removeUserFromGroups(conn, user, groups); err != nil { + return err + } + + return nil +} + +func removeUserFromGroups(conn *iam.IAM, user string, groups []*string) error { + for _, group := range groups { + _, err := conn.RemoveUserFromGroup(&iam.RemoveUserFromGroupInput{ + UserName: &user, + GroupName: group, + }) + if err != nil { + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + continue + } + return err + } + } + + return nil +} + +func addUserToGroups(conn *iam.IAM, user string, groups []*string) error { + for _, group := range groups { + _, err := conn.AddUserToGroup(&iam.AddUserToGroupInput{ + UserName: &user, + GroupName: group, + }) + if err != nil { + return err + } + } + + return nil +} diff --git a/aws/resource_aws_iam_user_group_membership_test.go b/aws/resource_aws_iam_user_group_membership_test.go new file mode 100644 index 00000000000..30661d68ba0 --- /dev/null +++ b/aws/resource_aws_iam_user_group_membership_test.go @@ -0,0 +1,296 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSUserGroupMembership_basic(t *testing.T) { + rString := acctest.RandString(8) + userName1 := fmt.Sprintf("tf-acc-ugm-basic-user1-%s", rString) + userName2 := fmt.Sprintf("tf-acc-ugm-basic-user2-%s", rString) + groupName1 := fmt.Sprintf("tf-acc-ugm-basic-group1-%s", rString) + groupName2 := fmt.Sprintf("tf-acc-ugm-basic-group2-%s", rString) + groupName3 := fmt.Sprintf("tf-acc-ugm-basic-group3-%s", rString) + + usersAndGroupsConfig := testAccAWSUserGroupMembershipConfigUsersAndGroups(userName1, userName2, groupName1, groupName2, groupName3) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccAWSUserGroupMembershipDestroy, + Steps: []resource.TestStep{ + // simplest test + { + Config: usersAndGroupsConfig + testAccAWSUserGroupMembershipConfigInit, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test1", "user", userName1), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName1, []string{groupName1}, []string{groupName2, groupName3}), + ), + }, + // test adding an additional group to an existing resource + { + Config: usersAndGroupsConfig + testAccAWSUserGroupMembershipConfigAddOne, + Check: resource.ComposeTestCheckFunc( + 
resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test1", "user", userName1), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName1, []string{groupName1, groupName2}, []string{groupName3}), + ), + }, + // test adding multiple resources for the same user, and resources with the same groups for another user + { + Config: usersAndGroupsConfig + testAccAWSUserGroupMembershipConfigAddAll, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test1", "user", userName1), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test2", "user", userName1), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user2_test1", "user", userName2), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user2_test2", "user", userName2), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName1, []string{groupName1, groupName2, groupName3}, []string{}), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName2, []string{groupName1, groupName2, groupName3}, []string{}), + ), + }, + // test that nothing happens when we apply the same config again + { + Config: usersAndGroupsConfig + testAccAWSUserGroupMembershipConfigAddAll, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test1", "user", userName1), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test2", "user", userName1), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user2_test1", "user", userName2), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user2_test2", "user", userName2), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName1, []string{groupName1, groupName2, groupName3}, []string{}), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName2, []string{groupName1, groupName2, groupName3}, []string{}), + ), + }, + // test removing a group + { + Config: usersAndGroupsConfig + testAccAWSUserGroupMembershipConfigRemoveGroup, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test1", "user", userName1), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test2", "user", userName1), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user2_test1", "user", userName2), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user2_test2", "user", userName2), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName1, []string{groupName1, groupName3}, []string{groupName2}), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName2, []string{groupName1, groupName2}, []string{groupName3}), + ), + }, + // test removing a resource + { + Config: usersAndGroupsConfig + testAccAWSUserGroupMembershipConfigDeleteResource, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test1", "user", userName1), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user1_test2", "user", userName1), + resource.TestCheckResourceAttr("aws_iam_user_group_membership.user2_test1", "user", userName2), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName1, []string{groupName1, groupName3}, []string{groupName2}), + testAccAWSUserGroupMembershipCheckGroupListForUser(userName2, []string{groupName1}, []string{groupName2, groupName3}), + ), + }, + }, + }) +} + +func testAccAWSUserGroupMembershipDestroy(s *terraform.State) error { + conn := 
testAccProvider.Meta().(*AWSClient).iamconn + + for _, rs := range s.RootModule().Resources { + if rs.Type == "aws_iam_user_group_membership" { + input := &iam.ListGroupsForUserInput{ + UserName: aws.String(rs.Primary.Attributes["user"]), + } + foundGroups := 0 + err := conn.ListGroupsForUserPages(input, func(page *iam.ListGroupsForUserOutput, lastPage bool) bool { + if len(page.Groups) > 0 { + foundGroups = foundGroups + len(page.Groups) + } + return !lastPage + }) + if err != nil { + if isAWSErr(err, iam.ErrCodeNoSuchEntityException, "") { + continue + } + return err + } + if foundGroups > 0 { + return fmt.Errorf("Expected all group membership for user to be removed, found: %d", foundGroups) + } + } + } + + return nil +} + +func testAccAWSUserGroupMembershipCheckGroupListForUser(userName string, groups []string, groupsNeg []string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).iamconn + + // get list of groups for user + userGroupList, err := conn.ListGroupsForUser(&iam.ListGroupsForUserInput{ + UserName: &userName, + }) + if err != nil { + return fmt.Errorf("Error validing user group list for %s: %s", userName, err) + } + + // check required groups + GROUP: + for _, group := range groups { + for _, groupFound := range userGroupList.Groups { + if group == *groupFound.GroupName { + continue GROUP // found our group, start checking the next one + } + } + // group not found, return an error + return fmt.Errorf("Required group not found for %s: %s", userName, group) + } + + // check that none of groupsNeg are present + for _, group := range groupsNeg { + for _, groupFound := range userGroupList.Groups { + if group == *groupFound.GroupName { + return fmt.Errorf("Unexpected group found for %s: %s", userName, group) + } + } + } + + return nil + } +} + +// users and groups for all other tests +func testAccAWSUserGroupMembershipConfigUsersAndGroups(userName1, userName2, groupName1, groupName2, groupName3 string) string { + return fmt.Sprintf(` +resource "aws_iam_user" "user1" { + name = "%s" + force_destroy = true +} + +resource "aws_iam_user" "user2" { + name = "%s" + force_destroy = true +} + +resource "aws_iam_group" "group1" { + name = "%s" +} + +resource "aws_iam_group" "group2" { + name = "%s" +} + +resource "aws_iam_group" "group3" { + name = "%s" +} +`, userName1, userName2, groupName1, groupName2, groupName3) +} + +// associate users and groups +const testAccAWSUserGroupMembershipConfigInit = ` +resource "aws_iam_user_group_membership" "user1_test1" { + user = "${aws_iam_user.user1.name}" + groups = [ + "${aws_iam_group.group1.name}", + ] +} +` + +const testAccAWSUserGroupMembershipConfigAddOne = ` +resource "aws_iam_user_group_membership" "user1_test1" { + user = "${aws_iam_user.user1.name}" + groups = [ + "${aws_iam_group.group1.name}", + "${aws_iam_group.group2.name}", + ] +} +` + +const testAccAWSUserGroupMembershipConfigAddAll = ` +resource "aws_iam_user_group_membership" "user1_test1" { + user = "${aws_iam_user.user1.name}" + groups = [ + "${aws_iam_group.group1.name}", + "${aws_iam_group.group2.name}", + ] +} + +resource "aws_iam_user_group_membership" "user1_test2" { + user = "${aws_iam_user.user1.name}" + groups = [ + "${aws_iam_group.group3.name}", + ] +} + +resource "aws_iam_user_group_membership" "user2_test1" { + user = "${aws_iam_user.user2.name}" + groups = [ + "${aws_iam_group.group1.name}", + ] +} + +resource "aws_iam_user_group_membership" "user2_test2" { + user = "${aws_iam_user.user2.name}" + groups = 
[ + "${aws_iam_group.group2.name}", + "${aws_iam_group.group3.name}", + ] +} +` + +// test removing a group +const testAccAWSUserGroupMembershipConfigRemoveGroup = ` +resource "aws_iam_user_group_membership" "user1_test1" { + user = "${aws_iam_user.user1.name}" + groups = [ + "${aws_iam_group.group1.name}", + ] +} + +resource "aws_iam_user_group_membership" "user1_test2" { + user = "${aws_iam_user.user1.name}" + groups = [ + "${aws_iam_group.group3.name}", + ] +} + +resource "aws_iam_user_group_membership" "user2_test1" { + user = "${aws_iam_user.user2.name}" + groups = [ + "${aws_iam_group.group1.name}", + ] +} + +resource "aws_iam_user_group_membership" "user2_test2" { + user = "${aws_iam_user.user2.name}" + groups = [ + "${aws_iam_group.group2.name}", + ] +} +` + +// test deleting an entity +const testAccAWSUserGroupMembershipConfigDeleteResource = ` +resource "aws_iam_user_group_membership" "user1_test1" { + user = "${aws_iam_user.user1.name}" + groups = [ + "${aws_iam_group.group1.name}", + ] +} + +resource "aws_iam_user_group_membership" "user1_test2" { + user = "${aws_iam_user.user1.name}" + groups = [ + "${aws_iam_group.group3.name}", + ] +} + +resource "aws_iam_user_group_membership" "user2_test1" { + user = "${aws_iam_user.user2.name}" + groups = [ + "${aws_iam_group.group1.name}", + ] +} +` diff --git a/aws/resource_aws_iam_user_login_profile.go b/aws/resource_aws_iam_user_login_profile.go index 1eb5980a4a9..2d680acc017 100644 --- a/aws/resource_aws_iam_user_login_profile.go +++ b/aws/resource_aws_iam_user_login_profile.go @@ -1,15 +1,15 @@ package aws import ( + "bytes" + "crypto/rand" "fmt" "log" - "math/rand" - "time" + "math/big" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/iam" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/encryption" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/validation" @@ -40,7 +40,7 @@ func resourceAwsIamUserLoginProfile() *schema.Resource { Type: schema.TypeInt, Optional: true, Default: 20, - ValidateFunc: validation.StringLenBetween(4, 128), + ValidateFunc: validation.IntBetween(5, 128), }, "key_fingerprint": { @@ -55,35 +55,62 @@ func resourceAwsIamUserLoginProfile() *schema.Resource { } } -// generatePassword generates a random password of a given length using -// characters that are likely to satisfy any possible AWS password policy -// (given sufficient length). -func generatePassword(length int) string { - charsets := []string{ - "abcdefghijklmnopqrstuvwxyz", - "ABCDEFGHIJKLMNOPQRSTUVWXYZ", - "012346789", - "!@#$%^&*()_+-=[]{}|'", - } +const ( + charLower = "abcdefghijklmnopqrstuvwxyz" + charUpper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + charNumbers = "0123456789" + charSymbols = "!@#$%^&*()_+-=[]{}|'" +) - // Use all character sets - random := rand.New(rand.NewSource(time.Now().UTC().UnixNano())) - components := make(map[int]byte, length) - for i := 0; i < length; i++ { - charset := charsets[i%len(charsets)] - components[i] = charset[random.Intn(len(charset))] - } +// generateIAMPassword generates a random password of a given length, matching the +// most restrictive iam password policy. 
+func generateIAMPassword(length int) string { + const charset = charLower + charUpper + charNumbers + charSymbols - // Randomise the ordering so we don't end up with a predictable - // lower case, upper case, numeric, symbol pattern result := make([]byte, length) - i := 0 - for _, b := range components { - result[i] = b - i = i + 1 + charsetSize := big.NewInt(int64(len(charset))) + + // rather than trying to artificially add specific characters from each + // class to the password to match the policy, we generate passwords + // randomly and reject those that don't match. + // + // Even in the worst case, this tends to take less than 10 tries to find a + // matching password. Any sufficiently long password is likely to succeed + // on the first try + for n := 0; n < 100000; n++ { + for i := range result { + r, err := rand.Int(rand.Reader, charsetSize) + if err != nil { + panic(err) + } + if !r.IsInt64() { + panic("rand.Int() not representable as an Int64") + } + + result[i] = charset[r.Int64()] + } + + if !checkIAMPwdPolicy(result) { + continue + } + + return string(result) + } + + panic("failed to generate acceptable password") +} + +// Check the generated password contains all character classes listed in the +// IAM password policy. +func checkIAMPwdPolicy(pass []byte) bool { + if !(bytes.ContainsAny(pass, charLower) && + bytes.ContainsAny(pass, charNumbers) && + bytes.ContainsAny(pass, charSymbols) && + bytes.ContainsAny(pass, charUpper)) { + return false } - return string(result) + return true } func resourceAwsIamUserLoginProfileCreate(d *schema.ResourceData, meta interface{}) error { @@ -113,7 +140,8 @@ func resourceAwsIamUserLoginProfileCreate(d *schema.ResourceData, meta interface } } - initialPassword := generatePassword(passwordLength) + initialPassword := generateIAMPassword(passwordLength) + fingerprint, encrypted, err := encryption.EncryptValue(encryptionKey, initialPassword, "Password") if err != nil { return err @@ -137,7 +165,7 @@ func resourceAwsIamUserLoginProfileCreate(d *schema.ResourceData, meta interface d.Set("encrypted_password", "") return nil } - return errwrap.Wrapf(fmt.Sprintf("Error creating IAM User Login Profile for %q: {{err}}", username), err) + return fmt.Errorf("Error creating IAM User Login Profile for %q: %s", username, err) } d.SetId(*createResp.LoginProfile.UserName) diff --git a/aws/resource_aws_iam_user_login_profile_test.go b/aws/resource_aws_iam_user_login_profile_test.go index 2755f917ce6..5030b19bd6d 100644 --- a/aws/resource_aws_iam_user_login_profile_test.go +++ b/aws/resource_aws_iam_user_login_profile_test.go @@ -19,18 +19,54 @@ import ( "github.com/hashicorp/vault/helper/pgpkeys" ) +func TestGenerateIAMPassword(t *testing.T) { + p := generateIAMPassword(6) + if len(p) != 6 { + t.Fatalf("expected a 6 character password, got: %q", p) + } + + p = generateIAMPassword(128) + if len(p) != 128 { + t.Fatalf("expected a 128 character password, got: %q", p) + } +} + +func TestIAMPasswordPolicyCheck(t *testing.T) { + for _, tc := range []struct { + pass string + valid bool + }{ + // no symbol + {pass: "abCD12", valid: false}, + // no number + {pass: "abCD%$", valid: false}, + // no upper + {pass: "abcd1#", valid: false}, + // no lower + {pass: "ABCD1#", valid: false}, + {pass: "abCD11#$", valid: true}, + } { + t.Run(tc.pass, func(t *testing.T) { + valid := checkIAMPwdPolicy([]byte(tc.pass)) + if valid != tc.valid { + t.Fatalf("expected %q to be valid==%t, got %t", tc.pass, tc.valid, valid) + } + }) + } +} + func 
TestAccAWSUserLoginProfile_basic(t *testing.T) { var conf iam.GetLoginProfileOutput username := fmt.Sprintf("test-user-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSUserLoginProfileDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSUserLoginProfileConfig(username, "/", testPubKey1), + Config: testAccAWSUserLoginProfileConfig_Required(username, "/", testPubKey1), Check: resource.ComposeTestCheckFunc( testAccCheckAWSUserLoginProfileExists("aws_iam_user_login_profile.user", &conf), testDecryptPasswordAndTest("aws_iam_user_login_profile.user", "aws_iam_access_key.user", testPrivKey1), @@ -45,13 +81,13 @@ func TestAccAWSUserLoginProfile_keybase(t *testing.T) { username := fmt.Sprintf("test-user-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSUserLoginProfileDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSUserLoginProfileConfig(username, "/", "keybase:terraformacctest"), + Config: testAccAWSUserLoginProfileConfig_Required(username, "/", "keybase:terraformacctest"), Check: resource.ComposeTestCheckFunc( testAccCheckAWSUserLoginProfileExists("aws_iam_user_login_profile.user", &conf), resource.TestCheckResourceAttrSet("aws_iam_user_login_profile.user", "encrypted_password"), @@ -65,14 +101,14 @@ func TestAccAWSUserLoginProfile_keybase(t *testing.T) { func TestAccAWSUserLoginProfile_keybaseDoesntExist(t *testing.T) { username := fmt.Sprintf("test-user-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSUserLoginProfileDestroy, Steps: []resource.TestStep{ { // We own this account but it doesn't have any key associated with it - Config: testAccAWSUserLoginProfileConfig(username, "/", "keybase:terraform_nope"), + Config: testAccAWSUserLoginProfileConfig_Required(username, "/", "keybase:terraform_nope"), ExpectError: regexp.MustCompile(`Error retrieving Public Key`), }, }, @@ -82,20 +118,41 @@ func TestAccAWSUserLoginProfile_keybaseDoesntExist(t *testing.T) { func TestAccAWSUserLoginProfile_notAKey(t *testing.T) { username := fmt.Sprintf("test-user-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSUserLoginProfileDestroy, Steps: []resource.TestStep{ { // We own this account but it doesn't have any key associated with it - Config: testAccAWSUserLoginProfileConfig(username, "/", "lolimnotakey"), + Config: testAccAWSUserLoginProfileConfig_Required(username, "/", "lolimnotakey"), ExpectError: regexp.MustCompile(`Error encrypting Password`), }, }, }) } +func TestAccAWSUserLoginProfile_PasswordLength(t *testing.T) { + var conf iam.GetLoginProfileOutput + + username := fmt.Sprintf("test-user-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSUserLoginProfileDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSUserLoginProfileConfig_PasswordLength(username, "/", testPubKey1, 128), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSUserLoginProfileExists("aws_iam_user_login_profile.user", &conf), + resource.TestCheckResourceAttr("aws_iam_user_login_profile.user", "password_length", "128"), + ), + }, + }, + }) +} + func testAccCheckAWSUserLoginProfileDestroy(s *terraform.State) error { iamconn := testAccProvider.Meta().(*AWSClient).iamconn @@ -166,10 +223,14 @@ func testDecryptPasswordAndTest(nProfile, nAccessKey, key string) resource.TestC iamAsCreatedUser := iam.New(iamAsCreatedUserSession) _, err = iamAsCreatedUser.ChangePassword(&iam.ChangePasswordInput{ OldPassword: aws.String(decryptedPassword.String()), - NewPassword: aws.String(generatePassword(20)), + NewPassword: aws.String(generateIAMPassword(20)), }) if err != nil { - if awserr, ok := err.(awserr.Error); ok && awserr.Code() == "InvalidClientTokenId" { + // EntityTemporarilyUnmodifiable: Login Profile for User XXX cannot be modified while login profile is being created. + if isAWSErr(err, iam.ErrCodeEntityTemporarilyUnmodifiableException, "") { + return resource.RetryableError(err) + } + if isAWSErr(err, "InvalidClientTokenId", "") { return resource.RetryableError(err) } @@ -206,7 +267,7 @@ func testAccCheckAWSUserLoginProfileExists(n string, res *iam.GetLoginProfileOut } } -func testAccAWSUserLoginProfileConfig(r, p, key string) string { +func testAccAWSUserLoginProfileConfig_base(rName, path string) string { return fmt.Sprintf(` resource "aws_iam_user" "user" { name = "%s" @@ -238,13 +299,32 @@ resource "aws_iam_user_policy" "user" { resource "aws_iam_access_key" "user" { user = "${aws_iam_user.user.name}" } +`, rName, path) +} + +func testAccAWSUserLoginProfileConfig_PasswordLength(rName, path, pgpKey string, passwordLength int) string { + return fmt.Sprintf(` +%s + +resource "aws_iam_user_login_profile" "user" { + user = "${aws_iam_user.user.name}" + password_length = %d + pgp_key = < 0 { + tc := d.Get("cpu_threads_per_core").(int) + if tc < 0 { + tc = 2 + } + opts.CpuOptions = &ec2.CpuOptionsRequest{ + CoreCount: aws.Int64(int64(v)), + ThreadsPerCore: aws.Int64(int64(tc)), + } + } + var groups []*string if v := d.Get("security_groups"); v != nil { // Security group names. 
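
The hunk above wires the new `cpu_core_count` and `cpu_threads_per_core` arguments into the instance launch options (`opts.CpuOptions`) via `ec2.CpuOptionsRequest`. Note that the `v` returned by `d.GetOk` is an `interface{}`, so it needs a `v.(int)` type assertion before the `int64` conversion. A minimal standalone sketch of the mapping, using a hypothetical helper name that is not part of the provider:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// buildCpuOptions is a hypothetical standalone version of the logic above:
// a CpuOptionsRequest is only sent when cpu_core_count is set, and
// cpu_threads_per_core falls back to 2 (hyperthreading on) when it is
// negative, i.e. left at its unset sentinel.
func buildCpuOptions(coreCount, threadsPerCore int) *ec2.CpuOptionsRequest {
	if coreCount <= 0 {
		return nil // omit CpuOptions and let EC2 apply the instance type default
	}
	if threadsPerCore < 0 {
		threadsPerCore = 2
	}
	return &ec2.CpuOptionsRequest{
		CoreCount:      aws.Int64(int64(coreCount)),
		ThreadsPerCore: aws.Int64(int64(threadsPerCore)),
	}
}

func main() {
	// 2 physical cores requested, threads per core left unset
	fmt.Printf("%v\n", buildCpuOptions(2, -1))
}
```

Assuming the schema defaults `cpu_threads_per_core` to a negative sentinel when unset, a configuration that sets only `cpu_core_count = 2` would therefore request 2 cores with 2 threads each.
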
@@ -1760,7 +1908,7 @@ func buildAwsInstanceOpts( return opts, nil } -func awsTerminateInstance(conn *ec2.EC2, id string, d *schema.ResourceData) error { +func awsTerminateInstance(conn *ec2.EC2, id string, timeout time.Duration) error { log.Printf("[INFO] Terminating instance: %s", id) req := &ec2.TerminateInstancesInput{ InstanceIds: []*string{aws.String(id)}, @@ -1775,7 +1923,7 @@ func awsTerminateInstance(conn *ec2.EC2, id string, d *schema.ResourceData) erro Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"}, Target: []string{"terminated"}, Refresh: InstanceStateRefreshFunc(conn, id, []string{}), - Timeout: d.Timeout(schema.TimeoutDelete), + Timeout: timeout, Delay: 10 * time.Second, MinTimeout: 3 * time.Second, } @@ -1833,3 +1981,21 @@ func getAwsInstanceVolumeIds(conn *ec2.EC2, d *schema.ResourceData) ([]*string, return volumeIds, nil } + +func getCreditSpecifications(conn *ec2.EC2, instanceId string) ([]map[string]interface{}, error) { + var creditSpecifications []map[string]interface{} + creditSpecification := make(map[string]interface{}) + + attr, err := conn.DescribeInstanceCreditSpecifications(&ec2.DescribeInstanceCreditSpecificationsInput{ + InstanceIds: []*string{aws.String(instanceId)}, + }) + if err != nil { + return creditSpecifications, err + } + if len(attr.InstanceCreditSpecifications) > 0 { + creditSpecification["cpu_credits"] = aws.StringValue(attr.InstanceCreditSpecifications[0].CpuCredits) + creditSpecifications = append(creditSpecifications, creditSpecification) + } + + return creditSpecifications, nil +} diff --git a/aws/resource_aws_instance_migrate_test.go b/aws/resource_aws_instance_migrate_test.go index d392943315e..35239faebeb 100644 --- a/aws/resource_aws_instance_migrate_test.go +++ b/aws/resource_aws_instance_migrate_test.go @@ -17,7 +17,7 @@ func TestAWSInstanceMigrateState(t *testing.T) { StateVersion: 0, Attributes: map[string]string{ // EBS - "block_device.#": "2", + "block_device.#": "2", "block_device.3851383343.delete_on_termination": "true", "block_device.3851383343.device_name": "/dev/sdx", "block_device.3851383343.encrypted": "false", @@ -42,7 +42,7 @@ func TestAWSInstanceMigrateState(t *testing.T) { "block_device.56575650.volume_type": "standard", }, Expected: map[string]string{ - "ebs_block_device.#": "1", + "ebs_block_device.#": "1", "ebs_block_device.3851383343.delete_on_termination": "true", "ebs_block_device.3851383343.device_name": "/dev/sdx", "ebs_block_device.3851383343.encrypted": "false", @@ -64,7 +64,7 @@ func TestAWSInstanceMigrateState(t *testing.T) { StateVersion: 0, Attributes: map[string]string{ // EBS - "block_device.#": "2", + "block_device.#": "2", "block_device.3851383343.delete_on_termination": "true", "block_device.3851383343.device_name": "/dev/sdx", "block_device.3851383343.encrypted": "false", @@ -92,7 +92,7 @@ func TestAWSInstanceMigrateState(t *testing.T) { "root_block_device.3018388612.iops": "1000", }, Expected: map[string]string{ - "ebs_block_device.#": "1", + "ebs_block_device.#": "1", "ebs_block_device.3851383343.delete_on_termination": "true", "ebs_block_device.3851383343.device_name": "/dev/sdx", "ebs_block_device.3851383343.encrypted": "false", @@ -151,7 +151,7 @@ func TestAWSInstanceMigrateState_empty(t *testing.T) { // should handle non-nil but empty is = &terraform.InstanceState{} - is, err = resourceAwsInstanceMigrateState(0, is, meta) + _, err = resourceAwsInstanceMigrateState(0, is, meta) if err != nil { t.Fatalf("err: %#v", err) diff --git a/aws/resource_aws_instance_test.go 
b/aws/resource_aws_instance_test.go index 1bb6b1f75db..fcb2471edc9 100644 --- a/aws/resource_aws_instance_test.go +++ b/aws/resource_aws_instance_test.go @@ -2,12 +2,18 @@ package aws import ( "fmt" + "log" + "os" "reflect" "regexp" + "strings" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" @@ -15,6 +21,220 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_instance", &resource.Sweeper{ + Name: "aws_instance", + F: testSweepInstances, + }) +} + +func testSweepInstances(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).ec2conn + + err = conn.DescribeInstancesPages(&ec2.DescribeInstancesInput{}, func(page *ec2.DescribeInstancesOutput, isLast bool) bool { + if len(page.Reservations) == 0 { + log.Print("[DEBUG] No EC2 Instances to sweep") + return false + } + + for _, reservation := range page.Reservations { + for _, instance := range reservation.Instances { + var nameTag string + id := aws.StringValue(instance.InstanceId) + + for _, instanceTag := range instance.Tags { + if aws.StringValue(instanceTag.Key) == "Name" { + nameTag = aws.StringValue(instanceTag.Value) + break + } + } + + if !strings.HasPrefix(nameTag, "tf-acc-test-") { + log.Printf("[INFO] Skipping EC2 Instance: %s", id) + continue + } + + log.Printf("[INFO] Terminating EC2 Instance: %s", id) + err := awsTerminateInstance(conn, id, 5*time.Minute) + if err != nil { + log.Printf("[ERROR] Error terminating EC2 Instance (%s): %s", id, err) + } + } + } + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EC2 Instance sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving EC2 Instances: %s", err) + } + + return nil +} + +func TestFetchRootDevice(t *testing.T) { + cases := []struct { + label string + images []*ec2.Image + name string + }{ + { + "device name in mappings", + []*ec2.Image{{ + RootDeviceType: aws.String("ebs"), + RootDeviceName: aws.String("/dev/xvda"), + BlockDeviceMappings: []*ec2.BlockDeviceMapping{ + {DeviceName: aws.String("/dev/xvdb")}, + {DeviceName: aws.String("/dev/xvda")}, + }, + }}, + "/dev/xvda", + }, + { + "device name not in mappings", + []*ec2.Image{{ + RootDeviceType: aws.String("ebs"), + RootDeviceName: aws.String("/dev/xvda"), + BlockDeviceMappings: []*ec2.BlockDeviceMapping{ + {DeviceName: aws.String("/dev/xvdb")}, + {DeviceName: aws.String("/dev/xvdc")}, + }, + }}, + "/dev/xvdb", + }, + { + "no images", + []*ec2.Image{}, + "", + }, + } + + sess, err := session.NewSession(nil) + if err != nil { + t.Errorf("Error new session: %s", err) + } + + conn := ec2.New(sess) + + for _, tc := range cases { + t.Run(fmt.Sprintf(tc.label), func(t *testing.T) { + conn.Handlers.Clear() + conn.Handlers.Send.PushBack(func(r *request.Request) { + data := r.Data.(*ec2.DescribeImagesOutput) + data.Images = tc.images + }) + name, err := fetchRootDeviceName("ami-123", conn) + if err != nil { + t.Errorf("Error fetching device name: %s", err) + } + if tc.name != aws.StringValue(name) { + t.Errorf("Expected name %s, got %s", tc.name, 
aws.StringValue(name)) + } + }) + } +} + +func TestAccAWSInstance_importBasic(t *testing.T) { + resourceName := "aws_instance.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfigVPC, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"associate_public_ip_address", "user_data"}, + }, + }, + }) +} + +func TestAccAWSInstance_importInDefaultVpcBySgName(t *testing.T) { + resourceName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfigInDefaultVpcBySgName(rInt), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSInstance_importInDefaultVpcBySgId(t *testing.T) { + resourceName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfigInDefaultVpcBySgId(rInt), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSInstance_importInEc2Classic(t *testing.T) { + resourceName := "aws_instance.foo" + rInt := acctest.RandInt() + + // EC2 Classic enabled + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccEC2ClassicPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfigInEc2Classic(rInt), + }, + + { + Config: testAccInstanceConfigInEc2Classic(rInt), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"network_interface", "source_dest_check"}, + }, + }, + }) +} + func TestAccAWSInstance_basic(t *testing.T) { var v ec2.Instance var vol *ec2.Volume @@ -38,7 +258,7 @@ func TestAccAWSInstance_basic(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -71,6 +291,10 @@ func TestAccAWSInstance_basic(t *testing.T) { "3dc39dda39be1205215e776bad998da361a5955d"), resource.TestCheckResourceAttr( "aws_instance.foo", "ebs_block_device.#", "0"), + resource.TestMatchResourceAttr( + "aws_instance.foo", + "arn", + regexp.MustCompile(`^arn:[^:]+:ec2:[^:]+:\d{12}:instance/i-.+`)), ), }, @@ -110,7 +334,7 @@ func TestAccAWSInstance_userDataBase64(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -152,7 +376,7 @@ func TestAccAWSInstance_GP2IopsDevice(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: 
"aws_instance.foo", IDRefreshIgnore: []string{"ephemeral_block_device", "user_data"}, @@ -182,7 +406,7 @@ func TestAccAWSInstance_GP2IopsDevice(t *testing.T) { func TestAccAWSInstance_GP2WithIopsValue(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", IDRefreshIgnore: []string{"ephemeral_block_device", "user_data"}, @@ -238,7 +462,7 @@ func TestAccAWSInstance_blockDevices(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", IDRefreshIgnore: []string{"ephemeral_block_device"}, @@ -300,7 +524,7 @@ func TestAccAWSInstance_blockDevices(t *testing.T) { func TestAccAWSInstance_rootInstanceStore(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -367,7 +591,7 @@ func TestAccAWSInstance_noAMIEphemeralDevices(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", IDRefreshIgnore: []string{"ephemeral_block_device"}, @@ -445,7 +669,7 @@ func TestAccAWSInstance_sourceDestCheck(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -499,7 +723,7 @@ func TestAccAWSInstance_disableApiTermination(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -527,7 +751,7 @@ func TestAccAWSInstance_disableApiTermination(t *testing.T) { func TestAccAWSInstance_vpc(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", IDRefreshIgnore: []string{"associate_public_ip_address"}, @@ -553,7 +777,7 @@ func TestAccAWSInstance_placementGroup(t *testing.T) { var v ec2.Instance rStr := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", IDRefreshIgnore: []string{"associate_public_ip_address"}, @@ -578,7 +802,7 @@ func TestAccAWSInstance_placementGroup(t *testing.T) { func TestAccAWSInstance_ipv6_supportAddressCount(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -600,7 +824,7 @@ func TestAccAWSInstance_ipv6_supportAddressCount(t *testing.T) { func TestAccAWSInstance_ipv6AddressCountAndSingleAddressCausesError(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -616,7 +840,7 @@ func TestAccAWSInstance_ipv6AddressCountAndSingleAddressCausesError(t *testing.T func TestAccAWSInstance_ipv6_supportAddressCountWithIpv4(t *testing.T) { var v ec2.Instance 
- resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -643,7 +867,7 @@ func TestAccAWSInstance_multipleRegions(t *testing.T) { // check for the instances in each region var providers []*schema.Provider - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, ProviderFactories: testAccProviderFactories(&providers), CheckDestroy: testAccCheckWithProviders(testAccCheckInstanceDestroyWithProvider, &providers), @@ -666,7 +890,7 @@ func TestAccAWSInstance_NetworkInstanceSecurityGroups(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo_instance", IDRefreshIgnore: []string{"associate_public_ip_address"}, @@ -689,7 +913,7 @@ func TestAccAWSInstance_NetworkInstanceRemovingAllSecurityGroups(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo_instance", Providers: testAccProviders, @@ -727,7 +951,7 @@ func TestAccAWSInstance_NetworkInstanceVPCSecurityGroupIDs(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo_instance", Providers: testAccProviders, @@ -751,7 +975,7 @@ func TestAccAWSInstance_NetworkInstanceVPCSecurityGroupIDs(t *testing.T) { func TestAccAWSInstance_tags(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -780,7 +1004,7 @@ func TestAccAWSInstance_tags(t *testing.T) { func TestAccAWSInstance_volumeTags(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -830,7 +1054,7 @@ func TestAccAWSInstance_volumeTags(t *testing.T) { func TestAccAWSInstance_volumeTagsComputed(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -860,7 +1084,7 @@ func TestAccAWSInstance_instanceProfileChange(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -897,7 +1121,7 @@ func TestAccAWSInstance_withIamInstanceProfile(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -927,7 +1151,7 @@ func TestAccAWSInstance_privateIP(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -957,7 +1181,7 @@ func TestAccAWSInstance_associatePublicIPAndPrivateIP(t *testing.T) { 
} } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", IDRefreshIgnore: []string{"associate_public_ip_address"}, @@ -995,7 +1219,7 @@ func TestAccAWSInstance_keyPairCheck(t *testing.T) { keyPairName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", IDRefreshIgnore: []string{"source_dest_check"}, @@ -1016,7 +1240,7 @@ func TestAccAWSInstance_keyPairCheck(t *testing.T) { func TestAccAWSInstance_rootBlockDeviceMismatch(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1045,7 +1269,7 @@ func TestAccAWSInstance_rootBlockDeviceMismatch(t *testing.T) { func TestAccAWSInstance_forceNewAndTagsDrift(t *testing.T) { var v ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_instance.foo", Providers: testAccProviders, @@ -1073,7 +1297,7 @@ func TestAccAWSInstance_changeInstanceType(t *testing.T) { var before ec2.Instance var after ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1100,7 +1324,7 @@ func TestAccAWSInstance_primaryNetworkInterface(t *testing.T) { var instance ec2.Instance var ini ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1121,7 +1345,7 @@ func TestAccAWSInstance_primaryNetworkInterfaceSourceDestCheck(t *testing.T) { var instance ec2.Instance var ini ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1144,7 +1368,7 @@ func TestAccAWSInstance_addSecondaryInterface(t *testing.T) { var iniPrimary ec2.NetworkInterface var iniSecondary ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1174,7 +1398,7 @@ func TestAccAWSInstance_addSecurityGroupNetworkInterface(t *testing.T) { var before ec2.Instance var after ec2.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1203,7 +1427,7 @@ func TestAccAWSInstance_associatePublic_defaultPrivate(t *testing.T) { resName := "aws_instance.foo" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1226,7 +1450,7 @@ func TestAccAWSInstance_associatePublic_defaultPublic(t *testing.T) { resName := "aws_instance.foo" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, 
resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1249,7 +1473,7 @@ func TestAccAWSInstance_associatePublic_explicitPublic(t *testing.T) { resName := "aws_instance.foo" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1272,7 +1496,7 @@ func TestAccAWSInstance_associatePublic_explicitPrivate(t *testing.T) { resName := "aws_instance.foo" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1295,7 +1519,7 @@ func TestAccAWSInstance_associatePublic_overridePublic(t *testing.T) { resName := "aws_instance.foo" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1318,7 +1542,7 @@ func TestAccAWSInstance_associatePublic_overridePrivate(t *testing.T) { resName := "aws_instance.foo" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1340,7 +1564,7 @@ func TestAccAWSInstance_getPasswordData_falseToTrue(t *testing.T) { resName := "aws_instance.foo" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1371,7 +1595,7 @@ func TestAccAWSInstance_getPasswordData_trueToFalse(t *testing.T) { resName := "aws_instance.foo" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckInstanceDestroy, @@ -1397,6 +1621,380 @@ func TestAccAWSInstance_getPasswordData_trueToFalse(t *testing.T) { }) } +func TestAccAWSInstance_creditSpecification_unspecifiedDefaultsToStandard(t *testing.T) { + var instance ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_unspecified(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &instance), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecification_standardCpuCredits(t *testing.T) { + var first, second ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_standardCpuCredits(rInt), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckInstanceExists(resName, &first), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_unspecified(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &second), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecification_unlimitedCpuCredits(t *testing.T) { + var first, second ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_unlimitedCpuCredits(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &first), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_unspecified(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &second), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecification_updateCpuCredits(t *testing.T) { + var first, second, third ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_standardCpuCredits(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &first), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_unlimitedCpuCredits(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &second), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_standardCpuCredits(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &third), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecification_isNotAppliedToNonBurstable(t *testing.T) { + var instance ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_isNotAppliedToNonBurstable(rInt), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &instance), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecificationT3_unspecifiedDefaultsToUnlimited(t *testing.T) { + var instance ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_unspecified_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &instance), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecificationT3_standardCpuCredits(t *testing.T) { + var first, second ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_standardCpuCredits_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &first), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_unspecified_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &second), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecificationT3_unlimitedCpuCredits(t *testing.T) { + var first, second ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_unlimitedCpuCredits_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &first), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_unspecified_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &second), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecificationT3_updateCpuCredits(t *testing.T) { + var first, second, third ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_standardCpuCredits_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, 
&first), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_unlimitedCpuCredits_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &second), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_standardCpuCredits_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &third), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecification_standardCpuCredits_t2Tot3Taint(t *testing.T) { + var before, after ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_standardCpuCredits(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &before), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_standardCpuCredits_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &after), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "standard"), + ), + Taint: []string{resName}, + }, + }, + }) +} + +func TestAccAWSInstance_creditSpecification_unlimitedCpuCredits_t2Tot3Taint(t *testing.T) { + var before, after ec2.Instance + resName := "aws_instance.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_creditSpecification_unlimitedCpuCredits(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &before), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + { + Config: testAccInstanceConfig_creditSpecification_unlimitedCpuCredits_t3(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resName, &after), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + Taint: []string{resName}, + }, + }, + }) +} + +func TestAccAWSInstance_UserData_EmptyStringToUnspecified(t *testing.T) { + var instance ec2.Instance + rInt := acctest.RandInt() + resourceName := "aws_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccInstanceConfig_UserData_EmptyString(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resourceName, &instance), + ), + }, + // Switching should show no difference + { + Config: testAccInstanceConfig_UserData_Unspecified(rInt), + ExpectNonEmptyPlan: false, + PlanOnly: true, + }, + }, + }) +} + +func TestAccAWSInstance_UserData_UnspecifiedToEmptyString(t *testing.T) { + var instance ec2.Instance + rInt := acctest.RandInt() + resourceName := "aws_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_UserData_Unspecified(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(resourceName, &instance), + ), + }, + // Switching should show no difference + { + Config: testAccInstanceConfig_UserData_EmptyString(rInt), + ExpectNonEmptyPlan: false, + PlanOnly: true, + }, + }, + }) +} + func testAccCheckInstanceNotRecreated(t *testing.T, before, after *ec2.Instance) resource.TestCheckFunc { return func(s *terraform.State) error { @@ -1474,14 +2072,46 @@ func testAccCheckInstanceExistsWithProvider(n string, i *ec2.Instance, providerF return nil } - return fmt.Errorf("Instance not found") + return fmt.Errorf("Instance not found") + } +} + +func TestInstanceTenancySchema(t *testing.T) { + actualSchema := resourceAwsInstance().Schema["tenancy"] + expectedSchema := &schema.Schema{ + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + } + if !reflect.DeepEqual(actualSchema, expectedSchema) { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + actualSchema, + expectedSchema) + } +} + +func TestInstanceCpuCoreCountSchema(t *testing.T) { + actualSchema := resourceAwsInstance().Schema["cpu_core_count"] + expectedSchema := &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + Computed: true, + ForceNew: true, + } + if !reflect.DeepEqual(actualSchema, expectedSchema) { + t.Fatalf( + "Got:\n\n%#v\n\nExpected:\n\n%#v\n", + actualSchema, + expectedSchema) } } -func TestInstanceTenancySchema(t *testing.T) { - actualSchema := resourceAwsInstance().Schema["tenancy"] +func TestInstanceCpuThreadsPerCoreSchema(t *testing.T) { + actualSchema := resourceAwsInstance().Schema["cpu_threads_per_core"] expectedSchema := &schema.Schema{ - Type: schema.TypeString, + Type: schema.TypeInt, Optional: true, Computed: true, ForceNew: true, @@ -1510,6 +2140,113 @@ func driftTags(instance *ec2.Instance) resource.TestCheckFunc { } } +func testAccInstanceConfigInDefaultVpcBySgName(rInt int) string { + return fmt.Sprintf(` +data "aws_ami" "ubuntu" { + most_recent = true + + filter { + name = "name" + values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } + + owners = ["099720109477"] # Canonical +} + +data "aws_vpc" "default" { + default = true +} + +resource "aws_security_group" "sg" { + name = "tf_acc_test_%d" + description = "Test security group" + vpc_id = "${data.aws_vpc.default.id}" +} + +resource "aws_instance" "foo" { + ami = "${data.aws_ami.ubuntu.id}" + instance_type = "t2.micro" + security_groups = ["${aws_security_group.sg.name}"] +} +`, rInt) +} + +func testAccInstanceConfigInDefaultVpcBySgId(rInt int) string { + return fmt.Sprintf(` +data "aws_ami" "ubuntu" { + most_recent = true + + filter { + name = "name" + values = 
["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } + + owners = ["099720109477"] # Canonical +} + +data "aws_vpc" "default" { + default = true +} + +resource "aws_security_group" "sg" { + name = "tf_acc_test_%d" + description = "Test security group" + vpc_id = "${data.aws_vpc.default.id}" +} + +resource "aws_instance" "foo" { + ami = "${data.aws_ami.ubuntu.id}" + instance_type = "t2.micro" + vpc_security_group_ids = ["${aws_security_group.sg.id}"] +} +`, rInt) +} + +func testAccInstanceConfigInEc2Classic(rInt int) string { + return fmt.Sprintf(` +provider "aws" { + region = "us-east-1" +} + +data "aws_ami" "ubuntu" { + most_recent = true + + filter { + name = "name" + values = ["ubuntu/images/ubuntu-trusty-14.04-amd64-server-*"] + } + + filter { + name = "virtualization-type" + values = ["paravirtual"] + } + + owners = ["099720109477"] # Canonical +} + +resource "aws_security_group" "sg" { + name = "tf_acc_test_%d" + description = "Test security group" +} + +resource "aws_instance" "foo" { + ami = "${data.aws_ami.ubuntu.id}" + instance_type = "m3.medium" + security_groups = ["${aws_security_group.sg.name}"] +} +`, rInt) +} + func testAccInstanceConfig_pre(rInt int) string { return fmt.Sprintf(` resource "aws_security_group" "tf_test_foo" { @@ -2970,3 +3707,235 @@ func testAccInstanceConfig_getPasswordData(val bool, rInt int) string { } `, rInt, val) } + +func testAccInstanceConfig_creditSpecification_unspecified(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "my_vpc" { + cidr_block = "172.16.0.0/16" + tags { + Name = "tf-acctest-%d" + } +} + +resource "aws_subnet" "my_subnet" { + vpc_id = "${aws_vpc.my_vpc.id}" + cidr_block = "172.16.20.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_instance" "foo" { + ami = "ami-22b9a343" # us-west-2 + instance_type = "t2.micro" + subnet_id = "${aws_subnet.my_subnet.id}" +} +`, rInt) +} + +func testAccInstanceConfig_creditSpecification_unspecified_t3(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "my_vpc" { + cidr_block = "172.16.0.0/16" + tags { + Name = "tf-acctest-%d" + } +} + +resource "aws_subnet" "my_subnet" { + vpc_id = "${aws_vpc.my_vpc.id}" + cidr_block = "172.16.20.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_instance" "foo" { + ami = "ami-51537029" # us-west-2 + instance_type = "t3.micro" + subnet_id = "${aws_subnet.my_subnet.id}" +} +`, rInt) +} + +func testAccInstanceConfig_creditSpecification_standardCpuCredits(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "my_vpc" { + cidr_block = "172.16.0.0/16" + tags { + Name = "tf-acctest-%d" + } +} + +resource "aws_subnet" "my_subnet" { + vpc_id = "${aws_vpc.my_vpc.id}" + cidr_block = "172.16.20.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_instance" "foo" { + ami = "ami-22b9a343" # us-west-2 + instance_type = "t2.micro" + subnet_id = "${aws_subnet.my_subnet.id}" + credit_specification { + cpu_credits = "standard" + } +} +`, rInt) +} + +func testAccInstanceConfig_creditSpecification_standardCpuCredits_t3(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "my_vpc" { + cidr_block = "172.16.0.0/16" + tags { + Name = "tf-acctest-%d" + } +} + +resource "aws_subnet" "my_subnet" { + vpc_id = "${aws_vpc.my_vpc.id}" + cidr_block = "172.16.20.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_instance" "foo" { + ami = "ami-51537029" # us-west-2 + instance_type = "t3.micro" + subnet_id = "${aws_subnet.my_subnet.id}" + 
credit_specification { + cpu_credits = "standard" + } +} +`, rInt) +} + +func testAccInstanceConfig_creditSpecification_unlimitedCpuCredits(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "my_vpc" { + cidr_block = "172.16.0.0/16" + tags { + Name = "tf-acctest-%d" + } +} + +resource "aws_subnet" "my_subnet" { + vpc_id = "${aws_vpc.my_vpc.id}" + cidr_block = "172.16.20.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_instance" "foo" { + ami = "ami-22b9a343" # us-west-2 + instance_type = "t2.micro" + subnet_id = "${aws_subnet.my_subnet.id}" + credit_specification { + cpu_credits = "unlimited" + } +} +`, rInt) +} + +func testAccInstanceConfig_creditSpecification_unlimitedCpuCredits_t3(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "my_vpc" { + cidr_block = "172.16.0.0/16" + tags { + Name = "tf-acctest-%d" + } +} + +resource "aws_subnet" "my_subnet" { + vpc_id = "${aws_vpc.my_vpc.id}" + cidr_block = "172.16.20.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_instance" "foo" { + ami = "ami-51537029" # us-west-2 + instance_type = "t3.micro" + subnet_id = "${aws_subnet.my_subnet.id}" + credit_specification { + cpu_credits = "unlimited" + } +} +`, rInt) +} + +func testAccInstanceConfig_creditSpecification_isNotAppliedToNonBurstable(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "my_vpc" { + cidr_block = "172.16.0.0/16" + tags { + Name = "tf-acctest-%d" + } +} + +resource "aws_subnet" "my_subnet" { + vpc_id = "${aws_vpc.my_vpc.id}" + cidr_block = "172.16.20.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_instance" "foo" { + ami = "ami-22b9a343" # us-west-2 + instance_type = "m1.small" + subnet_id = "${aws_subnet.my_subnet.id}" + credit_specification { + cpu_credits = "standard" + } +} +`, rInt) +} + +func testAccInstanceConfig_UserData_Base(rInt int) string { + return fmt.Sprintf(` +data "aws_ami" "amzn-ami-minimal-hvm-ebs" { + most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["amzn-ami-minimal-hvm-*"] + } + filter { + name = "root-device-type" + values = ["ebs"] + } +} + +resource "aws_vpc" "test" { + cidr_block = "172.16.0.0/16" + + tags { + Name = "tf-acctest-%d" + } +} + +resource "aws_subnet" "test" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "172.16.0.0/24" + + tags { + Name = "tf-acctest-%d" + } +} +`, rInt, rInt) +} + +func testAccInstanceConfig_UserData_Unspecified(rInt int) string { + return testAccInstanceConfig_UserData_Base(rInt) + ` +resource "aws_instance" "test" { + ami = "${data.aws_ami.amzn-ami-minimal-hvm-ebs.id}" + instance_type = "t2.micro" + subnet_id = "${aws_subnet.test.id}" +} +` +} + +func testAccInstanceConfig_UserData_EmptyString(rInt int) string { + return testAccInstanceConfig_UserData_Base(rInt) + ` +resource "aws_instance" "test" { + ami = "${data.aws_ami.amzn-ami-minimal-hvm-ebs.id}" + instance_type = "t2.micro" + subnet_id = "${aws_subnet.test.id}" + user_data = "" +} +` +} diff --git a/aws/resource_aws_internet_gateway.go b/aws/resource_aws_internet_gateway.go index a86f11670cb..7f87e920407 100644 --- a/aws/resource_aws_internet_gateway.go +++ b/aws/resource_aws_internet_gateway.go @@ -8,7 +8,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -24,7 +23,7 @@ func resourceAwsInternetGateway() 
*schema.Resource { }, Schema: map[string]*schema.Schema{ - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Optional: true, }, @@ -62,7 +61,7 @@ func resourceAwsInternetGatewayCreate(d *schema.ResourceData, meta interface{}) }) if err != nil { - return errwrap.Wrapf("{{err}}", err) + return fmt.Errorf("%s", err) } err = setTags(conn, d) diff --git a/aws/resource_aws_internet_gateway_test.go b/aws/resource_aws_internet_gateway_test.go index 9fcb937f992..af22de9cab5 100644 --- a/aws/resource_aws_internet_gateway_test.go +++ b/aws/resource_aws_internet_gateway_test.go @@ -4,6 +4,7 @@ import ( "fmt" "log" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -32,12 +33,17 @@ func testSweepInternetGateways(region string) error { Name: aws.String("tag-value"), Values: []*string{ aws.String("terraform-testacc-*"), + aws.String("tf-acc-test-*"), }, }, }, } resp, err := conn.DescribeInternetGateways(req) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Internet Gateway sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error describing Internet Gateways: %s", err) } @@ -47,19 +53,67 @@ func testSweepInternetGateways(region string) error { } for _, internetGateway := range resp.InternetGateways { - _, err := conn.DeleteInternetGateway(&ec2.DeleteInternetGatewayInput{ + for _, attachment := range internetGateway.Attachments { + input := &ec2.DetachInternetGatewayInput{ + InternetGatewayId: internetGateway.InternetGatewayId, + VpcId: attachment.VpcId, + } + + log.Printf("[DEBUG] Detaching Internet Gateway: %s", input) + _, err := conn.DetachInternetGateway(input) + if err != nil { + return fmt.Errorf("error detaching Internet Gateway (%s) from VPC (%s): %s", aws.StringValue(internetGateway.InternetGatewayId), aws.StringValue(attachment.VpcId), err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"detaching"}, + Target: []string{"detached"}, + Refresh: detachIGStateRefreshFunc(conn, aws.StringValue(internetGateway.InternetGatewayId), aws.StringValue(attachment.VpcId)), + Timeout: 10 * time.Minute, + Delay: 10 * time.Second, + } + + log.Printf("[DEBUG] Waiting for Internet Gateway (%s) to detach from VPC (%s)", aws.StringValue(internetGateway.InternetGatewayId), aws.StringValue(attachment.VpcId)) + if _, err = stateConf.WaitForState(); err != nil { + return fmt.Errorf("error waiting for VPN Gateway (%s) to detach from VPC (%s): %s", aws.StringValue(internetGateway.InternetGatewayId), aws.StringValue(attachment.VpcId), err) + } + } + + input := &ec2.DeleteInternetGatewayInput{ InternetGatewayId: internetGateway.InternetGatewayId, - }) + } + + log.Printf("[DEBUG] Deleting Internet Gateway: %s", input) + _, err := conn.DeleteInternetGateway(input) if err != nil { - return fmt.Errorf( - "Error deleting Internet Gateway (%s): %s", - *internetGateway.InternetGatewayId, err) + return fmt.Errorf("error deleting Internet Gateway (%s): %s", aws.StringValue(internetGateway.InternetGatewayId), err) } } return nil } +func TestAccAWSInternetGateway_importBasic(t *testing.T) { + resourceName := "aws_internet_gateway.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckInternetGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInternetGatewayConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func 
TestAccAWSInternetGateway_basic(t *testing.T) { var v, v2 ec2.InternetGateway @@ -80,13 +134,13 @@ func TestAccAWSInternetGateway_basic(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_internet_gateway.foo", Providers: testAccProviders, CheckDestroy: testAccCheckInternetGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccInternetGatewayConfig, Check: resource.ComposeTestCheckFunc( testAccCheckInternetGatewayExists( @@ -94,7 +148,7 @@ func TestAccAWSInternetGateway_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccInternetGatewayConfigChangeVPC, Check: resource.ComposeTestCheckFunc( testAccCheckInternetGatewayExists( @@ -119,18 +173,18 @@ func TestAccAWSInternetGateway_delete(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_internet_gateway.foo", Providers: testAccProviders, CheckDestroy: testAccCheckInternetGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccInternetGatewayConfig, Check: resource.ComposeTestCheckFunc( testAccCheckInternetGatewayExists("aws_internet_gateway.foo", &ig)), }, - resource.TestStep{ + { Config: testAccNoInternetGatewayConfig, Check: resource.ComposeTestCheckFunc(testDeleted("aws_internet_gateway.foo")), }, @@ -141,13 +195,13 @@ func TestAccAWSInternetGateway_delete(t *testing.T) { func TestAccAWSInternetGateway_tags(t *testing.T) { var v ec2.InternetGateway - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_internet_gateway.foo", Providers: testAccProviders, CheckDestroy: testAccCheckInternetGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckInternetGatewayConfigTags, Check: resource.ComposeTestCheckFunc( testAccCheckInternetGatewayExists("aws_internet_gateway.foo", &v), @@ -156,7 +210,7 @@ func TestAccAWSInternetGateway_tags(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccCheckInternetGatewayConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckInternetGatewayExists("aws_internet_gateway.foo", &v), diff --git a/aws/resource_aws_iot_certificate.go b/aws/resource_aws_iot_certificate.go index afa8699042d..48949fd054f 100644 --- a/aws/resource_aws_iot_certificate.go +++ b/aws/resource_aws_iot_certificate.go @@ -15,16 +15,16 @@ func resourceAwsIotCertificate() *schema.Resource { Update: resourceAwsIotCertificateUpdate, Delete: resourceAwsIotCertificateDelete, Schema: map[string]*schema.Schema{ - "csr": &schema.Schema{ + "csr": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "active": &schema.Schema{ + "active": { Type: schema.TypeBool, Required: true, }, - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, diff --git a/aws/resource_aws_iot_certificate_test.go b/aws/resource_aws_iot_certificate_test.go index 6f82df03ebe..8a504b737ba 100644 --- a/aws/resource_aws_iot_certificate_test.go +++ b/aws/resource_aws_iot_certificate_test.go @@ -12,12 +12,12 @@ import ( ) func TestAccAWSIoTCertificate_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSIoTCertificateDestroy_basic, Steps: []resource.TestStep{ - 
resource.TestStep{ + { Config: testAccAWSIoTCertificate_basic, Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrSet("aws_iot_certificate.foo_cert", "arn"), diff --git a/aws/resource_aws_iot_policy.go b/aws/resource_aws_iot_policy.go index 9a3bf4bda37..724540e7404 100644 --- a/aws/resource_aws_iot_policy.go +++ b/aws/resource_aws_iot_policy.go @@ -15,19 +15,19 @@ func resourceAwsIotPolicy() *schema.Resource { Update: resourceAwsIotPolicyUpdate, Delete: resourceAwsIotPolicyDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "policy": &schema.Schema{ + "policy": { Type: schema.TypeString, Required: true, }, - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "default_version_id": &schema.Schema{ + "default_version_id": { Type: schema.TypeString, Computed: true, }, diff --git a/aws/resource_aws_iot_policy_attachment.go b/aws/resource_aws_iot_policy_attachment.go new file mode 100644 index 00000000000..eba80e6fecc --- /dev/null +++ b/aws/resource_aws_iot_policy_attachment.go @@ -0,0 +1,137 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iot" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsIotPolicyAttachment() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsIotPolicyAttachmentCreate, + Read: resourceAwsIotPolicyAttachmentRead, + Delete: resourceAwsIotPolicyAttachmentDelete, + Schema: map[string]*schema.Schema{ + "policy": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "target": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsIotPolicyAttachmentCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iotconn + + policyName := d.Get("policy").(string) + target := d.Get("target").(string) + + _, err := conn.AttachPolicy(&iot.AttachPolicyInput{ + PolicyName: aws.String(policyName), + Target: aws.String(target), + }) + + if err != nil { + return fmt.Errorf("error attaching policy %s to target %s: %s", policyName, target, err) + } + + d.SetId(fmt.Sprintf("%s|%s", policyName, target)) + return resourceAwsIotPolicyAttachmentRead(d, meta) +} + +func listIotPolicyAttachmentPages(conn *iot.IoT, input *iot.ListAttachedPoliciesInput, + fn func(out *iot.ListAttachedPoliciesOutput, lastPage bool) bool) error { + for { + page, err := conn.ListAttachedPolicies(input) + if err != nil { + return err + } + lastPage := page.NextMarker == nil + + shouldContinue := fn(page, lastPage) + if !shouldContinue || lastPage { + break + } + input.Marker = page.NextMarker + } + return nil +} + +func getIotPolicyAttachment(conn *iot.IoT, target, policyName string) (*iot.Policy, error) { + var policy *iot.Policy + + input := &iot.ListAttachedPoliciesInput{ + PageSize: aws.Int64(250), + Recursive: aws.Bool(false), + Target: aws.String(target), + } + + err := listIotPolicyAttachmentPages(conn, input, func(out *iot.ListAttachedPoliciesOutput, lastPage bool) bool { + for _, att := range out.Policies { + if policyName == aws.StringValue(att.PolicyName) { + policy = att + return false + } + } + return true + }) + + return policy, err +} + +func resourceAwsIotPolicyAttachmentRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iotconn + + policyName := d.Get("policy").(string) + target := d.Get("target").(string) + + var policy *iot.Policy + + 
policy, err := getIotPolicyAttachment(conn, target, policyName) + + if err != nil { + return fmt.Errorf("error listing policy attachments for target %s: %s", target, err) + } + + if policy == nil { + log.Printf("[WARN] IOT Policy Attachment (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return nil +} + +func resourceAwsIotPolicyAttachmentDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).iotconn + + policyName := d.Get("policy").(string) + target := d.Get("target").(string) + + _, err := conn.DetachPolicy(&iot.DetachPolicyInput{ + PolicyName: aws.String(policyName), + Target: aws.String(target), + }) + + // DetachPolicy doesn't return an error if the policy doesn't exist, + // but it returns an error if the Target is not found. + if isAWSErr(err, iot.ErrCodeInvalidRequestException, "Invalid Target") { + log.Printf("[WARN] IOT target (%s) not found, removing attachment to policy (%s) from state", target, policyName) + return nil + } + + if err != nil { + return fmt.Errorf("error detaching policy %s from target %s: %s", policyName, target, err) + } + + return nil +} diff --git a/aws/resource_aws_iot_policy_attachment_test.go b/aws/resource_aws_iot_policy_attachment_test.go new file mode 100644 index 00000000000..d402e086693 --- /dev/null +++ b/aws/resource_aws_iot_policy_attachment_test.go @@ -0,0 +1,318 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iot" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSIotPolicyAttachment_basic(t *testing.T) { + policyName := acctest.RandomWithPrefix("PolicyName-") + policyName2 := acctest.RandomWithPrefix("PolicyName2-") + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSIotPolicyAttchmentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSIotPolicyAttachmentConfig(policyName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIotPolicyAttachmentExists("aws_iot_policy_attachment.att"), + testAccCheckAWSIotPolicyAttachmentCertStatus("aws_iot_certificate.cert", []string{policyName}), + ), + }, + { + Config: testAccAWSIotPolicyAttachmentConfigUpdate1(policyName, policyName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIotPolicyAttachmentExists("aws_iot_policy_attachment.att"), + testAccCheckAWSIotPolicyAttachmentExists("aws_iot_policy_attachment.att2"), + testAccCheckAWSIotPolicyAttachmentCertStatus("aws_iot_certificate.cert", []string{policyName, policyName2}), + ), + }, + { + Config: testAccAWSIotPolicyAttachmentConfigUpdate2(policyName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIotPolicyAttachmentExists("aws_iot_policy_attachment.att2"), + testAccCheckAWSIotPolicyAttachmentCertStatus("aws_iot_certificate.cert", []string{policyName2}), + ), + }, + { + Config: testAccAWSIotPolicyAttachmentConfigUpdate3(policyName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSIotPolicyAttachmentExists("aws_iot_policy_attachment.att2"), + testAccCheckAWSIotPolicyAttachmentExists("aws_iot_policy_attachment.att3"), + testAccCheckAWSIotPolicyAttachmentCertStatus("aws_iot_certificate.cert", []string{policyName2}), + testAccCheckAWSIotPolicyAttachmentCertStatus("aws_iot_certificate.cert2", []string{policyName2}), + ), + }, + }, + }) 
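
(Illustrative aside, not part of the diff.) The new `aws_iot_policy_attachment` resource above stores its state ID as `policyName|target` via `d.SetId(fmt.Sprintf("%s|%s", policyName, target))`, and the target is an ARN, which cannot itself contain `|`. The sketch below shows one way such a composite ID could be split back into its two parts, for example by a hypothetical importer. The helper name `parseIotPolicyAttachmentId` and the example ARN are assumptions for illustration only; the code uses nothing beyond the Go standard library.

```go
package main

import (
	"fmt"
	"strings"
)

// parseIotPolicyAttachmentId is a hypothetical helper (not part of this PR)
// that splits a composite ID of the form "<policyName>|<target>", matching the
// format written by resourceAwsIotPolicyAttachmentCreate. Splitting on the
// first "|" is enough because the target is an ARN and contains no "|".
func parseIotPolicyAttachmentId(id string) (policyName string, target string, err error) {
	parts := strings.SplitN(id, "|", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("unexpected ID format %q, expected <policy-name>|<target-arn>", id)
	}
	return parts[0], parts[1], nil
}

func main() {
	// Example values for illustration only.
	policy, target, err := parseIotPolicyAttachmentId("PolicyName-abc123|arn:aws:iot:us-west-2:123456789012:cert/example")
	if err != nil {
		panic(err)
	}
	fmt.Println(policy) // PolicyName-abc123
	fmt.Println(target) // arn:aws:iot:us-west-2:123456789012:cert/example
}
```
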
+ +} + +func testAccCheckAWSIotPolicyAttchmentDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).iotconn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_iot_policy_attachment" { + continue + } + + target := rs.Primary.Attributes["target"] + policyName := rs.Primary.Attributes["policy"] + + input := &iot.ListAttachedPoliciesInput{ + PageSize: aws.Int64(250), + Recursive: aws.Bool(false), + Target: aws.String(target), + } + + var policy *iot.Policy + err := listIotPolicyAttachmentPages(conn, input, func(out *iot.ListAttachedPoliciesOutput, lastPage bool) bool { + for _, att := range out.Policies { + if policyName == aws.StringValue(att.PolicyName) { + policy = att + return false + } + } + return true + }) + + if isAWSErr(err, iot.ErrCodeResourceNotFoundException, "The certificate given in the principal does not exist.") { + continue + } else if err != nil { + return err + } + + if policy == nil { + continue + } + + return fmt.Errorf("IOT Policy Attachment (%s) still exists", rs.Primary.Attributes["id"]) + } + return nil +} + +func testAccCheckAWSIotPolicyAttachmentExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No policy name is set") + } + + conn := testAccProvider.Meta().(*AWSClient).iotconn + target := rs.Primary.Attributes["target"] + policyName := rs.Primary.Attributes["policy"] + + policy, err := getIotPolicyAttachment(conn, target, policyName) + + if err != nil { + return fmt.Errorf("Error: Failed to get attached policies for target %s (%s): %s", target, n, err) + } + + if policy == nil { + return fmt.Errorf("Error: Policy %s is not attached to target (%s)", policyName, target) + } + + return nil + } +} + +func testAccCheckAWSIotPolicyAttachmentCertStatus(n string, policies []string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).iotconn + + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + certARN := rs.Primary.Attributes["arn"] + + out, err := conn.ListAttachedPolicies(&iot.ListAttachedPoliciesInput{ + Target: aws.String(certARN), + PageSize: aws.Int64(250), + }) + + if err != nil { + return fmt.Errorf("Error: Cannot list attached policies for target %s: %s", certARN, err) + } + + if len(out.Policies) != len(policies) { + return fmt.Errorf("Error: Invalid attached policies count for target %s, expected %d, got %d", + certARN, + len(policies), + len(out.Policies)) + } + + for _, p1 := range policies { + found := false + for _, p2 := range out.Policies { + if p1 == aws.StringValue(p2.PolicyName) { + found = true + break + } + } + if !found { + return fmt.Errorf("Error: Policy %s is not attached to target %s", p1, certARN) + } + } + + return nil + } +} + +func testAccAWSIotPolicyAttachmentConfig(policyName string) string { + return fmt.Sprintf(` +resource "aws_iot_certificate" "cert" { + csr = "${file("test-fixtures/iot-csr.pem")}" + active = true +} + +resource "aws_iot_policy" "policy" { + name = "%s" + policy = < 0 { + if v, ok := d.GetOk("cloudwatch_logging_options"); ok { + clo := v.([]interface{})[0].(map[string]interface{}) + cloudwatchLoggingOption := expandKinesisAnalyticsCloudwatchLoggingOption(clo) + addOpts := 
&kinesisanalytics.AddApplicationCloudWatchLoggingOptionInput{ + ApplicationName: aws.String(name), + CurrentApplicationVersionId: aws.Int64(int64(version)), + CloudWatchLoggingOption: cloudwatchLoggingOption, + } + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.AddApplicationCloudWatchLoggingOption(addOpts) + if err != nil { + if isAWSErr(err, kinesisanalytics.ErrCodeInvalidArgumentException, "Kinesis Analytics service doesn't have sufficient privileges") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Unable to add CloudWatch logging options: %s", err) + } + version = version + 1 + } + } + + oldInputs, newInputs := d.GetChange("inputs") + if len(oldInputs.([]interface{})) == 0 && len(newInputs.([]interface{})) > 0 { + if v, ok := d.GetOk("inputs"); ok { + i := v.([]interface{})[0].(map[string]interface{}) + input := expandKinesisAnalyticsInputs(i) + addOpts := &kinesisanalytics.AddApplicationInputInput{ + ApplicationName: aws.String(name), + CurrentApplicationVersionId: aws.Int64(int64(version)), + Input: input, + } + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.AddApplicationInput(addOpts) + if err != nil { + if isAWSErr(err, kinesisanalytics.ErrCodeInvalidArgumentException, "Kinesis Analytics service doesn't have sufficient privileges") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Unable to add application inputs: %s", err) + } + version = version + 1 + } + } + + oldOutputs, newOutputs := d.GetChange("outputs") + if len(oldOutputs.([]interface{})) == 0 && len(newOutputs.([]interface{})) > 0 { + if v, ok := d.GetOk("outputs"); ok { + o := v.([]interface{})[0].(map[string]interface{}) + output := expandKinesisAnalyticsOutputs(o) + addOpts := &kinesisanalytics.AddApplicationOutputInput{ + ApplicationName: aws.String(name), + CurrentApplicationVersionId: aws.Int64(int64(version)), + Output: output, + } + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.AddApplicationOutput(addOpts) + if err != nil { + if isAWSErr(err, kinesisanalytics.ErrCodeInvalidArgumentException, "Kinesis Analytics service doesn't have sufficient privileges") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Unable to add application outputs: %s", err) + } + version = version + 1 + } + } + } + + oldReferenceData, newReferenceData := d.GetChange("reference_data_sources") + if len(oldReferenceData.([]interface{})) == 0 && len(newReferenceData.([]interface{})) > 0 { + if v := d.Get("reference_data_sources").([]interface{}); len(v) > 0 { + for _, r := range v { + rd := r.(map[string]interface{}) + referenceData := expandKinesisAnalyticsReferenceData(rd) + addOpts := &kinesisanalytics.AddApplicationReferenceDataSourceInput{ + ApplicationName: aws.String(name), + CurrentApplicationVersionId: aws.Int64(int64(version)), + ReferenceDataSource: referenceData, + } + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.AddApplicationReferenceDataSource(addOpts) + if err != nil { + if isAWSErr(err, kinesisanalytics.ErrCodeInvalidArgumentException, "Kinesis Analytics service doesn't have sufficient privileges") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + 
return nil + }) + if err != nil { + return fmt.Errorf("Unable to add application reference data source: %s", err) + } + version = version + 1 + } + } + } + + return resourceAwsKinesisAnalyticsApplicationRead(d, meta) +} + +func resourceAwsKinesisAnalyticsApplicationDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).kinesisanalyticsconn + name := d.Get("name").(string) + createTimestamp, parseErr := time.Parse(time.RFC3339, d.Get("create_timestamp").(string)) + if parseErr != nil { + return parseErr + } + + log.Printf("[DEBUG] Kinesis Analytics Application destroy: %v", d.Id()) + deleteOpts := &kinesisanalytics.DeleteApplicationInput{ + ApplicationName: aws.String(name), + CreateTimestamp: aws.Time(createTimestamp), + } + _, deleteErr := conn.DeleteApplication(deleteOpts) + if isAWSErr(deleteErr, kinesisanalytics.ErrCodeResourceNotFoundException, "") { + return nil + } + deleteErr = waitForDeleteKinesisAnalyticsApplication(conn, d.Id(), d.Timeout(schema.TimeoutDelete)) + if deleteErr != nil { + return fmt.Errorf("error waiting for deletion of Kinesis Analytics Application (%s): %s", d.Id(), deleteErr) + } + + log.Printf("[DEBUG] Kinesis Analytics Application deleted: %v", d.Id()) + return nil +} + +func expandKinesisAnalyticsCloudwatchLoggingOption(clo map[string]interface{}) *kinesisanalytics.CloudWatchLoggingOption { + cloudwatchLoggingOption := &kinesisanalytics.CloudWatchLoggingOption{ + LogStreamARN: aws.String(clo["log_stream_arn"].(string)), + RoleARN: aws.String(clo["role_arn"].(string)), + } + return cloudwatchLoggingOption +} + +func expandKinesisAnalyticsInputs(i map[string]interface{}) *kinesisanalytics.Input { + input := &kinesisanalytics.Input{ + NamePrefix: aws.String(i["name_prefix"].(string)), + } + + if v := i["kinesis_firehose"].([]interface{}); len(v) > 0 { + kf := v[0].(map[string]interface{}) + kfi := &kinesisanalytics.KinesisFirehoseInput{ + ResourceARN: aws.String(kf["resource_arn"].(string)), + RoleARN: aws.String(kf["role_arn"].(string)), + } + input.KinesisFirehoseInput = kfi + } + + if v := i["kinesis_stream"].([]interface{}); len(v) > 0 { + ks := v[0].(map[string]interface{}) + ksi := &kinesisanalytics.KinesisStreamsInput{ + ResourceARN: aws.String(ks["resource_arn"].(string)), + RoleARN: aws.String(ks["role_arn"].(string)), + } + input.KinesisStreamsInput = ksi + } + + if v := i["parallelism"].([]interface{}); len(v) > 0 { + p := v[0].(map[string]interface{}) + + if c, ok := p["count"]; ok { + ip := &kinesisanalytics.InputParallelism{ + Count: aws.Int64(int64(c.(int))), + } + input.InputParallelism = ip + } + } + + if v := i["processing_configuration"].([]interface{}); len(v) > 0 { + pc := v[0].(map[string]interface{}) + + if l := pc["lambda"].([]interface{}); len(l) > 0 { + lp := l[0].(map[string]interface{}) + ipc := &kinesisanalytics.InputProcessingConfiguration{ + InputLambdaProcessor: &kinesisanalytics.InputLambdaProcessor{ + ResourceARN: aws.String(lp["resource_arn"].(string)), + RoleARN: aws.String(lp["role_arn"].(string)), + }, + } + input.InputProcessingConfiguration = ipc + } + } + + if v := i["schema"].([]interface{}); len(v) > 0 { + vL := v[0].(map[string]interface{}) + ss := expandKinesisAnalyticsSourceSchema(vL) + input.InputSchema = ss + } + + return input +} + +func expandKinesisAnalyticsSourceSchema(vL map[string]interface{}) *kinesisanalytics.SourceSchema { + ss := &kinesisanalytics.SourceSchema{} + if v := vL["record_columns"].([]interface{}); len(v) > 0 { + var rcs []*kinesisanalytics.RecordColumn + + for _, 
rc := range v { + rcD := rc.(map[string]interface{}) + rc := &kinesisanalytics.RecordColumn{ + Name: aws.String(rcD["name"].(string)), + SqlType: aws.String(rcD["sql_type"].(string)), + } + + if v, ok := rcD["mapping"]; ok { + rc.Mapping = aws.String(v.(string)) + } + + rcs = append(rcs, rc) + } + + ss.RecordColumns = rcs + } + + if v, ok := vL["record_encoding"]; ok && v.(string) != "" { + ss.RecordEncoding = aws.String(v.(string)) + } + + if v := vL["record_format"].([]interface{}); len(v) > 0 { + vL := v[0].(map[string]interface{}) + rf := &kinesisanalytics.RecordFormat{} + + if v := vL["mapping_parameters"].([]interface{}); len(v) > 0 { + vL := v[0].(map[string]interface{}) + mp := &kinesisanalytics.MappingParameters{} + + if v := vL["csv"].([]interface{}); len(v) > 0 { + cL := v[0].(map[string]interface{}) + cmp := &kinesisanalytics.CSVMappingParameters{ + RecordColumnDelimiter: aws.String(cL["record_column_delimiter"].(string)), + RecordRowDelimiter: aws.String(cL["record_row_delimiter"].(string)), + } + mp.CSVMappingParameters = cmp + rf.RecordFormatType = aws.String("CSV") + } + + if v := vL["json"].([]interface{}); len(v) > 0 { + jL := v[0].(map[string]interface{}) + jmp := &kinesisanalytics.JSONMappingParameters{ + RecordRowPath: aws.String(jL["record_row_path"].(string)), + } + mp.JSONMappingParameters = jmp + rf.RecordFormatType = aws.String("JSON") + } + rf.MappingParameters = mp + } + + ss.RecordFormat = rf + } + return ss +} + +func expandKinesisAnalyticsOutputs(o map[string]interface{}) *kinesisanalytics.Output { + output := &kinesisanalytics.Output{ + Name: aws.String(o["name"].(string)), + } + + if v := o["kinesis_firehose"].([]interface{}); len(v) > 0 { + kf := v[0].(map[string]interface{}) + kfo := &kinesisanalytics.KinesisFirehoseOutput{ + ResourceARN: aws.String(kf["resource_arn"].(string)), + RoleARN: aws.String(kf["role_arn"].(string)), + } + output.KinesisFirehoseOutput = kfo + } + + if v := o["kinesis_stream"].([]interface{}); len(v) > 0 { + ks := v[0].(map[string]interface{}) + kso := &kinesisanalytics.KinesisStreamsOutput{ + ResourceARN: aws.String(ks["resource_arn"].(string)), + RoleARN: aws.String(ks["role_arn"].(string)), + } + output.KinesisStreamsOutput = kso + } + + if v := o["lambda"].([]interface{}); len(v) > 0 { + l := v[0].(map[string]interface{}) + lo := &kinesisanalytics.LambdaOutput{ + ResourceARN: aws.String(l["resource_arn"].(string)), + RoleARN: aws.String(l["role_arn"].(string)), + } + output.LambdaOutput = lo + } + + if v := o["schema"].([]interface{}); len(v) > 0 { + ds := v[0].(map[string]interface{}) + dso := &kinesisanalytics.DestinationSchema{ + RecordFormatType: aws.String(ds["record_format_type"].(string)), + } + output.DestinationSchema = dso + } + + return output +} + +func expandKinesisAnalyticsReferenceData(rd map[string]interface{}) *kinesisanalytics.ReferenceDataSource { + referenceData := &kinesisanalytics.ReferenceDataSource{ + TableName: aws.String(rd["table_name"].(string)), + } + + if v := rd["s3"].([]interface{}); len(v) > 0 { + s3 := v[0].(map[string]interface{}) + s3rds := &kinesisanalytics.S3ReferenceDataSource{ + BucketARN: aws.String(s3["bucket_arn"].(string)), + FileKey: aws.String(s3["file_key"].(string)), + ReferenceRoleARN: aws.String(s3["role_arn"].(string)), + } + referenceData.S3ReferenceDataSource = s3rds + } + + if v := rd["schema"].([]interface{}); len(v) > 0 { + ss := expandKinesisAnalyticsSourceSchema(v[0].(map[string]interface{})) + referenceData.ReferenceSchema = ss + } + + return referenceData +} + +func 
createApplicationUpdateOpts(d *schema.ResourceData) (*kinesisanalytics.ApplicationUpdate, error) { + applicationUpdate := &kinesisanalytics.ApplicationUpdate{} + + if d.HasChange("code") { + if v, ok := d.GetOk("code"); ok && v.(string) != "" { + applicationUpdate.ApplicationCodeUpdate = aws.String(v.(string)) + } + } + + oldLoggingOptions, newLoggingOptions := d.GetChange("cloudwatch_logging_options") + if len(oldLoggingOptions.([]interface{})) > 0 && len(newLoggingOptions.([]interface{})) > 0 { + if v, ok := d.GetOk("cloudwatch_logging_options"); ok { + clo := v.([]interface{})[0].(map[string]interface{}) + cloudwatchLoggingOption := expandKinesisAnalyticsCloudwatchLoggingOptionUpdate(clo) + applicationUpdate.CloudWatchLoggingOptionUpdates = []*kinesisanalytics.CloudWatchLoggingOptionUpdate{cloudwatchLoggingOption} + } + } + + oldInputs, newInputs := d.GetChange("inputs") + if len(oldInputs.([]interface{})) > 0 && len(newInputs.([]interface{})) > 0 { + if v, ok := d.GetOk("inputs"); ok { + vL := v.([]interface{})[0].(map[string]interface{}) + inputUpdate := expandKinesisAnalyticsInputUpdate(vL) + applicationUpdate.InputUpdates = []*kinesisanalytics.InputUpdate{inputUpdate} + } + } + + oldOutputs, newOutputs := d.GetChange("outputs") + if len(oldOutputs.([]interface{})) > 0 && len(newOutputs.([]interface{})) > 0 { + if v, ok := d.GetOk("outputs"); ok { + vL := v.([]interface{})[0].(map[string]interface{}) + outputUpdate := expandKinesisAnalyticsOutputUpdate(vL) + applicationUpdate.OutputUpdates = []*kinesisanalytics.OutputUpdate{outputUpdate} + } + } + + oldReferenceData, newReferenceData := d.GetChange("reference_data_sources") + if len(oldReferenceData.([]interface{})) > 0 && len(newReferenceData.([]interface{})) > 0 { + if v := d.Get("reference_data_sources").([]interface{}); len(v) > 0 { + var rdsus []*kinesisanalytics.ReferenceDataSourceUpdate + for _, rd := range v { + rdL := rd.(map[string]interface{}) + rdsu := &kinesisanalytics.ReferenceDataSourceUpdate{ + ReferenceId: aws.String(rdL["id"].(string)), + TableNameUpdate: aws.String(rdL["table_name"].(string)), + } + + if v := rdL["s3"].([]interface{}); len(v) > 0 { + vL := v[0].(map[string]interface{}) + s3rdsu := &kinesisanalytics.S3ReferenceDataSourceUpdate{ + BucketARNUpdate: aws.String(vL["bucket_arn"].(string)), + FileKeyUpdate: aws.String(vL["file_key"].(string)), + ReferenceRoleARNUpdate: aws.String(vL["role_arn"].(string)), + } + rdsu.S3ReferenceDataSourceUpdate = s3rdsu + } + + if v := rdL["schema"].([]interface{}); len(v) > 0 { + vL := v[0].(map[string]interface{}) + ss := expandKinesisAnalyticsSourceSchema(vL) + rdsu.ReferenceSchemaUpdate = ss + } + + rdsus = append(rdsus, rdsu) + } + applicationUpdate.ReferenceDataSourceUpdates = rdsus + } + } + + return applicationUpdate, nil +} + +func expandKinesisAnalyticsInputUpdate(vL map[string]interface{}) *kinesisanalytics.InputUpdate { + inputUpdate := &kinesisanalytics.InputUpdate{ + InputId: aws.String(vL["id"].(string)), + NamePrefixUpdate: aws.String(vL["name_prefix"].(string)), + } + + if v := vL["kinesis_firehose"].([]interface{}); len(v) > 0 { + kf := v[0].(map[string]interface{}) + kfiu := &kinesisanalytics.KinesisFirehoseInputUpdate{ + ResourceARNUpdate: aws.String(kf["resource_arn"].(string)), + RoleARNUpdate: aws.String(kf["role_arn"].(string)), + } + inputUpdate.KinesisFirehoseInputUpdate = kfiu + } + + if v := vL["kinesis_stream"].([]interface{}); len(v) > 0 { + ks := v[0].(map[string]interface{}) + ksiu := &kinesisanalytics.KinesisStreamsInputUpdate{ + 
ResourceARNUpdate: aws.String(ks["resource_arn"].(string)), + RoleARNUpdate: aws.String(ks["role_arn"].(string)), + } + inputUpdate.KinesisStreamsInputUpdate = ksiu + } + + if v := vL["parallelism"].([]interface{}); len(v) > 0 { + p := v[0].(map[string]interface{}) + + if c, ok := p["count"]; ok { + ipu := &kinesisanalytics.InputParallelismUpdate{ + CountUpdate: aws.Int64(int64(c.(int))), + } + inputUpdate.InputParallelismUpdate = ipu + } + } + + if v := vL["processing_configuration"].([]interface{}); len(v) > 0 { + pc := v[0].(map[string]interface{}) + + if l := pc["lambda"].([]interface{}); len(l) > 0 { + lp := l[0].(map[string]interface{}) + ipc := &kinesisanalytics.InputProcessingConfigurationUpdate{ + InputLambdaProcessorUpdate: &kinesisanalytics.InputLambdaProcessorUpdate{ + ResourceARNUpdate: aws.String(lp["resource_arn"].(string)), + RoleARNUpdate: aws.String(lp["role_arn"].(string)), + }, + } + inputUpdate.InputProcessingConfigurationUpdate = ipc + } + } + + if v := vL["schema"].([]interface{}); len(v) > 0 { + ss := &kinesisanalytics.InputSchemaUpdate{} + vL := v[0].(map[string]interface{}) + + if v := vL["record_columns"].([]interface{}); len(v) > 0 { + var rcs []*kinesisanalytics.RecordColumn + + for _, rc := range v { + rcD := rc.(map[string]interface{}) + rc := &kinesisanalytics.RecordColumn{ + Name: aws.String(rcD["name"].(string)), + SqlType: aws.String(rcD["sql_type"].(string)), + } + + if v, ok := rcD["mapping"]; ok { + rc.Mapping = aws.String(v.(string)) + } + + rcs = append(rcs, rc) + } + + ss.RecordColumnUpdates = rcs + } + + if v, ok := vL["record_encoding"]; ok && v.(string) != "" { + ss.RecordEncodingUpdate = aws.String(v.(string)) + } + + if v := vL["record_format"].([]interface{}); len(v) > 0 { + vL := v[0].(map[string]interface{}) + rf := &kinesisanalytics.RecordFormat{} + + if v := vL["mapping_parameters"].([]interface{}); len(v) > 0 { + vL := v[0].(map[string]interface{}) + mp := &kinesisanalytics.MappingParameters{} + + if v := vL["csv"].([]interface{}); len(v) > 0 { + cL := v[0].(map[string]interface{}) + cmp := &kinesisanalytics.CSVMappingParameters{ + RecordColumnDelimiter: aws.String(cL["record_column_delimiter"].(string)), + RecordRowDelimiter: aws.String(cL["record_row_delimiter"].(string)), + } + mp.CSVMappingParameters = cmp + rf.RecordFormatType = aws.String("CSV") + } + + if v := vL["json"].([]interface{}); len(v) > 0 { + jL := v[0].(map[string]interface{}) + jmp := &kinesisanalytics.JSONMappingParameters{ + RecordRowPath: aws.String(jL["record_row_path"].(string)), + } + mp.JSONMappingParameters = jmp + rf.RecordFormatType = aws.String("JSON") + } + rf.MappingParameters = mp + } + ss.RecordFormatUpdate = rf + } + inputUpdate.InputSchemaUpdate = ss + } + + return inputUpdate +} + +func expandKinesisAnalyticsOutputUpdate(vL map[string]interface{}) *kinesisanalytics.OutputUpdate { + outputUpdate := &kinesisanalytics.OutputUpdate{ + OutputId: aws.String(vL["id"].(string)), + NameUpdate: aws.String(vL["name"].(string)), + } + + if v := vL["kinesis_firehose"].([]interface{}); len(v) > 0 { + kf := v[0].(map[string]interface{}) + kfou := &kinesisanalytics.KinesisFirehoseOutputUpdate{ + ResourceARNUpdate: aws.String(kf["resource_arn"].(string)), + RoleARNUpdate: aws.String(kf["role_arn"].(string)), + } + outputUpdate.KinesisFirehoseOutputUpdate = kfou + } + + if v := vL["kinesis_stream"].([]interface{}); len(v) > 0 { + ks := v[0].(map[string]interface{}) + ksou := &kinesisanalytics.KinesisStreamsOutputUpdate{ + ResourceARNUpdate: 
aws.String(ks["resource_arn"].(string)), + RoleARNUpdate: aws.String(ks["role_arn"].(string)), + } + outputUpdate.KinesisStreamsOutputUpdate = ksou + } + + if v := vL["lambda"].([]interface{}); len(v) > 0 { + l := v[0].(map[string]interface{}) + lou := &kinesisanalytics.LambdaOutputUpdate{ + ResourceARNUpdate: aws.String(l["resource_arn"].(string)), + RoleARNUpdate: aws.String(l["role_arn"].(string)), + } + outputUpdate.LambdaOutputUpdate = lou + } + + if v := vL["schema"].([]interface{}); len(v) > 0 { + ds := v[0].(map[string]interface{}) + dsu := &kinesisanalytics.DestinationSchema{ + RecordFormatType: aws.String(ds["record_format_type"].(string)), + } + outputUpdate.DestinationSchemaUpdate = dsu + } + + return outputUpdate +} + +func expandKinesisAnalyticsCloudwatchLoggingOptionUpdate(clo map[string]interface{}) *kinesisanalytics.CloudWatchLoggingOptionUpdate { + cloudwatchLoggingOption := &kinesisanalytics.CloudWatchLoggingOptionUpdate{ + CloudWatchLoggingOptionId: aws.String(clo["id"].(string)), + LogStreamARNUpdate: aws.String(clo["log_stream_arn"].(string)), + RoleARNUpdate: aws.String(clo["role_arn"].(string)), + } + return cloudwatchLoggingOption +} + +func flattenKinesisAnalyticsCloudwatchLoggingOptions(options []*kinesisanalytics.CloudWatchLoggingOptionDescription) []interface{} { + s := []interface{}{} + for _, v := range options { + option := map[string]interface{}{ + "id": aws.StringValue(v.CloudWatchLoggingOptionId), + "log_stream_arn": aws.StringValue(v.LogStreamARN), + "role_arn": aws.StringValue(v.RoleARN), + } + s = append(s, option) + } + return s +} + +func flattenKinesisAnalyticsInputs(inputs []*kinesisanalytics.InputDescription) []interface{} { + s := []interface{}{} + + if len(inputs) > 0 { + id := inputs[0] + + input := map[string]interface{}{ + "id": aws.StringValue(id.InputId), + "name_prefix": aws.StringValue(id.NamePrefix), + } + + list := schema.NewSet(schema.HashString, nil) + for _, sn := range id.InAppStreamNames { + list.Add(aws.StringValue(sn)) + } + input["stream_names"] = list + + if id.InputParallelism != nil { + input["parallelism"] = []interface{}{ + map[string]interface{}{ + "count": int(aws.Int64Value(id.InputParallelism.Count)), + }, + } + } + + if id.InputProcessingConfigurationDescription != nil { + ipcd := id.InputProcessingConfigurationDescription + + if ipcd.InputLambdaProcessorDescription != nil { + input["processing_configuration"] = []interface{}{ + map[string]interface{}{ + "lambda": []interface{}{ + map[string]interface{}{ + "resource_arn": aws.StringValue(ipcd.InputLambdaProcessorDescription.ResourceARN), + "role_arn": aws.StringValue(ipcd.InputLambdaProcessorDescription.RoleARN), + }, + }, + }, + } + } + } + + if id.InputSchema != nil { + inputSchema := id.InputSchema + is := []interface{}{} + rcs := []interface{}{} + ss := map[string]interface{}{ + "record_encoding": aws.StringValue(inputSchema.RecordEncoding), + } + + for _, rc := range inputSchema.RecordColumns { + rcM := map[string]interface{}{ + "mapping": aws.StringValue(rc.Mapping), + "name": aws.StringValue(rc.Name), + "sql_type": aws.StringValue(rc.SqlType), + } + rcs = append(rcs, rcM) + } + ss["record_columns"] = rcs + + if inputSchema.RecordFormat != nil { + rf := inputSchema.RecordFormat + rfM := map[string]interface{}{ + "record_format_type": aws.StringValue(rf.RecordFormatType), + } + + if rf.MappingParameters != nil { + mps := []interface{}{} + if rf.MappingParameters.CSVMappingParameters != nil { + cmp := map[string]interface{}{ + "csv": []interface{}{ + 
map[string]interface{}{ + "record_column_delimiter": aws.StringValue(rf.MappingParameters.CSVMappingParameters.RecordColumnDelimiter), + "record_row_delimiter": aws.StringValue(rf.MappingParameters.CSVMappingParameters.RecordRowDelimiter), + }, + }, + } + mps = append(mps, cmp) + } + + if rf.MappingParameters.JSONMappingParameters != nil { + jmp := map[string]interface{}{ + "json": []interface{}{ + map[string]interface{}{ + "record_row_path": aws.StringValue(rf.MappingParameters.JSONMappingParameters.RecordRowPath), + }, + }, + } + mps = append(mps, jmp) + } + + rfM["mapping_parameters"] = mps + } + ss["record_format"] = []interface{}{rfM} + } + + is = append(is, ss) + input["schema"] = is + } + + if id.InputStartingPositionConfiguration != nil && id.InputStartingPositionConfiguration.InputStartingPosition != nil { + input["starting_position_configuration"] = []interface{}{ + map[string]interface{}{ + "starting_position": aws.StringValue(id.InputStartingPositionConfiguration.InputStartingPosition), + }, + } + } + + if id.KinesisFirehoseInputDescription != nil { + input["kinesis_firehose"] = []interface{}{ + map[string]interface{}{ + "resource_arn": aws.StringValue(id.KinesisFirehoseInputDescription.ResourceARN), + "role_arn": aws.StringValue(id.KinesisFirehoseInputDescription.RoleARN), + }, + } + } + + if id.KinesisStreamsInputDescription != nil { + input["kinesis_stream"] = []interface{}{ + map[string]interface{}{ + "resource_arn": aws.StringValue(id.KinesisStreamsInputDescription.ResourceARN), + "role_arn": aws.StringValue(id.KinesisStreamsInputDescription.RoleARN), + }, + } + } + + s = append(s, input) + } + return s +} + +func flattenKinesisAnalyticsOutputs(outputs []*kinesisanalytics.OutputDescription) []interface{} { + s := []interface{}{} + + if len(outputs) > 0 { + id := outputs[0] + + output := map[string]interface{}{ + "id": aws.StringValue(id.OutputId), + "name": aws.StringValue(id.Name), + } + + if id.KinesisFirehoseOutputDescription != nil { + output["kinesis_firehose"] = []interface{}{ + map[string]interface{}{ + "resource_arn": aws.StringValue(id.KinesisFirehoseOutputDescription.ResourceARN), + "role_arn": aws.StringValue(id.KinesisFirehoseOutputDescription.RoleARN), + }, + } + } + + if id.KinesisStreamsOutputDescription != nil { + output["kinesis_stream"] = []interface{}{ + map[string]interface{}{ + "resource_arn": aws.StringValue(id.KinesisStreamsOutputDescription.ResourceARN), + "role_arn": aws.StringValue(id.KinesisStreamsOutputDescription.RoleARN), + }, + } + } + + if id.LambdaOutputDescription != nil { + output["lambda"] = []interface{}{ + map[string]interface{}{ + "resource_arn": aws.StringValue(id.LambdaOutputDescription.ResourceARN), + "role_arn": aws.StringValue(id.LambdaOutputDescription.RoleARN), + }, + } + } + + if id.DestinationSchema != nil { + output["schema"] = []interface{}{ + map[string]interface{}{ + "record_format_type": aws.StringValue(id.DestinationSchema.RecordFormatType), + }, + } + } + + s = append(s, output) + } + + return s +} + +func flattenKinesisAnalyticsReferenceDataSources(dataSources []*kinesisanalytics.ReferenceDataSourceDescription) []interface{} { + s := []interface{}{} + + if len(dataSources) > 0 { + for _, ds := range dataSources { + dataSource := map[string]interface{}{ + "id": aws.StringValue(ds.ReferenceId), + "table_name": aws.StringValue(ds.TableName), + } + + if ds.S3ReferenceDataSourceDescription != nil { + dataSource["s3"] = []interface{}{ + map[string]interface{}{ + "bucket_arn": 
aws.StringValue(ds.S3ReferenceDataSourceDescription.BucketARN), + "file_key": aws.StringValue(ds.S3ReferenceDataSourceDescription.FileKey), + "role_arn": aws.StringValue(ds.S3ReferenceDataSourceDescription.ReferenceRoleARN), + }, + } + } + + if ds.ReferenceSchema != nil { + rs := ds.ReferenceSchema + rcs := []interface{}{} + ss := map[string]interface{}{ + "record_encoding": aws.StringValue(rs.RecordEncoding), + } + + for _, rc := range rs.RecordColumns { + rcM := map[string]interface{}{ + "mapping": aws.StringValue(rc.Mapping), + "name": aws.StringValue(rc.Name), + "sql_type": aws.StringValue(rc.SqlType), + } + rcs = append(rcs, rcM) + } + ss["record_columns"] = rcs + + if rs.RecordFormat != nil { + rf := rs.RecordFormat + rfM := map[string]interface{}{ + "record_format_type": aws.StringValue(rf.RecordFormatType), + } + + if rf.MappingParameters != nil { + mps := []interface{}{} + if rf.MappingParameters.CSVMappingParameters != nil { + cmp := map[string]interface{}{ + "csv": []interface{}{ + map[string]interface{}{ + "record_column_delimiter": aws.StringValue(rf.MappingParameters.CSVMappingParameters.RecordColumnDelimiter), + "record_row_delimiter": aws.StringValue(rf.MappingParameters.CSVMappingParameters.RecordRowDelimiter), + }, + }, + } + mps = append(mps, cmp) + } + + if rf.MappingParameters.JSONMappingParameters != nil { + jmp := map[string]interface{}{ + "json": []interface{}{ + map[string]interface{}{ + "record_row_path": aws.StringValue(rf.MappingParameters.JSONMappingParameters.RecordRowPath), + }, + }, + } + mps = append(mps, jmp) + } + + rfM["mapping_parameters"] = mps + } + ss["record_format"] = []interface{}{rfM} + } + + dataSource["schema"] = []interface{}{ss} + } + + s = append(s, dataSource) + } + } + + return s +} + +func waitForDeleteKinesisAnalyticsApplication(conn *kinesisanalytics.KinesisAnalytics, applicationId string, timeout time.Duration) error { + stateConf := resource.StateChangeConf{ + Pending: []string{ + kinesisanalytics.ApplicationStatusRunning, + kinesisanalytics.ApplicationStatusDeleting, + }, + Target: []string{""}, + Timeout: timeout, + Refresh: refreshKinesisAnalyticsApplicationStatus(conn, applicationId), + } + application, err := stateConf.WaitForState() + if err != nil { + if isAWSErr(err, kinesisanalytics.ErrCodeResourceNotFoundException, "") { + return nil + } + } + if application == nil { + return nil + } + return err +} + +func refreshKinesisAnalyticsApplicationStatus(conn *kinesisanalytics.KinesisAnalytics, applicationId string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := conn.DescribeApplication(&kinesisanalytics.DescribeApplicationInput{ + ApplicationName: aws.String(applicationId), + }) + if err != nil { + return nil, "", err + } + application := output.ApplicationDetail + if application == nil { + return application, "", fmt.Errorf("Kinesis Analytics Application (%s) could not be found.", applicationId) + } + return application, aws.StringValue(application.ApplicationStatus), nil + } +} diff --git a/aws/resource_aws_kinesis_analytics_application_test.go b/aws/resource_aws_kinesis_analytics_application_test.go new file mode 100644 index 00000000000..650f0c182a6 --- /dev/null +++ b/aws/resource_aws_kinesis_analytics_application_test.go @@ -0,0 +1,707 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kinesisanalytics" + "github.com/hashicorp/terraform/helper/acctest" + 
"github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSKinesisAnalyticsApplication_basic(t *testing.T) { + var application kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccKinesisAnalyticsApplication_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + resource.TestCheckResourceAttr(resName, "version", "1"), + resource.TestCheckResourceAttr(resName, "code", "testCode\n"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_update(t *testing.T) { + var application kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccKinesisAnalyticsApplication_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + ), + }, + { + Config: testAccKinesisAnalyticsApplication_update(rInt), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "code", "testCode2\n"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_addCloudwatchLoggingOptions(t *testing.T) { + var application kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_basic(rInt) + thirdStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_cloudwatchLoggingOptions(rInt, "testStream") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + resource.TestCheckResourceAttr(resName, "version", "1"), + ), + }, + { + Config: thirdStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "cloudwatch_logging_options.#", "1"), + resource.TestCheckResourceAttrPair(resName, "cloudwatch_logging_options.0.log_stream_arn", "aws_cloudwatch_log_stream.test", "arn"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_updateCloudwatchLoggingOptions(t *testing.T) { + var application kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_cloudwatchLoggingOptions(rInt, "testStream") + secondStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_cloudwatchLoggingOptions(rInt, 
"testStream2") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + resource.TestCheckResourceAttr(resName, "version", "1"), + resource.TestCheckResourceAttr(resName, "cloudwatch_logging_options.#", "1"), + resource.TestCheckResourceAttrPair(resName, "cloudwatch_logging_options.0.log_stream_arn", "aws_cloudwatch_log_stream.test", "arn"), + ), + }, + { + Config: secondStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "cloudwatch_logging_options.#", "1"), + resource.TestCheckResourceAttrPair(resName, "cloudwatch_logging_options.0.log_stream_arn", "aws_cloudwatch_log_stream.test", "arn"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_inputsKinesisStream(t *testing.T) { + var application kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_inputsKinesisStream(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + resource.TestCheckResourceAttr(resName, "version", "1"), + resource.TestCheckResourceAttr(resName, "inputs.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.name_prefix", "test_prefix"), + resource.TestCheckResourceAttr(resName, "inputs.0.kinesis_stream.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.parallelism.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_columns.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_format.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_format.0.mapping_parameters.0.json.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_inputsAdd(t *testing.T) { + var before, after kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_basic(rInt) + secondStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_inputsKinesisStream(rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &before), + resource.TestCheckResourceAttr(resName, "version", "1"), + resource.TestCheckResourceAttr(resName, "inputs.#", "0"), + ), + }, + { + Config: secondStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &after), + 
resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "inputs.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.name_prefix", "test_prefix"), + resource.TestCheckResourceAttr(resName, "inputs.0.kinesis_stream.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.parallelism.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_columns.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_format.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_format.0.mapping_parameters.0.json.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_inputsUpdateKinesisStream(t *testing.T) { + var before, after kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_inputsKinesisStream(rInt) + secondStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_inputsUpdateKinesisStream(rInt, "testStream") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &before), + resource.TestCheckResourceAttr(resName, "version", "1"), + resource.TestCheckResourceAttr(resName, "inputs.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.name_prefix", "test_prefix"), + resource.TestCheckResourceAttr(resName, "inputs.0.parallelism.0.count", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_format.0.mapping_parameters.0.json.#", "1"), + ), + }, + { + Config: secondStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &after), + resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "inputs.#", "1"), + resource.TestCheckResourceAttr(resName, "inputs.0.name_prefix", "test_prefix2"), + resource.TestCheckResourceAttrPair(resName, "inputs.0.kinesis_stream.0.resource_arn", "aws_kinesis_stream.test", "arn"), + resource.TestCheckResourceAttr(resName, "inputs.0.parallelism.0.count", "2"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_columns.0.name", "test2"), + resource.TestCheckResourceAttr(resName, "inputs.0.schema.0.record_format.0.mapping_parameters.0.csv.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_outputsKinesisStream(t *testing.T) { + var application kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_outputsKinesisStream(rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + resource.TestCheckResourceAttr(resName, "version", "1"), + resource.TestCheckResourceAttr(resName, "outputs.#", 
"1"), + resource.TestCheckResourceAttr(resName, "outputs.0.name", "test_name"), + resource.TestCheckResourceAttr(resName, "outputs.0.kinesis_stream.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.schema.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.schema.0.record_format_type", "JSON"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_outputsAdd(t *testing.T) { + var before, after kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_basic(rInt) + secondStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_outputsKinesisStream(rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &before), + resource.TestCheckResourceAttr(resName, "version", "1"), + resource.TestCheckResourceAttr(resName, "outputs.#", "0"), + ), + }, + { + Config: secondStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &after), + resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "outputs.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.name", "test_name"), + resource.TestCheckResourceAttr(resName, "outputs.0.kinesis_stream.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.schema.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_outputsUpdateKinesisStream(t *testing.T) { + var before, after kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_outputsKinesisStream(rInt) + secondStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_outputsUpdateKinesisStream(rInt, "testStream") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &before), + resource.TestCheckResourceAttr(resName, "version", "1"), + resource.TestCheckResourceAttr(resName, "outputs.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.name", "test_name"), + resource.TestCheckResourceAttr(resName, "outputs.0.kinesis_stream.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.schema.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.schema.0.record_format_type", "JSON"), + ), + }, + { + Config: secondStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &after), + resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "outputs.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.name", "test_name2"), + resource.TestCheckResourceAttr(resName, "outputs.0.kinesis_stream.#", "1"), + resource.TestCheckResourceAttrPair(resName, "outputs.0.kinesis_stream.0.resource_arn", 
"aws_kinesis_stream.test", "arn"), + resource.TestCheckResourceAttr(resName, "outputs.0.schema.#", "1"), + resource.TestCheckResourceAttr(resName, "outputs.0.schema.0.record_format_type", "CSV"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_referenceDataSource(t *testing.T) { + var application kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_referenceDataSource(rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &application), + resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "reference_data_sources.#", "1"), + resource.TestCheckResourceAttr(resName, "reference_data_sources.0.schema.#", "1"), + resource.TestCheckResourceAttr(resName, "reference_data_sources.0.schema.0.record_columns.#", "1"), + resource.TestCheckResourceAttr(resName, "reference_data_sources.0.schema.0.record_format.#", "1"), + resource.TestCheckResourceAttr(resName, "reference_data_sources.0.schema.0.record_format.0.mapping_parameters.0.json.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisAnalyticsApplication_referenceDataSourceUpdate(t *testing.T) { + var before, after kinesisanalytics.ApplicationDetail + resName := "aws_kinesis_analytics_application.test" + rInt := acctest.RandInt() + firstStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_referenceDataSource(rInt) + secondStep := testAccKinesisAnalyticsApplication_prereq(rInt) + testAccKinesisAnalyticsApplication_referenceDataSourceUpdate(rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisAnalyticsApplicationDestroy, + Steps: []resource.TestStep{ + { + Config: firstStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &before), + resource.TestCheckResourceAttr(resName, "version", "2"), + resource.TestCheckResourceAttr(resName, "reference_data_sources.#", "1"), + ), + }, + { + Config: secondStep, + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisAnalyticsApplicationExists(resName, &after), + resource.TestCheckResourceAttr(resName, "version", "3"), + resource.TestCheckResourceAttr(resName, "reference_data_sources.#", "1"), + ), + }, + }, + }) +} + +func testAccCheckKinesisAnalyticsApplicationDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_kinesis_analytics_application" { + continue + } + conn := testAccProvider.Meta().(*AWSClient).kinesisanalyticsconn + describeOpts := &kinesisanalytics.DescribeApplicationInput{ + ApplicationName: aws.String(rs.Primary.Attributes["name"]), + } + resp, err := conn.DescribeApplication(describeOpts) + if err == nil { + if resp.ApplicationDetail != nil && *resp.ApplicationDetail.ApplicationStatus != kinesisanalytics.ApplicationStatusDeleting { + return fmt.Errorf("Error: Application still exists") + } + } + } + return nil +} + +func testAccCheckKinesisAnalyticsApplicationExists(n string, application *kinesisanalytics.ApplicationDetail) 
resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Kinesis Analytics Application ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).kinesisanalyticsconn + describeOpts := &kinesisanalytics.DescribeApplicationInput{ + ApplicationName: aws.String(rs.Primary.Attributes["name"]), + } + resp, err := conn.DescribeApplication(describeOpts) + if err != nil { + return err + } + + *application = *resp.ApplicationDetail + + return nil + } +} + +func testAccKinesisAnalyticsApplication_basic(rInt int) string { + return fmt.Sprintf(` +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + code = "testCode\n" +} +`, rInt) +} + +func testAccKinesisAnalyticsApplication_update(rInt int) string { + return fmt.Sprintf(` +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + code = "testCode2\n" +} +`, rInt) +} + +func testAccKinesisAnalyticsApplication_cloudwatchLoggingOptions(rInt int, streamName string) string { + return fmt.Sprintf(` +resource "aws_cloudwatch_log_group" "test" { + name = "testAcc-%d" +} + +resource "aws_cloudwatch_log_stream" "test" { + name = "testAcc-%s-%d" + log_group_name = "${aws_cloudwatch_log_group.test.name}" +} + +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + code = "testCode\n" + + cloudwatch_logging_options { + log_stream_arn = "${aws_cloudwatch_log_stream.test.arn}" + role_arn = "${aws_iam_role.test.arn}" + } +} +`, rInt, streamName, rInt, rInt) +} + +func testAccKinesisAnalyticsApplication_inputsKinesisStream(rInt int) string { + return fmt.Sprintf(` +resource "aws_kinesis_stream" "test" { + name = "testAcc-%d" + shard_count = 1 +} + +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + code = "testCode\n" + + inputs { + name_prefix = "test_prefix" + kinesis_stream { + resource_arn = "${aws_kinesis_stream.test.arn}" + role_arn = "${aws_iam_role.test.arn}" + } + parallelism { + count = 1 + } + schema { + record_columns { + mapping = "$.test" + name = "test" + sql_type = "VARCHAR(8)" + } + record_encoding = "UTF-8" + record_format { + mapping_parameters { + json { + record_row_path = "$" + } + } + } + } + } +} +`, rInt, rInt) +} + +func testAccKinesisAnalyticsApplication_inputsUpdateKinesisStream(rInt int, streamName string) string { + return fmt.Sprintf(` +resource "aws_kinesis_stream" "test" { + name = "testAcc-%s-%d" + shard_count = 1 +} + +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + code = "testCode\n" + + inputs { + name_prefix = "test_prefix2" + kinesis_stream { + resource_arn = "${aws_kinesis_stream.test.arn}" + role_arn = "${aws_iam_role.test.arn}" + } + parallelism { + count = 2 + } + schema { + record_columns { + mapping = "$.test2" + name = "test2" + sql_type = "VARCHAR(8)" + } + record_encoding = "UTF-8" + record_format { + mapping_parameters { + csv { + record_column_delimiter = "," + record_row_delimiter = "\n" + } + } + } + } + } +} +`, streamName, rInt, rInt) +} + +func testAccKinesisAnalyticsApplication_outputsKinesisStream(rInt int) string { + return fmt.Sprintf(` +resource "aws_kinesis_stream" "test" { + name = "testAcc-%d" + shard_count = 1 +} + +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + code = "testCode\n" + + outputs { + name = "test_name" + kinesis_stream { + resource_arn = "${aws_kinesis_stream.test.arn}" + 
role_arn = "${aws_iam_role.test.arn}" + } + schema { + record_format_type = "JSON" + } + } +} +`, rInt, rInt) +} + +func testAccKinesisAnalyticsApplication_outputsUpdateKinesisStream(rInt int, streamName string) string { + return fmt.Sprintf(` +resource "aws_kinesis_stream" "test" { + name = "testAcc-%s-%d" + shard_count = 1 +} + +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + code = "testCode\n" + + outputs { + name = "test_name2" + kinesis_stream { + resource_arn = "${aws_kinesis_stream.test.arn}" + role_arn = "${aws_iam_role.test.arn}" + } + schema { + record_format_type = "CSV" + } + } +} +`, streamName, rInt, rInt) +} + +func testAccKinesisAnalyticsApplication_referenceDataSource(rInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "testacc-%d" +} + +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + + reference_data_sources { + table_name = "test_table" + s3 { + bucket_arn = "${aws_s3_bucket.test.arn}" + file_key = "test_file_key" + role_arn = "${aws_iam_role.test.arn}" + } + schema { + record_columns { + mapping = "$.test" + name = "test" + sql_type = "VARCHAR(8)" + } + record_encoding = "UTF-8" + record_format { + mapping_parameters { + json { + record_row_path = "$" + } + } + } + } + } +} +`, rInt, rInt) +} + +func testAccKinesisAnalyticsApplication_referenceDataSourceUpdate(rInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "testacc2-%d" +} + +resource "aws_kinesis_analytics_application" "test" { + name = "testAcc-%d" + + reference_data_sources { + table_name = "test_table2" + s3 { + bucket_arn = "${aws_s3_bucket.test.arn}" + file_key = "test_file_key" + role_arn = "${aws_iam_role.test.arn}" + } + schema { + record_columns { + mapping = "$.test2" + name = "test2" + sql_type = "VARCHAR(8)" + } + record_encoding = "UTF-8" + record_format { + mapping_parameters { + csv { + record_column_delimiter = "," + record_row_delimiter = "\n" + } + } + } + } + } +} +`, rInt, rInt) +} + +// this is used to set up the IAM role +func testAccKinesisAnalyticsApplication_prereq(rInt int) string { + return fmt.Sprintf(` +data "aws_iam_policy_document" "test" { + statement { + actions = ["sts:AssumeRole"] + principals { + type = "Service" + identifiers = ["kinesisanalytics.amazonaws.com"] + } + } +} + +resource "aws_iam_role" "test" { + name = "testAcc-%d" + assume_role_policy = "${data.aws_iam_policy_document.test.json}" +} +`, rInt) +} diff --git a/aws/resource_aws_kinesis_firehose_delivery_stream.go b/aws/resource_aws_kinesis_firehose_delivery_stream.go index 3534c8e6809..db100a15439 100644 --- a/aws/resource_aws_kinesis_firehose_delivery_stream.go +++ b/aws/resource_aws_kinesis_firehose_delivery_stream.go @@ -13,6 +13,7 @@ import ( "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func cloudWatchLoggingOptionsSchema() *schema.Schema { @@ -121,9 +122,9 @@ func processingConfigurationSchema() *schema.Schema { Required: true, ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { value := v.(string) - if value != "LambdaArn" && value != "NumberOfRetries" { + if value != "LambdaArn" && value != "NumberOfRetries" && value != "RoleArn" && value != "BufferSizeInMBs" && value != "BufferIntervalInSeconds" { errors = append(errors, fmt.Errorf( - "%q must be one of 'LambdaArn', 
'NumberOfRetries'", k)) + "%q must be one of 'LambdaArn', 'NumberOfRetries', 'RoleArn', 'BufferSizeInMBs', 'BufferIntervalInSeconds'", k)) } return }, @@ -174,41 +175,364 @@ func cloudwatchLoggingOptionsHash(v interface{}) int { return hashcode.String(buf.String()) } -func flattenCloudwatchLoggingOptions(clo firehose.CloudWatchLoggingOptions) *schema.Set { +func flattenCloudwatchLoggingOptions(clo *firehose.CloudWatchLoggingOptions) *schema.Set { + if clo == nil { + return schema.NewSet(cloudwatchLoggingOptionsHash, []interface{}{}) + } + cloudwatchLoggingOptions := map[string]interface{}{ - "enabled": *clo.Enabled, + "enabled": aws.BoolValue(clo.Enabled), } - if *clo.Enabled { - cloudwatchLoggingOptions["log_group_name"] = *clo.LogGroupName - cloudwatchLoggingOptions["log_stream_name"] = *clo.LogStreamName + if aws.BoolValue(clo.Enabled) { + cloudwatchLoggingOptions["log_group_name"] = aws.StringValue(clo.LogGroupName) + cloudwatchLoggingOptions["log_stream_name"] = aws.StringValue(clo.LogStreamName) } return schema.NewSet(cloudwatchLoggingOptionsHash, []interface{}{cloudwatchLoggingOptions}) } -func flattenFirehoseS3Configuration(s3 firehose.S3DestinationDescription) []interface{} { - s3Configuration := map[string]interface{}{ - "role_arn": *s3.RoleARN, - "bucket_arn": *s3.BucketARN, - "buffer_size": *s3.BufferingHints.SizeInMBs, - "buffer_interval": *s3.BufferingHints.IntervalInSeconds, - "compression_format": *s3.CompressionFormat, +func flattenFirehoseElasticsearchConfiguration(description *firehose.ElasticsearchDestinationDescription) []map[string]interface{} { + if description == nil { + return []map[string]interface{}{} } - if s3.CloudWatchLoggingOptions != nil { - s3Configuration["cloudwatch_logging_options"] = flattenCloudwatchLoggingOptions(*s3.CloudWatchLoggingOptions) + + m := map[string]interface{}{ + "cloudwatch_logging_options": flattenCloudwatchLoggingOptions(description.CloudWatchLoggingOptions), + "domain_arn": aws.StringValue(description.DomainARN), + "role_arn": aws.StringValue(description.RoleARN), + "type_name": aws.StringValue(description.TypeName), + "index_name": aws.StringValue(description.IndexName), + "s3_backup_mode": aws.StringValue(description.S3BackupMode), + "index_rotation_period": aws.StringValue(description.IndexRotationPeriod), + "processing_configuration": flattenProcessingConfiguration(description.ProcessingConfiguration, aws.StringValue(description.RoleARN)), } - if s3.EncryptionConfiguration.KMSEncryptionConfig != nil { - s3Configuration["kms_key_arn"] = *s3.EncryptionConfiguration.KMSEncryptionConfig.AWSKMSKeyARN + + if description.BufferingHints != nil { + m["buffering_interval"] = int(aws.Int64Value(description.BufferingHints.IntervalInSeconds)) + m["buffering_size"] = int(aws.Int64Value(description.BufferingHints.SizeInMBs)) } - if s3.Prefix != nil { - s3Configuration["prefix"] = *s3.Prefix + + if description.RetryOptions != nil { + m["retry_duration"] = int(aws.Int64Value(description.RetryOptions.DurationInSeconds)) } - return []interface{}{s3Configuration} + + return []map[string]interface{}{m} } -func flattenProcessingConfiguration(pc firehose.ProcessingConfiguration, roleArn string) []map[string]interface{} { +func flattenFirehoseExtendedS3Configuration(description *firehose.ExtendedS3DestinationDescription) []map[string]interface{} { + if description == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "bucket_arn": aws.StringValue(description.BucketARN), + "cloudwatch_logging_options": 
flattenCloudwatchLoggingOptions(description.CloudWatchLoggingOptions), + "compression_format": aws.StringValue(description.CompressionFormat), + "data_format_conversion_configuration": flattenFirehoseDataFormatConversionConfiguration(description.DataFormatConversionConfiguration), + "prefix": aws.StringValue(description.Prefix), + "processing_configuration": flattenProcessingConfiguration(description.ProcessingConfiguration, aws.StringValue(description.RoleARN)), + "role_arn": aws.StringValue(description.RoleARN), + "s3_backup_configuration": flattenFirehoseS3Configuration(description.S3BackupDescription), + "s3_backup_mode": aws.StringValue(description.S3BackupMode), + } + + if description.BufferingHints != nil { + m["buffer_interval"] = int(aws.Int64Value(description.BufferingHints.IntervalInSeconds)) + m["buffer_size"] = int(aws.Int64Value(description.BufferingHints.SizeInMBs)) + } + + if description.EncryptionConfiguration != nil && description.EncryptionConfiguration.KMSEncryptionConfig != nil { + m["kms_key_arn"] = aws.StringValue(description.EncryptionConfiguration.KMSEncryptionConfig.AWSKMSKeyARN) + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseRedshiftConfiguration(description *firehose.RedshiftDestinationDescription, configuredPassword string) []map[string]interface{} { + if description == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "cloudwatch_logging_options": flattenCloudwatchLoggingOptions(description.CloudWatchLoggingOptions), + "cluster_jdbcurl": aws.StringValue(description.ClusterJDBCURL), + "password": configuredPassword, + "processing_configuration": flattenProcessingConfiguration(description.ProcessingConfiguration, aws.StringValue(description.RoleARN)), + "role_arn": aws.StringValue(description.RoleARN), + "s3_backup_configuration": flattenFirehoseS3Configuration(description.S3BackupDescription), + "s3_backup_mode": aws.StringValue(description.S3BackupMode), + "username": aws.StringValue(description.Username), + } + + if description.CopyCommand != nil { + m["copy_options"] = aws.StringValue(description.CopyCommand.CopyOptions) + m["data_table_columns"] = aws.StringValue(description.CopyCommand.DataTableColumns) + m["data_table_name"] = aws.StringValue(description.CopyCommand.DataTableName) + } + + if description.RetryOptions != nil { + m["retry_duration"] = int(aws.Int64Value(description.RetryOptions.DurationInSeconds)) + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseSplunkConfiguration(description *firehose.SplunkDestinationDescription) []map[string]interface{} { + if description == nil { + return []map[string]interface{}{} + } + m := map[string]interface{}{ + "cloudwatch_logging_options": flattenCloudwatchLoggingOptions(description.CloudWatchLoggingOptions), + "hec_acknowledgment_timeout": int(aws.Int64Value(description.HECAcknowledgmentTimeoutInSeconds)), + "hec_endpoint_type": aws.StringValue(description.HECEndpointType), + "hec_endpoint": aws.StringValue(description.HECEndpoint), + "hec_token": aws.StringValue(description.HECToken), + "processing_configuration": flattenProcessingConfiguration(description.ProcessingConfiguration, ""), + "s3_backup_mode": aws.StringValue(description.S3BackupMode), + } + + if description.RetryOptions != nil { + m["retry_duration"] = int(aws.Int64Value(description.RetryOptions.DurationInSeconds)) + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseS3Configuration(description *firehose.S3DestinationDescription) []map[string]interface{} { + 
if description == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "bucket_arn": aws.StringValue(description.BucketARN), + "cloudwatch_logging_options": flattenCloudwatchLoggingOptions(description.CloudWatchLoggingOptions), + "compression_format": aws.StringValue(description.CompressionFormat), + "prefix": aws.StringValue(description.Prefix), + "role_arn": aws.StringValue(description.RoleARN), + } + + if description.BufferingHints != nil { + m["buffer_interval"] = int(aws.Int64Value(description.BufferingHints.IntervalInSeconds)) + m["buffer_size"] = int(aws.Int64Value(description.BufferingHints.SizeInMBs)) + } + + if description.EncryptionConfiguration != nil && description.EncryptionConfiguration.KMSEncryptionConfig != nil { + m["kms_key_arn"] = aws.StringValue(description.EncryptionConfiguration.KMSEncryptionConfig.AWSKMSKeyARN) + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseDataFormatConversionConfiguration(dfcc *firehose.DataFormatConversionConfiguration) []map[string]interface{} { + if dfcc == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "enabled": aws.BoolValue(dfcc.Enabled), + "input_format_configuration": flattenFirehoseInputFormatConfiguration(dfcc.InputFormatConfiguration), + "output_format_configuration": flattenFirehoseOutputFormatConfiguration(dfcc.OutputFormatConfiguration), + "schema_configuration": flattenFirehoseSchemaConfiguration(dfcc.SchemaConfiguration), + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseInputFormatConfiguration(ifc *firehose.InputFormatConfiguration) []map[string]interface{} { + if ifc == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "deserializer": flattenFirehoseDeserializer(ifc.Deserializer), + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseDeserializer(deserializer *firehose.Deserializer) []map[string]interface{} { + if deserializer == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "hive_json_ser_de": flattenFirehoseHiveJsonSerDe(deserializer.HiveJsonSerDe), + "open_x_json_ser_de": flattenFirehoseOpenXJsonSerDe(deserializer.OpenXJsonSerDe), + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseHiveJsonSerDe(hjsd *firehose.HiveJsonSerDe) []map[string]interface{} { + if hjsd == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "timestamp_formats": flattenStringList(hjsd.TimestampFormats), + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseOpenXJsonSerDe(oxjsd *firehose.OpenXJsonSerDe) []map[string]interface{} { + if oxjsd == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "column_to_json_key_mappings": aws.StringValueMap(oxjsd.ColumnToJsonKeyMappings), + "convert_dots_in_json_keys_to_underscores": aws.BoolValue(oxjsd.ConvertDotsInJsonKeysToUnderscores), + } + + // API omits default values + // Return defaults that are not type zero values to prevent extraneous difference + + m["case_insensitive"] = true + if oxjsd.CaseInsensitive != nil { + m["case_insensitive"] = aws.BoolValue(oxjsd.CaseInsensitive) + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseOutputFormatConfiguration(ofc *firehose.OutputFormatConfiguration) []map[string]interface{} { + if ofc == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "serializer": flattenFirehoseSerializer(ofc.Serializer), + } + + return []map[string]interface{}{m} +} + +func 
flattenFirehoseSerializer(serializer *firehose.Serializer) []map[string]interface{} { + if serializer == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "orc_ser_de": flattenFirehoseOrcSerDe(serializer.OrcSerDe), + "parquet_ser_de": flattenFirehoseParquetSerDe(serializer.ParquetSerDe), + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseOrcSerDe(osd *firehose.OrcSerDe) []map[string]interface{} { + if osd == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "bloom_filter_columns": aws.StringValueSlice(osd.BloomFilterColumns), + "dictionary_key_threshold": aws.Float64Value(osd.DictionaryKeyThreshold), + "enable_padding": aws.BoolValue(osd.EnablePadding), + } + + // API omits default values + // Return defaults that are not type zero values to prevent extraneous difference + + m["block_size_bytes"] = 268435456 + if osd.BlockSizeBytes != nil { + m["block_size_bytes"] = int(aws.Int64Value(osd.BlockSizeBytes)) + } + + m["bloom_filter_false_positive_probability"] = 0.05 + if osd.BloomFilterFalsePositiveProbability != nil { + m["bloom_filter_false_positive_probability"] = aws.Float64Value(osd.BloomFilterFalsePositiveProbability) + } + + m["compression"] = firehose.OrcCompressionSnappy + if osd.Compression != nil { + m["compression"] = aws.StringValue(osd.Compression) + } + + m["format_version"] = firehose.OrcFormatVersionV012 + if osd.FormatVersion != nil { + m["format_version"] = aws.StringValue(osd.FormatVersion) + } + + m["padding_tolerance"] = 0.05 + if osd.PaddingTolerance != nil { + m["padding_tolerance"] = aws.Float64Value(osd.PaddingTolerance) + } + + m["row_index_stride"] = 10000 + if osd.RowIndexStride != nil { + m["row_index_stride"] = int(aws.Int64Value(osd.RowIndexStride)) + } + + m["stripe_size_bytes"] = 67108864 + if osd.StripeSizeBytes != nil { + m["stripe_size_bytes"] = int(aws.Int64Value(osd.StripeSizeBytes)) + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseParquetSerDe(psd *firehose.ParquetSerDe) []map[string]interface{} { + if psd == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "enable_dictionary_compression": aws.BoolValue(psd.EnableDictionaryCompression), + "max_padding_bytes": int(aws.Int64Value(psd.MaxPaddingBytes)), + } + + // API omits default values + // Return defaults that are not type zero values to prevent extraneous difference + + m["block_size_bytes"] = 268435456 + if psd.BlockSizeBytes != nil { + m["block_size_bytes"] = int(aws.Int64Value(psd.BlockSizeBytes)) + } + + m["compression"] = firehose.ParquetCompressionSnappy + if psd.Compression != nil { + m["compression"] = aws.StringValue(psd.Compression) + } + + m["page_size_bytes"] = 1048576 + if psd.PageSizeBytes != nil { + m["page_size_bytes"] = int(aws.Int64Value(psd.PageSizeBytes)) + } + + m["writer_version"] = firehose.ParquetWriterVersionV1 + if psd.WriterVersion != nil { + m["writer_version"] = aws.StringValue(psd.WriterVersion) + } + + return []map[string]interface{}{m} +} + +func flattenFirehoseSchemaConfiguration(sc *firehose.SchemaConfiguration) []map[string]interface{} { + if sc == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "catalog_id": aws.StringValue(sc.CatalogId), + "database_name": aws.StringValue(sc.DatabaseName), + "region": aws.StringValue(sc.Region), + "role_arn": aws.StringValue(sc.RoleARN), + "table_name": aws.StringValue(sc.TableName), + "version_id": aws.StringValue(sc.VersionId), + } + + return 
[]map[string]interface{}{m} +} + +func flattenProcessingConfiguration(pc *firehose.ProcessingConfiguration, roleArn string) []map[string]interface{} { + if pc == nil { + return []map[string]interface{}{} + } + processingConfiguration := make([]map[string]interface{}, 1) - // It is necessary to explicitely filter this out + // It is necessary to explicitly filter this out // to prevent diffs during routine use and retain the ability // to show diffs if any field has drifted defaultLambdaParams := map[string]string{ @@ -220,11 +544,12 @@ func flattenProcessingConfiguration(pc firehose.ProcessingConfiguration, roleArn processors := make([]interface{}, len(pc.Processors), len(pc.Processors)) for i, p := range pc.Processors { - t := *p.Type + t := aws.StringValue(p.Type) parameters := make([]interface{}, 0) for _, params := range p.Parameters { - name, value := *params.ParameterName, *params.ParameterValue + name := aws.StringValue(params.ParameterName) + value := aws.StringValue(params.ParameterValue) if t == firehose.ProcessorTypeLambda { // Ignore defaults @@ -245,7 +570,7 @@ func flattenProcessingConfiguration(pc firehose.ProcessingConfiguration, roleArn } } processingConfiguration[0] = map[string]interface{}{ - "enabled": *pc.Enabled, + "enabled": aws.BoolValue(pc.Enabled), "processors": processors, } return processingConfiguration @@ -259,110 +584,41 @@ func flattenKinesisFirehoseDeliveryStream(d *schema.ResourceData, s *firehose.De destination := s.Destinations[0] if destination.RedshiftDestinationDescription != nil { d.Set("destination", "redshift") - password := d.Get("redshift_configuration.0.password").(string) - - redshiftConfiguration := map[string]interface{}{ - "cluster_jdbcurl": *destination.RedshiftDestinationDescription.ClusterJDBCURL, - "role_arn": *destination.RedshiftDestinationDescription.RoleARN, - "username": *destination.RedshiftDestinationDescription.Username, - "password": password, - "data_table_name": *destination.RedshiftDestinationDescription.CopyCommand.DataTableName, - "copy_options": *destination.RedshiftDestinationDescription.CopyCommand.CopyOptions, - "data_table_columns": *destination.RedshiftDestinationDescription.CopyCommand.DataTableColumns, - "s3_backup_mode": *destination.RedshiftDestinationDescription.S3BackupMode, - "retry_duration": *destination.RedshiftDestinationDescription.RetryOptions.DurationInSeconds, + configuredPassword := d.Get("redshift_configuration.0.password").(string) + if err := d.Set("redshift_configuration", flattenFirehoseRedshiftConfiguration(destination.RedshiftDestinationDescription, configuredPassword)); err != nil { + return fmt.Errorf("error setting redshift_configuration: %s", err) } - - if v := destination.RedshiftDestinationDescription.CloudWatchLoggingOptions; v != nil { - redshiftConfiguration["cloudwatch_logging_options"] = flattenCloudwatchLoggingOptions(*v) + if err := d.Set("s3_configuration", flattenFirehoseS3Configuration(destination.RedshiftDestinationDescription.S3DestinationDescription)); err != nil { + return fmt.Errorf("error setting s3_configuration: %s", err) } - - if v := destination.RedshiftDestinationDescription.S3BackupDescription; v != nil { - redshiftConfiguration["s3_backup_configuration"] = flattenFirehoseS3Configuration(*v) - } - - redshiftConfList := make([]map[string]interface{}, 1) - redshiftConfList[0] = redshiftConfiguration - d.Set("redshift_configuration", redshiftConfList) - d.Set("s3_configuration", 
flattenFirehoseS3Configuration(*destination.RedshiftDestinationDescription.S3DestinationDescription)) - } else if destination.ElasticsearchDestinationDescription != nil { d.Set("destination", "elasticsearch") - - elasticsearchConfiguration := map[string]interface{}{ - "buffering_interval": *destination.ElasticsearchDestinationDescription.BufferingHints.IntervalInSeconds, - "buffering_size": *destination.ElasticsearchDestinationDescription.BufferingHints.SizeInMBs, - "domain_arn": *destination.ElasticsearchDestinationDescription.DomainARN, - "role_arn": *destination.ElasticsearchDestinationDescription.RoleARN, - "type_name": *destination.ElasticsearchDestinationDescription.TypeName, - "index_name": *destination.ElasticsearchDestinationDescription.IndexName, - "s3_backup_mode": *destination.ElasticsearchDestinationDescription.S3BackupMode, - "retry_duration": *destination.ElasticsearchDestinationDescription.RetryOptions.DurationInSeconds, - "index_rotation_period": *destination.ElasticsearchDestinationDescription.IndexRotationPeriod, + if err := d.Set("elasticsearch_configuration", flattenFirehoseElasticsearchConfiguration(destination.ElasticsearchDestinationDescription)); err != nil { + return fmt.Errorf("error setting elasticsearch_configuration: %s", err) } - - if v := destination.ElasticsearchDestinationDescription.CloudWatchLoggingOptions; v != nil { - elasticsearchConfiguration["cloudwatch_logging_options"] = flattenCloudwatchLoggingOptions(*v) + if err := d.Set("s3_configuration", flattenFirehoseS3Configuration(destination.ElasticsearchDestinationDescription.S3DestinationDescription)); err != nil { + return fmt.Errorf("error setting s3_configuration: %s", err) } - - elasticsearchConfList := make([]map[string]interface{}, 1) - elasticsearchConfList[0] = elasticsearchConfiguration - d.Set("elasticsearch_configuration", elasticsearchConfList) - d.Set("s3_configuration", flattenFirehoseS3Configuration(*destination.ElasticsearchDestinationDescription.S3DestinationDescription)) } else if destination.SplunkDestinationDescription != nil { d.Set("destination", "splunk") - - splunkConfiguration := map[string]interface{}{ - "hec_acknowledgment_timeout": *destination.SplunkDestinationDescription.HECAcknowledgmentTimeoutInSeconds, - "hec_endpoint": *destination.SplunkDestinationDescription.HECEndpoint, - "hec_endpoint_type": *destination.SplunkDestinationDescription.HECEndpointType, - "hec_token": *destination.SplunkDestinationDescription.HECToken, - "s3_backup_mode": *destination.SplunkDestinationDescription.S3BackupMode, - "retry_duration": *destination.SplunkDestinationDescription.RetryOptions.DurationInSeconds, + if err := d.Set("splunk_configuration", flattenFirehoseSplunkConfiguration(destination.SplunkDestinationDescription)); err != nil { + return fmt.Errorf("error setting splunk_configuration: %s", err) } - - if v := destination.SplunkDestinationDescription.CloudWatchLoggingOptions; v != nil { - splunkConfiguration["cloudwatch_logging_options"] = flattenCloudwatchLoggingOptions(*v) + if err := d.Set("s3_configuration", flattenFirehoseS3Configuration(destination.SplunkDestinationDescription.S3DestinationDescription)); err != nil { + return fmt.Errorf("error setting s3_configuration: %s", err) } - - splunkConfList := make([]map[string]interface{}, 1) - splunkConfList[0] = splunkConfiguration - d.Set("splunk_configuration", splunkConfList) - d.Set("s3_configuration", flattenFirehoseS3Configuration(*destination.SplunkDestinationDescription.S3DestinationDescription)) } else if 
d.Get("destination").(string) == "s3" { d.Set("destination", "s3") - d.Set("s3_configuration", flattenFirehoseS3Configuration(*destination.S3DestinationDescription)) + if err := d.Set("s3_configuration", flattenFirehoseS3Configuration(destination.S3DestinationDescription)); err != nil { + return fmt.Errorf("error setting s3_configuration: %s", err) + } } else { d.Set("destination", "extended_s3") - - roleArn := *destination.ExtendedS3DestinationDescription.RoleARN - extendedS3Configuration := map[string]interface{}{ - "buffer_interval": *destination.ExtendedS3DestinationDescription.BufferingHints.IntervalInSeconds, - "buffer_size": *destination.ExtendedS3DestinationDescription.BufferingHints.SizeInMBs, - "bucket_arn": *destination.ExtendedS3DestinationDescription.BucketARN, - "role_arn": roleArn, - "compression_format": *destination.ExtendedS3DestinationDescription.CompressionFormat, - "prefix": *destination.ExtendedS3DestinationDescription.Prefix, - "cloudwatch_logging_options": flattenCloudwatchLoggingOptions(*destination.ExtendedS3DestinationDescription.CloudWatchLoggingOptions), - } - - if v := destination.ExtendedS3DestinationDescription.EncryptionConfiguration.KMSEncryptionConfig; v != nil { - extendedS3Configuration["kms_key_arn"] = *v.AWSKMSKeyARN - } - - if v := destination.ExtendedS3DestinationDescription.ProcessingConfiguration; v != nil { - extendedS3Configuration["processing_configuration"] = flattenProcessingConfiguration(*v, roleArn) - } - - extendedS3ConfList := make([]map[string]interface{}, 1) - extendedS3ConfList[0] = extendedS3Configuration - - err := d.Set("extended_s3_configuration", extendedS3ConfList) - if err != nil { - return err + if err := d.Set("extended_s3_configuration", flattenFirehoseExtendedS3Configuration(destination.ExtendedS3DestinationDescription)); err != nil { + return fmt.Errorf("error setting extended_s3_configuration: %s", err) } } - d.Set("destination_id", *destination.DestinationId) + d.Set("destination_id", destination.DestinationId) } return nil } @@ -417,12 +673,14 @@ func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource { "kinesis_stream_arn": { Type: schema.TypeString, Required: true, + ForceNew: true, ValidateFunc: validateArn, }, "role_arn": { Type: schema.TypeString, Required: true, + ForceNew: true, ValidateFunc: validateArn, }, }, @@ -479,6 +737,264 @@ func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource { Default: "UNCOMPRESSED", }, + "data_format_conversion_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "input_format_configuration": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "deserializer": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "hive_json_ser_de": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.0.open_x_json_ser_de"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "timestamp_formats": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "open_x_json_ser_de": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: 
[]string{"extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.0.hive_json_ser_de"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "case_insensitive": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "column_to_json_key_mappings": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "convert_dots_in_json_keys_to_underscores": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "output_format_configuration": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "serializer": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "orc_ser_de": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.0.parquet_ser_de"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "block_size_bytes": { + Type: schema.TypeInt, + Optional: true, + // 256 MiB + Default: 268435456, + // 64 MiB + ValidateFunc: validation.IntAtLeast(67108864), + }, + "bloom_filter_columns": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "bloom_filter_false_positive_probability": { + Type: schema.TypeFloat, + Optional: true, + Default: 0.05, + }, + "compression": { + Type: schema.TypeString, + Optional: true, + Default: firehose.OrcCompressionSnappy, + ValidateFunc: validation.StringInSlice([]string{ + firehose.OrcCompressionNone, + firehose.OrcCompressionSnappy, + firehose.OrcCompressionZlib, + }, false), + }, + "dictionary_key_threshold": { + Type: schema.TypeFloat, + Optional: true, + Default: 0.0, + }, + "enable_padding": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "format_version": { + Type: schema.TypeString, + Optional: true, + Default: firehose.OrcFormatVersionV012, + ValidateFunc: validation.StringInSlice([]string{ + firehose.OrcFormatVersionV011, + firehose.OrcFormatVersionV012, + }, false), + }, + "padding_tolerance": { + Type: schema.TypeFloat, + Optional: true, + Default: 0.05, + }, + "row_index_stride": { + Type: schema.TypeInt, + Optional: true, + Default: 10000, + ValidateFunc: validation.IntAtLeast(1000), + }, + "stripe_size_bytes": { + Type: schema.TypeInt, + Optional: true, + // 64 MiB + Default: 67108864, + // 8 MiB + ValidateFunc: validation.IntAtLeast(8388608), + }, + }, + }, + }, + "parquet_ser_de": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.0.orc_ser_de"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "block_size_bytes": { + Type: schema.TypeInt, + Optional: true, + // 256 MiB + Default: 268435456, + // 64 MiB + ValidateFunc: validation.IntAtLeast(67108864), + }, + "compression": { + Type: schema.TypeString, + Optional: true, + Default: firehose.ParquetCompressionSnappy, + ValidateFunc: validation.StringInSlice([]string{ + firehose.ParquetCompressionGzip, + firehose.ParquetCompressionSnappy, + firehose.ParquetCompressionUncompressed, + }, false), + }, + "enable_dictionary_compression": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + 
"max_padding_bytes": { + Type: schema.TypeInt, + Optional: true, + Default: 0, + }, + "page_size_bytes": { + Type: schema.TypeInt, + Optional: true, + // 1 MiB + Default: 1048576, + // 64 KiB + ValidateFunc: validation.IntAtLeast(65536), + }, + "writer_version": { + Type: schema.TypeString, + Optional: true, + Default: firehose.ParquetWriterVersionV1, + ValidateFunc: validation.StringInSlice([]string{ + firehose.ParquetWriterVersionV1, + firehose.ParquetWriterVersionV2, + }, false), + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "schema_configuration": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "catalog_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "database_name": { + Type: schema.TypeString, + Required: true, + }, + "region": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "role_arn": { + Type: schema.TypeString, + Required: true, + }, + "table_name": { + Type: schema.TypeString, + Required: true, + }, + "version_id": { + Type: schema.TypeString, + Optional: true, + Default: "LATEST", + }, + }, + }, + }, + }, + }, + }, + "kms_key_arn": { Type: schema.TypeString, Optional: true, @@ -495,6 +1011,22 @@ func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource { Optional: true, }, + "s3_backup_mode": { + Type: schema.TypeString, + Optional: true, + Default: "Disabled", + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if value != "Disabled" && value != "Enabled" { + errors = append(errors, fmt.Errorf( + "%q must be one of 'Disabled', 'Enabled'", k)) + } + return + }, + }, + + "s3_backup_configuration": s3ConfigurationSchema(), + "cloudwatch_logging_options": cloudWatchLoggingOptionsSchema(), "processing_configuration": processingConfigurationSchema(), @@ -524,6 +1056,8 @@ func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource { Sensitive: true, }, + "processing_configuration": processingConfigurationSchema(), + "role_arn": { Type: schema.TypeString, Required: true, @@ -658,6 +1192,7 @@ func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource { "s3_backup_mode": { Type: schema.TypeString, + ForceNew: true, Optional: true, Default: "FailedDocumentsOnly", ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { @@ -684,6 +1219,8 @@ func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource { }, "cloudwatch_logging_options": cloudWatchLoggingOptionsSchema(), + + "processing_configuration": processingConfigurationSchema(), }, }, }, @@ -698,7 +1235,7 @@ func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource { Type: schema.TypeInt, Optional: true, Default: 180, - ValidateFunc: validateIntegerInRange(180, 600), + ValidateFunc: validation.IntBetween(180, 600), }, "hec_endpoint": { @@ -743,7 +1280,7 @@ func resourceAwsKinesisFirehoseDeliveryStream() *schema.Resource { Type: schema.TypeInt, Optional: true, Default: 3600, - ValidateFunc: validateIntegerInRange(0, 7200), + ValidateFunc: validation.IntBetween(0, 7200), }, "cloudwatch_logging_options": cloudWatchLoggingOptionsSchema(), @@ -843,9 +1380,10 @@ func createExtendedS3Config(d *schema.ResourceData) *firehose.ExtendedS3Destinat IntervalInSeconds: aws.Int64(int64(s3["buffer_interval"].(int))), SizeInMBs: aws.Int64(int64(s3["buffer_size"].(int))), }, - Prefix: extractPrefixConfiguration(s3), - CompressionFormat: aws.String(s3["compression_format"].(string)), - EncryptionConfiguration: 
extractEncryptionConfiguration(s3), + Prefix: extractPrefixConfiguration(s3), + CompressionFormat: aws.String(s3["compression_format"].(string)), + DataFormatConversionConfiguration: expandFirehoseDataFormatConversionConfiguration(s3["data_format_conversion_configuration"].([]interface{})), + EncryptionConfiguration: extractEncryptionConfiguration(s3), } if _, ok := s3["processing_configuration"]; ok { @@ -856,6 +1394,11 @@ func createExtendedS3Config(d *schema.ResourceData) *firehose.ExtendedS3Destinat configuration.CloudWatchLoggingOptions = extractCloudWatchLoggingConfiguration(s3) } + if s3BackupMode, ok := s3["s3_backup_mode"]; ok { + configuration.S3BackupMode = aws.String(s3BackupMode.(string)) + configuration.S3BackupConfiguration = expandS3BackupConfig(d.Get("extended_s3_configuration").([]interface{})[0].(map[string]interface{})) + } + return configuration } @@ -920,20 +1463,200 @@ func updateExtendedS3Config(d *schema.ResourceData) *firehose.ExtendedS3Destinat IntervalInSeconds: aws.Int64((int64)(s3["buffer_interval"].(int))), SizeInMBs: aws.Int64((int64)(s3["buffer_size"].(int))), }, - Prefix: extractPrefixConfiguration(s3), - CompressionFormat: aws.String(s3["compression_format"].(string)), - EncryptionConfiguration: extractEncryptionConfiguration(s3), - CloudWatchLoggingOptions: extractCloudWatchLoggingConfiguration(s3), - ProcessingConfiguration: extractProcessingConfiguration(s3), + Prefix: extractPrefixConfiguration(s3), + CompressionFormat: aws.String(s3["compression_format"].(string)), + EncryptionConfiguration: extractEncryptionConfiguration(s3), + DataFormatConversionConfiguration: expandFirehoseDataFormatConversionConfiguration(s3["data_format_conversion_configuration"].([]interface{})), + CloudWatchLoggingOptions: extractCloudWatchLoggingConfiguration(s3), + ProcessingConfiguration: extractProcessingConfiguration(s3), } if _, ok := s3["cloudwatch_logging_options"]; ok { configuration.CloudWatchLoggingOptions = extractCloudWatchLoggingConfiguration(s3) } + if s3BackupMode, ok := s3["s3_backup_mode"]; ok { + configuration.S3BackupMode = aws.String(s3BackupMode.(string)) + configuration.S3BackupUpdate = updateS3BackupConfig(d.Get("extended_s3_configuration").([]interface{})[0].(map[string]interface{})) + } + return configuration } +func expandFirehoseDataFormatConversionConfiguration(l []interface{}) *firehose.DataFormatConversionConfiguration { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + return &firehose.DataFormatConversionConfiguration{ + Enabled: aws.Bool(m["enabled"].(bool)), + InputFormatConfiguration: expandFirehoseInputFormatConfiguration(m["input_format_configuration"].([]interface{})), + OutputFormatConfiguration: expandFirehoseOutputFormatConfiguration(m["output_format_configuration"].([]interface{})), + SchemaConfiguration: expandFirehoseSchemaConfiguration(m["schema_configuration"].([]interface{})), + } +} + +func expandFirehoseInputFormatConfiguration(l []interface{}) *firehose.InputFormatConfiguration { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + return &firehose.InputFormatConfiguration{ + Deserializer: expandFirehoseDeserializer(m["deserializer"].([]interface{})), + } +} + +func expandFirehoseDeserializer(l []interface{}) *firehose.Deserializer { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + return &firehose.Deserializer{ + HiveJsonSerDe: expandFirehoseHiveJsonSerDe(m["hive_json_ser_de"].([]interface{})), + OpenXJsonSerDe: 
expandFirehoseOpenXJsonSerDe(m["open_x_json_ser_de"].([]interface{})), + } +} + +func expandFirehoseHiveJsonSerDe(l []interface{}) *firehose.HiveJsonSerDe { + if len(l) == 0 { + return nil + } + + if l[0] == nil { + return &firehose.HiveJsonSerDe{} + } + + m := l[0].(map[string]interface{}) + + return &firehose.HiveJsonSerDe{ + TimestampFormats: expandStringList(m["timestamp_formats"].([]interface{})), + } +} + +func expandFirehoseOpenXJsonSerDe(l []interface{}) *firehose.OpenXJsonSerDe { + if len(l) == 0 { + return nil + } + + if l[0] == nil { + return &firehose.OpenXJsonSerDe{} + } + + m := l[0].(map[string]interface{}) + + return &firehose.OpenXJsonSerDe{ + CaseInsensitive: aws.Bool(m["case_insensitive"].(bool)), + ColumnToJsonKeyMappings: stringMapToPointers(m["column_to_json_key_mappings"].(map[string]interface{})), + ConvertDotsInJsonKeysToUnderscores: aws.Bool(m["convert_dots_in_json_keys_to_underscores"].(bool)), + } +} + +func expandFirehoseOutputFormatConfiguration(l []interface{}) *firehose.OutputFormatConfiguration { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + return &firehose.OutputFormatConfiguration{ + Serializer: expandFirehoseSerializer(m["serializer"].([]interface{})), + } +} + +func expandFirehoseSerializer(l []interface{}) *firehose.Serializer { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + return &firehose.Serializer{ + OrcSerDe: expandFirehoseOrcSerDe(m["orc_ser_de"].([]interface{})), + ParquetSerDe: expandFirehoseParquetSerDe(m["parquet_ser_de"].([]interface{})), + } +} + +func expandFirehoseOrcSerDe(l []interface{}) *firehose.OrcSerDe { + if len(l) == 0 { + return nil + } + + if l[0] == nil { + return &firehose.OrcSerDe{} + } + + m := l[0].(map[string]interface{}) + + orcSerDe := &firehose.OrcSerDe{ + BlockSizeBytes: aws.Int64(int64(m["block_size_bytes"].(int))), + BloomFilterFalsePositiveProbability: aws.Float64(m["bloom_filter_false_positive_probability"].(float64)), + Compression: aws.String(m["compression"].(string)), + DictionaryKeyThreshold: aws.Float64(m["dictionary_key_threshold"].(float64)), + EnablePadding: aws.Bool(m["enable_padding"].(bool)), + FormatVersion: aws.String(m["format_version"].(string)), + PaddingTolerance: aws.Float64(m["padding_tolerance"].(float64)), + RowIndexStride: aws.Int64(int64(m["row_index_stride"].(int))), + StripeSizeBytes: aws.Int64(int64(m["stripe_size_bytes"].(int))), + } + + if v, ok := m["bloom_filter_columns"].([]interface{}); ok && len(v) > 0 { + orcSerDe.BloomFilterColumns = expandStringList(v) + } + + return orcSerDe +} + +func expandFirehoseParquetSerDe(l []interface{}) *firehose.ParquetSerDe { + if len(l) == 0 { + return nil + } + + if l[0] == nil { + return &firehose.ParquetSerDe{} + } + + m := l[0].(map[string]interface{}) + + return &firehose.ParquetSerDe{ + BlockSizeBytes: aws.Int64(int64(m["block_size_bytes"].(int))), + Compression: aws.String(m["compression"].(string)), + EnableDictionaryCompression: aws.Bool(m["enable_dictionary_compression"].(bool)), + MaxPaddingBytes: aws.Int64(int64(m["max_padding_bytes"].(int))), + PageSizeBytes: aws.Int64(int64(m["page_size_bytes"].(int))), + WriterVersion: aws.String(m["writer_version"].(string)), + } +} + +func expandFirehoseSchemaConfiguration(l []interface{}) *firehose.SchemaConfiguration { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + config := &firehose.SchemaConfiguration{ + DatabaseName: aws.String(m["database_name"].(string)), + RoleARN: 
aws.String(m["role_arn"].(string)), + TableName: aws.String(m["table_name"].(string)), + VersionId: aws.String(m["version_id"].(string)), + } + + if v, ok := m["catalog_id"].(string); ok && v != "" { + config.CatalogId = aws.String(v) + } + if v, ok := m["region"].(string); ok && v != "" { + config.Region = aws.String(v) + } + + return config +} + func extractProcessingConfiguration(s3 map[string]interface{}) *firehose.ProcessingConfiguration { config := s3["processing_configuration"].([]interface{}) if len(config) == 0 { @@ -1032,7 +1755,7 @@ func extractPrefixConfiguration(s3 map[string]interface{}) *string { func createRedshiftConfig(d *schema.ResourceData, s3Config *firehose.S3DestinationConfiguration) (*firehose.RedshiftDestinationConfiguration, error) { redshiftRaw, ok := d.GetOk("redshift_configuration") if !ok { - return nil, fmt.Errorf("[ERR] Error loading Redshift Configuration for Kinesis Firehose: redshift_configuration not found") + return nil, fmt.Errorf("Error loading Redshift Configuration for Kinesis Firehose: redshift_configuration not found") } rl := redshiftRaw.([]interface{}) @@ -1051,6 +1774,9 @@ func createRedshiftConfig(d *schema.ResourceData, s3Config *firehose.S3Destinati if _, ok := redshift["cloudwatch_logging_options"]; ok { configuration.CloudWatchLoggingOptions = extractCloudWatchLoggingConfiguration(redshift) } + if _, ok := redshift["processing_configuration"]; ok { + configuration.ProcessingConfiguration = extractProcessingConfiguration(redshift) + } if s3BackupMode, ok := redshift["s3_backup_mode"]; ok { configuration.S3BackupMode = aws.String(s3BackupMode.(string)) configuration.S3BackupConfiguration = expandS3BackupConfig(d.Get("redshift_configuration").([]interface{})[0].(map[string]interface{})) @@ -1062,7 +1788,7 @@ func createRedshiftConfig(d *schema.ResourceData, s3Config *firehose.S3Destinati func updateRedshiftConfig(d *schema.ResourceData, s3Update *firehose.S3DestinationUpdate) (*firehose.RedshiftDestinationUpdate, error) { redshiftRaw, ok := d.GetOk("redshift_configuration") if !ok { - return nil, fmt.Errorf("[ERR] Error loading Redshift Configuration for Kinesis Firehose: redshift_configuration not found") + return nil, fmt.Errorf("Error loading Redshift Configuration for Kinesis Firehose: redshift_configuration not found") } rl := redshiftRaw.([]interface{}) @@ -1081,6 +1807,9 @@ func updateRedshiftConfig(d *schema.ResourceData, s3Update *firehose.S3Destinati if _, ok := redshift["cloudwatch_logging_options"]; ok { configuration.CloudWatchLoggingOptions = extractCloudWatchLoggingConfiguration(redshift) } + if _, ok := redshift["processing_configuration"]; ok { + configuration.ProcessingConfiguration = extractProcessingConfiguration(redshift) + } if s3BackupMode, ok := redshift["s3_backup_mode"]; ok { configuration.S3BackupMode = aws.String(s3BackupMode.(string)) configuration.S3BackupUpdate = updateS3BackupConfig(d.Get("redshift_configuration").([]interface{})[0].(map[string]interface{})) @@ -1092,7 +1821,7 @@ func updateRedshiftConfig(d *schema.ResourceData, s3Update *firehose.S3Destinati func createElasticsearchConfig(d *schema.ResourceData, s3Config *firehose.S3DestinationConfiguration) (*firehose.ElasticsearchDestinationConfiguration, error) { esConfig, ok := d.GetOk("elasticsearch_configuration") if !ok { - return nil, fmt.Errorf("[ERR] Error loading Elasticsearch Configuration for Kinesis Firehose: elasticsearch_configuration not found") + return nil, fmt.Errorf("Error loading Elasticsearch Configuration for Kinesis Firehose: 
elasticsearch_configuration not found") } esList := esConfig.([]interface{}) @@ -1112,6 +1841,10 @@ func createElasticsearchConfig(d *schema.ResourceData, s3Config *firehose.S3Dest config.CloudWatchLoggingOptions = extractCloudWatchLoggingConfiguration(es) } + if _, ok := es["processing_configuration"]; ok { + config.ProcessingConfiguration = extractProcessingConfiguration(es) + } + if indexRotationPeriod, ok := es["index_rotation_period"]; ok { config.IndexRotationPeriod = aws.String(indexRotationPeriod.(string)) } @@ -1125,7 +1858,7 @@ func createElasticsearchConfig(d *schema.ResourceData, s3Config *firehose.S3Dest func updateElasticsearchConfig(d *schema.ResourceData, s3Update *firehose.S3DestinationUpdate) (*firehose.ElasticsearchDestinationUpdate, error) { esConfig, ok := d.GetOk("elasticsearch_configuration") if !ok { - return nil, fmt.Errorf("[ERR] Error loading Elasticsearch Configuration for Kinesis Firehose: elasticsearch_configuration not found") + return nil, fmt.Errorf("Error loading Elasticsearch Configuration for Kinesis Firehose: elasticsearch_configuration not found") } esList := esConfig.([]interface{}) @@ -1145,6 +1878,10 @@ func updateElasticsearchConfig(d *schema.ResourceData, s3Update *firehose.S3Dest update.CloudWatchLoggingOptions = extractCloudWatchLoggingConfiguration(es) } + if _, ok := es["processing_configuration"]; ok { + update.ProcessingConfiguration = extractProcessingConfiguration(es) + } + if indexRotationPeriod, ok := es["index_rotation_period"]; ok { update.IndexRotationPeriod = aws.String(indexRotationPeriod.(string)) } @@ -1155,7 +1892,7 @@ func updateElasticsearchConfig(d *schema.ResourceData, s3Update *firehose.S3Dest func createSplunkConfig(d *schema.ResourceData, s3Config *firehose.S3DestinationConfiguration) (*firehose.SplunkDestinationConfiguration, error) { splunkRaw, ok := d.GetOk("splunk_configuration") if !ok { - return nil, fmt.Errorf("[ERR] Error loading Splunk Configuration for Kinesis Firehose: splunk_configuration not found") + return nil, fmt.Errorf("Error loading Splunk Configuration for Kinesis Firehose: splunk_configuration not found") } sl := splunkRaw.([]interface{}) @@ -1170,6 +1907,10 @@ func createSplunkConfig(d *schema.ResourceData, s3Config *firehose.S3Destination S3Configuration: s3Config, } + if _, ok := splunk["processing_configuration"]; ok { + configuration.ProcessingConfiguration = extractProcessingConfiguration(splunk) + } + if _, ok := splunk["cloudwatch_logging_options"]; ok { configuration.CloudWatchLoggingOptions = extractCloudWatchLoggingConfiguration(splunk) } @@ -1183,7 +1924,7 @@ func createSplunkConfig(d *schema.ResourceData, s3Config *firehose.S3Destination func updateSplunkConfig(d *schema.ResourceData, s3Update *firehose.S3DestinationUpdate) (*firehose.SplunkDestinationUpdate, error) { splunkRaw, ok := d.GetOk("splunk_configuration") if !ok { - return nil, fmt.Errorf("[ERR] Error loading Splunk Configuration for Kinesis Firehose: splunk_configuration not found") + return nil, fmt.Errorf("Error loading Splunk Configuration for Kinesis Firehose: splunk_configuration not found") } sl := splunkRaw.([]interface{}) @@ -1198,6 +1939,10 @@ func updateSplunkConfig(d *schema.ResourceData, s3Update *firehose.S3Destination S3Update: s3Update, } + if _, ok := splunk["processing_configuration"]; ok { + configuration.ProcessingConfiguration = extractProcessingConfiguration(splunk) + } + if _, ok := splunk["cloudwatch_logging_options"]; ok { configuration.CloudWatchLoggingOptions = 
extractCloudWatchLoggingConfiguration(splunk) } @@ -1317,15 +2062,17 @@ func resourceAwsKinesisFirehoseDeliveryStreamCreate(d *schema.ResourceData, meta } } - var lastError error err := resource.Retry(1*time.Minute, func() *resource.RetryError { _, err := conn.CreateDeliveryStream(createInput) if err != nil { log.Printf("[DEBUG] Error creating Firehose Delivery Stream: %s", err) - lastError = err // Retry for IAM eventual consistency - if isAWSErr(err, firehose.ErrCodeInvalidArgumentException, "is not authorized to perform") { + if isAWSErr(err, firehose.ErrCodeInvalidArgumentException, "is not authorized to") { + return resource.RetryableError(err) + } + // InvalidArgumentException: Verify that the IAM role has access to the ElasticSearch domain. + if isAWSErr(err, firehose.ErrCodeInvalidArgumentException, "Verify that the IAM role has access") { return resource.RetryableError(err) } // IAM roles can take ~10 seconds to propagate in AWS: @@ -1444,7 +2191,32 @@ func resourceAwsKinesisFirehoseDeliveryStreamUpdate(d *schema.ResourceData, meta } } - _, err := conn.UpdateDestination(updateInput) + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.UpdateDestination(updateInput) + if err != nil { + log.Printf("[DEBUG] Error updating Firehose Delivery Stream: %s", err) + + // Retry for IAM eventual consistency + if isAWSErr(err, firehose.ErrCodeInvalidArgumentException, "is not authorized to") { + return resource.RetryableError(err) + } + // InvalidArgumentException: Verify that the IAM role has access to the ElasticSearch domain. + if isAWSErr(err, firehose.ErrCodeInvalidArgumentException, "Verify that the IAM role has access") { + return resource.RetryableError(err) + } + // IAM roles can take ~10 seconds to propagate in AWS: + // http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#launch-instance-with-role-console + if isAWSErr(err, firehose.ErrCodeInvalidArgumentException, "Firehose is unable to assume role") { + log.Printf("[DEBUG] Firehose could not assume role referenced, retrying...") + return resource.RetryableError(err) + } + // Not retryable + return resource.NonRetryableError(err) + } + + return nil + }) + if err != nil { return fmt.Errorf( "Error Updating Kinesis Firehose Delivery Stream: \"%s\"\n%s", @@ -1507,7 +2279,6 @@ func resourceAwsKinesisFirehoseDeliveryStreamDelete(d *schema.ResourceData, meta sn, err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_kinesis_firehose_delivery_stream_migrate_test.go b/aws/resource_aws_kinesis_firehose_delivery_stream_migrate_test.go index 6f6f0c1d5c5..b22d1601a34 100644 --- a/aws/resource_aws_kinesis_firehose_delivery_stream_migrate_test.go +++ b/aws/resource_aws_kinesis_firehose_delivery_stream_migrate_test.go @@ -85,7 +85,7 @@ func TestAWSKinesisFirehoseMigrateState_empty(t *testing.T) { // should handle non-nil but empty is = &terraform.InstanceState{} - is, err = resourceAwsInstanceMigrateState(0, is, meta) + _, err = resourceAwsInstanceMigrateState(0, is, meta) if err != nil { t.Fatalf("err: %#v", err) diff --git a/aws/resource_aws_kinesis_firehose_delivery_stream_test.go b/aws/resource_aws_kinesis_firehose_delivery_stream_test.go index 78b466ddfd2..58b7fc1b217 100644 --- a/aws/resource_aws_kinesis_firehose_delivery_stream_test.go +++ b/aws/resource_aws_kinesis_firehose_delivery_stream_test.go @@ -15,13 +15,55 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSKinesisFirehoseDeliveryStream_importBasic(t *testing.T) { + resName := 
"aws_kinesis_firehose_delivery_stream.test_stream" + rInt := acctest.RandInt() + + funcName := fmt.Sprintf("aws_kinesis_firehose_ds_import_%d", rInt) + policyName := fmt.Sprintf("tf_acc_policy_%d", rInt) + roleName := fmt.Sprintf("tf_acc_role_%d", rInt) + + config := testAccFirehoseAWSLambdaConfigBasic(funcName, policyName, roleName) + + fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_extendedS3basic, + rInt, rInt, rInt, rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, + Steps: []resource.TestStep{ + { + Config: config, + }, + { + ResourceName: resName, + ImportState: true, + ImportStateVerify: true, + }, + // Ensure we properly error on malformed import IDs + { + ResourceName: resName, + ImportState: true, + ImportStateId: "just-a-name", + ExpectError: regexp.MustCompile(`Expected ID in format`), + }, + { + ResourceName: resName, + ImportState: true, + ImportStateId: "arn:aws:firehose:us-east-1:123456789012:missing-slash", + ExpectError: regexp.MustCompile(`Expected ID in format`), + }, + }, + }) +} + func TestAccAWSKinesisFirehoseDeliveryStream_s3basic(t *testing.T) { var stream firehose.DeliveryStreamDescription ri := acctest.RandInt() config := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3basic, ri, ri, ri, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, @@ -43,7 +85,7 @@ func TestAccAWSKinesisFirehoseDeliveryStream_s3KinesisStreamSource(t *testing.T) config := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_s3KinesisStreamSource, ri, ri, ri, ri, ri, ri, ri, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, @@ -63,7 +105,7 @@ func TestAccAWSKinesisFirehoseDeliveryStream_s3WithCloudwatchLogging(t *testing. 
var stream firehose.DeliveryStreamDescription ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, @@ -95,7 +137,7 @@ func TestAccAWSKinesisFirehoseDeliveryStream_s3ConfigUpdates(t *testing.T) { }, } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, @@ -131,7 +173,7 @@ func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3basic(t *testing.T) { fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_extendedS3basic, ri, ri, ri, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, @@ -147,6 +189,242 @@ func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3basic(t *testing.T) { }) } +func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3_DataFormatConversionConfiguration_Enabled(t *testing.T) { + var stream firehose.DeliveryStreamDescription + rInt := acctest.RandInt() + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_kinesis_firehose_delivery_stream.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, + Steps: []resource.TestStep{ + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_Enabled(rName, rInt, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.enabled", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_Enabled(rName, rInt, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.enabled", "true"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3_DataFormatConversionConfiguration_Deserializer_Update(t *testing.T) { + var stream firehose.DeliveryStreamDescription + rInt := acctest.RandInt() + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_kinesis_firehose_delivery_stream.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, + Steps: []resource.TestStep{ + { + Config: 
testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_HiveJsonSerDe_Empty(rName, rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.0.hive_json_ser_de.#", "1"), + ), + }, + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_OpenXJsonSerDe_Empty(rName, rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.0.open_x_json_ser_de.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3_DataFormatConversionConfiguration_HiveJsonSerDe_Empty(t *testing.T) { + var stream firehose.DeliveryStreamDescription + rInt := acctest.RandInt() + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_kinesis_firehose_delivery_stream.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, + Steps: []resource.TestStep{ + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_HiveJsonSerDe_Empty(rName, rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.0.hive_json_ser_de.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + 
}, + }) +} + +func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3_DataFormatConversionConfiguration_OpenXJsonSerDe_Empty(t *testing.T) { + var stream firehose.DeliveryStreamDescription + rInt := acctest.RandInt() + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_kinesis_firehose_delivery_stream.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, + Steps: []resource.TestStep{ + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_OpenXJsonSerDe_Empty(rName, rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.input_format_configuration.0.deserializer.0.open_x_json_ser_de.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3_DataFormatConversionConfiguration_OrcSerDe_Empty(t *testing.T) { + var stream firehose.DeliveryStreamDescription + rInt := acctest.RandInt() + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_kinesis_firehose_delivery_stream.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, + Steps: []resource.TestStep{ + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_OrcSerDe_Empty(rName, rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.0.orc_ser_de.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3_DataFormatConversionConfiguration_ParquetSerDe_Empty(t *testing.T) { + var stream firehose.DeliveryStreamDescription + rInt := acctest.RandInt() + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_kinesis_firehose_delivery_stream.test" + 
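+	// The _OpenXJsonSerDe_Empty fixture referenced below is defined later in this
+	// file and is not shown in this hunk; as a hedged sketch, its
+	// extended_s3_configuration is assumed to carry a
+	// data_format_conversion_configuration block roughly shaped like the schema
+	// introduced above (the Glue and IAM references and the serializer choice are
+	// illustrative assumptions, not taken from this diff):
+	//
+	//   data_format_conversion_configuration {
+	//     input_format_configuration {
+	//       deserializer {
+	//         open_x_json_ser_de {}
+	//       }
+	//     }
+	//     output_format_configuration {
+	//       serializer {
+	//         parquet_ser_de {}
+	//       }
+	//     }
+	//     schema_configuration {
+	//       database_name = "${aws_glue_catalog_database.test.name}"
+	//       role_arn      = "${aws_iam_role.firehose.arn}"
+	//       table_name    = "${aws_glue_catalog_table.test.name}"
+	//     }
+	//   }
+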
+ resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, + Steps: []resource.TestStep{ + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_ParquetSerDe_Empty(rName, rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.0.parquet_ser_de.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3_DataFormatConversionConfiguration_Serializer_Update(t *testing.T) { + var stream firehose.DeliveryStreamDescription + rInt := acctest.RandInt() + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_kinesis_firehose_delivery_stream.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, + Steps: []resource.TestStep{ + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_OrcSerDe_Empty(rName, rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.0.orc_ser_de.#", "1"), + ), + }, + { + Config: testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_ParquetSerDe_Empty(rName, rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckKinesisFirehoseDeliveryStreamExists(resourceName, &stream), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, 
"extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.#", "1"), + resource.TestCheckResourceAttr(resourceName, "extended_s3_configuration.0.data_format_conversion_configuration.0.output_format_configuration.0.serializer.0.parquet_ser_de.#", "1"), + ), + }, + }, + }) +} + func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3KmsKeyArn(t *testing.T) { rString := acctest.RandString(8) funcName := fmt.Sprintf("aws_kinesis_firehose_delivery_stream_test_%s", rString) @@ -160,7 +438,7 @@ func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3KmsKeyArn(t *testing.T) { fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_extendedS3KmsKeyArn, ri, ri, ri, ri, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, @@ -188,7 +466,7 @@ func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3InvalidProcessorType(t *t fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_extendedS3InvalidProcessorType, ri, ri, ri, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, @@ -212,7 +490,7 @@ func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3InvalidParameterName(t *t fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_extendedS3InvalidParameterName, ri, ri, ri, ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, @@ -249,10 +527,10 @@ func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3Updates(t *testing.T) { ProcessingConfiguration: &firehose.ProcessingConfiguration{ Enabled: aws.Bool(true), Processors: []*firehose.Processor{ - &firehose.Processor{ + { Type: aws.String("Lambda"), Parameters: []*firehose.ProcessorParameter{ - &firehose.ProcessorParameter{ + { ParameterName: aws.String("LambdaArn"), ParameterValue: aws.String("valueNotTested"), }, @@ -260,9 +538,10 @@ func TestAccAWSKinesisFirehoseDeliveryStream_ExtendedS3Updates(t *testing.T) { }, }, }, + S3BackupMode: aws.String("Enabled"), } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy_ExtendedS3, @@ -289,19 +568,38 @@ func TestAccAWSKinesisFirehoseDeliveryStream_RedshiftConfigUpdates(t *testing.T) var stream firehose.DeliveryStreamDescription ri := acctest.RandInt() + rString := acctest.RandString(8) + funcName := fmt.Sprintf("aws_kinesis_firehose_delivery_stream_test_%s", rString) + policyName := fmt.Sprintf("tf_acc_policy_%s", rString) + roleName := fmt.Sprintf("tf_acc_role_%s", rString) preConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_RedshiftBasic, ri, ri, ri, ri, ri) - postConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_RedshiftUpdates, - ri, ri, ri, ri, ri) + postConfig := testAccFirehoseAWSLambdaConfigBasic(funcName, policyName, roleName) + + fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_RedshiftUpdates, + ri, ri, ri, ri, ri) updatedRedshiftConfig := &firehose.RedshiftDestinationDescription{ CopyCommand: &firehose.CopyCommand{ CopyOptions: 
aws.String("GZIP"), }, S3BackupMode: aws.String("Enabled"), + ProcessingConfiguration: &firehose.ProcessingConfiguration{ + Enabled: aws.Bool(true), + Processors: []*firehose.Processor{ + { + Type: aws.String("Lambda"), + Parameters: []*firehose.ProcessorParameter{ + { + ParameterName: aws.String("LambdaArn"), + ParameterValue: aws.String("valueNotTested"), + }, + }, + }, + }, + }, } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, @@ -329,18 +627,39 @@ func TestAccAWSKinesisFirehoseDeliveryStream_SplunkConfigUpdates(t *testing.T) { var stream firehose.DeliveryStreamDescription ri := acctest.RandInt() + + rString := acctest.RandString(8) + funcName := fmt.Sprintf("aws_kinesis_firehose_delivery_stream_test_%s", rString) + policyName := fmt.Sprintf("tf_acc_policy_%s", rString) + roleName := fmt.Sprintf("tf_acc_role_%s", rString) + preConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_SplunkBasic, ri, ri, ri, ri) - postConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_SplunkUpdates, - ri, ri, ri, ri) + postConfig := testAccFirehoseAWSLambdaConfigBasic(funcName, policyName, roleName) + + fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_SplunkUpdates, + ri, ri, ri, ri) updatedSplunkConfig := &firehose.SplunkDestinationDescription{ HECEndpointType: aws.String("Event"), HECAcknowledgmentTimeoutInSeconds: aws.Int64(600), S3BackupMode: aws.String("FailedEventsOnly"), + ProcessingConfiguration: &firehose.ProcessingConfiguration{ + Enabled: aws.Bool(true), + Processors: []*firehose.Processor{ + { + Type: aws.String("Lambda"), + Parameters: []*firehose.ProcessorParameter{ + { + ParameterName: aws.String("LambdaArn"), + ParameterValue: aws.String("valueNotTested"), + }, + }, + }, + }, + }, } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, @@ -368,18 +687,37 @@ func TestAccAWSKinesisFirehoseDeliveryStream_ElasticsearchConfigUpdates(t *testi var stream firehose.DeliveryStreamDescription ri := acctest.RandInt() + rString := acctest.RandString(8) + funcName := fmt.Sprintf("aws_kinesis_firehose_delivery_stream_test_%s", rString) + policyName := fmt.Sprintf("tf_acc_policy_%s", rString) + roleName := fmt.Sprintf("tf_acc_role_%s", rString) preConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_ElasticsearchBasic, - ri, ri, ri, ri, ri, ri) - postConfig := fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_ElasticsearchUpdate, - ri, ri, ri, ri, ri, ri) + ri, ri, ri, ri, ri) + postConfig := testAccFirehoseAWSLambdaConfigBasic(funcName, policyName, roleName) + + fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamConfig_ElasticsearchUpdate, + ri, ri, ri, ri, ri) updatedElasticSearchConfig := &firehose.ElasticsearchDestinationDescription{ BufferingHints: &firehose.ElasticsearchBufferingHints{ IntervalInSeconds: aws.Int64(500), }, + ProcessingConfiguration: &firehose.ProcessingConfiguration{ + Enabled: aws.Bool(true), + Processors: []*firehose.Processor{ + { + Type: aws.String("Lambda"), + Parameters: []*firehose.ProcessorParameter{ + { + ParameterName: aws.String("LambdaArn"), + ParameterValue: aws.String("valueNotTested"), + }, + }, + }, + }, + }, } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ 
PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, @@ -407,7 +745,7 @@ func TestAccAWSKinesisFirehoseDeliveryStream_missingProcessingConfiguration(t *t var stream firehose.DeliveryStreamDescription ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckKinesisFirehoseDeliveryStreamDestroy, @@ -489,12 +827,15 @@ func testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(stream *firehose.Del // destination. For simplicity, our test only have a single S3 or // Redshift destination, so at this time it's safe to match on the first // one - var match, processingConfigMatch bool + var match, processingConfigMatch, matchS3BackupMode bool for _, d := range stream.Destinations { if d.ExtendedS3DestinationDescription != nil { if *d.ExtendedS3DestinationDescription.BufferingHints.SizeInMBs == *es.BufferingHints.SizeInMBs { match = true } + if *d.ExtendedS3DestinationDescription.S3BackupMode == *es.S3BackupMode { + matchS3BackupMode = true + } processingConfigMatch = len(es.ProcessingConfiguration.Processors) == len(d.ExtendedS3DestinationDescription.ProcessingConfiguration.Processors) } @@ -505,13 +846,16 @@ func testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(stream *firehose.Del if !processingConfigMatch { return fmt.Errorf("Mismatch extended s3 ProcessingConfiguration.Processors count, expected: %s, got: %s", es, stream.Destinations) } + if !matchS3BackupMode { + return fmt.Errorf("Mismatch extended s3 S3BackupMode, expected: %s, got: %s", es, stream.Destinations) + } } if redshiftConfig != nil { r := redshiftConfig.(*firehose.RedshiftDestinationDescription) // Range over the Stream Destinations, looking for the matching Redshift // destination - var matchCopyOptions, matchS3BackupMode bool + var matchCopyOptions, matchS3BackupMode, processingConfigMatch bool for _, d := range stream.Destinations { if d.RedshiftDestinationDescription != nil { if *d.RedshiftDestinationDescription.CopyCommand.CopyOptions == *r.CopyCommand.CopyOptions { @@ -520,33 +864,46 @@ func testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(stream *firehose.Del if *d.RedshiftDestinationDescription.S3BackupMode == *r.S3BackupMode { matchS3BackupMode = true } + if r.ProcessingConfiguration != nil && d.RedshiftDestinationDescription.ProcessingConfiguration != nil { + processingConfigMatch = len(r.ProcessingConfiguration.Processors) == len(d.RedshiftDestinationDescription.ProcessingConfiguration.Processors) + } } } if !matchCopyOptions || !matchS3BackupMode { return fmt.Errorf("Mismatch Redshift CopyOptions or S3BackupMode, expected: %s, got: %s", r, stream.Destinations) } + if !processingConfigMatch { + return fmt.Errorf("Mismatch Redshift ProcessingConfiguration.Processors count, expected: %s, got: %s", r, stream.Destinations) + } } if elasticsearchConfig != nil { es := elasticsearchConfig.(*firehose.ElasticsearchDestinationDescription) // Range over the Stream Destinations, looking for the matching Elasticsearch destination - var match bool + var match, processingConfigMatch bool for _, d := range stream.Destinations { if d.ElasticsearchDestinationDescription != nil { match = true + if es.ProcessingConfiguration != nil && d.ElasticsearchDestinationDescription.ProcessingConfiguration != nil { + processingConfigMatch = len(es.ProcessingConfiguration.Processors) == 
len(d.ElasticsearchDestinationDescription.ProcessingConfiguration.Processors) + } } } if !match { return fmt.Errorf("Mismatch Elasticsearch Buffering Interval, expected: %s, got: %s", es, stream.Destinations) } + if !processingConfigMatch { + return fmt.Errorf("Mismatch Elasticsearch ProcessingConfiguration.Processors count, expected: %s, got: %s", es, stream.Destinations) + } } if splunkConfig != nil { s := splunkConfig.(*firehose.SplunkDestinationDescription) // Range over the Stream Destinations, looking for the matching Splunk destination - var matchHECEndpointType, matchHECAcknowledgmentTimeoutInSeconds, matchS3BackupMode bool + var matchHECEndpointType, matchHECAcknowledgmentTimeoutInSeconds, matchS3BackupMode, processingConfigMatch bool for _, d := range stream.Destinations { if d.SplunkDestinationDescription != nil { + if *d.SplunkDestinationDescription.HECEndpointType == *s.HECEndpointType { matchHECEndpointType = true } @@ -556,11 +913,17 @@ func testAccCheckAWSKinesisFirehoseDeliveryStreamAttributes(stream *firehose.Del if *d.SplunkDestinationDescription.S3BackupMode == *s.S3BackupMode { matchS3BackupMode = true } + if s.ProcessingConfiguration != nil && d.SplunkDestinationDescription.ProcessingConfiguration != nil { + processingConfigMatch = len(s.ProcessingConfiguration.Processors) == len(d.SplunkDestinationDescription.ProcessingConfiguration.Processors) + } } } if !matchHECEndpointType || !matchHECAcknowledgmentTimeoutInSeconds || !matchS3BackupMode { return fmt.Errorf("Mismatch Splunk HECEndpointType or HECAcknowledgmentTimeoutInSeconds or S3BackupMode, expected: %s, got: %s", s, stream.Destinations) } + if !processingConfigMatch { + return fmt.Errorf("Mismatch extended splunk ProcessingConfiguration.Processors count, expected: %s, got: %s", s, stream.Destinations) + } } } return nil @@ -739,6 +1102,16 @@ resource "aws_iam_role_policy" "firehose" { "arn:aws:s3:::${aws_s3_bucket.bucket.id}", "arn:aws:s3:::${aws_s3_bucket.bucket.id}/*" ] + }, + { + "Sid": "GlueAccess", + "Effect": "Allow", + "Action": [ + "glue:GetTableVersions" + ], + "Resource": [ + "*" + ] } ] } @@ -956,11 +1329,284 @@ resource "aws_kinesis_firehose_delivery_stream" "test_stream" { parameter_value = "${aws_lambda_function.lambda_function_test.arn}:$LATEST" }] }] - }] + }], + s3_backup_mode = "Disabled" } } ` +func testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_Enabled(rName string, rInt int, enabled bool) string { + return fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamBaseConfig, rInt, rInt, rInt) + fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = "%s" +} + +resource "aws_glue_catalog_table" "test" { + database_name = "${aws_glue_catalog_database.test.name}" + name = "%s" + + storage_descriptor { + columns { + name = "test" + type = "string" + } + } +} + +resource "aws_kinesis_firehose_delivery_stream" "test" { + destination = "extended_s3" + name = "%s" + + extended_s3_configuration { + bucket_arn = "${aws_s3_bucket.bucket.arn}" + # InvalidArgumentException: BufferingHints.SizeInMBs must be at least 64 when data format conversion is enabled. 
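+ # The 128 used below keeps these data format conversion fixtures comfortably above that minimum.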
+ buffer_size = 128 + role_arn = "${aws_iam_role.firehose.arn}" + + data_format_conversion_configuration { + enabled = %t + + input_format_configuration { + deserializer { + hive_json_ser_de {} # we have to pick one + } + } + + output_format_configuration { + serializer { + orc_ser_de {} # we have to pick one + } + } + + schema_configuration { + database_name = "${aws_glue_catalog_table.test.database_name}" + role_arn = "${aws_iam_role.firehose.arn}" + table_name = "${aws_glue_catalog_table.test.name}" + } + } + } + + depends_on = ["aws_iam_role_policy.firehose"] +} +`, rName, rName, rName, enabled) +} + +func testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_HiveJsonSerDe_Empty(rName string, rInt int) string { + return fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamBaseConfig, rInt, rInt, rInt) + fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = "%s" +} + +resource "aws_glue_catalog_table" "test" { + database_name = "${aws_glue_catalog_database.test.name}" + name = "%s" + + storage_descriptor { + columns { + name = "test" + type = "string" + } + } +} + +resource "aws_kinesis_firehose_delivery_stream" "test" { + destination = "extended_s3" + name = "%s" + + extended_s3_configuration { + bucket_arn = "${aws_s3_bucket.bucket.arn}" + # InvalidArgumentException: BufferingHints.SizeInMBs must be at least 64 when data format conversion is enabled. + buffer_size = 128 + role_arn = "${aws_iam_role.firehose.arn}" + + data_format_conversion_configuration { + input_format_configuration { + deserializer { + hive_json_ser_de {} + } + } + + output_format_configuration { + serializer { + orc_ser_de {} # we have to pick one + } + } + + schema_configuration { + database_name = "${aws_glue_catalog_table.test.database_name}" + role_arn = "${aws_iam_role.firehose.arn}" + table_name = "${aws_glue_catalog_table.test.name}" + } + } + } + + depends_on = ["aws_iam_role_policy.firehose"] +} +`, rName, rName, rName) +} + +func testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_OpenXJsonSerDe_Empty(rName string, rInt int) string { + return fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamBaseConfig, rInt, rInt, rInt) + fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = "%s" +} + +resource "aws_glue_catalog_table" "test" { + database_name = "${aws_glue_catalog_database.test.name}" + name = "%s" + + storage_descriptor { + columns { + name = "test" + type = "string" + } + } +} + +resource "aws_kinesis_firehose_delivery_stream" "test" { + destination = "extended_s3" + name = "%s" + + extended_s3_configuration { + bucket_arn = "${aws_s3_bucket.bucket.arn}" + # InvalidArgumentException: BufferingHints.SizeInMBs must be at least 64 when data format conversion is enabled. 
+ buffer_size = 128 + role_arn = "${aws_iam_role.firehose.arn}" + + data_format_conversion_configuration { + input_format_configuration { + deserializer { + open_x_json_ser_de {} + } + } + + output_format_configuration { + serializer { + orc_ser_de {} # we have to pick one + } + } + + schema_configuration { + database_name = "${aws_glue_catalog_table.test.database_name}" + role_arn = "${aws_iam_role.firehose.arn}" + table_name = "${aws_glue_catalog_table.test.name}" + } + } + } + + depends_on = ["aws_iam_role_policy.firehose"] +} +`, rName, rName, rName) +} + +func testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_OrcSerDe_Empty(rName string, rInt int) string { + return fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamBaseConfig, rInt, rInt, rInt) + fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = "%s" +} + +resource "aws_glue_catalog_table" "test" { + database_name = "${aws_glue_catalog_database.test.name}" + name = "%s" + + storage_descriptor { + columns { + name = "test" + type = "string" + } + } +} + +resource "aws_kinesis_firehose_delivery_stream" "test" { + destination = "extended_s3" + name = "%s" + + extended_s3_configuration { + bucket_arn = "${aws_s3_bucket.bucket.arn}" + # InvalidArgumentException: BufferingHints.SizeInMBs must be at least 64 when data format conversion is enabled. + buffer_size = 128 + role_arn = "${aws_iam_role.firehose.arn}" + + data_format_conversion_configuration { + input_format_configuration { + deserializer { + hive_json_ser_de {} # we have to pick one + } + } + + output_format_configuration { + serializer { + orc_ser_de {} + } + } + + schema_configuration { + database_name = "${aws_glue_catalog_table.test.database_name}" + role_arn = "${aws_iam_role.firehose.arn}" + table_name = "${aws_glue_catalog_table.test.name}" + } + } + } + + depends_on = ["aws_iam_role_policy.firehose"] +} +`, rName, rName, rName) +} + +func testAccKinesisFirehoseDeliveryStreamConfig_ExtendedS3_DataFormatConversionConfiguration_ParquetSerDe_Empty(rName string, rInt int) string { + return fmt.Sprintf(testAccKinesisFirehoseDeliveryStreamBaseConfig, rInt, rInt, rInt) + fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = "%s" +} + +resource "aws_glue_catalog_table" "test" { + database_name = "${aws_glue_catalog_database.test.name}" + name = "%s" + + storage_descriptor { + columns { + name = "test" + type = "string" + } + } +} + +resource "aws_kinesis_firehose_delivery_stream" "test" { + destination = "extended_s3" + name = "%s" + + extended_s3_configuration { + bucket_arn = "${aws_s3_bucket.bucket.arn}" + # InvalidArgumentException: BufferingHints.SizeInMBs must be at least 64 when data format conversion is enabled. 
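+ # Same buffer sizing note as above; this fixture only differs by pinning the output serializer to parquet_ser_de.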
+ buffer_size = 128 + role_arn = "${aws_iam_role.firehose.arn}" + + data_format_conversion_configuration { + input_format_configuration { + deserializer { + hive_json_ser_de {} # we have to pick one + } + } + + output_format_configuration { + serializer { + parquet_ser_de {} + } + } + + schema_configuration { + database_name = "${aws_glue_catalog_table.test.database_name}" + role_arn = "${aws_iam_role.firehose.arn}" + table_name = "${aws_glue_catalog_table.test.name}" + } + } + } + + depends_on = ["aws_iam_role_policy.firehose"] +} +`, rName, rName, rName) +} + var testAccKinesisFirehoseDeliveryStreamConfig_extendedS3KmsKeyArn = testAccKinesisFirehoseDeliveryStreamBaseConfig + ` resource "aws_kms_key" "test" { description = "Terraform acc test %s" @@ -1053,6 +1699,11 @@ resource "aws_kinesis_firehose_delivery_stream" "test_stream" { buffer_size = 10 buffer_interval = 400 compression_format = "GZIP" + s3_backup_mode = "Enabled" + s3_backup_configuration { + role_arn = "${aws_iam_role.firehose.arn}" + bucket_arn = "${aws_s3_bucket.bucket.arn}" + } } } ` @@ -1111,6 +1762,16 @@ resource "aws_kinesis_firehose_delivery_stream" "test_stream" { data_table_name = "test-table" copy_options = "GZIP" data_table_columns = "test-col" + processing_configuration = [{ + enabled = false, + processors = [{ + type = "Lambda" + parameters = [{ + parameter_name = "LambdaArn" + parameter_value = "${aws_lambda_function.lambda_function_test.arn}:$LATEST" + }] + }] + }] } }` @@ -1147,6 +1808,34 @@ resource "aws_kinesis_firehose_delivery_stream" "test_stream" { hec_acknowledgment_timeout = 600 hec_endpoint_type = "Event" s3_backup_mode = "FailedEventsOnly" + processing_configuration = [ + { + enabled = "true" + processors = [ + { + type = "Lambda" + parameters = [ + { + parameter_name = "LambdaArn" + parameter_value = "${aws_lambda_function.lambda_function_test.arn}:$LATEST" + }, + { + parameter_name = "RoleArn" + parameter_value = "${aws_iam_role.firehose.arn}" + }, + { + parameter_name = "BufferSizeInMBs" + parameter_value = 1 + }, + { + parameter_name = "BufferIntervalInSeconds" + parameter_value = 120 + } + ] + } + ] + } + ] } }` @@ -1156,27 +1845,35 @@ resource "aws_elasticsearch_domain" "test_cluster" { cluster_config { instance_type = "m3.medium.elasticsearch" } +} - access_policies = < 0 { - buf.WriteString(fmt.Sprintf("encryption_context_equals-%s-", sortedConcatStringMap(stringMapToPointers(v.(map[string]interface{})), "-"))) + buf.WriteString(fmt.Sprintf("encryption_context_equals-%s-", sortedConcatStringMap(stringMapToPointers(v.(map[string]interface{}))))) } } if v, ok := m["encryption_context_subset"]; ok { if len(v.(map[string]interface{})) > 0 { - buf.WriteString(fmt.Sprintf("encryption_context_subset-%s-", sortedConcatStringMap(stringMapToPointers(v.(map[string]interface{})), "-"))) + buf.WriteString(fmt.Sprintf("encryption_context_subset-%s-", sortedConcatStringMap(stringMapToPointers(v.(map[string]interface{}))))) } } @@ -520,11 +523,24 @@ func flattenKmsGrantConstraints(constraint *kms.GrantConstraints) *schema.Set { } func decodeKmsGrantId(id string) (string, string, error) { - parts := strings.Split(id, ":") - if len(parts) != 2 { - return "", "", fmt.Errorf("unexpected format of ID (%q), expected KeyID:GrantID", id) + if strings.HasPrefix(id, "arn:aws") { + arn_parts := strings.Split(id, "/") + if len(arn_parts) != 2 { + return "", "", fmt.Errorf("unexpected format of ARN (%q), expected KeyID:GrantID", id) + } + arn_prefix := arn_parts[0] + parts := strings.Split(arn_parts[1], ":") + if 
len(parts) != 2 { + return "", "", fmt.Errorf("unexpected format of ID (%q), expected KeyID:GrantID", id) + } + return fmt.Sprintf("%s/%s", arn_prefix, parts[0]), parts[1], nil + } else { + parts := strings.Split(id, ":") + if len(parts) != 2 { + return "", "", fmt.Errorf("unexpected format of ID (%q), expected KeyID:GrantID", id) + } + return parts[0], parts[1], nil } - return parts[0], parts[1], nil } // Custom error, so we don't have to rely on diff --git a/aws/resource_aws_kms_grant_test.go b/aws/resource_aws_kms_grant_test.go index 4cd8fbef8ef..212cc7980ca 100644 --- a/aws/resource_aws_kms_grant_test.go +++ b/aws/resource_aws_kms_grant_test.go @@ -9,10 +9,10 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestAWSKmsGrant_Basic(t *testing.T) { +func TestAccAWSKmsGrant_Basic(t *testing.T) { timestamp := time.Now().Format(time.RFC1123) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsGrantDestroy, @@ -33,10 +33,10 @@ func TestAWSKmsGrant_Basic(t *testing.T) { }) } -func TestAWSKmsGrant_withConstraints(t *testing.T) { +func TestAccAWSKmsGrant_withConstraints(t *testing.T) { timestamp := time.Now().Format(time.RFC1123) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsGrantDestroy, @@ -69,10 +69,10 @@ func TestAWSKmsGrant_withConstraints(t *testing.T) { }) } -func TestAWSKmsGrant_withRetiringPrincipal(t *testing.T) { +func TestAccAWSKmsGrant_withRetiringPrincipal(t *testing.T) { timestamp := time.Now().Format(time.RFC1123) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsGrantDestroy, @@ -88,10 +88,10 @@ func TestAWSKmsGrant_withRetiringPrincipal(t *testing.T) { }) } -func TestAWSKmsGrant_bare(t *testing.T) { +func TestAccAWSKmsGrant_bare(t *testing.T) { timestamp := time.Now().Format(time.RFC1123) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsGrantDestroy, @@ -109,6 +109,30 @@ func TestAWSKmsGrant_bare(t *testing.T) { }) } +func TestAccAWSKmsGrant_ARN(t *testing.T) { + timestamp := time.Now().Format(time.RFC1123) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSKmsGrantDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSKmsGrant_ARN("arn", timestamp, "\"Encrypt\", \"Decrypt\""), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSKmsGrantExists("aws_kms_grant.arn"), + resource.TestCheckResourceAttr("aws_kms_grant.arn", "name", "arn"), + resource.TestCheckResourceAttr("aws_kms_grant.arn", "operations.#", "2"), + resource.TestCheckResourceAttr("aws_kms_grant.arn", "operations.2238845196", "Encrypt"), + resource.TestCheckResourceAttr("aws_kms_grant.arn", "operations.1237510779", "Decrypt"), + resource.TestCheckResourceAttrSet("aws_kms_grant.arn", "grantee_principal"), + resource.TestCheckResourceAttrSet("aws_kms_grant.arn", "key_id"), + ), + }, + }, + }) +} + func testAccCheckAWSKmsGrantDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).kmsconn @@ -252,3 +276,27 @@ data 
"aws_iam_policy_document" "assumerole-policy-template" { } } ` + +func testAccAWSKmsGrant_ARN(rName string, timestamp string, operations string) string { + return fmt.Sprintf(` +resource "aws_kms_key" "tf-acc-test-key" { + description = "Terraform acc test key %s" + deletion_window_in_days = 7 +} + +%s + +resource "aws_iam_role" "tf-acc-test-role" { + name = "tf-acc-test-kms-grant-role-%s" + path = "/service-role/" + assume_role_policy = "${data.aws_iam_policy_document.assumerole-policy-template.json}" +} + +resource "aws_kms_grant" "%s" { + name = "%s" + key_id = "${aws_kms_key.tf-acc-test-key.arn}" + grantee_principal = "${aws_iam_role.tf-acc-test-role.arn}" + operations = [ %s ] +} +`, timestamp, staticAssumeRolePolicyString, rName, rName, rName, operations) +} diff --git a/aws/resource_aws_kms_key.go b/aws/resource_aws_kms_key.go index d86d1ab676b..477ac628ca6 100644 --- a/aws/resource_aws_kms_key.go +++ b/aws/resource_aws_kms_key.go @@ -8,7 +8,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/kms" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/structure" @@ -55,7 +54,7 @@ func resourceAwsKmsKey() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs, }, "is_enabled": { @@ -169,7 +168,7 @@ func resourceAwsKmsKeyRead(d *schema.ResourceData, meta interface{}) error { p := pOut.(*kms.GetKeyPolicyOutput) policy, err := structure.NormalizeJsonString(*p.Policy) if err != nil { - return errwrap.Wrapf("policy contains an invalid JSON: {{err}}", err) + return fmt.Errorf("policy contains an invalid JSON: %s", err) } d.Set("policy", policy) @@ -260,7 +259,7 @@ func resourceAwsKmsKeyDescriptionUpdate(conn *kms.KMS, d *schema.ResourceData) e func resourceAwsKmsKeyPolicyUpdate(conn *kms.KMS, d *schema.ResourceData) error { policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) if err != nil { - return errwrap.Wrapf("policy contains an invalid JSON: {{err}}", err) + return fmt.Errorf("policy contains an invalid JSON: %s", err) } keyId := d.Get("key_id").(string) @@ -473,6 +472,6 @@ func resourceAwsKmsKeyDelete(d *schema.ResourceData, meta interface{}) error { } log.Printf("[DEBUG] KMS Key %s deactivated.", keyId) - d.SetId("") + return nil } diff --git a/aws/resource_aws_kms_key_test.go b/aws/resource_aws_kms_key_test.go index 404ee1761c2..5d7fa49582a 100644 --- a/aws/resource_aws_kms_key_test.go +++ b/aws/resource_aws_kms_key_test.go @@ -70,6 +70,10 @@ func testSweepKmsKeys(region string) error { return !lastPage }) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping KMS Key sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error describing KMS keys: %s", err) } @@ -85,11 +89,34 @@ func kmsTagHasPrefix(tags []*kms.Tag, key, prefix string) bool { return false } +func TestAccAWSKmsKey_importBasic(t *testing.T) { + resourceName := "aws_kms_key.foo" + rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSKmsKeyDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSKmsKey(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_window_in_days"}, + }, + }, + }) +} + func TestAccAWSKmsKey_basic(t *testing.T) { var keyBefore, keyAfter kms.KeyMetadata rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsKeyDestroy, @@ -114,7 +141,7 @@ func TestAccAWSKmsKey_disappears(t *testing.T) { var key kms.KeyMetadata rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsKeyDestroy, @@ -139,7 +166,7 @@ func TestAccAWSKmsKey_policy(t *testing.T) { rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) expectedPolicyText := `{"Version":"2012-10-17","Id":"kms-tf-1","Statement":[{"Sid":"Enable IAM User Permissions","Effect":"Allow","Principal":{"AWS":"*"},"Action":"kms:*","Resource":"*"}]}` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsKeyDestroy, @@ -159,7 +186,7 @@ func TestAccAWSKmsKey_isEnabled(t *testing.T) { var key1, key2, key3 kms.KeyMetadata rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsKeyDestroy, @@ -199,7 +226,7 @@ func TestAccAWSKmsKey_tags(t *testing.T) { var keyBefore kms.KeyMetadata rName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSKmsKeyDestroy, diff --git a/aws/resource_aws_lambda_alias.go b/aws/resource_aws_lambda_alias.go index 083225f3ad2..fdd38fb83fb 100644 --- a/aws/resource_aws_lambda_alias.go +++ b/aws/resource_aws_lambda_alias.go @@ -19,26 +19,41 @@ func resourceAwsLambdaAlias() *schema.Resource { Delete: resourceAwsLambdaAliasDelete, Schema: map[string]*schema.Schema{ - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, }, - "function_name": &schema.Schema{ + "function_name": { Type: schema.TypeString, Required: true, }, - "function_version": &schema.Schema{ + "function_version": { Type: schema.TypeString, Required: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, + ForceNew: true, }, - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, + "routing_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "additional_version_weights": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeFloat}, + }, + }, + }, + }, }, } } @@ -58,6 +73,7 @@ func resourceAwsLambdaAliasCreate(d *schema.ResourceData, meta interface{}) erro FunctionName: aws.String(functionName), FunctionVersion: aws.String(d.Get("function_version").(string)), Name: aws.String(aliasName), + RoutingConfig: 
expandLambdaAliasRoutingConfiguration(d.Get("routing_config").([]interface{})), } aliasConfiguration, err := conn.CreateAlias(params) @@ -98,6 +114,10 @@ func resourceAwsLambdaAliasRead(d *schema.ResourceData, meta interface{}) error d.Set("name", aliasConfiguration.Name) d.Set("arn", aliasConfiguration.AliasArn) + if err := d.Set("routing_config", flattenLambdaAliasRoutingConfiguration(aliasConfiguration.RoutingConfig)); err != nil { + return fmt.Errorf("error setting routing_config: %s", err) + } + return nil } @@ -118,8 +138,6 @@ func resourceAwsLambdaAliasDelete(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error deleting Lambda alias: %s", err) } - d.SetId("") - return nil } @@ -135,6 +153,7 @@ func resourceAwsLambdaAliasUpdate(d *schema.ResourceData, meta interface{}) erro FunctionName: aws.String(d.Get("function_name").(string)), FunctionVersion: aws.String(d.Get("function_version").(string)), Name: aws.String(d.Get("name").(string)), + RoutingConfig: expandLambdaAliasRoutingConfiguration(d.Get("routing_config").([]interface{})), } _, err := conn.UpdateAlias(params) @@ -144,3 +163,19 @@ func resourceAwsLambdaAliasUpdate(d *schema.ResourceData, meta interface{}) erro return nil } + +func expandLambdaAliasRoutingConfiguration(l []interface{}) *lambda.AliasRoutingConfiguration { + aliasRoutingConfiguration := &lambda.AliasRoutingConfiguration{} + + if len(l) == 0 || l[0] == nil { + return aliasRoutingConfiguration + } + + m := l[0].(map[string]interface{}) + + if v, ok := m["additional_version_weights"]; ok { + aliasRoutingConfiguration.AdditionalVersionWeights = expandFloat64Map(v.(map[string]interface{})) + } + + return aliasRoutingConfiguration +} diff --git a/aws/resource_aws_lambda_alias_test.go b/aws/resource_aws_lambda_alias_test.go index f359157594d..3addc4bb28a 100644 --- a/aws/resource_aws_lambda_alias_test.go +++ b/aws/resource_aws_lambda_alias_test.go @@ -22,16 +22,103 @@ func TestAccAWSLambdaAlias_basic(t *testing.T) { funcName := fmt.Sprintf("tf_acc_lambda_func_alias_basic_%s", rString) aliasName := fmt.Sprintf("tf_acc_lambda_alias_basic_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsLambdaAliasDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAwsLambdaAliasConfig(roleName, policyName, attachmentName, funcName, aliasName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsLambdaAliasExists("aws_lambda_alias.lambda_alias_test", &conf), testAccCheckAwsLambdaAttributes(&conf), + testAccCheckAwsLambdaAliasRoutingConfigDoesNotExist(&conf), + resource.TestMatchResourceAttr("aws_lambda_alias.lambda_alias_test", "arn", + regexp.MustCompile(`^arn:aws:lambda:[a-z]+-[a-z]+-[0-9]+:\d{12}:function:`+funcName+`:`+aliasName+`$`)), + ), + }, + }, + }) +} + +func TestAccAWSLambdaAlias_nameupdate(t *testing.T) { + var conf lambda.AliasConfiguration + + rString := acctest.RandString(8) + roleName := fmt.Sprintf("tf_acc_role_lambda_alias_basic_%s", rString) + policyName := fmt.Sprintf("tf_acc_policy_lambda_alias_basic_%s", rString) + attachmentName := fmt.Sprintf("tf_acc_attachment_%s", rString) + funcName := fmt.Sprintf("tf_acc_lambda_func_alias_basic_%s", rString) + aliasName := fmt.Sprintf("tf_acc_lambda_alias_basic_%s", rString) + aliasNameUpdate := fmt.Sprintf("tf_acc_lambda_alias_basic_%s", acctest.RandString(8)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { 
testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsLambdaAliasDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsLambdaAliasConfig(roleName, policyName, attachmentName, funcName, aliasName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaAliasExists("aws_lambda_alias.lambda_alias_test", &conf), + testAccCheckAwsLambdaAttributes(&conf), + resource.TestMatchResourceAttr("aws_lambda_alias.lambda_alias_test", "arn", + regexp.MustCompile(`^arn:aws:lambda:[a-z]+-[a-z]+-[0-9]+:\d{12}:function:`+funcName+`:`+aliasName+`$`)), + ), + }, + { + Config: testAccAwsLambdaAliasConfig(roleName, policyName, attachmentName, funcName, aliasNameUpdate), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaAliasExists("aws_lambda_alias.lambda_alias_test", &conf), + testAccCheckAwsLambdaAttributes(&conf), + resource.TestMatchResourceAttr("aws_lambda_alias.lambda_alias_test", "arn", + regexp.MustCompile(`^arn:aws:lambda:[a-z]+-[a-z]+-[0-9]+:\d{12}:function:`+funcName+`:`+aliasNameUpdate+`$`)), + ), + }, + }, + }) +} + +func TestAccAWSLambdaAlias_routingconfig(t *testing.T) { + var conf lambda.AliasConfiguration + + rString := acctest.RandString(8) + roleName := fmt.Sprintf("tf_acc_role_lambda_alias_basic_%s", rString) + policyName := fmt.Sprintf("tf_acc_policy_lambda_alias_basic_%s", rString) + attachmentName := fmt.Sprintf("tf_acc_attachment_%s", rString) + funcName := fmt.Sprintf("tf_acc_lambda_func_alias_basic_%s", rString) + aliasName := fmt.Sprintf("tf_acc_lambda_alias_basic_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsLambdaAliasDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsLambdaAliasConfig(roleName, policyName, attachmentName, funcName, aliasName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaAliasExists("aws_lambda_alias.lambda_alias_test", &conf), + testAccCheckAwsLambdaAttributes(&conf), + resource.TestMatchResourceAttr("aws_lambda_alias.lambda_alias_test", "arn", + regexp.MustCompile(`^arn:aws:lambda:[a-z]+-[a-z]+-[0-9]+:\d{12}:function:`+funcName+`:`+aliasName+`$`)), + ), + }, + { + Config: testAccAwsLambdaAliasConfigWithRoutingConfig(roleName, policyName, attachmentName, funcName, aliasName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaAliasExists("aws_lambda_alias.lambda_alias_test", &conf), + testAccCheckAwsLambdaAttributes(&conf), + testAccCheckAwsLambdaAliasRoutingConfigExists(&conf), + resource.TestMatchResourceAttr("aws_lambda_alias.lambda_alias_test", "arn", + regexp.MustCompile(`^arn:aws:lambda:[a-z]+-[a-z]+-[0-9]+:\d{12}:function:`+funcName+`:`+aliasName+`$`)), + ), + }, + { + Config: testAccAwsLambdaAliasConfig(roleName, policyName, attachmentName, funcName, aliasName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaAliasExists("aws_lambda_alias.lambda_alias_test", &conf), + testAccCheckAwsLambdaAttributes(&conf), + testAccCheckAwsLambdaAliasRoutingConfigDoesNotExist(&conf), resource.TestMatchResourceAttr("aws_lambda_alias.lambda_alias_test", "arn", regexp.MustCompile(`^arn:aws:lambda:[a-z]+-[a-z]+-[0-9]+:\d{12}:function:`+funcName+`:`+aliasName+`$`)), ), @@ -104,6 +191,31 @@ func testAccCheckAwsLambdaAttributes(mapping *lambda.AliasConfiguration) resourc } } +func testAccCheckAwsLambdaAliasRoutingConfigExists(mapping *lambda.AliasConfiguration) resource.TestCheckFunc { + return func(s *terraform.State) error { + routingConfig := 
mapping.RoutingConfig + + if routingConfig == nil { + return fmt.Errorf("Could not read Lambda alias routing config") + } + if len(routingConfig.AdditionalVersionWeights) != 1 { + return fmt.Errorf("Could not read Lambda alias additional version weights") + } + return nil + } +} + +func testAccCheckAwsLambdaAliasRoutingConfigDoesNotExist(mapping *lambda.AliasConfiguration) resource.TestCheckFunc { + return func(s *terraform.State) error { + routingConfig := mapping.RoutingConfig + + if routingConfig != nil { + return fmt.Errorf("Lambda alias routing config still exists after removal") + } + return nil + } +} + func testAccAwsLambdaAliasConfig(roleName, policyName, attachmentName, funcName, aliasName string) string { return fmt.Sprintf(` resource "aws_iam_role" "iam_for_lambda" { @@ -154,17 +266,91 @@ resource "aws_iam_policy_attachment" "policy_attachment_for_role" { } resource "aws_lambda_function" "lambda_function_test_create" { - filename = "test-fixtures/lambdatest.zip" - function_name = "%s" - role = "${aws_iam_role.iam_for_lambda.arn}" - handler = "exports.example" - runtime = "nodejs4.3" + filename = "test-fixtures/lambdatest.zip" + function_name = "%s" + role = "${aws_iam_role.iam_for_lambda.arn}" + handler = "exports.example" + runtime = "nodejs4.3" + source_code_hash = "${base64sha256(file("test-fixtures/lambdatest.zip"))}" + publish = "true" +} + +resource "aws_lambda_alias" "lambda_alias_test" { + name = "%s" + description = "a sample description" + function_name = "${aws_lambda_function.lambda_function_test_create.arn}" + function_version = "1" +}`, roleName, policyName, attachmentName, funcName, aliasName) +} + +func testAccAwsLambdaAliasConfigWithRoutingConfig(roleName, policyName, attachmentName, funcName, aliasName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "iam_for_lambda" { + name = "%s" + + assume_role_policy = < 0 { + config := v.([]interface{})[0].(map[string]interface{}) - configs := v.([]interface{}) - config, ok := configs[0].(map[string]interface{}) - - if !ok { - return errors.New("vpc_config is ") - } - - if config != nil { - var subnetIds []*string - for _, id := range config["subnet_ids"].(*schema.Set).List() { - subnetIds = append(subnetIds, aws.String(id.(string))) - } - - var securityGroupIds []*string - for _, id := range config["security_group_ids"].(*schema.Set).List() { - securityGroupIds = append(securityGroupIds, aws.String(id.(string))) - } - - params.VpcConfig = &lambda.VpcConfig{ - SubnetIds: subnetIds, - SecurityGroupIds: securityGroupIds, - } + params.VpcConfig = &lambda.VpcConfig{ + SecurityGroupIds: expandStringSet(config["security_group_ids"].(*schema.Set)), + SubnetIds: expandStringSet(config["subnet_ids"].(*schema.Set)), } } @@ -378,17 +385,21 @@ func resourceAwsLambdaFunctionCreate(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Received %s, retrying CreateFunction", err) return resource.RetryableError(err) } + if isAWSErr(err, "InvalidParameterValueException", "Lambda was unable to configure access to your environment variables because the KMS key is invalid for CreateGrant") { + log.Printf("[DEBUG] Received %s, retrying CreateFunction", err) + return resource.RetryableError(err) + } return resource.NonRetryableError(err) } return nil }) if err != nil { - if !isAWSErr(err, "InvalidParameterValueException", "Your request has been throttled by EC2") { + if !isResourceTimeoutError(err) && !isAWSErr(err, "InvalidParameterValueException", "Your request has been throttled by EC2") { return fmt.Errorf("Error 
creating Lambda function: %s", err) } - // Allow 9 more minutes for EC2 throttling - err := resource.Retry(9*time.Minute, func() *resource.RetryError { + // Allow additional time for slower uploads or EC2 throttling + err := resource.Retry(d.Timeout(schema.TimeoutCreate), func() *resource.RetryError { _, err := conn.CreateFunction(params) if err != nil { log.Printf("[DEBUG] Error creating Lambda Function: %s", err) @@ -441,12 +452,19 @@ func resourceAwsLambdaFunctionCreate(d *schema.ResourceData, meta interface{}) e func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).lambdaconn - log.Printf("[DEBUG] Fetching Lambda Function: %s", d.Id()) - params := &lambda.GetFunctionInput{ FunctionName: aws.String(d.Get("function_name").(string)), } + // qualifier for lambda function data source + qualifier, qualifierExistance := d.GetOk("qualifier") + if qualifierExistance { + params.Qualifier = aws.String(qualifier.(string)) + log.Printf("[DEBUG] Fetching Lambda Function: %s:%s", d.Id(), qualifier.(string)) + } else { + log.Printf("[DEBUG] Fetching Lambda Function: %s", d.Id()) + } + getFunctionOutput, err := conn.GetFunction(params) if err != nil { if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceNotFoundException" && !d.IsNewResource() { @@ -456,6 +474,18 @@ func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) err return err } + if getFunctionOutput.Concurrency != nil { + d.Set("reserved_concurrent_executions", getFunctionOutput.Concurrency.ReservedConcurrentExecutions) + } else { + d.Set("reserved_concurrent_executions", nil) + } + + // Tagging operations are permitted on Lambda functions only. + // Tags on aliases and versions are not supported. + if !qualifierExistance { + d.Set("tags", tagsToMapGeneric(getFunctionOutput.Tags)) + } + // getFunctionOutput.Code.Location is a pre-signed URL pointing at the zip // file that we uploaded when we created the resource. You can use it to // download the code from AWS. 
The other part is @@ -472,18 +502,18 @@ func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) err d.Set("runtime", function.Runtime) d.Set("timeout", function.Timeout) d.Set("kms_key_arn", function.KMSKeyArn) - d.Set("tags", tagsToMapGeneric(getFunctionOutput.Tags)) + d.Set("source_code_hash", function.CodeSha256) + d.Set("source_code_size", function.CodeSize) config := flattenLambdaVpcConfigResponse(function.VpcConfig) log.Printf("[INFO] Setting Lambda %s VPC config %#v from API", d.Id(), config) - vpcSetErr := d.Set("vpc_config", config) - if vpcSetErr != nil { - return fmt.Errorf("Failed setting vpc_config: %s", vpcSetErr) + if err := d.Set("vpc_config", config); err != nil { + return fmt.Errorf("Error setting vpc_config for Lambda Function (%s): %s", d.Id(), err) } - d.Set("source_code_hash", function.CodeSha256) - - if err := d.Set("environment", flattenLambdaEnvironment(function.Environment)); err != nil { + environment := flattenLambdaEnvironment(function.Environment) + log.Printf("[INFO] Setting Lambda %s environment %#v from API", d.Id(), environment) + if err := d.Set("environment", environment); err != nil { log.Printf("[ERR] Error setting environment for Lambda Function (%s): %s", d.Id(), err) } @@ -497,44 +527,54 @@ func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) err d.Set("dead_letter_config", []interface{}{}) } + // Assume `PassThrough` on partitions that don't support tracing config + tracingConfigMode := "PassThrough" if function.TracingConfig != nil { - d.Set("tracing_config", []interface{}{ - map[string]interface{}{ - "mode": *function.TracingConfig.Mode, - }, - }) + tracingConfigMode = *function.TracingConfig.Mode } - - // List is sorted from oldest to latest - // so this may get costly over time :'( - var lastVersion, lastQualifiedArn string - err = listVersionsByFunctionPages(conn, &lambda.ListVersionsByFunctionInput{ - FunctionName: function.FunctionName, - MaxItems: aws.Int64(10000), - }, func(p *lambda.ListVersionsByFunctionOutput, lastPage bool) bool { - if lastPage { - last := p.Versions[len(p.Versions)-1] - lastVersion = *last.Version - lastQualifiedArn = *last.FunctionArn - return false - } - return true + d.Set("tracing_config", []interface{}{ + map[string]interface{}{ + "mode": tracingConfigMode, + }, }) - if err != nil { - return err - } - - d.Set("version", lastVersion) - d.Set("qualified_arn", lastQualifiedArn) - - d.Set("invoke_arn", buildLambdaInvokeArn(*function.FunctionArn, meta.(*AWSClient).region)) - if getFunctionOutput.Concurrency != nil { - d.Set("reserved_concurrent_executions", getFunctionOutput.Concurrency.ReservedConcurrentExecutions) + // Get latest version and ARN unless qualifier is specified via data source + if qualifierExistance { + d.Set("version", function.Version) + d.Set("qualified_arn", function.FunctionArn) } else { - d.Set("reserved_concurrent_executions", nil) + // List is sorted from oldest to latest + // so this may get costly over time :'( + var lastVersion, lastQualifiedArn string + err = listVersionsByFunctionPages(conn, &lambda.ListVersionsByFunctionInput{ + FunctionName: function.FunctionName, + MaxItems: aws.Int64(10000), + }, func(p *lambda.ListVersionsByFunctionOutput, lastPage bool) bool { + if lastPage { + last := p.Versions[len(p.Versions)-1] + lastVersion = *last.Version + lastQualifiedArn = *last.FunctionArn + return false + } + return true + }) + if err != nil { + return err + } + + d.Set("version", lastVersion) + d.Set("qualified_arn", lastQualifiedArn) } + 
invokeArn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "apigateway", + Region: meta.(*AWSClient).region, + AccountID: "lambda", + Resource: fmt.Sprintf("path/2015-03-31/functions/%s/invocations", *function.FunctionArn), + }.String() + d.Set("invoke_arn", invokeArn) + return nil } @@ -572,8 +612,6 @@ func resourceAwsLambdaFunctionDelete(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error deleting Lambda Function: %s", err) } - d.SetId("") - return nil } @@ -629,13 +667,14 @@ func resourceAwsLambdaFunctionUpdate(d *schema.ResourceData, meta interface{}) e } if d.HasChange("dead_letter_config") { dlcMaps := d.Get("dead_letter_config").([]interface{}) + configReq.DeadLetterConfig = &lambda.DeadLetterConfig{ + TargetArn: aws.String(""), + } if len(dlcMaps) == 1 { // Schema guarantees either 0 or 1 dlcMap := dlcMaps[0].(map[string]interface{}) - configReq.DeadLetterConfig = &lambda.DeadLetterConfig{ - TargetArn: aws.String(dlcMap["target_arn"].(string)), - } - configUpdate = true + configReq.DeadLetterConfig.TargetArn = aws.String(dlcMap["target_arn"].(string)) } + configUpdate = true } if d.HasChange("tracing_config") { tracingConfig := d.Get("tracing_config").([]interface{}) @@ -648,29 +687,16 @@ func resourceAwsLambdaFunctionUpdate(d *schema.ResourceData, meta interface{}) e } } if d.HasChange("vpc_config") { - vpcConfigRaw := d.Get("vpc_config").([]interface{}) - vpcConfig, ok := vpcConfigRaw[0].(map[string]interface{}) - if !ok { - return errors.New("vpc_config is ") + configReq.VpcConfig = &lambda.VpcConfig{ + SecurityGroupIds: []*string{}, + SubnetIds: []*string{}, } - - if vpcConfig != nil { - var subnetIds []*string - for _, id := range vpcConfig["subnet_ids"].(*schema.Set).List() { - subnetIds = append(subnetIds, aws.String(id.(string))) - } - - var securityGroupIds []*string - for _, id := range vpcConfig["security_group_ids"].(*schema.Set).List() { - securityGroupIds = append(securityGroupIds, aws.String(id.(string))) - } - - configReq.VpcConfig = &lambda.VpcConfig{ - SubnetIds: subnetIds, - SecurityGroupIds: securityGroupIds, - } - configUpdate = true + if v, ok := d.GetOk("vpc_config"); ok && len(v.([]interface{})) > 0 { + vpcConfig := v.([]interface{})[0].(map[string]interface{}) + configReq.VpcConfig.SecurityGroupIds = expandStringSet(vpcConfig["security_group_ids"].(*schema.Set)) + configReq.VpcConfig.SubnetIds = expandStringSet(vpcConfig["subnet_ids"].(*schema.Set)) } + configUpdate = true } if d.HasChange("runtime") { @@ -710,6 +736,10 @@ func resourceAwsLambdaFunctionUpdate(d *schema.ResourceData, meta interface{}) e if err != nil { log.Printf("[DEBUG] Received error modifying Lambda Function Configuration %s: %s", d.Id(), err) + if isAWSErr(err, "InvalidParameterValueException", "The role defined for the function cannot be assumed by Lambda") { + log.Printf("[DEBUG] Received %s, retrying UpdateFunctionConfiguration", err) + return resource.RetryableError(err) + } if isAWSErr(err, "InvalidParameterValueException", "The provided execution role does not have permissions") { log.Printf("[DEBUG] Received %s, retrying UpdateFunctionConfiguration", err) return resource.RetryableError(err) @@ -718,6 +748,11 @@ func resourceAwsLambdaFunctionUpdate(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Received %s, retrying UpdateFunctionConfiguration", err) return resource.RetryableError(err) } + if isAWSErr(err, "InvalidParameterValueException", "Lambda was unable to configure access to your environment variables because the KMS key is 
invalid for CreateGrant") { + log.Printf("[DEBUG] Received %s, retrying CreateFunction", err) + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) } return nil diff --git a/aws/resource_aws_lambda_function_test.go b/aws/resource_aws_lambda_function_test.go index 48cb7161a0a..a766f686a9e 100644 --- a/aws/resource_aws_lambda_function_test.go +++ b/aws/resource_aws_lambda_function_test.go @@ -4,6 +4,7 @@ import ( "archive/zip" "fmt" "io/ioutil" + "log" "os" "path/filepath" "regexp" @@ -18,6 +19,142 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_lambda_function", &resource.Sweeper{ + Name: "aws_lambda_function", + F: testSweepLambdaFunctions, + }) +} + +func testSweepLambdaFunctions(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + + lambdaconn := client.(*AWSClient).lambdaconn + + resp, err := lambdaconn.ListFunctions(&lambda.ListFunctionsInput{}) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Lambda Function sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Lambda functions: %s", err) + } + + if len(resp.Functions) == 0 { + log.Print("[DEBUG] No aws lambda functions to sweep") + return nil + } + + for _, f := range resp.Functions { + var testOptGroup bool + for _, testName := range []string{"tf_test", "tf_acc_"} { + if strings.HasPrefix(*f.FunctionName, testName) { + testOptGroup = true + } + } + + if !testOptGroup { + continue + } + + _, err := lambdaconn.DeleteFunction( + &lambda.DeleteFunctionInput{ + FunctionName: f.FunctionName, + }) + if err != nil { + return err + } + } + + return nil +} + +func TestAccAWSLambdaFunction_importLocalFile(t *testing.T) { + resourceName := "aws_lambda_function.lambda_function_test" + + rString := acctest.RandString(8) + funcName := fmt.Sprintf("tf_acc_lambda_func_import_local_%s", rString) + policyName := fmt.Sprintf("tf_acc_policy_lambda_func_import_local_%s", rString) + roleName := fmt.Sprintf("tf_acc_role_lambda_func_import_local_%s", rString) + sgName := fmt.Sprintf("tf_acc_sg_lambda_func_import_local_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaFunctionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaConfigBasic(funcName, policyName, roleName, sgName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"filename", "publish"}, + }, + }, + }) +} + +func TestAccAWSLambdaFunction_importLocalFile_VPC(t *testing.T) { + resourceName := "aws_lambda_function.lambda_function_test" + + rString := acctest.RandString(8) + funcName := fmt.Sprintf("tf_acc_lambda_func_import_vpc_%s", rString) + policyName := fmt.Sprintf("tf_acc_policy_lambda_func_import_vpc_%s", rString) + roleName := fmt.Sprintf("tf_acc_role_lambda_func_import_vpc_%s", rString) + sgName := fmt.Sprintf("tf_acc_sg_lambda_func_import_vpc_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaFunctionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaConfigWithVPC(funcName, policyName, roleName, sgName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + 
ImportStateVerifyIgnore: []string{"filename", "publish"}, + }, + }, + }) +} + +func TestAccAWSLambdaFunction_importS3(t *testing.T) { + resourceName := "aws_lambda_function.lambda_function_s3test" + + rString := acctest.RandString(8) + bucketName := fmt.Sprintf("tf-acc-bucket-lambda-func-import-s3-%s", rString) + roleName := fmt.Sprintf("tf_acc_role_lambda_func_import_s3_%s", rString) + funcName := fmt.Sprintf("tf_acc_lambda_func_import_s3_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaFunctionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaConfigS3(bucketName, roleName, funcName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"s3_bucket", "s3_key", "publish"}, + }, + }, + }) +} + func TestAccAWSLambdaFunction_basic(t *testing.T) { var conf lambda.GetFunctionOutput @@ -27,7 +164,7 @@ func TestAccAWSLambdaFunction_basic(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_basic_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_basic_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -54,7 +191,7 @@ func TestAccAWSLambdaFunction_concurrency(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_concurrency_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_concurrency_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -90,7 +227,7 @@ func TestAccAWSLambdaFunction_concurrencyCycle(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_concurrency_cycle_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_concurrency_cycle_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -135,7 +272,7 @@ func TestAccAWSLambdaFunction_updateRuntime(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_update_runtime_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_update_runtime_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -151,7 +288,7 @@ func TestAccAWSLambdaFunction_updateRuntime(t *testing.T) { Config: testAccAWSLambdaConfigBasicUpdateRuntime(funcName, policyName, roleName, sgName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsLambdaFunctionExists("aws_lambda_function.lambda_function_test", funcName, &conf), - resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "runtime", "nodejs4.3-edge"), + resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "runtime", "nodejs6.10"), ), }, }, @@ -165,7 +302,7 @@ func TestAccAWSLambdaFunction_expectFilenameAndS3Attributes(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_expect_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_expect_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ 
PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -187,7 +324,7 @@ func TestAccAWSLambdaFunction_envVariables(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_env_vars_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_env_vars_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -242,9 +379,9 @@ func TestAccAWSLambdaFunction_encryptedEnvVariables(t *testing.T) { policyName := fmt.Sprintf("tf_acc_policy_lambda_func_encrypted_env_%s", rString) roleName := fmt.Sprintf("tf_acc_role_lambda_func_encrypted_env_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_encrypted_env_%s", rString) - keyRegex := regexp.MustCompile("^arn:aws:kms:") + keyRegex := regexp.MustCompile("^arn:aws[\\w-]*:kms:") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -282,7 +419,7 @@ func TestAccAWSLambdaFunction_versioned(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_versioned_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_versioned_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -320,7 +457,7 @@ func TestAccAWSLambdaFunction_versionedUpdate(t *testing.T) { var timeBeforeUpdate time.Time - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -365,7 +502,7 @@ func TestAccAWSLambdaFunction_DeadLetterConfig(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_dlconfig_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_dlconfig_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -386,6 +523,20 @@ func TestAccAWSLambdaFunction_DeadLetterConfig(t *testing.T) { }, ), }, + // Ensure configuration can be imported + { + ResourceName: "aws_lambda_function.lambda_function_test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"filename", "publish"}, + }, + // Ensure configuration can be removed + { + Config: testAccAWSLambdaConfigBasic(funcName, policyName, roleName, sgName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaFunctionExists("aws_lambda_function.lambda_function_test", funcName, &conf), + ), + }, }, }) } @@ -402,7 +553,7 @@ func TestAccAWSLambdaFunction_DeadLetterConfigUpdated(t *testing.T) { topic1Name := fmt.Sprintf("tf_acc_topic_lambda_func_dlcfg_upd_%s", rString) topic2Name := fmt.Sprintf("tf_acc_topic_lambda_func_dlcfg_upd_2_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -435,7 +586,7 @@ func TestAccAWSLambdaFunction_nilDeadLetterConfig(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_nil_dlcfg_%s", rString) sgName := 
fmt.Sprintf("tf_acc_sg_lambda_func_nil_dlcfg_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -458,7 +609,11 @@ func TestAccAWSLambdaFunction_tracingConfig(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_tracing_cfg_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_tracing_cfg_%s", rString) - resource.Test(t, resource.TestCase{ + if testAccGetPartition() == "aws-us-gov" { + t.Skip("Lambda tracing config is not supported in GovCloud partition") + } + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -494,7 +649,7 @@ func TestAccAWSLambdaFunction_VPC(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_vpc_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_vpc_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -516,6 +671,35 @@ func TestAccAWSLambdaFunction_VPC(t *testing.T) { }) } +func TestAccAWSLambdaFunction_VPCRemoval(t *testing.T) { + var conf lambda.GetFunctionOutput + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_lambda_function.lambda_function_test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaFunctionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaConfigWithVPC(rName, rName, rName, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaFunctionExists(resourceName, rName, &conf), + resource.TestCheckResourceAttr(resourceName, "vpc_config.#", "1"), + ), + }, + { + Config: testAccAWSLambdaConfigBasic(rName, rName, rName, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaFunctionExists(resourceName, rName, &conf), + resource.TestCheckResourceAttr(resourceName, "vpc_config.#", "0"), + ), + }, + }, + }) +} + func TestAccAWSLambdaFunction_VPCUpdate(t *testing.T) { var conf lambda.GetFunctionOutput @@ -526,7 +710,7 @@ func TestAccAWSLambdaFunction_VPCUpdate(t *testing.T) { sgName := fmt.Sprintf("tf_acc_sg_lambda_func_vpc_upd_%s", rString) sgName2 := fmt.Sprintf("tf_acc_sg_lambda_func_2nd_vpc_upd_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -570,7 +754,7 @@ func TestAccAWSLambdaFunction_VPC_withInvocation(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_vpc_w_invc_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_vpc_w_invc_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -586,6 +770,31 @@ func TestAccAWSLambdaFunction_VPC_withInvocation(t *testing.T) { }) } +func TestAccAWSLambdaFunction_EmptyVpcConfig(t *testing.T) { + var conf lambda.GetFunctionOutput + + rString := acctest.RandString(8) + funcName := fmt.Sprintf("tf_acc_lambda_func_empty_vpc_config_%s", rString) + policyName := 
fmt.Sprintf("tf_acc_policy_lambda_func_empty_vpc_config_%s", rString) + roleName := fmt.Sprintf("tf_acc_role_lambda_func_empty_vpc_config_%s", rString) + sgName := fmt.Sprintf("tf_acc_sg_lambda_func_empty_vpc_config_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaFunctionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaConfigWithEmptyVpcConfig(funcName, policyName, roleName, sgName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaFunctionExists("aws_lambda_function.test", funcName, &conf), + resource.TestCheckResourceAttr("aws_lambda_function.test", "vpc_config.#", "0"), + ), + }, + }, + }) +} + func TestAccAWSLambdaFunction_s3(t *testing.T) { var conf lambda.GetFunctionOutput @@ -594,7 +803,7 @@ func TestAccAWSLambdaFunction_s3(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_s3_%s", rString) funcName := fmt.Sprintf("tf_acc_lambda_func_s3_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -627,7 +836,7 @@ func TestAccAWSLambdaFunction_localUpdate(t *testing.T) { var timeBeforeUpdate time.Time - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -683,7 +892,7 @@ func TestAccAWSLambdaFunction_localUpdate_nameOnly(t *testing.T) { } defer os.Remove(updatedPath) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -732,7 +941,7 @@ func TestAccAWSLambdaFunction_s3Update_basic(t *testing.T) { key := "lambda-func.zip" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -751,17 +960,11 @@ func TestAccAWSLambdaFunction_s3Update_basic(t *testing.T) { ), }, { - ExpectNonEmptyPlan: true, PreConfig: func() { // Upload 2nd version testAccCreateZipFromFiles(map[string]string{"test-fixtures/lambda_func_modified.js": "lambda.js"}, zipFile) }, Config: genAWSLambdaFunctionConfig_s3(bucketName, key, path, roleName, funcName), - }, - // Extra step because of missing ComputedWhen - // See https://github.com/hashicorp/terraform/pull/4846 & https://github.com/hashicorp/terraform/pull/5330 - { - Config: genAWSLambdaFunctionConfig_s3(bucketName, key, path, roleName, funcName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsLambdaFunctionExists("aws_lambda_function.lambda_function_s3", funcName, &conf), testAccCheckAwsLambdaFunctionName(&conf, funcName), @@ -790,7 +993,7 @@ func TestAccAWSLambdaFunction_s3Update_unversioned(t *testing.T) { key := "lambda-func.zip" key2 := "lambda-func-modified.zip" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -833,7 +1036,7 @@ func TestAccAWSLambdaFunction_runtimeValidation_noRuntime(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_runtime_valid_no_%s", rString) sgName := 
fmt.Sprintf("tf_acc_sg_lambda_func_runtime_valid_no_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -856,7 +1059,7 @@ func TestAccAWSLambdaFunction_runtimeValidation_nodeJs43(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_runtime_valid_node43_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_runtime_valid_node43_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -882,7 +1085,7 @@ func TestAccAWSLambdaFunction_runtimeValidation_python27(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_runtime_valid_p27_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_runtime_valid_p27_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -908,7 +1111,7 @@ func TestAccAWSLambdaFunction_runtimeValidation_java8(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_runtime_valid_j8_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_runtime_valid_j8_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -934,7 +1137,7 @@ func TestAccAWSLambdaFunction_tags(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_tags_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_tags_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -985,7 +1188,7 @@ func TestAccAWSLambdaFunction_runtimeValidation_python36(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_func_runtime_valid_p36_%s", rString) sgName := fmt.Sprintf("tf_acc_sg_lambda_func_runtime_valid_p36_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLambdaFunctionDestroy, @@ -1179,6 +1382,8 @@ func createTempFile(prefix string) (string, *os.File, error) { func baseAccAWSLambdaConfig(policyName, roleName, sgName string) string { return fmt.Sprintf(` +data "aws_partition" "current" {} + resource "aws_iam_role_policy" "iam_policy_for_lambda" { name = "%s" role = "${aws_iam_role.iam_for_lambda.id}" @@ -1193,7 +1398,7 @@ resource "aws_iam_role_policy" "iam_policy_for_lambda" { "logs:CreateLogStream", "logs:PutLogEvents" ], - "Resource": "arn:aws:logs:*:*:*" + "Resource": "arn:${data.aws_partition.current.partition}:logs:*:*:*" }, { "Effect": "Allow", @@ -1330,7 +1535,7 @@ resource "aws_lambda_function" "lambda_function_test" { function_name = "%s" role = "${aws_iam_role.iam_for_lambda.arn}" handler = "exports.example" - runtime = "nodejs4.3-edge" + runtime = "nodejs6.10" } `, funcName) } @@ -1668,6 +1873,22 @@ resource "aws_security_group" "sg_for_lambda_2" { `, funcName, sgName2) } +func testAccAWSLambdaConfigWithEmptyVpcConfig(funcName, policyName, roleName, sgName string) string { + return 
fmt.Sprintf(baseAccAWSLambdaConfig(policyName, roleName, sgName)+` +resource "aws_lambda_function" "test" { + filename = "test-fixtures/lambdatest.zip" + function_name = "%s" + role = "${aws_iam_role.iam_for_lambda.arn}" + handler = "exports.example" + runtime = "nodejs4.3" + + vpc_config { + subnet_ids = [] + security_group_ids = [] + } +}`, funcName) +} + func testAccAWSLambdaConfigS3(bucketName, roleName, funcName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "lambda_bucket" { diff --git a/aws/resource_aws_lambda_permission.go b/aws/resource_aws_lambda_permission.go index f5b60e75096..11308e53be7 100644 --- a/aws/resource_aws_lambda_permission.go +++ b/aws/resource_aws_lambda_permission.go @@ -30,6 +30,12 @@ func resourceAwsLambdaPermission() *schema.Resource { ForceNew: true, ValidateFunc: validateLambdaPermissionAction, }, + "event_source_token": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validateLambdaPermissionEventSourceToken, + }, "function_name": { Type: schema.TypeString, Required: true, @@ -60,10 +66,19 @@ func resourceAwsLambdaPermission() *schema.Resource { ValidateFunc: validateArn, }, "statement_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validatePolicyStatementId, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"statement_id_prefix"}, + ValidateFunc: validatePolicyStatementId, + }, + "statement_id_prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"statement_id"}, + ValidateFunc: validatePolicyStatementId, }, }, } @@ -74,6 +89,15 @@ func resourceAwsLambdaPermissionCreate(d *schema.ResourceData, meta interface{}) functionName := d.Get("function_name").(string) + var statementId string + if v, ok := d.GetOk("statement_id"); ok { + statementId = v.(string) + } else if v, ok := d.GetOk("statement_id_prefix"); ok { + statementId = resource.PrefixedUniqueId(v.(string)) + } else { + statementId = resource.UniqueId() + } + // There is a bug in the API (reported and acknowledged by AWS) // which causes some permissions to be ignored when API calls are sent in parallel // We work around this bug via mutex @@ -84,9 +108,12 @@ func resourceAwsLambdaPermissionCreate(d *schema.ResourceData, meta interface{}) Action: aws.String(d.Get("action").(string)), FunctionName: aws.String(functionName), Principal: aws.String(d.Get("principal").(string)), - StatementId: aws.String(d.Get("statement_id").(string)), + StatementId: aws.String(statementId), } + if v, ok := d.GetOk("event_source_token"); ok { + input.EventSourceToken = aws.String(v.(string)) + } if v, ok := d.GetOk("qualifier"); ok { input.Qualifier = aws.String(v.(string)) } @@ -108,7 +135,7 @@ func resourceAwsLambdaPermissionCreate(d *schema.ResourceData, meta interface{}) // IAM is eventually consistent :/ if awsErr.Code() == "ResourceConflictException" { return resource.RetryableError( - fmt.Errorf("[WARN] Error adding new Lambda Permission for %s, retrying: %s", + fmt.Errorf("Error adding new Lambda Permission for %s, retrying: %s", *input.FunctionName, err)) } } @@ -127,20 +154,20 @@ func resourceAwsLambdaPermissionCreate(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Created new Lambda permission, but no Statement was included") } - d.SetId(d.Get("statement_id").(string)) + d.SetId(statementId) err = resource.Retry(5*time.Minute, func() *resource.RetryError { - // IAM is eventually cosistent :/ + // IAM is 
eventually consistent :/ err := resourceAwsLambdaPermissionRead(d, meta) if err != nil { if strings.HasPrefix(err.Error(), "Error reading Lambda policy: ResourceNotFoundException") { return resource.RetryableError( - fmt.Errorf("[WARN] Error reading newly created Lambda Permission for %s, retrying: %s", + fmt.Errorf("Error reading newly created Lambda Permission for %s, retrying: %s", *input.FunctionName, err)) } if strings.HasPrefix(err.Error(), "Failed to find statement \""+d.Id()) { return resource.RetryableError( - fmt.Errorf("[WARN] Error reading newly created Lambda Permission statement for %s, retrying: %s", + fmt.Errorf("Error reading newly created Lambda Permission statement for %s, retrying: %s", *input.FunctionName, err)) } @@ -167,7 +194,7 @@ func resourceAwsLambdaPermissionRead(d *schema.ResourceData, meta interface{}) e var out *lambda.GetPolicyOutput var statement *LambdaPolicyStatement err := resource.Retry(1*time.Minute, func() *resource.RetryError { - // IAM is eventually cosistent :/ + // IAM is eventually consistent :/ var err error out, err = conn.GetPolicy(&input) if err != nil { @@ -230,7 +257,7 @@ func resourceAwsLambdaPermissionRead(d *schema.ResourceData, meta interface{}) e } d.Set("action", statement.Action) - // Check if the pricipal is a cross-account IAM role + // Check if the principal is a cross-account IAM role if _, ok := statement.Principal["AWS"]; ok { d.Set("principal", statement.Principal["AWS"]) } else { @@ -239,12 +266,15 @@ func resourceAwsLambdaPermissionRead(d *schema.ResourceData, meta interface{}) e if stringEquals, ok := statement.Condition["StringEquals"]; ok { d.Set("source_account", stringEquals["AWS:SourceAccount"]) + d.Set("event_source_token", stringEquals["lambda:EventSourceToken"]) } if arnLike, ok := statement.Condition["ArnLike"]; ok { d.Set("source_arn", arnLike["AWS:SourceArn"]) } + d.Set("statement_id", statement.Sid) + return nil } @@ -321,7 +351,6 @@ func resourceAwsLambdaPermissionDelete(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Lambda permission with ID %q removed", d.Id()) - d.SetId("") return nil } diff --git a/aws/resource_aws_lambda_permission_test.go b/aws/resource_aws_lambda_permission_test.go index b7186fc9102..e79ccc9af53 100644 --- a/aws/resource_aws_lambda_permission_test.go +++ b/aws/resource_aws_lambda_permission_test.go @@ -49,6 +49,13 @@ func TestLambdaPermissionUnmarshalling(t *testing.T) { v.Statement[0].Condition["StringEquals"]["AWS:SourceAccount"], expectedSourceAccount) } + + expectedEventSourceToken := "test-event-source-token" + if v.Statement[0].Condition["StringEquals"]["lambda:EventSourceToken"] != expectedEventSourceToken { + t.Fatalf("Expected Event Source Token to match (%q != %q)", + v.Statement[0].Condition["StringEquals"]["lambda:EventSourceToken"], + expectedEventSourceToken) + } } func TestLambdaPermissionGetQualifierFromLambdaAliasOrVersionArn_alias(t *testing.T) { @@ -164,7 +171,7 @@ func TestAccAWSLambdaPermission_basic(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_perm_basic_%s", rString) funcArnRe := regexp.MustCompile(":function:" + funcName + "$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLambdaPermissionDestroy, @@ -178,6 +185,7 @@ func TestAccAWSLambdaPermission_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_lambda_permission.allow_cloudwatch", "statement_id", "AllowExecutionFromCloudWatch"), 
resource.TestCheckResourceAttr("aws_lambda_permission.allow_cloudwatch", "qualifier", ""), resource.TestMatchResourceAttr("aws_lambda_permission.allow_cloudwatch", "function_name", funcArnRe), + resource.TestCheckResourceAttr("aws_lambda_permission.allow_cloudwatch", "event_source_token", "test-event-source-token"), ), }, }, @@ -192,7 +200,7 @@ func TestAccAWSLambdaPermission_withRawFunctionName(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_perm_w_raw_fname_%s", rString) funcArnRe := regexp.MustCompile(":function:" + funcName + "$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLambdaPermissionDestroy, @@ -211,6 +219,31 @@ func TestAccAWSLambdaPermission_withRawFunctionName(t *testing.T) { }) } +func TestAccAWSLambdaPermission_withStatementIdPrefix(t *testing.T) { + var statement LambdaPolicyStatement + rName := acctest.RandomWithPrefix("tf-acc-test") + endsWithFuncName := regexp.MustCompile(":function:lambda_function_name_perm$") + startsWithPrefix := regexp.MustCompile("^AllowExecutionWithStatementIdPrefix-") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLambdaPermissionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaPermissionConfig_withStatementIdPrefix(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckLambdaPermissionExists("aws_lambda_permission.with_statement_id_prefix", &statement), + resource.TestCheckResourceAttr("aws_lambda_permission.with_statement_id_prefix", "action", "lambda:InvokeFunction"), + resource.TestCheckResourceAttr("aws_lambda_permission.with_statement_id_prefix", "principal", "events.amazonaws.com"), + resource.TestMatchResourceAttr("aws_lambda_permission.with_statement_id_prefix", "statement_id", startsWithPrefix), + resource.TestMatchResourceAttr("aws_lambda_permission.with_statement_id_prefix", "function_name", endsWithFuncName), + ), + }, + }, + }) +} + func TestAccAWSLambdaPermission_withQualifier(t *testing.T) { var statement LambdaPolicyStatement @@ -220,7 +253,7 @@ func TestAccAWSLambdaPermission_withQualifier(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_perm_w_qualifier_%s", rString) funcArnRe := regexp.MustCompile(":function:" + funcName + "$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLambdaPermissionDestroy, @@ -252,7 +285,7 @@ func TestAccAWSLambdaPermission_multiplePerms(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_perm_multi_%s", rString) funcArnRe := regexp.MustCompile(":function:" + funcName + "$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLambdaPermissionDestroy, @@ -310,7 +343,7 @@ func TestAccAWSLambdaPermission_withS3(t *testing.T) { roleName := fmt.Sprintf("tf_acc_role_lambda_perm_w_s3_%s", rString) funcArnRe := regexp.MustCompile(":function:" + funcName + "$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLambdaPermissionDestroy, @@ -342,7 +375,7 @@ func TestAccAWSLambdaPermission_withSNS(t *testing.T) { funcArnRe 
:= regexp.MustCompile(":function:" + funcName + "$") topicArnRe := regexp.MustCompile(":" + topicName + "$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLambdaPermissionDestroy, @@ -371,7 +404,7 @@ func TestAccAWSLambdaPermission_withIAMRole(t *testing.T) { funcArnRe := regexp.MustCompile(":function:" + funcName + "$") roleArnRe := regexp.MustCompile("/" + roleName + "$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLambdaPermissionDestroy, @@ -526,6 +559,7 @@ resource "aws_lambda_permission" "allow_cloudwatch" { action = "lambda:InvokeFunction" function_name = "${aws_lambda_function.test_lambda.arn}" principal = "events.amazonaws.com" + event_source_token = "test-event-source-token" } resource "aws_lambda_function" "test_lambda" { @@ -594,6 +628,44 @@ EOF `, funcName, roleName) } +func testAccAWSLambdaPermissionConfig_withStatementIdPrefix(rName string) string { + return fmt.Sprintf(` +resource "aws_lambda_permission" "with_statement_id_prefix" { + statement_id_prefix = "AllowExecutionWithStatementIdPrefix-" + action = "lambda:InvokeFunction" + function_name = "${aws_lambda_function.test_lambda.arn}" + principal = "events.amazonaws.com" +} + +resource "aws_lambda_function" "test_lambda" { + filename = "test-fixtures/lambdatest.zip" + function_name = "lambda_function_name_perm" + role = "${aws_iam_role.iam_for_lambda.arn}" + handler = "exports.handler" + runtime = "nodejs4.3" +} + +resource "aws_iam_role" "iam_for_lambda" { + name = "%s" + assume_role_policy = < 0 { + raw, ok := networkInterface["ipv6_addresses"] + if !ok { + raw = schema.NewSet(schema.HashString, nil) + } + + list := raw.(*schema.Set) + + for _, address := range v.Ipv6Addresses { + list.Add(aws.StringValue(address.Ipv6Address)) + } + + networkInterface["ipv6_addresses"] = list + } + + for _, address := range v.PrivateIpAddresses { + ipv4Addresses = append(ipv4Addresses, aws.StringValue(address.PrivateIpAddress)) + } + if len(ipv4Addresses) > 0 { + networkInterface["ipv4_addresses"] = ipv4Addresses + } + + if len(v.Groups) > 0 { + raw, ok := networkInterface["security_groups"] + if !ok { + raw = schema.NewSet(schema.HashString, nil) + } + list := raw.(*schema.Set) + + for _, group := range v.Groups { + list.Add(aws.StringValue(group)) + } + + networkInterface["security_groups"] = list + } + + s = append(s, networkInterface) + } + return s +} + +func getPlacement(p *ec2.LaunchTemplatePlacement) []interface{} { + s := []interface{}{} + if p != nil { + s = append(s, map[string]interface{}{ + "affinity": aws.StringValue(p.Affinity), + "availability_zone": aws.StringValue(p.AvailabilityZone), + "group_name": aws.StringValue(p.GroupName), + "host_id": aws.StringValue(p.HostId), + "spread_domain": aws.StringValue(p.SpreadDomain), + "tenancy": aws.StringValue(p.Tenancy), + }) + } + return s +} + +func getTagSpecifications(t []*ec2.LaunchTemplateTagSpecification) []interface{} { + s := []interface{}{} + for _, v := range t { + s = append(s, map[string]interface{}{ + "resource_type": aws.StringValue(v.ResourceType), + "tags": tagsToMap(v.Tags), + }) + } + return s +} + +func buildLaunchTemplateData(d *schema.ResourceData) (*ec2.RequestLaunchTemplateData, error) { + opts := &ec2.RequestLaunchTemplateData{ + UserData: aws.String(d.Get("user_data").(string)), + } 
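+ // The remaining launch template arguments are optional: each is read from the
+ // resource data with d.GetOk (or parsed from its string form, as ebs_optimized is
+ // below) and only attached to the request when configured, so unset fields stay nil.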
+ + if v, ok := d.GetOk("image_id"); ok { + opts.ImageId = aws.String(v.(string)) + } + + if v, ok := d.GetOk("instance_initiated_shutdown_behavior"); ok { + opts.InstanceInitiatedShutdownBehavior = aws.String(v.(string)) + } + + instanceType := d.Get("instance_type").(string) + if instanceType != "" { + opts.InstanceType = aws.String(instanceType) + } + + if v, ok := d.GetOk("kernel_id"); ok { + opts.KernelId = aws.String(v.(string)) + } + + if v, ok := d.GetOk("key_name"); ok { + opts.KeyName = aws.String(v.(string)) + } + + if v, ok := d.GetOk("ram_disk_id"); ok { + opts.RamDiskId = aws.String(v.(string)) + } + + if v, ok := d.GetOk("disable_api_termination"); ok { + opts.DisableApiTermination = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("ebs_optimized"); ok && v.(string) != "" { + vBool, err := strconv.ParseBool(v.(string)) + if err != nil { + return nil, fmt.Errorf("error converting ebs_optimized %q from string to boolean: %s", v.(string), err) + } + opts.EbsOptimized = aws.Bool(vBool) + } + + if v, ok := d.GetOk("security_group_names"); ok { + opts.SecurityGroups = expandStringList(v.(*schema.Set).List()) + } + + if v, ok := d.GetOk("vpc_security_group_ids"); ok { + opts.SecurityGroupIds = expandStringList(v.(*schema.Set).List()) + } + + if v, ok := d.GetOk("block_device_mappings"); ok { + var blockDeviceMappings []*ec2.LaunchTemplateBlockDeviceMappingRequest + bdms := v.([]interface{}) + + for _, bdm := range bdms { + blockDeviceMapping, err := readBlockDeviceMappingFromConfig(bdm.(map[string]interface{})) + if err != nil { + return nil, err + } + blockDeviceMappings = append(blockDeviceMappings, blockDeviceMapping) + } + opts.BlockDeviceMappings = blockDeviceMappings + } + + if v, ok := d.GetOk("capacity_reservation_specification"); ok { + crs := v.([]interface{}) + + if len(crs) > 0 { + opts.CapacityReservationSpecification = readCapacityReservationSpecificationFromConfig(crs[0].(map[string]interface{})) + } + } + + if v, ok := d.GetOk("credit_specification"); ok && (strings.HasPrefix(instanceType, "t2") || strings.HasPrefix(instanceType, "t3")) { + cs := v.([]interface{}) + + if len(cs) > 0 { + opts.CreditSpecification = readCreditSpecificationFromConfig(cs[0].(map[string]interface{})) + } + } + + if v, ok := d.GetOk("elastic_gpu_specifications"); ok { + var elasticGpuSpecifications []*ec2.ElasticGpuSpecification + egsList := v.([]interface{}) + + for _, egs := range egsList { + elasticGpuSpecifications = append(elasticGpuSpecifications, readElasticGpuSpecificationsFromConfig(egs.(map[string]interface{}))) + } + opts.ElasticGpuSpecifications = elasticGpuSpecifications + } + + if v, ok := d.GetOk("iam_instance_profile"); ok { + iip := v.([]interface{}) + + if len(iip) > 0 { + opts.IamInstanceProfile = readIamInstanceProfileFromConfig(iip[0].(map[string]interface{})) + } + } + + if v, ok := d.GetOk("instance_market_options"); ok { + imo := v.([]interface{}) + + if len(imo) > 0 { + instanceMarketOptions, err := readInstanceMarketOptionsFromConfig(imo[0].(map[string]interface{})) + if err != nil { + return nil, err + } + opts.InstanceMarketOptions = instanceMarketOptions + } + } + + if v, ok := d.GetOk("monitoring"); ok { + m := v.([]interface{}) + if len(m) > 0 { + mData := m[0].(map[string]interface{}) + monitoring := &ec2.LaunchTemplatesMonitoringRequest{ + Enabled: aws.Bool(mData["enabled"].(bool)), + } + opts.Monitoring = monitoring + } + } + + if v, ok := d.GetOk("network_interfaces"); ok { + var networkInterfaces 
[]*ec2.LaunchTemplateInstanceNetworkInterfaceSpecificationRequest + niList := v.([]interface{}) + + for _, ni := range niList { + niData := ni.(map[string]interface{}) + networkInterface := readNetworkInterfacesFromConfig(niData) + networkInterfaces = append(networkInterfaces, networkInterface) + } + opts.NetworkInterfaces = networkInterfaces + } + + if v, ok := d.GetOk("placement"); ok { + p := v.([]interface{}) + + if len(p) > 0 { + opts.Placement = readPlacementFromConfig(p[0].(map[string]interface{})) + } + } + + if v, ok := d.GetOk("tag_specifications"); ok { + var tagSpecifications []*ec2.LaunchTemplateTagSpecificationRequest + t := v.([]interface{}) + + for _, ts := range t { + tsData := ts.(map[string]interface{}) + tags := tagsFromMap(tsData["tags"].(map[string]interface{})) + tagSpecification := &ec2.LaunchTemplateTagSpecificationRequest{ + ResourceType: aws.String(tsData["resource_type"].(string)), + Tags: tags, + } + tagSpecifications = append(tagSpecifications, tagSpecification) + } + opts.TagSpecifications = tagSpecifications + } + + return opts, nil +} + +func readBlockDeviceMappingFromConfig(bdm map[string]interface{}) (*ec2.LaunchTemplateBlockDeviceMappingRequest, error) { + blockDeviceMapping := &ec2.LaunchTemplateBlockDeviceMappingRequest{} + + if v := bdm["device_name"].(string); v != "" { + blockDeviceMapping.DeviceName = aws.String(v) + } + + if v := bdm["no_device"].(string); v != "" { + blockDeviceMapping.NoDevice = aws.String(v) + } + + if v := bdm["virtual_name"].(string); v != "" { + blockDeviceMapping.VirtualName = aws.String(v) + } + + if v := bdm["ebs"]; len(v.([]interface{})) > 0 { + ebs := v.([]interface{}) + if len(ebs) > 0 && ebs[0] != nil { + ebsData := ebs[0].(map[string]interface{}) + launchTemplateEbsBlockDeviceRequest, err := readEbsBlockDeviceFromConfig(ebsData) + if err != nil { + return nil, err + } + blockDeviceMapping.Ebs = launchTemplateEbsBlockDeviceRequest + } + } + + return blockDeviceMapping, nil +} + +func readEbsBlockDeviceFromConfig(ebs map[string]interface{}) (*ec2.LaunchTemplateEbsBlockDeviceRequest, error) { + ebsDevice := &ec2.LaunchTemplateEbsBlockDeviceRequest{} + + if v, ok := ebs["delete_on_termination"]; ok && v.(string) != "" { + vBool, err := strconv.ParseBool(v.(string)) + if err != nil { + return nil, fmt.Errorf("error converting delete_on_termination %q from string to boolean: %s", v.(string), err) + } + ebsDevice.DeleteOnTermination = aws.Bool(vBool) + } + + if v, ok := ebs["encrypted"]; ok && v.(string) != "" { + vBool, err := strconv.ParseBool(v.(string)) + if err != nil { + return nil, fmt.Errorf("error converting encrypted %q from string to boolean: %s", v.(string), err) + } + ebsDevice.Encrypted = aws.Bool(vBool) + } + + if v := ebs["iops"].(int); v > 0 { + ebsDevice.Iops = aws.Int64(int64(v)) + } + + if v := ebs["kms_key_id"].(string); v != "" { + ebsDevice.KmsKeyId = aws.String(v) + } + + if v := ebs["snapshot_id"].(string); v != "" { + ebsDevice.SnapshotId = aws.String(v) + } + + if v := ebs["volume_size"]; v != nil { + ebsDevice.VolumeSize = aws.Int64(int64(v.(int))) + } + + if v := ebs["volume_type"].(string); v != "" { + ebsDevice.VolumeType = aws.String(v) + } + + return ebsDevice, nil +} + +func readNetworkInterfacesFromConfig(ni map[string]interface{}) *ec2.LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + var ipv4Addresses []*ec2.PrivateIpAddressSpecification + var ipv6Addresses []*ec2.InstanceIpv6AddressRequest + var privateIpAddress string + networkInterface := 
&ec2.LaunchTemplateInstanceNetworkInterfaceSpecificationRequest{} + + if v, ok := ni["delete_on_termination"]; ok { + networkInterface.DeleteOnTermination = aws.Bool(v.(bool)) + } + + if v, ok := ni["description"].(string); ok && v != "" { + networkInterface.Description = aws.String(v) + } + + if v, ok := ni["device_index"].(int); ok { + networkInterface.DeviceIndex = aws.Int64(int64(v)) + } + + if v, ok := ni["network_interface_id"].(string); ok && v != "" { + networkInterface.NetworkInterfaceId = aws.String(v) + } else if v, ok := ni["associate_public_ip_address"]; ok { + networkInterface.AssociatePublicIpAddress = aws.Bool(v.(bool)) + } + + if v, ok := ni["private_ip_address"].(string); ok && v != "" { + privateIpAddress = v + networkInterface.PrivateIpAddress = aws.String(v) + } + + if v, ok := ni["subnet_id"].(string); ok && v != "" { + networkInterface.SubnetId = aws.String(v) + } + + if v := ni["security_groups"].(*schema.Set); v.Len() > 0 { + for _, v := range v.List() { + networkInterface.Groups = append(networkInterface.Groups, aws.String(v.(string))) + } + } + + ipv6AddressList := ni["ipv6_addresses"].(*schema.Set).List() + for _, address := range ipv6AddressList { + ipv6Addresses = append(ipv6Addresses, &ec2.InstanceIpv6AddressRequest{ + Ipv6Address: aws.String(address.(string)), + }) + } + networkInterface.Ipv6Addresses = ipv6Addresses + + if v := ni["ipv6_address_count"].(int); v > 0 { + networkInterface.Ipv6AddressCount = aws.Int64(int64(v)) + } + + if v := ni["ipv4_address_count"].(int); v > 0 { + networkInterface.SecondaryPrivateIpAddressCount = aws.Int64(int64(v)) + } else if v := ni["ipv4_addresses"].(*schema.Set); v.Len() > 0 { + for _, address := range v.List() { + privateIp := &ec2.PrivateIpAddressSpecification{ + Primary: aws.Bool(address.(string) == privateIpAddress), + PrivateIpAddress: aws.String(address.(string)), + } + ipv4Addresses = append(ipv4Addresses, privateIp) + } + networkInterface.PrivateIpAddresses = ipv4Addresses + } + + return networkInterface +} + +func readIamInstanceProfileFromConfig(iip map[string]interface{}) *ec2.LaunchTemplateIamInstanceProfileSpecificationRequest { + iamInstanceProfile := &ec2.LaunchTemplateIamInstanceProfileSpecificationRequest{} + + if v, ok := iip["arn"].(string); ok && v != "" { + iamInstanceProfile.Arn = aws.String(v) + } + + if v, ok := iip["name"].(string); ok && v != "" { + iamInstanceProfile.Name = aws.String(v) + } + + return iamInstanceProfile +} + +func readCapacityReservationSpecificationFromConfig(crs map[string]interface{}) *ec2.LaunchTemplateCapacityReservationSpecificationRequest { + capacityReservationSpecification := &ec2.LaunchTemplateCapacityReservationSpecificationRequest{} + + if v, ok := crs["capacity_reservation_preference"].(string); ok && v != "" { + capacityReservationSpecification.CapacityReservationPreference = aws.String(v) + } + + if v, ok := crs["capacity_reservation_target"]; ok { + crt := v.([]interface{}) + + if len(crt) > 0 { + capacityReservationSpecification.CapacityReservationTarget = readCapacityReservationTargetFromConfig(crt[0].(map[string]interface{})) + } + } + + return capacityReservationSpecification +} + +func readCapacityReservationTargetFromConfig(crt map[string]interface{}) *ec2.CapacityReservationTarget { + capacityReservationTarget := &ec2.CapacityReservationTarget{} + + if v, ok := crt["capacity_reservation_id"].(string); ok && v != "" { + capacityReservationTarget.CapacityReservationId = aws.String(v) + } + + return capacityReservationTarget +} + +func 
readCreditSpecificationFromConfig(cs map[string]interface{}) *ec2.CreditSpecificationRequest { + creditSpecification := &ec2.CreditSpecificationRequest{} + + if v, ok := cs["cpu_credits"].(string); ok && v != "" { + creditSpecification.CpuCredits = aws.String(v) + } + + return creditSpecification +} + +func readElasticGpuSpecificationsFromConfig(egs map[string]interface{}) *ec2.ElasticGpuSpecification { + elasticGpuSpecification := &ec2.ElasticGpuSpecification{} + + if v, ok := egs["type"].(string); ok && v != "" { + elasticGpuSpecification.Type = aws.String(v) + } + + return elasticGpuSpecification +} + +func readInstanceMarketOptionsFromConfig(imo map[string]interface{}) (*ec2.LaunchTemplateInstanceMarketOptionsRequest, error) { + instanceMarketOptions := &ec2.LaunchTemplateInstanceMarketOptionsRequest{} + spotOptions := &ec2.LaunchTemplateSpotMarketOptionsRequest{} + + if v, ok := imo["market_type"].(string); ok && v != "" { + instanceMarketOptions.MarketType = aws.String(v) + } + + if v, ok := imo["spot_options"]; ok { + vL := v.([]interface{}) + for _, v := range vL { + so := v.(map[string]interface{}) + + if v, ok := so["block_duration_minutes"].(int); ok && v != 0 { + spotOptions.BlockDurationMinutes = aws.Int64(int64(v)) + } + + if v, ok := so["instance_interruption_behavior"].(string); ok && v != "" { + spotOptions.InstanceInterruptionBehavior = aws.String(v) + } + + if v, ok := so["max_price"].(string); ok && v != "" { + spotOptions.MaxPrice = aws.String(v) + } + + if v, ok := so["spot_instance_type"].(string); ok && v != "" { + spotOptions.SpotInstanceType = aws.String(v) + } + + if v, ok := so["valid_until"].(string); ok && v != "" { + t, err := time.Parse(time.RFC3339, v) + if err != nil { + return nil, fmt.Errorf("Error Parsing Launch Template Spot Options valid until: %s", err.Error()) + } + spotOptions.ValidUntil = aws.Time(t) + } + } + instanceMarketOptions.SpotOptions = spotOptions + } + + return instanceMarketOptions, nil +} + +func readPlacementFromConfig(p map[string]interface{}) *ec2.LaunchTemplatePlacementRequest { + placement := &ec2.LaunchTemplatePlacementRequest{} + + if v, ok := p["affinity"].(string); ok && v != "" { + placement.Affinity = aws.String(v) + } + + if v, ok := p["availability_zone"].(string); ok && v != "" { + placement.AvailabilityZone = aws.String(v) + } + + if v, ok := p["group_name"].(string); ok && v != "" { + placement.GroupName = aws.String(v) + } + + if v, ok := p["host_id"].(string); ok && v != "" { + placement.HostId = aws.String(v) + } + + if v, ok := p["spread_domain"].(string); ok && v != "" { + placement.SpreadDomain = aws.String(v) + } + + if v, ok := p["tenancy"].(string); ok && v != "" { + placement.Tenancy = aws.String(v) + } + + return placement +} diff --git a/aws/resource_aws_launch_template_test.go b/aws/resource_aws_launch_template_test.go new file mode 100644 index 00000000000..4bd64d29da3 --- /dev/null +++ b/aws/resource_aws_launch_template_test.go @@ -0,0 +1,1051 @@ +package aws + +import ( + "fmt" + "log" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSLaunchTemplate_importBasic(t *testing.T) { + resName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: 
func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_basic(rInt), + }, + { + ResourceName: resName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_importData(t *testing.T) { + resName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_data(rInt), + }, + { + ResourceName: resName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_basic(t *testing.T) { + var template ec2.LaunchTemplate + resName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "default_version", "1"), + resource.TestCheckResourceAttr(resName, "latest_version", "1"), + resource.TestCheckResourceAttrSet(resName, "arn"), + resource.TestCheckResourceAttr(resName, "ebs_optimized", ""), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_disappears(t *testing.T) { + var launchTemplate ec2.LaunchTemplate + resourceName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &launchTemplate), + testAccCheckAWSLaunchTemplateDisappears(&launchTemplate), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_BlockDeviceMappings_EBS(t *testing.T) { + var template ec2.LaunchTemplate + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_launch_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_BlockDeviceMappings_EBS(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &template), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.#", "1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.device_name", "/dev/sda1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.ebs.#", "1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.ebs.0.volume_size", "15"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_BlockDeviceMappings_EBS_DeleteOnTermination(t *testing.T) { + var template ec2.LaunchTemplate + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_launch_template.test" + + 
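+ // The first two steps flip delete_on_termination from true to false and assert the
+ // change is reflected in the EBS mapping; the final step import-verifies the result.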
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_BlockDeviceMappings_EBS_DeleteOnTermination(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &template), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.#", "1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.device_name", "/dev/sda1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.ebs.#", "1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.ebs.0.delete_on_termination", "true"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.ebs.0.volume_size", "15"), + ), + }, + { + Config: testAccAWSLaunchTemplateConfig_BlockDeviceMappings_EBS_DeleteOnTermination(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &template), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.#", "1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.device_name", "/dev/sda1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.ebs.#", "1"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.ebs.0.delete_on_termination", "false"), + resource.TestCheckResourceAttr(resourceName, "block_device_mappings.0.ebs.0.volume_size", "15"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_EbsOptimized(t *testing.T) { + var template ec2.LaunchTemplate + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_launch_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_EbsOptimized(rName, "true"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &template), + resource.TestCheckResourceAttr(resourceName, "ebs_optimized", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSLaunchTemplateConfig_EbsOptimized(rName, "false"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &template), + resource.TestCheckResourceAttr(resourceName, "ebs_optimized", "false"), + ), + }, + { + Config: testAccAWSLaunchTemplateConfig_EbsOptimized(rName, "\"true\""), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &template), + resource.TestCheckResourceAttr(resourceName, "ebs_optimized", "true"), + ), + }, + { + Config: testAccAWSLaunchTemplateConfig_EbsOptimized(rName, "\"false\""), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &template), + resource.TestCheckResourceAttr(resourceName, "ebs_optimized", "false"), + ), + }, + { + Config: testAccAWSLaunchTemplateConfig_EbsOptimized(rName, "\"\""), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resourceName, &template), + resource.TestCheckResourceAttr(resourceName, "ebs_optimized", ""), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_data(t *testing.T) { 
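+ // Exercises a launch template with most optional blocks populated (block device
+ // mappings, elastic GPU specification, IAM instance profile, market options,
+ // monitoring, network interfaces, placement, and tag specifications) and checks
+ // that each attribute is recorded in state.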
+ var template ec2.LaunchTemplate + resName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_data(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "block_device_mappings.#", "1"), + resource.TestCheckResourceAttrSet(resName, "disable_api_termination"), + resource.TestCheckResourceAttr(resName, "ebs_optimized", "false"), + resource.TestCheckResourceAttr(resName, "elastic_gpu_specifications.#", "1"), + resource.TestCheckResourceAttr(resName, "iam_instance_profile.#", "1"), + resource.TestCheckResourceAttrSet(resName, "image_id"), + resource.TestCheckResourceAttrSet(resName, "instance_initiated_shutdown_behavior"), + resource.TestCheckResourceAttr(resName, "instance_market_options.#", "1"), + resource.TestCheckResourceAttrSet(resName, "instance_type"), + resource.TestCheckResourceAttrSet(resName, "kernel_id"), + resource.TestCheckResourceAttrSet(resName, "key_name"), + resource.TestCheckResourceAttr(resName, "monitoring.#", "1"), + resource.TestCheckResourceAttr(resName, "network_interfaces.#", "1"), + resource.TestCheckResourceAttr(resName, "network_interfaces.0.security_groups.#", "1"), + resource.TestCheckResourceAttr(resName, "placement.#", "1"), + resource.TestCheckResourceAttrSet(resName, "ram_disk_id"), + resource.TestCheckResourceAttr(resName, "vpc_security_group_ids.#", "1"), + resource.TestCheckResourceAttr(resName, "tag_specifications.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_update(t *testing.T) { + var template ec2.LaunchTemplate + resName := "aws_launch_template.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_asg_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "default_version", "1"), + resource.TestCheckResourceAttr(resName, "latest_version", "1"), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "launch_template.0.version", "1"), + ), + }, + { + Config: testAccAWSLaunchTemplateConfig_asg_update, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "default_version", "1"), + resource.TestCheckResourceAttr(resName, "latest_version", "2"), + resource.TestCheckResourceAttrSet(resName, "instance_type"), + resource.TestCheckResourceAttr( + "aws_autoscaling_group.bar", "launch_template.0.version", "2"), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_tags(t *testing.T) { + var template ec2.LaunchTemplate + resName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + testAccCheckTags(&template.Tags, "foo", "bar"), + ), + }, + { + 
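+ // Second step: the "foo" tag is removed and "bar" = "baz" is added, confirming
+ // tag changes are applied on update.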
Config: testAccAWSLaunchTemplateConfig_tagsUpdate(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + testAccCheckTags(&template.Tags, "foo", ""), + testAccCheckTags(&template.Tags, "bar", "baz"), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_capacityReservation_preference(t *testing.T) { + var template ec2.LaunchTemplate + resName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_capacityReservation_preference(rInt, "open"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_capacityReservation_target(t *testing.T) { + var template ec2.LaunchTemplate + resName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_capacityReservation_target(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_creditSpecification_nonBurstable(t *testing.T) { + var template ec2.LaunchTemplate + rName := acctest.RandomWithPrefix("tf-acc-test") + resName := "aws_launch_template.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_creditSpecification(rName, "m1.small", "standard"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_creditSpecification_t2(t *testing.T) { + var template ec2.LaunchTemplate + rName := acctest.RandomWithPrefix("tf-acc-test") + resName := "aws_launch_template.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_creditSpecification(rName, "t2.micro", "unlimited"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), + resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_creditSpecification_t3(t *testing.T) { + var template ec2.LaunchTemplate + rName := acctest.RandomWithPrefix("tf-acc-test") + resName := "aws_launch_template.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_creditSpecification(rName, "t3.micro", "unlimited"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "credit_specification.#", "1"), 
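+ // cpu_credits only applies to burstable (t2/t3) instance types; buildLaunchTemplateData
+ // skips credit_specification for other instance families.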
+ resource.TestCheckResourceAttr(resName, "credit_specification.0.cpu_credits", "unlimited"), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_networkInterface(t *testing.T) { + var template ec2.LaunchTemplate + resName := "aws_launch_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_networkInterface, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "network_interfaces.#", "1"), + resource.TestCheckResourceAttrSet(resName, "network_interfaces.0.network_interface_id"), + resource.TestCheckResourceAttr(resName, "network_interfaces.0.associate_public_ip_address", "false"), + resource.TestCheckResourceAttr(resName, "network_interfaces.0.ipv4_address_count", "2"), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_networkInterface_ipv6Addresses(t *testing.T) { + var template ec2.LaunchTemplate + resName := "aws_launch_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_networkInterface_ipv6Addresses, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "network_interfaces.#", "1"), + resource.TestCheckResourceAttr(resName, "network_interfaces.0.ipv6_addresses.#", "2"), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_networkInterface_ipv6AddressCount(t *testing.T) { + var template ec2.LaunchTemplate + resName := "aws_launch_template.foo" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_ipv6_count(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(resName, &template), + resource.TestCheckResourceAttr(resName, "network_interfaces.#", "1"), + resource.TestCheckResourceAttr(resName, "network_interfaces.0.ipv6_address_count", "1"), + ), + }, + }, + }) +} + +func TestAccAWSLaunchTemplate_instanceMarketOptions(t *testing.T) { + var template ec2.LaunchTemplate + var group autoscaling.Group + templateName := "aws_launch_template.test" + groupName := "aws_autoscaling_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchTemplateConfig_instanceMarketOptions_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(templateName, &template), + testAccCheckAWSAutoScalingGroupExists(groupName, &group), + resource.TestCheckResourceAttr(templateName, "instance_market_options.#", "1"), + resource.TestCheckResourceAttr(templateName, "instance_market_options.0.spot_options.#", "1"), + resource.TestCheckResourceAttr(groupName, "launch_template.#", "1"), + resource.TestCheckResourceAttr(groupName, "launch_template.0.version", "1"), + ), + }, + { + Config: testAccAWSLaunchTemplateConfig_instanceMarketOptions_update, + Check: 
resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchTemplateExists(templateName, &template), + testAccCheckAWSAutoScalingGroupExists(groupName, &group), + resource.TestCheckResourceAttr(templateName, "instance_market_options.#", "1"), + resource.TestCheckResourceAttr(templateName, "instance_market_options.0.spot_options.#", "1"), + resource.TestCheckResourceAttr(groupName, "launch_template.#", "1"), + resource.TestCheckResourceAttr(groupName, "launch_template.0.version", "2"), + ), + }, + }, + }) +} + +func testAccCheckAWSLaunchTemplateExists(n string, t *ec2.LaunchTemplate) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Launch Template ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + resp, err := conn.DescribeLaunchTemplates(&ec2.DescribeLaunchTemplatesInput{ + LaunchTemplateIds: []*string{aws.String(rs.Primary.ID)}, + }) + if err != nil { + return err + } + + if len(resp.LaunchTemplates) != 1 || *resp.LaunchTemplates[0].LaunchTemplateId != rs.Primary.ID { + return fmt.Errorf("Launch Template not found") + } + + *t = *resp.LaunchTemplates[0] + + return nil + } +} + +func testAccCheckAWSLaunchTemplateDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_launch_template" { + continue + } + + resp, err := conn.DescribeLaunchTemplates(&ec2.DescribeLaunchTemplatesInput{ + LaunchTemplateIds: []*string{aws.String(rs.Primary.ID)}, + }) + + if err == nil { + if len(resp.LaunchTemplates) != 0 && *resp.LaunchTemplates[0].LaunchTemplateId == rs.Primary.ID { + return fmt.Errorf("Launch Template still exists") + } + } + + if isAWSErr(err, "InvalidLaunchTemplateId.NotFound", "") { + log.Printf("[WARN] launch template (%s) not found.", rs.Primary.ID) + continue + } + return err + } + + return nil +} + +func testAccCheckAWSLaunchTemplateDisappears(launchTemplate *ec2.LaunchTemplate) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + input := &ec2.DeleteLaunchTemplateInput{ + LaunchTemplateId: launchTemplate.LaunchTemplateId, + } + + _, err := conn.DeleteLaunchTemplate(input) + + return err + } +} + +func testAccAWSLaunchTemplateConfig_basic(rInt int) string { + return fmt.Sprintf(` +resource "aws_launch_template" "foo" { + name = "foo_%d" + + tags { + foo = "bar" + } +} +`, rInt) +} + +func testAccAWSLaunchTemplateConfig_ipv6_count(rInt int) string { + return fmt.Sprintf(` +resource "aws_launch_template" "foo" { + name = "set_ipv6_count_foo_%d" + + network_interfaces { + ipv6_address_count = 1 + } +} +`, rInt) +} + +func testAccAWSLaunchTemplateConfig_BlockDeviceMappings_EBS(rName string) string { + return fmt.Sprintf(` +data "aws_ami" "test" { + most_recent = true + owners = ["099720109477"] # Canonical + + filter { + name = "name" + values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } +} + +data "aws_availability_zones" "available" {} + +resource "aws_launch_template" "test" { + image_id = "${data.aws_ami.test.id}" + name = %q + + block_device_mappings { + device_name = "/dev/sda1" + + ebs { + volume_size = 15 + } + } +} + +# Creating an AutoScaling Group verifies the launch template +# ValidationError: You must use a valid fully-formed launch 
template. the encrypted flag cannot be specified since device /dev/sda1 has a snapshot specified. +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + launch_template { + id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.default_version}" + } +} +`, rName, rName) +} + +func testAccAWSLaunchTemplateConfig_BlockDeviceMappings_EBS_DeleteOnTermination(rName string, deleteOnTermination bool) string { + return fmt.Sprintf(` +data "aws_ami" "test" { + most_recent = true + owners = ["099720109477"] # Canonical + + filter { + name = "name" + values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } +} + +data "aws_availability_zones" "available" {} + +resource "aws_launch_template" "test" { + image_id = "${data.aws_ami.test.id}" + name = %q + + block_device_mappings { + device_name = "/dev/sda1" + + ebs { + delete_on_termination = %t + volume_size = 15 + } + } +} + +# Creating an AutoScaling Group verifies the launch template +# ValidationError: You must use a valid fully-formed launch template. the encrypted flag cannot be specified since device /dev/sda1 has a snapshot specified. +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + name = %q + + launch_template { + id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.default_version}" + } +} +`, rName, deleteOnTermination, rName) +} + +func testAccAWSLaunchTemplateConfig_EbsOptimized(rName, ebsOptimized string) string { + return fmt.Sprintf(` +resource "aws_launch_template" "test" { + ebs_optimized = %s # allows "", false, true, "false", "true" values + name = %q +} +`, ebsOptimized, rName) +} + +func testAccAWSLaunchTemplateConfig_data(rInt int) string { + return fmt.Sprintf(` +resource "aws_launch_template" "foo" { + name = "foo_%d" + + block_device_mappings { + device_name = "test" + } + + disable_api_termination = true + + ebs_optimized = false + + elastic_gpu_specifications { + type = "test" + } + + iam_instance_profile { + name = "test" + } + + image_id = "ami-12a3b456" + + instance_initiated_shutdown_behavior = "terminate" + + instance_market_options { + market_type = "spot" + } + + instance_type = "t2.micro" + + kernel_id = "aki-a12bc3de" + + key_name = "test" + + monitoring { + enabled = true + } + + network_interfaces { + network_interface_id = "eni-123456ab" + security_groups = ["sg-1a23bc45"] + } + + placement { + availability_zone = "us-west-2b" + } + + ram_disk_id = "ari-a12bc3de" + + vpc_security_group_ids = ["sg-12a3b45c"] + + tag_specifications { + resource_type = "instance" + tags { + Name = "test" + } + } +} +`, rInt) +} + +func testAccAWSLaunchTemplateConfig_tagsUpdate(rInt int) string { + return fmt.Sprintf(` +resource "aws_launch_template" "foo" { + name = "foo_%d" + + tags { + bar = "baz" + } +} +`, rInt) +} + +func testAccAWSLaunchTemplateConfig_capacityReservation_preference(rInt int, preference string) string { + return fmt.Sprintf(` +resource "aws_launch_template" "foo" { + name = "foo_%d" + + capacity_reservation_specification { + capacity_reservation_preference = %q + } +} +`, rInt, preference) +} + +func testAccAWSLaunchTemplateConfig_capacityReservation_target(rInt int) string { + return fmt.Sprintf(` +data 
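
The `%s`-based generator for `ebs_optimized` above deliberately splices the value in unquoted. A small standalone sketch (the function name and values here are illustrative, not part of the patch) of why that lets one generator cover bare booleans, quoted booleans, and the empty string:

```go
package main

import "fmt"

// launchTemplateEbsOptimizedConfig mirrors the shape of the generator above:
// ebsOptimized is spliced in verbatim with %s, while the name is quoted with %q.
func launchTemplateEbsOptimizedConfig(rName, ebsOptimized string) string {
	return fmt.Sprintf(`
resource "aws_launch_template" "test" {
  ebs_optimized = %s
  name          = %q
}
`, ebsOptimized, rName)
}

func main() {
	fmt.Print(launchTemplateEbsOptimizedConfig("tf-acc-test", "true"))   // bare bool
	fmt.Print(launchTemplateEbsOptimizedConfig("tf-acc-test", `"true"`)) // quoted string
	fmt.Print(launchTemplateEbsOptimizedConfig("tf-acc-test", `""`))     // empty string
}
```
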
"aws_availability_zones" "available" {} + +resource "aws_ec2_capacity_reservation" "test" { + availability_zone = "${data.aws_availability_zones.available.names[0]}" + instance_count = 1 + instance_platform = "Linux/UNIX" + instance_type = "t2.micro" +} + +resource "aws_launch_template" "foo" { + name = "foo_%d" + + capacity_reservation_specification { + capacity_reservation_target { + capacity_reservation_id = "${aws_ec2_capacity_reservation.test.id}" + } + } +} +`, rInt) +} + +func testAccAWSLaunchTemplateConfig_creditSpecification(rName, instanceType, cpuCredits string) string { + return fmt.Sprintf(` +resource "aws_launch_template" "foo" { + instance_type = %q + name = %q + + credit_specification { + cpu_credits = %q + } +} +`, instanceType, rName, cpuCredits) +} + +const testAccAWSLaunchTemplateConfig_networkInterface = ` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_subnet" "test" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.1.0.0/24" +} + +resource "aws_network_interface" "test" { + subnet_id = "${aws_subnet.test.id}" +} + +resource "aws_launch_template" "test" { + name = "network-interface-launch-template" + + network_interfaces { + network_interface_id = "${aws_network_interface.test.id}" + ipv4_address_count = 2 + } +} +` + +const testAccAWSLaunchTemplateConfig_networkInterface_ipv6Addresses = ` +resource "aws_launch_template" "test" { + name = "network-interface-ipv6-addresses-launch-template" + + network_interfaces { + ipv6_addresses = [ + "0:0:0:0:0:ffff:a01:5", + "0:0:0:0:0:ffff:a01:6", + ] + } +} +` + +const testAccAWSLaunchTemplateConfig_asg_basic = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "foo" { + name_prefix = "foobar" + image_id = "${data.aws_ami.test_ami.id}" +} + +data "aws_availability_zones" "available" {} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + launch_template = { + id = "${aws_launch_template.foo.id}" + version = "${aws_launch_template.foo.latest_version}" + } +} +` + +const testAccAWSLaunchTemplateConfig_asg_update = ` +data "aws_ami" "test_ami" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "foo" { + name_prefix = "foobar" + image_id = "${data.aws_ami.test_ami.id}" + instance_type = "t2.nano" +} + +data "aws_availability_zones" "available" {} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + max_size = 0 + min_size = 0 + launch_template = { + id = "${aws_launch_template.foo.id}" + version = "${aws_launch_template.foo.latest_version}" + } +} +` + +const testAccAWSLaunchTemplateConfig_instanceMarketOptions_basic = ` +data "aws_ami" "test" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "test" { + name_prefix = "instance_market_options" + image_id = "${data.aws_ami.test.id}" + + instance_market_options { + market_type = "spot" + spot_options { + spot_instance_type = "one-time" + } + } +} + +data "aws_availability_zones" 
"available" {} + +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + min_size = 0 + max_size = 0 + + launch_template { + id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } +} +` + +const testAccAWSLaunchTemplateConfig_instanceMarketOptions_update = ` +data "aws_ami" "test" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*-x86_64-gp2"] + } +} + +resource "aws_launch_template" "test" { + name_prefix = "instance_market_options" + image_id = "${data.aws_ami.test.id}" + instance_type = "t2.micro" + + instance_market_options { + market_type = "spot" + spot_options { + spot_instance_type = "one-time" + } + } +} + +data "aws_availability_zones" "available" {} + +resource "aws_autoscaling_group" "test" { + availability_zones = ["${data.aws_availability_zones.available.names[0]}"] + desired_capacity = 0 + min_size = 0 + max_size = 0 + + launch_template { + id = "${aws_launch_template.test.id}" + version = "${aws_launch_template.test.latest_version}" + } +} +` diff --git a/aws/resource_aws_lb.go b/aws/resource_aws_lb.go index f6d978b34ed..5add0801f2d 100644 --- a/aws/resource_aws_lb.go +++ b/aws/resource_aws_lb.go @@ -1,18 +1,16 @@ package aws import ( + "bytes" "fmt" "log" "regexp" "strconv" "time" - "bytes" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -57,10 +55,11 @@ func resourceAwsLb() *schema.Resource { }, "name_prefix": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: validateElbNamePrefix, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateElbNamePrefix, }, "internal": { @@ -96,16 +95,19 @@ func resourceAwsLb() *schema.Resource { "subnet_mapping": { Type: schema.TypeSet, Optional: true, + Computed: true, ForceNew: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "subnet_id": { Type: schema.TypeString, Required: true, + ForceNew: true, }, "allocation_id": { Type: schema.TypeString, Optional: true, + ForceNew: true, }, }, }, @@ -121,25 +123,27 @@ func resourceAwsLb() *schema.Resource { }, "access_logs": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + DiffSuppressFunc: suppressIfLBType("network"), Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "bucket": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: suppressIfLBType("network"), }, "prefix": { - Type: schema.TypeString, - Optional: true, - Computed: true, + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: suppressIfLBType("network"), }, "enabled": { - Type: schema.TypeBool, - Optional: true, - Computed: true, + Type: schema.TypeBool, + Optional: true, + DiffSuppressFunc: suppressIfLBType("network"), }, }, }, @@ -152,21 +156,24 @@ func resourceAwsLb() *schema.Resource { }, "idle_timeout": { - Type: schema.TypeInt, - Optional: true, - Default: 60, + Type: schema.TypeInt, + Optional: true, + Default: 60, + 
DiffSuppressFunc: suppressIfLBType("network"), }, "enable_cross_zone_load_balancing": { - Type: schema.TypeBool, - Optional: true, - Default: false, + Type: schema.TypeBool, + Optional: true, + Default: false, + DiffSuppressFunc: suppressIfLBType("application"), }, "enable_http2": { - Type: schema.TypeBool, - Optional: true, - Default: true, + Type: schema.TypeBool, + Optional: true, + Default: true, + DiffSuppressFunc: suppressIfLBType("network"), }, "ip_address_type": { @@ -195,6 +202,12 @@ func resourceAwsLb() *schema.Resource { } } +func suppressIfLBType(t string) schema.SchemaDiffSuppressFunc { + return func(k string, old string, new string, d *schema.ResourceData) bool { + return d.Get("load_balancer_type").(string) == t + } +} + func resourceAwsLbCreate(d *schema.ResourceData, meta interface{}) error { elbconn := meta.(*AWSClient).elbv2conn @@ -250,7 +263,7 @@ func resourceAwsLbCreate(d *schema.ResourceData, meta interface{}) error { resp, err := elbconn.CreateLoadBalancer(elbOpts) if err != nil { - return errwrap.Wrapf("Error creating Application Load Balancer: {{err}}", err) + return fmt.Errorf("Error creating Application Load Balancer: %s", err) } if len(resp.LoadBalancers) != 1 { @@ -258,7 +271,7 @@ func resourceAwsLbCreate(d *schema.ResourceData, meta interface{}) error { } lb := resp.LoadBalancers[0] - d.SetId(*lb.LoadBalancerArn) + d.SetId(aws.StringValue(lb.LoadBalancerArn)) log.Printf("[INFO] LB ID: %s", d.Id()) stateConf := &resource.StateChangeConf{ @@ -273,13 +286,13 @@ func resourceAwsLbCreate(d *schema.ResourceData, meta interface{}) error { } if len(describeResp.LoadBalancers) != 1 { - return nil, "", fmt.Errorf("No load balancers returned for %s", *lb.LoadBalancerArn) + return nil, "", fmt.Errorf("No load balancers returned for %s", aws.StringValue(lb.LoadBalancerArn)) } dLb := describeResp.LoadBalancers[0] - log.Printf("[INFO] LB state: %s", *dLb.State.Code) + log.Printf("[INFO] LB state: %s", aws.StringValue(dLb.State.Code)) - return describeResp, *dLb.State.Code, nil + return describeResp, aws.StringValue(dLb.State.Code), nil }, Timeout: d.Timeout(schema.TimeoutCreate), MinTimeout: 10 * time.Second, @@ -310,7 +323,7 @@ func resourceAwsLbRead(d *schema.ResourceData, meta interface{}) error { return nil } - return errwrap.Wrapf("Error retrieving ALB: {{err}}", err) + return fmt.Errorf("Error retrieving ALB: %s", err) } if len(describeResp.LoadBalancers) != 1 { return fmt.Errorf("Unable to find ALB: %#v", describeResp.LoadBalancers) @@ -324,7 +337,7 @@ func resourceAwsLbUpdate(d *schema.ResourceData, meta interface{}) error { if !d.IsNewResource() { if err := setElbV2Tags(elbconn, d); err != nil { - return errwrap.Wrapf("Error Modifying Tags on ALB: {{err}}", err) + return fmt.Errorf("Error Modifying Tags on ALB: %s", err) } } @@ -332,7 +345,7 @@ func resourceAwsLbUpdate(d *schema.ResourceData, meta interface{}) error { switch d.Get("load_balancer_type").(string) { case "application": - if d.HasChange("access_logs") { + if d.HasChange("access_logs") || d.IsNewResource() { logs := d.Get("access_logs").([]interface{}) if len(logs) == 1 { log := logs[0].(map[string]interface{}) @@ -345,14 +358,11 @@ func resourceAwsLbUpdate(d *schema.ResourceData, meta interface{}) error { &elbv2.LoadBalancerAttribute{ Key: aws.String("access_logs.s3.bucket"), Value: aws.String(log["bucket"].(string)), - }) - - if prefix, ok := log["prefix"]; ok { - attributes = append(attributes, &elbv2.LoadBalancerAttribute{ + }, + &elbv2.LoadBalancerAttribute{ Key: 
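
The `suppressIfLBType` helper introduced above is a `schema.SchemaDiffSuppressFunc`: it is attached per attribute and, when the configured load balancer type matches, tells the diff engine to treat any change to that attribute as no change (for example, `idle_timeout` has no meaning for network load balancers). A self-contained sketch of that behavior, using a local copy of the helper and `schema.TestResourceDataRaw` so it compiles on its own:

```go
package example

import (
	"testing"

	"github.com/hashicorp/terraform/helper/schema"
)

// suppressForLBType is a local copy of the pattern above: when
// load_balancer_type equals t, diffs on the attribute are suppressed.
func suppressForLBType(t string) schema.SchemaDiffSuppressFunc {
	return func(k, old, new string, d *schema.ResourceData) bool {
		return d.Get("load_balancer_type").(string) == t
	}
}

func TestSuppressForLBTypeSketch(t *testing.T) {
	s := map[string]*schema.Schema{
		"load_balancer_type": {Type: schema.TypeString, Optional: true},
	}
	d := schema.TestResourceDataRaw(t, s, map[string]interface{}{
		"load_balancer_type": "network",
	})

	// idle_timeout only applies to ALBs, so a 60 -> 30 change on an NLB is ignored.
	if !suppressForLBType("network")("idle_timeout", "60", "30", d) {
		t.Fatal("expected the diff to be suppressed for network load balancers")
	}
}
```
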
aws.String("access_logs.s3.prefix"), - Value: aws.String(prefix.(string)), + Value: aws.String(log["prefix"].(string)), }) - } } else if len(logs) == 0 { attributes = append(attributes, &elbv2.LoadBalancerAttribute{ Key: aws.String("access_logs.s3.enabled"), @@ -360,20 +370,20 @@ func resourceAwsLbUpdate(d *schema.ResourceData, meta interface{}) error { }) } } - if d.HasChange("idle_timeout") { + if d.HasChange("idle_timeout") || d.IsNewResource() { attributes = append(attributes, &elbv2.LoadBalancerAttribute{ Key: aws.String("idle_timeout.timeout_seconds"), Value: aws.String(fmt.Sprintf("%d", d.Get("idle_timeout").(int))), }) } - if d.HasChange("enable_http2") { + if d.HasChange("enable_http2") || d.IsNewResource() { attributes = append(attributes, &elbv2.LoadBalancerAttribute{ Key: aws.String("routing.http2.enabled"), Value: aws.String(strconv.FormatBool(d.Get("enable_http2").(bool))), }) } case "network": - if d.HasChange("enable_cross_zone_load_balancing") { + if d.HasChange("enable_cross_zone_load_balancing") || d.IsNewResource() { attributes = append(attributes, &elbv2.LoadBalancerAttribute{ Key: aws.String("load_balancing.cross_zone.enabled"), Value: aws.String(fmt.Sprintf("%t", d.Get("enable_cross_zone_load_balancing").(bool))), @@ -381,7 +391,7 @@ func resourceAwsLbUpdate(d *schema.ResourceData, meta interface{}) error { } } - if d.HasChange("enable_deletion_protection") { + if d.HasChange("enable_deletion_protection") || d.IsNewResource() { attributes = append(attributes, &elbv2.LoadBalancerAttribute{ Key: aws.String("deletion_protection.enabled"), Value: aws.String(fmt.Sprintf("%t", d.Get("enable_deletion_protection").(bool))), @@ -463,9 +473,9 @@ func resourceAwsLbUpdate(d *schema.ResourceData, meta interface{}) error { } dLb := describeResp.LoadBalancers[0] - log.Printf("[INFO] LB state: %s", *dLb.State.Code) + log.Printf("[INFO] LB state: %s", aws.StringValue(dLb.State.Code)) - return describeResp, *dLb.State.Code, nil + return describeResp, aws.StringValue(dLb.State.Code), nil }, Timeout: d.Timeout(schema.TimeoutUpdate), MinTimeout: 10 * time.Second, @@ -612,11 +622,28 @@ func getLbNameFromArn(arn string) (string, error) { func flattenSubnetsFromAvailabilityZones(availabilityZones []*elbv2.AvailabilityZone) []string { var result []string for _, az := range availabilityZones { - result = append(result, *az.SubnetId) + result = append(result, aws.StringValue(az.SubnetId)) } return result } +func flattenSubnetMappingsFromAvailabilityZones(availabilityZones []*elbv2.AvailabilityZone) []map[string]interface{} { + l := make([]map[string]interface{}, 0) + for _, availabilityZone := range availabilityZones { + for _, loadBalancerAddress := range availabilityZone.LoadBalancerAddresses { + m := make(map[string]interface{}, 0) + m["subnet_id"] = aws.StringValue(availabilityZone.SubnetId) + + if loadBalancerAddress.AllocationId != nil { + m["allocation_id"] = aws.StringValue(loadBalancerAddress.AllocationId) + } + + l = append(l, m) + } + } + return l +} + func lbSuffixFromARN(arn *string) string { if arn == nil { return "" @@ -638,37 +665,27 @@ func flattenAwsLbResource(d *schema.ResourceData, meta interface{}, lb *elbv2.Lo d.Set("arn", lb.LoadBalancerArn) d.Set("arn_suffix", lbSuffixFromARN(lb.LoadBalancerArn)) d.Set("name", lb.LoadBalancerName) - d.Set("internal", (lb.Scheme != nil && *lb.Scheme == "internal")) + d.Set("internal", (lb.Scheme != nil && aws.StringValue(lb.Scheme) == "internal")) d.Set("security_groups", flattenStringList(lb.SecurityGroups)) - d.Set("subnets", 
flattenSubnetsFromAvailabilityZones(lb.AvailabilityZones)) d.Set("vpc_id", lb.VpcId) d.Set("zone_id", lb.CanonicalHostedZoneId) d.Set("dns_name", lb.DNSName) d.Set("ip_address_type", lb.IpAddressType) d.Set("load_balancer_type", lb.Type) - subnetMappings := make([]interface{}, 0) - for _, az := range lb.AvailabilityZones { - subnetMappingRaw := make([]map[string]interface{}, len(az.LoadBalancerAddresses)) - for _, subnet := range az.LoadBalancerAddresses { - subnetMap := make(map[string]interface{}, 0) - subnetMap["subnet_id"] = *az.SubnetId - - if subnet.AllocationId != nil { - subnetMap["allocation_id"] = *subnet.AllocationId - } + if err := d.Set("subnets", flattenSubnetsFromAvailabilityZones(lb.AvailabilityZones)); err != nil { + return fmt.Errorf("error setting subnets: %s", err) + } - subnetMappingRaw = append(subnetMappingRaw, subnetMap) - } - subnetMappings = append(subnetMappings, subnetMappingRaw) + if err := d.Set("subnet_mapping", flattenSubnetMappingsFromAvailabilityZones(lb.AvailabilityZones)); err != nil { + return fmt.Errorf("error setting subnet_mapping: %s", err) } - d.Set("subnet_mapping", subnetMappings) respTags, err := elbconn.DescribeTags(&elbv2.DescribeTagsInput{ ResourceArns: []*string{lb.LoadBalancerArn}, }) if err != nil { - return errwrap.Wrapf("Error retrieving LB Tags: {{err}}", err) + return fmt.Errorf("Error retrieving LB Tags: %s", err) } var et []*elbv2.Tag @@ -684,43 +701,44 @@ func flattenAwsLbResource(d *schema.ResourceData, meta interface{}, lb *elbv2.Lo LoadBalancerArn: aws.String(d.Id()), }) if err != nil { - return errwrap.Wrapf("Error retrieving LB Attributes: {{err}}", err) + return fmt.Errorf("Error retrieving LB Attributes: %s", err) } accessLogMap := map[string]interface{}{} for _, attr := range attributesResp.Attributes { - switch *attr.Key { + switch aws.StringValue(attr.Key) { case "access_logs.s3.enabled": - accessLogMap["enabled"] = *attr.Value + accessLogMap["enabled"] = aws.StringValue(attr.Value) == "true" case "access_logs.s3.bucket": - accessLogMap["bucket"] = *attr.Value + accessLogMap["bucket"] = aws.StringValue(attr.Value) case "access_logs.s3.prefix": - accessLogMap["prefix"] = *attr.Value + accessLogMap["prefix"] = aws.StringValue(attr.Value) case "idle_timeout.timeout_seconds": - timeout, err := strconv.Atoi(*attr.Value) + timeout, err := strconv.Atoi(aws.StringValue(attr.Value)) if err != nil { - return errwrap.Wrapf("Error parsing ALB timeout: {{err}}", err) + return fmt.Errorf("Error parsing ALB timeout: %s", err) } log.Printf("[DEBUG] Setting ALB Timeout Seconds: %d", timeout) d.Set("idle_timeout", timeout) case "deletion_protection.enabled": - protectionEnabled := (*attr.Value) == "true" + protectionEnabled := aws.StringValue(attr.Value) == "true" log.Printf("[DEBUG] Setting LB Deletion Protection Enabled: %t", protectionEnabled) d.Set("enable_deletion_protection", protectionEnabled) - case "enable_http2": - http2Enabled := (*attr.Value) == "true" + case "routing.http2.enabled": + http2Enabled := aws.StringValue(attr.Value) == "true" log.Printf("[DEBUG] Setting ALB HTTP/2 Enabled: %t", http2Enabled) d.Set("enable_http2", http2Enabled) case "load_balancing.cross_zone.enabled": - crossZoneLbEnabled := (*attr.Value) == "true" + crossZoneLbEnabled := aws.StringValue(attr.Value) == "true" log.Printf("[DEBUG] Setting NLB Cross Zone Load Balancing Enabled: %t", crossZoneLbEnabled) d.Set("enable_cross_zone_load_balancing", crossZoneLbEnabled) } } - log.Printf("[DEBUG] Setting ALB Access Logs: %#v", accessLogMap) if 
accessLogMap["bucket"] != "" || accessLogMap["prefix"] != "" { - d.Set("access_logs", []interface{}{accessLogMap}) + if err := d.Set("access_logs", []interface{}{accessLogMap}); err != nil { + return fmt.Errorf("error setting access_logs: %s", err) + } } else { d.Set("access_logs", []interface{}{}) } diff --git a/aws/resource_aws_lb_cookie_stickiness_policy.go b/aws/resource_aws_lb_cookie_stickiness_policy.go index 1379b175265..acb6acdbe93 100644 --- a/aws/resource_aws_lb_cookie_stickiness_policy.go +++ b/aws/resource_aws_lb_cookie_stickiness_policy.go @@ -21,25 +21,25 @@ func resourceAwsLBCookieStickinessPolicy() *schema.Resource { Delete: resourceAwsLBCookieStickinessPolicyDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "load_balancer": &schema.Schema{ + "load_balancer": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "lb_port": &schema.Schema{ + "lb_port": { Type: schema.TypeInt, Required: true, ForceNew: true, }, - "cookie_expiration_period": &schema.Schema{ + "cookie_expiration_period": { Type: schema.TypeInt, Optional: true, ForceNew: true, diff --git a/aws/resource_aws_lb_cookie_stickiness_policy_test.go b/aws/resource_aws_lb_cookie_stickiness_policy_test.go index 378c2672883..f0a72b31097 100644 --- a/aws/resource_aws_lb_cookie_stickiness_policy_test.go +++ b/aws/resource_aws_lb_cookie_stickiness_policy_test.go @@ -15,12 +15,12 @@ import ( func TestAccAWSLBCookieStickinessPolicy_basic(t *testing.T) { lbName := fmt.Sprintf("tf-test-lb-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLBCookieStickinessPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccLBCookieStickinessPolicyConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckLBCookieStickinessPolicy( @@ -29,7 +29,7 @@ func TestAccAWSLBCookieStickinessPolicy_basic(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccLBCookieStickinessPolicyConfigUpdate(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckLBCookieStickinessPolicy( @@ -121,12 +121,12 @@ func TestAccAWSLBCookieStickinessPolicy_drift(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLBCookieStickinessPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccLBCookieStickinessPolicyConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckLBCookieStickinessPolicy( @@ -135,7 +135,7 @@ func TestAccAWSLBCookieStickinessPolicy_drift(t *testing.T) { ), ), }, - resource.TestStep{ + { PreConfig: removePolicy, Config: testAccLBCookieStickinessPolicyConfig(lbName), Check: resource.ComposeTestCheckFunc( @@ -163,12 +163,12 @@ func TestAccAWSLBCookieStickinessPolicy_missingLB(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckLBCookieStickinessPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccLBCookieStickinessPolicyConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckLBCookieStickinessPolicy( @@ -177,7 +177,7 @@ func TestAccAWSLBCookieStickinessPolicy_missingLB(t *testing.T) { ), ), }, 
- resource.TestStep{ + { PreConfig: removeLB, Config: testAccLBCookieStickinessPolicyConfigDestroy(lbName), }, diff --git a/aws/resource_aws_lb_listener.go b/aws/resource_aws_lb_listener.go index 27182b06ef0..605ddd98cd4 100644 --- a/aws/resource_aws_lb_listener.go +++ b/aws/resource_aws_lb_listener.go @@ -4,13 +4,14 @@ import ( "errors" "fmt" "log" + "regexp" + "sort" + "strconv" "strings" "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/validation" @@ -26,6 +27,10 @@ func resourceAwsLbListener() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Read: schema.DefaultTimeout(10 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "arn": { Type: schema.TypeString, @@ -51,7 +56,11 @@ func resourceAwsLbListener() *schema.Resource { StateFunc: func(v interface{}) string { return strings.ToUpper(v.(string)) }, - ValidateFunc: validateLbListenerProtocol(), + ValidateFunc: validation.StringInSlice([]string{ + elbv2.ProtocolEnumHttp, + elbv2.ProtocolEnumHttps, + elbv2.ProtocolEnumTcp, + }, true), }, "ssl_policy": { @@ -70,14 +79,233 @@ func resourceAwsLbListener() *schema.Resource { Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "target_group_arn": { + "type": { Type: schema.TypeString, Required: true, + ValidateFunc: validation.StringInSlice([]string{ + elbv2.ActionTypeEnumAuthenticateCognito, + elbv2.ActionTypeEnumAuthenticateOidc, + elbv2.ActionTypeEnumFixedResponse, + elbv2.ActionTypeEnumForward, + elbv2.ActionTypeEnumRedirect, + }, true), }, - "type": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validateLbListenerActionType(), + "order": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ValidateFunc: validation.IntBetween(1, 50000), + }, + + "target_group_arn": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: suppressIfDefaultActionTypeNot("forward"), + }, + + "redirect": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: suppressIfDefaultActionTypeNot("redirect"), + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "host": { + Type: schema.TypeString, + Optional: true, + Default: "#{host}", + }, + + "path": { + Type: schema.TypeString, + Optional: true, + Default: "/#{path}", + }, + + "port": { + Type: schema.TypeString, + Optional: true, + Default: "#{port}", + }, + + "protocol": { + Type: schema.TypeString, + Optional: true, + Default: "#{protocol}", + ValidateFunc: validation.StringInSlice([]string{ + "#{protocol}", + "HTTP", + "HTTPS", + }, false), + }, + + "query": { + Type: schema.TypeString, + Optional: true, + Default: "#{query}", + }, + + "status_code": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + "HTTP_301", + "HTTP_302", + }, false), + }, + }, + }, + }, + + "fixed_response": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: suppressIfDefaultActionTypeNot("fixed-response"), + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "content_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + "text/plain", + "text/css", + "text/html", + "application/javascript", + 
"application/json", + }, false), + }, + + "message_body": { + Type: schema.TypeString, + Optional: true, + }, + + "status_code": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[245]\d\d$`), ""), + }, + }, + }, + }, + + "authenticate_cognito": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: suppressIfDefaultActionTypeNot("authenticate-cognito"), + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "authentication_request_extra_params": { + Type: schema.TypeMap, + Optional: true, + }, + "on_unauthenticated_request": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{ + elbv2.AuthenticateCognitoActionConditionalBehaviorEnumDeny, + elbv2.AuthenticateCognitoActionConditionalBehaviorEnumAllow, + elbv2.AuthenticateCognitoActionConditionalBehaviorEnumAuthenticate, + }, true), + }, + "scope": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "session_cookie_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "session_timeout": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + "user_pool_arn": { + Type: schema.TypeString, + Required: true, + }, + "user_pool_client_id": { + Type: schema.TypeString, + Required: true, + }, + "user_pool_domain": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + + "authenticate_oidc": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: suppressIfDefaultActionTypeNot("authenticate-oidc"), + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "authentication_request_extra_params": { + Type: schema.TypeMap, + Optional: true, + }, + "authorization_endpoint": { + Type: schema.TypeString, + Required: true, + }, + "client_id": { + Type: schema.TypeString, + Required: true, + }, + "client_secret": { + Type: schema.TypeString, + Required: true, + Sensitive: true, + }, + "issuer": { + Type: schema.TypeString, + Required: true, + }, + "on_unauthenticated_request": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{ + elbv2.AuthenticateOidcActionConditionalBehaviorEnumDeny, + elbv2.AuthenticateOidcActionConditionalBehaviorEnumAllow, + elbv2.AuthenticateOidcActionConditionalBehaviorEnumAuthenticate, + }, true), + }, + "scope": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "session_cookie_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "session_timeout": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + "token_endpoint": { + Type: schema.TypeString, + Required: true, + }, + "user_info_endpoint": { + Type: schema.TypeString, + Required: true, + }, + }, + }, }, }, }, @@ -86,6 +314,21 @@ func resourceAwsLbListener() *schema.Resource { } } +func suppressIfDefaultActionTypeNot(t string) schema.SchemaDiffSuppressFunc { + return func(k, old, new string, d *schema.ResourceData) bool { + take := 2 + i := strings.IndexFunc(k, func(r rune) bool { + if r == '.' 
{ + take -= 1 + return take == 0 + } + return false + }) + at := k[:i+1] + "type" + return d.Get(at).(string) != t + } +} + func resourceAwsLbListenerCreate(d *schema.ResourceData, meta interface{}) error { elbconn := meta.(*AWSClient).elbv2conn @@ -108,17 +351,130 @@ func resourceAwsLbListenerCreate(d *schema.ResourceData, meta interface{}) error } } - if defaultActions := d.Get("default_action").([]interface{}); len(defaultActions) == 1 { - params.DefaultActions = make([]*elbv2.Action, len(defaultActions)) + defaultActions := d.Get("default_action").([]interface{}) + params.DefaultActions = make([]*elbv2.Action, len(defaultActions)) + for i, defaultAction := range defaultActions { + defaultActionMap := defaultAction.(map[string]interface{}) - for i, defaultAction := range defaultActions { - defaultActionMap := defaultAction.(map[string]interface{}) + action := &elbv2.Action{ + Order: aws.Int64(int64(i + 1)), + Type: aws.String(defaultActionMap["type"].(string)), + } + + if order, ok := defaultActionMap["order"]; ok && order.(int) != 0 { + action.Order = aws.Int64(int64(order.(int))) + } - params.DefaultActions[i] = &elbv2.Action{ - TargetGroupArn: aws.String(defaultActionMap["target_group_arn"].(string)), - Type: aws.String(defaultActionMap["type"].(string)), + switch defaultActionMap["type"].(string) { + case "forward": + action.TargetGroupArn = aws.String(defaultActionMap["target_group_arn"].(string)) + + case "redirect": + redirectList := defaultActionMap["redirect"].([]interface{}) + + if len(redirectList) == 1 { + redirectMap := redirectList[0].(map[string]interface{}) + + action.RedirectConfig = &elbv2.RedirectActionConfig{ + Host: aws.String(redirectMap["host"].(string)), + Path: aws.String(redirectMap["path"].(string)), + Port: aws.String(redirectMap["port"].(string)), + Protocol: aws.String(redirectMap["protocol"].(string)), + Query: aws.String(redirectMap["query"].(string)), + StatusCode: aws.String(redirectMap["status_code"].(string)), + } + } else { + return errors.New("for actions of type 'redirect', you must specify a 'redirect' block") + } + + case "fixed-response": + fixedResponseList := defaultActionMap["fixed_response"].([]interface{}) + + if len(fixedResponseList) == 1 { + fixedResponseMap := fixedResponseList[0].(map[string]interface{}) + + action.FixedResponseConfig = &elbv2.FixedResponseActionConfig{ + ContentType: aws.String(fixedResponseMap["content_type"].(string)), + MessageBody: aws.String(fixedResponseMap["message_body"].(string)), + StatusCode: aws.String(fixedResponseMap["status_code"].(string)), + } + } else { + return errors.New("for actions of type 'fixed-response', you must specify a 'fixed_response' block") + } + + case elbv2.ActionTypeEnumAuthenticateCognito: + authenticateCognitoList := defaultActionMap["authenticate_cognito"].([]interface{}) + + if len(authenticateCognitoList) == 1 { + authenticateCognitoMap := authenticateCognitoList[0].(map[string]interface{}) + + authenticationRequestExtraParams := make(map[string]*string) + for key, value := range authenticateCognitoMap["authentication_request_extra_params"].(map[string]interface{}) { + authenticationRequestExtraParams[key] = aws.String(value.(string)) + } + + action.AuthenticateCognitoConfig = &elbv2.AuthenticateCognitoActionConfig{ + AuthenticationRequestExtraParams: authenticationRequestExtraParams, + UserPoolArn: aws.String(authenticateCognitoMap["user_pool_arn"].(string)), + UserPoolClientId: aws.String(authenticateCognitoMap["user_pool_client_id"].(string)), + UserPoolDomain: 
aws.String(authenticateCognitoMap["user_pool_domain"].(string)), + } + + if onUnauthenticatedRequest, ok := authenticateCognitoMap["on_unauthenticated_request"]; ok && onUnauthenticatedRequest != "" { + action.AuthenticateCognitoConfig.OnUnauthenticatedRequest = aws.String(onUnauthenticatedRequest.(string)) + } + if scope, ok := authenticateCognitoMap["scope"]; ok && scope != "" { + action.AuthenticateCognitoConfig.Scope = aws.String(scope.(string)) + } + if sessionCookieName, ok := authenticateCognitoMap["session_cookie_name"]; ok && sessionCookieName != "" { + action.AuthenticateCognitoConfig.SessionCookieName = aws.String(sessionCookieName.(string)) + } + if sessionTimeout, ok := authenticateCognitoMap["session_timeout"]; ok && sessionTimeout != 0 { + action.AuthenticateCognitoConfig.SessionTimeout = aws.Int64(int64(sessionTimeout.(int))) + } + } else { + return errors.New("for actions of type 'authenticate-cognito', you must specify a 'authenticate_cognito' block") + } + + case elbv2.ActionTypeEnumAuthenticateOidc: + authenticateOidcList := defaultActionMap["authenticate_oidc"].([]interface{}) + + if len(authenticateOidcList) == 1 { + authenticateOidcMap := authenticateOidcList[0].(map[string]interface{}) + + authenticationRequestExtraParams := make(map[string]*string) + for key, value := range authenticateOidcMap["authentication_request_extra_params"].(map[string]interface{}) { + authenticationRequestExtraParams[key] = aws.String(value.(string)) + } + + action.AuthenticateOidcConfig = &elbv2.AuthenticateOidcActionConfig{ + AuthenticationRequestExtraParams: authenticationRequestExtraParams, + AuthorizationEndpoint: aws.String(authenticateOidcMap["authorization_endpoint"].(string)), + ClientId: aws.String(authenticateOidcMap["client_id"].(string)), + ClientSecret: aws.String(authenticateOidcMap["client_secret"].(string)), + Issuer: aws.String(authenticateOidcMap["issuer"].(string)), + TokenEndpoint: aws.String(authenticateOidcMap["token_endpoint"].(string)), + UserInfoEndpoint: aws.String(authenticateOidcMap["user_info_endpoint"].(string)), + } + + if onUnauthenticatedRequest, ok := authenticateOidcMap["on_unauthenticated_request"]; ok && onUnauthenticatedRequest != "" { + action.AuthenticateOidcConfig.OnUnauthenticatedRequest = aws.String(onUnauthenticatedRequest.(string)) + } + if scope, ok := authenticateOidcMap["scope"]; ok && scope != "" { + action.AuthenticateOidcConfig.Scope = aws.String(scope.(string)) + } + if sessionCookieName, ok := authenticateOidcMap["session_cookie_name"]; ok && sessionCookieName != "" { + action.AuthenticateOidcConfig.SessionCookieName = aws.String(sessionCookieName.(string)) + } + if sessionTimeout, ok := authenticateOidcMap["session_timeout"]; ok && sessionTimeout != 0 { + action.AuthenticateOidcConfig.SessionTimeout = aws.Int64(int64(sessionTimeout.(int))) + } + } else { + return errors.New("for actions of type 'authenticate-oidc', you must specify a 'authenticate_oidc' block") } } + + params.DefaultActions[i] = action } var resp *elbv2.CreateListenerOutput @@ -127,21 +483,17 @@ func resourceAwsLbListenerCreate(d *schema.ResourceData, meta interface{}) error var err error log.Printf("[DEBUG] Creating LB listener for ARN: %s", d.Get("load_balancer_arn").(string)) resp, err = elbconn.CreateListener(params) - if awsErr, ok := err.(awserr.Error); ok { - if awsErr.Code() == "CertificateNotFound" { - log.Printf("[WARN] Got an error while trying to create LB listener for ARN: %s: %s", lbArn, err) + if err != nil { + if isAWSErr(err, 
elbv2.ErrCodeCertificateNotFoundException, "") { return resource.RetryableError(err) } - } - if err != nil { return resource.NonRetryableError(err) } - return nil }) if err != nil { - return errwrap.Wrapf("Error creating LB Listener: {{err}}", err) + return fmt.Errorf("Error creating LB Listener: %s", err) } if len(resp.Listeners) == 0 { @@ -156,16 +508,31 @@ func resourceAwsLbListenerCreate(d *schema.ResourceData, meta interface{}) error func resourceAwsLbListenerRead(d *schema.ResourceData, meta interface{}) error { elbconn := meta.(*AWSClient).elbv2conn - resp, err := elbconn.DescribeListeners(&elbv2.DescribeListenersInput{ + var resp *elbv2.DescribeListenersOutput + var request = &elbv2.DescribeListenersInput{ ListenerArns: []*string{aws.String(d.Id())}, + } + + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + resp, err = elbconn.DescribeListeners(request) + if d.IsNewResource() && isAWSErr(err, elbv2.ErrCodeListenerNotFoundException, "") { + return resource.RetryableError(err) + } + if err != nil { + return resource.NonRetryableError(err) + } + return nil }) + + if isAWSErr(err, elbv2.ErrCodeListenerNotFoundException, "") { + log.Printf("[WARN] ELBv2 Listener (%s) not found - removing from state", d.Id()) + d.SetId("") + return nil + } + if err != nil { - if isListenerNotFound(err) { - log.Printf("[WARN] DescribeListeners - removing %s from state", d.Id()) - d.SetId("") - return nil - } - return errwrap.Wrapf("Error retrieving Listener: {{err}}", err) + return fmt.Errorf("Error retrieving Listener: %s", err) } if len(resp.Listeners) != 1 { @@ -180,19 +547,90 @@ func resourceAwsLbListenerRead(d *schema.ResourceData, meta interface{}) error { d.Set("protocol", listener.Protocol) d.Set("ssl_policy", listener.SslPolicy) - if listener.Certificates != nil && len(listener.Certificates) == 1 { + if listener.Certificates != nil && len(listener.Certificates) == 1 && listener.Certificates[0] != nil { d.Set("certificate_arn", listener.Certificates[0].CertificateArn) } - defaultActions := make([]map[string]interface{}, 0) - if listener.DefaultActions != nil && len(listener.DefaultActions) > 0 { - for _, defaultAction := range listener.DefaultActions { - action := map[string]interface{}{ - "target_group_arn": *defaultAction.TargetGroupArn, - "type": *defaultAction.Type, + sort.Slice(listener.DefaultActions, func(i, j int) bool { + return aws.Int64Value(listener.DefaultActions[i].Order) < aws.Int64Value(listener.DefaultActions[j].Order) + }) + defaultActions := make([]interface{}, len(listener.DefaultActions)) + for i, defaultAction := range listener.DefaultActions { + defaultActionMap := make(map[string]interface{}) + defaultActionMap["type"] = aws.StringValue(defaultAction.Type) + defaultActionMap["order"] = aws.Int64Value(defaultAction.Order) + + switch aws.StringValue(defaultAction.Type) { + case "forward": + defaultActionMap["target_group_arn"] = aws.StringValue(defaultAction.TargetGroupArn) + + case "redirect": + defaultActionMap["redirect"] = []map[string]interface{}{ + { + "host": aws.StringValue(defaultAction.RedirectConfig.Host), + "path": aws.StringValue(defaultAction.RedirectConfig.Path), + "port": aws.StringValue(defaultAction.RedirectConfig.Port), + "protocol": aws.StringValue(defaultAction.RedirectConfig.Protocol), + "query": aws.StringValue(defaultAction.RedirectConfig.Query), + "status_code": aws.StringValue(defaultAction.RedirectConfig.StatusCode), + }, + } + + case "fixed-response": + defaultActionMap["fixed_response"] = 
[]map[string]interface{}{ + { + "content_type": aws.StringValue(defaultAction.FixedResponseConfig.ContentType), + "message_body": aws.StringValue(defaultAction.FixedResponseConfig.MessageBody), + "status_code": aws.StringValue(defaultAction.FixedResponseConfig.StatusCode), + }, + } + + case "authenticate-cognito": + authenticationRequestExtraParams := make(map[string]interface{}) + for key, value := range defaultAction.AuthenticateCognitoConfig.AuthenticationRequestExtraParams { + authenticationRequestExtraParams[key] = aws.StringValue(value) + } + defaultActionMap["authenticate_cognito"] = []map[string]interface{}{ + { + "authentication_request_extra_params": authenticationRequestExtraParams, + "on_unauthenticated_request": aws.StringValue(defaultAction.AuthenticateCognitoConfig.OnUnauthenticatedRequest), + "scope": aws.StringValue(defaultAction.AuthenticateCognitoConfig.Scope), + "session_cookie_name": aws.StringValue(defaultAction.AuthenticateCognitoConfig.SessionCookieName), + "session_timeout": aws.Int64Value(defaultAction.AuthenticateCognitoConfig.SessionTimeout), + "user_pool_arn": aws.StringValue(defaultAction.AuthenticateCognitoConfig.UserPoolArn), + "user_pool_client_id": aws.StringValue(defaultAction.AuthenticateCognitoConfig.UserPoolClientId), + "user_pool_domain": aws.StringValue(defaultAction.AuthenticateCognitoConfig.UserPoolDomain), + }, + } + + case "authenticate-oidc": + authenticationRequestExtraParams := make(map[string]interface{}) + for key, value := range defaultAction.AuthenticateOidcConfig.AuthenticationRequestExtraParams { + authenticationRequestExtraParams[key] = aws.StringValue(value) + } + + // The LB API currently provides no way to read the ClientSecret + // Instead we passthrough the configuration value into the state + clientSecret := d.Get("default_action." 
+ strconv.Itoa(i) + ".authenticate_oidc.0.client_secret").(string) + + defaultActionMap["authenticate_oidc"] = []map[string]interface{}{ + { + "authentication_request_extra_params": authenticationRequestExtraParams, + "authorization_endpoint": aws.StringValue(defaultAction.AuthenticateOidcConfig.AuthorizationEndpoint), + "client_id": aws.StringValue(defaultAction.AuthenticateOidcConfig.ClientId), + "client_secret": clientSecret, + "issuer": aws.StringValue(defaultAction.AuthenticateOidcConfig.Issuer), + "on_unauthenticated_request": aws.StringValue(defaultAction.AuthenticateOidcConfig.OnUnauthenticatedRequest), + "scope": aws.StringValue(defaultAction.AuthenticateOidcConfig.Scope), + "session_cookie_name": aws.StringValue(defaultAction.AuthenticateOidcConfig.SessionCookieName), + "session_timeout": aws.Int64Value(defaultAction.AuthenticateOidcConfig.SessionTimeout), + "token_endpoint": aws.StringValue(defaultAction.AuthenticateOidcConfig.TokenEndpoint), + "user_info_endpoint": aws.StringValue(defaultAction.AuthenticateOidcConfig.UserInfoEndpoint), + }, } - defaultActions = append(defaultActions, action) } + + defaultActions[i] = defaultActionMap } d.Set("default_action", defaultActions) @@ -219,22 +657,147 @@ func resourceAwsLbListenerUpdate(d *schema.ResourceData, meta interface{}) error } } - if defaultActions := d.Get("default_action").([]interface{}); len(defaultActions) == 1 { + if d.HasChange("default_action") { + defaultActions := d.Get("default_action").([]interface{}) params.DefaultActions = make([]*elbv2.Action, len(defaultActions)) for i, defaultAction := range defaultActions { defaultActionMap := defaultAction.(map[string]interface{}) - params.DefaultActions[i] = &elbv2.Action{ - TargetGroupArn: aws.String(defaultActionMap["target_group_arn"].(string)), - Type: aws.String(defaultActionMap["type"].(string)), + action := &elbv2.Action{ + Order: aws.Int64(int64(i + 1)), + Type: aws.String(defaultActionMap["type"].(string)), + } + + if order, ok := defaultActionMap["order"]; ok && order.(int) != 0 { + action.Order = aws.Int64(int64(order.(int))) + } + + switch defaultActionMap["type"].(string) { + case "forward": + action.TargetGroupArn = aws.String(defaultActionMap["target_group_arn"].(string)) + + case "redirect": + redirectList := defaultActionMap["redirect"].([]interface{}) + + if len(redirectList) == 1 { + redirectMap := redirectList[0].(map[string]interface{}) + + action.RedirectConfig = &elbv2.RedirectActionConfig{ + Host: aws.String(redirectMap["host"].(string)), + Path: aws.String(redirectMap["path"].(string)), + Port: aws.String(redirectMap["port"].(string)), + Protocol: aws.String(redirectMap["protocol"].(string)), + Query: aws.String(redirectMap["query"].(string)), + StatusCode: aws.String(redirectMap["status_code"].(string)), + } + } else { + return errors.New("for actions of type 'redirect', you must specify a 'redirect' block") + } + + case "fixed-response": + fixedResponseList := defaultActionMap["fixed_response"].([]interface{}) + + if len(fixedResponseList) == 1 { + fixedResponseMap := fixedResponseList[0].(map[string]interface{}) + + action.FixedResponseConfig = &elbv2.FixedResponseActionConfig{ + ContentType: aws.String(fixedResponseMap["content_type"].(string)), + MessageBody: aws.String(fixedResponseMap["message_body"].(string)), + StatusCode: aws.String(fixedResponseMap["status_code"].(string)), + } + } else { + return errors.New("for actions of type 'fixed-response', you must specify a 'fixed_response' block") + } + + case "authenticate-cognito": + 
authenticateCognitoList := defaultActionMap["authenticate_cognito"].([]interface{}) + + if len(authenticateCognitoList) == 1 { + authenticateCognitoMap := authenticateCognitoList[0].(map[string]interface{}) + + authenticationRequestExtraParams := make(map[string]*string) + for key, value := range authenticateCognitoMap["authentication_request_extra_params"].(map[string]interface{}) { + authenticationRequestExtraParams[key] = aws.String(value.(string)) + } + + action.AuthenticateCognitoConfig = &elbv2.AuthenticateCognitoActionConfig{ + AuthenticationRequestExtraParams: authenticationRequestExtraParams, + UserPoolArn: aws.String(authenticateCognitoMap["user_pool_arn"].(string)), + UserPoolClientId: aws.String(authenticateCognitoMap["user_pool_client_id"].(string)), + UserPoolDomain: aws.String(authenticateCognitoMap["user_pool_domain"].(string)), + } + + if onUnauthenticatedRequest, ok := authenticateCognitoMap["on_unauthenticated_request"]; ok && onUnauthenticatedRequest != "" { + action.AuthenticateCognitoConfig.OnUnauthenticatedRequest = aws.String(onUnauthenticatedRequest.(string)) + } + if scope, ok := authenticateCognitoMap["scope"]; ok && scope != "" { + action.AuthenticateCognitoConfig.Scope = aws.String(scope.(string)) + } + if sessionCookieName, ok := authenticateCognitoMap["session_cookie_name"]; ok && sessionCookieName != "" { + action.AuthenticateCognitoConfig.SessionCookieName = aws.String(sessionCookieName.(string)) + } + if sessionTimeout, ok := authenticateCognitoMap["session_timeout"]; ok && sessionTimeout != 0 { + action.AuthenticateCognitoConfig.SessionTimeout = aws.Int64(int64(sessionTimeout.(int))) + } + } else { + return errors.New("for actions of type 'authenticate-cognito', you must specify a 'authenticate_cognito' block") + } + + case "authenticate-oidc": + authenticateOidcList := defaultActionMap["authenticate_oidc"].([]interface{}) + + if len(authenticateOidcList) == 1 { + authenticateOidcMap := authenticateOidcList[0].(map[string]interface{}) + + authenticationRequestExtraParams := make(map[string]*string) + for key, value := range authenticateOidcMap["authentication_request_extra_params"].(map[string]interface{}) { + authenticationRequestExtraParams[key] = aws.String(value.(string)) + } + + action.AuthenticateOidcConfig = &elbv2.AuthenticateOidcActionConfig{ + AuthenticationRequestExtraParams: authenticationRequestExtraParams, + AuthorizationEndpoint: aws.String(authenticateOidcMap["authorization_endpoint"].(string)), + ClientId: aws.String(authenticateOidcMap["client_id"].(string)), + ClientSecret: aws.String(authenticateOidcMap["client_secret"].(string)), + Issuer: aws.String(authenticateOidcMap["issuer"].(string)), + TokenEndpoint: aws.String(authenticateOidcMap["token_endpoint"].(string)), + UserInfoEndpoint: aws.String(authenticateOidcMap["user_info_endpoint"].(string)), + } + + if onUnauthenticatedRequest, ok := authenticateOidcMap["on_unauthenticated_request"]; ok && onUnauthenticatedRequest != "" { + action.AuthenticateOidcConfig.OnUnauthenticatedRequest = aws.String(onUnauthenticatedRequest.(string)) + } + if scope, ok := authenticateOidcMap["scope"]; ok && scope != "" { + action.AuthenticateOidcConfig.Scope = aws.String(scope.(string)) + } + if sessionCookieName, ok := authenticateOidcMap["session_cookie_name"]; ok && sessionCookieName != "" { + action.AuthenticateOidcConfig.SessionCookieName = aws.String(sessionCookieName.(string)) + } + if sessionTimeout, ok := authenticateOidcMap["session_timeout"]; ok && sessionTimeout != 0 { + 
action.AuthenticateOidcConfig.SessionTimeout = aws.Int64(int64(sessionTimeout.(int))) + } + } else { + return errors.New("for actions of type 'authenticate-oidc', you must specify a 'authenticate_oidc' block") + } } + + params.DefaultActions[i] = action } } - _, err := elbconn.ModifyListener(params) + err := resource.Retry(5*time.Minute, func() *resource.RetryError { + _, err := elbconn.ModifyListener(params) + if err != nil { + if isAWSErr(err, elbv2.ErrCodeCertificateNotFoundException, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) if err != nil { - return errwrap.Wrapf("Error modifying LB Listener: {{err}}", err) + return fmt.Errorf("Error modifying LB Listener: %s", err) } return resourceAwsLbListenerRead(d, meta) @@ -247,27 +810,8 @@ func resourceAwsLbListenerDelete(d *schema.ResourceData, meta interface{}) error ListenerArn: aws.String(d.Id()), }) if err != nil { - return errwrap.Wrapf("Error deleting Listener: {{err}}", err) + return fmt.Errorf("Error deleting Listener: %s", err) } return nil } - -func isListenerNotFound(err error) bool { - elberr, ok := err.(awserr.Error) - return ok && elberr.Code() == "ListenerNotFound" -} - -func validateLbListenerActionType() schema.SchemaValidateFunc { - return validation.StringInSlice([]string{ - elbv2.ActionTypeEnumForward, - }, true) -} - -func validateLbListenerProtocol() schema.SchemaValidateFunc { - return validation.StringInSlice([]string{ - "http", - "https", - "tcp", - }, true) -} diff --git a/aws/resource_aws_lb_listener_certificate.go b/aws/resource_aws_lb_listener_certificate.go index 39f32ad53ed..51405e30fb1 100644 --- a/aws/resource_aws_lb_listener_certificate.go +++ b/aws/resource_aws_lb_listener_certificate.go @@ -6,10 +6,9 @@ import ( "log" "time" - "github.com/hashicorp/terraform/helper/resource" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -40,7 +39,7 @@ func resourceAwsLbListenerCertificateCreate(d *schema.ResourceData, meta interfa params := &elbv2.AddListenerCertificatesInput{ ListenerArn: aws.String(d.Get("listener_arn").(string)), Certificates: []*elbv2.Certificate{ - &elbv2.Certificate{ + { CertificateArn: aws.String(d.Get("certificate_arn").(string)), }, }, @@ -106,7 +105,7 @@ func resourceAwsLbListenerCertificateDelete(d *schema.ResourceData, meta interfa params := &elbv2.RemoveListenerCertificatesInput{ ListenerArn: aws.String(d.Get("listener_arn").(string)), Certificates: []*elbv2.Certificate{ - &elbv2.Certificate{ + { CertificateArn: aws.String(d.Get("certificate_arn").(string)), }, }, @@ -141,11 +140,11 @@ func findAwsLbListenerCertificate(certificateArn, listenerArn string, skipDefaul } for _, cert := range resp.Certificates { - if skipDefault && *cert.IsDefault { + if skipDefault && aws.BoolValue(cert.IsDefault) { continue } - if *cert.CertificateArn == certificateArn { + if aws.StringValue(cert.CertificateArn) == certificateArn { return cert, nil } } diff --git a/aws/resource_aws_lb_listener_certificate_test.go b/aws/resource_aws_lb_listener_certificate_test.go index d4809472fd3..590c237e74a 100644 --- a/aws/resource_aws_lb_listener_certificate_test.go +++ b/aws/resource_aws_lb_listener_certificate_test.go @@ -6,7 +6,6 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elbv2" 
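
The listener code above drops the `awserr` type assertions (`isListenerNotFound`, the manual `CertificateNotFound` check) in favour of `resource.Retry` combined with the provider's `isAWSErr` helper. A minimal, self-contained sketch of that pattern; `isAWSErrLike` and `retryOnCode` are illustrative stand-ins for the provider's own helpers, not the exact code:

```go
package main

import (
	"fmt"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/hashicorp/terraform/helper/resource"
)

// isAWSErrLike illustrates the check: the error must be an awserr.Error whose
// code matches exactly and whose message contains the given fragment
// ("" matches any message).
func isAWSErrLike(err error, code, message string) bool {
	if awsErr, ok := err.(awserr.Error); ok {
		return awsErr.Code() == code && strings.Contains(awsErr.Message(), message)
	}
	return false
}

// retryOnCode shows the shape of the resource.Retry usage above: retry only
// the named error code (an eventual-consistency case), fail fast on anything else.
func retryOnCode(code string, call func() error) error {
	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		err := call()
		if isAWSErrLike(err, code, "") {
			return resource.RetryableError(err)
		}
		if err != nil {
			return resource.NonRetryableError(err)
		}
		return nil
	})
}

func main() {
	err := awserr.New("CertificateNotFound", "certificate not yet propagated", nil)
	fmt.Println(isAWSErrLike(err, "CertificateNotFound", "")) // true
	fmt.Println(isAWSErrLike(err, "ListenerNotFound", ""))    // false

	_ = retryOnCode // a real call site wraps the AWS API call, as in the diff above
}
```
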
"github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" @@ -14,7 +13,7 @@ import ( ) func TestAccAwsLbListenerCertificate_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsLbListenerCertificateDestroy, @@ -41,7 +40,7 @@ func TestAccAwsLbListenerCertificate_cycle(t *testing.T) { rName := acctest.RandString(5) suffix := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsLbListenerCertificateDestroy, @@ -111,7 +110,7 @@ func testAccCheckAwsLbListenerCertificateDestroy(s *terraform.State) error { resp, err := conn.DescribeListenerCertificates(input) if err != nil { - if wserr, ok := err.(awserr.Error); ok && wserr.Code() == "ListenerNotFound" { + if isAWSErr(err, elbv2.ErrCodeListenerNotFoundException, "") { return nil } return err @@ -119,11 +118,11 @@ func testAccCheckAwsLbListenerCertificateDestroy(s *terraform.State) error { for _, cert := range resp.Certificates { // We only care about additional certificates. - if *cert.IsDefault { + if aws.BoolValue(cert.IsDefault) { continue } - if *cert.CertificateArn == rs.Primary.Attributes["certificate_arn"] { + if aws.StringValue(cert.CertificateArn) == rs.Primary.Attributes["certificate_arn"] { return errors.New("LB listener certificate not destroyed") } } diff --git a/aws/resource_aws_lb_listener_rule.go b/aws/resource_aws_lb_listener_rule.go index c32465a64e7..a6edf22937e 100644 --- a/aws/resource_aws_lb_listener_rule.go +++ b/aws/resource_aws_lb_listener_rule.go @@ -7,14 +7,14 @@ import ( "regexp" "sort" "strconv" + "strings" "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsLbbListenerRule() *schema.Resource { @@ -49,14 +49,233 @@ func resourceAwsLbbListenerRule() *schema.Resource { Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "target_group_arn": { + "type": { Type: schema.TypeString, Required: true, + ValidateFunc: validation.StringInSlice([]string{ + elbv2.ActionTypeEnumAuthenticateCognito, + elbv2.ActionTypeEnumAuthenticateOidc, + elbv2.ActionTypeEnumFixedResponse, + elbv2.ActionTypeEnumForward, + elbv2.ActionTypeEnumRedirect, + }, true), }, - "type": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validateLbListenerActionType(), + "order": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ValidateFunc: validation.IntBetween(1, 50000), + }, + + "target_group_arn": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: suppressIfActionTypeNot("forward"), + }, + + "redirect": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: suppressIfActionTypeNot("redirect"), + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "host": { + Type: schema.TypeString, + Optional: true, + Default: "#{host}", + }, + + "path": { + Type: schema.TypeString, + Optional: true, + Default: "/#{path}", + }, + + "port": { + Type: schema.TypeString, + Optional: true, + Default: 
"#{port}", + }, + + "protocol": { + Type: schema.TypeString, + Optional: true, + Default: "#{protocol}", + ValidateFunc: validation.StringInSlice([]string{ + "#{protocol}", + "HTTP", + "HTTPS", + }, false), + }, + + "query": { + Type: schema.TypeString, + Optional: true, + Default: "#{query}", + }, + + "status_code": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + "HTTP_301", + "HTTP_302", + }, false), + }, + }, + }, + }, + + "fixed_response": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: suppressIfActionTypeNot("fixed-response"), + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "content_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + "text/plain", + "text/css", + "text/html", + "application/javascript", + "application/json", + }, false), + }, + + "message_body": { + Type: schema.TypeString, + Optional: true, + }, + + "status_code": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[245]\d\d$`), ""), + }, + }, + }, + }, + + "authenticate_cognito": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: suppressIfDefaultActionTypeNot(elbv2.ActionTypeEnumAuthenticateCognito), + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "authentication_request_extra_params": { + Type: schema.TypeMap, + Optional: true, + }, + "on_unauthenticated_request": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{ + elbv2.AuthenticateCognitoActionConditionalBehaviorEnumDeny, + elbv2.AuthenticateCognitoActionConditionalBehaviorEnumAllow, + elbv2.AuthenticateCognitoActionConditionalBehaviorEnumAuthenticate, + }, true), + }, + "scope": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "session_cookie_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "session_timeout": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + "user_pool_arn": { + Type: schema.TypeString, + Required: true, + }, + "user_pool_client_id": { + Type: schema.TypeString, + Required: true, + }, + "user_pool_domain": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + + "authenticate_oidc": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: suppressIfDefaultActionTypeNot(elbv2.ActionTypeEnumAuthenticateOidc), + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "authentication_request_extra_params": { + Type: schema.TypeMap, + Optional: true, + }, + "authorization_endpoint": { + Type: schema.TypeString, + Required: true, + }, + "client_id": { + Type: schema.TypeString, + Required: true, + }, + "client_secret": { + Type: schema.TypeString, + Required: true, + Sensitive: true, + }, + "issuer": { + Type: schema.TypeString, + Required: true, + }, + "on_unauthenticated_request": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{ + elbv2.AuthenticateOidcActionConditionalBehaviorEnumDeny, + elbv2.AuthenticateOidcActionConditionalBehaviorEnumAllow, + elbv2.AuthenticateOidcActionConditionalBehaviorEnumAuthenticate, + }, true), + }, + "scope": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "session_cookie_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "session_timeout": { + Type: 
schema.TypeInt, + Optional: true, + Computed: true, + }, + "token_endpoint": { + Type: schema.TypeString, + Required: true, + }, + "user_info_endpoint": { + Type: schema.TypeString, + Required: true, + }, + }, + }, }, }, }, @@ -69,7 +288,7 @@ func resourceAwsLbbListenerRule() *schema.Resource { "field": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(64), + ValidateFunc: validation.StringLenBetween(0, 64), }, "values": { Type: schema.TypeList, @@ -84,6 +303,21 @@ func resourceAwsLbbListenerRule() *schema.Resource { } } +func suppressIfActionTypeNot(t string) schema.SchemaDiffSuppressFunc { + return func(k, old, new string, d *schema.ResourceData) bool { + take := 2 + i := strings.IndexFunc(k, func(r rune) bool { + if r == '.' { + take -= 1 + return take == 0 + } + return false + }) + at := k[:i+1] + "type" + return d.Get(at).(string) != t + } +} + func resourceAwsLbListenerRuleCreate(d *schema.ResourceData, meta interface{}) error { elbconn := meta.(*AWSClient).elbv2conn listenerArn := d.Get("listener_arn").(string) @@ -96,10 +330,126 @@ func resourceAwsLbListenerRuleCreate(d *schema.ResourceData, meta interface{}) e params.Actions = make([]*elbv2.Action, len(actions)) for i, action := range actions { actionMap := action.(map[string]interface{}) - params.Actions[i] = &elbv2.Action{ - TargetGroupArn: aws.String(actionMap["target_group_arn"].(string)), - Type: aws.String(actionMap["type"].(string)), + + action := &elbv2.Action{ + Order: aws.Int64(int64(i + 1)), + Type: aws.String(actionMap["type"].(string)), + } + + if order, ok := actionMap["order"]; ok && order.(int) != 0 { + action.Order = aws.Int64(int64(order.(int))) } + + switch actionMap["type"].(string) { + case "forward": + action.TargetGroupArn = aws.String(actionMap["target_group_arn"].(string)) + + case "redirect": + redirectList := actionMap["redirect"].([]interface{}) + + if len(redirectList) == 1 { + redirectMap := redirectList[0].(map[string]interface{}) + + action.RedirectConfig = &elbv2.RedirectActionConfig{ + Host: aws.String(redirectMap["host"].(string)), + Path: aws.String(redirectMap["path"].(string)), + Port: aws.String(redirectMap["port"].(string)), + Protocol: aws.String(redirectMap["protocol"].(string)), + Query: aws.String(redirectMap["query"].(string)), + StatusCode: aws.String(redirectMap["status_code"].(string)), + } + } else { + return errors.New("for actions of type 'redirect', you must specify a 'redirect' block") + } + + case "fixed-response": + fixedResponseList := actionMap["fixed_response"].([]interface{}) + + if len(fixedResponseList) == 1 { + fixedResponseMap := fixedResponseList[0].(map[string]interface{}) + + action.FixedResponseConfig = &elbv2.FixedResponseActionConfig{ + ContentType: aws.String(fixedResponseMap["content_type"].(string)), + MessageBody: aws.String(fixedResponseMap["message_body"].(string)), + StatusCode: aws.String(fixedResponseMap["status_code"].(string)), + } + } else { + return errors.New("for actions of type 'fixed-response', you must specify a 'fixed_response' block") + } + + case "authenticate-cognito": + authenticateCognitoList := actionMap["authenticate_cognito"].([]interface{}) + + if len(authenticateCognitoList) == 1 { + authenticateCognitoMap := authenticateCognitoList[0].(map[string]interface{}) + + authenticationRequestExtraParams := make(map[string]*string) + for key, value := range authenticateCognitoMap["authentication_request_extra_params"].(map[string]interface{}) { + authenticationRequestExtraParams[key] = aws.String(value.(string)) + } 
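+						// Only the required Cognito fields are set on the config below; the optional fields
+						// (on_unauthenticated_request, scope, session_cookie_name, session_timeout) are added
+						// afterwards only when they carry a non-empty / non-zero value.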
+ + action.AuthenticateCognitoConfig = &elbv2.AuthenticateCognitoActionConfig{ + AuthenticationRequestExtraParams: authenticationRequestExtraParams, + UserPoolArn: aws.String(authenticateCognitoMap["user_pool_arn"].(string)), + UserPoolClientId: aws.String(authenticateCognitoMap["user_pool_client_id"].(string)), + UserPoolDomain: aws.String(authenticateCognitoMap["user_pool_domain"].(string)), + } + + if onUnauthenticatedRequest, ok := authenticateCognitoMap["on_unauthenticated_request"]; ok && onUnauthenticatedRequest != "" { + action.AuthenticateCognitoConfig.OnUnauthenticatedRequest = aws.String(onUnauthenticatedRequest.(string)) + } + if scope, ok := authenticateCognitoMap["scope"]; ok && scope != "" { + action.AuthenticateCognitoConfig.Scope = aws.String(scope.(string)) + } + if sessionCookieName, ok := authenticateCognitoMap["session_cookie_name"]; ok && sessionCookieName != "" { + action.AuthenticateCognitoConfig.SessionCookieName = aws.String(sessionCookieName.(string)) + } + if sessionTimeout, ok := authenticateCognitoMap["session_timeout"]; ok && sessionTimeout != 0 { + action.AuthenticateCognitoConfig.SessionTimeout = aws.Int64(int64(sessionTimeout.(int))) + } + } else { + return errors.New("for actions of type 'authenticate-cognito', you must specify a 'authenticate_cognito' block") + } + + case "authenticate-oidc": + authenticateOidcList := actionMap["authenticate_oidc"].([]interface{}) + + if len(authenticateOidcList) == 1 { + authenticateOidcMap := authenticateOidcList[0].(map[string]interface{}) + + authenticationRequestExtraParams := make(map[string]*string) + for key, value := range authenticateOidcMap["authentication_request_extra_params"].(map[string]interface{}) { + authenticationRequestExtraParams[key] = aws.String(value.(string)) + } + + action.AuthenticateOidcConfig = &elbv2.AuthenticateOidcActionConfig{ + AuthenticationRequestExtraParams: authenticationRequestExtraParams, + AuthorizationEndpoint: aws.String(authenticateOidcMap["authorization_endpoint"].(string)), + ClientId: aws.String(authenticateOidcMap["client_id"].(string)), + ClientSecret: aws.String(authenticateOidcMap["client_secret"].(string)), + Issuer: aws.String(authenticateOidcMap["issuer"].(string)), + TokenEndpoint: aws.String(authenticateOidcMap["token_endpoint"].(string)), + UserInfoEndpoint: aws.String(authenticateOidcMap["user_info_endpoint"].(string)), + } + + if onUnauthenticatedRequest, ok := authenticateOidcMap["on_unauthenticated_request"]; ok && onUnauthenticatedRequest != "" { + action.AuthenticateOidcConfig.OnUnauthenticatedRequest = aws.String(onUnauthenticatedRequest.(string)) + } + if scope, ok := authenticateOidcMap["scope"]; ok && scope != "" { + action.AuthenticateOidcConfig.Scope = aws.String(scope.(string)) + } + if sessionCookieName, ok := authenticateOidcMap["session_cookie_name"]; ok && sessionCookieName != "" { + action.AuthenticateOidcConfig.SessionCookieName = aws.String(sessionCookieName.(string)) + } + if sessionTimeout, ok := authenticateOidcMap["session_timeout"]; ok && sessionTimeout != 0 { + action.AuthenticateOidcConfig.SessionTimeout = aws.Int64(int64(sessionTimeout.(int))) + } + } else { + return errors.New("for actions of type 'authenticate-oidc', you must specify a 'authenticate_oidc' block") + } + } + + params.Actions[i] = action } conditions := d.Get("condition").(*schema.Set).List() @@ -150,7 +500,7 @@ func resourceAwsLbListenerRuleCreate(d *schema.ResourceData, meta interface{}) e return errors.New("Error creating LB Listener Rule: no rules returned in 
response") } - d.SetId(*resp.Rules[0].RuleArn) + d.SetId(aws.StringValue(resp.Rules[0].RuleArn)) return resourceAwsLbListenerRuleRead(d, meta) } @@ -158,16 +508,31 @@ func resourceAwsLbListenerRuleCreate(d *schema.ResourceData, meta interface{}) e func resourceAwsLbListenerRuleRead(d *schema.ResourceData, meta interface{}) error { elbconn := meta.(*AWSClient).elbv2conn - resp, err := elbconn.DescribeRules(&elbv2.DescribeRulesInput{ + var resp *elbv2.DescribeRulesOutput + var req = &elbv2.DescribeRulesInput{ RuleArns: []*string{aws.String(d.Id())}, + } + + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + resp, err = elbconn.DescribeRules(req) + if err != nil { + if d.IsNewResource() && isAWSErr(err, elbv2.ErrCodeRuleNotFoundException, "") { + return resource.RetryableError(err) + } else { + return resource.NonRetryableError(err) + } + } + return nil }) + if err != nil { - if isRuleNotFound(err) { + if isAWSErr(err, elbv2.ErrCodeRuleNotFoundException, "") { log.Printf("[WARN] DescribeRules - removing %s from state", d.Id()) d.SetId("") return nil } - return errwrap.Wrapf(fmt.Sprintf("Error retrieving Rules for listener %s: {{err}}", d.Id()), err) + return fmt.Errorf("Error retrieving Rules for listener %q: %s", d.Id(), err) } if len(resp.Rules) != 1 { @@ -179,24 +544,99 @@ func resourceAwsLbListenerRuleRead(d *schema.ResourceData, meta interface{}) err d.Set("arn", rule.RuleArn) // The listener arn isn't in the response but can be derived from the rule arn - d.Set("listener_arn", lbListenerARNFromRuleARN(*rule.RuleArn)) + d.Set("listener_arn", lbListenerARNFromRuleARN(aws.StringValue(rule.RuleArn))) // Rules are evaluated in priority order, from the lowest value to the highest value. The default rule has the lowest priority. 
- if *rule.Priority == "default" { + if aws.StringValue(rule.Priority) == "default" { d.Set("priority", 99999) } else { - if priority, err := strconv.Atoi(*rule.Priority); err != nil { - return fmt.Errorf("Cannot convert rule priority %q to int: {{err}}", err) + if priority, err := strconv.Atoi(aws.StringValue(rule.Priority)); err != nil { + return fmt.Errorf("Cannot convert rule priority %q to int: %s", aws.StringValue(rule.Priority), err) } else { d.Set("priority", priority) } } + sort.Slice(rule.Actions, func(i, j int) bool { + return aws.Int64Value(rule.Actions[i].Order) < aws.Int64Value(rule.Actions[j].Order) + }) actions := make([]interface{}, len(rule.Actions)) for i, action := range rule.Actions { actionMap := make(map[string]interface{}) - actionMap["target_group_arn"] = *action.TargetGroupArn - actionMap["type"] = *action.Type + actionMap["type"] = aws.StringValue(action.Type) + actionMap["order"] = aws.Int64Value(action.Order) + + switch actionMap["type"] { + case "forward": + actionMap["target_group_arn"] = aws.StringValue(action.TargetGroupArn) + + case "redirect": + actionMap["redirect"] = []map[string]interface{}{ + { + "host": aws.StringValue(action.RedirectConfig.Host), + "path": aws.StringValue(action.RedirectConfig.Path), + "port": aws.StringValue(action.RedirectConfig.Port), + "protocol": aws.StringValue(action.RedirectConfig.Protocol), + "query": aws.StringValue(action.RedirectConfig.Query), + "status_code": aws.StringValue(action.RedirectConfig.StatusCode), + }, + } + + case "fixed-response": + actionMap["fixed_response"] = []map[string]interface{}{ + { + "content_type": aws.StringValue(action.FixedResponseConfig.ContentType), + "message_body": aws.StringValue(action.FixedResponseConfig.MessageBody), + "status_code": aws.StringValue(action.FixedResponseConfig.StatusCode), + }, + } + + case "authenticate-cognito": + authenticationRequestExtraParams := make(map[string]interface{}) + for key, value := range action.AuthenticateCognitoConfig.AuthenticationRequestExtraParams { + authenticationRequestExtraParams[key] = aws.StringValue(value) + } + + actionMap["authenticate_cognito"] = []map[string]interface{}{ + { + "authentication_request_extra_params": authenticationRequestExtraParams, + "on_unauthenticated_request": aws.StringValue(action.AuthenticateCognitoConfig.OnUnauthenticatedRequest), + "scope": aws.StringValue(action.AuthenticateCognitoConfig.Scope), + "session_cookie_name": aws.StringValue(action.AuthenticateCognitoConfig.SessionCookieName), + "session_timeout": aws.Int64Value(action.AuthenticateCognitoConfig.SessionTimeout), + "user_pool_arn": aws.StringValue(action.AuthenticateCognitoConfig.UserPoolArn), + "user_pool_client_id": aws.StringValue(action.AuthenticateCognitoConfig.UserPoolClientId), + "user_pool_domain": aws.StringValue(action.AuthenticateCognitoConfig.UserPoolDomain), + }, + } + + case "authenticate-oidc": + authenticationRequestExtraParams := make(map[string]interface{}) + for key, value := range action.AuthenticateOidcConfig.AuthenticationRequestExtraParams { + authenticationRequestExtraParams[key] = aws.StringValue(value) + } + + // The LB API currently provides no way to read the ClientSecret + // Instead we passthrough the configuration value into the state + clientSecret := d.Get("action." 
+ strconv.Itoa(i) + ".authenticate_oidc.0.client_secret").(string) + + actionMap["authenticate_oidc"] = []map[string]interface{}{ + { + "authentication_request_extra_params": authenticationRequestExtraParams, + "authorization_endpoint": aws.StringValue(action.AuthenticateOidcConfig.AuthorizationEndpoint), + "client_id": aws.StringValue(action.AuthenticateOidcConfig.ClientId), + "client_secret": clientSecret, + "issuer": aws.StringValue(action.AuthenticateOidcConfig.Issuer), + "on_unauthenticated_request": aws.StringValue(action.AuthenticateOidcConfig.OnUnauthenticatedRequest), + "scope": aws.StringValue(action.AuthenticateOidcConfig.Scope), + "session_cookie_name": aws.StringValue(action.AuthenticateOidcConfig.SessionCookieName), + "session_timeout": aws.Int64Value(action.AuthenticateOidcConfig.SessionTimeout), + "token_endpoint": aws.StringValue(action.AuthenticateOidcConfig.TokenEndpoint), + "user_info_endpoint": aws.StringValue(action.AuthenticateOidcConfig.UserInfoEndpoint), + }, + } + } + actions[i] = actionMap } d.Set("action", actions) @@ -204,10 +644,10 @@ func resourceAwsLbListenerRuleRead(d *schema.ResourceData, meta interface{}) err conditions := make([]interface{}, len(rule.Conditions)) for i, condition := range rule.Conditions { conditionMap := make(map[string]interface{}) - conditionMap["field"] = *condition.Field + conditionMap["field"] = aws.StringValue(condition.Field) conditionValues := make([]string, len(condition.Values)) for k, value := range condition.Values { - conditionValues[k] = *value + conditionValues[k] = aws.StringValue(value) } conditionMap["values"] = conditionValues conditions[i] = conditionMap @@ -250,10 +690,126 @@ func resourceAwsLbListenerRuleUpdate(d *schema.ResourceData, meta interface{}) e params.Actions = make([]*elbv2.Action, len(actions)) for i, action := range actions { actionMap := action.(map[string]interface{}) - params.Actions[i] = &elbv2.Action{ - TargetGroupArn: aws.String(actionMap["target_group_arn"].(string)), - Type: aws.String(actionMap["type"].(string)), + + action := &elbv2.Action{ + Order: aws.Int64(int64(i + 1)), + Type: aws.String(actionMap["type"].(string)), } + + if order, ok := actionMap["order"]; ok && order.(int) != 0 { + action.Order = aws.Int64(int64(order.(int))) + } + + switch actionMap["type"].(string) { + case "forward": + action.TargetGroupArn = aws.String(actionMap["target_group_arn"].(string)) + + case "redirect": + redirectList := actionMap["redirect"].([]interface{}) + + if len(redirectList) == 1 { + redirectMap := redirectList[0].(map[string]interface{}) + + action.RedirectConfig = &elbv2.RedirectActionConfig{ + Host: aws.String(redirectMap["host"].(string)), + Path: aws.String(redirectMap["path"].(string)), + Port: aws.String(redirectMap["port"].(string)), + Protocol: aws.String(redirectMap["protocol"].(string)), + Query: aws.String(redirectMap["query"].(string)), + StatusCode: aws.String(redirectMap["status_code"].(string)), + } + } else { + return errors.New("for actions of type 'redirect', you must specify a 'redirect' block") + } + + case "fixed-response": + fixedResponseList := actionMap["fixed_response"].([]interface{}) + + if len(fixedResponseList) == 1 { + fixedResponseMap := fixedResponseList[0].(map[string]interface{}) + + action.FixedResponseConfig = &elbv2.FixedResponseActionConfig{ + ContentType: aws.String(fixedResponseMap["content_type"].(string)), + MessageBody: aws.String(fixedResponseMap["message_body"].(string)), + StatusCode: aws.String(fixedResponseMap["status_code"].(string)), + } + } else { 
+ return errors.New("for actions of type 'fixed-response', you must specify a 'fixed_response' block") + } + + case "authenticate-cognito": + authenticateCognitoList := actionMap["authenticate_cognito"].([]interface{}) + + if len(authenticateCognitoList) == 1 { + authenticateCognitoMap := authenticateCognitoList[0].(map[string]interface{}) + + authenticationRequestExtraParams := make(map[string]*string) + for key, value := range authenticateCognitoMap["authentication_request_extra_params"].(map[string]interface{}) { + authenticationRequestExtraParams[key] = aws.String(value.(string)) + } + + action.AuthenticateCognitoConfig = &elbv2.AuthenticateCognitoActionConfig{ + AuthenticationRequestExtraParams: authenticationRequestExtraParams, + UserPoolArn: aws.String(authenticateCognitoMap["user_pool_arn"].(string)), + UserPoolClientId: aws.String(authenticateCognitoMap["user_pool_client_id"].(string)), + UserPoolDomain: aws.String(authenticateCognitoMap["user_pool_domain"].(string)), + } + + if onUnauthenticatedRequest, ok := authenticateCognitoMap["on_unauthenticated_request"]; ok && onUnauthenticatedRequest != "" { + action.AuthenticateCognitoConfig.OnUnauthenticatedRequest = aws.String(onUnauthenticatedRequest.(string)) + } + if scope, ok := authenticateCognitoMap["scope"]; ok && scope != "" { + action.AuthenticateCognitoConfig.Scope = aws.String(scope.(string)) + } + if sessionCookieName, ok := authenticateCognitoMap["session_cookie_name"]; ok && sessionCookieName != "" { + action.AuthenticateCognitoConfig.SessionCookieName = aws.String(sessionCookieName.(string)) + } + if sessionTimeout, ok := authenticateCognitoMap["session_timeout"]; ok && sessionTimeout != 0 { + action.AuthenticateCognitoConfig.SessionTimeout = aws.Int64(int64(sessionTimeout.(int))) + } + } else { + return errors.New("for actions of type 'authenticate-cognito', you must specify a 'authenticate_cognito' block") + } + + case "authenticate-oidc": + authenticateOidcList := actionMap["authenticate_oidc"].([]interface{}) + + if len(authenticateOidcList) == 1 { + authenticateOidcMap := authenticateOidcList[0].(map[string]interface{}) + + authenticationRequestExtraParams := make(map[string]*string) + for key, value := range authenticateOidcMap["authentication_request_extra_params"].(map[string]interface{}) { + authenticationRequestExtraParams[key] = aws.String(value.(string)) + } + + action.AuthenticateOidcConfig = &elbv2.AuthenticateOidcActionConfig{ + AuthenticationRequestExtraParams: authenticationRequestExtraParams, + AuthorizationEndpoint: aws.String(authenticateOidcMap["authorization_endpoint"].(string)), + ClientId: aws.String(authenticateOidcMap["client_id"].(string)), + ClientSecret: aws.String(authenticateOidcMap["client_secret"].(string)), + Issuer: aws.String(authenticateOidcMap["issuer"].(string)), + TokenEndpoint: aws.String(authenticateOidcMap["token_endpoint"].(string)), + UserInfoEndpoint: aws.String(authenticateOidcMap["user_info_endpoint"].(string)), + } + + if onUnauthenticatedRequest, ok := authenticateOidcMap["on_unauthenticated_request"]; ok && onUnauthenticatedRequest != "" { + action.AuthenticateOidcConfig.OnUnauthenticatedRequest = aws.String(onUnauthenticatedRequest.(string)) + } + if scope, ok := authenticateOidcMap["scope"]; ok && scope != "" { + action.AuthenticateOidcConfig.Scope = aws.String(scope.(string)) + } + if sessionCookieName, ok := authenticateOidcMap["session_cookie_name"]; ok && sessionCookieName != "" { + action.AuthenticateOidcConfig.SessionCookieName = 
aws.String(sessionCookieName.(string)) + } + if sessionTimeout, ok := authenticateOidcMap["session_timeout"]; ok && sessionTimeout != 0 { + action.AuthenticateOidcConfig.SessionTimeout = aws.Int64(int64(sessionTimeout.(int))) + } + } else { + return errors.New("for actions of type 'authenticate-oidc', you must specify a 'authenticate_oidc' block") + } + } + + params.Actions[i] = action } requestUpdate = true d.SetPartial("action") @@ -280,7 +836,7 @@ func resourceAwsLbListenerRuleUpdate(d *schema.ResourceData, meta interface{}) e if requestUpdate { resp, err := elbconn.ModifyRule(params) if err != nil { - return errwrap.Wrapf("Error modifying LB Listener Rule: {{err}}", err) + return fmt.Errorf("Error modifying LB Listener Rule: %s", err) } if len(resp.Rules) == 0 { @@ -299,8 +855,8 @@ func resourceAwsLbListenerRuleDelete(d *schema.ResourceData, meta interface{}) e _, err := elbconn.DeleteRule(&elbv2.DeleteRuleInput{ RuleArn: aws.String(d.Id()), }) - if err != nil && !isRuleNotFound(err) { - return errwrap.Wrapf("Error deleting LB Listener Rule: {{err}}", err) + if err != nil && !isAWSErr(err, elbv2.ErrCodeRuleNotFoundException, "") { + return fmt.Errorf("Error deleting LB Listener Rule: %s", err) } return nil } @@ -329,11 +885,6 @@ func lbListenerARNFromRuleARN(ruleArn string) string { return "" } -func isRuleNotFound(err error) bool { - elberr, ok := err.(awserr.Error) - return ok && elberr.Code() == "RuleNotFound" -} - func highestListenerRulePriority(conn *elbv2.ELBV2, arn string) (priority int64, err error) { var priorities []int var nextMarker *string @@ -348,8 +899,8 @@ func highestListenerRulePriority(conn *elbv2.ELBV2, arn string) (priority int64, return } for _, rule := range out.Rules { - if *rule.Priority != "default" { - p, _ := strconv.Atoi(*rule.Priority) + if aws.StringValue(rule.Priority) != "default" { + p, _ := strconv.Atoi(aws.StringValue(rule.Priority)) priorities = append(priorities, p) } } diff --git a/aws/resource_aws_lb_listener_rule_test.go b/aws/resource_aws_lb_listener_rule_test.go index 4f4471d448e..10a75e7a981 100644 --- a/aws/resource_aws_lb_listener_rule_test.go +++ b/aws/resource_aws_lb_listener_rule_test.go @@ -8,7 +8,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -60,7 +59,7 @@ func TestAccAWSLBListenerRule_basic(t *testing.T) { lbName := fmt.Sprintf("testrule-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_listener_rule.static", Providers: testAccProviders, @@ -74,8 +73,11 @@ func TestAccAWSLBListenerRule_basic(t *testing.T) { resource.TestCheckResourceAttrSet("aws_lb_listener_rule.static", "listener_arn"), resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "priority", "100"), resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.order", "1"), resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.type", "forward"), resource.TestCheckResourceAttrSet("aws_lb_listener_rule.static", 
"action.0.target_group_arn"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.#", "0"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.fixed_response.#", "0"), resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.#", "1"), resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.1366281676.field", "path-pattern"), resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.1366281676.values.#", "1"), @@ -91,7 +93,7 @@ func TestAccAWSLBListenerRuleBackwardsCompatibility(t *testing.T) { lbName := fmt.Sprintf("testrule-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_listener_rule.static", Providers: testAccProviders, @@ -105,8 +107,11 @@ func TestAccAWSLBListenerRuleBackwardsCompatibility(t *testing.T) { resource.TestCheckResourceAttrSet("aws_alb_listener_rule.static", "listener_arn"), resource.TestCheckResourceAttr("aws_alb_listener_rule.static", "priority", "100"), resource.TestCheckResourceAttr("aws_alb_listener_rule.static", "action.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.order", "1"), resource.TestCheckResourceAttr("aws_alb_listener_rule.static", "action.0.type", "forward"), resource.TestCheckResourceAttrSet("aws_alb_listener_rule.static", "action.0.target_group_arn"), + resource.TestCheckResourceAttr("aws_alb_listener_rule.static", "action.0.redirect.#", "0"), + resource.TestCheckResourceAttr("aws_alb_listener_rule.static", "action.0.fixed_response.#", "0"), resource.TestCheckResourceAttr("aws_alb_listener_rule.static", "condition.#", "1"), resource.TestCheckResourceAttr("aws_alb_listener_rule.static", "condition.1366281676.field", "path-pattern"), resource.TestCheckResourceAttr("aws_alb_listener_rule.static", "condition.1366281676.values.#", "1"), @@ -117,12 +122,87 @@ func TestAccAWSLBListenerRuleBackwardsCompatibility(t *testing.T) { }) } +func TestAccAWSLBListenerRule_redirect(t *testing.T) { + var conf elbv2.Rule + lbName := fmt.Sprintf("testrule-redirect-%s", acctest.RandStringFromCharSet(14, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_listener_rule.static", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLBListenerRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerRuleConfig_redirect(lbName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerRuleExists("aws_lb_listener_rule.static", &conf), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.static", "arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.static", "listener_arn"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "priority", "100"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.order", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.type", "redirect"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.target_group_arn", ""), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.#", "1"), + 
resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.0.host", "#{host}"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.0.path", "/#{path}"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.0.port", "443"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.0.protocol", "HTTPS"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.0.query", "#{query}"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.0.status_code", "HTTP_301"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.fixed_response.#", "0"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.1366281676.field", "path-pattern"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.1366281676.values.#", "1"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.static", "condition.1366281676.values.0"), + ), + }, + }, + }) +} + +func TestAccAWSLBListenerRule_fixedResponse(t *testing.T) { + var conf elbv2.Rule + lbName := fmt.Sprintf("testrule-fixedresponse-%s", acctest.RandStringFromCharSet(9, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_listener_rule.static", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLBListenerRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerRuleConfig_fixedResponse(lbName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerRuleExists("aws_lb_listener_rule.static", &conf), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.static", "arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.static", "listener_arn"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "priority", "100"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.order", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.type", "fixed-response"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.target_group_arn", ""), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.redirect.#", "0"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.fixed_response.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.fixed_response.0.content_type", "text/plain"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.fixed_response.0.message_body", "Fixed response content"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "action.0.fixed_response.0.status_code", "200"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.1366281676.field", "path-pattern"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.static", "condition.1366281676.values.#", "1"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.static", "condition.1366281676.values.0"), + ), + }, + }, + }) +} + func TestAccAWSLBListenerRule_updateRulePriority(t *testing.T) { var rule elbv2.Rule lbName := fmt.Sprintf("testrule-basic-%s", acctest.RandStringFromCharSet(13, 
acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_listener_rule.static", Providers: testAccProviders, @@ -151,7 +231,7 @@ func TestAccAWSLBListenerRule_changeListenerRuleArnForcesNew(t *testing.T) { lbName := fmt.Sprintf("testrule-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_listener_rule.static", Providers: testAccProviders, @@ -178,7 +258,7 @@ func TestAccAWSLBListenerRule_multipleConditionThrowsError(t *testing.T) { lbName := fmt.Sprintf("testrule-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLBListenerRuleDestroy, @@ -196,7 +276,7 @@ func TestAccAWSLBListenerRule_priority(t *testing.T) { lbName := fmt.Sprintf("testrule-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_listener_rule.first", Providers: testAccProviders, @@ -265,6 +345,167 @@ func TestAccAWSLBListenerRule_priority(t *testing.T) { }) } +func TestAccAWSLBListenerRule_cognito(t *testing.T) { + var conf elbv2.Rule + lbName := fmt.Sprintf("testrule-cognito-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) + targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + certificateName := fmt.Sprintf("testcert-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + cognitoPrefix := fmt.Sprintf("testcog-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_listener_rule.cognito", + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSLBListenerRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerRuleConfig_cognito(lbName, targetGroupName, certificateName, cognitoPrefix), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerRuleExists("aws_lb_listener_rule.cognito", &conf), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.cognito", "arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.cognito", "listener_arn"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "priority", "100"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "action.#", "2"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "action.0.order", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "action.0.type", "authenticate-cognito"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.cognito", 
"action.0.authenticate_cognito.0.user_pool_arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.cognito", "action.0.authenticate_cognito.0.user_pool_client_id"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.cognito", "action.0.authenticate_cognito.0.user_pool_domain"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "action.0.authenticate_cognito.0.authentication_request_extra_params.%", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "action.0.authenticate_cognito.0.authentication_request_extra_params.param", "test"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "action.1.order", "2"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "action.1.type", "forward"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.cognito", "action.1.target_group_arn"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "condition.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "condition.1366281676.field", "path-pattern"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.cognito", "condition.1366281676.values.#", "1"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.cognito", "condition.1366281676.values.0"), + ), + }, + }, + }) +} + +func TestAccAWSLBListenerRule_oidc(t *testing.T) { + var conf elbv2.Rule + lbName := fmt.Sprintf("testrule-oidc-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) + targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + certificateName := fmt.Sprintf("testcert-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_listener_rule.oidc", + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSLBListenerRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerRuleConfig_oidc(lbName, targetGroupName, certificateName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerRuleExists("aws_lb_listener_rule.oidc", &conf), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.oidc", "arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.oidc", "listener_arn"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "priority", "100"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.#", "2"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.order", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.type", "authenticate-oidc"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.authenticate_oidc.0.authorization_endpoint", "https://example.com/authorization_endpoint"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.authenticate_oidc.0.client_id", "s6BhdRkqt3"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.authenticate_oidc.0.client_secret", "7Fjfp0ZBr1KtDRbnfVdmIw"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.authenticate_oidc.0.issuer", "https://example.com"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.authenticate_oidc.0.token_endpoint", "https://example.com/token_endpoint"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.authenticate_oidc.0.user_info_endpoint", "https://example.com/user_info_endpoint"), + 
resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.authenticate_oidc.0.authentication_request_extra_params.%", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.0.authenticate_oidc.0.authentication_request_extra_params.param", "test"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.1.order", "2"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "action.1.type", "forward"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.oidc", "action.1.target_group_arn"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "condition.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "condition.1366281676.field", "path-pattern"), + resource.TestCheckResourceAttr("aws_lb_listener_rule.oidc", "condition.1366281676.values.#", "1"), + resource.TestCheckResourceAttrSet("aws_lb_listener_rule.oidc", "condition.1366281676.values.0"), + ), + }, + }, + }) +} + +func TestAccAWSLBListenerRule_Action_Order(t *testing.T) { + var rule elbv2.Rule + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_lb_listener_rule.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSLBListenerRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerRuleConfig_Action_Order(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerRuleExists(resourceName, &rule), + resource.TestCheckResourceAttr(resourceName, "action.#", "2"), + resource.TestCheckResourceAttr(resourceName, "action.0.order", "1"), + resource.TestCheckResourceAttr(resourceName, "action.1.order", "2"), + ), + }, + }, + }) +} + +// Reference: https://github.com/terraform-providers/terraform-provider-aws/issues/6171 +func TestAccAWSLBListenerRule_Action_Order_Recreates(t *testing.T) { + var rule elbv2.Rule + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_lb_listener_rule.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSLBListenerRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerRuleConfig_Action_Order(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerRuleExists(resourceName, &rule), + resource.TestCheckResourceAttr(resourceName, "action.#", "2"), + resource.TestCheckResourceAttr(resourceName, "action.0.order", "1"), + resource.TestCheckResourceAttr(resourceName, "action.1.order", "2"), + testAccCheckAWSLBListenerRuleActionOrderDisappears(&rule, 1), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckAWSLBListenerRuleActionOrderDisappears(rule *elbv2.Rule, actionOrderToDelete int) resource.TestCheckFunc { + return func(s *terraform.State) error { + var newActions []*elbv2.Action + + for i, action := range rule.Actions { + if int(aws.Int64Value(action.Order)) == actionOrderToDelete { + newActions = append(rule.Actions[:i], rule.Actions[i+1:]...) 
+ break + } + } + + if len(newActions) == 0 { + return fmt.Errorf("Unable to find action order %d from actions: %#v", actionOrderToDelete, rule.Actions) + } + + conn := testAccProvider.Meta().(*AWSClient).elbv2conn + + input := &elbv2.ModifyRuleInput{ + Actions: newActions, + RuleArn: rule.RuleArn, + } + + _, err := conn.ModifyRule(input) + + return err + } +} + func testAccCheckAWSLbListenerRuleRecreated(t *testing.T, before, after *elbv2.Rule) resource.TestCheckFunc { return func(s *terraform.State) error { @@ -326,10 +567,10 @@ func testAccCheckAWSLBListenerRuleDestroy(s *terraform.State) error { } // Verify the error - if isRuleNotFound(err) { + if isAWSErr(err, elbv2.ErrCodeRuleNotFoundException, "") { return nil } else { - return errwrap.Wrapf("Unexpected error checking LB Listener Rule destroyed: {{err}}", err) + return fmt.Errorf("Unexpected error checking LB Listener Rule destroyed: %s", err) } } @@ -669,15 +910,18 @@ resource "aws_security_group" "alb_test" { }`, lbName, targetGroupName) } -func testAccAWSLBListenerRuleConfig_updateRulePriority(lbName, targetGroupName string) string { - return fmt.Sprintf(` -resource "aws_lb_listener_rule" "static" { +func testAccAWSLBListenerRuleConfig_redirect(lbName string) string { + return fmt.Sprintf(`resource "aws_lb_listener_rule" "static" { listener_arn = "${aws_lb_listener.front_end.arn}" - priority = 101 + priority = 100 action { - type = "forward" - target_group_arn = "${aws_lb_target_group.test.arn}" + type = "redirect" + redirect { + port = "443" + protocol = "HTTPS" + status_code = "HTTP_301" + } } condition { @@ -687,14 +931,18 @@ resource "aws_lb_listener_rule" "static" { } resource "aws_lb_listener" "front_end" { - load_balancer_arn = "${aws_lb.alb_test.id}" - protocol = "HTTP" - port = "80" + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTP" + port = "80" - default_action { - target_group_arn = "${aws_lb_target_group.test.id}" - type = "forward" - } + default_action { + type = "redirect" + redirect { + port = "443" + protocol = "HTTPS" + status_code = "HTTP_301" + } + } } resource "aws_lb" "alb_test" { @@ -707,25 +955,7 @@ resource "aws_lb" "alb_test" { enable_deletion_protection = false tags { - Name = "TestAccAWSALB_basic" - } -} - -resource "aws_lb_target_group" "test" { - name = "%s" - port = 8080 - protocol = "HTTP" - vpc_id = "${aws_vpc.alb_test.id}" - - health_check { - path = "/health" - interval = 60 - port = 8081 - protocol = "HTTP" - timeout = 3 - healthy_threshold = 3 - unhealthy_threshold = 3 - matcher = "200-299" + Name = "TestAccAWSALB_redirect" } } @@ -740,7 +970,7 @@ resource "aws_vpc" "alb_test" { cidr_block = "10.0.0.0/16" tags { - Name = "terraform-testacc-lb-listener-rule-update-rule-priority" + Name = "terraform-testacc-lb-listener-rule-redirect" } } @@ -752,7 +982,7 @@ resource "aws_subnet" "alb_test" { availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" tags { - Name = "tf-acc-lb-listener-rule-update-rule-priority-${count.index}" + Name = "tf-acc-lb-listener-rule-redirect-${count.index}" } } @@ -776,20 +1006,23 @@ resource "aws_security_group" "alb_test" { } tags { - Name = "TestAccAWSALB_basic" + Name = "TestAccAWSALB_redirect" } -}`, lbName, targetGroupName) +}`, lbName) } -func testAccAWSLBListenerRuleConfig_changeRuleArn(lbName, targetGroupName string) string { - return fmt.Sprintf(` -resource "aws_lb_listener_rule" "static" { - listener_arn = "${aws_lb_listener.front_end_ruleupdate.arn}" - priority = 101 +func 
testAccAWSLBListenerRuleConfig_fixedResponse(lbName string) string { + return fmt.Sprintf(`resource "aws_lb_listener_rule" "static" { + listener_arn = "${aws_lb_listener.front_end.arn}" + priority = 100 action { - type = "forward" - target_group_arn = "${aws_lb_target_group.test.arn}" + type = "fixed-response" + fixed_response { + content_type = "text/plain" + message_body = "Fixed response content" + status_code = "200" + } } condition { @@ -799,25 +1032,18 @@ resource "aws_lb_listener_rule" "static" { } resource "aws_lb_listener" "front_end" { - load_balancer_arn = "${aws_lb.alb_test.id}" - protocol = "HTTP" - port = "80" - - default_action { - target_group_arn = "${aws_lb_target_group.test.id}" - type = "forward" - } -} - -resource "aws_lb_listener" "front_end_ruleupdate" { - load_balancer_arn = "${aws_lb.alb_test.id}" - protocol = "HTTP" - port = "8080" + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTP" + port = "80" - default_action { - target_group_arn = "${aws_lb_target_group.test.id}" - type = "forward" - } + default_action { + type = "fixed-response" + fixed_response { + content_type = "text/plain" + message_body = "Fixed response content" + status_code = "200" + } + } } resource "aws_lb" "alb_test" { @@ -830,25 +1056,7 @@ resource "aws_lb" "alb_test" { enable_deletion_protection = false tags { - Name = "TestAccAWSALB_basic" - } -} - -resource "aws_lb_target_group" "test" { - name = "%s" - port = 8080 - protocol = "HTTP" - vpc_id = "${aws_vpc.alb_test.id}" - - health_check { - path = "/health" - interval = 60 - port = 8081 - protocol = "HTTP" - timeout = 3 - healthy_threshold = 3 - unhealthy_threshold = 3 - matcher = "200-299" + Name = "TestAccAWSALB_fixedResponse" } } @@ -863,7 +1071,7 @@ resource "aws_vpc" "alb_test" { cidr_block = "10.0.0.0/16" tags { - Name = "terraform-testacc-lb-listener-rule-change-rule-arn" + Name = "terraform-testacc-lb-listener-rule-fixedresponse" } } @@ -875,7 +1083,7 @@ resource "aws_subnet" "alb_test" { availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" tags { - Name = "tf-acc-lb-listener-rule-change-rule-arn-${count.index}" + Name = "tf-acc-lb-listener-rule-fixedresponse-${count.index}" } } @@ -899,13 +1107,28 @@ resource "aws_security_group" "alb_test" { } tags { - Name = "TestAccAWSALB_basic" + Name = "TestAccAWSALB_fixedresponse" } -}`, lbName, targetGroupName) +}`, lbName) } -func testAccAWSLBListenerRuleConfig_priorityBase(lbName, targetGroupName string) string { +func testAccAWSLBListenerRuleConfig_updateRulePriority(lbName, targetGroupName string) string { return fmt.Sprintf(` +resource "aws_lb_listener_rule" "static" { + listener_arn = "${aws_lb_listener.front_end.arn}" + priority = 101 + + action { + type = "forward" + target_group_arn = "${aws_lb_target_group.test.arn}" + } + + condition { + field = "path-pattern" + values = ["/static/*"] + } +} + resource "aws_lb_listener" "front_end" { load_balancer_arn = "${aws_lb.alb_test.id}" protocol = "HTTP" @@ -960,7 +1183,7 @@ resource "aws_vpc" "alb_test" { cidr_block = "10.0.0.0/16" tags { - Name = "terraform-testacc-lb-listener-rule-priority" + Name = "terraform-testacc-lb-listener-rule-update-rule-priority" } } @@ -972,7 +1195,7 @@ resource "aws_subnet" "alb_test" { availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" tags { - Name = "tf-acc-lb-listener-rule-priority-${count.index}" + Name = "tf-acc-lb-listener-rule-update-rule-priority-${count.index}" } } @@ -1001,10 +1224,11 @@ resource 
"aws_security_group" "alb_test" { }`, lbName, targetGroupName) } -func testAccAWSLBListenerRuleConfig_priorityFirst(lbName, targetGroupName string) string { - return testAccAWSLBListenerRuleConfig_priorityBase(lbName, targetGroupName) + fmt.Sprintf(` -resource "aws_lb_listener_rule" "first" { - listener_arn = "${aws_lb_listener.front_end.arn}" +func testAccAWSLBListenerRuleConfig_changeRuleArn(lbName, targetGroupName string) string { + return fmt.Sprintf(` +resource "aws_lb_listener_rule" "static" { + listener_arn = "${aws_lb_listener.front_end_ruleupdate.arn}" + priority = 101 action { type = "forward" @@ -1013,21 +1237,240 @@ resource "aws_lb_listener_rule" "first" { condition { field = "path-pattern" - values = ["/first/*"] + values = ["/static/*"] } } -resource "aws_lb_listener_rule" "third" { - listener_arn = "${aws_lb_listener.front_end.arn}" - priority = 3 - - action { - type = "forward" - target_group_arn = "${aws_lb_target_group.test.arn}" - } +resource "aws_lb_listener" "front_end" { + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTP" + port = "80" - condition { - field = "path-pattern" + default_action { + target_group_arn = "${aws_lb_target_group.test.id}" + type = "forward" + } +} + +resource "aws_lb_listener" "front_end_ruleupdate" { + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTP" + port = "8080" + + default_action { + target_group_arn = "${aws_lb_target_group.test.id}" + type = "forward" + } +} + +resource "aws_lb" "alb_test" { + name = "%s" + internal = true + security_groups = ["${aws_security_group.alb_test.id}"] + subnets = ["${aws_subnet.alb_test.*.id}"] + + idle_timeout = 30 + enable_deletion_protection = false + + tags { + Name = "TestAccAWSALB_basic" + } +} + +resource "aws_lb_target_group" "test" { + name = "%s" + port = 8080 + protocol = "HTTP" + vpc_id = "${aws_vpc.alb_test.id}" + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "alb_test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-lb-listener-rule-change-rule-arn" + } +} + +resource "aws_subnet" "alb_test" { + count = 2 + vpc_id = "${aws_vpc.alb_test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-lb-listener-rule-change-rule-arn-${count.index}" + } +} + +resource "aws_security_group" "alb_test" { + name = "allow_all_alb_test" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.alb_test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "TestAccAWSALB_basic" + } +}`, lbName, targetGroupName) +} + +func testAccAWSLBListenerRuleConfig_priorityBase(lbName, targetGroupName string) string { + return fmt.Sprintf(` +resource "aws_lb_listener" "front_end" { + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTP" + port = "80" + + default_action { + target_group_arn = "${aws_lb_target_group.test.id}" + type = "forward" + } +} + +resource "aws_lb" "alb_test" { + name = "%s" + internal = true + security_groups = 
["${aws_security_group.alb_test.id}"] + subnets = ["${aws_subnet.alb_test.*.id}"] + + idle_timeout = 30 + enable_deletion_protection = false + + tags { + Name = "TestAccAWSALB_basic" + } +} + +resource "aws_lb_target_group" "test" { + name = "%s" + port = 8080 + protocol = "HTTP" + vpc_id = "${aws_vpc.alb_test.id}" + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "alb_test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-lb-listener-rule-priority" + } +} + +resource "aws_subnet" "alb_test" { + count = 2 + vpc_id = "${aws_vpc.alb_test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-lb-listener-rule-priority-${count.index}" + } +} + +resource "aws_security_group" "alb_test" { + name = "allow_all_alb_test" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.alb_test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "TestAccAWSALB_basic" + } +}`, lbName, targetGroupName) +} + +func testAccAWSLBListenerRuleConfig_priorityFirst(lbName, targetGroupName string) string { + return testAccAWSLBListenerRuleConfig_priorityBase(lbName, targetGroupName) + fmt.Sprintf(` +resource "aws_lb_listener_rule" "first" { + listener_arn = "${aws_lb_listener.front_end.arn}" + + action { + type = "forward" + target_group_arn = "${aws_lb_target_group.test.arn}" + } + + condition { + field = "path-pattern" + values = ["/first/*"] + } +} + +resource "aws_lb_listener_rule" "third" { + listener_arn = "${aws_lb_listener.front_end.arn}" + priority = 3 + + action { + type = "forward" + target_group_arn = "${aws_lb_target_group.test.arn}" + } + + condition { + field = "path-pattern" values = ["/third/*"] } @@ -1149,3 +1592,486 @@ resource "aws_lb_listener_rule" "50000_in_use" { } `) } + +func testAccAWSLBListenerRuleConfig_cognito(lbName string, targetGroupName string, certificateName string, cognitoPrefix string) string { + return fmt.Sprintf(`resource "aws_lb_listener_rule" "cognito" { + listener_arn = "${aws_lb_listener.front_end.arn}" + priority = 100 + + action { + type = "authenticate-cognito" + authenticate_cognito { + user_pool_arn = "${aws_cognito_user_pool.test.arn}" + user_pool_client_id = "${aws_cognito_user_pool_client.test.id}" + user_pool_domain = "${aws_cognito_user_pool_domain.test.domain}" + + authentication_request_extra_params { + param = "test" + } + } + } + + action { + type = "forward" + target_group_arn = "${aws_lb_target_group.test.arn}" + } + + condition { + field = "path-pattern" + values = ["/static/*"] + } +} + +resource "aws_iam_server_certificate" "test" { + name = "terraform-test-cert-%s" + certificate_body = "${tls_self_signed_cert.test.cert_pem}" + private_key = "${tls_private_key.test.private_key_pem}" +} + +resource "tls_private_key" "test" { + algorithm = "RSA" +} + +resource "tls_self_signed_cert" "test" { + key_algorithm = "RSA" + private_key_pem = "${tls_private_key.test.private_key_pem}" + + subject { + common_name = "example.com" + organization = 
"ACME Examples, Inc" + } + + validity_period_hours = 12 + + allowed_uses = [ + "key_encipherment", + "digital_signature", + "server_auth", + ] +} + +resource "aws_lb_listener" "front_end" { + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTPS" + port = "443" + ssl_policy = "ELBSecurityPolicy-2015-05" + certificate_arn = "${aws_iam_server_certificate.test.arn}" + + default_action { + target_group_arn = "${aws_lb_target_group.test.id}" + type = "forward" + } +} + +resource "aws_lb" "alb_test" { + name = "%s" + internal = true + security_groups = ["${aws_security_group.alb_test.id}"] + subnets = ["${aws_subnet.alb_test.*.id}"] + + idle_timeout = 30 + enable_deletion_protection = false + + tags { + Name = "TestAccAWSALB_cognito" + } +} + +resource "aws_lb_target_group" "test" { + name = "%s" + port = 8080 + protocol = "HTTP" + vpc_id = "${aws_vpc.alb_test.id}" + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "alb_test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-lb-listener-rule-cognito" + } +} + +resource "aws_subnet" "alb_test" { + count = 2 + vpc_id = "${aws_vpc.alb_test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-lb-listener-rule-cognito-${count.index}" + } +} + +resource "aws_security_group" "alb_test" { + name = "allow_all_alb_test" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.alb_test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "TestAccAWSALB_cognito" + } +} + +resource "aws_cognito_user_pool" "test" { + name = "%s-pool" +} + +resource "aws_cognito_user_pool_client" "test" { + name = "%s-pool-client" + user_pool_id = "${aws_cognito_user_pool.test.id}" + generate_secret = true + allowed_oauth_flows_user_pool_client = true + allowed_oauth_flows = ["code", "implicit"] + allowed_oauth_scopes = ["phone", "email", "openid", "profile", "aws.cognito.signin.user.admin"] + callback_urls = ["https://www.example.com/callback", "https://www.example.com/redirect"] + default_redirect_uri = "https://www.example.com/redirect" + logout_urls = ["https://www.example.com/login"] +} + +resource "aws_cognito_user_pool_domain" "test" { + domain = "%s-pool-domain" + user_pool_id = "${aws_cognito_user_pool.test.id}" +}`, lbName, targetGroupName, certificateName, cognitoPrefix, cognitoPrefix, cognitoPrefix) +} + +func testAccAWSLBListenerRuleConfig_oidc(lbName string, targetGroupName string, certificateName string) string { + return fmt.Sprintf(`resource "aws_lb_listener_rule" "oidc" { + listener_arn = "${aws_lb_listener.front_end.arn}" + priority = 100 + + action { + type = "authenticate-oidc" + authenticate_oidc { + authorization_endpoint = "https://example.com/authorization_endpoint" + client_id = "s6BhdRkqt3" + client_secret = "7Fjfp0ZBr1KtDRbnfVdmIw" + issuer = "https://example.com" + token_endpoint = "https://example.com/token_endpoint" + user_info_endpoint = "https://example.com/user_info_endpoint" + + 
authentication_request_extra_params { + param = "test" + } + } + } + + action { + type = "forward" + target_group_arn = "${aws_lb_target_group.test.arn}" + } + + condition { + field = "path-pattern" + values = ["/static/*"] + } +} + +resource "aws_iam_server_certificate" "test" { + name = "terraform-test-cert-%s" + certificate_body = "${tls_self_signed_cert.test.cert_pem}" + private_key = "${tls_private_key.test.private_key_pem}" +} + +resource "tls_private_key" "test" { + algorithm = "RSA" +} + +resource "tls_self_signed_cert" "test" { + key_algorithm = "RSA" + private_key_pem = "${tls_private_key.test.private_key_pem}" + + subject { + common_name = "example.com" + organization = "ACME Examples, Inc" + } + + validity_period_hours = 12 + + allowed_uses = [ + "key_encipherment", + "digital_signature", + "server_auth", + ] +} + +resource "aws_lb_listener" "front_end" { + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTPS" + port = "443" + ssl_policy = "ELBSecurityPolicy-2015-05" + certificate_arn = "${aws_iam_server_certificate.test.arn}" + + default_action { + target_group_arn = "${aws_lb_target_group.test.id}" + type = "forward" + } +} + +resource "aws_lb" "alb_test" { + name = "%s" + internal = true + security_groups = ["${aws_security_group.alb_test.id}"] + subnets = ["${aws_subnet.alb_test.*.id}"] + + idle_timeout = 30 + enable_deletion_protection = false + + tags { + Name = "TestAccAWSALB_cognito" + } +} + +resource "aws_lb_target_group" "test" { + name = "%s" + port = 8080 + protocol = "HTTP" + vpc_id = "${aws_vpc.alb_test.id}" + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "alb_test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-lb-listener-rule-cognito" + } +} + +resource "aws_subnet" "alb_test" { + count = 2 + vpc_id = "${aws_vpc.alb_test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-lb-listener-rule-cognito-${count.index}" + } +} + +resource "aws_security_group" "alb_test" { + name = "allow_all_alb_test" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.alb_test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "TestAccAWSALB_cognito" + } +}`, lbName, targetGroupName, certificateName) +} + +func testAccAWSLBListenerRuleConfig_Action_Order(rName string) string { + return fmt.Sprintf(` +variable "rName" { + default = %q +} + +data "aws_availability_zones" "available" {} + +resource "aws_lb_listener_rule" "test" { + listener_arn = "${aws_lb_listener.test.arn}" + + action { + order = 1 + type = "authenticate-oidc" + + authenticate_oidc { + authorization_endpoint = "https://example.com/authorization_endpoint" + client_id = "s6BhdRkqt3" + client_secret = "7Fjfp0ZBr1KtDRbnfVdmIw" + issuer = "https://example.com" + token_endpoint = "https://example.com/token_endpoint" + user_info_endpoint = "https://example.com/user_info_endpoint" + + authentication_request_extra_params { + param = "test" + } + } + } + + action { + order = 2 + 
type = "forward" + target_group_arn = "${aws_lb_target_group.test.arn}" + } + + condition { + field = "path-pattern" + values = ["/static/*"] + } +} + +resource "aws_iam_server_certificate" "test" { + certificate_body = "${tls_self_signed_cert.test.cert_pem}" + name = "${var.rName}" + private_key = "${tls_private_key.test.private_key_pem}" +} + +resource "tls_private_key" "test" { + algorithm = "RSA" +} + +resource "tls_self_signed_cert" "test" { + key_algorithm = "RSA" + private_key_pem = "${tls_private_key.test.private_key_pem}" + validity_period_hours = 12 + + subject { + common_name = "example.com" + organization = "ACME Examples, Inc" + } + + allowed_uses = [ + "key_encipherment", + "digital_signature", + "server_auth", + ] +} + +resource "aws_lb_listener" "test" { + load_balancer_arn = "${aws_lb.test.id}" + protocol = "HTTPS" + port = "443" + ssl_policy = "ELBSecurityPolicy-2015-05" + certificate_arn = "${aws_iam_server_certificate.test.arn}" + + default_action { + target_group_arn = "${aws_lb_target_group.test.id}" + type = "forward" + } +} + +resource "aws_lb" "test" { + internal = true + name = "${var.rName}" + security_groups = ["${aws_security_group.test.id}"] + subnets = ["${aws_subnet.test.*.id}"] +} + +resource "aws_lb_target_group" "test" { + name = "${var.rName}" + port = 8080 + protocol = "HTTP" + vpc_id = "${aws_vpc.test.id}" + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } +} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "${var.rName}" + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "10.0.${count.index}.0/24" + map_public_ip_on_launch = true + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "${var.rName}" + } +} + +resource "aws_security_group" "test" { + name = "${var.rName}" + vpc_id = "${aws_vpc.test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "${var.rName}" + } +}`, rName) +} diff --git a/aws/resource_aws_lb_listener_test.go b/aws/resource_aws_lb_listener_test.go index 909ad90bdb5..ba46a8881ae 100644 --- a/aws/resource_aws_lb_listener_test.go +++ b/aws/resource_aws_lb_listener_test.go @@ -3,13 +3,10 @@ package aws import ( "errors" "fmt" - "math/rand" "testing" - "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -20,7 +17,7 @@ func TestAccAWSLBListener_basic(t *testing.T) { lbName := fmt.Sprintf("testlistener-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_listener.front_end", Providers: testAccProviders, @@ -35,8 +32,11 @@ func TestAccAWSLBListener_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_lb_listener.front_end", "protocol", "HTTP"), 
resource.TestCheckResourceAttr("aws_lb_listener.front_end", "port", "80"), resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.order", "1"), resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.type", "forward"), resource.TestCheckResourceAttrSet("aws_lb_listener.front_end", "default_action.0.target_group_arn"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.#", "0"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.fixed_response.#", "0"), ), }, }, @@ -48,7 +48,7 @@ func TestAccAWSLBListenerBackwardsCompatibility(t *testing.T) { lbName := fmt.Sprintf("testlistener-basic-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_listener.front_end", Providers: testAccProviders, @@ -63,8 +63,11 @@ func TestAccAWSLBListenerBackwardsCompatibility(t *testing.T) { resource.TestCheckResourceAttr("aws_alb_listener.front_end", "protocol", "HTTP"), resource.TestCheckResourceAttr("aws_alb_listener.front_end", "port", "80"), resource.TestCheckResourceAttr("aws_alb_listener.front_end", "default_action.#", "1"), + resource.TestCheckResourceAttr("aws_alb_listener.front_end", "default_action.0.order", "1"), resource.TestCheckResourceAttr("aws_alb_listener.front_end", "default_action.0.type", "forward"), resource.TestCheckResourceAttrSet("aws_alb_listener.front_end", "default_action.0.target_group_arn"), + resource.TestCheckResourceAttr("aws_alb_listener.front_end", "default_action.0.redirect.#", "0"), + resource.TestCheckResourceAttr("aws_alb_listener.front_end", "default_action.0.fixed_response.#", "0"), ), }, }, @@ -76,7 +79,7 @@ func TestAccAWSLBListener_https(t *testing.T) { lbName := fmt.Sprintf("testlistener-https-%s", acctest.RandStringFromCharSet(13, acctest.CharSetAlphaNum)) targetGroupName := fmt.Sprintf("testtargetgroup-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_listener.front_end", Providers: testAccProvidersWithTLS, @@ -91,8 +94,11 @@ func TestAccAWSLBListener_https(t *testing.T) { resource.TestCheckResourceAttr("aws_lb_listener.front_end", "protocol", "HTTPS"), resource.TestCheckResourceAttr("aws_lb_listener.front_end", "port", "443"), resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.order", "1"), resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.type", "forward"), resource.TestCheckResourceAttrSet("aws_lb_listener.front_end", "default_action.0.target_group_arn"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.#", "0"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.fixed_response.#", "0"), resource.TestCheckResourceAttrSet("aws_lb_listener.front_end", "certificate_arn"), resource.TestCheckResourceAttr("aws_lb_listener.front_end", "ssl_policy", "ELBSecurityPolicy-2015-05"), ), @@ -101,6 +107,225 @@ func TestAccAWSLBListener_https(t 
*testing.T) { }) } +func TestAccAWSLBListener_redirect(t *testing.T) { + var conf elbv2.Listener + lbName := fmt.Sprintf("testlistener-redirect-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_listener.front_end", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLBListenerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerConfig_redirect(lbName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerExists("aws_lb_listener.front_end", &conf), + resource.TestCheckResourceAttrSet("aws_lb_listener.front_end", "load_balancer_arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener.front_end", "arn"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "protocol", "HTTP"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "port", "80"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.order", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.type", "redirect"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.target_group_arn", ""), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.0.host", "#{host}"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.0.path", "/#{path}"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.0.port", "443"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.0.protocol", "HTTPS"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.0.query", "#{query}"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.0.status_code", "HTTP_301"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.fixed_response.#", "0"), + ), + }, + }, + }) +} + +func TestAccAWSLBListener_fixedResponse(t *testing.T) { + var conf elbv2.Listener + lbName := fmt.Sprintf("testlistener-fixedresponse-%s", acctest.RandStringFromCharSet(5, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_listener.front_end", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLBListenerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerConfig_fixedResponse(lbName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerExists("aws_lb_listener.front_end", &conf), + resource.TestCheckResourceAttrSet("aws_lb_listener.front_end", "load_balancer_arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener.front_end", "arn"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "protocol", "HTTP"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "port", "80"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.order", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.type", "fixed-response"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", 
"default_action.0.target_group_arn", ""), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.redirect.#", "0"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.fixed_response.#", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.fixed_response.0.content_type", "text/plain"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.fixed_response.0.message_body", "Fixed response content"), + resource.TestCheckResourceAttr("aws_lb_listener.front_end", "default_action.0.fixed_response.0.status_code", "200"), + ), + }, + }, + }) +} + +func TestAccAWSLBListener_cognito(t *testing.T) { + var conf elbv2.Listener + rName := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_listener.test", + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSLBListenerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerConfig_cognito(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerExists("aws_lb_listener.test", &conf), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "load_balancer_arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "arn"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "protocol", "HTTPS"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "port", "443"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.#", "2"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.order", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.type", "authenticate-cognito"), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "default_action.0.authenticate_cognito.0.user_pool_arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "default_action.0.authenticate_cognito.0.user_pool_client_id"), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "default_action.0.authenticate_cognito.0.user_pool_domain"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_cognito.0.authentication_request_extra_params.%", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_cognito.0.authentication_request_extra_params.param", "test"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.1.type", "forward"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.1.order", "2"), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "default_action.1.target_group_arn"), + ), + }, + }, + }) +} + +func TestAccAWSLBListener_oidc(t *testing.T) { + var conf elbv2.Listener + rName := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_listener.test", + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSLBListenerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerConfig_oidc(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerExists("aws_lb_listener.test", &conf), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "load_balancer_arn"), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "arn"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "protocol", "HTTPS"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "port", 
"443"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.#", "2"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.order", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.type", "authenticate-oidc"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_oidc.0.authorization_endpoint", "https://example.com/authorization_endpoint"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_oidc.0.client_id", "s6BhdRkqt3"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_oidc.0.client_secret", "7Fjfp0ZBr1KtDRbnfVdmIw"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_oidc.0.issuer", "https://example.com"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_oidc.0.token_endpoint", "https://example.com/token_endpoint"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_oidc.0.user_info_endpoint", "https://example.com/user_info_endpoint"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_oidc.0.authentication_request_extra_params.%", "1"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.0.authenticate_oidc.0.authentication_request_extra_params.param", "test"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.1.order", "2"), + resource.TestCheckResourceAttr("aws_lb_listener.test", "default_action.1.type", "forward"), + resource.TestCheckResourceAttrSet("aws_lb_listener.test", "default_action.1.target_group_arn"), + ), + }, + }, + }) +} + +func TestAccAWSLBListener_DefaultAction_Order(t *testing.T) { + var listener elbv2.Listener + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_lb_listener.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSLBListenerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerConfig_DefaultAction_Order(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerExists(resourceName, &listener), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "2"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.order", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.1.order", "2"), + ), + }, + }, + }) +} + +// Reference: https://github.com/terraform-providers/terraform-provider-aws/issues/6171 +func TestAccAWSLBListener_DefaultAction_Order_Recreates(t *testing.T) { + var listener elbv2.Listener + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_lb_listener.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProvidersWithTLS, + CheckDestroy: testAccCheckAWSLBListenerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBListenerConfig_DefaultAction_Order(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerExists(resourceName, &listener), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "2"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.order", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.1.order", "2"), + testAccCheckAWSLBListenerDefaultActionOrderDisappears(&listener, 1), + 
), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckAWSLBListenerDefaultActionOrderDisappears(listener *elbv2.Listener, actionOrderToDelete int) resource.TestCheckFunc { + return func(s *terraform.State) error { + var newDefaultActions []*elbv2.Action + + for i, action := range listener.DefaultActions { + if int(aws.Int64Value(action.Order)) == actionOrderToDelete { + newDefaultActions = append(listener.DefaultActions[:i], listener.DefaultActions[i+1:]...) + break + } + } + + if len(newDefaultActions) == 0 { + return fmt.Errorf("Unable to find default action order %d from default actions: %#v", actionOrderToDelete, listener.DefaultActions) + } + + conn := testAccProvider.Meta().(*AWSClient).elbv2conn + + input := &elbv2.ModifyListenerInput{ + DefaultActions: newDefaultActions, + ListenerArn: listener.ListenerArn, + } + + _, err := conn.ModifyListener(input) + + return err + } +} + func testAccCheckAWSLBListenerExists(n string, res *elbv2.Listener) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -152,10 +377,10 @@ func testAccCheckAWSLBListenerDestroy(s *terraform.State) error { } // Verify the error - if isListenerNotFound(err) { + if isAWSErr(err, elbv2.ErrCodeListenerNotFoundException, "") { return nil } else { - return errwrap.Wrapf("Unexpected error checking LB Listener destroyed: {{err}}", err) + return fmt.Errorf("Unexpected error checking LB Listener destroyed: %s", err) } } @@ -488,5 +713,584 @@ resource "tls_self_signed_cert" "example" { "server_auth", ] } -`, lbName, targetGroupName, rand.New(rand.NewSource(time.Now().UnixNano())).Int()) +`, lbName, targetGroupName, acctest.RandInt()) +} + +func testAccAWSLBListenerConfig_redirect(lbName string) string { + return fmt.Sprintf(`resource "aws_lb_listener" "front_end" { + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTP" + port = "80" + + default_action { + type = "redirect" + redirect { + port = "443" + protocol = "HTTPS" + status_code = "HTTP_301" + } + } +} + +resource "aws_lb" "alb_test" { + name = "%s" + internal = true + security_groups = ["${aws_security_group.alb_test.id}"] + subnets = ["${aws_subnet.alb_test.*.id}"] + + idle_timeout = 30 + enable_deletion_protection = false + + tags { + Name = "TestAccAWSALB_redirect" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "alb_test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-lb-listener-redirect" + } +} + +resource "aws_subnet" "alb_test" { + count = 2 + vpc_id = "${aws_vpc.alb_test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-lb-listener-redirect-${count.index}" + } +} + +resource "aws_security_group" "alb_test" { + name = "allow_all_alb_test" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.alb_test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "TestAccAWSALB_redirect" + } +}`, lbName) +} + +func testAccAWSLBListenerConfig_fixedResponse(lbName string) string { + return fmt.Sprintf(`resource "aws_lb_listener" "front_end" { + load_balancer_arn = "${aws_lb.alb_test.id}" + protocol = "HTTP" + port = "80" + + 
default_action { + type = "fixed-response" + fixed_response { + content_type = "text/plain" + message_body = "Fixed response content" + status_code = "200" + } + } +} + +resource "aws_lb" "alb_test" { + name = "%s" + internal = true + security_groups = ["${aws_security_group.alb_test.id}"] + subnets = ["${aws_subnet.alb_test.*.id}"] + + idle_timeout = 30 + enable_deletion_protection = false + + tags { + Name = "TestAccAWSALB_fixedresponse" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "alb_test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-lb-listener-fixedresponse" + } +} + +resource "aws_subnet" "alb_test" { + count = 2 + vpc_id = "${aws_vpc.alb_test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "tf-acc-lb-listener-fixedresponse-${count.index}" + } +} + +resource "aws_security_group" "alb_test" { + name = "allow_all_alb_test" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.alb_test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "TestAccAWSALB_fixedresponse" + } +}`, lbName) +} + +func testAccAWSLBListenerConfig_cognito(rName string) string { + return fmt.Sprintf(` +resource "aws_lb" "test" { + name = "%s" + internal = false + security_groups = ["${aws_security_group.test.id}"] + subnets = ["${aws_subnet.test.*.id}"] + enable_deletion_protection = false +} + +resource "aws_lb_target_group" "test" { + name = "%s" + port = 8080 + protocol = "HTTP" + vpc_id = "${aws_vpc.test.id}" + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_internet_gateway" "test" { + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_subnet" "test" { + count = 2 + vpc_id = "${aws_vpc.test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" +} + +resource "aws_security_group" "test" { + name = "%s" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_cognito_user_pool" "test" { + name = "%s" +} + +resource "aws_cognito_user_pool_client" "test" { + name = "%s" + user_pool_id = "${aws_cognito_user_pool.test.id}" + generate_secret = true + allowed_oauth_flows_user_pool_client = true + allowed_oauth_flows = ["code", "implicit"] + allowed_oauth_scopes = ["phone", "email", "openid", "profile", "aws.cognito.signin.user.admin"] + callback_urls = ["https://www.example.com/callback", "https://www.example.com/redirect"] + default_redirect_uri = "https://www.example.com/redirect" + logout_urls = ["https://www.example.com/login"] +} + +resource 
"aws_cognito_user_pool_domain" "test" { + domain = "%s" + user_pool_id = "${aws_cognito_user_pool.test.id}" +} + +resource "aws_iam_server_certificate" "test" { + name = "terraform-test-cert-%s" + certificate_body = "${tls_self_signed_cert.test.cert_pem}" + private_key = "${tls_private_key.test.private_key_pem}" +} + +resource "tls_private_key" "test" { + algorithm = "RSA" +} + +resource "tls_self_signed_cert" "test" { + key_algorithm = "RSA" + private_key_pem = "${tls_private_key.test.private_key_pem}" + + subject { + common_name = "example.com" + organization = "ACME Examples, Inc" + } + + validity_period_hours = 12 + + allowed_uses = [ + "key_encipherment", + "digital_signature", + "server_auth", + ] +} + +resource "aws_lb_listener" "test" { + load_balancer_arn = "${aws_lb.test.id}" + protocol = "HTTPS" + port = "443" + ssl_policy = "ELBSecurityPolicy-2015-05" + certificate_arn = "${aws_iam_server_certificate.test.arn}" + + default_action { + type = "authenticate-cognito" + authenticate_cognito { + user_pool_arn = "${aws_cognito_user_pool.test.arn}" + user_pool_client_id = "${aws_cognito_user_pool_client.test.id}" + user_pool_domain = "${aws_cognito_user_pool_domain.test.domain}" + + authentication_request_extra_params { + param = "test" + } + } + } + + default_action { + target_group_arn = "${aws_lb_target_group.test.id}" + type = "forward" + } +} +`, rName, rName, rName, rName, rName, rName, rName) +} + +func testAccAWSLBListenerConfig_oidc(rName string) string { + return fmt.Sprintf(` +resource "aws_lb" "test" { + name = "%s" + internal = false + security_groups = ["${aws_security_group.test.id}"] + subnets = ["${aws_subnet.test.*.id}"] + enable_deletion_protection = false +} + +resource "aws_lb_target_group" "test" { + name = "%s" + port = 8080 + protocol = "HTTP" + vpc_id = "${aws_vpc.test.id}" + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_internet_gateway" "test" { + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_subnet" "test" { + count = 2 + vpc_id = "${aws_vpc.test.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = true + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" +} + +resource "aws_security_group" "test" { + name = "%s" + description = "Used for ALB Testing" + vpc_id = "${aws_vpc.test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_iam_server_certificate" "test" { + name = "terraform-test-cert-%s" + certificate_body = "${tls_self_signed_cert.test.cert_pem}" + private_key = "${tls_private_key.test.private_key_pem}" +} + +resource "tls_private_key" "test" { + algorithm = "RSA" +} + +resource "tls_self_signed_cert" "test" { + key_algorithm = "RSA" + private_key_pem = "${tls_private_key.test.private_key_pem}" + + subject { + common_name = "example.com" + organization = "ACME Examples, Inc" + } + + validity_period_hours = 12 + + allowed_uses = [ + "key_encipherment", + "digital_signature", + "server_auth", + ] +} + +resource "aws_lb_listener" "test" { + load_balancer_arn = 
"${aws_lb.test.id}" + protocol = "HTTPS" + port = "443" + ssl_policy = "ELBSecurityPolicy-2015-05" + certificate_arn = "${aws_iam_server_certificate.test.arn}" + + default_action { + type = "authenticate-oidc" + authenticate_oidc { + authorization_endpoint = "https://example.com/authorization_endpoint" + client_id = "s6BhdRkqt3" + client_secret = "7Fjfp0ZBr1KtDRbnfVdmIw" + issuer = "https://example.com" + token_endpoint = "https://example.com/token_endpoint" + user_info_endpoint = "https://example.com/user_info_endpoint" + + authentication_request_extra_params { + param = "test" + } + } + } + + default_action { + target_group_arn = "${aws_lb_target_group.test.id}" + type = "forward" + } +} +`, rName, rName, rName, rName) +} + +func testAccAWSLBListenerConfig_DefaultAction_Order(rName string) string { + return fmt.Sprintf(` +variable "rName" { + default = %q +} + +data "aws_availability_zones" "available" {} + +resource "aws_lb_listener" "test" { + load_balancer_arn = "${aws_lb.test.id}" + protocol = "HTTPS" + port = "443" + ssl_policy = "ELBSecurityPolicy-2015-05" + certificate_arn = "${aws_iam_server_certificate.test.arn}" + + default_action { + order = 1 + type = "authenticate-oidc" + + authenticate_oidc { + authorization_endpoint = "https://example.com/authorization_endpoint" + client_id = "s6BhdRkqt3" + client_secret = "7Fjfp0ZBr1KtDRbnfVdmIw" + issuer = "https://example.com" + token_endpoint = "https://example.com/token_endpoint" + user_info_endpoint = "https://example.com/user_info_endpoint" + + authentication_request_extra_params { + param = "test" + } + } + } + + default_action { + order = 2 + type = "forward" + target_group_arn = "${aws_lb_target_group.test.arn}" + } +} + +resource "aws_iam_server_certificate" "test" { + certificate_body = "${tls_self_signed_cert.test.cert_pem}" + name = "${var.rName}" + private_key = "${tls_private_key.test.private_key_pem}" +} + +resource "tls_private_key" "test" { + algorithm = "RSA" +} + +resource "tls_self_signed_cert" "test" { + key_algorithm = "RSA" + private_key_pem = "${tls_private_key.test.private_key_pem}" + validity_period_hours = 12 + + subject { + common_name = "example.com" + organization = "ACME Examples, Inc" + } + + allowed_uses = [ + "key_encipherment", + "digital_signature", + "server_auth", + ] +} + +resource "aws_lb" "test" { + internal = true + name = "${var.rName}" + security_groups = ["${aws_security_group.test.id}"] + subnets = ["${aws_subnet.test.*.id}"] +} + +resource "aws_lb_target_group" "test" { + name = "${var.rName}" + port = 8080 + protocol = "HTTP" + vpc_id = "${aws_vpc.test.id}" + + health_check { + path = "/health" + interval = 60 + port = 8081 + protocol = "HTTP" + timeout = 3 + healthy_threshold = 3 + unhealthy_threshold = 3 + matcher = "200-299" + } +} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "${var.rName}" + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "10.0.${count.index}.0/24" + map_public_ip_on_launch = true + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = "${var.rName}" + } +} + +resource "aws_security_group" "test" { + name = "${var.rName}" + vpc_id = "${aws_vpc.test.id}" + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "${var.rName}" + } +}`, rName) } diff --git 
a/aws/resource_aws_lb_ssl_negotiation_policy.go b/aws/resource_aws_lb_ssl_negotiation_policy.go index 64a9f98ce39..eab8350ffdf 100644 --- a/aws/resource_aws_lb_ssl_negotiation_policy.go +++ b/aws/resource_aws_lb_ssl_negotiation_policy.go @@ -22,36 +22,36 @@ func resourceAwsLBSSLNegotiationPolicy() *schema.Resource { Delete: resourceAwsLBSSLNegotiationPolicyDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "load_balancer": &schema.Schema{ + "load_balancer": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "lb_port": &schema.Schema{ + "lb_port": { Type: schema.TypeInt, Required: true, ForceNew: true, }, - "attribute": &schema.Schema{ + "attribute": { Type: schema.TypeSet, Optional: true, ForceNew: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Required: true, }, diff --git a/aws/resource_aws_lb_ssl_negotiation_policy_test.go b/aws/resource_aws_lb_ssl_negotiation_policy_test.go index b4049e38d13..128435c739d 100644 --- a/aws/resource_aws_lb_ssl_negotiation_policy_test.go +++ b/aws/resource_aws_lb_ssl_negotiation_policy_test.go @@ -14,12 +14,12 @@ import ( ) func TestAccAWSLBSSLNegotiationPolicy_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProvidersWithTLS, CheckDestroy: testAccCheckLBSSLNegotiationPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccSslNegotiationPolicyConfig( fmt.Sprintf("tf-acctest-%s", acctest.RandString(10)), fmt.Sprintf("tf-test-lb-%s", acctest.RandString(5))), Check: resource.ComposeTestCheckFunc( @@ -49,12 +49,12 @@ func TestAccAWSLBSSLNegotiationPolicy_missingLB(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProvidersWithTLS, CheckDestroy: testAccCheckLBSSLNegotiationPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccSslNegotiationPolicyConfig(fmt.Sprintf("tf-acctest-%s", acctest.RandString(10)), lbName), Check: resource.ComposeTestCheckFunc( testAccCheckLBSSLNegotiationPolicy( @@ -65,7 +65,7 @@ func TestAccAWSLBSSLNegotiationPolicy_missingLB(t *testing.T) { "aws_lb_ssl_negotiation_policy.foo", "attribute.#", "7"), ), }, - resource.TestStep{ + { PreConfig: removeLB, Config: testAccSslNegotiationPolicyConfig(fmt.Sprintf("tf-acctest-%s", acctest.RandString(10)), lbName), }, diff --git a/aws/resource_aws_lb_target_group.go b/aws/resource_aws_lb_target_group.go index 072f52238d6..0130949d86e 100644 --- a/aws/resource_aws_lb_target_group.go +++ b/aws/resource_aws_lb_target_group.go @@ -9,9 +9,7 @@ import ( "strings" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/validation" @@ -47,13 +45,14 @@ func resourceAwsLbTargetGroup() *schema.Resource { Computed: true, ForceNew: true, ConflictsWith: []string{"name_prefix"}, - ValidateFunc: validateMaxLength(32), + ValidateFunc: validateLbTargetGroupName, }, "name_prefix": { - Type: 
schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: validateMaxLength(32 - resource.UniqueIDSuffixLength), + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateLbTargetGroupNamePrefix, }, "port": { @@ -64,10 +63,14 @@ func resourceAwsLbTargetGroup() *schema.Resource { }, "protocol": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validateLbListenerProtocol(), + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + elbv2.ProtocolEnumHttp, + elbv2.ProtocolEnumHttps, + elbv2.ProtocolEnumTcp, + }, true), }, "vpc_id": { @@ -83,6 +86,19 @@ func resourceAwsLbTargetGroup() *schema.Resource { ValidateFunc: validation.IntBetween(0, 3600), }, + "slow_start": { + Type: schema.TypeInt, + Optional: true, + Default: 0, + ValidateFunc: validateSlowStart, + }, + + "proxy_protocol_v2": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "target_type": { Type: schema.TypeString, Optional: true, @@ -153,7 +169,11 @@ func resourceAwsLbTargetGroup() *schema.Resource { StateFunc: func(v interface{}) string { return strings.ToUpper(v.(string)) }, - ValidateFunc: validateLbListenerProtocol(), + ValidateFunc: validation.StringInSlice([]string{ + elbv2.ProtocolEnumHttp, + elbv2.ProtocolEnumHttps, + elbv2.ProtocolEnumTcp, + }, true), }, "timeout": { @@ -241,15 +261,14 @@ func resourceAwsLbTargetGroupCreate(d *schema.ResourceData, meta interface{}) er resp, err := elbconn.CreateTargetGroup(params) if err != nil { - return errwrap.Wrapf("Error creating LB Target Group: {{err}}", err) + return fmt.Errorf("Error creating LB Target Group: %s", err) } if len(resp.TargetGroups) == 0 { return errors.New("Error creating LB Target Group: no groups returned in response") } - targetGroupArn := resp.TargetGroups[0].TargetGroupArn - d.SetId(*targetGroupArn) + d.SetId(aws.StringValue(resp.TargetGroups[0].TargetGroupArn)) return resourceAwsLbTargetGroupUpdate(d, meta) } @@ -261,12 +280,12 @@ func resourceAwsLbTargetGroupRead(d *schema.ResourceData, meta interface{}) erro TargetGroupArns: []*string{aws.String(d.Id())}, }) if err != nil { - if isTargetGroupNotFound(err) { + if isAWSErr(err, elbv2.ErrCodeTargetGroupNotFoundException, "") { log.Printf("[DEBUG] DescribeTargetGroups - removing %s from state", d.Id()) d.SetId("") return nil } - return errwrap.Wrapf("Error retrieving Target Group: {{err}}", err) + return fmt.Errorf("Error retrieving Target Group: %s", err) } if len(resp.TargetGroups) != 1 { @@ -280,7 +299,7 @@ func resourceAwsLbTargetGroupUpdate(d *schema.ResourceData, meta interface{}) er elbconn := meta.(*AWSClient).elbv2conn if err := setElbV2Tags(elbconn, d); err != nil { - return errwrap.Wrapf("Error Modifying Tags on LB Target Group: {{err}}", err) + return fmt.Errorf("Error Modifying Tags on LB Target Group: %s", err) } if d.HasChange("health_check") { @@ -319,7 +338,7 @@ func resourceAwsLbTargetGroupUpdate(d *schema.ResourceData, meta interface{}) er if params != nil { _, err := elbconn.ModifyTargetGroup(params) if err != nil { - return errwrap.Wrapf("Error modifying Target Group: {{err}}", err) + return fmt.Errorf("Error modifying Target Group: %s", err) } } } @@ -333,6 +352,20 @@ func resourceAwsLbTargetGroupUpdate(d *schema.ResourceData, meta interface{}) er }) } + if d.HasChange("slow_start") { + attrs = append(attrs, &elbv2.TargetGroupAttribute{ + Key: aws.String("slow_start.duration_seconds"), + Value: 
aws.String(fmt.Sprintf("%d", d.Get("slow_start").(int))), + }) + } + + if d.HasChange("proxy_protocol_v2") { + attrs = append(attrs, &elbv2.TargetGroupAttribute{ + Key: aws.String("proxy_protocol_v2.enabled"), + Value: aws.String(strconv.FormatBool(d.Get("proxy_protocol_v2").(bool))), + }) + } + // In CustomizeDiff we allow LB stickiness to be declared for TCP target // groups, so long as it's not enabled. This allows for better support for // modules, but also means we need to completely skip sending the data to the @@ -371,7 +404,7 @@ func resourceAwsLbTargetGroupUpdate(d *schema.ResourceData, meta interface{}) er _, err := elbconn.ModifyTargetGroupAttributes(params) if err != nil { - return errwrap.Wrapf("Error modifying Target Group Attributes: {{err}}", err) + return fmt.Errorf("Error modifying Target Group Attributes: %s", err) } } @@ -385,17 +418,12 @@ func resourceAwsLbTargetGroupDelete(d *schema.ResourceData, meta interface{}) er TargetGroupArn: aws.String(d.Id()), }) if err != nil { - return errwrap.Wrapf("Error deleting Target Group: {{err}}", err) + return fmt.Errorf("Error deleting Target Group: %s", err) } return nil } -func isTargetGroupNotFound(err error) bool { - elberr, ok := err.(awserr.Error) - return ok && elberr.Code() == "TargetGroupNotFound" -} - func validateAwsLbTargetGroupHealthCheckPath(v interface{}, k string) (ws []string, errors []error) { value := v.(string) if len(value) > 1024 { @@ -409,6 +437,19 @@ func validateAwsLbTargetGroupHealthCheckPath(v interface{}, k string) (ws []stri return } +func validateSlowStart(v interface{}, k string) (ws []string, errors []error) { + value := v.(int) + + // Check if the value is between 30-900 or 0 (seconds). + if value != 0 && !(value >= 30 && value <= 900) { + errors = append(errors, fmt.Errorf( + "%q contains an invalid Slow Start Duration \"%d\". 
"+ + "Valid intervals are 30-900 or 0 to disable.", + k, value)) + } + return +} + func validateAwsLbTargetGroupHealthCheckPort(v interface{}, k string) (ws []string, errors []error) { value := v.(string) @@ -455,29 +496,46 @@ func flattenAwsLbTargetGroupResource(d *schema.ResourceData, meta interface{}, t d.Set("target_type", targetGroup.TargetType) healthCheck := make(map[string]interface{}) - healthCheck["interval"] = *targetGroup.HealthCheckIntervalSeconds - healthCheck["port"] = *targetGroup.HealthCheckPort - healthCheck["protocol"] = *targetGroup.HealthCheckProtocol - healthCheck["timeout"] = *targetGroup.HealthCheckTimeoutSeconds - healthCheck["healthy_threshold"] = *targetGroup.HealthyThresholdCount - healthCheck["unhealthy_threshold"] = *targetGroup.UnhealthyThresholdCount + healthCheck["interval"] = int(aws.Int64Value(targetGroup.HealthCheckIntervalSeconds)) + healthCheck["port"] = aws.StringValue(targetGroup.HealthCheckPort) + healthCheck["protocol"] = aws.StringValue(targetGroup.HealthCheckProtocol) + healthCheck["timeout"] = int(aws.Int64Value(targetGroup.HealthCheckTimeoutSeconds)) + healthCheck["healthy_threshold"] = int(aws.Int64Value(targetGroup.HealthyThresholdCount)) + healthCheck["unhealthy_threshold"] = int(aws.Int64Value(targetGroup.UnhealthyThresholdCount)) if targetGroup.HealthCheckPath != nil { - healthCheck["path"] = *targetGroup.HealthCheckPath + healthCheck["path"] = aws.StringValue(targetGroup.HealthCheckPath) } - if targetGroup.Matcher.HttpCode != nil { - healthCheck["matcher"] = *targetGroup.Matcher.HttpCode + if targetGroup.Matcher != nil && targetGroup.Matcher.HttpCode != nil { + healthCheck["matcher"] = aws.StringValue(targetGroup.Matcher.HttpCode) } if err := d.Set("health_check", []interface{}{healthCheck}); err != nil { - log.Printf("[WARN] Error setting health check: %s", err) + return fmt.Errorf("error setting health_check: %s", err) } attrResp, err := elbconn.DescribeTargetGroupAttributes(&elbv2.DescribeTargetGroupAttributesInput{ TargetGroupArn: aws.String(d.Id()), }) if err != nil { - return errwrap.Wrapf("Error retrieving Target Group Attributes: {{err}}", err) + return fmt.Errorf("Error retrieving Target Group Attributes: %s", err) + } + + for _, attr := range attrResp.Attributes { + switch aws.StringValue(attr.Key) { + case "proxy_protocol_v2.enabled": + enabled, err := strconv.ParseBool(aws.StringValue(attr.Value)) + if err != nil { + return fmt.Errorf("Error converting proxy_protocol_v2.enabled to bool: %s", aws.StringValue(attr.Value)) + } + d.Set("proxy_protocol_v2", enabled) + case "slow_start.duration_seconds": + slowStart, err := strconv.Atoi(aws.StringValue(attr.Value)) + if err != nil { + return fmt.Errorf("Error converting slow_start.duration_seconds to int: %s", aws.StringValue(attr.Value)) + } + d.Set("slow_start", slowStart) + } } // We only read in the stickiness attributes if the target group is not @@ -489,13 +547,13 @@ func flattenAwsLbTargetGroupResource(d *schema.ResourceData, meta interface{}, t // This is a workaround to support module design where the module needs to // support HTTP and TCP target groups. 
switch { - case *targetGroup.Protocol != "TCP": + case aws.StringValue(targetGroup.Protocol) != "TCP": if err = flattenAwsLbTargetGroupStickiness(d, attrResp.Attributes); err != nil { return err } - case *targetGroup.Protocol == "TCP" && len(d.Get("stickiness").([]interface{})) < 1: + case aws.StringValue(targetGroup.Protocol) == "TCP" && len(d.Get("stickiness").([]interface{})) < 1: if err = d.Set("stickiness", []interface{}{}); err != nil { - return err + return fmt.Errorf("error setting stickiness: %s", err) } } @@ -503,12 +561,12 @@ func flattenAwsLbTargetGroupResource(d *schema.ResourceData, meta interface{}, t ResourceArns: []*string{aws.String(d.Id())}, }) if err != nil { - return errwrap.Wrapf("Error retrieving Target Group Tags: {{err}}", err) + return fmt.Errorf("Error retrieving Target Group Tags: %s", err) } for _, t := range tagsResp.TagDescriptions { - if *t.ResourceArn == d.Id() { + if aws.StringValue(t.ResourceArn) == d.Id() { if err := d.Set("tags", tagsToMapELBv2(t.Tags)); err != nil { - return err + return fmt.Errorf("error setting tags: %s", err) } } } @@ -519,25 +577,25 @@ func flattenAwsLbTargetGroupResource(d *schema.ResourceData, meta interface{}, t func flattenAwsLbTargetGroupStickiness(d *schema.ResourceData, attributes []*elbv2.TargetGroupAttribute) error { stickinessMap := map[string]interface{}{} for _, attr := range attributes { - switch *attr.Key { + switch aws.StringValue(attr.Key) { case "stickiness.enabled": - enabled, err := strconv.ParseBool(*attr.Value) + enabled, err := strconv.ParseBool(aws.StringValue(attr.Value)) if err != nil { - return fmt.Errorf("Error converting stickiness.enabled to bool: %s", *attr.Value) + return fmt.Errorf("Error converting stickiness.enabled to bool: %s", aws.StringValue(attr.Value)) } stickinessMap["enabled"] = enabled case "stickiness.type": - stickinessMap["type"] = *attr.Value + stickinessMap["type"] = aws.StringValue(attr.Value) case "stickiness.lb_cookie.duration_seconds": - duration, err := strconv.Atoi(*attr.Value) + duration, err := strconv.Atoi(aws.StringValue(attr.Value)) if err != nil { - return fmt.Errorf("Error converting stickiness.lb_cookie.duration_seconds to int: %s", *attr.Value) + return fmt.Errorf("Error converting stickiness.lb_cookie.duration_seconds to int: %s", aws.StringValue(attr.Value)) } stickinessMap["cookie_duration"] = duration case "deregistration_delay.timeout_seconds": - timeout, err := strconv.Atoi(*attr.Value) + timeout, err := strconv.Atoi(aws.StringValue(attr.Value)) if err != nil { - return fmt.Errorf("Error converting deregistration_delay.timeout_seconds to int: %s", *attr.Value) + return fmt.Errorf("Error converting deregistration_delay.timeout_seconds to int: %s", aws.StringValue(attr.Value)) } d.Set("deregistration_delay", timeout) } diff --git a/aws/resource_aws_lb_target_group_attachment.go b/aws/resource_aws_lb_target_group_attachment.go index 74eeaa850f0..d845a1d4020 100644 --- a/aws/resource_aws_lb_target_group_attachment.go +++ b/aws/resource_aws_lb_target_group_attachment.go @@ -5,9 +5,7 @@ import ( "log" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -71,7 +69,7 @@ func resourceAwsLbAttachmentCreate(d *schema.ResourceData, meta interface{}) err _, err := elbconn.RegisterTargets(params) if err != nil { - return 
errwrap.Wrapf("Error registering targets with target group: {{err}}", err) + return fmt.Errorf("Error registering targets with target group: %s", err) } d.SetId(resource.PrefixedUniqueId(fmt.Sprintf("%s-", d.Get("target_group_arn")))) @@ -100,12 +98,10 @@ func resourceAwsLbAttachmentDelete(d *schema.ResourceData, meta interface{}) err } _, err := elbconn.DeregisterTargets(params) - if err != nil && !isTargetGroupNotFound(err) { - return errwrap.Wrapf("Error deregistering Targets: {{err}}", err) + if err != nil && !isAWSErr(err, elbv2.ErrCodeTargetGroupNotFoundException, "") { + return fmt.Errorf("Error deregistering Targets: %s", err) } - d.SetId("") - return nil } @@ -131,17 +127,17 @@ func resourceAwsLbAttachmentRead(d *schema.ResourceData, meta interface{}) error Targets: []*elbv2.TargetDescription{target}, }) if err != nil { - if isTargetGroupNotFound(err) { + if isAWSErr(err, elbv2.ErrCodeTargetGroupNotFoundException, "") { log.Printf("[WARN] Target group does not exist, removing target attachment %s", d.Id()) d.SetId("") return nil } - if isInvalidTarget(err) { + if isAWSErr(err, elbv2.ErrCodeInvalidTargetException, "") { log.Printf("[WARN] Target does not exist, removing target attachment %s", d.Id()) d.SetId("") return nil } - return errwrap.Wrapf("Error reading Target Health: {{err}}", err) + return fmt.Errorf("Error reading Target Health: %s", err) } if len(resp.TargetHealthDescriptions) != 1 { @@ -152,8 +148,3 @@ func resourceAwsLbAttachmentRead(d *schema.ResourceData, meta interface{}) error return nil } - -func isInvalidTarget(err error) bool { - elberr, ok := err.(awserr.Error) - return ok && elberr.Code() == "InvalidTarget" -} diff --git a/aws/resource_aws_lb_target_group_attachment_test.go b/aws/resource_aws_lb_target_group_attachment_test.go index b6b0dc76b73..0cb548af6a7 100644 --- a/aws/resource_aws_lb_target_group_attachment_test.go +++ b/aws/resource_aws_lb_target_group_attachment_test.go @@ -8,7 +8,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -17,7 +16,7 @@ import ( func TestAccAWSLBTargetGroupAttachment_basic(t *testing.T) { targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -36,7 +35,7 @@ func TestAccAWSLBTargetGroupAttachment_basic(t *testing.T) { func TestAccAWSLBTargetGroupAttachmentBackwardsCompatibility(t *testing.T) { targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -55,7 +54,7 @@ func TestAccAWSLBTargetGroupAttachmentBackwardsCompatibility(t *testing.T) { func TestAccAWSLBTargetGroupAttachment_withoutPort(t *testing.T) { targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: 
"aws_lb_target_group.test", Providers: testAccProviders, @@ -74,7 +73,7 @@ func TestAccAWSLBTargetGroupAttachment_withoutPort(t *testing.T) { func TestAccAWSALBTargetGroupAttachment_ipAddress(t *testing.T) { targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -161,10 +160,10 @@ func testAccCheckAWSLBTargetGroupAttachmentDestroy(s *terraform.State) error { } // Verify the error - if isTargetGroupNotFound(err) || isInvalidTarget(err) { + if isAWSErr(err, elbv2.ErrCodeTargetGroupNotFoundException, "") || isAWSErr(err, elbv2.ErrCodeInvalidTargetException, "") { return nil } else { - return errwrap.Wrapf("Unexpected error checking LB destroyed: {{err}}", err) + return fmt.Errorf("Unexpected error checking LB destroyed: %s", err) } } diff --git a/aws/resource_aws_lb_target_group_test.go b/aws/resource_aws_lb_target_group_test.go index 259ba5b314a..6948ebb3906 100644 --- a/aws/resource_aws_lb_target_group_test.go +++ b/aws/resource_aws_lb_target_group_test.go @@ -3,17 +3,81 @@ package aws import ( "errors" "fmt" + "log" "regexp" + "strings" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_lb_target_group", &resource.Sweeper{ + Name: "aws_lb_target_group", + F: testSweepLBTargetGroups, + Dependencies: []string{ + "aws_lb", + }, + }) +} + +func testSweepLBTargetGroups(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).elbv2conn + + prefixes := []string{ + "tf-", + "tf-test-", + "tf-acc-test-", + "test", + } + + err = conn.DescribeTargetGroupsPages(&elbv2.DescribeTargetGroupsInput{}, func(page *elbv2.DescribeTargetGroupsOutput, isLast bool) bool { + if page == nil || len(page.TargetGroups) == 0 { + log.Print("[DEBUG] No LB Target Groups to sweep") + return false + } + + for _, targetGroup := range page.TargetGroups { + name := aws.StringValue(targetGroup.TargetGroupName) + skip := true + for _, prefix := range prefixes { + if strings.HasPrefix(name, prefix) { + skip = false + break + } + } + if skip { + log.Printf("[INFO] Skipping LB Target Group: %s", name) + continue + } + log.Printf("[INFO] Deleting LB Target Group: %s", name) + _, err := conn.DeleteTargetGroup(&elbv2.DeleteTargetGroupInput{ + TargetGroupArn: targetGroup.TargetGroupArn, + }) + if err != nil { + log.Printf("[ERROR] Failed to delete LB Target Group (%s): %s", name, err) + } + } + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping LB Target Group sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving LB Target Groups: %s", err) + } + return nil +} + func TestLBTargetGroupCloudwatchSuffixFromARN(t *testing.T) { cases := []struct { name string @@ -49,7 +113,7 @@ func TestAccAWSLBTargetGroup_basic(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, 
resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -65,6 +129,7 @@ func TestAccAWSLBTargetGroup_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_lb_target_group.test", "protocol", "HTTPS"), resource.TestCheckResourceAttrSet("aws_lb_target_group.test", "vpc_id"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "deregistration_delay", "200"), + resource.TestCheckResourceAttr("aws_lb_target_group.test", "slow_start", "0"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "stickiness.#", "1"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "stickiness.0.enabled", "true"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "stickiness.0.type", "lb_cookie"), @@ -90,7 +155,7 @@ func TestAccAWSLBTargetGroup_networkLB_TargetGroup(t *testing.T) { var confBefore, confAfter elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -106,6 +171,7 @@ func TestAccAWSLBTargetGroup_networkLB_TargetGroup(t *testing.T) { resource.TestCheckResourceAttr("aws_lb_target_group.test", "protocol", "TCP"), resource.TestCheckResourceAttrSet("aws_lb_target_group.test", "vpc_id"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "deregistration_delay", "200"), + resource.TestCheckResourceAttr("aws_lb_target_group.test", "proxy_protocol_v2", "false"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "health_check.#", "1"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "health_check.0.interval", "10"), testAccCheckAWSLBTargetGroupHealthCheckInterval(&confBefore, 10), @@ -160,12 +226,40 @@ func TestAccAWSLBTargetGroup_networkLB_TargetGroup(t *testing.T) { }) } +func TestAccAWSLBTargetGroup_networkLB_TargetGroupWithProxy(t *testing.T) { + var confBefore, confAfter elbv2.TargetGroup + targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_lb_target_group.test", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLBTargetGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLBTargetGroupConfig_typeTCP(targetGroupName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBTargetGroupExists("aws_lb_target_group.test", &confBefore), + resource.TestCheckResourceAttr("aws_lb_target_group.test", "proxy_protocol_v2", "false"), + ), + }, + { + Config: testAccAWSLBTargetGroupConfig_typeTCP_withProxyProtocol(targetGroupName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBTargetGroupExists("aws_lb_target_group.test", &confAfter), + resource.TestCheckResourceAttr("aws_lb_target_group.test", "proxy_protocol_v2", "true"), + ), + }, + }, + }) +} + func TestAccAWSLBTargetGroup_TCP_HTTPHealthCheck(t *testing.T) { var confBefore, confAfter elbv2.TargetGroup rString := acctest.RandString(8) targetGroupName := fmt.Sprintf("test-tg-tcp-http-hc-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: 
testAccProviders, @@ -231,7 +325,7 @@ func TestAccAWSLBTargetGroupBackwardsCompatibility(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb_target_group.test", Providers: testAccProviders, @@ -247,6 +341,7 @@ func TestAccAWSLBTargetGroupBackwardsCompatibility(t *testing.T) { resource.TestCheckResourceAttr("aws_alb_target_group.test", "protocol", "HTTPS"), resource.TestCheckResourceAttrSet("aws_alb_target_group.test", "vpc_id"), resource.TestCheckResourceAttr("aws_alb_target_group.test", "deregistration_delay", "200"), + resource.TestCheckResourceAttr("aws_alb_target_group.test", "slow_start", "0"), resource.TestCheckResourceAttr("aws_alb_target_group.test", "stickiness.#", "1"), resource.TestCheckResourceAttr("aws_alb_target_group.test", "stickiness.0.enabled", "true"), resource.TestCheckResourceAttr("aws_alb_target_group.test", "stickiness.0.type", "lb_cookie"), @@ -271,7 +366,7 @@ func TestAccAWSLBTargetGroupBackwardsCompatibility(t *testing.T) { func TestAccAWSLBTargetGroup_namePrefix(t *testing.T) { var conf elbv2.TargetGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -291,7 +386,7 @@ func TestAccAWSLBTargetGroup_namePrefix(t *testing.T) { func TestAccAWSLBTargetGroup_generatedName(t *testing.T) { var conf elbv2.TargetGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -312,7 +407,7 @@ func TestAccAWSLBTargetGroup_changeNameForceNew(t *testing.T) { targetGroupNameBefore := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) targetGroupNameAfter := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(4, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -340,7 +435,7 @@ func TestAccAWSLBTargetGroup_changeProtocolForceNew(t *testing.T) { var before, after elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -368,7 +463,7 @@ func TestAccAWSLBTargetGroup_changePortForceNew(t *testing.T) { var before, after elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -396,7 +491,7 @@ func TestAccAWSLBTargetGroup_changeVpcForceNew(t *testing.T) { var before, after elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, 
resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -422,7 +517,7 @@ func TestAccAWSLBTargetGroup_tags(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -453,7 +548,7 @@ func TestAccAWSLBTargetGroup_updateHealthCheck(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -521,7 +616,7 @@ func TestAccAWSLBTargetGroup_updateSticknessEnabled(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -606,7 +701,7 @@ func TestAccAWSLBTargetGroup_defaults_application(t *testing.T) { var conf elbv2.TargetGroup targetGroupName := fmt.Sprintf("test-target-group-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -622,6 +717,7 @@ func TestAccAWSLBTargetGroup_defaults_application(t *testing.T) { resource.TestCheckResourceAttr("aws_lb_target_group.test", "protocol", "HTTP"), resource.TestCheckResourceAttrSet("aws_lb_target_group.test", "vpc_id"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "deregistration_delay", "200"), + resource.TestCheckResourceAttr("aws_lb_target_group.test", "slow_start", "0"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "health_check.#", "1"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "health_check.0.interval", "10"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "health_check.0.port", "8081"), @@ -664,7 +760,7 @@ func TestAccAWSLBTargetGroup_defaults_network(t *testing.T) { protocol = "TCP" ` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -692,6 +788,7 @@ func TestAccAWSLBTargetGroup_defaults_network(t *testing.T) { resource.TestCheckResourceAttr("aws_lb_target_group.test", "protocol", "TCP"), resource.TestCheckResourceAttrSet("aws_lb_target_group.test", "vpc_id"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "deregistration_delay", "200"), + resource.TestCheckResourceAttr("aws_lb_target_group.test", "slow_start", "0"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "health_check.#", "1"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "health_check.0.interval", "10"), resource.TestCheckResourceAttr("aws_lb_target_group.test", "health_check.0.port", "8081"), @@ -710,7 +807,7 @@ func TestAccAWSLBTargetGroup_defaults_network(t *testing.T) { func 
TestAccAWSLBTargetGroup_stickinessWithTCPDisabled(t *testing.T) { var conf elbv2.TargetGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb_target_group.test", Providers: testAccProviders, @@ -731,7 +828,7 @@ func TestAccAWSLBTargetGroup_stickinessWithTCPDisabled(t *testing.T) { } func TestAccAWSLBTargetGroup_stickinessWithTCPEnabledShouldError(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, Steps: []resource.TestStep{ @@ -851,10 +948,10 @@ func testAccCheckAWSLBTargetGroupDestroy(s *terraform.State) error { } // Verify the error - if isTargetGroupNotFound(err) { + if isAWSErr(err, elbv2.ErrCodeTargetGroupNotFoundException, "") { return nil } else { - return errwrap.Wrapf("Unexpected error checking ALB destroyed: {{err}}", err) + return fmt.Errorf("Unexpected error checking ALB destroyed: %s", err) } } @@ -870,6 +967,7 @@ resource "aws_lb_target_group" "test" { vpc_id = "${aws_vpc.test.id}" deregistration_delay = 200 + slow_start = 0 # HTTP Only stickiness { @@ -907,6 +1005,7 @@ resource "aws_lb_target_group" "test" { vpc_id = "${aws_vpc.test.id}" deregistration_delay = 200 + slow_start = 0 health_check { %s @@ -934,6 +1033,7 @@ func testAccAWSLBTargetGroupConfig_basic(targetGroupName string) string { vpc_id = "${aws_vpc.test.id}" deregistration_delay = 200 + slow_start = 0 stickiness { type = "lb_cookie" @@ -973,6 +1073,7 @@ func testAccAWSLBTargetGroupConfigBackwardsCompatibility(targetGroupName string) vpc_id = "${aws_vpc.test.id}" deregistration_delay = 200 + slow_start = 0 stickiness { type = "lb_cookie" @@ -1235,6 +1336,38 @@ resource "aws_vpc" "test" { }`, targetGroupName) } +func testAccAWSLBTargetGroupConfig_typeTCP_withProxyProtocol(targetGroupName string) string { + return fmt.Sprintf(`resource "aws_lb_target_group" "test" { + name = "%s" + port = 8082 + protocol = "TCP" + vpc_id = "${aws_vpc.test.id}" + + proxy_protocol_v2 = "true" + deregistration_delay = 200 + + health_check { + interval = 10 + port = "traffic-port" + protocol = "TCP" + healthy_threshold = 3 + unhealthy_threshold = 3 + } + + tags { + Name = "TestAcc_networkLB_TargetGroup" + } +} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "terraform-testacc-lb-target-group-type-tcp" + } +}`, targetGroupName) +} + func testAccAWSLBTargetGroupConfig_typeTCPInvalidThreshold(targetGroupName string) string { return fmt.Sprintf(`resource "aws_lb_target_group" "test" { name = "%s" diff --git a/aws/resource_aws_lb_test.go b/aws/resource_aws_lb_test.go index 59a1dd70a07..6a5972eaa5d 100644 --- a/aws/resource_aws_lb_test.go +++ b/aws/resource_aws_lb_test.go @@ -3,17 +3,79 @@ package aws import ( "errors" "fmt" + "log" "regexp" + "strings" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_lb", &resource.Sweeper{ + Name: "aws_lb", + F: testSweepLBs, + }) +} + +func testSweepLBs(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).elbv2conn + + prefixes := []string{ 
+ "tf-", + "tf-test-", + "tf-acc-test-", + "test-", + "testacc", + } + + err = conn.DescribeLoadBalancersPages(&elbv2.DescribeLoadBalancersInput{}, func(page *elbv2.DescribeLoadBalancersOutput, isLast bool) bool { + if page == nil || len(page.LoadBalancers) == 0 { + log.Print("[DEBUG] No LBs to sweep") + return false + } + + for _, loadBalancer := range page.LoadBalancers { + name := aws.StringValue(loadBalancer.LoadBalancerName) + skip := true + for _, prefix := range prefixes { + if strings.HasPrefix(name, prefix) { + skip = false + break + } + } + if skip { + log.Printf("[INFO] Skipping LB: %s", name) + continue + } + log.Printf("[INFO] Deleting LB: %s", name) + _, err := conn.DeleteLoadBalancer(&elbv2.DeleteLoadBalancerInput{ + LoadBalancerArn: loadBalancer.LoadBalancerArn, + }) + if err != nil { + log.Printf("[ERROR] Failed to delete LB (%s): %s", name, err) + } + } + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping LB sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving LBs: %s", err) + } + return nil +} + func TestLBCloudwatchSuffixFromARN(t *testing.T) { cases := []struct { name string @@ -49,7 +111,7 @@ func TestAccAWSLB_basic(t *testing.T) { var conf elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -83,7 +145,7 @@ func TestAccAWSLB_networkLoadbalancerBasic(t *testing.T) { var conf elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -113,7 +175,7 @@ func TestAccAWSLB_networkLoadbalancerEIP(t *testing.T) { var conf elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLBDestroy, @@ -141,7 +203,7 @@ func TestAccAWSLBBackwardsCompatibility(t *testing.T) { var conf elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_alb.lb_test", Providers: testAccProviders, @@ -158,7 +220,6 @@ func TestAccAWSLBBackwardsCompatibility(t *testing.T) { resource.TestCheckResourceAttr("aws_alb.lb_test", "tags.%", "1"), resource.TestCheckResourceAttr("aws_alb.lb_test", "tags.Name", "TestAccAWSALB_basic"), resource.TestCheckResourceAttr("aws_alb.lb_test", "enable_deletion_protection", "false"), - resource.TestCheckResourceAttr("aws_alb.lb_test", "enable_cross_zone_load_balancing", "false"), resource.TestCheckResourceAttr("aws_alb.lb_test", "idle_timeout", "30"), resource.TestCheckResourceAttr("aws_alb.lb_test", "ip_address_type", "ipv4"), resource.TestCheckResourceAttr("aws_alb.lb_test", "load_balancer_type", "application"), @@ -175,7 +236,7 @@ func TestAccAWSLBBackwardsCompatibility(t *testing.T) { func TestAccAWSLB_generatedName(t 
*testing.T) { var conf elbv2.LoadBalancer - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -195,7 +256,7 @@ func TestAccAWSLB_generatedName(t *testing.T) { func TestAccAWSLB_generatesNameForZeroValue(t *testing.T) { var conf elbv2.LoadBalancer - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -215,7 +276,7 @@ func TestAccAWSLB_generatesNameForZeroValue(t *testing.T) { func TestAccAWSLB_namePrefix(t *testing.T) { var conf elbv2.LoadBalancer - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -238,7 +299,7 @@ func TestAccAWSLB_tags(t *testing.T) { var conf elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -269,7 +330,7 @@ func TestAccAWSLB_networkLoadbalancer_updateCrossZone(t *testing.T) { var pre, mid, post elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-nlbcz-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -279,6 +340,7 @@ func TestAccAWSLB_networkLoadbalancer_updateCrossZone(t *testing.T) { Config: testAccAWSLBConfig_networkLoadbalancer(lbName, true), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &pre), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "load_balancing.cross_zone.enabled", "true"), resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_cross_zone_load_balancing", "true"), ), }, @@ -286,6 +348,7 @@ func TestAccAWSLB_networkLoadbalancer_updateCrossZone(t *testing.T) { Config: testAccAWSLBConfig_networkLoadbalancer(lbName, false), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &mid), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "load_balancing.cross_zone.enabled", "false"), resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_cross_zone_load_balancing", "false"), testAccCheckAWSlbARNs(&pre, &mid), ), @@ -294,6 +357,7 @@ func TestAccAWSLB_networkLoadbalancer_updateCrossZone(t *testing.T) { Config: testAccAWSLBConfig_networkLoadbalancer(lbName, true), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &post), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "load_balancing.cross_zone.enabled", "true"), resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_cross_zone_load_balancing", "true"), testAccCheckAWSlbARNs(&mid, &post), ), @@ -306,32 +370,35 @@ func TestAccAWSLB_applicationLoadBalancer_updateHttp2(t *testing.T) { var pre, mid, post elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawsalb-http2-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, CheckDestroy: 
testAccCheckAWSLBDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSLBConfig_enableHttp2(lbName, true), + Config: testAccAWSLBConfig_enableHttp2(lbName, false), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &pre), - resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_http2", "true"), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "routing.http2.enabled", "false"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_http2", "false"), ), }, { - Config: testAccAWSLBConfig_enableHttp2(lbName, false), + Config: testAccAWSLBConfig_enableHttp2(lbName, true), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &mid), - resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_http2", "false"), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "routing.http2.enabled", "true"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_http2", "true"), testAccCheckAWSlbARNs(&pre, &mid), ), }, { - Config: testAccAWSLBConfig_enableHttp2(lbName, true), + Config: testAccAWSLBConfig_enableHttp2(lbName, false), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &post), - resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_http2", "true"), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "routing.http2.enabled", "false"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_http2", "false"), testAccCheckAWSlbARNs(&mid, &post), ), }, @@ -343,7 +410,7 @@ func TestAccAWSLB_applicationLoadBalancer_updateDeletionProtection(t *testing.T) var pre, mid, post elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawsalb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -353,6 +420,7 @@ func TestAccAWSLB_applicationLoadBalancer_updateDeletionProtection(t *testing.T) Config: testAccAWSLBConfig_enableDeletionProtection(lbName, false), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &pre), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "deletion_protection.enabled", "false"), resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_deletion_protection", "false"), ), }, @@ -360,6 +428,7 @@ func TestAccAWSLB_applicationLoadBalancer_updateDeletionProtection(t *testing.T) Config: testAccAWSLBConfig_enableDeletionProtection(lbName, true), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &mid), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "deletion_protection.enabled", "true"), resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_deletion_protection", "true"), testAccCheckAWSlbARNs(&pre, &mid), ), @@ -368,6 +437,7 @@ func TestAccAWSLB_applicationLoadBalancer_updateDeletionProtection(t *testing.T) Config: testAccAWSLBConfig_enableDeletionProtection(lbName, false), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &post), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "deletion_protection.enabled", "false"), resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_deletion_protection", "false"), testAccCheckAWSlbARNs(&mid, &post), ), @@ -380,7 +450,7 @@ func TestAccAWSLB_updatedSecurityGroups(t *testing.T) { var pre, post elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - 
resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -409,7 +479,7 @@ func TestAccAWSLB_updatedSubnets(t *testing.T) { var pre, post elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -438,7 +508,7 @@ func TestAccAWSLB_updatedIpAddressType(t *testing.T) { var pre, post elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -469,7 +539,7 @@ func TestAccAWSLB_noSecurityGroup(t *testing.T) { var conf elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-nosg-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -500,8 +570,9 @@ func TestAccAWSLB_accesslogs(t *testing.T) { var conf elbv2.LoadBalancer bucketName := fmt.Sprintf("testaccawslbaccesslogs-%s", acctest.RandStringFromCharSet(6, acctest.CharSetAlphaNum)) lbName := fmt.Sprintf("testaccawslbaccesslog-%s", acctest.RandStringFromCharSet(4, acctest.CharSetAlpha)) + bucketPrefix := "testAccAWSALBConfig_accessLogs" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -511,6 +582,9 @@ func TestAccAWSLB_accesslogs(t *testing.T) { Config: testAccAWSLBConfig_basic(lbName), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &conf), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.enabled", "false"), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.bucket", ""), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.prefix", ""), resource.TestCheckResourceAttr("aws_lb.lb_test", "name", lbName), resource.TestCheckResourceAttr("aws_lb.lb_test", "internal", "true"), resource.TestCheckResourceAttr("aws_lb.lb_test", "subnets.#", "2"), @@ -526,9 +600,12 @@ func TestAccAWSLB_accesslogs(t *testing.T) { ), }, { - Config: testAccAWSLBConfig_accessLogs(true, lbName, bucketName), + Config: testAccAWSLBConfig_accessLogs(true, lbName, bucketName, bucketPrefix), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &conf), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.enabled", "true"), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.bucket", bucketName), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.prefix", bucketPrefix), resource.TestCheckResourceAttr("aws_lb.lb_test", "name", lbName), resource.TestCheckResourceAttr("aws_lb.lb_test", "internal", "true"), resource.TestCheckResourceAttr("aws_lb.lb_test", "subnets.#", "2"), @@ -542,15 +619,43 @@ func TestAccAWSLB_accesslogs(t *testing.T) { resource.TestCheckResourceAttrSet("aws_lb.lb_test", "dns_name"), resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.#", "1"), 
resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.0.bucket", bucketName), - resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.0.prefix", "testAccAWSALBConfig_accessLogs"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.0.prefix", bucketPrefix), resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.0.enabled", "true"), resource.TestCheckResourceAttrSet("aws_lb.lb_test", "arn"), ), }, { - Config: testAccAWSLBConfig_accessLogs(false, lbName, bucketName), + Config: testAccAWSLBConfig_accessLogs(true, lbName, bucketName, ""), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBExists("aws_lb.lb_test", &conf), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.enabled", "true"), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.bucket", bucketName), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.prefix", ""), + resource.TestCheckResourceAttr("aws_lb.lb_test", "name", lbName), + resource.TestCheckResourceAttr("aws_lb.lb_test", "internal", "true"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "subnets.#", "2"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "security_groups.#", "1"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "tags.Name", "TestAccAWSALB_basic1"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "enable_deletion_protection", "false"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "idle_timeout", "50"), + resource.TestCheckResourceAttrSet("aws_lb.lb_test", "vpc_id"), + resource.TestCheckResourceAttrSet("aws_lb.lb_test", "zone_id"), + resource.TestCheckResourceAttrSet("aws_lb.lb_test", "dns_name"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.#", "1"), + resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.0.bucket", bucketName), + resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.0.prefix", ""), + resource.TestCheckResourceAttr("aws_lb.lb_test", "access_logs.0.enabled", "true"), + resource.TestCheckResourceAttrSet("aws_lb.lb_test", "arn"), + ), + }, + { + Config: testAccAWSLBConfig_accessLogs(false, lbName, bucketName, ""), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBExists("aws_lb.lb_test", &conf), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.enabled", "false"), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.bucket", bucketName), + testAccCheckAWSLBAttribute("aws_lb.lb_test", "access_logs.s3.prefix", ""), resource.TestCheckResourceAttr("aws_lb.lb_test", "name", lbName), resource.TestCheckResourceAttr("aws_lb.lb_test", "internal", "true"), resource.TestCheckResourceAttr("aws_lb.lb_test", "subnets.#", "2"), @@ -567,6 +672,11 @@ func TestAccAWSLB_accesslogs(t *testing.T) { resource.TestCheckResourceAttrSet("aws_lb.lb_test", "arn"), ), }, + { + ResourceName: "aws_lb.lb_test", + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -575,7 +685,7 @@ func TestAccAWSLB_networkLoadbalancer_subnet_change(t *testing.T) { var conf elbv2.LoadBalancer lbName := fmt.Sprintf("testaccawslb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lb.lb_test", Providers: testAccProviders, @@ -598,7 +708,7 @@ func TestAccAWSLB_networkLoadbalancer_subnet_change(t *testing.T) { func testAccCheckAWSlbARNs(pre, post *elbv2.LoadBalancer) 
resource.TestCheckFunc { return func(s *terraform.State) error { - if *pre.LoadBalancerArn != *post.LoadBalancerArn { + if aws.StringValue(pre.LoadBalancerArn) != aws.StringValue(post.LoadBalancerArn) { return errors.New("LB has been recreated. ARNs are different") } @@ -628,7 +738,7 @@ func testAccCheckAWSLBExists(n string, res *elbv2.LoadBalancer) resource.TestChe } if len(describe.LoadBalancers) != 1 || - *describe.LoadBalancers[0].LoadBalancerArn != rs.Primary.ID { + aws.StringValue(describe.LoadBalancers[0].LoadBalancerArn) != rs.Primary.ID { return errors.New("LB not found") } @@ -637,6 +747,37 @@ func testAccCheckAWSLBExists(n string, res *elbv2.LoadBalancer) resource.TestChe } } +func testAccCheckAWSLBAttribute(n, key, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return errors.New("No LB ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).elbv2conn + attributesResp, err := conn.DescribeLoadBalancerAttributes(&elbv2.DescribeLoadBalancerAttributesInput{ + LoadBalancerArn: aws.String(rs.Primary.ID), + }) + if err != nil { + return fmt.Errorf("Error retrieving LB Attributes: %s", err) + } + + for _, attr := range attributesResp.Attributes { + if aws.StringValue(attr.Key) == key { + if aws.StringValue(attr.Value) == value { + return nil + } + return fmt.Errorf("LB attribute %s expected: %q actual: %q", key, value, aws.StringValue(attr.Value)) + } + } + return fmt.Errorf("LB attribute %s does not exist on LB: %s", key, rs.Primary.ID) + } +} + func testAccCheckAWSLBDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).elbv2conn @@ -651,7 +792,7 @@ func testAccCheckAWSLBDestroy(s *terraform.State) error { if err == nil { if len(describe.LoadBalancers) != 0 && - *describe.LoadBalancers[0].LoadBalancerArn == rs.Primary.ID { + aws.StringValue(describe.LoadBalancers[0].LoadBalancerArn) == rs.Primary.ID { return fmt.Errorf("LB %q still exists", rs.Primary.ID) } } @@ -660,7 +801,7 @@ func testAccCheckAWSLBDestroy(s *terraform.State) error { if isLoadBalancerNotFound(err) { return nil } else { - return errwrap.Wrapf("Unexpected error checking LB destroyed: {{err}}", err) + return fmt.Errorf("Unexpected error checking LB destroyed: %s", err) } } @@ -1678,7 +1819,7 @@ resource "aws_security_group" "alb_test" { }`, lbName) } -func testAccAWSLBConfig_accessLogs(enabled bool, lbName, bucketName string) string { +func testAccAWSLBConfig_accessLogs(enabled bool, lbName, bucketName, bucketPrefix string) string { return fmt.Sprintf(`resource "aws_lb" "lb_test" { name = "%s" internal = true @@ -1690,7 +1831,7 @@ func testAccAWSLBConfig_accessLogs(enabled bool, lbName, bucketName string) stri access_logs { bucket = "${aws_s3_bucket.logs.bucket}" - prefix = "${var.bucket_prefix}" + prefix = "${var.bucket_prefix}" enabled = "%t" } @@ -1706,7 +1847,7 @@ variable "bucket_name" { variable "bucket_prefix" { type = "string" - default = "testAccAWSALBConfig_accessLogs" + default = "%s" } resource "aws_s3_bucket" "logs" { @@ -1720,6 +1861,8 @@ resource "aws_s3_bucket" "logs" { } } +data "aws_partition" "current" {} + data "aws_caller_identity" "current" {} data "aws_elb_service_account" "current" {} @@ -1728,11 +1871,11 @@ data "aws_iam_policy_document" "logs_bucket" { statement { actions = ["s3:PutObject"] effect = "Allow" - resources = 
["arn:aws:s3:::${var.bucket_name}/${var.bucket_prefix}/AWSLogs/${data.aws_caller_identity.current.account_id}/*"] + resources = ["arn:${data.aws_partition.current.partition}:s3:::${var.bucket_name}/${var.bucket_prefix}${var.bucket_prefix == "" ? "" : "/"}AWSLogs/${data.aws_caller_identity.current.account_id}/*"] principals = { type = "AWS" - identifiers = ["arn:aws:iam::${data.aws_elb_service_account.current.id}:root"] + identifiers = ["${data.aws_elb_service_account.current.arn}"] } } } @@ -1786,7 +1929,7 @@ resource "aws_security_group" "alb_test" { tags { Name = "TestAccAWSALB_basic" } -}`, lbName, enabled, bucketName) +}`, lbName, enabled, bucketName, bucketPrefix) } func testAccAWSLBConfig_nosg(lbName string) string { diff --git a/aws/resource_aws_lightsail_domain_test.go b/aws/resource_aws_lightsail_domain_test.go index 64716900b21..a08e1875b1a 100644 --- a/aws/resource_aws_lightsail_domain_test.go +++ b/aws/resource_aws_lightsail_domain_test.go @@ -17,7 +17,7 @@ func TestAccAWSLightsailDomain_basic(t *testing.T) { var domain lightsail.Domain lightsailDomainName := fmt.Sprintf("tf-test-lightsail-%s.com", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailDomainDestroy, @@ -50,7 +50,7 @@ func TestAccAWSLightsailDomain_disappears(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailDomainDestroy, diff --git a/aws/resource_aws_lightsail_instance.go b/aws/resource_aws_lightsail_instance.go index 34f24957335..17fdf90e603 100644 --- a/aws/resource_aws_lightsail_instance.go +++ b/aws/resource_aws_lightsail_instance.go @@ -132,7 +132,7 @@ func resourceAwsLightsailInstanceCreate(d *schema.ResourceData, meta interface{} } if len(resp.Operations) == 0 { - return fmt.Errorf("[ERR] No operations found for CreateInstance request") + return fmt.Errorf("No operations found for CreateInstance request") } op := resp.Operations[0] @@ -149,7 +149,7 @@ func resourceAwsLightsailInstanceCreate(d *schema.ResourceData, meta interface{} _, err = stateConf.WaitForState() if err != nil { - // We don't return an error here because the Create call succeded + // We don't return an error here because the Create call succeeded log.Printf("[ERR] Error waiting for instance (%s) to become ready: %s", d.Id(), err) } @@ -230,7 +230,6 @@ func resourceAwsLightsailInstanceDelete(d *schema.ResourceData, meta interface{} d.Id(), err) } - d.SetId("") return nil } @@ -255,7 +254,7 @@ func resourceAwsLightsailOperationRefreshFunc( } if o.Operation == nil { - return nil, "Failed", fmt.Errorf("[ERR] Error retrieving Operation info for operation (%s)", *oid) + return nil, "Failed", fmt.Errorf("Error retrieving Operation info for operation (%s)", *oid) } log.Printf("[DEBUG] Lightsail Operation (%s) is currently %q", *oid, *o.Operation.Status) diff --git a/aws/resource_aws_lightsail_instance_test.go b/aws/resource_aws_lightsail_instance_test.go index da7bc3b2e54..87a1f044602 100644 --- a/aws/resource_aws_lightsail_instance_test.go +++ b/aws/resource_aws_lightsail_instance_test.go @@ -18,7 +18,7 @@ func TestAccAWSLightsailInstance_basic(t *testing.T) { var conf lightsail.Instance lightsailName := fmt.Sprintf("tf-test-lightsail-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lightsail_instance.lightsail_instance_test", Providers: testAccProviders, @@ -42,7 +42,7 @@ func TestAccAWSLightsailInstance_euRegion(t *testing.T) { var conf lightsail.Instance lightsailName := fmt.Sprintf("tf-test-lightsail-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_lightsail_instance.lightsail_instance_test", Providers: testAccProviders, @@ -83,7 +83,7 @@ func TestAccAWSLightsailInstance_disapear(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailInstanceDestroy, diff --git a/aws/resource_aws_lightsail_key_pair.go b/aws/resource_aws_lightsail_key_pair.go index 24138aaa94b..892ed298f32 100644 --- a/aws/resource_aws_lightsail_key_pair.go +++ b/aws/resource_aws_lightsail_key_pair.go @@ -28,9 +28,10 @@ func resourceAwsLightsailKeyPair() *schema.Resource { ConflictsWith: []string{"name_prefix"}, }, "name_prefix": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, }, // optional fields @@ -102,10 +103,10 @@ func resourceAwsLightsailKeyPairCreate(d *schema.ResourceData, meta interface{}) return err } if resp.Operation == nil { - return fmt.Errorf("[ERR] No operation found for CreateKeyPair response") + return fmt.Errorf("No operation found for CreateKeyPair response") } if resp.KeyPair == nil { - return fmt.Errorf("[ERR] No KeyPair information found for CreateKeyPair response") + return fmt.Errorf("No KeyPair information found for CreateKeyPair response") } d.SetId(kName) @@ -159,7 +160,7 @@ func resourceAwsLightsailKeyPairCreate(d *schema.ResourceData, meta interface{}) _, err := stateConf.WaitForState() if err != nil { - // We don't return an error here because the Create call succeded + // We don't return an error here because the Create call succeeded log.Printf("[ERR] Error waiting for KeyPair (%s) to become ready: %s", d.Id(), err) } @@ -220,6 +221,5 @@ func resourceAwsLightsailKeyPairDelete(d *schema.ResourceData, meta interface{}) d.Id(), err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_lightsail_key_pair_test.go b/aws/resource_aws_lightsail_key_pair_test.go index 539e9c5ac39..bd56c20e272 100644 --- a/aws/resource_aws_lightsail_key_pair_test.go +++ b/aws/resource_aws_lightsail_key_pair_test.go @@ -17,7 +17,7 @@ func TestAccAWSLightsailKeyPair_basic(t *testing.T) { var conf lightsail.KeyPair lightsailName := fmt.Sprintf("tf-test-lightsail-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailKeyPairDestroy, @@ -40,7 +40,7 @@ func TestAccAWSLightsailKeyPair_imported(t *testing.T) { var conf lightsail.KeyPair lightsailName := fmt.Sprintf("tf-test-lightsail-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailKeyPairDestroy, @@ -65,7 +65,7 @@ func TestAccAWSLightsailKeyPair_encrypted(t *testing.T) { var conf lightsail.KeyPair lightsailName := 
fmt.Sprintf("tf-test-lightsail-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailKeyPairDestroy, @@ -89,7 +89,7 @@ func TestAccAWSLightsailKeyPair_encrypted(t *testing.T) { func TestAccAWSLightsailKeyPair_nameprefix(t *testing.T) { var conf1, conf2 lightsail.KeyPair - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailKeyPairDestroy, diff --git a/aws/resource_aws_lightsail_static_ip_attachment_test.go b/aws/resource_aws_lightsail_static_ip_attachment_test.go index bf0aa0898f7..0ae89589cb0 100644 --- a/aws/resource_aws_lightsail_static_ip_attachment_test.go +++ b/aws/resource_aws_lightsail_static_ip_attachment_test.go @@ -19,7 +19,7 @@ func TestAccAWSLightsailStaticIpAttachment_basic(t *testing.T) { instanceName := fmt.Sprintf("tf-test-lightsail-%s", acctest.RandString(5)) keypairName := fmt.Sprintf("tf-test-lightsail-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailStaticIpAttachmentDestroy, @@ -53,7 +53,7 @@ func TestAccAWSLightsailStaticIpAttachment_disappears(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailStaticIpAttachmentDestroy, diff --git a/aws/resource_aws_lightsail_static_ip_test.go b/aws/resource_aws_lightsail_static_ip_test.go index 275a29f2023..441126bceda 100644 --- a/aws/resource_aws_lightsail_static_ip_test.go +++ b/aws/resource_aws_lightsail_static_ip_test.go @@ -3,6 +3,8 @@ package aws import ( "errors" "fmt" + "log" + "strings" "testing" "github.com/aws/aws-sdk-go/aws" @@ -13,11 +15,67 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_lightsail_static_ip", &resource.Sweeper{ + Name: "aws_lightsail_static_ip", + F: testSweepLightsailStaticIps, + }) +} + +func testSweepLightsailStaticIps(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("Error getting client: %s", err) + } + conn := client.(*AWSClient).lightsailconn + + input := &lightsail.GetStaticIpsInput{} + + for { + output, err := conn.GetStaticIps(input) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Lightsail Static IP sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Lightsail Static IPs: %s", err) + } + + if len(output.StaticIps) == 0 { + log.Print("[DEBUG] No Lightsail Static IPs to sweep") + return nil + } + + for _, staticIp := range output.StaticIps { + name := aws.StringValue(staticIp.Name) + + if !strings.HasPrefix(name, "tf-test-") { + continue + } + + log.Printf("[INFO] Deleting Lightsail Static IP %s", name) + _, err := conn.ReleaseStaticIp(&lightsail.ReleaseStaticIpInput{ + StaticIpName: aws.String(name), + }) + if err != nil { + return fmt.Errorf("Error deleting Lightsail Static IP %s: %s", name, err) + } + } + + if output.NextPageToken == nil { + break + } + input.PageToken = output.NextPageToken + } + + return nil +} + func TestAccAWSLightsailStaticIp_basic(t 
*testing.T) { var staticIp lightsail.StaticIp staticIpName := fmt.Sprintf("tf-test-lightsail-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailStaticIpDestroy, @@ -49,7 +107,7 @@ func TestAccAWSLightsailStaticIp_disappears(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLightsailStaticIpDestroy, diff --git a/aws/resource_aws_load_balancer_backend_server_policy.go b/aws/resource_aws_load_balancer_backend_server_policy.go index 325c4fd1abd..60f67d58c2c 100644 --- a/aws/resource_aws_load_balancer_backend_server_policy.go +++ b/aws/resource_aws_load_balancer_backend_server_policy.go @@ -19,19 +19,19 @@ func resourceAwsLoadBalancerBackendServerPolicies() *schema.Resource { Delete: resourceAwsLoadBalancerBackendServerPoliciesDelete, Schema: map[string]*schema.Schema{ - "load_balancer_name": &schema.Schema{ + "load_balancer_name": { Type: schema.TypeString, Required: true, }, - "policy_names": &schema.Schema{ + "policy_names": { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, Set: schema.HashString, }, - "instance_port": &schema.Schema{ + "instance_port": { Type: schema.TypeInt, Required: true, }, @@ -128,7 +128,6 @@ func resourceAwsLoadBalancerBackendServerPoliciesDelete(d *schema.ResourceData, return fmt.Errorf("Error setting LoadBalancerPoliciesForBackendServer: %s", err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_load_balancer_backend_server_policy_test.go b/aws/resource_aws_load_balancer_backend_server_policy_test.go index b8db5d39b5d..9f9073ae16b 100644 --- a/aws/resource_aws_load_balancer_backend_server_policy_test.go +++ b/aws/resource_aws_load_balancer_backend_server_policy_test.go @@ -17,7 +17,7 @@ func TestAccAWSLoadBalancerBackendServerPolicy_basic(t *testing.T) { rString := acctest.RandString(8) lbName := fmt.Sprintf("tf-acc-lb-bsp-basic-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: map[string]terraform.ResourceProvider{ "aws": testAccProvider, @@ -25,7 +25,7 @@ func TestAccAWSLoadBalancerBackendServerPolicy_basic(t *testing.T) { }, CheckDestroy: testAccCheckAWSLoadBalancerBackendServerPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSLoadBalancerBackendServerPolicyConfig_basic0(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSLoadBalancerPolicyState("aws_elb.test-lb", "aws_load_balancer_policy.test-pubkey-policy0"), @@ -33,7 +33,7 @@ func TestAccAWSLoadBalancerBackendServerPolicy_basic(t *testing.T) { testAccCheckAWSLoadBalancerBackendServerPolicyState(lbName, "test-backend-auth-policy0", true), ), }, - resource.TestStep{ + { Config: testAccAWSLoadBalancerBackendServerPolicyConfig_basic1(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSLoadBalancerPolicyState("aws_elb.test-lb", "aws_load_balancer_policy.test-pubkey-policy0"), @@ -42,7 +42,7 @@ func TestAccAWSLoadBalancerBackendServerPolicy_basic(t *testing.T) { testAccCheckAWSLoadBalancerBackendServerPolicyState(lbName, "test-backend-auth-policy0", true), ), }, - resource.TestStep{ + { Config: testAccAWSLoadBalancerBackendServerPolicyConfig_basic2(lbName), Check: 
resource.ComposeTestCheckFunc( testAccCheckAWSLoadBalancerBackendServerPolicyState(lbName, "test-backend-auth-policy0", false), diff --git a/aws/resource_aws_load_balancer_listener_policy.go b/aws/resource_aws_load_balancer_listener_policy.go index d1c8cacbbe4..a423d74d8a4 100644 --- a/aws/resource_aws_load_balancer_listener_policy.go +++ b/aws/resource_aws_load_balancer_listener_policy.go @@ -19,19 +19,19 @@ func resourceAwsLoadBalancerListenerPolicies() *schema.Resource { Delete: resourceAwsLoadBalancerListenerPoliciesDelete, Schema: map[string]*schema.Schema{ - "load_balancer_name": &schema.Schema{ + "load_balancer_name": { Type: schema.TypeString, Required: true, }, - "policy_names": &schema.Schema{ + "policy_names": { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, Set: schema.HashString, }, - "load_balancer_port": &schema.Schema{ + "load_balancer_port": { Type: schema.TypeInt, Required: true, }, @@ -128,7 +128,6 @@ func resourceAwsLoadBalancerListenerPoliciesDelete(d *schema.ResourceData, meta return fmt.Errorf("Error setting LoadBalancerPoliciesOfListener: %s", err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_load_balancer_listener_policy_test.go b/aws/resource_aws_load_balancer_listener_policy_test.go index bd663a157ed..3ddae9af654 100644 --- a/aws/resource_aws_load_balancer_listener_policy_test.go +++ b/aws/resource_aws_load_balancer_listener_policy_test.go @@ -18,26 +18,26 @@ func TestAccAWSLoadBalancerListenerPolicy_basic(t *testing.T) { rChar := acctest.RandStringFromCharSet(6, acctest.CharSetAlpha) lbName := fmt.Sprintf("%s", rChar) mcName := fmt.Sprintf("%s", rChar) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLoadBalancerListenerPolicyDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSLoadBalancerListenerPolicyConfig_basic0(lbName, mcName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSLoadBalancerPolicyState("aws_elb.test-lb", "aws_load_balancer_policy.magic-cookie-sticky"), testAccCheckAWSLoadBalancerListenerPolicyState(lbName, int64(80), mcName, true), ), }, - resource.TestStep{ + { Config: testAccAWSLoadBalancerListenerPolicyConfig_basic1(lbName, mcName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSLoadBalancerPolicyState("aws_elb.test-lb", "aws_load_balancer_policy.magic-cookie-sticky"), testAccCheckAWSLoadBalancerListenerPolicyState(lbName, int64(80), mcName, true), ), }, - resource.TestStep{ + { Config: testAccAWSLoadBalancerListenerPolicyConfig_basic2(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSLoadBalancerListenerPolicyState(lbName, int64(80), mcName, false), @@ -90,7 +90,7 @@ func testAccCheckAWSLoadBalancerListenerPolicyDestroy(s *terraform.State) error return err } policyNames := []string{} - for k, _ := range rs.Primary.Attributes { + for k := range rs.Primary.Attributes { if strings.HasPrefix(k, "policy_names.") && strings.HasSuffix(k, ".name") { value_key := fmt.Sprintf("%s.value", strings.TrimSuffix(k, ".name")) policyNames = append(policyNames, rs.Primary.Attributes[value_key]) diff --git a/aws/resource_aws_load_balancer_policy.go b/aws/resource_aws_load_balancer_policy.go index 8305cf992ba..703ba85cd68 100644 --- a/aws/resource_aws_load_balancer_policy.go +++ b/aws/resource_aws_load_balancer_policy.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "log" "strings" "github.com/aws/aws-sdk-go/aws" @@ 
-18,35 +19,35 @@ func resourceAwsLoadBalancerPolicy() *schema.Resource { Delete: resourceAwsLoadBalancerPolicyDelete, Schema: map[string]*schema.Schema{ - "load_balancer_name": &schema.Schema{ + "load_balancer_name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy_name": &schema.Schema{ + "policy_name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy_type_name": &schema.Schema{ + "policy_type_name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy_attribute": &schema.Schema{ + "policy_attribute": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Optional: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Optional: true, }, @@ -100,11 +101,20 @@ func resourceAwsLoadBalancerPolicyRead(d *schema.ResourceData, meta interface{}) } getResp, err := elbconn.DescribeLoadBalancerPolicies(request) + + if isAWSErr(err, "LoadBalancerNotFound", "") { + log.Printf("[WARN] Load Balancer Policy (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if isAWSErr(err, elb.ErrCodePolicyNotFoundException, "") { + log.Printf("[WARN] Load Balancer Policy (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + if err != nil { - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "PolicyNotFound" { - d.SetId("") - return nil - } return fmt.Errorf("Error retrieving policy: %s", err) } @@ -163,6 +173,9 @@ func resourceAwsLoadBalancerPolicyUpdate(d *schema.ResourceData, meta interface{ } err = resourceAwsLoadBalancerPolicyCreate(d, meta) + if err != nil { + return err + } for _, listenerAssignment := range reassignments.listenerPolicies { if _, err := elbconn.SetLoadBalancerPoliciesOfListener(listenerAssignment); err != nil { @@ -205,7 +218,6 @@ func resourceAwsLoadBalancerPolicyDelete(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error deleting Load Balancer Policy %s: %s", d.Id(), err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_load_balancer_policy_test.go b/aws/resource_aws_load_balancer_policy_test.go index cfdbec8ec6d..77cbe7b9999 100644 --- a/aws/resource_aws_load_balancer_policy_test.go +++ b/aws/resource_aws_load_balancer_policy_test.go @@ -16,8 +16,35 @@ import ( ) func TestAccAWSLoadBalancerPolicy_basic(t *testing.T) { + var policy elb.PolicyDescription + loadBalancerResourceName := "aws_elb.test-lb" + resourceName := "aws_load_balancer_policy.test-policy" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLoadBalancerPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLoadBalancerPolicyConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLoadBalancerPolicyExists(resourceName, &policy), + testAccCheckAWSLoadBalancerPolicyState(loadBalancerResourceName, resourceName), + ), + }, + }, + }) +} + +func TestAccAWSLoadBalancerPolicy_disappears(t *testing.T) { + var loadBalancer elb.LoadBalancerDescription + var policy elb.PolicyDescription + loadBalancerResourceName := "aws_elb.test-lb" + resourceName := "aws_load_balancer_policy.test-policy" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckAWSLoadBalancerPolicyDestroy, @@ -25,16 +52,32 @@ func TestAccAWSLoadBalancerPolicy_basic(t *testing.T) { { Config: testAccAWSLoadBalancerPolicyConfig_basic(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSLoadBalancerPolicyState("aws_elb.test-lb", "aws_load_balancer_policy.test-policy"), + testAccCheckAWSELBExists(loadBalancerResourceName, &loadBalancer), + testAccCheckAWSLoadBalancerPolicyExists(resourceName, &policy), + testAccCheckAWSLoadBalancerPolicyDisappears(&loadBalancer, &policy), + ), + ExpectNonEmptyPlan: true, + }, + { + Config: testAccAWSLoadBalancerPolicyConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSELBExists(loadBalancerResourceName, &loadBalancer), + testAccCheckAWSLoadBalancerPolicyExists(resourceName, &policy), + testAccCheckAWSELBDisappears(&loadBalancer), ), + ExpectNonEmptyPlan: true, }, }, }) } func TestAccAWSLoadBalancerPolicy_updateWhileAssigned(t *testing.T) { + var policy elb.PolicyDescription + loadBalancerResourceName := "aws_elb.test-lb" + resourceName := "aws_load_balancer_policy.test-policy" rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSLoadBalancerPolicyDestroy, @@ -42,19 +85,57 @@ func TestAccAWSLoadBalancerPolicy_updateWhileAssigned(t *testing.T) { { Config: testAccAWSLoadBalancerPolicyConfig_updateWhileAssigned0(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSLoadBalancerPolicyState("aws_elb.test-lb", "aws_load_balancer_policy.test-policy"), + testAccCheckAWSLoadBalancerPolicyExists(resourceName, &policy), + testAccCheckAWSLoadBalancerPolicyState(loadBalancerResourceName, resourceName), ), }, { Config: testAccAWSLoadBalancerPolicyConfig_updateWhileAssigned1(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSLoadBalancerPolicyState("aws_elb.test-lb", "aws_load_balancer_policy.test-policy"), + testAccCheckAWSLoadBalancerPolicyExists(resourceName, &policy), + testAccCheckAWSLoadBalancerPolicyState(loadBalancerResourceName, resourceName), ), }, }, }) } +func testAccCheckAWSLoadBalancerPolicyExists(resourceName string, policyDescription *elb.PolicyDescription) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Load Balancer Policy ID is set for %s", resourceName) + } + + loadBalancerName, policyName := resourceAwsLoadBalancerPolicyParseId(rs.Primary.ID) + + conn := testAccProvider.Meta().(*AWSClient).elbconn + + input := &elb.DescribeLoadBalancerPoliciesInput{ + LoadBalancerName: aws.String(loadBalancerName), + PolicyNames: []*string{aws.String(policyName)}, + } + + output, err := conn.DescribeLoadBalancerPolicies(input) + + if err != nil { + return err + } + + if output == nil || len(output.PolicyDescriptions) == 0 { + return fmt.Errorf("Load Balancer Policy (%s) not found", rs.Primary.ID) + } + + *policyDescription = *output.PolicyDescriptions[0] + + return nil + } +} + func testAccCheckAWSLoadBalancerPolicyDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).elbconn @@ -83,6 +164,20 @@ func testAccCheckAWSLoadBalancerPolicyDestroy(s *terraform.State) error { return nil } +func testAccCheckAWSLoadBalancerPolicyDisappears(loadBalancer *elb.LoadBalancerDescription, policy *elb.PolicyDescription) 
resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).elbconn + + input := elb.DeleteLoadBalancerPolicyInput{ + LoadBalancerName: loadBalancer.LoadBalancerName, + PolicyName: policy.PolicyName, + } + _, err := conn.DeleteLoadBalancerPolicy(&input) + + return err + } +} + func testAccCheckAWSLoadBalancerPolicyState(elbResource string, policyResource string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[elbResource] diff --git a/aws/resource_aws_macie_member_account_association.go b/aws/resource_aws_macie_member_account_association.go new file mode 100644 index 00000000000..23b53147dd0 --- /dev/null +++ b/aws/resource_aws_macie_member_account_association.go @@ -0,0 +1,96 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/macie" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsMacieMemberAccountAssociation() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsMacieMemberAccountAssociationCreate, + Read: resourceAwsMacieMemberAccountAssociationRead, + Delete: resourceAwsMacieMemberAccountAssociationDelete, + + Schema: map[string]*schema.Schema{ + "member_account_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateAwsAccountId, + }, + }, + } +} + +func resourceAwsMacieMemberAccountAssociationCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).macieconn + + memberAccountId := d.Get("member_account_id").(string) + req := &macie.AssociateMemberAccountInput{ + MemberAccountId: aws.String(memberAccountId), + } + + log.Printf("[DEBUG] Creating Macie member account association: %#v", req) + _, err := conn.AssociateMemberAccount(req) + if err != nil { + return fmt.Errorf("Error creating Macie member account association: %s", err) + } + + d.SetId(memberAccountId) + return resourceAwsMacieMemberAccountAssociationRead(d, meta) +} + +func resourceAwsMacieMemberAccountAssociationRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).macieconn + + req := &macie.ListMemberAccountsInput{} + + var res *macie.MemberAccount + err := conn.ListMemberAccountsPages(req, func(page *macie.ListMemberAccountsOutput, lastPage bool) bool { + for _, v := range page.MemberAccounts { + if aws.StringValue(v.AccountId) == d.Get("member_account_id").(string) { + res = v + return false + } + } + + return true + }) + if err != nil { + return fmt.Errorf("Error listing Macie member account associations: %s", err) + } + + if res == nil { + log.Printf("[WARN] Macie member account association (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return nil +} + +func resourceAwsMacieMemberAccountAssociationDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).macieconn + + log.Printf("[DEBUG] Deleting Macie member account association: %s", d.Id()) + + _, err := conn.DisassociateMemberAccount(&macie.DisassociateMemberAccountInput{ + MemberAccountId: aws.String(d.Get("member_account_id").(string)), + }) + if err != nil { + if isAWSErr(err, macie.ErrCodeInvalidInputException, "is a master Macie account and cannot be disassociated") { + log.Printf("[INFO] Macie master account (%s) cannot be disassociated, removing from state", d.Id()) + return nil + } else if isAWSErr(err, macie.ErrCodeInvalidInputException, "is not yet associated with Macie") { 
+ return nil + } else { + return fmt.Errorf("Error deleting Macie member account association: %s", err) + } + } + + return nil +} diff --git a/aws/resource_aws_macie_member_account_association_test.go b/aws/resource_aws_macie_member_account_association_test.go new file mode 100644 index 00000000000..fe781e6d9d1 --- /dev/null +++ b/aws/resource_aws_macie_member_account_association_test.go @@ -0,0 +1,131 @@ +package aws + +import ( + "fmt" + "os" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/macie" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSMacieMemberAccountAssociation_basic(t *testing.T) { + key := "MACIE_MEMBER_ACCOUNT_ID" + memberAcctId := os.Getenv(key) + if memberAcctId == "" { + t.Skipf("Environment variable %s is not set", key) + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSMacieMemberAccountAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSMacieMemberAccountAssociationConfig_basic(memberAcctId), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSMacieMemberAccountAssociationExists("aws_macie_member_account_association.test"), + ), + }, + }, + }) +} + +func TestAccAWSMacieMemberAccountAssociation_self(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSMacieMemberAccountAssociationConfig_self, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSMacieMemberAccountAssociationExists("aws_macie_member_account_association.test"), + ), + }, + }, + }) +} + +func testAccCheckAWSMacieMemberAccountAssociationDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).macieconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_macie_member_account_association" { + continue + } + + req := &macie.ListMemberAccountsInput{} + + dissociated := true + err := conn.ListMemberAccountsPages(req, func(page *macie.ListMemberAccountsOutput, lastPage bool) bool { + for _, v := range page.MemberAccounts { + if aws.StringValue(v.AccountId) == rs.Primary.Attributes["member_account_id"] { + dissociated = false + return false + } + } + + return true + }) + if err != nil { + return err + } + + if !dissociated { + return fmt.Errorf("Member account %s is not dissociated from Macie", rs.Primary.Attributes["member_account_id"]) + } + } + return nil +} + +func testAccCheckAWSMacieMemberAccountAssociationExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).macieconn + + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + req := &macie.ListMemberAccountsInput{} + + exists := false + err := conn.ListMemberAccountsPages(req, func(page *macie.ListMemberAccountsOutput, lastPage bool) bool { + for _, v := range page.MemberAccounts { + if aws.StringValue(v.AccountId) == rs.Primary.Attributes["member_account_id"] { + exists = true + return false + } + } + + return true + }) + if err != nil { + return err + } + + if !exists { + return fmt.Errorf("Member account %s is not associated with Macie", rs.Primary.Attributes["member_account_id"]) + } + + return nil + } +} + +func testAccAWSMacieMemberAccountAssociationConfig_basic(memberAcctId 
string) string { + return fmt.Sprintf(` +resource "aws_macie_member_account_association" "test" { + member_account_id = "%s" +} +`, memberAcctId) +} + +const testAccAWSMacieMemberAccountAssociationConfig_self = ` +data "aws_caller_identity" "current" {} + +resource "aws_macie_member_account_association" "test" { + member_account_id = "${data.aws_caller_identity.current.account_id}" +} +` diff --git a/aws/resource_aws_macie_s3_bucket_association.go b/aws/resource_aws_macie_s3_bucket_association.go new file mode 100644 index 00000000000..8f8606eba09 --- /dev/null +++ b/aws/resource_aws_macie_s3_bucket_association.go @@ -0,0 +1,206 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/macie" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsMacieS3BucketAssociation() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsMacieS3BucketAssociationCreate, + Read: resourceAwsMacieS3BucketAssociationRead, + Update: resourceAwsMacieS3BucketAssociationUpdate, + Delete: resourceAwsMacieS3BucketAssociationDelete, + + Schema: map[string]*schema.Schema{ + "bucket_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "member_account_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validateAwsAccountId, + }, + "classification_type": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "continuous": { + Type: schema.TypeString, + Optional: true, + Default: macie.S3ContinuousClassificationTypeFull, + ValidateFunc: validation.StringInSlice([]string{macie.S3ContinuousClassificationTypeFull}, false), + }, + "one_time": { + Type: schema.TypeString, + Optional: true, + Default: macie.S3OneTimeClassificationTypeNone, + ValidateFunc: validation.StringInSlice([]string{macie.S3OneTimeClassificationTypeFull, macie.S3OneTimeClassificationTypeNone}, false), + }, + }, + }, + }, + }, + } +} + +func resourceAwsMacieS3BucketAssociationCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).macieconn + + req := &macie.AssociateS3ResourcesInput{ + S3Resources: []*macie.S3ResourceClassification{ + { + BucketName: aws.String(d.Get("bucket_name").(string)), + ClassificationType: expandMacieClassificationType(d), + }, + }, + } + if v, ok := d.GetOk("member_account_id"); ok { + req.MemberAccountId = aws.String(v.(string)) + } + if v, ok := d.GetOk("prefix"); ok { + req.S3Resources[0].Prefix = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Creating Macie S3 bucket association: %#v", req) + resp, err := conn.AssociateS3Resources(req) + if err != nil { + return fmt.Errorf("Error creating Macie S3 bucket association: %s", err) + } + if len(resp.FailedS3Resources) > 0 { + return fmt.Errorf("Error creating Macie S3 bucket association: %s", resp.FailedS3Resources[0]) + } + + d.SetId(fmt.Sprintf("%s/%s", d.Get("bucket_name"), d.Get("prefix"))) + return resourceAwsMacieS3BucketAssociationRead(d, meta) +} + +func resourceAwsMacieS3BucketAssociationRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).macieconn + + req := &macie.ListS3ResourcesInput{} + if v, ok := d.GetOk("member_account_id"); ok { + req.MemberAccountId = aws.String(v.(string)) + } + + 
bucketName := d.Get("bucket_name").(string) + prefix := d.Get("prefix") + + var res *macie.S3ResourceClassification + err := conn.ListS3ResourcesPages(req, func(page *macie.ListS3ResourcesOutput, lastPage bool) bool { + for _, v := range page.S3Resources { + if aws.StringValue(v.BucketName) == bucketName && aws.StringValue(v.Prefix) == prefix { + res = v + return false + } + } + + return true + }) + if err != nil { + return fmt.Errorf("Error listing Macie S3 bucket associations: %s", err) + } + + if res == nil { + log.Printf("[WARN] Macie S3 bucket association (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err := d.Set("classification_type", flattenMacieClassificationType(res.ClassificationType)); err != nil { + return fmt.Errorf("error setting classification_type: %s", err) + } + + return nil +} + +func resourceAwsMacieS3BucketAssociationUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).macieconn + + if d.HasChange("classification_type") { + req := &macie.UpdateS3ResourcesInput{ + S3ResourcesUpdate: []*macie.S3ResourceClassificationUpdate{ + { + BucketName: aws.String(d.Get("bucket_name").(string)), + ClassificationTypeUpdate: expandMacieClassificationTypeUpdate(d), + }, + }, + } + if v, ok := d.GetOk("member_account_id"); ok { + req.MemberAccountId = aws.String(v.(string)) + } + if v, ok := d.GetOk("prefix"); ok { + req.S3ResourcesUpdate[0].Prefix = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Updating Macie S3 bucket association: %#v", req) + resp, err := conn.UpdateS3Resources(req) + if err != nil { + return fmt.Errorf("Error updating Macie S3 bucket association: %s", err) + } + if len(resp.FailedS3Resources) > 0 { + return fmt.Errorf("Error updating Macie S3 bucket association: %s", resp.FailedS3Resources[0]) + } + } + + return resourceAwsMacieS3BucketAssociationRead(d, meta) +} + +func resourceAwsMacieS3BucketAssociationDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).macieconn + + log.Printf("[DEBUG] Deleting Macie S3 bucket association: %s", d.Id()) + + req := &macie.DisassociateS3ResourcesInput{ + AssociatedS3Resources: []*macie.S3Resource{ + { + BucketName: aws.String(d.Get("bucket_name").(string)), + }, + }, + } + if v, ok := d.GetOk("member_account_id"); ok { + req.MemberAccountId = aws.String(v.(string)) + } + if v, ok := d.GetOk("prefix"); ok { + req.AssociatedS3Resources[0].Prefix = aws.String(v.(string)) + } + + resp, err := conn.DisassociateS3Resources(req) + if err != nil { + return fmt.Errorf("Error deleting Macie S3 bucket association: %s", err) + } + if len(resp.FailedS3Resources) > 0 { + failed := resp.FailedS3Resources[0] + // { + // ErrorCode: "InvalidInputException", + // ErrorMessage: "The request was rejected. 
The specified S3 resource (bucket or prefix) is not associated with Macie.", + // FailedItem: { + // BucketName: "tf-macie-example-002" + // } + // } + if aws.StringValue(failed.ErrorCode) == macie.ErrCodeInvalidInputException && + strings.Contains(aws.StringValue(failed.ErrorMessage), "is not associated with Macie") { + return nil + } + return fmt.Errorf("Error deleting Macie S3 bucket association: %s", failed) + } + + return nil +} diff --git a/aws/resource_aws_macie_s3_bucket_association_test.go b/aws/resource_aws_macie_s3_bucket_association_test.go new file mode 100644 index 00000000000..304a780bd0d --- /dev/null +++ b/aws/resource_aws_macie_s3_bucket_association_test.go @@ -0,0 +1,206 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/macie" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSMacieS3BucketAssociation_basic(t *testing.T) { + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSMacieS3BucketAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSMacieS3BucketAssociationConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSMacieS3BucketAssociationExists("aws_macie_s3_bucket_association.test"), + resource.TestCheckResourceAttr("aws_macie_s3_bucket_association.test", "classification_type.0.continuous", "FULL"), + resource.TestCheckResourceAttr("aws_macie_s3_bucket_association.test", "classification_type.0.one_time", "NONE"), + ), + }, + { + Config: testAccAWSMacieS3BucketAssociationConfig_basicOneTime(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSMacieS3BucketAssociationExists("aws_macie_s3_bucket_association.test"), + resource.TestCheckResourceAttr("aws_macie_s3_bucket_association.test", "classification_type.0.continuous", "FULL"), + resource.TestCheckResourceAttr("aws_macie_s3_bucket_association.test", "classification_type.0.one_time", "FULL"), + ), + }, + }, + }) +} + +func TestAccAWSMacieS3BucketAssociation_accountIdAndPrefix(t *testing.T) { + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSMacieS3BucketAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSMacieS3BucketAssociationConfig_accountIdAndPrefix(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSMacieS3BucketAssociationExists("aws_macie_s3_bucket_association.test"), + resource.TestCheckResourceAttr("aws_macie_s3_bucket_association.test", "classification_type.0.continuous", "FULL"), + resource.TestCheckResourceAttr("aws_macie_s3_bucket_association.test", "classification_type.0.one_time", "NONE"), + ), + }, + { + Config: testAccAWSMacieS3BucketAssociationConfig_accountIdAndPrefixOneTime(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSMacieS3BucketAssociationExists("aws_macie_s3_bucket_association.test"), + resource.TestCheckResourceAttr("aws_macie_s3_bucket_association.test", "classification_type.0.continuous", "FULL"), + resource.TestCheckResourceAttr("aws_macie_s3_bucket_association.test", "classification_type.0.one_time", "FULL"), + ), + }, + }, + }) +} + +func testAccCheckAWSMacieS3BucketAssociationDestroy(s *terraform.State) 
error { + conn := testAccProvider.Meta().(*AWSClient).macieconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_macie_s3_bucket_association" { + continue + } + + req := &macie.ListS3ResourcesInput{} + acctId := rs.Primary.Attributes["member_account_id"] + if acctId != "" { + req.MemberAccountId = aws.String(acctId) + } + + dissociated := true + err := conn.ListS3ResourcesPages(req, func(page *macie.ListS3ResourcesOutput, lastPage bool) bool { + for _, v := range page.S3Resources { + if aws.StringValue(v.BucketName) == rs.Primary.Attributes["bucket_name"] && aws.StringValue(v.Prefix) == rs.Primary.Attributes["prefix"] { + dissociated = false + return false + } + } + + return true + }) + if err != nil { + return err + } + + if !dissociated { + return fmt.Errorf("S3 resource %s/%s is not dissociated from Macie", rs.Primary.Attributes["bucket_name"], rs.Primary.Attributes["prefix"]) + } + } + return nil +} + +func testAccCheckAWSMacieS3BucketAssociationExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).macieconn + + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + req := &macie.ListS3ResourcesInput{} + acctId := rs.Primary.Attributes["member_account_id"] + if acctId != "" { + req.MemberAccountId = aws.String(acctId) + } + + exists := false + err := conn.ListS3ResourcesPages(req, func(page *macie.ListS3ResourcesOutput, lastPage bool) bool { + for _, v := range page.S3Resources { + if aws.StringValue(v.BucketName) == rs.Primary.Attributes["bucket_name"] && aws.StringValue(v.Prefix) == rs.Primary.Attributes["prefix"] { + exists = true + return false + } + } + + return true + }) + if err != nil { + return err + } + + if !exists { + return fmt.Errorf("S3 resource %s/%s is not associated with Macie", rs.Primary.Attributes["bucket_name"], rs.Primary.Attributes["prefix"]) + } + + return nil + } +} + +func testAccAWSMacieS3BucketAssociationConfig_basic(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "tf-macie-test-bucket-%d" +} + +resource "aws_macie_s3_bucket_association" "test" { + bucket_name = "${aws_s3_bucket.test.id}" +} +`, randInt) +} + +func testAccAWSMacieS3BucketAssociationConfig_basicOneTime(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "tf-macie-test-bucket-%d" +} + +resource "aws_macie_s3_bucket_association" "test" { + bucket_name = "${aws_s3_bucket.test.id}" + + classification_type { + one_time = "FULL" + } +} +`, randInt) +} + +func testAccAWSMacieS3BucketAssociationConfig_accountIdAndPrefix(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "tf-macie-test-bucket-%d" +} + +data "aws_caller_identity" "current" {} + +resource "aws_macie_s3_bucket_association" "test" { + bucket_name = "${aws_s3_bucket.test.id}" + member_account_id = "${data.aws_caller_identity.current.account_id}" + prefix = "data" +} +`, randInt) +} + +func testAccAWSMacieS3BucketAssociationConfig_accountIdAndPrefixOneTime(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = "tf-macie-test-bucket-%d" +} + +data "aws_caller_identity" "current" {} + +resource "aws_macie_s3_bucket_association" "test" { + bucket_name = "${aws_s3_bucket.test.id}" + member_account_id = "${data.aws_caller_identity.current.account_id}" + prefix = "data" + + classification_type { + one_time = "FULL" + } +} +`, randInt) +} diff 
--git a/aws/resource_aws_main_route_table_association.go b/aws/resource_aws_main_route_table_association.go index aabecda543c..58dadccd884 100644 --- a/aws/resource_aws_main_route_table_association.go +++ b/aws/resource_aws_main_route_table_association.go @@ -17,12 +17,12 @@ func resourceAwsMainRouteTableAssociation() *schema.Resource { Delete: resourceAwsMainRouteTableAssociationDelete, Schema: map[string]*schema.Schema{ - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Required: true, }, - "route_table_id": &schema.Schema{ + "route_table_id": { Type: schema.TypeString, Required: true, }, @@ -31,7 +31,7 @@ func resourceAwsMainRouteTableAssociation() *schema.Resource { // created when the VPC is created. We need this to be able to "destroy" // our main route table association, which we do by returning this route // table to its original place as the Main Route Table for the VPC. - "original_route_table_id": &schema.Schema{ + "original_route_table_id": { Type: schema.TypeString, Computed: true, }, diff --git a/aws/resource_aws_main_route_table_association_test.go b/aws/resource_aws_main_route_table_association_test.go index c502c197bf4..5639c3e6aad 100644 --- a/aws/resource_aws_main_route_table_association_test.go +++ b/aws/resource_aws_main_route_table_association_test.go @@ -10,12 +10,12 @@ import ( ) func TestAccAWSMainRouteTableAssociation_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckMainRouteTableAssociationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccMainRouteTableAssociationConfig, Check: resource.ComposeTestCheckFunc( testAccCheckMainRouteTableAssociation( @@ -25,7 +25,7 @@ func TestAccAWSMainRouteTableAssociation_basic(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccMainRouteTableAssociationConfigUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckMainRouteTableAssociation( diff --git a/aws/resource_aws_media_store_container.go b/aws/resource_aws_media_store_container.go index 7380dad1f81..cb88ed340e4 100644 --- a/aws/resource_aws_media_store_container.go +++ b/aws/resource_aws_media_store_container.go @@ -16,9 +16,11 @@ func resourceAwsMediaStoreContainer() *schema.Resource { Create: resourceAwsMediaStoreContainerCreate, Read: resourceAwsMediaStoreContainerRead, Delete: resourceAwsMediaStoreContainerDelete, - + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -82,6 +84,7 @@ func resourceAwsMediaStoreContainerRead(d *schema.ResourceData, meta interface{} return err } d.Set("arn", resp.Container.ARN) + d.Set("name", resp.Container.Name) d.Set("endpoint", resp.Container.Endpoint) return nil } @@ -95,7 +98,6 @@ func resourceAwsMediaStoreContainerDelete(d *schema.ResourceData, meta interface _, err := conn.DeleteContainer(input) if err != nil { if isAWSErr(err, mediastore.ErrCodeContainerNotFoundException, "") { - d.SetId("") return nil } return err @@ -118,7 +120,6 @@ func resourceAwsMediaStoreContainerDelete(d *schema.ResourceData, meta interface return err } - d.SetId("") return nil } diff --git a/aws/resource_aws_media_store_container_policy.go b/aws/resource_aws_media_store_container_policy.go new file mode 100644 index 00000000000..b3469fe9c2e --- /dev/null +++ 
b/aws/resource_aws_media_store_container_policy.go @@ -0,0 +1,103 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/mediastore" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsMediaStoreContainerPolicy() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsMediaStoreContainerPolicyPut, + Read: resourceAwsMediaStoreContainerPolicyRead, + Update: resourceAwsMediaStoreContainerPolicyPut, + Delete: resourceAwsMediaStoreContainerPolicyDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "container_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "policy": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateIAMPolicyJson, + DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs, + }, + }, + } +} + +func resourceAwsMediaStoreContainerPolicyPut(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).mediastoreconn + + input := &mediastore.PutContainerPolicyInput{ + ContainerName: aws.String(d.Get("container_name").(string)), + Policy: aws.String(d.Get("policy").(string)), + } + + _, err := conn.PutContainerPolicy(input) + if err != nil { + return err + } + + d.SetId(d.Get("container_name").(string)) + return resourceAwsMediaStoreContainerPolicyRead(d, meta) +} + +func resourceAwsMediaStoreContainerPolicyRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).mediastoreconn + + input := &mediastore.GetContainerPolicyInput{ + ContainerName: aws.String(d.Id()), + } + + resp, err := conn.GetContainerPolicy(input) + if err != nil { + if isAWSErr(err, mediastore.ErrCodeContainerNotFoundException, "") { + log.Printf("[WARN] MediaContainer Policy %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + if isAWSErr(err, mediastore.ErrCodePolicyNotFoundException, "") { + log.Printf("[WARN] MediaContainer Policy %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + d.Set("container_name", d.Id()) + d.Set("policy", resp.Policy) + return nil +} + +func resourceAwsMediaStoreContainerPolicyDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).mediastoreconn + + input := &mediastore.DeleteContainerPolicyInput{ + ContainerName: aws.String(d.Id()), + } + + _, err := conn.DeleteContainerPolicy(input) + if err != nil { + if isAWSErr(err, mediastore.ErrCodeContainerNotFoundException, "") { + return nil + } + if isAWSErr(err, mediastore.ErrCodePolicyNotFoundException, "") { + return nil + } + // if isAWSErr(err, mediastore.ErrCodeContainerInUseException, "Container must be ACTIVE in order to perform this operation") { + // return nil + // } + return err + } + + return nil +} diff --git a/aws/resource_aws_media_store_container_policy_test.go b/aws/resource_aws_media_store_container_policy_test.go new file mode 100644 index 00000000000..6bfe95ff190 --- /dev/null +++ b/aws/resource_aws_media_store_container_policy_test.go @@ -0,0 +1,144 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/mediastore" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSMediaStoreContainerPolicy_basic(t *testing.T) { + rname := 
acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsMediaStoreContainerPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMediaStoreContainerPolicyConfig(rname, acctest.RandString(5)), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsMediaStoreContainerPolicyExists("aws_media_store_container_policy.test"), + resource.TestCheckResourceAttrSet("aws_media_store_container_policy.test", "container_name"), + resource.TestCheckResourceAttrSet("aws_media_store_container_policy.test", "policy"), + ), + }, + { + Config: testAccMediaStoreContainerPolicyConfig(rname, acctest.RandString(5)), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsMediaStoreContainerPolicyExists("aws_media_store_container_policy.test"), + resource.TestCheckResourceAttrSet("aws_media_store_container_policy.test", "container_name"), + resource.TestCheckResourceAttrSet("aws_media_store_container_policy.test", "policy"), + ), + }, + }, + }) +} + +func TestAccAWSMediaStoreContainerPolicy_import(t *testing.T) { + resourceName := "aws_media_store_container_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsMediaStoreContainerPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMediaStoreContainerPolicyConfig(acctest.RandString(5), acctest.RandString(5)), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsMediaStoreContainerPolicyDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).mediastoreconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_media_store_container_policy" { + continue + } + + input := &mediastore.GetContainerPolicyInput{ + ContainerName: aws.String(rs.Primary.ID), + } + + _, err := conn.GetContainerPolicy(input) + if err != nil { + if isAWSErr(err, mediastore.ErrCodeContainerNotFoundException, "") { + return nil + } + if isAWSErr(err, mediastore.ErrCodePolicyNotFoundException, "") { + return nil + } + if isAWSErr(err, mediastore.ErrCodeContainerInUseException, "Container must be ACTIVE in order to perform this operation") { + return nil + } + return err + } + + return fmt.Errorf("Expected MediaStore Container Policy to be destroyed, %s found", rs.Primary.ID) + } + return nil +} + +func testAccCheckAwsMediaStoreContainerPolicyExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + conn := testAccProvider.Meta().(*AWSClient).mediastoreconn + + input := &mediastore.GetContainerPolicyInput{ + ContainerName: aws.String(rs.Primary.ID), + } + + _, err := conn.GetContainerPolicy(input) + if err != nil { + return err + } + + return nil + } +} + +func testAccMediaStoreContainerPolicyConfig(rName, sid string) string { + return fmt.Sprintf(` +data "aws_region" "current" {} + +data "aws_caller_identity" "current" {} + +resource "aws_media_store_container" "test" { + name = "tf_mediastore_%s" +} + +resource "aws_media_store_container_policy" "test" { + container_name = "${aws_media_store_container.test.name}" + policy = < ` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckAwsMqBrokerDestroy, @@ -324,6 +333,9 @@ func TestAccAWSMqBroker_allFieldsDefaultVpc(t *testing.T) { resource.TestCheckResourceAttr("aws_mq_broker.test", "maintenance_window_start_time.0.day_of_week", "TUESDAY"), resource.TestCheckResourceAttr("aws_mq_broker.test", "maintenance_window_start_time.0.time_of_day", "02:00"), resource.TestCheckResourceAttr("aws_mq_broker.test", "maintenance_window_start_time.0.time_zone", "CET"), + resource.TestCheckResourceAttr("aws_mq_broker.test", "logs.#", "1"), + resource.TestCheckResourceAttr("aws_mq_broker.test", "logs.0.general", "false"), + resource.TestCheckResourceAttr("aws_mq_broker.test", "logs.0.audit", "false"), resource.TestCheckResourceAttr("aws_mq_broker.test", "publicly_accessible", "true"), resource.TestCheckResourceAttr("aws_mq_broker.test", "security_groups.#", "2"), resource.TestCheckResourceAttr("aws_mq_broker.test", "subnet_ids.#", "2"), @@ -344,6 +356,8 @@ func TestAccAWSMqBroker_allFieldsDefaultVpc(t *testing.T) { resource.TestCheckResourceAttr("aws_mq_broker.test", "instances.#", "2"), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.console_url", regexp.MustCompile(`^https://[a-f0-9-]+\.mq.[a-z0-9-]+.amazonaws.com:8162$`)), + resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.ip_address", + regexp.MustCompile(`^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$`)), resource.TestCheckResourceAttr("aws_mq_broker.test", "instances.0.endpoints.#", "5"), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.endpoints.0", regexp.MustCompile(`^ssl://[a-z0-9-\.]+:61617$`)), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.endpoints.1", regexp.MustCompile(`^amqp\+ssl://[a-z0-9-\.]+:5671$`)), @@ -352,6 +366,8 @@ func TestAccAWSMqBroker_allFieldsDefaultVpc(t *testing.T) { resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.endpoints.4", regexp.MustCompile(`^wss://[a-z0-9-\.]+:61619$`)), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.1.console_url", regexp.MustCompile(`^https://[a-f0-9-]+\.mq.[a-z0-9-]+.amazonaws.com:8162$`)), + resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.1.ip_address", + regexp.MustCompile(`^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$`)), resource.TestCheckResourceAttr("aws_mq_broker.test", "instances.1.endpoints.#", "5"), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.1.endpoints.0", regexp.MustCompile(`^ssl://[a-z0-9-\.]+:61617$`)), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.1.endpoints.1", regexp.MustCompile(`^amqp\+ssl://[a-z0-9-\.]+:5671$`)), @@ -404,7 +420,7 @@ func TestAccAWSMqBroker_allFieldsCustomVpc(t *testing.T) { ` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsMqBrokerDestroy, @@ -426,6 +442,9 @@ func TestAccAWSMqBroker_allFieldsCustomVpc(t *testing.T) { resource.TestCheckResourceAttr("aws_mq_broker.test", "maintenance_window_start_time.0.day_of_week", "TUESDAY"), resource.TestCheckResourceAttr("aws_mq_broker.test", "maintenance_window_start_time.0.time_of_day", "02:00"), resource.TestCheckResourceAttr("aws_mq_broker.test", "maintenance_window_start_time.0.time_zone", "CET"), + resource.TestCheckResourceAttr("aws_mq_broker.test", "logs.#", "1"), + resource.TestCheckResourceAttr("aws_mq_broker.test", "logs.0.general", "true"), + resource.TestCheckResourceAttr("aws_mq_broker.test", "logs.0.audit", "true"), 
resource.TestCheckResourceAttr("aws_mq_broker.test", "publicly_accessible", "true"), resource.TestCheckResourceAttr("aws_mq_broker.test", "security_groups.#", "2"), resource.TestCheckResourceAttr("aws_mq_broker.test", "subnet_ids.#", "2"), @@ -446,6 +465,8 @@ func TestAccAWSMqBroker_allFieldsCustomVpc(t *testing.T) { resource.TestCheckResourceAttr("aws_mq_broker.test", "instances.#", "2"), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.console_url", regexp.MustCompile(`^https://[a-f0-9-]+\.mq.[a-z0-9-]+.amazonaws.com:8162$`)), + resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.ip_address", + regexp.MustCompile(`^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$`)), resource.TestCheckResourceAttr("aws_mq_broker.test", "instances.0.endpoints.#", "5"), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.endpoints.0", regexp.MustCompile(`^ssl://[a-z0-9-\.]+:61617$`)), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.endpoints.1", regexp.MustCompile(`^amqp\+ssl://[a-z0-9-\.]+:5671$`)), @@ -454,6 +475,8 @@ func TestAccAWSMqBroker_allFieldsCustomVpc(t *testing.T) { resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.0.endpoints.4", regexp.MustCompile(`^wss://[a-z0-9-\.]+:61619$`)), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.1.console_url", regexp.MustCompile(`^https://[a-f0-9-]+\.mq.[a-z0-9-]+.amazonaws.com:8162$`)), + resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.1.ip_address", + regexp.MustCompile(`^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$`)), resource.TestCheckResourceAttr("aws_mq_broker.test", "instances.1.endpoints.#", "5"), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.1.endpoints.0", regexp.MustCompile(`^ssl://[a-z0-9-\.]+:61617$`)), resource.TestMatchResourceAttr("aws_mq_broker.test", "instances.1.endpoints.1", regexp.MustCompile(`^amqp\+ssl://[a-z0-9-\.]+:5671$`)), @@ -492,7 +515,7 @@ func TestAccAWSMqBroker_updateUsers(t *testing.T) { sgName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) brokerName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsMqBrokerDestroy, @@ -590,6 +613,9 @@ resource "aws_mq_broker" "test" { engine_version = "5.15.0" host_instance_type = "mq.t2.micro" security_groups = ["${aws_security_group.test.id}"] + logs { + general = true + } user { username = "Test" password = "TestTest1234" @@ -719,6 +745,10 @@ resource "aws_mq_broker" "test" { engine_type = "ActiveMQ" engine_version = "5.15.0" host_instance_type = "mq.t2.micro" + logs { + general = true + audit = true + } maintenance_window_start_time { day_of_week = "TUESDAY" time_of_day = "02:00" diff --git a/aws/resource_aws_mq_configuration_test.go b/aws/resource_aws_mq_configuration_test.go index b81db954250..18868146278 100644 --- a/aws/resource_aws_mq_configuration_test.go +++ b/aws/resource_aws_mq_configuration_test.go @@ -14,7 +14,7 @@ import ( func TestAccAWSMqConfiguration_basic(t *testing.T) { configurationName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsMqConfigurationDestroy, @@ -50,7 +50,7 @@ func TestAccAWSMqConfiguration_basic(t *testing.T) { func TestAccAWSMqConfiguration_withData(t *testing.T) { 
configurationName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsMqConfigurationDestroy, diff --git a/aws/resource_aws_nat_gateway.go b/aws/resource_aws_nat_gateway.go index 76045c99213..d227ca2f917 100644 --- a/aws/resource_aws_nat_gateway.go +++ b/aws/resource_aws_nat_gateway.go @@ -24,33 +24,30 @@ func resourceAwsNatGateway() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "allocation_id": &schema.Schema{ + "allocation_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "subnet_id": &schema.Schema{ + "subnet_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "network_interface_id": &schema.Schema{ + "network_interface_id": { Type: schema.TypeString, - Optional: true, Computed: true, }, - "private_ip": &schema.Schema{ + "private_ip": { Type: schema.TypeString, - Optional: true, Computed: true, }, - "public_ip": &schema.Schema{ + "public_ip": { Type: schema.TypeString, - Optional: true, Computed: true, }, diff --git a/aws/resource_aws_nat_gateway_test.go b/aws/resource_aws_nat_gateway_test.go index f3645dc3080..8fd9ccef778 100644 --- a/aws/resource_aws_nat_gateway_test.go +++ b/aws/resource_aws_nat_gateway_test.go @@ -33,12 +33,17 @@ func testSweepNatGateways(region string) error { Name: aws.String("tag-value"), Values: []*string{ aws.String("terraform-testacc-*"), + aws.String("tf-acc-test-*"), }, }, }, } resp, err := conn.DescribeNatGateways(req) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EC2 NAT Gateway sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error describing NAT Gateways: %s", err) } @@ -63,49 +68,61 @@ func testSweepNatGateways(region string) error { func TestAccAWSNatGateway_basic(t *testing.T) { var natGateway ec2.NatGateway + resourceName := "aws_nat_gateway.gateway" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_nat_gateway.gateway", Providers: testAccProviders, CheckDestroy: testAccCheckNatGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccNatGatewayConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckNatGatewayExists("aws_nat_gateway.gateway", &natGateway), + testAccCheckNatGatewayExists(resourceName, &natGateway), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSNatGateway_tags(t *testing.T) { var natGateway ec2.NatGateway + resourceName := "aws_nat_gateway.gateway" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckNatGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccNatGatewayConfigTags, Check: resource.ComposeTestCheckFunc( - testAccCheckNatGatewayExists("aws_nat_gateway.gateway", &natGateway), + testAccCheckNatGatewayExists(resourceName, &natGateway), testAccCheckTags(&natGateway.Tags, "Name", "terraform-testacc-nat-gw-tags"), testAccCheckTags(&natGateway.Tags, "foo", "bar"), ), }, - resource.TestStep{ + { Config: testAccNatGatewayConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( - testAccCheckNatGatewayExists("aws_nat_gateway.gateway", &natGateway), + testAccCheckNatGatewayExists(resourceName, 
&natGateway), testAccCheckTags(&natGateway.Tags, "Name", "terraform-testacc-nat-gw-tags"), testAccCheckTags(&natGateway.Tags, "foo", ""), testAccCheckTags(&natGateway.Tags, "bar", "baz"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } diff --git a/aws/resource_aws_neptune_cluster.go b/aws/resource_aws_neptune_cluster.go new file mode 100644 index 00000000000..3ced18b515c --- /dev/null +++ b/aws/resource_aws_neptune_cluster.go @@ -0,0 +1,744 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsNeptuneCluster() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNeptuneClusterCreate, + Read: resourceAwsNeptuneClusterRead, + Update: resourceAwsNeptuneClusterUpdate, + Delete: resourceAwsNeptuneClusterDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(120 * time.Minute), + Update: schema.DefaultTimeout(120 * time.Minute), + Delete: schema.DefaultTimeout(120 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + + // apply_immediately is used to determine when the update modifications + // take place. + "apply_immediately": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "availability_zones": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + ForceNew: true, + Computed: true, + Set: schema.HashString, + }, + + "backup_retention_period": { + Type: schema.TypeInt, + Optional: true, + Default: 1, + ValidateFunc: validation.IntAtMost(35), + }, + + "cluster_identifier": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"cluster_identifier_prefix"}, + ValidateFunc: validateNeptuneIdentifier, + }, + + "cluster_identifier_prefix": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validateNeptuneIdentifierPrefix, + }, + + "cluster_members": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Set: schema.HashString, + }, + + "cluster_resource_id": { + Type: schema.TypeString, + Computed: true, + }, + + "endpoint": { + Type: schema.TypeString, + Computed: true, + }, + + "engine": { + Type: schema.TypeString, + Optional: true, + Default: "neptune", + ForceNew: true, + ValidateFunc: validateNeptuneEngine(), + }, + + "engine_version": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "final_snapshot_identifier": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + es = append(es, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + es = append(es, fmt.Errorf("%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + es = append(es, fmt.Errorf("%q cannot end in a hyphen", k)) + } + return + }, + }, + + 
"hosted_zone_id": { + Type: schema.TypeString, + Computed: true, + }, + + "iam_roles": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "iam_database_authentication_enabled": { + Type: schema.TypeBool, + Optional: true, + }, + + "kms_key_arn": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validateArn, + }, + + "neptune_subnet_group_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "neptune_cluster_parameter_group_name": { + Type: schema.TypeString, + Optional: true, + Default: "default.neptune1", + }, + + "port": { + Type: schema.TypeInt, + Optional: true, + Default: 8182, + ForceNew: true, + }, + + "preferred_backup_window": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validateOnceADayWindowFormat, + }, + + "preferred_maintenance_window": { + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: func(val interface{}) string { + if val == nil { + return "" + } + return strings.ToLower(val.(string)) + }, + ValidateFunc: validateOnceAWeekWindowFormat, + }, + + "reader_endpoint": { + Type: schema.TypeString, + Computed: true, + }, + + "replication_source_identifier": { + Type: schema.TypeString, + Optional: true, + }, + + "storage_encrypted": { + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + + "skip_final_snapshot": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "snapshot_identifier": { + Type: schema.TypeString, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "tags": tagsSchema(), + + "vpc_security_group_ids": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func resourceAwsNeptuneClusterCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + tags := tagsFromMapNeptune(d.Get("tags").(map[string]interface{})) + + // Check if any of the parameters that require a cluster modification after creation are set + clusterUpdate := false + restoreDBClusterFromSnapshot := false + if _, ok := d.GetOk("snapshot_identifier"); ok { + restoreDBClusterFromSnapshot = true + } + + if v, ok := d.GetOk("cluster_identifier"); ok { + d.Set("cluster_identifier", v.(string)) + } else { + if v, ok := d.GetOk("cluster_identifier_prefix"); ok { + d.Set("cluster_identifier", resource.PrefixedUniqueId(v.(string))) + } else { + d.Set("cluster_identifier", resource.PrefixedUniqueId("tf-")) + } + } + + createDbClusterInput := &neptune.CreateDBClusterInput{ + DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + Engine: aws.String(d.Get("engine").(string)), + Port: aws.Int64(int64(d.Get("port").(int))), + StorageEncrypted: aws.Bool(d.Get("storage_encrypted").(bool)), + Tags: tags, + } + restoreDBClusterFromSnapshotInput := &neptune.RestoreDBClusterFromSnapshotInput{ + DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + Engine: aws.String(d.Get("engine").(string)), + Port: aws.Int64(int64(d.Get("port").(int))), + SnapshotIdentifier: aws.String(d.Get("snapshot_identifier").(string)), + Tags: tags, + } + + if attr := d.Get("availability_zones").(*schema.Set); attr.Len() > 0 { + createDbClusterInput.AvailabilityZones = expandStringList(attr.List()) + restoreDBClusterFromSnapshotInput.AvailabilityZones = 
expandStringList(attr.List()) + } + + if attr, ok := d.GetOk("backup_retention_period"); ok { + createDbClusterInput.BackupRetentionPeriod = aws.Int64(int64(attr.(int))) + if restoreDBClusterFromSnapshot { + clusterUpdate = true + } + } + + if attr, ok := d.GetOk("engine_version"); ok { + createDbClusterInput.EngineVersion = aws.String(attr.(string)) + restoreDBClusterFromSnapshotInput.EngineVersion = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("iam_database_authentication_enabled"); ok { + createDbClusterInput.EnableIAMDatabaseAuthentication = aws.Bool(attr.(bool)) + restoreDBClusterFromSnapshotInput.EnableIAMDatabaseAuthentication = aws.Bool(attr.(bool)) + } + + if attr, ok := d.GetOk("kms_key_arn"); ok { + createDbClusterInput.KmsKeyId = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("neptune_cluster_parameter_group_name"); ok { + createDbClusterInput.DBClusterParameterGroupName = aws.String(attr.(string)) + if restoreDBClusterFromSnapshot { + clusterUpdate = true + } + } + + if attr, ok := d.GetOk("neptune_subnet_group_name"); ok { + createDbClusterInput.DBSubnetGroupName = aws.String(attr.(string)) + restoreDBClusterFromSnapshotInput.DBSubnetGroupName = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("preferred_backup_window"); ok { + createDbClusterInput.PreferredBackupWindow = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("preferred_maintenance_window"); ok { + createDbClusterInput.PreferredMaintenanceWindow = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("replication_source_identifier"); ok { + createDbClusterInput.ReplicationSourceIdentifier = aws.String(attr.(string)) + } + + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + createDbClusterInput.VpcSecurityGroupIds = expandStringList(attr.List()) + if restoreDBClusterFromSnapshot { + clusterUpdate = true + } + restoreDBClusterFromSnapshotInput.VpcSecurityGroupIds = expandStringList(attr.List()) + } + + if restoreDBClusterFromSnapshot { + log.Printf("[DEBUG] Neptune Cluster restore from snapshot configuration: %s", restoreDBClusterFromSnapshotInput) + } else { + log.Printf("[DEBUG] Neptune Cluster create options: %s", createDbClusterInput) + } + + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + if restoreDBClusterFromSnapshot { + _, err = conn.RestoreDBClusterFromSnapshot(restoreDBClusterFromSnapshotInput) + } else { + _, err = conn.CreateDBCluster(createDbClusterInput) + } + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("error creating Neptune Cluster: %s", err) + } + + d.SetId(d.Get("cluster_identifier").(string)) + + log.Printf("[INFO] Neptune Cluster ID: %s", d.Id()) + log.Println("[INFO] Waiting for Neptune Cluster to be available") + + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsNeptuneClusterCreatePendingStates, + Target: []string{"available"}, + Refresh: resourceAwsNeptuneClusterStateRefreshFunc(d, meta), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for Neptune Cluster state to be \"available\": %s", err) + } + + if v, ok := d.GetOk("iam_roles"); ok { + for _, role := range 
v.(*schema.Set).List() { + err := setIamRoleToNeptuneCluster(d.Id(), role.(string), conn) + if err != nil { + return err + } + } + } + + if clusterUpdate { + return resourceAwsNeptuneClusterUpdate(d, meta) + } + + return resourceAwsNeptuneClusterRead(d, meta) + +} + +func resourceAwsNeptuneClusterRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + resp, err := conn.DescribeDBClusters(&neptune.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(d.Id()), + }) + + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBClusterNotFoundFault, "") { + d.SetId("") + log.Printf("[DEBUG] Neptune Cluster (%s) not found", d.Id()) + return nil + } + log.Printf("[DEBUG] Error describing Neptune Cluster (%s) when waiting: %s", d.Id(), err) + return err + } + + var dbc *neptune.DBCluster + for _, v := range resp.DBClusters { + if aws.StringValue(v.DBClusterIdentifier) == d.Id() { + dbc = v + } + } + + if dbc == nil { + log.Printf("[WARN] Neptune Cluster (%s) not found", d.Id()) + d.SetId("") + return nil + } + + return flattenAwsNeptuneClusterResource(d, meta, dbc) +} + +func flattenAwsNeptuneClusterResource(d *schema.ResourceData, meta interface{}, dbc *neptune.DBCluster) error { + conn := meta.(*AWSClient).neptuneconn + + if err := d.Set("availability_zones", aws.StringValueSlice(dbc.AvailabilityZones)); err != nil { + return fmt.Errorf("Error saving AvailabilityZones to state for Neptune Cluster (%s): %s", d.Id(), err) + } + + d.Set("backup_retention_period", dbc.BackupRetentionPeriod) + d.Set("cluster_identifier", dbc.DBClusterIdentifier) + d.Set("cluster_resource_id", dbc.DbClusterResourceId) + d.Set("endpoint", dbc.Endpoint) + d.Set("engine_version", dbc.EngineVersion) + d.Set("engine", dbc.Engine) + d.Set("hosted_zone_id", dbc.HostedZoneId) + d.Set("iam_database_authentication_enabled", dbc.IAMDatabaseAuthenticationEnabled) + d.Set("kms_key_arn", dbc.KmsKeyId) + d.Set("neptune_cluster_parameter_group_name", dbc.DBClusterParameterGroup) + d.Set("neptune_subnet_group_name", dbc.DBSubnetGroup) + d.Set("port", dbc.Port) + d.Set("preferred_backup_window", dbc.PreferredBackupWindow) + d.Set("preferred_maintenance_window", dbc.PreferredMaintenanceWindow) + d.Set("reader_endpoint", dbc.ReaderEndpoint) + d.Set("replication_source_identifier", dbc.ReplicationSourceIdentifier) + d.Set("storage_encrypted", dbc.StorageEncrypted) + + var sg []string + for _, g := range dbc.VpcSecurityGroups { + sg = append(sg, aws.StringValue(g.VpcSecurityGroupId)) + } + if err := d.Set("vpc_security_group_ids", sg); err != nil { + return fmt.Errorf("Error saving VPC Security Group IDs to state for Neptune Cluster (%s): %s", d.Id(), err) + } + + var cm []string + for _, m := range dbc.DBClusterMembers { + cm = append(cm, aws.StringValue(m.DBInstanceIdentifier)) + } + if err := d.Set("cluster_members", cm); err != nil { + return fmt.Errorf("Error saving Neptune Cluster Members to state for Neptune Cluster (%s): %s", d.Id(), err) + } + + var roles []string + for _, r := range dbc.AssociatedRoles { + roles = append(roles, aws.StringValue(r.RoleArn)) + } + + if err := d.Set("iam_roles", roles); err != nil { + return fmt.Errorf("Error saving IAM Roles to state for Neptune Cluster (%s): %s", d.Id(), err) + } + + arn := aws.StringValue(dbc.DBClusterArn) + d.Set("arn", arn) + + if err := saveTagsNeptune(conn, d, arn); err != nil { + return fmt.Errorf("Failed to save tags for Neptune Cluster (%s): %s", aws.StringValue(dbc.DBClusterIdentifier), err) + } + + return nil +} + +func 
resourceAwsNeptuneClusterUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + requestUpdate := false + + req := &neptune.ModifyDBClusterInput{ + ApplyImmediately: aws.Bool(d.Get("apply_immediately").(bool)), + DBClusterIdentifier: aws.String(d.Id()), + } + + if d.HasChange("vpc_security_group_ids") { + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + req.VpcSecurityGroupIds = expandStringList(attr.List()) + } else { + req.VpcSecurityGroupIds = []*string{} + } + requestUpdate = true + } + + if d.HasChange("preferred_backup_window") { + req.PreferredBackupWindow = aws.String(d.Get("preferred_backup_window").(string)) + requestUpdate = true + } + + if d.HasChange("preferred_maintenance_window") { + req.PreferredMaintenanceWindow = aws.String(d.Get("preferred_maintenance_window").(string)) + requestUpdate = true + } + + if d.HasChange("backup_retention_period") { + req.BackupRetentionPeriod = aws.Int64(int64(d.Get("backup_retention_period").(int))) + requestUpdate = true + } + + if d.HasChange("neptune_cluster_parameter_group_name") { + d.SetPartial("neptune_cluster_parameter_group_name") + req.DBClusterParameterGroupName = aws.String(d.Get("neptune_cluster_parameter_group_name").(string)) + requestUpdate = true + } + + if d.HasChange("iam_database_authentication_enabled") { + req.EnableIAMDatabaseAuthentication = aws.Bool(d.Get("iam_database_authentication_enabled").(bool)) + requestUpdate = true + } + + if requestUpdate { + err := resource.Retry(5*time.Minute, func() *resource.RetryError { + _, err := conn.ModifyDBCluster(req) + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + if isAWSErr(err, neptune.ErrCodeInvalidDBClusterStateFault, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Failed to modify Neptune Cluster (%s): %s", d.Id(), err) + } + + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsNeptuneClusterUpdatePendingStates, + Target: []string{"available"}, + Refresh: resourceAwsNeptuneClusterStateRefreshFunc(d, meta), + Timeout: d.Timeout(schema.TimeoutUpdate), + MinTimeout: 10 * time.Second, + Delay: 10 * time.Second, + } + + log.Printf("[INFO] Waiting for Neptune Cluster (%s) to modify", d.Id()) + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for Neptune Cluster (%s) to modify: %s", d.Id(), err) + } + } + + if d.HasChange("iam_roles") { + oraw, nraw := d.GetChange("iam_roles") + if oraw == nil { + oraw = new(schema.Set) + } + if nraw == nil { + nraw = new(schema.Set) + } + + os := oraw.(*schema.Set) + ns := nraw.(*schema.Set) + removeRoles := os.Difference(ns) + enableRoles := ns.Difference(os) + + for _, role := range enableRoles.List() { + err := setIamRoleToNeptuneCluster(d.Id(), role.(string), conn) + if err != nil { + return err + } + } + + for _, role := range removeRoles.List() { + err := removeIamRoleFromNeptuneCluster(d.Id(), role.(string), conn) + if err != nil { + return err + } + } + } + + if arn, ok := d.GetOk("arn"); ok { + if err := setTagsNeptune(conn, d, arn.(string)); err != nil { + return err + } else { + d.SetPartial("tags") + } + } + + return resourceAwsNeptuneClusterRead(d, meta) +} + +func resourceAwsNeptuneClusterDelete(d *schema.ResourceData, meta interface{}) error { + conn := 
meta.(*AWSClient).neptuneconn + log.Printf("[DEBUG] Destroying Neptune Cluster (%s)", d.Id()) + + deleteOpts := neptune.DeleteDBClusterInput{ + DBClusterIdentifier: aws.String(d.Id()), + } + + skipFinalSnapshot := d.Get("skip_final_snapshot").(bool) + deleteOpts.SkipFinalSnapshot = aws.Bool(skipFinalSnapshot) + + if !skipFinalSnapshot { + if name, present := d.GetOk("final_snapshot_identifier"); present { + deleteOpts.FinalDBSnapshotIdentifier = aws.String(name.(string)) + } else { + return fmt.Errorf("Neptune Cluster final_snapshot_identifier must be set when skip_final_snapshot is false") + } + } + + log.Printf("[DEBUG] Neptune Cluster delete options: %s", deleteOpts) + + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.DeleteDBCluster(&deleteOpts) + if err != nil { + if isAWSErr(err, neptune.ErrCodeInvalidDBClusterStateFault, "is not currently in the available state") { + return resource.RetryableError(err) + } + if isAWSErr(err, neptune.ErrCodeDBClusterNotFoundFault, "") { + return nil + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Neptune Cluster cannot be deleted: %s", err) + } + + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsNeptuneClusterDeletePendingStates, + Target: []string{"destroyed"}, + Refresh: resourceAwsNeptuneClusterStateRefreshFunc(d, meta), + Timeout: d.Timeout(schema.TimeoutDelete), + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error deleting Neptune Cluster (%s): %s", d.Id(), err) + } + + return nil +} + +func resourceAwsNeptuneClusterStateRefreshFunc( + d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + conn := meta.(*AWSClient).neptuneconn + + resp, err := conn.DescribeDBClusters(&neptune.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(d.Id()), + }) + + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBClusterNotFoundFault, "") { + log.Printf("[DEBUG] Neptune Cluster (%s) not found", d.Id()) + return 42, "destroyed", nil + } + log.Printf("[DEBUG] Error on retrieving Neptune Cluster (%s) when waiting: %s", d.Id(), err) + return nil, "", err + } + + var dbc *neptune.DBCluster + + for _, v := range resp.DBClusters { + if aws.StringValue(v.DBClusterIdentifier) == d.Id() { + dbc = v + } + } + + if dbc == nil { + return 42, "destroyed", nil + } + + if dbc.Status != nil { + log.Printf("[DEBUG] Neptune Cluster status (%s): %s", d.Id(), aws.StringValue(dbc.Status)) + } + + return dbc, aws.StringValue(dbc.Status), nil + } +} + +func setIamRoleToNeptuneCluster(clusterIdentifier string, roleArn string, conn *neptune.Neptune) error { + params := &neptune.AddRoleToDBClusterInput{ + DBClusterIdentifier: aws.String(clusterIdentifier), + RoleArn: aws.String(roleArn), + } + _, err := conn.AddRoleToDBCluster(params) + if err != nil { + return err + } + + return nil +} + +func removeIamRoleFromNeptuneCluster(clusterIdentifier string, roleArn string, conn *neptune.Neptune) error { + params := &neptune.RemoveRoleFromDBClusterInput{ + DBClusterIdentifier: aws.String(clusterIdentifier), + RoleArn: aws.String(roleArn), + } + _, err := conn.RemoveRoleFromDBCluster(params) + if err != nil { + return err + } + + return nil +} + +var resourceAwsNeptuneClusterCreatePendingStates = []string{ + "creating", + "backing-up", + "modifying", + "preparing-data-migration", + "migrating",
+} + +var resourceAwsNeptuneClusterUpdatePendingStates = []string{ + "backing-up", + "modifying", + "configuring-iam-database-auth", +} + +var resourceAwsNeptuneClusterDeletePendingStates = []string{ + "available", + "deleting", + "backing-up", + "modifying", +} diff --git a/aws/resource_aws_neptune_cluster_instance.go b/aws/resource_aws_neptune_cluster_instance.go new file mode 100644 index 00000000000..6cb8ec42a41 --- /dev/null +++ b/aws/resource_aws_neptune_cluster_instance.go @@ -0,0 +1,529 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsNeptuneClusterInstance() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNeptuneClusterInstanceCreate, + Read: resourceAwsNeptuneClusterInstanceRead, + Update: resourceAwsNeptuneClusterInstanceUpdate, + Delete: resourceAwsNeptuneClusterInstanceDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(90 * time.Minute), + Update: schema.DefaultTimeout(90 * time.Minute), + Delete: schema.DefaultTimeout(90 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "address": { + Type: schema.TypeString, + Computed: true, + }, + + "apply_immediately": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "auto_minor_version_upgrade": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "availability_zone": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "cluster_identifier": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "dbi_resource_id": { + Type: schema.TypeString, + Computed: true, + }, + + "endpoint": { + Type: schema.TypeString, + Computed: true, + }, + + "engine": { + Type: schema.TypeString, + Optional: true, + Default: "neptune", + ForceNew: true, + ValidateFunc: validateNeptuneEngine(), + }, + + "engine_version": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "identifier": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"identifier_prefix"}, + ValidateFunc: validateNeptuneIdentifier, + }, + + "identifier_prefix": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"identifier"}, + ValidateFunc: validateNeptuneIdentifierPrefix, + }, + + "instance_class": { + Type: schema.TypeString, + Required: true, + }, + + "kms_key_arn": { + Type: schema.TypeString, + Computed: true, + }, + + "neptune_parameter_group_name": { + Type: schema.TypeString, + Optional: true, + Default: "default.neptune1", + }, + + "neptune_subnet_group_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + + "port": { + Type: schema.TypeInt, + Optional: true, + Default: 8182, + ForceNew: true, + }, + + "preferred_backup_window": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validateOnceADayWindowFormat, + }, + + "preferred_maintenance_window": { + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: func(val interface{}) string { + if val == nil { + return "" + } + return 
strings.ToLower(val.(string)) + }, + ValidateFunc: validateOnceAWeekWindowFormat, + }, + + "promotion_tier": { + Type: schema.TypeInt, + Optional: true, + Default: 0, + }, + + "publicly_accessible": { + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + + "storage_encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + + "tags": tagsSchema(), + + "writer": { + Type: schema.TypeBool, + Computed: true, + }, + }, + } +} + +func resourceAwsNeptuneClusterInstanceCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + tags := tagsFromMapNeptune(d.Get("tags").(map[string]interface{})) + + createOpts := &neptune.CreateDBInstanceInput{ + DBInstanceClass: aws.String(d.Get("instance_class").(string)), + DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), + Engine: aws.String(d.Get("engine").(string)), + PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), + PromotionTier: aws.Int64(int64(d.Get("promotion_tier").(int))), + AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), + Tags: tags, + } + + if attr, ok := d.GetOk("availability_zone"); ok { + createOpts.AvailabilityZone = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("neptune_parameter_group_name"); ok { + createOpts.DBParameterGroupName = aws.String(attr.(string)) + } + + if v, ok := d.GetOk("identifier"); ok { + createOpts.DBInstanceIdentifier = aws.String(v.(string)) + } else { + if v, ok := d.GetOk("identifier_prefix"); ok { + createOpts.DBInstanceIdentifier = aws.String(resource.PrefixedUniqueId(v.(string))) + } else { + createOpts.DBInstanceIdentifier = aws.String(resource.PrefixedUniqueId("tf-")) + } + } + + if attr, ok := d.GetOk("neptune_subnet_group_name"); ok { + createOpts.DBSubnetGroupName = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("engine_version"); ok { + createOpts.EngineVersion = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("preferred_backup_window"); ok { + createOpts.PreferredBackupWindow = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("preferred_maintenance_window"); ok { + createOpts.PreferredMaintenanceWindow = aws.String(attr.(string)) + } + + log.Printf("[DEBUG] Creating Neptune Instance: %s", createOpts) + + var resp *neptune.CreateDBInstanceOutput + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + resp, err = conn.CreateDBInstance(createOpts) + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("error creating Neptune Instance: %s", err) + } + + d.SetId(aws.StringValue(resp.DBInstance.DBInstanceIdentifier)) + + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsNeptuneClusterInstanceCreateUpdatePendingStates, + Target: []string{"available"}, + Refresh: resourceAwsNeptuneInstanceStateRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + return resourceAwsNeptuneClusterInstanceRead(d, meta) +} + +func resourceAwsNeptuneClusterInstanceRead(d *schema.ResourceData, meta interface{}) error { + db, err := resourceAwsNeptuneInstanceRetrieve(d.Id(), meta.(*AWSClient).neptuneconn) + 
if err != nil { + return fmt.Errorf("Error on retrieving Neptune Cluster Instance (%s): %s", d.Id(), err) + } + + if db == nil { + log.Printf("[WARN] Neptune Cluster Instance (%s): not found, removing from state.", d.Id()) + d.SetId("") + return nil + } + + if db.DBClusterIdentifier == nil { + return fmt.Errorf("Cluster identifier is missing from instance (%s)", d.Id()) + } + + // Describe the parent cluster so we can determine whether this instance is the writer. + conn := meta.(*AWSClient).neptuneconn + resp, err := conn.DescribeDBClusters(&neptune.DescribeDBClustersInput{ + DBClusterIdentifier: db.DBClusterIdentifier, + }) + if err != nil { + return fmt.Errorf("Error describing Neptune Cluster (%s) for Cluster Instance (%s): %s", aws.StringValue(db.DBClusterIdentifier), d.Id(), err) + } + + var dbc *neptune.DBCluster + for _, c := range resp.DBClusters { + if aws.StringValue(c.DBClusterIdentifier) == aws.StringValue(db.DBClusterIdentifier) { + dbc = c + } + } + if dbc == nil { + return fmt.Errorf("Error finding Neptune Cluster (%s) for Cluster Instance (%s)", + aws.StringValue(db.DBClusterIdentifier), aws.StringValue(db.DBInstanceIdentifier)) + } + for _, m := range dbc.DBClusterMembers { + if aws.StringValue(db.DBInstanceIdentifier) == aws.StringValue(m.DBInstanceIdentifier) { + d.Set("writer", aws.BoolValue(m.IsClusterWriter)) + } + } + + if db.Endpoint != nil { + address := aws.StringValue(db.Endpoint.Address) + port := int(aws.Int64Value(db.Endpoint.Port)) + d.Set("address", address) + d.Set("endpoint", fmt.Sprintf("%s:%d", address, port)) + d.Set("port", port) + } + + if db.DBSubnetGroup != nil { + d.Set("neptune_subnet_group_name", db.DBSubnetGroup.DBSubnetGroupName) + } + + d.Set("arn", db.DBInstanceArn) + d.Set("auto_minor_version_upgrade", db.AutoMinorVersionUpgrade) + d.Set("availability_zone", db.AvailabilityZone) + d.Set("cluster_identifier", db.DBClusterIdentifier) + d.Set("dbi_resource_id", db.DbiResourceId) + d.Set("engine_version", db.EngineVersion) + d.Set("engine", db.Engine) + d.Set("identifier", db.DBInstanceIdentifier) + d.Set("instance_class", db.DBInstanceClass) + d.Set("kms_key_arn", db.KmsKeyId) + d.Set("preferred_backup_window", db.PreferredBackupWindow) + d.Set("preferred_maintenance_window", db.PreferredMaintenanceWindow) + d.Set("promotion_tier", db.PromotionTier) + d.Set("publicly_accessible", db.PubliclyAccessible) + d.Set("storage_encrypted", db.StorageEncrypted) + + if len(db.DBParameterGroups) > 0 { + d.Set("neptune_parameter_group_name", db.DBParameterGroups[0].DBParameterGroupName) + } + + if err := saveTagsNeptune(conn, d, aws.StringValue(db.DBInstanceArn)); err != nil { + return fmt.Errorf("Failed to save tags for Neptune Cluster Instance (%s): %s", aws.StringValue(db.DBInstanceIdentifier), err) + } + + return nil +} + +func resourceAwsNeptuneClusterInstanceUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + requestUpdate := false + + req := &neptune.ModifyDBInstanceInput{ + ApplyImmediately: aws.Bool(d.Get("apply_immediately").(bool)), + DBInstanceIdentifier: aws.String(d.Id()), + } + + if d.HasChange("neptune_parameter_group_name") { + req.DBParameterGroupName = aws.String(d.Get("neptune_parameter_group_name").(string)) + requestUpdate = true + } + + if d.HasChange("instance_class") { + req.DBInstanceClass = aws.String(d.Get("instance_class").(string)) + requestUpdate = true + } + + if d.HasChange("preferred_backup_window") { + d.SetPartial("preferred_backup_window") + req.PreferredBackupWindow = aws.String(d.Get("preferred_backup_window").(string)) + requestUpdate = true + } + + if d.HasChange("preferred_maintenance_window") { + d.SetPartial("preferred_maintenance_window") + 
req.PreferredMaintenanceWindow = aws.String(d.Get("preferred_maintenance_window").(string)) + requestUpdate = true + } + + if d.HasChange("auto_minor_version_upgrade") { + d.SetPartial("auto_minor_version_upgrade") + req.AutoMinorVersionUpgrade = aws.Bool(d.Get("auto_minor_version_upgrade").(bool)) + requestUpdate = true + } + + if d.HasChange("promotion_tier") { + d.SetPartial("promotion_tier") + req.PromotionTier = aws.Int64(int64(d.Get("promotion_tier").(int))) + requestUpdate = true + } + + log.Printf("[DEBUG] Send Neptune Instance Modification request: %#v", requestUpdate) + if requestUpdate { + log.Printf("[DEBUG] Neptune Instance Modification request: %#v", req) + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.ModifyDBInstance(req) + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Error modifying Neptune Instance %s: %s", d.Id(), err) + } + + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsNeptuneClusterInstanceCreateUpdatePendingStates, + Target: []string{"available"}, + Refresh: resourceAwsNeptuneInstanceStateRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutUpdate), + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return err + } + + } + + if err := setTagsNeptune(conn, d, d.Get("arn").(string)); err != nil { + return err + } + + return resourceAwsNeptuneClusterInstanceRead(d, meta) +} + +func resourceAwsNeptuneClusterInstanceDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + log.Printf("[DEBUG] Neptune Cluster Instance destroy: %v", d.Id()) + + opts := neptune.DeleteDBInstanceInput{DBInstanceIdentifier: aws.String(d.Id())} + + log.Printf("[DEBUG] Neptune Cluster Instance destroy configuration: %s", opts) + if _, err := conn.DeleteDBInstance(&opts); err != nil { + if isAWSErr(err, neptune.ErrCodeDBInstanceNotFoundFault, "") { + return nil + } + return fmt.Errorf("error deleting Neptune cluster instance %q: %s", d.Id(), err) + } + + log.Println("[INFO] Waiting for Neptune Cluster Instance to be destroyed") + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsNeptuneClusterInstanceDeletePendingStates, + Target: []string{}, + Refresh: resourceAwsNeptuneInstanceStateRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutDelete), + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + if _, err := stateConf.WaitForState(); err != nil { + return err + } + + return nil + +} + +var resourceAwsNeptuneClusterInstanceCreateUpdatePendingStates = []string{ + "backing-up", + "creating", + "maintenance", + "modifying", + "rebooting", + "renaming", + "starting", + "upgrading", +} + +var resourceAwsNeptuneClusterInstanceDeletePendingStates = []string{ + "modifying", + "deleting", +} + +func resourceAwsNeptuneInstanceStateRefreshFunc(id string, conn *neptune.Neptune) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + v, err := resourceAwsNeptuneInstanceRetrieve(id, conn) + + if err != nil { + log.Printf("Error on retrieving Neptune Instance when waiting: %s", err) + return nil, "", err + } + + if v == nil { + return nil, "", nil + } + + if v.DBInstanceStatus != nil { + log.Printf("[DEBUG] Neptune 
Instance status for instance %s: %s", id, aws.StringValue(v.DBInstanceStatus)) + } + + return v, aws.StringValue(v.DBInstanceStatus), nil + } +} + +func resourceAwsNeptuneInstanceRetrieve(id string, conn *neptune.Neptune) (*neptune.DBInstance, error) { + opts := neptune.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(id), + } + + log.Printf("[DEBUG] Neptune Instance describe configuration: %#v", opts) + + resp, err := conn.DescribeDBInstances(&opts) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBInstanceNotFoundFault, "") { + return nil, nil + } + return nil, fmt.Errorf("Error retrieving Neptune Instances: %s", err) + } + + if len(resp.DBInstances) != 1 || + aws.StringValue(resp.DBInstances[0].DBInstanceIdentifier) != id { + return nil, nil + } + + return resp.DBInstances[0], nil +} diff --git a/aws/resource_aws_neptune_cluster_instance_test.go b/aws/resource_aws_neptune_cluster_instance_test.go new file mode 100644 index 00000000000..a40eb9fc171 --- /dev/null +++ b/aws/resource_aws_neptune_cluster_instance_test.go @@ -0,0 +1,507 @@ +package aws + +import ( + "fmt" + "regexp" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSNeptuneClusterInstance_basic(t *testing.T) { + var v neptune.DBInstance + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterInstanceConfig(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterInstanceExists("aws_neptune_cluster_instance.cluster_instances", &v), + testAccCheckAWSNeptuneClusterInstanceAttributes(&v), + resource.TestCheckResourceAttrSet("aws_neptune_cluster_instance.cluster_instances", "address"), + resource.TestMatchResourceAttr("aws_neptune_cluster_instance.cluster_instances", "arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:[^:]+:db:.+`)), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "auto_minor_version_upgrade", "true"), + resource.TestMatchResourceAttr("aws_neptune_cluster_instance.cluster_instances", "availability_zone", regexp.MustCompile(fmt.Sprintf("^%s", testAccGetRegion()))), + resource.TestCheckResourceAttrSet("aws_neptune_cluster_instance.cluster_instances", "cluster_identifier"), + resource.TestCheckResourceAttrSet("aws_neptune_cluster_instance.cluster_instances", "dbi_resource_id"), + resource.TestMatchResourceAttr("aws_neptune_cluster_instance.cluster_instances", "endpoint", regexp.MustCompile(`:8182$`)), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "engine", "neptune"), + resource.TestCheckResourceAttrSet("aws_neptune_cluster_instance.cluster_instances", "engine_version"), + resource.TestCheckResourceAttrSet("aws_neptune_cluster_instance.cluster_instances", "identifier"), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "instance_class", "db.r4.large"), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "kms_key_arn", ""), + resource.TestMatchResourceAttr("aws_neptune_cluster_instance.cluster_instances", "neptune_parameter_group_name", regexp.MustCompile(`^tf-cluster-test-group-`)), + 
resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "neptune_subnet_group_name", "default"), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "port", "8182"), + resource.TestCheckResourceAttrSet("aws_neptune_cluster_instance.cluster_instances", "preferred_backup_window"), + resource.TestCheckResourceAttrSet("aws_neptune_cluster_instance.cluster_instances", "preferred_maintenance_window"), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "promotion_tier", "3"), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "publicly_accessible", "false"), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "storage_encrypted", "false"), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "tags.%", "0"), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "writer", "true"), + ), + }, + { + Config: testAccAWSNeptuneClusterInstanceConfigModified(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterInstanceExists("aws_neptune_cluster_instance.cluster_instances", &v), + testAccCheckAWSNeptuneClusterInstanceAttributes(&v), + resource.TestCheckResourceAttr("aws_neptune_cluster_instance.cluster_instances", "auto_minor_version_upgrade", "false"), + ), + }, + }, + }) +} + +func TestAccAWSNeptuneClusterInstance_withaz(t *testing.T) { + var v neptune.DBInstance + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterInstanceConfig_az(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterInstanceExists("aws_neptune_cluster_instance.cluster_instances", &v), + testAccCheckAWSNeptuneClusterInstanceAttributes(&v), + resource.TestMatchResourceAttr("aws_neptune_cluster_instance.cluster_instances", "availability_zone", regexp.MustCompile("^us-west-2[a-z]{1}$")), + ), + }, + }, + }) +} + +func TestAccAWSNeptuneClusterInstance_namePrefix(t *testing.T) { + var v neptune.DBInstance + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterInstanceConfig_namePrefix(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterInstanceExists("aws_neptune_cluster_instance.test", &v), + testAccCheckAWSNeptuneClusterInstanceAttributes(&v), + resource.TestMatchResourceAttr( + "aws_neptune_cluster_instance.test", "identifier", regexp.MustCompile("^tf-cluster-instance-")), + ), + }, + }, + }) +} + +func TestAccAWSNeptuneClusterInstance_withSubnetGroup(t *testing.T) { + var v neptune.DBInstance + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterInstanceConfig_withSubnetGroup(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterInstanceExists("aws_neptune_cluster_instance.test", &v), + testAccCheckAWSNeptuneClusterInstanceAttributes(&v), + resource.TestCheckResourceAttr( + 
"aws_neptune_cluster_instance.test", "neptune_subnet_group_name", fmt.Sprintf("tf-test-%d", rInt)), + ), + }, + }, + }) +} + +func TestAccAWSNeptuneClusterInstance_generatedName(t *testing.T) { + var v neptune.DBInstance + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterInstanceConfig_generatedName(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterInstanceExists("aws_neptune_cluster_instance.test", &v), + testAccCheckAWSNeptuneClusterInstanceAttributes(&v), + resource.TestMatchResourceAttr( + "aws_neptune_cluster_instance.test", "identifier", regexp.MustCompile("^tf-")), + ), + }, + }, + }) +} + +func TestAccAWSNeptuneClusterInstance_kmsKey(t *testing.T) { + var v neptune.DBInstance + keyRegex := regexp.MustCompile("^arn:aws:kms:") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterInstanceConfigKmsKey(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterInstanceExists("aws_neptune_cluster_instance.cluster_instances", &v), + resource.TestMatchResourceAttr( + "aws_neptune_cluster_instance.cluster_instances", "kms_key_arn", keyRegex), + ), + }, + }, + }) +} + +func testAccCheckAWSNeptuneClusterInstanceExists(n string, v *neptune.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Instance not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Neptune Instance ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + resp, err := conn.DescribeDBInstances(&neptune.DescribeDBInstancesInput{ + DBInstanceIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + for _, d := range resp.DBInstances { + if aws.StringValue(d.DBInstanceIdentifier) == rs.Primary.ID { + *v = *d + return nil + } + } + + return fmt.Errorf("Neptune Cluster (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSNeptuneClusterInstanceAttributes(v *neptune.DBInstance) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if aws.StringValue(v.Engine) != "neptune" { + return fmt.Errorf("Incorrect engine, expected \"neptune\": %#v", aws.StringValue(v.Engine)) + } + + if !strings.HasPrefix(aws.StringValue(v.DBClusterIdentifier), "tf-neptune-cluster") { + return fmt.Errorf("Incorrect Cluster Identifier prefix:\nexpected: %s\ngot: %s", "tf-neptune-cluster", aws.StringValue(v.DBClusterIdentifier)) + } + + return nil + } +} + +func testAccAWSNeptuneClusterInstanceConfig(n int) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster" "default" { + cluster_identifier = "tf-neptune-cluster-test-%d" + availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + skip_final_snapshot = true +} + +resource "aws_neptune_cluster_instance" "cluster_instances" { + identifier = "tf-cluster-instance-%d" + cluster_identifier = "${aws_neptune_cluster.default.id}" + instance_class = "db.r4.large" + neptune_parameter_group_name = "${aws_neptune_parameter_group.bar.name}" + promotion_tier = "3" +} + +resource "aws_neptune_parameter_group" "bar" { + name = "tf-cluster-test-group-%d" + family = 
"neptune1" + + parameter { + name = "neptune_query_timeout" + value = "25" + } + + tags { + foo = "bar" + } +} +`, n, n, n) +} + +func testAccAWSNeptuneClusterInstanceConfigModified(n int) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster" "default" { + cluster_identifier = "tf-neptune-cluster-test-%d" + availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + skip_final_snapshot = true +} + +resource "aws_neptune_cluster_instance" "cluster_instances" { + identifier = "tf-cluster-instance-%d" + cluster_identifier = "${aws_neptune_cluster.default.id}" + instance_class = "db.r4.large" + neptune_parameter_group_name = "${aws_neptune_parameter_group.bar.name}" + auto_minor_version_upgrade = false + promotion_tier = "3" +} + +resource "aws_neptune_parameter_group" "bar" { + name = "tf-cluster-test-group-%d" + family = "neptune1" + + parameter { + name = "neptune_query_timeout" + value = "25" + } + + tags { + foo = "bar" + } +} +`, n, n, n) +} + +func testAccAWSNeptuneClusterInstanceConfig_az(n int) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_neptune_cluster" "default" { + cluster_identifier = "tf-neptune-cluster-test-%d" + availability_zones = ["${data.aws_availability_zones.available.names}"] + skip_final_snapshot = true +} + +resource "aws_neptune_cluster_instance" "cluster_instances" { + identifier = "tf-cluster-instance-%d" + cluster_identifier = "${aws_neptune_cluster.default.id}" + instance_class = "db.r4.large" + neptune_parameter_group_name = "${aws_neptune_parameter_group.bar.name}" + promotion_tier = "3" + availability_zone = "${data.aws_availability_zones.available.names[0]}" +} + +resource "aws_neptune_parameter_group" "bar" { + name = "tf-cluster-test-group-%d" + family = "neptune1" + + parameter { + name = "neptune_query_timeout" + value = "25" + } + + tags { + foo = "bar" + } +} +`, n, n, n) +} + +func testAccAWSNeptuneClusterInstanceConfig_withSubnetGroup(n int) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster_instance" "test" { + identifier = "tf-cluster-instance-%d" + cluster_identifier = "${aws_neptune_cluster.test.id}" + instance_class = "db.r4.large" +} + +resource "aws_neptune_cluster" "test" { + cluster_identifier = "tf-neptune-cluster-%d" + neptune_subnet_group_name = "${aws_neptune_subnet_group.test.name}" + skip_final_snapshot = true +} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-neptune-cluster-instance-name-prefix" + } +} + +resource "aws_subnet" "a" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.0.0.0/24" + availability_zone = "us-west-2a" + tags { + Name = "tf-acc-neptune-cluster-instance-name-prefix-a" + } +} + +resource "aws_subnet" "b" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.0.1.0/24" + availability_zone = "us-west-2b" + tags { + Name = "tf-acc-neptune-cluster-instance-name-prefix-b" + } +} + +resource "aws_neptune_subnet_group" "test" { + name = "tf-test-%d" + subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"] +} +`, n, n, n) +} + +func testAccAWSNeptuneClusterInstanceConfig_namePrefix(n int) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster_instance" "test" { + identifier_prefix = "tf-cluster-instance-" + cluster_identifier = "${aws_neptune_cluster.test.id}" + instance_class = "db.r4.large" +} + +resource "aws_neptune_cluster" "test" { + cluster_identifier = "tf-neptune-cluster-%d" + neptune_subnet_group_name = "${aws_neptune_subnet_group.test.name}" + skip_final_snapshot = true +} + 
+resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-neptune-cluster-instance-name-prefix" + } +} + +resource "aws_subnet" "a" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.0.0.0/24" + availability_zone = "us-west-2a" + tags { + Name = "tf-acc-neptune-cluster-instance-name-prefix-a" + } +} + +resource "aws_subnet" "b" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.0.1.0/24" + availability_zone = "us-west-2b" + tags { + Name = "tf-acc-neptune-cluster-instance-name-prefix-b" + } +} + +resource "aws_neptune_subnet_group" "test" { + name = "tf-test-%d" + subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"] +} +`, n, n) +} + +func testAccAWSNeptuneClusterInstanceConfig_generatedName(n int) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster_instance" "test" { + cluster_identifier = "${aws_neptune_cluster.test.id}" + instance_class = "db.r4.large" +} + +resource "aws_neptune_cluster" "test" { + cluster_identifier = "tf-neptune-cluster-%d" + neptune_subnet_group_name = "${aws_neptune_subnet_group.test.name}" + skip_final_snapshot = true +} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-neptune-cluster-instance-name-prefix" + } +} + +resource "aws_subnet" "a" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.0.0.0/24" + availability_zone = "us-west-2a" + tags { + Name = "tf-acc-neptune-cluster-instance-name-prefix-a" + } +} + +resource "aws_subnet" "b" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.0.1.0/24" + availability_zone = "us-west-2b" + tags { + Name = "tf-acc-neptune-cluster-instance-name-prefix-b" + } +} + +resource "aws_neptune_subnet_group" "test" { + name = "tf-test-%d" + subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"] +} +`, n, n) +} + +func testAccAWSNeptuneClusterInstanceConfigKmsKey(n int) string { + return fmt.Sprintf(` + +resource "aws_kms_key" "foo" { + description = "Terraform acc test %d" + policy = < 0 { + // We can only modify 20 parameters at a time, so walk them until + // we've got them all. 
+ for parameters != nil { + var paramsToModify []*neptune.Parameter + if len(parameters) <= neptuneClusterParameterGroupMaxParamsBulkEdit { + paramsToModify, parameters = parameters[:], nil + } else { + paramsToModify, parameters = parameters[:neptuneClusterParameterGroupMaxParamsBulkEdit], parameters[neptuneClusterParameterGroupMaxParamsBulkEdit:] + } + parameterGroupName := d.Get("name").(string) + modifyOpts := neptune.ModifyDBClusterParameterGroupInput{ + DBClusterParameterGroupName: aws.String(parameterGroupName), + Parameters: paramsToModify, + } + + log.Printf("[DEBUG] Modify Neptune Cluster Parameter Group: %s", modifyOpts) + _, err = conn.ModifyDBClusterParameterGroup(&modifyOpts) + if err != nil { + return fmt.Errorf("Error modifying Neptune Cluster Parameter Group: %s", err) + } + } + d.SetPartial("parameter") + } + } + + arn := d.Get("arn").(string) + if err := setTagsNeptune(conn, d, arn); err != nil { + return err + } else { + d.SetPartial("tags") + } + + d.Partial(false) + + return resourceAwsNeptuneClusterParameterGroupRead(d, meta) +} + +func resourceAwsNeptuneClusterParameterGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + input := neptune.DeleteDBClusterParameterGroupInput{ + DBClusterParameterGroupName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting Neptune Cluster Parameter Group: %s", d.Id()) + _, err := conn.DeleteDBClusterParameterGroup(&input) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBParameterGroupNotFoundFault, "") { + return nil + } + return fmt.Errorf("error deleting Neptune Cluster Parameter Group (%s): %s", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_neptune_cluster_parameter_group_test.go b/aws/resource_aws_neptune_cluster_parameter_group_test.go new file mode 100644 index 00000000000..19167cdf3f5 --- /dev/null +++ b/aws/resource_aws_neptune_cluster_parameter_group_test.go @@ -0,0 +1,347 @@ +package aws + +import ( + "errors" + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSNeptuneClusterParameterGroup_basic(t *testing.T) { + var v neptune.DBClusterParameterGroup + + parameterGroupName := fmt.Sprintf("cluster-parameter-group-test-terraform-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterParameterGroupConfig(parameterGroupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.bar", &v), + testAccCheckAWSNeptuneClusterParameterGroupAttributes(&v, parameterGroupName), + resource.TestMatchResourceAttr( + "aws_neptune_cluster_parameter_group.bar", "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:rds:[^:]+:\\d{12}:cluster-pg:%s", parameterGroupName))), + resource.TestCheckResourceAttr( + "aws_neptune_cluster_parameter_group.bar", "name", parameterGroupName), + resource.TestCheckResourceAttr( + "aws_neptune_cluster_parameter_group.bar", "family", "neptune1"), + resource.TestCheckResourceAttr( + "aws_neptune_cluster_parameter_group.bar", "description", "Managed by Terraform"), + 
resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.#", "0"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "tags.%", "0"), + ), + }, + { + ResourceName: "aws_neptune_cluster_parameter_group.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSNeptuneClusterParameterGroup_namePrefix(t *testing.T) { + var v neptune.DBClusterParameterGroup + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterParameterGroupConfig_namePrefix, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.test", &v), + resource.TestMatchResourceAttr( + "aws_neptune_cluster_parameter_group.test", "name", regexp.MustCompile("^tf-test-")), + ), + }, + { + ResourceName: "aws_neptune_cluster_parameter_group.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name_prefix"}, + }, + }, + }) +} + +func TestAccAWSNeptuneClusterParameterGroup_generatedName(t *testing.T) { + var v neptune.DBClusterParameterGroup + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterParameterGroupConfig_generatedName, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.test", &v), + ), + }, + { + ResourceName: "aws_neptune_cluster_parameter_group.test", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSNeptuneClusterParameterGroup_Description(t *testing.T) { + var v neptune.DBClusterParameterGroup + + parameterGroupName := fmt.Sprintf("cluster-parameter-group-test-terraform-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterParameterGroupConfig_Description(parameterGroupName, "custom description"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.bar", &v), + testAccCheckAWSNeptuneClusterParameterGroupAttributes(&v, parameterGroupName), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "description", "custom description"), + ), + }, + { + ResourceName: "aws_neptune_cluster_parameter_group.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSNeptuneClusterParameterGroup_Parameter(t *testing.T) { + var v neptune.DBClusterParameterGroup + + parameterGroupName := fmt.Sprintf("cluster-parameter-group-test-tf-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterParameterGroupConfig_Parameter(parameterGroupName, "neptune_enable_audit_log", "1"), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.bar", &v), + testAccCheckAWSNeptuneClusterParameterGroupAttributes(&v, parameterGroupName), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.#", "1"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.709171678.apply_method", "pending-reboot"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.709171678.name", "neptune_enable_audit_log"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.709171678.value", "1"), + ), + }, + { + ResourceName: "aws_neptune_cluster_parameter_group.bar", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSNeptuneClusterParameterGroupConfig_Parameter(parameterGroupName, "neptune_enable_audit_log", "0"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.bar", &v), + testAccCheckAWSNeptuneClusterParameterGroupAttributes(&v, parameterGroupName), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.#", "1"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.861808799.apply_method", "pending-reboot"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.861808799.name", "neptune_enable_audit_log"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "parameter.861808799.value", "0"), + ), + }, + }, + }) +} + +func TestAccAWSNeptuneClusterParameterGroup_Tags(t *testing.T) { + var v neptune.DBClusterParameterGroup + + parameterGroupName := fmt.Sprintf("cluster-parameter-group-test-tf-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterParameterGroupConfig_Tags(parameterGroupName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.bar", &v), + testAccCheckAWSNeptuneClusterParameterGroupAttributes(&v, parameterGroupName), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "tags.key1", "value1"), + ), + }, + { + ResourceName: "aws_neptune_cluster_parameter_group.bar", + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSNeptuneClusterParameterGroupConfig_Tags(parameterGroupName, "key1", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.bar", &v), + testAccCheckAWSNeptuneClusterParameterGroupAttributes(&v, parameterGroupName), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "tags.key1", "value2"), + ), + }, + { + Config: testAccAWSNeptuneClusterParameterGroupConfig_Tags(parameterGroupName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterParameterGroupExists("aws_neptune_cluster_parameter_group.bar", &v), + testAccCheckAWSNeptuneClusterParameterGroupAttributes(&v, parameterGroupName), + 
resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_neptune_cluster_parameter_group.bar", "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckAWSNeptuneClusterParameterGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_neptune_cluster_parameter_group" { + continue + } + + resp, err := conn.DescribeDBClusterParameterGroups( + &neptune.DescribeDBClusterParameterGroupsInput{ + DBClusterParameterGroupName: aws.String(rs.Primary.ID), + }) + + if err == nil { + if len(resp.DBClusterParameterGroups) != 0 && + aws.StringValue(resp.DBClusterParameterGroups[0].DBClusterParameterGroupName) == rs.Primary.ID { + return errors.New("Neptune Cluster Parameter Group still exists") + } + } + + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBParameterGroupNotFoundFault, "") { + return nil + } + return err + } + } + + return nil +} + +func testAccCheckAWSNeptuneClusterParameterGroupAttributes(v *neptune.DBClusterParameterGroup, name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if *v.DBClusterParameterGroupName != name { + return fmt.Errorf("bad name: %#v expected: %v", *v.DBClusterParameterGroupName, name) + } + + if *v.DBParameterGroupFamily != "neptune1" { + return fmt.Errorf("bad family: %#v", *v.DBParameterGroupFamily) + } + + return nil + } +} + +func testAccCheckAWSNeptuneClusterParameterGroupExists(n string, v *neptune.DBClusterParameterGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return errors.New("No Neptune Cluster Parameter Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + + opts := neptune.DescribeDBClusterParameterGroupsInput{ + DBClusterParameterGroupName: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeDBClusterParameterGroups(&opts) + + if err != nil { + return err + } + + if len(resp.DBClusterParameterGroups) != 1 || + aws.StringValue(resp.DBClusterParameterGroups[0].DBClusterParameterGroupName) != rs.Primary.ID { + return errors.New("Neptune Cluster Parameter Group not found") + } + + *v = *resp.DBClusterParameterGroups[0] + + return nil + } +} + +func testAccAWSNeptuneClusterParameterGroupConfig_Description(name, description string) string { + return fmt.Sprintf(`resource "aws_neptune_cluster_parameter_group" "bar" { + description = "%s" + family = "neptune1" + name = "%s" +}`, description, name) +} + +func testAccAWSNeptuneClusterParameterGroupConfig_Parameter(name, pName, pValue string) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster_parameter_group" "bar" { + family = "neptune1" + name = "%s" + + parameter { + name = "%s" + value = "%s" + } +} +`, name, pName, pValue) +} + +func testAccAWSNeptuneClusterParameterGroupConfig_Tags(name, tKey, tValue string) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster_parameter_group" "bar" { + family = "neptune1" + name = "%s" + + tags { + %s = "%s" + } +} +`, name, tKey, tValue) +} + +func testAccAWSNeptuneClusterParameterGroupConfig(name string) string { + return fmt.Sprintf(`resource "aws_neptune_cluster_parameter_group" "bar" { + family = "neptune1" + name = "%s" +}`, name) +} + +const testAccAWSNeptuneClusterParameterGroupConfig_namePrefix = ` +resource 
"aws_neptune_cluster_parameter_group" "test" { + family = "neptune1" + name_prefix = "tf-test-" +} +` +const testAccAWSNeptuneClusterParameterGroupConfig_generatedName = ` +resource "aws_neptune_cluster_parameter_group" "test" { + family = "neptune1" +} +` diff --git a/aws/resource_aws_neptune_cluster_snapshot.go b/aws/resource_aws_neptune_cluster_snapshot.go new file mode 100644 index 00000000000..cb06484d3e9 --- /dev/null +++ b/aws/resource_aws_neptune_cluster_snapshot.go @@ -0,0 +1,218 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsNeptuneClusterSnapshot() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNeptuneClusterSnapshotCreate, + Read: resourceAwsNeptuneClusterSnapshotRead, + Delete: resourceAwsNeptuneClusterSnapshotDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(20 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "db_cluster_snapshot_identifier": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "db_cluster_identifier": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "allocated_storage": { + Type: schema.TypeInt, + Computed: true, + }, + "availability_zones": { + Type: schema.TypeList, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + }, + "db_cluster_snapshot_arn": { + Type: schema.TypeString, + Computed: true, + }, + "engine": { + Type: schema.TypeString, + Computed: true, + }, + "engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "license_model": { + Type: schema.TypeString, + Computed: true, + }, + "port": { + Type: schema.TypeInt, + Computed: true, + }, + "source_db_cluster_snapshot_arn": { + Type: schema.TypeString, + Computed: true, + }, + "snapshot_type": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "storage_encrypted": { + Type: schema.TypeBool, + Computed: true, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsNeptuneClusterSnapshotCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + input := &neptune.CreateDBClusterSnapshotInput{ + DBClusterIdentifier: aws.String(d.Get("db_cluster_identifier").(string)), + DBClusterSnapshotIdentifier: aws.String(d.Get("db_cluster_snapshot_identifier").(string)), + } + + log.Printf("[DEBUG] Creating Neptune DB Cluster Snapshot: %s", input) + _, err := conn.CreateDBClusterSnapshot(input) + if err != nil { + return fmt.Errorf("error creating Neptune DB Cluster Snapshot: %s", err) + } + d.SetId(d.Get("db_cluster_snapshot_identifier").(string)) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"creating"}, + Target: []string{"available"}, + Refresh: resourceAwsNeptuneClusterSnapshotStateRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutCreate), + MinTimeout: 10 * time.Second, + Delay: 5 * time.Second, + } + + // Wait, catching any errors + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for Neptune DB Cluster Snapshot %q to create: %s", d.Id(), err) + } + + return 
resourceAwsNeptuneClusterSnapshotRead(d, meta) +} + +func resourceAwsNeptuneClusterSnapshotRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + input := &neptune.DescribeDBClusterSnapshotsInput{ + DBClusterSnapshotIdentifier: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading Neptune DB Cluster Snapshot: %s", input) + output, err := conn.DescribeDBClusterSnapshots(input) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBClusterSnapshotNotFoundFault, "") { + log.Printf("[WARN] Neptune DB Cluster Snapshot %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading Neptune DB Cluster Snapshot %q: %s", d.Id(), err) + } + + if output == nil || len(output.DBClusterSnapshots) == 0 || output.DBClusterSnapshots[0] == nil || aws.StringValue(output.DBClusterSnapshots[0].DBClusterSnapshotIdentifier) != d.Id() { + log.Printf("[WARN] Neptune DB Cluster Snapshot %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + snapshot := output.DBClusterSnapshots[0] + + d.Set("allocated_storage", snapshot.AllocatedStorage) + if err := d.Set("availability_zones", flattenStringList(snapshot.AvailabilityZones)); err != nil { + return fmt.Errorf("error setting availability_zones: %s", err) + } + d.Set("db_cluster_identifier", snapshot.DBClusterIdentifier) + d.Set("db_cluster_snapshot_arn", snapshot.DBClusterSnapshotArn) + d.Set("db_cluster_snapshot_identifier", snapshot.DBClusterSnapshotIdentifier) + d.Set("engine_version", snapshot.EngineVersion) + d.Set("engine", snapshot.Engine) + d.Set("kms_key_id", snapshot.KmsKeyId) + d.Set("license_model", snapshot.LicenseModel) + d.Set("port", snapshot.Port) + d.Set("snapshot_type", snapshot.SnapshotType) + d.Set("source_db_cluster_snapshot_arn", snapshot.SourceDBClusterSnapshotArn) + d.Set("status", snapshot.Status) + d.Set("storage_encrypted", snapshot.StorageEncrypted) + d.Set("vpc_id", snapshot.VpcId) + + return nil +} + +func resourceAwsNeptuneClusterSnapshotDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + input := &neptune.DeleteDBClusterSnapshotInput{ + DBClusterSnapshotIdentifier: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting Neptune DB Cluster Snapshot: %s", input) + _, err := conn.DeleteDBClusterSnapshot(input) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBClusterSnapshotNotFoundFault, "") { + return nil + } + return fmt.Errorf("error deleting Neptune DB Cluster Snapshot %q: %s", d.Id(), err) + } + + return nil +} + +func resourceAwsNeptuneClusterSnapshotStateRefreshFunc(dbClusterSnapshotIdentifier string, conn *neptune.Neptune) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + input := &neptune.DescribeDBClusterSnapshotsInput{ + DBClusterSnapshotIdentifier: aws.String(dbClusterSnapshotIdentifier), + } + + log.Printf("[DEBUG] Reading Neptune DB Cluster Snapshot: %s", input) + output, err := conn.DescribeDBClusterSnapshots(input) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBClusterSnapshotNotFoundFault, "") { + return nil, "", nil + } + return nil, "", fmt.Errorf("Error retrieving DB Cluster Snapshots: %s", err) + } + + if output == nil || len(output.DBClusterSnapshots) == 0 || output.DBClusterSnapshots[0] == nil { + return nil, "", fmt.Errorf("No snapshots returned for %s", dbClusterSnapshotIdentifier) + } + + snapshot := output.DBClusterSnapshots[0] + + return output, aws.StringValue(snapshot.Status), nil + } +} diff --git 
a/aws/resource_aws_neptune_cluster_snapshot_test.go b/aws/resource_aws_neptune_cluster_snapshot_test.go new file mode 100644 index 00000000000..dc2180d2464 --- /dev/null +++ b/aws/resource_aws_neptune_cluster_snapshot_test.go @@ -0,0 +1,125 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSNeptuneClusterSnapshot_basic(t *testing.T) { + var dbClusterSnapshot neptune.DBClusterSnapshot + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_neptune_cluster_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNeptuneClusterSnapshotDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsNeptuneClusterSnapshotConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckNeptuneClusterSnapshotExists(resourceName, &dbClusterSnapshot), + resource.TestCheckResourceAttrSet(resourceName, "allocated_storage"), + resource.TestCheckResourceAttrSet(resourceName, "availability_zones.#"), + resource.TestMatchResourceAttr(resourceName, "db_cluster_snapshot_arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:\d{12}:cluster-snapshot:.+`)), + resource.TestCheckResourceAttrSet(resourceName, "engine"), + resource.TestCheckResourceAttrSet(resourceName, "engine_version"), + resource.TestCheckResourceAttr(resourceName, "kms_key_id", ""), + resource.TestCheckResourceAttrSet(resourceName, "license_model"), + resource.TestCheckResourceAttrSet(resourceName, "port"), + resource.TestCheckResourceAttr(resourceName, "snapshot_type", "manual"), + resource.TestCheckResourceAttr(resourceName, "source_db_cluster_snapshot_arn", ""), + resource.TestCheckResourceAttr(resourceName, "status", "available"), + resource.TestCheckResourceAttr(resourceName, "storage_encrypted", "false"), + resource.TestMatchResourceAttr(resourceName, "vpc_id", regexp.MustCompile(`^vpc-.+`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckNeptuneClusterSnapshotDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_neptune_cluster_snapshot" { + continue + } + + input := &neptune.DescribeDBClusterSnapshotsInput{ + DBClusterSnapshotIdentifier: aws.String(rs.Primary.ID), + } + + output, err := conn.DescribeDBClusterSnapshots(input) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBClusterSnapshotNotFoundFault, "") { + continue + } + return err + } + + if output != nil && len(output.DBClusterSnapshots) > 0 && output.DBClusterSnapshots[0] != nil && aws.StringValue(output.DBClusterSnapshots[0].DBClusterSnapshotIdentifier) == rs.Primary.ID { + return fmt.Errorf("Neptune DB Cluster Snapshot %q still exists", rs.Primary.ID) + } + } + + return nil +} + +func testAccCheckNeptuneClusterSnapshotExists(resourceName string, dbClusterSnapshot *neptune.DBClusterSnapshot) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set for %s", resourceName) + } + + conn := 
testAccProvider.Meta().(*AWSClient).neptuneconn + + input := &neptune.DescribeDBClusterSnapshotsInput{ + DBClusterSnapshotIdentifier: aws.String(rs.Primary.ID), + } + + output, err := conn.DescribeDBClusterSnapshots(input) + if err != nil { + return err + } + + if output == nil || len(output.DBClusterSnapshots) == 0 || output.DBClusterSnapshots[0] == nil || aws.StringValue(output.DBClusterSnapshots[0].DBClusterSnapshotIdentifier) != rs.Primary.ID { + return fmt.Errorf("Neptune DB Cluster Snapshot %q not found", rs.Primary.ID) + } + + *dbClusterSnapshot = *output.DBClusterSnapshots[0] + + return nil + } +} + +func testAccAwsNeptuneClusterSnapshotConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster" "test" { + cluster_identifier = %q + skip_final_snapshot = true +} + +resource "aws_neptune_cluster_snapshot" "test" { + db_cluster_identifier = "${aws_neptune_cluster.test.id}" + db_cluster_snapshot_identifier = %q +} +`, rName, rName) +} diff --git a/aws/resource_aws_neptune_cluster_test.go b/aws/resource_aws_neptune_cluster_test.go new file mode 100644 index 00000000000..098e51832d9 --- /dev/null +++ b/aws/resource_aws_neptune_cluster_test.go @@ -0,0 +1,765 @@ +package aws + +import ( + //"errors" + "fmt" + "log" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" +) + +func TestAccAWSNeptuneCluster_basic(t *testing.T) { + var dbCluster neptune.DBCluster + rInt := acctest.RandInt() + resourceName := "aws_neptune_cluster.default" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfig(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "storage_encrypted", "false"), + resource.TestCheckResourceAttr(resourceName, "neptune_cluster_parameter_group_name", "default.neptune1"), + resource.TestCheckResourceAttrSet(resourceName, "reader_endpoint"), + resource.TestCheckResourceAttrSet(resourceName, "cluster_resource_id"), + resource.TestCheckResourceAttr(resourceName, "engine", "neptune"), + resource.TestCheckResourceAttrSet(resourceName, "engine_version"), + resource.TestCheckResourceAttrSet(resourceName, "hosted_zone_id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_snapshot"}, + }, + }, + }) +} + +func TestAccAWSNeptuneCluster_namePrefix(t *testing.T) { + var v neptune.DBCluster + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfig_namePrefix(), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.test", &v), + resource.TestMatchResourceAttr( + "aws_neptune_cluster.test", "cluster_identifier", regexp.MustCompile("^tf-test-")), + ), + }, + { + ResourceName: "aws_neptune_cluster.test", + ImportState: true, + ImportStateVerify: true, + 
ImportStateVerifyIgnore: []string{"cluster_identifier_prefix", "skip_final_snapshot"}, + }, + }, + }) +} + +func TestAccAWSNeptuneCluster_takeFinalSnapshot(t *testing.T) { + var v neptune.DBCluster + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterSnapshot(rInt), + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfigWithFinalSnapshot(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + ), + }, + { + ResourceName: "aws_neptune_cluster.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"final_snapshot_identifier", "skip_final_snapshot"}, + }, + }, + }) +} + +func TestAccAWSNeptuneCluster_updateTags(t *testing.T) { + var v neptune.DBCluster + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfig(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "tags.%", "1"), + ), + }, + { + Config: testAccAWSNeptuneClusterConfigUpdatedTags(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "tags.%", "2"), + ), + }, + { + ResourceName: "aws_neptune_cluster.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_snapshot"}, + }, + }, + }) +} + +func TestAccAWSNeptuneCluster_updateIamRoles(t *testing.T) { + var v neptune.DBCluster + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfigIncludingIamRoles(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + ), + }, + { + Config: testAccAWSNeptuneClusterConfigAddIamRoles(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "iam_roles.#", "2"), + ), + }, + { + Config: testAccAWSNeptuneClusterConfigRemoveIamRoles(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "iam_roles.#", "1"), + ), + }, + { + ResourceName: "aws_neptune_cluster.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_snapshot"}, + }, + }, + }) +} + +func TestAccAWSNeptuneCluster_kmsKey(t *testing.T) { + var v neptune.DBCluster + keyRegex := regexp.MustCompile("^arn:aws:kms:") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfig_kmsKey(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestMatchResourceAttr( + "aws_neptune_cluster.default", "kms_key_arn", keyRegex), + ), + }, + { + ResourceName: "aws_neptune_cluster.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_snapshot"}, + }, + }, + }) +} + +func TestAccAWSNeptuneCluster_encrypted(t *testing.T) { + var v neptune.DBCluster + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfig_encrypted(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "storage_encrypted", "true"), + ), + }, + { + ResourceName: "aws_neptune_cluster.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_snapshot"}, + }, + }, + }) +} + +func TestAccAWSNeptuneCluster_backupsUpdate(t *testing.T) { + var v neptune.DBCluster + + ri := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfig_backups(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "preferred_backup_window", "07:00-09:00"), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "backup_retention_period", "5"), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "preferred_maintenance_window", "tue:04:00-tue:04:30"), + ), + }, + { + Config: testAccAWSNeptuneClusterConfig_backupsUpdate(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "preferred_backup_window", "03:00-09:00"), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "backup_retention_period", "10"), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "preferred_maintenance_window", "wed:01:00-wed:01:30"), + ), + }, + { + ResourceName: "aws_neptune_cluster.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately", "skip_final_snapshot"}, + }, + }, + }) +} + +func TestAccAWSNeptuneCluster_iamAuth(t *testing.T) { + var v neptune.DBCluster + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneClusterConfig_iamAuth(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneClusterExists("aws_neptune_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_neptune_cluster.default", "iam_database_authentication_enabled", "true"), + ), + }, + { + ResourceName: "aws_neptune_cluster.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_snapshot"}, + }, + }, + }) +} + +func testAccCheckAWSNeptuneClusterDestroy(s *terraform.State) error { + return 
testAccCheckAWSNeptuneClusterDestroyWithProvider(s, testAccProvider) +} + +func testAccCheckAWSNeptuneClusterDestroyWithProvider(s *terraform.State, provider *schema.Provider) error { + conn := provider.Meta().(*AWSClient).neptuneconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_neptune_cluster" { + continue + } + + // Try to find the Group + var err error + resp, err := conn.DescribeDBClusters( + &neptune.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(rs.Primary.ID), + }) + + if err == nil { + if len(resp.DBClusters) != 0 && + aws.StringValue(resp.DBClusters[0].DBClusterIdentifier) == rs.Primary.ID { + return fmt.Errorf("Neptune Cluster %s still exists", rs.Primary.ID) + } + } + + // Return nil if the cluster is already destroyed + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBClusterNotFoundFault, "") { + return nil + } + } + + return err + } + + return nil +} + +func testAccCheckAWSNeptuneClusterExists(n string, v *neptune.DBCluster) resource.TestCheckFunc { + return testAccCheckAWSNeptuneClusterExistsWithProvider(n, v, func() *schema.Provider { return testAccProvider }) +} + +func testAccCheckAWSNeptuneClusterExistsWithProvider(n string, v *neptune.DBCluster, providerF func() *schema.Provider) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Neptune Instance ID is set") + } + + provider := providerF() + conn := provider.Meta().(*AWSClient).neptuneconn + resp, err := conn.DescribeDBClusters(&neptune.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + for _, c := range resp.DBClusters { + if *c.DBClusterIdentifier == rs.Primary.ID { + *v = *c + return nil + } + } + + return fmt.Errorf("Neptune Cluster (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSNeptuneClusterSnapshot(rInt int) resource.TestCheckFunc { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_neptune_cluster" { + continue + } + + // Try and delete the snapshot before we check for the cluster not found + snapshot_identifier := fmt.Sprintf("tf-acctest-neptunecluster-snapshot-%d", rInt) + + awsClient := testAccProvider.Meta().(*AWSClient) + conn := awsClient.neptuneconn + + log.Printf("[INFO] Deleting the Snapshot %s", snapshot_identifier) + _, snapDeleteErr := conn.DeleteDBClusterSnapshot( + &neptune.DeleteDBClusterSnapshotInput{ + DBClusterSnapshotIdentifier: aws.String(snapshot_identifier), + }) + if snapDeleteErr != nil { + return snapDeleteErr + } + + // Try to find the Group + var err error + resp, err := conn.DescribeDBClusters( + &neptune.DescribeDBClustersInput{ + DBClusterIdentifier: aws.String(rs.Primary.ID), + }) + + if err == nil { + if len(resp.DBClusters) != 0 && + aws.StringValue(resp.DBClusters[0].DBClusterIdentifier) == rs.Primary.ID { + return fmt.Errorf("Neptune Cluster %s still exists", rs.Primary.ID) + } + } + + // Return nil if the cluster is already destroyed + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBClusterNotFoundFault, "") { + return nil + } + } + + return err + } + + return nil + } +} + +func testAccAWSNeptuneClusterConfig(n int) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster" "default" { + cluster_identifier = "tf-neptune-cluster-%d" + availability_zones = ["us-west-2a","us-west-2b","us-west-2c"] + engine = "neptune" + 
neptune_cluster_parameter_group_name = "default.neptune1" + skip_final_snapshot = true + tags { + Environment = "production" + } +}`, n) +} + +func testAccAWSNeptuneClusterConfig_namePrefix() string { + return fmt.Sprintf(` +resource "aws_neptune_cluster" "test" { + cluster_identifier_prefix = "tf-test-" + engine = "neptune" + neptune_cluster_parameter_group_name = "default.neptune1" + skip_final_snapshot = true +} +`) +} + +func testAccAWSNeptuneClusterConfigWithFinalSnapshot(n int) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster" "default" { + cluster_identifier = "tf-neptune-cluster-%d" + availability_zones = ["us-west-2a","us-west-2b","us-west-2c"] + neptune_cluster_parameter_group_name = "default.neptune1" + final_snapshot_identifier = "tf-acctest-neptunecluster-snapshot-%d" + tags { + Environment = "production" + } +}`, n, n) +} + +func testAccAWSNeptuneClusterConfigUpdatedTags(n int) string { + return fmt.Sprintf(` +resource "aws_neptune_cluster" "default" { + cluster_identifier = "tf-neptune-cluster-%d" + availability_zones = ["us-west-2a","us-west-2b","us-west-2c"] + neptune_cluster_parameter_group_name = "default.neptune1" + skip_final_snapshot = true + tags { + Environment = "production" + AnotherTag = "test" + } +}`, n) +} + +func testAccAWSNeptuneClusterConfigIncludingIamRoles(n int) string { + return fmt.Sprintf(` +resource "aws_iam_role" "neptune_sample_role" { + name = "neptune_sample_role_%d" + path = "/" + assume_role_policy = < 0 { + for _, removing := range remove { + log.Printf("[INFO] Removing %s as a Source Identifier from %q", *removing, d.Id()) + _, err := conn.RemoveSourceIdentifierFromSubscription(&neptune.RemoveSourceIdentifierFromSubscriptionInput{ + SourceIdentifier: removing, + SubscriptionName: aws.String(d.Id()), + }) + if err != nil { + return err + } + } + } + + if len(add) > 0 { + for _, adding := range add { + log.Printf("[INFO] Adding %s as a Source Identifier to %q", *adding, d.Id()) + _, err := conn.AddSourceIdentifierToSubscription(&neptune.AddSourceIdentifierToSubscriptionInput{ + SourceIdentifier: adding, + SubscriptionName: aws.String(d.Id()), + }) + if err != nil { + return err + } + } + } + d.SetPartial("source_ids") + } + + d.Partial(false) + + return resourceAwsNeptuneEventSubscriptionRead(d, meta) +} + +func resourceAwsNeptuneEventSubscriptionDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + deleteOpts := neptune.DeleteEventSubscriptionInput{ + SubscriptionName: aws.String(d.Id()), + } + + if _, err := conn.DeleteEventSubscription(&deleteOpts); err != nil { + if isAWSErr(err, neptune.ErrCodeSubscriptionNotFoundFault, "") { + log.Printf("[WARN] Neptune Event Subscription %s not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("Error deleting Neptune Event Subscription %s: %s", d.Id(), err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"deleting"}, + Target: []string{}, + Refresh: resourceAwsNeptuneEventSubscriptionRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutDelete), + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + _, err := stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error deleting Neptune Event Subscription %s: %s", d.Id(), err) + } + + return nil +} + +func resourceAwsNeptuneEventSubscriptionRefreshFunc(name string, conn *neptune.Neptune) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + sub, err := 
resourceAwsNeptuneEventSubscriptionRetrieve(name, conn) + + if err != nil { + log.Printf("Error on retrieving Neptune Event Subscription when waiting: %s", err) + return nil, "", err + } + + if sub == nil { + return nil, "", nil + } + + if sub.Status != nil { + log.Printf("[DEBUG] Neptune Event Subscription status for %s: %s", name, aws.StringValue(sub.Status)) + } + + return sub, aws.StringValue(sub.Status), nil + } +} + +func resourceAwsNeptuneEventSubscriptionRetrieve(name string, conn *neptune.Neptune) (*neptune.EventSubscription, error) { + + request := &neptune.DescribeEventSubscriptionsInput{ + SubscriptionName: aws.String(name), + } + + describeResp, err := conn.DescribeEventSubscriptions(request) + if err != nil { + if isAWSErr(err, neptune.ErrCodeSubscriptionNotFoundFault, "") { + log.Printf("[DEBUG] Neptune Event Subscription (%s) not found", name) + return nil, nil + } + return nil, err + } + + if len(describeResp.EventSubscriptionsList) != 1 || + aws.StringValue(describeResp.EventSubscriptionsList[0].CustSubscriptionId) != name { + return nil, nil + } + + return describeResp.EventSubscriptionsList[0], nil +} diff --git a/aws/resource_aws_neptune_event_subscription_test.go b/aws/resource_aws_neptune_event_subscription_test.go new file mode 100644 index 00000000000..ffcf4e74938 --- /dev/null +++ b/aws/resource_aws_neptune_event_subscription_test.go @@ -0,0 +1,372 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSNeptuneEventSubscription_basic(t *testing.T) { + var v neptune.EventSubscription + rInt := acctest.RandInt() + rName := fmt.Sprintf("tf-acc-test-neptune-event-subs-%d", rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneEventSubscriptionConfig(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneEventSubscriptionExists("aws_neptune_event_subscription.bar", &v), + resource.TestMatchResourceAttr("aws_neptune_event_subscription.bar", "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:rds:[^:]+:[^:]+:es:%s$", rName))), + resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "enabled", "true"), + resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "source_type", "db-instance"), + resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "name", rName), + resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "tags.Name", "tf-acc-test"), + ), + }, + { + Config: testAccAWSNeptuneEventSubscriptionConfigUpdate(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneEventSubscriptionExists("aws_neptune_event_subscription.bar", &v), + resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "enabled", "false"), + resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "source_type", "db-parameter-group"), + resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "tags.%", "1"), + 
resource.TestCheckResourceAttr("aws_neptune_event_subscription.bar", "tags.Name", "tf-acc-test1"), + ), + }, + { + ResourceName: "aws_neptune_event_subscription.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSNeptuneEventSubscription_withPrefix(t *testing.T) { + var v neptune.EventSubscription + rInt := acctest.RandInt() + startsWithPrefix := regexp.MustCompile("^tf-acc-test-neptune-event-subs-") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneEventSubscriptionConfigWithPrefix(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneEventSubscriptionExists("aws_neptune_event_subscription.bar", &v), + resource.TestMatchResourceAttr( + "aws_neptune_event_subscription.bar", "name", startsWithPrefix), + ), + }, + { + ResourceName: "aws_neptune_event_subscription.bar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name_prefix"}, + }, + }, + }) +} + +func TestAccAWSNeptuneEventSubscription_withSourceIds(t *testing.T) { + var v neptune.EventSubscription + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneEventSubscriptionConfigWithSourceIds(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneEventSubscriptionExists("aws_neptune_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_neptune_event_subscription.bar", "source_type", "db-parameter-group"), + resource.TestCheckResourceAttr( + "aws_neptune_event_subscription.bar", "source_ids.#", "1"), + ), + }, + { + Config: testAccAWSNeptuneEventSubscriptionConfigUpdateSourceIds(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneEventSubscriptionExists("aws_neptune_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_neptune_event_subscription.bar", "source_type", "db-parameter-group"), + resource.TestCheckResourceAttr( + "aws_neptune_event_subscription.bar", "source_ids.#", "2"), + ), + }, + { + ResourceName: "aws_neptune_event_subscription.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSNeptuneEventSubscription_withCategories(t *testing.T) { + var v neptune.EventSubscription + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneEventSubscriptionConfig(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneEventSubscriptionExists("aws_neptune_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_neptune_event_subscription.bar", "source_type", "db-instance"), + resource.TestCheckResourceAttr( + "aws_neptune_event_subscription.bar", "event_categories.#", "5"), + ), + }, + { + Config: testAccAWSNeptuneEventSubscriptionConfigUpdateCategories(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneEventSubscriptionExists("aws_neptune_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_neptune_event_subscription.bar", "source_type", "db-instance"), + 
resource.TestCheckResourceAttr( + "aws_neptune_event_subscription.bar", "event_categories.#", "1"), + ), + }, + { + ResourceName: "aws_neptune_event_subscription.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSNeptuneEventSubscriptionExists(n string, v *neptune.EventSubscription) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Neptune Event Subscription is set") + } + + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + + opts := neptune.DescribeEventSubscriptionsInput{ + SubscriptionName: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeEventSubscriptions(&opts) + + if err != nil { + return err + } + + if len(resp.EventSubscriptionsList) != 1 || + aws.StringValue(resp.EventSubscriptionsList[0].CustSubscriptionId) != rs.Primary.ID { + return fmt.Errorf("Neptune Event Subscription not found") + } + + *v = *resp.EventSubscriptionsList[0] + return nil + } +} + +func testAccCheckAWSNeptuneEventSubscriptionDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_neptune_event_subscription" { + continue + } + + var err error + resp, err := conn.DescribeEventSubscriptions( + &neptune.DescribeEventSubscriptionsInput{ + SubscriptionName: aws.String(rs.Primary.ID), + }) + + if ae, ok := err.(awserr.Error); ok && ae.Code() == "SubscriptionNotFound" { + continue + } + + if err == nil { + if len(resp.EventSubscriptionsList) != 0 && + aws.StringValue(resp.EventSubscriptionsList[0].CustSubscriptionId) == rs.Primary.ID { + return fmt.Errorf("Event Subscription still exists") + } + } + + newerr, ok := err.(awserr.Error) + if !ok { + return err + } + if newerr.Code() != "SubscriptionNotFound" { + return err + } + } + + return nil +} + +func testAccAWSNeptuneEventSubscriptionConfig(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-neptune-event-subs-sns-topic-%d" +} + +resource "aws_neptune_event_subscription" "bar" { + name = "tf-acc-test-neptune-event-subs-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "db-instance" + event_categories = [ + "availability", + "backup", + "creation", + "deletion", + "maintenance" + ] + tags { + Name = "tf-acc-test" + } +}`, rInt, rInt) +} + +func testAccAWSNeptuneEventSubscriptionConfigUpdate(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-neptune-event-subs-sns-topic-%d" +} + +resource "aws_neptune_event_subscription" "bar" { + name = "tf-acc-test-neptune-event-subs-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + enabled = false + source_type = "db-parameter-group" + event_categories = [ + "configuration change" + ] + tags { + Name = "tf-acc-test1" + } +}`, rInt, rInt) +} + +func testAccAWSNeptuneEventSubscriptionConfigWithPrefix(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-neptune-event-subs-sns-topic-%d" +} + +resource "aws_neptune_event_subscription" "bar" { + name_prefix = "tf-acc-test-neptune-event-subs-" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "db-instance" + event_categories = [ + "availability", + "backup", + "creation", + "deletion", + "maintenance" + ] + tags { + Name = 
"tf-acc-test" + } +}`, rInt) +} + +func testAccAWSNeptuneEventSubscriptionConfigWithSourceIds(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-neptune-event-subs-sns-topic-%d" +} + +resource "aws_neptune_parameter_group" "bar" { + name = "neptune-parameter-group-event-%d" + family = "neptune1" + description = "Test parameter group for terraform" +} + +resource "aws_neptune_event_subscription" "bar" { + name = "tf-acc-test-neptune-event-subs-with-ids-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "db-parameter-group" + source_ids = ["${aws_neptune_parameter_group.bar.id}"] + event_categories = [ + "configuration change" + ] + tags { + Name = "tf-acc-test" + } +}`, rInt, rInt, rInt) +} + +func testAccAWSNeptuneEventSubscriptionConfigUpdateSourceIds(rInt int) string { + return fmt.Sprintf(` + resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-neptune-event-subs-sns-topic-%d" + } + + resource "aws_neptune_parameter_group" "bar" { + name = "neptune-parameter-group-event-%d" + family = "neptune1" + description = "Test parameter group for terraform" + } + + resource "aws_neptune_parameter_group" "foo" { + name = "neptune-parameter-group-event-2-%d" + family = "neptune1" + description = "Test parameter group for terraform" + } + + resource "aws_neptune_event_subscription" "bar" { + name = "tf-acc-test-neptune-event-subs-with-ids-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "db-parameter-group" + source_ids = ["${aws_neptune_parameter_group.bar.id}","${aws_neptune_parameter_group.foo.id}"] + event_categories = [ + "configuration change" + ] + tags { + Name = "tf-acc-test" + } + }`, rInt, rInt, rInt, rInt) +} + +func testAccAWSNeptuneEventSubscriptionConfigUpdateCategories(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-neptune-event-subs-sns-topic-%d" +} + +resource "aws_neptune_event_subscription" "bar" { + name = "tf-acc-test-neptune-event-subs-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "db-instance" + event_categories = [ + "availability", + ] + tags { + Name = "tf-acc-test" + } +}`, rInt, rInt) +} diff --git a/aws/resource_aws_neptune_parameter_group.go b/aws/resource_aws_neptune_parameter_group.go new file mode 100644 index 00000000000..05bc64b556f --- /dev/null +++ b/aws/resource_aws_neptune_parameter_group.go @@ -0,0 +1,291 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" +) + +// We can only modify 20 parameters at a time, so walk them until +// we've got them all. 
+const maxParams = 20 + +func resourceAwsNeptuneParameterGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNeptuneParameterGroupCreate, + Read: resourceAwsNeptuneParameterGroupRead, + Update: resourceAwsNeptuneParameterGroupUpdate, + Delete: resourceAwsNeptuneParameterGroupDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + StateFunc: func(val interface{}) string { + return strings.ToLower(val.(string)) + }, + }, + "family": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "Managed by Terraform", + }, + "parameter": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "value": { + Type: schema.TypeString, + Required: true, + }, + "apply_method": { + Type: schema.TypeString, + Optional: true, + Default: neptune.ApplyMethodPendingReboot, + ValidateFunc: validation.StringInSlice([]string{ + neptune.ApplyMethodImmediate, + neptune.ApplyMethodPendingReboot, + }, false), + }, + }, + }, + }, + "tags": tagsSchema(), + }, + } +} + +func resourceAwsNeptuneParameterGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + createOpts := neptune.CreateDBParameterGroupInput{ + DBParameterGroupName: aws.String(d.Get("name").(string)), + DBParameterGroupFamily: aws.String(d.Get("family").(string)), + Description: aws.String(d.Get("description").(string)), + } + + log.Printf("[DEBUG] Create Neptune Parameter Group: %#v", createOpts) + resp, err := conn.CreateDBParameterGroup(&createOpts) + if err != nil { + return fmt.Errorf("Error creating Neptune Parameter Group: %s", err) + } + + d.Partial(true) + d.SetPartial("name") + d.SetPartial("family") + d.SetPartial("description") + d.Partial(false) + + d.SetId(*resp.DBParameterGroup.DBParameterGroupName) + d.Set("arn", resp.DBParameterGroup.DBParameterGroupArn) + log.Printf("[INFO] Neptune Parameter Group ID: %s", d.Id()) + + return resourceAwsNeptuneParameterGroupUpdate(d, meta) +} + +func resourceAwsNeptuneParameterGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + describeOpts := neptune.DescribeDBParameterGroupsInput{ + DBParameterGroupName: aws.String(d.Id()), + } + + describeResp, err := conn.DescribeDBParameterGroups(&describeOpts) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBParameterGroupNotFoundFault, "") { + log.Printf("[WARN] Neptune Parameter Group (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + if describeResp == nil { + return fmt.Errorf("Unable to get Describe Response for Neptune Parameter Group (%s)", d.Id()) + } + + if len(describeResp.DBParameterGroups) != 1 || + *describeResp.DBParameterGroups[0].DBParameterGroupName != d.Id() { + return fmt.Errorf("Unable to find Parameter Group: %#v", describeResp.DBParameterGroups) + } + + arn := aws.StringValue(describeResp.DBParameterGroups[0].DBParameterGroupArn) + d.Set("arn", arn) + d.Set("name", describeResp.DBParameterGroups[0].DBParameterGroupName) + d.Set("family", describeResp.DBParameterGroups[0].DBParameterGroupFamily) + d.Set("description", 
describeResp.DBParameterGroups[0].Description) + + // Only include user customized parameters as there's hundreds of system/default ones + describeParametersOpts := neptune.DescribeDBParametersInput{ + DBParameterGroupName: aws.String(d.Id()), + Source: aws.String("user"), + } + + var parameters []*neptune.Parameter + err = conn.DescribeDBParametersPages(&describeParametersOpts, + func(describeParametersResp *neptune.DescribeDBParametersOutput, lastPage bool) bool { + parameters = append(parameters, describeParametersResp.Parameters...) + return !lastPage + }) + if err != nil { + return err + } + + if err := d.Set("parameter", flattenNeptuneParameters(parameters)); err != nil { + return fmt.Errorf("error setting parameter: %s", err) + } + + resp, err := conn.ListTagsForResource(&neptune.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) + if err != nil { + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } + + if err := d.Set("tags", tagsToMapNeptune(resp.TagList)); err != nil { + return fmt.Errorf("error setting neptune tags: %s", err) + } + + return nil +} + +func resourceAwsNeptuneParameterGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + d.Partial(true) + + if d.HasChange("parameter") { + o, n := d.GetChange("parameter") + if o == nil { + o = new(schema.Set) + } + if n == nil { + n = new(schema.Set) + } + + os := o.(*schema.Set) + ns := n.(*schema.Set) + + toRemove, err := expandNeptuneParameters(os.Difference(ns).List()) + if err != nil { + return err + } + + log.Printf("[DEBUG] Parameters to remove: %#v", toRemove) + + toAdd, err := expandNeptuneParameters(ns.Difference(os).List()) + if err != nil { + return err + } + + log.Printf("[DEBUG] Parameters to add: %#v", toAdd) + + for len(toRemove) > 0 { + var paramsToModify []*neptune.Parameter + if len(toRemove) <= maxParams { + paramsToModify, toRemove = toRemove[:], nil + } else { + paramsToModify, toRemove = toRemove[:maxParams], toRemove[maxParams:] + } + resetOpts := neptune.ResetDBParameterGroupInput{ + DBParameterGroupName: aws.String(d.Get("name").(string)), + Parameters: paramsToModify, + } + + log.Printf("[DEBUG] Reset Neptune Parameter Group: %s", resetOpts) + err := resource.Retry(30*time.Second, func() *resource.RetryError { + _, err = conn.ResetDBParameterGroup(&resetOpts) + if err != nil { + if isAWSErr(err, "InvalidDBParameterGroupState", " has pending changes") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Error resetting Neptune Parameter Group: %s", err) + } + } + + for len(toAdd) > 0 { + var paramsToModify []*neptune.Parameter + if len(toAdd) <= maxParams { + paramsToModify, toAdd = toAdd[:], nil + } else { + paramsToModify, toAdd = toAdd[:maxParams], toAdd[maxParams:] + } + modifyOpts := neptune.ModifyDBParameterGroupInput{ + DBParameterGroupName: aws.String(d.Get("name").(string)), + Parameters: paramsToModify, + } + + log.Printf("[DEBUG] Modify Neptune Parameter Group: %s", modifyOpts) + _, err = conn.ModifyDBParameterGroup(&modifyOpts) + if err != nil { + return fmt.Errorf("Error modifying Neptune Parameter Group: %s", err) + } + } + + d.SetPartial("parameter") + } + + if d.HasChange("tags") { + err := setTagsNeptune(conn, d, d.Get("arn").(string)) + if err != nil { + return fmt.Errorf("error setting Neptune Parameter Group %q tags: %s", d.Id(), err) + } + d.SetPartial("tags") + } + + d.Partial(false) + + return 
resourceAwsNeptuneParameterGroupRead(d, meta) +} + +func resourceAwsNeptuneParameterGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + return resource.Retry(3*time.Minute, func() *resource.RetryError { + deleteOpts := neptune.DeleteDBParameterGroupInput{ + DBParameterGroupName: aws.String(d.Id()), + } + _, err := conn.DeleteDBParameterGroup(&deleteOpts) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBParameterGroupNotFoundFault, "") { + return nil + } + if isAWSErr(err, neptune.ErrCodeInvalidDBParameterGroupStateFault, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) +} diff --git a/aws/resource_aws_neptune_parameter_group_test.go b/aws/resource_aws_neptune_parameter_group_test.go new file mode 100644 index 00000000000..a04afea8672 --- /dev/null +++ b/aws/resource_aws_neptune_parameter_group_test.go @@ -0,0 +1,297 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSNeptuneParameterGroup_basic(t *testing.T) { + var v neptune.DBParameterGroup + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_neptune_parameter_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneParameterGroupConfig_Required(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneParameterGroupExists(resourceName, &v), + testAccCheckAWSNeptuneParameterGroupAttributes(&v, rName), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:rds:[^:]+:\\d{12}:pg:%s", rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "family", "neptune1"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "parameter.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSNeptuneParameterGroup_Description(t *testing.T) { + var v neptune.DBParameterGroup + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_neptune_parameter_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneParameterGroupConfig_Description(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneParameterGroupExists(resourceName, &v), + testAccCheckAWSNeptuneParameterGroupAttributes(&v, rName), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSNeptuneParameterGroup_Parameter(t *testing.T) { + var v neptune.DBParameterGroup + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := 
"aws_neptune_parameter_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneParameterGroupConfig_Parameter(rName, "neptune_query_timeout", "25", "pending-reboot"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneParameterGroupExists(resourceName, &v), + testAccCheckAWSNeptuneParameterGroupAttributes(&v, rName), + resource.TestCheckResourceAttr(resourceName, "parameter.#", "1"), + resource.TestCheckResourceAttr(resourceName, "parameter.2423897584.apply_method", "pending-reboot"), + resource.TestCheckResourceAttr(resourceName, "parameter.2423897584.name", "neptune_query_timeout"), + resource.TestCheckResourceAttr(resourceName, "parameter.2423897584.value", "25"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + // This test should be updated with a dynamic parameter when available + { + Config: testAccAWSNeptuneParameterGroupConfig_Parameter(rName, "neptune_query_timeout", "25", "immediate"), + ExpectError: regexp.MustCompile(`cannot use immediate apply method for static parameter`), + }, + // Test removing the configuration + { + Config: testAccAWSNeptuneParameterGroupConfig_Required(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneParameterGroupExists(resourceName, &v), + testAccCheckAWSNeptuneParameterGroupAttributes(&v, rName), + resource.TestCheckResourceAttr(resourceName, "parameter.#", "0"), + ), + }, + }, + }) +} + +func TestAccAWSNeptuneParameterGroup_Tags(t *testing.T) { + var v neptune.DBParameterGroup + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_neptune_parameter_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNeptuneParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNeptuneParameterGroupConfig_Tags_SingleTag(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneParameterGroupExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSNeptuneParameterGroupConfig_Tags_SingleTag(rName, "key1", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneParameterGroupExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value2"), + ), + }, + { + Config: testAccAWSNeptuneParameterGroupConfig_Tags_MultipleTags(rName, "key2", "value2", "key3", "value3"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNeptuneParameterGroupExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + resource.TestCheckResourceAttr(resourceName, "tags.key3", "value3"), + ), + }, + }, + }) +} + +func testAccCheckAWSNeptuneParameterGroupDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_neptune_parameter_group" { + continue + } + + // Try to find the Group + resp, err 
:= conn.DescribeDBParameterGroups( + &neptune.DescribeDBParameterGroupsInput{ + DBParameterGroupName: aws.String(rs.Primary.ID), + }) + + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBParameterGroupNotFoundFault, "") { + return nil + } + return err + } + + if len(resp.DBParameterGroups) != 0 && aws.StringValue(resp.DBParameterGroups[0].DBParameterGroupName) == rs.Primary.ID { + return fmt.Errorf("DB Parameter Group still exists") + } + } + + return nil +} + +func testAccCheckAWSNeptuneParameterGroupAttributes(v *neptune.DBParameterGroup, rName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if *v.DBParameterGroupName != rName { + return fmt.Errorf("bad name: %#v", v.DBParameterGroupName) + } + + if *v.DBParameterGroupFamily != "neptune1" { + return fmt.Errorf("bad family: %#v", v.DBParameterGroupFamily) + } + + return nil + } +} + +func testAccCheckAWSNeptuneParameterGroupExists(n string, v *neptune.DBParameterGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Neptune Parameter Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + + opts := neptune.DescribeDBParameterGroupsInput{ + DBParameterGroupName: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeDBParameterGroups(&opts) + + if err != nil { + return err + } + + if len(resp.DBParameterGroups) != 1 || + *resp.DBParameterGroups[0].DBParameterGroupName != rs.Primary.ID { + return fmt.Errorf("Neptune Parameter Group not found") + } + + *v = *resp.DBParameterGroups[0] + + return nil + } +} + +func testAccAWSNeptuneParameterGroupConfig_Parameter(rName, pName, pValue, pApplyMethod string) string { + return fmt.Sprintf(` +resource "aws_neptune_parameter_group" "test" { + family = "neptune1" + name = %q + + parameter { + apply_method = %q + name = %q + value = %q + } +}`, rName, pApplyMethod, pName, pValue) +} + +func testAccAWSNeptuneParameterGroupConfig_Description(rName, description string) string { + return fmt.Sprintf(` +resource "aws_neptune_parameter_group" "test" { + description = %q + family = "neptune1" + name = %q +}`, description, rName) +} + +func testAccAWSNeptuneParameterGroupConfig_Required(rName string) string { + return fmt.Sprintf(` +resource "aws_neptune_parameter_group" "test" { + family = "neptune1" + name = %q +}`, rName) +} + +func testAccAWSNeptuneParameterGroupConfig_Tags_SingleTag(name, tKey, tValue string) string { + return fmt.Sprintf(` +resource "aws_neptune_parameter_group" "test" { + family = "neptune1" + name = %q + + tags { + %s = %q + } +} +`, name, tKey, tValue) +} + +func testAccAWSNeptuneParameterGroupConfig_Tags_MultipleTags(name, tKey1, tValue1, tKey2, tValue2 string) string { + return fmt.Sprintf(` +resource "aws_neptune_parameter_group" "test" { + family = "neptune1" + name = %q + + tags { + %s = %q + %s = %q + } +} +`, name, tKey1, tValue1, tKey2, tValue2) +} diff --git a/aws/resource_aws_neptune_subnet_group.go b/aws/resource_aws_neptune_subnet_group.go new file mode 100644 index 00000000000..3db16ee108e --- /dev/null +++ b/aws/resource_aws_neptune_subnet_group.go @@ -0,0 +1,221 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func 
resourceAwsNeptuneSubnetGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNeptuneSubnetGroupCreate, + Read: resourceAwsNeptuneSubnetGroupRead, + Update: resourceAwsNeptuneSubnetGroupUpdate, + Delete: resourceAwsNeptuneSubnetGroupDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, + ValidateFunc: validateNeptuneSubnetGroupName, + }, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateNeptuneSubnetGroupNamePrefix, + }, + + "description": { + Type: schema.TypeString, + Optional: true, + Default: "Managed by Terraform", + }, + + "subnet_ids": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + + "tags": tagsSchema(), + }, + } +} + +func resourceAwsNeptuneSubnetGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + tags := tagsFromMapNeptune(d.Get("tags").(map[string]interface{})) + + subnetIdsSet := d.Get("subnet_ids").(*schema.Set) + subnetIds := make([]*string, subnetIdsSet.Len()) + for i, subnetId := range subnetIdsSet.List() { + subnetIds[i] = aws.String(subnetId.(string)) + } + + var groupName string + if v, ok := d.GetOk("name"); ok { + groupName = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + groupName = resource.PrefixedUniqueId(v.(string)) + } else { + groupName = resource.UniqueId() + } + + createOpts := neptune.CreateDBSubnetGroupInput{ + DBSubnetGroupName: aws.String(groupName), + DBSubnetGroupDescription: aws.String(d.Get("description").(string)), + SubnetIds: subnetIds, + Tags: tags, + } + + log.Printf("[DEBUG] Create Neptune Subnet Group: %#v", createOpts) + _, err := conn.CreateDBSubnetGroup(&createOpts) + if err != nil { + return fmt.Errorf("Error creating Neptune Subnet Group: %s", err) + } + + d.SetId(aws.StringValue(createOpts.DBSubnetGroupName)) + log.Printf("[INFO] Neptune Subnet Group ID: %s", d.Id()) + return resourceAwsNeptuneSubnetGroupRead(d, meta) +} + +func resourceAwsNeptuneSubnetGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + describeOpts := neptune.DescribeDBSubnetGroupsInput{ + DBSubnetGroupName: aws.String(d.Id()), + } + + var subnetGroups []*neptune.DBSubnetGroup + if err := conn.DescribeDBSubnetGroupsPages(&describeOpts, func(resp *neptune.DescribeDBSubnetGroupsOutput, lastPage bool) bool { + for _, v := range resp.DBSubnetGroups { + subnetGroups = append(subnetGroups, v) + } + return !lastPage + }); err != nil { + if isAWSErr(err, neptune.ErrCodeDBSubnetGroupNotFoundFault, "") { + log.Printf("[WARN] Neptune Subnet Group (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + if len(subnetGroups) == 0 { + log.Printf("[WARN] Unable to find Neptune Subnet Group: %#v, removing from state", subnetGroups) + d.SetId("") + return nil + } + + var subnetGroup *neptune.DBSubnetGroup + subnetGroup = subnetGroups[0] + + if subnetGroup.DBSubnetGroupName == nil { + return fmt.Errorf("Unable to find Neptune Subnet Group: %#v", subnetGroups) + } + + d.Set("name", subnetGroup.DBSubnetGroupName) + d.Set("description", 
subnetGroup.DBSubnetGroupDescription) + + subnets := make([]string, 0, len(subnetGroup.Subnets)) + for _, s := range subnetGroup.Subnets { + subnets = append(subnets, aws.StringValue(s.SubnetIdentifier)) + } + if err := d.Set("subnet_ids", subnets); err != nil { + return fmt.Errorf("error setting subnet_ids: %s", err) + } + + // list tags for resource + // set tags + + //Amazon Neptune shares the format of Amazon RDS ARNs. Neptune ARNs contain rds and not neptune. + //https://docs.aws.amazon.com/neptune/latest/userguide/tagging.ARN.html + d.Set("arn", subnetGroup.DBSubnetGroupArn) + resp, err := conn.ListTagsForResource(&neptune.ListTagsForResourceInput{ + ResourceName: subnetGroup.DBSubnetGroupArn, + }) + + if err != nil { + log.Printf("[DEBUG] Error retreiving tags for ARN: %s", aws.StringValue(subnetGroup.DBSubnetGroupArn)) + } + + d.Set("tags", tagsToMapNeptune(resp.TagList)) + + return nil +} + +func resourceAwsNeptuneSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + if d.HasChange("subnet_ids") || d.HasChange("description") { + _, n := d.GetChange("subnet_ids") + if n == nil { + n = new(schema.Set) + } + ns := n.(*schema.Set) + + var sIds []*string + for _, s := range ns.List() { + sIds = append(sIds, aws.String(s.(string))) + } + + _, err := conn.ModifyDBSubnetGroup(&neptune.ModifyDBSubnetGroupInput{ + DBSubnetGroupName: aws.String(d.Id()), + DBSubnetGroupDescription: aws.String(d.Get("description").(string)), + SubnetIds: sIds, + }) + + if err != nil { + return err + } + } + + //https://docs.aws.amazon.com/neptune/latest/userguide/tagging.ARN.html + arn := d.Get("arn").(string) + if err := setTagsNeptune(conn, d, arn); err != nil { + return err + } else { + d.SetPartial("tags") + } + + return resourceAwsNeptuneSubnetGroupRead(d, meta) +} + +func resourceAwsNeptuneSubnetGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).neptuneconn + + input := neptune.DeleteDBSubnetGroupInput{ + DBSubnetGroupName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting Neptune Subnet Group: %s", d.Id()) + _, err := conn.DeleteDBSubnetGroup(&input) + if err != nil { + if isAWSErr(err, neptune.ErrCodeDBSubnetGroupNotFoundFault, "") { + return nil + } + return fmt.Errorf("error deleting Neptune Subnet Group (%s): %s", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_neptune_subnet_group_test.go b/aws/resource_aws_neptune_subnet_group_test.go new file mode 100644 index 00000000000..c6e55a4920b --- /dev/null +++ b/aws/resource_aws_neptune_subnet_group_test.go @@ -0,0 +1,326 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/neptune" +) + +func TestAccAWSNeptuneSubnetGroup_basic(t *testing.T) { + var v neptune.DBSubnetGroup + + rName := fmt.Sprintf("tf-test-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckNeptuneSubnetGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccNeptuneSubnetGroupConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckNeptuneSubnetGroupExists( + "aws_neptune_subnet_group.foo", &v), + 
resource.TestCheckResourceAttr(
+						"aws_neptune_subnet_group.foo", "name", rName),
+					resource.TestCheckResourceAttr(
+						"aws_neptune_subnet_group.foo", "description", "Managed by Terraform"),
+				),
+			},
+			{
+				ResourceName:      "aws_neptune_subnet_group.foo",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccAWSNeptuneSubnetGroup_namePrefix(t *testing.T) {
+	var v neptune.DBSubnetGroup
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckNeptuneSubnetGroupDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccNeptuneSubnetGroupConfig_namePrefix,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckNeptuneSubnetGroupExists(
+						"aws_neptune_subnet_group.test", &v),
+					resource.TestMatchResourceAttr(
+						"aws_neptune_subnet_group.test", "name", regexp.MustCompile("^tf_test-")),
+				),
+			},
+			{
+				ResourceName:            "aws_neptune_subnet_group.test",
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"name_prefix"},
+			},
+		},
+	})
+}
+
+func TestAccAWSNeptuneSubnetGroup_generatedName(t *testing.T) {
+	var v neptune.DBSubnetGroup
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckNeptuneSubnetGroupDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccNeptuneSubnetGroupConfig_generatedName,
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckNeptuneSubnetGroupExists(
+						"aws_neptune_subnet_group.test", &v),
+				),
+			},
+			{
+				ResourceName:      "aws_neptune_subnet_group.test",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccAWSNeptuneSubnetGroup_updateDescription(t *testing.T) {
+	var v neptune.DBSubnetGroup
+
+	rName := fmt.Sprintf("tf-test-%d", acctest.RandInt())
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckNeptuneSubnetGroupDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccNeptuneSubnetGroupConfig(rName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckNeptuneSubnetGroupExists(
+						"aws_neptune_subnet_group.foo", &v),
+					resource.TestCheckResourceAttr(
+						"aws_neptune_subnet_group.foo", "description", "Managed by Terraform"),
+				),
+			},
+
+			{
+				Config: testAccNeptuneSubnetGroupConfig_updatedDescription(rName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckNeptuneSubnetGroupExists(
+						"aws_neptune_subnet_group.foo", &v),
+					resource.TestCheckResourceAttr(
+						"aws_neptune_subnet_group.foo", "description", "foo description updated"),
+				),
+			},
+			{
+				ResourceName:      "aws_neptune_subnet_group.foo",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func testAccCheckNeptuneSubnetGroupDestroy(s *terraform.State) error {
+	conn := testAccProvider.Meta().(*AWSClient).neptuneconn
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "aws_neptune_subnet_group" {
+			continue
+		}
+
+		// Try to find the resource
+		resp, err := conn.DescribeDBSubnetGroups(
+			&neptune.DescribeDBSubnetGroupsInput{DBSubnetGroupName: aws.String(rs.Primary.ID)})
+		if err == nil {
+			if len(resp.DBSubnetGroups) > 0 {
+				return fmt.Errorf("Neptune Subnet Group (%s) still exists", rs.Primary.ID)
+			}
+
+			continue
+		}
+
+		// Verify the error is what we want
+		neptuneerr, ok := err.(awserr.Error)
+		if !ok {
+			return err
+		}
+		if neptuneerr.Code() != neptune.ErrCodeDBSubnetGroupNotFoundFault {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func 
testAccCheckNeptuneSubnetGroupExists(n string, v *neptune.DBSubnetGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).neptuneconn + resp, err := conn.DescribeDBSubnetGroups( + &neptune.DescribeDBSubnetGroupsInput{DBSubnetGroupName: aws.String(rs.Primary.ID)}) + if err != nil { + return err + } + if len(resp.DBSubnetGroups) == 0 { + return fmt.Errorf("DbSubnetGroup not found") + } + + *v = *resp.DBSubnetGroups[0] + + return nil + } +} + +func testAccNeptuneSubnetGroupConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-neptune-subnet-group" + } +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + availability_zone = "us-west-2a" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-neptune-subnet-group-1" + } +} + +resource "aws_subnet" "bar" { + cidr_block = "10.1.2.0/24" + availability_zone = "us-west-2b" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-neptune-subnet-group-2" + } +} + +resource "aws_neptune_subnet_group" "foo" { + name = "%s" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + tags { + Name = "tf-neptunesubnet-group-test" + } +}`, rName) +} + +func testAccNeptuneSubnetGroupConfig_updatedDescription(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-neptune-subnet-group-updated-description" + } +} + +resource "aws_subnet" "foo" { + cidr_block = "10.1.1.0/24" + availability_zone = "us-west-2a" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-neptune-subnet-group-1" + } +} + +resource "aws_subnet" "bar" { + cidr_block = "10.1.2.0/24" + availability_zone = "us-west-2b" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-neptune-subnet-group-2" + } +} + +resource "aws_neptune_subnet_group" "foo" { + name = "%s" + description = "foo description updated" + subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] + tags { + Name = "tf-neptunesubnet-group-test" + } +}`, rName) +} + +const testAccNeptuneSubnetGroupConfig_namePrefix = ` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-neptune-subnet-group-name-prefix" + } +} + +resource "aws_subnet" "a" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.1.1.0/24" + availability_zone = "us-west-2a" + tags { + Name = "tf-acc-neptune-subnet-group-name-prefix-a" + } +} + +resource "aws_subnet" "b" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.1.2.0/24" + availability_zone = "us-west-2b" + tags { + Name = "tf-acc-neptune-subnet-group-name-prefix-b" + } +} + +resource "aws_neptune_subnet_group" "test" { + name_prefix = "tf_test-" + subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"] +}` + +const testAccNeptuneSubnetGroupConfig_generatedName = ` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-neptune-subnet-group-generated-name" + } +} + +resource "aws_subnet" "a" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.1.1.0/24" + availability_zone = "us-west-2a" + tags { + Name = "tf-acc-neptune-subnet-group-generated-name-a" + } +} + +resource "aws_subnet" "b" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "10.1.2.0/24" + availability_zone = "us-west-2b" + tags { + 
Name = "tf-acc-neptune-subnet-group-generated-name-a" + } +} + +resource "aws_neptune_subnet_group" "test" { + subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"] +}` diff --git a/aws/resource_aws_network_acl.go b/aws/resource_aws_network_acl.go index 6080e53c153..3776cc2c0dd 100644 --- a/aws/resource_aws_network_acl.go +++ b/aws/resource_aws_network_acl.go @@ -36,11 +36,12 @@ func resourceAwsNetworkAcl() *schema.Resource { Computed: false, }, "subnet_id": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: false, - Deprecated: "Attribute subnet_id is deprecated on network_acl resources. Use subnet_ids instead", + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: false, + ConflictsWith: []string{"subnet_ids"}, + Deprecated: "Attribute subnet_id is deprecated on network_acl resources. Use subnet_ids instead", }, "subnet_ids": { Type: schema.TypeSet, @@ -308,6 +309,10 @@ func resourceAwsNetworkAclUpdate(d *schema.ResourceData, meta interface{}) error for _, r := range remove { association, err := findNetworkAclAssociation(r.(string), conn) if err != nil { + if isResourceNotFoundError(err) { + // Subnet has been deleted. + continue + } return fmt.Errorf("Failed to find acl association: acl %s with subnet %s: %s", d.Id(), r, err) } log.Printf("DEBUG] Replacing Network Acl Association (%s) with Default Network ACL ID (%s)", *association.NetworkAclAssociationId, *defaultAcl.NetworkAclId) @@ -479,6 +484,9 @@ func resourceAwsNetworkAclDelete(d *schema.ResourceData, meta interface{}) error for _, i := range ids { a, err := findNetworkAclAssociation(i.(string), conn) if err != nil { + if isResourceNotFoundError(err) { + continue + } return resource.NonRetryableError(err) } associations = append(associations, a) @@ -530,7 +538,7 @@ func resourceAwsNetworkAclDelete(d *schema.ResourceData, meta interface{}) error }) if retryErr != nil { - return fmt.Errorf("[ERR] Error destroying Network ACL (%s): %s", d.Id(), retryErr) + return fmt.Errorf("Error destroying Network ACL (%s): %s", d.Id(), retryErr) } return nil } @@ -597,26 +605,30 @@ func getDefaultNetworkAcl(vpc_id string, conn *ec2.EC2) (defaultAcl *ec2.Network } func findNetworkAclAssociation(subnetId string, conn *ec2.EC2) (networkAclAssociation *ec2.NetworkAclAssociation, err error) { - resp, err := conn.DescribeNetworkAcls(&ec2.DescribeNetworkAclsInput{ - Filters: []*ec2.Filter{ - { - Name: aws.String("association.subnet-id"), - Values: []*string{aws.String(subnetId)}, - }, + req := &ec2.DescribeNetworkAclsInput{} + req.Filters = buildEC2AttributeFilterList( + map[string]string{ + "association.subnet-id": subnetId, }, - }) - + ) + resp, err := conn.DescribeNetworkAcls(req) if err != nil { return nil, err } - if resp.NetworkAcls != nil && len(resp.NetworkAcls) > 0 { + + if len(resp.NetworkAcls) > 0 { for _, association := range resp.NetworkAcls[0].Associations { - if *association.SubnetId == subnetId { + if aws.StringValue(association.SubnetId) == subnetId { return association, nil } } } - return nil, fmt.Errorf("could not find association for subnet: %s ", subnetId) + + return nil, &resource.NotFoundError{ + LastRequest: req, + LastResponse: resp, + Message: fmt.Sprintf("could not find association for subnet: %s ", subnetId), + } } // networkAclEntriesToMapList turns ingress/egress rules read from AWS into a list diff --git a/aws/resource_aws_network_acl_rule.go b/aws/resource_aws_network_acl_rule.go index d3aa099fce6..6fe843ca70d 100644 --- a/aws/resource_aws_network_acl_rule.go +++ 
b/aws/resource_aws_network_acl_rule.go @@ -42,10 +42,15 @@ func resourceAwsNetworkAclRule() *schema.Resource { Required: true, ForceNew: true, DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { - if old == "all" && new == "-1" || old == "-1" && new == "all" { - return true + pi := protocolIntegers() + if val, ok := pi[old]; ok { + old = strconv.Itoa(val) } - return false + if val, ok := pi[new]; ok { + new = strconv.Itoa(val) + } + + return old == new }, }, "rule_action": { @@ -54,14 +59,16 @@ func resourceAwsNetworkAclRule() *schema.Resource { ForceNew: true, }, "cidr_block": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"ipv6_cidr_block"}, }, "ipv6_cidr_block": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"cidr_block"}, }, "from_port": { Type: schema.TypeInt, @@ -131,8 +138,8 @@ func resourceAwsNetworkAclRuleCreate(d *schema.ResourceData, meta interface{}) e } // Specify additional required fields for ICMP. For the list - // of ICMP codes and types, see: http://www.nthelp.com/icmp.html - if p == 1 { + // of ICMP codes and types, see: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml + if p == 1 || p == 58 { params.IcmpTypeCode = &ec2.IcmpTypeCode{} if v, ok := d.GetOk("icmp_type"); ok { icmpType, err := strconv.Atoi(v.(string)) diff --git a/aws/resource_aws_network_acl_rule_test.go b/aws/resource_aws_network_acl_rule_test.go index 217dc78fb36..478633a1ae8 100644 --- a/aws/resource_aws_network_acl_rule_test.go +++ b/aws/resource_aws_network_acl_rule_test.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -16,7 +17,7 @@ import ( func TestAccAWSNetworkAclRule_basic(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSNetworkAclRuleDestroy, @@ -35,7 +36,7 @@ func TestAccAWSNetworkAclRule_basic(t *testing.T) { func TestAccAWSNetworkAclRule_missingParam(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSNetworkAclRuleDestroy, @@ -51,7 +52,7 @@ func TestAccAWSNetworkAclRule_missingParam(t *testing.T) { func TestAccAWSNetworkAclRule_ipv6(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSNetworkAclRuleDestroy, @@ -66,9 +67,29 @@ func TestAccAWSNetworkAclRule_ipv6(t *testing.T) { }) } +func TestAccAWSNetworkAclRule_ipv6ICMP(t *testing.T) { + var networkAcl ec2.NetworkAcl + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_network_acl_rule.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkAclRuleDestroy, + Steps: []resource.TestStep{ + 
{ + Config: testAccAWSNetworkAclRuleConfigIpv6ICMP(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkAclRuleExists(resourceName, &networkAcl), + ), + }, + }, + }) +} + func TestAccAWSNetworkAclRule_allProtocol(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSNetworkAclRuleDestroy, @@ -85,6 +106,25 @@ func TestAccAWSNetworkAclRule_allProtocol(t *testing.T) { }) } +func TestAccAWSNetworkAclRule_tcpProtocol(t *testing.T) { + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkAclRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNetworkAclRuleTcpProtocolConfig, + ExpectNonEmptyPlan: false, + }, + { + Config: testAccAWSNetworkAclRuleTcpProtocolConfigNoRealUpdate, + ExpectNonEmptyPlan: false, + }, + }, + }) +} + func TestResourceAWSNetworkAclRule_validateICMPArgumentValue(t *testing.T) { type testCases struct { Value string @@ -140,7 +180,7 @@ func TestResourceAWSNetworkAclRule_validateICMPArgumentValue(t *testing.T) { func TestAccAWSNetworkAclRule_deleteRule(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSNetworkAclRuleDestroy, @@ -371,6 +411,28 @@ resource "aws_network_acl_rule" "baz" { } ` +const testAccAWSNetworkAclRuleTcpProtocolConfigNoRealUpdate = ` +resource "aws_vpc" "foo" { + cidr_block = "10.3.0.0/16" + tags { + Name = "testAccAWSNetworkAclRuleTcpProtocolConfigNoRealUpdate" + } +} +resource "aws_network_acl" "bar" { + vpc_id = "${aws_vpc.foo.id}" +} +resource "aws_network_acl_rule" "baz" { + network_acl_id = "${aws_network_acl.bar.id}" + rule_number = 150 + egress = false + protocol = "tcp" + rule_action = "allow" + cidr_block = "0.0.0.0/0" + from_port = 22 + to_port = 22 +} +` + const testAccAWSNetworkAclRuleAllProtocolConfig = ` resource "aws_vpc" "foo" { cidr_block = "10.3.0.0/16" @@ -398,6 +460,28 @@ resource "aws_network_acl_rule" "baz" { } ` +const testAccAWSNetworkAclRuleTcpProtocolConfig = ` +resource "aws_vpc" "foo" { + cidr_block = "10.3.0.0/16" + tags { + Name = "testAccAWSNetworkAclRuleTcpProtocolConfig" + } +} +resource "aws_network_acl" "bar" { + vpc_id = "${aws_vpc.foo.id}" +} +resource "aws_network_acl_rule" "baz" { + network_acl_id = "${aws_network_acl.bar.id}" + rule_number = 150 + egress = false + protocol = "6" + rule_action = "allow" + cidr_block = "0.0.0.0/0" + from_port = 22 + to_port = 22 +} +` + const testAccAWSNetworkAclRuleIpv6Config = ` resource "aws_vpc" "foo" { cidr_block = "10.3.0.0/16" @@ -424,3 +508,35 @@ resource "aws_network_acl_rule" "baz" { to_port = 22 } ` + +func testAccAWSNetworkAclRuleConfigIpv6ICMP(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.3.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_network_acl" "test" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_network_acl_rule" "test" { + from_port = -1 + icmp_code = -1 + icmp_type = -1 + ipv6_cidr_block = "::/0" + network_acl_id = "${aws_network_acl.test.id}" + protocol = 58 + rule_action = "allow" + rule_number = 150 + to_port = -1 +} +`, rName, rName) +} diff --git a/aws/resource_aws_network_acl_test.go b/aws/resource_aws_network_acl_test.go 
index 510488e2286..d97cae8e5eb 100644 --- a/aws/resource_aws_network_acl_test.go +++ b/aws/resource_aws_network_acl_test.go @@ -8,6 +8,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) @@ -38,6 +39,10 @@ func testSweepNetworkAcls(region string) error { } resp, err := conn.DescribeNetworkAcls(req) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EC2 Network ACL sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error describing Network ACLs: %s", err) } @@ -106,10 +111,29 @@ func testSweepNetworkAcls(region string) error { return nil } +func TestAccAWSNetworkAcl_importBasic(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkAclDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNetworkAclEgressNIngressConfig, + }, + + { + ResourceName: "aws_network_acl.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSNetworkAcl_EgressAndIngressRules(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.bar", Providers: testAccProviders, @@ -152,7 +176,7 @@ func TestAccAWSNetworkAcl_EgressAndIngressRules(t *testing.T) { func TestAccAWSNetworkAcl_OnlyIngressRules_basic(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.foos", Providers: testAccProviders, @@ -183,7 +207,7 @@ func TestAccAWSNetworkAcl_OnlyIngressRules_basic(t *testing.T) { func TestAccAWSNetworkAcl_OnlyIngressRules_update(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.foos", Providers: testAccProviders, @@ -238,7 +262,7 @@ func TestAccAWSNetworkAcl_OnlyIngressRules_update(t *testing.T) { func TestAccAWSNetworkAcl_CaseSensitivityNoChanges(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.foos", Providers: testAccProviders, @@ -257,7 +281,7 @@ func TestAccAWSNetworkAcl_CaseSensitivityNoChanges(t *testing.T) { func TestAccAWSNetworkAcl_OnlyEgressRules(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.bond", Providers: testAccProviders, @@ -275,8 +299,9 @@ func TestAccAWSNetworkAcl_OnlyEgressRules(t *testing.T) { } func TestAccAWSNetworkAcl_SubnetChange(t *testing.T) { + var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.bar", Providers: testAccProviders, @@ -285,12 +310,14 @@ func TestAccAWSNetworkAcl_SubnetChange(t *testing.T) { { Config: testAccAWSNetworkAclSubnetConfig, Check: 
resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkAclExists("aws_network_acl.bar", &networkAcl), testAccCheckSubnetIsAssociatedWithAcl("aws_network_acl.bar", "aws_subnet.old"), ), }, { Config: testAccAWSNetworkAclSubnetConfigChange, Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkAclExists("aws_network_acl.bar", &networkAcl), testAccCheckSubnetIsNotAssociatedWithAcl("aws_network_acl.bar", "aws_subnet.old"), testAccCheckSubnetIsAssociatedWithAcl("aws_network_acl.bar", "aws_subnet.new"), ), @@ -313,7 +340,7 @@ func TestAccAWSNetworkAcl_Subnets(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.bar", Providers: testAccProviders, @@ -343,10 +370,51 @@ func TestAccAWSNetworkAcl_Subnets(t *testing.T) { }) } +func TestAccAWSNetworkAcl_SubnetsDelete(t *testing.T) { + var networkAcl ec2.NetworkAcl + + checkACLSubnets := func(acl *ec2.NetworkAcl, count int) resource.TestCheckFunc { + return func(*terraform.State) (err error) { + if count != len(acl.Associations) { + return fmt.Errorf("ACL association count does not match, expected %d, got %d", count, len(acl.Associations)) + } + + return nil + } + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_network_acl.bar", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkAclDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNetworkAclSubnet_SubnetIds, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkAclExists("aws_network_acl.bar", &networkAcl), + testAccCheckSubnetIsAssociatedWithAcl("aws_network_acl.bar", "aws_subnet.one"), + testAccCheckSubnetIsAssociatedWithAcl("aws_network_acl.bar", "aws_subnet.two"), + checkACLSubnets(&networkAcl, 2), + ), + }, + + { + Config: testAccAWSNetworkAclSubnet_SubnetIdsDeleteOne, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkAclExists("aws_network_acl.bar", &networkAcl), + testAccCheckSubnetIsAssociatedWithAcl("aws_network_acl.bar", "aws_subnet.one"), + checkACLSubnets(&networkAcl, 1), + ), + }, + }, + }) +} + func TestAccAWSNetworkAcl_ipv6Rules(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.foos", Providers: testAccProviders, @@ -376,10 +444,30 @@ func TestAccAWSNetworkAcl_ipv6Rules(t *testing.T) { }) } +func TestAccAWSNetworkAcl_ipv6ICMPRules(t *testing.T) { + var networkAcl ec2.NetworkAcl + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_network_acl.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkAclDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNetworkAclConfigIpv6ICMP(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkAclExists(resourceName, &networkAcl), + ), + }, + }, + }) +} + func TestAccAWSNetworkAcl_ipv6VpcRules(t *testing.T) { var networkAcl ec2.NetworkAcl - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.foos", Providers: testAccProviders, @@ -402,7 +490,7 @@ func TestAccAWSNetworkAcl_ipv6VpcRules(t *testing.T) { func TestAccAWSNetworkAcl_espProtocol(t *testing.T) { var networkAcl ec2.NetworkAcl - 
resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_acl.testesp", Providers: testAccProviders, @@ -459,7 +547,7 @@ func testAccCheckAWSNetworkAclExists(n string, networkAcl *ec2.NetworkAcl) resou } if rs.Primary.ID == "" { - return fmt.Errorf("No Security Group is set") + return fmt.Errorf("No ID is set: %s", n) } conn := testAccProvider.Meta().(*AWSClient).ec2conn @@ -548,38 +636,69 @@ func testAccCheckSubnetIsNotAssociatedWithAcl(acl string, subnet string) resourc } } +func testAccAWSNetworkAclConfigIpv6ICMP(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_network_acl" "test" { + vpc_id = "${aws_vpc.test.id}" + + ingress { + action = "allow" + from_port = 0 + icmp_code = -1 + icmp_type = -1 + ipv6_cidr_block = "::/0" + protocol = 58 + rule_no = 1 + to_port = 0 + } + + tags { + Name = %q + } +} +`, rName, rName) +} + const testAccAWSNetworkAclIpv6Config = ` resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-ipv6" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-ipv6" + } } resource "aws_subnet" "blob" { - cidr_block = "10.1.1.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-ipv6" - } + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-ipv6" + } } resource "aws_network_acl" "foos" { - vpc_id = "${aws_vpc.foo.id}" - ingress = { - protocol = "tcp" - rule_no = 1 - action = "allow" - ipv6_cidr_block = "::/0" - from_port = 0 - to_port = 22 - } + vpc_id = "${aws_vpc.foo.id}" + ingress = { + protocol = "tcp" + rule_no = 1 + action = "allow" + ipv6_cidr_block = "::/0" + from_port = 0 + to_port = 22 + } - subnet_ids = ["${aws_subnet.blob.id}"] - tags { - Name = "tf-acc-acl-ipv6" - } + subnet_ids = ["${aws_subnet.blob.id}"] + tags { + Name = "tf-acc-acl-ipv6" + } } ` @@ -589,414 +708,439 @@ provider "aws" { } resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - assign_generated_ipv6_cidr_block = true + cidr_block = "10.1.0.0/16" + assign_generated_ipv6_cidr_block = true - tags { - Name = "terraform-testacc-network-acl-ipv6-vpc-rules" - } + tags { + Name = "terraform-testacc-network-acl-ipv6-vpc-rules" + } } resource "aws_network_acl" "foos" { - vpc_id = "${aws_vpc.foo.id}" - ingress = { - protocol = "tcp" - rule_no = 1 - action = "allow" - ipv6_cidr_block = "2600:1f16:d1e:9a00::/56" - from_port = 0 - to_port = 22 - } - tags { - Name = "tf-acc-acl-ipv6" - } + vpc_id = "${aws_vpc.foo.id}" + ingress = { + protocol = "tcp" + rule_no = 1 + action = "allow" + ipv6_cidr_block = "2600:1f16:d1e:9a00::/56" + from_port = 0 + to_port = 22 + } + tags { + Name = "tf-acc-acl-ipv6" + } } ` const testAccAWSNetworkAclIngressConfig = ` resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-ingress" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-ingress" + } } resource "aws_subnet" "blob" { - cidr_block = "10.1.1.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-ingress" - } + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-ingress" + } } resource "aws_network_acl" "foos" { - 
vpc_id = "${aws_vpc.foo.id}" - ingress = { - protocol = "tcp" - rule_no = 1 - action = "deny" - cidr_block = "10.2.0.0/18" - from_port = 0 - to_port = 22 - } - ingress = { - protocol = "tcp" - rule_no = 2 - action = "deny" - cidr_block = "10.2.0.0/18" - from_port = 443 - to_port = 443 - } + vpc_id = "${aws_vpc.foo.id}" + ingress = { + protocol = "tcp" + rule_no = 1 + action = "deny" + cidr_block = "10.2.0.0/18" + from_port = 0 + to_port = 22 + } + ingress = { + protocol = "tcp" + rule_no = 2 + action = "deny" + cidr_block = "10.2.0.0/18" + from_port = 443 + to_port = 443 + } - subnet_ids = ["${aws_subnet.blob.id}"] - tags { - Name = "tf-acc-acl-ingress" - } + subnet_ids = ["${aws_subnet.blob.id}"] + tags { + Name = "tf-acc-acl-ingress" + } } ` const testAccAWSNetworkAclCaseSensitiveConfig = ` resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-ingress" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-ingress" + } } resource "aws_subnet" "blob" { - cidr_block = "10.1.1.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-ingress" - } + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-ingress" + } } resource "aws_network_acl" "foos" { - vpc_id = "${aws_vpc.foo.id}" - ingress = { - protocol = "tcp" - rule_no = 1 - action = "Allow" - cidr_block = "10.2.0.0/18" - from_port = 0 - to_port = 22 - } - subnet_ids = ["${aws_subnet.blob.id}"] - tags { - Name = "tf-acc-acl-case-sensitive" - } + vpc_id = "${aws_vpc.foo.id}" + ingress = { + protocol = "tcp" + rule_no = 1 + action = "Allow" + cidr_block = "10.2.0.0/18" + from_port = 0 + to_port = 22 + } + subnet_ids = ["${aws_subnet.blob.id}"] + tags { + Name = "tf-acc-acl-case-sensitive" + } } ` const testAccAWSNetworkAclIngressConfigChange = ` resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-ingress" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-ingress" + } } resource "aws_subnet" "blob" { - cidr_block = "10.1.1.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-ingress" - } + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-ingress" + } } resource "aws_network_acl" "foos" { - vpc_id = "${aws_vpc.foo.id}" - ingress = { - protocol = "tcp" - rule_no = 1 - action = "deny" - cidr_block = "10.2.0.0/18" - from_port = 0 - to_port = 22 - } - subnet_ids = ["${aws_subnet.blob.id}"] - tags { - Name = "tf-acc-acl-ingress" - } + vpc_id = "${aws_vpc.foo.id}" + ingress = { + protocol = "tcp" + rule_no = 1 + action = "deny" + cidr_block = "10.2.0.0/18" + from_port = 0 + to_port = 22 + } + subnet_ids = ["${aws_subnet.blob.id}"] + tags { + Name = "tf-acc-acl-ingress" + } } ` const testAccAWSNetworkAclEgressConfig = ` resource "aws_vpc" "foo" { - cidr_block = "10.2.0.0/16" - tags { - Name = "terraform-testacc-network-acl-egress" - } + cidr_block = "10.2.0.0/16" + tags { + Name = "terraform-testacc-network-acl-egress" + } } resource "aws_subnet" "blob" { - cidr_block = "10.2.0.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-egress" - } + cidr_block = "10.2.0.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-egress" 
+ } } resource "aws_network_acl" "bond" { - vpc_id = "${aws_vpc.foo.id}" - egress = { - protocol = "tcp" - rule_no = 2 - action = "allow" - cidr_block = "10.2.0.0/18" - from_port = 443 - to_port = 443 - } + vpc_id = "${aws_vpc.foo.id}" + egress = { + protocol = "tcp" + rule_no = 2 + action = "allow" + cidr_block = "10.2.0.0/18" + from_port = 443 + to_port = 443 + } - egress = { - protocol = "-1" - rule_no = 4 - action = "allow" - cidr_block = "0.0.0.0/0" - from_port = 0 - to_port = 0 - } + egress = { + protocol = "-1" + rule_no = 4 + action = "allow" + cidr_block = "0.0.0.0/0" + from_port = 0 + to_port = 0 + } - egress = { - protocol = "tcp" - rule_no = 1 - action = "allow" - cidr_block = "10.2.0.0/18" - from_port = 80 - to_port = 80 - } + egress = { + protocol = "tcp" + rule_no = 1 + action = "allow" + cidr_block = "10.2.0.0/18" + from_port = 80 + to_port = 80 + } - egress = { - protocol = "tcp" - rule_no = 3 - action = "allow" - cidr_block = "10.2.0.0/18" - from_port = 22 - to_port = 22 - } + egress = { + protocol = "tcp" + rule_no = 3 + action = "allow" + cidr_block = "10.2.0.0/18" + from_port = 22 + to_port = 22 + } - tags { - foo = "bar" - Name = "tf-acc-acl-egress" - } + tags { + foo = "bar" + Name = "tf-acc-acl-egress" + } } ` const testAccAWSNetworkAclEgressNIngressConfig = ` resource "aws_vpc" "foo" { - cidr_block = "10.3.0.0/16" - tags { - Name = "terraform-testacc-network-acl-egress-and-ingress" - } + cidr_block = "10.3.0.0/16" + tags { + Name = "terraform-testacc-network-acl-egress-and-ingress" + } } resource "aws_subnet" "blob" { - cidr_block = "10.3.0.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-egress-and-ingress" - } + cidr_block = "10.3.0.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-egress-and-ingress" + } } resource "aws_network_acl" "bar" { - vpc_id = "${aws_vpc.foo.id}" - egress = { - protocol = "tcp" - rule_no = 2 - action = "allow" - cidr_block = "10.3.0.0/18" - from_port = 443 - to_port = 443 - } + vpc_id = "${aws_vpc.foo.id}" + egress = { + protocol = "tcp" + rule_no = 2 + action = "allow" + cidr_block = "10.3.0.0/18" + from_port = 443 + to_port = 443 + } - ingress = { - protocol = "tcp" - rule_no = 1 - action = "allow" - cidr_block = "10.3.0.0/18" - from_port = 80 - to_port = 80 - } - tags { - Name = "tf-acc-acl-egress-and-ingress" - } + ingress = { + protocol = "tcp" + rule_no = 1 + action = "allow" + cidr_block = "10.3.0.0/18" + from_port = 80 + to_port = 80 + } + tags { + Name = "tf-acc-acl-egress-and-ingress" + } } ` const testAccAWSNetworkAclSubnetConfig = ` resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-subnet-change" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-subnet-change" + } } resource "aws_subnet" "old" { - cidr_block = "10.1.111.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-subnet-change-old" - } + cidr_block = "10.1.111.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-subnet-change-old" + } } resource "aws_subnet" "new" { - cidr_block = "10.1.1.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-subnet-change-new" - } + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = 
"tf-acc-network-acl-subnet-change-new" + } } resource "aws_network_acl" "roll" { - vpc_id = "${aws_vpc.foo.id}" - subnet_ids = ["${aws_subnet.new.id}"] - tags { - Name = "tf-acc-acl-subnet-change-roll" - } + vpc_id = "${aws_vpc.foo.id}" + subnet_ids = ["${aws_subnet.new.id}"] + tags { + Name = "tf-acc-acl-subnet-change-roll" + } } resource "aws_network_acl" "bar" { - vpc_id = "${aws_vpc.foo.id}" - subnet_ids = ["${aws_subnet.old.id}"] - tags { - Name = "tf-acc-acl-subnet-change-bar" - } + vpc_id = "${aws_vpc.foo.id}" + subnet_ids = ["${aws_subnet.old.id}"] + tags { + Name = "tf-acc-acl-subnet-change-bar" + } } ` const testAccAWSNetworkAclSubnetConfigChange = ` resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-subnet-change" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-subnet-change" + } } resource "aws_subnet" "old" { - cidr_block = "10.1.111.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-subnet-change-old" - } + cidr_block = "10.1.111.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-subnet-change-old" + } } resource "aws_subnet" "new" { - cidr_block = "10.1.1.0/24" - vpc_id = "${aws_vpc.foo.id}" - map_public_ip_on_launch = true - tags { - Name = "tf-acc-network-acl-subnet-change-new" - } + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + map_public_ip_on_launch = true + tags { + Name = "tf-acc-network-acl-subnet-change-new" + } } resource "aws_network_acl" "bar" { - vpc_id = "${aws_vpc.foo.id}" - subnet_ids = ["${aws_subnet.new.id}"] - tags { - Name = "tf-acc-acl-subnet-change-bar" - } + vpc_id = "${aws_vpc.foo.id}" + subnet_ids = ["${aws_subnet.new.id}"] + tags { + Name = "tf-acc-acl-subnet-change-bar" + } } ` const testAccAWSNetworkAclSubnet_SubnetIds = ` resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-subnet-ids" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-subnet-ids" + } } resource "aws_subnet" "one" { - cidr_block = "10.1.111.0/24" - vpc_id = "${aws_vpc.foo.id}" - tags { - Name = "tf-acc-network-acl-subnet-ids-one" - } + cidr_block = "10.1.111.0/24" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-network-acl-subnet-ids-one" + } } resource "aws_subnet" "two" { - cidr_block = "10.1.1.0/24" - vpc_id = "${aws_vpc.foo.id}" - tags { - Name = "tf-acc-network-acl-subnet-ids-two" - } + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-network-acl-subnet-ids-two" + } } resource "aws_network_acl" "bar" { - vpc_id = "${aws_vpc.foo.id}" - subnet_ids = ["${aws_subnet.one.id}", "${aws_subnet.two.id}"] - tags { - Name = "tf-acc-acl-subnet-ids" - } + vpc_id = "${aws_vpc.foo.id}" + subnet_ids = ["${aws_subnet.one.id}", "${aws_subnet.two.id}"] + tags { + Name = "tf-acc-acl-subnet-ids" + } } ` const testAccAWSNetworkAclSubnet_SubnetIdsUpdate = ` resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-subnet-ids" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-subnet-ids" + } } resource "aws_subnet" "one" { - cidr_block = "10.1.111.0/24" - vpc_id = "${aws_vpc.foo.id}" - tags { - Name = "tf-acc-network-acl-subnet-ids-one" - } + cidr_block = "10.1.111.0/24" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-network-acl-subnet-ids-one" + } } resource "aws_subnet" 
"two" { - cidr_block = "10.1.1.0/24" - vpc_id = "${aws_vpc.foo.id}" - tags { - Name = "tf-acc-network-acl-subnet-ids-two" - } + cidr_block = "10.1.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-network-acl-subnet-ids-two" + } } resource "aws_subnet" "three" { - cidr_block = "10.1.222.0/24" - vpc_id = "${aws_vpc.foo.id}" - tags { - Name = "tf-acc-network-acl-subnet-ids-three" - } + cidr_block = "10.1.222.0/24" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-network-acl-subnet-ids-three" + } } resource "aws_subnet" "four" { - cidr_block = "10.1.4.0/24" - vpc_id = "${aws_vpc.foo.id}" - tags { - Name = "tf-acc-network-acl-subnet-ids-four" - } + cidr_block = "10.1.4.0/24" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-network-acl-subnet-ids-four" + } } resource "aws_network_acl" "bar" { - vpc_id = "${aws_vpc.foo.id}" - subnet_ids = [ - "${aws_subnet.one.id}", - "${aws_subnet.three.id}", - "${aws_subnet.four.id}", - ] - tags { - Name = "tf-acc-acl-subnet-ids" - } + vpc_id = "${aws_vpc.foo.id}" + subnet_ids = [ + "${aws_subnet.one.id}", + "${aws_subnet.three.id}", + "${aws_subnet.four.id}", + ] + tags { + Name = "tf-acc-acl-subnet-ids" + } +} +` + +const testAccAWSNetworkAclSubnet_SubnetIdsDeleteOne = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-network-acl-subnet-ids" + } +} + +resource "aws_subnet" "one" { + cidr_block = "10.1.111.0/24" + vpc_id = "${aws_vpc.foo.id}" + tags { + Name = "tf-acc-network-acl-subnet-ids-one" + } +} + +resource "aws_network_acl" "bar" { + vpc_id = "${aws_vpc.foo.id}" + subnet_ids = ["${aws_subnet.one.id}"] + tags { + Name = "tf-acc-acl-subnet-ids" + } } ` const testAccAWSNetworkAclEsp = ` resource "aws_vpc" "testespvpc" { cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-network-acl-esp" - } + tags { + Name = "terraform-testacc-network-acl-esp" + } } resource "aws_network_acl" "testesp" { diff --git a/aws/resource_aws_network_interface.go b/aws/resource_aws_network_interface.go index ae60963cc58..92cf2769dd8 100644 --- a/aws/resource_aws_network_interface.go +++ b/aws/resource_aws_network_interface.go @@ -28,24 +28,24 @@ func resourceAwsNetworkInterface() *schema.Resource { Schema: map[string]*schema.Schema{ - "subnet_id": &schema.Schema{ + "subnet_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "private_ip": &schema.Schema{ + "private_ip": { Type: schema.TypeString, Optional: true, Computed: true, }, - "private_dns_name": &schema.Schema{ + "private_dns_name": { Type: schema.TypeString, Computed: true, }, - "private_ips": &schema.Schema{ + "private_ips": { Type: schema.TypeSet, Optional: true, Computed: true, @@ -53,13 +53,13 @@ func resourceAwsNetworkInterface() *schema.Resource { Set: schema.HashString, }, - "private_ips_count": &schema.Schema{ + "private_ips_count": { Type: schema.TypeInt, Optional: true, Computed: true, }, - "security_groups": &schema.Schema{ + "security_groups": { Type: schema.TypeSet, Optional: true, Computed: true, @@ -67,32 +67,32 @@ func resourceAwsNetworkInterface() *schema.Resource { Set: schema.HashString, }, - "source_dest_check": &schema.Schema{ + "source_dest_check": { Type: schema.TypeBool, Optional: true, Default: true, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, }, - "attachment": &schema.Schema{ + "attachment": { Type: schema.TypeSet, Optional: true, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "instance": &schema.Schema{ + 
"instance": { Type: schema.TypeString, Required: true, }, - "device_index": &schema.Schema{ + "device_index": { Type: schema.TypeInt, Required: true, }, - "attachment_id": &schema.Schema{ + "attachment_id": { Type: schema.TypeString, Computed: true, }, diff --git a/aws/resource_aws_network_interface_attachment_test.go b/aws/resource_aws_network_interface_attachment_test.go index 7d5d7f73d9a..c207987bf25 100644 --- a/aws/resource_aws_network_interface_attachment_test.go +++ b/aws/resource_aws_network_interface_attachment_test.go @@ -13,7 +13,7 @@ func TestAccAWSNetworkInterfaceAttachment_basic(t *testing.T) { var conf ec2.NetworkInterface rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_interface.bar", Providers: testAccProviders, diff --git a/aws/resource_aws_network_interface_sg_attachment.go b/aws/resource_aws_network_interface_sg_attachment.go index 29f09aedbcf..1ac6476975e 100644 --- a/aws/resource_aws_network_interface_sg_attachment.go +++ b/aws/resource_aws_network_interface_sg_attachment.go @@ -81,6 +81,13 @@ func resourceAwsNetworkInterfaceSGAttachmentRead(d *schema.ResourceData, meta in conn := meta.(*AWSClient).ec2conn iface, err := fetchNetworkInterface(conn, interfaceID) + + if isAWSErr(err, "InvalidNetworkInterfaceID.NotFound", "") { + log.Printf("[WARN] EC2 Network Interface (%s) not found, removing from state", interfaceID) + d.SetId("") + return nil + } + if err != nil { return err } @@ -108,16 +115,16 @@ func resourceAwsNetworkInterfaceSGAttachmentDelete(d *schema.ResourceData, meta conn := meta.(*AWSClient).ec2conn iface, err := fetchNetworkInterface(conn, interfaceID) - if err != nil { - return err + + if isAWSErr(err, "InvalidNetworkInterfaceID.NotFound", "") { + return nil } - if err := delSGFromENI(conn, sgID, iface); err != nil { + if err != nil { return err } - d.SetId("") - return nil + return delSGFromENI(conn, sgID, iface) } // fetchNetworkInterface is a utility function used by Read and Delete to fetch @@ -155,6 +162,11 @@ func delSGFromENI(conn *ec2.EC2, sgID string, iface *ec2.NetworkInterface) error } _, err := conn.ModifyNetworkInterfaceAttribute(params) + + if isAWSErr(err, "InvalidNetworkInterfaceID.NotFound", "") { + return nil + } + return err } diff --git a/aws/resource_aws_network_interface_sg_attachment_test.go b/aws/resource_aws_network_interface_sg_attachment_test.go index eba1affe201..97c353370f0 100644 --- a/aws/resource_aws_network_interface_sg_attachment_test.go +++ b/aws/resource_aws_network_interface_sg_attachment_test.go @@ -2,76 +2,269 @@ package aws import ( "fmt" - "strconv" "testing" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) -func TestAccAWSNetworkInterfaceSGAttachment(t *testing.T) { - cases := []struct { - Name string - ResourceAttr string - Config func(bool) string - }{ - { - Name: "instance primary interface", - ResourceAttr: "primary_network_interface_id", - Config: testAccAwsNetworkInterfaceSGAttachmentConfigViaInstance, +func TestAccAWSNetworkInterfaceSGAttachment_basic(t *testing.T) { + var networkInterface ec2.NetworkInterface + networkInterfaceResourceName := "aws_network_interface.test" + securityGroupResourceName := "aws_security_group.test" + resourceName := "aws_network_interface_sg_attachment.test" + rName := 
acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkInterfaceSGAttachmentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsNetworkInterfaceSGAttachmentConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName, &networkInterface), + resource.TestCheckResourceAttrPair(resourceName, "network_interface_id", networkInterfaceResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName, "security_group_id", securityGroupResourceName, "id"), + ), + }, }, - { - Name: "externally supplied instance through data source", - ResourceAttr: "network_interface_id", - Config: testAccAwsNetworkInterfaceSGAttachmentConfigViaDataSource, + }) +} + +func TestAccAWSNetworkInterfaceSGAttachment_disappears(t *testing.T) { + var networkInterface ec2.NetworkInterface + resourceName := "aws_network_interface_sg_attachment.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkInterfaceSGAttachmentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsNetworkInterfaceSGAttachmentConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName, &networkInterface), + testAccCheckAwsNetworkInterfaceSGAttachmentDisappears(resourceName, &networkInterface), + ), + ExpectNonEmptyPlan: true, + }, + { + Config: testAccAwsNetworkInterfaceSGAttachmentConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName, &networkInterface), + testAccCheckAWSENIDisappears(&networkInterface), + ), + ExpectNonEmptyPlan: true, + }, }, - } - for _, tc := range cases { - t.Run(tc.Name, func(t *testing.T) { - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: tc.Config(true), - Check: checkSecurityGroupAttached(tc.ResourceAttr, true), - }, - resource.TestStep{ - Config: tc.Config(false), - Check: checkSecurityGroupAttached(tc.ResourceAttr, false), - }, - }, - }) - }) - } + }) +} + +func TestAccAWSNetworkInterfaceSGAttachment_Instance(t *testing.T) { + var networkInterface ec2.NetworkInterface + instanceResourceName := "aws_instance.test" + securityGroupResourceName := "aws_security_group.test" + resourceName := "aws_network_interface_sg_attachment.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkInterfaceSGAttachmentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsNetworkInterfaceSGAttachmentConfigViaInstance(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName, &networkInterface), + resource.TestCheckResourceAttrPair(resourceName, "network_interface_id", instanceResourceName, "primary_network_interface_id"), + resource.TestCheckResourceAttrPair(resourceName, "security_group_id", securityGroupResourceName, "id"), + ), + }, + }, + }) +} + +func TestAccAWSNetworkInterfaceSGAttachment_DataSource(t *testing.T) { + var networkInterface ec2.NetworkInterface + instanceDataSourceName := 
"data.aws_instance.test" + securityGroupResourceName := "aws_security_group.test" + resourceName := "aws_network_interface_sg_attachment.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkInterfaceSGAttachmentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsNetworkInterfaceSGAttachmentConfigViaDataSource(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName, &networkInterface), + resource.TestCheckResourceAttrPair(resourceName, "network_interface_id", instanceDataSourceName, "network_interface_id"), + resource.TestCheckResourceAttrPair(resourceName, "security_group_id", securityGroupResourceName, "id"), + ), + }, + }, + }) +} + +func TestAccAWSNetworkInterfaceSGAttachment_Multiple(t *testing.T) { + var networkInterface ec2.NetworkInterface + networkInterfaceResourceName := "aws_network_interface.test" + securityGroupResourceName1 := "aws_security_group.test.0" + securityGroupResourceName2 := "aws_security_group.test.1" + securityGroupResourceName3 := "aws_security_group.test.2" + securityGroupResourceName4 := "aws_security_group.test.3" + resourceName1 := "aws_network_interface_sg_attachment.test.0" + resourceName2 := "aws_network_interface_sg_attachment.test.1" + resourceName3 := "aws_network_interface_sg_attachment.test.2" + resourceName4 := "aws_network_interface_sg_attachment.test.3" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSNetworkInterfaceSGAttachmentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsNetworkInterfaceSGAttachmentConfigMultiple(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName1, &networkInterface), + resource.TestCheckResourceAttrPair(resourceName1, "network_interface_id", networkInterfaceResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName1, "security_group_id", securityGroupResourceName1, "id"), + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName2, &networkInterface), + resource.TestCheckResourceAttrPair(resourceName2, "network_interface_id", networkInterfaceResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName2, "security_group_id", securityGroupResourceName2, "id"), + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName3, &networkInterface), + resource.TestCheckResourceAttrPair(resourceName3, "network_interface_id", networkInterfaceResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName3, "security_group_id", securityGroupResourceName3, "id"), + testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName4, &networkInterface), + resource.TestCheckResourceAttrPair(resourceName4, "network_interface_id", networkInterfaceResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName4, "security_group_id", securityGroupResourceName4, "id"), + ), + }, + }, + }) } -func checkSecurityGroupAttached(attr string, expected bool) resource.TestCheckFunc { +func testAccCheckAWSNetworkInterfaceSGAttachmentExists(resourceName string, networkInterface *ec2.NetworkInterface) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).ec2conn + rs, ok := 
s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } - interfaceID := s.Modules[0].Resources["aws_instance.instance"].Primary.Attributes[attr] - sgID := s.Modules[0].Resources["aws_security_group.sg"].Primary.ID + if rs.Primary.ID == "" { + return fmt.Errorf("No resource ID set: %s", resourceName) + } - iface, err := fetchNetworkInterface(conn, interfaceID) + conn := testAccProvider.Meta().(*AWSClient).ec2conn + networkInterfaceID := rs.Primary.Attributes["network_interface_id"] + securityGroupID := rs.Primary.Attributes["security_group_id"] + + iface, err := fetchNetworkInterface(conn, networkInterfaceID) if err != nil { - return err + return fmt.Errorf("ENI (%s) not found: %s", networkInterfaceID, err) } - actual := sgExistsInENI(sgID, iface) - if expected != actual { - return fmt.Errorf("expected existence of security group in ENI to be %t, got %t", expected, actual) + + if !sgExistsInENI(securityGroupID, iface) { + return fmt.Errorf("Security Group ID (%s) not attached to ENI (%s)", securityGroupID, networkInterfaceID) } + + *networkInterface = *iface + return nil } } -func testAccAwsNetworkInterfaceSGAttachmentConfigViaInstance(attachmentEnabled bool) string { +func testAccCheckAWSNetworkInterfaceSGAttachmentDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_network_interface_sg_attachment" { + continue + } + + networkInterfaceID := rs.Primary.Attributes["network_interface_id"] + securityGroupID := rs.Primary.Attributes["security_group_id"] + + networkInterface, err := fetchNetworkInterface(conn, networkInterfaceID) + + if isAWSErr(err, "InvalidNetworkInterfaceID.NotFound", "") { + continue + } + + if err != nil { + return fmt.Errorf("ENI (%s) not found: %s", networkInterfaceID, err) + } + + if sgExistsInENI(securityGroupID, networkInterface) { + return fmt.Errorf("Security Group ID (%s) still attached to ENI (%s)", securityGroupID, networkInterfaceID) + } + } + + return nil +} + +func testAccCheckAwsNetworkInterfaceSGAttachmentDisappears(resourceName string, networkInterface *ec2.NetworkInterface) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + securityGroupID := rs.Primary.Attributes["security_group_id"] + + return delSGFromENI(conn, securityGroupID, networkInterface) + } +} + +func testAccAwsNetworkInterfaceSGAttachmentConfig(rName string) string { return fmt.Sprintf(` -variable "sg_attachment_enabled" { - type = "string" - default = "%t" +resource "aws_vpc" "test" { + cidr_block = "172.16.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_subnet" "test" { + cidr_block = "172.16.10.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_security_group" "test" { + name = %q + vpc_id = "${aws_vpc.test.id}" } +resource "aws_network_interface" "test" { + subnet_id = "${aws_subnet.test.id}" + + tags { + Name = %q + } +} + +resource "aws_network_interface_sg_attachment" "test" { + network_interface_id = "${aws_network_interface.test.id}" + security_group_id = "${aws_security_group.test.id}" +} +`, rName, rName, rName, rName) +} + +func testAccAwsNetworkInterfaceSGAttachmentConfigViaInstance(rName string) string { + return fmt.Sprintf(` data "aws_ami" "ami" { most_recent = true @@ -83,36 +276,28 @@ data 
"aws_ami" "ami" { owners = ["amazon"] } -resource "aws_instance" "instance" { +resource "aws_instance" "test" { instance_type = "t2.micro" ami = "${data.aws_ami.ami.id}" tags = { - "type" = "terraform-test-instance" + Name = %q } } -resource "aws_security_group" "sg" { - tags = { - "type" = "terraform-test-security-group" - } +resource "aws_security_group" "test" { + name = %q } -resource "aws_network_interface_sg_attachment" "sg_attachment" { - count = "${var.sg_attachment_enabled == "true" ? 1 : 0}" - security_group_id = "${aws_security_group.sg.id}" - network_interface_id = "${aws_instance.instance.primary_network_interface_id}" +resource "aws_network_interface_sg_attachment" "test" { + network_interface_id = "${aws_instance.test.primary_network_interface_id}" + security_group_id = "${aws_security_group.test.id}" } -`, attachmentEnabled) +`, rName, rName) } -func testAccAwsNetworkInterfaceSGAttachmentConfigViaDataSource(attachmentEnabled bool) string { +func testAccAwsNetworkInterfaceSGAttachmentConfigViaDataSource(rName string) string { return fmt.Sprintf(` -variable "sg_attachment_enabled" { - type = "string" - default = "%t" -} - data "aws_ami" "ami" { most_recent = true @@ -124,105 +309,56 @@ data "aws_ami" "ami" { owners = ["amazon"] } -resource "aws_instance" "instance" { +resource "aws_instance" "test" { instance_type = "t2.micro" ami = "${data.aws_ami.ami.id}" tags = { - "type" = "terraform-test-instance" + Name = %q } } -data "aws_instance" "external_instance" { - instance_id = "${aws_instance.instance.id}" +data "aws_instance" "test" { + instance_id = "${aws_instance.test.id}" } -resource "aws_security_group" "sg" { - tags = { - "type" = "terraform-test-security-group" - } +resource "aws_security_group" "test" { + name = %q } -resource "aws_network_interface_sg_attachment" "sg_attachment" { - count = "${var.sg_attachment_enabled == "true" ? 1 : 0}" - security_group_id = "${aws_security_group.sg.id}" - network_interface_id = "${data.aws_instance.external_instance.network_interface_id}" +resource "aws_network_interface_sg_attachment" "test" { + security_group_id = "${aws_security_group.test.id}" + network_interface_id = "${data.aws_instance.test.network_interface_id}" } -`, attachmentEnabled) +`, rName, rName) } -func TestAccAWSNetworkInterfaceSGAttachmentRaceCheck(t *testing.T) { - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAwsNetworkInterfaceSGAttachmentRaceCheckConfig(), - Check: checkSecurityGroupAttachmentRace(), - }, - }, - }) -} - -// sgRaceCheckCount specifies the amount of security groups to create in the -// race check. This should be the maximum amount of security groups that can be -// attached to an interface at once, minus the default (we don't remove it in -// the config). 
-const sgRaceCheckCount = 4 - -func checkSecurityGroupAttachmentRace() resource.TestCheckFunc { - return func(s *terraform.State) error { - conn := testAccProvider.Meta().(*AWSClient).ec2conn - - interfaceID := s.Modules[0].Resources["aws_network_interface.interface"].Primary.ID - for i := 0; i < sgRaceCheckCount; i++ { - sgID := s.Modules[0].Resources["aws_security_group.sg."+strconv.Itoa(i)].Primary.ID - iface, err := fetchNetworkInterface(conn, interfaceID) - if err != nil { - return err - } - if !sgExistsInENI(sgID, iface) { - return fmt.Errorf("security group ID %s was not attached to ENI ID %s", sgID, interfaceID) - } - } - return nil - } -} - -func testAccAwsNetworkInterfaceSGAttachmentRaceCheckConfig() string { +func testAccAwsNetworkInterfaceSGAttachmentConfigMultiple(rName string) string { return fmt.Sprintf(` -variable "security_group_count" { - type = "string" - default = "%d" -} - data "aws_availability_zones" "available" {} -data "aws_subnet" "subnet" { +data "aws_subnet" "test" { availability_zone = "${data.aws_availability_zones.available.names[0]}" default_for_az = "true" } -resource "aws_network_interface" "interface" { - subnet_id = "${data.aws_subnet.subnet.id}" +resource "aws_network_interface" "test" { + subnet_id = "${data.aws_subnet.test.id}" tags = { - "type" = "terraform-test-network-interface" + Name = %q } } -resource "aws_security_group" "sg" { - count = "${var.security_group_count}" - - tags = { - "type" = "terraform-test-security-group" - } +resource "aws_security_group" "test" { + count = 4 + name = "%s-${count.index}" } -resource "aws_network_interface_sg_attachment" "sg_attachment" { - count = "${var.security_group_count}" - security_group_id = "${aws_security_group.sg.*.id[count.index]}" - network_interface_id = "${aws_network_interface.interface.id}" +resource "aws_network_interface_sg_attachment" "test" { + count = 4 + network_interface_id = "${aws_network_interface.test.id}" + security_group_id = "${aws_security_group.test.*.id[count.index]}" } -`, sgRaceCheckCount) +`, rName, rName) } diff --git a/aws/resource_aws_network_interface_test.go b/aws/resource_aws_network_interface_test.go index df743af61a7..c0b29c4cadc 100644 --- a/aws/resource_aws_network_interface_test.go +++ b/aws/resource_aws_network_interface_test.go @@ -11,16 +11,37 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSENI_importBasic(t *testing.T) { + resourceName := "aws_network_interface.bar" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSENIDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSENIConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSENI_basic(t *testing.T) { var conf ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_interface.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSENIDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSENIConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSENIExists("aws_network_interface.bar", &conf), @@ -39,16 +60,37 @@ func TestAccAWSENI_basic(t *testing.T) { }) } +func TestAccAWSENI_disappears(t *testing.T) { + var networkInterface ec2.NetworkInterface + resourceName := "aws_network_interface.bar" + + resource.ParallelTest(t, resource.TestCase{ 
+ PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSENIDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSENIConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSENIExists(resourceName, &networkInterface), + testAccCheckAWSENIDisappears(&networkInterface), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + func TestAccAWSENI_updatedDescription(t *testing.T) { var conf ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_interface.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSENIDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSENIConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSENIExists("aws_network_interface.bar", &conf), @@ -57,7 +99,7 @@ func TestAccAWSENI_updatedDescription(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSENIConfigUpdatedDescription, Check: resource.ComposeTestCheckFunc( testAccCheckAWSENIExists("aws_network_interface.bar", &conf), @@ -72,13 +114,13 @@ func TestAccAWSENI_updatedDescription(t *testing.T) { func TestAccAWSENI_attached(t *testing.T) { var conf ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_interface.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSENIDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSENIConfigWithAttachment, Check: resource.ComposeTestCheckFunc( testAccCheckAWSENIExists("aws_network_interface.bar", &conf), @@ -96,13 +138,13 @@ func TestAccAWSENI_attached(t *testing.T) { func TestAccAWSENI_ignoreExternalAttachment(t *testing.T) { var conf ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_interface.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSENIDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSENIConfigExternalAttachment, Check: resource.ComposeTestCheckFunc( testAccCheckAWSENIExists("aws_network_interface.bar", &conf), @@ -117,13 +159,13 @@ func TestAccAWSENI_ignoreExternalAttachment(t *testing.T) { func TestAccAWSENI_sourceDestCheck(t *testing.T) { var conf ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_interface.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSENIDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSENIConfigWithSourceDestCheck, Check: resource.ComposeTestCheckFunc( testAccCheckAWSENIExists("aws_network_interface.bar", &conf), @@ -138,13 +180,13 @@ func TestAccAWSENI_sourceDestCheck(t *testing.T) { func TestAccAWSENI_computedIPs(t *testing.T) { var conf ec2.NetworkInterface - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_network_interface.bar", Providers: testAccProviders, CheckDestroy: testAccCheckAWSENIDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSENIConfigWithNoPrivateIPs, Check: resource.ComposeTestCheckFunc( testAccCheckAWSENIExists("aws_network_interface.bar", &conf), @@ -278,6 +320,19 @@ func 
testAccCheckAWSENIDestroy(s *terraform.State) error { return nil } +func testAccCheckAWSENIDisappears(networkInterface *ec2.NetworkInterface) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + + input := &ec2.DeleteNetworkInterfaceInput{ + NetworkInterfaceId: networkInterface.NetworkInterfaceId, + } + _, err := conn.DeleteNetworkInterface(input) + + return err + } +} + func testAccCheckAWSENIMakeExternalAttachment(n string, conf *ec2.NetworkInterface) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] diff --git a/aws/resource_aws_opsworks_application.go b/aws/resource_aws_opsworks_application.go index 939d2fef491..e0ef1a26524 100644 --- a/aws/resource_aws_opsworks_application.go +++ b/aws/resource_aws_opsworks_application.go @@ -30,6 +30,7 @@ func resourceAwsOpsworksApplication() *schema.Resource { Type: schema.TypeString, Computed: true, Optional: true, + ForceNew: true, }, "type": { Type: schema.TypeString, @@ -415,7 +416,7 @@ func resourceAwsOpsworksSetApplicationEnvironmentVariable(d *schema.ResourceData } if config.Secure != nil { - if bool(*config.Secure) { + if aws.BoolValue(config.Secure) { data["secure"] = &opsworksTrueString } else { data["secure"] = &opsworksFalseString diff --git a/aws/resource_aws_opsworks_application_test.go b/aws/resource_aws_opsworks_application_test.go index 37d2df0b32c..e23f73c9821 100644 --- a/aws/resource_aws_opsworks_application_test.go +++ b/aws/resource_aws_opsworks_application_test.go @@ -19,7 +19,7 @@ func TestAccAWSOpsworksApplication(t *testing.T) { rInt := acctest.RandInt() name := fmt.Sprintf("tf-ops-acc-application-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksApplicationDestroy, diff --git a/aws/resource_aws_opsworks_custom_layer_test.go b/aws/resource_aws_opsworks_custom_layer_test.go index e8d0c55d5ce..bec52b57cbf 100644 --- a/aws/resource_aws_opsworks_custom_layer_test.go +++ b/aws/resource_aws_opsworks_custom_layer_test.go @@ -16,10 +16,33 @@ import ( // These tests assume the existence of predefined Opsworks IAM roles named `aws-opsworks-ec2-role` // and `aws-opsworks-service-role`. 
+func TestAccAWSOpsworksCustomLayer_importBasic(t *testing.T) { + name := acctest.RandString(10) + + resourceName := "aws_opsworks_custom_layer.tf-acc" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksCustomLayerDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsOpsworksCustomLayerConfigVpcCreate(name), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSOpsworksCustomLayer_basic(t *testing.T) { stackName := fmt.Sprintf("tf-%d", acctest.RandInt()) var opslayer opsworks.Layer - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksCustomLayerDestroy, diff --git a/aws/resource_aws_opsworks_instance.go b/aws/resource_aws_opsworks_instance.go index ea0fcdb86de..ccbddadce14 100644 --- a/aws/resource_aws_opsworks_instance.go +++ b/aws/resource_aws_opsworks_instance.go @@ -99,7 +99,6 @@ func resourceAwsOpsworksInstance() *schema.Resource { "ec2_instance_id": { Type: schema.TypeString, - Optional: true, Computed: true, }, @@ -532,12 +531,12 @@ func resourceAwsOpsworksInstanceRead(d *schema.ResourceData, meta interface{}) e for _, v := range instance.LayerIds { layerIds = append(layerIds, *v) } - layerIds, err = sortListBasedonTFFile(layerIds, d, "layer_ids") + layerIds, err = sortListBasedonTFFile(layerIds, d) if err != nil { - return fmt.Errorf("[DEBUG] Error sorting layer_ids attribute: %#v", err) + return fmt.Errorf("Error sorting layer_ids attribute: %#v", err) } if err := d.Set("layer_ids", layerIds); err != nil { - return fmt.Errorf("[DEBUG] Error setting layer_ids attribute: %#v, error: %#v", layerIds, err) + return fmt.Errorf("Error setting layer_ids attribute: %#v, error: %#v", layerIds, err) } d.Set("os", instance.Os) d.Set("platform", instance.Platform) @@ -562,7 +561,7 @@ func resourceAwsOpsworksInstanceRead(d *schema.ResourceData, meta interface{}) e d.Set("virtualization_type", instance.VirtualizationType) // Read BlockDeviceMapping - ibds, err := readOpsworksBlockDevices(d, instance, meta) + ibds, err := readOpsworksBlockDevices(instance) if err != nil { return err } @@ -826,7 +825,7 @@ func resourceAwsOpsworksInstanceUpdate(d *schema.ResourceData, meta interface{}) } } else { if status != "stopped" && status != "stopping" && status != "shutting_down" { - err := stopOpsworksInstance(d, meta, true, d.Timeout(schema.TimeoutUpdate)) + err := stopOpsworksInstance(d, meta, d.Timeout(schema.TimeoutUpdate)) if err != nil { return err } @@ -841,7 +840,7 @@ func resourceAwsOpsworksInstanceDelete(d *schema.ResourceData, meta interface{}) client := meta.(*AWSClient).opsworksconn if v, ok := d.GetOk("status"); ok && v.(string) != "stopped" { - err := stopOpsworksInstance(d, meta, true, d.Timeout(schema.TimeoutDelete)) + err := stopOpsworksInstance(d, meta, d.Timeout(schema.TimeoutDelete)) if err != nil { return err } @@ -860,7 +859,6 @@ func resourceAwsOpsworksInstanceDelete(d *schema.ResourceData, meta interface{}) return err } - d.SetId("") return nil } @@ -912,7 +910,7 @@ func startOpsworksInstance(d *schema.ResourceData, meta interface{}, wait bool, return nil } -func stopOpsworksInstance(d *schema.ResourceData, meta interface{}, wait bool, timeout time.Duration) error { +func stopOpsworksInstance(d *schema.ResourceData, meta interface{}, timeout time.Duration) 
error { client := meta.(*AWSClient).opsworksconn instanceId := d.Id() @@ -929,29 +927,26 @@ func stopOpsworksInstance(d *schema.ResourceData, meta interface{}, wait bool, t return err } - if wait { - log.Printf("[DEBUG] Waiting for instance (%s) to become stopped", instanceId) + log.Printf("[DEBUG] Waiting for instance (%s) to become stopped", instanceId) - stateConf := &resource.StateChangeConf{ - Pending: []string{"stopping", "terminating", "shutting_down", "terminated"}, - Target: []string{"stopped"}, - Refresh: OpsworksInstanceStateRefreshFunc(client, instanceId), - Timeout: timeout, - Delay: 10 * time.Second, - MinTimeout: 3 * time.Second, - } - _, err = stateConf.WaitForState() - if err != nil { - return fmt.Errorf("Error waiting for instance (%s) to become stopped: %s", - instanceId, err) - } + stateConf := &resource.StateChangeConf{ + Pending: []string{"stopping", "terminating", "shutting_down", "terminated"}, + Target: []string{"stopped"}, + Refresh: OpsworksInstanceStateRefreshFunc(client, instanceId), + Timeout: timeout, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("Error waiting for instance (%s) to become stopped: %s", + instanceId, err) } return nil } -func readOpsworksBlockDevices(d *schema.ResourceData, instance *opsworks.Instance, meta interface{}) ( - map[string]interface{}, error) { +func readOpsworksBlockDevices(instance *opsworks.Instance) (map[string]interface{}, error) { blockDevices := make(map[string]interface{}) blockDevices["ebs"] = make([]map[string]interface{}, 0) diff --git a/aws/resource_aws_opsworks_instance_test.go b/aws/resource_aws_opsworks_instance_test.go index 8b7b0d2866b..315724c7fae 100644 --- a/aws/resource_aws_opsworks_instance_test.go +++ b/aws/resource_aws_opsworks_instance_test.go @@ -16,7 +16,7 @@ func TestAccAWSOpsworksInstance_importBasic(t *testing.T) { stackName := fmt.Sprintf("tf-%d", acctest.RandInt()) resourceName := "aws_opsworks_instance.tf-acc" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksInstanceDestroy, @@ -38,7 +38,7 @@ func TestAccAWSOpsworksInstance_importBasic(t *testing.T) { func TestAccAWSOpsworksInstance(t *testing.T) { stackName := fmt.Sprintf("tf-%d", acctest.RandInt()) var opsinst opsworks.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksInstanceDestroy, @@ -112,7 +112,7 @@ func TestAccAWSOpsworksInstance_UpdateHostNameForceNew(t *testing.T) { stackName := fmt.Sprintf("tf-%d", acctest.RandInt()) var before, after opsworks.Instance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksInstanceDestroy, diff --git a/aws/resource_aws_opsworks_permission_test.go b/aws/resource_aws_opsworks_permission_test.go index 9ff9c7e6e25..cf239f538c0 100644 --- a/aws/resource_aws_opsworks_permission_test.go +++ b/aws/resource_aws_opsworks_permission_test.go @@ -15,7 +15,7 @@ import ( func TestAccAWSOpsworksPermission(t *testing.T) { sName := fmt.Sprintf("tf-ops-perm-%d", acctest.RandInt()) var opsperm opsworks.Permission - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { 
testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksPermissionDestroy, diff --git a/aws/resource_aws_opsworks_rails_app_layer_test.go b/aws/resource_aws_opsworks_rails_app_layer_test.go index 710d88312af..d2d169d8a22 100644 --- a/aws/resource_aws_opsworks_rails_app_layer_test.go +++ b/aws/resource_aws_opsworks_rails_app_layer_test.go @@ -17,7 +17,7 @@ import ( func TestAccAWSOpsworksRailsAppLayer(t *testing.T) { stackName := fmt.Sprintf("tf-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksRailsAppLayerDestroy, diff --git a/aws/resource_aws_opsworks_rds_db_instance.go b/aws/resource_aws_opsworks_rds_db_instance.go index ef0d61f70fe..352dc082441 100644 --- a/aws/resource_aws_opsworks_rds_db_instance.go +++ b/aws/resource_aws_opsworks_rds_db_instance.go @@ -135,7 +135,7 @@ func resourceAwsOpsworksRdsDbInstanceRead(d *schema.ResourceData, meta interface StackId: aws.String(d.Get("stack_id").(string)), } - log.Printf("[DEBUG] Reading OpsWorks registerd rds db instances for stack: %s", d.Get("stack_id")) + log.Printf("[DEBUG] Reading OpsWorks registered rds db instances for stack: %s", d.Get("stack_id")) resp, err := client.DescribeRdsDbInstances(req) if err != nil { diff --git a/aws/resource_aws_opsworks_rds_db_instance_test.go b/aws/resource_aws_opsworks_rds_db_instance_test.go index 1b6be820474..ff2ac0fdc65 100644 --- a/aws/resource_aws_opsworks_rds_db_instance_test.go +++ b/aws/resource_aws_opsworks_rds_db_instance_test.go @@ -15,7 +15,7 @@ import ( func TestAccAWSOpsworksRdsDbInstance(t *testing.T) { sName := fmt.Sprintf("test-db-instance-%d", acctest.RandInt()) var opsdb opsworks.RdsDbInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksRdsDbDestroy, diff --git a/aws/resource_aws_opsworks_stack.go b/aws/resource_aws_opsworks_stack.go index a173da10a79..dbb0887be91 100644 --- a/aws/resource_aws_opsworks_stack.go +++ b/aws/resource_aws_opsworks_stack.go @@ -7,7 +7,6 @@ import ( "strings" "time" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -385,7 +384,7 @@ func opsworksConnForRegion(region string, meta interface{}) (*opsworks.OpsWorks, // Set up base session sess, err := session.NewSession(&originalConn.Config) if err != nil { - return nil, errwrap.Wrapf("Error creating AWS session: {{err}}", err) + return nil, fmt.Errorf("Error creating AWS session: %s", err) } sess.Handlers.Build.PushBackNamed(addTerraformVersionToUserAgent) diff --git a/aws/resource_aws_opsworks_stack_test.go b/aws/resource_aws_opsworks_stack_test.go index 2c428b7eb38..3d17ec366c8 100644 --- a/aws/resource_aws_opsworks_stack_test.go +++ b/aws/resource_aws_opsworks_stack_test.go @@ -13,6 +13,29 @@ import ( "github.com/aws/aws-sdk-go/service/opsworks" ) +func TestAccAWSOpsworksStackImportBasic(t *testing.T) { + name := acctest.RandString(10) + + resourceName := "aws_opsworks_stack.tf-acc" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksStackDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsOpsworksStackConfigVpcCreate(name), + }, + + { + 
ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + /////////////////////////////// //// Tests for the No-VPC case /////////////////////////////// @@ -20,7 +43,7 @@ import ( func TestAccAWSOpsworksStackNoVpc(t *testing.T) { stackName := fmt.Sprintf("tf-opsworks-acc-%d", acctest.RandInt()) var opsstack opsworks.Stack - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksStackDestroy, @@ -43,7 +66,7 @@ func TestAccAWSOpsworksStackNoVpc(t *testing.T) { func TestAccAWSOpsworksStackNoVpcChangeServiceRoleForceNew(t *testing.T) { stackName := fmt.Sprintf("tf-opsworks-acc-%d", acctest.RandInt()) var before, after opsworks.Stack - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksStackDestroy, @@ -70,7 +93,7 @@ func TestAccAWSOpsworksStackNoVpcChangeServiceRoleForceNew(t *testing.T) { func TestAccAWSOpsworksStackVpc(t *testing.T) { stackName := fmt.Sprintf("tf-opsworks-acc-%d", acctest.RandInt()) var opsstack opsworks.Stack - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksStackDestroy, @@ -104,7 +127,7 @@ func TestAccAWSOpsworksStackVpc(t *testing.T) { func TestAccAWSOpsworksStackNoVpcCreateTags(t *testing.T) { stackName := fmt.Sprintf("tf-opsworks-acc-%d", acctest.RandInt()) var opsstack opsworks.Stack - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksStackDestroy, @@ -138,7 +161,7 @@ func TestAccAWSOpsWorksStack_classic_endpoints(t *testing.T) { stackName := fmt.Sprintf("tf-opsworks-acc-%d", acctest.RandInt()) rInt := acctest.RandInt() var opsstack opsworks.Stack - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksStackDestroy, diff --git a/aws/resource_aws_opsworks_user_profile_test.go b/aws/resource_aws_opsworks_user_profile_test.go index d9f719d0b1c..546ec2b9538 100644 --- a/aws/resource_aws_opsworks_user_profile_test.go +++ b/aws/resource_aws_opsworks_user_profile_test.go @@ -15,7 +15,7 @@ import ( func TestAccAWSOpsworksUserProfile(t *testing.T) { rName := fmt.Sprintf("test-user-%d", acctest.RandInt()) updateRName := fmt.Sprintf("test-user-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsOpsworksUserProfileDestroy, diff --git a/aws/resource_aws_organizations_account.go b/aws/resource_aws_organizations_account.go new file mode 100644 index 00000000000..99bd7dae8fc --- /dev/null +++ b/aws/resource_aws_organizations_account.go @@ -0,0 +1,243 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/organizations" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func 
resourceAwsOrganizationsAccount() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsOrganizationsAccountCreate, + Read: resourceAwsOrganizationsAccountRead, + Delete: resourceAwsOrganizationsAccountDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "joined_method": { + Type: schema.TypeString, + Computed: true, + }, + "joined_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + ForceNew: true, + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 50), + }, + "email": { + ForceNew: true, + Type: schema.TypeString, + Required: true, + ValidateFunc: validateAwsOrganizationsAccountEmail, + }, + "iam_user_access_to_billing": { + ForceNew: true, + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{organizations.IAMUserAccessToBillingAllow, organizations.IAMUserAccessToBillingDeny}, true), + }, + "role_name": { + ForceNew: true, + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateAwsOrganizationsAccountRoleName, + }, + }, + } +} + +func resourceAwsOrganizationsAccountCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).organizationsconn + + // Create the account + createOpts := &organizations.CreateAccountInput{ + AccountName: aws.String(d.Get("name").(string)), + Email: aws.String(d.Get("email").(string)), + } + if role, ok := d.GetOk("role_name"); ok { + createOpts.RoleName = aws.String(role.(string)) + } + + if iam_user, ok := d.GetOk("iam_user_access_to_billing"); ok { + createOpts.IamUserAccessToBilling = aws.String(iam_user.(string)) + } + + log.Printf("[DEBUG] Account create config: %#v", createOpts) + + var err error + var resp *organizations.CreateAccountOutput + err = resource.Retry(4*time.Minute, func() *resource.RetryError { + resp, err = conn.CreateAccount(createOpts) + + if err != nil { + if isAWSErr(err, organizations.ErrCodeFinalizingOrganizationException, "") { + log.Printf("[DEBUG] Trying to create account again: %q", err.Error()) + return resource.RetryableError(err) + } + + return resource.NonRetryableError(err) + } + + return nil + }) + + if err != nil { + return fmt.Errorf("Error creating account: %s", err) + } + log.Printf("[DEBUG] Account create response: %#v", resp) + + requestId := *resp.CreateAccountStatus.Id + + // Wait for the account to become available + log.Printf("[DEBUG] Waiting for account request (%s) to succeed", requestId) + + stateConf := &resource.StateChangeConf{ + Pending: []string{organizations.CreateAccountStateInProgress}, + Target: []string{organizations.CreateAccountStateSucceeded}, + Refresh: resourceAwsOrganizationsAccountStateRefreshFunc(conn, requestId), + PollInterval: 10 * time.Second, + Timeout: 5 * time.Minute, + } + stateResp, stateErr := stateConf.WaitForState() + if stateErr != nil { + return fmt.Errorf( + "Error waiting for account request (%s) to become available: %s", + requestId, stateErr) + } + + // Store the ID + accountId := stateResp.(*organizations.CreateAccountStatus).AccountId + d.SetId(*accountId) + + return resourceAwsOrganizationsAccountRead(d, meta) +} + +func resourceAwsOrganizationsAccountRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).organizationsconn + describeOpts := &organizations.DescribeAccountInput{ + 
AccountId: aws.String(d.Id()), + } + resp, err := conn.DescribeAccount(describeOpts) + if err != nil { + if isAWSErr(err, organizations.ErrCodeAccountNotFoundException, "") { + log.Printf("[WARN] Account does not exist, removing from state: %s", d.Id()) + d.SetId("") + return nil + } + return err + } + + account := resp.Account + if account == nil { + log.Printf("[WARN] Account does not exist, removing from state: %s", d.Id()) + d.SetId("") + return nil + } + + d.Set("arn", account.Arn) + d.Set("email", account.Email) + d.Set("joined_method", account.JoinedMethod) + d.Set("joined_timestamp", account.JoinedTimestamp) + d.Set("name", account.Name) + d.Set("status", account.Status) + return nil +} + +func resourceAwsOrganizationsAccountDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).organizationsconn + + input := &organizations.RemoveAccountFromOrganizationInput{ + AccountId: aws.String(d.Id()), + } + log.Printf("[DEBUG] Removing AWS account from organization: %s", input) + _, err := conn.RemoveAccountFromOrganization(input) + if err != nil { + if isAWSErr(err, organizations.ErrCodeAccountNotFoundException, "") { + return nil + } + return err + } + return nil +} + +// resourceAwsOrganizationsAccountStateRefreshFunc returns a resource.StateRefreshFunc +// that is used to watch a CreateAccount request +func resourceAwsOrganizationsAccountStateRefreshFunc(conn *organizations.Organizations, id string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + opts := &organizations.DescribeCreateAccountStatusInput{ + CreateAccountRequestId: aws.String(id), + } + resp, err := conn.DescribeCreateAccountStatus(opts) + if err != nil { + if isAWSErr(err, organizations.ErrCodeCreateAccountStatusNotFoundException, "") { + resp = nil + } else { + log.Printf("Error on OrganizationAccountStateRefresh: %s", err) + return nil, "", err + } + } + + if resp == nil { + // Sometimes AWS just has consistency issues and doesn't see + // our account yet. Return an empty state. 
+ return nil, "", nil + } + + accountStatus := resp.CreateAccountStatus + if *accountStatus.State == organizations.CreateAccountStateFailed { + return nil, *accountStatus.State, fmt.Errorf(*accountStatus.FailureReason) + } + return accountStatus, *accountStatus.State, nil + } +} + +func validateAwsOrganizationsAccountEmail(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[^\s@]+@[^\s@]+\.[^\s@]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must be a valid email address", value)) + } + + if len(value) < 6 { + errors = append(errors, fmt.Errorf( + "%q cannot be less than 6 characters", value)) + } + + if len(value) > 64 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 64 characters", value)) + } + + return +} + +func validateAwsOrganizationsAccountRoleName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[\w+=,.@-]{1,64}$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must consist of uppercase letters, lowercase letters, digits with no spaces, and any of the following characters: =,.@-", value)) + } + + return +} diff --git a/aws/resource_aws_organizations_account_test.go b/aws/resource_aws_organizations_account_test.go new file mode 100644 index 00000000000..6cb4b47d873 --- /dev/null +++ b/aws/resource_aws_organizations_account_test.go @@ -0,0 +1,118 @@ +package aws + +import ( + "fmt" + "os" + "testing" + + "github.com/aws/aws-sdk-go/service/organizations" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func testAccAwsOrganizationsAccount_basic(t *testing.T) { + var account organizations.Account + + orgsEmailDomain, ok := os.LookupEnv("TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN") + + if !ok { + t.Skip("'TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN' not set, skipping test.") + } + + rInt := acctest.RandInt() + name := fmt.Sprintf("tf_acctest_%d", rInt) + email := fmt.Sprintf("tf-acctest+%d@%s", rInt, orgsEmailDomain) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOrganizationsAccountDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsOrganizationsAccountConfig(name, email), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsOrganizationsAccountExists("aws_organizations_account.test", &account), + resource.TestCheckResourceAttrSet("aws_organizations_account.test", "arn"), + resource.TestCheckResourceAttrSet("aws_organizations_account.test", "joined_method"), + resource.TestCheckResourceAttrSet("aws_organizations_account.test", "joined_timestamp"), + resource.TestCheckResourceAttr("aws_organizations_account.test", "name", name), + resource.TestCheckResourceAttr("aws_organizations_account.test", "email", email), + resource.TestCheckResourceAttrSet("aws_organizations_account.test", "status"), + ), + }, + { + ResourceName: "aws_organizations_account.test", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsOrganizationsAccountDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).organizationsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_organizations_account" { + continue + } + + params := &organizations.DescribeAccountInput{ + AccountId: 
&rs.Primary.ID,
+		}
+
+		resp, err := conn.DescribeAccount(params)
+
+		if err != nil {
+			if isAWSErr(err, organizations.ErrCodeAccountNotFoundException, "") {
+				return nil
+			}
+			return err
+		}
+
+		if resp != nil && resp.Account != nil {
+			return fmt.Errorf("Bad: Account still exists: %q", rs.Primary.ID)
+		}
+	}
+
+	return nil
+
+}
+
+func testAccCheckAwsOrganizationsAccountExists(n string, a *organizations.Account) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		conn := testAccProvider.Meta().(*AWSClient).organizationsconn
+		params := &organizations.DescribeAccountInput{
+			AccountId: &rs.Primary.ID,
+		}
+
+		resp, err := conn.DescribeAccount(params)
+
+		if err != nil {
+			return err
+		}
+
+		if resp == nil || resp.Account == nil {
+			return fmt.Errorf("Account %q does not exist", rs.Primary.ID)
+		}
+
+		*a = *resp.Account
+
+		return nil
+	}
+}
+
+func testAccAwsOrganizationsAccountConfig(name, email string) string {
+	return fmt.Sprintf(`
+resource "aws_organizations_account" "test" {
+  name  = "%s"
+  email = "%s"
+}
+`, name, email)
+}
diff --git a/aws/resource_aws_organizations_organization_test.go b/aws/resource_aws_organizations_organization_test.go
index dc951045e36..5ab92da3540 100644
--- a/aws/resource_aws_organizations_organization_test.go
+++ b/aws/resource_aws_organizations_organization_test.go
@@ -9,11 +9,32 @@ import (
 	"github.com/hashicorp/terraform/terraform"
 )
 
+func testAccAwsOrganizationsOrganization_importBasic(t *testing.T) {
+	resourceName := "aws_organizations_organization.test"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAwsOrganizationsOrganizationDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccAwsOrganizationsOrganizationConfig,
+			},
+
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
 func testAccAwsOrganizationsOrganization_basic(t *testing.T) {
 	var organization organizations.Organization
 
 	resource.Test(t, resource.TestCase{
-		PreCheck:     func() { testAccPreCheck(t) },
+		PreCheck:     func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) },
 		Providers:    testAccProviders,
 		CheckDestroy: testAccCheckAwsOrganizationsOrganizationDestroy,
 		Steps: []resource.TestStep{
@@ -38,7 +59,7 @@ func testAccAwsOrganizationsOrganization_consolidatedBilling(t *testing.T) {
 	feature_set := organizations.OrganizationFeatureSetConsolidatedBilling
 
 	resource.Test(t, resource.TestCase{
-		PreCheck:     func() { testAccPreCheck(t) },
+		PreCheck:     func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) },
 		Providers:    testAccProviders,
 		CheckDestroy: testAccCheckAwsOrganizationsOrganizationDestroy,
 		Steps: []resource.TestStep{
diff --git a/aws/resource_aws_organizations_policy.go b/aws/resource_aws_organizations_policy.go
new file mode 100644
index 00000000000..715e6bf93b6
--- /dev/null
+++ b/aws/resource_aws_organizations_policy.go
@@ -0,0 +1,174 @@
+package aws
+
+import (
+	"fmt"
+	"log"
+	"time"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/organizations"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/helper/schema"
+	"github.com/hashicorp/terraform/helper/validation"
+)
+
+func resourceAwsOrganizationsPolicy() *schema.Resource {
+	return &schema.Resource{
+		Create:
resourceAwsOrganizationsPolicyCreate, + Read: resourceAwsOrganizationsPolicyRead, + Update: resourceAwsOrganizationsPolicyUpdate, + Delete: resourceAwsOrganizationsPolicyDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "content": { + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs, + ValidateFunc: validation.ValidateJsonString, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: organizations.PolicyTypeServiceControlPolicy, + ValidateFunc: validation.StringInSlice([]string{ + organizations.PolicyTypeServiceControlPolicy, + }, false), + }, + }, + } +} + +func resourceAwsOrganizationsPolicyCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).organizationsconn + + // Description is required: + // InvalidParameter: 1 validation error(s) found. + // - missing required field, CreatePolicyInput.Description. + input := &organizations.CreatePolicyInput{ + Content: aws.String(d.Get("content").(string)), + Description: aws.String(d.Get("description").(string)), + Name: aws.String(d.Get("name").(string)), + Type: aws.String(d.Get("type").(string)), + } + + log.Printf("[DEBUG] Creating Organizations Policy: %s", input) + + var err error + var resp *organizations.CreatePolicyOutput + err = resource.Retry(4*time.Minute, func() *resource.RetryError { + resp, err = conn.CreatePolicy(input) + + if err != nil { + if isAWSErr(err, organizations.ErrCodeFinalizingOrganizationException, "") { + log.Printf("[DEBUG] Trying to create policy again: %q", err.Error()) + return resource.RetryableError(err) + } + + return resource.NonRetryableError(err) + } + + return nil + }) + + if err != nil { + return fmt.Errorf("error creating Organizations Policy: %s", err) + } + + d.SetId(*resp.Policy.PolicySummary.Id) + + return resourceAwsOrganizationsPolicyRead(d, meta) +} + +func resourceAwsOrganizationsPolicyRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).organizationsconn + + input := &organizations.DescribePolicyInput{ + PolicyId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading Organizations Policy: %s", input) + resp, err := conn.DescribePolicy(input) + if err != nil { + if isAWSErr(err, organizations.ErrCodePolicyNotFoundException, "") { + log.Printf("[WARN] Policy does not exist, removing from state: %s", d.Id()) + d.SetId("") + return nil + } + return err + } + + if resp.Policy == nil || resp.Policy.PolicySummary == nil { + log.Printf("[WARN] Policy does not exist, removing from state: %s", d.Id()) + d.SetId("") + return nil + } + + d.Set("arn", resp.Policy.PolicySummary.Arn) + d.Set("content", resp.Policy.Content) + d.Set("description", resp.Policy.PolicySummary.Description) + d.Set("name", resp.Policy.PolicySummary.Name) + d.Set("type", resp.Policy.PolicySummary.Type) + return nil +} + +func resourceAwsOrganizationsPolicyUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).organizationsconn + + input := &organizations.UpdatePolicyInput{ + PolicyId: aws.String(d.Id()), + } + + if d.HasChange("content") { + input.Content = aws.String(d.Get("content").(string)) + } + + if d.HasChange("description") { + input.Description = 
aws.String(d.Get("description").(string))
+	}
+
+	if d.HasChange("name") {
+		input.Name = aws.String(d.Get("name").(string))
+	}
+
+	log.Printf("[DEBUG] Updating Organizations Policy: %s", input)
+	_, err := conn.UpdatePolicy(input)
+	if err != nil {
+		return fmt.Errorf("error updating Organizations Policy: %s", err)
+	}
+
+	return resourceAwsOrganizationsPolicyRead(d, meta)
+}
+
+func resourceAwsOrganizationsPolicyDelete(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).organizationsconn
+
+	input := &organizations.DeletePolicyInput{
+		PolicyId: aws.String(d.Id()),
+	}
+
+	log.Printf("[DEBUG] Deleting Organizations Policy: %s", input)
+	_, err := conn.DeletePolicy(input)
+	if err != nil {
+		if isAWSErr(err, organizations.ErrCodePolicyNotFoundException, "") {
+			return nil
+		}
+		return err
+	}
+	return nil
+}
diff --git a/aws/resource_aws_organizations_policy_attachment.go b/aws/resource_aws_organizations_policy_attachment.go
new file mode 100644
index 00000000000..b51949e8034
--- /dev/null
+++ b/aws/resource_aws_organizations_policy_attachment.go
@@ -0,0 +1,154 @@
+package aws
+
+import (
+	"fmt"
+	"log"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/organizations"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/helper/schema"
+)
+
+func resourceAwsOrganizationsPolicyAttachment() *schema.Resource {
+	return &schema.Resource{
+		Create: resourceAwsOrganizationsPolicyAttachmentCreate,
+		Read:   resourceAwsOrganizationsPolicyAttachmentRead,
+		Delete: resourceAwsOrganizationsPolicyAttachmentDelete,
+		Importer: &schema.ResourceImporter{
+			State: schema.ImportStatePassthrough,
+		},
+
+		Schema: map[string]*schema.Schema{
+			"policy_id": {
+				Type:     schema.TypeString,
+				Required: true,
+				ForceNew: true,
+			},
+			"target_id": {
+				Type:     schema.TypeString,
+				Required: true,
+				ForceNew: true,
+			},
+		},
+	}
+}
+
+func resourceAwsOrganizationsPolicyAttachmentCreate(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).organizationsconn
+
+	policyID := d.Get("policy_id").(string)
+	targetID := d.Get("target_id").(string)
+
+	input := &organizations.AttachPolicyInput{
+		PolicyId: aws.String(policyID),
+		TargetId: aws.String(targetID),
+	}
+
+	log.Printf("[DEBUG] Creating Organizations Policy Attachment: %s", input)
+
+	err := resource.Retry(4*time.Minute, func() *resource.RetryError {
+		_, err := conn.AttachPolicy(input)
+
+		if err != nil {
+			if isAWSErr(err, organizations.ErrCodeFinalizingOrganizationException, "") {
+				log.Printf("[DEBUG] Trying to create policy attachment again: %q", err.Error())
+				return resource.RetryableError(err)
+			}
+
+			return resource.NonRetryableError(err)
+		}
+
+		return nil
+	})
+
+	if err != nil {
+		return fmt.Errorf("error creating Organizations Policy Attachment: %s", err)
+	}
+
+	d.SetId(fmt.Sprintf("%s:%s", targetID, policyID))
+
+	return resourceAwsOrganizationsPolicyAttachmentRead(d, meta)
+}
+
+func resourceAwsOrganizationsPolicyAttachmentRead(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*AWSClient).organizationsconn
+
+	targetID, policyID, err := decodeAwsOrganizationsPolicyAttachmentID(d.Id())
+	if err != nil {
+		return err
+	}
+
+	input := &organizations.ListPoliciesForTargetInput{
+		Filter:   aws.String(organizations.PolicyTypeServiceControlPolicy),
+		TargetId: aws.String(targetID),
+	}
+
+	log.Printf("[DEBUG] Listing Organizations Policies for Target: %s", input)
+	var output
*organizations.PolicySummary + err = conn.ListPoliciesForTargetPages(input, func(page *organizations.ListPoliciesForTargetOutput, lastPage bool) bool { + for _, policySummary := range page.Policies { + if aws.StringValue(policySummary.Id) == policyID { + output = policySummary + return true + } + } + return !lastPage + }) + + if err != nil { + if isAWSErr(err, organizations.ErrCodeTargetNotFoundException, "") { + log.Printf("[WARN] Target does not exist, removing from state: %s", d.Id()) + d.SetId("") + return nil + } + return err + } + + if output == nil { + log.Printf("[WARN] Attachment does not exist, removing from state: %s", d.Id()) + d.SetId("") + return nil + } + + d.Set("policy_id", policyID) + d.Set("target_id", targetID) + return nil +} + +func resourceAwsOrganizationsPolicyAttachmentDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).organizationsconn + + targetID, policyID, err := decodeAwsOrganizationsPolicyAttachmentID(d.Id()) + if err != nil { + return err + } + + input := &organizations.DetachPolicyInput{ + PolicyId: aws.String(policyID), + TargetId: aws.String(targetID), + } + + log.Printf("[DEBUG] Detaching Organizations Policy %q from %q", policyID, targetID) + _, err = conn.DetachPolicy(input) + if err != nil { + if isAWSErr(err, organizations.ErrCodePolicyNotFoundException, "") { + return nil + } + if isAWSErr(err, organizations.ErrCodeTargetNotFoundException, "") { + return nil + } + return err + } + return nil +} + +func decodeAwsOrganizationsPolicyAttachmentID(id string) (string, string, error) { + idParts := strings.Split(id, ":") + if len(idParts) != 2 { + return "", "", fmt.Errorf("expected ID in format of TARGETID:POLICYID, received: %s", id) + } + return idParts[0], idParts[1], nil +} diff --git a/aws/resource_aws_organizations_policy_attachment_test.go b/aws/resource_aws_organizations_policy_attachment_test.go new file mode 100644 index 00000000000..d9f32f73f5e --- /dev/null +++ b/aws/resource_aws_organizations_policy_attachment_test.go @@ -0,0 +1,146 @@ +package aws + +import ( + "fmt" + "log" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/organizations" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsOrganizationsPolicyAttachment_account(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_organizations_policy_attachment.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOrganizationsPolicyAttachmentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsOrganizationsPolicyAttachmentConfig_Account(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsOrganizationsPolicyAttachmentExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "policy_id"), + resource.TestCheckResourceAttrSet(resourceName, "target_id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsOrganizationsPolicyAttachmentDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).organizationsconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_organizations_policy_attachment" { + continue + } + + targetID, policyID, err := 
decodeAwsOrganizationsPolicyAttachmentID(rs.Primary.ID) + if err != nil { + return err + } + + input := &organizations.ListPoliciesForTargetInput{ + Filter: aws.String(organizations.PolicyTypeServiceControlPolicy), + TargetId: aws.String(targetID), + } + + log.Printf("[DEBUG] Listing Organizations Policies for Target: %s", input) + var output *organizations.PolicySummary + err = conn.ListPoliciesForTargetPages(input, func(page *organizations.ListPoliciesForTargetOutput, lastPage bool) bool { + for _, policySummary := range page.Policies { + if aws.StringValue(policySummary.Id) == policyID { + output = policySummary + return true + } + } + return !lastPage + }) + + if err != nil { + if isAWSErr(err, organizations.ErrCodeTargetNotFoundException, "") { + return nil + } + return err + } + + if output == nil { + return nil + } + + return fmt.Errorf("Policy attachment %q still exists", rs.Primary.ID) + } + + return nil + +} + +func testAccCheckAwsOrganizationsPolicyAttachmentExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).organizationsconn + + targetID, policyID, err := decodeAwsOrganizationsPolicyAttachmentID(rs.Primary.ID) + if err != nil { + return err + } + + input := &organizations.ListPoliciesForTargetInput{ + Filter: aws.String(organizations.PolicyTypeServiceControlPolicy), + TargetId: aws.String(targetID), + } + + log.Printf("[DEBUG] Listing Organizations Policies for Target: %s", input) + var output *organizations.PolicySummary + err = conn.ListPoliciesForTargetPages(input, func(page *organizations.ListPoliciesForTargetOutput, lastPage bool) bool { + for _, policySummary := range page.Policies { + if aws.StringValue(policySummary.Id) == policyID { + output = policySummary + return true + } + } + return !lastPage + }) + + if err != nil { + return err + } + + if output == nil { + return fmt.Errorf("Policy attachment %q does not exist", rs.Primary.ID) + } + + return nil + } +} + +func testAccAwsOrganizationsPolicyAttachmentConfig_Account(rName string) string { + return fmt.Sprintf(` +data "aws_caller_identity" "current" {} + +resource "aws_organizations_policy" "test" { + content = "{\"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"*\", \"Resource\": \"*\"}}" + name = "%s" +} + +resource "aws_organizations_policy_attachment" "test" { + policy_id = "${aws_organizations_policy.test.id}" + target_id = "${data.aws_caller_identity.current.account_id}" +} +`, rName) +} diff --git a/aws/resource_aws_organizations_policy_test.go b/aws/resource_aws_organizations_policy_test.go new file mode 100644 index 00000000000..3c1a4412009 --- /dev/null +++ b/aws/resource_aws_organizations_policy_test.go @@ -0,0 +1,162 @@ +package aws + +import ( + "fmt" + "regexp" + "strconv" + "testing" + + "github.com/aws/aws-sdk-go/service/organizations" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsOrganizationsPolicy_basic(t *testing.T) { + var policy organizations.Policy + content1 := `{"Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "*", "Resource": "*"}}` + content2 := `{"Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "s3:*", "Resource": "*"}}` + rName := 
acctest.RandomWithPrefix("tf-acc-test")
+	resourceName := "aws_organizations_policy.test"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAwsOrganizationsPolicyDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccAwsOrganizationsPolicyConfig_Required(rName, content1),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAwsOrganizationsPolicyExists(resourceName, &policy),
+					resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:organizations::[^:]+:policy/o-.+/service_control_policy/p-.+$`)),
+					resource.TestCheckResourceAttr(resourceName, "content", content1),
+					resource.TestCheckResourceAttr(resourceName, "description", ""),
+					resource.TestCheckResourceAttr(resourceName, "name", rName),
+					resource.TestCheckResourceAttr(resourceName, "type", organizations.PolicyTypeServiceControlPolicy),
+				),
+			},
+			{
+				Config: testAccAwsOrganizationsPolicyConfig_Required(rName, content2),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAwsOrganizationsPolicyExists(resourceName, &policy),
+					resource.TestCheckResourceAttr(resourceName, "content", content2),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccAwsOrganizationsPolicy_description(t *testing.T) {
+	var policy organizations.Policy
+	rName := acctest.RandomWithPrefix("tf-acc-test")
+	resourceName := "aws_organizations_policy.test"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:     func() { testAccPreCheck(t); testAccOrganizationsAccountPreCheck(t) },
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckAwsOrganizationsPolicyDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccAwsOrganizationsPolicyConfig_Description(rName, "description1"),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAwsOrganizationsPolicyExists(resourceName, &policy),
+					resource.TestCheckResourceAttr(resourceName, "description", "description1"),
+				),
+			},
+			{
+				Config: testAccAwsOrganizationsPolicyConfig_Description(rName, "description2"),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAwsOrganizationsPolicyExists(resourceName, &policy),
+					resource.TestCheckResourceAttr(resourceName, "description", "description2"),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func testAccCheckAwsOrganizationsPolicyDestroy(s *terraform.State) error {
+	conn := testAccProvider.Meta().(*AWSClient).organizationsconn
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "aws_organizations_policy" {
+			continue
+		}
+
+		input := &organizations.DescribePolicyInput{
+			PolicyId: &rs.Primary.ID,
+		}
+
+		resp, err := conn.DescribePolicy(input)
+
+		if err != nil {
+			if isAWSErr(err, organizations.ErrCodePolicyNotFoundException, "") {
+				return nil
+			}
+			return err
+		}
+
+		if resp != nil && resp.Policy != nil {
+			return fmt.Errorf("Policy %q still exists", rs.Primary.ID)
+		}
+	}
+
+	return nil
+
+}
+
+func testAccCheckAwsOrganizationsPolicyExists(resourceName string, policy *organizations.Policy) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[resourceName]
+		if !ok {
+			return fmt.Errorf("Not found: %s", resourceName)
+		}
+
+		conn := testAccProvider.Meta().(*AWSClient).organizationsconn
+		input := &organizations.DescribePolicyInput{
+			PolicyId: &rs.Primary.ID,
+		}
+
+		resp, err :=
conn.DescribePolicy(input) + + if err != nil { + return err + } + + if resp == nil || resp.Policy == nil { + return fmt.Errorf("Policy %q does not exist", rs.Primary.ID) + } + + *policy = *resp.Policy + + return nil + } +} + +func testAccAwsOrganizationsPolicyConfig_Description(rName, description string) string { + return fmt.Sprintf(` +resource "aws_organizations_policy" "test" { + content = "{\"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"*\", \"Resource\": \"*\"}}" + description = "%s" + name = "%s" +} +`, description, rName) +} + +func testAccAwsOrganizationsPolicyConfig_Required(rName, content string) string { + return fmt.Sprintf(` +resource "aws_organizations_policy" "test" { + content = %s + name = "%s" +} +`, strconv.Quote(content), rName) +} diff --git a/aws/resource_aws_organizations_test.go b/aws/resource_aws_organizations_test.go index 33fef61b690..5804aeb009f 100644 --- a/aws/resource_aws_organizations_test.go +++ b/aws/resource_aws_organizations_test.go @@ -11,6 +11,9 @@ func TestAccAWSOrganizations(t *testing.T) { "importBasic": testAccAwsOrganizationsOrganization_importBasic, "consolidatedBilling": testAccAwsOrganizationsOrganization_consolidatedBilling, }, + "Account": { + "basic": testAccAwsOrganizationsAccount_basic, + }, } for group, m := range testCases { diff --git a/aws/resource_aws_pinpoint_adm_channel.go b/aws/resource_aws_pinpoint_adm_channel.go new file mode 100644 index 00000000000..71373a98bb2 --- /dev/null +++ b/aws/resource_aws_pinpoint_adm_channel.go @@ -0,0 +1,114 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/pinpoint" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsPinpointADMChannel() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsPinpointADMChannelUpsert, + Read: resourceAwsPinpointADMChannelRead, + Update: resourceAwsPinpointADMChannelUpsert, + Delete: resourceAwsPinpointADMChannelDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "application_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "client_id": { + Type: schema.TypeString, + Required: true, + Sensitive: true, + }, + "client_secret": { + Type: schema.TypeString, + Required: true, + Sensitive: true, + }, + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + }, + } +} + +func resourceAwsPinpointADMChannelUpsert(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).pinpointconn + + applicationId := d.Get("application_id").(string) + + params := &pinpoint.ADMChannelRequest{} + + params.ClientId = aws.String(d.Get("client_id").(string)) + params.ClientSecret = aws.String(d.Get("client_secret").(string)) + params.Enabled = aws.Bool(d.Get("enabled").(bool)) + + req := pinpoint.UpdateAdmChannelInput{ + ApplicationId: aws.String(applicationId), + ADMChannelRequest: params, + } + + _, err := conn.UpdateAdmChannel(&req) + if err != nil { + return err + } + + d.SetId(applicationId) + + return resourceAwsPinpointADMChannelRead(d, meta) +} + +func resourceAwsPinpointADMChannelRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).pinpointconn + + log.Printf("[INFO] Reading Pinpoint ADM Channel for application %s", d.Id()) + + channel, err := conn.GetAdmChannel(&pinpoint.GetAdmChannelInput{ + ApplicationId: aws.String(d.Id()), + }) + if err != nil { + 
if isAWSErr(err, pinpoint.ErrCodeNotFoundException, "") { + log.Printf("[WARN] Pinpoint ADM Channel for application %s not found, error code (404)", d.Id()) + d.SetId("") + return nil + } + + return fmt.Errorf("error getting Pinpoint ADM Channel for application %s: %s", d.Id(), err) + } + + d.Set("application_id", channel.ADMChannelResponse.ApplicationId) + d.Set("enabled", channel.ADMChannelResponse.Enabled) + // client_id and client_secret are never returned + + return nil +} + +func resourceAwsPinpointADMChannelDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).pinpointconn + + log.Printf("[DEBUG] Pinpoint Delete ADM Channel: %s", d.Id()) + _, err := conn.DeleteAdmChannel(&pinpoint.DeleteAdmChannelInput{ + ApplicationId: aws.String(d.Id()), + }) + + if isAWSErr(err, pinpoint.ErrCodeNotFoundException, "") { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting Pinpoint ADM Channel for application %s: %s", d.Id(), err) + } + return nil +} diff --git a/aws/resource_aws_pinpoint_adm_channel_test.go b/aws/resource_aws_pinpoint_adm_channel_test.go new file mode 100644 index 00000000000..bdf168d3c27 --- /dev/null +++ b/aws/resource_aws_pinpoint_adm_channel_test.go @@ -0,0 +1,155 @@ +package aws + +import ( + "fmt" + "os" + "testing" + + "github.com/aws/aws-sdk-go/service/pinpoint" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +/** + Before running this test, the following two ENV variables must be set: + + ADM_CLIENT_ID - Amazon ADM OAuth Credentials Client ID + ADM_CLIENT_SECRET - Amazon ADM OAuth Credentials Client Secret +**/ + +type testAccAwsPinpointADMChannelConfiguration struct { + ClientID string + ClientSecret string +} + +func testAccAwsPinpointADMChannelConfigurationFromEnv(t *testing.T) *testAccAwsPinpointADMChannelConfiguration { + + if os.Getenv("ADM_CLIENT_ID") == "" { + t.Skipf("ADM_CLIENT_ID ENV is missing") + } + + if os.Getenv("ADM_CLIENT_SECRET") == "" { + t.Skipf("ADM_CLIENT_SECRET ENV is missing") + } + + conf := testAccAwsPinpointADMChannelConfiguration{ + ClientID: os.Getenv("ADM_CLIENT_ID"), + ClientSecret: os.Getenv("ADM_CLIENT_SECRET"), + } + + return &conf +} + +func TestAccAWSPinpointADMChannel_basic(t *testing.T) { + oldDefaultRegion := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldDefaultRegion) + + var channel pinpoint.ADMChannelResponse + resourceName := "aws_pinpoint_adm_channel.channel" + + config := testAccAwsPinpointADMChannelConfigurationFromEnv(t) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: resourceName, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSPinpointADMChannelDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSPinpointADMChannelConfig_basic(config), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSPinpointADMChannelExists(resourceName, &channel), + resource.TestCheckResourceAttr(resourceName, "enabled", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"client_id", "client_secret"}, + }, + { + Config: testAccAWSPinpointADMChannelConfig_basic(config), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSPinpointADMChannelExists(resourceName, &channel), + resource.TestCheckResourceAttr(resourceName, "enabled", 
"false"), + ), + }, + }, + }) +} + +func testAccCheckAWSPinpointADMChannelExists(n string, channel *pinpoint.ADMChannelResponse) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Pinpoint ADM channel with that Application ID exists") + } + + conn := testAccProvider.Meta().(*AWSClient).pinpointconn + + // Check if the ADM Channel exists + params := &pinpoint.GetAdmChannelInput{ + ApplicationId: aws.String(rs.Primary.ID), + } + output, err := conn.GetAdmChannel(params) + + if err != nil { + return err + } + + *channel = *output.ADMChannelResponse + + return nil + } +} + +func testAccAWSPinpointADMChannelConfig_basic(conf *testAccAwsPinpointADMChannelConfiguration) string { + return fmt.Sprintf(` +provider "aws" { + region = "us-east-1" +} + +resource "aws_pinpoint_app" "test_app" {} + +resource "aws_pinpoint_adm_channel" "channel" { + application_id = "${aws_pinpoint_app.test_app.application_id}" + + client_id = "%s" + client_secret = "%s" + enabled = false +} +`, conf.ClientID, conf.ClientSecret) +} + +func testAccCheckAWSPinpointADMChannelDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).pinpointconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_pinpoint_adm_channel" { + continue + } + + // Check if the ADM channel exists by fetching its attributes + params := &pinpoint.GetAdmChannelInput{ + ApplicationId: aws.String(rs.Primary.ID), + } + _, err := conn.GetAdmChannel(params) + if err != nil { + if isAWSErr(err, pinpoint.ErrCodeNotFoundException, "") { + continue + } + return err + } + return fmt.Errorf("ADM Channel exists when it should be destroyed!") + } + + return nil +} diff --git a/aws/resource_aws_pinpoint_apns_channel.go b/aws/resource_aws_pinpoint_apns_channel.go new file mode 100644 index 00000000000..95949cb14e0 --- /dev/null +++ b/aws/resource_aws_pinpoint_apns_channel.go @@ -0,0 +1,159 @@ +package aws + +import ( + "errors" + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/pinpoint" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsPinpointAPNSChannel() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsPinpointAPNSChannelUpsert, + Read: resourceAwsPinpointAPNSChannelRead, + Update: resourceAwsPinpointAPNSChannelUpsert, + Delete: resourceAwsPinpointAPNSChannelDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "application_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "bundle_id": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + "certificate": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + "default_authentication_method": { + Type: schema.TypeString, + Optional: true, + }, + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "private_key": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + "team_id": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + "token_key": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + "token_key_id": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + }, + } +} + +func resourceAwsPinpointAPNSChannelUpsert(d *schema.ResourceData, meta interface{}) error { + 
certificate, certificateOk := d.GetOk("certificate") + privateKey, privateKeyOk := d.GetOk("private_key") + + bundleId, bundleIdOk := d.GetOk("bundle_id") + teamId, teamIdOk := d.GetOk("team_id") + tokenKey, tokenKeyOk := d.GetOk("token_key") + tokenKeyId, tokenKeyIdOk := d.GetOk("token_key_id") + + if !(certificateOk && privateKeyOk) && !(bundleIdOk && teamIdOk && tokenKeyOk && tokenKeyIdOk) { + return errors.New("At least one set of credentials is required; either [certificate, private_key] or [bundle_id, team_id, token_key, token_key_id]") + } + + conn := meta.(*AWSClient).pinpointconn + + applicationId := d.Get("application_id").(string) + + params := &pinpoint.APNSChannelRequest{} + + params.DefaultAuthenticationMethod = aws.String(d.Get("default_authentication_method").(string)) + params.Enabled = aws.Bool(d.Get("enabled").(bool)) + + params.Certificate = aws.String(certificate.(string)) + params.PrivateKey = aws.String(privateKey.(string)) + + params.BundleId = aws.String(bundleId.(string)) + params.TeamId = aws.String(teamId.(string)) + params.TokenKey = aws.String(tokenKey.(string)) + params.TokenKeyId = aws.String(tokenKeyId.(string)) + + req := pinpoint.UpdateApnsChannelInput{ + ApplicationId: aws.String(applicationId), + APNSChannelRequest: params, + } + + _, err := conn.UpdateApnsChannel(&req) + if err != nil { + return fmt.Errorf("error updating Pinpoint APNs Channel for Application %s: %s", applicationId, err) + } + + d.SetId(applicationId) + + return resourceAwsPinpointAPNSChannelRead(d, meta) +} + +func resourceAwsPinpointAPNSChannelRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).pinpointconn + + log.Printf("[INFO] Reading Pinpoint APNs Channel for Application %s", d.Id()) + + output, err := conn.GetApnsChannel(&pinpoint.GetApnsChannelInput{ + ApplicationId: aws.String(d.Id()), + }) + if err != nil { + if isAWSErr(err, pinpoint.ErrCodeNotFoundException, "") { + log.Printf("[WARN] Pinpoint APNs Channel for application %s not found, error code (404)", d.Id()) + d.SetId("") + return nil + } + + return fmt.Errorf("error getting Pinpoint APNs Channel for application %s: %s", d.Id(), err) + } + + d.Set("application_id", output.APNSChannelResponse.ApplicationId) + d.Set("default_authentication_method", output.APNSChannelResponse.DefaultAuthenticationMethod) + d.Set("enabled", output.APNSChannelResponse.Enabled) + // Sensitive params are not returned + + return nil +} + +func resourceAwsPinpointAPNSChannelDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).pinpointconn + + log.Printf("[DEBUG] Deleting Pinpoint APNs Channel: %s", d.Id()) + _, err := conn.DeleteApnsChannel(&pinpoint.DeleteApnsChannelInput{ + ApplicationId: aws.String(d.Id()), + }) + + if isAWSErr(err, pinpoint.ErrCodeNotFoundException, "") { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting Pinpoint APNs Channel for Application %s: %s", d.Id(), err) + } + return nil +} diff --git a/aws/resource_aws_pinpoint_apns_channel_test.go b/aws/resource_aws_pinpoint_apns_channel_test.go new file mode 100644 index 00000000000..0a5427ef076 --- /dev/null +++ b/aws/resource_aws_pinpoint_apns_channel_test.go @@ -0,0 +1,257 @@ +package aws + +import ( + "fmt" + "os" + "strconv" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/service/pinpoint" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +/** + Before running this 
test, one of the following two ENV variables set must be defined. See here for details: + https://docs.aws.amazon.com/pinpoint/latest/userguide/channels-mobile-manage.html + + * Key Configuration (ref. https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/establishing_a_token-based_connection_to_apns ) + APNS_BUNDLE_ID - APNs Bundle ID + APNS_TEAM_ID - APNs Team ID + APNS_TOKEN_KEY - Token key file content (.p8 file) + APNS_TOKEN_KEY_ID - APNs Token Key ID + + * Certificate Configuration (ref. https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/establishing_a_certificate-based_connection_to_apns ) + APNS_CERTIFICATE - APNs Certificate content (.pem file content) + APNS_CERTIFICATE_PRIVATE_KEY - APNs Certificate Private Key File content +**/ + +type testAccAwsPinpointAPNSChannelCertConfiguration struct { + Certificate string + PrivateKey string +} + +type testAccAwsPinpointAPNSChannelTokenConfiguration struct { + BundleId string + TeamId string + TokenKey string + TokenKeyId string +} + +func testAccAwsPinpointAPNSChannelCertConfigurationFromEnv(t *testing.T) *testAccAwsPinpointAPNSChannelCertConfiguration { + var conf *testAccAwsPinpointAPNSChannelCertConfiguration + if os.Getenv("APNS_CERTIFICATE") != "" { + if os.Getenv("APNS_CERTIFICATE_PRIVATE_KEY") == "" { + t.Fatalf("APNS_CERTIFICATE set but missing APNS_CERTIFICATE_PRIVATE_KEY") + } + + conf = &testAccAwsPinpointAPNSChannelCertConfiguration{ + Certificate: fmt.Sprintf("< 0 { opts.AvailabilityZones = expandStringList(attr.List()) } - if attr, ok := d.GetOk("db_subnet_group_name"); ok { - opts.DBSubnetGroupName = aws.String(attr.(string)) + // Need to check value > 0 due to: + // InvalidParameterValue: Backtrack is not enabled for the aurora-postgresql engine. 
+ if v, ok := d.GetOk("backtrack_window"); ok && v.(int) > 0 { + opts.BacktrackWindow = aws.Int64(int64(v.(int))) + } + + if attr, ok := d.GetOk("backup_retention_period"); ok { + modifyDbClusterInput.BackupRetentionPeriod = aws.Int64(int64(attr.(int))) + requiresModifyDbCluster = true } if attr, ok := d.GetOk("database_name"); ok { opts.DatabaseName = aws.String(attr.(string)) } + if attr, ok := d.GetOk("db_cluster_parameter_group_name"); ok { + opts.DBClusterParameterGroupName = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("db_subnet_group_name"); ok { + opts.DBSubnetGroupName = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("enabled_cloudwatch_logs_exports"); ok && len(attr.([]interface{})) > 0 { + opts.EnableCloudwatchLogsExports = expandStringList(attr.([]interface{})) + } + + if attr, ok := d.GetOk("engine_version"); ok { + opts.EngineVersion = aws.String(attr.(string)) + } + + if attr, ok := d.GetOk("kms_key_id"); ok { + opts.KmsKeyId = aws.String(attr.(string)) + } + if attr, ok := d.GetOk("option_group_name"); ok { opts.OptionGroupName = aws.String(attr.(string)) } @@ -313,64 +477,152 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error opts.Port = aws.Int64(int64(attr.(int))) } - // Check if any of the parameters that require a cluster modification after creation are set - var clusterUpdate bool - if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { - clusterUpdate = true - opts.VpcSecurityGroupIds = expandStringList(attr.List()) + if attr, ok := d.GetOk("preferred_backup_window"); ok { + modifyDbClusterInput.PreferredBackupWindow = aws.String(attr.(string)) + requiresModifyDbCluster = true } - if _, ok := d.GetOk("db_cluster_parameter_group_name"); ok { - clusterUpdate = true + if attr, ok := d.GetOk("preferred_maintenance_window"); ok { + modifyDbClusterInput.PreferredMaintenanceWindow = aws.String(attr.(string)) + requiresModifyDbCluster = true } - if _, ok := d.GetOk("backup_retention_period"); ok { - clusterUpdate = true + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + opts.VpcSecurityGroupIds = expandStringList(attr.List()) } log.Printf("[DEBUG] RDS Cluster restore from snapshot configuration: %s", opts) - _, err := conn.RestoreDBClusterFromSnapshot(&opts) + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.RestoreDBClusterFromSnapshot(&opts) + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) if err != nil { return fmt.Errorf("Error creating RDS Cluster: %s", err) } + } else if _, ok := d.GetOk("replication_source_identifier"); ok { + createOpts := &rds.CreateDBClusterInput{ + DBClusterIdentifier: aws.String(identifier), + DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), + Engine: aws.String(d.Get("engine").(string)), + EngineMode: aws.String(d.Get("engine_mode").(string)), + ReplicationSourceIdentifier: aws.String(d.Get("replication_source_identifier").(string)), + ScalingConfiguration: expandRdsScalingConfiguration(d.Get("scaling_configuration").([]interface{})), + Tags: tags, + } - if clusterUpdate { - log.Printf("[INFO] RDS Cluster is restoring from snapshot with default db_cluster_parameter_group_name, backup_retention_period and vpc_security_group_ids" + - "but custom values should be set, will now update after snapshot 
is restored!") + // Need to check value > 0 due to: + // InvalidParameterValue: Backtrack is not enabled for the aurora-postgresql engine. + if v, ok := d.GetOk("backtrack_window"); ok && v.(int) > 0 { + createOpts.BacktrackWindow = aws.Int64(int64(v.(int))) + } - d.SetId(d.Get("cluster_identifier").(string)) + if attr, ok := d.GetOk("port"); ok { + createOpts.Port = aws.Int64(int64(attr.(int))) + } - log.Printf("[INFO] RDS Cluster ID: %s", d.Id()) + if attr, ok := d.GetOk("db_subnet_group_name"); ok { + createOpts.DBSubnetGroupName = aws.String(attr.(string)) + } - log.Println("[INFO] Waiting for RDS Cluster to be available") + if attr, ok := d.GetOk("db_cluster_parameter_group_name"); ok { + createOpts.DBClusterParameterGroupName = aws.String(attr.(string)) + } - stateConf := &resource.StateChangeConf{ - Pending: resourceAwsRdsClusterCreatePendingStates, - Target: []string{"available"}, - Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta), - Timeout: d.Timeout(schema.TimeoutCreate), - MinTimeout: 10 * time.Second, - Delay: 30 * time.Second, - } + if attr, ok := d.GetOk("engine_version"); ok { + createOpts.EngineVersion = aws.String(attr.(string)) + } - // Wait, catching any errors - _, err := stateConf.WaitForState() - if err != nil { - return err - } + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { + createOpts.VpcSecurityGroupIds = expandStringList(attr.List()) + } + + if attr := d.Get("availability_zones").(*schema.Set); attr.Len() > 0 { + createOpts.AvailabilityZones = expandStringList(attr.List()) + } + + if v, ok := d.GetOk("backup_retention_period"); ok { + createOpts.BackupRetentionPeriod = aws.Int64(int64(v.(int))) + } + + if v, ok := d.GetOk("preferred_backup_window"); ok { + createOpts.PreferredBackupWindow = aws.String(v.(string)) + } + + if v, ok := d.GetOk("preferred_maintenance_window"); ok { + createOpts.PreferredMaintenanceWindow = aws.String(v.(string)) + } + + if attr, ok := d.GetOk("kms_key_id"); ok { + createOpts.KmsKeyId = aws.String(attr.(string)) + } - err = resourceAwsRDSClusterUpdate(d, meta) + if attr, ok := d.GetOk("source_region"); ok { + createOpts.SourceRegion = aws.String(attr.(string)) + } + + if attr, ok := d.GetOkExists("storage_encrypted"); ok { + createOpts.StorageEncrypted = aws.Bool(attr.(bool)) + } + + if attr, ok := d.GetOk("enabled_cloudwatch_logs_exports"); ok && len(attr.([]interface{})) > 0 { + createOpts.EnableCloudwatchLogsExports = expandStringList(attr.([]interface{})) + } + + log.Printf("[DEBUG] Create RDS Cluster as read replica: %s", createOpts) + var resp *rds.CreateDBClusterOutput + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + resp, err = conn.CreateDBCluster(createOpts) if err != nil { - return err + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) } + return nil + }) + if err != nil { + return fmt.Errorf("error creating RDS cluster: %s", err) } - } else if _, ok := d.GetOk("replication_source_identifier"); ok { - createOpts := &rds.CreateDBClusterInput{ - DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), - Engine: aws.String(d.Get("engine").(string)), - StorageEncrypted: aws.Bool(d.Get("storage_encrypted").(bool)), - ReplicationSourceIdentifier: aws.String(d.Get("replication_source_identifier").(string)), - Tags: tags, + + log.Printf("[DEBUG]: RDS Cluster create response: %s", resp) + + 
} else if v, ok := d.GetOk("s3_import"); ok { + if _, ok := d.GetOk("master_password"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "master_password": required field is not set`, d.Get("name").(string)) + } + if _, ok := d.GetOk("master_username"); !ok { + return fmt.Errorf(`provider.aws: aws_db_instance: %s: "master_username": required field is not set`, d.Get("name").(string)) + } + s3_bucket := v.([]interface{})[0].(map[string]interface{}) + createOpts := &rds.RestoreDBClusterFromS3Input{ + DBClusterIdentifier: aws.String(identifier), + DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), + Engine: aws.String(d.Get("engine").(string)), + MasterUsername: aws.String(d.Get("master_username").(string)), + MasterUserPassword: aws.String(d.Get("master_password").(string)), + S3BucketName: aws.String(s3_bucket["bucket_name"].(string)), + S3IngestionRoleArn: aws.String(s3_bucket["ingestion_role"].(string)), + S3Prefix: aws.String(s3_bucket["bucket_prefix"].(string)), + SourceEngine: aws.String(s3_bucket["source_engine"].(string)), + SourceEngineVersion: aws.String(s3_bucket["source_engine_version"].(string)), + Tags: tags, + } + + // Need to check value > 0 due to: + // InvalidParameterValue: Backtrack is not enabled for the aurora-postgresql engine. + if v, ok := d.GetOk("backtrack_window"); ok && v.(int) > 0 { + createOpts.BacktrackWindow = aws.Int64(int64(v.(int))) + } + + if v := d.Get("database_name"); v.(string) != "" { + createOpts.DatabaseName = aws.String(v.(string)) } if attr, ok := d.GetOk("port"); ok { @@ -413,19 +665,45 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error createOpts.KmsKeyId = aws.String(attr.(string)) } - if attr, ok := d.GetOk("source_region"); ok { - createOpts.SourceRegion = aws.String(attr.(string)) + if attr, ok := d.GetOk("iam_database_authentication_enabled"); ok { + createOpts.EnableIAMDatabaseAuthentication = aws.Bool(attr.(bool)) } - log.Printf("[DEBUG] Create RDS Cluster as read replica: %s", createOpts) - resp, err := conn.CreateDBCluster(createOpts) + if attr, ok := d.GetOk("enabled_cloudwatch_logs_exports"); ok && len(attr.([]interface{})) > 0 { + createOpts.EnableCloudwatchLogsExports = expandStringList(attr.([]interface{})) + } + + if attr, ok := d.GetOkExists("storage_encrypted"); ok { + createOpts.StorageEncrypted = aws.Bool(attr.(bool)) + } + + log.Printf("[DEBUG] RDS Cluster restore options: %s", createOpts) + // Retry for IAM/S3 eventual consistency + err := resource.Retry(5*time.Minute, func() *resource.RetryError { + resp, err := conn.RestoreDBClusterFromS3(createOpts) + if err != nil { + // InvalidParameterValue: Files from the specified Amazon S3 bucket cannot be downloaded. + // Make sure that you have created an AWS Identity and Access Management (IAM) role that lets Amazon RDS access Amazon S3 for you. 
+ if isAWSErr(err, "InvalidParameterValue", "Files from the specified Amazon S3 bucket cannot be downloaded") { + return resource.RetryableError(err) + } + if isAWSErr(err, "InvalidParameterValue", "S3_SNAPSHOT_INGESTION") { + return resource.RetryableError(err) + } + if isAWSErr(err, "InvalidParameterValue", "S3 bucket cannot be found") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + log.Printf("[DEBUG]: RDS Cluster create response: %s", resp) + return nil + }) + if err != nil { log.Printf("[ERROR] Error creating RDS Cluster: %s", err) return err } - log.Printf("[DEBUG]: RDS Cluster create response: %s", resp) - } else { if _, ok := d.GetOk("master_password"); !ok { return fmt.Errorf(`provider.aws: aws_rds_cluster: %s: "master_password": required field is not set`, d.Get("database_name").(string)) @@ -436,12 +714,20 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error } createOpts := &rds.CreateDBClusterInput{ - DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), - Engine: aws.String(d.Get("engine").(string)), - MasterUserPassword: aws.String(d.Get("master_password").(string)), - MasterUsername: aws.String(d.Get("master_username").(string)), - StorageEncrypted: aws.Bool(d.Get("storage_encrypted").(bool)), - Tags: tags, + DBClusterIdentifier: aws.String(identifier), + DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), + Engine: aws.String(d.Get("engine").(string)), + EngineMode: aws.String(d.Get("engine_mode").(string)), + MasterUserPassword: aws.String(d.Get("master_password").(string)), + MasterUsername: aws.String(d.Get("master_username").(string)), + ScalingConfiguration: expandRdsScalingConfiguration(d.Get("scaling_configuration").([]interface{})), + Tags: tags, + } + + // Need to check value > 0 due to: + // InvalidParameterValue: Backtrack is not enabled for the aurora-postgresql engine. 
+ if v, ok := d.GetOk("backtrack_window"); ok && v.(int) > 0 { + createOpts.BacktrackWindow = aws.Int64(int64(v.(int))) } if v := d.Get("database_name"); v.(string) != "" { @@ -460,6 +746,10 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error createOpts.DBClusterParameterGroupName = aws.String(attr.(string)) } + if attr, ok := d.GetOk("engine_version"); ok { + createOpts.EngineVersion = aws.String(attr.(string)) + } + if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { createOpts.VpcSecurityGroupIds = expandStringList(attr.List()) } @@ -488,17 +778,39 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error createOpts.EnableIAMDatabaseAuthentication = aws.Bool(attr.(bool)) } + if attr, ok := d.GetOk("enabled_cloudwatch_logs_exports"); ok && len(attr.([]interface{})) > 0 { + createOpts.EnableCloudwatchLogsExports = expandStringList(attr.([]interface{})) + } + + if attr, ok := d.GetOk("scaling_configuration"); ok && len(attr.([]interface{})) > 0 { + createOpts.ScalingConfiguration = expandRdsScalingConfiguration(attr.([]interface{})) + } + + if attr, ok := d.GetOkExists("storage_encrypted"); ok { + createOpts.StorageEncrypted = aws.Bool(attr.(bool)) + } + log.Printf("[DEBUG] RDS Cluster create options: %s", createOpts) - resp, err := conn.CreateDBCluster(createOpts) + var resp *rds.CreateDBClusterOutput + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + resp, err = conn.CreateDBCluster(createOpts) + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) if err != nil { - log.Printf("[ERROR] Error creating RDS Cluster: %s", err) - return err + return fmt.Errorf("error creating RDS cluster: %s", err) } log.Printf("[DEBUG]: RDS Cluster create response: %s", resp) } - d.SetId(d.Get("cluster_identifier").(string)) + d.SetId(identifier) log.Printf("[INFO] RDS Cluster ID: %s", d.Id()) @@ -508,7 +820,7 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: resourceAwsRdsClusterCreatePendingStates, Target: []string{"available"}, - Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta), + Refresh: resourceAwsRDSClusterStateRefreshFunc(conn, d.Id()), Timeout: d.Timeout(schema.TimeoutCreate), MinTimeout: 10 * time.Second, Delay: 30 * time.Second, @@ -517,7 +829,7 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error // Wait, catching any errors _, err := stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for RDS Cluster state to be \"available\": %s", err) + return fmt.Errorf("Error waiting for RDS Cluster state to be \"available\": %s", err) } if v, ok := d.GetOk("iam_roles"); ok { @@ -529,50 +841,81 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error } } + if requiresModifyDbCluster { + modifyDbClusterInput.DBClusterIdentifier = aws.String(d.Id()) + + log.Printf("[INFO] RDS Cluster (%s) configuration requires ModifyDBCluster: %s", d.Id(), modifyDbClusterInput) + _, err := conn.ModifyDBCluster(modifyDbClusterInput) + if err != nil { + return fmt.Errorf("error modifying RDS Cluster (%s): %s", d.Id(), err) + } + + log.Printf("[INFO] Waiting for RDS Cluster (%s) to be available", d.Id()) + err = waitForRDSClusterUpdate(conn, d.Id(), 
d.Timeout(schema.TimeoutCreate)) + if err != nil { + return fmt.Errorf("error waiting for RDS Cluster (%s) to be available: %s", d.Id(), err) + } + } + return resourceAwsRDSClusterRead(d, meta) } func resourceAwsRDSClusterRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).rdsconn - resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{ + input := &rds.DescribeDBClustersInput{ DBClusterIdentifier: aws.String(d.Id()), - }) + } + + log.Printf("[DEBUG] Describing RDS Cluster: %s", input) + resp, err := conn.DescribeDBClusters(input) + + if isAWSErr(err, rds.ErrCodeDBClusterNotFoundFault, "") { + log.Printf("[WARN] RDS Cluster (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } if err != nil { - if awsErr, ok := err.(awserr.Error); ok { - if "DBClusterNotFoundFault" == awsErr.Code() { - d.SetId("") - log.Printf("[DEBUG] RDS Cluster (%s) not found", d.Id()) - return nil - } - } - log.Printf("[DEBUG] Error describing RDS Cluster (%s)", d.Id()) - return err + return fmt.Errorf("error describing RDS Cluster (%s): %s", d.Id(), err) + } + + if resp == nil { + return fmt.Errorf("Error retrieving RDS cluster: empty response for: %s", input) } var dbc *rds.DBCluster for _, c := range resp.DBClusters { - if *c.DBClusterIdentifier == d.Id() { + if aws.StringValue(c.DBClusterIdentifier) == d.Id() { dbc = c + break } } if dbc == nil { - log.Printf("[WARN] RDS Cluster (%s) not found", d.Id()) + log.Printf("[WARN] RDS Cluster (%s) not found, removing from state", d.Id()) d.SetId("") return nil } - return flattenAwsRdsClusterResource(d, meta, dbc) -} + if err := d.Set("availability_zones", aws.StringValueSlice(dbc.AvailabilityZones)); err != nil { + return fmt.Errorf("error setting availability_zones: %s", err) + } -func flattenAwsRdsClusterResource(d *schema.ResourceData, meta interface{}, dbc *rds.DBCluster) error { - conn := meta.(*AWSClient).rdsconn + d.Set("arn", dbc.DBClusterArn) + d.Set("backtrack_window", int(aws.Int64Value(dbc.BacktrackWindow))) + d.Set("backup_retention_period", dbc.BackupRetentionPeriod) + d.Set("cluster_identifier", dbc.DBClusterIdentifier) - if err := d.Set("availability_zones", aws.StringValueSlice(dbc.AvailabilityZones)); err != nil { - return fmt.Errorf("[DEBUG] Error saving AvailabilityZones to state for RDS Cluster (%s): %s", d.Id(), err) + var cm []string + for _, m := range dbc.DBClusterMembers { + cm = append(cm, aws.StringValue(m.DBInstanceIdentifier)) } + if err := d.Set("cluster_members", cm); err != nil { + return fmt.Errorf("error setting cluster_members: %s", err) + } + + d.Set("cluster_resource_id", dbc.DbClusterResourceId) // Only set the DatabaseName if it is not nil. 
There is a known API bug where // RDS accepts a DatabaseName but does not return it, causing a perpetual @@ -582,58 +925,54 @@ func flattenAwsRdsClusterResource(d *schema.ResourceData, meta interface{}, dbc d.Set("database_name", dbc.DatabaseName) } - d.Set("cluster_identifier", dbc.DBClusterIdentifier) - d.Set("cluster_resource_id", dbc.DbClusterResourceId) - d.Set("db_subnet_group_name", dbc.DBSubnetGroup) d.Set("db_cluster_parameter_group_name", dbc.DBClusterParameterGroup) + d.Set("db_subnet_group_name", dbc.DBSubnetGroup) + d.Set("deletion_protection", dbc.DeletionProtection) + + if err := d.Set("enabled_cloudwatch_logs_exports", aws.StringValueSlice(dbc.EnabledCloudwatchLogsExports)); err != nil { + return fmt.Errorf("error setting enabled_cloudwatch_logs_exports: %s", err) + } + d.Set("endpoint", dbc.Endpoint) - d.Set("engine", dbc.Engine) + d.Set("engine_mode", dbc.EngineMode) d.Set("engine_version", dbc.EngineVersion) + d.Set("engine", dbc.Engine) + d.Set("hosted_zone_id", dbc.HostedZoneId) + d.Set("iam_database_authentication_enabled", dbc.IAMDatabaseAuthenticationEnabled) + + var roles []string + for _, r := range dbc.AssociatedRoles { + roles = append(roles, aws.StringValue(r.RoleArn)) + } + if err := d.Set("iam_roles", roles); err != nil { + return fmt.Errorf("error setting iam_roles: %s", err) + } + + d.Set("kms_key_id", dbc.KmsKeyId) d.Set("master_username", dbc.MasterUsername) d.Set("port", dbc.Port) - d.Set("storage_encrypted", dbc.StorageEncrypted) - d.Set("backup_retention_period", dbc.BackupRetentionPeriod) d.Set("preferred_backup_window", dbc.PreferredBackupWindow) d.Set("preferred_maintenance_window", dbc.PreferredMaintenanceWindow) - d.Set("kms_key_id", dbc.KmsKeyId) d.Set("reader_endpoint", dbc.ReaderEndpoint) d.Set("replication_source_identifier", dbc.ReplicationSourceIdentifier) - d.Set("iam_database_authentication_enabled", dbc.IAMDatabaseAuthenticationEnabled) - d.Set("hosted_zone_id", dbc.HostedZoneId) - var vpcg []string - for _, g := range dbc.VpcSecurityGroups { - vpcg = append(vpcg, *g.VpcSecurityGroupId) - } - if err := d.Set("vpc_security_group_ids", vpcg); err != nil { - return fmt.Errorf("[DEBUG] Error saving VPC Security Group IDs to state for RDS Cluster (%s): %s", d.Id(), err) + if err := d.Set("scaling_configuration", flattenRdsScalingConfigurationInfo(dbc.ScalingConfigurationInfo)); err != nil { + return fmt.Errorf("error setting scaling_configuration: %s", err) } - var cm []string - for _, m := range dbc.DBClusterMembers { - cm = append(cm, *m.DBInstanceIdentifier) - } - if err := d.Set("cluster_members", cm); err != nil { - return fmt.Errorf("[DEBUG] Error saving RDS Cluster Members to state for RDS Cluster (%s): %s", d.Id(), err) - } + d.Set("storage_encrypted", dbc.StorageEncrypted) - var roles []string - for _, r := range dbc.AssociatedRoles { - roles = append(roles, *r.RoleArn) + var vpcg []string + for _, g := range dbc.VpcSecurityGroups { + vpcg = append(vpcg, aws.StringValue(g.VpcSecurityGroupId)) } - - if err := d.Set("iam_roles", roles); err != nil { - return fmt.Errorf("[DEBUG] Error saving IAM Roles to state for RDS Cluster (%s): %s", d.Id(), err) + if err := d.Set("vpc_security_group_ids", vpcg); err != nil { + return fmt.Errorf("error setting vpc_security_group_ids: %s", err) } // Fetch and save tags - arn, err := buildRDSClusterARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - log.Printf("[DEBUG] Error building ARN for RDS Cluster (%s), not setting Tags", 
*dbc.DBClusterIdentifier) - } else { - if err := saveTagsRDS(conn, d, arn); err != nil { - log.Printf("[WARN] Failed to save tags for RDS Cluster (%s): %s", *dbc.DBClusterIdentifier, err) - } + if err := saveTagsRDS(conn, d, aws.StringValue(dbc.DBClusterArn)); err != nil { + log.Printf("[WARN] Failed to save tags for RDS Cluster (%s): %s", aws.StringValue(dbc.DBClusterIdentifier), err) } return nil @@ -648,11 +987,21 @@ func resourceAwsRDSClusterUpdate(d *schema.ResourceData, meta interface{}) error DBClusterIdentifier: aws.String(d.Id()), } + if d.HasChange("backtrack_window") { + req.BacktrackWindow = aws.Int64(int64(d.Get("backtrack_window").(int))) + requestUpdate = true + } + if d.HasChange("master_password") { req.MasterUserPassword = aws.String(d.Get("master_password").(string)) requestUpdate = true } + if d.HasChange("engine_version") { + req.EngineVersion = aws.String(d.Get("engine_version").(string)) + requestUpdate = true + } + if d.HasChange("vpc_security_group_ids") { if attr := d.Get("vpc_security_group_ids").(*schema.Set); attr.Len() > 0 { req.VpcSecurityGroupIds = expandStringList(attr.List()) @@ -683,17 +1032,41 @@ func resourceAwsRDSClusterUpdate(d *schema.ResourceData, meta interface{}) error requestUpdate = true } + if d.HasChange("deletion_protection") { + req.DeletionProtection = aws.Bool(d.Get("deletion_protection").(bool)) + requestUpdate = true + } + if d.HasChange("iam_database_authentication_enabled") { req.EnableIAMDatabaseAuthentication = aws.Bool(d.Get("iam_database_authentication_enabled").(bool)) requestUpdate = true } + if d.HasChange("enabled_cloudwatch_logs_exports") { + d.SetPartial("enabled_cloudwatch_logs_exports") + req.CloudwatchLogsExportConfiguration = buildCloudwatchLogsExportConfiguration(d) + requestUpdate = true + } + + if d.HasChange("scaling_configuration") { + d.SetPartial("scaling_configuration") + req.ScalingConfiguration = expandRdsScalingConfiguration(d.Get("scaling_configuration").([]interface{})) + requestUpdate = true + } + if requestUpdate { err := resource.Retry(5*time.Minute, func() *resource.RetryError { _, err := conn.ModifyDBCluster(req) if err != nil { - awsErr, ok := err.(awserr.Error) - if ok && awsErr.Code() == rds.ErrCodeInvalidDBClusterStateFault { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + + if isAWSErr(err, rds.ErrCodeInvalidDBClusterStateFault, "Cannot modify engine version without a primary instance in DB cluster") { + return resource.NonRetryableError(err) + } + + if isAWSErr(err, rds.ErrCodeInvalidDBClusterStateFault, "") { return resource.RetryableError(err) } return resource.NonRetryableError(err) @@ -703,6 +1076,12 @@ func resourceAwsRDSClusterUpdate(d *schema.ResourceData, meta interface{}) error if err != nil { return fmt.Errorf("Failed to modify RDS Cluster (%s): %s", d.Id(), err) } + + log.Printf("[INFO] Waiting for RDS Cluster (%s) to be available", d.Id()) + err = waitForRDSClusterUpdate(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for RDS Cluster (%s) to be available: %s", d.Id(), err) + } } if d.HasChange("iam_roles") { @@ -734,8 +1113,8 @@ func resourceAwsRDSClusterUpdate(d *schema.ResourceData, meta interface{}) error } } - if arn, err := buildRDSClusterARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(conn, d, arn); err != nil { + if 
d.HasChange("tags") { + if err := setTagsRDS(conn, d, d.Get("arn").(string)); err != nil { return err } else { d.SetPartial("tags") @@ -786,7 +1165,7 @@ func resourceAwsRDSClusterDelete(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: resourceAwsRdsClusterDeletePendingStates, Target: []string{"destroyed"}, - Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta), + Refresh: resourceAwsRDSClusterStateRefreshFunc(conn, d.Id()), Timeout: d.Timeout(schema.TimeoutDelete), MinTimeout: 10 * time.Second, Delay: 30 * time.Second, @@ -795,35 +1174,30 @@ func resourceAwsRDSClusterDelete(d *schema.ResourceData, meta interface{}) error // Wait, catching any errors _, err = stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error deleting RDS Cluster (%s): %s", d.Id(), err) + return fmt.Errorf("Error deleting RDS Cluster (%s): %s", d.Id(), err) } return nil } -func resourceAwsRDSClusterStateRefreshFunc( - d *schema.ResourceData, meta interface{}) resource.StateRefreshFunc { +func resourceAwsRDSClusterStateRefreshFunc(conn *rds.RDS, dbClusterIdentifier string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - conn := meta.(*AWSClient).rdsconn - resp, err := conn.DescribeDBClusters(&rds.DescribeDBClustersInput{ - DBClusterIdentifier: aws.String(d.Id()), + DBClusterIdentifier: aws.String(dbClusterIdentifier), }) + if isAWSErr(err, rds.ErrCodeDBClusterNotFoundFault, "") { + return 42, "destroyed", nil + } + if err != nil { - if awsErr, ok := err.(awserr.Error); ok { - if "DBClusterNotFoundFault" == awsErr.Code() { - return 42, "destroyed", nil - } - } - log.Printf("[WARN] Error on retrieving DB Cluster (%s) when waiting: %s", d.Id(), err) return nil, "", err } var dbc *rds.DBCluster for _, c := range resp.DBClusters { - if *c.DBClusterIdentifier == d.Id() { + if *c.DBClusterIdentifier == dbClusterIdentifier { dbc = c } } @@ -833,26 +1207,13 @@ func resourceAwsRDSClusterStateRefreshFunc( } if dbc.Status != nil { - log.Printf("[DEBUG] DB Cluster status (%s): %s", d.Id(), *dbc.Status) + log.Printf("[DEBUG] DB Cluster status (%s): %s", dbClusterIdentifier, *dbc.Status) } - return dbc, *dbc.Status, nil + return dbc, aws.StringValue(dbc.Status), nil } } -func buildRDSClusterARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct RDS Cluster ARN because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct RDS Cluster ARN because of missing AWS Account ID") - } - - arn := fmt.Sprintf("arn:%s:rds:%s:%s:cluster:%s", partition, region, accountid, identifier) - return arn, nil - -} - func setIamRoleToRdsCluster(clusterIdentifier string, roleArn string, conn *rds.RDS) error { params := &rds.AddRoleToDBClusterInput{ DBClusterIdentifier: aws.String(clusterIdentifier), @@ -894,3 +1255,23 @@ var resourceAwsRdsClusterDeletePendingStates = []string{ "backing-up", "modifying", } + +var resourceAwsRdsClusterUpdatePendingStates = []string{ + "backing-up", + "modifying", + "resetting-master-credentials", + "upgrading", +} + +func waitForRDSClusterUpdate(conn *rds.RDS, id string, timeout time.Duration) error { + stateConf := &resource.StateChangeConf{ + Pending: resourceAwsRdsClusterUpdatePendingStates, + Target: []string{"available"}, + Refresh: resourceAwsRDSClusterStateRefreshFunc(conn, id), + Timeout: timeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, // Wait 30 secs before starting + } + _, 
err := stateConf.WaitForState() + return err +} diff --git a/aws/resource_aws_rds_cluster_instance.go b/aws/resource_aws_rds_cluster_instance.go index 3402f25613a..d39581effb9 100644 --- a/aws/resource_aws_rds_cluster_instance.go +++ b/aws/resource_aws_rds_cluster_instance.go @@ -8,6 +8,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -29,6 +30,11 @@ func resourceAwsRDSClusterInstance() *schema.Resource { }, Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "identifier": { Type: schema.TypeString, Optional: true, @@ -210,7 +216,7 @@ func resourceAwsRDSClusterInstanceCreate(d *schema.ResourceData, meta interface{ PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), PromotionTier: aws.Int64(int64(d.Get("promotion_tier").(int))), AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), - Tags: tags, + Tags: tags, } if attr, ok := d.GetOk("availability_zone"); ok { @@ -243,7 +249,7 @@ func resourceAwsRDSClusterInstanceCreate(d *schema.ResourceData, meta interface{ createOpts.MonitoringRoleArn = aws.String(attr.(string)) } - if attr, _ := d.GetOk("engine"); attr == "aurora-postgresql" { + if attr, _ := d.GetOk("engine"); attr == "aurora-postgresql" || attr == "aurora" { if attr, ok := d.GetOk("performance_insights_enabled"); ok { createOpts.EnablePerformanceInsights = aws.Bool(attr.(bool)) } @@ -266,19 +272,27 @@ func resourceAwsRDSClusterInstanceCreate(d *schema.ResourceData, meta interface{ } log.Printf("[DEBUG] Creating RDS DB Instance opts: %s", createOpts) - resp, err := conn.CreateDBInstance(createOpts) + var resp *rds.CreateDBInstanceOutput + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + resp, err = conn.CreateDBInstance(createOpts) + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) if err != nil { - return err + return fmt.Errorf("error creating RDS DB Instance: %s", err) } d.SetId(*resp.DBInstance.DBInstanceIdentifier) // reuse db_instance refresh func stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "backing-up", "modifying", - "configuring-enhanced-monitoring", "maintenance", - "rebooting", "renaming", "resetting-master-credentials", - "starting", "upgrading"}, + Pending: resourceAwsRdsClusterInstanceCreateUpdatePendingStates, Target: []string{"available"}, Refresh: resourceAwsDbInstanceStateRefreshFunc(d.Id(), conn), Timeout: d.Timeout(schema.TimeoutCreate), @@ -299,7 +313,7 @@ func resourceAwsRDSClusterInstanceRead(d *schema.ResourceData, meta interface{}) db, err := resourceAwsDbInstanceRetrieve(d.Id(), meta.(*AWSClient).rdsconn) // Errors from this helper are always reportable if err != nil { - return fmt.Errorf("[WARN] Error on retrieving RDS Cluster Instance (%s): %s", d.Id(), err) + return fmt.Errorf("Error on retrieving RDS Cluster Instance (%s): %s", d.Id(), err) } // A nil response means "not found" if db == nil { @@ -307,6 +321,10 @@ func resourceAwsRDSClusterInstanceRead(d *schema.ResourceData, meta interface{}) d.SetId("") return nil } + // Database instance is not in RDS Cluster + if db.DBClusterIdentifier == nil { + return fmt.Errorf("Cluster identifier is missing 
from instance (%s). The aws_db_instance resource should be used for non-Aurora instances", d.Id()) + } // Retrieve DB Cluster information, to determine if this Instance is a writer conn := meta.(*AWSClient).rdsconn @@ -322,7 +340,7 @@ func resourceAwsRDSClusterInstanceRead(d *schema.ResourceData, meta interface{}) } if dbc == nil { - return fmt.Errorf("[WARN] Error finding RDS Cluster (%s) for Cluster Instance (%s): %s", + return fmt.Errorf("Error finding RDS Cluster (%s) for Cluster Instance (%s): %s", *db.DBClusterIdentifier, *db.DBInstanceIdentifier, err) } @@ -345,22 +363,23 @@ func resourceAwsRDSClusterInstanceRead(d *schema.ResourceData, meta interface{}) d.Set("db_subnet_group_name", db.DBSubnetGroup.DBSubnetGroupName) } - d.Set("publicly_accessible", db.PubliclyAccessible) + d.Set("arn", db.DBInstanceArn) + d.Set("auto_minor_version_upgrade", db.AutoMinorVersionUpgrade) + d.Set("availability_zone", db.AvailabilityZone) d.Set("cluster_identifier", db.DBClusterIdentifier) - d.Set("engine", db.Engine) + d.Set("dbi_resource_id", db.DbiResourceId) d.Set("engine_version", db.EngineVersion) - d.Set("instance_class", db.DBInstanceClass) + d.Set("engine", db.Engine) d.Set("identifier", db.DBInstanceIdentifier) - d.Set("dbi_resource_id", db.DbiResourceId) - d.Set("storage_encrypted", db.StorageEncrypted) + d.Set("instance_class", db.DBInstanceClass) d.Set("kms_key_id", db.KmsKeyId) - d.Set("auto_minor_version_upgrade", db.AutoMinorVersionUpgrade) - d.Set("promotion_tier", db.PromotionTier) - d.Set("preferred_backup_window", db.PreferredBackupWindow) - d.Set("preferred_maintenance_window", db.PreferredMaintenanceWindow) - d.Set("availability_zone", db.AvailabilityZone) d.Set("performance_insights_enabled", db.PerformanceInsightsEnabled) d.Set("performance_insights_kms_key_id", db.PerformanceInsightsKMSKeyId) + d.Set("preferred_backup_window", db.PreferredBackupWindow) + d.Set("preferred_maintenance_window", db.PreferredMaintenanceWindow) + d.Set("promotion_tier", db.PromotionTier) + d.Set("publicly_accessible", db.PubliclyAccessible) + d.Set("storage_encrypted", db.StorageEncrypted) if db.MonitoringInterval != nil { d.Set("monitoring_interval", db.MonitoringInterval) @@ -374,14 +393,8 @@ func resourceAwsRDSClusterInstanceRead(d *schema.ResourceData, meta interface{}) d.Set("db_parameter_group_name", db.DBParameterGroups[0].DBParameterGroupName) } - // Fetch and save tags - arn, err := buildRDSARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if err != nil { - log.Printf("[DEBUG] Error building ARN for RDS Cluster Instance (%s), not setting Tags", *db.DBInstanceIdentifier) - } else { - if err := saveTagsRDS(conn, d, arn); err != nil { - log.Printf("[WARN] Failed to save tags for RDS Cluster Instance (%s): %s", *db.DBClusterIdentifier, err) - } + if err := saveTagsRDS(conn, d, aws.StringValue(db.DBInstanceArn)); err != nil { + log.Printf("[WARN] Failed to save tags for RDS Cluster Instance (%s): %s", *db.DBClusterIdentifier, err) } return nil @@ -454,20 +467,32 @@ func resourceAwsRDSClusterInstanceUpdate(d *schema.ResourceData, meta interface{ requestUpdate = true } + if d.HasChange("publicly_accessible") { + d.SetPartial("publicly_accessible") + req.PubliclyAccessible = aws.Bool(d.Get("publicly_accessible").(bool)) + requestUpdate = true + } + log.Printf("[DEBUG] Send DB Instance Modification request: %#v", requestUpdate) if requestUpdate { log.Printf("[DEBUG] DB Instance Modification request: %#v", req) - _, err := conn.ModifyDBInstance(req) + 
err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.ModifyDBInstance(req) + if err != nil { + if isAWSErr(err, "InvalidParameterValue", "IAM role ARN value is invalid or does not include the required permissions") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) if err != nil { return fmt.Errorf("Error modifying DB Instance %s: %s", d.Id(), err) } // reuse db_instance refresh func stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "backing-up", "modifying", - "configuring-enhanced-monitoring", "maintenance", - "rebooting", "renaming", "resetting-master-credentials", - "starting", "upgrading"}, + Pending: resourceAwsRdsClusterInstanceCreateUpdatePendingStates, Target: []string{"available"}, Refresh: resourceAwsDbInstanceStateRefreshFunc(d.Id(), conn), Timeout: d.Timeout(schema.TimeoutUpdate), @@ -483,10 +508,8 @@ func resourceAwsRDSClusterInstanceUpdate(d *schema.ResourceData, meta interface{ } - if arn, err := buildRDSARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(conn, d, arn); err != nil { - return err - } + if err := setTagsRDS(conn, d, d.Get("arn").(string)); err != nil { + return err } return resourceAwsRDSClusterInstanceRead(d, meta) @@ -507,7 +530,7 @@ func resourceAwsRDSClusterInstanceDelete(d *schema.ResourceData, meta interface{ // re-uses db_instance refresh func log.Println("[INFO] Waiting for RDS Cluster Instance to be destroyed") stateConf := &resource.StateChangeConf{ - Pending: []string{"modifying", "deleting"}, + Pending: resourceAwsRdsClusterInstanceDeletePendingStates, Target: []string{}, Refresh: resourceAwsDbInstanceStateRefreshFunc(d.Id(), conn), Timeout: d.Timeout(schema.TimeoutDelete), @@ -522,3 +545,23 @@ func resourceAwsRDSClusterInstanceDelete(d *schema.ResourceData, meta interface{ return nil } + +var resourceAwsRdsClusterInstanceCreateUpdatePendingStates = []string{ + "backing-up", + "configuring-enhanced-monitoring", + "configuring-log-exports", + "creating", + "maintenance", + "modifying", + "rebooting", + "renaming", + "resetting-master-credentials", + "starting", + "upgrading", +} + +var resourceAwsRdsClusterInstanceDeletePendingStates = []string{ + "configuring-log-exports", + "modifying", + "deleting", +} diff --git a/aws/resource_aws_rds_cluster_instance_test.go b/aws/resource_aws_rds_cluster_instance_test.go index 03e76454d25..f4b3de67a66 100644 --- a/aws/resource_aws_rds_cluster_instance_test.go +++ b/aws/resource_aws_rds_cluster_instance_test.go @@ -16,10 +16,31 @@ import ( "github.com/aws/aws-sdk-go/service/rds" ) +func TestAccAWSRDSClusterInstance_importBasic(t *testing.T) { + resourceName := "aws_rds_cluster_instance.cluster_instances" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterInstanceConfig(acctest.RandInt()), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSRDSClusterInstance_basic(t *testing.T) { var v rds.DBInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -29,6 +50,7 @@ func TestAccAWSRDSClusterInstance_basic(t *testing.T) { 
Check: resource.ComposeTestCheckFunc( testAccCheckAWSClusterInstanceExists("aws_rds_cluster_instance.cluster_instances", &v), testAccCheckAWSDBClusterInstanceAttributes(&v), + resource.TestMatchResourceAttr("aws_rds_cluster_instance.cluster_instances", "arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:[^:]+:db:.+`)), resource.TestCheckResourceAttr("aws_rds_cluster_instance.cluster_instances", "auto_minor_version_upgrade", "true"), resource.TestCheckResourceAttrSet("aws_rds_cluster_instance.cluster_instances", "preferred_maintenance_window"), resource.TestCheckResourceAttrSet("aws_rds_cluster_instance.cluster_instances", "preferred_backup_window"), @@ -53,7 +75,7 @@ func TestAccAWSRDSClusterInstance_basic(t *testing.T) { func TestAccAWSRDSClusterInstance_az(t *testing.T) { var v rds.DBInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -74,7 +96,7 @@ func TestAccAWSRDSClusterInstance_namePrefix(t *testing.T) { var v rds.DBInstance rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -97,7 +119,7 @@ func TestAccAWSRDSClusterInstance_namePrefix(t *testing.T) { func TestAccAWSRDSClusterInstance_generatedName(t *testing.T) { var v rds.DBInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -119,7 +141,7 @@ func TestAccAWSRDSClusterInstance_kmsKey(t *testing.T) { var v rds.DBInstance keyRegex := regexp.MustCompile("^arn:aws:kms:") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -140,7 +162,7 @@ func TestAccAWSRDSClusterInstance_kmsKey(t *testing.T) { func TestAccAWSRDSClusterInstance_disappears(t *testing.T) { var v rds.DBInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -158,6 +180,40 @@ func TestAccAWSRDSClusterInstance_disappears(t *testing.T) { }) } +func TestAccAWSRDSClusterInstance_PubliclyAccessible(t *testing.T) { + var dbInstance rds.DBInstance + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_rds_cluster_instance.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterInstanceConfig_PubliclyAccessible(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, "publicly_accessible", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, + }, + { + Config: testAccAWSRDSClusterInstanceConfig_PubliclyAccessible(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterInstanceExists(resourceName, &dbInstance), + resource.TestCheckResourceAttr(resourceName, 
"publicly_accessible", "false"), + ), + }, + }, + }) +} + func testAccCheckAWSDBClusterInstanceAttributes(v *rds.DBInstance) resource.TestCheckFunc { return func(s *terraform.State) error { @@ -235,7 +291,7 @@ func testAccCheckAWSClusterInstanceExists(n string, v *rds.DBInstance) resource. func TestAccAWSRDSClusterInstance_withInstanceEnhancedMonitor(t *testing.T) { var v rds.DBInstance - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -255,7 +311,7 @@ func TestAccAWSRDSClusterInstance_withInstancePerformanceInsights(t *testing.T) var v rds.DBInstance keyRegex := regexp.MustCompile("^arn:aws:kms:") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -699,3 +755,22 @@ resource "aws_db_parameter_group" "bar" { } `, n, n, n, n) } + +func testAccAWSRDSClusterInstanceConfig_PubliclyAccessible(rName string, publiclyAccessible bool) string { + return fmt.Sprintf(` +resource "aws_rds_cluster" "test" { + cluster_identifier = %q + master_username = "foo" + master_password = "mustbeeightcharaters" + skip_final_snapshot = true +} + +resource "aws_rds_cluster_instance" "test" { + apply_immediately = true + cluster_identifier = "${aws_rds_cluster.test.id}" + identifier = %q + instance_class = "db.t2.small" + publicly_accessible = %t +} +`, rName, rName, publiclyAccessible) +} diff --git a/aws/resource_aws_rds_cluster_parameter_group.go b/aws/resource_aws_rds_cluster_parameter_group.go index b81504b0683..358abc5e0be 100644 --- a/aws/resource_aws_rds_cluster_parameter_group.go +++ b/aws/resource_aws_rds_cluster_parameter_group.go @@ -26,11 +26,11 @@ func resourceAwsRDSClusterParameterGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "arn": &schema.Schema{ + "arn": { Type: schema.TypeString, Computed: true, }, - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Optional: true, Computed: true, @@ -38,50 +38,43 @@ func resourceAwsRDSClusterParameterGroup() *schema.Resource { ConflictsWith: []string{"name_prefix"}, ValidateFunc: validateDbParamGroupName, }, - "name_prefix": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validateDbParamGroupNamePrefix, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateDbParamGroupNamePrefix, }, - "family": &schema.Schema{ + "family": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, ForceNew: true, Default: "Managed by Terraform", }, - "parameter": &schema.Schema{ + "parameter": { Type: schema.TypeSet, Optional: true, ForceNew: false, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Required: true, }, - "apply_method": &schema.Schema{ + "apply_method": { Type: schema.TypeString, Optional: true, Default: "immediate", - // this parameter is not actually state, but a - // meta-parameter describing how the RDS API call - // to modify the parameter group should be made. 
- // Future reads of the resource from AWS don't tell - // us what we used for apply_method previously, so - // by squashing state to an empty string we avoid - // needing to do an update for every future run. - StateFunc: func(interface{}) string { return "" }, }, }, }, @@ -114,7 +107,7 @@ func resourceAwsRDSClusterParameterGroupCreate(d *schema.ResourceData, meta inte } log.Printf("[DEBUG] Create DB Cluster Parameter Group: %#v", createOpts) - _, err := rdsconn.CreateDBClusterParameterGroup(&createOpts) + output, err := rdsconn.CreateDBClusterParameterGroup(&createOpts) if err != nil { return fmt.Errorf("Error creating DB Cluster Parameter Group: %s", err) } @@ -122,6 +115,9 @@ func resourceAwsRDSClusterParameterGroupCreate(d *schema.ResourceData, meta inte d.SetId(*createOpts.DBClusterParameterGroupName) log.Printf("[INFO] DB Cluster Parameter Group ID: %s", d.Id()) + // Set for update + d.Set("arn", output.DBClusterParameterGroup.DBClusterParameterGroupArn) + return resourceAwsRDSClusterParameterGroupUpdate(d, meta) } @@ -148,14 +144,16 @@ func resourceAwsRDSClusterParameterGroupRead(d *schema.ResourceData, meta interf return fmt.Errorf("Unable to find Cluster Parameter Group: %#v", describeResp.DBClusterParameterGroups) } - d.Set("name", describeResp.DBClusterParameterGroups[0].DBClusterParameterGroupName) - d.Set("family", describeResp.DBClusterParameterGroups[0].DBParameterGroupFamily) + arn := aws.StringValue(describeResp.DBClusterParameterGroups[0].DBClusterParameterGroupArn) + d.Set("arn", arn) d.Set("description", describeResp.DBClusterParameterGroups[0].Description) + d.Set("family", describeResp.DBClusterParameterGroups[0].DBParameterGroupFamily) + d.Set("name", describeResp.DBClusterParameterGroups[0].DBClusterParameterGroupName) // Only include user customized parameters as there's hundreds of system/default ones describeParametersOpts := rds.DescribeDBClusterParametersInput{ DBClusterParameterGroupName: aws.String(d.Id()), - Source: aws.String("user"), + Source: aws.String("user"), } describeParametersResp, err := rdsconn.DescribeDBClusterParameters(&describeParametersOpts) @@ -165,30 +163,18 @@ func resourceAwsRDSClusterParameterGroupRead(d *schema.ResourceData, meta interf d.Set("parameter", flattenParameters(describeParametersResp.Parameters)) - paramGroup := describeResp.DBClusterParameterGroups[0] - arn, err := buildRDSCPGARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) + resp, err := rdsconn.ListTagsForResource(&rds.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) if err != nil { - name := "" - if paramGroup.DBClusterParameterGroupName != nil && *paramGroup.DBClusterParameterGroupName != "" { - name = *paramGroup.DBClusterParameterGroupName - } - log.Printf("[DEBUG] Error building ARN for DB Cluster Parameter Group, not setting Tags for Cluster Param Group %s", name) - } else { - d.Set("arn", arn) - resp, err := rdsconn.ListTagsForResource(&rds.ListTagsForResourceInput{ - ResourceName: aws.String(arn), - }) - - if err != nil { - log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) - } + log.Printf("[DEBUG] Error retrieving tags for ARN: %s", arn) + } - var dt []*rds.Tag - if len(resp.TagList) > 0 { - dt = resp.TagList - } - d.Set("tags", tagsToMapRDS(dt)) + var dt []*rds.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList } + d.Set("tags", tagsToMapRDS(dt)) return nil } @@ -220,7 +206,7 @@ func resourceAwsRDSClusterParameterGroupUpdate(d *schema.ResourceData, meta inte // We can only modify 20 
parameters at a time, so walk them until // we've got them all. for parameters != nil { - paramsToModify := make([]*rds.Parameter, 0) + var paramsToModify []*rds.Parameter if len(parameters) <= rdsClusterParameterGroupMaxParamsBulkEdit { paramsToModify, parameters = parameters[:], nil } else { @@ -242,12 +228,10 @@ func resourceAwsRDSClusterParameterGroupUpdate(d *schema.ResourceData, meta inte } } - if arn, err := buildRDSCPGARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region); err == nil { - if err := setTagsRDS(rdsconn, d, arn); err != nil { - return err - } else { - d.SetPartial("tags") - } + if err := setTagsRDS(rdsconn, d, d.Get("arn").(string)); err != nil { + return err + } else { + d.SetPartial("tags") } d.Partial(false) @@ -292,15 +276,3 @@ func resourceAwsRDSClusterParameterGroupDeleteRefreshFunc( return d, "destroyed", nil } } - -func buildRDSCPGARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct RDS Cluster ARN because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct RDS Cluster ARN because of missing AWS Account ID") - } - arn := fmt.Sprintf("arn:%s:rds:%s:%s:cluster-pg:%s", partition, region, accountid, identifier) - return arn, nil - -} diff --git a/aws/resource_aws_rds_cluster_parameter_group_test.go b/aws/resource_aws_rds_cluster_parameter_group_test.go index c0ecc00d5fe..c023e66af54 100644 --- a/aws/resource_aws_rds_cluster_parameter_group_test.go +++ b/aws/resource_aws_rds_cluster_parameter_group_test.go @@ -15,12 +15,35 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSDBClusterParameterGroup_importBasic(t *testing.T) { + resourceName := "aws_rds_cluster_parameter_group.bar" + + parameterGroupName := fmt.Sprintf("cluster-parameter-group-test-terraform-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBClusterParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBClusterParameterGroupConfig(parameterGroupName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSDBClusterParameterGroup_basic(t *testing.T) { var v rds.DBClusterParameterGroup parameterGroupName := fmt.Sprintf("cluster-parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBClusterParameterGroupDestroy, @@ -30,6 +53,7 @@ func TestAccAWSDBClusterParameterGroup_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAWSDBClusterParameterGroupExists("aws_rds_cluster_parameter_group.bar", &v), testAccCheckAWSDBClusterParameterGroupAttributes(&v, parameterGroupName), + resource.TestMatchResourceAttr("aws_rds_cluster_parameter_group.bar", "arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:\d{12}:cluster-pg:.+`)), resource.TestCheckResourceAttr( "aws_rds_cluster_parameter_group.bar", "name", parameterGroupName), resource.TestCheckResourceAttr( @@ -91,10 +115,50 @@ func TestAccAWSDBClusterParameterGroup_basic(t *testing.T) { }) } +func TestAccAWSDBClusterParameterGroup_withApplyMethod(t *testing.T) { + var v rds.DBClusterParameterGroup + parameterGroupName := 
fmt.Sprintf("cluster-parameter-group-test-terraform-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBClusterParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSDBClusterParameterGroupConfigWithApplyMethod(parameterGroupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSDBClusterParameterGroupExists("aws_rds_cluster_parameter_group.bar", &v), + testAccCheckAWSDBClusterParameterGroupAttributes(&v, parameterGroupName), + resource.TestMatchResourceAttr( + "aws_rds_cluster_parameter_group.bar", "arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:\d{12}:cluster-pg:.+`)), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "name", parameterGroupName), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "family", "aurora5.6"), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "description", "Test cluster parameter group for terraform"), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "parameter.2421266705.name", "character_set_server"), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "parameter.2421266705.value", "utf8"), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "parameter.2421266705.apply_method", "immediate"), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "parameter.2478663599.name", "character_set_client"), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "parameter.2478663599.value", "utf8"), + resource.TestCheckResourceAttr( + "aws_rds_cluster_parameter_group.bar", "parameter.2478663599.apply_method", "pending-reboot"), + ), + }, + }, + }) +} + func TestAccAWSDBClusterParameterGroup_namePrefix(t *testing.T) { var v rds.DBClusterParameterGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBClusterParameterGroupDestroy, @@ -114,7 +178,7 @@ func TestAccAWSDBClusterParameterGroup_namePrefix(t *testing.T) { func TestAccAWSDBClusterParameterGroup_generatedName(t *testing.T) { var v rds.DBClusterParameterGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBClusterParameterGroupDestroy, @@ -134,7 +198,7 @@ func TestAccAWSDBClusterParameterGroup_disappears(t *testing.T) { parameterGroupName := fmt.Sprintf("cluster-parameter-group-test-terraform-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBClusterParameterGroupDestroy, @@ -156,7 +220,7 @@ func TestAccAWSDBClusterParameterGroupOnly(t *testing.T) { parameterGroupName := fmt.Sprintf("cluster-parameter-group-test-tf-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSDBClusterParameterGroupDestroy, @@ -318,6 +382,31 @@ resource "aws_rds_cluster_parameter_group" "bar" { `, name) } +func testAccAWSDBClusterParameterGroupConfigWithApplyMethod(name string) string { + return fmt.Sprintf(` +resource 
"aws_rds_cluster_parameter_group" "bar" { + name = "%s" + family = "aurora5.6" + description = "Test cluster parameter group for terraform" + + parameter { + name = "character_set_server" + value = "utf8" + } + + parameter { + name = "character_set_client" + value = "utf8" + apply_method = "pending-reboot" + } + + tags { + foo = "bar" + } +} +`, name) +} + func testAccAWSDBClusterParameterGroupAddParametersConfig(name string) string { return fmt.Sprintf(` resource "aws_rds_cluster_parameter_group" "bar" { diff --git a/aws/resource_aws_rds_cluster_test.go b/aws/resource_aws_rds_cluster_test.go index 1fd1cbb3891..19b9f2d8a64 100644 --- a/aws/resource_aws_rds_cluster_test.go +++ b/aws/resource_aws_rds_cluster_test.go @@ -1,10 +1,10 @@ package aws import ( + "errors" "fmt" "log" "regexp" - "strings" "testing" "github.com/hashicorp/terraform/helper/acctest" @@ -17,42 +17,106 @@ import ( "github.com/aws/aws-sdk-go/service/rds" ) +func TestAccAWSRDSCluster_importBasic(t *testing.T) { + resourceName := "aws_rds_cluster.default" + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterConfig(ri), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "master_password", "skip_final_snapshot"}, + }, + }, + }) +} + func TestAccAWSRDSCluster_basic(t *testing.T) { - var v rds.DBCluster + var dbCluster rds.DBCluster + rInt := acctest.RandInt() + resourceName := "aws_rds_cluster.default" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSClusterConfig(acctest.RandInt()), + Config: testAccAWSClusterConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSClusterExists("aws_rds_cluster.default", &v), - resource.TestCheckResourceAttr( - "aws_rds_cluster.default", "storage_encrypted", "false"), - resource.TestCheckResourceAttr( - "aws_rds_cluster.default", "db_cluster_parameter_group_name", "default.aurora5.6"), - resource.TestCheckResourceAttrSet( - "aws_rds_cluster.default", "reader_endpoint"), - resource.TestCheckResourceAttrSet( - "aws_rds_cluster.default", "cluster_resource_id"), - resource.TestCheckResourceAttr( - "aws_rds_cluster.default", "engine", "aurora"), - resource.TestCheckResourceAttrSet( - "aws_rds_cluster.default", "engine_version"), - resource.TestCheckResourceAttrSet( - "aws_rds_cluster.default", "hosted_zone_id"), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:rds:[^:]+:\d{12}:cluster:.+`)), + resource.TestCheckResourceAttr(resourceName, "backtrack_window", "0"), + resource.TestCheckResourceAttr(resourceName, "storage_encrypted", "false"), + resource.TestCheckResourceAttr(resourceName, "db_cluster_parameter_group_name", "default.aurora5.6"), + resource.TestCheckResourceAttrSet(resourceName, "reader_endpoint"), + resource.TestCheckResourceAttrSet(resourceName, "cluster_resource_id"), + resource.TestCheckResourceAttr(resourceName, "engine", "aurora"), + resource.TestCheckResourceAttrSet(resourceName, "engine_version"), + resource.TestCheckResourceAttrSet(resourceName, "hosted_zone_id"), + 
resource.TestCheckResourceAttr(resourceName, + "enabled_cloudwatch_logs_exports.0", "audit"), + resource.TestCheckResourceAttr(resourceName, + "enabled_cloudwatch_logs_exports.1", "error"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.#", "0"), ), }, }, }) } +func TestAccAWSRDSCluster_BacktrackWindow(t *testing.T) { + var dbCluster rds.DBCluster + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterConfig_BacktrackWindow(43200), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "backtrack_window", "43200"), + ), + }, + { + Config: testAccAWSClusterConfig_BacktrackWindow(86400), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "backtrack_window", "86400"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + }, + }, + }, + }) +} + func TestAccAWSRDSCluster_namePrefix(t *testing.T) { var v rds.DBCluster - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -69,10 +133,33 @@ func TestAccAWSRDSCluster_namePrefix(t *testing.T) { }) } +func TestAccAWSRDSCluster_s3Restore(t *testing.T) { + var v rds.DBCluster + resourceName := "aws_rds_cluster.test" + bucket := acctest.RandomWithPrefix("tf-acc-test") + uniqueId := acctest.RandomWithPrefix("tf-acc-s3-import-test") + bucketPrefix := acctest.RandString(5) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterConfig_s3Restore(bucket, bucketPrefix, uniqueId), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists("aws_rds_cluster.test", &v), + resource.TestCheckResourceAttr(resourceName, "engine", "aurora"), + ), + }, + }, + }) +} + func TestAccAWSRDSCluster_generatedName(t *testing.T) { var v rds.DBCluster - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -93,7 +180,7 @@ func TestAccAWSRDSCluster_takeFinalSnapshot(t *testing.T) { var v rds.DBCluster rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterSnapshot(rInt), @@ -111,7 +198,7 @@ func TestAccAWSRDSCluster_takeFinalSnapshot(t *testing.T) { /// This is a regression test to make sure that we always cover the scenario as hightlighted in /// https://github.com/hashicorp/terraform/issues/11568 func TestAccAWSRDSCluster_missingUserNameCausesError(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckAWSClusterDestroy, @@ -128,7 +215,7 @@ func TestAccAWSRDSCluster_updateTags(t *testing.T) { var v rds.DBCluster ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -153,11 +240,44 @@ func TestAccAWSRDSCluster_updateTags(t *testing.T) { }) } +func TestAccAWSRDSCluster_updateCloudwatchLogsExports(t *testing.T) { + var v rds.DBCluster + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterConfig(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists("aws_rds_cluster.default", &v), + resource.TestCheckResourceAttr("aws_rds_cluster.default", + "enabled_cloudwatch_logs_exports.0", "audit"), + resource.TestCheckResourceAttr("aws_rds_cluster.default", + "enabled_cloudwatch_logs_exports.1", "error"), + ), + }, + { + Config: testAccAWSClusterConfigUpdatedCloudwatchLogsExports(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists("aws_rds_cluster.default", &v), + resource.TestCheckResourceAttr("aws_rds_cluster.default", + "enabled_cloudwatch_logs_exports.0", "error"), + resource.TestCheckResourceAttr("aws_rds_cluster.default", + "enabled_cloudwatch_logs_exports.1", "slowquery"), + ), + }, + }, + }) +} + func TestAccAWSRDSCluster_updateIamRoles(t *testing.T) { var v rds.DBCluster ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -192,7 +312,7 @@ func TestAccAWSRDSCluster_kmsKey(t *testing.T) { var v rds.DBCluster keyRegex := regexp.MustCompile("^arn:aws:kms:") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -212,7 +332,7 @@ func TestAccAWSRDSCluster_kmsKey(t *testing.T) { func TestAccAWSRDSCluster_encrypted(t *testing.T) { var v rds.DBCluster - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -239,7 +359,7 @@ func TestAccAWSRDSCluster_EncryptedCrossRegionReplication(t *testing.T) { // check for the cluster in each region var providers []*schema.Provider - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, ProviderFactories: testAccProviderFactories(&providers), CheckDestroy: testAccCheckWithProviders(testAccCheckAWSClusterDestroyWithProvider, &providers), @@ -261,7 +381,7 @@ func TestAccAWSRDSCluster_backupsUpdate(t *testing.T) { var v rds.DBCluster ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSClusterDestroy, @@ -279,7 +399,7 @@ func TestAccAWSRDSCluster_backupsUpdate(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccAWSClusterConfig_backupsUpdate(ri), Check: resource.ComposeTestCheckFunc( testAccCheckAWSClusterExists("aws_rds_cluster.default", &v), @@ -295,20 
+415,754 @@ func TestAccAWSRDSCluster_backupsUpdate(t *testing.T) { }) } -func TestAccAWSRDSCluster_iamAuth(t *testing.T) { - var v rds.DBCluster +func TestAccAWSRDSCluster_iamAuth(t *testing.T) { + var v rds.DBCluster + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterConfig_iamAuth(acctest.RandInt()), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists("aws_rds_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_rds_cluster.default", "iam_database_authentication_enabled", "true"), + ), + }, + }, + }) +} + +func TestAccAWSRDSCluster_DeletionProtection(t *testing.T) { + var dbCluster1 rds.DBCluster + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_DeletionProtection(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster1), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + { + Config: testAccAWSRDSClusterConfig_DeletionProtection(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster1), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "false"), + ), + }, + }, + }) +} + +func TestAccAWSRDSCluster_EngineMode(t *testing.T) { + var dbCluster1, dbCluster2 rds.DBCluster + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_EngineMode(rName, "serverless"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster1), + resource.TestCheckResourceAttr(resourceName, "engine_mode", "serverless"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + { + Config: testAccAWSRDSClusterConfig_EngineMode(rName, "provisioned"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster2), + testAccCheckAWSClusterRecreated(&dbCluster1, &dbCluster2), + resource.TestCheckResourceAttr(resourceName, "engine_mode", "provisioned"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +func TestAccAWSRDSCluster_EngineMode_ParallelQuery(t *testing.T) { + var dbCluster1 rds.DBCluster + + rName := acctest.RandomWithPrefix("tf-acc-test") + 
resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_EngineMode(rName, "parallelquery"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster1), + resource.TestCheckResourceAttr(resourceName, "engine_mode", "parallelquery"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +func TestAccAWSRDSCluster_EngineVersion(t *testing.T) { + var dbCluster rds.DBCluster + rInt := acctest.RandInt() + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterConfig_EngineVersion(rInt, "aurora-postgresql", "9.6.3"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "engine", "aurora-postgresql"), + resource.TestCheckResourceAttr(resourceName, "engine_version", "9.6.3"), + ), + }, + { + Config: testAccAWSClusterConfig_EngineVersion(rInt, "aurora-postgresql", "9.6.6"), + ExpectError: regexp.MustCompile(`Cannot modify engine version without a primary instance in DB cluster`), + }, + }, + }) +} + +func TestAccAWSRDSCluster_EngineVersionWithPrimaryInstance(t *testing.T) { + var dbCluster rds.DBCluster + rInt := acctest.RandInt() + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterConfig_EngineVersionWithPrimaryInstance(rInt, "aurora-postgresql", "9.6.3"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "engine", "aurora-postgresql"), + resource.TestCheckResourceAttr(resourceName, "engine_version", "9.6.3"), + ), + }, + { + Config: testAccAWSClusterConfig_EngineVersionWithPrimaryInstance(rInt, "aurora-postgresql", "9.6.6"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "engine", "aurora-postgresql"), + resource.TestCheckResourceAttr(resourceName, "engine_version", "9.6.6"), + ), + }, + }, + }) +} + +func TestAccAWSRDSCluster_Port(t *testing.T) { + var dbCluster1, dbCluster2 rds.DBCluster + rInt := acctest.RandInt() + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterConfig_Port(rInt, 5432), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster1), + resource.TestCheckResourceAttr(resourceName, "port", "5432"), + ), + }, + { + Config: testAccAWSClusterConfig_Port(rInt, 2345), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, 
&dbCluster2), + testAccCheckAWSClusterRecreated(&dbCluster1, &dbCluster2), + resource.TestCheckResourceAttr(resourceName, "port", "2345"), + ), + }, + }, + }) +} + +func TestAccAWSRDSCluster_ScalingConfiguration(t *testing.T) { + var dbCluster rds.DBCluster + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_ScalingConfiguration(rName, false, 128, 4, 301), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.0.auto_pause", "false"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.0.max_capacity", "128"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.0.min_capacity", "4"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.0.seconds_until_auto_pause", "301"), + ), + }, + { + Config: testAccAWSRDSClusterConfig_ScalingConfiguration(rName, true, 256, 8, 86400), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.0.auto_pause", "true"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.0.max_capacity", "256"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.0.min_capacity", "8"), + resource.TestCheckResourceAttr(resourceName, "scaling_configuration.0.seconds_until_auto_pause", "86400"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + ), + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_DeletionProtection(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: 
testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_DeletionProtection(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + // Ensure we disable deletion protection before attempting to delete :) + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_DeletionProtection(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "deletion_protection", "false"), + ), + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_EngineMode_ParallelQuery(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_EngineMode(rName, "parallelquery"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "engine_mode", "parallelquery"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_EngineMode_Provisioned(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_EngineMode(rName, "provisioned"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + 
resource.TestCheckResourceAttr(resourceName, "engine_mode", "provisioned"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_EngineMode_Serverless(t *testing.T) { + // The below is according to AWS Support. This test can be updated in the future + // to initialize some data. + t.Skip("serverless does not support snapshot restore on an empty volume") + + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_EngineMode(rName, "serverless"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "engine_mode", "serverless"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +// Reference: https://github.com/terraform-providers/terraform-provider-aws/issues/6157 +func TestAccAWSRDSCluster_SnapshotIdentifier_EngineVersion_Different(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_EngineVersion(rName, "aurora-postgresql", "9.6.3", "9.6.6"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "engine_version", "9.6.6"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +// Reference: https://github.com/terraform-providers/terraform-provider-aws/issues/6157 +func TestAccAWSRDSCluster_SnapshotIdentifier_EngineVersion_Equal(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := 
acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_EngineVersion(rName, "aurora-postgresql", "9.6.3", "9.6.3"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "engine_version", "9.6.3"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_PreferredBackupWindow(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_PreferredBackupWindow(rName, "00:00-08:00"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "preferred_backup_window", "00:00-08:00"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_PreferredMaintenanceWindow(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_PreferredMaintenanceWindow(rName, "sun:01:00-sun:01:30"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "preferred_maintenance_window", "sun:01:00-sun:01:30"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + 
ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + "cluster_identifier_prefix", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_Tags(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_Tags(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_VpcSecurityGroupIds(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_VpcSecurityGroupIds(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + ), + }, + }, + }) +} + +// Regression reference: https://github.com/terraform-providers/terraform-provider-aws/issues/5450 +// This acceptance test explicitly tests when snapshot_identifer is set, +// vpc_security_group_ids is set (which triggered the resource update function), +// and tags is set which was missing its ARN used for tagging +func TestAccAWSRDSCluster_SnapshotIdentifier_VpcSecurityGroupIds_Tags(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_VpcSecurityGroupIds_Tags(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + 
resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + }, + }) +} + +func TestAccAWSRDSCluster_SnapshotIdentifier_EncryptedRestore(t *testing.T) { + var dbCluster, sourceDbCluster rds.DBCluster + var dbClusterSnapshot rds.DBClusterSnapshot + + keyRegex := regexp.MustCompile("^arn:aws:kms:") + + rName := acctest.RandomWithPrefix("tf-acc-test") + sourceDbResourceName := "aws_rds_cluster.source" + snapshotResourceName := "aws_db_cluster_snapshot.test" + resourceName := "aws_rds_cluster.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, - CheckDestroy: testAccCheckAWSClusterDestroy, + CheckDestroy: testAccCheckAWSDBInstanceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSClusterConfig_iamAuth(acctest.RandInt()), + Config: testAccAWSRDSClusterConfig_SnapshotIdentifier_EncryptedRestore(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSClusterExists("aws_rds_cluster.default", &v), + testAccCheckAWSClusterExists(sourceDbResourceName, &sourceDbCluster), + testAccCheckDbClusterSnapshotExists(snapshotResourceName, &dbClusterSnapshot), + testAccCheckAWSClusterExists(resourceName, &dbCluster), + resource.TestMatchResourceAttr( + "aws_rds_cluster.test", "kms_key_id", keyRegex), resource.TestCheckResourceAttr( - "aws_rds_cluster.default", "iam_database_authentication_enabled", "true"), + "aws_rds_cluster.test", "storage_encrypted", "true"), ), }, }, @@ -367,12 +1221,6 @@ func testAccCheckAWSClusterSnapshot(rInt int) resource.TestCheckFunc { awsClient := testAccProvider.Meta().(*AWSClient) conn := awsClient.rdsconn - arn, arnErr := buildRDSClusterARN(snapshot_identifier, awsClient.partition, awsClient.accountid, awsClient.region) - tagsARN := strings.Replace(arn, ":cluster:", ":snapshot:", 1) - if arnErr != nil { - return fmt.Errorf("Error building ARN for tags check with ARN (%s): %s", tagsARN, arnErr) - } - log.Printf("[INFO] Deleting the Snapshot %s", snapshot_identifier) _, snapDeleteErr := conn.DeleteDBClusterSnapshot( &rds.DeleteDBClusterSnapshotInput{ @@ -446,6 +1294,16 @@ func testAccCheckAWSClusterExistsWithProvider(n string, v *rds.DBCluster, provid } } +func testAccCheckAWSClusterRecreated(i, j *rds.DBCluster) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.TimeValue(i.ClusterCreateTime) == aws.TimeValue(j.ClusterCreateTime) { + return errors.New("RDS Cluster was not recreated") + } + + return nil + } +} + func testAccAWSClusterConfig(n int) string { return fmt.Sprintf(` resource "aws_rds_cluster" "default" { @@ -459,9 +1317,26 @@ resource "aws_rds_cluster" "default" { tags { Environment = "production" } + enabled_cloudwatch_logs_exports = [ + "audit", + "error", + ] }`, n) } +func testAccAWSClusterConfig_BacktrackWindow(backtrackWindow int) string { + return fmt.Sprintf(` +resource "aws_rds_cluster" "test" { + apply_immediately = true + backtrack_window = %d + cluster_identifier_prefix = "tf-acc-test-" + master_password = "mustbeeightcharaters" + master_username = "test" + skip_final_snapshot = true +} +`, backtrackWindow) +} + func testAccAWSClusterConfig_namePrefix(n int) string { return fmt.Sprintf(` resource "aws_rds_cluster" "test" { @@ -504,6 +1379,124 @@ resource "aws_db_subnet_group" "test" { `, n) } +func testAccAWSClusterConfig_s3Restore(bucketName string, bucketPrefix string, uniqueId string) string { + return fmt.Sprintf(` + +data "aws_region" "current" {} + +resource "aws_s3_bucket" "xtrabackup" { 
+ bucket = "%s" + region = "${data.aws_region.current.name}" +} + +resource "aws_s3_bucket_object" "xtrabackup_db" { + bucket = "${aws_s3_bucket.xtrabackup.id}" + key = "%s/mysql-5-6-xtrabackup.tar.gz" + source = "../files/mysql-5-6-xtrabackup.tar.gz" + etag = "${md5(file("../files/mysql-5-6-xtrabackup.tar.gz"))}" +} + + + +resource "aws_iam_role" "rds_s3_access_role" { + name = "%s-role" + assume_role_policy = < 1 { @@ -484,7 +491,7 @@ func resourceAwsRedshiftClusterCreate(d *schema.ResourceData, meta interface{}) _, err := stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error waiting for Redshift Cluster state to be \"available\": %s", err) + return fmt.Errorf("Error waiting for Redshift Cluster state to be \"available\": %s", err) } if v, ok := d.GetOk("snapshot_copy"); ok { @@ -566,6 +573,7 @@ func resourceAwsRedshiftClusterRead(d *schema.ResourceData, meta interface{}) er if rsc.Endpoint.Port != nil { endpoint = fmt.Sprintf("%s:%d", endpoint, *rsc.Endpoint.Port) } + d.Set("dns_name", rsc.Endpoint.Address) d.Set("port", rsc.Endpoint.Port) d.Set("endpoint", endpoint) } @@ -583,7 +591,7 @@ func resourceAwsRedshiftClusterRead(d *schema.ResourceData, meta interface{}) er vpcg = append(vpcg, *g.VpcSecurityGroupId) } if err := d.Set("vpc_security_group_ids", vpcg); err != nil { - return fmt.Errorf("[DEBUG] Error saving VPC Security Group IDs to state for Redshift Cluster (%s): %s", d.Id(), err) + return fmt.Errorf("Error saving VPC Security Group IDs to state for Redshift Cluster (%s): %s", d.Id(), err) } var csg []string @@ -591,7 +599,7 @@ func resourceAwsRedshiftClusterRead(d *schema.ResourceData, meta interface{}) er csg = append(csg, *g.ClusterSecurityGroupName) } if err := d.Set("cluster_security_groups", csg); err != nil { - return fmt.Errorf("[DEBUG] Error saving Cluster Security Group Names to state for Redshift Cluster (%s): %s", d.Id(), err) + return fmt.Errorf("Error saving Cluster Security Group Names to state for Redshift Cluster (%s): %s", d.Id(), err) } var iamRoles []string @@ -599,7 +607,7 @@ func resourceAwsRedshiftClusterRead(d *schema.ResourceData, meta interface{}) er iamRoles = append(iamRoles, *i.IamRoleArn) } if err := d.Set("iam_roles", iamRoles); err != nil { - return fmt.Errorf("[DEBUG] Error saving IAM Roles to state for Redshift Cluster (%s): %s", d.Id(), err) + return fmt.Errorf("Error saving IAM Roles to state for Redshift Cluster (%s): %s", d.Id(), err) } d.Set("cluster_public_key", rsc.ClusterPublicKey) @@ -622,15 +630,17 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{}) conn := meta.(*AWSClient).redshiftconn d.Partial(true) - arn, tagErr := buildRedshiftARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if tagErr != nil { - return fmt.Errorf("Error building ARN for Redshift Cluster, not updating Tags for cluster %s", d.Id()) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "redshift", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("cluster:%s", d.Id()), + }.String() + if tagErr := setTagsRedshift(conn, d, arn); tagErr != nil { + return tagErr } else { - if tagErr := setTagsRedshift(conn, d, arn); tagErr != nil { - return tagErr - } else { - d.SetPartial("tags") - } + d.SetPartial("tags") } requestUpdate := false @@ -639,25 +649,17 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{}) ClusterIdentifier: aws.String(d.Id()), } - if 
d.HasChange("cluster_type") { + // If the cluster type, node type, or number of nodes changed, then the AWS API expects all three + // items to be sent over + if d.HasChange("cluster_type") || d.HasChange("node_type") || d.HasChange("number_of_nodes") { req.ClusterType = aws.String(d.Get("cluster_type").(string)) - requestUpdate = true - } - - if d.HasChange("node_type") { req.NodeType = aws.String(d.Get("node_type").(string)) - requestUpdate = true - } - - if d.HasChange("number_of_nodes") { if v := d.Get("number_of_nodes").(int); v > 1 { req.ClusterType = aws.String("multi-node") req.NumberOfNodes = aws.Int64(int64(d.Get("number_of_nodes").(int))) } else { req.ClusterType = aws.String("single-node") } - - req.NodeType = aws.String(d.Get("node_type").(string)) requestUpdate = true } @@ -716,7 +718,7 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Redshift Cluster Modify options: %s", req) _, err := conn.ModifyCluster(req) if err != nil { - return fmt.Errorf("[WARN] Error modifying Redshift Cluster (%s): %s", d.Id(), err) + return fmt.Errorf("Error modifying Redshift Cluster (%s): %s", d.Id(), err) } } @@ -746,7 +748,7 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Redshift Cluster Modify IAM Role options: %s", req) _, err := conn.ModifyClusterIamRoles(req) if err != nil { - return fmt.Errorf("[WARN] Error modifying Redshift Cluster IAM Roles (%s): %s", d.Id(), err) + return fmt.Errorf("Error modifying Redshift Cluster IAM Roles (%s): %s", d.Id(), err) } d.SetPartial("iam_roles") @@ -765,7 +767,7 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{}) // Wait, catching any errors _, err := stateConf.WaitForState() if err != nil { - return fmt.Errorf("[WARN] Error Modifying Redshift Cluster (%s): %s", d.Id(), err) + return fmt.Errorf("Error Modifying Redshift Cluster (%s): %s", d.Id(), err) } } @@ -785,31 +787,38 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{}) } } - deprecatedHasChange := (d.HasChange("enable_logging") || d.HasChange("bucket_name") || d.HasChange("s3_key_prefix")) - if d.HasChange("logging") || deprecatedHasChange { - var loggingErr error - - logging, ok := d.GetOk("logging.0.enable") - _, deprecatedOk := d.GetOk("enable_logging") - - if (ok && logging.(bool)) || deprecatedOk { + if d.HasChange("logging") { + if loggingEnabled, ok := d.GetOk("logging.0.enable"); ok && loggingEnabled.(bool) { + log.Printf("[INFO] Enabling Logging for Redshift Cluster %q", d.Id()) + err := enableRedshiftClusterLogging(d, conn) + if err != nil { + return err + } + } else { + log.Printf("[INFO] Disabling Logging for Redshift Cluster %q", d.Id()) + _, err := conn.DisableLogging(&redshift.DisableLoggingInput{ + ClusterIdentifier: aws.String(d.Id()), + }) + if err != nil { + return err + } + } + } else if d.HasChange("enable_logging") || d.HasChange("bucket_name") || d.HasChange("s3_key_prefix") { + if enableLogging, ok := d.GetOk("enable_logging"); ok && enableLogging.(bool) { log.Printf("[INFO] Enabling Logging for Redshift Cluster %q", d.Id()) - loggingErr = enableRedshiftClusterLogging(d, conn) - if loggingErr != nil { - return loggingErr + err := enableRedshiftClusterLogging(d, conn) + if err != nil { + return err } } else { log.Printf("[INFO] Disabling Logging for Redshift Cluster %q", d.Id()) - _, loggingErr = conn.DisableLogging(&redshift.DisableLoggingInput{ + _, err := 
conn.DisableLogging(&redshift.DisableLoggingInput{ ClusterIdentifier: aws.String(d.Id()), }) - if loggingErr != nil { - return loggingErr + if err != nil { + return err } } - - d.SetPartial("enable_logging") - d.SetPartial("logging") } d.Partial(false) @@ -893,7 +902,7 @@ func resourceAwsRedshiftClusterDelete(d *schema.ResourceData, meta interface{}) } log.Printf("[DEBUG] Deleting Redshift Cluster: %s", deleteOpts) - _, err := deleteAwsRedshiftCluster(&deleteOpts, conn) + err := deleteAwsRedshiftCluster(&deleteOpts, conn) if err != nil { return err } @@ -903,7 +912,7 @@ func resourceAwsRedshiftClusterDelete(d *schema.ResourceData, meta interface{}) return nil } -func deleteAwsRedshiftCluster(opts *redshift.DeleteClusterInput, conn *redshift.Redshift) (interface{}, error) { +func deleteAwsRedshiftCluster(opts *redshift.DeleteClusterInput, conn *redshift.Redshift) error { id := *opts.ClusterIdentifier log.Printf("[INFO] Deleting Redshift Cluster %q", id) err := resource.Retry(15*time.Minute, func() *resource.RetryError { @@ -915,8 +924,7 @@ func deleteAwsRedshiftCluster(opts *redshift.DeleteClusterInput, conn *redshift. return resource.NonRetryableError(err) }) if err != nil { - return nil, fmt.Errorf("[ERROR] Error deleting Redshift Cluster (%s): %s", - id, err) + return fmt.Errorf("Error deleting Redshift Cluster (%s): %s", id, err) } stateConf := &resource.StateChangeConf{ @@ -927,7 +935,9 @@ func deleteAwsRedshiftCluster(opts *redshift.DeleteClusterInput, conn *redshift. MinTimeout: 5 * time.Second, } - return stateConf.WaitForState() + _, err = stateConf.WaitForState() + + return err } func resourceAwsRedshiftClusterStateRefreshFunc(id string, conn *redshift.Redshift) resource.StateRefreshFunc { @@ -1065,15 +1075,3 @@ func validateRedshiftClusterMasterPassword(v interface{}, k string) (ws []string } return } - -func buildRedshiftARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct cluster ARN because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct cluster ARN because of missing AWS Account ID") - } - arn := fmt.Sprintf("arn:%s:redshift:%s:%s:cluster:%s", partition, region, accountid, identifier) - return arn, nil - -} diff --git a/aws/resource_aws_redshift_cluster_test.go b/aws/resource_aws_redshift_cluster_test.go index 04af649ee6a..61626910144 100644 --- a/aws/resource_aws_redshift_cluster_test.go +++ b/aws/resource_aws_redshift_cluster_test.go @@ -3,11 +3,9 @@ package aws import ( "fmt" "log" - "math/rand" "regexp" "strings" "testing" - "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -31,7 +29,7 @@ func testSweepRedshiftClusters(region string) error { } conn := client.(*AWSClient).redshiftconn - return conn.DescribeClustersPages(&redshift.DescribeClustersInput{}, func(resp *redshift.DescribeClustersOutput, isLast bool) bool { + err = conn.DescribeClustersPages(&redshift.DescribeClustersInput{}, func(resp *redshift.DescribeClustersOutput, isLast bool) bool { if len(resp.Clusters) == 0 { log.Print("[DEBUG] No Redshift clusters to sweep") return false @@ -55,6 +53,14 @@ func testSweepRedshiftClusters(region string) error { } return !isLast }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Redshift Cluster sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Redshift Clusters: %s", err) + } + return nil } func 
TestValidateRedshiftClusterDbName(t *testing.T) { @@ -93,13 +99,36 @@ func TestValidateRedshiftClusterDbName(t *testing.T) { } } +func TestAccAWSRedshiftCluster_importBasic(t *testing.T) { + resourceName := "aws_redshift_cluster.default" + config := testAccAWSRedshiftClusterConfig_basic(acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, + Steps: []resource.TestStep{ + { + Config: config, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"master_password", "skip_final_snapshot"}, + }, + }, + }) +} + func TestAccAWSRedshiftCluster_basic(t *testing.T) { var v redshift.Cluster - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + ri := acctest.RandInt() config := testAccAWSRedshiftClusterConfig_basic(ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -112,6 +141,7 @@ func TestAccAWSRedshiftCluster_basic(t *testing.T) { "aws_redshift_cluster.default", "cluster_type", "single-node"), resource.TestCheckResourceAttr( "aws_redshift_cluster.default", "publicly_accessible", "true"), + resource.TestMatchResourceAttr("aws_redshift_cluster.default", "dns_name", regexp.MustCompile(fmt.Sprintf("^tf-redshift-cluster-%d.*\\.redshift\\..*", ri))), ), }, }, @@ -123,7 +153,7 @@ func TestAccAWSRedshiftCluster_withFinalSnapshot(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterSnapshot(rInt), @@ -141,11 +171,11 @@ func TestAccAWSRedshiftCluster_withFinalSnapshot(t *testing.T) { func TestAccAWSRedshiftCluster_kmsKey(t *testing.T) { var v redshift.Cluster - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + ri := acctest.RandInt() config := testAccAWSRedshiftClusterConfig_kmsKey(ri) keyRegex := regexp.MustCompile("^arn:aws:([a-zA-Z0-9\\-])+:([a-z]{2}-[a-z]+-\\d{1})?:(\\d{12})?:(.*)$") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -168,11 +198,11 @@ func TestAccAWSRedshiftCluster_kmsKey(t *testing.T) { func TestAccAWSRedshiftCluster_enhancedVpcRoutingEnabled(t *testing.T) { var v redshift.Cluster - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + ri := acctest.RandInt() preConfig := testAccAWSRedshiftClusterConfig_enhancedVpcRoutingEnabled(ri) postConfig := testAccAWSRedshiftClusterConfig_enhancedVpcRoutingDisabled(ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -201,7 +231,7 @@ func TestAccAWSRedshiftCluster_loggingEnabledDeprecated(t *testing.T) { var v redshift.Cluster rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -233,7 +263,7 @@ func TestAccAWSRedshiftCluster_loggingEnabled(t *testing.T) { var v 
redshift.Cluster rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -265,7 +295,7 @@ func TestAccAWSRedshiftCluster_snapshotCopy(t *testing.T) { var v redshift.Cluster rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -295,11 +325,11 @@ func TestAccAWSRedshiftCluster_snapshotCopy(t *testing.T) { func TestAccAWSRedshiftCluster_iamRoles(t *testing.T) { var v redshift.Cluster - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + ri := acctest.RandInt() preConfig := testAccAWSRedshiftClusterConfig_iamRoles(ri) postConfig := testAccAWSRedshiftClusterConfig_updateIamRoles(ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -329,7 +359,7 @@ func TestAccAWSRedshiftCluster_publiclyAccessible(t *testing.T) { var v redshift.Cluster rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -358,11 +388,11 @@ func TestAccAWSRedshiftCluster_publiclyAccessible(t *testing.T) { func TestAccAWSRedshiftCluster_updateNodeCount(t *testing.T) { var v redshift.Cluster - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + ri := acctest.RandInt() preConfig := testAccAWSRedshiftClusterConfig_basic(ri) postConfig := testAccAWSRedshiftClusterConfig_updateNodeCount(ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -382,6 +412,47 @@ func TestAccAWSRedshiftCluster_updateNodeCount(t *testing.T) { testAccCheckAWSRedshiftClusterExists("aws_redshift_cluster.default", &v), resource.TestCheckResourceAttr( "aws_redshift_cluster.default", "number_of_nodes", "2"), + resource.TestCheckResourceAttr( + "aws_redshift_cluster.default", "cluster_type", "multi-node"), + resource.TestCheckResourceAttr( + "aws_redshift_cluster.default", "node_type", "dc1.large"), + ), + }, + }, + }) +} + +func TestAccAWSRedshiftCluster_updateNodeType(t *testing.T) { + var v redshift.Cluster + + ri := acctest.RandInt() + preConfig := testAccAWSRedshiftClusterConfig_basic(ri) + postConfig := testAccAWSRedshiftClusterConfig_updateNodeType(ri) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, + Steps: []resource.TestStep{ + { + Config: preConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftClusterExists("aws_redshift_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_redshift_cluster.default", "node_type", "dc1.large"), + ), + }, + + { + Config: postConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftClusterExists("aws_redshift_cluster.default", &v), + resource.TestCheckResourceAttr( + "aws_redshift_cluster.default", "number_of_nodes", "1"), + resource.TestCheckResourceAttr( + 
"aws_redshift_cluster.default", "cluster_type", "single-node"), + resource.TestCheckResourceAttr( + "aws_redshift_cluster.default", "node_type", "dc2.large"), ), }, }, @@ -391,11 +462,11 @@ func TestAccAWSRedshiftCluster_updateNodeCount(t *testing.T) { func TestAccAWSRedshiftCluster_tags(t *testing.T) { var v redshift.Cluster - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + ri := acctest.RandInt() preConfig := testAccAWSRedshiftClusterConfig_tags(ri) postConfig := testAccAWSRedshiftClusterConfig_updatedTags(ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -426,11 +497,11 @@ func TestAccAWSRedshiftCluster_tags(t *testing.T) { func TestAccAWSRedshiftCluster_forceNewUsername(t *testing.T) { var first, second redshift.Cluster - ri := rand.New(rand.NewSource(time.Now().UnixNano())).Int() + ri := acctest.RandInt() preConfig := testAccAWSRedshiftClusterConfig_basic(ri) postConfig := testAccAWSRedshiftClusterConfig_updatedUsername(ri) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, @@ -456,6 +527,39 @@ func TestAccAWSRedshiftCluster_forceNewUsername(t *testing.T) { }) } +func TestAccAWSRedshiftCluster_changeAvailabilityZone(t *testing.T) { + var first, second redshift.Cluster + + ri := acctest.RandInt() + preConfig := testAccAWSRedshiftClusterConfig_basic(ri) + postConfig := testAccAWSRedshiftClusterConfig_updatedAvailabilityZone(ri) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftClusterDestroy, + Steps: []resource.TestStep{ + { + Config: preConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftClusterExists("aws_redshift_cluster.default", &first), + testAccCheckAWSRedshiftClusterAvailabilityZone(&first, "us-west-2a"), + resource.TestCheckResourceAttr("aws_redshift_cluster.default", "availability_zone", "us-west-2a"), + ), + }, + + { + Config: postConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftClusterExists("aws_redshift_cluster.default", &second), + testAccCheckAWSRedshiftClusterAvailabilityZone(&second, "us-west-2b"), + resource.TestCheckResourceAttr("aws_redshift_cluster.default", "availability_zone", "us-west-2b"), + ), + }, + }, + }) +} + func testAccCheckAWSRedshiftClusterDestroy(s *terraform.State) error { for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_cluster" { @@ -503,11 +607,6 @@ func testAccCheckAWSRedshiftClusterSnapshot(rInt int) resource.TestCheckFunc { conn := testAccProvider.Meta().(*AWSClient).redshiftconn snapshot_identifier := fmt.Sprintf("tf-acctest-snapshot-%d", rInt) - arn, err := buildRedshiftARN(snapshot_identifier, testAccProvider.Meta().(*AWSClient).partition, testAccProvider.Meta().(*AWSClient).accountid, testAccProvider.Meta().(*AWSClient).region) - tagsARN := strings.Replace(arn, ":cluster:", ":snapshot:", 1) - if err != nil { - return fmt.Errorf("Error building ARN for tags check with ARN (%s): %s", tagsARN, err) - } log.Printf("[INFO] Deleting the Snapshot %s", snapshot_identifier) _, snapDeleteErr := conn.DeleteClusterSnapshot( @@ -586,6 +685,15 @@ func testAccCheckAWSRedshiftClusterMasterUsername(c *redshift.Cluster, value str } } +func 
testAccCheckAWSRedshiftClusterAvailabilityZone(c *redshift.Cluster, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + if *c.AvailabilityZone != value { + return fmt.Errorf("Expected cluster's AvailabilityZone: %q, given: %q", value, *c.AvailabilityZone) + } + return nil + } +} + func TestResourceAWSRedshiftClusterIdentifierValidation(t *testing.T) { cases := []struct { Value string @@ -743,6 +851,23 @@ resource "aws_redshift_cluster" "default" { `, rInt) } +func testAccAWSRedshiftClusterConfig_updateNodeType(rInt int) string { + return fmt.Sprintf(` +resource "aws_redshift_cluster" "default" { + cluster_identifier = "tf-redshift-cluster-%d" + availability_zone = "us-west-2a" + database_name = "mydb" + master_username = "foo_test" + master_password = "Mustbe8characters" + node_type = "dc2.large" + automated_snapshot_retention_period = 0 + allow_version_upgrade = false + number_of_nodes = 1 + skip_final_snapshot = true +} +`, rInt) +} + func testAccAWSRedshiftClusterConfig_basic(rInt int) string { return fmt.Sprintf(` resource "aws_redshift_cluster" "default" { @@ -1244,3 +1369,18 @@ resource "aws_redshift_cluster" "default" { skip_final_snapshot = true }`, rInt) } + +func testAccAWSRedshiftClusterConfig_updatedAvailabilityZone(rInt int) string { + return fmt.Sprintf(` + resource "aws_redshift_cluster" "default" { + cluster_identifier = "tf-redshift-cluster-%d" + availability_zone = "us-west-2b" + database_name = "mydb" + master_username = "foo_test" + master_password = "Mustbe8characters" + node_type = "dc1.large" + automated_snapshot_retention_period = 0 + allow_version_upgrade = false + skip_final_snapshot = true + }`, rInt) +} diff --git a/aws/resource_aws_redshift_event_subscription.go b/aws/resource_aws_redshift_event_subscription.go new file mode 100644 index 00000000000..8704ba6fb83 --- /dev/null +++ b/aws/resource_aws_redshift_event_subscription.go @@ -0,0 +1,211 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRedshiftEventSubscription() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsRedshiftEventSubscriptionCreate, + Read: resourceAwsRedshiftEventSubscriptionRead, + Update: resourceAwsRedshiftEventSubscriptionUpdate, + Delete: resourceAwsRedshiftEventSubscriptionDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(40 * time.Minute), + Delete: schema.DefaultTimeout(40 * time.Minute), + Update: schema.DefaultTimeout(40 * time.Minute), + }, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "sns_topic_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArn, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "event_categories": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "source_ids": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "source_type": { + Type: schema.TypeString, + Optional: true, + }, + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "severity": { + Type: schema.TypeString, + Optional: true, + }, + "customer_aws_id": { + Type: 
schema.TypeString, + Computed: true, + }, + "tags": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsRedshiftEventSubscriptionCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + request := &redshift.CreateEventSubscriptionInput{ + SubscriptionName: aws.String(d.Get("name").(string)), + SnsTopicArn: aws.String(d.Get("sns_topic_arn").(string)), + Enabled: aws.Bool(d.Get("enabled").(bool)), + SourceIds: expandStringSet(d.Get("source_ids").(*schema.Set)), + SourceType: aws.String(d.Get("source_type").(string)), + Severity: aws.String(d.Get("severity").(string)), + EventCategories: expandStringSet(d.Get("event_categories").(*schema.Set)), + Tags: tagsFromMapRedshift(d.Get("tags").(map[string]interface{})), + } + + log.Println("[DEBUG] Create Redshift Event Subscription:", request) + + output, err := conn.CreateEventSubscription(request) + if err != nil || output.EventSubscription == nil { + return fmt.Errorf("Error creating Redshift Event Subscription %s: %s", d.Get("name").(string), err) + } + + d.SetId(aws.StringValue(output.EventSubscription.CustSubscriptionId)) + + return resourceAwsRedshiftEventSubscriptionRead(d, meta) +} + +func resourceAwsRedshiftEventSubscriptionRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + sub, err := resourceAwsRedshiftEventSubscriptionRetrieve(d.Id(), conn) + if err != nil { + return fmt.Errorf("Error retrieving Redshift Event Subscription %s: %s", d.Id(), err) + } + if sub == nil { + log.Printf("[WARN] Redshift Event Subscription (%s) not found - removing from state", d.Id()) + d.SetId("") + return nil + } + + if err := d.Set("name", sub.CustSubscriptionId); err != nil { + return err + } + if err := d.Set("sns_topic_arn", sub.SnsTopicArn); err != nil { + return err + } + if err := d.Set("status", sub.Status); err != nil { + return err + } + if err := d.Set("source_type", sub.SourceType); err != nil { + return err + } + if err := d.Set("severity", sub.Severity); err != nil { + return err + } + if err := d.Set("enabled", sub.Enabled); err != nil { + return err + } + if err := d.Set("source_ids", flattenStringList(sub.SourceIdsList)); err != nil { + return err + } + if err := d.Set("event_categories", flattenStringList(sub.EventCategoriesList)); err != nil { + return err + } + if err := d.Set("customer_aws_id", sub.CustomerAwsId); err != nil { + return err + } + if err := d.Set("tags", tagsToMapRedshift(sub.Tags)); err != nil { + return err + } + + return nil +} + +func resourceAwsRedshiftEventSubscriptionRetrieve(name string, conn *redshift.Redshift) (*redshift.EventSubscription, error) { + + request := &redshift.DescribeEventSubscriptionsInput{ + SubscriptionName: aws.String(name), + } + + describeResp, err := conn.DescribeEventSubscriptions(request) + if err != nil { + if isAWSErr(err, redshift.ErrCodeSubscriptionNotFoundFault, "") { + log.Printf("[WARN] No Redshift Event Subscription by name (%s) found", name) + return nil, nil + } + return nil, fmt.Errorf("Error reading Redshift Event Subscription %s: %s", name, err) + } + + if len(describeResp.EventSubscriptionsList) != 1 { + return nil, fmt.Errorf("Unable to find Redshift Event Subscription: %#v", describeResp.EventSubscriptionsList) + } + + return describeResp.EventSubscriptionsList[0], nil +} + +func resourceAwsRedshiftEventSubscriptionUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + req := 
&redshift.ModifyEventSubscriptionInput{ + SubscriptionName: aws.String(d.Id()), + SnsTopicArn: aws.String(d.Get("sns_topic_arn").(string)), + Enabled: aws.Bool(d.Get("enabled").(bool)), + SourceIds: expandStringSet(d.Get("source_ids").(*schema.Set)), + SourceType: aws.String(d.Get("source_type").(string)), + Severity: aws.String(d.Get("severity").(string)), + EventCategories: expandStringSet(d.Get("event_categories").(*schema.Set)), + } + + log.Printf("[DEBUG] Redshift Event Subscription modification request: %#v", req) + _, err := conn.ModifyEventSubscription(req) + if err != nil { + return fmt.Errorf("Modifying Redshift Event Subscription %s failed: %s", d.Id(), err) + } + + return nil +} + +func resourceAwsRedshiftEventSubscriptionDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + deleteOpts := redshift.DeleteEventSubscriptionInput{ + SubscriptionName: aws.String(d.Id()), + } + + if _, err := conn.DeleteEventSubscription(&deleteOpts); err != nil { + if isAWSErr(err, redshift.ErrCodeSubscriptionNotFoundFault, "") { + return nil + } + return fmt.Errorf("Error deleting Redshift Event Subscription %s: %s", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_redshift_event_subscription_test.go b/aws/resource_aws_redshift_event_subscription_test.go new file mode 100644 index 00000000000..edb1e0c6d84 --- /dev/null +++ b/aws/resource_aws_redshift_event_subscription_test.go @@ -0,0 +1,369 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSRedshiftEventSubscription_basicUpdate(t *testing.T) { + var v redshift.EventSubscription + rInt := acctest.RandInt() + rName := fmt.Sprintf("tf-acc-test-redshift-event-subs-%d", rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRedshiftEventSubscriptionConfig(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftEventSubscriptionExists("aws_redshift_event_subscription.bar", &v), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "enabled", "true"), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "source_type", "cluster"), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "name", rName), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "tags.Name", "name"), + ), + }, + { + Config: testAccAWSRedshiftEventSubscriptionConfigUpdate(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftEventSubscriptionExists("aws_redshift_event_subscription.bar", &v), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "enabled", "false"), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "source_type", "cluster-snapshot"), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "tags.%", "1"), + resource.TestCheckResourceAttr("aws_redshift_event_subscription.bar", "tags.Name", "new-name"), + ), + }, + { + ResourceName: 
"aws_redshift_event_subscription.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSRedshiftEventSubscription_withPrefix(t *testing.T) { + var v redshift.EventSubscription + rInt := acctest.RandInt() + rName := fmt.Sprintf("tf-acc-test-redshift-event-subs-%d", rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRedshiftEventSubscriptionConfig(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftEventSubscriptionExists("aws_redshift_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "enabled", "true"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "source_type", "cluster"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "name", rName), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "tags.Name", "name"), + ), + }, + { + ResourceName: "aws_redshift_event_subscription.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSRedshiftEventSubscription_withSourceIds(t *testing.T) { + var v redshift.EventSubscription + rInt := acctest.RandInt() + rName := fmt.Sprintf("tf-acc-test-redshift-event-subs-with-ids-%d", rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRedshiftEventSubscriptionConfigWithSourceIds(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftEventSubscriptionExists("aws_redshift_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "enabled", "true"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "source_type", "cluster-parameter-group"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "name", rName), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "source_ids.#", "1"), + ), + }, + { + Config: testAccAWSRedshiftEventSubscriptionConfigUpdateSourceIds(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftEventSubscriptionExists("aws_redshift_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "enabled", "true"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "source_type", "cluster-parameter-group"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "name", rName), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "source_ids.#", "2"), + ), + }, + { + ResourceName: "aws_redshift_event_subscription.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSRedshiftEventSubscription_categoryUpdate(t *testing.T) { + var v redshift.EventSubscription + rInt := acctest.RandInt() + rName := fmt.Sprintf("tf-acc-test-redshift-event-subs-%d", rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftEventSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRedshiftEventSubscriptionConfig(rInt), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftEventSubscriptionExists("aws_redshift_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "enabled", "true"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "source_type", "cluster"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "name", rName), + ), + }, + { + Config: testAccAWSRedshiftEventSubscriptionConfigUpdateCategories(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftEventSubscriptionExists("aws_redshift_event_subscription.bar", &v), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "enabled", "true"), + resource.TestCheckResourceAttr( + "aws_redshift_event_subscription.bar", "source_type", "cluster"), + ), + }, + { + ResourceName: "aws_redshift_event_subscription.bar", + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSRedshiftEventSubscriptionExists(n string, v *redshift.EventSubscription) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Redshift Event Subscription is set") + } + + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + opts := redshift.DescribeEventSubscriptionsInput{ + SubscriptionName: aws.String(rs.Primary.ID), + } + + resp, err := conn.DescribeEventSubscriptions(&opts) + + if err != nil { + return err + } + + if len(resp.EventSubscriptionsList) != 1 || + *resp.EventSubscriptionsList[0].CustSubscriptionId != rs.Primary.ID { + return fmt.Errorf("Redshift Event Subscription not found") + } + + *v = *resp.EventSubscriptionsList[0] + return nil + } +} + +func testAccCheckAWSRedshiftEventSubscriptionDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_redshift_event_subscription" { + continue + } + + var err error + resp, err := conn.DescribeEventSubscriptions( + &redshift.DescribeEventSubscriptionsInput{ + SubscriptionName: aws.String(rs.Primary.ID), + }) + + if ae, ok := err.(awserr.Error); ok && ae.Code() == "SubscriptionNotFound" { + continue + } + + if err == nil { + if len(resp.EventSubscriptionsList) != 0 && + *resp.EventSubscriptionsList[0].CustSubscriptionId == rs.Primary.ID { + return fmt.Errorf("Event Subscription still exists") + } + } + + // Verify the error + newerr, ok := err.(awserr.Error) + if !ok { + return err + } + if newerr.Code() != "SubscriptionNotFound" { + return err + } + } + + return nil +} + +func testAccAWSRedshiftEventSubscriptionConfig(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-redshift-event-subs-sns-topic-%d" +} + +resource "aws_redshift_event_subscription" "bar" { + name = "tf-acc-test-redshift-event-subs-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "cluster" + severity = "INFO" + event_categories = [ + "configuration", + "management", + "monitoring", + "security", + ] + tags { + Name = "name" + } +}`, rInt, rInt) +} + +func testAccAWSRedshiftEventSubscriptionConfigUpdate(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-redshift-event-subs-sns-topic-%d" +} + +resource "aws_redshift_event_subscription" "bar" { + name = "tf-acc-test-redshift-event-subs-%d" 
+ sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + enabled = false + source_type = "cluster-snapshot" + severity = "INFO" + event_categories = [ + "monitoring", + ] + tags { + Name = "new-name" + } +}`, rInt, rInt) +} + +func testAccAWSRedshiftEventSubscriptionConfigWithSourceIds(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-redshift-event-subs-sns-topic-%d" +} + +resource "aws_redshift_parameter_group" "bar" { + name = "redshift-parameter-group-event-%d" + family = "redshift-1.0" + description = "Test parameter group for terraform" +} + +resource "aws_redshift_event_subscription" "bar" { + name = "tf-acc-test-redshift-event-subs-with-ids-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "cluster-parameter-group" + severity = "INFO" + source_ids = ["${aws_redshift_parameter_group.bar.id}"] + event_categories = [ + "configuration", + ] + tags { + Name = "name" + } +}`, rInt, rInt, rInt) +} + +func testAccAWSRedshiftEventSubscriptionConfigUpdateSourceIds(rInt int) string { + return fmt.Sprintf(` + resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-redshift-event-subs-sns-topic-%d" + } + + resource "aws_redshift_parameter_group" "bar" { + name = "tf-acc-redshift-parameter-group-event-%d" + family = "redshift-1.0" + description = "Test parameter group for terraform" + } + + resource "aws_redshift_parameter_group" "foo" { + name = "tf-acc-redshift-parameter-group-event-2-%d" + family = "redshift-1.0" + description = "Test parameter group for terraform" + } + + resource "aws_redshift_event_subscription" "bar" { + name = "tf-acc-test-redshift-event-subs-with-ids-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "cluster-parameter-group" + severity = "INFO" + source_ids = ["${aws_redshift_parameter_group.bar.id}","${aws_redshift_parameter_group.foo.id}"] + event_categories = [ + "configuration", + ] + tags { + Name = "name" + } + }`, rInt, rInt, rInt, rInt) +} + +func testAccAWSRedshiftEventSubscriptionConfigUpdateCategories(rInt int) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "aws_sns_topic" { + name = "tf-acc-test-redshift-event-subs-sns-topic-%d" +} + +resource "aws_redshift_event_subscription" "bar" { + name = "tf-acc-test-redshift-event-subs-%d" + sns_topic_arn = "${aws_sns_topic.aws_sns_topic.arn}" + source_type = "cluster" + severity = "INFO" + event_categories = [ + "monitoring", + ] + tags { + Name = "name" + } +}`, rInt, rInt) +} diff --git a/aws/resource_aws_redshift_parameter_group.go b/aws/resource_aws_redshift_parameter_group.go index 808a1886e06..ae7778f54b9 100644 --- a/aws/resource_aws_redshift_parameter_group.go +++ b/aws/resource_aws_redshift_parameter_group.go @@ -24,37 +24,37 @@ func resourceAwsRedshiftParameterGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, ForceNew: true, Required: true, ValidateFunc: validateRedshiftParamGroupName, }, - "family": &schema.Schema{ + "family": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, ForceNew: true, Default: "Managed by Terraform", }, - "parameter": &schema.Schema{ + "parameter": { Type: schema.TypeSet, Optional: true, ForceNew: false, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + 
"value": { Type: schema.TypeString, Required: true, }, diff --git a/aws/resource_aws_redshift_parameter_group_test.go b/aws/resource_aws_redshift_parameter_group_test.go index edd293b820f..b6659546b82 100644 --- a/aws/resource_aws_redshift_parameter_group_test.go +++ b/aws/resource_aws_redshift_parameter_group_test.go @@ -12,11 +12,33 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSRedshiftParameterGroup_importBasic(t *testing.T) { + resourceName := "aws_redshift_parameter_group.bar" + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRedshiftParameterGroupConfig(rInt), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSRedshiftParameterGroup_withParameters(t *testing.T) { var v redshift.ClusterParameterGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftParameterGroupDestroy, @@ -53,7 +75,7 @@ func TestAccAWSRedshiftParameterGroup_withoutParameters(t *testing.T) { var v redshift.ClusterParameterGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftParameterGroupDestroy, diff --git a/aws/resource_aws_redshift_security_group.go b/aws/resource_aws_redshift_security_group.go index 24a45bfdec4..87e2b2fc28a 100644 --- a/aws/resource_aws_redshift_security_group.go +++ b/aws/resource_aws_redshift_security_group.go @@ -27,37 +27,37 @@ func resourceAwsRedshiftSecurityGroup() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validateRedshiftSecurityGroupName, }, - "description": &schema.Schema{ + "description": { Type: schema.TypeString, Optional: true, ForceNew: true, Default: "Managed by Terraform", }, - "ingress": &schema.Schema{ + "ingress": { Type: schema.TypeSet, Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "cidr": &schema.Schema{ + "cidr": { Type: schema.TypeString, Optional: true, }, - "security_group_name": &schema.Schema{ + "security_group_name": { Type: schema.TypeString, Optional: true, Computed: true, }, - "security_group_owner_id": &schema.Schema{ + "security_group_owner_id": { Type: schema.TypeString, Optional: true, Computed: true, diff --git a/aws/resource_aws_redshift_security_group_test.go b/aws/resource_aws_redshift_security_group_test.go index aa30a514534..0f39e673467 100644 --- a/aws/resource_aws_redshift_security_group_test.go +++ b/aws/resource_aws_redshift_security_group_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "os" "testing" "github.com/aws/aws-sdk-go/aws" @@ -12,11 +13,37 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSRedshiftSecurityGroup_importBasic(t *testing.T) { + oldvar := os.Getenv("AWS_DEFAULT_REGION") + os.Setenv("AWS_DEFAULT_REGION", "us-east-1") + defer os.Setenv("AWS_DEFAULT_REGION", oldvar) + rInt := acctest.RandInt() + + resourceName := "aws_redshift_security_group.bar" + + resource.ParallelTest(t, 
resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftSecurityGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRedshiftSecurityGroupConfig_ingressCidr(rInt), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSRedshiftSecurityGroup_ingressCidr(t *testing.T) { var v redshift.ClusterSecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftSecurityGroupDestroy, @@ -43,7 +70,7 @@ func TestAccAWSRedshiftSecurityGroup_updateIngressCidr(t *testing.T) { var v redshift.ClusterSecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftSecurityGroupDestroy, @@ -82,7 +109,7 @@ func TestAccAWSRedshiftSecurityGroup_ingressSecurityGroup(t *testing.T) { var v redshift.ClusterSecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftSecurityGroupDestroy, @@ -107,7 +134,7 @@ func TestAccAWSRedshiftSecurityGroup_updateIngressSecurityGroup(t *testing.T) { var v redshift.ClusterSecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRedshiftSecurityGroupDestroy, diff --git a/aws/resource_aws_redshift_snapshot_copy_grant.go b/aws/resource_aws_redshift_snapshot_copy_grant.go new file mode 100644 index 00000000000..0e0b750c081 --- /dev/null +++ b/aws/resource_aws_redshift_snapshot_copy_grant.go @@ -0,0 +1,255 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsRedshiftSnapshotCopyGrant() *schema.Resource { + return &schema.Resource{ + // There is no API for updating/modifying grants, hence no Update + // Instead changes to most fields will force a new resource + Create: resourceAwsRedshiftSnapshotCopyGrantCreate, + Read: resourceAwsRedshiftSnapshotCopyGrantRead, + Delete: resourceAwsRedshiftSnapshotCopyGrantDelete, + Exists: resourceAwsRedshiftSnapshotCopyGrantExists, + + Schema: map[string]*schema.Schema{ + "snapshot_copy_grant_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + }, + "tags": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsRedshiftSnapshotCopyGrantCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + grantName := d.Get("snapshot_copy_grant_name").(string) + + input := redshift.CreateSnapshotCopyGrantInput{ + SnapshotCopyGrantName: aws.String(grantName), + } + + if v, ok := d.GetOk("kms_key_id"); ok { + input.KmsKeyId = aws.String(v.(string)) + } + + input.Tags = 
tagsFromMapRedshift(d.Get("tags").(map[string]interface{})) + + log.Printf("[DEBUG]: Adding new Redshift SnapshotCopyGrant: %s", input) + + var out *redshift.CreateSnapshotCopyGrantOutput + var err error + + out, err = conn.CreateSnapshotCopyGrant(&input) + + if err != nil { + return fmt.Errorf("error creating Redshift Snapshot Copy Grant (%s): %s", grantName, err) + } + + log.Printf("[DEBUG] Created new Redshift SnapshotCopyGrant: %s", *out.SnapshotCopyGrant.SnapshotCopyGrantName) + d.SetId(grantName) + + return resourceAwsRedshiftSnapshotCopyGrantRead(d, meta) +} + +func resourceAwsRedshiftSnapshotCopyGrantRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + grantName := d.Id() + log.Printf("[DEBUG] Looking for grant: %s", grantName) + grant, err := findAwsRedshiftSnapshotCopyGrantWithRetry(conn, grantName) + + if err != nil { + return err + } + + if grant == nil { + log.Printf("[WARN] %s Redshift snapshot copy grant not found, removing from state file", grantName) + d.SetId("") + return nil + } + + d.Set("kms_key_id", grant.KmsKeyId) + d.Set("snapshot_copy_grant_name", grant.SnapshotCopyGrantName) + if err := d.Set("tags", tagsToMapRedshift(grant.Tags)); err != nil { + return fmt.Errorf("Error setting Redshift Snapshot Copy Grant Tags: %#v", err) + } + + return nil +} + +func resourceAwsRedshiftSnapshotCopyGrantDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).redshiftconn + + grantName := d.Id() + + deleteInput := redshift.DeleteSnapshotCopyGrantInput{ + SnapshotCopyGrantName: aws.String(grantName), + } + + log.Printf("[DEBUG] Deleting snapshot copy grant: %s", grantName) + _, err := conn.DeleteSnapshotCopyGrant(&deleteInput) + + if err != nil { + if isAWSErr(err, redshift.ErrCodeSnapshotCopyGrantNotFoundFault, "") { + return nil + } + return err + } + + log.Printf("[DEBUG] Checking if grant is deleted: %s", grantName) + err = waitForAwsRedshiftSnapshotCopyGrantToBeDeleted(conn, grantName) + + if err != nil { + return err + } + + return nil +} + +func resourceAwsRedshiftSnapshotCopyGrantExists(d *schema.ResourceData, meta interface{}) (bool, error) { + conn := meta.(*AWSClient).redshiftconn + + grantName := d.Id() + + log.Printf("[DEBUG] Looking for Grant: %s", grantName) + grant, err := findAwsRedshiftSnapshotCopyGrantWithRetry(conn, grantName) + + if err != nil { + return false, err + } + if grant != nil { + return true, err + } + + return false, nil +} + +func getAwsRedshiftSnapshotCopyGrant(grants []*redshift.SnapshotCopyGrant, grantName string) *redshift.SnapshotCopyGrant { + for _, grant := range grants { + if *grant.SnapshotCopyGrantName == grantName { + return grant + } + } + + return nil +} + +/* +In the functions below it is not possible to use retryOnAwsCodes function, as there +is no get grant call, so an error has to be created if the grant is or isn't returned +by the describe grants call when expected. 
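+
+Concretely: findAwsRedshiftSnapshotCopyGrant pages through DescribeSnapshotCopyGrants
+and returns a resource.NotFoundError when no matching grant is found. The two wrappers
+below retry on that condition for up to 3 minutes, one while waiting for a grant that
+should exist, the other while waiting for the grant to disappear after deletion.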
+*/ + +// NB: This function only retries the grant not being returned and some edge cases, while AWS Errors +// are handled by the findAwsRedshiftSnapshotCopyGrant function +func findAwsRedshiftSnapshotCopyGrantWithRetry(conn *redshift.Redshift, grantName string) (*redshift.SnapshotCopyGrant, error) { + var grant *redshift.SnapshotCopyGrant + err := resource.Retry(3*time.Minute, func() *resource.RetryError { + var err error + grant, err = findAwsRedshiftSnapshotCopyGrant(conn, grantName, nil) + + if err != nil { + if serr, ok := err.(*resource.NotFoundError); ok { + // Force a retry if the grant should exist + return resource.RetryableError(serr) + } + + return resource.NonRetryableError(err) + } + + return nil + }) + + return grant, err +} + +// Used by the tests as well +func waitForAwsRedshiftSnapshotCopyGrantToBeDeleted(conn *redshift.Redshift, grantName string) error { + err := resource.Retry(3*time.Minute, func() *resource.RetryError { + grant, err := findAwsRedshiftSnapshotCopyGrant(conn, grantName, nil) + if err != nil { + if isAWSErr(err, redshift.ErrCodeSnapshotCopyGrantNotFoundFault, "") { + return nil + } + } + + if grant != nil { + // Force a retry if the grant still exists + return resource.RetryableError( + fmt.Errorf("[DEBUG] Grant still exists while expected to be deleted: %s", *grant.SnapshotCopyGrantName)) + } + + return resource.NonRetryableError(err) + }) + + return err +} + +// The DescribeSnapshotCopyGrants API defaults to listing only 100 grants +// Use a marker to iterate over all grants in "pages" +// NB: This function only retries on AWS Errors +func findAwsRedshiftSnapshotCopyGrant(conn *redshift.Redshift, grantName string, marker *string) (*redshift.SnapshotCopyGrant, error) { + + input := redshift.DescribeSnapshotCopyGrantsInput{ + MaxRecords: aws.Int64(int64(100)), + } + + // marker and grant name are mutually exclusive + if marker != nil { + input.Marker = marker + } else { + input.SnapshotCopyGrantName = aws.String(grantName) + } + + var out *redshift.DescribeSnapshotCopyGrantsOutput + var err error + var grant *redshift.SnapshotCopyGrant + + err = resource.Retry(3*time.Minute, func() *resource.RetryError { + out, err = conn.DescribeSnapshotCopyGrants(&input) + + if err != nil { + return resource.NonRetryableError(err) + } + + return nil + }) + if err != nil { + return nil, err + } + + grant = getAwsRedshiftSnapshotCopyGrant(out.SnapshotCopyGrants, grantName) + if grant != nil { + return grant, nil + } else if out.Marker != nil { + log.Printf("[DEBUG] Snapshot copy grant not found but marker returned, getting next page via marker: %s", aws.StringValue(out.Marker)) + return findAwsRedshiftSnapshotCopyGrant(conn, grantName, out.Marker) + } + + return nil, &resource.NotFoundError{ + Message: fmt.Sprintf("[DEBUG] Grant %s not found", grantName), + LastRequest: input, + } +} diff --git a/aws/resource_aws_redshift_snapshot_copy_grant_test.go b/aws/resource_aws_redshift_snapshot_copy_grant_test.go new file mode 100644 index 00000000000..fe34e1de360 --- /dev/null +++ b/aws/resource_aws_redshift_snapshot_copy_grant_test.go @@ -0,0 +1,101 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSRedshiftSnapshotCopyGrant_Basic(t *testing.T) { + + rName := 
acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRedshiftSnapshotCopyGrantDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRedshiftSnapshotCopyGrant_Basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRedshiftSnapshotCopyGrantExists("aws_redshift_snapshot_copy_grant.basic"), + resource.TestCheckResourceAttr("aws_redshift_snapshot_copy_grant.basic", "snapshot_copy_grant_name", rName), + resource.TestCheckResourceAttr("aws_redshift_snapshot_copy_grant.basic", "tags.Name", "tf-redshift-snapshot-copy-grant-basic"), + resource.TestCheckResourceAttrSet("aws_redshift_snapshot_copy_grant.basic", "kms_key_id"), + ), + }, + }, + }) +} + +func testAccCheckAWSRedshiftSnapshotCopyGrantDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_redshift_snapshot_copy_grant" { + continue + } + + err := waitForAwsRedshiftSnapshotCopyGrantToBeDeleted(conn, rs.Primary.ID) + if err != nil { + return err + } + + return nil + } + + return nil +} + +func testAccCheckAWSRedshiftSnapshotCopyGrantExists(name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("Snapshot Copy Grant ID (SnapshotCopyGrantName) is not set") + } + + // retrieve the client from the test provider + conn := testAccProvider.Meta().(*AWSClient).redshiftconn + + input := redshift.DescribeSnapshotCopyGrantsInput{ + MaxRecords: aws.Int64(int64(100)), + SnapshotCopyGrantName: aws.String(rs.Primary.ID), + } + + response, err := conn.DescribeSnapshotCopyGrants(&input) + + if err != nil { + return err + } + + // we expect only a single snapshot copy grant by this ID. 
If we find zero, or many, + // then we consider this an error + if len(response.SnapshotCopyGrants) != 1 || + *response.SnapshotCopyGrants[0].SnapshotCopyGrantName != rs.Primary.ID { + return fmt.Errorf("Snapshot copy grant not found") + } + + return nil + } +} + +func testAccAWSRedshiftSnapshotCopyGrant_Basic(rName string) string { + return fmt.Sprintf(` +resource "aws_redshift_snapshot_copy_grant" "basic" { + snapshot_copy_grant_name = "%s" + + tags { + Name = "tf-redshift-snapshot-copy-grant-basic" + } +} +`, rName) +} diff --git a/aws/resource_aws_redshift_subnet_group.go b/aws/resource_aws_redshift_subnet_group.go index 3fe9563940b..e503108ece7 100644 --- a/aws/resource_aws_redshift_subnet_group.go +++ b/aws/resource_aws_redshift_subnet_group.go @@ -6,6 +6,7 @@ import ( "regexp" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/redshift" "github.com/hashicorp/terraform/helper/schema" ) @@ -99,7 +100,7 @@ func resourceAwsRedshiftSubnetGroupRead(d *schema.ResourceData, meta interface{} d.Set("description", describeResp.ClusterSubnetGroups[0].Description) d.Set("subnet_ids", subnetIdsToSlice(describeResp.ClusterSubnetGroups[0].Subnets)) if err := d.Set("tags", tagsToMapRedshift(describeResp.ClusterSubnetGroups[0].Tags)); err != nil { - return fmt.Errorf("[DEBUG] Error setting Redshift Subnet Group Tags: %#v", err) + return fmt.Errorf("Error setting Redshift Subnet Group Tags: %#v", err) } return nil @@ -108,13 +109,15 @@ func resourceAwsRedshiftSubnetGroupRead(d *schema.ResourceData, meta interface{} func resourceAwsRedshiftSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).redshiftconn - arn, tagErr := buildRedshiftSubnetGroupARN(d.Id(), meta.(*AWSClient).partition, meta.(*AWSClient).accountid, meta.(*AWSClient).region) - if tagErr != nil { - return fmt.Errorf("Error building ARN for Redshift Subnet Group, not updating Tags for Subnet Group %s", d.Id()) - } else { - if tagErr := setTagsRedshift(conn, d, arn); tagErr != nil { - return tagErr - } + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "redshift", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("subnetgroup:%s", d.Id()), + }.String() + if tagErr := setTagsRedshift(conn, d, arn); tagErr != nil { + return tagErr } if d.HasChange("subnet_ids") || d.HasChange("description") { @@ -180,15 +183,3 @@ func validateRedshiftSubnetGroupName(v interface{}, k string) (ws []string, erro } return } - -func buildRedshiftSubnetGroupARN(identifier, partition, accountid, region string) (string, error) { - if partition == "" { - return "", fmt.Errorf("Unable to construct Subnet Group ARN because of missing AWS partition") - } - if accountid == "" { - return "", fmt.Errorf("Unable to construct Subnet Group ARN because of missing AWS Account ID") - } - arn := fmt.Sprintf("arn:%s:redshift:%s:%s:subnetgroup:%s", partition, region, accountid, identifier) - return arn, nil - -} diff --git a/aws/resource_aws_redshift_subnet_group_test.go b/aws/resource_aws_redshift_subnet_group_test.go index d104d909c0d..cb18589a73b 100644 --- a/aws/resource_aws_redshift_subnet_group_test.go +++ b/aws/resource_aws_redshift_subnet_group_test.go @@ -12,16 +12,40 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSRedshiftSubnetGroup_importBasic(t *testing.T) { + resourceName := "aws_redshift_subnet_group.foo" + rInt := acctest.RandInt() + + 
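+	// Step 1 creates the subnet group; step 2 imports it and verifies the imported
+	// state, excluding "description" from verification via ImportStateVerifyIgnore below.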
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRedshiftSubnetGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRedshiftSubnetGroupConfig(rInt), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "description"}, + }, + }, + }) +} + func TestAccAWSRedshiftSubnetGroup_basic(t *testing.T) { var v redshift.ClusterSubnetGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRedshiftSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRedshiftSubnetGroupConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckRedshiftSubnetGroupExists("aws_redshift_subnet_group.foo", &v), @@ -39,12 +63,12 @@ func TestAccAWSRedshiftSubnetGroup_updateDescription(t *testing.T) { var v redshift.ClusterSubnetGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRedshiftSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRedshiftSubnetGroupConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckRedshiftSubnetGroupExists("aws_redshift_subnet_group.foo", &v), @@ -53,7 +77,7 @@ func TestAccAWSRedshiftSubnetGroup_updateDescription(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccRedshiftSubnetGroup_updateDescription(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckRedshiftSubnetGroupExists("aws_redshift_subnet_group.foo", &v), @@ -69,12 +93,12 @@ func TestAccAWSRedshiftSubnetGroup_updateSubnetIds(t *testing.T) { var v redshift.ClusterSubnetGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRedshiftSubnetGroupDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRedshiftSubnetGroupConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckRedshiftSubnetGroupExists("aws_redshift_subnet_group.foo", &v), @@ -83,7 +107,7 @@ func TestAccAWSRedshiftSubnetGroup_updateSubnetIds(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccRedshiftSubnetGroupConfig_updateSubnetIds(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckRedshiftSubnetGroupExists("aws_redshift_subnet_group.foo", &v), @@ -99,7 +123,7 @@ func TestAccAWSRedshiftSubnetGroup_tags(t *testing.T) { var v redshift.ClusterSubnetGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRedshiftSubnetGroupDestroy, diff --git a/aws/resource_aws_route.go b/aws/resource_aws_route.go index dfb12652957..bbd8a58e5ab 100644 --- a/aws/resource_aws_route.go +++ b/aws/resource_aws_route.go @@ -4,6 +4,7 @@ import ( "errors" "fmt" "log" + "strings" "time" "github.com/aws/aws-sdk-go/aws" @@ -27,6 +28,24 @@ func resourceAwsRoute() *schema.Resource { Update: resourceAwsRouteUpdate, Delete: resourceAwsRouteDelete, Exists: resourceAwsRouteExists, + Importer: &schema.ResourceImporter{ + State: func(d *schema.ResourceData, meta 
interface{}) ([]*schema.ResourceData, error) { + idParts := strings.Split(d.Id(), "_") + if len(idParts) != 2 || idParts[0] == "" || idParts[1] == "" { + return nil, fmt.Errorf("unexpected format of ID (%q), expected ROUTETABLEID_DESTINATION", d.Id()) + } + routeTableID := idParts[0] + destination := idParts[1] + d.Set("route_table_id", routeTableID) + if strings.Contains(destination, ":") { + d.Set("destination_ipv6_cidr_block", destination) + } else { + d.Set("destination_cidr_block", destination) + } + d.SetId(fmt.Sprintf("r-%s%d", routeTableID, hashcode.String(destination))) + return []*schema.ResourceData{d}, nil + }, + }, Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(2 * time.Minute), @@ -98,6 +117,7 @@ func resourceAwsRoute() *schema.Resource { "route_table_id": { Type: schema.TypeString, Required: true, + ForceNew: true, }, "vpc_peering_connection_id": { @@ -205,7 +225,7 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { } default: - return fmt.Errorf("An invalid target type specified: %s", setTarget) + return fmt.Errorf("A valid target type is missing. Specify one of the following attributes: %s", strings.Join(allowedTargets, ", ")) } log.Printf("[DEBUG] Route create config: %s", createOpts) @@ -238,7 +258,7 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { if v, ok := d.GetOk("destination_cidr_block"); ok { err = resource.Retry(d.Timeout(schema.TimeoutCreate), func() *resource.RetryError { - route, err = findResourceRoute(conn, d.Get("route_table_id").(string), v.(string), "") + route, err = resourceAwsRouteFindRoute(conn, d.Get("route_table_id").(string), v.(string), "") return resource.RetryableError(err) }) if err != nil { @@ -248,7 +268,7 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { if v, ok := d.GetOk("destination_ipv6_cidr_block"); ok { err = resource.Retry(d.Timeout(schema.TimeoutCreate), func() *resource.RetryError { - route, err = findResourceRoute(conn, d.Get("route_table_id").(string), "", v.(string)) + route, err = resourceAwsRouteFindRoute(conn, d.Get("route_table_id").(string), "", v.(string)) return resource.RetryableError(err) }) if err != nil { @@ -256,7 +276,7 @@ func resourceAwsRouteCreate(d *schema.ResourceData, meta interface{}) error { } } - d.SetId(routeIDHash(d, route)) + d.SetId(resourceAwsRouteID(d, route)) resourceAwsRouteSetResourceData(d, route) return nil } @@ -268,7 +288,7 @@ func resourceAwsRouteRead(d *schema.ResourceData, meta interface{}) error { destinationCidrBlock := d.Get("destination_cidr_block").(string) destinationIpv6CidrBlock := d.Get("destination_ipv6_cidr_block").(string) - route, err := findResourceRoute(conn, routeTableId, destinationCidrBlock, destinationIpv6CidrBlock) + route, err := resourceAwsRouteFindRoute(conn, routeTableId, destinationCidrBlock, destinationIpv6CidrBlock) if err != nil { if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidRouteTableID.NotFound" { log.Printf("[WARN] Route Table %q could not be found. Removing Route from state.", @@ -319,7 +339,7 @@ func resourceAwsRouteUpdate(d *schema.ResourceData, meta interface{}) error { } switch setTarget { - //instance_id is a special case due to the fact that AWS will "discover" the network_interace_id + //instance_id is a special case due to the fact that AWS will "discover" the network_interface_id //when it creates the route and return that data. 
In the case of an update, we should ignore the //existing network_interface_id case "instance_id": @@ -425,7 +445,6 @@ func resourceAwsRouteDelete(d *schema.ResourceData, meta interface{}) error { return err } - d.SetId("") return nil } @@ -471,8 +490,8 @@ func resourceAwsRouteExists(d *schema.ResourceData, meta interface{}) (bool, err return false, nil } -// Create an ID for a route -func routeIDHash(d *schema.ResourceData, r *ec2.Route) string { +// Helper: Create an ID for a route +func resourceAwsRouteID(d *schema.ResourceData, r *ec2.Route) string { if r.DestinationIpv6CidrBlock != nil && *r.DestinationIpv6CidrBlock != "" { return fmt.Sprintf("r-%s%d", d.Get("route_table_id").(string), hashcode.String(*r.DestinationIpv6CidrBlock)) @@ -482,7 +501,7 @@ func routeIDHash(d *schema.ResourceData, r *ec2.Route) string { } // Helper: retrieve a route -func findResourceRoute(conn *ec2.EC2, rtbid string, cidr string, ipv6cidr string) (*ec2.Route, error) { +func resourceAwsRouteFindRoute(conn *ec2.EC2, rtbid string, cidr string, ipv6cidr string) (*ec2.Route, error) { routeTableID := rtbid findOpts := &ec2.DescribeRouteTablesInput{ diff --git a/aws/resource_aws_route53_delegation_set.go b/aws/resource_aws_route53_delegation_set.go index 34f96ddf5ea..1b14c6c1c97 100644 --- a/aws/resource_aws_route53_delegation_set.go +++ b/aws/resource_aws_route53_delegation_set.go @@ -22,13 +22,13 @@ func resourceAwsRoute53DelegationSet() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "reference_name": &schema.Schema{ + "reference_name": { Type: schema.TypeString, Optional: true, ForceNew: true, }, - "name_servers": &schema.Schema{ + "name_servers": { Type: schema.TypeList, Elem: &schema.Schema{Type: schema.TypeString}, Computed: true, diff --git a/aws/resource_aws_route53_delegation_set_test.go b/aws/resource_aws_route53_delegation_set_test.go index 6090a283a49..72322a71e7b 100644 --- a/aws/resource_aws_route53_delegation_set_test.go +++ b/aws/resource_aws_route53_delegation_set_test.go @@ -7,7 +7,6 @@ import ( "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/terraform" "github.com/aws/aws-sdk-go/aws" @@ -17,20 +16,27 @@ import ( func TestAccAWSRoute53DelegationSet_basic(t *testing.T) { rString := acctest.RandString(8) refName := fmt.Sprintf("tf_acc_%s", rString) + resourceName := "aws_route53_delegation_set.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_route53_delegation_set.test", + IDRefreshName: resourceName, IDRefreshIgnore: []string{"reference_name"}, Providers: testAccProviders, - CheckDestroy: testAccCheckRoute53ZoneDestroy, + CheckDestroy: testAccCheckRoute53DelegationSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53DelegationSetConfig(refName), Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53DelegationSetExists("aws_route53_delegation_set.test"), + testAccCheckRoute53DelegationSetExists(resourceName), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"reference_name"}, + }, }, }) } @@ -40,32 +46,39 @@ func TestAccAWSRoute53DelegationSet_withZones(t *testing.T) { rString := acctest.RandString(8) refName := fmt.Sprintf("tf_acc_%s", rString) + resourceName := "aws_route53_delegation_set.test" zoneName1 
:= fmt.Sprintf("%s-primary.terraformtest.com", rString) zoneName2 := fmt.Sprintf("%s-secondary.terraformtest.com", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_route53_delegation_set.main", + IDRefreshName: resourceName, IDRefreshIgnore: []string{"reference_name"}, Providers: testAccProviders, - CheckDestroy: testAccCheckRoute53ZoneDestroy, + CheckDestroy: testAccCheckRoute53DelegationSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53DelegationSetWithZonesConfig(refName, zoneName1, zoneName2), Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53DelegationSetExists("aws_route53_delegation_set.main"), + testAccCheckRoute53DelegationSetExists(resourceName), testAccCheckRoute53ZoneExists("aws_route53_zone.primary", &zone), testAccCheckRoute53ZoneExists("aws_route53_zone.secondary", &zone), - testAccCheckRoute53NameServersMatch("aws_route53_delegation_set.main", "aws_route53_zone.primary"), - testAccCheckRoute53NameServersMatch("aws_route53_delegation_set.main", "aws_route53_zone.secondary"), + testAccCheckRoute53NameServersMatch(resourceName, "aws_route53_zone.primary"), + testAccCheckRoute53NameServersMatch(resourceName, "aws_route53_zone.secondary"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"reference_name"}, + }, }, }) } -func testAccCheckRoute53DelegationSetDestroy(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*AWSClient).r53conn +func testAccCheckRoute53DelegationSetDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).r53conn for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_delegation_set" { continue @@ -153,18 +166,18 @@ resource "aws_route53_delegation_set" "test" { func testAccRoute53DelegationSetWithZonesConfig(refName, zoneName1, zoneName2 string) string { return fmt.Sprintf(` -resource "aws_route53_delegation_set" "main" { +resource "aws_route53_delegation_set" "test" { reference_name = "%s" } resource "aws_route53_zone" "primary" { name = "%s" - delegation_set_id = "${aws_route53_delegation_set.main.id}" + delegation_set_id = "${aws_route53_delegation_set.test.id}" } resource "aws_route53_zone" "secondary" { name = "%s" - delegation_set_id = "${aws_route53_delegation_set.main.id}" + delegation_set_id = "${aws_route53_delegation_set.test.id}" } `, refName, zoneName1, zoneName2) } diff --git a/aws/resource_aws_route53_health_check.go b/aws/resource_aws_route53_health_check.go index ce9e8843df7..98f0d15b731 100644 --- a/aws/resource_aws_route53_health_check.go +++ b/aws/resource_aws_route53_health_check.go @@ -32,6 +32,15 @@ func resourceAwsRoute53HealthCheck() *schema.Resource { StateFunc: func(val interface{}) string { return strings.ToUpper(val.(string)) }, + ValidateFunc: validation.StringInSlice([]string{ + route53.HealthCheckTypeCalculated, + route53.HealthCheckTypeCloudwatchMetric, + route53.HealthCheckTypeHttp, + route53.HealthCheckTypeHttpStrMatch, + route53.HealthCheckTypeHttps, + route53.HealthCheckTypeHttpsStrMatch, + route53.HealthCheckTypeTcp, + }, true), }, "failure_threshold": { Type: schema.TypeInt, @@ -333,7 +342,11 @@ func resourceAwsRoute53HealthCheckRead(d *schema.ResourceData, meta interface{}) d.Set("resource_path", updated.ResourcePath) d.Set("measure_latency", updated.MeasureLatency) d.Set("invert_healthcheck", updated.Inverted) - 
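The `child_healthchecks` change just below switches to the provider's existing `flattenStringList` helper. As an illustration only (not the provider's implementation), a helper of that shape converts the AWS SDK's `[]*string` into the `[]interface{}` value the schema layer stores:

```go
package main

import "fmt"

// flattenStrings mirrors the shape of a helper like flattenStringList: it
// converts the AWS SDK's []*string into a []interface{} suitable for
// schema.ResourceData's Set on a list of strings. The nil guard is added
// here purely for the standalone example.
func flattenStrings(in []*string) []interface{} {
	out := make([]interface{}, 0, len(in))
	for _, s := range in {
		if s == nil {
			continue
		}
		out = append(out, *s)
	}
	return out
}

func main() {
	a, b := "abcdef1234567890", "0987654321fedcba"
	fmt.Println(flattenStrings([]*string{&a, nil, &b})) // [abcdef1234567890 0987654321fedcba]
}
```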
d.Set("child_healthchecks", updated.ChildHealthChecks) + + if err := d.Set("child_healthchecks", flattenStringList(updated.ChildHealthChecks)); err != nil { + return fmt.Errorf("error setting child_healthchecks: %s", err) + } + d.Set("child_health_threshold", updated.HealthThreshold) d.Set("insufficient_data_health_status", updated.InsufficientDataHealthStatus) d.Set("enable_sni", updated.EnableSNI) @@ -379,12 +392,3 @@ func resourceAwsRoute53HealthCheckDelete(d *schema.ResourceData, meta interface{ return nil } - -func createChildHealthCheckList(s *schema.Set) (nl []*string) { - l := s.List() - for _, n := range l { - nl = append(nl, aws.String(n.(string))) - } - - return nl -} diff --git a/aws/resource_aws_route53_health_check_test.go b/aws/resource_aws_route53_health_check_test.go index cf4e484db1b..61b30cc91d7 100644 --- a/aws/resource_aws_route53_health_check_test.go +++ b/aws/resource_aws_route53_health_check_test.go @@ -4,14 +4,13 @@ import ( "fmt" "testing" + "github.com/aws/aws-sdk-go/service/route53" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - - "github.com/aws/aws-sdk-go/service/route53" ) func TestAccAWSRoute53HealthCheck_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_health_check.foo", Providers: testAccProviders, @@ -27,6 +26,11 @@ func TestAccAWSRoute53HealthCheck_basic(t *testing.T) { "aws_route53_health_check.foo", "invert_healthcheck", "true"), ), }, + { + ResourceName: "aws_route53_health_check.foo", + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccRoute53HealthCheckConfigUpdate, Check: resource.ComposeTestCheckFunc( @@ -42,7 +46,7 @@ func TestAccAWSRoute53HealthCheck_basic(t *testing.T) { } func TestAccAWSRoute53HealthCheck_withSearchString(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_health_check.foo", Providers: testAccProviders, @@ -58,6 +62,11 @@ func TestAccAWSRoute53HealthCheck_withSearchString(t *testing.T) { "aws_route53_health_check.foo", "search_string", "OK"), ), }, + { + ResourceName: "aws_route53_health_check.foo", + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccRoute53HealthCheckConfigWithSearchStringUpdate, Check: resource.ComposeTestCheckFunc( @@ -73,7 +82,7 @@ func TestAccAWSRoute53HealthCheck_withSearchString(t *testing.T) { } func TestAccAWSRoute53HealthCheck_withChildHealthChecks(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53HealthCheckDestroy, @@ -84,12 +93,17 @@ func TestAccAWSRoute53HealthCheck_withChildHealthChecks(t *testing.T) { testAccCheckRoute53HealthCheckExists("aws_route53_health_check.foo"), ), }, + { + ResourceName: "aws_route53_health_check.foo", + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSRoute53HealthCheck_withHealthCheckRegions(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53HealthCheckDestroy, @@ -102,12 +116,17 @@ func TestAccAWSRoute53HealthCheck_withHealthCheckRegions(t *testing.T) { "aws_route53_health_check.foo", 
"regions.#", "3"), ), }, + { + ResourceName: "aws_route53_health_check.foo", + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSRoute53HealthCheck_IpConfig(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53HealthCheckDestroy, @@ -118,12 +137,17 @@ func TestAccAWSRoute53HealthCheck_IpConfig(t *testing.T) { testAccCheckRoute53HealthCheckExists("aws_route53_health_check.bar"), ), }, + { + ResourceName: "aws_route53_health_check.bar", + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSRoute53HealthCheck_CloudWatchAlarmCheck(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53HealthCheckDestroy, @@ -136,12 +160,17 @@ func TestAccAWSRoute53HealthCheck_CloudWatchAlarmCheck(t *testing.T) { "aws_route53_health_check.foo", "cloudwatch_alarm_name", "cloudwatch-healthcheck-alarm"), ), }, + { + ResourceName: "aws_route53_health_check.foo", + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSRoute53HealthCheck_withSNI(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_health_check.foo", Providers: testAccProviders, @@ -155,6 +184,11 @@ func TestAccAWSRoute53HealthCheck_withSNI(t *testing.T) { "aws_route53_health_check.foo", "enable_sni", "true"), ), }, + { + ResourceName: "aws_route53_health_check.foo", + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccRoute53HealthCheckConfigWithSNIDisabled, Check: resource.ComposeTestCheckFunc( @@ -212,8 +246,6 @@ func testAccCheckRoute53HealthCheckExists(n string) resource.TestCheckFunc { return fmt.Errorf("Not found: %s", n) } - fmt.Print(rs.Primary.ID) - if rs.Primary.ID == "" { return fmt.Errorf("No health check ID is set") } @@ -237,10 +269,6 @@ func testAccCheckRoute53HealthCheckExists(n string) resource.TestCheckFunc { } } -func testUpdateHappened(n string) resource.TestCheckFunc { - return nil -} - const testAccRoute53HealthCheckConfig = ` resource "aws_route53_health_check" "foo" { fqdn = "dev.notexample.com" diff --git a/aws/resource_aws_route53_query_log_test.go b/aws/resource_aws_route53_query_log_test.go index 411476b4812..de9b72b802e 100644 --- a/aws/resource_aws_route53_query_log_test.go +++ b/aws/resource_aws_route53_query_log_test.go @@ -25,12 +25,12 @@ func TestAccAWSRoute53QueryLog_Basic(t *testing.T) { rName := fmt.Sprintf("%s-%s", t.Name(), acctest.RandString(5)) var queryLoggingConfig route53.QueryLoggingConfig - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53QueryLogDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAWSRoute53QueryLogResourceConfigBasic1(rName), Check: resource.ComposeTestCheckFunc( testAccCheckRoute53QueryLogExists(resourceName, &queryLoggingConfig), @@ -53,16 +53,16 @@ func TestAccAWSRoute53QueryLog_Import(t *testing.T) { resourceName := "aws_route53_query_log.test" rName := fmt.Sprintf("%s-%s", t.Name(), acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, 
Providers: testAccProviders, CheckDestroy: testAccCheckRoute53QueryLogDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAWSRoute53QueryLogResourceConfigBasic1(rName), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_route53_record.go b/aws/resource_aws_route53_record.go index a16688a22ac..3a47ec9cdb0 100644 --- a/aws/resource_aws_route53_record.go +++ b/aws/resource_aws_route53_record.go @@ -6,10 +6,10 @@ import ( "fmt" "log" "regexp" + "strconv" "strings" "time" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -20,8 +20,8 @@ import ( "github.com/aws/aws-sdk-go/service/route53" ) -var r53NoRecordsFound = errors.New("No matching Hosted Zone found") -var r53NoHostedZoneFound = errors.New("No matching records found") +var r53NoRecordsFound = errors.New("No matching records found") +var r53NoHostedZoneFound = errors.New("No matching Hosted Zone found") var r53ValidRecordTypes = regexp.MustCompile("^(A|AAAA|CAA|CNAME|MX|NAPTR|NS|PTR|SOA|SPF|SRV|TXT)$") func resourceAwsRoute53Record() *schema.Resource { @@ -278,7 +278,7 @@ func resourceAwsRoute53RecordUpdate(d *schema.ResourceData, meta interface{}) er return err } if zoneRecord.HostedZone == nil { - return fmt.Errorf("[WARN] No Route53 Zone found for id (%s)", zone) + return fmt.Errorf("No Route53 Zone found for id (%s)", zone) } // Build the to be deleted record @@ -352,7 +352,7 @@ func resourceAwsRoute53RecordUpdate(d *schema.ResourceData, meta interface{}) er respRaw, err := changeRoute53RecordSet(conn, req) if err != nil { - return errwrap.Wrapf("[ERR]: Error building changeset: {{err}}", err) + return fmt.Errorf("[ERR]: Error building changeset: %s", err) } changeInfo := respRaw.(*route53.ChangeResourceRecordSetsOutput).ChangeInfo @@ -374,7 +374,8 @@ func resourceAwsRoute53RecordUpdate(d *schema.ResourceData, meta interface{}) er return err } - return resourceAwsRoute53RecordRead(d, meta) + _, err = findRecord(d, meta) + return err } func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) error { @@ -387,7 +388,7 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er return err } if zoneRecord.HostedZone == nil { - return fmt.Errorf("[WARN] No Route53 Zone found for id (%s)", zone) + return fmt.Errorf("No Route53 Zone found for id (%s)", zone) } // Build the record @@ -429,7 +430,7 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er respRaw, err := changeRoute53RecordSet(conn, req) if err != nil { - return errwrap.Wrapf("[ERR]: Error building changeset: {{err}}", err) + return fmt.Errorf("[ERR]: Error building changeset: %s", err) } changeInfo := respRaw.(*route53.ChangeResourceRecordSetsOutput).ChangeInfo @@ -451,7 +452,8 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er return err } - return resourceAwsRoute53RecordRead(d, meta) + _, err = findRecord(d, meta) + return err } func changeRoute53RecordSet(conn *route53.Route53, input *route53.ChangeResourceRecordSetsInput) (interface{}, error) { @@ -531,15 +533,15 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro err = d.Set("records", flattenResourceRecords(record.ResourceRecords, *record.Type)) if err != nil { - return fmt.Errorf("[DEBUG] Error setting records 
for: %s, error: %#v", d.Id(), err) + return fmt.Errorf("Error setting records for: %s, error: %#v", d.Id(), err) } if alias := record.AliasTarget; alias != nil { name := normalizeAwsAliasName(*alias.DNSName) d.Set("alias", []interface{}{ map[string]interface{}{ - "zone_id": *alias.HostedZoneId, - "name": name, + "zone_id": *alias.HostedZoneId, + "name": name, "evaluate_target_health": *alias.EvaluateTargetHealth, }, }) @@ -552,7 +554,7 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro "type": aws.StringValue(record.Failover), }} if err := d.Set("failover_routing_policy", v); err != nil { - return fmt.Errorf("[DEBUG] Error setting failover records for: %s, error: %#v", d.Id(), err) + return fmt.Errorf("Error setting failover records for: %s, error: %#v", d.Id(), err) } } @@ -563,7 +565,7 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro "subdivision": aws.StringValue(record.GeoLocation.SubdivisionCode), }} if err := d.Set("geolocation_routing_policy", v); err != nil { - return fmt.Errorf("[DEBUG] Error setting gelocation records for: %s, error: %#v", d.Id(), err) + return fmt.Errorf("Error setting gelocation records for: %s, error: %#v", d.Id(), err) } } @@ -572,7 +574,7 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro "region": aws.StringValue(record.Region), }} if err := d.Set("latency_routing_policy", v); err != nil { - return fmt.Errorf("[DEBUG] Error setting latency records for: %s, error: %#v", d.Id(), err) + return fmt.Errorf("Error setting latency records for: %s, error: %#v", d.Id(), err) } } @@ -581,13 +583,13 @@ func resourceAwsRoute53RecordRead(d *schema.ResourceData, meta interface{}) erro "weight": aws.Int64Value((record.Weight)), }} if err := d.Set("weighted_routing_policy", v); err != nil { - return fmt.Errorf("[DEBUG] Error setting weighted records for: %s, error: %#v", d.Id(), err) + return fmt.Errorf("Error setting weighted records for: %s, error: %#v", d.Id(), err) } } if record.MultiValueAnswer != nil { if err := d.Set("multivalue_answer_routing_policy", *record.MultiValueAnswer); err != nil { - return fmt.Errorf("[DEBUG] Error setting multivalue answer records for: %s, error: %#v", d.Id(), err) + return fmt.Errorf("Error setting multivalue answer records for: %s, error: %#v", d.Id(), err) } } @@ -630,35 +632,65 @@ func findRecord(d *schema.ResourceData, meta interface{}) (*route53.ResourceReco log.Printf("[DEBUG] Expanded record name: %s", en) d.Set("fqdn", en) + recordName := FQDN(strings.ToLower(en)) + recordType := d.Get("type").(string) + recordSetIdentifier := d.Get("set_identifier") + + // If this isn't a Weighted, Latency, Geo, or Failover resource with + // a SetIdentifier we only need to look at the first record in the response since there can be + // only one + maxItems := "1" + if recordSetIdentifier != "" { + maxItems = "100" + } + lopts := &route53.ListResourceRecordSetsInput{ HostedZoneId: aws.String(cleanZoneID(zone)), - StartRecordName: aws.String(en), - StartRecordType: aws.String(d.Get("type").(string)), + StartRecordName: aws.String(recordName), + StartRecordType: aws.String(recordType), + MaxItems: aws.String(maxItems), } log.Printf("[DEBUG] List resource records sets for zone: %s, opts: %s", zone, lopts) - resp, err := conn.ListResourceRecordSets(lopts) - if err != nil { - return nil, err - } - for _, record := range resp.ResourceRecordSets { - name := cleanRecordName(*record.Name) - if FQDN(strings.ToLower(name)) != 
FQDN(strings.ToLower(*lopts.StartRecordName)) { - continue - } - if strings.ToUpper(*record.Type) != strings.ToUpper(*lopts.StartRecordType) { - continue - } + var record *route53.ResourceRecordSet + + // We need to loop over all records starting from the record we are looking for because + // Weighted, Latency, Geo, and Failover resource record sets have a special option + // called SetIdentifier which allows multiple entries with the same name and type but + // a different SetIdentifier + // For all other records we are setting the maxItems to 1 so that we don't return extra + // unneeded records + err = conn.ListResourceRecordSetsPages(lopts, func(resp *route53.ListResourceRecordSetsOutput, lastPage bool) bool { + for _, recordSet := range resp.ResourceRecordSets { + + responseName := strings.ToLower(cleanRecordName(*recordSet.Name)) + responseType := strings.ToUpper(*recordSet.Type) - if record.SetIdentifier != nil && *record.SetIdentifier != d.Get("set_identifier") { - continue + if recordName != responseName { + continue + } + if recordType != responseType { + continue + } + if recordSet.SetIdentifier != nil && *recordSet.SetIdentifier != recordSetIdentifier { + continue + } + + record = recordSet + return false } - // The only safe return where a record is found - return record, nil + return !lastPage + }) + + if err != nil { + return nil, err + } + if record == nil { + return nil, r53NoRecordsFound } - return nil, r53NoRecordsFound + return record, nil } func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) error { @@ -668,8 +700,6 @@ func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) er if err != nil { switch err { case r53NoHostedZoneFound, r53NoRecordsFound: - log.Printf("[DEBUG] %s for: %s, removing from state file", err, d.Id()) - d.SetId("") return nil default: return err @@ -696,7 +726,7 @@ func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) er respRaw, err := deleteRoute53RecordSet(conn, req) if err != nil { - return errwrap.Wrapf("[ERR]: Error building changeset: {{err}}", err) + return fmt.Errorf("[ERR]: Error building changeset: %s", err) } changeInfo := respRaw.(*route53.ChangeResourceRecordSetsOutput).ChangeInfo @@ -875,16 +905,17 @@ func FQDN(name string) string { } } -// Route 53 stores the "*" wildcard indicator as ASCII 42 and returns the -// octal equivalent, "\\052". Here we look for that, and convert back to "*" -// as needed. +// Route 53 stores certain characters with the octal equivalent in ASCII format. +// This function converts all of these characters back into the original character +// E.g. "*" is stored as "\\052" and "@" as "\\100" + func cleanRecordName(name string) string { str := name - if strings.HasPrefix(name, "\\052") { - str = strings.Replace(name, "\\052", "*", 1) - log.Printf("[DEBUG] Replacing octal \\052 for * in: %s", name) + s, err := strconv.Unquote(`"` + str + `"`) + if err != nil { + return str } - return str + return s } // Check if the current record name contains the zone suffix. 
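The rewritten `cleanRecordName` above leans on `strconv.Unquote` to decode the octal escapes Route 53 returns (`\052` for `*`, `\100` for `@`, `\043` for `#`), which matches the new cases added to `TestCleanRecordName` below. A self-contained illustration of that decoding:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Route 53 returns "*" as \052, "@" as \100 and "#" as \043.
	for _, name := range []string{`\052.nonexample.com`, `\100.nonexample.com`, `\043.nonexample.com`} {
		// Wrapping the value in quotes turns it into a Go string literal, so
		// strconv.Unquote decodes the octal escapes; on error the raw value
		// is kept, as the rewritten cleanRecordName does.
		decoded, err := strconv.Unquote(`"` + name + `"`)
		if err != nil {
			decoded = name
		}
		fmt.Println(decoded) // *.nonexample.com, @.nonexample.com, #.nonexample.com
	}
}
```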
@@ -897,7 +928,7 @@ func expandRecordName(name, zone string) string { if len(name) == 0 { rn = zone } else { - rn = strings.Join([]string{name, zone}, ".") + rn = strings.Join([]string{rn, zone}, ".") } } return rn diff --git a/aws/resource_aws_route53_record_test.go b/aws/resource_aws_route53_record_test.go index 72bb533d25a..b0763841f20 100644 --- a/aws/resource_aws_route53_record_test.go +++ b/aws/resource_aws_route53_record_test.go @@ -22,6 +22,8 @@ func TestCleanRecordName(t *testing.T) { }{ {"www.nonexample.com", "www.nonexample.com"}, {"\\052.nonexample.com", "*.nonexample.com"}, + {"\\100.nonexample.com", "@.nonexample.com"}, + {"\\043.nonexample.com", "#.nonexample.com"}, {"nonexample.com", "nonexample.com"}, } @@ -38,6 +40,7 @@ func TestExpandRecordName(t *testing.T) { Input, Output string }{ {"www", "www.nonexample.com"}, + {"www.", "www.nonexample.com"}, {"dev.www", "dev.www.nonexample.com"}, {"*", "*.nonexample.com"}, {"nonexample.com", "nonexample.com"}, @@ -102,14 +105,56 @@ func TestParseRecordId(t *testing.T) { } } +func TestAccAWSRoute53Record_importBasic(t *testing.T) { + resourceName := "aws_route53_record.default" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53RecordDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRoute53RecordConfig, + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"allow_overwrite", "weight"}, + }, + }, + }) +} + +func TestAccAWSRoute53Record_importUnderscored(t *testing.T) { + resourceName := "aws_route53_record.underscore" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53RecordDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRoute53RecordConfigUnderscoreInName, + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"allow_overwrite", "weight"}, + }, + }, + }) +} + func TestAccAWSRoute53Record_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.default", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.default"), @@ -120,13 +165,13 @@ func TestAccAWSRoute53Record_basic(t *testing.T) { } func TestAccAWSRoute53Record_basic_fqdn(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.default", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordConfig_fqdn, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.default"), @@ -139,7 +184,7 @@ func TestAccAWSRoute53Record_basic_fqdn(t *testing.T) { // create_before_destroy, the record would actually be destroyed, and a // non-empty plan would appear, and the record will fail to exist in // testAccCheckRoute53RecordExists - resource.TestStep{ + { Config: testAccRoute53RecordConfig_fqdn_no_op, Check: resource.ComposeTestCheckFunc( 
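The new `{"www.", "www.nonexample.com"}` case above exercises the `expandRecordName` fix from the previous hunk: the already-trimmed `rn` is now joined with the zone, so a trailing dot no longer yields a double dot. A standalone mirror of that behaviour (a sketch following the function as shown in this diff, not a copy of the provider code):

```go
package main

import (
	"fmt"
	"strings"
)

// expandName mirrors the fixed expandRecordName: the trailing dot is trimmed
// first and the trimmed value (rn), not the raw input, is joined with the
// zone, so "www." expands to "www.nonexample.com" rather than
// "www..nonexample.com".
func expandName(name, zone string) string {
	rn := strings.ToLower(strings.TrimSuffix(name, "."))
	zone = strings.TrimSuffix(zone, ".")
	if !strings.HasSuffix(rn, zone) {
		if len(name) == 0 {
			rn = zone
		} else {
			rn = strings.Join([]string{rn, zone}, ".")
		}
	}
	return rn
}

func main() {
	fmt.Println(expandName("www.", "nonexample.com")) // www.nonexample.com
	fmt.Println(expandName("", "nonexample.com"))     // nonexample.com
}
```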
testAccCheckRoute53RecordExists("aws_route53_record.default"), @@ -150,14 +195,14 @@ func TestAccAWSRoute53Record_basic_fqdn(t *testing.T) { } func TestAccAWSRoute53Record_txtSupport(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.default", IDRefreshIgnore: []string{"zone_id"}, // just for this test Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordConfigTXT, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.default"), @@ -168,13 +213,13 @@ func TestAccAWSRoute53Record_txtSupport(t *testing.T) { } func TestAccAWSRoute53Record_spfSupport(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.default", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordConfigSPF, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.default"), @@ -187,13 +232,13 @@ func TestAccAWSRoute53Record_spfSupport(t *testing.T) { } func TestAccAWSRoute53Record_caaSupport(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.default", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordConfigCAA, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.default"), @@ -206,13 +251,13 @@ func TestAccAWSRoute53Record_caaSupport(t *testing.T) { } func TestAccAWSRoute53Record_generatesSuffix(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.default", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordConfigSuffix, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.default"), @@ -223,13 +268,13 @@ func TestAccAWSRoute53Record_generatesSuffix(t *testing.T) { } func TestAccAWSRoute53Record_wildcard(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.wildcard", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53WildCardRecordConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.wildcard"), @@ -237,7 +282,7 @@ func TestAccAWSRoute53Record_wildcard(t *testing.T) { }, // Cause a change, which will trigger a refresh - resource.TestStep{ + { Config: testAccRoute53WildCardRecordConfigUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.wildcard"), @@ -248,13 +293,13 @@ func TestAccAWSRoute53Record_wildcard(t *testing.T) { } func TestAccAWSRoute53Record_failover(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: 
func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.www-primary", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53FailoverCNAMERecord, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.www-primary"), @@ -266,13 +311,13 @@ func TestAccAWSRoute53Record_failover(t *testing.T) { } func TestAccAWSRoute53Record_weighted_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.www-live", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53WeightedCNAMERecord, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.www-dev"), @@ -287,13 +332,13 @@ func TestAccAWSRoute53Record_weighted_basic(t *testing.T) { func TestAccAWSRoute53Record_alias(t *testing.T) { rs := acctest.RandString(10) config := fmt.Sprintf(testAccRoute53ElbAliasRecord, rs) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.alias", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: config, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.alias"), @@ -306,13 +351,13 @@ func TestAccAWSRoute53Record_alias(t *testing.T) { func TestAccAWSRoute53Record_aliasUppercase(t *testing.T) { rs := acctest.RandString(10) config := fmt.Sprintf(testAccRoute53ElbAliasRecordUppercase, rs) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.alias", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: config, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.alias"), @@ -323,12 +368,12 @@ func TestAccAWSRoute53Record_aliasUppercase(t *testing.T) { } func TestAccAWSRoute53Record_s3_alias(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53S3AliasRecord, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.alias"), @@ -339,13 +384,13 @@ func TestAccAWSRoute53Record_s3_alias(t *testing.T) { } func TestAccAWSRoute53Record_weighted_alias(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.elb_weighted_alias_live", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53WeightedElbAliasRecord, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.elb_weighted_alias_live"), @@ -353,7 +398,7 @@ func TestAccAWSRoute53Record_weighted_alias(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccRoute53WeightedR53AliasRecord, Check: resource.ComposeTestCheckFunc( 
testAccCheckRoute53RecordExists("aws_route53_record.green_origin"), @@ -367,12 +412,12 @@ func TestAccAWSRoute53Record_weighted_alias(t *testing.T) { } func TestAccAWSRoute53Record_geolocation_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53GeolocationCNAMERecord, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.default"), @@ -386,12 +431,12 @@ func TestAccAWSRoute53Record_geolocation_basic(t *testing.T) { } func TestAccAWSRoute53Record_latency_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53LatencyCNAMERecord, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.us-east-1"), @@ -404,13 +449,13 @@ func TestAccAWSRoute53Record_latency_basic(t *testing.T) { } func TestAccAWSRoute53Record_TypeChange(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.sample", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordTypeChangePre, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.sample"), @@ -418,7 +463,7 @@ func TestAccAWSRoute53Record_TypeChange(t *testing.T) { }, // Cause a change, which will trigger a refresh - resource.TestStep{ + { Config: testAccRoute53RecordTypeChangePost, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.sample"), @@ -429,13 +474,13 @@ func TestAccAWSRoute53Record_TypeChange(t *testing.T) { } func TestAccAWSRoute53Record_SetIdentiferChange(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.basic_to_weighted", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordSetIdentifierChangePre, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.basic_to_weighted"), @@ -443,7 +488,7 @@ func TestAccAWSRoute53Record_SetIdentiferChange(t *testing.T) { }, // Cause a change, which will trigger a refresh - resource.TestStep{ + { Config: testAccRoute53RecordSetIdentifierChangePost, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.basic_to_weighted"), @@ -454,13 +499,13 @@ func TestAccAWSRoute53Record_SetIdentiferChange(t *testing.T) { } func TestAccAWSRoute53Record_AliasChange(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.elb_alias_change", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordAliasChangePre, Check: resource.ComposeTestCheckFunc( 
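Most of the churn in this test file is two mechanical changes: `resource.Test` becomes `resource.ParallelTest`, and the redundant `resource.TestStep` element type is dropped inside `Steps`. The latter is legal because Go allows composite-literal element types to be elided inside a slice literal (the same simplification `gofmt -s` suggests); a minimal illustration:

```go
package main

import "fmt"

type step struct{ Name string }

func main() {
	// Within a []step literal the element type may be elided, which is the
	// simplification applied to []resource.TestStep throughout this file.
	verbose := []step{step{Name: "a"}, step{Name: "b"}}
	concise := []step{{Name: "a"}, {Name: "b"}}
	fmt.Println(len(verbose) == len(concise)) // true
}
```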
testAccCheckRoute53RecordExists("aws_route53_record.elb_alias_change"), @@ -468,7 +513,7 @@ func TestAccAWSRoute53Record_AliasChange(t *testing.T) { }, // Cause a change, which will trigger a refresh - resource.TestStep{ + { Config: testAccRoute53RecordAliasChangePost, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.elb_alias_change"), @@ -479,13 +524,13 @@ func TestAccAWSRoute53Record_AliasChange(t *testing.T) { } func TestAccAWSRoute53Record_empty(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.empty", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordConfigEmptyName, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.empty"), @@ -497,13 +542,13 @@ func TestAccAWSRoute53Record_empty(t *testing.T) { // Regression test for https://github.com/hashicorp/terraform/issues/8423 func TestAccAWSRoute53Record_longTXTrecord(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route53_record.long_txt", Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53RecordConfigLongTxtRecord, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.long_txt"), @@ -514,12 +559,12 @@ func TestAccAWSRoute53Record_longTXTrecord(t *testing.T) { } func TestAccAWSRoute53Record_multivalue_answer_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53MultiValueAnswerARecord, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53RecordExists("aws_route53_record.www-server1"), @@ -531,7 +576,7 @@ func TestAccAWSRoute53Record_multivalue_answer_basic(t *testing.T) { } func TestAccAWSRoute53Record_allowOverwrite(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53RecordDestroy, @@ -674,34 +719,6 @@ resource "aws_route53_record" "default" { } ` -const testAccRoute53RecordConfigCNAMERecord = ` -resource "aws_route53_zone" "main" { - name = "notexample.com" -} - -resource "aws_route53_record" "default" { - zone_id = "${aws_route53_zone.main.zone_id}" - name = "host123.domain" - type = "CNAME" - ttl = "30" - records = ["1.2.3.4"] -} -` - -const testAccRoute53RecordConfigCNAMERecordUpdateToCNAME = ` -resource "aws_route53_zone" "main" { - name = "notexample.com" -} - -resource "aws_route53_record" "default" { - zone_id = "${aws_route53_zone.main.zone_id}" - name = "host123.domain" - type = "A" - ttl = "30" - records = ["1.2.3.4"] -} -` - const testAccRoute53RecordConfig_fqdn = ` resource "aws_route53_zone" "main" { name = "notexample.com" @@ -738,12 +755,6 @@ resource "aws_route53_record" "default" { } ` -const testAccRoute53RecordNoConfig = ` -resource "aws_route53_zone" "main" { - name = "notexample.com" -} -` - const testAccRoute53RecordConfigSuffix = ` resource "aws_route53_zone" 
"main" { name = "notexample.com" @@ -1085,32 +1096,6 @@ resource "aws_elb" "main" { } ` -const testAccRoute53AliasRecord = ` -resource "aws_route53_zone" "main" { - name = "notexample.com" -} - -resource "aws_route53_record" "origin" { - zone_id = "${aws_route53_zone.main.zone_id}" - name = "origin" - type = "A" - ttl = 5 - records = ["127.0.0.1"] -} - -resource "aws_route53_record" "alias" { - zone_id = "${aws_route53_zone.main.zone_id}" - name = "www" - type = "A" - - alias { - zone_id = "${aws_route53_zone.main.zone_id}" - name = "${aws_route53_record.origin.name}.${aws_route53_zone.main.name}" - evaluate_target_health = true - } -} -` - const testAccRoute53S3AliasRecord = ` resource "aws_route53_zone" "main" { name = "notexample.com" diff --git a/aws/resource_aws_route53_zone.go b/aws/resource_aws_route53_zone.go index 0373db6727b..8e60d8ce0f6 100644 --- a/aws/resource_aws_route53_zone.go +++ b/aws/resource_aws_route53_zone.go @@ -1,19 +1,19 @@ package aws import ( + "bytes" "fmt" "log" "sort" "strings" "time" - "github.com/hashicorp/errwrap" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/helper/schema" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/route53" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsRoute53Zone() *schema.Resource { @@ -27,45 +27,74 @@ func resourceAwsRoute53Zone() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: suppressRoute53ZoneNameWithTrailingDot, }, - "comment": &schema.Schema{ + "comment": { Type: schema.TypeString, Optional: true, Default: "Managed by Terraform", }, - "vpc_id": &schema.Schema{ + "vpc": { + Type: schema.TypeSet, + Optional: true, + // Deprecated: Remove Computed: true in next major version of the provider + Computed: true, + MinItems: 1, + ConflictsWith: []string{"delegation_set_id", "vpc_id", "vpc_region"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "vpc_id": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "vpc_region": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + Set: route53HostedZoneVPCHash, + }, + + "vpc_id": { Type: schema.TypeString, Optional: true, ForceNew: true, - ConflictsWith: []string{"delegation_set_id"}, + Computed: true, + ConflictsWith: []string{"delegation_set_id", "vpc"}, + Deprecated: "use 'vpc' attribute instead", }, - "vpc_region": &schema.Schema{ - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + "vpc_region": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + ConflictsWith: []string{"delegation_set_id", "vpc"}, + Deprecated: "use 'vpc' attribute instead", }, - "zone_id": &schema.Schema{ + "zone_id": { Type: schema.TypeString, Computed: true, }, - "delegation_set_id": &schema.Schema{ + "delegation_set_id": { Type: schema.TypeString, Optional: true, ForceNew: true, ConflictsWith: []string{"vpc_id"}, }, - "name_servers": &schema.Schema{ + "name_servers": { Type: schema.TypeList, Elem: &schema.Schema{Type: 
schema.TypeString}, Computed: true, @@ -73,7 +102,7 @@ func resourceAwsRoute53Zone() *schema.Resource { "tags": tagsSchema(), - "force_destroy": &schema.Schema{ + "force_destroy": { Type: schema.TypeBool, Optional: true, Default: false, @@ -83,141 +112,171 @@ func resourceAwsRoute53Zone() *schema.Resource { } func resourceAwsRoute53ZoneCreate(d *schema.ResourceData, meta interface{}) error { - r53 := meta.(*AWSClient).r53conn + conn := meta.(*AWSClient).r53conn + region := meta.(*AWSClient).region - req := &route53.CreateHostedZoneInput{ - Name: aws.String(d.Get("name").(string)), - HostedZoneConfig: &route53.HostedZoneConfig{Comment: aws.String(d.Get("comment").(string))}, - CallerReference: aws.String(time.Now().Format(time.RFC3339Nano)), + input := &route53.CreateHostedZoneInput{ + CallerReference: aws.String(resource.UniqueId()), + Name: aws.String(d.Get("name").(string)), + HostedZoneConfig: &route53.HostedZoneConfig{ + Comment: aws.String(d.Get("comment").(string)), + }, } - if v := d.Get("vpc_id"); v != "" { - req.VPC = &route53.VPC{ - VPCId: aws.String(v.(string)), - VPCRegion: aws.String(meta.(*AWSClient).region), + + if v, ok := d.GetOk("delegation_set_id"); ok { + input.DelegationSetId = aws.String(v.(string)) + } + + // Private Route53 Hosted Zones can only be created with their first VPC association, + // however we need to associate the remaining after creation. + var vpcs []*route53.VPC + vpcs = expandRoute53VPCs(d.Get("vpc").(*schema.Set).List(), region) + + // Backwards compatibility + if vpcID, ok := d.GetOk("vpc_id"); ok { + vpc := &route53.VPC{ + VPCId: aws.String(vpcID.(string)), + VPCRegion: aws.String(region), } - if w := d.Get("vpc_region"); w != "" { - req.VPC.VPCRegion = aws.String(w.(string)) + + if vpcRegion, ok := d.GetOk("vpc_region"); ok { + vpc.VPCRegion = aws.String(vpcRegion.(string)) } - d.Set("vpc_region", req.VPC.VPCRegion) + + vpcs = []*route53.VPC{vpc} } - if v, ok := d.GetOk("delegation_set_id"); ok { - req.DelegationSetId = aws.String(v.(string)) + if len(vpcs) > 0 { + input.VPC = vpcs[0] } - log.Printf("[DEBUG] Creating Route53 hosted zone: %s", *req.Name) - var err error - resp, err := r53.CreateHostedZone(req) + log.Printf("[DEBUG] Creating Route53 hosted zone: %s", input) + output, err := conn.CreateHostedZone(input) + if err != nil { - return err + return fmt.Errorf("error creating Route53 Hosted Zone: %s", err) } - // Store the zone_id - zone := cleanZoneID(*resp.HostedZone.Id) - d.Set("zone_id", zone) - d.SetId(zone) + d.SetId(cleanZoneID(aws.StringValue(output.HostedZone.Id))) - // Wait until we are done initializing - wait := resource.StateChangeConf{ - Delay: 30 * time.Second, - Pending: []string{"PENDING"}, - Target: []string{"INSYNC"}, - Timeout: 15 * time.Minute, - MinTimeout: 2 * time.Second, - Refresh: func() (result interface{}, state string, err error) { - changeRequest := &route53.GetChangeInput{ - Id: aws.String(cleanChangeID(*resp.ChangeInfo.Id)), - } - return resourceAwsGoRoute53Wait(r53, changeRequest) - }, + if err := route53WaitForChangeSynchronization(conn, cleanChangeID(aws.StringValue(output.ChangeInfo.Id))); err != nil { + return fmt.Errorf("error waiting for Route53 Hosted Zone (%s) creation: %s", d.Id(), err) } - _, err = wait.WaitForState() - if err != nil { - return err + + if err := setTagsR53(conn, d, route53.TagResourceTypeHostedzone); err != nil { + return fmt.Errorf("error setting tags for Route53 Hosted Zone (%s): %s", d.Id(), err) + } + + // Associate additional VPCs beyond the first + if len(vpcs) > 1 { + for 
_, vpc := range vpcs[1:] { + err := route53HostedZoneVPCAssociate(conn, d.Id(), vpc) + + if err != nil { + return err + } + } } - return resourceAwsRoute53ZoneUpdate(d, meta) + + return resourceAwsRoute53ZoneRead(d, meta) } func resourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) error { - r53 := meta.(*AWSClient).r53conn - zone, err := r53.GetHostedZone(&route53.GetHostedZoneInput{Id: aws.String(d.Id())}) + conn := meta.(*AWSClient).r53conn + + input := &route53.GetHostedZoneInput{ + Id: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Getting Route53 Hosted Zone: %s", input) + output, err := conn.GetHostedZone(input) + + if isAWSErr(err, route53.ErrCodeNoSuchHostedZone, "") { + log.Printf("[WARN] Route53 Hosted Zone (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + if err != nil { - // Handle a deleted zone - if r53err, ok := err.(awserr.Error); ok && r53err.Code() == "NoSuchHostedZone" { - d.SetId("") - return nil - } - return err + return fmt.Errorf("error getting Route53 Hosted Zone (%s): %s", d.Id(), err) } - // In the import case this will be empty - if _, ok := d.GetOk("zone_id"); !ok { - d.Set("zone_id", d.Id()) + if output == nil || output.HostedZone == nil { + log.Printf("[WARN] Route53 Hosted Zone (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil } - if _, ok := d.GetOk("name"); !ok { - d.Set("name", zone.HostedZone.Name) + + d.Set("comment", "") + d.Set("delegation_set_id", "") + d.Set("name", output.HostedZone.Name) + d.Set("zone_id", cleanZoneID(aws.StringValue(output.HostedZone.Id))) + + var nameServers []string + + if output.DelegationSet != nil { + d.Set("delegation_set_id", cleanDelegationSetId(aws.StringValue(output.DelegationSet.Id))) + + nameServers = aws.StringValueSlice(output.DelegationSet.NameServers) } - if !*zone.HostedZone.Config.PrivateZone { - ns := make([]string, len(zone.DelegationSet.NameServers)) - for i := range zone.DelegationSet.NameServers { - ns[i] = *zone.DelegationSet.NameServers[i] - } - sort.Strings(ns) - if err := d.Set("name_servers", ns); err != nil { - return fmt.Errorf("[DEBUG] Error setting name servers for: %s, error: %#v", d.Id(), err) - } - } else { - ns, err := getNameServers(d.Id(), d.Get("name").(string), r53) - if err != nil { - return err - } - if err := d.Set("name_servers", ns); err != nil { - return fmt.Errorf("[DEBUG] Error setting name servers for: %s, error: %#v", d.Id(), err) - } + if output.HostedZone.Config != nil { + d.Set("comment", output.HostedZone.Config.Comment) - // In the import case we just associate it with the first VPC - if _, ok := d.GetOk("vpc_id"); !ok { - if len(zone.VPCs) > 1 { - return fmt.Errorf( - "Can't import a route53_zone with more than one VPC attachment") - } + if aws.BoolValue(output.HostedZone.Config.PrivateZone) { + var err error + nameServers, err = getNameServers(d.Id(), d.Get("name").(string), conn) - if len(zone.VPCs) > 0 { - d.Set("vpc_id", zone.VPCs[0].VPCId) - d.Set("vpc_region", zone.VPCs[0].VPCRegion) + if err != nil { + return fmt.Errorf("error getting Route53 Hosted Zone (%s) name servers: %s", d.Id(), err) } } + } - var associatedVPC *route53.VPC - for _, vpc := range zone.VPCs { - if *vpc.VPCId == d.Get("vpc_id") { - associatedVPC = vpc - break - } - } - if associatedVPC == nil { - return fmt.Errorf("[DEBUG] VPC: %v is not associated with Zone: %v", d.Get("vpc_id"), d.Id()) - } + sort.Strings(nameServers) + if err := d.Set("name_servers", nameServers); err != nil { + return fmt.Errorf("error setting name_servers: 
%s", err) } - if zone.DelegationSet != nil && zone.DelegationSet.Id != nil { - d.Set("delegation_set_id", cleanDelegationSetId(*zone.DelegationSet.Id)) + // Backwards compatibility: only set vpc_id/vpc_region if either is true: + // * Previously configured + // * Only one VPC association + existingVpcID := d.Get("vpc_id").(string) + + // Detect drift in configuration + d.Set("vpc_id", "") + d.Set("vpc_region", "") + + if len(output.VPCs) == 1 && output.VPCs[0] != nil { + d.Set("vpc_id", output.VPCs[0].VPCId) + d.Set("vpc_region", output.VPCs[0].VPCRegion) + } else if len(output.VPCs) > 1 { + for _, vpc := range output.VPCs { + if vpc == nil { + continue + } + if aws.StringValue(vpc.VPCId) == existingVpcID { + d.Set("vpc_id", vpc.VPCId) + d.Set("vpc_region", vpc.VPCRegion) + } + } } - if zone.HostedZone != nil && zone.HostedZone.Config != nil && zone.HostedZone.Config.Comment != nil { - d.Set("comment", zone.HostedZone.Config.Comment) + if err := d.Set("vpc", flattenRoute53VPCs(output.VPCs)); err != nil { + return fmt.Errorf("error setting vpc: %s", err) } // get tags req := &route53.ListTagsForResourceInput{ ResourceId: aws.String(d.Id()), - ResourceType: aws.String("hostedzone"), + ResourceType: aws.String(route53.TagResourceTypeHostedzone), } - resp, err := r53.ListTagsForResource(req) + log.Printf("[DEBUG] Listing tags for Route53 Hosted Zone: %s", req) + resp, err := conn.ListTagsForResource(req) + if err != nil { - return err + return fmt.Errorf("error listing tags for Route53 Hosted Zone (%s): %s", d.Id(), err) } var tags []*route53.Tag @@ -234,53 +293,95 @@ func resourceAwsRoute53ZoneRead(d *schema.ResourceData, meta interface{}) error func resourceAwsRoute53ZoneUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).r53conn + region := meta.(*AWSClient).region d.Partial(true) if d.HasChange("comment") { - zoneInput := route53.UpdateHostedZoneCommentInput{ + input := route53.UpdateHostedZoneCommentInput{ Id: aws.String(d.Id()), Comment: aws.String(d.Get("comment").(string)), } - _, err := conn.UpdateHostedZoneComment(&zoneInput) + _, err := conn.UpdateHostedZoneComment(&input) + if err != nil { - return err - } else { - d.SetPartial("comment") + return fmt.Errorf("error updating Route53 Hosted Zone (%s) comment: %s", d.Id(), err) } + + d.SetPartial("comment") } - if err := setTagsR53(conn, d, "hostedzone"); err != nil { - return err - } else { + if d.HasChange("tags") { + if err := setTagsR53(conn, d, route53.TagResourceTypeHostedzone); err != nil { + return err + } + d.SetPartial("tags") } + if d.HasChange("vpc") { + o, n := d.GetChange("vpc") + oldVPCs := o.(*schema.Set) + newVPCs := n.(*schema.Set) + + // VPCs cannot be empty, so add first and then remove + for _, vpcRaw := range newVPCs.Difference(oldVPCs).List() { + if vpcRaw == nil { + continue + } + + vpc := expandRoute53VPC(vpcRaw.(map[string]interface{}), region) + err := route53HostedZoneVPCAssociate(conn, d.Id(), vpc) + + if err != nil { + return err + } + } + + for _, vpcRaw := range oldVPCs.Difference(newVPCs).List() { + if vpcRaw == nil { + continue + } + + vpc := expandRoute53VPC(vpcRaw.(map[string]interface{}), region) + err := route53HostedZoneVPCDisassociate(conn, d.Id(), vpc) + + if err != nil { + return err + } + } + + d.SetPartial("vpc") + } + d.Partial(false) return resourceAwsRoute53ZoneRead(d, meta) } func resourceAwsRoute53ZoneDelete(d *schema.ResourceData, meta interface{}) error { - r53 := meta.(*AWSClient).r53conn + conn := meta.(*AWSClient).r53conn if 
d.Get("force_destroy").(bool) { - if err := deleteAllRecordsInHostedZoneId(d.Id(), d.Get("name").(string), r53); err != nil { - return errwrap.Wrapf("{{err}}", err) + if err := deleteAllRecordsInHostedZoneId(d.Id(), d.Get("name").(string), conn); err != nil { + return fmt.Errorf("error deleting records in Route53 Hosted Zone (%s): %s", d.Id(), err) } } - log.Printf("[DEBUG] Deleting Route53 hosted zone: %s (ID: %s)", - d.Get("name").(string), d.Id()) - _, err := r53.DeleteHostedZone(&route53.DeleteHostedZoneInput{Id: aws.String(d.Id())}) + input := &route53.DeleteHostedZoneInput{ + Id: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting Route53 Hosted Zone: %s", input) + _, err := conn.DeleteHostedZone(input) + + if isAWSErr(err, route53.ErrCodeNoSuchHostedZone, "") { + return nil + } + if err != nil { - if r53err, ok := err.(awserr.Error); ok && r53err.Code() == "NoSuchHostedZone" { - log.Printf("[DEBUG] No matching Route 53 Zone found for: %s, removing from state file", d.Id()) - d.SetId("") - return nil - } - return err + return fmt.Errorf("error deleting Route53 Hosted Zone (%s): %s", d.Id(), err) } return nil @@ -389,3 +490,129 @@ func getNameServers(zoneId string, zoneName string, r53 *route53.Route53) ([]str sort.Strings(ns) return ns, nil } + +func expandRoute53VPCs(l []interface{}, currentRegion string) []*route53.VPC { + vpcs := []*route53.VPC{} + + for _, mRaw := range l { + if mRaw == nil { + continue + } + + vpcs = append(vpcs, expandRoute53VPC(mRaw.(map[string]interface{}), currentRegion)) + } + + return vpcs +} + +func expandRoute53VPC(m map[string]interface{}, currentRegion string) *route53.VPC { + vpc := &route53.VPC{ + VPCId: aws.String(m["vpc_id"].(string)), + VPCRegion: aws.String(currentRegion), + } + + if v, ok := m["vpc_region"]; ok && v.(string) != "" { + vpc.VPCRegion = aws.String(v.(string)) + } + + return vpc +} + +func flattenRoute53VPCs(vpcs []*route53.VPC) []interface{} { + l := []interface{}{} + + for _, vpc := range vpcs { + if vpc == nil { + continue + } + + m := map[string]interface{}{ + "vpc_id": aws.StringValue(vpc.VPCId), + "vpc_region": aws.StringValue(vpc.VPCRegion), + } + + l = append(l, m) + } + + return l +} + +func route53HostedZoneVPCAssociate(conn *route53.Route53, zoneID string, vpc *route53.VPC) error { + input := &route53.AssociateVPCWithHostedZoneInput{ + HostedZoneId: aws.String(zoneID), + VPC: vpc, + } + + log.Printf("[DEBUG] Associating Route53 Hosted Zone with VPC: %s", input) + output, err := conn.AssociateVPCWithHostedZone(input) + + if err != nil { + return fmt.Errorf("error associating Route53 Hosted Zone (%s) to VPC (%s): %s", zoneID, aws.StringValue(vpc.VPCId), err) + } + + if err := route53WaitForChangeSynchronization(conn, cleanChangeID(aws.StringValue(output.ChangeInfo.Id))); err != nil { + return fmt.Errorf("error waiting for Route53 Hosted Zone (%s) association to VPC (%s): %s", zoneID, aws.StringValue(vpc.VPCId), err) + } + + return nil +} + +func route53HostedZoneVPCDisassociate(conn *route53.Route53, zoneID string, vpc *route53.VPC) error { + input := &route53.DisassociateVPCFromHostedZoneInput{ + HostedZoneId: aws.String(zoneID), + VPC: vpc, + } + + log.Printf("[DEBUG] Disassociating Route53 Hosted Zone with VPC: %s", input) + output, err := conn.DisassociateVPCFromHostedZone(input) + + if err != nil { + return fmt.Errorf("error disassociating Route53 Hosted Zone (%s) from VPC (%s): %s", zoneID, aws.StringValue(vpc.VPCId), err) + } + + if err := route53WaitForChangeSynchronization(conn, 
cleanChangeID(aws.StringValue(output.ChangeInfo.Id))); err != nil { + return fmt.Errorf("error waiting for Route53 Hosted Zone (%s) disassociation from VPC (%s): %s", zoneID, aws.StringValue(vpc.VPCId), err) + } + + return nil +} + +func route53HostedZoneVPCHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%s-", m["vpc_id"].(string))) + + return hashcode.String(buf.String()) +} + +func route53WaitForChangeSynchronization(conn *route53.Route53, changeID string) error { + conf := resource.StateChangeConf{ + Delay: 30 * time.Second, + Pending: []string{route53.ChangeStatusPending}, + Target: []string{route53.ChangeStatusInsync}, + Timeout: 15 * time.Minute, + MinTimeout: 2 * time.Second, + Refresh: func() (result interface{}, state string, err error) { + input := &route53.GetChangeInput{ + Id: aws.String(changeID), + } + + log.Printf("[DEBUG] Getting Route53 Change status: %s", input) + output, err := conn.GetChange(input) + + if err != nil { + return nil, "UNKNOWN", err + } + + if output == nil || output.ChangeInfo == nil { + return nil, "UNKNOWN", fmt.Errorf("Route53 GetChange response empty for ID: %s", changeID) + } + + return true, aws.StringValue(output.ChangeInfo.Status), nil + }, + } + + _, err := conf.WaitForState() + + return err +} diff --git a/aws/resource_aws_route53_zone_association.go b/aws/resource_aws_route53_zone_association.go index c416095ec9a..c5763c29e57 100644 --- a/aws/resource_aws_route53_zone_association.go +++ b/aws/resource_aws_route53_zone_association.go @@ -22,17 +22,17 @@ func resourceAwsRoute53ZoneAssociation() *schema.Resource { Delete: resourceAwsRoute53ZoneAssociationDelete, Schema: map[string]*schema.Schema{ - "zone_id": &schema.Schema{ + "zone_id": { Type: schema.TypeString, Required: true, }, - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Required: true, }, - "vpc_region": &schema.Schema{ + "vpc_region": { Type: schema.TypeString, Optional: true, Computed: true, diff --git a/aws/resource_aws_route53_zone_association_test.go b/aws/resource_aws_route53_zone_association_test.go index 09c48700c6e..a1e6bf56492 100644 --- a/aws/resource_aws_route53_zone_association_test.go +++ b/aws/resource_aws_route53_zone_association_test.go @@ -15,12 +15,12 @@ import ( func TestAccAWSRoute53ZoneAssociation_basic(t *testing.T) { var zone route53.HostedZone - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRoute53ZoneAssociationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53ZoneAssociationConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53ZoneAssociationExists("aws_route53_zone_association.foobar", &zone), @@ -37,12 +37,12 @@ func TestAccAWSRoute53ZoneAssociation_region(t *testing.T) { // check for the instances in each region var providers []*schema.Provider - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, ProviderFactories: testAccProviderFactories(&providers), CheckDestroy: testAccCheckWithProviders(testAccCheckRoute53ZoneAssociationDestroyWithProvider, &providers), Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53ZoneAssociationRegionConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRoute53ZoneAssociationExistsWithProvider("aws_route53_zone_association.foobar", &zone, diff --git 
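The association and disassociation helpers added above both block on `route53WaitForChangeSynchronization`, which wraps `route53.GetChange` in a `resource.StateChangeConf` refresh loop until the change reports `INSYNC`. Below is a minimal sketch of that polling pattern in isolation; it assumes an already configured `*route53.Route53` client, and `waitForRoute53Change` is an illustrative name rather than code from this change.

```go
package example

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/route53"
	"github.com/hashicorp/terraform/helper/resource"
)

// waitForRoute53Change polls GetChange until the change leaves PENDING and
// reaches INSYNC, mirroring the waiter added in resource_aws_route53_zone.go.
func waitForRoute53Change(conn *route53.Route53, changeID string) error {
	stateConf := &resource.StateChangeConf{
		Delay:      30 * time.Second,
		Pending:    []string{route53.ChangeStatusPending},
		Target:     []string{route53.ChangeStatusInsync},
		Timeout:    15 * time.Minute,
		MinTimeout: 2 * time.Second,
		Refresh: func() (interface{}, string, error) {
			output, err := conn.GetChange(&route53.GetChangeInput{
				Id: aws.String(changeID),
			})
			if err != nil {
				return nil, "UNKNOWN", err
			}
			if output == nil || output.ChangeInfo == nil {
				return nil, "UNKNOWN", fmt.Errorf("empty GetChange response for %s", changeID)
			}
			// The first return value only needs to be non-nil for the waiter
			// to treat the refresh as successful.
			return output.ChangeInfo, aws.StringValue(output.ChangeInfo.Status), nil
		},
	}

	_, err := stateConf.WaitForState()
	return err
}
```

Keeping the waiter in one place lets the associate and disassociate paths share the same delay and timeout behaviour.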
a/aws/resource_aws_route53_zone_test.go b/aws/resource_aws_route53_zone_test.go index d02ea70111b..f863f4eb438 100644 --- a/aws/resource_aws_route53_zone_test.go +++ b/aws/resource_aws_route53_zone_test.go @@ -67,154 +67,408 @@ func TestCleanChangeID(t *testing.T) { func TestAccAWSRoute53Zone_basic(t *testing.T) { var zone route53.GetHostedZoneOutput - var td route53.ResourceTagSet rString := acctest.RandString(8) + resourceName := "aws_route53_zone.test" zoneName := fmt.Sprintf("%s.terraformtest.com", rString) - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_route53_zone.main", - Providers: testAccProviders, - CheckDestroy: testAccCheckRoute53ZoneDestroy, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRoute53ZoneConfig(zoneName), Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53ZoneExists("aws_route53_zone.main", &zone), - testAccLoadTagsR53(&zone, &td), - testAccCheckTagsR53(&td.Tags, "foo", "bar"), + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("%s.", zoneName)), + resource.TestCheckResourceAttr(resourceName, "name_servers.#", "4"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "vpc.#", "0"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, }, }) } -func TestAccAWSRoute53Zone_forceDestroy(t *testing.T) { - var zone, zoneWithDot route53.GetHostedZoneOutput +func TestAccAWSRoute53Zone_multiple(t *testing.T) { + var zone0, zone1, zone2, zone3, zone4 route53.GetHostedZoneOutput + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRoute53ZoneConfigMultiple(), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists("aws_route53_zone.test.0", &zone0), + testAccCheckDomainName(&zone0, "subdomain0.terraformtest.com."), + testAccCheckRoute53ZoneExists("aws_route53_zone.test.1", &zone1), + testAccCheckDomainName(&zone1, "subdomain1.terraformtest.com."), + testAccCheckRoute53ZoneExists("aws_route53_zone.test.2", &zone2), + testAccCheckDomainName(&zone2, "subdomain2.terraformtest.com."), + testAccCheckRoute53ZoneExists("aws_route53_zone.test.3", &zone3), + testAccCheckDomainName(&zone3, "subdomain3.terraformtest.com."), + testAccCheckRoute53ZoneExists("aws_route53_zone.test.4", &zone4), + testAccCheckDomainName(&zone4, "subdomain4.terraformtest.com."), + ), + }, + }, + }) +} + +func TestAccAWSRoute53Zone_Comment(t *testing.T) { + var zone route53.GetHostedZoneOutput rString := acctest.RandString(8) - zoneName1 := fmt.Sprintf("%s-one.terraformtest.com", rString) - zoneName2 := fmt.Sprintf("%s-two.terraformtest.com", rString) + resourceName := "aws_route53_zone.test" + zoneName := fmt.Sprintf("%s.terraformtest.com", rString) - // record the initialized providers so that we can use them to - // check for the instances in each region - var providers []*schema.Provider + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, + Steps: 
[]resource.TestStep{ + { + Config: testAccRoute53ZoneConfigComment(zoneName, "comment1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "comment", "comment1"), + ), + }, + { + Config: testAccRoute53ZoneConfigComment(zoneName, "comment2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "comment", "comment2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, + }, + }) +} - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_route53_zone.destroyable", - ProviderFactories: testAccProviderFactories(&providers), - CheckDestroy: testAccCheckWithProviders(testAccCheckRoute53ZoneDestroyWithProvider, &providers), +func TestAccAWSRoute53Zone_DelegationSetID(t *testing.T) { + var zone route53.GetHostedZoneOutput + + rString := acctest.RandString(8) + delegationSetResourceName := "aws_route53_delegation_set.test" + resourceName := "aws_route53_zone.test" + zoneName := fmt.Sprintf("%s.terraformtest.com", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccRoute53ZoneConfig_forceDestroy(zoneName1, zoneName2), + { + Config: testAccRoute53ZoneConfigDelegationSetID(zoneName), Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53ZoneExistsWithProvider("aws_route53_zone.destroyable", &zone, - testAccAwsRegionProviderFunc("us-west-2", &providers)), - // Add >100 records to verify pagination works ok - testAccCreateRandomRoute53RecordsInZoneIdWithProvider( - testAccAwsRegionProviderFunc("us-west-2", &providers), &zone, 100), - testAccCreateRandomRoute53RecordsInZoneIdWithProvider( - testAccAwsRegionProviderFunc("us-west-2", &providers), &zone, 5), + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttrPair(resourceName, "delegation_set_id", delegationSetResourceName, "id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, + }, + }) +} - testAccCheckRoute53ZoneExistsWithProvider("aws_route53_zone.with_trailing_dot", &zoneWithDot, - testAccAwsRegionProviderFunc("us-west-2", &providers)), +func TestAccAWSRoute53Zone_ForceDestroy(t *testing.T) { + var zone route53.GetHostedZoneOutput + + rString := acctest.RandString(8) + resourceName := "aws_route53_zone.test" + zoneName := fmt.Sprintf("%s.terraformtest.com", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRoute53ZoneConfigForceDestroy(zoneName), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), // Add >100 records to verify pagination works ok - testAccCreateRandomRoute53RecordsInZoneIdWithProvider( - testAccAwsRegionProviderFunc("us-west-2", &providers), &zoneWithDot, 100), - testAccCreateRandomRoute53RecordsInZoneIdWithProvider( - testAccAwsRegionProviderFunc("us-west-2", &providers), &zoneWithDot, 5), + testAccCreateRandomRoute53RecordsInZoneId(&zone, 100), + 
testAccCreateRandomRoute53RecordsInZoneId(&zone, 5), ), }, }, }) } -func TestAccAWSRoute53Zone_updateComment(t *testing.T) { +func TestAccAWSRoute53Zone_ForceDestroy_TrailingPeriod(t *testing.T) { var zone route53.GetHostedZoneOutput - var td route53.ResourceTagSet rString := acctest.RandString(8) + resourceName := "aws_route53_zone.test" zoneName := fmt.Sprintf("%s.terraformtest.com", rString) - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_route53_zone.main", - Providers: testAccProviders, - CheckDestroy: testAccCheckRoute53ZoneDestroy, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccRoute53ZoneConfig(zoneName), + { + Config: testAccRoute53ZoneConfigForceDestroyTrailingPeriod(zoneName), Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53ZoneExists("aws_route53_zone.main", &zone), - testAccLoadTagsR53(&zone, &td), - testAccCheckTagsR53(&td.Tags, "foo", "bar"), - resource.TestCheckResourceAttr( - "aws_route53_zone.main", "comment", "Custom comment"), + testAccCheckRoute53ZoneExists(resourceName, &zone), + // Add >100 records to verify pagination works ok + testAccCreateRandomRoute53RecordsInZoneId(&zone, 100), + testAccCreateRandomRoute53RecordsInZoneId(&zone, 5), ), }, + }, + }) +} + +func TestAccAWSRoute53Zone_Tags(t *testing.T) { + var zone route53.GetHostedZoneOutput + var td route53.ResourceTagSet + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_route53_zone.test" + zoneName := fmt.Sprintf("%s.terraformtest.com", rName) - resource.TestStep{ - Config: testAccRoute53ZoneConfigUpdateComment(zoneName), + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRoute53ZoneConfigTagsSingle(zoneName, "tag1key", "tag1value"), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1key", "tag1value"), + testAccLoadTagsR53(&zone, &td), + testAccCheckTagsR53(&td.Tags, "tag1key", "tag1value"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, + { + Config: testAccRoute53ZoneConfigTagsMultiple(zoneName, "tag1key", "tag1valueupdated", "tag2key", "tag2value"), Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53ZoneExists("aws_route53_zone.main", &zone), + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1key", "tag1valueupdated"), + resource.TestCheckResourceAttr(resourceName, "tags.tag2key", "tag2value"), testAccLoadTagsR53(&zone, &td), - resource.TestCheckResourceAttr( - "aws_route53_zone.main", "comment", "Change Custom Comment"), + testAccCheckTagsR53(&td.Tags, "tag1key", "tag1valueupdated"), + testAccCheckTagsR53(&td.Tags, "tag2key", "tag2value"), ), }, + { + Config: testAccRoute53ZoneConfigTagsSingle(zoneName, "tag2key", "tag2value"), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "tags.%", 
"1"), + resource.TestCheckResourceAttr(resourceName, "tags.tag2key", "tag2value"), + testAccLoadTagsR53(&zone, &td), + testAccCheckTagsR53(&td.Tags, "tag2key", "tag2value"), + ), + }, + }, + }) +} + +func TestAccAWSRoute53Zone_VPC_Single(t *testing.T) { + var zone route53.GetHostedZoneOutput + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_route53_zone.test" + vpcResourceName := "aws_vpc.test1" + zoneName := fmt.Sprintf("%s.terraformtest.com", rName) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRoute53ZoneConfigVPCSingle(rName, zoneName), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "vpc.#", "1"), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName, &zone), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, }, }) } -func TestAccAWSRoute53Zone_private_basic(t *testing.T) { +func TestAccAWSRoute53Zone_VPC_Multiple(t *testing.T) { + var zone route53.GetHostedZoneOutput + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_route53_zone.test" + vpcResourceName1 := "aws_vpc.test1" + vpcResourceName2 := "aws_vpc.test2" + zoneName := fmt.Sprintf("%s.terraformtest.com", rName) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRoute53ZoneConfigVPCMultiple(rName, zoneName), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "vpc.#", "2"), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName1, &zone), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName2, &zone), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, + }, + }) +} + +func TestAccAWSRoute53Zone_VPC_Updates(t *testing.T) { + var zone route53.GetHostedZoneOutput + + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_route53_zone.test" + vpcResourceName1 := "aws_vpc.test1" + vpcResourceName2 := "aws_vpc.test2" + zoneName := fmt.Sprintf("%s.terraformtest.com", rName) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, + Steps: []resource.TestStep{ + { + Config: testAccRoute53ZoneConfigVPCSingle(rName, zoneName), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "vpc.#", "1"), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName1, &zone), + ), + }, + { + Config: testAccRoute53ZoneConfigVPCMultiple(rName, zoneName), + Check: resource.ComposeTestCheckFunc( + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "vpc.#", "2"), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName1, &zone), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName2, &zone), + ), + }, + { + Config: testAccRoute53ZoneConfigVPCSingle(rName, zoneName), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "vpc.#", "1"), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName1, &zone), + ), + }, + }, + }) +} + +func TestAccAWSRoute53Zone_VPCID(t *testing.T) { var zone route53.GetHostedZoneOutput rString := acctest.RandString(8) + resourceName := "aws_route53_zone.test" + vpcResourceName := "aws_vpc.test" zoneName := fmt.Sprintf("%s.terraformtest.com", rString) - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_route53_zone.main", - Providers: testAccProviders, - CheckDestroy: testAccCheckRoute53ZoneDestroy, + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckRoute53ZoneDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccRoute53PrivateZoneConfig(zoneName), + { + Config: testAccRoute53ZoneConfigVPCID(zoneName), Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53ZoneExists("aws_route53_zone.main", &zone), - testAccCheckRoute53ZoneAssociatesWithVpc("aws_vpc.main", &zone), + testAccCheckRoute53ZoneExists(resourceName, &zone), + resource.TestCheckResourceAttr(resourceName, "vpc.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "vpc_id", vpcResourceName, "id"), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName, &zone), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, }, }) } -func TestAccAWSRoute53Zone_private_region(t *testing.T) { +func TestAccAWSRoute53Zone_VPCRegion(t *testing.T) { var zone route53.GetHostedZoneOutput rString := acctest.RandString(8) + resourceName := "aws_route53_zone.test" + vpcResourceName := "aws_vpc.test" zoneName := fmt.Sprintf("%s.terraformtest.com", rString) // record the initialized providers so that we can use them to // check for the instances in each region var providers []*schema.Provider - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, - IDRefreshName: "aws_route53_zone.main", ProviderFactories: testAccProviderFactories(&providers), CheckDestroy: testAccCheckWithProviders(testAccCheckRoute53ZoneDestroyWithProvider, &providers), Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccRoute53PrivateZoneRegionConfig(zoneName), + { + Config: testAccRoute53ZoneConfigVPCRegion(zoneName), Check: resource.ComposeTestCheckFunc( - testAccCheckRoute53ZoneExistsWithProvider("aws_route53_zone.main", &zone, + testAccCheckRoute53ZoneExistsWithProvider(resourceName, &zone, testAccAwsRegionProviderFunc("us-west-2", &providers)), - testAccCheckRoute53ZoneAssociatesWithVpc("aws_vpc.main", &zone), + resource.TestCheckResourceAttr(resourceName, "vpc.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "vpc_id", vpcResourceName, "id"), + testAccCheckRoute53ZoneAssociatesWithVpc(vpcResourceName, &zone), ), }, + { + // Config must be provided for aliased provider + Config: testAccRoute53ZoneConfigVPCRegion(zoneName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, }, }) } @@ -238,6 +492,10 @@ func testAccCheckRoute53ZoneDestroyWithProvider(s *terraform.State, provider *sc return nil } +func testAccCreateRandomRoute53RecordsInZoneId(zone *route53.GetHostedZoneOutput, recordsCount int) resource.TestCheckFunc { + return 
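Most of the rewritten zone tests finish with an import verification step. Because `force_destroy` exists only in Terraform state and has no Route 53 API counterpart, each step lists it in `ImportStateVerifyIgnore`; without that, the imported state could never match the applied state. The sketch below wraps the step in a hypothetical helper; `importStep` is not part of this change and is shown only to make the shape explicit.

```go
package example

import (
	"github.com/hashicorp/terraform/helper/resource"
)

// importStep returns the import-verification TestStep appended to most of the
// zone tests above. force_destroy is a Terraform-only flag, so it is excluded
// from the comparison between imported and applied state.
func importStep(resourceName string) resource.TestStep {
	return resource.TestStep{
		ResourceName:            resourceName,
		ImportState:             true,
		ImportStateVerify:       true,
		ImportStateVerifyIgnore: []string{"force_destroy"},
	}
}
```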
testAccCreateRandomRoute53RecordsInZoneIdWithProvider(func() *schema.Provider { return testAccProvider }, zone, recordsCount) +} + func testAccCreateRandomRoute53RecordsInZoneIdWithProvider(providerF func() *schema.Provider, zone *route53.GetHostedZoneOutput, recordsCount int) resource.TestCheckFunc { return func(s *terraform.State) error { provider := providerF() @@ -254,7 +512,7 @@ func testAccCreateRandomRoute53RecordsInZoneIdWithProvider(providerF func() *sch Name: aws.String(fmt.Sprintf("%d-tf-acc-random.%s", acctest.RandInt(), *zone.HostedZone.Name)), Type: aws.String("CNAME"), ResourceRecords: []*route53.ResourceRecord{ - &route53.ResourceRecord{Value: aws.String(fmt.Sprintf("random.%s", *zone.HostedZone.Name))}, + {Value: aws.String(fmt.Sprintf("random.%s", *zone.HostedZone.Name))}, }, TTL: aws.Int64(int64(30)), }, @@ -338,16 +596,13 @@ func testAccCheckRoute53ZoneAssociatesWithVpc(n string, zone *route53.GetHostedZ return fmt.Errorf("No VPC ID is set") } - var associatedVPC *route53.VPC for _, vpc := range zone.VPCs { - if *vpc.VPCId == rs.Primary.ID { - associatedVPC = vpc + if aws.StringValue(vpc.VPCId) == rs.Primary.ID { + return nil } } - if associatedVPC == nil { - return fmt.Errorf("VPC: %v is not associated to Zone: %v", n, cleanZoneID(*zone.HostedZone.Id)) - } - return nil + + return fmt.Errorf("VPC: %s is not associated to Zone: %v", n, cleanZoneID(aws.StringValue(zone.HostedZone.Id))) } } @@ -373,96 +628,197 @@ func testAccLoadTagsR53(zone *route53.GetHostedZoneOutput, td *route53.ResourceT return nil } } +func testAccCheckDomainName(zone *route53.GetHostedZoneOutput, domain string) resource.TestCheckFunc { + return func(s *terraform.State) error { + if zone.HostedZone.Name == nil { + return fmt.Errorf("Empty name in HostedZone for domain %s", domain) + } -func testAccRoute53ZoneConfig(zoneName string) string { - return fmt.Sprintf(` -resource "aws_route53_zone" "main" { - name = "%s." - comment = "Custom comment" + if *zone.HostedZone.Name == domain { + return nil + } - tags { - foo = "bar" - Name = "tf-route53-tag-test" + return fmt.Errorf("Invalid domain name. Expected %s is %s", domain, *zone.HostedZone.Name) } } +func testAccRoute53ZoneConfig(zoneName string) string { + return fmt.Sprintf(` +resource "aws_route53_zone" "test" { + name = "%s." +} `, zoneName) } -func testAccRoute53ZoneConfig_forceDestroy(zoneName1, zoneName2 string) string { +func testAccRoute53ZoneConfigMultiple() string { return fmt.Sprintf(` -resource "aws_route53_zone" "destroyable" { - name = "%s" - force_destroy = true +resource "aws_route53_zone" "test" { + count = 5 + + name = "subdomain${count.index}.terraformtest.com" +} +`) } -resource "aws_route53_zone" "with_trailing_dot" { - name = "%s." - force_destroy = true +func testAccRoute53ZoneConfigComment(zoneName, comment string) string { + return fmt.Sprintf(` +resource "aws_route53_zone" "test" { + comment = %q + name = "%s." } -`, zoneName1, zoneName2) +`, comment, zoneName) } -func testAccRoute53ZoneConfigUpdateComment(zoneName string) string { +func testAccRoute53ZoneConfigDelegationSetID(zoneName string) string { return fmt.Sprintf(` -resource "aws_route53_zone" "main" { - name = "%s." - comment = "Change Custom Comment" +resource "aws_route53_delegation_set" "test" {} - tags { - foo = "bar" - Name = "tf-route53-tag-test" - } +resource "aws_route53_zone" "test" { + delegation_set_id = "${aws_route53_delegation_set.test.id}" + name = "%s." 
} `, zoneName) } -func testAccRoute53PrivateZoneConfig(zoneName string) string { +func testAccRoute53ZoneConfigForceDestroy(zoneName string) string { return fmt.Sprintf(` -resource "aws_vpc" "main" { - cidr_block = "172.29.0.0/24" - instance_tenancy = "default" - enable_dns_support = true - enable_dns_hostnames = true - tags { - Name = "terraform-testacc-route53-zone-private" - } +resource "aws_route53_zone" "test" { + force_destroy = true + name = "%s" +} +`, zoneName) } -resource "aws_route53_zone" "main" { - name = "%s." - vpc_id = "${aws_vpc.main.id}" +func testAccRoute53ZoneConfigForceDestroyTrailingPeriod(zoneName string) string { + return fmt.Sprintf(` +resource "aws_route53_zone" "test" { + force_destroy = true + name = "%s." } `, zoneName) } -func testAccRoute53PrivateZoneRegionConfig(zoneName string) string { +func testAccRoute53ZoneConfigTagsSingle(zoneName, tag1Key, tag1Value string) string { + return fmt.Sprintf(` +resource "aws_route53_zone" "test" { + name = "%s." + + tags { + %q = %q + } +} +`, zoneName, tag1Key, tag1Value) +} + +func testAccRoute53ZoneConfigTagsMultiple(zoneName, tag1Key, tag1Value, tag2Key, tag2Value string) string { + return fmt.Sprintf(` +resource "aws_route53_zone" "test" { + name = "%s." + + tags { + %q = %q + %q = %q + } +} +`, zoneName, tag1Key, tag1Value, tag2Key, tag2Value) +} + +func testAccRoute53ZoneConfigVPCID(zoneName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "172.29.0.0/24" + + tags { + Name = "terraform-testacc-route53-zone-private" + } +} + +resource "aws_route53_zone" "test" { + name = "%s." + vpc_id = "${aws_vpc.test.id}" +} +`, zoneName) +} + +func testAccRoute53ZoneConfigVPCRegion(zoneName string) string { return fmt.Sprintf(` provider "aws" { - alias = "west" - region = "us-west-2" + alias = "west" + region = "us-west-2" } provider "aws" { - alias = "east" - region = "us-east-1" -} - -resource "aws_vpc" "main" { - provider = "aws.east" - cidr_block = "172.29.0.0/24" - instance_tenancy = "default" - enable_dns_support = true - enable_dns_hostnames = true - tags { - Name = "terraform-testacc-route53-zone-private-region" - } + alias = "east" + region = "us-east-1" } -resource "aws_route53_zone" "main" { - provider = "aws.west" - name = "%s." - vpc_id = "${aws_vpc.main.id}" - vpc_region = "us-east-1" +resource "aws_vpc" "test" { + provider = "aws.east" + + cidr_block = "172.29.0.0/24" + + tags { + Name = "terraform-testacc-route53-zone-private-region" + } +} + +resource "aws_route53_zone" "test" { + provider = "aws.west" + + name = "%s." + vpc_id = "${aws_vpc.test.id}" + vpc_region = "us-east-1" } `, zoneName) } + +func testAccRoute53ZoneConfigVPCSingle(rName, zoneName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test1" { + cidr_block = "10.1.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_route53_zone" "test" { + name = "%s." + + vpc { + vpc_id = "${aws_vpc.test1.id}" + } +} +`, rName, zoneName) +} + +func testAccRoute53ZoneConfigVPCMultiple(rName, zoneName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test1" { + cidr_block = "10.1.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_vpc" "test2" { + cidr_block = "10.2.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_route53_zone" "test" { + name = "%s." 
+ + vpc { + vpc_id = "${aws_vpc.test1.id}" + } + + vpc { + vpc_id = "${aws_vpc.test2.id}" + } +} +`, rName, rName, zoneName) +} diff --git a/aws/resource_aws_route_table_association.go b/aws/resource_aws_route_table_association.go index eb2c194094d..9dc6e0b4bac 100644 --- a/aws/resource_aws_route_table_association.go +++ b/aws/resource_aws_route_table_association.go @@ -20,13 +20,13 @@ func resourceAwsRouteTableAssociation() *schema.Resource { Delete: resourceAwsRouteTableAssociationDelete, Schema: map[string]*schema.Schema{ - "subnet_id": &schema.Schema{ + "subnet_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "route_table_id": &schema.Schema{ + "route_table_id": { Type: schema.TypeString, Required: true, }, diff --git a/aws/resource_aws_route_table_association_test.go b/aws/resource_aws_route_table_association_test.go index 5dfe8c21f2a..a7298966e20 100644 --- a/aws/resource_aws_route_table_association_test.go +++ b/aws/resource_aws_route_table_association_test.go @@ -14,12 +14,12 @@ import ( func TestAccAWSRouteTableAssociation_basic(t *testing.T) { var v, v2 ec2.RouteTable - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableAssociationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccRouteTableAssociationConfig, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableAssociationExists( @@ -27,7 +27,7 @@ func TestAccAWSRouteTableAssociation_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccRouteTableAssociationConfigChange, Check: resource.ComposeTestCheckFunc( testAccCheckRouteTableAssociationExists( diff --git a/aws/resource_aws_route_table_test.go b/aws/resource_aws_route_table_test.go index c1d51fce11b..00163f61381 100644 --- a/aws/resource_aws_route_table_test.go +++ b/aws/resource_aws_route_table_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "log" "regexp" "testing" @@ -12,6 +13,72 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_route_table", &resource.Sweeper{ + Name: "aws_route_table", + F: testSweepRouteTables, + }) +} + +func testSweepRouteTables(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).ec2conn + + req := &ec2.DescribeRouteTablesInput{ + Filters: []*ec2.Filter{ + { + Name: aws.String("tag-value"), + Values: []*string{ + aws.String("terraform-testacc-*"), + aws.String("tf-acc-test-*"), + }, + }, + }, + } + resp, err := conn.DescribeRouteTables(req) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EC2 Route Table sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error describing Route Tables: %s", err) + } + + if len(resp.RouteTables) == 0 { + log.Print("[DEBUG] No Route Tables to sweep") + return nil + } + + for _, routeTable := range resp.RouteTables { + for _, routeTableAssociation := range routeTable.Associations { + input := &ec2.DisassociateRouteTableInput{ + AssociationId: routeTableAssociation.RouteTableAssociationId, + } + + log.Printf("[DEBUG] Deleting Route Table Association: %s", input) + _, err := conn.DisassociateRouteTable(input) + if err != nil { + return fmt.Errorf("error deleting Route Table Association (%s): %s", aws.StringValue(routeTableAssociation.RouteTableAssociationId), err) + } + } + + input := 
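The route table test file also gains a sweeper, registered in `init()`, so leaked `terraform-testacc-*` and `tf-acc-test-*` resources can be cleaned up outside of normal test runs. The sketch below shows only the registration shape; `sweepExampleResources` and `clientForRegion` are placeholders standing in for `testSweepRouteTables` and the provider's existing `sharedClientForRegion` helper.

```go
package example

import (
	"fmt"
	"log"

	"github.com/hashicorp/terraform/helper/resource"
)

// clientForRegion is a placeholder for a provider-specific client factory such
// as sharedClientForRegion in the AWS provider.
func clientForRegion(region string) (interface{}, error) {
	return struct{}{}, nil
}

func init() {
	// Registered sweepers are invoked through the test binary's sweep mode and
	// delete leaked acceptance-test resources by name prefix.
	resource.AddTestSweepers("aws_example_resource", &resource.Sweeper{
		Name: "aws_example_resource",
		F:    sweepExampleResources,
	})
}

func sweepExampleResources(region string) error {
	client, err := clientForRegion(region)
	if err != nil {
		return fmt.Errorf("error getting client: %s", err)
	}

	// With a real client, this is where resources tagged terraform-testacc-*
	// or tf-acc-test-* would be listed and deleted.
	log.Printf("[DEBUG] sweeping %s with %T client", region, client)
	return nil
}
```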
&ec2.DeleteRouteTableInput{ + RouteTableId: routeTable.RouteTableId, + } + + log.Printf("[DEBUG] Deleting Route Table: %s", input) + _, err := conn.DeleteRouteTable(input) + if err != nil { + return fmt.Errorf("error deleting Route Table (%s): %s", aws.StringValue(routeTable.RouteTableId), err) + } + } + + return nil +} + func TestAccAWSRouteTable_basic(t *testing.T) { var v ec2.RouteTable @@ -58,7 +125,7 @@ func TestAccAWSRouteTable_basic(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route_table.foo", Providers: testAccProviders, @@ -108,7 +175,7 @@ func TestAccAWSRouteTable_instance(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route_table.foo", Providers: testAccProviders, @@ -138,7 +205,7 @@ func TestAccAWSRouteTable_ipv6(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route_table.foo", Providers: testAccProviders, @@ -158,7 +225,7 @@ func TestAccAWSRouteTable_ipv6(t *testing.T) { func TestAccAWSRouteTable_tags(t *testing.T) { var route_table ec2.RouteTable - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route_table.foo", Providers: testAccProviders, @@ -186,7 +253,7 @@ func TestAccAWSRouteTable_tags(t *testing.T) { // For GH-13545, Fixes panic on an empty route config block func TestAccAWSRouteTable_panicEmptyRoute(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_route_table.foo", Providers: testAccProviders, @@ -285,7 +352,7 @@ func TestAccAWSRouteTable_vpcPeering(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckRouteTableDestroy, @@ -323,7 +390,7 @@ func TestAccAWSRouteTable_vgwRoutePropagation(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: resource.ComposeTestCheckFunc( diff --git a/aws/resource_aws_route_test.go b/aws/resource_aws_route_test.go index 3a69a9671fb..25e05e50ee0 100644 --- a/aws/resource_aws_route_test.go +++ b/aws/resource_aws_route_test.go @@ -31,7 +31,7 @@ func TestAccAWSRoute_basic(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -45,6 +45,12 @@ func TestAccAWSRoute_basic(t *testing.T) { testCheck, ), }, + { + ResourceName: "aws_route.bar", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.bar"), + ImportStateVerify: true, + }, }, }) } @@ -68,7 +74,7 @@ func TestAccAWSRoute_ipv6Support(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -82,6 +88,12 @@ func TestAccAWSRoute_ipv6Support(t *testing.T) { testCheck, ), }, + { + ResourceName: "aws_route.bar", + ImportState: true, + ImportStateIdFunc: 
testAccAWSRouteImportStateIdFunc("aws_route.bar"), + ImportStateVerify: true, + }, }, }) } @@ -89,7 +101,7 @@ func TestAccAWSRoute_ipv6Support(t *testing.T) { func TestAccAWSRoute_ipv6ToInternetGateway(t *testing.T) { var route ec2.Route - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -102,6 +114,12 @@ func TestAccAWSRoute_ipv6ToInternetGateway(t *testing.T) { testAccCheckAWSRouteExists("aws_route.igw", &route), ), }, + { + ResourceName: "aws_route.igw", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.igw"), + ImportStateVerify: true, + }, }, }) } @@ -109,7 +127,7 @@ func TestAccAWSRoute_ipv6ToInternetGateway(t *testing.T) { func TestAccAWSRoute_ipv6ToInstance(t *testing.T) { var route ec2.Route - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -122,6 +140,12 @@ func TestAccAWSRoute_ipv6ToInstance(t *testing.T) { testAccCheckAWSRouteExists("aws_route.internal-default-route-ipv6", &route), ), }, + { + ResourceName: "aws_route.internal-default-route-ipv6", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.internal-default-route-ipv6"), + ImportStateVerify: true, + }, }, }) } @@ -129,7 +153,7 @@ func TestAccAWSRoute_ipv6ToInstance(t *testing.T) { func TestAccAWSRoute_ipv6ToNetworkInterface(t *testing.T) { var route ec2.Route - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -142,6 +166,12 @@ func TestAccAWSRoute_ipv6ToNetworkInterface(t *testing.T) { testAccCheckAWSRouteExists("aws_route.internal-default-route-ipv6", &route), ), }, + { + ResourceName: "aws_route.internal-default-route-ipv6", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.internal-default-route-ipv6"), + ImportStateVerify: true, + }, }, }) } @@ -149,7 +179,7 @@ func TestAccAWSRoute_ipv6ToNetworkInterface(t *testing.T) { func TestAccAWSRoute_ipv6ToPeeringConnection(t *testing.T) { var route ec2.Route - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -162,6 +192,45 @@ func TestAccAWSRoute_ipv6ToPeeringConnection(t *testing.T) { testAccCheckAWSRouteExists("aws_route.pc", &route), ), }, + { + ResourceName: "aws_route.pc", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.pc"), + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSRoute_changeRouteTable(t *testing.T) { + var before ec2.Route + var after ec2.Route + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSRouteDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSRouteBasicConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRouteExists("aws_route.bar", &before), + ), + }, + { + Config: testAccAWSRouteNewRouteTable, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRouteExists("aws_route.bar", &after), + ), + }, + { + ResourceName: "aws_route.bar", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.bar"), + ImportStateVerify: true, + }, }, }) } @@ -211,7 +280,7 @@ func TestAccAWSRoute_changeCidr(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) 
}, @@ -233,6 +302,12 @@ func TestAccAWSRoute_changeCidr(t *testing.T) { testCheckChange, ), }, + { + ResourceName: "aws_route.bar", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.bar"), + ImportStateVerify: true, + }, }, }) } @@ -249,7 +324,7 @@ func TestAccAWSRoute_noopdiff(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -271,6 +346,12 @@ func TestAccAWSRoute_noopdiff(t *testing.T) { testCheckChange, ), }, + { + ResourceName: "aws_route.test", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.test"), + ImportStateVerify: true, + }, }, }) } @@ -278,7 +359,7 @@ func TestAccAWSRoute_noopdiff(t *testing.T) { func TestAccAWSRoute_doesNotCrashWithVPCEndpoint(t *testing.T) { var route ec2.Route - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSRouteDestroy, @@ -289,6 +370,12 @@ func TestAccAWSRoute_doesNotCrashWithVPCEndpoint(t *testing.T) { testAccCheckAWSRouteExists("aws_route.bar", &route), ), }, + { + ResourceName: "aws_route.bar", + ImportState: true, + ImportStateIdFunc: testAccAWSRouteImportStateIdFunc("aws_route.bar"), + ImportStateVerify: true, + }, }, }) } @@ -305,7 +392,7 @@ func testAccCheckAWSRouteExists(n string, res *ec2.Route) resource.TestCheckFunc } conn := testAccProvider.Meta().(*AWSClient).ec2conn - r, err := findResourceRoute( + r, err := resourceAwsRouteFindRoute( conn, rs.Primary.Attributes["route_table_id"], rs.Primary.Attributes["destination_cidr_block"], @@ -333,7 +420,7 @@ func testAccCheckAWSRouteDestroy(s *terraform.State) error { } conn := testAccProvider.Meta().(*AWSClient).ec2conn - route, err := findResourceRoute( + route, err := resourceAwsRouteFindRoute( conn, rs.Primary.Attributes["route_table_id"], rs.Primary.Attributes["destination_cidr_block"], @@ -348,6 +435,22 @@ func testAccCheckAWSRouteDestroy(s *terraform.State) error { return nil } +func testAccAWSRouteImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + destination := rs.Primary.Attributes["destination_cidr_block"] + if _, ok := rs.Primary.Attributes["destination_ipv6_cidr_block"]; ok { + destination = rs.Primary.Attributes["destination_ipv6_cidr_block"] + } + + return fmt.Sprintf("%s_%s", rs.Primary.Attributes["route_table_id"], destination), nil + } +} + var testAccAWSRouteBasicConfig = fmt.Sprint(` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" @@ -700,39 +803,6 @@ resource "aws_route" "bar" { } `) -// Acceptance test if mixed inline and external routes are implemented -var testAccAWSRouteMixConfig = fmt.Sprint(` -resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-route-route-mix" - } -} - -resource "aws_internet_gateway" "foo" { - vpc_id = "${aws_vpc.foo.id}" - - tags { - Name = "terraform-testacc-route-route-mix" - } -} - -resource "aws_route_table" "foo" { - vpc_id = "${aws_vpc.foo.id}" - - route { - cidr_block = "10.2.0.0/16" - gateway_id = "${aws_internet_gateway.foo.id}" - } -} - -resource "aws_route" "bar" { - route_table_id = "${aws_route_table.foo.id}" - destination_cidr_block = "0.0.0.0/0" - gateway_id = "${aws_internet_gateway.foo.id}" -} 
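`aws_route` has no standalone ID, so `testAccAWSRouteImportStateIdFunc` above assembles one from state as `<route_table_id>_<destination>`, preferring the IPv6 destination when it is present. The sketch below restates that pattern with one defensive tweak, using the IPv6 attribute only when it is non-empty; `compositeRouteImportID` is an illustrative name, not code from this change.

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

// compositeRouteImportID builds the "<route_table_id>_<destination>" import ID
// expected by aws_route, reading both attributes from the test state.
func compositeRouteImportID(resourceName string) resource.ImportStateIdFunc {
	return func(s *terraform.State) (string, error) {
		rs, ok := s.RootModule().Resources[resourceName]
		if !ok {
			return "", fmt.Errorf("not found: %s", resourceName)
		}

		destination := rs.Primary.Attributes["destination_cidr_block"]
		if v, ok := rs.Primary.Attributes["destination_ipv6_cidr_block"]; ok && v != "" {
			destination = v
		}

		return fmt.Sprintf("%s_%s", rs.Primary.Attributes["route_table_id"], destination), nil
	}
}
```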
-`) - var testAccAWSRouteNoopChange = fmt.Sprint(` resource "aws_vpc" "test" { cidr_block = "10.10.0.0/16" @@ -801,3 +871,57 @@ resource "aws_vpc_endpoint" "baz" { route_table_ids = ["${aws_route_table.foo.id}"] } `) + +var testAccAWSRouteNewRouteTable = fmt.Sprint(` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-route-basic" + } +} + +resource "aws_vpc" "bar" { + cidr_block = "10.2.0.0/16" + tags { + Name = "terraform-testacc-route-new-route-table" + } +} + +resource "aws_internet_gateway" "foo" { + vpc_id = "${aws_vpc.foo.id}" + + tags { + Name = "terraform-testacc-route-basic" + } +} + +resource "aws_internet_gateway" "bar" { + vpc_id = "${aws_vpc.bar.id}" + + tags { + Name = "terraform-testacc-route-new-route-table" + } +} + +resource "aws_route_table" "foo" { + vpc_id = "${aws_vpc.foo.id}" + + tags { + Name = "terraform-testacc-route-basic" + } +} + +resource "aws_route_table" "bar" { + vpc_id = "${aws_vpc.bar.id}" + + tags { + Name = "terraform-testacc-route-new-route-table" + } +} + +resource "aws_route" "bar" { + route_table_id = "${aws_route_table.bar.id}" + destination_cidr_block = "10.4.0.0/16" + gateway_id = "${aws_internet_gateway.bar.id}" +} +`) diff --git a/aws/resource_aws_s3_bucket.go b/aws/resource_aws_s3_bucket.go index 5f96382f35c..d7ee14d00de 100644 --- a/aws/resource_aws_s3_bucket.go +++ b/aws/resource_aws_s3_bucket.go @@ -11,9 +11,10 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/service/s3" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -40,9 +41,10 @@ func resourceAwsS3Bucket() *schema.Resource { ConflictsWith: []string{"bucket_prefix"}, }, "bucket_prefix": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"bucket"}, }, "bucket_domain_name": { @@ -50,6 +52,11 @@ func resourceAwsS3Bucket() *schema.Resource { Computed: true, }, + "bucket_regional_domain_name": { + Type: schema.TypeString, + Computed: true, + }, + "arn": { Type: schema.TypeString, Optional: true, @@ -65,7 +72,7 @@ func resourceAwsS3Bucket() *schema.Resource { "policy": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs, }, @@ -105,6 +112,7 @@ func resourceAwsS3Bucket() *schema.Resource { "website": { Type: schema.TypeList, Optional: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "index_document": { @@ -130,7 +138,7 @@ func resourceAwsS3Bucket() *schema.Resource { "routing_rules": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, StateFunc: func(v interface{}) string { json, _ := structure.NormalizeJsonString(v) return json @@ -216,7 +224,7 @@ func resourceAwsS3Bucket() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: validateMaxLength(255), + ValidateFunc: validation.StringLenBetween(0, 255), }, "prefix": { Type: schema.TypeString, @@ -245,7 +253,7 @@ func resourceAwsS3Bucket() *schema.Resource { "days": { Type: schema.TypeInt, 
Optional: true, - ValidateFunc: validation.IntAtLeast(1), + ValidateFunc: validation.IntAtLeast(0), }, "expired_object_delete_marker": { Type: schema.TypeBool, @@ -360,7 +368,7 @@ func resourceAwsS3Bucket() *schema.Resource { "id": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(255), + ValidateFunc: validation.StringLenBetween(0, 255), }, "destination": { Type: schema.TypeSet, @@ -370,6 +378,11 @@ func resourceAwsS3Bucket() *schema.Resource { Set: destinationHash, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "account_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateAwsAccountId, + }, "bucket": { Type: schema.TypeString, Required: true, @@ -380,6 +393,7 @@ func resourceAwsS3Bucket() *schema.Resource { Optional: true, ValidateFunc: validation.StringInSlice([]string{ s3.StorageClassStandard, + s3.StorageClassOnezoneIa, s3.StorageClassStandardIa, s3.StorageClassReducedRedundancy, }, false), @@ -388,6 +402,23 @@ func resourceAwsS3Bucket() *schema.Resource { Type: schema.TypeString, Optional: true, }, + "access_control_translation": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "owner": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + s3.OwnerOverrideDestination, + }, false), + }, + }, + }, + }, }, }, }, @@ -419,8 +450,8 @@ func resourceAwsS3Bucket() *schema.Resource { }, "prefix": { Type: schema.TypeString, - Required: true, - ValidateFunc: validateMaxLength(1024), + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 1024), }, "status": { Type: schema.TypeString, @@ -430,6 +461,26 @@ func resourceAwsS3Bucket() *schema.Resource { s3.ReplicationRuleStatusDisabled, }, false), }, + "priority": { + Type: schema.TypeInt, + Optional: true, + }, + "filter": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "prefix": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 1024), + }, + "tags": tagsSchema(), + }, + }, + }, }, }, }, @@ -531,7 +582,7 @@ func resourceAwsS3BucketCreate(d *schema.ResourceData, meta interface{}) error { if awsErr.Code() == "OperationAborted" { log.Printf("[WARN] Got an error while trying to create S3 bucket %s: %s", bucket, err) return resource.RetryableError( - fmt.Errorf("[WARN] Error creating S3 bucket %s, retrying: %s", + fmt.Errorf("Error creating S3 bucket %s, retrying: %s", bucket, err)) } } @@ -580,7 +631,7 @@ func resourceAwsS3BucketUpdate(d *schema.ResourceData, meta interface{}) error { return err } } - if d.HasChange("acl") { + if d.HasChange("acl") && !d.IsNewResource() { if err := resourceAwsS3BucketAclUpdate(s3conn, d); err != nil { return err } @@ -640,11 +691,8 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] S3 Bucket (%s) not found, error code (404)", d.Id()) d.SetId("") return nil - } else { - // some of the AWS SDK's errors can be empty strings, so let's add - // some additional context. 
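The refactored read and delete paths replace inline `awserr.Error` type assertions with the provider's `isAWSErr` helper, treating codes such as `NoSuchCORSConfiguration` or `s3.ErrCodeNoSuchBucket` as expected outcomes rather than failures. `isAWSErr` itself is not shown in this diff, so the sketch below is an assumed shape for such a helper, not the provider's exact implementation.

```go
package example

import (
	"strings"

	"github.com/aws/aws-sdk-go/aws/awserr"
)

// isAWSErrLike reports whether err is an AWS API error with the given code and,
// when message is non-empty, a message containing that substring. This is an
// assumed reconstruction used only to explain the calls in the diff above.
func isAWSErrLike(err error, code string, message string) bool {
	awsErr, ok := err.(awserr.Error)
	if !ok {
		return false
	}
	if awsErr.Code() != code {
		return false
	}
	return message == "" || strings.Contains(awsErr.Message(), message)
}
```

With a check like this, a missing sub-configuration simply falls through to setting an empty attribute instead of aborting the read.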
- return fmt.Errorf("error reading S3 bucket \"%s\": %s", d.Id(), err) } + return fmt.Errorf("error reading S3 Bucket (%s): %s", d.Id(), err) } // In the import case, we won't have this @@ -675,7 +723,7 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { } else { policy, err := structure.NormalizeJsonString(*v) if err != nil { - return errwrap.Wrapf("policy contains an invalid JSON: {{err}}", err) + return fmt.Errorf("policy contains an invalid JSON: %s", err) } d.Set("policy", policy) } @@ -688,17 +736,13 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - cors := corsResponse.(*s3.GetBucketCorsOutput) - if err != nil { - // An S3 Bucket might not have CORS configuration set. - if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() != "NoSuchCORSConfiguration" { - return err - } - log.Printf("[WARN] S3 bucket: %s, no CORS configuration could be found.", d.Id()) + if err != nil && !isAWSErr(err, "NoSuchCORSConfiguration", "") { + return fmt.Errorf("error getting S3 Bucket CORS configuration: %s", err) } - log.Printf("[DEBUG] S3 bucket: %s, read CORS: %v", d.Id(), cors) - if cors.CORSRules != nil { - rules := make([]map[string]interface{}, 0, len(cors.CORSRules)) + + corsRules := make([]map[string]interface{}, 0) + if cors, ok := corsResponse.(*s3.GetBucketCorsOutput); ok && len(cors.CORSRules) > 0 { + corsRules = make([]map[string]interface{}, 0, len(cors.CORSRules)) for _, ruleObject := range cors.CORSRules { rule := make(map[string]interface{}) rule["allowed_headers"] = flattenStringList(ruleObject.AllowedHeaders) @@ -711,12 +755,12 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { if ruleObject.MaxAgeSeconds != nil { rule["max_age_seconds"] = int(*ruleObject.MaxAgeSeconds) } - rules = append(rules, rule) - } - if err := d.Set("cors_rule", rules); err != nil { - return err + corsRules = append(corsRules, rule) } } + if err := d.Set("cors_rule", corsRules); err != nil { + return fmt.Errorf("error setting cors_rule: %s", err) + } // Read the website configuration wsResponse, err := retryOnAwsCode("NoSuchBucket", func() (interface{}, error) { @@ -724,9 +768,12 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - ws := wsResponse.(*s3.GetBucketWebsiteOutput) - var websites []map[string]interface{} - if err == nil { + if err != nil && !isAWSErr(err, "NotImplemented", "") && !isAWSErr(err, "NoSuchWebsiteConfiguration", "") { + return fmt.Errorf("error getting S3 Bucket website configuration: %s", err) + } + + websites := make([]map[string]interface{}, 0, 1) + if ws, ok := wsResponse.(*s3.GetBucketWebsiteOutput); ok { w := make(map[string]interface{}) if v := ws.IndexDocument; v != nil { @@ -771,10 +818,14 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { w["routing_rules"] = rr } - websites = append(websites, w) + // We have special handling for the website configuration, + // so only add the configuration if there is any + if len(w) > 0 { + websites = append(websites, w) + } } if err := d.Set("website", websites); err != nil { - return err + return fmt.Errorf("error setting website: %s", err) } // Read the versioning configuration @@ -784,13 +835,12 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - versioning := versioningResponse.(*s3.GetBucketVersioningOutput) if err != nil { return err } - 
log.Printf("[DEBUG] S3 Bucket: %s, versioning: %v", d.Id(), versioning) - if versioning != nil { - vcl := make([]map[string]interface{}, 0, 1) + + vcl := make([]map[string]interface{}, 0, 1) + if versioning, ok := versioningResponse.(*s3.GetBucketVersioningOutput); ok { vc := make(map[string]interface{}) if versioning.Status != nil && *versioning.Status == s3.BucketVersioningStatusEnabled { vc["enabled"] = true @@ -804,9 +854,9 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { vc["mfa_delete"] = false } vcl = append(vcl, vc) - if err := d.Set("versioning", vcl); err != nil { - return err - } + } + if err := d.Set("versioning", vcl); err != nil { + return fmt.Errorf("error setting versioning: %s", err) } // Read the acceleration status @@ -816,25 +866,12 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - accelerate := accelerateResponse.(*s3.GetBucketAccelerateConfigurationOutput) - if err != nil { - // Amazon S3 Transfer Acceleration might not be supported in the - // given region, for example, China (Beijing) and the Government - // Cloud does not support this feature at the moment. - if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() != "UnsupportedArgument" { - return err - } - var awsRegion string - if region, ok := d.GetOk("region"); ok { - awsRegion = region.(string) - } else { - awsRegion = meta.(*AWSClient).region - } - - log.Printf("[WARN] S3 bucket: %s, the S3 Transfer Acceleration is not supported in the region: %s", d.Id(), awsRegion) - } else { - log.Printf("[DEBUG] S3 bucket: %s, read Acceleration: %v", d.Id(), accelerate) + // Amazon S3 Transfer Acceleration might not be supported in the region + if err != nil && !isAWSErr(err, "UnsupportedArgument", "") { + return fmt.Errorf("error getting S3 Bucket acceleration configuration: %s", err) + } + if accelerate, ok := accelerateResponse.(*s3.GetBucketAccelerateConfigurationOutput); ok { d.Set("acceleration_status", accelerate.Status) } @@ -845,15 +882,13 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - payer := payerResponse.(*s3.GetBucketRequestPaymentOutput) + if err != nil { - return err + return fmt.Errorf("error getting S3 Bucket request payment: %s", err) } - log.Printf("[DEBUG] S3 Bucket: %s, read request payer: %v", d.Id(), payer) - if payer.Payer != nil { - if err := d.Set("request_payer", *payer.Payer); err != nil { - return err - } + + if payer, ok := payerResponse.(*s3.GetBucketRequestPaymentOutput); ok { + d.Set("request_payer", payer.Payer) } // Read the logging configuration @@ -862,14 +897,14 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - logging := loggingResponse.(*s3.GetBucketLoggingOutput) + if err != nil { - return err + return fmt.Errorf("error getting S3 Bucket logging: %s", err) } - log.Printf("[DEBUG] S3 Bucket: %s, logging: %v", d.Id(), logging) lcl := make([]map[string]interface{}, 0, 1) - if v := logging.LoggingEnabled; v != nil { + if logging, ok := loggingResponse.(*s3.GetBucketLoggingOutput); ok && logging.LoggingEnabled != nil { + v := logging.LoggingEnabled lc := make(map[string]interface{}) if *v.TargetBucket != "" { lc["target_bucket"] = *v.TargetBucket @@ -880,7 +915,7 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { lcl = append(lcl, lc) } if err := d.Set("logging", lcl); err != nil { - return err + return 
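A pattern repeated throughout this read function is that each optional sub-configuration (CORS, website, versioning, logging, lifecycle, replication, encryption) now starts from an empty default and is written back with `d.Set` unconditionally, so a block removed outside of Terraform also disappears from state. The following is a generic sketch of that shape; `exampleConfig`, `flattenExampleConfig`, `setOptionalBlock`, and the `example_rule` attribute are placeholders, not provider code.

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// exampleConfig stands in for an AWS Get*Output type.
type exampleConfig struct {
	Rules []string
}

// flattenExampleConfig converts the API shape into the schema shape.
func flattenExampleConfig(cfg *exampleConfig) []interface{} {
	l := make([]interface{}, 0, len(cfg.Rules))
	for _, r := range cfg.Rules {
		l = append(l, map[string]interface{}{"name": r})
	}
	return l
}

// setOptionalBlock starts from an empty value and only fills it when the API
// returned data, so d.Set always runs and stale state is cleared.
func setOptionalBlock(d *schema.ResourceData, cfg *exampleConfig) error {
	rules := make([]interface{}, 0)
	if cfg != nil {
		rules = flattenExampleConfig(cfg)
	}
	if err := d.Set("example_rule", rules); err != nil {
		return fmt.Errorf("error setting example_rule: %s", err)
	}
	return nil
}
```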
fmt.Errorf("error setting logging: %s", err) } // Read the lifecycle configuration @@ -890,15 +925,13 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - lifecycle := lifecycleResponse.(*s3.GetBucketLifecycleConfigurationOutput) - if err != nil { - if awsError, ok := err.(awserr.RequestFailure); ok && awsError.StatusCode() != 404 { - return err - } + if err != nil && !isAWSErr(err, "NoSuchLifecycleConfiguration", "") { + return err } - log.Printf("[DEBUG] S3 Bucket: %s, lifecycle: %v", d.Id(), lifecycle) - if len(lifecycle.Rules) > 0 { - rules := make([]map[string]interface{}, 0, len(lifecycle.Rules)) + + lifecycleRules := make([]map[string]interface{}, 0) + if lifecycle, ok := lifecycleResponse.(*s3.GetBucketLifecycleConfigurationOutput); ok && len(lifecycle.Rules) > 0 { + lifecycleRules = make([]map[string]interface{}, 0, len(lifecycle.Rules)) for _, lifecycleRule := range lifecycle.Rules { rule := make(map[string]interface{}) @@ -1002,13 +1035,12 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { rule["noncurrent_version_transition"] = schema.NewSet(transitionHash, transitions) } - rules = append(rules, rule) - } - - if err := d.Set("lifecycle_rule", rules); err != nil { - return err + lifecycleRules = append(lifecycleRules, rule) } } + if err := d.Set("lifecycle_rule", lifecycleRules); err != nil { + return fmt.Errorf("error setting lifecycle_rule: %s", err) + } // Read the bucket replication configuration @@ -1017,17 +1049,16 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - if err != nil { - if awsError, ok := err.(awserr.RequestFailure); ok && awsError.StatusCode() != 404 { - return err - } + if err != nil && !isAWSErr(err, "ReplicationConfigurationNotFoundError", "") { + return fmt.Errorf("error getting S3 Bucket replication: %s", err) } - replication := replicationResponse.(*s3.GetBucketReplicationOutput) - log.Printf("[DEBUG] S3 Bucket: %s, read replication configuration: %v", d.Id(), replication) - if err := d.Set("replication_configuration", flattenAwsS3BucketReplicationConfiguration(replication.ReplicationConfiguration)); err != nil { - log.Printf("[DEBUG] Error setting replication configuration: %s", err) - return err + replicationConfiguration := make([]map[string]interface{}, 0) + if replication, ok := replicationResponse.(*s3.GetBucketReplicationOutput); ok { + replicationConfiguration = flattenAwsS3BucketReplicationConfiguration(replication.ReplicationConfiguration) + } + if err := d.Set("replication_configuration", replicationConfiguration); err != nil { + return fmt.Errorf("error setting replication_configuration: %s", err) } // Read the bucket server side encryption configuration @@ -1037,22 +1068,16 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { Bucket: aws.String(d.Id()), }) }) - if err != nil { - if isAWSErr(err, "ServerSideEncryptionConfigurationNotFoundError", "encryption configuration was not found") { - log.Printf("[DEBUG] Default encryption is not enabled for %s", d.Id()) - d.Set("server_side_encryption_configuration", []map[string]interface{}{}) - } else { - return err - } - } else { - encryption := encryptionResponse.(*s3.GetBucketEncryptionOutput) - log.Printf("[DEBUG] S3 Bucket: %s, read encryption configuration: %v", d.Id(), encryption) - if c := encryption.ServerSideEncryptionConfiguration; c != nil { - if err := d.Set("server_side_encryption_configuration", 
flattenAwsS3ServerSideEncryptionConfiguration(c)); err != nil { - log.Printf("[DEBUG] Error setting server side encryption configuration: %s", err) - return err - } - } + if err != nil && !isAWSErr(err, "ServerSideEncryptionConfigurationNotFoundError", "encryption configuration was not found") { + return fmt.Errorf("error getting S3 Bucket encryption: %s", err) + } + + serverSideEncryptionConfiguration := make([]map[string]interface{}, 0) + if encryption, ok := encryptionResponse.(*s3.GetBucketEncryptionOutput); ok && encryption.ServerSideEncryptionConfiguration != nil { + serverSideEncryptionConfiguration = flattenAwsS3ServerSideEncryptionConfiguration(encryption.ServerSideEncryptionConfiguration) + } + if err := d.Set("server_side_encryption_configuration", serverSideEncryptionConfiguration); err != nil { + return fmt.Errorf("error setting server_side_encryption_configuration: %s", err) } // Add the region as an attribute @@ -1065,11 +1090,11 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { ) }) if err != nil { - return err + return fmt.Errorf("error getting S3 Bucket location: %s", err) } - location := locationResponse.(*s3.GetBucketLocationOutput) + var region string - if location.LocationConstraint != nil { + if location, ok := locationResponse.(*s3.GetBucketLocationOutput); ok && location.LocationConstraint != nil { region = *location.LocationConstraint } region = normalizeRegion(region) @@ -1077,6 +1102,13 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { return err } + // Add the bucket_regional_domain_name as an attribute + regionalEndpoint, err := BucketRegionalDomainName(d.Get("bucket").(string), region) + if err != nil { + return err + } + d.Set("bucket_regional_domain_name", regionalEndpoint) + // Add the hosted zone ID for this bucket's region as an attribute hostedZoneID, err := HostedZoneIDForRegion(region) if err != nil { @@ -1108,7 +1140,12 @@ func resourceAwsS3BucketRead(d *schema.ResourceData, meta interface{}) error { return err } - d.Set("arn", fmt.Sprintf("arn:%s:s3:::%s", meta.(*AWSClient).partition, d.Id())) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "s3", + Resource: d.Id(), + }.String() + d.Set("arn", arn) return nil } @@ -1120,64 +1157,70 @@ func resourceAwsS3BucketDelete(d *schema.ResourceData, meta interface{}) error { _, err := s3conn.DeleteBucket(&s3.DeleteBucketInput{ Bucket: aws.String(d.Id()), }) - if err != nil { - ec2err, ok := err.(awserr.Error) - if ok && ec2err.Code() == "BucketNotEmpty" { - if d.Get("force_destroy").(bool) { - // bucket may have things delete them - log.Printf("[DEBUG] S3 Bucket attempting to forceDestroy %+v", err) - - bucket := d.Get("bucket").(string) - resp, err := s3conn.ListObjectVersions( - &s3.ListObjectVersionsInput{ - Bucket: aws.String(bucket), - }, - ) - if err != nil { - return fmt.Errorf("Error S3 Bucket list Object Versions err: %s", err) - } + if isAWSErr(err, s3.ErrCodeNoSuchBucket, "") { + return nil + } - objectsToDelete := make([]*s3.ObjectIdentifier, 0) + if isAWSErr(err, "BucketNotEmpty", "") { + if d.Get("force_destroy").(bool) { + // bucket may have things delete them + log.Printf("[DEBUG] S3 Bucket attempting to forceDestroy %+v", err) - if len(resp.DeleteMarkers) != 0 { + bucket := d.Get("bucket").(string) + resp, err := s3conn.ListObjectVersions( + &s3.ListObjectVersionsInput{ + Bucket: aws.String(bucket), + }, + ) - for _, v := range resp.DeleteMarkers { - objectsToDelete = append(objectsToDelete, 
&s3.ObjectIdentifier{ - Key: v.Key, - VersionId: v.VersionId, - }) - } - } + if err != nil { + return fmt.Errorf("Error S3 Bucket list Object Versions err: %s", err) + } - if len(resp.Versions) != 0 { - for _, v := range resp.Versions { - objectsToDelete = append(objectsToDelete, &s3.ObjectIdentifier{ - Key: v.Key, - VersionId: v.VersionId, - }) - } + objectsToDelete := make([]*s3.ObjectIdentifier, 0) + + if len(resp.DeleteMarkers) != 0 { + + for _, v := range resp.DeleteMarkers { + objectsToDelete = append(objectsToDelete, &s3.ObjectIdentifier{ + Key: v.Key, + VersionId: v.VersionId, + }) } + } - params := &s3.DeleteObjectsInput{ - Bucket: aws.String(bucket), - Delete: &s3.Delete{ - Objects: objectsToDelete, - }, + if len(resp.Versions) != 0 { + for _, v := range resp.Versions { + objectsToDelete = append(objectsToDelete, &s3.ObjectIdentifier{ + Key: v.Key, + VersionId: v.VersionId, + }) } + } - _, err = s3conn.DeleteObjects(params) + params := &s3.DeleteObjectsInput{ + Bucket: aws.String(bucket), + Delete: &s3.Delete{ + Objects: objectsToDelete, + }, + } - if err != nil { - return fmt.Errorf("Error S3 Bucket force_destroy error deleting: %s", err) - } + _, err = s3conn.DeleteObjects(params) - // this line recurses until all objects are deleted or an error is returned - return resourceAwsS3BucketDelete(d, meta) + if err != nil { + return fmt.Errorf("Error S3 Bucket force_destroy error deleting: %s", err) } + + // this line recurses until all objects are deleted or an error is returned + return resourceAwsS3BucketDelete(d, meta) } - return fmt.Errorf("Error deleting S3 Bucket: %s %q", err, d.Get("bucket").(string)) } + + if err != nil { + return fmt.Errorf("error deleting S3 Bucket (%s): %s", d.Id(), err) + } + return nil } @@ -1253,8 +1296,9 @@ func resourceAwsS3BucketCorsUpdate(s3conn *s3.S3, d *schema.ResourceData) error } else { vMap := make([]*string, len(v.([]interface{}))) for i, vv := range v.([]interface{}) { - str := vv.(string) - vMap[i] = aws.String(str) + if str, ok := vv.(string); ok { + vMap[i] = aws.String(str) + } } switch k { case "allowed_headers": @@ -1292,19 +1336,17 @@ func resourceAwsS3BucketCorsUpdate(s3conn *s3.S3, d *schema.ResourceData) error func resourceAwsS3BucketWebsiteUpdate(s3conn *s3.S3, d *schema.ResourceData) error { ws := d.Get("website").([]interface{}) - if len(ws) == 1 { - var w map[string]interface{} - if ws[0] != nil { - w = ws[0].(map[string]interface{}) - } else { - w = make(map[string]interface{}) - } - return resourceAwsS3BucketWebsitePut(s3conn, d, w) - } else if len(ws) == 0 { + if len(ws) == 0 { return resourceAwsS3BucketWebsiteDelete(s3conn, d) + } + + var w map[string]interface{} + if ws[0] != nil { + w = ws[0].(map[string]interface{}) } else { - return fmt.Errorf("Cannot specify more than one website.") + w = make(map[string]interface{}) } + return resourceAwsS3BucketWebsitePut(s3conn, d, w) } func resourceAwsS3BucketWebsitePut(s3conn *s3.S3, d *schema.ResourceData, website map[string]interface{}) error { @@ -1434,6 +1476,20 @@ func bucketDomainName(bucket string) string { return fmt.Sprintf("%s.s3.amazonaws.com", bucket) } +// https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region +func BucketRegionalDomainName(bucket string, region string) (string, error) { + // Return a default AWS Commercial domain name if no region is provided + // Otherwise EndpointFor() will return BUCKET.s3..amazonaws.com + if region == "" { + return fmt.Sprintf("%s.s3.amazonaws.com", bucket), nil + } + endpoint, err := 
endpoints.DefaultResolver().EndpointFor(endpoints.S3ServiceID, region) + if err != nil { + return "", err + } + return fmt.Sprintf("%s.%s", bucket, strings.TrimPrefix(endpoint.URL, "https://")), nil +} + func WebsiteEndpoint(bucket string, region string) *S3Website { domain := WebsiteDomainUrl(region) return &S3Website{Endpoint: fmt.Sprintf("%s.%s", bucket, domain), Domain: domain} @@ -1656,7 +1712,7 @@ func resourceAwsS3BucketServerSideEncryptionConfigurationUpdate(s3conn *s3.S3, d rc.Rules = rules i := &s3.PutBucketEncryptionInput{ - Bucket: aws.String(bucket), + Bucket: aws.String(bucket), ServerSideEncryptionConfiguration: rc, } log.Printf("[DEBUG] S3 put bucket replication configuration: %#v", i) @@ -1717,12 +1773,14 @@ func resourceAwsS3BucketReplicationConfigurationUpdate(s3conn *s3.S3, d *schema. rules := []*s3.ReplicationRule{} for _, v := range rcRules { rr := v.(map[string]interface{}) - rcRule := &s3.ReplicationRule{ - Prefix: aws.String(rr["prefix"].(string)), - Status: aws.String(rr["status"].(string)), + rcRule := &s3.ReplicationRule{} + if status, ok := rr["status"]; ok && status != "" { + rcRule.Status = aws.String(status.(string)) + } else { + continue } - if rrid, ok := rr["id"]; ok { + if rrid, ok := rr["id"]; ok && rrid != "" { rcRule.ID = aws.String(rrid.(string)) } @@ -1740,6 +1798,18 @@ func resourceAwsS3BucketReplicationConfigurationUpdate(s3conn *s3.S3, d *schema. ReplicaKmsKeyID: aws.String(replicaKmsKeyId.(string)), } } + + if account, ok := bd["account_id"]; ok && account != "" { + ruleDestination.Account = aws.String(account.(string)) + } + + if aclTranslation, ok := bd["access_control_translation"].([]interface{}); ok && len(aclTranslation) > 0 { + aclTranslationValues := aclTranslation[0].(map[string]interface{}) + ruleAclTranslation := &s3.AccessControlTranslation{} + ruleAclTranslation.Owner = aws.String(aclTranslationValues["owner"].(string)) + ruleDestination.AccessControlTranslation = ruleAclTranslation + } + } rcRule.Destination = ruleDestination @@ -1758,18 +1828,48 @@ func resourceAwsS3BucketReplicationConfigurationUpdate(s3conn *s3.S3, d *schema. } rcRule.SourceSelectionCriteria = ruleSsc } + + if f, ok := rr["filter"].([]interface{}); ok && len(f) > 0 { + // XML schema V2. + rcRule.Priority = aws.Int64(int64(rr["priority"].(int))) + rcRule.Filter = &s3.ReplicationRuleFilter{} + filter := f[0].(map[string]interface{}) + tags := filter["tags"].(map[string]interface{}) + if len(tags) > 0 { + rcRule.Filter.And = &s3.ReplicationRuleAndOperator{ + Prefix: aws.String(filter["prefix"].(string)), + Tags: tagsFromMapS3(tags), + } + } else { + rcRule.Filter.Prefix = aws.String(filter["prefix"].(string)) + } + rcRule.DeleteMarkerReplication = &s3.DeleteMarkerReplication{ + Status: aws.String(s3.DeleteMarkerReplicationStatusDisabled), + } + } else { + // XML schema V1. 
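// Rules without a "filter" block stay on the original (V1) replication schema,
// which carries the object key prefix directly on the rule. Rules that declare
// "filter" above are emitted as V2 rules, which is why those also set Priority
// and an explicit DeleteMarkerReplication status.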
+ rcRule.Prefix = aws.String(rr["prefix"].(string)) + } + rules = append(rules, rcRule) } rc.Rules = rules i := &s3.PutBucketReplicationInput{ - Bucket: aws.String(bucket), + Bucket: aws.String(bucket), ReplicationConfiguration: rc, } log.Printf("[DEBUG] S3 put bucket replication configuration: %#v", i) - _, err := retryOnAwsCode("NoSuchBucket", func() (interface{}, error) { - return s3conn.PutBucketReplication(i) + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + if _, err := s3conn.PutBucketReplication(i); err != nil { + if isAWSErr(err, s3.ErrCodeNoSuchBucket, "") || + isAWSErr(err, "InvalidRequest", "Versioning must be 'Enabled' on the bucket") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil }) if err != nil { return fmt.Errorf("Error putting S3 replication configuration: %s", err) @@ -1985,6 +2085,15 @@ func flattenAwsS3BucketReplicationConfiguration(r *s3.ReplicationConfiguration) rd["replica_kms_key_id"] = *v.Destination.EncryptionConfiguration.ReplicaKmsKeyID } } + if v.Destination.Account != nil { + rd["account_id"] = *v.Destination.Account + } + if v.Destination.AccessControlTranslation != nil { + rdt := map[string]interface{}{ + "owner": aws.StringValue(v.Destination.AccessControlTranslation.Owner), + } + rd["access_control_translation"] = []interface{}{rdt} + } t["destination"] = schema.NewSet(destinationHash, []interface{}{rd}) } @@ -2010,6 +2119,26 @@ func flattenAwsS3BucketReplicationConfiguration(r *s3.ReplicationConfiguration) } t["source_selection_criteria"] = schema.NewSet(sourceSelectionCriteriaHash, []interface{}{tssc}) } + + if v.Priority != nil { + t["priority"] = int(aws.Int64Value(v.Priority)) + } + + if f := v.Filter; f != nil { + m := map[string]interface{}{} + if f.Prefix != nil { + m["prefix"] = aws.StringValue(f.Prefix) + } + if t := f.Tag; t != nil { + m["tags"] = tagsMapToRaw(tagsToMapS3([]*s3.Tag{t})) + } + if a := f.And; a != nil { + m["prefix"] = aws.StringValue(a.Prefix) + m["tags"] = tagsMapToRaw(tagsToMapS3(a.Tags)) + } + t["filter"] = []interface{}{m} + } + rules = append(rules, t) } m["rules"] = schema.NewSet(rulesHash, rules) @@ -2155,6 +2284,24 @@ func rulesHash(v interface{}) int { if v, ok := m["source_selection_criteria"].(*schema.Set); ok && v.Len() > 0 && v.List()[0] != nil { buf.WriteString(fmt.Sprintf("%d-", sourceSelectionCriteriaHash(v.List()[0]))) } + if v, ok := m["priority"]; ok { + buf.WriteString(fmt.Sprintf("%d-", v.(int))) + } + if v, ok := m["filter"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + buf.WriteString(fmt.Sprintf("%d-", replicationRuleFilterHash(v[0]))) + } + return hashcode.String(buf.String()) +} + +func replicationRuleFilterHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + if v, ok := m["prefix"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + if v, ok := m["tags"]; ok { + buf.WriteString(fmt.Sprintf("%d-", tagsMapToHash(v.(map[string]interface{})))) + } return hashcode.String(buf.String()) } @@ -2171,6 +2318,26 @@ func destinationHash(v interface{}) int { if v, ok := m["replica_kms_key_id"]; ok { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } + if v, ok := m["account"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } + if v, ok := m["access_control_translation"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + buf.WriteString(fmt.Sprintf("%d-", accessControlTranslationHash(v[0]))) + } + return hashcode.String(buf.String()) +} + +func accessControlTranslationHash(v 
interface{}) int { + // v is nil if empty access_control_translation is given. + if v == nil { + return 0 + } + var buf bytes.Buffer + m := v.(map[string]interface{}) + + if v, ok := m["owner"]; ok { + buf.WriteString(fmt.Sprintf("%s-", v.(string))) + } return hashcode.String(buf.String()) } diff --git a/aws/resource_aws_s3_bucket_inventory.go b/aws/resource_aws_s3_bucket_inventory.go new file mode 100644 index 00000000000..7b4bd5bb4b5 --- /dev/null +++ b/aws/resource_aws_s3_bucket_inventory.go @@ -0,0 +1,466 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsS3BucketInventory() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsS3BucketInventoryPut, + Read: resourceAwsS3BucketInventoryRead, + Update: resourceAwsS3BucketInventoryPut, + Delete: resourceAwsS3BucketInventoryDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "bucket": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 64), + }, + "enabled": { + Type: schema.TypeBool, + Default: true, + Optional: true, + }, + "filter": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "prefix": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "destination": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "bucket": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "format": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + s3.InventoryFormatCsv, + s3.InventoryFormatOrc, + }, false), + }, + "bucket_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArn, + }, + "account_id": { + Type: schema.TypeString, + Optional: true, + }, + "prefix": { + Type: schema.TypeString, + Optional: true, + }, + "encryption": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "sse_kms": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"destination.0.bucket.0.encryption.0.sse_s3"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key_id": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateArn, + }, + }, + }, + }, + "sse_s3": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{"destination.0.bucket.0.encryption.0.sse_kms"}, + Elem: &schema.Resource{ + // No options currently; just existence of "sse_s3". 
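// The empty nested schema lets configurations request SSE-S3 with a bare
// `sse_s3 {}` block; expandS3InventoryS3BucketDestination below translates it
// into an empty s3.SSES3 value on the PutBucketInventoryConfiguration request.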
+ Schema: map[string]*schema.Schema{}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "schedule": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "frequency": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + s3.InventoryFrequencyDaily, + s3.InventoryFrequencyWeekly, + }, false), + }, + }, + }, + }, + // TODO: Is there a sensible default for this? + "included_object_versions": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{ + s3.InventoryIncludedObjectVersionsCurrent, + s3.InventoryIncludedObjectVersionsAll, + }, false), + }, + "optional_fields": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{ + s3.InventoryOptionalFieldSize, + s3.InventoryOptionalFieldLastModifiedDate, + s3.InventoryOptionalFieldStorageClass, + s3.InventoryOptionalFieldEtag, + s3.InventoryOptionalFieldIsMultipartUploaded, + s3.InventoryOptionalFieldReplicationStatus, + s3.InventoryOptionalFieldEncryptionStatus, + }, false), + }, + Set: schema.HashString, + }, + }, + } +} + +func resourceAwsS3BucketInventoryPut(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).s3conn + bucket := d.Get("bucket").(string) + name := d.Get("name").(string) + + inventoryConfiguration := &s3.InventoryConfiguration{ + Id: aws.String(name), + IsEnabled: aws.Bool(d.Get("enabled").(bool)), + } + + if v, ok := d.GetOk("included_object_versions"); ok { + inventoryConfiguration.IncludedObjectVersions = aws.String(v.(string)) + } + + if v, ok := d.GetOk("optional_fields"); ok { + inventoryConfiguration.OptionalFields = expandStringList(v.(*schema.Set).List()) + } + + if v, ok := d.GetOk("schedule"); ok { + scheduleList := v.([]interface{}) + scheduleMap := scheduleList[0].(map[string]interface{}) + inventoryConfiguration.Schedule = &s3.InventorySchedule{ + Frequency: aws.String(scheduleMap["frequency"].(string)), + } + } + + if v, ok := d.GetOk("filter"); ok { + filterList := v.([]interface{}) + filterMap := filterList[0].(map[string]interface{}) + inventoryConfiguration.Filter = expandS3InventoryFilter(filterMap) + } + + if v, ok := d.GetOk("destination"); ok { + destinationList := v.([]interface{}) + destinationMap := destinationList[0].(map[string]interface{}) + bucketList := destinationMap["bucket"].([]interface{}) + bucketMap := bucketList[0].(map[string]interface{}) + + inventoryConfiguration.Destination = &s3.InventoryDestination{ + S3BucketDestination: expandS3InventoryS3BucketDestination(bucketMap), + } + } + + input := &s3.PutBucketInventoryConfigurationInput{ + Bucket: aws.String(bucket), + Id: aws.String(name), + InventoryConfiguration: inventoryConfiguration, + } + + log.Printf("[DEBUG] Putting S3 bucket inventory configuration: %s", input) + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.PutBucketInventoryConfiguration(input) + if err != nil { + if isAWSErr(err, s3.ErrCodeNoSuchBucket, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("Error putting S3 bucket inventory configuration: %s", err) + } + + d.SetId(fmt.Sprintf("%s:%s", bucket, name)) + + return resourceAwsS3BucketInventoryRead(d, meta) +} + +func resourceAwsS3BucketInventoryDelete(d *schema.ResourceData, meta interface{}) 
error { + conn := meta.(*AWSClient).s3conn + + bucket, name, err := resourceAwsS3BucketInventoryParseID(d.Id()) + if err != nil { + return err + } + + input := &s3.DeleteBucketInventoryConfigurationInput{ + Bucket: aws.String(bucket), + Id: aws.String(name), + } + + log.Printf("[DEBUG] Deleting S3 bucket inventory configuration: %s", input) + _, err = conn.DeleteBucketInventoryConfiguration(input) + if err != nil { + if isAWSErr(err, s3.ErrCodeNoSuchBucket, "") || isAWSErr(err, "NoSuchConfiguration", "The specified configuration does not exist.") { + return nil + } + return fmt.Errorf("Error deleting S3 bucket inventory configuration: %s", err) + } + + return nil +} + +func resourceAwsS3BucketInventoryRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).s3conn + + bucket, name, err := resourceAwsS3BucketInventoryParseID(d.Id()) + if err != nil { + return err + } + + d.Set("bucket", bucket) + d.Set("name", name) + + input := &s3.GetBucketInventoryConfigurationInput{ + Bucket: aws.String(bucket), + Id: aws.String(name), + } + + log.Printf("[DEBUG] Reading S3 bucket inventory configuration: %s", input) + var output *s3.GetBucketInventoryConfigurationOutput + err = resource.Retry(1*time.Minute, func() *resource.RetryError { + var err error + output, err = conn.GetBucketInventoryConfiguration(input) + if err != nil { + if isAWSErr(err, s3.ErrCodeNoSuchBucket, "") || isAWSErr(err, "NoSuchConfiguration", "The specified configuration does not exist.") { + if d.IsNewResource() { + return resource.RetryableError(err) + } + return nil + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("error getting S3 Bucket Inventory (%s): %s", d.Id(), err) + } + + if output == nil || output.InventoryConfiguration == nil { + log.Printf("[WARN] %s S3 bucket inventory configuration not found, removing from state.", d.Id()) + d.SetId("") + return nil + } + + d.Set("enabled", aws.BoolValue(output.InventoryConfiguration.IsEnabled)) + d.Set("included_object_versions", aws.StringValue(output.InventoryConfiguration.IncludedObjectVersions)) + + if err := d.Set("optional_fields", flattenStringList(output.InventoryConfiguration.OptionalFields)); err != nil { + return fmt.Errorf("error setting optional_fields: %s", err) + } + + if err := d.Set("filter", flattenS3InventoryFilter(output.InventoryConfiguration.Filter)); err != nil { + return fmt.Errorf("error setting filter: %s", err) + } + + if err := d.Set("schedule", flattenS3InventorySchedule(output.InventoryConfiguration.Schedule)); err != nil { + return fmt.Errorf("error setting schedule: %s", err) + } + + if output.InventoryConfiguration.Destination != nil { + destination := map[string]interface{}{ + "bucket": flattenS3InventoryS3BucketDestination(output.InventoryConfiguration.Destination.S3BucketDestination), + } + + if err := d.Set("destination", []map[string]interface{}{destination}); err != nil { + return fmt.Errorf("error setting destination: %s", err) + } + } + + return nil +} + +func expandS3InventoryFilter(m map[string]interface{}) *s3.InventoryFilter { + v, ok := m["prefix"] + if !ok { + return nil + } + return &s3.InventoryFilter{ + Prefix: aws.String(v.(string)), + } +} + +func flattenS3InventoryFilter(filter *s3.InventoryFilter) []map[string]interface{} { + if filter == nil { + return nil + } + + result := make([]map[string]interface{}, 0, 1) + + m := make(map[string]interface{}, 0) + if filter.Prefix != nil { + m["prefix"] = aws.StringValue(filter.Prefix) + } + + result = 
append(result, m) + + return result +} + +func flattenS3InventorySchedule(schedule *s3.InventorySchedule) []map[string]interface{} { + result := make([]map[string]interface{}, 0, 1) + + m := make(map[string]interface{}, 1) + m["frequency"] = aws.StringValue(schedule.Frequency) + + result = append(result, m) + + return result +} + +func expandS3InventoryS3BucketDestination(m map[string]interface{}) *s3.InventoryS3BucketDestination { + destination := &s3.InventoryS3BucketDestination{ + Format: aws.String(m["format"].(string)), + Bucket: aws.String(m["bucket_arn"].(string)), + } + + if v, ok := m["account_id"]; ok && v.(string) != "" { + destination.AccountId = aws.String(v.(string)) + } + + if v, ok := m["prefix"]; ok && v.(string) != "" { + destination.Prefix = aws.String(v.(string)) + } + + if v, ok := m["encryption"].([]interface{}); ok && len(v) > 0 { + encryptionMap := v[0].(map[string]interface{}) + + encryption := &s3.InventoryEncryption{} + + for k, v := range encryptionMap { + data := v.([]interface{}) + + if len(data) == 0 { + continue + } + + switch k { + case "sse_kms": + m := data[0].(map[string]interface{}) + encryption.SSEKMS = &s3.SSEKMS{ + KeyId: aws.String(m["key_id"].(string)), + } + case "sse_s3": + encryption.SSES3 = &s3.SSES3{} + } + } + + destination.Encryption = encryption + } + + return destination +} + +func flattenS3InventoryS3BucketDestination(destination *s3.InventoryS3BucketDestination) []map[string]interface{} { + result := make([]map[string]interface{}, 0, 1) + + m := map[string]interface{}{ + "format": aws.StringValue(destination.Format), + "bucket_arn": aws.StringValue(destination.Bucket), + } + + if destination.AccountId != nil { + m["account_id"] = aws.StringValue(destination.AccountId) + } + if destination.Prefix != nil { + m["prefix"] = aws.StringValue(destination.Prefix) + } + + if destination.Encryption != nil { + encryption := make(map[string]interface{}, 1) + if destination.Encryption.SSES3 != nil { + encryption["sse_s3"] = []map[string]interface{}{{}} + } else if destination.Encryption.SSEKMS != nil { + encryption["sse_kms"] = []map[string]interface{}{ + { + "key_id": aws.StringValue(destination.Encryption.SSEKMS.KeyId), + }, + } + } + m["encryption"] = []map[string]interface{}{encryption} + } + + result = append(result, m) + + return result +} + +func resourceAwsS3BucketInventoryParseID(id string) (string, string, error) { + idParts := strings.Split(id, ":") + if len(idParts) != 2 { + return "", "", fmt.Errorf("please make sure the ID is in the form BUCKET:NAME (i.e. 
my-bucket:EntireBucket") + } + bucket := idParts[0] + name := idParts[1] + return bucket, name, nil +} diff --git a/aws/resource_aws_s3_bucket_inventory_test.go b/aws/resource_aws_s3_bucket_inventory_test.go new file mode 100644 index 00000000000..2a98cd08345 --- /dev/null +++ b/aws/resource_aws_s3_bucket_inventory_test.go @@ -0,0 +1,296 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSS3BucketInventory_basic(t *testing.T) { + var conf s3.InventoryConfiguration + rString := acctest.RandString(8) + resourceName := "aws_s3_bucket_inventory.test" + + bucketName := fmt.Sprintf("tf-acc-bucket-inventory-%s", rString) + inventoryName := t.Name() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketInventoryDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSS3BucketInventoryConfig(bucketName, inventoryName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSS3BucketInventoryConfigExists(resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "bucket", bucketName), + resource.TestCheckNoResourceAttr(resourceName, "filter"), + resource.TestCheckResourceAttr(resourceName, "name", inventoryName), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "included_object_versions", "All"), + + resource.TestCheckResourceAttr(resourceName, "optional_fields.#", "2"), + + resource.TestCheckResourceAttr(resourceName, "schedule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schedule.0.frequency", "Weekly"), + + resource.TestCheckResourceAttr(resourceName, "destination.#", "1"), + resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.#", "1"), + resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.bucket_arn", "arn:aws:s3:::"+bucketName), + resource.TestCheckResourceAttrSet(resourceName, "destination.0.bucket.0.account_id"), + resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.format", "ORC"), + resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.prefix", "inventory"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSS3BucketInventory_encryptWithSSES3(t *testing.T) { + var conf s3.InventoryConfiguration + rString := acctest.RandString(8) + resourceName := "aws_s3_bucket_inventory.test" + + bucketName := fmt.Sprintf("tf-acc-bucket-inventory-%s", rString) + inventoryName := t.Name() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketInventoryDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSS3BucketInventoryConfigEncryptWithSSES3(bucketName, inventoryName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSS3BucketInventoryConfigExists(resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.encryption.0.sse_s3.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSS3BucketInventory_encryptWithSSEKMS(t *testing.T) { + 
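	// Verifies that an SSE-KMS encrypted inventory destination round-trips:
	// the configuration below creates a KMS key, points the destination's
	// sse_kms block at it, and the checks expect key_id to be that key's ARN.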
var conf s3.InventoryConfiguration + rString := acctest.RandString(8) + resourceName := "aws_s3_bucket_inventory.test" + + bucketName := fmt.Sprintf("tf-acc-bucket-inventory-%s", rString) + inventoryName := t.Name() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketInventoryDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSS3BucketInventoryConfigEncryptWithSSEKMS(bucketName, inventoryName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSS3BucketInventoryConfigExists(resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "destination.0.bucket.0.encryption.0.sse_kms.#", "1"), + resource.TestMatchResourceAttr(resourceName, "destination.0.bucket.0.encryption.0.sse_kms.0.key_id", regexp.MustCompile("^arn:aws:kms:")), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSS3BucketInventoryConfigExists(n string, res *s3.InventoryConfiguration) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No S3 bucket inventory configuration ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).s3conn + bucket, name, err := resourceAwsS3BucketInventoryParseID(rs.Primary.ID) + if err != nil { + return err + } + + input := &s3.GetBucketInventoryConfigurationInput{ + Bucket: aws.String(bucket), + Id: aws.String(name), + } + log.Printf("[DEBUG] Reading S3 bucket inventory configuration: %s", input) + output, err := conn.GetBucketInventoryConfiguration(input) + if err != nil { + return err + } + + *res = *output.InventoryConfiguration + + return nil + } +} + +func testAccCheckAWSS3BucketInventoryDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).s3conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_s3_bucket_inventory" { + continue + } + + bucket, name, err := resourceAwsS3BucketInventoryParseID(rs.Primary.ID) + if err != nil { + return err + } + + err = resource.Retry(1*time.Minute, func() *resource.RetryError { + input := &s3.GetBucketInventoryConfigurationInput{ + Bucket: aws.String(bucket), + Id: aws.String(name), + } + log.Printf("[DEBUG] Reading S3 bucket inventory configuration: %s", input) + output, err := conn.GetBucketInventoryConfiguration(input) + if err != nil { + if isAWSErr(err, s3.ErrCodeNoSuchBucket, "") || isAWSErr(err, "NoSuchConfiguration", "The specified configuration does not exist.") { + return nil + } + return resource.NonRetryableError(err) + } + if output.InventoryConfiguration != nil { + return resource.RetryableError(fmt.Errorf("S3 bucket inventory configuration exists: %v", output)) + } + return nil + }) + if err != nil { + return err + } + } + return nil +} + +func testAccAWSS3BucketInventoryConfigBucket(name string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "bucket" { + bucket = "%s" + acl = "private" +} +`, name) +} + +func testAccAWSS3BucketInventoryConfig(bucketName, inventoryName string) string { + return fmt.Sprintf(` +%s +data "aws_caller_identity" "current" {} + +resource "aws_s3_bucket_inventory" "test" { + bucket = "${aws_s3_bucket.bucket.id}" + name = "%s" + + included_object_versions = "All" + + optional_fields = [ + "Size", + "LastModifiedDate", + ] + + filter { + prefix = "documents/" + } + + schedule { + 
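    # The schedule schema only accepts "Daily" or "Weekly" here
    # (s3.InventoryFrequencyDaily / s3.InventoryFrequencyWeekly).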
frequency = "Weekly" + } + + destination { + bucket { + format = "ORC" + bucket_arn = "${aws_s3_bucket.bucket.arn}" + account_id = "${data.aws_caller_identity.current.account_id}" + prefix = "inventory" + } + } +} +`, testAccAWSS3BucketInventoryConfigBucket(bucketName), inventoryName) +} + +func testAccAWSS3BucketInventoryConfigEncryptWithSSES3(bucketName, inventoryName string) string { + return fmt.Sprintf(` +%s +resource "aws_s3_bucket_inventory" "test" { + bucket = "${aws_s3_bucket.bucket.id}" + name = "%s" + + included_object_versions = "Current" + + schedule { + frequency = "Daily" + } + + destination { + bucket { + format = "CSV" + bucket_arn = "${aws_s3_bucket.bucket.arn}" + + encryption { + sse_s3 {} + } + } + } +} +`, testAccAWSS3BucketInventoryConfigBucket(bucketName), inventoryName) +} + +func testAccAWSS3BucketInventoryConfigEncryptWithSSEKMS(bucketName, inventoryName string) string { + return fmt.Sprintf(` +%s +resource "aws_kms_key" "inventory" { + description = "Terraform acc test S3 inventory SSE-KMS encryption: %s" + deletion_window_in_days = 7 +} + +resource "aws_s3_bucket_inventory" "test" { + bucket = "${aws_s3_bucket.bucket.id}" + name = "%s" + + included_object_versions = "Current" + + schedule { + frequency = "Daily" + } + + destination { + bucket { + format = "ORC" + bucket_arn = "${aws_s3_bucket.bucket.arn}" + + encryption { + sse_kms { + key_id = "${aws_kms_key.inventory.arn}" + } + } + } + } +} +`, testAccAWSS3BucketInventoryConfigBucket(bucketName), bucketName, inventoryName) +} diff --git a/aws/resource_aws_s3_bucket_metric.go b/aws/resource_aws_s3_bucket_metric.go index 37ed7f137d0..c67f56b7434 100644 --- a/aws/resource_aws_s3_bucket_metric.go +++ b/aws/resource_aws_s3_bucket_metric.go @@ -109,14 +109,11 @@ func resourceAwsS3BucketMetricDelete(d *schema.ResourceData, meta interface{}) e _, err = conn.DeleteBucketMetricsConfiguration(input) if err != nil { if isAWSErr(err, s3.ErrCodeNoSuchBucket, "") || isAWSErr(err, "NoSuchConfiguration", "The specified configuration does not exist.") { - log.Printf("[WARN] %s S3 bucket metrics configuration not found, removing from state.", d.Id()) - d.SetId("") return nil } return fmt.Errorf("Error deleting S3 metric configuration: %s", err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_s3_bucket_metric_test.go b/aws/resource_aws_s3_bucket_metric_test.go index 8bf31f70f82..e5171d8b4bd 100644 --- a/aws/resource_aws_s3_bucket_metric_test.go +++ b/aws/resource_aws_s3_bucket_metric_test.go @@ -40,7 +40,7 @@ func TestExpandS3MetricsFilter(t *testing.T) { And: &s3.MetricsAndOperator{ Prefix: aws.String("prefix/"), Tags: []*s3.Tag{ - &s3.Tag{ + { Key: aws.String("tag1key"), Value: aws.String("tag1value"), }, @@ -60,11 +60,11 @@ func TestExpandS3MetricsFilter(t *testing.T) { And: &s3.MetricsAndOperator{ Prefix: aws.String("prefix/"), Tags: []*s3.Tag{ - &s3.Tag{ + { Key: aws.String("tag1key"), Value: aws.String("tag1value"), }, - &s3.Tag{ + { Key: aws.String("tag2key"), Value: aws.String("tag2value"), }, @@ -95,11 +95,11 @@ func TestExpandS3MetricsFilter(t *testing.T) { ExpectedS3MetricsFilter: &s3.MetricsFilter{ And: &s3.MetricsAndOperator{ Tags: []*s3.Tag{ - &s3.Tag{ + { Key: aws.String("tag1key"), Value: aws.String("tag1value"), }, - &s3.Tag{ + { Key: aws.String("tag2key"), Value: aws.String("tag2value"), }, @@ -147,7 +147,7 @@ func TestFlattenS3MetricsFilter(t *testing.T) { And: &s3.MetricsAndOperator{ Prefix: aws.String("prefix/"), Tags: []*s3.Tag{ - &s3.Tag{ + { Key: aws.String("tag1key"), Value: 
aws.String("tag1value"), }, @@ -166,11 +166,11 @@ func TestFlattenS3MetricsFilter(t *testing.T) { And: &s3.MetricsAndOperator{ Prefix: aws.String("prefix/"), Tags: []*s3.Tag{ - &s3.Tag{ + { Key: aws.String("tag1key"), Value: aws.String("tag1value"), }, - &s3.Tag{ + { Key: aws.String("tag2key"), Value: aws.String("tag2value"), }, @@ -202,11 +202,11 @@ func TestFlattenS3MetricsFilter(t *testing.T) { S3MetricsFilter: &s3.MetricsFilter{ And: &s3.MetricsAndOperator{ Tags: []*s3.Tag{ - &s3.Tag{ + { Key: aws.String("tag1key"), Value: aws.String("tag1value"), }, - &s3.Tag{ + { Key: aws.String("tag2key"), Value: aws.String("tag2value"), }, @@ -269,12 +269,12 @@ func TestAccAWSS3BucketMetric_basic(t *testing.T) { bucketName := fmt.Sprintf("tf-acc-%d", rInt) metricName := t.Name() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSS3BucketMetricDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithoutFilter(bucketName, metricName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -283,7 +283,7 @@ func TestAccAWSS3BucketMetric_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "name", metricName), ), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -302,12 +302,12 @@ func TestAccAWSS3BucketMetric_WithFilterPrefix(t *testing.T) { prefix := fmt.Sprintf("prefix-%d/", rInt) prefixUpdate := fmt.Sprintf("prefix-update-%d/", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSS3BucketMetricDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterPrefix(bucketName, metricName, prefix), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -316,7 +316,7 @@ func TestAccAWSS3BucketMetric_WithFilterPrefix(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.%", "0"), ), }, - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterPrefix(bucketName, metricName, prefixUpdate), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -325,7 +325,7 @@ func TestAccAWSS3BucketMetric_WithFilterPrefix(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.%", "0"), ), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -348,12 +348,12 @@ func TestAccAWSS3BucketMetric_WithFilterPrefixAndMultipleTags(t *testing.T) { tag2 := fmt.Sprintf("tag2-%d", rInt) tag2Update := fmt.Sprintf("tag2-update-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSS3BucketMetricDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterPrefixAndMultipleTags(bucketName, metricName, prefix, tag1, tag2), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -364,7 +364,7 @@ func TestAccAWSS3BucketMetric_WithFilterPrefixAndMultipleTags(t *testing.T) { resource.TestCheckResourceAttr(resourceName, 
"filter.0.tags.tag2", tag2), ), }, - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterPrefixAndMultipleTags(bucketName, metricName, prefixUpdate, tag1Update, tag2Update), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -375,7 +375,7 @@ func TestAccAWSS3BucketMetric_WithFilterPrefixAndMultipleTags(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.tag2", tag2Update), ), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -396,12 +396,12 @@ func TestAccAWSS3BucketMetric_WithFilterPrefixAndSingleTag(t *testing.T) { tag1 := fmt.Sprintf("tag-%d", rInt) tag1Update := fmt.Sprintf("tag-update-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSS3BucketMetricDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterPrefixAndSingleTag(bucketName, metricName, prefix, tag1), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -411,7 +411,7 @@ func TestAccAWSS3BucketMetric_WithFilterPrefixAndSingleTag(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.tag1", tag1), ), }, - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterPrefixAndSingleTag(bucketName, metricName, prefixUpdate, tag1Update), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -421,7 +421,7 @@ func TestAccAWSS3BucketMetric_WithFilterPrefixAndSingleTag(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.tag1", tag1Update), ), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -442,12 +442,12 @@ func TestAccAWSS3BucketMetric_WithFilterMultipleTags(t *testing.T) { tag2 := fmt.Sprintf("tag2-%d", rInt) tag2Update := fmt.Sprintf("tag2-update-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSS3BucketMetricDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterMultipleTags(bucketName, metricName, tag1, tag2), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -458,7 +458,7 @@ func TestAccAWSS3BucketMetric_WithFilterMultipleTags(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.tag2", tag2), ), }, - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterMultipleTags(bucketName, metricName, tag1Update, tag2Update), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -469,7 +469,7 @@ func TestAccAWSS3BucketMetric_WithFilterMultipleTags(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.tag2", tag2Update), ), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -488,12 +488,12 @@ func TestAccAWSS3BucketMetric_WithFilterSingleTag(t *testing.T) { tag1 := fmt.Sprintf("tag-%d", rInt) tag1Update := fmt.Sprintf("tag-update-%d", rInt) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, 
Providers: testAccProviders, CheckDestroy: testAccCheckAWSS3BucketMetricDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterSingleTag(bucketName, metricName, tag1), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -503,7 +503,7 @@ func TestAccAWSS3BucketMetric_WithFilterSingleTag(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.tag1", tag1), ), }, - resource.TestStep{ + { Config: testAccAWSS3BucketMetricsConfigWithFilterSingleTag(bucketName, metricName, tag1Update), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketMetricsConfigExists(resourceName, &conf), @@ -513,7 +513,7 @@ func TestAccAWSS3BucketMetric_WithFilterSingleTag(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "filter.0.tags.tag1", tag1Update), ), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_s3_bucket_notification.go b/aws/resource_aws_s3_bucket_notification.go index f7ba2ac22dd..4e313b6e833 100644 --- a/aws/resource_aws_s3_bucket_notification.go +++ b/aws/resource_aws_s3_bucket_notification.go @@ -25,35 +25,35 @@ func resourceAwsS3BucketNotification() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "bucket": &schema.Schema{ + "bucket": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "topic": &schema.Schema{ + "topic": { Type: schema.TypeList, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "id": &schema.Schema{ + "id": { Type: schema.TypeString, Optional: true, Computed: true, }, - "filter_prefix": &schema.Schema{ + "filter_prefix": { Type: schema.TypeString, Optional: true, }, - "filter_suffix": &schema.Schema{ + "filter_suffix": { Type: schema.TypeString, Optional: true, }, - "topic_arn": &schema.Schema{ + "topic_arn": { Type: schema.TypeString, Required: true, }, - "events": &schema.Schema{ + "events": { Type: schema.TypeSet, Required: true, Elem: &schema.Schema{Type: schema.TypeString}, @@ -63,29 +63,29 @@ func resourceAwsS3BucketNotification() *schema.Resource { }, }, - "queue": &schema.Schema{ + "queue": { Type: schema.TypeList, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "id": &schema.Schema{ + "id": { Type: schema.TypeString, Optional: true, Computed: true, }, - "filter_prefix": &schema.Schema{ + "filter_prefix": { Type: schema.TypeString, Optional: true, }, - "filter_suffix": &schema.Schema{ + "filter_suffix": { Type: schema.TypeString, Optional: true, }, - "queue_arn": &schema.Schema{ + "queue_arn": { Type: schema.TypeString, Required: true, }, - "events": &schema.Schema{ + "events": { Type: schema.TypeSet, Required: true, Elem: &schema.Schema{Type: schema.TypeString}, @@ -95,29 +95,29 @@ func resourceAwsS3BucketNotification() *schema.Resource { }, }, - "lambda_function": &schema.Schema{ + "lambda_function": { Type: schema.TypeList, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "id": &schema.Schema{ + "id": { Type: schema.TypeString, Optional: true, Computed: true, }, - "filter_prefix": &schema.Schema{ + "filter_prefix": { Type: schema.TypeString, Optional: true, }, - "filter_suffix": &schema.Schema{ + "filter_suffix": { Type: schema.TypeString, Optional: true, }, - "lambda_function_arn": &schema.Schema{ + "lambda_function_arn": { Type: schema.TypeString, Optional: true, }, - "events": &schema.Schema{ + "events": { Type: schema.TypeSet, Required: true, 
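					// Takes S3 event names such as "s3:ObjectCreated:*" or
					// "s3:ObjectRemoved:Delete", matching the events exercised
					// in the acceptance tests below.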
Elem: &schema.Schema{Type: schema.TypeString}, @@ -304,7 +304,7 @@ func resourceAwsS3BucketNotificationPut(d *schema.ResourceData, meta interface{} notificationConfiguration.TopicConfigurations = topicConfigs } i := &s3.PutBucketNotificationConfigurationInput{ - Bucket: aws.String(bucket), + Bucket: aws.String(bucket), NotificationConfiguration: notificationConfiguration, } @@ -336,7 +336,7 @@ func resourceAwsS3BucketNotificationDelete(d *schema.ResourceData, meta interfac s3conn := meta.(*AWSClient).s3conn i := &s3.PutBucketNotificationConfigurationInput{ - Bucket: aws.String(d.Id()), + Bucket: aws.String(d.Id()), NotificationConfiguration: &s3.NotificationConfiguration{}, } @@ -346,8 +346,6 @@ func resourceAwsS3BucketNotificationDelete(d *schema.ResourceData, meta interfac return fmt.Errorf("Error deleting S3 notification configuration: %s", err) } - d.SetId("") - return nil } diff --git a/aws/resource_aws_s3_bucket_notification_test.go b/aws/resource_aws_s3_bucket_notification_test.go index 99adf7cca49..67563ce5551 100644 --- a/aws/resource_aws_s3_bucket_notification_test.go +++ b/aws/resource_aws_s3_bucket_notification_test.go @@ -16,6 +16,69 @@ import ( "github.com/aws/aws-sdk-go/service/s3" ) +func TestAccAWSS3BucketNotification_importBasic(t *testing.T) { + rString := acctest.RandString(8) + + topicName := fmt.Sprintf("tf-acc-topic-s3-b-n-import-%s", rString) + bucketName := fmt.Sprintf("tf-acc-bucket-n-import-%s", rString) + queueName := fmt.Sprintf("tf-acc-queue-s3-b-n-import-%s", rString) + roleName := fmt.Sprintf("tf-acc-role-s3-b-n-import-%s", rString) + lambdaFuncName := fmt.Sprintf("tf-acc-lambda-func-s3-b-n-import-%s", rString) + + resourceName := "aws_s3_bucket_notification.notification" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketNotificationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSS3BucketConfigWithTopicNotification(topicName, bucketName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketNotificationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSS3BucketConfigWithQueueNotification(queueName, bucketName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSS3BucketNotificationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSS3BucketConfigWithLambdaNotification(roleName, lambdaFuncName, bucketName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSS3BucketNotification_basic(t *testing.T) { rString := acctest.RandString(8) @@ -25,12 +88,12 @@ func TestAccAWSS3BucketNotification_basic(t *testing.T) { roleName := fmt.Sprintf("tf-acc-role-s3-b-notification-%s", rString) lambdaFuncName := fmt.Sprintf("tf-acc-lambda-func-s3-b-notification-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSS3BucketNotificationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: 
testAccAWSS3BucketConfigWithTopicNotification(topicName, bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketTopicNotification( @@ -40,11 +103,11 @@ func TestAccAWSS3BucketNotification_basic(t *testing.T) { []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, &s3.KeyFilter{ FilterRules: []*s3.FilterRule{ - &s3.FilterRule{ + { Name: aws.String("Prefix"), Value: aws.String("tf-acc-test/"), }, - &s3.FilterRule{ + { Name: aws.String("Suffix"), Value: aws.String(".txt"), }, @@ -58,7 +121,7 @@ func TestAccAWSS3BucketNotification_basic(t *testing.T) { []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, &s3.KeyFilter{ FilterRules: []*s3.FilterRule{ - &s3.FilterRule{ + { Name: aws.String("Suffix"), Value: aws.String(".log"), }, @@ -67,7 +130,7 @@ func TestAccAWSS3BucketNotification_basic(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccAWSS3BucketConfigWithQueueNotification(queueName, bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketQueueNotification( @@ -77,11 +140,11 @@ func TestAccAWSS3BucketNotification_basic(t *testing.T) { []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, &s3.KeyFilter{ FilterRules: []*s3.FilterRule{ - &s3.FilterRule{ + { Name: aws.String("Prefix"), Value: aws.String("tf-acc-test/"), }, - &s3.FilterRule{ + { Name: aws.String("Suffix"), Value: aws.String(".mp4"), }, @@ -90,7 +153,7 @@ func TestAccAWSS3BucketNotification_basic(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccAWSS3BucketConfigWithLambdaNotification(roleName, lambdaFuncName, bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketLambdaFunctionConfiguration( @@ -100,11 +163,11 @@ func TestAccAWSS3BucketNotification_basic(t *testing.T) { []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"}, &s3.KeyFilter{ FilterRules: []*s3.FilterRule{ - &s3.FilterRule{ + { Name: aws.String("Prefix"), Value: aws.String("tf-acc-test/"), }, - &s3.FilterRule{ + { Name: aws.String("Suffix"), Value: aws.String(".png"), }, @@ -123,12 +186,12 @@ func TestAccAWSS3BucketNotification_withoutFilter(t *testing.T) { topicName := fmt.Sprintf("tf-acc-topic-s3-b-notification-wo-f-%s", rString) bucketName := fmt.Sprintf("tf-acc-bucket-notification-wo-f-%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSS3BucketNotificationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSS3BucketConfigWithTopicNotificationWithoutFilter(topicName, bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSS3BucketTopicNotification( @@ -354,6 +417,8 @@ func testAccCheckAWSS3BucketLambdaFunctionConfiguration(n, i, t string, events [ func testAccAWSS3BucketConfigWithTopicNotification(topicName, bucketName string) string { return fmt.Sprintf(` +data "aws_partition" "current" {} + resource "aws_sns_topic" "topic" { name = "%s" policy = < 0 && err == nil { + t.Fatalf("expected %q to trigger an error", tc.Region) + } + if output != tc.ExpectedOutput { + t.Fatalf("expected %q, received %q", tc.ExpectedOutput, output) + } + } +} + func testAccCheckAWSS3BucketDestroy(s *terraform.State) error { return testAccCheckAWSS3BucketDestroyWithProvider(s, testAccProvider) } @@ -1143,7 +1686,9 @@ func testAccCheckAWSS3BucketCors(n string, corsRules []*s3.CORSRule) resource.Te }) if err != nil { - return fmt.Errorf("GetBucketCors error: %v", err) + if awsErr, ok := err.(awserr.Error); 
ok && awsErr.Code() != "NoSuchCORSConfiguration" { + return fmt.Errorf("GetBucketCors error: %v", err) + } } if !reflect.DeepEqual(out.CORSRules, corsRules) { @@ -1189,6 +1734,10 @@ func testAccCheckAWSS3BucketLogging(n, b, p string) resource.TestCheckFunc { return fmt.Errorf("GetBucketLogging error: %v", err) } + if out.LoggingEnabled == nil { + return fmt.Errorf("logging not enabled for bucket: %s", rs.Primary.ID) + } + tb, _ := s.RootModule().Resources[b] if v := out.LoggingEnabled.TargetBucket; v == nil { @@ -1220,6 +1769,15 @@ func testAccCheckAWSS3BucketReplicationRules(n string, providerF func() *schema. rs, _ := s.RootModule().Resources[n] for _, rule := range rules { if dest := rule.Destination; dest != nil { + if account := dest.Account; account != nil && strings.HasPrefix(aws.StringValue(dest.Account), "${") { + resourceReference := strings.Replace(aws.StringValue(dest.Account), "${", "", 1) + resourceReference = strings.Replace(resourceReference, "}", "", 1) + resourceReferenceParts := strings.Split(resourceReference, ".") + resourceAttribute := resourceReferenceParts[len(resourceReferenceParts)-1] + resourceName := strings.Join(resourceReferenceParts[:len(resourceReferenceParts)-1], ".") + value := s.RootModule().Resources[resourceName].Primary.Attributes[resourceAttribute] + dest.Account = aws.String(value) + } if ec := dest.EncryptionConfiguration; ec != nil { if ec.ReplicaKmsKeyID != nil { key_arn := s.RootModule().Resources["aws_kms_key.replica"].Primary.Attributes["arn"] @@ -1227,6 +1785,14 @@ func testAccCheckAWSS3BucketReplicationRules(n string, providerF func() *schema. } } } + // Sort filter tags by key. + if filter := rule.Filter; filter != nil { + if and := filter.And; and != nil { + if tags := and.Tags; tags != nil { + sort.Slice(tags, func(i, j int) bool { return *tags[i].Key < *tags[j].Key }) + } + } + } } provider := providerF() @@ -1244,6 +1810,17 @@ func testAccCheckAWSS3BucketReplicationRules(n string, providerF func() *schema. } return fmt.Errorf("GetReplicationConfiguration error: %v", err) } + + for _, rule := range out.ReplicationConfiguration.Rules { + // Sort filter tags by key. 
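			// The rules returned by GetBucketReplication may list tags in any order,
			// so they are normalized the same way as the expected rules above before
			// the reflect.DeepEqual comparison.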
+ if filter := rule.Filter; filter != nil { + if and := filter.And; and != nil { + if tags := and.Tags; tags != nil { + sort.Slice(tags, func(i, j int) bool { return *tags[i].Key < *tags[j].Key }) + } + } + } + } if !reflect.DeepEqual(out.ReplicationConfiguration.Rules, rules) { return fmt.Errorf("bad replication rules, expected: %v, got %v", rules, out.ReplicationConfiguration.Rules) } @@ -1262,12 +1839,32 @@ func testAccBucketDomainName(randInt int) string { return fmt.Sprintf("tf-test-bucket-%d.s3.amazonaws.com", randInt) } -func testAccWebsiteEndpoint(randInt int) string { - return fmt.Sprintf("tf-test-bucket-%d.s3-website-us-west-2.amazonaws.com", randInt) +func testAccBucketRegionalDomainName(randInt int, region string) string { + bucket := fmt.Sprintf("tf-test-bucket-%d", randInt) + regionalEndpoint, err := BucketRegionalDomainName(bucket, region) + if err != nil { + return fmt.Sprintf("Regional endpoint not found for bucket %s", bucket) + } + return regionalEndpoint +} + +func testAccWebsiteEndpoint(randInt int, region string) string { + return fmt.Sprintf("tf-test-bucket-%d.s3-website-%s.amazonaws.com", randInt, region) } -func testAccAWSS3BucketPolicy(randInt int) string { - return fmt.Sprintf(`{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::tf-test-bucket-%d/*" } ] }`, randInt) +func testAccAWSS3BucketPolicy(randInt int, partition string) string { + return fmt.Sprintf(`{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "", + "Effect": "Allow", + "Principal": {"AWS": "*"}, + "Action": "s3:GetObject", + "Resource": "arn:%s:s3:::tf-test-bucket-%d/*" + } + ] +}`, partition, randInt) } func testAccAWSS3BucketConfig(randInt int) string { @@ -1441,15 +2038,8 @@ EOF func testAccAWSS3BucketConfigWithAcceleration(randInt int) string { return fmt.Sprintf(` -provider "aws" { - alias = "west" - region = "eu-west-1" -} - resource "aws_s3_bucket" "bucket" { - provider = "aws.west" bucket = "tf-test-bucket-%d" - region = "eu-west-1" acl = "public-read" acceleration_status = "Enabled" } @@ -1458,15 +2048,8 @@ resource "aws_s3_bucket" "bucket" { func testAccAWSS3BucketConfigWithoutAcceleration(randInt int) string { return fmt.Sprintf(` -provider "aws" { - alias = "west" - region = "eu-west-1" -} - resource "aws_s3_bucket" "bucket" { - provider = "aws.west" bucket = "tf-test-bucket-%d" - region = "eu-west-1" acl = "public-read" acceleration_status = "Suspended" } @@ -1493,14 +2076,14 @@ resource "aws_s3_bucket" "bucket" { `, randInt) } -func testAccAWSS3BucketConfigWithPolicy(randInt int) string { +func testAccAWSS3BucketConfigWithPolicy(randInt int, partition string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "bucket" { bucket = "tf-test-bucket-%d" acl = "public-read" policy = %s } -`, randInt, strconv.Quote(testAccAWSS3BucketPolicy(randInt))) +`, randInt, strconv.Quote(testAccAWSS3BucketPolicy(randInt, partition))) } func testAccAWSS3BucketDestroyedConfig(randInt int) string { @@ -1621,6 +2204,22 @@ resource "aws_s3_bucket" "bucket" { `, randInt) } +func testAccAWSS3BucketConfigWithCORSEmptyOrigin(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "bucket" { + bucket = "tf-test-bucket-%d" + acl = "public-read" + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT","POST"] + allowed_origins = [""] + expose_headers = ["x-amz-server-side-encryption","ETag"] + max_age_seconds = 3000 + } +} +`, randInt) +} + var testAccAWSS3BucketConfigWithAcl 
= ` resource "aws_s3_bucket" "bucket" { bucket = "tf-test-bucket-%d" @@ -1670,8 +2269,14 @@ resource "aws_s3_bucket" "bucket" { days = 30 storage_class = "STANDARD_IA" } + transition { days = 60 + storage_class = "ONEZONE_IA" + } + + transition { + days = 90 storage_class = "GLACIER" } } @@ -1712,6 +2317,24 @@ resource "aws_s3_bucket" "bucket" { `, randInt) } +func testAccAWSS3BucketConfigWithLifecycleExpireMarker(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "bucket" { + bucket = "tf-test-bucket-%d" + acl = "private" + lifecycle_rule { + id = "id1" + prefix = "path1/" + enabled = true + + expiration { + expired_object_delete_marker = "true" + } + } +} +`, randInt) +} + func testAccAWSS3BucketConfigWithVersioningLifecycle(randInt int) string { return fmt.Sprintf(` resource "aws_s3_bucket" "bucket" { @@ -1904,6 +2527,109 @@ resource "aws_s3_bucket" "destination" { `, randInt, randInt, randInt) } +func testAccAWSS3BucketConfigReplicationWithAccessControlTranslation(randInt int) string { + return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` +data "aws_caller_identity" "current" {} + +resource "aws_s3_bucket" "bucket" { + provider = "aws.uswest2" + bucket = "tf-test-bucket-%d" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = "${aws_iam_role.role.arn}" + rules { + id = "foobar" + prefix = "foo" + status = "Enabled" + + destination { + account_id = "${data.aws_caller_identity.current.account_id}" + bucket = "${aws_s3_bucket.destination.arn}" + storage_class = "STANDARD" + + access_control_translation { + owner = "Destination" + } + } + } + } +} + +resource "aws_s3_bucket" "destination" { + provider = "aws.euwest" + bucket = "tf-test-bucket-destination-%d" + region = "eu-west-1" + + versioning { + enabled = true + } +} +`, randInt, randInt, randInt) +} + +func testAccAWSS3BucketConfigReplicationWithSseKmsEncryptedObjectsAndAccessControlTranslation(randInt int) string { + return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` +data "aws_caller_identity" "current" {} + +resource "aws_kms_key" "replica" { + provider = "aws.euwest" + description = "TF Acceptance Test S3 repl KMS key" + deletion_window_in_days = 7 +} + +resource "aws_s3_bucket" "bucket" { + provider = "aws.uswest2" + bucket = "tf-test-bucket-%d" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = "${aws_iam_role.role.arn}" + rules { + id = "foobar" + prefix = "foo" + status = "Enabled" + + destination { + account_id = "${data.aws_caller_identity.current.account_id}" + bucket = "${aws_s3_bucket.destination.arn}" + storage_class = "STANDARD" + replica_kms_key_id = "${aws_kms_key.replica.arn}" + + access_control_translation { + owner = "Destination" + } + } + + source_selection_criteria { + sse_kms_encrypted_objects { + enabled = true + } + } + } + } +} + +resource "aws_s3_bucket" "destination" { + provider = "aws.euwest" + bucket = "tf-test-bucket-destination-%d" + region = "eu-west-1" + + versioning { + enabled = true + } +} +`, randInt, randInt, randInt) +} + func testAccAWSS3BucketConfigReplicationWithoutStorageClass(randInt int) string { return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` resource "aws_s3_bucket" "bucket" { @@ -1941,6 +2667,43 @@ resource "aws_s3_bucket" "destination" { `, randInt, randInt, randInt) } +func testAccAWSS3BucketConfigReplicationWithoutPrefix(randInt int) string { + return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` +resource "aws_s3_bucket" "bucket" 
{ + provider = "aws.uswest2" + bucket = "tf-test-bucket-%d" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = "${aws_iam_role.role.arn}" + rules { + id = "foobar" + status = "Enabled" + + destination { + bucket = "${aws_s3_bucket.destination.arn}" + storage_class = "STANDARD" + } + } + } +} + +resource "aws_s3_bucket" "destination" { + provider = "aws.euwest" + bucket = "tf-test-bucket-destination-%d" + region = "eu-west-1" + + versioning { + enabled = true + } +} +`, randInt, randInt, randInt) +} + func testAccAWSS3BucketConfigReplicationNoVersioning(randInt int) string { return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` resource "aws_s3_bucket" "bucket" { @@ -1975,6 +2738,185 @@ resource "aws_s3_bucket" "destination" { `, randInt, randInt, randInt) } +func testAccAWSS3BucketConfigReplicationWithV2ConfigurationNoTags(randInt int) string { + return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` +resource "aws_s3_bucket" "bucket" { + provider = "aws.uswest2" + bucket = "tf-test-bucket-%d" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = "${aws_iam_role.role.arn}" + rules { + id = "foobar" + status = "Enabled" + + filter { + prefix = "foo" + } + + destination { + bucket = "${aws_s3_bucket.destination.arn}" + storage_class = "STANDARD" + } + } + } +} + +resource "aws_s3_bucket" "destination" { + provider = "aws.euwest" + bucket = "tf-test-bucket-destination-%d" + region = "eu-west-1" + + versioning { + enabled = true + } +} +`, randInt, randInt, randInt) +} + +func testAccAWSS3BucketConfigReplicationWithV2ConfigurationOnlyOneTag(randInt int) string { + return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` +resource "aws_s3_bucket" "bucket" { + provider = "aws.uswest2" + bucket = "tf-test-bucket-%d" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = "${aws_iam_role.role.arn}" + rules { + id = "foobar" + status = "Enabled" + + priority = 42 + + filter { + tags { + ReplicateMe = "Yes" + } + } + + destination { + bucket = "${aws_s3_bucket.destination.arn}" + storage_class = "STANDARD" + } + } + } +} + +resource "aws_s3_bucket" "destination" { + provider = "aws.euwest" + bucket = "tf-test-bucket-destination-%d" + region = "eu-west-1" + + versioning { + enabled = true + } +} +`, randInt, randInt, randInt) +} + +func testAccAWSS3BucketConfigReplicationWithV2ConfigurationPrefixAndTags(randInt int) string { + return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` +resource "aws_s3_bucket" "bucket" { + provider = "aws.uswest2" + bucket = "tf-test-bucket-%d" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = "${aws_iam_role.role.arn}" + rules { + id = "foobar" + status = "Enabled" + + priority = 41 + + filter { + prefix = "foo" + + tags { + AnotherTag = "OK" + ReplicateMe = "Yes" + } + } + + destination { + bucket = "${aws_s3_bucket.destination.arn}" + storage_class = "STANDARD" + } + } + } +} + +resource "aws_s3_bucket" "destination" { + provider = "aws.euwest" + bucket = "tf-test-bucket-destination-%d" + region = "eu-west-1" + + versioning { + enabled = true + } +} +`, randInt, randInt, randInt) +} + +func testAccAWSS3BucketConfigReplicationWithV2ConfigurationMultipleTags(randInt int) string { + return fmt.Sprintf(testAccAWSS3BucketConfigReplicationBasic+` +resource "aws_s3_bucket" "bucket" { + provider = "aws.uswest2" + bucket = "tf-test-bucket-%d" + acl = "private" + + versioning { 
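+    # S3 replication requires versioning to be enabled on both the source
+    # and destination buckets.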
+ enabled = true + } + + replication_configuration { + role = "${aws_iam_role.role.arn}" + rules { + id = "foobar" + status = "Enabled" + + filter { + tags { + AnotherTag = "OK" + Foo = "Bar" + ReplicateMe = "Yes" + } + } + + destination { + bucket = "${aws_s3_bucket.destination.arn}" + storage_class = "STANDARD" + } + } + } +} + +resource "aws_s3_bucket" "destination" { + provider = "aws.euwest" + bucket = "tf-test-bucket-destination-%d" + region = "eu-west-1" + + versioning { + enabled = true + } +} +`, randInt, randInt, randInt) +} + const testAccAWSS3BucketConfig_namePrefix = ` resource "aws_s3_bucket" "test" { bucket_prefix = "tf-test-" diff --git a/aws/resource_aws_secretsmanager_secret.go b/aws/resource_aws_secretsmanager_secret.go new file mode 100644 index 00000000000..edb12687036 --- /dev/null +++ b/aws/resource_aws_secretsmanager_secret.go @@ -0,0 +1,429 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/secretsmanager" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/structure" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsSecretsManagerSecret() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsSecretsManagerSecretCreate, + Read: resourceAwsSecretsManagerSecretRead, + Update: resourceAwsSecretsManagerSecretUpdate, + Delete: resourceAwsSecretsManagerSecretDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + }, + "name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, + ValidateFunc: validateSecretManagerSecretName, + }, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + ValidateFunc: validateSecretManagerSecretNamePrefix, + }, + "policy": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.ValidateJsonString, + DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs, + }, + "recovery_window_in_days": { + Type: schema.TypeInt, + Optional: true, + Default: 30, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(int) + if value == 0 { + return + } + if value >= 7 && value <= 30 { + return + } + errors = append(errors, fmt.Errorf("%q must be 0 or between 7 and 30", k)) + return + }, + }, + "rotation_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "rotation_lambda_arn": { + Type: schema.TypeString, + Optional: true, + }, + "rotation_rules": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "automatically_after_days": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "tags": tagsSchema(), + }, + } +} + +func resourceAwsSecretsManagerSecretCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).secretsmanagerconn + + var secretName string + if v, ok := d.GetOk("name"); ok { + secretName = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + secretName = resource.PrefixedUniqueId(v.(string)) + } else { + 
secretName = resource.UniqueId() + } + + input := &secretsmanager.CreateSecretInput{ + Description: aws.String(d.Get("description").(string)), + Name: aws.String(secretName), + } + + if v, ok := d.GetOk("kms_key_id"); ok && v.(string) != "" { + input.KmsKeyId = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Creating Secrets Manager Secret: %s", input) + + // Retry for secret recreation after deletion + var output *secretsmanager.CreateSecretOutput + err := resource.Retry(2*time.Minute, func() *resource.RetryError { + var err error + output, err = conn.CreateSecret(input) + // InvalidRequestException: You can’t perform this operation on the secret because it was deleted. + if isAWSErr(err, secretsmanager.ErrCodeInvalidRequestException, "You can’t perform this operation on the secret because it was deleted") { + return resource.RetryableError(err) + } + if err != nil { + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("error creating Secrets Manager Secret: %s", err) + } + + d.SetId(aws.StringValue(output.ARN)) + + if v, ok := d.GetOk("policy"); ok && v.(string) != "" { + input := &secretsmanager.PutResourcePolicyInput{ + ResourcePolicy: aws.String(v.(string)), + SecretId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Setting Secrets Manager Secret resource policy; %s", input) + _, err := conn.PutResourcePolicy(input) + if err != nil { + return fmt.Errorf("error setting Secrets Manager Secret %q policy: %s", d.Id(), err) + } + } + + if v, ok := d.GetOk("rotation_lambda_arn"); ok && v.(string) != "" { + input := &secretsmanager.RotateSecretInput{ + RotationLambdaARN: aws.String(v.(string)), + RotationRules: expandSecretsManagerRotationRules(d.Get("rotation_rules").([]interface{})), + SecretId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Enabling Secrets Manager Secret rotation: %s", input) + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.RotateSecret(input) + if err != nil { + // AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. 
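+				// This typically happens while the Lambda permission granting
+				// secretsmanager.amazonaws.com invoke access is still propagating,
+				// so the call is retried for up to a minute before failing.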
+ if isAWSErr(err, "AccessDeniedException", "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("error enabling Secrets Manager Secret %q rotation: %s", d.Id(), err) + } + } + + if v, ok := d.GetOk("tags"); ok { + input := &secretsmanager.TagResourceInput{ + SecretId: aws.String(d.Id()), + Tags: tagsFromMapSecretsManager(v.(map[string]interface{})), + } + + log.Printf("[DEBUG] Tagging Secrets Manager Secret: %s", input) + _, err := conn.TagResource(input) + if err != nil { + return fmt.Errorf("error tagging Secrets Manager Secret %q: %s", d.Id(), input) + } + } + + return resourceAwsSecretsManagerSecretRead(d, meta) +} + +func resourceAwsSecretsManagerSecretRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).secretsmanagerconn + + input := &secretsmanager.DescribeSecretInput{ + SecretId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading Secrets Manager Secret: %s", input) + output, err := conn.DescribeSecret(input) + if err != nil { + if isAWSErr(err, secretsmanager.ErrCodeResourceNotFoundException, "") { + log.Printf("[WARN] Secrets Manager Secret %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading Secrets Manager Secret: %s", err) + } + + d.Set("arn", output.ARN) + d.Set("description", output.Description) + d.Set("kms_key_id", output.KmsKeyId) + d.Set("name", output.Name) + + pIn := &secretsmanager.GetResourcePolicyInput{ + SecretId: aws.String(d.Id()), + } + log.Printf("[DEBUG] Reading Secrets Manager Secret policy: %s", pIn) + pOut, err := conn.GetResourcePolicy(pIn) + if err != nil { + return fmt.Errorf("error reading Secrets Manager Secret policy: %s", err) + } + + if pOut.ResourcePolicy != nil { + policy, err := structure.NormalizeJsonString(aws.StringValue(pOut.ResourcePolicy)) + if err != nil { + return fmt.Errorf("policy contains an invalid JSON: %s", err) + } + d.Set("policy", policy) + } + + d.Set("rotation_enabled", output.RotationEnabled) + + if aws.BoolValue(output.RotationEnabled) { + d.Set("rotation_lambda_arn", output.RotationLambdaARN) + if err := d.Set("rotation_rules", flattenSecretsManagerRotationRules(output.RotationRules)); err != nil { + return fmt.Errorf("error setting rotation_rules: %s", err) + } + } else { + d.Set("rotation_lambda_arn", "") + d.Set("rotation_rules", []interface{}{}) + } + + if err := d.Set("tags", tagsToMapSecretsManager(output.Tags)); err != nil { + return fmt.Errorf("error setting tags: %s", err) + } + + return nil +} + +func resourceAwsSecretsManagerSecretUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).secretsmanagerconn + + if d.HasChange("description") || d.HasChange("kms_key_id") { + input := &secretsmanager.UpdateSecretInput{ + Description: aws.String(d.Get("description").(string)), + SecretId: aws.String(d.Id()), + } + + if v, ok := d.GetOk("kms_key_id"); ok && v.(string) != "" { + input.KmsKeyId = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Updating Secrets Manager Secret: %s", input) + _, err := conn.UpdateSecret(input) + if err != nil { + return fmt.Errorf("error updating Secrets Manager Secret: %s", err) + } + } + + if d.HasChange("policy") { + if v, ok := d.GetOk("policy"); ok && v.(string) != "" { + policy, err := structure.NormalizeJsonString(v.(string)) + if err != nil { + return fmt.Errorf("policy contains an invalid JSON: %s", err) + } + input := &secretsmanager.PutResourcePolicyInput{ + 
ResourcePolicy: aws.String(policy), + SecretId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Setting Secrets Manager Secret resource policy; %s", input) + _, err = conn.PutResourcePolicy(input) + if err != nil { + return fmt.Errorf("error setting Secrets Manager Secret %q policy: %s", d.Id(), err) + } + } else { + input := &secretsmanager.DeleteResourcePolicyInput{ + SecretId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Removing Secrets Manager Secret policy: %s", input) + _, err := conn.DeleteResourcePolicy(input) + if err != nil { + return fmt.Errorf("error removing Secrets Manager Secret %q policy: %s", d.Id(), err) + } + } + } + + if d.HasChange("rotation_lambda_arn") || d.HasChange("rotation_rules") { + if v, ok := d.GetOk("rotation_lambda_arn"); ok && v.(string) != "" { + input := &secretsmanager.RotateSecretInput{ + RotationLambdaARN: aws.String(v.(string)), + RotationRules: expandSecretsManagerRotationRules(d.Get("rotation_rules").([]interface{})), + SecretId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Enabling Secrets Manager Secret rotation: %s", input) + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.RotateSecret(input) + if err != nil { + // AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. + if isAWSErr(err, "AccessDeniedException", "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("error updating Secrets Manager Secret %q rotation: %s", d.Id(), err) + } + } else { + input := &secretsmanager.CancelRotateSecretInput{ + SecretId: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Cancelling Secrets Manager Secret rotation: %s", input) + _, err := conn.CancelRotateSecret(input) + if err != nil { + return fmt.Errorf("error cancelling Secret Manager Secret %q rotation: %s", d.Id(), err) + } + } + } + + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsSecretsManager(tagsFromMapSecretsManager(o), tagsFromMapSecretsManager(n)) + + if len(remove) > 0 { + log.Printf("[DEBUG] Removing Secrets Manager Secret %q tags: %#v", d.Id(), remove) + k := make([]*string, len(remove), len(remove)) + for i, t := range remove { + k[i] = t.Key + } + + _, err := conn.UntagResource(&secretsmanager.UntagResourceInput{ + SecretId: aws.String(d.Id()), + TagKeys: k, + }) + if err != nil { + return fmt.Errorf("error updating Secrets Manager Secrets %q tags: %s", d.Id(), err) + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating Secrets Manager Secret %q tags: %#v", d.Id(), create) + _, err := conn.TagResource(&secretsmanager.TagResourceInput{ + SecretId: aws.String(d.Id()), + Tags: create, + }) + if err != nil { + return fmt.Errorf("error updating Secrets Manager Secrets %q tags: %s", d.Id(), err) + } + } + } + + return resourceAwsSecretsManagerSecretRead(d, meta) +} + +func resourceAwsSecretsManagerSecretDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).secretsmanagerconn + + input := &secretsmanager.DeleteSecretInput{ + SecretId: aws.String(d.Id()), + } + + recoveryWindowInDays := d.Get("recovery_window_in_days").(int) + if recoveryWindowInDays == 0 { + input.ForceDeleteWithoutRecovery = aws.Bool(true) + } else { + input.RecoveryWindowInDays = aws.Int64(int64(recoveryWindowInDays)) + } + + log.Printf("[DEBUG] Deleting Secrets Manager Secret: %s", input) + _, err := 
conn.DeleteSecret(input) + if err != nil { + if isAWSErr(err, secretsmanager.ErrCodeResourceNotFoundException, "") { + return nil + } + return fmt.Errorf("error deleting Secrets Manager Secret: %s", err) + } + + return nil +} + +func expandSecretsManagerRotationRules(l []interface{}) *secretsmanager.RotationRulesType { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + rules := &secretsmanager.RotationRulesType{ + AutomaticallyAfterDays: aws.Int64(int64(m["automatically_after_days"].(int))), + } + + return rules +} + +func flattenSecretsManagerRotationRules(rules *secretsmanager.RotationRulesType) []interface{} { + if rules == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "automatically_after_days": int(aws.Int64Value(rules.AutomaticallyAfterDays)), + } + + return []interface{}{m} +} diff --git a/aws/resource_aws_secretsmanager_secret_test.go b/aws/resource_aws_secretsmanager_secret_test.go new file mode 100644 index 00000000000..e65b583e6ee --- /dev/null +++ b/aws/resource_aws_secretsmanager_secret_test.go @@ -0,0 +1,711 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/secretsmanager" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "github.com/jen20/awspolicyequivalence" +) + +func init() { + resource.AddTestSweepers("aws_secretsmanager_secret", &resource.Sweeper{ + Name: "aws_secretsmanager_secret", + F: testSweepSecretsManagerSecrets, + }) +} + +func testSweepSecretsManagerSecrets(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).secretsmanagerconn + + err = conn.ListSecretsPages(&secretsmanager.ListSecretsInput{}, func(page *secretsmanager.ListSecretsOutput, isLast bool) bool { + if len(page.SecretList) == 0 { + log.Print("[DEBUG] No Secrets Manager Secrets to sweep") + return true + } + + for _, secret := range page.SecretList { + name := aws.StringValue(secret.Name) + if !strings.HasPrefix(name, "tf-acc-test-") { + log.Printf("[INFO] Skipping Secrets Manager Secret: %s", name) + continue + } + log.Printf("[INFO] Deleting Secrets Manager Secret: %s", name) + input := &secretsmanager.DeleteSecretInput{ + ForceDeleteWithoutRecovery: aws.Bool(true), + SecretId: aws.String(name), + } + + _, err := conn.DeleteSecret(input) + if err != nil { + if isAWSErr(err, secretsmanager.ErrCodeResourceNotFoundException, "") { + continue + } + log.Printf("[ERROR] Failed to delete Secrets Manager Secret (%s): %s", name, err) + } + } + + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Secrets Manager Secret sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Secrets Manager Secrets: %s", err) + } + return nil +} + +func TestAccAwsSecretsManagerSecret_Basic(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSecretsManagerSecretConfig_Name(rName), + 
Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:secretsmanager:[^:]+:[^:]+:secret:%s-.+$", rName))), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "kms_key_id", ""), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "recovery_window_in_days", "30"), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "rotation_lambda_arn", ""), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"recovery_window_in_days"}, + }, + }, + }) +} + +func TestAccAwsSecretsManagerSecret_withNamePrefix(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := "tf-acc-test-" + resourceName := "aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSecretsManagerSecretConfig_withNamePrefix(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:secretsmanager:[^:]+:[^:]+:secret:%s.+$", rName))), + resource.TestMatchResourceAttr(resourceName, "name", regexp.MustCompile(fmt.Sprintf("^%s", rName))), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"recovery_window_in_days", "name_prefix"}, + }, + }, + }) +} + +func TestAccAwsSecretsManagerSecret_Description(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSecretsManagerSecretConfig_Description(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + Config: testAccAwsSecretsManagerSecretConfig_Description(rName, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"recovery_window_in_days"}, + }, + }, + }) +} + +func TestAccAwsSecretsManagerSecret_KmsKeyID(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSecretsManagerSecretConfig_KmsKeyID(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttrSet(resourceName, "kms_key_id"), + ), + }, + { + Config: testAccAwsSecretsManagerSecretConfig_KmsKeyID_Updated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttrSet(resourceName, "kms_key_id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"recovery_window_in_days"}, + }, + }, + }) +} + +func TestAccAwsSecretsManagerSecret_RecoveryWindowInDays_Recreate(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSecretsManagerSecretConfig_RecoveryWindowInDays(rName, 0), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "recovery_window_in_days", "0"), + ), + }, + { + Config: testAccAwsSecretsManagerSecretConfig_RecoveryWindowInDays(rName, 0), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "recovery_window_in_days", "0"), + ), + Taint: []string{resourceName}, + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"recovery_window_in_days"}, + }, + }, + }) +} + +func TestAccAwsSecretsManagerSecret_RotationLambdaARN(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + // Test enabling rotation on resource creation + { + Config: testAccAwsSecretsManagerSecretConfig_RotationLambdaARN(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), + resource.TestMatchResourceAttr(resourceName, "rotation_lambda_arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:lambda:[^:]+:[^:]+:function:%s-1$", rName))), + ), + }, + // Test updating rotation + // We need a valid rotation function for this testing + // InvalidRequestException: A previous rotation isn’t complete. That rotation will be reattempted. 
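+			// The update step below stays commented out until a real rotation
+			// function is available: the placeholder Lambda never finishes the
+			// initial rotation, so a second RotateSecret call is rejected with
+			// the error above.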
+ /* + { + Config: testAccAwsSecretsManagerSecretConfig_RotationLambdaARN_Updated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), + resource.TestMatchResourceAttr(resourceName, "rotation_lambda_arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:lambda:[^:]+:[^:]+:function:%s-2$", rName))), + ), + }, + */ + // Test importing rotation + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"recovery_window_in_days"}, + }, + // Test removing rotation on resource update + { + Config: testAccAwsSecretsManagerSecretConfig_Name(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "rotation_lambda_arn", ""), + ), + }, + }, + }) +} + +func TestAccAwsSecretsManagerSecret_RotationRules(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + // Test creating rotation rules on resource creation + { + Config: testAccAwsSecretsManagerSecretConfig_RotationRules(rName, 7), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.0.automatically_after_days", "7"), + ), + }, + // Test updating rotation rules + // We need a valid rotation function for this testing + // InvalidRequestException: A previous rotation isn’t complete. That rotation will be reattempted. 
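+			// As with the rotation Lambda test above, an in-place rotation_rules
+			// update needs a rotation function that actually completes, so this
+			// step is left disabled for now.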
+ /* + { + Config: testAccAwsSecretsManagerSecretConfig_RotationRules(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.0.automatically_after_days", "1"), + ), + }, + */ + // Test importing rotation rules + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"recovery_window_in_days"}, + }, + // Test removing rotation rules on resource update + { + Config: testAccAwsSecretsManagerSecretConfig_Name(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "0"), + ), + }, + }, + }) +} + +func TestAccAwsSecretsManagerSecret_Tags(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSecretsManagerSecretConfig_Tags_Single(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + ), + }, + { + Config: testAccAwsSecretsManagerSecretConfig_Tags_SingleUpdated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value-updated"), + ), + }, + { + Config: testAccAwsSecretsManagerSecretConfig_Tags_Multiple(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + resource.TestCheckResourceAttr(resourceName, "tags.tag2", "tag2value"), + ), + }, + { + Config: testAccAwsSecretsManagerSecretConfig_Tags_Single(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"recovery_window_in_days"}, + }, + }, + }) +} + +func TestAccAwsSecretsManagerSecret_policy(t *testing.T) { + var secret secretsmanager.DescribeSecretOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_secretsmanager_secret.test" + expectedPolicyText := `{"Version":"2012-10-17","Statement":[{"Sid":"EnableAllPermissions","Effect":"Allow","Principal":{"AWS":"*"},"Action":"secretsmanager:GetSecretValue","Resource":"*"}]}` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + 
CheckDestroy: testAccCheckAwsSecretsManagerSecretDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSecretsManagerSecretConfig_Policy(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSecretsManagerSecretExists(resourceName, &secret), + testAccCheckAwsSecretsManagerSecretHasPolicy(resourceName, expectedPolicyText), + ), + }, + }, + }) +} + +func testAccCheckAwsSecretsManagerSecretDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).secretsmanagerconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_secretsmanager_secret" { + continue + } + + input := &secretsmanager.DescribeSecretInput{ + SecretId: aws.String(rs.Primary.ID), + } + + output, err := conn.DescribeSecret(input) + + if err != nil { + if isAWSErr(err, secretsmanager.ErrCodeResourceNotFoundException, "") { + return nil + } + return err + } + + if output != nil && output.DeletedDate == nil { + return fmt.Errorf("Secret %q still exists", rs.Primary.ID) + } + } + + return nil + +} + +func testAccCheckAwsSecretsManagerSecretExists(resourceName string, secret *secretsmanager.DescribeSecretOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).secretsmanagerconn + input := &secretsmanager.DescribeSecretInput{ + SecretId: aws.String(rs.Primary.ID), + } + + output, err := conn.DescribeSecret(input) + + if err != nil { + return err + } + + if output == nil { + return fmt.Errorf("Secret %q does not exist", rs.Primary.ID) + } + + *secret = *output + + return nil + } +} + +func testAccCheckAwsSecretsManagerSecretHasPolicy(name string, expectedPolicyText string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + conn := testAccProvider.Meta().(*AWSClient).secretsmanagerconn + input := &secretsmanager.GetResourcePolicyInput{ + SecretId: aws.String(rs.Primary.ID), + } + + out, err := conn.GetResourcePolicy(input) + + if err != nil { + return err + } + + actualPolicyText := *out.ResourcePolicy + + equivalent, err := awspolicy.PoliciesAreEquivalent(actualPolicyText, expectedPolicyText) + if err != nil { + return fmt.Errorf("Error testing policy equivalence: %s", err) + } + if !equivalent { + return fmt.Errorf("Non-equivalent policy error:\n\nexpected: %s\n\n got: %s\n", + expectedPolicyText, actualPolicyText) + } + + return nil + } +} + +func testAccAwsSecretsManagerSecretConfig_Description(rName, description string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + description = "%s" + name = "%s" +} +`, description, rName) +} + +func testAccAwsSecretsManagerSecretConfig_Name(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + name = "%s" +} +`, rName) +} + +func testAccAwsSecretsManagerSecretConfig_withNamePrefix(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + name_prefix = "%s" +} +`, rName) +} + +func testAccAwsSecretsManagerSecretConfig_KmsKeyID(rName string) string { + return fmt.Sprintf(` +resource "aws_kms_key" "test1" { + deletion_window_in_days = 7 +} + +resource "aws_kms_key" "test2" { + deletion_window_in_days = 7 +} + +resource "aws_secretsmanager_secret" "test" { + kms_key_id = "${aws_kms_key.test1.id}" + name = "%s" +} +`, rName) +} + 
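+// Same as the config above, but kms_key_id now points at the second key.
+// The acceptance test applies both configs in sequence to exercise the
+// in-place UpdateSecret path (kms_key_id is not ForceNew in the schema).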
+func testAccAwsSecretsManagerSecretConfig_KmsKeyID_Updated(rName string) string { + return fmt.Sprintf(` +resource "aws_kms_key" "test1" { + deletion_window_in_days = 7 +} + +resource "aws_kms_key" "test2" { + deletion_window_in_days = 7 +} + +resource "aws_secretsmanager_secret" "test" { + kms_key_id = "${aws_kms_key.test2.id}" + name = "%s" +} +`, rName) +} + +func testAccAwsSecretsManagerSecretConfig_RecoveryWindowInDays(rName string, recoveryWindowInDays int) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + name = %q + recovery_window_in_days = %d +} +`, rName, recoveryWindowInDays) +} + +func testAccAwsSecretsManagerSecretConfig_RotationLambdaARN(rName string) string { + return baseAccAWSLambdaConfig(rName, rName, rName) + fmt.Sprintf(` +# Not a real rotation function +resource "aws_lambda_function" "test1" { + filename = "test-fixtures/lambdatest.zip" + function_name = "%[1]s-1" + handler = "exports.example" + role = "${aws_iam_role.iam_for_lambda.arn}" + runtime = "nodejs4.3" +} + +resource "aws_lambda_permission" "test1" { + action = "lambda:InvokeFunction" + function_name = "${aws_lambda_function.test1.function_name}" + principal = "secretsmanager.amazonaws.com" + statement_id = "AllowExecutionFromSecretsManager1" +} + +# Not a real rotation function +resource "aws_lambda_function" "test2" { + filename = "test-fixtures/lambdatest.zip" + function_name = "%[1]s-2" + handler = "exports.example" + role = "${aws_iam_role.iam_for_lambda.arn}" + runtime = "nodejs4.3" +} + +resource "aws_lambda_permission" "test2" { + action = "lambda:InvokeFunction" + function_name = "${aws_lambda_function.test2.function_name}" + principal = "secretsmanager.amazonaws.com" + statement_id = "AllowExecutionFromSecretsManager2" +} + +resource "aws_secretsmanager_secret" "test" { + name = "%[1]s" + rotation_lambda_arn = "${aws_lambda_function.test1.arn}" + + depends_on = ["aws_lambda_permission.test1"] +} +`, rName) +} + +func testAccAwsSecretsManagerSecretConfig_RotationRules(rName string, automaticallyAfterDays int) string { + return baseAccAWSLambdaConfig(rName, rName, rName) + fmt.Sprintf(` +# Not a real rotation function +resource "aws_lambda_function" "test" { + filename = "test-fixtures/lambdatest.zip" + function_name = "%[1]s" + handler = "exports.example" + role = "${aws_iam_role.iam_for_lambda.arn}" + runtime = "nodejs4.3" +} + +resource "aws_lambda_permission" "test" { + action = "lambda:InvokeFunction" + function_name = "${aws_lambda_function.test.function_name}" + principal = "secretsmanager.amazonaws.com" + statement_id = "AllowExecutionFromSecretsManager1" +} + +resource "aws_secretsmanager_secret" "test" { + name = "%[1]s" + rotation_lambda_arn = "${aws_lambda_function.test.arn}" + + rotation_rules { + automatically_after_days = %[2]d + } + + depends_on = ["aws_lambda_permission.test"] +} +`, rName, automaticallyAfterDays) +} + +func testAccAwsSecretsManagerSecretConfig_Tags_Single(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + name = "%s" + + tags = { + tag1 = "tag1value" + } +} +`, rName) +} + +func testAccAwsSecretsManagerSecretConfig_Tags_SingleUpdated(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + name = "%s" + + tags = { + tag1 = "tag1value-updated" + } +} +`, rName) +} + +func testAccAwsSecretsManagerSecretConfig_Tags_Multiple(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + name = "%s" + + tags = { + tag1 = 
"tag1value" + tag2 = "tag2value" + } +} +`, rName) +} + +func testAccAwsSecretsManagerSecretConfig_Policy(rName string) string { + return fmt.Sprintf(` +resource "aws_secretsmanager_secret" "test" { + name = "%s" + + policy = < 0 { - raw, ok := m["cidr_blocks"] - if !ok { - raw = make([]string, 0, len(perm.IpRanges)) - } - list := raw.([]string) - for _, ip := range perm.IpRanges { - list = append(list, *ip.CidrIp) - desc := aws.StringValue(ip.Description) - if desc != "" { - description = desc + + rule := initSecurityGroupRule(ruleMap, perm, desc) + + raw, ok := rule["cidr_blocks"] + if !ok { + raw = make([]string, 0) } - } + list := raw.([]string) - m["cidr_blocks"] = list + rule["cidr_blocks"] = append(list, *ip.CidrIp) + } } if len(perm.Ipv6Ranges) > 0 { - raw, ok := m["ipv6_cidr_blocks"] - if !ok { - raw = make([]string, 0, len(perm.Ipv6Ranges)) - } - list := raw.([]string) - for _, ip := range perm.Ipv6Ranges { - list = append(list, *ip.CidrIpv6) - desc := aws.StringValue(ip.Description) - if desc != "" { - description = desc + + rule := initSecurityGroupRule(ruleMap, perm, desc) + + raw, ok := rule["ipv6_cidr_blocks"] + if !ok { + raw = make([]string, 0) } - } + list := raw.([]string) - m["ipv6_cidr_blocks"] = list + rule["ipv6_cidr_blocks"] = append(list, *ip.CidrIpv6) + } } if len(perm.PrefixListIds) > 0 { - raw, ok := m["prefix_list_ids"] - if !ok { - raw = make([]string, 0, len(perm.PrefixListIds)) - } - list := raw.([]string) - for _, pl := range perm.PrefixListIds { - list = append(list, *pl.PrefixListId) - desc := aws.StringValue(pl.Description) - if desc != "" { - description = desc + + rule := initSecurityGroupRule(ruleMap, perm, desc) + + raw, ok := rule["prefix_list_ids"] + if !ok { + raw = make([]string, 0) } - } + list := raw.([]string) - m["prefix_list_ids"] = list + rule["prefix_list_ids"] = append(list, *pl.PrefixListId) + } } groups := flattenSecurityGroups(perm.UserIdGroupPairs, ownerId) - for i, g := range groups { - if *g.GroupId == groupId { - groups[i], groups = groups[len(groups)-1], groups[:len(groups)-1] - m["self"] = true - + if len(groups) > 0 { + for _, g := range groups { desc := aws.StringValue(g.Description) - if desc != "" { - description = desc + + rule := initSecurityGroupRule(ruleMap, perm, desc) + + if *g.GroupId == groupId { + rule["self"] = true + continue } - } - } - if len(groups) > 0 { - raw, ok := m["security_groups"] - if !ok { - raw = schema.NewSet(schema.HashString, nil) - } - list := raw.(*schema.Set) + raw, ok := rule["security_groups"] + if !ok { + raw = schema.NewSet(schema.HashString, nil) + } + list := raw.(*schema.Set) - for _, g := range groups { if g.GroupName != nil { list.Add(*g.GroupName) } else { list.Add(*g.GroupId) } - - desc := aws.StringValue(g.Description) - if desc != "" { - description = desc - } + rule["security_groups"] = list } - - m["security_groups"] = list } - m["description"] = description } + rules := make([]map[string]interface{}, 0, len(ruleMap)) for _, m := range ruleMap { rules = append(rules, m) @@ -720,14 +696,14 @@ func resourceAwsSecurityGroupUpdateRules( n = new(schema.Set) } - os := o.(*schema.Set) - ns := n.(*schema.Set) + os := resourceAwsSecurityGroupExpandRules(o.(*schema.Set)) + ns := resourceAwsSecurityGroupExpandRules(n.(*schema.Set)) - remove, err := expandIPPerms(group, os.Difference(ns).List()) + remove, err := expandIPPerms(group, resourceAwsSecurityGroupCollapseRules(ruleset, os.Difference(ns).List())) if err != nil { return err } - add, err := expandIPPerms(group, 
ns.Difference(os).List()) + add, err := expandIPPerms(group, resourceAwsSecurityGroupCollapseRules(ruleset, ns.Difference(os).List())) if err != nil { return err } @@ -840,6 +816,18 @@ func SGStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { } } +func waitForSgToExist(conn *ec2.EC2, id string, timeout time.Duration) (interface{}, error) { + log.Printf("[DEBUG] Waiting for Security Group (%s) to exist", id) + stateConf := &resource.StateChangeConf{ + Pending: []string{""}, + Target: []string{"exists"}, + Refresh: SGStateRefreshFunc(conn, id), + Timeout: timeout, + } + + return stateConf.WaitForState() +} + // matchRules receives the group id, type of rules, and the local / remote maps // of rules. We iterate through the local set of rules trying to find a matching // remote rule, which may be structured differently because of how AWS @@ -1163,6 +1151,175 @@ func matchRules(rType string, local []interface{}, remote []map[string]interface return saves } +// Duplicate ingress/egress block structure and fill out all +// the required fields +func resourceAwsSecurityGroupCopyRule(src map[string]interface{}, self bool, k string, v interface{}) map[string]interface{} { + var keys_to_copy = []string{"description", "from_port", "to_port", "protocol"} + + dst := make(map[string]interface{}) + for _, key := range keys_to_copy { + if val, ok := src[key]; ok { + dst[key] = val + } + } + if k != "" { + dst[k] = v + } + if _, ok := src["self"]; ok { + dst["self"] = self + } + return dst +} + +// Given a set of SG rules (ingress/egress blocks), this function +// will group the rules by from_port/to_port/protocol/description +// tuples. This is inverse operation of +// resourceAwsSecurityGroupExpandRules() +// +// For more detail, see comments for +// resourceAwsSecurityGroupExpandRules() +func resourceAwsSecurityGroupCollapseRules(ruleset string, rules []interface{}) []interface{} { + + var keys_to_collapse = []string{"cidr_blocks", "ipv6_cidr_blocks", "prefix_list_ids", "security_groups"} + + collapsed := make(map[string]map[string]interface{}) + + for _, rule := range rules { + r := rule.(map[string]interface{}) + + ruleHash := idCollapseHash(ruleset, r["protocol"].(string), int64(r["to_port"].(int)), int64(r["from_port"].(int)), r["description"].(string)) + + if _, ok := collapsed[ruleHash]; ok { + if v, ok := r["self"]; ok && v.(bool) { + collapsed[ruleHash]["self"] = r["self"] + } + } else { + collapsed[ruleHash] = r + continue + } + + for _, key := range keys_to_collapse { + if _, ok := r[key]; ok { + if _, ok := collapsed[ruleHash][key]; ok { + if key == "security_groups" { + collapsed[ruleHash][key] = collapsed[ruleHash][key].(*schema.Set).Union(r[key].(*schema.Set)) + } else { + collapsed[ruleHash][key] = append(collapsed[ruleHash][key].([]interface{}), r[key].([]interface{})...) + } + } else { + collapsed[ruleHash][key] = r[key] + } + } + } + } + + values := make([]interface{}, 0, len(collapsed)) + for _, val := range collapsed { + values = append(values, val) + } + return values +} + +// resourceAwsSecurityGroupExpandRules works in pair with +// resourceAwsSecurityGroupCollapseRules and is used as a +// workaround for the problem explained in +// https://github.com/terraform-providers/terraform-provider-aws/pull/4726 +// +// This function converts every ingress/egress block that +// contains multiple rules to multiple blocks with only one +// rule. 
Doing a Difference operation on such a normalized +// set helps to avoid unnecessary removal of unchanged +// rules during the Apply step. +// +// For example, in terraform syntax, the following block: +// +// ingress { +// from_port = 80 +// to_port = 80 +// protocol = "tcp" +// cidr_blocks = [ +// "192.168.0.1/32", +// "192.168.0.2/32", +// ] +// } +// +// will be converted to the two blocks below: +// +// ingress { +// from_port = 80 +// to_port = 80 +// protocol = "tcp" +// cidr_blocks = [ "192.168.0.1/32" ] +// } +// +// ingress { +// from_port = 80 +// to_port = 80 +// protocol = "tcp" +// cidr_blocks = [ "192.168.0.2/32" ] +// } +// +// Then the Difference operation is executed on the new set +// to find which rules got modified, and the resulting set +// is then passed to resourceAwsSecurityGroupCollapseRules +// to convert the "diff" back to a more compact form for +// execution. Such compact form helps reduce the number of +// API calls. +// +func resourceAwsSecurityGroupExpandRules(rules *schema.Set) *schema.Set { + var keys_to_expand = []string{"cidr_blocks", "ipv6_cidr_blocks", "prefix_list_ids", "security_groups"} + + normalized := schema.NewSet(resourceAwsSecurityGroupRuleHash, nil) + + for _, rawRule := range rules.List() { + rule := rawRule.(map[string]interface{}) + + if v, ok := rule["self"]; ok && v.(bool) { + new_rule := resourceAwsSecurityGroupCopyRule(rule, true, "", nil) + normalized.Add(new_rule) + } + for _, key := range keys_to_expand { + item, exists := rule[key] + if exists { + var list []interface{} + if key == "security_groups" { + list = item.(*schema.Set).List() + } else { + list = item.([]interface{}) + } + for _, v := range list { + var new_rule map[string]interface{} + if key == "security_groups" { + new_v := schema.NewSet(schema.HashString, nil) + new_v.Add(v) + new_rule = resourceAwsSecurityGroupCopyRule(rule, false, key, new_v) + } else { + new_v := make([]interface{}, 0) + new_v = append(new_v, v) + new_rule = resourceAwsSecurityGroupCopyRule(rule, false, key, new_v) + } + normalized.Add(new_rule) + } + } + } + } + + return normalized +} + +// Convert type-to_port-from_port-protocol-description tuple +// to a hash to use as a key in Set. 
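+//
+// For example, an ingress rule on tcp/443 with description "https" is keyed
+// by hashing the string "ingress-443-443-tcp-https-", producing something
+// like "rule-1234567890" (the numeric part comes from hashcode.String and
+// is shown here only for illustration).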
+func idCollapseHash(rType, protocol string, toPort, fromPort int64, description string) string { + var buf bytes.Buffer + buf.WriteString(fmt.Sprintf("%s-", rType)) + buf.WriteString(fmt.Sprintf("%d-", toPort)) + buf.WriteString(fmt.Sprintf("%d-", fromPort)) + buf.WriteString(fmt.Sprintf("%s-", strings.ToLower(protocol))) + buf.WriteString(fmt.Sprintf("%s-", description)) + + return fmt.Sprintf("rule-%d", hashcode.String(buf.String())) +} + // Creates a unique hash for the type, ports, and protocol, used as a key in // maps func idHash(rType, protocol string, toPort, fromPort int64, self bool) string { @@ -1231,24 +1388,22 @@ func protocolForValue(v string) string { // Similar to protocolIntegers() used by Network ACLs, but explicitly only // supports "tcp", "udp", "icmp", and "all" func sgProtocolIntegers() map[string]int { - var protocolIntegers = make(map[string]int) - protocolIntegers = map[string]int{ + return map[string]int{ "udp": 17, "tcp": 6, "icmp": 1, "all": -1, } - return protocolIntegers } // The AWS Lambda service creates ENIs behind the scenes and keeps these around for a while // which would prevent SGs attached to such ENIs from being destroyed -func deleteLingeringLambdaENIs(conn *ec2.EC2, d *schema.ResourceData) error { +func deleteLingeringLambdaENIs(conn *ec2.EC2, d *schema.ResourceData, filterName string) error { // Here we carefully find the offenders params := &ec2.DescribeNetworkInterfacesInput{ Filters: []*ec2.Filter{ { - Name: aws.String("group-id"), + Name: aws.String(filterName), Values: []*string{aws.String(d.Id())}, }, { @@ -1258,6 +1413,11 @@ func deleteLingeringLambdaENIs(conn *ec2.EC2, d *schema.ResourceData) error { }, } networkInterfaceResp, err := conn.DescribeNetworkInterfaces(params) + + if isAWSErr(err, "InvalidNetworkInterfaceID.NotFound", "") { + return nil + } + if err != nil { return err } @@ -1271,6 +1431,10 @@ func deleteLingeringLambdaENIs(conn *ec2.EC2, d *schema.ResourceData) error { } _, detachNetworkInterfaceErr := conn.DetachNetworkInterface(detachNetworkInterfaceParams) + if isAWSErr(detachNetworkInterfaceErr, "InvalidNetworkInterfaceID.NotFound", "") { + return nil + } + if detachNetworkInterfaceErr != nil { return detachNetworkInterfaceErr } @@ -1293,6 +1457,10 @@ func deleteLingeringLambdaENIs(conn *ec2.EC2, d *schema.ResourceData) error { } _, deleteNetworkInterfaceErr := conn.DeleteNetworkInterface(deleteNetworkInterfaceParams) + if isAWSErr(deleteNetworkInterfaceErr, "InvalidNetworkInterfaceID.NotFound", "") { + return nil + } + if deleteNetworkInterfaceErr != nil { return deleteNetworkInterfaceErr } @@ -1309,8 +1477,11 @@ func networkInterfaceAttachedRefreshFunc(conn *ec2.EC2, id string) resource.Stat } describeResp, err := conn.DescribeNetworkInterfaces(describe_network_interfaces_request) + if isAWSErr(err, "InvalidNetworkInterfaceID.NotFound", "") { + return 42, "false", nil + } + if err != nil { - log.Printf("[ERROR] Could not find network interface %s. 
%s", id, err) return nil, "", err } @@ -1320,3 +1491,27 @@ func networkInterfaceAttachedRefreshFunc(conn *ec2.EC2, id string) resource.Stat return eni, hasAttachment, nil } } + +func initSecurityGroupRule(ruleMap map[string]map[string]interface{}, perm *ec2.IpPermission, desc string) map[string]interface{} { + var fromPort, toPort int64 + if v := perm.FromPort; v != nil { + fromPort = *v + } + if v := perm.ToPort; v != nil { + toPort = *v + } + k := fmt.Sprintf("%s-%d-%d-%s", *perm.IpProtocol, fromPort, toPort, desc) + rule, ok := ruleMap[k] + if !ok { + rule = make(map[string]interface{}) + ruleMap[k] = rule + } + rule["protocol"] = *perm.IpProtocol + rule["from_port"] = fromPort + rule["to_port"] = toPort + if desc != "" { + rule["description"] = desc + } + + return rule +} diff --git a/aws/resource_aws_security_group_migrate_test.go b/aws/resource_aws_security_group_migrate_test.go index 3b14edb0e5e..54d6c4b078d 100644 --- a/aws/resource_aws_security_group_migrate_test.go +++ b/aws/resource_aws_security_group_migrate_test.go @@ -19,7 +19,7 @@ func TestAWSSecurityGroupMigrateState(t *testing.T) { "name": "test", }, Expected: map[string]string{ - "name": "test", + "name": "test", "revoke_rules_on_delete": "false", }, }, @@ -63,7 +63,7 @@ func TestAWSSecurityGroupMigrateState_empty(t *testing.T) { // should handle non-nil but empty is = &terraform.InstanceState{} - is, err = resourceAwsSecurityGroupMigrateState(0, is, meta) + _, err = resourceAwsSecurityGroupMigrateState(0, is, meta) if err != nil { t.Fatalf("err: %#v", err) diff --git a/aws/resource_aws_security_group_rule.go b/aws/resource_aws_security_group_rule.go index 5ab3044c0eb..025f364f615 100644 --- a/aws/resource_aws_security_group_rule.go +++ b/aws/resource_aws_security_group_rule.go @@ -5,13 +5,13 @@ import ( "fmt" "log" "sort" + "strconv" "strings" "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -24,6 +24,18 @@ func resourceAwsSecurityGroupRule() *schema.Resource { Read: resourceAwsSecurityGroupRuleRead, Update: resourceAwsSecurityGroupRuleUpdate, Delete: resourceAwsSecurityGroupRuleDelete, + Importer: &schema.ResourceImporter{ + State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + importParts, err := validateSecurityGroupRuleImportString(d.Id()) + if err != nil { + return nil, err + } + if err := populateSecurityGroupRuleFromImport(d, importParts); err != nil { + return nil, err + } + return []*schema.ResourceData{d}, nil + }, + }, SchemaVersion: 2, MigrateState: resourceAwsSecurityGroupRuleMigrateState, @@ -44,12 +56,28 @@ func resourceAwsSecurityGroupRule() *schema.Resource { Type: schema.TypeInt, Required: true, ForceNew: true, + // Support existing configurations that have non-zero from_port and to_port defined with all protocols + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + protocol := protocolForValue(d.Get("protocol").(string)) + if protocol == "-1" && old == "0" { + return true + } + return false + }, }, "to_port": { Type: schema.TypeInt, Required: true, ForceNew: true, + // Support existing configurations that have non-zero from_port and to_port defined with all protocols + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { 
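+				// old is the zero port kept in state for an all-protocol ("-1")
+				// rule, so a non-zero port configured alongside protocol = "all"
+				// is not reported as a diff.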
+ protocol := protocolForValue(d.Get("protocol").(string)) + if protocol == "-1" && old == "0" { + return true + } + return false + }, }, "protocol": { @@ -257,6 +285,7 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{}) default: rules = sg.IpPermissionsEgress } + log.Printf("[DEBUG] Rules %v", rules) p, err := expandIPPerm(d, sg) if err != nil { @@ -283,11 +312,17 @@ func resourceAwsSecurityGroupRuleRead(d *schema.ResourceData, meta interface{}) d.Set("type", ruleType) if err := setFromIPPerm(d, sg, p); err != nil { - return errwrap.Wrapf("Error setting IP Permission for Security Group Rule: {{err}}", err) + return fmt.Errorf("Error setting IP Permission for Security Group Rule: %s", err) } d.Set("description", descriptionFromIPPerm(d, rule)) + if strings.Contains(d.Id(), "_") { + // import so fix the id + id := ipPermissionIDHash(sg_id, ruleType, p) + d.SetId(id) + } + return nil } @@ -354,8 +389,6 @@ func resourceAwsSecurityGroupRuleDelete(d *schema.ResourceData, meta interface{} } } - d.SetId("") - return nil } @@ -413,21 +446,24 @@ func (b ByGroupPair) Less(i, j int) bool { func findRuleMatch(p *ec2.IpPermission, rules []*ec2.IpPermission, isVPC bool) *ec2.IpPermission { var rule *ec2.IpPermission for _, r := range rules { - if r.ToPort != nil && *p.ToPort != *r.ToPort { + if p.ToPort != nil && r.ToPort != nil && *p.ToPort != *r.ToPort { continue } - if r.FromPort != nil && *p.FromPort != *r.FromPort { + if p.FromPort != nil && r.FromPort != nil && *p.FromPort != *r.FromPort { continue } - if r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol { + if p.IpProtocol != nil && r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol { continue } remaining := len(p.IpRanges) for _, ip := range p.IpRanges { for _, rip := range r.IpRanges { + if ip.CidrIp == nil || rip.CidrIp == nil { + continue + } if *ip.CidrIp == *rip.CidrIp { remaining-- } @@ -441,6 +477,9 @@ func findRuleMatch(p *ec2.IpPermission, rules []*ec2.IpPermission, isVPC bool) * remaining = len(p.Ipv6Ranges) for _, ipv6 := range p.Ipv6Ranges { for _, ipv6ip := range r.Ipv6Ranges { + if ipv6.CidrIpv6 == nil || ipv6ip.CidrIpv6 == nil { + continue + } if *ipv6.CidrIpv6 == *ipv6ip.CidrIpv6 { remaining-- } @@ -454,6 +493,9 @@ func findRuleMatch(p *ec2.IpPermission, rules []*ec2.IpPermission, isVPC bool) * remaining = len(p.PrefixListIds) for _, pl := range p.PrefixListIds { for _, rpl := range r.PrefixListIds { + if pl.PrefixListId == nil || rpl.PrefixListId == nil { + continue + } if *pl.PrefixListId == *rpl.PrefixListId { remaining-- } @@ -468,10 +510,16 @@ func findRuleMatch(p *ec2.IpPermission, rules []*ec2.IpPermission, isVPC bool) * for _, ip := range p.UserIdGroupPairs { for _, rip := range r.UserIdGroupPairs { if isVPC { + if ip.GroupId == nil || rip.GroupId == nil { + continue + } if *ip.GroupId == *rip.GroupId { remaining-- } } else { + if ip.GroupName == nil || rip.GroupName == nil { + continue + } if *ip.GroupName == *rip.GroupName { remaining-- } @@ -560,11 +608,15 @@ func ipPermissionIDHash(sg_id, ruleType string, ip *ec2.IpPermission) string { func expandIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup) (*ec2.IpPermission, error) { var perm ec2.IpPermission - perm.FromPort = aws.Int64(int64(d.Get("from_port").(int))) - perm.ToPort = aws.Int64(int64(d.Get("to_port").(int))) protocol := protocolForValue(d.Get("protocol").(string)) perm.IpProtocol = aws.String(protocol) + // InvalidParameterValue: When protocol is ALL, you cannot specify from-port. 
+ if protocol != "-1" { + perm.FromPort = aws.Int64(int64(d.Get("from_port").(int))) + perm.ToPort = aws.Int64(int64(d.Get("to_port").(int))) + } + // build a group map that behaves like a set groups := make(map[string]bool) if raw, ok := d.GetOk("source_security_group_id"); ok { @@ -585,7 +637,7 @@ func expandIPPerm(d *schema.ResourceData, sg *ec2.SecurityGroup) (*ec2.IpPermiss perm.UserIdGroupPairs = make([]*ec2.UserIdGroupPair, len(groups)) // build string list of group name/ids var gl []string - for k, _ := range groups { + for k := range groups { gl = append(gl, k) } @@ -848,3 +900,104 @@ func resourceSecurityGroupRuleDescriptionUpdate(conn *ec2.EC2, d *schema.Resourc return nil } + +// validateSecurityGroupRuleImportString does minimal validation of import string without going to AWS +func validateSecurityGroupRuleImportString(importStr string) ([]string, error) { + // example: sg-09a093729ef9382a6_ingress_tcp_8000_8000_10.0.3.0/24 + // example: sg-09a093729ef9382a6_ingress_92_0_65536_10.0.3.0/24_10.0.4.0/24 + // example: sg-09a093729ef9382a6_egress_tcp_8000_8000_10.0.3.0/24 + // example: sg-09a093729ef9382a6_egress_tcp_8000_8000_pl-34800000 + // example: sg-09a093729ef9382a6_ingress_all_0_65536_sg-08123412342323 + // example: sg-09a093729ef9382a6_ingress_tcp_100_121_10.1.0.0/16_2001:db8::/48_10.2.0.0/16_2002:db8::/48 + + log.Printf("[DEBUG] Validating import string %s", importStr) + + importParts := strings.Split(strings.ToLower(importStr), "_") + errStr := "unexpected format of import string (%q), expected SECURITYGROUPID_TYPE_PROTOCOL_FROMPORT_TOPORT_SOURCE[_SOURCE]*: %s" + if len(importParts) < 6 { + return nil, fmt.Errorf(errStr, importStr, "too few parts") + } + + sgID := importParts[0] + ruleType := importParts[1] + protocol := importParts[2] + fromPort := importParts[3] + toPort := importParts[4] + sources := importParts[5:] + + if !strings.HasPrefix(sgID, "sg-") { + return nil, fmt.Errorf(errStr, importStr, "invalid security group ID") + } + + if ruleType != "ingress" && ruleType != "egress" { + return nil, fmt.Errorf(errStr, importStr, "expecting 'ingress' or 'egress'") + } + + if _, ok := sgProtocolIntegers()[protocol]; !ok { + if _, err := strconv.Atoi(protocol); err != nil { + return nil, fmt.Errorf(errStr, importStr, "protocol must be tcp/udp/icmp/all or a number") + } + } + + if p1, err := strconv.Atoi(fromPort); err != nil { + return nil, fmt.Errorf(errStr, importStr, "invalid port") + } else if p2, err := strconv.Atoi(toPort); err != nil || p2 < p1 { + return nil, fmt.Errorf(errStr, importStr, "invalid port") + } + + for _, source := range sources { + // will be properly validated later + if source != "self" && !strings.Contains(source, "sg-") && !strings.Contains(source, "pl-") && !strings.Contains(source, ":") && !strings.Contains(source, ".") { + return nil, fmt.Errorf(errStr, importStr, "source must be cidr, ipv6cidr, prefix list, 'self', or a sg ID") + } + } + + log.Printf("[DEBUG] Validated import string %s", importStr) + return importParts, nil +} + +func populateSecurityGroupRuleFromImport(d *schema.ResourceData, importParts []string) error { + log.Printf("[DEBUG] Populating resource data on import: %v", importParts) + + sgID := importParts[0] + ruleType := importParts[1] + protocol := importParts[2] + fromPort, _ := strconv.Atoi(importParts[3]) + toPort, _ := strconv.Atoi(importParts[4]) + sources := importParts[5:] + + d.Set("security_group_id", sgID) + + if ruleType == "ingress" { + d.Set("type", ruleType) + } else { + d.Set("type", "egress") + } + + 
d.Set("protocol", protocolForValue(protocol)) + d.Set("from_port", fromPort) + d.Set("to_port", toPort) + + d.Set("self", false) + var cidrs []string + var prefixList []string + var ipv6cidrs []string + for _, source := range sources { + if source == "self" { + d.Set("self", true) + } else if strings.Contains(source, "sg-") { + d.Set("source_security_group_id", source) + } else if strings.Contains(source, "pl-") { + prefixList = append(prefixList, source) + } else if strings.Contains(source, ":") { + ipv6cidrs = append(ipv6cidrs, source) + } else { + cidrs = append(cidrs, source) + } + } + d.Set("ipv6_cidr_blocks", ipv6cidrs) + d.Set("cidr_blocks", cidrs) + d.Set("prefix_list_ids", prefixList) + + return nil +} diff --git a/aws/resource_aws_security_group_rule_migrate.go b/aws/resource_aws_security_group_rule_migrate.go index 12788054e31..1b2cb21f8d6 100644 --- a/aws/resource_aws_security_group_rule_migrate.go +++ b/aws/resource_aws_security_group_rule_migrate.go @@ -37,7 +37,7 @@ func migrateSGRuleStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceS perm, err := migrateExpandIPPerm(is.Attributes) if err != nil { - return nil, fmt.Errorf("[WARN] Error making new IP Permission in Security Group migration") + return nil, fmt.Errorf("Error making new IP Permission in Security Group migration") } log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) @@ -77,7 +77,7 @@ func migrateExpandIPPerm(attrs map[string]string) (*ec2.IpPermission, error) { perm.UserIdGroupPairs = make([]*ec2.UserIdGroupPair, len(groups)) // build string list of group name/ids var gl []string - for k, _ := range groups { + for k := range groups { gl = append(gl, k) } diff --git a/aws/resource_aws_security_group_rule_test.go b/aws/resource_aws_security_group_rule_test.go index 8c61f225d68..5779dc0013c 100644 --- a/aws/resource_aws_security_group_rule_test.go +++ b/aws/resource_aws_security_group_rule_test.go @@ -5,6 +5,8 @@ import ( "fmt" "log" "regexp" + "strconv" + "strings" "testing" "github.com/aws/aws-sdk-go/aws" @@ -127,7 +129,7 @@ func TestAccAWSSecurityGroupRule_Ingress_VPC(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -142,6 +144,12 @@ func TestAccAWSSecurityGroupRule_Ingress_VPC(t *testing.T) { testRuleCount, ), }, + { + ResourceName: "aws_security_group_rule.ingress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_1"), + ImportStateVerify: true, + }, }, }) } @@ -164,7 +172,7 @@ func TestAccAWSSecurityGroupRule_Ingress_Protocol(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -179,6 +187,12 @@ func TestAccAWSSecurityGroupRule_Ingress_Protocol(t *testing.T) { testRuleCount, ), }, + { + ResourceName: "aws_security_group_rule.ingress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_1"), + ImportStateVerify: true, + }, }, }) } @@ -207,7 +221,7 @@ func TestAccAWSSecurityGroupRule_Ingress_Ipv6(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: 
testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -219,6 +233,12 @@ func TestAccAWSSecurityGroupRule_Ingress_Ipv6(t *testing.T) { testRuleCount, ), }, + { + ResourceName: "aws_security_group_rule.ingress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_1"), + ImportStateVerify: true, + }, }, }) } @@ -242,7 +262,7 @@ func TestAccAWSSecurityGroupRule_Ingress_Classic(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -257,6 +277,12 @@ func TestAccAWSSecurityGroupRule_Ingress_Classic(t *testing.T) { testRuleCount, ), }, + { + ResourceName: "aws_security_group_rule.ingress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_1"), + ImportStateVerify: true, + }, }, }) } @@ -285,7 +311,7 @@ func TestAccAWSSecurityGroupRule_MultiIngress(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -297,6 +323,12 @@ func TestAccAWSSecurityGroupRule_MultiIngress(t *testing.T) { testMultiRuleCount, ), }, + { + ResourceName: "aws_security_group_rule.ingress_2", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_2"), + ImportStateVerify: true, + }, }, }) } @@ -305,7 +337,7 @@ func TestAccAWSSecurityGroupRule_Egress(t *testing.T) { var group ec2.SecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -317,6 +349,12 @@ func TestAccAWSSecurityGroupRule_Egress(t *testing.T) { testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.egress_1", &group, nil, "egress"), ), }, + { + ResourceName: "aws_security_group_rule.egress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.egress_1"), + ImportStateVerify: true, + }, }, }) } @@ -324,7 +362,7 @@ func TestAccAWSSecurityGroupRule_Egress(t *testing.T) { func TestAccAWSSecurityGroupRule_SelfReference(t *testing.T) { var group ec2.SecurityGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -335,13 +373,19 @@ func TestAccAWSSecurityGroupRule_SelfReference(t *testing.T) { testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), ), }, + { + ResourceName: "aws_security_group_rule.self", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.self"), + ImportStateVerify: true, + }, }, }) } func TestAccAWSSecurityGroupRule_ExpectInvalidTypeError(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -356,7 +400,7 @@ func TestAccAWSSecurityGroupRule_ExpectInvalidTypeError(t 
*testing.T) { func TestAccAWSSecurityGroupRule_ExpectInvalidCIDR(t *testing.T) { rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -398,7 +442,7 @@ func TestAccAWSSecurityGroupRule_PartialMatching_basic(t *testing.T) { }, } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -412,6 +456,24 @@ func TestAccAWSSecurityGroupRule_PartialMatching_basic(t *testing.T) { testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.nat_ingress", &group, &o, "ingress"), ), }, + { + ResourceName: "aws_security_group_rule.ingress", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress"), + ImportStateVerify: true, + }, + { + ResourceName: "aws_security_group_rule.other", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.other"), + ImportStateVerify: true, + }, + { + ResourceName: "aws_security_group_rule.nat_ingress", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.nat_ingress"), + ImportStateVerify: true, + }, }, }) } @@ -442,7 +504,7 @@ func TestAccAWSSecurityGroupRule_PartialMatching_Source(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -456,6 +518,12 @@ func TestAccAWSSecurityGroupRule_PartialMatching_Source(t *testing.T) { testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.source_ingress", &group, &p, "ingress"), ), }, + { + ResourceName: "aws_security_group_rule.source_ingress", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.source_ingress"), + ImportStateVerify: true, + }, }, }) } @@ -463,7 +531,7 @@ func TestAccAWSSecurityGroupRule_PartialMatching_Source(t *testing.T) { func TestAccAWSSecurityGroupRule_Issue5310(t *testing.T) { var group ec2.SecurityGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -474,6 +542,12 @@ func TestAccAWSSecurityGroupRule_Issue5310(t *testing.T) { testAccCheckAWSSecurityGroupRuleExists("aws_security_group.issue_5310", &group), ), }, + { + ResourceName: "aws_security_group_rule.issue_5310", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.issue_5310"), + ImportStateVerify: true, + }, }, }) } @@ -481,7 +555,7 @@ func TestAccAWSSecurityGroupRule_Issue5310(t *testing.T) { func TestAccAWSSecurityGroupRule_Race(t *testing.T) { var group ec2.SecurityGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -500,7 +574,7 @@ func TestAccAWSSecurityGroupRule_SelfSource(t *testing.T) { var group ec2.SecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -511,6 +585,12 @@ func TestAccAWSSecurityGroupRule_SelfSource(t *testing.T) { testAccCheckAWSSecurityGroupRuleExists("aws_security_group.web", &group), ), }, + { + ResourceName: "aws_security_group_rule.allow_self", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.allow_self"), + ImportStateVerify: true, + }, }, }) } @@ -554,7 +634,7 @@ func TestAccAWSSecurityGroupRule_PrefixListEgress(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -570,6 +650,12 @@ func TestAccAWSSecurityGroupRule_PrefixListEgress(t *testing.T) { testAccCheckAWSSecurityGroupRuleAttributes("aws_security_group_rule.egress_1", &group, &p, "egress"), ), }, + { + ResourceName: "aws_security_group_rule.egress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.egress_1"), + ImportStateVerify: true, + }, }, }) } @@ -578,7 +664,7 @@ func TestAccAWSSecurityGroupRule_IngressDescription(t *testing.T) { var group ec2.SecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -591,6 +677,12 @@ func TestAccAWSSecurityGroupRule_IngressDescription(t *testing.T) { resource.TestCheckResourceAttr("aws_security_group_rule.ingress_1", "description", "TF acceptance test ingress rule"), ), }, + { + ResourceName: "aws_security_group_rule.ingress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_1"), + ImportStateVerify: true, + }, }, }) } @@ -599,7 +691,7 @@ func TestAccAWSSecurityGroupRule_EgressDescription(t *testing.T) { var group ec2.SecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -612,6 +704,12 @@ func TestAccAWSSecurityGroupRule_EgressDescription(t *testing.T) { resource.TestCheckResourceAttr("aws_security_group_rule.egress_1", "description", "TF acceptance test egress rule"), ), }, + { + ResourceName: "aws_security_group_rule.egress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.egress_1"), + ImportStateVerify: true, + }, }, }) } @@ -620,7 +718,7 @@ func TestAccAWSSecurityGroupRule_IngressDescription_updates(t *testing.T) { var group ec2.SecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -642,6 +740,12 @@ func TestAccAWSSecurityGroupRule_IngressDescription_updates(t *testing.T) { resource.TestCheckResourceAttr("aws_security_group_rule.ingress_1", "description", "TF acceptance test ingress rule updated"), ), }, + { + ResourceName: "aws_security_group_rule.ingress_1", + ImportState: true, + ImportStateIdFunc: 
testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_1"), + ImportStateVerify: true, + }, }, }) } @@ -650,7 +754,7 @@ func TestAccAWSSecurityGroupRule_EgressDescription_updates(t *testing.T) { var group ec2.SecurityGroup rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -672,6 +776,173 @@ func TestAccAWSSecurityGroupRule_EgressDescription_updates(t *testing.T) { resource.TestCheckResourceAttr("aws_security_group_rule.egress_1", "description", "TF acceptance test egress rule updated"), ), }, + { + ResourceName: "aws_security_group_rule.egress_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.egress_1"), + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSSecurityGroupRule_Description_AllPorts(t *testing.T) { + var group ec2.SecurityGroup + rName := acctest.RandomWithPrefix("tf-acc-test") + securityGroupResourceName := "aws_security_group.test" + resourceName := "aws_security_group_rule.test" + + rule1 := ec2.IpPermission{ + IpProtocol: aws.String("-1"), + IpRanges: []*ec2.IpRange{ + {CidrIp: aws.String("0.0.0.0/0"), Description: aws.String("description1")}, + }, + } + + rule2 := ec2.IpPermission{ + IpProtocol: aws.String("-1"), + IpRanges: []*ec2.IpRange{ + {CidrIp: aws.String("0.0.0.0/0"), Description: aws.String("description2")}, + }, + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupRuleConfigDescriptionAllPorts(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists(securityGroupResourceName, &group), + testAccCheckAWSSecurityGroupRuleAttributes(resourceName, &group, &rule1, "ingress"), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + resource.TestCheckResourceAttr(resourceName, "from_port", "0"), + resource.TestCheckResourceAttr(resourceName, "protocol", "-1"), + resource.TestCheckResourceAttr(resourceName, "to_port", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc(resourceName), + ImportStateVerify: true, + }, + { + Config: testAccAWSSecurityGroupRuleConfigDescriptionAllPorts(rName, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists(securityGroupResourceName, &group), + testAccCheckAWSSecurityGroupRuleAttributes(resourceName, &group, &rule2, "ingress"), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + resource.TestCheckResourceAttr(resourceName, "from_port", "0"), + resource.TestCheckResourceAttr(resourceName, "protocol", "-1"), + resource.TestCheckResourceAttr(resourceName, "to_port", "0"), + ), + }, + }, + }) +} + +func TestAccAWSSecurityGroupRule_Description_AllPorts_NonZeroPorts(t *testing.T) { + var group ec2.SecurityGroup + rName := acctest.RandomWithPrefix("tf-acc-test") + securityGroupResourceName := "aws_security_group.test" + resourceName := "aws_security_group_rule.test" + + rule1 := ec2.IpPermission{ + IpProtocol: aws.String("-1"), + IpRanges: []*ec2.IpRange{ + {CidrIp: aws.String("0.0.0.0/0"), Description: aws.String("description1")}, + }, + } + + 
rule2 := ec2.IpPermission{ + IpProtocol: aws.String("-1"), + IpRanges: []*ec2.IpRange{ + {CidrIp: aws.String("0.0.0.0/0"), Description: aws.String("description2")}, + }, + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupRuleConfigDescriptionAllPortsNonZeroPorts(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists(securityGroupResourceName, &group), + testAccCheckAWSSecurityGroupRuleAttributes(resourceName, &group, &rule1, "ingress"), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + resource.TestCheckResourceAttr(resourceName, "from_port", "-1"), + resource.TestCheckResourceAttr(resourceName, "protocol", "-1"), + resource.TestCheckResourceAttr(resourceName, "to_port", "-1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc(resourceName), + ImportStateVerify: true, + }, + { + Config: testAccAWSSecurityGroupRuleConfigDescriptionAllPortsNonZeroPorts(rName, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists(securityGroupResourceName, &group), + testAccCheckAWSSecurityGroupRuleAttributes(resourceName, &group, &rule2, "ingress"), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + resource.TestCheckResourceAttr(resourceName, "from_port", "0"), + resource.TestCheckResourceAttr(resourceName, "protocol", "-1"), + resource.TestCheckResourceAttr(resourceName, "to_port", "0"), + ), + }, + }, + }) +} + +// Reference: https://github.com/terraform-providers/terraform-provider-aws/issues/6416 +func TestAccAWSSecurityGroupRule_MultipleRuleSearching_AllProtocolCrash(t *testing.T) { + var group ec2.SecurityGroup + rName := acctest.RandomWithPrefix("tf-acc-test") + securityGroupResourceName := "aws_security_group.test" + resourceName1 := "aws_security_group_rule.test1" + resourceName2 := "aws_security_group_rule.test2" + + rule1 := ec2.IpPermission{ + IpProtocol: aws.String("-1"), + IpRanges: []*ec2.IpRange{ + {CidrIp: aws.String("10.0.0.0/8")}, + }, + } + + rule2 := ec2.IpPermission{ + FromPort: aws.Int64(443), + ToPort: aws.Int64(443), + IpProtocol: aws.String("tcp"), + IpRanges: []*ec2.IpRange{ + {CidrIp: aws.String("172.168.0.0/16")}, + }, + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupRuleConfigMultipleRuleSearchingAllProtocolCrash(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupRuleExists(securityGroupResourceName, &group), + testAccCheckAWSSecurityGroupRuleAttributes(resourceName1, &group, &rule1, "ingress"), + testAccCheckAWSSecurityGroupRuleAttributes(resourceName2, &group, &rule2, "ingress"), + resource.TestCheckResourceAttr(resourceName1, "from_port", "0"), + resource.TestCheckResourceAttr(resourceName1, "protocol", "-1"), + resource.TestCheckResourceAttr(resourceName1, "to_port", "65535"), + resource.TestCheckResourceAttr(resourceName2, "from_port", "443"), + resource.TestCheckResourceAttr(resourceName2, "protocol", "tcp"), + resource.TestCheckResourceAttr(resourceName2, "to_port", "443"), + ), + }, }, }) } @@ -760,7 +1031,7 @@ 
func TestAccAWSSecurityGroupRule_MultiDescription(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupRuleDestroy, @@ -783,6 +1054,24 @@ func TestAccAWSSecurityGroupRule_MultiDescription(t *testing.T) { resource.TestCheckResourceAttr("aws_security_group_rule.ingress_rule_3", "description", "NAT SG Description"), ), }, + { + ResourceName: "aws_security_group_rule.ingress_rule_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_rule_1"), + ImportStateVerify: true, + }, + { + ResourceName: "aws_security_group_rule.ingress_rule_2", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_rule_2"), + ImportStateVerify: true, + }, + { + ResourceName: "aws_security_group_rule.ingress_rule_3", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.ingress_rule_3"), + ImportStateVerify: true, + }, { Config: testAccAWSSecurityGroupRuleMultiDescription(rInt, "egress"), Check: resource.ComposeTestCheckFunc( @@ -805,6 +1094,30 @@ func TestAccAWSSecurityGroupRule_MultiDescription(t *testing.T) { resource.TestCheckResourceAttr("aws_security_group_rule.egress_rule_4", "description", "Prefix List Description"), ), }, + { + ResourceName: "aws_security_group_rule.egress_rule_1", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.egress_rule_1"), + ImportStateVerify: true, + }, + { + ResourceName: "aws_security_group_rule.egress_rule_2", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.egress_rule_2"), + ImportStateVerify: true, + }, + { + ResourceName: "aws_security_group_rule.egress_rule_3", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.egress_rule_3"), + ImportStateVerify: true, + }, + { + ResourceName: "aws_security_group_rule.egress_rule_4", + ImportState: true, + ImportStateIdFunc: testAccAWSSecurityGroupRuleImportStateIdFunc("aws_security_group_rule.egress_rule_4"), + ImportStateVerify: true, + }, }, }) } @@ -905,21 +1218,24 @@ func testAccCheckAWSSecurityGroupRuleAttributes(n string, group *ec2.SecurityGro } for _, r := range rules { - if r.ToPort != nil && *p.ToPort != *r.ToPort { + if p.ToPort != nil && r.ToPort != nil && *p.ToPort != *r.ToPort { continue } - if r.FromPort != nil && *p.FromPort != *r.FromPort { + if p.FromPort != nil && r.FromPort != nil && *p.FromPort != *r.FromPort { continue } - if r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol { + if p.IpProtocol != nil && r.IpProtocol != nil && *p.IpProtocol != *r.IpProtocol { continue } remaining := len(p.IpRanges) for _, ip := range p.IpRanges { for _, rip := range r.IpRanges { + if ip.CidrIp == nil || rip.CidrIp == nil { + continue + } if *ip.CidrIp == *rip.CidrIp { remaining-- } @@ -933,6 +1249,9 @@ func testAccCheckAWSSecurityGroupRuleAttributes(n string, group *ec2.SecurityGro remaining = len(p.Ipv6Ranges) for _, ip := range p.Ipv6Ranges { for _, rip := range r.Ipv6Ranges { + if ip.CidrIpv6 == nil || rip.CidrIpv6 == nil { + continue + } if *ip.CidrIpv6 == *rip.CidrIpv6 { remaining-- } @@ -946,6 +1265,9 @@ func testAccCheckAWSSecurityGroupRuleAttributes(n string, group *ec2.SecurityGro 
remaining = len(p.UserIdGroupPairs) for _, ip := range p.UserIdGroupPairs { for _, rip := range r.UserIdGroupPairs { + if ip.GroupId == nil || rip.GroupId == nil { + continue + } if *ip.GroupId == *rip.GroupId { remaining-- } @@ -959,6 +1281,9 @@ func testAccCheckAWSSecurityGroupRuleAttributes(n string, group *ec2.SecurityGro remaining = len(p.PrefixListIds) for _, pip := range p.PrefixListIds { for _, rpip := range r.PrefixListIds { + if pip.PrefixListId == nil || rpip.PrefixListId == nil { + continue + } if *pip.PrefixListId == *rpip.PrefixListId { remaining-- } @@ -981,26 +1306,91 @@ func testAccCheckAWSSecurityGroupRuleAttributes(n string, group *ec2.SecurityGro } } +func testAccAWSSecurityGroupRuleImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + sgID := rs.Primary.Attributes["security_group_id"] + ruleType := rs.Primary.Attributes["type"] + protocol := rs.Primary.Attributes["protocol"] + fromPort := rs.Primary.Attributes["from_port"] + toPort := rs.Primary.Attributes["to_port"] + + cidrs, err := testAccAWSSecurityGroupRuleImportGetAttrs(rs.Primary.Attributes, "cidr_blocks") + if err != nil { + return "", err + } + + ipv6CIDRs, err := testAccAWSSecurityGroupRuleImportGetAttrs(rs.Primary.Attributes, "ipv6_cidr_blocks") + if err != nil { + return "", err + } + + prefixes, err := testAccAWSSecurityGroupRuleImportGetAttrs(rs.Primary.Attributes, "prefix_list_ids") + if err != nil { + return "", err + } + + var parts []string + parts = append(parts, sgID) + parts = append(parts, ruleType) + parts = append(parts, protocol) + parts = append(parts, fromPort) + parts = append(parts, toPort) + parts = append(parts, *cidrs...) + parts = append(parts, *ipv6CIDRs...) + parts = append(parts, *prefixes...) 
+ + if sgSource, ok := rs.Primary.Attributes["source_security_group_id"]; ok { + parts = append(parts, sgSource) + } + + if rs.Primary.Attributes["self"] == "true" { + parts = append(parts, "self") + } + + return strings.Join(parts, "_"), nil + } +} + +func testAccAWSSecurityGroupRuleImportGetAttrs(attrs map[string]string, key string) (*[]string, error) { + var values []string + if countStr, ok := attrs[fmt.Sprintf("%s.#", key)]; ok && countStr != "0" { + count, err := strconv.Atoi(countStr) + if err != nil { + return nil, err + } + for i := 0; i < count; i++ { + values = append(values, attrs[fmt.Sprintf("%s.%d", key, i)]) + } + } + return &values, nil +} + func testAccAWSSecurityGroupRuleIngressConfig(rInt int) string { return fmt.Sprintf(` - resource "aws_security_group" "web" { - name = "terraform_test_%d" - description = "Used in the terraform acceptance tests" +resource "aws_security_group" "web" { + name = "terraform_test_%d" + description = "Used in the terraform acceptance tests" - tags { - Name = "tf-acc-test" - } - } + tags { + Name = "tf-acc-test" + } +} - resource "aws_security_group_rule" "ingress_1" { - type = "ingress" - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] +resource "aws_security_group_rule" "ingress_1" { + type = "ingress" + protocol = "tcp" + from_port = 80 + to_port = 8000 + cidr_blocks = ["10.0.0.0/8"] - security_group_id = "${aws_security_group.web.id}" - }`, rInt) + security_group_id = "${aws_security_group.web.id}" +} + `, rInt) } const testAccAWSSecurityGroupRuleIngress_ipv6Config = ` @@ -1061,10 +1451,6 @@ resource "aws_security_group_rule" "ingress_1" { ` const testAccAWSSecurityGroupRuleIssue5310 = ` -provider "aws" { - region = "us-east-1" -} - resource "aws_security_group" "issue_5310" { name = "terraform-test-issue_5310" description = "SG for test of issue 5310" @@ -1082,50 +1468,48 @@ resource "aws_security_group_rule" "issue_5310" { func testAccAWSSecurityGroupRuleIngressClassicConfig(rInt int) string { return fmt.Sprintf(` - provider "aws" { - region = "us-east-1" - } - - resource "aws_security_group" "web" { - name = "terraform_test_%d" - description = "Used in the terraform acceptance tests" +resource "aws_security_group" "web" { + name = "terraform_test_%d" + description = "Used in the terraform acceptance tests" - tags { - Name = "tf-acc-test" - } - } + tags { + Name = "tf-acc-test" + } +} - resource "aws_security_group_rule" "ingress_1" { - type = "ingress" - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] +resource "aws_security_group_rule" "ingress_1" { + type = "ingress" + protocol = "tcp" + from_port = 80 + to_port = 8000 + cidr_blocks = ["10.0.0.0/8"] - security_group_id = "${aws_security_group.web.id}" - }`, rInt) + security_group_id = "${aws_security_group.web.id}" +} + `, rInt) } func testAccAWSSecurityGroupRuleEgressConfig(rInt int) string { return fmt.Sprintf(` - resource "aws_security_group" "web" { - name = "terraform_test_%d" - description = "Used in the terraform acceptance tests" +resource "aws_security_group" "web" { + name = "terraform_test_%d" + description = "Used in the terraform acceptance tests" - tags { - Name = "tf-acc-test" - } - } + tags { + Name = "tf-acc-test" + } +} - resource "aws_security_group_rule" "egress_1" { - type = "egress" - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] +resource "aws_security_group_rule" "egress_1" { + type = "egress" + protocol = "tcp" + from_port = 80 + to_port = 8000 + cidr_blocks 
= ["10.0.0.0/8"] - security_group_id = "${aws_security_group.web.id}" - }`, rInt) + security_group_id = "${aws_security_group.web.id}" +} + `, rInt) } const testAccAWSSecurityGroupRuleConfigMultiIngress = ` @@ -1164,75 +1548,75 @@ resource "aws_security_group_rule" "ingress_2" { func testAccAWSSecurityGroupRuleMultiDescription(rInt int, rType string) string { var b bytes.Buffer b.WriteString(fmt.Sprintf(` - resource "aws_vpc" "tf_sgrule_description_test" { - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-security-group-rule-multi-desc" - } - } +resource "aws_vpc" "tf_sgrule_description_test" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-security-group-rule-multi-desc" + } +} - resource "aws_vpc_endpoint" "s3-us-west-2" { - vpc_id = "${aws_vpc.tf_sgrule_description_test.id}" - service_name = "com.amazonaws.us-west-2.s3" - } +resource "aws_vpc_endpoint" "s3-us-west-2" { + vpc_id = "${aws_vpc.tf_sgrule_description_test.id}" + service_name = "com.amazonaws.us-west-2.s3" + } - resource "aws_security_group" "worker" { - name = "terraform_test_%[1]d" - vpc_id = "${aws_vpc.tf_sgrule_description_test.id}" - description = "Used in the terraform acceptance tests" - tags { Name = "tf-sg-rule-description" } - } +resource "aws_security_group" "worker" { + name = "terraform_test_%[1]d" + vpc_id = "${aws_vpc.tf_sgrule_description_test.id}" + description = "Used in the terraform acceptance tests" + tags { Name = "tf-sg-rule-description" } +} - resource "aws_security_group" "nat" { - name = "terraform_test_%[1]d_nat" - vpc_id = "${aws_vpc.tf_sgrule_description_test.id}" - description = "Used in the terraform acceptance tests" - tags { Name = "tf-sg-rule-description" } - } +resource "aws_security_group" "nat" { + name = "terraform_test_%[1]d_nat" + vpc_id = "${aws_vpc.tf_sgrule_description_test.id}" + description = "Used in the terraform acceptance tests" + tags { Name = "tf-sg-rule-description" } +} - resource "aws_security_group_rule" "%[2]s_rule_1" { - security_group_id = "${aws_security_group.worker.id}" - description = "CIDR Description" - type = "%[2]s" - protocol = "tcp" - from_port = 22 - to_port = 22 - cidr_blocks = ["0.0.0.0/0"] - } +resource "aws_security_group_rule" "%[2]s_rule_1" { + security_group_id = "${aws_security_group.worker.id}" + description = "CIDR Description" + type = "%[2]s" + protocol = "tcp" + from_port = 22 + to_port = 22 + cidr_blocks = ["0.0.0.0/0"] +} - resource "aws_security_group_rule" "%[2]s_rule_2" { - security_group_id = "${aws_security_group.worker.id}" - description = "IPv6 CIDR Description" - type = "%[2]s" - protocol = "tcp" - from_port = 22 - to_port = 22 - ipv6_cidr_blocks = ["::/0"] - } +resource "aws_security_group_rule" "%[2]s_rule_2" { + security_group_id = "${aws_security_group.worker.id}" + description = "IPv6 CIDR Description" + type = "%[2]s" + protocol = "tcp" + from_port = 22 + to_port = 22 + ipv6_cidr_blocks = ["::/0"] +} - resource "aws_security_group_rule" "%[2]s_rule_3" { - security_group_id = "${aws_security_group.worker.id}" - description = "NAT SG Description" - type = "%[2]s" - protocol = "tcp" - from_port = 22 - to_port = 22 - source_security_group_id = "${aws_security_group.nat.id}" - } - `, rInt, rType)) +resource "aws_security_group_rule" "%[2]s_rule_3" { + security_group_id = "${aws_security_group.worker.id}" + description = "NAT SG Description" + type = "%[2]s" + protocol = "tcp" + from_port = 22 + to_port = 22 + source_security_group_id = "${aws_security_group.nat.id}" +} + `, rInt, rType)) if rType == 
"egress" { b.WriteString(fmt.Sprintf(` - resource "aws_security_group_rule" "egress_rule_4" { - security_group_id = "${aws_security_group.worker.id}" - description = "Prefix List Description" - type = "egress" - protocol = "tcp" - from_port = 22 - to_port = 22 - prefix_list_ids = ["${aws_vpc_endpoint.s3-us-west-2.prefix_list_id}"] - } - `)) +resource "aws_security_group_rule" "egress_rule_4" { + security_group_id = "${aws_security_group.worker.id}" + description = "Prefix List Description" + type = "egress" + protocol = "tcp" + from_port = 22 + to_port = 22 + prefix_list_ids = ["${aws_vpc_endpoint.s3-us-west-2.prefix_list_id}"] +} + `)) } return b.String() @@ -1240,10 +1624,6 @@ func testAccAWSSecurityGroupRuleMultiDescription(rInt int, rType string) string // check for GH-1985 regression const testAccAWSSecurityGroupRuleConfigSelfReference = ` -provider "aws" { - region = "us-west-2" -} - resource "aws_vpc" "main" { cidr_block = "10.0.0.0/16" tags { @@ -1271,105 +1651,107 @@ resource "aws_security_group_rule" "self" { func testAccAWSSecurityGroupRulePartialMatchingConfig(rInt int) string { return fmt.Sprintf(` - resource "aws_vpc" "default" { - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-security-group-rule-partial-match" - } - } +resource "aws_vpc" "default" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-security-group-rule-partial-match" + } +} - resource "aws_security_group" "web" { - name = "tf-other-%d" - vpc_id = "${aws_vpc.default.id}" - tags { - Name = "tf-other-sg" - } - } +resource "aws_security_group" "web" { + name = "tf-other-%d" + vpc_id = "${aws_vpc.default.id}" + tags { + Name = "tf-other-sg" + } +} - resource "aws_security_group" "nat" { - name = "tf-nat-%d" - vpc_id = "${aws_vpc.default.id}" - tags { - Name = "tf-nat-sg" - } - } +resource "aws_security_group" "nat" { + name = "tf-nat-%d" + vpc_id = "${aws_vpc.default.id}" + tags { + Name = "tf-nat-sg" + } +} - resource "aws_security_group_rule" "ingress" { - type = "ingress" - from_port = 80 - to_port = 80 - protocol = "tcp" - cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] +resource "aws_security_group_rule" "ingress" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] - security_group_id = "${aws_security_group.web.id}" - } + security_group_id = "${aws_security_group.web.id}" +} - resource "aws_security_group_rule" "other" { - type = "ingress" - from_port = 80 - to_port = 80 - protocol = "tcp" - cidr_blocks = ["10.0.5.0/24"] +resource "aws_security_group_rule" "other" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["10.0.5.0/24"] - security_group_id = "${aws_security_group.web.id}" - } + security_group_id = "${aws_security_group.web.id}" +} - // same a above, but different group, to guard against bad hashing - resource "aws_security_group_rule" "nat_ingress" { - type = "ingress" - from_port = 80 - to_port = 80 - protocol = "tcp" - cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] +// same a above, but different group, to guard against bad hashing +resource "aws_security_group_rule" "nat_ingress" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] - security_group_id = "${aws_security_group.nat.id}" - }`, rInt, rInt) + security_group_id = "${aws_security_group.nat.id}" +} + `, rInt, rInt) } func testAccAWSSecurityGroupRulePartialMatching_SourceConfig(rInt 
int) string { return fmt.Sprintf(` - resource "aws_vpc" "default" { - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-security-group-rule-partial-match" - } - } +resource "aws_vpc" "default" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-security-group-rule-partial-match" + } +} - resource "aws_security_group" "web" { - name = "tf-other-%d" - vpc_id = "${aws_vpc.default.id}" - tags { - Name = "tf-other-sg" - } - } +resource "aws_security_group" "web" { + name = "tf-other-%d" + vpc_id = "${aws_vpc.default.id}" + tags { + Name = "tf-other-sg" + } +} - resource "aws_security_group" "nat" { - name = "tf-nat-%d" - vpc_id = "${aws_vpc.default.id}" - tags { - Name = "tf-nat-sg" - } - } +resource "aws_security_group" "nat" { + name = "tf-nat-%d" + vpc_id = "${aws_vpc.default.id}" + tags { + Name = "tf-nat-sg" + } +} - resource "aws_security_group_rule" "source_ingress" { - type = "ingress" - from_port = 80 - to_port = 80 - protocol = "tcp" +resource "aws_security_group_rule" "source_ingress" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" - source_security_group_id = "${aws_security_group.nat.id}" - security_group_id = "${aws_security_group.web.id}" - } + source_security_group_id = "${aws_security_group.nat.id}" + security_group_id = "${aws_security_group.web.id}" +} - resource "aws_security_group_rule" "other_ingress" { - type = "ingress" - from_port = 80 - to_port = 80 - protocol = "tcp" - cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] +resource "aws_security_group_rule" "other_ingress" { + type = "ingress" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"] - security_group_id = "${aws_security_group.web.id}" - }`, rInt, rInt) + security_group_id = "${aws_security_group.web.id}" +} + `, rInt, rInt) } const testAccAWSSecurityGroupRulePrefixListEgressConfig = ` @@ -1386,21 +1768,21 @@ resource "aws_route_table" "default" { } resource "aws_vpc_endpoint" "s3-us-west-2" { - vpc_id = "${aws_vpc.tf_sg_prefix_list_egress_test.id}" - service_name = "com.amazonaws.us-west-2.s3" - route_table_ids = ["${aws_route_table.default.id}"] - policy = < 1 { + if *group.IpPermissions[1].IpRanges[0].CidrIp != "0.0.0.0/0" { + group.IpPermissions[1].IpRanges[0], group.IpPermissions[1].IpRanges[1] = + group.IpPermissions[1].IpRanges[1], group.IpPermissions[1].IpRanges[0] + } + } + if !reflect.DeepEqual(group.IpPermissions, p) { return fmt.Errorf( "Got:\n\n%#v\n\nExpected:\n\n%#v\n", @@ -1616,7 +2212,7 @@ func testAccCheckAWSSecurityGroupExistsWithoutDefault(n string) resource.TestChe func TestAccAWSSecurityGroup_failWithDiffMismatch(t *testing.T) { var group ec2.SecurityGroup - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSecurityGroupDestroy, @@ -1625,63 +2221,361 @@ func TestAccAWSSecurityGroup_failWithDiffMismatch(t *testing.T) { Config: testAccAWSSecurityGroupConfig_failWithDiffMismatch, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSecurityGroupExists("aws_security_group.nat", &group), + resource.TestCheckResourceAttr("aws_security_group.nat", "egress.#", "0"), + resource.TestCheckResourceAttr("aws_security_group.nat", "ingress.#", "2"), ), }, }, }) } -const testAccAWSSecurityGroupConfigEmptyRuleDescription = ` -resource "aws_vpc" "foo" { +func TestAccAWSSecurityGroup_ruleLimitExceededAppend(t *testing.T) { + ruleLimit := 
testAccAWSSecurityGroupRulesPerGroupLimitFromEnv() + + var group ec2.SecurityGroup + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + // create a valid SG just under the limit + { + Config: testAccAWSSecurityGroupConfigRuleLimit(0, ruleLimit), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + testAccCheckAWSSecurityGroupRuleCount(&group, 0, ruleLimit), + ), + }, + // append a rule to step over the limit + { + Config: testAccAWSSecurityGroupConfigRuleLimit(0, ruleLimit+1), + ExpectError: regexp.MustCompile("RulesPerSecurityGroupLimitExceeded"), + }, + { + PreConfig: func() { + // should have the original rules still + err := testSecurityGroupRuleCount(*group.GroupId, 0, ruleLimit) + if err != nil { + t.Fatalf("PreConfig check failed: %s", err) + } + }, + // running the original config again now should restore the rules + Config: testAccAWSSecurityGroupConfigRuleLimit(0, ruleLimit), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + testAccCheckAWSSecurityGroupRuleCount(&group, 0, ruleLimit), + ), + }, + }, + }) +} + +func TestAccAWSSecurityGroup_ruleLimitCidrBlockExceededAppend(t *testing.T) { + ruleLimit := testAccAWSSecurityGroupRulesPerGroupLimitFromEnv() + + var group ec2.SecurityGroup + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + // create a valid SG just under the limit + { + Config: testAccAWSSecurityGroupConfigCidrBlockRuleLimit(0, ruleLimit), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + testAccCheckAWSSecurityGroupRuleCount(&group, 0, 1), + ), + }, + // append a rule to step over the limit + { + Config: testAccAWSSecurityGroupConfigCidrBlockRuleLimit(0, ruleLimit+1), + ExpectError: regexp.MustCompile("RulesPerSecurityGroupLimitExceeded"), + }, + { + PreConfig: func() { + // should have the original cidr blocks still in 1 rule + err := testSecurityGroupRuleCount(*group.GroupId, 0, 1) + if err != nil { + t.Fatalf("PreConfig check failed: %s", err) + } + + id := *group.GroupId + + conn := testAccProvider.Meta().(*AWSClient).ec2conn + req := &ec2.DescribeSecurityGroupsInput{ + GroupIds: []*string{aws.String(id)}, + } + resp, err := conn.DescribeSecurityGroups(req) + if err != nil { + t.Fatalf("PreConfig check failed: %s", err) + } + + var match *ec2.SecurityGroup + if len(resp.SecurityGroups) > 0 && *resp.SecurityGroups[0].GroupId == id { + match = resp.SecurityGroups[0] + } + + if match == nil { + t.Fatalf("PreConfig check failed: security group %s not found", id) + } + + if cidrCount := len(match.IpPermissionsEgress[0].IpRanges); cidrCount != ruleLimit { + t.Fatalf("PreConfig check failed: rule does not have previous IP ranges, has %d", cidrCount) + } + }, + // running the original config again now should restore the rules + Config: testAccAWSSecurityGroupConfigCidrBlockRuleLimit(0, ruleLimit), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + testAccCheckAWSSecurityGroupRuleCount(&group, 0, 1), + ), + }, + }, + }) +} + +func TestAccAWSSecurityGroup_ruleLimitExceededPrepend(t *testing.T) { + ruleLimit := 
testAccAWSSecurityGroupRulesPerGroupLimitFromEnv() + + var group ec2.SecurityGroup + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + // create a valid SG just under the limit + { + Config: testAccAWSSecurityGroupConfigRuleLimit(0, ruleLimit), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + testAccCheckAWSSecurityGroupRuleCount(&group, 0, ruleLimit), + ), + }, + // prepend a rule to step over the limit + { + Config: testAccAWSSecurityGroupConfigRuleLimit(1, ruleLimit+1), + ExpectError: regexp.MustCompile("RulesPerSecurityGroupLimitExceeded"), + }, + { + PreConfig: func() { + // should have the original rules still (limit - 1 because of the shift) + err := testSecurityGroupRuleCount(*group.GroupId, 0, ruleLimit-1) + if err != nil { + t.Fatalf("PreConfig check failed: %s", err) + } + }, + // running the original config again now should restore the rules + Config: testAccAWSSecurityGroupConfigRuleLimit(0, ruleLimit), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + testAccCheckAWSSecurityGroupRuleCount(&group, 0, ruleLimit), + ), + }, + }, + }) +} + +func TestAccAWSSecurityGroup_ruleLimitExceededAllNew(t *testing.T) { + ruleLimit := testAccAWSSecurityGroupRulesPerGroupLimitFromEnv() + + var group ec2.SecurityGroup + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + // create a valid SG just under the limit + { + Config: testAccAWSSecurityGroupConfigRuleLimit(0, ruleLimit), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + testAccCheckAWSSecurityGroupRuleCount(&group, 0, ruleLimit), + ), + }, + // add a rule to step over the limit with entirely new rules + { + Config: testAccAWSSecurityGroupConfigRuleLimit(100, ruleLimit+1), + ExpectError: regexp.MustCompile("RulesPerSecurityGroupLimitExceeded"), + }, + { + // all the rules should have been revoked and the add failed + PreConfig: func() { + err := testSecurityGroupRuleCount(*group.GroupId, 0, 0) + if err != nil { + t.Fatalf("PreConfig check failed: %s", err) + } + }, + // running the original config again now should restore the rules + Config: testAccAWSSecurityGroupConfigRuleLimit(0, ruleLimit), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + testAccCheckAWSSecurityGroupRuleCount(&group, 0, ruleLimit), + ), + }, + }, + }) +} + +func TestAccAWSSecurityGroup_rulesDropOnError(t *testing.T) { + var group ec2.SecurityGroup + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + // Create a valid security group with some rules and make sure it exists + { + Config: testAccAWSSecurityGroupConfig_rulesDropOnError_Init, + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSecurityGroupExists("aws_security_group.test", &group), + ), + }, + // Add a bad rule to trigger API error + { + Config: testAccAWSSecurityGroupConfig_rulesDropOnError_AddBadRule, + ExpectError: regexp.MustCompile("InvalidGroupId.Malformed"), + }, + // All originally 
added rules must survive. This will return non-empty plan if anything changed. + { + Config: testAccAWSSecurityGroupConfig_rulesDropOnError_Init, + PlanOnly: true, + }, + }, + }) +} + +func testAccCheckAWSSecurityGroupRuleCount(group *ec2.SecurityGroup, expectedIngressCount, expectedEgressCount int) resource.TestCheckFunc { + return func(s *terraform.State) error { + id := *group.GroupId + return testSecurityGroupRuleCount(id, expectedIngressCount, expectedEgressCount) + } +} + +func testSecurityGroupRuleCount(id string, expectedIngressCount, expectedEgressCount int) error { + conn := testAccProvider.Meta().(*AWSClient).ec2conn + req := &ec2.DescribeSecurityGroupsInput{ + GroupIds: []*string{aws.String(id)}, + } + resp, err := conn.DescribeSecurityGroups(req) + if err != nil { + return err + } + + var group *ec2.SecurityGroup + if len(resp.SecurityGroups) > 0 && *resp.SecurityGroups[0].GroupId == id { + group = resp.SecurityGroups[0] + } + + if group == nil { + return fmt.Errorf("Security group %s not found", id) + } + + if actual := len(group.IpPermissions); actual != expectedIngressCount { + return fmt.Errorf("Security group ingress rule count %d does not match %d", actual, expectedIngressCount) + } + + if actual := len(group.IpPermissionsEgress); actual != expectedEgressCount { + return fmt.Errorf("Security group egress rule count %d does not match %d", actual, expectedEgressCount) + } + + return nil +} + +func testAccAWSSecurityGroupConfigRuleLimit(egressStartIndex, egressRulesCount int) string { + c := ` +resource "aws_vpc" "test" { cidr_block = "10.1.0.0/16" tags { - Name = "terraform-testacc-security-group-empty-rule-description" + Name = "terraform-testacc-security-group-rule-limit" } } -resource "aws_security_group" "web" { - name = "terraform_acceptance_test_desc_example" +resource "aws_security_group" "test" { + name = "terraform_acceptance_test_rule_limit" description = "Used in the terraform acceptance tests" - vpc_id = "${aws_vpc.foo.id}" + vpc_id = "${aws_vpc.test.id}" - ingress { - protocol = "6" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - description = "" + tags { + Name = "tf-acc-test" } + // egress rules to exhaust the limit +` + + for i := egressStartIndex; i < egressRulesCount+egressStartIndex; i++ { + c += fmt.Sprintf(` egress { - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - description = "" - } + protocol = "tcp" + from_port = "${80 + %[1]d}" + to_port = "${80 + %[1]d}" + cidr_blocks = ["${cidrhost("10.1.0.0/16", %[1]d)}/32"] + } +`, i) + } + + c += "\n}" + return c +} + +func testAccAWSSecurityGroupConfigCidrBlockRuleLimit(egressStartIndex, egressRulesCount int) string { + c := ` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" tags { + Name = "terraform-testacc-security-group-rule-limit" + } +} + +resource "aws_security_group" "test" { + name = "terraform_acceptance_test_rule_limit" + description = "Used in the terraform acceptance tests" + vpc_id = "${aws_vpc.test.id}" + + tags { Name = "tf-acc-test" } -}` -const testAccAWSSecurityGroupConfigForTagsOrdering = ` + + // egress rules to exhaust the limit + egress { + protocol = "tcp" + from_port = "80" + to_port = "80" + cidr_blocks = [ +` + + for i := egressStartIndex; i < egressRulesCount+egressStartIndex; i++ { + c += fmt.Sprintf(` + "${cidrhost("10.1.0.0/16", %[1]d)}/32", +`, i) + } + + c += "\n\t\t]\n\t}\n}" + + return c +} + +const testAccAWSSecurityGroupConfigEmptyRuleDescription = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" 
tags { - Name = "terraform-testacc-security-group-tags-ordering" + Name = "terraform-testacc-security-group-empty-rule-description" } } resource "aws_security_group" "web" { - name = "terraform_acceptance_test_example" + name = "terraform_acceptance_test_desc_example" description = "Used in the terraform acceptance tests" vpc_id = "${aws_vpc.foo.id}" ingress { protocol = "6" from_port = 80 - to_port = 80000 + to_port = 8000 cidr_blocks = ["10.0.0.0/8"] + description = "" } egress { @@ -1689,11 +2583,12 @@ resource "aws_security_group" "web" { from_port = 80 to_port = 8000 cidr_blocks = ["10.0.0.0/8"] + description = "" } - tags { - Name = "tf-acc-test" - } + tags { + Name = "tf-acc-test" + } }` const testAccAWSSecurityGroupConfigIpv6 = ` @@ -1842,7 +2737,7 @@ resource "aws_security_group" "primary" { Name = "tf-acc-revoke-test-primary" } - revoke_rules_on_delete = true + revoke_rules_on_delete = true } resource "aws_security_group" "secondary" { @@ -1854,7 +2749,7 @@ resource "aws_security_group" "secondary" { Name = "tf-acc-revoke-test-secondary" } - revoke_rules_on_delete = true + revoke_rules_on_delete = true } ` @@ -1894,7 +2789,8 @@ resource "aws_security_group" "web" { } ` -const testAccAWSSecurityGroupConfigRuleDescription = ` +func testAccAWSSecurityGroupConfigRuleDescription(egressDescription, ingressDescription string) string { + return fmt.Sprintf(` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" tags { @@ -1912,7 +2808,7 @@ resource "aws_security_group" "web" { from_port = 80 to_port = 8000 cidr_blocks = ["10.0.0.0/8"] - description = "Ingress description" + description = "%s" } egress { @@ -1920,49 +2816,15 @@ resource "aws_security_group" "web" { from_port = 80 to_port = 8000 cidr_blocks = ["10.0.0.0/8"] - description = "Egress description" + description = "%s" } tags { Name = "tf-acc-test" } } -` - -const testAccAWSSecurityGroupConfigChangeRuleDescription = ` -resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-security-group-change-rule-desc" - } -} - -resource "aws_security_group" "web" { - name = "terraform_acceptance_test_example" - description = "Used in the terraform acceptance tests" - vpc_id = "${aws_vpc.foo.id}" - - ingress { - protocol = "6" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - description = "New ingress description" - } - - egress { - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - description = "New egress description" - } - - tags { - Name = "tf-acc-test" - } +`, ingressDescription, egressDescription) } -` const testAccAWSSecurityGroupConfigSelf = ` resource "aws_vpc" "foo" { @@ -2142,20 +3004,6 @@ resource "aws_security_group" "foo" { description = "Used in the terraform acceptance tests" vpc_id = "${aws_vpc.foo.id}" - ingress { - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - } - - egress { - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - } - tags { foo = "bar" } @@ -2175,20 +3023,6 @@ resource "aws_security_group" "foo" { description = "Used in the terraform acceptance tests" vpc_id = "${aws_vpc.foo.id}" - ingress { - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - } - - egress { - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - } - tags { bar = "baz" env = "Production" @@ -2207,20 +3041,6 @@ resource "aws_vpc" "foo" { resource "aws_security_group" "web" { vpc_id = "${aws_vpc.foo.id}" - ingress { - 
protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - } - - egress { - protocol = "tcp" - from_port = 80 - to_port = 8000 - cidr_blocks = ["10.0.0.0/8"] - } - tags { Name = "tf-acc-test" } @@ -2250,10 +3070,6 @@ resource "aws_security_group" "worker" { ` const testAccAWSSecurityGroupConfigClassic = ` -provider "aws" { - region = "us-east-1" -} - resource "aws_security_group" "web" { name = "terraform_acceptance_test_example_1" description = "Used in the terraform acceptance tests" @@ -2524,10 +3340,6 @@ resource "aws_security_group" "web" { ` const testAccAWSSecurityGroupConfig_ingressWithCidrAndSGs_classic = ` -provider "aws" { - region = "us-east-1" -} - resource "aws_security_group" "other_web" { name = "tf_other_acc_tests" description = "Used in the terraform acceptance tests" @@ -2790,12 +3602,6 @@ resource "aws_security_group" "egress" { name = "terraform_acceptance_test_example" description = "Used in the terraform acceptance tests" vpc_id = "${aws_vpc.foo.id}" - ingress { - from_port = 22 - to_port = 22 - protocol = "6" - cidr_blocks = ["0.0.0.0/0"] - } egress { from_port = 0 to_port = 0 @@ -2812,6 +3618,8 @@ resource "aws_security_group" "egress" { ` const testAccAWSSecurityGroupConfigPrefixListEgress = ` +data "aws_region" "current" {} + resource "aws_vpc" "tf_sg_prefix_list_egress_test" { cidr_block = "10.0.0.0/16" tags { @@ -2823,9 +3631,9 @@ resource "aws_route_table" "default" { vpc_id = "${aws_vpc.tf_sg_prefix_list_egress_test.id}" } -resource "aws_vpc_endpoint" "s3-us-west-2" { +resource "aws_vpc_endpoint" "test" { vpc_id = "${aws_vpc.tf_sg_prefix_list_egress_test.id}" - service_name = "com.amazonaws.us-west-2.s3" + service_name = "com.amazonaws.${data.aws_region.current.name}.s3" route_table_ids = ["${aws_route_table.default.id}"] policy = < (64 - resource.UniqueIDSuffixLength) { + requestId = resource.PrefixedUniqueId(name[0:(64 - resource.UniqueIDSuffixLength - 1)]) + } else { + requestId = resource.PrefixedUniqueId(name) + } + input := &servicediscovery.CreatePrivateDnsNamespaceInput{ - Name: aws.String(d.Get("name").(string)), + Name: aws.String(name), Vpc: aws.String(d.Get("vpc").(string)), CreatorRequestId: aws.String(requestId), } @@ -127,6 +134,5 @@ func resourceAwsServiceDiscoveryPrivateDnsNamespaceDelete(d *schema.ResourceData return err } - d.SetId("") return nil } diff --git a/aws/resource_aws_service_discovery_private_dns_namespace_test.go b/aws/resource_aws_service_discovery_private_dns_namespace_test.go index f7d13f50f74..31d7e956c15 100644 --- a/aws/resource_aws_service_discovery_private_dns_namespace_test.go +++ b/aws/resource_aws_service_discovery_private_dns_namespace_test.go @@ -12,13 +12,35 @@ import ( ) func TestAccAWSServiceDiscoveryPrivateDnsNamespace_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + rName := acctest.RandString(5) + ".example.com" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsServiceDiscoveryPrivateDnsNamespaceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccServiceDiscoveryPrivateDnsNamespaceConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsServiceDiscoveryPrivateDnsNamespaceExists("aws_service_discovery_private_dns_namespace.test"), + resource.TestCheckResourceAttrSet("aws_service_discovery_private_dns_namespace.test", "arn"), + resource.TestCheckResourceAttrSet("aws_service_discovery_private_dns_namespace.test", "hosted_zone"), + ), + }, + }, + }) +} 
+ +func TestAccAWSServiceDiscoveryPrivateDnsNamespace_longname(t *testing.T) { + rName := acctest.RandString(64-len("example.com")) + ".example.com" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsServiceDiscoveryPrivateDnsNamespaceDestroy, Steps: []resource.TestStep{ { - Config: testAccServiceDiscoveryPrivateDnsNamespaceConfig(acctest.RandString(5)), + Config: testAccServiceDiscoveryPrivateDnsNamespaceConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsServiceDiscoveryPrivateDnsNamespaceExists("aws_service_discovery_private_dns_namespace.test"), resource.TestCheckResourceAttrSet("aws_service_discovery_private_dns_namespace.test", "arn"), @@ -73,7 +95,7 @@ resource "aws_vpc" "test" { } resource "aws_service_discovery_private_dns_namespace" "test" { - name = "tf-sd-%s.terraform.local" + name = "%s" description = "test" vpc = "${aws_vpc.test.id}" } diff --git a/aws/resource_aws_service_discovery_public_dns_namespace.go b/aws/resource_aws_service_discovery_public_dns_namespace.go index 33fba76a51f..d311f2247d9 100644 --- a/aws/resource_aws_service_discovery_public_dns_namespace.go +++ b/aws/resource_aws_service_discovery_public_dns_namespace.go @@ -1,7 +1,6 @@ package aws import ( - "fmt" "time" "github.com/aws/aws-sdk-go/aws" @@ -46,9 +45,17 @@ func resourceAwsServiceDiscoveryPublicDnsNamespace() *schema.Resource { func resourceAwsServiceDiscoveryPublicDnsNamespaceCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).sdconn - requestId := resource.PrefixedUniqueId(fmt.Sprintf("tf-%s", d.Get("name").(string))) + name := d.Get("name").(string) + // The CreatorRequestId has a limit of 64 bytes + var requestId string + if len(name) > (64 - resource.UniqueIDSuffixLength) { + requestId = resource.PrefixedUniqueId(name[0:(64 - resource.UniqueIDSuffixLength - 1)]) + } else { + requestId = resource.PrefixedUniqueId(name) + } + input := &servicediscovery.CreatePublicDnsNamespaceInput{ - Name: aws.String(d.Get("name").(string)), + Name: aws.String(name), CreatorRequestId: aws.String(requestId), } @@ -126,7 +133,6 @@ func resourceAwsServiceDiscoveryPublicDnsNamespaceDelete(d *schema.ResourceData, return err } - d.SetId("") return nil } diff --git a/aws/resource_aws_service_discovery_public_dns_namespace_test.go b/aws/resource_aws_service_discovery_public_dns_namespace_test.go index f082c4b9fb3..82a73db6505 100644 --- a/aws/resource_aws_service_discovery_public_dns_namespace_test.go +++ b/aws/resource_aws_service_discovery_public_dns_namespace_test.go @@ -12,36 +12,49 @@ import ( ) func TestAccAWSServiceDiscoveryPublicDnsNamespace_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resourceName := "aws_service_discovery_public_dns_namespace.test" + rName := acctest.RandStringFromCharSet(5, acctest.CharSetAlpha) + ".terraformtesting.com" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsServiceDiscoveryPublicDnsNamespaceDestroy, Steps: []resource.TestStep{ { - Config: testAccServiceDiscoveryPublicDnsNamespaceConfig(acctest.RandString(5)), + Config: testAccServiceDiscoveryPublicDnsNamespaceConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsServiceDiscoveryPublicDnsNamespaceExists("aws_service_discovery_public_dns_namespace.test"), resource.TestCheckResourceAttrSet("aws_service_discovery_public_dns_namespace.test", "arn"), 
resource.TestCheckResourceAttrSet("aws_service_discovery_public_dns_namespace.test", "hosted_zone"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func TestAccAWSServiceDiscoveryPublicDnsNamespace_import(t *testing.T) { +func TestAccAWSServiceDiscoveryPublicDnsNamespace_longname(t *testing.T) { resourceName := "aws_service_discovery_public_dns_namespace.test" + rName := acctest.RandStringFromCharSet(64-len("terraformtesting.com"), acctest.CharSetAlpha) + ".terraformtesting.com" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsServiceDiscoveryPublicDnsNamespaceDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccServiceDiscoveryPublicDnsNamespaceConfig(acctest.RandString(5)), + { + Config: testAccServiceDiscoveryPublicDnsNamespaceConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsServiceDiscoveryPublicDnsNamespaceExists("aws_service_discovery_public_dns_namespace.test"), + resource.TestCheckResourceAttrSet("aws_service_discovery_public_dns_namespace.test", "arn"), + resource.TestCheckResourceAttrSet("aws_service_discovery_public_dns_namespace.test", "hosted_zone"), + ), }, - - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -98,7 +111,7 @@ func testAccCheckAwsServiceDiscoveryPublicDnsNamespaceExists(name string) resour func testAccServiceDiscoveryPublicDnsNamespaceConfig(rName string) string { return fmt.Sprintf(` resource "aws_service_discovery_public_dns_namespace" "test" { - name = "tf-sd-%s.terraform.com" + name = %q description = "test" } `, rName) diff --git a/aws/resource_aws_service_discovery_service.go b/aws/resource_aws_service_discovery_service.go index 38f07bb7377..efaeacef4b4 100644 --- a/aws/resource_aws_service_discovery_service.go +++ b/aws/resource_aws_service_discovery_service.go @@ -8,6 +8,7 @@ import ( "github.com/aws/aws-sdk-go/service/servicediscovery" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsServiceDiscoveryService() *schema.Resource { @@ -55,12 +56,12 @@ func resourceAwsServiceDiscoveryService() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, - ValidateFunc: validateStringIn( + ValidateFunc: validation.StringInSlice([]string{ servicediscovery.RecordTypeSrv, servicediscovery.RecordTypeA, servicediscovery.RecordTypeAaaa, servicediscovery.RecordTypeCname, - ), + }, false), }, }, }, @@ -70,10 +71,10 @@ func resourceAwsServiceDiscoveryService() *schema.Resource { Optional: true, ForceNew: true, Default: servicediscovery.RoutingPolicyMultivalue, - ValidateFunc: validateStringIn( + ValidateFunc: validation.StringInSlice([]string{ servicediscovery.RoutingPolicyMultivalue, servicediscovery.RoutingPolicyWeighted, - ), + }, false), }, }, }, @@ -96,11 +97,26 @@ func resourceAwsServiceDiscoveryService() *schema.Resource { Type: schema.TypeString, Optional: true, ForceNew: true, - ValidateFunc: validateStringIn( + ValidateFunc: validation.StringInSlice([]string{ servicediscovery.HealthCheckTypeHttp, servicediscovery.HealthCheckTypeHttps, servicediscovery.HealthCheckTypeTcp, - ), + }, false), + }, + }, + }, + }, + "health_check_custom_config": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + 
Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "failure_threshold": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, }, }, }, @@ -130,6 +146,11 @@ func resourceAwsServiceDiscoveryServiceCreate(d *schema.ResourceData, meta inter input.HealthCheckConfig = expandServiceDiscoveryHealthCheckConfig(hcconfig[0].(map[string]interface{})) } + healthCustomConfig := d.Get("health_check_custom_config").([]interface{}) + if len(healthCustomConfig) > 0 { + input.HealthCheckCustomConfig = expandServiceDiscoveryHealthCheckCustomConfig(healthCustomConfig[0].(map[string]interface{})) + } + resp, err := conn.CreateService(input) if err != nil { return err @@ -163,6 +184,7 @@ func resourceAwsServiceDiscoveryServiceRead(d *schema.ResourceData, meta interfa d.Set("description", service.Description) d.Set("dns_config", flattenServiceDiscoveryDnsConfig(service.DnsConfig)) d.Set("health_check_config", flattenServiceDiscoveryHealthCheckConfig(service.HealthCheckConfig)) + d.Set("health_check_custom_config", flattenServiceDiscoveryHealthCheckCustomConfig(service.HealthCheckCustomConfig)) return nil } @@ -322,3 +344,33 @@ func flattenServiceDiscoveryHealthCheckConfig(config *servicediscovery.HealthChe return []map[string]interface{}{result} } + +func expandServiceDiscoveryHealthCheckCustomConfig(configured map[string]interface{}) *servicediscovery.HealthCheckCustomConfig { + if len(configured) < 1 { + return nil + } + result := &servicediscovery.HealthCheckCustomConfig{} + + if v, ok := configured["failure_threshold"]; ok && v.(int) != 0 { + result.FailureThreshold = aws.Int64(int64(v.(int))) + } + + return result +} + +func flattenServiceDiscoveryHealthCheckCustomConfig(config *servicediscovery.HealthCheckCustomConfig) []map[string]interface{} { + if config == nil { + return nil + } + result := map[string]interface{}{} + + if config.FailureThreshold != nil { + result["failure_threshold"] = *config.FailureThreshold + } + + if len(result) < 1 { + return nil + } + + return []map[string]interface{}{result} +} diff --git a/aws/resource_aws_service_discovery_service_migrate.go b/aws/resource_aws_service_discovery_service_migrate.go deleted file mode 100644 index d0fd571f7c5..00000000000 --- a/aws/resource_aws_service_discovery_service_migrate.go +++ /dev/null @@ -1,36 +0,0 @@ -package aws - -import ( - "fmt" - "log" - - "github.com/aws/aws-sdk-go/service/servicediscovery" - "github.com/hashicorp/terraform/terraform" -) - -func resourceAwsServiceDiscoveryServiceMigrateState( - v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { - switch v { - case 0: - log.Println("[INFO] Found AWS ServiceDiscovery Service State v0; migrating to v1") - return migrateServiceDiscoveryServiceStateV0toV1(is) - default: - return is, fmt.Errorf("Unexpected schema version: %d", v) - } -} - -func migrateServiceDiscoveryServiceStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { - if is.Empty() { - log.Println("[DEBUG] Empty InstanceState; nothing to migrate.") - return is, nil - } - - log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) - - if v, ok := is.Attributes["dns_config.0.routing_policy"]; !ok && v == "" { - is.Attributes["dns_config.0.routing_policy"] = servicediscovery.RoutingPolicyMultivalue - } - - log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes) - return is, nil -} diff --git a/aws/resource_aws_service_discovery_service_migrate_test.go 
b/aws/resource_aws_service_discovery_service_migrate_test.go deleted file mode 100644 index 8a36485b451..00000000000 --- a/aws/resource_aws_service_discovery_service_migrate_test.go +++ /dev/null @@ -1,61 +0,0 @@ -package aws - -import ( - "testing" - - "github.com/hashicorp/terraform/terraform" -) - -func TestAwsServiceDiscoveryServiceMigrateState(t *testing.T) { - cases := map[string]struct { - StateVersion int - ID string - Attributes map[string]string - Expected map[string]string - Meta interface{} - }{ - "v0_1": { - StateVersion: 0, - ID: "tf-testing-file", - Attributes: map[string]string{ - "name": "test-name", - "dns_config.#": "1", - "dns_config.0.namespace_id": "test-namespace", - "dns_config.0.dns_records.#": "1", - "dns_config.0.dns_records.0.ttl": "10", - "dns_config.0.dns_records.0.type": "A", - "arn": "arn", - }, - Expected: map[string]string{ - "name": "test-name", - "dns_config.#": "1", - "dns_config.0.namespace_id": "test-namespace", - "dns_config.0.routing_policy": "MULTIVALUE", - "dns_config.0.dns_records.#": "1", - "dns_config.0.dns_records.0.ttl": "10", - "dns_config.0.dns_records.0.type": "A", - "arn": "arn", - }, - }, - } - - for tn, tc := range cases { - is := &terraform.InstanceState{ - ID: tc.ID, - Attributes: tc.Attributes, - } - is, err := resourceAwsServiceDiscoveryServiceMigrateState(tc.StateVersion, is, tc.Meta) - - if err != nil { - t.Fatalf("bad: %s, err: %#v", tn, err) - } - - for k, v := range tc.Expected { - if is.Attributes[k] != v { - t.Fatalf( - "bad: %s\n\n expected: %#v -> %#v\n got: %#v -> %#v\n in: %#v", - tn, k, v, k, is.Attributes[k], is.Attributes) - } - } - } -} diff --git a/aws/resource_aws_service_discovery_service_test.go b/aws/resource_aws_service_discovery_service_test.go index 851015a20d2..4b380054cad 100644 --- a/aws/resource_aws_service_discovery_service_test.go +++ b/aws/resource_aws_service_discovery_service_test.go @@ -13,15 +13,16 @@ import ( func TestAccAWSServiceDiscoveryService_private(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsServiceDiscoveryServiceDestroy, Steps: []resource.TestStep{ { - Config: testAccServiceDiscoveryServiceConfig_private(rName), + Config: testAccServiceDiscoveryServiceConfig_private(rName, 5), Check: resource.ComposeTestCheckFunc( testAccCheckAwsServiceDiscoveryServiceExists("aws_service_discovery_service.test"), + resource.TestCheckResourceAttr("aws_service_discovery_service.test", "health_check_custom_config.0.failure_threshold", "5"), resource.TestCheckResourceAttr("aws_service_discovery_service.test", "dns_config.0.dns_records.#", "1"), resource.TestCheckResourceAttr("aws_service_discovery_service.test", "dns_config.0.dns_records.0.type", "A"), resource.TestCheckResourceAttr("aws_service_discovery_service.test", "dns_config.0.dns_records.0.ttl", "5"), @@ -30,7 +31,7 @@ func TestAccAWSServiceDiscoveryService_private(t *testing.T) { ), }, { - Config: testAccServiceDiscoveryServiceConfig_private_update(rName), + Config: testAccServiceDiscoveryServiceConfig_private_update(rName, 5), Check: resource.ComposeTestCheckFunc( testAccCheckAwsServiceDiscoveryServiceExists("aws_service_discovery_service.test"), resource.TestCheckResourceAttr("aws_service_discovery_service.test", "dns_config.0.dns_records.#", "2"), @@ -47,7 +48,7 @@ func TestAccAWSServiceDiscoveryService_private(t *testing.T) { func 
TestAccAWSServiceDiscoveryService_public(t *testing.T) { rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsServiceDiscoveryServiceDestroy, @@ -80,16 +81,16 @@ func TestAccAWSServiceDiscoveryService_public(t *testing.T) { func TestAccAWSServiceDiscoveryService_import(t *testing.T) { resourceName := "aws_service_discovery_service.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsServiceDiscoveryServiceDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccServiceDiscoveryServiceConfig_private(acctest.RandString(5)), + { + Config: testAccServiceDiscoveryServiceConfig_private(acctest.RandString(5), 5), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -143,7 +144,7 @@ func testAccCheckAwsServiceDiscoveryServiceExists(name string) resource.TestChec } } -func testAccServiceDiscoveryServiceConfig_private(rName string) string { +func testAccServiceDiscoveryServiceConfig_private(rName string, th int) string { return fmt.Sprintf(` resource "aws_vpc" "test" { cidr_block = "10.0.0.0/16" @@ -167,11 +168,14 @@ resource "aws_service_discovery_service" "test" { type = "A" } } + health_check_custom_config { + failure_threshold = %d + } } -`, rName, rName) +`, rName, rName, th) } -func testAccServiceDiscoveryServiceConfig_private_update(rName string) string { +func testAccServiceDiscoveryServiceConfig_private_update(rName string, th int) string { return fmt.Sprintf(` resource "aws_vpc" "test" { cidr_block = "10.0.0.0/16" @@ -200,8 +204,11 @@ resource "aws_service_discovery_service" "test" { } routing_policy = "MULTIVALUE" } + health_check_custom_config { + failure_threshold = %d + } } -`, rName, rName) +`, rName, rName, th) } func testAccServiceDiscoveryServiceConfig_public(rName string, th int, path string) string { diff --git a/aws/resource_aws_servicecatalog_portfolio.go b/aws/resource_aws_servicecatalog_portfolio.go index 161550ccc4a..1fe71297251 100644 --- a/aws/resource_aws_servicecatalog_portfolio.go +++ b/aws/resource_aws_servicecatalog_portfolio.go @@ -47,7 +47,7 @@ func resourceAwsServiceCatalogPortfolio() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: validateMaxLength(2000), + ValidateFunc: validation.StringLenBetween(0, 2000), }, "provider_name": { Type: schema.TypeString, diff --git a/aws/resource_aws_servicecatalog_portfolio_test.go b/aws/resource_aws_servicecatalog_portfolio_test.go index 99da34c2054..f405ff34033 100644 --- a/aws/resource_aws_servicecatalog_portfolio_test.go +++ b/aws/resource_aws_servicecatalog_portfolio_test.go @@ -16,12 +16,12 @@ import ( func TestAccAWSServiceCatalogPortfolioBasic(t *testing.T) { name := acctest.RandString(5) var dpo servicecatalog.DescribePortfolioOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckServiceCatlaogPortfolioDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAwsServiceCatalogPortfolioResourceConfigBasic1(name), Check: resource.ComposeTestCheckFunc( testAccCheckPortfolio("aws_servicecatalog_portfolio.test", &dpo), @@ -34,7 +34,7 @@ func 
TestAccAWSServiceCatalogPortfolioBasic(t *testing.T) { resource.TestCheckResourceAttr("aws_servicecatalog_portfolio.test", "tags.Key1", "Value One"), ), }, - resource.TestStep{ + { Config: testAccCheckAwsServiceCatalogPortfolioResourceConfigBasic2(name), Check: resource.ComposeTestCheckFunc( testAccCheckPortfolio("aws_servicecatalog_portfolio.test", &dpo), @@ -48,7 +48,7 @@ func TestAccAWSServiceCatalogPortfolioBasic(t *testing.T) { resource.TestCheckResourceAttr("aws_servicecatalog_portfolio.test", "tags.Key2", "Value Two"), ), }, - resource.TestStep{ + { Config: testAccCheckAwsServiceCatalogPortfolioResourceConfigBasic3(name), Check: resource.ComposeTestCheckFunc( testAccCheckPortfolio("aws_servicecatalog_portfolio.test", &dpo), @@ -68,12 +68,12 @@ func TestAccAWSServiceCatalogPortfolioBasic(t *testing.T) { func TestAccAWSServiceCatalogPortfolioDisappears(t *testing.T) { name := acctest.RandString(5) var dpo servicecatalog.DescribePortfolioOutput - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckServiceCatlaogPortfolioDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAwsServiceCatalogPortfolioResourceConfigBasic1(name), Check: resource.ComposeTestCheckFunc( testAccCheckPortfolio("aws_servicecatalog_portfolio.test", &dpo), @@ -90,16 +90,16 @@ func TestAccAWSServiceCatalogPortfolioImport(t *testing.T) { name := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckServiceCatlaogPortfolioDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAwsServiceCatalogPortfolioResourceConfigBasic1(name), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_ses_active_receipt_rule_set.go b/aws/resource_aws_ses_active_receipt_rule_set.go index 854d645a6cd..21534257b6f 100644 --- a/aws/resource_aws_ses_active_receipt_rule_set.go +++ b/aws/resource_aws_ses_active_receipt_rule_set.go @@ -17,7 +17,7 @@ func resourceAwsSesActiveReceiptRuleSet() *schema.Resource { Delete: resourceAwsSesActiveReceiptRuleSetDelete, Schema: map[string]*schema.Schema{ - "rule_set_name": &schema.Schema{ + "rule_set_name": { Type: schema.TypeString, Required: true, }, diff --git a/aws/resource_aws_ses_active_receipt_rule_set_test.go b/aws/resource_aws_ses_active_receipt_rule_set_test.go index 0f9a37cdfbd..c96f339720e 100644 --- a/aws/resource_aws_ses_active_receipt_rule_set_test.go +++ b/aws/resource_aws_ses_active_receipt_rule_set_test.go @@ -10,14 +10,14 @@ import ( ) func TestAccAWSSESActiveReceiptRuleSet_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESActiveReceiptRuleSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSESActiveReceiptRuleSetConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAwsSESActiveReceiptRuleSetExists("aws_ses_active_receipt_rule_set.test"), diff --git a/aws/resource_aws_ses_configuration_set.go b/aws/resource_aws_ses_configuration_set.go index e631b887cb4..287de0f660a 100644 --- a/aws/resource_aws_ses_configuration_set.go +++ b/aws/resource_aws_ses_configuration_set.go @@ -19,7 +19,7 @@ func 
resourceAwsSesConfigurationSet() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, diff --git a/aws/resource_aws_ses_configuration_set_test.go b/aws/resource_aws_ses_configuration_set_test.go index 5a5bd1ec8a3..062fa0dc453 100644 --- a/aws/resource_aws_ses_configuration_set_test.go +++ b/aws/resource_aws_ses_configuration_set_test.go @@ -11,19 +11,24 @@ import ( ) func TestAccAWSSESConfigurationSet_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESConfigurationSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSESConfigurationSetConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAwsSESConfigurationSetExists("aws_ses_configuration_set.test"), ), }, + { + ResourceName: "aws_ses_configuration_set.test", + ImportState: true, + ImportStateVerify: true, + }, }, }) } diff --git a/aws/resource_aws_ses_domain_dkim_test.go b/aws/resource_aws_ses_domain_dkim_test.go index bf5569e1ff1..5679ba2c9d2 100644 --- a/aws/resource_aws_ses_domain_dkim_test.go +++ b/aws/resource_aws_ses_domain_dkim_test.go @@ -18,7 +18,7 @@ func TestAccAWSSESDomainDkim_basic(t *testing.T) { "%s.terraformtesting.com", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, diff --git a/aws/resource_aws_ses_domain_identity.go b/aws/resource_aws_ses_domain_identity.go index e0f76916915..84579dd3adf 100644 --- a/aws/resource_aws_ses_domain_identity.go +++ b/aws/resource_aws_ses_domain_identity.go @@ -6,6 +6,7 @@ import ( "strings" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/ses" "github.com/hashicorp/terraform/helper/schema" ) @@ -85,7 +86,14 @@ func resourceAwsSesDomainIdentityRead(d *schema.ResourceData, meta interface{}) return nil } - d.Set("arn", fmt.Sprintf("arn:%s:ses:%s:%s:identity/%s", meta.(*AWSClient).partition, meta.(*AWSClient).region, meta.(*AWSClient).accountid, d.Id())) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "ses", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("identity/%s", d.Id()), + }.String() + d.Set("arn", arn) d.Set("verification_token", verificationAttrs.VerificationToken) return nil } diff --git a/aws/resource_aws_ses_domain_identity_test.go b/aws/resource_aws_ses_domain_identity_test.go index 2dea4b8544c..ec4284dd392 100644 --- a/aws/resource_aws_ses_domain_identity_test.go +++ b/aws/resource_aws_ses_domain_identity_test.go @@ -18,7 +18,7 @@ func TestAccAWSSESDomainIdentity_basic(t *testing.T) { "%s.terraformtesting.com", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAwsSESDomainIdentityDestroy, @@ -39,7 +39,7 @@ func TestAccAWSSESDomainIdentity_trailingPeriod(t *testing.T) { "%s.terraformtesting.com.", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, 
CheckDestroy: testAccCheckAwsSESDomainIdentityDestroy, diff --git a/aws/resource_aws_ses_domain_identity_verification.go b/aws/resource_aws_ses_domain_identity_verification.go new file mode 100644 index 00000000000..c7b546723e6 --- /dev/null +++ b/aws/resource_aws_ses_domain_identity_verification.go @@ -0,0 +1,124 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/ses" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsSesDomainIdentityVerification() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsSesDomainIdentityVerificationCreate, + Read: resourceAwsSesDomainIdentityVerificationRead, + Delete: resourceAwsSesDomainIdentityVerificationDelete, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "domain": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: func(v interface{}) string { + return strings.TrimSuffix(v.(string), ".") + }, + }, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(45 * time.Minute), + }, + } +} + +func getAwsSesIdentityVerificationAttributes(conn *ses.SES, domainName string) (*ses.IdentityVerificationAttributes, error) { + input := &ses.GetIdentityVerificationAttributesInput{ + Identities: []*string{ + aws.String(domainName), + }, + } + + response, err := conn.GetIdentityVerificationAttributes(input) + if err != nil { + return nil, fmt.Errorf("Error getting identity verification attributes: %s", err) + } + + return response.VerificationAttributes[domainName], nil +} + +func resourceAwsSesDomainIdentityVerificationCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).sesConn + domainName := strings.TrimSuffix(d.Get("domain").(string), ".") + err := resource.Retry(d.Timeout(schema.TimeoutCreate), func() *resource.RetryError { + att, err := getAwsSesIdentityVerificationAttributes(conn, domainName) + if err != nil { + return resource.NonRetryableError(fmt.Errorf("Error getting identity verification attributes: %s", err)) + } + + if att == nil { + return resource.NonRetryableError(fmt.Errorf("SES Domain Identity %s not found in AWS", domainName)) + } + + if aws.StringValue(att.VerificationStatus) != ses.VerificationStatusSuccess { + return resource.RetryableError(fmt.Errorf("Expected domain verification Success, but was in state %s", aws.StringValue(att.VerificationStatus))) + } + + return nil + }) + if err != nil { + return err + } + + log.Printf("[INFO] Domain verification successful for %s", domainName) + d.SetId(domainName) + return resourceAwsSesDomainIdentityVerificationRead(d, meta) +} + +func resourceAwsSesDomainIdentityVerificationRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).sesConn + + domainName := d.Id() + d.Set("domain", domainName) + + att, err := getAwsSesIdentityVerificationAttributes(conn, domainName) + if err != nil { + log.Printf("[WARN] Error fetching identity verification attributes for %s: %s", d.Id(), err) + return err + } + + if att == nil { + log.Printf("[WARN] Domain not listed in response when fetching verification attributes for %s", d.Id()) + d.SetId("") + return nil + } + + if aws.StringValue(att.VerificationStatus) != ses.VerificationStatusSuccess { + log.Printf("[WARN] Expected domain verification Success, but was %s, 
tainting verification", aws.StringValue(att.VerificationStatus)) + d.SetId("") + return nil + } + + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "ses", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("identity/%s", d.Id()), + }.String() + d.Set("arn", arn) + + return nil +} + +func resourceAwsSesDomainIdentityVerificationDelete(d *schema.ResourceData, meta interface{}) error { + // No need to do anything, domain identity will be deleted when aws_ses_domain_identity is deleted + return nil +} diff --git a/aws/resource_aws_ses_domain_identity_verification_test.go b/aws/resource_aws_ses_domain_identity_verification_test.go new file mode 100644 index 00000000000..0f3a9f76f59 --- /dev/null +++ b/aws/resource_aws_ses_domain_identity_verification_test.go @@ -0,0 +1,179 @@ +package aws + +import ( + "fmt" + "os" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/ses" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func testAccAwsSesDomainIdentityDomainFromEnv(t *testing.T) string { + rootDomain := os.Getenv("SES_DOMAIN_IDENTITY_ROOT_DOMAIN") + if rootDomain == "" { + t.Skip( + "Environment variable SES_DOMAIN_IDENTITY_ROOT_DOMAIN is not set. " + + "For DNS verification requests, this domain must be publicly " + + "accessible and configurable via Route53 during the testing. ") + } + return rootDomain +} + +func TestAccAwsSesDomainIdentityVerification_basic(t *testing.T) { + rootDomain := testAccAwsSesDomainIdentityDomainFromEnv(t) + domain := fmt.Sprintf("tf-acc-%d.%s", acctest.RandInt(), rootDomain) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSESDomainIdentityDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSesDomainIdentityVerification_basic(rootDomain, domain), + Check: testAccCheckAwsSesDomainIdentityVerificationPassed("aws_ses_domain_identity_verification.test"), + }, + }, + }) +} + +func TestAccAwsSesDomainIdentityVerification_timeout(t *testing.T) { + domain := fmt.Sprintf( + "%s.terraformtesting.com", + acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSESDomainIdentityDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSesDomainIdentityVerification_timeout(domain), + ExpectError: regexp.MustCompile("Expected domain verification Success, but was in state Pending"), + }, + }, + }) +} + +func TestAccAwsSesDomainIdentityVerification_nonexistent(t *testing.T) { + domain := fmt.Sprintf( + "%s.terraformtesting.com", + acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSESDomainIdentityDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsSesDomainIdentityVerification_nonexistent(domain), + ExpectError: regexp.MustCompile(fmt.Sprintf("SES Domain Identity %s not found in AWS", domain)), + }, + }, + }) +} + +func testAccCheckAwsSesDomainIdentityVerificationPassed(n string) resource.TestCheckFunc { + return func(s *terraform.State) 
error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("SES Domain Identity not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("SES Domain Identity name not set") + } + + domain := rs.Primary.ID + awsClient := testAccProvider.Meta().(*AWSClient) + conn := awsClient.sesConn + + params := &ses.GetIdentityVerificationAttributesInput{ + Identities: []*string{ + aws.String(domain), + }, + } + + response, err := conn.GetIdentityVerificationAttributes(params) + if err != nil { + return err + } + + if response.VerificationAttributes[domain] == nil { + return fmt.Errorf("SES Domain Identity %s not found in AWS", domain) + } + + if aws.StringValue(response.VerificationAttributes[domain].VerificationStatus) != ses.VerificationStatusSuccess { + return fmt.Errorf("SES Domain Identity %s not successfully verified.", domain) + } + + expected := arn.ARN{ + AccountID: awsClient.accountid, + Partition: awsClient.partition, + Region: awsClient.region, + Resource: fmt.Sprintf("identity/%s", domain), + Service: "ses", + } + + if rs.Primary.Attributes["arn"] != expected.String() { + return fmt.Errorf("Incorrect ARN: expected %q, got %q", expected, rs.Primary.Attributes["arn"]) + } + + return nil + } +} + +func testAccAwsSesDomainIdentityVerification_basic(rootDomain string, domain string) string { + return fmt.Sprintf(` +data "aws_route53_zone" "test" { + name = "%s." + private_zone = false +} + +resource "aws_ses_domain_identity" "test" { + domain = "%s" +} + +resource "aws_route53_record" "domain_identity_verification" { + zone_id = "${data.aws_route53_zone.test.id}" + name = "_amazonses.${aws_ses_domain_identity.test.id}" + type = "TXT" + ttl = "600" + records = ["${aws_ses_domain_identity.test.verification_token}"] +} + +resource "aws_ses_domain_identity_verification" "test" { + domain = "${aws_ses_domain_identity.test.id}" + + depends_on = ["aws_route53_record.domain_identity_verification"] +} +`, rootDomain, domain) +} + +func testAccAwsSesDomainIdentityVerification_timeout(domain string) string { + return fmt.Sprintf(` +resource "aws_ses_domain_identity" "test" { + domain = "%s" +} + +resource "aws_ses_domain_identity_verification" "test" { + domain = "${aws_ses_domain_identity.test.id}" + timeouts { + create = "5s" + } +} +`, domain) +} + +func testAccAwsSesDomainIdentityVerification_nonexistent(domain string) string { + return fmt.Sprintf(` +resource "aws_ses_domain_identity_verification" "test" { + domain = "%s" +} +`, domain) +} diff --git a/aws/resource_aws_ses_domain_mail_from_test.go b/aws/resource_aws_ses_domain_mail_from_test.go index babb583096e..d421a97b34d 100644 --- a/aws/resource_aws_ses_domain_mail_from_test.go +++ b/aws/resource_aws_ses_domain_mail_from_test.go @@ -19,7 +19,7 @@ func TestAccAWSSESDomainMailFrom_basic(t *testing.T) { mailFromDomain2 := fmt.Sprintf("bounce2.%s", domain) resourceName := "aws_ses_domain_mail_from.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESDomainMailFromDestroy, @@ -57,7 +57,7 @@ func TestAccAWSSESDomainMailFrom_behaviorOnMxFailure(t *testing.T) { acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) resourceName := "aws_ses_domain_mail_from.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESDomainMailFromDestroy, diff --git 
a/aws/resource_aws_ses_event_destination_test.go b/aws/resource_aws_ses_event_destination_test.go index d781dfd2cf5..3bd689a37fc 100644 --- a/aws/resource_aws_ses_event_destination_test.go +++ b/aws/resource_aws_ses_event_destination_test.go @@ -23,14 +23,14 @@ func TestAccAWSSESEventDestination_basic(t *testing.T) { sesEventDstNameCw := fmt.Sprintf("tf_acc_event_dst_cloudwatch_%s", rString) sesEventDstNameSns := fmt.Sprintf("tf_acc_event_dst_sns_%s", rString) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESEventDestinationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSESEventDestinationConfig(bucketName, roleName, streamName, policyName, topicName, sesCfgSetName, sesEventDstNameKinesis, sesEventDstNameCw, sesEventDstNameSns), Check: resource.ComposeTestCheckFunc( diff --git a/aws/resource_aws_ses_identity_notification_topic.go b/aws/resource_aws_ses_identity_notification_topic.go new file mode 100644 index 00000000000..00adb0644e0 --- /dev/null +++ b/aws/resource_aws_ses_identity_notification_topic.go @@ -0,0 +1,145 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ses" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsSesNotificationTopic() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsSesNotificationTopicSet, + Read: resourceAwsSesNotificationTopicRead, + Update: resourceAwsSesNotificationTopicSet, + Delete: resourceAwsSesNotificationTopicDelete, + + Schema: map[string]*schema.Schema{ + "topic_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateArn, + }, + + "notification_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + ses.NotificationTypeBounce, + ses.NotificationTypeComplaint, + ses.NotificationTypeDelivery, + }, false), + }, + + "identity": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.NoZeroValues, + }, + }, + } +} + +func resourceAwsSesNotificationTopicSet(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).sesConn + notification := d.Get("notification_type").(string) + identity := d.Get("identity").(string) + + setOpts := &ses.SetIdentityNotificationTopicInput{ + Identity: aws.String(identity), + NotificationType: aws.String(notification), + } + + if v, ok := d.GetOk("topic_arn"); ok && v.(string) != "" { + setOpts.SnsTopic = aws.String(v.(string)) + } + + d.SetId(fmt.Sprintf("%s|%s", identity, notification)) + + log.Printf("[DEBUG] Setting SES Identity Notification Topic: %#v", setOpts) + + if _, err := conn.SetIdentityNotificationTopic(setOpts); err != nil { + return fmt.Errorf("Error setting SES Identity Notification Topic: %s", err) + } + + return resourceAwsSesNotificationTopicRead(d, meta) +} + +func resourceAwsSesNotificationTopicRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).sesConn + + identity, notificationType, err := decodeSesIdentityNotificationTopicId(d.Id()) + if err != nil { + return err + } + + getOpts := &ses.GetIdentityNotificationAttributesInput{ + Identities: []*string{aws.String(identity)}, + } + + log.Printf("[DEBUG] Reading SES Identity Notification Topic Attributes: %#v", getOpts) + 
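// The response below is keyed by identity; the switch that follows copies the
// Bounce, Complaint, or Delivery topic ARN into topic_arn for this resource's
// notification_type.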
+ response, err := conn.GetIdentityNotificationAttributes(getOpts) + + if err != nil { + return fmt.Errorf("Error reading SES Identity Notification Topic: %s", err) + } + + d.Set("topic_arn", "") + if response == nil { + return nil + } + + notificationAttributes, notificationAttributesOk := response.NotificationAttributes[identity] + if !notificationAttributesOk { + return nil + } + + switch notificationType { + case ses.NotificationTypeBounce: + d.Set("topic_arn", notificationAttributes.BounceTopic) + case ses.NotificationTypeComplaint: + d.Set("topic_arn", notificationAttributes.ComplaintTopic) + case ses.NotificationTypeDelivery: + d.Set("topic_arn", notificationAttributes.DeliveryTopic) + } + + return nil +} + +func resourceAwsSesNotificationTopicDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).sesConn + + identity, notificationType, err := decodeSesIdentityNotificationTopicId(d.Id()) + if err != nil { + return err + } + + setOpts := &ses.SetIdentityNotificationTopicInput{ + Identity: aws.String(identity), + NotificationType: aws.String(notificationType), + SnsTopic: nil, + } + + log.Printf("[DEBUG] Deleting SES Identity Notification Topic: %#v", setOpts) + + if _, err := conn.SetIdentityNotificationTopic(setOpts); err != nil { + return fmt.Errorf("Error deleting SES Identity Notification Topic: %s", err) + } + + return resourceAwsSesNotificationTopicRead(d, meta) +} + +func decodeSesIdentityNotificationTopicId(id string) (string, string, error) { + parts := strings.Split(id, "|") + if len(parts) != 2 { + return "", "", fmt.Errorf("Unexpected format of ID (%q), expected IDENTITY|TYPE", id) + } + return parts[0], parts[1], nil +} diff --git a/aws/resource_aws_ses_identity_notification_topic_test.go b/aws/resource_aws_ses_identity_notification_topic_test.go new file mode 100644 index 00000000000..8eafd947cbc --- /dev/null +++ b/aws/resource_aws_ses_identity_notification_topic_test.go @@ -0,0 +1,129 @@ +package aws + +import ( + "fmt" + "log" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ses" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAwsSESIdentityNotificationTopic_basic(t *testing.T) { + domain := fmt.Sprintf( + "%s.terraformtesting.com", + acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + topicName := fmt.Sprintf("test-topic-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSESIdentityNotificationTopicDestroy, + Steps: []resource.TestStep{ + { + Config: fmt.Sprintf(testAccAwsSESIdentityNotificationTopicConfig_basic, domain), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSESIdentityNotificationTopicExists("aws_ses_identity_notification_topic.test"), + ), + }, + { + Config: fmt.Sprintf(testAccAwsSESIdentityNotificationTopicConfig_update, domain, topicName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSESIdentityNotificationTopicExists("aws_ses_identity_notification_topic.test"), + ), + }, + }, + }) +} + +func testAccCheckAwsSESIdentityNotificationTopicDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).sesConn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_ses_identity_notification_topic" { + continue + } + + identity := 
rs.Primary.Attributes["identity"] + params := &ses.GetIdentityNotificationAttributesInput{ + Identities: []*string{aws.String(identity)}, + } + + log.Printf("[DEBUG] Testing SES Identity Notification Topic Destroy: %#v", params) + + response, err := conn.GetIdentityNotificationAttributes(params) + if err != nil { + return err + } + + if response.NotificationAttributes[identity] != nil { + return fmt.Errorf("SES Identity Notification Topic %s still exists. Failing!", identity) + } + } + + return nil +} + +func testAccCheckAwsSESIdentityNotificationTopicExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("SES Identity Notification Topic not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("SES Identity Notification Topic identity not set") + } + + identity := rs.Primary.Attributes["identity"] + conn := testAccProvider.Meta().(*AWSClient).sesConn + + params := &ses.GetIdentityNotificationAttributesInput{ + Identities: []*string{aws.String(identity)}, + } + + log.Printf("[DEBUG] Testing SES Identity Notification Topic Exists: %#v", params) + + response, err := conn.GetIdentityNotificationAttributes(params) + if err != nil { + return err + } + + if response.NotificationAttributes[identity] == nil { + return fmt.Errorf("SES Identity Notification Topic %s not found in AWS", identity) + } + + return nil + } +} + +const testAccAwsSESIdentityNotificationTopicConfig_basic = ` +resource "aws_ses_identity_notification_topic" "test" { + identity = "${aws_ses_domain_identity.test.arn}" + notification_type = "Complaint" +} + +resource "aws_ses_domain_identity" "test" { + domain = "%s" +} +` +const testAccAwsSESIdentityNotificationTopicConfig_update = ` +resource "aws_ses_identity_notification_topic" "test" { + topic_arn = "${aws_sns_topic.test.arn}" + identity = "${aws_ses_domain_identity.test.arn}" + notification_type = "Complaint" +} + +resource "aws_ses_domain_identity" "test" { + domain = "%s" +} + +resource "aws_sns_topic" "test" { + name = "%s" +} +` diff --git a/aws/resource_aws_ses_receipt_filter.go b/aws/resource_aws_ses_receipt_filter.go index 2242d7ecac8..ab87dd21324 100644 --- a/aws/resource_aws_ses_receipt_filter.go +++ b/aws/resource_aws_ses_receipt_filter.go @@ -19,19 +19,19 @@ func resourceAwsSesReceiptFilter() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "cidr": &schema.Schema{ + "cidr": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "policy": &schema.Schema{ + "policy": { Type: schema.TypeString, Required: true, ForceNew: true, diff --git a/aws/resource_aws_ses_receipt_filter_test.go b/aws/resource_aws_ses_receipt_filter_test.go index 397d3f9a181..044e9819f6a 100644 --- a/aws/resource_aws_ses_receipt_filter_test.go +++ b/aws/resource_aws_ses_receipt_filter_test.go @@ -4,25 +4,36 @@ import ( "fmt" "testing" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ses" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccAWSSESReceiptFilter_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ - PreCheck: func() { - testAccPreCheck(t) - }, + resourceName := "aws_ses_receipt_filter.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + 
PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESReceiptFilterDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSSESReceiptFilterConfig, + { + Config: testAccAWSSESReceiptFilterConfig(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAwsSESReceiptFilterExists("aws_ses_receipt_filter.test"), + testAccCheckAwsSESReceiptFilterExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "cidr", "10.10.10.10"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "policy", "Block"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -40,17 +51,11 @@ func testAccCheckSESReceiptFilterDestroy(s *terraform.State) error { return err } - found := false for _, element := range response.Filters { - if *element.Name == "block-some-ip" { - found = true + if aws.StringValue(element.Name) == rs.Primary.ID { + return fmt.Errorf("SES Receipt Filter (%s) still exists", rs.Primary.ID) } } - - if found { - return fmt.Errorf("The receipt filter still exists") - } - } return nil @@ -75,25 +80,22 @@ func testAccCheckAwsSESReceiptFilterExists(n string) resource.TestCheckFunc { return err } - found := false for _, element := range response.Filters { - if *element.Name == "block-some-ip" { - found = true + if aws.StringValue(element.Name) == rs.Primary.ID { + return nil } } - if !found { - return fmt.Errorf("The receipt filter was not created") - } - - return nil + return fmt.Errorf("The receipt filter was not created") } } -const testAccAWSSESReceiptFilterConfig = ` +func testAccAWSSESReceiptFilterConfig(rName string) string { + return fmt.Sprintf(` resource "aws_ses_receipt_filter" "test" { - name = "block-some-ip" - cidr = "10.10.10.10" - policy = "Block" + cidr = "10.10.10.10" + name = %q + policy = "Block" +} +`, rName) } -` diff --git a/aws/resource_aws_ses_receipt_rule.go b/aws/resource_aws_ses_receipt_rule.go index 912620acd90..ba28eadb715 100644 --- a/aws/resource_aws_ses_receipt_rule.go +++ b/aws/resource_aws_ses_receipt_rule.go @@ -5,12 +5,14 @@ import ( "fmt" "log" "sort" + "strings" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ses" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsSesReceiptRule() *schema.Resource { @@ -19,6 +21,9 @@ func resourceAwsSesReceiptRule() *schema.Resource { Update: resourceAwsSesReceiptRuleUpdate, Read: resourceAwsSesReceiptRuleRead, Delete: resourceAwsSesReceiptRuleDelete, + Importer: &schema.ResourceImporter{ + State: resourceAwsSesReceiptRuleImport, + }, Schema: map[string]*schema.Schema{ "name": { @@ -225,8 +230,9 @@ func resourceAwsSesReceiptRule() *schema.Resource { }, "position": { - Type: schema.TypeInt, - Required: true, + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), }, }, }, @@ -354,11 +360,27 @@ func resourceAwsSesReceiptRule() *schema.Resource { } } +func resourceAwsSesReceiptRuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + idParts := strings.Split(d.Id(), ":") + if len(idParts) != 2 || idParts[0] == "" || idParts[1] == "" { + return nil, fmt.Errorf("unexpected format of ID (%q), expected :", d.Id()) + } + + ruleSetName := idParts[0] + ruleName := idParts[1] + + 
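// Persist both halves of the colon-separated import ID and use the bare rule
// name as the resource ID.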
d.Set("rule_set_name", ruleSetName) + d.Set("name", ruleName) + d.SetId(ruleName) + + return []*schema.ResourceData{d}, nil +} + func resourceAwsSesReceiptRuleCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).sesConn createOpts := &ses.CreateReceiptRuleInput{ - Rule: buildReceiptRule(d, meta), + Rule: buildReceiptRule(d), RuleSetName: aws.String(d.Get("rule_set_name").(string)), } @@ -380,7 +402,7 @@ func resourceAwsSesReceiptRuleUpdate(d *schema.ResourceData, meta interface{}) e conn := meta.(*AWSClient).sesConn updateOpts := &ses.UpdateReceiptRuleInput{ - Rule: buildReceiptRule(d, meta), + Rule: buildReceiptRule(d), RuleSetName: aws.String(d.Get("rule_set_name").(string)), } @@ -596,7 +618,7 @@ func resourceAwsSesReceiptRuleDelete(d *schema.ResourceData, meta interface{}) e return nil } -func buildReceiptRule(d *schema.ResourceData, meta interface{}) *ses.ReceiptRule { +func buildReceiptRule(d *schema.ResourceData) *ses.ReceiptRule { receiptRule := &ses.ReceiptRule{ Name: aws.String(d.Get("name").(string)), } @@ -683,9 +705,15 @@ func buildReceiptRule(d *schema.ResourceData, meta interface{}) *ses.ReceiptRule elem := element.(map[string]interface{}) s3Action := &ses.S3Action{ - BucketName: aws.String(elem["bucket_name"].(string)), - KmsKeyArn: aws.String(elem["kms_key_arn"].(string)), - ObjectKeyPrefix: aws.String(elem["object_key_prefix"].(string)), + BucketName: aws.String(elem["bucket_name"].(string)), + } + + if elem["kms_key_arn"] != "" { + s3Action.KmsKeyArn = aws.String(elem["kms_key_arn"].(string)) + } + + if elem["object_key_prefix"] != "" { + s3Action.ObjectKeyPrefix = aws.String(elem["object_key_prefix"].(string)) } if elem["topic_arn"] != "" { diff --git a/aws/resource_aws_ses_receipt_rule_set.go b/aws/resource_aws_ses_receipt_rule_set.go index dfaf98cf88b..932d42acb77 100644 --- a/aws/resource_aws_ses_receipt_rule_set.go +++ b/aws/resource_aws_ses_receipt_rule_set.go @@ -19,7 +19,7 @@ func resourceAwsSesReceiptRuleSet() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "rule_set_name": &schema.Schema{ + "rule_set_name": { Type: schema.TypeString, Required: true, ForceNew: true, diff --git a/aws/resource_aws_ses_receipt_rule_set_test.go b/aws/resource_aws_ses_receipt_rule_set_test.go index 8fe767cfc9c..76623ce9646 100644 --- a/aws/resource_aws_ses_receipt_rule_set_test.go +++ b/aws/resource_aws_ses_receipt_rule_set_test.go @@ -5,26 +5,33 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ses" + "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" ) func TestAccAWSSESReceiptRuleSet_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ - PreCheck: func() { - testAccPreCheck(t) - }, + resourceName := "aws_ses_receipt_rule_set.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESReceiptRuleSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSSESReceiptRuleSetConfig, + { + Config: testAccAWSSESReceiptRuleSetConfig(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAwsSESReceiptRuleSetExists("aws_ses_receipt_rule_set.test"), + testAccCheckAwsSESReceiptRuleSetExists(resourceName), + resource.TestCheckResourceAttr(resourceName, 
"rule_set_name", rName), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -38,20 +45,20 @@ func testAccCheckSESReceiptRuleSetDestroy(s *terraform.State) error { } params := &ses.DescribeReceiptRuleSetInput{ - RuleSetName: aws.String("just-a-test"), + RuleSetName: aws.String(rs.Primary.ID), } _, err := conn.DescribeReceiptRuleSet(params) - if err == nil { - return fmt.Errorf("Receipt rule set %s still exists. Failing!", rs.Primary.ID) + + if isAWSErr(err, ses.ErrCodeRuleSetDoesNotExistException, "") { + continue } - // Verify the error is what we want - _, ok := err.(awserr.Error) - if !ok { + if err != nil { return err } + return fmt.Errorf("SES Receipt Rule Set (%s) still exists", rs.Primary.ID) } return nil @@ -72,7 +79,7 @@ func testAccCheckAwsSESReceiptRuleSetExists(n string) resource.TestCheckFunc { conn := testAccProvider.Meta().(*AWSClient).sesConn params := &ses.DescribeReceiptRuleSetInput{ - RuleSetName: aws.String("just-a-test"), + RuleSetName: aws.String(rs.Primary.ID), } _, err := conn.DescribeReceiptRuleSet(params) @@ -84,8 +91,10 @@ func testAccCheckAwsSESReceiptRuleSetExists(n string) resource.TestCheckFunc { } } -const testAccAWSSESReceiptRuleSetConfig = ` +func testAccAWSSESReceiptRuleSetConfig(rName string) string { + return fmt.Sprintf(` resource "aws_ses_receipt_rule_set" "test" { - rule_set_name = "just-a-test" + rule_set_name = %q +} +`, rName) } -` diff --git a/aws/resource_aws_ses_receipt_rule_test.go b/aws/resource_aws_ses_receipt_rule_test.go index a443ef939f1..57a48f149b5 100644 --- a/aws/resource_aws_ses_receipt_rule_test.go +++ b/aws/resource_aws_ses_receipt_rule_test.go @@ -14,55 +14,97 @@ import ( ) func TestAccAWSSESReceiptRule_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + rInt := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESReceiptRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSSESReceiptRuleBasicConfig, + { + Config: testAccAWSSESReceiptRuleBasicConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAwsSESReceiptRuleExists("aws_ses_receipt_rule.basic"), ), }, + { + ResourceName: "aws_ses_receipt_rule.basic", + ImportState: true, + ImportStateIdFunc: testAccAwsSesReceiptRuleImportStateIdFunc("aws_ses_receipt_rule.basic"), + }, + }, + }) +} + +func TestAccAWSSESReceiptRule_s3Action(t *testing.T) { + rInt := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckSESReceiptRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSESReceiptRuleS3ActionConfig(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsSESReceiptRuleExists("aws_ses_receipt_rule.basic"), + ), + }, + { + ResourceName: "aws_ses_receipt_rule.basic", + ImportState: true, + ImportStateIdFunc: testAccAwsSesReceiptRuleImportStateIdFunc("aws_ses_receipt_rule.basic"), + }, }, }) } func TestAccAWSSESReceiptRule_order(t *testing.T) { - resource.Test(t, resource.TestCase{ + rInt := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESReceiptRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSSESReceiptRuleOrderConfig, + { + Config: testAccAWSSESReceiptRuleOrderConfig(rInt), Check: 
resource.ComposeTestCheckFunc( testAccCheckAwsSESReceiptRuleOrder("aws_ses_receipt_rule.second"), ), }, + { + ResourceName: "aws_ses_receipt_rule.second", + ImportState: true, + ImportStateIdFunc: testAccAwsSesReceiptRuleImportStateIdFunc("aws_ses_receipt_rule.second"), + }, }, }) } func TestAccAWSSESReceiptRule_actions(t *testing.T) { - resource.Test(t, resource.TestCase{ + rInt := acctest.RandInt() + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSESReceiptRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSSESReceiptRuleActionsConfig, + { + Config: testAccAWSSESReceiptRuleActionsConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAwsSESReceiptRuleActions("aws_ses_receipt_rule.actions"), ), }, + { + ResourceName: "aws_ses_receipt_rule.actions", + ImportState: true, + ImportStateIdFunc: testAccAwsSesReceiptRuleImportStateIdFunc("aws_ses_receipt_rule.actions"), + }, }, }) } @@ -111,8 +153,8 @@ func testAccCheckAwsSESReceiptRuleExists(n string) resource.TestCheckFunc { conn := testAccProvider.Meta().(*AWSClient).sesConn params := &ses.DescribeReceiptRuleInput{ - RuleName: aws.String("basic"), - RuleSetName: aws.String(fmt.Sprintf("test-me-%d", srrsRandomInt)), + RuleName: aws.String(rs.Primary.Attributes["name"]), + RuleSetName: aws.String(rs.Primary.Attributes["rule_set_name"]), } response, err := conn.DescribeReceiptRule(params) @@ -140,6 +182,17 @@ func testAccCheckAwsSESReceiptRuleExists(n string) resource.TestCheckFunc { } } +func testAccAwsSesReceiptRuleImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("Not Found: %s", resourceName) + } + + return fmt.Sprintf("%s:%s", rs.Primary.Attributes["rule_set_name"], rs.Primary.Attributes["name"]), nil + } +} + func testAccCheckAwsSESReceiptRuleOrder(n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -154,7 +207,7 @@ func testAccCheckAwsSESReceiptRuleOrder(n string) resource.TestCheckFunc { conn := testAccProvider.Meta().(*AWSClient).sesConn params := &ses.DescribeReceiptRuleSetInput{ - RuleSetName: aws.String(fmt.Sprintf("test-me-%d", srrsRandomInt)), + RuleSetName: aws.String(rs.Primary.Attributes["rule_set_name"]), } response, err := conn.DescribeReceiptRuleSet(params) @@ -186,8 +239,8 @@ func testAccCheckAwsSESReceiptRuleActions(n string) resource.TestCheckFunc { conn := testAccProvider.Meta().(*AWSClient).sesConn params := &ses.DescribeReceiptRuleInput{ - RuleName: aws.String("actions4"), - RuleSetName: aws.String(fmt.Sprintf("test-me-%d", srrsRandomInt)), + RuleName: aws.String(rs.Primary.Attributes["name"]), + RuleSetName: aws.String(rs.Primary.Attributes["rule_set_name"]), } response, err := conn.DescribeReceiptRule(params) @@ -228,8 +281,8 @@ func testAccCheckAwsSESReceiptRuleActions(n string) resource.TestCheckFunc { } } -var srrsRandomInt = acctest.RandInt() -var testAccAWSSESReceiptRuleBasicConfig = fmt.Sprintf(` +func testAccAWSSESReceiptRuleBasicConfig(rInt int) string { + return fmt.Sprintf(` resource "aws_ses_receipt_rule_set" "test" { rule_set_name = "test-me-%d" } @@ -242,9 +295,39 @@ resource "aws_ses_receipt_rule" "basic" { scan_enabled = true tls_policy = "Require" } -`, srrsRandomInt) +`, rInt) +} -var testAccAWSSESReceiptRuleOrderConfig = fmt.Sprintf(` +func 
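`testAccAwsSesReceiptRuleImportStateIdFunc` above builds the composite `rule_set_name:name` ID that the new importer expects. A sketch of wrapping that idea into a reusable import step; the helper name is hypothetical, and the diff itself wires the function literal directly into each `TestStep`:

```go
package aws

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/resource"
	"github.com/hashicorp/terraform/terraform"
)

// importByRuleSetAndName returns an import TestStep whose ID is composed
// from the rule set name and rule name recorded in the prior step's state,
// matching the "<rule_set_name>:<name>" format the importer parses.
func importByRuleSetAndName(resourceName string) resource.TestStep {
	return resource.TestStep{
		ResourceName: resourceName,
		ImportState:  true,
		ImportStateIdFunc: func(s *terraform.State) (string, error) {
			rs, ok := s.RootModule().Resources[resourceName]
			if !ok {
				return "", fmt.Errorf("not found: %s", resourceName)
			}
			return fmt.Sprintf("%s:%s", rs.Primary.Attributes["rule_set_name"], rs.Primary.Attributes["name"]), nil
		},
	}
}
```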
testAccAWSSESReceiptRuleS3ActionConfig(rInt int) string { + return fmt.Sprintf(` +resource "aws_ses_receipt_rule_set" "test" { + rule_set_name = "test-me-%d" +} + +resource "aws_s3_bucket" "emails" { + bucket = "ses-terraform-emails-%d" + acl = "public-read-write" + force_destroy = "true" +} + +resource "aws_ses_receipt_rule" "basic" { + name = "basic" + rule_set_name = "${aws_ses_receipt_rule_set.test.rule_set_name}" + recipients = ["test@example.com"] + enabled = true + scan_enabled = true + tls_policy = "Require" + + s3_action { + bucket_name = "${aws_s3_bucket.emails.id}" + position = 1 + } +} +`, rInt, rInt) +} + +func testAccAWSSESReceiptRuleOrderConfig(rInt int) string { + return fmt.Sprintf(` resource "aws_ses_receipt_rule_set" "test" { rule_set_name = "test-me-%d" } @@ -259,13 +342,11 @@ resource "aws_ses_receipt_rule" "first" { name = "first" rule_set_name = "${aws_ses_receipt_rule_set.test.rule_set_name}" } -`, srrsRandomInt) - -var testAccAWSSESReceiptRuleActionsConfig = fmt.Sprintf(` -resource "aws_s3_bucket" "emails" { - bucket = "ses-terraform-emails" +`, rInt) } +func testAccAWSSESReceiptRuleActionsConfig(rInt int) string { + return fmt.Sprintf(` resource "aws_ses_receipt_rule_set" "test" { rule_set_name = "test-me-%d" } @@ -291,4 +372,5 @@ resource "aws_ses_receipt_rule" "actions" { position = 3 } } -`, srrsRandomInt) +`, rInt) +} diff --git a/aws/resource_aws_ses_template.go b/aws/resource_aws_ses_template.go index cecf7c28a81..326a6d0a3c6 100644 --- a/aws/resource_aws_ses_template.go +++ b/aws/resource_aws_ses_template.go @@ -31,7 +31,7 @@ func resourceAwsSesTemplate() *schema.Resource { "html": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(512000), + ValidateFunc: validation.StringLenBetween(0, 512000), }, "subject": { Type: schema.TypeString, @@ -40,7 +40,7 @@ func resourceAwsSesTemplate() *schema.Resource { "text": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateMaxLength(512000), + ValidateFunc: validation.StringLenBetween(0, 512000), }, }, } diff --git a/aws/resource_aws_ses_template_test.go b/aws/resource_aws_ses_template_test.go index b5ceaaec52b..81ee1735345 100644 --- a/aws/resource_aws_ses_template_test.go +++ b/aws/resource_aws_ses_template_test.go @@ -17,12 +17,12 @@ import ( func TestAccAWSSesTemplate_Basic(t *testing.T) { name := acctest.RandString(5) var template ses.Template - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSesTemplateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAwsSesTemplateResourceConfigBasic1(name), Check: resource.ComposeTestCheckFunc( testAccCheckSesTemplate("aws_ses_template.test", &template), @@ -40,12 +40,12 @@ func TestAccAWSSesTemplate_Update(t *testing.T) { t.Skipf("Skip due to SES.UpdateTemplate eventual consistency issues") name := acctest.RandString(5) var template ses.Template - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSesTemplateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAwsSesTemplateResourceConfigBasic1(name), Check: resource.ComposeTestCheckFunc( testAccCheckSesTemplate("aws_ses_template.test", &template), @@ -55,7 +55,7 @@ func TestAccAWSSesTemplate_Update(t *testing.T) { resource.TestCheckResourceAttr("aws_ses_template.test", 
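The configuration constants above become functions so that each test interpolates its own random suffix instead of sharing the package-level `srrsRandomInt`. A minimal standalone sketch of that config-builder pattern, mirroring the rule set configuration from the diff:

```go
package main

import "fmt"

// exampleRuleSetConfig renders a test configuration for a given random
// suffix; %d interpolates the caller's integer into the resource name.
func exampleRuleSetConfig(rInt int) string {
	return fmt.Sprintf(`
resource "aws_ses_receipt_rule_set" "test" {
  rule_set_name = "test-me-%d"
}
`, rInt)
}

func main() {
	fmt.Print(exampleRuleSetConfig(42))
}
```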
"text", ""), ), }, - resource.TestStep{ + { Config: testAccCheckAwsSesTemplateResourceConfigBasic2(name), Check: resource.ComposeTestCheckFunc( testAccCheckSesTemplate("aws_ses_template.test", &template), @@ -65,7 +65,7 @@ func TestAccAWSSesTemplate_Update(t *testing.T) { resource.TestCheckResourceAttr("aws_ses_template.test", "text", "text"), ), }, - resource.TestStep{ + { Config: testAccCheckAwsSesTemplateResourceConfigBasic3(name), Check: resource.ComposeTestCheckFunc( testAccCheckSesTemplate("aws_ses_template.test", &template), @@ -84,16 +84,16 @@ func TestAccAWSSesTemplate_Import(t *testing.T) { name := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckSesTemplateDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckAwsSesTemplateResourceConfigBasic1(name), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_sfn_activity.go b/aws/resource_aws_sfn_activity.go index 9a5e49d8302..9fac3616560 100644 --- a/aws/resource_aws_sfn_activity.go +++ b/aws/resource_aws_sfn_activity.go @@ -10,6 +10,7 @@ import ( "github.com/aws/aws-sdk-go/service/sfn" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsSfnActivity() *schema.Resource { @@ -26,7 +27,7 @@ func resourceAwsSfnActivity() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, - ValidateFunc: validateMaxLength(80), + ValidateFunc: validation.StringLenBetween(0, 80), }, "creation_date": { diff --git a/aws/resource_aws_sfn_activity_test.go b/aws/resource_aws_sfn_activity_test.go index 47a005717ae..f78d4c813f8 100644 --- a/aws/resource_aws_sfn_activity_test.go +++ b/aws/resource_aws_sfn_activity_test.go @@ -3,6 +3,7 @@ package aws import ( "fmt" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -10,13 +11,34 @@ import ( "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" - "time" ) +func TestAccAWSSfnActivity_importBasic(t *testing.T) { + resourceName := "aws_sfn_activity.foo" + rName := acctest.RandString(10) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSfnActivityDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSfnActivityBasicConfig(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSSfnActivity_basic(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSfnActivityDestroy, diff --git a/aws/resource_aws_sfn_state_machine.go b/aws/resource_aws_sfn_state_machine.go index 70d8fd08cc1..388cfa56292 100644 --- a/aws/resource_aws_sfn_state_machine.go +++ b/aws/resource_aws_sfn_state_machine.go @@ -8,9 +8,9 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/sfn" - "github.com/hashicorp/errwrap" 
"github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsSfnStateMachine() *schema.Resource { @@ -27,7 +27,7 @@ func resourceAwsSfnStateMachine() *schema.Resource { "definition": { Type: schema.TypeString, Required: true, - ValidateFunc: validateMaxLength(1024 * 1024), // 1048576 + ValidateFunc: validation.StringLenBetween(0, 1024*1024), // 1048576 }, "name": { @@ -87,7 +87,7 @@ func resourceAwsSfnStateMachineCreate(d *schema.ResourceData, meta interface{}) }) if err != nil { - return errwrap.Wrapf("Error creating Step Function State Machine: {{err}}", err) + return fmt.Errorf("Error creating Step Function State Machine: %s", err) } d.SetId(*activity.StateMachineArn) diff --git a/aws/resource_aws_sfn_state_machine_test.go b/aws/resource_aws_sfn_state_machine_test.go index 7e3da1a0a40..0cda0ebdf68 100644 --- a/aws/resource_aws_sfn_state_machine_test.go +++ b/aws/resource_aws_sfn_state_machine_test.go @@ -16,7 +16,7 @@ import ( func TestAccAWSSfnStateMachine_createUpdate(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSfnStateMachineDestroy, diff --git a/aws/resource_aws_simpledb_domain.go b/aws/resource_aws_simpledb_domain.go index 8450342e3bc..53f3024915c 100644 --- a/aws/resource_aws_simpledb_domain.go +++ b/aws/resource_aws_simpledb_domain.go @@ -21,7 +21,7 @@ func resourceAwsSimpleDBDomain() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -79,6 +79,5 @@ func resourceAwsSimpleDBDomainDelete(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Delete SimpleDB Domain failed: %s", err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_simpledb_domain_test.go b/aws/resource_aws_simpledb_domain_test.go index 0e4ee2e8e2a..5a35eb116a8 100644 --- a/aws/resource_aws_simpledb_domain_test.go +++ b/aws/resource_aws_simpledb_domain_test.go @@ -11,13 +11,34 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSSimpleDBDomain_importBasic(t *testing.T) { + resourceName := "aws_simpledb_domain.test_domain" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSimpleDBDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSimpleDBDomainConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSSimpleDBDomain_basic(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSimpleDBDomainDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSimpleDBDomainConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSimpleDBDomainExists("aws_simpledb_domain.test_domain"), diff --git a/aws/resource_aws_snapshot_create_volume_permission.go b/aws/resource_aws_snapshot_create_volume_permission.go index 58b7090cbdb..ed8e757c281 100644 --- a/aws/resource_aws_snapshot_create_volume_permission.go +++ b/aws/resource_aws_snapshot_create_volume_permission.go @@ -18,12 +18,12 @@ func 
resourceAwsSnapshotCreateVolumePermission() *schema.Resource { Delete: resourceAwsSnapshotCreateVolumePermissionDelete, Schema: map[string]*schema.Schema{ - "snapshot_id": &schema.Schema{ + "snapshot_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "account_id": &schema.Schema{ + "account_id": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -51,7 +51,7 @@ func resourceAwsSnapshotCreateVolumePermissionCreate(d *schema.ResourceData, met Attribute: aws.String("createVolumePermission"), CreateVolumePermission: &ec2.CreateVolumePermissionModifications{ Add: []*ec2.CreateVolumePermission{ - &ec2.CreateVolumePermission{UserId: aws.String(account_id)}, + {UserId: aws.String(account_id)}, }, }, }) @@ -94,7 +94,7 @@ func resourceAwsSnapshotCreateVolumePermissionDelete(d *schema.ResourceData, met Attribute: aws.String("createVolumePermission"), CreateVolumePermission: &ec2.CreateVolumePermissionModifications{ Remove: []*ec2.CreateVolumePermission{ - &ec2.CreateVolumePermission{UserId: aws.String(account_id)}, + {UserId: aws.String(account_id)}, }, }, }) diff --git a/aws/resource_aws_snapshot_create_volume_permission_test.go b/aws/resource_aws_snapshot_create_volume_permission_test.go index 53691350cf1..fbebe821025 100644 --- a/aws/resource_aws_snapshot_create_volume_permission_test.go +++ b/aws/resource_aws_snapshot_create_volume_permission_test.go @@ -11,11 +11,11 @@ import ( func TestAccAWSSnapshotCreateVolumePermission_Basic(t *testing.T) { var snapshotId, accountId string - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ Providers: testAccProviders, Steps: []resource.TestStep{ // Scaffold everything - resource.TestStep{ + { Config: testAccAWSSnapshotCreateVolumePermissionConfig(true), Check: resource.ComposeTestCheckFunc( testCheckResourceGetAttr("aws_ebs_snapshot.example_snapshot", "id", &snapshotId), @@ -24,7 +24,7 @@ func TestAccAWSSnapshotCreateVolumePermission_Basic(t *testing.T) { ), }, // Drop just create volume permission to test destruction - resource.TestStep{ + { Config: testAccAWSSnapshotCreateVolumePermissionConfig(false), Check: resource.ComposeTestCheckFunc( testAccAWSSnapshotCreateVolumePermissionDestroyed(&accountId, &snapshotId), diff --git a/aws/resource_aws_sns_platform_application.go b/aws/resource_aws_sns_platform_application.go index 4fe82d8503d..c684e06cf86 100644 --- a/aws/resource_aws_sns_platform_application.go +++ b/aws/resource_aws_sns_platform_application.go @@ -138,7 +138,7 @@ func resourceAwsSnsPlatformApplicationUpdate(d *schema.ResourceData, meta interf attributes := make(map[string]*string) - for k, _ := range resourceAwsSnsPlatformApplication().Schema { + for k := range resourceAwsSnsPlatformApplication().Schema { if attrKey, ok := snsPlatformApplicationAttributeMap[k]; ok { if d.HasChange(k) { log.Printf("[DEBUG] Updating %s", attrKey) @@ -203,6 +203,11 @@ func resourceAwsSnsPlatformApplicationRead(d *schema.ResourceData, meta interfac }) if err != nil { + if isAWSErr(err, sns.ErrCodeNotFoundException, "") { + log.Printf("[WARN] SNS Platform Application (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } return err } diff --git a/aws/resource_aws_sns_platform_application_test.go b/aws/resource_aws_sns_platform_application_test.go index c44dce38956..164e9f3df3b 100644 --- a/aws/resource_aws_sns_platform_application_test.go +++ b/aws/resource_aws_sns_platform_application_test.go @@ -158,7 +158,7 @@ func TestAccAWSSnsPlatformApplication_basic(t *testing.T) { } 
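The platform application Read change above adopts the usual drift-handling convention: when the API reports the resource as missing, log a warning, clear the ID so Terraform plans a re-create, and return no error. A sketch of that convention using the public `awserr` interface rather than the provider's internal `isAWSErr` helper; the function name is illustrative:

```go
package aws

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/sns"
	"github.com/hashicorp/terraform/helper/schema"
)

// handleReadError removes a deleted platform application from state instead
// of failing the refresh; any other error is returned unchanged.
func handleReadError(d *schema.ResourceData, err error) error {
	if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == sns.ErrCodeNotFoundException {
		log.Printf("[WARN] SNS Platform Application (%s) not found, removing from state", d.Id())
		d.SetId("")
		return nil
	}
	return err
}
```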
t.Run(platform.Name, func(*testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSPlatformApplicationDestroy, @@ -224,7 +224,7 @@ func TestAccAWSSnsPlatformApplication_basicAttributes(t *testing.T) { t.Run(fmt.Sprintf("%s/%s", platform.Name, tc.AttributeKey), func(*testing.T) { name := fmt.Sprintf("tf-acc-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSPlatformApplicationDestroy, @@ -274,7 +274,7 @@ func TestAccAWSSnsPlatformApplication_iamRoleAttributes(t *testing.T) { iamRoleName2 := fmt.Sprintf("tf-acc-%d", acctest.RandInt()) name := fmt.Sprintf("tf-acc-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSPlatformApplicationDestroy, @@ -326,7 +326,7 @@ func TestAccAWSSnsPlatformApplication_snsTopicAttributes(t *testing.T) { snsTopicName2 := fmt.Sprintf("tf-acc-%d", acctest.RandInt()) name := fmt.Sprintf("tf-acc-%d", acctest.RandInt()) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSPlatformApplicationDestroy, diff --git a/aws/resource_aws_sns_sms_preferences.go b/aws/resource_aws_sns_sms_preferences.go new file mode 100644 index 00000000000..6892ec250be --- /dev/null +++ b/aws/resource_aws_sns_sms_preferences.go @@ -0,0 +1,166 @@ +package aws + +import ( + "fmt" + "log" + "strconv" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/sns" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func validateMonthlySpend(v interface{}, k string) (ws []string, errors []error) { + vInt, _ := strconv.Atoi(v.(string)) + if vInt < 0 { + errors = append(errors, fmt.Errorf("Error setting SMS preferences: monthly spend limit value [%d] must be >= 0!", vInt)) + } + return +} + +func validateDeliverySamplingRate(v interface{}, k string) (ws []string, errors []error) { + vInt, _ := strconv.Atoi(v.(string)) + if vInt < 0 || vInt > 100 { + errors = append(errors, fmt.Errorf("Error setting SMS preferences: default percentage of success to sample value [%d] must be between 0 and 100!", vInt)) + } + return +} + +func resourceAwsSnsSmsPreferences() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsSnsSmsPreferencesSet, + Read: resourceAwsSnsSmsPreferencesGet, + Update: resourceAwsSnsSmsPreferencesSet, + Delete: resourceAwsSnsSmsPreferencesDelete, + + Schema: map[string]*schema.Schema{ + "monthly_spend_limit": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateMonthlySpend, + }, + + "delivery_status_iam_role_arn": { + Type: schema.TypeString, + Optional: true, + }, + + "delivery_status_success_sampling_rate": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateDeliverySamplingRate, + }, + + "default_sender_id": { + Type: schema.TypeString, + Optional: true, + }, + + "default_sms_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"Promotional", "Transactional"}, false), + }, + + "usage_report_s3_bucket": { + 
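The two validators above treat the string-typed SMS attributes as integers by way of `strconv.Atoi`. A standalone sketch of the sampling-rate check; as an assumption beyond the diff, this version also rejects non-numeric input, whereas the original discards the `Atoi` error:

```go
package main

import (
	"fmt"
	"strconv"
)

// validateSamplingRate accepts only string values that parse as an integer
// between 0 and 100 inclusive.
func validateSamplingRate(v interface{}, k string) (ws []string, errors []error) {
	vInt, err := strconv.Atoi(v.(string))
	if err != nil {
		// Added here for illustration; the diff ignores the parse error.
		errors = append(errors, fmt.Errorf("%q must be an integer, got %q", k, v))
		return
	}
	if vInt < 0 || vInt > 100 {
		errors = append(errors, fmt.Errorf("%q must be between 0 and 100, got %d", k, vInt))
	}
	return
}

func main() {
	_, errs := validateSamplingRate("75", "delivery_status_success_sampling_rate")
	fmt.Println(len(errs)) // 0
	_, errs = validateSamplingRate("250", "delivery_status_success_sampling_rate")
	fmt.Println(len(errs)) // 1
}
```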
Type: schema.TypeString, + Optional: true, + }, + }, + } +} + +const resourceId = "aws_sns_sms_id" + +var smsAttributeMap = map[string]string{ + "monthly_spend_limit": "MonthlySpendLimit", + "delivery_status_iam_role_arn": "DeliveryStatusIAMRole", + "delivery_status_success_sampling_rate": "DeliveryStatusSuccessSamplingRate", + "default_sender_id": "DefaultSenderID", + "default_sms_type": "DefaultSMSType", + "usage_report_s3_bucket": "UsageReportS3Bucket", +} + +var smsAttributeDefaultValues = map[string]string{ + "monthly_spend_limit": "", + "delivery_status_iam_role_arn": "", + "delivery_status_success_sampling_rate": "", + "default_sender_id": "", + "default_sms_type": "", + "usage_report_s3_bucket": "", +} + +func resourceAwsSnsSmsPreferencesSet(d *schema.ResourceData, meta interface{}) error { + snsconn := meta.(*AWSClient).snsconn + + log.Printf("[DEBUG] SNS Set SMS preferences") + + monthlySpendLimit := d.Get("monthly_spend_limit").(string) + deliveryStatusIamRoleArn := d.Get("delivery_status_iam_role_arn").(string) + deliveryStatusSuccessSamplingRate := d.Get("delivery_status_success_sampling_rate").(string) + defaultSenderId := d.Get("default_sender_id").(string) + defaultSmsType := d.Get("default_sms_type").(string) + usageReportS3Bucket := d.Get("usage_report_s3_bucket").(string) + + // Set preferences + params := &sns.SetSMSAttributesInput{ + Attributes: map[string]*string{ + "MonthlySpendLimit": aws.String(monthlySpendLimit), + "DeliveryStatusIAMRole": aws.String(deliveryStatusIamRoleArn), + "DeliveryStatusSuccessSamplingRate": aws.String(deliveryStatusSuccessSamplingRate), + "DefaultSenderID": aws.String(defaultSenderId), + "DefaultSMSType": aws.String(defaultSmsType), + "UsageReportS3Bucket": aws.String(usageReportS3Bucket), + }, + } + + if _, err := snsconn.SetSMSAttributes(params); err != nil { + return fmt.Errorf("Error setting SMS preferences: %s", err) + } + + d.SetId(resourceId) + return nil +} + +func resourceAwsSnsSmsPreferencesGet(d *schema.ResourceData, meta interface{}) error { + snsconn := meta.(*AWSClient).snsconn + + // Fetch ALL attributes + attrs, err := snsconn.GetSMSAttributes(&sns.GetSMSAttributesInput{}) + if err != nil { + return err + } + + // Reset with default values first + for tfAttrName, defValue := range smsAttributeDefaultValues { + d.Set(tfAttrName, defValue) + } + + // Apply existing settings + if attrs.Attributes != nil && len(attrs.Attributes) > 0 { + attrmap := attrs.Attributes + for tfAttrName, snsAttrName := range smsAttributeMap { + d.Set(tfAttrName, attrmap[snsAttrName]) + } + } + + return nil +} + +func resourceAwsSnsSmsPreferencesDelete(d *schema.ResourceData, meta interface{}) error { + snsconn := meta.(*AWSClient).snsconn + + // Reset the attributes to their default value + attrs := map[string]*string{} + for tfAttrName, defValue := range smsAttributeDefaultValues { + attrs[smsAttributeMap[tfAttrName]] = &defValue + } + + params := &sns.SetSMSAttributesInput{Attributes: attrs} + if _, err := snsconn.SetSMSAttributes(params); err != nil { + return fmt.Errorf("Error resetting SMS preferences: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_sns_sms_preferences_test.go b/aws/resource_aws_sns_sms_preferences_test.go new file mode 100644 index 00000000000..24f56176ad8 --- /dev/null +++ b/aws/resource_aws_sns_sms_preferences_test.go @@ -0,0 +1,194 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/sns" + 
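In the delete function above, `attrs[smsAttributeMap[tfAttrName]] = &defValue` takes the address of the range variable, so every map entry points at the same string; because every default is the empty string the reset still behaves as intended, but copying the value is the more defensive pattern. A standalone sketch of building the reset map with `aws.String` instead:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
)

func main() {
	// Illustrative subset of the default values being reset.
	defaults := map[string]string{
		"MonthlySpendLimit": "",
		"DefaultSMSType":    "",
	}

	attrs := map[string]*string{}
	for name, value := range defaults {
		// aws.String copies the loop variable, so each entry gets its own pointer.
		attrs[name] = aws.String(value)
	}

	for name, value := range attrs {
		fmt.Printf("%s=%q\n", name, aws.StringValue(value))
	}
}
```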
"github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +// The preferences are account-wide, so the tests must be serialized +func TestAccAWSSNSSMSPreferences(t *testing.T) { + testCases := map[string]func(t *testing.T){ + "almostAll": testAccAWSSNSSMSPreferences_almostAll, + "defaultSMSType": testAccAWSSNSSMSPreferences_defaultSMSType, + "deliveryRole": testAccAWSSNSSMSPreferences_deliveryRole, + "empty": testAccAWSSNSSMSPreferences_empty, + } + + for name, tc := range testCases { + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + }) + } +} + +func testAccAWSSNSSMSPreferences_empty(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSNSSMSPrefsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSNSSMSPreferencesConfig_empty, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "monthly_spend_limit"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "delivery_status_iam_role_arn"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "delivery_status_success_sampling_rate"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "default_sender_id"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "default_sms_type"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "usage_report_s3_bucket"), + ), + }, + }, + }) +} + +func testAccAWSSNSSMSPreferences_defaultSMSType(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSNSSMSPrefsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSNSSMSPreferencesConfig_defSMSType, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "monthly_spend_limit"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "delivery_status_iam_role_arn"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "delivery_status_success_sampling_rate"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "default_sender_id"), + resource.TestCheckResourceAttr("aws_sns_sms_preferences.test_pref", "default_sms_type", "Transactional"), + resource.TestCheckNoResourceAttr("aws_sns_sms_preferences.test_pref", "usage_report_s3_bucket"), + ), + }, + }, + }) +} + +func testAccAWSSNSSMSPreferences_almostAll(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSNSSMSPrefsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSNSSMSPreferencesConfig_almostAll, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_sns_sms_preferences.test_pref", "monthly_spend_limit", "1"), + resource.TestCheckResourceAttr("aws_sns_sms_preferences.test_pref", "default_sms_type", "Transactional"), + resource.TestCheckResourceAttr("aws_sns_sms_preferences.test_pref", "usage_report_s3_bucket", "some-bucket"), + ), + }, + }, + }) +} + +func testAccAWSSNSSMSPreferences_deliveryRole(t *testing.T) { + arnRole := regexp.MustCompile(`^arn:aws:iam::\d+:role/test_smsdelivery_role$`) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSSNSSMSPrefsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSNSSMSPreferencesConfig_deliveryRole, + Check: resource.ComposeTestCheckFunc( + resource.TestMatchResourceAttr("aws_sns_sms_preferences.test_pref", "delivery_status_iam_role_arn", arnRole), + resource.TestCheckResourceAttr("aws_sns_sms_preferences.test_pref", "delivery_status_success_sampling_rate", "75"), + ), + }, + }, + }) +} + +func testAccCheckAWSSNSSMSPrefsDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_sns_sms_preferences" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).snsconn + attrs, err := conn.GetSMSAttributes(&sns.GetSMSAttributesInput{}) + if err != nil { + return fmt.Errorf("error getting SMS attributes: %s", err) + } + if attrs == nil || len(attrs.Attributes) == 0 { + return nil + } + + for attrName, attrValue := range attrs.Attributes { + if aws.StringValue(attrValue) != "" { + return fmt.Errorf("expected SMS attribute %q to be empty, but received: %s", attrName, aws.StringValue(attrValue)) + } + } + + return nil + } + + return nil +} + +const testAccAWSSNSSMSPreferencesConfig_empty = ` +resource "aws_sns_sms_preferences" "test_pref" {} +` +const testAccAWSSNSSMSPreferencesConfig_defSMSType = ` +resource "aws_sns_sms_preferences" "test_pref" { + default_sms_type = "Transactional" +} +` +const testAccAWSSNSSMSPreferencesConfig_almostAll = ` +resource "aws_sns_sms_preferences" "test_pref" { + monthly_spend_limit = "1", + default_sms_type = "Transactional", + usage_report_s3_bucket = "some-bucket", +} +` +const testAccAWSSNSSMSPreferencesConfig_deliveryRole = ` +resource "aws_iam_role" "test_smsdelivery_role" { + name = "test_smsdelivery_role" + path = "/" + assume_role_policy = < 0 { - attrHash := attributeOutput.Attributes - resource := *resourceAwsSnsTopicSubscription() + if attributeOutput == nil || len(attributeOutput.Attributes) == 0 { + return fmt.Errorf("error reading SNS Topic Subscription (%s) attributes: no attributes found", d.Id()) + } - for iKey, oKey := range SNSSubscriptionAttributeMap { - log.Printf("[DEBUG] Reading %s => %s", iKey, oKey) + d.Set("arn", attributeOutput.Attributes["SubscriptionArn"]) + d.Set("delivery_policy", attributeOutput.Attributes["DeliveryPolicy"]) + d.Set("endpoint", attributeOutput.Attributes["Endpoint"]) + d.Set("filter_policy", attributeOutput.Attributes["FilterPolicy"]) + d.Set("protocol", attributeOutput.Attributes["Protocol"]) - if attrHash[oKey] != nil { - if resource.Schema[iKey] != nil { - var value string - value = *attrHash[oKey] - log.Printf("[DEBUG] Reading %s => %s -> %s", iKey, oKey, value) - d.Set(iKey, value) - } - } - } + d.Set("raw_message_delivery", false) + if v, ok := attributeOutput.Attributes["RawMessageDelivery"]; ok && aws.StringValue(v) == "true" { + d.Set("raw_message_delivery", true) } + d.Set("topic_arn", attributeOutput.Attributes["TopicArn"]) + return nil } @@ -337,3 +321,108 @@ func obfuscateEndpoint(endpoint string) string { } return obfuscatedEndpoint } + +func snsSubscriptionAttributeUpdate(snsconn *sns.SNS, subscriptionArn, attributeName, attributeValue string) error { + req := &sns.SetSubscriptionAttributesInput{ + SubscriptionArn: aws.String(subscriptionArn), + AttributeName: aws.String(attributeName), + AttributeValue: aws.String(attributeValue), + } + _, err := snsconn.SetSubscriptionAttributes(req) + + if err != nil { + return fmt.Errorf("error setting subscription (%s) attribute (%s): %s", subscriptionArn, attributeName, err) + } + 
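The rewritten subscription Read above flattens `RawMessageDelivery`, which the SNS API returns as the string `"true"` or `"false"`, into the schema's boolean attribute by defaulting to `false` and only flipping it when the string compares equal to `"true"`. A standalone sketch of that conversion:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
)

// rawMessageDeliveryEnabled interprets the string-valued subscription
// attribute map the way the Read function above does.
func rawMessageDeliveryEnabled(attributes map[string]*string) bool {
	if v, ok := attributes["RawMessageDelivery"]; ok && aws.StringValue(v) == "true" {
		return true
	}
	return false
}

func main() {
	attrs := map[string]*string{"RawMessageDelivery": aws.String("true")}
	fmt.Println(rawMessageDeliveryEnabled(attrs)) // true
	fmt.Println(rawMessageDeliveryEnabled(nil))   // false
}
```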
return nil +} + +type snsTopicSubscriptionDeliveryPolicy struct { + Guaranteed bool `json:"guaranteed,omitempty"` + HealthyRetryPolicy *snsTopicSubscriptionDeliveryPolicyHealthyRetryPolicy `json:"healthyRetryPolicy,omitempty"` + SicklyRetryPolicy *snsTopicSubscriptionDeliveryPolicySicklyRetryPolicy `json:"sicklyRetryPolicy,omitempty"` + ThrottlePolicy *snsTopicSubscriptionDeliveryPolicyThrottlePolicy `json:"throttlePolicy,omitempty"` +} + +func (s snsTopicSubscriptionDeliveryPolicy) String() string { + return awsutil.Prettify(s) +} + +func (s snsTopicSubscriptionDeliveryPolicy) GoString() string { + return s.String() +} + +type snsTopicSubscriptionDeliveryPolicyHealthyRetryPolicy struct { + BackoffFunction string `json:"backoffFunction,omitempty"` + MaxDelayTarget int `json:"maxDelayTarget,omitempty"` + MinDelayTarget int `json:"minDelayTarget,omitempty"` + NumMaxDelayRetries int `json:"numMaxDelayRetries,omitempty"` + NumMinDelayRetries int `json:"numMinDelayRetries,omitempty"` + NumNoDelayRetries int `json:"numNoDelayRetries,omitempty"` + NumRetries int `json:"numRetries,omitempty"` +} + +func (s snsTopicSubscriptionDeliveryPolicyHealthyRetryPolicy) String() string { + return awsutil.Prettify(s) +} + +func (s snsTopicSubscriptionDeliveryPolicyHealthyRetryPolicy) GoString() string { + return s.String() +} + +type snsTopicSubscriptionDeliveryPolicySicklyRetryPolicy struct { + BackoffFunction string `json:"backoffFunction,omitempty"` + MaxDelayTarget int `json:"maxDelayTarget,omitempty"` + MinDelayTarget int `json:"minDelayTarget,omitempty"` + NumMaxDelayRetries int `json:"numMaxDelayRetries,omitempty"` + NumMinDelayRetries int `json:"numMinDelayRetries,omitempty"` + NumNoDelayRetries int `json:"numNoDelayRetries,omitempty"` + NumRetries int `json:"numRetries,omitempty"` +} + +func (s snsTopicSubscriptionDeliveryPolicySicklyRetryPolicy) String() string { + return awsutil.Prettify(s) +} + +func (s snsTopicSubscriptionDeliveryPolicySicklyRetryPolicy) GoString() string { + return s.String() +} + +type snsTopicSubscriptionDeliveryPolicyThrottlePolicy struct { + MaxReceivesPerSecond int `json:"maxReceivesPerSecond,omitempty"` +} + +func (s snsTopicSubscriptionDeliveryPolicyThrottlePolicy) String() string { + return awsutil.Prettify(s) +} + +func (s snsTopicSubscriptionDeliveryPolicyThrottlePolicy) GoString() string { + return s.String() +} + +func suppressEquivalentSnsTopicSubscriptionDeliveryPolicy(k, old, new string, d *schema.ResourceData) bool { + var deliveryPolicy snsTopicSubscriptionDeliveryPolicy + + if err := json.Unmarshal([]byte(old), &deliveryPolicy); err != nil { + log.Printf("[WARN] Unable to unmarshal SNS Topic Subscription delivery policy JSON: %s", err) + return false + } + + normalizedDeliveryPolicy, err := json.Marshal(deliveryPolicy) + + if err != nil { + log.Printf("[WARN] Unable to marshal SNS Topic Subscription delivery policy back to JSON: %s", err) + return false + } + + ob := bytes.NewBufferString("") + if err := json.Compact(ob, normalizedDeliveryPolicy); err != nil { + return false + } + + nb := bytes.NewBufferString("") + if err := json.Compact(nb, []byte(new)); err != nil { + return false + } + + return jsonBytesEqual(ob.Bytes(), nb.Bytes()) +} diff --git a/aws/resource_aws_sns_topic_subscription_test.go b/aws/resource_aws_sns_topic_subscription_test.go index 6988057dd84..7704a71ccf8 100644 --- a/aws/resource_aws_sns_topic_subscription_test.go +++ b/aws/resource_aws_sns_topic_subscription_test.go @@ -1,7 +1,10 @@ package aws import ( + "encoding/json" "fmt" + 
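`suppressEquivalentSnsTopicSubscriptionDeliveryPolicy` above exists so that the verbose, null-padded JSON returned by the SNS API does not show a perpetual diff against the terse JSON users write. A simplified standalone sketch of the same idea, comparing both documents through a typed struct with `reflect.DeepEqual` instead of the provider's `jsonBytesEqual` helper:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

type healthyRetryPolicy struct {
	MinDelayTarget int `json:"minDelayTarget,omitempty"`
	MaxDelayTarget int `json:"maxDelayTarget,omitempty"`
	NumRetries     int `json:"numRetries,omitempty"`
}

type deliveryPolicy struct {
	Guaranteed         bool                `json:"guaranteed,omitempty"`
	HealthyRetryPolicy *healthyRetryPolicy `json:"healthyRetryPolicy,omitempty"`
}

// equivalentDeliveryPolicies reports whether two JSON documents describe the
// same policy once both are projected onto the typed struct, so explicit
// nulls and field ordering no longer matter.
func equivalentDeliveryPolicies(oldJSON, newJSON string) bool {
	var o, n deliveryPolicy
	if err := json.Unmarshal([]byte(oldJSON), &o); err != nil {
		return false
	}
	if err := json.Unmarshal([]byte(newJSON), &n); err != nil {
		return false
	}
	return reflect.DeepEqual(o, n)
}

func main() {
	apiValue := `{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20,"numRetries":5,"backoffFunction":null},"sicklyRetryPolicy":null,"guaranteed":false}`
	configValue := `{"healthyRetryPolicy":{"maxDelayTarget":20,"minDelayTarget":5,"numRetries":5}}`
	fmt.Println(equivalentDeliveryPolicies(apiValue, configValue)) // true
}
```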
"reflect" + "regexp" "strconv" "testing" @@ -13,12 +16,53 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestSuppressEquivalentSnsTopicSubscriptionDeliveryPolicy(t *testing.T) { + var testCases = []struct { + old string + new string + equivalent bool + }{ + { + old: `{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20,"numRetries":5,"numMaxDelayRetries":null,"numNoDelayRetries":null,"numMinDelayRetries":null,"backoffFunction":null},"sicklyRetryPolicy":null,"throttlePolicy":null,"guaranteed":false}`, + new: `{"healthyRetryPolicy":{"maxDelayTarget":20,"minDelayTarget":5,"numRetries":5}}`, + equivalent: true, + }, + { + old: `{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20,"numRetries":5,"numMaxDelayRetries":null,"numNoDelayRetries":null,"numMinDelayRetries":null,"backoffFunction":null},"sicklyRetryPolicy":null,"throttlePolicy":null,"guaranteed":false}`, + new: `{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20,"numRetries":5}}`, + equivalent: true, + }, + { + old: `{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20,"numRetries":5,"numMaxDelayRetries":null,"numNoDelayRetries":null,"numMinDelayRetries":null,"backoffFunction":null},"sicklyRetryPolicy":null,"throttlePolicy":null,"guaranteed":false}`, + new: `{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20,"numRetries":6}}`, + equivalent: false, + }, + { + old: `{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20,"numRetries":5,"numMaxDelayRetries":null,"numNoDelayRetries":null,"numMinDelayRetries":null,"backoffFunction":null},"sicklyRetryPolicy":null,"throttlePolicy":null,"guaranteed":false}`, + new: `{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20}}`, + equivalent: false, + }, + { + old: `{"healthyRetryPolicy":null,"sicklyRetryPolicy":null,"throttlePolicy":null,"guaranteed":true}`, + new: `{"guaranteed":true}`, + equivalent: true, + }, + } + + for i, tc := range testCases { + actual := suppressEquivalentSnsTopicSubscriptionDeliveryPolicy("", tc.old, tc.new, nil) + if actual != tc.equivalent { + t.Fatalf("Test Case %d: Got: %t Expected: %t", i, actual, tc.equivalent) + } + } +} + func TestAccAWSSNSTopicSubscription_basic(t *testing.T) { attributes := make(map[string]string) - + resourceName := "aws_sns_topic_subscription.test_subscription" ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicSubscriptionDestroy, @@ -26,41 +70,187 @@ func TestAccAWSSNSTopicSubscription_basic(t *testing.T) { { Config: testAccAWSSNSTopicSubscriptionConfig(ri), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), - testAccCheckAWSSNSTopicSubscriptionExists("aws_sns_topic_subscription.test_subscription"), + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "sns", regexp.MustCompile(fmt.Sprintf("terraform-test-topic-%d:.+", ri))), + resource.TestCheckResourceAttr(resourceName, "delivery_policy", ""), + resource.TestCheckResourceAttrPair(resourceName, "endpoint", "aws_sqs_queue.test_queue", "arn"), + resource.TestCheckResourceAttr(resourceName, "filter_policy", ""), + resource.TestCheckResourceAttr(resourceName, "protocol", "sqs"), + resource.TestCheckResourceAttr(resourceName, "raw_message_delivery", "false"), + resource.TestCheckResourceAttrPair(resourceName, 
"topic_arn", "aws_sns_topic.test_topic", "arn"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "confirmation_timeout_in_minutes", + "endpoint_auto_confirms", + }, + }, }, }) } func TestAccAWSSNSTopicSubscription_filterPolicy(t *testing.T) { + attributes := make(map[string]string) + resourceName := "aws_sns_topic_subscription.test_subscription" ri := acctest.RandInt() filterPolicy1 := `{"key1": ["val1"], "key2": ["val2"]}` filterPolicy2 := `{"key3": ["val3"], "key4": ["val4"]}` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicSubscriptionDestroy, Steps: []resource.TestStep{ { Config: testAccAWSSNSTopicSubscriptionConfig_filterPolicy(ri, strconv.Quote(filterPolicy1)), - Check: resource.TestCheckResourceAttr("aws_sns_topic_subscription.test_subscription", "filter_policy", filterPolicy1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + resource.TestCheckResourceAttr(resourceName, "filter_policy", filterPolicy1), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "confirmation_timeout_in_minutes", + "endpoint_auto_confirms", + }, }, + // Test attribute update { Config: testAccAWSSNSTopicSubscriptionConfig_filterPolicy(ri, strconv.Quote(filterPolicy2)), - Check: resource.TestCheckResourceAttr("aws_sns_topic_subscription.test_subscription", "filter_policy", filterPolicy2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + resource.TestCheckResourceAttr(resourceName, "filter_policy", filterPolicy2), + ), + }, + // Test attribute removal + { + Config: testAccAWSSNSTopicSubscriptionConfig(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + resource.TestCheckResourceAttr(resourceName, "filter_policy", ""), + ), }, }, }) } -func TestAccAWSSNSTopicSubscription_autoConfirmingEndpoint(t *testing.T) { + +func TestAccAWSSNSTopicSubscription_deliveryPolicy(t *testing.T) { + attributes := make(map[string]string) + resourceName := "aws_sns_topic_subscription.test_subscription" + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSNSTopicSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSNSTopicSubscriptionConfig_deliveryPolicy(ri, strconv.Quote(`{"healthyRetryPolicy":{"minDelayTarget":5,"maxDelayTarget":20,"numRetries": 5}}`)), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + testAccCheckAWSSNSTopicSubscriptionDeliveryPolicyAttribute(attributes, &snsTopicSubscriptionDeliveryPolicy{ + HealthyRetryPolicy: &snsTopicSubscriptionDeliveryPolicyHealthyRetryPolicy{ + MaxDelayTarget: 20, + MinDelayTarget: 5, + NumRetries: 5, + }, + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "confirmation_timeout_in_minutes", + "endpoint_auto_confirms", + }, + }, + // Test attribute update + { + Config: testAccAWSSNSTopicSubscriptionConfig_deliveryPolicy(ri, strconv.Quote(`{"healthyRetryPolicy":{"minDelayTarget":3,"maxDelayTarget":78,"numRetries": 
11}}`)), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + testAccCheckAWSSNSTopicSubscriptionDeliveryPolicyAttribute(attributes, &snsTopicSubscriptionDeliveryPolicy{ + HealthyRetryPolicy: &snsTopicSubscriptionDeliveryPolicyHealthyRetryPolicy{ + MaxDelayTarget: 78, + MinDelayTarget: 3, + NumRetries: 11, + }, + }), + ), + }, + // Test attribute removal + { + Config: testAccAWSSNSTopicSubscriptionConfig(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + resource.TestCheckResourceAttr(resourceName, "delivery_policy", ""), + ), + }, + }, + }) +} + +func TestAccAWSSNSTopicSubscription_rawMessageDelivery(t *testing.T) { attributes := make(map[string]string) + resourceName := "aws_sns_topic_subscription.test_subscription" + ri := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSNSTopicSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSNSTopicSubscriptionConfig_rawMessageDelivery(ri, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + resource.TestCheckResourceAttr(resourceName, "raw_message_delivery", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "confirmation_timeout_in_minutes", + "endpoint_auto_confirms", + }, + }, + // Test attribute update + { + Config: testAccAWSSNSTopicSubscriptionConfig_rawMessageDelivery(ri, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + resource.TestCheckResourceAttr(resourceName, "raw_message_delivery", "false"), + ), + }, + // Test attribute removal + { + Config: testAccAWSSNSTopicSubscriptionConfig(ri), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), + resource.TestCheckResourceAttr(resourceName, "raw_message_delivery", "false"), + ), + }, + }, + }) +} +func TestAccAWSSNSTopicSubscription_autoConfirmingEndpoint(t *testing.T) { + attributes := make(map[string]string) + resourceName := "aws_sns_topic_subscription.test_subscription" ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicSubscriptionDestroy, @@ -68,20 +258,28 @@ func TestAccAWSSNSTopicSubscription_autoConfirmingEndpoint(t *testing.T) { { Config: testAccAWSSNSTopicSubscriptionConfig_autoConfirmingEndpoint(ri), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), - testAccCheckAWSSNSTopicSubscriptionExists("aws_sns_topic_subscription.test_subscription"), + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "confirmation_timeout_in_minutes", + "endpoint_auto_confirms", + }, + }, }, }) } func TestAccAWSSNSTopicSubscription_autoConfirmingSecuredEndpoint(t *testing.T) { attributes := make(map[string]string) - + resourceName := "aws_sns_topic_subscription.test_subscription" ri := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: 
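The import steps in these subscription tests all skip `confirmation_timeout_in_minutes` and `endpoint_auto_confirms` during verification, since those arguments are handled client-side and are not echoed back by the SNS API. A sketch of an import step that ignores such write-only arguments; the helper name is hypothetical:

```go
package aws

import "github.com/hashicorp/terraform/helper/resource"

// subscriptionImportStep verifies imported state but skips arguments that
// the API cannot return, so the comparison only covers real attributes.
func subscriptionImportStep(resourceName string) resource.TestStep {
	return resource.TestStep{
		ResourceName:      resourceName,
		ImportState:       true,
		ImportStateVerify: true,
		ImportStateVerifyIgnore: []string{
			"confirmation_timeout_in_minutes",
			"endpoint_auto_confirms",
		},
	}
}
```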
func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicSubscriptionDestroy, @@ -89,10 +287,18 @@ func TestAccAWSSNSTopicSubscription_autoConfirmingSecuredEndpoint(t *testing.T) { Config: testAccAWSSNSTopicSubscriptionConfig_autoConfirmingSecuredEndpoint(ri, "john", "doe"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), - testAccCheckAWSSNSTopicSubscriptionExists("aws_sns_topic_subscription.test_subscription"), + testAccCheckAWSSNSTopicSubscriptionExists(resourceName, attributes), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "confirmation_timeout_in_minutes", + "endpoint_auto_confirms", + }, + }, }, }) } @@ -126,7 +332,7 @@ func testAccCheckAWSSNSTopicSubscriptionDestroy(s *terraform.State) error { return nil } -func testAccCheckAWSSNSTopicSubscriptionExists(n string) resource.TestCheckFunc { +func testAccCheckAWSSNSTopicSubscriptionExists(n string, attributes map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -142,7 +348,11 @@ func testAccCheckAWSSNSTopicSubscriptionExists(n string) resource.TestCheckFunc params := &sns.GetSubscriptionAttributesInput{ SubscriptionArn: aws.String(rs.Primary.ID), } - _, err := conn.GetSubscriptionAttributes(params) + output, err := conn.GetSubscriptionAttributes(params) + + for k, v := range output.Attributes { + attributes[k] = aws.StringValue(v) + } if err != nil { return err @@ -152,6 +362,27 @@ func testAccCheckAWSSNSTopicSubscriptionExists(n string) resource.TestCheckFunc } } +func testAccCheckAWSSNSTopicSubscriptionDeliveryPolicyAttribute(attributes map[string]string, expectedDeliveryPolicy *snsTopicSubscriptionDeliveryPolicy) resource.TestCheckFunc { + return func(s *terraform.State) error { + apiDeliveryPolicyJSONString, ok := attributes["DeliveryPolicy"] + + if !ok { + return fmt.Errorf("DeliveryPolicy attribute not found in attributes: %s", attributes) + } + + var apiDeliveryPolicy snsTopicSubscriptionDeliveryPolicy + if err := json.Unmarshal([]byte(apiDeliveryPolicyJSONString), &apiDeliveryPolicy); err != nil { + return fmt.Errorf("unable to unmarshal SNS Topic Subscription delivery policy JSON (%s): %s", apiDeliveryPolicyJSONString, err) + } + + if reflect.DeepEqual(apiDeliveryPolicy, *expectedDeliveryPolicy) { + return nil + } + + return fmt.Errorf("SNS Topic Subscription delivery policy did not match:\n\nReceived\n\n%s\n\nExpected\n\n%s\n\n", apiDeliveryPolicy, *expectedDeliveryPolicy) + } +} + func TestObfuscateEndpointPassword(t *testing.T) { checks := map[string]string{ "https://example.com/myroute": "https://example.com/myroute", @@ -170,17 +401,17 @@ func TestObfuscateEndpointPassword(t *testing.T) { func testAccAWSSNSTopicSubscriptionConfig(i int) string { return fmt.Sprintf(` resource "aws_sns_topic" "test_topic" { - name = "terraform-test-topic-%d" + name = "terraform-test-topic-%d" } resource "aws_sqs_queue" "test_queue" { - name = "terraform-subscription-test-queue-%d" + name = "terraform-subscription-test-queue-%d" } resource "aws_sns_topic_subscription" "test_subscription" { - topic_arn = "${aws_sns_topic.test_topic.arn}" - protocol = "sqs" - endpoint = "${aws_sqs_queue.test_queue.arn}" + topic_arn = "${aws_sns_topic.test_topic.arn}" + protocol = "sqs" + endpoint = "${aws_sqs_queue.test_queue.arn}" } `, i, i) } @@ -188,22 +419,60 @@ resource 
"aws_sns_topic_subscription" "test_subscription" { func testAccAWSSNSTopicSubscriptionConfig_filterPolicy(i int, policy string) string { return fmt.Sprintf(` resource "aws_sns_topic" "test_topic" { - name = "terraform-test-topic-%d" + name = "terraform-test-topic-%d" } resource "aws_sqs_queue" "test_queue" { - name = "terraform-subscription-test-queue-%d" + name = "terraform-subscription-test-queue-%d" } resource "aws_sns_topic_subscription" "test_subscription" { - topic_arn = "${aws_sns_topic.test_topic.arn}" - protocol = "sqs" - endpoint = "${aws_sqs_queue.test_queue.arn}" - filter_policy = %s - } + topic_arn = "${aws_sns_topic.test_topic.arn}" + protocol = "sqs" + endpoint = "${aws_sqs_queue.test_queue.arn}" + filter_policy = %s +} `, i, i, policy) } +func testAccAWSSNSTopicSubscriptionConfig_deliveryPolicy(i int, policy string) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "test_topic" { + name = "terraform-test-topic-%d" +} + +resource "aws_sqs_queue" "test_queue" { + name = "terraform-subscription-test-queue-%d" +} + +resource "aws_sns_topic_subscription" "test_subscription" { + delivery_policy = %s + endpoint = "${aws_sqs_queue.test_queue.arn}" + protocol = "sqs" + topic_arn = "${aws_sns_topic.test_topic.arn}" +} +`, i, i, policy) +} + +func testAccAWSSNSTopicSubscriptionConfig_rawMessageDelivery(i int, rawMessageDelivery bool) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "test_topic" { + name = "terraform-test-topic-%d" +} + +resource "aws_sqs_queue" "test_queue" { + name = "terraform-subscription-test-queue-%d" +} + +resource "aws_sns_topic_subscription" "test_subscription" { + endpoint = "${aws_sqs_queue.test_queue.arn}" + protocol = "sqs" + raw_message_delivery = %t + topic_arn = "${aws_sns_topic.test_topic.arn}" +} +`, i, i, rawMessageDelivery) +} + func testAccAWSSNSTopicSubscriptionConfig_autoConfirmingEndpoint(i int) string { return fmt.Sprintf(` resource "aws_sns_topic" "test_topic" { diff --git a/aws/resource_aws_sns_topic_test.go b/aws/resource_aws_sns_topic_test.go index 20b716f80b7..543af3f0c79 100644 --- a/aws/resource_aws_sns_topic_test.go +++ b/aws/resource_aws_sns_topic_test.go @@ -14,16 +14,38 @@ import ( "github.com/jen20/awspolicyequivalence" ) +func TestAccAWSSNSTopic_importBasic(t *testing.T) { + resourceName := "aws_sns_topic.test_topic" + rName := acctest.RandString(10) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSNSTopicDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSNSTopicConfig_withName(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSSNSTopic_basic(t *testing.T) { attributes := make(map[string]string) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_sns_topic.test_topic", Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSNSTopicConfig_withGeneratedName, Check: resource.ComposeTestCheckFunc( testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), @@ -38,13 +60,13 @@ func TestAccAWSSNSTopic_name(t *testing.T) { rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_sns_topic.test_topic", 
Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSNSTopicConfig_withName(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), @@ -59,13 +81,13 @@ func TestAccAWSSNSTopic_namePrefix(t *testing.T) { startsWithPrefix := regexp.MustCompile("^terraform-test-topic-") - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_sns_topic.test_topic", Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSNSTopicConfig_withNamePrefix(), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), @@ -81,13 +103,13 @@ func TestAccAWSSNSTopic_policy(t *testing.T) { rName := acctest.RandString(10) expectedPolicy := `{"Statement":[{"Sid":"Stmt1445931846145","Effect":"Allow","Principal":{"AWS":"*"},"Action":"sns:Publish","Resource":"arn:aws:sns:us-west-2::example"}],"Version":"2012-10-17","Id":"Policy1445931846145"}` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_sns_topic.test_topic", Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSNSTopicWithPolicy(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), @@ -102,13 +124,13 @@ func TestAccAWSSNSTopic_withIAMRole(t *testing.T) { attributes := make(map[string]string) rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_sns_topic.test_topic", Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSNSTopicConfig_withIAMRole(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), @@ -120,13 +142,13 @@ func TestAccAWSSNSTopic_withIAMRole(t *testing.T) { func TestAccAWSSNSTopic_withFakeIAMRole(t *testing.T) { rName := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_sns_topic.test_topic", Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSNSTopicConfig_withFakeIAMRole(rName), ExpectError: regexp.MustCompile(`PrincipalNotFound`), }, @@ -139,13 +161,13 @@ func TestAccAWSSNSTopic_withDeliveryPolicy(t *testing.T) { rName := acctest.RandString(10) expectedPolicy := `{"http":{"defaultHealthyRetryPolicy": {"minDelayTarget": 20,"maxDelayTarget": 20,"numMaxDelayRetries": 0,"numRetries": 3,"numNoDelayRetries": 0,"numMinDelayRetries": 0,"backoffFunction": "linear"},"disableSubscriptionOverrides": false}}` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_sns_topic.test_topic", Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSNSTopicConfig_withDeliveryPolicy(rName), 
Check: resource.ComposeTestCheckFunc( testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), @@ -176,13 +198,13 @@ func TestAccAWSSNSTopic_deliveryStatus(t *testing.T) { "SQSSuccessFeedbackSampleRate": regexp.MustCompile(`^70$`), } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_sns_topic.test_topic", Providers: testAccProviders, CheckDestroy: testAccCheckAWSSNSTopicDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSNSTopicConfig_deliveryStatus(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), @@ -205,6 +227,35 @@ func TestAccAWSSNSTopic_deliveryStatus(t *testing.T) { }) } +func TestAccAWSSNSTopic_encryption(t *testing.T) { + attributes := make(map[string]string) + + rName := acctest.RandString(10) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_sns_topic.test_topic", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSNSTopicDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSNSTopicConfig_withEncryption(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), + resource.TestCheckResourceAttr("aws_sns_topic.test_topic", "kms_master_key_id", "alias/aws/sns"), + ), + }, + { + Config: testAccAWSSNSTopicConfig_withName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSNSTopicExists("aws_sns_topic.test_topic", attributes), + resource.TestCheckResourceAttr("aws_sns_topic.test_topic", "kms_master_key_id", ""), + ), + }, + }, + }) +} + func testAccCheckAWSNSTopicHasPolicy(n string, expectedPolicyText string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -571,3 +622,12 @@ EOF } `, r, r, r) } + +func testAccAWSSNSTopicConfig_withEncryption(r string) string { + return fmt.Sprintf(` +resource "aws_sns_topic" "test_topic" { + name = "terraform-test-topic-%s" + kms_master_key_id = "alias/aws/sns" +} +`, r) +} diff --git a/aws/resource_aws_spot_datafeed_subscription.go b/aws/resource_aws_spot_datafeed_subscription.go index 2e3322710c4..5051e17603d 100644 --- a/aws/resource_aws_spot_datafeed_subscription.go +++ b/aws/resource_aws_spot_datafeed_subscription.go @@ -1,12 +1,12 @@ package aws import ( + "fmt" "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -20,12 +20,12 @@ func resourceAwsSpotDataFeedSubscription() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "bucket": &schema.Schema{ + "bucket": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "prefix": &schema.Schema{ + "prefix": { Type: schema.TypeString, Optional: true, ForceNew: true, @@ -48,7 +48,7 @@ func resourceAwsSpotDataFeedSubscriptionCreate(d *schema.ResourceData, meta inte log.Printf("[INFO] Creating Spot Datafeed Subscription") _, err := conn.CreateSpotDatafeedSubscription(params) if err != nil { - return errwrap.Wrapf("Error Creating Spot Datafeed Subscription: {{err}}", err) + return fmt.Errorf("Error Creating Spot Datafeed Subscription: %s", err) } d.SetId("spot-datafeed-subscription") @@ -66,7 +66,7 @@ func resourceAwsSpotDataFeedSubscriptionRead(d *schema.ResourceData, meta interf 
d.SetId("") return nil } - return errwrap.Wrapf("Error Describing Spot Datafeed Subscription: {{err}}", err) + return fmt.Errorf("Error Describing Spot Datafeed Subscription: %s", err) } if resp == nil { @@ -87,7 +87,7 @@ func resourceAwsSpotDataFeedSubscriptionDelete(d *schema.ResourceData, meta inte log.Printf("[INFO] Deleting Spot Datafeed Subscription") _, err := conn.DeleteSpotDatafeedSubscription(&ec2.DeleteSpotDatafeedSubscriptionInput{}) if err != nil { - return errwrap.Wrapf("Error deleting Spot Datafeed Subscription: {{err}}", err) + return fmt.Errorf("Error deleting Spot Datafeed Subscription: %s", err) } return nil } diff --git a/aws/resource_aws_spot_datafeed_subscription_test.go b/aws/resource_aws_spot_datafeed_subscription_test.go index b05c691e898..535c4367cd4 100644 --- a/aws/resource_aws_spot_datafeed_subscription_test.go +++ b/aws/resource_aws_spot_datafeed_subscription_test.go @@ -26,6 +26,28 @@ func TestAccAWSSpotDatafeedSubscription(t *testing.T) { } } +func testAccAWSSpotDatafeedSubscription_importBasic(t *testing.T) { + resourceName := "aws_spot_datafeed_subscription.default" + ri := acctest.RandInt() + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSpotDatafeedSubscriptionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSpotDatafeedSubscription(ri), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccAWSSpotDatafeedSubscription_basic(t *testing.T) { var subscription ec2.SpotDatafeedSubscription ri := acctest.RandInt() diff --git a/aws/resource_aws_spot_fleet_request.go b/aws/resource_aws_spot_fleet_request.go index a9692f9561f..4dbd1e6e270 100644 --- a/aws/resource_aws_spot_fleet_request.go +++ b/aws/resource_aws_spot_fleet_request.go @@ -25,6 +25,7 @@ func resourceAwsSpotFleetRequest() *schema.Resource { Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(5 * time.Minute), }, SchemaVersion: 1, @@ -190,6 +191,11 @@ func resourceAwsSpotFleetRequest() *schema.Resource { ForceNew: true, Optional: true, }, + "iam_instance_profile_arn": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, + }, "ami": { Type: schema.TypeString, Required: true, @@ -279,6 +285,12 @@ func resourceAwsSpotFleetRequest() *schema.Resource { Default: "lowestPrice", ForceNew: true, }, + "instance_pools_to_use_count": { + Type: schema.TypeInt, + Optional: true, + Default: 1, + ForceNew: true, + }, "excess_capacity_termination_policy": { Type: schema.TypeString, Optional: true, @@ -293,7 +305,7 @@ func resourceAwsSpotFleetRequest() *schema.Resource { }, "spot_price": { Type: schema.TypeString, - Required: true, + Optional: true, ForceNew: true, }, "terminate_instances_with_expiration": { @@ -302,14 +314,26 @@ func resourceAwsSpotFleetRequest() *schema.Resource { ForceNew: true, }, "valid_from": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.ValidateRFC3339TimeString, }, "valid_until": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.ValidateRFC3339TimeString, + }, + "fleet_type": { Type: schema.TypeString, Optional: true, + Default: ec2.FleetTypeMaintain, ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + ec2.FleetTypeMaintain, + ec2.FleetTypeRequest, + }, false), }, 
"spot_request_state": { Type: schema.TypeString, @@ -375,6 +399,12 @@ func buildSpotFleetLaunchSpecification(d map[string]interface{}, meta interface{ } } + if v, ok := d["iam_instance_profile_arn"]; ok && v.(string) != "" { + opts.IamInstanceProfile = &ec2.IamInstanceProfileSpecification{ + Arn: aws.String(v.(string)), + } + } + if v, ok := d["user_data"]; ok { opts.UserData = aws.String(base64Encode([]byte(v.(string)))) } @@ -581,12 +611,12 @@ func resourceAwsSpotFleetRequestCreate(d *schema.ResourceData, meta interface{}) spotFleetConfig := &ec2.SpotFleetRequestConfigData{ IamFleetRole: aws.String(d.Get("iam_fleet_role").(string)), LaunchSpecifications: launch_specs, - SpotPrice: aws.String(d.Get("spot_price").(string)), TargetCapacity: aws.Int64(int64(d.Get("target_capacity").(int))), ClientToken: aws.String(resource.UniqueId()), TerminateInstancesWithExpiration: aws.Bool(d.Get("terminate_instances_with_expiration").(bool)), ReplaceUnhealthyInstances: aws.Bool(d.Get("replace_unhealthy_instances").(bool)), InstanceInterruptionBehavior: aws.String(d.Get("instance_interruption_behaviour").(string)), + Type: aws.String(d.Get("fleet_type").(string)), } if v, ok := d.GetOk("excess_capacity_termination_policy"); ok { @@ -599,23 +629,31 @@ func resourceAwsSpotFleetRequestCreate(d *schema.ResourceData, meta interface{}) spotFleetConfig.AllocationStrategy = aws.String("lowestPrice") } + if v, ok := d.GetOk("instance_pools_to_use_count"); ok && v.(int) != 1 { + spotFleetConfig.InstancePoolsToUseCount = aws.Int64(int64(v.(int))) + } + + if v, ok := d.GetOk("spot_price"); ok && v.(string) != "" { + spotFleetConfig.SpotPrice = aws.String(v.(string)) + } + if v, ok := d.GetOk("valid_from"); ok { - valid_from, err := time.Parse(awsAutoscalingScheduleTimeLayout, v.(string)) + valid_from, err := time.Parse(time.RFC3339, v.(string)) if err != nil { return err } - spotFleetConfig.ValidFrom = &valid_from + spotFleetConfig.ValidFrom = aws.Time(valid_from) } if v, ok := d.GetOk("valid_until"); ok { - valid_until, err := time.Parse(awsAutoscalingScheduleTimeLayout, v.(string)) + valid_until, err := time.Parse(time.RFC3339, v.(string)) if err != nil { return err } - spotFleetConfig.ValidUntil = &valid_until + spotFleetConfig.ValidUntil = aws.Time(valid_until) } else { valid_until := time.Now().Add(24 * time.Hour) - spotFleetConfig.ValidUntil = &valid_until + spotFleetConfig.ValidUntil = aws.Time(valid_until) } if v, ok := d.GetOk("load_balancers"); ok && v.(*schema.Set).Len() > 0 { @@ -668,7 +706,7 @@ func resourceAwsSpotFleetRequestCreate(d *schema.ResourceData, meta interface{}) // IAM is eventually consistent :/ if awsErr.Code() == "InvalidSpotFleetRequestConfig" { return resource.RetryableError( - fmt.Errorf("[WARN] Error creating Spot fleet request, retrying: %s", err)) + fmt.Errorf("Error creating Spot fleet request, retrying: %s", err)) } } return resource.NonRetryableError(err) @@ -851,6 +889,10 @@ func resourceAwsSpotFleetRequestRead(d *schema.ResourceData, meta interface{}) e d.Set("allocation_strategy", aws.StringValue(config.AllocationStrategy)) } + if config.InstancePoolsToUseCount != nil { + d.Set("instance_pools_to_use_count", aws.Int64Value(config.InstancePoolsToUseCount)) + } + if config.ClientToken != nil { d.Set("client_token", aws.StringValue(config.ClientToken)) } @@ -879,16 +921,17 @@ func resourceAwsSpotFleetRequestRead(d *schema.ResourceData, meta interface{}) e if config.ValidFrom != nil { d.Set("valid_from", - aws.TimeValue(config.ValidFrom).Format(awsAutoscalingScheduleTimeLayout)) + 
aws.TimeValue(config.ValidFrom).Format(time.RFC3339)) } if config.ValidUntil != nil { d.Set("valid_until", - aws.TimeValue(config.ValidUntil).Format(awsAutoscalingScheduleTimeLayout)) + aws.TimeValue(config.ValidUntil).Format(time.RFC3339)) } d.Set("replace_unhealthy_instances", config.ReplaceUnhealthyInstances) d.Set("instance_interruption_behaviour", config.InstanceInterruptionBehavior) + d.Set("fleet_type", config.Type) d.Set("launch_specification", launchSpecsToSet(config.LaunchSpecifications, conn)) return nil @@ -938,6 +981,10 @@ func launchSpecToMap(l *ec2.SpotFleetLaunchSpecification, rootDevName *string) m m["iam_instance_profile"] = aws.StringValue(l.IamInstanceProfile.Name) } + if l.IamInstanceProfile != nil && l.IamInstanceProfile.Arn != nil { + m["iam_instance_profile_arn"] = aws.StringValue(l.IamInstanceProfile.Arn) + } + if l.UserData != nil { m["user_data"] = userDataHashSum(aws.StringValue(l.UserData)) } @@ -1118,25 +1165,21 @@ func resourceAwsSpotFleetRequestDelete(d *schema.ResourceData, meta interface{}) terminateInstances := d.Get("terminate_instances_with_expiration").(bool) log.Printf("[INFO] Cancelling spot fleet request: %s", d.Id()) - resp, err := conn.CancelSpotFleetRequests(&ec2.CancelSpotFleetRequestsInput{ - SpotFleetRequestIds: []*string{aws.String(d.Id())}, - TerminateInstances: aws.Bool(terminateInstances), - }) - + err := deleteSpotFleetRequest(d.Id(), terminateInstances, d.Timeout(schema.TimeoutDelete), conn) if err != nil { - return fmt.Errorf("Error cancelling spot request (%s): %s", d.Id(), err) + return fmt.Errorf("error deleting spot request (%s): %s", d.Id(), err) } - // check response successfulFleetRequestSet to make sure our request was canceled - var found bool - for _, s := range resp.SuccessfulFleetRequests { - if *s.SpotFleetRequestId == d.Id() { - found = true - } - } + return nil +} - if !found { - return fmt.Errorf("[ERR] Spot Fleet request (%s) was not found to be successfully canceled, dangling resources may exit", d.Id()) +func deleteSpotFleetRequest(spotFleetRequestID string, terminateInstances bool, timeout time.Duration, conn *ec2.EC2) error { + _, err := conn.CancelSpotFleetRequests(&ec2.CancelSpotFleetRequestsInput{ + SpotFleetRequestIds: []*string{aws.String(spotFleetRequestID)}, + TerminateInstances: aws.Bool(terminateInstances), + }) + if err != nil { + return err } // Only wait for instance termination if requested @@ -1144,20 +1187,20 @@ func resourceAwsSpotFleetRequestDelete(d *schema.ResourceData, meta interface{}) return nil } - return resource.Retry(5*time.Minute, func() *resource.RetryError { + return resource.Retry(timeout, func() *resource.RetryError { resp, err := conn.DescribeSpotFleetInstances(&ec2.DescribeSpotFleetInstancesInput{ - SpotFleetRequestId: aws.String(d.Id()), + SpotFleetRequestId: aws.String(spotFleetRequestID), }) if err != nil { return resource.NonRetryableError(err) } if len(resp.ActiveInstances) == 0 { - log.Printf("[DEBUG] Active instance count is 0 for Spot Fleet Request (%s), removing", d.Id()) + log.Printf("[DEBUG] Active instance count is 0 for Spot Fleet Request (%s), removing", spotFleetRequestID) return nil } - log.Printf("[DEBUG] Active instance count in Spot Fleet Request (%s): %d", d.Id(), len(resp.ActiveInstances)) + log.Printf("[DEBUG] Active instance count in Spot Fleet Request (%s): %d", spotFleetRequestID, len(resp.ActiveInstances)) return resource.RetryableError( fmt.Errorf("fleet still has (%d) running instances", len(resp.ActiveInstances))) diff --git 
a/aws/resource_aws_spot_fleet_request_test.go b/aws/resource_aws_spot_fleet_request_test.go index 2a1f6d0b51a..f29234c7199 100644 --- a/aws/resource_aws_spot_fleet_request_test.go +++ b/aws/resource_aws_spot_fleet_request_test.go @@ -4,6 +4,7 @@ import ( "errors" "fmt" "log" + "regexp" "testing" "time" @@ -14,12 +15,53 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func init() { + resource.AddTestSweepers("aws_spot_fleet_request", &resource.Sweeper{ + Name: "aws_spot_fleet_request", + F: testSweepSpotFleetRequests, + }) +} + +func testSweepSpotFleetRequests(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).ec2conn + + err = conn.DescribeSpotFleetRequestsPages(&ec2.DescribeSpotFleetRequestsInput{}, func(page *ec2.DescribeSpotFleetRequestsOutput, isLast bool) bool { + if len(page.SpotFleetRequestConfigs) == 0 { + log.Print("[DEBUG] No Spot Fleet Requests to sweep") + return false + } + + for _, config := range page.SpotFleetRequestConfigs { + id := aws.StringValue(config.SpotFleetRequestId) + + log.Printf("[INFO] Deleting Spot Fleet Request: %s", id) + err := deleteSpotFleetRequest(id, true, 5*time.Minute, conn) + if err != nil { + log.Printf("[ERROR] Failed to delete Spot Fleet Request (%s): %s", id, err) + } + } + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EC2 Spot Fleet Requests sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving EC2 Spot Fleet Requests: %s", err) + } + return nil +} + func TestAccAWSSpotFleetRequest_associatePublicIpAddress(t *testing.T) { var sfr ec2.SpotFleetRequestConfig rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -46,7 +88,7 @@ func TestAccAWSSpotFleetRequest_instanceInterruptionBehavior(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -66,12 +108,61 @@ func TestAccAWSSpotFleetRequest_instanceInterruptionBehavior(t *testing.T) { }) } +func TestAccAWSSpotFleetRequest_fleetType(t *testing.T) { + var sfr ec2.SpotFleetRequestConfig + rName := acctest.RandString(10) + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSpotFleetRequestConfigFleetType(rName, rInt), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSSpotFleetRequestExists( + "aws_spot_fleet_request.foo", &sfr), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "spot_request_state", "active"), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "fleet_type", "request"), + ), + }, + }, + }) +} + +func TestAccAWSSpotFleetRequest_iamInstanceProfileArn(t *testing.T) { + var sfr ec2.SpotFleetRequestConfig + rName := acctest.RandString(10) + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: 
testAccProviders, + CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSpotFleetRequestConfigIamInstanceProfileArn(rName, rInt), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSSpotFleetRequestExists( + "aws_spot_fleet_request.foo", &sfr), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "spot_request_state", "active"), + testAccCheckAWSSpotFleetRequest_IamInstanceProfileArn(&sfr), + ), + }, + }, + }) +} + func TestAccAWSSpotFleetRequest_changePriceForcesNewRequest(t *testing.T) { var before, after ec2.SpotFleetRequestConfig rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -112,7 +203,7 @@ func TestAccAWSSpotFleetRequest_lowestPriceAzOrSubnetInRegion(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -137,7 +228,7 @@ func TestAccAWSSpotFleetRequest_lowestPriceAzInGivenList(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -166,7 +257,7 @@ func TestAccAWSSpotFleetRequest_lowestPriceSubnetInGivenList(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -191,7 +282,7 @@ func TestAccAWSSpotFleetRequest_multipleInstanceTypesInSameAz(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -224,7 +315,7 @@ func TestAccAWSSpotFleetRequest_multipleInstanceTypesInSameSubnet(t *testing.T) rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -249,7 +340,7 @@ func TestAccAWSSpotFleetRequest_overriddingSpotPrice(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -279,12 +370,37 @@ func TestAccAWSSpotFleetRequest_overriddingSpotPrice(t *testing.T) { }) } +func TestAccAWSSpotFleetRequest_withoutSpotPrice(t *testing.T) { + var sfr ec2.SpotFleetRequestConfig + rName := acctest.RandString(10) + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSSpotFleetRequestConfigWithoutSpotPrice(rName, rInt), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSSpotFleetRequestExists( + "aws_spot_fleet_request.foo", &sfr), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "spot_request_state", "active"), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "launch_specification.#", "2"), + ), + }, + }, + }) +} + func TestAccAWSSpotFleetRequest_diversifiedAllocation(t *testing.T) { var sfr ec2.SpotFleetRequestConfig rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -306,6 +422,35 @@ func TestAccAWSSpotFleetRequest_diversifiedAllocation(t *testing.T) { }) } +func TestAccAWSSpotFleetRequest_multipleInstancePools(t *testing.T) { + var sfr ec2.SpotFleetRequestConfig + rName := acctest.RandString(10) + rInt := acctest.RandInt() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSpotFleetRequestConfigMultipleInstancePools(rName, rInt), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSSpotFleetRequestExists( + "aws_spot_fleet_request.foo", &sfr), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "spot_request_state", "active"), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "launch_specification.#", "3"), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "allocation_strategy", "lowestPrice"), + resource.TestCheckResourceAttr( + "aws_spot_fleet_request.foo", "instance_pools_to_use_count", "2"), + ), + }, + }, + }) +} + func TestAccAWSSpotFleetRequest_withWeightedCapacity(t *testing.T) { var sfr ec2.SpotFleetRequestConfig rName := acctest.RandString(10) @@ -325,7 +470,7 @@ func TestAccAWSSpotFleetRequest_withWeightedCapacity(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -359,7 +504,7 @@ func TestAccAWSSpotFleetRequest_withEBSDisk(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -382,7 +527,7 @@ func TestAccAWSSpotFleetRequest_withTags(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -405,7 +550,7 @@ func TestAccAWSSpotFleetRequest_placementTenancy(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -506,7 +651,7 @@ func testAccCheckAWSSpotFleetRequest_PlacementAttributes( return fmt.Errorf("Expected placement to be set, got nil") } if *placement.Tenancy != "dedicated" { 
- return fmt.Errorf("Expected placement tenancy to be %q, got %q", "dedicated", placement.Tenancy) + return fmt.Errorf("Expected placement tenancy to be %q, got %q", "dedicated", *placement.Tenancy) } return nil @@ -539,7 +684,7 @@ func TestAccAWSSpotFleetRequest_WithELBs(t *testing.T) { rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -561,12 +706,35 @@ func TestAccAWSSpotFleetRequest_WithELBs(t *testing.T) { }) } +func testAccCheckAWSSpotFleetRequest_IamInstanceProfileArn( + sfr *ec2.SpotFleetRequestConfig) resource.TestCheckFunc { + return func(s *terraform.State) error { + if len(sfr.SpotFleetRequestConfig.LaunchSpecifications) == 0 { + return errors.New("Missing launch specification") + } + + spec := *sfr.SpotFleetRequestConfig.LaunchSpecifications[0] + + profile := spec.IamInstanceProfile + if profile == nil { + return fmt.Errorf("Expected IamInstanceProfile to be set, got nil") + } + //Validate the string whether it is ARN + re := regexp.MustCompile("arn:aws:iam::\\d{12}:instance-profile/?[a-zA-Z0-9+=,.@-_].*") + if !re.MatchString(*profile.Arn) { + return fmt.Errorf("Expected IamInstanceProfile input as ARN, got %s", *profile.Arn) + } + + return nil + } +} + func TestAccAWSSpotFleetRequest_WithTargetGroups(t *testing.T) { var sfr ec2.SpotFleetRequestConfig rName := acctest.RandString(10) rInt := acctest.RandInt() - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSpotFleetRequestDestroy, @@ -716,6 +884,195 @@ resource "aws_spot_fleet_request" "foo" { `, rName, rInt, rInt, rName) } +func testAccAWSSpotFleetRequestConfigFleetType(rName string, rInt int) string { + return fmt.Sprintf(` +resource "aws_iam_policy" "test-policy" { + name = "test-policy-%d" + path = "/" + description = "Spot Fleet Request ACCTest Policy" + policy = < 0 { + if errors := validateSQSFifoQueueName(name); len(errors) > 0 { return fmt.Errorf("Error validating the FIFO queue name: %v", errors) } } else { - if errors := validateSQSNonFifoQueueName(name, "name"); len(errors) > 0 { + if errors := validateSQSNonFifoQueueName(name); len(errors) > 0 { return fmt.Errorf("Error validating SQS queue name: %v", errors) } } @@ -244,7 +249,7 @@ func resourceAwsSqsQueueUpdate(d *schema.ResourceData, meta interface{}) error { Attributes: attributes, } if _, err := sqsconn.SetQueueAttributes(req); err != nil { - return fmt.Errorf("[ERR] Error updating SQS attributes: %s", err) + return fmt.Errorf("Error updating SQS attributes: %s", err) } } @@ -367,7 +372,7 @@ func setTagsSQS(conn *sqs.SQS, d *schema.ResourceData) error { if len(remove) > 0 { log.Printf("[DEBUG] Removing tags: %#v", remove) keys := make([]*string, 0, len(remove)) - for k, _ := range remove { + for k := range remove { keys = append(keys, aws.String(k)) } diff --git a/aws/resource_aws_sqs_queue_policy.go b/aws/resource_aws_sqs_queue_policy.go index 277afc2e787..0459e7c209a 100644 --- a/aws/resource_aws_sqs_queue_policy.go +++ b/aws/resource_aws_sqs_queue_policy.go @@ -9,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/service/sqs" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" 
"github.com/jen20/awspolicyequivalence" ) @@ -34,7 +35,7 @@ func resourceAwsSqsQueuePolicy() *schema.Resource { "policy": { Type: schema.TypeString, Required: true, - ValidateFunc: validateJsonString, + ValidateFunc: validation.ValidateJsonString, DiffSuppressFunc: suppressEquivalentAwsPolicyDiffs, }, }, diff --git a/aws/resource_aws_sqs_queue_policy_test.go b/aws/resource_aws_sqs_queue_policy_test.go index 72d5ab743e0..7dfb9129627 100644 --- a/aws/resource_aws_sqs_queue_policy_test.go +++ b/aws/resource_aws_sqs_queue_policy_test.go @@ -10,16 +10,19 @@ import ( ) func TestAccAWSSQSQueuePolicy_basic(t *testing.T) { + var queueAttributes map[string]*string + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSQSPolicyConfig_basic(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.q"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.q", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), resource.TestMatchResourceAttr("aws_sqs_queue_policy.test", "policy", regexp.MustCompile("^{\"Version\":\"2012-10-17\".+")), ), @@ -32,16 +35,16 @@ func TestAccAWSSQSQueuePolicy_import(t *testing.T) { queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(5)) resourceName := "aws_sqs_queue_policy.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSQSPolicyConfig_basic(queueName), }, - resource.TestStep{ + { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, diff --git a/aws/resource_aws_sqs_queue_test.go b/aws/resource_aws_sqs_queue_test.go index b4628b0167d..5856abbb153 100644 --- a/aws/resource_aws_sqs_queue_test.go +++ b/aws/resource_aws_sqs_queue_test.go @@ -2,12 +2,10 @@ package aws import ( "fmt" + "regexp" "testing" "time" - "regexp" - "strings" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/sqs" "github.com/hashicorp/terraform/helper/acctest" @@ -16,9 +14,83 @@ import ( "github.com/jen20/awspolicyequivalence" ) +func TestAccAWSSQSQueue_importBasic(t *testing.T) { + resourceName := "aws_sqs_queue.queue" + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSQSQueueDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSQSConfigWithDefaults(queueName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_sqs_queue.queue", "fifo_queue", "false"), + ), + }, + }, + }) +} + +func TestAccAWSSQSQueue_importFifo(t *testing.T) { + resourceName := "aws_sqs_queue.queue" + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSQSQueueDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSSQSFifoConfigWithDefaults(queueName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_sqs_queue.queue", "fifo_queue", "true"), + ), + }, + }, + }) +} + +func TestAccAWSSQSQueue_importEncryption(t *testing.T) { + resourceName := "aws_sqs_queue.queue" + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSQSQueueDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSQSConfigWithEncryption(queueName), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("aws_sqs_queue.queue", "kms_master_key_id", "alias/aws/sqs"), + ), + }, + }, + }) +} + func TestAccAWSSQSQueue_basic(t *testing.T) { + var queueAttributes map[string]*string + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -26,19 +98,22 @@ func TestAccAWSSQSQueue_basic(t *testing.T) { { Config: testAccAWSSQSConfigWithDefaults(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), ), }, { Config: testAccAWSSQSConfigWithOverrides(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithOverrides("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueOverrideAttributes(&queueAttributes), ), }, { Config: testAccAWSSQSConfigWithDefaults(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), ), }, }, @@ -46,8 +121,10 @@ func TestAccAWSSQSQueue_basic(t *testing.T) { } func TestAccAWSSQSQueue_tags(t *testing.T) { + var queueAttributes map[string]*string + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -55,7 +132,8 @@ func TestAccAWSSQSQueue_tags(t *testing.T) { { Config: testAccAWSSQSConfigWithTags(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), resource.TestCheckResourceAttr("aws_sqs_queue.queue", "tags.%", "2"), resource.TestCheckResourceAttr("aws_sqs_queue.queue", "tags.Usage", "original"), ), @@ -63,7 +141,8 @@ func TestAccAWSSQSQueue_tags(t *testing.T) { { Config: testAccAWSSQSConfigWithTagsChanged(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), 
resource.TestCheckResourceAttr("aws_sqs_queue.queue", "tags.%", "1"), resource.TestCheckResourceAttr("aws_sqs_queue.queue", "tags.Usage", "changed"), ), @@ -71,7 +150,8 @@ func TestAccAWSSQSQueue_tags(t *testing.T) { { Config: testAccAWSSQSConfigWithDefaults(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), resource.TestCheckNoResourceAttr("aws_sqs_queue.queue", "tags"), ), }, @@ -80,8 +160,10 @@ func TestAccAWSSQSQueue_tags(t *testing.T) { } func TestAccAWSSQSQueue_namePrefix(t *testing.T) { + var queueAttributes map[string]*string + prefix := "acctest-sqs-queue" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -89,8 +171,30 @@ func TestAccAWSSQSQueue_namePrefix(t *testing.T) { { Config: testAccAWSSQSConfigWithNamePrefix(prefix), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.queue"), - testAccCheckAWSSQSGeneratedNamePrefix("aws_sqs_queue.queue", "acctest-sqs-queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), + resource.TestMatchResourceAttr("aws_sqs_queue.queue", "name", regexp.MustCompile(`^acctest-sqs-queue`)), + ), + }, + }, + }) +} + +func TestAccAWSSQSQueue_namePrefix_fifo(t *testing.T) { + var queueAttributes map[string]*string + + prefix := "acctest-sqs-queue" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSQSQueueDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSQSFifoConfigWithNamePrefix(prefix), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), + resource.TestMatchResourceAttr("aws_sqs_queue.queue", "name", regexp.MustCompile(`^acctest-sqs-queue.*\.fifo$`)), ), }, }, @@ -98,6 +202,8 @@ func TestAccAWSSQSQueue_namePrefix(t *testing.T) { } func TestAccAWSSQSQueue_policy(t *testing.T) { + var queueAttributes map[string]*string + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(10)) topicName := fmt.Sprintf("sns-topic-%s", acctest.RandString(10)) @@ -105,7 +211,7 @@ func TestAccAWSSQSQueue_policy(t *testing.T) { `{"Version": "2012-10-17","Id": "sqspolicy","Statement":[{"Sid": "Stmt1451501026839","Effect": "Allow","Principal":"*","Action":"sqs:SendMessage","Resource":"arn:aws:sqs:us-west-2:470663696735:%s","Condition":{"ArnEquals":{"aws:SourceArn":"arn:aws:sns:us-west-2:470663696735:%s"}}}]}`, topicName, queueName) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -113,7 +219,8 @@ func TestAccAWSSQSQueue_policy(t *testing.T) { { Config: testAccAWSSQSConfig_PolicyFormat(topicName, queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSQSHasPolicy("aws_sqs_queue.test-email-events", expectedPolicyText), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.test-email-events", &queueAttributes), + testAccCheckAWSSQSQueuePolicyAttribute(&queueAttributes, expectedPolicyText), ), }, }, @@ -121,8 
+228,10 @@ func TestAccAWSSQSQueue_policy(t *testing.T) { } func TestAccAWSSQSQueue_queueDeletedRecently(t *testing.T) { + var queueAttributes map[string]*string + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -130,25 +239,26 @@ func TestAccAWSSQSQueue_queueDeletedRecently(t *testing.T) { { Config: testAccAWSSQSConfigWithDefaults(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), ), }, - { - Config: "# delete queue to quickly recreate", - Check: testAccCheckAWSSQSQueueDestroy, - }, { Config: testAccAWSSQSConfigWithDefaults(queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), ), + Taint: []string{"aws_sqs_queue.queue"}, }, }, }) } func TestAccAWSSQSQueue_redrivePolicy(t *testing.T) { - resource.Test(t, resource.TestCase{ + var queueAttributes map[string]*string + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -156,7 +266,8 @@ func TestAccAWSSQSQueue_redrivePolicy(t *testing.T) { { Config: testAccAWSSQSConfigWithRedrive(acctest.RandString(10)), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithDefaults("aws_sqs_queue.my_dead_letter_queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.my_dead_letter_queue", &queueAttributes), + testAccCheckAWSSQSQueueDefaultAttributes(&queueAttributes), ), }, }, @@ -165,9 +276,11 @@ func TestAccAWSSQSQueue_redrivePolicy(t *testing.T) { // Tests formatting and compacting of Policy, Redrive json func TestAccAWSSQSQueue_Policybasic(t *testing.T) { + var queueAttributes map[string]*string + queueName := fmt.Sprintf("sqs-queue-%s", acctest.RandString(10)) topicName := fmt.Sprintf("sns-topic-%s", acctest.RandString(10)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -175,7 +288,8 @@ func TestAccAWSSQSQueue_Policybasic(t *testing.T) { { Config: testAccAWSSQSConfig_PolicyFormat(topicName, queueName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExistsWithOverrides("aws_sqs_queue.test-email-events"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.test-email-events", &queueAttributes), + testAccCheckAWSSQSQueueOverrideAttributes(&queueAttributes), ), }, }, @@ -183,8 +297,9 @@ func TestAccAWSSQSQueue_Policybasic(t *testing.T) { } func TestAccAWSSQSQueue_FIFO(t *testing.T) { + var queueAttributes map[string]*string - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -192,7 +307,7 @@ func TestAccAWSSQSQueue_FIFO(t *testing.T) { { Config: testAccAWSSQSConfigWithFIFO(acctest.RandString(10)), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExists("aws_sqs_queue.queue"), + 
testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), resource.TestCheckResourceAttr("aws_sqs_queue.queue", "fifo_queue", "true"), ), }, @@ -201,7 +316,7 @@ func TestAccAWSSQSQueue_FIFO(t *testing.T) { } func TestAccAWSSQSQueue_FIFOExpectNameError(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -215,7 +330,9 @@ func TestAccAWSSQSQueue_FIFOExpectNameError(t *testing.T) { } func TestAccAWSSQSQueue_FIFOWithContentBasedDeduplication(t *testing.T) { - resource.Test(t, resource.TestCase{ + var queueAttributes map[string]*string + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -223,7 +340,7 @@ func TestAccAWSSQSQueue_FIFOWithContentBasedDeduplication(t *testing.T) { { Config: testAccAWSSQSConfigWithFIFOContentBasedDeduplication(acctest.RandString(10)), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExists("aws_sqs_queue.queue"), + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), resource.TestCheckResourceAttr("aws_sqs_queue.queue", "fifo_queue", "true"), resource.TestCheckResourceAttr("aws_sqs_queue.queue", "content_based_deduplication", "true"), ), @@ -233,7 +350,7 @@ func TestAccAWSSQSQueue_FIFOWithContentBasedDeduplication(t *testing.T) { } func TestAccAWSSQSQueue_ExpectContentBasedDeduplicationError(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSQSQueueDestroy, @@ -246,6 +363,25 @@ func TestAccAWSSQSQueue_ExpectContentBasedDeduplicationError(t *testing.T) { }) } +func TestAccAWSSQSQueue_Encryption(t *testing.T) { + var queueAttributes map[string]*string + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSQSQueueDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSQSConfigWithEncryption(acctest.RandString(10)), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSQSQueueExists("aws_sqs_queue.queue", &queueAttributes), + resource.TestCheckResourceAttr("aws_sqs_queue.queue", "kms_master_key_id", "alias/aws/sqs"), + ), + }, + }, + }) +} + func testAccCheckAWSSQSQueueDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).sqsconn @@ -275,32 +411,12 @@ func testAccCheckAWSSQSQueueDestroy(s *terraform.State) error { return nil } -func testAccCheckAWSQSHasPolicy(n string, expectedPolicyText string) resource.TestCheckFunc { +func testAccCheckAWSSQSQueuePolicyAttribute(queueAttributes *map[string]*string, expectedPolicyText string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("No Queue URL specified!") - } - - conn := testAccProvider.Meta().(*AWSClient).sqsconn - - params := &sqs.GetQueueAttributesInput{ - QueueUrl: aws.String(rs.Primary.ID), - AttributeNames: []*string{aws.String("Policy")}, - } - resp, err := conn.GetQueueAttributes(params) - if err != nil { - return err - } - var actualPolicyText string - for k, v := range resp.Attributes { - if k == "Policy" { - actualPolicyText = *v + for key, valuePointer := 
range *queueAttributes { + if key == "Policy" { + actualPolicyText = aws.StringValue(valuePointer) break } } @@ -318,26 +434,11 @@ func testAccCheckAWSQSHasPolicy(n string, expectedPolicyText string) resource.Te } } -func testAccCheckAWSSQSExists(n string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("No Queue URL specified!") - } - - return nil - } -} - -func testAccCheckAWSSQSExistsWithDefaults(n string) resource.TestCheckFunc { +func testAccCheckAWSSQSQueueExists(resourceName string, queueAttributes *map[string]*string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] + rs, ok := s.RootModule().Resources[resourceName] if !ok { - return fmt.Errorf("Not found: %s", n) + return fmt.Errorf("Not found: %s", resourceName) } if rs.Primary.ID == "" { @@ -346,36 +447,45 @@ func testAccCheckAWSSQSExistsWithDefaults(n string) resource.TestCheckFunc { conn := testAccProvider.Meta().(*AWSClient).sqsconn - params := &sqs.GetQueueAttributesInput{ + input := &sqs.GetQueueAttributesInput{ QueueUrl: aws.String(rs.Primary.ID), AttributeNames: []*string{aws.String("All")}, } - resp, err := conn.GetQueueAttributes(params) + output, err := conn.GetQueueAttributes(input) if err != nil { return err } + *queueAttributes = output.Attributes + + return nil + } +} + +func testAccCheckAWSSQSQueueDefaultAttributes(queueAttributes *map[string]*string) resource.TestCheckFunc { + return func(s *terraform.State) error { // checking if attributes are defaults - for k, v := range resp.Attributes { - if k == "VisibilityTimeout" && *v != "30" { - return fmt.Errorf("VisibilityTimeout (%s) was not set to 30", *v) + for key, valuePointer := range *queueAttributes { + value := aws.StringValue(valuePointer) + if key == "VisibilityTimeout" && value != "30" { + return fmt.Errorf("VisibilityTimeout (%s) was not set to 30", value) } - if k == "MessageRetentionPeriod" && *v != "345600" { - return fmt.Errorf("MessageRetentionPeriod (%s) was not set to 345600", *v) + if key == "MessageRetentionPeriod" && value != "345600" { + return fmt.Errorf("MessageRetentionPeriod (%s) was not set to 345600", value) } - if k == "MaximumMessageSize" && *v != "262144" { - return fmt.Errorf("MaximumMessageSize (%s) was not set to 262144", *v) + if key == "MaximumMessageSize" && value != "262144" { + return fmt.Errorf("MaximumMessageSize (%s) was not set to 262144", value) } - if k == "DelaySeconds" && *v != "0" { - return fmt.Errorf("DelaySeconds (%s) was not set to 0", *v) + if key == "DelaySeconds" && value != "0" { + return fmt.Errorf("DelaySeconds (%s) was not set to 0", value) } - if k == "ReceiveMessageWaitTimeSeconds" && *v != "0" { - return fmt.Errorf("ReceiveMessageWaitTimeSeconds (%s) was not set to 0", *v) + if key == "ReceiveMessageWaitTimeSeconds" && value != "0" { + return fmt.Errorf("ReceiveMessageWaitTimeSeconds (%s) was not set to 0", value) } } @@ -383,66 +493,29 @@ func testAccCheckAWSSQSExistsWithDefaults(n string) resource.TestCheckFunc { } } -func testAccCheckAWSSQSGeneratedNamePrefix(resource, prefix string) resource.TestCheckFunc { - return func(s *terraform.State) error { - r, ok := s.RootModule().Resources[resource] - if !ok { - return fmt.Errorf("Resource not found") - } - name, ok := r.Primary.Attributes["name"] - if !ok { - return fmt.Errorf("Name attr not found: %#v", r.Primary.Attributes) - } - if 
!strings.HasPrefix(name, prefix) { - return fmt.Errorf("Name: %q, does not have prefix: %q", name, prefix) - } - return nil - } -} - -func testAccCheckAWSSQSExistsWithOverrides(n string) resource.TestCheckFunc { +func testAccCheckAWSSQSQueueOverrideAttributes(queueAttributes *map[string]*string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("No Queue URL specified!") - } - - conn := testAccProvider.Meta().(*AWSClient).sqsconn - - params := &sqs.GetQueueAttributesInput{ - QueueUrl: aws.String(rs.Primary.ID), - AttributeNames: []*string{aws.String("All")}, - } - resp, err := conn.GetQueueAttributes(params) - - if err != nil { - return err - } - // checking if attributes match our overrides - for k, v := range resp.Attributes { - if k == "VisibilityTimeout" && *v != "60" { - return fmt.Errorf("VisibilityTimeout (%s) was not set to 60", *v) + for key, valuePointer := range *queueAttributes { + value := aws.StringValue(valuePointer) + if key == "VisibilityTimeout" && value != "60" { + return fmt.Errorf("VisibilityTimeout (%s) was not set to 60", value) } - if k == "MessageRetentionPeriod" && *v != "86400" { - return fmt.Errorf("MessageRetentionPeriod (%s) was not set to 86400", *v) + if key == "MessageRetentionPeriod" && value != "86400" { + return fmt.Errorf("MessageRetentionPeriod (%s) was not set to 86400", value) } - if k == "MaximumMessageSize" && *v != "2048" { - return fmt.Errorf("MaximumMessageSize (%s) was not set to 2048", *v) + if key == "MaximumMessageSize" && value != "2048" { + return fmt.Errorf("MaximumMessageSize (%s) was not set to 2048", value) } - if k == "DelaySeconds" && *v != "90" { - return fmt.Errorf("DelaySeconds (%s) was not set to 90", *v) + if key == "DelaySeconds" && value != "90" { + return fmt.Errorf("DelaySeconds (%s) was not set to 90", value) } - if k == "ReceiveMessageWaitTimeSeconds" && *v != "10" { - return fmt.Errorf("ReceiveMessageWaitTimeSeconds (%s) was not set to 10", *v) + if key == "ReceiveMessageWaitTimeSeconds" && value != "10" { + return fmt.Errorf("ReceiveMessageWaitTimeSeconds (%s) was not set to 10", value) } } @@ -450,24 +523,6 @@ func testAccCheckAWSSQSExistsWithOverrides(n string) resource.TestCheckFunc { } } -func TestAccAWSSQSQueue_Encryption(t *testing.T) { - - resource.Test(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - Providers: testAccProviders, - CheckDestroy: testAccCheckAWSSQSQueueDestroy, - Steps: []resource.TestStep{ - { - Config: testAccAWSSQSConfigWithEncryption(acctest.RandString(10)), - Check: resource.ComposeTestCheckFunc( - testAccCheckAWSSQSExists("aws_sqs_queue.queue"), - resource.TestCheckResourceAttr("aws_sqs_queue.queue", "kms_master_key_id", "alias/aws/sqs"), - ), - }, - }, - }) -} - func testAccAWSSQSConfigWithDefaults(r string) string { return fmt.Sprintf(` resource "aws_sqs_queue" "queue" { @@ -484,6 +539,15 @@ resource "aws_sqs_queue" "queue" { `, r) } +func testAccAWSSQSFifoConfigWithNamePrefix(r string) string { + return fmt.Sprintf(` +resource "aws_sqs_queue" "queue" { + name_prefix = "%s" + fifo_queue = true +} +`, r) +} + func testAccAWSSQSFifoConfigWithDefaults(r string) string { return fmt.Sprintf(` resource "aws_sqs_queue" "queue" { diff --git a/aws/resource_aws_ssm_activation.go b/aws/resource_aws_ssm_activation.go index 8fb58671d0b..95ed6d41ff4 100644 --- a/aws/resource_aws_ssm_activation.go +++ 
b/aws/resource_aws_ssm_activation.go @@ -7,9 +7,9 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ssm" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsSsmActivation() *schema.Resource { @@ -37,7 +37,7 @@ func resourceAwsSsmActivation() *schema.Resource { Type: schema.TypeString, Optional: true, ForceNew: true, - ValidateFunc: validateRFC3339TimeString, + ValidateFunc: validation.ValidateRFC3339TimeString, }, "iam_role": { Type: schema.TypeString, @@ -106,11 +106,11 @@ func resourceAwsSsmActivationCreate(d *schema.ResourceData, meta interface{}) er }) if err != nil { - return errwrap.Wrapf("[ERROR] Error creating SSM activation: {{err}}", err) + return fmt.Errorf("Error creating SSM activation: %s", err) } if resp.ActivationId == nil { - return fmt.Errorf("[ERROR] ActivationId was nil") + return fmt.Errorf("ActivationId was nil") } d.SetId(*resp.ActivationId) d.Set("activation_code", resp.ActivationCode) @@ -138,10 +138,10 @@ func resourceAwsSsmActivationRead(d *schema.ResourceData, meta interface{}) erro resp, err := ssmconn.DescribeActivations(params) if err != nil { - return errwrap.Wrapf("[ERROR] Error reading SSM activation: {{err}}", err) + return fmt.Errorf("Error reading SSM activation: %s", err) } if resp.ActivationList == nil || len(resp.ActivationList) == 0 { - return fmt.Errorf("[ERROR] ActivationList was nil or empty") + return fmt.Errorf("ActivationList was nil or empty") } activation := resp.ActivationList[0] // Only 1 result as MaxResults is 1 above @@ -168,7 +168,7 @@ func resourceAwsSsmActivationDelete(d *schema.ResourceData, meta interface{}) er _, err := ssmconn.DeleteActivation(params) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting SSM activation: {{err}}", err) + return fmt.Errorf("Error deleting SSM activation: %s", err) } return nil diff --git a/aws/resource_aws_ssm_activation_test.go b/aws/resource_aws_ssm_activation_test.go index b41f09c6c5a..dc7f00ba6f0 100644 --- a/aws/resource_aws_ssm_activation_test.go +++ b/aws/resource_aws_ssm_activation_test.go @@ -15,7 +15,7 @@ import ( func TestAccAWSSSMActivation_basic(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMActivationDestroy, @@ -37,7 +37,7 @@ func TestAccAWSSSMActivation_expirationDate(t *testing.T) { expirationDateS := expirationTime.Format(time.RFC3339) resourceName := "aws_ssm_activation.foo" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMActivationDestroy, diff --git a/aws/resource_aws_ssm_association.go b/aws/resource_aws_ssm_association.go index b65dd858453..7a93f96a88b 100644 --- a/aws/resource_aws_ssm_association.go +++ b/aws/resource_aws_ssm_association.go @@ -13,7 +13,7 @@ func resourceAwsSsmAssociation() *schema.Resource { return &schema.Resource{ Create: resourceAwsSsmAssociationCreate, Read: resourceAwsSsmAssociationRead, - Update: resourceAwsSsmAssocationUpdate, + Update: resourceAwsSsmAssociationUpdate, Delete: resourceAwsSsmAssociationDelete, MigrateState: resourceAwsSsmAssociationMigrateState, @@ -97,45 +97,45 @@ func 
resourceAwsSsmAssociationCreate(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] SSM association create: %s", d.Id()) - assosciationInput := &ssm.CreateAssociationInput{ + associationInput := &ssm.CreateAssociationInput{ Name: aws.String(d.Get("name").(string)), } if v, ok := d.GetOk("association_name"); ok { - assosciationInput.AssociationName = aws.String(v.(string)) + associationInput.AssociationName = aws.String(v.(string)) } if v, ok := d.GetOk("instance_id"); ok { - assosciationInput.InstanceId = aws.String(v.(string)) + associationInput.InstanceId = aws.String(v.(string)) } if v, ok := d.GetOk("document_version"); ok { - assosciationInput.DocumentVersion = aws.String(v.(string)) + associationInput.DocumentVersion = aws.String(v.(string)) } if v, ok := d.GetOk("schedule_expression"); ok { - assosciationInput.ScheduleExpression = aws.String(v.(string)) + associationInput.ScheduleExpression = aws.String(v.(string)) } if v, ok := d.GetOk("parameters"); ok { - assosciationInput.Parameters = expandSSMDocumentParameters(v.(map[string]interface{})) + associationInput.Parameters = expandSSMDocumentParameters(v.(map[string]interface{})) } if _, ok := d.GetOk("targets"); ok { - assosciationInput.Targets = expandAwsSsmTargets(d) + associationInput.Targets = expandAwsSsmTargets(d.Get("targets").([]interface{})) } if v, ok := d.GetOk("output_location"); ok { - assosciationInput.OutputLocation = expandSSMAssociationOutputLocation(v.([]interface{})) + associationInput.OutputLocation = expandSSMAssociationOutputLocation(v.([]interface{})) } - resp, err := ssmconn.CreateAssociation(assosciationInput) + resp, err := ssmconn.CreateAssociation(associationInput) if err != nil { - return fmt.Errorf("[ERROR] Error creating SSM association: %s", err) + return fmt.Errorf("Error creating SSM association: %s", err) } if resp.AssociationDescription == nil { - return fmt.Errorf("[ERROR] AssociationDescription was nil") + return fmt.Errorf("AssociationDescription was nil") } d.SetId(*resp.AssociationDescription.AssociationId) @@ -160,10 +160,10 @@ func resourceAwsSsmAssociationRead(d *schema.ResourceData, meta interface{}) err d.SetId("") return nil } - return fmt.Errorf("[ERROR] Error reading SSM association: %s", err) + return fmt.Errorf("Error reading SSM association: %s", err) } if resp.AssociationDescription == nil { - return fmt.Errorf("[ERROR] AssociationDescription was nil") + return fmt.Errorf("AssociationDescription was nil") } association := resp.AssociationDescription @@ -176,52 +176,53 @@ func resourceAwsSsmAssociationRead(d *schema.ResourceData, meta interface{}) err d.Set("document_version", association.DocumentVersion) if err := d.Set("targets", flattenAwsSsmTargets(association.Targets)); err != nil { - return fmt.Errorf("[DEBUG] Error setting targets error: %#v", err) + return fmt.Errorf("Error setting targets error: %#v", err) } if err := d.Set("output_location", flattenAwsSsmAssociationOutoutLocation(association.OutputLocation)); err != nil { - return fmt.Errorf("[DEBUG] Error setting output_location error: %#v", err) + return fmt.Errorf("Error setting output_location error: %#v", err) } return nil } -func resourceAwsSsmAssocationUpdate(d *schema.ResourceData, meta interface{}) error { +func resourceAwsSsmAssociationUpdate(d *schema.ResourceData, meta interface{}) error { ssmconn := meta.(*AWSClient).ssmconn - log.Printf("[DEBUG] SSM association update: %s", d.Id()) + log.Printf("[DEBUG] SSM Association update: %s", d.Id()) associationInput := &ssm.UpdateAssociationInput{ 
AssociationId: aws.String(d.Get("association_id").(string)), } - if d.HasChange("association_name") { - associationInput.AssociationName = aws.String(d.Get("association_name").(string)) + // AWS creates a new version every time the association is updated, so everything should be passed in the update. + if v, ok := d.GetOk("association_name"); ok { + associationInput.AssociationName = aws.String(v.(string)) } - if d.HasChange("schedule_expression") { - associationInput.ScheduleExpression = aws.String(d.Get("schedule_expression").(string)) + if v, ok := d.GetOk("document_version"); ok { + associationInput.DocumentVersion = aws.String(v.(string)) } - if d.HasChange("document_version") { - associationInput.DocumentVersion = aws.String(d.Get("document_version").(string)) + if v, ok := d.GetOk("schedule_expression"); ok { + associationInput.ScheduleExpression = aws.String(v.(string)) } - if d.HasChange("parameters") { - associationInput.Parameters = expandSSMDocumentParameters(d.Get("parameters").(map[string]interface{})) + if v, ok := d.GetOk("parameters"); ok { + associationInput.Parameters = expandSSMDocumentParameters(v.(map[string]interface{})) } - if d.HasChange("output_location") { - associationInput.OutputLocation = expandSSMAssociationOutputLocation(d.Get("output_location").([]interface{})) + if _, ok := d.GetOk("targets"); ok { + associationInput.Targets = expandAwsSsmTargets(d.Get("targets").([]interface{})) } - if d.HasChange("targets") { - associationInput.Targets = expandAwsSsmTargets(d) + if v, ok := d.GetOk("output_location"); ok { + associationInput.OutputLocation = expandSSMAssociationOutputLocation(v.([]interface{})) } _, err := ssmconn.UpdateAssociation(associationInput) if err != nil { - return fmt.Errorf("[ERROR] Error updating SSM association: %s", err) + return fmt.Errorf("Error updating SSM association: %s", err) } return resourceAwsSsmAssociationRead(d, meta) @@ -230,7 +231,7 @@ func resourceAwsSsmAssocationUpdate(d *schema.ResourceData, meta interface{}) er func resourceAwsSsmAssociationDelete(d *schema.ResourceData, meta interface{}) error { ssmconn := meta.(*AWSClient).ssmconn - log.Printf("[DEBUG] Deleting SSM Assosciation: %s", d.Id()) + log.Printf("[DEBUG] Deleting SSM Association: %s", d.Id()) params := &ssm.DeleteAssociationInput{ AssociationId: aws.String(d.Get("association_id").(string)), @@ -239,7 +240,7 @@ func resourceAwsSsmAssociationDelete(d *schema.ResourceData, meta interface{}) e _, err := ssmconn.DeleteAssociation(params) if err != nil { - return fmt.Errorf("[ERROR] Error deleting SSM association: %s", err) + return fmt.Errorf("Error deleting SSM association: %s", err) } return nil diff --git a/aws/resource_aws_ssm_association_test.go b/aws/resource_aws_ssm_association_test.go index aed25740f5a..a5b47151703 100644 --- a/aws/resource_aws_ssm_association_test.go +++ b/aws/resource_aws_ssm_association_test.go @@ -45,7 +45,7 @@ func TestAccAWSSSMAssociation_basic(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMAssociationDestroy, @@ -83,7 +83,7 @@ func TestAccAWSSSMAssociation_withTargets(t *testing.T) { key = "tag:ExtraName" values = ["acceptanceTest"] }` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMAssociationDestroy, @@ -134,7 +134,7 @@ func 
TestAccAWSSSMAssociation_withTargets(t *testing.T) { func TestAccAWSSSMAssociation_withParameters(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMAssociationDestroy, @@ -163,7 +163,7 @@ func TestAccAWSSSMAssociation_withAssociationName(t *testing.T) { assocName1 := acctest.RandString(10) assocName2 := acctest.RandString(10) rName := acctest.RandString(5) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMAssociationDestroy, @@ -188,9 +188,41 @@ func TestAccAWSSSMAssociation_withAssociationName(t *testing.T) { }) } +func TestAccAWSSSMAssociation_withAssociationNameAndScheduleExpression(t *testing.T) { + assocName := acctest.RandString(10) + rName := acctest.RandString(5) + resourceName := "aws_ssm_association.test" + scheduleExpression1 := "cron(0 16 ? * TUE *)" + scheduleExpression2 := "cron(0 16 ? * WED *)" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSSMAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSSMAssociationConfigWithAssociationNameAndScheduleExpression(rName, assocName, scheduleExpression1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMAssociationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "association_name", assocName), + resource.TestCheckResourceAttr(resourceName, "schedule_expression", scheduleExpression1), + ), + }, + { + Config: testAccAWSSSMAssociationConfigWithAssociationNameAndScheduleExpression(rName, assocName, scheduleExpression2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMAssociationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "association_name", assocName), + resource.TestCheckResourceAttr(resourceName, "schedule_expression", scheduleExpression2), + ), + }, + }, + }) +} + func TestAccAWSSSMAssociation_withDocumentVersion(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMAssociationDestroy, @@ -209,7 +241,7 @@ func TestAccAWSSSMAssociation_withDocumentVersion(t *testing.T) { func TestAccAWSSSMAssociation_withOutputLocation(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMAssociationDestroy, @@ -250,7 +282,7 @@ func TestAccAWSSSMAssociation_withOutputLocation(t *testing.T) { func TestAccAWSSSMAssociation_withScheduleExpression(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMAssociationDestroy, @@ -817,3 +849,41 @@ resource "aws_ssm_association" "foo" { } `, rName, assocName) } + +func testAccAWSSSMAssociationConfigWithAssociationNameAndScheduleExpression(rName, associationName, scheduleExpression string) string { + return fmt.Sprintf(` +resource "aws_ssm_document" "test" { + name = 
"test_document_association-%s", + document_type = "Command" + content = < 1 { ids = strings.Join(account_ids, ",") - } else { - ids = "" } if ids == "" { @@ -427,16 +477,25 @@ func deleteDocumentPermissions(d *schema.ResourceData, meta interface{}) error { log.Printf("[INFO] Removing permissions from document: %s", d.Id()) + permission := d.Get("permissions").(map[string]interface{}) + var accountsToRemove []*string + if permission["account_ids"] != nil { + accountsToRemove = aws.StringSlice([]string{permission["account_ids"].(string)}) + if strings.Contains(permission["account_ids"].(string), ",") { + accountsToRemove = aws.StringSlice(strings.Split(permission["account_ids"].(string), ",")) + } + } + permInput := &ssm.ModifyDocumentPermissionInput{ Name: aws.String(d.Get("name").(string)), PermissionType: aws.String("Share"), - AccountIdsToRemove: aws.StringSlice(strings.Split("all", ",")), + AccountIdsToRemove: accountsToRemove, } _, err := ssmconn.ModifyDocumentPermission(permInput) if err != nil { - return errwrap.Wrapf("[ERROR] Error removing permissions for SSM document: {{err}}", err) + return fmt.Errorf("Error removing permissions for SSM document: %s", err) } return nil @@ -465,7 +524,7 @@ func updateAwsSSMDocument(d *schema.ResourceData, meta interface{}) error { newDefaultVersion = d.Get("latest_version").(string) } else if err != nil { - return errwrap.Wrapf("Error updating SSM document: {{err}}", err) + return fmt.Errorf("Error updating SSM document: %s", err) } else { log.Printf("[INFO] Updating the default version to the new version %s: %s", newDefaultVersion, d.Id()) newDefaultVersion = *updated.DocumentDescription.DocumentVersion @@ -479,7 +538,7 @@ func updateAwsSSMDocument(d *schema.ResourceData, meta interface{}) error { _, err = ssmconn.UpdateDocumentDefaultVersion(updateDefaultInput) if err != nil { - return errwrap.Wrapf("Error updating the default document version to that of the updated document: {{err}}", err) + return fmt.Errorf("Error updating the default document version to that of the updated document: %s", err) } return nil } diff --git a/aws/resource_aws_ssm_document_test.go b/aws/resource_aws_ssm_document_test.go index c128e5423ca..91fab3f3456 100644 --- a/aws/resource_aws_ssm_document_test.go +++ b/aws/resource_aws_ssm_document_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "regexp" "testing" "github.com/aws/aws-sdk-go/aws" @@ -14,16 +15,19 @@ import ( func TestAccAWSSSMDocument_basic(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMDocumentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSSMDocumentBasicConfig(name), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), resource.TestCheckResourceAttr("aws_ssm_document.foo", "document_format", "JSON"), + resource.TestMatchResourceAttr("aws_ssm_document.foo", "arn", + regexp.MustCompile(`^arn:aws:ssm:[a-z]{2}-[a-z]+-\d{1}:\d{12}:document/.*$`)), + resource.TestCheckResourceAttr("aws_ssm_document.foo", "tags.%", "0"), ), }, }, @@ -32,12 +36,12 @@ func TestAccAWSSSMDocument_basic(t *testing.T) { func TestAccAWSSSMDocument_update(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckAWSSSMDocumentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSSMDocument20Config(name), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), @@ -49,7 +53,7 @@ func TestAccAWSSSMDocument_update(t *testing.T) { "aws_ssm_document.foo", "default_version", "1"), ), }, - resource.TestStep{ + { Config: testAccAWSSSMDocument20UpdatedConfig(name), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), @@ -63,15 +67,15 @@ func TestAccAWSSSMDocument_update(t *testing.T) { }) } -func TestAccAWSSSMDocument_permission(t *testing.T) { +func TestAccAWSSSMDocument_permission_public(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMDocumentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSSSMDocumentPermissionConfig(name), + { + Config: testAccAWSSSMDocumentPublicPermissionConfig(name), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), resource.TestCheckResourceAttr( @@ -84,14 +88,75 @@ func TestAccAWSSSMDocument_permission(t *testing.T) { }) } +func TestAccAWSSSMDocument_permission_private(t *testing.T) { + name := acctest.RandString(10) + ids := "123456789012" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSSMDocumentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSSMDocumentPrivatePermissionConfig(name, ids), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), + resource.TestCheckResourceAttr( + "aws_ssm_document.foo", "permissions.type", "Share"), + ), + }, + }, + }) +} + +func TestAccAWSSSMDocument_permission_change(t *testing.T) { + name := acctest.RandString(10) + idsInitial := "123456789012,123456789013" + idsRemove := "123456789012" + idsAdd := "123456789012,123456789014" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSSMDocumentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSSMDocumentPrivatePermissionConfig(name, idsInitial), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), + resource.TestCheckResourceAttr( + "aws_ssm_document.foo", "permissions.type", "Share"), + resource.TestCheckResourceAttr("aws_ssm_document.foo", "permissions.account_ids", idsInitial), + ), + }, + { + Config: testAccAWSSSMDocumentPrivatePermissionConfig(name, idsRemove), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), + resource.TestCheckResourceAttr( + "aws_ssm_document.foo", "permissions.type", "Share"), + resource.TestCheckResourceAttr("aws_ssm_document.foo", "permissions.account_ids", idsRemove), + ), + }, + { + Config: testAccAWSSSMDocumentPrivatePermissionConfig(name, idsAdd), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), + resource.TestCheckResourceAttr( + "aws_ssm_document.foo", "permissions.type", "Share"), + resource.TestCheckResourceAttr("aws_ssm_document.foo", "permissions.account_ids", idsAdd), + ), + }, + }, + }) +} + func TestAccAWSSSMDocument_params(t *testing.T) { name := acctest.RandString(10) 
- resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMDocumentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSSMDocumentParamConfig(name), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), @@ -115,12 +180,12 @@ func TestAccAWSSSMDocument_params(t *testing.T) { func TestAccAWSSSMDocument_automation(t *testing.T) { name := acctest.RandString(10) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMDocumentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSSSMDocumentTypeAutomationConfig(name), Check: resource.ComposeTestCheckFunc( testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), @@ -132,6 +197,25 @@ func TestAccAWSSSMDocument_automation(t *testing.T) { }) } +func TestAccAWSSSMDocument_session(t *testing.T) { + name := acctest.RandString(10) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSSMDocumentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSSMDocumentTypeSessionConfig(name), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMDocumentExists("aws_ssm_document.foo"), + resource.TestCheckResourceAttr( + "aws_ssm_document.foo", "document_type", "Session"), + ), + }, + }, + }) +} + func TestAccAWSSSMDocument_DocumentFormat_YAML(t *testing.T) { name := acctest.RandString(10) content1 := ` @@ -156,7 +240,7 @@ mainSteps: runCommand: - Get-Process ` - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSSSMDocumentDestroy, @@ -181,6 +265,44 @@ mainSteps: }) } +func TestAccAWSSSMDocument_Tags(t *testing.T) { + rName := acctest.RandString(10) + resourceName := "aws_ssm_document.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSSMDocumentDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSSMDocumentConfig_Tags_Single(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMDocumentExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + Config: testAccAWSSSMDocumentConfig_Tags_Multiple(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMDocumentExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSSSMDocumentConfig_Tags_Single(rName, "key2", "value2updated"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSSSMDocumentExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2updated"), + ), + }, + }, + }) +} + func testAccCheckAWSSSMDocumentExists(n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := 
s.RootModule().Resources[n] @@ -329,7 +451,7 @@ DOC `, rName) } -func testAccAWSSSMDocumentPermissionConfig(rName string) string { +func testAccAWSSSMDocumentPublicPermissionConfig(rName string) string { return fmt.Sprintf(` resource "aws_ssm_document" "foo" { name = "test_document-%s" @@ -363,6 +485,40 @@ DOC `, rName) } +func testAccAWSSSMDocumentPrivatePermissionConfig(rName string, rIds string) string { + return fmt.Sprintf(` +resource "aws_ssm_document" "foo" { + name = "test_document-%s" + document_type = "Command" + + permissions = { + type = "Share" + account_ids = "%s" + } + + content = < 0 && err == nil { + t.Fatalf("expected %q to trigger an error", tc.Input) + } + if gatewayARN != tc.ExpectedGatewayARN { + t.Fatalf("expected %q to return Gateway ARN %q, received: %s", tc.Input, tc.ExpectedGatewayARN, gatewayARN) + } + if diskID != tc.ExpectedDiskID { + t.Fatalf("expected %q to return Disk ID %q, received: %s", tc.Input, tc.ExpectedDiskID, diskID) + } + } +} + +func TestAccAWSStorageGatewayCache_FileGateway(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_cache.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + // Storage Gateway API does not support removing caches + // CheckDestroy: testAccCheckAWSStorageGatewayCacheDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayCacheConfig_FileGateway(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayCacheExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "disk_id"), + resource.TestMatchResourceAttr(resourceName, "gateway_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:gateway/sgw-.+$`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayCache_TapeAndVolumeGateway(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_cache.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + // Storage Gateway API does not support removing caches + // CheckDestroy: testAccCheckAWSStorageGatewayCacheDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayCacheConfig_TapeAndVolumeGateway(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayCacheExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "disk_id"), + resource.TestMatchResourceAttr(resourceName, "gateway_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:gateway/sgw-.+$`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSStorageGatewayCacheExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + + gatewayARN, diskID, err := decodeStorageGatewayCacheID(rs.Primary.ID) + if err != nil { + return err + } + + input := &storagegateway.DescribeCacheInput{ + GatewayARN: aws.String(gatewayARN), + } + + output, err := conn.DescribeCache(input) + + if err != nil { + return fmt.Errorf("error reading Storage Gateway cache: %s", err) + } + + if output == nil || len(output.DiskIds) == 0 { + 
return fmt.Errorf("Storage Gateway cache %q not found", rs.Primary.ID) + } + + for _, existingDiskID := range output.DiskIds { + if aws.StringValue(existingDiskID) == diskID { + return nil + } + } + + return fmt.Errorf("Storage Gateway cache %q not found", rs.Primary.ID) + } +} + +func testAccAWSStorageGatewayCacheConfig_FileGateway(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = "10" + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdb" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_cache" "test" { + disk_id = "${data.aws_storagegateway_local_disk.test.id}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +`, rName) +} + +func testAccAWSStorageGatewayCacheConfig_TapeAndVolumeGateway(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_Cached(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = "10" + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdc" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_cache" "test" { + # ACCEPTANCE TESTING WORKAROUND: + # Data sources are not refreshed before plan after apply in TestStep + # Step 0 error: After applying this step, the plan was not empty: + # disk_id: "0b68f77a-709b-4c79-ad9d-d7728014b291" => "/dev/xvdc" (forces new resource) + # We expect this data source value to change due to how Storage Gateway works. 
+ lifecycle { + ignore_changes = ["disk_id"] + } + + disk_id = "${data.aws_storagegateway_local_disk.test.id}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +`, rName) +} diff --git a/aws/resource_aws_storagegateway_cached_iscsi_volume.go b/aws/resource_aws_storagegateway_cached_iscsi_volume.go new file mode 100644 index 00000000000..ff6a18022fd --- /dev/null +++ b/aws/resource_aws_storagegateway_cached_iscsi_volume.go @@ -0,0 +1,236 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/hashicorp/terraform/helper/resource" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsStorageGatewayCachedIscsiVolume() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsStorageGatewayCachedIscsiVolumeCreate, + Read: resourceAwsStorageGatewayCachedIscsiVolumeRead, + Delete: resourceAwsStorageGatewayCachedIscsiVolumeDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "chap_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "gateway_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateArn, + }, + "lun_number": { + Type: schema.TypeInt, + Computed: true, + }, + // Poor API naming: this accepts the IP address of the network interface + "network_interface_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "network_interface_port": { + Type: schema.TypeInt, + Computed: true, + }, + "snapshot_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "source_volume_arn": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validateArn, + }, + "target_arn": { + Type: schema.TypeString, + Computed: true, + }, + "target_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "volume_arn": { + Type: schema.TypeString, + Computed: true, + }, + "volume_id": { + Type: schema.TypeString, + Computed: true, + }, + "volume_size_in_bytes": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsStorageGatewayCachedIscsiVolumeCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.CreateCachediSCSIVolumeInput{ + ClientToken: aws.String(resource.UniqueId()), + GatewayARN: aws.String(d.Get("gateway_arn").(string)), + NetworkInterfaceId: aws.String(d.Get("network_interface_id").(string)), + TargetName: aws.String(d.Get("target_name").(string)), + VolumeSizeInBytes: aws.Int64(int64(d.Get("volume_size_in_bytes").(int))), + } + + if v, ok := d.GetOk("snapshot_id"); ok { + input.SnapshotId = aws.String(v.(string)) + } + + if v, ok := d.GetOk("source_volume_arn"); ok { + input.SourceVolumeARN = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Creating Storage Gateway cached iSCSI volume: %s", input) + output, err := conn.CreateCachediSCSIVolume(input) + if err != nil { + return fmt.Errorf("error creating Storage Gateway cached iSCSI volume: %s", err) + } + + d.SetId(aws.StringValue(output.VolumeARN)) + + return resourceAwsStorageGatewayCachedIscsiVolumeRead(d, meta) +} + +func resourceAwsStorageGatewayCachedIscsiVolumeRead(d *schema.ResourceData, meta interface{}) error { + conn := 
meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.DescribeCachediSCSIVolumesInput{ + VolumeARNs: []*string{aws.String(d.Id())}, + } + + log.Printf("[DEBUG] Reading Storage Gateway cached iSCSI volume: %s", input) + output, err := conn.DescribeCachediSCSIVolumes(input) + + if err != nil { + if isAWSErr(err, storagegateway.ErrorCodeVolumeNotFound, "") { + log.Printf("[WARN] Storage Gateway cached iSCSI volume %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading Storage Gateway cached iSCSI volume %q: %s", d.Id(), err) + } + + if output == nil || len(output.CachediSCSIVolumes) == 0 || output.CachediSCSIVolumes[0] == nil || aws.StringValue(output.CachediSCSIVolumes[0].VolumeARN) != d.Id() { + log.Printf("[WARN] Storage Gateway cached iSCSI volume %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + volume := output.CachediSCSIVolumes[0] + + d.Set("arn", aws.StringValue(volume.VolumeARN)) + d.Set("snapshot_id", aws.StringValue(volume.SourceSnapshotId)) + d.Set("volume_arn", aws.StringValue(volume.VolumeARN)) + d.Set("volume_id", aws.StringValue(volume.VolumeId)) + d.Set("volume_size_in_bytes", int(aws.Int64Value(volume.VolumeSizeInBytes))) + + if volume.VolumeiSCSIAttributes != nil { + d.Set("chap_enabled", aws.BoolValue(volume.VolumeiSCSIAttributes.ChapEnabled)) + d.Set("lun_number", int(aws.Int64Value(volume.VolumeiSCSIAttributes.LunNumber))) + d.Set("network_interface_id", aws.StringValue(volume.VolumeiSCSIAttributes.NetworkInterfaceId)) + d.Set("network_interface_port", int(aws.Int64Value(volume.VolumeiSCSIAttributes.NetworkInterfacePort))) + + targetARN := aws.StringValue(volume.VolumeiSCSIAttributes.TargetARN) + d.Set("target_arn", targetARN) + + gatewayARN, targetName, err := parseStorageGatewayVolumeGatewayARNAndTargetNameFromARN(targetARN) + if err != nil { + return fmt.Errorf("error parsing Storage Gateway volume gateway ARN and target name from target ARN %q: %s", targetARN, err) + } + d.Set("gateway_arn", gatewayARN) + d.Set("target_name", targetName) + } + + return nil +} + +func resourceAwsStorageGatewayCachedIscsiVolumeDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.DeleteVolumeInput{ + VolumeARN: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting Storage Gateway cached iSCSI volume: %s", input) + err := resource.Retry(2*time.Minute, func() *resource.RetryError { + _, err := conn.DeleteVolume(input) + if err != nil { + if isAWSErr(err, storagegateway.ErrorCodeVolumeNotFound, "") { + return nil + } + // InvalidGatewayRequestException: The specified gateway is not connected. 
+ // Can occur during concurrent DeleteVolume operations + if isAWSErr(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified gateway is not connected") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return fmt.Errorf("error deleting Storage Gateway cached iSCSI volume %q: %s", d.Id(), err) + } + + return nil +} + +func parseStorageGatewayVolumeGatewayARNAndTargetNameFromARN(inputARN string) (string, string, error) { + // inputARN = arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:TargetName + targetARN, err := arn.Parse(inputARN) + if err != nil { + return "", "", err + } + // We need to get: + // * The Gateway ARN portion of the target ARN resource (gateway/sgw-12A3456B) + // * The target name portion of the target ARN resource (TargetName) + // First, let's split up the resource of the target ARN + // targetARN.Resource = gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:TargetName + expectedFormatErr := fmt.Errorf("expected resource format gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:TargetName, received: %s", targetARN.Resource) + resourceParts := strings.SplitN(targetARN.Resource, "/", 4) + if len(resourceParts) != 4 { + return "", "", expectedFormatErr + } + gatewayARN := arn.ARN{ + AccountID: targetARN.AccountID, + Partition: targetARN.Partition, + Region: targetARN.Region, + Resource: fmt.Sprintf("%s/%s", resourceParts[0], resourceParts[1]), + Service: targetARN.Service, + }.String() + // Second, let's split off the target name from the initiator name + // resourceParts[3] = iqn.1997-05.com.amazon:TargetName + initiatorParts := strings.SplitN(resourceParts[3], ":", 2) + if len(initiatorParts) != 2 { + return "", "", expectedFormatErr + } + targetName := initiatorParts[1] + return gatewayARN, targetName, nil +} diff --git a/aws/resource_aws_storagegateway_cached_iscsi_volume_test.go b/aws/resource_aws_storagegateway_cached_iscsi_volume_test.go new file mode 100644 index 00000000000..53e3f6e1e22 --- /dev/null +++ b/aws/resource_aws_storagegateway_cached_iscsi_volume_test.go @@ -0,0 +1,429 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestParseStorageGatewayVolumeGatewayARNAndTargetNameFromARN(t *testing.T) { + var testCases = []struct { + Input string + ExpectedGatewayARN string + ExpectedTargetName string + ErrCount int + }{ + { + Input: "arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:TargetName", + ExpectedGatewayARN: "arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B", + ExpectedTargetName: "TargetName", + ErrCount: 0, + }, + { + Input: "gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:TargetName", + ErrCount: 1, + }, + { + Input: "arn:aws:storagegateway:us-east-2:111122223333:target/iqn.1997-05.com.amazon:TargetName", + ErrCount: 1, + }, + { + Input: "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678", + ErrCount: 1, + }, + { + Input: "TargetName", + ErrCount: 1, + }, + { + Input: "gateway/sgw-12345678", + ErrCount: 1, + }, + { + Input: "sgw-12345678", + ErrCount: 1, + }, + } + + for _, tc := range testCases { + gatewayARN, targetName, err := 
parseStorageGatewayVolumeGatewayARNAndTargetNameFromARN(tc.Input) + if tc.ErrCount == 0 && err != nil { + t.Fatalf("expected %q not to trigger an error, received: %s", tc.Input, err) + } + if tc.ErrCount > 0 && err == nil { + t.Fatalf("expected %q to trigger an error", tc.Input) + } + if gatewayARN != tc.ExpectedGatewayARN { + t.Fatalf("expected %q to return Gateway ARN %q, received: %s", tc.Input, tc.ExpectedGatewayARN, gatewayARN) + } + if targetName != tc.ExpectedTargetName { + t.Fatalf("expected %q to return Disk ID %q, received: %s", tc.Input, tc.ExpectedTargetName, targetName) + } + } +} + +func TestAccAWSStorageGatewayCachedIscsiVolume_Basic(t *testing.T) { + var cachedIscsiVolume storagegateway.CachediSCSIVolume + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_cached_iscsi_volume.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayCachedIscsiVolumeDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayCachedIscsiVolumeConfig_Basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayCachedIscsiVolumeExists(resourceName, &cachedIscsiVolume), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+/volume/vol-.+$`)), + resource.TestCheckResourceAttr(resourceName, "chap_enabled", "false"), + resource.TestMatchResourceAttr(resourceName, "gateway_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+$`)), + resource.TestCheckResourceAttr(resourceName, "lun_number", "0"), + resource.TestMatchResourceAttr(resourceName, "network_interface_id", regexp.MustCompile(`^\d+\.\d+\.\d+\.\d+$`)), + resource.TestMatchResourceAttr(resourceName, "network_interface_port", regexp.MustCompile(`^\d+$`)), + resource.TestCheckResourceAttr(resourceName, "snapshot_id", ""), + resource.TestMatchResourceAttr(resourceName, "target_arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:storagegateway:[^:]+:\\d{12}:gateway/sgw-.+/target/iqn.1997-05.com.amazon:%s$", rName))), + resource.TestCheckResourceAttr(resourceName, "target_name", rName), + resource.TestMatchResourceAttr(resourceName, "volume_id", regexp.MustCompile(`^vol-.+$`)), + resource.TestMatchResourceAttr(resourceName, "volume_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+/volume/vol-.+$`)), + resource.TestCheckResourceAttr(resourceName, "volume_size_in_bytes", "5368709120"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayCachedIscsiVolume_SnapshotId(t *testing.T) { + var cachedIscsiVolume storagegateway.CachediSCSIVolume + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_cached_iscsi_volume.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayCachedIscsiVolumeDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayCachedIscsiVolumeConfig_SnapshotId(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayCachedIscsiVolumeExists(resourceName, &cachedIscsiVolume), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+/volume/vol-.+$`)), + 
resource.TestCheckResourceAttr(resourceName, "chap_enabled", "false"), + resource.TestMatchResourceAttr(resourceName, "gateway_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+$`)), + resource.TestCheckResourceAttr(resourceName, "lun_number", "0"), + resource.TestMatchResourceAttr(resourceName, "network_interface_id", regexp.MustCompile(`^\d+\.\d+\.\d+\.\d+$`)), + resource.TestMatchResourceAttr(resourceName, "network_interface_port", regexp.MustCompile(`^\d+$`)), + resource.TestMatchResourceAttr(resourceName, "snapshot_id", regexp.MustCompile(`^snap-.+$`)), + resource.TestMatchResourceAttr(resourceName, "target_arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:storagegateway:[^:]+:\\d{12}:gateway/sgw-.+/target/iqn.1997-05.com.amazon:%s$", rName))), + resource.TestCheckResourceAttr(resourceName, "target_name", rName), + resource.TestMatchResourceAttr(resourceName, "volume_id", regexp.MustCompile(`^vol-.+$`)), + resource.TestMatchResourceAttr(resourceName, "volume_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+/volume/vol-.+$`)), + resource.TestCheckResourceAttr(resourceName, "volume_size_in_bytes", "5368709120"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayCachedIscsiVolume_SourceVolumeArn(t *testing.T) { + t.Skip("This test can cause Storage Gateway 2.0.10.0 to enter an irrecoverable state during volume deletion.") + var cachedIscsiVolume storagegateway.CachediSCSIVolume + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_cached_iscsi_volume.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayCachedIscsiVolumeDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayCachedIscsiVolumeConfig_SourceVolumeArn(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayCachedIscsiVolumeExists(resourceName, &cachedIscsiVolume), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+/volume/vol-.+$`)), + resource.TestMatchResourceAttr(resourceName, "gateway_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+$`)), + resource.TestMatchResourceAttr(resourceName, "network_interface_id", regexp.MustCompile(`^\d+\.\d+\.\d+\.\d+$`)), + resource.TestMatchResourceAttr(resourceName, "network_interface_port", regexp.MustCompile(`^\d+$`)), + resource.TestCheckResourceAttr(resourceName, "snapshot_id", ""), + resource.TestMatchResourceAttr(resourceName, "source_volume_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+/volume/vol-.+$`)), + resource.TestMatchResourceAttr(resourceName, "target_arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:storagegateway:[^:]+:\\d{12}:gateway/sgw-.+/target/iqn.1997-05.com.amazon:%s$", rName))), + resource.TestCheckResourceAttr(resourceName, "target_name", rName), + resource.TestMatchResourceAttr(resourceName, "volume_id", regexp.MustCompile(`^vol-.+$`)), + resource.TestMatchResourceAttr(resourceName, "volume_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:\d{12}:gateway/sgw-.+/volume/vol-.+$`)), + resource.TestCheckResourceAttr(resourceName, "volume_size_in_bytes", "1073741824"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + 
ImportStateVerifyIgnore: []string{"source_volume_arn"}, + }, + }, + }) +} + +func testAccCheckAWSStorageGatewayCachedIscsiVolumeExists(resourceName string, cachedIscsiVolume *storagegateway.CachediSCSIVolume) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + + input := &storagegateway.DescribeCachediSCSIVolumesInput{ + VolumeARNs: []*string{aws.String(rs.Primary.ID)}, + } + + output, err := conn.DescribeCachediSCSIVolumes(input) + + if err != nil { + return fmt.Errorf("error reading Storage Gateway cached iSCSI volume: %s", err) + } + + if output == nil || len(output.CachediSCSIVolumes) == 0 || output.CachediSCSIVolumes[0] == nil || aws.StringValue(output.CachediSCSIVolumes[0].VolumeARN) != rs.Primary.ID { + return fmt.Errorf("Storage Gateway cached iSCSI volume %q not found", rs.Primary.ID) + } + + *cachedIscsiVolume = *output.CachediSCSIVolumes[0] + + return nil + } +} + +func testAccCheckAWSStorageGatewayCachedIscsiVolumeDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_storagegateway_cached_iscsi_volume" { + continue + } + + input := &storagegateway.DescribeCachediSCSIVolumesInput{ + VolumeARNs: []*string{aws.String(rs.Primary.ID)}, + } + + output, err := conn.DescribeCachediSCSIVolumes(input) + + if err != nil { + if isAWSErrStorageGatewayGatewayNotFound(err) { + return nil + } + if isAWSErr(err, storagegateway.ErrorCodeVolumeNotFound, "") { + return nil + } + return err + } + + if output != nil && len(output.CachediSCSIVolumes) > 0 && output.CachediSCSIVolumes[0] != nil && aws.StringValue(output.CachediSCSIVolumes[0].VolumeARN) == rs.Primary.ID { + return fmt.Errorf("Storage Gateway cached iSCSI volume %q still exists", rs.Primary.ID) + } + } + + return nil +} + +func testAccAWSStorageGatewayCachedIscsiVolumeConfig_Basic(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_Cached(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = 10 + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdc" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_cache" "test" { + # ACCEPTANCE TESTING WORKAROUND: + # Data sources are not refreshed before plan after apply in TestStep + # Step 0 error: After applying this step, the plan was not empty: + # disk_id: "0b68f77a-709b-4c79-ad9d-d7728014b291" => "/dev/xvdc" (forces new resource) + # We expect this data source value to change due to how Storage Gateway works. 
+ lifecycle { + ignore_changes = ["disk_id"] + } + + disk_id = "${data.aws_storagegateway_local_disk.test.id}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_cached_iscsi_volume" "test" { + gateway_arn = "${aws_storagegateway_cache.test.gateway_arn}" + network_interface_id = "${aws_instance.test.private_ip}" + target_name = %q + volume_size_in_bytes = 5368709120 +} +`, rName, rName) +} + +func testAccAWSStorageGatewayCachedIscsiVolumeConfig_SnapshotId(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_Cached(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "cachevolume" { + availability_zone = "${aws_instance.test.availability_zone}" + size = 10 + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdc" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.cachevolume.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_cache" "test" { + # ACCEPTANCE TESTING WORKAROUND: + # Data sources are not refreshed before plan after apply in TestStep + # Step 0 error: After applying this step, the plan was not empty: + # disk_id: "0b68f77a-709b-4c79-ad9d-d7728014b291" => "/dev/xvdc" (forces new resource) + # We expect this data source value to change due to how Storage Gateway works. + lifecycle { + ignore_changes = ["disk_id"] + } + + disk_id = "${data.aws_storagegateway_local_disk.test.id}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_ebs_volume" "snapvolume" { + availability_zone = "${aws_instance.test.availability_zone}" + size = 5 + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_ebs_snapshot" "test" { + volume_id = "${aws_ebs_volume.snapvolume.id}" + + tags { + Name = %q + } +} + +resource "aws_storagegateway_cached_iscsi_volume" "test" { + gateway_arn = "${aws_storagegateway_cache.test.gateway_arn}" + network_interface_id = "${aws_instance.test.private_ip}" + snapshot_id = "${aws_ebs_snapshot.test.id}" + target_name = %q + volume_size_in_bytes = "${aws_ebs_snapshot.test.volume_size * 1024 * 1024 * 1024}" +} +`, rName, rName, rName, rName) +} + +func testAccAWSStorageGatewayCachedIscsiVolumeConfig_SourceVolumeArn(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_Cached(rName) + fmt.Sprintf(` +data "aws_storagegateway_local_disk" "uploadbuffer" { + disk_path = "/dev/xvdb" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_upload_buffer" "test" { + # ACCEPTANCE TESTING WORKAROUND: + # Data sources are not refreshed before plan after apply in TestStep + # Step 0 error: After applying this step, the plan was not empty: + # disk_id: "0b68f77a-709b-4c79-ad9d-d7728014b291" => "/dev/xvdc" (forces new resource) + # We expect this data source value to change due to how Storage Gateway works. 
+ lifecycle { + ignore_changes = ["disk_id"] + } + + disk_id = "${data.aws_storagegateway_local_disk.uploadbuffer.id}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = 10 + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdc" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_cache" "test" { + # ACCEPTANCE TESTING WORKAROUND: + # Data sources are not refreshed before plan after apply in TestStep + # Step 0 error: After applying this step, the plan was not empty: + # disk_id: "0b68f77a-709b-4c79-ad9d-d7728014b291" => "/dev/xvdc" (forces new resource) + # We expect this data source value to change due to how Storage Gateway works. + lifecycle { + ignore_changes = ["disk_id"] + } + + disk_id = "${data.aws_storagegateway_local_disk.test.id}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_cached_iscsi_volume" "source" { + gateway_arn = "${aws_storagegateway_cache.test.gateway_arn}" + network_interface_id = "${aws_instance.test.private_ip}" + target_name = "%s-source" + volume_size_in_bytes = 1073741824 +} + +resource "aws_storagegateway_cached_iscsi_volume" "test" { + gateway_arn = "${aws_storagegateway_cache.test.gateway_arn}" + network_interface_id = "${aws_instance.test.private_ip}" + source_volume_arn = "${aws_storagegateway_cached_iscsi_volume.source.arn}" + target_name = %q + volume_size_in_bytes = "${aws_storagegateway_cached_iscsi_volume.source.volume_size_in_bytes}" +} +`, rName, rName, rName) +} diff --git a/aws/resource_aws_storagegateway_gateway.go b/aws/resource_aws_storagegateway_gateway.go new file mode 100644 index 00000000000..f1679468530 --- /dev/null +++ b/aws/resource_aws_storagegateway_gateway.go @@ -0,0 +1,443 @@ +package aws + +import ( + "fmt" + "log" + "net" + "net/http" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/customdiff" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsStorageGatewayGateway() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsStorageGatewayGatewayCreate, + Read: resourceAwsStorageGatewayGatewayRead, + Update: resourceAwsStorageGatewayGatewayUpdate, + Delete: resourceAwsStorageGatewayGatewayDelete, + CustomizeDiff: customdiff.Sequence( + customdiff.ForceNewIfChange("smb_active_directory_settings", func(old, new, meta interface{}) bool { + return len(old.([]interface{})) == 1 && len(new.([]interface{})) == 0 + }), + ), + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "activation_key": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"gateway_ip_address"}, + }, + "gateway_id": { + Type: schema.TypeString, + 
Computed: true, + }, + "gateway_ip_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"activation_key"}, + }, + "gateway_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "gateway_timezone": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "gateway_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "STORED", + ValidateFunc: validation.StringInSlice([]string{ + "CACHED", + "FILE_S3", + "STORED", + "VTL", + }, false), + }, + "medium_changer_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + "AWS-Gateway-VTL", + "STK-L700", + }, false), + }, + "smb_active_directory_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "domain_name": { + Type: schema.TypeString, + Required: true, + }, + "password": { + Type: schema.TypeString, + Required: true, + Sensitive: true, + }, + "username": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "smb_guest_password": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + "tape_drive_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + "IBM-ULT3580-TD5", + }, false), + }, + }, + } +} + +func resourceAwsStorageGatewayGatewayCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + region := meta.(*AWSClient).region + + activationKey := d.Get("activation_key").(string) + gatewayIpAddress := d.Get("gateway_ip_address").(string) + + // Perform one time fetch of activation key from gateway IP address + if activationKey == "" { + if gatewayIpAddress == "" { + return fmt.Errorf("either activation_key or gateway_ip_address must be provided") + } + + client := &http.Client{ + CheckRedirect: func(req *http.Request, via []*http.Request) error { + return http.ErrUseLastResponse + }, + Timeout: time.Second * 10, + } + + requestURL := fmt.Sprintf("http://%s/?activationRegion=%s", gatewayIpAddress, region) + log.Printf("[DEBUG] Creating HTTP request: %s", requestURL) + request, err := http.NewRequest("GET", requestURL, nil) + if err != nil { + return fmt.Errorf("error creating HTTP request: %s", err) + } + + err = resource.Retry(d.Timeout(schema.TimeoutCreate), func() *resource.RetryError { + log.Printf("[DEBUG] Making HTTP request: %s", request.URL.String()) + response, err := client.Do(request) + if err != nil { + if err, ok := err.(net.Error); ok { + errMessage := fmt.Errorf("error making HTTP request: %s", err) + log.Printf("[DEBUG] retryable %s", errMessage) + return resource.RetryableError(errMessage) + } + return resource.NonRetryableError(fmt.Errorf("error making HTTP request: %s", err)) + } + + log.Printf("[DEBUG] Received HTTP response: %#v", response) + if response.StatusCode != 302 { + return resource.NonRetryableError(fmt.Errorf("expected HTTP status code 302, received: %d", response.StatusCode)) + } + + redirectURL, err := response.Location() + if err != nil { + return resource.NonRetryableError(fmt.Errorf("error extracting HTTP Location header: %s", err)) + } + + activationKey = redirectURL.Query().Get("activationKey") + + return nil + }) + if err != nil { + return fmt.Errorf("error retrieving activation key from IP Address (%s): %s", gatewayIpAddress, err) 
+ } + if activationKey == "" { + return fmt.Errorf("empty activationKey received from IP Address: %s", gatewayIpAddress) + } + } + + input := &storagegateway.ActivateGatewayInput{ + ActivationKey: aws.String(activationKey), + GatewayRegion: aws.String(region), + GatewayName: aws.String(d.Get("gateway_name").(string)), + GatewayTimezone: aws.String(d.Get("gateway_timezone").(string)), + GatewayType: aws.String(d.Get("gateway_type").(string)), + } + + if v, ok := d.GetOk("medium_changer_type"); ok { + input.MediumChangerType = aws.String(v.(string)) + } + + if v, ok := d.GetOk("tape_drive_type"); ok { + input.TapeDriveType = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Activating Storage Gateway Gateway: %s", input) + output, err := conn.ActivateGateway(input) + if err != nil { + return fmt.Errorf("error activating Storage Gateway Gateway: %s", err) + } + + d.SetId(aws.StringValue(output.GatewayARN)) + + // Gateway activations can take a few minutes + err = resource.Retry(d.Timeout(schema.TimeoutCreate), func() *resource.RetryError { + _, err := conn.DescribeGatewayInformation(&storagegateway.DescribeGatewayInformationInput{ + GatewayARN: aws.String(d.Id()), + }) + if err != nil { + if isAWSErr(err, storagegateway.ErrorCodeGatewayNotConnected, "") { + return resource.RetryableError(err) + } + if isAWSErr(err, storagegateway.ErrCodeInvalidGatewayRequestException, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + + return nil + }) + if err != nil { + return fmt.Errorf("error waiting for Storage Gateway Gateway activation: %s", err) + } + + if v, ok := d.GetOk("smb_active_directory_settings"); ok && len(v.([]interface{})) > 0 { + m := v.([]interface{})[0].(map[string]interface{}) + + input := &storagegateway.JoinDomainInput{ + DomainName: aws.String(m["domain_name"].(string)), + GatewayARN: aws.String(d.Id()), + Password: aws.String(m["password"].(string)), + UserName: aws.String(m["username"].(string)), + } + + log.Printf("[DEBUG] Storage Gateway Gateway %q joining Active Directory domain: %s", d.Id(), m["domain_name"].(string)) + _, err := conn.JoinDomain(input) + if err != nil { + return fmt.Errorf("error joining Active Directory domain: %s", err) + } + } + + if v, ok := d.GetOk("smb_guest_password"); ok && v.(string) != "" { + input := &storagegateway.SetSMBGuestPasswordInput{ + GatewayARN: aws.String(d.Id()), + Password: aws.String(v.(string)), + } + + log.Printf("[DEBUG] Storage Gateway Gateway %q setting SMB guest password", d.Id()) + _, err := conn.SetSMBGuestPassword(input) + if err != nil { + return fmt.Errorf("error setting SMB guest password: %s", err) + } + } + + return resourceAwsStorageGatewayGatewayRead(d, meta) +} + +func resourceAwsStorageGatewayGatewayRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.DescribeGatewayInformationInput{ + GatewayARN: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading Storage Gateway Gateway: %s", input) + output, err := conn.DescribeGatewayInformation(input) + if err != nil { + if isAWSErrStorageGatewayGatewayNotFound(err) { + log.Printf("[WARN] Storage Gateway Gateway %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading Storage Gateway Gateway: %s", err) + } + + smbSettingsInput := &storagegateway.DescribeSMBSettingsInput{ + GatewayARN: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading Storage Gateway SMB Settings: %s", smbSettingsInput) + 
smbSettingsOutput, err := conn.DescribeSMBSettings(smbSettingsInput)
+	if err != nil && !isAWSErr(err, storagegateway.ErrCodeInvalidGatewayRequestException, "This operation is not valid for the specified gateway") {
+		if isAWSErrStorageGatewayGatewayNotFound(err) {
+			log.Printf("[WARN] Storage Gateway Gateway %q not found - removing from state", d.Id())
+			d.SetId("")
+			return nil
+		}
+		return fmt.Errorf("error reading Storage Gateway SMB Settings: %s", err)
+	}
+
+	// The Storage Gateway API currently provides no way to read this value
+	// We allow Terraform to pass the configuration value through into the state
+	d.Set("activation_key", d.Get("activation_key").(string))
+
+	d.Set("arn", output.GatewayARN)
+	d.Set("gateway_id", output.GatewayId)
+
+	// The Storage Gateway API currently provides no way to read this value
+	// We allow Terraform to pass the configuration value through into the state
+	d.Set("gateway_ip_address", d.Get("gateway_ip_address").(string))
+
+	d.Set("gateway_name", output.GatewayName)
+	d.Set("gateway_timezone", output.GatewayTimezone)
+	d.Set("gateway_type", output.GatewayType)
+
+	// The Storage Gateway API currently provides no way to read this value
+	// We allow Terraform to pass the configuration value through into the state
+	d.Set("medium_changer_type", d.Get("medium_changer_type").(string))
+
+	// Treat the entire nested argument as a whole, based on domain name,
+	// to simplify schema and difference logic
+	if smbSettingsOutput == nil || aws.StringValue(smbSettingsOutput.DomainName) == "" {
+		if err := d.Set("smb_active_directory_settings", []interface{}{}); err != nil {
+			return fmt.Errorf("error setting smb_active_directory_settings: %s", err)
+		}
+	} else {
+		m := map[string]interface{}{
+			"domain_name": aws.StringValue(smbSettingsOutput.DomainName),
+			// The Storage Gateway API currently provides no way to read these values
+			// "password": ...,
+			// "username": ...,
+		}
+		// We must assemble these into the map from configuration or Terraform will enter ""
+		// into state and constantly show a difference (also breaking downstream references)
+		// UPDATE: aws_storagegateway_gateway.test
+		// smb_active_directory_settings.0.password: "" => "" (attribute changed)
+		// smb_active_directory_settings.0.username: "" => "Administrator"
+		if v, ok := d.GetOk("smb_active_directory_settings"); ok && len(v.([]interface{})) > 0 {
+			configM := v.([]interface{})[0].(map[string]interface{})
+			m["password"] = configM["password"]
+			m["username"] = configM["username"]
+		}
+		if err := d.Set("smb_active_directory_settings", []map[string]interface{}{m}); err != nil {
+			return fmt.Errorf("error setting smb_active_directory_settings: %s", err)
+		}
+	}
+
+	// The Storage Gateway API currently provides no way to read this value
+	// We allow Terraform to _automatically_ pass the configuration value through into the state here,
+	// as the API does tell us whether or not it is actually set,
+	// which can be used to tell Terraform to show a difference in this case
+	// as well as ensuring there is some sort of attribute value (unlike the others)
+	if smbSettingsOutput == nil || !aws.BoolValue(smbSettingsOutput.SMBGuestPasswordSet) {
+		d.Set("smb_guest_password", "")
+	}
+
+	// The Storage Gateway API currently provides no way to read this value
+	// We allow Terraform to pass the configuration value through into the state
+	d.Set("tape_drive_type", d.Get("tape_drive_type").(string))
+
+	return nil
+}
+
+func resourceAwsStorageGatewayGatewayUpdate(d *schema.ResourceData, meta
interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + if d.HasChange("gateway_name") || d.HasChange("gateway_timezone") { + input := &storagegateway.UpdateGatewayInformationInput{ + GatewayARN: aws.String(d.Id()), + GatewayName: aws.String(d.Get("gateway_name").(string)), + GatewayTimezone: aws.String(d.Get("gateway_timezone").(string)), + } + + log.Printf("[DEBUG] Updating Storage Gateway Gateway: %s", input) + _, err := conn.UpdateGatewayInformation(input) + if err != nil { + return fmt.Errorf("error updating Storage Gateway Gateway: %s", err) + } + } + + if d.HasChange("smb_active_directory_settings") { + l := d.Get("smb_active_directory_settings").([]interface{}) + m := l[0].(map[string]interface{}) + + input := &storagegateway.JoinDomainInput{ + DomainName: aws.String(m["domain_name"].(string)), + GatewayARN: aws.String(d.Id()), + Password: aws.String(m["password"].(string)), + UserName: aws.String(m["username"].(string)), + } + + log.Printf("[DEBUG] Storage Gateway Gateway %q joining Active Directory domain: %s", d.Id(), m["domain_name"].(string)) + _, err := conn.JoinDomain(input) + if err != nil { + return fmt.Errorf("error joining Active Directory domain: %s", err) + } + } + + if d.HasChange("smb_guest_password") { + input := &storagegateway.SetSMBGuestPasswordInput{ + GatewayARN: aws.String(d.Id()), + Password: aws.String(d.Get("smb_guest_password").(string)), + } + + log.Printf("[DEBUG] Storage Gateway Gateway %q setting SMB guest password", d.Id()) + _, err := conn.SetSMBGuestPassword(input) + if err != nil { + return fmt.Errorf("error setting SMB guest password: %s", err) + } + } + + return resourceAwsStorageGatewayGatewayRead(d, meta) +} + +func resourceAwsStorageGatewayGatewayDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.DeleteGatewayInput{ + GatewayARN: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting Storage Gateway Gateway: %s", input) + _, err := conn.DeleteGateway(input) + if err != nil { + if isAWSErrStorageGatewayGatewayNotFound(err) { + return nil + } + return fmt.Errorf("error deleting Storage Gateway Gateway: %s", err) + } + + return nil +} + +// The API returns multiple responses for a missing gateway +func isAWSErrStorageGatewayGatewayNotFound(err error) bool { + if isAWSErr(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified gateway was not found.") { + return true + } + if isAWSErr(err, storagegateway.ErrorCodeGatewayNotFound, "") { + return true + } + return false +} diff --git a/aws/resource_aws_storagegateway_gateway_test.go b/aws/resource_aws_storagegateway_gateway_test.go new file mode 100644 index 00000000000..fc9af605088 --- /dev/null +++ b/aws/resource_aws_storagegateway_gateway_test.go @@ -0,0 +1,747 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_storagegateway_gateway", &resource.Sweeper{ + Name: "aws_storagegateway_gateway", + F: testSweepStorageGatewayGateways, + }) +} + +func testSweepStorageGatewayGateways(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := 
client.(*AWSClient).storagegatewayconn + + err = conn.ListGatewaysPages(&storagegateway.ListGatewaysInput{}, func(page *storagegateway.ListGatewaysOutput, isLast bool) bool { + if len(page.Gateways) == 0 { + log.Print("[DEBUG] No Storage Gateway Gateways to sweep") + return true + } + + for _, gateway := range page.Gateways { + name := aws.StringValue(gateway.GatewayName) + if !strings.HasPrefix(name, "tf-acc-test-") { + log.Printf("[INFO] Skipping Storage Gateway Gateway: %s", name) + continue + } + log.Printf("[INFO] Deleting Storage Gateway Gateway: %s", name) + input := &storagegateway.DeleteGatewayInput{ + GatewayARN: gateway.GatewayARN, + } + + _, err := conn.DeleteGateway(input) + if err != nil { + if isAWSErr(err, storagegateway.ErrorCodeGatewayNotFound, "") { + continue + } + log.Printf("[ERROR] Failed to delete Storage Gateway Gateway (%s): %s", name, err) + } + } + + return !isLast + }) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Storage Gateway Gateway sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error retrieving Storage Gateway Gateways: %s", err) + } + return nil +} + +func TestAccAWSStorageGatewayGateway_GatewayType_Cached(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayGatewayConfig_GatewayType_Cached(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:gateway/sgw-.+$`)), + resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), + resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), + resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), + resource.TestCheckResourceAttr(resourceName, "gateway_type", "CACHED"), + resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), + resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), + resource.TestCheckResourceAttr(resourceName, "smb_guest_password", ""), + resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address"}, + }, + }, + }) +} + +func TestAccAWSStorageGatewayGateway_GatewayType_FileS3(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:gateway/sgw-.+$`)), + resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), 
+ resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), + resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), + resource.TestCheckResourceAttr(resourceName, "gateway_type", "FILE_S3"), + resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), + resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), + resource.TestCheckResourceAttr(resourceName, "smb_guest_password", ""), + resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address"}, + }, + }, + }) +} + +func TestAccAWSStorageGatewayGateway_GatewayType_Stored(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayGatewayConfig_GatewayType_Stored(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:gateway/sgw-.+$`)), + resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), + resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), + resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), + resource.TestCheckResourceAttr(resourceName, "gateway_type", "STORED"), + resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), + resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), + resource.TestCheckResourceAttr(resourceName, "smb_guest_password", ""), + resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address"}, + }, + }, + }) +} + +func TestAccAWSStorageGatewayGateway_GatewayType_Vtl(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayGatewayConfig_GatewayType_Vtl(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestMatchResourceAttr(resourceName, "arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:gateway/sgw-.+$`)), + resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), + resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), + resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), + resource.TestCheckResourceAttr(resourceName, "gateway_type", "VTL"), + resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), + resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), + resource.TestCheckResourceAttr(resourceName, "smb_guest_password", 
""), + resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address"}, + }, + }, + }) +} + +func TestAccAWSStorageGatewayGateway_GatewayName(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName1 := acctest.RandomWithPrefix("tf-acc-test") + rName2 := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "gateway_name", rName1), + ), + }, + { + Config: testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "gateway_name", rName2), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address"}, + }, + }, + }) +} + +func TestAccAWSStorageGatewayGateway_GatewayTimezone(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayGatewayConfig_GatewayTimezone(rName, "GMT-1:00"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT-1:00"), + ), + }, + { + Config: testAccAWSStorageGatewayGatewayConfig_GatewayTimezone(rName, "GMT-2:00"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT-2:00"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address"}, + }, + }, + }) +} + +func TestAccAWSStorageGatewayGateway_SmbActiveDirectorySettings(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayGatewayConfig_SmbActiveDirectorySettings(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "1"), + resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.0.domain_name", "terraformtesting.com"), + 
), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address", "smb_active_directory_settings"}, + }, + }, + }) +} + +func TestAccAWSStorageGatewayGateway_SmbGuestPassword(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayGatewayConfig_SmbGuestPassword(rName, "myguestpassword1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "smb_guest_password", "myguestpassword1"), + ), + }, + { + Config: testAccAWSStorageGatewayGatewayConfig_SmbGuestPassword(rName, "myguestpassword2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "smb_guest_password", "myguestpassword2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address", "smb_guest_password"}, + }, + }, + }) +} + +func testAccCheckAWSStorageGatewayGatewayDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_storagegateway_gateway" { + continue + } + + input := &storagegateway.DescribeGatewayInformationInput{ + GatewayARN: aws.String(rs.Primary.ID), + } + + _, err := conn.DescribeGatewayInformation(input) + + if err != nil { + if isAWSErrStorageGatewayGatewayNotFound(err) { + return nil + } + return err + } + } + + return nil + +} + +func testAccCheckAWSStorageGatewayGatewayExists(resourceName string, gateway *storagegateway.DescribeGatewayInformationOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + input := &storagegateway.DescribeGatewayInformationInput{ + GatewayARN: aws.String(rs.Primary.ID), + } + + output, err := conn.DescribeGatewayInformation(input) + + if err != nil { + return err + } + + if output == nil { + return fmt.Errorf("Gateway %q does not exist", rs.Primary.ID) + } + + *gateway = *output + + return nil + } +} + +// testAccAWSStorageGateway_VPCBase provides a publicly accessible subnet +// and security group, suitable for Storage Gateway EC2 instances of any type +func testAccAWSStorageGateway_VPCBase(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_subnet" "test" { + cidr_block = "10.0.0.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_internet_gateway" "test" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_route_table" "test" { + vpc_id = "${aws_vpc.test.id}" + + route { + cidr_block = "0.0.0.0/0" + gateway_id = "${aws_internet_gateway.test.id}" + } + + tags { + Name = %q + } +} + +resource "aws_route_table_association" "test" { + 
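+  # Associate the public route table with the subnet so the gateway appliance
+  # launched below is reachable from the test runner for activation.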
subnet_id = "${aws_subnet.test.id}" + route_table_id = "${aws_route_table.test.id}" +} + +resource "aws_security_group" "test" { + vpc_id = "${aws_vpc.test.id}" + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = %q + } +} +`, rName, rName, rName, rName, rName) +} + +// testAccAWSStorageGateway_FileGatewayBase uses the "thinstaller" Storage +// Gateway AMI for File Gateways +func testAccAWSStorageGateway_FileGatewayBase(rName string) string { + return testAccAWSStorageGateway_VPCBase(rName) + fmt.Sprintf(` +data "aws_ami" "aws-thinstaller" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["aws-thinstaller-*"] + } +} + +resource "aws_instance" "test" { + depends_on = ["aws_internet_gateway.test"] + + ami = "${data.aws_ami.aws-thinstaller.id}" + associate_public_ip_address = true + # https://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html + instance_type = "m4.xlarge" + vpc_security_group_ids = ["${aws_security_group.test.id}"] + subnet_id = "${aws_subnet.test.id}" + + tags { + Name = %q + } +} +`, rName) +} + +// testAccAWSStorageGateway_TapeAndVolumeGatewayBase uses the Storage Gateway +// AMI for either Tape or Volume Gateways +func testAccAWSStorageGateway_TapeAndVolumeGatewayBase(rName string) string { + return testAccAWSStorageGateway_VPCBase(rName) + fmt.Sprintf(` +data "aws_ami" "aws-storage-gateway-2" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["aws-storage-gateway-2.*"] + } +} + +resource "aws_instance" "test" { + depends_on = ["aws_internet_gateway.test"] + + ami = "${data.aws_ami.aws-storage-gateway-2.id}" + associate_public_ip_address = true + # https://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html + instance_type = "m4.xlarge" + vpc_security_group_ids = ["${aws_security_group.test.id}"] + subnet_id = "${aws_subnet.test.id}" + + tags { + Name = %q + } +} +`, rName) +} + +func testAccAWSStorageGatewayGatewayConfig_GatewayType_Cached(rName string) string { + return testAccAWSStorageGateway_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` +resource "aws_storagegateway_gateway" "test" { + gateway_ip_address = "${aws_instance.test.public_ip}" + gateway_name = %q + gateway_timezone = "GMT" + gateway_type = "CACHED" +} +`, rName) +} + +func testAccAWSStorageGatewayGatewayConfig_GatewayType_FileS3(rName string) string { + return testAccAWSStorageGateway_FileGatewayBase(rName) + fmt.Sprintf(` +resource "aws_storagegateway_gateway" "test" { + gateway_ip_address = "${aws_instance.test.public_ip}" + gateway_name = %q + gateway_timezone = "GMT" + gateway_type = "FILE_S3" +} +`, rName) +} + +func testAccAWSStorageGatewayGatewayConfig_GatewayType_Stored(rName string) string { + return testAccAWSStorageGateway_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` +resource "aws_storagegateway_gateway" "test" { + gateway_ip_address = "${aws_instance.test.public_ip}" + gateway_name = %q + gateway_timezone = "GMT" + gateway_type = "STORED" +} +`, rName) +} + +func testAccAWSStorageGatewayGatewayConfig_GatewayType_Vtl(rName string) string { + return testAccAWSStorageGateway_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` +resource "aws_storagegateway_gateway" "test" { + gateway_ip_address = "${aws_instance.test.public_ip}" + gateway_name = %q + 
gateway_timezone = "GMT" + gateway_type = "VTL" +} +`, rName) +} + +func testAccAWSStorageGatewayGatewayConfig_GatewayTimezone(rName, gatewayTimezone string) string { + return testAccAWSStorageGateway_FileGatewayBase(rName) + fmt.Sprintf(` +resource "aws_storagegateway_gateway" "test" { + gateway_ip_address = "${aws_instance.test.public_ip}" + gateway_name = %q + gateway_timezone = %q + gateway_type = "FILE_S3" +} +`, rName, gatewayTimezone) +} + +func testAccAWSStorageGatewayGatewayConfig_SmbActiveDirectorySettings(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags { + Name = %q + } +} + +resource "aws_subnet" "test" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "10.0.${count.index}.0/24" + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_internet_gateway" "test" { + vpc_id = "${aws_vpc.test.id}" + + tags { + Name = %q + } +} + +resource "aws_route_table" "test" { + vpc_id = "${aws_vpc.test.id}" + + route { + cidr_block = "0.0.0.0/0" + gateway_id = "${aws_internet_gateway.test.id}" + } + + tags { + Name = %q + } +} + +resource "aws_route_table_association" "test" { + count = 2 + + subnet_id = "${aws_subnet.test.*.id[count.index]}" + route_table_id = "${aws_route_table.test.id}" +} + +resource "aws_security_group" "test" { + vpc_id = "${aws_vpc.test.id}" + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = %q + } +} + +resource "aws_directory_service_directory" "test" { + name = "terraformtesting.com" + password = "SuperSecretPassw0rd" + size = "Small" + + vpc_settings { + subnet_ids = ["${aws_subnet.test.*.id}"] + vpc_id = "${aws_vpc.test.id}" + } + + tags { + Name = %q + } +} + +resource "aws_vpc_dhcp_options" "test" { + domain_name = "${aws_directory_service_directory.test.name}" + domain_name_servers = ["${aws_directory_service_directory.test.dns_ip_addresses}"] + + tags { + Name = %q + } +} + +resource "aws_vpc_dhcp_options_association" "test" { + dhcp_options_id = "${aws_vpc_dhcp_options.test.id}" + vpc_id = "${aws_vpc.test.id}" +} + +data "aws_ami" "aws-thinstaller" { + most_recent = true + + filter { + name = "owner-alias" + values = ["amazon"] + } + + filter { + name = "name" + values = ["aws-thinstaller-*"] + } +} + +resource "aws_instance" "test" { + depends_on = ["aws_internet_gateway.test", "aws_vpc_dhcp_options_association.test"] + + ami = "${data.aws_ami.aws-thinstaller.id}" + associate_public_ip_address = true + # https://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html + instance_type = "m4.xlarge" + vpc_security_group_ids = ["${aws_security_group.test.id}"] + subnet_id = "${aws_subnet.test.*.id[0]}" + + tags { + Name = %q + } +} + +resource "aws_storagegateway_gateway" "test" { + gateway_ip_address = "${aws_instance.test.public_ip}" + gateway_name = %q + gateway_timezone = "GMT" + gateway_type = "FILE_S3" + + smb_active_directory_settings { + domain_name = "${aws_directory_service_directory.test.name}" + password = "${aws_directory_service_directory.test.password}" + username = "Administrator" + } +} +`, rName, rName, rName, rName, rName, rName, rName, rName, rName) +} + +func testAccAWSStorageGatewayGatewayConfig_SmbGuestPassword(rName, smbGuestPassword string) string { + return 
testAccAWSStorageGateway_FileGatewayBase(rName) + fmt.Sprintf(` +resource "aws_storagegateway_gateway" "test" { + gateway_ip_address = "${aws_instance.test.public_ip}" + gateway_name = %q + gateway_timezone = "GMT" + gateway_type = "FILE_S3" + smb_guest_password = %q +} +`, rName, smbGuestPassword) +} diff --git a/aws/resource_aws_storagegateway_nfs_file_share.go b/aws/resource_aws_storagegateway_nfs_file_share.go new file mode 100644 index 00000000000..4e80b00454e --- /dev/null +++ b/aws/resource_aws_storagegateway_nfs_file_share.go @@ -0,0 +1,392 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsStorageGatewayNfsFileShare() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsStorageGatewayNfsFileShareCreate, + Read: resourceAwsStorageGatewayNfsFileShareRead, + Update: resourceAwsStorageGatewayNfsFileShareUpdate, + Delete: resourceAwsStorageGatewayNfsFileShareDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "client_list": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + MaxItems: 100, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "default_storage_class": { + Type: schema.TypeString, + Optional: true, + Default: "S3_STANDARD", + ValidateFunc: validation.StringInSlice([]string{ + "S3_ONEZONE_IA", + "S3_STANDARD_IA", + "S3_STANDARD", + }, false), + }, + "fileshare_id": { + Type: schema.TypeString, + Computed: true, + }, + "gateway_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateArn, + }, + "guess_mime_type_enabled": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "kms_encrypted": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "kms_key_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateArn, + }, + "location_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateArn, + }, + "nfs_file_share_defaults": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "directory_mode": { + Type: schema.TypeString, + Optional: true, + Default: "0777", + }, + "file_mode": { + Type: schema.TypeString, + Optional: true, + Default: "0666", + }, + "group_id": { + Type: schema.TypeInt, + Optional: true, + Default: 65534, + ValidateFunc: validation.IntAtLeast(0), + }, + "owner_id": { + Type: schema.TypeInt, + Optional: true, + Default: 65534, + ValidateFunc: validation.IntAtLeast(0), + }, + }, + }, + }, + "object_acl": { + Type: schema.TypeString, + Optional: true, + Default: storagegateway.ObjectACLPrivate, + ValidateFunc: validation.StringInSlice([]string{ + storagegateway.ObjectACLAuthenticatedRead, + storagegateway.ObjectACLAwsExecRead, + storagegateway.ObjectACLBucketOwnerFullControl, + storagegateway.ObjectACLBucketOwnerRead, + storagegateway.ObjectACLPrivate, + storagegateway.ObjectACLPublicRead, + 
storagegateway.ObjectACLPublicReadWrite, + }, false), + }, + "read_only": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "requester_pays": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "role_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateArn, + }, + "squash": { + Type: schema.TypeString, + Optional: true, + Default: "RootSquash", + ValidateFunc: validation.StringInSlice([]string{ + "AllSquash", + "NoSquash", + "RootSquash", + }, false), + }, + }, + } +} + +func resourceAwsStorageGatewayNfsFileShareCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.CreateNFSFileShareInput{ + ClientList: expandStringSet(d.Get("client_list").(*schema.Set)), + ClientToken: aws.String(resource.UniqueId()), + DefaultStorageClass: aws.String(d.Get("default_storage_class").(string)), + GatewayARN: aws.String(d.Get("gateway_arn").(string)), + GuessMIMETypeEnabled: aws.Bool(d.Get("guess_mime_type_enabled").(bool)), + KMSEncrypted: aws.Bool(d.Get("kms_encrypted").(bool)), + LocationARN: aws.String(d.Get("location_arn").(string)), + NFSFileShareDefaults: expandStorageGatewayNfsFileShareDefaults(d.Get("nfs_file_share_defaults").([]interface{})), + ObjectACL: aws.String(d.Get("object_acl").(string)), + ReadOnly: aws.Bool(d.Get("read_only").(bool)), + RequesterPays: aws.Bool(d.Get("requester_pays").(bool)), + Role: aws.String(d.Get("role_arn").(string)), + Squash: aws.String(d.Get("squash").(string)), + } + + if v, ok := d.GetOk("kms_key_arn"); ok && v.(string) != "" { + input.KMSKey = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Creating Storage Gateway NFS File Share: %s", input) + output, err := conn.CreateNFSFileShare(input) + if err != nil { + return fmt.Errorf("error creating Storage Gateway NFS File Share: %s", err) + } + + d.SetId(aws.StringValue(output.FileShareARN)) + + stateConf := &resource.StateChangeConf{ + Pending: []string{"CREATING", "MISSING"}, + Target: []string{"AVAILABLE"}, + Refresh: storageGatewayNfsFileShareRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutCreate), + Delay: 5 * time.Second, + MinTimeout: 5 * time.Second, + } + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for Storage Gateway NFS File Share creation: %s", err) + } + + return resourceAwsStorageGatewayNfsFileShareRead(d, meta) +} + +func resourceAwsStorageGatewayNfsFileShareRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.DescribeNFSFileSharesInput{ + FileShareARNList: []*string{aws.String(d.Id())}, + } + + log.Printf("[DEBUG] Reading Storage Gateway NFS File Share: %s", input) + output, err := conn.DescribeNFSFileShares(input) + if err != nil { + if isAWSErr(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") { + log.Printf("[WARN] Storage Gateway NFS File Share %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading Storage Gateway NFS File Share: %s", err) + } + + if output == nil || len(output.NFSFileShareInfoList) == 0 || output.NFSFileShareInfoList[0] == nil { + log.Printf("[WARN] Storage Gateway NFS File Share %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + fileshare := output.NFSFileShareInfoList[0] + + d.Set("arn", fileshare.FileShareARN) + + if err := d.Set("client_list", 
schema.NewSet(schema.HashString, flattenStringList(fileshare.ClientList))); err != nil { + return fmt.Errorf("error setting client_list: %s", err) + } + + d.Set("default_storage_class", fileshare.DefaultStorageClass) + d.Set("fileshare_id", fileshare.FileShareId) + d.Set("gateway_arn", fileshare.GatewayARN) + d.Set("guess_mime_type_enabled", fileshare.GuessMIMETypeEnabled) + d.Set("kms_encrypted", fileshare.KMSEncrypted) + d.Set("kms_key_arn", fileshare.KMSKey) + d.Set("location_arn", fileshare.LocationARN) + + if err := d.Set("nfs_file_share_defaults", flattenStorageGatewayNfsFileShareDefaults(fileshare.NFSFileShareDefaults)); err != nil { + return fmt.Errorf("error setting nfs_file_share_defaults: %s", err) + } + + d.Set("object_acl", fileshare.ObjectACL) + d.Set("path", fileshare.Path) + d.Set("read_only", fileshare.ReadOnly) + d.Set("requester_pays", fileshare.RequesterPays) + d.Set("role_arn", fileshare.Role) + d.Set("squash", fileshare.Squash) + + return nil +} + +func resourceAwsStorageGatewayNfsFileShareUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.UpdateNFSFileShareInput{ + ClientList: expandStringSet(d.Get("client_list").(*schema.Set)), + DefaultStorageClass: aws.String(d.Get("default_storage_class").(string)), + FileShareARN: aws.String(d.Id()), + GuessMIMETypeEnabled: aws.Bool(d.Get("guess_mime_type_enabled").(bool)), + KMSEncrypted: aws.Bool(d.Get("kms_encrypted").(bool)), + NFSFileShareDefaults: expandStorageGatewayNfsFileShareDefaults(d.Get("nfs_file_share_defaults").([]interface{})), + ObjectACL: aws.String(d.Get("object_acl").(string)), + ReadOnly: aws.Bool(d.Get("read_only").(bool)), + RequesterPays: aws.Bool(d.Get("requester_pays").(bool)), + Squash: aws.String(d.Get("squash").(string)), + } + + if v, ok := d.GetOk("kms_key_arn"); ok && v.(string) != "" { + input.KMSKey = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Updating Storage Gateway NFS File Share: %s", input) + _, err := conn.UpdateNFSFileShare(input) + if err != nil { + return fmt.Errorf("error updating Storage Gateway NFS File Share: %s", err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"UPDATING"}, + Target: []string{"AVAILABLE"}, + Refresh: storageGatewayNfsFileShareRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutUpdate), + Delay: 5 * time.Second, + MinTimeout: 5 * time.Second, + } + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf("error waiting for Storage Gateway NFS File Share update: %s", err) + } + + return resourceAwsStorageGatewayNfsFileShareRead(d, meta) +} + +func resourceAwsStorageGatewayNfsFileShareDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + input := &storagegateway.DeleteFileShareInput{ + FileShareARN: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Deleting Storage Gateway NFS File Share: %s", input) + _, err := conn.DeleteFileShare(input) + if err != nil { + if isAWSErr(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") { + return nil + } + return fmt.Errorf("error deleting Storage Gateway NFS File Share: %s", err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"AVAILABLE", "DELETING", "FORCE_DELETING"}, + Target: []string{"MISSING"}, + Refresh: storageGatewayNfsFileShareRefreshFunc(d.Id(), conn), + Timeout: d.Timeout(schema.TimeoutDelete), + Delay: 5 * time.Second, + MinTimeout: 5 * time.Second, + 
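+		// Once the refresh function stops finding the file share it returns a nil result;
+		// a low NotFoundChecks lets that surface quickly as a not-found error, which is
+		// treated as successful deletion below.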
NotFoundChecks: 1, + } + _, err = stateConf.WaitForState() + if err != nil { + if isResourceNotFoundError(err) { + return nil + } + return fmt.Errorf("error waiting for Storage Gateway NFS File Share deletion: %s", err) + } + + return nil +} + +func storageGatewayNfsFileShareRefreshFunc(fileShareArn string, conn *storagegateway.StorageGateway) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + input := &storagegateway.DescribeNFSFileSharesInput{ + FileShareARNList: []*string{aws.String(fileShareArn)}, + } + + log.Printf("[DEBUG] Reading Storage Gateway NFS File Share: %s", input) + output, err := conn.DescribeNFSFileShares(input) + if err != nil { + if isAWSErr(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") { + return nil, "MISSING", nil + } + return nil, "ERROR", fmt.Errorf("error reading Storage Gateway NFS File Share: %s", err) + } + + if output == nil || len(output.NFSFileShareInfoList) == 0 || output.NFSFileShareInfoList[0] == nil { + return nil, "MISSING", nil + } + + fileshare := output.NFSFileShareInfoList[0] + + return fileshare, aws.StringValue(fileshare.FileShareStatus), nil + } +} + +func expandStorageGatewayNfsFileShareDefaults(l []interface{}) *storagegateway.NFSFileShareDefaults { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + nfsFileShareDefaults := &storagegateway.NFSFileShareDefaults{ + DirectoryMode: aws.String(m["directory_mode"].(string)), + FileMode: aws.String(m["file_mode"].(string)), + GroupId: aws.Int64(int64(m["group_id"].(int))), + OwnerId: aws.Int64(int64(m["owner_id"].(int))), + } + + return nfsFileShareDefaults +} + +func flattenStorageGatewayNfsFileShareDefaults(nfsFileShareDefaults *storagegateway.NFSFileShareDefaults) []interface{} { + if nfsFileShareDefaults == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "directory_mode": aws.StringValue(nfsFileShareDefaults.DirectoryMode), + "file_mode": aws.StringValue(nfsFileShareDefaults.FileMode), + "group_id": int(aws.Int64Value(nfsFileShareDefaults.GroupId)), + "owner_id": int(aws.Int64Value(nfsFileShareDefaults.OwnerId)), + } + + return []interface{}{m} +} diff --git a/aws/resource_aws_storagegateway_nfs_file_share_test.go b/aws/resource_aws_storagegateway_nfs_file_share_test.go new file mode 100644 index 00000000000..5d78a1c85c4 --- /dev/null +++ b/aws/resource_aws_storagegateway_nfs_file_share_test.go @@ -0,0 +1,697 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSStorageGatewayNfsFileShare_basic(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_Required(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestMatchResourceAttr(resourceName, "arn", 
regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:share/share-.+$`)), + resource.TestCheckResourceAttr(resourceName, "client_list.#", "1"), + resource.TestCheckResourceAttr(resourceName, "client_list.217649824", "0.0.0.0/0"), + resource.TestCheckResourceAttr(resourceName, "default_storage_class", "S3_STANDARD"), + resource.TestMatchResourceAttr(resourceName, "fileshare_id", regexp.MustCompile(`^share-`)), + resource.TestMatchResourceAttr(resourceName, "gateway_arn", regexp.MustCompile(`^arn:`)), + resource.TestCheckResourceAttr(resourceName, "guess_mime_type_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "kms_encrypted", "false"), + resource.TestCheckResourceAttr(resourceName, "kms_key_arn", ""), + resource.TestMatchResourceAttr(resourceName, "location_arn", regexp.MustCompile(`^arn:`)), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.#", "0"), + resource.TestCheckResourceAttr(resourceName, "object_acl", storagegateway.ObjectACLPrivate), + resource.TestCheckResourceAttr(resourceName, "read_only", "false"), + resource.TestCheckResourceAttr(resourceName, "requester_pays", "false"), + resource.TestMatchResourceAttr(resourceName, "role_arn", regexp.MustCompile(`^arn:`)), + resource.TestCheckResourceAttr(resourceName, "squash", "RootSquash"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_ClientList(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_ClientList_Single(rName, "1.1.1.1/32"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "client_list.#", "1"), + ), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_ClientList_Multiple(rName, "2.2.2.2/32", "3.3.3.3/32"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "client_list.#", "2"), + ), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_ClientList_Single(rName, "4.4.4.4/32"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "client_list.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_DefaultStorageClass(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_DefaultStorageClass(rName, "S3_STANDARD_IA"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, 
&nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "default_storage_class", "S3_STANDARD_IA"), + ), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_DefaultStorageClass(rName, "S3_ONEZONE_IA"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "default_storage_class", "S3_ONEZONE_IA"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_GuessMIMETypeEnabled(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_GuessMIMETypeEnabled(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "guess_mime_type_enabled", "false"), + ), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_GuessMIMETypeEnabled(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "guess_mime_type_enabled", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_KMSEncrypted(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_KMSEncrypted(rName, true), + ExpectError: regexp.MustCompile(`KMSKey is missing`), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_KMSEncrypted(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "kms_encrypted", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_KMSKeyArn(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_KMSKeyArn(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "kms_encrypted", "true"), + resource.TestMatchResourceAttr(resourceName, "kms_key_arn", regexp.MustCompile(`^arn:`)), + ), + }, + { + Config: 
testAccAWSStorageGatewayNfsFileShareConfig_KMSKeyArn_Update(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "kms_encrypted", "true"), + resource.TestMatchResourceAttr(resourceName, "kms_key_arn", regexp.MustCompile(`^arn:`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_KMSEncrypted(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "kms_encrypted", "false"), + ), + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_NFSFileShareDefaults(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_NFSFileShareDefaults(rName, "0700", "0600", 1, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.#", "1"), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.0.directory_mode", "0700"), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.0.file_mode", "0600"), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.0.group_id", "1"), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.0.owner_id", "2"), + ), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_NFSFileShareDefaults(rName, "0770", "0660", 3, 4), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.#", "1"), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.0.directory_mode", "0770"), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.0.file_mode", "0660"), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.0.group_id", "3"), + resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.0.owner_id", "4"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_ObjectACL(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_ObjectACL(rName, storagegateway.ObjectACLPublicRead), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "object_acl", storagegateway.ObjectACLPublicRead), + ), + }, + { + Config: 
testAccAWSStorageGatewayNfsFileShareConfig_ObjectACL(rName, storagegateway.ObjectACLPublicReadWrite), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "object_acl", storagegateway.ObjectACLPublicReadWrite), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_ReadOnly(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_ReadOnly(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "read_only", "false"), + ), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_ReadOnly(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "read_only", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_RequesterPays(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_RequesterPays(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "requester_pays", "false"), + ), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_RequesterPays(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "requester_pays", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSStorageGatewayNfsFileShare_Squash(t *testing.T) { + var nfsFileShare storagegateway.NFSFileShareInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_nfs_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSStorageGatewayNfsFileShareDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_Squash(rName, "NoSquash"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "squash", "NoSquash"), + ), + }, + { + Config: testAccAWSStorageGatewayNfsFileShareConfig_Squash(rName, "AllSquash"), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName, &nfsFileShare), + resource.TestCheckResourceAttr(resourceName, "squash", "AllSquash"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSStorageGatewayNfsFileShareDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_storagegateway_nfs_file_share" { + continue + } + + input := &storagegateway.DescribeNFSFileSharesInput{ + FileShareARNList: []*string{aws.String(rs.Primary.ID)}, + } + + output, err := conn.DescribeNFSFileShares(input) + + if err != nil { + if isAWSErr(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") { + continue + } + return err + } + + if output != nil && len(output.NFSFileShareInfoList) > 0 && output.NFSFileShareInfoList[0] != nil { + return fmt.Errorf("Storage Gateway NFS File Share %q still exists", rs.Primary.ID) + } + } + + return nil + +} + +func testAccCheckAWSStorageGatewayNfsFileShareExists(resourceName string, nfsFileShare *storagegateway.NFSFileShareInfo) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + input := &storagegateway.DescribeNFSFileSharesInput{ + FileShareARNList: []*string{aws.String(rs.Primary.ID)}, + } + + output, err := conn.DescribeNFSFileShares(input) + + if err != nil { + return err + } + + if output == nil || len(output.NFSFileShareInfoList) == 0 || output.NFSFileShareInfoList[0] == nil { + return fmt.Errorf("Storage Gateway NFS File Share %q does not exist", rs.Primary.ID) + } + + *nfsFileShare = *output.NFSFileShareInfoList[0] + + return nil + } +} + +func testAccAWSStorageGateway_S3FileShareBase(rName string) string { + return testAccAWSStorageGateway_FileGatewayBase(rName) + fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %q + + assume_role_policy = < 0 && output.SMBFileShareInfoList[0] != nil { + return fmt.Errorf("Storage Gateway SMB File Share %q still exists", rs.Primary.ID) + } + } + + return nil + +} + +func testAccCheckAWSStorageGatewaySmbFileShareExists(resourceName string, smbFileShare *storagegateway.SMBFileShareInfo) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + input := &storagegateway.DescribeSMBFileSharesInput{ + FileShareARNList: []*string{aws.String(rs.Primary.ID)}, + } + + output, err := conn.DescribeSMBFileShares(input) + + if err != nil { + return err + } + + if output == nil || len(output.SMBFileShareInfoList) == 0 || output.SMBFileShareInfoList[0] == nil { + return fmt.Errorf("Storage Gateway SMB File Share %q does not exist", rs.Primary.ID) + } + + *smbFileShare = *output.SMBFileShareInfoList[0] + + return nil + } +} + +func testAccAWSStorageGateway_SmbFileShare_ActiveDirectoryBase(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_SmbActiveDirectorySettings(rName) + fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %q + assume_role_policy = < 0 && err == nil { + t.Fatalf("expected %q to trigger an error", tc.Input) + } + if gatewayARN != tc.ExpectedGatewayARN { + 
t.Fatalf("expected %q to return Gateway ARN %q, received: %s", tc.Input, tc.ExpectedGatewayARN, gatewayARN) + } + if diskID != tc.ExpectedDiskID { + t.Fatalf("expected %q to return Disk ID %q, received: %s", tc.Input, tc.ExpectedDiskID, diskID) + } + } +} + +func TestAccAWSStorageGatewayUploadBuffer_Basic(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_upload_buffer.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + // Storage Gateway API does not support removing upload buffers + // CheckDestroy: testAccCheckAWSStorageGatewayUploadBufferDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayUploadBufferConfig_Basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayUploadBufferExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "disk_id"), + resource.TestMatchResourceAttr(resourceName, "gateway_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:gateway/sgw-.+$`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSStorageGatewayUploadBufferExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + + gatewayARN, diskID, err := decodeStorageGatewayUploadBufferID(rs.Primary.ID) + if err != nil { + return err + } + + input := &storagegateway.DescribeUploadBufferInput{ + GatewayARN: aws.String(gatewayARN), + } + + output, err := conn.DescribeUploadBuffer(input) + + if err != nil { + return fmt.Errorf("error reading Storage Gateway upload buffer: %s", err) + } + + if output == nil || len(output.DiskIds) == 0 { + return fmt.Errorf("Storage Gateway upload buffer %q not found", rs.Primary.ID) + } + + for _, existingDiskID := range output.DiskIds { + if aws.StringValue(existingDiskID) == diskID { + return nil + } + } + + return fmt.Errorf("Storage Gateway upload buffer %q not found", rs.Primary.ID) + } +} + +func testAccAWSStorageGatewayUploadBufferConfig_Basic(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_Stored(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = "10" + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdc" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_upload_buffer" "test" { + disk_id = "${data.aws_storagegateway_local_disk.test.id}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +`, rName) +} diff --git a/aws/resource_aws_storagegateway_working_storage.go b/aws/resource_aws_storagegateway_working_storage.go new file mode 100644 index 00000000000..52c5787c3c5 --- /dev/null +++ b/aws/resource_aws_storagegateway_working_storage.go @@ -0,0 +1,131 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + 
"github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsStorageGatewayWorkingStorage() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsStorageGatewayWorkingStorageCreate, + Read: resourceAwsStorageGatewayWorkingStorageRead, + Delete: schema.Noop, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "disk_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "gateway_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateArn, + }, + }, + } +} + +func resourceAwsStorageGatewayWorkingStorageCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + diskID := d.Get("disk_id").(string) + gatewayARN := d.Get("gateway_arn").(string) + + input := &storagegateway.AddWorkingStorageInput{ + DiskIds: []*string{aws.String(diskID)}, + GatewayARN: aws.String(gatewayARN), + } + + log.Printf("[DEBUG] Adding Storage Gateway working storage: %s", input) + _, err := conn.AddWorkingStorage(input) + if err != nil { + return fmt.Errorf("error adding Storage Gateway working storage: %s", err) + } + + d.SetId(fmt.Sprintf("%s:%s", gatewayARN, diskID)) + + return resourceAwsStorageGatewayWorkingStorageRead(d, meta) +} + +func resourceAwsStorageGatewayWorkingStorageRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).storagegatewayconn + + gatewayARN, diskID, err := decodeStorageGatewayWorkingStorageID(d.Id()) + if err != nil { + return err + } + + input := &storagegateway.DescribeWorkingStorageInput{ + GatewayARN: aws.String(gatewayARN), + } + + log.Printf("[DEBUG] Reading Storage Gateway working storage: %s", input) + output, err := conn.DescribeWorkingStorage(input) + if err != nil { + if isAWSErrStorageGatewayGatewayNotFound(err) { + log.Printf("[WARN] Storage Gateway working storage %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading Storage Gateway working storage: %s", err) + } + + if output == nil || len(output.DiskIds) == 0 { + log.Printf("[WARN] Storage Gateway working storage %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + + found := false + for _, existingDiskID := range output.DiskIds { + if aws.StringValue(existingDiskID) == diskID { + found = true + break + } + } + + if !found { + log.Printf("[WARN] Storage Gateway working storage %q not found - removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("disk_id", diskID) + d.Set("gateway_arn", gatewayARN) + + return nil +} + +func decodeStorageGatewayWorkingStorageID(id string) (string, string, error) { + // id = arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678:pci-0000:03:00.0-scsi-0:0:0:0 + idFormatErr := fmt.Errorf("expected ID in form of GatewayARN:DiskId, received: %s", id) + gatewayARNAndDisk, err := arn.Parse(id) + if err != nil { + return "", "", idFormatErr + } + // gatewayARNAndDisk.Resource = gateway/sgw-12345678:pci-0000:03:00.0-scsi-0:0:0:0 + resourceParts := strings.SplitN(gatewayARNAndDisk.Resource, ":", 2) + if len(resourceParts) != 2 { + return "", "", idFormatErr + } + // resourceParts = ["gateway/sgw-12345678", "pci-0000:03:00.0-scsi-0:0:0:0"] + gatewayARN := &arn.ARN{ + AccountID: gatewayARNAndDisk.AccountID, + Partition: gatewayARNAndDisk.Partition, + Region: gatewayARNAndDisk.Region, + Service: 
gatewayARNAndDisk.Service, + Resource: resourceParts[0], + } + return gatewayARN.String(), resourceParts[1], nil +} diff --git a/aws/resource_aws_storagegateway_working_storage_test.go b/aws/resource_aws_storagegateway_working_storage_test.go new file mode 100644 index 00000000000..770d5d1f1d5 --- /dev/null +++ b/aws/resource_aws_storagegateway_working_storage_test.go @@ -0,0 +1,165 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestDecodeStorageGatewayWorkingStorageID(t *testing.T) { + var testCases = []struct { + Input string + ExpectedGatewayARN string + ExpectedDiskID string + ErrCount int + }{ + { + Input: "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678:pci-0000:03:00.0-scsi-0:0:0:0", + ExpectedGatewayARN: "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678", + ExpectedDiskID: "pci-0000:03:00.0-scsi-0:0:0:0", + ErrCount: 0, + }, + { + Input: "sgw-12345678:pci-0000:03:00.0-scsi-0:0:0:0", + ErrCount: 1, + }, + { + Input: "example:pci-0000:03:00.0-scsi-0:0:0:0", + ErrCount: 1, + }, + { + Input: "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678", + ErrCount: 1, + }, + { + Input: "pci-0000:03:00.0-scsi-0:0:0:0", + ErrCount: 1, + }, + { + Input: "gateway/sgw-12345678", + ErrCount: 1, + }, + { + Input: "sgw-12345678", + ErrCount: 1, + }, + } + + for _, tc := range testCases { + gatewayARN, diskID, err := decodeStorageGatewayWorkingStorageID(tc.Input) + if tc.ErrCount == 0 && err != nil { + t.Fatalf("expected %q not to trigger an error, received: %s", tc.Input, err) + } + if tc.ErrCount > 0 && err == nil { + t.Fatalf("expected %q to trigger an error", tc.Input) + } + if gatewayARN != tc.ExpectedGatewayARN { + t.Fatalf("expected %q to return Gateway ARN %q, received: %s", tc.Input, tc.ExpectedGatewayARN, gatewayARN) + } + if diskID != tc.ExpectedDiskID { + t.Fatalf("expected %q to return Disk ID %q, received: %s", tc.Input, tc.ExpectedDiskID, diskID) + } + } +} + +func TestAccAWSStorageGatewayWorkingStorage_Basic(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_storagegateway_working_storage.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + // Storage Gateway API does not support removing working storages + // CheckDestroy: testAccCheckAWSStorageGatewayWorkingStorageDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSStorageGatewayWorkingStorageConfig_Basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSStorageGatewayWorkingStorageExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "disk_id"), + resource.TestMatchResourceAttr(resourceName, "gateway_arn", regexp.MustCompile(`^arn:[^:]+:storagegateway:[^:]+:[^:]+:gateway/sgw-.+$`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAWSStorageGatewayWorkingStorageExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + conn := testAccProvider.Meta().(*AWSClient).storagegatewayconn + + gatewayARN, 
diskID, err := decodeStorageGatewayWorkingStorageID(rs.Primary.ID) + if err != nil { + return err + } + + input := &storagegateway.DescribeWorkingStorageInput{ + GatewayARN: aws.String(gatewayARN), + } + + output, err := conn.DescribeWorkingStorage(input) + + if err != nil { + return fmt.Errorf("error reading Storage Gateway working storage: %s", err) + } + + if output == nil || len(output.DiskIds) == 0 { + return fmt.Errorf("Storage Gateway working storage %q not found", rs.Primary.ID) + } + + for _, existingDiskID := range output.DiskIds { + if aws.StringValue(existingDiskID) == diskID { + return nil + } + } + + return fmt.Errorf("Storage Gateway working storage %q not found", rs.Primary.ID) + } +} + +func testAccAWSStorageGatewayWorkingStorageConfig_Basic(rName string) string { + return testAccAWSStorageGatewayGatewayConfig_GatewayType_Stored(rName) + fmt.Sprintf(` +resource "aws_ebs_volume" "test" { + availability_zone = "${aws_instance.test.availability_zone}" + size = "10" + type = "gp2" + + tags { + Name = %q + } +} + +resource "aws_volume_attachment" "test" { + device_name = "/dev/xvdc" + force_detach = true + instance_id = "${aws_instance.test.id}" + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} + +resource "aws_storagegateway_working_storage" "test" { + disk_id = "${data.aws_storagegateway_local_disk.test.id}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +`, rName) +} diff --git a/aws/resource_aws_subnet.go b/aws/resource_aws_subnet.go index 88d23e829fb..5f36af5d99e 100644 --- a/aws/resource_aws_subnet.go +++ b/aws/resource_aws_subnet.go @@ -6,6 +6,7 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" @@ -22,6 +23,11 @@ func resourceAwsSubnet() *schema.Resource { State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + SchemaVersion: 1, MigrateState: resourceAwsSubnetMigrateState, @@ -68,6 +74,11 @@ func resourceAwsSubnet() *schema.Resource { Computed: true, }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tagsSchema(), }, } @@ -154,6 +165,16 @@ func resourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error { d.Set("ipv6_cidr_block", "") } } + + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Region: meta.(*AWSClient).region, + Service: "ec2", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("subnet/%s", d.Id()), + } + d.Set("arn", arn.String()) + d.Set("tags", tagsToMap(subnet.Tags)) return nil @@ -287,6 +308,11 @@ func resourceAwsSubnetDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ec2conn log.Printf("[INFO] Deleting subnet: %s", d.Id()) + + if err := deleteLingeringLambdaENIs(conn, d, "subnet-id"); err != nil { + return fmt.Errorf("Failed to delete Lambda ENIs: %s", err) + } + req := &ec2.DeleteSubnetInput{ SubnetId: aws.String(d.Id()), } diff --git a/aws/resource_aws_subnet_test.go b/aws/resource_aws_subnet_test.go index 3168fad5340..c365f38fe7c 100644 --- a/aws/resource_aws_subnet_test.go +++ b/aws/resource_aws_subnet_test.go @@ -3,6 +3,7 @@ package aws import ( "fmt" 
"log" + "regexp" "testing" "github.com/aws/aws-sdk-go/aws" @@ -17,6 +18,26 @@ func init() { resource.AddTestSweepers("aws_subnet", &resource.Sweeper{ Name: "aws_subnet", F: testSweepSubnets, + // When implemented, these should be moved to aws_network_interface + // and aws_network_interface set as dependency here. + Dependencies: []string{ + "aws_autoscaling_group", + "aws_batch_compute_environment", + "aws_beanstalk_environment", + "aws_db_instance", + "aws_directory_service_directory", + "aws_eks_cluster", + "aws_elasticache_cluster", + "aws_elasticache_replication_group", + "aws_elasticsearch_domain", + "aws_elb", + "aws_instance", + "aws_lambda_function", + "aws_lb", + "aws_mq_broker", + "aws_redshift_cluster", + "aws_spot_fleet_request", + }, }) } @@ -39,6 +60,10 @@ func testSweepSubnets(region string) error { } resp, err := conn.DescribeSubnets(req) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EC2 Subnet sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error describing subnets: %s", err) } @@ -62,6 +87,27 @@ func testSweepSubnets(region string) error { return nil } +func TestAccAWSSubnet_importBasic(t *testing.T) { + resourceName := "aws_subnet.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckSubnetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccSubnetConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSSubnet_basic(t *testing.T) { var v ec2.Subnet @@ -77,7 +123,7 @@ func TestAccAWSSubnet_basic(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_subnet.foo", Providers: testAccProviders, @@ -89,6 +135,10 @@ func TestAccAWSSubnet_basic(t *testing.T) { testAccCheckSubnetExists( "aws_subnet.foo", &v), testCheck, + resource.TestMatchResourceAttr( + "aws_subnet.foo", + "arn", + regexp.MustCompile(`^arn:[^:]+:ec2:[^:]+:\d{12}:subnet/subnet-.+`)), ), }, }, @@ -98,7 +148,7 @@ func TestAccAWSSubnet_basic(t *testing.T) { func TestAccAWSSubnet_ipv6(t *testing.T) { var before, after ec2.Subnet - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_subnet.foo", Providers: testAccProviders, @@ -136,7 +186,7 @@ func TestAccAWSSubnet_ipv6(t *testing.T) { func TestAccAWSSubnet_enableIpv6(t *testing.T) { var subnet ec2.Subnet - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_subnet.foo", Providers: testAccProviders, diff --git a/aws/resource_aws_swf_domain.go b/aws/resource_aws_swf_domain.go new file mode 100644 index 00000000000..8668ec108fd --- /dev/null +++ b/aws/resource_aws_swf_domain.go @@ -0,0 +1,140 @@ +package aws + +import ( + "fmt" + "log" + "strconv" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/swf" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsSwfDomain() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsSwfDomainCreate, + Read: resourceAwsSwfDomainRead, + Delete: resourceAwsSwfDomainDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + 
Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"name_prefix"}, + }, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"name"}, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "workflow_execution_retention_period_in_days": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + value, err := strconv.Atoi(v.(string)) + if err != nil || value > 90 || value < 0 { + es = append(es, fmt.Errorf( + "%q must be between 0 and 90 days inclusive", k)) + } + return + }, + }, + }, + } +} + +func resourceAwsSwfDomainCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).swfconn + + var name string + + if v, ok := d.GetOk("name"); ok { + name = v.(string) + } else if v, ok := d.GetOk("name_prefix"); ok { + name = resource.PrefixedUniqueId(v.(string)) + } else { + name = resource.UniqueId() + } + + input := &swf.RegisterDomainInput{ + Name: aws.String(name), + WorkflowExecutionRetentionPeriodInDays: aws.String(d.Get("workflow_execution_retention_period_in_days").(string)), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + _, err := conn.RegisterDomain(input) + if err != nil { + return err + } + + d.SetId(name) + + return resourceAwsSwfDomainRead(d, meta) +} + +func resourceAwsSwfDomainRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).swfconn + + input := &swf.DescribeDomainInput{ + Name: aws.String(d.Id()), + } + + resp, err := conn.DescribeDomain(input) + if err != nil { + if isAWSErr(err, swf.ErrCodeUnknownResourceFault, "") { + log.Printf("[WARN] SWF Domain %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return fmt.Errorf("error reading SWF Domain: %s", err) + } + + if resp == nil || resp.Configuration == nil || resp.DomainInfo == nil { + log.Printf("[WARN] SWF Domain %q not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("name", resp.DomainInfo.Name) + d.Set("description", resp.DomainInfo.Description) + d.Set("workflow_execution_retention_period_in_days", resp.Configuration.WorkflowExecutionRetentionPeriodInDays) + + return nil +} + +func resourceAwsSwfDomainDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).swfconn + + input := &swf.DeprecateDomainInput{ + Name: aws.String(d.Get("name").(string)), + } + + _, err := conn.DeprecateDomain(input) + if err != nil { + if isAWSErr(err, swf.ErrCodeDomainDeprecatedFault, "") { + return nil + } + if isAWSErr(err, swf.ErrCodeUnknownResourceFault, "") { + return nil + } + return fmt.Errorf("error deleting SWF Domain: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_swf_domain_test.go b/aws/resource_aws_swf_domain_test.go new file mode 100644 index 00000000000..33b3771bc01 --- /dev/null +++ b/aws/resource_aws_swf_domain_test.go @@ -0,0 +1,208 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/swf" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSSwfDomain_basic(t *testing.T) { + rName := 
acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_swf_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSwfDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSwfDomainConfig_Name(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAwsSwfDomainExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "name", rName), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSSwfDomain_NamePrefix(t *testing.T) { + resourceName := "aws_swf_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSwfDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSwfDomainConfig_NamePrefix, + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAwsSwfDomainExists(resourceName), + resource.TestMatchResourceAttr(resourceName, "name", regexp.MustCompile(`^tf-acc-test`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"name_prefix"}, // this line is only necessary if the test configuration is using name_prefix + }, + }, + }) +} + +func TestAccAWSSwfDomain_GeneratedName(t *testing.T) { + resourceName := "aws_swf_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSwfDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSwfDomainConfig_GeneratedName, + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAwsSwfDomainExists(resourceName), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSSwfDomain_Description(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_swf_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsSwfDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSwfDomainConfig_Description(rName, "description1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAwsSwfDomainExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckAwsSwfDomainDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).swfconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_swf_domain" { + continue + } + + name := rs.Primary.ID + input := &swf.DescribeDomainInput{ + Name: aws.String(name), + } + + resp, err := conn.DescribeDomain(input) + if err != nil { + return err + } + + if *resp.DomainInfo.Status != "DEPRECATED" { + return fmt.Errorf(`SWF Domain %s status is %s instead of "DEPRECATED". 
Failing!`, name, *resp.DomainInfo.Status) + } + } + + return nil +} + +func testAccCheckAwsSwfDomainExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("SWF Domain not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("SWF Domain name not set") + } + + name := rs.Primary.ID + conn := testAccProvider.Meta().(*AWSClient).swfconn + + input := &swf.DescribeDomainInput{ + Name: aws.String(name), + } + + resp, err := conn.DescribeDomain(input) + if err != nil { + return fmt.Errorf("SWF Domain %s not found in AWS", name) + } + + if *resp.DomainInfo.Status != "REGISTERED" { + return fmt.Errorf(`SWF Domain %s status is %s instead of "REGISTERED". Failing!`, name, *resp.DomainInfo.Status) + } + return nil + } +} + +func testAccAWSSwfDomainConfig_Description(rName, description string) string { + return fmt.Sprintf(` +resource "aws_swf_domain" "test" { + description = %q + name = %q + workflow_execution_retention_period_in_days = 1 +} +`, description, rName) +} + +const testAccAWSSwfDomainConfig_GeneratedName = ` +resource "aws_swf_domain" "test" { + workflow_execution_retention_period_in_days = 1 +} +` + +func testAccAWSSwfDomainConfig_Name(rName string) string { + return fmt.Sprintf(` +resource "aws_swf_domain" "test" { + name = %q + workflow_execution_retention_period_in_days = 1 +} +`, rName) +} + +const testAccAWSSwfDomainConfig_NamePrefix = ` +resource "aws_swf_domain" "test" { + name_prefix = "tf-acc-test" + workflow_execution_retention_period_in_days = 1 +} +` diff --git a/aws/resource_aws_volume_attachment.go b/aws/resource_aws_volume_attachment.go index 7049fa61c79..cd4ee44c1ad 100644 --- a/aws/resource_aws_volume_attachment.go +++ b/aws/resource_aws_volume_attachment.go @@ -63,11 +63,11 @@ func resourceAwsVolumeAttachmentCreate(d *schema.ResourceData, meta interface{}) request := &ec2.DescribeVolumesInput{ VolumeIds: []*string{aws.String(vID)}, Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("attachment.instance-id"), Values: []*string{aws.String(iID)}, }, - &ec2.Filter{ + { Name: aws.String("attachment.device"), Values: []*string{aws.String(name)}, }, @@ -106,7 +106,7 @@ func resourceAwsVolumeAttachmentCreate(d *schema.ResourceData, meta interface{}) _, err := conn.AttachVolume(opts) if err != nil { if awsErr, ok := err.(awserr.Error); ok { - return fmt.Errorf("[WARN] Error attaching volume (%s) to instance (%s), message: \"%s\", code: \"%s\"", + return fmt.Errorf("Error attaching volume (%s) to instance (%s), message: \"%s\", code: \"%s\"", vID, iID, awsErr.Message(), awsErr.Code()) } return err @@ -138,11 +138,11 @@ func volumeAttachmentStateRefreshFunc(conn *ec2.EC2, name, volumeID, instanceID request := &ec2.DescribeVolumesInput{ VolumeIds: []*string{aws.String(volumeID)}, Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("attachment.device"), Values: []*string{aws.String(name)}, }, - &ec2.Filter{ + { Name: aws.String("attachment.instance-id"), Values: []*string{aws.String(instanceID)}, }, @@ -176,11 +176,11 @@ func resourceAwsVolumeAttachmentRead(d *schema.ResourceData, meta interface{}) e request := &ec2.DescribeVolumesInput{ VolumeIds: []*string{aws.String(d.Get("volume_id").(string))}, Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("attachment.device"), Values: []*string{aws.String(d.Get("device_name").(string))}, }, - &ec2.Filter{ + { Name: aws.String("attachment.instance-id"), Values: 
[]*string{aws.String(d.Get("instance_id").(string))}, }, @@ -213,8 +213,6 @@ func resourceAwsVolumeAttachmentDelete(d *schema.ResourceData, meta interface{}) conn := meta.(*AWSClient).ec2conn if _, ok := d.GetOk("skip_destroy"); ok { - log.Printf("[INFO] Found skip_destroy to be true, removing attachment %q from state", d.Id()) - d.SetId("") return nil } @@ -250,7 +248,7 @@ func resourceAwsVolumeAttachmentDelete(d *schema.ResourceData, meta interface{}) "Error waiting for Volume (%s) to detach from Instance: %s", vID, iID) } - d.SetId("") + return nil } diff --git a/aws/resource_aws_volume_attachment_test.go b/aws/resource_aws_volume_attachment_test.go index 0249ade8534..8a23102f09e 100644 --- a/aws/resource_aws_volume_attachment_test.go +++ b/aws/resource_aws_volume_attachment_test.go @@ -16,7 +16,7 @@ func TestAccAWSVolumeAttachment_basic(t *testing.T) { var i ec2.Instance var v ec2.Volume - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVolumeAttachmentDestroy, @@ -42,7 +42,7 @@ func TestAccAWSVolumeAttachment_skipDestroy(t *testing.T) { var i ec2.Instance var v ec2.Volume - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVolumeAttachmentDestroy, @@ -72,8 +72,11 @@ func TestAccAWSVolumeAttachment_attachStopped(t *testing.T) { conn := testAccProvider.Meta().(*AWSClient).ec2conn _, err := conn.StopInstances(&ec2.StopInstancesInput{ - InstanceIds: []*string{aws.String(*i.InstanceId)}, + InstanceIds: []*string{i.InstanceId}, }) + if err != nil { + t.Fatalf("error stopping instance (%s): %s", aws.StringValue(i.InstanceId), err) + } stateConf := &resource.StateChangeConf{ Pending: []string{"pending", "running", "stopping"}, @@ -90,7 +93,7 @@ func TestAccAWSVolumeAttachment_attachStopped(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVolumeAttachmentDestroy, @@ -121,7 +124,7 @@ func TestAccAWSVolumeAttachment_attachStopped(t *testing.T) { } func TestAccAWSVolumeAttachment_update(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVolumeAttachmentDestroy, diff --git a/aws/resource_aws_vpc.go b/aws/resource_aws_vpc.go index 9fa19f1592d..575e76252e5 100644 --- a/aws/resource_aws_vpc.go +++ b/aws/resource_aws_vpc.go @@ -6,10 +6,12 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsVpc() *schema.Resource { @@ -21,6 +23,7 @@ func resourceAwsVpc() *schema.Resource { Importer: &schema.ResourceImporter{ State: resourceAwsVpcInstanceImport, }, + CustomizeDiff: resourceAwsVpcCustomizeDiff, SchemaVersion: 1, MigrateState: resourceAwsVpcMigrateState, @@ -34,10 +37,10 @@ func resourceAwsVpc() *schema.Resource { }, "instance_tenancy": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Computed: true, + Type: 
schema.TypeString, + Optional: true, + Default: ec2.TenancyDefault, + ValidateFunc: validation.StringInSlice([]string{ec2.TenancyDefault, ec2.TenancyDedicated}, false), }, "enable_dns_hostnames": { @@ -105,6 +108,11 @@ func resourceAwsVpc() *schema.Resource { Computed: true, }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tagsSchema(), }, } @@ -112,15 +120,11 @@ func resourceAwsVpc() *schema.Resource { func resourceAwsVpcCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ec2conn - instance_tenancy := "default" - if v, ok := d.GetOk("instance_tenancy"); ok { - instance_tenancy = v.(string) - } // Create the VPC createOpts := &ec2.CreateVpcInput{ CidrBlock: aws.String(d.Get("cidr_block").(string)), - InstanceTenancy: aws.String(instance_tenancy), + InstanceTenancy: aws.String(d.Get("instance_tenancy").(string)), AmazonProvidedIpv6CidrBlock: aws.Bool(d.Get("assign_generated_ipv6_cidr_block").(bool)), } @@ -155,6 +159,13 @@ func resourceAwsVpcCreate(d *schema.ResourceData, meta interface{}) error { d.Id(), err) } + if len(vpc.Ipv6CidrBlockAssociationSet) > 0 && vpc.Ipv6CidrBlockAssociationSet[0] != nil { + log.Printf("[DEBUG] Waiting for EC2 VPC (%s) IPv6 CIDR to become associated", d.Id()) + if err := waitForEc2VpcIpv6CidrBlockAssociationCreate(conn, d.Id(), aws.StringValue(vpcResp.Vpc.Ipv6CidrBlockAssociationSet[0].AssociationId)); err != nil { + return fmt.Errorf("error waiting for EC2 VPC (%s) IPv6 CIDR to become associated: %s", d.Id(), err) + } + } + // Update our attributes and return return resourceAwsVpcUpdate(d, meta) } @@ -179,18 +190,29 @@ func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error { d.Set("dhcp_options_id", vpc.DhcpOptionsId) d.Set("instance_tenancy", vpc.InstanceTenancy) + // ARN + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "ec2", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("vpc/%s", d.Id()), + }.String() + d.Set("arn", arn) + // Tags d.Set("tags", tagsToMap(vpc.Tags)) + // Make sure those values are set, if an IPv6 block exists it'll be set in the loop + d.Set("assign_generated_ipv6_cidr_block", false) + d.Set("ipv6_association_id", "") + d.Set("ipv6_cidr_block", "") + for _, a := range vpc.Ipv6CidrBlockAssociationSet { - if *a.Ipv6CidrBlockState.State == "associated" { //we can only ever have 1 IPv6 block associated at once + if aws.StringValue(a.Ipv6CidrBlockState.State) == ec2.VpcCidrBlockStateCodeAssociated { //we can only ever have 1 IPv6 block associated at once d.Set("assign_generated_ipv6_cidr_block", true) d.Set("ipv6_association_id", a.AssociationId) d.Set("ipv6_cidr_block", a.Ipv6CidrBlock) - } else { - d.Set("assign_generated_ipv6_cidr_block", false) - d.Set("ipv6_association_id", "") // we blank these out to remove old entries - d.Set("ipv6_cidr_block", "") } } @@ -256,26 +278,11 @@ func resourceAwsVpcRead(d *schema.ResourceData, meta interface{}) error { d.Set("enable_classiclink_dns_support", classiclinkdns_enabled) } - // Get the main routing table for this VPC - // Really Ugly need to make this better - rmenn - filter1 := &ec2.Filter{ - Name: aws.String("association.main"), - Values: []*string{aws.String("true")}, - } - filter2 := &ec2.Filter{ - Name: aws.String("vpc-id"), - Values: []*string{aws.String(d.Id())}, - } - describeRouteOpts := &ec2.DescribeRouteTablesInput{ - Filters: []*ec2.Filter{filter1, filter2}, - } - routeResp, err := conn.DescribeRouteTables(describeRouteOpts) + 
routeTableId, err := resourceAwsVpcSetMainRouteTable(conn, vpcid) if err != nil { - return err - } - if v := routeResp.RouteTables; len(v) > 0 { - d.Set("main_route_table_id", *v[0].RouteTableId) + log.Printf("[WARN] Unable to set Main Route Table: %s", err) } + d.Set("main_route_table_id", routeTableId) if err := resourceAwsVpcSetDefaultNetworkAcl(conn, d); err != nil { log.Printf("[WARN] Unable to set Default Network ACL: %s", err) @@ -397,7 +404,7 @@ func resourceAwsVpcUpdate(d *schema.ResourceData, meta interface{}) error { if toAssign { modifyOpts := &ec2.AssociateVpcCidrBlockInput{ - VpcId: &vpcid, + VpcId: &vpcid, AmazonProvidedIpv6CidrBlock: aws.Bool(toAssign), } log.Printf("[INFO] Enabling assign_generated_ipv6_cidr_block vpc attribute for %s: %#v", @@ -407,24 +414,14 @@ func resourceAwsVpcUpdate(d *schema.ResourceData, meta interface{}) error { return err } - // Wait for the CIDR to become available - log.Printf( - "[DEBUG] Waiting for IPv6 CIDR (%s) to become associated", - d.Id()) - stateConf := &resource.StateChangeConf{ - Pending: []string{"associating", "disassociated"}, - Target: []string{"associated"}, - Refresh: Ipv6CidrStateRefreshFunc(conn, d.Id(), *resp.Ipv6CidrBlockAssociation.AssociationId), - Timeout: 1 * time.Minute, - } - if _, err := stateConf.WaitForState(); err != nil { - return fmt.Errorf( - "Error waiting for IPv6 CIDR (%s) to become associated: %s", - d.Id(), err) + log.Printf("[DEBUG] Waiting for EC2 VPC (%s) IPv6 CIDR to become associated", d.Id()) + if err := waitForEc2VpcIpv6CidrBlockAssociationCreate(conn, d.Id(), aws.StringValue(resp.Ipv6CidrBlockAssociation.AssociationId)); err != nil { + return fmt.Errorf("error waiting for EC2 VPC (%s) IPv6 CIDR to become associated: %s", d.Id(), err) } } else { + associationID := d.Get("ipv6_association_id").(string) modifyOpts := &ec2.DisassociateVpcCidrBlockInput{ - AssociationId: aws.String(d.Get("ipv6_association_id").(string)), + AssociationId: aws.String(associationID), } log.Printf("[INFO] Disabling assign_generated_ipv6_cidr_block vpc attribute for %s: %#v", d.Id(), modifyOpts) @@ -432,26 +429,30 @@ func resourceAwsVpcUpdate(d *schema.ResourceData, meta interface{}) error { return err } - // Wait for the CIDR to become available - log.Printf( - "[DEBUG] Waiting for IPv6 CIDR (%s) to become disassociated", - d.Id()) - stateConf := &resource.StateChangeConf{ - Pending: []string{"disassociating", "associated"}, - Target: []string{"disassociated"}, - Refresh: Ipv6CidrStateRefreshFunc(conn, d.Id(), d.Get("ipv6_association_id").(string)), - Timeout: 1 * time.Minute, - } - if _, err := stateConf.WaitForState(); err != nil { - return fmt.Errorf( - "Error waiting for IPv6 CIDR (%s) to become disassociated: %s", - d.Id(), err) + log.Printf("[DEBUG] Waiting for EC2 VPC (%s) IPv6 CIDR to become disassociated", d.Id()) + if err := waitForEc2VpcIpv6CidrBlockAssociationDelete(conn, d.Id(), associationID); err != nil { + return fmt.Errorf("error waiting for EC2 VPC (%s) IPv6 CIDR to become disassociated: %s", d.Id(), err) } } d.SetPartial("assign_generated_ipv6_cidr_block") } + if d.HasChange("instance_tenancy") && !d.IsNewResource() { + modifyOpts := &ec2.ModifyVpcTenancyInput{ + VpcId: aws.String(vpcid), + InstanceTenancy: aws.String(d.Get("instance_tenancy").(string)), + } + log.Printf( + "[INFO] Modifying instance_tenancy vpc attribute for %s: %#v", + d.Id(), modifyOpts) + if _, err := conn.ModifyVpcTenancy(modifyOpts); err != nil { + return err + } + + d.SetPartial("instance_tenancy") + } + if err := setTags(conn, d); 
err != nil { return err } else { @@ -492,6 +493,17 @@ func resourceAwsVpcDelete(d *schema.ResourceData, meta interface{}) error { }) } +func resourceAwsVpcCustomizeDiff(diff *schema.ResourceDiff, v interface{}) error { + if diff.HasChange("instance_tenancy") { + old, new := diff.GetChange("instance_tenancy") + if old.(string) != ec2.TenancyDedicated || new.(string) != ec2.TenancyDefault { + diff.ForceNew("instance_tenancy") + } + } + + return nil +} + // VPCStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch // a VPC. func VPCStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { @@ -526,28 +538,24 @@ func Ipv6CidrStateRefreshFunc(conn *ec2.EC2, id string, associationId string) re VpcIds: []*string{aws.String(id)}, } resp, err := conn.DescribeVpcs(describeVpcOpts) + + if isAWSErr(err, "InvalidVpcID.NotFound", "") { + return nil, "", nil + } + if err != nil { - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidVpcID.NotFound" { - resp = nil - } else { - log.Printf("Error on VPCStateRefresh: %s", err) - return nil, "", err - } + return nil, "", err } - if resp == nil { + if resp == nil || len(resp.Vpcs) == 0 || resp.Vpcs[0] == nil || resp.Vpcs[0].Ipv6CidrBlockAssociationSet == nil { // Sometimes AWS just has consistency issues and doesn't see // our instance yet. Return an empty state. return nil, "", nil } - if resp.Vpcs[0].Ipv6CidrBlockAssociationSet == nil { - return nil, "", nil - } - for _, association := range resp.Vpcs[0].Ipv6CidrBlockAssociationSet { - if *association.AssociationId == associationId { - return association, *association.Ipv6CidrBlockState.State, nil + if aws.StringValue(association.AssociationId) == associationId { + return association, aws.StringValue(association.Ipv6CidrBlockState.State), nil } } @@ -632,6 +640,33 @@ func resourceAwsVpcSetDefaultRouteTable(conn *ec2.EC2, d *schema.ResourceData) e return nil } +func resourceAwsVpcSetMainRouteTable(conn *ec2.EC2, vpcid string) (string, error) { + filter1 := &ec2.Filter{ + Name: aws.String("association.main"), + Values: []*string{aws.String("true")}, + } + filter2 := &ec2.Filter{ + Name: aws.String("vpc-id"), + Values: []*string{aws.String(vpcid)}, + } + + findOpts := &ec2.DescribeRouteTablesInput{ + Filters: []*ec2.Filter{filter1, filter2}, + } + + resp, err := conn.DescribeRouteTables(findOpts) + if err != nil { + return "", err + } + + if len(resp.RouteTables) < 1 || resp.RouteTables[0] == nil { + return "", fmt.Errorf("Main Route table not found") + } + + // There Can Be Only 1 Main Route Table for a VPC + return aws.StringValue(resp.RouteTables[0].RouteTableId), nil +} + func resourceAwsVpcInstanceImport( d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { d.Set("assign_generated_ipv6_cidr_block", false) @@ -650,3 +685,64 @@ func awsVpcDescribeVpcAttribute(attribute string, vpcId string, conn *ec2.EC2) ( return resp, nil } + +// vpcDescribe returns EC2 API information about the specified VPC. +// If the VPC doesn't exist, return nil. 
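+// A typical call-site sketch (illustrative only, not part of this change):
+//
+//	vpc, err := vpcDescribe(conn, d.Id())
+//	if err != nil {
+//		return err
+//	}
+//	if vpc == nil {
+//		d.SetId("")
+//		return nil
+//	}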
+func vpcDescribe(conn *ec2.EC2, vpcId string) (*ec2.Vpc, error) { + resp, err := conn.DescribeVpcs(&ec2.DescribeVpcsInput{ + VpcIds: aws.StringSlice([]string{vpcId}), + }) + if err != nil { + if !isAWSErr(err, "InvalidVpcID.NotFound", "") { + return nil, err + } + resp = nil + } + + if resp == nil { + return nil, nil + } + + n := len(resp.Vpcs) + switch n { + case 0: + return nil, nil + + case 1: + return resp.Vpcs[0], nil + + default: + return nil, fmt.Errorf("Found %d VPCs for %s, expected 1", n, vpcId) + } +} + +func waitForEc2VpcIpv6CidrBlockAssociationCreate(conn *ec2.EC2, vpcID, associationID string) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{ + ec2.VpcCidrBlockStateCodeAssociating, + ec2.VpcCidrBlockStateCodeDisassociated, + }, + Target: []string{ec2.VpcCidrBlockStateCodeAssociated}, + Refresh: Ipv6CidrStateRefreshFunc(conn, vpcID, associationID), + Timeout: 1 * time.Minute, + } + _, err := stateConf.WaitForState() + + return err +} + +func waitForEc2VpcIpv6CidrBlockAssociationDelete(conn *ec2.EC2, vpcID, associationID string) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{ + ec2.VpcCidrBlockStateCodeAssociated, + ec2.VpcCidrBlockStateCodeDisassociating, + }, + Target: []string{ec2.VpcCidrBlockStateCodeDisassociated}, + Refresh: Ipv6CidrStateRefreshFunc(conn, vpcID, associationID), + Timeout: 1 * time.Minute, + NotFoundChecks: 1, + } + _, err := stateConf.WaitForState() + + return err +} diff --git a/aws/resource_aws_vpc_dhcp_options.go b/aws/resource_aws_vpc_dhcp_options.go index ec2844cc7c6..71cc0ecfb8e 100644 --- a/aws/resource_aws_vpc_dhcp_options.go +++ b/aws/resource_aws_vpc_dhcp_options.go @@ -24,40 +24,40 @@ func resourceAwsVpcDhcpOptions() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "domain_name": &schema.Schema{ + "domain_name": { Type: schema.TypeString, Optional: true, ForceNew: true, }, - "domain_name_servers": &schema.Schema{ + "domain_name_servers": { Type: schema.TypeList, Optional: true, ForceNew: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "ntp_servers": &schema.Schema{ + "ntp_servers": { Type: schema.TypeList, Optional: true, ForceNew: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "netbios_node_type": &schema.Schema{ + "netbios_node_type": { Type: schema.TypeString, Optional: true, ForceNew: true, }, - "netbios_name_servers": &schema.Schema{ + "netbios_name_servers": { Type: schema.TypeList, Optional: true, ForceNew: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "tags": &schema.Schema{ + "tags": { Type: schema.TypeMap, Optional: true, }, @@ -147,17 +147,11 @@ func resourceAwsVpcDhcpOptionsRead(d *schema.ResourceData, meta interface{}) err resp, err := conn.DescribeDhcpOptions(req) if err != nil { - ec2err, ok := err.(awserr.Error) - if !ok { - return fmt.Errorf("Error retrieving DHCP Options: %s", err.Error()) - } - - if ec2err.Code() == "InvalidDhcpOptionID.NotFound" { + if isNoSuchDhcpOptionIDErr(err) { log.Printf("[WARN] DHCP Options (%s) not found, removing from state", d.Id()) d.SetId("") return nil } - return fmt.Errorf("Error retrieving DHCP Options: %s", err.Error()) } @@ -212,7 +206,7 @@ func resourceAwsVpcDhcpOptionsDelete(d *schema.ResourceData, meta interface{}) e } switch ec2err.Code() { - case "InvalidDhcpOptionsID.NotFound": + case "InvalidDhcpOptionsID.NotFound", "InvalidDhcpOptionID.NotFound": return nil case "DependencyViolation": // If it is a dependency violation, we want to disassociate @@ -242,7 +236,7 @@ func 
resourceAwsVpcDhcpOptionsDelete(d *schema.ResourceData, meta interface{}) e func findVPCsByDHCPOptionsID(conn *ec2.EC2, id string) ([]*ec2.Vpc, error) { req := &ec2.DescribeVpcsInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("dhcp-options-id"), Values: []*string{ aws.String(id), @@ -272,7 +266,7 @@ func resourceDHCPOptionsStateRefreshFunc(conn *ec2.EC2, id string) resource.Stat resp, err := conn.DescribeDhcpOptions(DescribeDhcpOpts) if err != nil { - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidDhcpOptionsID.NotFound" { + if isNoSuchDhcpOptionIDErr(err) { resp = nil } else { log.Printf("Error on DHCPOptionsStateRefresh: %s", err) @@ -290,3 +284,7 @@ func resourceDHCPOptionsStateRefreshFunc(conn *ec2.EC2, id string) resource.Stat return dos, "created", nil } } + +func isNoSuchDhcpOptionIDErr(err error) bool { + return isAWSErr(err, "InvalidDhcpOptionID.NotFound", "") || isAWSErr(err, "InvalidDhcpOptionsID.NotFound", "") +} diff --git a/aws/resource_aws_vpc_dhcp_options_association.go b/aws/resource_aws_vpc_dhcp_options_association.go index 7bdcb7a68c7..981ea0a05f3 100644 --- a/aws/resource_aws_vpc_dhcp_options_association.go +++ b/aws/resource_aws_vpc_dhcp_options_association.go @@ -16,12 +16,12 @@ func resourceAwsVpcDhcpOptionsAssociation() *schema.Resource { Delete: resourceAwsVpcDhcpOptionsAssociationDelete, Schema: map[string]*schema.Schema{ - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Required: true, }, - "dhcp_options_id": &schema.Schema{ + "dhcp_options_id": { Type: schema.TypeString, Required: true, }, @@ -94,6 +94,5 @@ func resourceAwsVpcDhcpOptionsAssociationDelete(d *schema.ResourceData, meta int return err } - d.SetId("") return nil } diff --git a/aws/resource_aws_vpc_dhcp_options_association_test.go b/aws/resource_aws_vpc_dhcp_options_association_test.go index 19900eedda0..c0648d1f095 100644 --- a/aws/resource_aws_vpc_dhcp_options_association_test.go +++ b/aws/resource_aws_vpc_dhcp_options_association_test.go @@ -13,12 +13,12 @@ func TestAccAWSDHCPOptionsAssociation_basic(t *testing.T) { var v ec2.Vpc var d ec2.DhcpOptions - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDHCPOptionsAssociationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDHCPOptionsAssociationConfig, Check: resource.ComposeTestCheckFunc( testAccCheckDHCPOptionsExists("aws_vpc_dhcp_options.foo", &d), diff --git a/aws/resource_aws_vpc_dhcp_options_test.go b/aws/resource_aws_vpc_dhcp_options_test.go index f101f95f3c7..948f34b371e 100644 --- a/aws/resource_aws_vpc_dhcp_options_test.go +++ b/aws/resource_aws_vpc_dhcp_options_test.go @@ -11,15 +11,35 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSDHCPOptions_importBasic(t *testing.T) { + resourceName := "aws_vpc_dhcp_options.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDHCPOptionsConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSDHCPOptions_basic(t *testing.T) { var d ec2.DhcpOptions - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDHCPOptionsDestroy, Steps: 
[]resource.TestStep{ - resource.TestStep{ + { Config: testAccDHCPOptionsConfig, Check: resource.ComposeTestCheckFunc( testAccCheckDHCPOptionsExists("aws_vpc_dhcp_options.foo", &d), @@ -39,12 +59,12 @@ func TestAccAWSDHCPOptions_basic(t *testing.T) { func TestAccAWSDHCPOptions_deleteOptions(t *testing.T) { var d ec2.DhcpOptions - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckDHCPOptionsDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDHCPOptionsConfig, Check: resource.ComposeTestCheckFunc( testAccCheckDHCPOptionsExists("aws_vpc_dhcp_options.foo", &d), diff --git a/aws/resource_aws_vpc_endpoint.go b/aws/resource_aws_vpc_endpoint.go index 2826f490468..45ebf5236a0 100644 --- a/aws/resource_aws_vpc_endpoint.go +++ b/aws/resource_aws_vpc_endpoint.go @@ -8,7 +8,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/structure" @@ -122,6 +121,12 @@ func resourceAwsVpcEndpoint() *schema.Resource { Optional: true, }, }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Update: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, } } @@ -143,7 +148,7 @@ func resourceAwsVpcEndpointCreate(d *schema.ResourceData, meta interface{}) erro if v, ok := d.GetOk("policy"); ok { policy, err := structure.NormalizeJsonString(v) if err != nil { - return errwrap.Wrapf("policy contains an invalid JSON: {{err}}", err) + return fmt.Errorf("policy contains an invalid JSON: %s", err) } req.PolicyDocument = aws.String(policy) } @@ -155,7 +160,7 @@ func resourceAwsVpcEndpointCreate(d *schema.ResourceData, meta interface{}) erro log.Printf("[DEBUG] Creating VPC Endpoint: %#v", req) resp, err := conn.CreateVpcEndpoint(req) if err != nil { - return fmt.Errorf("Error creating VPC Endpoint: %s", err.Error()) + return fmt.Errorf("Error creating VPC Endpoint: %s", err) } vpce := resp.VpcEndpoint @@ -167,7 +172,7 @@ func resourceAwsVpcEndpointCreate(d *schema.ResourceData, meta interface{}) erro } } - if err := vpcEndpointWaitUntilAvailable(d, conn); err != nil { + if err := vpcEndpointWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { return err } @@ -179,7 +184,7 @@ func resourceAwsVpcEndpointRead(d *schema.ResourceData, meta interface{}) error vpce, state, err := vpcEndpointStateRefresh(conn, d.Id())() if err != nil && state != "failed" { - return fmt.Errorf("Error reading VPC Endpoint: %s", err.Error()) + return fmt.Errorf("Error reading VPC Endpoint: %s", err) } terminalStates := map[string]bool{ @@ -214,7 +219,7 @@ func resourceAwsVpcEndpointUpdate(d *schema.ResourceData, meta interface{}) erro if d.HasChange("policy") { policy, err := structure.NormalizeJsonString(d.Get("policy")) if err != nil { - return errwrap.Wrapf("policy contains an invalid JSON: {{err}}", err) + return fmt.Errorf("policy contains an invalid JSON: %s", err) } if policy == "" { @@ -234,10 +239,10 @@ func resourceAwsVpcEndpointUpdate(d *schema.ResourceData, meta interface{}) erro log.Printf("[DEBUG] Updating VPC Endpoint: %#v", req) if _, err := conn.ModifyVpcEndpoint(req); err != nil { - return fmt.Errorf("Error updating VPC Endpoint: %s", 
err.Error()) + return fmt.Errorf("Error updating VPC Endpoint: %s", err) } - if err := vpcEndpointWaitUntilAvailable(d, conn); err != nil { + if err := vpcEndpointWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { return err } @@ -255,7 +260,7 @@ func resourceAwsVpcEndpointDelete(d *schema.ResourceData, meta interface{}) erro if isAWSErr(err, "InvalidVpcEndpointId.NotFound", "") { log.Printf("[DEBUG] VPC Endpoint %s is already gone", d.Id()) } else { - return fmt.Errorf("Error deleting VPC Endpoint: %s", err.Error()) + return fmt.Errorf("Error deleting VPC Endpoint: %s", err) } } @@ -263,12 +268,12 @@ func resourceAwsVpcEndpointDelete(d *schema.ResourceData, meta interface{}) erro Pending: []string{"available", "pending", "deleting"}, Target: []string{"deleted"}, Refresh: vpcEndpointStateRefresh(conn, d.Id()), - Timeout: 10 * time.Minute, + Timeout: d.Timeout(schema.TimeoutDelete), Delay: 5 * time.Second, MinTimeout: 5 * time.Second, } if _, err = stateConf.WaitForState(); err != nil { - return fmt.Errorf("Error waiting for VPC Endpoint %s to delete: %s", d.Id(), err.Error()) + return fmt.Errorf("Error waiting for VPC Endpoint (%s) to delete: %s", d.Id(), err) } return nil @@ -284,7 +289,7 @@ func vpcEndpointAccept(conn *ec2.EC2, vpceId, svcName string) error { describeSvcResp, err := conn.DescribeVpcEndpointServiceConfigurations(describeSvcReq) if err != nil { - return fmt.Errorf("Error reading VPC Endpoint Service: %s", err.Error()) + return fmt.Errorf("Error reading VPC Endpoint Service: %s", err) } if describeSvcResp == nil || len(describeSvcResp.ServiceConfigurations) == 0 { return fmt.Errorf("No matching VPC Endpoint Service found") @@ -298,7 +303,7 @@ func vpcEndpointAccept(conn *ec2.EC2, vpceId, svcName string) error { log.Printf("[DEBUG] Accepting VPC Endpoint connection: %#v", acceptEpReq) _, err = conn.AcceptVpcEndpointConnections(acceptEpReq) if err != nil { - return fmt.Errorf("Error accepting VPC Endpoint connection: %s", err.Error()) + return fmt.Errorf("Error accepting VPC Endpoint connection: %s", err) } return nil @@ -312,33 +317,43 @@ func vpcEndpointStateRefresh(conn *ec2.EC2, vpceId string) resource.StateRefresh }) if err != nil { if isAWSErr(err, "InvalidVpcEndpointId.NotFound", "") { - return false, "deleted", nil + return "", "deleted", nil } return nil, "", err } - vpce := resp.VpcEndpoints[0] - state := aws.StringValue(vpce.State) - // No use in retrying if the endpoint is in a failed state. - if state == "failed" { - return nil, state, errors.New("VPC Endpoint is in a failed state") + n := len(resp.VpcEndpoints) + switch n { + case 0: + return "", "deleted", nil + + case 1: + vpce := resp.VpcEndpoints[0] + state := aws.StringValue(vpce.State) + // No use in retrying if the endpoint is in a failed state. 
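+   // "failed" is terminal for an endpoint; if it were returned as an ordinary
+   // state, the StateChangeConf in vpcEndpointWaitUntilAvailable (Pending:
+   // "pending", Target: "available"/"pendingAcceptance") would keep polling
+   // until its timeout, so it is surfaced as an error right away.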
+ if state == "failed" { + return nil, state, errors.New("VPC Endpoint is in a failed state") + } + return vpce, state, nil + + default: + return nil, "", fmt.Errorf("Found %d VPC Endpoints for %s, expected 1", n, vpceId) } - return vpce, state, nil } } -func vpcEndpointWaitUntilAvailable(d *schema.ResourceData, conn *ec2.EC2) error { +func vpcEndpointWaitUntilAvailable(conn *ec2.EC2, vpceId string, timeout time.Duration) error { stateConf := &resource.StateChangeConf{ Pending: []string{"pending"}, Target: []string{"available", "pendingAcceptance"}, - Refresh: vpcEndpointStateRefresh(conn, d.Id()), - Timeout: 10 * time.Minute, + Refresh: vpcEndpointStateRefresh(conn, vpceId), + Timeout: timeout, Delay: 5 * time.Second, MinTimeout: 5 * time.Second, } if _, err := stateConf.WaitForState(); err != nil { - return fmt.Errorf("Error waiting for VPC Endpoint %s to become available: %s", d.Id(), err.Error()) + return fmt.Errorf("Error waiting for VPC Endpoint (%s) to become available: %s", vpceId, err) } return nil @@ -359,7 +374,7 @@ func vpcEndpointAttributes(d *schema.ResourceData, vpce *ec2.VpcEndpoint, conn * policy, err := structure.NormalizeJsonString(aws.StringValue(vpce.PolicyDocument)) if err != nil { - return errwrap.Wrapf("policy contains an invalid JSON: {{err}}", err) + return fmt.Errorf("policy contains an invalid JSON: %s", err) } d.Set("policy", policy) diff --git a/aws/resource_aws_vpc_endpoint_connection_notification_test.go b/aws/resource_aws_vpc_endpoint_connection_notification_test.go index 5ab08d5a15b..5219136905e 100644 --- a/aws/resource_aws_vpc_endpoint_connection_notification_test.go +++ b/aws/resource_aws_vpc_endpoint_connection_notification_test.go @@ -13,16 +13,38 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSVpcEndpointConnectionNotification_importBasic(t *testing.T) { + lbName := fmt.Sprintf("testaccawsnlb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + resourceName := "aws_vpc_endpoint_connection_notification.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcEndpointConnectionNotificationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccVpcEndpointConnectionNotificationBasicConfig(lbName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSVpcEndpointConnectionNotification_basic(t *testing.T) { lbName := fmt.Sprintf("testaccawsnlb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_endpoint_connection_notification.foo", Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointConnectionNotificationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointConnectionNotificationBasicConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointConnectionNotificationExists("aws_vpc_endpoint_connection_notification.foo"), @@ -31,7 +53,7 @@ func TestAccAWSVpcEndpointConnectionNotification_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_vpc_endpoint_connection_notification.foo", "notification_type", "Topic"), ), }, - resource.TestStep{ + { Config: testAccVpcEndpointConnectionNotificationModifiedConfig(lbName), Check: resource.ComposeTestCheckFunc( 
testAccCheckVpcEndpointConnectionNotificationExists("aws_vpc_endpoint_connection_notification.foo"), diff --git a/aws/resource_aws_vpc_endpoint_route_table_association_test.go b/aws/resource_aws_vpc_endpoint_route_table_association_test.go index b1f2b478830..050d7df541a 100644 --- a/aws/resource_aws_vpc_endpoint_route_table_association_test.go +++ b/aws/resource_aws_vpc_endpoint_route_table_association_test.go @@ -14,12 +14,12 @@ import ( func TestAccAWSVpcEndpointRouteTableAssociation_basic(t *testing.T) { var vpce ec2.VpcEndpoint - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointRouteTableAssociationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointRouteTableAssociationConfig, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointRouteTableAssociationExists( diff --git a/aws/resource_aws_vpc_endpoint_service_allowed_principal_test.go b/aws/resource_aws_vpc_endpoint_service_allowed_principal_test.go index fd64a03ea33..b8ef66cc86c 100644 --- a/aws/resource_aws_vpc_endpoint_service_allowed_principal_test.go +++ b/aws/resource_aws_vpc_endpoint_service_allowed_principal_test.go @@ -16,12 +16,12 @@ import ( func TestAccAWSVpcEndpointServiceAllowedPrincipal_basic(t *testing.T) { lbName := fmt.Sprintf("testaccawsnlb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointServiceAllowedPrincipalDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointServiceAllowedPrincipalBasicConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointServiceAllowedPrincipalExists("aws_vpc_endpoint_service_allowed_principal.foo"), diff --git a/aws/resource_aws_vpc_endpoint_service_test.go b/aws/resource_aws_vpc_endpoint_service_test.go index a7430178c5d..6e14c37e8f1 100644 --- a/aws/resource_aws_vpc_endpoint_service_test.go +++ b/aws/resource_aws_vpc_endpoint_service_test.go @@ -13,18 +13,40 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSVpcEndpointService_importBasic(t *testing.T) { + lbName := fmt.Sprintf("testaccawsnlb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + resourceName := "aws_vpc_endpoint_service.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcEndpointServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccVpcEndpointServiceBasicConfig(lbName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSVpcEndpointService_basic(t *testing.T) { var svcCfg ec2.ServiceConfiguration lb1Name := fmt.Sprintf("testaccawsnlb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) lb2Name := fmt.Sprintf("testaccawsnlb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_endpoint_service.foo", Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointServiceDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: 
testAccVpcEndpointServiceBasicConfig(lb1Name), Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointServiceExists("aws_vpc_endpoint_service.foo", &svcCfg), @@ -33,7 +55,7 @@ func TestAccAWSVpcEndpointService_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_vpc_endpoint_service.foo", "allowed_principals.#", "1"), ), }, - resource.TestStep{ + { Config: testAccVpcEndpointServiceModifiedConfig(lb1Name, lb2Name), Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointServiceExists("aws_vpc_endpoint_service.foo", &svcCfg), @@ -62,12 +84,12 @@ func TestAccAWSVpcEndpointService_removed(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointServiceDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointServiceBasicConfig(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointServiceExists("aws_vpc_endpoint_service.foo", &svcCfg), diff --git a/aws/resource_aws_vpc_endpoint_subnet_association.go b/aws/resource_aws_vpc_endpoint_subnet_association.go index 1fe3c4a1898..0527eb14374 100644 --- a/aws/resource_aws_vpc_endpoint_subnet_association.go +++ b/aws/resource_aws_vpc_endpoint_subnet_association.go @@ -3,11 +3,13 @@ package aws import ( "fmt" "log" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -32,6 +34,11 @@ func resourceAwsVpcEndpointSubnetAssociation() *schema.Resource { ForceNew: true, }, }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, } } @@ -46,15 +53,34 @@ func resourceAwsVpcEndpointSubnetAssociationCreate(d *schema.ResourceData, meta return err } - _, err = conn.ModifyVpcEndpoint(&ec2.ModifyVpcEndpointInput{ - VpcEndpointId: aws.String(endpointId), - AddSubnetIds: aws.StringSlice([]string{snId}), - }) + // See https://github.com/terraform-providers/terraform-provider-aws/issues/3382. + // Prevent concurrent subnet association requests and delay between requests. 
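+ // A provider-global mutex keyed by the endpoint ID serializes ModifyVpcEndpoint calls for the same
+ // endpoint, and the StateChangeConf below simply delays each call before it is made, so that
+ // associations created in parallel (for example via count) do not trip over one another.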
+ mk := "vpc_endpoint_subnet_association_" + endpointId + awsMutexKV.Lock(mk) + defer awsMutexKV.Unlock(mk) + + c := &resource.StateChangeConf{ + Delay: 1 * time.Minute, + Timeout: 3 * time.Minute, + Target: []string{"ok"}, + Refresh: func() (interface{}, string, error) { + res, err := conn.ModifyVpcEndpoint(&ec2.ModifyVpcEndpointInput{ + VpcEndpointId: aws.String(endpointId), + AddSubnetIds: aws.StringSlice([]string{snId}), + }) + return res, "ok", err + }, + } + _, err = c.WaitForState() if err != nil { - return fmt.Errorf("Error creating Vpc Endpoint/Subnet association: %s", err.Error()) + return fmt.Errorf("Error creating Vpc Endpoint/Subnet association: %s", err) } - d.SetId(vpcEndpointIdSubnetIdHash(endpointId, snId)) + d.SetId(vpcEndpointSubnetAssociationId(endpointId, snId)) + + if err := vpcEndpointWaitUntilAvailable(conn, endpointId, d.Timeout(schema.TimeoutCreate)); err != nil { + return err + } return resourceAwsVpcEndpointSubnetAssociationRead(d, meta) } @@ -105,24 +131,26 @@ func resourceAwsVpcEndpointSubnetAssociationDelete(d *schema.ResourceData, meta if err != nil { ec2err, ok := err.(awserr.Error) if !ok { - return fmt.Errorf("Error deleting Vpc Endpoint/Subnet association: %s", err.Error()) + return fmt.Errorf("Error deleting Vpc Endpoint/Subnet association: %s", err) } switch ec2err.Code() { case "InvalidVpcEndpointId.NotFound": fallthrough - case "InvalidRouteTableId.NotFound": - fallthrough case "InvalidParameter": log.Printf("[DEBUG] Vpc Endpoint/Subnet association is already gone") default: - return fmt.Errorf("Error deleting Vpc Endpoint/Subnet association: %s", err.Error()) + return fmt.Errorf("Error deleting Vpc Endpoint/Subnet association: %s", err) } } + if err := vpcEndpointWaitUntilAvailable(conn, endpointId, d.Timeout(schema.TimeoutDelete)); err != nil { + return err + } + return nil } -func vpcEndpointIdSubnetIdHash(endpointId, snId string) string { +func vpcEndpointSubnetAssociationId(endpointId, snId string) string { return fmt.Sprintf("a-%s%d", endpointId, hashcode.String(snId)) } diff --git a/aws/resource_aws_vpc_endpoint_subnet_association_test.go b/aws/resource_aws_vpc_endpoint_subnet_association_test.go index eeb5c922135..80a7d9bee59 100644 --- a/aws/resource_aws_vpc_endpoint_subnet_association_test.go +++ b/aws/resource_aws_vpc_endpoint_subnet_association_test.go @@ -14,12 +14,12 @@ import ( func TestAccAWSVpcEndpointSubnetAssociation_basic(t *testing.T) { var vpce ec2.VpcEndpoint - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointSubnetAssociationDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointSubnetAssociationConfig_basic, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointSubnetAssociationExists( @@ -30,6 +30,29 @@ func TestAccAWSVpcEndpointSubnetAssociation_basic(t *testing.T) { }) } +func TestAccAWSVpcEndpointSubnetAssociation_multiple(t *testing.T) { + var vpce ec2.VpcEndpoint + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcEndpointSubnetAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccVpcEndpointSubnetAssociationConfig_multiple, + Check: resource.ComposeTestCheckFunc( + testAccCheckVpcEndpointSubnetAssociationExists( + "aws_vpc_endpoint_subnet_association.a.0", &vpce), + testAccCheckVpcEndpointSubnetAssociationExists( + 
"aws_vpc_endpoint_subnet_association.a.1", &vpce), + testAccCheckVpcEndpointSubnetAssociationExists( + "aws_vpc_endpoint_subnet_association.a.2", &vpce), + ), + }, + }, + }) +} + func testAccCheckVpcEndpointSubnetAssociationDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ec2conn @@ -103,10 +126,46 @@ func testAccCheckVpcEndpointSubnetAssociationExists(n string, vpce *ec2.VpcEndpo } const testAccVpcEndpointSubnetAssociationConfig_basic = ` -provider "aws" { - region = "us-west-2" +resource "aws_vpc" "foo" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-endpoint-subnet-association" + } +} + +data "aws_security_group" "default" { + vpc_id = "${aws_vpc.foo.id}" + name = "default" } +data "aws_region" "current" {} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc_endpoint" "ec2" { + vpc_id = "${aws_vpc.foo.id}" + vpc_endpoint_type = "Interface" + service_name = "com.amazonaws.${data.aws_region.current.name}.ec2" + security_group_ids = ["${data.aws_security_group.default.id}"] + private_dns_enabled = false +} + +resource "aws_subnet" "sn" { + vpc_id = "${aws_vpc.foo.id}" + availability_zone = "${data.aws_availability_zones.available.names[0]}" + cidr_block = "10.0.0.0/17" + tags { + Name = "tf-acc-vpc-endpoint-subnet-association" + } +} + +resource "aws_vpc_endpoint_subnet_association" "a" { + vpc_endpoint_id = "${aws_vpc_endpoint.ec2.id}" + subnet_id = "${aws_subnet.sn.id}" +} +` + +const testAccVpcEndpointSubnetAssociationConfig_multiple = ` resource "aws_vpc" "foo" { cidr_block = "10.0.0.0/16" tags { @@ -116,28 +175,36 @@ resource "aws_vpc" "foo" { data "aws_security_group" "default" { vpc_id = "${aws_vpc.foo.id}" - name = "default" + name = "default" } +data "aws_region" "current" {} + +data "aws_availability_zones" "available" {} + resource "aws_vpc_endpoint" "ec2" { - vpc_id = "${aws_vpc.foo.id}" - vpc_endpoint_type = "Interface" - service_name = "com.amazonaws.us-west-2.ec2" - security_group_ids = ["${data.aws_security_group.default.id}"] + vpc_id = "${aws_vpc.foo.id}" + vpc_endpoint_type = "Interface" + service_name = "com.amazonaws.${data.aws_region.current.name}.ec2" + security_group_ids = ["${data.aws_security_group.default.id}"] private_dns_enabled = false } resource "aws_subnet" "sn" { - vpc_id = "${aws_vpc.foo.id}" - availability_zone = "us-west-2a" - cidr_block = "10.0.0.0/17" + count = 3 + + vpc_id = "${aws_vpc.foo.id}" + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "${cidrsubnet(aws_vpc.foo.cidr_block, 2, count.index)}" tags { - Name = "tf-acc-vpc-endpoint-subnet-association" + Name = "${format("tf-acc-vpc-endpoint-subnet-association-%d", count.index + 1)}" } } resource "aws_vpc_endpoint_subnet_association" "a" { + count = 3 + vpc_endpoint_id = "${aws_vpc_endpoint.ec2.id}" - subnet_id = "${aws_subnet.sn.id}" + subnet_id = "${aws_subnet.sn.*.id[count.index]}" } ` diff --git a/aws/resource_aws_vpc_endpoint_test.go b/aws/resource_aws_vpc_endpoint_test.go index f8d90d20177..ef17dd910f5 100644 --- a/aws/resource_aws_vpc_endpoint_test.go +++ b/aws/resource_aws_vpc_endpoint_test.go @@ -15,16 +15,37 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSVpcEndpoint_importBasic(t *testing.T) { + resourceName := "aws_vpc_endpoint.s3" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpcEndpointDestroy, + Steps: []resource.TestStep{ + { + 
Config: testAccVpcEndpointConfig_gatewayWithRouteTableAndPolicy, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSVpcEndpoint_gatewayBasic(t *testing.T) { var endpoint ec2.VpcEndpoint - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_endpoint.s3", Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointConfig_gatewayWithoutRouteTableOrPolicy, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointExists("aws_vpc_endpoint.s3", &endpoint), @@ -45,13 +66,13 @@ func TestAccAWSVpcEndpoint_gatewayWithRouteTableAndPolicy(t *testing.T) { var endpoint ec2.VpcEndpoint var routeTable ec2.RouteTable - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_endpoint.s3", Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointConfig_gatewayWithRouteTableAndPolicy, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointExists("aws_vpc_endpoint.s3", &endpoint), @@ -64,7 +85,7 @@ func TestAccAWSVpcEndpoint_gatewayWithRouteTableAndPolicy(t *testing.T) { resource.TestCheckResourceAttr("aws_vpc_endpoint.s3", "private_dns_enabled", "false"), ), }, - resource.TestStep{ + { Config: testAccVpcEndpointConfig_gatewayWithRouteTableAndPolicyModified, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointExists("aws_vpc_endpoint.s3", &endpoint), @@ -84,13 +105,13 @@ func TestAccAWSVpcEndpoint_gatewayWithRouteTableAndPolicy(t *testing.T) { func TestAccAWSVpcEndpoint_interfaceBasic(t *testing.T) { var endpoint ec2.VpcEndpoint - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_endpoint.ec2", Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointConfig_interfaceWithoutSubnet, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointExists("aws_vpc_endpoint.ec2", &endpoint), @@ -110,13 +131,13 @@ func TestAccAWSVpcEndpoint_interfaceBasic(t *testing.T) { func TestAccAWSVpcEndpoint_interfaceWithSubnetAndSecurityGroup(t *testing.T) { var endpoint ec2.VpcEndpoint - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_endpoint.ec2", Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointConfig_interfaceWithSubnet, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointExists("aws_vpc_endpoint.ec2", &endpoint), @@ -128,14 +149,14 @@ func TestAccAWSVpcEndpoint_interfaceWithSubnetAndSecurityGroup(t *testing.T) { resource.TestCheckResourceAttr("aws_vpc_endpoint.ec2", "private_dns_enabled", "false"), ), }, - resource.TestStep{ + { Config: testAccVpcEndpointConfig_interfaceWithSubnetModified, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointExists("aws_vpc_endpoint.ec2", &endpoint), resource.TestCheckResourceAttr("aws_vpc_endpoint.ec2", "cidr_blocks.#", "0"), resource.TestCheckResourceAttr("aws_vpc_endpoint.ec2", "vpc_endpoint_type", "Interface"), 
resource.TestCheckResourceAttr("aws_vpc_endpoint.ec2", "route_table_ids.#", "0"), - resource.TestCheckResourceAttr("aws_vpc_endpoint.ec2", "subnet_ids.#", "2"), + resource.TestCheckResourceAttr("aws_vpc_endpoint.ec2", "subnet_ids.#", "3"), resource.TestCheckResourceAttr("aws_vpc_endpoint.ec2", "security_group_ids.#", "1"), resource.TestCheckResourceAttr("aws_vpc_endpoint.ec2", "private_dns_enabled", "true"), ), @@ -148,13 +169,13 @@ func TestAccAWSVpcEndpoint_interfaceNonAWSService(t *testing.T) { lbName := fmt.Sprintf("testaccawsnlb-basic-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) var endpoint ec2.VpcEndpoint - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_endpoint.foo", Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointConfig_interfaceNonAWSService(lbName), Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointExists("aws_vpc_endpoint.foo", &endpoint), @@ -186,12 +207,12 @@ func TestAccAWSVpcEndpoint_removed(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcEndpointDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcEndpointConfig_gatewayWithoutRouteTableOrPolicy, Check: resource.ComposeTestCheckFunc( testAccCheckVpcEndpointExists("aws_vpc_endpoint.s3", &endpoint), @@ -309,9 +330,11 @@ resource "aws_subnet" "foo" { } } +data "aws_region" "current" {} + resource "aws_vpc_endpoint" "s3" { vpc_id = "${aws_vpc.foo.id}" - service_name = "com.amazonaws.us-west-2.s3" + service_name = "com.amazonaws.${data.aws_region.current.name}.s3" route_table_ids = ["${aws_route_table.default.id}"] policy = < 0 { - co := s.List()[0].(map[string]interface{}) - modifyOpts.AccepterPeeringConnectionOptions = expandPeeringOptions(co) - } + v := d.Get("accepter").(*schema.Set).List() + if len(v) > 0 { + req.AccepterPeeringConnectionOptions = expandVpcPeeringConnectionOptions(v[0].(map[string]interface{})) } - if v, ok := d.GetOk("requester"); ok { - if s := v.(*schema.Set); len(s.List()) > 0 { - co := s.List()[0].(map[string]interface{}) - modifyOpts.RequesterPeeringConnectionOptions = expandPeeringOptions(co) - } + v = d.Get("requester").(*schema.Set).List() + if len(v) > 0 { + req.RequesterPeeringConnectionOptions = expandVpcPeeringConnectionOptions(v[0].(map[string]interface{})) } - log.Printf("[DEBUG] VPC Peering Connection modify options: %#v", modifyOpts) - if _, err := conn.ModifyVpcPeeringConnectionOptions(modifyOpts); err != nil { + log.Printf("[DEBUG] Modifying VPC Peering Connection options: %#v", req) + if _, err := conn.ModifyVpcPeeringConnectionOptions(req); err != nil { return err } @@ -237,22 +229,24 @@ func resourceAwsVPCPeeringUpdate(d *schema.ResourceData, meta interface{}) error d.SetPartial("tags") } - pcRaw, _, err := resourceAwsVPCPeeringConnectionStateRefreshFunc(conn, d.Id())() + pcRaw, _, err := vpcPeeringConnectionRefreshState(conn, d.Id())() if err != nil { - return err + return fmt.Errorf("Error reading VPC Peering Connection: %s", err) } if pcRaw == nil { + log.Printf("[WARN] VPC Peering Connection (%s) not found, removing from state", d.Id()) d.SetId("") return nil } + pc := pcRaw.(*ec2.VpcPeeringConnection) if _, ok := d.GetOk("auto_accept"); ok { if pc.Status != nil 
&& *pc.Status.Code == ec2.VpcPeeringConnectionStateReasonCodePendingAcceptance { status, err := resourceVPCPeeringConnectionAccept(conn, d.Id()) if err != nil { - return errwrap.Wrapf("Unable to accept VPC Peering Connection: {{err}}", err) + return fmt.Errorf("Unable to accept VPC Peering Connection: %s", err) } log.Printf("[DEBUG] VPC Peering Connection accept status: %s", status) } @@ -266,14 +260,14 @@ func resourceAwsVPCPeeringUpdate(d *schema.ResourceData, meta interface{}) error "or activate VPC Peering Connection manually.", d.Id()) } - if err := resourceVPCPeeringConnectionOptionsModify(d, meta); err != nil { - return errwrap.Wrapf("Error modifying VPC Peering Connection options: {{err}}", err) + if err := resourceAwsVpcPeeringConnectionModifyOptions(d, meta); err != nil { + return fmt.Errorf("Error modifying VPC Peering Connection options: %s", err) } } - vpcAvailableErr := checkVpcPeeringConnectionAvailable(conn, d.Id()) - if vpcAvailableErr != nil { - return errwrap.Wrapf("Error waiting for VPC Peering Connection to become available: {{err}}", vpcAvailableErr) + err = vpcPeeringConnectionWaitUntilAvailable(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("Error waiting for VPC Peering Connection to become available: %s", err) } return resourceAwsVPCPeeringRead(d, meta) @@ -282,15 +276,23 @@ func resourceAwsVPCPeeringUpdate(d *schema.ResourceData, meta interface{}) error func resourceAwsVPCPeeringDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).ec2conn - _, err := conn.DeleteVpcPeeringConnection( - &ec2.DeleteVpcPeeringConnectionInput{ - VpcPeeringConnectionId: aws.String(d.Id()), - }) + req := &ec2.DeleteVpcPeeringConnectionInput{ + VpcPeeringConnectionId: aws.String(d.Id()), + } + log.Printf("[DEBUG] Deleting VPC Peering Connection: %s", req) + _, err := conn.DeleteVpcPeeringConnection(req) + if err != nil { + if isAWSErr(err, "InvalidVpcPeeringConnectionID.NotFound", "") { + return nil + } + return fmt.Errorf("Error deleting VPC Peering Connection (%s): %s", d.Id(), err) + } - // Wait for the vpc peering connection to become available + // Wait for the vpc peering connection to delete log.Printf("[DEBUG] Waiting for VPC Peering Connection (%s) to delete.", d.Id()) stateConf := &resource.StateChangeConf{ Pending: []string{ + ec2.VpcPeeringConnectionStateReasonCodeActive, ec2.VpcPeeringConnectionStateReasonCodePendingAcceptance, ec2.VpcPeeringConnectionStateReasonCodeDeleting, }, @@ -298,50 +300,51 @@ func resourceAwsVPCPeeringDelete(d *schema.ResourceData, meta interface{}) error ec2.VpcPeeringConnectionStateReasonCodeRejected, ec2.VpcPeeringConnectionStateReasonCodeDeleted, }, - Refresh: resourceAwsVPCPeeringConnectionStateRefreshFunc(conn, d.Id()), - Timeout: 1 * time.Minute, + Refresh: vpcPeeringConnectionRefreshState(conn, d.Id()), + Timeout: d.Timeout(schema.TimeoutDelete), } if _, err := stateConf.WaitForState(); err != nil { - return errwrap.Wrapf(fmt.Sprintf( - "Error waiting for VPC Peering Connection (%s) to be deleted: {{err}}", - d.Id()), err) + return fmt.Errorf("Error waiting for VPC Peering Connection (%s) to be deleted: %s", d.Id(), err) } - return err + return nil } -// resourceAwsVPCPeeringConnectionStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch -// a VPCPeeringConnection. 
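+// vpcPeeringConnectionRefreshState returns a resource.StateRefreshFunc that reports the current status
+// of the given VPC Peering Connection, treating a missing connection as "deleted".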
-func resourceAwsVPCPeeringConnectionStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { +func vpcPeeringConnectionRefreshState(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { resp, err := conn.DescribeVpcPeeringConnections(&ec2.DescribeVpcPeeringConnectionsInput{ - VpcPeeringConnectionIds: []*string{aws.String(id)}, + VpcPeeringConnectionIds: aws.StringSlice([]string{id}), }) if err != nil { - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidVpcPeeringConnectionID.NotFound" { - resp = nil - } else { - log.Printf("Error reading VPC Peering Connection details: %s", err) - return nil, "error", err + if isAWSErr(err, "InvalidVpcPeeringConnectionID.NotFound", "") { + return nil, ec2.VpcPeeringConnectionStateReasonCodeDeleted, nil } + + return nil, "", err } - if resp == nil { + if resp == nil || resp.VpcPeeringConnections == nil || + len(resp.VpcPeeringConnections) == 0 || resp.VpcPeeringConnections[0] == nil { // Sometimes AWS just has consistency issues and doesn't see - // our instance yet. Return an empty state. + // our peering connection yet. Return an empty state. return nil, "", nil } - pc := resp.VpcPeeringConnections[0] + if pc.Status == nil { + // Sometimes AWS just has consistency issues and doesn't see + // our peering connection yet. Return an empty state. + return nil, "", nil + } + statusCode := aws.StringValue(pc.Status.Code) // A VPC Peering Connection can exist in a failed state due to // incorrect VPC ID, account ID, or overlapping IP address range, // thus we short circuit before the time out would occur. - if pc != nil && *pc.Status.Code == "failed" { - return nil, "failed", errors.New(*pc.Status.Message) + if statusCode == ec2.VpcPeeringConnectionStateReasonCodeFailed { + return nil, statusCode, errors.New(aws.StringValue(pc.Status.Message)) } - return pc, *pc.Status.Code, nil + return pc, statusCode, nil } } @@ -373,44 +376,7 @@ func vpcPeeringConnectionOptionsSchema() *schema.Schema { } } -func flattenPeeringOptions(options *ec2.VpcPeeringConnectionOptionsDescription) (results []map[string]interface{}) { - m := make(map[string]interface{}) - - if options.AllowDnsResolutionFromRemoteVpc != nil { - m["allow_remote_vpc_dns_resolution"] = *options.AllowDnsResolutionFromRemoteVpc - } - - if options.AllowEgressFromLocalClassicLinkToRemoteVpc != nil { - m["allow_classic_link_to_remote_vpc"] = *options.AllowEgressFromLocalClassicLinkToRemoteVpc - } - - if options.AllowEgressFromLocalVpcToRemoteClassicLink != nil { - m["allow_vpc_to_remote_classic_link"] = *options.AllowEgressFromLocalVpcToRemoteClassicLink - } - - results = append(results, m) - return -} - -func expandPeeringOptions(m map[string]interface{}) *ec2.PeeringConnectionOptionsRequest { - r := &ec2.PeeringConnectionOptionsRequest{} - - if v, ok := m["allow_remote_vpc_dns_resolution"]; ok { - r.AllowDnsResolutionFromRemoteVpc = aws.Bool(v.(bool)) - } - - if v, ok := m["allow_classic_link_to_remote_vpc"]; ok { - r.AllowEgressFromLocalClassicLinkToRemoteVpc = aws.Bool(v.(bool)) - } - - if v, ok := m["allow_vpc_to_remote_classic_link"]; ok { - r.AllowEgressFromLocalVpcToRemoteClassicLink = aws.Bool(v.(bool)) - } - - return r -} - -func checkVpcPeeringConnectionAvailable(conn *ec2.EC2, id string) error { +func vpcPeeringConnectionWaitUntilAvailable(conn *ec2.EC2, id string, timeout time.Duration) error { // Wait for the vpc peering connection to become available log.Printf("[DEBUG] Waiting for VPC Peering Connection (%s) to become 
available.", id) stateConf := &resource.StateChangeConf{ @@ -422,13 +388,11 @@ func checkVpcPeeringConnectionAvailable(conn *ec2.EC2, id string) error { ec2.VpcPeeringConnectionStateReasonCodePendingAcceptance, ec2.VpcPeeringConnectionStateReasonCodeActive, }, - Refresh: resourceAwsVPCPeeringConnectionStateRefreshFunc(conn, id), - Timeout: 1 * time.Minute, + Refresh: vpcPeeringConnectionRefreshState(conn, id), + Timeout: timeout, } if _, err := stateConf.WaitForState(); err != nil { - return errwrap.Wrapf(fmt.Sprintf( - "Error waiting for VPC Peering Connection (%s) to become available: {{err}}", - id), err) + return fmt.Errorf("Error waiting for VPC Peering Connection (%s) to become available: %s", id, err) } return nil } diff --git a/aws/resource_aws_vpc_peering_connection_accepter.go b/aws/resource_aws_vpc_peering_connection_accepter.go index 854f8fc1668..0dadffd68ac 100644 --- a/aws/resource_aws_vpc_peering_connection_accepter.go +++ b/aws/resource_aws_vpc_peering_connection_accepter.go @@ -16,7 +16,7 @@ func resourceAwsVpcPeeringConnectionAccepter() *schema.Resource { Delete: resourceAwsVPCPeeringAccepterDelete, Schema: map[string]*schema.Schema{ - "vpc_peering_connection_id": &schema.Schema{ + "vpc_peering_connection_id": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -69,6 +69,5 @@ func resourceAwsVPCPeeringAccepterCreate(d *schema.ResourceData, meta interface{ func resourceAwsVPCPeeringAccepterDelete(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] Will not delete VPC peering connection. Terraform will remove this resource from the state file, however resources may remain.") - d.SetId("") return nil } diff --git a/aws/resource_aws_vpc_peering_connection_accepter_test.go b/aws/resource_aws_vpc_peering_connection_accepter_test.go index e6c9750dd65..d2e678349b7 100644 --- a/aws/resource_aws_vpc_peering_connection_accepter_test.go +++ b/aws/resource_aws_vpc_peering_connection_accepter_test.go @@ -12,12 +12,12 @@ import ( func TestAccAWSVPCPeeringConnectionAccepter_sameRegion(t *testing.T) { var connection ec2.VpcPeeringConnection - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccAwsVPCPeeringConnectionAccepterDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAwsVPCPeeringConnectionAccepterSameRegion, Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists( @@ -37,12 +37,12 @@ func TestAccAWSVPCPeeringConnectionAccepter_differentRegion(t *testing.T) { var providers []*schema.Provider - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, ProviderFactories: testAccProviderFactories(&providers), CheckDestroy: testAccAwsVPCPeeringConnectionAccepterDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAwsVPCPeeringConnectionAccepterDifferentRegion, Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists( diff --git a/aws/resource_aws_vpc_peering_connection_options.go b/aws/resource_aws_vpc_peering_connection_options.go new file mode 100644 index 00000000000..aca9ca2ec0c --- /dev/null +++ b/aws/resource_aws_vpc_peering_connection_options.go @@ -0,0 +1,84 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsVpcPeeringConnectionOptions() 
*schema.Resource { + return &schema.Resource{ + Create: resourceAwsVpcPeeringConnectionOptionsCreate, + Read: resourceAwsVpcPeeringConnectionOptionsRead, + Update: resourceAwsVpcPeeringConnectionOptionsUpdate, + Delete: resourceAwsVpcPeeringConnectionOptionsDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "vpc_peering_connection_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "accepter": vpcPeeringConnectionOptionsSchema(), + "requester": vpcPeeringConnectionOptionsSchema(), + }, + } +} + +func resourceAwsVpcPeeringConnectionOptionsCreate(d *schema.ResourceData, meta interface{}) error { + d.SetId(d.Get("vpc_peering_connection_id").(string)) + return resourceAwsVpcPeeringConnectionOptionsUpdate(d, meta) +} + +func resourceAwsVpcPeeringConnectionOptionsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + pcRaw, _, err := vpcPeeringConnectionRefreshState(conn, d.Id())() + if err != nil { + return fmt.Errorf("Error reading VPC Peering Connection: %s", err.Error()) + } + + if pcRaw == nil { + log.Printf("[WARN] VPC Peering Connection (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + pc := pcRaw.(*ec2.VpcPeeringConnection) + + d.Set("vpc_peering_connection_id", pc.VpcPeeringConnectionId) + + if pc != nil && pc.AccepterVpcInfo != nil && pc.AccepterVpcInfo.PeeringOptions != nil { + err := d.Set("accepter", flattenVpcPeeringConnectionOptions(pc.AccepterVpcInfo.PeeringOptions)) + if err != nil { + return fmt.Errorf("Error setting VPC Peering Connection Options accepter information: %s", err.Error()) + } + } + + if pc != nil && pc.RequesterVpcInfo != nil && pc.RequesterVpcInfo.PeeringOptions != nil { + err := d.Set("requester", flattenVpcPeeringConnectionOptions(pc.RequesterVpcInfo.PeeringOptions)) + if err != nil { + return fmt.Errorf("Error setting VPC Peering Connection Options requester information: %s", err.Error()) + } + } + + return nil +} + +func resourceAwsVpcPeeringConnectionOptionsUpdate(d *schema.ResourceData, meta interface{}) error { + if err := resourceAwsVpcPeeringConnectionModifyOptions(d, meta); err != nil { + return fmt.Errorf("Error modifying VPC Peering Connection Options: %s", err.Error()) + } + + return resourceAwsVpcPeeringConnectionOptionsRead(d, meta) +} + +func resourceAwsVpcPeeringConnectionOptionsDelete(d *schema.ResourceData, meta interface{}) error { + // Don't do anything with the underlying VPC peering connection. 
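+ // Removing this resource only removes it from state; any options already applied remain
+ // set on the underlying peering connection until they are changed elsewhere.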
+ return nil +} diff --git a/aws/resource_aws_vpc_peering_connection_options_test.go b/aws/resource_aws_vpc_peering_connection_options_test.go new file mode 100644 index 00000000000..e1197ee9fea --- /dev/null +++ b/aws/resource_aws_vpc_peering_connection_options_test.go @@ -0,0 +1,124 @@ +package aws + +import ( + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSVpcPeeringConnectionOptions_importBasic(t *testing.T) { + resourceName := "aws_vpc_peering_connection_options.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccVpcPeeringConnectionOptionsConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSVpcPeeringConnectionOptions_basic(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccVpcPeeringConnectionOptionsConfig, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "aws_vpc_peering_connection_options.foo", + "accepter.#", + "1", + ), + resource.TestCheckResourceAttr( + "aws_vpc_peering_connection_options.foo", + "accepter.1102046665.allow_remote_vpc_dns_resolution", + "true", + ), + testAccCheckAWSVpcPeeringConnectionOptions( + "aws_vpc_peering_connection.foo", + "accepter", + &ec2.VpcPeeringConnectionOptionsDescription{ + AllowDnsResolutionFromRemoteVpc: aws.Bool(true), + AllowEgressFromLocalClassicLinkToRemoteVpc: aws.Bool(false), + AllowEgressFromLocalVpcToRemoteClassicLink: aws.Bool(false), + }, + ), + resource.TestCheckResourceAttr( + "aws_vpc_peering_connection_options.foo", + "requester.#", + "1", + ), + resource.TestCheckResourceAttr( + "aws_vpc_peering_connection_options.foo", + "requester.41753983.allow_classic_link_to_remote_vpc", + "true", + ), + resource.TestCheckResourceAttr( + "aws_vpc_peering_connection_options.foo", + "requester.41753983.allow_vpc_to_remote_classic_link", + "true", + ), + testAccCheckAWSVpcPeeringConnectionOptions( + "aws_vpc_peering_connection.foo", + "requester", + &ec2.VpcPeeringConnectionOptionsDescription{ + AllowDnsResolutionFromRemoteVpc: aws.Bool(false), + AllowEgressFromLocalClassicLinkToRemoteVpc: aws.Bool(true), + AllowEgressFromLocalVpcToRemoteClassicLink: aws.Bool(true), + }, + ), + ), + }, + }, + }) +} + +const testAccVpcPeeringConnectionOptionsConfig = ` +resource "aws_vpc" "foo" { + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-options-foo" + } +} + +resource "aws_vpc" "bar" { + cidr_block = "10.1.0.0/16" + enable_dns_hostnames = true + tags { + Name = "terraform-testacc-vpc-peering-conn-options-bar" + } +} + +resource "aws_vpc_peering_connection" "foo" { + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + auto_accept = true +} + +resource "aws_vpc_peering_connection_options" "foo" { + vpc_peering_connection_id = "${aws_vpc_peering_connection.foo.id}" + + accepter { + allow_remote_vpc_dns_resolution = true + } + + requester { + allow_vpc_to_remote_classic_link = true + allow_classic_link_to_remote_vpc = true + } +} +` diff --git a/aws/resource_aws_vpc_peering_connection_test.go 
b/aws/resource_aws_vpc_peering_connection_test.go index 3b5e110baa1..47838c60d85 100644 --- a/aws/resource_aws_vpc_peering_connection_test.go +++ b/aws/resource_aws_vpc_peering_connection_test.go @@ -14,10 +14,33 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSVPCPeeringConnection_importBasic(t *testing.T) { + resourceName := "aws_vpc_peering_connection.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccVpcPeeringConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "auto_accept"}, + }, + }, + }) +} + func TestAccAWSVPCPeeringConnection_basic(t *testing.T) { var connection ec2.VpcPeeringConnection - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_peering_connection.foo", IDRefreshIgnore: []string{"auto_accept"}, @@ -25,7 +48,7 @@ func TestAccAWSVPCPeeringConnection_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcPeeringConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists( @@ -54,13 +77,13 @@ func TestAccAWSVPCPeeringConnection_plan(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshIgnore: []string{"auto_accept"}, Providers: testAccProviders, CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcPeeringConfig, Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists( @@ -77,7 +100,7 @@ func TestAccAWSVPCPeeringConnection_plan(t *testing.T) { func TestAccAWSVPCPeeringConnection_tags(t *testing.T) { var connection ec2.VpcPeeringConnection - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_peering_connection.foo", IDRefreshIgnore: []string{"auto_accept"}, @@ -85,7 +108,7 @@ func TestAccAWSVPCPeeringConnection_tags(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcPeeringConfigTags, Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists( @@ -118,7 +141,7 @@ func TestAccAWSVPCPeeringConnection_options(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_peering_connection.foo", IDRefreshIgnore: []string{"auto_accept"}, @@ -126,7 +149,7 @@ func TestAccAWSVPCPeeringConnection_options(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcPeeringConfigOptions, Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists( @@ -166,7 +189,7 @@ func TestAccAWSVPCPeeringConnection_options(t *testing.T) { ), ExpectNonEmptyPlan: true, }, - resource.TestStep{ + { Config: testAccVpcPeeringConfigOptions, Check: resource.ComposeTestCheckFunc( 
testAccCheckAWSVpcPeeringConnectionExists( @@ -193,13 +216,13 @@ func TestAccAWSVPCPeeringConnection_options(t *testing.T) { } func TestAccAWSVPCPeeringConnection_failedState(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshIgnore: []string{"auto_accept"}, Providers: testAccProviders, CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcPeeringConfigFailedState, ExpectError: regexp.MustCompile(`.*Error waiting.*\(pcx-\w+\).*incorrect.*VPC-ID.*`), }, @@ -317,13 +340,13 @@ func testAccCheckAWSVpcPeeringConnectionOptions(n, block string, options *ec2.Vp } func TestAccAWSVPCPeeringConnection_peerRegionAndAutoAccept(t *testing.T) { - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshIgnore: []string{"auto_accept"}, Providers: testAccProviders, CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcPeeringConfigRegionAutoAccept, ExpectError: regexp.MustCompile(`.*peer_region cannot be set whilst auto_accept is true when creating a vpc peering connection.*`), }, @@ -336,7 +359,7 @@ func TestAccAWSVPCPeeringConnection_region(t *testing.T) { var providers []*schema.Provider - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpc_peering_connection.foo", IDRefreshIgnore: []string{"auto_accept"}, @@ -344,7 +367,7 @@ func TestAccAWSVPCPeeringConnection_region(t *testing.T) { ProviderFactories: testAccProviderFactories(&providers), CheckDestroy: testAccCheckAWSVpcPeeringConnectionDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpcPeeringConfigRegion, Check: resource.ComposeTestCheckFunc( testAccCheckAWSVpcPeeringConnectionExists( @@ -358,171 +381,171 @@ func TestAccAWSVPCPeeringConnection_region(t *testing.T) { const testAccVpcPeeringConfig = ` resource "aws_vpc" "foo" { - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-foo" - } + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-foo" + } } resource "aws_vpc" "bar" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-bar" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-bar" + } } resource "aws_vpc_peering_connection" "foo" { - vpc_id = "${aws_vpc.foo.id}" - peer_vpc_id = "${aws_vpc.bar.id}" - auto_accept = true + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + auto_accept = true } ` const testAccVpcPeeringConfigTags = ` resource "aws_vpc" "foo" { - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-tags-foo" - } + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-tags-foo" + } } resource "aws_vpc" "bar" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-tags-bar" - } + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-tags-bar" + } } resource "aws_vpc_peering_connection" "foo" { - vpc_id = "${aws_vpc.foo.id}" - peer_vpc_id = "${aws_vpc.bar.id}" - auto_accept = true - tags { - foo = "bar" - } + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + auto_accept = true + tags { + foo = "bar" + } } ` const 
testAccVpcPeeringConfigOptions = ` resource "aws_vpc" "foo" { - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-options-foo" - } + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-options-foo" + } } resource "aws_vpc" "bar" { - cidr_block = "10.1.0.0/16" - enable_dns_hostnames = true - tags { - Name = "terraform-testacc-vpc-peering-conn-options-bar" - } + cidr_block = "10.1.0.0/16" + enable_dns_hostnames = true + tags { + Name = "terraform-testacc-vpc-peering-conn-options-bar" + } } resource "aws_vpc_peering_connection" "foo" { - vpc_id = "${aws_vpc.foo.id}" - peer_vpc_id = "${aws_vpc.bar.id}" - auto_accept = true + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + auto_accept = true - accepter { - allow_remote_vpc_dns_resolution = true - } + accepter { + allow_remote_vpc_dns_resolution = true + } - requester { - allow_vpc_to_remote_classic_link = true - allow_classic_link_to_remote_vpc = true - } + requester { + allow_vpc_to_remote_classic_link = true + allow_classic_link_to_remote_vpc = true + } } ` const testAccVpcPeeringConfigFailedState = ` resource "aws_vpc" "foo" { - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-failed-state-foo" - } + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-failed-state-foo" + } } resource "aws_vpc" "bar" { - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-failed-state-bar" - } + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-failed-state-bar" + } } resource "aws_vpc_peering_connection" "foo" { - vpc_id = "${aws_vpc.foo.id}" - peer_vpc_id = "${aws_vpc.bar.id}" + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" } ` const testAccVpcPeeringConfigRegionAutoAccept = ` provider "aws" { - alias = "main" + alias = "main" region = "us-west-2" } provider "aws" { - alias = "peer" + alias = "peer" region = "us-east-1" } resource "aws_vpc" "foo" { - provider = "aws.main" - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-region-auto-accept-foo" - } + provider = "aws.main" + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-region-auto-accept-foo" + } } resource "aws_vpc" "bar" { - provider = "aws.peer" - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-region-auto-accept-bar" - } + provider = "aws.peer" + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-region-auto-accept-bar" + } } resource "aws_vpc_peering_connection" "foo" { - provider = "aws.main" - vpc_id = "${aws_vpc.foo.id}" - peer_vpc_id = "${aws_vpc.bar.id}" - peer_region = "us-east-1" - auto_accept = true + provider = "aws.main" + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + peer_region = "us-east-1" + auto_accept = true } ` const testAccVpcPeeringConfigRegion = ` provider "aws" { - alias = "main" + alias = "main" region = "us-west-2" } provider "aws" { - alias = "peer" + alias = "peer" region = "us-east-1" } resource "aws_vpc" "foo" { - provider = "aws.main" - cidr_block = "10.0.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-region-foo" - } + provider = "aws.main" + cidr_block = "10.0.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-region-foo" + } } resource "aws_vpc" "bar" { - provider = "aws.peer" - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-vpc-peering-conn-region-bar" - } + 
provider = "aws.peer" + cidr_block = "10.1.0.0/16" + tags { + Name = "terraform-testacc-vpc-peering-conn-region-bar" + } } resource "aws_vpc_peering_connection" "foo" { - provider = "aws.main" - vpc_id = "${aws_vpc.foo.id}" - peer_vpc_id = "${aws_vpc.bar.id}" - peer_region = "us-east-1" + provider = "aws.main" + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + peer_region = "us-east-1" } ` diff --git a/aws/resource_aws_vpc_test.go b/aws/resource_aws_vpc_test.go index 54550b86b59..4c6b64842a0 100644 --- a/aws/resource_aws_vpc_test.go +++ b/aws/resource_aws_vpc_test.go @@ -3,7 +3,9 @@ package aws import ( "fmt" "log" + "regexp" "testing" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" @@ -20,6 +22,7 @@ func init() { "aws_internet_gateway", "aws_nat_gateway", "aws_network_acl", + "aws_route_table", "aws_security_group", "aws_subnet", "aws_vpn_gateway", @@ -41,12 +44,17 @@ func testSweepVPCs(region string) error { Name: aws.String("tag-value"), Values: []*string{ aws.String("terraform-testacc-*"), + aws.String("tf-acc-test-*"), }, }, }, } resp, err := conn.DescribeVpcs(req) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EC2 VPC sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error describing vpcs: %s", err) } @@ -56,14 +64,25 @@ func testSweepVPCs(region string) error { } for _, vpc := range resp.Vpcs { - // delete the vpc - _, err := conn.DeleteVpc(&ec2.DeleteVpcInput{ + input := &ec2.DeleteVpcInput{ VpcId: vpc.VpcId, + } + log.Printf("[DEBUG] Deleting VPC: %s", input) + + // Handle EC2 eventual consistency + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.DeleteVpc(input) + if isAWSErr(err, "DependencyViolation", "") { + return resource.RetryableError(err) + } + if err != nil { + return resource.NonRetryableError(err) + } + return nil }) + if err != nil { - return fmt.Errorf( - "Error deleting VPC (%s): %s", - *vpc.VpcId, err) + return fmt.Errorf("Error deleting VPC (%s): %s", aws.StringValue(vpc.VpcId), err) } } @@ -72,8 +91,9 @@ func testSweepVPCs(region string) error { func TestAccAWSVpc_basic(t *testing.T) { var vpc ec2.Vpc + resourceName := "aws_vpc.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -81,77 +101,85 @@ func TestAccAWSVpc_basic(t *testing.T) { { Config: testAccVpcConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckVpcExists("aws_vpc.foo", &vpc), + testAccCheckVpcExists(resourceName, &vpc), testAccCheckVpcCidr(&vpc, "10.1.0.0/16"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "cidr_block", "10.1.0.0/16"), - resource.TestCheckResourceAttrSet( - "aws_vpc.foo", "default_route_table_id"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "enable_dns_support", "true"), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "ec2", regexp.MustCompile(`vpc/vpc-.+`)), + resource.TestCheckResourceAttr(resourceName, "assign_generated_ipv6_cidr_block", "false"), + resource.TestMatchResourceAttr(resourceName, "default_route_table_id", regexp.MustCompile(`^rtb-.+`)), + resource.TestCheckResourceAttr(resourceName, "cidr_block", "10.1.0.0/16"), + resource.TestCheckResourceAttr(resourceName, "enable_dns_support", "true"), + resource.TestCheckResourceAttr(resourceName, "instance_tenancy", "default"), + resource.TestCheckResourceAttr(resourceName, "ipv6_association_id", 
""), + resource.TestCheckResourceAttr(resourceName, "ipv6_cidr_block", ""), + resource.TestMatchResourceAttr(resourceName, "main_route_table_id", regexp.MustCompile(`^rtb-.+`)), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func TestAccAWSVpc_enableIpv6(t *testing.T) { +func TestAccAWSVpc_AssignGeneratedIpv6CidrBlock(t *testing.T) { var vpc ec2.Vpc + resourceName := "aws_vpc.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, Steps: []resource.TestStep{ { - Config: testAccVpcConfigIpv6Enabled, + Config: testAccVpcConfigAssignGeneratedIpv6CidrBlock(true), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckVpcExists("aws_vpc.foo", &vpc), + testAccCheckVpcExists(resourceName, &vpc), testAccCheckVpcCidr(&vpc, "10.1.0.0/16"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "cidr_block", "10.1.0.0/16"), - resource.TestCheckResourceAttrSet( - "aws_vpc.foo", "ipv6_association_id"), - resource.TestCheckResourceAttrSet( - "aws_vpc.foo", "ipv6_cidr_block"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "assign_generated_ipv6_cidr_block", "true"), + resource.TestCheckResourceAttr(resourceName, "assign_generated_ipv6_cidr_block", "true"), + resource.TestCheckResourceAttr(resourceName, "cidr_block", "10.1.0.0/16"), + resource.TestMatchResourceAttr(resourceName, "ipv6_association_id", regexp.MustCompile(`^vpc-cidr-assoc-.+`)), + resource.TestMatchResourceAttr(resourceName, "ipv6_cidr_block", regexp.MustCompile(`/56$`)), ), }, { - Config: testAccVpcConfigIpv6Disabled, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccVpcConfigAssignGeneratedIpv6CidrBlock(false), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckVpcExists("aws_vpc.foo", &vpc), + testAccCheckVpcExists(resourceName, &vpc), testAccCheckVpcCidr(&vpc, "10.1.0.0/16"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "cidr_block", "10.1.0.0/16"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "assign_generated_ipv6_cidr_block", "false"), + resource.TestCheckResourceAttr(resourceName, "assign_generated_ipv6_cidr_block", "false"), + resource.TestCheckResourceAttr(resourceName, "cidr_block", "10.1.0.0/16"), + resource.TestCheckResourceAttr(resourceName, "ipv6_association_id", ""), + resource.TestCheckResourceAttr(resourceName, "ipv6_cidr_block", ""), ), }, { - Config: testAccVpcConfigIpv6Enabled, + Config: testAccVpcConfigAssignGeneratedIpv6CidrBlock(true), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckVpcExists("aws_vpc.foo", &vpc), + testAccCheckVpcExists(resourceName, &vpc), testAccCheckVpcCidr(&vpc, "10.1.0.0/16"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "cidr_block", "10.1.0.0/16"), - resource.TestCheckResourceAttrSet( - "aws_vpc.foo", "ipv6_association_id"), - resource.TestCheckResourceAttrSet( - "aws_vpc.foo", "ipv6_cidr_block"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "assign_generated_ipv6_cidr_block", "true"), + resource.TestCheckResourceAttr(resourceName, "assign_generated_ipv6_cidr_block", "true"), + resource.TestCheckResourceAttr(resourceName, "cidr_block", "10.1.0.0/16"), + resource.TestMatchResourceAttr(resourceName, "ipv6_association_id", regexp.MustCompile(`^vpc-cidr-assoc-.+`)), + resource.TestMatchResourceAttr(resourceName, "ipv6_cidr_block", regexp.MustCompile(`/56$`)), ), }, }, }) } -func 
TestAccAWSVpc_dedicatedTenancy(t *testing.T) { - var vpc ec2.Vpc +func TestAccAWSVpc_Tenancy(t *testing.T) { + var vpcDedicated ec2.Vpc + var vpcDefault ec2.Vpc + resourceName := "aws_vpc.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -159,9 +187,29 @@ func TestAccAWSVpc_dedicatedTenancy(t *testing.T) { { Config: testAccVpcDedicatedConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckVpcExists("aws_vpc.bar", &vpc), - resource.TestCheckResourceAttr( - "aws_vpc.bar", "instance_tenancy", "dedicated"), + testAccCheckVpcExists(resourceName, &vpcDedicated), + resource.TestCheckResourceAttr(resourceName, "instance_tenancy", "dedicated"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccVpcConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckVpcExists(resourceName, &vpcDefault), + resource.TestCheckResourceAttr(resourceName, "instance_tenancy", "default"), + testAccCheckVpcIdsEqual(&vpcDedicated, &vpcDefault), + ), + }, + { + Config: testAccVpcDedicatedConfig, + Check: resource.ComposeTestCheckFunc( + testAccCheckVpcExists(resourceName, &vpcDedicated), + resource.TestCheckResourceAttr(resourceName, "instance_tenancy", "dedicated"), + testAccCheckVpcIdsNotEqual(&vpcDedicated, &vpcDefault), ), }, }, @@ -170,8 +218,9 @@ func TestAccAWSVpc_dedicatedTenancy(t *testing.T) { func TestAccAWSVpc_tags(t *testing.T) { var vpc ec2.Vpc + resourceName := "aws_vpc.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -179,18 +228,21 @@ func TestAccAWSVpc_tags(t *testing.T) { { Config: testAccVpcConfigTags, Check: resource.ComposeTestCheckFunc( - testAccCheckVpcExists("aws_vpc.foo", &vpc), + testAccCheckVpcExists(resourceName, &vpc), testAccCheckVpcCidr(&vpc, "10.1.0.0/16"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "cidr_block", "10.1.0.0/16"), + resource.TestCheckResourceAttr(resourceName, "cidr_block", "10.1.0.0/16"), testAccCheckTags(&vpc.Tags, "foo", "bar"), ), }, - + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, { Config: testAccVpcConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( - testAccCheckVpcExists("aws_vpc.foo", &vpc), + testAccCheckVpcExists(resourceName, &vpc), testAccCheckTags(&vpc.Tags, "foo", ""), testAccCheckTags(&vpc.Tags, "bar", "baz"), ), @@ -201,8 +253,9 @@ func TestAccAWSVpc_tags(t *testing.T) { func TestAccAWSVpc_update(t *testing.T) { var vpc ec2.Vpc + resourceName := "aws_vpc.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -210,18 +263,16 @@ func TestAccAWSVpc_update(t *testing.T) { { Config: testAccVpcConfig, Check: resource.ComposeTestCheckFunc( - testAccCheckVpcExists("aws_vpc.foo", &vpc), + testAccCheckVpcExists(resourceName, &vpc), testAccCheckVpcCidr(&vpc, "10.1.0.0/16"), - resource.TestCheckResourceAttr( - "aws_vpc.foo", "cidr_block", "10.1.0.0/16"), + resource.TestCheckResourceAttr(resourceName, "cidr_block", "10.1.0.0/16"), ), }, { Config: testAccVpcConfigUpdate, Check: resource.ComposeTestCheckFunc( - testAccCheckVpcExists("aws_vpc.foo", &vpc), - resource.TestCheckResourceAttr( - "aws_vpc.foo", 
"enable_dns_hostnames", "true"), + testAccCheckVpcExists(resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "enable_dns_hostnames", "true"), ), }, }, @@ -264,9 +315,28 @@ func testAccCheckVpcDestroy(s *terraform.State) error { func testAccCheckVpcCidr(vpc *ec2.Vpc, expected string) resource.TestCheckFunc { return func(s *terraform.State) error { - CIDRBlock := vpc.CidrBlock - if *CIDRBlock != expected { - return fmt.Errorf("Bad cidr: %s", *vpc.CidrBlock) + if aws.StringValue(vpc.CidrBlock) != expected { + return fmt.Errorf("Bad cidr: %s", aws.StringValue(vpc.CidrBlock)) + } + + return nil + } +} + +func testAccCheckVpcIdsEqual(vpc1, vpc2 *ec2.Vpc) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.StringValue(vpc1.VpcId) != aws.StringValue(vpc2.VpcId) { + return fmt.Errorf("VPC IDs not equal") + } + + return nil + } +} + +func testAccCheckVpcIdsNotEqual(vpc1, vpc2 *ec2.Vpc) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.StringValue(vpc1.VpcId) == aws.StringValue(vpc2.VpcId) { + return fmt.Errorf("VPC IDs are equal") } return nil @@ -292,7 +362,7 @@ func testAccCheckVpcExists(n string, vpc *ec2.Vpc) resource.TestCheckFunc { if err != nil { return err } - if len(resp.Vpcs) == 0 { + if len(resp.Vpcs) == 0 || resp.Vpcs[0] == nil { return fmt.Errorf("VPC not found") } @@ -304,7 +374,10 @@ func testAccCheckVpcExists(n string, vpc *ec2.Vpc) resource.TestCheckFunc { // https://github.com/hashicorp/terraform/issues/1301 func TestAccAWSVpc_bothDnsOptionsSet(t *testing.T) { - resource.Test(t, resource.TestCase{ + var vpc ec2.Vpc + resourceName := "aws_vpc.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -312,19 +385,26 @@ func TestAccAWSVpc_bothDnsOptionsSet(t *testing.T) { { Config: testAccVpcConfig_BothDnsOptions, Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr( - "aws_vpc.bar", "enable_dns_hostnames", "true"), - resource.TestCheckResourceAttr( - "aws_vpc.bar", "enable_dns_support", "true"), + testAccCheckVpcExists(resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "enable_dns_hostnames", "true"), + resource.TestCheckResourceAttr(resourceName, "enable_dns_support", "true"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } // https://github.com/hashicorp/terraform/issues/10168 func TestAccAWSVpc_DisabledDnsSupport(t *testing.T) { - resource.Test(t, resource.TestCase{ + var vpc ec2.Vpc + resourceName := "aws_vpc.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -332,16 +412,24 @@ func TestAccAWSVpc_DisabledDnsSupport(t *testing.T) { { Config: testAccVpcConfig_DisabledDnsSupport, Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr( - "aws_vpc.bar", "enable_dns_support", "false"), + testAccCheckVpcExists(resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "enable_dns_support", "false"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSVpc_classiclinkOptionSet(t *testing.T) { - resource.Test(t, resource.TestCase{ + var vpc ec2.Vpc + resourceName := "aws_vpc.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: 
testAccCheckVpcDestroy, @@ -349,16 +437,24 @@ func TestAccAWSVpc_classiclinkOptionSet(t *testing.T) { { Config: testAccVpcConfig_ClassiclinkOption, Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr( - "aws_vpc.bar", "enable_classiclink", "true"), + testAccCheckVpcExists(resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "enable_classiclink", "true"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSVpc_classiclinkDnsSupportOptionSet(t *testing.T) { - resource.Test(t, resource.TestCase{ + var vpc ec2.Vpc + resourceName := "aws_vpc.test" + + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, @@ -366,16 +462,21 @@ func TestAccAWSVpc_classiclinkDnsSupportOptionSet(t *testing.T) { { Config: testAccVpcConfig_ClassiclinkDnsSupportOption, Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr( - "aws_vpc.bar", "enable_classiclink_dns_support", "true"), + testAccCheckVpcExists(resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "enable_classiclink_dns_support", "true"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } const testAccVpcConfig = ` -resource "aws_vpc" "foo" { +resource "aws_vpc" "test" { cidr_block = "10.1.0.0/16" tags { Name = "terraform-testacc-vpc" @@ -383,27 +484,21 @@ resource "aws_vpc" "foo" { } ` -const testAccVpcConfigIpv6Enabled = ` -resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - assign_generated_ipv6_cidr_block = true - tags { - Name = "terraform-testacc-vpc-ipv6" - } -} -` +func testAccVpcConfigAssignGeneratedIpv6CidrBlock(assignGeneratedIpv6CidrBlock bool) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + assign_generated_ipv6_cidr_block = %t + cidr_block = "10.1.0.0/16" -const testAccVpcConfigIpv6Disabled = ` -resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - tags { - Name = "terraform-testacc-vpc-ipv6" - } + tags { + Name = "terraform-testacc-vpc-ipv6" + } +} +`, assignGeneratedIpv6CidrBlock) } -` const testAccVpcConfigUpdate = ` -resource "aws_vpc" "foo" { +resource "aws_vpc" "test" { cidr_block = "10.1.0.0/16" enable_dns_hostnames = true tags { @@ -413,7 +508,7 @@ resource "aws_vpc" "foo" { ` const testAccVpcConfigTags = ` -resource "aws_vpc" "foo" { +resource "aws_vpc" "test" { cidr_block = "10.1.0.0/16" tags { @@ -424,7 +519,7 @@ resource "aws_vpc" "foo" { ` const testAccVpcConfigTagsUpdate = ` -resource "aws_vpc" "foo" { +resource "aws_vpc" "test" { cidr_block = "10.1.0.0/16" tags { @@ -434,9 +529,9 @@ resource "aws_vpc" "foo" { } ` const testAccVpcDedicatedConfig = ` -resource "aws_vpc" "bar" { +resource "aws_vpc" "test" { instance_tenancy = "dedicated" - cidr_block = "10.2.0.0/16" + cidr_block = "10.1.0.0/16" tags { Name = "terraform-testacc-vpc-dedicated" } @@ -444,11 +539,7 @@ resource "aws_vpc" "bar" { ` const testAccVpcConfig_BothDnsOptions = ` -provider "aws" { - region = "eu-central-1" -} - -resource "aws_vpc" "bar" { +resource "aws_vpc" "test" { cidr_block = "10.2.0.0/16" enable_dns_hostnames = true enable_dns_support = true @@ -459,7 +550,7 @@ resource "aws_vpc" "bar" { ` const testAccVpcConfig_DisabledDnsSupport = ` -resource "aws_vpc" "bar" { +resource "aws_vpc" "test" { cidr_block = "10.2.0.0/16" enable_dns_support = false tags { @@ -469,7 +560,7 @@ resource "aws_vpc" "bar" { ` const testAccVpcConfig_ClassiclinkOption = ` -resource 
"aws_vpc" "bar" { +resource "aws_vpc" "test" { cidr_block = "172.2.0.0/16" enable_classiclink = true tags { @@ -479,7 +570,7 @@ resource "aws_vpc" "bar" { ` const testAccVpcConfig_ClassiclinkDnsSupportOption = ` -resource "aws_vpc" "bar" { +resource "aws_vpc" "test" { cidr_block = "172.2.0.0/16" enable_classiclink = true enable_classiclink_dns_support = true diff --git a/aws/resource_aws_vpn_connection.go b/aws/resource_aws_vpn_connection.go index 21244784134..9a5073c6e81 100644 --- a/aws/resource_aws_vpn_connection.go +++ b/aws/resource_aws_vpn_connection.go @@ -15,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" @@ -383,7 +382,7 @@ func resourceAwsVpnConnectionRead(d *schema.ResourceData, meta interface{}) erro } if len(resp.VpnConnections) != 1 { - return fmt.Errorf("[ERROR] Error finding VPN connection: %s", d.Id()) + return fmt.Errorf("Error finding VPN connection: %s", d.Id()) } vpnConnection := resp.VpnConnections[0] @@ -461,7 +460,6 @@ func resourceAwsVpnConnectionDelete(d *schema.ResourceData, meta interface{}) er }) if err != nil { if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidVpnConnectionID.NotFound" { - d.SetId("") return nil } else { log.Printf("[ERROR] Error deleting VPN connection: %s", err) @@ -533,7 +531,7 @@ func telemetryToMapList(telemetry []*ec2.VgwTelemetry) []map[string]interface{} func xmlConfigToTunnelInfo(xmlConfig string) (*TunnelInfo, error) { var vpnConfig XmlVpnConnectionConfig if err := xml.Unmarshal([]byte(xmlConfig), &vpnConfig); err != nil { - return nil, errwrap.Wrapf("Error Unmarshalling XML: {{err}}", err) + return nil, fmt.Errorf("Error Unmarshalling XML: %s", err) } // don't expect consistent ordering from the XML @@ -568,8 +566,8 @@ func validateVpnConnectionTunnelPreSharedKey(v interface{}, k string) (ws []stri errors = append(errors, fmt.Errorf("%q cannot start with zero character", k)) } - if !regexp.MustCompile(`^[0-9a-zA-Z_]+$`).MatchString(value) { - errors = append(errors, fmt.Errorf("%q can only contain alphanumeric and underscore characters", k)) + if !regexp.MustCompile(`^[0-9a-zA-Z_.]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf("%q can only contain alphanumeric, period and underscore characters", k)) } return diff --git a/aws/resource_aws_vpn_connection_route.go b/aws/resource_aws_vpn_connection_route.go index 0f1991fe658..08373941285 100644 --- a/aws/resource_aws_vpn_connection_route.go +++ b/aws/resource_aws_vpn_connection_route.go @@ -23,13 +23,13 @@ func resourceAwsVpnConnectionRoute() *schema.Resource { Delete: resourceAwsVpnConnectionRouteDelete, Schema: map[string]*schema.Schema{ - "destination_cidr_block": &schema.Schema{ + "destination_cidr_block": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "vpn_connection_id": &schema.Schema{ + "vpn_connection_id": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -106,7 +106,6 @@ func resourceAwsVpnConnectionRouteDelete(d *schema.ResourceData, meta interface{ }) if err != nil { if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidVpnConnectionID.NotFound" { - d.SetId("") return nil } log.Printf("[ERROR] Error deleting VPN connection route: %s", err) @@ -139,11 +138,11 @@ func resourceAwsVpnConnectionRouteDelete(d *schema.ResourceData, 
meta interface{ func findConnectionRoute(conn *ec2.EC2, cidrBlock, vpnConnectionId string) (*ec2.VpnStaticRoute, error) { resp, err := conn.DescribeVpnConnections(&ec2.DescribeVpnConnectionsInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("route.destination-cidr-block"), Values: []*string{aws.String(cidrBlock)}, }, - &ec2.Filter{ + { Name: aws.String("vpn-connection-id"), Values: []*string{aws.String(vpnConnectionId)}, }, diff --git a/aws/resource_aws_vpn_connection_route_test.go b/aws/resource_aws_vpn_connection_route_test.go index 23229b0f9b0..121df3223ec 100644 --- a/aws/resource_aws_vpn_connection_route_test.go +++ b/aws/resource_aws_vpn_connection_route_test.go @@ -15,12 +15,12 @@ import ( func TestAccAWSVpnConnectionRoute_basic(t *testing.T) { rBgpAsn := acctest.RandIntRange(64512, 65534) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccAwsVpnConnectionRouteDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAwsVpnConnectionRouteConfig(rBgpAsn), Check: resource.ComposeTestCheckFunc( testAccAwsVpnConnectionRoute( @@ -31,7 +31,7 @@ func TestAccAWSVpnConnectionRoute_basic(t *testing.T) { ), ), }, - resource.TestStep{ + { Config: testAccAwsVpnConnectionRouteConfigUpdate(rBgpAsn), Check: resource.ComposeTestCheckFunc( testAccAwsVpnConnectionRoute( @@ -56,11 +56,11 @@ func testAccAwsVpnConnectionRouteDestroy(s *terraform.State) error { cidrBlock, vpnConnectionId := resourceAwsVpnConnectionRouteParseId(rs.Primary.ID) routeFilters := []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("route.destination-cidr-block"), Values: []*string{aws.String(cidrBlock)}, }, - &ec2.Filter{ + { Name: aws.String("vpn-connection-id"), Values: []*string{aws.String(vpnConnectionId)}, }, @@ -122,11 +122,11 @@ func testAccAwsVpnConnectionRoute( cidrBlock, vpnConnectionId := resourceAwsVpnConnectionRouteParseId(route.Primary.ID) routeFilters := []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("route.destination-cidr-block"), Values: []*string{aws.String(cidrBlock)}, }, - &ec2.Filter{ + { Name: aws.String("vpn-connection-id"), Values: []*string{aws.String(vpnConnectionId)}, }, diff --git a/aws/resource_aws_vpn_connection_test.go b/aws/resource_aws_vpn_connection_test.go index 47c45475e6f..a93c4502b84 100644 --- a/aws/resource_aws_vpn_connection_test.go +++ b/aws/resource_aws_vpn_connection_test.go @@ -15,12 +15,33 @@ import ( "github.com/hashicorp/terraform/terraform" ) +func TestAccAWSVpnConnection_importBasic(t *testing.T) { + resourceName := "aws_vpn_connection.foo" + rBgpAsn := acctest.RandIntRange(64512, 65534) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccAwsVpnConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsVpnConnectionConfig(rBgpAsn), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSVpnConnection_basic(t *testing.T) { rInt := acctest.RandInt() rBgpAsn := acctest.RandIntRange(64512, 65534) var vpn ec2.VpnConnection - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_connection.foo", Providers: testAccProviders, @@ -58,7 +79,7 @@ func TestAccAWSVpnConnection_tunnelOptions(t *testing.T) { rBgpAsn := acctest.RandIntRange(64512, 65534) var vpn 
ec2.VpnConnection - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_connection.foo", Providers: testAccProviders, @@ -122,7 +143,7 @@ func TestAccAWSVpnConnection_tunnelOptions(t *testing.T) { }, { Config: testAccAwsVpnConnectionConfigSingleTunnelOptions(rBgpAsn, "1234567!", "169.254.254.0/30"), - ExpectError: regexp.MustCompile(`can only contain alphanumeric and underscore characters`), + ExpectError: regexp.MustCompile(`can only contain alphanumeric, period and underscore characters`), }, //Try actual building @@ -154,7 +175,7 @@ func TestAccAWSVpnConnection_withoutStaticRoutes(t *testing.T) { rBgpAsn := acctest.RandIntRange(64512, 65534) var vpn ec2.VpnConnection - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_connection.foo", Providers: testAccProviders, @@ -181,7 +202,7 @@ func TestAccAWSVpnConnection_disappears(t *testing.T) { rBgpAsn := acctest.RandIntRange(64512, 65534) var vpn ec2.VpnConnection - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccAwsVpnConnectionDestroy, diff --git a/aws/resource_aws_vpn_gateway.go b/aws/resource_aws_vpn_gateway.go index 57ed8a58605..ae9a5bba64b 100644 --- a/aws/resource_aws_vpn_gateway.go +++ b/aws/resource_aws_vpn_gateway.go @@ -218,7 +218,7 @@ func resourceAwsVpnGatewayAttach(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"detached", "attaching"}, Target: []string{"attached"}, - Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id(), "available"), + Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id()), Timeout: 10 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { @@ -279,7 +279,7 @@ func resourceAwsVpnGatewayDetach(d *schema.ResourceData, meta interface{}) error stateConf := &resource.StateChangeConf{ Pending: []string{"attached", "detaching", "available"}, Target: []string{"detached"}, - Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id(), "detached"), + Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id()), Timeout: 10 * time.Minute, } if _, err := stateConf.WaitForState(); err != nil { @@ -293,7 +293,7 @@ func resourceAwsVpnGatewayDetach(d *schema.ResourceData, meta interface{}) error // vpnGatewayAttachStateRefreshFunc returns a resource.StateRefreshFunc that is used to watch // the state of a VPN gateway's attachment -func vpnGatewayAttachStateRefreshFunc(conn *ec2.EC2, id string, expected string) resource.StateRefreshFunc { +func vpnGatewayAttachStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc { var start time.Time return func() (interface{}, string, error) { if start.IsZero() { diff --git a/aws/resource_aws_vpn_gateway_attachment.go b/aws/resource_aws_vpn_gateway_attachment.go index db01100008e..fd081b81bf3 100644 --- a/aws/resource_aws_vpn_gateway_attachment.go +++ b/aws/resource_aws_vpn_gateway_attachment.go @@ -20,12 +20,12 @@ func resourceAwsVpnGatewayAttachment() *schema.Resource { Delete: resourceAwsVpnGatewayAttachmentDelete, Schema: map[string]*schema.Schema{ - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "vpn_gateway_id": &schema.Schema{ + "vpn_gateway_id": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -85,7 +85,7 @@ func 
resourceAwsVpnGatewayAttachmentRead(d *schema.ResourceData, meta interface{ if err != nil { awsErr, ok := err.(awserr.Error) - if ok && awsErr.Code() == "InvalidVPNGatewayID.NotFound" { + if ok && awsErr.Code() == "InvalidVpnGatewayID.NotFound" { log.Printf("[WARN] VPN Gateway %q not found.", vgwId) d.SetId("") return nil @@ -130,15 +130,9 @@ func resourceAwsVpnGatewayAttachmentDelete(d *schema.ResourceData, meta interfac awsErr, ok := err.(awserr.Error) if ok { switch awsErr.Code() { - case "InvalidVPNGatewayID.NotFound": - log.Printf("[WARN] VPN Gateway %q not found.", vgwId) - d.SetId("") + case "InvalidVpnGatewayID.NotFound": return nil case "InvalidVpnGatewayAttachment.NotFound": - log.Printf( - "[WARN] VPN Gateway %q attachment to VPC %q not found.", - vgwId, vpcId) - d.SetId("") return nil } } @@ -163,7 +157,6 @@ func resourceAwsVpnGatewayAttachmentDelete(d *schema.ResourceData, meta interfac } log.Printf("[DEBUG] VPN Gateway %q detached from VPC %q.", vgwId, vpcId) - d.SetId("") return nil } @@ -171,7 +164,7 @@ func vpnGatewayAttachmentStateRefresh(conn *ec2.EC2, vpcId, vgwId string) resour return func() (interface{}, string, error) { resp, err := conn.DescribeVpnGateways(&ec2.DescribeVpnGatewaysInput{ Filters: []*ec2.Filter{ - &ec2.Filter{ + { Name: aws.String("attachment.vpc-id"), Values: []*string{aws.String(vpcId)}, }, @@ -183,7 +176,7 @@ func vpnGatewayAttachmentStateRefresh(conn *ec2.EC2, vpcId, vgwId string) resour awsErr, ok := err.(awserr.Error) if ok { switch awsErr.Code() { - case "InvalidVPNGatewayID.NotFound": + case "InvalidVpnGatewayID.NotFound": fallthrough case "InvalidVpnGatewayAttachment.NotFound": return nil, "", nil diff --git a/aws/resource_aws_vpn_gateway_attachment_test.go b/aws/resource_aws_vpn_gateway_attachment_test.go index 329bfd7b4fc..3255d7333a5 100644 --- a/aws/resource_aws_vpn_gateway_attachment_test.go +++ b/aws/resource_aws_vpn_gateway_attachment_test.go @@ -14,13 +14,13 @@ func TestAccAWSVpnGatewayAttachment_basic(t *testing.T) { var vpc ec2.Vpc var vgw ec2.VpnGateway - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_gateway_attachment.test", Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayAttachmentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpnGatewayAttachmentConfig, Check: resource.ComposeTestCheckFunc( testAccCheckVpcExists( @@ -52,13 +52,13 @@ func TestAccAWSVpnGatewayAttachment_deleted(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_gateway_attachment.test", Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayAttachmentDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpnGatewayAttachmentConfig, Check: resource.ComposeTestCheckFunc( testAccCheckVpcExists( @@ -72,7 +72,7 @@ func TestAccAWSVpnGatewayAttachment_deleted(t *testing.T) { &vpc, &vgw), ), }, - resource.TestStep{ + { Config: testAccNoVpnGatewayAttachmentConfig, Check: resource.ComposeTestCheckFunc( testDeleted("aws_vpn_gateway_attachment.test"), diff --git a/aws/resource_aws_vpn_gateway_route_propagation.go b/aws/resource_aws_vpn_gateway_route_propagation.go index 46e4b2208aa..bed5f7e4d20 100644 --- a/aws/resource_aws_vpn_gateway_route_propagation.go +++ b/aws/resource_aws_vpn_gateway_route_propagation.go @@ -16,12 +16,12 @@ func resourceAwsVpnGatewayRoutePropagation() 
*schema.Resource { Delete: resourceAwsVpnGatewayRoutePropagationDisable, Schema: map[string]*schema.Schema{ - "vpn_gateway_id": &schema.Schema{ + "vpn_gateway_id": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "route_table_id": &schema.Schema{ + "route_table_id": { Type: schema.TypeString, Required: true, ForceNew: true, @@ -64,7 +64,6 @@ func resourceAwsVpnGatewayRoutePropagationDisable(d *schema.ResourceData, meta i return fmt.Errorf("error disabling VGW propagation: %s", err) } - d.SetId("") return nil } diff --git a/aws/resource_aws_vpn_gateway_route_propagation_test.go b/aws/resource_aws_vpn_gateway_route_propagation_test.go index dd0f07c6f00..769fc2295f9 100644 --- a/aws/resource_aws_vpn_gateway_route_propagation_test.go +++ b/aws/resource_aws_vpn_gateway_route_propagation_test.go @@ -13,7 +13,7 @@ import ( func TestAccAWSVPNGatewayRoutePropagation_basic(t *testing.T) { var rtID, gwID string - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_gateway_route_propagation.foo", Providers: testAccProviders, diff --git a/aws/resource_aws_vpn_gateway_test.go b/aws/resource_aws_vpn_gateway_test.go index 5e5f94a08e8..287a42d82f8 100644 --- a/aws/resource_aws_vpn_gateway_test.go +++ b/aws/resource_aws_vpn_gateway_test.go @@ -34,12 +34,17 @@ func testSweepVPNGateways(region string) error { Name: aws.String("tag-value"), Values: []*string{ aws.String("terraform-testacc-*"), + aws.String("tf-acc-test-*"), }, }, }, } resp, err := conn.DescribeVpnGateways(req) if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EC2 VPN Gateway sweep for %s: %s", region, err) + return nil + } return fmt.Errorf("Error describing VPN Gateways: %s", err) } @@ -49,19 +54,66 @@ func testSweepVPNGateways(region string) error { } for _, vpng := range resp.VpnGateways { - _, err := conn.DeleteVpnGateway(&ec2.DeleteVpnGatewayInput{ + for _, vpcAttachment := range vpng.VpcAttachments { + input := &ec2.DetachVpnGatewayInput{ + VpcId: vpcAttachment.VpcId, + VpnGatewayId: vpng.VpnGatewayId, + } + + log.Printf("[DEBUG] Detaching VPN Gateway: %s", input) + _, err := conn.DetachVpnGateway(input) + if err != nil { + return fmt.Errorf("error detaching VPN Gateway (%s) from VPC (%s): %s", aws.StringValue(vpng.VpnGatewayId), aws.StringValue(vpcAttachment.VpcId), err) + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"attached", "detaching"}, + Target: []string{"detached"}, + Refresh: vpnGatewayAttachmentStateRefresh(conn, aws.StringValue(vpcAttachment.VpcId), aws.StringValue(vpng.VpnGatewayId)), + Timeout: 10 * time.Minute, + } + + log.Printf("[DEBUG] Waiting for VPN Gateway (%s) to detach from VPC (%s)", aws.StringValue(vpng.VpnGatewayId), aws.StringValue(vpcAttachment.VpcId)) + if _, err = stateConf.WaitForState(); err != nil { + return fmt.Errorf("error waiting for VPN Gateway (%s) to detach from VPC (%s): %s", aws.StringValue(vpng.VpnGatewayId), aws.StringValue(vpcAttachment.VpcId), err) + } + } + + input := &ec2.DeleteVpnGatewayInput{ VpnGatewayId: vpng.VpnGatewayId, - }) + } + + log.Printf("[DEBUG] Deleting VPN Gateway: %s", input) + _, err := conn.DeleteVpnGateway(input) if err != nil { - return fmt.Errorf( - "Error deleting VPN Gateway (%s): %s", - *vpng.VpnGatewayId, err) + return fmt.Errorf("error deleting VPN Gateway (%s): %s", aws.StringValue(vpng.VpnGatewayId), err) } } return nil } +func TestAccAWSVpnGateway_importBasic(t *testing.T) { + resourceName := 
"aws_vpn_gateway.foo" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckVpnGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccVpnGatewayConfig, + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSVpnGateway_basic(t *testing.T) { var v, v2 ec2.VpnGateway @@ -82,13 +134,13 @@ func TestAccAWSVpnGateway_basic(t *testing.T) { return nil } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_gateway.foo", Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpnGatewayConfig, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists( @@ -96,7 +148,7 @@ func TestAccAWSVpnGateway_basic(t *testing.T) { ), }, - resource.TestStep{ + { Config: testAccVpnGatewayConfigChangeVPC, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists( @@ -111,12 +163,12 @@ func TestAccAWSVpnGateway_basic(t *testing.T) { func TestAccAWSVpnGateway_withAvailabilityZoneSetToState(t *testing.T) { var v ec2.VpnGateway - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpnGatewayConfigWithAZ, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &v), @@ -130,12 +182,12 @@ func TestAccAWSVpnGateway_withAvailabilityZoneSetToState(t *testing.T) { func TestAccAWSVpnGateway_withAmazonSideAsnSetToState(t *testing.T) { var v ec2.VpnGateway - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpnGatewayConfigWithASN, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &v), @@ -150,12 +202,12 @@ func TestAccAWSVpnGateway_withAmazonSideAsnSetToState(t *testing.T) { func TestAccAWSVpnGateway_disappears(t *testing.T) { var v ec2.VpnGateway - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpnGatewayConfig, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &v), @@ -205,13 +257,13 @@ func TestAccAWSVpnGateway_reattach(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_gateway.foo", Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckVpnGatewayConfigReattach, Check: resource.ComposeTestCheckFunc( testAccCheckVpcExists("aws_vpc.foo", &vpc1), @@ -224,7 +276,7 @@ func TestAccAWSVpnGateway_reattach(t *testing.T) { testAttachmentFunc(&vgw2, &vpc2), ), }, - resource.TestStep{ + { Config: testAccCheckVpnGatewayConfigReattachChange, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists( @@ -235,7 
+287,7 @@ func TestAccAWSVpnGateway_reattach(t *testing.T) { testAttachmentFunc(&vgw1, &vpc2), ), }, - resource.TestStep{ + { Config: testAccCheckVpnGatewayConfigReattach, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists( @@ -263,18 +315,18 @@ func TestAccAWSVpnGateway_delete(t *testing.T) { } } - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_gateway.foo", Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccVpnGatewayConfig, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &vpnGateway)), }, - resource.TestStep{ + { Config: testAccNoVpnGatewayConfig, Check: resource.ComposeTestCheckFunc(testDeleted("aws_vpn_gateway.foo")), }, @@ -285,20 +337,20 @@ func TestAccAWSVpnGateway_delete(t *testing.T) { func TestAccAWSVpnGateway_tags(t *testing.T) { var v ec2.VpnGateway - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, IDRefreshName: "aws_vpn_gateway.foo", Providers: testAccProviders, CheckDestroy: testAccCheckVpnGatewayDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccCheckVpnGatewayConfigTags, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &v), testAccCheckTags(&v.Tags, "Name", "terraform-testacc-vpn-gateway-tags"), ), }, - resource.TestStep{ + { Config: testAccCheckVpnGatewayConfigTagsUpdate, Check: resource.ComposeTestCheckFunc( testAccCheckVpnGatewayExists("aws_vpn_gateway.foo", &v), diff --git a/aws/resource_aws_waf_byte_match_set.go b/aws/resource_aws_waf_byte_match_set.go index 8c3d3127979..9c62800d601 100644 --- a/aws/resource_aws_waf_byte_match_set.go +++ b/aws/resource_aws_waf_byte_match_set.go @@ -7,7 +7,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -19,12 +18,12 @@ func resourceAwsWafByteMatchSet() *schema.Resource { Delete: resourceAwsWafByteMatchSetDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "byte_match_tuples": &schema.Schema{ + "byte_match_tuples": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ @@ -46,15 +45,15 @@ func resourceAwsWafByteMatchSet() *schema.Resource { }, }, }, - "positional_constraint": &schema.Schema{ + "positional_constraint": { Type: schema.TypeString, Required: true, }, - "target_string": &schema.Schema{ + "target_string": { Type: schema.TypeString, Optional: true, }, - "text_transformation": &schema.Schema{ + "text_transformation": { Type: schema.TypeString, Required: true, }, @@ -70,7 +69,7 @@ func resourceAwsWafByteMatchSetCreate(d *schema.ResourceData, meta interface{}) log.Printf("[INFO] Creating ByteMatchSet: %s", d.Get("name").(string)) - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateByteMatchSetInput{ ChangeToken: token, @@ -79,7 +78,7 @@ func resourceAwsWafByteMatchSetCreate(d *schema.ResourceData, meta interface{}) return conn.CreateByteMatchSet(params) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error creating ByteMatchSet: {{err}}", 
err) + return fmt.Errorf("Error creating ByteMatchSet: %s", err) } resp := out.(*waf.CreateByteMatchSetOutput) @@ -122,7 +121,7 @@ func resourceAwsWafByteMatchSetUpdate(d *schema.ResourceData, meta interface{}) oldT, newT := o.(*schema.Set).List(), n.(*schema.Set).List() err := updateByteMatchSetResource(d.Id(), oldT, newT, conn) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error updating ByteMatchSet: %s", err) } } @@ -141,7 +140,7 @@ func resourceAwsWafByteMatchSetDelete(d *schema.ResourceData, meta interface{}) } } - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteByteMatchSetInput{ ChangeToken: token, @@ -151,14 +150,14 @@ func resourceAwsWafByteMatchSetDelete(d *schema.ResourceData, meta interface{}) return conn.DeleteByteMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting ByteMatchSet: %s", err) } return nil } func updateByteMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WAF) error { - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateByteMatchSetInput{ ChangeToken: token, @@ -169,14 +168,14 @@ func updateByteMatchSetResource(id string, oldT, newT []interface{}, conn *waf.W return conn.UpdateByteMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error updating ByteMatchSet: %s", err) } return nil } func flattenWafByteMatchTuples(bmt []*waf.ByteMatchTuple) []interface{} { - out := make([]interface{}, len(bmt), len(bmt)) + out := make([]interface{}, len(bmt)) for i, t := range bmt { m := make(map[string]interface{}) diff --git a/aws/resource_aws_waf_byte_match_set_test.go b/aws/resource_aws_waf_byte_match_set_test.go index 2e432befbdf..941a87d6cb7 100644 --- a/aws/resource_aws_waf_byte_match_set_test.go +++ b/aws/resource_aws_waf_byte_match_set_test.go @@ -10,7 +10,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" ) @@ -18,12 +17,12 @@ func TestAccAWSWafByteMatchSet_basic(t *testing.T) { var v waf.ByteMatchSet byteMatchSet := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafByteMatchSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafByteMatchSetConfig(byteMatchSet), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafByteMatchSetExists("aws_waf_byte_match_set.byte_set", &v), @@ -52,7 +51,7 @@ func TestAccAWSWafByteMatchSet_changeNameForceNew(t *testing.T) { byteMatchSet := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) byteMatchSetNewName := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafByteMatchSetDestroy, @@ -85,7 +84,7 @@ func TestAccAWSWafByteMatchSet_changeTuples(t *testing.T) { var before, after 
waf.ByteMatchSet byteMatchSetName := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafByteMatchSetDestroy, @@ -138,7 +137,7 @@ func TestAccAWSWafByteMatchSet_noTuples(t *testing.T) { var byteSet waf.ByteMatchSet byteMatchSetName := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafByteMatchSetDestroy, @@ -161,7 +160,7 @@ func TestAccAWSWafByteMatchSet_disappears(t *testing.T) { var v waf.ByteMatchSet byteMatchSet := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafByteMatchSetDestroy, @@ -182,7 +181,7 @@ func testAccCheckAWSWafByteMatchSetDisappears(v *waf.ByteMatchSet) resource.Test return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateByteMatchSetInput{ ChangeToken: token, @@ -205,7 +204,7 @@ func testAccCheckAWSWafByteMatchSetDisappears(v *waf.ByteMatchSet) resource.Test return conn.UpdateByteMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error updating ByteMatchSet: %s", err) } _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { @@ -216,7 +215,7 @@ func testAccCheckAWSWafByteMatchSetDisappears(v *waf.ByteMatchSet) resource.Test return conn.DeleteByteMatchSet(opts) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting ByteMatchSet: %s", err) } return nil diff --git a/aws/resource_aws_waf_geo_match_set.go b/aws/resource_aws_waf_geo_match_set.go index cc5c2dbd104..dc5295b8429 100644 --- a/aws/resource_aws_waf_geo_match_set.go +++ b/aws/resource_aws_waf_geo_match_set.go @@ -6,7 +6,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -18,12 +17,12 @@ func resourceAwsWafGeoMatchSet() *schema.Resource { Delete: resourceAwsWafGeoMatchSetDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "geo_match_constraint": &schema.Schema{ + "geo_match_constraint": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ @@ -48,7 +47,7 @@ func resourceAwsWafGeoMatchSetCreate(d *schema.ResourceData, meta interface{}) e log.Printf("[INFO] Creating GeoMatchSet: %s", d.Get("name").(string)) - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateGeoMatchSetInput{ ChangeToken: token, @@ -58,7 +57,7 @@ func resourceAwsWafGeoMatchSetCreate(d *schema.ResourceData, meta interface{}) e return conn.CreateGeoMatchSet(params) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error creating GeoMatchSet: {{err}}", err) + return fmt.Errorf("Error 
creating GeoMatchSet: %s", err) } resp := out.(*waf.CreateGeoMatchSetOutput) @@ -76,6 +75,8 @@ func resourceAwsWafGeoMatchSetRead(d *schema.ResourceData, meta interface{}) err resp, err := conn.GetGeoMatchSet(params) if err != nil { + // TODO: Replace with constant once it's available + // See https://github.com/aws/aws-sdk-go/issues/1856 if isAWSErr(err, "WAFNonexistentItemException", "") { log.Printf("[WARN] WAF GeoMatchSet (%s) not found, removing from state", d.Id()) d.SetId("") @@ -100,7 +101,7 @@ func resourceAwsWafGeoMatchSetUpdate(d *schema.ResourceData, meta interface{}) e err := updateGeoMatchSetResource(d.Id(), oldT, newT, conn) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating GeoMatchSet: {{err}}", err) + return fmt.Errorf("Error updating GeoMatchSet: %s", err) } } @@ -119,7 +120,7 @@ func resourceAwsWafGeoMatchSetDelete(d *schema.ResourceData, meta interface{}) e } } - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteGeoMatchSetInput{ ChangeToken: token, @@ -129,14 +130,14 @@ func resourceAwsWafGeoMatchSetDelete(d *schema.ResourceData, meta interface{}) e return conn.DeleteGeoMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting GeoMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting GeoMatchSet: %s", err) } return nil } func updateGeoMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WAF) error { - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateGeoMatchSetInput{ ChangeToken: token, @@ -148,53 +149,8 @@ func updateGeoMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WA return conn.UpdateGeoMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating GeoMatchSet: {{err}}", err) + return fmt.Errorf("Error updating GeoMatchSet: %s", err) } return nil } - -func flattenWafGeoMatchConstraint(ts []*waf.GeoMatchConstraint) []interface{} { - out := make([]interface{}, len(ts), len(ts)) - for i, t := range ts { - m := make(map[string]interface{}) - m["type"] = *t.Type - m["value"] = *t.Value - out[i] = m - } - return out -} - -func diffWafGeoMatchSetConstraints(oldT, newT []interface{}) []*waf.GeoMatchSetUpdate { - updates := make([]*waf.GeoMatchSetUpdate, 0) - - for _, od := range oldT { - constraint := od.(map[string]interface{}) - - if idx, contains := sliceContainsMap(newT, constraint); contains { - newT = append(newT[:idx], newT[idx+1:]...) 
- continue - } - - updates = append(updates, &waf.GeoMatchSetUpdate{ - Action: aws.String(waf.ChangeActionDelete), - GeoMatchConstraint: &waf.GeoMatchConstraint{ - Type: aws.String(constraint["type"].(string)), - Value: aws.String(constraint["value"].(string)), - }, - }) - } - - for _, nd := range newT { - constraint := nd.(map[string]interface{}) - - updates = append(updates, &waf.GeoMatchSetUpdate{ - Action: aws.String(waf.ChangeActionInsert), - GeoMatchConstraint: &waf.GeoMatchConstraint{ - Type: aws.String(constraint["type"].(string)), - Value: aws.String(constraint["value"].(string)), - }, - }) - } - return updates -} diff --git a/aws/resource_aws_waf_geo_match_set_test.go b/aws/resource_aws_waf_geo_match_set_test.go index 13782a3e2b7..f480f9d440a 100644 --- a/aws/resource_aws_waf_geo_match_set_test.go +++ b/aws/resource_aws_waf_geo_match_set_test.go @@ -9,7 +9,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" ) @@ -17,12 +16,12 @@ func TestAccAWSWafGeoMatchSet_basic(t *testing.T) { var v waf.GeoMatchSet geoMatchSet := fmt.Sprintf("geoMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafGeoMatchSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafGeoMatchSetConfig(geoMatchSet), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafGeoMatchSetExists("aws_waf_geo_match_set.geo_match_set", &v), @@ -49,7 +48,7 @@ func TestAccAWSWafGeoMatchSet_changeNameForceNew(t *testing.T) { geoMatchSet := fmt.Sprintf("geoMatchSet-%s", acctest.RandString(5)) geoMatchSetNewName := fmt.Sprintf("geoMatchSetNewName-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafGeoMatchSetDestroy, @@ -82,7 +81,7 @@ func TestAccAWSWafGeoMatchSet_disappears(t *testing.T) { var v waf.GeoMatchSet geoMatchSet := fmt.Sprintf("geoMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafGeoMatchSetDestroy, @@ -103,7 +102,7 @@ func TestAccAWSWafGeoMatchSet_changeConstraints(t *testing.T) { var before, after waf.GeoMatchSet setName := fmt.Sprintf("geoMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafGeoMatchSetDestroy, @@ -152,7 +151,7 @@ func TestAccAWSWafGeoMatchSet_noConstraints(t *testing.T) { var ipset waf.GeoMatchSet setName := fmt.Sprintf("geoMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafGeoMatchSetDestroy, @@ -175,7 +174,7 @@ func testAccCheckAWSWafGeoMatchSetDisappears(v *waf.GeoMatchSet) resource.TestCh return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) 
(interface{}, error) { req := &waf.UpdateGeoMatchSetInput{ ChangeToken: token, @@ -195,7 +194,7 @@ func testAccCheckAWSWafGeoMatchSetDisappears(v *waf.GeoMatchSet) resource.TestCh return conn.UpdateGeoMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating GeoMatchSet: {{err}}", err) + return fmt.Errorf("Error updating GeoMatchSet: %s", err) } _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { @@ -206,7 +205,7 @@ func testAccCheckAWSWafGeoMatchSetDisappears(v *waf.GeoMatchSet) resource.TestCh return conn.DeleteGeoMatchSet(opts) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting GeoMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting GeoMatchSet: %s", err) } return nil } diff --git a/aws/resource_aws_waf_ipset.go b/aws/resource_aws_waf_ipset.go index 26c6fe74eb1..9c4e1b47779 100644 --- a/aws/resource_aws_waf_ipset.go +++ b/aws/resource_aws_waf_ipset.go @@ -5,34 +5,45 @@ import ( "log" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" "github.com/hashicorp/terraform/helper/schema" ) +// WAF requires UpdateIPSet operations be split into batches of 1000 Updates +const wafUpdateIPSetUpdatesLimit = 1000 + func resourceAwsWafIPSet() *schema.Resource { return &schema.Resource{ Create: resourceAwsWafIPSetCreate, Read: resourceAwsWafIPSetRead, Update: resourceAwsWafIPSetUpdate, Delete: resourceAwsWafIPSetDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "ip_set_descriptors": &schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "ip_set_descriptors": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Required: true, }, @@ -46,7 +57,7 @@ func resourceAwsWafIPSet() *schema.Resource { func resourceAwsWafIPSetCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateIPSetInput{ ChangeToken: token, @@ -94,6 +105,14 @@ func resourceAwsWafIPSetRead(d *schema.ResourceData, meta interface{}) error { d.Set("name", resp.IPSet.Name) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "waf", + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("ipset/%s", d.Id()), + } + d.Set("arn", arn.String()) + return nil } @@ -122,11 +141,11 @@ func resourceAwsWafIPSetDelete(d *schema.ResourceData, meta interface{}) error { noDescriptors := []interface{}{} err := updateWafIpSetDescriptors(d.Id(), oldDescriptors, noDescriptors, conn) if err != nil { - return fmt.Errorf("Error updating IPSetDescriptors: %s", err) + return fmt.Errorf("Error Deleting IPSetDescriptors: %s", err) } } - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteIPSetInput{ ChangeToken: token, @@ -143,25 +162,28 @@ func resourceAwsWafIPSetDelete(d *schema.ResourceData, meta interface{}) error { } func updateWafIpSetDescriptors(id string, oldD, 
newD []interface{}, conn *waf.WAF) error {
-	wr := newWafRetryer(conn, "global")
-	_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
-		req := &waf.UpdateIPSetInput{
-			ChangeToken: token,
-			IPSetId: aws.String(id),
-			Updates: diffWafIpSetDescriptors(oldD, newD),
+	for _, ipSetUpdates := range diffWafIpSetDescriptors(oldD, newD) {
+		wr := newWafRetryer(conn)
+		_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
+			req := &waf.UpdateIPSetInput{
+				ChangeToken: token,
+				IPSetId: aws.String(id),
+				Updates: ipSetUpdates,
+			}
+			log.Printf("[INFO] Updating IPSet descriptors: %s", req)
+			return conn.UpdateIPSet(req)
+		})
+		if err != nil {
+			return fmt.Errorf("Error Updating WAF IPSet: %s", err)
 		}
-		log.Printf("[INFO] Updating IPSet descriptors: %s", req)
-		return conn.UpdateIPSet(req)
-	})
-	if err != nil {
-		return fmt.Errorf("Error Updating WAF IPSet: %s", err)
 	}
 	return nil
 }
-func diffWafIpSetDescriptors(oldD, newD []interface{}) []*waf.IPSetUpdate {
-	updates := make([]*waf.IPSetUpdate, 0)
+func diffWafIpSetDescriptors(oldD, newD []interface{}) [][]*waf.IPSetUpdate {
+	updates := make([]*waf.IPSetUpdate, 0, wafUpdateIPSetUpdatesLimit)
+	updatesBatches := make([][]*waf.IPSetUpdate, 0)
 	for _, od := range oldD {
 		descriptor := od.(map[string]interface{})
@@ -171,6 +193,11 @@ func diffWafIpSetDescriptors(oldD, newD []interface{}) []*waf.IPSetUpdate {
 			continue
 		}
+		if len(updates) == wafUpdateIPSetUpdatesLimit {
+			updatesBatches = append(updatesBatches, updates)
+			updates = make([]*waf.IPSetUpdate, 0, wafUpdateIPSetUpdatesLimit)
+		}
+
 		updates = append(updates, &waf.IPSetUpdate{
 			Action: aws.String(waf.ChangeActionDelete),
 			IPSetDescriptor: &waf.IPSetDescriptor{
@@ -183,6 +210,11 @@ func diffWafIpSetDescriptors(oldD, newD []interface{}) []*waf.IPSetUpdate {
 	for _, nd := range newD {
 		descriptor := nd.(map[string]interface{})
+		if len(updates) == wafUpdateIPSetUpdatesLimit {
+			updatesBatches = append(updatesBatches, updates)
+			updates = make([]*waf.IPSetUpdate, 0, wafUpdateIPSetUpdatesLimit)
+		}
+
 		updates = append(updates, &waf.IPSetUpdate{
 			Action: aws.String(waf.ChangeActionInsert),
 			IPSetDescriptor: &waf.IPSetDescriptor{
@@ -191,5 +223,6 @@ func diffWafIpSetDescriptors(oldD, newD []interface{}) []*waf.IPSetUpdate {
 			},
 		})
 	}
-	return updates
+	updatesBatches = append(updatesBatches, updates)
+	return updatesBatches
 }
diff --git a/aws/resource_aws_waf_ipset_test.go b/aws/resource_aws_waf_ipset_test.go
index ee75931163d..d907226c2cc 100644
--- a/aws/resource_aws_waf_ipset_test.go
+++ b/aws/resource_aws_waf_ipset_test.go
@@ -2,7 +2,10 @@ package aws
 import (
 	"fmt"
+	"net"
 	"reflect"
+	"regexp"
+	"strings"
 	"testing"
 	"github.com/hashicorp/terraform/helper/resource"
@@ -18,12 +21,12 @@ func TestAccAWSWafIPSet_basic(t *testing.T) {
 	var v waf.IPSet
 	ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5))
-	resource.Test(t, resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		CheckDestroy: testAccCheckAWSWafIPSetDestroy,
 		Steps: []resource.TestStep{
-			resource.TestStep{
+			{
 				Config: testAccAWSWafIPSetConfig(ipsetName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckAWSWafIPSetExists("aws_waf_ipset.ipset", &v),
@@ -33,8 +36,15 @@ func TestAccAWSWafIPSet_basic(t *testing.T) {
 						"aws_waf_ipset.ipset", "ip_set_descriptors.4037960608.type", "IPV4"),
 					resource.TestCheckResourceAttr(
 						"aws_waf_ipset.ipset", "ip_set_descriptors.4037960608.value", "192.0.7.0/24"),
+					resource.TestMatchResourceAttr("aws_waf_ipset.ipset", "arn",
+						regexp.MustCompile(`^arn:[\w-]+:waf::\d{12}:ipset/.+$`)),
 				),
 			},
+			{
+				ResourceName: "aws_waf_ipset.ipset",
+				ImportState: true,
+				ImportStateVerify: true,
+			},
 		},
 	})
 }
@@ -42,7 +52,7 @@ func TestAccAWSWafIPSet_basic(t *testing.T) {
 func TestAccAWSWafIPSet_disappears(t *testing.T) {
 	var v waf.IPSet
 	ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5))
-	resource.Test(t, resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		CheckDestroy: testAccCheckAWSWafIPSetDestroy,
@@ -64,7 +74,7 @@ func TestAccAWSWafIPSet_changeNameForceNew(t *testing.T) {
 	ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5))
 	ipsetNewName := fmt.Sprintf("ip-set-new-%s", acctest.RandString(5))
-	resource.Test(t, resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		CheckDestroy: testAccCheckAWSWafIPSetDestroy,
@@ -101,7 +111,7 @@ func TestAccAWSWafIPSet_changeDescriptors(t *testing.T) {
 	var before, after waf.IPSet
 	ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5))
-	resource.Test(t, resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		CheckDestroy: testAccCheckAWSWafIPSetDestroy,
@@ -142,7 +152,7 @@ func TestAccAWSWafIPSet_noDescriptors(t *testing.T) {
 	var ipset waf.IPSet
 	ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5))
-	resource.Test(t, resource.TestCase{
+	resource.ParallelTest(t, resource.TestCase{
 		PreCheck: func() { testAccPreCheck(t) },
 		Providers: testAccProviders,
 		CheckDestroy: testAccCheckAWSWafIPSetDestroy,
@@ -161,11 +171,51 @@ func TestAccAWSWafIPSet_noDescriptors(t *testing.T) {
 	})
 }
+func TestAccAWSWafIPSet_IpSetDescriptors_1000UpdateLimit(t *testing.T) {
+	var ipset waf.IPSet
+	ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5))
+	resourceName := "aws_waf_ipset.ipset"
+
+	incrementIP := func(ip net.IP) {
+		for j := len(ip) - 1; j >= 0; j-- {
+			ip[j]++
+			if ip[j] > 0 {
+				break
+			}
+		}
+	}
+
+	// Generate 2048 IPs
+	ip, ipnet, err := net.ParseCIDR("10.0.0.0/21")
+	if err != nil {
+		t.Fatal(err)
+	}
+	ipSetDescriptors := make([]string, 0, 2048)
+	for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); incrementIP(ip) {
+		ipSetDescriptors = append(ipSetDescriptors, fmt.Sprintf("ip_set_descriptors {\ntype=\"IPV4\"\nvalue=\"%s/32\"\n}", ip))
+	}
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck: func() { testAccPreCheck(t) },
+		Providers: testAccProviders,
+		CheckDestroy: testAccCheckAWSWafIPSetDestroy,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccAWSWafIPSetConfig_IpSetDescriptors(ipsetName, strings.Join(ipSetDescriptors, "\n")),
+				Check: resource.ComposeAggregateTestCheckFunc(
+					testAccCheckAWSWafIPSetExists(resourceName, &ipset),
+					resource.TestCheckResourceAttr(resourceName, "ip_set_descriptors.#", "2048"),
+				),
+			},
+		},
+	})
+}
+
 func TestDiffWafIpSetDescriptors(t *testing.T) {
 	testCases := []struct {
 		Old []interface{}
 		New []interface{}
-		ExpectedUpdates []*waf.IPSetUpdate
+		ExpectedUpdates [][]*waf.IPSetUpdate
 	}{
 		{
 			// Change
@@ -175,19 +225,21 @@ func TestDiffWafIpSetDescriptors(t *testing.T) {
 			New: []interface{}{
 				map[string]interface{}{"type": "IPV4", "value": "192.0.8.0/24"},
 			},
-			ExpectedUpdates: []*waf.IPSetUpdate{
-				&waf.IPSetUpdate{
-					Action: aws.String(waf.ChangeActionDelete),
-					IPSetDescriptor: &waf.IPSetDescriptor{
-						Type: aws.String("IPV4"),
- Value: aws.String("192.0.7.0/24"), + ExpectedUpdates: [][]*waf.IPSetUpdate{ + { + { + Action: aws.String(waf.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.7.0/24"), + }, }, - }, - &waf.IPSetUpdate{ - Action: aws.String(waf.ChangeActionInsert), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("192.0.8.0/24"), + { + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.8.0/24"), + }, }, }, }, @@ -200,26 +252,28 @@ func TestDiffWafIpSetDescriptors(t *testing.T) { map[string]interface{}{"type": "IPV4", "value": "10.0.2.0/24"}, map[string]interface{}{"type": "IPV4", "value": "10.0.3.0/24"}, }, - ExpectedUpdates: []*waf.IPSetUpdate{ - &waf.IPSetUpdate{ - Action: aws.String(waf.ChangeActionInsert), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("10.0.1.0/24"), + ExpectedUpdates: [][]*waf.IPSetUpdate{ + { + { + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.1.0/24"), + }, }, - }, - &waf.IPSetUpdate{ - Action: aws.String(waf.ChangeActionInsert), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("10.0.2.0/24"), + { + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.2.0/24"), + }, }, - }, - &waf.IPSetUpdate{ - Action: aws.String(waf.ChangeActionInsert), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("10.0.3.0/24"), + { + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.3.0/24"), + }, }, }, }, @@ -231,19 +285,21 @@ func TestDiffWafIpSetDescriptors(t *testing.T) { map[string]interface{}{"type": "IPV4", "value": "192.0.8.0/24"}, }, New: []interface{}{}, - ExpectedUpdates: []*waf.IPSetUpdate{ - &waf.IPSetUpdate{ - Action: aws.String(waf.ChangeActionDelete), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("192.0.7.0/24"), + ExpectedUpdates: [][]*waf.IPSetUpdate{ + { + { + Action: aws.String(waf.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.7.0/24"), + }, }, - }, - &waf.IPSetUpdate{ - Action: aws.String(waf.ChangeActionDelete), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("192.0.8.0/24"), + { + Action: aws.String(waf.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.8.0/24"), + }, }, }, }, @@ -264,7 +320,7 @@ func testAccCheckAWSWafIPSetDisappears(v *waf.IPSet) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateIPSetInput{ ChangeToken: token, @@ -393,6 +449,13 @@ func testAccAWSWafIPSetConfigChangeIPSetDescriptors(name string) string { }`, name) } +func testAccAWSWafIPSetConfig_IpSetDescriptors(name, ipSetDescriptors string) string { + return fmt.Sprintf(`resource "aws_waf_ipset" "ipset" { + name = "%s" +%s +}`, name, ipSetDescriptors) +} + func testAccAWSWafIPSetConfig_noDescriptors(name 
string) string { return fmt.Sprintf(`resource "aws_waf_ipset" "ipset" { name = "%s" diff --git a/aws/resource_aws_waf_rate_based_rule.go b/aws/resource_aws_waf_rate_based_rule.go index c30a8178c99..2d3f5f5dada 100644 --- a/aws/resource_aws_waf_rate_based_rule.go +++ b/aws/resource_aws_waf_rate_based_rule.go @@ -19,32 +19,32 @@ func resourceAwsWafRateBasedRule() *schema.Resource { Delete: resourceAwsWafRateBasedRuleDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "metric_name": &schema.Schema{ + "metric_name": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validateWafMetricName, }, - "predicates": &schema.Schema{ + "predicates": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "negated": &schema.Schema{ + "negated": { Type: schema.TypeBool, Required: true, }, - "data_id": &schema.Schema{ + "data_id": { Type: schema.TypeString, Required: true, - ValidateFunc: validateMaxLength(128), + ValidateFunc: validation.StringLenBetween(0, 128), }, - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Required: true, ValidateFunc: validateWafPredicatesType(), @@ -52,11 +52,11 @@ func resourceAwsWafRateBasedRule() *schema.Resource { }, }, }, - "rate_key": &schema.Schema{ + "rate_key": { Type: schema.TypeString, Required: true, }, - "rate_limit": &schema.Schema{ + "rate_limit": { Type: schema.TypeInt, Required: true, ValidateFunc: validation.IntAtLeast(2000), @@ -68,7 +68,7 @@ func resourceAwsWafRateBasedRule() *schema.Resource { func resourceAwsWafRateBasedRuleCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateRateBasedRuleInput{ ChangeToken: token, @@ -157,7 +157,7 @@ func resourceAwsWafRateBasedRuleDelete(d *schema.ResourceData, meta interface{}) } } - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteRateBasedRuleInput{ ChangeToken: token, @@ -174,7 +174,7 @@ func resourceAwsWafRateBasedRuleDelete(d *schema.ResourceData, meta interface{}) } func updateWafRateBasedRuleResource(id string, oldP, newP []interface{}, rateLimit interface{}, conn *waf.WAF) error { - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateRateBasedRuleInput{ ChangeToken: token, diff --git a/aws/resource_aws_waf_rate_based_rule_test.go b/aws/resource_aws_waf_rate_based_rule_test.go index 54fba9fd6e2..8758993ac75 100644 --- a/aws/resource_aws_waf_rate_based_rule_test.go +++ b/aws/resource_aws_waf_rate_based_rule_test.go @@ -17,12 +17,12 @@ import ( func TestAccAWSWafRateBasedRule_basic(t *testing.T) { var v waf.RateBasedRule wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRateBasedRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafRateBasedRuleConfig(wafRuleName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRateBasedRuleExists("aws_waf_rate_based_rule.wafrule", &v), @@ -43,7 +43,7 @@ func 
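A change that repeats throughout this patch is `newWafRetryer(conn, "global")` becoming `newWafRetryer(conn)`: the retryer no longer takes a region/scope argument and only needs the WAF client. The retryer itself is not part of this diff; the sketch below shows the change-token retry pattern it wraps, with the type name, timeout, and the `WAFStaleDataException` retry condition taken as assumptions for illustration:

```go
package main

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
	"github.com/hashicorp/terraform/helper/resource"
)

// wafRetryer is an illustrative stand-in for the provider's retryer type;
// only the WAF client is needed, which is why the "global" argument is gone.
type wafRetryer struct {
	Connection *waf.WAF
}

// RetryWithToken fetches a fresh change token for every attempt and retries
// the wrapped call while WAF reports the token as stale.
func (t *wafRetryer) RetryWithToken(f func(token *string) (interface{}, error)) (interface{}, error) {
	var out interface{}
	err := resource.Retry(15*time.Minute, func() *resource.RetryError {
		tokenOut, err := t.Connection.GetChangeToken(&waf.GetChangeTokenInput{})
		if err != nil {
			return resource.NonRetryableError(err)
		}

		out, err = f(tokenOut.ChangeToken)
		if err != nil {
			if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "WAFStaleDataException" {
				return resource.RetryableError(err)
			}
			return resource.NonRetryableError(err)
		}
		return nil
	})
	return out, err
}

func newWafRetryer(conn *waf.WAF) *wafRetryer {
	return &wafRetryer{Connection: conn}
}

func main() {
	conn := waf.New(session.Must(session.NewSession()))
	wr := newWafRetryer(conn)
	_, _ = wr.RetryWithToken(func(token *string) (interface{}, error) {
		return conn.CreateIPSet(&waf.CreateIPSetInput{
			ChangeToken: token,
			Name:        aws.String("example"),
		})
	})
}
```

Every `RetryWithToken` call in the resources above follows this shape: fetch a token, run the mutating call, retry if the token went stale.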
TestAccAWSWafRateBasedRule_changeNameForceNew(t *testing.T) { wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) wafRuleNewName := fmt.Sprintf("wafrulenew%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafIPSetDestroy, @@ -79,7 +79,7 @@ func TestAccAWSWafRateBasedRule_changeNameForceNew(t *testing.T) { func TestAccAWSWafRateBasedRule_disappears(t *testing.T) { var v waf.RateBasedRule wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRateBasedRuleDestroy, @@ -104,7 +104,7 @@ func TestAccAWSWafRateBasedRule_changePredicates(t *testing.T) { var idx int ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRuleDestroy, @@ -179,7 +179,7 @@ func TestAccAWSWafRateBasedRule_noPredicates(t *testing.T) { var rule waf.RateBasedRule ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRateBasedRuleDestroy, @@ -202,7 +202,7 @@ func testAccCheckAWSWafRateBasedRuleDisappears(v *waf.RateBasedRule) resource.Te return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateRateBasedRuleInput{ ChangeToken: token, diff --git a/aws/resource_aws_waf_regex_match_set.go b/aws/resource_aws_waf_regex_match_set.go new file mode 100644 index 00000000000..9bfbee72be9 --- /dev/null +++ b/aws/resource_aws_waf_regex_match_set.go @@ -0,0 +1,175 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRegexMatchSet() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegexMatchSetCreate, + Read: resourceAwsWafRegexMatchSetRead, + Update: resourceAwsWafRegexMatchSetUpdate, + Delete: resourceAwsWafRegexMatchSetDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "regex_match_tuple": { + Type: schema.TypeSet, + Optional: true, + Set: resourceAwsWafRegexMatchSetTupleHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "field_to_match": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data": { + Type: schema.TypeString, + Optional: true, + StateFunc: func(v interface{}) string { + return strings.ToLower(v.(string)) + }, + }, + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "regex_pattern_set_id": { + Type: schema.TypeString, + Required: true, + }, + "text_transformation": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + } +} + +func resourceAwsWafRegexMatchSetCreate(d *schema.ResourceData, 
meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + log.Printf("[INFO] Creating WAF Regex Match Set: %s", d.Get("name").(string)) + + wr := newWafRetryer(conn) + out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateRegexMatchSetInput{ + ChangeToken: token, + Name: aws.String(d.Get("name").(string)), + } + return conn.CreateRegexMatchSet(params) + }) + if err != nil { + return fmt.Errorf("Failed creating WAF Regex Match Set: %s", err) + } + resp := out.(*waf.CreateRegexMatchSetOutput) + + d.SetId(*resp.RegexMatchSet.RegexMatchSetId) + + return resourceAwsWafRegexMatchSetUpdate(d, meta) +} + +func resourceAwsWafRegexMatchSetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + log.Printf("[INFO] Reading WAF Regex Match Set: %s", d.Get("name").(string)) + params := &waf.GetRegexMatchSetInput{ + RegexMatchSetId: aws.String(d.Id()), + } + + resp, err := conn.GetRegexMatchSet(params) + if err != nil { + if isAWSErr(err, "WAFNonexistentItemException", "") { + log.Printf("[WARN] WAF Regex Match Set (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return err + } + + d.Set("name", resp.RegexMatchSet.Name) + d.Set("regex_match_tuple", flattenWafRegexMatchTuples(resp.RegexMatchSet.RegexMatchTuples)) + + return nil +} + +func resourceAwsWafRegexMatchSetUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + log.Printf("[INFO] Updating WAF Regex Match Set: %s", d.Get("name").(string)) + + if d.HasChange("regex_match_tuple") { + o, n := d.GetChange("regex_match_tuple") + oldT, newT := o.(*schema.Set).List(), n.(*schema.Set).List() + err := updateRegexMatchSetResource(d.Id(), oldT, newT, conn) + if err != nil { + return fmt.Errorf("Failed updating WAF Regex Match Set: %s", err) + } + } + + return resourceAwsWafRegexMatchSetRead(d, meta) +} + +func resourceAwsWafRegexMatchSetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + oldTuples := d.Get("regex_match_tuple").(*schema.Set).List() + if len(oldTuples) > 0 { + noTuples := []interface{}{} + err := updateRegexMatchSetResource(d.Id(), oldTuples, noTuples, conn) + if err != nil { + return fmt.Errorf("Error updating WAF Regex Match Set: %s", err) + } + } + + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: aws.String(d.Id()), + } + log.Printf("[INFO] Deleting WAF Regex Match Set: %s", req) + return conn.DeleteRegexMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regex Match Set: %s", err) + } + + return nil +} + +func updateRegexMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WAF) error { + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: aws.String(id), + Updates: diffWafRegexMatchSetTuples(oldT, newT), + } + + return conn.UpdateRegexMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regex Match Set: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_waf_regex_match_set_test.go b/aws/resource_aws_waf_regex_match_set_test.go new file mode 100644 index 00000000000..bd1c43984d4 --- /dev/null +++ b/aws/resource_aws_waf_regex_match_set_test.go @@ -0,0 +1,382 @@ +package aws + +import ( + "fmt" + "log" + "strings" 
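`resourceAwsWafRegexMatchSetRead` above stores the API response through `flattenWafRegexMatchTuples`, which is not included in this diff. A sketch of what such a flattener plausibly looks like, assuming the lower-casing `StateFunc` on `data` is mirrored; treat both helpers below as illustrative stand-ins rather than the provider's exact code:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/waf"
)

// flattenFieldToMatch mirrors the provider helper of the same name.
func flattenFieldToMatch(fm *waf.FieldToMatch) []interface{} {
	m := map[string]interface{}{
		"type": aws.StringValue(fm.Type),
	}
	if fm.Data != nil {
		// The schema lower-cases "data" through a StateFunc, so mirror that here.
		m["data"] = strings.ToLower(aws.StringValue(fm.Data))
	}
	return []interface{}{m}
}

// flattenWafRegexMatchTuples turns API tuples back into the map shape the
// "regex_match_tuple" set expects.
func flattenWafRegexMatchTuples(ts []*waf.RegexMatchTuple) []interface{} {
	out := make([]interface{}, len(ts))
	for i, t := range ts {
		out[i] = map[string]interface{}{
			"field_to_match":       flattenFieldToMatch(t.FieldToMatch),
			"regex_pattern_set_id": aws.StringValue(t.RegexPatternSetId),
			"text_transformation":  aws.StringValue(t.TextTransformation),
		}
	}
	return out
}

func main() {
	tuples := []*waf.RegexMatchTuple{{
		FieldToMatch:       &waf.FieldToMatch{Data: aws.String("User-Agent"), Type: aws.String("HEADER")},
		RegexPatternSetId:  aws.String("example-pattern-set-id"),
		TextTransformation: aws.String("NONE"),
	}}
	fmt.Println(flattenWafRegexMatchTuples(tuples))
}
```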
+ "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_waf_regex_match_set", &resource.Sweeper{ + Name: "aws_waf_regex_match_set", + F: testSweepWafRegexMatchSet, + }) +} + +func testSweepWafRegexMatchSet(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).wafconn + + req := &waf.ListRegexMatchSetsInput{} + resp, err := conn.ListRegexMatchSets(req) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping WAF Regex Match Set sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error describing WAF Regex Match Sets: %s", err) + } + + if len(resp.RegexMatchSets) == 0 { + log.Print("[DEBUG] No AWS WAF Regex Match Sets to sweep") + return nil + } + + for _, s := range resp.RegexMatchSets { + if !strings.HasPrefix(*s.Name, "tfacc") { + continue + } + + resp, err := conn.GetRegexMatchSet(&waf.GetRegexMatchSetInput{ + RegexMatchSetId: s.RegexMatchSetId, + }) + if err != nil { + return err + } + set := resp.RegexMatchSet + + oldTuples := flattenWafRegexMatchTuples(set.RegexMatchTuples) + noTuples := []interface{}{} + err = updateRegexMatchSetResource(*set.RegexMatchSetId, oldTuples, noTuples, conn) + if err != nil { + return fmt.Errorf("Error updating WAF Regex Match Set: %s", err) + } + + wr := newWafRetryer(conn) + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: set.RegexMatchSetId, + } + log.Printf("[INFO] Deleting WAF Regex Match Set: %s", req) + return conn.DeleteRegexMatchSet(req) + }) + if err != nil { + return fmt.Errorf("error deleting WAF Regex Match Set (%s): %s", aws.StringValue(set.RegexMatchSetId), err) + } + } + + return nil +} + +// Serialized acceptance tests due to WAF account limits +// https://docs.aws.amazon.com/waf/latest/developerguide/limits.html +func TestAccAWSWafRegexMatchSet(t *testing.T) { + testCases := map[string]func(t *testing.T){ + "basic": testAccAWSWafRegexMatchSet_basic, + "changePatterns": testAccAWSWafRegexMatchSet_changePatterns, + "noPatterns": testAccAWSWafRegexMatchSet_noPatterns, + "disappears": testAccAWSWafRegexMatchSet_disappears, + } + + for name, tc := range testCases { + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + }) + } +} + +func testAccAWSWafRegexMatchSet_basic(t *testing.T) { + var matchSet waf.RegexMatchSet + var patternSet waf.RegexPatternSet + var idx int + + matchSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + fieldToMatch := waf.FieldToMatch{ + Data: aws.String("User-Agent"), + Type: aws.String("HEADER"), + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegexMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegexMatchSetConfig(matchSetName, patternSetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegexMatchSetExists("aws_waf_regex_match_set.test", &matchSet), + testAccCheckAWSWafRegexPatternSetExists("aws_waf_regex_pattern_set.test", &patternSet), + 
computeWafRegexMatchSetTuple(&patternSet, &fieldToMatch, "NONE", &idx), + resource.TestCheckResourceAttr("aws_waf_regex_match_set.test", "name", matchSetName), + resource.TestCheckResourceAttr("aws_waf_regex_match_set.test", "regex_match_tuple.#", "1"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.#", &idx, "1"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.data", &idx, "user-agent"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.type", &idx, "HEADER"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.text_transformation", &idx, "NONE"), + ), + }, + }, + }) +} + +func testAccAWSWafRegexMatchSet_changePatterns(t *testing.T) { + var before, after waf.RegexMatchSet + var patternSet waf.RegexPatternSet + var idx1, idx2 int + + matchSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegexMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegexMatchSetConfig(matchSetName, patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegexMatchSetExists("aws_waf_regex_match_set.test", &before), + testAccCheckAWSWafRegexPatternSetExists("aws_waf_regex_pattern_set.test", &patternSet), + computeWafRegexMatchSetTuple(&patternSet, &waf.FieldToMatch{Data: aws.String("User-Agent"), Type: aws.String("HEADER")}, "NONE", &idx1), + resource.TestCheckResourceAttr("aws_waf_regex_match_set.test", "name", matchSetName), + resource.TestCheckResourceAttr("aws_waf_regex_match_set.test", "regex_match_tuple.#", "1"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.#", &idx1, "1"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.data", &idx1, "user-agent"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.type", &idx1, "HEADER"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.text_transformation", &idx1, "NONE"), + ), + }, + { + Config: testAccAWSWafRegexMatchSetConfig_changePatterns(matchSetName, patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegexMatchSetExists("aws_waf_regex_match_set.test", &after), + resource.TestCheckResourceAttr("aws_waf_regex_match_set.test", "name", matchSetName), + resource.TestCheckResourceAttr("aws_waf_regex_match_set.test", "regex_match_tuple.#", "1"), + + computeWafRegexMatchSetTuple(&patternSet, &waf.FieldToMatch{Data: aws.String("Referer"), Type: aws.String("HEADER")}, "COMPRESS_WHITE_SPACE", &idx2), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.#", &idx2, "1"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.data", &idx2, "referer"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.type", &idx2, "HEADER"), + testCheckResourceAttrWithIndexesAddr("aws_waf_regex_match_set.test", "regex_match_tuple.%d.text_transformation", &idx2, 
"COMPRESS_WHITE_SPACE"), + ), + }, + }, + }) +} + +func testAccAWSWafRegexMatchSet_noPatterns(t *testing.T) { + var matchSet waf.RegexMatchSet + matchSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegexMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegexMatchSetConfig_noPatterns(matchSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegexMatchSetExists("aws_waf_regex_match_set.test", &matchSet), + resource.TestCheckResourceAttr("aws_waf_regex_match_set.test", "name", matchSetName), + resource.TestCheckResourceAttr("aws_waf_regex_match_set.test", "regex_match_tuple.#", "0"), + ), + }, + }, + }) +} + +func testAccAWSWafRegexMatchSet_disappears(t *testing.T) { + var matchSet waf.RegexMatchSet + matchSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegexMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegexMatchSetConfig(matchSetName, patternSetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegexMatchSetExists("aws_waf_regex_match_set.test", &matchSet), + testAccCheckAWSWafRegexMatchSetDisappears(&matchSet), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func computeWafRegexMatchSetTuple(patternSet *waf.RegexPatternSet, fieldToMatch *waf.FieldToMatch, textTransformation string, idx *int) resource.TestCheckFunc { + return func(s *terraform.State) error { + m := map[string]interface{}{ + "field_to_match": flattenFieldToMatch(fieldToMatch), + "regex_pattern_set_id": *patternSet.RegexPatternSetId, + "text_transformation": textTransformation, + } + + *idx = resourceAwsWafRegexMatchSetTupleHash(m) + + return nil + } +} + +func testAccCheckAWSWafRegexMatchSetDisappears(set *waf.RegexMatchSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafconn + + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: set.RegexMatchSetId, + } + + for _, tuple := range set.RegexMatchTuples { + req.Updates = append(req.Updates, &waf.RegexMatchSetUpdate{ + Action: aws.String("DELETE"), + RegexMatchTuple: tuple, + }) + } + + return conn.UpdateRegexMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regex Match Set: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: set.RegexMatchSetId, + } + return conn.DeleteRegexMatchSet(opts) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regex Match Set: %s", err) + } + + return nil + } +} + +func testAccCheckAWSWafRegexMatchSetExists(n string, v *waf.RegexMatchSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WAF Regex Match Set ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafconn + resp, err := conn.GetRegexMatchSet(&waf.GetRegexMatchSetInput{ + RegexMatchSetId: 
aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.RegexMatchSet.RegexMatchSetId == rs.Primary.ID { + *v = *resp.RegexMatchSet + return nil + } + + return fmt.Errorf("WAF Regex Match Set (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSWafRegexMatchSetDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_waf_regex_match_set" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).wafconn + resp, err := conn.GetRegexMatchSet(&waf.GetRegexMatchSetInput{ + RegexMatchSetId: aws.String(rs.Primary.ID), + }) + + if err == nil { + if *resp.RegexMatchSet.RegexMatchSetId == rs.Primary.ID { + return fmt.Errorf("WAF Regex Match Set %s still exists", rs.Primary.ID) + } + } + + // Return nil if the Regex Pattern Set is already destroyed + if isAWSErr(err, "WAFNonexistentItemException", "") { + return nil + } + + return err + } + + return nil +} + +func testAccAWSWafRegexMatchSetConfig(matchSetName, patternSetName string) string { + return fmt.Sprintf(` +resource "aws_waf_regex_match_set" "test" { + name = "%s" + regex_match_tuple { + field_to_match { + data = "User-Agent" + type = "HEADER" + } + regex_pattern_set_id = "${aws_waf_regex_pattern_set.test.id}" + text_transformation = "NONE" + } +} + +resource "aws_waf_regex_pattern_set" "test" { + name = "%s" + regex_pattern_strings = ["one", "two"] +} +`, matchSetName, patternSetName) +} + +func testAccAWSWafRegexMatchSetConfig_changePatterns(matchSetName, patternSetName string) string { + return fmt.Sprintf(` +resource "aws_waf_regex_match_set" "test" { + name = "%s" + + regex_match_tuple { + field_to_match { + data = "Referer" + type = "HEADER" + } + regex_pattern_set_id = "${aws_waf_regex_pattern_set.test.id}" + text_transformation = "COMPRESS_WHITE_SPACE" + } +} + +resource "aws_waf_regex_pattern_set" "test" { + name = "%s" + regex_pattern_strings = ["one", "two"] +} +`, matchSetName, patternSetName) +} + +func testAccAWSWafRegexMatchSetConfig_noPatterns(matchSetName string) string { + return fmt.Sprintf(` +resource "aws_waf_regex_match_set" "test" { + name = "%s" +}`, matchSetName) +} diff --git a/aws/resource_aws_waf_regex_pattern_set.go b/aws/resource_aws_waf_regex_pattern_set.go new file mode 100644 index 00000000000..a09ba6ffcc0 --- /dev/null +++ b/aws/resource_aws_waf_regex_pattern_set.go @@ -0,0 +1,144 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRegexPatternSet() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegexPatternSetCreate, + Read: resourceAwsWafRegexPatternSetRead, + Update: resourceAwsWafRegexPatternSetUpdate, + Delete: resourceAwsWafRegexPatternSetDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "regex_pattern_strings": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + } +} + +func resourceAwsWafRegexPatternSetCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + log.Printf("[INFO] Creating WAF Regex Pattern Set: %s", d.Get("name").(string)) + + wr := newWafRetryer(conn) + out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateRegexPatternSetInput{ + ChangeToken: token, + Name: aws.String(d.Get("name").(string)), + } + 
return conn.CreateRegexPatternSet(params) + }) + if err != nil { + return fmt.Errorf("Failed creating WAF Regex Pattern Set: %s", err) + } + resp := out.(*waf.CreateRegexPatternSetOutput) + + d.SetId(*resp.RegexPatternSet.RegexPatternSetId) + + return resourceAwsWafRegexPatternSetUpdate(d, meta) +} + +func resourceAwsWafRegexPatternSetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + log.Printf("[INFO] Reading WAF Regex Pattern Set: %s", d.Get("name").(string)) + params := &waf.GetRegexPatternSetInput{ + RegexPatternSetId: aws.String(d.Id()), + } + + resp, err := conn.GetRegexPatternSet(params) + if err != nil { + // TODO: Replace with a constant once available + // See https://github.com/aws/aws-sdk-go/issues/1856 + if isAWSErr(err, "WAFNonexistentItemException", "") { + log.Printf("[WARN] WAF Regex Pattern Set (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return err + } + + d.Set("name", resp.RegexPatternSet.Name) + d.Set("regex_pattern_strings", aws.StringValueSlice(resp.RegexPatternSet.RegexPatternStrings)) + + return nil +} + +func resourceAwsWafRegexPatternSetUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + log.Printf("[INFO] Updating WAF Regex Pattern Set: %s", d.Get("name").(string)) + + if d.HasChange("regex_pattern_strings") { + o, n := d.GetChange("regex_pattern_strings") + oldPatterns, newPatterns := o.(*schema.Set).List(), n.(*schema.Set).List() + err := updateWafRegexPatternSetPatternStrings(d.Id(), oldPatterns, newPatterns, conn) + if err != nil { + return fmt.Errorf("Failed updating WAF Regex Pattern Set: %s", err) + } + } + + return resourceAwsWafRegexPatternSetRead(d, meta) +} + +func resourceAwsWafRegexPatternSetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + oldPatterns := d.Get("regex_pattern_strings").(*schema.Set).List() + if len(oldPatterns) > 0 { + noPatterns := []interface{}{} + err := updateWafRegexPatternSetPatternStrings(d.Id(), oldPatterns, noPatterns, conn) + if err != nil { + return fmt.Errorf("Error updating WAF Regex Pattern Set: %s", err) + } + } + + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRegexPatternSetInput{ + ChangeToken: token, + RegexPatternSetId: aws.String(d.Id()), + } + log.Printf("[INFO] Deleting WAF Regex Pattern Set: %s", req) + return conn.DeleteRegexPatternSet(req) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regex Pattern Set: %s", err) + } + + return nil +} + +func updateWafRegexPatternSetPatternStrings(id string, oldPatterns, newPatterns []interface{}, conn *waf.WAF) error { + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRegexPatternSetInput{ + ChangeToken: token, + RegexPatternSetId: aws.String(id), + Updates: diffWafRegexPatternSetPatternStrings(oldPatterns, newPatterns), + } + + return conn.UpdateRegexPatternSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regex Pattern Set: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_waf_regex_pattern_set_test.go b/aws/resource_aws_waf_regex_pattern_set_test.go new file mode 100644 index 00000000000..4a3ce953cbf --- /dev/null +++ b/aws/resource_aws_waf_regex_pattern_set_test.go @@ -0,0 +1,249 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + 
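`updateWafRegexPatternSetPatternStrings` above delegates to `diffWafRegexPatternSetPatternStrings`, which this diff does not include. Modeled on the `diffWafRulePredicates` helper removed from `resource_aws_waf_rule.go` later in this patch, a sketch of what that diff helper typically computes (DELETE what disappeared, INSERT what is new); `sliceContainsString` is a simplified stand-in introduced only for this example:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/waf"
)

// diffWafRegexPatternSetPatternStrings emits a DELETE update for every old
// pattern no longer present and an INSERT update for every remaining new one.
func diffWafRegexPatternSetPatternStrings(oldPatterns, newPatterns []interface{}) []*waf.RegexPatternSetUpdate {
	updates := make([]*waf.RegexPatternSetUpdate, 0)

	for _, op := range oldPatterns {
		if idx, contains := sliceContainsString(newPatterns, op.(string)); contains {
			newPatterns = append(newPatterns[:idx], newPatterns[idx+1:]...)
			continue
		}
		updates = append(updates, &waf.RegexPatternSetUpdate{
			Action:             aws.String(waf.ChangeActionDelete),
			RegexPatternString: aws.String(op.(string)),
		})
	}

	for _, np := range newPatterns {
		updates = append(updates, &waf.RegexPatternSetUpdate{
			Action:             aws.String(waf.ChangeActionInsert),
			RegexPatternString: aws.String(np.(string)),
		})
	}
	return updates
}

// sliceContainsString is a simplified stand-in for the provider's
// sliceContainsMap helper, specialised to plain strings for this sketch.
func sliceContainsString(items []interface{}, s string) (int, bool) {
	for i, item := range items {
		if item.(string) == s {
			return i, true
		}
	}
	return -1, false
}

func main() {
	oldP := []interface{}{"one", "two"}
	newP := []interface{}{"two", "three", "four"}
	fmt.Println(len(diffWafRegexPatternSetPatternStrings(oldP, newP))) // 1 delete + 2 inserts
}
```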
"github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +// Serialized acceptance tests due to WAF account limits +// https://docs.aws.amazon.com/waf/latest/developerguide/limits.html +func TestAccAWSWafRegexPatternSet(t *testing.T) { + testCases := map[string]func(t *testing.T){ + "basic": testAccAWSWafRegexPatternSet_basic, + "changePatterns": testAccAWSWafRegexPatternSet_changePatterns, + "noPatterns": testAccAWSWafRegexPatternSet_noPatterns, + "disappears": testAccAWSWafRegexPatternSet_disappears, + } + + for name, tc := range testCases { + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + }) + } +} + +func testAccAWSWafRegexPatternSet_basic(t *testing.T) { + var v waf.RegexPatternSet + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegexPatternSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegexPatternSetConfig(patternSetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegexPatternSetExists("aws_waf_regex_pattern_set.test", &v), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "name", patternSetName), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.#", "2"), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.2848565413", "one"), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.3351840846", "two"), + ), + }, + }, + }) +} + +func testAccAWSWafRegexPatternSet_changePatterns(t *testing.T) { + var before, after waf.RegexPatternSet + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegexPatternSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegexPatternSetConfig(patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegexPatternSetExists("aws_waf_regex_pattern_set.test", &before), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "name", patternSetName), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.#", "2"), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.2848565413", "one"), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.3351840846", "two"), + ), + }, + { + Config: testAccAWSWafRegexPatternSetConfig_changePatterns(patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegexPatternSetExists("aws_waf_regex_pattern_set.test", &after), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "name", patternSetName), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.#", "3"), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.3351840846", "two"), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.2929247714", "three"), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.1294846542", "four"), + ), + }, + }, + }) +} + +func 
testAccAWSWafRegexPatternSet_noPatterns(t *testing.T) { + var patternSet waf.RegexPatternSet + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegexPatternSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegexPatternSetConfig_noPatterns(patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegexPatternSetExists("aws_waf_regex_pattern_set.test", &patternSet), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "name", patternSetName), + resource.TestCheckResourceAttr("aws_waf_regex_pattern_set.test", "regex_pattern_strings.#", "0"), + ), + }, + }, + }) +} + +func testAccAWSWafRegexPatternSet_disappears(t *testing.T) { + var v waf.RegexPatternSet + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegexPatternSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegexPatternSetConfig(patternSetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegexPatternSetExists("aws_waf_regex_pattern_set.test", &v), + testAccCheckAWSWafRegexPatternSetDisappears(&v), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckAWSWafRegexPatternSetDisappears(set *waf.RegexPatternSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafconn + + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRegexPatternSetInput{ + ChangeToken: token, + RegexPatternSetId: set.RegexPatternSetId, + } + + for _, pattern := range set.RegexPatternStrings { + update := &waf.RegexPatternSetUpdate{ + Action: aws.String("DELETE"), + RegexPatternString: pattern, + } + req.Updates = append(req.Updates, update) + } + + return conn.UpdateRegexPatternSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regex Pattern Set: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteRegexPatternSetInput{ + ChangeToken: token, + RegexPatternSetId: set.RegexPatternSetId, + } + return conn.DeleteRegexPatternSet(opts) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regex Pattern Set: %s", err) + } + + return nil + } +} + +func testAccCheckAWSWafRegexPatternSetExists(n string, v *waf.RegexPatternSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WAF Regex Pattern Set ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafconn + resp, err := conn.GetRegexPatternSet(&waf.GetRegexPatternSetInput{ + RegexPatternSetId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.RegexPatternSet.RegexPatternSetId == rs.Primary.ID { + *v = *resp.RegexPatternSet + return nil + } + + return fmt.Errorf("WAF Regex Pattern Set (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSWafRegexPatternSetDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_waf_regex_pattern_set" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).wafconn + resp, err 
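The numeric keys in checks such as `regex_pattern_strings.2848565413` are not list positions: for a `TypeSet`, each element is stored in state under its set hash. A small sketch of the idea, assuming the default string hashing from `helper/schema`; the printed values are illustrative and may not reproduce the exact hashes used in these tests:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

func main() {
	// For a schema.TypeSet with Elem: &schema.Schema{Type: schema.TypeString},
	// each element lands in state under "<attr>.<hash>", where <hash> is the
	// set hash of the element's value rather than an ordinal index.
	for _, pattern := range []string{"one", "two", "three", "four"} {
		fmt.Printf("regex_pattern_strings.%d = %q\n", schema.HashString(pattern), pattern)
	}
}
```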
:= conn.GetRegexPatternSet(&waf.GetRegexPatternSetInput{ + RegexPatternSetId: aws.String(rs.Primary.ID), + }) + + if err == nil { + if *resp.RegexPatternSet.RegexPatternSetId == rs.Primary.ID { + return fmt.Errorf("WAF Regex Pattern Set %s still exists", rs.Primary.ID) + } + } + + // Return nil if the Regex Pattern Set is already destroyed + if isAWSErr(err, "WAFNonexistentItemException", "") { + return nil + } + + return err + } + + return nil +} + +func testAccAWSWafRegexPatternSetConfig(name string) string { + return fmt.Sprintf(` +resource "aws_waf_regex_pattern_set" "test" { + name = "%s" + regex_pattern_strings = ["one", "two"] +}`, name) +} + +func testAccAWSWafRegexPatternSetConfig_changePatterns(name string) string { + return fmt.Sprintf(` +resource "aws_waf_regex_pattern_set" "test" { + name = "%s" + regex_pattern_strings = ["two", "three", "four"] +}`, name) +} + +func testAccAWSWafRegexPatternSetConfig_noPatterns(name string) string { + return fmt.Sprintf(` +resource "aws_waf_regex_pattern_set" "test" { + name = "%s" +}`, name) +} diff --git a/aws/resource_aws_waf_rule.go b/aws/resource_aws_waf_rule.go index 7ba42b0b921..fae17269a55 100644 --- a/aws/resource_aws_waf_rule.go +++ b/aws/resource_aws_waf_rule.go @@ -8,6 +8,7 @@ import ( "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsWafRule() *schema.Resource { @@ -16,34 +17,37 @@ func resourceAwsWafRule() *schema.Resource { Read: resourceAwsWafRuleRead, Update: resourceAwsWafRuleUpdate, Delete: resourceAwsWafRuleDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "metric_name": &schema.Schema{ + "metric_name": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validateWafMetricName, }, - "predicates": &schema.Schema{ + "predicates": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "negated": &schema.Schema{ + "negated": { Type: schema.TypeBool, Required: true, }, - "data_id": &schema.Schema{ + "data_id": { Type: schema.TypeString, Required: true, - ValidateFunc: validateMaxLength(128), + ValidateFunc: validation.StringLenBetween(0, 128), }, - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Required: true, ValidateFunc: validateWafPredicatesType(), @@ -58,7 +62,7 @@ func resourceAwsWafRule() *schema.Resource { func resourceAwsWafRuleCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateRuleInput{ ChangeToken: token, @@ -140,7 +144,7 @@ func resourceAwsWafRuleDelete(d *schema.ResourceData, meta interface{}) error { } } - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteRuleInput{ ChangeToken: token, @@ -157,7 +161,7 @@ func resourceAwsWafRuleDelete(d *schema.ResourceData, meta interface{}) error { } func updateWafRuleResource(id string, oldP, newP []interface{}, conn *waf.WAF) error { - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) 
(interface{}, error) { req := &waf.UpdateRuleInput{ ChangeToken: token, @@ -173,39 +177,3 @@ func updateWafRuleResource(id string, oldP, newP []interface{}, conn *waf.WAF) e return nil } - -func diffWafRulePredicates(oldP, newP []interface{}) []*waf.RuleUpdate { - updates := make([]*waf.RuleUpdate, 0) - - for _, op := range oldP { - predicate := op.(map[string]interface{}) - - if idx, contains := sliceContainsMap(newP, predicate); contains { - newP = append(newP[:idx], newP[idx+1:]...) - continue - } - - updates = append(updates, &waf.RuleUpdate{ - Action: aws.String(waf.ChangeActionDelete), - Predicate: &waf.Predicate{ - Negated: aws.Bool(predicate["negated"].(bool)), - Type: aws.String(predicate["type"].(string)), - DataId: aws.String(predicate["data_id"].(string)), - }, - }) - } - - for _, np := range newP { - predicate := np.(map[string]interface{}) - - updates = append(updates, &waf.RuleUpdate{ - Action: aws.String(waf.ChangeActionInsert), - Predicate: &waf.Predicate{ - Negated: aws.Bool(predicate["negated"].(bool)), - Type: aws.String(predicate["type"].(string)), - DataId: aws.String(predicate["data_id"].(string)), - }, - }) - } - return updates -} diff --git a/aws/resource_aws_waf_rule_group.go b/aws/resource_aws_waf_rule_group.go new file mode 100644 index 00000000000..803d6ea07a8 --- /dev/null +++ b/aws/resource_aws_waf_rule_group.go @@ -0,0 +1,187 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRuleGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRuleGroupCreate, + Read: resourceAwsWafRuleGroupRead, + Update: resourceAwsWafRuleGroupUpdate, + Delete: resourceAwsWafRuleGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "metric_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateWafMetricName, + }, + "activated_rule": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "action": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "priority": { + Type: schema.TypeInt, + Required: true, + }, + "rule_id": { + Type: schema.TypeString, + Required: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + Default: waf.WafRuleTypeRegular, + }, + }, + }, + }, + }, + } +} + +func resourceAwsWafRuleGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + wr := newWafRetryer(conn) + out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateRuleGroupInput{ + ChangeToken: token, + MetricName: aws.String(d.Get("metric_name").(string)), + Name: aws.String(d.Get("name").(string)), + } + + return conn.CreateRuleGroup(params) + }) + if err != nil { + return err + } + resp := out.(*waf.CreateRuleGroupOutput) + d.SetId(*resp.RuleGroup.RuleGroupId) + return resourceAwsWafRuleGroupUpdate(d, meta) +} + +func resourceAwsWafRuleGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + params := &waf.GetRuleGroupInput{ + RuleGroupId: aws.String(d.Id()), + } + + resp, err := conn.GetRuleGroup(params) + if err != nil { + if isAWSErr(err, 
"WAFNonexistentItemException", "") { + log.Printf("[WARN] WAF Rule Group (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return err + } + + rResp, err := conn.ListActivatedRulesInRuleGroup(&waf.ListActivatedRulesInRuleGroupInput{ + RuleGroupId: aws.String(d.Id()), + }) + if err != nil { + return fmt.Errorf("error listing activated rules in WAF Rule Group (%s): %s", d.Id(), err) + } + + d.Set("activated_rule", flattenWafActivatedRules(rResp.ActivatedRules)) + d.Set("name", resp.RuleGroup.Name) + d.Set("metric_name", resp.RuleGroup.MetricName) + + return nil +} + +func resourceAwsWafRuleGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + if d.HasChange("activated_rule") { + o, n := d.GetChange("activated_rule") + oldRules, newRules := o.(*schema.Set).List(), n.(*schema.Set).List() + + err := updateWafRuleGroupResource(d.Id(), oldRules, newRules, conn) + if err != nil { + return fmt.Errorf("Error Updating WAF Rule Group: %s", err) + } + } + + return resourceAwsWafRuleGroupRead(d, meta) +} + +func resourceAwsWafRuleGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafconn + + oldRules := d.Get("activated_rule").(*schema.Set).List() + err := deleteWafRuleGroup(d.Id(), oldRules, conn) + + return err +} + +func deleteWafRuleGroup(id string, oldRules []interface{}, conn *waf.WAF) error { + if len(oldRules) > 0 { + noRules := []interface{}{} + err := updateWafRuleGroupResource(id, oldRules, noRules, conn) + if err != nil { + return fmt.Errorf("Error updating WAF Rule Group Predicates: %s", err) + } + } + + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRuleGroupInput{ + ChangeToken: token, + RuleGroupId: aws.String(id), + } + log.Printf("[INFO] Deleting WAF Rule Group") + return conn.DeleteRuleGroup(req) + }) + if err != nil { + return fmt.Errorf("Error deleting WAF Rule Group: %s", err) + } + return nil +} + +func updateWafRuleGroupResource(id string, oldRules, newRules []interface{}, conn *waf.WAF) error { + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRuleGroupInput{ + ChangeToken: token, + RuleGroupId: aws.String(id), + Updates: diffWafRuleGroupActivatedRules(oldRules, newRules), + } + + return conn.UpdateRuleGroup(req) + }) + if err != nil { + return fmt.Errorf("Error Updating WAF Rule Group: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_waf_rule_group_test.go b/aws/resource_aws_waf_rule_group_test.go new file mode 100644 index 00000000000..86b431954d2 --- /dev/null +++ b/aws/resource_aws_waf_rule_group_test.go @@ -0,0 +1,431 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_waf_rule_group", &resource.Sweeper{ + Name: "aws_waf_rule_group", + F: testSweepWafRuleGroups, + }) +} + +func testSweepWafRuleGroups(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).wafconn + + req := &waf.ListRuleGroupsInput{} + resp, 
err := conn.ListRuleGroups(req) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping WAF Rule Group sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error describing WAF Rule Groups: %s", err) + } + + if len(resp.RuleGroups) == 0 { + log.Print("[DEBUG] No AWS WAF Rule Groups to sweep") + return nil + } + + for _, group := range resp.RuleGroups { + if !strings.HasPrefix(*group.Name, "tfacc") { + continue + } + + rResp, err := conn.ListActivatedRulesInRuleGroup(&waf.ListActivatedRulesInRuleGroupInput{ + RuleGroupId: group.RuleGroupId, + }) + if err != nil { + return err + } + oldRules := flattenWafActivatedRules(rResp.ActivatedRules) + err = deleteWafRuleGroup(*group.RuleGroupId, oldRules, conn) + if err != nil { + return err + } + } + + return nil +} + +func TestAccAWSWafRuleGroup_basic(t *testing.T) { + var rule waf.Rule + var group waf.RuleGroup + var idx int + + ruleName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRuleGroupConfig(ruleName, groupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRuleExists("aws_waf_rule.test", &rule), + testAccCheckAWSWafRuleGroupExists("aws_waf_rule_group.test", &group), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "activated_rule.#", "1"), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "metric_name", groupName), + computeWafActivatedRuleWithRuleId(&rule, "COUNT", 50, &idx), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.action.0.type", &idx, "COUNT"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.priority", &idx, "50"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.type", &idx, waf.WafRuleTypeRegular), + ), + }, + }, + }) +} + +func TestAccAWSWafRuleGroup_changeNameForceNew(t *testing.T) { + var before, after waf.RuleGroup + + ruleName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + newGroupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRuleGroupConfig(ruleName, groupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRuleGroupExists("aws_waf_rule_group.test", &before), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "activated_rule.#", "1"), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "metric_name", groupName), + ), + }, + { + Config: testAccAWSWafRuleGroupConfig(ruleName, newGroupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRuleGroupExists("aws_waf_rule_group.test", &after), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "name", newGroupName), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "activated_rule.#", "1"), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "metric_name", newGroupName), + ), + 
}, + }, + }) +} + +func TestAccAWSWafRuleGroup_disappears(t *testing.T) { + var group waf.RuleGroup + ruleName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRuleGroupConfig(ruleName, groupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRuleGroupExists("aws_waf_rule_group.test", &group), + testAccCheckAWSWafRuleGroupDisappears(&group), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSWafRuleGroup_changeActivatedRules(t *testing.T) { + var rule0, rule1, rule2, rule3 waf.Rule + var groupBefore, groupAfter waf.RuleGroup + var idx0, idx1, idx2, idx3 int + + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + ruleName1 := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + ruleName2 := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + ruleName3 := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRuleGroupConfig(ruleName1, groupName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRuleExists("aws_waf_rule.test", &rule0), + testAccCheckAWSWafRuleGroupExists("aws_waf_rule_group.test", &groupBefore), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "activated_rule.#", "1"), + computeWafActivatedRuleWithRuleId(&rule0, "COUNT", 50, &idx0), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.action.0.type", &idx0, "COUNT"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.priority", &idx0, "50"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.type", &idx0, waf.WafRuleTypeRegular), + ), + }, + { + Config: testAccAWSWafRuleGroupConfig_changeActivatedRules(ruleName1, ruleName2, ruleName3, groupName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr("aws_waf_rule_group.test", "activated_rule.#", "3"), + testAccCheckAWSWafRuleGroupExists("aws_waf_rule_group.test", &groupAfter), + + testAccCheckAWSWafRuleExists("aws_waf_rule.test", &rule1), + computeWafActivatedRuleWithRuleId(&rule1, "BLOCK", 10, &idx1), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.action.0.type", &idx1, "BLOCK"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.priority", &idx1, "10"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.type", &idx1, waf.WafRuleTypeRegular), + + testAccCheckAWSWafRuleExists("aws_waf_rule.test2", &rule2), + computeWafActivatedRuleWithRuleId(&rule2, "COUNT", 1, &idx2), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.action.0.type", &idx2, "COUNT"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.priority", &idx2, "1"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.type", &idx2, waf.WafRuleTypeRegular), + + 
testAccCheckAWSWafRuleExists("aws_waf_rule.test3", &rule3), + computeWafActivatedRuleWithRuleId(&rule3, "BLOCK", 15, &idx3), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.action.0.type", &idx3, "BLOCK"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.priority", &idx3, "15"), + testCheckResourceAttrWithIndexesAddr("aws_waf_rule_group.test", "activated_rule.%d.type", &idx3, waf.WafRuleTypeRegular), + ), + }, + }, + }) +} + +// computeWafActivatedRuleWithRuleId calculates index +// which isn't static because ruleId is generated as part of the test +func computeWafActivatedRuleWithRuleId(rule *waf.Rule, actionType string, priority int, idx *int) resource.TestCheckFunc { + return func(s *terraform.State) error { + ruleResource := resourceAwsWafRuleGroup().Schema["activated_rule"].Elem.(*schema.Resource) + + m := map[string]interface{}{ + "action": []interface{}{ + map[string]interface{}{ + "type": actionType, + }, + }, + "priority": priority, + "rule_id": *rule.RuleId, + "type": waf.WafRuleTypeRegular, + } + + f := schema.HashResource(ruleResource) + *idx = f(m) + + return nil + } +} + +func TestAccAWSWafRuleGroup_noActivatedRules(t *testing.T) { + var group waf.RuleGroup + groupName := fmt.Sprintf("test%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRuleGroupConfig_noActivatedRules(groupName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRuleGroupExists("aws_waf_rule_group.test", &group), + resource.TestCheckResourceAttr( + "aws_waf_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr( + "aws_waf_rule_group.test", "activated_rule.#", "0"), + ), + }, + }, + }) +} + +func testAccCheckAWSWafRuleGroupDisappears(group *waf.RuleGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafconn + + rResp, err := conn.ListActivatedRulesInRuleGroup(&waf.ListActivatedRulesInRuleGroupInput{ + RuleGroupId: group.RuleGroupId, + }) + if err != nil { + return fmt.Errorf("error listing activated rules in WAF Rule Group (%s): %s", aws.StringValue(group.RuleGroupId), err) + } + + wr := newWafRetryer(conn) + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRuleGroupInput{ + ChangeToken: token, + RuleGroupId: group.RuleGroupId, + } + + for _, rule := range rResp.ActivatedRules { + rule := &waf.RuleGroupUpdate{ + Action: aws.String("DELETE"), + ActivatedRule: rule, + } + req.Updates = append(req.Updates, rule) + } + + return conn.UpdateRuleGroup(req) + }) + if err != nil { + return fmt.Errorf("Error Updating WAF Rule Group: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteRuleGroupInput{ + ChangeToken: token, + RuleGroupId: group.RuleGroupId, + } + return conn.DeleteRuleGroup(opts) + }) + if err != nil { + return fmt.Errorf("Error Deleting WAF Rule Group: %s", err) + } + return nil + } +} + +func testAccCheckAWSWafRuleGroupDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_waf_rule_group" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).wafconn + resp, err := conn.GetRuleGroup(&waf.GetRuleGroupInput{ + RuleGroupId: aws.String(rs.Primary.ID), + }) + + if err == 
nil { + if *resp.RuleGroup.RuleGroupId == rs.Primary.ID { + return fmt.Errorf("WAF Rule Group %s still exists", rs.Primary.ID) + } + } + + if isAWSErr(err, "WAFNonexistentItemException", "") { + return nil + } + + return err + } + + return nil +} + +func testAccCheckAWSWafRuleGroupExists(n string, group *waf.RuleGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WAF Rule Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafconn + resp, err := conn.GetRuleGroup(&waf.GetRuleGroupInput{ + RuleGroupId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.RuleGroup.RuleGroupId == rs.Primary.ID { + *group = *resp.RuleGroup + return nil + } + + return fmt.Errorf("WAF Rule Group (%s) not found", rs.Primary.ID) + } +} + +func testAccAWSWafRuleGroupConfig(ruleName, groupName string) string { + return fmt.Sprintf(` +resource "aws_waf_rule" "test" { + name = "%[1]s" + metric_name = "%[1]s" +} + +resource "aws_waf_rule_group" "test" { + name = "%[2]s" + metric_name = "%[2]s" + activated_rule { + action { + type = "COUNT" + } + priority = 50 + rule_id = "${aws_waf_rule.test.id}" + } +}`, ruleName, groupName) +} + +func testAccAWSWafRuleGroupConfig_changeActivatedRules(ruleName1, ruleName2, ruleName3, groupName string) string { + return fmt.Sprintf(` +resource "aws_waf_rule" "test" { + name = "%[1]s" + metric_name = "%[1]s" +} + +resource "aws_waf_rule" "test2" { + name = "%[2]s" + metric_name = "%[2]s" +} + +resource "aws_waf_rule" "test3" { + name = "%[3]s" + metric_name = "%[3]s" +} + +resource "aws_waf_rule_group" "test" { + name = "%[4]s" + metric_name = "%[4]s" + activated_rule { + action { + type = "BLOCK" + } + priority = 10 + rule_id = "${aws_waf_rule.test.id}" + } + activated_rule { + action { + type = "COUNT" + } + priority = 1 + rule_id = "${aws_waf_rule.test2.id}" + } + activated_rule { + action { + type = "BLOCK" + } + priority = 15 + rule_id = "${aws_waf_rule.test3.id}" + } +}`, ruleName1, ruleName2, ruleName3, groupName) +} + +func testAccAWSWafRuleGroupConfig_noActivatedRules(groupName string) string { + return fmt.Sprintf(` +resource "aws_waf_rule_group" "test" { + name = "%[1]s" + metric_name = "%[1]s" +}`, groupName) +} diff --git a/aws/resource_aws_waf_rule_test.go b/aws/resource_aws_waf_rule_test.go index 21c7b4eb0a0..b2d97512c51 100644 --- a/aws/resource_aws_waf_rule_test.go +++ b/aws/resource_aws_waf_rule_test.go @@ -17,12 +17,12 @@ import ( func TestAccAWSWafRule_basic(t *testing.T) { var v waf.Rule wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafRuleConfig(wafRuleName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRuleExists("aws_waf_rule.wafrule", &v), @@ -34,6 +34,11 @@ func TestAccAWSWafRule_basic(t *testing.T) { "aws_waf_rule.wafrule", "metric_name", wafRuleName), ), }, + { + ResourceName: "aws_waf_rule.wafrule", + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -43,7 +48,7 @@ func TestAccAWSWafRule_changeNameForceNew(t *testing.T) { wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) wafRuleNewName := fmt.Sprintf("wafrulenew%s", 
acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafIPSetDestroy, @@ -79,7 +84,7 @@ func TestAccAWSWafRule_changeNameForceNew(t *testing.T) { func TestAccAWSWafRule_disappears(t *testing.T) { var v waf.Rule wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRuleDestroy, @@ -104,7 +109,7 @@ func TestAccAWSWafRule_changePredicates(t *testing.T) { var idx int ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRuleDestroy, @@ -144,7 +149,7 @@ func TestAccAWSWafRule_geoMatchSetPredicate(t *testing.T) { var idx int ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRuleDestroy, @@ -232,7 +237,7 @@ func TestAccAWSWafRule_noPredicates(t *testing.T) { var rule waf.Rule ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRuleDestroy, @@ -255,7 +260,7 @@ func testAccCheckAWSWafRuleDisappears(v *waf.Rule) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateRuleInput{ ChangeToken: token, diff --git a/aws/resource_aws_waf_size_constraint_set.go b/aws/resource_aws_waf_size_constraint_set.go index 8df37ab855b..87575247332 100644 --- a/aws/resource_aws_waf_size_constraint_set.go +++ b/aws/resource_aws_waf_size_constraint_set.go @@ -1,12 +1,12 @@ package aws import ( + "fmt" "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -17,50 +17,7 @@ func resourceAwsWafSizeConstraintSet() *schema.Resource { Update: resourceAwsWafSizeConstraintSetUpdate, Delete: resourceAwsWafSizeConstraintSetDelete, - Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - "size_constraints": &schema.Schema{ - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "field_to_match": { - Type: schema.TypeSet, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "data": { - Type: schema.TypeString, - Optional: true, - }, - "type": { - Type: schema.TypeString, - Required: true, - }, - }, - }, - }, - "comparison_operator": &schema.Schema{ - Type: schema.TypeString, - Required: true, - }, - "size": &schema.Schema{ - Type: schema.TypeInt, - Required: true, - }, - "text_transformation": &schema.Schema{ - Type: schema.TypeString, - Required: true, - }, 
- }, - }, - }, - }, + Schema: wafSizeConstraintSetSchema(), } } @@ -69,7 +26,7 @@ func resourceAwsWafSizeConstraintSetCreate(d *schema.ResourceData, meta interfac log.Printf("[INFO] Creating SizeConstraintSet: %s", d.Get("name").(string)) - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateSizeConstraintSetInput{ ChangeToken: token, @@ -79,7 +36,7 @@ func resourceAwsWafSizeConstraintSetCreate(d *schema.ResourceData, meta interfac return conn.CreateSizeConstraintSet(params) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error creating SizeConstraintSet: {{err}}", err) + return fmt.Errorf("Error creating SizeConstraintSet: %s", err) } resp := out.(*waf.CreateSizeConstraintSetOutput) @@ -98,7 +55,7 @@ func resourceAwsWafSizeConstraintSetRead(d *schema.ResourceData, meta interface{ resp, err := conn.GetSizeConstraintSet(params) if err != nil { if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "WAFNonexistentItemException" { - log.Printf("[WARN] WAF IPSet (%s) not found, removing from state", d.Id()) + log.Printf("[WARN] WAF SizeConstraintSet (%s) not found, removing from state", d.Id()) d.SetId("") return nil } @@ -117,11 +74,11 @@ func resourceAwsWafSizeConstraintSetUpdate(d *schema.ResourceData, meta interfac if d.HasChange("size_constraints") { o, n := d.GetChange("size_constraints") - oldS, newS := o.(*schema.Set).List(), n.(*schema.Set).List() + oldConstraints, newConstraints := o.(*schema.Set).List(), n.(*schema.Set).List() - err := updateSizeConstraintSetResource(d.Id(), oldS, newS, conn) + err := updateSizeConstraintSetResource(d.Id(), oldConstraints, newConstraints, conn) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating SizeConstraintSet: {{err}}", err) + return fmt.Errorf("Error updating SizeConstraintSet: %s", err) } } @@ -137,11 +94,11 @@ func resourceAwsWafSizeConstraintSetDelete(d *schema.ResourceData, meta interfac noConstraints := []interface{}{} err := updateSizeConstraintSetResource(d.Id(), oldConstraints, noConstraints, conn) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting SizeConstraintSet: {{err}}", err) + return fmt.Errorf("Error deleting SizeConstraintSet: %s", err) } } - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteSizeConstraintSetInput{ ChangeToken: token, @@ -150,14 +107,14 @@ func resourceAwsWafSizeConstraintSetDelete(d *schema.ResourceData, meta interfac return conn.DeleteSizeConstraintSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting SizeConstraintSet: {{err}}", err) + return fmt.Errorf("Error deleting SizeConstraintSet: %s", err) } return nil } func updateSizeConstraintSetResource(id string, oldS, newS []interface{}, conn *waf.WAF) error { - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateSizeConstraintSetInput{ ChangeToken: token, @@ -169,61 +126,8 @@ func updateSizeConstraintSetResource(id string, oldS, newS []interface{}, conn * return conn.UpdateSizeConstraintSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating SizeConstraintSet: {{err}}", err) + return fmt.Errorf("Error updating SizeConstraintSet: %s", err) } return nil } - -func flattenWafSizeConstraints(sc []*waf.SizeConstraint) []interface{} { - out := make([]interface{}, len(sc), len(sc)) - for i, c 
:= range sc { - m := make(map[string]interface{}) - m["comparison_operator"] = *c.ComparisonOperator - if c.FieldToMatch != nil { - m["field_to_match"] = flattenFieldToMatch(c.FieldToMatch) - } - m["size"] = *c.Size - m["text_transformation"] = *c.TextTransformation - out[i] = m - } - return out -} - -func diffWafSizeConstraints(oldS, newS []interface{}) []*waf.SizeConstraintSetUpdate { - updates := make([]*waf.SizeConstraintSetUpdate, 0) - - for _, os := range oldS { - constraint := os.(map[string]interface{}) - - if idx, contains := sliceContainsMap(newS, constraint); contains { - newS = append(newS[:idx], newS[idx+1:]...) - continue - } - - updates = append(updates, &waf.SizeConstraintSetUpdate{ - Action: aws.String(waf.ChangeActionDelete), - SizeConstraint: &waf.SizeConstraint{ - FieldToMatch: expandFieldToMatch(constraint["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})), - ComparisonOperator: aws.String(constraint["comparison_operator"].(string)), - Size: aws.Int64(int64(constraint["size"].(int))), - TextTransformation: aws.String(constraint["text_transformation"].(string)), - }, - }) - } - - for _, ns := range newS { - constraint := ns.(map[string]interface{}) - - updates = append(updates, &waf.SizeConstraintSetUpdate{ - Action: aws.String(waf.ChangeActionInsert), - SizeConstraint: &waf.SizeConstraint{ - FieldToMatch: expandFieldToMatch(constraint["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})), - ComparisonOperator: aws.String(constraint["comparison_operator"].(string)), - Size: aws.Int64(int64(constraint["size"].(int))), - TextTransformation: aws.String(constraint["text_transformation"].(string)), - }, - }) - } - return updates -} diff --git a/aws/resource_aws_waf_size_constraint_set_test.go b/aws/resource_aws_waf_size_constraint_set_test.go index dcfac5d203f..c65ccd06e2a 100644 --- a/aws/resource_aws_waf_size_constraint_set_test.go +++ b/aws/resource_aws_waf_size_constraint_set_test.go @@ -10,7 +10,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" ) @@ -18,12 +17,12 @@ func TestAccAWSWafSizeConstraintSet_basic(t *testing.T) { var v waf.SizeConstraintSet sizeConstraintSet := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSizeConstraintSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafSizeConstraintSetConfig(sizeConstraintSet), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafSizeConstraintSetExists("aws_waf_size_constraint_set.size_constraint_set", &v), @@ -54,7 +53,7 @@ func TestAccAWSWafSizeConstraintSet_changeNameForceNew(t *testing.T) { sizeConstraintSet := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) sizeConstraintSetNewName := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSizeConstraintSetDestroy, @@ -87,7 +86,7 @@ func TestAccAWSWafSizeConstraintSet_disappears(t *testing.T) { var v waf.SizeConstraintSet sizeConstraintSet := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) - 
resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSizeConstraintSetDestroy, @@ -108,7 +107,7 @@ func TestAccAWSWafSizeConstraintSet_changeConstraints(t *testing.T) { var before, after waf.SizeConstraintSet setName := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSizeConstraintSetDestroy, @@ -162,10 +161,10 @@ func TestAccAWSWafSizeConstraintSet_changeConstraints(t *testing.T) { } func TestAccAWSWafSizeConstraintSet_noConstraints(t *testing.T) { - var ipset waf.SizeConstraintSet + var constraints waf.SizeConstraintSet setName := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSizeConstraintSetDestroy, @@ -173,7 +172,7 @@ func TestAccAWSWafSizeConstraintSet_noConstraints(t *testing.T) { { Config: testAccAWSWafSizeConstraintSetConfig_noConstraints(setName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckAWSWafSizeConstraintSetExists("aws_waf_size_constraint_set.size_constraint_set", &ipset), + testAccCheckAWSWafSizeConstraintSetExists("aws_waf_size_constraint_set.size_constraint_set", &constraints), resource.TestCheckResourceAttr( "aws_waf_size_constraint_set.size_constraint_set", "name", setName), resource.TestCheckResourceAttr( @@ -188,7 +187,7 @@ func testAccCheckAWSWafSizeConstraintSetDisappears(v *waf.SizeConstraintSet) res return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateSizeConstraintSetInput{ ChangeToken: token, @@ -210,7 +209,7 @@ func testAccCheckAWSWafSizeConstraintSetDisappears(v *waf.SizeConstraintSet) res return conn.UpdateSizeConstraintSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating SizeConstraintSet: {{err}}", err) + return fmt.Errorf("Error updating SizeConstraintSet: %s", err) } _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { @@ -220,10 +219,8 @@ func testAccCheckAWSWafSizeConstraintSetDisappears(v *waf.SizeConstraintSet) res } return conn.DeleteSizeConstraintSet(opts) }) - if err != nil { - return err - } - return nil + + return err } } @@ -258,7 +255,7 @@ func testAccCheckAWSWafSizeConstraintSetExists(n string, v *waf.SizeConstraintSe func testAccCheckAWSWafSizeConstraintSetDestroy(s *terraform.State) error { for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_waf_byte_match_set" { + if rs.Type != "aws_waf_size_constraint_set" { continue } diff --git a/aws/resource_aws_waf_sql_injection_match_set.go b/aws/resource_aws_waf_sql_injection_match_set.go index 6049dcd10fc..ff9bb4d6eb4 100644 --- a/aws/resource_aws_waf_sql_injection_match_set.go +++ b/aws/resource_aws_waf_sql_injection_match_set.go @@ -1,12 +1,12 @@ package aws import ( + "fmt" "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -18,12 
+18,12 @@ func resourceAwsWafSqlInjectionMatchSet() *schema.Resource { Delete: resourceAwsWafSqlInjectionMatchSetDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "sql_injection_match_tuples": &schema.Schema{ + "sql_injection_match_tuples": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ @@ -45,7 +45,7 @@ func resourceAwsWafSqlInjectionMatchSet() *schema.Resource { }, }, }, - "text_transformation": &schema.Schema{ + "text_transformation": { Type: schema.TypeString, Required: true, }, @@ -61,7 +61,7 @@ func resourceAwsWafSqlInjectionMatchSetCreate(d *schema.ResourceData, meta inter log.Printf("[INFO] Creating SqlInjectionMatchSet: %s", d.Get("name").(string)) - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateSqlInjectionMatchSetInput{ ChangeToken: token, @@ -71,7 +71,7 @@ func resourceAwsWafSqlInjectionMatchSetCreate(d *schema.ResourceData, meta inter return conn.CreateSqlInjectionMatchSet(params) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error creating SqlInjectionMatchSet: {{err}}", err) + return fmt.Errorf("Error creating SqlInjectionMatchSet: %s", err) } resp := out.(*waf.CreateSqlInjectionMatchSetOutput) d.SetId(*resp.SqlInjectionMatchSet.SqlInjectionMatchSetId) @@ -112,7 +112,7 @@ func resourceAwsWafSqlInjectionMatchSetUpdate(d *schema.ResourceData, meta inter err := updateSqlInjectionMatchSetResource(d.Id(), oldT, newT, conn) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating SqlInjectionMatchSet: {{err}}", err) + return fmt.Errorf("Error updating SqlInjectionMatchSet: %s", err) } } @@ -128,11 +128,11 @@ func resourceAwsWafSqlInjectionMatchSetDelete(d *schema.ResourceData, meta inter noTuples := []interface{}{} err := updateSqlInjectionMatchSetResource(d.Id(), oldTuples, noTuples, conn) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting SqlInjectionMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting SqlInjectionMatchSet: %s", err) } } - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteSqlInjectionMatchSetInput{ ChangeToken: token, @@ -142,14 +142,14 @@ func resourceAwsWafSqlInjectionMatchSetDelete(d *schema.ResourceData, meta inter return conn.DeleteSqlInjectionMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting SqlInjectionMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting SqlInjectionMatchSet: %s", err) } return nil } func updateSqlInjectionMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WAF) error { - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateSqlInjectionMatchSetInput{ ChangeToken: token, @@ -161,14 +161,14 @@ func updateSqlInjectionMatchSetResource(id string, oldT, newT []interface{}, con return conn.UpdateSqlInjectionMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating SqlInjectionMatchSet: {{err}}", err) + return fmt.Errorf("Error updating SqlInjectionMatchSet: %s", err) } return nil } func flattenWafSqlInjectionMatchTuples(ts []*waf.SqlInjectionMatchTuple) []interface{} { - out := make([]interface{}, len(ts), len(ts)) + out := make([]interface{}, len(ts)) for i, t := range ts { m := make(map[string]interface{}) 
m["text_transformation"] = *t.TextTransformation diff --git a/aws/resource_aws_waf_sql_injection_match_set_test.go b/aws/resource_aws_waf_sql_injection_match_set_test.go index c9f081b3d90..802e2894b52 100644 --- a/aws/resource_aws_waf_sql_injection_match_set_test.go +++ b/aws/resource_aws_waf_sql_injection_match_set_test.go @@ -10,7 +10,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" ) @@ -18,12 +17,12 @@ func TestAccAWSWafSqlInjectionMatchSet_basic(t *testing.T) { var v waf.SqlInjectionMatchSet sqlInjectionMatchSet := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSqlInjectionMatchSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafSqlInjectionMatchSetConfig(sqlInjectionMatchSet), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafSqlInjectionMatchSetExists("aws_waf_sql_injection_match_set.sql_injection_match_set", &v), @@ -50,7 +49,7 @@ func TestAccAWSWafSqlInjectionMatchSet_changeNameForceNew(t *testing.T) { sqlInjectionMatchSet := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) sqlInjectionMatchSetNewName := fmt.Sprintf("sqlInjectionMatchSetNewName-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSqlInjectionMatchSetDestroy, @@ -83,7 +82,7 @@ func TestAccAWSWafSqlInjectionMatchSet_disappears(t *testing.T) { var v waf.SqlInjectionMatchSet sqlInjectionMatchSet := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSqlInjectionMatchSetDestroy, @@ -104,7 +103,7 @@ func TestAccAWSWafSqlInjectionMatchSet_changeTuples(t *testing.T) { var before, after waf.SqlInjectionMatchSet setName := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSqlInjectionMatchSetDestroy, @@ -153,7 +152,7 @@ func TestAccAWSWafSqlInjectionMatchSet_noTuples(t *testing.T) { var ipset waf.SqlInjectionMatchSet setName := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafSqlInjectionMatchSetDestroy, @@ -176,7 +175,7 @@ func testAccCheckAWSWafSqlInjectionMatchSetDisappears(v *waf.SqlInjectionMatchSe return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateSqlInjectionMatchSetInput{ ChangeToken: token, @@ -196,7 +195,7 @@ func testAccCheckAWSWafSqlInjectionMatchSetDisappears(v *waf.SqlInjectionMatchSe return conn.UpdateSqlInjectionMatchSet(req) }) if 
err != nil { - return errwrap.Wrapf("[ERROR] Error updating SqlInjectionMatchSet: {{err}}", err) + return fmt.Errorf("Error updating SqlInjectionMatchSet: %s", err) } _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { @@ -207,7 +206,7 @@ func testAccCheckAWSWafSqlInjectionMatchSetDisappears(v *waf.SqlInjectionMatchSe return conn.DeleteSqlInjectionMatchSet(opts) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting SqlInjectionMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting SqlInjectionMatchSet: %s", err) } return nil } diff --git a/aws/resource_aws_waf_web_acl.go b/aws/resource_aws_waf_web_acl.go index 92032a70245..03016238607 100644 --- a/aws/resource_aws_waf_web_acl.go +++ b/aws/resource_aws_waf_web_acl.go @@ -5,7 +5,6 @@ import ( "log" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/validation" @@ -17,64 +16,81 @@ func resourceAwsWafWebAcl() *schema.Resource { Read: resourceAwsWafWebAclRead, Update: resourceAwsWafWebAclUpdate, Delete: resourceAwsWafWebAclDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "default_action": &schema.Schema{ + "default_action": { Type: schema.TypeSet, Required: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Required: true, }, }, }, }, - "metric_name": &schema.Schema{ + "metric_name": { Type: schema.TypeString, Required: true, ForceNew: true, ValidateFunc: validateWafMetricName, }, - "rules": &schema.Schema{ + "rules": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "action": &schema.Schema{ - Type: schema.TypeSet, - Required: true, + "action": { + Type: schema.TypeList, + Optional: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Required: true, }, }, }, }, - "priority": &schema.Schema{ + "override_action": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "priority": { Type: schema.TypeInt, Required: true, }, - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Optional: true, Default: waf.WafRuleTypeRegular, ValidateFunc: validation.StringInSlice([]string{ waf.WafRuleTypeRegular, waf.WafRuleTypeRateBased, + waf.WafRuleTypeGroup, }, false), }, - "rule_id": &schema.Schema{ + "rule_id": { Type: schema.TypeString, Required: true, }, @@ -88,11 +104,11 @@ func resourceAwsWafWebAcl() *schema.Resource { func resourceAwsWafWebAclCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateWebACLInput{ ChangeToken: token, - DefaultAction: expandDefaultAction(d), + DefaultAction: expandWafAction(d.Get("default_action").(*schema.Set).List()), MetricName: aws.String(d.Get("metric_name").(string)), Name: aws.String(d.Get("name").(string)), } @@ -115,7 +131,7 @@ func 
resourceAwsWafWebAclRead(d *schema.ResourceData, meta interface{}) error { resp, err := conn.GetWebACL(params) if err != nil { - if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "WAFNonexistentItemException" { + if isAWSErr(err, waf.ErrCodeNonexistentItemException, "") { log.Printf("[WARN] WAF ACL (%s) not found, removing from state", d.Id()) d.SetId("") return nil @@ -124,35 +140,72 @@ func resourceAwsWafWebAclRead(d *schema.ResourceData, meta interface{}) error { return err } - defaultAction := flattenDefaultAction(resp.WebACL.DefaultAction) - if defaultAction != nil { - if err := d.Set("default_action", defaultAction); err != nil { - return fmt.Errorf("error setting default_action: %s", err) - } + if resp == nil || resp.WebACL == nil { + log.Printf("[WARN] WAF ACL (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err := d.Set("default_action", flattenWafAction(resp.WebACL.DefaultAction)); err != nil { + return fmt.Errorf("error setting default_action: %s", err) } d.Set("name", resp.WebACL.Name) d.Set("metric_name", resp.WebACL.MetricName) + if err := d.Set("rules", flattenWafWebAclRules(resp.WebACL.Rules)); err != nil { + return fmt.Errorf("error setting rules: %s", err) + } return nil } func resourceAwsWafWebAclUpdate(d *schema.ResourceData, meta interface{}) error { - err := updateWebAclResource(d, meta, waf.ChangeActionInsert) - if err != nil { - return fmt.Errorf("Error Updating WAF ACL: %s", err) + conn := meta.(*AWSClient).wafconn + + if d.HasChange("default_action") || d.HasChange("rules") { + o, n := d.GetChange("rules") + oldR, newR := o.(*schema.Set).List(), n.(*schema.Set).List() + + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateWebACLInput{ + ChangeToken: token, + DefaultAction: expandWafAction(d.Get("default_action").(*schema.Set).List()), + Updates: diffWafWebAclRules(oldR, newR), + WebACLId: aws.String(d.Id()), + } + return conn.UpdateWebACL(req) + }) + if err != nil { + return fmt.Errorf("Error Updating WAF ACL: %s", err) + } } + return resourceAwsWafWebAclRead(d, meta) } func resourceAwsWafWebAclDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).wafconn - err := updateWebAclResource(d, meta, waf.ChangeActionDelete) - if err != nil { - return fmt.Errorf("Error Removing WAF ACL Rules: %s", err) + + // First, need to delete all rules + rules := d.Get("rules").(*schema.Set).List() + if len(rules) > 0 { + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateWebACLInput{ + ChangeToken: token, + DefaultAction: expandWafAction(d.Get("default_action").(*schema.Set).List()), + Updates: diffWafWebAclRules(rules, []interface{}{}), + WebACLId: aws.String(d.Id()), + } + return conn.UpdateWebACL(req) + }) + if err != nil { + return fmt.Errorf("Error Removing WAF ACL Rules: %s", err) + } } - wr := newWafRetryer(conn, "global") - _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + wr := newWafRetryer(conn) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteWebACLInput{ ChangeToken: token, WebACLId: aws.String(d.Id()), @@ -166,74 +219,3 @@ func resourceAwsWafWebAclDelete(d *schema.ResourceData, meta interface{}) error } return nil } - -func updateWebAclResource(d *schema.ResourceData, meta interface{}, ChangeAction string) error { - conn := meta.(*AWSClient).wafconn - - wr := newWafRetryer(conn, "global") - 
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) { - req := &waf.UpdateWebACLInput{ - ChangeToken: token, - WebACLId: aws.String(d.Id()), - } - - if d.HasChange("default_action") { - req.DefaultAction = expandDefaultAction(d) - } - - rules := d.Get("rules").(*schema.Set) - for _, rule := range rules.List() { - aclRule := rule.(map[string]interface{}) - action := aclRule["action"].(*schema.Set).List()[0].(map[string]interface{}) - aclRuleUpdate := &waf.WebACLUpdate{ - Action: aws.String(ChangeAction), - ActivatedRule: &waf.ActivatedRule{ - Priority: aws.Int64(int64(aclRule["priority"].(int))), - RuleId: aws.String(aclRule["rule_id"].(string)), - Type: aws.String(aclRule["type"].(string)), - Action: &waf.WafAction{Type: aws.String(action["type"].(string))}, - }, - } - req.Updates = append(req.Updates, aclRuleUpdate) - } - return conn.UpdateWebACL(req) - }) - if err != nil { - return fmt.Errorf("Error Updating WAF ACL: %s", err) - } - return nil -} - -func expandDefaultAction(d *schema.ResourceData) *waf.WafAction { - set, ok := d.GetOk("default_action") - if !ok { - return nil - } - - s := set.(*schema.Set).List() - if s == nil || len(s) == 0 { - return nil - } - - if s[0] == nil { - log.Printf("[ERR] First element of Default Action is set to nil") - return nil - } - - dA := s[0].(map[string]interface{}) - - return &waf.WafAction{ - Type: aws.String(dA["type"].(string)), - } -} - -func flattenDefaultAction(n *waf.WafAction) []map[string]interface{} { - if n == nil { - return nil - } - - m := setMap(make(map[string]interface{})) - - m.SetString("type", n.Type) - return m.MapList() -} diff --git a/aws/resource_aws_waf_web_acl_test.go b/aws/resource_aws_waf_web_acl_test.go index 6591fed0e4f..6db996e2981 100644 --- a/aws/resource_aws_waf_web_acl_test.go +++ b/aws/resource_aws_waf_web_acl_test.go @@ -4,150 +4,179 @@ import ( "fmt" "testing" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" ) func TestAccAWSWafWebAcl_basic(t *testing.T) { - var v waf.WebACL - wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + var webACL waf.WebACL + rName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + resourceName := "aws_waf_web_acl.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafWebAclDestroy, Steps: []resource.TestStep{ - resource.TestStep{ - Config: testAccAWSWafWebAclConfig(wafAclName), + { + Config: testAccAWSWafWebAclConfig_Required(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSWafWebAclExists("aws_waf_web_acl.waf_acl", &v), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.4234791575.type", "ALLOW"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "name", wafAclName), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "rules.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "metric_name", wafAclName), + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + 
resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.4234791575.type", "ALLOW"), + resource.TestCheckResourceAttr(resourceName, "metric_name", rName), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "rules.#", "0"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSWafWebAcl_changeNameForceNew(t *testing.T) { - var before, after waf.WebACL - wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) - wafAclNewName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + var webACL waf.WebACL + rName1 := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + rName2 := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + resourceName := "aws_waf_web_acl.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafWebAclDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafWebAclConfig_Required(rName1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.4234791575.type", "ALLOW"), + resource.TestCheckResourceAttr(resourceName, "metric_name", rName1), + resource.TestCheckResourceAttr(resourceName, "name", rName1), + resource.TestCheckResourceAttr(resourceName, "rules.#", "0"), + ), + }, + { + Config: testAccAWSWafWebAclConfig_Required(rName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.4234791575.type", "ALLOW"), + resource.TestCheckResourceAttr(resourceName, "metric_name", rName2), + resource.TestCheckResourceAttr(resourceName, "name", rName2), + resource.TestCheckResourceAttr(resourceName, "rules.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSWafWebAcl_DefaultAction(t *testing.T) { + var webACL waf.WebACL + rName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + resourceName := "aws_waf_web_acl.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafWebAclDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSWafWebAclConfig(wafAclName), + Config: testAccAWSWafWebAclConfig_DefaultAction(rName, "ALLOW"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSWafWebAclExists("aws_waf_web_acl.waf_acl", &before), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.4234791575.type", "ALLOW"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "name", wafAclName), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "rules.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "metric_name", wafAclName), + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.4234791575.type", "ALLOW"), ), }, { - Config: 
testAccAWSWafWebAclConfigChangeName(wafAclNewName), + Config: testAccAWSWafWebAclConfig_DefaultAction(rName, "BLOCK"), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSWafWebAclExists("aws_waf_web_acl.waf_acl", &after), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.4234791575.type", "ALLOW"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "name", wafAclNewName), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "rules.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "metric_name", wafAclNewName), + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.2267395054.type", "BLOCK"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } -func TestAccAWSWafWebAcl_changeDefaultAction(t *testing.T) { - var before, after waf.WebACL - wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) - wafAclNewName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) +func TestAccAWSWafWebAcl_Rules(t *testing.T) { + var webACL waf.WebACL + rName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + resourceName := "aws_waf_web_acl.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafWebAclDestroy, Steps: []resource.TestStep{ + // Test creating with rule + { + Config: testAccAWSWafWebAclConfig_Rules_Single_Rule(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + resource.TestCheckResourceAttr(resourceName, "rules.#", "1"), + ), + }, + // Test adding rule { - Config: testAccAWSWafWebAclConfig(wafAclName), + Config: testAccAWSWafWebAclConfig_Rules_Multiple(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSWafWebAclExists("aws_waf_web_acl.waf_acl", &before), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.4234791575.type", "ALLOW"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "name", wafAclName), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "rules.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "metric_name", wafAclName), + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + resource.TestCheckResourceAttr(resourceName, "rules.#", "2"), ), }, + // Test removing rule { - Config: testAccAWSWafWebAclConfigDefaultAction(wafAclNewName), + Config: testAccAWSWafWebAclConfig_Rules_Single_RuleGroup(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSWafWebAclExists("aws_waf_web_acl.waf_acl", &after), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "default_action.2267395054.type", "BLOCK"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "name", wafAclNewName), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "rules.#", "1"), - resource.TestCheckResourceAttr( - "aws_waf_web_acl.waf_acl", "metric_name", wafAclNewName), + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + resource.TestCheckResourceAttr(resourceName, "rules.#", "1"), 
), }, + // Test import + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } func TestAccAWSWafWebAcl_disappears(t *testing.T) { - var v waf.WebACL - wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + var webACL waf.WebACL + rName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + resourceName := "aws_waf_web_acl.test" - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafWebAclDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSWafWebAclConfig(wafAclName), + Config: testAccAWSWafWebAclConfig_Required(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckAWSWafWebAclExists("aws_waf_web_acl.waf_acl", &v), - testAccCheckAWSWafWebAclDisappears(&v), + testAccCheckAWSWafWebAclExists(resourceName, &webACL), + testAccCheckAWSWafWebAclDisappears(&webACL), ), ExpectNonEmptyPlan: true, }, @@ -159,32 +188,9 @@ func testAccCheckAWSWafWebAclDisappears(v *waf.WebACL) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") - _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { - req := &waf.UpdateWebACLInput{ - ChangeToken: token, - WebACLId: v.WebACLId, - } - - for _, ActivatedRule := range v.Rules { - WebACLUpdate := &waf.WebACLUpdate{ - Action: aws.String("DELETE"), - ActivatedRule: &waf.ActivatedRule{ - Priority: ActivatedRule.Priority, - RuleId: ActivatedRule.RuleId, - Action: ActivatedRule.Action, - }, - } - req.Updates = append(req.Updates, WebACLUpdate) - } - - return conn.UpdateWebACL(req) - }) - if err != nil { - return fmt.Errorf("Error Updating WAF ACL: %s", err) - } + wr := newWafRetryer(conn) - _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { opts := &waf.DeleteWebACLInput{ ChangeToken: token, WebACLId: v.WebACLId, @@ -217,10 +223,8 @@ func testAccCheckAWSWafWebAclDestroy(s *terraform.State) error { } // Return nil if the WebACL is already destroyed - if awsErr, ok := err.(awserr.Error); ok { - if awsErr.Code() == "WAFNonexistentItemException" { - return nil - } + if isAWSErr(err, waf.ErrCodeNonexistentItemException, "") { + continue } return err @@ -258,110 +262,155 @@ func testAccCheckAWSWafWebAclExists(n string, v *waf.WebACL) resource.TestCheckF } } -func testAccAWSWafWebAclConfig(name string) string { - return fmt.Sprintf(`resource "aws_waf_ipset" "ipset" { - name = "%s" +func testAccAWSWafWebAclConfig_Required(rName string) string { + return fmt.Sprintf(` +resource "aws_waf_web_acl" "test" { + metric_name = %q + name = %q + + default_action { + type = "ALLOW" + } +} +`, rName, rName) +} + +func testAccAWSWafWebAclConfig_DefaultAction(rName, defaultAction string) string { + return fmt.Sprintf(` +resource "aws_waf_web_acl" "test" { + metric_name = %q + name = %q + + default_action { + type = %q + } +} +`, rName, rName, defaultAction) +} + +func testAccAWSWafWebAclConfig_Rules_Single_Rule(rName string) string { + return fmt.Sprintf(` +resource "aws_waf_ipset" "test" { + name = %q + ip_set_descriptors { - type = "IPV4" + type = "IPV4" value = "192.0.7.0/24" } } -resource "aws_waf_rule" "wafrule" { - depends_on = ["aws_waf_ipset.ipset"] - name = "%s" - metric_name = "%s" +resource "aws_waf_rule" "test" { + metric_name = %q + name = %q + predicates { - data_id = 
"${aws_waf_ipset.ipset.id}" + data_id = "${aws_waf_ipset.test.id}" negated = false - type = "IPMatch" + type = "IPMatch" } } -resource "aws_waf_web_acl" "waf_acl" { - depends_on = ["aws_waf_ipset.ipset", "aws_waf_rule.wafrule"] - name = "%s" - metric_name = "%s" + +resource "aws_waf_web_acl" "test" { + metric_name = %q + name = %q + default_action { type = "ALLOW" } + rules { + priority = 1 + rule_id = "${aws_waf_rule.test.id}" + action { type = "BLOCK" } - priority = 1 - rule_id = "${aws_waf_rule.wafrule.id}" } -}`, name, name, name, name, name) } - -func testAccAWSWafWebAclConfigChangeName(name string) string { - return fmt.Sprintf(`resource "aws_waf_ipset" "ipset" { - name = "%s" - ip_set_descriptors { - type = "IPV4" - value = "192.0.7.0/24" - } +`, rName, rName, rName, rName, rName) } -resource "aws_waf_rule" "wafrule" { - depends_on = ["aws_waf_ipset.ipset"] - name = "%s" - metric_name = "%s" - predicates { - data_id = "${aws_waf_ipset.ipset.id}" - negated = false - type = "IPMatch" - } +func testAccAWSWafWebAclConfig_Rules_Single_RuleGroup(rName string) string { + return fmt.Sprintf(` +resource "aws_waf_rule_group" "test" { + metric_name = %q + name = %q } -resource "aws_waf_web_acl" "waf_acl" { - depends_on = ["aws_waf_ipset.ipset", "aws_waf_rule.wafrule"] - name = "%s" - metric_name = "%s" + +resource "aws_waf_web_acl" "test" { + metric_name = %q + name = %q + default_action { type = "ALLOW" } + rules { - action { - type = "BLOCK" + priority = 1 + rule_id = "${aws_waf_rule_group.test.id}" + type = "GROUP" + + override_action { + type = "NONE" } - priority = 1 - rule_id = "${aws_waf_rule.wafrule.id}" } -}`, name, name, name, name, name) +} +`, rName, rName, rName, rName) } -func testAccAWSWafWebAclConfigDefaultAction(name string) string { - return fmt.Sprintf(`resource "aws_waf_ipset" "ipset" { - name = "%s" +func testAccAWSWafWebAclConfig_Rules_Multiple(rName string) string { + return fmt.Sprintf(` +resource "aws_waf_ipset" "test" { + name = %q + ip_set_descriptors { - type = "IPV4" + type = "IPV4" value = "192.0.7.0/24" } } -resource "aws_waf_rule" "wafrule" { - depends_on = ["aws_waf_ipset.ipset"] - name = "%s" - metric_name = "%s" +resource "aws_waf_rule" "test" { + metric_name = %q + name = %q + predicates { - data_id = "${aws_waf_ipset.ipset.id}" + data_id = "${aws_waf_ipset.test.id}" negated = false - type = "IPMatch" + type = "IPMatch" } } -resource "aws_waf_web_acl" "waf_acl" { - depends_on = ["aws_waf_ipset.ipset", "aws_waf_rule.wafrule"] - name = "%s" - metric_name = "%s" + +resource "aws_waf_rule_group" "test" { + metric_name = %q + name = %q +} + +resource "aws_waf_web_acl" "test" { + metric_name = %q + name = %q + default_action { - type = "BLOCK" + type = "ALLOW" } + rules { + priority = 1 + rule_id = "${aws_waf_rule.test.id}" + action { type = "BLOCK" } - priority = 1 - rule_id = "${aws_waf_rule.wafrule.id}" } -}`, name, name, name, name, name) + + rules { + priority = 2 + rule_id = "${aws_waf_rule_group.test.id}" + type = "GROUP" + + override_action { + type = "NONE" + } + } +} +`, rName, rName, rName, rName, rName, rName, rName) } diff --git a/aws/resource_aws_waf_xss_match_set.go b/aws/resource_aws_waf_xss_match_set.go index bb9f60449c1..56dff11917f 100644 --- a/aws/resource_aws_waf_xss_match_set.go +++ b/aws/resource_aws_waf_xss_match_set.go @@ -7,7 +7,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" 
"github.com/hashicorp/terraform/helper/schema" ) @@ -19,12 +18,12 @@ func resourceAwsWafXssMatchSet() *schema.Resource { Delete: resourceAwsWafXssMatchSetDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "xss_match_tuples": &schema.Schema{ + "xss_match_tuples": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ @@ -46,7 +45,7 @@ func resourceAwsWafXssMatchSet() *schema.Resource { }, }, }, - "text_transformation": &schema.Schema{ + "text_transformation": { Type: schema.TypeString, Required: true, }, @@ -62,7 +61,7 @@ func resourceAwsWafXssMatchSetCreate(d *schema.ResourceData, meta interface{}) e log.Printf("[INFO] Creating XssMatchSet: %s", d.Get("name").(string)) - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateXssMatchSetInput{ ChangeToken: token, @@ -72,7 +71,7 @@ func resourceAwsWafXssMatchSetCreate(d *schema.ResourceData, meta interface{}) e return conn.CreateXssMatchSet(params) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error creating XssMatchSet: {{err}}", err) + return fmt.Errorf("Error creating XssMatchSet: %s", err) } resp := out.(*waf.CreateXssMatchSetOutput) @@ -114,7 +113,7 @@ func resourceAwsWafXssMatchSetUpdate(d *schema.ResourceData, meta interface{}) e err := updateXssMatchSetResource(d.Id(), oldT, newT, conn) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating XssMatchSet: {{err}}", err) + return fmt.Errorf("Error updating XssMatchSet: %s", err) } } @@ -133,7 +132,7 @@ func resourceAwsWafXssMatchSetDelete(d *schema.ResourceData, meta interface{}) e } } - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteXssMatchSetInput{ ChangeToken: token, @@ -143,14 +142,14 @@ func resourceAwsWafXssMatchSetDelete(d *schema.ResourceData, meta interface{}) e return conn.DeleteXssMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting XssMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting XssMatchSet: %s", err) } return nil } func updateXssMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WAF) error { - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateXssMatchSetInput{ ChangeToken: token, @@ -162,14 +161,14 @@ func updateXssMatchSetResource(id string, oldT, newT []interface{}, conn *waf.WA return conn.UpdateXssMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating XssMatchSet: {{err}}", err) + return fmt.Errorf("Error updating XssMatchSet: %s", err) } return nil } func flattenWafXssMatchTuples(ts []*waf.XssMatchTuple) []interface{} { - out := make([]interface{}, len(ts), len(ts)) + out := make([]interface{}, len(ts)) for i, t := range ts { m := make(map[string]interface{}) m["field_to_match"] = flattenFieldToMatch(t.FieldToMatch) diff --git a/aws/resource_aws_waf_xss_match_set_test.go b/aws/resource_aws_waf_xss_match_set_test.go index 175e61946c7..89755198a13 100644 --- a/aws/resource_aws_waf_xss_match_set_test.go +++ b/aws/resource_aws_waf_xss_match_set_test.go @@ -10,7 +10,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" 
"github.com/hashicorp/terraform/helper/acctest" ) @@ -18,12 +17,12 @@ func TestAccAWSWafXssMatchSet_basic(t *testing.T) { var v waf.XssMatchSet xssMatchSet := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafXssMatchSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafXssMatchSetConfig(xssMatchSet), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafXssMatchSetExists("aws_waf_xss_match_set.xss_match_set", &v), @@ -58,7 +57,7 @@ func TestAccAWSWafXssMatchSet_changeNameForceNew(t *testing.T) { xssMatchSet := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) xssMatchSetNewName := fmt.Sprintf("xssMatchSetNewName-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafXssMatchSetDestroy, @@ -91,7 +90,7 @@ func TestAccAWSWafXssMatchSet_disappears(t *testing.T) { var v waf.XssMatchSet xssMatchSet := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafXssMatchSetDestroy, @@ -112,7 +111,7 @@ func TestAccAWSWafXssMatchSet_changeTuples(t *testing.T) { var before, after waf.XssMatchSet setName := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafXssMatchSetDestroy, @@ -177,7 +176,7 @@ func TestAccAWSWafXssMatchSet_noTuples(t *testing.T) { var ipset waf.XssMatchSet setName := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafXssMatchSetDestroy, @@ -200,7 +199,7 @@ func testAccCheckAWSWafXssMatchSetDisappears(v *waf.XssMatchSet) resource.TestCh return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn - wr := newWafRetryer(conn, "global") + wr := newWafRetryer(conn) _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateXssMatchSetInput{ ChangeToken: token, @@ -220,7 +219,7 @@ func testAccCheckAWSWafXssMatchSetDisappears(v *waf.XssMatchSet) resource.TestCh return conn.UpdateXssMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating XssMatchSet: {{err}}", err) + return fmt.Errorf("Error updating XssMatchSet: %s", err) } _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { @@ -231,7 +230,7 @@ func testAccCheckAWSWafXssMatchSetDisappears(v *waf.XssMatchSet) resource.TestCh return conn.DeleteXssMatchSet(opts) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting XssMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting XssMatchSet: %s", err) } return nil } diff --git a/aws/resource_aws_wafregional_byte_match_set.go b/aws/resource_aws_wafregional_byte_match_set.go index 9321367ff75..f2578764aba 100644 --- a/aws/resource_aws_wafregional_byte_match_set.go +++ b/aws/resource_aws_wafregional_byte_match_set.go @@ -1,13 +1,12 @@ package aws 
import ( + "fmt" "log" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" "github.com/aws/aws-sdk-go/service/wafregional" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/schema" ) @@ -19,18 +18,57 @@ func resourceAwsWafRegionalByteMatchSet() *schema.Resource { Delete: resourceAwsWafRegionalByteMatchSetDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "byte_match_tuple": &schema.Schema{ + "byte_match_tuple": { + Type: schema.TypeSet, + Optional: true, + ConflictsWith: []string{"byte_match_tuples"}, + Deprecated: "use `byte_match_tuples` instead", + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "field_to_match": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data": { + Type: schema.TypeString, + Optional: true, + }, + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "positional_constraint": { + Type: schema.TypeString, + Required: true, + }, + "target_string": { + Type: schema.TypeString, + Optional: true, + }, + "text_transformation": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "byte_match_tuples": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "field_to_match": { - Type: schema.TypeSet, + Type: schema.TypeList, Required: true, MaxItems: 1, Elem: &schema.Resource{ @@ -46,15 +84,15 @@ func resourceAwsWafRegionalByteMatchSet() *schema.Resource { }, }, }, - "positional_constraint": &schema.Schema{ + "positional_constraint": { Type: schema.TypeString, Required: true, }, - "target_string": &schema.Schema{ + "target_string": { Type: schema.TypeString, Optional: true, }, - "text_transformation": &schema.Schema{ + "text_transformation": { Type: schema.TypeString, Required: true, }, @@ -81,7 +119,7 @@ func resourceAwsWafRegionalByteMatchSetCreate(d *schema.ResourceData, meta inter }) if err != nil { - return errwrap.Wrapf("[ERROR] Error creating ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error creating ByteMatchSet: %s", err) } resp := out.(*waf.CreateByteMatchSetOutput) @@ -100,47 +138,53 @@ func resourceAwsWafRegionalByteMatchSetRead(d *schema.ResourceData, meta interfa } resp, err := conn.GetByteMatchSet(params) + + if isAWSErr(err, waf.ErrCodeNonexistentItemException, "") { + log.Printf("[WARN] WAF Regional Byte Match Set (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + if err != nil { - if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "WAFNonexistentItemException" { - log.Printf("[WARN] WAF IPSet (%s) not found, removing from state", d.Id()) - d.SetId("") - return nil - } + return fmt.Errorf("error getting WAF Regional Byte Match Set (%s): %s", d.Id(), err) + } - return err + if resp == nil || resp.ByteMatchSet == nil { + log.Printf("[WARN] WAF Regional Byte Match Set (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil } - d.Set("byte_match_tuple", flattenWafByteMatchTuplesWR(resp.ByteMatchSet.ByteMatchTuples)) + if _, ok := d.GetOk("byte_match_tuple"); ok { + if err := d.Set("byte_match_tuple", flattenWafByteMatchTuplesWR(resp.ByteMatchSet.ByteMatchTuples)); err != nil { + return fmt.Errorf("error setting byte_match_tuple: %s", err) + } + } else { + if err := d.Set("byte_match_tuples", 
flattenWafByteMatchTuplesWR(resp.ByteMatchSet.ByteMatchTuples)); err != nil { + return fmt.Errorf("error setting byte_match_tuples: %s", err) + } + } d.Set("name", resp.ByteMatchSet.Name) return nil } func flattenWafByteMatchTuplesWR(in []*waf.ByteMatchTuple) []interface{} { - tuples := make([]interface{}, len(in), len(in)) + tuples := make([]interface{}, len(in)) for i, tuple := range in { - field_to_match := tuple.FieldToMatch - m := map[string]interface{}{ - "type": *field_to_match.Type, - } - - if field_to_match.Data == nil { - m["data"] = "" - } else { - m["data"] = *field_to_match.Data + fieldToMatchMap := map[string]interface{}{ + "data": aws.StringValue(tuple.FieldToMatch.Data), + "type": aws.StringValue(tuple.FieldToMatch.Type), } - var ms []map[string]interface{} - ms = append(ms, m) - - tuple := map[string]interface{}{ - "field_to_match": ms, - "positional_constraint": *tuple.PositionalConstraint, - "target_string": tuple.TargetString, - "text_transformation": *tuple.TextTransformation, + m := map[string]interface{}{ + "field_to_match": []map[string]interface{}{fieldToMatchMap}, + "positional_constraint": aws.StringValue(tuple.PositionalConstraint), + "target_string": string(tuple.TargetString), + "text_transformation": aws.StringValue(tuple.TextTransformation), } - tuples[i] = tuple + tuples[i] = m } return tuples @@ -157,7 +201,15 @@ func resourceAwsWafRegionalByteMatchSetUpdate(d *schema.ResourceData, meta inter err := updateByteMatchSetResourceWR(d, oldT, newT, conn, region) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error updating ByteMatchSet: %s", err) + } + } else if d.HasChange("byte_match_tuples") { + o, n := d.GetChange("byte_match_tuples") + oldT, newT := o.(*schema.Set).List(), n.(*schema.Set).List() + + err := updateByteMatchSetResourceWR(d, oldT, newT, conn, region) + if err != nil { + return fmt.Errorf("Error updating ByteMatchSet: %s", err) } } return resourceAwsWafRegionalByteMatchSetRead(d, meta) @@ -169,14 +221,19 @@ func resourceAwsWafRegionalByteMatchSetDelete(d *schema.ResourceData, meta inter log.Printf("[INFO] Deleting ByteMatchSet: %s", d.Get("name").(string)) - oldT := d.Get("byte_match_tuple").(*schema.Set).List() + var oldT []interface{} + if _, ok := d.GetOk("byte_match_tuple"); ok { + oldT = d.Get("byte_match_tuple").(*schema.Set).List() + } else { + oldT = d.Get("byte_match_tuples").(*schema.Set).List() + } if len(oldT) > 0 { var newT []interface{} err := updateByteMatchSetResourceWR(d, oldT, newT, conn, region) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting ByteMatchSet: %s", err) } } @@ -189,7 +246,7 @@ func resourceAwsWafRegionalByteMatchSetDelete(d *schema.ResourceData, meta inter return conn.DeleteByteMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting ByteMatchSet: %s", err) } return nil @@ -207,7 +264,7 @@ func updateByteMatchSetResourceWR(d *schema.ResourceData, oldT, newT []interface return conn.UpdateByteMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error updating ByteMatchSet: %s", err) } return nil @@ -227,7 +284,7 @@ func diffByteMatchSetTuple(oldT, newT []interface{}) []*waf.ByteMatchSetUpdate { updates = append(updates, &waf.ByteMatchSetUpdate{ Action: aws.String(waf.ChangeActionDelete), ByteMatchTuple: 
&waf.ByteMatchTuple{ - FieldToMatch: expandFieldToMatch(tuple["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})), + FieldToMatch: expandFieldToMatch(tuple["field_to_match"].([]interface{})[0].(map[string]interface{})), PositionalConstraint: aws.String(tuple["positional_constraint"].(string)), TargetString: []byte(tuple["target_string"].(string)), TextTransformation: aws.String(tuple["text_transformation"].(string)), @@ -241,7 +298,7 @@ func diffByteMatchSetTuple(oldT, newT []interface{}) []*waf.ByteMatchSetUpdate { updates = append(updates, &waf.ByteMatchSetUpdate{ Action: aws.String(waf.ChangeActionInsert), ByteMatchTuple: &waf.ByteMatchTuple{ - FieldToMatch: expandFieldToMatch(tuple["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})), + FieldToMatch: expandFieldToMatch(tuple["field_to_match"].([]interface{})[0].(map[string]interface{})), PositionalConstraint: aws.String(tuple["positional_constraint"].(string)), TargetString: []byte(tuple["target_string"].(string)), TextTransformation: aws.String(tuple["text_transformation"].(string)), diff --git a/aws/resource_aws_wafregional_byte_match_set_test.go b/aws/resource_aws_wafregional_byte_match_set_test.go index 2600349aee5..bbac5c8bb38 100644 --- a/aws/resource_aws_wafregional_byte_match_set_test.go +++ b/aws/resource_aws_wafregional_byte_match_set_test.go @@ -8,9 +8,7 @@ import ( "github.com/hashicorp/terraform/terraform" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" ) @@ -18,39 +16,43 @@ func TestAccAWSWafRegionalByteMatchSet_basic(t *testing.T) { var v waf.ByteMatchSet byteMatchSet := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalByteMatchSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafRegionalByteMatchSetConfig(byteMatchSet), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRegionalByteMatchSetExists("aws_wafregional_byte_match_set.byte_set", &v), resource.TestCheckResourceAttr( "aws_wafregional_byte_match_set.byte_set", "name", byteMatchSet), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.#", "2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.#", "2"), + resource.TestCheckResourceAttr( + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.#", "1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.field_to_match.2991901334.data", "referer"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.0.data", "referer"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.field_to_match.2991901334.type", "HEADER"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.0.type", "HEADER"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.positional_constraint", "CONTAINS"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.positional_constraint", "CONTAINS"), resource.TestCheckResourceAttr( - 
"aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.target_string", "badrefer1"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.target_string", "badrefer1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.text_transformation", "NONE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.text_transformation", "NONE"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.field_to_match.2991901334.data", "referer"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.#", "1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.field_to_match.2991901334.type", "HEADER"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.0.data", "referer"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.positional_constraint", "CONTAINS"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.0.type", "HEADER"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.target_string", "badrefer2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.positional_constraint", "CONTAINS"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.text_transformation", "NONE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.target_string", "badrefer2"), + resource.TestCheckResourceAttr( + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.text_transformation", "NONE"), ), }, }, @@ -62,7 +64,7 @@ func TestAccAWSWafRegionalByteMatchSet_changeNameForceNew(t *testing.T) { byteMatchSet := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) byteMatchSetNewName := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalByteMatchSetDestroy, @@ -74,27 +76,31 @@ func TestAccAWSWafRegionalByteMatchSet_changeNameForceNew(t *testing.T) { resource.TestCheckResourceAttr( "aws_wafregional_byte_match_set.byte_set", "name", byteMatchSet), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.#", "2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.#", "2"), + resource.TestCheckResourceAttr( + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.#", "1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.field_to_match.2991901334.data", "referer"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.0.data", "referer"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.field_to_match.2991901334.type", "HEADER"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.0.type", "HEADER"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.positional_constraint", "CONTAINS"), + "aws_wafregional_byte_match_set.byte_set", 
"byte_match_tuples.3483354334.positional_constraint", "CONTAINS"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.target_string", "badrefer1"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.target_string", "badrefer1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.text_transformation", "NONE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.text_transformation", "NONE"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.field_to_match.2991901334.data", "referer"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.#", "1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.field_to_match.2991901334.type", "HEADER"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.0.data", "referer"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.positional_constraint", "CONTAINS"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.0.type", "HEADER"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.target_string", "badrefer2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.positional_constraint", "CONTAINS"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.text_transformation", "NONE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.target_string", "badrefer2"), + resource.TestCheckResourceAttr( + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.text_transformation", "NONE"), ), }, { @@ -104,38 +110,42 @@ func TestAccAWSWafRegionalByteMatchSet_changeNameForceNew(t *testing.T) { resource.TestCheckResourceAttr( "aws_wafregional_byte_match_set.byte_set", "name", byteMatchSetNewName), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.#", "2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.#", "2"), + resource.TestCheckResourceAttr( + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.0.data", "referer"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.field_to_match.2991901334.data", "referer"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.0.type", "HEADER"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.field_to_match.2991901334.type", "HEADER"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.positional_constraint", "CONTAINS"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.positional_constraint", "CONTAINS"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.target_string", "badrefer1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.target_string", "badrefer1"), + 
"aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.text_transformation", "NONE"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.text_transformation", "NONE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.#", "1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.field_to_match.2991901334.data", "referer"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.0.data", "referer"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.field_to_match.2991901334.type", "HEADER"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.0.type", "HEADER"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.positional_constraint", "CONTAINS"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.positional_constraint", "CONTAINS"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.target_string", "badrefer2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.target_string", "badrefer2"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.text_transformation", "NONE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.text_transformation", "NONE"), ), }, }, }) } -func TestAccAWSWafRegionalByteMatchSet_changeByteMatchTuple(t *testing.T) { +func TestAccAWSWafRegionalByteMatchSet_changeByteMatchTuples(t *testing.T) { var before, after waf.ByteMatchSet byteMatchSetName := fmt.Sprintf("byte-batch-set-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalByteMatchSetDestroy, @@ -147,57 +157,61 @@ func TestAccAWSWafRegionalByteMatchSet_changeByteMatchTuple(t *testing.T) { resource.TestCheckResourceAttr( "aws_wafregional_byte_match_set.byte_set", "name", byteMatchSetName), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.#", "2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.#", "2"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.field_to_match.2991901334.data", "referer"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.0.data", "referer"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.field_to_match.2991901334.type", "HEADER"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.field_to_match.0.type", "HEADER"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.positional_constraint", "CONTAINS"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.positional_constraint", "CONTAINS"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.target_string", "badrefer1"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.target_string", "badrefer1"), resource.TestCheckResourceAttr( - 
"aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2174619346.text_transformation", "NONE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.3483354334.text_transformation", "NONE"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.field_to_match.2991901334.data", "referer"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.0.data", "referer"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.field_to_match.2991901334.type", "HEADER"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.field_to_match.0.type", "HEADER"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.positional_constraint", "CONTAINS"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.positional_constraint", "CONTAINS"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.target_string", "badrefer2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.target_string", "badrefer2"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.839525137.text_transformation", "NONE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2081155357.text_transformation", "NONE"), ), }, { - Config: testAccAWSWafRegionalByteMatchSetConfigChangeByteMatchTuple(byteMatchSetName), + Config: testAccAWSWafRegionalByteMatchSetConfigChangeByteMatchTuples(byteMatchSetName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRegionalByteMatchSetExists("aws_wafregional_byte_match_set.byte_set", &after), resource.TestCheckResourceAttr( "aws_wafregional_byte_match_set.byte_set", "name", byteMatchSetName), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.#", "2"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.#", "2"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2397850647.field_to_match.4253810390.data", "GET"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2069994922.field_to_match.#", "1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2397850647.field_to_match.4253810390.type", "METHOD"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2069994922.field_to_match.0.data", ""), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2397850647.positional_constraint", "STARTS_WITH"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2069994922.field_to_match.0.type", "METHOD"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2397850647.target_string", "badrefer1+"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2069994922.positional_constraint", "EXACTLY"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.2397850647.text_transformation", "LOWERCASE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2069994922.target_string", "GET"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.4153613423.field_to_match.3756326843.data", ""), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2069994922.text_transformation", 
"NONE"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.4153613423.field_to_match.3756326843.type", "URI"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2506804529.field_to_match.#", "1"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.4153613423.positional_constraint", "ENDS_WITH"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2506804529.field_to_match.0.data", ""), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.4153613423.target_string", "badrefer2+"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2506804529.field_to_match.0.type", "URI"), resource.TestCheckResourceAttr( - "aws_wafregional_byte_match_set.byte_set", "byte_match_tuple.4153613423.text_transformation", "LOWERCASE"), + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2506804529.positional_constraint", "ENDS_WITH"), + resource.TestCheckResourceAttr( + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2506804529.target_string", "badrefer2+"), + resource.TestCheckResourceAttr( + "aws_wafregional_byte_match_set.byte_set", "byte_match_tuples.2506804529.text_transformation", "LOWERCASE"), ), }, }, @@ -208,7 +222,7 @@ func TestAccAWSWafRegionalByteMatchSet_noByteMatchTuples(t *testing.T) { var byteMatchSet waf.ByteMatchSet byteMatchSetName := fmt.Sprintf("byte-batch-set-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalByteMatchSetDestroy, @@ -218,7 +232,7 @@ func TestAccAWSWafRegionalByteMatchSet_noByteMatchTuples(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSWafRegionalByteMatchSetExists("aws_wafregional_byte_match_set.byte_match_set", &byteMatchSet), resource.TestCheckResourceAttr("aws_wafregional_byte_match_set.byte_match_set", "name", byteMatchSetName), - resource.TestCheckResourceAttr("aws_wafregional_byte_match_set.byte_match_set", "byte_match_tuple.#", "0"), + resource.TestCheckResourceAttr("aws_wafregional_byte_match_set.byte_match_set", "byte_match_tuples.#", "0"), ), }, }, @@ -229,7 +243,7 @@ func TestAccAWSWafRegionalByteMatchSet_disappears(t *testing.T) { var v waf.ByteMatchSet byteMatchSet := fmt.Sprintf("byteMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalByteMatchSetDestroy, @@ -274,7 +288,7 @@ func testAccCheckAWSWafRegionalByteMatchSetDisappears(v *waf.ByteMatchSet) resou return conn.UpdateByteMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error updating ByteMatchSet: %s", err) } _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { @@ -285,7 +299,7 @@ func testAccCheckAWSWafRegionalByteMatchSetDisappears(v *waf.ByteMatchSet) resou return conn.DeleteByteMatchSet(opts) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting ByteMatchSet: {{err}}", err) + return fmt.Errorf("Error deleting ByteMatchSet: %s", err) } return nil @@ -339,11 +353,8 @@ func testAccCheckAWSWafRegionalByteMatchSetDestroy(s *terraform.State) error { } } - // Return nil if the ByteMatchSet is already destroyed - if awsErr, ok := 
err.(awserr.Error); ok { - if awsErr.Code() == "WAFNonexistentItemException" { - return nil - } + if isAWSErr(err, waf.ErrCodeNonexistentItemException, "") { + continue } return err @@ -356,7 +367,7 @@ func testAccAWSWafRegionalByteMatchSetConfig(name string) string { return fmt.Sprintf(` resource "aws_wafregional_byte_match_set" "byte_set" { name = "%s" - byte_match_tuple { + byte_match_tuples { text_transformation = "NONE" target_string = "badrefer1" positional_constraint = "CONTAINS" @@ -366,7 +377,7 @@ resource "aws_wafregional_byte_match_set" "byte_set" { } } - byte_match_tuple { + byte_match_tuples { text_transformation = "NONE" target_string = "badrefer2" positional_constraint = "CONTAINS" @@ -382,7 +393,7 @@ func testAccAWSWafRegionalByteMatchSetConfigChangeName(name string) string { return fmt.Sprintf(` resource "aws_wafregional_byte_match_set" "byte_set" { name = "%s" - byte_match_tuple { + byte_match_tuples { text_transformation = "NONE" target_string = "badrefer1" positional_constraint = "CONTAINS" @@ -392,7 +403,7 @@ resource "aws_wafregional_byte_match_set" "byte_set" { } } - byte_match_tuple { + byte_match_tuples { text_transformation = "NONE" target_string = "badrefer2" positional_constraint = "CONTAINS" @@ -411,21 +422,20 @@ resource "aws_wafregional_byte_match_set" "byte_match_set" { }`, name) } -func testAccAWSWafRegionalByteMatchSetConfigChangeByteMatchTuple(name string) string { +func testAccAWSWafRegionalByteMatchSetConfigChangeByteMatchTuples(name string) string { return fmt.Sprintf(` resource "aws_wafregional_byte_match_set" "byte_set" { name = "%s" - byte_match_tuple { - text_transformation = "LOWERCASE" - target_string = "badrefer1+" - positional_constraint = "STARTS_WITH" + byte_match_tuples { + text_transformation = "NONE" + target_string = "GET" + positional_constraint = "EXACTLY" field_to_match { type = "METHOD" - data = "GET" } } - byte_match_tuple { + byte_match_tuples { text_transformation = "LOWERCASE" target_string = "badrefer2+" positional_constraint = "ENDS_WITH" diff --git a/aws/resource_aws_wafregional_geo_match_set.go b/aws/resource_aws_wafregional_geo_match_set.go new file mode 100644 index 00000000000..107640171ac --- /dev/null +++ b/aws/resource_aws_wafregional_geo_match_set.go @@ -0,0 +1,160 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRegionalGeoMatchSet() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegionalGeoMatchSetCreate, + Read: resourceAwsWafRegionalGeoMatchSetRead, + Update: resourceAwsWafRegionalGeoMatchSetUpdate, + Delete: resourceAwsWafRegionalGeoMatchSetDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "geo_match_constraint": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + }, + "value": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + } +} + +func resourceAwsWafRegionalGeoMatchSetCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + log.Printf("[INFO] Creating WAF Regional Geo Match Set: %s", d.Get("name").(string)) + + wr := newWafRegionalRetryer(conn, region) + out, err := 
wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateGeoMatchSetInput{ + ChangeToken: token, + Name: aws.String(d.Get("name").(string)), + } + + return conn.CreateGeoMatchSet(params) + }) + if err != nil { + return fmt.Errorf("Failed creating WAF Regional Geo Match Set: %s", err) + } + resp := out.(*waf.CreateGeoMatchSetOutput) + + d.SetId(*resp.GeoMatchSet.GeoMatchSetId) + + return resourceAwsWafRegionalGeoMatchSetUpdate(d, meta) +} + +func resourceAwsWafRegionalGeoMatchSetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + log.Printf("[INFO] Reading WAF Regional Geo Match Set: %s", d.Get("name").(string)) + params := &waf.GetGeoMatchSetInput{ + GeoMatchSetId: aws.String(d.Id()), + } + + resp, err := conn.GetGeoMatchSet(params) + if err != nil { + // TODO: Replace with constant once it's available + // See https://github.com/aws/aws-sdk-go/issues/1856 + if isAWSErr(err, "WAFNonexistentItemException", "") { + log.Printf("[WARN] WAF Regional Geo Match Set (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return err + } + + d.Set("name", resp.GeoMatchSet.Name) + d.Set("geo_match_constraint", flattenWafGeoMatchConstraint(resp.GeoMatchSet.GeoMatchConstraints)) + + return nil +} + +func resourceAwsWafRegionalGeoMatchSetUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + if d.HasChange("geo_match_constraint") { + o, n := d.GetChange("geo_match_constraint") + oldConstraints, newConstraints := o.(*schema.Set).List(), n.(*schema.Set).List() + + err := updateGeoMatchSetResourceWR(d.Id(), oldConstraints, newConstraints, conn, region) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Geo Match Set: %s", err) + } + } + + return resourceAwsWafRegionalGeoMatchSetRead(d, meta) +} + +func resourceAwsWafRegionalGeoMatchSetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + oldConstraints := d.Get("geo_match_constraint").(*schema.Set).List() + if len(oldConstraints) > 0 { + noConstraints := []interface{}{} + err := updateGeoMatchSetResourceWR(d.Id(), oldConstraints, noConstraints, conn, region) + if err != nil { + return fmt.Errorf("Error updating WAF Regional Geo Match Constraint: %s", err) + } + } + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteGeoMatchSetInput{ + ChangeToken: token, + GeoMatchSetId: aws.String(d.Id()), + } + + return conn.DeleteGeoMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regional Geo Match Set: %s", err) + } + + return nil +} + +func updateGeoMatchSetResourceWR(id string, oldConstraints, newConstraints []interface{}, conn *wafregional.WAFRegional, region string) error { + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateGeoMatchSetInput{ + ChangeToken: token, + GeoMatchSetId: aws.String(id), + Updates: diffWafGeoMatchSetConstraints(oldConstraints, newConstraints), + } + + log.Printf("[INFO] Updating WAF Regional Geo Match Set constraints: %s", req) + return conn.UpdateGeoMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Geo Match Set: %s", err) + } + + return nil +} diff --git 
a/aws/resource_aws_wafregional_geo_match_set_test.go b/aws/resource_aws_wafregional_geo_match_set_test.go new file mode 100644 index 00000000000..1ce52e573c0 --- /dev/null +++ b/aws/resource_aws_wafregional_geo_match_set_test.go @@ -0,0 +1,335 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSWafRegionalGeoMatchSet_basic(t *testing.T) { + var v waf.GeoMatchSet + geoMatchSet := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalGeoMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalGeoMatchSetConfig(geoMatchSet), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalGeoMatchSetExists("aws_wafregional_geo_match_set.test", &v), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "name", geoMatchSet), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.#", "2"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.384465307.type", "Country"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.384465307.value", "US"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.1991628426.type", "Country"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.1991628426.value", "CA"), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalGeoMatchSet_changeNameForceNew(t *testing.T) { + var before, after waf.GeoMatchSet + geoMatchSet := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + geoMatchSetNewName := fmt.Sprintf("geoMatchSetNewName-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalGeoMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalGeoMatchSetConfig(geoMatchSet), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalGeoMatchSetExists("aws_wafregional_geo_match_set.test", &before), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "name", geoMatchSet), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.#", "2"), + ), + }, + { + Config: testAccAWSWafRegionalGeoMatchSetConfigChangeName(geoMatchSetNewName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalGeoMatchSetExists("aws_wafregional_geo_match_set.test", &after), + testAccCheckAWSWafGeoMatchSetIdDiffers(&before, &after), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "name", geoMatchSetNewName), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.#", "2"), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalGeoMatchSet_disappears(t *testing.T) { + var v waf.GeoMatchSet + geoMatchSet := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSWafRegionalGeoMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalGeoMatchSetConfig(geoMatchSet), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalGeoMatchSetExists("aws_wafregional_geo_match_set.test", &v), + testAccCheckAWSWafRegionalGeoMatchSetDisappears(&v), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSWafRegionalGeoMatchSet_changeConstraints(t *testing.T) { + var before, after waf.GeoMatchSet + setName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalGeoMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalGeoMatchSetConfig(setName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalGeoMatchSetExists("aws_wafregional_geo_match_set.test", &before), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "name", setName), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.#", "2"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.384465307.type", "Country"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.384465307.value", "US"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.1991628426.type", "Country"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.1991628426.value", "CA"), + ), + }, + { + Config: testAccAWSWafRegionalGeoMatchSetConfig_changeConstraints(setName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalGeoMatchSetExists("aws_wafregional_geo_match_set.test", &after), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "name", setName), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.#", "2"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.1174390936.type", "Country"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.1174390936.value", "RU"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.4046309957.type", "Country"), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.4046309957.value", "CN"), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalGeoMatchSet_noConstraints(t *testing.T) { + var ipset waf.GeoMatchSet + setName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalGeoMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalGeoMatchSetConfig_noConstraints(setName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalGeoMatchSetExists("aws_wafregional_geo_match_set.test", &ipset), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "name", setName), + resource.TestCheckResourceAttr( + "aws_wafregional_geo_match_set.test", "geo_match_constraint.#", "0"), + ), + }, + }, + }) +} + +func testAccCheckAWSWafGeoMatchSetIdDiffers(before, after *waf.GeoMatchSet) resource.TestCheckFunc { + return func(s 
*terraform.State) error { + if *before.GeoMatchSetId == *after.GeoMatchSetId { + return fmt.Errorf("Expected different IDs, given %q for both sets", *before.GeoMatchSetId) + } + return nil + } +} + +func testAccCheckAWSWafRegionalGeoMatchSetDisappears(v *waf.GeoMatchSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + region := testAccProvider.Meta().(*AWSClient).region + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateGeoMatchSetInput{ + ChangeToken: token, + GeoMatchSetId: v.GeoMatchSetId, + } + + for _, geoMatchConstraint := range v.GeoMatchConstraints { + geoMatchConstraintUpdate := &waf.GeoMatchSetUpdate{ + Action: aws.String("DELETE"), + GeoMatchConstraint: &waf.GeoMatchConstraint{ + Type: geoMatchConstraint.Type, + Value: geoMatchConstraint.Value, + }, + } + req.Updates = append(req.Updates, geoMatchConstraintUpdate) + } + return conn.UpdateGeoMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Geo Match Set: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteGeoMatchSetInput{ + ChangeToken: token, + GeoMatchSetId: v.GeoMatchSetId, + } + return conn.DeleteGeoMatchSet(opts) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regional Geo Match Set: %s", err) + } + return nil + } +} + +func testAccCheckAWSWafRegionalGeoMatchSetExists(n string, v *waf.GeoMatchSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WAF Regional Geo Match Set ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetGeoMatchSet(&waf.GetGeoMatchSetInput{ + GeoMatchSetId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.GeoMatchSet.GeoMatchSetId == rs.Primary.ID { + *v = *resp.GeoMatchSet + return nil + } + + return fmt.Errorf("WAF Regional Geo Match Set (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSWafRegionalGeoMatchSetDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_wafregional_geo_match_set" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetGeoMatchSet( + &waf.GetGeoMatchSetInput{ + GeoMatchSetId: aws.String(rs.Primary.ID), + }) + + if err == nil { + if *resp.GeoMatchSet.GeoMatchSetId == rs.Primary.ID { + return fmt.Errorf("WAF Regional Geo Match Set %s still exists", rs.Primary.ID) + } + } + + // Return nil if the WAF Regional Geo Match Set is already destroyed + if isAWSErr(err, "WAFNonexistentItemException", "") { + return nil + } + + return err + } + + return nil +} + +func testAccAWSWafRegionalGeoMatchSetConfig(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_geo_match_set" "test" { + name = "%s" + geo_match_constraint { + type = "Country" + value = "US" + } + + geo_match_constraint { + type = "Country" + value = "CA" + } +}`, name) +} + +func testAccAWSWafRegionalGeoMatchSetConfigChangeName(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_geo_match_set" "test" { + name = "%s" + geo_match_constraint { + type = "Country" + value = "US" + } + + geo_match_constraint { + type = "Country" + value = "CA" + } +}`, name) +} + 
+func testAccAWSWafRegionalGeoMatchSetConfig_changeConstraints(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_geo_match_set" "test" { + name = "%s" + geo_match_constraint { + type = "Country" + value = "RU" + } + + geo_match_constraint { + type = "Country" + value = "CN" + } +}`, name) +} + +func testAccAWSWafRegionalGeoMatchSetConfig_noConstraints(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_geo_match_set" "test" { + name = "%s" +}`, name) +} diff --git a/aws/resource_aws_wafregional_ipset.go b/aws/resource_aws_wafregional_ipset.go index 8227d64159b..b5cd3828f95 100644 --- a/aws/resource_aws_wafregional_ipset.go +++ b/aws/resource_aws_wafregional_ipset.go @@ -5,6 +5,7 @@ import ( "log" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" "github.com/aws/aws-sdk-go/service/wafregional" @@ -19,21 +20,25 @@ func resourceAwsWafRegionalIPSet() *schema.Resource { Delete: resourceAwsWafRegionalIPSetDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "ip_set_descriptor": &schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "ip_set_descriptor": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Required: true, }, - "value": &schema.Schema{ + "value": { Type: schema.TypeString, Required: true, }, @@ -85,11 +90,20 @@ func resourceAwsWafRegionalIPSetRead(d *schema.ResourceData, meta interface{}) e d.Set("ip_set_descriptor", flattenWafIpSetDescriptorWR(resp.IPSet.IPSetDescriptors)) d.Set("name", resp.IPSet.Name) + arn := arn.ARN{ + Partition: meta.(*AWSClient).partition, + Service: "waf-regional", + Region: meta.(*AWSClient).region, + AccountID: meta.(*AWSClient).accountid, + Resource: fmt.Sprintf("ipset/%s", d.Id()), + } + d.Set("arn", arn.String()) + return nil } func flattenWafIpSetDescriptorWR(in []*waf.IPSetDescriptor) []interface{} { - descriptors := make([]interface{}, len(in), len(in)) + descriptors := make([]interface{}, len(in)) for i, descriptor := range in { d := map[string]interface{}{ @@ -129,7 +143,7 @@ func resourceAwsWafRegionalIPSetDelete(d *schema.ResourceData, meta interface{}) err := updateIPSetResourceWR(d.Id(), oldD, noD, conn, region) if err != nil { - return fmt.Errorf("Error Removing IPSetDescriptors: %s", err) + return fmt.Errorf("Error Deleting IPSetDescriptors: %s", err) } } @@ -150,20 +164,21 @@ func resourceAwsWafRegionalIPSetDelete(d *schema.ResourceData, meta interface{}) } func updateIPSetResourceWR(id string, oldD, newD []interface{}, conn *wafregional.WAFRegional, region string) error { - - wr := newWafRegionalRetryer(conn, region) - _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { - req := &waf.UpdateIPSetInput{ - ChangeToken: token, - IPSetId: aws.String(id), - Updates: diffWafIpSetDescriptors(oldD, newD), + for _, ipSetUpdates := range diffWafIpSetDescriptors(oldD, newD) { + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateIPSetInput{ + ChangeToken: token, + IPSetId: aws.String(id), + Updates: ipSetUpdates, + } + log.Printf("[INFO] Updating IPSet descriptor: %s", req) + + return conn.UpdateIPSet(req) + }) + if err != nil { + return fmt.Errorf("Error 
Updating WAF IPSet: %s", err) } - log.Printf("[INFO] Updating IPSet descriptor: %s", req) - - return conn.UpdateIPSet(req) - }) - if err != nil { - return fmt.Errorf("Error Updating WAF IPSet: %s", err) } return nil diff --git a/aws/resource_aws_wafregional_ipset_test.go b/aws/resource_aws_wafregional_ipset_test.go index 3aa251d43ec..32e12a94cb7 100644 --- a/aws/resource_aws_wafregional_ipset_test.go +++ b/aws/resource_aws_wafregional_ipset_test.go @@ -2,7 +2,10 @@ package aws import ( "fmt" + "net" "reflect" + "regexp" + "strings" "testing" "github.com/hashicorp/terraform/helper/resource" @@ -19,12 +22,12 @@ func TestAccAWSWafRegionalIPSet_basic(t *testing.T) { var v waf.IPSet ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalIPSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafRegionalIPSetConfig(ipsetName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRegionalIPSetExists("aws_wafregional_ipset.ipset", &v), @@ -34,6 +37,8 @@ func TestAccAWSWafRegionalIPSet_basic(t *testing.T) { "aws_wafregional_ipset.ipset", "ip_set_descriptor.4037960608.type", "IPV4"), resource.TestCheckResourceAttr( "aws_wafregional_ipset.ipset", "ip_set_descriptor.4037960608.value", "192.0.7.0/24"), + resource.TestMatchResourceAttr("aws_wafregional_ipset.ipset", "arn", + regexp.MustCompile(`^arn:[\w-]+:waf-regional:[^:]+:\d{12}:ipset/.+$`)), ), }, }, @@ -43,7 +48,7 @@ func TestAccAWSWafRegionalIPSet_basic(t *testing.T) { func TestAccAWSWafRegionalIPSet_disappears(t *testing.T) { var v waf.IPSet ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalIPSetDestroy, @@ -65,7 +70,7 @@ func TestAccAWSWafRegionalIPSet_changeNameForceNew(t *testing.T) { ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5)) ipsetNewName := fmt.Sprintf("ip-set-new-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalIPSetDestroy, @@ -102,7 +107,7 @@ func TestAccAWSWafRegionalIPSet_changeDescriptors(t *testing.T) { var before, after waf.IPSet ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalIPSetDestroy, @@ -139,11 +144,51 @@ func TestAccAWSWafRegionalIPSet_changeDescriptors(t *testing.T) { }) } +func TestAccAWSWafRegionalIPSet_IpSetDescriptors_1000UpdateLimit(t *testing.T) { + var ipset waf.IPSet + ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5)) + resourceName := "aws_wafregional_ipset.ipset" + + incrementIP := func(ip net.IP) { + for j := len(ip) - 1; j >= 0; j-- { + ip[j]++ + if ip[j] > 0 { + break + } + } + } + + // Generate 2048 IPs + ip, ipnet, err := net.ParseCIDR("10.0.0.0/21") + if err != nil { + t.Fatal(err) + } + ipSetDescriptors := make([]string, 0, 2048) + for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); incrementIP(ip) { + ipSetDescriptors = append(ipSetDescriptors, 
fmt.Sprintf("ip_set_descriptor {\ntype=\"IPV4\"\nvalue=\"%s/32\"\n}", ip)) + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalIPSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalIPSetConfig_IpSetDescriptors(ipsetName, strings.Join(ipSetDescriptors, "\n")), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalIPSetExists(resourceName, &ipset), + resource.TestCheckResourceAttr(resourceName, "ip_set_descriptor.#", "2048"), + ), + }, + }, + }) +} + func TestAccAWSWafRegionalIPSet_noDescriptors(t *testing.T) { var ipset waf.IPSet ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalIPSetDestroy, @@ -166,7 +211,7 @@ func TestDiffWafRegionalIpSetDescriptors(t *testing.T) { testCases := []struct { Old []interface{} New []interface{} - ExpectedUpdates []*waf.IPSetUpdate + ExpectedUpdates [][]*waf.IPSetUpdate }{ { // Change @@ -176,19 +221,21 @@ func TestDiffWafRegionalIpSetDescriptors(t *testing.T) { New: []interface{}{ map[string]interface{}{"type": "IPV4", "value": "192.0.8.0/24"}, }, - ExpectedUpdates: []*waf.IPSetUpdate{ - &waf.IPSetUpdate{ - Action: aws.String(wafregional.ChangeActionDelete), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("192.0.7.0/24"), + ExpectedUpdates: [][]*waf.IPSetUpdate{ + { + { + Action: aws.String(wafregional.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.7.0/24"), + }, }, - }, - &waf.IPSetUpdate{ - Action: aws.String(wafregional.ChangeActionInsert), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("192.0.8.0/24"), + { + Action: aws.String(wafregional.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.8.0/24"), + }, }, }, }, @@ -201,26 +248,28 @@ func TestDiffWafRegionalIpSetDescriptors(t *testing.T) { map[string]interface{}{"type": "IPV4", "value": "10.0.2.0/24"}, map[string]interface{}{"type": "IPV4", "value": "10.0.3.0/24"}, }, - ExpectedUpdates: []*waf.IPSetUpdate{ - &waf.IPSetUpdate{ - Action: aws.String(wafregional.ChangeActionInsert), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("10.0.1.0/24"), + ExpectedUpdates: [][]*waf.IPSetUpdate{ + { + { + Action: aws.String(wafregional.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.1.0/24"), + }, }, - }, - &waf.IPSetUpdate{ - Action: aws.String(wafregional.ChangeActionInsert), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("10.0.2.0/24"), + { + Action: aws.String(wafregional.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.2.0/24"), + }, }, - }, - &waf.IPSetUpdate{ - Action: aws.String(wafregional.ChangeActionInsert), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("10.0.3.0/24"), + { + Action: aws.String(wafregional.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.3.0/24"), + }, }, }, }, @@ -232,19 +281,21 @@ func 
TestDiffWafRegionalIpSetDescriptors(t *testing.T) { map[string]interface{}{"type": "IPV4", "value": "192.0.8.0/24"}, }, New: []interface{}{}, - ExpectedUpdates: []*waf.IPSetUpdate{ - &waf.IPSetUpdate{ - Action: aws.String(wafregional.ChangeActionDelete), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("192.0.7.0/24"), + ExpectedUpdates: [][]*waf.IPSetUpdate{ + { + { + Action: aws.String(wafregional.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.7.0/24"), + }, }, - }, - &waf.IPSetUpdate{ - Action: aws.String(wafregional.ChangeActionDelete), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String("IPV4"), - Value: aws.String("192.0.8.0/24"), + { + Action: aws.String(wafregional.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.8.0/24"), + }, }, }, }, @@ -395,6 +446,13 @@ func testAccAWSWafRegionalIPSetConfigChangeIPSetDescriptors(name string) string }`, name) } +func testAccAWSWafRegionalIPSetConfig_IpSetDescriptors(name, ipSetDescriptors string) string { + return fmt.Sprintf(`resource "aws_wafregional_ipset" "ipset" { + name = "%s" +%s +}`, name, ipSetDescriptors) +} + func testAccAWSWafRegionalIPSetConfig_noDescriptors(name string) string { return fmt.Sprintf(`resource "aws_wafregional_ipset" "ipset" { name = "%s" diff --git a/aws/resource_aws_wafregional_rate_based_rule.go b/aws/resource_aws_wafregional_rate_based_rule.go new file mode 100644 index 00000000000..5cfc40f1148 --- /dev/null +++ b/aws/resource_aws_wafregional_rate_based_rule.go @@ -0,0 +1,195 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" +) + +func resourceAwsWafRegionalRateBasedRule() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegionalRateBasedRuleCreate, + Read: resourceAwsWafRegionalRateBasedRuleRead, + Update: resourceAwsWafRegionalRateBasedRuleUpdate, + Delete: resourceAwsWafRegionalRateBasedRuleDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "metric_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateWafMetricName, + }, + "predicate": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "negated": { + Type: schema.TypeBool, + Required: true, + }, + "data_id": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(0, 128), + }, + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateWafPredicatesType(), + }, + }, + }, + }, + "rate_key": { + Type: schema.TypeString, + Required: true, + }, + "rate_limit": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(2000), + }, + }, + } +} + +func resourceAwsWafRegionalRateBasedRuleCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + wr := newWafRegionalRetryer(conn, region) + out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateRateBasedRuleInput{ + ChangeToken: token, + MetricName: 
aws.String(d.Get("metric_name").(string)), + Name: aws.String(d.Get("name").(string)), + RateKey: aws.String(d.Get("rate_key").(string)), + RateLimit: aws.Int64(int64(d.Get("rate_limit").(int))), + } + + return conn.CreateRateBasedRule(params) + }) + if err != nil { + return err + } + resp := out.(*waf.CreateRateBasedRuleOutput) + d.SetId(*resp.Rule.RuleId) + return resourceAwsWafRegionalRateBasedRuleUpdate(d, meta) +} + +func resourceAwsWafRegionalRateBasedRuleRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + + params := &waf.GetRateBasedRuleInput{ + RuleId: aws.String(d.Id()), + } + + resp, err := conn.GetRateBasedRule(params) + if err != nil { + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + log.Printf("[WARN] WAF Regional Rate Based Rule (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return err + } + + var predicates []map[string]interface{} + + for _, predicateSet := range resp.Rule.MatchPredicates { + predicates = append(predicates, map[string]interface{}{ + "negated": *predicateSet.Negated, + "type": *predicateSet.Type, + "data_id": *predicateSet.DataId, + }) + } + + d.Set("predicate", predicates) + d.Set("name", resp.Rule.Name) + d.Set("metric_name", resp.Rule.MetricName) + d.Set("rate_key", resp.Rule.RateKey) + d.Set("rate_limit", resp.Rule.RateLimit) + + return nil +} + +func resourceAwsWafRegionalRateBasedRuleUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + if d.HasChange("predicate") || d.HasChange("rate_limit") { + o, n := d.GetChange("predicate") + oldP, newP := o.(*schema.Set).List(), n.(*schema.Set).List() + rateLimit := d.Get("rate_limit") + + err := updateWafRateBasedRuleResourceWR(d.Id(), oldP, newP, rateLimit, conn, region) + if err != nil { + return fmt.Errorf("Error Updating WAF Rule: %s", err) + } + } + + return resourceAwsWafRegionalRateBasedRuleRead(d, meta) +} + +func resourceAwsWafRegionalRateBasedRuleDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + oldPredicates := d.Get("predicate").(*schema.Set).List() + if len(oldPredicates) > 0 { + noPredicates := []interface{}{} + rateLimit := d.Get("rate_limit") + + err := updateWafRateBasedRuleResourceWR(d.Id(), oldPredicates, noPredicates, rateLimit, conn, region) + if err != nil { + return fmt.Errorf("Error updating WAF Regional Rate Based Rule Predicates: %s", err) + } + } + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRateBasedRuleInput{ + ChangeToken: token, + RuleId: aws.String(d.Id()), + } + log.Printf("[INFO] Deleting WAF Regional Rate Based Rule") + return conn.DeleteRateBasedRule(req) + }) + if err != nil { + return fmt.Errorf("Error deleting WAF Regional Rate Based Rule: %s", err) + } + + return nil +} + +func updateWafRateBasedRuleResourceWR(id string, oldP, newP []interface{}, rateLimit interface{}, conn *wafregional.WAFRegional, region string) error { + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRateBasedRuleInput{ + ChangeToken: token, + RuleId: aws.String(id), + Updates: diffWafRulePredicates(oldP, newP), + RateLimit: aws.Int64(int64(rateLimit.(int))), + } + + return conn.UpdateRateBasedRule(req) + }) + if err != nil { + return 
fmt.Errorf("Error Updating WAF Regional Rate Based Rule: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_wafregional_rate_based_rule_test.go b/aws/resource_aws_wafregional_rate_based_rule_test.go new file mode 100644 index 00000000000..e014f78bb6f --- /dev/null +++ b/aws/resource_aws_wafregional_rate_based_rule_test.go @@ -0,0 +1,445 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSWafRegionalRateBasedRule_basic(t *testing.T) { + var v waf.RateBasedRule + wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRateBasedRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRateBasedRuleConfig(wafRuleName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &v), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "name", wafRuleName), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "predicate.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "metric_name", wafRuleName), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalRateBasedRule_changeNameForceNew(t *testing.T) { + var before, after waf.RateBasedRule + wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) + wafRuleNewName := fmt.Sprintf("wafrulenew%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRateBasedRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRateBasedRuleConfig(wafRuleName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &before), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "name", wafRuleName), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "predicate.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "metric_name", wafRuleName), + ), + }, + { + Config: testAccAWSWafRegionalRateBasedRuleConfigChangeName(wafRuleNewName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &after), + testAccCheckAWSWafRateBasedRuleIdDiffers(&before, &after), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "name", wafRuleNewName), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "predicate.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "metric_name", wafRuleNewName), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalRateBasedRule_disappears(t *testing.T) { + var v waf.RateBasedRule + wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, 
+ Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRateBasedRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRateBasedRuleConfig(wafRuleName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &v), + testAccCheckAWSWafRegionalRateBasedRuleDisappears(&v), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSWafRegionalRateBasedRule_changePredicates(t *testing.T) { + var ipset waf.IPSet + var byteMatchSet waf.ByteMatchSet + + var before, after waf.RateBasedRule + var idx int + ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRateBasedRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRateBasedRuleConfig(ruleName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalIPSetExists("aws_wafregional_ipset.ipset", &ipset), + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &before), + resource.TestCheckResourceAttr("aws_wafregional_rate_based_rule.wafrule", "name", ruleName), + resource.TestCheckResourceAttr("aws_wafregional_rate_based_rule.wafrule", "predicate.#", "1"), + computeWafRegionalRateBasedRulePredicateWithIpSet(&ipset, false, "IPMatch", &idx), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rate_based_rule.wafrule", "predicate.%d.negated", &idx, "false"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rate_based_rule.wafrule", "predicate.%d.type", &idx, "IPMatch"), + ), + }, + { + Config: testAccAWSWafRegionalRateBasedRuleConfig_changePredicates(ruleName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalByteMatchSetExists("aws_wafregional_byte_match_set.set", &byteMatchSet), + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &after), + resource.TestCheckResourceAttr("aws_wafregional_rate_based_rule.wafrule", "name", ruleName), + resource.TestCheckResourceAttr("aws_wafregional_rate_based_rule.wafrule", "predicate.#", "1"), + computeWafRegionalRateBasedRulePredicateWithByteMatchSet(&byteMatchSet, true, "ByteMatch", &idx), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rate_based_rule.wafrule", "predicate.%d.negated", &idx, "true"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rate_based_rule.wafrule", "predicate.%d.type", &idx, "ByteMatch"), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalRateBasedRule_changeRateLimit(t *testing.T) { + var before, after waf.RateBasedRule + ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) + rateLimitBefore := "2000" + rateLimitAfter := "2001" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRateBasedRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRateBasedRuleWithRateLimitConfig(ruleName, rateLimitBefore), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &before), + resource.TestCheckResourceAttr("aws_wafregional_rate_based_rule.wafrule", "name", ruleName), + resource.TestCheckResourceAttr("aws_wafregional_rate_based_rule.wafrule", "rate_limit", rateLimitBefore), + ), + }, + { + Config: 
testAccAWSWafRegionalRateBasedRuleWithRateLimitConfig(ruleName, rateLimitAfter), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &after), + resource.TestCheckResourceAttr("aws_wafregional_rate_based_rule.wafrule", "name", ruleName), + resource.TestCheckResourceAttr("aws_wafregional_rate_based_rule.wafrule", "rate_limit", rateLimitAfter), + ), + }, + }, + }) +} + +// computeWafRegionalRateBasedRulePredicateWithIpSet calculates index +// which isn't static because dataId is generated as part of the test +func computeWafRegionalRateBasedRulePredicateWithIpSet(ipSet *waf.IPSet, negated bool, pType string, idx *int) resource.TestCheckFunc { + return func(s *terraform.State) error { + predicateResource := resourceAwsWafRegionalRateBasedRule().Schema["predicate"].Elem.(*schema.Resource) + + m := map[string]interface{}{ + "data_id": *ipSet.IPSetId, + "negated": negated, + "type": pType, + } + + f := schema.HashResource(predicateResource) + *idx = f(m) + + return nil + } +} + +// computeWafRegionalRateBasedRulePredicateWithByteMatchSet calculates index +// which isn't static because dataId is generated as part of the test +func computeWafRegionalRateBasedRulePredicateWithByteMatchSet(set *waf.ByteMatchSet, negated bool, pType string, idx *int) resource.TestCheckFunc { + return func(s *terraform.State) error { + predicateResource := resourceAwsWafRegionalRateBasedRule().Schema["predicate"].Elem.(*schema.Resource) + + m := map[string]interface{}{ + "data_id": *set.ByteMatchSetId, + "negated": negated, + "type": pType, + } + + f := schema.HashResource(predicateResource) + *idx = f(m) + + return nil + } +} + +func TestAccAWSWafRegionalRateBasedRule_noPredicates(t *testing.T) { + var rule waf.RateBasedRule + ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRateBasedRuleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRateBasedRuleConfig_noPredicates(ruleName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRateBasedRuleExists("aws_wafregional_rate_based_rule.wafrule", &rule), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "name", ruleName), + resource.TestCheckResourceAttr( + "aws_wafregional_rate_based_rule.wafrule", "predicate.#", "0"), + ), + }, + }, + }) +} + +func testAccCheckAWSWafRateBasedRuleIdDiffers(before, after *waf.RateBasedRule) resource.TestCheckFunc { + return func(s *terraform.State) error { + if *before.RuleId == *after.RuleId { + return fmt.Errorf("Expected different IDs, given %q for both rules", *before.RuleId) + } + return nil + } +} + +func testAccCheckAWSWafRegionalRateBasedRuleDisappears(v *waf.RateBasedRule) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + region := testAccProvider.Meta().(*AWSClient).region + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRateBasedRuleInput{ + ChangeToken: token, + RuleId: v.RuleId, + RateLimit: v.RateLimit, + } + + for _, Predicate := range v.MatchPredicates { + Predicate := &waf.RuleUpdate{ + Action: aws.String("DELETE"), + Predicate: &waf.Predicate{ + Negated: Predicate.Negated, + Type: Predicate.Type, + DataId: 
Predicate.DataId, + }, + } + req.Updates = append(req.Updates, Predicate) + } + + return conn.UpdateRateBasedRule(req) + }) + if err != nil { + return fmt.Errorf("Error Updating WAF Rule: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteRateBasedRuleInput{ + ChangeToken: token, + RuleId: v.RuleId, + } + return conn.DeleteRateBasedRule(opts) + }) + if err != nil { + return fmt.Errorf("Error Deleting WAF Rule: %s", err) + } + return nil + } +} + +func testAccCheckAWSWafRegionalRateBasedRuleDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_wafregional_rate_based_rule" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetRateBasedRule( + &waf.GetRateBasedRuleInput{ + RuleId: aws.String(rs.Primary.ID), + }) + + if err == nil { + if *resp.Rule.RuleId == rs.Primary.ID { + return fmt.Errorf("WAF Rule %s still exists", rs.Primary.ID) + } + } + + // Return nil if the Rule is already destroyed + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + return nil + } + + return err + } + + return nil +} + +func testAccCheckAWSWafRegionalRateBasedRuleExists(n string, v *waf.RateBasedRule) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WAF Rule ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetRateBasedRule(&waf.GetRateBasedRuleInput{ + RuleId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.Rule.RuleId == rs.Primary.ID { + *v = *resp.Rule + return nil + } + + return fmt.Errorf("WAF Regional Rule (%s) not found", rs.Primary.ID) + } +} + +func testAccAWSWafRegionalRateBasedRuleConfig(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_ipset" "ipset" { + name = "%s" + ip_set_descriptor { + type = "IPV4" + value = "192.0.7.0/24" + } +} + +resource "aws_wafregional_rate_based_rule" "wafrule" { + name = "%s" + metric_name = "%s" + rate_key = "IP" + rate_limit = 2000 + predicate { + data_id = "${aws_wafregional_ipset.ipset.id}" + negated = false + type = "IPMatch" + } +}`, name, name, name) +} + +func testAccAWSWafRegionalRateBasedRuleConfigChangeName(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_ipset" "ipset" { + name = "%s" + ip_set_descriptor { + type = "IPV4" + value = "192.0.7.0/24" + } +} + +resource "aws_wafregional_rate_based_rule" "wafrule" { + name = "%s" + metric_name = "%s" + rate_key = "IP" + rate_limit = 2000 + predicate { + data_id = "${aws_wafregional_ipset.ipset.id}" + negated = false + type = "IPMatch" + } +}`, name, name, name) +} + +func testAccAWSWafRegionalRateBasedRuleConfig_changePredicates(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_ipset" "ipset" { + name = "%s" + ip_set_descriptor { + type = "IPV4" + value = "192.0.7.0/24" + } +} + +resource "aws_wafregional_byte_match_set" "set" { + name = "%s" + byte_match_tuple { + text_transformation = "NONE" + target_string = "badrefer1" + positional_constraint = "CONTAINS" + + field_to_match { + type = "HEADER" + data = "referer" + } + } +} + +resource "aws_wafregional_rate_based_rule" "wafrule" { + name = "%s" + metric_name = "%s" + rate_key = "IP" + rate_limit = 2000 + predicate { + data_id = "${aws_wafregional_byte_match_set.set.id}" + 
negated = true + type = "ByteMatch" + } +}`, name, name, name, name) +} + +func testAccAWSWafRegionalRateBasedRuleConfig_noPredicates(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_rate_based_rule" "wafrule" { + name = "%s" + metric_name = "%s" + rate_key = "IP" + rate_limit = 2000 +}`, name, name) +} + +func testAccAWSWafRegionalRateBasedRuleWithRateLimitConfig(name string, limit string) string { + return fmt.Sprintf(` +resource "aws_wafregional_rate_based_rule" "wafrule" { + name = "%s" + metric_name = "%s" + rate_key = "IP" + rate_limit = %s +}`, name, name, limit) +} diff --git a/aws/resource_aws_wafregional_regex_match_set.go b/aws/resource_aws_wafregional_regex_match_set.go new file mode 100644 index 00000000000..b1c9d10d08d --- /dev/null +++ b/aws/resource_aws_wafregional_regex_match_set.go @@ -0,0 +1,179 @@ +package aws + +import ( + "fmt" + "log" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRegionalRegexMatchSet() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegionalRegexMatchSetCreate, + Read: resourceAwsWafRegionalRegexMatchSetRead, + Update: resourceAwsWafRegionalRegexMatchSetUpdate, + Delete: resourceAwsWafRegionalRegexMatchSetDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "regex_match_tuple": { + Type: schema.TypeSet, + Optional: true, + Set: resourceAwsWafRegexMatchSetTupleHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "field_to_match": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data": { + Type: schema.TypeString, + Optional: true, + StateFunc: func(v interface{}) string { + return strings.ToLower(v.(string)) + }, + }, + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "regex_pattern_set_id": { + Type: schema.TypeString, + Required: true, + }, + "text_transformation": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + } +} + +func resourceAwsWafRegionalRegexMatchSetCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + log.Printf("[INFO] Creating WAF Regional Regex Match Set: %s", d.Get("name").(string)) + + wr := newWafRegionalRetryer(conn, region) + out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateRegexMatchSetInput{ + ChangeToken: token, + Name: aws.String(d.Get("name").(string)), + } + return conn.CreateRegexMatchSet(params) + }) + if err != nil { + return fmt.Errorf("Failed creating WAF Regional Regex Match Set: %s", err) + } + resp := out.(*waf.CreateRegexMatchSetOutput) + + d.SetId(*resp.RegexMatchSet.RegexMatchSetId) + + return resourceAwsWafRegionalRegexMatchSetUpdate(d, meta) +} + +func resourceAwsWafRegionalRegexMatchSetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + log.Printf("[INFO] Reading WAF Regional Regex Match Set: %s", d.Get("name").(string)) + params := &waf.GetRegexMatchSetInput{ + RegexMatchSetId: aws.String(d.Id()), + } + + resp, err := conn.GetRegexMatchSet(params) + if err != nil { + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + log.Printf("[WARN] WAF 
Regional Regex Match Set (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return err + } + + d.Set("name", resp.RegexMatchSet.Name) + d.Set("regex_match_tuple", flattenWafRegexMatchTuples(resp.RegexMatchSet.RegexMatchTuples)) + + return nil +} + +func resourceAwsWafRegionalRegexMatchSetUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + log.Printf("[INFO] Updating WAF Regional Regex Match Set: %s", d.Get("name").(string)) + + if d.HasChange("regex_match_tuple") { + o, n := d.GetChange("regex_match_tuple") + oldT, newT := o.(*schema.Set).List(), n.(*schema.Set).List() + err := updateRegexMatchSetResourceWR(d.Id(), oldT, newT, conn, region) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Regex Match Set: %s", err) + } + } + + return resourceAwsWafRegionalRegexMatchSetRead(d, meta) +} + +func resourceAwsWafRegionalRegexMatchSetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + oldTuples := d.Get("regex_match_tuple").(*schema.Set).List() + if len(oldTuples) > 0 { + noTuples := []interface{}{} + err := updateRegexMatchSetResourceWR(d.Id(), oldTuples, noTuples, conn, region) + if err != nil { + return fmt.Errorf("Error updating WAF Regional Regex Match Set: %s", err) + } + } + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: aws.String(d.Id()), + } + log.Printf("[INFO] Deleting WAF Regional Regex Match Set: %s", req) + return conn.DeleteRegexMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regional Regex Match Set: %s", err) + } + + return nil +} + +func updateRegexMatchSetResourceWR(id string, oldT, newT []interface{}, conn *wafregional.WAFRegional, region string) error { + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: aws.String(id), + Updates: diffWafRegexMatchSetTuples(oldT, newT), + } + + return conn.UpdateRegexMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Regex Match Set: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_wafregional_regex_match_set_test.go b/aws/resource_aws_wafregional_regex_match_set_test.go new file mode 100644 index 00000000000..d1fd6b5db62 --- /dev/null +++ b/aws/resource_aws_wafregional_regex_match_set_test.go @@ -0,0 +1,370 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_wafregional_regex_match_set", &resource.Sweeper{ + Name: "aws_wafregional_regex_match_set", + F: testSweepWafRegionalRegexMatchSet, + }) +} + +func testSweepWafRegionalRegexMatchSet(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).wafregionalconn + + req := 
&waf.ListRegexMatchSetsInput{} + resp, err := conn.ListRegexMatchSets(req) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping WAF Regional Regex Match Set sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error describing WAF Regional Regex Match Sets: %s", err) + } + + if len(resp.RegexMatchSets) == 0 { + log.Print("[DEBUG] No AWS WAF Regional Regex Match Sets to sweep") + return nil + } + + for _, s := range resp.RegexMatchSets { + if !strings.HasPrefix(*s.Name, "tfacc") { + continue + } + + resp, err := conn.GetRegexMatchSet(&waf.GetRegexMatchSetInput{ + RegexMatchSetId: s.RegexMatchSetId, + }) + if err != nil { + return err + } + set := resp.RegexMatchSet + + oldTuples := flattenWafRegexMatchTuples(set.RegexMatchTuples) + noTuples := []interface{}{} + err = updateRegexMatchSetResourceWR(*set.RegexMatchSetId, oldTuples, noTuples, conn, region) + if err != nil { + return fmt.Errorf("Error updating WAF Regional Regex Match Set: %s", err) + } + + wr := newWafRegionalRetryer(conn, region) + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: set.RegexMatchSetId, + } + log.Printf("[INFO] Deleting WAF Regional Regex Match Set: %s", req) + return conn.DeleteRegexMatchSet(req) + }) + if err != nil { + return fmt.Errorf("error deleting WAF Regional Regex Match Set (%s): %s", aws.StringValue(set.RegexMatchSetId), err) + } + } + + return nil +} + +// Serialized acceptance tests due to WAF account limits +// https://docs.aws.amazon.com/waf/latest/developerguide/limits.html +func TestAccAWSWafRegionalRegexMatchSet(t *testing.T) { + testCases := map[string]func(t *testing.T){ + "basic": testAccAWSWafRegionalRegexMatchSet_basic, + "changePatterns": testAccAWSWafRegionalRegexMatchSet_changePatterns, + "noPatterns": testAccAWSWafRegionalRegexMatchSet_noPatterns, + "disappears": testAccAWSWafRegionalRegexMatchSet_disappears, + } + + for name, tc := range testCases { + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + }) + } +} + +func testAccAWSWafRegionalRegexMatchSet_basic(t *testing.T) { + var matchSet waf.RegexMatchSet + var patternSet waf.RegexPatternSet + var idx int + + matchSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + fieldToMatch := waf.FieldToMatch{ + Data: aws.String("User-Agent"), + Type: aws.String("HEADER"), + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRegexMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRegexMatchSetConfig(matchSetName, patternSetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRegexMatchSetExists("aws_wafregional_regex_match_set.test", &matchSet), + testAccCheckAWSWafRegionalRegexPatternSetExists("aws_wafregional_regex_pattern_set.test", &patternSet), + computeWafRegexMatchSetTuple(&patternSet, &fieldToMatch, "NONE", &idx), + resource.TestCheckResourceAttr("aws_wafregional_regex_match_set.test", "name", matchSetName), + resource.TestCheckResourceAttr("aws_wafregional_regex_match_set.test", "regex_match_tuple.#", "1"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.field_to_match.#", &idx, "1"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", 
"regex_match_tuple.%d.field_to_match.0.data", &idx, "user-agent"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.type", &idx, "HEADER"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.text_transformation", &idx, "NONE"), + ), + }, + }, + }) +} + +func testAccAWSWafRegionalRegexMatchSet_changePatterns(t *testing.T) { + var before, after waf.RegexMatchSet + var patternSet waf.RegexPatternSet + var idx1, idx2 int + + matchSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRegexMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRegexMatchSetConfig(matchSetName, patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRegexMatchSetExists("aws_wafregional_regex_match_set.test", &before), + testAccCheckAWSWafRegionalRegexPatternSetExists("aws_wafregional_regex_pattern_set.test", &patternSet), + computeWafRegexMatchSetTuple(&patternSet, &waf.FieldToMatch{Data: aws.String("User-Agent"), Type: aws.String("HEADER")}, "NONE", &idx1), + resource.TestCheckResourceAttr("aws_wafregional_regex_match_set.test", "name", matchSetName), + resource.TestCheckResourceAttr("aws_wafregional_regex_match_set.test", "regex_match_tuple.#", "1"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.field_to_match.#", &idx1, "1"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.data", &idx1, "user-agent"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.type", &idx1, "HEADER"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.text_transformation", &idx1, "NONE"), + ), + }, + { + Config: testAccAWSWafRegionalRegexMatchSetConfig_changePatterns(matchSetName, patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRegexMatchSetExists("aws_wafregional_regex_match_set.test", &after), + resource.TestCheckResourceAttr("aws_wafregional_regex_match_set.test", "name", matchSetName), + resource.TestCheckResourceAttr("aws_wafregional_regex_match_set.test", "regex_match_tuple.#", "1"), + + computeWafRegexMatchSetTuple(&patternSet, &waf.FieldToMatch{Data: aws.String("Referer"), Type: aws.String("HEADER")}, "COMPRESS_WHITE_SPACE", &idx2), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.field_to_match.#", &idx2, "1"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.data", &idx2, "referer"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.field_to_match.0.type", &idx2, "HEADER"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_regex_match_set.test", "regex_match_tuple.%d.text_transformation", &idx2, "COMPRESS_WHITE_SPACE"), + ), + }, + }, + }) +} + +func testAccAWSWafRegionalRegexMatchSet_noPatterns(t *testing.T) { + var matchSet waf.RegexMatchSet + matchSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() 
{ testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRegexMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRegexMatchSetConfig_noPatterns(matchSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRegexMatchSetExists("aws_wafregional_regex_match_set.test", &matchSet), + resource.TestCheckResourceAttr("aws_wafregional_regex_match_set.test", "name", matchSetName), + resource.TestCheckResourceAttr("aws_wafregional_regex_match_set.test", "regex_match_tuple.#", "0"), + ), + }, + }, + }) +} + +func testAccAWSWafRegionalRegexMatchSet_disappears(t *testing.T) { + var matchSet waf.RegexMatchSet + matchSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRegexMatchSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRegexMatchSetConfig(matchSetName, patternSetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRegexMatchSetExists("aws_wafregional_regex_match_set.test", &matchSet), + testAccCheckAWSWafRegionalRegexMatchSetDisappears(&matchSet), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckAWSWafRegionalRegexMatchSetDisappears(set *waf.RegexMatchSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + region := testAccProvider.Meta().(*AWSClient).region + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: set.RegexMatchSetId, + } + + for _, tuple := range set.RegexMatchTuples { + req.Updates = append(req.Updates, &waf.RegexMatchSetUpdate{ + Action: aws.String("DELETE"), + RegexMatchTuple: tuple, + }) + } + + return conn.UpdateRegexMatchSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Regex Match Set: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteRegexMatchSetInput{ + ChangeToken: token, + RegexMatchSetId: set.RegexMatchSetId, + } + return conn.DeleteRegexMatchSet(opts) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regional Regex Match Set: %s", err) + } + + return nil + } +} + +func testAccCheckAWSWafRegionalRegexMatchSetExists(n string, v *waf.RegexMatchSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WAF Regional Regex Match Set ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetRegexMatchSet(&waf.GetRegexMatchSetInput{ + RegexMatchSetId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.RegexMatchSet.RegexMatchSetId == rs.Primary.ID { + *v = *resp.RegexMatchSet + return nil + } + + return fmt.Errorf("WAF Regional Regex Match Set (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSWafRegionalRegexMatchSetDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_wafregional_regex_match_set" { + continue + } + + conn := 
testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetRegexMatchSet(&waf.GetRegexMatchSetInput{ + RegexMatchSetId: aws.String(rs.Primary.ID), + }) + + if err == nil { + if *resp.RegexMatchSet.RegexMatchSetId == rs.Primary.ID { + return fmt.Errorf("WAF Regional Regex Match Set %s still exists", rs.Primary.ID) + } + } + + // Return nil if the Regex Match Set is already destroyed + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + return nil + } + + return err + } + + return nil +} + +func testAccAWSWafRegionalRegexMatchSetConfig(matchSetName, patternSetName string) string { + return fmt.Sprintf(` +resource "aws_wafregional_regex_match_set" "test" { + name = "%s" + regex_match_tuple { + field_to_match { + data = "User-Agent" + type = "HEADER" + } + regex_pattern_set_id = "${aws_wafregional_regex_pattern_set.test.id}" + text_transformation = "NONE" + } +} + +resource "aws_wafregional_regex_pattern_set" "test" { + name = "%s" + regex_pattern_strings = ["one", "two"] +} +`, matchSetName, patternSetName) +} + +func testAccAWSWafRegionalRegexMatchSetConfig_changePatterns(matchSetName, patternSetName string) string { + return fmt.Sprintf(` +resource "aws_wafregional_regex_match_set" "test" { + name = "%s" + + regex_match_tuple { + field_to_match { + data = "Referer" + type = "HEADER" + } + regex_pattern_set_id = "${aws_wafregional_regex_pattern_set.test.id}" + text_transformation = "COMPRESS_WHITE_SPACE" + } +} + +resource "aws_wafregional_regex_pattern_set" "test" { + name = "%s" + regex_pattern_strings = ["one", "two"] +} +`, matchSetName, patternSetName) +} + +func testAccAWSWafRegionalRegexMatchSetConfig_noPatterns(matchSetName string) string { + return fmt.Sprintf(` +resource "aws_wafregional_regex_match_set" "test" { + name = "%s" +}`, matchSetName) +} diff --git a/aws/resource_aws_wafregional_regex_pattern_set.go b/aws/resource_aws_wafregional_regex_pattern_set.go new file mode 100644 index 00000000000..6f311b32585 --- /dev/null +++ b/aws/resource_aws_wafregional_regex_pattern_set.go @@ -0,0 +1,147 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRegionalRegexPatternSet() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegionalRegexPatternSetCreate, + Read: resourceAwsWafRegionalRegexPatternSetRead, + Update: resourceAwsWafRegionalRegexPatternSetUpdate, + Delete: resourceAwsWafRegionalRegexPatternSetDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "regex_pattern_strings": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + } +} + +func resourceAwsWafRegionalRegexPatternSetCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + log.Printf("[INFO] Creating WAF Regional Regex Pattern Set: %s", d.Get("name").(string)) + + wr := newWafRegionalRetryer(conn, region) + out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateRegexPatternSetInput{ + ChangeToken: token, + Name: aws.String(d.Get("name").(string)), + } + return conn.CreateRegexPatternSet(params) + }) + if err != nil { + return fmt.Errorf("Failed creating WAF Regional Regex Pattern Set: %s", 
err) + } + resp := out.(*waf.CreateRegexPatternSetOutput) + + d.SetId(*resp.RegexPatternSet.RegexPatternSetId) + + return resourceAwsWafRegionalRegexPatternSetUpdate(d, meta) +} + +func resourceAwsWafRegionalRegexPatternSetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + + log.Printf("[INFO] Reading WAF Regional Regex Pattern Set: %s", d.Get("name").(string)) + params := &waf.GetRegexPatternSetInput{ + RegexPatternSetId: aws.String(d.Id()), + } + + resp, err := conn.GetRegexPatternSet(params) + if err != nil { + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + log.Printf("[WARN] WAF Regional Regex Pattern Set (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return err + } + + d.Set("name", resp.RegexPatternSet.Name) + d.Set("regex_pattern_strings", aws.StringValueSlice(resp.RegexPatternSet.RegexPatternStrings)) + + return nil +} + +func resourceAwsWafRegionalRegexPatternSetUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + log.Printf("[INFO] Updating WAF Regional Regex Pattern Set: %s", d.Get("name").(string)) + + if d.HasChange("regex_pattern_strings") { + o, n := d.GetChange("regex_pattern_strings") + oldPatterns, newPatterns := o.(*schema.Set).List(), n.(*schema.Set).List() + err := updateWafRegionalRegexPatternSetPatternStringsWR(d.Id(), oldPatterns, newPatterns, conn, region) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Regex Pattern Set: %s", err) + } + } + + return resourceAwsWafRegionalRegexPatternSetRead(d, meta) +} + +func resourceAwsWafRegionalRegexPatternSetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + oldPatterns := d.Get("regex_pattern_strings").(*schema.Set).List() + if len(oldPatterns) > 0 { + noPatterns := []interface{}{} + err := updateWafRegionalRegexPatternSetPatternStringsWR(d.Id(), oldPatterns, noPatterns, conn, region) + if err != nil { + return fmt.Errorf("Error updating WAF Regional Regex Pattern Set: %s", err) + } + } + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRegexPatternSetInput{ + ChangeToken: token, + RegexPatternSetId: aws.String(d.Id()), + } + log.Printf("[INFO] Deleting WAF Regional Regex Pattern Set: %s", req) + return conn.DeleteRegexPatternSet(req) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regional Regex Pattern Set: %s", err) + } + + return nil +} + +func updateWafRegionalRegexPatternSetPatternStringsWR(id string, oldPatterns, newPatterns []interface{}, conn *wafregional.WAFRegional, region string) error { + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRegexPatternSetInput{ + ChangeToken: token, + RegexPatternSetId: aws.String(id), + Updates: diffWafRegexPatternSetPatternStrings(oldPatterns, newPatterns), + } + + return conn.UpdateRegexPatternSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Regex Pattern Set: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_wafregional_regex_pattern_set_test.go b/aws/resource_aws_wafregional_regex_pattern_set_test.go new file mode 100644 index 00000000000..9b0f4ed867c --- /dev/null +++ b/aws/resource_aws_wafregional_regex_pattern_set_test.go @@ -0,0 +1,251 @@ 
+package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +// Serialized acceptance tests due to WAF account limits +// https://docs.aws.amazon.com/waf/latest/developerguide/limits.html +func TestAccAWSWafRegionalRegexPatternSet(t *testing.T) { + testCases := map[string]func(t *testing.T){ + "basic": testAccAWSWafRegionalRegexPatternSet_basic, + "changePatterns": testAccAWSWafRegionalRegexPatternSet_changePatterns, + "noPatterns": testAccAWSWafRegionalRegexPatternSet_noPatterns, + "disappears": testAccAWSWafRegionalRegexPatternSet_disappears, + } + + for name, tc := range testCases { + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + }) + } +} + +func testAccAWSWafRegionalRegexPatternSet_basic(t *testing.T) { + var patternSet waf.RegexPatternSet + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRegexPatternSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRegexPatternSetConfig(patternSetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRegexPatternSetExists("aws_wafregional_regex_pattern_set.test", &patternSet), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "name", patternSetName), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.#", "2"), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.2848565413", "one"), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.3351840846", "two"), + ), + }, + }, + }) +} + +func testAccAWSWafRegionalRegexPatternSet_changePatterns(t *testing.T) { + var before, after waf.RegexPatternSet + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRegexPatternSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRegexPatternSetConfig(patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRegexPatternSetExists("aws_wafregional_regex_pattern_set.test", &before), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "name", patternSetName), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.#", "2"), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.2848565413", "one"), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.3351840846", "two"), + ), + }, + { + Config: testAccAWSWafRegionalRegexPatternSetConfig_changePatterns(patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRegexPatternSetExists("aws_wafregional_regex_pattern_set.test", &after), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "name", patternSetName), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", 
"regex_pattern_strings.#", "3"), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.3351840846", "two"), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.2929247714", "three"), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.1294846542", "four"), + ), + }, + }, + }) +} + +func testAccAWSWafRegionalRegexPatternSet_noPatterns(t *testing.T) { + var patternSet waf.RegexPatternSet + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRegexPatternSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRegexPatternSetConfig_noPatterns(patternSetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRegexPatternSetExists("aws_wafregional_regex_pattern_set.test", &patternSet), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "name", patternSetName), + resource.TestCheckResourceAttr("aws_wafregional_regex_pattern_set.test", "regex_pattern_strings.#", "0"), + ), + }, + }, + }) +} + +func testAccAWSWafRegionalRegexPatternSet_disappears(t *testing.T) { + var patternSet waf.RegexPatternSet + patternSetName := fmt.Sprintf("tfacc-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRegexPatternSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRegexPatternSetConfig(patternSetName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRegexPatternSetExists("aws_wafregional_regex_pattern_set.test", &patternSet), + testAccCheckAWSWafRegionalRegexPatternSetDisappears(&patternSet), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckAWSWafRegionalRegexPatternSetDisappears(set *waf.RegexPatternSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + region := testAccProvider.Meta().(*AWSClient).region + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRegexPatternSetInput{ + ChangeToken: token, + RegexPatternSetId: set.RegexPatternSetId, + } + + for _, pattern := range set.RegexPatternStrings { + update := &waf.RegexPatternSetUpdate{ + Action: aws.String("DELETE"), + RegexPatternString: pattern, + } + req.Updates = append(req.Updates, update) + } + + return conn.UpdateRegexPatternSet(req) + }) + if err != nil { + return fmt.Errorf("Failed updating WAF Regional Regex Pattern Set: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteRegexPatternSetInput{ + ChangeToken: token, + RegexPatternSetId: set.RegexPatternSetId, + } + return conn.DeleteRegexPatternSet(opts) + }) + if err != nil { + return fmt.Errorf("Failed deleting WAF Regional Regex Pattern Set: %s", err) + } + + return nil + } +} + +func testAccCheckAWSWafRegionalRegexPatternSetExists(n string, patternSet *waf.RegexPatternSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WAF Regional Regex Pattern 
Set ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetRegexPatternSet(&waf.GetRegexPatternSetInput{ + RegexPatternSetId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.RegexPatternSet.RegexPatternSetId == rs.Primary.ID { + *patternSet = *resp.RegexPatternSet + return nil + } + + return fmt.Errorf("WAF Regional Regex Pattern Set (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSWafRegionalRegexPatternSetDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_wafregional_regex_pattern_set" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetRegexPatternSet(&waf.GetRegexPatternSetInput{ + RegexPatternSetId: aws.String(rs.Primary.ID), + }) + + if err == nil { + if *resp.RegexPatternSet.RegexPatternSetId == rs.Primary.ID { + return fmt.Errorf("WAF Regional Regex Pattern Set %s still exists", rs.Primary.ID) + } + } + + // Return nil if the Regex Pattern Set is already destroyed + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + return nil + } + + return err + } + + return nil +} + +func testAccAWSWafRegionalRegexPatternSetConfig(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_regex_pattern_set" "test" { + name = "%s" + regex_pattern_strings = ["one", "two"] +}`, name) +} + +func testAccAWSWafRegionalRegexPatternSetConfig_changePatterns(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_regex_pattern_set" "test" { + name = "%s" + regex_pattern_strings = ["two", "three", "four"] +}`, name) +} + +func testAccAWSWafRegionalRegexPatternSetConfig_noPatterns(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_regex_pattern_set" "test" { + name = "%s" +}`, name) +} diff --git a/aws/resource_aws_wafregional_rule.go b/aws/resource_aws_wafregional_rule.go index 24bc1d11fa5..25cc935a869 100644 --- a/aws/resource_aws_wafregional_rule.go +++ b/aws/resource_aws_wafregional_rule.go @@ -20,40 +20,34 @@ func resourceAwsWafRegionalRule() *schema.Resource { Delete: resourceAwsWafRegionalRuleDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "metric_name": &schema.Schema{ + "metric_name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "predicate": &schema.Schema{ + "predicate": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "negated": &schema.Schema{ + "negated": { Type: schema.TypeBool, Required: true, }, - "data_id": &schema.Schema{ + "data_id": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringLenBetween(1, 128), }, - "type": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice([]string{ - "IPMatch", - "ByteMatch", - "SqlInjectionMatch", - "SizeConstraint", - "XssMatch", - }, false), + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateWafPredicatesType(), }, }, }, @@ -174,7 +168,7 @@ func updateWafRegionalRuleResource(id string, oldP, newP []interface{}, meta int } func flattenWafPredicates(ts []*waf.Predicate) []interface{} { - out := make([]interface{}, len(ts), len(ts)) + out := make([]interface{}, len(ts)) for i, p := range ts { m := make(map[string]interface{}) m["negated"] = *p.Negated diff --git a/aws/resource_aws_wafregional_rule_group.go 
b/aws/resource_aws_wafregional_rule_group.go new file mode 100644 index 00000000000..e82e09798fc --- /dev/null +++ b/aws/resource_aws_wafregional_rule_group.go @@ -0,0 +1,191 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRegionalRuleGroup() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegionalRuleGroupCreate, + Read: resourceAwsWafRegionalRuleGroupRead, + Update: resourceAwsWafRegionalRuleGroupUpdate, + Delete: resourceAwsWafRegionalRuleGroupDelete, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "metric_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateWafMetricName, + }, + "activated_rule": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "action": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "priority": { + Type: schema.TypeInt, + Required: true, + }, + "rule_id": { + Type: schema.TypeString, + Required: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + Default: wafregional.WafRuleTypeRegular, + }, + }, + }, + }, + }, + } +} + +func resourceAwsWafRegionalRuleGroupCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + wr := newWafRegionalRetryer(conn, region) + out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateRuleGroupInput{ + ChangeToken: token, + MetricName: aws.String(d.Get("metric_name").(string)), + Name: aws.String(d.Get("name").(string)), + } + + return conn.CreateRuleGroup(params) + }) + if err != nil { + return err + } + resp := out.(*waf.CreateRuleGroupOutput) + d.SetId(*resp.RuleGroup.RuleGroupId) + return resourceAwsWafRegionalRuleGroupUpdate(d, meta) +} + +func resourceAwsWafRegionalRuleGroupRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + + params := &waf.GetRuleGroupInput{ + RuleGroupId: aws.String(d.Id()), + } + + resp, err := conn.GetRuleGroup(params) + if err != nil { + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + log.Printf("[WARN] WAF Regional Rule Group (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + return err + } + + rResp, err := conn.ListActivatedRulesInRuleGroup(&waf.ListActivatedRulesInRuleGroupInput{ + RuleGroupId: aws.String(d.Id()), + }) + if err != nil { + return fmt.Errorf("error listing activated rules in WAF Regional Rule Group (%s): %s", d.Id(), err) + } + + d.Set("activated_rule", flattenWafActivatedRules(rResp.ActivatedRules)) + d.Set("name", resp.RuleGroup.Name) + d.Set("metric_name", resp.RuleGroup.MetricName) + + return nil +} + +func resourceAwsWafRegionalRuleGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + if d.HasChange("activated_rule") { + o, n := d.GetChange("activated_rule") + oldRules, newRules := o.(*schema.Set).List(), n.(*schema.Set).List() + + err := updateWafRuleGroupResourceWR(d.Id(), oldRules, 
newRules, conn, region) + if err != nil { + return fmt.Errorf("Error Updating WAF Regional Rule Group: %s", err) + } + } + + return resourceAwsWafRegionalRuleGroupRead(d, meta) +} + +func resourceAwsWafRegionalRuleGroupDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + oldRules := d.Get("activated_rule").(*schema.Set).List() + err := deleteWafRegionalRuleGroup(d.Id(), oldRules, conn, region) + + return err +} + +func deleteWafRegionalRuleGroup(id string, oldRules []interface{}, conn *wafregional.WAFRegional, region string) error { + if len(oldRules) > 0 { + noRules := []interface{}{} + err := updateWafRuleGroupResourceWR(id, oldRules, noRules, conn, region) + if err != nil { + return fmt.Errorf("Error updating WAF Regional Rule Group Predicates: %s", err) + } + } + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteRuleGroupInput{ + ChangeToken: token, + RuleGroupId: aws.String(id), + } + log.Printf("[INFO] Deleting WAF Regional Rule Group") + return conn.DeleteRuleGroup(req) + }) + if err != nil { + return fmt.Errorf("Error deleting WAF Regional Rule Group: %s", err) + } + return nil +} + +func updateWafRuleGroupResourceWR(id string, oldRules, newRules []interface{}, conn *wafregional.WAFRegional, region string) error { + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRuleGroupInput{ + ChangeToken: token, + RuleGroupId: aws.String(id), + Updates: diffWafRuleGroupActivatedRules(oldRules, newRules), + } + + return conn.UpdateRuleGroup(req) + }) + if err != nil { + return fmt.Errorf("Error Updating WAF Regional Rule Group: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_wafregional_rule_group_test.go b/aws/resource_aws_wafregional_rule_group_test.go new file mode 100644 index 00000000000..269fa24c235 --- /dev/null +++ b/aws/resource_aws_wafregional_rule_group_test.go @@ -0,0 +1,408 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func init() { + resource.AddTestSweepers("aws_wafregional_rule_group", &resource.Sweeper{ + Name: "aws_wafregional_rule_group", + F: testSweepWafRegionalRuleGroups, + }) +} + +func testSweepWafRegionalRuleGroups(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).wafregionalconn + + req := &waf.ListRuleGroupsInput{} + resp, err := conn.ListRuleGroups(req) + if err != nil { + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping WAF Regional Rule Group sweep for %s: %s", region, err) + return nil + } + return fmt.Errorf("Error describing WAF Regional Rule Groups: %s", err) + } + + if len(resp.RuleGroups) == 0 { + log.Print("[DEBUG] No AWS WAF Regional Rule Groups to sweep") + return nil + } + + for _, group := range resp.RuleGroups { + if !strings.HasPrefix(*group.Name, "tfacc") { + continue + } + + rResp, err := conn.ListActivatedRulesInRuleGroup(&waf.ListActivatedRulesInRuleGroupInput{ + RuleGroupId: group.RuleGroupId, + 
}) + if err != nil { + return err + } + oldRules := flattenWafActivatedRules(rResp.ActivatedRules) + err = deleteWafRegionalRuleGroup(*group.RuleGroupId, oldRules, conn, region) + if err != nil { + return err + } + } + + return nil +} + +func TestAccAWSWafRegionalRuleGroup_basic(t *testing.T) { + var rule waf.Rule + var group waf.RuleGroup + var idx int + + ruleName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRuleGroupConfig(ruleName, groupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRuleExists("aws_wafregional_rule.test", &rule), + testAccCheckAWSWafRegionalRuleGroupExists("aws_wafregional_rule_group.test", &group), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "activated_rule.#", "1"), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "metric_name", groupName), + computeWafActivatedRuleWithRuleId(&rule, "COUNT", 50, &idx), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.action.0.type", &idx, "COUNT"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.priority", &idx, "50"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.type", &idx, waf.WafRuleTypeRegular), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalRuleGroup_changeNameForceNew(t *testing.T) { + var before, after waf.RuleGroup + + ruleName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + newGroupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRuleGroupConfig(ruleName, groupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRuleGroupExists("aws_wafregional_rule_group.test", &before), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "activated_rule.#", "1"), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "metric_name", groupName), + ), + }, + { + Config: testAccAWSWafRegionalRuleGroupConfig(ruleName, newGroupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRuleGroupExists("aws_wafregional_rule_group.test", &after), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "name", newGroupName), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "activated_rule.#", "1"), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "metric_name", newGroupName), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalRuleGroup_disappears(t *testing.T) { + var group waf.RuleGroup + ruleName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + 
CheckDestroy: testAccCheckAWSWafRegionalRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRuleGroupConfig(ruleName, groupName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalRuleGroupExists("aws_wafregional_rule_group.test", &group), + testAccCheckAWSWafRegionalRuleGroupDisappears(&group), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSWafRegionalRuleGroup_changeActivatedRules(t *testing.T) { + var rule0, rule1, rule2, rule3 waf.Rule + var groupBefore, groupAfter waf.RuleGroup + var idx0, idx1, idx2, idx3 int + + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + ruleName1 := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + ruleName2 := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + ruleName3 := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRuleGroupConfig(ruleName1, groupName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRuleExists("aws_wafregional_rule.test", &rule0), + testAccCheckAWSWafRegionalRuleGroupExists("aws_wafregional_rule_group.test", &groupBefore), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "activated_rule.#", "1"), + computeWafActivatedRuleWithRuleId(&rule0, "COUNT", 50, &idx0), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.action.0.type", &idx0, "COUNT"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.priority", &idx0, "50"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.type", &idx0, waf.WafRuleTypeRegular), + ), + }, + { + Config: testAccAWSWafRegionalRuleGroupConfig_changeActivatedRules(ruleName1, ruleName2, ruleName3, groupName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr("aws_wafregional_rule_group.test", "activated_rule.#", "3"), + testAccCheckAWSWafRegionalRuleGroupExists("aws_wafregional_rule_group.test", &groupAfter), + + testAccCheckAWSWafRegionalRuleExists("aws_wafregional_rule.test", &rule1), + computeWafActivatedRuleWithRuleId(&rule1, "BLOCK", 10, &idx1), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.action.0.type", &idx1, "BLOCK"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.priority", &idx1, "10"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.type", &idx1, waf.WafRuleTypeRegular), + + testAccCheckAWSWafRegionalRuleExists("aws_wafregional_rule.test2", &rule2), + computeWafActivatedRuleWithRuleId(&rule2, "COUNT", 1, &idx2), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.action.0.type", &idx2, "COUNT"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.priority", &idx2, "1"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.type", &idx2, waf.WafRuleTypeRegular), + + testAccCheckAWSWafRegionalRuleExists("aws_wafregional_rule.test3", &rule3), + 
computeWafActivatedRuleWithRuleId(&rule3, "BLOCK", 15, &idx3), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.action.0.type", &idx3, "BLOCK"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.priority", &idx3, "15"), + testCheckResourceAttrWithIndexesAddr("aws_wafregional_rule_group.test", "activated_rule.%d.type", &idx3, waf.WafRuleTypeRegular), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalRuleGroup_noActivatedRules(t *testing.T) { + var group waf.RuleGroup + groupName := fmt.Sprintf("tfacc%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalRuleGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalRuleGroupConfig_noActivatedRules(groupName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalRuleGroupExists("aws_wafregional_rule_group.test", &group), + resource.TestCheckResourceAttr( + "aws_wafregional_rule_group.test", "name", groupName), + resource.TestCheckResourceAttr( + "aws_wafregional_rule_group.test", "activated_rule.#", "0"), + ), + }, + }, + }) +} + +func testAccCheckAWSWafRegionalRuleGroupDisappears(group *waf.RuleGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + region := testAccProvider.Meta().(*AWSClient).region + + rResp, err := conn.ListActivatedRulesInRuleGroup(&waf.ListActivatedRulesInRuleGroupInput{ + RuleGroupId: group.RuleGroupId, + }) + if err != nil { + return fmt.Errorf("error listing activated rules in WAF Regional Rule Group (%s): %s", aws.StringValue(group.RuleGroupId), err) + } + + wr := newWafRegionalRetryer(conn, region) + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateRuleGroupInput{ + ChangeToken: token, + RuleGroupId: group.RuleGroupId, + } + + for _, rule := range rResp.ActivatedRules { + rule := &waf.RuleGroupUpdate{ + Action: aws.String("DELETE"), + ActivatedRule: rule, + } + req.Updates = append(req.Updates, rule) + } + + return conn.UpdateRuleGroup(req) + }) + if err != nil { + return fmt.Errorf("Error Updating WAF Regional Rule Group: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteRuleGroupInput{ + ChangeToken: token, + RuleGroupId: group.RuleGroupId, + } + return conn.DeleteRuleGroup(opts) + }) + if err != nil { + return fmt.Errorf("Error Deleting WAF Regional Rule Group: %s", err) + } + return nil + } +} + +func testAccCheckAWSWafRegionalRuleGroupDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_wafregional_rule_group" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetRuleGroup(&waf.GetRuleGroupInput{ + RuleGroupId: aws.String(rs.Primary.ID), + }) + + if err == nil { + if *resp.RuleGroup.RuleGroupId == rs.Primary.ID { + return fmt.Errorf("WAF Regional Rule Group %s still exists", rs.Primary.ID) + } + } + + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + return nil + } + + return err + } + + return nil +} + +func testAccCheckAWSWafRegionalRuleGroupExists(n string, group *waf.RuleGroup) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + 
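+ // Refuse to query the API when the state entry has no ID.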
+ if rs.Primary.ID == "" { + return fmt.Errorf("No WAF Regional Rule Group ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetRuleGroup(&waf.GetRuleGroupInput{ + RuleGroupId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.RuleGroup.RuleGroupId == rs.Primary.ID { + *group = *resp.RuleGroup + return nil + } + + return fmt.Errorf("WAF Regional Rule Group (%s) not found", rs.Primary.ID) + } +} + +func testAccAWSWafRegionalRuleGroupConfig(ruleName, groupName string) string { + return fmt.Sprintf(` +resource "aws_wafregional_rule" "test" { + name = "%[1]s" + metric_name = "%[1]s" +} + +resource "aws_wafregional_rule_group" "test" { + name = "%[2]s" + metric_name = "%[2]s" + activated_rule { + action { + type = "COUNT" + } + priority = 50 + rule_id = "${aws_wafregional_rule.test.id}" + } +}`, ruleName, groupName) +} + +func testAccAWSWafRegionalRuleGroupConfig_changeActivatedRules(ruleName1, ruleName2, ruleName3, groupName string) string { + return fmt.Sprintf(` +resource "aws_wafregional_rule" "test" { + name = "%[1]s" + metric_name = "%[1]s" +} + +resource "aws_wafregional_rule" "test2" { + name = "%[2]s" + metric_name = "%[2]s" +} + +resource "aws_wafregional_rule" "test3" { + name = "%[3]s" + metric_name = "%[3]s" +} + +resource "aws_wafregional_rule_group" "test" { + name = "%[4]s" + metric_name = "%[4]s" + activated_rule { + action { + type = "BLOCK" + } + priority = 10 + rule_id = "${aws_wafregional_rule.test.id}" + } + activated_rule { + action { + type = "COUNT" + } + priority = 1 + rule_id = "${aws_wafregional_rule.test2.id}" + } + activated_rule { + action { + type = "BLOCK" + } + priority = 15 + rule_id = "${aws_wafregional_rule.test3.id}" + } +}`, ruleName1, ruleName2, ruleName3, groupName) +} + +func testAccAWSWafRegionalRuleGroupConfig_noActivatedRules(groupName string) string { + return fmt.Sprintf(` +resource "aws_wafregional_rule_group" "test" { + name = "%[1]s" + metric_name = "%[1]s" +}`, groupName) +} diff --git a/aws/resource_aws_wafregional_rule_test.go b/aws/resource_aws_wafregional_rule_test.go index 41d99c71a83..f0fcfd29dc4 100644 --- a/aws/resource_aws_wafregional_rule_test.go +++ b/aws/resource_aws_wafregional_rule_test.go @@ -16,12 +16,12 @@ import ( func TestAccAWSWafRegionalRule_basic(t *testing.T) { var v waf.Rule wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalRuleDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafRegionalRuleConfig(wafRuleName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRegionalRuleExists("aws_wafregional_rule.wafrule", &v), @@ -42,7 +42,7 @@ func TestAccAWSWafRegionalRule_changeNameForceNew(t *testing.T) { wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) wafRuleNewName := fmt.Sprintf("wafrulenew%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalIPSetDestroy, @@ -78,7 +78,7 @@ func TestAccAWSWafRegionalRule_changeNameForceNew(t *testing.T) { func TestAccAWSWafRegionalRule_disappears(t *testing.T) { var v waf.Rule wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalRuleDestroy, @@ -98,7 +98,7 @@ func TestAccAWSWafRegionalRule_disappears(t *testing.T) { func TestAccAWSWafRegionalRule_noPredicates(t *testing.T) { var v waf.Rule wafRuleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalRuleDestroy, @@ -125,7 +125,7 @@ func TestAccAWSWafRegionalRule_changePredicates(t *testing.T) { var idx int ruleName := fmt.Sprintf("wafrule%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRuleDestroy, diff --git a/aws/resource_aws_wafregional_size_constraint_set.go b/aws/resource_aws_wafregional_size_constraint_set.go new file mode 100644 index 00000000000..904547d1817 --- /dev/null +++ b/aws/resource_aws_wafregional_size_constraint_set.go @@ -0,0 +1,135 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRegionalSizeConstraintSet() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegionalSizeConstraintSetCreate, + Read: resourceAwsWafRegionalSizeConstraintSetRead, + Update: resourceAwsWafRegionalSizeConstraintSetUpdate, + Delete: resourceAwsWafRegionalSizeConstraintSetDelete, + + Schema: wafSizeConstraintSetSchema(), + } +} + +func resourceAwsWafRegionalSizeConstraintSetCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + name := d.Get("name").(string) + + log.Printf("[INFO] Creating WAF Regional SizeConstraintSet: %s", name) + + wr := newWafRegionalRetryer(conn, region) + out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + params := &waf.CreateSizeConstraintSetInput{ + ChangeToken: token, + Name: aws.String(name), + } + + return conn.CreateSizeConstraintSet(params) + }) + if err != nil { + return fmt.Errorf("Error creating WAF Regional SizeConstraintSet: %s", err) + } + resp := out.(*waf.CreateSizeConstraintSetOutput) + + d.SetId(*resp.SizeConstraintSet.SizeConstraintSetId) + + return resourceAwsWafRegionalSizeConstraintSetUpdate(d, meta) +} + +func resourceAwsWafRegionalSizeConstraintSetRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + + log.Printf("[INFO] Reading WAF Regional SizeConstraintSet: %s", d.Get("name").(string)) + params := &waf.GetSizeConstraintSetInput{ + SizeConstraintSetId: aws.String(d.Id()), + } + + resp, err := conn.GetSizeConstraintSet(params) + if err != nil { + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + log.Printf("[WARN] WAF Regional SizeConstraintSet (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + return err + } + + d.Set("name", resp.SizeConstraintSet.Name) + d.Set("size_constraints", flattenWafSizeConstraints(resp.SizeConstraintSet.SizeConstraints)) + + return nil +} + +func resourceAwsWafRegionalSizeConstraintSetUpdate(d *schema.ResourceData, meta interface{}) error { + 
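+ // Only the delta between the old and new "size_constraints" sets is sent to the API: updateRegionalSizeConstraintSetResource diffs the two lists and applies the result through the regional change-token retryer before state is re-read below.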
client := meta.(*AWSClient) + + if d.HasChange("size_constraints") { + o, n := d.GetChange("size_constraints") + oldConstraints, newConstraints := o.(*schema.Set).List(), n.(*schema.Set).List() + + if err := updateRegionalSizeConstraintSetResource(d.Id(), oldConstraints, newConstraints, client.wafregionalconn, client.region); err != nil { + return fmt.Errorf("Error updating WAF Regional SizeConstraintSet: %s", err) + } + } + + return resourceAwsWafRegionalSizeConstraintSetRead(d, meta) +} + +func resourceAwsWafRegionalSizeConstraintSetDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region + + oldConstraints := d.Get("size_constraints").(*schema.Set).List() + + if len(oldConstraints) > 0 { + noConstraints := []interface{}{} + if err := updateRegionalSizeConstraintSetResource(d.Id(), oldConstraints, noConstraints, conn, region); err != nil { + return fmt.Errorf("Error deleting WAF Regional SizeConstraintSet: %s", err) + } + } + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.DeleteSizeConstraintSetInput{ + ChangeToken: token, + SizeConstraintSetId: aws.String(d.Id()), + } + return conn.DeleteSizeConstraintSet(req) + }) + if err != nil { + return fmt.Errorf("Error deleting WAF Regional SizeConstraintSet: %s", err) + } + + return nil +} + +func updateRegionalSizeConstraintSetResource(id string, oldConstraints, newConstraints []interface{}, conn *wafregional.WAFRegional, region string) error { + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateSizeConstraintSetInput{ + ChangeToken: token, + SizeConstraintSetId: aws.String(id), + Updates: diffWafSizeConstraints(oldConstraints, newConstraints), + } + + log.Printf("[INFO] Updating WAF Regional SizeConstraintSet: %s", req) + return conn.UpdateSizeConstraintSet(req) + }) + if err != nil { + return fmt.Errorf("Error updating WAF Regional SizeConstraintSet: %s", err) + } + + return nil +} diff --git a/aws/resource_aws_wafregional_size_constraint_set_test.go b/aws/resource_aws_wafregional_size_constraint_set_test.go new file mode 100644 index 00000000000..09da9119492 --- /dev/null +++ b/aws/resource_aws_wafregional_size_constraint_set_test.go @@ -0,0 +1,336 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/acctest" +) + +func TestAccAWSWafRegionalSizeConstraintSet_basic(t *testing.T) { + var constraints waf.SizeConstraintSet + sizeConstraintSet := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalSizeConstraintSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalSizeConstraintSetConfig(sizeConstraintSet), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalSizeConstraintSetExists("aws_wafregional_size_constraint_set.size_constraint_set", &constraints), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "name", sizeConstraintSet), + 
resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.comparison_operator", "EQ"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.281401076.data", ""), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.281401076.type", "BODY"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.size", "4096"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.text_transformation", "NONE"), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalSizeConstraintSet_changeNameForceNew(t *testing.T) { + var before, after waf.SizeConstraintSet + sizeConstraintSet := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) + sizeConstraintSetNewName := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalSizeConstraintSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalSizeConstraintSetConfig(sizeConstraintSet), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalSizeConstraintSetExists("aws_wafregional_size_constraint_set.size_constraint_set", &before), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "name", sizeConstraintSet), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.#", "1"), + ), + }, + { + Config: testAccAWSWafRegionalSizeConstraintSetConfigChangeName(sizeConstraintSetNewName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalSizeConstraintSetExists("aws_wafregional_size_constraint_set.size_constraint_set", &after), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "name", sizeConstraintSetNewName), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.#", "1"), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalSizeConstraintSet_disappears(t *testing.T) { + var constraints waf.SizeConstraintSet + sizeConstraintSet := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalSizeConstraintSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalSizeConstraintSetConfig(sizeConstraintSet), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalSizeConstraintSetExists("aws_wafregional_size_constraint_set.size_constraint_set", &constraints), + testAccCheckAWSWafRegionalSizeConstraintSetDisappears(&constraints), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSWafRegionalSizeConstraintSet_changeConstraints(t *testing.T) { + var before, after waf.SizeConstraintSet + setName := 
fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalSizeConstraintSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalSizeConstraintSetConfig(setName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalSizeConstraintSetExists("aws_wafregional_size_constraint_set.size_constraint_set", &before), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "name", setName), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.comparison_operator", "EQ"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.281401076.data", ""), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.field_to_match.281401076.type", "BODY"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.size", "4096"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.2029852522.text_transformation", "NONE"), + ), + }, + { + Config: testAccAWSWafRegionalSizeConstraintSetConfig_changeConstraints(setName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalSizeConstraintSetExists("aws_wafregional_size_constraint_set.size_constraint_set", &after), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "name", setName), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.3222308386.comparison_operator", "GE"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.3222308386.field_to_match.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.3222308386.field_to_match.281401076.data", ""), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.3222308386.field_to_match.281401076.type", "BODY"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.3222308386.size", "1024"), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.3222308386.text_transformation", "NONE"), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalSizeConstraintSet_noConstraints(t *testing.T) { + var constraints waf.SizeConstraintSet + setName := fmt.Sprintf("sizeConstraintSet-%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalSizeConstraintSetDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSWafRegionalSizeConstraintSetConfig_noConstraints(setName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafRegionalSizeConstraintSetExists("aws_wafregional_size_constraint_set.size_constraint_set", &constraints), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "name", setName), + resource.TestCheckResourceAttr( + "aws_wafregional_size_constraint_set.size_constraint_set", "size_constraints.#", "0"), + ), + }, + }, + }) +} + +func testAccCheckAWSWafRegionalSizeConstraintSetDisappears(constraints *waf.SizeConstraintSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + region := testAccProvider.Meta().(*AWSClient).region + + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateSizeConstraintSetInput{ + ChangeToken: token, + SizeConstraintSetId: constraints.SizeConstraintSetId, + } + + for _, sizeConstraint := range constraints.SizeConstraints { + sizeConstraintUpdate := &waf.SizeConstraintSetUpdate{ + Action: aws.String("DELETE"), + SizeConstraint: &waf.SizeConstraint{ + FieldToMatch: sizeConstraint.FieldToMatch, + ComparisonOperator: sizeConstraint.ComparisonOperator, + Size: sizeConstraint.Size, + TextTransformation: sizeConstraint.TextTransformation, + }, + } + req.Updates = append(req.Updates, sizeConstraintUpdate) + } + return conn.UpdateSizeConstraintSet(req) + }) + if err != nil { + return fmt.Errorf("Error updating SizeConstraintSet: %s", err) + } + + _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + opts := &waf.DeleteSizeConstraintSetInput{ + ChangeToken: token, + SizeConstraintSetId: constraints.SizeConstraintSetId, + } + return conn.DeleteSizeConstraintSet(opts) + }) + + return err + } +} + +func testAccCheckAWSWafRegionalSizeConstraintSetExists(n string, constraints *waf.SizeConstraintSet) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WAF SizeConstraintSet ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetSizeConstraintSet(&waf.GetSizeConstraintSetInput{ + SizeConstraintSetId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + if *resp.SizeConstraintSet.SizeConstraintSetId == rs.Primary.ID { + *constraints = *resp.SizeConstraintSet + return nil + } + + return fmt.Errorf("WAF SizeConstraintSet (%s) not found", rs.Primary.ID) + } +} + +func testAccCheckAWSWafRegionalSizeConstraintSetDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_wafregional_size_constraint_set" { + continue + } + + conn := testAccProvider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.GetSizeConstraintSet( + &waf.GetSizeConstraintSetInput{ + SizeConstraintSetId: aws.String(rs.Primary.ID), + }) + + if err == nil { + if *resp.SizeConstraintSet.SizeConstraintSetId == rs.Primary.ID { + return fmt.Errorf("WAF SizeConstraintSet %s still exists", rs.Primary.ID) + } + } + + // Return nil if the SizeConstraintSet is already destroyed + if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { + return nil + } + + return err + } + + return nil +} + +func testAccAWSWafRegionalSizeConstraintSetConfig(name string) string { + return fmt.Sprintf(`
+resource "aws_wafregional_size_constraint_set" "size_constraint_set" { + name = "%s" + size_constraints { + text_transformation = "NONE" + comparison_operator = "EQ" + size = "4096" + field_to_match { + type = "BODY" + } + } +}`, name) +} + +func testAccAWSWafRegionalSizeConstraintSetConfigChangeName(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_size_constraint_set" "size_constraint_set" { + name = "%s" + size_constraints { + text_transformation = "NONE" + comparison_operator = "EQ" + size = "4096" + field_to_match { + type = "BODY" + } + } +}`, name) +} + +func testAccAWSWafRegionalSizeConstraintSetConfig_changeConstraints(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_size_constraint_set" "size_constraint_set" { + name = "%s" + size_constraints { + text_transformation = "NONE" + comparison_operator = "GE" + size = "1024" + field_to_match { + type = "BODY" + } + } +}`, name) +} + +func testAccAWSWafRegionalSizeConstraintSetConfig_noConstraints(name string) string { + return fmt.Sprintf(` +resource "aws_wafregional_size_constraint_set" "size_constraint_set" { + name = "%s" +}`, name) +} diff --git a/aws/resource_aws_wafregional_sql_injection_match_set.go b/aws/resource_aws_wafregional_sql_injection_match_set.go index f7d23c9c2a6..43e4554e75a 100644 --- a/aws/resource_aws_wafregional_sql_injection_match_set.go +++ b/aws/resource_aws_wafregional_sql_injection_match_set.go @@ -122,7 +122,7 @@ func resourceAwsWafRegionalSqlInjectionMatchSetUpdate(d *schema.ResourceData, me err := updateSqlInjectionMatchSetResourceWR(d.Id(), oldT, newT, conn, region) if err != nil { - return fmt.Errorf("[ERROR] Error updating Regional WAF SQL Injection Match Set: %s", err) + return fmt.Errorf("Error updating Regional WAF SQL Injection Match Set: %s", err) } } @@ -139,7 +139,7 @@ func resourceAwsWafRegionalSqlInjectionMatchSetDelete(d *schema.ResourceData, me noTuples := []interface{}{} err := updateSqlInjectionMatchSetResourceWR(d.Id(), oldTuples, noTuples, conn, region) if err != nil { - return fmt.Errorf("[ERROR] Error deleting Regional WAF SQL Injection Match Set: %s", err) + return fmt.Errorf("Error deleting Regional WAF SQL Injection Match Set: %s", err) } } diff --git a/aws/resource_aws_wafregional_sql_injection_match_set_test.go b/aws/resource_aws_wafregional_sql_injection_match_set_test.go index 068ab04cf2d..a0a0f07764e 100644 --- a/aws/resource_aws_wafregional_sql_injection_match_set_test.go +++ b/aws/resource_aws_wafregional_sql_injection_match_set_test.go @@ -16,12 +16,12 @@ func TestAccAWSWafRegionalSqlInjectionMatchSet_basic(t *testing.T) { var v waf.SqlInjectionMatchSet sqlInjectionMatchSet := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalSqlInjectionMatchSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafRegionalSqlInjectionMatchSetConfig(sqlInjectionMatchSet), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRegionalSqlInjectionMatchSetExists("aws_wafregional_sql_injection_match_set.sql_injection_match_set", &v), @@ -48,7 +48,7 @@ func TestAccAWSWafRegionalSqlInjectionMatchSet_changeNameForceNew(t *testing.T) sqlInjectionMatchSet := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) sqlInjectionMatchSetNewName := fmt.Sprintf("sqlInjectionMatchSetNewName-%s", acctest.RandString(5)) - 
resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalSqlInjectionMatchSetDestroy, @@ -81,7 +81,7 @@ func TestAccAWSWafRegionalSqlInjectionMatchSet_disappears(t *testing.T) { var v waf.SqlInjectionMatchSet sqlInjectionMatchSet := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalSqlInjectionMatchSetDestroy, @@ -102,7 +102,7 @@ func TestAccAWSWafRegionalSqlInjectionMatchSet_changeTuples(t *testing.T) { var before, after waf.SqlInjectionMatchSet setName := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalSqlInjectionMatchSetDestroy, @@ -151,7 +151,7 @@ func TestAccAWSWafRegionalSqlInjectionMatchSet_noTuples(t *testing.T) { var ipset waf.SqlInjectionMatchSet setName := fmt.Sprintf("sqlInjectionMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalSqlInjectionMatchSetDestroy, diff --git a/aws/resource_aws_wafregional_web_acl.go b/aws/resource_aws_wafregional_web_acl.go index d466b14d95d..6389d84a549 100644 --- a/aws/resource_aws_wafregional_web_acl.go +++ b/aws/resource_aws_wafregional_web_acl.go @@ -8,6 +8,7 @@ import ( "github.com/aws/aws-sdk-go/service/waf" "github.com/aws/aws-sdk-go/service/wafregional" "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) func resourceAwsWafRegionalWebAcl() *schema.Resource { @@ -18,52 +19,75 @@ func resourceAwsWafRegionalWebAcl() *schema.Resource { Delete: resourceAwsWafRegionalWebAclDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "default_action": &schema.Schema{ + "default_action": { Type: schema.TypeList, Required: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Required: true, }, }, }, }, - "metric_name": &schema.Schema{ + "metric_name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "rule": &schema.Schema{ + "rule": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "action": &schema.Schema{ + "action": { Type: schema.TypeList, - Required: true, + Optional: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "type": &schema.Schema{ + "type": { Type: schema.TypeString, Required: true, }, }, }, }, - "priority": &schema.Schema{ + "override_action": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "priority": { Type: schema.TypeInt, Required: true, }, - "rule_id": &schema.Schema{ + "type": { + Type: schema.TypeString, + Optional: true, + Default: waf.WafRuleTypeRegular, + ValidateFunc: validation.StringInSlice([]string{ + 
waf.WafRuleTypeRegular, + waf.WafRuleTypeRateBased, + waf.WafRuleTypeGroup, + }, false), + }, + "rule_id": { Type: schema.TypeString, Required: true, }, @@ -82,7 +106,7 @@ func resourceAwsWafRegionalWebAclCreate(d *schema.ResourceData, meta interface{} out, err := wr.RetryWithToken(func(token *string) (interface{}, error) { params := &waf.CreateWebACLInput{ ChangeToken: token, - DefaultAction: expandDefaultActionWR(d.Get("default_action").([]interface{})), + DefaultAction: expandWafAction(d.Get("default_action").([]interface{})), MetricName: aws.String(d.Get("metric_name").(string)), Name: aws.String(d.Get("name").(string)), } @@ -106,7 +130,7 @@ func resourceAwsWafRegionalWebAclRead(d *schema.ResourceData, meta interface{}) resp, err := conn.GetWebACL(params) if err != nil { if isAWSErr(err, wafregional.ErrCodeWAFNonexistentItemException, "") { - log.Printf("[WARN] WAF Regional ACL (%s) not found, error code (404)", d.Id()) + log.Printf("[WARN] WAF Regional ACL (%s) not found, removing from state", d.Id()) d.SetId("") return nil } @@ -114,24 +138,42 @@ func resourceAwsWafRegionalWebAclRead(d *schema.ResourceData, meta interface{}) return err } - d.Set("default_action", flattenDefaultActionWR(resp.WebACL.DefaultAction)) + if resp == nil || resp.WebACL == nil { + log.Printf("[WARN] WAF Regional ACL (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err := d.Set("default_action", flattenWafAction(resp.WebACL.DefaultAction)); err != nil { + return fmt.Errorf("error setting default_action: %s", err) + } d.Set("name", resp.WebACL.Name) d.Set("metric_name", resp.WebACL.MetricName) - d.Set("rule", flattenWafWebAclRules(resp.WebACL.Rules)) + if err := d.Set("rule", flattenWafWebAclRules(resp.WebACL.Rules)); err != nil { + return fmt.Errorf("error setting rule: %s", err) + } return nil } func resourceAwsWafRegionalWebAclUpdate(d *schema.ResourceData, meta interface{}) error { - if d.HasChange("default_action") || d.HasChange("rule") { - conn := meta.(*AWSClient).wafregionalconn - region := meta.(*AWSClient).region + conn := meta.(*AWSClient).wafregionalconn + region := meta.(*AWSClient).region - action := expandDefaultActionWR(d.Get("default_action").([]interface{})) + if d.HasChange("default_action") || d.HasChange("rule") { o, n := d.GetChange("rule") oldR, newR := o.(*schema.Set).List(), n.(*schema.Set).List() - err := updateWebAclResourceWR(d.Id(), action, oldR, newR, conn, region) + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateWebACLInput{ + ChangeToken: token, + DefaultAction: expandWafAction(d.Get("default_action").([]interface{})), + Updates: diffWafWebAclRules(oldR, newR), + WebACLId: aws.String(d.Id()), + } + return conn.UpdateWebACL(req) + }) if err != nil { return fmt.Errorf("Error Updating WAF Regional ACL: %s", err) } @@ -143,11 +185,19 @@ func resourceAwsWafRegionalWebAclDelete(d *schema.ResourceData, meta interface{} conn := meta.(*AWSClient).wafregionalconn region := meta.(*AWSClient).region - action := expandDefaultActionWR(d.Get("default_action").([]interface{})) + // First, need to delete all rules rules := d.Get("rule").(*schema.Set).List() if len(rules) > 0 { - noRules := []interface{}{} - err := updateWebAclResourceWR(d.Id(), action, rules, noRules, conn, region) + wr := newWafRegionalRetryer(conn, region) + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { + req := &waf.UpdateWebACLInput{ + ChangeToken: token, + DefaultAction: 
expandWafAction(d.Get("default_action").([]interface{})), + Updates: diffWafWebAclRules(rules, []interface{}{}), + WebACLId: aws.String(d.Id()), + } + return conn.UpdateWebACL(req) + }) if err != nil { return fmt.Errorf("Error Removing WAF Regional ACL Rules: %s", err) } @@ -168,99 +218,3 @@ func resourceAwsWafRegionalWebAclDelete(d *schema.ResourceData, meta interface{} } return nil } - -func updateWebAclResourceWR(id string, a *waf.WafAction, oldR, newR []interface{}, conn *wafregional.WAFRegional, region string) error { - wr := newWafRegionalRetryer(conn, region) - _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { - req := &waf.UpdateWebACLInput{ - DefaultAction: a, - ChangeToken: token, - WebACLId: aws.String(id), - Updates: diffWafWebAclRules(oldR, newR), - } - return conn.UpdateWebACL(req) - }) - if err != nil { - return fmt.Errorf("Error Updating WAF Regional ACL: %s", err) - } - return nil -} - -func expandDefaultActionWR(d []interface{}) *waf.WafAction { - if d == nil || len(d) == 0 { - return nil - } - - if d[0] == nil { - log.Printf("[ERR] First element of Default Action is set to nil") - return nil - } - - dA := d[0].(map[string]interface{}) - - return &waf.WafAction{ - Type: aws.String(dA["type"].(string)), - } -} - -func flattenDefaultActionWR(n *waf.WafAction) []map[string]interface{} { - if n == nil { - return nil - } - - m := setMap(make(map[string]interface{})) - - m.SetString("type", n.Type) - return m.MapList() -} - -func flattenWafWebAclRules(ts []*waf.ActivatedRule) []interface{} { - out := make([]interface{}, len(ts), len(ts)) - for i, r := range ts { - actionMap := map[string]interface{}{ - "type": *r.Action.Type, - } - m := make(map[string]interface{}) - m["action"] = []interface{}{actionMap} - m["priority"] = *r.Priority - m["rule_id"] = *r.RuleId - out[i] = m - } - return out -} - -func expandWafWebAclUpdate(updateAction string, aclRule map[string]interface{}) *waf.WebACLUpdate { - ruleAction := aclRule["action"].([]interface{})[0].(map[string]interface{}) - rule := &waf.ActivatedRule{ - Action: &waf.WafAction{Type: aws.String(ruleAction["type"].(string))}, - Priority: aws.Int64(int64(aclRule["priority"].(int))), - RuleId: aws.String(aclRule["rule_id"].(string)), - } - - update := &waf.WebACLUpdate{ - Action: aws.String(updateAction), - ActivatedRule: rule, - } - - return update -} - -func diffWafWebAclRules(oldR, newR []interface{}) []*waf.WebACLUpdate { - updates := make([]*waf.WebACLUpdate, 0) - - for _, or := range oldR { - aclRule := or.(map[string]interface{}) - - if idx, contains := sliceContainsMap(newR, aclRule); contains { - newR = append(newR[:idx], newR[idx+1:]...) 
- continue - } - updates = append(updates, expandWafWebAclUpdate(waf.ChangeActionDelete, aclRule)) - } - - for _, nr := range newR { - aclRule := nr.(map[string]interface{}) - updates = append(updates, expandWafWebAclUpdate(waf.ChangeActionInsert, aclRule)) - } - return updates -} diff --git a/aws/resource_aws_wafregional_web_acl_association.go b/aws/resource_aws_wafregional_web_acl_association.go new file mode 100644 index 00000000000..eb3a9128a22 --- /dev/null +++ b/aws/resource_aws_wafregional_web_acl_association.go @@ -0,0 +1,124 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsWafRegionalWebAclAssociation() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsWafRegionalWebAclAssociationCreate, + Read: resourceAwsWafRegionalWebAclAssociationRead, + Delete: resourceAwsWafRegionalWebAclAssociationDelete, + + Schema: map[string]*schema.Schema{ + "web_acl_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "resource_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsWafRegionalWebAclAssociationCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + + log.Printf( + "[INFO] Creating WAF Regional Web ACL association: %s => %s", + d.Get("web_acl_id").(string), + d.Get("resource_arn").(string)) + + params := &wafregional.AssociateWebACLInput{ + WebACLId: aws.String(d.Get("web_acl_id").(string)), + ResourceArn: aws.String(d.Get("resource_arn").(string)), + } + + // create association and wait on retryable error + // no response body + var err error + err = resource.Retry(2*time.Minute, func() *resource.RetryError { + _, err = conn.AssociateWebACL(params) + if err != nil { + if isAWSErr(err, wafregional.ErrCodeWAFUnavailableEntityException, "") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if err != nil { + return err + } + + // Store association id + d.SetId(fmt.Sprintf("%s:%s", *params.WebACLId, *params.ResourceArn)) + + return nil +} + +func resourceAwsWafRegionalWebAclAssociationRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + + webAclId, resourceArn := resourceAwsWafRegionalWebAclAssociationParseId(d.Id()) + + // List all resources for Web ACL and see if we get a match + params := &wafregional.ListResourcesForWebACLInput{ + WebACLId: aws.String(webAclId), + } + + resp, err := conn.ListResourcesForWebACL(params) + if err != nil { + return err + } + + // Find match + found := false + for _, listResourceArn := range resp.ResourceArns { + if resourceArn == *listResourceArn { + found = true + break + } + } + if !found { + log.Printf("[WARN] WAF Regional Web ACL association (%s) not found, removing from state", d.Id()) + d.SetId("") + } + + return nil +} + +func resourceAwsWafRegionalWebAclAssociationDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).wafregionalconn + + _, resourceArn := resourceAwsWafRegionalWebAclAssociationParseId(d.Id()) + + log.Printf("[INFO] Deleting WAF Regional Web ACL association: %s", resourceArn) + + params := &wafregional.DisassociateWebACLInput{ + ResourceArn: aws.String(resourceArn), + } + + // If action successful HTTP 200 
response with an empty body + _, err := conn.DisassociateWebACL(params) + return err +} + +func resourceAwsWafRegionalWebAclAssociationParseId(id string) (webAclId, resourceArn string) { + parts := strings.SplitN(id, ":", 2) + webAclId = parts[0] + resourceArn = parts[1] + return +} diff --git a/aws/resource_aws_wafregional_web_acl_association_test.go b/aws/resource_aws_wafregional_web_acl_association_test.go new file mode 100644 index 00000000000..5d47bce6490 --- /dev/null +++ b/aws/resource_aws_wafregional_web_acl_association_test.go @@ -0,0 +1,177 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/wafregional" +) + +func TestAccAWSWafRegionalWebAclAssociation_basic(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckWafRegionalWebAclAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccCheckWafRegionalWebAclAssociationConfig_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckWafRegionalWebAclAssociationExists("aws_wafregional_web_acl_association.foo"), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalWebAclAssociation_multipleAssociations(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckWafRegionalWebAclAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccCheckWafRegionalWebAclAssociationConfig_multipleAssociations, + Check: resource.ComposeTestCheckFunc( + testAccCheckWafRegionalWebAclAssociationExists("aws_wafregional_web_acl_association.foo"), + testAccCheckWafRegionalWebAclAssociationExists("aws_wafregional_web_acl_association.bar"), + ), + }, + }, + }) +} + +func testAccCheckWafRegionalWebAclAssociationDestroy(s *terraform.State) error { + return testAccCheckWafRegionalWebAclAssociationDestroyWithProvider(s, testAccProvider) +} + +func testAccCheckWafRegionalWebAclAssociationDestroyWithProvider(s *terraform.State, provider *schema.Provider) error { + conn := provider.Meta().(*AWSClient).wafregionalconn + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_wafregional_web_acl_association" { + continue + } + + webAclId, resourceArn := resourceAwsWafRegionalWebAclAssociationParseId(rs.Primary.ID) + + resp, err := conn.ListResourcesForWebACL(&wafregional.ListResourcesForWebACLInput{WebACLId: aws.String(webAclId)}) + if err != nil { + found := false + for _, listResourceArn := range resp.ResourceArns { + if resourceArn == *listResourceArn { + found = true + break + } + } + if found { + return fmt.Errorf("WebACL: %v is still associated to resource: %v", webAclId, resourceArn) + } + } + } + return nil +} + +func testAccCheckWafRegionalWebAclAssociationExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + return testAccCheckWafRegionalWebAclAssociationExistsWithProvider(s, n, testAccProvider) + } +} + +func testAccCheckWafRegionalWebAclAssociationExistsWithProvider(s *terraform.State, n string, provider *schema.Provider) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No WebACL association ID is set") + } + + webAclId, 
resourceArn := resourceAwsWafRegionalWebAclAssociationParseId(rs.Primary.ID) + + conn := provider.Meta().(*AWSClient).wafregionalconn + resp, err := conn.ListResourcesForWebACL(&wafregional.ListResourcesForWebACLInput{WebACLId: aws.String(webAclId)}) + if err != nil { + return fmt.Errorf("List Web ACL err: %v", err) + } + + found := false + for _, listResourceArn := range resp.ResourceArns { + if resourceArn == *listResourceArn { + found = true + break + } + } + + if !found { + return fmt.Errorf("Web ACL association not found") + } + + return nil +} + +const testAccCheckWafRegionalWebAclAssociationConfig_basic = ` +resource "aws_wafregional_rule" "foo" { + name = "foo" + metric_name = "foo" +} + +resource "aws_wafregional_web_acl" "foo" { + name = "foo" + metric_name = "foo" + default_action { + type = "ALLOW" + } + rule { + action { + type = "COUNT" + } + priority = 100 + rule_id = "${aws_wafregional_rule.foo.id}" + } +} + +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +data "aws_availability_zones" "available" {} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "10.1.1.0/24" + availability_zone = "${data.aws_availability_zones.available.names[0]}" +} + +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "10.1.2.0/24" + availability_zone = "${data.aws_availability_zones.available.names[1]}" +} + +resource "aws_alb" "foo" { + internal = true + subnets = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] +} + +resource "aws_wafregional_web_acl_association" "foo" { + resource_arn = "${aws_alb.foo.arn}" + web_acl_id = "${aws_wafregional_web_acl.foo.id}" +} +` + +const testAccCheckWafRegionalWebAclAssociationConfig_multipleAssociations = testAccCheckWafRegionalWebAclAssociationConfig_basic + ` +resource "aws_alb" "bar" { + internal = true + subnets = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] +} + +resource "aws_wafregional_web_acl_association" "bar" { + resource_arn = "${aws_alb.bar.arn}" + web_acl_id = "${aws_wafregional_web_acl.foo.id}" +} +` diff --git a/aws/resource_aws_wafregional_web_acl_test.go b/aws/resource_aws_wafregional_web_acl_test.go index d57d162e1bc..ec7457e3cac 100644 --- a/aws/resource_aws_wafregional_web_acl_test.go +++ b/aws/resource_aws_wafregional_web_acl_test.go @@ -18,12 +18,12 @@ func TestAccAWSWafRegionalWebAcl_basic(t *testing.T) { var v waf.WebACL wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalWebAclDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafRegionalWebAclConfig(wafAclName), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRegionalWebAclExists("aws_wafregional_web_acl.waf_acl", &v), @@ -43,12 +43,70 @@ func TestAccAWSWafRegionalWebAcl_basic(t *testing.T) { }) } +func TestAccAWSWafRegionalWebAcl_createRateBased(t *testing.T) { + var v waf.WebACL + wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalWebAclDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalWebAclConfigRateBased(wafAclName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalWebAclExists("aws_wafregional_web_acl.waf_acl", &v), + resource.TestCheckResourceAttr( + 
"aws_wafregional_web_acl.waf_acl", "default_action.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "default_action.0.type", "ALLOW"), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "name", wafAclName), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "rule.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "metric_name", wafAclName), + ), + }, + }, + }) +} + +func TestAccAWSWafRegionalWebAcl_createGroup(t *testing.T) { + var v waf.WebACL + wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafRegionalWebAclDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafRegionalWebAclConfigGroup(wafAclName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSWafRegionalWebAclExists("aws_wafregional_web_acl.waf_acl", &v), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "default_action.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "default_action.0.type", "ALLOW"), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "name", wafAclName), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "rule.#", "1"), + resource.TestCheckResourceAttr( + "aws_wafregional_web_acl.waf_acl", "metric_name", wafAclName), + ), + }, + }, + }) +} + func TestAccAWSWafRegionalWebAcl_changeNameForceNew(t *testing.T) { var before, after waf.WebACL wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) wafAclNewName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalWebAclDestroy, @@ -94,7 +152,7 @@ func TestAccAWSWafRegionalWebAcl_changeDefaultAction(t *testing.T) { wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) wafAclNewName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalWebAclDestroy, @@ -139,7 +197,7 @@ func TestAccAWSWafRegionalWebAcl_disappears(t *testing.T) { var v waf.WebACL wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalWebAclDestroy, @@ -160,7 +218,7 @@ func TestAccAWSWafRegionalWebAcl_noRules(t *testing.T) { var v waf.WebACL wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalWebAclDestroy, @@ -189,7 +247,7 @@ func TestAccAWSWafRegionalWebAcl_changeRules(t *testing.T) { var idx int wafAclName := fmt.Sprintf("wafacl%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalWebAclDestroy, @@ -207,7 +265,7 @@ func TestAccAWSWafRegionalWebAcl_changeRules(t *testing.T) { 
"aws_wafregional_web_acl.waf_acl", "name", wafAclName), resource.TestCheckResourceAttr( "aws_wafregional_web_acl.waf_acl", "rule.#", "1"), - computeWafRegionalWebAclRuleIndex(&r.RuleId, 1, "BLOCK", &idx), + computeWafRegionalWebAclRuleIndex(&r.RuleId, 1, "REGULAR", "BLOCK", &idx), testCheckResourceAttrWithIndexesAddr("aws_wafregional_web_acl.waf_acl", "rule.%d.priority", &idx, "1"), ), }, @@ -230,16 +288,18 @@ func TestAccAWSWafRegionalWebAcl_changeRules(t *testing.T) { } // Calculates the index which isn't static because ruleId is generated as part of the test -func computeWafRegionalWebAclRuleIndex(ruleId **string, priority int, actionType string, idx *int) resource.TestCheckFunc { +func computeWafRegionalWebAclRuleIndex(ruleId **string, priority int, ruleType string, actionType string, idx *int) resource.TestCheckFunc { return func(s *terraform.State) error { ruleResource := resourceAwsWafRegionalWebAcl().Schema["rule"].Elem.(*schema.Resource) actionMap := map[string]interface{}{ "type": actionType, } m := map[string]interface{}{ - "rule_id": **ruleId, - "priority": priority, - "action": []interface{}{actionMap}, + "rule_id": **ruleId, + "type": ruleType, + "priority": priority, + "action": []interface{}{actionMap}, + "override_action": []interface{}{}, } f := schema.HashResource(ruleResource) @@ -374,6 +434,59 @@ resource "aws_wafregional_web_acl" "waf_acl" { }`, name, name, name, name) } +func testAccAWSWafRegionalWebAclConfigRateBased(name string) string { + return fmt.Sprintf(` + +resource "aws_wafregional_rate_based_rule" "wafrule" { + name = "%s" + metric_name = "%s" + + rate_key = "IP" + rate_limit = 2000 +} + +resource "aws_wafregional_web_acl" "waf_acl" { + name = "%s" + metric_name = "%s" + default_action { + type = "ALLOW" + } + rule { + action { + type = "BLOCK" + } + priority = 1 + type = "RATE_BASED" + rule_id = "${aws_wafregional_rate_based_rule.wafrule.id}" + } +}`, name, name, name, name) +} + +func testAccAWSWafRegionalWebAclConfigGroup(name string) string { + return fmt.Sprintf(` + +resource "aws_wafregional_rule_group" "wafrulegroup" { + name = "%s" + metric_name = "%s" +} + +resource "aws_wafregional_web_acl" "waf_acl" { + name = "%s" + metric_name = "%s" + default_action { + type = "ALLOW" + } + rule { + override_action { + type = "NONE" + } + priority = 1 + type = "GROUP" + rule_id = "${aws_wafregional_rule_group.wafrulegroup.id}" # todo + } +}`, name, name, name, name) +} + func testAccAWSWafRegionalWebAclConfig_changeName(name string) string { return fmt.Sprintf(` resource "aws_wafregional_rule" "wafrule" { diff --git a/aws/resource_aws_wafregional_xss_match_set.go b/aws/resource_aws_wafregional_xss_match_set.go index eed3d928b6c..dbbc8f942a7 100644 --- a/aws/resource_aws_wafregional_xss_match_set.go +++ b/aws/resource_aws_wafregional_xss_match_set.go @@ -18,12 +18,12 @@ func resourceAwsWafRegionalXssMatchSet() *schema.Resource { Delete: resourceAwsWafRegionalXssMatchSetDelete, Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, ForceNew: true, }, - "xss_match_tuple": &schema.Schema{ + "xss_match_tuple": { Type: schema.TypeSet, Optional: true, Elem: &schema.Resource{ @@ -45,7 +45,7 @@ func resourceAwsWafRegionalXssMatchSet() *schema.Resource { }, }, }, - "text_transformation": &schema.Schema{ + "text_transformation": { Type: schema.TypeString, Required: true, }, diff --git a/aws/resource_aws_wafregional_xss_match_set_test.go b/aws/resource_aws_wafregional_xss_match_set_test.go index 
cddee2eac61..a20953337c4 100644 --- a/aws/resource_aws_wafregional_xss_match_set_test.go +++ b/aws/resource_aws_wafregional_xss_match_set_test.go @@ -7,7 +7,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/waf" "github.com/aws/aws-sdk-go/service/wafregional" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/acctest" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/terraform" @@ -17,12 +16,12 @@ func TestAccAWSWafRegionalXssMatchSet_basic(t *testing.T) { var v waf.XssMatchSet xssMatchSet := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalXssMatchSetDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccAWSWafRegionalXssMatchSetConfig(xssMatchSet), Check: resource.ComposeTestCheckFunc( testAccCheckAWSWafRegionalXssMatchSetExists("aws_wafregional_xss_match_set.xss_match_set", &v), @@ -57,7 +56,7 @@ func TestAccAWSWafRegionalXssMatchSet_changeNameForceNew(t *testing.T) { xssMatchSet := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) xssMatchSetNewName := fmt.Sprintf("xssMatchSetNewName-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalXssMatchSetDestroy, @@ -90,7 +89,7 @@ func TestAccAWSWafRegionalXssMatchSet_disappears(t *testing.T) { var v waf.XssMatchSet xssMatchSet := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalXssMatchSetDestroy, @@ -111,7 +110,7 @@ func TestAccAWSWafRegionalXssMatchSet_changeTuples(t *testing.T) { var before, after waf.XssMatchSet setName := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalXssMatchSetDestroy, @@ -176,7 +175,7 @@ func TestAccAWSWafRegionalXssMatchSet_noTuples(t *testing.T) { var ipset waf.XssMatchSet setName := fmt.Sprintf("xssMatchSet-%s", acctest.RandString(5)) - resource.Test(t, resource.TestCase{ + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSWafRegionalXssMatchSetDestroy, @@ -220,7 +219,7 @@ func testAccCheckAWSWafRegionalXssMatchSetDisappears(v *waf.XssMatchSet) resourc return conn.UpdateXssMatchSet(req) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error updating regional WAF XSS Match Set: {{err}}", err) + return fmt.Errorf("Error updating regional WAF XSS Match Set: %s", err) } _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { @@ -231,7 +230,7 @@ func testAccCheckAWSWafRegionalXssMatchSetDisappears(v *waf.XssMatchSet) resourc return conn.DeleteXssMatchSet(opts) }) if err != nil { - return errwrap.Wrapf("[ERROR] Error deleting regional WAF XSS Match Set: {{err}}", err) + return fmt.Errorf("Error deleting regional WAF XSS Match Set: %s", err) } return nil } diff --git a/aws/s3_tags.go b/aws/s3_tags.go 
index 8de14fa3f56..2712fb155cc 100644 --- a/aws/s3_tags.go +++ b/aws/s3_tags.go @@ -22,7 +22,7 @@ func setTagsS3(conn *s3.S3, d *schema.ResourceData) error { // Set tags if len(remove) > 0 { log.Printf("[DEBUG] Removing tags: %#v", remove) - _, err := retryOnAwsCodes([]string{"NoSuchBucket", "OperationAborted"}, func() (interface{}, error) { + _, err := RetryOnAwsCodes([]string{"NoSuchBucket", "OperationAborted"}, func() (interface{}, error) { return conn.DeleteBucketTagging(&s3.DeleteBucketTaggingInput{ Bucket: aws.String(d.Get("bucket").(string)), }) @@ -40,7 +40,7 @@ func setTagsS3(conn *s3.S3, d *schema.ResourceData) error { }, } - _, err := retryOnAwsCodes([]string{"NoSuchBucket", "OperationAborted"}, func() (interface{}, error) { + _, err := RetryOnAwsCodes([]string{"NoSuchBucket", "OperationAborted"}, func() (interface{}, error) { return conn.PutBucketTagging(req) }) if err != nil { diff --git a/aws/sort.go b/aws/sort.go deleted file mode 100644 index 0d90458ebf4..00000000000 --- a/aws/sort.go +++ /dev/null @@ -1,53 +0,0 @@ -package aws - -import ( - "sort" - "time" - - "github.com/aws/aws-sdk-go/service/ec2" -) - -type imageSort []*ec2.Image -type snapshotSort []*ec2.Snapshot - -func (a imageSort) Len() int { - return len(a) -} - -func (a imageSort) Swap(i, j int) { - a[i], a[j] = a[j], a[i] -} - -func (a imageSort) Less(i, j int) bool { - itime, _ := time.Parse(time.RFC3339, *a[i].CreationDate) - jtime, _ := time.Parse(time.RFC3339, *a[j].CreationDate) - return itime.Unix() < jtime.Unix() -} - -// Sort images by creation date, in descending order. -func sortImages(images []*ec2.Image) []*ec2.Image { - sortedImages := images - sort.Sort(sort.Reverse(imageSort(sortedImages))) - return sortedImages -} - -func (a snapshotSort) Len() int { - return len(a) -} - -func (a snapshotSort) Swap(i, j int) { - a[i], a[j] = a[j], a[i] -} - -func (a snapshotSort) Less(i, j int) bool { - itime := *a[i].StartTime - jtime := *a[j].StartTime - return itime.Unix() < jtime.Unix() -} - -// Sort snapshots by creation date, in descending order. 
-func sortSnapshots(snapshots []*ec2.Snapshot) []*ec2.Snapshot { - sortedSnapshots := snapshots - sort.Sort(sort.Reverse(snapshotSort(sortedSnapshots))) - return sortedSnapshots -} diff --git a/aws/structure.go b/aws/structure.go index c46d1520a74..e322158966e 100644 --- a/aws/structure.go +++ b/aws/structure.go @@ -20,6 +20,7 @@ import ( "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" "github.com/aws/aws-sdk-go/service/configservice" "github.com/aws/aws-sdk-go/service/dax" + "github.com/aws/aws-sdk-go/service/directconnect" "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/aws/aws-sdk-go/service/dynamodb" "github.com/aws/aws-sdk-go/service/ec2" @@ -31,7 +32,9 @@ import ( "github.com/aws/aws-sdk-go/service/iot" "github.com/aws/aws-sdk-go/service/kinesis" "github.com/aws/aws-sdk-go/service/lambda" + "github.com/aws/aws-sdk-go/service/macie" "github.com/aws/aws-sdk-go/service/mq" + "github.com/aws/aws-sdk-go/service/neptune" "github.com/aws/aws-sdk-go/service/rds" "github.com/aws/aws-sdk-go/service/redshift" "github.com/aws/aws-sdk-go/service/route53" @@ -82,7 +85,7 @@ func expandListeners(configured []interface{}) ([]*elb.Listener, error) { if valid { listeners = append(listeners, l) } else { - return nil, fmt.Errorf("[ERR] ELB Listener: ssl_certificate_id may be set only when protocol is 'https' or 'ssl'") + return nil, fmt.Errorf("ELB Listener: ssl_certificate_id may be set only when protocol is 'https' or 'ssl'") } } @@ -110,6 +113,35 @@ func expandEcsVolumes(configured []interface{}) ([]*ecs.Volume, error) { } } + configList, ok := data["docker_volume_configuration"].([]interface{}) + if ok && len(configList) > 0 { + config := configList[0].(map[string]interface{}) + l.DockerVolumeConfiguration = &ecs.DockerVolumeConfiguration{} + + if v, ok := config["scope"].(string); ok && v != "" { + l.DockerVolumeConfiguration.Scope = aws.String(v) + } + + if v, ok := config["autoprovision"]; ok && v != "" { + scope := l.DockerVolumeConfiguration.Scope + if scope == nil || *scope != ecs.ScopeTask || v.(bool) { + l.DockerVolumeConfiguration.Autoprovision = aws.Bool(v.(bool)) + } + } + + if v, ok := config["driver"].(string); ok && v != "" { + l.DockerVolumeConfiguration.Driver = aws.String(v) + } + + if v, ok := config["driver_opts"].(map[string]interface{}); ok && len(v) > 0 { + l.DockerVolumeConfiguration.DriverOpts = stringMapToPointers(v) + } + + if v, ok := config["labels"].(map[string]interface{}); ok && len(v) > 0 { + l.DockerVolumeConfiguration.Labels = stringMapToPointers(v) + } + } + volumes = append(volumes, l) } @@ -351,6 +383,10 @@ func expandOptionConfiguration(configured []interface{}) ([]*rds.OptionConfigura o.OptionSettings = expandOptionSetting(raw.(*schema.Set).List()) } + if raw, ok := data["version"]; ok && raw.(string) != "" { + o.OptionVersion = aws.String(raw.(string)) + } + option = append(option, o) } @@ -395,6 +431,28 @@ func expandElastiCacheParameters(configured []interface{}) ([]*elasticache.Param return parameters, nil } +// Takes the result of flatmap.Expand for an array of parameters and +// returns Parameter API compatible objects +func expandNeptuneParameters(configured []interface{}) ([]*neptune.Parameter, error) { + parameters := make([]*neptune.Parameter, 0, len(configured)) + + // Loop over our configured parameters and create + // an array of aws-sdk-go compatible objects + for _, pRaw := range configured { 
+ data := pRaw.(map[string]interface{}) + + p := &neptune.Parameter{ + ApplyMethod: aws.String(data["apply_method"].(string)), + ParameterName: aws.String(data["name"].(string)), + ParameterValue: aws.String(data["value"].(string)), + } + + parameters = append(parameters, p) + } + + return parameters, nil +} + // Flattens an access log into something that flatmap.Flatten() can handle func flattenAccessLog(l *elb.AccessLog) []map[string]interface{} { result := make([]map[string]interface{}, 0, 1) @@ -592,15 +650,47 @@ func flattenEcsVolumes(list []*ecs.Volume) []map[string]interface{} { "name": *volume.Name, } - if volume.Host.SourcePath != nil { + if volume.Host != nil && volume.Host.SourcePath != nil { l["host_path"] = *volume.Host.SourcePath } + if volume.DockerVolumeConfiguration != nil { + l["docker_volume_configuration"] = flattenDockerVolumeConfiguration(volume.DockerVolumeConfiguration) + } + result = append(result, l) } return result } +func flattenDockerVolumeConfiguration(config *ecs.DockerVolumeConfiguration) []interface{} { + var items []interface{} + m := make(map[string]interface{}) + + if config.Scope != nil { + m["scope"] = aws.StringValue(config.Scope) + } + + if config.Autoprovision != nil { + m["autoprovision"] = aws.BoolValue(config.Autoprovision) + } + + if config.Driver != nil { + m["driver"] = aws.StringValue(config.Driver) + } + + if config.DriverOpts != nil { + m["driver_opts"] = pointersMapToStringList(config.DriverOpts) + } + + if config.Labels != nil { + m["labels"] = pointersMapToStringList(config.Labels) + } + + items = append(items, m) + return items +} + // Flattens an array of ECS LoadBalancers into a []map[string]interface{} func flattenEcsLoadBalancers(list []*ecs.LoadBalancer) []map[string]interface{} { result := make([]map[string]interface{}, 0, len(list)) @@ -645,6 +735,10 @@ func flattenOptions(list []*rds.Option) []map[string]interface{} { if i.Port != nil { r["port"] = int(*i.Port) } + r["version"] = "" + if i.OptionVersion != nil { + r["version"] = strings.ToLower(*i.OptionVersion) + } if i.VpcSecurityGroupMemberships != nil { vpcs := make([]string, 0, len(i.VpcSecurityGroupMemberships)) for _, vpc := range i.VpcSecurityGroupMemberships { @@ -732,6 +826,21 @@ func flattenElastiCacheParameters(list []*elasticache.Parameter) []map[string]in return result } +// Flattens an array of Parameters into a []map[string]interface{} +func flattenNeptuneParameters(list []*neptune.Parameter) []map[string]interface{} { + result := make([]map[string]interface{}, 0, len(list)) + for _, i := range list { + if i.ParameterValue != nil { + result = append(result, map[string]interface{}{ + "apply_method": aws.StringValue(i.ApplyMethod), + "name": aws.StringValue(i.ParameterName), + "value": aws.StringValue(i.ParameterValue), + }) + } + } + return result +} + // Takes the result of flatmap.Expand for an array of strings // and returns a []*string func expandStringList(configured []interface{}) []*string { @@ -745,6 +854,15 @@ func expandStringList(configured []interface{}) []*string { return vs } +// Expands a map of string to interface to a map of string to *float +func expandFloat64Map(m map[string]interface{}) map[string]*float64 { + float64Map := make(map[string]*float64, len(m)) + for k, v := range m { + float64Map[k] = aws.Float64(v.(float64)) + } + return float64Map +} + // Takes the result of schema.Set of strings and returns a []*string func expandStringSet(configured *schema.Set) []*string { return expandStringList(configured.List()) @@ -807,6 +925,14 @@ func 
flattenAttachment(a *ec2.NetworkInterfaceAttachment) map[string]interface{} return att } +func flattenEc2AttributeValues(l []*ec2.AttributeValue) []string { + values := make([]string, 0, len(l)) + for _, v := range l { + values = append(values, aws.StringValue(v.Value)) + } + return values +} + func flattenEc2NetworkInterfaceAssociation(a *ec2.NetworkInterfaceAssociation) []interface{} { att := make(map[string]interface{}) if a.AllocationId != nil { @@ -1023,20 +1149,79 @@ func flattenESClusterConfig(c *elasticsearch.ElasticsearchClusterConfig) []map[s return []map[string]interface{}{m} } +func expandESCognitoOptions(c []interface{}) *elasticsearch.CognitoOptions { + options := &elasticsearch.CognitoOptions{ + Enabled: aws.Bool(false), + } + if len(c) < 1 { + return options + } + + m := c[0].(map[string]interface{}) + + if cognitoEnabled, ok := m["enabled"]; ok { + options.Enabled = aws.Bool(cognitoEnabled.(bool)) + + if cognitoEnabled.(bool) { + + if v, ok := m["user_pool_id"]; ok && v.(string) != "" { + options.UserPoolId = aws.String(v.(string)) + } + if v, ok := m["identity_pool_id"]; ok && v.(string) != "" { + options.IdentityPoolId = aws.String(v.(string)) + } + if v, ok := m["role_arn"]; ok && v.(string) != "" { + options.RoleArn = aws.String(v.(string)) + } + } + } + + return options +} + +func flattenESCognitoOptions(c *elasticsearch.CognitoOptions) []map[string]interface{} { + m := map[string]interface{}{} + + m["enabled"] = aws.BoolValue(c.Enabled) + + if aws.BoolValue(c.Enabled) { + m["identity_pool_id"] = aws.StringValue(c.IdentityPoolId) + m["user_pool_id"] = aws.StringValue(c.UserPoolId) + m["role_arn"] = aws.StringValue(c.RoleArn) + } + + return []map[string]interface{}{m} +} + +func flattenESSnapshotOptions(snapshotOptions *elasticsearch.SnapshotOptions) []map[string]interface{} { + if snapshotOptions == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "automated_snapshot_start_hour": int(aws.Int64Value(snapshotOptions.AutomatedSnapshotStartHour)), + } + + return []map[string]interface{}{m} +} + func flattenESEBSOptions(o *elasticsearch.EBSOptions) []map[string]interface{} { m := map[string]interface{}{} if o.EBSEnabled != nil { m["ebs_enabled"] = *o.EBSEnabled } - if o.Iops != nil { - m["iops"] = *o.Iops - } - if o.VolumeSize != nil { - m["volume_size"] = *o.VolumeSize - } - if o.VolumeType != nil { - m["volume_type"] = *o.VolumeType + + if aws.BoolValue(o.EBSEnabled) { + if o.Iops != nil { + m["iops"] = *o.Iops + } + if o.VolumeSize != nil { + m["volume_size"] = *o.VolumeSize + } + if o.VolumeType != nil { + m["volume_type"] = *o.VolumeType + } } return []map[string]interface{}{m} @@ -1045,17 +1230,20 @@ func flattenESEBSOptions(o *elasticsearch.EBSOptions) []map[string]interface{} { func expandESEBSOptions(m map[string]interface{}) *elasticsearch.EBSOptions { options := elasticsearch.EBSOptions{} - if v, ok := m["ebs_enabled"]; ok { - options.EBSEnabled = aws.Bool(v.(bool)) - } - if v, ok := m["iops"]; ok && v.(int) > 0 { - options.Iops = aws.Int64(int64(v.(int))) - } - if v, ok := m["volume_size"]; ok && v.(int) > 0 { - options.VolumeSize = aws.Int64(int64(v.(int))) - } - if v, ok := m["volume_type"]; ok && v.(string) != "" { - options.VolumeType = aws.String(v.(string)) + if ebsEnabled, ok := m["ebs_enabled"]; ok { + options.EBSEnabled = aws.Bool(ebsEnabled.(bool)) + + if ebsEnabled.(bool) { + if v, ok := m["iops"]; ok && v.(int) > 0 { + options.Iops = aws.Int64(int64(v.(int))) + } + if v, ok := m["volume_size"]; ok && v.(int) > 0 { + 
options.VolumeSize = aws.Int64(int64(v.(int))) + } + if v, ok := m["volume_type"]; ok && v.(string) != "" { + options.VolumeType = aws.String(v.(string)) + } + } } return &options @@ -1160,7 +1348,7 @@ func flattenConfigRecordingGroup(g *configservice.RecordingGroup) []map[string]i } func flattenConfigSnapshotDeliveryProperties(p *configservice.ConfigSnapshotDeliveryProperties) []map[string]interface{} { - m := make(map[string]interface{}, 0) + m := make(map[string]interface{}) if p.DeliveryFrequency != nil { m["delivery_frequency"] = *p.DeliveryFrequency @@ -1187,7 +1375,7 @@ func stringMapToPointers(m map[string]interface{}) map[string]*string { func flattenDSVpcSettings( s *directoryservice.DirectoryVpcSettingsDescription) []map[string]interface{} { - settings := make(map[string]interface{}, 0) + settings := make(map[string]interface{}) if s == nil { return nil @@ -1218,7 +1406,7 @@ func flattenLambdaEnvironment(lambdaEnv *lambda.EnvironmentResponse) []interface } func flattenLambdaVpcConfigResponse(s *lambda.VpcConfigResponse) []map[string]interface{} { - settings := make(map[string]interface{}, 0) + settings := make(map[string]interface{}) if s == nil { return nil @@ -1241,6 +1429,18 @@ func flattenLambdaVpcConfigResponse(s *lambda.VpcConfigResponse) []map[string]in return []map[string]interface{}{settings} } +func flattenLambdaAliasRoutingConfiguration(arc *lambda.AliasRoutingConfiguration) []interface{} { + if arc == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "additional_version_weights": aws.Float64ValueMap(arc.AdditionalVersionWeights), + } + + return []interface{}{m} +} + func flattenDSConnectSettings( customerDnsIps []*string, s *directoryservice.DirectoryConnectSettingsDescription) []map[string]interface{} { @@ -1248,7 +1448,7 @@ func flattenDSConnectSettings( return nil } - settings := make(map[string]interface{}, 0) + settings := make(map[string]interface{}) settings["customer_dns_ips"] = schema.NewSet(schema.HashString, flattenStringList(customerDnsIps)) settings["connect_ips"] = schema.NewSet(schema.HashString, flattenStringList(s.ConnectIps)) @@ -1389,7 +1589,7 @@ func expandApiGatewayRequestResponseModelOperations(d *schema.ResourceData, key oldModelMap := oldModels.(map[string]interface{}) newModelMap := newModels.(map[string]interface{}) - for k, _ := range oldModelMap { + for k := range oldModelMap { operation := apigateway.PatchOperation{ Op: aws.String("remove"), Path: aws.String(fmt.Sprintf("/%s/%s", prefix, strings.Replace(k, "/", "~1", -1))), @@ -1407,7 +1607,7 @@ func expandApiGatewayRequestResponseModelOperations(d *schema.ResourceData, key for nK, nV := range newModelMap { exists := false - for k, _ := range oldModelMap { + for k := range oldModelMap { if k == nK { exists = true } @@ -1441,7 +1641,7 @@ func deprecatedExpandApiGatewayMethodParametersJSONOperations(d *schema.Resource return operations, err } - for k, _ := range oldParametersMap { + for k := range oldParametersMap { operation := apigateway.PatchOperation{ Op: aws.String("remove"), Path: aws.String(fmt.Sprintf("/%s/%s", prefix, k)), @@ -1459,7 +1659,7 @@ func deprecatedExpandApiGatewayMethodParametersJSONOperations(d *schema.Resource for nK, nV := range newParametersMap { exists := false - for k, _ := range oldParametersMap { + for k := range oldParametersMap { if k == nK { exists = true } @@ -1484,7 +1684,7 @@ func expandApiGatewayMethodParametersOperations(d *schema.ResourceData, key stri oldParametersMap := oldParameters.(map[string]interface{}) newParametersMap := 
newParameters.(map[string]interface{}) - for k, _ := range oldParametersMap { + for k := range oldParametersMap { operation := apigateway.PatchOperation{ Op: aws.String("remove"), Path: aws.String(fmt.Sprintf("/%s/%s", prefix, k)), @@ -1507,7 +1707,7 @@ func expandApiGatewayMethodParametersOperations(d *schema.ResourceData, key stri for nK, nV := range newParametersMap { exists := false - for k, _ := range oldParametersMap { + for k := range oldParametersMap { if k == nK { exists = true } @@ -1580,29 +1780,32 @@ func expandApiGatewayStageKeyOperations(d *schema.ResourceData) []*apigateway.Pa return operations } -func expandCloudWachLogMetricTransformations(m map[string]interface{}) ([]*cloudwatchlogs.MetricTransformation, error) { +func expandCloudWatchLogMetricTransformations(m map[string]interface{}) []*cloudwatchlogs.MetricTransformation { transformation := cloudwatchlogs.MetricTransformation{ MetricName: aws.String(m["name"].(string)), MetricNamespace: aws.String(m["namespace"].(string)), MetricValue: aws.String(m["value"].(string)), } - if m["default_value"] != "" { - transformation.DefaultValue = aws.Float64(m["default_value"].(float64)) + if m["default_value"].(string) != "" { + value, _ := strconv.ParseFloat(m["default_value"].(string), 64) + transformation.DefaultValue = aws.Float64(value) } - return []*cloudwatchlogs.MetricTransformation{&transformation}, nil + return []*cloudwatchlogs.MetricTransformation{&transformation} } -func flattenCloudWachLogMetricTransformations(ts []*cloudwatchlogs.MetricTransformation) []interface{} { +func flattenCloudWatchLogMetricTransformations(ts []*cloudwatchlogs.MetricTransformation) []interface{} { mts := make([]interface{}, 0) - m := make(map[string]interface{}, 0) + m := make(map[string]interface{}) m["name"] = *ts[0].MetricName m["namespace"] = *ts[0].MetricNamespace m["value"] = *ts[0].MetricValue - if ts[0].DefaultValue != nil { + if ts[0].DefaultValue == nil { + m["default_value"] = "" + } else { m["default_value"] = *ts[0].DefaultValue } @@ -1672,7 +1875,7 @@ func flattenBeanstalkTrigger(list []*elasticbeanstalk.Trigger) []string { } // There are several parts of the AWS API that will sort lists of strings, -// causing diffs inbetween resources that use lists. This avoids a bit of +// causing diffs between resources that use lists. This avoids a bit of // code duplication for pre-sorts that can be used for things like hash // functions, etc. func sortInterfaceSlice(in []interface{}) []interface{} { @@ -1692,7 +1895,8 @@ func sortInterfaceSlice(in []interface{}) []interface{} { } // This function sorts List A to look like a list found in the tf file. -func sortListBasedonTFFile(in []string, d *schema.ResourceData, listName string) ([]string, error) { +func sortListBasedonTFFile(in []string, d *schema.ResourceData) ([]string, error) { + listName := "layer_ids" if attributeCount, ok := d.Get(listName + ".#").(int); ok { for i := 0; i < attributeCount; i++ { currAttributeId := d.Get(listName + "." + strconv.Itoa(i)) @@ -1763,22 +1967,6 @@ func getStringPtr(m interface{}, key string) *string { return nil } -// getStringPtrList returns a []*string version of the map value. If the key -// isn't present, getNilStringList returns nil. 
-func getStringPtrList(m map[string]interface{}, key string) []*string { - if v, ok := m[key]; ok { - var stringList []*string - for _, i := range v.([]interface{}) { - s := i.(string) - stringList = append(stringList, &s) - } - - return stringList - } - - return nil -} - // a convenience wrapper type for the schema.Set map[string]interface{} // Set operations only alter the underlying map if the value is not nil type setMap map[string]interface{} @@ -1900,6 +2088,80 @@ func flattenPolicyAttributes(list []*elb.PolicyAttributeDescription) []interface return attributes } +func expandConfigAccountAggregationSources(configured []interface{}) []*configservice.AccountAggregationSource { + var results []*configservice.AccountAggregationSource + for _, item := range configured { + detail := item.(map[string]interface{}) + source := configservice.AccountAggregationSource{ + AllAwsRegions: aws.Bool(detail["all_regions"].(bool)), + } + + if v, ok := detail["account_ids"]; ok { + accountIDs := v.([]interface{}) + if len(accountIDs) > 0 { + source.AccountIds = expandStringList(accountIDs) + } + } + + if v, ok := detail["regions"]; ok { + regions := v.([]interface{}) + if len(regions) > 0 { + source.AwsRegions = expandStringList(regions) + } + } + + results = append(results, &source) + } + return results +} + +func expandConfigOrganizationAggregationSource(configured map[string]interface{}) *configservice.OrganizationAggregationSource { + source := configservice.OrganizationAggregationSource{ + AllAwsRegions: aws.Bool(configured["all_regions"].(bool)), + RoleArn: aws.String(configured["role_arn"].(string)), + } + + if v, ok := configured["regions"]; ok { + regions := v.([]interface{}) + if len(regions) > 0 { + source.AwsRegions = expandStringList(regions) + } + } + + return &source +} + +func flattenConfigAccountAggregationSources(sources []*configservice.AccountAggregationSource) []interface{} { + var result []interface{} + + if len(sources) == 0 { + return result + } + + source := sources[0] + m := make(map[string]interface{}) + m["account_ids"] = flattenStringList(source.AccountIds) + m["all_regions"] = aws.BoolValue(source.AllAwsRegions) + m["regions"] = flattenStringList(source.AwsRegions) + result = append(result, m) + return result +} + +func flattenConfigOrganizationAggregationSource(source *configservice.OrganizationAggregationSource) []interface{} { + var result []interface{} + + if source == nil { + return result + } + + m := make(map[string]interface{}) + m["all_regions"] = aws.BoolValue(source.AllAwsRegions) + m["regions"] = flattenStringList(source.AwsRegions) + m["role_arn"] = aws.StringValue(source.RoleArn) + result = append(result, m) + return result +} + func flattenConfigRuleSource(source *configservice.Source) []interface{} { var result []interface{} m := make(map[string]interface{}) @@ -1988,7 +2250,11 @@ func flattenConfigRuleScope(scope *configservice.Scope) []interface{} { return items } -func expandConfigRuleScope(configured map[string]interface{}) *configservice.Scope { +func expandConfigRuleScope(l []interface{}) *configservice.Scope { + if len(l) == 0 || l[0] == nil { + return nil + } + configured := l[0].(map[string]interface{}) scope := &configservice.Scope{} if v, ok := configured["compliance_resource_id"].(string); ok && v != "" { @@ -2023,11 +2289,8 @@ func checkYamlString(yamlString interface{}) (string, error) { s := yamlString.(string) err := yaml.Unmarshal([]byte(s), &y) - if err != nil { - return s, err - } - return s, nil + return s, err } func 
normalizeCloudFormationTemplate(templateString interface{}) (string, error) { @@ -2038,14 +2301,6 @@ func normalizeCloudFormationTemplate(templateString interface{}) (string, error) return checkYamlString(templateString) } -func flattenInspectorTags(cfTags []*cloudformation.Tag) map[string]string { - tags := make(map[string]string, len(cfTags)) - for _, t := range cfTags { - tags[*t.Key] = *t.Value - } - return tags -} - func flattenApiGatewayUsageApiStages(s []*apigateway.ApiStage) []map[string]interface{} { stages := make([]map[string]interface{}, 0) @@ -2067,7 +2322,7 @@ func flattenApiGatewayUsageApiStages(s []*apigateway.ApiStage) []map[string]inte } func flattenApiGatewayUsagePlanThrottling(s *apigateway.ThrottleSettings) []map[string]interface{} { - settings := make(map[string]interface{}, 0) + settings := make(map[string]interface{}) if s == nil { return nil @@ -2085,7 +2340,7 @@ func flattenApiGatewayUsagePlanThrottling(s *apigateway.ThrottleSettings) []map[ } func flattenApiGatewayUsagePlanQuota(s *apigateway.QuotaSettings) []map[string]interface{} { - settings := make(map[string]interface{}, 0) + settings := make(map[string]interface{}) if s == nil { return nil @@ -2111,15 +2366,6 @@ func buildApiGatewayInvokeURL(restApiId, region, stageName string) string { restApiId, region, stageName) } -func buildApiGatewayExecutionARN(restApiId, region, accountId string) (string, error) { - if accountId == "" { - return "", fmt.Errorf("Unable to build execution ARN for %s as account ID is missing", - restApiId) - } - return fmt.Sprintf("arn:aws:execute-api:%s:%s:%s", - region, accountId, restApiId), nil -} - func expandCognitoSupportedLoginProviders(config map[string]interface{}) map[string]*string { m := map[string]*string{} for k, v := range config { @@ -2192,7 +2438,7 @@ func flattenCognitoIdentityProviders(ips []*cognitoidentity.Provider) []map[stri } func flattenCognitoUserPoolEmailConfiguration(s *cognitoidentityprovider.EmailConfigurationType) []map[string]interface{} { - m := make(map[string]interface{}, 0) + m := make(map[string]interface{}) if s == nil { return nil @@ -2354,6 +2600,10 @@ func expandCognitoUserPoolLambdaConfig(config map[string]interface{}) *cognitoid configs.PreTokenGeneration = aws.String(v.(string)) } + if v, ok := config["user_migration"]; ok && v.(string) != "" { + configs.UserMigration = aws.String(v.(string)) + } + if v, ok := config["verify_auth_challenge_response"]; ok && v.(string) != "" { configs.VerifyAuthChallengeResponse = aws.String(v.(string)) } @@ -2400,6 +2650,10 @@ func flattenCognitoUserPoolLambdaConfig(s *cognitoidentityprovider.LambdaConfigT m["pre_token_generation"] = *s.PreTokenGeneration } + if s.UserMigration != nil { + m["user_migration"] = *s.UserMigration + } + if s.VerifyAuthChallengeResponse != nil { m["verify_auth_challenge_response"] = *s.VerifyAuthChallengeResponse } @@ -2540,8 +2794,9 @@ func flattenIoTRuleFirehoseActions(actions []*iot.Action) []map[string]interface result := make(map[string]interface{}) v := a.Firehose if v != nil { - result["role_arn"] = *v.RoleArn - result["delivery_stream_name"] = *v.DeliveryStreamName + result["role_arn"] = aws.StringValue(v.RoleArn) + result["delivery_stream_name"] = aws.StringValue(v.DeliveryStreamName) + result["separator"] = aws.StringValue(v.Separator) results = append(results, result) } @@ -2692,8 +2947,44 @@ func flattenCognitoUserPoolPasswordPolicy(s *cognitoidentityprovider.PasswordPol return []map[string]interface{}{} } +func expandCognitoResourceServerScope(inputs []interface{}) 
[]*cognitoidentityprovider.ResourceServerScopeType { + configs := make([]*cognitoidentityprovider.ResourceServerScopeType, len(inputs)) + for i, input := range inputs { + param := input.(map[string]interface{}) + config := &cognitoidentityprovider.ResourceServerScopeType{} + + if v, ok := param["scope_description"]; ok { + config.ScopeDescription = aws.String(v.(string)) + } + + if v, ok := param["scope_name"]; ok { + config.ScopeName = aws.String(v.(string)) + } + + configs[i] = config + } + + return configs +} + +func flattenCognitoResourceServerScope(inputs []*cognitoidentityprovider.ResourceServerScopeType) []map[string]interface{} { + values := make([]map[string]interface{}, 0) + + for _, input := range inputs { + if input == nil { + continue + } + var value = map[string]interface{}{ + "scope_name": aws.StringValue(input.ScopeName), + "scope_description": aws.StringValue(input.ScopeDescription), + } + values = append(values, value) + } + return values +} + func expandCognitoUserPoolSchema(inputs []interface{}) []*cognitoidentityprovider.SchemaAttributeType { - configs := make([]*cognitoidentityprovider.SchemaAttributeType, len(inputs), len(inputs)) + configs := make([]*cognitoidentityprovider.SchemaAttributeType, len(inputs)) for i, input := range inputs { param := input.(map[string]interface{}) @@ -2767,65 +3058,308 @@ func expandCognitoUserPoolSchema(inputs []interface{}) []*cognitoidentityprovide return configs } -func flattenCognitoUserPoolSchema(inputs []*cognitoidentityprovider.SchemaAttributeType) []map[string]interface{} { - values := make([]map[string]interface{}, len(inputs), len(inputs)) - - for i, input := range inputs { - value := make(map[string]interface{}) +func cognitoUserPoolSchemaAttributeMatchesStandardAttribute(input *cognitoidentityprovider.SchemaAttributeType) bool { + if input == nil { + return false + } + + // All standard attributes always returned by API + // https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html#cognito-user-pools-standard-attributes + var standardAttributes = []cognitoidentityprovider.SchemaAttributeType{ + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("address"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("birthdate"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("10"), + MinLength: aws.String("10"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("email"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeBoolean), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("email_verified"), + Required: aws.Bool(false), + }, + { + AttributeDataType: 
aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("gender"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("given_name"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("family_name"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("locale"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("middle_name"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("name"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("nickname"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("phone_number"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeBoolean), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("phone_number_verified"), + Required: aws.Bool(false), + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("picture"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: 
aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("preferred_username"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("profile"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(false), + Name: aws.String("sub"), + Required: aws.Bool(true), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("1"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeNumber), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("updated_at"), + NumberAttributeConstraints: &cognitoidentityprovider.NumberAttributeConstraintsType{ + MinValue: aws.String("0"), + }, + Required: aws.Bool(false), + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("website"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + { + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("zoneinfo"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("2048"), + MinLength: aws.String("0"), + }, + }, + } + for _, standardAttribute := range standardAttributes { + if reflect.DeepEqual(*input, standardAttribute) { + return true + } + } + return false +} + +func flattenCognitoUserPoolSchema(configuredAttributes, inputs []*cognitoidentityprovider.SchemaAttributeType) []map[string]interface{} { + values := make([]map[string]interface{}, 0) + for _, input := range inputs { if input == nil { - return nil + continue } - if input.AttributeDataType != nil { - value["attribute_data_type"] = *input.AttributeDataType - } + // The API returns all standard attributes + // https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html#cognito-user-pools-standard-attributes + // Ignore setting them in state if they are unconfigured to prevent a huge and unexpected diff + configured := false - if input.DeveloperOnlyAttribute != nil { - value["developer_only_attribute"] = *input.DeveloperOnlyAttribute + for _, configuredAttribute := range configuredAttributes { + if reflect.DeepEqual(input, configuredAttribute) { + configured = true + } } - if input.Mutable != nil { - value["mutable"] = *input.Mutable + if !configured { + if cognitoUserPoolSchemaAttributeMatchesStandardAttribute(input) { + continue + } + // When adding a 
Cognito Identity Provider, the API will automatically add an "identities" attribute + identitiesAttribute := cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("identities"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{}, + } + if reflect.DeepEqual(*input, identitiesAttribute) { + continue + } } - if input.Name != nil { - value["name"] = *input.Name + var value = map[string]interface{}{ + "attribute_data_type": aws.StringValue(input.AttributeDataType), + "developer_only_attribute": aws.BoolValue(input.DeveloperOnlyAttribute), + "mutable": aws.BoolValue(input.Mutable), + "name": strings.TrimPrefix(strings.TrimPrefix(aws.StringValue(input.Name), "dev:"), "custom:"), + "required": aws.BoolValue(input.Required), } if input.NumberAttributeConstraints != nil { subvalue := make(map[string]interface{}) if input.NumberAttributeConstraints.MinValue != nil { - subvalue["min_value"] = input.NumberAttributeConstraints.MinValue + subvalue["min_value"] = aws.StringValue(input.NumberAttributeConstraints.MinValue) } if input.NumberAttributeConstraints.MaxValue != nil { - subvalue["max_value"] = input.NumberAttributeConstraints.MaxValue + subvalue["max_value"] = aws.StringValue(input.NumberAttributeConstraints.MaxValue) } - value["number_attribute_constraints"] = subvalue - } - - if input.Required != nil { - value["required"] = *input.Required + value["number_attribute_constraints"] = []map[string]interface{}{subvalue} } if input.StringAttributeConstraints != nil { subvalue := make(map[string]interface{}) if input.StringAttributeConstraints.MinLength != nil { - subvalue["min_length"] = input.StringAttributeConstraints.MinLength + subvalue["min_length"] = aws.StringValue(input.StringAttributeConstraints.MinLength) } if input.StringAttributeConstraints.MaxLength != nil { - subvalue["max_length"] = input.StringAttributeConstraints.MaxLength + subvalue["max_length"] = aws.StringValue(input.StringAttributeConstraints.MaxLength) } - value["string_attribute_constraints"] = subvalue + value["string_attribute_constraints"] = []map[string]interface{}{subvalue} } - values[i] = value + values = append(values, value) } return values @@ -2926,12 +3460,6 @@ func flattenCognitoUserPoolVerificationMessageTemplate(s *cognitoidentityprovide return []map[string]interface{}{} } -func buildLambdaInvokeArn(lambdaArn, region string) string { - apiVersion := "2015-03-31" - return fmt.Sprintf("arn:aws:apigateway:%s:lambda:path/%s/functions/%s/invocations", - region, apiVersion, lambdaArn) -} - func sliceContainsMap(l []interface{}, m map[string]interface{}) (int, bool) { for i, t := range l { if reflect.DeepEqual(m, t.(map[string]interface{})) { @@ -2942,12 +3470,10 @@ func sliceContainsMap(l []interface{}, m map[string]interface{}) (int, bool) { return -1, false } -func expandAwsSsmTargets(d *schema.ResourceData) []*ssm.Target { +func expandAwsSsmTargets(in []interface{}) []*ssm.Target { targets := make([]*ssm.Target, 0) - targetConfig := d.Get("targets").([]interface{}) - - for _, tConfig := range targetConfig { + for _, tConfig := range in { config := tConfig.(map[string]interface{}) target := &ssm.Target{ @@ -2999,6 +3525,115 @@ func flattenFieldToMatch(fm *waf.FieldToMatch) []interface{} { return []interface{}{m} } +func diffWafWebAclRules(oldR, newR []interface{}) []*waf.WebACLUpdate { + updates 
:= make([]*waf.WebACLUpdate, 0) + + for _, or := range oldR { + aclRule := or.(map[string]interface{}) + + if idx, contains := sliceContainsMap(newR, aclRule); contains { + newR = append(newR[:idx], newR[idx+1:]...) + continue + } + updates = append(updates, expandWafWebAclUpdate(waf.ChangeActionDelete, aclRule)) + } + + for _, nr := range newR { + aclRule := nr.(map[string]interface{}) + updates = append(updates, expandWafWebAclUpdate(waf.ChangeActionInsert, aclRule)) + } + return updates +} + +func expandWafWebAclUpdate(updateAction string, aclRule map[string]interface{}) *waf.WebACLUpdate { + var rule *waf.ActivatedRule + + switch aclRule["type"].(string) { + case waf.WafRuleTypeGroup: + rule = &waf.ActivatedRule{ + OverrideAction: expandWafOverrideAction(aclRule["override_action"].([]interface{})), + Priority: aws.Int64(int64(aclRule["priority"].(int))), + RuleId: aws.String(aclRule["rule_id"].(string)), + Type: aws.String(aclRule["type"].(string)), + } + default: + rule = &waf.ActivatedRule{ + Action: expandWafAction(aclRule["action"].([]interface{})), + Priority: aws.Int64(int64(aclRule["priority"].(int))), + RuleId: aws.String(aclRule["rule_id"].(string)), + Type: aws.String(aclRule["type"].(string)), + } + } + + update := &waf.WebACLUpdate{ + Action: aws.String(updateAction), + ActivatedRule: rule, + } + + return update +} + +func expandWafAction(l []interface{}) *waf.WafAction { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + return &waf.WafAction{ + Type: aws.String(m["type"].(string)), + } +} + +func expandWafOverrideAction(l []interface{}) *waf.WafOverrideAction { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + return &waf.WafOverrideAction{ + Type: aws.String(m["type"].(string)), + } +} + +func flattenWafAction(n *waf.WafAction) []map[string]interface{} { + if n == nil { + return nil + } + + m := setMap(make(map[string]interface{})) + + m.SetString("type", n.Type) + return m.MapList() +} + +func flattenWafWebAclRules(ts []*waf.ActivatedRule) []map[string]interface{} { + out := make([]map[string]interface{}, len(ts)) + for i, r := range ts { + m := make(map[string]interface{}) + + switch aws.StringValue(r.Type) { + case waf.WafRuleTypeGroup: + actionMap := map[string]interface{}{ + "type": aws.StringValue(r.OverrideAction.Type), + } + m["override_action"] = []map[string]interface{}{actionMap} + default: + actionMap := map[string]interface{}{ + "type": aws.StringValue(r.Action.Type), + } + m["action"] = []map[string]interface{}{actionMap} + } + + m["priority"] = int(aws.Int64Value(r.Priority)) + m["rule_id"] = aws.StringValue(r.RuleId) + m["type"] = aws.StringValue(r.Type) + out[i] = m + } + return out +} + // escapeJsonPointer escapes string per RFC 6901 // so it can be used as path in JSON patch operations func escapeJsonPointer(path string) string { @@ -3036,7 +3671,7 @@ func flattenCognitoIdentityPoolRoles(config map[string]*string) map[string]strin } func expandCognitoIdentityPoolRoleMappingsAttachment(rms []interface{}) map[string]*cognitoidentity.RoleMapping { - values := make(map[string]*cognitoidentity.RoleMapping, 0) + values := make(map[string]*cognitoidentity.RoleMapping) if len(rms) == 0 { return values @@ -3134,7 +3769,7 @@ func flattenRedshiftLogging(ls *redshift.LoggingStatus) []interface{} { return []interface{}{} } - cfg := make(map[string]interface{}, 0) + cfg := make(map[string]interface{}) cfg["enabled"] = *ls.LoggingEnabled if ls.BucketName != nil { 
cfg["bucket_name"] = *ls.BucketName @@ -3150,7 +3785,7 @@ func flattenRedshiftSnapshotCopy(scs *redshift.ClusterSnapshotCopyStatus) []inte return []interface{}{} } - cfg := make(map[string]interface{}, 0) + cfg := make(map[string]interface{}) if scs.DestinationRegion != nil { cfg["destination_region"] = *scs.DestinationRegion } @@ -3184,7 +3819,7 @@ func canonicalXML(s string) (string, error) { } func expandMqUsers(cfg []interface{}) []*mq.User { - users := make([]*mq.User, len(cfg), len(cfg)) + users := make([]*mq.User, len(cfg)) for i, m := range cfg { u := m.(map[string]interface{}) user := mq.User{ @@ -3204,7 +3839,7 @@ func expandMqUsers(cfg []interface{}) []*mq.User { // We use cfgdUsers to get & set the password func flattenMqUsers(users []*mq.User, cfgUsers []interface{}) *schema.Set { - existingPairs := make(map[string]string, 0) + existingPairs := make(map[string]string) for _, u := range cfgUsers { user := u.(map[string]interface{}) username := user["username"].(string) @@ -3213,13 +3848,15 @@ func flattenMqUsers(users []*mq.User, cfgUsers []interface{}) *schema.Set { out := make([]interface{}, 0) for _, u := range users { + m := map[string]interface{}{ + "username": *u.Username, + } password := "" if p, ok := existingPairs[*u.Username]; ok { password = p } - m := map[string]interface{}{ - "username": *u.Username, - "password": password, + if password != "" { + m["password"] = password } if u.ConsoleAccess != nil { m["console_access"] = *u.ConsoleAccess @@ -3249,7 +3886,7 @@ func flattenMqWeeklyStartTime(wst *mq.WeeklyStartTime) []interface{} { if wst == nil { return []interface{}{} } - m := make(map[string]interface{}, 0) + m := make(map[string]interface{}) if wst.DayOfWeek != nil { m["day_of_week"] = *wst.DayOfWeek } @@ -3282,7 +3919,7 @@ func flattenMqConfigurationId(cid *mq.ConfigurationId) []interface{} { if cid == nil { return []interface{}{} } - m := make(map[string]interface{}, 0) + m := make(map[string]interface{}) if cid.Id != nil { m["id"] = *cid.Id } @@ -3296,21 +3933,82 @@ func flattenMqBrokerInstances(instances []*mq.BrokerInstance) []interface{} { if len(instances) == 0 { return []interface{}{} } - l := make([]interface{}, len(instances), len(instances)) + l := make([]interface{}, len(instances)) for i, instance := range instances { - m := make(map[string]interface{}, 0) + m := make(map[string]interface{}) if instance.ConsoleURL != nil { m["console_url"] = *instance.ConsoleURL } if len(instance.Endpoints) > 0 { m["endpoints"] = aws.StringValueSlice(instance.Endpoints) } + if instance.IpAddress != nil { + m["ip_address"] = *instance.IpAddress + } l[i] = m } return l } +func flattenMqLogs(logs *mq.LogsSummary) []interface{} { + if logs == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "general": aws.BoolValue(logs.General), + "audit": aws.BoolValue(logs.Audit), + } + + return []interface{}{m} +} + +func expandMqLogs(l []interface{}) *mq.Logs { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + logs := &mq.Logs{ + Audit: aws.Bool(m["audit"].(bool)), + General: aws.Bool(m["general"].(bool)), + } + + return logs +} + +func flattenResourceLifecycleConfig(rlc *elasticbeanstalk.ApplicationResourceLifecycleConfig) []map[string]interface{} { + result := make([]map[string]interface{}, 0, 1) + + anything_enabled := false + appversion_lifecycle := make(map[string]interface{}) + + if rlc.ServiceRole != nil { + appversion_lifecycle["service_role"] = *rlc.ServiceRole + } + + if vlc := rlc.VersionLifecycleConfig; 
vlc != nil { + if mar := vlc.MaxAgeRule; mar != nil && *mar.Enabled { + anything_enabled = true + appversion_lifecycle["max_age_in_days"] = *mar.MaxAgeInDays + appversion_lifecycle["delete_source_from_s3"] = *mar.DeleteSourceFromS3 + } + if mcr := vlc.MaxCountRule; mcr != nil && *mcr.Enabled { + anything_enabled = true + appversion_lifecycle["max_count"] = *mcr.MaxCount + appversion_lifecycle["delete_source_from_s3"] = *mcr.DeleteSourceFromS3 + } + } + + if anything_enabled { + result = append(result, appversion_lifecycle) + } + + return result +} + func diffDynamoDbGSI(oldGsi, newGsi []interface{}) (ops []*dynamodb.GlobalSecondaryIndexUpdate, e error) { // Transform slices into maps oldGsis := make(map[string]interface{}) @@ -3436,6 +4134,25 @@ func flattenDynamoDbTtl(ttlDesc *dynamodb.TimeToLiveDescription) []interface{} { return []interface{}{} } +func flattenDynamoDbPitr(pitrDesc *dynamodb.DescribeContinuousBackupsOutput) []interface{} { + m := map[string]interface{}{ + "enabled": false, + } + + if pitrDesc == nil { + return []interface{}{m} + } + + if pitrDesc.ContinuousBackupsDescription != nil { + pitr := pitrDesc.ContinuousBackupsDescription.PointInTimeRecoveryDescription + if pitr != nil { + m["enabled"] = (*pitr.PointInTimeRecoveryStatus == dynamodb.PointInTimeRecoveryStatusEnabled) + } + } + + return []interface{}{m} +} + func flattenAwsDynamoDbTableResource(d *schema.ResourceData, table *dynamodb.TableDescription) error { d.Set("write_capacity", table.ProvisionedThroughput.WriteCapacityUnits) d.Set("read_capacity", table.ProvisionedThroughput.ReadCapacityUnits) @@ -3551,7 +4268,7 @@ func flattenAwsDynamoDbTableResource(d *schema.ResourceData, table *dynamodb.Tab } func expandDynamoDbAttributes(cfg []interface{}) []*dynamodb.AttributeDefinition { - attributes := make([]*dynamodb.AttributeDefinition, len(cfg), len(cfg)) + attributes := make([]*dynamodb.AttributeDefinition, len(cfg)) for i, attribute := range cfg { attr := attribute.(map[string]interface{}) attributes[i] = &dynamodb.AttributeDefinition{ @@ -3562,11 +4279,11 @@ func expandDynamoDbAttributes(cfg []interface{}) []*dynamodb.AttributeDefinition return attributes } -// TODO: Get rid of keySchemaM - the user should just explicitely define +// TODO: Get rid of keySchemaM - the user should just explicitly define // this in the config, we shouldn't magically be setting it like this. // Removal will however require config change, hence BC. 
:/ func expandDynamoDbLocalSecondaryIndexes(cfg []interface{}, keySchemaM map[string]interface{}) []*dynamodb.LocalSecondaryIndex { - indexes := make([]*dynamodb.LocalSecondaryIndex, len(cfg), len(cfg)) + indexes := make([]*dynamodb.LocalSecondaryIndex, len(cfg)) for i, lsi := range cfg { m := lsi.(map[string]interface{}) idxName := m["name"].(string) @@ -3726,3 +4443,237 @@ func flattenIotThingTypeProperties(s *iot.ThingTypeProperties) []map[string]interface{ return []map[string]interface{}{m} } + +func expandLaunchTemplateSpecification(specs []interface{}) (*autoscaling.LaunchTemplateSpecification, error) { + if len(specs) < 1 { + return nil, nil + } + + spec := specs[0].(map[string]interface{}) + + idValue, idOk := spec["id"] + nameValue, nameOk := spec["name"] + + if idValue == "" && nameValue == "" { + return nil, fmt.Errorf("One of `id` or `name` must be set for `launch_template`") + } + + result := &autoscaling.LaunchTemplateSpecification{} + + // DescribeAutoScalingGroups returns both name and id but LaunchTemplateSpecification + // allows only one of them to be set + if idOk && idValue != "" { + result.LaunchTemplateId = aws.String(idValue.(string)) + } else if nameOk && nameValue != "" { + result.LaunchTemplateName = aws.String(nameValue.(string)) + } + + if v, ok := spec["version"]; ok && v != "" { + result.Version = aws.String(v.(string)) + } + + return result, nil +} + +func flattenLaunchTemplateSpecification(lt *autoscaling.LaunchTemplateSpecification) []map[string]interface{} { + attrs := map[string]interface{}{} + result := make([]map[string]interface{}, 0) + + // id and name are always returned by DescribeAutoscalingGroups + attrs["id"] = *lt.LaunchTemplateId + attrs["name"] = *lt.LaunchTemplateName + + // version is returned only if it was previously set + if lt.Version != nil { + attrs["version"] = *lt.Version + } else { + attrs["version"] = nil + } + + result = append(result, attrs) + + return result +} + +func flattenVpcPeeringConnectionOptions(options *ec2.VpcPeeringConnectionOptionsDescription) []map[string]interface{} { + m := map[string]interface{}{} + + if options.AllowDnsResolutionFromRemoteVpc != nil { + m["allow_remote_vpc_dns_resolution"] = *options.AllowDnsResolutionFromRemoteVpc + } + + if options.AllowEgressFromLocalClassicLinkToRemoteVpc != nil { + m["allow_classic_link_to_remote_vpc"] = *options.AllowEgressFromLocalClassicLinkToRemoteVpc + } + + if options.AllowEgressFromLocalVpcToRemoteClassicLink != nil { + m["allow_vpc_to_remote_classic_link"] = *options.AllowEgressFromLocalVpcToRemoteClassicLink + } + + return []map[string]interface{}{m} +} + +func expandVpcPeeringConnectionOptions(m map[string]interface{}) *ec2.PeeringConnectionOptionsRequest { + options := &ec2.PeeringConnectionOptionsRequest{} + + if v, ok := m["allow_remote_vpc_dns_resolution"]; ok { + options.AllowDnsResolutionFromRemoteVpc = aws.Bool(v.(bool)) + } + + if v, ok := m["allow_classic_link_to_remote_vpc"]; ok { + options.AllowEgressFromLocalClassicLinkToRemoteVpc = aws.Bool(v.(bool)) + } + + if v, ok := m["allow_vpc_to_remote_classic_link"]; ok { + options.AllowEgressFromLocalVpcToRemoteClassicLink = aws.Bool(v.(bool)) + } + + return options +} + +func expandDxRouteFilterPrefixes(cfg []interface{}) []*directconnect.RouteFilterPrefix { + prefixes := make([]*directconnect.RouteFilterPrefix, len(cfg)) + for i, p := range cfg { + prefix := &directconnect.RouteFilterPrefix{ + Cidr: aws.String(p.(string)), + } + prefixes[i] = prefix + } + return prefixes +} + +func
flattenDxRouteFilterPrefixes(prefixes []*directconnect.RouteFilterPrefix) *schema.Set { + out := make([]interface{}, 0) + for _, prefix := range prefixes { + out = append(out, aws.StringValue(prefix.Cidr)) + } + return schema.NewSet(schema.HashString, out) +} + +func expandMacieClassificationType(d *schema.ResourceData) *macie.ClassificationType { + continuous := macie.S3ContinuousClassificationTypeFull + oneTime := macie.S3OneTimeClassificationTypeNone + if v := d.Get("classification_type").([]interface{}); len(v) > 0 { + m := v[0].(map[string]interface{}) + continuous = m["continuous"].(string) + oneTime = m["one_time"].(string) + } + + return &macie.ClassificationType{ + Continuous: aws.String(continuous), + OneTime: aws.String(oneTime), + } +} + +func expandMacieClassificationTypeUpdate(d *schema.ResourceData) *macie.ClassificationTypeUpdate { + continuous := macie.S3ContinuousClassificationTypeFull + oneTime := macie.S3OneTimeClassificationTypeNone + if v := d.Get("classification_type").([]interface{}); len(v) > 0 { + m := v[0].(map[string]interface{}) + continuous = m["continuous"].(string) + oneTime = m["one_time"].(string) + } + + return &macie.ClassificationTypeUpdate{ + Continuous: aws.String(continuous), + OneTime: aws.String(oneTime), + } +} + +func flattenMacieClassificationType(classificationType *macie.ClassificationType) []map[string]interface{} { + if classificationType == nil { + return []map[string]interface{}{} + } + m := map[string]interface{}{ + "continuous": aws.StringValue(classificationType.Continuous), + "one_time": aws.StringValue(classificationType.OneTime), + } + return []map[string]interface{}{m} +} + +func expandDaxParameterGroupParameterNameValue(config []interface{}) []*dax.ParameterNameValue { + if len(config) == 0 { + return nil + } + results := make([]*dax.ParameterNameValue, 0, len(config)) + for _, raw := range config { + m := raw.(map[string]interface{}) + pnv := &dax.ParameterNameValue{ + ParameterName: aws.String(m["name"].(string)), + ParameterValue: aws.String(m["value"].(string)), + } + results = append(results, pnv) + } + return results +} + +func flattenDaxParameterGroupParameters(params []*dax.Parameter) []map[string]interface{} { + if len(params) == 0 { + return nil + } + results := make([]map[string]interface{}, 0) + for _, p := range params { + m := map[string]interface{}{ + "name": aws.StringValue(p.ParameterName), + "value": aws.StringValue(p.ParameterValue), + } + results = append(results, m) + } + return results +} + +func expandDaxEncryptAtRestOptions(m map[string]interface{}) *dax.SSESpecification { + options := dax.SSESpecification{} + + if v, ok := m["enabled"]; ok { + options.Enabled = aws.Bool(v.(bool)) + } + + return &options +} + +func flattenDaxEncryptAtRestOptions(options *dax.SSEDescription) []map[string]interface{} { + m := map[string]interface{}{ + "enabled": false, + } + + if options == nil { + return []map[string]interface{}{m} + } + + m["enabled"] = (aws.StringValue(options.Status) == dax.SSEStatusEnabled) + + return []map[string]interface{}{m} +} + +func expandRdsScalingConfiguration(l []interface{}) *rds.ScalingConfiguration { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + scalingConfiguration := &rds.ScalingConfiguration{ + AutoPause: aws.Bool(m["auto_pause"].(bool)), + MaxCapacity: aws.Int64(int64(m["max_capacity"].(int))), + MinCapacity: aws.Int64(int64(m["min_capacity"].(int))), + SecondsUntilAutoPause: aws.Int64(int64(m["seconds_until_auto_pause"].(int))), + } + + 
return scalingConfiguration +} + +func flattenRdsScalingConfigurationInfo(scalingConfigurationInfo *rds.ScalingConfigurationInfo) []interface{} { + if scalingConfigurationInfo == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "auto_pause": aws.BoolValue(scalingConfigurationInfo.AutoPause), + "max_capacity": aws.Int64Value(scalingConfigurationInfo.MaxCapacity), + "min_capacity": aws.Int64Value(scalingConfigurationInfo.MinCapacity), + "seconds_until_auto_pause": aws.Int64Value(scalingConfigurationInfo.SecondsUntilAutoPause), + } + + return []interface{}{m} +} diff --git a/aws/structure_test.go b/aws/structure_test.go index 10b7202784b..15aa3a480ed 100644 --- a/aws/structure_test.go +++ b/aws/structure_test.go @@ -8,6 +8,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/apigateway" "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/aws/aws-sdk-go/service/elb" @@ -74,34 +75,34 @@ func TestExpandIPPerms(t *testing.T) { } expected := []ec2.IpPermission{ - ec2.IpPermission{ + { IpProtocol: aws.String("icmp"), FromPort: aws.Int64(int64(1)), ToPort: aws.Int64(int64(-1)), IpRanges: []*ec2.IpRange{ - &ec2.IpRange{ + { CidrIp: aws.String("0.0.0.0/0"), Description: aws.String("desc"), }, }, UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { UserId: aws.String("foo"), GroupId: aws.String("sg-22222"), Description: aws.String("desc"), }, - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-11111"), Description: aws.String("desc"), }, }, }, - ec2.IpPermission{ + { IpProtocol: aws.String("icmp"), FromPort: aws.Int64(int64(1)), ToPort: aws.Int64(int64(-1)), UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupId: aws.String("foo"), }, }, @@ -183,17 +184,17 @@ func TestExpandIPPerms_NegOneProtocol(t *testing.T) { } expected := []ec2.IpPermission{ - ec2.IpPermission{ + { IpProtocol: aws.String("-1"), FromPort: aws.Int64(int64(0)), ToPort: aws.Int64(int64(0)), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("0.0.0.0/0")}}, + IpRanges: []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}}, UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { UserId: aws.String("foo"), GroupId: aws.String("sg-22222"), }, - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-11111"), }, }, @@ -279,26 +280,26 @@ func TestExpandIPPerms_nonVPC(t *testing.T) { } expected := []ec2.IpPermission{ - ec2.IpPermission{ + { IpProtocol: aws.String("icmp"), FromPort: aws.Int64(int64(1)), ToPort: aws.Int64(int64(-1)), - IpRanges: []*ec2.IpRange{&ec2.IpRange{CidrIp: aws.String("0.0.0.0/0")}}, + IpRanges: []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}}, UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupName: aws.String("sg-22222"), }, - &ec2.UserIdGroupPair{ + { GroupName: aws.String("sg-11111"), }, }, }, - ec2.IpPermission{ + { IpProtocol: aws.String("icmp"), FromPort: aws.Int64(int64(1)), ToPort: aws.Int64(int64(-1)), UserIdGroupPairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupName: aws.String("foo"), }, }, @@ -422,7 +423,7 @@ func TestFlattenHealthCheck(t *testing.T) { Interval: aws.Int64(int64(30)), }, Output: []map[string]interface{}{ - map[string]interface{}{ + { "unhealthy_threshold": int64(10), "healthy_threshold": int64(10), "target": "HTTP:80/", @@ -590,13 +591,13 @@ 
func TestFlattenParameters(t *testing.T) { }{ { Input: []*rds.Parameter{ - &rds.Parameter{ + { ParameterName: aws.String("character_set_client"), ParameterValue: aws.String("utf8"), }, }, Output: []map[string]interface{}{ - map[string]interface{}{ + { "name": "character_set_client", "value": "utf8", }, @@ -619,13 +620,13 @@ func TestFlattenRedshiftParameters(t *testing.T) { }{ { Input: []*redshift.Parameter{ - &redshift.Parameter{ + { ParameterName: aws.String("character_set_client"), ParameterValue: aws.String("utf8"), }, }, Output: []map[string]interface{}{ - map[string]interface{}{ + { "name": "character_set_client", "value": "utf8", }, @@ -648,13 +649,13 @@ func TestFlattenElasticacheParameters(t *testing.T) { }{ { Input: []*elasticache.Parameter{ - &elasticache.Parameter{ + { ParameterName: aws.String("activerehashing"), ParameterValue: aws.String("yes"), }, }, Output: []map[string]interface{}{ - map[string]interface{}{ + { "name": "activerehashing", "value": "yes", }, @@ -673,8 +674,8 @@ func TestFlattenElasticacheParameters(t *testing.T) { func TestExpandInstanceString(t *testing.T) { expected := []*elb.Instance{ - &elb.Instance{InstanceId: aws.String("test-one")}, - &elb.Instance{InstanceId: aws.String("test-two")}, + {InstanceId: aws.String("test-one")}, + {InstanceId: aws.String("test-two")}, } ids := []interface{}{ @@ -691,8 +692,8 @@ func TestExpandInstanceString(t *testing.T) { func TestFlattenNetworkInterfacesPrivateIPAddresses(t *testing.T) { expanded := []*ec2.NetworkInterfacePrivateIpAddress{ - &ec2.NetworkInterfacePrivateIpAddress{PrivateIpAddress: aws.String("192.168.0.1")}, - &ec2.NetworkInterfacePrivateIpAddress{PrivateIpAddress: aws.String("192.168.0.2")}, + {PrivateIpAddress: aws.String("192.168.0.1")}, + {PrivateIpAddress: aws.String("192.168.0.2")}, } result := flattenNetworkInterfacesPrivateIPAddresses(expanded) @@ -716,8 +717,8 @@ func TestFlattenNetworkInterfacesPrivateIPAddresses(t *testing.T) { func TestFlattenGroupIdentifiers(t *testing.T) { expanded := []*ec2.GroupIdentifier{ - &ec2.GroupIdentifier{GroupId: aws.String("sg-001")}, - &ec2.GroupIdentifier{GroupId: aws.String("sg-002")}, + {GroupId: aws.String("sg-001")}, + {GroupId: aws.String("sg-002")}, } result := flattenGroupIdentifiers(expanded) @@ -804,7 +805,7 @@ func TestFlattenAttachmentWhenNoInstanceId(t *testing.T) { func TestFlattenStepAdjustments(t *testing.T) { expanded := []*autoscaling.StepAdjustment{ - &autoscaling.StepAdjustment{ + { MetricIntervalLowerBound: aws.Float64(1.0), MetricIntervalUpperBound: aws.Float64(2.0), ScalingAdjustment: aws.Int64(int64(1)), @@ -886,8 +887,8 @@ func checkFlattenResourceRecords( func TestFlattenAsgEnabledMetrics(t *testing.T) { expanded := []*autoscaling.EnabledMetric{ - &autoscaling.EnabledMetric{Granularity: aws.String("1Minute"), Metric: aws.String("GroupTotalInstances")}, - &autoscaling.EnabledMetric{Granularity: aws.String("1Minute"), Metric: aws.String("GroupMaxSize")}, + {Granularity: aws.String("1Minute"), Metric: aws.String("GroupTotalInstances")}, + {Granularity: aws.String("1Minute"), Metric: aws.String("GroupMaxSize")}, } result := flattenAsgEnabledMetrics(expanded) @@ -907,7 +908,7 @@ func TestFlattenAsgEnabledMetrics(t *testing.T) { func TestFlattenKinesisShardLevelMetrics(t *testing.T) { expanded := []*kinesis.EnhancedMetrics{ - &kinesis.EnhancedMetrics{ + { ShardLevelMetrics: []*string{ aws.String("IncomingBytes"), aws.String("IncomingRecords"), @@ -936,12 +937,12 @@ func TestFlattenSecurityGroups(t *testing.T) { { ownerId: 
aws.String("user1234"), pairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-12345"), }, }, expected: []*GroupIdentifier{ - &GroupIdentifier{ + { GroupId: aws.String("sg-12345"), }, }, @@ -951,13 +952,13 @@ func TestFlattenSecurityGroups(t *testing.T) { { ownerId: aws.String("user1234"), pairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-12345"), UserId: aws.String("user1234"), }, }, expected: []*GroupIdentifier{ - &GroupIdentifier{ + { GroupId: aws.String("sg-12345"), }, }, @@ -968,14 +969,14 @@ func TestFlattenSecurityGroups(t *testing.T) { { ownerId: aws.String("user1234"), pairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-12345"), GroupName: aws.String("somegroup"), // GroupName is only included in Classic UserId: aws.String("user4321"), }, }, expected: []*GroupIdentifier{ - &GroupIdentifier{ + { GroupId: aws.String("sg-12345"), GroupName: aws.String("user4321/somegroup"), }, @@ -987,13 +988,13 @@ func TestFlattenSecurityGroups(t *testing.T) { { ownerId: aws.String("user1234"), pairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-12345"), UserId: aws.String("user4321"), }, }, expected: []*GroupIdentifier{ - &GroupIdentifier{ + { GroupId: aws.String("user4321/sg-12345"), }, }, @@ -1003,13 +1004,13 @@ func TestFlattenSecurityGroups(t *testing.T) { { ownerId: aws.String("user1234"), pairs: []*ec2.UserIdGroupPair{ - &ec2.UserIdGroupPair{ + { GroupId: aws.String("sg-12345"), Description: aws.String("desc"), }, }, expected: []*GroupIdentifier{ - &GroupIdentifier{ + { GroupId: aws.String("sg-12345"), Description: aws.String("desc"), }, @@ -1075,11 +1076,11 @@ func TestFlattenApiGatewayStageKeys(t *testing.T) { aws.String("e5d4c3b2a1/test"), }, Output: []map[string]interface{}{ - map[string]interface{}{ + { "stage_name": "dev", "rest_api_id": "a1b2c3d4e5", }, - map[string]interface{}{ + { "stage_name": "test", "rest_api_id": "e5d4c3b2a1", }, @@ -1177,7 +1178,7 @@ func TestFlattenPolicyAttributes(t *testing.T) { }{ { Input: []*elb.PolicyAttributeDescription{ - &elb.PolicyAttributeDescription{ + { AttributeName: aws.String("Protocol-TLSv1.2"), AttributeValue: aws.String("true"), }, @@ -1259,6 +1260,142 @@ func TestNormalizeCloudFormationTemplate(t *testing.T) { } } +func TestCognitoUserPoolSchemaAttributeMatchesStandardAttribute(t *testing.T) { + cases := []struct { + Input *cognitoidentityprovider.SchemaAttributeType + Expected bool + }{ + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("birthdate"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("10"), + MinLength: aws.String("10"), + }, + }, + Expected: true, + }, + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(true), + Mutable: aws.Bool(true), + Name: aws.String("birthdate"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("10"), + MinLength: aws.String("10"), + }, + }, + Expected: false, + }, + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + 
DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(false), + Name: aws.String("birthdate"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("10"), + MinLength: aws.String("10"), + }, + }, + Expected: false, + }, + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("non-existent"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("10"), + MinLength: aws.String("10"), + }, + }, + Expected: false, + }, + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("birthdate"), + Required: aws.Bool(true), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("10"), + MinLength: aws.String("10"), + }, + }, + Expected: false, + }, + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("birthdate"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("999"), + MinLength: aws.String("10"), + }, + }, + Expected: false, + }, + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeString), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("birthdate"), + Required: aws.Bool(false), + StringAttributeConstraints: &cognitoidentityprovider.StringAttributeConstraintsType{ + MaxLength: aws.String("10"), + MinLength: aws.String("999"), + }, + }, + Expected: false, + }, + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeBoolean), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("email_verified"), + Required: aws.Bool(false), + }, + Expected: true, + }, + { + Input: &cognitoidentityprovider.SchemaAttributeType{ + AttributeDataType: aws.String(cognitoidentityprovider.AttributeDataTypeNumber), + DeveloperOnlyAttribute: aws.Bool(false), + Mutable: aws.Bool(true), + Name: aws.String("updated_at"), + NumberAttributeConstraints: &cognitoidentityprovider.NumberAttributeConstraintsType{ + MinValue: aws.String("0"), + }, + Required: aws.Bool(false), + }, + Expected: true, + }, + } + + for _, tc := range cases { + output := cognitoUserPoolSchemaAttributeMatchesStandardAttribute(tc.Input) + if output != tc.Expected { + t.Fatalf("Expected %t match with standard attribute on input: \n\n%#v\n\n", tc.Expected, tc.Input) + } + } +} + func TestCanonicalXML(t *testing.T) { cases := []struct { Name string diff --git a/aws/tags.go b/aws/tags.go index 7a4bfdef174..f716b5c0d77 100644 --- a/aws/tags.go +++ b/aws/tags.go @@ -1,6 +1,7 @@ package aws import ( + "fmt" "log" "regexp" "strings" @@ -11,6 +12,7 @@ import ( "github.com/aws/aws-sdk-go/service/dynamodb" "github.com/aws/aws-sdk-go/service/ec2" 
"github.com/aws/aws-sdk-go/service/elbv2" + "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" ) @@ -98,7 +100,19 @@ func setVolumeTags(conn *ec2.EC2, d *schema.ResourceData) error { return nil }) if err != nil { - return err + // Retry without time bounds for EC2 throttling + if isResourceTimeoutError(err) { + log.Printf("[DEBUG] Removing volume tags: %#v from %s", remove, d.Id()) + _, err := conn.DeleteTags(&ec2.DeleteTagsInput{ + Resources: volumeIds, + Tags: remove, + }) + if err != nil { + return err + } + } else { + return err + } } } if len(create) > 0 { @@ -118,7 +132,19 @@ func setVolumeTags(conn *ec2.EC2, d *schema.ResourceData) error { return nil }) if err != nil { - return err + // Retry without time bounds for EC2 throttling + if isResourceTimeoutError(err) { + log.Printf("[DEBUG] Creating vol tags: %s for %s", create, d.Id()) + _, err := conn.CreateTags(&ec2.CreateTagsInput{ + Resources: volumeIds, + Tags: create, + }) + if err != nil { + return err + } + } else { + return err + } } } } @@ -153,7 +179,19 @@ func setTags(conn *ec2.EC2, d *schema.ResourceData) error { return nil }) if err != nil { - return err + // Retry without time bounds for EC2 throttling + if isResourceTimeoutError(err) { + log.Printf("[DEBUG] Removing tags: %#v from %s", remove, d.Id()) + _, err := conn.DeleteTags(&ec2.DeleteTagsInput{ + Resources: []*string{aws.String(d.Id())}, + Tags: remove, + }) + if err != nil { + return err + } + } else { + return err + } } } if len(create) > 0 { @@ -173,7 +211,19 @@ func setTags(conn *ec2.EC2, d *schema.ResourceData) error { return nil }) if err != nil { - return err + // Retry without time bounds for EC2 throttling + if isResourceTimeoutError(err) { + log.Printf("[DEBUG] Creating tags: %s for %s", create, d.Id()) + _, err := conn.CreateTags(&ec2.CreateTagsInput{ + Resources: []*string{aws.String(d.Id())}, + Tags: create, + }) + if err != nil { + return err + } + } else { + return err + } } } } @@ -188,15 +238,18 @@ func diffTags(oldTags, newTags []*ec2.Tag) ([]*ec2.Tag, []*ec2.Tag) { // First, we're creating everything we have create := make(map[string]interface{}) for _, t := range newTags { - create[*t.Key] = *t.Value + create[aws.StringValue(t.Key)] = aws.StringValue(t.Value) } // Build the list of what to remove var remove []*ec2.Tag for _, t := range oldTags { - old, ok := create[*t.Key] - if !ok || old != *t.Value { + old, ok := create[aws.StringValue(t.Key)] + if !ok || old != aws.StringValue(t.Value) { remove = append(remove, t) + } else if ok { + // already present so remove from new + delete(create, aws.StringValue(t.Key)) } } @@ -401,3 +454,23 @@ func diffTagsDynamoDb(oldTags, newTags []*dynamodb.Tag) ([]*dynamodb.Tag, []*str } return tagsFromMapDynamoDb(create), remove } + +// tagsMapToHash returns a stable hash value for a raw tags map. +// The returned value map be negative (i.e. not suitable for a 'Set' function). +func tagsMapToHash(tags map[string]interface{}) int { + total := 0 + for k, v := range tags { + total = total ^ hashcode.String(fmt.Sprintf("%s-%s", k, v)) + } + return total +} + +// tagsMapToRaw converts a tags map to its "raw" type. 
+func tagsMapToRaw(m map[string]string) map[string]interface{} { + raw := make(map[string]interface{}) + for k, v := range m { + raw[k] = v + } + + return raw +} diff --git a/aws/tagsACM.go b/aws/tagsACM.go index 5f8a88fa90f..7f38ff3a432 100644 --- a/aws/tagsACM.go +++ b/aws/tagsACM.go @@ -2,7 +2,6 @@ package aws import ( "log" - "regexp" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/acm" @@ -87,17 +86,3 @@ func tagsToMapACM(ts []*acm.Tag) map[string]string { return result } - -// compare a tag against a list of strings and checks if it should -// be ignored or not -func tagIgnoredACM(t *acm.Tag) bool { - filter := []string{"^aws:"} - for _, v := range filter { - log.Printf("[DEBUG] Matching %v with %v\n", v, *t.Key) - if r, _ := regexp.MatchString(v, *t.Key); r == true { - log.Printf("[DEBUG] Found AWS specific tag %s (val: %s), ignoring.\n", *t.Key, *t.Value) - return true - } - } - return false -} diff --git a/aws/tagsACMPCA.go b/aws/tagsACMPCA.go new file mode 100644 index 00000000000..f497f3aee5b --- /dev/null +++ b/aws/tagsACMPCA.go @@ -0,0 +1,50 @@ +package aws + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/acmpca" +) + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsACMPCA(oldTags, newTags []*acmpca.Tag) ([]*acmpca.Tag, []*acmpca.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[aws.StringValue(t.Key)] = aws.StringValue(t.Value) + } + + // Build the list of what to remove + var remove []*acmpca.Tag + for _, t := range oldTags { + old, ok := create[aws.StringValue(t.Key)] + if !ok || old != aws.StringValue(t.Value) { + // Delete it! 
+ remove = append(remove, t) + } + } + + return tagsFromMapACMPCA(create), remove +} + +func tagsFromMapACMPCA(m map[string]interface{}) []*acmpca.Tag { + result := []*acmpca.Tag{} + for k, v := range m { + result = append(result, &acmpca.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + }) + } + + return result +} + +func tagsToMapACMPCA(ts []*acmpca.Tag) map[string]string { + result := map[string]string{} + for _, t := range ts { + result[aws.StringValue(t.Key)] = aws.StringValue(t.Value) + } + + return result +} diff --git a/aws/tagsACMPCA_test.go b/aws/tagsACMPCA_test.go new file mode 100644 index 00000000000..9c3183be494 --- /dev/null +++ b/aws/tagsACMPCA_test.go @@ -0,0 +1,57 @@ +package aws + +import ( + "reflect" + "testing" +) + +func TestDiffTagsACMPCA(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsACMPCA(tagsFromMapACMPCA(tc.Old), tagsFromMapACMPCA(tc.New)) + cm := tagsToMapACMPCA(c) + rm := tagsToMapACMPCA(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} diff --git a/aws/tagsACM_test.go b/aws/tagsACM_test.go index 14213437d76..6c5ec130f3a 100644 --- a/aws/tagsACM_test.go +++ b/aws/tagsACM_test.go @@ -3,9 +3,6 @@ package aws import ( "reflect" "testing" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/acm" ) func TestDiffTagsACM(t *testing.T) { @@ -58,20 +55,3 @@ func TestDiffTagsACM(t *testing.T) { } } } - -func TestIgnoringTagsACM(t *testing.T) { - var ignoredTags []*acm.Tag - ignoredTags = append(ignoredTags, &acm.Tag{ - Key: aws.String("aws:cloudformation:logical-id"), - Value: aws.String("foo"), - }) - ignoredTags = append(ignoredTags, &acm.Tag{ - Key: aws.String("aws:foo:bar"), - Value: aws.String("baz"), - }) - for _, tag := range ignoredTags { - if !tagIgnoredACM(tag) { - t.Fatalf("Tag %v with value %v not ignored, but should be!", *tag.Key, *tag.Value) - } - } -} diff --git a/aws/tagsBeanstalk_test.go b/aws/tagsBeanstalk_test.go index e6dbb09829d..46ebfb23755 100644 --- a/aws/tagsBeanstalk_test.go +++ b/aws/tagsBeanstalk_test.go @@ -1,14 +1,11 @@ package aws import ( - "fmt" "reflect" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elasticbeanstalk" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) func TestDiffBeanstalkTags(t *testing.T) { @@ -80,26 +77,3 @@ func TestIgnoringTagsBeanstalk(t *testing.T) { } } } - -// testAccCheckTags can be used to check the tags on a resource. 
-func testAccCheckBeanstalkTags( - ts *[]*elasticbeanstalk.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapBeanstalk(*ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tagsCodeBuild.go b/aws/tagsCodeBuild.go index 3302d742638..0a300697239 100644 --- a/aws/tagsCodeBuild.go +++ b/aws/tagsCodeBuild.go @@ -1,36 +1,10 @@ package aws import ( - "log" - "regexp" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/codebuild" ) -// diffTags takes our tags locally and the ones remotely and returns -// the set of tags that must be created, and the set of tags that must -// be destroyed. -func diffTagsCodeBuild(oldTags, newTags []*codebuild.Tag) ([]*codebuild.Tag, []*codebuild.Tag) { - // First, we're creating everything we have - create := make(map[string]interface{}) - for _, t := range newTags { - create[*t.Key] = *t.Value - } - - // Build the list of what to remove - var remove []*codebuild.Tag - for _, t := range oldTags { - old, ok := create[*t.Key] - if !ok || old != *t.Value { - // Delete it! - remove = append(remove, t) - } - } - - return tagsFromMapCodeBuild(create), remove -} - func tagsFromMapCodeBuild(m map[string]interface{}) []*codebuild.Tag { result := []*codebuild.Tag{} for k, v := range m { @@ -51,17 +25,3 @@ func tagsToMapCodeBuild(ts []*codebuild.Tag) map[string]string { return result } - -// compare a tag against a list of strings and checks if it should -// be ignored or not -func tagIgnoredCodeBuild(t *codebuild.Tag) bool { - filter := []string{"^aws:"} - for _, v := range filter { - log.Printf("[DEBUG] Matching %v with %v\n", v, *t.Key) - if r, _ := regexp.MatchString(v, *t.Key); r == true { - log.Printf("[DEBUG] Found AWS specific tag %s (val: %s), ignoring.\n", *t.Key, *t.Value) - return true - } - } - return false -} diff --git a/aws/tagsCodeBuild_test.go b/aws/tagsCodeBuild_test.go deleted file mode 100644 index 82a3af899ee..00000000000 --- a/aws/tagsCodeBuild_test.go +++ /dev/null @@ -1,103 +0,0 @@ -package aws - -import ( - "fmt" - "reflect" - "testing" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/codebuild" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" -) - -func TestDiffTagsCodeBuild(t *testing.T) { - cases := []struct { - Old, New map[string]interface{} - Create, Remove map[string]string - }{ - // Basic add/remove - { - Old: map[string]interface{}{ - "foo": "bar", - }, - New: map[string]interface{}{ - "bar": "baz", - }, - Create: map[string]string{ - "bar": "baz", - }, - Remove: map[string]string{ - "foo": "bar", - }, - }, - - // Modify - { - Old: map[string]interface{}{ - "foo": "bar", - }, - New: map[string]interface{}{ - "foo": "baz", - }, - Create: map[string]string{ - "foo": "baz", - }, - Remove: map[string]string{ - "foo": "bar", - }, - }, - } - - for i, tc := range cases { - c, r := diffTagsCodeBuild(tagsFromMapCodeBuild(tc.Old), tagsFromMapCodeBuild(tc.New)) - cm := tagsToMapCodeBuild(c) - rm := tagsToMapCodeBuild(r) - if !reflect.DeepEqual(cm, tc.Create) { - t.Fatalf("%d: bad create: %#v", i, cm) - } - if !reflect.DeepEqual(rm, tc.Remove) { - t.Fatalf("%d: bad remove: %#v", i, rm) - } - } -} 
- -func TestIgnoringTagsCodeBuild(t *testing.T) { - var ignoredTags []*codebuild.Tag - ignoredTags = append(ignoredTags, &codebuild.Tag{ - Key: aws.String("aws:cloudformation:logical-id"), - Value: aws.String("foo"), - }) - ignoredTags = append(ignoredTags, &codebuild.Tag{ - Key: aws.String("aws:foo:bar"), - Value: aws.String("baz"), - }) - for _, tag := range ignoredTags { - if !tagIgnoredCodeBuild(tag) { - t.Fatalf("Tag %v with value %v not ignored, but should be!", *tag.Key, *tag.Value) - } - } -} - -// testAccCheckTags can be used to check the tags on a resource. -func testAccCheckTagsCodeBuild( - ts *[]*codebuild.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapCodeBuild(*ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tagsDAX_test.go b/aws/tagsDAX_test.go index 2fc65c9cc2b..6838b5c64a2 100644 --- a/aws/tagsDAX_test.go +++ b/aws/tagsDAX_test.go @@ -1,14 +1,11 @@ package aws import ( - "fmt" "reflect" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/dax" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) func TestDaxTagsDiff(t *testing.T) { @@ -78,26 +75,3 @@ func TestTagsDaxIgnore(t *testing.T) { } } } - -// testAccCheckTags can be used to check the tags on a resource. -func testAccCheckDaxTags( - ts []*dax.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapDax(ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tagsECS.go b/aws/tagsECS.go new file mode 100644 index 00000000000..9a9854288dd --- /dev/null +++ b/aws/tagsECS.go @@ -0,0 +1,77 @@ +package aws + +import ( + "log" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ecs" +) + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsECS(oldTags, newTags []*ecs.Tag) ([]*ecs.Tag, []*ecs.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[aws.StringValue(t.Key)] = aws.StringValue(t.Value) + } + + // Build the list of what to remove + var remove []*ecs.Tag + for _, t := range oldTags { + old, ok := create[aws.StringValue(t.Key)] + if !ok || old != aws.StringValue(t.Value) { + // Delete it! + remove = append(remove, t) + } else if ok { + // already present so remove from new + delete(create, aws.StringValue(t.Key)) + } + } + + return tagsFromMapECS(create), remove +} + +// tagsFromMap returns the tags for the given map of data. 
+func tagsFromMapECS(tagMap map[string]interface{}) []*ecs.Tag { + tags := make([]*ecs.Tag, 0, len(tagMap)) + for tagKey, tagValueRaw := range tagMap { + tag := &ecs.Tag{ + Key: aws.String(tagKey), + Value: aws.String(tagValueRaw.(string)), + } + if !tagIgnoredECS(tag) { + tags = append(tags, tag) + } + } + + return tags +} + +// tagsToMap turns the list of tags into a map. +func tagsToMapECS(tags []*ecs.Tag) map[string]string { + tagMap := make(map[string]string) + for _, tag := range tags { + if !tagIgnoredECS(tag) { + tagMap[aws.StringValue(tag.Key)] = aws.StringValue(tag.Value) + } + } + + return tagMap +} + +// compare a tag against a list of strings and checks if it should +// be ignored or not +func tagIgnoredECS(t *ecs.Tag) bool { + filter := []string{"^aws:"} + for _, v := range filter { + log.Printf("[DEBUG] Matching %v with %v\n", v, aws.StringValue(t.Key)) + if r, _ := regexp.MatchString(v, aws.StringValue(t.Key)); r == true { + log.Printf("[DEBUG] Found AWS specific tag %s (val: %s), ignoring.\n", aws.StringValue(t.Key), aws.StringValue(t.Value)) + return true + } + } + return false +} diff --git a/aws/tagsECS_test.go b/aws/tagsECS_test.go new file mode 100644 index 00000000000..cc9289df4a7 --- /dev/null +++ b/aws/tagsECS_test.go @@ -0,0 +1,110 @@ +package aws + +import ( + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ecs" +) + +func TestDiffECSTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Add + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "bar", + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{}, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Overlap + { + Old: map[string]interface{}{ + "foo": "bar", + "hello": "world", + }, + New: map[string]interface{}{ + "foo": "baz", + "hello": "world", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Remove + { + Old: map[string]interface{}{ + "foo": "bar", + "bar": "baz", + }, + New: map[string]interface{}{ + "foo": "bar", + }, + Create: map[string]string{}, + Remove: map[string]string{ + "bar": "baz", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsECS(tagsFromMapECS(tc.Old), tagsFromMapECS(tc.New)) + cm := tagsToMapECS(c) + rm := tagsToMapECS(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +func TestIgnoringTagsECS(t *testing.T) { + ignoredTags := []*ecs.Tag{ + { + Key: aws.String("aws:cloudformation:logical-id"), + Value: aws.String("foo"), + }, + { + Key: aws.String("aws:foo:bar"), + Value: aws.String("baz"), + }, + } + for _, tag := range ignoredTags { + if !tagIgnoredECS(tag) { + t.Fatalf("Tag %v with value %v not ignored, but should be!", *tag.Key, *tag.Value) + } + } +} diff --git a/aws/tagsEC_test.go b/aws/tagsEC_test.go index 3ea3a8d7085..a67aae6bac3 100644 --- a/aws/tagsEC_test.go +++ b/aws/tagsEC_test.go @@ -1,14 +1,11 @@ package aws import ( - "fmt" "reflect" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elasticache" - 
"github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) func TestDiffelasticacheTags(t *testing.T) { @@ -78,26 +75,3 @@ func TestIgnoringTagsEC(t *testing.T) { } } } - -// testAccCheckTags can be used to check the tags on a resource. -func testAccCheckelasticacheTags( - ts []*elasticache.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapEC(ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tagsEFS_test.go b/aws/tagsEFS_test.go index 58ed72da178..a354fbb3c49 100644 --- a/aws/tagsEFS_test.go +++ b/aws/tagsEFS_test.go @@ -1,14 +1,11 @@ package aws import ( - "fmt" "reflect" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/efs" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) func TestDiffEFSTags(t *testing.T) { @@ -78,26 +75,3 @@ func TestIgnoringTagsEFS(t *testing.T) { } } } - -// testAccCheckTags can be used to check the tags on a resource. -func testAccCheckEFSTags( - ts *[]*efs.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapEFS(*ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tagsIAM.go b/aws/tagsIAM.go new file mode 100644 index 00000000000..17a547c5420 --- /dev/null +++ b/aws/tagsIAM.go @@ -0,0 +1,85 @@ +package aws + +import ( + "log" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" +) + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsIAM(oldTags, newTags []*iam.Tag) ([]*iam.Tag, []*iam.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[aws.StringValue(t.Key)] = aws.StringValue(t.Value) + } + + // Build the list of what to remove + var remove []*iam.Tag + for _, t := range oldTags { + old, ok := create[aws.StringValue(t.Key)] + if !ok || old != aws.StringValue(t.Value) { + // Delete it! + remove = append(remove, t) + } else if ok { + delete(create, aws.StringValue(t.Key)) + } + } + + return tagsFromMapIAM(create), remove +} + +// tagsFromMapIAM returns the tags for the given map of data for IAM. +func tagsFromMapIAM(m map[string]interface{}) []*iam.Tag { + result := make([]*iam.Tag, 0, len(m)) + for k, v := range m { + t := &iam.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + } + if !tagIgnoredIAM(t) { + result = append(result, t) + } + } + + return result +} + +// tagsToMapIAM turns the list of IAM tags into a map. 
+func tagsToMapIAM(ts []*iam.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + if !tagIgnoredIAM(t) { + result[aws.StringValue(t.Key)] = aws.StringValue(t.Value) + } + } + + return result +} + +// compare a tag against a list of strings and checks if it should +// be ignored or not +func tagIgnoredIAM(t *iam.Tag) bool { + filter := []string{"^aws:"} + for _, v := range filter { + log.Printf("[DEBUG] Matching %v with %v\n", v, *t.Key) + if r, _ := regexp.MatchString(v, *t.Key); r == true { + log.Printf("[DEBUG] Found AWS specific tag %s (val: %s), ignoring.\n", *t.Key, *t.Value) + return true + } + } + return false +} + +// tagKeysIam returns the keys for the list of IAM tags +func tagKeysIam(ts []*iam.Tag) []*string { + result := make([]*string, 0, len(ts)) + for _, t := range ts { + result = append(result, t.Key) + } + return result +} diff --git a/aws/tagsIAM_test.go b/aws/tagsIAM_test.go new file mode 100644 index 00000000000..4a74062f5e9 --- /dev/null +++ b/aws/tagsIAM_test.go @@ -0,0 +1,111 @@ +package aws + +import ( + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" +) + +// go test -v -run="TestDiffIAMTags" +func TestDiffIAMTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Add + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "bar", + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{}, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Overlap + { + Old: map[string]interface{}{ + "foo": "bar", + "hello": "world", + }, + New: map[string]interface{}{ + "foo": "baz", + "hello": "world", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Remove + { + Old: map[string]interface{}{ + "foo": "bar", + "bar": "baz", + }, + New: map[string]interface{}{ + "foo": "bar", + }, + Create: map[string]string{}, + Remove: map[string]string{ + "bar": "baz", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsIAM(tagsFromMapIAM(tc.Old), tagsFromMapIAM(tc.New)) + cm := tagsToMapIAM(c) + rm := tagsToMapIAM(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// go test -v -run="TestIgnoringTagsIAM" +func TestIgnoringTagsIAM(t *testing.T) { + var ignoredTags []*iam.Tag + ignoredTags = append(ignoredTags, &iam.Tag{ + Key: aws.String("aws:cloudformation:logical-id"), + Value: aws.String("foo"), + }) + ignoredTags = append(ignoredTags, &iam.Tag{ + Key: aws.String("aws:foo:bar"), + Value: aws.String("baz"), + }) + for _, tag := range ignoredTags { + if !tagIgnoredIAM(tag) { + t.Fatalf("Tag %v with value %v not ignored, but should be!", *tag.Key, *tag.Value) + } + } +} diff --git a/aws/tagsInspector.go b/aws/tagsInspector.go index ef18f33c2ec..68539f738fc 100644 --- a/aws/tagsInspector.go +++ b/aws/tagsInspector.go @@ -8,29 +8,6 @@ import ( "github.com/aws/aws-sdk-go/service/inspector" ) -// diffTags takes our tags locally and the ones remotely and returns -// the set of tags that must be created, and the set of tags that must -// be destroyed. 
-func diffTagsInspector(oldTags, newTags []*inspector.ResourceGroupTag) ([]*inspector.ResourceGroupTag, []*inspector.ResourceGroupTag) { - // First, we're creating everything we have - create := make(map[string]interface{}) - for _, t := range newTags { - create[*t.Key] = *t.Value - } - - // Build the list of what to remove - var remove []*inspector.ResourceGroupTag - for _, t := range oldTags { - old, ok := create[*t.Key] - if !ok || old != *t.Value { - // Delete it! - remove = append(remove, t) - } - } - - return tagsFromMapInspector(create), remove -} - // tagsFromMap returns the tags for the given map of data. func tagsFromMapInspector(m map[string]interface{}) []*inspector.ResourceGroupTag { var result []*inspector.ResourceGroupTag @@ -47,18 +24,6 @@ func tagsFromMapInspector(m map[string]interface{}) []*inspector.ResourceGroupTa return result } -// tagsToMap turns the list of tags into a map. -func tagsToMapInspector(ts []*inspector.ResourceGroupTag) map[string]string { - result := make(map[string]string) - for _, t := range ts { - if !tagIgnoredInspector(t) { - result[*t.Key] = *t.Value - } - } - - return result -} - // compare a tag against a list of strings and checks if it should // be ignored or not func tagIgnoredInspector(t *inspector.ResourceGroupTag) bool { diff --git a/aws/tagsKMS_test.go b/aws/tagsKMS_test.go index a1d7a770e25..ece14b9bcc5 100644 --- a/aws/tagsKMS_test.go +++ b/aws/tagsKMS_test.go @@ -1,14 +1,11 @@ package aws import ( - "fmt" "reflect" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kms" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) // go test -v -run="TestDiffKMSTags" @@ -80,26 +77,3 @@ func TestIgnoringTagsKMS(t *testing.T) { } } } - -// testAccCheckTags can be used to check the tags on a resource. -func testAccCheckKMSTags( - ts []*kms.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapKMS(ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tagsNeptune.go b/aws/tagsNeptune.go new file mode 100644 index 00000000000..8f47bb3d833 --- /dev/null +++ b/aws/tagsNeptune.go @@ -0,0 +1,133 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. 
It expects the +// tags field to be named "tags" +func setTagsNeptune(conn *neptune.Neptune, d *schema.ResourceData, arn string) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsNeptune(tagsFromMapNeptune(o), tagsFromMapNeptune(n)) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %s", remove) + k := make([]*string, len(remove), len(remove)) + for i, t := range remove { + k[i] = t.Key + } + + _, err := conn.RemoveTagsFromResource(&neptune.RemoveTagsFromResourceInput{ + ResourceName: aws.String(arn), + TagKeys: k, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %s", create) + _, err := conn.AddTagsToResource(&neptune.AddTagsToResourceInput{ + ResourceName: aws.String(arn), + Tags: create, + }) + if err != nil { + return err + } + } + } + + return nil +} + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsNeptune(oldTags, newTags []*neptune.Tag) ([]*neptune.Tag, []*neptune.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[aws.StringValue(t.Key)] = aws.StringValue(t.Value) + } + + // Build the list of what to remove + var remove []*neptune.Tag + for _, t := range oldTags { + old, ok := create[aws.StringValue(t.Key)] + if !ok || old != aws.StringValue(t.Value) { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapNeptune(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapNeptune(m map[string]interface{}) []*neptune.Tag { + result := make([]*neptune.Tag, 0, len(m)) + for k, v := range m { + t := &neptune.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + } + if !tagIgnoredNeptune(t) { + result = append(result, t) + } + } + + return result +} + +// tagsToMap turns the list of tags into a map. 
+func tagsToMapNeptune(ts []*neptune.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + if !tagIgnoredNeptune(t) { + result[aws.StringValue(t.Key)] = aws.StringValue(t.Value) + } + } + + return result +} + +// compare a tag against a list of strings and checks if it should +// be ignored or not +func tagIgnoredNeptune(t *neptune.Tag) bool { + filter := []string{"^aws:"} + for _, v := range filter { + log.Printf("[DEBUG] Matching %v with %v\n", v, aws.StringValue(t.Key)) + if r, _ := regexp.MatchString(v, aws.StringValue(t.Key)); r == true { + log.Printf("[DEBUG] Found AWS specific tag %s (val: %s), ignoring.\n", aws.StringValue(t.Key), aws.StringValue(t.Value)) + return true + } + } + return false +} + +func saveTagsNeptune(conn *neptune.Neptune, d *schema.ResourceData, arn string) error { + resp, err := conn.ListTagsForResource(&neptune.ListTagsForResourceInput{ + ResourceName: aws.String(arn), + }) + + if err != nil { + return fmt.Errorf("Error retreiving tags for ARN: %s", arn) + } + + var dt []*neptune.Tag + if len(resp.TagList) > 0 { + dt = resp.TagList + } + + return d.Set("tags", tagsToMapNeptune(dt)) +} diff --git a/aws/tagsNeptune_test.go b/aws/tagsNeptune_test.go new file mode 100644 index 00000000000..a5389bc493e --- /dev/null +++ b/aws/tagsNeptune_test.go @@ -0,0 +1,77 @@ +package aws + +import ( + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/neptune" +) + +func TestDiffNeptuneTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsNeptune(tagsFromMapNeptune(tc.Old), tagsFromMapNeptune(tc.New)) + cm := tagsToMapNeptune(c) + rm := tagsToMapNeptune(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +func TestIgnoringTagsNeptune(t *testing.T) { + var ignoredTags []*neptune.Tag + ignoredTags = append(ignoredTags, &neptune.Tag{ + Key: aws.String("aws:cloudformation:logical-id"), + Value: aws.String("foo"), + }) + ignoredTags = append(ignoredTags, &neptune.Tag{ + Key: aws.String("aws:foo:bar"), + Value: aws.String("baz"), + }) + for _, tag := range ignoredTags { + if !tagIgnoredNeptune(tag) { + t.Fatalf("Tag %v with value %v not ignored, but should be!", *tag.Key, *tag.Value) + } + } +} diff --git a/aws/tagsRDS.go b/aws/tagsRDS.go index 2d64113482b..ef7a869d383 100644 --- a/aws/tagsRDS.go +++ b/aws/tagsRDS.go @@ -107,7 +107,7 @@ func saveTagsRDS(conn *rds.RDS, d *schema.ResourceData, arn string) error { }) if err != nil { - return fmt.Errorf("[DEBUG] Error retreiving tags for ARN: %s", arn) + return fmt.Errorf("Error retreiving tags for ARN: %s", arn) } var dt []*rds.Tag diff --git a/aws/tagsRDS_test.go b/aws/tagsRDS_test.go index cc2887daa69..9eceb271792 100644 --- a/aws/tagsRDS_test.go +++ b/aws/tagsRDS_test.go @@ -1,14 +1,11 @@ package aws import ( - "fmt" "reflect" "testing" 
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/rds" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) func TestDiffRDSTags(t *testing.T) { @@ -78,26 +75,3 @@ func TestIgnoringTagsRDS(t *testing.T) { } } } - -// testAccCheckTags can be used to check the tags on a resource. -func testAccCheckRDSTags( - ts []*rds.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapRDS(ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tagsSSM_test.go b/aws/tagsSSM_test.go index 33792ae6985..06a0dbc47ea 100644 --- a/aws/tagsSSM_test.go +++ b/aws/tagsSSM_test.go @@ -1,14 +1,11 @@ package aws import ( - "fmt" "reflect" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ssm" - "github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) // go test -v -run="TestDiffSSMTags" @@ -80,26 +77,3 @@ func TestIgnoringTagsSSM(t *testing.T) { } } } - -// testAccCheckTags can be used to check the tags on a resource. -func testAccCheckSSMTags( - ts []*ssm.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapSSM(ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tagsSecretsManager.go b/aws/tagsSecretsManager.go new file mode 100644 index 00000000000..55921831797 --- /dev/null +++ b/aws/tagsSecretsManager.go @@ -0,0 +1,74 @@ +package aws + +import ( + "log" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/secretsmanager" +) + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. +func diffTagsSecretsManager(oldTags, newTags []*secretsmanager.Tag) ([]*secretsmanager.Tag, []*secretsmanager.Tag) { + // First, we're creating everything we have + create := make(map[string]interface{}) + for _, t := range newTags { + create[*t.Key] = *t.Value + } + + // Build the list of what to remove + var remove []*secretsmanager.Tag + for _, t := range oldTags { + old, ok := create[*t.Key] + if !ok || old != *t.Value { + // Delete it! + remove = append(remove, t) + } + } + + return tagsFromMapSecretsManager(create), remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapSecretsManager(m map[string]interface{}) []*secretsmanager.Tag { + result := make([]*secretsmanager.Tag, 0, len(m)) + for k, v := range m { + t := &secretsmanager.Tag{ + Key: aws.String(k), + Value: aws.String(v.(string)), + } + if !tagIgnoredSecretsManager(t) { + result = append(result, t) + } + } + + return result +} + +// tagsToMap turns the list of tags into a map. 
+func tagsToMapSecretsManager(ts []*secretsmanager.Tag) map[string]string { + result := make(map[string]string) + for _, t := range ts { + if !tagIgnoredSecretsManager(t) { + result[*t.Key] = *t.Value + } + } + + return result +} + +// compare a tag against a list of strings and checks if it should +// be ignored or not +func tagIgnoredSecretsManager(t *secretsmanager.Tag) bool { + filter := []string{"^aws:"} + for _, v := range filter { + log.Printf("[DEBUG] Matching %v with %v\n", v, *t.Key) + if r, _ := regexp.MatchString(v, *t.Key); r == true { + log.Printf("[DEBUG] Found AWS specific tag %s (val: %s), ignoring.\n", *t.Key, *t.Value) + return true + } + } + return false +} diff --git a/aws/tagsSecretsManager_test.go b/aws/tagsSecretsManager_test.go new file mode 100644 index 00000000000..d6eebe5343c --- /dev/null +++ b/aws/tagsSecretsManager_test.go @@ -0,0 +1,79 @@ +package aws + +import ( + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/secretsmanager" +) + +// go test -v -run="TestDiffSecretsManagerTags" +func TestDiffSecretsManagerTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsSecretsManager(tagsFromMapSecretsManager(tc.Old), tagsFromMapSecretsManager(tc.New)) + cm := tagsToMapSecretsManager(c) + rm := tagsToMapSecretsManager(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// go test -v -run="TestIgnoringTagsSecretsManager" +func TestIgnoringTagsSecretsManager(t *testing.T) { + var ignoredTags []*secretsmanager.Tag + ignoredTags = append(ignoredTags, &secretsmanager.Tag{ + Key: aws.String("aws:cloudformation:logical-id"), + Value: aws.String("foo"), + }) + ignoredTags = append(ignoredTags, &secretsmanager.Tag{ + Key: aws.String("aws:foo:bar"), + Value: aws.String("baz"), + }) + for _, tag := range ignoredTags { + if !tagIgnoredSecretsManager(tag) { + t.Fatalf("Tag %v with value %v not ignored, but should be!", *tag.Key, *tag.Value) + } + } +} diff --git a/aws/tags_apigateway.go b/aws/tags_apigateway.go new file mode 100644 index 00000000000..9168d39e2af --- /dev/null +++ b/aws/tags_apigateway.go @@ -0,0 +1,44 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/apigateway" + "github.com/hashicorp/terraform/helper/schema" +) + +func setTagsAPIGatewayStage(conn *apigateway.APIGateway, d *schema.ResourceData, arn string) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsGeneric(o, n) + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + keys := make([]*string, 0, len(remove)) + for k := range remove { + keys = append(keys, aws.String(k)) + } + + _, err := 
conn.UntagResource(&apigateway.UntagResourceInput{ + ResourceArn: aws.String(arn), + TagKeys: keys, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + _, err := conn.TagResource(&apigateway.TagResourceInput{ + ResourceArn: aws.String(arn), + Tags: create, + }) + if err != nil { + return err + } + } + } + return nil +} diff --git a/aws/tags_dms_test.go b/aws/tags_dms_test.go index 630ace37296..78388f74dd1 100644 --- a/aws/tags_dms_test.go +++ b/aws/tags_dms_test.go @@ -1,11 +1,11 @@ package aws import ( + "reflect" "testing" "github.com/aws/aws-sdk-go/aws" dms "github.com/aws/aws-sdk-go/service/databasemigrationservice" - "reflect" ) func TestDmsTagsToMap(t *testing.T) { diff --git a/aws/tags_kinesis.go b/aws/tags_kinesis.go index a5622e95d61..fe1e5f28a80 100644 --- a/aws/tags_kinesis.go +++ b/aws/tags_kinesis.go @@ -9,6 +9,9 @@ import ( "github.com/hashicorp/terraform/helper/schema" ) +// Kinesis requires tagging operations be split into 10 tag batches +const kinesisTagBatchLimit = 10 + // setTags is a helper to set the tags for a resource. It expects the // tags field to be named "tags" func setTagsKinesis(conn *kinesis.Kinesis, d *schema.ResourceData) error { @@ -24,34 +27,51 @@ func setTagsKinesis(conn *kinesis.Kinesis, d *schema.ResourceData) error { // Set tags if len(remove) > 0 { log.Printf("[DEBUG] Removing tags: %#v", remove) - k := make([]*string, len(remove), len(remove)) - for i, t := range remove { - k[i] = t.Key - } - _, err := conn.RemoveTagsFromStream(&kinesis.RemoveTagsFromStreamInput{ - StreamName: aws.String(sn), - TagKeys: k, - }) - if err != nil { - return err + tagKeysBatch := make([]*string, 0, kinesisTagBatchLimit) + tagKeysBatches := make([][]*string, 0, len(remove)/kinesisTagBatchLimit+1) + for _, tag := range remove { + if len(tagKeysBatch) == kinesisTagBatchLimit { + tagKeysBatches = append(tagKeysBatches, tagKeysBatch) + tagKeysBatch = make([]*string, 0, kinesisTagBatchLimit) + } + tagKeysBatch = append(tagKeysBatch, tag.Key) + } + tagKeysBatches = append(tagKeysBatches, tagKeysBatch) + + for _, tagKeys := range tagKeysBatches { + _, err := conn.RemoveTagsFromStream(&kinesis.RemoveTagsFromStreamInput{ + StreamName: aws.String(sn), + TagKeys: tagKeys, + }) + if err != nil { + return err + } } } if len(create) > 0 { - log.Printf("[DEBUG] Creating tags: %#v", create) - t := make(map[string]*string) + + tagsBatch := make(map[string]*string) + tagsBatches := make([]map[string]*string, 0, len(create)/kinesisTagBatchLimit+1) for _, tag := range create { - t[*tag.Key] = tag.Value + if len(tagsBatch) == kinesisTagBatchLimit { + tagsBatches = append(tagsBatches, tagsBatch) + tagsBatch = make(map[string]*string) + } + tagsBatch[aws.StringValue(tag.Key)] = tag.Value } - - _, err := conn.AddTagsToStream(&kinesis.AddTagsToStreamInput{ - StreamName: aws.String(sn), - Tags: t, - }) - if err != nil { - return err + tagsBatches = append(tagsBatches, tagsBatch) + + for _, tags := range tagsBatches { + _, err := conn.AddTagsToStream(&kinesis.AddTagsToStreamInput{ + StreamName: aws.String(sn), + Tags: tags, + }) + if err != nil { + return err + } } } } diff --git a/aws/tags_kinesis_test.go b/aws/tags_kinesis_test.go index 63504b10bf1..2c499f1fb92 100644 --- a/aws/tags_kinesis_test.go +++ b/aws/tags_kinesis_test.go @@ -1,14 +1,11 @@ package aws import ( - "fmt" "reflect" "testing" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kinesis" - 
"github.com/hashicorp/terraform/helper/resource" - "github.com/hashicorp/terraform/terraform" ) func TestDiffTagsKinesis(t *testing.T) { @@ -78,25 +75,3 @@ func TestIgnoringTagsKinesis(t *testing.T) { } } } - -// testAccCheckTags can be used to check the tags on a resource. -func testAccCheckKinesisTags(ts []*kinesis.Tag, key string, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - m := tagsToMapKinesis(ts) - v, ok := m[key] - if value != "" && !ok { - return fmt.Errorf("Missing tag: %s", key) - } else if value == "" && ok { - return fmt.Errorf("Extra tag: %s", key) - } - if value == "" { - return nil - } - - if v != value { - return fmt.Errorf("%s: bad value: %s", key, v) - } - - return nil - } -} diff --git a/aws/tags_test.go b/aws/tags_test.go index 1777c376408..0d4fa606445 100644 --- a/aws/tags_test.go +++ b/aws/tags_test.go @@ -16,20 +16,19 @@ func TestDiffTags(t *testing.T) { Old, New map[string]interface{} Create, Remove map[string]string }{ - // Basic add/remove + // Add { Old: map[string]interface{}{ "foo": "bar", }, New: map[string]interface{}{ + "foo": "bar", "bar": "baz", }, Create: map[string]string{ "bar": "baz", }, - Remove: map[string]string{ - "foo": "bar", - }, + Remove: map[string]string{}, }, // Modify @@ -47,6 +46,39 @@ func TestDiffTags(t *testing.T) { "foo": "bar", }, }, + + // Overlap + { + Old: map[string]interface{}{ + "foo": "bar", + "hello": "world", + }, + New: map[string]interface{}{ + "foo": "baz", + "hello": "world", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Remove + { + Old: map[string]interface{}{ + "foo": "bar", + "bar": "baz", + }, + New: map[string]interface{}{ + "foo": "bar", + }, + Create: map[string]string{}, + Remove: map[string]string{ + "bar": "baz", + }, + }, } for i, tc := range cases { @@ -80,6 +112,50 @@ func TestIgnoringTags(t *testing.T) { } } +func TestTagsMapToHash(t *testing.T) { + cases := []struct { + Left, Right map[string]interface{} + MustBeEqual bool + }{ + { + Left: map[string]interface{}{}, + Right: map[string]interface{}{}, + MustBeEqual: true, + }, + { + Left: map[string]interface{}{ + "foo": "bar", + "bar": "baz", + }, + Right: map[string]interface{}{ + "bar": "baz", + "foo": "bar", + }, + MustBeEqual: true, + }, + { + Left: map[string]interface{}{ + "foo": "bar", + }, + Right: map[string]interface{}{ + "bar": "baz", + }, + MustBeEqual: false, + }, + } + + for i, tc := range cases { + l := tagsMapToHash(tc.Left) + r := tagsMapToHash(tc.Right) + if tc.MustBeEqual && (l != r) { + t.Fatalf("%d: Hashes don't match", i) + } + if !tc.MustBeEqual && (l == r) { + t.Logf("%d: Hashes match", i) + } + } +} + // testAccCheckTags can be used to check the tags on a resource. 
func testAccCheckTags( ts *[]*ec2.Tag, key string, value string) resource.TestCheckFunc { diff --git a/aws/test-fixtures/cloudfront-public-key.pem b/aws/test-fixtures/cloudfront-public-key.pem new file mode 100644 index 00000000000..d25ae696d39 --- /dev/null +++ b/aws/test-fixtures/cloudfront-public-key.pem @@ -0,0 +1,9 @@ +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtZCjGTEV/ttumSJBnsc2 +SUzPY/wJjfNchT2mjWivg/S7HuwKp1tDHizxrXTVuZLdDKceVcSclS7otzwfmGxM +Gjk2/CM2hEMThT86q76TrbH6hvGa25n8piBOkhwbwdbvmg3DRJiLR9bqw+nAPt/n +1ggTcwazm1Bw7y112Ardop+buWirS3w2C6au2OdloaaLz5N1eHEHQuRpnmD+UoVR +OgGeaLaU7FxKkpOps4Giu4vgjcefGlM3MrqG4FAzDMtgGZdJm4U+bldYmk0+J1yv +JA0FGd9g9GhjHMT9UznxXccw7PhHQsXn4lQfOn47uO9KIq170t8FeHKEzbCMsmyA +2QIDAQAB +-----END PUBLIC KEY----- diff --git a/aws/test-fixtures/lambda_invocation.js b/aws/test-fixtures/lambda_invocation.js new file mode 100644 index 00000000000..abc0191f982 --- /dev/null +++ b/aws/test-fixtures/lambda_invocation.js @@ -0,0 +1,6 @@ +exports.handler = async (event) => { + if (process.env.TEST_DATA) { + event.key3 = process.env.TEST_DATA; + } + return event; +} diff --git a/aws/test-fixtures/lambda_invocation.zip b/aws/test-fixtures/lambda_invocation.zip new file mode 100644 index 00000000000..b2bc4cde4e1 Binary files /dev/null and b/aws/test-fixtures/lambda_invocation.zip differ diff --git a/aws/test-fixtures/lambdapinpoint.zip b/aws/test-fixtures/lambdapinpoint.zip new file mode 100644 index 00000000000..259ef4e5443 Binary files /dev/null and b/aws/test-fixtures/lambdapinpoint.zip differ diff --git a/aws/test-fixtures/lambdatest_modified.zip b/aws/test-fixtures/lambdatest_modified.zip new file mode 100644 index 00000000000..336bdd72eb8 Binary files /dev/null and b/aws/test-fixtures/lambdatest_modified.zip differ diff --git a/aws/test-fixtures/public-ssh-key.pub b/aws/test-fixtures/public-ssh-key.pub new file mode 100644 index 00000000000..70f18ecfd04 --- /dev/null +++ b/aws/test-fixtures/public-ssh-key.pub @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCtCk0lzMj1gPEOjdfQ37AIxCyETqJBubaMWuB4bgvGHp8LvEghr2YDl2bml1JrE1EOcZhPnIwgyucryXKA959sTUlgbvaFN7vmpVze56Q9tVU6BJQxOdaRoy5FcQMET9LB6SdbXk+V4CkDMsQNaFXezpg98HgCj+V7+bBWsfI6U63IESlWKK7kraCom8EWxkQk4mk9fizE2I+KrtiqN4xcah02LFG6IMnS+Xy3CDhcpZeYzWOV6zhcf675UJOdg/pLgQbUhhiwTOJFgRo8IcvE3iBrRMz508ppx6vLLr8J+3B8ujykc+/3ZSGfQfx6rO+OuSskhG5FLI6icbQBtBzf terraform-provider-aws@hashicorp.com diff --git a/aws/utils.go b/aws/utils.go index bfca044cfb1..b701ddbeadd 100644 --- a/aws/utils.go +++ b/aws/utils.go @@ -5,6 +5,8 @@ import ( "encoding/json" "reflect" "regexp" + + "github.com/hashicorp/terraform/helper/resource" ) // Base64Encode encodes data if the input isn't already encoded using base64.StdEncoding.EncodeToString. 
@@ -40,3 +42,13 @@ func jsonBytesEqual(b1, b2 []byte) bool { return reflect.DeepEqual(o1, o2) } + +func isResourceNotFoundError(err error) bool { + _, ok := err.(*resource.NotFoundError) + return ok +} + +func isResourceTimeoutError(err error) bool { + timeoutErr, ok := err.(*resource.TimeoutError) + return ok && timeoutErr.LastError == nil +} diff --git a/aws/utils_test.go b/aws/utils_test.go index 8248f4384d2..785ad141ed3 100644 --- a/aws/utils_test.go +++ b/aws/utils_test.go @@ -1,6 +1,8 @@ package aws -import "testing" +import ( + "testing" +) var base64encodingTests = []struct { in []byte diff --git a/aws/validators.go b/aws/validators.go index a20a0321941..84e48ff7ad8 100644 --- a/aws/validators.go +++ b/aws/validators.go @@ -12,22 +12,73 @@ import ( "github.com/aws/aws-sdk-go/service/apigateway" "github.com/aws/aws-sdk-go/service/cognitoidentity" - "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" "github.com/aws/aws-sdk-go/service/configservice" - "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/s3" "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/resource" "github.com/hashicorp/terraform/helper/schema" "github.com/hashicorp/terraform/helper/structure" "github.com/hashicorp/terraform/helper/validation" ) -// When released, replace all usage with upstream validation function: -// https://github.com/hashicorp/terraform/pull/17484 -func validateRFC3339TimeString(v interface{}, k string) (ws []string, errors []error) { - if _, err := time.Parse(time.RFC3339, v.(string)); err != nil { - errors = append(errors, fmt.Errorf("%q: %s", k, err)) +// validateAny returns a SchemaValidateFunc which tests if the provided value +// passes any of the provided SchemaValidateFunc +// Temporarily added into AWS provider, but will be submitted upstream into provider SDK +func validateAny(validators ...schema.SchemaValidateFunc) schema.SchemaValidateFunc { + return func(i interface{}, k string) ([]string, []error) { + var allErrors []error + var allWarnings []string + for _, validator := range validators { + validatorWarnings, validatorErrors := validator(i, k) + if len(validatorWarnings) == 0 && len(validatorErrors) == 0 { + return []string{}, []error{} + } + allWarnings = append(allWarnings, validatorWarnings...) + allErrors = append(allErrors, validatorErrors...) + } + return allWarnings, allErrors + } +} + +// validateTypeStringNullableBoolean provides custom error messaging for TypeString booleans +// Some arguments require three values: true, false, and "" (unspecified). +// This ValidateFunc returns a custom message since the message with +// validation.StringInSlice([]string{"", "false", "true"}, false) is confusing: +// to be one of [ false true], got 1 +func validateTypeStringNullableBoolean(v interface{}, k string) (ws []string, es []error) { + value, ok := v.(string) + if !ok { + es = append(es, fmt.Errorf("expected type of %s to be string", k)) + return } + + for _, str := range []string{"", "0", "1", "false", "true"} { + if value == str { + return + } + } + + es = append(es, fmt.Errorf("expected %s to be one of [\"\", false, true], got %s", k, value)) + return +} + +// validateTypeStringNullableFloat provides custom error messaging for TypeString floats +// Some arguments require a floating point value or an unspecified, empty field. 
+func validateTypeStringNullableFloat(v interface{}, k string) (ws []string, es []error) { + value, ok := v.(string) + if !ok { + es = append(es, fmt.Errorf("expected type of %s to be string", k)) + return + } + + if value == "" { + return + } + + if _, err := strconv.ParseFloat(value, 64); err != nil { + es = append(es, fmt.Errorf("%s: cannot parse '%s' as float: %s", k, value, err)) + } + return } @@ -52,6 +103,27 @@ func validateRdsIdentifier(v interface{}, k string) (ws []string, errors []error return } +func validateNeptuneIdentifier(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + return +} + func validateRdsIdentifierPrefix(v interface{}, k string) (ws []string, errors []error) { value := v.(string) if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { @@ -69,6 +141,23 @@ func validateRdsIdentifierPrefix(v interface{}, k string) (ws []string, errors [ return } +func validateNeptuneIdentifierPrefix(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + return +} + func validateRdsEngine() schema.SchemaValidateFunc { return validation.StringInSlice([]string{ "aurora", @@ -77,6 +166,12 @@ func validateRdsEngine() schema.SchemaValidateFunc { }, false) } +func validateNeptuneEngine() schema.SchemaValidateFunc { + return validation.StringInSlice([]string{ + "neptune", + }, false) +} + func validateElastiCacheClusterId(v interface{}, k string) (ws []string, errors []error) { value := v.(string) if (len(value) < 1) || (len(value) > 20) { @@ -128,23 +223,23 @@ func validateDbParamGroupName(v interface{}, k string) (ws []string, errors []er value := v.(string) if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { errors = append(errors, fmt.Errorf( - "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + "only lowercase alphanumeric characters and hyphens allowed in parameter group %q", k)) } if !regexp.MustCompile(`^[a-z]`).MatchString(value) { errors = append(errors, fmt.Errorf( - "first character of %q must be a letter", k)) + "first character of parameter group %q must be a letter", k)) } if regexp.MustCompile(`--`).MatchString(value) { errors = append(errors, fmt.Errorf( - "%q cannot contain two consecutive hyphens", k)) + "parameter group %q cannot contain two consecutive hyphens", k)) } if regexp.MustCompile(`-$`).MatchString(value) { errors = append(errors, fmt.Errorf( - "%q cannot end with a hyphen", k)) + "parameter group %q cannot end with a 
hyphen", k)) } if len(value) > 255 { errors = append(errors, fmt.Errorf( - "%q cannot be greater than 255 characters", k)) + "parameter group %q cannot be greater than 255 characters", k)) } return } @@ -153,19 +248,19 @@ func validateDbParamGroupNamePrefix(v interface{}, k string) (ws []string, error value := v.(string) if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { errors = append(errors, fmt.Errorf( - "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + "only lowercase alphanumeric characters and hyphens allowed in parameter group %q", k)) } if !regexp.MustCompile(`^[a-z]`).MatchString(value) { errors = append(errors, fmt.Errorf( - "first character of %q must be a letter", k)) + "first character of parameter group %q must be a letter", k)) } if regexp.MustCompile(`--`).MatchString(value) { errors = append(errors, fmt.Errorf( - "%q cannot contain two consecutive hyphens", k)) + "parameter group %q cannot contain two consecutive hyphens", k)) } if len(value) > 255 { errors = append(errors, fmt.Errorf( - "%q cannot be greater than 226 characters", k)) + "parameter group %q cannot be greater than 226 characters", k)) } return } @@ -213,28 +308,6 @@ func validateElbNamePrefix(v interface{}, k string) (ws []string, errors []error return } -func validateEcrRepositoryName(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if len(value) < 2 { - errors = append(errors, fmt.Errorf( - "%q must be at least 2 characters long: %q", k, value)) - } - if len(value) > 256 { - errors = append(errors, fmt.Errorf( - "%q cannot be longer than 256 characters: %q", k, value)) - } - - // http://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html - pattern := `^(?:[a-z0-9]+(?:[._-][a-z0-9]+)*/)*[a-z0-9]+(?:[._-][a-z0-9]+)*$` - if !regexp.MustCompile(pattern).MatchString(value) { - errors = append(errors, fmt.Errorf( - "%q doesn't comply with restrictions (%q): %q", - k, pattern, value)) - } - - return -} - func validateCloudWatchDashboardName(v interface{}, k string) (ws []string, errors []error) { value := v.(string) if len(value) > 255 { @@ -283,21 +356,19 @@ func validateCloudWatchLogResourcePolicyDocument(v interface{}, k string) (ws [] return } -func validateMaxLength(length int) schema.SchemaValidateFunc { - return validation.StringLenBetween(0, length) -} - -func validateIntegerInRange(min, max int) schema.SchemaValidateFunc { - return func(v interface{}, k string) (ws []string, errors []error) { - value := v.(int) - if value < min { - errors = append(errors, fmt.Errorf( - "%q cannot be lower than %d: %d", k, min, value)) +func validateIntegerInSlice(valid []int) schema.SchemaValidateFunc { + return func(i interface{}, k string) (s []string, es []error) { + v, ok := i.(int) + if !ok { + es = append(es, fmt.Errorf("expected type of %s to be int", k)) + return } - if value > max { - errors = append(errors, fmt.Errorf( - "%q cannot be higher than %d: %d", k, max, value)) + for _, in := range valid { + if v == in { + return + } } + es = append(es, fmt.Errorf("expected %s to be one of %v, got %d", k, valid, v)) return } } @@ -368,6 +439,24 @@ func validateLambdaPermissionAction(v interface{}, k string) (ws []string, error return } +func validateLambdaPermissionEventSourceToken(v interface{}, k string) (ws []string, errors []error) { + // https://docs.aws.amazon.com/lambda/latest/dg/API_AddPermission.html + value := v.(string) + + if len(value) > 256 { + errors = append(errors, fmt.Errorf("%q cannot be longer than 256 
characters: %q", k, value)) + } + + pattern := `^[a-zA-Z0-9._\-]+$` + if !regexp.MustCompile(pattern).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q doesn't comply with restrictions (%q): %q", + k, pattern, value)) + } + + return +} + func validateAwsAccountId(v interface{}, k string) (ws []string, errors []error) { value := v.(string) @@ -400,6 +489,20 @@ func validateArn(v interface{}, k string) (ws []string, errors []error) { return } +func validateEC2AutomateARN(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + + // https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricAlarm.html + pattern := `^arn:[\w-]+:automate:[\w-]+:ec2:(reboot|recover|stop|terminate)$` + if !regexp.MustCompile(pattern).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q does not match EC2 automation ARN (%q): %q", + k, pattern, value)) + } + + return +} + func validatePolicyStatementId(v interface{}, k string) (ws []string, errors []error) { value := v.(string) @@ -543,6 +646,7 @@ func validateS3BucketLifecycleTimestamp(v interface{}, k string) (ws []string, e func validateS3BucketLifecycleStorageClass() schema.SchemaValidateFunc { return validation.StringInSlice([]string{ + s3.TransitionStorageClassOnezoneIa, s3.TransitionStorageClassStandardIa, s3.TransitionStorageClassGlacier, }, false) @@ -561,13 +665,6 @@ func validateDbEventSubscriptionName(v interface{}, k string) (ws []string, erro return } -func validateJsonString(v interface{}, k string) (ws []string, errors []error) { - if _, err := structure.NormalizeJsonString(v); err != nil { - errors = append(errors, fmt.Errorf("%q contains an invalid JSON: %s", k, err)) - } - return -} - func validateIAMPolicyJson(v interface{}, k string) (ws []string, errors []error) { // IAM Policy documents need to be valid JSON, and pass legacy parsing value := v.(string) @@ -617,7 +714,8 @@ func validateSQSQueueName(v interface{}, k string) (ws []string, errors []error) return } -func validateSQSNonFifoQueueName(v interface{}, k string) (errors []error) { +func validateSQSNonFifoQueueName(v interface{}) (errors []error) { + k := "name" value := v.(string) if len(value) > 80 { errors = append(errors, fmt.Errorf("%q cannot be longer than 80 characters", k)) @@ -629,7 +727,8 @@ func validateSQSNonFifoQueueName(v interface{}, k string) (errors []error) { return } -func validateSQSFifoQueueName(v interface{}, k string) (errors []error) { +func validateSQSFifoQueueName(v interface{}) (errors []error) { + k := "name" value := v.(string) if len(value) > 80 { @@ -645,7 +744,7 @@ func validateSQSFifoQueueName(v interface{}, k string) (errors []error) { } if !regexp.MustCompile(`\.fifo$`).MatchString(value) { - errors = append(errors, fmt.Errorf("FIFO queue name should ends with \".fifo\": %v", value)) + errors = append(errors, fmt.Errorf("FIFO queue name should end with \".fifo\": %v", value)) } return @@ -712,8 +811,11 @@ func validateAwsDynamoDbGlobalTableName(v interface{}, k string) (ws []string, e func validateAwsEcsPlacementStrategy(stratType, stratField string) error { switch stratType { case "random": - // random does not need the field attribute set, could error, but it isn't read at the API level - return nil + // random requires the field attribute to be unset. + if stratField != "" { + return fmt.Errorf("Random type requires the field attribute to be unset. 
Got: %s", + stratField) + } case "spread": // For the spread placement strategy, valid values are instanceId // (or host, which has the same effect), or any platform or custom attribute @@ -736,6 +838,7 @@ func validateAwsEmrEbsVolumeType() schema.SchemaValidateFunc { "gp2", "io1", "standard", + "st1", }, false) } @@ -893,8 +996,8 @@ func validateIamRolePolicyName(v interface{}, k string) (ws []string, errors []e errors = append(errors, fmt.Errorf( "%q cannot be longer than 128 characters", k)) } - if !regexp.MustCompile("^[\\w+=,.@-]+$").MatchString(value) { - errors = append(errors, fmt.Errorf("%q must match [\\w+=,.@-]", k)) + if !regexp.MustCompile(`^[\w+=,.@-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf(`%q must match [\w+=,.@-]`, k)) } return } @@ -905,8 +1008,8 @@ func validateIamRolePolicyNamePrefix(v interface{}, k string) (ws []string, erro errors = append(errors, fmt.Errorf( "%q cannot be longer than 100 characters", k)) } - if !regexp.MustCompile("^[\\w+=,.@-]+$").MatchString(value) { - errors = append(errors, fmt.Errorf("%q must match [\\w+=,.@-]", k)) + if !regexp.MustCompile(`^[\w+=,.@-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf(`%q must match [\w+=,.@-]`, k)) } return } @@ -947,6 +1050,23 @@ func validateDbSubnetGroupName(v interface{}, k string) (ws []string, errors []e return } +func validateNeptuneSubnetGroupName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[ .0-9a-z-_]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters, hyphens, underscores, periods, and spaces allowed in %q", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 255 characters", k)) + } + if value == "default" { + errors = append(errors, fmt.Errorf( + "%q is not allowed as %q", "Default", k)) + } + return +} + func validateDbSubnetGroupNamePrefix(v interface{}, k string) (ws []string, errors []error) { value := v.(string) if !regexp.MustCompile(`^[ .0-9a-z-_]+$`).MatchString(value) { @@ -960,6 +1080,20 @@ func validateDbSubnetGroupNamePrefix(v interface{}, k string) (ws []string, erro return } +func validateNeptuneSubnetGroupNamePrefix(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[ .0-9a-z-_]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters, hyphens, underscores, periods, and spaces allowed in %q", k)) + } + prefixMaxLength := 255 - resource.UniqueIDSuffixLength + if len(value) > prefixMaxLength { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than %d characters", k, prefixMaxLength)) + } + return +} + func validateDbOptionGroupName(v interface{}, k string) (ws []string, errors []error) { value := v.(string) if !regexp.MustCompile(`^[a-z]`).MatchString(value) { @@ -1047,7 +1181,7 @@ func validateAwsKmsGrantName(v interface{}, k string) (ws []string, es []error) func validateCognitoIdentityPoolName(v interface{}, k string) (ws []string, errors []error) { val := v.(string) - if !regexp.MustCompile("^[\\w _]+$").MatchString(val) { + if !regexp.MustCompile(`^[\w _]+$`).MatchString(val) { errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters and spaces", k)) } @@ -1060,7 +1194,7 @@ func validateCognitoProviderDeveloperName(v interface{}, k string) (ws []string, errors = append(errors, fmt.Errorf("%q cannot be longer than 100 characters", k)) } - if 
!regexp.MustCompile("^[\\w._-]+$").MatchString(value) { + if !regexp.MustCompile(`^[\w._-]+$`).MatchString(value) { errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, underscores and hyphens", k)) } @@ -1077,7 +1211,7 @@ func validateCognitoSupportedLoginProviders(v interface{}, k string) (ws []strin errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k)) } - if !regexp.MustCompile("^[\\w.;_/-]+$").MatchString(value) { + if !regexp.MustCompile(`^[\w.;_/-]+$`).MatchString(value) { errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, semicolons, underscores, slashes and hyphens", k)) } @@ -1094,7 +1228,7 @@ func validateCognitoIdentityProvidersClientId(v interface{}, k string) (ws []str errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k)) } - if !regexp.MustCompile("^[\\w_]+$").MatchString(value) { + if !regexp.MustCompile(`^[\w_]+$`).MatchString(value) { errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters and underscores", k)) } @@ -1111,7 +1245,7 @@ func validateCognitoIdentityProvidersProviderName(v interface{}, k string) (ws [ errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k)) } - if !regexp.MustCompile("^[\\w._:/-]+$").MatchString(value) { + if !regexp.MustCompile(`^[\w._:/-]+$`).MatchString(value) { errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, underscores, colons, slashes and hyphens", k)) } @@ -1129,7 +1263,7 @@ func validateCognitoUserGroupName(v interface{}, k string) (ws []string, es []er } if !regexp.MustCompile(`[\p{L}\p{M}\p{S}\p{N}\p{P}]+`).MatchString(value) { - es = append(es, fmt.Errorf("%q must satisfy regular expression pattern: [\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}]+", k)) + es = append(es, fmt.Errorf(`%q must satisfy regular expression pattern: [\p{L}\p{M}\p{S}\p{N}\p{P}]+`, k)) } return } @@ -1206,23 +1340,6 @@ func validateCognitoUserPoolSmsVerificationMessage(v interface{}, k string) (ws return } -func validateCognitoUserPoolClientAuthFlows(v interface{}, k string) (ws []string, es []error) { - validValues := []string{ - cognitoidentityprovider.AuthFlowTypeAdminNoSrpAuth, - cognitoidentityprovider.AuthFlowTypeCustomAuth, - } - period := v.(string) - for _, f := range validValues { - if period == f { - return - } - } - es = append(es, fmt.Errorf( - "%q contains an invalid auth flow %q. 
Valid auth flows are %q.", - k, period, validValues)) - return -} - func validateCognitoUserPoolTemplateEmailMessage(v interface{}, k string) (ws []string, es []error) { value := v.(string) if len(value) < 6 { @@ -1245,12 +1362,12 @@ func validateCognitoUserPoolTemplateEmailMessageByLink(v interface{}, k string) es = append(es, fmt.Errorf("%q cannot be less than 1 character", k)) } - if len(value) > 140 { - es = append(es, fmt.Errorf("%q cannot be longer than 140 characters", k)) + if len(value) > 20000 { + es = append(es, fmt.Errorf("%q cannot be longer than 20000 characters", k)) } if !regexp.MustCompile(`[\p{L}\p{M}\p{S}\p{N}\p{P}\s*]*\{##[\p{L}\p{M}\p{S}\p{N}\p{P}\s*]*##\}[\p{L}\p{M}\p{S}\p{N}\p{P}\s*]*`).MatchString(value) { - es = append(es, fmt.Errorf("%q must satisfy regular expression pattern: [\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s*]*\\{##[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s*]*##\\}[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s*]*", k)) + es = append(es, fmt.Errorf(`%q must satisfy regular expression pattern: [\p{L}\p{M}\p{S}\p{N}\p{P}\s*]*\{##[\p{L}\p{M}\p{S}\p{N}\p{P}\s*]*##\}[\p{L}\p{M}\p{S}\p{N}\p{P}\s*]*`, k)) } return } @@ -1266,7 +1383,7 @@ func validateCognitoUserPoolTemplateEmailSubject(v interface{}, k string) (ws [] } if !regexp.MustCompile(`[\p{L}\p{M}\p{S}\p{N}\p{P}\s]+`).MatchString(value) { - es = append(es, fmt.Errorf("%q must satisfy regular expression pattern: [\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s]+", k)) + es = append(es, fmt.Errorf(`%q must satisfy regular expression pattern: [\p{L}\p{M}\p{S}\p{N}\p{P}\s]+`, k)) } return } @@ -1282,7 +1399,7 @@ func validateCognitoUserPoolTemplateEmailSubjectByLink(v interface{}, k string) } if !regexp.MustCompile(`[\p{L}\p{M}\p{S}\p{N}\p{P}\s]+`).MatchString(value) { - es = append(es, fmt.Errorf("%q must satisfy regular expression pattern: [\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s]+", k)) + es = append(es, fmt.Errorf(`%q must satisfy regular expression pattern: [\p{L}\p{M}\p{S}\p{N}\p{P}\s]+`, k)) } return } @@ -1348,7 +1465,7 @@ func validateCognitoUserPoolReplyEmailAddress(v interface{}, k string) (ws []str if !regexp.MustCompile(`[\p{L}\p{M}\p{S}\p{N}\p{P}]+@[\p{L}\p{M}\p{S}\p{N}\p{P}]+`).MatchString(value) { errors = append(errors, fmt.Errorf( - "%q must satisfy regular expression pattern: [\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}]+@[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}]+", k)) + `%q must satisfy regular expression pattern: [\p{L}\p{M}\p{S}\p{N}\p{P}]+@[\p{L}\p{M}\p{S}\p{N}\p{P}]+`, k)) } return } @@ -1364,7 +1481,7 @@ func validateCognitoUserPoolSchemaName(v interface{}, k string) (ws []string, es } if !regexp.MustCompile(`[\p{L}\p{M}\p{S}\p{N}\p{P}]+`).MatchString(value) { - es = append(es, fmt.Errorf("%q must satisfy regular expression pattern: [\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}]+", k)) + es = append(es, fmt.Errorf(`%q must satisfy regular expression pattern: [\p{L}\p{M}\p{S}\p{N}\p{P}]+`, k)) } return } @@ -1380,7 +1497,22 @@ func validateCognitoUserPoolClientURL(v interface{}, k string) (ws []string, es } if !regexp.MustCompile(`[\p{L}\p{M}\p{S}\p{N}\p{P}]+`).MatchString(value) { - es = append(es, fmt.Errorf("%q must satisfy regular expression pattern: [\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}]+", k)) + es = append(es, fmt.Errorf(`%q must satisfy regular expression pattern: [\p{L}\p{M}\p{S}\p{N}\p{P}]+`, k)) + } + return +} + +func validateCognitoResourceServerScopeName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + + if len(value) < 1 { + errors = append(errors, fmt.Errorf("%q cannot be less than 1 character", k)) + } + if len(value) > 256 
{ + errors = append(errors, fmt.Errorf("%q cannot be longer than 256 character", k)) + } + if !regexp.MustCompile(`[\x21\x23-\x2E\x30-\x5B\x5D-\x7E]+`).MatchString(value) { + errors = append(errors, fmt.Errorf(`%q must satisfy regular expression pattern: [\x21\x23-\x2E\x30-\x5B\x5D-\x7E]+`, k)) } return } @@ -1397,12 +1529,13 @@ func validateWafMetricName(v interface{}, k string) (ws []string, errors []error func validateWafPredicatesType() schema.SchemaValidateFunc { return validation.StringInSlice([]string{ - waf.PredicateTypeIpmatch, waf.PredicateTypeByteMatch, - waf.PredicateTypeSqlInjectionMatch, + waf.PredicateTypeGeoMatch, + waf.PredicateTypeIpmatch, + waf.PredicateTypeRegexMatch, waf.PredicateTypeSizeConstraint, + waf.PredicateTypeSqlInjectionMatch, waf.PredicateTypeXssMatch, - waf.PredicateTypeGeoMatch, }, false) } @@ -1415,7 +1548,7 @@ func validateIamRoleDescription(v interface{}, k string) (ws []string, errors [] if !regexp.MustCompile(`[\p{L}\p{M}\p{Z}\p{S}\p{N}\p{P}]*`).MatchString(value) { errors = append(errors, fmt.Errorf( - "Only alphanumeric & accented characters allowed in %q: %q (Must satisfy regular expression pattern: [\\p{L}\\p{M}\\p{Z}\\p{S}\\p{N}\\p{P}]*)", + `Only alphanumeric & accented characters allowed in %q: %q (Must satisfy regular expression pattern: [\p{L}\p{M}\p{Z}\p{S}\p{N}\p{P}]*)`, k, value)) } return @@ -1425,6 +1558,19 @@ func validateAwsSSMName(v interface{}, k string) (ws []string, errors []error) { // http://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateDocument.html#EC2-CreateDocument-request-Name value := v.(string) + if !regexp.MustCompile(`^[a-zA-Z0-9_\-.]{3,128}$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + `Only alphanumeric characters, hyphens, dots & underscores allowed in %q: %q (Must satisfy regular expression pattern: ^[a-zA-Z0-9_\-.]{3,128}$)`, + k, value)) + } + + return +} + +func validateAwsSSMMaintenanceWindowTaskName(v interface{}, k string) (ws []string, errors []error) { + // https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_RegisterTaskWithMaintenanceWindow.html#systemsmanager-RegisterTaskWithMaintenanceWindow-request-Name + value := v.(string) + if !regexp.MustCompile(`^[a-zA-Z0-9_\-.]{3,128}$`).MatchString(value) { errors = append(errors, fmt.Errorf( "Only alphanumeric characters, hyphens, dots & underscores allowed in %q: %q (Must satisfy regular expression pattern: ^[a-zA-Z0-9_\\-.]{3,128}$)", @@ -1513,12 +1659,25 @@ func validateIoTTopicRuleElasticSearchEndpoint(v interface{}, k string) (ws []st return } +func validateIoTTopicRuleFirehoseSeparator(v interface{}, s string) ([]string, []error) { + switch v.(string) { + case + ",", + "\t", + "\n", + "\r\n": + return nil, nil + } + + return nil, []error{fmt.Errorf(`Separator must be one of ',' (comma), '\t' (tab) '\n' (newline) or '\r\n' (Windows newline)`)} +} + func validateCognitoRoleMappingsAmbiguousRoleResolutionAgainstType(v map[string]interface{}) (errors []error) { t := v["type"].(string) isRequired := t == cognitoidentity.RoleMappingTypeToken || t == cognitoidentity.RoleMappingTypeRules if value, ok := v["ambiguous_role_resolution"]; (!ok || value == "") && isRequired { - errors = append(errors, fmt.Errorf("Ambiguous Role Resolution must be defined when \"type\" equals \"Token\" or \"Rules\"")) + errors = append(errors, fmt.Errorf(`Ambiguous Role Resolution must be defined when "type" equals "Token" or "Rules"`)) } return @@ -1545,7 +1704,7 @@ func validateCognitoRoleMappingsRulesConfiguration(v 
map[string]interface{}) (er func validateCognitoRoleMappingsRulesClaim(v interface{}, k string) (ws []string, errors []error) { value := v.(string) - if !regexp.MustCompile("^[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}]+$").MatchString(value) { + if !regexp.MustCompile(`^[\p{L}\p{M}\p{S}\p{N}\p{P}]+$`).MatchString(value) { errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, underscores, colons, slashes and hyphens", k)) } @@ -1553,7 +1712,8 @@ func validateCognitoRoleMappingsRulesClaim(v interface{}, k string) (ws []string } // Validates that either authenticated or unauthenticated is defined -func validateCognitoRoles(v map[string]interface{}, k string) (errors []error) { +func validateCognitoRoles(v map[string]interface{}) (errors []error) { + k := "roles" _, hasAuthenticated := v["authenticated"].(string) _, hasUnauthenticated := v["unauthenticated"].(string) @@ -1564,17 +1724,16 @@ func validateCognitoRoles(v map[string]interface{}, k string) (errors []error) { return } -func validateCognitoUserPoolDomain(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - if !regexp.MustCompile(`^[a-z0-9](?:[a-z0-9\-]{0,61}[a-z0-9])?$`).MatchString(value) { - errors = append(errors, fmt.Errorf( - "only lowercase alphanumeric characters and hyphens (max length 63 chars) allowed in %q", k)) - } - return -} - func validateDxConnectionBandWidth() schema.SchemaValidateFunc { - return validation.StringInSlice([]string{"1Gbps", "10Gbps"}, false) + return validation.StringInSlice([]string{ + "1Gbps", + "10Gbps", + "50Mbps", + "100Mbps", + "200Mbps", + "300Mbps", + "400Mbps", + "500Mbps"}, false) } func validateKmsKey(v interface{}, k string) (ws []string, errors []error) { @@ -1621,25 +1780,6 @@ func validateDynamoDbStreamSpec(d *schema.ResourceDiff) error { return nil } -func validateVpcEndpointType(v interface{}, k string) (ws []string, errors []error) { - return validateStringIn(ec2.VpcEndpointTypeGateway, ec2.VpcEndpointTypeInterface)(v, k) -} - -func validateStringIn(validValues ...string) schema.SchemaValidateFunc { - return func(v interface{}, k string) (ws []string, errors []error) { - value := v.(string) - for _, s := range validValues { - if value == s { - return - } - } - errors = append(errors, fmt.Errorf( - "%q contains an invalid value %q. 
Valid values are %q.", - k, value, validValues)) - return - } -} - func validateAmazonSideAsn(v interface{}, k string) (ws []string, errors []error) { value := v.(string) @@ -1650,8 +1790,13 @@ func validateAmazonSideAsn(v interface{}, k string) (ws []string, errors []error return } - if (asn < 64512) || (asn > 65534 && asn < 4200000000) || (asn > 4294967294) { - errors = append(errors, fmt.Errorf("%q (%q) must be in the range 64512 to 65534 or 4200000000 to 4294967294", k, v)) + // https://github.com/terraform-providers/terraform-provider-aws/issues/5263 + isLegacyAsn := func(a int64) bool { + return a == 7224 || a == 9059 || a == 10124 || a == 17493 + } + + if !isLegacyAsn(asn) && ((asn < 64512) || (asn > 65534 && asn < 4200000000) || (asn > 4294967294)) { + errors = append(errors, fmt.Errorf("%q (%q) must be 7224, 9059, 10124 or 17493 or in the range 64512 to 65534 or 4200000000 to 4294967294", k, v)) } return } @@ -1673,7 +1818,7 @@ func validateIotThingTypeDescription(v interface{}, k string) (ws []string, erro } if !regexp.MustCompile(`[\\p{Graph}\\x20]*`).MatchString(value) { errors = append(errors, fmt.Errorf( - "%q must match pattern [\\p{Graph}\\x20]*", k)) + `%q must match pattern [\p{Graph}\x20]*`, k)) } return } @@ -1740,3 +1885,197 @@ func validateDynamoDbTableAttributes(d *schema.ResourceDiff) error { return nil } + +func validateLaunchTemplateName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) < 3 { + errors = append(errors, fmt.Errorf("%q cannot be less than 3 characters", k)) + } else if strings.HasSuffix(k, "prefix") && len(value) > 99 { + errors = append(errors, fmt.Errorf("%q cannot be longer than 99 characters, name is limited to 125", k)) + } else if !strings.HasSuffix(k, "prefix") && len(value) > 125 { + errors = append(errors, fmt.Errorf("%q cannot be longer than 125 characters", k)) + } else if !regexp.MustCompile(`^[0-9a-zA-Z()./_\-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf("%q can only alphanumeric characters and ()./_- symbols", k)) + } + return +} + +func validateLaunchTemplateId(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) < 1 { + errors = append(errors, fmt.Errorf("%q cannot be shorter than 1 character", k)) + } else if len(value) > 255 { + errors = append(errors, fmt.Errorf("%q cannot be longer than 255 characters", k)) + } else if !regexp.MustCompile(`^lt\-[a-z0-9]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must begin with 'lt-' and be comprised of only alphanumeric characters: %v", k, value)) + } + return +} + +func validateNeptuneParamGroupName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 255 characters", k)) + } + return +} + +func validateNeptuneParamGroupNamePrefix(v interface{}, k string) 
(ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9a-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only lowercase alphanumeric characters and hyphens allowed in %q", k)) + } + if !regexp.MustCompile(`^[a-z]`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "first character of %q must be a letter", k)) + } + if regexp.MustCompile(`--`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot contain two consecutive hyphens", k)) + } + prefixMaxLength := 255 - resource.UniqueIDSuffixLength + if len(value) > prefixMaxLength { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than %d characters", k, prefixMaxLength)) + } + return +} + +func validateNeptuneEventSubscriptionName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + if len(value) > 255 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 255 characters", k)) + } + return +} + +func validateNeptuneEventSubscriptionNamePrefix(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + prefixMaxLength := 255 - resource.UniqueIDSuffixLength + if len(value) > prefixMaxLength { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than %d characters", k, prefixMaxLength)) + } + return +} + +func validateCloudFrontPublicKeyName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z_-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters, underscores and hyphens allowed in %q", k)) + } + if len(value) > 128 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 128 characters", k)) + } + return +} + +func validateCloudFrontPublicKeyNamePrefix(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z_-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters, underscores and hyphens allowed in %q", k)) + } + prefixMaxLength := 128 - resource.UniqueIDSuffixLength + if len(value) > prefixMaxLength { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than %d characters", k, prefixMaxLength)) + } + return +} + +func validateLbTargetGroupName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 32 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 32 characters", k)) + } + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + if regexp.MustCompile(`^-`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot begin with a hyphen", k)) + } + if regexp.MustCompile(`-$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot end with a hyphen", k)) + } + return +} + +func validateSecretManagerSecretName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z/_+=.@-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric 
characters and /_+=.@- special characters are allowed in %q", k)) + } + if len(value) > 512 { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than 512 characters", k)) + } + return +} + +func validateLbTargetGroupNamePrefix(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + prefixMaxLength := 32 - resource.UniqueIDSuffixLength + if len(value) > prefixMaxLength { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than %d characters", k, prefixMaxLength)) + } + if !regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and hyphens allowed in %q", k)) + } + if regexp.MustCompile(`^-`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q cannot begin with a hyphen", k)) + } + return +} + +func validateSecretManagerSecretNamePrefix(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z/_+=.@-]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "only alphanumeric characters and /_+=.@- special characters are allowed in %q", k)) + } + prefixMaxLength := 512 - resource.UniqueIDSuffixLength + if len(value) > prefixMaxLength { + errors = append(errors, fmt.Errorf( + "%q cannot be greater than %d characters", k, prefixMaxLength)) + } + return +} diff --git a/aws/validators_test.go b/aws/validators_test.go index e32679e35cc..53bed5bc920 100644 --- a/aws/validators_test.go +++ b/aws/validators_test.go @@ -7,49 +7,98 @@ import ( "testing" "github.com/aws/aws-sdk-go/service/cognitoidentity" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" ) -func TestValidateRFC3339TimeString(t *testing.T) { +func TestValidationAny(t *testing.T) { testCases := []struct { val interface{} + f schema.SchemaValidateFunc expectedErr *regexp.Regexp }{ { - val: "2018-03-01T00:00:00Z", + val: "valid", + f: validateAny( + validation.StringLenBetween(5, 42), + validation.StringMatch(regexp.MustCompile(`[a-zA-Z0-9]+`), "value must be alphanumeric"), + ), }, { - val: "2018-03-01T00:00:00-05:00", + val: "foo", + f: validateAny( + validation.StringLenBetween(5, 42), + validation.StringMatch(regexp.MustCompile(`[a-zA-Z0-9]+`), "value must be alphanumeric"), + ), }, { - val: "2018-03-01T00:00:00+05:00", + val: "!!!!!", + f: validateAny( + validation.StringLenBetween(5, 42), + validation.StringMatch(regexp.MustCompile(`[a-zA-Z0-9]+`), "value must be alphanumeric"), + ), }, { - val: "03/01/2018", - expectedErr: regexp.MustCompile(regexp.QuoteMeta(`cannot parse "1/2018" as "2006"`)), + val: "!!!", + f: validateAny( + validation.StringLenBetween(5, 42), + validation.StringMatch(regexp.MustCompile(`[a-zA-Z0-9]+`), "value must be alphanumeric"), + ), + expectedErr: regexp.MustCompile("value must be alphanumeric"), }, + } + + matchErr := func(errs []error, r *regexp.Regexp) bool { + // err must match one provided + for _, err := range errs { + if r.MatchString(err.Error()) { + return true + } + } + + return false + } + + for i, tc := range testCases { + _, errs := tc.f(tc.val, "test_property") + + if len(errs) == 0 && tc.expectedErr == nil { + continue + } + + if len(errs) != 0 && tc.expectedErr == nil { + t.Fatalf("expected test case %d to produce no errors, got %v", i, errs) + } + + if !matchErr(errs, tc.expectedErr) { + t.Fatalf("expected test case %d to produce error matching \"%s\", got %v", i, tc.expectedErr, errs) + } + } +} + +func 
TestValidateTypeStringNullableBoolean(t *testing.T) { + testCases := []struct { + val interface{} + expectedErr *regexp.Regexp + }{ { - val: "03-01-2018", - expectedErr: regexp.MustCompile(regexp.QuoteMeta(`cannot parse "1-2018" as "2006"`)), + val: "", }, { - val: "2018-03-01", - expectedErr: regexp.MustCompile(regexp.QuoteMeta(`cannot parse "" as "T"`)), + val: "0", }, { - val: "2018-03-01T", - expectedErr: regexp.MustCompile(regexp.QuoteMeta(`cannot parse "" as "15"`)), + val: "1", }, { - val: "2018-03-01T00:00:00", - expectedErr: regexp.MustCompile(regexp.QuoteMeta(`cannot parse "" as "Z07:00"`)), + val: "true", }, { - val: "2018-03-01T00:00:00Z05:00", - expectedErr: regexp.MustCompile(regexp.QuoteMeta(`extra text: 05:00`)), + val: "false", }, { - val: "2018-03-01T00:00:00Z-05:00", - expectedErr: regexp.MustCompile(regexp.QuoteMeta(`extra text: -05:00`)), + val: "invalid", + expectedErr: regexp.MustCompile(`to be one of \["", false, true\]`), }, } @@ -65,7 +114,7 @@ func TestValidateRFC3339TimeString(t *testing.T) { } for i, tc := range testCases { - _, errs := validateRFC3339TimeString(tc.val, "test_property") + _, errs := validateTypeStringNullableBoolean(tc.val, "test_property") if len(errs) == 0 && tc.expectedErr == nil { continue @@ -81,42 +130,53 @@ func TestValidateRFC3339TimeString(t *testing.T) { } } -func TestValidateEcrRepositoryName(t *testing.T) { - validNames := []string{ - "nginx-web-app", - "project-a/nginx-web-app", - "domain.ltd/nginx-web-app", - "3chosome-thing.com/01different-pattern", - "0123456789/999999999", - "double/forward/slash", - "000000000000000", +func TestValidateTypeStringNullableFloat(t *testing.T) { + testCases := []struct { + val interface{} + expectedErr *regexp.Regexp + }{ + { + val: "", + }, + { + val: "0", + }, + { + val: "1", + }, + { + val: "42.0", + }, + { + val: "threeve", + expectedErr: regexp.MustCompile(`cannot parse`), + }, } - for _, v := range validNames { - _, errors := validateEcrRepositoryName(v, "name") - if len(errors) != 0 { - t.Fatalf("%q should be a valid ECR repository name: %q", v, errors) + + matchErr := func(errs []error, r *regexp.Regexp) bool { + // err must match one provided + for _, err := range errs { + if r.MatchString(err.Error()) { + return true + } } - } - invalidNames := []string{ - // length > 256 - "3cho_some-thing.com/01different.-_pattern01different.-_pattern01diff" + - "erent.-_pattern01different.-_pattern01different.-_pattern01different" + - ".-_pattern01different.-_pattern01different.-_pattern01different.-_pa" + - "ttern01different.-_pattern01different.-_pattern234567", - // length < 2 - "i", - "special@character", - "different+special=character", - "double//slash", - "double..dot", - "/slash-at-the-beginning", - "slash-at-the-end/", + return false } - for _, v := range invalidNames { - _, errors := validateEcrRepositoryName(v, "name") - if len(errors) == 0 { - t.Fatalf("%q should be an invalid ECR repository name", v) + + for i, tc := range testCases { + _, errs := validateTypeStringNullableFloat(tc.val, "test_property") + + if len(errs) == 0 && tc.expectedErr == nil { + continue + } + + if len(errs) != 0 && tc.expectedErr == nil { + t.Fatalf("expected test case %d to produce no errors, got %v", i, errs) + } + + if !matchErr(errs, tc.expectedErr) { + t.Fatalf("expected test case %d to produce error matching \"%s\", got %v", i, tc.expectedErr, errs) } } } @@ -266,6 +326,32 @@ func TestValidateLambdaPermissionAction(t *testing.T) { } } +func TestValidateLambdaPermissionEventSourceToken(t *testing.T) { + 
validTokens := []string{ + "amzn1.ask.skill.80c92c86-e6dd-4c4b-8d0d-000000000000", + "test-event-source-token", + strings.Repeat(".", 256), + } + for _, v := range validTokens { + _, errors := validateLambdaPermissionEventSourceToken(v, "event_source_token") + if len(errors) != 0 { + t.Fatalf("%q should be a valid Lambda permission event source token", v) + } + } + + invalidTokens := []string{ + "!", + "test event source token", + strings.Repeat(".", 257), + } + for _, v := range invalidTokens { + _, errors := validateLambdaPermissionEventSourceToken(v, "event_source_token") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid Lambda permission event source token", v) + } + } +} + func TestValidateAwsAccountId(t *testing.T) { validNames := []string{ "123456789012", @@ -332,6 +418,40 @@ func TestValidateArn(t *testing.T) { } } +func TestValidateEC2AutomateARN(t *testing.T) { + validNames := []string{ + "arn:aws:automate:us-east-1:ec2:reboot", + "arn:aws:automate:us-east-1:ec2:recover", + "arn:aws:automate:us-east-1:ec2:stop", + "arn:aws:automate:us-east-1:ec2:terminate", + } + for _, v := range validNames { + _, errors := validateEC2AutomateARN(v, "test_property") + if len(errors) != 0 { + t.Fatalf("%q should be a valid ARN: %q", v, errors) + } + } + + invalidNames := []string{ + "", + "arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/My App/MyEnvironment", // Beanstalk + "arn:aws:iam::123456789012:user/David", // IAM User + "arn:aws:rds:eu-west-1:123456789012:db:mysql-db", // RDS + "arn:aws:s3:::my_corporate_bucket/exampleobject.png", // S3 object + "arn:aws:events:us-east-1:319201112229:rule/rule_name", // CloudWatch Rule + "arn:aws:lambda:eu-west-1:319201112229:function:myCustomFunction", // Lambda function + "arn:aws:lambda:eu-west-1:319201112229:function:myCustomFunction:Qualifier", // Lambda func qualifier + "arn:aws-us-gov:s3:::corp_bucket/object.png", // GovCloud ARN + "arn:aws-us-gov:kms:us-gov-west-1:123456789012:key/some-uuid-abc123", // GovCloud KMS ARN + } + for _, v := range invalidNames { + _, errors := validateEC2AutomateARN(v, "test_property") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid ARN", v) + } + } +} + func TestValidatePolicyStatementId(t *testing.T) { validNames := []string{ "YadaHereAndThere", @@ -548,22 +668,51 @@ func TestValidateS3BucketLifecycleTimestamp(t *testing.T) { } } -func TestValidateIntegerInRange(t *testing.T) { - validIntegers := []int{-259, 0, 1, 5, 999} - min := -259 - max := 999 - for _, v := range validIntegers { - _, errors := validateIntegerInRange(min, max)(v, "name") - if len(errors) != 0 { - t.Fatalf("%q should be an integer in range (%d, %d): %q", v, min, max, errors) +func TestValidateIntegerInSlice(t *testing.T) { + cases := []struct { + val interface{} + f schema.SchemaValidateFunc + expectedErr *regexp.Regexp + }{ + { + val: 42, + f: validateIntegerInSlice([]int{2, 4, 42, 420}), + }, + { + val: 42, + f: validateIntegerInSlice([]int{0, 43}), + expectedErr: regexp.MustCompile(`expected [\w]+ to be one of \[0 43\], got 42`), + }, + { + val: "42", + f: validateIntegerInSlice([]int{0, 42}), + expectedErr: regexp.MustCompile(`expected type of [\w]+ to be int`), + }, + } + matchErr := func(errs []error, r *regexp.Regexp) bool { + // err must match one provided + for _, err := range errs { + if r.MatchString(err.Error()) { + return true + } } + + return false } - invalidIntegers := []int{-260, -99999, 1000, 25678} - for _, v := range invalidIntegers { - _, errors := validateIntegerInRange(min, max)(v, "name") - 
if len(errors) == 0 { - t.Fatalf("%q should be an integer outside range (%d, %d)", v, min, max) + for i, tc := range cases { + _, errs := tc.f(tc.val, "test_property") + + if len(errs) == 0 && tc.expectedErr == nil { + continue + } + + if len(errs) != 0 && tc.expectedErr == nil { + t.Fatalf("expected test case %d to produce no errors, got %v", i, errs) + } + + if !matchErr(errs, tc.expectedErr) { + t.Fatalf("expected test case %d to produce error matching \"%s\", got %v", i, tc.expectedErr, errs) } } } @@ -641,61 +790,6 @@ func TestValidateDbEventSubscriptionName(t *testing.T) { } } -func TestValidateJsonString(t *testing.T) { - type testCases struct { - Value string - ErrCount int - } - - invalidCases := []testCases{ - { - Value: `{0:"1"}`, - ErrCount: 1, - }, - { - Value: `{'abc':1}`, - ErrCount: 1, - }, - { - Value: `{"def":}`, - ErrCount: 1, - }, - { - Value: `{"xyz":[}}`, - ErrCount: 1, - }, - } - - for _, tc := range invalidCases { - _, errors := validateJsonString(tc.Value, "json") - if len(errors) != tc.ErrCount { - t.Fatalf("Expected %q to trigger a validation error.", tc.Value) - } - } - - validCases := []testCases{ - { - Value: ``, - ErrCount: 0, - }, - { - Value: `{}`, - ErrCount: 0, - }, - { - Value: `{"abc":["1","2"]}`, - ErrCount: 0, - }, - } - - for _, tc := range validCases { - _, errors := validateJsonString(tc.Value, "json") - if len(errors) != tc.ErrCount { - t.Fatalf("Expected %q not to trigger a validation error.", tc.Value) - } - } -} - func TestValidateIAMPolicyJsonString(t *testing.T) { type testCases struct { Value string @@ -808,11 +902,11 @@ func TestValidateSQSQueueName(t *testing.T) { strings.Repeat("W", 80), } for _, v := range validNames { - if _, errors := validateSQSQueueName(v, "name"); len(errors) > 0 { + if _, errors := validateSQSQueueName(v, "test_attribute"); len(errors) > 0 { t.Fatalf("%q should be a valid SQS queue Name", v) } - if errors := validateSQSNonFifoQueueName(v, "name"); len(errors) > 0 { + if errors := validateSQSNonFifoQueueName(v); len(errors) > 0 { t.Fatalf("%q should be a valid SQS non-fifo queue Name", v) } } @@ -829,11 +923,11 @@ func TestValidateSQSQueueName(t *testing.T) { strings.Repeat("W", 81), // length > 80 } for _, v := range invalidNames { - if _, errors := validateSQSQueueName(v, "name"); len(errors) == 0 { + if _, errors := validateSQSQueueName(v, "test_attribute"); len(errors) == 0 { t.Fatalf("%q should be an invalid SQS queue Name", v) } - if errors := validateSQSNonFifoQueueName(v, "name"); len(errors) == 0 { + if errors := validateSQSNonFifoQueueName(v); len(errors) == 0 { t.Fatalf("%q should be an invalid SQS non-fifo queue Name", v) } } @@ -852,11 +946,11 @@ func TestValidateSQSFifoQueueName(t *testing.T) { fmt.Sprintf("%s.fifo", strings.Repeat("W", 75)), } for _, v := range validNames { - if _, errors := validateSQSQueueName(v, "name"); len(errors) > 0 { + if _, errors := validateSQSQueueName(v, "test_attribute"); len(errors) > 0 { t.Fatalf("%q should be a valid SQS queue Name", v) } - if errors := validateSQSFifoQueueName(v, "name"); len(errors) > 0 { + if errors := validateSQSFifoQueueName(v); len(errors) > 0 { t.Fatalf("%q should be a valid SQS FIFO queue Name: %v", v, errors) } } @@ -874,11 +968,11 @@ func TestValidateSQSFifoQueueName(t *testing.T) { strings.Repeat("W", 81), // length > 80 } for _, v := range invalidNames { - if _, errors := validateSQSQueueName(v, "name"); len(errors) == 0 { + if _, errors := validateSQSQueueName(v, "test_attribute"); len(errors) == 0 { t.Fatalf("%q should be an invalid SQS 
queue Name", v) } - if errors := validateSQSFifoQueueName(v, "name"); len(errors) == 0 { + if errors := validateSQSFifoQueueName(v); len(errors) == 0 { t.Fatalf("%q should be an invalid SQS FIFO queue Name: %v", v, errors) } } @@ -1489,130 +1583,250 @@ func TestValidateElbNamePrefix(t *testing.T) { } } -func TestValidateDbSubnetGroupName(t *testing.T) { +func TestValidateNeptuneEventSubscriptionName(t *testing.T) { cases := []struct { Value string ErrCount int }{ { - Value: "tEsting", + Value: "testing123!", ErrCount: 1, }, { - Value: "testing?", + Value: "testing 123", ErrCount: 1, }, { - Value: "default", + Value: "testing_123", ErrCount: 1, }, { - Value: randomString(300), + Value: randomString(256), ErrCount: 1, }, } - for _, tc := range cases { - _, errors := validateDbSubnetGroupName(tc.Value, "aws_db_subnet_group") - + _, errors := validateNeptuneEventSubscriptionName(tc.Value, "aws_neptune_event_subscription") if len(errors) != tc.ErrCount { - t.Fatalf("Expected the DB Subnet Group name to trigger a validation error") + t.Fatalf("Expected the Neptune Event Subscription Name to trigger a validation error for %q", tc.Value) } } } -func TestValidateDbSubnetGroupNamePrefix(t *testing.T) { +func TestValidateNeptuneEventSubscriptionNamePrefix(t *testing.T) { cases := []struct { Value string ErrCount int }{ { - Value: "tEsting", + Value: "testing123!", ErrCount: 1, }, { - Value: "testing?", + Value: "testing 123", ErrCount: 1, }, { - Value: randomString(230), + Value: "testing_123", + ErrCount: 1, + }, + { + Value: randomString(254), ErrCount: 1, }, } - for _, tc := range cases { - _, errors := validateDbSubnetGroupNamePrefix(tc.Value, "aws_db_subnet_group") - + _, errors := validateNeptuneEventSubscriptionNamePrefix(tc.Value, "aws_neptune_event_subscription") if len(errors) != tc.ErrCount { - t.Fatalf("Expected the DB Subnet Group name prefix to trigger a validation error") + t.Fatalf("Expected the Neptune Event Subscription Name Prefix to trigger a validation error for %q", tc.Value) } } } -func TestValidateDbOptionGroupName(t *testing.T) { +func TestValidateDbSubnetGroupName(t *testing.T) { cases := []struct { Value string ErrCount int }{ { - Value: "testing123!", - ErrCount: 1, - }, - { - Value: "1testing123", + Value: "tEsting", ErrCount: 1, }, { - Value: "testing--123", + Value: "testing?", ErrCount: 1, }, { - Value: "testing123-", + Value: "default", ErrCount: 1, }, { - Value: randomString(256), + Value: randomString(300), ErrCount: 1, }, } for _, tc := range cases { - _, errors := validateDbOptionGroupName(tc.Value, "aws_db_option_group_name") + _, errors := validateDbSubnetGroupName(tc.Value, "aws_db_subnet_group") if len(errors) != tc.ErrCount { - t.Fatalf("Expected the DB Option Group Name to trigger a validation error") + t.Fatalf("Expected the DB Subnet Group name to trigger a validation error") } } } -func TestValidateDbOptionGroupNamePrefix(t *testing.T) { +func TestValidateNeptuneSubnetGroupName(t *testing.T) { cases := []struct { Value string ErrCount int }{ { - Value: "testing123!", + Value: "tEsting", ErrCount: 1, }, { - Value: "1testing123", + Value: "testing?", ErrCount: 1, }, { - Value: "testing--123", + Value: "default", ErrCount: 1, }, { - Value: randomString(230), + Value: randomString(300), ErrCount: 1, }, } for _, tc := range cases { - _, errors := validateDbOptionGroupNamePrefix(tc.Value, "aws_db_option_group_name") + _, errors := validateNeptuneSubnetGroupName(tc.Value, "aws_neptune_subnet_group") if len(errors) != tc.ErrCount { - t.Fatalf("Expected the DB 
Option Group name prefix to trigger a validation error") + t.Fatalf("Expected the Neptune Subnet Group name to trigger a validation error") + } + } +} + +func TestValidateDbSubnetGroupNamePrefix(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting", + ErrCount: 1, + }, + { + Value: "testing?", + ErrCount: 1, + }, + { + Value: randomString(230), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateDbSubnetGroupNamePrefix(tc.Value, "aws_db_subnet_group") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the DB Subnet Group name prefix to trigger a validation error") + } + } +} + +func TestValidateNeptuneSubnetGroupNamePrefix(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting", + ErrCount: 1, + }, + { + Value: "testing?", + ErrCount: 1, + }, + { + Value: randomString(230), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateNeptuneSubnetGroupNamePrefix(tc.Value, "aws_neptune_subnet_group") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Neptune Subnet Group name prefix to trigger a validation error") + } + } +} + +func TestValidateDbOptionGroupName(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "1testing123", + ErrCount: 1, + }, + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: "testing123-", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateDbOptionGroupName(tc.Value, "aws_db_option_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the DB Option Group Name to trigger a validation error") + } + } +} + +func TestValidateDbOptionGroupNamePrefix(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "1testing123", + ErrCount: 1, + }, + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: randomString(230), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateDbOptionGroupNamePrefix(tc.Value, "aws_db_option_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the DB Option Group name prefix to trigger a validation error") } } } @@ -2146,23 +2360,23 @@ func TestValidateCognitoRoleMappingsAmbiguousRoleResolutionAgainstType(t *testin }{ { AmbiguousRoleResolution: nil, - Type: cognitoidentity.RoleMappingTypeToken, - ErrCount: 1, + Type: cognitoidentity.RoleMappingTypeToken, + ErrCount: 1, }, { AmbiguousRoleResolution: "foo", - Type: cognitoidentity.RoleMappingTypeToken, - ErrCount: 0, // 0 as it should be defined, the value isn't validated here + Type: cognitoidentity.RoleMappingTypeToken, + ErrCount: 0, // 0 as it should be defined, the value isn't validated here }, { AmbiguousRoleResolution: cognitoidentity.AmbiguousRoleResolutionTypeAuthenticatedRole, - Type: cognitoidentity.RoleMappingTypeToken, - ErrCount: 0, + Type: cognitoidentity.RoleMappingTypeToken, + ErrCount: 0, }, { AmbiguousRoleResolution: cognitoidentity.AmbiguousRoleResolutionTypeDeny, - Type: cognitoidentity.RoleMappingTypeToken, - ErrCount: 0, + Type: cognitoidentity.RoleMappingTypeToken, + ErrCount: 0, }, } @@ -2266,25 +2480,25 @@ func TestValidateSecurityGroupRuleDescription(t *testing.T) { func TestValidateCognitoRoles(t *testing.T) { validValues := []map[string]interface{}{ - map[string]interface{}{"authenticated": "hoge"}, - 
map[string]interface{}{"unauthenticated": "hoge"}, - map[string]interface{}{"authenticated": "hoge", "unauthenticated": "hoge"}, + {"authenticated": "hoge"}, + {"unauthenticated": "hoge"}, + {"authenticated": "hoge", "unauthenticated": "hoge"}, } for _, s := range validValues { - errors := validateCognitoRoles(s, "roles") + errors := validateCognitoRoles(s) if len(errors) > 0 { t.Fatalf("%q should be a valid Cognito Roles: %v", s, errors) } } invalidValues := []map[string]interface{}{ - map[string]interface{}{}, - map[string]interface{}{"invalid": "hoge"}, + {}, + {"invalid": "hoge"}, } for _, s := range invalidValues { - errors := validateCognitoRoles(s, "roles") + errors := validateCognitoRoles(s) if len(errors) == 0 { t.Fatalf("%q should not be a valid Cognito Roles: %v", s, errors) } @@ -2404,33 +2618,6 @@ func TestResourceAWSElastiCacheReplicationGroupAuthTokenValidation(t *testing.T) } } -func TestValidateCognitoUserPoolDomain(t *testing.T) { - validTypes := []string{ - "valid-domain", - "validdomain", - "val1d-d0main", - } - for _, v := range validTypes { - _, errors := validateCognitoUserPoolDomain(v, "name") - if len(errors) != 0 { - t.Fatalf("%q should be a valid Cognito User Pool Domain: %q", v, errors) - } - } - - invalidTypes := []string{ - "UpperCase", - "-invalid", - "invalid-", - strings.Repeat("i", 64), // > 63 - } - for _, v := range invalidTypes { - _, errors := validateCognitoUserPoolDomain(v, "name") - if len(errors) == 0 { - t.Fatalf("%q should be an invalid Cognito User Pool Domain", v) - } - } -} - func TestValidateCognitoUserGroupName(t *testing.T) { validValues := []string{ "foo", @@ -2493,6 +2680,10 @@ func TestValidateCognitoUserPoolId(t *testing.T) { func TestValidateAmazonSideAsn(t *testing.T) { validAsns := []string{ + "7224", + "9059", + "10124", + "17493", "64512", "64513", "65533", @@ -2513,6 +2704,15 @@ func TestValidateAmazonSideAsn(t *testing.T) { "1", "ABCDEFG", "", + "7225", + "9058", + "10125", + "17492", + "64511", + "65535", + "4199999999", + "4294967295", + "9999999999", } for _, v := range invalidAsns { _, errors := validateAmazonSideAsn(v, "amazon_side_asn") @@ -2521,3 +2721,351 @@ func TestValidateAmazonSideAsn(t *testing.T) { } } } + +func TestValidateLaunchTemplateName(t *testing.T) { + validNames := []string{ + "fooBAR123", + "(./_)", + } + for _, v := range validNames { + _, errors := validateLaunchTemplateName(v, "name") + if len(errors) != 0 { + t.Fatalf("%q should be a valid Launch Template name: %q", v, errors) + } + } + + invalidNames := []string{ + "tf", + strings.Repeat("W", 126), // > 125 + "invalid*", + "invalid\name", + "inavalid&", + "invalid+", + "invalid!", + "invalid:", + "invalid;", + } + for _, v := range invalidNames { + _, errors := validateLaunchTemplateName(v, "name") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid Launch Template name: %q", v, errors) + } + } + + invalidNamePrefixes := []string{ + strings.Repeat("W", 100), // > 99 + } + for _, v := range invalidNamePrefixes { + _, errors := validateLaunchTemplateName(v, "name_prefix") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid Launch Template name prefix: %q", v, errors) + } + } +} + +func TestValidateLaunchTemplateId(t *testing.T) { + validIds := []string{ + "lt-foobar123456", + } + for _, v := range validIds { + _, errors := validateLaunchTemplateId(v, "id") + if len(errors) != 0 { + t.Fatalf("%q should be a valid Launch Template id: %q", v, errors) + } + } + + invalidIds := []string{ + strings.Repeat("W", 256), + "invalid-foobar123456", + 
"lt_foobar123456", + } + for _, v := range invalidIds { + _, errors := validateLaunchTemplateId(v, "id") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid Launch Template id: %q", v, errors) + } + } +} + +func TestValidateNeptuneParamGroupName(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting123", + ErrCount: 1, + }, + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "1testing123", + ErrCount: 1, + }, + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: "testing_123", + ErrCount: 1, + }, + { + Value: "testing123-", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateNeptuneParamGroupName(tc.Value, "aws_neptune_cluster_parameter_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Neptune Parameter Group Name to trigger a validation error for %q", tc.Value) + } + } +} + +func TestValidateNeptuneParamGroupNamePrefix(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tEsting123", + ErrCount: 1, + }, + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "1testing123", + ErrCount: 1, + }, + { + Value: "testing--123", + ErrCount: 1, + }, + { + Value: "testing_123", + ErrCount: 1, + }, + { + Value: randomString(256), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateNeptuneParamGroupNamePrefix(tc.Value, "aws_neptune_cluster_parameter_group_name") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the Neptune Parameter Group Name to trigger a validation error for %q", tc.Value) + } + } +} + +func TestValidateCloudFrontPublicKeyName(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "testing 123", + ErrCount: 1, + }, + { + Value: randomString(129), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateCloudFrontPublicKeyName(tc.Value, "aws_cloudfront_public_key") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the CloudFront PublicKey Name to trigger a validation error for %q", tc.Value) + } + } +} + +func TestValidateCloudFrontPublicKeyNamePrefix(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "testing 123", + ErrCount: 1, + }, + { + Value: randomString(128), + ErrCount: 1, + }, + } + + for _, tc := range cases { + _, errors := validateCloudFrontPublicKeyNamePrefix(tc.Value, "aws_cloudfront_public_key") + + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the CloudFront PublicKey Name to trigger a validation error for %q", tc.Value) + } + } +} + +func TestValidateDxConnectionBandWidth(t *testing.T) { + validBandwidths := []string{ + "1Gbps", + "10Gbps", + "50Mbps", + "100Mbps", + "200Mbps", + "300Mbps", + "400Mbps", + "500Mbps", + } + for _, v := range validBandwidths { + _, errors := validateDxConnectionBandWidth()(v, "bandwidth") + if len(errors) != 0 { + t.Fatalf("%q should be a valid bandwidth: %q", v, errors) + } + } + + invalidBandwidths := []string{ + "1Tbps", + "100Gbps", + "10GBpS", + "42Mbps", + "0", + "???", + "a lot", + } + for _, v := range invalidBandwidths { + _, errors := validateDxConnectionBandWidth()(v, "bandwidth") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid bandwidth", v) + } + } +} + +func TestValidateLbTargetGroupName(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: 
"tf.test.elb.target.1", + ErrCount: 1, + }, + { + Value: "-tf-test-target", + ErrCount: 1, + }, + { + Value: "tf-test-target-", + ErrCount: 1, + }, + { + Value: randomString(33), + ErrCount: 1, + }, + } + for _, tc := range cases { + _, errors := validateLbTargetGroupName(tc.Value, "aws_lb_target_group") + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the AWS LB Target Group Name to trigger a validation error for %q", tc.Value) + } + } +} + +func TestValidateLbTargetGroupNamePrefix(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "tf.lb", + ErrCount: 1, + }, + { + Value: "-tf-lb", + ErrCount: 1, + }, + { + Value: randomString(32), + ErrCount: 1, + }, + } + for _, tc := range cases { + _, errors := validateLbTargetGroupNamePrefix(tc.Value, "aws_lb_target_group") + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the AWS LB Target Group Name to trigger a validation error for %q", tc.Value) + } + } +} + +func TestValidateSecretManagerSecretName(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "testing 123", + ErrCount: 1, + }, + { + Value: randomString(513), + ErrCount: 1, + }, + } + for _, tc := range cases { + _, errors := validateSecretManagerSecretName(tc.Value, "aws_secretsmanager_secret") + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the AWS Secretsmanager Secret Name to not trigger a validation error for %q", tc.Value) + } + } +} + +func TestValidateSecretManagerSecretNamePrefix(t *testing.T) { + cases := []struct { + Value string + ErrCount int + }{ + { + Value: "testing123!", + ErrCount: 1, + }, + { + Value: "testing 123", + ErrCount: 1, + }, + { + Value: randomString(512), + ErrCount: 1, + }, + } + for _, tc := range cases { + _, errors := validateSecretManagerSecretNamePrefix(tc.Value, "aws_secretsmanager_secret") + if len(errors) != tc.ErrCount { + t.Fatalf("Expected the AWS Secretsmanager Secret Name to not trigger a validation error for %q", tc.Value) + } + } +} diff --git a/aws/waf_helpers.go b/aws/waf_helpers.go new file mode 100644 index 00000000000..b9e6403ce3c --- /dev/null +++ b/aws/waf_helpers.go @@ -0,0 +1,364 @@ +package aws + +import ( + "bytes" + "fmt" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +func wafSizeConstraintSetSchema() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "size_constraints": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "field_to_match": { + Type: schema.TypeSet, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data": { + Type: schema.TypeString, + Optional: true, + }, + "type": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "comparison_operator": { + Type: schema.TypeString, + Required: true, + }, + "size": { + Type: schema.TypeInt, + Required: true, + }, + "text_transformation": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + } +} + +func diffWafSizeConstraints(oldS, newS []interface{}) []*waf.SizeConstraintSetUpdate { + updates := make([]*waf.SizeConstraintSetUpdate, 0) + + for _, os := range oldS { + constraint := os.(map[string]interface{}) + + if idx, contains := 
sliceContainsMap(newS, constraint); contains { + newS = append(newS[:idx], newS[idx+1:]...) + continue + } + + updates = append(updates, &waf.SizeConstraintSetUpdate{ + Action: aws.String(waf.ChangeActionDelete), + SizeConstraint: &waf.SizeConstraint{ + FieldToMatch: expandFieldToMatch(constraint["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})), + ComparisonOperator: aws.String(constraint["comparison_operator"].(string)), + Size: aws.Int64(int64(constraint["size"].(int))), + TextTransformation: aws.String(constraint["text_transformation"].(string)), + }, + }) + } + + for _, ns := range newS { + constraint := ns.(map[string]interface{}) + + updates = append(updates, &waf.SizeConstraintSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + SizeConstraint: &waf.SizeConstraint{ + FieldToMatch: expandFieldToMatch(constraint["field_to_match"].(*schema.Set).List()[0].(map[string]interface{})), + ComparisonOperator: aws.String(constraint["comparison_operator"].(string)), + Size: aws.Int64(int64(constraint["size"].(int))), + TextTransformation: aws.String(constraint["text_transformation"].(string)), + }, + }) + } + return updates +} + +func flattenWafSizeConstraints(sc []*waf.SizeConstraint) []interface{} { + out := make([]interface{}, len(sc)) + for i, c := range sc { + m := make(map[string]interface{}) + m["comparison_operator"] = *c.ComparisonOperator + if c.FieldToMatch != nil { + m["field_to_match"] = flattenFieldToMatch(c.FieldToMatch) + } + m["size"] = *c.Size + m["text_transformation"] = *c.TextTransformation + out[i] = m + } + return out +} + +func flattenWafGeoMatchConstraint(ts []*waf.GeoMatchConstraint) []interface{} { + out := make([]interface{}, len(ts)) + for i, t := range ts { + m := make(map[string]interface{}) + m["type"] = *t.Type + m["value"] = *t.Value + out[i] = m + } + return out +} + +func diffWafGeoMatchSetConstraints(oldT, newT []interface{}) []*waf.GeoMatchSetUpdate { + updates := make([]*waf.GeoMatchSetUpdate, 0) + + for _, od := range oldT { + constraint := od.(map[string]interface{}) + + if idx, contains := sliceContainsMap(newT, constraint); contains { + newT = append(newT[:idx], newT[idx+1:]...) + continue + } + + updates = append(updates, &waf.GeoMatchSetUpdate{ + Action: aws.String(waf.ChangeActionDelete), + GeoMatchConstraint: &waf.GeoMatchConstraint{ + Type: aws.String(constraint["type"].(string)), + Value: aws.String(constraint["value"].(string)), + }, + }) + } + + for _, nd := range newT { + constraint := nd.(map[string]interface{}) + + updates = append(updates, &waf.GeoMatchSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + GeoMatchConstraint: &waf.GeoMatchConstraint{ + Type: aws.String(constraint["type"].(string)), + Value: aws.String(constraint["value"].(string)), + }, + }) + } + return updates +} + +func diffWafRegexPatternSetPatternStrings(oldPatterns, newPatterns []interface{}) []*waf.RegexPatternSetUpdate { + updates := make([]*waf.RegexPatternSetUpdate, 0) + + for _, op := range oldPatterns { + if idx, contains := sliceContainsString(newPatterns, op.(string)); contains { + newPatterns = append(newPatterns[:idx], newPatterns[idx+1:]...) 
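+ // This pattern already exists in the new set: drop it from the
+ // pending inserts and skip emitting a delete update for it.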
+ continue + } + + updates = append(updates, &waf.RegexPatternSetUpdate{ + Action: aws.String(waf.ChangeActionDelete), + RegexPatternString: aws.String(op.(string)), + }) + } + + for _, np := range newPatterns { + updates = append(updates, &waf.RegexPatternSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + RegexPatternString: aws.String(np.(string)), + }) + } + return updates +} + +func diffWafRulePredicates(oldP, newP []interface{}) []*waf.RuleUpdate { + updates := make([]*waf.RuleUpdate, 0) + + for _, op := range oldP { + predicate := op.(map[string]interface{}) + + if idx, contains := sliceContainsMap(newP, predicate); contains { + newP = append(newP[:idx], newP[idx+1:]...) + continue + } + + updates = append(updates, &waf.RuleUpdate{ + Action: aws.String(waf.ChangeActionDelete), + Predicate: &waf.Predicate{ + Negated: aws.Bool(predicate["negated"].(bool)), + Type: aws.String(predicate["type"].(string)), + DataId: aws.String(predicate["data_id"].(string)), + }, + }) + } + + for _, np := range newP { + predicate := np.(map[string]interface{}) + + updates = append(updates, &waf.RuleUpdate{ + Action: aws.String(waf.ChangeActionInsert), + Predicate: &waf.Predicate{ + Negated: aws.Bool(predicate["negated"].(bool)), + Type: aws.String(predicate["type"].(string)), + DataId: aws.String(predicate["data_id"].(string)), + }, + }) + } + return updates +} + +func sliceContainsString(slice []interface{}, s string) (int, bool) { + for idx, value := range slice { + v := value.(string) + if v == s { + return idx, true + } + } + return -1, false +} + +func diffWafRuleGroupActivatedRules(oldRules, newRules []interface{}) []*waf.RuleGroupUpdate { + updates := make([]*waf.RuleGroupUpdate, 0) + + for _, op := range oldRules { + rule := op.(map[string]interface{}) + + if idx, contains := sliceContainsMap(newRules, rule); contains { + newRules = append(newRules[:idx], newRules[idx+1:]...) 
+ continue + } + + updates = append(updates, &waf.RuleGroupUpdate{ + Action: aws.String(waf.ChangeActionDelete), + ActivatedRule: expandWafActivatedRule(rule), + }) + } + + for _, np := range newRules { + rule := np.(map[string]interface{}) + + updates = append(updates, &waf.RuleGroupUpdate{ + Action: aws.String(waf.ChangeActionInsert), + ActivatedRule: expandWafActivatedRule(rule), + }) + } + return updates +} + +func flattenWafActivatedRules(activatedRules []*waf.ActivatedRule) []interface{} { + out := make([]interface{}, len(activatedRules)) + for i, ar := range activatedRules { + rule := map[string]interface{}{ + "priority": int(*ar.Priority), + "rule_id": *ar.RuleId, + "type": *ar.Type, + } + if ar.Action != nil { + rule["action"] = []interface{}{ + map[string]interface{}{ + "type": *ar.Action.Type, + }, + } + } + out[i] = rule + } + return out +} + +func expandWafActivatedRule(rule map[string]interface{}) *waf.ActivatedRule { + r := &waf.ActivatedRule{ + Priority: aws.Int64(int64(rule["priority"].(int))), + RuleId: aws.String(rule["rule_id"].(string)), + Type: aws.String(rule["type"].(string)), + } + + if a, ok := rule["action"].([]interface{}); ok && len(a) > 0 { + m := a[0].(map[string]interface{}) + r.Action = &waf.WafAction{ + Type: aws.String(m["type"].(string)), + } + } + return r +} + +func flattenWafRegexMatchTuples(tuples []*waf.RegexMatchTuple) []interface{} { + out := make([]interface{}, len(tuples)) + for i, t := range tuples { + m := make(map[string]interface{}) + + if t.FieldToMatch != nil { + m["field_to_match"] = flattenFieldToMatch(t.FieldToMatch) + } + m["regex_pattern_set_id"] = *t.RegexPatternSetId + m["text_transformation"] = *t.TextTransformation + + out[i] = m + } + return out +} + +func diffWafRegexMatchSetTuples(oldT, newT []interface{}) []*waf.RegexMatchSetUpdate { + updates := make([]*waf.RegexMatchSetUpdate, 0) + + for _, ot := range oldT { + tuple := ot.(map[string]interface{}) + + if idx, contains := sliceContainsMap(newT, tuple); contains { + newT = append(newT[:idx], newT[idx+1:]...) 
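+ // The regex match tuple is unchanged between old and new: remove it
+ // from the pending inserts and emit no delete update for it.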
+ continue + } + + ftm := tuple["field_to_match"].([]interface{}) + updates = append(updates, &waf.RegexMatchSetUpdate{ + Action: aws.String(waf.ChangeActionDelete), + RegexMatchTuple: &waf.RegexMatchTuple{ + FieldToMatch: expandFieldToMatch(ftm[0].(map[string]interface{})), + RegexPatternSetId: aws.String(tuple["regex_pattern_set_id"].(string)), + TextTransformation: aws.String(tuple["text_transformation"].(string)), + }, + }) + } + + for _, nt := range newT { + tuple := nt.(map[string]interface{}) + + ftm := tuple["field_to_match"].([]interface{}) + updates = append(updates, &waf.RegexMatchSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + RegexMatchTuple: &waf.RegexMatchTuple{ + FieldToMatch: expandFieldToMatch(ftm[0].(map[string]interface{})), + RegexPatternSetId: aws.String(tuple["regex_pattern_set_id"].(string)), + TextTransformation: aws.String(tuple["text_transformation"].(string)), + }, + }) + } + return updates +} + +func resourceAwsWafRegexMatchSetTupleHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + if v, ok := m["field_to_match"]; ok { + ftms := v.([]interface{}) + ftm := ftms[0].(map[string]interface{}) + + if v, ok := ftm["data"]; ok { + buf.WriteString(fmt.Sprintf("%s-", strings.ToLower(v.(string)))) + } + buf.WriteString(fmt.Sprintf("%s-", ftm["type"].(string))) + } + buf.WriteString(fmt.Sprintf("%s-", m["regex_pattern_set_id"].(string))) + buf.WriteString(fmt.Sprintf("%s-", m["text_transformation"].(string))) + + return hashcode.String(buf.String()) +} diff --git a/aws/waf_token_handlers.go b/aws/waf_token_handlers.go index ac99f09507e..3de972aa256 100644 --- a/aws/waf_token_handlers.go +++ b/aws/waf_token_handlers.go @@ -1,24 +1,23 @@ package aws import ( + "fmt" "time" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/resource" ) type WafRetryer struct { Connection *waf.WAF - Region string } type withTokenFunc func(token *string) (interface{}, error) func (t *WafRetryer) RetryWithToken(f withTokenFunc) (interface{}, error) { - awsMutexKV.Lock(t.Region) - defer awsMutexKV.Unlock(t.Region) + awsMutexKV.Lock("WafRetryer") + defer awsMutexKV.Unlock("WafRetryer") var out interface{} err := resource.Retry(15*time.Minute, func() *resource.RetryError { @@ -27,7 +26,7 @@ func (t *WafRetryer) RetryWithToken(f withTokenFunc) (interface{}, error) { tokenOut, err = t.Connection.GetChangeToken(&waf.GetChangeTokenInput{}) if err != nil { - return resource.NonRetryableError(errwrap.Wrapf("Failed to acquire change token: {{err}}", err)) + return resource.NonRetryableError(fmt.Errorf("Failed to acquire change token: %s", err)) } out, err = f(tokenOut.ChangeToken) @@ -44,6 +43,6 @@ func (t *WafRetryer) RetryWithToken(f withTokenFunc) (interface{}, error) { return out, err } -func newWafRetryer(conn *waf.WAF, region string) *WafRetryer { - return &WafRetryer{Connection: conn, Region: region} +func newWafRetryer(conn *waf.WAF) *WafRetryer { + return &WafRetryer{Connection: conn} } diff --git a/aws/wafregionl_token_handlers.go b/aws/wafregionl_token_handlers.go index da3d8b58f78..17c69c0f969 100644 --- a/aws/wafregionl_token_handlers.go +++ b/aws/wafregionl_token_handlers.go @@ -1,12 +1,12 @@ package aws import ( + "fmt" "time" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/waf" "github.com/aws/aws-sdk-go/service/wafregional" - "github.com/hashicorp/errwrap" 
"github.com/hashicorp/terraform/helper/resource" ) @@ -28,7 +28,7 @@ func (t *WafRegionalRetryer) RetryWithToken(f withRegionalTokenFunc) (interface{ tokenOut, err = t.Connection.GetChangeToken(&waf.GetChangeTokenInput{}) if err != nil { - return resource.NonRetryableError(errwrap.Wrapf("Failed to acquire change token: {{err}}", err)) + return resource.NonRetryableError(fmt.Errorf("Failed to acquire change token: %s", err)) } out, err = f(tokenOut.ChangeToken) diff --git a/build-in-docker.sh b/build-in-docker.sh new file mode 100755 index 00000000000..cfc4e73159d --- /dev/null +++ b/build-in-docker.sh @@ -0,0 +1,13 @@ +#!/bin/bash + +set -e + +VERSION=$1 +if [ -z "$VERSION" ]; then + echo "Provide the Terraform provider version number as an argument" + exit -1 +fi + +IMAGE_NAME=terraform-provider-aws-build +docker -D build -t $IMAGE_NAME . +docker run -v $(pwd):/opt/project -e VERSION=${VERSION} -e GIT_SHORT_HASH=$(git rev-parse --short HEAD) --entrypoint /bin/ash $IMAGE_NAME /opt/project/copy-build-output.sh diff --git a/copy-build-output.sh b/copy-build-output.sh new file mode 100755 index 00000000000..399679017a3 --- /dev/null +++ b/copy-build-output.sh @@ -0,0 +1,7 @@ +#!/bin/sh + +OUTPUT_DIR=${VERSION}_patched_${GIT_SHORT_HASH}/linux_amd64 +mkdir -p /opt/project/${OUTPUT_DIR} +cp /go/bin/terraform-provider-aws /opt/project/${OUTPUT_DIR}/terraform-provider-aws_$VERSION +cd /opt/project/${OUTPUT_DIR} +sha256sum terraform-provider-aws_$VERSION > SHA256SUMS diff --git a/examples/cloudhsm/main.tf b/examples/cloudhsm/main.tf new file mode 100644 index 00000000000..4116cb2f0a1 --- /dev/null +++ b/examples/cloudhsm/main.tf @@ -0,0 +1,44 @@ +provider "aws" { + region = "${var.aws_region}" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "cloudhsm2_vpc" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "example-aws_cloudhsm_v2_cluster" + } +} + +resource "aws_subnet" "cloudhsm2_subnets" { + count = 2 + vpc_id = "${aws_vpc.cloudhsm2_vpc.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = false + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "example-aws_cloudhsm_v2_cluster" + } +} + +resource "aws_cloudhsm_v2_cluster" "cloudhsm_v2_cluster" { + hsm_type = "hsm1.medium" + subnet_ids = ["${aws_subnet.cloudhsm2_subnets.*.id}"] + + tags { + Name = "example-aws_cloudhsm_v2_cluster" + } +} + +resource "aws_cloudhsm_v2_hsm" "cloudhsm_v2_hsm" { + subnet_id = "${aws_subnet.cloudhsm2_subnets.0.id}" + cluster_id = "${aws_cloudhsm_v2_cluster.cloudhsm_v2_cluster.cluster_id}" +} + +data "aws_cloudhsm_v2_cluster" "cluster" { + cluster_id = "${aws_cloudhsm_v2_cluster.cloudhsm_v2_cluster.cluster_id}" + depends_on = ["aws_cloudhsm_v2_hsm.cloudhsm_v2_hsm"] +} diff --git a/examples/cloudhsm/outputs.tf b/examples/cloudhsm/outputs.tf new file mode 100644 index 00000000000..a91d8997a27 --- /dev/null +++ b/examples/cloudhsm/outputs.tf @@ -0,0 +1,7 @@ +output "hsm_ip_address" { + value = "${aws_cloudhsm_v2_hsm.cloudhsm_v2_hsm.ip_address}" +} + +output "cluster_data_certificate" { + value = "${data.aws_cloudhsm_v2_cluster.cluster.cluster_certificates.0.cluster_csr}" +} diff --git a/examples/cloudhsm/variables.tf b/examples/cloudhsm/variables.tf new file mode 100644 index 00000000000..31cdfd980ac --- /dev/null +++ b/examples/cloudhsm/variables.tf @@ -0,0 +1,9 @@ +variable "aws_region" { + description = "AWS region to launch cloudHSM cluster." 
+ default = "eu-west-1" +} + +variable "subnets" { + default = ["10.0.1.0/24", "10.0.2.0/24"] + type = "list" +} diff --git a/examples/cognito-user-pool/README.md b/examples/cognito-user-pool/README.md index 828bd34ef9c..6236eda6ab1 100644 --- a/examples/cognito-user-pool/README.md +++ b/examples/cognito-user-pool/README.md @@ -1,8 +1,6 @@ # Cognito User Pool example -This example shows how to create - -This creates a Cognito User Pool, IAM roles and lambdas. +This example creates a Cognito User Pool, IAM roles and lambdas. To run, configure your AWS provider as described in https://www.terraform.io/docs/providers/aws/index.html diff --git a/examples/cognito-user-pool/main.tf b/examples/cognito-user-pool/main.tf index 74c65c62f0d..53316fa5c69 100644 --- a/examples/cognito-user-pool/main.tf +++ b/examples/cognito-user-pool/main.tf @@ -107,6 +107,7 @@ resource "aws_cognito_user_pool" "pool" { pre_authentication = "${aws_lambda_function.main.arn}" pre_sign_up = "${aws_lambda_function.main.arn}" pre_token_generation = "${aws_lambda_function.main.arn}" + user_migration = "${aws_lambda_function.main.arn}" verify_auth_challenge_response = "${aws_lambda_function.main.arn}" } diff --git a/examples/ecs-alb/main.tf b/examples/ecs-alb/main.tf index d19950bd4b9..dc16aca7ec4 100644 --- a/examples/ecs-alb/main.tf +++ b/examples/ecs-alb/main.tf @@ -143,8 +143,8 @@ resource "aws_security_group" "instance_sg" { ingress { protocol = "tcp" - from_port = 8080 - to_port = 8080 + from_port = 32768 + to_port = 61000 security_groups = [ "${aws_security_group.lb_sg.id}", @@ -185,7 +185,7 @@ resource "aws_ecs_service" "test" { name = "tf-example-ecs-ghost" cluster = "${aws_ecs_cluster.main.id}" task_definition = "${aws_ecs_task_definition.ghost.arn}" - desired_count = 1 + desired_count = "${var.service_desired}" iam_role = "${aws_iam_role.ecs_service.name}" load_balancer { @@ -291,7 +291,7 @@ resource "aws_iam_role_policy" "instance" { resource "aws_alb_target_group" "test" { name = "tf-example-ecs-ghost" - port = 80 + port = 8080 protocol = "HTTP" vpc_id = "${aws_vpc.main.id}" } diff --git a/examples/ecs-alb/task-definition.json b/examples/ecs-alb/task-definition.json index 184e454151e..aae57890bd8 100644 --- a/examples/ecs-alb/task-definition.json +++ b/examples/ecs-alb/task-definition.json @@ -8,7 +8,7 @@ "portMappings": [ { "containerPort": 2368, - "hostPort": 8080 + "hostPort": 0 } ], "logConfiguration": { diff --git a/examples/ecs-alb/variables.tf b/examples/ecs-alb/variables.tf index aec42571cf1..b40d8758115 100644 --- a/examples/ecs-alb/variables.tf +++ b/examples/ecs-alb/variables.tf @@ -32,6 +32,11 @@ variable "asg_desired" { default = "1" } +variable "service_desired" { + description = "Desired numbers of instances in the ecs service" + default = "1" +} + variable "admin_cidr_ingress" { description = "CIDR to allow tcp/22 ingress to EC2 instance" } diff --git a/examples/eks-getting-started/README.md b/examples/eks-getting-started/README.md new file mode 100644 index 00000000000..30ffab8a535 --- /dev/null +++ b/examples/eks-getting-started/README.md @@ -0,0 +1,7 @@ +# EKS Getting Started Guide Configuration + +This is the full configuration from https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html + +See that guide for additional information. 
+ +NOTE: This full configuration utilizes the [Terraform http provider](https://www.terraform.io/docs/providers/http/index.html) to call out to icanhazip.com to determine your local workstation external IP for easily configuring EC2 Security Group access to the Kubernetes master servers. Feel free to replace this as necessary. diff --git a/examples/eks-getting-started/eks-cluster.tf b/examples/eks-getting-started/eks-cluster.tf new file mode 100644 index 00000000000..58a7389fc49 --- /dev/null +++ b/examples/eks-getting-started/eks-cluster.tf @@ -0,0 +1,87 @@ +# +# EKS Cluster Resources +# * IAM Role to allow EKS service to manage other AWS services +# * EC2 Security Group to allow networking traffic with EKS cluster +# * EKS Cluster +# + +resource "aws_iam_role" "demo-cluster" { + name = "terraform-eks-demo-cluster" + + assume_role_policy = < Checking for unchecked errors..." - -if ! which errcheck > /dev/null; then - echo "==> Installing errcheck..." - go get -u github.com/kisielk/errcheck -fi - -err_files=$(errcheck -ignoretests \ - -ignore 'github.com/hashicorp/terraform/helper/schema:Set' \ - -ignore 'bytes:.*' \ - -ignore 'io:Close|Write' \ - $(go list ./...| grep -v /vendor/)) - -if [[ -n ${err_files} ]]; then - echo 'Unchecked errors found in the following places:' - echo "${err_files}" - echo "Please handle returned errors. You can check directly with \`make errcheck\`" - exit 1 -fi - -exit 0 diff --git a/scripts/gofmtcheck.sh b/scripts/gofmtcheck.sh index 1c055815f8d..dd2307acc57 100755 --- a/scripts/gofmtcheck.sh +++ b/scripts/gofmtcheck.sh @@ -2,7 +2,7 @@ # Check gofmt echo "==> Checking that code complies with gofmt requirements..." -gofmt_files=$(gofmt -l `find . -name '*.go' | grep -v vendor`) +gofmt_files=$(find . -name '*.go' | grep -v vendor | xargs gofmt -l -s) if [[ -n ${gofmt_files} ]]; then echo 'gofmt needs running on the following files:' echo "${gofmt_files}" diff --git a/scripts/websitefmtcheck.sh b/scripts/websitefmtcheck.sh new file mode 100755 index 00000000000..bbbec8053b3 --- /dev/null +++ b/scripts/websitefmtcheck.sh @@ -0,0 +1,19 @@ +#!/bin/bash + +set -eou pipefail + +npm list codedown > /dev/null 2>&1 || npm install --no-save codedown > /dev/null 2>&1 + +problems=false +for f in $(find website -name '*.markdown'); do + if [ "${1-}" = "diff" ]; then + echo "$f" + cat "$f" | node_modules/.bin/codedown hcl | terraform fmt -diff=true - + else + cat "$f" | node_modules/.bin/codedown hcl | terraform fmt -check=true - || problems=true && echo "Formatting errors in $f" + fi +done + +if [ "$problems" = true ] ; then + exit 1 +fi diff --git a/vendor/github.com/apparentlymart/go-cidr/cidr/cidr.go b/vendor/github.com/apparentlymart/go-cidr/cidr/cidr.go index a31cdec7732..c292db0ce07 100644 --- a/vendor/github.com/apparentlymart/go-cidr/cidr/cidr.go +++ b/vendor/github.com/apparentlymart/go-cidr/cidr/cidr.go @@ -71,8 +71,13 @@ func Host(base *net.IPNet, num int) (net.IP, error) { if numUint64 > maxHostNum { return nil, fmt.Errorf("prefix of %d does not accommodate a host numbered %d", parentLen, num) } - - return insertNumIntoIP(ip, num, 32), nil + var bitlength int + if ip.To4() != nil { + bitlength = 32 + } else { + bitlength = 128 + } + return insertNumIntoIP(ip, num, bitlength), nil } // AddressRange returns the first and last addresses in the given CIDR range. 
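The `Host` change above picks the inserted bit length from the address family, so host numbering now works for IPv6 networks as well as IPv4. A minimal sketch of exercising the updated helper (using the package's upstream import path; the example networks and host numbers are illustrative):

```go
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/apparentlymart/go-cidr/cidr"
)

func main() {
	// IPv4: host 5 inside 10.0.0.0/24 is computed over 32 bits.
	_, v4net, _ := net.ParseCIDR("10.0.0.0/24")
	ip4, err := cidr.Host(v4net, 5)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ip4) // 10.0.0.5

	// IPv6: with the fix the helper uses a 128-bit length, so the host
	// number lands in the low bits of the IPv6 address.
	_, v6net, _ := net.ParseCIDR("2001:db8::/64")
	ip6, err := cidr.Host(v6net, 5)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ip6) // 2001:db8::5
}
```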
@@ -110,3 +115,96 @@ func AddressCount(network *net.IPNet) uint64 { prefixLen, bits := network.Mask.Size() return 1 << (uint64(bits) - uint64(prefixLen)) } + +//VerifyNoOverlap takes a list subnets and supernet (CIDRBlock) and verifies +//none of the subnets overlap and all subnets are in the supernet +//it returns an error if any of those conditions are not satisfied +func VerifyNoOverlap(subnets []*net.IPNet, CIDRBlock *net.IPNet) error { + firstLastIP := make([][]net.IP, len(subnets)) + for i, s := range subnets { + first, last := AddressRange(s) + firstLastIP[i] = []net.IP{first, last} + } + for i, s := range subnets { + if !CIDRBlock.Contains(firstLastIP[i][0]) || !CIDRBlock.Contains(firstLastIP[i][1]) { + return fmt.Errorf("%s does not fully contain %s", CIDRBlock.String(), s.String()) + } + for j := i + 1; j < len(subnets); j++ { + first := firstLastIP[j][0] + last := firstLastIP[j][1] + if s.Contains(first) || s.Contains(last) { + return fmt.Errorf("%s overlaps with %s", subnets[j].String(), s.String()) + } + } + } + return nil +} + +// PreviousSubnet returns the subnet of the desired mask in the IP space +// just lower than the start of IPNet provided. If the IP space rolls over +// then the second return value is true +func PreviousSubnet(network *net.IPNet, prefixLen int) (*net.IPNet, bool) { + startIP := checkIPv4(network.IP) + previousIP := make(net.IP, len(startIP)) + copy(previousIP, startIP) + cMask := net.CIDRMask(prefixLen, 8*len(previousIP)) + previousIP = Dec(previousIP) + previous := &net.IPNet{IP: previousIP.Mask(cMask), Mask: cMask} + if startIP.Equal(net.IPv4zero) || startIP.Equal(net.IPv6zero) { + return previous, true + } + return previous, false +} + +// NextSubnet returns the next available subnet of the desired mask size +// starting for the maximum IP of the offset subnet +// If the IP exceeds the maxium IP then the second return value is true +func NextSubnet(network *net.IPNet, prefixLen int) (*net.IPNet, bool) { + _, currentLast := AddressRange(network) + mask := net.CIDRMask(prefixLen, 8*len(currentLast)) + currentSubnet := &net.IPNet{IP: currentLast.Mask(mask), Mask: mask} + _, last := AddressRange(currentSubnet) + last = Inc(last) + next := &net.IPNet{IP: last.Mask(mask), Mask: mask} + if last.Equal(net.IPv4zero) || last.Equal(net.IPv6zero) { + return next, true + } + return next, false +} + +//Inc increases the IP by one this returns a new []byte for the IP +func Inc(IP net.IP) net.IP { + IP = checkIPv4(IP) + incIP := make([]byte, len(IP)) + copy(incIP, IP) + for j := len(incIP) - 1; j >= 0; j-- { + incIP[j]++ + if incIP[j] > 0 { + break + } + } + return incIP +} + +//Dec decreases the IP by one this returns a new []byte for the IP +func Dec(IP net.IP) net.IP { + IP = checkIPv4(IP) + decIP := make([]byte, len(IP)) + copy(decIP, IP) + decIP = checkIPv4(decIP) + for j := len(decIP) - 1; j >= 0; j-- { + decIP[j]-- + if decIP[j] < 255 { + break + } + } + return decIP +} + +func checkIPv4(ip net.IP) net.IP { + // Go for some reason allocs IPv6len for IPv4 so we have to correct it + if v4 := ip.To4(); v4 != nil { + return v4 + } + return ip +} diff --git a/vendor/github.com/armon/go-radix/go.mod b/vendor/github.com/armon/go-radix/go.mod new file mode 100644 index 00000000000..4336aa29ea2 --- /dev/null +++ b/vendor/github.com/armon/go-radix/go.mod @@ -0,0 +1 @@ +module github.com/armon/go-radix diff --git a/vendor/github.com/armon/go-radix/radix.go b/vendor/github.com/armon/go-radix/radix.go index 
f9655a126b7..e2bb22eb91d 100644 --- a/vendor/github.com/armon/go-radix/radix.go +++ b/vendor/github.com/armon/go-radix/radix.go @@ -44,13 +44,13 @@ func (n *node) addEdge(e edge) { n.edges.Sort() } -func (n *node) replaceEdge(e edge) { +func (n *node) updateEdge(label byte, node *node) { num := len(n.edges) idx := sort.Search(num, func(i int) bool { - return n.edges[i].label >= e.label + return n.edges[i].label >= label }) - if idx < num && n.edges[idx].label == e.label { - n.edges[idx].node = e.node + if idx < num && n.edges[idx].label == label { + n.edges[idx].node = node return } panic("replacing missing edge") @@ -198,10 +198,7 @@ func (t *Tree) Insert(s string, v interface{}) (interface{}, bool) { child := &node{ prefix: search[:commonPrefix], } - parent.replaceEdge(edge{ - label: search[0], - node: child, - }) + parent.updateEdge(search[0], child) // Restore the existing node child.addEdge(edge{ diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/client.go b/vendor/github.com/aws/aws-sdk-go/aws/client/client.go index 788fe6e279b..212fe25e71e 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/client/client.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/client/client.go @@ -15,6 +15,12 @@ type Config struct { Endpoint string SigningRegion string SigningName string + + // States that the signing name did not come from a modeled source but + // was derived based on other data. Used by service client constructors + // to determine if the signin name can be overriden based on metadata the + // service has. + SigningNameDerived bool } // ConfigProvider provides a generic way for a service client to receive @@ -85,6 +91,6 @@ func (c *Client) AddDebugHandlers() { return } - c.Handlers.Send.PushFrontNamed(request.NamedHandler{Name: "awssdk.client.LogRequest", Fn: logRequest}) - c.Handlers.Send.PushBackNamed(request.NamedHandler{Name: "awssdk.client.LogResponse", Fn: logResponse}) + c.Handlers.Send.PushFrontNamed(LogHTTPRequestHandler) + c.Handlers.Send.PushBackNamed(LogHTTPResponseHandler) } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/logger.go b/vendor/github.com/aws/aws-sdk-go/aws/client/logger.go index e223c54cc6c..ce9fb896d94 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/client/logger.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/client/logger.go @@ -44,12 +44,22 @@ func (reader *teeReaderCloser) Close() error { return reader.Source.Close() } +// LogHTTPRequestHandler is a SDK request handler to log the HTTP request sent +// to a service. Will include the HTTP request body if the LogLevel of the +// request matches LogDebugWithHTTPBody. 
+var LogHTTPRequestHandler = request.NamedHandler{ + Name: "awssdk.client.LogRequest", + Fn: logRequest, +} + func logRequest(r *request.Request) { logBody := r.Config.LogLevel.Matches(aws.LogDebugWithHTTPBody) bodySeekable := aws.IsReaderSeekable(r.Body) - dumpedBody, err := httputil.DumpRequestOut(r.HTTPRequest, logBody) + + b, err := httputil.DumpRequestOut(r.HTTPRequest, logBody) if err != nil { - r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg, r.ClientInfo.ServiceName, r.Operation.Name, err)) + r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg, + r.ClientInfo.ServiceName, r.Operation.Name, err)) return } @@ -63,7 +73,28 @@ func logRequest(r *request.Request) { r.ResetBody() } - r.Config.Logger.Log(fmt.Sprintf(logReqMsg, r.ClientInfo.ServiceName, r.Operation.Name, string(dumpedBody))) + r.Config.Logger.Log(fmt.Sprintf(logReqMsg, + r.ClientInfo.ServiceName, r.Operation.Name, string(b))) +} + +// LogHTTPRequestHeaderHandler is a SDK request handler to log the HTTP request sent +// to a service. Will only log the HTTP request's headers. The request payload +// will not be read. +var LogHTTPRequestHeaderHandler = request.NamedHandler{ + Name: "awssdk.client.LogRequestHeader", + Fn: logRequestHeader, +} + +func logRequestHeader(r *request.Request) { + b, err := httputil.DumpRequestOut(r.HTTPRequest, false) + if err != nil { + r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg, + r.ClientInfo.ServiceName, r.Operation.Name, err)) + return + } + + r.Config.Logger.Log(fmt.Sprintf(logReqMsg, + r.ClientInfo.ServiceName, r.Operation.Name, string(b))) } const logRespMsg = `DEBUG: Response %s/%s Details: @@ -76,27 +107,44 @@ const logRespErrMsg = `DEBUG ERROR: Response %s/%s: %s -----------------------------------------------------` +// LogHTTPResponseHandler is a SDK request handler to log the HTTP response +// received from a service. Will include the HTTP response body if the LogLevel +// of the request matches LogDebugWithHTTPBody. 
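The named request and response logging handlers above can be swapped on a per-client basis. A small, hypothetical sketch of replacing the default body-logging handlers with the header-only variants on a single service client (the region and the choice of S3 are illustrative, not part of this change):

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/client"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// LogDebug causes clients built from this session to register the
	// default request/response logging handlers.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("us-east-1"),
		LogLevel: aws.LogLevel(aws.LogDebug),
	}))

	svc := s3.New(sess)

	// Swap the defaults for the header-only handlers so only HTTP
	// headers are logged for this client's requests and responses.
	svc.Handlers.Send.RemoveByName(client.LogHTTPRequestHandler.Name)
	svc.Handlers.Send.PushFrontNamed(client.LogHTTPRequestHeaderHandler)
	svc.Handlers.Send.RemoveByName(client.LogHTTPResponseHandler.Name)
	svc.Handlers.Send.PushBackNamed(client.LogHTTPResponseHeaderHandler)
}
```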
+var LogHTTPResponseHandler = request.NamedHandler{ + Name: "awssdk.client.LogResponse", + Fn: logResponse, +} + func logResponse(r *request.Request) { lw := &logWriter{r.Config.Logger, bytes.NewBuffer(nil)} - r.HTTPResponse.Body = &teeReaderCloser{ - Reader: io.TeeReader(r.HTTPResponse.Body, lw), - Source: r.HTTPResponse.Body, + + logBody := r.Config.LogLevel.Matches(aws.LogDebugWithHTTPBody) + if logBody { + r.HTTPResponse.Body = &teeReaderCloser{ + Reader: io.TeeReader(r.HTTPResponse.Body, lw), + Source: r.HTTPResponse.Body, + } } handlerFn := func(req *request.Request) { - body, err := httputil.DumpResponse(req.HTTPResponse, false) + b, err := httputil.DumpResponse(req.HTTPResponse, false) if err != nil { - lw.Logger.Log(fmt.Sprintf(logRespErrMsg, req.ClientInfo.ServiceName, req.Operation.Name, err)) + lw.Logger.Log(fmt.Sprintf(logRespErrMsg, + req.ClientInfo.ServiceName, req.Operation.Name, err)) return } - b, err := ioutil.ReadAll(lw.buf) - if err != nil { - lw.Logger.Log(fmt.Sprintf(logRespErrMsg, req.ClientInfo.ServiceName, req.Operation.Name, err)) - return - } - lw.Logger.Log(fmt.Sprintf(logRespMsg, req.ClientInfo.ServiceName, req.Operation.Name, string(body))) - if req.Config.LogLevel.Matches(aws.LogDebugWithHTTPBody) { + lw.Logger.Log(fmt.Sprintf(logRespMsg, + req.ClientInfo.ServiceName, req.Operation.Name, string(b))) + + if logBody { + b, err := ioutil.ReadAll(lw.buf) + if err != nil { + lw.Logger.Log(fmt.Sprintf(logRespErrMsg, + req.ClientInfo.ServiceName, req.Operation.Name, err)) + return + } + lw.Logger.Log(string(b)) } } @@ -110,3 +158,27 @@ func logResponse(r *request.Request) { Name: handlerName, Fn: handlerFn, }) } + +// LogHTTPResponseHeaderHandler is a SDK request handler to log the HTTP +// response received from a service. Will only log the HTTP response's headers. +// The response payload will not be read. +var LogHTTPResponseHeaderHandler = request.NamedHandler{ + Name: "awssdk.client.LogResponseHeader", + Fn: logResponseHeader, +} + +func logResponseHeader(r *request.Request) { + if r.Config.Logger == nil { + return + } + + b, err := httputil.DumpResponse(r.HTTPResponse, false) + if err != nil { + r.Config.Logger.Log(fmt.Sprintf(logRespErrMsg, + r.ClientInfo.ServiceName, r.Operation.Name, err)) + return + } + + r.Config.Logger.Log(fmt.Sprintf(logRespMsg, + r.ClientInfo.ServiceName, r.Operation.Name, string(b))) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/metadata/client_info.go b/vendor/github.com/aws/aws-sdk-go/aws/client/metadata/client_info.go index 4778056ddfd..920e9fddf87 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/client/metadata/client_info.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/client/metadata/client_info.go @@ -3,6 +3,7 @@ package metadata // ClientInfo wraps immutable data from the client.Client structure. type ClientInfo struct { ServiceName string + ServiceID string APIVersion string Endpoint string SigningName string diff --git a/vendor/github.com/aws/aws-sdk-go/aws/config.go b/vendor/github.com/aws/aws-sdk-go/aws/config.go index 2b162251694..e9695ef249b 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/config.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/config.go @@ -18,7 +18,7 @@ const UseServiceDefaultRetries = -1 type RequestRetryer interface{} // A Config provides service configuration for service clients. By default, -// all clients will use the defaults.DefaultConfig tructure. +// all clients will use the defaults.DefaultConfig structure. 
// // // Create Session with MaxRetry configuration to be shared by multiple // // service clients. @@ -45,8 +45,8 @@ type Config struct { // that overrides the default generated endpoint for a client. Set this // to `""` to use the default generated endpoint. // - // @note You must still provide a `Region` value when specifying an - // endpoint for a client. + // Note: You must still provide a `Region` value when specifying an + // endpoint for a client. Endpoint *string // The resolver to use for looking up endpoints for AWS service clients @@ -65,8 +65,8 @@ type Config struct { // noted. A full list of regions is found in the "Regions and Endpoints" // document. // - // @see http://docs.aws.amazon.com/general/latest/gr/rande.html - // AWS Regions and Endpoints + // See http://docs.aws.amazon.com/general/latest/gr/rande.html for AWS + // Regions and Endpoints. Region *string // Set this to `true` to disable SSL when sending requests. Defaults @@ -120,9 +120,10 @@ type Config struct { // will use virtual hosted bucket addressing when possible // (`http://BUCKET.s3.amazonaws.com/KEY`). // - // @note This configuration option is specific to the Amazon S3 service. - // @see http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html - // Amazon S3: Virtual Hosting of Buckets + // Note: This configuration option is specific to the Amazon S3 service. + // + // See http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html + // for Amazon S3: Virtual Hosting of Buckets S3ForcePathStyle *bool // Set this to `true` to disable the SDK adding the `Expect: 100-Continue` @@ -151,6 +152,9 @@ type Config struct { // with accelerate. S3UseAccelerate *bool + // S3DisableContentMD5Validation config option is temporarily disabled, + // For S3 GetObject API calls, #1837. + // // Set this to `true` to disable the S3 service client from automatically // adding the ContentMD5 to S3 Object Put and Upload API calls. This option // will also disable the SDK from performing object ContentMD5 validation @@ -220,6 +224,21 @@ type Config struct { // Key: aws.String("//foo//bar//moo"), // }) DisableRestProtocolURICleaning *bool + + // EnableEndpointDiscovery will allow for endpoint discovery on operations that + // have the definition in its model. By default, endpoint discovery is off. + // + // Example: + // sess := session.Must(session.NewSession(&aws.Config{ + // EnableEndpointDiscovery: aws.Bool(true), + // })) + // + // svc := s3.New(sess) + // out, err := svc.GetObject(&s3.GetObjectInput { + // Bucket: aws.String("bucketname"), + // Key: aws.String("/foo/bar/moo"), + // }) + EnableEndpointDiscovery *bool } // NewConfig returns a new Config pointer that can be chained with builder @@ -374,6 +393,12 @@ func (c *Config) WithSleepDelay(fn func(time.Duration)) *Config { return c } +// WithEndpointDiscovery will set whether or not to use endpoint discovery. +func (c *Config) WithEndpointDiscovery(t bool) *Config { + c.EnableEndpointDiscovery = &t + return c +} + // MergeIn merges the passed in configs into the existing config object. func (c *Config) MergeIn(cfgs ...*Config) { for _, other := range cfgs { @@ -473,6 +498,10 @@ func mergeInConfig(dst *Config, other *Config) { if other.EnforceShouldRetryCheck != nil { dst.EnforceShouldRetryCheck = other.EnforceShouldRetryCheck } + + if other.EnableEndpointDiscovery != nil { + dst.EnableEndpointDiscovery = other.EnableEndpointDiscovery + } } // Copy will return a shallow copy of the Config object. 
If any additional diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/chain_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/chain_provider.go index f298d659626..3ad1e798df8 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/chain_provider.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/chain_provider.go @@ -9,9 +9,7 @@ var ( // providers in the ChainProvider. // // This has been deprecated. For verbose error messaging set - // aws.Config.CredentialsChainVerboseErrors to true - // - // @readonly + // aws.Config.CredentialsChainVerboseErrors to true. ErrNoValidProvidersFoundInChain = awserr.New("NoCredentialProviders", `no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors`, diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go index 42416fc2f0f..dc82f4c3cfa 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go @@ -64,8 +64,6 @@ import ( // Credentials: credentials.AnonymousCredentials, // }))) // // Access public S3 buckets. -// -// @readonly var AnonymousCredentials = NewStaticCredentials("", "", "") // A Value is the AWS credentials value for individual credential fields. @@ -158,13 +156,14 @@ func (e *Expiry) SetExpiration(expiration time.Time, window time.Duration) { // IsExpired returns if the credentials are expired. func (e *Expiry) IsExpired() bool { - if e.CurrentTime == nil { - e.CurrentTime = time.Now + curTime := e.CurrentTime + if curTime == nil { + curTime = time.Now } - return e.expiration.Before(e.CurrentTime()) + return e.expiration.Before(curTime()) } -// A Credentials provides synchronous safe retrieval of AWS credentials Value. +// A Credentials provides concurrency safe retrieval of AWS credentials Value. // Credentials will cache the credentials value until they expire. Once the value // expires the next Get will attempt to retrieve valid credentials. // @@ -178,7 +177,8 @@ func (e *Expiry) IsExpired() bool { type Credentials struct { creds Value forceRefresh bool - m sync.Mutex + + m sync.RWMutex provider Provider } @@ -201,6 +201,17 @@ func NewCredentials(provider Provider) *Credentials { // If Credentials.Expire() was called the credentials Value will be force // expired, and the next call to Get() will cause them to be refreshed. func (c *Credentials) Get() (Value, error) { + // Check the cached credentials first with just the read lock. + c.m.RLock() + if !c.isExpired() { + creds := c.creds + c.m.RUnlock() + return creds, nil + } + c.m.RUnlock() + + // Credentials are expired need to retrieve the credentials taking the full + // lock. c.m.Lock() defer c.m.Unlock() @@ -234,8 +245,8 @@ func (c *Credentials) Expire() { // If the Credentials were forced to be expired with Expire() this will // reflect that override. 
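The `Get` change above adds a read-lock fast path: callers share an `RLock` while the cached credentials are still valid and only fall back to the full mutex when a refresh is actually needed. The same double-checked pattern in isolation, as a small illustrative sketch (the cache type and TTL here are made up, not SDK code):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cachedValue refreshes its payload at most once per TTL while letting
// concurrent readers share a read lock on the common path.
type cachedValue struct {
	mu      sync.RWMutex
	value   string
	expires time.Time
}

func (c *cachedValue) get(refresh func() string, ttl time.Duration) string {
	// Fast path: read lock only, as long as the cached value is valid.
	c.mu.RLock()
	if time.Now().Before(c.expires) {
		v := c.value
		c.mu.RUnlock()
		return v
	}
	c.mu.RUnlock()

	// Slow path: take the write lock and re-check, because another
	// goroutine may have refreshed the value while we waited.
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Now().Before(c.expires) {
		return c.value
	}
	c.value = refresh()
	c.expires = time.Now().Add(ttl)
	return c.value
}

func main() {
	c := &cachedValue{}
	fmt.Println(c.get(func() string { return "fresh" }, time.Minute))
}
```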
func (c *Credentials) IsExpired() bool { - c.m.Lock() - defer c.m.Unlock() + c.m.RLock() + defer c.m.RUnlock() return c.isExpired() } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go index c39749524ec..0ed791be641 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go @@ -4,7 +4,6 @@ import ( "bufio" "encoding/json" "fmt" - "path" "strings" "time" @@ -12,6 +11,7 @@ import ( "github.com/aws/aws-sdk-go/aws/client" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/ec2metadata" + "github.com/aws/aws-sdk-go/internal/sdkuri" ) // ProviderName provides a name of EC2Role provider @@ -125,7 +125,7 @@ type ec2RoleCredRespBody struct { Message string } -const iamSecurityCredsPath = "/iam/security-credentials" +const iamSecurityCredsPath = "iam/security-credentials/" // requestCredList requests a list of credentials from the EC2 service. // If there are no credentials, or there is an error making or receiving the request @@ -153,7 +153,7 @@ func requestCredList(client *ec2metadata.EC2Metadata) ([]string, error) { // If the credentials cannot be found, or there is an error reading the response // and error will be returned. func requestCred(client *ec2metadata.EC2Metadata, credsName string) (ec2RoleCredRespBody, error) { - resp, err := client.GetMetadata(path.Join(iamSecurityCredsPath, credsName)) + resp, err := client.GetMetadata(sdkuri.PathJoin(iamSecurityCredsPath, credsName)) if err != nil { return ec2RoleCredRespBody{}, awserr.New("EC2RoleRequestError", diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go index a4cec5c553a..ace51313820 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go @@ -65,6 +65,10 @@ type Provider struct { // // If ExpiryWindow is 0 or less it will be ignored. ExpiryWindow time.Duration + + // Optional authorization token value if set will be used as the value of + // the Authorization header of the endpoint credential request. + AuthorizationToken string } // NewProviderClient returns a credentials Provider for retrieving AWS credentials @@ -152,6 +156,9 @@ func (p *Provider) getCredentials() (*getCredentialsOutput, error) { out := &getCredentialsOutput{} req := p.Client.NewRequest(op, nil, out) req.HTTPRequest.Header.Set("Accept", "application/json") + if authToken := p.AuthorizationToken; len(authToken) != 0 { + req.HTTPRequest.Header.Set("Authorization", authToken) + } return out, req.Send() } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/env_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/env_provider.go index c14231a16f2..54c5cf7333f 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/env_provider.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/env_provider.go @@ -12,14 +12,10 @@ const EnvProviderName = "EnvProvider" var ( // ErrAccessKeyIDNotFound is returned when the AWS Access Key ID can't be // found in the process's environment. 
- // - // @readonly ErrAccessKeyIDNotFound = awserr.New("EnvAccessKeyNotFound", "AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment", nil) // ErrSecretAccessKeyNotFound is returned when the AWS Secret Access Key // can't be found in the process's environment. - // - // @readonly ErrSecretAccessKeyNotFound = awserr.New("EnvSecretNotFound", "AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY not found in environment", nil) ) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/shared_credentials_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/shared_credentials_provider.go index 51e21e0f38f..e1551495812 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/shared_credentials_provider.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/shared_credentials_provider.go @@ -4,9 +4,8 @@ import ( "fmt" "os" - "github.com/go-ini/ini" - "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/internal/ini" "github.com/aws/aws-sdk-go/internal/shareddefaults" ) @@ -77,36 +76,37 @@ func (p *SharedCredentialsProvider) IsExpired() bool { // The credentials retrieved from the profile will be returned or error. Error will be // returned if it fails to read from the file, or the data is invalid. func loadProfile(filename, profile string) (Value, error) { - config, err := ini.Load(filename) + config, err := ini.OpenFile(filename) if err != nil { return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsLoad", "failed to load shared credentials file", err) } - iniProfile, err := config.GetSection(profile) - if err != nil { - return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsLoad", "failed to get profile", err) + + iniProfile, ok := config.GetSection(profile) + if !ok { + return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsLoad", "failed to get profile", nil) } - id, err := iniProfile.GetKey("aws_access_key_id") - if err != nil { + id := iniProfile.String("aws_access_key_id") + if len(id) == 0 { return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsAccessKey", fmt.Sprintf("shared credentials %s in %s did not contain aws_access_key_id", profile, filename), - err) + nil) } - secret, err := iniProfile.GetKey("aws_secret_access_key") - if err != nil { + secret := iniProfile.String("aws_secret_access_key") + if len(secret) == 0 { return Value{ProviderName: SharedCredsProviderName}, awserr.New("SharedCredsSecret", fmt.Sprintf("shared credentials %s in %s did not contain aws_secret_access_key", profile, filename), nil) } // Default to empty string if not found - token := iniProfile.Key("aws_session_token") + token := iniProfile.String("aws_session_token") return Value{ - AccessKeyID: id.String(), - SecretAccessKey: secret.String(), - SessionToken: token.String(), + AccessKeyID: id, + SecretAccessKey: secret, + SessionToken: token, ProviderName: SharedCredsProviderName, }, nil } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/credentials/static_provider.go b/vendor/github.com/aws/aws-sdk-go/aws/credentials/static_provider.go index 4f5dab3fcc4..531139e3971 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/credentials/static_provider.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/credentials/static_provider.go @@ -9,8 +9,6 @@ const StaticProviderName = "StaticProvider" var ( // ErrStaticCredentialsEmpty is emitted when static credentials are empty. 
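The shared-credentials loader above now goes through the SDK's internal ini reader, but the way a consumer selects a profile is unchanged. A small, hypothetical sketch of pointing a session at a named profile (the file path, profile name, and region are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Read aws_access_key_id / aws_secret_access_key from a named profile.
	creds := credentials.NewSharedCredentials("/home/user/.aws/credentials", "my-profile")

	// Fail early if the profile is missing or incomplete.
	v, err := creds.Get()
	if err != nil {
		log.Fatalf("loading shared credentials: %s", err)
	}
	fmt.Println("credentials loaded from provider:", v.ProviderName)

	sess := session.Must(session.NewSession(&aws.Config{
		Region:      aws.String("us-east-1"),
		Credentials: creds,
	}))
	fmt.Println("session configured for region:", aws.StringValue(sess.Config.Region))
}
```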
- // - // @readonly ErrStaticCredentialsEmpty = awserr.New("EmptyStaticCreds", "static credentials are empty", nil) ) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/crr/cache.go b/vendor/github.com/aws/aws-sdk-go/aws/crr/cache.go new file mode 100644 index 00000000000..6b69556a4af --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/crr/cache.go @@ -0,0 +1,110 @@ +package crr + +import ( + "sync/atomic" +) + +// EndpointCache is an LRU cache that holds a series of endpoints +// based on some key. The datastructure makes use of a read write +// mutex to enable asynchronous use. +type EndpointCache struct { + endpoints syncMap + endpointLimit int64 + // size is used to count the number elements in the cache. + // The atomic package is used to ensure this size is accurate when + // using multiple goroutines. + size int64 +} + +// NewEndpointCache will return a newly initialized cache with a limit +// of endpointLimit entries. +func NewEndpointCache(endpointLimit int64) *EndpointCache { + return &EndpointCache{ + endpointLimit: endpointLimit, + endpoints: newSyncMap(), + } +} + +// get is a concurrent safe get operation that will retrieve an endpoint +// based on endpointKey. A boolean will also be returned to illustrate whether +// or not the endpoint had been found. +func (c *EndpointCache) get(endpointKey string) (Endpoint, bool) { + endpoint, ok := c.endpoints.Load(endpointKey) + if !ok { + return Endpoint{}, false + } + + c.endpoints.Store(endpointKey, endpoint) + return endpoint.(Endpoint), true +} + +// Get will retrieve a weighted address based off of the endpoint key. If an endpoint +// should be retrieved, due to not existing or the current endpoint has expired +// the Discoverer object that was passed in will attempt to discover a new endpoint +// and add that to the cache. +func (c *EndpointCache) Get(d Discoverer, endpointKey string, required bool) (WeightedAddress, error) { + var err error + endpoint, ok := c.get(endpointKey) + weighted, found := endpoint.GetValidAddress() + shouldGet := !ok || !found + + if required && shouldGet { + if endpoint, err = c.discover(d, endpointKey); err != nil { + return WeightedAddress{}, err + } + + weighted, _ = endpoint.GetValidAddress() + } else if shouldGet { + go c.discover(d, endpointKey) + } + + return weighted, nil +} + +// Add is a concurrent safe operation that will allow new endpoints to be added +// to the cache. If the cache is full, the number of endpoints equal endpointLimit, +// then this will remove the oldest entry before adding the new endpoint. +func (c *EndpointCache) Add(endpoint Endpoint) { + // de-dups multiple adds of an endpoint with a pre-existing key + if iface, ok := c.endpoints.Load(endpoint.Key); ok { + e := iface.(Endpoint) + if e.Len() > 0 { + return + } + } + c.endpoints.Store(endpoint.Key, endpoint) + + size := atomic.AddInt64(&c.size, 1) + if size > 0 && size > c.endpointLimit { + c.deleteRandomKey() + } +} + +// deleteRandomKey will delete a random key from the cache. If +// no key was deleted false will be returned. +func (c *EndpointCache) deleteRandomKey() bool { + atomic.AddInt64(&c.size, -1) + found := false + + c.endpoints.Range(func(key, value interface{}) bool { + found = true + c.endpoints.Delete(key) + + return false + }) + + return found +} + +// discover will get and store and endpoint using the Discoverer. 
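+//
+// discover is driven by the exported Get above; a typical caller uses the
+// cache roughly as follows (an illustrative sketch, where d is any value
+// implementing the Discoverer interface and endpointKey is caller supplied):
+//
+//    cache := crr.NewEndpointCache(10)
+//    addr, err := cache.Get(d, endpointKey, true)
+//    // addr.URL, when non-nil, is a non-expired discovered address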
+func (c *EndpointCache) discover(d Discoverer, endpointKey string) (Endpoint, error) { + endpoint, err := d.Discover() + if err != nil { + return Endpoint{}, err + } + + endpoint.Key = endpointKey + c.Add(endpoint) + + return endpoint, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/crr/endpoint.go b/vendor/github.com/aws/aws-sdk-go/aws/crr/endpoint.go new file mode 100644 index 00000000000..d5599188e06 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/crr/endpoint.go @@ -0,0 +1,99 @@ +package crr + +import ( + "net/url" + "sort" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" +) + +// Endpoint represents an endpoint used in endpoint discovery. +type Endpoint struct { + Key string + Addresses WeightedAddresses +} + +// WeightedAddresses represents a list of WeightedAddress. +type WeightedAddresses []WeightedAddress + +// WeightedAddress represents an address with a given weight. +type WeightedAddress struct { + URL *url.URL + Expired time.Time +} + +// HasExpired will return whether or not the endpoint has expired with +// the exception of a zero expiry meaning does not expire. +func (e WeightedAddress) HasExpired() bool { + return e.Expired.Before(time.Now()) +} + +// Add will add a given WeightedAddress to the address list of Endpoint. +func (e *Endpoint) Add(addr WeightedAddress) { + e.Addresses = append(e.Addresses, addr) +} + +// Len returns the number of valid endpoints where valid means the endpoint +// has not expired. +func (e *Endpoint) Len() int { + validEndpoints := 0 + for _, endpoint := range e.Addresses { + if endpoint.HasExpired() { + continue + } + + validEndpoints++ + } + return validEndpoints +} + +// GetValidAddress will return a non-expired weight endpoint +func (e *Endpoint) GetValidAddress() (WeightedAddress, bool) { + for i := 0; i < len(e.Addresses); i++ { + we := e.Addresses[i] + + if we.HasExpired() { + e.Addresses = append(e.Addresses[:i], e.Addresses[i+1:]...) + i-- + continue + } + + return we, true + } + + return WeightedAddress{}, false +} + +// Discoverer is an interface used to discovery which endpoint hit. This +// allows for specifics about what parameters need to be used to be contained +// in the Discoverer implementor. +type Discoverer interface { + Discover() (Endpoint, error) +} + +// BuildEndpointKey will sort the keys in alphabetical order and then retrieve +// the values in that order. Those values are then concatenated together to form +// the endpoint key. 
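+//
+// For example (an illustrative sketch; the parameter names are hypothetical),
+// the keys are sorted alphabetically and only the values are joined with ".":
+//
+//    key := crr.BuildEndpointKey(map[string]*string{
+//        "TableName": aws.String("foo"),
+//        "Operation": aws.String("DescribeTable"),
+//    })
+//    // key == "DescribeTable.foo"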
+func BuildEndpointKey(params map[string]*string) string { + keys := make([]string, len(params)) + i := 0 + + for k := range params { + keys[i] = k + i++ + } + sort.Strings(keys) + + values := make([]string, len(params)) + for i, k := range keys { + if params[k] == nil { + continue + } + + values[i] = aws.StringValue(params[k]) + } + + return strings.Join(values, ".") +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/crr/sync_map.go b/vendor/github.com/aws/aws-sdk-go/aws/crr/sync_map.go new file mode 100644 index 00000000000..e414eaace28 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/crr/sync_map.go @@ -0,0 +1,29 @@ +// +build go1.9 + +package crr + +import ( + "sync" +) + +type syncMap sync.Map + +func newSyncMap() syncMap { + return syncMap{} +} + +func (m *syncMap) Load(key interface{}) (interface{}, bool) { + return (*sync.Map)(m).Load(key) +} + +func (m *syncMap) Store(key interface{}, value interface{}) { + (*sync.Map)(m).Store(key, value) +} + +func (m *syncMap) Delete(key interface{}) { + (*sync.Map)(m).Delete(key) +} + +func (m *syncMap) Range(f func(interface{}, interface{}) bool) { + (*sync.Map)(m).Range(f) +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/crr/sync_map_1_8.go b/vendor/github.com/aws/aws-sdk-go/aws/crr/sync_map_1_8.go new file mode 100644 index 00000000000..e0b12200855 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/crr/sync_map_1_8.go @@ -0,0 +1,48 @@ +// +build !go1.9 + +package crr + +import ( + "sync" +) + +type syncMap struct { + container map[interface{}]interface{} + lock sync.RWMutex +} + +func newSyncMap() syncMap { + return syncMap{ + container: map[interface{}]interface{}{}, + } +} + +func (m *syncMap) Load(key interface{}) (interface{}, bool) { + m.lock.RLock() + defer m.lock.RUnlock() + + v, ok := m.container[key] + return v, ok +} + +func (m *syncMap) Store(key interface{}, value interface{}) { + m.lock.Lock() + defer m.lock.Unlock() + + m.container[key] = value +} + +func (m *syncMap) Delete(key interface{}) { + m.lock.Lock() + defer m.lock.Unlock() + + delete(m.container, key) +} + +func (m *syncMap) Range(f func(interface{}, interface{}) bool) { + for k, v := range m.container { + if !f(k, v) { + return + } + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/csm/doc.go b/vendor/github.com/aws/aws-sdk-go/aws/csm/doc.go new file mode 100644 index 00000000000..152d785b362 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/csm/doc.go @@ -0,0 +1,46 @@ +// Package csm provides Client Side Monitoring (CSM) which enables sending metrics +// via UDP connection. Using the Start function will enable the reporting of +// metrics on a given port. If Start is called, with different parameters, again, +// a panic will occur. +// +// Pause can be called to pause any metrics publishing on a given port. Sessions +// that have had their handlers modified via InjectHandlers may still be used. +// However, the handlers will act as a no-op meaning no metrics will be published. 
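+//
+// Metrics are serialized to JSON and written as datagrams to the UDP address
+// passed to Start.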
+// +// Example: +// r, err := csm.Start("clientID", ":31000") +// if err != nil { +// panic(fmt.Errorf("failed starting CSM: %v", err)) +// } +// +// sess, err := session.NewSession(&aws.Config{}) +// if err != nil { +// panic(fmt.Errorf("failed loading session: %v", err)) +// } +// +// r.InjectHandlers(&sess.Handlers) +// +// client := s3.New(sess) +// resp, err := client.GetObject(&s3.GetObjectInput{ +// Bucket: aws.String("bucket"), +// Key: aws.String("key"), +// }) +// +// // Will pause monitoring +// r.Pause() +// resp, err = client.GetObject(&s3.GetObjectInput{ +// Bucket: aws.String("bucket"), +// Key: aws.String("key"), +// }) +// +// // Resume monitoring +// r.Continue() +// +// Start returns a Reporter that is used to enable or disable monitoring. If +// access to the Reporter is required later, calling Get will return the Reporter +// singleton. +// +// Example: +// r := csm.Get() +// r.Continue() +package csm diff --git a/vendor/github.com/aws/aws-sdk-go/aws/csm/enable.go b/vendor/github.com/aws/aws-sdk-go/aws/csm/enable.go new file mode 100644 index 00000000000..2f0c6eac9a8 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/csm/enable.go @@ -0,0 +1,67 @@ +package csm + +import ( + "fmt" + "sync" +) + +var ( + lock sync.Mutex +) + +// Client side metric handler names +const ( + APICallMetricHandlerName = "awscsm.SendAPICallMetric" + APICallAttemptMetricHandlerName = "awscsm.SendAPICallAttemptMetric" +) + +// Start will start the a long running go routine to capture +// client side metrics. Calling start multiple time will only +// start the metric listener once and will panic if a different +// client ID or port is passed in. +// +// Example: +// r, err := csm.Start("clientID", "127.0.0.1:8094") +// if err != nil { +// panic(fmt.Errorf("expected no error, but received %v", err)) +// } +// sess := session.NewSession() +// r.InjectHandlers(sess.Handlers) +// +// svc := s3.New(sess) +// out, err := svc.GetObject(&s3.GetObjectInput{ +// Bucket: aws.String("bucket"), +// Key: aws.String("key"), +// }) +func Start(clientID string, url string) (*Reporter, error) { + lock.Lock() + defer lock.Unlock() + + if sender == nil { + sender = newReporter(clientID, url) + } else { + if sender.clientID != clientID { + panic(fmt.Errorf("inconsistent client IDs. %q was expected, but received %q", sender.clientID, clientID)) + } + + if sender.url != url { + panic(fmt.Errorf("inconsistent URLs. %q was expected, but received %q", sender.url, url)) + } + } + + if err := connect(url); err != nil { + sender = nil + return nil, err + } + + return sender, nil +} + +// Get will return a reporter if one exists, if one does not exist, nil will +// be returned. 
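+//
+// Example (a minimal sketch, assuming Start was called earlier in the process):
+//    if r := csm.Get(); r != nil {
+//        r.Pause()
+//    }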
+func Get() *Reporter { + lock.Lock() + defer lock.Unlock() + + return sender +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/csm/metric.go b/vendor/github.com/aws/aws-sdk-go/aws/csm/metric.go new file mode 100644 index 00000000000..6f57024d743 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/csm/metric.go @@ -0,0 +1,53 @@ +package csm + +import ( + "strconv" + "time" +) + +type metricTime time.Time + +func (t metricTime) MarshalJSON() ([]byte, error) { + ns := time.Duration(time.Time(t).UnixNano()) + return []byte(strconv.FormatInt(int64(ns/time.Millisecond), 10)), nil +} + +type metric struct { + ClientID *string `json:"ClientId,omitempty"` + API *string `json:"Api,omitempty"` + Service *string `json:"Service,omitempty"` + Timestamp *metricTime `json:"Timestamp,omitempty"` + Type *string `json:"Type,omitempty"` + Version *int `json:"Version,omitempty"` + + AttemptCount *int `json:"AttemptCount,omitempty"` + Latency *int `json:"Latency,omitempty"` + + Fqdn *string `json:"Fqdn,omitempty"` + UserAgent *string `json:"UserAgent,omitempty"` + AttemptLatency *int `json:"AttemptLatency,omitempty"` + + SessionToken *string `json:"SessionToken,omitempty"` + Region *string `json:"Region,omitempty"` + AccessKey *string `json:"AccessKey,omitempty"` + HTTPStatusCode *int `json:"HttpStatusCode,omitempty"` + XAmzID2 *string `json:"XAmzId2,omitempty"` + XAmzRequestID *string `json:"XAmznRequestId,omitempty"` + + AWSException *string `json:"AwsException,omitempty"` + AWSExceptionMessage *string `json:"AwsExceptionMessage,omitempty"` + SDKException *string `json:"SdkException,omitempty"` + SDKExceptionMessage *string `json:"SdkExceptionMessage,omitempty"` + + DestinationIP *string `json:"DestinationIp,omitempty"` + ConnectionReused *int `json:"ConnectionReused,omitempty"` + + AcquireConnectionLatency *int `json:"AcquireConnectionLatency,omitempty"` + ConnectLatency *int `json:"ConnectLatency,omitempty"` + RequestLatency *int `json:"RequestLatency,omitempty"` + DNSLatency *int `json:"DnsLatency,omitempty"` + TCPLatency *int `json:"TcpLatency,omitempty"` + SSLLatency *int `json:"SslLatency,omitempty"` + + MaxRetriesExceeded *int `json:"MaxRetriesExceeded,omitempty"` +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/csm/metric_chan.go b/vendor/github.com/aws/aws-sdk-go/aws/csm/metric_chan.go new file mode 100644 index 00000000000..514fc3739a5 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/csm/metric_chan.go @@ -0,0 +1,54 @@ +package csm + +import ( + "sync/atomic" +) + +const ( + runningEnum = iota + pausedEnum +) + +var ( + // MetricsChannelSize of metrics to hold in the channel + MetricsChannelSize = 100 +) + +type metricChan struct { + ch chan metric + paused int64 +} + +func newMetricChan(size int) metricChan { + return metricChan{ + ch: make(chan metric, size), + } +} + +func (ch *metricChan) Pause() { + atomic.StoreInt64(&ch.paused, pausedEnum) +} + +func (ch *metricChan) Continue() { + atomic.StoreInt64(&ch.paused, runningEnum) +} + +func (ch *metricChan) IsPaused() bool { + v := atomic.LoadInt64(&ch.paused) + return v == pausedEnum +} + +// Push will push metrics to the metric channel if the channel +// is not paused +func (ch *metricChan) Push(m metric) bool { + if ch.IsPaused() { + return false + } + + select { + case ch.ch <- m: + return true + default: + return false + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/csm/reporter.go b/vendor/github.com/aws/aws-sdk-go/aws/csm/reporter.go new file mode 
100644 index 00000000000..11861844246 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/aws/csm/reporter.go @@ -0,0 +1,242 @@ +package csm + +import ( + "encoding/json" + "net" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" +) + +const ( + // DefaultPort is used when no port is specified + DefaultPort = "31000" +) + +// Reporter will gather metrics of API requests made and +// send those metrics to the CSM endpoint. +type Reporter struct { + clientID string + url string + conn net.Conn + metricsCh metricChan + done chan struct{} +} + +var ( + sender *Reporter +) + +func connect(url string) error { + const network = "udp" + if err := sender.connect(network, url); err != nil { + return err + } + + if sender.done == nil { + sender.done = make(chan struct{}) + go sender.start() + } + + return nil +} + +func newReporter(clientID, url string) *Reporter { + return &Reporter{ + clientID: clientID, + url: url, + metricsCh: newMetricChan(MetricsChannelSize), + } +} + +func (rep *Reporter) sendAPICallAttemptMetric(r *request.Request) { + if rep == nil { + return + } + + now := time.Now() + creds, _ := r.Config.Credentials.Get() + + m := metric{ + ClientID: aws.String(rep.clientID), + API: aws.String(r.Operation.Name), + Service: aws.String(r.ClientInfo.ServiceID), + Timestamp: (*metricTime)(&now), + UserAgent: aws.String(r.HTTPRequest.Header.Get("User-Agent")), + Region: r.Config.Region, + Type: aws.String("ApiCallAttempt"), + Version: aws.Int(1), + + XAmzRequestID: aws.String(r.RequestID), + + AttemptCount: aws.Int(r.RetryCount + 1), + AttemptLatency: aws.Int(int(now.Sub(r.AttemptTime).Nanoseconds() / int64(time.Millisecond))), + AccessKey: aws.String(creds.AccessKeyID), + } + + if r.HTTPResponse != nil { + m.HTTPStatusCode = aws.Int(r.HTTPResponse.StatusCode) + } + + if r.Error != nil { + if awserr, ok := r.Error.(awserr.Error); ok { + setError(&m, awserr) + } + } + + rep.metricsCh.Push(m) +} + +func setError(m *metric, err awserr.Error) { + msg := err.Error() + code := err.Code() + + switch code { + case "RequestError", + "SerializationError", + request.CanceledErrorCode: + m.SDKException = &code + m.SDKExceptionMessage = &msg + default: + m.AWSException = &code + m.AWSExceptionMessage = &msg + } +} + +func (rep *Reporter) sendAPICallMetric(r *request.Request) { + if rep == nil { + return + } + + now := time.Now() + m := metric{ + ClientID: aws.String(rep.clientID), + API: aws.String(r.Operation.Name), + Service: aws.String(r.ClientInfo.ServiceID), + Timestamp: (*metricTime)(&now), + Type: aws.String("ApiCall"), + AttemptCount: aws.Int(r.RetryCount + 1), + Region: r.Config.Region, + Latency: aws.Int(int(time.Now().Sub(r.Time) / time.Millisecond)), + XAmzRequestID: aws.String(r.RequestID), + MaxRetriesExceeded: aws.Int(boolIntValue(r.RetryCount >= r.MaxRetries())), + } + + // TODO: Probably want to figure something out for logging dropped + // metrics + rep.metricsCh.Push(m) +} + +func (rep *Reporter) connect(network, url string) error { + if rep.conn != nil { + rep.conn.Close() + } + + conn, err := net.Dial(network, url) + if err != nil { + return awserr.New("UDPError", "Could not connect", err) + } + + rep.conn = conn + + return nil +} + +func (rep *Reporter) close() { + if rep.done != nil { + close(rep.done) + } + + rep.metricsCh.Pause() +} + +func (rep *Reporter) start() { + defer func() { + rep.metricsCh.Pause() + }() + + for { + select { + case <-rep.done: + rep.done = nil + 
return + case m := <-rep.metricsCh.ch: + // TODO: What to do with this error? Probably should just log + b, err := json.Marshal(m) + if err != nil { + continue + } + + rep.conn.Write(b) + } + } +} + +// Pause will pause the metric channel preventing any new metrics from +// being added. +func (rep *Reporter) Pause() { + lock.Lock() + defer lock.Unlock() + + if rep == nil { + return + } + + rep.close() +} + +// Continue will reopen the metric channel and allow for monitoring +// to be resumed. +func (rep *Reporter) Continue() { + lock.Lock() + defer lock.Unlock() + if rep == nil { + return + } + + if !rep.metricsCh.IsPaused() { + return + } + + rep.metricsCh.Continue() +} + +// InjectHandlers will will enable client side metrics and inject the proper +// handlers to handle how metrics are sent. +// +// Example: +// // Start must be called in order to inject the correct handlers +// r, err := csm.Start("clientID", "127.0.0.1:8094") +// if err != nil { +// panic(fmt.Errorf("expected no error, but received %v", err)) +// } +// +// sess := session.NewSession() +// r.InjectHandlers(&sess.Handlers) +// +// // create a new service client with our client side metric session +// svc := s3.New(sess) +func (rep *Reporter) InjectHandlers(handlers *request.Handlers) { + if rep == nil { + return + } + + apiCallHandler := request.NamedHandler{Name: APICallMetricHandlerName, Fn: rep.sendAPICallMetric} + apiCallAttemptHandler := request.NamedHandler{Name: APICallAttemptMetricHandlerName, Fn: rep.sendAPICallAttemptMetric} + + handlers.Complete.PushFrontNamed(apiCallHandler) + handlers.Complete.PushFrontNamed(apiCallAttemptHandler) + + handlers.AfterRetry.PushFrontNamed(apiCallAttemptHandler) +} + +// boolIntValue return 1 for true and 0 for false. +func boolIntValue(b bool) int { + if b { + return 1 + } + + return 0 +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go index 3cf1036b625..23bb639e018 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go @@ -24,6 +24,7 @@ import ( "github.com/aws/aws-sdk-go/aws/ec2metadata" "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/internal/shareddefaults" ) // A Defaults provides a collection of default values for SDK clients. @@ -92,17 +93,28 @@ func Handlers() request.Handlers { func CredChain(cfg *aws.Config, handlers request.Handlers) *credentials.Credentials { return credentials.NewCredentials(&credentials.ChainProvider{ VerboseErrors: aws.BoolValue(cfg.CredentialsChainVerboseErrors), - Providers: []credentials.Provider{ - &credentials.EnvProvider{}, - &credentials.SharedCredentialsProvider{Filename: "", Profile: ""}, - RemoteCredProvider(*cfg, handlers), - }, + Providers: CredProviders(cfg, handlers), }) } +// CredProviders returns the slice of providers used in +// the default credential chain. +// +// For applications that need to use some other provider (for example use +// different environment variables for legacy reasons) but still fall back +// on the default chain of providers. 
This allows that default chaint to be +// automatically updated +func CredProviders(cfg *aws.Config, handlers request.Handlers) []credentials.Provider { + return []credentials.Provider{ + &credentials.EnvProvider{}, + &credentials.SharedCredentialsProvider{Filename: "", Profile: ""}, + RemoteCredProvider(*cfg, handlers), + } +} + const ( - httpProviderEnvVar = "AWS_CONTAINER_CREDENTIALS_FULL_URI" - ecsCredsProviderEnvVar = "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" + httpProviderAuthorizationEnvVar = "AWS_CONTAINER_AUTHORIZATION_TOKEN" + httpProviderEnvVar = "AWS_CONTAINER_CREDENTIALS_FULL_URI" ) // RemoteCredProvider returns a credentials provider for the default remote @@ -112,8 +124,8 @@ func RemoteCredProvider(cfg aws.Config, handlers request.Handlers) credentials.P return localHTTPCredProvider(cfg, handlers, u) } - if uri := os.Getenv(ecsCredsProviderEnvVar); len(uri) > 0 { - u := fmt.Sprintf("http://169.254.170.2%s", uri) + if uri := os.Getenv(shareddefaults.ECSCredsProviderEnvVar); len(uri) > 0 { + u := fmt.Sprintf("%s%s", shareddefaults.ECSContainerCredentialsURI, uri) return httpCredProvider(cfg, handlers, u) } @@ -176,6 +188,7 @@ func httpCredProvider(cfg aws.Config, handlers request.Handlers, u string) crede return endpointcreds.NewProviderClient(cfg, handlers, u, func(p *endpointcreds.Provider) { p.ExpiryWindow = 5 * time.Minute + p.AuthorizationToken = os.Getenv(httpProviderAuthorizationEnvVar) }, ) } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go index 984407a580f..c215cd3f599 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/api.go @@ -4,12 +4,12 @@ import ( "encoding/json" "fmt" "net/http" - "path" "strings" "time" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/internal/sdkuri" ) // GetMetadata uses the path provided to request information from the EC2 @@ -19,7 +19,7 @@ func (c *EC2Metadata) GetMetadata(p string) (string, error) { op := &request.Operation{ Name: "GetMetadata", HTTPMethod: "GET", - HTTPPath: path.Join("/", "meta-data", p), + HTTPPath: sdkuri.PathJoin("/meta-data", p), } output := &metadataOutput{} @@ -35,7 +35,7 @@ func (c *EC2Metadata) GetUserData() (string, error) { op := &request.Operation{ Name: "GetUserData", HTTPMethod: "GET", - HTTPPath: path.Join("/", "user-data"), + HTTPPath: "/user-data", } output := &metadataOutput{} @@ -56,7 +56,7 @@ func (c *EC2Metadata) GetDynamicData(p string) (string, error) { op := &request.Operation{ Name: "GetDynamicData", HTTPMethod: "GET", - HTTPPath: path.Join("/", "dynamic", p), + HTTPPath: sdkuri.PathJoin("/dynamic", p), } output := &metadataOutput{} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go index ef5f73292ba..53457cac368 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/ec2metadata/service.go @@ -72,6 +72,7 @@ func NewClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceName, Endpoint: endpoint, APIVersion: "latest", }, diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go index 
83505ad1bcf..1ddeae10198 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go @@ -4,7 +4,6 @@ import ( "encoding/json" "fmt" "io" - "os" "github.com/aws/aws-sdk-go/aws/awserr" ) @@ -85,41 +84,23 @@ func decodeV3Endpoints(modelDef modelDefinition, opts DecodeModelOptions) (Resol custAddEC2Metadata(p) custAddS3DualStack(p) custRmIotDataService(p) - - custFixCloudHSMv2SigningName(p) - custFixRuntimeSagemakerSigningName(p) + custFixAppAutoscalingChina(p) } return ps, nil } -func custFixCloudHSMv2SigningName(p *partition) { - // Workaround for aws/aws-sdk-go#1745 until the endpoint model can be - // fixed upstream. TODO remove this once the endpoints model is updated. - - s, ok := p.Services["cloudhsmv2"] - if !ok { - return - } - - if len(s.Defaults.CredentialScope.Service) != 0 { - fmt.Fprintf(os.Stderr, "cloudhsmv2 signing name already set, ignoring override.\n") - // If the value is already set don't override - return - } - - s.Defaults.CredentialScope.Service = "cloudhsm" - fmt.Fprintf(os.Stderr, "cloudhsmv2 signing name not set, overriding.\n") - - p.Services["cloudhsmv2"] = s -} - func custAddS3DualStack(p *partition) { if p.ID != "aws" { return } - s, ok := p.Services["s3"] + custAddDualstack(p, "s3") + custAddDualstack(p, "s3-control") +} + +func custAddDualstack(p *partition, svcName string) { + s, ok := p.Services[svcName] if !ok { return } @@ -127,7 +108,7 @@ func custAddS3DualStack(p *partition) { s.Defaults.HasDualStack = boxedTrue s.Defaults.DualStackHostname = "{service}.dualstack.{region}.{dnsSuffix}" - p.Services["s3"] = s + p.Services[svcName] = s } func custAddEC2Metadata(p *partition) { @@ -147,24 +128,25 @@ func custRmIotDataService(p *partition) { delete(p.Services, "data.iot") } -func custFixRuntimeSagemakerSigningName(p *partition) { - // Workaround for aws/aws-sdk-go#1836 +func custFixAppAutoscalingChina(p *partition) { + if p.ID != "aws-cn" { + return + } - s, ok := p.Services["runtime.sagemaker"] + const serviceName = "application-autoscaling" + s, ok := p.Services[serviceName] if !ok { return } - if len(s.Defaults.CredentialScope.Service) != 0 { - fmt.Fprintf(os.Stderr, "runtime.sagemaker signing name already set, ignoring override.\n") - // If the value is already set don't override + const expectHostname = `autoscaling.{region}.amazonaws.com` + if e, a := s.Defaults.Hostname, expectHostname; e != a { + fmt.Printf("custFixAppAutoscalingChina: ignoring customization, expected %s, got %s\n", e, a) return } - s.Defaults.CredentialScope.Service = "sagemaker" - fmt.Fprintf(os.Stderr, "sagemaker signing name not set, overriding.\n") - - p.Services["runtime.sagemaker"] = s + s.Defaults.Hostname = expectHostname + ".cn" + p.Services[serviceName] = s } type decodeModelError struct { diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go index 17a3c01499f..c01c90185c4 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -40,6 +40,7 @@ const ( // AWS GovCloud (US) partition's regions. const ( + UsGovEast1RegionID = "us-gov-east-1" // AWS GovCloud (US-East). UsGovWest1RegionID = "us-gov-west-1" // AWS GovCloud (US). ) @@ -47,16 +48,21 @@ const ( const ( A4bServiceID = "a4b" // A4b. AcmServiceID = "acm" // Acm. + AcmPcaServiceID = "acm-pca" // AcmPca. 
+ ApiMediatailorServiceID = "api.mediatailor" // ApiMediatailor. ApiPricingServiceID = "api.pricing" // ApiPricing. + ApiSagemakerServiceID = "api.sagemaker" // ApiSagemaker. ApigatewayServiceID = "apigateway" // Apigateway. ApplicationAutoscalingServiceID = "application-autoscaling" // ApplicationAutoscaling. Appstream2ServiceID = "appstream2" // Appstream2. + AppsyncServiceID = "appsync" // Appsync. AthenaServiceID = "athena" // Athena. AutoscalingServiceID = "autoscaling" // Autoscaling. AutoscalingPlansServiceID = "autoscaling-plans" // AutoscalingPlans. BatchServiceID = "batch" // Batch. BudgetsServiceID = "budgets" // Budgets. CeServiceID = "ce" // Ce. + ChimeServiceID = "chime" // Chime. Cloud9ServiceID = "cloud9" // Cloud9. ClouddirectoryServiceID = "clouddirectory" // Clouddirectory. CloudformationServiceID = "cloudformation" // Cloudformation. @@ -99,6 +105,7 @@ const ( EsServiceID = "es" // Es. EventsServiceID = "events" // Events. FirehoseServiceID = "firehose" // Firehose. + FmsServiceID = "fms" // Fms. GameliftServiceID = "gamelift" // Gamelift. GlacierServiceID = "glacier" // Glacier. GlueServiceID = "glue" // Glue. @@ -109,6 +116,7 @@ const ( ImportexportServiceID = "importexport" // Importexport. InspectorServiceID = "inspector" // Inspector. IotServiceID = "iot" // Iot. + IotanalyticsServiceID = "iotanalytics" // Iotanalytics. KinesisServiceID = "kinesis" // Kinesis. KinesisanalyticsServiceID = "kinesisanalytics" // Kinesisanalytics. KinesisvideoServiceID = "kinesisvideo" // Kinesisvideo. @@ -121,12 +129,14 @@ const ( MediaconvertServiceID = "mediaconvert" // Mediaconvert. MedialiveServiceID = "medialive" // Medialive. MediapackageServiceID = "mediapackage" // Mediapackage. + MediastoreServiceID = "mediastore" // Mediastore. MeteringMarketplaceServiceID = "metering.marketplace" // MeteringMarketplace. MghServiceID = "mgh" // Mgh. MobileanalyticsServiceID = "mobileanalytics" // Mobileanalytics. ModelsLexServiceID = "models.lex" // ModelsLex. MonitoringServiceID = "monitoring" // Monitoring. MturkRequesterServiceID = "mturk-requester" // MturkRequester. + NeptuneServiceID = "neptune" // Neptune. OpsworksServiceID = "opsworks" // Opsworks. OpsworksCmServiceID = "opsworks-cm" // OpsworksCm. OrganizationsServiceID = "organizations" // Organizations. @@ -141,8 +151,9 @@ const ( RuntimeLexServiceID = "runtime.lex" // RuntimeLex. RuntimeSagemakerServiceID = "runtime.sagemaker" // RuntimeSagemaker. S3ServiceID = "s3" // S3. - SagemakerServiceID = "sagemaker" // Sagemaker. + S3ControlServiceID = "s3-control" // S3Control. SdbServiceID = "sdb" // Sdb. + SecretsmanagerServiceID = "secretsmanager" // Secretsmanager. ServerlessrepoServiceID = "serverlessrepo" // Serverlessrepo. ServicecatalogServiceID = "servicecatalog" // Servicecatalog. ServicediscoveryServiceID = "servicediscovery" // Servicediscovery. 
@@ -287,6 +298,33 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "acm-pca": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "api.mediatailor": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + }, + }, "api.pricing": service{ Defaults: endpoint{ CredentialScope: credentialScope{ @@ -298,6 +336,24 @@ var awsPartition = partition{ "us-east-1": endpoint{}, }, }, + "api.sagemaker": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "apigateway": service{ Endpoints: endpoints{ @@ -358,14 +414,32 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "appsync": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "athena": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-2": endpoint{}, @@ -414,12 +488,14 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -452,6 +528,23 @@ var awsPartition = partition{ }, }, }, + "chime": service{ + PartitionEndpoint: "aws-global", + IsRegionalized: boxedFalse, + Defaults: endpoint{ + SSLCommonName: "service.chime.aws.amazon.com", + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "aws-global": endpoint{ + Hostname: "service.chime.aws.amazon.com", + Protocols: []string{"https"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + }, + }, "cloud9": service{ Endpoints: endpoints{ @@ -532,12 +625,15 @@ var awsPartition = partition{ }, Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -584,16 +680,43 @@ var awsPartition = partition{ 
Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "codebuild-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "codebuild-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "codebuild-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "codebuild-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, "codecommit": service{ @@ -608,11 +731,18 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips": endpoint{ + Hostname: "codecommit-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "codedeploy": service{ @@ -630,9 +760,33 @@ var awsPartition = partition{ "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "codedeploy-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "codedeploy-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "codedeploy-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "codedeploy-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, "codepipeline": service{ @@ -680,6 +834,7 @@ var awsPartition = partition{ "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, @@ -696,6 +851,7 @@ var awsPartition = partition{ "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, @@ -725,10 +881,11 @@ var awsPartition = partition{ Protocols: []string{"https"}, }, Endpoints: endpoints{ - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, 
"config": service{ @@ -777,6 +934,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, + "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -967,11 +1125,17 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "fips": endpoint{ + Hostname: "elasticache-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "elasticbeanstalk": service{ @@ -997,11 +1161,15 @@ var awsPartition = partition{ "elasticfilesystem": service{ Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, @@ -1030,7 +1198,7 @@ var awsPartition = partition{ "elasticmapreduce": service{ Defaults: endpoint{ SSLCommonName: "{region}.{service}.{dnsSuffix}", - Protocols: []string{"http", "https"}, + Protocols: []string{"https"}, }, Endpoints: endpoints{ "ap-northeast-1": endpoint{}, @@ -1129,16 +1297,32 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, + "fms": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "gamelift": service{ Endpoints: endpoints{ @@ -1173,6 +1357,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -1183,7 +1368,14 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-2": endpoint{}, @@ -1198,6 +1390,7 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, "us-east-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -1217,6 +1410,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1282,6 +1476,7 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, @@ 
-1292,6 +1487,17 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "iotanalytics": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "kinesis": service{ Endpoints: endpoints{ @@ -1315,9 +1521,10 @@ var awsPartition = partition{ "kinesisanalytics": service{ Endpoints: endpoints{ - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-west-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "kinesisvideo": service{ @@ -1374,12 +1581,15 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-2": endpoint{}, @@ -1422,12 +1632,17 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, + "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -1436,9 +1651,13 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -1447,11 +1666,27 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, + "mediastore": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, "us-east-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -1501,6 +1736,7 @@ var awsPartition = partition{ Endpoints: endpoints{ "eu-west-1": endpoint{}, "us-east-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "monitoring": service{ @@ -1535,17 +1771,58 @@ var awsPartition = partition{ "us-east-1": endpoint{}, }, }, - "opsworks": service{ + "neptune": service{ Endpoints: endpoints{ - "ap-northeast-1": endpoint{}, - "ap-northeast-2": endpoint{}, - "ap-south-1": endpoint{}, - "ap-southeast-1": endpoint{}, - "ap-southeast-2": endpoint{}, - "ca-central-1": endpoint{}, - "eu-central-1": endpoint{}, - "eu-west-1": endpoint{}, + "eu-central-1": endpoint{ + Hostname: "rds.eu-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-central-1", + }, + }, + "eu-west-1": endpoint{ + Hostname: "rds.eu-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, + }, + "eu-west-2": endpoint{ + Hostname: "rds.eu-west-2.amazonaws.com", + 
CredentialScope: credentialScope{ + Region: "eu-west-2", + }, + }, + "us-east-1": endpoint{ + Hostname: "rds.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{ + Hostname: "rds.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-2": endpoint{ + Hostname: "rds.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, + "opsworks": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, @@ -1589,6 +1866,7 @@ var awsPartition = partition{ }, }, Endpoints: endpoints{ + "eu-west-1": endpoint{}, "us-east-1": endpoint{}, }, }, @@ -1677,6 +1955,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1712,19 +1991,25 @@ var awsPartition = partition{ Endpoints: endpoints{ "eu-west-1": endpoint{}, "us-east-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "runtime.sagemaker": service{ - Defaults: endpoint{ - CredentialScope: credentialScope{ - Service: "sagemaker", - }, - }, + Endpoints: endpoints{ - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "s3": service{ @@ -1786,13 +2071,148 @@ var awsPartition = partition{ }, }, }, - "sagemaker": service{ + "s3-control": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + SignatureVersions: []string{"s3v4"}, + HasDualStack: boxedTrue, + DualStackHostname: "{service}.dualstack.{region}.{dnsSuffix}", + }, Endpoints: endpoints{ - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "ap-northeast-1": endpoint{ + Hostname: "s3-control.ap-northeast-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ap-northeast-1", + }, + }, + "ap-northeast-2": endpoint{ + Hostname: "s3-control.ap-northeast-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ap-northeast-2", + }, + }, + "ap-south-1": endpoint{ + Hostname: "s3-control.ap-south-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ap-south-1", + }, + }, + "ap-southeast-1": endpoint{ + Hostname: "s3-control.ap-southeast-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ap-southeast-1", + }, + }, + "ap-southeast-2": endpoint{ + Hostname: "s3-control.ap-southeast-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ap-southeast-2", + }, + }, + "ca-central-1": endpoint{ + Hostname: "s3-control.ca-central-1.amazonaws.com", + SignatureVersions: 
[]string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "eu-central-1": endpoint{ + Hostname: "s3-control.eu-central-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-central-1", + }, + }, + "eu-west-1": endpoint{ + Hostname: "s3-control.eu-west-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-west-1", + }, + }, + "eu-west-2": endpoint{ + Hostname: "s3-control.eu-west-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-west-2", + }, + }, + "eu-west-3": endpoint{ + Hostname: "s3-control.eu-west-3.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "eu-west-3", + }, + }, + "sa-east-1": endpoint{ + Hostname: "s3-control.sa-east-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "sa-east-1", + }, + }, + "us-east-1": endpoint{ + Hostname: "s3-control.us-east-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-1-fips": endpoint{ + Hostname: "s3-control-fips.us-east-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{ + Hostname: "s3-control.us-east-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-east-2-fips": endpoint{ + Hostname: "s3-control-fips.us-east-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{ + Hostname: "s3-control.us-west-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-1-fips": endpoint{ + Hostname: "s3-control-fips.us-west-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{ + Hostname: "s3-control.us-west-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-west-2-fips": endpoint{ + Hostname: "s3-control-fips.us-west-2.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, "sdb": service{ @@ -1813,6 +2233,50 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "secretsmanager": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "secretsmanager-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "secretsmanager-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "secretsmanager-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + 
}, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "secretsmanager-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + }, + }, "serverlessrepo": service{ Defaults: endpoint{ Protocols: []string{"https"}, @@ -1877,18 +2341,53 @@ var awsPartition = partition{ "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-1": endpoint{}, - "us-west-2": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "servicecatalog-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "servicecatalog-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "us-west-1": endpoint{}, + "us-west-1-fips": endpoint{ + Hostname: "servicecatalog-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "servicecatalog-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, "servicediscovery": service{ Endpoints: endpoints{ - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "shield": service{ @@ -1907,12 +2406,14 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -1924,6 +2425,7 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, @@ -1975,7 +2477,31 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, - "sa-east-1": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "sqs-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "sqs-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "sqs-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "sqs-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "sa-east-1": endpoint{}, "us-east-1": endpoint{ SSLCommonName: "queue.{dnsSuffix}", }, @@ -2008,6 +2534,8 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": 
endpoint{}, "ca-central-1": endpoint{}, @@ -2016,6 +2544,7 @@ var awsPartition = partition{ "eu-west-2": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, @@ -2178,9 +2707,28 @@ var awsPartition = partition{ Protocols: []string{"https"}, }, Endpoints: endpoints{ + "eu-west-1": endpoint{}, "us-east-1": endpoint{}, + "us-east-1-fips": endpoint{ + Hostname: "translate-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, "us-east-2": endpoint{}, + "us-east-2-fips": endpoint{ + Hostname: "translate-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, "us-west-2": endpoint{}, + "us-west-2-fips": endpoint{ + Hostname: "translate-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, }, }, "waf": service{ @@ -2204,6 +2752,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "us-east-1": endpoint{}, + "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -2233,11 +2782,14 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -2254,6 +2806,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -2296,12 +2849,13 @@ var awscnPartition = partition{ "apigateway": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "application-autoscaling": service{ Defaults: endpoint{ - Hostname: "autoscaling.{region}.amazonaws.com", + Hostname: "autoscaling.{region}.amazonaws.com.cn", Protocols: []string{"http", "https"}, CredentialScope: credentialScope{ Service: "application-autoscaling", @@ -2335,6 +2889,13 @@ var awscnPartition = partition{ "cn-northwest-1": endpoint{}, }, }, + "codebuild": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, "codedeploy": service{ Endpoints: endpoints{ @@ -2362,6 +2923,20 @@ var awscnPartition = partition{ "cn-northwest-1": endpoint{}, }, }, + "dms": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "ds": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, "dynamodb": service{ Defaults: endpoint{ Protocols: []string{"http", "https"}, @@ -2394,13 +2969,15 @@ var awscnPartition = partition{ "ecr": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "ecs": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "elasticache": service{ @@ -2428,7 +3005,7 @@ var awscnPartition = partition{ }, "elasticmapreduce": service{ Defaults: endpoint{ - Protocols: []string{"http", "https"}, + Protocols: []string{"https"}, }, Endpoints: endpoints{ "cn-north-1": endpoint{}, @@ -2438,6 +3015,7 @@ var awscnPartition = partition{ "es": service{ Endpoints: endpoints{ + "cn-north-1": endpoint{}, 
"cn-northwest-1": endpoint{}, }, }, @@ -2490,7 +3068,8 @@ var awscnPartition = partition{ "lambda": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "logs": service{ @@ -2533,10 +3112,33 @@ var awscnPartition = partition{ "cn-northwest-1": endpoint{}, }, }, + "s3-control": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + SignatureVersions: []string{"s3v4"}, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{ + Hostname: "s3-control.cn-north-1.amazonaws.com.cn", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "cn-north-1", + }, + }, + "cn-northwest-1": endpoint{ + Hostname: "s3-control.cn-northwest-1.amazonaws.com.cn", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "cn-northwest-1", + }, + }, + }, + }, "sms": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "snowball": service{ @@ -2634,6 +3236,9 @@ var awsusgovPartition = partition{ SignatureVersions: []string{"v4"}, }, Regions: regions{ + "us-gov-east-1": region{ + Description: "AWS GovCloud (US-East)", + }, "us-gov-west-1": region{ Description: "AWS GovCloud (US)", }, @@ -2641,6 +3246,13 @@ var awsusgovPartition = partition{ Services: services{ "acm": service{ + Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, + "api.sagemaker": service{ + Endpoints: endpoints{ "us-gov-west-1": endpoint{}, }, @@ -2648,20 +3260,36 @@ var awsusgovPartition = partition{ "apigateway": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, + "application-autoscaling": service{ + + Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "autoscaling": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{ Protocols: []string{"http", "https"}, }, }, }, + "clouddirectory": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "cloudformation": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, @@ -2671,39 +3299,61 @@ var awsusgovPartition = partition{ "us-gov-west-1": endpoint{}, }, }, + "cloudhsmv2": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "cloudhsm", + }, + }, + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "cloudtrail": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "codedeploy": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, + "us-gov-west-1-fips": endpoint{ + Hostname: "codedeploy-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "config": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "directconnect": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "dms": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "dynamodb": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, "us-gov-west-1-fips": endpoint{ Hostname: "dynamodb.us-gov-west-1.amazonaws.com", @@ -2716,6 +3366,7 @@ var awsusgovPartition = partition{ "ec2": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, 
}, @@ -2733,30 +3384,41 @@ var awsusgovPartition = partition{ "ecr": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "ecs": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "elasticache": service{ Endpoints: endpoints{ + "fips": endpoint{ + Hostname: "elasticache-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "elasticbeanstalk": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "elasticloadbalancing": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{ Protocols: []string{"http", "https"}, }, @@ -2765,8 +3427,9 @@ var awsusgovPartition = partition{ "elasticmapreduce": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{ - Protocols: []string{"http", "https"}, + Protocols: []string{"https"}, }, }, }, @@ -2779,17 +3442,28 @@ var awsusgovPartition = partition{ "events": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "glacier": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{ Protocols: []string{"http", "https"}, }, }, }, + "guardduty": service{ + IsRegionalized: boxedTrue, + Defaults: endpoint{ + Protocols: []string{"https"}, + }, + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "iam": service{ PartitionEndpoint: "aws-us-gov-global", IsRegionalized: boxedFalse, @@ -2803,27 +3477,48 @@ var awsusgovPartition = partition{ }, }, }, + "inspector": service{ + + Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, + "iot": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "execute-api", + }, + }, + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "kinesis": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "kms": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "lambda": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "logs": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, @@ -2840,6 +3535,7 @@ var awsusgovPartition = partition{ "monitoring": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, @@ -2852,12 +3548,14 @@ var awsusgovPartition = partition{ "rds": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "redshift": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, @@ -2867,6 +3565,12 @@ var awsusgovPartition = partition{ "us-gov-west-1": endpoint{}, }, }, + "runtime.sagemaker": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "s3": service{ Defaults: endpoint{ SignatureVersions: []string{"s3", "s3v4"}, @@ -2878,15 +3582,56 @@ var awsusgovPartition = partition{ Region: "us-gov-west-1", }, }, + "us-gov-east-1": endpoint{ + Hostname: "s3.us-gov-east-1.amazonaws.com", + Protocols: []string{"http", "https"}, + }, "us-gov-west-1": endpoint{ Hostname: "s3.us-gov-west-1.amazonaws.com", Protocols: []string{"http", "https"}, }, }, }, + "s3-control": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + 
SignatureVersions: []string{"s3v4"}, + }, + Endpoints: endpoints{ + "us-gov-east-1": endpoint{ + Hostname: "s3-control.us-gov-east-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "us-gov-east-1-fips": endpoint{ + Hostname: "s3-control-fips.us-gov-east-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "us-gov-west-1": endpoint{ + Hostname: "s3-control.us-gov-west-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + "us-gov-west-1-fips": endpoint{ + Hostname: "s3-control-fips.us-gov-west-1.amazonaws.com", + SignatureVersions: []string{"s3v4"}, + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + }, + }, "sms": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, @@ -2899,6 +3644,7 @@ var awsusgovPartition = partition{ "sns": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{ Protocols: []string{"http", "https"}, }, @@ -2907,6 +3653,7 @@ var awsusgovPartition = partition{ "sqs": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{ SSLCommonName: "{region}.queue.{dnsSuffix}", Protocols: []string{"http", "https"}, @@ -2915,6 +3662,20 @@ var awsusgovPartition = partition{ }, "ssm": service{ + Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, + "states": service{ + + Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, + "storagegateway": service{ + Endpoints: endpoints{ "us-gov-west-1": endpoint{}, }, @@ -2926,6 +3687,7 @@ var awsusgovPartition = partition{ }, }, Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, "us-gov-west-1-fips": endpoint{ Hostname: "dynamodb.us-gov-west-1.amazonaws.com", @@ -2938,19 +3700,36 @@ var awsusgovPartition = partition{ "sts": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "swf": service{ Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, "us-gov-west-1": endpoint{}, }, }, "tagging": service{ + Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, + "translate": service{ + Defaults: endpoint{ + Protocols: []string{"https"}, + }, Endpoints: endpoints{ "us-gov-west-1": endpoint{}, + "us-gov-west-1-fips": endpoint{ + Hostname: "translate-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, }, diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go index 9c3eedb48d5..e29c095121d 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go @@ -206,10 +206,11 @@ func (p Partition) EndpointFor(service, region string, opts ...func(*Options)) ( // enumerating over the regions in a partition. func (p Partition) Regions() map[string]Region { rs := map[string]Region{} - for id := range p.p.Regions { + for id, r := range p.p.Regions { rs[id] = Region{ - id: id, - p: p.p, + id: id, + desc: r.Description, + p: p.p, } } @@ -240,6 +241,10 @@ type Region struct { // ID returns the region's identifier. 
func (r Region) ID() string { return r.id } +// Description returns the region's description. The region description +// is free text, it can be empty, and it may change between SDK releases. +func (r Region) Description() string { return r.desc } + // ResolveEndpoint resolves an endpoint from the context of the region given // a service. See Partition.EndpointFor for usage and errors that can be returned. func (r Region) ResolveEndpoint(service string, opts ...func(*Options)) (ResolvedEndpoint, error) { @@ -284,10 +289,11 @@ func (s Service) ResolveEndpoint(region string, opts ...func(*Options)) (Resolve func (s Service) Regions() map[string]Region { rs := map[string]Region{} for id := range s.p.Services[s.id].Endpoints { - if _, ok := s.p.Regions[id]; ok { + if r, ok := s.p.Regions[id]; ok { rs[id] = Region{ - id: id, - p: s.p, + id: id, + desc: r.Description, + p: s.p, } } } @@ -347,6 +353,10 @@ type ResolvedEndpoint struct { // The service name that should be used for signing requests. SigningName string + // States that the signing name for this endpoint was derived from metadata + // passed in, but was not explicitly modeled. + SigningNameDerived bool + // The signing method that should be used for signing requests. SigningMethod string } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go index 13d968a249e..ff6f76db6eb 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go @@ -226,16 +226,20 @@ func (e endpoint) resolve(service, region, dnsSuffix string, defs []endpoint, op if len(signingRegion) == 0 { signingRegion = region } + signingName := e.CredentialScope.Service + var signingNameDerived bool if len(signingName) == 0 { signingName = service + signingNameDerived = true } return ResolvedEndpoint{ - URL: u, - SigningRegion: signingRegion, - SigningName: signingName, - SigningMethod: getByPriority(e.SignatureVersions, signerPriority, defaultSigner), + URL: u, + SigningRegion: signingRegion, + SigningName: signingName, + SigningNameDerived: signingNameDerived, + SigningMethod: getByPriority(e.SignatureVersions, signerPriority, defaultSigner), } } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/errors.go b/vendor/github.com/aws/aws-sdk-go/aws/errors.go index 57663616868..fa06f7a8f8b 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/errors.go @@ -5,13 +5,9 @@ import "github.com/aws/aws-sdk-go/aws/awserr" var ( // ErrMissingRegion is an error that is returned if region configuration is // not found. - // - // @readonly ErrMissingRegion = awserr.New("MissingRegion", "could not find region configuration", nil) // ErrMissingEndpoint is an error that is returned if an endpoint cannot be // resolved for a service. - // - // @readonly ErrMissingEndpoint = awserr.New("MissingEndpoint", "'Endpoint' configuration is required for this service", nil) ) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/logger.go b/vendor/github.com/aws/aws-sdk-go/aws/logger.go index 3babb5abdb6..6ed15b2ecc2 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/logger.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/logger.go @@ -71,6 +71,12 @@ const ( // LogDebugWithRequestErrors states the SDK should log when service requests fail // to build, send, validate, or unmarshal. 
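The `desc` plumbing and the new `Region.Description()` accessor above expose the human-readable region names carried by the endpoint model. A small sketch that relies only on the public `endpoints` API shown here:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/endpoints"
)

func main() {
	// Walk every partition and print each region's ID together with the
	// free-text description that Region.Description now exposes.
	for _, p := range endpoints.DefaultPartitions() {
		fmt.Println("partition:", p.ID())
		for id, r := range p.Regions() {
			// Per the doc comment above, the description may be empty and
			// can change between SDK releases; treat it as display text only.
			fmt.Printf("  %s\t%s\n", id, r.Description())
		}
	}
}
```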
LogDebugWithRequestErrors + + // LogDebugWithEventStreamBody states the SDK should log EventStream + // request and response bodys. This should be used to log the EventStream + // wire unmarshaled message content of requests and responses made while + // using the SDK Will also enable LogDebug. + LogDebugWithEventStreamBody ) // A Logger is a minimalistic interface for the SDK to log messages to. Should diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go b/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go index 802ac88ad5c..605a72d3c94 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/handlers.go @@ -14,6 +14,7 @@ type Handlers struct { Send HandlerList ValidateResponse HandlerList Unmarshal HandlerList + UnmarshalStream HandlerList UnmarshalMeta HandlerList UnmarshalError HandlerList Retry HandlerList @@ -30,6 +31,7 @@ func (h *Handlers) Copy() Handlers { Send: h.Send.copy(), ValidateResponse: h.ValidateResponse.copy(), Unmarshal: h.Unmarshal.copy(), + UnmarshalStream: h.UnmarshalStream.copy(), UnmarshalError: h.UnmarshalError.copy(), UnmarshalMeta: h.UnmarshalMeta.copy(), Retry: h.Retry.copy(), @@ -45,6 +47,7 @@ func (h *Handlers) Clear() { h.Send.Clear() h.Sign.Clear() h.Unmarshal.Clear() + h.UnmarshalStream.Clear() h.UnmarshalMeta.Clear() h.UnmarshalError.Clear() h.ValidateResponse.Clear() @@ -172,6 +175,21 @@ func (l *HandlerList) SwapNamed(n NamedHandler) (swapped bool) { return swapped } +// Swap will swap out all handlers matching the name passed in. The matched +// handlers will be swapped in. True is returned if the handlers were swapped. +func (l *HandlerList) Swap(name string, replace NamedHandler) bool { + var swapped bool + + for i := 0; i < len(l.list); i++ { + if l.list[i].Name == name { + l.list[i] = replace + swapped = true + } + } + + return swapped +} + // SetBackNamed will replace the named handler if it exists in the handler list. // If the handler does not exist the handler will be added to the end of the list. func (l *HandlerList) SetBackNamed(n NamedHandler) { diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go index e81785c268f..63e7f71c3ed 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go @@ -46,6 +46,7 @@ type Request struct { Handlers Handlers Retryer + AttemptTime time.Time Time time.Time Operation *Operation HTTPRequest *http.Request @@ -121,6 +122,7 @@ func New(cfg aws.Config, clientInfo metadata.ClientInfo, handlers Handlers, Handlers: handlers.Copy(), Retryer: retryer, + AttemptTime: time.Now(), Time: time.Now(), ExpireTime: 0, Operation: operation, @@ -264,7 +266,9 @@ func (r *Request) SetReaderBody(reader io.ReadSeeker) { } // Presign returns the request's signed URL. Error will be returned -// if the signing fails. +// if the signing fails. The expire parameter is only used for presigned Amazon +// S3 API requests. All other AWS services will use a fixed expriation +// time of 15 minutes. // // It is invalid to create a presigned URL with a expire duration 0 or less. An // error is returned if expire duration is 0 or less. @@ -281,7 +285,9 @@ func (r *Request) Presign(expire time.Duration) (string, error) { } // PresignRequest behaves just like presign, with the addition of returning a -// set of headers that were signed. 
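For the `HandlerList.Swap` method added above, a minimal sketch; the handler name and handler bodies are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/request"
)

func main() {
	var l request.HandlerList

	// Register a named handler, then replace every handler with that name
	// using the new Swap method.
	l.PushBackNamed(request.NamedHandler{
		Name: "example.Logger", // hypothetical handler name
		Fn:   func(r *request.Request) { fmt.Println("original handler") },
	})

	swapped := l.Swap("example.Logger", request.NamedHandler{
		Name: "example.Logger",
		Fn:   func(r *request.Request) { fmt.Println("replacement handler") },
	})
	fmt.Println("swapped:", swapped) // true if at least one handler matched

	l.Run(&request.Request{}) // prints "replacement handler"
}
```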
+// set of headers that were signed. The expire parameter is only used for +// presigned Amazon S3 API requests. All other AWS services will use a fixed +// expriation time of 15 minutes. // // It is invalid to create a presigned URL with a expire duration 0 or less. An // error is returned if expire duration is 0 or less. @@ -342,7 +348,7 @@ func debugLogReqError(r *Request, stage string, retrying bool, err error) { // Build will build the request's object so it can be signed and sent // to the service. Build will also validate all the request's parameters. -// Anny additional build Handlers set on this request will be run +// Any additional build Handlers set on this request will be run // in the order they were set. // // The request will only be built once. Multiple calls to build will have @@ -368,9 +374,9 @@ func (r *Request) Build() error { return r.Error } -// Sign will sign the request returning error if errors are encountered. +// Sign will sign the request, returning error if errors are encountered. // -// Send will build the request prior to signing. All Sign Handlers will +// Sign will build the request prior to signing. All Sign Handlers will // be executed in the order they were set. func (r *Request) Sign() error { r.Build() @@ -440,7 +446,7 @@ func (r *Request) GetBody() io.ReadSeeker { return r.safeBody } -// Send will send the request returning error if errors are encountered. +// Send will send the request, returning error if errors are encountered. // // Send will sign the request prior to sending. All Send Handlers will // be executed in the order they were set. @@ -461,6 +467,7 @@ func (r *Request) Send() error { }() for { + r.AttemptTime = time.Now() if aws.BoolValue(r.Retryable) { if r.Config.LogLevel.Matches(aws.LogDebugWithRequestRetries) { r.Config.Logger.Log(fmt.Sprintf("DEBUG: Retrying Request %s/%s, attempt %d", diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_7.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_7.go index 869b97a1a0f..e36e468b7c6 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_7.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_7.go @@ -21,7 +21,7 @@ func (noBody) WriteTo(io.Writer) (int64, error) { return 0, nil } var NoBody = noBody{} // ResetBody rewinds the request body back to its starting position, and -// set's the HTTP Request body reference. When the body is read prior +// sets the HTTP Request body reference. When the body is read prior // to being sent in the HTTP request it will need to be rewound. // // ResetBody will automatically be called by the SDK's build handler, but if diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_8.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_8.go index c32fc69bc56..7c6a8000f67 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_8.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_1_8.go @@ -11,7 +11,7 @@ import ( var NoBody = http.NoBody // ResetBody rewinds the request body back to its starting position, and -// set's the HTTP Request body reference. When the body is read prior +// sets the HTTP Request body reference. When the body is read prior // to being sent in the HTTP request it will need to be rewound. 
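The clarified `Presign`/`PresignRequest` docs above note that the expire duration is only honored for presigned Amazon S3 requests. A hedged sketch of presigning an S3 `GetObject` call; the region, credentials, bucket, and key are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Placeholder credentials and region so the sketch is self-contained.
	sess := session.Must(session.NewSession(aws.NewConfig().
		WithRegion("us-east-1").
		WithCredentials(credentials.NewStaticCredentials("AKID", "SECRET", ""))))
	svc := s3.New(sess)

	// Build (but do not send) a GetObject request, then presign it.
	req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("example-bucket"),
		Key:    aws.String("example-key"),
	})

	// For S3 the 15 minute expiry below is honored; per the comment above,
	// other services use a fixed 15 minute expiration regardless.
	urlStr, err := req.Presign(15 * time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(urlStr)
}
```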
// // ResetBody will automatically be called by the SDK's build handler, but if diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go index 159518a75cd..a633ed5acfa 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go @@ -35,8 +35,12 @@ type Pagination struct { // NewRequest should always be built from the same API operations. It is // undefined if different API operations are returned on subsequent calls. NewRequest func() (*Request, error) + // EndPageOnSameToken, when enabled, will allow the paginator to stop on + // token that are the same as its previous tokens. + EndPageOnSameToken bool started bool + prevTokens []interface{} nextTokens []interface{} err error @@ -49,7 +53,15 @@ type Pagination struct { // // Will always return true if Next has not been called yet. func (p *Pagination) HasNextPage() bool { - return !(p.started && len(p.nextTokens) == 0) + if !p.started { + return true + } + + hasNextPage := len(p.nextTokens) != 0 + if p.EndPageOnSameToken { + return hasNextPage && !awsutil.DeepEqual(p.nextTokens, p.prevTokens) + } + return hasNextPage } // Err returns the error Pagination encountered when retrieving the next page. @@ -96,6 +108,7 @@ func (p *Pagination) Next() bool { return false } + p.prevTokens = p.nextTokens p.nextTokens = req.nextPageTokens() p.curPage = req.Data diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go index f35fef213ed..7d527029884 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go @@ -97,7 +97,7 @@ func isNestedErrorRetryable(parentErr awserr.Error) bool { } if t, ok := err.(temporaryError); ok { - return t.Temporary() + return t.Temporary() || isErrConnectionReset(err) } return isErrConnectionReset(err) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/validation.go b/vendor/github.com/aws/aws-sdk-go/aws/request/validation.go index 40124622821..bcfd947a3d5 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/validation.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/validation.go @@ -17,6 +17,10 @@ const ( ParamMinValueErrCode = "ParamMinValueError" // ParamMinLenErrCode is the error code for fields without enough elements. ParamMinLenErrCode = "ParamMinLenError" + + // ParamFormatErrCode is the error code for a field with invalid + // format or characters. + ParamFormatErrCode = "ParamFormatInvalidError" ) // Validator provides a way for types to perform validation logic on their @@ -232,3 +236,26 @@ func NewErrParamMinLen(field string, min int) *ErrParamMinLen { func (e *ErrParamMinLen) MinLen() int { return e.min } + +// An ErrParamFormat represents a invalid format parameter error. +type ErrParamFormat struct { + errInvalidParam + format string +} + +// NewErrParamFormat creates a new invalid format parameter error. +func NewErrParamFormat(field string, format, value string) *ErrParamFormat { + return &ErrParamFormat{ + errInvalidParam: errInvalidParam{ + code: ParamFormatErrCode, + field: field, + msg: fmt.Sprintf("format %v, %v", format, value), + }, + format: format, + } +} + +// Format returns the field's required format. 
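The new `EndPageOnSameToken` and `prevTokens` fields above let `Pagination` stop when a service keeps returning the same continuation token. A sketch of driving the paginator by hand; the S3 listing and bucket name are only illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	p := request.Pagination{
		// Stop if a page carries the same continuation token as the
		// previous page instead of looping forever.
		EndPageOnSameToken: true,
		NewRequest: func() (*request.Request, error) {
			req, _ := svc.ListObjectsV2Request(&s3.ListObjectsV2Input{
				Bucket: aws.String("example-bucket"), // placeholder
			})
			return req, nil
		},
	}

	for p.Next() {
		page := p.Page().(*s3.ListObjectsV2Output)
		fmt.Println("objects in page:", len(page.Contents))
	}
	if err := p.Err(); err != nil {
		log.Fatal(err)
	}
}
```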
+func (e *ErrParamFormat) Format() string { + return e.format +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go b/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go index ea7b886f81f..98d420fd64d 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/doc.go @@ -128,7 +128,7 @@ read. The Session will be created from configuration values from the shared credentials file (~/.aws/credentials) over those in the shared config file (~/.aws/config). Credentials are the values the SDK should use for authenticating requests with -AWS Services. They arfrom a configuration file will need to include both +AWS Services. They are from a configuration file will need to include both aws_access_key_id and aws_secret_access_key must be provided together in the same file to be considered valid. The values will be ignored if not a complete group. aws_session_token is an optional field that can be provided if both of diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go index 12b452177a8..c94d0fb9a7c 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go @@ -4,6 +4,7 @@ import ( "os" "strconv" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/defaults" ) @@ -96,9 +97,29 @@ type envConfig struct { // // AWS_CA_BUNDLE=$HOME/my_custom_ca_bundle CustomCABundle string + + csmEnabled string + CSMEnabled bool + CSMPort string + CSMClientID string + + enableEndpointDiscovery string + // Enables endpoint discovery via environment variables. + // + // AWS_ENABLE_ENDPOINT_DISCOVERY=true + EnableEndpointDiscovery *bool } var ( + csmEnabledEnvKey = []string{ + "AWS_CSM_ENABLED", + } + csmPortEnvKey = []string{ + "AWS_CSM_PORT", + } + csmClientIDEnvKey = []string{ + "AWS_CSM_CLIENT_ID", + } credAccessEnvKey = []string{ "AWS_ACCESS_KEY_ID", "AWS_ACCESS_KEY", @@ -111,6 +132,10 @@ var ( "AWS_SESSION_TOKEN", } + enableEndpointDiscoveryEnvKey = []string{ + "AWS_ENABLE_ENDPOINT_DISCOVERY", + } + regionEnvKeys = []string{ "AWS_REGION", "AWS_DEFAULT_REGION", // Only read if AWS_SDK_LOAD_CONFIG is also set @@ -157,6 +182,12 @@ func envConfigLoad(enableSharedConfig bool) envConfig { setFromEnvVal(&cfg.Creds.SecretAccessKey, credSecretEnvKey) setFromEnvVal(&cfg.Creds.SessionToken, credSessionEnvKey) + // CSM environment variables + setFromEnvVal(&cfg.csmEnabled, csmEnabledEnvKey) + setFromEnvVal(&cfg.CSMPort, csmPortEnvKey) + setFromEnvVal(&cfg.CSMClientID, csmClientIDEnvKey) + cfg.CSMEnabled = len(cfg.csmEnabled) > 0 + // Require logical grouping of credentials if len(cfg.Creds.AccessKeyID) == 0 || len(cfg.Creds.SecretAccessKey) == 0 { cfg.Creds = credentials.Value{} @@ -174,6 +205,12 @@ func envConfigLoad(enableSharedConfig bool) envConfig { setFromEnvVal(&cfg.Region, regionKeys) setFromEnvVal(&cfg.Profile, profileKeys) + // endpoint discovery is in reference to it being enabled. 
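The environment keys introduced above (`AWS_CSM_ENABLED`, `AWS_CSM_PORT`, `AWS_CSM_CLIENT_ID`, `AWS_ENABLE_ENDPOINT_DISCOVERY`) are read when a session is created. A sketch with arbitrary example values; in practice these would be set in the shell rather than in code:

```go
package main

import (
	"os"

	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Arbitrary example values; only the variable names come from the
	// change above.
	os.Setenv("AWS_CSM_ENABLED", "true")
	os.Setenv("AWS_CSM_PORT", "31000") // falls back to the csm package default when empty
	os.Setenv("AWS_CSM_CLIENT_ID", "my-app")
	os.Setenv("AWS_ENABLE_ENDPOINT_DISCOVERY", "true")

	// Session creation reads these via envConfigLoad and, per the session.go
	// change below, wires the CSM reporter into the session's handlers.
	sess := session.Must(session.NewSession())
	_ = sess
}
```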
+ setFromEnvVal(&cfg.enableEndpointDiscovery, enableEndpointDiscoveryEnvKey) + if len(cfg.enableEndpointDiscovery) > 0 { + cfg.EnableEndpointDiscovery = aws.Bool(cfg.enableEndpointDiscovery != "false") + } + setFromEnvVal(&cfg.SharedCredentialsFile, sharedCredsFileEnvKey) setFromEnvVal(&cfg.SharedConfigFile, sharedConfigFileEnvKey) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go index 4bf7a155849..e7c156e8b12 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go @@ -15,11 +15,30 @@ import ( "github.com/aws/aws-sdk-go/aws/corehandlers" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/credentials/stscreds" + "github.com/aws/aws-sdk-go/aws/csm" "github.com/aws/aws-sdk-go/aws/defaults" "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/internal/shareddefaults" ) +const ( + // ErrCodeSharedConfig represents an error that occurs in the shared + // configuration logic + ErrCodeSharedConfig = "SharedConfigErr" +) + +// ErrSharedConfigSourceCollision will be returned if a section contains both +// source_profile and credential_source +var ErrSharedConfigSourceCollision = awserr.New(ErrCodeSharedConfig, "only source profile or credential source can be specified, not both", nil) + +// ErrSharedConfigECSContainerEnvVarEmpty will be returned if the environment +// variables are empty and Environment was set as the credential source +var ErrSharedConfigECSContainerEnvVarEmpty = awserr.New(ErrCodeSharedConfig, "EcsContainer was specified as the credential_source, but 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI' was not set", nil) + +// ErrSharedConfigInvalidCredSource will be returned if an invalid credential source was provided +var ErrSharedConfigInvalidCredSource = awserr.New(ErrCodeSharedConfig, "credential source values must be EcsContainer, Ec2InstanceMetadata, or Environment", nil) + // A Session provides a central location to create service clients from and // store configurations and request handlers for those services. // @@ -81,10 +100,16 @@ func New(cfgs ...*aws.Config) *Session { r.Error = err }) } + return s } - return deprecatedNewSession(cfgs...) + s := deprecatedNewSession(cfgs...) 
+ if envCfg.CSMEnabled { + enableCSM(&s.Handlers, envCfg.CSMClientID, envCfg.CSMPort, s.Config.Logger) + } + + return s } // NewSession returns a new Session created from SDK defaults, config files, @@ -300,10 +325,22 @@ func deprecatedNewSession(cfgs ...*aws.Config) *Session { } initHandlers(s) - return s } +func enableCSM(handlers *request.Handlers, clientID string, port string, logger aws.Logger) { + logger.Log("Enabling CSM") + if len(port) == 0 { + port = csm.DefaultPort + } + + r, err := csm.Start(clientID, "127.0.0.1:"+port) + if err != nil { + return + } + r.InjectHandlers(handlers) +} + func newSession(opts Options, envCfg envConfig, cfgs ...*aws.Config) (*Session, error) { cfg := defaults.Config() handlers := defaults.Handlers() @@ -343,6 +380,9 @@ func newSession(opts Options, envCfg envConfig, cfgs ...*aws.Config) (*Session, } initHandlers(s) + if envCfg.CSMEnabled { + enableCSM(&s.Handlers, envCfg.CSMClientID, envCfg.CSMPort, s.Config.Logger) + } // Setup HTTP client with custom cert bundle if enabled if opts.CustomCABundle != nil { @@ -412,8 +452,67 @@ func mergeConfigSrcs(cfg, userCfg *aws.Config, envCfg envConfig, sharedCfg share } } + if cfg.EnableEndpointDiscovery == nil { + if envCfg.EnableEndpointDiscovery != nil { + cfg.WithEndpointDiscovery(*envCfg.EnableEndpointDiscovery) + } else if envCfg.EnableSharedConfig && sharedCfg.EnableEndpointDiscovery != nil { + cfg.WithEndpointDiscovery(*sharedCfg.EnableEndpointDiscovery) + } + } + // Configure credentials if not already set if cfg.Credentials == credentials.AnonymousCredentials && userCfg.Credentials == nil { + + // inspect the profile to see if a credential source has been specified. + if envCfg.EnableSharedConfig && len(sharedCfg.AssumeRole.CredentialSource) > 0 { + + // if both credential_source and source_profile have been set, return an error + // as this is undefined behavior. + if len(sharedCfg.AssumeRole.SourceProfile) > 0 { + return ErrSharedConfigSourceCollision + } + + // valid credential source values + const ( + credSourceEc2Metadata = "Ec2InstanceMetadata" + credSourceEnvironment = "Environment" + credSourceECSContainer = "EcsContainer" + ) + + switch sharedCfg.AssumeRole.CredentialSource { + case credSourceEc2Metadata: + cfgCp := *cfg + p := defaults.RemoteCredProvider(cfgCp, handlers) + cfgCp.Credentials = credentials.NewCredentials(p) + + if len(sharedCfg.AssumeRole.MFASerial) > 0 && sessOpts.AssumeRoleTokenProvider == nil { + // AssumeRole Token provider is required if doing Assume Role + // with MFA. 
+ return AssumeRoleTokenProviderNotSetError{} + } + + cfg.Credentials = assumeRoleCredentials(cfgCp, handlers, sharedCfg, sessOpts) + case credSourceEnvironment: + cfg.Credentials = credentials.NewStaticCredentialsFromCreds( + envCfg.Creds, + ) + case credSourceECSContainer: + if len(os.Getenv(shareddefaults.ECSCredsProviderEnvVar)) == 0 { + return ErrSharedConfigECSContainerEnvVarEmpty + } + + cfgCp := *cfg + p := defaults.RemoteCredProvider(cfgCp, handlers) + creds := credentials.NewCredentials(p) + + cfg.Credentials = creds + default: + return ErrSharedConfigInvalidCredSource + } + + return nil + } + if len(envCfg.Creds.AccessKeyID) > 0 { cfg.Credentials = credentials.NewStaticCredentialsFromCreds( envCfg.Creds, @@ -423,32 +522,14 @@ func mergeConfigSrcs(cfg, userCfg *aws.Config, envCfg envConfig, sharedCfg share cfgCp.Credentials = credentials.NewStaticCredentialsFromCreds( sharedCfg.AssumeRoleSource.Creds, ) + if len(sharedCfg.AssumeRole.MFASerial) > 0 && sessOpts.AssumeRoleTokenProvider == nil { // AssumeRole Token provider is required if doing Assume Role // with MFA. return AssumeRoleTokenProviderNotSetError{} } - cfg.Credentials = stscreds.NewCredentials( - &Session{ - Config: &cfgCp, - Handlers: handlers.Copy(), - }, - sharedCfg.AssumeRole.RoleARN, - func(opt *stscreds.AssumeRoleProvider) { - opt.RoleSessionName = sharedCfg.AssumeRole.RoleSessionName - - // Assume role with external ID - if len(sharedCfg.AssumeRole.ExternalID) > 0 { - opt.ExternalID = aws.String(sharedCfg.AssumeRole.ExternalID) - } - - // Assume role with MFA - if len(sharedCfg.AssumeRole.MFASerial) > 0 { - opt.SerialNumber = aws.String(sharedCfg.AssumeRole.MFASerial) - opt.TokenProvider = sessOpts.AssumeRoleTokenProvider - } - }, - ) + + cfg.Credentials = assumeRoleCredentials(cfgCp, handlers, sharedCfg, sessOpts) } else if len(sharedCfg.Creds.AccessKeyID) > 0 { cfg.Credentials = credentials.NewStaticCredentialsFromCreds( sharedCfg.Creds, @@ -471,6 +552,30 @@ func mergeConfigSrcs(cfg, userCfg *aws.Config, envCfg envConfig, sharedCfg share return nil } +func assumeRoleCredentials(cfg aws.Config, handlers request.Handlers, sharedCfg sharedConfig, sessOpts Options) *credentials.Credentials { + return stscreds.NewCredentials( + &Session{ + Config: &cfg, + Handlers: handlers.Copy(), + }, + sharedCfg.AssumeRole.RoleARN, + func(opt *stscreds.AssumeRoleProvider) { + opt.RoleSessionName = sharedCfg.AssumeRole.RoleSessionName + + // Assume role with external ID + if len(sharedCfg.AssumeRole.ExternalID) > 0 { + opt.ExternalID = aws.String(sharedCfg.AssumeRole.ExternalID) + } + + // Assume role with MFA + if len(sharedCfg.AssumeRole.MFASerial) > 0 { + opt.SerialNumber = aws.String(sharedCfg.AssumeRole.MFASerial) + opt.TokenProvider = sessOpts.AssumeRoleTokenProvider + } + }, + ) +} + // AssumeRoleTokenProviderNotSetError is an error returned when creating a session when the // MFAToken option is not set when shared config is configured load assume a // role with an MFA token. 
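The `credential_source` handling above accepts `Ec2InstanceMetadata`, `Environment`, or `EcsContainer`, requires shared config to be enabled, and still demands an `AssumeRoleTokenProvider` when `mfa_serial` is set. A sketch; the profile shown in the comment is hypothetical:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Hypothetical ~/.aws/config profile exercising the new credential_source
	// support (exactly one of source_profile / credential_source may be set):
	//
	//   [profile assume-from-instance]
	//   role_arn          = arn:aws:iam::123456789012:role/example
	//   credential_source = Ec2InstanceMetadata
	//   mfa_serial        = arn:aws:iam::123456789012:mfa/example
	//
	// Shared config must be enabled for the profile to be honored, and an
	// AssumeRoleTokenProvider is required whenever mfa_serial is present.
	sess, err := session.NewSessionWithOptions(session.Options{
		Profile:                 "assume-from-instance",
		SharedConfigState:       session.SharedConfigEnable,
		AssumeRoleTokenProvider: stscreds.StdinTokenProvider,
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = sess
}
```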
@@ -571,11 +676,12 @@ func (s *Session) clientConfigWithErr(serviceName string, cfgs ...*aws.Config) ( } return client.Config{ - Config: s.Config, - Handlers: s.Handlers, - Endpoint: resolved.URL, - SigningRegion: resolved.SigningRegion, - SigningName: resolved.SigningName, + Config: s.Config, + Handlers: s.Handlers, + Endpoint: resolved.URL, + SigningRegion: resolved.SigningRegion, + SigningNameDerived: resolved.SigningNameDerived, + SigningName: resolved.SigningName, }, err } @@ -595,10 +701,11 @@ func (s *Session) ClientConfigNoResolveEndpoint(cfgs ...*aws.Config) client.Conf } return client.Config{ - Config: s.Config, - Handlers: s.Handlers, - Endpoint: resolved.URL, - SigningRegion: resolved.SigningRegion, - SigningName: resolved.SigningName, + Config: s.Config, + Handlers: s.Handlers, + Endpoint: resolved.URL, + SigningRegion: resolved.SigningRegion, + SigningNameDerived: resolved.SigningNameDerived, + SigningName: resolved.SigningName, } } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go index 09c8e5bc7ab..427b8a4e997 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go @@ -2,11 +2,11 @@ package session import ( "fmt" - "io/ioutil" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/credentials" - "github.com/go-ini/ini" + + "github.com/aws/aws-sdk-go/internal/ini" ) const ( @@ -16,15 +16,19 @@ const ( sessionTokenKey = `aws_session_token` // optional // Assume Role Credentials group - roleArnKey = `role_arn` // group required - sourceProfileKey = `source_profile` // group required - externalIDKey = `external_id` // optional - mfaSerialKey = `mfa_serial` // optional - roleSessionNameKey = `role_session_name` // optional + roleArnKey = `role_arn` // group required + sourceProfileKey = `source_profile` // group required (or credential_source) + credentialSourceKey = `credential_source` // group required (or source_profile) + externalIDKey = `external_id` // optional + mfaSerialKey = `mfa_serial` // optional + roleSessionNameKey = `role_session_name` // optional // Additional Config fields regionKey = `region` + // endpoint discovery group + enableEndpointDiscoveryKey = `endpoint_discovery_enabled` // optional + // DefaultSharedConfigProfile is the default profile to be used when // loading configuration from the config files if another profile name // is not provided. @@ -32,11 +36,12 @@ const ( ) type assumeRoleConfig struct { - RoleARN string - SourceProfile string - ExternalID string - MFASerial string - RoleSessionName string + RoleARN string + SourceProfile string + CredentialSource string + ExternalID string + MFASerial string + RoleSessionName string } // sharedConfig represents the configuration fields of the SDK config files. 
@@ -60,11 +65,17 @@ type sharedConfig struct { // // region Region string + + // EnableEndpointDiscovery can be enabled in the shared config by setting + // endpoint_discovery_enabled to true + // + // endpoint_discovery_enabled = true + EnableEndpointDiscovery *bool } type sharedConfigFile struct { Filename string - IniData *ini.File + IniData ini.Sections } // loadSharedConfig retrieves the configuration from the list of files @@ -105,19 +116,16 @@ func loadSharedConfigIniFiles(filenames []string) ([]sharedConfigFile, error) { files := make([]sharedConfigFile, 0, len(filenames)) for _, filename := range filenames { - b, err := ioutil.ReadFile(filename) - if err != nil { + sections, err := ini.OpenFile(filename) + if aerr, ok := err.(awserr.Error); ok && aerr.Code() == ini.ErrCodeUnableToReadFile { // Skip files which can't be opened and read for whatever reason continue - } - - f, err := ini.Load(b) - if err != nil { + } else if err != nil { return nil, SharedConfigLoadError{Filename: filename, Err: err} } files = append(files, sharedConfigFile{ - Filename: filename, IniData: f, + Filename: filename, IniData: sections, }) } @@ -127,6 +135,13 @@ func loadSharedConfigIniFiles(filenames []string) ([]sharedConfigFile, error) { func (cfg *sharedConfig) setAssumeRoleSource(origProfile string, files []sharedConfigFile) error { var assumeRoleSrc sharedConfig + if len(cfg.AssumeRole.CredentialSource) > 0 { + // setAssumeRoleSource is only called when source_profile is found. + // If both source_profile and credential_source are set, then + // ErrSharedConfigSourceCollision will be returned + return ErrSharedConfigSourceCollision + } + // Multiple level assume role chains are not support if cfg.AssumeRole.SourceProfile == origProfile { assumeRoleSrc = *cfg @@ -171,45 +186,54 @@ func (cfg *sharedConfig) setFromIniFiles(profile string, files []sharedConfigFil // if a config file only includes aws_access_key_id but no aws_secret_access_key // the aws_access_key_id will be ignored. 
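The `EnableEndpointDiscovery` field above is populated from the `endpoint_discovery_enabled` shared config key; the same switch can be forced in code through `aws.Config`, as the merge logic in session.go does. A sketch under that assumption:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Programmatic equivalent of a shared config profile containing:
	//
	//   [default]
	//   endpoint_discovery_enabled = true
	//
	// Per the merge logic above, an explicit value on aws.Config wins over
	// both the environment variable and the shared config file.
	cfg := aws.NewConfig().WithEndpointDiscovery(true)
	sess := session.Must(session.NewSession(cfg))
	_ = sess
}
```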
func (cfg *sharedConfig) setFromIniFile(profile string, file sharedConfigFile) error { - section, err := file.IniData.GetSection(profile) - if err != nil { + section, ok := file.IniData.GetSection(profile) + if !ok { // Fallback to to alternate profile name: profile - section, err = file.IniData.GetSection(fmt.Sprintf("profile %s", profile)) - if err != nil { - return SharedConfigProfileNotExistsError{Profile: profile, Err: err} + section, ok = file.IniData.GetSection(fmt.Sprintf("profile %s", profile)) + if !ok { + return SharedConfigProfileNotExistsError{Profile: profile, Err: nil} } } // Shared Credentials - akid := section.Key(accessKeyIDKey).String() - secret := section.Key(secretAccessKey).String() + akid := section.String(accessKeyIDKey) + secret := section.String(secretAccessKey) if len(akid) > 0 && len(secret) > 0 { cfg.Creds = credentials.Value{ AccessKeyID: akid, SecretAccessKey: secret, - SessionToken: section.Key(sessionTokenKey).String(), + SessionToken: section.String(sessionTokenKey), ProviderName: fmt.Sprintf("SharedConfigCredentials: %s", file.Filename), } } // Assume Role - roleArn := section.Key(roleArnKey).String() - srcProfile := section.Key(sourceProfileKey).String() - if len(roleArn) > 0 && len(srcProfile) > 0 { + roleArn := section.String(roleArnKey) + srcProfile := section.String(sourceProfileKey) + credentialSource := section.String(credentialSourceKey) + hasSource := len(srcProfile) > 0 || len(credentialSource) > 0 + if len(roleArn) > 0 && hasSource { cfg.AssumeRole = assumeRoleConfig{ - RoleARN: roleArn, - SourceProfile: srcProfile, - ExternalID: section.Key(externalIDKey).String(), - MFASerial: section.Key(mfaSerialKey).String(), - RoleSessionName: section.Key(roleSessionNameKey).String(), + RoleARN: roleArn, + SourceProfile: srcProfile, + CredentialSource: credentialSource, + ExternalID: section.String(externalIDKey), + MFASerial: section.String(mfaSerialKey), + RoleSessionName: section.String(roleSessionNameKey), } } // Region - if v := section.Key(regionKey).String(); len(v) > 0 { + if v := section.String(regionKey); len(v) > 0 { cfg.Region = v } + // Endpoint discovery + if section.Has(enableEndpointDiscoveryKey) { + v := section.Bool(enableEndpointDiscoveryKey) + cfg.EnableEndpointDiscovery = &v + } + return nil } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go index 6e46376125b..155645d6404 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go @@ -134,7 +134,9 @@ var requiredSignedHeaders = rules{ "X-Amz-Server-Side-Encryption-Customer-Key": struct{}{}, "X-Amz-Server-Side-Encryption-Customer-Key-Md5": struct{}{}, "X-Amz-Storage-Class": struct{}{}, + "X-Amz-Tagging": struct{}{}, "X-Amz-Website-Redirect-Location": struct{}{}, + "X-Amz-Content-Sha256": struct{}{}, }, }, patterns{"X-Amz-Meta-"}, @@ -671,8 +673,15 @@ func (ctx *signingCtx) buildSignature() { func (ctx *signingCtx) buildBodyDigest() error { hash := ctx.Request.Header.Get("X-Amz-Content-Sha256") if hash == "" { - if ctx.unsignedPayload || (ctx.isPresign && ctx.ServiceName == "s3") { + includeSHA256Header := ctx.unsignedPayload || + ctx.ServiceName == "s3" || + ctx.ServiceName == "glacier" + + s3Presign := ctx.isPresign && ctx.ServiceName == "s3" + + if ctx.unsignedPayload || s3Presign { hash = "UNSIGNED-PAYLOAD" + includeSHA256Header = !s3Presign } else if ctx.Body == nil { hash = emptyStringSHA256 } else { @@ -681,7 
+690,8 @@ func (ctx *signingCtx) buildBodyDigest() error { } hash = hex.EncodeToString(makeSha256Reader(ctx.Body)) } - if ctx.unsignedPayload || ctx.ServiceName == "s3" || ctx.ServiceName == "glacier" { + + if includeSHA256Header { ctx.Request.Header.Set("X-Amz-Content-Sha256", hash) } } @@ -730,7 +740,15 @@ func makeSha256Reader(reader io.ReadSeeker) []byte { start, _ := reader.Seek(0, sdkio.SeekCurrent) defer reader.Seek(start, sdkio.SeekStart) - io.Copy(hash, reader) + // Use CopyN to avoid allocating the 32KB buffer in io.Copy for bodies + // smaller than 32KB. Fall back to io.Copy if we fail to determine the size. + size, err := aws.SeekerLen(reader) + if err != nil { + io.Copy(hash, reader) + } else { + io.CopyN(hash, reader, size) + } + return hash.Sum(nil) } diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go index 5ff5b287b26..ab03d605c07 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/version.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.13.15" +const SDKVersion = "1.15.79" diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/ast.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/ast.go new file mode 100644 index 00000000000..e83a99886bc --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/ast.go @@ -0,0 +1,120 @@ +package ini + +// ASTKind represents different states in the parse table +// and the type of AST that is being constructed +type ASTKind int + +// ASTKind* is used in the parse table to transition between +// the different states +const ( + ASTKindNone = ASTKind(iota) + ASTKindStart + ASTKindExpr + ASTKindEqualExpr + ASTKindStatement + ASTKindSkipStatement + ASTKindExprStatement + ASTKindSectionStatement + ASTKindNestedSectionStatement + ASTKindCompletedNestedSectionStatement + ASTKindCommentStatement + ASTKindCompletedSectionStatement +) + +func (k ASTKind) String() string { + switch k { + case ASTKindNone: + return "none" + case ASTKindStart: + return "start" + case ASTKindExpr: + return "expr" + case ASTKindStatement: + return "stmt" + case ASTKindSectionStatement: + return "section_stmt" + case ASTKindExprStatement: + return "expr_stmt" + case ASTKindCommentStatement: + return "comment" + case ASTKindNestedSectionStatement: + return "nested_section_stmt" + case ASTKindCompletedSectionStatement: + return "completed_stmt" + case ASTKindSkipStatement: + return "skip" + default: + return "" + } +} + +// AST interface allows us to determine what kind of node we +// are on and casting may not need to be necessary. +// +// The root is always the first node in Children +type AST struct { + Kind ASTKind + Root Token + RootToken bool + Children []AST +} + +func newAST(kind ASTKind, root AST, children ...AST) AST { + return AST{ + Kind: kind, + Children: append([]AST{root}, children...), + } +} + +func newASTWithRootToken(kind ASTKind, root Token, children ...AST) AST { + return AST{ + Kind: kind, + Root: root, + RootToken: true, + Children: children, + } +} + +// AppendChild will append to the list of children an AST has. +func (a *AST) AppendChild(child AST) { + a.Children = append(a.Children, child) +} + +// GetRoot will return the root AST which can be the first entry +// in the children list or a token. 
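Relating to the SigV4 body-digest changes above (the `X-Amz-Content-Sha256` header is now attached for `s3` and `glacier`, but not for presigned S3 requests), a sketch of signing a plain `http.Request`; the credentials and object URL are placeholders:

```go
package main

import (
	"log"
	"net/http"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
	v4 "github.com/aws/aws-sdk-go/aws/signer/v4"
)

func main() {
	// Placeholder credentials and URL, purely for illustration.
	creds := credentials.NewStaticCredentials("AKID", "SECRET", "")
	signer := v4.NewSigner(creds)

	body := strings.NewReader("hello")
	req, err := http.NewRequest("PUT", "https://s3.us-west-2.amazonaws.com/example-bucket/example-key", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Sign computes the payload digest from the body argument; per the change
	// above, for the "s3" service the X-Amz-Content-Sha256 header is set on
	// the request (presigned S3 requests instead use UNSIGNED-PAYLOAD).
	if _, err := signer.Sign(req, body, "s3", "us-west-2", time.Now()); err != nil {
		log.Fatal(err)
	}
	log.Println(req.Header.Get("X-Amz-Content-Sha256"))
}
```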
+func (a *AST) GetRoot() AST { + if a.RootToken { + return *a + } + + if len(a.Children) == 0 { + return AST{} + } + + return a.Children[0] +} + +// GetChildren will return the current AST's list of children +func (a *AST) GetChildren() []AST { + if len(a.Children) == 0 { + return []AST{} + } + + if a.RootToken { + return a.Children + } + + return a.Children[1:] +} + +// SetChildren will set and override all children of the AST. +func (a *AST) SetChildren(children []AST) { + if a.RootToken { + a.Children = children + } else { + a.Children = append(a.Children[:1], children...) + } +} + +// Start is used to indicate the starting state of the parse table. +var Start = newAST(ASTKindStart, AST{}) diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/comma_token.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/comma_token.go new file mode 100644 index 00000000000..0895d53cbe6 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/comma_token.go @@ -0,0 +1,11 @@ +package ini + +var commaRunes = []rune(",") + +func isComma(b rune) bool { + return b == ',' +} + +func newCommaToken() Token { + return newToken(TokenComma, commaRunes, NoneType) +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/comment_token.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/comment_token.go new file mode 100644 index 00000000000..0b76999ba1f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/comment_token.go @@ -0,0 +1,35 @@ +package ini + +// isComment will return whether or not the next byte(s) is a +// comment. +func isComment(b []rune) bool { + if len(b) == 0 { + return false + } + + switch b[0] { + case ';': + return true + case '#': + return true + } + + return false +} + +// newCommentToken will create a comment token and +// return how many bytes were read. +func newCommentToken(b []rune) (Token, int, error) { + i := 0 + for ; i < len(b); i++ { + if b[i] == '\n' { + break + } + + if len(b)-i > 2 && b[i] == '\r' && b[i+1] == '\n' { + break + } + } + + return newToken(TokenComment, b[:i], NoneType), i, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/doc.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/doc.go new file mode 100644 index 00000000000..25ce0fe134d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/doc.go @@ -0,0 +1,29 @@ +// Package ini is an LL(1) parser for configuration files. 
+// +// Example: +// sections, err := ini.OpenFile("/path/to/file") +// if err != nil { +// panic(err) +// } +// +// profile := "foo" +// section, ok := sections.GetSection(profile) +// if !ok { +// fmt.Printf("section %q could not be found", profile) +// } +// +// Below is the BNF that describes this parser +// Grammar: +// stmt -> value stmt' +// stmt' -> epsilon | op stmt +// value -> number | string | boolean | quoted_string +// +// section -> [ section' +// section' -> value section_close +// section_close -> ] +// +// SkipState will skip (NL WS)+ +// +// comment -> # comment' | ; comment' +// comment' -> epsilon | value +package ini diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/empty_token.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/empty_token.go new file mode 100644 index 00000000000..04345a54c20 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/empty_token.go @@ -0,0 +1,4 @@ +package ini + +// emptyToken is used to satisfy the Token interface +var emptyToken = newToken(TokenNone, []rune{}, NoneType) diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/expression.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/expression.go new file mode 100644 index 00000000000..91ba2a59dd5 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/expression.go @@ -0,0 +1,24 @@ +package ini + +// newExpression will return an expression AST. +// Expr represents an expression +// +// grammar: +// expr -> string | number +func newExpression(tok Token) AST { + return newASTWithRootToken(ASTKindExpr, tok) +} + +func newEqualExpr(left AST, tok Token) AST { + return newASTWithRootToken(ASTKindEqualExpr, tok, left) +} + +// EqualExprKey will return a LHS value in the equal expr +func EqualExprKey(ast AST) string { + children := ast.GetChildren() + if len(children) == 0 || ast.Kind != ASTKindEqualExpr { + return "" + } + + return string(children[0].Root.Raw()) +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/fuzz.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/fuzz.go new file mode 100644 index 00000000000..8d462f77e24 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/fuzz.go @@ -0,0 +1,17 @@ +// +build gofuzz + +package ini + +import ( + "bytes" +) + +func Fuzz(data []byte) int { + b := bytes.NewReader(data) + + if _, err := Parse(b); err != nil { + return 0 + } + + return 1 +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/ini.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/ini.go new file mode 100644 index 00000000000..3b0ca7afe3b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/ini.go @@ -0,0 +1,51 @@ +package ini + +import ( + "io" + "os" + + "github.com/aws/aws-sdk-go/aws/awserr" +) + +// OpenFile takes a path to a given file, and will open and parse +// that file. +func OpenFile(path string) (Sections, error) { + f, err := os.Open(path) + if err != nil { + return Sections{}, awserr.New(ErrCodeUnableToReadFile, "unable to open file", err) + } + defer f.Close() + + return Parse(f) +} + +// Parse will parse the given file using the shared config +// visitor. +func Parse(f io.Reader) (Sections, error) { + tree, err := ParseAST(f) + if err != nil { + return Sections{}, err + } + + v := NewDefaultVisitor() + if err = Walk(tree, v); err != nil { + return Sections{}, err + } + + return v.Sections, nil +} + +// ParseBytes will parse the given bytes and return the parsed sections. 
+func ParseBytes(b []byte) (Sections, error) { + tree, err := ParseASTBytes(b) + if err != nil { + return Sections{}, err + } + + v := NewDefaultVisitor() + if err = Walk(tree, v); err != nil { + return Sections{}, err + } + + return v.Sections, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/ini_lexer.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/ini_lexer.go new file mode 100644 index 00000000000..582c024ad15 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/ini_lexer.go @@ -0,0 +1,165 @@ +package ini + +import ( + "bytes" + "io" + "io/ioutil" + + "github.com/aws/aws-sdk-go/aws/awserr" +) + +const ( + // ErrCodeUnableToReadFile is used when a file is failed to be + // opened or read from. + ErrCodeUnableToReadFile = "FailedRead" +) + +// TokenType represents the various different tokens types +type TokenType int + +func (t TokenType) String() string { + switch t { + case TokenNone: + return "none" + case TokenLit: + return "literal" + case TokenSep: + return "sep" + case TokenOp: + return "op" + case TokenWS: + return "ws" + case TokenNL: + return "newline" + case TokenComment: + return "comment" + case TokenComma: + return "comma" + default: + return "" + } +} + +// TokenType enums +const ( + TokenNone = TokenType(iota) + TokenLit + TokenSep + TokenComma + TokenOp + TokenWS + TokenNL + TokenComment +) + +type iniLexer struct{} + +// Tokenize will return a list of tokens during lexical analysis of the +// io.Reader. +func (l *iniLexer) Tokenize(r io.Reader) ([]Token, error) { + b, err := ioutil.ReadAll(r) + if err != nil { + return nil, awserr.New(ErrCodeUnableToReadFile, "unable to read file", err) + } + + return l.tokenize(b) +} + +func (l *iniLexer) tokenize(b []byte) ([]Token, error) { + runes := bytes.Runes(b) + var err error + n := 0 + tokenAmount := countTokens(runes) + tokens := make([]Token, tokenAmount) + count := 0 + + for len(runes) > 0 && count < tokenAmount { + switch { + case isWhitespace(runes[0]): + tokens[count], n, err = newWSToken(runes) + case isComma(runes[0]): + tokens[count], n = newCommaToken(), 1 + case isComment(runes): + tokens[count], n, err = newCommentToken(runes) + case isNewline(runes): + tokens[count], n, err = newNewlineToken(runes) + case isSep(runes): + tokens[count], n, err = newSepToken(runes) + case isOp(runes): + tokens[count], n, err = newOpToken(runes) + default: + tokens[count], n, err = newLitToken(runes) + } + + if err != nil { + return nil, err + } + + count++ + + runes = runes[n:] + } + + return tokens[:count], nil +} + +func countTokens(runes []rune) int { + count, n := 0, 0 + var err error + + for len(runes) > 0 { + switch { + case isWhitespace(runes[0]): + _, n, err = newWSToken(runes) + case isComma(runes[0]): + _, n = newCommaToken(), 1 + case isComment(runes): + _, n, err = newCommentToken(runes) + case isNewline(runes): + _, n, err = newNewlineToken(runes) + case isSep(runes): + _, n, err = newSepToken(runes) + case isOp(runes): + _, n, err = newOpToken(runes) + default: + _, n, err = newLitToken(runes) + } + + if err != nil { + return 0 + } + + count++ + runes = runes[n:] + } + + return count + 1 +} + +// Token indicates a metadata about a given value. 
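The new `internal/ini` package replaces `go-ini` for shared config parsing. Because it is internal to the SDK it can only be imported from inside the SDK tree, so the sketch below is written as a hypothetical test in that package; the profile contents are made up:

```go
package ini_test // hypothetical test living next to internal/ini

import (
	"testing"

	"github.com/aws/aws-sdk-go/internal/ini"
)

func TestParseSketch(t *testing.T) {
	// Parse an in-memory shared-config style document with the new parser.
	sections, err := ini.ParseBytes([]byte(
		"[profile assume]\nregion = us-west-2\nendpoint_discovery_enabled = true\n",
	))
	if err != nil {
		t.Fatal(err)
	}

	// Section lookup mirrors what shared_config.go does above: GetSection
	// returns (Section, bool) instead of go-ini's (section, error).
	section, ok := sections.GetSection("profile assume")
	if !ok {
		t.Fatal("profile not found")
	}

	t.Log(section.String("region"))                   // "us-west-2"
	t.Log(section.Has("endpoint_discovery_enabled"))  // true
	t.Log(section.Bool("endpoint_discovery_enabled")) // true
}
```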
+type Token struct { + t TokenType + ValueType ValueType + base int + raw []rune +} + +var emptyValue = Value{} + +func newToken(t TokenType, raw []rune, v ValueType) Token { + return Token{ + t: t, + raw: raw, + ValueType: v, + } +} + +// Raw return the raw runes that were consumed +func (tok Token) Raw() []rune { + return tok.raw +} + +// Type returns the token type +func (tok Token) Type() TokenType { + return tok.t +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/ini_parser.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/ini_parser.go new file mode 100644 index 00000000000..8be520ae6da --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/ini_parser.go @@ -0,0 +1,347 @@ +package ini + +import ( + "fmt" + "io" +) + +// State enums for the parse table +const ( + InvalidState = iota + // stmt -> value stmt' + StatementState + // stmt' -> MarkComplete | op stmt + StatementPrimeState + // value -> number | string | boolean | quoted_string + ValueState + // section -> [ section' + OpenScopeState + // section' -> value section_close + SectionState + // section_close -> ] + CloseScopeState + // SkipState will skip (NL WS)+ + SkipState + // SkipTokenState will skip any token and push the previous + // state onto the stack. + SkipTokenState + // comment -> # comment' | ; comment' + // comment' -> MarkComplete | value + CommentState + // MarkComplete state will complete statements and move that + // to the completed AST list + MarkCompleteState + // TerminalState signifies that the tokens have been fully parsed + TerminalState +) + +// parseTable is a state machine to dictate the grammar above. +var parseTable = map[ASTKind]map[TokenType]int{ + ASTKindStart: map[TokenType]int{ + TokenLit: StatementState, + TokenSep: OpenScopeState, + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenComment: CommentState, + TokenNone: TerminalState, + }, + ASTKindCommentStatement: map[TokenType]int{ + TokenLit: StatementState, + TokenSep: OpenScopeState, + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenComment: CommentState, + TokenNone: MarkCompleteState, + }, + ASTKindExpr: map[TokenType]int{ + TokenOp: StatementPrimeState, + TokenLit: ValueState, + TokenSep: OpenScopeState, + TokenWS: ValueState, + TokenNL: SkipState, + TokenComment: CommentState, + TokenNone: MarkCompleteState, + }, + ASTKindEqualExpr: map[TokenType]int{ + TokenLit: ValueState, + TokenWS: SkipTokenState, + TokenNL: SkipState, + }, + ASTKindStatement: map[TokenType]int{ + TokenLit: SectionState, + TokenSep: CloseScopeState, + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenComment: CommentState, + TokenNone: MarkCompleteState, + }, + ASTKindExprStatement: map[TokenType]int{ + TokenLit: ValueState, + TokenSep: OpenScopeState, + TokenOp: ValueState, + TokenWS: ValueState, + TokenNL: MarkCompleteState, + TokenComment: CommentState, + TokenNone: TerminalState, + TokenComma: SkipState, + }, + ASTKindSectionStatement: map[TokenType]int{ + TokenLit: SectionState, + TokenOp: SectionState, + TokenSep: CloseScopeState, + TokenWS: SectionState, + TokenNL: SkipTokenState, + }, + ASTKindCompletedSectionStatement: map[TokenType]int{ + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenLit: StatementState, + TokenSep: OpenScopeState, + TokenComment: CommentState, + TokenNone: MarkCompleteState, + }, + ASTKindSkipStatement: map[TokenType]int{ + TokenLit: StatementState, + TokenSep: OpenScopeState, + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenComment: 
CommentState, + TokenNone: TerminalState, + }, +} + +// ParseAST will parse input from an io.Reader using +// an LL(1) parser. +func ParseAST(r io.Reader) ([]AST, error) { + lexer := iniLexer{} + tokens, err := lexer.Tokenize(r) + if err != nil { + return []AST{}, err + } + + return parse(tokens) +} + +// ParseASTBytes will parse input from a byte slice using +// an LL(1) parser. +func ParseASTBytes(b []byte) ([]AST, error) { + lexer := iniLexer{} + tokens, err := lexer.tokenize(b) + if err != nil { + return []AST{}, err + } + + return parse(tokens) +} + +func parse(tokens []Token) ([]AST, error) { + start := Start + stack := newParseStack(3, len(tokens)) + + stack.Push(start) + s := newSkipper() + +loop: + for stack.Len() > 0 { + k := stack.Pop() + + var tok Token + if len(tokens) == 0 { + // this occurs when all the tokens have been processed + // but reduction of what's left on the stack needs to + // occur. + tok = emptyToken + } else { + tok = tokens[0] + } + + step := parseTable[k.Kind][tok.Type()] + if s.ShouldSkip(tok) { + // being in a skip state with no tokens will break out of + // the parse loop since there is nothing left to process. + if len(tokens) == 0 { + break loop + } + + step = SkipTokenState + } + + switch step { + case TerminalState: + // Finished parsing. Push what should be the last + // statement to the stack. If there is anything left + // on the stack, an error in parsing has occurred. + if k.Kind != ASTKindStart { + stack.MarkComplete(k) + } + break loop + case SkipTokenState: + // When skipping a token, the previous state was popped off the stack. + // To maintain the correct state, the previous state will be pushed + // onto the stack. + stack.Push(k) + case StatementState: + if k.Kind != ASTKindStart { + stack.MarkComplete(k) + } + expr := newExpression(tok) + stack.Push(expr) + case StatementPrimeState: + if tok.Type() != TokenOp { + stack.MarkComplete(k) + continue + } + + if k.Kind != ASTKindExpr { + return nil, NewParseError( + fmt.Sprintf("invalid expression: expected Expr type, but found %T type", k), + ) + } + + k = trimSpaces(k) + expr := newEqualExpr(k, tok) + stack.Push(expr) + case ValueState: + // ValueState requires the previous state to either be an equal expression + // or an expression statement. + // + // This grammar occurs when the RHS is a number, word, or quoted string. + // equal_expr -> lit op equal_expr' + // equal_expr' -> number | string | quoted_string + // quoted_string -> " quoted_string' + // quoted_string' -> string quoted_string_end + // quoted_string_end -> " + // + // otherwise + // expr_stmt -> equal_expr (expr_stmt')* + // expr_stmt' -> ws S | op S | MarkComplete + // S -> equal_expr' expr_stmt' + switch k.Kind { + case ASTKindEqualExpr: + // assiging a value to some key + k.AppendChild(newExpression(tok)) + stack.Push(newExprStatement(k)) + case ASTKindExpr: + k.Root.raw = append(k.Root.raw, tok.Raw()...) + stack.Push(k) + case ASTKindExprStatement: + root := k.GetRoot() + children := root.GetChildren() + if len(children) == 0 { + return nil, NewParseError( + fmt.Sprintf("invalid expression: AST contains no children %s", k.Kind), + ) + } + + rhs := children[len(children)-1] + + if rhs.Root.ValueType != QuotedStringType { + rhs.Root.ValueType = StringType + rhs.Root.raw = append(rhs.Root.raw, tok.Raw()...) 
+ + } + + children[len(children)-1] = rhs + k.SetChildren(children) + + stack.Push(k) + } + case OpenScopeState: + if !runeCompare(tok.Raw(), openBrace) { + return nil, NewParseError("expected '['") + } + + stmt := newStatement() + stack.Push(stmt) + case CloseScopeState: + if !runeCompare(tok.Raw(), closeBrace) { + return nil, NewParseError("expected ']'") + } + + k = trimSpaces(k) + stack.Push(newCompletedSectionStatement(k)) + case SectionState: + var stmt AST + + switch k.Kind { + case ASTKindStatement: + // If there are multiple literals inside of a scope declaration, + // then the current token's raw value will be appended to the Name. + // + // This handles cases like [ profile default ] + // + // k will represent a SectionStatement with the children representing + // the label of the section + stmt = newSectionStatement(tok) + case ASTKindSectionStatement: + k.Root.raw = append(k.Root.raw, tok.Raw()...) + stmt = k + default: + return nil, NewParseError( + fmt.Sprintf("invalid statement: expected statement: %v", k.Kind), + ) + } + + stack.Push(stmt) + case MarkCompleteState: + if k.Kind != ASTKindStart { + stack.MarkComplete(k) + } + + if stack.Len() == 0 { + stack.Push(start) + } + case SkipState: + stack.Push(newSkipStatement(k)) + s.Skip() + case CommentState: + if k.Kind == ASTKindStart { + stack.Push(k) + } else { + stack.MarkComplete(k) + } + + stmt := newCommentStatement(tok) + stack.Push(stmt) + default: + return nil, NewParseError(fmt.Sprintf("invalid state with ASTKind %v and TokenType %v", k, tok)) + } + + if len(tokens) > 0 { + tokens = tokens[1:] + } + } + + // this occurs when a statement has not been completed + if stack.top > 1 { + return nil, NewParseError(fmt.Sprintf("incomplete expression: %v", stack.container)) + } + + // returns a sublist which exludes the start symbol + return stack.List(), nil +} + +// trimSpaces will trim spaces on the left and right hand side of +// the literal. +func trimSpaces(k AST) AST { + // trim left hand side of spaces + for i := 0; i < len(k.Root.raw); i++ { + if !isWhitespace(k.Root.raw[i]) { + break + } + + k.Root.raw = k.Root.raw[1:] + i-- + } + + // trim right hand side of spaces + for i := len(k.Root.raw) - 1; i >= 0; i-- { + if !isWhitespace(k.Root.raw[i]) { + break + } + + k.Root.raw = k.Root.raw[:len(k.Root.raw)-1] + } + + return k +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/literal_tokens.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/literal_tokens.go new file mode 100644 index 00000000000..24df543d38c --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/literal_tokens.go @@ -0,0 +1,324 @@ +package ini + +import ( + "fmt" + "strconv" + "strings" +) + +var ( + runesTrue = []rune("true") + runesFalse = []rune("false") +) + +var literalValues = [][]rune{ + runesTrue, + runesFalse, +} + +func isBoolValue(b []rune) bool { + for _, lv := range literalValues { + if isLitValue(lv, b) { + return true + } + } + return false +} + +func isLitValue(want, have []rune) bool { + if len(have) < len(want) { + return false + } + + for i := 0; i < len(want); i++ { + if want[i] != have[i] { + return false + } + } + + return true +} + +// isNumberValue will return whether not the leading characters in +// a byte slice is a number. A number is delimited by whitespace or +// the newline token. +// +// A number is defined to be in a binary, octal, decimal (int | float), hex format, +// or in scientific notation. 
+func isNumberValue(b []rune) bool { + negativeIndex := 0 + helper := numberHelper{} + needDigit := false + + for i := 0; i < len(b); i++ { + negativeIndex++ + + switch b[i] { + case '-': + if helper.IsNegative() || negativeIndex != 1 { + return false + } + helper.Determine(b[i]) + needDigit = true + continue + case 'e', 'E': + if err := helper.Determine(b[i]); err != nil { + return false + } + negativeIndex = 0 + needDigit = true + continue + case 'b': + if helper.numberFormat == hex { + break + } + fallthrough + case 'o', 'x': + needDigit = true + if i == 0 { + return false + } + + fallthrough + case '.': + if err := helper.Determine(b[i]); err != nil { + return false + } + needDigit = true + continue + } + + if i > 0 && (isNewline(b[i:]) || isWhitespace(b[i])) { + return !needDigit + } + + if !helper.CorrectByte(b[i]) { + return false + } + needDigit = false + } + + return !needDigit +} + +func isValid(b []rune) (bool, int, error) { + if len(b) == 0 { + // TODO: should probably return an error + return false, 0, nil + } + + return isValidRune(b[0]), 1, nil +} + +func isValidRune(r rune) bool { + return r != ':' && r != '=' && r != '[' && r != ']' && r != ' ' && r != '\n' +} + +// ValueType is an enum that will signify what type +// the Value is +type ValueType int + +func (v ValueType) String() string { + switch v { + case NoneType: + return "NONE" + case DecimalType: + return "FLOAT" + case IntegerType: + return "INT" + case StringType: + return "STRING" + case BoolType: + return "BOOL" + } + + return "" +} + +// ValueType enums +const ( + NoneType = ValueType(iota) + DecimalType + IntegerType + StringType + QuotedStringType + BoolType +) + +// Value is a union container +type Value struct { + Type ValueType + raw []rune + + integer int64 + decimal float64 + boolean bool + str string +} + +func newValue(t ValueType, base int, raw []rune) (Value, error) { + v := Value{ + Type: t, + raw: raw, + } + var err error + + switch t { + case DecimalType: + v.decimal, err = strconv.ParseFloat(string(raw), 64) + case IntegerType: + if base != 10 { + raw = raw[2:] + } + + v.integer, err = strconv.ParseInt(string(raw), base, 64) + case StringType: + v.str = string(raw) + case QuotedStringType: + v.str = string(raw[1 : len(raw)-1]) + case BoolType: + v.boolean = runeCompare(v.raw, runesTrue) + } + + // issue 2253 + // + // if the value trying to be parsed is too large, then we will use + // the 'StringType' and raw value instead. + if nerr, ok := err.(*strconv.NumError); ok && nerr.Err == strconv.ErrRange { + v.Type = StringType + v.str = string(raw) + err = nil + } + + return v, err +} + +// Append will append values and change the type to a string +// type. +func (v *Value) Append(tok Token) { + r := tok.Raw() + if v.Type != QuotedStringType { + v.Type = StringType + r = tok.raw[1 : len(tok.raw)-1] + } + if tok.Type() != TokenLit { + v.raw = append(v.raw, tok.Raw()...) + } else { + v.raw = append(v.raw, r...) 
+ } +} + +func (v Value) String() string { + switch v.Type { + case DecimalType: + return fmt.Sprintf("decimal: %f", v.decimal) + case IntegerType: + return fmt.Sprintf("integer: %d", v.integer) + case StringType: + return fmt.Sprintf("string: %s", string(v.raw)) + case QuotedStringType: + return fmt.Sprintf("quoted string: %s", string(v.raw)) + case BoolType: + return fmt.Sprintf("bool: %t", v.boolean) + default: + return "union not set" + } +} + +func newLitToken(b []rune) (Token, int, error) { + n := 0 + var err error + + token := Token{} + if b[0] == '"' { + n, err = getStringValue(b) + if err != nil { + return token, n, err + } + + token = newToken(TokenLit, b[:n], QuotedStringType) + } else if isNumberValue(b) { + var base int + base, n, err = getNumericalValue(b) + if err != nil { + return token, 0, err + } + + value := b[:n] + vType := IntegerType + if contains(value, '.') || hasExponent(value) { + vType = DecimalType + } + token = newToken(TokenLit, value, vType) + token.base = base + } else if isBoolValue(b) { + n, err = getBoolValue(b) + + token = newToken(TokenLit, b[:n], BoolType) + } else { + n, err = getValue(b) + token = newToken(TokenLit, b[:n], StringType) + } + + return token, n, err +} + +// IntValue returns an integer value +func (v Value) IntValue() int64 { + return v.integer +} + +// FloatValue returns a float value +func (v Value) FloatValue() float64 { + return v.decimal +} + +// BoolValue returns a bool value +func (v Value) BoolValue() bool { + return v.boolean +} + +func isTrimmable(r rune) bool { + switch r { + case '\n', ' ': + return true + } + return false +} + +// StringValue returns the string value +func (v Value) StringValue() string { + switch v.Type { + case StringType: + return strings.TrimFunc(string(v.raw), isTrimmable) + case QuotedStringType: + // preserve all characters in the quotes + return string(removeEscapedCharacters(v.raw[1 : len(v.raw)-1])) + default: + return strings.TrimFunc(string(v.raw), isTrimmable) + } +} + +func contains(runes []rune, c rune) bool { + for i := 0; i < len(runes); i++ { + if runes[i] == c { + return true + } + } + + return false +} + +func runeCompare(v1 []rune, v2 []rune) bool { + if len(v1) != len(v2) { + return false + } + + for i := 0; i < len(v1); i++ { + if v1[i] != v2[i] { + return false + } + } + + return true +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/newline_token.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/newline_token.go new file mode 100644 index 00000000000..e52ac399f17 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/newline_token.go @@ -0,0 +1,30 @@ +package ini + +func isNewline(b []rune) bool { + if len(b) == 0 { + return false + } + + if b[0] == '\n' { + return true + } + + if len(b) < 2 { + return false + } + + return b[0] == '\r' && b[1] == '\n' +} + +func newNewlineToken(b []rune) (Token, int, error) { + i := 1 + if b[0] == '\r' && isNewline(b[1:]) { + i++ + } + + if !isNewline([]rune(b[:i])) { + return emptyToken, 0, NewParseError("invalid new line token") + } + + return newToken(TokenNL, b[:i], NoneType), i, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/number_helper.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/number_helper.go new file mode 100644 index 00000000000..a45c0bc5662 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/number_helper.go @@ -0,0 +1,152 @@ +package ini + +import ( + "bytes" + "fmt" + "strconv" +) + +const ( + none = numberFormat(iota) + 
binary + octal + decimal + hex + exponent +) + +type numberFormat int + +// numberHelper is used to dictate what format a number is in +// and what to do for negative values. Since -1e-4 is a valid +// number, we cannot just simply check for duplicate negatives. +type numberHelper struct { + numberFormat numberFormat + + negative bool + negativeExponent bool +} + +func (b numberHelper) Exists() bool { + return b.numberFormat != none +} + +func (b numberHelper) IsNegative() bool { + return b.negative || b.negativeExponent +} + +func (b *numberHelper) Determine(c rune) error { + if b.Exists() { + return NewParseError(fmt.Sprintf("multiple number formats: 0%v", string(c))) + } + + switch c { + case 'b': + b.numberFormat = binary + case 'o': + b.numberFormat = octal + case 'x': + b.numberFormat = hex + case 'e', 'E': + b.numberFormat = exponent + case '-': + if b.numberFormat != exponent { + b.negative = true + } else { + b.negativeExponent = true + } + case '.': + b.numberFormat = decimal + default: + return NewParseError(fmt.Sprintf("invalid number character: %v", string(c))) + } + + return nil +} + +func (b numberHelper) CorrectByte(c rune) bool { + switch { + case b.numberFormat == binary: + if !isBinaryByte(c) { + return false + } + case b.numberFormat == octal: + if !isOctalByte(c) { + return false + } + case b.numberFormat == hex: + if !isHexByte(c) { + return false + } + case b.numberFormat == decimal: + if !isDigit(c) { + return false + } + case b.numberFormat == exponent: + if !isDigit(c) { + return false + } + case b.negativeExponent: + if !isDigit(c) { + return false + } + case b.negative: + if !isDigit(c) { + return false + } + default: + if !isDigit(c) { + return false + } + } + + return true +} + +func (b numberHelper) Base() int { + switch b.numberFormat { + case binary: + return 2 + case octal: + return 8 + case hex: + return 16 + default: + return 10 + } +} + +func (b numberHelper) String() string { + buf := bytes.Buffer{} + i := 0 + + switch b.numberFormat { + case binary: + i++ + buf.WriteString(strconv.Itoa(i) + ": binary format\n") + case octal: + i++ + buf.WriteString(strconv.Itoa(i) + ": octal format\n") + case hex: + i++ + buf.WriteString(strconv.Itoa(i) + ": hex format\n") + case exponent: + i++ + buf.WriteString(strconv.Itoa(i) + ": exponent format\n") + default: + i++ + buf.WriteString(strconv.Itoa(i) + ": integer format\n") + } + + if b.negative { + i++ + buf.WriteString(strconv.Itoa(i) + ": negative format\n") + } + + if b.negativeExponent { + i++ + buf.WriteString(strconv.Itoa(i) + ": negative exponent format\n") + } + + return buf.String() +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/op_tokens.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/op_tokens.go new file mode 100644 index 00000000000..8a84c7cbe08 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/op_tokens.go @@ -0,0 +1,39 @@ +package ini + +import ( + "fmt" +) + +var ( + equalOp = []rune("=") + equalColonOp = []rune(":") +) + +func isOp(b []rune) bool { + if len(b) == 0 { + return false + } + + switch b[0] { + case '=': + return true + case ':': + return true + default: + return false + } +} + +func newOpToken(b []rune) (Token, int, error) { + tok := Token{} + + switch b[0] { + case '=': + tok = newToken(TokenOp, equalOp, NoneType) + case ':': + tok = newToken(TokenOp, equalColonOp, NoneType) + default: + return tok, 0, NewParseError(fmt.Sprintf("unexpected op type, %v", b[0])) + } + return tok, 1, nil +} diff --git 
a/vendor/github.com/aws/aws-sdk-go/internal/ini/parse_error.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/parse_error.go new file mode 100644 index 00000000000..45728701931 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/parse_error.go @@ -0,0 +1,43 @@ +package ini + +import "fmt" + +const ( + // ErrCodeParseError is returned when a parsing error + // has occurred. + ErrCodeParseError = "INIParseError" +) + +// ParseError is an error which is returned during any part of +// the parsing process. +type ParseError struct { + msg string +} + +// NewParseError will return a new ParseError where message +// is the description of the error. +func NewParseError(message string) *ParseError { + return &ParseError{ + msg: message, + } +} + +// Code will return the ErrCodeParseError +func (err *ParseError) Code() string { + return ErrCodeParseError +} + +// Message returns the error's message +func (err *ParseError) Message() string { + return err.msg +} + +// OrigError return nothing since there will never be any +// original error. +func (err *ParseError) OrigError() error { + return nil +} + +func (err *ParseError) Error() string { + return fmt.Sprintf("%s: %s", err.Code(), err.Message()) +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/parse_stack.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/parse_stack.go new file mode 100644 index 00000000000..7f01cf7c703 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/parse_stack.go @@ -0,0 +1,60 @@ +package ini + +import ( + "bytes" + "fmt" +) + +// ParseStack is a stack that contains a container, the stack portion, +// and the list which is the list of ASTs that have been successfully +// parsed. +type ParseStack struct { + top int + container []AST + list []AST + index int +} + +func newParseStack(sizeContainer, sizeList int) ParseStack { + return ParseStack{ + container: make([]AST, sizeContainer), + list: make([]AST, sizeList), + } +} + +// Pop will return and truncate the last container element. 
+func (s *ParseStack) Pop() AST { + s.top-- + return s.container[s.top] +} + +// Push will add the new AST to the container +func (s *ParseStack) Push(ast AST) { + s.container[s.top] = ast + s.top++ +} + +// MarkComplete will append the AST to the list of completed statements +func (s *ParseStack) MarkComplete(ast AST) { + s.list[s.index] = ast + s.index++ +} + +// List will return the completed statements +func (s ParseStack) List() []AST { + return s.list[:s.index] +} + +// Len will return the length of the container +func (s *ParseStack) Len() int { + return s.top +} + +func (s ParseStack) String() string { + buf := bytes.Buffer{} + for i, node := range s.list { + buf.WriteString(fmt.Sprintf("%d: %v\n", i+1, node)) + } + + return buf.String() +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/sep_tokens.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/sep_tokens.go new file mode 100644 index 00000000000..f82095ba259 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/sep_tokens.go @@ -0,0 +1,41 @@ +package ini + +import ( + "fmt" +) + +var ( + emptyRunes = []rune{} +) + +func isSep(b []rune) bool { + if len(b) == 0 { + return false + } + + switch b[0] { + case '[', ']': + return true + default: + return false + } +} + +var ( + openBrace = []rune("[") + closeBrace = []rune("]") +) + +func newSepToken(b []rune) (Token, int, error) { + tok := Token{} + + switch b[0] { + case '[': + tok = newToken(TokenSep, openBrace, NoneType) + case ']': + tok = newToken(TokenSep, closeBrace, NoneType) + default: + return tok, 0, NewParseError(fmt.Sprintf("unexpected sep type, %v", b[0])) + } + return tok, 1, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/skipper.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/skipper.go new file mode 100644 index 00000000000..6bb6964475e --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/skipper.go @@ -0,0 +1,45 @@ +package ini + +// skipper is used to skip certain blocks of an ini file. +// Currently skipper is used to skip nested blocks of ini +// files. See example below +// +// [ foo ] +// nested = ; this section will be skipped +// a=b +// c=d +// bar=baz ; this will be included +type skipper struct { + shouldSkip bool + TokenSet bool + prevTok Token +} + +func newSkipper() skipper { + return skipper{ + prevTok: emptyToken, + } +} + +func (s *skipper) ShouldSkip(tok Token) bool { + if s.shouldSkip && + s.prevTok.Type() == TokenNL && + tok.Type() != TokenWS { + + s.Continue() + return false + } + s.prevTok = tok + + return s.shouldSkip +} + +func (s *skipper) Skip() { + s.shouldSkip = true + s.prevTok = emptyToken +} + +func (s *skipper) Continue() { + s.shouldSkip = false + s.prevTok = emptyToken +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/statement.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/statement.go new file mode 100644 index 00000000000..ba0af01b53b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/statement.go @@ -0,0 +1,35 @@ +package ini + +// Statement is an empty AST mostly used for transitioning states. 
+func newStatement() AST { + return newAST(ASTKindStatement, AST{}) +} + +// SectionStatement represents a section AST +func newSectionStatement(tok Token) AST { + return newASTWithRootToken(ASTKindSectionStatement, tok) +} + +// ExprStatement represents a completed expression AST +func newExprStatement(ast AST) AST { + return newAST(ASTKindExprStatement, ast) +} + +// CommentStatement represents a comment in the ini defintion. +// +// grammar: +// comment -> #comment' | ;comment' +// comment' -> epsilon | value +func newCommentStatement(tok Token) AST { + return newAST(ASTKindCommentStatement, newExpression(tok)) +} + +// CompletedSectionStatement represents a completed section +func newCompletedSectionStatement(ast AST) AST { + return newAST(ASTKindCompletedSectionStatement, ast) +} + +// SkipStatement is used to skip whole statements +func newSkipStatement(ast AST) AST { + return newAST(ASTKindSkipStatement, ast) +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/value_util.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/value_util.go new file mode 100644 index 00000000000..305999d29be --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/value_util.go @@ -0,0 +1,284 @@ +package ini + +import ( + "fmt" +) + +// getStringValue will return a quoted string and the amount +// of bytes read +// +// an error will be returned if the string is not properly formatted +func getStringValue(b []rune) (int, error) { + if b[0] != '"' { + return 0, NewParseError("strings must start with '\"'") + } + + endQuote := false + i := 1 + + for ; i < len(b) && !endQuote; i++ { + if escaped := isEscaped(b[:i], b[i]); b[i] == '"' && !escaped { + endQuote = true + break + } else if escaped { + /*c, err := getEscapedByte(b[i]) + if err != nil { + return 0, err + } + + b[i-1] = c + b = append(b[:i], b[i+1:]...) 
+ i--*/ + + continue + } + } + + if !endQuote { + return 0, NewParseError("missing '\"' in string value") + } + + return i + 1, nil +} + +// getBoolValue will return a boolean and the amount +// of bytes read +// +// an error will be returned if the boolean is not of a correct +// value +func getBoolValue(b []rune) (int, error) { + if len(b) < 4 { + return 0, NewParseError("invalid boolean value") + } + + n := 0 + for _, lv := range literalValues { + if len(lv) > len(b) { + continue + } + + if isLitValue(lv, b) { + n = len(lv) + } + } + + if n == 0 { + return 0, NewParseError("invalid boolean value") + } + + return n, nil +} + +// getNumericalValue will return a numerical string, the amount +// of bytes read, and the base of the number +// +// an error will be returned if the number is not of a correct +// value +func getNumericalValue(b []rune) (int, int, error) { + if !isDigit(b[0]) { + return 0, 0, NewParseError("invalid digit value") + } + + i := 0 + helper := numberHelper{} + +loop: + for negativeIndex := 0; i < len(b); i++ { + negativeIndex++ + + if !isDigit(b[i]) { + switch b[i] { + case '-': + if helper.IsNegative() || negativeIndex != 1 { + return 0, 0, NewParseError("parse error '-'") + } + + n := getNegativeNumber(b[i:]) + i += (n - 1) + helper.Determine(b[i]) + continue + case '.': + if err := helper.Determine(b[i]); err != nil { + return 0, 0, err + } + case 'e', 'E': + if err := helper.Determine(b[i]); err != nil { + return 0, 0, err + } + + negativeIndex = 0 + case 'b': + if helper.numberFormat == hex { + break + } + fallthrough + case 'o', 'x': + if i == 0 && b[i] != '0' { + return 0, 0, NewParseError("incorrect base format, expected leading '0'") + } + + if i != 1 { + return 0, 0, NewParseError(fmt.Sprintf("incorrect base format found %s at %d index", string(b[i]), i)) + } + + if err := helper.Determine(b[i]); err != nil { + return 0, 0, err + } + default: + if isWhitespace(b[i]) { + break loop + } + + if isNewline(b[i:]) { + break loop + } + + if !(helper.numberFormat == hex && isHexByte(b[i])) { + if i+2 < len(b) && !isNewline(b[i:i+2]) { + return 0, 0, NewParseError("invalid numerical character") + } else if !isNewline([]rune{b[i]}) { + return 0, 0, NewParseError("invalid numerical character") + } + + break loop + } + } + } + } + + return helper.Base(), i, nil +} + +// isDigit will return whether or not something is an integer +func isDigit(b rune) bool { + return b >= '0' && b <= '9' +} + +func hasExponent(v []rune) bool { + return contains(v, 'e') || contains(v, 'E') +} + +func isBinaryByte(b rune) bool { + switch b { + case '0', '1': + return true + default: + return false + } +} + +func isOctalByte(b rune) bool { + switch b { + case '0', '1', '2', '3', '4', '5', '6', '7': + return true + default: + return false + } +} + +func isHexByte(b rune) bool { + if isDigit(b) { + return true + } + return (b >= 'A' && b <= 'F') || + (b >= 'a' && b <= 'f') +} + +func getValue(b []rune) (int, error) { + i := 0 + + for i < len(b) { + if isNewline(b[i:]) { + break + } + + if isOp(b[i:]) { + break + } + + valid, n, err := isValid(b[i:]) + if err != nil { + return 0, err + } + + if !valid { + break + } + + i += n + } + + return i, nil +} + +// getNegativeNumber will return a negative number from a +// byte slice. This will iterate through all characters until +// a non-digit has been found. 
+func getNegativeNumber(b []rune) int { + if b[0] != '-' { + return 0 + } + + i := 1 + for ; i < len(b); i++ { + if !isDigit(b[i]) { + return i + } + } + + return i +} + +// isEscaped will return whether or not the character is an escaped +// character. +func isEscaped(value []rune, b rune) bool { + if len(value) == 0 { + return false + } + + switch b { + case '\'': // single quote + case '"': // quote + case 'n': // newline + case 't': // tab + case '\\': // backslash + default: + return false + } + + return value[len(value)-1] == '\\' +} + +func getEscapedByte(b rune) (rune, error) { + switch b { + case '\'': // single quote + return '\'', nil + case '"': // quote + return '"', nil + case 'n': // newline + return '\n', nil + case 't': // table + return '\t', nil + case '\\': // backslash + return '\\', nil + default: + return b, NewParseError(fmt.Sprintf("invalid escaped character %c", b)) + } +} + +func removeEscapedCharacters(b []rune) []rune { + for i := 0; i < len(b); i++ { + if isEscaped(b[:i], b[i]) { + c, err := getEscapedByte(b[i]) + if err != nil { + return b + } + + b[i-1] = c + b = append(b[:i], b[i+1:]...) + i-- + } + } + + return b +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/visitor.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/visitor.go new file mode 100644 index 00000000000..94841c32443 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/visitor.go @@ -0,0 +1,166 @@ +package ini + +import ( + "fmt" + "sort" +) + +// Visitor is an interface used by walkers that will +// traverse an array of ASTs. +type Visitor interface { + VisitExpr(AST) error + VisitStatement(AST) error +} + +// DefaultVisitor is used to visit statements and expressions +// and ensure that they are both of the correct format. +// In addition, upon visiting this will build sections and populate +// the Sections field which can be used to retrieve profile +// configuration. +type DefaultVisitor struct { + scope string + Sections Sections +} + +// NewDefaultVisitor return a DefaultVisitor +func NewDefaultVisitor() *DefaultVisitor { + return &DefaultVisitor{ + Sections: Sections{ + container: map[string]Section{}, + }, + } +} + +// VisitExpr visits expressions... +func (v *DefaultVisitor) VisitExpr(expr AST) error { + t := v.Sections.container[v.scope] + if t.values == nil { + t.values = values{} + } + + switch expr.Kind { + case ASTKindExprStatement: + opExpr := expr.GetRoot() + switch opExpr.Kind { + case ASTKindEqualExpr: + children := opExpr.GetChildren() + if len(children) <= 1 { + return NewParseError("unexpected token type") + } + + rhs := children[1] + + if rhs.Root.Type() != TokenLit { + return NewParseError("unexpected token type") + } + + key := EqualExprKey(opExpr) + v, err := newValue(rhs.Root.ValueType, rhs.Root.base, rhs.Root.Raw()) + if err != nil { + return err + } + + t.values[key] = v + default: + return NewParseError(fmt.Sprintf("unsupported expression %v", expr)) + } + default: + return NewParseError(fmt.Sprintf("unsupported expression %v", expr)) + } + + v.Sections.container[v.scope] = t + return nil +} + +// VisitStatement visits statements... 
+func (v *DefaultVisitor) VisitStatement(stmt AST) error { + switch stmt.Kind { + case ASTKindCompletedSectionStatement: + child := stmt.GetRoot() + if child.Kind != ASTKindSectionStatement { + return NewParseError(fmt.Sprintf("unsupported child statement: %T", child)) + } + + name := string(child.Root.Raw()) + v.Sections.container[name] = Section{} + v.scope = name + default: + return NewParseError(fmt.Sprintf("unsupported statement: %s", stmt.Kind)) + } + + return nil +} + +// Sections is a map of Section structures that represent +// a configuration. +type Sections struct { + container map[string]Section +} + +// GetSection will return section p. If section p does not exist, +// false will be returned in the second parameter. +func (t Sections) GetSection(p string) (Section, bool) { + v, ok := t.container[p] + return v, ok +} + +// values represents a map of union values. +type values map[string]Value + +// List will return a list of all sections that were successfully +// parsed. +func (t Sections) List() []string { + keys := make([]string, len(t.container)) + i := 0 + for k := range t.container { + keys[i] = k + i++ + } + + sort.Strings(keys) + return keys +} + +// Section contains a name and values. This represent +// a sectioned entry in a configuration file. +type Section struct { + Name string + values values +} + +// Has will return whether or not an entry exists in a given section +func (t Section) Has(k string) bool { + _, ok := t.values[k] + return ok +} + +// ValueType will returned what type the union is set to. If +// k was not found, the NoneType will be returned. +func (t Section) ValueType(k string) (ValueType, bool) { + v, ok := t.values[k] + return v.Type, ok +} + +// Bool returns a bool value at k +func (t Section) Bool(k string) bool { + return t.values[k].BoolValue() +} + +// Int returns an integer value at k +func (t Section) Int(k string) int64 { + return t.values[k].IntValue() +} + +// Float64 returns a float value at k +func (t Section) Float64(k string) float64 { + return t.values[k].FloatValue() +} + +// String returns the string value at k +func (t Section) String(k string) string { + _, ok := t.values[k] + if !ok { + return "" + } + return t.values[k].StringValue() +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/walker.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/walker.go new file mode 100644 index 00000000000..99915f7f777 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/walker.go @@ -0,0 +1,25 @@ +package ini + +// Walk will traverse the AST using the v, the Visitor. +func Walk(tree []AST, v Visitor) error { + for _, node := range tree { + switch node.Kind { + case ASTKindExpr, + ASTKindExprStatement: + + if err := v.VisitExpr(node); err != nil { + return err + } + case ASTKindStatement, + ASTKindCompletedSectionStatement, + ASTKindNestedSectionStatement, + ASTKindCompletedNestedSectionStatement: + + if err := v.VisitStatement(node); err != nil { + return err + } + } + } + + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/ini/ws_token.go b/vendor/github.com/aws/aws-sdk-go/internal/ini/ws_token.go new file mode 100644 index 00000000000..7ffb4ae06ff --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/ini/ws_token.go @@ -0,0 +1,24 @@ +package ini + +import ( + "unicode" +) + +// isWhitespace will return whether or not the character is +// a whitespace character. +// +// Whitespace is defined as a space or tab. 
+func isWhitespace(c rune) bool { + return unicode.IsSpace(c) && c != '\n' && c != '\r' +} + +func newWSToken(b []rune) (Token, int, error) { + i := 0 + for ; i < len(b); i++ { + if !isWhitespace(b[i]) { + break + } + } + + return newToken(TokenWS, b[:i], NoneType), i, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/s3err/error.go b/vendor/github.com/aws/aws-sdk-go/internal/s3err/error.go new file mode 100644 index 00000000000..0b9b0dfce04 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/s3err/error.go @@ -0,0 +1,57 @@ +package s3err + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" +) + +// RequestFailure provides additional S3 specific metadata for the request +// failure. +type RequestFailure struct { + awserr.RequestFailure + + hostID string +} + +// NewRequestFailure returns a request failure error decordated with S3 +// specific metadata. +func NewRequestFailure(err awserr.RequestFailure, hostID string) *RequestFailure { + return &RequestFailure{RequestFailure: err, hostID: hostID} +} + +func (r RequestFailure) Error() string { + extra := fmt.Sprintf("status code: %d, request id: %s, host id: %s", + r.StatusCode(), r.RequestID(), r.hostID) + return awserr.SprintError(r.Code(), r.Message(), extra, r.OrigErr()) +} +func (r RequestFailure) String() string { + return r.Error() +} + +// HostID returns the HostID request response value. +func (r RequestFailure) HostID() string { + return r.hostID +} + +// RequestFailureWrapperHandler returns a handler to rap an +// awserr.RequestFailure with the S3 request ID 2 from the response. +func RequestFailureWrapperHandler() request.NamedHandler { + return request.NamedHandler{ + Name: "awssdk.s3.errorHandler", + Fn: func(req *request.Request) { + reqErr, ok := req.Error.(awserr.RequestFailure) + if !ok || reqErr == nil { + return + } + + hostID := req.HTTPResponse.Header.Get("X-Amz-Id-2") + if req.Error == nil { + return + } + + req.Error = NewRequestFailure(reqErr, hostID) + }, + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/sdkuri/path.go b/vendor/github.com/aws/aws-sdk-go/internal/sdkuri/path.go new file mode 100644 index 00000000000..38ea61afeaa --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/sdkuri/path.go @@ -0,0 +1,23 @@ +package sdkuri + +import ( + "path" + "strings" +) + +// PathJoin will join the elements of the path delimited by the "/" +// character. Similar to path.Join with the exception the trailing "/" +// character is preserved if present. +func PathJoin(elems ...string) string { + if len(elems) == 0 { + return "" + } + + hasTrailing := strings.HasSuffix(elems[len(elems)-1], "/") + str := path.Join(elems...) + if hasTrailing && str != "/" { + str += "/" + } + + return str +} diff --git a/vendor/github.com/aws/aws-sdk-go/internal/shareddefaults/ecs_container.go b/vendor/github.com/aws/aws-sdk-go/internal/shareddefaults/ecs_container.go new file mode 100644 index 00000000000..b63e4c2639b --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/internal/shareddefaults/ecs_container.go @@ -0,0 +1,12 @@ +package shareddefaults + +const ( + // ECSCredsProviderEnvVar is an environmental variable key used to + // determine which path needs to be hit. + ECSCredsProviderEnvVar = "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" +) + +// ECSContainerCredentialsURI is the endpoint to retrieve container +// credentials. 
This can be overriden to test to ensure the credential process +// is behaving correctly. +var ECSContainerCredentialsURI = "http://169.254.170.2" diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/ec2query/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/ec2query/unmarshal.go index 095e97ccf91..5793c047373 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/ec2query/unmarshal.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/ec2query/unmarshal.go @@ -27,7 +27,11 @@ func Unmarshal(r *request.Request) { decoder := xml.NewDecoder(r.HTTPResponse.Body) err := xmlutil.UnmarshalXML(r.Data, decoder, "") if err != nil { - r.Error = awserr.New("SerializationError", "failed decoding EC2 Query response", err) + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed decoding EC2 Query response", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) return } } @@ -52,7 +56,11 @@ func UnmarshalError(r *request.Request) { resp := &xmlErrorResponse{} err := xml.NewDecoder(r.HTTPResponse.Body).Decode(resp) if err != nil && err != io.EOF { - r.Error = awserr.New("SerializationError", "failed decoding EC2 Query error response", err) + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed decoding EC2 Query error response", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) } else { r.Error = awserr.NewRequestFailure( awserr.New(resp.Code, resp.Message, nil), diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/debug.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/debug.go new file mode 100644 index 00000000000..ecc7bf82fa2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/debug.go @@ -0,0 +1,144 @@ +package eventstream + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "fmt" + "strconv" +) + +type decodedMessage struct { + rawMessage + Headers decodedHeaders `json:"headers"` +} +type jsonMessage struct { + Length json.Number `json:"total_length"` + HeadersLen json.Number `json:"headers_length"` + PreludeCRC json.Number `json:"prelude_crc"` + Headers decodedHeaders `json:"headers"` + Payload []byte `json:"payload"` + CRC json.Number `json:"message_crc"` +} + +func (d *decodedMessage) UnmarshalJSON(b []byte) (err error) { + var jsonMsg jsonMessage + if err = json.Unmarshal(b, &jsonMsg); err != nil { + return err + } + + d.Length, err = numAsUint32(jsonMsg.Length) + if err != nil { + return err + } + d.HeadersLen, err = numAsUint32(jsonMsg.HeadersLen) + if err != nil { + return err + } + d.PreludeCRC, err = numAsUint32(jsonMsg.PreludeCRC) + if err != nil { + return err + } + d.Headers = jsonMsg.Headers + d.Payload = jsonMsg.Payload + d.CRC, err = numAsUint32(jsonMsg.CRC) + if err != nil { + return err + } + + return nil +} + +func (d *decodedMessage) MarshalJSON() ([]byte, error) { + jsonMsg := jsonMessage{ + Length: json.Number(strconv.Itoa(int(d.Length))), + HeadersLen: json.Number(strconv.Itoa(int(d.HeadersLen))), + PreludeCRC: json.Number(strconv.Itoa(int(d.PreludeCRC))), + Headers: d.Headers, + Payload: d.Payload, + CRC: json.Number(strconv.Itoa(int(d.CRC))), + } + + return json.Marshal(jsonMsg) +} + +func numAsUint32(n json.Number) (uint32, error) { + v, err := n.Int64() + if err != nil { + return 0, fmt.Errorf("failed to get int64 json number, %v", err) + } + + return uint32(v), nil +} + +func (d decodedMessage) Message() Message { + return Message{ + Headers: 
Headers(d.Headers), + Payload: d.Payload, + } +} + +type decodedHeaders Headers + +func (hs *decodedHeaders) UnmarshalJSON(b []byte) error { + var jsonHeaders []struct { + Name string `json:"name"` + Type valueType `json:"type"` + Value interface{} `json:"value"` + } + + decoder := json.NewDecoder(bytes.NewReader(b)) + decoder.UseNumber() + if err := decoder.Decode(&jsonHeaders); err != nil { + return err + } + + var headers Headers + for _, h := range jsonHeaders { + value, err := valueFromType(h.Type, h.Value) + if err != nil { + return err + } + headers.Set(h.Name, value) + } + (*hs) = decodedHeaders(headers) + + return nil +} + +func valueFromType(typ valueType, val interface{}) (Value, error) { + switch typ { + case trueValueType: + return BoolValue(true), nil + case falseValueType: + return BoolValue(false), nil + case int8ValueType: + v, err := val.(json.Number).Int64() + return Int8Value(int8(v)), err + case int16ValueType: + v, err := val.(json.Number).Int64() + return Int16Value(int16(v)), err + case int32ValueType: + v, err := val.(json.Number).Int64() + return Int32Value(int32(v)), err + case int64ValueType: + v, err := val.(json.Number).Int64() + return Int64Value(v), err + case bytesValueType: + v, err := base64.StdEncoding.DecodeString(val.(string)) + return BytesValue(v), err + case stringValueType: + v, err := base64.StdEncoding.DecodeString(val.(string)) + return StringValue(string(v)), err + case timestampValueType: + v, err := val.(json.Number).Int64() + return TimestampValue(timeFromEpochMilli(v)), err + case uuidValueType: + v, err := base64.StdEncoding.DecodeString(val.(string)) + var tv UUIDValue + copy(tv[:], v) + return tv, err + default: + panic(fmt.Sprintf("unknown type, %s, %T", typ.String(), val)) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/decode.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/decode.go new file mode 100644 index 00000000000..4b972b2d666 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/decode.go @@ -0,0 +1,199 @@ +package eventstream + +import ( + "bytes" + "encoding/binary" + "encoding/hex" + "encoding/json" + "fmt" + "hash" + "hash/crc32" + "io" + + "github.com/aws/aws-sdk-go/aws" +) + +// Decoder provides decoding of an Event Stream messages. +type Decoder struct { + r io.Reader + logger aws.Logger +} + +// NewDecoder initializes and returns a Decoder for decoding event +// stream messages from the reader provided. +func NewDecoder(r io.Reader) *Decoder { + return &Decoder{ + r: r, + } +} + +// Decode attempts to decode a single message from the event stream reader. +// Will return the event stream message, or error if Decode fails to read +// the message from the stream. 
+func (d *Decoder) Decode(payloadBuf []byte) (m Message, err error) { + reader := d.r + if d.logger != nil { + debugMsgBuf := bytes.NewBuffer(nil) + reader = io.TeeReader(reader, debugMsgBuf) + defer func() { + logMessageDecode(d.logger, debugMsgBuf, m, err) + }() + } + + crc := crc32.New(crc32IEEETable) + hashReader := io.TeeReader(reader, crc) + + prelude, err := decodePrelude(hashReader, crc) + if err != nil { + return Message{}, err + } + + if prelude.HeadersLen > 0 { + lr := io.LimitReader(hashReader, int64(prelude.HeadersLen)) + m.Headers, err = decodeHeaders(lr) + if err != nil { + return Message{}, err + } + } + + if payloadLen := prelude.PayloadLen(); payloadLen > 0 { + buf, err := decodePayload(payloadBuf, io.LimitReader(hashReader, int64(payloadLen))) + if err != nil { + return Message{}, err + } + m.Payload = buf + } + + msgCRC := crc.Sum32() + if err := validateCRC(reader, msgCRC); err != nil { + return Message{}, err + } + + return m, nil +} + +// UseLogger specifies the Logger that that the decoder should use to log the +// message decode to. +func (d *Decoder) UseLogger(logger aws.Logger) { + d.logger = logger +} + +func logMessageDecode(logger aws.Logger, msgBuf *bytes.Buffer, msg Message, decodeErr error) { + w := bytes.NewBuffer(nil) + defer func() { logger.Log(w.String()) }() + + fmt.Fprintf(w, "Raw message:\n%s\n", + hex.Dump(msgBuf.Bytes())) + + if decodeErr != nil { + fmt.Fprintf(w, "Decode error: %v\n", decodeErr) + return + } + + rawMsg, err := msg.rawMessage() + if err != nil { + fmt.Fprintf(w, "failed to create raw message, %v\n", err) + return + } + + decodedMsg := decodedMessage{ + rawMessage: rawMsg, + Headers: decodedHeaders(msg.Headers), + } + + fmt.Fprintf(w, "Decoded message:\n") + encoder := json.NewEncoder(w) + if err := encoder.Encode(decodedMsg); err != nil { + fmt.Fprintf(w, "failed to generate decoded message, %v\n", err) + } +} + +func decodePrelude(r io.Reader, crc hash.Hash32) (messagePrelude, error) { + var p messagePrelude + + var err error + p.Length, err = decodeUint32(r) + if err != nil { + return messagePrelude{}, err + } + + p.HeadersLen, err = decodeUint32(r) + if err != nil { + return messagePrelude{}, err + } + + if err := p.ValidateLens(); err != nil { + return messagePrelude{}, err + } + + preludeCRC := crc.Sum32() + if err := validateCRC(r, preludeCRC); err != nil { + return messagePrelude{}, err + } + + p.PreludeCRC = preludeCRC + + return p, nil +} + +func decodePayload(buf []byte, r io.Reader) ([]byte, error) { + w := bytes.NewBuffer(buf[0:0]) + + _, err := io.Copy(w, r) + return w.Bytes(), err +} + +func decodeUint8(r io.Reader) (uint8, error) { + type byteReader interface { + ReadByte() (byte, error) + } + + if br, ok := r.(byteReader); ok { + v, err := br.ReadByte() + return uint8(v), err + } + + var b [1]byte + _, err := io.ReadFull(r, b[:]) + return uint8(b[0]), err +} +func decodeUint16(r io.Reader) (uint16, error) { + var b [2]byte + bs := b[:] + _, err := io.ReadFull(r, bs) + if err != nil { + return 0, err + } + return binary.BigEndian.Uint16(bs), nil +} +func decodeUint32(r io.Reader) (uint32, error) { + var b [4]byte + bs := b[:] + _, err := io.ReadFull(r, bs) + if err != nil { + return 0, err + } + return binary.BigEndian.Uint32(bs), nil +} +func decodeUint64(r io.Reader) (uint64, error) { + var b [8]byte + bs := b[:] + _, err := io.ReadFull(r, bs) + if err != nil { + return 0, err + } + return binary.BigEndian.Uint64(bs), nil +} + +func validateCRC(r io.Reader, expect uint32) error { + msgCRC, err := decodeUint32(r) + 
if err != nil { + return err + } + + if msgCRC != expect { + return ChecksumError{} + } + + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/encode.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/encode.go new file mode 100644 index 00000000000..150a60981d8 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/encode.go @@ -0,0 +1,114 @@ +package eventstream + +import ( + "bytes" + "encoding/binary" + "hash" + "hash/crc32" + "io" +) + +// Encoder provides EventStream message encoding. +type Encoder struct { + w io.Writer + + headersBuf *bytes.Buffer +} + +// NewEncoder initializes and returns an Encoder to encode Event Stream +// messages to an io.Writer. +func NewEncoder(w io.Writer) *Encoder { + return &Encoder{ + w: w, + headersBuf: bytes.NewBuffer(nil), + } +} + +// Encode encodes a single EventStream message to the io.Writer the Encoder +// was created with. An error is returned if writing the message fails. +func (e *Encoder) Encode(msg Message) error { + e.headersBuf.Reset() + + err := encodeHeaders(e.headersBuf, msg.Headers) + if err != nil { + return err + } + + crc := crc32.New(crc32IEEETable) + hashWriter := io.MultiWriter(e.w, crc) + + headersLen := uint32(e.headersBuf.Len()) + payloadLen := uint32(len(msg.Payload)) + + if err := encodePrelude(hashWriter, crc, headersLen, payloadLen); err != nil { + return err + } + + if headersLen > 0 { + if _, err := io.Copy(hashWriter, e.headersBuf); err != nil { + return err + } + } + + if payloadLen > 0 { + if _, err := hashWriter.Write(msg.Payload); err != nil { + return err + } + } + + msgCRC := crc.Sum32() + return binary.Write(e.w, binary.BigEndian, msgCRC) +} + +func encodePrelude(w io.Writer, crc hash.Hash32, headersLen, payloadLen uint32) error { + p := messagePrelude{ + Length: minMsgLen + headersLen + payloadLen, + HeadersLen: headersLen, + } + if err := p.ValidateLens(); err != nil { + return err + } + + err := binaryWriteFields(w, binary.BigEndian, + p.Length, + p.HeadersLen, + ) + if err != nil { + return err + } + + p.PreludeCRC = crc.Sum32() + err = binary.Write(w, binary.BigEndian, p.PreludeCRC) + if err != nil { + return err + } + + return nil +} + +func encodeHeaders(w io.Writer, headers Headers) error { + for _, h := range headers { + hn := headerName{ + Len: uint8(len(h.Name)), + } + copy(hn.Name[:hn.Len], h.Name) + if err := hn.encode(w); err != nil { + return err + } + + if err := h.Value.encode(w); err != nil { + return err + } + } + + return nil +} + +func binaryWriteFields(w io.Writer, order binary.ByteOrder, vs ...interface{}) error { + for _, v := range vs { + if err := binary.Write(w, order, v); err != nil { + return err + } + } + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/error.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/error.go new file mode 100644 index 00000000000..5481ef30796 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/error.go @@ -0,0 +1,23 @@ +package eventstream + +import "fmt" + +// LengthError provides the error for items being larger than a maximum length. +type LengthError struct { + Part string + Want int + Have int + Value interface{} +} + +func (e LengthError) Error() string { + return fmt.Sprintf("%s length invalid, %d/%d, %v", + e.Part, e.Want, e.Have, e.Value) +} + +// ChecksumError provides the error for message checksum invalidation errors. 
+type ChecksumError struct{} + +func (e ChecksumError) Error() string { + return "message checksum mismatch" +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi/api.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi/api.go new file mode 100644 index 00000000000..97937c8e598 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi/api.go @@ -0,0 +1,196 @@ +package eventstreamapi + +import ( + "fmt" + "io" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/eventstream" +) + +// Unmarshaler provides the interface for unmarshaling a EventStream +// message into a SDK type. +type Unmarshaler interface { + UnmarshalEvent(protocol.PayloadUnmarshaler, eventstream.Message) error +} + +// EventStream headers with specific meaning to async API functionality. +const ( + MessageTypeHeader = `:message-type` // Identifies type of message. + EventMessageType = `event` + ErrorMessageType = `error` + ExceptionMessageType = `exception` + + // Message Events + EventTypeHeader = `:event-type` // Identifies message event type e.g. "Stats". + + // Message Error + ErrorCodeHeader = `:error-code` + ErrorMessageHeader = `:error-message` + + // Message Exception + ExceptionTypeHeader = `:exception-type` +) + +// EventReader provides reading from the EventStream of an reader. +type EventReader struct { + reader io.ReadCloser + decoder *eventstream.Decoder + + unmarshalerForEventType func(string) (Unmarshaler, error) + payloadUnmarshaler protocol.PayloadUnmarshaler + + payloadBuf []byte +} + +// NewEventReader returns a EventReader built from the reader and unmarshaler +// provided. Use ReadStream method to start reading from the EventStream. +func NewEventReader( + reader io.ReadCloser, + payloadUnmarshaler protocol.PayloadUnmarshaler, + unmarshalerForEventType func(string) (Unmarshaler, error), +) *EventReader { + return &EventReader{ + reader: reader, + decoder: eventstream.NewDecoder(reader), + payloadUnmarshaler: payloadUnmarshaler, + unmarshalerForEventType: unmarshalerForEventType, + payloadBuf: make([]byte, 10*1024), + } +} + +// UseLogger instructs the EventReader to use the logger and log level +// specified. +func (r *EventReader) UseLogger(logger aws.Logger, logLevel aws.LogLevelType) { + if logger != nil && logLevel.Matches(aws.LogDebugWithEventStreamBody) { + r.decoder.UseLogger(logger) + } +} + +// ReadEvent attempts to read a message from the EventStream and return the +// unmarshaled event value that the message is for. +// +// For EventStream API errors check if the returned error satisfies the +// awserr.Error interface to get the error's Code and Message components. +// +// EventUnmarshalers called with EventStream messages must take copies of the +// message's Payload. The payload will is reused between events read. +func (r *EventReader) ReadEvent() (event interface{}, err error) { + msg, err := r.decoder.Decode(r.payloadBuf) + if err != nil { + return nil, err + } + defer func() { + // Reclaim payload buffer for next message read. 
+ r.payloadBuf = msg.Payload[0:0] + }() + + typ, err := GetHeaderString(msg, MessageTypeHeader) + if err != nil { + return nil, err + } + + switch typ { + case EventMessageType: + return r.unmarshalEventMessage(msg) + case ExceptionMessageType: + err = r.unmarshalEventException(msg) + return nil, err + case ErrorMessageType: + return nil, r.unmarshalErrorMessage(msg) + default: + return nil, fmt.Errorf("unknown eventstream message type, %v", typ) + } +} + +func (r *EventReader) unmarshalEventMessage( + msg eventstream.Message, +) (event interface{}, err error) { + eventType, err := GetHeaderString(msg, EventTypeHeader) + if err != nil { + return nil, err + } + + ev, err := r.unmarshalerForEventType(eventType) + if err != nil { + return nil, err + } + + err = ev.UnmarshalEvent(r.payloadUnmarshaler, msg) + if err != nil { + return nil, err + } + + return ev, nil +} + +func (r *EventReader) unmarshalEventException( + msg eventstream.Message, +) (err error) { + eventType, err := GetHeaderString(msg, ExceptionTypeHeader) + if err != nil { + return err + } + + ev, err := r.unmarshalerForEventType(eventType) + if err != nil { + return err + } + + err = ev.UnmarshalEvent(r.payloadUnmarshaler, msg) + if err != nil { + return err + } + + var ok bool + err, ok = ev.(error) + if !ok { + err = messageError{ + code: "SerializationError", + msg: fmt.Sprintf( + "event stream exception %s mapped to non-error %T, %v", + eventType, ev, ev, + ), + } + } + + return err +} + +func (r *EventReader) unmarshalErrorMessage(msg eventstream.Message) (err error) { + var msgErr messageError + + msgErr.code, err = GetHeaderString(msg, ErrorCodeHeader) + if err != nil { + return err + } + + msgErr.msg, err = GetHeaderString(msg, ErrorMessageHeader) + if err != nil { + return err + } + + return msgErr +} + +// Close closes the EventReader's EventStream reader. +func (r *EventReader) Close() error { + return r.reader.Close() +} + +// GetHeaderString returns the value of the header as a string. If the header +// is not set or the value is not a string an error will be returned. 
+func GetHeaderString(msg eventstream.Message, headerName string) (string, error) { + headerVal := msg.Headers.Get(headerName) + if headerVal == nil { + return "", fmt.Errorf("error header %s not present", headerName) + } + + v, ok := headerVal.Get().(string) + if !ok { + return "", fmt.Errorf("error header value is not a string, %T", headerVal) + } + + return v, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi/error.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi/error.go new file mode 100644 index 00000000000..5ea5a988b63 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi/error.go @@ -0,0 +1,24 @@ +package eventstreamapi + +import "fmt" + +type messageError struct { + code string + msg string +} + +func (e messageError) Code() string { + return e.code +} + +func (e messageError) Message() string { + return e.msg +} + +func (e messageError) Error() string { + return fmt.Sprintf("%s: %s", e.code, e.msg) +} + +func (e messageError) OrigErr() error { + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/header.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/header.go new file mode 100644 index 00000000000..3b44dde2f32 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/header.go @@ -0,0 +1,166 @@ +package eventstream + +import ( + "encoding/binary" + "fmt" + "io" +) + +// Headers are a collection of EventStream header values. +type Headers []Header + +// Header is a single EventStream Key Value header pair. +type Header struct { + Name string + Value Value +} + +// Set associates the name with a value. If the header name already exists in +// the Headers the value will be replaced with the new one. +func (hs *Headers) Set(name string, value Value) { + var i int + for ; i < len(*hs); i++ { + if (*hs)[i].Name == name { + (*hs)[i].Value = value + return + } + } + + *hs = append(*hs, Header{ + Name: name, Value: value, + }) +} + +// Get returns the Value associated with the header. Nil is returned if the +// value does not exist. +func (hs Headers) Get(name string) Value { + for i := 0; i < len(hs); i++ { + if h := hs[i]; h.Name == name { + return h.Value + } + } + return nil +} + +// Del deletes the value in the Headers if it exists. 
+func (hs *Headers) Del(name string) { + for i := 0; i < len(*hs); i++ { + if (*hs)[i].Name == name { + copy((*hs)[i:], (*hs)[i+1:]) + (*hs) = (*hs)[:len(*hs)-1] + } + } +} + +func decodeHeaders(r io.Reader) (Headers, error) { + hs := Headers{} + + for { + name, err := decodeHeaderName(r) + if err != nil { + if err == io.EOF { + // EOF while getting header name means no more headers + break + } + return nil, err + } + + value, err := decodeHeaderValue(r) + if err != nil { + return nil, err + } + + hs.Set(name, value) + } + + return hs, nil +} + +func decodeHeaderName(r io.Reader) (string, error) { + var n headerName + + var err error + n.Len, err = decodeUint8(r) + if err != nil { + return "", err + } + + name := n.Name[:n.Len] + if _, err := io.ReadFull(r, name); err != nil { + return "", err + } + + return string(name), nil +} + +func decodeHeaderValue(r io.Reader) (Value, error) { + var raw rawValue + + typ, err := decodeUint8(r) + if err != nil { + return nil, err + } + raw.Type = valueType(typ) + + var v Value + + switch raw.Type { + case trueValueType: + v = BoolValue(true) + case falseValueType: + v = BoolValue(false) + case int8ValueType: + var tv Int8Value + err = tv.decode(r) + v = tv + case int16ValueType: + var tv Int16Value + err = tv.decode(r) + v = tv + case int32ValueType: + var tv Int32Value + err = tv.decode(r) + v = tv + case int64ValueType: + var tv Int64Value + err = tv.decode(r) + v = tv + case bytesValueType: + var tv BytesValue + err = tv.decode(r) + v = tv + case stringValueType: + var tv StringValue + err = tv.decode(r) + v = tv + case timestampValueType: + var tv TimestampValue + err = tv.decode(r) + v = tv + case uuidValueType: + var tv UUIDValue + err = tv.decode(r) + v = tv + default: + panic(fmt.Sprintf("unknown value type %d", raw.Type)) + } + + // Error could be EOF, let caller deal with it + return v, err +} + +const maxHeaderNameLen = 255 + +type headerName struct { + Len uint8 + Name [maxHeaderNameLen]byte +} + +func (v headerName) encode(w io.Writer) error { + if err := binary.Write(w, binary.BigEndian, v.Len); err != nil { + return err + } + + _, err := w.Write(v.Name[:v.Len]) + return err +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/header_value.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/header_value.go new file mode 100644 index 00000000000..e3fc0766a9e --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/header_value.go @@ -0,0 +1,501 @@ +package eventstream + +import ( + "encoding/base64" + "encoding/binary" + "fmt" + "io" + "strconv" + "time" +) + +const maxHeaderValueLen = 1<<15 - 1 // 2^15-1 or 32KB - 1 + +// valueType is the EventStream header value type. 
+type valueType uint8 + +// Header value types +const ( + trueValueType valueType = iota + falseValueType + int8ValueType // Byte + int16ValueType // Short + int32ValueType // Integer + int64ValueType // Long + bytesValueType + stringValueType + timestampValueType + uuidValueType +) + +func (t valueType) String() string { + switch t { + case trueValueType: + return "bool" + case falseValueType: + return "bool" + case int8ValueType: + return "int8" + case int16ValueType: + return "int16" + case int32ValueType: + return "int32" + case int64ValueType: + return "int64" + case bytesValueType: + return "byte_array" + case stringValueType: + return "string" + case timestampValueType: + return "timestamp" + case uuidValueType: + return "uuid" + default: + return fmt.Sprintf("unknown value type %d", uint8(t)) + } +} + +type rawValue struct { + Type valueType + Len uint16 // Only set for variable length slices + Value []byte // byte representation of value, BigEndian encoding. +} + +func (r rawValue) encodeScalar(w io.Writer, v interface{}) error { + return binaryWriteFields(w, binary.BigEndian, + r.Type, + v, + ) +} + +func (r rawValue) encodeFixedSlice(w io.Writer, v []byte) error { + binary.Write(w, binary.BigEndian, r.Type) + + _, err := w.Write(v) + return err +} + +func (r rawValue) encodeBytes(w io.Writer, v []byte) error { + if len(v) > maxHeaderValueLen { + return LengthError{ + Part: "header value", + Want: maxHeaderValueLen, Have: len(v), + Value: v, + } + } + r.Len = uint16(len(v)) + + err := binaryWriteFields(w, binary.BigEndian, + r.Type, + r.Len, + ) + if err != nil { + return err + } + + _, err = w.Write(v) + return err +} + +func (r rawValue) encodeString(w io.Writer, v string) error { + if len(v) > maxHeaderValueLen { + return LengthError{ + Part: "header value", + Want: maxHeaderValueLen, Have: len(v), + Value: v, + } + } + r.Len = uint16(len(v)) + + type stringWriter interface { + WriteString(string) (int, error) + } + + err := binaryWriteFields(w, binary.BigEndian, + r.Type, + r.Len, + ) + if err != nil { + return err + } + + if sw, ok := w.(stringWriter); ok { + _, err = sw.WriteString(v) + } else { + _, err = w.Write([]byte(v)) + } + + return err +} + +func decodeFixedBytesValue(r io.Reader, buf []byte) error { + _, err := io.ReadFull(r, buf) + return err +} + +func decodeBytesValue(r io.Reader) ([]byte, error) { + var raw rawValue + var err error + raw.Len, err = decodeUint16(r) + if err != nil { + return nil, err + } + + buf := make([]byte, raw.Len) + _, err = io.ReadFull(r, buf) + if err != nil { + return nil, err + } + + return buf, nil +} + +func decodeStringValue(r io.Reader) (string, error) { + v, err := decodeBytesValue(r) + return string(v), err +} + +// Value represents the abstract header value. +type Value interface { + Get() interface{} + String() string + valueType() valueType + encode(io.Writer) error +} + +// An BoolValue provides eventstream encoding, and representation +// of a Go bool value. +type BoolValue bool + +// Get returns the underlying type +func (v BoolValue) Get() interface{} { + return bool(v) +} + +// valueType returns the EventStream header value type value. +func (v BoolValue) valueType() valueType { + if v { + return trueValueType + } + return falseValueType +} + +func (v BoolValue) String() string { + return strconv.FormatBool(bool(v)) +} + +// encode encodes the BoolValue into an eventstream binary value +// representation. 
+func (v BoolValue) encode(w io.Writer) error { + return binary.Write(w, binary.BigEndian, v.valueType()) +} + +// An Int8Value provides eventstream encoding, and representation of a Go +// int8 value. +type Int8Value int8 + +// Get returns the underlying value. +func (v Int8Value) Get() interface{} { + return int8(v) +} + +// valueType returns the EventStream header value type value. +func (Int8Value) valueType() valueType { + return int8ValueType +} + +func (v Int8Value) String() string { + return fmt.Sprintf("0x%02x", int8(v)) +} + +// encode encodes the Int8Value into an eventstream binary value +// representation. +func (v Int8Value) encode(w io.Writer) error { + raw := rawValue{ + Type: v.valueType(), + } + + return raw.encodeScalar(w, v) +} + +func (v *Int8Value) decode(r io.Reader) error { + n, err := decodeUint8(r) + if err != nil { + return err + } + + *v = Int8Value(n) + return nil +} + +// An Int16Value provides eventstream encoding, and representation of a Go +// int16 value. +type Int16Value int16 + +// Get returns the underlying value. +func (v Int16Value) Get() interface{} { + return int16(v) +} + +// valueType returns the EventStream header value type value. +func (Int16Value) valueType() valueType { + return int16ValueType +} + +func (v Int16Value) String() string { + return fmt.Sprintf("0x%04x", int16(v)) +} + +// encode encodes the Int16Value into an eventstream binary value +// representation. +func (v Int16Value) encode(w io.Writer) error { + raw := rawValue{ + Type: v.valueType(), + } + return raw.encodeScalar(w, v) +} + +func (v *Int16Value) decode(r io.Reader) error { + n, err := decodeUint16(r) + if err != nil { + return err + } + + *v = Int16Value(n) + return nil +} + +// An Int32Value provides eventstream encoding, and representation of a Go +// int32 value. +type Int32Value int32 + +// Get returns the underlying value. +func (v Int32Value) Get() interface{} { + return int32(v) +} + +// valueType returns the EventStream header value type value. +func (Int32Value) valueType() valueType { + return int32ValueType +} + +func (v Int32Value) String() string { + return fmt.Sprintf("0x%08x", int32(v)) +} + +// encode encodes the Int32Value into an eventstream binary value +// representation. +func (v Int32Value) encode(w io.Writer) error { + raw := rawValue{ + Type: v.valueType(), + } + return raw.encodeScalar(w, v) +} + +func (v *Int32Value) decode(r io.Reader) error { + n, err := decodeUint32(r) + if err != nil { + return err + } + + *v = Int32Value(n) + return nil +} + +// An Int64Value provides eventstream encoding, and representation of a Go +// int64 value. +type Int64Value int64 + +// Get returns the underlying value. +func (v Int64Value) Get() interface{} { + return int64(v) +} + +// valueType returns the EventStream header value type value. +func (Int64Value) valueType() valueType { + return int64ValueType +} + +func (v Int64Value) String() string { + return fmt.Sprintf("0x%016x", int64(v)) +} + +// encode encodes the Int64Value into an eventstream binary value +// representation. +func (v Int64Value) encode(w io.Writer) error { + raw := rawValue{ + Type: v.valueType(), + } + return raw.encodeScalar(w, v) +} + +func (v *Int64Value) decode(r io.Reader) error { + n, err := decodeUint64(r) + if err != nil { + return err + } + + *v = Int64Value(n) + return nil +} + +// An BytesValue provides eventstream encoding, and representation of a Go +// byte slice. +type BytesValue []byte + +// Get returns the underlying value. 
+func (v BytesValue) Get() interface{} { + return []byte(v) +} + +// valueType returns the EventStream header value type value. +func (BytesValue) valueType() valueType { + return bytesValueType +} + +func (v BytesValue) String() string { + return base64.StdEncoding.EncodeToString([]byte(v)) +} + +// encode encodes the BytesValue into an eventstream binary value +// representation. +func (v BytesValue) encode(w io.Writer) error { + raw := rawValue{ + Type: v.valueType(), + } + + return raw.encodeBytes(w, []byte(v)) +} + +func (v *BytesValue) decode(r io.Reader) error { + buf, err := decodeBytesValue(r) + if err != nil { + return err + } + + *v = BytesValue(buf) + return nil +} + +// An StringValue provides eventstream encoding, and representation of a Go +// string. +type StringValue string + +// Get returns the underlying value. +func (v StringValue) Get() interface{} { + return string(v) +} + +// valueType returns the EventStream header value type value. +func (StringValue) valueType() valueType { + return stringValueType +} + +func (v StringValue) String() string { + return string(v) +} + +// encode encodes the StringValue into an eventstream binary value +// representation. +func (v StringValue) encode(w io.Writer) error { + raw := rawValue{ + Type: v.valueType(), + } + + return raw.encodeString(w, string(v)) +} + +func (v *StringValue) decode(r io.Reader) error { + s, err := decodeStringValue(r) + if err != nil { + return err + } + + *v = StringValue(s) + return nil +} + +// An TimestampValue provides eventstream encoding, and representation of a Go +// timestamp. +type TimestampValue time.Time + +// Get returns the underlying value. +func (v TimestampValue) Get() interface{} { + return time.Time(v) +} + +// valueType returns the EventStream header value type value. +func (TimestampValue) valueType() valueType { + return timestampValueType +} + +func (v TimestampValue) epochMilli() int64 { + nano := time.Time(v).UnixNano() + msec := nano / int64(time.Millisecond) + return msec +} + +func (v TimestampValue) String() string { + msec := v.epochMilli() + return strconv.FormatInt(msec, 10) +} + +// encode encodes the TimestampValue into an eventstream binary value +// representation. +func (v TimestampValue) encode(w io.Writer) error { + raw := rawValue{ + Type: v.valueType(), + } + + msec := v.epochMilli() + return raw.encodeScalar(w, msec) +} + +func (v *TimestampValue) decode(r io.Reader) error { + n, err := decodeUint64(r) + if err != nil { + return err + } + + *v = TimestampValue(timeFromEpochMilli(int64(n))) + return nil +} + +func timeFromEpochMilli(t int64) time.Time { + secs := t / 1e3 + msec := t % 1e3 + return time.Unix(secs, msec*int64(time.Millisecond)).UTC() +} + +// An UUIDValue provides eventstream encoding, and representation of a UUID +// value. +type UUIDValue [16]byte + +// Get returns the underlying value. +func (v UUIDValue) Get() interface{} { + return v[:] +} + +// valueType returns the EventStream header value type value. +func (UUIDValue) valueType() valueType { + return uuidValueType +} + +func (v UUIDValue) String() string { + return fmt.Sprintf(`%X-%X-%X-%X-%X`, v[0:4], v[4:6], v[6:8], v[8:10], v[10:]) +} + +// encode encodes the UUIDValue into an eventstream binary value +// representation. 
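The header and value types added above (`Headers`, `StringValue`, `Int32Value`, `BoolValue`, `TimestampValue`, and friends) form the typed key/value layer of the eventstream encoding. A minimal usage sketch follows; it assumes the package is imported from its upstream path `github.com/aws/aws-sdk-go/private/protocol/eventstream` rather than the vendored copy shown in this diff.

```go
package main

import (
	"fmt"
	"time"

	// Assumed upstream import path for the package vendored above.
	"github.com/aws/aws-sdk-go/private/protocol/eventstream"
)

func main() {
	var hs eventstream.Headers

	// Set appends a new header, or replaces the value when the name
	// is already present in the collection.
	hs.Set(":event-type", eventstream.StringValue("Records"))
	hs.Set(":message-type", eventstream.StringValue("event"))
	hs.Set("retry-count", eventstream.Int32Value(3))
	hs.Set("compressed", eventstream.BoolValue(false))
	hs.Set("created-at", eventstream.TimestampValue(time.Now()))

	// Get returns nil when the named header is not present.
	if v := hs.Get(":event-type"); v != nil {
		fmt.Println("event type:", v.Get())
	}

	// Every Value implements String(); timestamps render as epoch
	// milliseconds and byte slices as base64.
	for _, h := range hs {
		fmt.Printf("%s = %s\n", h.Name, h.Value.String())
	}
}
```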
+func (v UUIDValue) encode(w io.Writer) error { + raw := rawValue{ + Type: v.valueType(), + } + + return raw.encodeFixedSlice(w, v[:]) +} + +func (v *UUIDValue) decode(r io.Reader) error { + tv := (*v)[:] + return decodeFixedBytesValue(r, tv) +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/message.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/message.go new file mode 100644 index 00000000000..2dc012a66e2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/eventstream/message.go @@ -0,0 +1,103 @@ +package eventstream + +import ( + "bytes" + "encoding/binary" + "hash/crc32" +) + +const preludeLen = 8 +const preludeCRCLen = 4 +const msgCRCLen = 4 +const minMsgLen = preludeLen + preludeCRCLen + msgCRCLen +const maxPayloadLen = 1024 * 1024 * 16 // 16MB +const maxHeadersLen = 1024 * 128 // 128KB +const maxMsgLen = minMsgLen + maxHeadersLen + maxPayloadLen + +var crc32IEEETable = crc32.MakeTable(crc32.IEEE) + +// A Message provides the eventstream message representation. +type Message struct { + Headers Headers + Payload []byte +} + +func (m *Message) rawMessage() (rawMessage, error) { + var raw rawMessage + + if len(m.Headers) > 0 { + var headers bytes.Buffer + if err := encodeHeaders(&headers, m.Headers); err != nil { + return rawMessage{}, err + } + raw.Headers = headers.Bytes() + raw.HeadersLen = uint32(len(raw.Headers)) + } + + raw.Length = raw.HeadersLen + uint32(len(m.Payload)) + minMsgLen + + hash := crc32.New(crc32IEEETable) + binaryWriteFields(hash, binary.BigEndian, raw.Length, raw.HeadersLen) + raw.PreludeCRC = hash.Sum32() + + binaryWriteFields(hash, binary.BigEndian, raw.PreludeCRC) + + if raw.HeadersLen > 0 { + hash.Write(raw.Headers) + } + + // Read payload bytes and update hash for it as well. + if len(m.Payload) > 0 { + raw.Payload = m.Payload + hash.Write(raw.Payload) + } + + raw.CRC = hash.Sum32() + + return raw, nil +} + +type messagePrelude struct { + Length uint32 + HeadersLen uint32 + PreludeCRC uint32 +} + +func (p messagePrelude) PayloadLen() uint32 { + return p.Length - p.HeadersLen - minMsgLen +} + +func (p messagePrelude) ValidateLens() error { + if p.Length == 0 || p.Length > maxMsgLen { + return LengthError{ + Part: "message prelude", + Want: maxMsgLen, + Have: int(p.Length), + } + } + if p.HeadersLen > maxHeadersLen { + return LengthError{ + Part: "message headers", + Want: maxHeadersLen, + Have: int(p.HeadersLen), + } + } + if payloadLen := p.PayloadLen(); payloadLen > maxPayloadLen { + return LengthError{ + Part: "message payload", + Want: maxPayloadLen, + Have: int(payloadLen), + } + } + + return nil +} + +type rawMessage struct { + messagePrelude + + Headers []byte + Payload []byte + + CRC uint32 +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/host.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/host.go new file mode 100644 index 00000000000..f06f44ee1c7 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/host.go @@ -0,0 +1,21 @@ +package protocol + +// ValidHostLabel returns if the label is a valid RFC 1123 Section 2.1 domain +// host label name. 
+func ValidHostLabel(label string) bool { + if l := len(label); l == 0 || l > 63 { + return false + } + for _, r := range label { + switch { + case r >= '0' && r <= '9': + case r >= 'A' && r <= 'Z': + case r >= 'a' && r <= 'z': + case r == '-': + default: + return false + } + } + + return true +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go index ec765ba257e..864fb6704b4 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go @@ -216,7 +216,17 @@ func buildScalar(v reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) erro default: switch converted := value.Interface().(type) { case time.Time: - buf.Write(strconv.AppendInt(scratch[:0], converted.UTC().Unix(), 10)) + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.UnixTimeFormatName + } + + ts := protocol.FormatTime(format, converted) + if format != protocol.UnixTimeFormatName { + ts = `"` + ts + `"` + } + + buf.WriteString(ts) case []byte: if !value.IsNil() { buf.WriteByte('"') diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go index 037e1e7be78..b11f3ee45b5 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go @@ -5,7 +5,6 @@ import ( "encoding/json" "fmt" "io" - "io/ioutil" "reflect" "time" @@ -17,16 +16,10 @@ import ( func UnmarshalJSON(v interface{}, stream io.Reader) error { var out interface{} - b, err := ioutil.ReadAll(stream) - if err != nil { - return err - } - - if len(b) == 0 { + err := json.NewDecoder(stream).Decode(&out) + if err == io.EOF { return nil - } - - if err := json.Unmarshal(b, &out); err != nil { + } else if err != nil { return err } @@ -172,9 +165,6 @@ func unmarshalMap(value reflect.Value, data interface{}, tag reflect.StructTag) } func unmarshalScalar(value reflect.Value, data interface{}, tag reflect.StructTag) error { - errf := func() error { - return fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) - } switch d := data.(type) { case nil: @@ -189,6 +179,17 @@ func unmarshalScalar(value reflect.Value, data interface{}, tag reflect.StructTa return err } value.Set(reflect.ValueOf(b)) + case *time.Time: + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.ISO8601TimeFormatName + } + + t, err := protocol.ParseTime(format, d) + if err != nil { + return err + } + value.Set(reflect.ValueOf(&t)) case aws.JSONValue: // No need to use escaping as the value is a non-quoted string. 
v, err := protocol.DecodeJSONValue(d, protocol.NoEscape) @@ -197,7 +198,7 @@ func unmarshalScalar(value reflect.Value, data interface{}, tag reflect.StructTa } value.Set(reflect.ValueOf(v)) default: - return errf() + return fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) } case float64: switch value.Interface().(type) { @@ -207,17 +208,18 @@ func unmarshalScalar(value reflect.Value, data interface{}, tag reflect.StructTa case *float64: value.Set(reflect.ValueOf(&d)) case *time.Time: + // Time unmarshaled from a float64 can only be epoch seconds t := time.Unix(int64(d), 0).UTC() value.Set(reflect.ValueOf(&t)) default: - return errf() + return fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) } case bool: switch value.Interface().(type) { case *bool: value.Set(reflect.ValueOf(&d)) default: - return errf() + return fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) } default: return fmt.Errorf("unsupported JSON value (%v)", data) diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go index 56af4dc4426..9a7ba27ad53 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go @@ -7,7 +7,7 @@ package jsonrpc import ( "encoding/json" - "io/ioutil" + "io" "strings" "github.com/aws/aws-sdk-go/aws/awserr" @@ -64,7 +64,11 @@ func Unmarshal(req *request.Request) { if req.DataFilled() { err := jsonutil.UnmarshalJSON(req.Data, req.HTTPResponse.Body) if err != nil { - req.Error = awserr.New("SerializationError", "failed decoding JSON RPC response", err) + req.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed decoding JSON RPC response", err), + req.HTTPResponse.StatusCode, + req.RequestID, + ) } } return @@ -78,22 +82,22 @@ func UnmarshalMeta(req *request.Request) { // UnmarshalError unmarshals an error response for a JSON RPC service. 
func UnmarshalError(req *request.Request) { defer req.HTTPResponse.Body.Close() - bodyBytes, err := ioutil.ReadAll(req.HTTPResponse.Body) - if err != nil { - req.Error = awserr.New("SerializationError", "failed reading JSON RPC error response", err) - return - } - if len(bodyBytes) == 0 { + + var jsonErr jsonErrorResponse + err := json.NewDecoder(req.HTTPResponse.Body).Decode(&jsonErr) + if err == io.EOF { req.Error = awserr.NewRequestFailure( awserr.New("SerializationError", req.HTTPResponse.Status, nil), req.HTTPResponse.StatusCode, - "", + req.RequestID, ) return - } - var jsonErr jsonErrorResponse - if err := json.Unmarshal(bodyBytes, &jsonErr); err != nil { - req.Error = awserr.New("SerializationError", "failed decoding JSON RPC error response", err) + } else if err != nil { + req.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed decoding JSON RPC error response", err), + req.HTTPResponse.StatusCode, + req.RequestID, + ) return } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/payload.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/payload.go new file mode 100644 index 00000000000..e21614a1250 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/payload.go @@ -0,0 +1,81 @@ +package protocol + +import ( + "io" + "io/ioutil" + "net/http" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" +) + +// PayloadUnmarshaler provides the interface for unmarshaling a payload's +// reader into a SDK shape. +type PayloadUnmarshaler interface { + UnmarshalPayload(io.Reader, interface{}) error +} + +// HandlerPayloadUnmarshal implements the PayloadUnmarshaler from a +// HandlerList. This provides the support for unmarshaling a payload reader to +// a shape without needing a SDK request first. +type HandlerPayloadUnmarshal struct { + Unmarshalers request.HandlerList +} + +// UnmarshalPayload unmarshals the io.Reader payload into the SDK shape using +// the Unmarshalers HandlerList provided. Returns an error if unable +// unmarshaling fails. +func (h HandlerPayloadUnmarshal) UnmarshalPayload(r io.Reader, v interface{}) error { + req := &request.Request{ + HTTPRequest: &http.Request{}, + HTTPResponse: &http.Response{ + StatusCode: 200, + Header: http.Header{}, + Body: ioutil.NopCloser(r), + }, + Data: v, + } + + h.Unmarshalers.Run(req) + + return req.Error +} + +// PayloadMarshaler provides the interface for marshaling a SDK shape into and +// io.Writer. +type PayloadMarshaler interface { + MarshalPayload(io.Writer, interface{}) error +} + +// HandlerPayloadMarshal implements the PayloadMarshaler from a HandlerList. +// This provides support for marshaling a SDK shape into an io.Writer without +// needing a SDK request first. +type HandlerPayloadMarshal struct { + Marshalers request.HandlerList +} + +// MarshalPayload marshals the SDK shape into the io.Writer using the +// Marshalers HandlerList provided. Returns an error if unable if marshal +// fails. 
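`HandlerPayloadUnmarshal` above lets a raw payload reader be decoded into an SDK shape by running an existing unmarshal `HandlerList` against a synthetic 200 response, without building a full SDK request. A rough sketch under assumptions: upstream import paths, a hypothetical `sampleOutput` shape, and reuse of the `jsonrpc.Unmarshal` handler updated earlier in this diff.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/private/protocol"
	"github.com/aws/aws-sdk-go/private/protocol/jsonrpc"
)

// sampleOutput is a stand-in shape; SDK output structs use pointer fields.
type sampleOutput struct {
	Name *string
}

func main() {
	// Run the JSON-RPC response unmarshaler through the new helper.
	var unmarshalers request.HandlerList
	unmarshalers.PushBack(jsonrpc.Unmarshal)

	u := protocol.HandlerPayloadUnmarshal{Unmarshalers: unmarshalers}

	out := &sampleOutput{}
	if err := u.UnmarshalPayload(strings.NewReader(`{"Name":"example"}`), out); err != nil {
		fmt.Println("unmarshal failed:", err)
		return
	}
	fmt.Println(*out.Name) // example
}
```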
+func (h HandlerPayloadMarshal) MarshalPayload(w io.Writer, v interface{}) error { + req := request.New( + aws.Config{}, + metadata.ClientInfo{}, + request.Handlers{}, + nil, + &request.Operation{HTTPMethod: "GET"}, + v, + nil, + ) + + h.Marshalers.Run(req) + + if req.Error != nil { + return req.Error + } + + io.Copy(w, req.GetBody()) + + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go index 5ce9cba3291..75866d01218 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go @@ -233,7 +233,12 @@ func (q *queryParser) parseScalar(v url.Values, r reflect.Value, name string, ta v.Set(name, strconv.FormatFloat(float64(value), 'f', -1, 32)) case time.Time: const ISO8601UTC = "2006-01-02T15:04:05Z" - v.Set(name, value.UTC().Format(ISO8601UTC)) + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.ISO8601TimeFormatName + } + + v.Set(name, protocol.FormatTime(format, value)) default: return fmt.Errorf("unsupported value for param %s: %v (%s)", name, r.Interface(), r.Type().Name()) } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal.go index e0f4d5a5419..3495c73070b 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal.go @@ -23,7 +23,11 @@ func Unmarshal(r *request.Request) { decoder := xml.NewDecoder(r.HTTPResponse.Body) err := xmlutil.UnmarshalXML(r.Data, decoder, r.Operation.Name+"Result") if err != nil { - r.Error = awserr.New("SerializationError", "failed decoding Query response", err) + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed decoding Query response", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) return } } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal_error.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal_error.go index f2142961717..46d354e826f 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal_error.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/unmarshal_error.go @@ -28,7 +28,11 @@ func UnmarshalError(r *request.Request) { bodyBytes, err := ioutil.ReadAll(r.HTTPResponse.Body) if err != nil { - r.Error = awserr.New("SerializationError", "failed to read from query HTTP response body", err) + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed to read from query HTTP response body", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) return } @@ -61,6 +65,10 @@ func UnmarshalError(r *request.Request) { } // Failed to retrieve any error message from the response body - r.Error = awserr.New("SerializationError", - "failed to decode query XML error response", decodeErr) + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", + "failed to decode query XML error response", decodeErr), + r.HTTPResponse.StatusCode, + r.RequestID, + ) } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go index c405288d742..b34f5258a4c 100644 --- 
a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go @@ -20,9 +20,6 @@ import ( "github.com/aws/aws-sdk-go/private/protocol" ) -// RFC822 returns an RFC822 formatted timestamp for AWS protocols -const RFC822 = "Mon, 2 Jan 2006 15:04:05 GMT" - // Whether the byte value can be sent without escaping in AWS URLs var noEscape [256]bool @@ -270,7 +267,14 @@ func convertType(v reflect.Value, tag reflect.StructTag) (str string, err error) case float64: str = strconv.FormatFloat(value, 'f', -1, 64) case time.Time: - str = value.UTC().Format(RFC822) + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.RFC822TimeFormatName + if tag.Get("location") == "querystring" { + format = protocol.ISO8601TimeFormatName + } + } + str = protocol.FormatTime(format, value) case aws.JSONValue: if len(value) == 0 { return "", errValueNotSet diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go index 823f045eed7..33fd53b126a 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go @@ -198,7 +198,11 @@ func unmarshalHeader(v reflect.Value, header string, tag reflect.StructTag) erro } v.Set(reflect.ValueOf(&f)) case *time.Time: - t, err := time.Parse(RFC822, header) + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.RFC822TimeFormatName + } + t, err := protocol.ParseTime(format, header) if err != nil { return err } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/restjson/restjson.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/restjson/restjson.go index b1d5294ad93..de8adce636c 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/restjson/restjson.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/restjson/restjson.go @@ -7,7 +7,7 @@ package restjson import ( "encoding/json" - "io/ioutil" + "io" "strings" "github.com/aws/aws-sdk-go/aws/awserr" @@ -54,26 +54,26 @@ func UnmarshalMeta(r *request.Request) { // UnmarshalError unmarshals a response error for the REST JSON protocol. 
func UnmarshalError(r *request.Request) { defer r.HTTPResponse.Body.Close() - code := r.HTTPResponse.Header.Get("X-Amzn-Errortype") - bodyBytes, err := ioutil.ReadAll(r.HTTPResponse.Body) - if err != nil { - r.Error = awserr.New("SerializationError", "failed reading REST JSON error response", err) - return - } - if len(bodyBytes) == 0 { + + var jsonErr jsonErrorResponse + err := json.NewDecoder(r.HTTPResponse.Body).Decode(&jsonErr) + if err == io.EOF { r.Error = awserr.NewRequestFailure( awserr.New("SerializationError", r.HTTPResponse.Status, nil), r.HTTPResponse.StatusCode, - "", + r.RequestID, ) return - } - var jsonErr jsonErrorResponse - if err := json.Unmarshal(bodyBytes, &jsonErr); err != nil { - r.Error = awserr.New("SerializationError", "failed decoding REST JSON error response", err) + } else if err != nil { + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed decoding REST JSON error response", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) return } + code := r.HTTPResponse.Header.Get("X-Amzn-Errortype") if code == "" { code = jsonErr.Code } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go index 7bdf4c8538f..b0f4e245661 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/restxml/restxml.go @@ -36,7 +36,11 @@ func Build(r *request.Request) { var buf bytes.Buffer err := xmlutil.BuildXML(r.Params, xml.NewEncoder(&buf)) if err != nil { - r.Error = awserr.New("SerializationError", "failed to encode rest XML request", err) + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed to encode rest XML request", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) return } r.SetBufferBody(buf.Bytes()) @@ -50,7 +54,11 @@ func Unmarshal(r *request.Request) { decoder := xml.NewDecoder(r.HTTPResponse.Body) err := xmlutil.UnmarshalXML(r.Data, decoder, "") if err != nil { - r.Error = awserr.New("SerializationError", "failed to decode REST XML response", err) + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "failed to decode REST XML response", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) return } } else { diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/timestamp.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/timestamp.go new file mode 100644 index 00000000000..b7ed6c6f810 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/timestamp.go @@ -0,0 +1,72 @@ +package protocol + +import ( + "strconv" + "time" +) + +// Names of time formats supported by the SDK +const ( + RFC822TimeFormatName = "rfc822" + ISO8601TimeFormatName = "iso8601" + UnixTimeFormatName = "unixTimestamp" +) + +// Time formats supported by the SDK +const ( + // RFC 7231#section-7.1.1.1 timetamp format. e.g Tue, 29 Apr 2014 18:30:38 GMT + RFC822TimeFormat = "Mon, 2 Jan 2006 15:04:05 GMT" + + // RFC3339 a subset of the ISO8601 timestamp format. e.g 2014-04-29T18:30:38Z + ISO8601TimeFormat = "2006-01-02T15:04:05Z" +) + +// IsKnownTimestampFormat returns if the timestamp format name +// is know to the SDK's protocols. 
+func IsKnownTimestampFormat(name string) bool { + switch name { + case RFC822TimeFormatName: + fallthrough + case ISO8601TimeFormatName: + fallthrough + case UnixTimeFormatName: + return true + default: + return false + } +} + +// FormatTime returns a string value of the time. +func FormatTime(name string, t time.Time) string { + t = t.UTC() + + switch name { + case RFC822TimeFormatName: + return t.Format(RFC822TimeFormat) + case ISO8601TimeFormatName: + return t.Format(ISO8601TimeFormat) + case UnixTimeFormatName: + return strconv.FormatInt(t.Unix(), 10) + default: + panic("unknown timestamp format name, " + name) + } +} + +// ParseTime attempts to parse the time given the format. Returns +// the time if it was able to be parsed, and fails otherwise. +func ParseTime(formatName, value string) (time.Time, error) { + switch formatName { + case RFC822TimeFormatName: + return time.Parse(RFC822TimeFormat, value) + case ISO8601TimeFormatName: + return time.Parse(ISO8601TimeFormat, value) + case UnixTimeFormatName: + v, err := strconv.ParseFloat(value, 64) + if err != nil { + return time.Time{}, err + } + return time.Unix(int64(v), 0), nil + default: + panic("unknown timestamp format name, " + formatName) + } +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/build.go index 7091b456d18..cf981fe9513 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/build.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/build.go @@ -13,9 +13,13 @@ import ( "github.com/aws/aws-sdk-go/private/protocol" ) -// BuildXML will serialize params into an xml.Encoder. -// Error will be returned if the serialization of any of the params or nested values fails. +// BuildXML will serialize params into an xml.Encoder. Error will be returned +// if the serialization of any of the params or nested values fails. func BuildXML(params interface{}, e *xml.Encoder) error { + return buildXML(params, e, false) +} + +func buildXML(params interface{}, e *xml.Encoder, sorted bool) error { b := xmlBuilder{encoder: e, namespaces: map[string]string{}} root := NewXMLElement(xml.Name{}) if err := b.buildValue(reflect.ValueOf(params), root, ""); err != nil { @@ -23,7 +27,7 @@ func BuildXML(params interface{}, e *xml.Encoder) error { } for _, c := range root.Children { for _, v := range c { - return StructToXML(e, v, false) + return StructToXML(e, v, sorted) } } return nil @@ -83,15 +87,13 @@ func (b *xmlBuilder) buildValue(value reflect.Value, current *XMLNode, tag refle } } -// buildStruct adds a struct and its fields to the current XMLNode. All fields any any nested +// buildStruct adds a struct and its fields to the current XMLNode. All fields and any nested // types are converted to XMLNodes also. 
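The new `timestamp.go` above centralizes the SDK's timestamp handling behind named formats. A small sketch of the round trip through `FormatTime` and `ParseTime`, assuming the upstream import path for the `protocol` package:

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/private/protocol"
)

func main() {
	t := time.Date(2014, time.April, 29, 18, 30, 38, 0, time.UTC)

	for _, name := range []string{
		protocol.RFC822TimeFormatName,  // e.g. Tue, 29 Apr 2014 18:30:38 GMT
		protocol.ISO8601TimeFormatName, // e.g. 2014-04-29T18:30:38Z
		protocol.UnixTimeFormatName,    // e.g. 1398796238
	} {
		s := protocol.FormatTime(name, t)

		// ParseTime accepts the same format names and round-trips the value.
		parsed, err := protocol.ParseTime(name, s)
		if err != nil {
			fmt.Println("parse failed:", err)
			continue
		}
		fmt.Printf("%-13s %-30s %v\n", name, s, parsed.UTC())
	}
}
```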
func (b *xmlBuilder) buildStruct(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { if !value.IsValid() { return nil } - fieldAdded := false - // unwrap payloads if payload := tag.Get("payload"); payload != "" { field, _ := value.Type().FieldByName(payload) @@ -119,6 +121,8 @@ func (b *xmlBuilder) buildStruct(value reflect.Value, current *XMLNode, tag refl child.Attr = append(child.Attr, ns) } + var payloadFields, nonPayloadFields int + t := value.Type() for i := 0; i < value.NumField(); i++ { member := elemOf(value.Field(i)) @@ -133,8 +137,10 @@ func (b *xmlBuilder) buildStruct(value reflect.Value, current *XMLNode, tag refl mTag := field.Tag if mTag.Get("location") != "" { // skip non-body members + nonPayloadFields++ continue } + payloadFields++ if protocol.CanSetIdempotencyToken(value.Field(i), field) { token := protocol.GetIdempotencyToken() @@ -149,11 +155,11 @@ func (b *xmlBuilder) buildStruct(value reflect.Value, current *XMLNode, tag refl if err := b.buildValue(member, child, mTag); err != nil { return err } - - fieldAdded = true } - if fieldAdded { // only append this child if we have one ore more valid members + // Only case where the child shape is not added is if the shape only contains + // non-payload fields, e.g headers/query. + if !(payloadFields == 0 && nonPayloadFields > 0) { current.AddChild(child) } @@ -278,8 +284,12 @@ func (b *xmlBuilder) buildScalar(value reflect.Value, current *XMLNode, tag refl case float32: str = strconv.FormatFloat(float64(converted), 'f', -1, 32) case time.Time: - const ISO8601UTC = "2006-01-02T15:04:05Z" - str = converted.UTC().Format(ISO8601UTC) + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.ISO8601TimeFormatName + } + + str = protocol.FormatTime(format, converted) default: return fmt.Errorf("unsupported value for param %s: %v (%s)", tag.Get("locationName"), value.Interface(), value.Type().Name()) diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/unmarshal.go index 87584628a2b..ff1ef6830b9 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/unmarshal.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/unmarshal.go @@ -9,6 +9,8 @@ import ( "strconv" "strings" "time" + + "github.com/aws/aws-sdk-go/private/protocol" ) // UnmarshalXML deserializes an xml.Decoder into the container v. 
V @@ -52,9 +54,15 @@ func parse(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { if t == "" { switch rtype.Kind() { case reflect.Struct: - t = "structure" + // also it can't be a time object + if _, ok := r.Interface().(*time.Time); !ok { + t = "structure" + } case reflect.Slice: - t = "list" + // also it can't be a byte slice + if _, ok := r.Interface().([]byte); !ok { + t = "list" + } case reflect.Map: t = "map" } @@ -247,8 +255,12 @@ func parseScalar(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { } r.Set(reflect.ValueOf(&v)) case *time.Time: - const ISO8601UTC = "2006-01-02T15:04:05Z" - t, err := time.Parse(ISO8601UTC, node.Text) + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.ISO8601TimeFormatName + } + + t, err := protocol.ParseTime(format, node.Text) if err != nil { return err } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go index 3e970b629da..515ce15215b 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go @@ -29,6 +29,7 @@ func NewXMLElement(name xml.Name) *XMLNode { // AddChild adds child to the XMLNode. func (n *XMLNode) AddChild(child *XMLNode) { + child.parent = n if _, ok := n.Children[child.Name.Local]; !ok { n.Children[child.Name.Local] = []*XMLNode{} } diff --git a/vendor/github.com/aws/aws-sdk-go/service/acm/api.go b/vendor/github.com/aws/aws-sdk-go/service/acm/api.go index b6af6aaa1ff..40cb236501d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/acm/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/acm/api.go @@ -17,8 +17,8 @@ const opAddTagsToCertificate = "AddTagsToCertificate" // AddTagsToCertificateRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -59,7 +59,7 @@ func (c *ACM) AddTagsToCertificateRequest(input *AddTagsToCertificateInput) (req // AddTagsToCertificate API operation for AWS Certificate Manager. // -// Adds one or more tags to an ACM Certificate. Tags are labels that you can +// Adds one or more tags to an ACM certificate. Tags are labels that you can // use to identify and organize your AWS resources. Each tag consists of a key // and an optional value. You specify the certificate on input by its Amazon // Resource Name (ARN). You specify the tag by using a key-value pair. @@ -69,9 +69,9 @@ func (c *ACM) AddTagsToCertificateRequest(input *AddTagsToCertificateInput) (req // certificates if you want to filter for a common relationship among those // certificates. Similarly, you can apply the same tag to multiple resources // if you want to specify a relationship among those resources. 
For example, -// you can add the same tag to an ACM Certificate and an Elastic Load Balancing +// you can add the same tag to an ACM certificate and an Elastic Load Balancing // load balancer to indicate that they are both used by the same website. For -// more information, see Tagging ACM Certificates (http://docs.aws.amazon.com/acm/latest/userguide/tags.html). +// more information, see Tagging ACM certificates (http://docs.aws.amazon.com/acm/latest/userguide/tags.html). // // To remove one or more tags, use the RemoveTagsFromCertificate action. To // view all of the tags that have been applied to the certificate, use the ListTagsForCertificate @@ -86,7 +86,7 @@ func (c *ACM) AddTagsToCertificateRequest(input *AddTagsToCertificateInput) (req // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified certificate cannot be found in the caller's account, or the +// The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. // // * ErrCodeInvalidArnException "InvalidArnException" @@ -125,8 +125,8 @@ const opDeleteCertificate = "DeleteCertificate" // DeleteCertificateRequest generates a "aws/request.Request" representing the // client's request for the DeleteCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -173,7 +173,7 @@ func (c *ACM) DeleteCertificateRequest(input *DeleteCertificateInput) (req *requ // action. The certificate will not be available for use by AWS services integrated // with ACM. // -// You cannot delete an ACM Certificate that is being used by another AWS service. +// You cannot delete an ACM certificate that is being used by another AWS service. // To delete a certificate that is in use, the certificate association must // first be removed. // @@ -186,7 +186,7 @@ func (c *ACM) DeleteCertificateRequest(input *DeleteCertificateInput) (req *requ // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified certificate cannot be found in the caller's account, or the +// The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. // // * ErrCodeResourceInUseException "ResourceInUseException" @@ -222,8 +222,8 @@ const opDescribeCertificate = "DescribeCertificate" // DescribeCertificateRequest generates a "aws/request.Request" representing the // client's request for the DescribeCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -262,7 +262,7 @@ func (c *ACM) DescribeCertificateRequest(input *DescribeCertificateInput) (req * // DescribeCertificate API operation for AWS Certificate Manager. // -// Returns detailed metadata about the specified ACM Certificate. +// Returns detailed metadata about the specified ACM certificate. 
// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -273,7 +273,7 @@ func (c *ACM) DescribeCertificateRequest(input *DescribeCertificateInput) (req * // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified certificate cannot be found in the caller's account, or the +// The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. // // * ErrCodeInvalidArnException "InvalidArnException" @@ -301,12 +301,107 @@ func (c *ACM) DescribeCertificateWithContext(ctx aws.Context, input *DescribeCer return out, req.Send() } +const opExportCertificate = "ExportCertificate" + +// ExportCertificateRequest generates a "aws/request.Request" representing the +// client's request for the ExportCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ExportCertificate for more information on using the ExportCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ExportCertificateRequest method. +// req, resp := client.ExportCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-2015-12-08/ExportCertificate +func (c *ACM) ExportCertificateRequest(input *ExportCertificateInput) (req *request.Request, output *ExportCertificateOutput) { + op := &request.Operation{ + Name: opExportCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ExportCertificateInput{} + } + + output = &ExportCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// ExportCertificate API operation for AWS Certificate Manager. +// +// Exports a private certificate issued by a private certificate authority (CA) +// for use anywhere. You can export the certificate, the certificate chain, +// and the encrypted private key associated with the public key embedded in +// the certificate. You must store the private key securely. The private key +// is a 2048 bit RSA key. You must provide a passphrase for the private key +// when exporting it. You can use the following OpenSSL command to decrypt it +// later. Provide the passphrase when prompted. +// +// openssl rsa -in encrypted_key.pem -out decrypted_key.pem +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager's +// API operation ExportCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified certificate cannot be found in the caller's account or the +// caller's account cannot be found. 
+// +// * ErrCodeRequestInProgressException "RequestInProgressException" +// The certificate request is in process and the certificate in your account +// has not yet been issued. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-2015-12-08/ExportCertificate +func (c *ACM) ExportCertificate(input *ExportCertificateInput) (*ExportCertificateOutput, error) { + req, out := c.ExportCertificateRequest(input) + return out, req.Send() +} + +// ExportCertificateWithContext is the same as ExportCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See ExportCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACM) ExportCertificateWithContext(ctx aws.Context, input *ExportCertificateInput, opts ...request.Option) (*ExportCertificateOutput, error) { + req, out := c.ExportCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetCertificate = "GetCertificate" // GetCertificateRequest generates a "aws/request.Request" representing the // client's request for the GetCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -361,7 +456,7 @@ func (c *ACM) GetCertificateRequest(input *GetCertificateInput) (req *request.Re // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified certificate cannot be found in the caller's account, or the +// The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. // // * ErrCodeRequestInProgressException "RequestInProgressException" @@ -397,8 +492,8 @@ const opImportCertificate = "ImportCertificate" // ImportCertificateRequest generates a "aws/request.Request" representing the // client's request for the ImportCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -475,13 +570,17 @@ func (c *ACM) ImportCertificateRequest(input *ImportCertificateInput) (req *requ // * To import a new certificate, omit the CertificateArn argument. Include // this argument only when you want to replace a previously imported certificate. // -// * When you import a certificate by using the CLI or one of the SDKs, you -// must specify the certificate, the certificate chain, and the private key -// by their file names preceded by file://. 
For example, you can specify -// a certificate saved in the C:\temp folder as file://C:\temp\certificate_to_import.pem. +// * When you import a certificate by using the CLI, you must specify the +// certificate, the certificate chain, and the private key by their file +// names preceded by file://. For example, you can specify a certificate +// saved in the C:\temp folder as file://C:\temp\certificate_to_import.pem. // If you are making an HTTP or HTTPS Query request, include these arguments // as BLOBs. // +// * When you import a certificate by using an SDK, you must specify the +// certificate, the certificate chain, and the private key files in the manner +// required by the programming language you're using. +// // This operation returns the Amazon Resource Name (ARN) (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) // of the imported certificate. // @@ -494,15 +593,11 @@ func (c *ACM) ImportCertificateRequest(input *ImportCertificateInput) (req *requ // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified certificate cannot be found in the caller's account, or the +// The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. // // * ErrCodeLimitExceededException "LimitExceededException" -// An ACM limit has been exceeded. For example, you may have input more domains -// than are allowed or you've requested too many certificates for your account. -// See the exception message returned by ACM to determine which limit you have -// violated. For more information about ACM limits, see the Limits (http://docs.aws.amazon.com/acm/latest/userguide/acm-limits.html) -// topic. +// An ACM limit has been exceeded. // // See also, https://docs.aws.amazon.com/goto/WebAPI/acm-2015-12-08/ImportCertificate func (c *ACM) ImportCertificate(input *ImportCertificateInput) (*ImportCertificateOutput, error) { @@ -530,8 +625,8 @@ const opListCertificates = "ListCertificates" // ListCertificatesRequest generates a "aws/request.Request" representing the // client's request for the ListCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -662,8 +757,8 @@ const opListTagsForCertificate = "ListTagsForCertificate" // ListTagsForCertificateRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -702,9 +797,9 @@ func (c *ACM) ListTagsForCertificateRequest(input *ListTagsForCertificateInput) // ListTagsForCertificate API operation for AWS Certificate Manager. // -// Lists the tags that have been applied to the ACM Certificate. Use the certificate's +// Lists the tags that have been applied to the ACM certificate. 
Use the certificate's // Amazon Resource Name (ARN) to specify the certificate. To add a tag to an -// ACM Certificate, use the AddTagsToCertificate action. To delete a tag, use +// ACM certificate, use the AddTagsToCertificate action. To delete a tag, use // the RemoveTagsFromCertificate action. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -716,7 +811,7 @@ func (c *ACM) ListTagsForCertificateRequest(input *ListTagsForCertificateInput) // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified certificate cannot be found in the caller's account, or the +// The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. // // * ErrCodeInvalidArnException "InvalidArnException" @@ -748,8 +843,8 @@ const opRemoveTagsFromCertificate = "RemoveTagsFromCertificate" // RemoveTagsFromCertificateRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -790,13 +885,13 @@ func (c *ACM) RemoveTagsFromCertificateRequest(input *RemoveTagsFromCertificateI // RemoveTagsFromCertificate API operation for AWS Certificate Manager. // -// Remove one or more tags from an ACM Certificate. A tag consists of a key-value +// Remove one or more tags from an ACM certificate. A tag consists of a key-value // pair. If you do not specify the value portion of the tag when calling this // function, the tag will be removed regardless of value. If you specify a value, // the tag is removed only if it is associated with the specified value. // // To add tags to a certificate, use the AddTagsToCertificate action. To view -// all of the tags that have been applied to a specific ACM Certificate, use +// all of the tags that have been applied to a specific ACM certificate, use // the ListTagsForCertificate action. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -808,7 +903,7 @@ func (c *ACM) RemoveTagsFromCertificateRequest(input *RemoveTagsFromCertificateI // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified certificate cannot be found in the caller's account, or the +// The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. // // * ErrCodeInvalidArnException "InvalidArnException" @@ -844,8 +939,8 @@ const opRequestCertificate = "RequestCertificate" // RequestCertificateRequest generates a "aws/request.Request" representing the // client's request for the RequestCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
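This diff also adds the ACM `ExportCertificate` operation (see the `ExportCertificateRequest` block earlier in `service/acm/api.go`). A hedged sketch of calling it through the service client follows; the input and output field names (`CertificateArn`, `Passphrase`, `Certificate`) are not shown in this hunk and are assumptions based on the operation's documentation, and the ARN is a placeholder.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/acm"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := acm.New(sess)

	// Field names are assumed from the ExportCertificate documentation above.
	out, err := svc.ExportCertificate(&acm.ExportCertificateInput{
		CertificateArn: aws.String("arn:aws:acm:us-east-1:111122223333:certificate/example"),
		Passphrase:     []byte("passphrase-for-the-private-key"),
	})
	if err != nil {
		fmt.Println("export failed:", err)
		return
	}

	// The private key in the output is encrypted with the passphrase and can
	// be decrypted later with:
	//   openssl rsa -in encrypted_key.pem -out decrypted_key.pem
	fmt.Println(aws.StringValue(out.Certificate))
}
```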
@@ -884,20 +979,18 @@ func (c *ACM) RequestCertificateRequest(input *RequestCertificateInput) (req *re // RequestCertificate API operation for AWS Certificate Manager. // -// Requests an ACM Certificate for use with other AWS services. To request an -// ACM Certificate, you must specify the fully qualified domain name (FQDN) -// for your site in the DomainName parameter. You can also specify additional -// FQDNs in the SubjectAlternativeNames parameter if users can reach your site -// by using other names. +// Requests an ACM certificate for use with other AWS services. To request an +// ACM certificate, you must specify a fully qualified domain name (FQDN) in +// the DomainName parameter. You can also specify additional FQDNs in the SubjectAlternativeNames +// parameter. // -// For each domain name you specify, email is sent to the domain owner to request -// approval to issue the certificate. Email is sent to three registered contact -// addresses in the WHOIS database and to five common system administration -// addresses formed from the DomainName you enter or the optional ValidationDomain -// parameter. For more information, see Validate Domain Ownership (http://docs.aws.amazon.com/acm/latest/userguide/gs-acm-validate.html). -// -// After receiving approval from the domain owner, the ACM Certificate is issued. -// For more information, see the AWS Certificate Manager User Guide (http://docs.aws.amazon.com/acm/latest/userguide/). +// If you are requesting a private certificate, domain validation is not required. +// If you are requesting a public certificate, each domain name that you specify +// must be validated to verify that you own or control the domain. You can use +// DNS validation (http://docs.aws.amazon.com/acm/latest/userguide/gs-acm-validate-dns.html) +// or email validation (http://docs.aws.amazon.com/acm/latest/userguide/gs-acm-validate-email.html). +// We recommend that you use DNS validation. ACM issues public certificates +// after receiving approval from the domain owner. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -908,15 +1001,14 @@ func (c *ACM) RequestCertificateRequest(input *RequestCertificateInput) (req *re // // Returned Error Codes: // * ErrCodeLimitExceededException "LimitExceededException" -// An ACM limit has been exceeded. For example, you may have input more domains -// than are allowed or you've requested too many certificates for your account. -// See the exception message returned by ACM to determine which limit you have -// violated. For more information about ACM limits, see the Limits (http://docs.aws.amazon.com/acm/latest/userguide/acm-limits.html) -// topic. +// An ACM limit has been exceeded. // // * ErrCodeInvalidDomainValidationOptionsException "InvalidDomainValidationOptionsException" // One or more values in the DomainValidationOption structure is incorrect. // +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. 
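To make the updated RequestCertificate behavior concrete, here is an illustrative sketch of requesting a public certificate with the recommended DNS validation and then blocking on the WaitUntilCertificateValidated waiter added in waiters.go later in this patch. The domain names are placeholders, and `"DNS"` is the ValidationMethod enum value (`"EMAIL"` selects email validation); this is not part of the generated code.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/acm"
)

func main() {
	svc := acm.New(session.Must(session.NewSession()))

	// Request a public certificate using DNS validation.
	out, err := svc.RequestCertificate(&acm.RequestCertificateInput{
		DomainName:              aws.String("example.com"),
		SubjectAlternativeNames: []*string{aws.String("www.example.com")},
		ValidationMethod:        aws.String("DNS"),
	})
	if err != nil {
		log.Fatal(err)
	}
	arn := out.CertificateArn
	fmt.Println("requested certificate:", aws.StringValue(arn))

	// Once the validation CNAME records have been created in DNS, the waiter
	// polls DescribeCertificate until every domain validation reports SUCCESS
	// (or returns an error if the certificate fails or disappears).
	if err := svc.WaitUntilCertificateValidated(&acm.DescribeCertificateInput{
		CertificateArn: arn,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("certificate issued")
}
```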
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/acm-2015-12-08/RequestCertificate func (c *ACM) RequestCertificate(input *RequestCertificateInput) (*RequestCertificateOutput, error) { req, out := c.RequestCertificateRequest(input) @@ -943,8 +1035,8 @@ const opResendValidationEmail = "ResendValidationEmail" // ResendValidationEmailRequest generates a "aws/request.Request" representing the // client's request for the ResendValidationEmail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -986,12 +1078,12 @@ func (c *ACM) ResendValidationEmailRequest(input *ResendValidationEmailInput) (r // ResendValidationEmail API operation for AWS Certificate Manager. // // Resends the email that requests domain ownership validation. The domain owner -// or an authorized representative must approve the ACM Certificate before it +// or an authorized representative must approve the ACM certificate before it // can be issued. The certificate can be approved by clicking a link in the // mail to navigate to the Amazon certificate approval website and then clicking // I Approve. However, the validation email can be blocked by spam filters. // Therefore, if you do not receive the original mail, you can request that -// the mail be resent within 72 hours of requesting the ACM Certificate. If +// the mail be resent within 72 hours of requesting the ACM certificate. If // more than 72 hours have elapsed since your original request or since your // last attempt to resend validation mail, you must request a new certificate. // For more information about setting up your contact email addresses, see Configure @@ -1006,14 +1098,11 @@ func (c *ACM) ResendValidationEmailRequest(input *ResendValidationEmailInput) (r // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified certificate cannot be found in the caller's account, or the +// The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. // // * ErrCodeInvalidStateException "InvalidStateException" -// Processing has reached an invalid state. For example, this exception can -// occur if the specified domain is not using email validation, or the current -// certificate status does not permit the requested operation. See the exception -// message returned by ACM to determine which state is not valid. +// Processing has reached an invalid state. // // * ErrCodeInvalidArnException "InvalidArnException" // The requested Amazon Resource Name (ARN) does not refer to an existing resource. @@ -1043,10 +1132,104 @@ func (c *ACM) ResendValidationEmailWithContext(ctx aws.Context, input *ResendVal return out, req.Send() } +const opUpdateCertificateOptions = "UpdateCertificateOptions" + +// UpdateCertificateOptionsRequest generates a "aws/request.Request" representing the +// client's request for the UpdateCertificateOptions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See UpdateCertificateOptions for more information on using the UpdateCertificateOptions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateCertificateOptionsRequest method. +// req, resp := client.UpdateCertificateOptionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-2015-12-08/UpdateCertificateOptions +func (c *ACM) UpdateCertificateOptionsRequest(input *UpdateCertificateOptionsInput) (req *request.Request, output *UpdateCertificateOptionsOutput) { + op := &request.Operation{ + Name: opUpdateCertificateOptions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateCertificateOptionsInput{} + } + + output = &UpdateCertificateOptionsOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateCertificateOptions API operation for AWS Certificate Manager. +// +// Updates a certificate. Currently, you can use this function to specify whether +// to opt in to or out of recording your certificate in a certificate transparency +// log. For more information, see Opting Out of Certificate Transparency Logging +// (http://docs.aws.amazon.com/acm/latest/userguide/acm-bestpractices.html#best-practices-transparency). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager's +// API operation UpdateCertificateOptions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified certificate cannot be found in the caller's account or the +// caller's account cannot be found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// An ACM limit has been exceeded. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// Processing has reached an invalid state. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-2015-12-08/UpdateCertificateOptions +func (c *ACM) UpdateCertificateOptions(input *UpdateCertificateOptionsInput) (*UpdateCertificateOptionsOutput, error) { + req, out := c.UpdateCertificateOptionsRequest(input) + return out, req.Send() +} + +// UpdateCertificateOptionsWithContext is the same as UpdateCertificateOptions with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateCertificateOptions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
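As a brief illustration of the new UpdateCertificateOptions operation, the sketch below opts an existing certificate out of certificate transparency logging using the enum constants added in this patch. It is editorial rather than generated code, and the certificate ARN is a placeholder.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/acm"
)

func main() {
	svc := acm.New(session.Must(session.NewSession()))

	// Placeholder ARN; substitute the ARN of the certificate to update.
	arn := aws.String("arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012")

	// Opt the certificate out of certificate transparency logging. Pass
	// CertificateTransparencyLoggingPreferenceEnabled to opt back in.
	_, err := svc.UpdateCertificateOptions(&acm.UpdateCertificateOptionsInput{
		CertificateArn: arn,
		Options: &acm.CertificateOptions{
			CertificateTransparencyLoggingPreference: aws.String(acm.CertificateTransparencyLoggingPreferenceDisabled),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```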
+func (c *ACM) UpdateCertificateOptionsWithContext(ctx aws.Context, input *UpdateCertificateOptionsInput, opts ...request.Option) (*UpdateCertificateOptionsOutput, error) { + req, out := c.UpdateCertificateOptionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + type AddTagsToCertificateInput struct { _ struct{} `type:"structure"` - // String that contains the ARN of the ACM Certificate to which the tag is to + // String that contains the ARN of the ACM certificate to which the tag is to // be applied. This must be of the form: // // arn:aws:acm:region:123456789012:certificate/12345678-1234-1234-1234-123456789012 @@ -1141,9 +1324,15 @@ type CertificateDetail struct { // in the AWS General Reference. CertificateArn *string `min:"20" type:"string"` + // The Amazon Resource Name (ARN) of the ACM PCA private certificate authority + // (CA) that issued the certificate. This has the following format: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + CertificateAuthorityArn *string `min:"20" type:"string"` + // The time at which the certificate was requested. This value exists only when // the certificate type is AMAZON_ISSUED. - CreatedAt *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedAt *time.Time `type:"timestamp"` // The fully qualified domain name for the certificate, such as www.example.com // or example.com. @@ -1167,7 +1356,7 @@ type CertificateDetail struct { // The date and time at which the certificate was imported. This value exists // only when the certificate type is IMPORTED. - ImportedAt *time.Time `type:"timestamp" timestampFormat:"unix"` + ImportedAt *time.Time `type:"timestamp"` // A list of ARNs for the AWS resources that are using the certificate. A certificate // can be used by multiple AWS resources. @@ -1175,7 +1364,7 @@ type CertificateDetail struct { // The time at which the certificate was issued. This value exists only when // the certificate type is AMAZON_ISSUED. - IssuedAt *time.Time `type:"timestamp" timestampFormat:"unix"` + IssuedAt *time.Time `type:"timestamp"` // The name of the certificate authority that issued and signed the certificate. Issuer *string `type:"string"` @@ -1190,10 +1379,20 @@ type CertificateDetail struct { KeyUsages []*KeyUsage `type:"list"` // The time after which the certificate is not valid. - NotAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + NotAfter *time.Time `type:"timestamp"` // The time before which the certificate is not valid. - NotBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + NotBefore *time.Time `type:"timestamp"` + + // Value that specifies whether to add the certificate to a transparency log. + // Certificate transparency makes it possible to detect SSL certificates that + // have been mistakenly or maliciously issued. A browser might respond to certificate + // that has not been logged by showing an error message. The logs are cryptographically + // secure. + Options *CertificateOptions `type:"structure"` + + // Specifies whether the certificate is eligible for renewal. + RenewalEligibility *string `type:"string" enum:"RenewalEligibility"` // Contains information about the status of ACM's managed renewal (http://docs.aws.amazon.com/acm/latest/userguide/acm-renewal.html) // for the certificate. This field exists only when the certificate type is @@ -1206,7 +1405,7 @@ type CertificateDetail struct { // The time at which the certificate was revoked. 
This value exists only when // the certificate status is REVOKED. - RevokedAt *time.Time `type:"timestamp" timestampFormat:"unix"` + RevokedAt *time.Time `type:"timestamp"` // The serial number of the certificate. Serial *string `type:"string"` @@ -1254,6 +1453,12 @@ func (s *CertificateDetail) SetCertificateArn(v string) *CertificateDetail { return s } +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *CertificateDetail) SetCertificateAuthorityArn(v string) *CertificateDetail { + s.CertificateAuthorityArn = &v + return s +} + // SetCreatedAt sets the CreatedAt field's value. func (s *CertificateDetail) SetCreatedAt(v time.Time) *CertificateDetail { s.CreatedAt = &v @@ -1332,6 +1537,18 @@ func (s *CertificateDetail) SetNotBefore(v time.Time) *CertificateDetail { return s } +// SetOptions sets the Options field's value. +func (s *CertificateDetail) SetOptions(v *CertificateOptions) *CertificateDetail { + s.Options = v + return s +} + +// SetRenewalEligibility sets the RenewalEligibility field's value. +func (s *CertificateDetail) SetRenewalEligibility(v string) *CertificateDetail { + s.RenewalEligibility = &v + return s +} + // SetRenewalSummary sets the RenewalSummary field's value. func (s *CertificateDetail) SetRenewalSummary(v *RenewalSummary) *CertificateDetail { s.RenewalSummary = v @@ -1386,6 +1603,37 @@ func (s *CertificateDetail) SetType(v string) *CertificateDetail { return s } +// Structure that contains options for your certificate. Currently, you can +// use this only to specify whether to opt in to or out of certificate transparency +// logging. Some browsers require that public certificates issued for your domain +// be recorded in a log. Certificates that are not logged typically generate +// a browser error. Transparency makes it possible for you to detect SSL/TLS +// certificates that have been mistakenly or maliciously issued for your domain. +// For general information, see Certificate Transparency Logging (http://docs.aws.amazon.com/acm/latest/userguide/acm-concepts.html#concept-transparency). +type CertificateOptions struct { + _ struct{} `type:"structure"` + + // You can opt out of certificate transparency logging by specifying the DISABLED + // option. Opt in by specifying ENABLED. + CertificateTransparencyLoggingPreference *string `type:"string" enum:"CertificateTransparencyLoggingPreference"` +} + +// String returns the string representation +func (s CertificateOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CertificateOptions) GoString() string { + return s.String() +} + +// SetCertificateTransparencyLoggingPreference sets the CertificateTransparencyLoggingPreference field's value. +func (s *CertificateOptions) SetCertificateTransparencyLoggingPreference(v string) *CertificateOptions { + s.CertificateTransparencyLoggingPreference = &v + return s +} + // This structure is returned in the response object of ListCertificates action. type CertificateSummary struct { _ struct{} `type:"structure"` @@ -1428,7 +1676,7 @@ func (s *CertificateSummary) SetDomainName(v string) *CertificateSummary { type DeleteCertificateInput struct { _ struct{} `type:"structure"` - // String that contains the ARN of the ACM Certificate to be deleted. This must + // String that contains the ARN of the ACM certificate to be deleted. 
This must // be of the form: // // arn:aws:acm:region:123456789012:certificate/12345678-1234-1234-1234-123456789012 @@ -1489,7 +1737,7 @@ func (s DeleteCertificateOutput) GoString() string { type DescribeCertificateInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the ACM Certificate. The ARN must have + // The Amazon Resource Name (ARN) of the ACM certificate. The ARN must have // the following form: // // arn:aws:acm:region:123456789012:certificate/12345678-1234-1234-1234-123456789012 @@ -1711,6 +1959,115 @@ func (s *DomainValidationOption) SetValidationDomain(v string) *DomainValidation return s } +type ExportCertificateInput struct { + _ struct{} `type:"structure"` + + // An Amazon Resource Name (ARN) of the issued certificate. This must be of + // the form: + // + // arn:aws:acm:region:account:certificate/12345678-1234-1234-1234-123456789012 + // + // CertificateArn is a required field + CertificateArn *string `min:"20" type:"string" required:"true"` + + // Passphrase to associate with the encrypted exported private key. If you want + // to later decrypt the private key, you must have the passphrase. You can use + // the following OpenSSL command to decrypt a private key: + // + // openssl rsa -in encrypted_key.pem -out decrypted_key.pem + // + // Passphrase is automatically base64 encoded/decoded by the SDK. + // + // Passphrase is a required field + Passphrase []byte `min:"4" type:"blob" required:"true"` +} + +// String returns the string representation +func (s ExportCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ExportCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ExportCertificateInput"} + if s.CertificateArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateArn")) + } + if s.CertificateArn != nil && len(*s.CertificateArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("CertificateArn", 20)) + } + if s.Passphrase == nil { + invalidParams.Add(request.NewErrParamRequired("Passphrase")) + } + if s.Passphrase != nil && len(s.Passphrase) < 4 { + invalidParams.Add(request.NewErrParamMinLen("Passphrase", 4)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *ExportCertificateInput) SetCertificateArn(v string) *ExportCertificateInput { + s.CertificateArn = &v + return s +} + +// SetPassphrase sets the Passphrase field's value. +func (s *ExportCertificateInput) SetPassphrase(v []byte) *ExportCertificateInput { + s.Passphrase = v + return s +} + +type ExportCertificateOutput struct { + _ struct{} `type:"structure"` + + // The base64 PEM-encoded certificate. + Certificate *string `min:"1" type:"string"` + + // The base64 PEM-encoded certificate chain. This does not include the certificate + // that you are exporting. + CertificateChain *string `min:"1" type:"string"` + + // The PEM-encoded private key associated with the public key in the certificate. 
+ PrivateKey *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ExportCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportCertificateOutput) GoString() string { + return s.String() +} + +// SetCertificate sets the Certificate field's value. +func (s *ExportCertificateOutput) SetCertificate(v string) *ExportCertificateOutput { + s.Certificate = &v + return s +} + +// SetCertificateChain sets the CertificateChain field's value. +func (s *ExportCertificateOutput) SetCertificateChain(v string) *ExportCertificateOutput { + s.CertificateChain = &v + return s +} + +// SetPrivateKey sets the PrivateKey field's value. +func (s *ExportCertificateOutput) SetPrivateKey(v string) *ExportCertificateOutput { + s.PrivateKey = &v + return s +} + // The Extended Key Usage X.509 v3 extension defines one or more purposes for // which the public key can be used. This is in addition to or in place of the // basic purposes specified by the Key Usage extension. @@ -1858,7 +2215,7 @@ func (s *GetCertificateInput) SetCertificateArn(v string) *GetCertificateInput { type GetCertificateOutput struct { _ struct{} `type:"structure"` - // String that contains the ACM Certificate represented by the ARN specified + // String that contains the ACM certificate represented by the ARN specified // at input. Certificate *string `min:"1" type:"string"` @@ -2102,7 +2459,7 @@ func (s *ListCertificatesInput) SetNextToken(v string) *ListCertificatesInput { type ListCertificatesOutput struct { _ struct{} `type:"structure"` - // A list of ACM Certificates. + // A list of ACM certificates. CertificateSummaryList []*CertificateSummary `type:"list"` // When the list is truncated, this value is present and contains the value @@ -2135,7 +2492,7 @@ func (s *ListCertificatesOutput) SetNextToken(v string) *ListCertificatesOutput type ListTagsForCertificateInput struct { _ struct{} `type:"structure"` - // String that contains the ARN of the ACM Certificate for which you want to + // String that contains the ARN of the ACM certificate for which you want to // list the tags. This must have the following form: // // arn:aws:acm:region:123456789012:certificate/12345678-1234-1234-1234-123456789012 @@ -2337,10 +2694,20 @@ func (s *RenewalSummary) SetRenewalStatus(v string) *RenewalSummary { type RequestCertificateInput struct { _ struct{} `type:"structure"` - // Fully qualified domain name (FQDN), such as www.example.com, of the site - // that you want to secure with an ACM Certificate. Use an asterisk (*) to create - // a wildcard certificate that protects several sites in the same domain. For - // example, *.example.com protects www.example.com, site.example.com, and images.example.com. + // The Amazon Resource Name (ARN) of the private certificate authority (CA) + // that will be used to issue the certificate. If you do not provide an ARN + // and you are trying to request a private certificate, ACM will attempt to + // issue a public certificate. For more information about private CAs, see the + // AWS Certificate Manager Private Certificate Authority (PCA) (http://docs.aws.amazon.com/acm-pca/latest/userguide/PcaWelcome.html) + // user guide. 
The ARN must have the following form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + CertificateAuthorityArn *string `min:"20" type:"string"` + + // Fully qualified domain name (FQDN), such as www.example.com, that you want + // to secure with an ACM certificate. Use an asterisk (*) to create a wildcard + // certificate that protects several sites in the same domain. For example, + // *.example.com protects www.example.com, site.example.com, and images.example.com. // // The first domain name you enter cannot exceed 63 octets, including periods. // Each subsequent Subject Alternative Name (SAN), however, can be up to 253 @@ -2361,11 +2728,19 @@ type RequestCertificateInput struct { // requesting multiple certificates. IdempotencyToken *string `min:"1" type:"string"` + // Currently, you can use this parameter to specify whether to add the certificate + // to a certificate transparency log. Certificate transparency makes it possible + // to detect SSL/TLS certificates that have been mistakenly or maliciously issued. + // Certificates that have not been logged typically produce an error message + // in a browser. For more information, see Opting Out of Certificate Transparency + // Logging (http://docs.aws.amazon.com/acm/latest/userguide/acm-bestpractices.html#best-practices-transparency). + Options *CertificateOptions `type:"structure"` + // Additional FQDNs to be included in the Subject Alternative Name extension - // of the ACM Certificate. For example, add the name www.example.net to a certificate + // of the ACM certificate. For example, add the name www.example.net to a certificate // for which the DomainName field is www.example.com if users can reach your // site by using either name. The maximum number of domain names that you can - // add to an ACM Certificate is 100. However, the initial limit is 10 domain + // add to an ACM certificate is 100. However, the initial limit is 10 domain // names. If you need more than 10 names, you must request a limit increase. // For more information, see Limits (http://docs.aws.amazon.com/acm/latest/userguide/acm-limits.html). // @@ -2385,7 +2760,10 @@ type RequestCertificateInput struct { // the total length of the DNS name (63+1+63+1+63+1+62) exceeds 253 octets. SubjectAlternativeNames []*string `min:"1" type:"list"` - // The method you want to use to validate your domain. + // The method you want to use if you are requesting a public certificate to + // validate that you own or control domain. You can validate with DNS (http://docs.aws.amazon.com/acm/latest/userguide/gs-acm-validate-dns.html) + // or validate with email (http://docs.aws.amazon.com/acm/latest/userguide/gs-acm-validate-email.html). + // We recommend that you use DNS validation. ValidationMethod *string `type:"string" enum:"ValidationMethod"` } @@ -2402,6 +2780,9 @@ func (s RequestCertificateInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
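Tying the pieces above together, here is an illustrative sketch of requesting a private certificate from an ACM PCA CA via the new CertificateAuthorityArn field and then exporting it with the ExportCertificate operation added elsewhere in this patch. The CA ARN, domain name, and passphrase are placeholders; in practice you would wait for the certificate to reach the ISSUED state before exporting. This is not part of the generated code.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/acm"
)

func main() {
	svc := acm.New(session.Must(session.NewSession()))

	// Placeholder ARN of an ACM PCA private CA created beforehand.
	caArn := aws.String("arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012")

	// Private certificates are issued by the private CA, so no domain
	// validation step is required.
	reqOut, err := svc.RequestCertificate(&acm.RequestCertificateInput{
		CertificateAuthorityArn: caArn,
		DomainName:              aws.String("internal.example.com"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Export the issued certificate. The private key in the response is
	// encrypted with the passphrase; decrypt it with the openssl command
	// shown in the ExportCertificateInput documentation above.
	expOut, err := svc.ExportCertificate(&acm.ExportCertificateInput{
		CertificateArn: reqOut.CertificateArn,
		Passphrase:     []byte("placeholder-passphrase"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(expOut.Certificate))
	fmt.Println(aws.StringValue(expOut.CertificateChain))
}
```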
func (s *RequestCertificateInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "RequestCertificateInput"} + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 20)) + } if s.DomainName == nil { invalidParams.Add(request.NewErrParamRequired("DomainName")) } @@ -2434,6 +2815,12 @@ func (s *RequestCertificateInput) Validate() error { return nil } +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *RequestCertificateInput) SetCertificateAuthorityArn(v string) *RequestCertificateInput { + s.CertificateAuthorityArn = &v + return s +} + // SetDomainName sets the DomainName field's value. func (s *RequestCertificateInput) SetDomainName(v string) *RequestCertificateInput { s.DomainName = &v @@ -2452,6 +2839,12 @@ func (s *RequestCertificateInput) SetIdempotencyToken(v string) *RequestCertific return s } +// SetOptions sets the Options field's value. +func (s *RequestCertificateInput) SetOptions(v *CertificateOptions) *RequestCertificateInput { + s.Options = v + return s +} + // SetSubjectAlternativeNames sets the SubjectAlternativeNames field's value. func (s *RequestCertificateInput) SetSubjectAlternativeNames(v []*string) *RequestCertificateInput { s.SubjectAlternativeNames = v @@ -2703,6 +3096,81 @@ func (s *Tag) SetValue(v string) *Tag { return s } +type UpdateCertificateOptionsInput struct { + _ struct{} `type:"structure"` + + // ARN of the requested certificate to update. This must be of the form: + // + // arn:aws:acm:us-east-1:account:certificate/12345678-1234-1234-1234-123456789012 + // + // CertificateArn is a required field + CertificateArn *string `min:"20" type:"string" required:"true"` + + // Use to update the options for your certificate. Currently, you can specify + // whether to add your certificate to a transparency log. Certificate transparency + // makes it possible to detect SSL/TLS certificates that have been mistakenly + // or maliciously issued. Certificates that have not been logged typically produce + // an error message in a browser. + // + // Options is a required field + Options *CertificateOptions `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateCertificateOptionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateCertificateOptionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateCertificateOptionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateCertificateOptionsInput"} + if s.CertificateArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateArn")) + } + if s.CertificateArn != nil && len(*s.CertificateArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("CertificateArn", 20)) + } + if s.Options == nil { + invalidParams.Add(request.NewErrParamRequired("Options")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *UpdateCertificateOptionsInput) SetCertificateArn(v string) *UpdateCertificateOptionsInput { + s.CertificateArn = &v + return s +} + +// SetOptions sets the Options field's value. 
+func (s *UpdateCertificateOptionsInput) SetOptions(v *CertificateOptions) *UpdateCertificateOptionsInput { + s.Options = v + return s +} + +type UpdateCertificateOptionsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateCertificateOptionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateCertificateOptionsOutput) GoString() string { + return s.String() +} + const ( // CertificateStatusPendingValidation is a CertificateStatus enum value CertificateStatusPendingValidation = "PENDING_VALIDATION" @@ -2726,12 +3194,23 @@ const ( CertificateStatusFailed = "FAILED" ) +const ( + // CertificateTransparencyLoggingPreferenceEnabled is a CertificateTransparencyLoggingPreference enum value + CertificateTransparencyLoggingPreferenceEnabled = "ENABLED" + + // CertificateTransparencyLoggingPreferenceDisabled is a CertificateTransparencyLoggingPreference enum value + CertificateTransparencyLoggingPreferenceDisabled = "DISABLED" +) + const ( // CertificateTypeImported is a CertificateType enum value CertificateTypeImported = "IMPORTED" // CertificateTypeAmazonIssued is a CertificateType enum value CertificateTypeAmazonIssued = "AMAZON_ISSUED" + + // CertificateTypePrivate is a CertificateType enum value + CertificateTypePrivate = "PRIVATE" ) const ( @@ -2799,6 +3278,24 @@ const ( // FailureReasonCaaError is a FailureReason enum value FailureReasonCaaError = "CAA_ERROR" + // FailureReasonPcaLimitExceeded is a FailureReason enum value + FailureReasonPcaLimitExceeded = "PCA_LIMIT_EXCEEDED" + + // FailureReasonPcaInvalidArn is a FailureReason enum value + FailureReasonPcaInvalidArn = "PCA_INVALID_ARN" + + // FailureReasonPcaInvalidState is a FailureReason enum value + FailureReasonPcaInvalidState = "PCA_INVALID_STATE" + + // FailureReasonPcaRequestFailed is a FailureReason enum value + FailureReasonPcaRequestFailed = "PCA_REQUEST_FAILED" + + // FailureReasonPcaResourceNotFound is a FailureReason enum value + FailureReasonPcaResourceNotFound = "PCA_RESOURCE_NOT_FOUND" + + // FailureReasonPcaInvalidArgs is a FailureReason enum value + FailureReasonPcaInvalidArgs = "PCA_INVALID_ARGS" + // FailureReasonOther is a FailureReason enum value FailureReasonOther = "OTHER" ) @@ -2863,6 +3360,14 @@ const ( RecordTypeCname = "CNAME" ) +const ( + // RenewalEligibilityEligible is a RenewalEligibility enum value + RenewalEligibilityEligible = "ELIGIBLE" + + // RenewalEligibilityIneligible is a RenewalEligibility enum value + RenewalEligibilityIneligible = "INELIGIBLE" +) + const ( // RenewalStatusPendingAutoRenewal is a RenewalStatus enum value RenewalStatusPendingAutoRenewal = "PENDING_AUTO_RENEWAL" diff --git a/vendor/github.com/aws/aws-sdk-go/service/acm/errors.go b/vendor/github.com/aws/aws-sdk-go/service/acm/errors.go index d09bda760b4..421b0d1b625 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/acm/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/acm/errors.go @@ -19,10 +19,7 @@ const ( // ErrCodeInvalidStateException for service response error code // "InvalidStateException". // - // Processing has reached an invalid state. For example, this exception can - // occur if the specified domain is not using email validation, or the current - // certificate status does not permit the requested operation. See the exception - // message returned by ACM to determine which state is not valid. + // Processing has reached an invalid state. 
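The doc comments above repeatedly note that these operations return `awserr.Error` values to be inspected with runtime type assertions. A minimal sketch of that pattern, using the error code constants from errors.go, follows; the DescribeCertificate call and the ARN are placeholder choices, not part of the generated code.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/acm"
)

func main() {
	svc := acm.New(session.Must(session.NewSession()))

	// Look up a placeholder ARN so the call is likely to fail and we can
	// inspect the returned service error code.
	_, err := svc.DescribeCertificate(&acm.DescribeCertificateInput{
		CertificateArn: aws.String("arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"),
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case acm.ErrCodeResourceNotFoundException:
				fmt.Println("certificate not found:", aerr.Message())
			case acm.ErrCodeLimitExceededException:
				fmt.Println("ACM limit exceeded:", aerr.Message())
			default:
				fmt.Println(aerr.Code(), aerr.Message())
			}
			return
		}
		log.Fatal(err)
	}
}
```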
ErrCodeInvalidStateException = "InvalidStateException" // ErrCodeInvalidTagException for service response error code @@ -35,11 +32,7 @@ const ( // ErrCodeLimitExceededException for service response error code // "LimitExceededException". // - // An ACM limit has been exceeded. For example, you may have input more domains - // than are allowed or you've requested too many certificates for your account. - // See the exception message returned by ACM to determine which limit you have - // violated. For more information about ACM limits, see the Limits (http://docs.aws.amazon.com/acm/latest/userguide/acm-limits.html) - // topic. + // An ACM limit has been exceeded. ErrCodeLimitExceededException = "LimitExceededException" // ErrCodeRequestInProgressException for service response error code @@ -59,7 +52,7 @@ const ( // ErrCodeResourceNotFoundException for service response error code // "ResourceNotFoundException". // - // The specified certificate cannot be found in the caller's account, or the + // The specified certificate cannot be found in the caller's account or the // caller's account cannot be found. ErrCodeResourceNotFoundException = "ResourceNotFoundException" diff --git a/vendor/github.com/aws/aws-sdk-go/service/acm/service.go b/vendor/github.com/aws/aws-sdk-go/service/acm/service.go index b083c37d66b..9817d0c0a5a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/acm/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/acm/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "acm" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "acm" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "ACM" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ACM client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/acm/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/acm/waiters.go new file mode 100644 index 00000000000..6d63b88bc04 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/acm/waiters.go @@ -0,0 +1,71 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package acm + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// WaitUntilCertificateValidated uses the ACM API operation +// DescribeCertificate to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *ACM) WaitUntilCertificateValidated(input *DescribeCertificateInput) error { + return c.WaitUntilCertificateValidatedWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilCertificateValidatedWithContext is an extended version of WaitUntilCertificateValidated. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACM) WaitUntilCertificateValidatedWithContext(ctx aws.Context, input *DescribeCertificateInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilCertificateValidated", + MaxAttempts: 40, + Delay: request.ConstantWaiterDelay(60 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "Certificate.DomainValidationOptions[].ValidationStatus", + Expected: "SUCCESS", + }, + { + State: request.RetryWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Certificate.DomainValidationOptions[].ValidationStatus", + Expected: "PENDING_VALIDATION", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathWaiterMatch, Argument: "Certificate.Status", + Expected: "FAILED", + }, + { + State: request.FailureWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ResourceNotFoundException", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeCertificateInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeCertificateRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/acmpca/api.go b/vendor/github.com/aws/aws-sdk-go/service/acmpca/api.go new file mode 100644 index 00000000000..02db27d9708 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/acmpca/api.go @@ -0,0 +1,4053 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package acmpca + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +const opCreateCertificateAuthority = "CreateCertificateAuthority" + +// CreateCertificateAuthorityRequest generates a "aws/request.Request" representing the +// client's request for the CreateCertificateAuthority operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCertificateAuthority for more information on using the CreateCertificateAuthority +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateCertificateAuthorityRequest method. 
+// req, resp := client.CreateCertificateAuthorityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/CreateCertificateAuthority +func (c *ACMPCA) CreateCertificateAuthorityRequest(input *CreateCertificateAuthorityInput) (req *request.Request, output *CreateCertificateAuthorityOutput) { + op := &request.Operation{ + Name: opCreateCertificateAuthority, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateCertificateAuthorityInput{} + } + + output = &CreateCertificateAuthorityOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCertificateAuthority API operation for AWS Certificate Manager Private Certificate Authority. +// +// Creates a private subordinate certificate authority (CA). You must specify +// the CA configuration, the revocation configuration, the CA type, and an optional +// idempotency token. The CA configuration specifies the name of the algorithm +// and key size to be used to create the CA private key, the type of signing +// algorithm that the CA uses to sign, and X.500 subject information. The CRL +// (certificate revocation list) configuration specifies the CRL expiration +// period in days (the validity period of the CRL), the Amazon S3 bucket that +// will contain the CRL, and a CNAME alias for the S3 bucket that is included +// in certificates issued by the CA. If successful, this operation returns the +// Amazon Resource Name (ARN) of the CA. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation CreateCertificateAuthority for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidArgsException "InvalidArgsException" +// One or more of the specified arguments was not valid. +// +// * ErrCodeInvalidPolicyException "InvalidPolicyException" +// The S3 bucket policy is not valid. The policy must give ACM PCA rights to +// read from and write to the bucket and find the bucket location. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// An ACM PCA limit has been exceeded. See the exception message returned to +// determine the limit that was exceeded. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/CreateCertificateAuthority +func (c *ACMPCA) CreateCertificateAuthority(input *CreateCertificateAuthorityInput) (*CreateCertificateAuthorityOutput, error) { + req, out := c.CreateCertificateAuthorityRequest(input) + return out, req.Send() +} + +// CreateCertificateAuthorityWithContext is the same as CreateCertificateAuthority with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCertificateAuthority for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
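For orientation, a hedged sketch of creating a subordinate private CA with this new acmpca client. The CertificateAuthorityConfiguration and ASN1Subject shapes are mirrored from the ACM PCA API definitions later in this file; the literal enum strings ("SUBORDINATE", "RSA_2048", "SHA256WITHRSA") are the wire values, and the subject fields and idempotency token are placeholders. Treat it as illustrative rather than generated code.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/acmpca"
)

func main() {
	svc := acmpca.New(session.Must(session.NewSession()))

	// Create a subordinate private CA. The key algorithm, signing algorithm,
	// and subject below are placeholder choices.
	out, err := svc.CreateCertificateAuthority(&acmpca.CreateCertificateAuthorityInput{
		CertificateAuthorityType: aws.String("SUBORDINATE"),
		CertificateAuthorityConfiguration: &acmpca.CertificateAuthorityConfiguration{
			KeyAlgorithm:     aws.String("RSA_2048"),
			SigningAlgorithm: aws.String("SHA256WITHRSA"),
			Subject: &acmpca.ASN1Subject{
				CommonName:   aws.String("internal.example.com"),
				Organization: aws.String("Example Corp"),
			},
		},
		IdempotencyToken: aws.String("example-token"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("private CA:", aws.StringValue(out.CertificateAuthorityArn))
}
```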
+func (c *ACMPCA) CreateCertificateAuthorityWithContext(ctx aws.Context, input *CreateCertificateAuthorityInput, opts ...request.Option) (*CreateCertificateAuthorityOutput, error) { + req, out := c.CreateCertificateAuthorityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateCertificateAuthorityAuditReport = "CreateCertificateAuthorityAuditReport" + +// CreateCertificateAuthorityAuditReportRequest generates a "aws/request.Request" representing the +// client's request for the CreateCertificateAuthorityAuditReport operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCertificateAuthorityAuditReport for more information on using the CreateCertificateAuthorityAuditReport +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateCertificateAuthorityAuditReportRequest method. +// req, resp := client.CreateCertificateAuthorityAuditReportRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/CreateCertificateAuthorityAuditReport +func (c *ACMPCA) CreateCertificateAuthorityAuditReportRequest(input *CreateCertificateAuthorityAuditReportInput) (req *request.Request, output *CreateCertificateAuthorityAuditReportOutput) { + op := &request.Operation{ + Name: opCreateCertificateAuthorityAuditReport, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateCertificateAuthorityAuditReportInput{} + } + + output = &CreateCertificateAuthorityAuditReportOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCertificateAuthorityAuditReport API operation for AWS Certificate Manager Private Certificate Authority. +// +// Creates an audit report that lists every time that the your CA private key +// is used. The report is saved in the Amazon S3 bucket that you specify on +// input. The IssueCertificate and RevokeCertificate operations use the private +// key. You can generate a new report every 30 minutes. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation CreateCertificateAuthorityAuditReport for usage and error information. +// +// Returned Error Codes: +// * ErrCodeRequestInProgressException "RequestInProgressException" +// Your request is already in progress. +// +// * ErrCodeRequestFailedException "RequestFailedException" +// The request has failed for an unspecified reason. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. 
+// +// * ErrCodeInvalidArgsException "InvalidArgsException" +// One or more of the specified arguments was not valid. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/CreateCertificateAuthorityAuditReport +func (c *ACMPCA) CreateCertificateAuthorityAuditReport(input *CreateCertificateAuthorityAuditReportInput) (*CreateCertificateAuthorityAuditReportOutput, error) { + req, out := c.CreateCertificateAuthorityAuditReportRequest(input) + return out, req.Send() +} + +// CreateCertificateAuthorityAuditReportWithContext is the same as CreateCertificateAuthorityAuditReport with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCertificateAuthorityAuditReport for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) CreateCertificateAuthorityAuditReportWithContext(ctx aws.Context, input *CreateCertificateAuthorityAuditReportInput, opts ...request.Option) (*CreateCertificateAuthorityAuditReportOutput, error) { + req, out := c.CreateCertificateAuthorityAuditReportRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteCertificateAuthority = "DeleteCertificateAuthority" + +// DeleteCertificateAuthorityRequest generates a "aws/request.Request" representing the +// client's request for the DeleteCertificateAuthority operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteCertificateAuthority for more information on using the DeleteCertificateAuthority +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteCertificateAuthorityRequest method. +// req, resp := client.DeleteCertificateAuthorityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/DeleteCertificateAuthority +func (c *ACMPCA) DeleteCertificateAuthorityRequest(input *DeleteCertificateAuthorityInput) (req *request.Request, output *DeleteCertificateAuthorityOutput) { + op := &request.Operation{ + Name: opDeleteCertificateAuthority, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteCertificateAuthorityInput{} + } + + output = &DeleteCertificateAuthorityOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteCertificateAuthority API operation for AWS Certificate Manager Private Certificate Authority. +// +// Deletes a private certificate authority (CA). 
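Alongside the DeleteCertificateAuthority description that follows, here is a minimal sketch of deleting a private CA while shortening its restoration window, assuming the PermanentDeletionTimeInDays field mentioned in that description. The CA ARN is a placeholder, and the CA must already be DISABLED (via UpdateCertificateAuthority), CREATING, or PENDING_CERTIFICATE; this example is editorial, not generated code.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/acmpca"
)

func main() {
	svc := acmpca.New(session.Must(session.NewSession()))

	// Delete the CA and shorten the restoration window from the default
	// 30 days to 7 days. The ARN is a placeholder.
	_, err := svc.DeleteCertificateAuthority(&acmpca.DeleteCertificateAuthorityInput{
		CertificateAuthorityArn:     aws.String("arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012"),
		PermanentDeletionTimeInDays: aws.Int64(7),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```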
You must provide the ARN (Amazon +// Resource Name) of the private CA that you want to delete. You can find the +// ARN by calling the ListCertificateAuthorities operation. Before you can delete +// a CA, you must disable it. Call the UpdateCertificateAuthority operation +// and set the CertificateAuthorityStatus parameter to DISABLED. +// +// Additionally, you can delete a CA if you are waiting for it to be created +// (the Status field of the CertificateAuthority is CREATING). You can also +// delete it if the CA has been created but you haven't yet imported the signed +// certificate (the Status is PENDING_CERTIFICATE) into ACM PCA. +// +// If the CA is in one of the aforementioned states and you call DeleteCertificateAuthority, +// the CA's status changes to DELETED. However, the CA won't be permentantly +// deleted until the restoration period has passed. By default, if you do not +// set the PermanentDeletionTimeInDays parameter, the CA remains restorable +// for 30 days. You can set the parameter from 7 to 30 days. The DescribeCertificateAuthority +// operation returns the time remaining in the restoration window of a Private +// CA in the DELETED state. To restore an eligable CA, call the RestoreCertificateAuthority +// operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation DeleteCertificateAuthority for usage and error information. +// +// Returned Error Codes: +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// A previous update to your private CA is still ongoing. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/DeleteCertificateAuthority +func (c *ACMPCA) DeleteCertificateAuthority(input *DeleteCertificateAuthorityInput) (*DeleteCertificateAuthorityOutput, error) { + req, out := c.DeleteCertificateAuthorityRequest(input) + return out, req.Send() +} + +// DeleteCertificateAuthorityWithContext is the same as DeleteCertificateAuthority with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteCertificateAuthority for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) DeleteCertificateAuthorityWithContext(ctx aws.Context, input *DeleteCertificateAuthorityInput, opts ...request.Option) (*DeleteCertificateAuthorityOutput, error) { + req, out := c.DeleteCertificateAuthorityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeCertificateAuthority = "DescribeCertificateAuthority" + +// DescribeCertificateAuthorityRequest generates a "aws/request.Request" representing the +// client's request for the DescribeCertificateAuthority operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeCertificateAuthority for more information on using the DescribeCertificateAuthority +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeCertificateAuthorityRequest method. +// req, resp := client.DescribeCertificateAuthorityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/DescribeCertificateAuthority +func (c *ACMPCA) DescribeCertificateAuthorityRequest(input *DescribeCertificateAuthorityInput) (req *request.Request, output *DescribeCertificateAuthorityOutput) { + op := &request.Operation{ + Name: opDescribeCertificateAuthority, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeCertificateAuthorityInput{} + } + + output = &DescribeCertificateAuthorityOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeCertificateAuthority API operation for AWS Certificate Manager Private Certificate Authority. +// +// Lists information about your private certificate authority (CA). You specify +// the private CA on input by its ARN (Amazon Resource Name). The output contains +// the status of your CA. This can be any of the following: +// +// * CREATING - ACM PCA is creating your private certificate authority. +// +// * PENDING_CERTIFICATE - The certificate is pending. You must use your +// on-premises root or subordinate CA to sign your private CA CSR and then +// import it into PCA. +// +// * ACTIVE - Your private CA is active. +// +// * DISABLED - Your private CA has been disabled. +// +// * EXPIRED - Your private CA certificate has expired. +// +// * FAILED - Your private CA has failed. Your CA can fail because of problems +// such a network outage or backend AWS failure or other errors. A failed +// CA can never return to the pending state. You must create a new CA. +// +// * DELETED - Your private CA is within the restoration period, after which +// it will be permanently deleted. The length of time remaining in the CA's +// restoration period will also be included in this operation's output. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation DescribeCertificateAuthority for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. 
+// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/DescribeCertificateAuthority +func (c *ACMPCA) DescribeCertificateAuthority(input *DescribeCertificateAuthorityInput) (*DescribeCertificateAuthorityOutput, error) { + req, out := c.DescribeCertificateAuthorityRequest(input) + return out, req.Send() +} + +// DescribeCertificateAuthorityWithContext is the same as DescribeCertificateAuthority with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeCertificateAuthority for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) DescribeCertificateAuthorityWithContext(ctx aws.Context, input *DescribeCertificateAuthorityInput, opts ...request.Option) (*DescribeCertificateAuthorityOutput, error) { + req, out := c.DescribeCertificateAuthorityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeCertificateAuthorityAuditReport = "DescribeCertificateAuthorityAuditReport" + +// DescribeCertificateAuthorityAuditReportRequest generates a "aws/request.Request" representing the +// client's request for the DescribeCertificateAuthorityAuditReport operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeCertificateAuthorityAuditReport for more information on using the DescribeCertificateAuthorityAuditReport +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeCertificateAuthorityAuditReportRequest method. +// req, resp := client.DescribeCertificateAuthorityAuditReportRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/DescribeCertificateAuthorityAuditReport +func (c *ACMPCA) DescribeCertificateAuthorityAuditReportRequest(input *DescribeCertificateAuthorityAuditReportInput) (req *request.Request, output *DescribeCertificateAuthorityAuditReportOutput) { + op := &request.Operation{ + Name: opDescribeCertificateAuthorityAuditReport, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeCertificateAuthorityAuditReportInput{} + } + + output = &DescribeCertificateAuthorityAuditReportOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeCertificateAuthorityAuditReport API operation for AWS Certificate Manager Private Certificate Authority. +// +// Lists information about a specific audit report created by calling the CreateCertificateAuthorityAuditReport +// operation. Audit information is created every time the certificate authority +// (CA) private key is used. 
The private key is used when you call the IssueCertificate +// operation or the RevokeCertificate operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation DescribeCertificateAuthorityAuditReport for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// * ErrCodeInvalidArgsException "InvalidArgsException" +// One or more of the specified arguments was not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/DescribeCertificateAuthorityAuditReport +func (c *ACMPCA) DescribeCertificateAuthorityAuditReport(input *DescribeCertificateAuthorityAuditReportInput) (*DescribeCertificateAuthorityAuditReportOutput, error) { + req, out := c.DescribeCertificateAuthorityAuditReportRequest(input) + return out, req.Send() +} + +// DescribeCertificateAuthorityAuditReportWithContext is the same as DescribeCertificateAuthorityAuditReport with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeCertificateAuthorityAuditReport for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) DescribeCertificateAuthorityAuditReportWithContext(ctx aws.Context, input *DescribeCertificateAuthorityAuditReportInput, opts ...request.Option) (*DescribeCertificateAuthorityAuditReportOutput, error) { + req, out := c.DescribeCertificateAuthorityAuditReportRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCertificate = "GetCertificate" + +// GetCertificateRequest generates a "aws/request.Request" representing the +// client's request for the GetCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCertificate for more information on using the GetCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCertificateRequest method. 
+// req, resp := client.GetCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/GetCertificate +func (c *ACMPCA) GetCertificateRequest(input *GetCertificateInput) (req *request.Request, output *GetCertificateOutput) { + op := &request.Operation{ + Name: opGetCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetCertificateInput{} + } + + output = &GetCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCertificate API operation for AWS Certificate Manager Private Certificate Authority. +// +// Retrieves a certificate from your private CA. The ARN of the certificate +// is returned when you call the IssueCertificate operation. You must specify +// both the ARN of your private CA and the ARN of the issued certificate when +// calling the GetCertificate operation. You can retrieve the certificate if +// it is in the ISSUED state. You can call the CreateCertificateAuthorityAuditReport +// operation to create a report that contains information about all of the certificates +// issued and revoked by your private CA. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation GetCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeRequestInProgressException "RequestInProgressException" +// Your request is already in progress. +// +// * ErrCodeRequestFailedException "RequestFailedException" +// The request has failed for an unspecified reason. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/GetCertificate +func (c *ACMPCA) GetCertificate(input *GetCertificateInput) (*GetCertificateOutput, error) { + req, out := c.GetCertificateRequest(input) + return out, req.Send() +} + +// GetCertificateWithContext is the same as GetCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See GetCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) GetCertificateWithContext(ctx aws.Context, input *GetCertificateInput, opts ...request.Option) (*GetCertificateOutput, error) { + req, out := c.GetCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opGetCertificateAuthorityCertificate = "GetCertificateAuthorityCertificate" + +// GetCertificateAuthorityCertificateRequest generates a "aws/request.Request" representing the +// client's request for the GetCertificateAuthorityCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCertificateAuthorityCertificate for more information on using the GetCertificateAuthorityCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCertificateAuthorityCertificateRequest method. +// req, resp := client.GetCertificateAuthorityCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/GetCertificateAuthorityCertificate +func (c *ACMPCA) GetCertificateAuthorityCertificateRequest(input *GetCertificateAuthorityCertificateInput) (req *request.Request, output *GetCertificateAuthorityCertificateOutput) { + op := &request.Operation{ + Name: opGetCertificateAuthorityCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetCertificateAuthorityCertificateInput{} + } + + output = &GetCertificateAuthorityCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCertificateAuthorityCertificate API operation for AWS Certificate Manager Private Certificate Authority. +// +// Retrieves the certificate and certificate chain for your private certificate +// authority (CA). Both the certificate and the chain are base64 PEM-encoded. +// The chain does not include the CA certificate. Each certificate in the chain +// signs the one before it. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation GetCertificateAuthorityCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/GetCertificateAuthorityCertificate +func (c *ACMPCA) GetCertificateAuthorityCertificate(input *GetCertificateAuthorityCertificateInput) (*GetCertificateAuthorityCertificateOutput, error) { + req, out := c.GetCertificateAuthorityCertificateRequest(input) + return out, req.Send() +} + +// GetCertificateAuthorityCertificateWithContext is the same as GetCertificateAuthorityCertificate with the addition of +// the ability to pass a context and additional request options. 
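+//
+// As a rough usage sketch only (the client value and the caARN string below
+// are assumed for illustration and are not part of the generated SDK):
+//
+// ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+// defer cancel()
+// out, err := client.GetCertificateAuthorityCertificateWithContext(ctx,
+// &acmpca.GetCertificateAuthorityCertificateInput{
+// CertificateAuthorityArn: aws.String(caARN), // ARN of an ACTIVE private CA (assumed)
+// })
+// if err == nil {
+// fmt.Println(aws.StringValue(out.Certificate)) // base64 PEM-encoded CA certificate
+// }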
+// +// See GetCertificateAuthorityCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) GetCertificateAuthorityCertificateWithContext(ctx aws.Context, input *GetCertificateAuthorityCertificateInput, opts ...request.Option) (*GetCertificateAuthorityCertificateOutput, error) { + req, out := c.GetCertificateAuthorityCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCertificateAuthorityCsr = "GetCertificateAuthorityCsr" + +// GetCertificateAuthorityCsrRequest generates a "aws/request.Request" representing the +// client's request for the GetCertificateAuthorityCsr operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCertificateAuthorityCsr for more information on using the GetCertificateAuthorityCsr +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCertificateAuthorityCsrRequest method. +// req, resp := client.GetCertificateAuthorityCsrRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/GetCertificateAuthorityCsr +func (c *ACMPCA) GetCertificateAuthorityCsrRequest(input *GetCertificateAuthorityCsrInput) (req *request.Request, output *GetCertificateAuthorityCsrOutput) { + op := &request.Operation{ + Name: opGetCertificateAuthorityCsr, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetCertificateAuthorityCsrInput{} + } + + output = &GetCertificateAuthorityCsrOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCertificateAuthorityCsr API operation for AWS Certificate Manager Private Certificate Authority. +// +// Retrieves the certificate signing request (CSR) for your private certificate +// authority (CA). The CSR is created when you call the CreateCertificateAuthority +// operation. Take the CSR to your on-premises X.509 infrastructure and sign +// it by using your root or a subordinate CA. Then import the signed certificate +// back into ACM PCA by calling the ImportCertificateAuthorityCertificate operation. +// The CSR is returned as a base64 PEM-encoded string. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation GetCertificateAuthorityCsr for usage and error information. +// +// Returned Error Codes: +// * ErrCodeRequestInProgressException "RequestInProgressException" +// Your request is already in progress. +// +// * ErrCodeRequestFailedException "RequestFailedException" +// The request has failed for an unspecified reason. 
+// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/GetCertificateAuthorityCsr +func (c *ACMPCA) GetCertificateAuthorityCsr(input *GetCertificateAuthorityCsrInput) (*GetCertificateAuthorityCsrOutput, error) { + req, out := c.GetCertificateAuthorityCsrRequest(input) + return out, req.Send() +} + +// GetCertificateAuthorityCsrWithContext is the same as GetCertificateAuthorityCsr with the addition of +// the ability to pass a context and additional request options. +// +// See GetCertificateAuthorityCsr for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) GetCertificateAuthorityCsrWithContext(ctx aws.Context, input *GetCertificateAuthorityCsrInput, opts ...request.Option) (*GetCertificateAuthorityCsrOutput, error) { + req, out := c.GetCertificateAuthorityCsrRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opImportCertificateAuthorityCertificate = "ImportCertificateAuthorityCertificate" + +// ImportCertificateAuthorityCertificateRequest generates a "aws/request.Request" representing the +// client's request for the ImportCertificateAuthorityCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ImportCertificateAuthorityCertificate for more information on using the ImportCertificateAuthorityCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ImportCertificateAuthorityCertificateRequest method. 
+// req, resp := client.ImportCertificateAuthorityCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/ImportCertificateAuthorityCertificate +func (c *ACMPCA) ImportCertificateAuthorityCertificateRequest(input *ImportCertificateAuthorityCertificateInput) (req *request.Request, output *ImportCertificateAuthorityCertificateOutput) { + op := &request.Operation{ + Name: opImportCertificateAuthorityCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ImportCertificateAuthorityCertificateInput{} + } + + output = &ImportCertificateAuthorityCertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// ImportCertificateAuthorityCertificate API operation for AWS Certificate Manager Private Certificate Authority. +// +// Imports your signed private CA certificate into ACM PCA. Before you can call +// this operation, you must create the private certificate authority by calling +// the CreateCertificateAuthority operation. You must then generate a certificate +// signing request (CSR) by calling the GetCertificateAuthorityCsr operation. +// Take the CSR to your on-premises CA and use the root certificate or a subordinate +// certificate to sign it. Create a certificate chain and copy the signed certificate +// and the certificate chain to your working directory. +// +// Your certificate chain must not include the private CA certificate that you +// are importing. +// +// Your on-premises CA certificate must be the last certificate in your chain. +// The subordinate certificate, if any, that your root CA signed must be next +// to last. The subordinate certificate signed by the preceding subordinate +// CA must come next, and so on until your chain is built. +// +// The chain must be PEM-encoded. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation ImportCertificateAuthorityCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// A previous update to your private CA is still ongoing. +// +// * ErrCodeRequestInProgressException "RequestInProgressException" +// Your request is already in progress. +// +// * ErrCodeRequestFailedException "RequestFailedException" +// The request has failed for an unspecified reason. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// * ErrCodeMalformedCertificateException "MalformedCertificateException" +// One or more fields in the certificate are invalid. 
+// +// * ErrCodeCertificateMismatchException "CertificateMismatchException" +// The certificate authority certificate you are importing does not comply with +// conditions specified in the certificate that signed it. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/ImportCertificateAuthorityCertificate +func (c *ACMPCA) ImportCertificateAuthorityCertificate(input *ImportCertificateAuthorityCertificateInput) (*ImportCertificateAuthorityCertificateOutput, error) { + req, out := c.ImportCertificateAuthorityCertificateRequest(input) + return out, req.Send() +} + +// ImportCertificateAuthorityCertificateWithContext is the same as ImportCertificateAuthorityCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See ImportCertificateAuthorityCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) ImportCertificateAuthorityCertificateWithContext(ctx aws.Context, input *ImportCertificateAuthorityCertificateInput, opts ...request.Option) (*ImportCertificateAuthorityCertificateOutput, error) { + req, out := c.ImportCertificateAuthorityCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opIssueCertificate = "IssueCertificate" + +// IssueCertificateRequest generates a "aws/request.Request" representing the +// client's request for the IssueCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See IssueCertificate for more information on using the IssueCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the IssueCertificateRequest method. +// req, resp := client.IssueCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/IssueCertificate +func (c *ACMPCA) IssueCertificateRequest(input *IssueCertificateInput) (req *request.Request, output *IssueCertificateOutput) { + op := &request.Operation{ + Name: opIssueCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &IssueCertificateInput{} + } + + output = &IssueCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// IssueCertificate API operation for AWS Certificate Manager Private Certificate Authority. +// +// Uses your private certificate authority (CA) to issue a client certificate. +// This operation returns the Amazon Resource Name (ARN) of the certificate. +// You can retrieve the certificate by calling the GetCertificate operation +// and specifying the ARN. +// +// You cannot use the ACM ListCertificateAuthorities operation to retrieve the +// ARNs of the certificates that you issue by using ACM PCA. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation IssueCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceededException" +// An ACM PCA limit has been exceeded. See the exception message returned to +// determine the limit that was exceeded. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// * ErrCodeInvalidArgsException "InvalidArgsException" +// One or more of the specified arguments was not valid. +// +// * ErrCodeMalformedCSRException "MalformedCSRException" +// The certificate signing request is invalid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/IssueCertificate +func (c *ACMPCA) IssueCertificate(input *IssueCertificateInput) (*IssueCertificateOutput, error) { + req, out := c.IssueCertificateRequest(input) + return out, req.Send() +} + +// IssueCertificateWithContext is the same as IssueCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See IssueCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) IssueCertificateWithContext(ctx aws.Context, input *IssueCertificateInput, opts ...request.Option) (*IssueCertificateOutput, error) { + req, out := c.IssueCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListCertificateAuthorities = "ListCertificateAuthorities" + +// ListCertificateAuthoritiesRequest generates a "aws/request.Request" representing the +// client's request for the ListCertificateAuthorities operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListCertificateAuthorities for more information on using the ListCertificateAuthorities +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListCertificateAuthoritiesRequest method. 
+// req, resp := client.ListCertificateAuthoritiesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/ListCertificateAuthorities +func (c *ACMPCA) ListCertificateAuthoritiesRequest(input *ListCertificateAuthoritiesInput) (req *request.Request, output *ListCertificateAuthoritiesOutput) { + op := &request.Operation{ + Name: opListCertificateAuthorities, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListCertificateAuthoritiesInput{} + } + + output = &ListCertificateAuthoritiesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListCertificateAuthorities API operation for AWS Certificate Manager Private Certificate Authority. +// +// Lists the private certificate authorities that you created by using the CreateCertificateAuthority +// operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation ListCertificateAuthorities for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The token specified in the NextToken argument is not valid. Use the token +// returned from your previous call to ListCertificateAuthorities. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/ListCertificateAuthorities +func (c *ACMPCA) ListCertificateAuthorities(input *ListCertificateAuthoritiesInput) (*ListCertificateAuthoritiesOutput, error) { + req, out := c.ListCertificateAuthoritiesRequest(input) + return out, req.Send() +} + +// ListCertificateAuthoritiesWithContext is the same as ListCertificateAuthorities with the addition of +// the ability to pass a context and additional request options. +// +// See ListCertificateAuthorities for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) ListCertificateAuthoritiesWithContext(ctx aws.Context, input *ListCertificateAuthoritiesInput, opts ...request.Option) (*ListCertificateAuthoritiesOutput, error) { + req, out := c.ListCertificateAuthoritiesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListTags = "ListTags" + +// ListTagsRequest generates a "aws/request.Request" representing the +// client's request for the ListTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTags for more information on using the ListTags +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsRequest method. 
+// req, resp := client.ListTagsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/ListTags +func (c *ACMPCA) ListTagsRequest(input *ListTagsInput) (req *request.Request, output *ListTagsOutput) { + op := &request.Operation{ + Name: opListTags, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListTagsInput{} + } + + output = &ListTagsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTags API operation for AWS Certificate Manager Private Certificate Authority. +// +// Lists the tags, if any, that are associated with your private CA. Tags are +// labels that you can use to identify and organize your CAs. Each tag consists +// of a key and an optional value. Call the TagCertificateAuthority operation +// to add one or more tags to your CA. Call the UntagCertificateAuthority operation +// to remove tags. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation ListTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/ListTags +func (c *ACMPCA) ListTags(input *ListTagsInput) (*ListTagsOutput, error) { + req, out := c.ListTagsRequest(input) + return out, req.Send() +} + +// ListTagsWithContext is the same as ListTags with the addition of +// the ability to pass a context and additional request options. +// +// See ListTags for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) ListTagsWithContext(ctx aws.Context, input *ListTagsInput, opts ...request.Option) (*ListTagsOutput, error) { + req, out := c.ListTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRestoreCertificateAuthority = "RestoreCertificateAuthority" + +// RestoreCertificateAuthorityRequest generates a "aws/request.Request" representing the +// client's request for the RestoreCertificateAuthority operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RestoreCertificateAuthority for more information on using the RestoreCertificateAuthority +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RestoreCertificateAuthorityRequest method. 
+// req, resp := client.RestoreCertificateAuthorityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/RestoreCertificateAuthority +func (c *ACMPCA) RestoreCertificateAuthorityRequest(input *RestoreCertificateAuthorityInput) (req *request.Request, output *RestoreCertificateAuthorityOutput) { + op := &request.Operation{ + Name: opRestoreCertificateAuthority, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RestoreCertificateAuthorityInput{} + } + + output = &RestoreCertificateAuthorityOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// RestoreCertificateAuthority API operation for AWS Certificate Manager Private Certificate Authority. +// +// Restores a certificate authority (CA) that is in the DELETED state. You can +// restore a CA during the period that you defined in the PermanentDeletionTimeInDays +// parameter of the DeleteCertificateAuthority operation. Currently, you can +// specify 7 to 30 days. If you did not specify a PermanentDeletionTimeInDays +// value, by default you can restore the CA at any time in a 30 day period. +// You can check the time remaining in the restoration period of a private CA +// in the DELETED state by calling the DescribeCertificateAuthority or ListCertificateAuthorities +// operations. The status of a restored CA is set to its pre-deletion status +// when the RestoreCertificateAuthority operation returns. To change its status +// to ACTIVE, call the UpdateCertificateAuthority operation. If the private +// CA was in the PENDING_CERTIFICATE state at deletion, you must use the ImportCertificateAuthorityCertificate +// operation to import a certificate authority into the private CA before it +// can be activated. You cannot restore a CA after the restoration period has +// ended. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation RestoreCertificateAuthority for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/RestoreCertificateAuthority +func (c *ACMPCA) RestoreCertificateAuthority(input *RestoreCertificateAuthorityInput) (*RestoreCertificateAuthorityOutput, error) { + req, out := c.RestoreCertificateAuthorityRequest(input) + return out, req.Send() +} + +// RestoreCertificateAuthorityWithContext is the same as RestoreCertificateAuthority with the addition of +// the ability to pass a context and additional request options. +// +// See RestoreCertificateAuthority for details on how to use this API operation. 
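+//
+// A minimal sketch, assuming an existing client, a ctx value, and a caARN string
+// for a CA that is still inside its restoration window (illustrative only):
+//
+// _, err := client.RestoreCertificateAuthorityWithContext(ctx,
+// &acmpca.RestoreCertificateAuthorityInput{
+// CertificateAuthorityArn: aws.String(caARN), // ARN of the DELETED private CA (assumed)
+// })
+// if err != nil {
+// // e.g. InvalidStateException if the restoration period has already ended
+// }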
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) RestoreCertificateAuthorityWithContext(ctx aws.Context, input *RestoreCertificateAuthorityInput, opts ...request.Option) (*RestoreCertificateAuthorityOutput, error) { + req, out := c.RestoreCertificateAuthorityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRevokeCertificate = "RevokeCertificate" + +// RevokeCertificateRequest generates a "aws/request.Request" representing the +// client's request for the RevokeCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RevokeCertificate for more information on using the RevokeCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RevokeCertificateRequest method. +// req, resp := client.RevokeCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/RevokeCertificate +func (c *ACMPCA) RevokeCertificateRequest(input *RevokeCertificateInput) (req *request.Request, output *RevokeCertificateOutput) { + op := &request.Operation{ + Name: opRevokeCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RevokeCertificateInput{} + } + + output = &RevokeCertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// RevokeCertificate API operation for AWS Certificate Manager Private Certificate Authority. +// +// Revokes a certificate that you issued by calling the IssueCertificate operation. +// If you enable a certificate revocation list (CRL) when you create or update +// your private CA, information about the revoked certificates will be included +// in the CRL. ACM PCA writes the CRL to an S3 bucket that you specify. For +// more information about revocation, see the CrlConfiguration structure. ACM +// PCA also writes revocation information to the audit report. For more information, +// see CreateCertificateAuthorityAuditReport. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation RevokeCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// A previous update to your private CA is still ongoing. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. 
+// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeRequestAlreadyProcessedException "RequestAlreadyProcessedException" +// Your request has already been completed. +// +// * ErrCodeRequestInProgressException "RequestInProgressException" +// Your request is already in progress. +// +// * ErrCodeRequestFailedException "RequestFailedException" +// The request has failed for an unspecified reason. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/RevokeCertificate +func (c *ACMPCA) RevokeCertificate(input *RevokeCertificateInput) (*RevokeCertificateOutput, error) { + req, out := c.RevokeCertificateRequest(input) + return out, req.Send() +} + +// RevokeCertificateWithContext is the same as RevokeCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See RevokeCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) RevokeCertificateWithContext(ctx aws.Context, input *RevokeCertificateInput, opts ...request.Option) (*RevokeCertificateOutput, error) { + req, out := c.RevokeCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTagCertificateAuthority = "TagCertificateAuthority" + +// TagCertificateAuthorityRequest generates a "aws/request.Request" representing the +// client's request for the TagCertificateAuthority operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TagCertificateAuthority for more information on using the TagCertificateAuthority +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagCertificateAuthorityRequest method. +// req, resp := client.TagCertificateAuthorityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/TagCertificateAuthority +func (c *ACMPCA) TagCertificateAuthorityRequest(input *TagCertificateAuthorityInput) (req *request.Request, output *TagCertificateAuthorityOutput) { + op := &request.Operation{ + Name: opTagCertificateAuthority, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TagCertificateAuthorityInput{} + } + + output = &TagCertificateAuthorityOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// TagCertificateAuthority API operation for AWS Certificate Manager Private Certificate Authority. 
+// +// Adds one or more tags to your private CA. Tags are labels that you can use +// to identify and organize your AWS resources. Each tag consists of a key and +// an optional value. You specify the private CA on input by its Amazon Resource +// Name (ARN). You specify the tag by using a key-value pair. You can apply +// a tag to just one private CA if you want to identify a specific characteristic +// of that CA, or you can apply the same tag to multiple private CAs if you +// want to filter for a common relationship among those CAs. To remove one or +// more tags, use the UntagCertificateAuthority operation. Call the ListTags +// operation to see what tags are associated with your CA. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation TagCertificateAuthority for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArnException "InvalidArnException" +// The requested Amazon Resource Name (ARN) does not refer to an existing resource. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// The private CA is in a state during which a report cannot be generated. +// +// * ErrCodeInvalidTagException "InvalidTagException" +// The tag associated with the CA is not valid. The invalid argument is contained +// in the message field. +// +// * ErrCodeTooManyTagsException "TooManyTagsException" +// You can associate up to 50 tags with a private CA. Exception information +// is contained in the exception message field. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/TagCertificateAuthority +func (c *ACMPCA) TagCertificateAuthority(input *TagCertificateAuthorityInput) (*TagCertificateAuthorityOutput, error) { + req, out := c.TagCertificateAuthorityRequest(input) + return out, req.Send() +} + +// TagCertificateAuthorityWithContext is the same as TagCertificateAuthority with the addition of +// the ability to pass a context and additional request options. +// +// See TagCertificateAuthority for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) TagCertificateAuthorityWithContext(ctx aws.Context, input *TagCertificateAuthorityInput, opts ...request.Option) (*TagCertificateAuthorityOutput, error) { + req, out := c.TagCertificateAuthorityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUntagCertificateAuthority = "UntagCertificateAuthority" + +// UntagCertificateAuthorityRequest generates a "aws/request.Request" representing the +// client's request for the UntagCertificateAuthority operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
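+//
+// As an illustrative sketch of the simpler non-Request form (the client, caARN,
+// and tag key below are assumed, not part of the generated SDK):
+//
+// _, err := client.UntagCertificateAuthority(&acmpca.UntagCertificateAuthorityInput{
+// CertificateAuthorityArn: aws.String(caARN), // ARN of the tagged private CA (assumed)
+// Tags: []*acmpca.Tag{{Key: aws.String("Environment")}}, // no Value: removes the tag regardless of value
+// })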
+//
+// See UntagCertificateAuthority for more information on using the UntagCertificateAuthority
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the UntagCertificateAuthorityRequest method.
+// req, resp := client.UntagCertificateAuthorityRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/UntagCertificateAuthority
+func (c *ACMPCA) UntagCertificateAuthorityRequest(input *UntagCertificateAuthorityInput) (req *request.Request, output *UntagCertificateAuthorityOutput) {
+ op := &request.Operation{
+ Name: opUntagCertificateAuthority,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &UntagCertificateAuthorityInput{}
+ }
+
+ output = &UntagCertificateAuthorityOutput{}
+ req = c.newRequest(op, input, output)
+ req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler)
+ req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler)
+ return
+}
+
+// UntagCertificateAuthority API operation for AWS Certificate Manager Private Certificate Authority.
+//
+// Removes one or more tags from your private CA. A tag consists of a key-value
+// pair. If you do not specify the value portion of the tag when calling this
+// operation, the tag will be removed regardless of value. If you specify a
+// value, the tag is removed only if it is associated with the specified value.
+// To add tags to a private CA, use the TagCertificateAuthority operation. Call
+// the ListTags operation to see what tags are associated with your CA.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's
+// API operation UntagCertificateAuthority for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeResourceNotFoundException "ResourceNotFoundException"
+// A resource such as a private CA, S3 bucket, certificate, or audit report
+// cannot be found.
+//
+// * ErrCodeInvalidArnException "InvalidArnException"
+// The requested Amazon Resource Name (ARN) does not refer to an existing resource.
+//
+// * ErrCodeInvalidStateException "InvalidStateException"
+// The private CA is in a state during which a report cannot be generated.
+//
+// * ErrCodeInvalidTagException "InvalidTagException"
+// The tag associated with the CA is not valid. The invalid argument is contained
+// in the message field.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/UntagCertificateAuthority
+func (c *ACMPCA) UntagCertificateAuthority(input *UntagCertificateAuthorityInput) (*UntagCertificateAuthorityOutput, error) {
+ req, out := c.UntagCertificateAuthorityRequest(input)
+ return out, req.Send()
+}
+
+// UntagCertificateAuthorityWithContext is the same as UntagCertificateAuthority with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UntagCertificateAuthority for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur.
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ACMPCA) UntagCertificateAuthorityWithContext(ctx aws.Context, input *UntagCertificateAuthorityInput, opts ...request.Option) (*UntagCertificateAuthorityOutput, error) { + req, out := c.UntagCertificateAuthorityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateCertificateAuthority = "UpdateCertificateAuthority" + +// UpdateCertificateAuthorityRequest generates a "aws/request.Request" representing the +// client's request for the UpdateCertificateAuthority operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateCertificateAuthority for more information on using the UpdateCertificateAuthority +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateCertificateAuthorityRequest method. +// req, resp := client.UpdateCertificateAuthorityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/UpdateCertificateAuthority +func (c *ACMPCA) UpdateCertificateAuthorityRequest(input *UpdateCertificateAuthorityInput) (req *request.Request, output *UpdateCertificateAuthorityOutput) { + op := &request.Operation{ + Name: opUpdateCertificateAuthority, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateCertificateAuthorityInput{} + } + + output = &UpdateCertificateAuthorityOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateCertificateAuthority API operation for AWS Certificate Manager Private Certificate Authority. +// +// Updates the status or configuration of a private certificate authority (CA). +// Your private CA must be in the ACTIVE or DISABLED state before you can update +// it. You can disable a private CA that is in the ACTIVE state or make a CA +// that is in the DISABLED state active again. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Certificate Manager Private Certificate Authority's +// API operation UpdateCertificateAuthority for usage and error information. +// +// Returned Error Codes: +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// A previous update to your private CA is still ongoing. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A resource such as a private CA, S3 bucket, certificate, or audit report +// cannot be found. +// +// * ErrCodeInvalidArgsException "InvalidArgsException" +// One or more of the specified arguments was not valid. 
+//
+// * ErrCodeInvalidArnException "InvalidArnException"
+// The requested Amazon Resource Name (ARN) does not refer to an existing resource.
+//
+// * ErrCodeInvalidStateException "InvalidStateException"
+// The private CA is in a state during which a report cannot be generated.
+//
+// * ErrCodeInvalidPolicyException "InvalidPolicyException"
+// The S3 bucket policy is not valid. The policy must give ACM PCA rights to
+// read from and write to the bucket and find the bucket location.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22/UpdateCertificateAuthority
+func (c *ACMPCA) UpdateCertificateAuthority(input *UpdateCertificateAuthorityInput) (*UpdateCertificateAuthorityOutput, error) {
+ req, out := c.UpdateCertificateAuthorityRequest(input)
+ return out, req.Send()
+}
+
+// UpdateCertificateAuthorityWithContext is the same as UpdateCertificateAuthority with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateCertificateAuthority for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ACMPCA) UpdateCertificateAuthorityWithContext(ctx aws.Context, input *UpdateCertificateAuthorityInput, opts ...request.Option) (*UpdateCertificateAuthorityOutput, error) {
+ req, out := c.UpdateCertificateAuthorityRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+// Contains information about the certificate subject. The certificate can be
+// one issued by your private certificate authority (CA) or it can be your private
+// CA certificate. The Subject field in the certificate identifies the entity
+// that owns or controls the public key in the certificate. The entity can be
+// a user, computer, device, or service. The Subject must contain an X.500 distinguished
+// name (DN). A DN is a sequence of relative distinguished names (RDNs). The
+// RDNs are separated by commas in the certificate. The DN must be unique for
+// each entity, but your private CA can issue more than one certificate with
+// the same DN to the same entity.
+type ASN1Subject struct {
+ _ struct{} `type:"structure"`
+
+ // Fully qualified domain name (FQDN) associated with the certificate subject.
+ CommonName *string `type:"string"`
+
+ // Two-digit code that specifies the country in which the certificate subject
+ // is located.
+ Country *string `type:"string"`
+
+ // Disambiguating information for the certificate subject.
+ DistinguishedNameQualifier *string `type:"string"`
+
+ // Typically a qualifier appended to the name of an individual. Examples include
+ // Jr. for junior, Sr. for senior, and III for third.
+ GenerationQualifier *string `type:"string"`
+
+ // First name.
+ GivenName *string `type:"string"`
+
+ // Concatenation that typically contains the first letter of the GivenName,
+ // the first letter of the middle name if one exists, and the first letter of
+ // the SurName.
+ Initials *string `type:"string"`
+
+ // The locality (such as a city or town) in which the certificate subject is
+ // located.
+ Locality *string `type:"string"`
+
+ // Legal name of the organization with which the certificate subject is affiliated.
+ Organization *string `type:"string"` + + // A subdivision or unit of the organization (such as sales or finance) with + // which the certificate subject is affiliated. + OrganizationalUnit *string `type:"string"` + + // Typically a shortened version of a longer GivenName. For example, Jonathan + // is often shortened to John. Elizabeth is often shortened to Beth, Liz, or + // Eliza. + Pseudonym *string `type:"string"` + + // The certificate serial number. + SerialNumber *string `type:"string"` + + // State in which the subject of the certificate is located. + State *string `type:"string"` + + // Family name. In the US and the UK, for example, the surname of an individual + // is ordered last. In Asian cultures the surname is typically ordered first. + Surname *string `type:"string"` + + // A title such as Mr. or Ms., which is pre-pended to the name to refer formally + // to the certificate subject. + Title *string `type:"string"` +} + +// String returns the string representation +func (s ASN1Subject) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ASN1Subject) GoString() string { + return s.String() +} + +// SetCommonName sets the CommonName field's value. +func (s *ASN1Subject) SetCommonName(v string) *ASN1Subject { + s.CommonName = &v + return s +} + +// SetCountry sets the Country field's value. +func (s *ASN1Subject) SetCountry(v string) *ASN1Subject { + s.Country = &v + return s +} + +// SetDistinguishedNameQualifier sets the DistinguishedNameQualifier field's value. +func (s *ASN1Subject) SetDistinguishedNameQualifier(v string) *ASN1Subject { + s.DistinguishedNameQualifier = &v + return s +} + +// SetGenerationQualifier sets the GenerationQualifier field's value. +func (s *ASN1Subject) SetGenerationQualifier(v string) *ASN1Subject { + s.GenerationQualifier = &v + return s +} + +// SetGivenName sets the GivenName field's value. +func (s *ASN1Subject) SetGivenName(v string) *ASN1Subject { + s.GivenName = &v + return s +} + +// SetInitials sets the Initials field's value. +func (s *ASN1Subject) SetInitials(v string) *ASN1Subject { + s.Initials = &v + return s +} + +// SetLocality sets the Locality field's value. +func (s *ASN1Subject) SetLocality(v string) *ASN1Subject { + s.Locality = &v + return s +} + +// SetOrganization sets the Organization field's value. +func (s *ASN1Subject) SetOrganization(v string) *ASN1Subject { + s.Organization = &v + return s +} + +// SetOrganizationalUnit sets the OrganizationalUnit field's value. +func (s *ASN1Subject) SetOrganizationalUnit(v string) *ASN1Subject { + s.OrganizationalUnit = &v + return s +} + +// SetPseudonym sets the Pseudonym field's value. +func (s *ASN1Subject) SetPseudonym(v string) *ASN1Subject { + s.Pseudonym = &v + return s +} + +// SetSerialNumber sets the SerialNumber field's value. +func (s *ASN1Subject) SetSerialNumber(v string) *ASN1Subject { + s.SerialNumber = &v + return s +} + +// SetState sets the State field's value. +func (s *ASN1Subject) SetState(v string) *ASN1Subject { + s.State = &v + return s +} + +// SetSurname sets the Surname field's value. +func (s *ASN1Subject) SetSurname(v string) *ASN1Subject { + s.Surname = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *ASN1Subject) SetTitle(v string) *ASN1Subject { + s.Title = &v + return s +} + +// Contains information about your private certificate authority (CA). Your +// private CA can issue and revoke X.509 digital certificates. 
Digital certificates +// verify that the entity named in the certificate Subject field owns or controls +// the public key contained in the Subject Public Key Info field. Call the CreateCertificateAuthority +// operation to create your private CA. You must then call the GetCertificateAuthorityCertificate +// operation to retrieve a private CA certificate signing request (CSR). Take +// the CSR to your on-premises CA and sign it with the root CA certificate or +// a subordinate certificate. Call the ImportCertificateAuthorityCertificate +// operation to import the signed certificate into AWS Certificate Manager (ACM). +type CertificateAuthority struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) for your private certificate authority (CA). The + // format is 12345678-1234-1234-1234-123456789012. + Arn *string `min:"5" type:"string"` + + // Your private CA configuration. + CertificateAuthorityConfiguration *CertificateAuthorityConfiguration `type:"structure"` + + // Date and time at which your private CA was created. + CreatedAt *time.Time `type:"timestamp"` + + // Reason the request to create your private CA failed. + FailureReason *string `type:"string" enum:"FailureReason"` + + // Date and time at which your private CA was last updated. + LastStateChangeAt *time.Time `type:"timestamp"` + + // Date and time after which your private CA certificate is not valid. + NotAfter *time.Time `type:"timestamp"` + + // Date and time before which your private CA certificate is not valid. + NotBefore *time.Time `type:"timestamp"` + + // The period during which a deleted CA can be restored. For more information, + // see the PermanentDeletionTimeInDays parameter of the DeleteCertificateAuthorityRequest + // operation. + RestorableUntil *time.Time `type:"timestamp"` + + // Information about the certificate revocation list (CRL) created and maintained + // by your private CA. + RevocationConfiguration *RevocationConfiguration `type:"structure"` + + // Serial number of your private CA. + Serial *string `type:"string"` + + // Status of your private CA. + Status *string `type:"string" enum:"CertificateAuthorityStatus"` + + // Type of your private CA. + Type *string `type:"string" enum:"CertificateAuthorityType"` +} + +// String returns the string representation +func (s CertificateAuthority) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CertificateAuthority) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *CertificateAuthority) SetArn(v string) *CertificateAuthority { + s.Arn = &v + return s +} + +// SetCertificateAuthorityConfiguration sets the CertificateAuthorityConfiguration field's value. +func (s *CertificateAuthority) SetCertificateAuthorityConfiguration(v *CertificateAuthorityConfiguration) *CertificateAuthority { + s.CertificateAuthorityConfiguration = v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *CertificateAuthority) SetCreatedAt(v time.Time) *CertificateAuthority { + s.CreatedAt = &v + return s +} + +// SetFailureReason sets the FailureReason field's value. +func (s *CertificateAuthority) SetFailureReason(v string) *CertificateAuthority { + s.FailureReason = &v + return s +} + +// SetLastStateChangeAt sets the LastStateChangeAt field's value. +func (s *CertificateAuthority) SetLastStateChangeAt(v time.Time) *CertificateAuthority { + s.LastStateChangeAt = &v + return s +} + +// SetNotAfter sets the NotAfter field's value. 
+func (s *CertificateAuthority) SetNotAfter(v time.Time) *CertificateAuthority {
+ s.NotAfter = &v
+ return s
+}
+
+// SetNotBefore sets the NotBefore field's value.
+func (s *CertificateAuthority) SetNotBefore(v time.Time) *CertificateAuthority {
+ s.NotBefore = &v
+ return s
+}
+
+// SetRestorableUntil sets the RestorableUntil field's value.
+func (s *CertificateAuthority) SetRestorableUntil(v time.Time) *CertificateAuthority {
+ s.RestorableUntil = &v
+ return s
+}
+
+// SetRevocationConfiguration sets the RevocationConfiguration field's value.
+func (s *CertificateAuthority) SetRevocationConfiguration(v *RevocationConfiguration) *CertificateAuthority {
+ s.RevocationConfiguration = v
+ return s
+}
+
+// SetSerial sets the Serial field's value.
+func (s *CertificateAuthority) SetSerial(v string) *CertificateAuthority {
+ s.Serial = &v
+ return s
+}
+
+// SetStatus sets the Status field's value.
+func (s *CertificateAuthority) SetStatus(v string) *CertificateAuthority {
+ s.Status = &v
+ return s
+}
+
+// SetType sets the Type field's value.
+func (s *CertificateAuthority) SetType(v string) *CertificateAuthority {
+ s.Type = &v
+ return s
+}
+
+// Contains configuration information for your private certificate authority
+// (CA). This includes information about the class of public key algorithm and
+// the key pair that your private CA creates when it issues a certificate, the
+// signature algorithm it uses when issuing certificates, and its X.500
+// distinguished name. You must specify this information when you call the CreateCertificateAuthority
+// operation.
+type CertificateAuthorityConfiguration struct {
+ _ struct{} `type:"structure"`
+
+ // Type of the public key algorithm and size, in bits, of the key pair that
+ // your private CA creates when it issues a certificate.
+ //
+ // KeyAlgorithm is a required field
+ KeyAlgorithm *string `type:"string" required:"true" enum:"KeyAlgorithm"`
+
+ // Name of the algorithm your private CA uses to sign certificate requests.
+ //
+ // SigningAlgorithm is a required field
+ SigningAlgorithm *string `type:"string" required:"true" enum:"SigningAlgorithm"`
+
+ // Structure that contains X.500 distinguished name information for your private
+ // CA.
+ //
+ // Subject is a required field
+ Subject *ASN1Subject `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s CertificateAuthorityConfiguration) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CertificateAuthorityConfiguration) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *CertificateAuthorityConfiguration) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "CertificateAuthorityConfiguration"}
+ if s.KeyAlgorithm == nil {
+ invalidParams.Add(request.NewErrParamRequired("KeyAlgorithm"))
+ }
+ if s.SigningAlgorithm == nil {
+ invalidParams.Add(request.NewErrParamRequired("SigningAlgorithm"))
+ }
+ if s.Subject == nil {
+ invalidParams.Add(request.NewErrParamRequired("Subject"))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetKeyAlgorithm sets the KeyAlgorithm field's value.
+func (s *CertificateAuthorityConfiguration) SetKeyAlgorithm(v string) *CertificateAuthorityConfiguration {
+ s.KeyAlgorithm = &v
+ return s
+}
+
+// SetSigningAlgorithm sets the SigningAlgorithm field's value.
+func (s *CertificateAuthorityConfiguration) SetSigningAlgorithm(v string) *CertificateAuthorityConfiguration { + s.SigningAlgorithm = &v + return s +} + +// SetSubject sets the Subject field's value. +func (s *CertificateAuthorityConfiguration) SetSubject(v *ASN1Subject) *CertificateAuthorityConfiguration { + s.Subject = v + return s +} + +type CreateCertificateAuthorityAuditReportInput struct { + _ struct{} `type:"structure"` + + // Format in which to create the report. This can be either JSON or CSV. + // + // AuditReportResponseFormat is a required field + AuditReportResponseFormat *string `type:"string" required:"true" enum:"AuditReportResponseFormat"` + + // Amazon Resource Name (ARN) of the CA to be audited. This is of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012. + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // Name of the S3 bucket that will contain the audit report. + // + // S3BucketName is a required field + S3BucketName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateCertificateAuthorityAuditReportInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCertificateAuthorityAuditReportInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateCertificateAuthorityAuditReportInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCertificateAuthorityAuditReportInput"} + if s.AuditReportResponseFormat == nil { + invalidParams.Add(request.NewErrParamRequired("AuditReportResponseFormat")) + } + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.S3BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("S3BucketName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuditReportResponseFormat sets the AuditReportResponseFormat field's value. +func (s *CreateCertificateAuthorityAuditReportInput) SetAuditReportResponseFormat(v string) *CreateCertificateAuthorityAuditReportInput { + s.AuditReportResponseFormat = &v + return s +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *CreateCertificateAuthorityAuditReportInput) SetCertificateAuthorityArn(v string) *CreateCertificateAuthorityAuditReportInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetS3BucketName sets the S3BucketName field's value. +func (s *CreateCertificateAuthorityAuditReportInput) SetS3BucketName(v string) *CreateCertificateAuthorityAuditReportInput { + s.S3BucketName = &v + return s +} + +type CreateCertificateAuthorityAuditReportOutput struct { + _ struct{} `type:"structure"` + + // An alphanumeric string that contains a report identifier. + AuditReportId *string `min:"36" type:"string"` + + // The key that uniquely identifies the report file in your S3 bucket. 
+ S3Key *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CreateCertificateAuthorityAuditReportOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateCertificateAuthorityAuditReportOutput) GoString() string {
+ return s.String()
+}
+
+// SetAuditReportId sets the AuditReportId field's value.
+func (s *CreateCertificateAuthorityAuditReportOutput) SetAuditReportId(v string) *CreateCertificateAuthorityAuditReportOutput {
+ s.AuditReportId = &v
+ return s
+}
+
+// SetS3Key sets the S3Key field's value.
+func (s *CreateCertificateAuthorityAuditReportOutput) SetS3Key(v string) *CreateCertificateAuthorityAuditReportOutput {
+ s.S3Key = &v
+ return s
+}
+
+type CreateCertificateAuthorityInput struct {
+ _ struct{} `type:"structure"`
+
+ // Name and bit size of the private key algorithm, the name of the signing algorithm,
+ // and X.500 certificate subject information.
+ //
+ // CertificateAuthorityConfiguration is a required field
+ CertificateAuthorityConfiguration *CertificateAuthorityConfiguration `type:"structure" required:"true"`
+
+ // The type of the certificate authority. Currently, this must be SUBORDINATE.
+ //
+ // CertificateAuthorityType is a required field
+ CertificateAuthorityType *string `type:"string" required:"true" enum:"CertificateAuthorityType"`
+
+ // Alphanumeric string that can be used to distinguish between calls to CreateCertificateAuthority.
+ // Idempotency tokens time out after five minutes. Therefore, if you call CreateCertificateAuthority
+ // multiple times with the same idempotency token within a five minute period,
+ // ACM PCA recognizes that you are requesting only one certificate. As a result,
+ // ACM PCA issues only one. If you change the idempotency token for each call,
+ // however, ACM PCA recognizes that you are requesting multiple certificates.
+ IdempotencyToken *string `min:"1" type:"string"`
+
+ // Contains a Boolean value that you can use to enable a certificate revocation
+ // list (CRL) for the CA, the name of the S3 bucket to which ACM PCA will write
+ // the CRL, and an optional CNAME alias that you can use to hide the name of
+ // your bucket in the CRL Distribution Points extension of your CA certificate.
+ // For more information, see the CrlConfiguration structure.
+ RevocationConfiguration *RevocationConfiguration `type:"structure"`
+}
+
+// String returns the string representation
+func (s CreateCertificateAuthorityInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateCertificateAuthorityInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *CreateCertificateAuthorityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCertificateAuthorityInput"} + if s.CertificateAuthorityConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityConfiguration")) + } + if s.CertificateAuthorityType == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityType")) + } + if s.IdempotencyToken != nil && len(*s.IdempotencyToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IdempotencyToken", 1)) + } + if s.CertificateAuthorityConfiguration != nil { + if err := s.CertificateAuthorityConfiguration.Validate(); err != nil { + invalidParams.AddNested("CertificateAuthorityConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.RevocationConfiguration != nil { + if err := s.RevocationConfiguration.Validate(); err != nil { + invalidParams.AddNested("RevocationConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityConfiguration sets the CertificateAuthorityConfiguration field's value. +func (s *CreateCertificateAuthorityInput) SetCertificateAuthorityConfiguration(v *CertificateAuthorityConfiguration) *CreateCertificateAuthorityInput { + s.CertificateAuthorityConfiguration = v + return s +} + +// SetCertificateAuthorityType sets the CertificateAuthorityType field's value. +func (s *CreateCertificateAuthorityInput) SetCertificateAuthorityType(v string) *CreateCertificateAuthorityInput { + s.CertificateAuthorityType = &v + return s +} + +// SetIdempotencyToken sets the IdempotencyToken field's value. +func (s *CreateCertificateAuthorityInput) SetIdempotencyToken(v string) *CreateCertificateAuthorityInput { + s.IdempotencyToken = &v + return s +} + +// SetRevocationConfiguration sets the RevocationConfiguration field's value. +func (s *CreateCertificateAuthorityInput) SetRevocationConfiguration(v *RevocationConfiguration) *CreateCertificateAuthorityInput { + s.RevocationConfiguration = v + return s +} + +type CreateCertificateAuthorityOutput struct { + _ struct{} `type:"structure"` + + // If successful, the Amazon Resource Name (ARN) of the certificate authority + // (CA). This is of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012. + CertificateAuthorityArn *string `min:"5" type:"string"` +} + +// String returns the string representation +func (s CreateCertificateAuthorityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCertificateAuthorityOutput) GoString() string { + return s.String() +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *CreateCertificateAuthorityOutput) SetCertificateAuthorityArn(v string) *CreateCertificateAuthorityOutput { + s.CertificateAuthorityArn = &v + return s +} + +// Contains configuration information for a certificate revocation list (CRL). +// Your private certificate authority (CA) creates base CRLs. Delta CRLs are +// not supported. You can enable CRLs for your new or an existing private CA +// by setting the Enabled parameter to true. Your private CA writes CRLs to +// an S3 bucket that you specify in the S3BucketName parameter. You can hide +// the name of your bucket by specifying a value for the CustomCname parameter. 
+// Your private CA copies the CNAME or the S3 bucket name to the CRL Distribution +// Points extension of each certificate it issues. Your S3 bucket policy must +// give write permission to ACM PCA. +// +// Your private CA uses the value in the ExpirationInDays parameter to calculate +// the nextUpdate field in the CRL. The CRL is refreshed at 1/2 the age of next +// update or when a certificate is revoked. When a certificate is revoked, it +// is recorded in the next CRL that is generated and in the next audit report. +// Only time valid certificates are listed in the CRL. Expired certificates +// are not included. +// +// CRLs contain the following fields: +// +// * Version: The current version number defined in RFC 5280 is V2. The integer +// value is 0x1. +// +// * Signature Algorithm: The name of the algorithm used to sign the CRL. +// +// * Issuer: The X.500 distinguished name of your private CA that issued +// the CRL. +// +// * Last Update: The issue date and time of this CRL. +// +// * Next Update: The day and time by which the next CRL will be issued. +// +// * Revoked Certificates: List of revoked certificates. Each list item contains +// the following information. +// +// Serial Number: The serial number, in hexadecimal format, of the revoked certificate. +// +// Revocation Date: Date and time the certificate was revoked. +// +// CRL Entry Extensions: Optional extensions for the CRL entry. +// +// X509v3 CRL Reason Code: Reason the certificate was revoked. +// +// * CRL Extensions: Optional extensions for the CRL. +// +// X509v3 Authority Key Identifier: Identifies the public key associated with +// the private key used to sign the certificate. +// +// X509v3 CRL Number:: Decimal sequence number for the CRL. +// +// * Signature Algorithm: Algorithm used by your private CA to sign the CRL. +// +// * Signature Value: Signature computed over the CRL. +// +// Certificate revocation lists created by ACM PCA are DER-encoded. You can +// use the following OpenSSL command to list a CRL. +// +// openssl crl -inform DER -text -in crl_path -noout +type CrlConfiguration struct { + _ struct{} `type:"structure"` + + // Name inserted into the certificate CRL Distribution Points extension that + // enables the use of an alias for the CRL distribution point. Use this value + // if you don't want the name of your S3 bucket to be public. + CustomCname *string `type:"string"` + + // Boolean value that specifies whether certificate revocation lists (CRLs) + // are enabled. You can use this value to enable certificate revocation for + // a new CA when you call the CreateCertificateAuthority operation or for an + // existing CA when you call the UpdateCertificateAuthority operation. + // + // Enabled is a required field + Enabled *bool `type:"boolean" required:"true"` + + // Number of days until a certificate expires. + ExpirationInDays *int64 `min:"1" type:"integer"` + + // Name of the S3 bucket that contains the CRL. If you do not provide a value + // for the CustomCname argument, the name of your S3 bucket is placed into the + // CRL Distribution Points extension of the issued certificate. You can change + // the name of your bucket by calling the UpdateCertificateAuthority operation. + // You must specify a bucket policy that allows ACM PCA to write the CRL to + // your bucket. 
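+ //
+ // As an illustrative sketch only (not part of the generated documentation;
+ // it assumes the aws and acmpca packages are imported, and the bucket name,
+ // CNAME, and expiration values are placeholders), a caller might enable a
+ // CRL like this and pass the result in the RevocationConfiguration field of
+ // CreateCertificateAuthorityInput:
+ //
+ //    revocation := &acmpca.RevocationConfiguration{
+ //        CrlConfiguration: &acmpca.CrlConfiguration{
+ //            Enabled:          aws.Bool(true),
+ //            ExpirationInDays: aws.Int64(7),
+ //            CustomCname:      aws.String("crl.example.com"),
+ //            S3BucketName:     aws.String("example-crl-bucket"),
+ //        },
+ //    }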
+ S3BucketName *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s CrlConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CrlConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CrlConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CrlConfiguration"} + if s.Enabled == nil { + invalidParams.Add(request.NewErrParamRequired("Enabled")) + } + if s.ExpirationInDays != nil && *s.ExpirationInDays < 1 { + invalidParams.Add(request.NewErrParamMinValue("ExpirationInDays", 1)) + } + if s.S3BucketName != nil && len(*s.S3BucketName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("S3BucketName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCustomCname sets the CustomCname field's value. +func (s *CrlConfiguration) SetCustomCname(v string) *CrlConfiguration { + s.CustomCname = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *CrlConfiguration) SetEnabled(v bool) *CrlConfiguration { + s.Enabled = &v + return s +} + +// SetExpirationInDays sets the ExpirationInDays field's value. +func (s *CrlConfiguration) SetExpirationInDays(v int64) *CrlConfiguration { + s.ExpirationInDays = &v + return s +} + +// SetS3BucketName sets the S3BucketName field's value. +func (s *CrlConfiguration) SetS3BucketName(v string) *CrlConfiguration { + s.S3BucketName = &v + return s +} + +type DeleteCertificateAuthorityInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that was returned when you called CreateCertificateAuthority. + // This must have the following form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012. + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // The number of days to make a CA restorable after it has been deleted. This + // can be anywhere from 7 to 30 days, with 30 being the default. + PermanentDeletionTimeInDays *int64 `min:"7" type:"integer"` +} + +// String returns the string representation +func (s DeleteCertificateAuthorityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCertificateAuthorityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteCertificateAuthorityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteCertificateAuthorityInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.PermanentDeletionTimeInDays != nil && *s.PermanentDeletionTimeInDays < 7 { + invalidParams.Add(request.NewErrParamMinValue("PermanentDeletionTimeInDays", 7)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. 
+func (s *DeleteCertificateAuthorityInput) SetCertificateAuthorityArn(v string) *DeleteCertificateAuthorityInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetPermanentDeletionTimeInDays sets the PermanentDeletionTimeInDays field's value. +func (s *DeleteCertificateAuthorityInput) SetPermanentDeletionTimeInDays(v int64) *DeleteCertificateAuthorityInput { + s.PermanentDeletionTimeInDays = &v + return s +} + +type DeleteCertificateAuthorityOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteCertificateAuthorityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCertificateAuthorityOutput) GoString() string { + return s.String() +} + +type DescribeCertificateAuthorityAuditReportInput struct { + _ struct{} `type:"structure"` + + // The report ID returned by calling the CreateCertificateAuthorityAuditReport + // operation. + // + // AuditReportId is a required field + AuditReportId *string `min:"36" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the private CA. This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012. + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeCertificateAuthorityAuditReportInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCertificateAuthorityAuditReportInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeCertificateAuthorityAuditReportInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeCertificateAuthorityAuditReportInput"} + if s.AuditReportId == nil { + invalidParams.Add(request.NewErrParamRequired("AuditReportId")) + } + if s.AuditReportId != nil && len(*s.AuditReportId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("AuditReportId", 36)) + } + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuditReportId sets the AuditReportId field's value. +func (s *DescribeCertificateAuthorityAuditReportInput) SetAuditReportId(v string) *DescribeCertificateAuthorityAuditReportInput { + s.AuditReportId = &v + return s +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *DescribeCertificateAuthorityAuditReportInput) SetCertificateAuthorityArn(v string) *DescribeCertificateAuthorityAuditReportInput { + s.CertificateAuthorityArn = &v + return s +} + +type DescribeCertificateAuthorityAuditReportOutput struct { + _ struct{} `type:"structure"` + + // Specifies whether report creation is in progress, has succeeded, or has failed. + AuditReportStatus *string `type:"string" enum:"AuditReportStatus"` + + // The date and time at which the report was created. + CreatedAt *time.Time `type:"timestamp"` + + // Name of the S3 bucket that contains the report. 
+ S3BucketName *string `type:"string"` + + // S3 key that uniquely identifies the report file in your S3 bucket. + S3Key *string `type:"string"` +} + +// String returns the string representation +func (s DescribeCertificateAuthorityAuditReportOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCertificateAuthorityAuditReportOutput) GoString() string { + return s.String() +} + +// SetAuditReportStatus sets the AuditReportStatus field's value. +func (s *DescribeCertificateAuthorityAuditReportOutput) SetAuditReportStatus(v string) *DescribeCertificateAuthorityAuditReportOutput { + s.AuditReportStatus = &v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *DescribeCertificateAuthorityAuditReportOutput) SetCreatedAt(v time.Time) *DescribeCertificateAuthorityAuditReportOutput { + s.CreatedAt = &v + return s +} + +// SetS3BucketName sets the S3BucketName field's value. +func (s *DescribeCertificateAuthorityAuditReportOutput) SetS3BucketName(v string) *DescribeCertificateAuthorityAuditReportOutput { + s.S3BucketName = &v + return s +} + +// SetS3Key sets the S3Key field's value. +func (s *DescribeCertificateAuthorityAuditReportOutput) SetS3Key(v string) *DescribeCertificateAuthorityAuditReportOutput { + s.S3Key = &v + return s +} + +type DescribeCertificateAuthorityInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that was returned when you called CreateCertificateAuthority. + // This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012. + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeCertificateAuthorityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCertificateAuthorityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeCertificateAuthorityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeCertificateAuthorityInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *DescribeCertificateAuthorityInput) SetCertificateAuthorityArn(v string) *DescribeCertificateAuthorityInput { + s.CertificateAuthorityArn = &v + return s +} + +type DescribeCertificateAuthorityOutput struct { + _ struct{} `type:"structure"` + + // A CertificateAuthority structure that contains information about your private + // CA. + CertificateAuthority *CertificateAuthority `type:"structure"` +} + +// String returns the string representation +func (s DescribeCertificateAuthorityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCertificateAuthorityOutput) GoString() string { + return s.String() +} + +// SetCertificateAuthority sets the CertificateAuthority field's value. 
+func (s *DescribeCertificateAuthorityOutput) SetCertificateAuthority(v *CertificateAuthority) *DescribeCertificateAuthorityOutput { + s.CertificateAuthority = v + return s +} + +type GetCertificateAuthorityCertificateInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of your private CA. This is of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012. + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetCertificateAuthorityCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCertificateAuthorityCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCertificateAuthorityCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCertificateAuthorityCertificateInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *GetCertificateAuthorityCertificateInput) SetCertificateAuthorityArn(v string) *GetCertificateAuthorityCertificateInput { + s.CertificateAuthorityArn = &v + return s +} + +type GetCertificateAuthorityCertificateOutput struct { + _ struct{} `type:"structure"` + + // Base64-encoded certificate authority (CA) certificate. + Certificate *string `type:"string"` + + // Base64-encoded certificate chain that includes any intermediate certificates + // and chains up to root on-premises certificate that you used to sign your + // private CA certificate. The chain does not include your private CA certificate. + CertificateChain *string `type:"string"` +} + +// String returns the string representation +func (s GetCertificateAuthorityCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCertificateAuthorityCertificateOutput) GoString() string { + return s.String() +} + +// SetCertificate sets the Certificate field's value. +func (s *GetCertificateAuthorityCertificateOutput) SetCertificate(v string) *GetCertificateAuthorityCertificateOutput { + s.Certificate = &v + return s +} + +// SetCertificateChain sets the CertificateChain field's value. +func (s *GetCertificateAuthorityCertificateOutput) SetCertificateChain(v string) *GetCertificateAuthorityCertificateOutput { + s.CertificateChain = &v + return s +} + +type GetCertificateAuthorityCsrInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that was returned when you called the CreateCertificateAuthority + // operation. 
This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetCertificateAuthorityCsrInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCertificateAuthorityCsrInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCertificateAuthorityCsrInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCertificateAuthorityCsrInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *GetCertificateAuthorityCsrInput) SetCertificateAuthorityArn(v string) *GetCertificateAuthorityCsrInput { + s.CertificateAuthorityArn = &v + return s +} + +type GetCertificateAuthorityCsrOutput struct { + _ struct{} `type:"structure"` + + // The base64 PEM-encoded certificate signing request (CSR) for your private + // CA certificate. + Csr *string `type:"string"` +} + +// String returns the string representation +func (s GetCertificateAuthorityCsrOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCertificateAuthorityCsrOutput) GoString() string { + return s.String() +} + +// SetCsr sets the Csr field's value. +func (s *GetCertificateAuthorityCsrOutput) SetCsr(v string) *GetCertificateAuthorityCsrOutput { + s.Csr = &v + return s +} + +type GetCertificateInput struct { + _ struct{} `type:"structure"` + + // The ARN of the issued certificate. The ARN contains the certificate serial + // number and must be in the following form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012/certificate/286535153982981100925020015808220737245 + // + // CertificateArn is a required field + CertificateArn *string `min:"5" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) that was returned when you called CreateCertificateAuthority. + // This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012. + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCertificateInput"} + if s.CertificateArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateArn")) + } + if s.CertificateArn != nil && len(*s.CertificateArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateArn", 5)) + } + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *GetCertificateInput) SetCertificateArn(v string) *GetCertificateInput { + s.CertificateArn = &v + return s +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *GetCertificateInput) SetCertificateAuthorityArn(v string) *GetCertificateInput { + s.CertificateAuthorityArn = &v + return s +} + +type GetCertificateOutput struct { + _ struct{} `type:"structure"` + + // The base64 PEM-encoded certificate specified by the CertificateArn parameter. + Certificate *string `type:"string"` + + // The base64 PEM-encoded certificate chain that chains up to the on-premises + // root CA certificate that you used to sign your private CA certificate. + CertificateChain *string `type:"string"` +} + +// String returns the string representation +func (s GetCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCertificateOutput) GoString() string { + return s.String() +} + +// SetCertificate sets the Certificate field's value. +func (s *GetCertificateOutput) SetCertificate(v string) *GetCertificateOutput { + s.Certificate = &v + return s +} + +// SetCertificateChain sets the CertificateChain field's value. +func (s *GetCertificateOutput) SetCertificateChain(v string) *GetCertificateOutput { + s.CertificateChain = &v + return s +} + +type ImportCertificateAuthorityCertificateInput struct { + _ struct{} `type:"structure"` + + // The PEM-encoded certificate for your private CA. This must be signed by using + // your on-premises CA. + // + // Certificate is automatically base64 encoded/decoded by the SDK. + // + // Certificate is a required field + Certificate []byte `min:"1" type:"blob" required:"true"` + + // The Amazon Resource Name (ARN) that was returned when you called CreateCertificateAuthority. + // This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // A PEM-encoded file that contains all of your certificates, other than the + // certificate you're importing, chaining up to your root CA. Your on-premises + // root certificate is the last in the chain, and each certificate in the chain + // signs the one preceding. + // + // CertificateChain is automatically base64 encoded/decoded by the SDK. 
+ // + // CertificateChain is a required field + CertificateChain []byte `type:"blob" required:"true"` +} + +// String returns the string representation +func (s ImportCertificateAuthorityCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportCertificateAuthorityCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ImportCertificateAuthorityCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ImportCertificateAuthorityCertificateInput"} + if s.Certificate == nil { + invalidParams.Add(request.NewErrParamRequired("Certificate")) + } + if s.Certificate != nil && len(s.Certificate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Certificate", 1)) + } + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.CertificateChain == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateChain")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificate sets the Certificate field's value. +func (s *ImportCertificateAuthorityCertificateInput) SetCertificate(v []byte) *ImportCertificateAuthorityCertificateInput { + s.Certificate = v + return s +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *ImportCertificateAuthorityCertificateInput) SetCertificateAuthorityArn(v string) *ImportCertificateAuthorityCertificateInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetCertificateChain sets the CertificateChain field's value. +func (s *ImportCertificateAuthorityCertificateInput) SetCertificateChain(v []byte) *ImportCertificateAuthorityCertificateInput { + s.CertificateChain = v + return s +} + +type ImportCertificateAuthorityCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ImportCertificateAuthorityCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportCertificateAuthorityCertificateOutput) GoString() string { + return s.String() +} + +type IssueCertificateInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that was returned when you called CreateCertificateAuthority. + // This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // The certificate signing request (CSR) for the certificate you want to issue. + // You can use the following OpenSSL command to create the CSR and a 2048 bit + // RSA private key. + // + // openssl req -new -newkey rsa:2048 -days 365 -keyout private/test_cert_priv_key.pem + // -out csr/test_cert_.csr + // + // If you have a configuration file, you can use the following OpenSSL command. + // The usr_cert block in the configuration file contains your X509 version 3 + // extensions. 
+ // + // openssl req -new -config openssl_rsa.cnf -extensions usr_cert -newkey rsa:2048 + // -days -365 -keyout private/test_cert_priv_key.pem -out csr/test_cert_.csr + // + // Csr is automatically base64 encoded/decoded by the SDK. + // + // Csr is a required field + Csr []byte `min:"1" type:"blob" required:"true"` + + // Custom string that can be used to distinguish between calls to the IssueCertificate + // operation. Idempotency tokens time out after one hour. Therefore, if you + // call IssueCertificate multiple times with the same idempotency token within + // 5 minutes, ACM PCA recognizes that you are requesting only one certificate + // and will issue only one. If you change the idempotency token for each call, + // PCA recognizes that you are requesting multiple certificates. + IdempotencyToken *string `min:"1" type:"string"` + + // The name of the algorithm that will be used to sign the certificate to be + // issued. + // + // SigningAlgorithm is a required field + SigningAlgorithm *string `type:"string" required:"true" enum:"SigningAlgorithm"` + + // The type of the validity period. + // + // Validity is a required field + Validity *Validity `type:"structure" required:"true"` +} + +// String returns the string representation +func (s IssueCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s IssueCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *IssueCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "IssueCertificateInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.Csr == nil { + invalidParams.Add(request.NewErrParamRequired("Csr")) + } + if s.Csr != nil && len(s.Csr) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Csr", 1)) + } + if s.IdempotencyToken != nil && len(*s.IdempotencyToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IdempotencyToken", 1)) + } + if s.SigningAlgorithm == nil { + invalidParams.Add(request.NewErrParamRequired("SigningAlgorithm")) + } + if s.Validity == nil { + invalidParams.Add(request.NewErrParamRequired("Validity")) + } + if s.Validity != nil { + if err := s.Validity.Validate(); err != nil { + invalidParams.AddNested("Validity", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *IssueCertificateInput) SetCertificateAuthorityArn(v string) *IssueCertificateInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetCsr sets the Csr field's value. +func (s *IssueCertificateInput) SetCsr(v []byte) *IssueCertificateInput { + s.Csr = v + return s +} + +// SetIdempotencyToken sets the IdempotencyToken field's value. +func (s *IssueCertificateInput) SetIdempotencyToken(v string) *IssueCertificateInput { + s.IdempotencyToken = &v + return s +} + +// SetSigningAlgorithm sets the SigningAlgorithm field's value. +func (s *IssueCertificateInput) SetSigningAlgorithm(v string) *IssueCertificateInput { + s.SigningAlgorithm = &v + return s +} + +// SetValidity sets the Validity field's value. 
+func (s *IssueCertificateInput) SetValidity(v *Validity) *IssueCertificateInput { + s.Validity = v + return s +} + +type IssueCertificateOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the issued certificate and the certificate + // serial number. This is of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012/certificate/286535153982981100925020015808220737245 + CertificateArn *string `min:"5" type:"string"` +} + +// String returns the string representation +func (s IssueCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s IssueCertificateOutput) GoString() string { + return s.String() +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *IssueCertificateOutput) SetCertificateArn(v string) *IssueCertificateOutput { + s.CertificateArn = &v + return s +} + +type ListCertificateAuthoritiesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter when paginating results to specify the maximum number + // of items to return in the response on each page. If additional items exist + // beyond the number you specify, the NextToken element is sent in the response. + // Use this NextToken value in a subsequent request to retrieve additional items. + MaxResults *int64 `min:"1" type:"integer"` + + // Use this parameter when paginating results in a subsequent request after + // you receive a response with truncated results. Set it to the value of the + // NextToken parameter from the response you just received. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListCertificateAuthoritiesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListCertificateAuthoritiesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListCertificateAuthoritiesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListCertificateAuthoritiesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListCertificateAuthoritiesInput) SetMaxResults(v int64) *ListCertificateAuthoritiesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListCertificateAuthoritiesInput) SetNextToken(v string) *ListCertificateAuthoritiesInput { + s.NextToken = &v + return s +} + +type ListCertificateAuthoritiesOutput struct { + _ struct{} `type:"structure"` + + // Summary information about each certificate authority you have created. + CertificateAuthorities []*CertificateAuthority `type:"list"` + + // When the list is truncated, this value is present and should be used for + // the NextToken parameter in a subsequent pagination request. 
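+ //
+ // A minimal pagination sketch (illustrative only; "client" is assumed to be
+ // an initialized *acmpca.ACMPCA and error handling is abbreviated):
+ //
+ //    input := &acmpca.ListCertificateAuthoritiesInput{}
+ //    for {
+ //        page, err := client.ListCertificateAuthorities(input)
+ //        if err != nil {
+ //            break // handle the error as appropriate
+ //        }
+ //        // process page.CertificateAuthorities ...
+ //        if page.NextToken == nil {
+ //            break
+ //        }
+ //        input.NextToken = page.NextToken
+ //    }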
+ NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListCertificateAuthoritiesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListCertificateAuthoritiesOutput) GoString() string { + return s.String() +} + +// SetCertificateAuthorities sets the CertificateAuthorities field's value. +func (s *ListCertificateAuthoritiesOutput) SetCertificateAuthorities(v []*CertificateAuthority) *ListCertificateAuthoritiesOutput { + s.CertificateAuthorities = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListCertificateAuthoritiesOutput) SetNextToken(v string) *ListCertificateAuthoritiesOutput { + s.NextToken = &v + return s +} + +type ListTagsInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that was returned when you called the CreateCertificateAuthority + // operation. This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // Use this parameter when paginating results to specify the maximum number + // of items to return in the response. If additional items exist beyond the + // number you specify, the NextToken element is sent in the response. Use this + // NextToken value in a subsequent request to retrieve additional items. + MaxResults *int64 `min:"1" type:"integer"` + + // Use this parameter when paginating results in a subsequent request after + // you receive a response with truncated results. Set it to the value of NextToken + // from the response you just received. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *ListTagsInput) SetCertificateAuthorityArn(v string) *ListTagsInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListTagsInput) SetMaxResults(v int64) *ListTagsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *ListTagsInput) SetNextToken(v string) *ListTagsInput { + s.NextToken = &v + return s +} + +type ListTagsOutput struct { + _ struct{} `type:"structure"` + + // When the list is truncated, this value is present and should be used for + // the NextToken parameter in a subsequent pagination request. + NextToken *string `min:"1" type:"string"` + + // The tags associated with your private CA. + Tags []*Tag `min:"1" type:"list"` +} + +// String returns the string representation +func (s ListTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListTagsOutput) SetNextToken(v string) *ListTagsOutput { + s.NextToken = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *ListTagsOutput) SetTags(v []*Tag) *ListTagsOutput { + s.Tags = v + return s +} + +type RestoreCertificateAuthorityInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that was returned when you called the CreateCertificateAuthority + // operation. This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` +} + +// String returns the string representation +func (s RestoreCertificateAuthorityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreCertificateAuthorityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RestoreCertificateAuthorityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreCertificateAuthorityInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *RestoreCertificateAuthorityInput) SetCertificateAuthorityArn(v string) *RestoreCertificateAuthorityInput { + s.CertificateAuthorityArn = &v + return s +} + +type RestoreCertificateAuthorityOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RestoreCertificateAuthorityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreCertificateAuthorityOutput) GoString() string { + return s.String() +} + +// Certificate revocation information used by the CreateCertificateAuthority +// and UpdateCertificateAuthority operations. Your private certificate authority +// (CA) can create and maintain a certificate revocation list (CRL). A CRL contains +// information about certificates revoked by your CA. For more information, +// see RevokeCertificate. +type RevocationConfiguration struct { + _ struct{} `type:"structure"` + + // Configuration of the certificate revocation list (CRL), if any, maintained + // by your private CA. 
+ CrlConfiguration *CrlConfiguration `type:"structure"` +} + +// String returns the string representation +func (s RevocationConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RevocationConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RevocationConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RevocationConfiguration"} + if s.CrlConfiguration != nil { + if err := s.CrlConfiguration.Validate(); err != nil { + invalidParams.AddNested("CrlConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCrlConfiguration sets the CrlConfiguration field's value. +func (s *RevocationConfiguration) SetCrlConfiguration(v *CrlConfiguration) *RevocationConfiguration { + s.CrlConfiguration = v + return s +} + +type RevokeCertificateInput struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the private CA that issued the certificate + // to be revoked. This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // Serial number of the certificate to be revoked. This must be in hexadecimal + // format. You can retrieve the serial number by calling GetCertificate with + // the Amazon Resource Name (ARN) of the certificate you want and the ARN of + // your private CA. The GetCertificate operation retrieves the certificate in + // the PEM format. You can use the following OpenSSL command to list the certificate + // in text format and copy the hexadecimal serial number. + // + // openssl x509 -in file_path -text -noout + // + // You can also copy the serial number from the console or use the DescribeCertificate + // (https://docs.aws.amazon.com/acm/latest/APIReference/API_DescribeCertificate.html) + // operation in the AWS Certificate Manager API Reference. + // + // CertificateSerial is a required field + CertificateSerial *string `type:"string" required:"true"` + + // Specifies why you revoked the certificate. + // + // RevocationReason is a required field + RevocationReason *string `type:"string" required:"true" enum:"RevocationReason"` +} + +// String returns the string representation +func (s RevokeCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RevokeCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RevokeCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RevokeCertificateInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.CertificateSerial == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateSerial")) + } + if s.RevocationReason == nil { + invalidParams.Add(request.NewErrParamRequired("RevocationReason")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *RevokeCertificateInput) SetCertificateAuthorityArn(v string) *RevokeCertificateInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetCertificateSerial sets the CertificateSerial field's value. +func (s *RevokeCertificateInput) SetCertificateSerial(v string) *RevokeCertificateInput { + s.CertificateSerial = &v + return s +} + +// SetRevocationReason sets the RevocationReason field's value. +func (s *RevokeCertificateInput) SetRevocationReason(v string) *RevokeCertificateInput { + s.RevocationReason = &v + return s +} + +type RevokeCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RevokeCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RevokeCertificateOutput) GoString() string { + return s.String() +} + +// Tags are labels that you can use to identify and organize your private CAs. +// Each tag consists of a key and an optional value. You can associate up to +// 50 tags with a private CA. To add one or more tags to a private CA, call +// the TagCertificateAuthority operation. To remove a tag, call the UntagCertificateAuthority +// operation. +type Tag struct { + _ struct{} `type:"structure"` + + // Key (name) of the tag. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // Value of the tag. + Value *string `type:"string"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type TagCertificateAuthorityInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that was returned when you called CreateCertificateAuthority. 
+ // This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // List of tags to be associated with the CA. + // + // Tags is a required field + Tags []*Tag `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s TagCertificateAuthorityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagCertificateAuthorityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TagCertificateAuthorityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagCertificateAuthorityInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil && len(s.Tags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Tags", 1)) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *TagCertificateAuthorityInput) SetCertificateAuthorityArn(v string) *TagCertificateAuthorityInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *TagCertificateAuthorityInput) SetTags(v []*Tag) *TagCertificateAuthorityInput { + s.Tags = v + return s +} + +type TagCertificateAuthorityOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s TagCertificateAuthorityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagCertificateAuthorityOutput) GoString() string { + return s.String() +} + +type UntagCertificateAuthorityInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that was returned when you called CreateCertificateAuthority. + // This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // List of tags to be removed from the CA. + // + // Tags is a required field + Tags []*Tag `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s UntagCertificateAuthorityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagCertificateAuthorityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UntagCertificateAuthorityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagCertificateAuthorityInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil && len(s.Tags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Tags", 1)) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. +func (s *UntagCertificateAuthorityInput) SetCertificateAuthorityArn(v string) *UntagCertificateAuthorityInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *UntagCertificateAuthorityInput) SetTags(v []*Tag) *UntagCertificateAuthorityInput { + s.Tags = v + return s +} + +type UntagCertificateAuthorityOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UntagCertificateAuthorityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagCertificateAuthorityOutput) GoString() string { + return s.String() +} + +type UpdateCertificateAuthorityInput struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the private CA that issued the certificate + // to be revoked. This must be of the form: + // + // arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 + // + // CertificateAuthorityArn is a required field + CertificateAuthorityArn *string `min:"5" type:"string" required:"true"` + + // Revocation information for your private CA. + RevocationConfiguration *RevocationConfiguration `type:"structure"` + + // Status of your private CA. + Status *string `type:"string" enum:"CertificateAuthorityStatus"` +} + +// String returns the string representation +func (s UpdateCertificateAuthorityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateCertificateAuthorityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateCertificateAuthorityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateCertificateAuthorityInput"} + if s.CertificateAuthorityArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateAuthorityArn")) + } + if s.CertificateAuthorityArn != nil && len(*s.CertificateAuthorityArn) < 5 { + invalidParams.Add(request.NewErrParamMinLen("CertificateAuthorityArn", 5)) + } + if s.RevocationConfiguration != nil { + if err := s.RevocationConfiguration.Validate(); err != nil { + invalidParams.AddNested("RevocationConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAuthorityArn sets the CertificateAuthorityArn field's value. 
+func (s *UpdateCertificateAuthorityInput) SetCertificateAuthorityArn(v string) *UpdateCertificateAuthorityInput { + s.CertificateAuthorityArn = &v + return s +} + +// SetRevocationConfiguration sets the RevocationConfiguration field's value. +func (s *UpdateCertificateAuthorityInput) SetRevocationConfiguration(v *RevocationConfiguration) *UpdateCertificateAuthorityInput { + s.RevocationConfiguration = v + return s +} + +// SetStatus sets the Status field's value. +func (s *UpdateCertificateAuthorityInput) SetStatus(v string) *UpdateCertificateAuthorityInput { + s.Status = &v + return s +} + +type UpdateCertificateAuthorityOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateCertificateAuthorityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateCertificateAuthorityOutput) GoString() string { + return s.String() +} + +// Length of time for which the certificate issued by your private certificate +// authority (CA), or by the private CA itself, is valid in days, months, or +// years. You can issue a certificate by calling the IssueCertificate operation. +type Validity struct { + _ struct{} `type:"structure"` + + // Specifies whether the Value parameter represents days, months, or years. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"ValidityPeriodType"` + + // Time period. + // + // Value is a required field + Value *int64 `min:"1" type:"long" required:"true"` +} + +// String returns the string representation +func (s Validity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Validity) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Validity) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Validity"} + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && *s.Value < 1 { + invalidParams.Add(request.NewErrParamMinValue("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetType sets the Type field's value. +func (s *Validity) SetType(v string) *Validity { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. 
+func (s *Validity) SetValue(v int64) *Validity { + s.Value = &v + return s +} + +const ( + // AuditReportResponseFormatJson is a AuditReportResponseFormat enum value + AuditReportResponseFormatJson = "JSON" + + // AuditReportResponseFormatCsv is a AuditReportResponseFormat enum value + AuditReportResponseFormatCsv = "CSV" +) + +const ( + // AuditReportStatusCreating is a AuditReportStatus enum value + AuditReportStatusCreating = "CREATING" + + // AuditReportStatusSuccess is a AuditReportStatus enum value + AuditReportStatusSuccess = "SUCCESS" + + // AuditReportStatusFailed is a AuditReportStatus enum value + AuditReportStatusFailed = "FAILED" +) + +const ( + // CertificateAuthorityStatusCreating is a CertificateAuthorityStatus enum value + CertificateAuthorityStatusCreating = "CREATING" + + // CertificateAuthorityStatusPendingCertificate is a CertificateAuthorityStatus enum value + CertificateAuthorityStatusPendingCertificate = "PENDING_CERTIFICATE" + + // CertificateAuthorityStatusActive is a CertificateAuthorityStatus enum value + CertificateAuthorityStatusActive = "ACTIVE" + + // CertificateAuthorityStatusDeleted is a CertificateAuthorityStatus enum value + CertificateAuthorityStatusDeleted = "DELETED" + + // CertificateAuthorityStatusDisabled is a CertificateAuthorityStatus enum value + CertificateAuthorityStatusDisabled = "DISABLED" + + // CertificateAuthorityStatusExpired is a CertificateAuthorityStatus enum value + CertificateAuthorityStatusExpired = "EXPIRED" + + // CertificateAuthorityStatusFailed is a CertificateAuthorityStatus enum value + CertificateAuthorityStatusFailed = "FAILED" +) + +const ( + // CertificateAuthorityTypeSubordinate is a CertificateAuthorityType enum value + CertificateAuthorityTypeSubordinate = "SUBORDINATE" +) + +const ( + // FailureReasonRequestTimedOut is a FailureReason enum value + FailureReasonRequestTimedOut = "REQUEST_TIMED_OUT" + + // FailureReasonUnsupportedAlgorithm is a FailureReason enum value + FailureReasonUnsupportedAlgorithm = "UNSUPPORTED_ALGORITHM" + + // FailureReasonOther is a FailureReason enum value + FailureReasonOther = "OTHER" +) + +const ( + // KeyAlgorithmRsa2048 is a KeyAlgorithm enum value + KeyAlgorithmRsa2048 = "RSA_2048" + + // KeyAlgorithmRsa4096 is a KeyAlgorithm enum value + KeyAlgorithmRsa4096 = "RSA_4096" + + // KeyAlgorithmEcPrime256v1 is a KeyAlgorithm enum value + KeyAlgorithmEcPrime256v1 = "EC_prime256v1" + + // KeyAlgorithmEcSecp384r1 is a KeyAlgorithm enum value + KeyAlgorithmEcSecp384r1 = "EC_secp384r1" +) + +const ( + // RevocationReasonUnspecified is a RevocationReason enum value + RevocationReasonUnspecified = "UNSPECIFIED" + + // RevocationReasonKeyCompromise is a RevocationReason enum value + RevocationReasonKeyCompromise = "KEY_COMPROMISE" + + // RevocationReasonCertificateAuthorityCompromise is a RevocationReason enum value + RevocationReasonCertificateAuthorityCompromise = "CERTIFICATE_AUTHORITY_COMPROMISE" + + // RevocationReasonAffiliationChanged is a RevocationReason enum value + RevocationReasonAffiliationChanged = "AFFILIATION_CHANGED" + + // RevocationReasonSuperseded is a RevocationReason enum value + RevocationReasonSuperseded = "SUPERSEDED" + + // RevocationReasonCessationOfOperation is a RevocationReason enum value + RevocationReasonCessationOfOperation = "CESSATION_OF_OPERATION" + + // RevocationReasonPrivilegeWithdrawn is a RevocationReason enum value + RevocationReasonPrivilegeWithdrawn = "PRIVILEGE_WITHDRAWN" + + // RevocationReasonAACompromise is a RevocationReason enum value + 
RevocationReasonAACompromise = "A_A_COMPROMISE" +) + +const ( + // SigningAlgorithmSha256withecdsa is a SigningAlgorithm enum value + SigningAlgorithmSha256withecdsa = "SHA256WITHECDSA" + + // SigningAlgorithmSha384withecdsa is a SigningAlgorithm enum value + SigningAlgorithmSha384withecdsa = "SHA384WITHECDSA" + + // SigningAlgorithmSha512withecdsa is a SigningAlgorithm enum value + SigningAlgorithmSha512withecdsa = "SHA512WITHECDSA" + + // SigningAlgorithmSha256withrsa is a SigningAlgorithm enum value + SigningAlgorithmSha256withrsa = "SHA256WITHRSA" + + // SigningAlgorithmSha384withrsa is a SigningAlgorithm enum value + SigningAlgorithmSha384withrsa = "SHA384WITHRSA" + + // SigningAlgorithmSha512withrsa is a SigningAlgorithm enum value + SigningAlgorithmSha512withrsa = "SHA512WITHRSA" +) + +const ( + // ValidityPeriodTypeEndDate is a ValidityPeriodType enum value + ValidityPeriodTypeEndDate = "END_DATE" + + // ValidityPeriodTypeAbsolute is a ValidityPeriodType enum value + ValidityPeriodTypeAbsolute = "ABSOLUTE" + + // ValidityPeriodTypeDays is a ValidityPeriodType enum value + ValidityPeriodTypeDays = "DAYS" + + // ValidityPeriodTypeMonths is a ValidityPeriodType enum value + ValidityPeriodTypeMonths = "MONTHS" + + // ValidityPeriodTypeYears is a ValidityPeriodType enum value + ValidityPeriodTypeYears = "YEARS" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/acmpca/doc.go b/vendor/github.com/aws/aws-sdk-go/service/acmpca/doc.go new file mode 100644 index 00000000000..3ca43776465 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/acmpca/doc.go @@ -0,0 +1,55 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package acmpca provides the client and types for making API +// requests to AWS Certificate Manager Private Certificate Authority. +// +// You can use the ACM PCA API to create a private certificate authority (CA). +// You must first call the CreateCertificateAuthority operation. If successful, +// the operation returns an Amazon Resource Name (ARN) for your private CA. +// Use this ARN as input to the GetCertificateAuthorityCsr operation to retrieve +// the certificate signing request (CSR) for your private CA certificate. Sign +// the CSR using the root or an intermediate CA in your on-premises PKI hierarchy, +// and call the ImportCertificateAuthorityCertificate to import your signed +// private CA certificate into ACM PCA. +// +// Use your private CA to issue and revoke certificates. These are private certificates +// that identify and secure client computers, servers, applications, services, +// devices, and users over SSLS/TLS connections within your organization. Call +// the IssueCertificate operation to issue a certificate. Call the RevokeCertificate +// operation to revoke a certificate. +// +// Certificates issued by your private CA can be trusted only within your organization, +// not publicly. +// +// Your private CA can optionally create a certificate revocation list (CRL) +// to track the certificates you revoke. To create a CRL, you must specify a +// RevocationConfiguration object when you call the CreateCertificateAuthority +// operation. ACM PCA writes the CRL to an S3 bucket that you specify. You must +// specify a bucket policy that grants ACM PCA write permission. +// +// You can also call the CreateCertificateAuthorityAuditReport to create an +// optional audit report that lists every time the CA private key is used. 
The +// private key is used for signing when the IssueCertificate or RevokeCertificate +// operation is called. +// +// See https://docs.aws.amazon.com/goto/WebAPI/acm-pca-2017-08-22 for more information on this service. +// +// See acmpca package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/acmpca/ +// +// Using the Client +// +// To contact AWS Certificate Manager Private Certificate Authority with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS Certificate Manager Private Certificate Authority client ACMPCA for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/acmpca/#New +package acmpca diff --git a/vendor/github.com/aws/aws-sdk-go/service/acmpca/errors.go b/vendor/github.com/aws/aws-sdk-go/service/acmpca/errors.go new file mode 100644 index 00000000000..2614b5a42f3 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/acmpca/errors.go @@ -0,0 +1,109 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package acmpca + +const ( + + // ErrCodeCertificateMismatchException for service response error code + // "CertificateMismatchException". + // + // The certificate authority certificate you are importing does not comply with + // conditions specified in the certificate that signed it. + ErrCodeCertificateMismatchException = "CertificateMismatchException" + + // ErrCodeConcurrentModificationException for service response error code + // "ConcurrentModificationException". + // + // A previous update to your private CA is still ongoing. + ErrCodeConcurrentModificationException = "ConcurrentModificationException" + + // ErrCodeInvalidArgsException for service response error code + // "InvalidArgsException". + // + // One or more of the specified arguments was not valid. + ErrCodeInvalidArgsException = "InvalidArgsException" + + // ErrCodeInvalidArnException for service response error code + // "InvalidArnException". + // + // The requested Amazon Resource Name (ARN) does not refer to an existing resource. + ErrCodeInvalidArnException = "InvalidArnException" + + // ErrCodeInvalidNextTokenException for service response error code + // "InvalidNextTokenException". + // + // The token specified in the NextToken argument is not valid. Use the token + // returned from your previous call to ListCertificateAuthorities. + ErrCodeInvalidNextTokenException = "InvalidNextTokenException" + + // ErrCodeInvalidPolicyException for service response error code + // "InvalidPolicyException". + // + // The S3 bucket policy is not valid. The policy must give ACM PCA rights to + // read from and write to the bucket and find the bucket location. + ErrCodeInvalidPolicyException = "InvalidPolicyException" + + // ErrCodeInvalidStateException for service response error code + // "InvalidStateException". + // + // The private CA is in a state during which a report cannot be generated. + ErrCodeInvalidStateException = "InvalidStateException" + + // ErrCodeInvalidTagException for service response error code + // "InvalidTagException". 
+ // + // The tag associated with the CA is not valid. The invalid argument is contained + // in the message field. + ErrCodeInvalidTagException = "InvalidTagException" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceededException". + // + // An ACM PCA limit has been exceeded. See the exception message returned to + // determine the limit that was exceeded. + ErrCodeLimitExceededException = "LimitExceededException" + + // ErrCodeMalformedCSRException for service response error code + // "MalformedCSRException". + // + // The certificate signing request is invalid. + ErrCodeMalformedCSRException = "MalformedCSRException" + + // ErrCodeMalformedCertificateException for service response error code + // "MalformedCertificateException". + // + // One or more fields in the certificate are invalid. + ErrCodeMalformedCertificateException = "MalformedCertificateException" + + // ErrCodeRequestAlreadyProcessedException for service response error code + // "RequestAlreadyProcessedException". + // + // Your request has already been completed. + ErrCodeRequestAlreadyProcessedException = "RequestAlreadyProcessedException" + + // ErrCodeRequestFailedException for service response error code + // "RequestFailedException". + // + // The request has failed for an unspecified reason. + ErrCodeRequestFailedException = "RequestFailedException" + + // ErrCodeRequestInProgressException for service response error code + // "RequestInProgressException". + // + // Your request is already in progress. + ErrCodeRequestInProgressException = "RequestInProgressException" + + // ErrCodeResourceNotFoundException for service response error code + // "ResourceNotFoundException". + // + // A resource such as a private CA, S3 bucket, certificate, or audit report + // cannot be found. + ErrCodeResourceNotFoundException = "ResourceNotFoundException" + + // ErrCodeTooManyTagsException for service response error code + // "TooManyTagsException". + // + // You can associate up to 50 tags with a private CA. Exception information + // is contained in the exception message field. + ErrCodeTooManyTagsException = "TooManyTagsException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/acmpca/service.go b/vendor/github.com/aws/aws-sdk-go/service/acmpca/service.go new file mode 100644 index 00000000000..6c231c1d700 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/acmpca/service.go @@ -0,0 +1,97 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package acmpca + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// ACMPCA provides the API operation methods for making requests to +// AWS Certificate Manager Private Certificate Authority. See this package's package overview docs +// for details on the service. +// +// ACMPCA methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type ACMPCA struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "acm-pca" // Name of service. 
+ EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "ACM PCA" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the ACMPCA client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a ACMPCA client from just a session. +// svc := acmpca.New(mySession) +// +// // Create a ACMPCA client with additional configuration +// svc := acmpca.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *ACMPCA { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *ACMPCA { + svc := &ACMPCA{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2017-08-22", + JSONVersion: "1.1", + TargetPrefix: "ACMPrivateCA", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a ACMPCA operation and runs any +// custom request initialization. +func (c *ACMPCA) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/apigateway/api.go b/vendor/github.com/aws/aws-sdk-go/service/apigateway/api.go index 656830a9a16..2b9e72da8a5 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/apigateway/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/apigateway/api.go @@ -16,8 +16,8 @@ const opCreateApiKey = "CreateApiKey" // CreateApiKeyRequest generates a "aws/request.Request" representing the // client's request for the CreateApiKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -56,7 +56,7 @@ func (c *APIGateway) CreateApiKeyRequest(input *CreateApiKeyInput) (req *request // // Create an ApiKey resource. // -// AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/create-api-key.html) +// AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/create-api-key.html) // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -112,8 +112,8 @@ const opCreateAuthorizer = "CreateAuthorizer" // CreateAuthorizerRequest generates a "aws/request.Request" representing the // client's request for the CreateAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -152,7 +152,7 @@ func (c *APIGateway) CreateAuthorizerRequest(input *CreateAuthorizerInput) (req // // Adds a new Authorizer resource to an existing RestApi resource. // -// AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/create-authorizer.html) +// AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/create-authorizer.html) // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -204,8 +204,8 @@ const opCreateBasePathMapping = "CreateBasePathMapping" // CreateBasePathMappingRequest generates a "aws/request.Request" representing the // client's request for the CreateBasePathMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -295,8 +295,8 @@ const opCreateDeployment = "CreateDeployment" // CreateDeploymentRequest generates a "aws/request.Request" representing the // client's request for the CreateDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -394,8 +394,8 @@ const opCreateDocumentationPart = "CreateDocumentationPart" // CreateDocumentationPartRequest generates a "aws/request.Request" representing the // client's request for the CreateDocumentationPart operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -486,8 +486,8 @@ const opCreateDocumentationVersion = "CreateDocumentationVersion" // CreateDocumentationVersionRequest generates a "aws/request.Request" representing the // client's request for the CreateDocumentationVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -578,8 +578,8 @@ const opCreateDomainName = "CreateDomainName" // CreateDomainNameRequest generates a "aws/request.Request" representing the // client's request for the CreateDomainName operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -666,8 +666,8 @@ const opCreateModel = "CreateModel" // CreateModelRequest generates a "aws/request.Request" representing the // client's request for the CreateModel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -760,8 +760,8 @@ const opCreateRequestValidator = "CreateRequestValidator" // CreateRequestValidatorRequest generates a "aws/request.Request" representing the // client's request for the CreateRequestValidator operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -850,8 +850,8 @@ const opCreateResource = "CreateResource" // CreateResourceRequest generates a "aws/request.Request" representing the // client's request for the CreateResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -944,8 +944,8 @@ const opCreateRestApi = "CreateRestApi" // CreateRestApiRequest generates a "aws/request.Request" representing the // client's request for the CreateRestApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1031,8 +1031,8 @@ const opCreateStage = "CreateStage" // CreateStageRequest generates a "aws/request.Request" representing the // client's request for the CreateStage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1126,8 +1126,8 @@ const opCreateUsagePlan = "CreateUsagePlan" // CreateUsagePlanRequest generates a "aws/request.Request" representing the // client's request for the CreateUsagePlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1221,8 +1221,8 @@ const opCreateUsagePlanKey = "CreateUsagePlanKey" // CreateUsagePlanKeyRequest generates a "aws/request.Request" representing the // client's request for the CreateUsagePlanKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1312,8 +1312,8 @@ const opCreateVpcLink = "CreateVpcLink" // CreateVpcLinkRequest generates a "aws/request.Request" representing the // client's request for the CreateVpcLink operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1399,8 +1399,8 @@ const opDeleteApiKey = "DeleteApiKey" // DeleteApiKeyRequest generates a "aws/request.Request" representing the // client's request for the DeleteApiKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1484,8 +1484,8 @@ const opDeleteAuthorizer = "DeleteAuthorizer" // DeleteAuthorizerRequest generates a "aws/request.Request" representing the // client's request for the DeleteAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1526,7 +1526,7 @@ func (c *APIGateway) DeleteAuthorizerRequest(input *DeleteAuthorizerInput) (req // // Deletes an existing Authorizer resource. // -// AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/delete-authorizer.html) +// AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/delete-authorizer.html) // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1579,8 +1579,8 @@ const opDeleteBasePathMapping = "DeleteBasePathMapping" // DeleteBasePathMappingRequest generates a "aws/request.Request" representing the // client's request for the DeleteBasePathMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1672,8 +1672,8 @@ const opDeleteClientCertificate = "DeleteClientCertificate" // DeleteClientCertificateRequest generates a "aws/request.Request" representing the // client's request for the DeleteClientCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1761,8 +1761,8 @@ const opDeleteDeployment = "DeleteDeployment" // DeleteDeploymentRequest generates a "aws/request.Request" representing the // client's request for the DeleteDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1851,8 +1851,8 @@ const opDeleteDocumentationPart = "DeleteDocumentationPart" // DeleteDocumentationPartRequest generates a "aws/request.Request" representing the // client's request for the DeleteDocumentationPart operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1942,8 +1942,8 @@ const opDeleteDocumentationVersion = "DeleteDocumentationVersion" // DeleteDocumentationVersionRequest generates a "aws/request.Request" representing the // client's request for the DeleteDocumentationVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2033,8 +2033,8 @@ const opDeleteDomainName = "DeleteDomainName" // DeleteDomainNameRequest generates a "aws/request.Request" representing the // client's request for the DeleteDomainName operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2118,8 +2118,8 @@ const opDeleteGatewayResponse = "DeleteGatewayResponse" // DeleteGatewayResponseRequest generates a "aws/request.Request" representing the // client's request for the DeleteGatewayResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2212,8 +2212,8 @@ const opDeleteIntegration = "DeleteIntegration" // DeleteIntegrationRequest generates a "aws/request.Request" representing the // client's request for the DeleteIntegration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2301,8 +2301,8 @@ const opDeleteIntegrationResponse = "DeleteIntegrationResponse" // DeleteIntegrationResponseRequest generates a "aws/request.Request" representing the // client's request for the DeleteIntegrationResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2394,8 +2394,8 @@ const opDeleteMethod = "DeleteMethod" // DeleteMethodRequest generates a "aws/request.Request" representing the // client's request for the DeleteMethod operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2483,8 +2483,8 @@ const opDeleteMethodResponse = "DeleteMethodResponse" // DeleteMethodResponseRequest generates a "aws/request.Request" representing the // client's request for the DeleteMethodResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2576,8 +2576,8 @@ const opDeleteModel = "DeleteModel" // DeleteModelRequest generates a "aws/request.Request" representing the // client's request for the DeleteModel operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2669,8 +2669,8 @@ const opDeleteRequestValidator = "DeleteRequestValidator" // DeleteRequestValidatorRequest generates a "aws/request.Request" representing the // client's request for the DeleteRequestValidator operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2762,8 +2762,8 @@ const opDeleteResource = "DeleteResource" // DeleteResourceRequest generates a "aws/request.Request" representing the // client's request for the DeleteResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2855,8 +2855,8 @@ const opDeleteRestApi = "DeleteRestApi" // DeleteRestApiRequest generates a "aws/request.Request" representing the // client's request for the DeleteRestApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2944,8 +2944,8 @@ const opDeleteStage = "DeleteStage" // DeleteStageRequest generates a "aws/request.Request" representing the // client's request for the DeleteStage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3033,8 +3033,8 @@ const opDeleteUsagePlan = "DeleteUsagePlan" // DeleteUsagePlanRequest generates a "aws/request.Request" representing the // client's request for the DeleteUsagePlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3122,8 +3122,8 @@ const opDeleteUsagePlanKey = "DeleteUsagePlanKey" // DeleteUsagePlanKeyRequest generates a "aws/request.Request" representing the // client's request for the DeleteUsagePlanKey operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3216,8 +3216,8 @@ const opDeleteVpcLink = "DeleteVpcLink" // DeleteVpcLinkRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpcLink operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3305,8 +3305,8 @@ const opFlushStageAuthorizersCache = "FlushStageAuthorizersCache" // FlushStageAuthorizersCacheRequest generates a "aws/request.Request" representing the // client's request for the FlushStageAuthorizersCache operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3394,8 +3394,8 @@ const opFlushStageCache = "FlushStageCache" // FlushStageCacheRequest generates a "aws/request.Request" representing the // client's request for the FlushStageCache operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3483,8 +3483,8 @@ const opGenerateClientCertificate = "GenerateClientCertificate" // GenerateClientCertificateRequest generates a "aws/request.Request" representing the // client's request for the GenerateClientCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3566,8 +3566,8 @@ const opGetAccount = "GetAccount" // GetAccountRequest generates a "aws/request.Request" representing the // client's request for the GetAccount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3649,8 +3649,8 @@ const opGetApiKey = "GetApiKey" // GetApiKeyRequest generates a "aws/request.Request" representing the // client's request for the GetApiKey operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3732,8 +3732,8 @@ const opGetApiKeys = "GetApiKeys" // GetApiKeysRequest generates a "aws/request.Request" representing the // client's request for the GetApiKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3872,8 +3872,8 @@ const opGetAuthorizer = "GetAuthorizer" // GetAuthorizerRequest generates a "aws/request.Request" representing the // client's request for the GetAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3912,7 +3912,7 @@ func (c *APIGateway) GetAuthorizerRequest(input *GetAuthorizerInput) (req *reque // // Describe an existing Authorizer resource. // -// AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/get-authorizer.html) +// AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/get-authorizer.html) // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3957,8 +3957,8 @@ const opGetAuthorizers = "GetAuthorizers" // GetAuthorizersRequest generates a "aws/request.Request" representing the // client's request for the GetAuthorizers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3997,7 +3997,7 @@ func (c *APIGateway) GetAuthorizersRequest(input *GetAuthorizersInput) (req *req // // Describe an existing Authorizers resource. // -// AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/get-authorizers.html) +// AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/get-authorizers.html) // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4046,8 +4046,8 @@ const opGetBasePathMapping = "GetBasePathMapping" // GetBasePathMappingRequest generates a "aws/request.Request" representing the // client's request for the GetBasePathMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4129,8 +4129,8 @@ const opGetBasePathMappings = "GetBasePathMappings" // GetBasePathMappingsRequest generates a "aws/request.Request" representing the // client's request for the GetBasePathMappings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4268,8 +4268,8 @@ const opGetClientCertificate = "GetClientCertificate" // GetClientCertificateRequest generates a "aws/request.Request" representing the // client's request for the GetClientCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4351,8 +4351,8 @@ const opGetClientCertificates = "GetClientCertificates" // GetClientCertificatesRequest generates a "aws/request.Request" representing the // client's request for the GetClientCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4491,8 +4491,8 @@ const opGetDeployment = "GetDeployment" // GetDeploymentRequest generates a "aws/request.Request" representing the // client's request for the GetDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4578,8 +4578,8 @@ const opGetDeployments = "GetDeployments" // GetDeploymentsRequest generates a "aws/request.Request" representing the // client's request for the GetDeployments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4722,8 +4722,8 @@ const opGetDocumentationPart = "GetDocumentationPart" // GetDocumentationPartRequest generates a "aws/request.Request" representing the // client's request for the GetDocumentationPart operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4803,8 +4803,8 @@ const opGetDocumentationParts = "GetDocumentationParts" // GetDocumentationPartsRequest generates a "aws/request.Request" representing the // client's request for the GetDocumentationParts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4888,8 +4888,8 @@ const opGetDocumentationVersion = "GetDocumentationVersion" // GetDocumentationVersionRequest generates a "aws/request.Request" representing the // client's request for the GetDocumentationVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4969,8 +4969,8 @@ const opGetDocumentationVersions = "GetDocumentationVersions" // GetDocumentationVersionsRequest generates a "aws/request.Request" representing the // client's request for the GetDocumentationVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5054,8 +5054,8 @@ const opGetDomainName = "GetDomainName" // GetDomainNameRequest generates a "aws/request.Request" representing the // client's request for the GetDomainName operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5142,8 +5142,8 @@ const opGetDomainNames = "GetDomainNames" // GetDomainNamesRequest generates a "aws/request.Request" representing the // client's request for the GetDomainNames operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5282,8 +5282,8 @@ const opGetExport = "GetExport" // GetExportRequest generates a "aws/request.Request" representing the // client's request for the GetExport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5373,8 +5373,8 @@ const opGetGatewayResponse = "GetGatewayResponse" // GetGatewayResponseRequest generates a "aws/request.Request" representing the // client's request for the GetGatewayResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5456,8 +5456,8 @@ const opGetGatewayResponses = "GetGatewayResponses" // GetGatewayResponsesRequest generates a "aws/request.Request" representing the // client's request for the GetGatewayResponses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5546,8 +5546,8 @@ const opGetIntegration = "GetIntegration" // GetIntegrationRequest generates a "aws/request.Request" representing the // client's request for the GetIntegration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5629,8 +5629,8 @@ const opGetIntegrationResponse = "GetIntegrationResponse" // GetIntegrationResponseRequest generates a "aws/request.Request" representing the // client's request for the GetIntegrationResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5712,8 +5712,8 @@ const opGetMethod = "GetMethod" // GetMethodRequest generates a "aws/request.Request" representing the // client's request for the GetMethod operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5795,8 +5795,8 @@ const opGetMethodResponse = "GetMethodResponse" // GetMethodResponseRequest generates a "aws/request.Request" representing the // client's request for the GetMethodResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5878,8 +5878,8 @@ const opGetModel = "GetModel" // GetModelRequest generates a "aws/request.Request" representing the // client's request for the GetModel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5961,8 +5961,8 @@ const opGetModelTemplate = "GetModelTemplate" // GetModelTemplateRequest generates a "aws/request.Request" representing the // client's request for the GetModelTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6049,8 +6049,8 @@ const opGetModels = "GetModels" // GetModelsRequest generates a "aws/request.Request" representing the // client's request for the GetModels operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6192,8 +6192,8 @@ const opGetRequestValidator = "GetRequestValidator" // GetRequestValidatorRequest generates a "aws/request.Request" representing the // client's request for the GetRequestValidator operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6275,8 +6275,8 @@ const opGetRequestValidators = "GetRequestValidators" // GetRequestValidatorsRequest generates a "aws/request.Request" representing the // client's request for the GetRequestValidators operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -6362,8 +6362,8 @@ const opGetResource = "GetResource" // GetResourceRequest generates a "aws/request.Request" representing the // client's request for the GetResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6445,8 +6445,8 @@ const opGetResources = "GetResources" // GetResourcesRequest generates a "aws/request.Request" representing the // client's request for the GetResources operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6588,8 +6588,8 @@ const opGetRestApi = "GetRestApi" // GetRestApiRequest generates a "aws/request.Request" representing the // client's request for the GetRestApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6671,8 +6671,8 @@ const opGetRestApis = "GetRestApis" // GetRestApisRequest generates a "aws/request.Request" representing the // client's request for the GetRestApis operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6811,8 +6811,8 @@ const opGetSdk = "GetSdk" // GetSdkRequest generates a "aws/request.Request" representing the // client's request for the GetSdk operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6902,8 +6902,8 @@ const opGetSdkType = "GetSdkType" // GetSdkTypeRequest generates a "aws/request.Request" representing the // client's request for the GetSdkType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -6983,8 +6983,8 @@ const opGetSdkTypes = "GetSdkTypes" // GetSdkTypesRequest generates a "aws/request.Request" representing the // client's request for the GetSdkTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7061,8 +7061,8 @@ const opGetStage = "GetStage" // GetStageRequest generates a "aws/request.Request" representing the // client's request for the GetStage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7144,8 +7144,8 @@ const opGetStages = "GetStages" // GetStagesRequest generates a "aws/request.Request" representing the // client's request for the GetStages operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7227,8 +7227,8 @@ const opGetTags = "GetTags" // GetTagsRequest generates a "aws/request.Request" representing the // client's request for the GetTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7317,8 +7317,8 @@ const opGetUsage = "GetUsage" // GetUsageRequest generates a "aws/request.Request" representing the // client's request for the GetUsage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7460,8 +7460,8 @@ const opGetUsagePlan = "GetUsagePlan" // GetUsagePlanRequest generates a "aws/request.Request" representing the // client's request for the GetUsagePlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -7547,8 +7547,8 @@ const opGetUsagePlanKey = "GetUsagePlanKey" // GetUsagePlanKeyRequest generates a "aws/request.Request" representing the // client's request for the GetUsagePlanKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7634,8 +7634,8 @@ const opGetUsagePlanKeys = "GetUsagePlanKeys" // GetUsagePlanKeysRequest generates a "aws/request.Request" representing the // client's request for the GetUsagePlanKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7778,8 +7778,8 @@ const opGetUsagePlans = "GetUsagePlans" // GetUsagePlansRequest generates a "aws/request.Request" representing the // client's request for the GetUsagePlans operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7925,8 +7925,8 @@ const opGetVpcLink = "GetVpcLink" // GetVpcLinkRequest generates a "aws/request.Request" representing the // client's request for the GetVpcLink operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8008,8 +8008,8 @@ const opGetVpcLinks = "GetVpcLinks" // GetVpcLinksRequest generates a "aws/request.Request" representing the // client's request for the GetVpcLinks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8148,8 +8148,8 @@ const opImportApiKeys = "ImportApiKeys" // ImportApiKeysRequest generates a "aws/request.Request" representing the // client's request for the ImportApiKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -8242,8 +8242,8 @@ const opImportDocumentationParts = "ImportDocumentationParts" // ImportDocumentationPartsRequest generates a "aws/request.Request" representing the // client's request for the ImportDocumentationParts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8330,8 +8330,8 @@ const opImportRestApi = "ImportRestApi" // ImportRestApiRequest generates a "aws/request.Request" representing the // client's request for the ImportRestApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8422,8 +8422,8 @@ const opPutGatewayResponse = "PutGatewayResponse" // PutGatewayResponseRequest generates a "aws/request.Request" representing the // client's request for the PutGatewayResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8513,8 +8513,8 @@ const opPutIntegration = "PutIntegration" // PutIntegrationRequest generates a "aws/request.Request" representing the // client's request for the PutIntegration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8604,8 +8604,8 @@ const opPutIntegrationResponse = "PutIntegrationResponse" // PutIntegrationResponseRequest generates a "aws/request.Request" representing the // client's request for the PutIntegrationResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8698,8 +8698,8 @@ const opPutMethod = "PutMethod" // PutMethodRequest generates a "aws/request.Request" representing the // client's request for the PutMethod operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -8792,8 +8792,8 @@ const opPutMethodResponse = "PutMethodResponse" // PutMethodResponseRequest generates a "aws/request.Request" representing the // client's request for the PutMethodResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8886,8 +8886,8 @@ const opPutRestApi = "PutRestApi" // PutRestApiRequest generates a "aws/request.Request" representing the // client's request for the PutRestApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8983,8 +8983,8 @@ const opTagResource = "TagResource" // TagResourceRequest generates a "aws/request.Request" representing the // client's request for the TagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9023,7 +9023,7 @@ func (c *APIGateway) TagResourceRequest(input *TagResourceInput) (req *request.R // TagResource API operation for Amazon API Gateway. // -// Adds or updates Tags on a gievn resource. +// Adds or updates a tag on a given resource. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9079,8 +9079,8 @@ const opTestInvokeAuthorizer = "TestInvokeAuthorizer" // TestInvokeAuthorizerRequest generates a "aws/request.Request" representing the // client's request for the TestInvokeAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9120,7 +9120,7 @@ func (c *APIGateway) TestInvokeAuthorizerRequest(input *TestInvokeAuthorizerInpu // Simulate the execution of an Authorizer in your RestApi with headers, parameters, // and an incoming request body. // -// Enable custom authorizers (http://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html) +// Enable custom authorizers (https://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html) // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9169,8 +9169,8 @@ const opTestInvokeMethod = "TestInvokeMethod" // TestInvokeMethodRequest generates a "aws/request.Request" representing the // client's request for the TestInvokeMethod operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9257,8 +9257,8 @@ const opUntagResource = "UntagResource" // UntagResourceRequest generates a "aws/request.Request" representing the // client's request for the UntagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9297,7 +9297,7 @@ func (c *APIGateway) UntagResourceRequest(input *UntagResourceInput) (req *reque // UntagResource API operation for Amazon API Gateway. // -// Removes Tags from a given resource. +// Removes a tag from a given resource. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9350,8 +9350,8 @@ const opUpdateAccount = "UpdateAccount" // UpdateAccountRequest generates a "aws/request.Request" representing the // client's request for the UpdateAccount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9437,8 +9437,8 @@ const opUpdateApiKey = "UpdateApiKey" // UpdateApiKeyRequest generates a "aws/request.Request" representing the // client's request for the UpdateApiKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9528,8 +9528,8 @@ const opUpdateAuthorizer = "UpdateAuthorizer" // UpdateAuthorizerRequest generates a "aws/request.Request" representing the // client's request for the UpdateAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -9568,7 +9568,7 @@ func (c *APIGateway) UpdateAuthorizerRequest(input *UpdateAuthorizerInput) (req // // Updates an existing Authorizer resource. // -// AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/update-authorizer.html) +// AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/update-authorizer.html) // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9617,8 +9617,8 @@ const opUpdateBasePathMapping = "UpdateBasePathMapping" // UpdateBasePathMappingRequest generates a "aws/request.Request" representing the // client's request for the UpdateBasePathMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9708,8 +9708,8 @@ const opUpdateClientCertificate = "UpdateClientCertificate" // UpdateClientCertificateRequest generates a "aws/request.Request" representing the // client's request for the UpdateClientCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9795,8 +9795,8 @@ const opUpdateDeployment = "UpdateDeployment" // UpdateDeploymentRequest generates a "aws/request.Request" representing the // client's request for the UpdateDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9886,8 +9886,8 @@ const opUpdateDocumentationPart = "UpdateDocumentationPart" // UpdateDocumentationPartRequest generates a "aws/request.Request" representing the // client's request for the UpdateDocumentationPart operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9978,8 +9978,8 @@ const opUpdateDocumentationVersion = "UpdateDocumentationVersion" // UpdateDocumentationVersionRequest generates a "aws/request.Request" representing the // client's request for the UpdateDocumentationVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -10067,8 +10067,8 @@ const opUpdateDomainName = "UpdateDomainName" // UpdateDomainNameRequest generates a "aws/request.Request" representing the // client's request for the UpdateDomainName operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10158,8 +10158,8 @@ const opUpdateGatewayResponse = "UpdateGatewayResponse" // UpdateGatewayResponseRequest generates a "aws/request.Request" representing the // client's request for the UpdateGatewayResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10245,8 +10245,8 @@ const opUpdateIntegration = "UpdateIntegration" // UpdateIntegrationRequest generates a "aws/request.Request" representing the // client's request for the UpdateIntegration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10336,8 +10336,8 @@ const opUpdateIntegrationResponse = "UpdateIntegrationResponse" // UpdateIntegrationResponseRequest generates a "aws/request.Request" representing the // client's request for the UpdateIntegrationResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10427,8 +10427,8 @@ const opUpdateMethod = "UpdateMethod" // UpdateMethodRequest generates a "aws/request.Request" representing the // client's request for the UpdateMethod operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10518,8 +10518,8 @@ const opUpdateMethodResponse = "UpdateMethodResponse" // UpdateMethodResponseRequest generates a "aws/request.Request" representing the // client's request for the UpdateMethodResponse operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10612,8 +10612,8 @@ const opUpdateModel = "UpdateModel" // UpdateModelRequest generates a "aws/request.Request" representing the // client's request for the UpdateModel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10703,8 +10703,8 @@ const opUpdateRequestValidator = "UpdateRequestValidator" // UpdateRequestValidatorRequest generates a "aws/request.Request" representing the // client's request for the UpdateRequestValidator operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10790,8 +10790,8 @@ const opUpdateResource = "UpdateResource" // UpdateResourceRequest generates a "aws/request.Request" representing the // client's request for the UpdateResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10881,8 +10881,8 @@ const opUpdateRestApi = "UpdateRestApi" // UpdateRestApiRequest generates a "aws/request.Request" representing the // client's request for the UpdateRestApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10972,8 +10972,8 @@ const opUpdateStage = "UpdateStage" // UpdateStageRequest generates a "aws/request.Request" representing the // client's request for the UpdateStage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11063,8 +11063,8 @@ const opUpdateUsage = "UpdateUsage" // UpdateUsageRequest generates a "aws/request.Request" representing the // client's request for the UpdateUsage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11151,8 +11151,8 @@ const opUpdateUsagePlan = "UpdateUsagePlan" // UpdateUsagePlanRequest generates a "aws/request.Request" representing the // client's request for the UpdateUsagePlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11242,8 +11242,8 @@ const opUpdateVpcLink = "UpdateVpcLink" // UpdateVpcLinkRequest generates a "aws/request.Request" representing the // client's request for the UpdateVpcLink operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11338,7 +11338,7 @@ type AccessLogSettings struct { DestinationArn *string `locationName:"destinationArn" type:"string"` // A single line format of the access logs of data, as specified by selected - // $context variables (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#context-variable-reference). + // $context variables (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#context-variable-reference). // The format must include at least $context.requestId. Format *string `locationName:"format" type:"string"` } @@ -11377,7 +11377,7 @@ func (s *AccessLogSettings) SetFormat(v string) *AccessLogSettings { // NotFoundException // TooManyRequestsException // For detailed error code information, including the corresponding HTTP Status -// Codes, see API Gateway Error Codes (http://docs.aws.amazon.com/apigateway/api-reference/handling-errors/#api-error-codes) +// Codes, see API Gateway Error Codes (https://docs.aws.amazon.com/apigateway/api-reference/handling-errors/#api-error-codes) // // Example: Get the information about an account. // @@ -11391,16 +11391,16 @@ func (s *AccessLogSettings) SetFormat(v string) *AccessLogSettings { // The successful response returns a 200 OK status code and a payload similar // to the following: // -// { "_links": { "curies": { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/account-apigateway-{rel}.html", +// { "_links": { "curies": { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/account-apigateway-{rel}.html", // "name": "account", "templated": true }, "self": { "href": "/account" }, "account:update": // { "href": "/account" } }, "cloudwatchRoleArn": "arn:aws:iam::123456789012:role/apigAwsProxyRole", // "throttleSettings": { "rateLimit": 500, "burstLimit": 1000 } } // In addition to making the REST API call directly, you can use the AWS CLI // and an AWS SDK to access this resource. 
// -// API Gateway Limits (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-limits.html)Developer -// Guide (http://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html), -// AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/get-account.html) +// API Gateway Limits (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-limits.html)Developer +// Guide (https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html), +// AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/get-account.html) type Account struct { _ struct{} `type:"structure"` @@ -11457,12 +11457,12 @@ func (s *Account) SetThrottleSettings(v *ThrottleSettings) *Account { // which indicates that the callers with the API key can make requests to that // stage. // -// Use API Keys (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-api-keys.html) +// Use API Keys (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-api-keys.html) type ApiKey struct { _ struct{} `type:"structure"` // The timestamp when the API Key was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // An AWS Marketplace customer identifier , when integrating with the AWS SaaS // Marketplace. @@ -11478,7 +11478,7 @@ type ApiKey struct { Id *string `locationName:"id" type:"string"` // The timestamp when the API Key was last updated. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the API Key. Name *string `locationName:"name" type:"string"` @@ -11563,6 +11563,10 @@ type ApiStage struct { // API stage name of the associated API stage in a usage plan. Stage *string `locationName:"stage" type:"string"` + + // Map containing method level throttling information for API stage in a usage + // plan. + Throttle map[string]*ThrottleSettings `locationName:"throttle" type:"map"` } // String returns the string representation @@ -11587,14 +11591,20 @@ func (s *ApiStage) SetStage(v string) *ApiStage { return s } +// SetThrottle sets the Throttle field's value. +func (s *ApiStage) SetThrottle(v map[string]*ThrottleSettings) *ApiStage { + s.Throttle = v + return s +} + // Represents an authorization layer for methods. If enabled on a method, API // Gateway will activate the authorizer when a client calls the method. // -// Enable custom authorization (http://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html) +// Enable custom authorization (https://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html) type Authorizer struct { _ struct{} `type:"structure"` - // Optional customer-defined field, used in Swagger imports and exports without + // Optional customer-defined field, used in OpenAPI imports and exports without // functional impact. AuthType *string `locationName:"authType" type:"string"` @@ -11623,11 +11633,11 @@ type Authorizer struct { // The identifier for the authorizer resource. Id *string `locationName:"id" type:"string"` - // The identity source for which authorization is requested. For a TOKEN authorizer, - // this is required and specifies the request header mapping expression for - // the custom header holding the authorization token submitted by the client. 
- // For example, if the token header name is Auth, the header mapping expression - // is method.request.header.Auth. + // The identity source for which authorization is requested. For a TOKEN or + // COGNITO_USER_POOLS authorizer, this is required and specifies the request + // header mapping expression for the custom header holding the authorization + // token submitted by the client. For example, if the token header name is Auth, + // the header mapping expression is method.request.header.Auth. // For the REQUEST authorizer, this is required when authorization caching is // enabled. The value is a comma-separated string of one or more mapping expressions // of the specified request parameters. For example, if an Auth header, a Name @@ -11640,16 +11650,14 @@ type Authorizer struct { // response without calling the Lambda function. The valid value is a string // of comma-separated mapping expressions of the specified request parameters. // When the authorization caching is not enabled, this property is optional. - // - // For a COGNITO_USER_POOLS authorizer, this property is not used. IdentitySource *string `locationName:"identitySource" type:"string"` // A validation expression for the incoming identity token. For TOKEN authorizers, - // this value is a regular expression. API Gateway will match the incoming token - // from the client against the specified regular expression. It will invoke - // the authorizer's Lambda function there is a match. Otherwise, it will return - // a 401 Unauthorized response without calling the Lambda function. The validation - // expression does not apply to the REQUEST authorizer. + // this value is a regular expression. API Gateway will match the aud field + // of the incoming token from the client against the specified regular expression. + // It will invoke the authorizer's Lambda function when there is a match. Otherwise, + // it will return a 401 Unauthorized response without calling the Lambda function. + // The validation expression does not apply to the REQUEST authorizer. IdentityValidationExpression *string `locationName:"identityValidationExpression" type:"string"` // [Required] The name of the authorizer. @@ -11660,10 +11668,10 @@ type Authorizer struct { // For a TOKEN or REQUEST authorizer, this is not defined. ProviderARNs []*string `locationName:"providerARNs" type:"list"` - // [Required] The authorizer type. Valid values are TOKEN for a Lambda function - // using a single authorization token submitted in a custom header, REQUEST - // for a Lambda function using incoming request parameters, and COGNITO_USER_POOLS - // for using an Amazon Cognito user pool. + // The authorizer type. Valid values are TOKEN for a Lambda function using a + // single authorization token submitted in a custom header, REQUEST for a Lambda + // function using incoming request parameters, and COGNITO_USER_POOLS for using + // an Amazon Cognito user pool. Type *string `locationName:"type" type:"string" enum:"AuthorizerType"` } @@ -11742,7 +11750,7 @@ func (s *Authorizer) SetType(v string) *Authorizer { // // A custom domain name plus a BasePathMapping specification identifies a deployed // RestApi in a given stage of the owner Account. 
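The Authorizer documentation revised above covers TOKEN, REQUEST, and COGNITO_USER_POOLS authorizers and their identity sources. As a hedged sketch only, this is how a TOKEN authorizer with an `Auth` header identity source might be created; the REST API ID and the Lambda invocation URI are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
	svc := apigateway.New(session.Must(session.NewSession()))

	// TOKEN authorizer: IdentitySource is the header mapping expression for
	// the custom header carrying the token (here "Auth"), per the docs above.
	auth, err := svc.CreateAuthorizer(&apigateway.CreateAuthorizerInput{
		RestApiId:      aws.String("a1b2c3d4e5"), // placeholder REST API ID
		Name:           aws.String("example-token-authorizer"),
		Type:           aws.String(apigateway.AuthorizerTypeToken),
		IdentitySource: aws.String("method.request.header.Auth"),
		// Placeholder Lambda invocation URI of the usual
		// /2015-03-31/functions/[FunctionARN]/invocations form.
		AuthorizerUri: aws.String("arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:my-authorizer/invocations"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created authorizer", aws.StringValue(auth.Id))
}
```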
-// Use Custom Domain Names (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) +// Use Custom Domain Names (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) type BasePathMapping struct { _ struct{} `type:"structure"` @@ -11845,7 +11853,7 @@ func (s *CanarySettings) SetUseStageCache(v bool) *CanarySettings { // Client certificates are used to authenticate an API by the backend server. // To authenticate an API client (or user), use IAM roles and policies, a custom // Authorizer or an Amazon Cognito user pool. -// Use Client-Side Certificate (http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-client-side-ssl-authentication.html) +// Use Client-Side Certificate (https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-client-side-ssl-authentication.html) type ClientCertificate struct { _ struct{} `type:"structure"` @@ -11853,13 +11861,13 @@ type ClientCertificate struct { ClientCertificateId *string `locationName:"clientCertificateId" type:"string"` // The timestamp when the client certificate was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // The description of the client certificate. Description *string `locationName:"description" type:"string"` // The timestamp when the client certificate will expire. - ExpirationDate *time.Time `locationName:"expirationDate" type:"timestamp" timestampFormat:"unix"` + ExpirationDate *time.Time `locationName:"expirationDate" type:"timestamp"` // The PEM-encoded public key of the client certificate, which can be used to // configure certificate authentication in the integration endpoint . @@ -11990,7 +11998,7 @@ func (s *CreateApiKeyInput) SetValue(v string) *CreateApiKeyInput { type CreateAuthorizerInput struct { _ struct{} `type:"structure"` - // Optional customer-defined field, used in Swagger imports and exports without + // Optional customer-defined field, used in OpenAPI imports and exports without // functional impact. AuthType *string `locationName:"authType" type:"string"` @@ -12016,11 +12024,11 @@ type CreateAuthorizerInput struct { // is usually of the form /2015-03-31/functions/[FunctionARN]/invocations. AuthorizerUri *string `locationName:"authorizerUri" type:"string"` - // The identity source for which authorization is requested. For a TOKEN authorizer, - // this is required and specifies the request header mapping expression for - // the custom header holding the authorization token submitted by the client. - // For example, if the token header name is Auth, the header mapping expression - // is method.request.header.Auth. + // The identity source for which authorization is requested. For a TOKEN or + // COGNITO_USER_POOLS authorizer, this is required and specifies the request + // header mapping expression for the custom header holding the authorization + // token submitted by the client. For example, if the token header name is Auth, + // the header mapping expression is method.request.header.Auth. // For the REQUEST authorizer, this is required when authorization caching is // enabled. The value is a comma-separated string of one or more mapping expressions // of the specified request parameters. For example, if an Auth header, a Name @@ -12033,16 +12041,14 @@ type CreateAuthorizerInput struct { // response without calling the Lambda function. 
The valid value is a string // of comma-separated mapping expressions of the specified request parameters. // When the authorization caching is not enabled, this property is optional. - // - // For a COGNITO_USER_POOLS authorizer, this property is not used. IdentitySource *string `locationName:"identitySource" type:"string"` // A validation expression for the incoming identity token. For TOKEN authorizers, - // this value is a regular expression. API Gateway will match the incoming token - // from the client against the specified regular expression. It will invoke - // the authorizer's Lambda function there is a match. Otherwise, it will return - // a 401 Unauthorized response without calling the Lambda function. The validation - // expression does not apply to the REQUEST authorizer. + // this value is a regular expression. API Gateway will match the aud field + // of the incoming token from the client against the specified regular expression. + // It will invoke the authorizer's Lambda function when there is a match. Otherwise, + // it will return a 401 Unauthorized response without calling the Lambda function. + // The validation expression does not apply to the REQUEST authorizer. IdentityValidationExpression *string `locationName:"identityValidationExpression" type:"string"` // [Required] The name of the authorizer. @@ -12055,7 +12061,7 @@ type CreateAuthorizerInput struct { // For a TOKEN or REQUEST authorizer, this is not defined. ProviderARNs []*string `locationName:"providerARNs" type:"list"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -12168,12 +12174,12 @@ type CreateBasePathMappingInput struct { // a base path name after the domain name. BasePath *string `locationName:"basePath" type:"string"` - // The domain name of the BasePathMapping resource to create. + // [Required] The domain name of the BasePathMapping resource to create. // // DomainName is a required field DomainName *string `location:"uri" locationName:"domain_name" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `locationName:"restApiId" type:"string" required:"true"` @@ -12252,7 +12258,7 @@ type CreateDeploymentInput struct { // The description for the Deployment resource to create. Description *string `locationName:"description" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -12263,6 +12269,9 @@ type CreateDeploymentInput struct { // The name of the Stage resource for the Deployment resource to create. StageName *string `locationName:"stageName" type:"string"` + // Specifies whether active tracing with X-ray is enabled for the Stage. + TracingEnabled *bool `locationName:"tracingEnabled" type:"boolean"` + // A map that defines the stage variables for the Stage resource that is associated // with the new deployment. Variable names can have alphanumeric and underscore // characters, and the values must match [A-Za-z0-9-._~:/?#&=,]+. 
@@ -12334,6 +12343,12 @@ func (s *CreateDeploymentInput) SetStageName(v string) *CreateDeploymentInput { return s } +// SetTracingEnabled sets the TracingEnabled field's value. +func (s *CreateDeploymentInput) SetTracingEnabled(v bool) *CreateDeploymentInput { + s.TracingEnabled = &v + return s +} + // SetVariables sets the Variables field's value. func (s *CreateDeploymentInput) SetVariables(v map[string]*string) *CreateDeploymentInput { s.Variables = v @@ -12351,7 +12366,7 @@ type CreateDocumentationPartInput struct { Location *DocumentationPartLocation `locationName:"location" type:"structure" required:"true"` // [Required] The new documentation content map of the targeted API entity. - // Enclosed key-value pairs are API-specific, but only Swagger-compliant key-value + // Enclosed key-value pairs are API-specific, but only OpenAPI-compliant key-value // pairs can be exported and, hence, published. // // Properties is a required field @@ -12516,7 +12531,7 @@ type CreateDomainNameInput struct { // key. CertificatePrivateKey *string `locationName:"certificatePrivateKey" type:"string"` - // (Required) The name of the DomainName resource. + // [Required] The name of the DomainName resource. // // DomainName is a required field DomainName *string `locationName:"domainName" type:"string" required:"true"` @@ -12616,7 +12631,7 @@ func (s *CreateDomainNameInput) SetRegionalCertificateName(v string) *CreateDoma type CreateModelInput struct { _ struct{} `type:"structure"` - // The content-type for the model. + // [Required] The content-type for the model. // // ContentType is a required field ContentType *string `locationName:"contentType" type:"string" required:"true"` @@ -12624,18 +12639,18 @@ type CreateModelInput struct { // The description of the model. Description *string `locationName:"description" type:"string"` - // The name of the model. Must be alphanumeric. + // [Required] The name of the model. Must be alphanumeric. // // Name is a required field Name *string `locationName:"name" type:"string" required:"true"` - // The RestApi identifier under which the Model will be created. + // [Required] The RestApi identifier under which the Model will be created. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The schema for the model. For application/json models, this should be JSON-schema - // draft v4 (http://json-schema.org/documentation.html) model. + // The schema for the model. For application/json models, this should be JSON + // schema draft 4 (https://tools.ietf.org/html/draft-zyp-json-schema-04) model. Schema *string `locationName:"schema" type:"string"` } @@ -12705,7 +12720,7 @@ type CreateRequestValidatorInput struct { // The name of the to-be-created RequestValidator. Name *string `locationName:"name" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -12770,7 +12785,7 @@ func (s *CreateRequestValidatorInput) SetValidateRequestParameters(v bool) *Crea type CreateResourceInput struct { _ struct{} `type:"structure"` - // The parent resource's identifier. + // [Required] The parent resource's identifier. 
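`SetTracingEnabled` above accompanies the new `TracingEnabled` field on `CreateDeploymentInput`, which switches on active X-Ray tracing for the stage created by the deployment. A minimal usage sketch, assuming an existing REST API (the ID and stage name are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
	svc := apigateway.New(session.Must(session.NewSession()))

	// Deploy an existing API (placeholder ID) to a "prod" stage with active
	// X-Ray tracing enabled via the new TracingEnabled flag.
	dep, err := svc.CreateDeployment(&apigateway.CreateDeploymentInput{
		RestApiId:      aws.String("a1b2c3d4e5"),
		StageName:      aws.String("prod"),
		Description:    aws.String("deployment with X-Ray tracing"),
		TracingEnabled: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created deployment", aws.StringValue(dep.Id))
}
```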
// // ParentId is a required field ParentId *string `location:"uri" locationName:"parent_id" type:"string" required:"true"` @@ -12780,7 +12795,7 @@ type CreateResourceInput struct { // PathPart is a required field PathPart *string `locationName:"pathPart" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -12837,8 +12852,8 @@ func (s *CreateResourceInput) SetRestApiId(v string) *CreateResourceInput { type CreateRestApiInput struct { _ struct{} `type:"structure"` - // The source of the API key for metring requests according to a usage plan. - // Valid values are HEADER to read the API key from the X-API-Key header of + // The source of the API key for metering requests according to a usage plan. + // Valid values are: HEADER to read the API key from the X-API-Key header of // a request. // AUTHORIZER to read the API key from the UsageIdentifierKey from a custom // authorizer. @@ -12858,18 +12873,22 @@ type CreateRestApiInput struct { // the API. EndpointConfiguration *EndpointConfiguration `locationName:"endpointConfiguration" type:"structure"` - // A nullable integer used to enable (non-negative between 0 and 10485760 (10M) - // bytes, inclusive) or disable (null) compression on an API. When compression - // is enabled, compression or decompression are not applied on the payload if - // the payload size is smaller than this value. Setting it to zero allows compression - // for any payload size. + // A nullable integer that is used to enable compression (with non-negative + // between 0 and 10485760 (10M) bytes, inclusive) or disable compression (with + // a null value) on an API. When compression is enabled, compression or decompression + // is not applied on the payload if the payload size is smaller than this value. + // Setting it to zero allows compression for any payload size. MinimumCompressionSize *int64 `locationName:"minimumCompressionSize" type:"integer"` - // The name of the RestApi. + // [Required] The name of the RestApi. // // Name is a required field Name *string `locationName:"name" type:"string" required:"true"` + // A stringified JSON policy document that applies to this RestApi regardless + // of the caller and Method + Policy *string `locationName:"policy" type:"string"` + // A version identifier for the API. Version *string `locationName:"version" type:"string"` } @@ -12939,6 +12958,12 @@ func (s *CreateRestApiInput) SetName(v string) *CreateRestApiInput { return s } +// SetPolicy sets the Policy field's value. +func (s *CreateRestApiInput) SetPolicy(v string) *CreateRestApiInput { + s.Policy = &v + return s +} + // SetVersion sets the Version field's value. func (s *CreateRestApiInput) SetVersion(v string) *CreateRestApiInput { s.Version = &v @@ -12969,7 +12994,7 @@ type CreateStageInput struct { // The version of the associated API documentation. DocumentationVersion *string `locationName:"documentationVersion" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. 
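The new `Policy` field on `CreateRestApiInput` takes a stringified IAM resource policy that applies to the whole `RestApi`. A hedged sketch of supplying one at creation time; the policy document below is only an illustrative allow-all statement, not a recommended policy:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
	svc := apigateway.New(session.Must(session.NewSession()))

	// Illustrative allow-all resource policy for execute-api:Invoke; a real
	// policy would normally scope Principal and Resource much more tightly.
	policy := `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "execute-api:Invoke",
    "Resource": "execute-api:/*"
  }]
}`

	api, err := svc.CreateRestApi(&apigateway.CreateRestApiInput{
		Name:   aws.String("example-api"),
		Policy: aws.String(policy),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created REST API", aws.StringValue(api.Id))
}
```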
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -12979,11 +13004,14 @@ type CreateStageInput struct { // StageName is a required field StageName *string `locationName:"stageName" type:"string" required:"true"` - // Key/Value map of strings. Valid character set is [a-zA-Z+-=._:/]. Tag key - // can be up to 128 characters and must not start with "aws:". Tag value can - // be up to 256 characters. + // The key-value map of strings. The valid character set is [a-zA-Z+-=._:/]. + // The tag key can be up to 128 characters and must not start with aws:. The + // tag value can be up to 256 characters. Tags map[string]*string `locationName:"tags" type:"map"` + // Specifies whether active tracing with X-ray is enabled for the Stage. + TracingEnabled *bool `locationName:"tracingEnabled" type:"boolean"` + // A map that defines the stage variables for the new Stage resource. Variable // names can have alphanumeric and underscore characters, and the values must // match [A-Za-z0-9-._~:/?#&=,]+. @@ -13073,6 +13101,12 @@ func (s *CreateStageInput) SetTags(v map[string]*string) *CreateStageInput { return s } +// SetTracingEnabled sets the TracingEnabled field's value. +func (s *CreateStageInput) SetTracingEnabled(v bool) *CreateStageInput { + s.TracingEnabled = &v + return s +} + // SetVariables sets the Variables field's value. func (s *CreateStageInput) SetVariables(v map[string]*string) *CreateStageInput { s.Variables = v @@ -13091,7 +13125,7 @@ type CreateUsagePlanInput struct { // The description of the usage plan. Description *string `locationName:"description" type:"string"` - // The name of the usage plan. + // [Required] The name of the usage plan. // // Name is a required field Name *string `locationName:"name" type:"string" required:"true"` @@ -13161,18 +13195,18 @@ func (s *CreateUsagePlanInput) SetThrottle(v *ThrottleSettings) *CreateUsagePlan type CreateUsagePlanKeyInput struct { _ struct{} `type:"structure"` - // The identifier of a UsagePlanKey resource for a plan customer. + // [Required] The identifier of a UsagePlanKey resource for a plan customer. // // KeyId is a required field KeyId *string `locationName:"keyId" type:"string" required:"true"` - // The type of a UsagePlanKey resource for a plan customer. + // [Required] The type of a UsagePlanKey resource for a plan customer. // // KeyType is a required field KeyType *string `locationName:"keyType" type:"string" required:"true"` - // The Id of the UsagePlan resource representing the usage plan containing the - // to-be-created UsagePlanKey resource representing a plan customer. + // [Required] The Id of the UsagePlan resource representing the usage plan containing + // the to-be-created UsagePlanKey resource representing a plan customer. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -13296,7 +13330,7 @@ func (s *CreateVpcLinkInput) SetTargetArns(v []*string) *CreateVpcLinkInput { type DeleteApiKeyInput struct { _ struct{} `type:"structure"` - // The identifier of the ApiKey resource to be deleted. + // [Required] The identifier of the ApiKey resource to be deleted. // // ApiKey is a required field ApiKey *string `location:"uri" locationName:"api_Key" type:"string" required:"true"` @@ -13349,12 +13383,12 @@ func (s DeleteApiKeyOutput) GoString() string { type DeleteAuthorizerInput struct { _ struct{} `type:"structure"` - // The identifier of the Authorizer resource. 
+ // [Required] The identifier of the Authorizer resource. // // AuthorizerId is a required field AuthorizerId *string `location:"uri" locationName:"authorizer_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -13416,12 +13450,12 @@ func (s DeleteAuthorizerOutput) GoString() string { type DeleteBasePathMappingInput struct { _ struct{} `type:"structure"` - // The base path name of the BasePathMapping resource to delete. + // [Required] The base path name of the BasePathMapping resource to delete. // // BasePath is a required field BasePath *string `location:"uri" locationName:"base_path" type:"string" required:"true"` - // The domain name of the BasePathMapping resource to delete. + // [Required] The domain name of the BasePathMapping resource to delete. // // DomainName is a required field DomainName *string `location:"uri" locationName:"domain_name" type:"string" required:"true"` @@ -13483,7 +13517,7 @@ func (s DeleteBasePathMappingOutput) GoString() string { type DeleteClientCertificateInput struct { _ struct{} `type:"structure"` - // The identifier of the ClientCertificate resource to be deleted. + // [Required] The identifier of the ClientCertificate resource to be deleted. // // ClientCertificateId is a required field ClientCertificateId *string `location:"uri" locationName:"clientcertificate_id" type:"string" required:"true"` @@ -13536,12 +13570,12 @@ func (s DeleteClientCertificateOutput) GoString() string { type DeleteDeploymentInput struct { _ struct{} `type:"structure"` - // The identifier of the Deployment resource to delete. + // [Required] The identifier of the Deployment resource to delete. // // DeploymentId is a required field DeploymentId *string `location:"uri" locationName:"deployment_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -13737,7 +13771,7 @@ func (s DeleteDocumentationVersionOutput) GoString() string { type DeleteDomainNameInput struct { _ struct{} `type:"structure"` - // The name of the DomainName resource to be deleted. + // [Required] The name of the DomainName resource to be deleted. // // DomainName is a required field DomainName *string `location:"uri" locationName:"domain_name" type:"string" required:"true"` @@ -13791,8 +13825,8 @@ func (s DeleteDomainNameOutput) GoString() string { type DeleteGatewayResponseInput struct { _ struct{} `type:"structure"` - // The response type of the associated GatewayResponse. Valid values are ACCESS_DENIED - // + // [Required] The response type of the associated GatewayResponse. Valid values + // are ACCESS_DENIED // API_CONFIGURATION_ERROR // AUTHORIZER_FAILURE // AUTHORIZER_CONFIGURATION_ERROR @@ -13811,12 +13845,12 @@ type DeleteGatewayResponseInput struct { // RESOURCE_NOT_FOUND // THROTTLED // UNAUTHORIZED - // UNSUPPORTED_MEDIA_TYPES + // UNSUPPORTED_MEDIA_TYPE // // ResponseType is a required field ResponseType *string `location:"uri" locationName:"response_type" type:"string" required:"true" enum:"GatewayResponseType"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. 
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -13878,17 +13912,17 @@ func (s DeleteGatewayResponseOutput) GoString() string { type DeleteIntegrationInput struct { _ struct{} `type:"structure"` - // Specifies a delete integration request's HTTP method. + // [Required] Specifies a delete integration request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // Specifies a delete integration request's resource identifier. + // [Required] Specifies a delete integration request's resource identifier. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -13959,22 +13993,22 @@ func (s DeleteIntegrationOutput) GoString() string { type DeleteIntegrationResponseInput struct { _ struct{} `type:"structure"` - // Specifies a delete integration response request's HTTP method. + // [Required] Specifies a delete integration response request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // Specifies a delete integration response request's resource identifier. + // [Required] Specifies a delete integration response request's resource identifier. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // Specifies a delete integration response request's status code. + // [Required] Specifies a delete integration response request's status code. // // StatusCode is a required field StatusCode *string `location:"uri" locationName:"status_code" type:"string" required:"true"` @@ -14054,17 +14088,17 @@ func (s DeleteIntegrationResponseOutput) GoString() string { type DeleteMethodInput struct { _ struct{} `type:"structure"` - // The HTTP verb of the Method resource. + // [Required] The HTTP verb of the Method resource. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // The Resource identifier for the Method resource. + // [Required] The Resource identifier for the Method resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -14135,22 +14169,22 @@ func (s DeleteMethodOutput) GoString() string { type DeleteMethodResponseInput struct { _ struct{} `type:"structure"` - // The HTTP verb of the Method resource. + // [Required] The HTTP verb of the Method resource. 
// // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // The Resource identifier for the MethodResponse resource. + // [Required] The Resource identifier for the MethodResponse resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The status code identifier for the MethodResponse resource. + // [Required] The status code identifier for the MethodResponse resource. // // StatusCode is a required field StatusCode *string `location:"uri" locationName:"status_code" type:"string" required:"true"` @@ -14230,12 +14264,12 @@ func (s DeleteMethodResponseOutput) GoString() string { type DeleteModelInput struct { _ struct{} `type:"structure"` - // The name of the model to delete. + // [Required] The name of the model to delete. // // ModelName is a required field ModelName *string `location:"uri" locationName:"model_name" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -14302,7 +14336,7 @@ type DeleteRequestValidatorInput struct { // RequestValidatorId is a required field RequestValidatorId *string `location:"uri" locationName:"requestvalidator_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -14364,12 +14398,12 @@ func (s DeleteRequestValidatorOutput) GoString() string { type DeleteResourceInput struct { _ struct{} `type:"structure"` - // The identifier of the Resource resource. + // [Required] The identifier of the Resource resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -14431,7 +14465,7 @@ func (s DeleteResourceOutput) GoString() string { type DeleteRestApiInput struct { _ struct{} `type:"structure"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -14484,12 +14518,12 @@ func (s DeleteRestApiOutput) GoString() string { type DeleteStageInput struct { _ struct{} `type:"structure"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The name of the Stage resource to delete. + // [Required] The name of the Stage resource to delete. 
// // StageName is a required field StageName *string `location:"uri" locationName:"stage_name" type:"string" required:"true"` @@ -14551,7 +14585,7 @@ func (s DeleteStageOutput) GoString() string { type DeleteUsagePlanInput struct { _ struct{} `type:"structure"` - // The Id of the to-be-deleted usage plan. + // [Required] The Id of the to-be-deleted usage plan. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -14591,13 +14625,13 @@ func (s *DeleteUsagePlanInput) SetUsagePlanId(v string) *DeleteUsagePlanInput { type DeleteUsagePlanKeyInput struct { _ struct{} `type:"structure"` - // The Id of the UsagePlanKey resource to be deleted. + // [Required] The Id of the UsagePlanKey resource to be deleted. // // KeyId is a required field KeyId *string `location:"uri" locationName:"keyId" type:"string" required:"true"` - // The Id of the UsagePlan resource representing the usage plan containing the - // to-be-deleted UsagePlanKey resource representing a plan customer. + // [Required] The Id of the UsagePlan resource representing the usage plan containing + // the to-be-deleted UsagePlanKey resource representing a plan customer. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -14731,7 +14765,7 @@ func (s DeleteVpcLinkOutput) GoString() string { // To view, update, or delete a deployment, call GET, PATCH, or DELETE on the // specified deployment resource (/restapis/{restapi_id}/deployments/{deployment_id}). // -// RestApi, Deployments, Stage, AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/get-deployment.html), +// RestApi, Deployments, Stage, AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/get-deployment.html), // AWS SDKs (https://aws.amazon.com/tools/) type Deployment struct { _ struct{} `type:"structure"` @@ -14741,7 +14775,7 @@ type Deployment struct { ApiSummary map[string]map[string]*MethodSnapshot `locationName:"apiSummary" type:"map"` // The date and time that the deployment resource was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // The description for the deployment resource. Description *string `locationName:"description" type:"string"` @@ -14840,11 +14874,11 @@ func (s *DeploymentCanarySettings) SetUseStageCache(v bool) *DeploymentCanarySet // on the API entity type. All valid fields are not required. // // The content map is a JSON string of API-specific key-value pairs. Although -// an API can use any shape for the content map, only the Swagger-compliant +// an API can use any shape for the content map, only the OpenAPI-compliant // documentation fields will be injected into the associated API entity definition -// in the exported Swagger definition file. +// in the exported OpenAPI definition file. // -// Documenting an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), +// Documenting an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), // DocumentationParts type DocumentationPart struct { _ struct{} `type:"structure"` @@ -14862,10 +14896,10 @@ type DocumentationPart struct { // A content map of API-specific key-value pairs describing the targeted API // entity. 
The map must be encoded as a JSON string, e.g., "{ \"description\": - // \"The API does ...\" }". Only Swagger-compliant documentation-related fields + // \"The API does ...\" }". Only OpenAPI-compliant documentation-related fields // from the properties map are exported and, hence, published as part of the // API entity definitions, while the original documentation parts are exported - // in a Swagger extension of x-amazon-apigateway-documentation. + // in a OpenAPI extension of x-amazon-apigateway-documentation. Properties *string `locationName:"properties" type:"string"` } @@ -14933,12 +14967,11 @@ type DocumentationPartLocation struct { // of the parent entity exactly. StatusCode *string `locationName:"statusCode" type:"string"` - // The type of API entity to which the documentation content applies. It is - // a valid and required field for API entity types of API, AUTHORIZER, MODEL, - // RESOURCE, METHOD, PATH_PARAMETER, QUERY_PARAMETER, REQUEST_HEADER, REQUEST_BODY, - // RESPONSE, RESPONSE_HEADER, and RESPONSE_BODY. Content inheritance does not - // apply to any entity of the API, AUTHORIZER, METHOD, MODEL, REQUEST_BODY, - // or RESOURCE type. + // [Required] The type of API entity to which the documentation content applies. + // Valid values are API, AUTHORIZER, MODEL, RESOURCE, METHOD, PATH_PARAMETER, + // QUERY_PARAMETER, REQUEST_HEADER, REQUEST_BODY, RESPONSE, RESPONSE_HEADER, + // and RESPONSE_BODY. Content inheritance does not apply to any entity of the + // API, AUTHORIZER, METHOD, MODEL, REQUEST_BODY, or RESOURCE type. // // Type is a required field Type *string `locationName:"type" type:"string" required:"true" enum:"DocumentationPartType"` @@ -15001,15 +15034,15 @@ func (s *DocumentationPartLocation) SetType(v string) *DocumentationPartLocation // // Publishing API documentation involves creating a documentation version associated // with an API stage and exporting the versioned documentation to an external -// (e.g., Swagger) file. +// (e.g., OpenAPI) file. // -// Documenting an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), +// Documenting an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), // DocumentationPart, DocumentationVersions type DocumentationVersion struct { _ struct{} `type:"structure"` // The date when the API documentation snapshot is created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // The description of the API documentation snapshot. Description *string `locationName:"description" type:"string"` @@ -15057,7 +15090,7 @@ func (s *DocumentationVersion) SetVersion(v string) *DocumentationVersion { // where myApi is the base path mapping (BasePathMapping) of your API under // the custom domain name. // -// Set a Custom Host Name for an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) +// Set a Custom Host Name for an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) type DomainName struct { _ struct{} `type:"structure"` @@ -15072,22 +15105,22 @@ type DomainName struct { // The timestamp when the certificate that was used by edge-optimized endpoint // for this domain name was uploaded. 
- CertificateUploadDate *time.Time `locationName:"certificateUploadDate" type:"timestamp" timestampFormat:"unix"` + CertificateUploadDate *time.Time `locationName:"certificateUploadDate" type:"timestamp"` // The domain name of the Amazon CloudFront distribution associated with this // custom domain name for an edge-optimized endpoint. You set up this association // when adding a DNS record pointing the custom domain name to this distribution // name. For more information about CloudFront distributions, see the Amazon - // CloudFront documentation (http://aws.amazon.com/documentation/cloudfront/). + // CloudFront documentation (https://aws.amazon.com/documentation/cloudfront/). DistributionDomainName *string `locationName:"distributionDomainName" type:"string"` // The region-agnostic Amazon Route 53 Hosted Zone ID of the edge-optimized // endpoint. The valid value is Z2FDTNDATAQYW2 for all the regions. For more // information, see Set up a Regional Custom Domain Name (https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-regional-api-custom-domain-create.html) - // and AWS Regions and Endpoints for API Gateway (http://docs.aws.amazon.com/general/latest/gr/rande.html#apigateway_region). + // and AWS Regions and Endpoints for API Gateway (https://docs.aws.amazon.com/general/latest/gr/rande.html#apigateway_region). DistributionHostedZoneId *string `locationName:"distributionHostedZoneId" type:"string"` - // The name of the DomainName resource. + // The custom domain name as an API host name, for example, my-api.example.com. DomainName *string `locationName:"domainName" type:"string"` // The endpoint configuration of this DomainName showing the endpoint types @@ -15110,7 +15143,7 @@ type DomainName struct { // The region-specific Amazon Route 53 Hosted Zone ID of the regional endpoint. // For more information, see Set up a Regional Custom Domain Name (https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-regional-api-custom-domain-create.html) - // and AWS Regions and Endpoints for API Gateway (http://docs.aws.amazon.com/general/latest/gr/rande.html#apigateway_region). + // and AWS Regions and Endpoints for API Gateway (https://docs.aws.amazon.com/general/latest/gr/rande.html#apigateway_region). RegionalHostedZoneId *string `locationName:"regionalHostedZoneId" type:"string"` } @@ -15198,7 +15231,7 @@ type EndpointConfiguration struct { // A list of endpoint types of an API (RestApi) or its custom domain name (DomainName). // For an edge-optimized API and its custom domain name, the endpoint type is // "EDGE". For a regional API and its custom domain name, the endpoint type - // is REGIONAL. + // is REGIONAL. For a private API, the endpoint type is PRIVATE. Types []*string `locationName:"types" type:"list"` } @@ -15289,12 +15322,12 @@ func (s FlushStageAuthorizersCacheOutput) GoString() string { type FlushStageCacheInput struct { _ struct{} `type:"structure"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The name of the stage to flush its cache. + // [Required] The name of the stage to flush its cache. 
// // StageName is a required field StageName *string `location:"uri" locationName:"stage_name" type:"string" required:"true"` @@ -15395,7 +15428,7 @@ func (s GetAccountInput) GoString() string { type GetApiKeyInput struct { _ struct{} `type:"structure"` - // The identifier of the ApiKey resource. + // [Required] The identifier of the ApiKey resource. // // ApiKey is a required field ApiKey *string `location:"uri" locationName:"api_Key" type:"string" required:"true"` @@ -15452,7 +15485,8 @@ type GetApiKeysInput struct { // key values. IncludeValues *bool `location:"querystring" locationName:"includeValues" type:"boolean"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The name of queried API keys. @@ -15504,7 +15538,7 @@ func (s *GetApiKeysInput) SetPosition(v string) *GetApiKeysInput { // Represents a collection of API keys as represented by an ApiKeys resource. // -// Use API Keys (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-api-keys.html) +// Use API Keys (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-api-keys.html) type GetApiKeysOutput struct { _ struct{} `type:"structure"` @@ -15550,12 +15584,12 @@ func (s *GetApiKeysOutput) SetWarnings(v []*string) *GetApiKeysOutput { type GetAuthorizerInput struct { _ struct{} `type:"structure"` - // The identifier of the Authorizer resource. + // [Required] The identifier of the Authorizer resource. // // AuthorizerId is a required field AuthorizerId *string `location:"uri" locationName:"authorizer_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -15603,13 +15637,14 @@ func (s *GetAuthorizerInput) SetRestApiId(v string) *GetAuthorizerInput { type GetAuthorizersInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. Position *string `location:"querystring" locationName:"position" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -15658,7 +15693,7 @@ func (s *GetAuthorizersInput) SetRestApiId(v string) *GetAuthorizersInput { // Represents a collection of Authorizer resources. // -// Enable custom authorization (http://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html) +// Enable custom authorization (https://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html) type GetAuthorizersOutput struct { _ struct{} `type:"structure"` @@ -15694,15 +15729,15 @@ func (s *GetAuthorizersOutput) SetPosition(v string) *GetAuthorizersOutput { type GetBasePathMappingInput struct { _ struct{} `type:"structure"` - // The base path name that callers of the API must provide as part of the URL - // after the domain name. 
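The updated `Limit` documentation (default 25, maximum 500) applies to paginated collection calls such as `GetApiKeys`. A hedged sketch that walks the API keys with the SDK's generated `GetApiKeysPages` helper:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
	svc := apigateway.New(session.Must(session.NewSession()))

	// Walk all API keys, 100 per page (default page size is 25, maximum 500).
	err := svc.GetApiKeysPages(&apigateway.GetApiKeysInput{
		Limit: aws.Int64(100),
	}, func(page *apigateway.GetApiKeysOutput, lastPage bool) bool {
		for _, key := range page.Items {
			fmt.Println(aws.StringValue(key.Id), aws.StringValue(key.Name))
		}
		return !lastPage // keep paginating until the last page
	})
	if err != nil {
		log.Fatal(err)
	}
}
```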
This value must be unique for all of the mappings - // across a single API. Leave this blank if you do not want callers to specify - // any base path name after the domain name. + // [Required] The base path name that callers of the API must provide as part + // of the URL after the domain name. This value must be unique for all of the + // mappings across a single API. Leave this blank if you do not want callers + // to specify any base path name after the domain name. // // BasePath is a required field BasePath *string `location:"uri" locationName:"base_path" type:"string" required:"true"` - // The domain name of the BasePathMapping resource to be described. + // [Required] The domain name of the BasePathMapping resource to be described. // // DomainName is a required field DomainName *string `location:"uri" locationName:"domain_name" type:"string" required:"true"` @@ -15750,13 +15785,13 @@ func (s *GetBasePathMappingInput) SetDomainName(v string) *GetBasePathMappingInp type GetBasePathMappingsInput struct { _ struct{} `type:"structure"` - // The domain name of a BasePathMapping resource. + // [Required] The domain name of a BasePathMapping resource. // // DomainName is a required field DomainName *string `location:"uri" locationName:"domain_name" type:"string" required:"true"` - // The maximum number of returned results per page. The value is 25 by default - // and could be between 1 - 500. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. @@ -15806,7 +15841,7 @@ func (s *GetBasePathMappingsInput) SetPosition(v string) *GetBasePathMappingsInp // Represents a collection of BasePathMapping resources. // -// Use Custom Domain Names (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) +// Use Custom Domain Names (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) type GetBasePathMappingsOutput struct { _ struct{} `type:"structure"` @@ -15842,7 +15877,7 @@ func (s *GetBasePathMappingsOutput) SetPosition(v string) *GetBasePathMappingsOu type GetClientCertificateInput struct { _ struct{} `type:"structure"` - // The identifier of the ClientCertificate resource to be described. + // [Required] The identifier of the ClientCertificate resource to be described. // // ClientCertificateId is a required field ClientCertificateId *string `location:"uri" locationName:"clientcertificate_id" type:"string" required:"true"` @@ -15881,8 +15916,8 @@ func (s *GetClientCertificateInput) SetClientCertificateId(v string) *GetClientC type GetClientCertificatesInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. The value is 25 by default - // and could be between 1 - 500. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. @@ -15913,7 +15948,7 @@ func (s *GetClientCertificatesInput) SetPosition(v string) *GetClientCertificate // Represents a collection of ClientCertificate resources. 
// -// Use Client-Side Certificate (http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-client-side-ssl-authentication.html) +// Use Client-Side Certificate (https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-client-side-ssl-authentication.html) type GetClientCertificatesOutput struct { _ struct{} `type:"structure"` @@ -15949,7 +15984,7 @@ func (s *GetClientCertificatesOutput) SetPosition(v string) *GetClientCertificat type GetDeploymentInput struct { _ struct{} `type:"structure"` - // The identifier of the Deployment resource to get information about. + // [Required] The identifier of the Deployment resource to get information about. // // DeploymentId is a required field DeploymentId *string `location:"uri" locationName:"deployment_id" type:"string" required:"true"` @@ -15963,7 +15998,7 @@ type GetDeploymentInput struct { // list containing only the "apisummary" string. For example, GET /restapis/{restapi_id}/deployments/{deployment_id}?embed=apisummary. Embed []*string `location:"querystring" locationName:"embed" type:"list"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -16017,14 +16052,14 @@ func (s *GetDeploymentInput) SetRestApiId(v string) *GetDeploymentInput { type GetDeploymentsInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. The value is 25 by default - // and could be between 1 - 500. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. Position *string `location:"querystring" locationName:"position" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -16080,8 +16115,8 @@ func (s *GetDeploymentsInput) SetRestApiId(v string) *GetDeploymentsInput { // resource. To view, update, or delete an existing deployment, make a GET, // PATCH, or DELETE request, respectively, on a specified Deployment resource. // -// Deploying an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-deploy-api.html), -// AWS CLI (http://docs.aws.amazon.com/cli/latest/reference/apigateway/get-deployment.html), +// Deploying an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-deploy-api.html), +// AWS CLI (https://docs.aws.amazon.com/cli/latest/reference/apigateway/get-deployment.html), // AWS SDKs (https://aws.amazon.com/tools/) type GetDeploymentsOutput struct { _ struct{} `type:"structure"` @@ -16172,7 +16207,8 @@ func (s *GetDocumentationPartInput) SetRestApiId(v string) *GetDocumentationPart type GetDocumentationPartsInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The status of the API documentation parts to retrieve. 
Valid values are DOCUMENTED @@ -16265,7 +16301,7 @@ func (s *GetDocumentationPartsInput) SetType(v string) *GetDocumentationPartsInp // The collection of documentation parts of an API. // -// Documenting an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), DocumentationPart +// Documenting an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), DocumentationPart type GetDocumentationPartsOutput struct { _ struct{} `type:"structure"` @@ -16354,7 +16390,8 @@ func (s *GetDocumentationVersionInput) SetRestApiId(v string) *GetDocumentationV type GetDocumentationVersionsInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. @@ -16412,7 +16449,7 @@ func (s *GetDocumentationVersionsInput) SetRestApiId(v string) *GetDocumentation // Use the DocumentationVersions to manage documentation snapshots associated // with various API stages. // -// Documenting an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), +// Documenting an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), // DocumentationPart, DocumentationVersion type GetDocumentationVersionsOutput struct { _ struct{} `type:"structure"` @@ -16449,7 +16486,7 @@ func (s *GetDocumentationVersionsOutput) SetPosition(v string) *GetDocumentation type GetDomainNameInput struct { _ struct{} `type:"structure"` - // The name of the DomainName resource. + // [Required] The name of the DomainName resource. // // DomainName is a required field DomainName *string `location:"uri" locationName:"domain_name" type:"string" required:"true"` @@ -16488,8 +16525,8 @@ func (s *GetDomainNameInput) SetDomainName(v string) *GetDomainNameInput { type GetDomainNamesInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. The value is 25 by default - // and could be between 1 - 500. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. @@ -16520,7 +16557,7 @@ func (s *GetDomainNamesInput) SetPosition(v string) *GetDomainNamesInput { // Represents a collection of DomainName resources. // -// Use Client-Side Certificate (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) +// Use Client-Side Certificate (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) type GetDomainNamesOutput struct { _ struct{} `type:"structure"` @@ -16557,30 +16594,31 @@ type GetExportInput struct { _ struct{} `type:"structure"` // The content-type of the export, for example application/json. Currently application/json - // and application/yaml are supported for exportType of swagger. This should - // be specified in the Accept header for direct API requests. + // and application/yaml are supported for exportType ofoas30 and swagger. This + // should be specified in the Accept header for direct API requests. Accepts *string `location:"header" locationName:"Accept" type:"string"` - // The type of export. 
Currently only 'swagger' is supported. + // [Required] The type of export. Acceptable values are 'oas30' for OpenAPI + // 3.0.x and 'swagger' for Swagger/OpenAPI 2.0. // // ExportType is a required field ExportType *string `location:"uri" locationName:"export_type" type:"string" required:"true"` // A key-value map of query string parameters that specify properties of the - // export, depending on the requested exportType. For exportTypeswagger, any - // combination of the following parameters are supported: integrations will - // export the API with x-amazon-apigateway-integration extensions. authorizers - // will export the API with x-amazon-apigateway-authorizer extensions. postman - // will export the API with Postman extensions, allowing for import to the Postman - // tool + // export, depending on the requested exportType. For exportTypeoas30 and swagger, + // any combination of the following parameters are supported: extensions='integrations' + // or extensions='apigateway' will export the API with x-amazon-apigateway-integration + // extensions. extensions='authorizers' will export the API with x-amazon-apigateway-authorizer + // extensions. postman will export the API with Postman extensions, allowing + // for import to the Postman tool Parameters map[string]*string `location:"querystring" locationName:"parameters" type:"map"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The name of the Stage that will be exported. + // [Required] The name of the Stage that will be exported. // // StageName is a required field StageName *string `location:"uri" locationName:"stage_name" type:"string" required:"true"` @@ -16692,8 +16730,8 @@ func (s *GetExportOutput) SetContentType(v string) *GetExportOutput { type GetGatewayResponseInput struct { _ struct{} `type:"structure"` - // The response type of the associated GatewayResponse. Valid values are ACCESS_DENIED - // + // [Required] The response type of the associated GatewayResponse. Valid values + // are ACCESS_DENIED // API_CONFIGURATION_ERROR // AUTHORIZER_FAILURE // AUTHORIZER_CONFIGURATION_ERROR @@ -16712,12 +16750,12 @@ type GetGatewayResponseInput struct { // RESOURCE_NOT_FOUND // THROTTLED // UNAUTHORIZED - // UNSUPPORTED_MEDIA_TYPES + // UNSUPPORTED_MEDIA_TYPE // // ResponseType is a required field ResponseType *string `location:"uri" locationName:"response_type" type:"string" required:"true" enum:"GatewayResponseType"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -16768,15 +16806,16 @@ func (s *GetGatewayResponseInput) SetRestApiId(v string) *GetGatewayResponseInpu type GetGatewayResponsesInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. The GatewayResponses collection - // does not support pagination and the limit does not apply here. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. The GatewayResponses collection does not support + // pagination and the limit does not apply here. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. 
The GatewayResponse // collection does not support pagination and the position does not apply here. Position *string `location:"querystring" locationName:"position" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -16828,7 +16867,7 @@ func (s *GetGatewayResponsesInput) SetRestApiId(v string) *GetGatewayResponsesIn // this collection. // // For more information about valid gateway response types, see Gateway Response -// Types Supported by API Gateway (http://docs.aws.amazon.com/apigateway/latest/developerguide/supported-gateway-response-types.html)Example: +// Types Supported by API Gateway (https://docs.aws.amazon.com/apigateway/latest/developerguide/supported-gateway-response-types.html)Example: // Get the collection of gateway responses of an API // // Request @@ -16982,7 +17021,7 @@ func (s *GetGatewayResponsesInput) SetRestApiId(v string) *GetGatewayResponsesIn // { "application/json": "{\"message\":$context.error.messageString}" }, "responseType": // "AUTHORIZER_FAILURE", "statusCode": "500" } ] } } // -// Customize Gateway Responses (http://docs.aws.amazon.com/apigateway/latest/developerguide/customize-gateway-responses.html) +// Customize Gateway Responses (https://docs.aws.amazon.com/apigateway/latest/developerguide/customize-gateway-responses.html) type GetGatewayResponsesOutput struct { _ struct{} `type:"structure"` @@ -17018,17 +17057,17 @@ func (s *GetGatewayResponsesOutput) SetPosition(v string) *GetGatewayResponsesOu type GetIntegrationInput struct { _ struct{} `type:"structure"` - // Specifies a get integration request's HTTP method. + // [Required] Specifies a get integration request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // Specifies a get integration request's resource identifier + // [Required] Specifies a get integration request's resource identifier // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17085,22 +17124,22 @@ func (s *GetIntegrationInput) SetRestApiId(v string) *GetIntegrationInput { type GetIntegrationResponseInput struct { _ struct{} `type:"structure"` - // Specifies a get integration response request's HTTP method. + // [Required] Specifies a get integration response request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // Specifies a get integration response request's resource identifier. + // [Required] Specifies a get integration response request's resource identifier. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // Specifies a get integration response request's status code. 
+ // [Required] Specifies a get integration response request's status code. // // StatusCode is a required field StatusCode *string `location:"uri" locationName:"status_code" type:"string" required:"true"` @@ -17166,17 +17205,17 @@ func (s *GetIntegrationResponseInput) SetStatusCode(v string) *GetIntegrationRes type GetMethodInput struct { _ struct{} `type:"structure"` - // Specifies the method request's HTTP method type. + // [Required] Specifies the method request's HTTP method type. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // The Resource identifier for the Method resource. + // [Required] The Resource identifier for the Method resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17233,22 +17272,22 @@ func (s *GetMethodInput) SetRestApiId(v string) *GetMethodInput { type GetMethodResponseInput struct { _ struct{} `type:"structure"` - // The HTTP verb of the Method resource. + // [Required] The HTTP verb of the Method resource. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // The Resource identifier for the MethodResponse resource. + // [Required] The Resource identifier for the MethodResponse resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The status code for the MethodResponse resource. + // [Required] The status code for the MethodResponse resource. // // StatusCode is a required field StatusCode *string `location:"uri" locationName:"status_code" type:"string" required:"true"` @@ -17319,12 +17358,12 @@ type GetModelInput struct { // is false. Flatten *bool `location:"querystring" locationName:"flatten" type:"boolean"` - // The name of the model as an identifier. + // [Required] The name of the model as an identifier. // // ModelName is a required field ModelName *string `location:"uri" locationName:"model_name" type:"string" required:"true"` - // The RestApi identifier under which the Model exists. + // [Required] The RestApi identifier under which the Model exists. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17378,12 +17417,12 @@ func (s *GetModelInput) SetRestApiId(v string) *GetModelInput { type GetModelTemplateInput struct { _ struct{} `type:"structure"` - // The name of the model for which to generate a template. + // [Required] The name of the model for which to generate a template. // // ModelName is a required field ModelName *string `location:"uri" locationName:"model_name" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. 
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17429,11 +17468,11 @@ func (s *GetModelTemplateInput) SetRestApiId(v string) *GetModelTemplateInput { // Represents a mapping template used to transform a payload. // -// Mapping Templates (http://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html#models-mappings-mappings) +// Mapping Templates (https://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html#models-mappings-mappings) type GetModelTemplateOutput struct { _ struct{} `type:"structure"` - // The Apache Velocity Template Language (VTL) (http://velocity.apache.org/engine/devel/vtl-reference-guide.html) + // The Apache Velocity Template Language (VTL) (https://velocity.apache.org/engine/devel/vtl-reference-guide.html) // template content used for the template resource. Value *string `locationName:"value" type:"string"` } @@ -17458,14 +17497,14 @@ func (s *GetModelTemplateOutput) SetValue(v string) *GetModelTemplateOutput { type GetModelsInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. The value is 25 by default - // and could be between 1 - 500. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. Position *string `location:"querystring" locationName:"position" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17514,7 +17553,7 @@ func (s *GetModelsInput) SetRestApiId(v string) *GetModelsInput { // Represents a collection of Model resources. // -// Method, MethodResponse, Models and Mappings (http://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html) +// Method, MethodResponse, Models and Mappings (https://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html) type GetModelsOutput struct { _ struct{} `type:"structure"` @@ -17555,7 +17594,7 @@ type GetRequestValidatorInput struct { // RequestValidatorId is a required field RequestValidatorId *string `location:"uri" locationName:"requestvalidator_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17603,13 +17642,14 @@ func (s *GetRequestValidatorInput) SetRestApiId(v string) *GetRequestValidatorIn type GetRequestValidatorsInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. Position *string `location:"querystring" locationName:"position" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. 
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17658,11 +17698,11 @@ func (s *GetRequestValidatorsInput) SetRestApiId(v string) *GetRequestValidators // A collection of RequestValidator resources of a given RestApi. // -// In Swagger, the RequestValidators of an API is defined by the x-amazon-apigateway-request-validators -// (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions.html#api-gateway-swagger-extensions-request-validators.html) +// In OpenAPI, the RequestValidators of an API is defined by the x-amazon-apigateway-request-validators +// (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions.html#api-gateway-swagger-extensions-request-validators.html) // extension. // -// Enable Basic Request Validation in API Gateway (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html) +// Enable Basic Request Validation in API Gateway (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html) type GetRequestValidatorsOutput struct { _ struct{} `type:"structure"` @@ -17706,12 +17746,12 @@ type GetResourceInput struct { // /restapis/{restapi_id}/resources/{resource_id}?embed=methods. Embed []*string `location:"querystring" locationName:"embed" type:"list"` - // The identifier for the Resource resource. + // [Required] The identifier for the Resource resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17773,14 +17813,14 @@ type GetResourcesInput struct { // /restapis/{restapi_id}/resources?embed=methods. Embed []*string `location:"querystring" locationName:"embed" type:"list"` - // The maximum number of returned results per page. The value is 25 by default - // and could be between 1 - 500. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. Position *string `location:"querystring" locationName:"position" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17835,7 +17875,7 @@ func (s *GetResourcesInput) SetRestApiId(v string) *GetResourcesInput { // Represents a collection of Resource resources. // -// Create an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) +// Create an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) type GetResourcesOutput struct { _ struct{} `type:"structure"` @@ -17871,7 +17911,7 @@ func (s *GetResourcesOutput) SetPosition(v string) *GetResourcesOutput { type GetRestApiInput struct { _ struct{} `type:"structure"` - // The identifier of the RestApi resource. + // [Required] The string identifier of the associated RestApi. 
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -17910,8 +17950,8 @@ func (s *GetRestApiInput) SetRestApiId(v string) *GetRestApiInput { type GetRestApisInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. The value is 25 by default - // and could be between 1 - 500. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. @@ -17943,7 +17983,7 @@ func (s *GetRestApisInput) SetPosition(v string) *GetRestApisInput { // Contains references to your APIs and links that guide you in how to interact // with your collection. A collection offers a paginated view of your APIs. // -// Create an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) +// Create an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) type GetRestApisOutput struct { _ struct{} `type:"structure"` @@ -17986,18 +18026,18 @@ type GetSdkInput struct { // named serviceName and javaPackageName are required. Parameters map[string]*string `location:"querystring" locationName:"parameters" type:"map"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The language for the generated SDK. Currently java, javascript, android, - // objectivec (for iOS), swift (for iOS), and ruby are supported. + // [Required] The language for the generated SDK. Currently java, javascript, + // android, objectivec (for iOS), swift (for iOS), and ruby are supported. // // SdkType is a required field SdkType *string `location:"uri" locationName:"sdk_type" type:"string" required:"true"` - // The name of the Stage that the SDK will use. + // [Required] The name of the Stage that the SDK will use. // // StageName is a required field StageName *string `location:"uri" locationName:"stage_name" type:"string" required:"true"` @@ -18102,7 +18142,7 @@ func (s *GetSdkOutput) SetContentType(v string) *GetSdkOutput { type GetSdkTypeInput struct { _ struct{} `type:"structure"` - // The identifier of the queried SdkType instance. + // [Required] The identifier of the queried SdkType instance. // // Id is a required field Id *string `location:"uri" locationName:"sdktype_id" type:"string" required:"true"` @@ -18141,7 +18181,8 @@ func (s *GetSdkTypeInput) SetId(v string) *GetSdkTypeInput { type GetSdkTypesInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. @@ -18206,12 +18247,12 @@ func (s *GetSdkTypesOutput) SetPosition(v string) *GetSdkTypesOutput { type GetStageInput struct { _ struct{} `type:"structure"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. 
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The name of the Stage resource to get information about. + // [Required] The name of the Stage resource to get information about. // // StageName is a required field StageName *string `location:"uri" locationName:"stage_name" type:"string" required:"true"` @@ -18262,7 +18303,7 @@ type GetStagesInput struct { // The stages' deployment identifiers. DeploymentId *string `location:"querystring" locationName:"deploymentId" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -18305,7 +18346,7 @@ func (s *GetStagesInput) SetRestApiId(v string) *GetStagesInput { // A list of Stage resources that are associated with the ApiKey resource. // -// Deploying API in Stages (http://docs.aws.amazon.com/apigateway/latest/developerguide/stages.html) +// Deploying API in Stages (https://docs.aws.amazon.com/apigateway/latest/developerguide/stages.html) type GetStagesOutput struct { _ struct{} `type:"structure"` @@ -18334,14 +18375,15 @@ type GetTagsInput struct { _ struct{} `type:"structure"` // (Not currently supported) The maximum number of returned results per page. + // The default value is 25 and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // (Not currently supported) The current pagination position in the paged result // set. Position *string `location:"querystring" locationName:"position" type:"string"` - // [Required] The ARN of a resource that can be tagged. At present, Stage is - // the only taggable resource. + // [Required] The ARN of a resource that can be tagged. The resource ARN must + // be URL-encoded. At present, Stage is the only taggable resource. // // ResourceArn is a required field ResourceArn *string `location:"uri" locationName:"resource_arn" type:"string" required:"true"` @@ -18388,11 +18430,11 @@ func (s *GetTagsInput) SetResourceArn(v string) *GetTagsInput { return s } -// A collection of Tags associated with a given resource. +// The collection of tags. Each tag element is associated with a given resource. type GetTagsOutput struct { _ struct{} `type:"structure"` - // A collection of Tags associated with a given resource. + // The collection of tags. Each tag element is associated with a given resource. Tags map[string]*string `locationName:"tags" type:"map"` } @@ -18417,7 +18459,7 @@ func (s *GetTagsOutput) SetTags(v map[string]*string) *GetTagsOutput { type GetUsageInput struct { _ struct{} `type:"structure"` - // The ending date (e.g., 2016-12-31) of the usage data. + // [Required] The ending date (e.g., 2016-12-31) of the usage data. // // EndDate is a required field EndDate *string `location:"querystring" locationName:"endDate" type:"string" required:"true"` @@ -18425,18 +18467,19 @@ type GetUsageInput struct { // The Id of the API key associated with the resultant usage data. KeyId *string `location:"querystring" locationName:"keyId" type:"string"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. 
Position *string `location:"querystring" locationName:"position" type:"string"` - // The starting date (e.g., 2016-01-01) of the usage data. + // [Required] The starting date (e.g., 2016-01-01) of the usage data. // // StartDate is a required field StartDate *string `location:"querystring" locationName:"startDate" type:"string" required:"true"` - // The Id of the usage plan associated with the usage data. + // [Required] The Id of the usage plan associated with the usage data. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -18511,7 +18554,7 @@ func (s *GetUsageInput) SetUsagePlanId(v string) *GetUsageInput { type GetUsagePlanInput struct { _ struct{} `type:"structure"` - // The identifier of the UsagePlan resource to be retrieved. + // [Required] The identifier of the UsagePlan resource to be retrieved. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -18550,14 +18593,14 @@ func (s *GetUsagePlanInput) SetUsagePlanId(v string) *GetUsagePlanInput { type GetUsagePlanKeyInput struct { _ struct{} `type:"structure"` - // The key Id of the to-be-retrieved UsagePlanKey resource representing a plan - // customer. + // [Required] The key Id of the to-be-retrieved UsagePlanKey resource representing + // a plan customer. // // KeyId is a required field KeyId *string `location:"uri" locationName:"keyId" type:"string" required:"true"` - // The Id of the UsagePlan resource representing the usage plan containing the - // to-be-retrieved UsagePlanKey resource representing a plan customer. + // [Required] The Id of the UsagePlan resource representing the usage plan containing + // the to-be-retrieved UsagePlanKey resource representing a plan customer. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -18606,7 +18649,8 @@ func (s *GetUsagePlanKeyInput) SetUsagePlanId(v string) *GetUsagePlanKeyInput { type GetUsagePlanKeysInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // A query parameter specifying the name of the to-be-returned usage plan keys. @@ -18615,8 +18659,8 @@ type GetUsagePlanKeysInput struct { // The current pagination position in the paged result set. Position *string `location:"querystring" locationName:"position" type:"string"` - // The Id of the UsagePlan resource representing the usage plan containing the - // to-be-retrieved UsagePlanKey resource representing a plan customer. + // [Required] The Id of the UsagePlan resource representing the usage plan containing + // the to-be-retrieved UsagePlanKey resource representing a plan customer. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -18672,7 +18716,7 @@ func (s *GetUsagePlanKeysInput) SetUsagePlanId(v string) *GetUsagePlanKeysInput // Represents the collection of usage plan keys added to usage plans for the // associated API keys and, possibly, other types of keys. 
// -// Create and Use Usage Plans (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html) +// Create and Use Usage Plans (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html) type GetUsagePlanKeysOutput struct { _ struct{} `type:"structure"` @@ -18711,7 +18755,8 @@ type GetUsagePlansInput struct { // The identifier of the API key associated with the usage plans. KeyId *string `location:"querystring" locationName:"keyId" type:"string"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. @@ -18748,7 +18793,7 @@ func (s *GetUsagePlansInput) SetPosition(v string) *GetUsagePlansInput { // Represents a collection of usage plans for an AWS account. // -// Create and Use Usage Plans (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html) +// Create and Use Usage Plans (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html) type GetUsagePlansOutput struct { _ struct{} `type:"structure"` @@ -18824,7 +18869,8 @@ func (s *GetVpcLinkInput) SetVpcLinkId(v string) *GetVpcLinkInput { type GetVpcLinksInput struct { _ struct{} `type:"structure"` - // The maximum number of returned results per page. + // The maximum number of returned results per page. The default value is 25 + // and the maximum value is 500. Limit *int64 `location:"querystring" locationName:"limit" type:"integer"` // The current pagination position in the paged result set. @@ -18855,8 +18901,8 @@ func (s *GetVpcLinksInput) SetPosition(v string) *GetVpcLinksInput { // The collection of VPC links under the caller's account in a region. // -// Getting Started with Private Integrations (http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-private-integration.html), -// Set up Private Integrations (http://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-private-integration.html) +// Getting Started with Private Integrations (https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-private-integration.html), +// Set up Private Integrations (https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-private-integration.html) type GetVpcLinksOutput struct { _ struct{} `type:"structure"` @@ -18894,7 +18940,7 @@ type ImportApiKeysInput struct { _ struct{} `type:"structure" payload:"Body"` // The payload of the POST request to import API keys. For the payload format, - // see API Key File Format (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-key-file-format.html). + // see API Key File Format (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-key-file-format.html). // // Body is a required field Body []byte `locationName:"body" type:"blob" required:"true"` @@ -18987,12 +19033,12 @@ func (s *ImportApiKeysOutput) SetWarnings(v []*string) *ImportApiKeysOutput { return s } -// Import documentation parts from an external (e.g., Swagger) definition file. +// Import documentation parts from an external (e.g., OpenAPI) definition file. type ImportDocumentationPartsInput struct { _ struct{} `type:"structure" payload:"Body"` // [Required] Raw byte array representing the to-be-imported documentation parts. 
- // To import from a Swagger file, this is a JSON object. + // To import from an OpenAPI file, this is a JSON object. // // Body is a required field Body []byte `locationName:"body" type:"blob" required:"true"` @@ -19066,9 +19112,9 @@ func (s *ImportDocumentationPartsInput) SetRestApiId(v string) *ImportDocumentat // A collection of the imported DocumentationPart identifiers. // // This is used to return the result when documentation parts in an external -// (e.g., Swagger) file are imported into API Gateway -// Documenting an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), -// documentationpart:import (http://docs.aws.amazon.com/apigateway/api-reference/link-relation/documentationpart-import/), +// (e.g., OpenAPI) file are imported into API Gateway +// Documenting an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-documenting-api.html), +// documentationpart:import (https://docs.aws.amazon.com/apigateway/api-reference/link-relation/documentationpart-import/), // DocumentationPart type ImportDocumentationPartsOutput struct { _ struct{} `type:"structure"` @@ -19107,9 +19153,9 @@ func (s *ImportDocumentationPartsOutput) SetWarnings(v []*string) *ImportDocumen type ImportRestApiInput struct { _ struct{} `type:"structure" payload:"Body"` - // The POST request body containing external API definitions. Currently, only - // Swagger definition JSON files are supported. The maximum size of the API - // definition file is 2MB. + // [Required] The POST request body containing external API definitions. Currently, + // only OpenAPI definition JSON/YAML files are supported. The maximum size of + // the API definition file is 2MB. // // Body is a required field Body []byte `locationName:"body" type:"blob" required:"true"` @@ -19124,8 +19170,9 @@ type ImportRestApiInput struct { // // To exclude DocumentationParts from the import, set parameters as ignore=documentation. // - // To configure the endpoint type, set parameters as endpointConfigurationTypes=EDGE - // orendpointConfigurationTypes=REGIONAL. The default endpoint type is EDGE. + // To configure the endpoint type, set parameters as endpointConfigurationTypes=EDGE, + // endpointConfigurationTypes=REGIONAL, or endpointConfigurationTypes=PRIVATE. + // The default endpoint type is EDGE. // // To handle imported basePath, set parameters as basePath=ignore, basePath=prepend // or basePath=split. @@ -19134,11 +19181,11 @@ type ImportRestApiInput struct { // API is: // // aws apigateway import-rest-api --parameters ignore=documentation --body - // 'file:///path/to/imported-api-body.json + // 'file:///path/to/imported-api-body.json' // The AWS CLI command to set the regional endpoint on the imported API is: // // aws apigateway import-rest-api --parameters endpointConfigurationTypes=REGIONAL - // --body 'file:///path/to/imported-api-body.json + // --body 'file:///path/to/imported-api-body.json' Parameters map[string]*string `location:"querystring" locationName:"parameters" type:"map"` } @@ -19187,7 +19234,7 @@ func (s *ImportRestApiInput) SetParameters(v map[string]*string) *ImportRestApiI // // In the API Gateway console, the built-in Lambda integration is an AWS integration. 
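// Illustrative sketch only: importing an API definition with the ImportRestApiInput
// parameters described above, mirroring the AWS CLI examples in the documentation.
// The definition file path is a placeholder and a default credential chain is assumed.
package main

import (
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
	// Placeholder OpenAPI definition file (the API definition file is limited to 2MB).
	body, err := ioutil.ReadFile("imported-api-body.json")
	if err != nil {
		log.Fatal(err)
	}

	svc := apigateway.New(session.Must(session.NewSession()))

	// endpointConfigurationTypes=REGIONAL overrides the default EDGE endpoint type;
	// ignore=documentation would skip DocumentationParts, as noted above.
	api, err := svc.ImportRestApi(&apigateway.ImportRestApiInput{
		Body: body,
		Parameters: map[string]*string{
			"endpointConfigurationTypes": aws.String("REGIONAL"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("imported RestApi id: %s", aws.StringValue(api.Id))
}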
// -// Creating an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) +// Creating an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) type Integration struct { _ struct{} `type:"structure"` @@ -19197,7 +19244,7 @@ type Integration struct { // Specifies the integration's cache namespace. CacheNamespace *string `locationName:"cacheNamespace" type:"string"` - // The (id (http://docs.aws.amazon.com/apigateway/api-reference/resource/vpc-link/#id)) + // The (id (https://docs.aws.amazon.com/apigateway/api-reference/resource/vpc-link/#id)) // of the VpcLink used for the integration when connectionType=VPC_LINK and // undefined, otherwise. ConnectionId *string `locationName:"connectionId" type:"string"` @@ -19246,7 +19293,7 @@ type Integration struct { // // The successful response returns 200 OKstatus and a payload as follows: // - // { "_links": { "curies": { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-response-{rel}.html", + // { "_links": { "curies": { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-response-{rel}.html", // "name": "integrationresponse", "templated": true }, "self": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET/integration/responses/200", // "title": "200" }, "integrationresponse:delete": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET/integration/responses/200" // }, "integrationresponse:update": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET/integration/responses/200" @@ -19339,7 +19386,7 @@ type Integration struct { // Alternatively, path can be used for an AWS service path-based API. The // ensuing service_api refers to the path to an AWS service resource, including // the region of the integrated AWS service, if applicable. For example, - // for integration with the S3 API of GetObject (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html), + // for integration with the S3 API of GetObject (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html), // the uri can be either arn:aws:apigateway:us-west-2:s3:action/GetObject&Bucket={bucket}&Key={key} // or arn:aws:apigateway:us-west-2:s3:path/{bucket}/{key} Uri *string `locationName:"uri" type:"string"` @@ -19443,7 +19490,7 @@ func (s *Integration) SetUri(v string) *Integration { // MethodResponse, and parameters and templates can be used to transform the // back-end response. 
// -// Creating an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) +// Creating an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) type IntegrationResponse struct { _ struct{} `type:"structure"` @@ -19559,10 +19606,10 @@ func (s *IntegrationResponse) SetStatusCode(v string) *IntegrationResponse { // The successful response returns a 200 OK status code and a payload similar // to the following: // -// { "_links": { "curies": [ { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-{rel}.html", -// "name": "integration", "templated": true }, { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-response-{rel}.html", -// "name": "integrationresponse", "templated": true }, { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-{rel}.html", -// "name": "method", "templated": true }, { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-response-{rel}.html", +// { "_links": { "curies": [ { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-{rel}.html", +// "name": "integration", "templated": true }, { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-response-{rel}.html", +// "name": "integrationresponse", "templated": true }, { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-{rel}.html", +// "name": "method", "templated": true }, { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-response-{rel}.html", // "name": "methodresponse", "templated": true } ], "self": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET", // "name": "GET", "title": "GET" }, "integration:put": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET/integration" // }, "method:delete": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET" @@ -19599,10 +19646,10 @@ func (s *IntegrationResponse) SetStatusCode(v string) *IntegrationResponse { // In the example above, the response template for the 200 OK response maps // the JSON output from the ListStreams action in the back end to an XML output. // The mapping template is URL-encoded as %3CkinesisStreams%3E%23foreach(%24stream%20in%20%24input.path(%27%24.StreamNames%27))%3Cstream%3E%3Cname%3E%24stream%3C%2Fname%3E%3C%2Fstream%3E%23end%3C%2FkinesisStreams%3E -// and the output is decoded using the $util.urlDecode() (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#util-templat-reference) +// and the output is decoded using the $util.urlDecode() (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#util-templat-reference) // helper function. // -// MethodResponse, Integration, IntegrationResponse, Resource, Set up an API's method (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-method-settings.html) +// MethodResponse, Integration, IntegrationResponse, Resource, Set up an API's method (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-method-settings.html) type Method struct { _ struct{} `type:"structure"` @@ -19611,13 +19658,13 @@ type Method struct { ApiKeyRequired *bool `locationName:"apiKeyRequired" type:"boolean"` // A list of authorization scopes configured on the method. 
The scopes are used - // with a COGNITO_USER_POOL authorizer to authorize the method invocation. The - // authorization works by matching the method scopes against the scopes parsed - // from the access token in the incoming request. The method invocation is authorized - // if any method scopes matches a claimed scope in the access token. Otherwise, - // the invocation is not authorized. When the method scope is configured, the - // client must provide an access token instead of an identity token for authorization - // purposes. + // with a COGNITO_USER_POOLS authorizer to authorize the method invocation. + // The authorization works by matching the method scopes against the scopes + // parsed from the access token in the incoming request. The method invocation + // is authorized if any method scopes matches a claimed scope in the access + // token. Otherwise, the invocation is not authorized. When the method scope + // is configured, the client must provide an access token instead of an identity + // token for authorization purposes. AuthorizationScopes []*string `locationName:"authorizationScopes" type:"list"` // The method's authorization type. Valid values are NONE for open access, AWS_IAM @@ -19648,8 +19695,8 @@ type Method struct { // // The successful response returns a 200 OKstatus code and a payload similar to the following: // - // { "_links": { "curies": [ { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-{rel}.html", - // "name": "integration", "templated": true }, { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-response-{rel}.html", + // { "_links": { "curies": [ { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-{rel}.html", + // "name": "integration", "templated": true }, { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-response-{rel}.html", // "name": "integrationresponse", "templated": true } ], "self": { "href": "/restapis/uojnr9hd57/resources/0cjtch/methods/GET/integration" // }, "integration:delete": { "href": "/restapis/uojnr9hd57/resources/0cjtch/methods/GET/integration" // }, "integration:responses": { "href": "/restapis/uojnr9hd57/resources/0cjtch/methods/GET/integration/responses/200", @@ -19694,7 +19741,7 @@ type Method struct { // The successful response returns a 200 OK status code and a payload similar // to the following: // - // { "_links": { "curies": { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-response-{rel}.html", + // { "_links": { "curies": { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-response-{rel}.html", // "name": "methodresponse", "templated": true }, "self": { "href": "/restapis/uojnr9hd57/resources/0cjtch/methods/GET/responses/200", // "title": "200" }, "methodresponse:delete": { "href": "/restapis/uojnr9hd57/resources/0cjtch/methods/GET/responses/200" // }, "methodresponse:update": { "href": "/restapis/uojnr9hd57/resources/0cjtch/methods/GET/responses/200" @@ -19706,7 +19753,7 @@ type Method struct { // A human-friendly operation identifier for the method. For example, you can // assign the operationName of ListPets for the GET /pets method in PetStore - // (http://petstore-demo-endpoint.execute-api.com/petstore/pets) example. + // (https://petstore-demo-endpoint.execute-api.com/petstore/pets) example. 
OperationName *string `locationName:"operationName" type:"string"` // A key-value map specifying data schemas, represented by Model resources, @@ -19822,7 +19869,7 @@ func (s *Method) SetRequestValidatorId(v string) *Method { // // The successful response returns 200 OK status and a payload as follows: // -// { "_links": { "curies": { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-response-{rel}.html", +// { "_links": { "curies": { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-response-{rel}.html", // "name": "methodresponse", "templated": true }, "self": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET/responses/200", // "title": "200" }, "methodresponse:delete": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET/responses/200" // }, "methodresponse:update": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET/responses/200" @@ -19904,12 +19951,12 @@ type MethodSetting struct { // the value is a Boolean. CachingEnabled *bool `locationName:"cachingEnabled" type:"boolean"` - // Specifies whether data trace logging is enabled for this method, which effects + // Specifies whether data trace logging is enabled for this method, which affects // the log entries pushed to Amazon CloudWatch Logs. The PATCH path for this // setting is /{method_setting_key}/logging/dataTrace, and the value is a Boolean. DataTraceEnabled *bool `locationName:"dataTraceEnabled" type:"boolean"` - // Specifies the logging level for this method, which effects the log entries + // Specifies the logging level for this method, which affects the log entries // pushed to Amazon CloudWatch Logs. The PATCH path for this setting is /{method_setting_key}/logging/loglevel, // and the available levels are OFF, ERROR, and INFO. LoggingLevel *string `locationName:"loggingLevel" type:"string"` @@ -20054,7 +20101,7 @@ func (s *MethodSnapshot) SetAuthorizationType(v string) *MethodSnapshot { // A model is used for generating an API's SDK, validating the input request // body, and creating a skeletal mapping template. // -// Method, MethodResponse, Models and Mappings (http://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html) +// Method, MethodResponse, Models and Mappings (https://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html) type Model struct { _ struct{} `type:"structure"` @@ -20070,12 +20117,12 @@ type Model struct { // The name of the model. Must be an alphanumeric string. Name *string `locationName:"name" type:"string"` - // The schema for the model. For application/json models, this should be JSON-schema - // draft v4 (http://json-schema.org/documentation.html) model. Do not include - // "\*/" characters in the description of any properties because such "\*/" - // characters may be interpreted as the closing marker for comments in some - // languages, such as Java or JavaScript, causing the installation of your API's - // SDK generated by API Gateway to fail. + // The schema for the model. For application/json models, this should be JSON + // schema draft 4 (https://tools.ietf.org/html/draft-zyp-json-schema-04) model. + // Do not include "\*/" characters in the description of any properties because + // such "\*/" characters may be interpreted as the closing marker for comments + // in some languages, such as Java or JavaScript, causing the installation of + // your API's SDK generated by API Gateway to fail. 
Schema *string `locationName:"schema" type:"string"` } @@ -20152,7 +20199,7 @@ type PatchOperation struct { // The new target value of the update operation. It is applicable for the add // or replace operation. When using AWS CLI to update a property of a JSON value, // enclose the JSON object with a pair of single quotes in a Linux shell, e.g., - // '{"a": ...}'. In a Windows shell, see Using JSON for Parameters (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json). + // '{"a": ...}'. In a Windows shell, see Using JSON for Parameters (https://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json). Value *string `locationName:"value" type:"string"` } @@ -20203,8 +20250,8 @@ type PutGatewayResponseInput struct { // pairs. ResponseTemplates map[string]*string `locationName:"responseTemplates" type:"map"` - // The response type of the associated GatewayResponse. Valid values are ACCESS_DENIED - // + // [Required] The response type of the associated GatewayResponse. Valid values + // are ACCESS_DENIED // API_CONFIGURATION_ERROR // AUTHORIZER_FAILURE // AUTHORIZER_CONFIGURATION_ERROR @@ -20223,12 +20270,12 @@ type PutGatewayResponseInput struct { // RESOURCE_NOT_FOUND // THROTTLED // UNAUTHORIZED - // UNSUPPORTED_MEDIA_TYPES + // UNSUPPORTED_MEDIA_TYPE // // ResponseType is a required field ResponseType *string `location:"uri" locationName:"response_type" type:"string" required:"true" enum:"GatewayResponseType"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -20303,7 +20350,7 @@ type PutIntegrationInput struct { // Specifies a put integration input's cache namespace. CacheNamespace *string `locationName:"cacheNamespace" type:"string"` - // The (id (http://docs.aws.amazon.com/apigateway/api-reference/resource/vpc-link/#id)) + // The (id (https://docs.aws.amazon.com/apigateway/api-reference/resource/vpc-link/#id)) // of the VpcLink used for the integration when connectionType=VPC_LINK and // undefined, otherwise. ConnectionId *string `locationName:"connectionId" type:"string"` @@ -20331,7 +20378,7 @@ type PutIntegrationInput struct { // Specifies whether credentials are required for a put integration. Credentials *string `locationName:"credentials" type:"string"` - // Specifies a put integration request's HTTP method. + // [Required] Specifies a put integration request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` @@ -20371,12 +20418,12 @@ type PutIntegrationInput struct { // value. RequestTemplates map[string]*string `locationName:"requestTemplates" type:"map"` - // Specifies a put integration request's resource ID. + // [Required] Specifies a put integration request's resource ID. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -20385,7 +20432,7 @@ type PutIntegrationInput struct { // milliseconds or 29 seconds. 
TimeoutInMillis *int64 `locationName:"timeoutInMillis" type:"integer"` - // Specifies a put integration input's type. + // [Required] Specifies a put integration input's type. // // Type is a required field Type *string `locationName:"type" type:"string" required:"true" enum:"IntegrationType"` @@ -20408,7 +20455,7 @@ type PutIntegrationInput struct { // Alternatively, path can be used for an AWS service path-based API. The // ensuing service_api refers to the path to an AWS service resource, including // the region of the integrated AWS service, if applicable. For example, - // for integration with the S3 API of GetObject (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html), + // for integration with the S3 API of GetObject (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html), // the uri can be either arn:aws:apigateway:us-west-2:s3:action/GetObject&Bucket={bucket}&Key={key} // or arn:aws:apigateway:us-west-2:s3:path/{bucket}/{key} Uri *string `locationName:"uri" type:"string"` @@ -20559,12 +20606,12 @@ type PutIntegrationResponseInput struct { // from the integration response to the method response without modification. ContentHandling *string `locationName:"contentHandling" type:"string" enum:"ContentHandlingStrategy"` - // Specifies a put integration response request's HTTP method. + // [Required] Specifies a put integration response request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // Specifies a put integration response request's resource identifier. + // [Required] Specifies a put integration response request's resource identifier. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` @@ -20584,7 +20631,7 @@ type PutIntegrationResponseInput struct { // Specifies a put integration response's templates. ResponseTemplates map[string]*string `locationName:"responseTemplates" type:"map"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -20592,8 +20639,8 @@ type PutIntegrationResponseInput struct { // Specifies the selection pattern of a put integration response. SelectionPattern *string `locationName:"selectionPattern" type:"string"` - // Specifies the status code that is used to map the integration response to - // an existing MethodResponse. + // [Required] Specifies the status code that is used to map the integration + // response to an existing MethodResponse. // // StatusCode is a required field StatusCode *string `location:"uri" locationName:"status_code" type:"string" required:"true"` @@ -20687,34 +20734,35 @@ type PutMethodInput struct { ApiKeyRequired *bool `locationName:"apiKeyRequired" type:"boolean"` // A list of authorization scopes configured on the method. The scopes are used - // with a COGNITO_USER_POOL authorizer to authorize the method invocation. The - // authorization works by matching the method scopes against the scopes parsed - // from the access token in the incoming request. The method invocation is authorized - // if any method scopes matches a claimed scope in the access token. Otherwise, - // the invocation is not authorized. 
When the method scope is configured, the - // client must provide an access token instead of an identity token for authorization - // purposes. + // with a COGNITO_USER_POOLS authorizer to authorize the method invocation. + // The authorization works by matching the method scopes against the scopes + // parsed from the access token in the incoming request. The method invocation + // is authorized if any method scopes matches a claimed scope in the access + // token. Otherwise, the invocation is not authorized. When the method scope + // is configured, the client must provide an access token instead of an identity + // token for authorization purposes. AuthorizationScopes []*string `locationName:"authorizationScopes" type:"list"` - // The method's authorization type. Valid values are NONE for open access, AWS_IAM - // for using AWS IAM permissions, CUSTOM for using a custom authorizer, or COGNITO_USER_POOLS - // for using a Cognito user pool. + // [Required] The method's authorization type. Valid values are NONE for open + // access, AWS_IAM for using AWS IAM permissions, CUSTOM for using a custom + // authorizer, or COGNITO_USER_POOLS for using a Cognito user pool. // // AuthorizationType is a required field AuthorizationType *string `locationName:"authorizationType" type:"string" required:"true"` // Specifies the identifier of an Authorizer to use on this Method, if the type - // is CUSTOM. + // is CUSTOM or COGNITO_USER_POOLS. The authorizer identifier is generated by + // API Gateway when you created the authorizer. AuthorizerId *string `locationName:"authorizerId" type:"string"` - // Specifies the method request's HTTP method type. + // [Required] Specifies the method request's HTTP method type. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` // A human-friendly operation identifier for the method. For example, you can // assign the operationName of ListPets for the GET /pets method in PetStore - // (http://petstore-demo-endpoint.execute-api.com/petstore/pets) example. + // (https://petstore-demo-endpoint.execute-api.com/petstore/pets) example. OperationName *string `locationName:"operationName" type:"string"` // Specifies the Model resources used for the request's content type. Request @@ -20735,12 +20783,12 @@ type PutMethodInput struct { // The identifier of a RequestValidator for validating the method request. RequestValidatorId *string `locationName:"requestValidatorId" type:"string"` - // The Resource identifier for the new Method resource. + // [Required] The Resource identifier for the new Method resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -20848,12 +20896,12 @@ func (s *PutMethodInput) SetRestApiId(v string) *PutMethodInput { type PutMethodResponseInput struct { _ struct{} `type:"structure"` - // The HTTP verb of the Method resource. + // [Required] The HTTP verb of the Method resource. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` - // The Resource identifier for the Method resource. + // [Required] The Resource identifier for the Method resource. 
// // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` @@ -20876,12 +20924,12 @@ type PutMethodResponseInput struct { // where JSON-expression is a valid JSON expression without the $ prefix.) ResponseParameters map[string]*bool `locationName:"responseParameters" type:"map"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The method response's status code. + // [Required] The method response's status code. // // StatusCode is a required field StatusCode *string `location:"uri" locationName:"status_code" type:"string" required:"true"` @@ -20960,9 +21008,9 @@ func (s *PutMethodResponseInput) SetStatusCode(v string) *PutMethodResponseInput type PutRestApiInput struct { _ struct{} `type:"structure" payload:"Body"` - // The PUT request body containing external API definitions. Currently, only - // Swagger definition JSON files are supported. The maximum size of the API - // definition file is 2MB. + // [Required] The PUT request body containing external API definitions. Currently, + // only OpenAPI definition JSON/YAML files are supported. The maximum size of + // the API definition file is 2MB. // // Body is a required field Body []byte `locationName:"body" type:"blob" required:"true"` @@ -20978,10 +21026,10 @@ type PutRestApiInput struct { // Custom header parameters as part of the request. For example, to exclude // DocumentationParts from an imported API, set ignore=documentation as a parameters // value, as in the AWS CLI command of aws apigateway import-rest-api --parameters - // ignore=documentation --body 'file:///path/to/imported-api-body.json. + // ignore=documentation --body 'file:///path/to/imported-api-body.json'. Parameters map[string]*string `location:"querystring" locationName:"parameters" type:"map"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -21089,7 +21137,7 @@ func (s *QuotaSettings) SetPeriod(v string) *QuotaSettings { // Represents an API resource. 
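// Illustrative sketch only: creating a method with PutMethodInput as documented above,
// using a COGNITO_USER_POOLS authorizer with authorization scopes (so callers must send
// an access token rather than an identity token). All identifiers are placeholders.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
	svc := apigateway.New(session.Must(session.NewSession()))

	_, err := svc.PutMethod(&apigateway.PutMethodInput{
		RestApiId:         aws.String("abc123restapi"), // placeholder
		ResourceId:        aws.String("res456"),        // placeholder
		HttpMethod:        aws.String("GET"),
		AuthorizationType: aws.String("COGNITO_USER_POOLS"),
		// The authorizer id is generated by API Gateway when the authorizer is created.
		AuthorizerId: aws.String("auth789"), // placeholder
		// Method scopes are matched against the scopes claimed in the access token.
		AuthorizationScopes: []*string{aws.String("pets/read")},
		ApiKeyRequired:      aws.Bool(false),
		OperationName:       aws.String("ListPets"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("method created")
}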
// -// Create an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) +// Create an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) type Resource struct { _ struct{} `type:"structure"` @@ -21122,10 +21170,10 @@ type Resource struct { // SignedHeaders=content-type;host;x-amz-date, Signature={sig4_hash} // Response // - // { "_links": { "curies": [ { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-{rel}.html", - // "name": "integration", "templated": true }, { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-response-{rel}.html", - // "name": "integrationresponse", "templated": true }, { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-{rel}.html", - // "name": "method", "templated": true }, { "href": "http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-response-{rel}.html", + // { "_links": { "curies": [ { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-{rel}.html", + // "name": "integration", "templated": true }, { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-integration-response-{rel}.html", + // "name": "integrationresponse", "templated": true }, { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-{rel}.html", + // "name": "method", "templated": true }, { "href": "https://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-method-response-{rel}.html", // "name": "methodresponse", "templated": true } ], "self": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET", // "name": "GET", "title": "GET" }, "integration:put": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET/integration" // }, "method:delete": { "href": "/restapis/fugvjdxtri/resources/3kzxbg5sa2/methods/GET" @@ -21207,12 +21255,12 @@ func (s *Resource) SetResourceMethods(v map[string]*Method) *Resource { // Represents a REST API. // -// Create an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) +// Create an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html) type RestApi struct { _ struct{} `type:"structure"` - // The source of the API key for metring requests according to a usage plan. - // Valid values are HEADER to read the API key from the X-API-Key header of + // The source of the API key for metering requests according to a usage plan. + // Valid values are: HEADER to read the API key from the X-API-Key header of // a request. // AUTHORIZER to read the API key from the UsageIdentifierKey from a custom // authorizer. @@ -21223,7 +21271,7 @@ type RestApi struct { BinaryMediaTypes []*string `locationName:"binaryMediaTypes" type:"list"` // The timestamp when the API was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // The API's description. Description *string `locationName:"description" type:"string"` @@ -21236,16 +21284,20 @@ type RestApi struct { // API Gateway. Id *string `locationName:"id" type:"string"` - // A nullable integer used to enable (non-negative between 0 and 10485760 (10M) - // bytes, inclusive) or disable (null) compression on an API. 
When compression - // is enabled, compression or decompression are not applied on the payload if - // the payload size is smaller than this value. Setting it to zero allows compression - // for any payload size. + // A nullable integer that is used to enable compression (with non-negative + // between 0 and 10485760 (10M) bytes, inclusive) or disable compression (with + // a null value) on an API. When compression is enabled, compression or decompression + // is not applied on the payload if the payload size is smaller than this value. + // Setting it to zero allows compression for any payload size. MinimumCompressionSize *int64 `locationName:"minimumCompressionSize" type:"integer"` // The API's name. Name *string `locationName:"name" type:"string"` + // A stringified JSON policy document that applies to this RestApi regardless + // of the caller and Method + Policy *string `locationName:"policy" type:"string"` + // A version identifier for the API. Version *string `locationName:"version" type:"string"` @@ -21312,6 +21364,12 @@ func (s *RestApi) SetName(v string) *RestApi { return s } +// SetPolicy sets the Policy field's value. +func (s *RestApi) SetPolicy(v string) *RestApi { + s.Policy = &v + return s +} + // SetVersion sets the Version field's value. func (s *RestApi) SetVersion(v string) *RestApi { s.Version = &v @@ -21439,7 +21497,7 @@ func (s *SdkType) SetId(v string) *SdkType { // Represents a unique identifier for a version of a deployed RestApi that is // callable by users. // -// Deploy an API (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-deploy-api.html) +// Deploy an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-deploy-api.html) type Stage struct { _ struct{} `type:"structure"` @@ -21462,7 +21520,7 @@ type Stage struct { ClientCertificateId *string `locationName:"clientCertificateId" type:"string"` // The timestamp when the stage was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // The identifier of the Deployment that the stage points to. DeploymentId *string `locationName:"deploymentId" type:"string"` @@ -21474,7 +21532,7 @@ type Stage struct { DocumentationVersion *string `locationName:"documentationVersion" type:"string"` // The timestamp when the stage last updated. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // A map that defines the method settings for a Stage resource. Keys (designated // as /{method_setting_key below) are method paths defined as {resource_path}/{http_method} @@ -21486,13 +21544,19 @@ type Stage struct { // (URI) of a call to API Gateway. StageName *string `locationName:"stageName" type:"string"` - // A collection of Tags associated with a given resource. + // The collection of tags. Each tag element is associated with a given resource. Tags map[string]*string `locationName:"tags" type:"map"` + // Specifies whether active tracing with X-ray is enabled for the Stage. + TracingEnabled *bool `locationName:"tracingEnabled" type:"boolean"` + // A map that defines the stage variables for a Stage resource. Variable names // can have alphanumeric and underscore characters, and the values must match // [A-Za-z0-9-._~:/?#&=,]+. Variables map[string]*string `locationName:"variables" type:"map"` + + // The ARN of the WebAcl associated with the Stage. 
+ WebAclArn *string `locationName:"webAclArn" type:"string"` } // String returns the string representation @@ -21589,12 +21653,24 @@ func (s *Stage) SetTags(v map[string]*string) *Stage { return s } +// SetTracingEnabled sets the TracingEnabled field's value. +func (s *Stage) SetTracingEnabled(v bool) *Stage { + s.TracingEnabled = &v + return s +} + // SetVariables sets the Variables field's value. func (s *Stage) SetVariables(v map[string]*string) *Stage { s.Variables = v return s } +// SetWebAclArn sets the WebAclArn field's value. +func (s *Stage) SetWebAclArn(v string) *Stage { + s.WebAclArn = &v + return s +} + // A reference to a unique stage identified in the format {restApiId}/{stage}. type StageKey struct { _ struct{} `type:"structure"` @@ -21628,19 +21704,19 @@ func (s *StageKey) SetStageName(v string) *StageKey { return s } -// Adds or updates Tags on a gievn resource. +// Adds or updates a tag on a given resource. type TagResourceInput struct { _ struct{} `type:"structure"` - // [Required] The ARN of a resource that can be tagged. At present, Stage is - // the only taggable resource. + // [Required] The ARN of a resource that can be tagged. The resource ARN must + // be URL-encoded. At present, Stage is the only taggable resource. // // ResourceArn is a required field ResourceArn *string `location:"uri" locationName:"resource_arn" type:"string" required:"true"` - // [Required] Key/Value map of strings. Valid character set is [a-zA-Z+-=._:/]. - // Tag key can be up to 128 characters and must not start with "aws:". Tag value - // can be up to 256 characters. + // [Required] The key-value map of strings. The valid character set is [a-zA-Z+-=._:/]. + // The tag key can be up to 128 characters and must not start with aws:. The + // tag value can be up to 256 characters. // // Tags is a required field Tags map[string]*string `locationName:"tags" type:"map" required:"true"` @@ -21705,7 +21781,7 @@ type TestInvokeAuthorizerInput struct { // [Optional] A key-value map of additional context variables. AdditionalContext map[string]*string `locationName:"additionalContext" type:"map"` - // Specifies a test invoke authorizer request's Authorizer ID. + // [Required] Specifies a test invoke authorizer request's Authorizer ID. // // AuthorizerId is a required field AuthorizerId *string `location:"uri" locationName:"authorizer_id" type:"string" required:"true"` @@ -21718,11 +21794,16 @@ type TestInvokeAuthorizerInput struct { // should be specified. Headers map[string]*string `locationName:"headers" type:"map"` + // [Optional] The headers as a map from string to list of values to simulate + // an incoming invocation request. This is where the incoming authorization + // token, or identity source, may be specified. + MultiValueHeaders map[string][]*string `locationName:"multiValueHeaders" type:"map"` + // [Optional] The URI path, including query string, of the simulated invocation // request. Use this to specify path parameters and query string parameters. PathWithQueryString *string `locationName:"pathWithQueryString" type:"string"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -21782,6 +21863,12 @@ func (s *TestInvokeAuthorizerInput) SetHeaders(v map[string]*string) *TestInvoke return s } +// SetMultiValueHeaders sets the MultiValueHeaders field's value. 
+func (s *TestInvokeAuthorizerInput) SetMultiValueHeaders(v map[string][]*string) *TestInvokeAuthorizerInput { + s.MultiValueHeaders = v + return s +} + // SetPathWithQueryString sets the PathWithQueryString field's value. func (s *TestInvokeAuthorizerInput) SetPathWithQueryString(v string) *TestInvokeAuthorizerInput { s.PathWithQueryString = &v @@ -21806,7 +21893,7 @@ type TestInvokeAuthorizerOutput struct { Authorization map[string][]*string `locationName:"authorization" type:"map"` - // The open identity claims (http://openid.net/specs/openid-connect-core-1_0.html#StandardClaims), + // The open identity claims (https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims), // with any supported custom attributes, returned from the Cognito Your User // Pool configured for the API. Claims map[string]*string `locationName:"claims" type:"map"` @@ -21895,21 +21982,25 @@ type TestInvokeMethodInput struct { // A key-value map of headers to simulate an incoming invocation request. Headers map[string]*string `locationName:"headers" type:"map"` - // Specifies a test invoke method request's HTTP method. + // [Required] Specifies a test invoke method request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` + // The headers as a map from string to list of values to simulate an incoming + // invocation request. + MultiValueHeaders map[string][]*string `locationName:"multiValueHeaders" type:"map"` + // The URI path, including query string, of the simulated invocation request. // Use this to specify path parameters and query string parameters. PathWithQueryString *string `locationName:"pathWithQueryString" type:"string"` - // Specifies a test invoke method request's resource ID. + // [Required] Specifies a test invoke method request's resource ID. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -21972,6 +22063,12 @@ func (s *TestInvokeMethodInput) SetHttpMethod(v string) *TestInvokeMethodInput { return s } +// SetMultiValueHeaders sets the MultiValueHeaders field's value. +func (s *TestInvokeMethodInput) SetMultiValueHeaders(v map[string][]*string) *TestInvokeMethodInput { + s.MultiValueHeaders = v + return s +} + // SetPathWithQueryString sets the PathWithQueryString field's value. func (s *TestInvokeMethodInput) SetPathWithQueryString(v string) *TestInvokeMethodInput { s.PathWithQueryString = &v @@ -21998,7 +22095,7 @@ func (s *TestInvokeMethodInput) SetStageVariables(v map[string]*string) *TestInv // Represents the response of the test invoke request in the HTTP method. // -// Test API using the API Gateway console (http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-test-method.html#how-to-test-method-console) +// Test API using the API Gateway console (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-test-method.html#how-to-test-method-console) type TestInvokeMethodOutput struct { _ struct{} `type:"structure"` @@ -22014,6 +22111,9 @@ type TestInvokeMethodOutput struct { // The API Gateway execution log for the test invoke request. 
Log *string `locationName:"log" type:"string"` + // The headers of the HTTP response as a map from string to list of values. + MultiValueHeaders map[string][]*string `locationName:"multiValueHeaders" type:"map"` + // The HTTP status code. Status *int64 `locationName:"status" type:"integer"` } @@ -22052,6 +22152,12 @@ func (s *TestInvokeMethodOutput) SetLog(v string) *TestInvokeMethodOutput { return s } +// SetMultiValueHeaders sets the MultiValueHeaders field's value. +func (s *TestInvokeMethodOutput) SetMultiValueHeaders(v map[string][]*string) *TestInvokeMethodOutput { + s.MultiValueHeaders = v + return s +} + // SetStatus sets the Status field's value. func (s *TestInvokeMethodOutput) SetStatus(v int64) *TestInvokeMethodOutput { s.Status = &v @@ -22093,17 +22199,17 @@ func (s *ThrottleSettings) SetRateLimit(v float64) *ThrottleSettings { return s } -// Removes Tags from a given resource. +// Removes a tag from a given resource. type UntagResourceInput struct { _ struct{} `type:"structure"` - // [Required] The ARN of a resource that can be tagged. At present, Stage is - // the only taggable resource. + // [Required] The ARN of a resource that can be tagged. The resource ARN must + // be URL-encoded. At present, Stage is the only taggable resource. // // ResourceArn is a required field ResourceArn *string `location:"uri" locationName:"resource_arn" type:"string" required:"true"` - // The Tag keys to delete. + // [Required] The Tag keys to delete. // // TagKeys is a required field TagKeys []*string `location:"querystring" locationName:"tagKeys" type:"list" required:"true"` @@ -22190,7 +22296,7 @@ func (s *UpdateAccountInput) SetPatchOperations(v []*PatchOperation) *UpdateAcco type UpdateApiKeyInput struct { _ struct{} `type:"structure"` - // The identifier of the ApiKey resource to be updated. + // [Required] The identifier of the ApiKey resource to be updated. // // ApiKey is a required field ApiKey *string `location:"uri" locationName:"api_Key" type:"string" required:"true"` @@ -22239,7 +22345,7 @@ func (s *UpdateApiKeyInput) SetPatchOperations(v []*PatchOperation) *UpdateApiKe type UpdateAuthorizerInput struct { _ struct{} `type:"structure"` - // The identifier of the Authorizer resource. + // [Required] The identifier of the Authorizer resource. // // AuthorizerId is a required field AuthorizerId *string `location:"uri" locationName:"authorizer_id" type:"string" required:"true"` @@ -22248,7 +22354,7 @@ type UpdateAuthorizerInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -22302,12 +22408,12 @@ func (s *UpdateAuthorizerInput) SetRestApiId(v string) *UpdateAuthorizerInput { type UpdateBasePathMappingInput struct { _ struct{} `type:"structure"` - // The base path of the BasePathMapping resource to change. + // [Required] The base path of the BasePathMapping resource to change. // // BasePath is a required field BasePath *string `location:"uri" locationName:"base_path" type:"string" required:"true"` - // The domain name of the BasePathMapping resource to change. + // [Required] The domain name of the BasePathMapping resource to change. 
// // DomainName is a required field DomainName *string `location:"uri" locationName:"domain_name" type:"string" required:"true"` @@ -22365,7 +22471,7 @@ func (s *UpdateBasePathMappingInput) SetPatchOperations(v []*PatchOperation) *Up type UpdateClientCertificateInput struct { _ struct{} `type:"structure"` - // The identifier of the ClientCertificate resource to be updated. + // [Required] The identifier of the ClientCertificate resource to be updated. // // ClientCertificateId is a required field ClientCertificateId *string `location:"uri" locationName:"clientcertificate_id" type:"string" required:"true"` @@ -22424,7 +22530,7 @@ type UpdateDeploymentInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -22604,7 +22710,7 @@ func (s *UpdateDocumentationVersionInput) SetRestApiId(v string) *UpdateDocument type UpdateDomainNameInput struct { _ struct{} `type:"structure"` - // The name of the DomainName resource to be changed. + // [Required] The name of the DomainName resource to be changed. // // DomainName is a required field DomainName *string `location:"uri" locationName:"domain_name" type:"string" required:"true"` @@ -22657,8 +22763,8 @@ type UpdateGatewayResponseInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The response type of the associated GatewayResponse. Valid values are ACCESS_DENIED - // + // [Required] The response type of the associated GatewayResponse. Valid values + // are ACCESS_DENIED // API_CONFIGURATION_ERROR // AUTHORIZER_FAILURE // AUTHORIZER_CONFIGURATION_ERROR @@ -22677,12 +22783,12 @@ type UpdateGatewayResponseInput struct { // RESOURCE_NOT_FOUND // THROTTLED // UNAUTHORIZED - // UNSUPPORTED_MEDIA_TYPES + // UNSUPPORTED_MEDIA_TYPE // // ResponseType is a required field ResponseType *string `location:"uri" locationName:"response_type" type:"string" required:"true" enum:"GatewayResponseType"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -22736,7 +22842,7 @@ func (s *UpdateGatewayResponseInput) SetRestApiId(v string) *UpdateGatewayRespon // response parameters and mapping templates. 
// // For more information about valid gateway response types, see Gateway Response -// Types Supported by API Gateway (http://docs.aws.amazon.com/apigateway/latest/developerguide/supported-gateway-response-types.html)Example: +// Types Supported by API Gateway (https://docs.aws.amazon.com/apigateway/latest/developerguide/supported-gateway-response-types.html)Example: // Get a Gateway Response of a given response type // // Request @@ -22772,7 +22878,7 @@ func (s *UpdateGatewayResponseInput) SetRestApiId(v string) *UpdateGatewayRespon // \"statusCode\": \"'404'\"\n}" }, "responseType": "MISSING_AUTHENTICATION_TOKEN", // "statusCode": "404" } // -// Customize Gateway Responses (http://docs.aws.amazon.com/apigateway/latest/developerguide/customize-gateway-responses.html) +// Customize Gateway Responses (https://docs.aws.amazon.com/apigateway/latest/developerguide/customize-gateway-responses.html) type UpdateGatewayResponseOutput struct { _ struct{} `type:"structure"` @@ -22809,7 +22915,7 @@ type UpdateGatewayResponseOutput struct { // RESOURCE_NOT_FOUND // THROTTLED // UNAUTHORIZED - // UNSUPPORTED_MEDIA_TYPES + // UNSUPPORTED_MEDIA_TYPE ResponseType *string `locationName:"responseType" type:"string" enum:"GatewayResponseType"` // The HTTP status code for this GatewayResponse. @@ -22860,7 +22966,7 @@ func (s *UpdateGatewayResponseOutput) SetStatusCode(v string) *UpdateGatewayResp type UpdateIntegrationInput struct { _ struct{} `type:"structure"` - // Represents an update integration request's HTTP method. + // [Required] Represents an update integration request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` @@ -22869,12 +22975,12 @@ type UpdateIntegrationInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // Represents an update integration request's resource identifier. + // [Required] Represents an update integration request's resource identifier. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -22937,7 +23043,7 @@ func (s *UpdateIntegrationInput) SetRestApiId(v string) *UpdateIntegrationInput type UpdateIntegrationResponseInput struct { _ struct{} `type:"structure"` - // Specifies an update integration response request's HTTP method. + // [Required] Specifies an update integration response request's HTTP method. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` @@ -22946,17 +23052,17 @@ type UpdateIntegrationResponseInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // Specifies an update integration response request's resource identifier. + // [Required] Specifies an update integration response request's resource identifier. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. 
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // Specifies an update integration response request's status code. + // [Required] Specifies an update integration response request's status code. // // StatusCode is a required field StatusCode *string `location:"uri" locationName:"status_code" type:"string" required:"true"` @@ -23028,7 +23134,7 @@ func (s *UpdateIntegrationResponseInput) SetStatusCode(v string) *UpdateIntegrat type UpdateMethodInput struct { _ struct{} `type:"structure"` - // The HTTP verb of the Method resource. + // [Required] The HTTP verb of the Method resource. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` @@ -23037,12 +23143,12 @@ type UpdateMethodInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The Resource identifier for the Method resource. + // [Required] The Resource identifier for the Method resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -23105,7 +23211,7 @@ func (s *UpdateMethodInput) SetRestApiId(v string) *UpdateMethodInput { type UpdateMethodResponseInput struct { _ struct{} `type:"structure"` - // The HTTP verb of the Method resource. + // [Required] The HTTP verb of the Method resource. // // HttpMethod is a required field HttpMethod *string `location:"uri" locationName:"http_method" type:"string" required:"true"` @@ -23114,17 +23220,17 @@ type UpdateMethodResponseInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The Resource identifier for the MethodResponse resource. + // [Required] The Resource identifier for the MethodResponse resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The status code for the MethodResponse resource. + // [Required] The status code for the MethodResponse resource. // // StatusCode is a required field StatusCode *string `location:"uri" locationName:"status_code" type:"string" required:"true"` @@ -23196,7 +23302,7 @@ func (s *UpdateMethodResponseInput) SetStatusCode(v string) *UpdateMethodRespons type UpdateModelInput struct { _ struct{} `type:"structure"` - // The name of the model to update. + // [Required] The name of the model to update. // // ModelName is a required field ModelName *string `location:"uri" locationName:"model_name" type:"string" required:"true"` @@ -23205,7 +23311,7 @@ type UpdateModelInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. 
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -23268,7 +23374,7 @@ type UpdateRequestValidatorInput struct { // RequestValidatorId is a required field RequestValidatorId *string `location:"uri" locationName:"requestvalidator_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -23320,13 +23426,13 @@ func (s *UpdateRequestValidatorInput) SetRestApiId(v string) *UpdateRequestValid // A set of validation rules for incoming Method requests. // -// In Swagger, a RequestValidator of an API is defined by the x-amazon-apigateway-request-validators.requestValidator -// (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions.html#api-gateway-swagger-extensions-request-validators.requestValidator.html) +// In OpenAPI, a RequestValidator of an API is defined by the x-amazon-apigateway-request-validators.requestValidator +// (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions.html#api-gateway-swagger-extensions-request-validators.requestValidator.html) // object. It the referenced using the x-amazon-apigateway-request-validator -// (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions.html#api-gateway-swagger-extensions-request-validator) +// (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions.html#api-gateway-swagger-extensions-request-validator) // property. // -// Enable Basic Request Validation in API Gateway (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html) +// Enable Basic Request Validation in API Gateway (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html) type UpdateRequestValidatorOutput struct { _ struct{} `type:"structure"` @@ -23387,12 +23493,12 @@ type UpdateResourceInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The identifier of the Resource resource. + // [Required] The identifier of the Resource resource. // // ResourceId is a required field ResourceId *string `location:"uri" locationName:"resource_id" type:"string" required:"true"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -23450,7 +23556,7 @@ type UpdateRestApiInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. // // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` @@ -23499,12 +23605,12 @@ type UpdateStageInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The string identifier of the associated RestApi. + // [Required] The string identifier of the associated RestApi. 
// // RestApiId is a required field RestApiId *string `location:"uri" locationName:"restapi_id" type:"string" required:"true"` - // The name of the Stage resource to change information about. + // [Required] The name of the Stage resource to change information about. // // StageName is a required field StageName *string `location:"uri" locationName:"stage_name" type:"string" required:"true"` @@ -23559,8 +23665,8 @@ func (s *UpdateStageInput) SetStageName(v string) *UpdateStageInput { type UpdateUsageInput struct { _ struct{} `type:"structure"` - // The identifier of the API key associated with the usage plan in which a temporary - // extension is granted to the remaining quota. + // [Required] The identifier of the API key associated with the usage plan in + // which a temporary extension is granted to the remaining quota. // // KeyId is a required field KeyId *string `location:"uri" locationName:"keyId" type:"string" required:"true"` @@ -23569,7 +23675,7 @@ type UpdateUsageInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The Id of the usage plan associated with the usage data. + // [Required] The Id of the usage plan associated with the usage data. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -23627,7 +23733,7 @@ type UpdateUsagePlanInput struct { // the order specified in this list. PatchOperations []*PatchOperation `locationName:"patchOperations" type:"list"` - // The Id of the to-be-updated usage plan. + // [Required] The Id of the to-be-updated usage plan. // // UsagePlanId is a required field UsagePlanId *string `location:"uri" locationName:"usageplanId" type:"string" required:"true"` @@ -23802,7 +23908,7 @@ func (s *UpdateVpcLinkOutput) SetTargetArns(v []*string) *UpdateVpcLinkOutput { // Represents the usage data of a usage plan. // -// Create and Use Usage Plans (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html), Manage Usage in a Usage Plan (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-usage-plans-with-console.html#api-gateway-usage-plan-manage-usage) +// Create and Use Usage Plans (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html), Manage Usage in a Usage Plan (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-usage-plans-with-console.html#api-gateway-usage-plan-manage-usage) type Usage struct { _ struct{} `type:"structure"` @@ -23872,7 +23978,7 @@ func (s *Usage) SetUsagePlanId(v string) *Usage { // name of the specified API. You add plan customers by adding API keys to the // plan. // -// Create and Use Usage Plans (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html) +// Create and Use Usage Plans (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html) type UsagePlan struct { _ struct{} `type:"structure"` @@ -23956,7 +24062,7 @@ func (s *UsagePlan) SetThrottle(v *ThrottleSettings) *UsagePlan { // To associate an API stage with a selected API key in a usage plan, you must // create a UsagePlanKey resource to represent the selected ApiKey. 
// -// " Create and Use Usage Plans (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html) +// " Create and Use Usage Plans (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html) type UsagePlanKey struct { _ struct{} `type:"structure"` @@ -24020,10 +24126,10 @@ const ( ApiKeysFormatCsv = "csv" ) -// [Required] The authorizer type. Valid values are TOKEN for a Lambda function -// using a single authorization token submitted in a custom header, REQUEST -// for a Lambda function using incoming request parameters, and COGNITO_USER_POOLS -// for using an Amazon Cognito user pool. +// The authorizer type. Valid values are TOKEN for a Lambda function using a +// single authorization token submitted in a custom header, REQUEST for a Lambda +// function using incoming request parameters, and COGNITO_USER_POOLS for using +// an Amazon Cognito user pool. const ( // AuthorizerTypeToken is a AuthorizerType enum value AuthorizerTypeToken = "TOKEN" @@ -24134,15 +24240,19 @@ const ( DocumentationPartTypeResponseBody = "RESPONSE_BODY" ) -// The endpoint type. The valid value is EDGE for edge-optimized API setup, -// most suitable for mobile applications, REGIONAL for regional API endpoint -// setup, most suitable for calling from AWS Region +// The endpoint type. The valid values are EDGE for edge-optimized API setup, +// most suitable for mobile applications; REGIONAL for regional API endpoint +// setup, most suitable for calling from AWS Region; and PRIVATE for private +// APIs. const ( // EndpointTypeRegional is a EndpointType enum value EndpointTypeRegional = "REGIONAL" // EndpointTypeEdge is a EndpointType enum value EndpointTypeEdge = "EDGE" + + // EndpointTypePrivate is a EndpointType enum value + EndpointTypePrivate = "PRIVATE" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/apigateway/service.go b/vendor/github.com/aws/aws-sdk-go/service/apigateway/service.go index 690b6b2f333..8064d24fc02 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/apigateway/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/apigateway/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "apigateway" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "apigateway" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "API Gateway" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the APIGateway client with a session. 
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/api.go b/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/api.go index f7d739be7da..5df0ddb5831 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/api.go @@ -15,8 +15,8 @@ const opDeleteScalingPolicy = "DeleteScalingPolicy" // DeleteScalingPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteScalingPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -115,8 +115,8 @@ const opDeleteScheduledAction = "DeleteScheduledAction" // DeleteScheduledActionRequest generates a "aws/request.Request" representing the // client's request for the DeleteScheduledAction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -209,8 +209,8 @@ const opDeregisterScalableTarget = "DeregisterScalableTarget" // DeregisterScalableTargetRequest generates a "aws/request.Request" representing the // client's request for the DeregisterScalableTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -308,8 +308,8 @@ const opDescribeScalableTargets = "DescribeScalableTargets" // DescribeScalableTargetsRequest generates a "aws/request.Request" representing the // client's request for the DescribeScalableTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -460,8 +460,8 @@ const opDescribeScalingActivities = "DescribeScalingActivities" // DescribeScalingActivitiesRequest generates a "aws/request.Request" representing the // client's request for the DescribeScalingActivities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -614,8 +614,8 @@ const opDescribeScalingPolicies = "DescribeScalingPolicies" // DescribeScalingPoliciesRequest generates a "aws/request.Request" representing the // client's request for the DescribeScalingPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -774,8 +774,8 @@ const opDescribeScheduledActions = "DescribeScheduledActions" // DescribeScheduledActionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeScheduledActions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -870,8 +870,8 @@ const opPutScalingPolicy = "PutScalingPolicy" // PutScalingPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutScalingPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -988,8 +988,8 @@ const opPutScheduledAction = "PutScheduledAction" // PutScheduledActionRequest generates a "aws/request.Request" representing the // client's request for the PutScheduledAction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1100,8 +1100,8 @@ const opRegisterScalableTarget = "RegisterScalableTarget" // RegisterScalableTargetRequest generates a "aws/request.Request" representing the // client's request for the RegisterScalableTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1362,6 +1362,11 @@ type DeleteScalingPolicyInput struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. 
This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -1397,11 +1402,15 @@ type DeleteScalingPolicyInput struct { // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. + // // ScalableDimension is a required field ScalableDimension *string `type:"string" required:"true" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -1514,6 +1523,11 @@ type DeleteScheduledActionInput struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -1548,6 +1562,9 @@ type DeleteScheduledActionInput struct { // // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. + // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. ScalableDimension *string `type:"string" enum:"ScalableDimension"` // The name of the scheduled action. @@ -1555,8 +1572,9 @@ type DeleteScheduledActionInput struct { // ScheduledActionName is a required field ScheduledActionName *string `min:"1" type:"string" required:"true"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -1666,6 +1684,11 @@ type DeregisterScalableTargetInput struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. 
This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -1701,11 +1724,15 @@ type DeregisterScalableTargetInput struct { // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. + // // ScalableDimension is a required field ScalableDimension *string `type:"string" required:"true" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -1818,6 +1845,11 @@ type DescribeScalableTargetsInput struct { // // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. + // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. ResourceIds []*string `type:"list"` // The scalable dimension associated with the scalable target. This string consists @@ -1852,10 +1884,14 @@ type DescribeScalableTargetsInput struct { // // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. + // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. ScalableDimension *string `type:"string" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -1990,6 +2026,11 @@ type DescribeScalingActivitiesInput struct { // // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. + // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. ResourceId *string `min:"1" type:"string"` // The scalable dimension. 
This string consists of the service namespace, resource @@ -2024,10 +2065,14 @@ type DescribeScalingActivitiesInput struct { // // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. + // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. ScalableDimension *string `type:"string" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -2168,6 +2213,11 @@ type DescribeScalingPoliciesInput struct { // // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. + // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. ResourceId *string `min:"1" type:"string"` // The scalable dimension. This string consists of the service namespace, resource @@ -2202,10 +2252,14 @@ type DescribeScalingPoliciesInput struct { // // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. + // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. ScalableDimension *string `type:"string" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -2349,6 +2403,11 @@ type DescribeScheduledActionsInput struct { // // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. + // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. ResourceId *string `min:"1" type:"string"` // The scalable dimension. This string consists of the service namespace, resource @@ -2383,13 +2442,17 @@ type DescribeScheduledActionsInput struct { // // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. 
+ // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. ScalableDimension *string `type:"string" enum:"ScalableDimension"` // The names of the scheduled actions to describe. ScheduledActionNames []*string `type:"list"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -2649,6 +2712,11 @@ type PutScalingPolicyInput struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -2684,11 +2752,15 @@ type PutScalingPolicyInput struct { // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. + // // ScalableDimension is a required field ScalableDimension *string `type:"string" required:"true" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -2835,7 +2907,7 @@ type PutScheduledActionInput struct { _ struct{} `type:"structure"` // The date and time for the scheduled action to end. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The identifier of the resource associated with the scheduled action. This // string consists of the resource type and unique identifier. @@ -2864,6 +2936,11 @@ type PutScheduledActionInput struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. 
+ // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -2899,6 +2976,9 @@ type PutScheduledActionInput struct { // // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. + // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. ScalableDimension *string `type:"string" enum:"ScalableDimension"` // The new minimum and maximum capacity. You can set both values or just one. @@ -2921,7 +3001,8 @@ type PutScheduledActionInput struct { // For rate expressions, value is a positive integer and unit is minute | minutes // | hour | hours | day | days. // - // For more information about cron expressions, see Cron (https://en.wikipedia.org/wiki/Cron). + // For more information about cron expressions, see Cron Expressions (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html#CronExpressions) + // in the Amazon CloudWatch Events User Guide. Schedule *string `min:"1" type:"string"` // The name of the scheduled action. @@ -2929,15 +3010,16 @@ type PutScheduledActionInput struct { // ScheduledActionName is a required field ScheduledActionName *string `min:"1" type:"string" required:"true"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field ServiceNamespace *string `type:"string" required:"true" enum:"ServiceNamespace"` // The date and time for the scheduled action to start. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -3078,12 +3160,17 @@ type RegisterScalableTargetInput struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` // Application Auto Scaling creates a service-linked role that grants it permissions // to modify the scalable target on your behalf. For more information, see Service-Linked - // Roles for Application Auto Scaling (http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/application-autoscaling-service-linked-roles.html). + // Roles for Application Auto Scaling (http://docs.aws.amazon.com/autoscaling/application/userguide/application-autoscaling-service-linked-roles.html). 
// // For resources that are not supported using a service-linked role, this parameter // is required and must specify the ARN of an IAM role that allows Application @@ -3122,11 +3209,15 @@ type RegisterScalableTargetInput struct { // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. + // // ScalableDimension is a required field ScalableDimension *string `type:"string" required:"true" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -3225,7 +3316,7 @@ type ScalableTarget struct { // The Unix timestamp for when the scalable target was created. // // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + CreationTime *time.Time `type:"timestamp" required:"true"` // The maximum value to scale to in response to a scale out event. // @@ -3264,6 +3355,11 @@ type ScalableTarget struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -3305,11 +3401,15 @@ type ScalableTarget struct { // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. + // // ScalableDimension is a required field ScalableDimension *string `type:"string" required:"true" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -3424,7 +3524,7 @@ type ScalingActivity struct { Details *string `type:"string"` // The Unix timestamp for when the scaling activity ended. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The identifier of the resource associated with the scaling activity. 
This // string consists of the resource type and unique identifier. @@ -3453,6 +3553,11 @@ type ScalingActivity struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -3488,11 +3593,15 @@ type ScalingActivity struct { // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. + // // ScalableDimension is a required field ScalableDimension *string `type:"string" required:"true" enum:"ScalableDimension"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -3501,7 +3610,7 @@ type ScalingActivity struct { // The Unix timestamp for when the scaling activity began. // // StartTime is a required field - StartTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + StartTime *time.Time `type:"timestamp" required:"true"` // Indicates the status of the scaling activity. // @@ -3598,7 +3707,7 @@ type ScalingPolicy struct { // The Unix timestamp for when the scaling policy was created. // // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + CreationTime *time.Time `type:"timestamp" required:"true"` // The Amazon Resource Name (ARN) of the scaling policy. // @@ -3642,6 +3751,11 @@ type ScalingPolicy struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -3677,11 +3791,15 @@ type ScalingPolicy struct { // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. + // // ScalableDimension is a required field ScalableDimension *string `type:"string" required:"true" enum:"ScalableDimension"` - // The namespace of the AWS service. 
For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. // // ServiceNamespace is a required field @@ -3771,10 +3889,10 @@ type ScheduledAction struct { // The date and time that the scheduled action was created. // // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + CreationTime *time.Time `type:"timestamp" required:"true"` // The date and time that the action is scheduled to end. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The identifier of the resource associated with the scaling policy. This string // consists of the resource type and unique identifier. @@ -3803,6 +3921,11 @@ type ScheduledAction struct { // * Amazon SageMaker endpoint variants - The resource type is variant and // the unique identifier is the resource ID. Example: endpoint/my-end-point/variant/KMeansClustering. // + // * Custom resources are not supported with a resource type. This parameter + // must specify the OutputValue from the CloudFormation template stack used + // to access the resources. The unique identifier is defined by the service + // provider. + // // ResourceId is a required field ResourceId *string `min:"1" type:"string" required:"true"` @@ -3837,6 +3960,9 @@ type ScheduledAction struct { // // * sagemaker:variant:DesiredInstanceCount - The number of EC2 instances // for an Amazon SageMaker model endpoint variant. + // + // * custom-resource:ResourceType:Property - The scalable dimension for a + // custom resource provided by your own application or service. ScalableDimension *string `type:"string" enum:"ScalableDimension"` // The new minimum and maximum capacity. You can set both values or just one. @@ -3859,7 +3985,8 @@ type ScheduledAction struct { // For rate expressions, value is a positive integer and unit is minute | minutes // | hour | hours | day | days. // - // For more information about cron expressions, see Cron (https://en.wikipedia.org/wiki/Cron). + // For more information about cron expressions, see Cron Expressions (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html#CronExpressions) + // in the Amazon CloudWatch Events User Guide. // // Schedule is a required field Schedule *string `min:"1" type:"string" required:"true"` @@ -3874,15 +4001,16 @@ type ScheduledAction struct { // ScheduledActionName is a required field ScheduledActionName *string `min:"1" type:"string" required:"true"` - // The namespace of the AWS service. For more information, see AWS Service Namespaces - // (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) + // The namespace of the AWS service that provides the resource or custom-resource + // for a resource provided by your own application or service. For more information, + // see AWS Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) // in the Amazon Web Services General Reference. 
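The `RegisterScalableTargetInput` hunk earlier in this file gains the same `custom-resource` namespace and scalable dimension. A sketch of registering a custom resource as a scalable target under those constants; the resource ID, capacities, and IAM role ARN are placeholders.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/applicationautoscaling"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := applicationautoscaling.New(sess)

	// The ResourceId for a custom resource is the OutputValue of the
	// CloudFormation stack that fronts it (placeholder value below).
	_, err := svc.RegisterScalableTarget(&applicationautoscaling.RegisterScalableTargetInput{
		ServiceNamespace:  aws.String(applicationautoscaling.ServiceNamespaceCustomResource),
		ScalableDimension: aws.String(applicationautoscaling.ScalableDimensionCustomResourceResourceTypeProperty),
		ResourceId:        aws.String("https://example.execute-api.us-west-2.amazonaws.com/prod/scalableTargetDimensions/1-23456789"),
		MinCapacity:       aws.Int64(1),
		MaxCapacity:       aws.Int64(10),
		// Resources without a service-linked role need an explicit IAM role;
		// the ARN here is a placeholder.
		RoleARN: aws.String("arn:aws:iam::123456789012:role/CustomResourceScalingRole"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```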
// // ServiceNamespace is a required field ServiceNamespace *string `type:"string" required:"true" enum:"ServiceNamespace"` // The date and time that the action is scheduled to begin. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -4381,6 +4509,9 @@ const ( // ScalableDimensionSagemakerVariantDesiredInstanceCount is a ScalableDimension enum value ScalableDimensionSagemakerVariantDesiredInstanceCount = "sagemaker:variant:DesiredInstanceCount" + + // ScalableDimensionCustomResourceResourceTypeProperty is a ScalableDimension enum value + ScalableDimensionCustomResourceResourceTypeProperty = "custom-resource:ResourceType:Property" ) const ( @@ -4424,4 +4555,7 @@ const ( // ServiceNamespaceSagemaker is a ServiceNamespace enum value ServiceNamespaceSagemaker = "sagemaker" + + // ServiceNamespaceCustomResource is a ServiceNamespace enum value + ServiceNamespaceCustomResource = "custom-resource" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/doc.go b/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/doc.go index 67430038166..9d1c051a255 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/doc.go @@ -4,10 +4,10 @@ // requests to Application Auto Scaling. // // With Application Auto Scaling, you can configure automatic scaling for your -// scalable AWS resources. You can use Application Auto Scaling to accomplish -// the following tasks: +// scalable resources. You can use Application Auto Scaling to accomplish the +// following tasks: // -// * Define scaling policies to automatically scale your AWS resources +// * Define scaling policies to automatically scale your AWS or custom resources // // * Scale your resources in response to CloudWatch alarms // @@ -15,7 +15,7 @@ // // * View the history of your scaling events // -// Application Auto Scaling can scale the following AWS resources: +// Application Auto Scaling can scale the following resources: // // * Amazon ECS services. For more information, see Service Auto Scaling // (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html) @@ -41,16 +41,19 @@ // * Amazon Aurora Replicas. For more information, see Using Amazon Aurora // Auto Scaling with Aurora Replicas (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Integrating.AutoScaling.html). // -// * Amazon SageMaker endpoints. For more information, see Automatically +// * Amazon SageMaker endpoint variants. For more information, see Automatically // Scaling Amazon SageMaker Models (http://docs.aws.amazon.com/sagemaker/latest/dg/endpoint-auto-scaling.html). // +// * Custom resources provided by your own applications or services. More +// information is available in our GitHub repository (https://github.com/aws/aws-auto-scaling-custom-resource). +// +// +// To learn more about Application Auto Scaling, see the Application Auto Scaling +// User Guide (http://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html). +// // To configure automatic scaling for multiple resources across multiple services, // use AWS Auto Scaling to create a scaling plan for your application. For more -// information, see AWS Auto Scaling (http://aws.amazon.com/autoscaling). 
-// -// For a list of supported regions, see AWS Regions and Endpoints: Application -// Auto Scaling (http://docs.aws.amazon.com/general/latest/gr/rande.html#as-app_region) -// in the AWS General Reference. +// information, see the AWS Auto Scaling User Guide (http://docs.aws.amazon.com/autoscaling/plans/userguide/what-is-aws-auto-scaling.html). // // See https://docs.aws.amazon.com/goto/WebAPI/application-autoscaling-2016-02-06 for more information on this service. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/service.go b/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/service.go index 56103ee27ec..902d81d426e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "autoscaling" // Service endpoint prefix API calls made to. - EndpointsID = "application-autoscaling" // Service ID for Regions and Endpoints metadata. + ServiceName = "autoscaling" // Name of service. + EndpointsID = "application-autoscaling" // ID to lookup a service endpoint with. + ServiceID = "Application Auto Scaling" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ApplicationAutoScaling client with a session. @@ -45,19 +46,20 @@ const ( // svc := applicationautoscaling.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *ApplicationAutoScaling { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "application-autoscaling" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *ApplicationAutoScaling { - if len(signingName) == 0 { - signingName = "application-autoscaling" - } svc := &ApplicationAutoScaling{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/appsync/api.go b/vendor/github.com/aws/aws-sdk-go/service/appsync/api.go index ce553365c91..371d471eba9 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/appsync/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/appsync/api.go @@ -12,8 +12,8 @@ const opCreateApiKey = "CreateApiKey" // CreateApiKeyRequest generates a "aws/request.Request" representing the // client's request for the CreateApiKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -87,7 +87,8 @@ func (c *AppSync) CreateApiKeyRequest(input *CreateApiKeyInput) (req *request.Re // The API key exceeded a limit. Try your request again. 
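The `service.go` hunk above moves the `application-autoscaling` signing-name default from `newClient` into `New`, gated on `SigningNameDerived`, and adds a `ServiceID` constant. Client construction is unchanged from the caller's point of view; a minimal sketch, assuming credentials and region come from the usual SDK resolution chain.

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/applicationautoscaling"
)

func main() {
	// Credentials come from the standard chain (env vars, shared config,
	// instance profile); the region is only an example.
	sess := session.Must(session.NewSession())
	svc := applicationautoscaling.New(sess, aws.NewConfig().WithRegion("us-west-2"))

	_ = svc // requests are signed with the "application-autoscaling" signing name
}
```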
// // * ErrCodeApiKeyValidityOutOfBoundsException "ApiKeyValidityOutOfBoundsException" -// The API key expiration must be set to a value between 1 and 365 days. +// The API key expiration must be set to a value between 1 and 365 days from +// creation (for CreateApiKey) or from update (for UpdateApiKey). // // See also, https://docs.aws.amazon.com/goto/WebAPI/appsync-2017-07-25/CreateApiKey func (c *AppSync) CreateApiKey(input *CreateApiKeyInput) (*CreateApiKeyOutput, error) { @@ -115,8 +116,8 @@ const opCreateDataSource = "CreateDataSource" // CreateDataSourceRequest generates a "aws/request.Request" representing the // client's request for the CreateDataSource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -209,8 +210,8 @@ const opCreateGraphqlApi = "CreateGraphqlApi" // CreateGraphqlApiRequest generates a "aws/request.Request" representing the // client's request for the CreateGraphqlApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -276,9 +277,6 @@ func (c *AppSync) CreateGraphqlApiRequest(input *CreateGraphqlApiInput) (req *re // * ErrCodeInternalFailureException "InternalFailureException" // An internal AWS AppSync error occurred. Try your request again. // -// * ErrCodeLimitExceededException "LimitExceededException" -// The request exceeded a limit. Try your request again. -// // * ErrCodeApiLimitExceededException "ApiLimitExceededException" // The GraphQL API exceeded a limit. Try your request again. // @@ -308,8 +306,8 @@ const opCreateResolver = "CreateResolver" // CreateResolverRequest generates a "aws/request.Request" representing the // client's request for the CreateResolver operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -401,8 +399,8 @@ const opCreateType = "CreateType" // CreateTypeRequest generates a "aws/request.Request" representing the // client's request for the CreateType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -495,8 +493,8 @@ const opDeleteApiKey = "DeleteApiKey" // DeleteApiKeyRequest generates a "aws/request.Request" representing the // client's request for the DeleteApiKey operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -585,8 +583,8 @@ const opDeleteDataSource = "DeleteDataSource" // DeleteDataSourceRequest generates a "aws/request.Request" representing the // client's request for the DeleteDataSource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -679,8 +677,8 @@ const opDeleteGraphqlApi = "DeleteGraphqlApi" // DeleteGraphqlApiRequest generates a "aws/request.Request" representing the // client's request for the DeleteGraphqlApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -773,8 +771,8 @@ const opDeleteResolver = "DeleteResolver" // DeleteResolverRequest generates a "aws/request.Request" representing the // client's request for the DeleteResolver operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -863,8 +861,8 @@ const opDeleteType = "DeleteType" // DeleteTypeRequest generates a "aws/request.Request" representing the // client's request for the DeleteType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -957,8 +955,8 @@ const opGetDataSource = "GetDataSource" // GetDataSourceRequest generates a "aws/request.Request" representing the // client's request for the GetDataSource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1051,8 +1049,8 @@ const opGetGraphqlApi = "GetGraphqlApi" // GetGraphqlApiRequest generates a "aws/request.Request" representing the // client's request for the GetGraphqlApi operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1141,8 +1139,8 @@ const opGetIntrospectionSchema = "GetIntrospectionSchema" // GetIntrospectionSchemaRequest generates a "aws/request.Request" representing the // client's request for the GetIntrospectionSchema operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1230,8 +1228,8 @@ const opGetResolver = "GetResolver" // GetResolverRequest generates a "aws/request.Request" representing the // client's request for the GetResolver operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1317,8 +1315,8 @@ const opGetSchemaCreationStatus = "GetSchemaCreationStatus" // GetSchemaCreationStatusRequest generates a "aws/request.Request" representing the // client's request for the GetSchemaCreationStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1407,8 +1405,8 @@ const opGetType = "GetType" // GetTypeRequest generates a "aws/request.Request" representing the // client's request for the GetType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1501,8 +1499,8 @@ const opListApiKeys = "ListApiKeys" // ListApiKeysRequest generates a "aws/request.Request" representing the // client's request for the ListApiKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1543,6 +1541,11 @@ func (c *AppSync) ListApiKeysRequest(input *ListApiKeysInput) (req *request.Requ // // Lists the API keys for a given API. // +// API keys are deleted automatically sometime after they expire. 
However, they +// may still be included in the response until they have actually been deleted. +// You can safely call DeleteApiKey to manually delete a key before it's automatically +// deleted. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1591,8 +1594,8 @@ const opListDataSources = "ListDataSources" // ListDataSourcesRequest generates a "aws/request.Request" representing the // client's request for the ListDataSources operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1681,8 +1684,8 @@ const opListGraphqlApis = "ListGraphqlApis" // ListGraphqlApisRequest generates a "aws/request.Request" representing the // client's request for the ListGraphqlApis operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1767,8 +1770,8 @@ const opListResolvers = "ListResolvers" // ListResolversRequest generates a "aws/request.Request" representing the // client's request for the ListResolvers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1857,8 +1860,8 @@ const opListTypes = "ListTypes" // ListTypesRequest generates a "aws/request.Request" representing the // client's request for the ListTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1951,8 +1954,8 @@ const opStartSchemaCreation = "StartSchemaCreation" // StartSchemaCreationRequest generates a "aws/request.Request" representing the // client's request for the StartSchemaCreation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2047,8 +2050,8 @@ const opUpdateApiKey = "UpdateApiKey" // UpdateApiKeyRequest generates a "aws/request.Request" representing the // client's request for the UpdateApiKey operation. 
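The `ListApiKeys` hunk above notes that expired keys may still appear in listings until AppSync garbage-collects them, and that calling `DeleteApiKey` on them is safe. A sketch that lists keys for an API and removes any that have already expired; the API ID is a placeholder, and `Expires` is compared in epoch seconds (the da2 behavior described in the `ApiKey` hunk later in this file).

```go
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/appsync"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := appsync.New(sess)

	apiID := aws.String("abcdefghijklmnopqrstuvwxyz") // placeholder GraphQL API ID

	out, err := svc.ListApiKeys(&appsync.ListApiKeysInput{ApiId: apiID})
	if err != nil {
		log.Fatal(err)
	}

	now := time.Now().Unix()
	for _, key := range out.ApiKeys {
		// Expired keys can linger in the response until the service deletes
		// them, so clean them up explicitly.
		if aws.Int64Value(key.Expires) < now {
			if _, err := svc.DeleteApiKey(&appsync.DeleteApiKeyInput{
				ApiId: apiID,
				Id:    key.Id,
			}); err != nil {
				log.Printf("deleting key %s: %v", aws.StringValue(key.Id), err)
			}
		}
	}
}
```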
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2115,7 +2118,8 @@ func (c *AppSync) UpdateApiKeyRequest(input *UpdateApiKeyInput) (req *request.Re // An internal AWS AppSync error occurred. Try your request again. // // * ErrCodeApiKeyValidityOutOfBoundsException "ApiKeyValidityOutOfBoundsException" -// The API key expiration must be set to a value between 1 and 365 days. +// The API key expiration must be set to a value between 1 and 365 days from +// creation (for CreateApiKey) or from update (for UpdateApiKey). // // See also, https://docs.aws.amazon.com/goto/WebAPI/appsync-2017-07-25/UpdateApiKey func (c *AppSync) UpdateApiKey(input *UpdateApiKeyInput) (*UpdateApiKeyOutput, error) { @@ -2143,8 +2147,8 @@ const opUpdateDataSource = "UpdateDataSource" // UpdateDataSourceRequest generates a "aws/request.Request" representing the // client's request for the UpdateDataSource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2237,8 +2241,8 @@ const opUpdateGraphqlApi = "UpdateGraphqlApi" // UpdateGraphqlApiRequest generates a "aws/request.Request" representing the // client's request for the UpdateGraphqlApi operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2331,8 +2335,8 @@ const opUpdateResolver = "UpdateResolver" // UpdateResolverRequest generates a "aws/request.Request" representing the // client's request for the UpdateResolver operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2421,8 +2425,8 @@ const opUpdateType = "UpdateType" // UpdateTypeRequest generates a "aws/request.Request" representing the // client's request for the UpdateType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2512,6 +2516,43 @@ func (c *AppSync) UpdateTypeWithContext(ctx aws.Context, input *UpdateTypeInput, } // Describes an API key. 
+// +// Customers invoke AWS AppSync GraphQL APIs with API keys as an identity mechanism. +// There are two key versions: +// +// da1: This version was introduced at launch in November 2017. These keys always +// expire after 7 days. Key expiration is managed by DynamoDB TTL. The keys +// will cease to be valid after Feb 21, 2018 and should not be used after that +// date. +// +// * ListApiKeys returns the expiration time in milliseconds. +// +// * CreateApiKey returns the expiration time in milliseconds. +// +// * UpdateApiKey is not available for this key version. +// +// * DeleteApiKey deletes the item from the table. +// +// * Expiration is stored in DynamoDB as milliseconds. This results in a +// bug where keys are not automatically deleted because DynamoDB expects +// the TTL to be stored in seconds. As a one-time action, we will delete +// these keys from the table after Feb 21, 2018. +// +// da2: This version was introduced in February 2018 when AppSync added support +// to extend key expiration. +// +// * ListApiKeys returns the expiration time in seconds. +// +// * CreateApiKey returns the expiration time in seconds and accepts a user-provided +// expiration time in seconds. +// +// * UpdateApiKey returns the expiration time in seconds and accepts a user-provided +// expiration time in seconds. Key expiration can only be updated while the +// key has not expired. +// +// * DeleteApiKey deletes the item from the table. +// +// * Expiration is stored in DynamoDB as seconds. type ApiKey struct { _ struct{} `type:"structure"` @@ -2565,9 +2606,10 @@ type CreateApiKeyInput struct { // A description of the purpose of the API key. Description *string `locationName:"description" type:"string"` - // The time after which the API key expires. The date is represented as seconds - // since the epoch, rounded down to the nearest hour. The default value for - // this parameter is 7 days from creation time. + // The time from creation time after which the API key expires. The date is + // represented as seconds since the epoch, rounded down to the nearest hour. + // The default value for this parameter is 7 days from creation time. For more + // information, see . Expires *int64 `locationName:"expires" type:"long"` } @@ -2652,6 +2694,9 @@ type CreateDataSourceInput struct { // Amazon Elasticsearch settings. ElasticsearchConfig *ElasticsearchDataSourceConfig `locationName:"elasticsearchConfig" type:"structure"` + // Http endpoint settings. + HttpConfig *HttpDataSourceConfig `locationName:"httpConfig" type:"structure"` + // AWS Lambda settings. LambdaConfig *LambdaDataSourceConfig `locationName:"lambdaConfig" type:"structure"` @@ -2738,6 +2783,12 @@ func (s *CreateDataSourceInput) SetElasticsearchConfig(v *ElasticsearchDataSourc return s } +// SetHttpConfig sets the HttpConfig field's value. +func (s *CreateDataSourceInput) SetHttpConfig(v *HttpDataSourceConfig) *CreateDataSourceInput { + s.HttpConfig = v + return s +} + // SetLambdaConfig sets the LambdaConfig field's value. func (s *CreateDataSourceInput) SetLambdaConfig(v *LambdaDataSourceConfig) *CreateDataSourceInput { s.LambdaConfig = v @@ -2793,11 +2844,17 @@ type CreateGraphqlApiInput struct { // AuthenticationType is a required field AuthenticationType *string `locationName:"authenticationType" type:"string" required:"true" enum:"AuthenticationType"` + // The Amazon CloudWatch logs configuration. + LogConfig *LogConfig `locationName:"logConfig" type:"structure"` + // A user-supplied name for the GraphqlApi. 
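The `ApiKey` hunk above documents the da1/da2 key versions, and the `CreateApiKeyInput` hunk clarifies that `Expires` counts from creation time, in epoch seconds rounded down to the nearest hour. A sketch of creating a key that expires in roughly 30 days; the API ID and description are placeholders.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/appsync"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := appsync.New(sess)

	// Expires is epoch seconds, counted from creation and rounded down to the
	// nearest hour by the service; it must fall between 1 and 365 days out.
	expires := time.Now().Add(30 * 24 * time.Hour).Unix()

	out, err := svc.CreateApiKey(&appsync.CreateApiKeyInput{
		ApiId:       aws.String("abcdefghijklmnopqrstuvwxyz"), // placeholder GraphQL API ID
		Description: aws.String("30-day key for the mobile client"),
		Expires:     aws.Int64(expires),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("key id:", aws.StringValue(out.ApiKey.Id))
}
```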
// // Name is a required field Name *string `locationName:"name" type:"string" required:"true"` + // The Open Id Connect configuration configuration. + OpenIDConnectConfig *OpenIDConnectConfig `locationName:"openIDConnectConfig" type:"structure"` + // The Amazon Cognito User Pool configuration. UserPoolConfig *UserPoolConfig `locationName:"userPoolConfig" type:"structure"` } @@ -2821,6 +2878,16 @@ func (s *CreateGraphqlApiInput) Validate() error { if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } + if s.LogConfig != nil { + if err := s.LogConfig.Validate(); err != nil { + invalidParams.AddNested("LogConfig", err.(request.ErrInvalidParams)) + } + } + if s.OpenIDConnectConfig != nil { + if err := s.OpenIDConnectConfig.Validate(); err != nil { + invalidParams.AddNested("OpenIDConnectConfig", err.(request.ErrInvalidParams)) + } + } if s.UserPoolConfig != nil { if err := s.UserPoolConfig.Validate(); err != nil { invalidParams.AddNested("UserPoolConfig", err.(request.ErrInvalidParams)) @@ -2839,12 +2906,24 @@ func (s *CreateGraphqlApiInput) SetAuthenticationType(v string) *CreateGraphqlAp return s } +// SetLogConfig sets the LogConfig field's value. +func (s *CreateGraphqlApiInput) SetLogConfig(v *LogConfig) *CreateGraphqlApiInput { + s.LogConfig = v + return s +} + // SetName sets the Name field's value. func (s *CreateGraphqlApiInput) SetName(v string) *CreateGraphqlApiInput { s.Name = &v return s } +// SetOpenIDConnectConfig sets the OpenIDConnectConfig field's value. +func (s *CreateGraphqlApiInput) SetOpenIDConnectConfig(v *OpenIDConnectConfig) *CreateGraphqlApiInput { + s.OpenIDConnectConfig = v + return s +} + // SetUserPoolConfig sets the UserPoolConfig field's value. func (s *CreateGraphqlApiInput) SetUserPoolConfig(v *UserPoolConfig) *CreateGraphqlApiInput { s.UserPoolConfig = v @@ -2899,10 +2978,10 @@ type CreateResolverInput struct { // in Apache Velocity Template Language (VTL). // // RequestMappingTemplate is a required field - RequestMappingTemplate *string `locationName:"requestMappingTemplate" type:"string" required:"true"` + RequestMappingTemplate *string `locationName:"requestMappingTemplate" min:"1" type:"string" required:"true"` // The mapping template to be used for responses from the data source. - ResponseMappingTemplate *string `locationName:"responseMappingTemplate" type:"string"` + ResponseMappingTemplate *string `locationName:"responseMappingTemplate" min:"1" type:"string"` // The name of the Type. // @@ -2935,6 +3014,12 @@ func (s *CreateResolverInput) Validate() error { if s.RequestMappingTemplate == nil { invalidParams.Add(request.NewErrParamRequired("RequestMappingTemplate")) } + if s.RequestMappingTemplate != nil && len(*s.RequestMappingTemplate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RequestMappingTemplate", 1)) + } + if s.ResponseMappingTemplate != nil && len(*s.ResponseMappingTemplate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResponseMappingTemplate", 1)) + } if s.TypeName == nil { invalidParams.Add(request.NewErrParamRequired("TypeName")) } @@ -3111,6 +3196,9 @@ type DataSource struct { // Amazon Elasticsearch settings. ElasticsearchConfig *ElasticsearchDataSourceConfig `locationName:"elasticsearchConfig" type:"structure"` + // Http endpoint settings. + HttpConfig *HttpDataSourceConfig `locationName:"httpConfig" type:"structure"` + // Lambda settings. 
LambdaConfig *LambdaDataSourceConfig `locationName:"lambdaConfig" type:"structure"` @@ -3130,8 +3218,12 @@ type DataSource struct { // // * AWS_LAMBDA: The data source is an AWS Lambda function. // - // * NONE: There is no data source. This type is used when the required information - // can be computed on the fly without connecting to a back-end data source. + // * NONE: There is no data source. This type is used when when you wish + // to invoke a GraphQL operation without connecting to a data source, such + // as performing data transformation with resolvers or triggering a subscription + // to be invoked from a mutation. + // + // * HTTP: The data source is an HTTP endpoint. Type *string `locationName:"type" type:"string" enum:"DataSourceType"` } @@ -3169,6 +3261,12 @@ func (s *DataSource) SetElasticsearchConfig(v *ElasticsearchDataSourceConfig) *D return s } +// SetHttpConfig sets the HttpConfig field's value. +func (s *DataSource) SetHttpConfig(v *HttpDataSourceConfig) *DataSource { + s.HttpConfig = v + return s +} + // SetLambdaConfig sets the LambdaConfig field's value. func (s *DataSource) SetLambdaConfig(v *LambdaDataSourceConfig) *DataSource { s.LambdaConfig = v @@ -4113,9 +4211,15 @@ type GraphqlApi struct { // The authentication type. AuthenticationType *string `locationName:"authenticationType" type:"string" enum:"AuthenticationType"` + // The Amazon CloudWatch Logs configuration. + LogConfig *LogConfig `locationName:"logConfig" type:"structure"` + // The API name. Name *string `locationName:"name" type:"string"` + // The Open Id Connect configuration. + OpenIDConnectConfig *OpenIDConnectConfig `locationName:"openIDConnectConfig" type:"structure"` + // The URIs. Uris map[string]*string `locationName:"uris" type:"map"` @@ -4151,12 +4255,24 @@ func (s *GraphqlApi) SetAuthenticationType(v string) *GraphqlApi { return s } +// SetLogConfig sets the LogConfig field's value. +func (s *GraphqlApi) SetLogConfig(v *LogConfig) *GraphqlApi { + s.LogConfig = v + return s +} + // SetName sets the Name field's value. func (s *GraphqlApi) SetName(v string) *GraphqlApi { s.Name = &v return s } +// SetOpenIDConnectConfig sets the OpenIDConnectConfig field's value. +func (s *GraphqlApi) SetOpenIDConnectConfig(v *OpenIDConnectConfig) *GraphqlApi { + s.OpenIDConnectConfig = v + return s +} + // SetUris sets the Uris field's value. func (s *GraphqlApi) SetUris(v map[string]*string) *GraphqlApi { s.Uris = v @@ -4169,6 +4285,33 @@ func (s *GraphqlApi) SetUserPoolConfig(v *UserPoolConfig) *GraphqlApi { return s } +// Describes a Http data source configuration. +type HttpDataSourceConfig struct { + _ struct{} `type:"structure"` + + // The Http url endpoint. You can either specify the domain name or ip and port + // combination and the url scheme must be http(s). If the port is not specified, + // AWS AppSync will use the default port 80 for http endpoint and port 443 for + // https endpoints. + Endpoint *string `locationName:"endpoint" type:"string"` +} + +// String returns the string representation +func (s HttpDataSourceConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HttpDataSourceConfig) GoString() string { + return s.String() +} + +// SetEndpoint sets the Endpoint field's value. +func (s *HttpDataSourceConfig) SetEndpoint(v string) *HttpDataSourceConfig { + s.Endpoint = &v + return s +} + // Describes a Lambda data source configuration. 
type LambdaDataSourceConfig struct { _ struct{} `type:"structure"` @@ -4662,6 +4805,147 @@ func (s *ListTypesOutput) SetTypes(v []*Type) *ListTypesOutput { return s } +// The CloudWatch Logs configuration. +type LogConfig struct { + _ struct{} `type:"structure"` + + // The service role that AWS AppSync will assume to publish to Amazon CloudWatch + // logs in your account. + // + // CloudWatchLogsRoleArn is a required field + CloudWatchLogsRoleArn *string `locationName:"cloudWatchLogsRoleArn" type:"string" required:"true"` + + // The field logging level. Values can be NONE, ERROR, ALL. + // + // * NONE: No field-level logs are captured. + // + // * ERROR: Logs the following information only for the fields that are in + // error: + // + // The error section in the server response. + // + // Field-level errors. + // + // The generated request/response functions that got resolved for error fields. + // + // * ALL: The following information is logged for all fields in the query: + // + // Field-level tracing information. + // + // The generated request/response functions that got resolved for each field. + // + // FieldLogLevel is a required field + FieldLogLevel *string `locationName:"fieldLogLevel" type:"string" required:"true" enum:"FieldLogLevel"` +} + +// String returns the string representation +func (s LogConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LogConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LogConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LogConfig"} + if s.CloudWatchLogsRoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("CloudWatchLogsRoleArn")) + } + if s.FieldLogLevel == nil { + invalidParams.Add(request.NewErrParamRequired("FieldLogLevel")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCloudWatchLogsRoleArn sets the CloudWatchLogsRoleArn field's value. +func (s *LogConfig) SetCloudWatchLogsRoleArn(v string) *LogConfig { + s.CloudWatchLogsRoleArn = &v + return s +} + +// SetFieldLogLevel sets the FieldLogLevel field's value. +func (s *LogConfig) SetFieldLogLevel(v string) *LogConfig { + s.FieldLogLevel = &v + return s +} + +// Describes an Open Id Connect configuration. +type OpenIDConnectConfig struct { + _ struct{} `type:"structure"` + + // The number of milliseconds a token is valid after being authenticated. + AuthTTL *int64 `locationName:"authTTL" type:"long"` + + // The client identifier of the Relying party at the OpenID Provider. This identifier + // is typically obtained when the Relying party is registered with the OpenID + // Provider. You can specify a regular expression so the AWS AppSync can validate + // against multiple client identifiers at a time + ClientId *string `locationName:"clientId" type:"string"` + + // The number of milliseconds a token is valid after being issued to a user. + IatTTL *int64 `locationName:"iatTTL" type:"long"` + + // The issuer for the open id connect configuration. The issuer returned by + // discovery MUST exactly match the value of iss in the ID Token. 
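The `LogConfig` and `OpenIDConnectConfig` types above back the new `logConfig` and `openIDConnectConfig` fields on `CreateGraphqlApiInput`, together with the `OPENID_CONNECT` authentication type and `FieldLogLevel` enum added further down in this file. A sketch of creating an API that uses both; the API name, issuer, client ID, and role ARN are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/appsync"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := appsync.New(sess)

	out, err := svc.CreateGraphqlApi(&appsync.CreateGraphqlApiInput{
		Name:               aws.String("orders-api"), // placeholder
		AuthenticationType: aws.String(appsync.AuthenticationTypeOpenidConnect),
		OpenIDConnectConfig: &appsync.OpenIDConnectConfig{
			// Issuer is required; the URL and client ID are placeholders.
			Issuer:   aws.String("https://auth.example.com"),
			ClientId: aws.String("example-client-id"),
			AuthTTL:  aws.Int64(3600000), // token validity in milliseconds
		},
		LogConfig: &appsync.LogConfig{
			// Role AppSync assumes to publish to CloudWatch Logs (placeholder ARN).
			CloudWatchLogsRoleArn: aws.String("arn:aws:iam::123456789012:role/AppSyncLogsRole"),
			FieldLogLevel:         aws.String(appsync.FieldLogLevelError),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("API ID:", aws.StringValue(out.GraphqlApi.ApiId))
}
```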
+ // + // Issuer is a required field + Issuer *string `locationName:"issuer" type:"string" required:"true"` +} + +// String returns the string representation +func (s OpenIDConnectConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OpenIDConnectConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *OpenIDConnectConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OpenIDConnectConfig"} + if s.Issuer == nil { + invalidParams.Add(request.NewErrParamRequired("Issuer")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthTTL sets the AuthTTL field's value. +func (s *OpenIDConnectConfig) SetAuthTTL(v int64) *OpenIDConnectConfig { + s.AuthTTL = &v + return s +} + +// SetClientId sets the ClientId field's value. +func (s *OpenIDConnectConfig) SetClientId(v string) *OpenIDConnectConfig { + s.ClientId = &v + return s +} + +// SetIatTTL sets the IatTTL field's value. +func (s *OpenIDConnectConfig) SetIatTTL(v int64) *OpenIDConnectConfig { + s.IatTTL = &v + return s +} + +// SetIssuer sets the Issuer field's value. +func (s *OpenIDConnectConfig) SetIssuer(v string) *OpenIDConnectConfig { + s.Issuer = &v + return s +} + // Describes a resolver. type Resolver struct { _ struct{} `type:"structure"` @@ -4673,13 +4957,13 @@ type Resolver struct { FieldName *string `locationName:"fieldName" type:"string"` // The request mapping template. - RequestMappingTemplate *string `locationName:"requestMappingTemplate" type:"string"` + RequestMappingTemplate *string `locationName:"requestMappingTemplate" min:"1" type:"string"` // The resolver ARN. ResolverArn *string `locationName:"resolverArn" type:"string"` // The response mapping template. - ResponseMappingTemplate *string `locationName:"responseMappingTemplate" type:"string"` + ResponseMappingTemplate *string `locationName:"responseMappingTemplate" min:"1" type:"string"` // The resolver type name. TypeName *string `locationName:"typeName" type:"string"` @@ -4880,8 +5164,8 @@ type UpdateApiKeyInput struct { // A description of the purpose of the API key. Description *string `locationName:"description" type:"string"` - // The time after which the API key expires. The date is represented as seconds - // since the epoch. + // The time from update time after which the API key expires. The date is represented + // as seconds since the epoch. For more information, see . Expires *int64 `locationName:"expires" type:"long"` // The API key ID. @@ -4980,6 +5264,9 @@ type UpdateDataSourceInput struct { // The new Elasticsearch configuration. ElasticsearchConfig *ElasticsearchDataSourceConfig `locationName:"elasticsearchConfig" type:"structure"` + // The new http endpoint configuration + HttpConfig *HttpDataSourceConfig `locationName:"httpConfig" type:"structure"` + // The new Lambda configuration. LambdaConfig *LambdaDataSourceConfig `locationName:"lambdaConfig" type:"structure"` @@ -5065,6 +5352,12 @@ func (s *UpdateDataSourceInput) SetElasticsearchConfig(v *ElasticsearchDataSourc return s } +// SetHttpConfig sets the HttpConfig field's value. +func (s *UpdateDataSourceInput) SetHttpConfig(v *HttpDataSourceConfig) *UpdateDataSourceInput { + s.HttpConfig = v + return s +} + // SetLambdaConfig sets the LambdaConfig field's value. 
func (s *UpdateDataSourceInput) SetLambdaConfig(v *LambdaDataSourceConfig) *UpdateDataSourceInput { s.LambdaConfig = v @@ -5123,11 +5416,17 @@ type UpdateGraphqlApiInput struct { // The new authentication type for the GraphqlApi object. AuthenticationType *string `locationName:"authenticationType" type:"string" enum:"AuthenticationType"` + // The Amazon CloudWatch logs configuration for the GraphqlApi object. + LogConfig *LogConfig `locationName:"logConfig" type:"structure"` + // The new name for the GraphqlApi object. // // Name is a required field Name *string `locationName:"name" type:"string" required:"true"` + // The Open Id Connect configuration configuration for the GraphqlApi object. + OpenIDConnectConfig *OpenIDConnectConfig `locationName:"openIDConnectConfig" type:"structure"` + // The new Amazon Cognito User Pool configuration for the GraphqlApi object. UserPoolConfig *UserPoolConfig `locationName:"userPoolConfig" type:"structure"` } @@ -5151,6 +5450,16 @@ func (s *UpdateGraphqlApiInput) Validate() error { if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } + if s.LogConfig != nil { + if err := s.LogConfig.Validate(); err != nil { + invalidParams.AddNested("LogConfig", err.(request.ErrInvalidParams)) + } + } + if s.OpenIDConnectConfig != nil { + if err := s.OpenIDConnectConfig.Validate(); err != nil { + invalidParams.AddNested("OpenIDConnectConfig", err.(request.ErrInvalidParams)) + } + } if s.UserPoolConfig != nil { if err := s.UserPoolConfig.Validate(); err != nil { invalidParams.AddNested("UserPoolConfig", err.(request.ErrInvalidParams)) @@ -5175,12 +5484,24 @@ func (s *UpdateGraphqlApiInput) SetAuthenticationType(v string) *UpdateGraphqlAp return s } +// SetLogConfig sets the LogConfig field's value. +func (s *UpdateGraphqlApiInput) SetLogConfig(v *LogConfig) *UpdateGraphqlApiInput { + s.LogConfig = v + return s +} + // SetName sets the Name field's value. func (s *UpdateGraphqlApiInput) SetName(v string) *UpdateGraphqlApiInput { s.Name = &v return s } +// SetOpenIDConnectConfig sets the OpenIDConnectConfig field's value. +func (s *UpdateGraphqlApiInput) SetOpenIDConnectConfig(v *OpenIDConnectConfig) *UpdateGraphqlApiInput { + s.OpenIDConnectConfig = v + return s +} + // SetUserPoolConfig sets the UserPoolConfig field's value. func (s *UpdateGraphqlApiInput) SetUserPoolConfig(v *UserPoolConfig) *UpdateGraphqlApiInput { s.UserPoolConfig = v @@ -5231,10 +5552,10 @@ type UpdateResolverInput struct { // The new request mapping template. // // RequestMappingTemplate is a required field - RequestMappingTemplate *string `locationName:"requestMappingTemplate" type:"string" required:"true"` + RequestMappingTemplate *string `locationName:"requestMappingTemplate" min:"1" type:"string" required:"true"` // The new response mapping template. - ResponseMappingTemplate *string `locationName:"responseMappingTemplate" type:"string"` + ResponseMappingTemplate *string `locationName:"responseMappingTemplate" min:"1" type:"string"` // The new type name. 
// @@ -5267,6 +5588,12 @@ func (s *UpdateResolverInput) Validate() error { if s.RequestMappingTemplate == nil { invalidParams.Add(request.NewErrParamRequired("RequestMappingTemplate")) } + if s.RequestMappingTemplate != nil && len(*s.RequestMappingTemplate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RequestMappingTemplate", 1)) + } + if s.ResponseMappingTemplate != nil && len(*s.ResponseMappingTemplate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResponseMappingTemplate", 1)) + } if s.TypeName == nil { invalidParams.Add(request.NewErrParamRequired("TypeName")) } @@ -5522,6 +5849,9 @@ const ( // AuthenticationTypeAmazonCognitoUserPools is a AuthenticationType enum value AuthenticationTypeAmazonCognitoUserPools = "AMAZON_COGNITO_USER_POOLS" + + // AuthenticationTypeOpenidConnect is a AuthenticationType enum value + AuthenticationTypeOpenidConnect = "OPENID_CONNECT" ) const ( @@ -5536,6 +5866,9 @@ const ( // DataSourceTypeNone is a DataSourceType enum value DataSourceTypeNone = "NONE" + + // DataSourceTypeHttp is a DataSourceType enum value + DataSourceTypeHttp = "HTTP" ) const ( @@ -5546,6 +5879,17 @@ const ( DefaultActionDeny = "DENY" ) +const ( + // FieldLogLevelNone is a FieldLogLevel enum value + FieldLogLevelNone = "NONE" + + // FieldLogLevelError is a FieldLogLevel enum value + FieldLogLevelError = "ERROR" + + // FieldLogLevelAll is a FieldLogLevel enum value + FieldLogLevelAll = "ALL" +) + const ( // OutputTypeSdl is a OutputType enum value OutputTypeSdl = "SDL" diff --git a/vendor/github.com/aws/aws-sdk-go/service/appsync/errors.go b/vendor/github.com/aws/aws-sdk-go/service/appsync/errors.go index a2a7a0a0ca3..5c109eaba92 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/appsync/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/appsync/errors.go @@ -13,7 +13,8 @@ const ( // ErrCodeApiKeyValidityOutOfBoundsException for service response error code // "ApiKeyValidityOutOfBoundsException". // - // The API key expiration must be set to a value between 1 and 365 days. + // The API key expiration must be set to a value between 1 and 365 days from + // creation (for CreateApiKey) or from update (for UpdateApiKey). ErrCodeApiKeyValidityOutOfBoundsException = "ApiKeyValidityOutOfBoundsException" // ErrCodeApiLimitExceededException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/appsync/service.go b/vendor/github.com/aws/aws-sdk-go/service/appsync/service.go index cdc4698e8e4..e53f6e71b48 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/appsync/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/appsync/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "appsync" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "appsync" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "AppSync" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the AppSync client with a session. @@ -45,19 +46,20 @@ const ( // svc := appsync.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *AppSync { c := p.ClientConfig(EndpointsID, cfgs...) 
+ if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "appsync" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *AppSync { - if len(signingName) == 0 { - signingName = "appsync" - } svc := &AppSync{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/athena/api.go b/vendor/github.com/aws/aws-sdk-go/service/athena/api.go index 198a0dfbb39..bbf510b9476 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/athena/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/athena/api.go @@ -14,8 +14,8 @@ const opBatchGetNamedQuery = "BatchGetNamedQuery" // BatchGetNamedQueryRequest generates a "aws/request.Request" representing the // client's request for the BatchGetNamedQuery operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -104,8 +104,8 @@ const opBatchGetQueryExecution = "BatchGetQueryExecution" // BatchGetQueryExecutionRequest generates a "aws/request.Request" representing the // client's request for the BatchGetQueryExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -192,8 +192,8 @@ const opCreateNamedQuery = "CreateNamedQuery" // CreateNamedQueryRequest generates a "aws/request.Request" representing the // client's request for the CreateNamedQuery operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -280,8 +280,8 @@ const opDeleteNamedQuery = "DeleteNamedQuery" // DeleteNamedQueryRequest generates a "aws/request.Request" representing the // client's request for the DeleteNamedQuery operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -368,8 +368,8 @@ const opGetNamedQuery = "GetNamedQuery" // GetNamedQueryRequest generates a "aws/request.Request" representing the // client's request for the GetNamedQuery operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -452,8 +452,8 @@ const opGetQueryExecution = "GetQueryExecution" // GetQueryExecutionRequest generates a "aws/request.Request" representing the // client's request for the GetQueryExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -537,8 +537,8 @@ const opGetQueryResults = "GetQueryResults" // GetQueryResultsRequest generates a "aws/request.Request" representing the // client's request for the GetQueryResults operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -679,8 +679,8 @@ const opListNamedQueries = "ListNamedQueries" // ListNamedQueriesRequest generates a "aws/request.Request" representing the // client's request for the ListNamedQueries operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -823,8 +823,8 @@ const opListQueryExecutions = "ListQueryExecutions" // ListQueryExecutionsRequest generates a "aws/request.Request" representing the // client's request for the ListQueryExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -967,8 +967,8 @@ const opStartQueryExecution = "StartQueryExecution" // StartQueryExecutionRequest generates a "aws/request.Request" representing the // client's request for the StartQueryExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1030,7 +1030,8 @@ func (c *Athena) StartQueryExecutionRequest(input *StartQueryExecutionInput) (re // a required parameter may be missing or out of range. 
// // * ErrCodeTooManyRequestsException "TooManyRequestsException" -// Indicates that the request was throttled. +// Indicates that the request was throttled and includes the reason for throttling, +// for example, the limit of concurrent queries has been exceeded. // // See also, https://docs.aws.amazon.com/goto/WebAPI/athena-2017-05-18/StartQueryExecution func (c *Athena) StartQueryExecution(input *StartQueryExecutionInput) (*StartQueryExecutionOutput, error) { @@ -1058,8 +1059,8 @@ const opStopQueryExecution = "StopQueryExecution" // StopQueryExecutionRequest generates a "aws/request.Request" representing the // client's request for the StopQueryExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1604,8 +1605,8 @@ func (s DeleteNamedQueryOutput) GoString() string { return s.String() } -// If query results are encrypted in Amazon S3, indicates the Amazon S3 encryption -// option used. +// If query results are encrypted in Amazon S3, indicates the encryption option +// used (for example, SSE-KMS or CSE-KMS) and key information. type EncryptionConfiguration struct { _ struct{} `type:"structure"` @@ -1842,6 +1843,9 @@ type GetQueryResultsOutput struct { // The results of the query execution. ResultSet *ResultSet `type:"structure"` + + // The number of rows inserted with a CREATE TABLE AS SELECT statement. + UpdateCount *int64 `type:"long"` } // String returns the string representation @@ -1866,6 +1870,12 @@ func (s *GetQueryResultsOutput) SetResultSet(v *ResultSet) *GetQueryResultsOutpu return s } +// SetUpdateCount sets the UpdateCount field's value. +func (s *GetQueryResultsOutput) SetUpdateCount(v int64) *GetQueryResultsOutput { + s.UpdateCount = &v + return s +} + type ListNamedQueriesInput struct { _ struct{} `type:"structure"` @@ -2080,8 +2090,14 @@ type QueryExecution struct { // option, if any, used for query results. ResultConfiguration *ResultConfiguration `type:"structure"` + // The type of query statement that was run. DDL indicates DDL query statements. + // DML indicates DML (Data Manipulation Language) query statements, such as + // CREATE TABLE AS SELECT. UTILITY indicates query statements other than DDL + // and DML, such as SHOW CREATE TABLE, or DESCRIBE . + StatementType *string `type:"string" enum:"StatementType"` + // The amount of data scanned during the query execution and the amount of time - // that it took to execute. + // that it took to execute, and the type of statement that was run. Statistics *QueryExecutionStatistics `type:"structure"` // The completion date, current state, submission time, and state change reason @@ -2123,6 +2139,12 @@ func (s *QueryExecution) SetResultConfiguration(v *ResultConfiguration) *QueryEx return s } +// SetStatementType sets the StatementType field's value. +func (s *QueryExecution) SetStatementType(v string) *QueryExecution { + s.StatementType = &v + return s +} + // SetStatistics sets the Statistics field's value. 
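`UpdateCount` above is only populated for CTAS-style statements. A sketch of reading it after a query completes; the `QueryExecutionId` parameter on `GetQueryResultsInput` is assumed, as it is not shown in this hunk.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/athena"
)

func main() {
	svc := athena.New(session.Must(session.NewSession()))

	out, err := svc.GetQueryResults(&athena.GetQueryResultsInput{
		QueryExecutionId: aws.String("example-execution-id"), // assumed field name
	})
	if err != nil {
		fmt.Println("get query results failed:", err)
		return
	}

	// New in this hunk: rows inserted by a CREATE TABLE AS SELECT statement.
	fmt.Println("rows inserted:", aws.Int64Value(out.UpdateCount))
}
```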
func (s *QueryExecution) SetStatistics(v *QueryExecutionStatistics) *QueryExecution { s.Statistics = v @@ -2173,7 +2195,7 @@ func (s *QueryExecutionContext) SetDatabase(v string) *QueryExecutionContext { } // The amount of data scanned during the query execution and the amount of time -// that it took to execute. +// that it took to execute, and the type of statement that was run. type QueryExecutionStatistics struct { _ struct{} `type:"structure"` @@ -2212,20 +2234,21 @@ type QueryExecutionStatus struct { _ struct{} `type:"structure"` // The date and time that the query completed. - CompletionDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` - - // The state of query execution. SUBMITTED indicates that the query is queued - // for execution. RUNNING indicates that the query is scanning data and returning - // results. SUCCEEDED indicates that the query completed without error. FAILED - // indicates that the query experienced an error and did not complete processing. - // CANCELLED indicates that user input interrupted query execution. + CompletionDateTime *time.Time `type:"timestamp"` + + // The state of query execution. QUEUED state is listed but is not used by Athena + // and is reserved for future use. RUNNING indicates that the query has been + // submitted to the service, and Athena will execute the query as soon as resources + // are available. SUCCEEDED indicates that the query completed without error. + // FAILED indicates that the query experienced an error and did not complete + // processing.CANCELLED indicates that user input interrupted query execution. State *string `type:"string" enum:"QueryExecutionState"` // Further detail about the status of the query. StateChangeReason *string `type:"string"` // The date and time that the query was submitted. - SubmissionDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + SubmissionDateTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -2267,11 +2290,12 @@ func (s *QueryExecutionStatus) SetSubmissionDateTime(v time.Time) *QueryExecutio type ResultConfiguration struct { _ struct{} `type:"structure"` - // If query results are encrypted in S3, indicates the S3 encryption option - // used (for example, SSE-KMS or CSE-KMS and key information. + // If query results are encrypted in Amazon S3, indicates the encryption option + // used (for example, SSE-KMS or CSE-KMS) and key information. EncryptionConfiguration *EncryptionConfiguration `type:"structure"` - // The location in S3 where query results are stored. + // The location in Amazon S3 where your query results are stored, such as s3://path/to/query/bucket/. + // For more information, see Queries and Query Result Files. (http://docs.aws.amazon.com/athena/latest/ug/querying.html) // // OutputLocation is a required field OutputLocation *string `type:"string" required:"true"` @@ -2357,7 +2381,7 @@ func (s *ResultSet) SetRows(v []*Row) *ResultSet { type ResultSetMetadata struct { _ struct{} `type:"structure"` - // Information about the columns in a query execution result. + // Information about the columns returned in a query result metadata. 
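The reworked state documentation above implies polling `GetQueryExecution` until a terminal state is reached. A rough sketch under that assumption; `GetQueryExecutionInput`'s `QueryExecutionId` and the non-CANCELLED `QueryExecutionState*` constants are taken from elsewhere in the package and not from this hunk.

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/athena"
)

func main() {
	svc := athena.New(session.Must(session.NewSession()))
	id := aws.String("example-execution-id")

	for {
		out, err := svc.GetQueryExecution(&athena.GetQueryExecutionInput{
			QueryExecutionId: id, // assumed field name
		})
		if err != nil {
			fmt.Println("describe failed:", err)
			return
		}

		state := aws.StringValue(out.QueryExecution.Status.State)
		switch state {
		case athena.QueryExecutionStateSucceeded,
			athena.QueryExecutionStateFailed,
			athena.QueryExecutionStateCancelled:
			fmt.Println("terminal state:", state)
			return
		default:
			// Still QUEUED/RUNNING/SUBMITTED; wait and poll again.
			time.Sleep(2 * time.Second)
		}
	}
}
```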
ColumnInfo []*ColumnInfo `type:"list"` } @@ -2696,6 +2720,19 @@ const ( QueryExecutionStateCancelled = "CANCELLED" ) +const ( + // StatementTypeDdl is a StatementType enum value + StatementTypeDdl = "DDL" + + // StatementTypeDml is a StatementType enum value + StatementTypeDml = "DML" + + // StatementTypeUtility is a StatementType enum value + StatementTypeUtility = "UTILITY" +) + +// The reason for the query throttling, for example, when it exceeds the concurrent +// query limit. const ( // ThrottleReasonConcurrentQueryLimitExceeded is a ThrottleReason enum value ThrottleReasonConcurrentQueryLimitExceeded = "CONCURRENT_QUERY_LIMIT_EXCEEDED" diff --git a/vendor/github.com/aws/aws-sdk-go/service/athena/doc.go b/vendor/github.com/aws/aws-sdk-go/service/athena/doc.go index 920fafc0c73..76b6b8989bb 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/athena/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/athena/doc.go @@ -12,8 +12,13 @@ // For more information, see What is Amazon Athena (http://docs.aws.amazon.com/athena/latest/ug/what-is.html) // in the Amazon Athena User Guide. // +// If you connect to Athena using the JDBC driver, use version 1.1.0 of the +// driver or later with the Amazon Athena API. Earlier version drivers do not +// support the API. For more information and to download the driver, see Accessing +// Amazon Athena with JDBC (https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html). +// // For code samples using the AWS SDK for Java, see Examples and Code Samples -// (http://docs.aws.amazon.com/athena/latest/ug/code-samples.html) in the Amazon +// (https://docs.aws.amazon.com/athena/latest/ug/code-samples.html) in the Amazon // Athena User Guide. // // See https://docs.aws.amazon.com/goto/WebAPI/athena-2017-05-18 for more information on this service. diff --git a/vendor/github.com/aws/aws-sdk-go/service/athena/errors.go b/vendor/github.com/aws/aws-sdk-go/service/athena/errors.go index 4060e744e36..abfa4fac398 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/athena/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/athena/errors.go @@ -21,6 +21,7 @@ const ( // ErrCodeTooManyRequestsException for service response error code // "TooManyRequestsException". // - // Indicates that the request was throttled. + // Indicates that the request was throttled and includes the reason for throttling, + // for example, the limit of concurrent queries has been exceeded. ErrCodeTooManyRequestsException = "TooManyRequestsException" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/athena/service.go b/vendor/github.com/aws/aws-sdk-go/service/athena/service.go index c1fd29b0f89..6806a62ec95 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/athena/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/athena/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "athena" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "athena" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Athena" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Athena client with a session. 
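The new `StatementType` enum lets callers branch on what kind of statement ran. A small sketch using only the field and constants added above:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/athena"
)

// describeStatement is a hypothetical helper that reports the new
// StatementType field on a QueryExecution.
func describeStatement(qe *athena.QueryExecution) {
	switch aws.StringValue(qe.StatementType) {
	case athena.StatementTypeDdl:
		fmt.Println("DDL statement (e.g. CREATE TABLE)")
	case athena.StatementTypeDml:
		fmt.Println("DML statement (e.g. CREATE TABLE AS SELECT)")
	case athena.StatementTypeUtility:
		fmt.Println("utility statement (e.g. SHOW CREATE TABLE, DESCRIBE)")
	default:
		fmt.Println("statement type not reported")
	}
}

func main() {
	describeStatement(&athena.QueryExecution{
		StatementType: aws.String(athena.StatementTypeDml),
	})
}
```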
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/autoscaling/api.go b/vendor/github.com/aws/aws-sdk-go/service/autoscaling/api.go index 7cb3085b437..ccc5dca61e2 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/autoscaling/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/autoscaling/api.go @@ -17,8 +17,8 @@ const opAttachInstances = "AttachInstances" // AttachInstancesRequest generates a "aws/request.Request" representing the // client's request for the AttachInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -61,10 +61,10 @@ func (c *AutoScaling) AttachInstancesRequest(input *AttachInstancesInput) (req * // // Attaches one or more EC2 instances to the specified Auto Scaling group. // -// When you attach instances, Auto Scaling increases the desired capacity of -// the group by the number of instances being attached. If the number of instances -// being attached plus the desired capacity of the group exceeds the maximum -// size of the group, the operation fails. +// When you attach instances, Amazon EC2 Auto Scaling increases the desired +// capacity of the group by the number of instances being attached. If the number +// of instances being attached plus the desired capacity of the group exceeds +// the maximum size of the group, the operation fails. // // If there is a Classic Load Balancer attached to your Auto Scaling group, // the instances are also registered with the load balancer. If there are target @@ -72,8 +72,8 @@ func (c *AutoScaling) AttachInstancesRequest(input *AttachInstancesInput) (req * // with the target groups. // // For more information, see Attach EC2 Instances to Your Auto Scaling Group -// (http://docs.aws.amazon.com/autoscaling/latest/userguide/attach-instance-asg.html) -// in the Auto Scaling User Guide. +// (http://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-instance-asg.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -116,8 +116,8 @@ const opAttachLoadBalancerTargetGroups = "AttachLoadBalancerTargetGroups" // AttachLoadBalancerTargetGroupsRequest generates a "aws/request.Request" representing the // client's request for the AttachLoadBalancerTargetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -162,8 +162,8 @@ func (c *AutoScaling) AttachLoadBalancerTargetGroupsRequest(input *AttachLoadBal // To detach the target group from the Auto Scaling group, use DetachLoadBalancerTargetGroups. // // For more information, see Attach a Load Balancer to Your Auto Scaling Group -// (http://docs.aws.amazon.com/autoscaling/latest/userguide/attach-load-balancer-asg.html) -// in the Auto Scaling User Guide. +// (http://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -206,8 +206,8 @@ const opAttachLoadBalancers = "AttachLoadBalancers" // AttachLoadBalancersRequest generates a "aws/request.Request" representing the // client's request for the AttachLoadBalancers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -255,8 +255,8 @@ func (c *AutoScaling) AttachLoadBalancersRequest(input *AttachLoadBalancersInput // To detach the load balancer from the Auto Scaling group, use DetachLoadBalancers. // // For more information, see Attach a Load Balancer to Your Auto Scaling Group -// (http://docs.aws.amazon.com/autoscaling/latest/userguide/attach-load-balancer-asg.html) -// in the Auto Scaling User Guide. +// (http://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -295,12 +295,183 @@ func (c *AutoScaling) AttachLoadBalancersWithContext(ctx aws.Context, input *Att return out, req.Send() } +const opBatchDeleteScheduledAction = "BatchDeleteScheduledAction" + +// BatchDeleteScheduledActionRequest generates a "aws/request.Request" representing the +// client's request for the BatchDeleteScheduledAction operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BatchDeleteScheduledAction for more information on using the BatchDeleteScheduledAction +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchDeleteScheduledActionRequest method. 
+// req, resp := client.BatchDeleteScheduledActionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/autoscaling-2011-01-01/BatchDeleteScheduledAction +func (c *AutoScaling) BatchDeleteScheduledActionRequest(input *BatchDeleteScheduledActionInput) (req *request.Request, output *BatchDeleteScheduledActionOutput) { + op := &request.Operation{ + Name: opBatchDeleteScheduledAction, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchDeleteScheduledActionInput{} + } + + output = &BatchDeleteScheduledActionOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchDeleteScheduledAction API operation for Auto Scaling. +// +// Deletes one or more scheduled actions for the specified Auto Scaling group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Auto Scaling's +// API operation BatchDeleteScheduledAction for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceContentionFault "ResourceContention" +// You already have a pending update to an Auto Scaling resource (for example, +// a group, instance, or load balancer). +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/autoscaling-2011-01-01/BatchDeleteScheduledAction +func (c *AutoScaling) BatchDeleteScheduledAction(input *BatchDeleteScheduledActionInput) (*BatchDeleteScheduledActionOutput, error) { + req, out := c.BatchDeleteScheduledActionRequest(input) + return out, req.Send() +} + +// BatchDeleteScheduledActionWithContext is the same as BatchDeleteScheduledAction with the addition of +// the ability to pass a context and additional request options. +// +// See BatchDeleteScheduledAction for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AutoScaling) BatchDeleteScheduledActionWithContext(ctx aws.Context, input *BatchDeleteScheduledActionInput, opts ...request.Option) (*BatchDeleteScheduledActionOutput, error) { + req, out := c.BatchDeleteScheduledActionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opBatchPutScheduledUpdateGroupAction = "BatchPutScheduledUpdateGroupAction" + +// BatchPutScheduledUpdateGroupActionRequest generates a "aws/request.Request" representing the +// client's request for the BatchPutScheduledUpdateGroupAction operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BatchPutScheduledUpdateGroupAction for more information on using the BatchPutScheduledUpdateGroupAction +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchPutScheduledUpdateGroupActionRequest method. 
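A minimal sketch of calling the new `BatchDeleteScheduledAction` operation; the input fields (`AutoScalingGroupName`, `ScheduledActionNames`) and the `FailedScheduledActions` output field are assumed from the Auto Scaling API, since only the request plumbing appears in this hunk.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))

	out, err := svc.BatchDeleteScheduledAction(&autoscaling.BatchDeleteScheduledActionInput{
		AutoScalingGroupName: aws.String("example-asg"),                           // assumed field
		ScheduledActionNames: aws.StringSlice([]string{"scale-up", "scale-down"}), // assumed field
	})
	if err != nil {
		fmt.Println("batch delete failed:", err)
		return
	}

	// Actions that could not be deleted are reported rather than failing the call.
	for _, f := range out.FailedScheduledActions { // assumed field
		fmt.Println("failed:", aws.StringValue(f.ScheduledActionName), aws.StringValue(f.ErrorMessage))
	}
}
```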
+// req, resp := client.BatchPutScheduledUpdateGroupActionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/autoscaling-2011-01-01/BatchPutScheduledUpdateGroupAction +func (c *AutoScaling) BatchPutScheduledUpdateGroupActionRequest(input *BatchPutScheduledUpdateGroupActionInput) (req *request.Request, output *BatchPutScheduledUpdateGroupActionOutput) { + op := &request.Operation{ + Name: opBatchPutScheduledUpdateGroupAction, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchPutScheduledUpdateGroupActionInput{} + } + + output = &BatchPutScheduledUpdateGroupActionOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchPutScheduledUpdateGroupAction API operation for Auto Scaling. +// +// Creates or updates one or more scheduled scaling actions for an Auto Scaling +// group. If you leave a parameter unspecified when updating a scheduled scaling +// action, the corresponding value remains unchanged. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Auto Scaling's +// API operation BatchPutScheduledUpdateGroupAction for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAlreadyExistsFault "AlreadyExists" +// You already have an Auto Scaling group or launch configuration with this +// name. +// +// * ErrCodeLimitExceededFault "LimitExceeded" +// You have already reached a limit for your Auto Scaling resources (for example, +// groups, launch configurations, or lifecycle hooks). For more information, +// see DescribeAccountLimits. +// +// * ErrCodeResourceContentionFault "ResourceContention" +// You already have a pending update to an Auto Scaling resource (for example, +// a group, instance, or load balancer). +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/autoscaling-2011-01-01/BatchPutScheduledUpdateGroupAction +func (c *AutoScaling) BatchPutScheduledUpdateGroupAction(input *BatchPutScheduledUpdateGroupActionInput) (*BatchPutScheduledUpdateGroupActionOutput, error) { + req, out := c.BatchPutScheduledUpdateGroupActionRequest(input) + return out, req.Send() +} + +// BatchPutScheduledUpdateGroupActionWithContext is the same as BatchPutScheduledUpdateGroupAction with the addition of +// the ability to pass a context and additional request options. +// +// See BatchPutScheduledUpdateGroupAction for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AutoScaling) BatchPutScheduledUpdateGroupActionWithContext(ctx aws.Context, input *BatchPutScheduledUpdateGroupActionInput, opts ...request.Option) (*BatchPutScheduledUpdateGroupActionOutput, error) { + req, out := c.BatchPutScheduledUpdateGroupActionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCompleteLifecycleAction = "CompleteLifecycleAction" // CompleteLifecycleActionRequest generates a "aws/request.Request" representing the // client's request for the CompleteLifecycleAction operation. 
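A companion sketch for `BatchPutScheduledUpdateGroupAction`, here via the `WithContext` variant shown above; the `ScheduledUpdateGroupActionRequest` type and its fields are assumed from the Auto Scaling API.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	_, err := svc.BatchPutScheduledUpdateGroupActionWithContext(ctx,
		&autoscaling.BatchPutScheduledUpdateGroupActionInput{
			AutoScalingGroupName: aws.String("example-asg"), // assumed field
			ScheduledUpdateGroupActions: []*autoscaling.ScheduledUpdateGroupActionRequest{ // assumed type
				{
					ScheduledActionName: aws.String("business-hours"),
					Recurrence:          aws.String("0 9 * * MON-FRI"),
					MinSize:             aws.Int64(2),
					MaxSize:             aws.Int64(10),
					DesiredCapacity:     aws.Int64(4),
				},
			},
		})
	if err != nil {
		fmt.Println("batch put failed:", err)
	}
}
```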
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -346,11 +517,12 @@ func (c *AutoScaling) CompleteLifecycleActionRequest(input *CompleteLifecycleAct // Scaling group: // // (Optional) Create a Lambda function and a rule that allows CloudWatch Events -// to invoke your Lambda function when Auto Scaling launches or terminates instances. +// to invoke your Lambda function when Amazon EC2 Auto Scaling launches or terminates +// instances. // // (Optional) Create a notification target and an IAM role. The target can be -// either an Amazon SQS queue or an Amazon SNS topic. The role allows Auto Scaling -// to publish lifecycle notifications to the target. +// either an Amazon SQS queue or an Amazon SNS topic. The role allows Amazon +// EC2 Auto Scaling to publish lifecycle notifications to the target. // // Create the lifecycle hook. Specify whether the hook is used when the instances // launch or terminate. @@ -360,8 +532,8 @@ func (c *AutoScaling) CompleteLifecycleActionRequest(input *CompleteLifecycleAct // // If you finish before the timeout period ends, complete the lifecycle action. // -// For more information, see Auto Scaling Lifecycle (http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroupLifecycle.html) -// in the Auto Scaling User Guide. +// For more information, see Auto Scaling Lifecycle (http://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -401,8 +573,8 @@ const opCreateAutoScalingGroup = "CreateAutoScalingGroup" // CreateAutoScalingGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateAutoScalingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -447,11 +619,11 @@ func (c *AutoScaling) CreateAutoScalingGroupRequest(input *CreateAutoScalingGrou // // If you exceed your maximum limit of Auto Scaling groups, the call fails. // For information about viewing this limit, see DescribeAccountLimits. For -// information about updating this limit, see Auto Scaling Limits (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-account-limits.html) -// in the Auto Scaling User Guide. +// information about updating this limit, see Auto Scaling Limits (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-account-limits.html) +// in the Amazon EC2 Auto Scaling User Guide. // -// For more information, see Auto Scaling Groups (http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html) -// in the Auto Scaling User Guide. 
+// For more information, see Auto Scaling Groups (http://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -503,8 +675,8 @@ const opCreateLaunchConfiguration = "CreateLaunchConfiguration" // CreateLaunchConfigurationRequest generates a "aws/request.Request" representing the // client's request for the CreateLaunchConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -549,11 +721,11 @@ func (c *AutoScaling) CreateLaunchConfigurationRequest(input *CreateLaunchConfig // // If you exceed your maximum limit of launch configurations, the call fails. // For information about viewing this limit, see DescribeAccountLimits. For -// information about updating this limit, see Auto Scaling Limits (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-account-limits.html) -// in the Auto Scaling User Guide. +// information about updating this limit, see Auto Scaling Limits (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-account-limits.html) +// in the Amazon EC2 Auto Scaling User Guide. // -// For more information, see Launch Configurations (http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html) -// in the Auto Scaling User Guide. +// For more information, see Launch Configurations (http://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -602,8 +774,8 @@ const opCreateOrUpdateTags = "CreateOrUpdateTags" // CreateOrUpdateTagsRequest generates a "aws/request.Request" representing the // client's request for the CreateOrUpdateTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -649,8 +821,8 @@ func (c *AutoScaling) CreateOrUpdateTagsRequest(input *CreateOrUpdateTagsInput) // When you specify a tag with a key that already exists, the operation overwrites // the previous tag definition, and you do not get an error message. // -// For more information, see Tagging Auto Scaling Groups and Instances (http://docs.aws.amazon.com/autoscaling/latest/userguide/autoscaling-tagging.html) -// in the Auto Scaling User Guide. +// For more information, see Tagging Auto Scaling Groups and Instances (http://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-tagging.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -702,8 +874,8 @@ const opDeleteAutoScalingGroup = "DeleteAutoScalingGroup" // DeleteAutoScalingGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteAutoScalingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -754,7 +926,8 @@ func (c *AutoScaling) DeleteAutoScalingGroupRequest(input *DeleteAutoScalingGrou // // To remove instances from the Auto Scaling group before deleting it, call // DetachInstances with the list of instances and the option to decrement the -// desired capacity so that Auto Scaling does not launch replacement instances. +// desired capacity. This ensures that Amazon EC2 Auto Scaling does not launch +// replacement instances. // // To terminate all instances before deleting the Auto Scaling group, call UpdateAutoScalingGroup // and set the minimum size and desired capacity of the Auto Scaling group to @@ -805,8 +978,8 @@ const opDeleteLaunchConfiguration = "DeleteLaunchConfiguration" // DeleteLaunchConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteLaunchConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -894,8 +1067,8 @@ const opDeleteLifecycleHook = "DeleteLifecycleHook" // DeleteLifecycleHookRequest generates a "aws/request.Request" representing the // client's request for the DeleteLifecycleHook operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -977,8 +1150,8 @@ const opDeleteNotificationConfiguration = "DeleteNotificationConfiguration" // DeleteNotificationConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteNotificationConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1059,8 +1232,8 @@ const opDeletePolicy = "DeletePolicy" // DeletePolicyRequest generates a "aws/request.Request" representing the // client's request for the DeletePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1147,8 +1320,8 @@ const opDeleteScheduledAction = "DeleteScheduledAction" // DeleteScheduledActionRequest generates a "aws/request.Request" representing the // client's request for the DeleteScheduledAction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1229,8 +1402,8 @@ const opDeleteTags = "DeleteTags" // DeleteTagsRequest generates a "aws/request.Request" representing the // client's request for the DeleteTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1314,8 +1487,8 @@ const opDescribeAccountLimits = "DescribeAccountLimits" // DescribeAccountLimitsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAccountLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1357,8 +1530,8 @@ func (c *AutoScaling) DescribeAccountLimitsRequest(input *DescribeAccountLimitsI // Describes the current Auto Scaling resource limits for your AWS account. // // For information about requesting an increase in these limits, see Auto Scaling -// Limits (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-account-limits.html) -// in the Auto Scaling User Guide. +// Limits (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-account-limits.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1398,8 +1571,8 @@ const opDescribeAdjustmentTypes = "DescribeAdjustmentTypes" // DescribeAdjustmentTypesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAdjustmentTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1478,8 +1651,8 @@ const opDescribeAutoScalingGroups = "DescribeAutoScalingGroups" // DescribeAutoScalingGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAutoScalingGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1617,8 +1790,8 @@ const opDescribeAutoScalingInstances = "DescribeAutoScalingInstances" // DescribeAutoScalingInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAutoScalingInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1756,8 +1929,8 @@ const opDescribeAutoScalingNotificationTypes = "DescribeAutoScalingNotificationT // DescribeAutoScalingNotificationTypesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAutoScalingNotificationTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1796,7 +1969,7 @@ func (c *AutoScaling) DescribeAutoScalingNotificationTypesRequest(input *Describ // DescribeAutoScalingNotificationTypes API operation for Auto Scaling. // -// Describes the notification types that are supported by Auto Scaling. +// Describes the notification types that are supported by Amazon EC2 Auto Scaling. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1836,8 +2009,8 @@ const opDescribeLaunchConfigurations = "DescribeLaunchConfigurations" // DescribeLaunchConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeLaunchConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1975,8 +2148,8 @@ const opDescribeLifecycleHookTypes = "DescribeLifecycleHookTypes" // DescribeLifecycleHookTypesRequest generates a "aws/request.Request" representing the // client's request for the DescribeLifecycleHookTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2017,6 +2190,12 @@ func (c *AutoScaling) DescribeLifecycleHookTypesRequest(input *DescribeLifecycle // // Describes the available types of lifecycle hooks. // +// The following hook types are supported: +// +// * autoscaling:EC2_INSTANCE_LAUNCHING +// +// * autoscaling:EC2_INSTANCE_TERMINATING +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -2055,8 +2234,8 @@ const opDescribeLifecycleHooks = "DescribeLifecycleHooks" // DescribeLifecycleHooksRequest generates a "aws/request.Request" representing the // client's request for the DescribeLifecycleHooks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2135,8 +2314,8 @@ const opDescribeLoadBalancerTargetGroups = "DescribeLoadBalancerTargetGroups" // DescribeLoadBalancerTargetGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBalancerTargetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2215,8 +2394,8 @@ const opDescribeLoadBalancers = "DescribeLoadBalancers" // DescribeLoadBalancersRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBalancers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2257,8 +2436,8 @@ func (c *AutoScaling) DescribeLoadBalancersRequest(input *DescribeLoadBalancersI // // Describes the load balancers for the specified Auto Scaling group. // -// Note that this operation describes only Classic Load Balancers. If you have -// Application Load Balancers, use DescribeLoadBalancerTargetGroups instead. +// This operation describes only Classic Load Balancers. If you have Application +// Load Balancers, use DescribeLoadBalancerTargetGroups instead. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2298,8 +2477,8 @@ const opDescribeMetricCollectionTypes = "DescribeMetricCollectionTypes" // DescribeMetricCollectionTypesRequest generates a "aws/request.Request" representing the // client's request for the DescribeMetricCollectionTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2338,10 +2517,10 @@ func (c *AutoScaling) DescribeMetricCollectionTypesRequest(input *DescribeMetric // DescribeMetricCollectionTypes API operation for Auto Scaling. // -// Describes the available CloudWatch metrics for Auto Scaling. +// Describes the available CloudWatch metrics for Amazon EC2 Auto Scaling. // -// Note that the GroupStandbyInstances metric is not returned by default. You -// must explicitly request this metric when calling EnableMetricsCollection. +// The GroupStandbyInstances metric is not returned by default. You must explicitly +// request this metric when calling EnableMetricsCollection. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2381,8 +2560,8 @@ const opDescribeNotificationConfigurations = "DescribeNotificationConfigurations // DescribeNotificationConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeNotificationConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2521,8 +2700,8 @@ const opDescribePolicies = "DescribePolicies" // DescribePoliciesRequest generates a "aws/request.Request" representing the // client's request for the DescribePolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2663,8 +2842,8 @@ const opDescribeScalingActivities = "DescribeScalingActivities" // DescribeScalingActivitiesRequest generates a "aws/request.Request" representing the // client's request for the DescribeScalingActivities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2802,8 +2981,8 @@ const opDescribeScalingProcessTypes = "DescribeScalingProcessTypes" // DescribeScalingProcessTypesRequest generates a "aws/request.Request" representing the // client's request for the DescribeScalingProcessTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
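Since `GroupStandbyInstances` is not collected by default (per the DescribeMetricCollectionTypes note above), it has to be requested explicitly when enabling metrics. A sketch under that assumption, with the input field names taken from the Auto Scaling API rather than from this hunk:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))

	_, err := svc.EnableMetricsCollection(&autoscaling.EnableMetricsCollectionInput{
		AutoScalingGroupName: aws.String("example-asg"),                           // assumed field
		Granularity:          aws.String("1Minute"),                               // assumed field
		Metrics:              aws.StringSlice([]string{"GroupStandbyInstances"}), // assumed field
	})
	if err != nil {
		fmt.Println("enable metrics collection failed:", err)
	}
}
```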
// the "output" return value is not valid until after Send returns without error. @@ -2882,8 +3061,8 @@ const opDescribeScheduledActions = "DescribeScheduledActions" // DescribeScheduledActionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeScheduledActions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3022,8 +3201,8 @@ const opDescribeTags = "DescribeTags" // DescribeTagsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3170,8 +3349,8 @@ const opDescribeTerminationPolicyTypes = "DescribeTerminationPolicyTypes" // DescribeTerminationPolicyTypesRequest generates a "aws/request.Request" representing the // client's request for the DescribeTerminationPolicyTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3210,7 +3389,11 @@ func (c *AutoScaling) DescribeTerminationPolicyTypesRequest(input *DescribeTermi // DescribeTerminationPolicyTypes API operation for Auto Scaling. // -// Describes the termination policies supported by Auto Scaling. +// Describes the termination policies supported by Amazon EC2 Auto Scaling. +// +// For more information, see Controlling Which Auto Scaling Instances Terminate +// During Scale In (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3250,8 +3433,8 @@ const opDetachInstances = "DetachInstances" // DetachInstancesRequest generates a "aws/request.Request" representing the // client's request for the DetachInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3295,8 +3478,8 @@ func (c *AutoScaling) DetachInstancesRequest(input *DetachInstancesInput) (req * // After the instances are detached, you can manage them independent of the // Auto Scaling group. 
// -// If you do not specify the option to decrement the desired capacity, Auto -// Scaling launches instances to replace the ones that are detached. +// If you do not specify the option to decrement the desired capacity, Amazon +// EC2 Auto Scaling launches instances to replace the ones that are detached. // // If there is a Classic Load Balancer attached to the Auto Scaling group, the // instances are deregistered from the load balancer. If there are target groups @@ -3304,8 +3487,8 @@ func (c *AutoScaling) DetachInstancesRequest(input *DetachInstancesInput) (req * // target groups. // // For more information, see Detach EC2 Instances from Your Auto Scaling Group -// (http://docs.aws.amazon.com/autoscaling/latest/userguide/detach-instance-asg.html) -// in the Auto Scaling User Guide. +// (http://docs.aws.amazon.com/autoscaling/ec2/userguide/detach-instance-asg.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3345,8 +3528,8 @@ const opDetachLoadBalancerTargetGroups = "DetachLoadBalancerTargetGroups" // DetachLoadBalancerTargetGroupsRequest generates a "aws/request.Request" representing the // client's request for the DetachLoadBalancerTargetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3425,8 +3608,8 @@ const opDetachLoadBalancers = "DetachLoadBalancers" // DetachLoadBalancersRequest generates a "aws/request.Request" representing the // client's request for the DetachLoadBalancers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3468,13 +3651,13 @@ func (c *AutoScaling) DetachLoadBalancersRequest(input *DetachLoadBalancersInput // Detaches one or more Classic Load Balancers from the specified Auto Scaling // group. // -// Note that this operation detaches only Classic Load Balancers. If you have -// Application Load Balancers, use DetachLoadBalancerTargetGroups instead. +// This operation detaches only Classic Load Balancers. If you have Application +// Load Balancers, use DetachLoadBalancerTargetGroups instead. // // When you detach a load balancer, it enters the Removing state while deregistering // the instances in the group. When all instances are deregistered, then you -// can no longer describe the load balancer using DescribeLoadBalancers. Note -// that the instances remain running. +// can no longer describe the load balancer using DescribeLoadBalancers. The +// instances remain running. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3514,8 +3697,8 @@ const opDisableMetricsCollection = "DisableMetricsCollection" // DisableMetricsCollectionRequest generates a "aws/request.Request" representing the // client's request for the DisableMetricsCollection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3596,8 +3779,8 @@ const opEnableMetricsCollection = "EnableMetricsCollection" // EnableMetricsCollectionRequest generates a "aws/request.Request" representing the // client's request for the EnableMetricsCollection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3639,8 +3822,8 @@ func (c *AutoScaling) EnableMetricsCollectionRequest(input *EnableMetricsCollect // EnableMetricsCollection API operation for Auto Scaling. // // Enables group metrics for the specified Auto Scaling group. For more information, -// see Monitoring Your Auto Scaling Groups and Instances (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-monitoring.html) -// in the Auto Scaling User Guide. +// see Monitoring Your Auto Scaling Groups and Instances (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-monitoring.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3680,8 +3863,8 @@ const opEnterStandby = "EnterStandby" // EnterStandbyRequest generates a "aws/request.Request" representing the // client's request for the EnterStandby operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3723,8 +3906,8 @@ func (c *AutoScaling) EnterStandbyRequest(input *EnterStandbyInput) (req *reques // Moves the specified instances into the standby state. // // For more information, see Temporarily Removing Instances from Your Auto Scaling -// Group (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-enter-exit-standby.html) -// in the Auto Scaling User Guide. +// Group (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3764,8 +3947,8 @@ const opExecutePolicy = "ExecutePolicy" // ExecutePolicyRequest generates a "aws/request.Request" representing the // client's request for the ExecutePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3850,8 +4033,8 @@ const opExitStandby = "ExitStandby" // ExitStandbyRequest generates a "aws/request.Request" representing the // client's request for the ExitStandby operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3893,8 +4076,8 @@ func (c *AutoScaling) ExitStandbyRequest(input *ExitStandbyInput) (req *request. // Moves the specified instances out of the standby state. // // For more information, see Temporarily Removing Instances from Your Auto Scaling -// Group (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-enter-exit-standby.html) -// in the Auto Scaling User Guide. +// Group (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3934,8 +4117,8 @@ const opPutLifecycleHook = "PutLifecycleHook" // PutLifecycleHookRequest generates a "aws/request.Request" representing the // client's request for the PutLifecycleHook operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3974,21 +4157,22 @@ func (c *AutoScaling) PutLifecycleHookRequest(input *PutLifecycleHookInput) (req // PutLifecycleHook API operation for Auto Scaling. // -// Creates or updates a lifecycle hook for the specified Auto Scaling Group. +// Creates or updates a lifecycle hook for the specified Auto Scaling group. // -// A lifecycle hook tells Auto Scaling that you want to perform an action on -// an instance that is not actively in service; for example, either when the -// instance launches or before the instance terminates. +// A lifecycle hook tells Amazon EC2 Auto Scaling to perform an action on an +// instance that is not actively in service; for example, either when the instance +// launches or before the instance terminates. // // This step is a part of the procedure for adding a lifecycle hook to an Auto // Scaling group: // // (Optional) Create a Lambda function and a rule that allows CloudWatch Events -// to invoke your Lambda function when Auto Scaling launches or terminates instances. 
+// to invoke your Lambda function when Amazon EC2 Auto Scaling launches or terminates +// instances. // // (Optional) Create a notification target and an IAM role. The target can be -// either an Amazon SQS queue or an Amazon SNS topic. The role allows Auto Scaling -// to publish lifecycle notifications to the target. +// either an Amazon SQS queue or an Amazon SNS topic. The role allows Amazon +// EC2 Auto Scaling to publish lifecycle notifications to the target. // // Create the lifecycle hook. Specify whether the hook is used when the instances // launch or terminate. @@ -3998,8 +4182,8 @@ func (c *AutoScaling) PutLifecycleHookRequest(input *PutLifecycleHookInput) (req // // If you finish before the timeout period ends, complete the lifecycle action. // -// For more information, see Auto Scaling Lifecycle Hooks (http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) -// in the Auto Scaling User Guide. +// For more information, see Auto Scaling Lifecycle Hooks (http://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) +// in the Amazon EC2 Auto Scaling User Guide. // // If you exceed your maximum limit of lifecycle hooks, which by default is // 50 per Auto Scaling group, the call fails. For information about updating @@ -4049,8 +4233,8 @@ const opPutNotificationConfiguration = "PutNotificationConfiguration" // PutNotificationConfigurationRequest generates a "aws/request.Request" representing the // client's request for the PutNotificationConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4097,8 +4281,8 @@ func (c *AutoScaling) PutNotificationConfigurationRequest(input *PutNotification // // This configuration overwrites any existing configuration. // -// For more information see Getting SNS Notifications When Your Auto Scaling -// Group Scales (http://docs.aws.amazon.com/autoscaling/latest/userguide/ASGettingNotifications.html) +// For more information, see Getting SNS Notifications When Your Auto Scaling +// Group Scales (http://docs.aws.amazon.com/autoscaling/ec2/userguide/ASGettingNotifications.html) // in the Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -4147,8 +4331,8 @@ const opPutScalingPolicy = "PutScalingPolicy" // PutScalingPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutScalingPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4188,9 +4372,9 @@ func (c *AutoScaling) PutScalingPolicyRequest(input *PutScalingPolicyInput) (req // PutScalingPolicy API operation for Auto Scaling. // // Creates or updates a policy for an Auto Scaling group. To update an existing -// policy, use the existing policy name and set the parameters you want to change. 
-// Any existing parameter not changed in an update to an existing policy is -// not changed in this update request. +// policy, use the existing policy name and set the parameters to change. Any +// existing parameter not changed in an update to an existing policy is not +// changed in this update request. // // If you exceed your maximum limit of step adjustments, which by default is // 20 per region, the call fails. For information about updating this limit, @@ -4243,8 +4427,8 @@ const opPutScheduledUpdateGroupAction = "PutScheduledUpdateGroupAction" // PutScheduledUpdateGroupActionRequest generates a "aws/request.Request" representing the // client's request for the PutScheduledUpdateGroupAction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4286,11 +4470,11 @@ func (c *AutoScaling) PutScheduledUpdateGroupActionRequest(input *PutScheduledUp // PutScheduledUpdateGroupAction API operation for Auto Scaling. // // Creates or updates a scheduled scaling action for an Auto Scaling group. -// When updating a scheduled scaling action, if you leave a parameter unspecified, +// If you leave a parameter unspecified when updating a scheduled scaling action, // the corresponding value remains unchanged. // -// For more information, see Scheduled Scaling (http://docs.aws.amazon.com/autoscaling/latest/userguide/schedule_time.html) -// in the Auto Scaling User Guide. +// For more information, see Scheduled Scaling (http://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4339,8 +4523,8 @@ const opRecordLifecycleActionHeartbeat = "RecordLifecycleActionHeartbeat" // RecordLifecycleActionHeartbeatRequest generates a "aws/request.Request" representing the // client's request for the RecordLifecycleActionHeartbeat operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4387,11 +4571,12 @@ func (c *AutoScaling) RecordLifecycleActionHeartbeatRequest(input *RecordLifecyc // Scaling group: // // (Optional) Create a Lambda function and a rule that allows CloudWatch Events -// to invoke your Lambda function when Auto Scaling launches or terminates instances. +// to invoke your Lambda function when Amazon EC2 Auto Scaling launches or terminates +// instances. // // (Optional) Create a notification target and an IAM role. The target can be -// either an Amazon SQS queue or an Amazon SNS topic. The role allows Auto Scaling -// to publish lifecycle notifications to the target. +// either an Amazon SQS queue or an Amazon SNS topic. The role allows Amazon +// EC2 Auto Scaling to publish lifecycle notifications to the target. // // Create the lifecycle hook. 
Specify whether the hook is used when the instances // launch or terminate. @@ -4401,8 +4586,8 @@ func (c *AutoScaling) RecordLifecycleActionHeartbeatRequest(input *RecordLifecyc // // If you finish before the timeout period ends, complete the lifecycle action. // -// For more information, see Auto Scaling Lifecycle (http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroupLifecycle.html) -// in the Auto Scaling User Guide. +// For more information, see Auto Scaling Lifecycle (http://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4442,8 +4627,8 @@ const opResumeProcesses = "ResumeProcesses" // ResumeProcessesRequest generates a "aws/request.Request" representing the // client's request for the ResumeProcesses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4484,12 +4669,11 @@ func (c *AutoScaling) ResumeProcessesRequest(input *ScalingProcessQuery) (req *r // ResumeProcesses API operation for Auto Scaling. // -// Resumes the specified suspended Auto Scaling processes, or all suspended +// Resumes the specified suspended automatic scaling processes, or all suspended // process, for the specified Auto Scaling group. // -// For more information, see Suspending and Resuming Auto Scaling Processes -// (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html) -// in the Auto Scaling User Guide. +// For more information, see Suspending and Resuming Scaling Processes (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4532,8 +4716,8 @@ const opSetDesiredCapacity = "SetDesiredCapacity" // SetDesiredCapacityRequest generates a "aws/request.Request" representing the // client's request for the SetDesiredCapacity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4576,8 +4760,9 @@ func (c *AutoScaling) SetDesiredCapacityRequest(input *SetDesiredCapacityInput) // // Sets the size of the specified Auto Scaling group. // -// For more information about desired capacity, see What Is Auto Scaling? (http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html) -// in the Auto Scaling User Guide. +// For more information about desired capacity, see What Is Amazon EC2 Auto +// Scaling? (http://docs.aws.amazon.com/autoscaling/ec2/userguide/WhatIsAutoScaling.html) +// in the Amazon EC2 Auto Scaling User Guide. 
// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4621,8 +4806,8 @@ const opSetInstanceHealth = "SetInstanceHealth" // SetInstanceHealthRequest generates a "aws/request.Request" representing the // client's request for the SetInstanceHealth operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4665,8 +4850,8 @@ func (c *AutoScaling) SetInstanceHealthRequest(input *SetInstanceHealthInput) (r // // Sets the health status of the specified instance. // -// For more information, see Health Checks (http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html) -// in the Auto Scaling User Guide. +// For more information, see Health Checks (http://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4706,8 +4891,8 @@ const opSetInstanceProtection = "SetInstanceProtection" // SetInstanceProtectionRequest generates a "aws/request.Request" representing the // client's request for the SetInstanceProtection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4748,8 +4933,8 @@ func (c *AutoScaling) SetInstanceProtectionRequest(input *SetInstanceProtectionI // // Updates the instance protection settings of the specified instances. // -// For more information, see Instance Protection (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html#instance-protection) -// in the Auto Scaling User Guide. +// For more information, see Instance Protection (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#instance-protection) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4794,8 +4979,8 @@ const opSuspendProcesses = "SuspendProcesses" // SuspendProcessesRequest generates a "aws/request.Request" representing the // client's request for the SuspendProcesses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4836,17 +5021,16 @@ func (c *AutoScaling) SuspendProcessesRequest(input *ScalingProcessQuery) (req * // SuspendProcesses API operation for Auto Scaling. 
// -// Suspends the specified Auto Scaling processes, or all processes, for the -// specified Auto Scaling group. +// Suspends the specified automatic scaling processes, or all processes, for +// the specified Auto Scaling group. // -// Note that if you suspend either the Launch or Terminate process types, it -// can prevent other process types from functioning properly. +// If you suspend either the Launch or Terminate process types, it can prevent +// other process types from functioning properly. // // To resume processes that have been suspended, use ResumeProcesses. // -// For more information, see Suspending and Resuming Auto Scaling Processes -// (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html) -// in the Auto Scaling User Guide. +// For more information, see Suspending and Resuming Scaling Processes (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html) +// in the Amazon EC2 Auto Scaling User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4889,8 +5073,8 @@ const opTerminateInstanceInAutoScalingGroup = "TerminateInstanceInAutoScalingGro // TerminateInstanceInAutoScalingGroupRequest generates a "aws/request.Request" representing the // client's request for the TerminateInstanceInAutoScalingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4977,8 +5161,8 @@ const opUpdateAutoScalingGroup = "UpdateAutoScalingGroup" // UpdateAutoScalingGroupRequest generates a "aws/request.Request" representing the // client's request for the UpdateAutoScalingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5026,7 +5210,7 @@ func (c *AutoScaling) UpdateAutoScalingGroupRequest(input *UpdateAutoScalingGrou // // To update an Auto Scaling group with a launch configuration with InstanceMonitoring // set to false, you must first disable the collection of group metrics. Otherwise, -// you will get an error. If you have previously enabled the collection of group +// you get an error. If you have previously enabled the collection of group // metrics, you can disable it using DisableMetricsCollection. // // Note the following: @@ -5112,7 +5296,7 @@ type Activity struct { Details *string `type:"string"` // The end time of the activity. - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `type:"timestamp"` // A value between 0 and 100 that indicates the progress of the activity. Progress *int64 `type:"integer"` @@ -5120,7 +5304,7 @@ type Activity struct { // The start time of the activity. 
// // StartTime is a required field - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + StartTime *time.Time `type:"timestamp" required:"true"` // The current status of the activity. // @@ -5203,8 +5387,8 @@ func (s *Activity) SetStatusMessage(v string) *Activity { // Describes a policy adjustment type. // -// For more information, see Dynamic Scaling (http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html) -// in the Auto Scaling User Guide. +// For more information, see Dynamic Scaling (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html) +// in the Amazon EC2 Auto Scaling User Guide. type AdjustmentType struct { _ struct{} `type:"structure"` @@ -5465,55 +5649,42 @@ func (s AttachLoadBalancersOutput) GoString() string { return s.String() } -// Describes a block device mapping. -type BlockDeviceMapping struct { +type BatchDeleteScheduledActionInput struct { _ struct{} `type:"structure"` - // The device name exposed to the EC2 instance (for example, /dev/sdh or xvdh). + // The name of the Auto Scaling group. // - // DeviceName is a required field - DeviceName *string `min:"1" type:"string" required:"true"` - - // The information about the Amazon EBS volume. - Ebs *Ebs `type:"structure"` + // AutoScalingGroupName is a required field + AutoScalingGroupName *string `min:"1" type:"string" required:"true"` - // Suppresses a device mapping. + // The names of the scheduled actions to delete. The maximum number allowed + // is 50. // - // If this parameter is true for the root device, the instance might fail the - // EC2 health check. Auto Scaling launches a replacement instance if the instance - // fails the health check. - NoDevice *bool `type:"boolean"` - - // The name of the virtual device (for example, ephemeral0). - VirtualName *string `min:"1" type:"string"` + // ScheduledActionNames is a required field + ScheduledActionNames []*string `type:"list" required:"true"` } // String returns the string representation -func (s BlockDeviceMapping) String() string { +func (s BatchDeleteScheduledActionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s BlockDeviceMapping) GoString() string { +func (s BatchDeleteScheduledActionInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *BlockDeviceMapping) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "BlockDeviceMapping"} - if s.DeviceName == nil { - invalidParams.Add(request.NewErrParamRequired("DeviceName")) - } - if s.DeviceName != nil && len(*s.DeviceName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DeviceName", 1)) +func (s *BatchDeleteScheduledActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchDeleteScheduledActionInput"} + if s.AutoScalingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("AutoScalingGroupName")) } - if s.VirtualName != nil && len(*s.VirtualName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("VirtualName", 1)) + if s.AutoScalingGroupName != nil && len(*s.AutoScalingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AutoScalingGroupName", 1)) } - if s.Ebs != nil { - if err := s.Ebs.Validate(); err != nil { - invalidParams.AddNested("Ebs", err.(request.ErrInvalidParams)) - } + if s.ScheduledActionNames == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduledActionNames")) } if invalidParams.Len() > 0 { @@ -5522,31 +5693,43 @@ func (s *BlockDeviceMapping) Validate() error { return nil } -// SetDeviceName sets the DeviceName field's value. -func (s *BlockDeviceMapping) SetDeviceName(v string) *BlockDeviceMapping { - s.DeviceName = &v +// SetAutoScalingGroupName sets the AutoScalingGroupName field's value. +func (s *BatchDeleteScheduledActionInput) SetAutoScalingGroupName(v string) *BatchDeleteScheduledActionInput { + s.AutoScalingGroupName = &v return s } -// SetEbs sets the Ebs field's value. -func (s *BlockDeviceMapping) SetEbs(v *Ebs) *BlockDeviceMapping { - s.Ebs = v +// SetScheduledActionNames sets the ScheduledActionNames field's value. +func (s *BatchDeleteScheduledActionInput) SetScheduledActionNames(v []*string) *BatchDeleteScheduledActionInput { + s.ScheduledActionNames = v return s } -// SetNoDevice sets the NoDevice field's value. -func (s *BlockDeviceMapping) SetNoDevice(v bool) *BlockDeviceMapping { - s.NoDevice = &v - return s +type BatchDeleteScheduledActionOutput struct { + _ struct{} `type:"structure"` + + // The names of the scheduled actions that could not be deleted, including an + // error message. + FailedScheduledActions []*FailedScheduledUpdateGroupActionRequest `type:"list"` } -// SetVirtualName sets the VirtualName field's value. -func (s *BlockDeviceMapping) SetVirtualName(v string) *BlockDeviceMapping { - s.VirtualName = &v +// String returns the string representation +func (s BatchDeleteScheduledActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchDeleteScheduledActionOutput) GoString() string { + return s.String() +} + +// SetFailedScheduledActions sets the FailedScheduledActions field's value. +func (s *BatchDeleteScheduledActionOutput) SetFailedScheduledActions(v []*FailedScheduledUpdateGroupActionRequest) *BatchDeleteScheduledActionOutput { + s.FailedScheduledActions = v return s } -type CompleteLifecycleActionInput struct { +type BatchPutScheduledUpdateGroupActionInput struct { _ struct{} `type:"structure"` // The name of the Auto Scaling group. @@ -5554,59 +5737,43 @@ type CompleteLifecycleActionInput struct { // AutoScalingGroupName is a required field AutoScalingGroupName *string `min:"1" type:"string" required:"true"` - // The ID of the instance. - InstanceId *string `min:"1" type:"string"` - - // The action for the group to take. 
This parameter can be either CONTINUE or - // ABANDON. - // - // LifecycleActionResult is a required field - LifecycleActionResult *string `type:"string" required:"true"` - - // A universally unique identifier (UUID) that identifies a specific lifecycle - // action associated with an instance. Auto Scaling sends this token to the - // notification target you specified when you created the lifecycle hook. - LifecycleActionToken *string `min:"36" type:"string"` - - // The name of the lifecycle hook. + // One or more scheduled actions. The maximum number allowed is 50. // - // LifecycleHookName is a required field - LifecycleHookName *string `min:"1" type:"string" required:"true"` + // ScheduledUpdateGroupActions is a required field + ScheduledUpdateGroupActions []*ScheduledUpdateGroupActionRequest `type:"list" required:"true"` } // String returns the string representation -func (s CompleteLifecycleActionInput) String() string { +func (s BatchPutScheduledUpdateGroupActionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CompleteLifecycleActionInput) GoString() string { +func (s BatchPutScheduledUpdateGroupActionInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CompleteLifecycleActionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CompleteLifecycleActionInput"} +func (s *BatchPutScheduledUpdateGroupActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchPutScheduledUpdateGroupActionInput"} if s.AutoScalingGroupName == nil { invalidParams.Add(request.NewErrParamRequired("AutoScalingGroupName")) } if s.AutoScalingGroupName != nil && len(*s.AutoScalingGroupName) < 1 { invalidParams.Add(request.NewErrParamMinLen("AutoScalingGroupName", 1)) } - if s.InstanceId != nil && len(*s.InstanceId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("InstanceId", 1)) - } - if s.LifecycleActionResult == nil { - invalidParams.Add(request.NewErrParamRequired("LifecycleActionResult")) - } - if s.LifecycleActionToken != nil && len(*s.LifecycleActionToken) < 36 { - invalidParams.Add(request.NewErrParamMinLen("LifecycleActionToken", 36)) - } - if s.LifecycleHookName == nil { - invalidParams.Add(request.NewErrParamRequired("LifecycleHookName")) + if s.ScheduledUpdateGroupActions == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduledUpdateGroupActions")) } - if s.LifecycleHookName != nil && len(*s.LifecycleHookName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("LifecycleHookName", 1)) + if s.ScheduledUpdateGroupActions != nil { + for i, v := range s.ScheduledUpdateGroupActions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ScheduledUpdateGroupActions", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -5616,14 +5783,200 @@ func (s *CompleteLifecycleActionInput) Validate() error { } // SetAutoScalingGroupName sets the AutoScalingGroupName field's value. -func (s *CompleteLifecycleActionInput) SetAutoScalingGroupName(v string) *CompleteLifecycleActionInput { +func (s *BatchPutScheduledUpdateGroupActionInput) SetAutoScalingGroupName(v string) *BatchPutScheduledUpdateGroupActionInput { s.AutoScalingGroupName = &v return s } -// SetInstanceId sets the InstanceId field's value. 
-func (s *CompleteLifecycleActionInput) SetInstanceId(v string) *CompleteLifecycleActionInput { - s.InstanceId = &v +// SetScheduledUpdateGroupActions sets the ScheduledUpdateGroupActions field's value. +func (s *BatchPutScheduledUpdateGroupActionInput) SetScheduledUpdateGroupActions(v []*ScheduledUpdateGroupActionRequest) *BatchPutScheduledUpdateGroupActionInput { + s.ScheduledUpdateGroupActions = v + return s +} + +type BatchPutScheduledUpdateGroupActionOutput struct { + _ struct{} `type:"structure"` + + // The names of the scheduled actions that could not be created or updated, + // including an error message. + FailedScheduledUpdateGroupActions []*FailedScheduledUpdateGroupActionRequest `type:"list"` +} + +// String returns the string representation +func (s BatchPutScheduledUpdateGroupActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchPutScheduledUpdateGroupActionOutput) GoString() string { + return s.String() +} + +// SetFailedScheduledUpdateGroupActions sets the FailedScheduledUpdateGroupActions field's value. +func (s *BatchPutScheduledUpdateGroupActionOutput) SetFailedScheduledUpdateGroupActions(v []*FailedScheduledUpdateGroupActionRequest) *BatchPutScheduledUpdateGroupActionOutput { + s.FailedScheduledUpdateGroupActions = v + return s +} + +// Describes a block device mapping. +type BlockDeviceMapping struct { + _ struct{} `type:"structure"` + + // The device name exposed to the EC2 instance (for example, /dev/sdh or xvdh). + // + // DeviceName is a required field + DeviceName *string `min:"1" type:"string" required:"true"` + + // The information about the Amazon EBS volume. + Ebs *Ebs `type:"structure"` + + // Suppresses a device mapping. + // + // If this parameter is true for the root device, the instance might fail the + // EC2 health check. In that case, Amazon EC2 Auto Scaling launches a replacement + // instance. + NoDevice *bool `type:"boolean"` + + // The name of the virtual device (for example, ephemeral0). + VirtualName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s BlockDeviceMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BlockDeviceMapping) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BlockDeviceMapping) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BlockDeviceMapping"} + if s.DeviceName == nil { + invalidParams.Add(request.NewErrParamRequired("DeviceName")) + } + if s.DeviceName != nil && len(*s.DeviceName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeviceName", 1)) + } + if s.VirtualName != nil && len(*s.VirtualName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("VirtualName", 1)) + } + if s.Ebs != nil { + if err := s.Ebs.Validate(); err != nil { + invalidParams.AddNested("Ebs", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeviceName sets the DeviceName field's value. +func (s *BlockDeviceMapping) SetDeviceName(v string) *BlockDeviceMapping { + s.DeviceName = &v + return s +} + +// SetEbs sets the Ebs field's value. +func (s *BlockDeviceMapping) SetEbs(v *Ebs) *BlockDeviceMapping { + s.Ebs = v + return s +} + +// SetNoDevice sets the NoDevice field's value. 
+func (s *BlockDeviceMapping) SetNoDevice(v bool) *BlockDeviceMapping { + s.NoDevice = &v + return s +} + +// SetVirtualName sets the VirtualName field's value. +func (s *BlockDeviceMapping) SetVirtualName(v string) *BlockDeviceMapping { + s.VirtualName = &v + return s +} + +type CompleteLifecycleActionInput struct { + _ struct{} `type:"structure"` + + // The name of the Auto Scaling group. + // + // AutoScalingGroupName is a required field + AutoScalingGroupName *string `min:"1" type:"string" required:"true"` + + // The ID of the instance. + InstanceId *string `min:"1" type:"string"` + + // The action for the group to take. This parameter can be either CONTINUE or + // ABANDON. + // + // LifecycleActionResult is a required field + LifecycleActionResult *string `type:"string" required:"true"` + + // A universally unique identifier (UUID) that identifies a specific lifecycle + // action associated with an instance. Amazon EC2 Auto Scaling sends this token + // to the notification target you specified when you created the lifecycle hook. + LifecycleActionToken *string `min:"36" type:"string"` + + // The name of the lifecycle hook. + // + // LifecycleHookName is a required field + LifecycleHookName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CompleteLifecycleActionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompleteLifecycleActionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CompleteLifecycleActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CompleteLifecycleActionInput"} + if s.AutoScalingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("AutoScalingGroupName")) + } + if s.AutoScalingGroupName != nil && len(*s.AutoScalingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AutoScalingGroupName", 1)) + } + if s.InstanceId != nil && len(*s.InstanceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceId", 1)) + } + if s.LifecycleActionResult == nil { + invalidParams.Add(request.NewErrParamRequired("LifecycleActionResult")) + } + if s.LifecycleActionToken != nil && len(*s.LifecycleActionToken) < 36 { + invalidParams.Add(request.NewErrParamMinLen("LifecycleActionToken", 36)) + } + if s.LifecycleHookName == nil { + invalidParams.Add(request.NewErrParamRequired("LifecycleHookName")) + } + if s.LifecycleHookName != nil && len(*s.LifecycleHookName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LifecycleHookName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAutoScalingGroupName sets the AutoScalingGroupName field's value. +func (s *CompleteLifecycleActionInput) SetAutoScalingGroupName(v string) *CompleteLifecycleActionInput { + s.AutoScalingGroupName = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *CompleteLifecycleActionInput) SetInstanceId(v string) *CompleteLifecycleActionInput { + s.InstanceId = &v return s } @@ -5675,8 +6028,8 @@ type CreateAutoScalingGroupInput struct { // The amount of time, in seconds, after a scaling activity completes before // another scaling activity can start. The default is 300. // - // For more information, see Auto Scaling Cooldowns (http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html) - // in the Auto Scaling User Guide. 
+ // For more information, see Scaling Cooldowns (http://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html) + // in the Amazon EC2 Auto Scaling User Guide. DefaultCooldown *int64 `type:"integer"` // The number of EC2 instances that should be running in the group. This number @@ -5685,44 +6038,44 @@ type CreateAutoScalingGroupInput struct { // capacity, the default is the minimum size of the group. DesiredCapacity *int64 `type:"integer"` - // The amount of time, in seconds, that Auto Scaling waits before checking the - // health status of an EC2 instance that has come into service. During this - // time, any health check failures for the instance are ignored. The default - // is 0. + // The amount of time, in seconds, that Amazon EC2 Auto Scaling waits before + // checking the health status of an EC2 instance that has come into service. + // During this time, any health check failures for the instance are ignored. + // The default is 0. // // This parameter is required if you are adding an ELB health check. // - // For more information, see Health Checks (http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html) - // in the Auto Scaling User Guide. + // For more information, see Health Checks (http://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html) + // in the Amazon EC2 Auto Scaling User Guide. HealthCheckGracePeriod *int64 `type:"integer"` // The service to use for the health checks. The valid values are EC2 and ELB. // // By default, health checks use Amazon EC2 instance status checks to determine - // the health of an instance. For more information, see Health Checks (http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html) - // in the Auto Scaling User Guide. + // the health of an instance. For more information, see Health Checks (http://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html) + // in the Amazon EC2 Auto Scaling User Guide. HealthCheckType *string `min:"1" type:"string"` // The ID of the instance used to create a launch configuration for the group. - // You must specify one of the following: an EC2 instance, a launch configuration, - // or a launch template. + // This parameter, a launch configuration, a launch template, or a mixed instances + // policy must be specified. // - // When you specify an ID of an instance, Auto Scaling creates a new launch - // configuration and associates it with the group. This launch configuration - // derives its attributes from the specified instance, with the exception of - // the block device mapping. + // When you specify an ID of an instance, Amazon EC2 Auto Scaling creates a + // new launch configuration and associates it with the group. This launch configuration + // derives its attributes from the specified instance, except for the block + // device mapping. // // For more information, see Create an Auto Scaling Group Using an EC2 Instance - // (http://docs.aws.amazon.com/autoscaling/latest/userguide/create-asg-from-instance.html) - // in the Auto Scaling User Guide. + // (http://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-from-instance.html) + // in the Amazon EC2 Auto Scaling User Guide. InstanceId *string `min:"1" type:"string"` - // The name of the launch configuration. You must specify one of the following: - // a launch configuration, a launch template, or an EC2 instance. + // The name of the launch configuration. This parameter, a launch template, + // a mixed instances policy, or an EC2 instance must be specified. 
LaunchConfigurationName *string `min:"1" type:"string"` - // The launch template to use to launch instances. You must specify one of the - // following: a launch template, a launch configuration, or an EC2 instance. + // The launch template to use to launch instances. This parameter, a launch + // configuration, a mixed instances policy, or an EC2 instance must be specified. LaunchTemplate *LaunchTemplateSpecification `type:"structure"` // One or more lifecycle hooks. @@ -5732,8 +6085,8 @@ type CreateAutoScalingGroupInput struct { // use TargetGroupARNs instead. // // For more information, see Using a Load Balancer With an Auto Scaling Group - // (http://docs.aws.amazon.com/autoscaling/latest/userguide/create-asg-from-instance.html) - // in the Auto Scaling User Guide. + // (http://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-from-instance.html) + // in the Amazon EC2 Auto Scaling User Guide. LoadBalancerNames []*string `type:"list"` // The maximum size of the group. @@ -5746,25 +6099,29 @@ type CreateAutoScalingGroupInput struct { // MinSize is a required field MinSize *int64 `type:"integer" required:"true"` + // The mixed instances policy to use to launch instances. This parameter, a + // launch template, a launch configuration, or an EC2 instance must be specified. + MixedInstancesPolicy *MixedInstancesPolicy `type:"structure"` + // Indicates whether newly launched instances are protected from termination // by Auto Scaling when scaling in. NewInstancesProtectedFromScaleIn *bool `type:"boolean"` - // The name of the placement group into which you'll launch your instances, - // if any. For more information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) + // The name of the placement group into which to launch your instances, if any. + // For more information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) // in the Amazon Elastic Compute Cloud User Guide. PlacementGroup *string `min:"1" type:"string"` // The Amazon Resource Name (ARN) of the service-linked role that the Auto Scaling - // group uses to call other AWS services on your behalf. By default, Auto Scaling - // uses a service-linked role named AWSServiceRoleForAutoScaling, which it creates - // if it does not exist. + // group uses to call other AWS services on your behalf. By default, Amazon + // EC2 Auto Scaling uses a service-linked role named AWSServiceRoleForAutoScaling, + // which it creates if it does not exist. ServiceLinkedRoleARN *string `min:"1" type:"string"` // One or more tags. // - // For more information, see Tagging Auto Scaling Groups and Instances (http://docs.aws.amazon.com/autoscaling/latest/userguide/autoscaling-tagging.html) - // in the Auto Scaling User Guide. + // For more information, see Tagging Auto Scaling Groups and Instances (http://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-tagging.html) + // in the Amazon EC2 Auto Scaling User Guide. Tags []*Tag `type:"list"` // The Amazon Resource Names (ARN) of the target groups. @@ -5774,7 +6131,7 @@ type CreateAutoScalingGroupInput struct { // These policies are executed in the order that they are listed. // // For more information, see Controlling Which Instances Auto Scaling Terminates - // During Scale In (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html) + // During Scale In (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html) // in the Auto Scaling User Guide. 
TerminationPolicies []*string `type:"list"` @@ -5784,8 +6141,8 @@ type CreateAutoScalingGroupInput struct { // If you specify subnets and Availability Zones with this call, ensure that // the subnets' Availability Zones match the Availability Zones specified. // - // For more information, see Launching Auto Scaling Instances in a VPC (http://docs.aws.amazon.com/autoscaling/latest/userguide/asg-in-vpc.html) - // in the Auto Scaling User Guide. + // For more information, see Launching Auto Scaling Instances in a VPC (http://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html) + // in the Amazon EC2 Auto Scaling User Guide. VPCZoneIdentifier *string `min:"1" type:"string"` } @@ -5850,6 +6207,11 @@ func (s *CreateAutoScalingGroupInput) Validate() error { } } } + if s.MixedInstancesPolicy != nil { + if err := s.MixedInstancesPolicy.Validate(); err != nil { + invalidParams.AddNested("MixedInstancesPolicy", err.(request.ErrInvalidParams)) + } + } if s.Tags != nil { for i, v := range s.Tags { if v == nil { @@ -5945,6 +6307,12 @@ func (s *CreateAutoScalingGroupInput) SetMinSize(v int64) *CreateAutoScalingGrou return s } +// SetMixedInstancesPolicy sets the MixedInstancesPolicy field's value. +func (s *CreateAutoScalingGroupInput) SetMixedInstancesPolicy(v *MixedInstancesPolicy) *CreateAutoScalingGroupInput { + s.MixedInstancesPolicy = v + return s +} + // SetNewInstancesProtectedFromScaleIn sets the NewInstancesProtectedFromScaleIn field's value. func (s *CreateAutoScalingGroupInput) SetNewInstancesProtectedFromScaleIn(v bool) *CreateAutoScalingGroupInput { s.NewInstancesProtectedFromScaleIn = &v @@ -6006,8 +6374,8 @@ type CreateLaunchConfigurationInput struct { // Used for groups that launch instances into a virtual private cloud (VPC). // Specifies whether to assign a public IP address to each instance. For more - // information, see Launching Auto Scaling Instances in a VPC (http://docs.aws.amazon.com/autoscaling/latest/userguide/asg-in-vpc.html) - // in the Auto Scaling User Guide. + // information, see Launching Auto Scaling Instances in a VPC (http://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html) + // in the Amazon EC2 Auto Scaling User Guide. // // If you specify this parameter, be sure to specify at least one subnet when // you create your group. @@ -6046,12 +6414,12 @@ type CreateLaunchConfigurationInput struct { // The name or the Amazon Resource Name (ARN) of the instance profile associated // with the IAM role for the instance. // - // EC2 instances launched with an IAM role will automatically have AWS security - // credentials available. You can use IAM roles with Auto Scaling to automatically + // EC2 instances launched with an IAM role automatically have AWS security credentials + // available. You can use IAM roles with Amazon EC2 Auto Scaling to automatically // enable applications running on your EC2 instances to securely access other // AWS resources. For more information, see Launch Auto Scaling Instances with - // an IAM Role (http://docs.aws.amazon.com/autoscaling/latest/userguide/us-iam-role.html) - // in the Auto Scaling User Guide. + // an IAM Role (http://docs.aws.amazon.com/autoscaling/ec2/userguide/us-iam-role.html) + // in the Amazon EC2 Auto Scaling User Guide. IamInstanceProfile *string `min:"1" type:"string"` // The ID of the Amazon Machine Image (AMI) to use to launch your EC2 instances. 
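The hunks above revise the documentation for `CreateAutoScalingGroupInput` and `CreateLaunchConfigurationInput` (a launch configuration, launch template, mixed instances policy, or EC2 instance must be specified, and the user-guide links now point at the Amazon EC2 Auto Scaling paths). For reference only, a minimal usage sketch against the updated SDK might look like the following; the client construction is standard `aws-sdk-go` v1, and the resource names, AMI ID, and subnet ID are placeholders rather than values taken from this patch.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := autoscaling.New(sess)

	// Create a launch configuration; the AMI ID and instance type are placeholders.
	_, err := svc.CreateLaunchConfiguration(&autoscaling.CreateLaunchConfigurationInput{
		LaunchConfigurationName: aws.String("example-lc"),
		ImageId:                 aws.String("ami-12345678"),
		InstanceType:            aws.String("t2.micro"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Create an Auto Scaling group that uses the launch configuration.
	// The group name and subnet ID are placeholders.
	_, err = svc.CreateAutoScalingGroup(&autoscaling.CreateAutoScalingGroupInput{
		AutoScalingGroupName:    aws.String("example-asg"),
		LaunchConfigurationName: aws.String("example-lc"),
		MinSize:                 aws.Int64(1),
		MaxSize:                 aws.Int64(3),
		VPCZoneIdentifier:       aws.String("subnet-12345678"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Per the revised field documentation, a launch template or the new `MixedInstancesPolicy` could be supplied in place of `LaunchConfigurationName`.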
@@ -6063,8 +6431,8 @@ type CreateLaunchConfigurationInput struct { ImageId *string `min:"1" type:"string"` // The ID of the instance to use to create the launch configuration. The new - // launch configuration derives attributes from the instance, with the exception - // of the block device mapping. + // launch configuration derives attributes from the instance, except for the + // block device mapping. // // If you do not specify InstanceId, you must specify both ImageId and InstanceType. // @@ -6072,8 +6440,8 @@ type CreateLaunchConfigurationInput struct { // any other instance attributes, specify them as part of the same request. // // For more information, see Create a Launch Configuration Using an EC2 Instance - // (http://docs.aws.amazon.com/autoscaling/latest/userguide/create-lc-with-instanceID.html) - // in the Auto Scaling User Guide. + // (http://docs.aws.amazon.com/autoscaling/ec2/userguide/create-lc-with-instanceID.html) + // in the Amazon EC2 Auto Scaling User Guide. InstanceId *string `min:"1" type:"string"` // Enables detailed monitoring (true) or basic monitoring (false) for the Auto @@ -6106,15 +6474,15 @@ type CreateLaunchConfigurationInput struct { // The tenancy of the instance. An instance with a tenancy of dedicated runs // on single-tenant hardware and can only be launched into a VPC. // - // You must set the value of this parameter to dedicated if want to launch Dedicated - // Instances into a shared tenancy VPC (VPC with instance placement tenancy - // attribute set to default). + // To launch Dedicated Instances into a shared tenancy VPC (a VPC with the instance + // placement tenancy attribute set to default), you must set the value of this + // parameter to dedicated. // // If you specify this parameter, be sure to specify at least one subnet when // you create your group. // - // For more information, see Launching Auto Scaling Instances in a VPC (http://docs.aws.amazon.com/autoscaling/latest/userguide/asg-in-vpc.html) - // in the Auto Scaling User Guide. + // For more information, see Launching Auto Scaling Instances in a VPC (http://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html) + // in the Amazon EC2 Auto Scaling User Guide. // // Valid values: default | dedicated PlacementTenancy *string `min:"1" type:"string"` @@ -6125,8 +6493,8 @@ type CreateLaunchConfigurationInput struct { // One or more security groups with which to associate the instances. // // If your instances are launched in EC2-Classic, you can either specify security - // group names or the security group IDs. For more information about security - // groups for EC2-Classic, see Amazon EC2 Security Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html) + // group names or the security group IDs. For more information, see Amazon EC2 + // Security Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html) // in the Amazon Elastic Compute Cloud User Guide. // // If your instances are launched into a VPC, specify security group IDs. For @@ -6137,8 +6505,8 @@ type CreateLaunchConfigurationInput struct { // The maximum hourly price to be paid for any Spot Instance launched to fulfill // the request. Spot Instances are launched when the price you specify exceeds // the current Spot market price. For more information, see Launching Spot Instances - // in Your Auto Scaling Group (http://docs.aws.amazon.com/autoscaling/latest/userguide/US-SpotInstances.html) - // in the Auto Scaling User Guide. 
+ // in Your Auto Scaling Group (http://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-launch-spot-instances.html) + // in the Amazon EC2 Auto Scaling User Guide. SpotPrice *string `min:"1" type:"string"` // The user data to make available to the launched EC2 instances. For more information, @@ -6500,7 +6868,7 @@ type DeleteAutoScalingGroupInput struct { // AutoScalingGroupName is a required field AutoScalingGroupName *string `min:"1" type:"string" required:"true"` - // Specifies that the group will be deleted along with all instances associated + // Specifies that the group is to be deleted along with all instances associated // with the group, without waiting for all instances to be terminated. This // parameter also deletes any lifecycle actions associated with the group. ForceDelete *bool `type:"boolean"` @@ -6694,7 +7062,7 @@ type DeleteNotificationConfigurationInput struct { AutoScalingGroupName *string `min:"1" type:"string" required:"true"` // The Amazon Resource Name (ARN) of the Amazon Simple Notification Service - // (SNS) topic. + // (Amazon SNS) topic. // // TopicARN is a required field TopicARN *string `min:"1" type:"string" required:"true"` @@ -7065,8 +7433,8 @@ func (s *DescribeAdjustmentTypesOutput) SetAdjustmentTypes(v []*AdjustmentType) type DescribeAutoScalingGroupsInput struct { _ struct{} `type:"structure"` - // The names of the Auto Scaling groups. If you omit this parameter, all Auto - // Scaling groups are described. + // The names of the Auto Scaling groups. You can specify up to MaxRecords names. + // If you omit this parameter, all Auto Scaling groups are described. AutoScalingGroupNames []*string `type:"list"` // The maximum number of items to return with this call. The default value is @@ -7144,9 +7512,9 @@ func (s *DescribeAutoScalingGroupsOutput) SetNextToken(v string) *DescribeAutoSc type DescribeAutoScalingInstancesInput struct { _ struct{} `type:"structure"` - // The instances to describe; up to 50 instance IDs. If you omit this parameter, - // all Auto Scaling instances are described. If you specify an ID that does - // not exist, it is ignored with no error. + // The IDs of the instances. You can specify up to MaxRecords IDs. If you omit + // this parameter, all Auto Scaling instances are described. If you specify + // an ID that does not exist, it is ignored with no error. InstanceIds []*string `type:"list"` // The maximum number of items to return with this call. The default value is @@ -7773,7 +8141,7 @@ type DescribePoliciesInput struct { NextToken *string `type:"string"` // The names of one or more policies. If you omit this parameter, all policies - // are described. If an group name is provided, the results are limited to that + // are described. If a group name is provided, the results are limited to that // group. This list is limited to 50 items. If you specify an unknown policy // name, it is ignored with no error. PolicyNames []*string `type:"list"` @@ -7871,11 +8239,11 @@ func (s *DescribePoliciesOutput) SetScalingPolicies(v []*ScalingPolicy) *Describ type DescribeScalingActivitiesInput struct { _ struct{} `type:"structure"` - // The activity IDs of the desired scaling activities. If you omit this parameter, - // all activities for the past six weeks are described. If you specify an Auto - // Scaling group, the results are limited to that group. The list of requested - // activities cannot contain more than 50 items. If unknown activities are requested, - // they are ignored with no error. 
+ // The activity IDs of the desired scaling activities. You can specify up to + // 50 IDs. If you omit this parameter, all activities for the past six weeks + // are described. If unknown activities are requested, they are ignored with + // no error. If you specify an Auto Scaling group, the results are limited to + // that group. ActivityIds []*string `type:"list"` // The name of the Auto Scaling group. @@ -8018,7 +8386,7 @@ type DescribeScheduledActionsInput struct { // The latest scheduled start time to return. If scheduled action names are // provided, this parameter is ignored. - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `type:"timestamp"` // The maximum number of items to return with this call. The default value is // 50 and the maximum value is 100. @@ -8028,18 +8396,14 @@ type DescribeScheduledActionsInput struct { // a previous call.) NextToken *string `type:"string"` - // Describes one or more scheduled actions. If you omit this parameter, all - // scheduled actions are described. If you specify an unknown scheduled action, - // it is ignored with no error. - // - // You can describe up to a maximum of 50 instances with a single call. If there - // are more items to return, the call returns a token. To get the next set of - // items, repeat the call with the returned token. + // The names of one or more scheduled actions. You can specify up to 50 actions. + // If you omit this parameter, all scheduled actions are described. If you specify + // an unknown scheduled action, it is ignored with no error. ScheduledActionNames []*string `type:"list"` // The earliest scheduled start time to return. If scheduled action names are // provided, this parameter is ignored. - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -8137,7 +8501,8 @@ func (s *DescribeScheduledActionsOutput) SetScheduledUpdateGroupActions(v []*Sch type DescribeTagsInput struct { _ struct{} `type:"structure"` - // A filter used to scope the tags to return. + // One or more filters to scope the tags to return. The maximum number of filters + // per filter type (for example, auto-scaling-group) is 1000. Filters []*Filter `type:"list"` // The maximum number of items to return with this call. The default value is @@ -8227,8 +8592,11 @@ func (s DescribeTerminationPolicyTypesInput) GoString() string { type DescribeTerminationPolicyTypesOutput struct { _ struct{} `type:"structure"` - // The termination policies supported by Auto Scaling (OldestInstance, OldestLaunchConfiguration, - // NewestInstance, ClosestToNextInstanceHour, and Default). + // The termination policies supported by Amazon EC2 Auto Scaling: OldestInstance, + // OldestLaunchConfiguration, NewestInstance, ClosestToNextInstanceHour, Default, + // OldestLaunchTemplate, and AllocationStrategy. Currently, the OldestLaunchTemplate + // and AllocationStrategy policies are only supported for Auto Scaling groups + // with MixedInstancesPolicy. TerminationPolicyTypes []*string `type:"list"` } @@ -8594,8 +8962,6 @@ type Ebs struct { // in the Amazon Elastic Compute Cloud User Guide. // // Valid values: standard | io1 | gp2 - // - // Default: standard VolumeType *string `min:"1" type:"string"` } @@ -8915,13 +9281,13 @@ type ExecutePolicyInput struct { // otherwise. BreachThreshold *float64 `type:"double"` - // Indicates whether Auto Scaling waits for the cooldown period to complete - // before executing the policy. 
+ // Indicates whether Amazon EC2 Auto Scaling waits for the cooldown period to + // complete before executing the policy. // // This parameter is not supported if the policy type is StepScaling. // - // For more information, see Auto Scaling Cooldowns (http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html) - // in the Auto Scaling User Guide. + // For more information, see Scaling Cooldowns (http://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html) + // in the Amazon EC2 Auto Scaling User Guide. HonorCooldown *bool `type:"boolean"` // The metric value to compare to BreachThreshold. This enables you to execute @@ -9089,6 +9455,50 @@ func (s *ExitStandbyOutput) SetActivities(v []*Activity) *ExitStandbyOutput { return s } +// Describes a scheduled action that could not be created, updated, or deleted. +type FailedScheduledUpdateGroupActionRequest struct { + _ struct{} `type:"structure"` + + // The error code. + ErrorCode *string `min:"1" type:"string"` + + // The error message accompanying the error code. + ErrorMessage *string `type:"string"` + + // The name of the scheduled action. + // + // ScheduledActionName is a required field + ScheduledActionName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s FailedScheduledUpdateGroupActionRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FailedScheduledUpdateGroupActionRequest) GoString() string { + return s.String() +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *FailedScheduledUpdateGroupActionRequest) SetErrorCode(v string) *FailedScheduledUpdateGroupActionRequest { + s.ErrorCode = &v + return s +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *FailedScheduledUpdateGroupActionRequest) SetErrorMessage(v string) *FailedScheduledUpdateGroupActionRequest { + s.ErrorMessage = &v + return s +} + +// SetScheduledActionName sets the ScheduledActionName field's value. +func (s *FailedScheduledUpdateGroupActionRequest) SetScheduledActionName(v string) *FailedScheduledUpdateGroupActionRequest { + s.ScheduledActionName = &v + return s +} + // Describes a filter. type Filter struct { _ struct{} `type:"structure"` @@ -9143,7 +9553,7 @@ type Group struct { // The date and time the group was created. // // CreatedTime is a required field - CreatedTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreatedTime *time.Time `type:"timestamp" required:"true"` // The amount of time, in seconds, after a scaling activity completes before // another scaling activity can start. @@ -9159,8 +9569,8 @@ type Group struct { // The metrics enabled for the group. EnabledMetrics []*EnabledMetric `type:"list"` - // The amount of time, in seconds, that Auto Scaling waits before checking the - // health status of an EC2 instance that has come into service. + // The amount of time, in seconds, that Amazon EC2 Auto Scaling waits before + // checking the health status of an EC2 instance that has come into service. HealthCheckGracePeriod *int64 `type:"integer"` // The service to use for the health checks. The valid values are EC2 and ELB. @@ -9190,12 +9600,15 @@ type Group struct { // MinSize is a required field MinSize *int64 `type:"integer" required:"true"` + // The mixed instances policy for the group. 
+ MixedInstancesPolicy *MixedInstancesPolicy `type:"structure"` + // Indicates whether newly launched instances are protected from termination // by Auto Scaling when scaling in. NewInstancesProtectedFromScaleIn *bool `type:"boolean"` - // The name of the placement group into which you'll launch your instances, - // if any. For more information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) + // The name of the placement group into which to launch your instances, if any. + // For more information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) // in the Amazon Elastic Compute Cloud User Guide. PlacementGroup *string `min:"1" type:"string"` @@ -9325,6 +9738,12 @@ func (s *Group) SetMinSize(v int64) *Group { return s } +// SetMixedInstancesPolicy sets the MixedInstancesPolicy field's value. +func (s *Group) SetMixedInstancesPolicy(v *MixedInstancesPolicy) *Group { + s.MixedInstancesPolicy = v + return s +} + // SetNewInstancesProtectedFromScaleIn sets the NewInstancesProtectedFromScaleIn field's value. func (s *Group) SetNewInstancesProtectedFromScaleIn(v bool) *Group { s.NewInstancesProtectedFromScaleIn = &v @@ -9390,7 +9809,8 @@ type Instance struct { // The last reported health status of the instance. "Healthy" means that the // instance is healthy and should remain in service. "Unhealthy" means that - // the instance is unhealthy and Auto Scaling should terminate and replace it. + // the instance is unhealthy and that Amazon EC2 Auto Scaling should terminate + // and replace it. // // HealthStatus is a required field HealthStatus *string `min:"1" type:"string" required:"true"` @@ -9406,14 +9826,14 @@ type Instance struct { // The launch template for the instance. LaunchTemplate *LaunchTemplateSpecification `type:"structure"` - // A description of the current lifecycle state. Note that the Quarantined state - // is not used. + // A description of the current lifecycle state. The Quarantined state is not + // used. // // LifecycleState is a required field LifecycleState *string `type:"string" required:"true" enum:"LifecycleState"` - // Indicates whether the instance is protected from termination by Auto Scaling - // when scaling in. + // Indicates whether the instance is protected from termination by Amazon EC2 + // Auto Scaling when scaling in. // // ProtectedFromScaleIn is a required field ProtectedFromScaleIn *bool `type:"boolean" required:"true"` @@ -9487,7 +9907,8 @@ type InstanceDetails struct { // The last reported health status of this instance. "Healthy" means that the // instance is healthy and should remain in service. "Unhealthy" means that - // the instance is unhealthy and Auto Scaling should terminate and replace it. + // the instance is unhealthy and Amazon EC2 Auto Scaling should terminate and + // replace it. // // HealthStatus is a required field HealthStatus *string `min:"1" type:"string" required:"true"` @@ -9505,14 +9926,14 @@ type InstanceDetails struct { LaunchTemplate *LaunchTemplateSpecification `type:"structure"` // The lifecycle state for the instance. For more information, see Auto Scaling - // Lifecycle (http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroupLifecycle.html) - // in the Auto Scaling User Guide. + // Lifecycle (http://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html) + // in the Amazon EC2 Auto Scaling User Guide. 
// // LifecycleState is a required field LifecycleState *string `min:"1" type:"string" required:"true"` - // Indicates whether the instance is protected from termination by Auto Scaling - // when scaling in. + // Indicates whether the instance is protected from termination by Amazon EC2 + // Auto Scaling when scaling in. // // ProtectedFromScaleIn is a required field ProtectedFromScaleIn *bool `type:"boolean" required:"true"` @@ -9600,6 +10021,121 @@ func (s *InstanceMonitoring) SetEnabled(v bool) *InstanceMonitoring { return s } +// Describes an instances distribution for an Auto Scaling group with MixedInstancesPolicy. +// +// The instances distribution specifies the distribution of On-Demand Instances +// and Spot Instances, the maximum price to pay for Spot Instances, and how +// the Auto Scaling group allocates instance types. +type InstancesDistribution struct { + _ struct{} `type:"structure"` + + // Indicates how to allocate instance types to fulfill On-Demand capacity. + // + // The only valid value is prioritized, which is also the default value. This + // strategy uses the order of instance types in the Overrides array of LaunchTemplate + // to define the launch priority of each instance type. The first instance type + // in the array is prioritized higher than the last. If all your On-Demand capacity + // cannot be fulfilled using your highest priority instance, then the Auto Scaling + // groups launches the remaining capacity using the second priority instance + // type, and so on. + OnDemandAllocationStrategy *string `type:"string"` + + // The minimum amount of the Auto Scaling group's capacity that must be fulfilled + // by On-Demand Instances. This base portion is provisioned first as your group + // scales. + // + // The default value is 0. If you leave this parameter set to 0, On-Demand Instances + // are launched as a percentage of the Auto Scaling group's desired capacity, + // per the OnDemandPercentageAboveBaseCapacity setting. + OnDemandBaseCapacity *int64 `type:"integer"` + + // Controls the percentages of On-Demand Instances and Spot Instances for your + // additional capacity beyond OnDemandBaseCapacity. + // + // The range is 0–100. The default value is 100. If you leave this parameter + // set to 100, the percentages are 100% for On-Demand Instances and 0% for Spot + // Instances. + OnDemandPercentageAboveBaseCapacity *int64 `type:"integer"` + + // Indicates how to allocate Spot capacity across Spot pools. + // + // The only valid value is lowest-price, which is also the default value. The + // Auto Scaling group selects the cheapest Spot pools and evenly allocates your + // Spot capacity across the number of Spot pools that you specify. + SpotAllocationStrategy *string `type:"string"` + + // The number of Spot pools to use to allocate your Spot capacity. The Spot + // pools are determined from the different instance types in the Overrides array + // of LaunchTemplate. + // + // The range is 1–20 and the default is 2. + SpotInstancePools *int64 `type:"integer"` + + // The maximum price per unit hour that you are willing to pay for a Spot Instance. + // If you leave this value blank (which is the default), the maximum Spot price + // is set at the On-Demand price. 
+ SpotMaxPrice *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s InstancesDistribution) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstancesDistribution) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InstancesDistribution) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InstancesDistribution"} + if s.SpotMaxPrice != nil && len(*s.SpotMaxPrice) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SpotMaxPrice", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOnDemandAllocationStrategy sets the OnDemandAllocationStrategy field's value. +func (s *InstancesDistribution) SetOnDemandAllocationStrategy(v string) *InstancesDistribution { + s.OnDemandAllocationStrategy = &v + return s +} + +// SetOnDemandBaseCapacity sets the OnDemandBaseCapacity field's value. +func (s *InstancesDistribution) SetOnDemandBaseCapacity(v int64) *InstancesDistribution { + s.OnDemandBaseCapacity = &v + return s +} + +// SetOnDemandPercentageAboveBaseCapacity sets the OnDemandPercentageAboveBaseCapacity field's value. +func (s *InstancesDistribution) SetOnDemandPercentageAboveBaseCapacity(v int64) *InstancesDistribution { + s.OnDemandPercentageAboveBaseCapacity = &v + return s +} + +// SetSpotAllocationStrategy sets the SpotAllocationStrategy field's value. +func (s *InstancesDistribution) SetSpotAllocationStrategy(v string) *InstancesDistribution { + s.SpotAllocationStrategy = &v + return s +} + +// SetSpotInstancePools sets the SpotInstancePools field's value. +func (s *InstancesDistribution) SetSpotInstancePools(v int64) *InstancesDistribution { + s.SpotInstancePools = &v + return s +} + +// SetSpotMaxPrice sets the SpotMaxPrice field's value. +func (s *InstancesDistribution) SetSpotMaxPrice(v string) *InstancesDistribution { + s.SpotMaxPrice = &v + return s +} + // Describes a launch configuration. type LaunchConfiguration struct { _ struct{} `type:"structure"` @@ -9625,7 +10161,7 @@ type LaunchConfiguration struct { // The creation date and time for the launch configuration. // // CreatedTime is a required field - CreatedTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreatedTime *time.Time `type:"timestamp" required:"true"` // Controls whether the instance is optimized for EBS I/O (true) or not (false). EbsOptimized *bool `type:"boolean"` @@ -9804,7 +10340,119 @@ func (s *LaunchConfiguration) SetUserData(v string) *LaunchConfiguration { return s } -// Describes a launch template. +// Describes a launch template and overrides. +// +// The overrides are used to override the instance type specified by the launch +// template with multiple instance types that can be used to launch On-Demand +// Instances and Spot Instances. +type LaunchTemplate struct { + _ struct{} `type:"structure"` + + // The launch template to use. You must specify either the launch template ID + // or launch template name in the request. + LaunchTemplateSpecification *LaunchTemplateSpecification `type:"structure"` + + // Any parameters that you specify override the same parameters in the launch + // template. Currently, the only supported override is instance type. + // + // You must specify between 2 and 20 overrides. 
+ Overrides []*LaunchTemplateOverrides `type:"list"` +} + +// String returns the string representation +func (s LaunchTemplate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LaunchTemplate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LaunchTemplate"} + if s.LaunchTemplateSpecification != nil { + if err := s.LaunchTemplateSpecification.Validate(); err != nil { + invalidParams.AddNested("LaunchTemplateSpecification", err.(request.ErrInvalidParams)) + } + } + if s.Overrides != nil { + for i, v := range s.Overrides { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Overrides", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLaunchTemplateSpecification sets the LaunchTemplateSpecification field's value. +func (s *LaunchTemplate) SetLaunchTemplateSpecification(v *LaunchTemplateSpecification) *LaunchTemplate { + s.LaunchTemplateSpecification = v + return s +} + +// SetOverrides sets the Overrides field's value. +func (s *LaunchTemplate) SetOverrides(v []*LaunchTemplateOverrides) *LaunchTemplate { + s.Overrides = v + return s +} + +// Describes an override for a launch template. +type LaunchTemplateOverrides struct { + _ struct{} `type:"structure"` + + // The instance type. + // + // For information about available instance types, see Available Instance Types + // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#AvailableInstanceTypes) + // in the Amazon Elastic Compute Cloud User Guide. + InstanceType *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateOverrides) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateOverrides) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LaunchTemplateOverrides) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LaunchTemplateOverrides"} + if s.InstanceType != nil && len(*s.InstanceType) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InstanceType", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceType sets the InstanceType field's value. +func (s *LaunchTemplateOverrides) SetInstanceType(v string) *LaunchTemplateOverrides { + s.InstanceType = &v + return s +} + +// Describes a launch template and the launch template version. +// +// The launch template that is specified must be configured for use with an +// Auto Scaling group. For more information, see Creating a Launch Template +// for an Auto Scaling group (http://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) +// in the Amazon EC2 Auto Scaling User Guide. type LaunchTemplateSpecification struct { _ struct{} `type:"structure"` @@ -9816,10 +10464,11 @@ type LaunchTemplateSpecification struct { // or a template ID. LaunchTemplateName *string `min:"3" type:"string"` - // The version number, $Latest, or $Default. If the value is $Latest, Auto Scaling - // selects the latest version of the launch template when launching instances. 
- // If the value is $Default, Auto Scaling selects the default version of the - // launch template when launching instances. The default value is $Default. + // The version number, $Latest, or $Default. If the value is $Latest, Amazon + // EC2 Auto Scaling selects the latest version of the launch template when launching + // instances. If the value is $Default, Amazon EC2 Auto Scaling selects the + // default version of the launch template when launching instances. The default + // value is $Default. Version *string `min:"1" type:"string"` } @@ -9870,11 +10519,12 @@ func (s *LaunchTemplateSpecification) SetVersion(v string) *LaunchTemplateSpecif return s } -// Describes a lifecycle hook, which tells Auto Scaling that you want to perform -// an action whenever it launches instances or whenever it terminates instances. +// Describes a lifecycle hook, which tells Amazon EC2 Auto Scaling that you +// want to perform an action whenever it launches instances or whenever it terminates +// instances. // -// For more information, see Auto Scaling Lifecycle Hooks (http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) -// in the Auto Scaling User Guide. +// For more information, see Lifecycle Hooks (http://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) +// in the Amazon EC2 Auto Scaling User Guide. type LifecycleHook struct { _ struct{} `type:"structure"` @@ -9892,24 +10542,29 @@ type LifecycleHook struct { GlobalTimeout *int64 `type:"integer"` // The maximum time, in seconds, that can elapse before the lifecycle hook times - // out. If the lifecycle hook times out, Auto Scaling performs the default action. - // You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat. + // out. If the lifecycle hook times out, Amazon EC2 Auto Scaling performs the + // default action. You can prevent the lifecycle hook from timing out by calling + // RecordLifecycleActionHeartbeat. HeartbeatTimeout *int64 `type:"integer"` // The name of the lifecycle hook. LifecycleHookName *string `min:"1" type:"string"` - // The state of the EC2 instance to which you want to attach the lifecycle hook. - // For a list of lifecycle hook types, see DescribeLifecycleHookTypes. + // The state of the EC2 instance to which to attach the lifecycle hook. The + // following are possible values: + // + // * autoscaling:EC2_INSTANCE_LAUNCHING + // + // * autoscaling:EC2_INSTANCE_TERMINATING LifecycleTransition *string `type:"string"` - // Additional information that you want to include any time Auto Scaling sends - // a message to the notification target. + // Additional information that you want to include any time Amazon EC2 Auto + // Scaling sends a message to the notification target. NotificationMetadata *string `min:"1" type:"string"` - // The ARN of the target that Auto Scaling sends notifications to when an instance - // is in the transition state for the lifecycle hook. The notification target - // can be either an SQS queue or an SNS topic. + // The ARN of the target that Amazon EC2 Auto Scaling sends notifications to + // when an instance is in the transition state for the lifecycle hook. The notification + // target can be either an SQS queue or an SNS topic. 
NotificationTargetARN *string `min:"1" type:"string"` // The ARN of the IAM role that allows the Auto Scaling group to publish to @@ -9981,11 +10636,12 @@ func (s *LifecycleHook) SetRoleARN(v string) *LifecycleHook { return s } -// Describes a lifecycle hook, which tells Auto Scaling that you want to perform -// an action whenever it launches instances or whenever it terminates instances. +// Describes a lifecycle hook, which tells Amazon EC2 Auto Scaling that you +// want to perform an action whenever it launches instances or whenever it terminates +// instances. // -// For more information, see Auto Scaling Lifecycle Hooks (http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) -// in the Auto Scaling User Guide. +// For more information, see Lifecycle Hooks (http://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) +// in the Amazon EC2 Auto Scaling User Guide. type LifecycleHookSpecification struct { _ struct{} `type:"structure"` @@ -9995,8 +10651,9 @@ type LifecycleHookSpecification struct { DefaultResult *string `type:"string"` // The maximum time, in seconds, that can elapse before the lifecycle hook times - // out. If the lifecycle hook times out, Auto Scaling performs the default action. - // You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat. + // out. If the lifecycle hook times out, Amazon EC2 Auto Scaling performs the + // default action. You can prevent the lifecycle hook from timing out by calling + // RecordLifecycleActionHeartbeat. HeartbeatTimeout *int64 `type:"integer"` // The name of the lifecycle hook. @@ -10005,18 +10662,22 @@ type LifecycleHookSpecification struct { LifecycleHookName *string `min:"1" type:"string" required:"true"` // The state of the EC2 instance to which you want to attach the lifecycle hook. - // For a list of lifecycle hook types, see DescribeLifecycleHookTypes. + // The possible values are: + // + // * autoscaling:EC2_INSTANCE_LAUNCHING + // + // * autoscaling:EC2_INSTANCE_TERMINATING // // LifecycleTransition is a required field LifecycleTransition *string `type:"string" required:"true"` - // Additional information that you want to include any time Auto Scaling sends - // a message to the notification target. + // Additional information that you want to include any time Amazon EC2 Auto + // Scaling sends a message to the notification target. NotificationMetadata *string `min:"1" type:"string"` - // The ARN of the target that Auto Scaling sends notifications to when an instance - // is in the transition state for the lifecycle hook. The notification target - // can be either an SQS queue or an SNS topic. + // The ARN of the target that Amazon EC2 Auto Scaling sends notifications to + // when an instance is in the transition state for the lifecycle hook. The notification + // target can be either an SQS queue or an SNS topic. NotificationTargetARN *string `type:"string"` // The ARN of the IAM role that allows the Auto Scaling group to publish to @@ -10108,10 +10769,11 @@ func (s *LifecycleHookSpecification) SetRoleARN(v string) *LifecycleHookSpecific // // If you attach a load balancer to an existing Auto Scaling group, the initial // state is Adding. The state transitions to Added after all instances in the -// group are registered with the load balancer. If ELB health checks are enabled -// for the load balancer, the state transitions to InService after at least -// one instance in the group passes the health check. 
If EC2 health checks are -// enabled instead, the load balancer remains in the Added state. +// group are registered with the load balancer. If Elastic Load Balancing health +// checks are enabled for the load balancer, the state transitions to InService +// after at least one instance in the group passes the health check. If EC2 +// health checks are enabled instead, the load balancer remains in the Added +// state. type LoadBalancerState struct { _ struct{} `type:"structure"` @@ -10163,10 +10825,10 @@ func (s *LoadBalancerState) SetState(v string) *LoadBalancerState { // // If you attach a target group to an existing Auto Scaling group, the initial // state is Adding. The state transitions to Added after all Auto Scaling instances -// are registered with the target group. If ELB health checks are enabled, the -// state transitions to InService after at least one Auto Scaling instance passes -// the health check. If EC2 health checks are enabled instead, the target group -// remains in the Added state. +// are registered with the target group. If Elastic Load Balancing health checks +// are enabled, the state transitions to InService after at least one Auto Scaling +// instance passes the health check. If EC2 health checks are enabled instead, +// the target group remains in the Added state. type LoadBalancerTargetGroupState struct { _ struct{} `type:"structure"` @@ -10331,6 +10993,73 @@ func (s *MetricGranularityType) SetGranularity(v string) *MetricGranularityType return s } +// Describes a mixed instances policy for an Auto Scaling group. With mixed +// instances, your Auto Scaling group can provision a combination of On-Demand +// Instances and Spot Instances across multiple instance types. For more information, +// see Using Multiple Instance Types and Purchase Options (http://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html#asg-purchase-options) +// in the Amazon EC2 Auto Scaling User Guide. +// +// When you create your Auto Scaling group, you can specify a launch configuration +// or template as a parameter for the top-level object, or you can specify a +// mixed instances policy, but not both at the same time. +type MixedInstancesPolicy struct { + _ struct{} `type:"structure"` + + // The instances distribution to use. + // + // If you leave this parameter unspecified when creating the group, the default + // values are used. + InstancesDistribution *InstancesDistribution `type:"structure"` + + // The launch template and overrides. + // + // This parameter is required when creating an Auto Scaling group with a mixed + // instances policy, but is not required when updating the group. + LaunchTemplate *LaunchTemplate `type:"structure"` +} + +// String returns the string representation +func (s MixedInstancesPolicy) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MixedInstancesPolicy) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *MixedInstancesPolicy) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MixedInstancesPolicy"} + if s.InstancesDistribution != nil { + if err := s.InstancesDistribution.Validate(); err != nil { + invalidParams.AddNested("InstancesDistribution", err.(request.ErrInvalidParams)) + } + } + if s.LaunchTemplate != nil { + if err := s.LaunchTemplate.Validate(); err != nil { + invalidParams.AddNested("LaunchTemplate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstancesDistribution sets the InstancesDistribution field's value. +func (s *MixedInstancesPolicy) SetInstancesDistribution(v *InstancesDistribution) *MixedInstancesPolicy { + s.InstancesDistribution = v + return s +} + +// SetLaunchTemplate sets the LaunchTemplate field's value. +func (s *MixedInstancesPolicy) SetLaunchTemplate(v *LaunchTemplate) *MixedInstancesPolicy { + s.LaunchTemplate = v + return s +} + // Describes a notification. type NotificationConfiguration struct { _ struct{} `type:"structure"` @@ -10352,7 +11081,7 @@ type NotificationConfiguration struct { NotificationType *string `min:"1" type:"string"` // The Amazon Resource Name (ARN) of the Amazon Simple Notification Service - // (SNS) topic. + // (Amazon SNS) topic. TopicARN *string `min:"1" type:"string"` } @@ -10396,17 +11125,17 @@ type PredefinedMetricSpecification struct { // Identifies the resource associated with the metric type. The following predefined // metrics are available: // - // * ASGAverageCPUUtilization - average CPU utilization of the Auto Scaling - // group + // * ASGAverageCPUUtilization - Average CPU utilization of the Auto Scaling + // group. // - // * ASGAverageNetworkIn - average number of bytes received on all network - // interfaces by the Auto Scaling group + // * ASGAverageNetworkIn - Average number of bytes received on all network + // interfaces by the Auto Scaling group. // - // * ASGAverageNetworkOut - average number of bytes sent out on all network - // interfaces by the Auto Scaling group + // * ASGAverageNetworkOut - Average number of bytes sent out on all network + // interfaces by the Auto Scaling group. // - // * ALBRequestCountPerTarget - number of requests completed per target in - // an Application Load Balancer target group + // * ALBRequestCountPerTarget - Number of requests completed per target in + // an Application Load Balancer target group. // // For predefined metric types ASGAverageCPUUtilization, ASGAverageNetworkIn, // and ASGAverageNetworkOut, the parameter must not be specified as the resource @@ -10460,8 +11189,8 @@ func (s *PredefinedMetricSpecification) SetResourceLabel(v string) *PredefinedMe // Describes a process type. // -// For more information, see Auto Scaling Processes (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html#process-types) -// in the Auto Scaling User Guide. +// For more information, see Scaling Processes (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html#process-types) +// in the Amazon EC2 Auto Scaling User Guide. type ProcessType struct { _ struct{} `type:"structure"` @@ -10520,8 +11249,8 @@ type PutLifecycleHookInput struct { // out. The range is from 30 to 7200 seconds. The default is 3600 seconds (1 // hour). // - // If the lifecycle hook times out, Auto Scaling performs the default action. - // You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat. 
+ // If the lifecycle hook times out, Amazon EC2 Auto Scaling performs the default + // action. You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat. HeartbeatTimeout *int64 `type:"integer"` // The name of the lifecycle hook. @@ -10529,29 +11258,33 @@ type PutLifecycleHookInput struct { // LifecycleHookName is a required field LifecycleHookName *string `min:"1" type:"string" required:"true"` - // The instance state to which you want to attach the lifecycle hook. For a - // list of lifecycle hook types, see DescribeLifecycleHookTypes. + // The instance state to which you want to attach the lifecycle hook. The possible + // values are: + // + // * autoscaling:EC2_INSTANCE_LAUNCHING + // + // * autoscaling:EC2_INSTANCE_TERMINATING // // This parameter is required for new lifecycle hooks, but optional when updating // existing hooks. LifecycleTransition *string `type:"string"` - // Contains additional information that you want to include any time Auto Scaling - // sends a message to the notification target. + // Contains additional information that you want to include any time Amazon + // EC2 Auto Scaling sends a message to the notification target. NotificationMetadata *string `min:"1" type:"string"` - // The ARN of the notification target that Auto Scaling will use to notify you - // when an instance is in the transition state for the lifecycle hook. This + // The ARN of the notification target that Amazon EC2 Auto Scaling uses to notify + // you when an instance is in the transition state for the lifecycle hook. This // target can be either an SQS queue or an SNS topic. If you specify an empty // string, this overrides the current ARN. // // This operation uses the JSON format when sending notifications to an Amazon - // SQS queue, and an email key/value pair format when sending notifications + // SQS queue, and an email key-value pair format when sending notifications // to an Amazon SNS topic. // - // When you specify a notification target, Auto Scaling sends it a test message. - // Test messages contains the following additional key/value pair: "Event": - // "autoscaling:TEST_NOTIFICATION". + // When you specify a notification target, Amazon EC2 Auto Scaling sends it + // a test message. Test messages contain the following additional key-value + // pair: "Event": "autoscaling:TEST_NOTIFICATION". NotificationTargetARN *string `type:"string"` // The ARN of the IAM role that allows the Auto Scaling group to publish to @@ -10670,14 +11403,14 @@ type PutNotificationConfigurationInput struct { // AutoScalingGroupName is a required field AutoScalingGroupName *string `min:"1" type:"string" required:"true"` - // The type of event that will cause the notification to be sent. For details - // about notification types supported by Auto Scaling, see DescribeAutoScalingNotificationTypes. + // The type of event that causes the notification to be sent. For more information + // about notification types supported by Amazon EC2 Auto Scaling, see DescribeAutoScalingNotificationTypes. // // NotificationTypes is a required field NotificationTypes []*string `type:"list" required:"true"` // The Amazon Resource Name (ARN) of the Amazon Simple Notification Service - // (SNS) topic. + // (Amazon SNS) topic. // // TopicARN is a required field TopicARN *string `min:"1" type:"string" required:"true"` @@ -10758,8 +11491,8 @@ type PutScalingPolicyInput struct { // // This parameter is supported if the policy type is SimpleScaling or StepScaling. 
// - // For more information, see Dynamic Scaling (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html) - // in the Auto Scaling User Guide. + // For more information, see Dynamic Scaling (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html) + // in the Amazon EC2 Auto Scaling User Guide. AdjustmentType *string `min:"1" type:"string"` // The name of the Auto Scaling group. @@ -10773,8 +11506,8 @@ type PutScalingPolicyInput struct { // // This parameter is supported if the policy type is SimpleScaling. // - // For more information, see Auto Scaling Cooldowns (http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html) - // in the Auto Scaling User Guide. + // For more information, see Scaling Cooldowns (http://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html) + // in the Amazon EC2 Auto Scaling User Guide. Cooldown *int64 `type:"integer"` // The estimated time, in seconds, until a newly launched instance can contribute @@ -11005,9 +11738,9 @@ type PutScheduledUpdateGroupActionInput struct { // The number of EC2 instances that should be running in the group. DesiredCapacity *int64 `type:"integer"` - // The time for the recurring schedule to end. Auto Scaling does not perform - // the action after this time. - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + // The time for the recurring schedule to end. Amazon EC2 Auto Scaling does + // not perform the action after this time. + EndTime *time.Time `type:"timestamp"` // The maximum size for the Auto Scaling group. MaxSize *int64 `type:"integer"` @@ -11016,7 +11749,7 @@ type PutScheduledUpdateGroupActionInput struct { MinSize *int64 `type:"integer"` // The recurring schedule for this action, in Unix cron syntax format. For more - // information, see Cron (http://en.wikipedia.org/wiki/Cron) in Wikipedia. + // information about this format, see Crontab (http://crontab.org). Recurrence *string `min:"1" type:"string"` // The name of this scaling action. @@ -11027,15 +11760,16 @@ type PutScheduledUpdateGroupActionInput struct { // The time for this action to start, in "YYYY-MM-DDThh:mm:ssZ" format in UTC/GMT // only (for example, 2014-06-01T00:00:00Z). // - // If you specify Recurrence and StartTime, Auto Scaling performs the action - // at this time, and then performs the action based on the specified recurrence. + // If you specify Recurrence and StartTime, Amazon EC2 Auto Scaling performs + // the action at this time, and then performs the action based on the specified + // recurrence. // - // If you try to schedule your action in the past, Auto Scaling returns an error - // message. - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + // If you try to schedule your action in the past, Amazon EC2 Auto Scaling returns + // an error message. + StartTime *time.Time `type:"timestamp"` // This parameter is deprecated. - Time *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Time *time.Time `type:"timestamp"` } // String returns the string representation @@ -11153,8 +11887,8 @@ type RecordLifecycleActionHeartbeatInput struct { InstanceId *string `min:"1" type:"string"` // A token that uniquely identifies a specific lifecycle action associated with - // an instance. Auto Scaling sends this token to the notification target you - // specified when you created the lifecycle hook. + // an instance. Amazon EC2 Auto Scaling sends this token to the notification + // target that you specified when you created the lifecycle hook. 
LifecycleActionToken *string `min:"36" type:"string"` // The name of the lifecycle hook. @@ -11471,7 +12205,7 @@ func (s *ScalingProcessQuery) SetScalingProcesses(v []*string) *ScalingProcessQu return s } -// Describes a scheduled update to an Auto Scaling group. +// Describes a scheduled scaling action. Used in response to DescribeScheduledActions. type ScheduledUpdateGroupAction struct { _ struct{} `type:"structure"` @@ -11483,7 +12217,7 @@ type ScheduledUpdateGroupAction struct { // The date and time that the action is scheduled to end. This date and time // can be up to one month in the future. - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `type:"timestamp"` // The maximum size of the group. MaxSize *int64 `type:"integer"` @@ -11504,11 +12238,11 @@ type ScheduledUpdateGroupAction struct { // can be up to one month in the future. // // When StartTime and EndTime are specified with Recurrence, they form the boundaries - // of when the recurring action will start and stop. - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + // of when the recurring action starts and stops. + StartTime *time.Time `type:"timestamp"` // This parameter is deprecated. - Time *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Time *time.Time `type:"timestamp"` } // String returns the string representation @@ -11581,6 +12315,119 @@ func (s *ScheduledUpdateGroupAction) SetTime(v time.Time) *ScheduledUpdateGroupA return s } +// Describes one or more scheduled scaling action updates for a specified Auto +// Scaling group. Used in combination with BatchPutScheduledUpdateGroupAction. +// +// When updating a scheduled scaling action, all optional parameters are left +// unchanged if not specified. +type ScheduledUpdateGroupActionRequest struct { + _ struct{} `type:"structure"` + + // The number of EC2 instances that should be running in the group. + DesiredCapacity *int64 `type:"integer"` + + // The time for the recurring schedule to end. Amazon EC2 Auto Scaling does + // not perform the action after this time. + EndTime *time.Time `type:"timestamp"` + + // The maximum size of the group. + MaxSize *int64 `type:"integer"` + + // The minimum size of the group. + MinSize *int64 `type:"integer"` + + // The recurring schedule for the action, in Unix cron syntax format. For more + // information about this format, see Crontab (http://crontab.org). + Recurrence *string `min:"1" type:"string"` + + // The name of the scaling action. + // + // ScheduledActionName is a required field + ScheduledActionName *string `min:"1" type:"string" required:"true"` + + // The time for the action to start, in "YYYY-MM-DDThh:mm:ssZ" format in UTC/GMT + // only (for example, 2014-06-01T00:00:00Z). + // + // If you specify Recurrence and StartTime, Amazon EC2 Auto Scaling performs + // the action at this time, and then performs the action based on the specified + // recurrence. + // + // If you try to schedule the action in the past, Amazon EC2 Auto Scaling returns + // an error message. + StartTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s ScheduledUpdateGroupActionRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScheduledUpdateGroupActionRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ScheduledUpdateGroupActionRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ScheduledUpdateGroupActionRequest"} + if s.Recurrence != nil && len(*s.Recurrence) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Recurrence", 1)) + } + if s.ScheduledActionName == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduledActionName")) + } + if s.ScheduledActionName != nil && len(*s.ScheduledActionName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ScheduledActionName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDesiredCapacity sets the DesiredCapacity field's value. +func (s *ScheduledUpdateGroupActionRequest) SetDesiredCapacity(v int64) *ScheduledUpdateGroupActionRequest { + s.DesiredCapacity = &v + return s +} + +// SetEndTime sets the EndTime field's value. +func (s *ScheduledUpdateGroupActionRequest) SetEndTime(v time.Time) *ScheduledUpdateGroupActionRequest { + s.EndTime = &v + return s +} + +// SetMaxSize sets the MaxSize field's value. +func (s *ScheduledUpdateGroupActionRequest) SetMaxSize(v int64) *ScheduledUpdateGroupActionRequest { + s.MaxSize = &v + return s +} + +// SetMinSize sets the MinSize field's value. +func (s *ScheduledUpdateGroupActionRequest) SetMinSize(v int64) *ScheduledUpdateGroupActionRequest { + s.MinSize = &v + return s +} + +// SetRecurrence sets the Recurrence field's value. +func (s *ScheduledUpdateGroupActionRequest) SetRecurrence(v string) *ScheduledUpdateGroupActionRequest { + s.Recurrence = &v + return s +} + +// SetScheduledActionName sets the ScheduledActionName field's value. +func (s *ScheduledUpdateGroupActionRequest) SetScheduledActionName(v string) *ScheduledUpdateGroupActionRequest { + s.ScheduledActionName = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *ScheduledUpdateGroupActionRequest) SetStartTime(v time.Time) *ScheduledUpdateGroupActionRequest { + s.StartTime = &v + return s +} + type SetDesiredCapacityInput struct { _ struct{} `type:"structure"` @@ -11594,10 +12441,10 @@ type SetDesiredCapacityInput struct { // DesiredCapacity is a required field DesiredCapacity *int64 `type:"integer" required:"true"` - // Indicates whether Auto Scaling waits for the cooldown period to complete - // before initiating a scaling activity to set your Auto Scaling group to its - // new capacity. By default, Auto Scaling does not honor the cooldown period - // during manual scaling activities. + // Indicates whether Amazon EC2 Auto Scaling waits for the cooldown period to + // complete before initiating a scaling activity to set your Auto Scaling group + // to its new capacity. By default, Amazon EC2 Auto Scaling does not honor the + // cooldown period during manual scaling activities. HonorCooldown *bool `type:"boolean"` } @@ -11665,9 +12512,9 @@ func (s SetDesiredCapacityOutput) GoString() string { type SetInstanceHealthInput struct { _ struct{} `type:"structure"` - // The health status of the instance. Set to Healthy if you want the instance - // to remain in service. Set to Unhealthy if you want the instance to be out - // of service. Auto Scaling will terminate and replace the unhealthy instance. + // The health status of the instance. Set to Healthy to have the instance remain + // in service. Set to Unhealthy to have the instance be out of service. Amazon + // EC2 Auto Scaling terminates and replaces the unhealthy instance. 
// // HealthStatus is a required field HealthStatus *string `min:"1" type:"string" required:"true"` @@ -11678,12 +12525,11 @@ type SetInstanceHealthInput struct { InstanceId *string `min:"1" type:"string" required:"true"` // If the Auto Scaling group of the specified instance has a HealthCheckGracePeriod - // specified for the group, by default, this call will respect the grace period. - // Set this to False, if you do not want the call to respect the grace period - // associated with the group. + // specified for the group, by default, this call respects the grace period. + // Set this to False, to have the call not respect the grace period associated + // with the group. // - // For more information, see the description of the health check grace period - // for CreateAutoScalingGroup. + // For more information about the health check grace period, see CreateAutoScalingGroup. ShouldRespectGracePeriod *bool `type:"boolean"` } @@ -11764,8 +12610,8 @@ type SetInstanceProtectionInput struct { // InstanceIds is a required field InstanceIds []*string `type:"list" required:"true"` - // Indicates whether the instance is protected from termination by Auto Scaling - // when scaling in. + // Indicates whether the instance is protected from termination by Amazon EC2 + // Auto Scaling when scaling in. // // ProtectedFromScaleIn is a required field ProtectedFromScaleIn *bool `type:"boolean" required:"true"` @@ -11842,23 +12688,23 @@ func (s SetInstanceProtectionOutput) GoString() string { // For the following examples, suppose that you have an alarm with a breach // threshold of 50: // -// * If you want the adjustment to be triggered when the metric is greater -// than or equal to 50 and less than 60, specify a lower bound of 0 and an -// upper bound of 10. +// * To trigger the adjustment when the metric is greater than or equal to +// 50 and less than 60, specify a lower bound of 0 and an upper bound of +// 10. // -// * If you want the adjustment to be triggered when the metric is greater -// than 40 and less than or equal to 50, specify a lower bound of -10 and -// an upper bound of 0. +// * To trigger the adjustment when the metric is greater than 40 and less +// than or equal to 50, specify a lower bound of -10 and an upper bound of +// 0. // // There are a few rules for the step adjustments for your step policy: // // * The ranges of your step adjustments can't overlap or have a gap. // -// * At most one step adjustment can have a null lower bound. If one step +// * At most, one step adjustment can have a null lower bound. If one step // adjustment has a negative lower bound, then there must be a step adjustment // with a null lower bound. // -// * At most one step adjustment can have a null upper bound. If one step +// * At most, one step adjustment can have a null upper bound. If one step // adjustment has a positive upper bound, then there must be a step adjustment // with a null upper bound. // @@ -11946,8 +12792,8 @@ func (s SuspendProcessesOutput) GoString() string { return s.String() } -// Describes an Auto Scaling process that has been suspended. For more information, -// see ProcessType. +// Describes an automatic scaling process that has been suspended. For more +// information, see ProcessType. type SuspendedProcess struct { _ struct{} `type:"structure"` @@ -12127,9 +12973,9 @@ type TargetTrackingConfiguration struct { // A customized metric. 
CustomizedMetricSpecification *CustomizedMetricSpecification `type:"structure"` - // Indicates whether scale in by the target tracking policy is disabled. If - // scale in is disabled, the target tracking policy won't remove instances from - // the Auto Scaling group. Otherwise, the target tracking policy can remove + // Indicates whether scaling in by the target tracking policy is disabled. If + // scaling in is disabled, the target tracking policy doesn't remove instances + // from the Auto Scaling group. Otherwise, the target tracking policy can remove // instances from the Auto Scaling group. The default is disabled. DisableScaleIn *bool `type:"boolean"` @@ -12293,8 +13139,8 @@ type UpdateAutoScalingGroupInput struct { // The amount of time, in seconds, after a scaling activity completes before // another scaling activity can start. The default is 300. // - // For more information, see Auto Scaling Cooldowns (http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html) - // in the Auto Scaling User Guide. + // For more information, see Scaling Cooldowns (http://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html) + // in the Amazon EC2 Auto Scaling User Guide. DefaultCooldown *int64 `type:"integer"` // The number of EC2 instances that should be running in the Auto Scaling group. @@ -12302,23 +13148,24 @@ type UpdateAutoScalingGroupInput struct { // and less than or equal to the maximum size of the group. DesiredCapacity *int64 `type:"integer"` - // The amount of time, in seconds, that Auto Scaling waits before checking the - // health status of an EC2 instance that has come into service. The default - // is 0. + // The amount of time, in seconds, that Amazon EC2 Auto Scaling waits before + // checking the health status of an EC2 instance that has come into service. + // The default is 0. // - // For more information, see Health Checks (http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html) - // in the Auto Scaling User Guide. + // For more information, see Health Checks (http://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html) + // in the Amazon EC2 Auto Scaling User Guide. HealthCheckGracePeriod *int64 `type:"integer"` // The service to use for the health checks. The valid values are EC2 and ELB. HealthCheckType *string `min:"1" type:"string"` - // The name of the launch configuration. If you specify a launch configuration, - // you can't specify a launch template. + // The name of the launch configuration. If you specify this parameter, you + // can't specify a launch template or a mixed instances policy. LaunchConfigurationName *string `min:"1" type:"string"` - // The launch template to use to specify the updates. If you specify a launch - // template, you can't specify a launch configuration. + // The launch template and version to use to specify the updates. If you specify + // this parameter, you can't specify a launch configuration or a mixed instances + // policy. LaunchTemplate *LaunchTemplateSpecification `type:"structure"` // The maximum size of the Auto Scaling group. @@ -12327,12 +13174,16 @@ type UpdateAutoScalingGroupInput struct { // The minimum size of the Auto Scaling group. MinSize *int64 `type:"integer"` + // The mixed instances policy to use to specify the updates. If you specify + // this parameter, you can't specify a launch configuration or a launch template. 
+ MixedInstancesPolicy *MixedInstancesPolicy `type:"structure"` + // Indicates whether newly launched instances are protected from termination // by Auto Scaling when scaling in. NewInstancesProtectedFromScaleIn *bool `type:"boolean"` - // The name of the placement group into which you'll launch your instances, - // if any. For more information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) + // The name of the placement group into which to launch your instances, if any. + // For more information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) // in the Amazon Elastic Compute Cloud User Guide. PlacementGroup *string `min:"1" type:"string"` @@ -12345,7 +13196,7 @@ type UpdateAutoScalingGroupInput struct { // that they are listed. // // For more information, see Controlling Which Instances Auto Scaling Terminates - // During Scale In (http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html) + // During Scale In (http://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html) // in the Auto Scaling User Guide. TerminationPolicies []*string `type:"list"` @@ -12355,8 +13206,8 @@ type UpdateAutoScalingGroupInput struct { // When you specify VPCZoneIdentifier with AvailabilityZones, ensure that the // subnets' Availability Zones match the values you specify for AvailabilityZones. // - // For more information, see Launching Auto Scaling Instances in a VPC (http://docs.aws.amazon.com/autoscaling/latest/userguide/asg-in-vpc.html) - // in the Auto Scaling User Guide. + // For more information, see Launching Auto Scaling Instances in a VPC (http://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html) + // in the Amazon EC2 Auto Scaling User Guide. VPCZoneIdentifier *string `min:"1" type:"string"` } @@ -12402,6 +13253,11 @@ func (s *UpdateAutoScalingGroupInput) Validate() error { invalidParams.AddNested("LaunchTemplate", err.(request.ErrInvalidParams)) } } + if s.MixedInstancesPolicy != nil { + if err := s.MixedInstancesPolicy.Validate(); err != nil { + invalidParams.AddNested("MixedInstancesPolicy", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -12469,6 +13325,12 @@ func (s *UpdateAutoScalingGroupInput) SetMinSize(v int64) *UpdateAutoScalingGrou return s } +// SetMixedInstancesPolicy sets the MixedInstancesPolicy field's value. +func (s *UpdateAutoScalingGroupInput) SetMixedInstancesPolicy(v *MixedInstancesPolicy) *UpdateAutoScalingGroupInput { + s.MixedInstancesPolicy = v + return s +} + // SetNewInstancesProtectedFromScaleIn sets the NewInstancesProtectedFromScaleIn field's value. func (s *UpdateAutoScalingGroupInput) SetNewInstancesProtectedFromScaleIn(v bool) *UpdateAutoScalingGroupInput { s.NewInstancesProtectedFromScaleIn = &v diff --git a/vendor/github.com/aws/aws-sdk-go/service/autoscaling/doc.go b/vendor/github.com/aws/aws-sdk-go/service/autoscaling/doc.go index f431b8aa39b..32d9594c7ad 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/autoscaling/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/autoscaling/doc.go @@ -5,8 +5,10 @@ // // Amazon EC2 Auto Scaling is designed to automatically launch or terminate // EC2 instances based on user-defined policies, schedules, and health checks. -// Use this service in conjunction with the AWS Auto Scaling, Amazon CloudWatch, -// and Elastic Load Balancing services. 
+// Use this service with AWS Auto Scaling, Amazon CloudWatch, and Elastic Load +// Balancing. +// +// For more information, see the Amazon EC2 Auto Scaling User Guide (http://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html). // // See https://docs.aws.amazon.com/goto/WebAPI/autoscaling-2011-01-01 for more information on this service. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/autoscaling/service.go b/vendor/github.com/aws/aws-sdk-go/service/autoscaling/service.go index 5e63d1c031f..e1da9fd7546 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/autoscaling/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/autoscaling/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "autoscaling" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "autoscaling" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Auto Scaling" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the AutoScaling client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/batch/api.go b/vendor/github.com/aws/aws-sdk-go/service/batch/api.go index 9921afa192f..fb87e1ac3e7 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/batch/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/batch/api.go @@ -14,8 +14,8 @@ const opCancelJob = "CancelJob" // CancelJobRequest generates a "aws/request.Request" representing the // client's request for the CancelJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -102,8 +102,8 @@ const opCreateComputeEnvironment = "CreateComputeEnvironment" // CreateComputeEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the CreateComputeEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -145,13 +145,16 @@ func (c *Batch) CreateComputeEnvironmentRequest(input *CreateComputeEnvironmentI // Creates an AWS Batch compute environment. You can create MANAGED or UNMANAGED // compute environments. // -// In a managed compute environment, AWS Batch manages the compute resources -// within the environment, based on the compute resources that you specify. -// Instances launched into a managed compute environment use a recent, approved -// version of the Amazon ECS-optimized AMI. 
You can choose to use Amazon EC2 -// On-Demand Instances in your managed compute environment, or you can use Amazon -// EC2 Spot Instances that only launch when the Spot bid price is below a specified -// percentage of the On-Demand price. +// In a managed compute environment, AWS Batch manages the capacity and instance +// types of the compute resources within the environment. This is based on the +// compute resource specification that you define or the launch template (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html) +// that you specify when you create the compute environment. You can choose +// to use Amazon EC2 On-Demand Instances or Spot Instances in your managed compute +// environment. You can optionally set a maximum price so that Spot Instances +// only launch when the Spot Instance price is below a specified percentage +// of the On-Demand price. +// +// Multi-node parallel jobs are not supported on Spot Instances. // // In an unmanaged compute environment, you can manage your own compute resources. // This provides more compute resource configuration options, such as using @@ -160,11 +163,26 @@ func (c *Batch) CreateComputeEnvironmentRequest(input *CreateComputeEnvironmentI // AMIs (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/container_instance_AMIs.html) // in the Amazon Elastic Container Service Developer Guide. After you have created // your unmanaged compute environment, you can use the DescribeComputeEnvironments -// operation to find the Amazon ECS cluster that is associated with it and then +// operation to find the Amazon ECS cluster that is associated with it. Then, // manually launch your container instances into that Amazon ECS cluster. For // more information, see Launching an Amazon ECS Container Instance (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html) // in the Amazon Elastic Container Service Developer Guide. // +// AWS Batch does not upgrade the AMIs in a compute environment after it is +// created (for example, when a newer version of the Amazon ECS-optimized AMI +// is available). You are responsible for the management of the guest operating +// system (including updates and security patches) and any additional application +// software or utilities that you install on the compute resources. To use a +// new AMI for your AWS Batch jobs: +// +// Create a new compute environment with the new AMI. +// +// Add the compute environment to an existing job queue. +// +// Remove the old compute environment from your job queue. +// +// Delete the old compute environment. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -207,8 +225,8 @@ const opCreateJobQueue = "CreateJobQueue" // CreateJobQueueRequest generates a "aws/request.Request" representing the // client's request for the CreateJobQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
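To illustrate the managed compute environment behavior described above, here is a minimal sketch of a `CreateComputeEnvironment` call with the Go SDK's batch client; the environment name, role ARNs, subnet, and launch template name are placeholders, and error handling is kept to a bare minimum:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/batch"
)

func main() {
	svc := batch.New(session.Must(session.NewSession()))

	// A managed SPOT environment: AWS Batch scales capacity between MinvCpus
	// and MaxvCpus, launching Spot Instances only while the Spot price stays
	// below BidPercentage percent of the On-Demand price.
	out, err := svc.CreateComputeEnvironment(&batch.CreateComputeEnvironmentInput{
		ComputeEnvironmentName: aws.String("example-managed-spot"),
		Type:                   aws.String("MANAGED"),
		State:                  aws.String("ENABLED"),
		ServiceRole:            aws.String("arn:aws:iam::123456789012:role/AWSBatchServiceRole"),
		ComputeResources: &batch.ComputeResource{
			Type:             aws.String("SPOT"),
			BidPercentage:    aws.Int64(20),
			MinvCpus:         aws.Int64(0),
			DesiredvCpus:     aws.Int64(0),
			MaxvCpus:         aws.Int64(16),
			InstanceTypes:    aws.StringSlice([]string{"optimal"}),
			InstanceRole:     aws.String("arn:aws:iam::123456789012:instance-profile/ecsInstanceRole"),
			SpotIamFleetRole: aws.String("arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-role"),
			Subnets:          aws.StringSlice([]string{"subnet-12345678"}),
			// With a launch template, details such as security groups can come
			// from the template rather than from SecurityGroupIds.
			LaunchTemplate: &batch.LaunchTemplateSpecification{
				LaunchTemplateName: aws.String("example-batch-template"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.ComputeEnvironmentArn))
}
```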
@@ -299,8 +317,8 @@ const opDeleteComputeEnvironment = "DeleteComputeEnvironment" // DeleteComputeEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the DeleteComputeEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -387,8 +405,8 @@ const opDeleteJobQueue = "DeleteJobQueue" // DeleteJobQueueRequest generates a "aws/request.Request" representing the // client's request for the DeleteJobQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -476,8 +494,8 @@ const opDeregisterJobDefinition = "DeregisterJobDefinition" // DeregisterJobDefinitionRequest generates a "aws/request.Request" representing the // client's request for the DeregisterJobDefinition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -560,8 +578,8 @@ const opDescribeComputeEnvironments = "DescribeComputeEnvironments" // DescribeComputeEnvironmentsRequest generates a "aws/request.Request" representing the // client's request for the DescribeComputeEnvironments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -648,8 +666,8 @@ const opDescribeJobDefinitions = "DescribeJobDefinitions" // DescribeJobDefinitionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeJobDefinitions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -733,8 +751,8 @@ const opDescribeJobQueues = "DescribeJobQueues" // DescribeJobQueuesRequest generates a "aws/request.Request" representing the // client's request for the DescribeJobQueues operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -817,8 +835,8 @@ const opDescribeJobs = "DescribeJobs" // DescribeJobsRequest generates a "aws/request.Request" representing the // client's request for the DescribeJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -901,8 +919,8 @@ const opListJobs = "ListJobs" // ListJobsRequest generates a "aws/request.Request" representing the // client's request for the ListJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -941,9 +959,18 @@ func (c *Batch) ListJobsRequest(input *ListJobsInput) (req *request.Request, out // ListJobs API operation for AWS Batch. // -// Returns a list of task jobs for a specified job queue. You can filter the -// results by job status with the jobStatus parameter. If you do not specify -// a status, only RUNNING jobs are returned. +// Returns a list of AWS Batch jobs. +// +// You must specify only one of the following: +// +// * a job queue ID to return a list of jobs in that job queue +// +// * a multi-node parallel job ID to return a list of that job's nodes +// +// * an array job ID to return a list of that job's children +// +// You can filter the results by job status with the jobStatus parameter. If +// you do not specify a status, only RUNNING jobs are returned. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -987,8 +1014,8 @@ const opRegisterJobDefinition = "RegisterJobDefinition" // RegisterJobDefinitionRequest generates a "aws/request.Request" representing the // client's request for the RegisterJobDefinition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1071,8 +1098,8 @@ const opSubmitJob = "SubmitJob" // SubmitJobRequest generates a "aws/request.Request" representing the // client's request for the SubmitJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
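A minimal sketch of the two `ListJobs` call shapes described above, assuming a configured session; the queue name and job ID are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/batch"
)

func main() {
	svc := batch.New(session.Must(session.NewSession()))

	// List RUNNING jobs in a specific job queue.
	running, err := svc.ListJobs(&batch.ListJobsInput{
		JobQueue:  aws.String("example-queue"),
		JobStatus: aws.String("RUNNING"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, job := range running.JobSummaryList {
		fmt.Println(aws.StringValue(job.JobId), aws.StringValue(job.JobName))
	}

	// Alternatively, list the nodes of a multi-node parallel job by its job ID;
	// the queue and status filters are not used in this form.
	nodes, err := svc.ListJobs(&batch.ListJobsInput{
		MultiNodeJobId: aws.String("01234567-89ab-cdef-0123-456789abcdef"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(nodes.JobSummaryList), "nodes")
}
```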
@@ -1156,8 +1183,8 @@ const opTerminateJob = "TerminateJob" // TerminateJobRequest generates a "aws/request.Request" representing the // client's request for the TerminateJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1242,8 +1269,8 @@ const opUpdateComputeEnvironment = "UpdateComputeEnvironment" // UpdateComputeEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the UpdateComputeEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1326,8 +1353,8 @@ const opUpdateJobQueue = "UpdateJobQueue" // UpdateJobQueueRequest generates a "aws/request.Request" representing the // client's request for the UpdateJobQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1524,6 +1551,9 @@ type AttemptContainerDetail struct { // receives a log stream name when they reach the RUNNING status. LogStreamName *string `locationName:"logStreamName" type:"string"` + // The network interfaces associated with the job attempt. + NetworkInterfaces []*NetworkInterface `locationName:"networkInterfaces" type:"list"` + // A short (255 max characters) human-readable string to provide additional // details about a running or stopped container. Reason *string `locationName:"reason" type:"string"` @@ -1562,6 +1592,12 @@ func (s *AttemptContainerDetail) SetLogStreamName(v string) *AttemptContainerDet return s } +// SetNetworkInterfaces sets the NetworkInterfaces field's value. +func (s *AttemptContainerDetail) SetNetworkInterfaces(v []*NetworkInterface) *AttemptContainerDetail { + s.NetworkInterfaces = v + return s +} + // SetReason sets the Reason field's value. func (s *AttemptContainerDetail) SetReason(v string) *AttemptContainerDetail { s.Reason = &v @@ -1581,16 +1617,18 @@ type AttemptDetail struct { // Details about the container in this job attempt. Container *AttemptContainerDetail `locationName:"container" type:"structure"` - // The Unix time stamp for when the attempt was started (when the attempt transitioned - // from the STARTING state to the RUNNING state). + // The Unix timestamp (in seconds and milliseconds) for when the attempt was + // started (when the attempt transitioned from the STARTING state to the RUNNING + // state). StartedAt *int64 `locationName:"startedAt" type:"long"` // A short, human-readable string to provide additional details about the current // status of the job attempt. 
StatusReason *string `locationName:"statusReason" type:"string"` - // The Unix time stamp for when the attempt was stopped (when the attempt transitioned - // from the RUNNING state to a terminal state, such as SUCCEEDED or FAILED). + // The Unix timestamp (in seconds and milliseconds) for when the attempt was + // stopped (when the attempt transitioned from the RUNNING state to a terminal + // state, such as SUCCEEDED or FAILED). StoppedAt *int64 `locationName:"stoppedAt" type:"long"` } @@ -1724,8 +1762,17 @@ type ComputeEnvironmentDetail struct { ServiceRole *string `locationName:"serviceRole" type:"string"` // The state of the compute environment. The valid values are ENABLED or DISABLED. - // An ENABLED state indicates that you can register instances with the compute - // environment and that the associated instances can accept jobs. + // + // If the state is ENABLED, then the AWS Batch scheduler can attempt to place + // jobs from an associated job queue on the compute resources within the environment. + // If the compute environment is managed, then it can scale its instances out + // or in automatically, based on the job queue demand. + // + // If the state is DISABLED, then the AWS Batch scheduler does not attempt to + // place jobs within the environment. Jobs in a STARTING or RUNNING state continue + // to progress normally. Managed compute environments in the DISABLED state + // do not scale out. However, they scale in to minvCpus value after instances + // become idle. State *string `locationName:"state" type:"string" enum:"CEState"` // The current status of the compute environment (for example, CREATING or VALID). @@ -1863,10 +1910,13 @@ func (s *ComputeEnvironmentOrder) SetOrder(v int64) *ComputeEnvironmentOrder { type ComputeResource struct { _ struct{} `type:"structure"` - // The minimum percentage that a Spot Instance price must be when compared with + // The maximum percentage that a Spot Instance price can be when compared with // the On-Demand price for that instance type before instances are launched. - // For example, if your bid percentage is 20%, then the Spot price must be below - // 20% of the current On-Demand price for that EC2 instance. + // For example, if your maximum percentage is 20%, then the Spot price must + // be below 20% of the current On-Demand price for that EC2 instance. You always + // pay the lowest (market) price and never more than your maximum percentage. + // If you leave this field empty, the default value is 100% of the On-Demand + // price. BidPercentage *int64 `locationName:"bidPercentage" type:"integer"` // The desired number of EC2 vCPUS in the compute environment. @@ -1897,21 +1947,35 @@ type ComputeResource struct { // InstanceTypes is a required field InstanceTypes []*string `locationName:"instanceTypes" type:"list" required:"true"` + // The launch template to use for your compute resources. Any other compute + // resource parameters that you specify in a CreateComputeEnvironment API operation + // override the same parameters in the launch template. You must specify either + // the launch template ID or launch template name in the request, but not both. + LaunchTemplate *LaunchTemplateSpecification `locationName:"launchTemplate" type:"structure"` + // The maximum number of EC2 vCPUs that an environment can reach. // // MaxvCpus is a required field MaxvCpus *int64 `locationName:"maxvCpus" type:"integer" required:"true"` - // The minimum number of EC2 vCPUs that an environment should maintain. 
+ // The minimum number of EC2 vCPUs that an environment should maintain (even + // if the compute environment is DISABLED). // // MinvCpus is a required field MinvCpus *int64 `locationName:"minvCpus" type:"integer" required:"true"` + // The Amazon EC2 placement group to associate with your compute resources. + // If you intend to submit multi-node parallel jobs to your compute environment, + // you should consider creating a cluster placement group and associate it with + // your compute resources. This keeps your multi-node parallel job on a logical + // grouping of instances within a single Availability Zone with high network + // flow potential. For more information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) + // in the Amazon EC2 User Guide for Linux Instances. + PlacementGroup *string `locationName:"placementGroup" type:"string"` + // The EC2 security group that is associated with instances launched in the // compute environment. - // - // SecurityGroupIds is a required field - SecurityGroupIds []*string `locationName:"securityGroupIds" type:"list" required:"true"` + SecurityGroupIds []*string `locationName:"securityGroupIds" type:"list"` // The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied // to a SPOT compute environment. @@ -1957,9 +2021,6 @@ func (s *ComputeResource) Validate() error { if s.MinvCpus == nil { invalidParams.Add(request.NewErrParamRequired("MinvCpus")) } - if s.SecurityGroupIds == nil { - invalidParams.Add(request.NewErrParamRequired("SecurityGroupIds")) - } if s.Subnets == nil { invalidParams.Add(request.NewErrParamRequired("Subnets")) } @@ -2009,6 +2070,12 @@ func (s *ComputeResource) SetInstanceTypes(v []*string) *ComputeResource { return s } +// SetLaunchTemplate sets the LaunchTemplate field's value. +func (s *ComputeResource) SetLaunchTemplate(v *LaunchTemplateSpecification) *ComputeResource { + s.LaunchTemplate = v + return s +} + // SetMaxvCpus sets the MaxvCpus field's value. func (s *ComputeResource) SetMaxvCpus(v int64) *ComputeResource { s.MaxvCpus = &v @@ -2021,6 +2088,12 @@ func (s *ComputeResource) SetMinvCpus(v int64) *ComputeResource { return s } +// SetPlacementGroup sets the PlacementGroup field's value. +func (s *ComputeResource) SetPlacementGroup(v string) *ComputeResource { + s.PlacementGroup = &v + return s +} + // SetSecurityGroupIds sets the SecurityGroupIds field's value. func (s *ComputeResource) SetSecurityGroupIds(v []*string) *ComputeResource { s.SecurityGroupIds = v @@ -2117,6 +2190,10 @@ type ContainerDetail struct { // The image used to start the container. Image *string `locationName:"image" type:"string"` + // The instance type of the underlying host infrastructure of a multi-node parallel + // job. + InstanceType *string `locationName:"instanceType" type:"string"` + // The Amazon Resource Name (ARN) associated with the job upon execution. JobRoleArn *string `locationName:"jobRoleArn" type:"string"` @@ -2131,6 +2208,9 @@ type ContainerDetail struct { // The mount points for data volumes in your container. MountPoints []*MountPoint `locationName:"mountPoints" type:"list"` + // The network interfaces associated with the job. + NetworkInterfaces []*NetworkInterface `locationName:"networkInterfaces" type:"list"` + // When this parameter is true, the container is given elevated privileges on // the host container instance (similar to the root user). 
Privileged *bool `locationName:"privileged" type:"boolean"` @@ -2201,6 +2281,12 @@ func (s *ContainerDetail) SetImage(v string) *ContainerDetail { return s } +// SetInstanceType sets the InstanceType field's value. +func (s *ContainerDetail) SetInstanceType(v string) *ContainerDetail { + s.InstanceType = &v + return s +} + // SetJobRoleArn sets the JobRoleArn field's value. func (s *ContainerDetail) SetJobRoleArn(v string) *ContainerDetail { s.JobRoleArn = &v @@ -2225,6 +2311,12 @@ func (s *ContainerDetail) SetMountPoints(v []*MountPoint) *ContainerDetail { return s } +// SetNetworkInterfaces sets the NetworkInterfaces field's value. +func (s *ContainerDetail) SetNetworkInterfaces(v []*NetworkInterface) *ContainerDetail { + s.NetworkInterfaces = v + return s +} + // SetPrivileged sets the Privileged field's value. func (s *ContainerDetail) SetPrivileged(v bool) *ContainerDetail { s.Privileged = &v @@ -2289,6 +2381,10 @@ type ContainerOverrides struct { // is reserved for variables that are set by the AWS Batch service. Environment []*KeyValuePair `locationName:"environment" type:"list"` + // The instance type to use for a multi-node parallel job. This parameter is + // not valid for single-node container jobs. + InstanceType *string `locationName:"instanceType" type:"string"` + // The number of MiB of memory reserved for the job. This value overrides the // value set in the job definition. Memory *int64 `locationName:"memory" type:"integer"` @@ -2320,6 +2416,12 @@ func (s *ContainerOverrides) SetEnvironment(v []*KeyValuePair) *ContainerOverrid return s } +// SetInstanceType sets the InstanceType field's value. +func (s *ContainerOverrides) SetInstanceType(v string) *ContainerOverrides { + s.InstanceType = &v + return s +} + // SetMemory sets the Memory field's value. func (s *ContainerOverrides) SetMemory(v int64) *ContainerOverrides { s.Memory = &v @@ -2378,9 +2480,12 @@ type ContainerProperties struct { // // * Images in other online repositories are qualified further by a domain // name (for example, quay.io/assemblyline/ubuntu). - // - // Image is a required field - Image *string `locationName:"image" type:"string" required:"true"` + Image *string `locationName:"image" type:"string"` + + // The instance type to use for a multi-node parallel job. Currently all node + // groups in a multi-node parallel job must use the same instance type. This + // parameter is not valid for single-node container jobs. + InstanceType *string `locationName:"instanceType" type:"string"` // The Amazon Resource Name (ARN) of the IAM role that the container can assume // for AWS permissions. @@ -2393,8 +2498,11 @@ type ContainerProperties struct { // and the --memory option to docker run (https://docs.docker.com/engine/reference/run/). // You must specify at least 4 MiB of memory for a job. // - // Memory is a required field - Memory *int64 `locationName:"memory" type:"integer" required:"true"` + // If you are trying to maximize your resource utilization by providing your + // jobs as much memory as possible for a particular instance type, see Memory + // Management (http://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) + // in the AWS Batch User Guide. + Memory *int64 `locationName:"memory" type:"integer"` // The mount points for data volumes in your container. 
This parameter maps // to Volumes in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.23/#create-a-container) @@ -2434,9 +2542,7 @@ type ContainerProperties struct { // and the --cpu-shares option to docker run (https://docs.docker.com/engine/reference/run/). // Each vCPU is equivalent to 1,024 CPU shares. You must specify at least one // vCPU. - // - // Vcpus is a required field - Vcpus *int64 `locationName:"vcpus" type:"integer" required:"true"` + Vcpus *int64 `locationName:"vcpus" type:"integer"` // A list of data volumes used in a job. Volumes []*Volume `locationName:"volumes" type:"list"` @@ -2455,15 +2561,6 @@ func (s ContainerProperties) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *ContainerProperties) Validate() error { invalidParams := request.ErrInvalidParams{Context: "ContainerProperties"} - if s.Image == nil { - invalidParams.Add(request.NewErrParamRequired("Image")) - } - if s.Memory == nil { - invalidParams.Add(request.NewErrParamRequired("Memory")) - } - if s.Vcpus == nil { - invalidParams.Add(request.NewErrParamRequired("Vcpus")) - } if s.Ulimits != nil { for i, v := range s.Ulimits { if v == nil { @@ -2499,6 +2596,12 @@ func (s *ContainerProperties) SetImage(v string) *ContainerProperties { return s } +// SetInstanceType sets the InstanceType field's value. +func (s *ContainerProperties) SetInstanceType(v string) *ContainerProperties { + s.InstanceType = &v + return s +} + // SetJobRoleArn sets the JobRoleArn field's value. func (s *ContainerProperties) SetJobRoleArn(v string) *ContainerProperties { s.JobRoleArn = &v @@ -2621,7 +2724,9 @@ type CreateComputeEnvironmentInput struct { // on queues. State *string `locationName:"state" type:"string" enum:"CEState"` - // The type of the compute environment. + // The type of the compute environment. For more information, see Compute Environments + // (http://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) + // in the AWS Batch User Guide. // // Type is a required field Type *string `locationName:"type" type:"string" required:"true" enum:"CEType"` @@ -2742,7 +2847,7 @@ type CreateJobQueueInput struct { // The priority of the job queue. Job queues with a higher priority (or a higher // integer value for the priority parameter) are evaluated first when associated - // with same compute environment. Priority is determined in descending order, + // with the same compute environment. Priority is determined in descending order, // for example, a job queue with a priority value of 10 is given scheduling // preference over a job queue with a priority value of 1. // @@ -3411,6 +3516,9 @@ type JobDefinition struct { // JobDefinitionName is a required field JobDefinitionName *string `locationName:"jobDefinitionName" type:"string" required:"true"` + // An object with various properties specific to multi-node parallel jobs. + NodeProperties *NodeProperties `locationName:"nodeProperties" type:"structure"` + // Default parameters or parameter substitution placeholders that are set in // the job definition. Parameters are specified as a key-value pair mapping. // Parameters in a SubmitJob request override any corresponding parameter defaults @@ -3429,6 +3537,11 @@ type JobDefinition struct { // The status of the job definition. Status *string `locationName:"status" type:"string"` + // The timeout configuration for jobs that are submitted with this job definition. 
+ // You can specify a timeout duration after which AWS Batch terminates your + // jobs if they have not finished. + Timeout *JobTimeout `locationName:"timeout" type:"structure"` + // The type of job definition. // // Type is a required field @@ -3463,6 +3576,12 @@ func (s *JobDefinition) SetJobDefinitionName(v string) *JobDefinition { return s } +// SetNodeProperties sets the NodeProperties field's value. +func (s *JobDefinition) SetNodeProperties(v *NodeProperties) *JobDefinition { + s.NodeProperties = v + return s +} + // SetParameters sets the Parameters field's value. func (s *JobDefinition) SetParameters(v map[string]*string) *JobDefinition { s.Parameters = v @@ -3487,6 +3606,12 @@ func (s *JobDefinition) SetStatus(v string) *JobDefinition { return s } +// SetTimeout sets the Timeout field's value. +func (s *JobDefinition) SetTimeout(v *JobTimeout) *JobDefinition { + s.Timeout = v + return s +} + // SetType sets the Type field's value. func (s *JobDefinition) SetType(v string) *JobDefinition { s.Type = &v @@ -3540,10 +3665,11 @@ type JobDetail struct { // the job. Container *ContainerDetail `locationName:"container" type:"structure"` - // The Unix time stamp for when the job was created. For non-array jobs and - // parent array jobs, this is when the job entered the SUBMITTED state (at the - // time SubmitJob was called). For array child jobs, this is when the child - // job was spawned by its parent and entered the PENDING state. + // The Unix timestamp (in seconds and milliseconds) for when the job was created. + // For non-array jobs and parent array jobs, this is when the job entered the + // SUBMITTED state (at the time SubmitJob was called). For array child jobs, + // this is when the child job was spawned by its parent and entered the PENDING + // state. CreatedAt *int64 `locationName:"createdAt" type:"long"` // A list of job names or IDs on which this job depends. @@ -3569,6 +3695,13 @@ type JobDetail struct { // JobQueue is a required field JobQueue *string `locationName:"jobQueue" type:"string" required:"true"` + // An object representing the details of a node that is associated with a multi-node + // parallel job. + NodeDetails *NodeDetails `locationName:"nodeDetails" type:"structure"` + + // An object representing the node properties of a multi-node parallel job. + NodeProperties *NodeProperties `locationName:"nodeProperties" type:"structure"` + // Additional parameters passed to the job that replace parameter substitution // placeholders or override any corresponding parameter defaults from the job // definition. @@ -3577,14 +3710,17 @@ type JobDetail struct { // The retry strategy to use for this job if an attempt fails. RetryStrategy *RetryStrategy `locationName:"retryStrategy" type:"structure"` - // The Unix time stamp for when the job was started (when the job transitioned - // from the STARTING state to the RUNNING state). + // The Unix timestamp (in seconds and milliseconds) for when the job was started + // (when the job transitioned from the STARTING state to the RUNNING state). // // StartedAt is a required field StartedAt *int64 `locationName:"startedAt" type:"long" required:"true"` // The current status for the job. // + // If your jobs do not progress to STARTING, see Jobs Stuck in (http://docs.aws.amazon.com/batch/latest/userguide/troubleshooting.html#job_stuck_in_runnable)RUNNABLE + // Status in the troubleshooting section of the AWS Batch User Guide. 
+ // // Status is a required field Status *string `locationName:"status" type:"string" required:"true" enum:"JobStatus"` @@ -3592,9 +3728,13 @@ type JobDetail struct { // status of the job. StatusReason *string `locationName:"statusReason" type:"string"` - // The Unix time stamp for when the job was stopped (when the job transitioned - // from the RUNNING state to a terminal state, such as SUCCEEDED or FAILED). + // The Unix timestamp (in seconds and milliseconds) for when the job was stopped + // (when the job transitioned from the RUNNING state to a terminal state, such + // as SUCCEEDED or FAILED). StoppedAt *int64 `locationName:"stoppedAt" type:"long"` + + // The timeout configuration for the job. + Timeout *JobTimeout `locationName:"timeout" type:"structure"` } // String returns the string representation @@ -3661,6 +3801,18 @@ func (s *JobDetail) SetJobQueue(v string) *JobDetail { return s } +// SetNodeDetails sets the NodeDetails field's value. +func (s *JobDetail) SetNodeDetails(v *NodeDetails) *JobDetail { + s.NodeDetails = v + return s +} + +// SetNodeProperties sets the NodeProperties field's value. +func (s *JobDetail) SetNodeProperties(v *NodeProperties) *JobDetail { + s.NodeProperties = v + return s +} + // SetParameters sets the Parameters field's value. func (s *JobDetail) SetParameters(v map[string]*string) *JobDetail { s.Parameters = v @@ -3697,6 +3849,12 @@ func (s *JobDetail) SetStoppedAt(v int64) *JobDetail { return s } +// SetTimeout sets the Timeout field's value. +func (s *JobDetail) SetTimeout(v *JobTimeout) *JobDetail { + s.Timeout = v + return s +} + // An object representing the details of an AWS Batch job queue. type JobQueueDetail struct { _ struct{} `type:"structure"` @@ -3799,10 +3957,10 @@ type JobSummary struct { // the job. Container *ContainerSummary `locationName:"container" type:"structure"` - // The Unix time stamp for when the job was created. For non-array jobs and - // parent array jobs, this is when the job entered the SUBMITTED state (at the - // time SubmitJob was called). For array child jobs, this is when the child - // job was spawned by its parent and entered the PENDING state. + // The Unix timestamp for when the job was created. For non-array jobs and parent + // array jobs, this is when the job entered the SUBMITTED state (at the time + // SubmitJob was called). For array child jobs, this is when the child job was + // spawned by its parent and entered the PENDING state. CreatedAt *int64 `locationName:"createdAt" type:"long"` // The ID of the job. @@ -3815,7 +3973,10 @@ type JobSummary struct { // JobName is a required field JobName *string `locationName:"jobName" type:"string" required:"true"` - // The Unix time stamp for when the job was started (when the job transitioned + // The node properties for a single node in a job summary list. + NodeProperties *NodePropertiesSummary `locationName:"nodeProperties" type:"structure"` + + // The Unix timestamp for when the job was started (when the job transitioned // from the STARTING state to the RUNNING state). StartedAt *int64 `locationName:"startedAt" type:"long"` @@ -3826,7 +3987,7 @@ type JobSummary struct { // status of the job. StatusReason *string `locationName:"statusReason" type:"string"` - // The Unix time stamp for when the job was stopped (when the job transitioned + // The Unix timestamp for when the job was stopped (when the job transitioned // from the RUNNING state to a terminal state, such as SUCCEEDED or FAILED). 
StoppedAt *int64 `locationName:"stoppedAt" type:"long"` } @@ -3871,6 +4032,12 @@ func (s *JobSummary) SetJobName(v string) *JobSummary { return s } +// SetNodeProperties sets the NodeProperties field's value. +func (s *JobSummary) SetNodeProperties(v *NodePropertiesSummary) *JobSummary { + s.NodeProperties = v + return s +} + // SetStartedAt sets the StartedAt field's value. func (s *JobSummary) SetStartedAt(v int64) *JobSummary { s.StartedAt = &v @@ -3895,6 +4062,31 @@ func (s *JobSummary) SetStoppedAt(v int64) *JobSummary { return s } +// An object representing a job timeout configuration. +type JobTimeout struct { + _ struct{} `type:"structure"` + + // The time duration in seconds (measured from the job attempt's startedAt timestamp) + // after which AWS Batch terminates your jobs if they have not finished. + AttemptDurationSeconds *int64 `locationName:"attemptDurationSeconds" type:"integer"` +} + +// String returns the string representation +func (s JobTimeout) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JobTimeout) GoString() string { + return s.String() +} + +// SetAttemptDurationSeconds sets the AttemptDurationSeconds field's value. +func (s *JobTimeout) SetAttemptDurationSeconds(v int64) *JobTimeout { + s.AttemptDurationSeconds = &v + return s +} + // A key-value pair object. type KeyValuePair struct { _ struct{} `type:"structure"` @@ -3930,6 +4122,52 @@ func (s *KeyValuePair) SetValue(v string) *KeyValuePair { return s } +// An object representing a launch template associated with a compute resource. +// You must specify either the launch template ID or launch template name in +// the request, but not both. +type LaunchTemplateSpecification struct { + _ struct{} `type:"structure"` + + // The ID of the launch template. + LaunchTemplateId *string `locationName:"launchTemplateId" type:"string"` + + // The name of the launch template. + LaunchTemplateName *string `locationName:"launchTemplateName" type:"string"` + + // The version number of the launch template. + // + // Default: The default version of the launch template. + Version *string `locationName:"version" type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateSpecification) GoString() string { + return s.String() +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *LaunchTemplateSpecification) SetLaunchTemplateId(v string) *LaunchTemplateSpecification { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *LaunchTemplateSpecification) SetLaunchTemplateName(v string) *LaunchTemplateSpecification { + s.LaunchTemplateName = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *LaunchTemplateSpecification) SetVersion(v string) *LaunchTemplateSpecification { + s.Version = &v + return s +} + type ListJobsInput struct { _ struct{} `type:"structure"` @@ -3954,6 +4192,11 @@ type ListJobsInput struct { // if applicable. MaxResults *int64 `locationName:"maxResults" type:"integer"` + // The job ID for a multi-node parallel job. Specifying a multi-node parallel + // job ID with this parameter lists all nodes that are associated with the specified + // job. 
+ MultiNodeJobId *string `locationName:"multiNodeJobId" type:"string"` + // The nextToken value returned from a previous paginated ListJobs request where // maxResults was used and the results exceeded the value of that parameter. // Pagination continues from the end of the previous results that returned the @@ -3998,6 +4241,12 @@ func (s *ListJobsInput) SetMaxResults(v int64) *ListJobsInput { return s } +// SetMultiNodeJobId sets the MultiNodeJobId field's value. +func (s *ListJobsInput) SetMultiNodeJobId(v string) *ListJobsInput { + s.MultiNodeJobId = &v + return s +} + // SetNextToken sets the NextToken field's value. func (s *ListJobsInput) SetNextToken(v string) *ListJobsInput { s.NextToken = &v @@ -4085,11 +4334,373 @@ func (s *MountPoint) SetSourceVolume(v string) *MountPoint { return s } +// An object representing the elastic network interface for a multi-node parallel +// job node. +type NetworkInterface struct { + _ struct{} `type:"structure"` + + // The attachment ID for the network interface. + AttachmentId *string `locationName:"attachmentId" type:"string"` + + // The private IPv6 address for the network interface. + Ipv6Address *string `locationName:"ipv6Address" type:"string"` + + // The private IPv4 address for the network interface. + PrivateIpv4Address *string `locationName:"privateIpv4Address" type:"string"` +} + +// String returns the string representation +func (s NetworkInterface) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NetworkInterface) GoString() string { + return s.String() +} + +// SetAttachmentId sets the AttachmentId field's value. +func (s *NetworkInterface) SetAttachmentId(v string) *NetworkInterface { + s.AttachmentId = &v + return s +} + +// SetIpv6Address sets the Ipv6Address field's value. +func (s *NetworkInterface) SetIpv6Address(v string) *NetworkInterface { + s.Ipv6Address = &v + return s +} + +// SetPrivateIpv4Address sets the PrivateIpv4Address field's value. +func (s *NetworkInterface) SetPrivateIpv4Address(v string) *NetworkInterface { + s.PrivateIpv4Address = &v + return s +} + +// An object representing the details of a multi-node parallel job node. +type NodeDetails struct { + _ struct{} `type:"structure"` + + // Specifies whether the current node is the main node for a multi-node parallel + // job. + IsMainNode *bool `locationName:"isMainNode" type:"boolean"` + + // The node index for the node. Node index numbering begins at zero. This index + // is also available on the node with the AWS_BATCH_JOB_NODE_INDEX environment + // variable. + NodeIndex *int64 `locationName:"nodeIndex" type:"integer"` +} + +// String returns the string representation +func (s NodeDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NodeDetails) GoString() string { + return s.String() +} + +// SetIsMainNode sets the IsMainNode field's value. +func (s *NodeDetails) SetIsMainNode(v bool) *NodeDetails { + s.IsMainNode = &v + return s +} + +// SetNodeIndex sets the NodeIndex field's value. +func (s *NodeDetails) SetNodeIndex(v int64) *NodeDetails { + s.NodeIndex = &v + return s +} + +// Object representing any node overrides to a job definition that is used in +// a SubmitJob API operation. +type NodeOverrides struct { + _ struct{} `type:"structure"` + + // The node property overrides for the job. 
+ NodePropertyOverrides []*NodePropertyOverride `locationName:"nodePropertyOverrides" type:"list"` +} + +// String returns the string representation +func (s NodeOverrides) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NodeOverrides) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *NodeOverrides) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NodeOverrides"} + if s.NodePropertyOverrides != nil { + for i, v := range s.NodePropertyOverrides { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "NodePropertyOverrides", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNodePropertyOverrides sets the NodePropertyOverrides field's value. +func (s *NodeOverrides) SetNodePropertyOverrides(v []*NodePropertyOverride) *NodeOverrides { + s.NodePropertyOverrides = v + return s +} + +// An object representing the node properties of a multi-node parallel job. +type NodeProperties struct { + _ struct{} `type:"structure"` + + // Specifies the node index for the main node of a multi-node parallel job. + // + // MainNode is a required field + MainNode *int64 `locationName:"mainNode" type:"integer" required:"true"` + + // A list of node ranges and their properties associated with a multi-node parallel + // job. + // + // NodeRangeProperties is a required field + NodeRangeProperties []*NodeRangeProperty `locationName:"nodeRangeProperties" type:"list" required:"true"` + + // The number of nodes associated with a multi-node parallel job. + // + // NumNodes is a required field + NumNodes *int64 `locationName:"numNodes" type:"integer" required:"true"` +} + +// String returns the string representation +func (s NodeProperties) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NodeProperties) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *NodeProperties) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NodeProperties"} + if s.MainNode == nil { + invalidParams.Add(request.NewErrParamRequired("MainNode")) + } + if s.NodeRangeProperties == nil { + invalidParams.Add(request.NewErrParamRequired("NodeRangeProperties")) + } + if s.NumNodes == nil { + invalidParams.Add(request.NewErrParamRequired("NumNodes")) + } + if s.NodeRangeProperties != nil { + for i, v := range s.NodeRangeProperties { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "NodeRangeProperties", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMainNode sets the MainNode field's value. +func (s *NodeProperties) SetMainNode(v int64) *NodeProperties { + s.MainNode = &v + return s +} + +// SetNodeRangeProperties sets the NodeRangeProperties field's value. +func (s *NodeProperties) SetNodeRangeProperties(v []*NodeRangeProperty) *NodeProperties { + s.NodeRangeProperties = v + return s +} + +// SetNumNodes sets the NumNodes field's value. 
+func (s *NodeProperties) SetNumNodes(v int64) *NodeProperties { + s.NumNodes = &v + return s +} + +// An object representing the properties of a node that is associated with a +// multi-node parallel job. +type NodePropertiesSummary struct { + _ struct{} `type:"structure"` + + // Specifies whether the current node is the main node for a multi-node parallel + // job. + IsMainNode *bool `locationName:"isMainNode" type:"boolean"` + + // The node index for the node. Node index numbering begins at zero. This index + // is also available on the node with the AWS_BATCH_JOB_NODE_INDEX environment + // variable. + NodeIndex *int64 `locationName:"nodeIndex" type:"integer"` + + // The number of nodes associated with a multi-node parallel job. + NumNodes *int64 `locationName:"numNodes" type:"integer"` +} + +// String returns the string representation +func (s NodePropertiesSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NodePropertiesSummary) GoString() string { + return s.String() +} + +// SetIsMainNode sets the IsMainNode field's value. +func (s *NodePropertiesSummary) SetIsMainNode(v bool) *NodePropertiesSummary { + s.IsMainNode = &v + return s +} + +// SetNodeIndex sets the NodeIndex field's value. +func (s *NodePropertiesSummary) SetNodeIndex(v int64) *NodePropertiesSummary { + s.NodeIndex = &v + return s +} + +// SetNumNodes sets the NumNodes field's value. +func (s *NodePropertiesSummary) SetNumNodes(v int64) *NodePropertiesSummary { + s.NumNodes = &v + return s +} + +// Object representing any node overrides to a job definition that is used in +// a SubmitJob API operation. +type NodePropertyOverride struct { + _ struct{} `type:"structure"` + + // The overrides that should be sent to a node range. + ContainerOverrides *ContainerOverrides `locationName:"containerOverrides" type:"structure"` + + // The range of nodes, using node index values, with which to override. A range + // of 0:3 indicates nodes with index values of 0 through 3. If the starting + // range value is omitted (:n), then 0 is used to start the range. If the ending + // range value is omitted (n:), then the highest possible node index is used + // to end the range. + // + // TargetNodes is a required field + TargetNodes *string `locationName:"targetNodes" type:"string" required:"true"` +} + +// String returns the string representation +func (s NodePropertyOverride) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NodePropertyOverride) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *NodePropertyOverride) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NodePropertyOverride"} + if s.TargetNodes == nil { + invalidParams.Add(request.NewErrParamRequired("TargetNodes")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContainerOverrides sets the ContainerOverrides field's value. +func (s *NodePropertyOverride) SetContainerOverrides(v *ContainerOverrides) *NodePropertyOverride { + s.ContainerOverrides = v + return s +} + +// SetTargetNodes sets the TargetNodes field's value. +func (s *NodePropertyOverride) SetTargetNodes(v string) *NodePropertyOverride { + s.TargetNodes = &v + return s +} + +// An object representing the properties of the node range for a multi-node +// parallel job. 
+type NodeRangeProperty struct { + _ struct{} `type:"structure"` + + // The container details for the node range. + Container *ContainerProperties `locationName:"container" type:"structure"` + + // The range of nodes, using node index values. A range of 0:3 indicates nodes + // with index values of 0 through 3. If the starting range value is omitted + // (:n), then 0 is used to start the range. If the ending range value is omitted + // (n:), then the highest possible node index is used to end the range. Your + // accumulative node ranges must account for all nodes (0:n). You may nest node + // ranges, for example 0:10 and 4:5, in which case the 4:5 range properties + // override the 0:10 properties. + // + // TargetNodes is a required field + TargetNodes *string `locationName:"targetNodes" type:"string" required:"true"` +} + +// String returns the string representation +func (s NodeRangeProperty) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NodeRangeProperty) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *NodeRangeProperty) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NodeRangeProperty"} + if s.TargetNodes == nil { + invalidParams.Add(request.NewErrParamRequired("TargetNodes")) + } + if s.Container != nil { + if err := s.Container.Validate(); err != nil { + invalidParams.AddNested("Container", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContainer sets the Container field's value. +func (s *NodeRangeProperty) SetContainer(v *ContainerProperties) *NodeRangeProperty { + s.Container = v + return s +} + +// SetTargetNodes sets the TargetNodes field's value. +func (s *NodeRangeProperty) SetTargetNodes(v string) *NodeRangeProperty { + s.TargetNodes = &v + return s +} + type RegisterJobDefinitionInput struct { _ struct{} `type:"structure"` - // An object with various properties specific for container-based jobs. This - // parameter is required if the type parameter is container. + // An object with various properties specific to single-node container-based + // jobs. If the job definition's type parameter is container, then you must + // specify either containerProperties or nodeProperties. ContainerProperties *ContainerProperties `locationName:"containerProperties" type:"structure"` // The name of the job definition to register. Up to 128 letters (uppercase @@ -4098,6 +4709,13 @@ type RegisterJobDefinitionInput struct { // JobDefinitionName is a required field JobDefinitionName *string `locationName:"jobDefinitionName" type:"string" required:"true"` + // An object with various properties specific to multi-node parallel jobs. If + // you specify node properties for a job, it becomes a multi-node parallel job. + // For more information, see Multi-node Parallel Jobs (http://docs.aws.amazon.com/batch/latest/userguide/multi-node-parallel-jobs.html) + // in the AWS Batch User Guide. If the job definition's type parameter is container, + // then you must specify either containerProperties or nodeProperties. + NodeProperties *NodeProperties `locationName:"nodeProperties" type:"structure"` + // Default parameter substitution placeholders to set in the job definition. // Parameters are specified as a key-value pair mapping. Parameters in a SubmitJob // request override any corresponding parameter defaults from the job definition. 
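As a rough sketch of how the node property and timeout types above fit together, a multi-node parallel job definition might be registered like this (image URI, sizes, and names are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/batch"
)

func main() {
	svc := batch.New(session.Must(session.NewSession()))

	out, err := svc.RegisterJobDefinition(&batch.RegisterJobDefinitionInput{
		JobDefinitionName: aws.String("example-mnp-job"),
		Type:              aws.String("container"),
		// Terminate any attempt that runs longer than one hour.
		Timeout: &batch.JobTimeout{
			AttemptDurationSeconds: aws.Int64(3600),
		},
		NodeProperties: &batch.NodeProperties{
			NumNodes: aws.Int64(4),
			MainNode: aws.Int64(0),
			NodeRangeProperties: []*batch.NodeRangeProperty{
				{
					// "0:" targets every node, from index 0 to the highest index.
					TargetNodes: aws.String("0:"),
					Container: &batch.ContainerProperties{
						Image:   aws.String("123456789012.dkr.ecr.us-east-1.amazonaws.com/example:latest"),
						Vcpus:   aws.Int64(2),
						Memory:  aws.Int64(2048),
						Command: aws.StringSlice([]string{"sleep", "60"}),
					},
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.JobDefinitionArn))
}
```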
@@ -4105,9 +4723,19 @@ type RegisterJobDefinitionInput struct { // The retry strategy to use for failed jobs that are submitted with this job // definition. Any retry strategy that is specified during a SubmitJob operation - // overrides the retry strategy defined here. + // overrides the retry strategy defined here. If a job is terminated due to + // a timeout, it is not retried. RetryStrategy *RetryStrategy `locationName:"retryStrategy" type:"structure"` + // The timeout configuration for jobs that are submitted with this job definition, + // after which AWS Batch terminates your jobs if they have not finished. If + // a job is terminated due to a timeout, it is not retried. The minimum value + // for the timeout is 60 seconds. Any timeout configuration that is specified + // during a SubmitJob operation overrides the timeout configuration defined + // here. For more information, see Job Timeouts (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/job_timeouts.html) + // in the Amazon Elastic Container Service Developer Guide. + Timeout *JobTimeout `locationName:"timeout" type:"structure"` + // The type of job definition. // // Type is a required field @@ -4138,6 +4766,11 @@ func (s *RegisterJobDefinitionInput) Validate() error { invalidParams.AddNested("ContainerProperties", err.(request.ErrInvalidParams)) } } + if s.NodeProperties != nil { + if err := s.NodeProperties.Validate(); err != nil { + invalidParams.AddNested("NodeProperties", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -4157,6 +4790,12 @@ func (s *RegisterJobDefinitionInput) SetJobDefinitionName(v string) *RegisterJob return s } +// SetNodeProperties sets the NodeProperties field's value. +func (s *RegisterJobDefinitionInput) SetNodeProperties(v *NodeProperties) *RegisterJobDefinitionInput { + s.NodeProperties = v + return s +} + // SetParameters sets the Parameters field's value. func (s *RegisterJobDefinitionInput) SetParameters(v map[string]*string) *RegisterJobDefinitionInput { s.Parameters = v @@ -4169,6 +4808,12 @@ func (s *RegisterJobDefinitionInput) SetRetryStrategy(v *RetryStrategy) *Registe return s } +// SetTimeout sets the Timeout field's value. +func (s *RegisterJobDefinitionInput) SetTimeout(v *JobTimeout) *RegisterJobDefinitionInput { + s.Timeout = v + return s +} + // SetType sets the Type field's value. func (s *RegisterJobDefinitionInput) SetType(v string) *RegisterJobDefinitionInput { s.Type = &v @@ -4228,7 +4873,7 @@ type RetryStrategy struct { // The number of times to move a job to the RUNNABLE status. You may specify // between 1 and 10 attempts. If the value of attempts is greater than one, - // the job is retried if it fails until it has moved to RUNNABLE that many times. + // the job is retried on failure the same number of attempts as the value. Attempts *int64 `locationName:"attempts" type:"integer"` } @@ -4271,8 +4916,9 @@ type SubmitJobInput struct { // jobs. You can specify a SEQUENTIAL type dependency without specifying a job // ID for array jobs so that each child array job completes sequentially, starting // at index 0. You can also specify an N_TO_N type dependency with a job ID - // for array jobs so that each index child of this job must wait for the corresponding - // index child of each dependency to complete before it can begin. + // for array jobs. In that case, each index child of this job must wait for + // the corresponding index child of each dependency to complete before it can + // begin. 
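
Taken together, the `NodeProperties`, `NodeRangeProperty`, `JobTimeout`, and `RetryStrategy` additions above let a caller register and run a multi-node parallel job definition through the generated Batch client. The sketch below is illustrative only: the `MainNode`/`NodeRangeProperties` fields on `NodeProperties`, the `AttemptDurationSeconds` field on `JobTimeout`, the `Image`/`Vcpus`/`Memory`/`Command` fields on `ContainerProperties`/`ContainerOverrides`, and the `NodePropertyOverrides` field on `NodeOverrides` are not visible in these hunks and are assumptions from the rest of the generated package; the `SubmitJobInput.NodeOverrides` and `Timeout` fields it uses are added in the hunks that follow. All names and values are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/batch"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := batch.New(sess)

	// Register a multi-node parallel job definition: 4 nodes, node 0 is the
	// main node, and node range 0:3 shares one container configuration.
	// Fields not shown in this diff (MainNode, NodeRangeProperties, the
	// ContainerProperties members, AttemptDurationSeconds) are assumptions.
	regOut, err := svc.RegisterJobDefinition(&batch.RegisterJobDefinitionInput{
		JobDefinitionName: aws.String("example-mnp-job"), // placeholder name
		Type:              aws.String(batch.JobDefinitionTypeMultinode),
		NodeProperties: &batch.NodeProperties{
			MainNode: aws.Int64(0),
			NumNodes: aws.Int64(4),
			NodeRangeProperties: []*batch.NodeRangeProperty{
				{
					TargetNodes: aws.String("0:3"),
					Container: &batch.ContainerProperties{
						Image:   aws.String("busybox"),
						Vcpus:   aws.Int64(1),
						Memory:  aws.Int64(512),
						Command: []*string{aws.String("sleep"), aws.String("60")},
					},
				},
			},
		},
		// Minimum timeout is 60 seconds; jobs terminated by a timeout are not retried.
		Timeout:       &batch.JobTimeout{AttemptDurationSeconds: aws.Int64(120)},
		RetryStrategy: &batch.RetryStrategy{Attempts: aws.Int64(2)},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("registered:", aws.StringValue(regOut.JobDefinitionArn))

	// Submit against the new definition, overriding the timeout and the
	// command for nodes 1:3 (these SubmitJobInput fields appear in the
	// hunks just below).
	subOut, err := svc.SubmitJob(&batch.SubmitJobInput{
		JobName:       aws.String("example-mnp-run"), // placeholder
		JobQueue:      aws.String("example-queue"),   // placeholder
		JobDefinition: regOut.JobDefinitionArn,
		Timeout:       &batch.JobTimeout{AttemptDurationSeconds: aws.Int64(300)},
		NodeOverrides: &batch.NodeOverrides{
			NodePropertyOverrides: []*batch.NodePropertyOverride{
				{
					TargetNodes: aws.String("1:3"),
					ContainerOverrides: &batch.ContainerOverrides{
						Command: []*string{aws.String("sleep"), aws.String("120")},
					},
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("submitted:", aws.StringValue(subOut.JobId))
}
```
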
DependsOn []*JobDependency `locationName:"dependsOn" type:"list"` // The job definition used by this job. This value can be either a name:revision @@ -4294,6 +4940,10 @@ type SubmitJobInput struct { // JobQueue is a required field JobQueue *string `locationName:"jobQueue" type:"string" required:"true"` + // A list of node overrides in JSON format that specify the node range to target + // and the container overrides for that node range. + NodeOverrides *NodeOverrides `locationName:"nodeOverrides" type:"structure"` + // Additional parameters passed to the job that replace parameter substitution // placeholders that are set in the job definition. Parameters are specified // as a key and value pair mapping. Parameters in a SubmitJob request override @@ -4304,6 +4954,16 @@ type SubmitJobInput struct { // When a retry strategy is specified here, it overrides the retry strategy // defined in the job definition. RetryStrategy *RetryStrategy `locationName:"retryStrategy" type:"structure"` + + // The timeout configuration for this SubmitJob operation. You can specify a + // timeout duration after which AWS Batch terminates your jobs if they have + // not finished. If a job is terminated due to a timeout, it is not retried. + // The minimum value for the timeout is 60 seconds. This configuration overrides + // any timeout configuration specified in the job definition. For array jobs, + // child jobs have the same timeout configuration as the parent job. For more + // information, see Job Timeouts (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/job_timeouts.html) + // in the Amazon Elastic Container Service Developer Guide. + Timeout *JobTimeout `locationName:"timeout" type:"structure"` } // String returns the string representation @@ -4328,6 +4988,11 @@ func (s *SubmitJobInput) Validate() error { if s.JobQueue == nil { invalidParams.Add(request.NewErrParamRequired("JobQueue")) } + if s.NodeOverrides != nil { + if err := s.NodeOverrides.Validate(); err != nil { + invalidParams.AddNested("NodeOverrides", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -4371,6 +5036,12 @@ func (s *SubmitJobInput) SetJobQueue(v string) *SubmitJobInput { return s } +// SetNodeOverrides sets the NodeOverrides field's value. +func (s *SubmitJobInput) SetNodeOverrides(v *NodeOverrides) *SubmitJobInput { + s.NodeOverrides = v + return s +} + // SetParameters sets the Parameters field's value. func (s *SubmitJobInput) SetParameters(v map[string]*string) *SubmitJobInput { s.Parameters = v @@ -4383,6 +5054,12 @@ func (s *SubmitJobInput) SetRetryStrategy(v *RetryStrategy) *SubmitJobInput { return s } +// SetTimeout sets the Timeout field's value. +func (s *SubmitJobInput) SetTimeout(v *JobTimeout) *SubmitJobInput { + s.Timeout = v + return s +} + type SubmitJobOutput struct { _ struct{} `type:"structure"` @@ -4640,7 +5317,7 @@ type UpdateComputeEnvironmentOutput struct { // The Amazon Resource Name (ARN) of the compute environment. ComputeEnvironmentArn *string `locationName:"computeEnvironmentArn" type:"string"` - // The name of compute environment. + // The name of the compute environment. ComputeEnvironmentName *string `locationName:"computeEnvironmentName" type:"string"` } @@ -4681,7 +5358,7 @@ type UpdateJobQueueInput struct { // The priority of the job queue. Job queues with a higher priority (or a higher // integer value for the priority parameter) are evaluated first when associated - // with same compute environment. 
Priority is determined in descending order, + // with the same compute environment. Priority is determined in descending order, // for example, a job queue with a priority value of 10 is given scheduling // preference over a job queue with a priority value of 1. Priority *int64 `locationName:"priority" type:"integer"` @@ -4901,6 +5578,9 @@ const ( const ( // JobDefinitionTypeContainer is a JobDefinitionType enum value JobDefinitionTypeContainer = "container" + + // JobDefinitionTypeMultinode is a JobDefinitionType enum value + JobDefinitionTypeMultinode = "multinode" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/batch/service.go b/vendor/github.com/aws/aws-sdk-go/service/batch/service.go index 11539325a15..0d04e7497ec 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/batch/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/batch/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "batch" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "batch" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Batch" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Batch client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/budgets/api.go b/vendor/github.com/aws/aws-sdk-go/service/budgets/api.go index 25c50c880f1..a559c495876 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/budgets/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/budgets/api.go @@ -15,8 +15,8 @@ const opCreateBudget = "CreateBudget" // CreateBudgetRequest generates a "aws/request.Request" representing the // client's request for the CreateBudget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -102,8 +102,8 @@ const opCreateNotification = "CreateNotification" // CreateNotificationRequest generates a "aws/request.Request" representing the // client's request for the CreateNotification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -193,8 +193,8 @@ const opCreateSubscriber = "CreateSubscriber" // CreateSubscriberRequest generates a "aws/request.Request" representing the // client's request for the CreateSubscriber operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -284,8 +284,8 @@ const opDeleteBudget = "DeleteBudget" // DeleteBudgetRequest generates a "aws/request.Request" representing the // client's request for the DeleteBudget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -324,8 +324,8 @@ func (c *Budgets) DeleteBudgetRequest(input *DeleteBudgetInput) (req *request.Re // // Deletes a budget. You can delete your budget at any time. // -// Deleting a budget also deletes the notifications and subscribers associated -// with that budget. +// Deleting a budget also deletes the notifications and subscribers that are +// associated with that budget. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -371,8 +371,8 @@ const opDeleteNotification = "DeleteNotification" // DeleteNotificationRequest generates a "aws/request.Request" representing the // client's request for the DeleteNotification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -411,8 +411,8 @@ func (c *Budgets) DeleteNotificationRequest(input *DeleteNotificationInput) (req // // Deletes a notification. // -// Deleting a notification also deletes the subscribers associated with the -// notification. +// Deleting a notification also deletes the subscribers that are associated +// with the notification. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -458,8 +458,8 @@ const opDeleteSubscriber = "DeleteSubscriber" // DeleteSubscriberRequest generates a "aws/request.Request" representing the // client's request for the DeleteSubscriber operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -544,8 +544,8 @@ const opDescribeBudget = "DescribeBudget" // DescribeBudgetRequest generates a "aws/request.Request" representing the // client's request for the DescribeBudget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -624,12 +624,103 @@ func (c *Budgets) DescribeBudgetWithContext(ctx aws.Context, input *DescribeBudg return out, req.Send() } +const opDescribeBudgetPerformanceHistory = "DescribeBudgetPerformanceHistory" + +// DescribeBudgetPerformanceHistoryRequest generates a "aws/request.Request" representing the +// client's request for the DescribeBudgetPerformanceHistory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeBudgetPerformanceHistory for more information on using the DescribeBudgetPerformanceHistory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeBudgetPerformanceHistoryRequest method. +// req, resp := client.DescribeBudgetPerformanceHistoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *Budgets) DescribeBudgetPerformanceHistoryRequest(input *DescribeBudgetPerformanceHistoryInput) (req *request.Request, output *DescribeBudgetPerformanceHistoryOutput) { + op := &request.Operation{ + Name: opDescribeBudgetPerformanceHistory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeBudgetPerformanceHistoryInput{} + } + + output = &DescribeBudgetPerformanceHistoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeBudgetPerformanceHistory API operation for AWS Budgets. +// +// Describes the history for DAILY, MONTHLY, and QUARTERLY budgets. Budget history +// isn't available for ANNUAL budgets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Budgets's +// API operation DescribeBudgetPerformanceHistory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalErrorException "InternalErrorException" +// An error on the server occurred during the processing of your request. Try +// again later. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// An error on the client occurred. Typically, the cause is an invalid input +// value. +// +// * ErrCodeNotFoundException "NotFoundException" +// We can’t locate the resource that you specified. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The pagination token is invalid. +// +// * ErrCodeExpiredNextTokenException "ExpiredNextTokenException" +// The pagination token expired. +// +func (c *Budgets) DescribeBudgetPerformanceHistory(input *DescribeBudgetPerformanceHistoryInput) (*DescribeBudgetPerformanceHistoryOutput, error) { + req, out := c.DescribeBudgetPerformanceHistoryRequest(input) + return out, req.Send() +} + +// DescribeBudgetPerformanceHistoryWithContext is the same as DescribeBudgetPerformanceHistory with the addition of +// the ability to pass a context and additional request options. 
+// +// See DescribeBudgetPerformanceHistory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Budgets) DescribeBudgetPerformanceHistoryWithContext(ctx aws.Context, input *DescribeBudgetPerformanceHistoryInput, opts ...request.Option) (*DescribeBudgetPerformanceHistoryOutput, error) { + req, out := c.DescribeBudgetPerformanceHistoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeBudgets = "DescribeBudgets" // DescribeBudgetsRequest generates a "aws/request.Request" representing the // client's request for the DescribeBudgets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -666,7 +757,7 @@ func (c *Budgets) DescribeBudgetsRequest(input *DescribeBudgetsInput) (req *requ // DescribeBudgets API operation for AWS Budgets. // -// Lists the budgets associated with an account. +// Lists the budgets that are associated with an account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -718,8 +809,8 @@ const opDescribeNotificationsForBudget = "DescribeNotificationsForBudget" // DescribeNotificationsForBudgetRequest generates a "aws/request.Request" representing the // client's request for the DescribeNotificationsForBudget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -756,7 +847,7 @@ func (c *Budgets) DescribeNotificationsForBudgetRequest(input *DescribeNotificat // DescribeNotificationsForBudget API operation for AWS Budgets. // -// Lists the notifications associated with a budget. +// Lists the notifications that are associated with a budget. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -808,8 +899,8 @@ const opDescribeSubscribersForNotification = "DescribeSubscribersForNotification // DescribeSubscribersForNotificationRequest generates a "aws/request.Request" representing the // client's request for the DescribeSubscribersForNotification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
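
The new `DescribeBudgetPerformanceHistory` operation above is a plain request/response call; at this vendored revision it is generated without a `Pages` helper, so callers follow `NextToken` manually. A minimal sketch of driving it, assuming a configured session and an existing budget (account ID and budget name are placeholders; the input and output shapes it uses are added further down in this file):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/budgets"
)

func main() {
	svc := budgets.New(session.Must(session.NewSession()))

	input := &budgets.DescribeBudgetPerformanceHistoryInput{
		AccountId:  aws.String("123456789012"),    // placeholder account ID
		BudgetName: aws.String("example-monthly"), // placeholder budget name
		MaxResults: aws.Int64(100),
		// Optional: restrict the history to a window of interest.
		TimePeriod: &budgets.TimePeriod{
			Start: aws.Time(time.Now().AddDate(0, -6, 0)),
			End:   aws.Time(time.Now()),
		},
	}

	for {
		out, err := svc.DescribeBudgetPerformanceHistory(input)
		if err != nil {
			log.Fatal(err)
		}
		if h := out.BudgetPerformanceHistory; h != nil {
			for _, amounts := range h.BudgetedAndActualAmountsList {
				fmt.Printf("budgeted=%v actual=%v period=%v\n",
					amounts.BudgetedAmount, amounts.ActualAmount, amounts.TimePeriod)
			}
		}
		if out.NextToken == nil {
			break
		}
		input.NextToken = out.NextToken
	}
}
```
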
@@ -846,7 +937,7 @@ func (c *Budgets) DescribeSubscribersForNotificationRequest(input *DescribeSubsc // DescribeSubscribersForNotification API operation for AWS Budgets. // -// Lists the subscribers associated with a notification. +// Lists the subscribers that are associated with a notification. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -898,8 +989,8 @@ const opUpdateBudget = "UpdateBudget" // UpdateBudgetRequest generates a "aws/request.Request" representing the // client's request for the UpdateBudget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -937,7 +1028,7 @@ func (c *Budgets) UpdateBudgetRequest(input *UpdateBudgetInput) (req *request.Re // UpdateBudget API operation for AWS Budgets. // // Updates a budget. You can change every part of a budget except for the budgetName -// and the calculatedSpend. When a budget is modified, the calculatedSpend drops +// and the calculatedSpend. When you modify a budget, the calculatedSpend drops // to zero until AWS has new usage data to use for forecasting. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -984,8 +1075,8 @@ const opUpdateNotification = "UpdateNotification" // UpdateNotificationRequest generates a "aws/request.Request" representing the // client's request for the UpdateNotification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1071,8 +1162,8 @@ const opUpdateSubscriber = "UpdateSubscriber" // UpdateSubscriberRequest generates a "aws/request.Request" representing the // client's request for the UpdateSubscriber operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1156,52 +1247,71 @@ func (c *Budgets) UpdateSubscriberWithContext(ctx aws.Context, input *UpdateSubs // Represents the output of the CreateBudget operation. The content consists // of the detailed metadata and data file information, and the current status -// of the budget. +// of the budget object. // -// The ARN pattern for a budget is: arn:aws:budgetservice::AccountId:budget/budgetName +// This is the ARN pattern for a budget: +// +// arn:aws:budgetservice::AccountId:budget/budgetName type Budget struct { _ struct{} `type:"structure"` - // The total amount of cost, usage, or RI utilization that you want to track - // with your budget. + // The total amount of cost, usage, RI utilization, or RI coverage that you + // want to track with your budget. 
// // BudgetLimit is required for cost or usage budgets, but optional for RI utilization - // budgets. RI utilization budgets default to the only valid value for RI utilization - // budgets, which is 100. + // or coverage budgets. RI utilization or coverage budgets default to 100, which + // is the only valid value for RI utilization or coverage budgets. BudgetLimit *Spend `type:"structure"` - // The name of a budget. Unique within accounts. : and \ characters are not - // allowed in the BudgetName. + // The name of a budget. The name must be unique within accounts. The : and + // \ characters aren't allowed in BudgetName. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` - // Whether this budget tracks monetary costs, usage, or RI utilization. + // Whether this budget tracks monetary costs, usage, RI utilization, or RI coverage. // // BudgetType is a required field BudgetType *string `type:"string" required:"true" enum:"BudgetType"` - // The actual and forecasted cost or usage being tracked by a budget. + // The actual and forecasted cost or usage that the budget tracks. CalculatedSpend *CalculatedSpend `type:"structure"` - // The cost filters applied to a budget, such as service or region. + // The cost filters, such as service or region, that are applied to a budget. + // + // AWS Budgets supports the following services as a filter for RI budgets: + // + // * Amazon Elastic Compute Cloud - Compute + // + // * Amazon Redshift + // + // * Amazon Relational Database Service + // + // * Amazon ElastiCache + // + // * Amazon Elasticsearch Service CostFilters map[string][]*string `type:"map"` - // The types of costs included in this budget. + // The types of costs that are included in this COST budget. + // + // USAGE, RI_UTILIZATION, and RI_COVERAGE budgets do not have CostTypes. CostTypes *CostTypes `type:"structure"` - // The period of time covered by a budget. Has a start date and an end date. - // The start date must come before the end date. There are no restrictions on - // the end date. + // The last time that you updated this budget. + LastUpdatedTime *time.Time `type:"timestamp"` + + // The period of time that is covered by a budget. The period has a start date + // and an end date. The start date must come before the end date. The end date + // must come before 06/15/87 00:00 UTC. // - // If you created your budget and didn't specify a start date, AWS defaults - // to the start of your chosen time period (i.e. DAILY, MONTHLY, QUARTERLY, - // ANNUALLY). For example, if you created your budget on January 24th 2018, - // chose DAILY, and didn't set a start date, AWS set your start date to 01/24/18 - // 00:00 UTC. If you chose MONTHLY, AWS set your start date to 01/01/18 00:00 - // UTC. If you didn't specify an end date, AWS set your end date to 06/15/87 - // 00:00 UTC. The defaults are the same for the AWS Billing and Cost Management - // console and the API. + // If you create your budget and don't specify a start date, AWS defaults to + // the start of your chosen time period (DAILY, MONTHLY, QUARTERLY, or ANNUALLY). + // For example, if you created your budget on January 24, 2018, chose DAILY, + // and didn't set a start date, AWS set your start date to 01/24/18 00:00 UTC. + // If you chose MONTHLY, AWS set your start date to 01/01/18 00:00 UTC. If you + // didn't specify an end date, AWS set your end date to 06/15/87 00:00 UTC. 
+ // The defaults are the same for the AWS Billing and Cost Management console + // and the API. // // You can change either date with the UpdateBudget operation. // @@ -1210,6 +1320,7 @@ type Budget struct { TimePeriod *TimePeriod `type:"structure"` // The length of time until a budget resets the actual and forecasted spend. + // DAILY is available only for RI_UTILIZATION and RI_COVERAGE budgets. // // TimeUnit is a required field TimeUnit *string `type:"string" required:"true" enum:"TimeUnit"` @@ -1231,6 +1342,9 @@ func (s *Budget) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.BudgetType == nil { invalidParams.Add(request.NewErrParamRequired("BudgetType")) } @@ -1290,6 +1404,12 @@ func (s *Budget) SetCostTypes(v *CostTypes) *Budget { return s } +// SetLastUpdatedTime sets the LastUpdatedTime field's value. +func (s *Budget) SetLastUpdatedTime(v time.Time) *Budget { + s.LastUpdatedTime = &v + return s +} + // SetTimePeriod sets the TimePeriod field's value. func (s *Budget) SetTimePeriod(v *TimePeriod) *Budget { s.TimePeriod = v @@ -1302,8 +1422,125 @@ func (s *Budget) SetTimeUnit(v string) *Budget { return s } -// The spend objects associated with this budget. The actualSpend tracks how -// much you've used, cost, usage, or RI units, and the forecastedSpend tracks +// A history of the state of a budget at the end of the budget's specified time +// period. +type BudgetPerformanceHistory struct { + _ struct{} `type:"structure"` + + // A string that represents the budget name. The ":" and "\" characters aren't + // allowed. + BudgetName *string `min:"1" type:"string"` + + // The type of a budget. It must be one of the following types: + // + // COST, USAGE, RI_UTILIZATION, or RI_COVERAGE. + BudgetType *string `type:"string" enum:"BudgetType"` + + // A list of amounts of cost or usage that you created budgets for, compared + // to your actual costs or usage. + BudgetedAndActualAmountsList []*BudgetedAndActualAmounts `type:"list"` + + // The history of the cost filters for a budget during the specified time period. + CostFilters map[string][]*string `type:"map"` + + // The history of the cost types for a budget during the specified time period. + CostTypes *CostTypes `type:"structure"` + + // The time unit of the budget, such as MONTHLY or QUARTERLY. + TimeUnit *string `type:"string" enum:"TimeUnit"` +} + +// String returns the string representation +func (s BudgetPerformanceHistory) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BudgetPerformanceHistory) GoString() string { + return s.String() +} + +// SetBudgetName sets the BudgetName field's value. +func (s *BudgetPerformanceHistory) SetBudgetName(v string) *BudgetPerformanceHistory { + s.BudgetName = &v + return s +} + +// SetBudgetType sets the BudgetType field's value. +func (s *BudgetPerformanceHistory) SetBudgetType(v string) *BudgetPerformanceHistory { + s.BudgetType = &v + return s +} + +// SetBudgetedAndActualAmountsList sets the BudgetedAndActualAmountsList field's value. +func (s *BudgetPerformanceHistory) SetBudgetedAndActualAmountsList(v []*BudgetedAndActualAmounts) *BudgetPerformanceHistory { + s.BudgetedAndActualAmountsList = v + return s +} + +// SetCostFilters sets the CostFilters field's value. 
+func (s *BudgetPerformanceHistory) SetCostFilters(v map[string][]*string) *BudgetPerformanceHistory { + s.CostFilters = v + return s +} + +// SetCostTypes sets the CostTypes field's value. +func (s *BudgetPerformanceHistory) SetCostTypes(v *CostTypes) *BudgetPerformanceHistory { + s.CostTypes = v + return s +} + +// SetTimeUnit sets the TimeUnit field's value. +func (s *BudgetPerformanceHistory) SetTimeUnit(v string) *BudgetPerformanceHistory { + s.TimeUnit = &v + return s +} + +// The amount of cost or usage that you created the budget for, compared to +// your actual costs or usage. +type BudgetedAndActualAmounts struct { + _ struct{} `type:"structure"` + + // Your actual costs or usage for a budget period. + ActualAmount *Spend `type:"structure"` + + // The amount of cost or usage that you created the budget for. + BudgetedAmount *Spend `type:"structure"` + + // The time period covered by this budget comparison. + TimePeriod *TimePeriod `type:"structure"` +} + +// String returns the string representation +func (s BudgetedAndActualAmounts) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BudgetedAndActualAmounts) GoString() string { + return s.String() +} + +// SetActualAmount sets the ActualAmount field's value. +func (s *BudgetedAndActualAmounts) SetActualAmount(v *Spend) *BudgetedAndActualAmounts { + s.ActualAmount = v + return s +} + +// SetBudgetedAmount sets the BudgetedAmount field's value. +func (s *BudgetedAndActualAmounts) SetBudgetedAmount(v *Spend) *BudgetedAndActualAmounts { + s.BudgetedAmount = v + return s +} + +// SetTimePeriod sets the TimePeriod field's value. +func (s *BudgetedAndActualAmounts) SetTimePeriod(v *TimePeriod) *BudgetedAndActualAmounts { + s.TimePeriod = v + return s +} + +// The spend objects that are associated with this budget. The actualSpend tracks +// how much you've used, cost, usage, or RI units, and the forecastedSpend tracks // how much you are predicted to spend if your current usage remains steady. // // For example, if it is the 20th of the month and you have spent 50 dollars @@ -1366,7 +1603,9 @@ func (s *CalculatedSpend) SetForecastedSpend(v *Spend) *CalculatedSpend { return s } -// The types of cost included in a budget, such as tax and subscriptions. +// The types of cost that are included in a COST budget, such as tax and subscriptions. +// +// USAGE, RI_UTILIZATION, and RI_COVERAGE budgets do not have CostTypes. type CostTypes struct { _ struct{} `type:"structure"` @@ -1420,7 +1659,7 @@ type CostTypes struct { // The default value is false. UseAmortized *bool `type:"boolean"` - // Specifies whether a budget uses blended rate. + // Specifies whether a budget uses a blended rate. // // The default value is false. UseBlended *bool `type:"boolean"` @@ -1518,7 +1757,7 @@ type CreateBudgetInput struct { // A notification that you want to associate with a budget. A budget can have // up to five notifications, and each notification can have one SNS subscriber - // and up to ten email subscribers. If you include notifications and subscribers + // and up to 10 email subscribers. If you include notifications and subscribers // in your CreateBudget call, AWS creates the notifications and subscribers // for you. 
NotificationsWithSubscribers []*NotificationWithSubscribers `type:"list"` @@ -1611,11 +1850,11 @@ type CreateNotificationInput struct { // AccountId is a required field AccountId *string `min:"12" type:"string" required:"true"` - // The name of the budget that you want AWS to notified you about. Budget names + // The name of the budget that you want AWS to notify you about. Budget names // must be unique within an account. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` // The notification that you want to create. // @@ -1623,7 +1862,7 @@ type CreateNotificationInput struct { Notification *Notification `type:"structure" required:"true"` // A list of subscribers that you want to associate with the notification. Each - // notification can have one SNS subscriber and up to ten email subscribers. + // notification can have one SNS subscriber and up to 10 email subscribers. // // Subscribers is a required field Subscribers []*Subscriber `min:"1" type:"list" required:"true"` @@ -1651,6 +1890,9 @@ func (s *CreateNotificationInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.Notification == nil { invalidParams.Add(request.NewErrParamRequired("Notification")) } @@ -1725,8 +1967,8 @@ func (s CreateNotificationOutput) GoString() string { type CreateSubscriberInput struct { _ struct{} `type:"structure"` - // The accountId associated with the budget that you want to create a subscriber - // for. + // The accountId that is associated with the budget that you want to create + // a subscriber for. // // AccountId is a required field AccountId *string `min:"12" type:"string" required:"true"` @@ -1735,7 +1977,7 @@ type CreateSubscriberInput struct { // unique within an account. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` // The notification that you want to create a subscriber for. // @@ -1770,6 +2012,9 @@ func (s *CreateSubscriberInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.Notification == nil { invalidParams.Add(request.NewErrParamRequired("Notification")) } @@ -1844,7 +2089,7 @@ type DeleteBudgetInput struct { // The name of the budget that you want to delete. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` } // String returns the string representation @@ -1869,6 +2114,9 @@ func (s *DeleteBudgetInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if invalidParams.Len() > 0 { return invalidParams @@ -1916,7 +2164,7 @@ type DeleteNotificationInput struct { // The name of the budget whose notification you want to delete. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` // The notification that you want to delete. 
// @@ -1946,6 +2194,9 @@ func (s *DeleteNotificationInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.Notification == nil { invalidParams.Add(request.NewErrParamRequired("Notification")) } @@ -2007,7 +2258,7 @@ type DeleteSubscriberInput struct { // The name of the budget whose subscriber you want to delete. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` // The notification whose subscriber you want to delete. // @@ -2042,6 +2293,9 @@ func (s *DeleteSubscriberInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.Notification == nil { invalidParams.Add(request.NewErrParamRequired("Notification")) } @@ -2117,7 +2371,7 @@ type DescribeBudgetInput struct { // The name of the budget that you want a description of. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` } // String returns the string representation @@ -2142,6 +2396,9 @@ func (s *DescribeBudgetInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if invalidParams.Len() > 0 { return invalidParams @@ -2185,6 +2442,134 @@ func (s *DescribeBudgetOutput) SetBudget(v *Budget) *DescribeBudgetOutput { return s } +type DescribeBudgetPerformanceHistoryInput struct { + _ struct{} `type:"structure"` + + // The account ID of the user. It should be a 12-digit number. + // + // AccountId is a required field + AccountId *string `min:"12" type:"string" required:"true"` + + // A string that represents the budget name. The ":" and "\" characters aren't + // allowed. + // + // BudgetName is a required field + BudgetName *string `min:"1" type:"string" required:"true"` + + // An integer that represents how many entries a paginated response contains. + // The maximum is 100. + MaxResults *int64 `min:"1" type:"integer"` + + // A generic string. + NextToken *string `type:"string"` + + // Retrieves how often the budget went into an ALARM state for the specified + // time period. + TimePeriod *TimePeriod `type:"structure"` +} + +// String returns the string representation +func (s DescribeBudgetPerformanceHistoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeBudgetPerformanceHistoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeBudgetPerformanceHistoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeBudgetPerformanceHistoryInput"} + if s.AccountId == nil { + invalidParams.Add(request.NewErrParamRequired("AccountId")) + } + if s.AccountId != nil && len(*s.AccountId) < 12 { + invalidParams.Add(request.NewErrParamMinLen("AccountId", 12)) + } + if s.BudgetName == nil { + invalidParams.Add(request.NewErrParamRequired("BudgetName")) + } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountId sets the AccountId field's value. +func (s *DescribeBudgetPerformanceHistoryInput) SetAccountId(v string) *DescribeBudgetPerformanceHistoryInput { + s.AccountId = &v + return s +} + +// SetBudgetName sets the BudgetName field's value. +func (s *DescribeBudgetPerformanceHistoryInput) SetBudgetName(v string) *DescribeBudgetPerformanceHistoryInput { + s.BudgetName = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeBudgetPerformanceHistoryInput) SetMaxResults(v int64) *DescribeBudgetPerformanceHistoryInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeBudgetPerformanceHistoryInput) SetNextToken(v string) *DescribeBudgetPerformanceHistoryInput { + s.NextToken = &v + return s +} + +// SetTimePeriod sets the TimePeriod field's value. +func (s *DescribeBudgetPerformanceHistoryInput) SetTimePeriod(v *TimePeriod) *DescribeBudgetPerformanceHistoryInput { + s.TimePeriod = v + return s +} + +type DescribeBudgetPerformanceHistoryOutput struct { + _ struct{} `type:"structure"` + + // The history of how often the budget has gone into an ALARM state. + // + // For DAILY budgets, the history saves the state of the budget for the last + // 60 days. For MONTHLY budgets, the history saves the state of the budget for + // the current month plus the last 12 months. For QUARTERLY budgets, the history + // saves the state of the budget for the last four quarters. + BudgetPerformanceHistory *BudgetPerformanceHistory `type:"structure"` + + // A generic string. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeBudgetPerformanceHistoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeBudgetPerformanceHistoryOutput) GoString() string { + return s.String() +} + +// SetBudgetPerformanceHistory sets the BudgetPerformanceHistory field's value. +func (s *DescribeBudgetPerformanceHistoryOutput) SetBudgetPerformanceHistory(v *BudgetPerformanceHistory) *DescribeBudgetPerformanceHistoryOutput { + s.BudgetPerformanceHistory = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeBudgetPerformanceHistoryOutput) SetNextToken(v string) *DescribeBudgetPerformanceHistoryOutput { + s.NextToken = &v + return s +} + // Request of DescribeBudgets type DescribeBudgetsInput struct { _ struct{} `type:"structure"` @@ -2195,10 +2580,12 @@ type DescribeBudgetsInput struct { // AccountId is a required field AccountId *string `min:"12" type:"string" required:"true"` - // Optional integer. Specifies the maximum number of results to return in response. 
+ // An optional integer that represents how many entries a paginated response + // contains. The maximum is 100. MaxResults *int64 `min:"1" type:"integer"` - // The pagination token that indicates the next set of results to retrieve. + // The pagination token that you include in your request to indicate the next + // set of results that you want to retrieve. NextToken *string `type:"string"` } @@ -2256,8 +2643,8 @@ type DescribeBudgetsOutput struct { // A list of budgets. Budgets []*Budget `type:"list"` - // The pagination token that indicates the next set of results that you can - // retrieve. + // The pagination token in the service response that indicates the next set + // of results that you can retrieve. NextToken *string `type:"string"` } @@ -2296,12 +2683,14 @@ type DescribeNotificationsForBudgetInput struct { // The name of the budget whose notifications you want descriptions of. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` - // Optional integer. Specifies the maximum number of results to return in response. + // An optional integer that represents how many entries a paginated response + // contains. The maximum is 100. MaxResults *int64 `min:"1" type:"integer"` - // The pagination token that indicates the next set of results to retrieve. + // The pagination token that you include in your request to indicate the next + // set of results that you want to retrieve. NextToken *string `type:"string"` } @@ -2327,6 +2716,9 @@ func (s *DescribeNotificationsForBudgetInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } @@ -2365,11 +2757,11 @@ func (s *DescribeNotificationsForBudgetInput) SetNextToken(v string) *DescribeNo type DescribeNotificationsForBudgetOutput struct { _ struct{} `type:"structure"` - // The pagination token that indicates the next set of results that you can - // retrieve. + // The pagination token in the service response that indicates the next set + // of results that you can retrieve. NextToken *string `type:"string"` - // A list of notifications associated with a budget. + // A list of notifications that are associated with a budget. Notifications []*Notification `type:"list"` } @@ -2408,12 +2800,14 @@ type DescribeSubscribersForNotificationInput struct { // The name of the budget whose subscribers you want descriptions of. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` - // Optional integer. Specifies the maximum number of results to return in response. + // An optional integer that represents how many entries a paginated response + // contains. The maximum is 100. MaxResults *int64 `min:"1" type:"integer"` - // The pagination token that indicates the next set of results to retrieve. + // The pagination token that you include in your request to indicate the next + // set of results that you want to retrieve. NextToken *string `type:"string"` // The notification whose subscribers you want to list. 
@@ -2444,6 +2838,9 @@ func (s *DescribeSubscribersForNotificationInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } @@ -2496,11 +2893,11 @@ func (s *DescribeSubscribersForNotificationInput) SetNotification(v *Notificatio type DescribeSubscribersForNotificationOutput struct { _ struct{} `type:"structure"` - // The pagination token that indicates the next set of results that you can - // retrieve. + // The pagination token in the service response that indicates the next set + // of results that you can retrieve. NextToken *string `type:"string"` - // A list of subscribers associated with a notification. + // A list of subscribers that are associated with a notification. Subscribers []*Subscriber `min:"1" type:"list"` } @@ -2526,41 +2923,52 @@ func (s *DescribeSubscribersForNotificationOutput) SetSubscribers(v []*Subscribe return s } -// A notification associated with a budget. A budget can have up to five notifications. +// A notification that is associated with a budget. A budget can have up to +// five notifications. // // Each notification must have at least one subscriber. A notification can have -// one SNS subscriber and up to ten email subscribers, for a total of 11 subscribers. +// one SNS subscriber and up to 10 email subscribers, for a total of 11 subscribers. // // For example, if you have a budget for 200 dollars and you want to be notified // when you go over 160 dollars, create a notification with the following parameters: // // * A notificationType of ACTUAL // +// * A thresholdType of PERCENTAGE +// // * A comparisonOperator of GREATER_THAN // // * A notification threshold of 80 type Notification struct { _ struct{} `type:"structure"` - // The comparison used for this notification. + // The comparison that is used for this notification. // // ComparisonOperator is a required field ComparisonOperator *string `type:"string" required:"true" enum:"ComparisonOperator"` + // Whether this notification is in alarm. If a budget notification is in the + // ALARM state, you have passed the set threshold for the budget. + NotificationState *string `type:"string" enum:"NotificationState"` + // Whether the notification is for how much you have spent (ACTUAL) or for how - // much you are forecasted to spend (FORECASTED). + // much you're forecasted to spend (FORECASTED). // // NotificationType is a required field NotificationType *string `type:"string" required:"true" enum:"NotificationType"` - // The threshold associated with a notification. Thresholds are always a percentage. + // The threshold that is associated with a notification. Thresholds are always + // a percentage. // // Threshold is a required field - Threshold *float64 `min:"0.1" type:"double" required:"true"` - - // The type of threshold for a notification. For ACTUAL thresholds, AWS notifies - // you when you go over the threshold, and for FORECASTED thresholds AWS notifies - // you when you are forecasted to go over the threshold. + Threshold *float64 `type:"double" required:"true"` + + // The type of threshold for a notification. For ABSOLUTE_VALUE thresholds, + // AWS notifies you when you go over or are forecasted to go over your total + // cost threshold. 
For PERCENTAGE thresholds, AWS notifies you when you go over + // or are forecasted to go over a certain percentage of your forecasted spend. + // For example, if you have a budget for 200 dollars and you have a PERCENTAGE + // threshold of 80%, AWS notifies you when you go over 160 dollars. ThresholdType *string `type:"string" enum:"ThresholdType"` } @@ -2586,9 +2994,6 @@ func (s *Notification) Validate() error { if s.Threshold == nil { invalidParams.Add(request.NewErrParamRequired("Threshold")) } - if s.Threshold != nil && *s.Threshold < 0.1 { - invalidParams.Add(request.NewErrParamMinValue("Threshold", 0.1)) - } if invalidParams.Len() > 0 { return invalidParams @@ -2602,6 +3007,12 @@ func (s *Notification) SetComparisonOperator(v string) *Notification { return s } +// SetNotificationState sets the NotificationState field's value. +func (s *Notification) SetNotificationState(v string) *Notification { + s.NotificationState = &v + return s +} + // SetNotificationType sets the NotificationType field's value. func (s *Notification) SetNotificationType(v string) *Notification { s.NotificationType = &v @@ -2621,11 +3032,11 @@ func (s *Notification) SetThresholdType(v string) *Notification { } // A notification with subscribers. A notification can have one SNS subscriber -// and up to ten email subscribers, for a total of 11 subscribers. +// and up to 10 email subscribers, for a total of 11 subscribers. type NotificationWithSubscribers struct { _ struct{} `type:"structure"` - // The notification associated with a budget. + // The notification that is associated with a budget. // // Notification is a required field Notification *Notification `type:"structure" required:"true"` @@ -2692,7 +3103,7 @@ func (s *NotificationWithSubscribers) SetSubscribers(v []*Subscriber) *Notificat return s } -// The amount of cost or usage being measured for a budget. +// The amount of cost or usage that is measured for a budget. // // For example, a Spend for 3 GB of S3 usage would have the following parameters: // @@ -2702,14 +3113,14 @@ func (s *NotificationWithSubscribers) SetSubscribers(v []*Subscriber) *Notificat type Spend struct { _ struct{} `type:"structure"` - // The cost or usage amount associated with a budget forecast, actual spend, - // or budget threshold. + // The cost or usage amount that is associated with a budget forecast, actual + // spend, or budget threshold. // // Amount is a required field - Amount *string `type:"string" required:"true"` + Amount *string `min:"1" type:"string" required:"true"` - // The unit of measurement used for the budget forecast, actual spend, or budget - // threshold, such as dollars or GB. + // The unit of measurement that is used for the budget forecast, actual spend, + // or budget threshold, such as dollars or GB. // // Unit is a required field Unit *string `min:"1" type:"string" required:"true"` @@ -2731,6 +3142,9 @@ func (s *Spend) Validate() error { if s.Amount == nil { invalidParams.Add(request.NewErrParamRequired("Amount")) } + if s.Amount != nil && len(*s.Amount) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Amount", 1)) + } if s.Unit == nil { invalidParams.Add(request.NewErrParamRequired("Unit")) } @@ -2757,7 +3171,7 @@ func (s *Spend) SetUnit(v string) *Spend { } // The subscriber to a budget notification. The subscriber consists of a subscription -// type and either an Amazon Simple Notification Service topic or an email address. +// type and either an Amazon SNS topic or an email address. 
// // For example, an email subscriber would have the following parameters: // @@ -2820,9 +3234,9 @@ func (s *Subscriber) SetSubscriptionType(v string) *Subscriber { return s } -// The period of time covered by a budget. Has a start date and an end date. -// The start date must come before the end date. There are no restrictions on -// the end date. +// The period of time that is covered by a budget. The period has a start date +// and an end date. The start date must come before the end date. There are +// no restrictions on the end date. type TimePeriod struct { _ struct{} `type:"structure"` @@ -2832,18 +3246,18 @@ type TimePeriod struct { // // After the end date, AWS deletes the budget and all associated notifications // and subscribers. You can change your end date with the UpdateBudget operation. - End *time.Time `type:"timestamp" timestampFormat:"unix"` + End *time.Time `type:"timestamp"` // The start date for a budget. If you created your budget and didn't specify - // a start date, AWS defaults to the start of your chosen time period (i.e. - // DAILY, MONTHLY, QUARTERLY, ANNUALLY). For example, if you created your budget - // on January 24th 2018, chose DAILY, and didn't set a start date, AWS set your + // a start date, AWS defaults to the start of your chosen time period (DAILY, + // MONTHLY, QUARTERLY, or ANNUALLY). For example, if you created your budget + // on January 24, 2018, chose DAILY, and didn't set a start date, AWS set your // start date to 01/24/18 00:00 UTC. If you chose MONTHLY, AWS set your start // date to 01/01/18 00:00 UTC. The defaults are the same for the AWS Billing // and Cost Management console and the API. // // You can change your start date with the UpdateBudget operation. - Start *time.Time `type:"timestamp" timestampFormat:"unix"` + Start *time.Time `type:"timestamp"` } // String returns the string representation @@ -2957,14 +3371,14 @@ type UpdateNotificationInput struct { // The name of the budget whose notification you want to update. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` // The updated notification to be associated with a budget. // // NewNotification is a required field NewNotification *Notification `type:"structure" required:"true"` - // The previous notification associated with a budget. + // The previous notification that is associated with a budget. // // OldNotification is a required field OldNotification *Notification `type:"structure" required:"true"` @@ -2992,6 +3406,9 @@ func (s *UpdateNotificationInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.NewNotification == nil { invalidParams.Add(request.NewErrParamRequired("NewNotification")) } @@ -3067,9 +3484,9 @@ type UpdateSubscriberInput struct { // The name of the budget whose subscriber you want to update. // // BudgetName is a required field - BudgetName *string `type:"string" required:"true"` + BudgetName *string `min:"1" type:"string" required:"true"` - // The updated subscriber associated with a budget notification. + // The updated subscriber that is associated with a budget notification. 
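
The `Notification` documentation above already spells out a worked example: a 200-dollar budget with an alert at 160 dollars, i.e. an `ACTUAL` notification with a `PERCENTAGE` threshold of 80 and a `GREATER_THAN` comparison. Expressed against the `CreateNotificationInput` shape shown earlier in this file, that looks roughly like the sketch below; the `Subscriber.Address` field and the `SubscriptionTypeEmail`/`ThresholdTypePercentage` enum values are not visible in these hunks and are assumptions from the rest of the generated package.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/budgets"
)

func main() {
	svc := budgets.New(session.Must(session.NewSession()))

	// Alert at 80% of actual spend on an existing 200-dollar budget,
	// mirroring the worked example in the Notification doc comment above.
	out, err := svc.CreateNotification(&budgets.CreateNotificationInput{
		AccountId:  aws.String("123456789012"),    // placeholder account ID
		BudgetName: aws.String("example-monthly"), // placeholder budget name
		Notification: &budgets.Notification{
			NotificationType:   aws.String(budgets.NotificationTypeActual),
			ComparisonOperator: aws.String(budgets.ComparisonOperatorGreaterThan),
			Threshold:          aws.Float64(80),
			ThresholdType:      aws.String(budgets.ThresholdTypePercentage), // assumed enum value
		},
		// A notification allows one SNS subscriber and up to 10 email subscribers.
		Subscribers: []*budgets.Subscriber{
			{
				SubscriptionType: aws.String(budgets.SubscriptionTypeEmail), // assumed enum value
				Address:          aws.String("billing-alerts@example.com"),  // assumed field name
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```
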
// // NewSubscriber is a required field NewSubscriber *Subscriber `type:"structure" required:"true"` @@ -3079,7 +3496,7 @@ type UpdateSubscriberInput struct { // Notification is a required field Notification *Notification `type:"structure" required:"true"` - // The previous subscriber associated with a budget notification. + // The previous subscriber that is associated with a budget notification. // // OldSubscriber is a required field OldSubscriber *Subscriber `type:"structure" required:"true"` @@ -3107,6 +3524,9 @@ func (s *UpdateSubscriberInput) Validate() error { if s.BudgetName == nil { invalidParams.Add(request.NewErrParamRequired("BudgetName")) } + if s.BudgetName != nil && len(*s.BudgetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BudgetName", 1)) + } if s.NewSubscriber == nil { invalidParams.Add(request.NewErrParamRequired("NewSubscriber")) } @@ -3183,7 +3603,9 @@ func (s UpdateSubscriberOutput) GoString() string { return s.String() } -// The type of a budget. It should be COST, USAGE, or RI_UTILIZATION. +// The type of a budget. It must be one of the following types: +// +// COST, USAGE, RI_UTILIZATION, or RI_COVERAGE. const ( // BudgetTypeUsage is a BudgetType enum value BudgetTypeUsage = "USAGE" @@ -3193,10 +3615,15 @@ const ( // BudgetTypeRiUtilization is a BudgetType enum value BudgetTypeRiUtilization = "RI_UTILIZATION" + + // BudgetTypeRiCoverage is a BudgetType enum value + BudgetTypeRiCoverage = "RI_COVERAGE" ) -// The comparison operator of a notification. Currently we support less than, -// equal to and greater than. +// The comparison operator of a notification. Currently the service supports +// the following operators: +// +// GREATER_THAN, LESS_THAN, EQUAL_TO const ( // ComparisonOperatorGreaterThan is a ComparisonOperator enum value ComparisonOperatorGreaterThan = "GREATER_THAN" @@ -3208,7 +3635,15 @@ const ( ComparisonOperatorEqualTo = "EQUAL_TO" ) -// The type of a notification. It should be ACTUAL or FORECASTED. +const ( + // NotificationStateOk is a NotificationState enum value + NotificationStateOk = "OK" + + // NotificationStateAlarm is a NotificationState enum value + NotificationStateAlarm = "ALARM" +) + +// The type of a notification. It must be ACTUAL or FORECASTED. const ( // NotificationTypeActual is a NotificationType enum value NotificationTypeActual = "ACTUAL" @@ -3235,7 +3670,7 @@ const ( ThresholdTypeAbsoluteValue = "ABSOLUTE_VALUE" ) -// The time unit of the budget. e.g. MONTHLY, QUARTERLY, etc. +// The time unit of the budget, such as MONTHLY or QUARTERLY. const ( // TimeUnitDaily is a TimeUnit enum value TimeUnitDaily = "DAILY" diff --git a/vendor/github.com/aws/aws-sdk-go/service/budgets/doc.go b/vendor/github.com/aws/aws-sdk-go/service/budgets/doc.go index 2fd19749636..918baf6684d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/budgets/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/budgets/doc.go @@ -3,39 +3,48 @@ // Package budgets provides the client and types for making API // requests to AWS Budgets. // -// Budgets enable you to plan your service usage, service costs, and your RI -// utilization. You can also track how close your plan is to your budgeted amount -// or to the free tier limits. Budgets provide you with a quick way to see your -// usage-to-date and current estimated charges from AWS and to see how much -// your predicted usage accrues in charges by the end of the month. 
Budgets -// also compare current estimates and charges to the amount that you indicated -// you want to use or spend and lets you see how much of your budget has been -// used. AWS updates your budget status several times a day. Budgets track your -// unblended costs, subscriptions, and refunds. You can create the following -// types of budgets: -// -// * Cost budgets allow you to say how much you want to spend on a service. -// -// * Usage budgets allow you to say how many hours you want to use for one -// or more services. -// -// * RI utilization budgets allow you to define a utilization threshold and -// receive alerts when RIs are tracking below that threshold. -// -// You can create up to 20,000 budgets per AWS master account. Your first two -// budgets are free of charge. Each additional budget costs $0.02 per day. You -// can set up optional notifications that warn you if you exceed, or are forecasted -// to exceed, your budgeted amount. You can have notifications sent to an Amazon -// SNS topic, to an email address, or to both. For more information, see Creating -// an Amazon SNS Topic for Budget Notifications (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-sns-policy.html). -// AWS Free Tier usage alerts via AWS Budgets are provided for you, and do not -// count toward your budget limits. +// The AWS Budgets API enables you to use AWS Budgets to plan your service usage, +// service costs, and instance reservations. The API reference provides descriptions, +// syntax, and usage examples for each of the actions and data types for AWS +// Budgets. +// +// Budgets provide you with a way to see the following information: +// +// * How close your plan is to your budgeted amount or to the free tier limits +// +// * Your usage-to-date, including how much you've used of your Reserved +// Instances (RIs) +// +// * Your current estimated charges from AWS, and how much your predicted +// usage will accrue in charges by the end of the month +// +// * How much of your budget has been used +// +// AWS updates your budget status several times a day. Budgets track your unblended +// costs, subscriptions, refunds, and RIs. You can create the following types +// of budgets: +// +// * Cost budgets - Plan how much you want to spend on a service. +// +// * Usage budgets - Plan how much you want to use one or more services. +// +// * RI utilization budgets - Define a utilization threshold, and receive +// alerts when your RI usage falls below that threshold. This lets you see +// if your RIs are unused or under-utilized. +// +// * RI coverage budgets - Define a coverage threshold, and receive alerts +// when the number of your instance hours that are covered by RIs fall below +// that threshold. This lets you see how much of your instance usage is covered +// by a reservation. // // Service Endpoint // // The AWS Budgets API provides the following endpoint: // -// * https://budgets.us-east-1.amazonaws.com +// * https://budgets.amazonaws.com +// +// For information about costs that are associated with the AWS Budgets API, +// see AWS Cost Management Pricing (https://aws.amazon.com/aws-cost-management/pricing/). // // See budgets package documentation for more information. 
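As context for the four budget types described in this overview, a small sketch of constructing the Budgets client from a session and referencing the newly added RI coverage type. The session helper usage follows the usual aws-sdk-go pattern and is not taken from this diff; `BudgetTypeCost` sits in the same enum block as the values shown here.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/budgets"
)

func main() {
	// budgets.New accepts a session, per the service.go constructor below.
	sess := session.Must(session.NewSession())
	client := budgets.New(sess)
	_ = client // the client exposes the budget, notification, and subscriber operations above

	// The budget types, including the RI_COVERAGE value added in this update.
	fmt.Println(budgets.ServiceID, []string{
		budgets.BudgetTypeCost, // defined alongside the values shown in this excerpt
		budgets.BudgetTypeUsage,
		budgets.BudgetTypeRiUtilization,
		budgets.BudgetTypeRiCoverage,
	})
}
```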
// https://docs.aws.amazon.com/sdk-for-go/api/service/budgets/ diff --git a/vendor/github.com/aws/aws-sdk-go/service/budgets/service.go b/vendor/github.com/aws/aws-sdk-go/service/budgets/service.go index d74d9c96a55..dc10c2aaa1b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/budgets/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/budgets/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "budgets" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "budgets" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Budgets" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Budgets client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloud9/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloud9/api.go index 9cda728ffe4..1d00e9a7109 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloud9/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloud9/api.go @@ -14,8 +14,8 @@ const opCreateEnvironmentEC2 = "CreateEnvironmentEC2" // CreateEnvironmentEC2Request generates a "aws/request.Request" representing the // client's request for the CreateEnvironmentEC2 operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -113,8 +113,8 @@ const opCreateEnvironmentMembership = "CreateEnvironmentMembership" // CreateEnvironmentMembershipRequest generates a "aws/request.Request" representing the // client's request for the CreateEnvironmentMembership operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -210,8 +210,8 @@ const opDeleteEnvironment = "DeleteEnvironment" // DeleteEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the DeleteEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -308,8 +308,8 @@ const opDeleteEnvironmentMembership = "DeleteEnvironmentMembership" // DeleteEnvironmentMembershipRequest generates a "aws/request.Request" representing the // client's request for the DeleteEnvironmentMembership operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -405,8 +405,8 @@ const opDescribeEnvironmentMemberships = "DescribeEnvironmentMemberships" // DescribeEnvironmentMembershipsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEnvironmentMemberships operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -559,8 +559,8 @@ const opDescribeEnvironmentStatus = "DescribeEnvironmentStatus" // DescribeEnvironmentStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeEnvironmentStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -656,8 +656,8 @@ const opDescribeEnvironments = "DescribeEnvironments" // DescribeEnvironmentsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEnvironments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -753,8 +753,8 @@ const opListEnvironments = "ListEnvironments" // ListEnvironmentsRequest generates a "aws/request.Request" representing the // client's request for the ListEnvironments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -906,8 +906,8 @@ const opUpdateEnvironment = "UpdateEnvironment" // UpdateEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the UpdateEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1003,8 +1003,8 @@ const opUpdateEnvironmentMembership = "UpdateEnvironmentMembership" // UpdateEnvironmentMembershipRequest generates a "aws/request.Request" representing the // client's request for the UpdateEnvironmentMembership operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1791,7 +1791,7 @@ type EnvironmentMember struct { // The time, expressed in epoch time format, when the environment member last // opened the environment. - LastAccess *time.Time `locationName:"lastAccess" type:"timestamp" timestampFormat:"unix"` + LastAccess *time.Time `locationName:"lastAccess" type:"timestamp"` // The type of environment member permissions associated with this environment // member. Available values include: diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloud9/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloud9/service.go index 1a01d2d9248..8b9d30d5440 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloud9/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloud9/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "cloud9" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "cloud9" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Cloud9" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Cloud9 client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go index 8a07b648e56..845bb9d5196 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/api.go @@ -17,8 +17,8 @@ const opCancelUpdateStack = "CancelUpdateStack" // CancelUpdateStackRequest generates a "aws/request.Request" representing the // client's request for the CancelUpdateStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -101,8 +101,8 @@ const opContinueUpdateRollback = "ContinueUpdateRollback" // ContinueUpdateRollbackRequest generates a "aws/request.Request" representing the // client's request for the ContinueUpdateRollback operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -192,8 +192,8 @@ const opCreateChangeSet = "CreateChangeSet" // CreateChangeSetRequest generates a "aws/request.Request" representing the // client's request for the CreateChangeSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -300,8 +300,8 @@ const opCreateStack = "CreateStack" // CreateStackRequest generates a "aws/request.Request" representing the // client's request for the CreateStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -393,8 +393,8 @@ const opCreateStackInstances = "CreateStackInstances" // CreateStackInstancesRequest generates a "aws/request.Request" representing the // client's request for the CreateStackInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -494,8 +494,8 @@ const opCreateStackSet = "CreateStackSet" // CreateStackSetRequest generates a "aws/request.Request" representing the // client's request for the CreateStackSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -581,8 +581,8 @@ const opDeleteChangeSet = "DeleteChangeSet" // DeleteChangeSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteChangeSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -666,8 +666,8 @@ const opDeleteStack = "DeleteStack" // DeleteStackRequest generates a "aws/request.Request" representing the // client's request for the DeleteStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -749,8 +749,8 @@ const opDeleteStackInstances = "DeleteStackInstances" // DeleteStackInstancesRequest generates a "aws/request.Request" representing the // client's request for the DeleteStackInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -842,8 +842,8 @@ const opDeleteStackSet = "DeleteStackSet" // DeleteStackSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteStackSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -929,8 +929,8 @@ const opDescribeAccountLimits = "DescribeAccountLimits" // DescribeAccountLimitsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAccountLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1004,8 +1004,8 @@ const opDescribeChangeSet = "DescribeChangeSet" // DescribeChangeSetRequest generates a "aws/request.Request" representing the // client's request for the DescribeChangeSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1083,12 +1083,98 @@ func (c *CloudFormation) DescribeChangeSetWithContext(ctx aws.Context, input *De return out, req.Send() } +const opDescribeStackDriftDetectionStatus = "DescribeStackDriftDetectionStatus" + +// DescribeStackDriftDetectionStatusRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStackDriftDetectionStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStackDriftDetectionStatus for more information on using the DescribeStackDriftDetectionStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the DescribeStackDriftDetectionStatusRequest method. +// req, resp := client.DescribeStackDriftDetectionStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackDriftDetectionStatus +func (c *CloudFormation) DescribeStackDriftDetectionStatusRequest(input *DescribeStackDriftDetectionStatusInput) (req *request.Request, output *DescribeStackDriftDetectionStatusOutput) { + op := &request.Operation{ + Name: opDescribeStackDriftDetectionStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStackDriftDetectionStatusInput{} + } + + output = &DescribeStackDriftDetectionStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStackDriftDetectionStatus API operation for AWS CloudFormation. +// +// Returns information about a stack drift detection operation. A stack drift +// detection operation detects whether a stack's actual configuration differs, +// or has drifted, from it's expected configuration, as defined in the stack +// template and any values specified as template parameters. A stack is considered +// to have drifted if one or more of its resources have drifted. For more information +// on stack and resource drift, see Detecting Unregulated Configuration Changes +// to Stacks and Resources (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). +// +// Use DetectStackDrift to initiate a stack drift detection operation. DetectStackDrift +// returns a StackDriftDetectionId you can use to monitor the progress of the +// operation using DescribeStackDriftDetectionStatus. Once the drift detection +// operation has completed, use DescribeStackResourceDrifts to return drift +// information about the stack and its resources. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStackDriftDetectionStatus for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackDriftDetectionStatus +func (c *CloudFormation) DescribeStackDriftDetectionStatus(input *DescribeStackDriftDetectionStatusInput) (*DescribeStackDriftDetectionStatusOutput, error) { + req, out := c.DescribeStackDriftDetectionStatusRequest(input) + return out, req.Send() +} + +// DescribeStackDriftDetectionStatusWithContext is the same as DescribeStackDriftDetectionStatus with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStackDriftDetectionStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackDriftDetectionStatusWithContext(ctx aws.Context, input *DescribeStackDriftDetectionStatusInput, opts ...request.Option) (*DescribeStackDriftDetectionStatusOutput, error) { + req, out := c.DescribeStackDriftDetectionStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDescribeStackEvents = "DescribeStackEvents" // DescribeStackEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribeStackEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1222,8 +1308,8 @@ const opDescribeStackInstance = "DescribeStackInstance" // DescribeStackInstanceRequest generates a "aws/request.Request" representing the // client's request for the DescribeStackInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1308,8 +1394,8 @@ const opDescribeStackResource = "DescribeStackResource" // DescribeStackResourceRequest generates a "aws/request.Request" representing the // client's request for the DescribeStackResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1381,12 +1467,153 @@ func (c *CloudFormation) DescribeStackResourceWithContext(ctx aws.Context, input return out, req.Send() } +const opDescribeStackResourceDrifts = "DescribeStackResourceDrifts" + +// DescribeStackResourceDriftsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStackResourceDrifts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStackResourceDrifts for more information on using the DescribeStackResourceDrifts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStackResourceDriftsRequest method. 
+// req, resp := client.DescribeStackResourceDriftsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackResourceDrifts +func (c *CloudFormation) DescribeStackResourceDriftsRequest(input *DescribeStackResourceDriftsInput) (req *request.Request, output *DescribeStackResourceDriftsOutput) { + op := &request.Operation{ + Name: opDescribeStackResourceDrifts, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeStackResourceDriftsInput{} + } + + output = &DescribeStackResourceDriftsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStackResourceDrifts API operation for AWS CloudFormation. +// +// Returns drift information for the resources that have been checked for drift +// in the specified stack. This includes actual and expected configuration values +// for resources where AWS CloudFormation detects configuration drift. +// +// For a given stack, there will be one StackResourceDrift for each stack resource +// that has been checked for drift. Resources that have not yet been checked +// for drift are not included. Resources that do not currently support drift +// detection are not checked, and so not included. For a list of resources that +// support drift detection, see Resources that Support Drift Detection (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift-resource-list.html). +// +// Use DetectStackResourceDrift to detect drift on individual resources, or +// DetectStackDrift to detect drift on all supported resources for a given stack. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DescribeStackResourceDrifts for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DescribeStackResourceDrifts +func (c *CloudFormation) DescribeStackResourceDrifts(input *DescribeStackResourceDriftsInput) (*DescribeStackResourceDriftsOutput, error) { + req, out := c.DescribeStackResourceDriftsRequest(input) + return out, req.Send() +} + +// DescribeStackResourceDriftsWithContext is the same as DescribeStackResourceDrifts with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStackResourceDrifts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackResourceDriftsWithContext(ctx aws.Context, input *DescribeStackResourceDriftsInput, opts ...request.Option) (*DescribeStackResourceDriftsOutput, error) { + req, out := c.DescribeStackResourceDriftsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +// DescribeStackResourceDriftsPages iterates over the pages of a DescribeStackResourceDrifts operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeStackResourceDrifts method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeStackResourceDrifts operation. +// pageNum := 0 +// err := client.DescribeStackResourceDriftsPages(params, +// func(page *DescribeStackResourceDriftsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFormation) DescribeStackResourceDriftsPages(input *DescribeStackResourceDriftsInput, fn func(*DescribeStackResourceDriftsOutput, bool) bool) error { + return c.DescribeStackResourceDriftsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeStackResourceDriftsPagesWithContext same as DescribeStackResourceDriftsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DescribeStackResourceDriftsPagesWithContext(ctx aws.Context, input *DescribeStackResourceDriftsInput, fn func(*DescribeStackResourceDriftsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeStackResourceDriftsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeStackResourceDriftsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeStackResourceDriftsOutput), !p.HasNextPage()) + } + return p.Err() +} + const opDescribeStackResources = "DescribeStackResources" // DescribeStackResourcesRequest generates a "aws/request.Request" representing the // client's request for the DescribeStackResources operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1476,8 +1703,8 @@ const opDescribeStackSet = "DescribeStackSet" // DescribeStackSetRequest generates a "aws/request.Request" representing the // client's request for the DescribeStackSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1555,8 +1782,8 @@ const opDescribeStackSetOperation = "DescribeStackSetOperation" // DescribeStackSetOperationRequest generates a "aws/request.Request" representing the // client's request for the DescribeStackSetOperation operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1637,8 +1864,8 @@ const opDescribeStacks = "DescribeStacks" // DescribeStacksRequest generates a "aws/request.Request" representing the // client's request for the DescribeStacks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1766,12 +1993,198 @@ func (c *CloudFormation) DescribeStacksPagesWithContext(ctx aws.Context, input * return p.Err() } +const opDetectStackDrift = "DetectStackDrift" + +// DetectStackDriftRequest generates a "aws/request.Request" representing the +// client's request for the DetectStackDrift operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DetectStackDrift for more information on using the DetectStackDrift +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DetectStackDriftRequest method. +// req, resp := client.DetectStackDriftRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DetectStackDrift +func (c *CloudFormation) DetectStackDriftRequest(input *DetectStackDriftInput) (req *request.Request, output *DetectStackDriftOutput) { + op := &request.Operation{ + Name: opDetectStackDrift, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DetectStackDriftInput{} + } + + output = &DetectStackDriftOutput{} + req = c.newRequest(op, input, output) + return +} + +// DetectStackDrift API operation for AWS CloudFormation. +// +// Detects whether a stack's actual configuration differs, or has drifted, from +// it's expected configuration, as defined in the stack template and any values +// specified as template parameters. For each resource in the stack that supports +// drift detection, AWS CloudFormation compares the actual configuration of +// the resource with its expected template configuration. Only resource properties +// explicitly defined in the stack template are checked for drift. A stack is +// considered to have drifted if one or more of its resources differ from their +// expected template configurations. For more information, see Detecting Unregulated +// Configuration Changes to Stacks and Resources (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). 
+// +// Use DetectStackDrift to detect drift on all supported resources for a given +// stack, or DetectStackResourceDrift to detect drift on individual resources. +// +// For a list of stack resources that currently support drift detection, see +// Resources that Support Drift Detection (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift-resource-list.html). +// +// DetectStackDrift can take up to several minutes, depending on the number +// of resources contained within the stack. Use DescribeStackDriftDetectionStatus +// to monitor the progress of a detect stack drift operation. Once the drift +// detection operation has completed, use DescribeStackResourceDrifts to return +// drift information about the stack and its resources. +// +// When detecting drift on a stack, AWS CloudFormation does not detect drift +// on any nested stacks belonging to that stack. Perform DetectStackDrift directly +// on the nested stack itself. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DetectStackDrift for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DetectStackDrift +func (c *CloudFormation) DetectStackDrift(input *DetectStackDriftInput) (*DetectStackDriftOutput, error) { + req, out := c.DetectStackDriftRequest(input) + return out, req.Send() +} + +// DetectStackDriftWithContext is the same as DetectStackDrift with the addition of +// the ability to pass a context and additional request options. +// +// See DetectStackDrift for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DetectStackDriftWithContext(ctx aws.Context, input *DetectStackDriftInput, opts ...request.Option) (*DetectStackDriftOutput, error) { + req, out := c.DetectStackDriftRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDetectStackResourceDrift = "DetectStackResourceDrift" + +// DetectStackResourceDriftRequest generates a "aws/request.Request" representing the +// client's request for the DetectStackResourceDrift operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DetectStackResourceDrift for more information on using the DetectStackResourceDrift +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DetectStackResourceDriftRequest method. 
+// req, resp := client.DetectStackResourceDriftRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DetectStackResourceDrift +func (c *CloudFormation) DetectStackResourceDriftRequest(input *DetectStackResourceDriftInput) (req *request.Request, output *DetectStackResourceDriftOutput) { + op := &request.Operation{ + Name: opDetectStackResourceDrift, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DetectStackResourceDriftInput{} + } + + output = &DetectStackResourceDriftOutput{} + req = c.newRequest(op, input, output) + return +} + +// DetectStackResourceDrift API operation for AWS CloudFormation. +// +// Returns information about whether a resource's actual configuration differs, +// or has drifted, from it's expected configuration, as defined in the stack +// template and any values specified as template parameters. This information +// includes actual and expected property values for resources in which AWS CloudFormation +// detects drift. Only resource properties explicitly defined in the stack template +// are checked for drift. For more information about stack and resource drift, +// see Detecting Unregulated Configuration Changes to Stacks and Resources (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). +// +// Use DetectStackResourceDrift to detect drift on individual resources, or +// DetectStackDrift to detect drift on all resources in a given stack that support +// drift detection. +// +// Resources that do not currently support drift detection cannot be checked. +// For a list of resources that support drift detection, see Resources that +// Support Drift Detection (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift-resource-list.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudFormation's +// API operation DetectStackResourceDrift for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/DetectStackResourceDrift +func (c *CloudFormation) DetectStackResourceDrift(input *DetectStackResourceDriftInput) (*DetectStackResourceDriftOutput, error) { + req, out := c.DetectStackResourceDriftRequest(input) + return out, req.Send() +} + +// DetectStackResourceDriftWithContext is the same as DetectStackResourceDrift with the addition of +// the ability to pass a context and additional request options. +// +// See DetectStackResourceDrift for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFormation) DetectStackResourceDriftWithContext(ctx aws.Context, input *DetectStackResourceDriftInput, opts ...request.Option) (*DetectStackResourceDriftOutput, error) { + req, out := c.DetectStackResourceDriftRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opEstimateTemplateCost = "EstimateTemplateCost" // EstimateTemplateCostRequest generates a "aws/request.Request" representing the // client's request for the EstimateTemplateCost operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1846,8 +2259,8 @@ const opExecuteChangeSet = "ExecuteChangeSet" // ExecuteChangeSetRequest generates a "aws/request.Request" representing the // client's request for the ExecuteChangeSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1949,8 +2362,8 @@ const opGetStackPolicy = "GetStackPolicy" // GetStackPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetStackPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2024,8 +2437,8 @@ const opGetTemplate = "GetTemplate" // GetTemplateRequest generates a "aws/request.Request" representing the // client's request for the GetTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2110,8 +2523,8 @@ const opGetTemplateSummary = "GetTemplateSummary" // GetTemplateSummaryRequest generates a "aws/request.Request" representing the // client's request for the GetTemplateSummary operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2200,8 +2613,8 @@ const opListChangeSets = "ListChangeSets" // ListChangeSetsRequest generates a "aws/request.Request" representing the // client's request for the ListChangeSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
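The drift-detection operations added above are designed to be used together: DetectStackDrift starts the operation, DescribeStackDriftDetectionStatus reports its progress, and DescribeStackResourceDrifts returns the per-resource results. A hedged sketch of that workflow follows; the field names (`StackName`, `StackDriftDetectionId`, `DetectionStatus`, `StackResourceDrifts`) and the `DETECTION_IN_PROGRESS` status value are assumed from the wider CloudFormation API shape rather than spelled out in this excerpt.

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	cfn := cloudformation.New(session.Must(session.NewSession()))

	// Kick off drift detection for a stack; "my-stack" is a placeholder name.
	detect, err := cfn.DetectStackDrift(&cloudformation.DetectStackDriftInput{
		StackName: aws.String("my-stack"), // field name assumed
	})
	if err != nil {
		panic(err)
	}

	// Poll the detection status until the operation finishes.
	for {
		status, err := cfn.DescribeStackDriftDetectionStatus(&cloudformation.DescribeStackDriftDetectionStatusInput{
			StackDriftDetectionId: detect.StackDriftDetectionId,
		})
		if err != nil {
			panic(err)
		}
		if aws.StringValue(status.DetectionStatus) != "DETECTION_IN_PROGRESS" { // status value assumed
			break
		}
		time.Sleep(5 * time.Second)
	}

	// Page through the per-resource drift results.
	err = cfn.DescribeStackResourceDriftsPages(&cloudformation.DescribeStackResourceDriftsInput{
		StackName: aws.String("my-stack"),
	}, func(page *cloudformation.DescribeStackResourceDriftsOutput, lastPage bool) bool {
		for _, drift := range page.StackResourceDrifts {
			fmt.Println(aws.StringValue(drift.LogicalResourceId), aws.StringValue(drift.StackResourceDriftStatus))
		}
		return true
	})
	if err != nil {
		panic(err)
	}
}
```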
@@ -2276,8 +2689,8 @@ const opListExports = "ListExports" // ListExportsRequest generates a "aws/request.Request" representing the // client's request for the ListExports operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2412,8 +2825,8 @@ const opListImports = "ListImports" // ListImportsRequest generates a "aws/request.Request" representing the // client's request for the ListImports operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2548,8 +2961,8 @@ const opListStackInstances = "ListStackInstances" // ListStackInstancesRequest generates a "aws/request.Request" representing the // client's request for the ListStackInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2629,8 +3042,8 @@ const opListStackResources = "ListStackResources" // ListStackResourcesRequest generates a "aws/request.Request" representing the // client's request for the ListStackResources operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2762,8 +3175,8 @@ const opListStackSetOperationResults = "ListStackSetOperationResults" // ListStackSetOperationResultsRequest generates a "aws/request.Request" representing the // client's request for the ListStackSetOperationResults operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2844,8 +3257,8 @@ const opListStackSetOperations = "ListStackSetOperations" // ListStackSetOperationsRequest generates a "aws/request.Request" representing the // client's request for the ListStackSetOperations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2923,8 +3336,8 @@ const opListStackSets = "ListStackSets" // ListStackSetsRequest generates a "aws/request.Request" representing the // client's request for the ListStackSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2998,8 +3411,8 @@ const opListStacks = "ListStacks" // ListStacksRequest generates a "aws/request.Request" representing the // client's request for the ListStacks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3132,8 +3545,8 @@ const opSetStackPolicy = "SetStackPolicy" // SetStackPolicyRequest generates a "aws/request.Request" representing the // client's request for the SetStackPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3208,8 +3621,8 @@ const opSignalResource = "SignalResource" // SignalResourceRequest generates a "aws/request.Request" representing the // client's request for the SignalResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3289,8 +3702,8 @@ const opStopStackSetOperation = "StopStackSetOperation" // StopStackSetOperationRequest generates a "aws/request.Request" representing the // client's request for the StopStackSetOperation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3374,8 +3787,8 @@ const opUpdateStack = "UpdateStack" // UpdateStackRequest generates a "aws/request.Request" representing the // client's request for the UpdateStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -3465,8 +3878,8 @@ const opUpdateStackInstances = "UpdateStackInstances" // UpdateStackInstancesRequest generates a "aws/request.Request" representing the // client's request for the UpdateStackInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3578,8 +3991,8 @@ const opUpdateStackSet = "UpdateStackSet" // UpdateStackSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateStackSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3618,7 +4031,8 @@ func (c *CloudFormation) UpdateStackSetRequest(input *UpdateStackSetInput) (req // UpdateStackSet API operation for AWS CloudFormation. // -// Updates the stack set and all associated stack instances. +// Updates the stack set, and associated stack instances in the specified accounts +// and regions. // // Even if the stack set operation created by updating the stack set fails (completely // or partially, below or above a specified failure tolerance), the stack set @@ -3650,6 +4064,9 @@ func (c *CloudFormation) UpdateStackSetRequest(input *UpdateStackSetInput) (req // * ErrCodeInvalidOperationException "InvalidOperationException" // The specified operation isn't valid. // +// * ErrCodeStackInstanceNotFoundException "StackInstanceNotFoundException" +// The specified stack instance doesn't exist. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/UpdateStackSet func (c *CloudFormation) UpdateStackSet(input *UpdateStackSetInput) (*UpdateStackSetOutput, error) { req, out := c.UpdateStackSetRequest(input) @@ -3676,8 +4093,8 @@ const opUpdateTerminationProtection = "UpdateTerminationProtection" // UpdateTerminationProtectionRequest generates a "aws/request.Request" representing the // client's request for the UpdateTerminationProtection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3719,10 +4136,10 @@ func (c *CloudFormation) UpdateTerminationProtectionRequest(input *UpdateTermina // Updates termination protection for the specified stack. If a user attempts // to delete a stack with termination protection enabled, the operation fails // and the stack remains unchanged. 
For more information, see Protecting a Stack -// From Being Deleted (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html) +// From Being Deleted (AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html) // in the AWS CloudFormation User Guide. // -// For nested stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html), +// For nested stacks (AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html), // termination protection is set on the root stack and cannot be changed directly // on the nested stack. // @@ -3758,8 +4175,8 @@ const opValidateTemplate = "ValidateTemplate" // ValidateTemplateRequest generates a "aws/request.Request" representing the // client's request for the ValidateTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4052,7 +4469,7 @@ type ChangeSetSummary struct { ChangeSetName *string `min:"1" type:"string"` // The start time when the change set was created, in UTC. - CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreationTime *time.Time `type:"timestamp"` // Descriptive information about the change set. Description *string `min:"1" type:"string"` @@ -4353,8 +4770,7 @@ type CreateChangeSetInput struct { NotificationARNs []*string `type:"list"` // A list of Parameter structures that specify input parameters for the change - // set. For more information, see the Parameter (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html) - // data type. + // set. For more information, see the Parameter data type. Parameters []*Parameter `type:"list"` // The template resource types that you have permissions to work with if you @@ -4608,30 +5024,74 @@ func (s *CreateChangeSetOutput) SetStackId(v string) *CreateChangeSetOutput { type CreateStackInput struct { _ struct{} `type:"structure"` - // A list of values that you must specify before AWS CloudFormation can create - // certain stacks. Some stack templates might include resources that can affect - // permissions in your AWS account, for example, by creating new AWS Identity - // and Access Management (IAM) users. For those stacks, you must explicitly - // acknowledge their capabilities by specifying this parameter. + // In some cases, you must explicity acknowledge that your stack template contains + // certain capabilities in order for AWS CloudFormation to create the stack. + // + // * CAPABILITY_IAM and CAPABILITY_NAMED_IAM + // + // Some stack templates might include resources that can affect permissions + // in your AWS account; for example, by creating new AWS Identity and Access + // Management (IAM) users. For those stacks, you must explicitly acknowledge + // this by specifying one of these capabilities. + // + // The following IAM resources require you to specify either the CAPABILITY_IAM + // or CAPABILITY_NAMED_IAM capability. + // + // If you have IAM resources, you can specify either capability. + // + // If you have IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. 
+ // + // + // If you don't specify either of these capabilities, AWS CloudFormation returns + // an InsufficientCapabilities error. // - // The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. The following - // resources require you to specify this parameter: AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html), - // AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html), - // AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html), - // AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html), - // AWS::IAM::Role (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html), - // AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html), - // and AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html). // If your stack template contains these resources, we recommend that you review - // all permissions associated with them and edit their permissions if necessary. + // all permissions associated with them and edit their permissions if necessary. // - // If you have IAM resources, you can specify either capability. If you have - // IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If - // you don't specify this parameter, this action returns an InsufficientCapabilities - // error. + // AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html) + // + // AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html) + // + // AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html) + // + // AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html) + // + // AWS::IAM::Role (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html) + // + // AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html) + // + // AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html) // // For more information, see Acknowledging IAM Resources in AWS CloudFormation - // Templates (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities). + // Templates (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities). + // + // * CAPABILITY_AUTO_EXPAND + // + // Some template contain macros. Macros perform custom processing on templates; + // this can include simple actions like find-and-replace operations, all + // the way to extensive transformations of entire templates. Because of this, + // users typically create a change set from the processed template, so that + // they can review the changes resulting from the macros before actually + // creating the stack. 
If your stack template contains one or more macros, + // and you choose to create a stack directly from the processed template, + // without first reviewing the resulting changes in a change set, you must + // acknowledge this capability. This includes the AWS::Include (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/create-reusable-transform-function-snippets-and-add-to-your-template-with-aws-include-transform.html) + // and AWS::Serverless (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-aws-serverless.html) + // transforms, which are macros hosted by AWS CloudFormation. + // + // Change sets do not currently support nested stacks. If you want to create + // a stack from a stack template that contains macros and nested stacks, + // you must create the stack directly from the template using this capability. + // + // You should only create stacks directly from a stack template that contains + // macros if you know what processing the macro performs. + // + // Each macro relies on an underlying Lambda service function for processing + // stack templates. Be aware that the Lambda function owner can update the + // function operation without AWS CloudFormation being notified. + // + // For more information, see Using AWS CloudFormation Macros to Perform Custom + // Processing on Templates (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html). Capabilities []*string `type:"list"` // A unique identifier for this CreateStack request. Specify this token if you @@ -5125,6 +5585,16 @@ func (s *CreateStackOutput) SetStackId(v string) *CreateStackOutput { type CreateStackSetInput struct { _ struct{} `type:"structure"` + // The Amazon Resource Number (ARN) of the IAM role to use to create this stack + // set. + // + // Specify an IAM role only if you are using customized administrator roles + // to control which users or groups can manage specific stack sets within the + // same administrator account. For more information, see Prerequisites: Granting + // Permissions for Stack Set Operations (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) + // in the AWS CloudFormation User Guide. + AdministrationRoleARN *string `min:"20" type:"string"` + // A list of values that you must specify before AWS CloudFormation can create // certain stack sets. Some stack set templates might include resources that // can affect permissions in your AWS account—for example, by creating new AWS @@ -5174,6 +5644,14 @@ type CreateStackSetInput struct { // stack set's purpose or other important information. Description *string `min:"1" type:"string"` + // The name of the IAM execution role to use to create the stack set. If you + // do not specify an execution role, AWS CloudFormation uses the AWSCloudFormationStackSetExecutionRole + // role for the stack set operation. + // + // Specify an IAM role only if you are using customized execution roles to control + // which stack resources users and groups can include in their stack sets. + ExecutionRoleName *string `min:"1" type:"string"` + // The input parameters for the stack set template. Parameters []*Parameter `type:"list"` @@ -5229,12 +5707,18 @@ func (s CreateStackSetInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
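> Editor's note: the expanded `Capabilities` documentation above distinguishes the IAM acknowledgements (`CAPABILITY_IAM` / `CAPABILITY_NAMED_IAM`) from the new `CAPABILITY_AUTO_EXPAND` acknowledgement for templates that use macros. A hedged sketch of a `CreateStack` call passing both kinds of acknowledgement; the stack name, template URL, and the choice of capabilities are placeholders, not anything required by the API:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	// Acknowledge that the template may create named IAM resources and that
	// it contains macros CloudFormation will expand without a change set.
	_, err := svc.CreateStack(&cloudformation.CreateStackInput{
		StackName:   aws.String("example-stack"),                                          // placeholder
		TemplateURL: aws.String("https://s3.amazonaws.com/example-bucket/template.yaml"), // placeholder
		Capabilities: aws.StringSlice([]string{
			"CAPABILITY_NAMED_IAM",
			"CAPABILITY_AUTO_EXPAND",
		}),
	})
	if err != nil {
		// Without the right acknowledgements the call fails with an
		// InsufficientCapabilities error, as described above.
		log.Fatal(err)
	}
}
```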
func (s *CreateStackSetInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreateStackSetInput"} + if s.AdministrationRoleARN != nil && len(*s.AdministrationRoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("AdministrationRoleARN", 20)) + } if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) } if s.Description != nil && len(*s.Description) < 1 { invalidParams.Add(request.NewErrParamMinLen("Description", 1)) } + if s.ExecutionRoleName != nil && len(*s.ExecutionRoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExecutionRoleName", 1)) + } if s.StackSetName == nil { invalidParams.Add(request.NewErrParamRequired("StackSetName")) } @@ -5261,6 +5745,12 @@ func (s *CreateStackSetInput) Validate() error { return nil } +// SetAdministrationRoleARN sets the AdministrationRoleARN field's value. +func (s *CreateStackSetInput) SetAdministrationRoleARN(v string) *CreateStackSetInput { + s.AdministrationRoleARN = &v + return s +} + // SetCapabilities sets the Capabilities field's value. func (s *CreateStackSetInput) SetCapabilities(v []*string) *CreateStackSetInput { s.Capabilities = v @@ -5279,6 +5769,12 @@ func (s *CreateStackSetInput) SetDescription(v string) *CreateStackSetInput { return s } +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *CreateStackSetInput) SetExecutionRoleName(v string) *CreateStackSetInput { + s.ExecutionRoleName = &v + return s +} + // SetParameters sets the Parameters field's value. func (s *CreateStackSetInput) SetParameters(v []*Parameter) *CreateStackSetInput { s.Parameters = v @@ -5871,7 +6367,7 @@ type DescribeChangeSetOutput struct { Changes []*Change `type:"list"` // The start time when the change set was created, in UTC. - CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreationTime *time.Time `type:"timestamp"` // Information about the change set. Description *string `min:"1" type:"string"` @@ -6026,6 +6522,169 @@ func (s *DescribeChangeSetOutput) SetTags(v []*Tag) *DescribeChangeSetOutput { return s } +type DescribeStackDriftDetectionStatusInput struct { + _ struct{} `type:"structure"` + + // The ID of the drift detection results of this operation. + // + // AWS CloudFormation generates new results, with a new drift detection ID, + // each time this operation is run. However, the number of drift results AWS + // CloudFormation retains for any given stack, and for how long, may vary. + // + // StackDriftDetectionId is a required field + StackDriftDetectionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeStackDriftDetectionStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackDriftDetectionStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
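> Editor's note: `CreateStackSetInput` gains `AdministrationRoleARN` and `ExecutionRoleName` in this update, for stack sets that use customized administration and execution roles. A minimal sketch of how a caller might supply them; every name and the ARN are placeholders, and omitting both fields keeps the default `AWSCloudFormationStackSetExecutionRole` behaviour described above:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	// Create a stack set administered and executed through custom IAM roles
	// instead of the defaults.
	_, err := svc.CreateStackSet(&cloudformation.CreateStackSetInput{
		StackSetName:          aws.String("example-stack-set"),                                    // placeholder
		TemplateURL:           aws.String("https://s3.amazonaws.com/example-bucket/set.yaml"),     // placeholder
		AdministrationRoleARN: aws.String("arn:aws:iam::123456789012:role/ExampleStackSetAdmin"),  // placeholder
		ExecutionRoleName:     aws.String("ExampleStackSetExecution"),                             // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
}
```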
+func (s *DescribeStackDriftDetectionStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStackDriftDetectionStatusInput"} + if s.StackDriftDetectionId == nil { + invalidParams.Add(request.NewErrParamRequired("StackDriftDetectionId")) + } + if s.StackDriftDetectionId != nil && len(*s.StackDriftDetectionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackDriftDetectionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackDriftDetectionId sets the StackDriftDetectionId field's value. +func (s *DescribeStackDriftDetectionStatusInput) SetStackDriftDetectionId(v string) *DescribeStackDriftDetectionStatusInput { + s.StackDriftDetectionId = &v + return s +} + +type DescribeStackDriftDetectionStatusOutput struct { + _ struct{} `type:"structure"` + + // The status of the stack drift detection operation. + // + // * DETECTION_COMPLETE: The stack drift detection operation has successfully + // completed for all resources in the stack that support drift detection. + // (Resources that do not currently support stack detection remain unchecked.) + // + // If you specified logical resource IDs for AWS CloudFormation to use as a + // filter for the stack drift detection operation, only the resources with + // those logical IDs are checked for drift. + // + // * DETECTION_FAILED: The stack drift detection operation has failed for + // at least one resource in the stack. Results will be available for resources + // on which AWS CloudFormation successfully completed drift detection. + // + // * DETECTION_IN_PROGRESS: The stack drift detection operation is currently + // in progress. + // + // DetectionStatus is a required field + DetectionStatus *string `type:"string" required:"true" enum:"StackDriftDetectionStatus"` + + // The reason the stack drift detection operation has its current status. + DetectionStatusReason *string `type:"string"` + + // Total number of stack resources that have drifted. This is NULL until the + // drift detection operation reaches a status of DETECTION_COMPLETE. This value + // will be 0 for stacks whose drift status is IN_SYNC. + DriftedStackResourceCount *int64 `type:"integer"` + + // The ID of the drift detection results of this operation. + // + // AWS CloudFormation generates new results, with a new drift detection ID, + // each time this operation is run. However, the number of reports AWS CloudFormation + // retains for any given stack, and for how long, may vary. + // + // StackDriftDetectionId is a required field + StackDriftDetectionId *string `min:"1" type:"string" required:"true"` + + // Status of the stack's actual configuration compared to its expected configuration. + // + // * DRIFTED: The stack differs from its expected template configuration. + // A stack is considered to have drifted if one or more of its resources + // have drifted. + // + // * NOT_CHECKED: AWS CloudFormation has not checked if the stack differs + // from its expected template configuration. + // + // * IN_SYNC: The stack's actual configuration matches its expected template + // configuration. + // + // * UNKNOWN: This value is reserved for future use. + StackDriftStatus *string `type:"string" enum:"StackDriftStatus"` + + // The ID of the stack. + // + // StackId is a required field + StackId *string `type:"string" required:"true"` + + // Time at which the stack drift detection operation was initiated. 
+ // + // Timestamp is a required field + Timestamp *time.Time `type:"timestamp" required:"true"` +} + +// String returns the string representation +func (s DescribeStackDriftDetectionStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackDriftDetectionStatusOutput) GoString() string { + return s.String() +} + +// SetDetectionStatus sets the DetectionStatus field's value. +func (s *DescribeStackDriftDetectionStatusOutput) SetDetectionStatus(v string) *DescribeStackDriftDetectionStatusOutput { + s.DetectionStatus = &v + return s +} + +// SetDetectionStatusReason sets the DetectionStatusReason field's value. +func (s *DescribeStackDriftDetectionStatusOutput) SetDetectionStatusReason(v string) *DescribeStackDriftDetectionStatusOutput { + s.DetectionStatusReason = &v + return s +} + +// SetDriftedStackResourceCount sets the DriftedStackResourceCount field's value. +func (s *DescribeStackDriftDetectionStatusOutput) SetDriftedStackResourceCount(v int64) *DescribeStackDriftDetectionStatusOutput { + s.DriftedStackResourceCount = &v + return s +} + +// SetStackDriftDetectionId sets the StackDriftDetectionId field's value. +func (s *DescribeStackDriftDetectionStatusOutput) SetStackDriftDetectionId(v string) *DescribeStackDriftDetectionStatusOutput { + s.StackDriftDetectionId = &v + return s +} + +// SetStackDriftStatus sets the StackDriftStatus field's value. +func (s *DescribeStackDriftDetectionStatusOutput) SetStackDriftStatus(v string) *DescribeStackDriftDetectionStatusOutput { + s.StackDriftStatus = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *DescribeStackDriftDetectionStatusOutput) SetStackId(v string) *DescribeStackDriftDetectionStatusOutput { + s.StackId = &v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *DescribeStackDriftDetectionStatusOutput) SetTimestamp(v time.Time) *DescribeStackDriftDetectionStatusOutput { + s.Timestamp = &v + return s +} + // The input for DescribeStackEvents action. type DescribeStackEventsInput struct { _ struct{} `type:"structure"` @@ -6204,6 +6863,143 @@ func (s *DescribeStackInstanceOutput) SetStackInstance(v *StackInstance) *Descri return s } +type DescribeStackResourceDriftsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to be returned with a single call. If the number + // of available results exceeds this maximum, the response includes a NextToken + // value that you can assign to the NextToken request parameter to get the next + // set of results. + MaxResults *int64 `min:"1" type:"integer"` + + // A string that identifies the next page of stack resource drift results. + NextToken *string `min:"1" type:"string"` + + // The name of the stack for which you want drift information. + // + // StackName is a required field + StackName *string `min:"1" type:"string" required:"true"` + + // The resource drift status values to use as filters for the resource drift + // results returned. + // + // * DELETED: The resource differs from its expected template configuration + // in that the resource has been deleted. + // + // * MODIFIED: One or more resource properties differ from their expected + // template values. + // + // * IN_SYNC: The resources's actual configuration matches its expected template + // configuration. + // + // * NOT_CHECKED: AWS CloudFormation does not currently return this value. 
+ StackResourceDriftStatusFilters []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s DescribeStackResourceDriftsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackResourceDriftsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeStackResourceDriftsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStackResourceDriftsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + if s.StackResourceDriftStatusFilters != nil && len(s.StackResourceDriftStatusFilters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackResourceDriftStatusFilters", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeStackResourceDriftsInput) SetMaxResults(v int64) *DescribeStackResourceDriftsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeStackResourceDriftsInput) SetNextToken(v string) *DescribeStackResourceDriftsInput { + s.NextToken = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DescribeStackResourceDriftsInput) SetStackName(v string) *DescribeStackResourceDriftsInput { + s.StackName = &v + return s +} + +// SetStackResourceDriftStatusFilters sets the StackResourceDriftStatusFilters field's value. +func (s *DescribeStackResourceDriftsInput) SetStackResourceDriftStatusFilters(v []*string) *DescribeStackResourceDriftsInput { + s.StackResourceDriftStatusFilters = v + return s +} + +type DescribeStackResourceDriftsOutput struct { + _ struct{} `type:"structure"` + + // If the request doesn't return all of the remaining results, NextToken is + // set to a token. To retrieve the next set of results, call DescribeStackResourceDrifts + // again and assign that token to the request object's NextToken parameter. + // If the request returns all results, NextToken is set to null. + NextToken *string `min:"1" type:"string"` + + // Drift information for the resources that have been checked for drift in the + // specified stack. This includes actual and expected configuration values for + // resources where AWS CloudFormation detects drift. + // + // For a given stack, there will be one StackResourceDrift for each stack resource + // that has been checked for drift. Resources that have not yet been checked + // for drift are not included. Resources that do not currently support drift + // detection are not checked, and so not included. For a list of resources that + // support drift detection, see Resources that Support Drift Detection (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift-resource-list.html). 
+ // + // StackResourceDrifts is a required field + StackResourceDrifts []*StackResourceDrift `type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeStackResourceDriftsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStackResourceDriftsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeStackResourceDriftsOutput) SetNextToken(v string) *DescribeStackResourceDriftsOutput { + s.NextToken = &v + return s +} + +// SetStackResourceDrifts sets the StackResourceDrifts field's value. +func (s *DescribeStackResourceDriftsOutput) SetStackResourceDrifts(v []*StackResourceDrift) *DescribeStackResourceDriftsOutput { + s.StackResourceDrifts = v + return s +} + // The input for DescribeStackResource action. type DescribeStackResourceInput struct { _ struct{} `type:"structure"` @@ -6609,45 +7405,209 @@ func (s *DescribeStacksOutput) SetStacks(v []*Stack) *DescribeStacksOutput { return s } -// The input for an EstimateTemplateCost action. -type EstimateTemplateCostInput struct { +type DetectStackDriftInput struct { _ struct{} `type:"structure"` - // A list of Parameter structures that specify input parameters. - Parameters []*Parameter `type:"list"` - - // Structure containing the template body with a minimum length of 1 byte and - // a maximum length of 51,200 bytes. (For more information, go to Template Anatomy - // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) - // in the AWS CloudFormation User Guide.) - // - // Conditional: You must pass TemplateBody or TemplateURL. If both are passed, - // only TemplateBody is used. - TemplateBody *string `min:"1" type:"string"` + // The logical names of any resources you want to use as filters. + LogicalResourceIds []*string `min:"1" type:"list"` - // Location of file containing the template body. The URL must point to a template - // that is located in an Amazon S3 bucket. For more information, go to Template - // Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) - // in the AWS CloudFormation User Guide. + // The name of the stack for which you want to detect drift. // - // Conditional: You must pass TemplateURL or TemplateBody. If both are passed, - // only TemplateBody is used. - TemplateURL *string `min:"1" type:"string"` + // StackName is a required field + StackName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s EstimateTemplateCostInput) String() string { +func (s DetectStackDriftInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EstimateTemplateCostInput) GoString() string { +func (s DetectStackDriftInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
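> Editor's note: the new `DescribeStackResourceDrifts` operation above returns per-resource drift records, optionally filtered by drift status, with the usual `NextToken` pagination. A small sketch, assuming a stack named `example-stack`, that pages through only the `MODIFIED` and `DELETED` resources:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	input := &cloudformation.DescribeStackResourceDriftsInput{
		StackName: aws.String("example-stack"), // placeholder
		// Only return resources that have actually drifted.
		StackResourceDriftStatusFilters: aws.StringSlice([]string{"MODIFIED", "DELETED"}),
		MaxResults:                      aws.Int64(100),
	}

	for {
		out, err := svc.DescribeStackResourceDrifts(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, d := range out.StackResourceDrifts {
			fmt.Printf("%s (%s): %s\n",
				aws.StringValue(d.LogicalResourceId),
				aws.StringValue(d.ResourceType),
				aws.StringValue(d.StackResourceDriftStatus))
		}
		// NextToken is nil once the last page has been returned.
		if out.NextToken == nil {
			break
		}
		input.NextToken = out.NextToken
	}
}
```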
-func (s *EstimateTemplateCostInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "EstimateTemplateCostInput"} +func (s *DetectStackDriftInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetectStackDriftInput"} + if s.LogicalResourceIds != nil && len(s.LogicalResourceIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogicalResourceIds", 1)) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogicalResourceIds sets the LogicalResourceIds field's value. +func (s *DetectStackDriftInput) SetLogicalResourceIds(v []*string) *DetectStackDriftInput { + s.LogicalResourceIds = v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DetectStackDriftInput) SetStackName(v string) *DetectStackDriftInput { + s.StackName = &v + return s +} + +type DetectStackDriftOutput struct { + _ struct{} `type:"structure"` + + // The ID of the drift detection results of this operation. + // + // AWS CloudFormation generates new results, with a new drift detection ID, + // each time this operation is run. However, the number of drift results AWS + // CloudFormation retains for any given stack, and for how long, may vary. + // + // StackDriftDetectionId is a required field + StackDriftDetectionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DetectStackDriftOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetectStackDriftOutput) GoString() string { + return s.String() +} + +// SetStackDriftDetectionId sets the StackDriftDetectionId field's value. +func (s *DetectStackDriftOutput) SetStackDriftDetectionId(v string) *DetectStackDriftOutput { + s.StackDriftDetectionId = &v + return s +} + +type DetectStackResourceDriftInput struct { + _ struct{} `type:"structure"` + + // The logical name of the resource for which to return drift information. + // + // LogicalResourceId is a required field + LogicalResourceId *string `type:"string" required:"true"` + + // The name of the stack to which the resource belongs. + // + // StackName is a required field + StackName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DetectStackResourceDriftInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetectStackResourceDriftInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DetectStackResourceDriftInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetectStackResourceDriftInput"} + if s.LogicalResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("LogicalResourceId")) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.StackName != nil && len(*s.StackName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StackName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. 
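> Editor's note: `DetectStackDrift` only starts the drift detection operation; the ID it returns is then fed to `DescribeStackDriftDetectionStatus` until detection finishes, as the status values documented above describe. A hedged polling sketch; the stack name and the five-second interval are arbitrary choices, not anything mandated by the API:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	// Kick off drift detection for the whole stack.
	start, err := svc.DetectStackDrift(&cloudformation.DetectStackDriftInput{
		StackName: aws.String("example-stack"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}

	// Poll the detection status until it is no longer in progress.
	for {
		status, err := svc.DescribeStackDriftDetectionStatus(&cloudformation.DescribeStackDriftDetectionStatusInput{
			StackDriftDetectionId: start.StackDriftDetectionId,
		})
		if err != nil {
			log.Fatal(err)
		}
		if aws.StringValue(status.DetectionStatus) != "DETECTION_IN_PROGRESS" {
			fmt.Printf("detection %s: stack is %s, %d resource(s) drifted\n",
				aws.StringValue(status.DetectionStatus),
				aws.StringValue(status.StackDriftStatus),
				aws.Int64Value(status.DriftedStackResourceCount))
			break
		}
		time.Sleep(5 * time.Second) // arbitrary polling interval
	}
}
```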
+func (s *DetectStackResourceDriftInput) SetLogicalResourceId(v string) *DetectStackResourceDriftInput { + s.LogicalResourceId = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *DetectStackResourceDriftInput) SetStackName(v string) *DetectStackResourceDriftInput { + s.StackName = &v + return s +} + +type DetectStackResourceDriftOutput struct { + _ struct{} `type:"structure"` + + // Information about whether the resource's actual configuration has drifted + // from its expected template configuration, including actual and expected property + // values and any differences detected. + // + // StackResourceDrift is a required field + StackResourceDrift *StackResourceDrift `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DetectStackResourceDriftOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetectStackResourceDriftOutput) GoString() string { + return s.String() +} + +// SetStackResourceDrift sets the StackResourceDrift field's value. +func (s *DetectStackResourceDriftOutput) SetStackResourceDrift(v *StackResourceDrift) *DetectStackResourceDriftOutput { + s.StackResourceDrift = v + return s +} + +// The input for an EstimateTemplateCost action. +type EstimateTemplateCostInput struct { + _ struct{} `type:"structure"` + + // A list of Parameter structures that specify input parameters. + Parameters []*Parameter `type:"list"` + + // Structure containing the template body with a minimum length of 1 byte and + // a maximum length of 51,200 bytes. (For more information, go to Template Anatomy + // (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide.) + // + // Conditional: You must pass TemplateBody or TemplateURL. If both are passed, + // only TemplateBody is used. + TemplateBody *string `min:"1" type:"string"` + + // Location of file containing the template body. The URL must point to a template + // that is located in an Amazon S3 bucket. For more information, go to Template + // Anatomy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) + // in the AWS CloudFormation User Guide. + // + // Conditional: You must pass TemplateURL or TemplateBody. If both are passed, + // only TemplateBody is used. + TemplateURL *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s EstimateTemplateCostInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EstimateTemplateCostInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *EstimateTemplateCostInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EstimateTemplateCostInput"} if s.TemplateBody != nil && len(*s.TemplateBody) < 1 { invalidParams.Add(request.NewErrParamMinLen("TemplateBody", 1)) } @@ -8284,6 +9244,120 @@ func (s *ParameterDeclaration) SetParameterType(v string) *ParameterDeclaration return s } +// Context information that enables AWS CloudFormation to uniquely identify +// a resource. AWS CloudFormation uses context key-value pairs in cases where +// a resource's logical and physical IDs are not enough to uniquely identify +// that resource. Each context key-value pair specifies a resource that contains +// the targeted resource. 
+type PhysicalResourceIdContextKeyValuePair struct { + _ struct{} `type:"structure"` + + // The resource context key. + // + // Key is a required field + Key *string `type:"string" required:"true"` + + // The resource context value. + // + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s PhysicalResourceIdContextKeyValuePair) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PhysicalResourceIdContextKeyValuePair) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *PhysicalResourceIdContextKeyValuePair) SetKey(v string) *PhysicalResourceIdContextKeyValuePair { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *PhysicalResourceIdContextKeyValuePair) SetValue(v string) *PhysicalResourceIdContextKeyValuePair { + s.Value = &v + return s +} + +// Information about a resource property whose actual value differs from its +// expected value, as defined in the stack template and any values specified +// as template parameters. These will be present only for resources whose StackResourceDriftStatus +// is MODIFIED. For more information, see Detecting Unregulated Configuration +// Changes to Stacks and Resources (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). +type PropertyDifference struct { + _ struct{} `type:"structure"` + + // The actual property value of the resource property. + // + // ActualValue is a required field + ActualValue *string `type:"string" required:"true"` + + // The type of property difference. + // + // * ADD: A value has been added to a resource property that is an array + // or list data type. + // + // * REMOVE: The property has been removed from the current resource configuration. + // + // * NOT_EQUAL: The current property value differs from its expected value + // (as defined in the stack template and any values specified as template + // parameters). + // + // DifferenceType is a required field + DifferenceType *string `type:"string" required:"true" enum:"DifferenceType"` + + // The expected property value of the resource property, as defined in the stack + // template and any values specified as template parameters. + // + // ExpectedValue is a required field + ExpectedValue *string `type:"string" required:"true"` + + // The fully-qualified path to the resource property. + // + // PropertyPath is a required field + PropertyPath *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s PropertyDifference) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PropertyDifference) GoString() string { + return s.String() +} + +// SetActualValue sets the ActualValue field's value. +func (s *PropertyDifference) SetActualValue(v string) *PropertyDifference { + s.ActualValue = &v + return s +} + +// SetDifferenceType sets the DifferenceType field's value. +func (s *PropertyDifference) SetDifferenceType(v string) *PropertyDifference { + s.DifferenceType = &v + return s +} + +// SetExpectedValue sets the ExpectedValue field's value. +func (s *PropertyDifference) SetExpectedValue(v string) *PropertyDifference { + s.ExpectedValue = &v + return s +} + +// SetPropertyPath sets the PropertyPath field's value. 
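> Editor's note: `DetectStackResourceDrift` returns a single `StackResourceDrift`, and when its status is `MODIFIED` the `PropertyDifferences` list carries the per-property deltas described above. A minimal sketch that prints them; the stack name and logical resource ID are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	out, err := svc.DetectStackResourceDrift(&cloudformation.DetectStackResourceDriftInput{
		StackName:         aws.String("example-stack"), // placeholder
		LogicalResourceId: aws.String("ExampleBucket"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}

	drift := out.StackResourceDrift
	fmt.Println("status:", aws.StringValue(drift.StackResourceDriftStatus))

	// PropertyDifferences is only populated when the status is MODIFIED.
	for _, diff := range drift.PropertyDifferences {
		fmt.Printf("%s %s: expected %q, actual %q\n",
			aws.StringValue(diff.DifferenceType),
			aws.StringValue(diff.PropertyPath),
			aws.StringValue(diff.ExpectedValue),
			aws.StringValue(diff.ActualValue))
	}
}
```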
+func (s *PropertyDifference) SetPropertyPath(v string) *PropertyDifference { + s.PropertyPath = &v + return s +} + // The ResourceChange structure describes the resource and the action that AWS // CloudFormation will perform on it if you execute this change set. type ResourceChange struct { @@ -8527,38 +9601,21 @@ func (s *ResourceTargetDefinition) SetRequiresRecreation(v string) *ResourceTarg // Rollback triggers enable you to have AWS CloudFormation monitor the state // of your application during stack creation and updating, and to roll back // that operation if the application breaches the threshold of any of the alarms -// you've specified. For each rollback trigger you create, you specify the Cloudwatch -// alarm that CloudFormation should monitor. CloudFormation monitors the specified -// alarms during the stack create or update operation, and for the specified -// amount of time after all resources have been deployed. If any of the alarms -// goes to ALERT state during the stack operation or the monitoring period, -// CloudFormation rolls back the entire stack operation. If the monitoring period -// expires without any alarms going to ALERT state, CloudFormation proceeds -// to dispose of old resources as usual. -// -// By default, CloudFormation only rolls back stack operations if an alarm goes -// to ALERT state, not INSUFFICIENT_DATA state. To have CloudFormation roll -// back the stack operation if an alarm goes to INSUFFICIENT_DATA state as well, -// edit the CloudWatch alarm to treat missing data as breaching. For more information, -// see Configuring How CloudWatch Alarms Treats Missing Data (http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html). -// -// AWS CloudFormation does not monitor rollback triggers when it rolls back -// a stack during an update operation. +// you've specified. For more information, see Monitor and Roll Back Stack Operations +// (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-rollback-triggers.html). type RollbackConfiguration struct { _ struct{} `type:"structure"` // The amount of time, in minutes, during which CloudFormation should monitor // all the rollback triggers after the stack creation or update operation deploys - // all necessary resources. If any of the alarms goes to ALERT state during - // the stack operation or this monitoring period, CloudFormation rolls back - // the entire stack operation. Then, for update operations, if the monitoring - // period expires without any alarms going to ALERT state CloudFormation proceeds - // to dispose of old resources as usual. + // all necessary resources. + // + // The default is 0 minutes. // // If you specify a monitoring period but do not specify any rollback triggers, // CloudFormation still waits the specified period of time before cleaning up - // old resources for update operations. You can use this monitoring period to - // perform any manual stack validation desired, and manually cancel the stack + // old resources after update operations. You can use this monitoring period + // to perform any manual stack validation desired, and manually cancel the stack // creation or update (using CancelUpdateStack (http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CancelUpdateStack.html), // for example) as necessary. // @@ -8576,20 +9633,20 @@ type RollbackConfiguration struct { // parameter, those triggers replace any list of triggers previously specified // for the stack. 
This means: // - // * If you don't specify this parameter, AWS CloudFormation uses the rollback - // triggers previously specified for this stack, if any. + // * To use the rollback triggers previously specified for this stack, if + // any, don't specify this parameter. // - // * If you specify any rollback triggers using this parameter, you must - // specify all the triggers that you want used for this stack, even triggers - // you've specifed before (for example, when creating the stack or during - // a previous stack update). Any triggers that you don't include in the updated - // list of triggers are no longer applied to the stack. + // * To specify new or updated rollback triggers, you must specify all the + // triggers that you want used for this stack, even triggers you've specifed + // before (for example, when creating the stack or during a previous stack + // update). Any triggers that you don't include in the updated list of triggers + // are no longer applied to the stack. // - // * If you specify an empty list, AWS CloudFormation removes all currently - // specified triggers. + // * To remove all currently specified triggers, specify an empty list for + // this parameter. // - // If a specified Cloudwatch alarm is missing, the entire stack operation fails - // and is rolled back. + // If a specified trigger is missing, the entire stack operation fails and is + // rolled back. RollbackTriggers []*RollbackTrigger `type:"list"` } @@ -8636,7 +9693,7 @@ func (s *RollbackConfiguration) SetRollbackTriggers(v []*RollbackTrigger) *Rollb } // A rollback trigger AWS CloudFormation monitors during creation and updating -// of stacks. If any of the alarms you specify goes to ALERT state during the +// of stacks. If any of the alarms you specify goes to ALARM state during the // stack operation or within the specified monitoring period afterwards, CloudFormation // rolls back the entire stack operation. type RollbackTrigger struct { @@ -8644,6 +9701,9 @@ type RollbackTrigger struct { // The Amazon Resource Name (ARN) of the rollback trigger. // + // If a specified trigger is missing, the entire stack operation fails and is + // rolled back. + // // Arn is a required field Arn *string `type:"string" required:"true"` @@ -8896,10 +9956,10 @@ type Stack struct { // The time at which the stack was created. // // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreationTime *time.Time `type:"timestamp" required:"true"` // The time the stack was deleted. - DeletionTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DeletionTime *time.Time `type:"timestamp"` // A user-defined description associated with the stack. Description *string `min:"1" type:"string"` @@ -8911,6 +9971,12 @@ type Stack struct { // * false: enable rollback DisableRollback *bool `type:"boolean"` + // Information on whether a stack's actual configuration differs, or has drifted, + // from it's expected configuration, as defined in the stack template and any + // values specified as template parameters. For more information, see Detecting + // Unregulated Configuration Changes to Stacks and Resources (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). + DriftInformation *StackDriftInformation `type:"structure"` + // Whether termination protection is enabled for the stack. 
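> Editor's note: the reworked `RollbackConfiguration` documentation above boils down to two points: pass the full trigger list on every update (an empty list clears all triggers), and the alarms roll back the operation when they go to ALARM state, as the corrected `RollbackTrigger` comment now says. A hypothetical `UpdateStack` call wiring one CloudWatch alarm as a rollback trigger with a 10-minute monitoring window; the alarm ARN, stack name, and template URL are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	_, err := svc.UpdateStack(&cloudformation.UpdateStackInput{
		StackName:   aws.String("example-stack"),                                          // placeholder
		TemplateURL: aws.String("https://s3.amazonaws.com/example-bucket/template.yaml"), // placeholder
		RollbackConfiguration: &cloudformation.RollbackConfiguration{
			// Keep monitoring the triggers for 10 minutes after deployment.
			MonitoringTimeInMinutes: aws.Int64(10),
			// This list replaces any previously configured triggers, so it must
			// contain every trigger that should remain in effect.
			RollbackTriggers: []*cloudformation.RollbackTrigger{
				{
					Arn:  aws.String("arn:aws:cloudwatch:us-east-1:123456789012:alarm:example-alarm"), // placeholder
					Type: aws.String("AWS::CloudWatch::Alarm"),
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```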
// // For nested stacks (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html), @@ -8922,7 +9988,7 @@ type Stack struct { // The time the stack was last updated. This field will only be returned if // the stack has been updated at least once. - LastUpdatedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastUpdatedTime *time.Time `type:"timestamp"` // SNS topic ARNs to which stack related events are published. NotificationARNs []*string `type:"list"` @@ -9026,6 +10092,12 @@ func (s *Stack) SetDisableRollback(v bool) *Stack { return s } +// SetDriftInformation sets the DriftInformation field's value. +func (s *Stack) SetDriftInformation(v *StackDriftInformation) *Stack { + s.DriftInformation = v + return s +} + // SetEnableTerminationProtection sets the EnableTerminationProtection field's value. func (s *Stack) SetEnableTerminationProtection(v bool) *Stack { s.EnableTerminationProtection = &v @@ -9116,6 +10188,110 @@ func (s *Stack) SetTimeoutInMinutes(v int64) *Stack { return s } +// Contains information about whether the stack's actual configuration differs, +// or has drifted, from its expected configuration, as defined in the stack +// template and any values specified as template parameters. A stack is considered +// to have drifted if one or more of its resources have drifted. +type StackDriftInformation struct { + _ struct{} `type:"structure"` + + // Most recent time when a drift detection operation was initiated on the stack, + // or any of its individual resources that support drift detection. + LastCheckTimestamp *time.Time `type:"timestamp"` + + // Status of the stack's actual configuration compared to its expected template + // configuration. + // + // * DRIFTED: The stack differs from its expected template configuration. + // A stack is considered to have drifted if one or more of its resources + // have drifted. + // + // * NOT_CHECKED: AWS CloudFormation has not checked if the stack differs + // from its expected template configuration. + // + // * IN_SYNC: The stack's actual configuration matches its expected template + // configuration. + // + // * UNKNOWN: This value is reserved for future use. + // + // StackDriftStatus is a required field + StackDriftStatus *string `type:"string" required:"true" enum:"StackDriftStatus"` +} + +// String returns the string representation +func (s StackDriftInformation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackDriftInformation) GoString() string { + return s.String() +} + +// SetLastCheckTimestamp sets the LastCheckTimestamp field's value. +func (s *StackDriftInformation) SetLastCheckTimestamp(v time.Time) *StackDriftInformation { + s.LastCheckTimestamp = &v + return s +} + +// SetStackDriftStatus sets the StackDriftStatus field's value. +func (s *StackDriftInformation) SetStackDriftStatus(v string) *StackDriftInformation { + s.StackDriftStatus = &v + return s +} + +// Contains information about whether the stack's actual configuration differs, +// or has drifted, from its expected configuration, as defined in the stack +// template and any values specified as template parameters. A stack is considered +// to have drifted if one or more of its resources have drifted. +type StackDriftInformationSummary struct { + _ struct{} `type:"structure"` + + // Most recent time when a drift detection operation was initiated on the stack, + // or any of its individual resources that support drift detection. 
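> Editor's note: the `Stack` type now carries a `DriftInformation` structure, so drift status recorded by a previous detection run can be read straight from `DescribeStacks` without starting a new detection. A small sketch that flags drifted stacks, treating a missing `DriftInformation` as not checked; only the first page of results is read to keep the example short:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	out, err := svc.DescribeStacks(&cloudformation.DescribeStacksInput{})
	if err != nil {
		log.Fatal(err)
	}

	for _, stack := range out.Stacks {
		status := "NOT_CHECKED"
		if stack.DriftInformation != nil {
			status = aws.StringValue(stack.DriftInformation.StackDriftStatus)
		}
		// Only report stacks whose last drift detection found differences.
		if status == "DRIFTED" {
			fmt.Println("drifted:", aws.StringValue(stack.StackName))
		}
	}
}
```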
+ LastCheckTimestamp *time.Time `type:"timestamp"` + + // Status of the stack's actual configuration compared to its expected template + // configuration. + // + // * DRIFTED: The stack differs from its expected template configuration. + // A stack is considered to have drifted if one or more of its resources + // have drifted. + // + // * NOT_CHECKED: AWS CloudFormation has not checked if the stack differs + // from its expected template configuration. + // + // * IN_SYNC: The stack's actual configuration matches its expected template + // configuration. + // + // * UNKNOWN: This value is reserved for future use. + // + // StackDriftStatus is a required field + StackDriftStatus *string `type:"string" required:"true" enum:"StackDriftStatus"` +} + +// String returns the string representation +func (s StackDriftInformationSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackDriftInformationSummary) GoString() string { + return s.String() +} + +// SetLastCheckTimestamp sets the LastCheckTimestamp field's value. +func (s *StackDriftInformationSummary) SetLastCheckTimestamp(v time.Time) *StackDriftInformationSummary { + s.LastCheckTimestamp = &v + return s +} + +// SetStackDriftStatus sets the StackDriftStatus field's value. +func (s *StackDriftInformationSummary) SetStackDriftStatus(v string) *StackDriftInformationSummary { + s.StackDriftStatus = &v + return s +} + // The StackEvent data type. type StackEvent struct { _ struct{} `type:"structure"` @@ -9173,7 +10349,7 @@ type StackEvent struct { // Time the status was updated. // // Timestamp is a required field - Timestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + Timestamp *time.Time `type:"timestamp" required:"true"` } // String returns the string representation @@ -9450,6 +10626,12 @@ type StackResource struct { // User defined description associated with the resource. Description *string `min:"1" type:"string"` + // Information about whether the resource's actual configuration differs, or + // has drifted, from its expected configuration, as defined in the stack template + // and any values specified as template parameters. For more information, see + // Detecting Unregulated Configuration Changes to Stacks and Resources (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). + DriftInformation *StackResourceDriftInformation `type:"structure"` + // The logical name of the resource specified in the template. // // LogicalResourceId is a required field @@ -9483,7 +10665,7 @@ type StackResource struct { // Time the status was updated. // // Timestamp is a required field - Timestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + Timestamp *time.Time `type:"timestamp" required:"true"` } // String returns the string representation @@ -9502,6 +10684,12 @@ func (s *StackResource) SetDescription(v string) *StackResource { return s } +// SetDriftInformation sets the DriftInformation field's value. +func (s *StackResource) SetDriftInformation(v *StackResourceDriftInformation) *StackResource { + s.DriftInformation = v + return s +} + // SetLogicalResourceId sets the LogicalResourceId field's value. func (s *StackResource) SetLogicalResourceId(v string) *StackResource { s.LogicalResourceId = &v @@ -9557,10 +10745,16 @@ type StackResourceDetail struct { // User defined description associated with the resource. 
Description *string `min:"1" type:"string"` + // Information about whether the resource's actual configuration differs, or + // has drifted, from its expected configuration, as defined in the stack template + // and any values specified as template parameters. For more information, see + // Detecting Unregulated Configuration Changes to Stacks and Resources (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). + DriftInformation *StackResourceDriftInformation `type:"structure"` + // Time the status was updated. // // LastUpdatedTimestamp is a required field - LastUpdatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + LastUpdatedTimestamp *time.Time `type:"timestamp" required:"true"` // The logical name of the resource specified in the template. // @@ -9614,6 +10808,12 @@ func (s *StackResourceDetail) SetDescription(v string) *StackResourceDetail { return s } +// SetDriftInformation sets the DriftInformation field's value. +func (s *StackResourceDetail) SetDriftInformation(v *StackResourceDriftInformation) *StackResourceDetail { + s.DriftInformation = v + return s +} + // SetLastUpdatedTimestamp sets the LastUpdatedTimestamp field's value. func (s *StackResourceDetail) SetLastUpdatedTimestamp(v time.Time) *StackResourceDetail { s.LastUpdatedTimestamp = &v @@ -9668,14 +10868,281 @@ func (s *StackResourceDetail) SetStackName(v string) *StackResourceDetail { return s } +// Contains the drift information for a resource that has been checked for drift. +// This includes actual and expected property values for resources in which +// AWS CloudFormation has detected drift. Only resource properties explicitly +// defined in the stack template are checked for drift. For more information, +// see Detecting Unregulated Configuration Changes to Stacks and Resources (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). +// +// Resources that do not currently support drift detection cannot be checked. +// For a list of resources that support drift detection, see Resources that +// Support Drift Detection (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift-resource-list.html). +// +// Use DetectStackResourceDrift to detect drift on individual resources, or +// DetectStackDrift to detect drift on all resources in a given stack that support +// drift detection. +type StackResourceDrift struct { + _ struct{} `type:"structure"` + + // A JSON structure containing the actual property values of the stack resource. + // + // For resources whose StackResourceDriftStatus is DELETED, this structure will + // not be present. + ActualProperties *string `type:"string"` + + // A JSON structure containing the expected property values of the stack resource, + // as defined in the stack template and any values specified as template parameters. + // + // For resources whose StackResourceDriftStatus is DELETED, this structure will + // not be present. + ExpectedProperties *string `type:"string"` + + // The logical name of the resource specified in the template. + // + // LogicalResourceId is a required field + LogicalResourceId *string `type:"string" required:"true"` + + // The name or unique identifier that corresponds to a physical instance ID + // of a resource supported by AWS CloudFormation. 
+ PhysicalResourceId *string `type:"string"` + + // Context information that enables AWS CloudFormation to uniquely identify + // a resource. AWS CloudFormation uses context key-value pairs in cases where + // a resource's logical and physical IDs are not enough to uniquely identify + // that resource. Each context key-value pair specifies a unique resource that + // contains the targeted resource. + PhysicalResourceIdContext []*PhysicalResourceIdContextKeyValuePair `type:"list"` + + // A collection of the resource properties whose actual values differ from their + // expected values. These will be present only for resources whose StackResourceDriftStatus + // is MODIFIED. + PropertyDifferences []*PropertyDifference `type:"list"` + + // The type of the resource. + // + // ResourceType is a required field + ResourceType *string `min:"1" type:"string" required:"true"` + + // The ID of the stack. + // + // StackId is a required field + StackId *string `type:"string" required:"true"` + + // Status of the resource's actual configuration compared to its expected configuration + // + // * DELETED: The resource differs from its expected template configuration + // because the resource has been deleted. + // + // * MODIFIED: One or more resource properties differ from their expected + // values (as defined in the stack template and any values specified as template + // parameters). + // + // * IN_SYNC: The resources's actual configuration matches its expected template + // configuration. + // + // * NOT_CHECKED: AWS CloudFormation does not currently return this value. + // + // StackResourceDriftStatus is a required field + StackResourceDriftStatus *string `type:"string" required:"true" enum:"StackResourceDriftStatus"` + + // Time at which AWS CloudFormation performed drift detection on the stack resource. + // + // Timestamp is a required field + Timestamp *time.Time `type:"timestamp" required:"true"` +} + +// String returns the string representation +func (s StackResourceDrift) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackResourceDrift) GoString() string { + return s.String() +} + +// SetActualProperties sets the ActualProperties field's value. +func (s *StackResourceDrift) SetActualProperties(v string) *StackResourceDrift { + s.ActualProperties = &v + return s +} + +// SetExpectedProperties sets the ExpectedProperties field's value. +func (s *StackResourceDrift) SetExpectedProperties(v string) *StackResourceDrift { + s.ExpectedProperties = &v + return s +} + +// SetLogicalResourceId sets the LogicalResourceId field's value. +func (s *StackResourceDrift) SetLogicalResourceId(v string) *StackResourceDrift { + s.LogicalResourceId = &v + return s +} + +// SetPhysicalResourceId sets the PhysicalResourceId field's value. +func (s *StackResourceDrift) SetPhysicalResourceId(v string) *StackResourceDrift { + s.PhysicalResourceId = &v + return s +} + +// SetPhysicalResourceIdContext sets the PhysicalResourceIdContext field's value. +func (s *StackResourceDrift) SetPhysicalResourceIdContext(v []*PhysicalResourceIdContextKeyValuePair) *StackResourceDrift { + s.PhysicalResourceIdContext = v + return s +} + +// SetPropertyDifferences sets the PropertyDifferences field's value. +func (s *StackResourceDrift) SetPropertyDifferences(v []*PropertyDifference) *StackResourceDrift { + s.PropertyDifferences = v + return s +} + +// SetResourceType sets the ResourceType field's value. 
+func (s *StackResourceDrift) SetResourceType(v string) *StackResourceDrift { + s.ResourceType = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *StackResourceDrift) SetStackId(v string) *StackResourceDrift { + s.StackId = &v + return s +} + +// SetStackResourceDriftStatus sets the StackResourceDriftStatus field's value. +func (s *StackResourceDrift) SetStackResourceDriftStatus(v string) *StackResourceDrift { + s.StackResourceDriftStatus = &v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *StackResourceDrift) SetTimestamp(v time.Time) *StackResourceDrift { + s.Timestamp = &v + return s +} + +// Contains information about whether the resource's actual configuration differs, +// or has drifted, from its expected configuration. +type StackResourceDriftInformation struct { + _ struct{} `type:"structure"` + + // When AWS CloudFormation last checked if the resource had drifted from its + // expected configuration. + LastCheckTimestamp *time.Time `type:"timestamp"` + + // Status of the resource's actual configuration compared to its expected configuration + // + // * DELETED: The resource differs from its expected configuration in that + // it has been deleted. + // + // * MODIFIED: The resource differs from its expected configuration. + // + // * NOT_CHECKED: AWS CloudFormation has not checked if the resource differs + // from its expected configuration. + // + // Any resources that do not currently support drift detection have a status + // of NOT_CHECKED. For more information, see Resources that Support Drift + // Detection (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift-resource-list.html). + // + // + // * IN_SYNC: The resources's actual configuration matches its expected configuration. + // + // StackResourceDriftStatus is a required field + StackResourceDriftStatus *string `type:"string" required:"true" enum:"StackResourceDriftStatus"` +} + +// String returns the string representation +func (s StackResourceDriftInformation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackResourceDriftInformation) GoString() string { + return s.String() +} + +// SetLastCheckTimestamp sets the LastCheckTimestamp field's value. +func (s *StackResourceDriftInformation) SetLastCheckTimestamp(v time.Time) *StackResourceDriftInformation { + s.LastCheckTimestamp = &v + return s +} + +// SetStackResourceDriftStatus sets the StackResourceDriftStatus field's value. +func (s *StackResourceDriftInformation) SetStackResourceDriftStatus(v string) *StackResourceDriftInformation { + s.StackResourceDriftStatus = &v + return s +} + +// Summarizes information about whether the resource's actual configuration +// differs, or has drifted, from its expected configuration. +type StackResourceDriftInformationSummary struct { + _ struct{} `type:"structure"` + + // When AWS CloudFormation last checked if the resource had drifted from its + // expected configuration. + LastCheckTimestamp *time.Time `type:"timestamp"` + + // Status of the resource's actual configuration compared to its expected configuration + // + // * DELETED: The resource differs from its expected configuration in that + // it has been deleted. + // + // * MODIFIED: The resource differs from its expected configuration. + // + // * NOT_CHECKED: AWS CloudFormation has not checked if the resource differs + // from its expected configuration. 
+ // + // Any resources that do not currently support drift detection have a status + // of NOT_CHECKED. For more information, see Resources that Support Drift + // Detection (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift-resource-list.html). + // If you performed an ContinueUpdateRollback operation on a stack, any resources + // included in ResourcesToSkip will also have a status of NOT_CHECKED. For + // more information on skipping resources during rollback operations, see + // Continue Rolling Back an Update (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html) + // in the AWS CloudFormation User Guide. + // + // * IN_SYNC: The resources's actual configuration matches its expected configuration. + // + // StackResourceDriftStatus is a required field + StackResourceDriftStatus *string `type:"string" required:"true" enum:"StackResourceDriftStatus"` +} + +// String returns the string representation +func (s StackResourceDriftInformationSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StackResourceDriftInformationSummary) GoString() string { + return s.String() +} + +// SetLastCheckTimestamp sets the LastCheckTimestamp field's value. +func (s *StackResourceDriftInformationSummary) SetLastCheckTimestamp(v time.Time) *StackResourceDriftInformationSummary { + s.LastCheckTimestamp = &v + return s +} + +// SetStackResourceDriftStatus sets the StackResourceDriftStatus field's value. +func (s *StackResourceDriftInformationSummary) SetStackResourceDriftStatus(v string) *StackResourceDriftInformationSummary { + s.StackResourceDriftStatus = &v + return s +} + // Contains high-level information about the specified stack resource. type StackResourceSummary struct { _ struct{} `type:"structure"` + // Information about whether the resource's actual configuration differs, or + // has drifted, from its expected configuration, as defined in the stack template + // and any values specified as template parameters. For more information, see + // Detecting Unregulated Configuration Changes to Stacks and Resources (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). + DriftInformation *StackResourceDriftInformationSummary `type:"structure"` + // Time the status was updated. // // LastUpdatedTimestamp is a required field - LastUpdatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + LastUpdatedTimestamp *time.Time `type:"timestamp" required:"true"` // The logical name of the resource specified in the template. // @@ -9712,6 +11179,12 @@ func (s StackResourceSummary) GoString() string { return s.String() } +// SetDriftInformation sets the DriftInformation field's value. +func (s *StackResourceSummary) SetDriftInformation(v *StackResourceDriftInformationSummary) *StackResourceSummary { + s.DriftInformation = v + return s +} + // SetLastUpdatedTimestamp sets the LastUpdatedTimestamp field's value. func (s *StackResourceSummary) SetLastUpdatedTimestamp(v time.Time) *StackResourceSummary { s.LastUpdatedTimestamp = &v @@ -9755,6 +11228,15 @@ func (s *StackResourceSummary) SetResourceType(v string) *StackResourceSummary { type StackSet struct { _ struct{} `type:"structure"` + // The Amazon Resource Number (ARN) of the IAM role used to create or update + // the stack set. 
+ // + // Use customized administrator roles to control which users or groups can manage + // specific stack sets within the same administrator account. For more information, + // see Prerequisites: Granting Permissions for Stack Set Operations (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) + // in the AWS CloudFormation User Guide. + AdministrationRoleARN *string `min:"20" type:"string"` + // The capabilities that are allowed in the stack set. Some stack set templates // might include resources that can affect permissions in your AWS account—for // example, by creating new AWS Identity and Access Management (IAM) users. @@ -9766,9 +11248,18 @@ type StackSet struct { // or updated. Description *string `min:"1" type:"string"` + // The name of the IAM execution role used to create or update the stack set. + // + // Use customized execution roles to control which stack resources users and + // groups can include in their stack sets. + ExecutionRoleName *string `min:"1" type:"string"` + // A list of input parameters for a stack set. Parameters []*Parameter `type:"list"` + // The Amazon Resource Number (ARN) of the stack set. + StackSetARN *string `type:"string"` + // The ID of the stack set. StackSetId *string `type:"string"` @@ -9797,6 +11288,12 @@ func (s StackSet) GoString() string { return s.String() } +// SetAdministrationRoleARN sets the AdministrationRoleARN field's value. +func (s *StackSet) SetAdministrationRoleARN(v string) *StackSet { + s.AdministrationRoleARN = &v + return s +} + // SetCapabilities sets the Capabilities field's value. func (s *StackSet) SetCapabilities(v []*string) *StackSet { s.Capabilities = v @@ -9809,12 +11306,24 @@ func (s *StackSet) SetDescription(v string) *StackSet { return s } +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *StackSet) SetExecutionRoleName(v string) *StackSet { + s.ExecutionRoleName = &v + return s +} + // SetParameters sets the Parameters field's value. func (s *StackSet) SetParameters(v []*Parameter) *StackSet { s.Parameters = v return s } +// SetStackSetARN sets the StackSetARN field's value. +func (s *StackSet) SetStackSetARN(v string) *StackSet { + s.StackSetARN = &v + return s +} + // SetStackSetId sets the StackSetId field's value. func (s *StackSet) SetStackSetId(v string) *StackSet { s.StackSetId = &v @@ -9855,17 +11364,32 @@ type StackSetOperation struct { // itself, as well as all associated stack set instances. Action *string `type:"string" enum:"StackSetOperationAction"` + // The Amazon Resource Number (ARN) of the IAM role used to perform this stack + // set operation. + // + // Use customized administrator roles to control which users or groups can manage + // specific stack sets within the same administrator account. For more information, + // see Define Permissions for Multiple Administrators (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) + // in the AWS CloudFormation User Guide. + AdministrationRoleARN *string `min:"20" type:"string"` + // The time at which the operation was initiated. Note that the creation times // for the stack set operation might differ from the creation time of the individual // stacks themselves. This is because AWS CloudFormation needs to perform preparatory // work for the operation, such as dispatching the work to the requested regions, // before actually creating the first stacks. 
- CreationTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreationTimestamp *time.Time `type:"timestamp"` // The time at which the stack set operation ended, across all accounts and // regions specified. Note that this doesn't necessarily mean that the stack // set operation was successful, or even attempted, in each account or region. - EndTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTimestamp *time.Time `type:"timestamp"` + + // The name of the IAM execution role used to create or update the stack set. + // + // Use customized execution roles to control which stack resources users and + // groups can include in their stack sets. + ExecutionRoleName *string `min:"1" type:"string"` // The unique ID of a stack set operation. OperationId *string `min:"1" type:"string"` @@ -9920,6 +11444,12 @@ func (s *StackSetOperation) SetAction(v string) *StackSetOperation { return s } +// SetAdministrationRoleARN sets the AdministrationRoleARN field's value. +func (s *StackSetOperation) SetAdministrationRoleARN(v string) *StackSetOperation { + s.AdministrationRoleARN = &v + return s +} + // SetCreationTimestamp sets the CreationTimestamp field's value. func (s *StackSetOperation) SetCreationTimestamp(v time.Time) *StackSetOperation { s.CreationTimestamp = &v @@ -9932,6 +11462,12 @@ func (s *StackSetOperation) SetEndTimestamp(v time.Time) *StackSetOperation { return s } +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *StackSetOperation) SetExecutionRoleName(v string) *StackSetOperation { + s.ExecutionRoleName = &v + return s +} + // SetOperationId sets the OperationId field's value. func (s *StackSetOperation) SetOperationId(v string) *StackSetOperation { s.OperationId = &v @@ -10176,12 +11712,12 @@ type StackSetOperationSummary struct { // stacks themselves. This is because AWS CloudFormation needs to perform preparatory // work for the operation, such as dispatching the work to the requested regions, // before actually creating the first stacks. - CreationTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreationTimestamp *time.Time `type:"timestamp"` // The time at which the stack set operation ended, across all accounts and // regions specified. Note that this doesn't necessarily mean that the stack // set operation was successful, or even attempted, in each account or region. - EndTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTimestamp *time.Time `type:"timestamp"` // The unique ID of the stack set operation. OperationId *string `min:"1" type:"string"` @@ -10308,14 +11844,20 @@ type StackSummary struct { // The time the stack was created. // // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreationTime *time.Time `type:"timestamp" required:"true"` // The time the stack was deleted. - DeletionTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DeletionTime *time.Time `type:"timestamp"` + + // Summarizes information on whether a stack's actual configuration differs, + // or has drifted, from it's expected configuration, as defined in the stack + // template and any values specified as template parameters. For more information, + // see Detecting Unregulated Configuration Changes to Stacks and Resources (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html). 
+ DriftInformation *StackDriftInformationSummary `type:"structure"` // The time the stack was last updated. This field will only be returned if // the stack has been updated at least once. - LastUpdatedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastUpdatedTime *time.Time `type:"timestamp"` // For nested stacks--stacks created as resources for another stack--the stack // ID of the direct parent of this stack. For the first level of nested stacks, @@ -10374,6 +11916,12 @@ func (s *StackSummary) SetDeletionTime(v time.Time) *StackSummary { return s } +// SetDriftInformation sets the DriftInformation field's value. +func (s *StackSummary) SetDriftInformation(v *StackDriftInformationSummary) *StackSummary { + s.DriftInformation = v + return s +} + // SetLastUpdatedTime sets the LastUpdatedTime field's value. func (s *StackSummary) SetLastUpdatedTime(v time.Time) *StackSummary { s.LastUpdatedTime = &v @@ -10611,30 +12159,74 @@ func (s *TemplateParameter) SetParameterKey(v string) *TemplateParameter { type UpdateStackInput struct { _ struct{} `type:"structure"` - // A list of values that you must specify before AWS CloudFormation can update - // certain stacks. Some stack templates might include resources that can affect - // permissions in your AWS account, for example, by creating new AWS Identity - // and Access Management (IAM) users. For those stacks, you must explicitly - // acknowledge their capabilities by specifying this parameter. + // In some cases, you must explicity acknowledge that your stack template contains + // certain capabilities in order for AWS CloudFormation to update the stack. + // + // * CAPABILITY_IAM and CAPABILITY_NAMED_IAM + // + // Some stack templates might include resources that can affect permissions + // in your AWS account; for example, by creating new AWS Identity and Access + // Management (IAM) users. For those stacks, you must explicitly acknowledge + // this by specifying one of these capabilities. + // + // The following IAM resources require you to specify either the CAPABILITY_IAM + // or CAPABILITY_NAMED_IAM capability. + // + // If you have IAM resources, you can specify either capability. + // + // If you have IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. + // + // + // If you don't specify either of these capabilities, AWS CloudFormation returns + // an InsufficientCapabilities error. // - // The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. The following - // resources require you to specify this parameter: AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html), - // AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html), - // AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html), - // AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html), - // AWS::IAM::Role (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html), - // AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html), - // and AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html). 
 	// If your stack template contains these resources, we recommend that you review
-	// all permissions associated with them and edit their permissions if necessary.
+	// all permissions associated with them and edit their permissions if necessary.
 	//
-	// If you have IAM resources, you can specify either capability. If you have
-	// IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If
-	// you don't specify this parameter, this action returns an InsufficientCapabilities
-	// error.
+	// AWS::IAM::AccessKey (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html)
+	//
+	// AWS::IAM::Group (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html)
+	//
+	// AWS::IAM::InstanceProfile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html)
+	//
+	// AWS::IAM::Policy (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html)
+	//
+	// AWS::IAM::Role (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html)
+	//
+	// AWS::IAM::User (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html)
+	//
+	// AWS::IAM::UserToGroupAddition (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html)
 	//
 	// For more information, see Acknowledging IAM Resources in AWS CloudFormation
-	// Templates (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities).
+	// Templates (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities).
+	//
+	// * CAPABILITY_AUTO_EXPAND
+	//
+	// Some templates contain macros. Macros perform custom processing on templates;
+	// this can include simple actions like find-and-replace operations, all
+	// the way to extensive transformations of entire templates. Because of this,
+	// users typically create a change set from the processed template, so that
+	// they can review the changes resulting from the macros before actually
+	// updating the stack. If your stack template contains one or more macros,
+	// and you choose to update a stack directly from the processed template,
+	// without first reviewing the resulting changes in a change set, you must
+	// acknowledge this capability. This includes the AWS::Include (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/create-reusable-transform-function-snippets-and-add-to-your-template-with-aws-include-transform.html)
+	// and AWS::Serverless (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-aws-serverless.html)
+	// transforms, which are macros hosted by AWS CloudFormation.
+	//
+	// Change sets do not currently support nested stacks. If you want to update
+	// a stack from a stack template that contains macros and nested stacks,
+	// you must update the stack directly from the template using this capability.
+	//
+	// You should only update stacks directly from a stack template that contains
+	// macros if you know what processing the macro performs.
+	//
+	// Each macro relies on an underlying Lambda service function for processing
+	// stack templates. Be aware that the Lambda function owner can update the
+	// function operation without AWS CloudFormation being notified.
+ // + // For more information, see Using AWS CloudFormation Macros to Perform Custom + // Processing on Templates (http://docs.aws.amazon.com/http:/docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html). Capabilities []*string `type:"list"` // A unique identifier for this UpdateStack request. Specify this token if you @@ -11124,6 +12716,39 @@ func (s *UpdateStackOutput) SetStackId(v string) *UpdateStackOutput { type UpdateStackSetInput struct { _ struct{} `type:"structure"` + // The accounts in which to update associated stack instances. If you specify + // accounts, you must also specify the regions in which to update stack set + // instances. + // + // To update all the stack instances associated with this stack set, do not + // specify the Accounts or Regions properties. + // + // If the stack set update includes changes to the template (that is, if the + // TemplateBody or TemplateURL properties are specified), or the Parameters + // property, AWS CloudFormation marks all stack instances with a status of OUTDATED + // prior to updating the stack instances in the specified accounts and regions. + // If the stack set update does not include changes to the template or parameters, + // AWS CloudFormation updates the stack instances in the specified accounts + // and regions, while leaving all other stack instances with their existing + // stack instance status. + Accounts []*string `type:"list"` + + // The Amazon Resource Number (ARN) of the IAM role to use to update this stack + // set. + // + // Specify an IAM role only if you are using customized administrator roles + // to control which users or groups can manage specific stack sets within the + // same administrator account. For more information, see Define Permissions + // for Multiple Administrators (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs.html) + // in the AWS CloudFormation User Guide. + // + // If you specify a customized administrator role, AWS CloudFormation uses that + // role to update the stack. If you do not specify a customized administrator + // role, AWS CloudFormation performs the update using the role previously associated + // with the stack set, so long as you have permissions to perform operations + // on the stack set. + AdministrationRoleARN *string `min:"20" type:"string"` + // A list of values that you must specify before AWS CloudFormation can create // certain stack sets. Some stack set templates might include resources that // can affect permissions in your AWS account—for example, by creating new AWS @@ -11163,6 +12788,20 @@ type UpdateStackSetInput struct { // A brief description of updates that you are making. Description *string `min:"1" type:"string"` + // The name of the IAM execution role to use to update the stack set. If you + // do not specify an execution role, AWS CloudFormation uses the AWSCloudFormationStackSetExecutionRole + // role for the stack set operation. + // + // Specify an IAM role only if you are using customized execution roles to control + // which stack resources users and groups can include in their stack sets. + // + // If you specify a customized execution role, AWS CloudFormation uses that + // role to update the stack. If you do not specify a customized execution role, + // AWS CloudFormation performs the update using the role previously associated + // with the stack set, so long as you have permissions to perform operations + // on the stack set. 
+ ExecutionRoleName *string `min:"1" type:"string"` + // The unique ID for this stack set operation. // // The operation ID also functions as an idempotency token, to ensure that AWS @@ -11182,6 +12821,22 @@ type UpdateStackSetInput struct { // A list of input parameters for the stack set template. Parameters []*Parameter `type:"list"` + // The regions in which to update associated stack instances. If you specify + // regions, you must also specify accounts in which to update stack set instances. + // + // To update all the stack instances associated with this stack set, do not + // specify the Accounts or Regions properties. + // + // If the stack set update includes changes to the template (that is, if the + // TemplateBody or TemplateURL properties are specified), or the Parameters + // property, AWS CloudFormation marks all stack instances with a status of OUTDATED + // prior to updating the stack instances in the specified accounts and regions. + // If the stack set update does not include changes to the template or parameters, + // AWS CloudFormation updates the stack instances in the specified accounts + // and regions, while leaving all other stack instances with their existing + // stack instance status. + Regions []*string `type:"list"` + // The name or unique ID of the stack set that you want to update. // // StackSetName is a required field @@ -11255,9 +12910,15 @@ func (s UpdateStackSetInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *UpdateStackSetInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "UpdateStackSetInput"} + if s.AdministrationRoleARN != nil && len(*s.AdministrationRoleARN) < 20 { + invalidParams.Add(request.NewErrParamMinLen("AdministrationRoleARN", 20)) + } if s.Description != nil && len(*s.Description) < 1 { invalidParams.Add(request.NewErrParamMinLen("Description", 1)) } + if s.ExecutionRoleName != nil && len(*s.ExecutionRoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExecutionRoleName", 1)) + } if s.OperationId != nil && len(*s.OperationId) < 1 { invalidParams.Add(request.NewErrParamMinLen("OperationId", 1)) } @@ -11292,6 +12953,18 @@ func (s *UpdateStackSetInput) Validate() error { return nil } +// SetAccounts sets the Accounts field's value. +func (s *UpdateStackSetInput) SetAccounts(v []*string) *UpdateStackSetInput { + s.Accounts = v + return s +} + +// SetAdministrationRoleARN sets the AdministrationRoleARN field's value. +func (s *UpdateStackSetInput) SetAdministrationRoleARN(v string) *UpdateStackSetInput { + s.AdministrationRoleARN = &v + return s +} + // SetCapabilities sets the Capabilities field's value. func (s *UpdateStackSetInput) SetCapabilities(v []*string) *UpdateStackSetInput { s.Capabilities = v @@ -11304,6 +12977,12 @@ func (s *UpdateStackSetInput) SetDescription(v string) *UpdateStackSetInput { return s } +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *UpdateStackSetInput) SetExecutionRoleName(v string) *UpdateStackSetInput { + s.ExecutionRoleName = &v + return s +} + // SetOperationId sets the OperationId field's value. func (s *UpdateStackSetInput) SetOperationId(v string) *UpdateStackSetInput { s.OperationId = &v @@ -11322,6 +13001,12 @@ func (s *UpdateStackSetInput) SetParameters(v []*Parameter) *UpdateStackSetInput return s } +// SetRegions sets the Regions field's value. 
+func (s *UpdateStackSetInput) SetRegions(v []*string) *UpdateStackSetInput { + s.Regions = v + return s +} + // SetStackSetName sets the StackSetName field's value. func (s *UpdateStackSetInput) SetStackSetName(v string) *UpdateStackSetInput { s.StackSetName = &v @@ -11600,6 +13285,9 @@ const ( // CapabilityCapabilityNamedIam is a Capability enum value CapabilityCapabilityNamedIam = "CAPABILITY_NAMED_IAM" + + // CapabilityCapabilityAutoExpand is a Capability enum value + CapabilityCapabilityAutoExpand = "CAPABILITY_AUTO_EXPAND" ) const ( @@ -11660,6 +13348,17 @@ const ( ChangeTypeResource = "Resource" ) +const ( + // DifferenceTypeAdd is a DifferenceType enum value + DifferenceTypeAdd = "ADD" + + // DifferenceTypeRemove is a DifferenceType enum value + DifferenceTypeRemove = "REMOVE" + + // DifferenceTypeNotEqual is a DifferenceType enum value + DifferenceTypeNotEqual = "NOT_EQUAL" +) + const ( // EvaluationTypeStatic is a EvaluationType enum value EvaluationTypeStatic = "Static" @@ -11781,6 +13480,31 @@ const ( ResourceStatusUpdateComplete = "UPDATE_COMPLETE" ) +const ( + // StackDriftDetectionStatusDetectionInProgress is a StackDriftDetectionStatus enum value + StackDriftDetectionStatusDetectionInProgress = "DETECTION_IN_PROGRESS" + + // StackDriftDetectionStatusDetectionFailed is a StackDriftDetectionStatus enum value + StackDriftDetectionStatusDetectionFailed = "DETECTION_FAILED" + + // StackDriftDetectionStatusDetectionComplete is a StackDriftDetectionStatus enum value + StackDriftDetectionStatusDetectionComplete = "DETECTION_COMPLETE" +) + +const ( + // StackDriftStatusDrifted is a StackDriftStatus enum value + StackDriftStatusDrifted = "DRIFTED" + + // StackDriftStatusInSync is a StackDriftStatus enum value + StackDriftStatusInSync = "IN_SYNC" + + // StackDriftStatusUnknown is a StackDriftStatus enum value + StackDriftStatusUnknown = "UNKNOWN" + + // StackDriftStatusNotChecked is a StackDriftStatus enum value + StackDriftStatusNotChecked = "NOT_CHECKED" +) + const ( // StackInstanceStatusCurrent is a StackInstanceStatus enum value StackInstanceStatusCurrent = "CURRENT" @@ -11792,6 +13516,20 @@ const ( StackInstanceStatusInoperable = "INOPERABLE" ) +const ( + // StackResourceDriftStatusInSync is a StackResourceDriftStatus enum value + StackResourceDriftStatusInSync = "IN_SYNC" + + // StackResourceDriftStatusModified is a StackResourceDriftStatus enum value + StackResourceDriftStatusModified = "MODIFIED" + + // StackResourceDriftStatusDeleted is a StackResourceDriftStatus enum value + StackResourceDriftStatusDeleted = "DELETED" + + // StackResourceDriftStatusNotChecked is a StackResourceDriftStatus enum value + StackResourceDriftStatusNotChecked = "NOT_CHECKED" +) + const ( // StackSetOperationActionCreate is a StackSetOperationAction enum value StackSetOperationActionCreate = "CREATE" diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/service.go index 0115c5bb002..65df49a0cc8 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudformation/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudformation/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "cloudformation" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "cloudformation" // Name of service. 
+ EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CloudFormation" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CloudFormation client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/api.go index e7749972f54..4445dd2ac5a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/api.go @@ -13,12 +13,12 @@ import ( "github.com/aws/aws-sdk-go/private/protocol/restxml" ) -const opCreateCloudFrontOriginAccessIdentity = "CreateCloudFrontOriginAccessIdentity2017_03_25" +const opCreateCloudFrontOriginAccessIdentity = "CreateCloudFrontOriginAccessIdentity2018_06_18" // CreateCloudFrontOriginAccessIdentityRequest generates a "aws/request.Request" representing the // client's request for the CreateCloudFrontOriginAccessIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -38,12 +38,12 @@ const opCreateCloudFrontOriginAccessIdentity = "CreateCloudFrontOriginAccessIden // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateCloudFrontOriginAccessIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateCloudFrontOriginAccessIdentity func (c *CloudFront) CreateCloudFrontOriginAccessIdentityRequest(input *CreateCloudFrontOriginAccessIdentityInput) (req *request.Request, output *CreateCloudFrontOriginAccessIdentityOutput) { op := &request.Operation{ Name: opCreateCloudFrontOriginAccessIdentity, HTTPMethod: "POST", - HTTPPath: "/2017-03-25/origin-access-identity/cloudfront", + HTTPPath: "/2018-06-18/origin-access-identity/cloudfront", } if input == nil { @@ -72,7 +72,7 @@ func (c *CloudFront) CreateCloudFrontOriginAccessIdentityRequest(input *CreateCl // API operation CreateCloudFrontOriginAccessIdentity for usage and error information. // // Returned Error Codes: -// * ErrCodeOriginAccessIdentityAlreadyExists "OriginAccessIdentityAlreadyExists" +// * ErrCodeOriginAccessIdentityAlreadyExists "CloudFrontOriginAccessIdentityAlreadyExists" // If the CallerReference is a value you already sent in a previous request // to create an identity but the content of the CloudFrontOriginAccessIdentityConfig // is different from the original request, CloudFront returns a CloudFrontOriginAccessIdentityAlreadyExists @@ -92,7 +92,7 @@ func (c *CloudFront) CreateCloudFrontOriginAccessIdentityRequest(input *CreateCl // * ErrCodeInconsistentQuantities "InconsistentQuantities" // The value of Quantity and the size of Items don't match. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateCloudFrontOriginAccessIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateCloudFrontOriginAccessIdentity func (c *CloudFront) CreateCloudFrontOriginAccessIdentity(input *CreateCloudFrontOriginAccessIdentityInput) (*CreateCloudFrontOriginAccessIdentityOutput, error) { req, out := c.CreateCloudFrontOriginAccessIdentityRequest(input) return out, req.Send() @@ -114,12 +114,12 @@ func (c *CloudFront) CreateCloudFrontOriginAccessIdentityWithContext(ctx aws.Con return out, req.Send() } -const opCreateDistribution = "CreateDistribution2017_03_25" +const opCreateDistribution = "CreateDistribution2018_06_18" // CreateDistributionRequest generates a "aws/request.Request" representing the // client's request for the CreateDistribution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -139,12 +139,12 @@ const opCreateDistribution = "CreateDistribution2017_03_25" // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateDistribution +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateDistribution func (c *CloudFront) CreateDistributionRequest(input *CreateDistributionInput) (req *request.Request, output *CreateDistributionOutput) { op := &request.Operation{ Name: opCreateDistribution, HTTPMethod: "POST", - HTTPPath: "/2017-03-25/distribution", + HTTPPath: "/2018-06-18/distribution", } if input == nil { @@ -158,8 +158,21 @@ func (c *CloudFront) CreateDistributionRequest(input *CreateDistributionInput) ( // CreateDistribution API operation for Amazon CloudFront. // -// Creates a new web distribution. Send a POST request to the /CloudFront API -// version/distribution/distribution ID resource. +// Creates a new web distribution. You create a CloudFront distribution to tell +// CloudFront where you want content to be delivered from, and the details about +// how to track and manage content delivery. Send a POST request to the /CloudFront +// API version/distribution/distribution ID resource. +// +// When you update a distribution, there are more required fields than when +// you create a distribution. When you update your distribution by using UpdateDistribution, +// follow the steps included in the documentation to get the current configuration +// and then make your updates. This helps to make sure that you include all +// of the required fields. To view a summary, see Required Fields for Create +// Distribution and Update Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-overview-required-fields.html) +// in the Amazon CloudFront Developer Guide. +// +// If you are using Adobe Flash Media Server's RTMP protocol, you set up a different +// kind of CloudFront distribution. For more information, see CreateStreamingDistribution. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -287,7 +300,18 @@ func (c *CloudFront) CreateDistributionRequest(input *CreateDistributionInput) ( // // * ErrCodeInvalidOriginKeepaliveTimeout "InvalidOriginKeepaliveTimeout" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateDistribution +// * ErrCodeNoSuchFieldLevelEncryptionConfig "NoSuchFieldLevelEncryptionConfig" +// The specified configuration for field-level encryption doesn't exist. +// +// * ErrCodeIllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior "IllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior" +// The specified configuration for field-level encryption can't be associated +// with the specified cache behavior. +// +// * ErrCodeTooManyDistributionsAssociatedToFieldLevelEncryptionConfig "TooManyDistributionsAssociatedToFieldLevelEncryptionConfig" +// The maximum number of distributions have been associated with the specified +// configuration for field-level encryption. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateDistribution func (c *CloudFront) CreateDistribution(input *CreateDistributionInput) (*CreateDistributionOutput, error) { req, out := c.CreateDistributionRequest(input) return out, req.Send() @@ -309,12 +333,12 @@ func (c *CloudFront) CreateDistributionWithContext(ctx aws.Context, input *Creat return out, req.Send() } -const opCreateDistributionWithTags = "CreateDistributionWithTags2017_03_25" +const opCreateDistributionWithTags = "CreateDistributionWithTags2018_06_18" // CreateDistributionWithTagsRequest generates a "aws/request.Request" representing the // client's request for the CreateDistributionWithTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -334,12 +358,12 @@ const opCreateDistributionWithTags = "CreateDistributionWithTags2017_03_25" // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateDistributionWithTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateDistributionWithTags func (c *CloudFront) CreateDistributionWithTagsRequest(input *CreateDistributionWithTagsInput) (req *request.Request, output *CreateDistributionWithTagsOutput) { op := &request.Operation{ Name: opCreateDistributionWithTags, HTTPMethod: "POST", - HTTPPath: "/2017-03-25/distribution?WithTags", + HTTPPath: "/2018-06-18/distribution?WithTags", } if input == nil { @@ -483,7 +507,18 @@ func (c *CloudFront) CreateDistributionWithTagsRequest(input *CreateDistribution // // * ErrCodeInvalidOriginKeepaliveTimeout "InvalidOriginKeepaliveTimeout" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateDistributionWithTags +// * ErrCodeNoSuchFieldLevelEncryptionConfig "NoSuchFieldLevelEncryptionConfig" +// The specified configuration for field-level encryption doesn't exist. 
+// +// * ErrCodeIllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior "IllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior" +// The specified configuration for field-level encryption can't be associated +// with the specified cache behavior. +// +// * ErrCodeTooManyDistributionsAssociatedToFieldLevelEncryptionConfig "TooManyDistributionsAssociatedToFieldLevelEncryptionConfig" +// The maximum number of distributions have been associated with the specified +// configuration for field-level encryption. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateDistributionWithTags func (c *CloudFront) CreateDistributionWithTags(input *CreateDistributionWithTagsInput) (*CreateDistributionWithTagsOutput, error) { req, out := c.CreateDistributionWithTagsRequest(input) return out, req.Send() @@ -505,12 +540,217 @@ func (c *CloudFront) CreateDistributionWithTagsWithContext(ctx aws.Context, inpu return out, req.Send() } -const opCreateInvalidation = "CreateInvalidation2017_03_25" +const opCreateFieldLevelEncryptionConfig = "CreateFieldLevelEncryptionConfig2018_06_18" + +// CreateFieldLevelEncryptionConfigRequest generates a "aws/request.Request" representing the +// client's request for the CreateFieldLevelEncryptionConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateFieldLevelEncryptionConfig for more information on using the CreateFieldLevelEncryptionConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateFieldLevelEncryptionConfigRequest method. +// req, resp := client.CreateFieldLevelEncryptionConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateFieldLevelEncryptionConfig +func (c *CloudFront) CreateFieldLevelEncryptionConfigRequest(input *CreateFieldLevelEncryptionConfigInput) (req *request.Request, output *CreateFieldLevelEncryptionConfigOutput) { + op := &request.Operation{ + Name: opCreateFieldLevelEncryptionConfig, + HTTPMethod: "POST", + HTTPPath: "/2018-06-18/field-level-encryption", + } + + if input == nil { + input = &CreateFieldLevelEncryptionConfigInput{} + } + + output = &CreateFieldLevelEncryptionConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateFieldLevelEncryptionConfig API operation for Amazon CloudFront. +// +// Create a new field-level encryption configuration. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation CreateFieldLevelEncryptionConfig for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInconsistentQuantities "InconsistentQuantities" +// The value of Quantity and the size of Items don't match. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. 
+// +// * ErrCodeNoSuchFieldLevelEncryptionProfile "NoSuchFieldLevelEncryptionProfile" +// The specified profile for field-level encryption doesn't exist. +// +// * ErrCodeFieldLevelEncryptionConfigAlreadyExists "FieldLevelEncryptionConfigAlreadyExists" +// The specified configuration for field-level encryption already exists. +// +// * ErrCodeTooManyFieldLevelEncryptionConfigs "TooManyFieldLevelEncryptionConfigs" +// The maximum number of configurations for field-level encryption have been +// created. +// +// * ErrCodeTooManyFieldLevelEncryptionQueryArgProfiles "TooManyFieldLevelEncryptionQueryArgProfiles" +// The maximum number of query arg profiles for field-level encryption have +// been created. +// +// * ErrCodeTooManyFieldLevelEncryptionContentTypeProfiles "TooManyFieldLevelEncryptionContentTypeProfiles" +// The maximum number of content type profiles for field-level encryption have +// been created. +// +// * ErrCodeQueryArgProfileEmpty "QueryArgProfileEmpty" +// No profile specified for the field-level encryption query argument. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateFieldLevelEncryptionConfig +func (c *CloudFront) CreateFieldLevelEncryptionConfig(input *CreateFieldLevelEncryptionConfigInput) (*CreateFieldLevelEncryptionConfigOutput, error) { + req, out := c.CreateFieldLevelEncryptionConfigRequest(input) + return out, req.Send() +} + +// CreateFieldLevelEncryptionConfigWithContext is the same as CreateFieldLevelEncryptionConfig with the addition of +// the ability to pass a context and additional request options. +// +// See CreateFieldLevelEncryptionConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) CreateFieldLevelEncryptionConfigWithContext(ctx aws.Context, input *CreateFieldLevelEncryptionConfigInput, opts ...request.Option) (*CreateFieldLevelEncryptionConfigOutput, error) { + req, out := c.CreateFieldLevelEncryptionConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateFieldLevelEncryptionProfile = "CreateFieldLevelEncryptionProfile2018_06_18" + +// CreateFieldLevelEncryptionProfileRequest generates a "aws/request.Request" representing the +// client's request for the CreateFieldLevelEncryptionProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateFieldLevelEncryptionProfile for more information on using the CreateFieldLevelEncryptionProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateFieldLevelEncryptionProfileRequest method. 
+// req, resp := client.CreateFieldLevelEncryptionProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateFieldLevelEncryptionProfile +func (c *CloudFront) CreateFieldLevelEncryptionProfileRequest(input *CreateFieldLevelEncryptionProfileInput) (req *request.Request, output *CreateFieldLevelEncryptionProfileOutput) { + op := &request.Operation{ + Name: opCreateFieldLevelEncryptionProfile, + HTTPMethod: "POST", + HTTPPath: "/2018-06-18/field-level-encryption-profile", + } + + if input == nil { + input = &CreateFieldLevelEncryptionProfileInput{} + } + + output = &CreateFieldLevelEncryptionProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateFieldLevelEncryptionProfile API operation for Amazon CloudFront. +// +// Create a field-level encryption profile. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation CreateFieldLevelEncryptionProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInconsistentQuantities "InconsistentQuantities" +// The value of Quantity and the size of Items don't match. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeNoSuchPublicKey "NoSuchPublicKey" +// The specified public key doesn't exist. +// +// * ErrCodeFieldLevelEncryptionProfileAlreadyExists "FieldLevelEncryptionProfileAlreadyExists" +// The specified profile for field-level encryption already exists. +// +// * ErrCodeFieldLevelEncryptionProfileSizeExceeded "FieldLevelEncryptionProfileSizeExceeded" +// The maximum size of a profile for field-level encryption was exceeded. +// +// * ErrCodeTooManyFieldLevelEncryptionProfiles "TooManyFieldLevelEncryptionProfiles" +// The maximum number of profiles for field-level encryption have been created. +// +// * ErrCodeTooManyFieldLevelEncryptionEncryptionEntities "TooManyFieldLevelEncryptionEncryptionEntities" +// The maximum number of encryption entities for field-level encryption have +// been created. +// +// * ErrCodeTooManyFieldLevelEncryptionFieldPatterns "TooManyFieldLevelEncryptionFieldPatterns" +// The maximum number of field patterns for field-level encryption have been +// created. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateFieldLevelEncryptionProfile +func (c *CloudFront) CreateFieldLevelEncryptionProfile(input *CreateFieldLevelEncryptionProfileInput) (*CreateFieldLevelEncryptionProfileOutput, error) { + req, out := c.CreateFieldLevelEncryptionProfileRequest(input) + return out, req.Send() +} + +// CreateFieldLevelEncryptionProfileWithContext is the same as CreateFieldLevelEncryptionProfile with the addition of +// the ability to pass a context and additional request options. +// +// See CreateFieldLevelEncryptionProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *CloudFront) CreateFieldLevelEncryptionProfileWithContext(ctx aws.Context, input *CreateFieldLevelEncryptionProfileInput, opts ...request.Option) (*CreateFieldLevelEncryptionProfileOutput, error) { + req, out := c.CreateFieldLevelEncryptionProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateInvalidation = "CreateInvalidation2018_06_18" // CreateInvalidationRequest generates a "aws/request.Request" representing the // client's request for the CreateInvalidation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -530,12 +770,12 @@ const opCreateInvalidation = "CreateInvalidation2017_03_25" // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateInvalidation +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateInvalidation func (c *CloudFront) CreateInvalidationRequest(input *CreateInvalidationInput) (req *request.Request, output *CreateInvalidationOutput) { op := &request.Operation{ Name: opCreateInvalidation, HTTPMethod: "POST", - HTTPPath: "/2017-03-25/distribution/{DistributionId}/invalidation", + HTTPPath: "/2018-06-18/distribution/{DistributionId}/invalidation", } if input == nil { @@ -581,7 +821,7 @@ func (c *CloudFront) CreateInvalidationRequest(input *CreateInvalidationInput) ( // * ErrCodeInconsistentQuantities "InconsistentQuantities" // The value of Quantity and the size of Items don't match. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateInvalidation +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateInvalidation func (c *CloudFront) CreateInvalidation(input *CreateInvalidationInput) (*CreateInvalidationOutput, error) { req, out := c.CreateInvalidationRequest(input) return out, req.Send() @@ -603,12 +843,99 @@ func (c *CloudFront) CreateInvalidationWithContext(ctx aws.Context, input *Creat return out, req.Send() } -const opCreateStreamingDistribution = "CreateStreamingDistribution2017_03_25" +const opCreatePublicKey = "CreatePublicKey2018_06_18" + +// CreatePublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the CreatePublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreatePublicKey for more information on using the CreatePublicKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreatePublicKeyRequest method. 
+// req, resp := client.CreatePublicKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreatePublicKey +func (c *CloudFront) CreatePublicKeyRequest(input *CreatePublicKeyInput) (req *request.Request, output *CreatePublicKeyOutput) { + op := &request.Operation{ + Name: opCreatePublicKey, + HTTPMethod: "POST", + HTTPPath: "/2018-06-18/public-key", + } + + if input == nil { + input = &CreatePublicKeyInput{} + } + + output = &CreatePublicKeyOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreatePublicKey API operation for Amazon CloudFront. +// +// Add a new public key to CloudFront to use, for example, for field-level encryption. +// You can add a maximum of 10 public keys with one AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation CreatePublicKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodePublicKeyAlreadyExists "PublicKeyAlreadyExists" +// The specified public key already exists. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeTooManyPublicKeys "TooManyPublicKeys" +// The maximum number of public keys for field-level encryption have been created. +// To create a new public key, delete one of the existing keys. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreatePublicKey +func (c *CloudFront) CreatePublicKey(input *CreatePublicKeyInput) (*CreatePublicKeyOutput, error) { + req, out := c.CreatePublicKeyRequest(input) + return out, req.Send() +} + +// CreatePublicKeyWithContext is the same as CreatePublicKey with the addition of +// the ability to pass a context and additional request options. +// +// See CreatePublicKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) CreatePublicKeyWithContext(ctx aws.Context, input *CreatePublicKeyInput, opts ...request.Option) (*CreatePublicKeyOutput, error) { + req, out := c.CreatePublicKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateStreamingDistribution = "CreateStreamingDistribution2018_06_18" // CreateStreamingDistributionRequest generates a "aws/request.Request" representing the // client's request for the CreateStreamingDistribution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -628,12 +955,12 @@ const opCreateStreamingDistribution = "CreateStreamingDistribution2017_03_25" // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateStreamingDistribution +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateStreamingDistribution func (c *CloudFront) CreateStreamingDistributionRequest(input *CreateStreamingDistributionInput) (req *request.Request, output *CreateStreamingDistributionOutput) { op := &request.Operation{ Name: opCreateStreamingDistribution, HTTPMethod: "POST", - HTTPPath: "/2017-03-25/streaming-distribution", + HTTPPath: "/2018-06-18/streaming-distribution", } if input == nil { @@ -720,7 +1047,7 @@ func (c *CloudFront) CreateStreamingDistributionRequest(input *CreateStreamingDi // * ErrCodeInconsistentQuantities "InconsistentQuantities" // The value of Quantity and the size of Items don't match. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateStreamingDistribution +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateStreamingDistribution func (c *CloudFront) CreateStreamingDistribution(input *CreateStreamingDistributionInput) (*CreateStreamingDistributionOutput, error) { req, out := c.CreateStreamingDistributionRequest(input) return out, req.Send() @@ -742,12 +1069,12 @@ func (c *CloudFront) CreateStreamingDistributionWithContext(ctx aws.Context, inp return out, req.Send() } -const opCreateStreamingDistributionWithTags = "CreateStreamingDistributionWithTags2017_03_25" +const opCreateStreamingDistributionWithTags = "CreateStreamingDistributionWithTags2018_06_18" // CreateStreamingDistributionWithTagsRequest generates a "aws/request.Request" representing the // client's request for the CreateStreamingDistributionWithTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -767,12 +1094,12 @@ const opCreateStreamingDistributionWithTags = "CreateStreamingDistributionWithTa // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateStreamingDistributionWithTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateStreamingDistributionWithTags func (c *CloudFront) CreateStreamingDistributionWithTagsRequest(input *CreateStreamingDistributionWithTagsInput) (req *request.Request, output *CreateStreamingDistributionWithTagsOutput) { op := &request.Operation{ Name: opCreateStreamingDistributionWithTags, HTTPMethod: "POST", - HTTPPath: "/2017-03-25/streaming-distribution?WithTags", + HTTPPath: "/2018-06-18/streaming-distribution?WithTags", } if input == nil { @@ -834,7 +1161,7 @@ func (c *CloudFront) CreateStreamingDistributionWithTagsRequest(input *CreateStr // // * ErrCodeInvalidTagging "InvalidTagging" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/CreateStreamingDistributionWithTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/CreateStreamingDistributionWithTags func (c *CloudFront) CreateStreamingDistributionWithTags(input *CreateStreamingDistributionWithTagsInput) (*CreateStreamingDistributionWithTagsOutput, error) { req, out := c.CreateStreamingDistributionWithTagsRequest(input) return out, req.Send() @@ -856,12 +1183,12 @@ func (c *CloudFront) CreateStreamingDistributionWithTagsWithContext(ctx aws.Cont return out, req.Send() } -const opDeleteCloudFrontOriginAccessIdentity = "DeleteCloudFrontOriginAccessIdentity2017_03_25" +const opDeleteCloudFrontOriginAccessIdentity = "DeleteCloudFrontOriginAccessIdentity2018_06_18" // DeleteCloudFrontOriginAccessIdentityRequest generates a "aws/request.Request" representing the // client's request for the DeleteCloudFrontOriginAccessIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -881,12 +1208,12 @@ const opDeleteCloudFrontOriginAccessIdentity = "DeleteCloudFrontOriginAccessIden // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/DeleteCloudFrontOriginAccessIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteCloudFrontOriginAccessIdentity func (c *CloudFront) DeleteCloudFrontOriginAccessIdentityRequest(input *DeleteCloudFrontOriginAccessIdentityInput) (req *request.Request, output *DeleteCloudFrontOriginAccessIdentityOutput) { op := &request.Operation{ Name: opDeleteCloudFrontOriginAccessIdentity, HTTPMethod: "DELETE", - HTTPPath: "/2017-03-25/origin-access-identity/cloudfront/{Id}", + HTTPPath: "/2018-06-18/origin-access-identity/cloudfront/{Id}", } if input == nil { @@ -925,9 +1252,9 @@ func (c *CloudFront) DeleteCloudFrontOriginAccessIdentityRequest(input *DeleteCl // The precondition given in one or more of the request-header fields evaluated // to false. 
// -// * ErrCodeOriginAccessIdentityInUse "OriginAccessIdentityInUse" +// * ErrCodeOriginAccessIdentityInUse "CloudFrontOriginAccessIdentityInUse" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/DeleteCloudFrontOriginAccessIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteCloudFrontOriginAccessIdentity func (c *CloudFront) DeleteCloudFrontOriginAccessIdentity(input *DeleteCloudFrontOriginAccessIdentityInput) (*DeleteCloudFrontOriginAccessIdentityOutput, error) { req, out := c.DeleteCloudFrontOriginAccessIdentityRequest(input) return out, req.Send() @@ -949,12 +1276,12 @@ func (c *CloudFront) DeleteCloudFrontOriginAccessIdentityWithContext(ctx aws.Con return out, req.Send() } -const opDeleteDistribution = "DeleteDistribution2017_03_25" +const opDeleteDistribution = "DeleteDistribution2018_06_18" // DeleteDistributionRequest generates a "aws/request.Request" representing the // client's request for the DeleteDistribution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -974,12 +1301,12 @@ const opDeleteDistribution = "DeleteDistribution2017_03_25" // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/DeleteDistribution +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteDistribution func (c *CloudFront) DeleteDistributionRequest(input *DeleteDistributionInput) (req *request.Request, output *DeleteDistributionOutput) { op := &request.Operation{ Name: opDeleteDistribution, HTTPMethod: "DELETE", - HTTPPath: "/2017-03-25/distribution/{Id}", + HTTPPath: "/2018-06-18/distribution/{Id}", } if input == nil { @@ -1020,7 +1347,7 @@ func (c *CloudFront) DeleteDistributionRequest(input *DeleteDistributionInput) ( // The precondition given in one or more of the request-header fields evaluated // to false. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/DeleteDistribution +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteDistribution func (c *CloudFront) DeleteDistribution(input *DeleteDistributionInput) (*DeleteDistributionOutput, error) { req, out := c.DeleteDistributionRequest(input) return out, req.Send() @@ -1042,199 +1369,395 @@ func (c *CloudFront) DeleteDistributionWithContext(ctx aws.Context, input *Delet return out, req.Send() } -const opDeleteServiceLinkedRole = "DeleteServiceLinkedRole2017_03_25" +const opDeleteFieldLevelEncryptionConfig = "DeleteFieldLevelEncryptionConfig2018_06_18" -// DeleteServiceLinkedRoleRequest generates a "aws/request.Request" representing the -// client's request for the DeleteServiceLinkedRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteFieldLevelEncryptionConfigRequest generates a "aws/request.Request" representing the +// client's request for the DeleteFieldLevelEncryptionConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. // -// See DeleteServiceLinkedRole for more information on using the DeleteServiceLinkedRole +// See DeleteFieldLevelEncryptionConfig for more information on using the DeleteFieldLevelEncryptionConfig // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteServiceLinkedRoleRequest method. -// req, resp := client.DeleteServiceLinkedRoleRequest(params) +// // Example sending a request using the DeleteFieldLevelEncryptionConfigRequest method. +// req, resp := client.DeleteFieldLevelEncryptionConfigRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/DeleteServiceLinkedRole -func (c *CloudFront) DeleteServiceLinkedRoleRequest(input *DeleteServiceLinkedRoleInput) (req *request.Request, output *DeleteServiceLinkedRoleOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteFieldLevelEncryptionConfig +func (c *CloudFront) DeleteFieldLevelEncryptionConfigRequest(input *DeleteFieldLevelEncryptionConfigInput) (req *request.Request, output *DeleteFieldLevelEncryptionConfigOutput) { op := &request.Operation{ - Name: opDeleteServiceLinkedRole, + Name: opDeleteFieldLevelEncryptionConfig, HTTPMethod: "DELETE", - HTTPPath: "/2017-03-25/service-linked-role/{RoleName}", + HTTPPath: "/2018-06-18/field-level-encryption/{Id}", } if input == nil { - input = &DeleteServiceLinkedRoleInput{} + input = &DeleteFieldLevelEncryptionConfigInput{} } - output = &DeleteServiceLinkedRoleOutput{} + output = &DeleteFieldLevelEncryptionConfigOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeleteServiceLinkedRole API operation for Amazon CloudFront. +// DeleteFieldLevelEncryptionConfig API operation for Amazon CloudFront. +// +// Remove a field-level encryption configuration. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation DeleteServiceLinkedRole for usage and error information. +// API operation DeleteFieldLevelEncryptionConfig for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidArgument "InvalidArgument" -// The argument is invalid. -// // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// * ErrCodeResourceInUse "ResourceInUse" +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. // -// * ErrCodeNoSuchResource "NoSuchResource" +// * ErrCodeNoSuchFieldLevelEncryptionConfig "NoSuchFieldLevelEncryptionConfig" +// The specified configuration for field-level encryption doesn't exist. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/DeleteServiceLinkedRole -func (c *CloudFront) DeleteServiceLinkedRole(input *DeleteServiceLinkedRoleInput) (*DeleteServiceLinkedRoleOutput, error) { - req, out := c.DeleteServiceLinkedRoleRequest(input) +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. +// +// * ErrCodeFieldLevelEncryptionConfigInUse "FieldLevelEncryptionConfigInUse" +// The specified configuration for field-level encryption is in use. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteFieldLevelEncryptionConfig +func (c *CloudFront) DeleteFieldLevelEncryptionConfig(input *DeleteFieldLevelEncryptionConfigInput) (*DeleteFieldLevelEncryptionConfigOutput, error) { + req, out := c.DeleteFieldLevelEncryptionConfigRequest(input) return out, req.Send() } -// DeleteServiceLinkedRoleWithContext is the same as DeleteServiceLinkedRole with the addition of +// DeleteFieldLevelEncryptionConfigWithContext is the same as DeleteFieldLevelEncryptionConfig with the addition of // the ability to pass a context and additional request options. // -// See DeleteServiceLinkedRole for details on how to use this API operation. +// See DeleteFieldLevelEncryptionConfig for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) DeleteServiceLinkedRoleWithContext(ctx aws.Context, input *DeleteServiceLinkedRoleInput, opts ...request.Option) (*DeleteServiceLinkedRoleOutput, error) { - req, out := c.DeleteServiceLinkedRoleRequest(input) +func (c *CloudFront) DeleteFieldLevelEncryptionConfigWithContext(ctx aws.Context, input *DeleteFieldLevelEncryptionConfigInput, opts ...request.Option) (*DeleteFieldLevelEncryptionConfigOutput, error) { + req, out := c.DeleteFieldLevelEncryptionConfigRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteStreamingDistribution = "DeleteStreamingDistribution2017_03_25" +const opDeleteFieldLevelEncryptionProfile = "DeleteFieldLevelEncryptionProfile2018_06_18" -// DeleteStreamingDistributionRequest generates a "aws/request.Request" representing the -// client's request for the DeleteStreamingDistribution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteFieldLevelEncryptionProfileRequest generates a "aws/request.Request" representing the +// client's request for the DeleteFieldLevelEncryptionProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteStreamingDistribution for more information on using the DeleteStreamingDistribution +// See DeleteFieldLevelEncryptionProfile for more information on using the DeleteFieldLevelEncryptionProfile // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
// // -// // Example sending a request using the DeleteStreamingDistributionRequest method. -// req, resp := client.DeleteStreamingDistributionRequest(params) +// // Example sending a request using the DeleteFieldLevelEncryptionProfileRequest method. +// req, resp := client.DeleteFieldLevelEncryptionProfileRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/DeleteStreamingDistribution -func (c *CloudFront) DeleteStreamingDistributionRequest(input *DeleteStreamingDistributionInput) (req *request.Request, output *DeleteStreamingDistributionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteFieldLevelEncryptionProfile +func (c *CloudFront) DeleteFieldLevelEncryptionProfileRequest(input *DeleteFieldLevelEncryptionProfileInput) (req *request.Request, output *DeleteFieldLevelEncryptionProfileOutput) { op := &request.Operation{ - Name: opDeleteStreamingDistribution, + Name: opDeleteFieldLevelEncryptionProfile, HTTPMethod: "DELETE", - HTTPPath: "/2017-03-25/streaming-distribution/{Id}", + HTTPPath: "/2018-06-18/field-level-encryption-profile/{Id}", } if input == nil { - input = &DeleteStreamingDistributionInput{} + input = &DeleteFieldLevelEncryptionProfileInput{} } - output = &DeleteStreamingDistributionOutput{} + output = &DeleteFieldLevelEncryptionProfileOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeleteStreamingDistribution API operation for Amazon CloudFront. -// -// Delete a streaming distribution. To delete an RTMP distribution using the -// CloudFront API, perform the following steps. -// -// To delete an RTMP distribution using the CloudFront API: -// -// Disable the RTMP distribution. -// -// Submit a GET Streaming Distribution Config request to get the current configuration -// and the Etag header for the distribution. -// -// Update the XML document that was returned in the response to your GET Streaming -// Distribution Config request to change the value of Enabled to false. -// -// Submit a PUT Streaming Distribution Config request to update the configuration -// for your distribution. In the request body, include the XML document that -// you updated in Step 3. Then set the value of the HTTP If-Match header to -// the value of the ETag header that CloudFront returned when you submitted -// the GET Streaming Distribution Config request in Step 2. -// -// Review the response to the PUT Streaming Distribution Config request to confirm -// that the distribution was successfully disabled. -// -// Submit a GET Streaming Distribution Config request to confirm that your changes -// have propagated. When propagation is complete, the value of Status is Deployed. +// DeleteFieldLevelEncryptionProfile API operation for Amazon CloudFront. // -// Submit a DELETE Streaming Distribution request. Set the value of the HTTP -// If-Match header to the value of the ETag header that CloudFront returned -// when you submitted the GET Streaming Distribution Config request in Step -// 2. -// -// Review the response to your DELETE Streaming Distribution request to confirm -// that the distribution was successfully deleted. 
-// -// For information about deleting a distribution using the CloudFront console, -// see Deleting a Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowToDeleteDistribution.html) -// in the Amazon CloudFront Developer Guide. +// Remove a field-level encryption profile. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation DeleteStreamingDistribution for usage and error information. +// API operation DeleteFieldLevelEncryptionProfile for usage and error information. // // Returned Error Codes: // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// * ErrCodeStreamingDistributionNotDisabled "StreamingDistributionNotDisabled" -// // * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" // The If-Match version is missing or not valid for the distribution. // -// * ErrCodeNoSuchStreamingDistribution "NoSuchStreamingDistribution" -// The specified streaming distribution does not exist. +// * ErrCodeNoSuchFieldLevelEncryptionProfile "NoSuchFieldLevelEncryptionProfile" +// The specified profile for field-level encryption doesn't exist. // // * ErrCodePreconditionFailed "PreconditionFailed" // The precondition given in one or more of the request-header fields evaluated // to false. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/DeleteStreamingDistribution +// * ErrCodeFieldLevelEncryptionProfileInUse "FieldLevelEncryptionProfileInUse" +// The specified profile for field-level encryption is in use. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteFieldLevelEncryptionProfile +func (c *CloudFront) DeleteFieldLevelEncryptionProfile(input *DeleteFieldLevelEncryptionProfileInput) (*DeleteFieldLevelEncryptionProfileOutput, error) { + req, out := c.DeleteFieldLevelEncryptionProfileRequest(input) + return out, req.Send() +} + +// DeleteFieldLevelEncryptionProfileWithContext is the same as DeleteFieldLevelEncryptionProfile with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteFieldLevelEncryptionProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) DeleteFieldLevelEncryptionProfileWithContext(ctx aws.Context, input *DeleteFieldLevelEncryptionProfileInput, opts ...request.Option) (*DeleteFieldLevelEncryptionProfileOutput, error) { + req, out := c.DeleteFieldLevelEncryptionProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeletePublicKey = "DeletePublicKey2018_06_18" + +// DeletePublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the DeletePublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See DeletePublicKey for more information on using the DeletePublicKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeletePublicKeyRequest method. +// req, resp := client.DeletePublicKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeletePublicKey +func (c *CloudFront) DeletePublicKeyRequest(input *DeletePublicKeyInput) (req *request.Request, output *DeletePublicKeyOutput) { + op := &request.Operation{ + Name: opDeletePublicKey, + HTTPMethod: "DELETE", + HTTPPath: "/2018-06-18/public-key/{Id}", + } + + if input == nil { + input = &DeletePublicKeyInput{} + } + + output = &DeletePublicKeyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeletePublicKey API operation for Amazon CloudFront. +// +// Remove a public key you previously added to CloudFront. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation DeletePublicKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodePublicKeyInUse "PublicKeyInUse" +// The specified public key is in use. +// +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. +// +// * ErrCodeNoSuchPublicKey "NoSuchPublicKey" +// The specified public key doesn't exist. +// +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeletePublicKey +func (c *CloudFront) DeletePublicKey(input *DeletePublicKeyInput) (*DeletePublicKeyOutput, error) { + req, out := c.DeletePublicKeyRequest(input) + return out, req.Send() +} + +// DeletePublicKeyWithContext is the same as DeletePublicKey with the addition of +// the ability to pass a context and additional request options. +// +// See DeletePublicKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) DeletePublicKeyWithContext(ctx aws.Context, input *DeletePublicKeyInput, opts ...request.Option) (*DeletePublicKeyOutput, error) { + req, out := c.DeletePublicKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteStreamingDistribution = "DeleteStreamingDistribution2018_06_18" + +// DeleteStreamingDistributionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteStreamingDistribution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
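The create/delete pair for public keys follows the ETag/If-Match pattern used throughout this service. The sketch below is an illustrative example only, assuming the generated `PublicKeyConfig` struct and the `ErrCodePublicKeyAlreadyExists` constant referenced above; the PEM body, name, and caller reference are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

// examplePEM is a placeholder; supply a real RSA public key in PEM form.
const examplePEM = "-----BEGIN PUBLIC KEY-----\n...base64 key material...\n-----END PUBLIC KEY-----\n"

func main() {
	sess := session.Must(session.NewSession())
	svc := cloudfront.New(sess)

	// Register a public key for use with field-level encryption.
	created, err := svc.CreatePublicKey(&cloudfront.CreatePublicKeyInput{
		PublicKeyConfig: &cloudfront.PublicKeyConfig{
			CallerReference: aws.String("example-caller-ref"), // placeholder
			Name:            aws.String("example-key"),        // placeholder
			EncodedKey:      aws.String(examplePEM),
		},
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == cloudfront.ErrCodePublicKeyAlreadyExists {
			log.Fatal("a public key with this configuration already exists")
		}
		log.Fatal(err)
	}
	fmt.Println("created public key:", aws.StringValue(created.PublicKey.Id))

	// Deleting requires the current ETag as the If-Match value.
	_, err = svc.DeletePublicKey(&cloudfront.DeletePublicKeyInput{
		Id:      created.PublicKey.Id,
		IfMatch: created.ETag,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```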
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteStreamingDistribution for more information on using the DeleteStreamingDistribution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteStreamingDistributionRequest method. +// req, resp := client.DeleteStreamingDistributionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteStreamingDistribution +func (c *CloudFront) DeleteStreamingDistributionRequest(input *DeleteStreamingDistributionInput) (req *request.Request, output *DeleteStreamingDistributionOutput) { + op := &request.Operation{ + Name: opDeleteStreamingDistribution, + HTTPMethod: "DELETE", + HTTPPath: "/2018-06-18/streaming-distribution/{Id}", + } + + if input == nil { + input = &DeleteStreamingDistributionInput{} + } + + output = &DeleteStreamingDistributionOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteStreamingDistribution API operation for Amazon CloudFront. +// +// Delete a streaming distribution. To delete an RTMP distribution using the +// CloudFront API, perform the following steps. +// +// To delete an RTMP distribution using the CloudFront API: +// +// Disable the RTMP distribution. +// +// Submit a GET Streaming Distribution Config request to get the current configuration +// and the Etag header for the distribution. +// +// Update the XML document that was returned in the response to your GET Streaming +// Distribution Config request to change the value of Enabled to false. +// +// Submit a PUT Streaming Distribution Config request to update the configuration +// for your distribution. In the request body, include the XML document that +// you updated in Step 3. Then set the value of the HTTP If-Match header to +// the value of the ETag header that CloudFront returned when you submitted +// the GET Streaming Distribution Config request in Step 2. +// +// Review the response to the PUT Streaming Distribution Config request to confirm +// that the distribution was successfully disabled. +// +// Submit a GET Streaming Distribution Config request to confirm that your changes +// have propagated. When propagation is complete, the value of Status is Deployed. +// +// Submit a DELETE Streaming Distribution request. Set the value of the HTTP +// If-Match header to the value of the ETag header that CloudFront returned +// when you submitted the GET Streaming Distribution Config request in Step +// 2. +// +// Review the response to your DELETE Streaming Distribution request to confirm +// that the distribution was successfully deleted. +// +// For information about deleting a distribution using the CloudFront console, +// see Deleting a Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowToDeleteDistribution.html) +// in the Amazon CloudFront Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation DeleteStreamingDistribution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeStreamingDistributionNotDisabled "StreamingDistributionNotDisabled" +// +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. +// +// * ErrCodeNoSuchStreamingDistribution "NoSuchStreamingDistribution" +// The specified streaming distribution does not exist. +// +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/DeleteStreamingDistribution func (c *CloudFront) DeleteStreamingDistribution(input *DeleteStreamingDistributionInput) (*DeleteStreamingDistributionOutput, error) { req, out := c.DeleteStreamingDistributionRequest(input) return out, req.Send() @@ -1256,12 +1779,12 @@ func (c *CloudFront) DeleteStreamingDistributionWithContext(ctx aws.Context, inp return out, req.Send() } -const opGetCloudFrontOriginAccessIdentity = "GetCloudFrontOriginAccessIdentity2017_03_25" +const opGetCloudFrontOriginAccessIdentity = "GetCloudFrontOriginAccessIdentity2018_06_18" // GetCloudFrontOriginAccessIdentityRequest generates a "aws/request.Request" representing the // client's request for the GetCloudFrontOriginAccessIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1281,12 +1804,12 @@ const opGetCloudFrontOriginAccessIdentity = "GetCloudFrontOriginAccessIdentity20 // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetCloudFrontOriginAccessIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetCloudFrontOriginAccessIdentity func (c *CloudFront) GetCloudFrontOriginAccessIdentityRequest(input *GetCloudFrontOriginAccessIdentityInput) (req *request.Request, output *GetCloudFrontOriginAccessIdentityOutput) { op := &request.Operation{ Name: opGetCloudFrontOriginAccessIdentity, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/origin-access-identity/cloudfront/{Id}", + HTTPPath: "/2018-06-18/origin-access-identity/cloudfront/{Id}", } if input == nil { @@ -1316,7 +1839,7 @@ func (c *CloudFront) GetCloudFrontOriginAccessIdentityRequest(input *GetCloudFro // * ErrCodeAccessDenied "AccessDenied" // Access denied. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetCloudFrontOriginAccessIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetCloudFrontOriginAccessIdentity func (c *CloudFront) GetCloudFrontOriginAccessIdentity(input *GetCloudFrontOriginAccessIdentityInput) (*GetCloudFrontOriginAccessIdentityOutput, error) { req, out := c.GetCloudFrontOriginAccessIdentityRequest(input) return out, req.Send() @@ -1338,12 +1861,12 @@ func (c *CloudFront) GetCloudFrontOriginAccessIdentityWithContext(ctx aws.Contex return out, req.Send() } -const opGetCloudFrontOriginAccessIdentityConfig = "GetCloudFrontOriginAccessIdentityConfig2017_03_25" +const opGetCloudFrontOriginAccessIdentityConfig = "GetCloudFrontOriginAccessIdentityConfig2018_06_18" // GetCloudFrontOriginAccessIdentityConfigRequest generates a "aws/request.Request" representing the // client's request for the GetCloudFrontOriginAccessIdentityConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1363,12 +1886,12 @@ const opGetCloudFrontOriginAccessIdentityConfig = "GetCloudFrontOriginAccessIden // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetCloudFrontOriginAccessIdentityConfig +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetCloudFrontOriginAccessIdentityConfig func (c *CloudFront) GetCloudFrontOriginAccessIdentityConfigRequest(input *GetCloudFrontOriginAccessIdentityConfigInput) (req *request.Request, output *GetCloudFrontOriginAccessIdentityConfigOutput) { op := &request.Operation{ Name: opGetCloudFrontOriginAccessIdentityConfig, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/origin-access-identity/cloudfront/{Id}/config", + HTTPPath: "/2018-06-18/origin-access-identity/cloudfront/{Id}/config", } if input == nil { @@ -1398,7 +1921,7 @@ func (c *CloudFront) GetCloudFrontOriginAccessIdentityConfigRequest(input *GetCl // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetCloudFrontOriginAccessIdentityConfig +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetCloudFrontOriginAccessIdentityConfig func (c *CloudFront) GetCloudFrontOriginAccessIdentityConfig(input *GetCloudFrontOriginAccessIdentityConfigInput) (*GetCloudFrontOriginAccessIdentityConfigOutput, error) { req, out := c.GetCloudFrontOriginAccessIdentityConfigRequest(input) return out, req.Send() @@ -1420,12 +1943,12 @@ func (c *CloudFront) GetCloudFrontOriginAccessIdentityConfigWithContext(ctx aws. return out, req.Send() } -const opGetDistribution = "GetDistribution2017_03_25" +const opGetDistribution = "GetDistribution2018_06_18" // GetDistributionRequest generates a "aws/request.Request" representing the // client's request for the GetDistribution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1445,12 +1968,12 @@ const opGetDistribution = "GetDistribution2017_03_25" // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetDistribution +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetDistribution func (c *CloudFront) GetDistributionRequest(input *GetDistributionInput) (req *request.Request, output *GetDistributionOutput) { op := &request.Operation{ Name: opGetDistribution, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/distribution/{Id}", + HTTPPath: "/2018-06-18/distribution/{Id}", } if input == nil { @@ -1480,7 +2003,7 @@ func (c *CloudFront) GetDistributionRequest(input *GetDistributionInput) (req *r // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetDistribution +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetDistribution func (c *CloudFront) GetDistribution(input *GetDistributionInput) (*GetDistributionOutput, error) { req, out := c.GetDistributionRequest(input) return out, req.Send() @@ -1502,12 +2025,12 @@ func (c *CloudFront) GetDistributionWithContext(ctx aws.Context, input *GetDistr return out, req.Send() } -const opGetDistributionConfig = "GetDistributionConfig2017_03_25" +const opGetDistributionConfig = "GetDistributionConfig2018_06_18" // GetDistributionConfigRequest generates a "aws/request.Request" representing the // client's request for the GetDistributionConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1527,12 +2050,12 @@ const opGetDistributionConfig = "GetDistributionConfig2017_03_25" // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetDistributionConfig +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetDistributionConfig func (c *CloudFront) GetDistributionConfigRequest(input *GetDistributionConfigInput) (req *request.Request, output *GetDistributionConfigOutput) { op := &request.Operation{ Name: opGetDistributionConfig, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/distribution/{Id}/config", + HTTPPath: "/2018-06-18/distribution/{Id}/config", } if input == nil { @@ -1562,7 +2085,7 @@ func (c *CloudFront) GetDistributionConfigRequest(input *GetDistributionConfigIn // * ErrCodeAccessDenied "AccessDenied" // Access denied. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetDistributionConfig +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetDistributionConfig func (c *CloudFront) GetDistributionConfig(input *GetDistributionConfigInput) (*GetDistributionConfigOutput, error) { req, out := c.GetDistributionConfigRequest(input) return out, req.Send() @@ -1584,870 +2107,870 @@ func (c *CloudFront) GetDistributionConfigWithContext(ctx aws.Context, input *Ge return out, req.Send() } -const opGetInvalidation = "GetInvalidation2017_03_25" +const opGetFieldLevelEncryption = "GetFieldLevelEncryption2018_06_18" -// GetInvalidationRequest generates a "aws/request.Request" representing the -// client's request for the GetInvalidation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetFieldLevelEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the GetFieldLevelEncryption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetInvalidation for more information on using the GetInvalidation +// See GetFieldLevelEncryption for more information on using the GetFieldLevelEncryption // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetInvalidationRequest method. -// req, resp := client.GetInvalidationRequest(params) +// // Example sending a request using the GetFieldLevelEncryptionRequest method. +// req, resp := client.GetFieldLevelEncryptionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetInvalidation -func (c *CloudFront) GetInvalidationRequest(input *GetInvalidationInput) (req *request.Request, output *GetInvalidationOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetFieldLevelEncryption +func (c *CloudFront) GetFieldLevelEncryptionRequest(input *GetFieldLevelEncryptionInput) (req *request.Request, output *GetFieldLevelEncryptionOutput) { op := &request.Operation{ - Name: opGetInvalidation, + Name: opGetFieldLevelEncryption, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/distribution/{DistributionId}/invalidation/{Id}", + HTTPPath: "/2018-06-18/field-level-encryption/{Id}", } if input == nil { - input = &GetInvalidationInput{} + input = &GetFieldLevelEncryptionInput{} } - output = &GetInvalidationOutput{} + output = &GetFieldLevelEncryptionOutput{} req = c.newRequest(op, input, output) return } -// GetInvalidation API operation for Amazon CloudFront. +// GetFieldLevelEncryption API operation for Amazon CloudFront. // -// Get the information about an invalidation. +// Get the field-level encryption configuration information. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
// // See the AWS API reference guide for Amazon CloudFront's -// API operation GetInvalidation for usage and error information. +// API operation GetFieldLevelEncryption for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchInvalidation "NoSuchInvalidation" -// The specified invalidation does not exist. -// -// * ErrCodeNoSuchDistribution "NoSuchDistribution" -// The specified distribution does not exist. -// // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetInvalidation -func (c *CloudFront) GetInvalidation(input *GetInvalidationInput) (*GetInvalidationOutput, error) { - req, out := c.GetInvalidationRequest(input) +// * ErrCodeNoSuchFieldLevelEncryptionConfig "NoSuchFieldLevelEncryptionConfig" +// The specified configuration for field-level encryption doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetFieldLevelEncryption +func (c *CloudFront) GetFieldLevelEncryption(input *GetFieldLevelEncryptionInput) (*GetFieldLevelEncryptionOutput, error) { + req, out := c.GetFieldLevelEncryptionRequest(input) return out, req.Send() } -// GetInvalidationWithContext is the same as GetInvalidation with the addition of +// GetFieldLevelEncryptionWithContext is the same as GetFieldLevelEncryption with the addition of // the ability to pass a context and additional request options. // -// See GetInvalidation for details on how to use this API operation. +// See GetFieldLevelEncryption for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) GetInvalidationWithContext(ctx aws.Context, input *GetInvalidationInput, opts ...request.Option) (*GetInvalidationOutput, error) { - req, out := c.GetInvalidationRequest(input) +func (c *CloudFront) GetFieldLevelEncryptionWithContext(ctx aws.Context, input *GetFieldLevelEncryptionInput, opts ...request.Option) (*GetFieldLevelEncryptionOutput, error) { + req, out := c.GetFieldLevelEncryptionRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetStreamingDistribution = "GetStreamingDistribution2017_03_25" +const opGetFieldLevelEncryptionConfig = "GetFieldLevelEncryptionConfig2018_06_18" -// GetStreamingDistributionRequest generates a "aws/request.Request" representing the -// client's request for the GetStreamingDistribution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetFieldLevelEncryptionConfigRequest generates a "aws/request.Request" representing the +// client's request for the GetFieldLevelEncryptionConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetStreamingDistribution for more information on using the GetStreamingDistribution +// See GetFieldLevelEncryptionConfig for more information on using the GetFieldLevelEncryptionConfig // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetStreamingDistributionRequest method. -// req, resp := client.GetStreamingDistributionRequest(params) +// // Example sending a request using the GetFieldLevelEncryptionConfigRequest method. +// req, resp := client.GetFieldLevelEncryptionConfigRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetStreamingDistribution -func (c *CloudFront) GetStreamingDistributionRequest(input *GetStreamingDistributionInput) (req *request.Request, output *GetStreamingDistributionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetFieldLevelEncryptionConfig +func (c *CloudFront) GetFieldLevelEncryptionConfigRequest(input *GetFieldLevelEncryptionConfigInput) (req *request.Request, output *GetFieldLevelEncryptionConfigOutput) { op := &request.Operation{ - Name: opGetStreamingDistribution, + Name: opGetFieldLevelEncryptionConfig, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/streaming-distribution/{Id}", + HTTPPath: "/2018-06-18/field-level-encryption/{Id}/config", } if input == nil { - input = &GetStreamingDistributionInput{} + input = &GetFieldLevelEncryptionConfigInput{} } - output = &GetStreamingDistributionOutput{} + output = &GetFieldLevelEncryptionConfigOutput{} req = c.newRequest(op, input, output) return } -// GetStreamingDistribution API operation for Amazon CloudFront. +// GetFieldLevelEncryptionConfig API operation for Amazon CloudFront. // -// Gets information about a specified RTMP distribution, including the distribution -// configuration. +// Get the field-level encryption configuration information. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation GetStreamingDistribution for usage and error information. +// API operation GetFieldLevelEncryptionConfig for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchStreamingDistribution "NoSuchStreamingDistribution" -// The specified streaming distribution does not exist. -// // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetStreamingDistribution -func (c *CloudFront) GetStreamingDistribution(input *GetStreamingDistributionInput) (*GetStreamingDistributionOutput, error) { - req, out := c.GetStreamingDistributionRequest(input) +// * ErrCodeNoSuchFieldLevelEncryptionConfig "NoSuchFieldLevelEncryptionConfig" +// The specified configuration for field-level encryption doesn't exist. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetFieldLevelEncryptionConfig +func (c *CloudFront) GetFieldLevelEncryptionConfig(input *GetFieldLevelEncryptionConfigInput) (*GetFieldLevelEncryptionConfigOutput, error) { + req, out := c.GetFieldLevelEncryptionConfigRequest(input) return out, req.Send() } -// GetStreamingDistributionWithContext is the same as GetStreamingDistribution with the addition of +// GetFieldLevelEncryptionConfigWithContext is the same as GetFieldLevelEncryptionConfig with the addition of // the ability to pass a context and additional request options. // -// See GetStreamingDistribution for details on how to use this API operation. +// See GetFieldLevelEncryptionConfig for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) GetStreamingDistributionWithContext(ctx aws.Context, input *GetStreamingDistributionInput, opts ...request.Option) (*GetStreamingDistributionOutput, error) { - req, out := c.GetStreamingDistributionRequest(input) +func (c *CloudFront) GetFieldLevelEncryptionConfigWithContext(ctx aws.Context, input *GetFieldLevelEncryptionConfigInput, opts ...request.Option) (*GetFieldLevelEncryptionConfigOutput, error) { + req, out := c.GetFieldLevelEncryptionConfigRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetStreamingDistributionConfig = "GetStreamingDistributionConfig2017_03_25" +const opGetFieldLevelEncryptionProfile = "GetFieldLevelEncryptionProfile2018_06_18" -// GetStreamingDistributionConfigRequest generates a "aws/request.Request" representing the -// client's request for the GetStreamingDistributionConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetFieldLevelEncryptionProfileRequest generates a "aws/request.Request" representing the +// client's request for the GetFieldLevelEncryptionProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetStreamingDistributionConfig for more information on using the GetStreamingDistributionConfig +// See GetFieldLevelEncryptionProfile for more information on using the GetFieldLevelEncryptionProfile // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetStreamingDistributionConfigRequest method. -// req, resp := client.GetStreamingDistributionConfigRequest(params) +// // Example sending a request using the GetFieldLevelEncryptionProfileRequest method. 
+// req, resp := client.GetFieldLevelEncryptionProfileRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetStreamingDistributionConfig -func (c *CloudFront) GetStreamingDistributionConfigRequest(input *GetStreamingDistributionConfigInput) (req *request.Request, output *GetStreamingDistributionConfigOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetFieldLevelEncryptionProfile +func (c *CloudFront) GetFieldLevelEncryptionProfileRequest(input *GetFieldLevelEncryptionProfileInput) (req *request.Request, output *GetFieldLevelEncryptionProfileOutput) { op := &request.Operation{ - Name: opGetStreamingDistributionConfig, + Name: opGetFieldLevelEncryptionProfile, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/streaming-distribution/{Id}/config", + HTTPPath: "/2018-06-18/field-level-encryption-profile/{Id}", } if input == nil { - input = &GetStreamingDistributionConfigInput{} + input = &GetFieldLevelEncryptionProfileInput{} } - output = &GetStreamingDistributionConfigOutput{} + output = &GetFieldLevelEncryptionProfileOutput{} req = c.newRequest(op, input, output) return } -// GetStreamingDistributionConfig API operation for Amazon CloudFront. +// GetFieldLevelEncryptionProfile API operation for Amazon CloudFront. // -// Get the configuration information about a streaming distribution. +// Get the field-level encryption profile information. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation GetStreamingDistributionConfig for usage and error information. +// API operation GetFieldLevelEncryptionProfile for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchStreamingDistribution "NoSuchStreamingDistribution" -// The specified streaming distribution does not exist. -// // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/GetStreamingDistributionConfig -func (c *CloudFront) GetStreamingDistributionConfig(input *GetStreamingDistributionConfigInput) (*GetStreamingDistributionConfigOutput, error) { - req, out := c.GetStreamingDistributionConfigRequest(input) +// * ErrCodeNoSuchFieldLevelEncryptionProfile "NoSuchFieldLevelEncryptionProfile" +// The specified profile for field-level encryption doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetFieldLevelEncryptionProfile +func (c *CloudFront) GetFieldLevelEncryptionProfile(input *GetFieldLevelEncryptionProfileInput) (*GetFieldLevelEncryptionProfileOutput, error) { + req, out := c.GetFieldLevelEncryptionProfileRequest(input) return out, req.Send() } -// GetStreamingDistributionConfigWithContext is the same as GetStreamingDistributionConfig with the addition of +// GetFieldLevelEncryptionProfileWithContext is the same as GetFieldLevelEncryptionProfile with the addition of // the ability to pass a context and additional request options. // -// See GetStreamingDistributionConfig for details on how to use this API operation. +// See GetFieldLevelEncryptionProfile for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) GetStreamingDistributionConfigWithContext(ctx aws.Context, input *GetStreamingDistributionConfigInput, opts ...request.Option) (*GetStreamingDistributionConfigOutput, error) { - req, out := c.GetStreamingDistributionConfigRequest(input) +func (c *CloudFront) GetFieldLevelEncryptionProfileWithContext(ctx aws.Context, input *GetFieldLevelEncryptionProfileInput, opts ...request.Option) (*GetFieldLevelEncryptionProfileOutput, error) { + req, out := c.GetFieldLevelEncryptionProfileRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListCloudFrontOriginAccessIdentities = "ListCloudFrontOriginAccessIdentities2017_03_25" +const opGetFieldLevelEncryptionProfileConfig = "GetFieldLevelEncryptionProfileConfig2018_06_18" -// ListCloudFrontOriginAccessIdentitiesRequest generates a "aws/request.Request" representing the -// client's request for the ListCloudFrontOriginAccessIdentities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetFieldLevelEncryptionProfileConfigRequest generates a "aws/request.Request" representing the +// client's request for the GetFieldLevelEncryptionProfileConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListCloudFrontOriginAccessIdentities for more information on using the ListCloudFrontOriginAccessIdentities +// See GetFieldLevelEncryptionProfileConfig for more information on using the GetFieldLevelEncryptionProfileConfig // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListCloudFrontOriginAccessIdentitiesRequest method. -// req, resp := client.ListCloudFrontOriginAccessIdentitiesRequest(params) +// // Example sending a request using the GetFieldLevelEncryptionProfileConfigRequest method. 
+// req, resp := client.GetFieldLevelEncryptionProfileConfigRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListCloudFrontOriginAccessIdentities -func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesRequest(input *ListCloudFrontOriginAccessIdentitiesInput) (req *request.Request, output *ListCloudFrontOriginAccessIdentitiesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetFieldLevelEncryptionProfileConfig +func (c *CloudFront) GetFieldLevelEncryptionProfileConfigRequest(input *GetFieldLevelEncryptionProfileConfigInput) (req *request.Request, output *GetFieldLevelEncryptionProfileConfigOutput) { op := &request.Operation{ - Name: opListCloudFrontOriginAccessIdentities, + Name: opGetFieldLevelEncryptionProfileConfig, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/origin-access-identity/cloudfront", - Paginator: &request.Paginator{ - InputTokens: []string{"Marker"}, - OutputTokens: []string{"CloudFrontOriginAccessIdentityList.NextMarker"}, - LimitToken: "MaxItems", - TruncationToken: "CloudFrontOriginAccessIdentityList.IsTruncated", - }, + HTTPPath: "/2018-06-18/field-level-encryption-profile/{Id}/config", } if input == nil { - input = &ListCloudFrontOriginAccessIdentitiesInput{} + input = &GetFieldLevelEncryptionProfileConfigInput{} } - output = &ListCloudFrontOriginAccessIdentitiesOutput{} + output = &GetFieldLevelEncryptionProfileConfigOutput{} req = c.newRequest(op, input, output) return } -// ListCloudFrontOriginAccessIdentities API operation for Amazon CloudFront. +// GetFieldLevelEncryptionProfileConfig API operation for Amazon CloudFront. // -// Lists origin access identities. +// Get the field-level encryption profile configuration information. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation ListCloudFrontOriginAccessIdentities for usage and error information. +// API operation GetFieldLevelEncryptionProfileConfig for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidArgument "InvalidArgument" -// The argument is invalid. +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListCloudFrontOriginAccessIdentities -func (c *CloudFront) ListCloudFrontOriginAccessIdentities(input *ListCloudFrontOriginAccessIdentitiesInput) (*ListCloudFrontOriginAccessIdentitiesOutput, error) { - req, out := c.ListCloudFrontOriginAccessIdentitiesRequest(input) +// * ErrCodeNoSuchFieldLevelEncryptionProfile "NoSuchFieldLevelEncryptionProfile" +// The specified profile for field-level encryption doesn't exist. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetFieldLevelEncryptionProfileConfig +func (c *CloudFront) GetFieldLevelEncryptionProfileConfig(input *GetFieldLevelEncryptionProfileConfigInput) (*GetFieldLevelEncryptionProfileConfigOutput, error) { + req, out := c.GetFieldLevelEncryptionProfileConfigRequest(input) return out, req.Send() } -// ListCloudFrontOriginAccessIdentitiesWithContext is the same as ListCloudFrontOriginAccessIdentities with the addition of +// GetFieldLevelEncryptionProfileConfigWithContext is the same as GetFieldLevelEncryptionProfileConfig with the addition of // the ability to pass a context and additional request options. // -// See ListCloudFrontOriginAccessIdentities for details on how to use this API operation. +// See GetFieldLevelEncryptionProfileConfig for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesWithContext(ctx aws.Context, input *ListCloudFrontOriginAccessIdentitiesInput, opts ...request.Option) (*ListCloudFrontOriginAccessIdentitiesOutput, error) { - req, out := c.ListCloudFrontOriginAccessIdentitiesRequest(input) +func (c *CloudFront) GetFieldLevelEncryptionProfileConfigWithContext(ctx aws.Context, input *GetFieldLevelEncryptionProfileConfigInput, opts ...request.Option) (*GetFieldLevelEncryptionProfileConfigOutput, error) { + req, out := c.GetFieldLevelEncryptionProfileConfigRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListCloudFrontOriginAccessIdentitiesPages iterates over the pages of a ListCloudFrontOriginAccessIdentities operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. -// -// See ListCloudFrontOriginAccessIdentities method for more information on how to use this operation. -// -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a ListCloudFrontOriginAccessIdentities operation. -// pageNum := 0 -// err := client.ListCloudFrontOriginAccessIdentitiesPages(params, -// func(page *ListCloudFrontOriginAccessIdentitiesOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesPages(input *ListCloudFrontOriginAccessIdentitiesInput, fn func(*ListCloudFrontOriginAccessIdentitiesOutput, bool) bool) error { - return c.ListCloudFrontOriginAccessIdentitiesPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// ListCloudFrontOriginAccessIdentitiesPagesWithContext same as ListCloudFrontOriginAccessIdentitiesPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. 
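The field-level encryption getters introduced in this hunk follow the same shape as every other generated CloudFront operation: a plain call that blocks on `Send`, plus a `WithContext` variant for cancellation. As a minimal usage sketch (not part of the vendored code), assuming a default session and a placeholder profile ID:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	// Default credential and region resolution from the environment.
	svc := cloudfront.New(session.Must(session.NewSession()))

	// Bound the call so it can be cancelled; a nil context would panic.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// "EXAMPLEPROFILEID" is a placeholder field-level encryption profile ID.
	out, err := svc.GetFieldLevelEncryptionProfileConfigWithContext(ctx,
		&cloudfront.GetFieldLevelEncryptionProfileConfigInput{
			Id: aws.String("EXAMPLEPROFILEID"),
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```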
-func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesPagesWithContext(ctx aws.Context, input *ListCloudFrontOriginAccessIdentitiesInput, fn func(*ListCloudFrontOriginAccessIdentitiesOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *ListCloudFrontOriginAccessIdentitiesInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.ListCloudFrontOriginAccessIdentitiesRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*ListCloudFrontOriginAccessIdentitiesOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opListDistributions = "ListDistributions2017_03_25" +const opGetInvalidation = "GetInvalidation2018_06_18" -// ListDistributionsRequest generates a "aws/request.Request" representing the -// client's request for the ListDistributions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetInvalidationRequest generates a "aws/request.Request" representing the +// client's request for the GetInvalidation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListDistributions for more information on using the ListDistributions +// See GetInvalidation for more information on using the GetInvalidation // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListDistributionsRequest method. -// req, resp := client.ListDistributionsRequest(params) +// // Example sending a request using the GetInvalidationRequest method. +// req, resp := client.GetInvalidationRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListDistributions -func (c *CloudFront) ListDistributionsRequest(input *ListDistributionsInput) (req *request.Request, output *ListDistributionsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetInvalidation +func (c *CloudFront) GetInvalidationRequest(input *GetInvalidationInput) (req *request.Request, output *GetInvalidationOutput) { op := &request.Operation{ - Name: opListDistributions, + Name: opGetInvalidation, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/distribution", - Paginator: &request.Paginator{ - InputTokens: []string{"Marker"}, - OutputTokens: []string{"DistributionList.NextMarker"}, - LimitToken: "MaxItems", - TruncationToken: "DistributionList.IsTruncated", - }, + HTTPPath: "/2018-06-18/distribution/{DistributionId}/invalidation/{Id}", } if input == nil { - input = &ListDistributionsInput{} + input = &GetInvalidationInput{} } - output = &ListDistributionsOutput{} + output = &GetInvalidationOutput{} req = c.newRequest(op, input, output) return } -// ListDistributions API operation for Amazon CloudFront. +// GetInvalidation API operation for Amazon CloudFront. // -// List distributions. +// Get the information about an invalidation. 
// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation ListDistributions for usage and error information. +// API operation GetInvalidation for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidArgument "InvalidArgument" -// The argument is invalid. +// * ErrCodeNoSuchInvalidation "NoSuchInvalidation" +// The specified invalidation does not exist. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListDistributions -func (c *CloudFront) ListDistributions(input *ListDistributionsInput) (*ListDistributionsOutput, error) { - req, out := c.ListDistributionsRequest(input) +// * ErrCodeNoSuchDistribution "NoSuchDistribution" +// The specified distribution does not exist. +// +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetInvalidation +func (c *CloudFront) GetInvalidation(input *GetInvalidationInput) (*GetInvalidationOutput, error) { + req, out := c.GetInvalidationRequest(input) return out, req.Send() } -// ListDistributionsWithContext is the same as ListDistributions with the addition of +// GetInvalidationWithContext is the same as GetInvalidation with the addition of // the ability to pass a context and additional request options. // -// See ListDistributions for details on how to use this API operation. +// See GetInvalidation for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) ListDistributionsWithContext(ctx aws.Context, input *ListDistributionsInput, opts ...request.Option) (*ListDistributionsOutput, error) { - req, out := c.ListDistributionsRequest(input) +func (c *CloudFront) GetInvalidationWithContext(ctx aws.Context, input *GetInvalidationInput, opts ...request.Option) (*GetInvalidationOutput, error) { + req, out := c.GetInvalidationRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListDistributionsPages iterates over the pages of a ListDistributions operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. -// -// See ListDistributions method for more information on how to use this operation. -// -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a ListDistributions operation. -// pageNum := 0 -// err := client.ListDistributionsPages(params, -// func(page *ListDistributionsOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *CloudFront) ListDistributionsPages(input *ListDistributionsInput, fn func(*ListDistributionsOutput, bool) bool) error { - return c.ListDistributionsPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// ListDistributionsPagesWithContext same as ListDistributionsPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. 
In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *CloudFront) ListDistributionsPagesWithContext(ctx aws.Context, input *ListDistributionsInput, fn func(*ListDistributionsOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *ListDistributionsInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.ListDistributionsRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*ListDistributionsOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opListDistributionsByWebACLId = "ListDistributionsByWebACLId2017_03_25" +const opGetPublicKey = "GetPublicKey2018_06_18" -// ListDistributionsByWebACLIdRequest generates a "aws/request.Request" representing the -// client's request for the ListDistributionsByWebACLId operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetPublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the GetPublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListDistributionsByWebACLId for more information on using the ListDistributionsByWebACLId +// See GetPublicKey for more information on using the GetPublicKey // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListDistributionsByWebACLIdRequest method. -// req, resp := client.ListDistributionsByWebACLIdRequest(params) +// // Example sending a request using the GetPublicKeyRequest method. +// req, resp := client.GetPublicKeyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListDistributionsByWebACLId -func (c *CloudFront) ListDistributionsByWebACLIdRequest(input *ListDistributionsByWebACLIdInput) (req *request.Request, output *ListDistributionsByWebACLIdOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetPublicKey +func (c *CloudFront) GetPublicKeyRequest(input *GetPublicKeyInput) (req *request.Request, output *GetPublicKeyOutput) { op := &request.Operation{ - Name: opListDistributionsByWebACLId, + Name: opGetPublicKey, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/distributionsByWebACLId/{WebACLId}", + HTTPPath: "/2018-06-18/public-key/{Id}", } if input == nil { - input = &ListDistributionsByWebACLIdInput{} + input = &GetPublicKeyInput{} } - output = &ListDistributionsByWebACLIdOutput{} + output = &GetPublicKeyOutput{} req = c.newRequest(op, input, output) return } -// ListDistributionsByWebACLId API operation for Amazon CloudFront. +// GetPublicKey API operation for Amazon CloudFront. // -// List the distributions that are associated with a specified AWS WAF web ACL. +// Get the public key information. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation ListDistributionsByWebACLId for usage and error information. +// API operation GetPublicKey for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidArgument "InvalidArgument" -// The argument is invalid. +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. // -// * ErrCodeInvalidWebACLId "InvalidWebACLId" +// * ErrCodeNoSuchPublicKey "NoSuchPublicKey" +// The specified public key doesn't exist. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListDistributionsByWebACLId -func (c *CloudFront) ListDistributionsByWebACLId(input *ListDistributionsByWebACLIdInput) (*ListDistributionsByWebACLIdOutput, error) { - req, out := c.ListDistributionsByWebACLIdRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetPublicKey +func (c *CloudFront) GetPublicKey(input *GetPublicKeyInput) (*GetPublicKeyOutput, error) { + req, out := c.GetPublicKeyRequest(input) return out, req.Send() } -// ListDistributionsByWebACLIdWithContext is the same as ListDistributionsByWebACLId with the addition of +// GetPublicKeyWithContext is the same as GetPublicKey with the addition of // the ability to pass a context and additional request options. // -// See ListDistributionsByWebACLId for details on how to use this API operation. +// See GetPublicKey for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) ListDistributionsByWebACLIdWithContext(ctx aws.Context, input *ListDistributionsByWebACLIdInput, opts ...request.Option) (*ListDistributionsByWebACLIdOutput, error) { - req, out := c.ListDistributionsByWebACLIdRequest(input) +func (c *CloudFront) GetPublicKeyWithContext(ctx aws.Context, input *GetPublicKeyInput, opts ...request.Option) (*GetPublicKeyOutput, error) { + req, out := c.GetPublicKeyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListInvalidations = "ListInvalidations2017_03_25" +const opGetPublicKeyConfig = "GetPublicKeyConfig2018_06_18" -// ListInvalidationsRequest generates a "aws/request.Request" representing the -// client's request for the ListInvalidations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetPublicKeyConfigRequest generates a "aws/request.Request" representing the +// client's request for the GetPublicKeyConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListInvalidations for more information on using the ListInvalidations +// See GetPublicKeyConfig for more information on using the GetPublicKeyConfig // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
// // -// // Example sending a request using the ListInvalidationsRequest method. -// req, resp := client.ListInvalidationsRequest(params) +// // Example sending a request using the GetPublicKeyConfigRequest method. +// req, resp := client.GetPublicKeyConfigRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListInvalidations -func (c *CloudFront) ListInvalidationsRequest(input *ListInvalidationsInput) (req *request.Request, output *ListInvalidationsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetPublicKeyConfig +func (c *CloudFront) GetPublicKeyConfigRequest(input *GetPublicKeyConfigInput) (req *request.Request, output *GetPublicKeyConfigOutput) { op := &request.Operation{ - Name: opListInvalidations, + Name: opGetPublicKeyConfig, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/distribution/{DistributionId}/invalidation", - Paginator: &request.Paginator{ - InputTokens: []string{"Marker"}, - OutputTokens: []string{"InvalidationList.NextMarker"}, - LimitToken: "MaxItems", - TruncationToken: "InvalidationList.IsTruncated", - }, + HTTPPath: "/2018-06-18/public-key/{Id}/config", } if input == nil { - input = &ListInvalidationsInput{} + input = &GetPublicKeyConfigInput{} } - output = &ListInvalidationsOutput{} + output = &GetPublicKeyConfigOutput{} req = c.newRequest(op, input, output) return } -// ListInvalidations API operation for Amazon CloudFront. +// GetPublicKeyConfig API operation for Amazon CloudFront. // -// Lists invalidation batches. +// Return public key configuration informaation // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation ListInvalidations for usage and error information. +// API operation GetPublicKeyConfig for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidArgument "InvalidArgument" -// The argument is invalid. -// -// * ErrCodeNoSuchDistribution "NoSuchDistribution" -// The specified distribution does not exist. -// // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListInvalidations -func (c *CloudFront) ListInvalidations(input *ListInvalidationsInput) (*ListInvalidationsOutput, error) { - req, out := c.ListInvalidationsRequest(input) +// * ErrCodeNoSuchPublicKey "NoSuchPublicKey" +// The specified public key doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetPublicKeyConfig +func (c *CloudFront) GetPublicKeyConfig(input *GetPublicKeyConfigInput) (*GetPublicKeyConfigOutput, error) { + req, out := c.GetPublicKeyConfigRequest(input) return out, req.Send() } -// ListInvalidationsWithContext is the same as ListInvalidations with the addition of +// GetPublicKeyConfigWithContext is the same as GetPublicKeyConfig with the addition of // the ability to pass a context and additional request options. // -// See ListInvalidations for details on how to use this API operation. +// See GetPublicKeyConfig for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) ListInvalidationsWithContext(ctx aws.Context, input *ListInvalidationsInput, opts ...request.Option) (*ListInvalidationsOutput, error) { - req, out := c.ListInvalidationsRequest(input) +func (c *CloudFront) GetPublicKeyConfigWithContext(ctx aws.Context, input *GetPublicKeyConfigInput, opts ...request.Option) (*GetPublicKeyConfigOutput, error) { + req, out := c.GetPublicKeyConfigRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListInvalidationsPages iterates over the pages of a ListInvalidations operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. +const opGetStreamingDistribution = "GetStreamingDistribution2018_06_18" + +// GetStreamingDistributionRequest generates a "aws/request.Request" representing the +// client's request for the GetStreamingDistribution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // -// See ListInvalidations method for more information on how to use this operation. +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. // -// Note: This operation can generate multiple requests to a service. +// See GetStreamingDistribution for more information on using the GetStreamingDistribution +// API call, and error handling. // -// // Example iterating over at most 3 pages of a ListInvalidations operation. -// pageNum := 0 -// err := client.ListInvalidationsPages(params, -// func(page *ListInvalidationsOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. // -func (c *CloudFront) ListInvalidationsPages(input *ListInvalidationsInput, fn func(*ListInvalidationsOutput, bool) bool) error { - return c.ListInvalidationsPagesWithContext(aws.BackgroundContext(), input, fn) +// +// // Example sending a request using the GetStreamingDistributionRequest method. +// req, resp := client.GetStreamingDistributionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetStreamingDistribution +func (c *CloudFront) GetStreamingDistributionRequest(input *GetStreamingDistributionInput) (req *request.Request, output *GetStreamingDistributionOutput) { + op := &request.Operation{ + Name: opGetStreamingDistribution, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/streaming-distribution/{Id}", + } + + if input == nil { + input = &GetStreamingDistributionInput{} + } + + output = &GetStreamingDistributionOutput{} + req = c.newRequest(op, input, output) + return } -// ListInvalidationsPagesWithContext same as ListInvalidationsPages except -// it takes a Context and allows setting request options on the pages. +// GetStreamingDistribution API operation for Amazon CloudFront. +// +// Gets information about a specified RTMP distribution, including the distribution +// configuration. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation GetStreamingDistribution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchStreamingDistribution "NoSuchStreamingDistribution" +// The specified streaming distribution does not exist. +// +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetStreamingDistribution +func (c *CloudFront) GetStreamingDistribution(input *GetStreamingDistributionInput) (*GetStreamingDistributionOutput, error) { + req, out := c.GetStreamingDistributionRequest(input) + return out, req.Send() +} + +// GetStreamingDistributionWithContext is the same as GetStreamingDistribution with the addition of +// the ability to pass a context and additional request options. +// +// See GetStreamingDistribution for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) ListInvalidationsPagesWithContext(ctx aws.Context, input *ListInvalidationsInput, fn func(*ListInvalidationsOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *ListInvalidationsInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.ListInvalidationsRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, +func (c *CloudFront) GetStreamingDistributionWithContext(ctx aws.Context, input *GetStreamingDistributionInput, opts ...request.Option) (*GetStreamingDistributionOutput, error) { + req, out := c.GetStreamingDistributionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetStreamingDistributionConfig = "GetStreamingDistributionConfig2018_06_18" + +// GetStreamingDistributionConfigRequest generates a "aws/request.Request" representing the +// client's request for the GetStreamingDistributionConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetStreamingDistributionConfig for more information on using the GetStreamingDistributionConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetStreamingDistributionConfigRequest method. 
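As the doc comments above note, the generated `GetStreamingDistribution` wrapper surfaces failures as `awserr.Error`, so callers can branch on the documented error codes. A hedged sketch (placeholder distribution ID, default session):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	svc := cloudfront.New(session.Must(session.NewSession()))

	// "EXAMPLEID" is a placeholder RTMP (streaming) distribution ID.
	out, err := svc.GetStreamingDistribution(&cloudfront.GetStreamingDistributionInput{
		Id: aws.String("EXAMPLEID"),
	})
	if err != nil {
		// Use a runtime type assertion to inspect the service error code.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == cloudfront.ErrCodeNoSuchStreamingDistribution {
			log.Fatalf("streaming distribution not found: %s", aerr.Message())
		}
		log.Fatal(err)
	}
	fmt.Println(out)
}
```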
+// req, resp := client.GetStreamingDistributionConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetStreamingDistributionConfig +func (c *CloudFront) GetStreamingDistributionConfigRequest(input *GetStreamingDistributionConfigInput) (req *request.Request, output *GetStreamingDistributionConfigOutput) { + op := &request.Operation{ + Name: opGetStreamingDistributionConfig, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/streaming-distribution/{Id}/config", } - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*ListInvalidationsOutput), !p.HasNextPage()) + if input == nil { + input = &GetStreamingDistributionConfigInput{} } - return p.Err() + + output = &GetStreamingDistributionConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetStreamingDistributionConfig API operation for Amazon CloudFront. +// +// Get the configuration information about a streaming distribution. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation GetStreamingDistributionConfig for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchStreamingDistribution "NoSuchStreamingDistribution" +// The specified streaming distribution does not exist. +// +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/GetStreamingDistributionConfig +func (c *CloudFront) GetStreamingDistributionConfig(input *GetStreamingDistributionConfigInput) (*GetStreamingDistributionConfigOutput, error) { + req, out := c.GetStreamingDistributionConfigRequest(input) + return out, req.Send() +} + +// GetStreamingDistributionConfigWithContext is the same as GetStreamingDistributionConfig with the addition of +// the ability to pass a context and additional request options. +// +// See GetStreamingDistributionConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) GetStreamingDistributionConfigWithContext(ctx aws.Context, input *GetStreamingDistributionConfigInput, opts ...request.Option) (*GetStreamingDistributionConfigOutput, error) { + req, out := c.GetStreamingDistributionConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() } -const opListStreamingDistributions = "ListStreamingDistributions2017_03_25" +const opListCloudFrontOriginAccessIdentities = "ListCloudFrontOriginAccessIdentities2018_06_18" -// ListStreamingDistributionsRequest generates a "aws/request.Request" representing the -// client's request for the ListStreamingDistributions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListCloudFrontOriginAccessIdentitiesRequest generates a "aws/request.Request" representing the +// client's request for the ListCloudFrontOriginAccessIdentities operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListStreamingDistributions for more information on using the ListStreamingDistributions +// See ListCloudFrontOriginAccessIdentities for more information on using the ListCloudFrontOriginAccessIdentities // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListStreamingDistributionsRequest method. -// req, resp := client.ListStreamingDistributionsRequest(params) +// // Example sending a request using the ListCloudFrontOriginAccessIdentitiesRequest method. +// req, resp := client.ListCloudFrontOriginAccessIdentitiesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListStreamingDistributions -func (c *CloudFront) ListStreamingDistributionsRequest(input *ListStreamingDistributionsInput) (req *request.Request, output *ListStreamingDistributionsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListCloudFrontOriginAccessIdentities +func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesRequest(input *ListCloudFrontOriginAccessIdentitiesInput) (req *request.Request, output *ListCloudFrontOriginAccessIdentitiesOutput) { op := &request.Operation{ - Name: opListStreamingDistributions, + Name: opListCloudFrontOriginAccessIdentities, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/streaming-distribution", + HTTPPath: "/2018-06-18/origin-access-identity/cloudfront", Paginator: &request.Paginator{ InputTokens: []string{"Marker"}, - OutputTokens: []string{"StreamingDistributionList.NextMarker"}, + OutputTokens: []string{"CloudFrontOriginAccessIdentityList.NextMarker"}, LimitToken: "MaxItems", - TruncationToken: "StreamingDistributionList.IsTruncated", + TruncationToken: "CloudFrontOriginAccessIdentityList.IsTruncated", }, } if input == nil { - input = &ListStreamingDistributionsInput{} + input = &ListCloudFrontOriginAccessIdentitiesInput{} } - output = &ListStreamingDistributionsOutput{} + output = &ListCloudFrontOriginAccessIdentitiesOutput{} req = c.newRequest(op, input, output) return } -// ListStreamingDistributions API operation for Amazon CloudFront. +// ListCloudFrontOriginAccessIdentities API operation for Amazon CloudFront. // -// List streaming distributions. +// Lists origin access identities. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation ListStreamingDistributions for usage and error information. +// API operation ListCloudFrontOriginAccessIdentities for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidArgument "InvalidArgument" // The argument is invalid. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListStreamingDistributions -func (c *CloudFront) ListStreamingDistributions(input *ListStreamingDistributionsInput) (*ListStreamingDistributionsOutput, error) { - req, out := c.ListStreamingDistributionsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListCloudFrontOriginAccessIdentities +func (c *CloudFront) ListCloudFrontOriginAccessIdentities(input *ListCloudFrontOriginAccessIdentitiesInput) (*ListCloudFrontOriginAccessIdentitiesOutput, error) { + req, out := c.ListCloudFrontOriginAccessIdentitiesRequest(input) return out, req.Send() } -// ListStreamingDistributionsWithContext is the same as ListStreamingDistributions with the addition of +// ListCloudFrontOriginAccessIdentitiesWithContext is the same as ListCloudFrontOriginAccessIdentities with the addition of // the ability to pass a context and additional request options. // -// See ListStreamingDistributions for details on how to use this API operation. +// See ListCloudFrontOriginAccessIdentities for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) ListStreamingDistributionsWithContext(ctx aws.Context, input *ListStreamingDistributionsInput, opts ...request.Option) (*ListStreamingDistributionsOutput, error) { - req, out := c.ListStreamingDistributionsRequest(input) +func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesWithContext(ctx aws.Context, input *ListCloudFrontOriginAccessIdentitiesInput, opts ...request.Option) (*ListCloudFrontOriginAccessIdentitiesOutput, error) { + req, out := c.ListCloudFrontOriginAccessIdentitiesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListStreamingDistributionsPages iterates over the pages of a ListStreamingDistributions operation, +// ListCloudFrontOriginAccessIdentitiesPages iterates over the pages of a ListCloudFrontOriginAccessIdentities operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See ListStreamingDistributions method for more information on how to use this operation. +// See ListCloudFrontOriginAccessIdentities method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a ListStreamingDistributions operation. +// // Example iterating over at most 3 pages of a ListCloudFrontOriginAccessIdentities operation. 
// pageNum := 0 -// err := client.ListStreamingDistributionsPages(params, -// func(page *ListStreamingDistributionsOutput, lastPage bool) bool { +// err := client.ListCloudFrontOriginAccessIdentitiesPages(params, +// func(page *ListCloudFrontOriginAccessIdentitiesOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) // -func (c *CloudFront) ListStreamingDistributionsPages(input *ListStreamingDistributionsInput, fn func(*ListStreamingDistributionsOutput, bool) bool) error { - return c.ListStreamingDistributionsPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesPages(input *ListCloudFrontOriginAccessIdentitiesInput, fn func(*ListCloudFrontOriginAccessIdentitiesOutput, bool) bool) error { + return c.ListCloudFrontOriginAccessIdentitiesPagesWithContext(aws.BackgroundContext(), input, fn) } -// ListStreamingDistributionsPagesWithContext same as ListStreamingDistributionsPages except +// ListCloudFrontOriginAccessIdentitiesPagesWithContext same as ListCloudFrontOriginAccessIdentitiesPages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) ListStreamingDistributionsPagesWithContext(ctx aws.Context, input *ListStreamingDistributionsInput, fn func(*ListStreamingDistributionsOutput, bool) bool, opts ...request.Option) error { +func (c *CloudFront) ListCloudFrontOriginAccessIdentitiesPagesWithContext(ctx aws.Context, input *ListCloudFrontOriginAccessIdentitiesInput, fn func(*ListCloudFrontOriginAccessIdentitiesOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ NewRequest: func() (*request.Request, error) { - var inCpy *ListStreamingDistributionsInput + var inCpy *ListCloudFrontOriginAccessIdentitiesInput if input != nil { tmp := *input inCpy = &tmp } - req, _ := c.ListStreamingDistributionsRequest(inCpy) + req, _ := c.ListCloudFrontOriginAccessIdentitiesRequest(inCpy) req.SetContext(ctx) req.ApplyOptions(opts...) return req, nil @@ -2456,1167 +2979,4427 @@ func (c *CloudFront) ListStreamingDistributionsPagesWithContext(ctx aws.Context, cont := true for p.Next() && cont { - cont = fn(p.Page().(*ListStreamingDistributionsOutput), !p.HasNextPage()) + cont = fn(p.Page().(*ListCloudFrontOriginAccessIdentitiesOutput), !p.HasNextPage()) } return p.Err() } -const opListTagsForResource = "ListTagsForResource2017_03_25" +const opListDistributions = "ListDistributions2018_06_18" -// ListTagsForResourceRequest generates a "aws/request.Request" representing the -// client's request for the ListTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListDistributionsRequest generates a "aws/request.Request" representing the +// client's request for the ListDistributions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See ListTagsForResource for more information on using the ListTagsForResource +// See ListDistributions for more information on using the ListDistributions // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListTagsForResourceRequest method. -// req, resp := client.ListTagsForResourceRequest(params) +// // Example sending a request using the ListDistributionsRequest method. +// req, resp := client.ListDistributionsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListTagsForResource -func (c *CloudFront) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req *request.Request, output *ListTagsForResourceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListDistributions +func (c *CloudFront) ListDistributionsRequest(input *ListDistributionsInput) (req *request.Request, output *ListDistributionsOutput) { op := &request.Operation{ - Name: opListTagsForResource, + Name: opListDistributions, HTTPMethod: "GET", - HTTPPath: "/2017-03-25/tagging", + HTTPPath: "/2018-06-18/distribution", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"DistributionList.NextMarker"}, + LimitToken: "MaxItems", + TruncationToken: "DistributionList.IsTruncated", + }, } if input == nil { - input = &ListTagsForResourceInput{} + input = &ListDistributionsInput{} } - output = &ListTagsForResourceOutput{} + output = &ListDistributionsOutput{} req = c.newRequest(op, input, output) return } -// ListTagsForResource API operation for Amazon CloudFront. +// ListDistributions API operation for Amazon CloudFront. // -// List tags for a CloudFront resource. +// List distributions. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation ListTagsForResource for usage and error information. +// API operation ListDistributions for usage and error information. // // Returned Error Codes: -// * ErrCodeAccessDenied "AccessDenied" -// Access denied. -// // * ErrCodeInvalidArgument "InvalidArgument" // The argument is invalid. // -// * ErrCodeInvalidTagging "InvalidTagging" -// -// * ErrCodeNoSuchResource "NoSuchResource" -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/ListTagsForResource -func (c *CloudFront) ListTagsForResource(input *ListTagsForResourceInput) (*ListTagsForResourceOutput, error) { - req, out := c.ListTagsForResourceRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListDistributions +func (c *CloudFront) ListDistributions(input *ListDistributionsInput) (*ListDistributionsOutput, error) { + req, out := c.ListDistributionsRequest(input) return out, req.Send() } -// ListTagsForResourceWithContext is the same as ListTagsForResource with the addition of +// ListDistributionsWithContext is the same as ListDistributions with the addition of // the ability to pass a context and additional request options. // -// See ListTagsForResource for details on how to use this API operation. 
+// See ListDistributions for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) ListTagsForResourceWithContext(ctx aws.Context, input *ListTagsForResourceInput, opts ...request.Option) (*ListTagsForResourceOutput, error) { - req, out := c.ListTagsForResourceRequest(input) +func (c *CloudFront) ListDistributionsWithContext(ctx aws.Context, input *ListDistributionsInput, opts ...request.Option) (*ListDistributionsOutput, error) { + req, out := c.ListDistributionsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opTagResource = "TagResource2017_03_25" +// ListDistributionsPages iterates over the pages of a ListDistributions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListDistributions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListDistributions operation. +// pageNum := 0 +// err := client.ListDistributionsPages(params, +// func(page *ListDistributionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFront) ListDistributionsPages(input *ListDistributionsInput, fn func(*ListDistributionsOutput, bool) bool) error { + return c.ListDistributionsPagesWithContext(aws.BackgroundContext(), input, fn) +} -// TagResourceRequest generates a "aws/request.Request" representing the -// client's request for the TagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListDistributionsPagesWithContext same as ListDistributionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) ListDistributionsPagesWithContext(ctx aws.Context, input *ListDistributionsInput, fn func(*ListDistributionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListDistributionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListDistributionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListDistributionsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListDistributionsByWebACLId = "ListDistributionsByWebACLId2018_06_18" + +// ListDistributionsByWebACLIdRequest generates a "aws/request.Request" representing the +// client's request for the ListDistributionsByWebACLId operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
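The `ListDistributionsPages` helper added here wraps the `Marker`/`MaxItems` paginator declared in the operation, invoking a callback per page until the callback returns false or no pages remain. A minimal sketch of driving it, counting distributions across all pages (field names as in the generated types):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	svc := cloudfront.New(session.Must(session.NewSession()))

	// Count distributions across all pages; returning true keeps iterating.
	total := 0
	err := svc.ListDistributionsPages(&cloudfront.ListDistributionsInput{},
		func(page *cloudfront.ListDistributionsOutput, lastPage bool) bool {
			total += len(page.DistributionList.Items)
			return true
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d distributions\n", total)
}
```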
// -// See TagResource for more information on using the TagResource +// See ListDistributionsByWebACLId for more information on using the ListDistributionsByWebACLId // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the TagResourceRequest method. -// req, resp := client.TagResourceRequest(params) +// // Example sending a request using the ListDistributionsByWebACLIdRequest method. +// req, resp := client.ListDistributionsByWebACLIdRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/TagResource -func (c *CloudFront) TagResourceRequest(input *TagResourceInput) (req *request.Request, output *TagResourceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListDistributionsByWebACLId +func (c *CloudFront) ListDistributionsByWebACLIdRequest(input *ListDistributionsByWebACLIdInput) (req *request.Request, output *ListDistributionsByWebACLIdOutput) { op := &request.Operation{ - Name: opTagResource, - HTTPMethod: "POST", - HTTPPath: "/2017-03-25/tagging?Operation=Tag", + Name: opListDistributionsByWebACLId, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/distributionsByWebACLId/{WebACLId}", } if input == nil { - input = &TagResourceInput{} + input = &ListDistributionsByWebACLIdInput{} } - output = &TagResourceOutput{} + output = &ListDistributionsByWebACLIdOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// TagResource API operation for Amazon CloudFront. +// ListDistributionsByWebACLId API operation for Amazon CloudFront. // -// Add tags to a CloudFront resource. +// List the distributions that are associated with a specified AWS WAF web ACL. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation TagResource for usage and error information. +// API operation ListDistributionsByWebACLId for usage and error information. // // Returned Error Codes: -// * ErrCodeAccessDenied "AccessDenied" -// Access denied. -// // * ErrCodeInvalidArgument "InvalidArgument" // The argument is invalid. // -// * ErrCodeInvalidTagging "InvalidTagging" -// -// * ErrCodeNoSuchResource "NoSuchResource" +// * ErrCodeInvalidWebACLId "InvalidWebACLId" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/TagResource -func (c *CloudFront) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { - req, out := c.TagResourceRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListDistributionsByWebACLId +func (c *CloudFront) ListDistributionsByWebACLId(input *ListDistributionsByWebACLIdInput) (*ListDistributionsByWebACLIdOutput, error) { + req, out := c.ListDistributionsByWebACLIdRequest(input) return out, req.Send() } -// TagResourceWithContext is the same as TagResource with the addition of +// ListDistributionsByWebACLIdWithContext is the same as ListDistributionsByWebACLId with the addition of // the ability to pass a context and additional request options. 
// -// See TagResource for details on how to use this API operation. +// See ListDistributionsByWebACLId for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) TagResourceWithContext(ctx aws.Context, input *TagResourceInput, opts ...request.Option) (*TagResourceOutput, error) { - req, out := c.TagResourceRequest(input) +func (c *CloudFront) ListDistributionsByWebACLIdWithContext(ctx aws.Context, input *ListDistributionsByWebACLIdInput, opts ...request.Option) (*ListDistributionsByWebACLIdOutput, error) { + req, out := c.ListDistributionsByWebACLIdRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUntagResource = "UntagResource2017_03_25" +const opListFieldLevelEncryptionConfigs = "ListFieldLevelEncryptionConfigs2018_06_18" -// UntagResourceRequest generates a "aws/request.Request" representing the -// client's request for the UntagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListFieldLevelEncryptionConfigsRequest generates a "aws/request.Request" representing the +// client's request for the ListFieldLevelEncryptionConfigs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UntagResource for more information on using the UntagResource +// See ListFieldLevelEncryptionConfigs for more information on using the ListFieldLevelEncryptionConfigs // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UntagResourceRequest method. -// req, resp := client.UntagResourceRequest(params) +// // Example sending a request using the ListFieldLevelEncryptionConfigsRequest method. 
+// req, resp := client.ListFieldLevelEncryptionConfigsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/UntagResource -func (c *CloudFront) UntagResourceRequest(input *UntagResourceInput) (req *request.Request, output *UntagResourceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListFieldLevelEncryptionConfigs +func (c *CloudFront) ListFieldLevelEncryptionConfigsRequest(input *ListFieldLevelEncryptionConfigsInput) (req *request.Request, output *ListFieldLevelEncryptionConfigsOutput) { op := &request.Operation{ - Name: opUntagResource, - HTTPMethod: "POST", - HTTPPath: "/2017-03-25/tagging?Operation=Untag", + Name: opListFieldLevelEncryptionConfigs, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/field-level-encryption", } if input == nil { - input = &UntagResourceInput{} + input = &ListFieldLevelEncryptionConfigsInput{} } - output = &UntagResourceOutput{} + output = &ListFieldLevelEncryptionConfigsOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UntagResource API operation for Amazon CloudFront. +// ListFieldLevelEncryptionConfigs API operation for Amazon CloudFront. // -// Remove tags from a CloudFront resource. +// List all field-level encryption configurations that have been created in +// CloudFront for this account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation UntagResource for usage and error information. +// API operation ListFieldLevelEncryptionConfigs for usage and error information. // // Returned Error Codes: -// * ErrCodeAccessDenied "AccessDenied" -// Access denied. -// // * ErrCodeInvalidArgument "InvalidArgument" // The argument is invalid. // -// * ErrCodeInvalidTagging "InvalidTagging" -// -// * ErrCodeNoSuchResource "NoSuchResource" -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/UntagResource -func (c *CloudFront) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { - req, out := c.UntagResourceRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListFieldLevelEncryptionConfigs +func (c *CloudFront) ListFieldLevelEncryptionConfigs(input *ListFieldLevelEncryptionConfigsInput) (*ListFieldLevelEncryptionConfigsOutput, error) { + req, out := c.ListFieldLevelEncryptionConfigsRequest(input) return out, req.Send() } -// UntagResourceWithContext is the same as UntagResource with the addition of +// ListFieldLevelEncryptionConfigsWithContext is the same as ListFieldLevelEncryptionConfigs with the addition of // the ability to pass a context and additional request options. // -// See UntagResource for details on how to use this API operation. +// See ListFieldLevelEncryptionConfigs for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
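A minimal sketch of the `WithContext` variant described above, using a standard-library context with a timeout to bound the `ListFieldLevelEncryptionConfigs` call. The client setup mirrors the earlier sketch; the 30-second deadline is an arbitrary example value.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	svc := cloudfront.New(session.Must(session.NewSession()))

	// The context bounds the whole request; if it expires, the SDK cancels
	// the in-flight HTTP request and returns an error.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	out, err := svc.ListFieldLevelEncryptionConfigsWithContext(ctx,
		&cloudfront.ListFieldLevelEncryptionConfigsInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```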
-func (c *CloudFront) UntagResourceWithContext(ctx aws.Context, input *UntagResourceInput, opts ...request.Option) (*UntagResourceOutput, error) { - req, out := c.UntagResourceRequest(input) +func (c *CloudFront) ListFieldLevelEncryptionConfigsWithContext(ctx aws.Context, input *ListFieldLevelEncryptionConfigsInput, opts ...request.Option) (*ListFieldLevelEncryptionConfigsOutput, error) { + req, out := c.ListFieldLevelEncryptionConfigsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateCloudFrontOriginAccessIdentity = "UpdateCloudFrontOriginAccessIdentity2017_03_25" +const opListFieldLevelEncryptionProfiles = "ListFieldLevelEncryptionProfiles2018_06_18" -// UpdateCloudFrontOriginAccessIdentityRequest generates a "aws/request.Request" representing the -// client's request for the UpdateCloudFrontOriginAccessIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListFieldLevelEncryptionProfilesRequest generates a "aws/request.Request" representing the +// client's request for the ListFieldLevelEncryptionProfiles operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateCloudFrontOriginAccessIdentity for more information on using the UpdateCloudFrontOriginAccessIdentity +// See ListFieldLevelEncryptionProfiles for more information on using the ListFieldLevelEncryptionProfiles // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateCloudFrontOriginAccessIdentityRequest method. -// req, resp := client.UpdateCloudFrontOriginAccessIdentityRequest(params) +// // Example sending a request using the ListFieldLevelEncryptionProfilesRequest method. 
+// req, resp := client.ListFieldLevelEncryptionProfilesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/UpdateCloudFrontOriginAccessIdentity -func (c *CloudFront) UpdateCloudFrontOriginAccessIdentityRequest(input *UpdateCloudFrontOriginAccessIdentityInput) (req *request.Request, output *UpdateCloudFrontOriginAccessIdentityOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListFieldLevelEncryptionProfiles +func (c *CloudFront) ListFieldLevelEncryptionProfilesRequest(input *ListFieldLevelEncryptionProfilesInput) (req *request.Request, output *ListFieldLevelEncryptionProfilesOutput) { op := &request.Operation{ - Name: opUpdateCloudFrontOriginAccessIdentity, - HTTPMethod: "PUT", - HTTPPath: "/2017-03-25/origin-access-identity/cloudfront/{Id}/config", + Name: opListFieldLevelEncryptionProfiles, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/field-level-encryption-profile", } if input == nil { - input = &UpdateCloudFrontOriginAccessIdentityInput{} + input = &ListFieldLevelEncryptionProfilesInput{} } - output = &UpdateCloudFrontOriginAccessIdentityOutput{} + output = &ListFieldLevelEncryptionProfilesOutput{} req = c.newRequest(op, input, output) return } -// UpdateCloudFrontOriginAccessIdentity API operation for Amazon CloudFront. +// ListFieldLevelEncryptionProfiles API operation for Amazon CloudFront. // -// Update an origin access identity. +// Request a list of field-level encryption profiles that have been created +// in CloudFront for this account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation UpdateCloudFrontOriginAccessIdentity for usage and error information. +// API operation ListFieldLevelEncryptionProfiles for usage and error information. // // Returned Error Codes: -// * ErrCodeAccessDenied "AccessDenied" -// Access denied. -// -// * ErrCodeIllegalUpdate "IllegalUpdate" -// Origin and CallerReference cannot be updated. -// -// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" -// The If-Match version is missing or not valid for the distribution. -// -// * ErrCodeMissingBody "MissingBody" -// This operation requires a body. Ensure that the body is present and the Content-Type -// header is set. -// -// * ErrCodeNoSuchCloudFrontOriginAccessIdentity "NoSuchCloudFrontOriginAccessIdentity" -// The specified origin access identity does not exist. -// -// * ErrCodePreconditionFailed "PreconditionFailed" -// The precondition given in one or more of the request-header fields evaluated -// to false. -// // * ErrCodeInvalidArgument "InvalidArgument" // The argument is invalid. // -// * ErrCodeInconsistentQuantities "InconsistentQuantities" -// The value of Quantity and the size of Items don't match. 
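The error-handling pattern these comments keep referring to, asserting the returned error to `awserr.Error` and switching on `Code()`, looks roughly like the sketch below. It uses `ListFieldLevelEncryptionProfiles` and the `ErrCodeInvalidArgument` constant mentioned above, with the client built as in the earlier sketches.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	svc := cloudfront.New(session.Must(session.NewSession()))

	out, err := svc.ListFieldLevelEncryptionProfiles(&cloudfront.ListFieldLevelEncryptionProfilesInput{})
	if err != nil {
		// Service and SDK errors satisfy awserr.Error; Code and Message
		// carry the structured detail the comments above describe.
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case cloudfront.ErrCodeInvalidArgument:
				log.Fatalf("invalid argument: %s", aerr.Message())
			default:
				log.Fatalf("%s: %s", aerr.Code(), aerr.Message())
			}
		}
		log.Fatal(err)
	}
	fmt.Println(out)
}
```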
-// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/UpdateCloudFrontOriginAccessIdentity -func (c *CloudFront) UpdateCloudFrontOriginAccessIdentity(input *UpdateCloudFrontOriginAccessIdentityInput) (*UpdateCloudFrontOriginAccessIdentityOutput, error) { - req, out := c.UpdateCloudFrontOriginAccessIdentityRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListFieldLevelEncryptionProfiles +func (c *CloudFront) ListFieldLevelEncryptionProfiles(input *ListFieldLevelEncryptionProfilesInput) (*ListFieldLevelEncryptionProfilesOutput, error) { + req, out := c.ListFieldLevelEncryptionProfilesRequest(input) return out, req.Send() } -// UpdateCloudFrontOriginAccessIdentityWithContext is the same as UpdateCloudFrontOriginAccessIdentity with the addition of +// ListFieldLevelEncryptionProfilesWithContext is the same as ListFieldLevelEncryptionProfiles with the addition of // the ability to pass a context and additional request options. // -// See UpdateCloudFrontOriginAccessIdentity for details on how to use this API operation. +// See ListFieldLevelEncryptionProfiles for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) UpdateCloudFrontOriginAccessIdentityWithContext(ctx aws.Context, input *UpdateCloudFrontOriginAccessIdentityInput, opts ...request.Option) (*UpdateCloudFrontOriginAccessIdentityOutput, error) { - req, out := c.UpdateCloudFrontOriginAccessIdentityRequest(input) +func (c *CloudFront) ListFieldLevelEncryptionProfilesWithContext(ctx aws.Context, input *ListFieldLevelEncryptionProfilesInput, opts ...request.Option) (*ListFieldLevelEncryptionProfilesOutput, error) { + req, out := c.ListFieldLevelEncryptionProfilesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateDistribution = "UpdateDistribution2017_03_25" +const opListInvalidations = "ListInvalidations2018_06_18" -// UpdateDistributionRequest generates a "aws/request.Request" representing the -// client's request for the UpdateDistribution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListInvalidationsRequest generates a "aws/request.Request" representing the +// client's request for the ListInvalidations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateDistribution for more information on using the UpdateDistribution +// See ListInvalidations for more information on using the ListInvalidations // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateDistributionRequest method. -// req, resp := client.UpdateDistributionRequest(params) +// // Example sending a request using the ListInvalidationsRequest method. 
+// req, resp := client.ListInvalidationsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/UpdateDistribution -func (c *CloudFront) UpdateDistributionRequest(input *UpdateDistributionInput) (req *request.Request, output *UpdateDistributionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListInvalidations +func (c *CloudFront) ListInvalidationsRequest(input *ListInvalidationsInput) (req *request.Request, output *ListInvalidationsOutput) { op := &request.Operation{ - Name: opUpdateDistribution, - HTTPMethod: "PUT", - HTTPPath: "/2017-03-25/distribution/{Id}/config", + Name: opListInvalidations, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/distribution/{DistributionId}/invalidation", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"InvalidationList.NextMarker"}, + LimitToken: "MaxItems", + TruncationToken: "InvalidationList.IsTruncated", + }, } if input == nil { - input = &UpdateDistributionInput{} + input = &ListInvalidationsInput{} } - output = &UpdateDistributionOutput{} + output = &ListInvalidationsOutput{} req = c.newRequest(op, input, output) return } -// UpdateDistribution API operation for Amazon CloudFront. -// -// Updates the configuration for a web distribution. Perform the following steps. -// -// For information about updating a distribution using the CloudFront console, -// see Creating or Updating a Web Distribution Using the CloudFront Console -// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating-console.html) -// in the Amazon CloudFront Developer Guide. -// -// To update a web distribution using the CloudFront API -// -// Submit a GetDistributionConfig request to get the current configuration and -// an Etag header for the distribution. -// -// If you update the distribution again, you need to get a new Etag header. -// -// Update the XML document that was returned in the response to your GetDistributionConfig -// request to include the desired changes. You can't change the value of CallerReference. -// If you try to change this value, CloudFront returns an IllegalUpdate error. -// -// The new configuration replaces the existing configuration; the values that -// you specify in an UpdateDistribution request are not merged into the existing -// configuration. When you add, delete, or replace values in an element that -// allows multiple values (for example, CNAME), you must specify all of the -// values that you want to appear in the updated distribution. In addition, -// you must update the corresponding Quantity element. -// -// Submit an UpdateDistribution request to update the configuration for your -// distribution: -// -// In the request body, include the XML document that you updated in Step 2. -// The request body must include an XML document with a DistributionConfig element. -// -// Set the value of the HTTP If-Match header to the value of the ETag header -// that CloudFront returned when you submitted the GetDistributionConfig request -// in Step 1. -// -// Review the response to the UpdateDistribution request to confirm that the -// configuration was successfully updated. -// -// Optional: Submit a GetDistribution request to confirm that your changes have -// propagated. When propagation is complete, the value of Status is Deployed. +// ListInvalidations API operation for Amazon CloudFront. 
// -// Beginning with the 2012-05-05 version of the CloudFront API, we made substantial -// changes to the format of the XML document that you include in the request -// body when you create or update a distribution. With previous versions of -// the API, we discovered that it was too easy to accidentally delete one or -// more values for an element that accepts multiple values, for example, CNAMEs -// and trusted signers. Our changes for the 2012-05-05 release are intended -// to prevent these accidental deletions and to notify you when there's a mismatch -// between the number of values you say you're specifying in the Quantity element -// and the number of values you're actually specifying. +// Lists invalidation batches. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation UpdateDistribution for usage and error information. +// API operation ListInvalidations for usage and error information. // // Returned Error Codes: +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeNoSuchDistribution "NoSuchDistribution" +// The specified distribution does not exist. +// // * ErrCodeAccessDenied "AccessDenied" // Access denied. // -// * ErrCodeCNAMEAlreadyExists "CNAMEAlreadyExists" +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListInvalidations +func (c *CloudFront) ListInvalidations(input *ListInvalidationsInput) (*ListInvalidationsOutput, error) { + req, out := c.ListInvalidationsRequest(input) + return out, req.Send() +} + +// ListInvalidationsWithContext is the same as ListInvalidations with the addition of +// the ability to pass a context and additional request options. // -// * ErrCodeIllegalUpdate "IllegalUpdate" -// Origin and CallerReference cannot be updated. +// See ListInvalidations for details on how to use this API operation. // -// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" -// The If-Match version is missing or not valid for the distribution. -// -// * ErrCodeMissingBody "MissingBody" -// This operation requires a body. Ensure that the body is present and the Content-Type -// header is set. -// -// * ErrCodeNoSuchDistribution "NoSuchDistribution" -// The specified distribution does not exist. -// -// * ErrCodePreconditionFailed "PreconditionFailed" -// The precondition given in one or more of the request-header fields evaluated -// to false. -// -// * ErrCodeTooManyDistributionCNAMEs "TooManyDistributionCNAMEs" -// Your request contains more CNAMEs than are allowed per distribution. -// -// * ErrCodeInvalidDefaultRootObject "InvalidDefaultRootObject" -// The default root object file name is too big or contains an invalid character. -// -// * ErrCodeInvalidRelativePath "InvalidRelativePath" -// The relative path is too big, is not URL-encoded, or does not begin with -// a slash (/). -// -// * ErrCodeInvalidErrorCode "InvalidErrorCode" -// -// * ErrCodeInvalidResponseCode "InvalidResponseCode" -// -// * ErrCodeInvalidArgument "InvalidArgument" -// The argument is invalid. -// -// * ErrCodeInvalidOriginAccessIdentity "InvalidOriginAccessIdentity" -// The origin access identity is not valid or doesn't exist. -// -// * ErrCodeTooManyTrustedSigners "TooManyTrustedSigners" -// Your request contains more trusted signers than are allowed per distribution. 
-// -// * ErrCodeTrustedSignerDoesNotExist "TrustedSignerDoesNotExist" -// One or more of your trusted signers don't exist. -// -// * ErrCodeInvalidViewerCertificate "InvalidViewerCertificate" -// -// * ErrCodeInvalidMinimumProtocolVersion "InvalidMinimumProtocolVersion" -// -// * ErrCodeInvalidRequiredProtocol "InvalidRequiredProtocol" -// This operation requires the HTTPS protocol. Ensure that you specify the HTTPS -// protocol in your request, or omit the RequiredProtocols element from your -// distribution configuration. -// -// * ErrCodeNoSuchOrigin "NoSuchOrigin" -// No origin exists with the specified Origin Id. -// -// * ErrCodeTooManyOrigins "TooManyOrigins" -// You cannot create more origins for the distribution. -// -// * ErrCodeTooManyCacheBehaviors "TooManyCacheBehaviors" -// You cannot create more cache behaviors for the distribution. -// -// * ErrCodeTooManyCookieNamesInWhiteList "TooManyCookieNamesInWhiteList" -// Your request contains more cookie names in the whitelist than are allowed -// per cache behavior. -// -// * ErrCodeInvalidForwardCookies "InvalidForwardCookies" -// Your request contains forward cookies option which doesn't match with the -// expectation for the whitelisted list of cookie names. Either list of cookie -// names has been specified when not allowed or list of cookie names is missing -// when expected. +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) ListInvalidationsWithContext(ctx aws.Context, input *ListInvalidationsInput, opts ...request.Option) (*ListInvalidationsOutput, error) { + req, out := c.ListInvalidationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListInvalidationsPages iterates over the pages of a ListInvalidations operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. // -// * ErrCodeTooManyHeadersInForwardedValues "TooManyHeadersInForwardedValues" +// See ListInvalidations method for more information on how to use this operation. // -// * ErrCodeInvalidHeadersForS3Origin "InvalidHeadersForS3Origin" +// Note: This operation can generate multiple requests to a service. // -// * ErrCodeInconsistentQuantities "InconsistentQuantities" -// The value of Quantity and the size of Items don't match. +// // Example iterating over at most 3 pages of a ListInvalidations operation. +// pageNum := 0 +// err := client.ListInvalidationsPages(params, +// func(page *ListInvalidationsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) // -// * ErrCodeTooManyCertificates "TooManyCertificates" -// You cannot create anymore custom SSL/TLS certificates. +func (c *CloudFront) ListInvalidationsPages(input *ListInvalidationsInput, fn func(*ListInvalidationsOutput, bool) bool) error { + return c.ListInvalidationsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListInvalidationsPagesWithContext same as ListInvalidationsPages except +// it takes a Context and allows setting request options on the pages. // -// * ErrCodeInvalidLocationCode "InvalidLocationCode" +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) ListInvalidationsPagesWithContext(ctx aws.Context, input *ListInvalidationsInput, fn func(*ListInvalidationsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListInvalidationsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListInvalidationsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListInvalidationsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListPublicKeys = "ListPublicKeys2018_06_18" + +// ListPublicKeysRequest generates a "aws/request.Request" representing the +// client's request for the ListPublicKeys operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // -// * ErrCodeInvalidGeoRestrictionParameter "InvalidGeoRestrictionParameter" +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. // -// * ErrCodeInvalidTTLOrder "InvalidTTLOrder" +// See ListPublicKeys for more information on using the ListPublicKeys +// API call, and error handling. // -// * ErrCodeInvalidWebACLId "InvalidWebACLId" +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. // -// * ErrCodeTooManyOriginCustomHeaders "TooManyOriginCustomHeaders" // -// * ErrCodeTooManyQueryStringParameters "TooManyQueryStringParameters" +// // Example sending a request using the ListPublicKeysRequest method. +// req, resp := client.ListPublicKeysRequest(params) // -// * ErrCodeInvalidQueryStringParameters "InvalidQueryStringParameters" +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } // -// * ErrCodeTooManyDistributionsWithLambdaAssociations "TooManyDistributionsWithLambdaAssociations" -// Processing your request would cause the maximum number of distributions with -// Lambda function associations per owner to be exceeded. +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListPublicKeys +func (c *CloudFront) ListPublicKeysRequest(input *ListPublicKeysInput) (req *request.Request, output *ListPublicKeysOutput) { + op := &request.Operation{ + Name: opListPublicKeys, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/public-key", + } + + if input == nil { + input = &ListPublicKeysInput{} + } + + output = &ListPublicKeysOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListPublicKeys API operation for Amazon CloudFront. // -// * ErrCodeTooManyLambdaFunctionAssociations "TooManyLambdaFunctionAssociations" -// Your request contains more Lambda function associations than are allowed -// per distribution. +// List all public keys that have been added to CloudFront for this account. // -// * ErrCodeInvalidLambdaFunctionAssociation "InvalidLambdaFunctionAssociation" -// The specified Lambda function association is invalid. +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
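The paginator wired up above (`Marker`/`InvalidationList.NextMarker`) can be driven as in the sketch below. The distribution ID is a placeholder; returning `true` from the callback keeps iterating until the last page, and returning `false` stops early.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	svc := cloudfront.New(session.Must(session.NewSession()))

	input := &cloudfront.ListInvalidationsInput{
		DistributionId: aws.String("EDFDVBD6EXAMPLE"), // placeholder distribution ID
	}

	// The SDK follows InvalidationList.NextMarker between pages.
	err := svc.ListInvalidationsPages(input,
		func(page *cloudfront.ListInvalidationsOutput, lastPage bool) bool {
			for _, inv := range page.InvalidationList.Items {
				fmt.Println(aws.StringValue(inv.Id), aws.StringValue(inv.Status))
			}
			return true // keep paging
		})
	if err != nil {
		log.Fatal(err)
	}
}
```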
// -// * ErrCodeInvalidOriginReadTimeout "InvalidOriginReadTimeout" +// See the AWS API reference guide for Amazon CloudFront's +// API operation ListPublicKeys for usage and error information. // -// * ErrCodeInvalidOriginKeepaliveTimeout "InvalidOriginKeepaliveTimeout" +// Returned Error Codes: +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/UpdateDistribution -func (c *CloudFront) UpdateDistribution(input *UpdateDistributionInput) (*UpdateDistributionOutput, error) { - req, out := c.UpdateDistributionRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListPublicKeys +func (c *CloudFront) ListPublicKeys(input *ListPublicKeysInput) (*ListPublicKeysOutput, error) { + req, out := c.ListPublicKeysRequest(input) return out, req.Send() } -// UpdateDistributionWithContext is the same as UpdateDistribution with the addition of +// ListPublicKeysWithContext is the same as ListPublicKeys with the addition of // the ability to pass a context and additional request options. // -// See UpdateDistribution for details on how to use this API operation. +// See ListPublicKeys for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) UpdateDistributionWithContext(ctx aws.Context, input *UpdateDistributionInput, opts ...request.Option) (*UpdateDistributionOutput, error) { - req, out := c.UpdateDistributionRequest(input) +func (c *CloudFront) ListPublicKeysWithContext(ctx aws.Context, input *ListPublicKeysInput, opts ...request.Option) (*ListPublicKeysOutput, error) { + req, out := c.ListPublicKeysRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateStreamingDistribution = "UpdateStreamingDistribution2017_03_25" +const opListStreamingDistributions = "ListStreamingDistributions2018_06_18" -// UpdateStreamingDistributionRequest generates a "aws/request.Request" representing the -// client's request for the UpdateStreamingDistribution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListStreamingDistributionsRequest generates a "aws/request.Request" representing the +// client's request for the ListStreamingDistributions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateStreamingDistribution for more information on using the UpdateStreamingDistribution +// See ListStreamingDistributions for more information on using the ListStreamingDistributions // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateStreamingDistributionRequest method. -// req, resp := client.UpdateStreamingDistributionRequest(params) +// // Example sending a request using the ListStreamingDistributionsRequest method. 
+// req, resp := client.ListStreamingDistributionsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/UpdateStreamingDistribution -func (c *CloudFront) UpdateStreamingDistributionRequest(input *UpdateStreamingDistributionInput) (req *request.Request, output *UpdateStreamingDistributionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListStreamingDistributions +func (c *CloudFront) ListStreamingDistributionsRequest(input *ListStreamingDistributionsInput) (req *request.Request, output *ListStreamingDistributionsOutput) { op := &request.Operation{ - Name: opUpdateStreamingDistribution, - HTTPMethod: "PUT", - HTTPPath: "/2017-03-25/streaming-distribution/{Id}/config", + Name: opListStreamingDistributions, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/streaming-distribution", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"StreamingDistributionList.NextMarker"}, + LimitToken: "MaxItems", + TruncationToken: "StreamingDistributionList.IsTruncated", + }, } if input == nil { - input = &UpdateStreamingDistributionInput{} + input = &ListStreamingDistributionsInput{} } - output = &UpdateStreamingDistributionOutput{} + output = &ListStreamingDistributionsOutput{} req = c.newRequest(op, input, output) return } -// UpdateStreamingDistribution API operation for Amazon CloudFront. +// ListStreamingDistributions API operation for Amazon CloudFront. // -// Update a streaming distribution. +// List streaming distributions. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon CloudFront's -// API operation UpdateStreamingDistribution for usage and error information. +// API operation ListStreamingDistributions for usage and error information. // // Returned Error Codes: -// * ErrCodeAccessDenied "AccessDenied" -// Access denied. -// -// * ErrCodeCNAMEAlreadyExists "CNAMEAlreadyExists" -// -// * ErrCodeIllegalUpdate "IllegalUpdate" -// Origin and CallerReference cannot be updated. -// -// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" -// The If-Match version is missing or not valid for the distribution. -// -// * ErrCodeMissingBody "MissingBody" -// This operation requires a body. Ensure that the body is present and the Content-Type -// header is set. -// -// * ErrCodeNoSuchStreamingDistribution "NoSuchStreamingDistribution" -// The specified streaming distribution does not exist. -// -// * ErrCodePreconditionFailed "PreconditionFailed" -// The precondition given in one or more of the request-header fields evaluated -// to false. -// -// * ErrCodeTooManyStreamingDistributionCNAMEs "TooManyStreamingDistributionCNAMEs" -// // * ErrCodeInvalidArgument "InvalidArgument" // The argument is invalid. // -// * ErrCodeInvalidOriginAccessIdentity "InvalidOriginAccessIdentity" -// The origin access identity is not valid or doesn't exist. -// -// * ErrCodeTooManyTrustedSigners "TooManyTrustedSigners" -// Your request contains more trusted signers than are allowed per distribution. -// -// * ErrCodeTrustedSignerDoesNotExist "TrustedSignerDoesNotExist" -// One or more of your trusted signers don't exist. -// -// * ErrCodeInconsistentQuantities "InconsistentQuantities" -// The value of Quantity and the size of Items don't match. 
-// -// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25/UpdateStreamingDistribution -func (c *CloudFront) UpdateStreamingDistribution(input *UpdateStreamingDistributionInput) (*UpdateStreamingDistributionOutput, error) { - req, out := c.UpdateStreamingDistributionRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListStreamingDistributions +func (c *CloudFront) ListStreamingDistributions(input *ListStreamingDistributionsInput) (*ListStreamingDistributionsOutput, error) { + req, out := c.ListStreamingDistributionsRequest(input) return out, req.Send() } -// UpdateStreamingDistributionWithContext is the same as UpdateStreamingDistribution with the addition of +// ListStreamingDistributionsWithContext is the same as ListStreamingDistributions with the addition of // the ability to pass a context and additional request options. // -// See UpdateStreamingDistribution for details on how to use this API operation. +// See ListStreamingDistributions for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CloudFront) UpdateStreamingDistributionWithContext(ctx aws.Context, input *UpdateStreamingDistributionInput, opts ...request.Option) (*UpdateStreamingDistributionOutput, error) { - req, out := c.UpdateStreamingDistributionRequest(input) +func (c *CloudFront) ListStreamingDistributionsWithContext(ctx aws.Context, input *ListStreamingDistributionsInput, opts ...request.Option) (*ListStreamingDistributionsOutput, error) { + req, out := c.ListStreamingDistributionsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// A complex type that lists the AWS accounts, if any, that you included in -// the TrustedSigners complex type for this distribution. These are the accounts -// that you want to allow to create signed URLs for private content. +// ListStreamingDistributionsPages iterates over the pages of a ListStreamingDistributions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. // -// The Signer complex type lists the AWS account number of the trusted signer -// or self if the signer is the AWS account that created the distribution. The -// Signer element also includes the IDs of any active CloudFront key pairs that -// are associated with the trusted signer's AWS account. If no KeyPairId element -// appears for a Signer, that signer can't create signed URLs. +// See ListStreamingDistributions method for more information on how to use this operation. // -// For more information, see Serving Private Content through CloudFront (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) -// in the Amazon CloudFront Developer Guide. -type ActiveTrustedSigners struct { - _ struct{} `type:"structure"` - - // Enabled is true if any of the AWS accounts listed in the TrustedSigners complex - // type for this RTMP distribution have active CloudFront key pairs. If not, - // Enabled is false. - // - // For more information, see ActiveTrustedSigners. 
- // - // Enabled is a required field - Enabled *bool `type:"boolean" required:"true"` - - // A complex type that contains one Signer complex type for each trusted signer - // that is specified in the TrustedSigners complex type. - // - // For more information, see ActiveTrustedSigners. - Items []*Signer `locationNameList:"Signer" type:"list"` - - // A complex type that contains one Signer complex type for each trusted signer - // specified in the TrustedSigners complex type. - // - // For more information, see ActiveTrustedSigners. - // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` -} - -// String returns the string representation -func (s ActiveTrustedSigners) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s ActiveTrustedSigners) GoString() string { - return s.String() +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListStreamingDistributions operation. +// pageNum := 0 +// err := client.ListStreamingDistributionsPages(params, +// func(page *ListStreamingDistributionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudFront) ListStreamingDistributionsPages(input *ListStreamingDistributionsInput, fn func(*ListStreamingDistributionsOutput, bool) bool) error { + return c.ListStreamingDistributionsPagesWithContext(aws.BackgroundContext(), input, fn) } -// SetEnabled sets the Enabled field's value. -func (s *ActiveTrustedSigners) SetEnabled(v bool) *ActiveTrustedSigners { - s.Enabled = &v - return s -} +// ListStreamingDistributionsPagesWithContext same as ListStreamingDistributionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) ListStreamingDistributionsPagesWithContext(ctx aws.Context, input *ListStreamingDistributionsInput, fn func(*ListStreamingDistributionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListStreamingDistributionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListStreamingDistributionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } -// SetItems sets the Items field's value. -func (s *ActiveTrustedSigners) SetItems(v []*Signer) *ActiveTrustedSigners { - s.Items = v - return s + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListStreamingDistributionsOutput), !p.HasNextPage()) + } + return p.Err() } -// SetQuantity sets the Quantity field's value. -func (s *ActiveTrustedSigners) SetQuantity(v int64) *ActiveTrustedSigners { - s.Quantity = &v - return s -} +const opListTagsForResource = "ListTagsForResource2018_06_18" -// A complex type that contains information about CNAMEs (alternate domain names), -// if any, for this distribution. -type Aliases struct { - _ struct{} `type:"structure"` +// ListTagsForResourceRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsForResource operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTagsForResource for more information on using the ListTagsForResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsForResourceRequest method. +// req, resp := client.ListTagsForResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListTagsForResource +func (c *CloudFront) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req *request.Request, output *ListTagsForResourceOutput) { + op := &request.Operation{ + Name: opListTagsForResource, + HTTPMethod: "GET", + HTTPPath: "/2018-06-18/tagging", + } - // A complex type that contains the CNAME aliases, if any, that you want to - // associate with this distribution. - Items []*string `locationNameList:"CNAME" type:"list"` + if input == nil { + input = &ListTagsForResourceInput{} + } - // The number of CNAME aliases, if any, that you want to associate with this - // distribution. - // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + output = &ListTagsForResourceOutput{} + req = c.newRequest(op, input, output) + return } -// String returns the string representation -func (s Aliases) String() string { - return awsutil.Prettify(s) +// ListTagsForResource API operation for Amazon CloudFront. +// +// List tags for a CloudFront resource. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation ListTagsForResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInvalidTagging "InvalidTagging" +// +// * ErrCodeNoSuchResource "NoSuchResource" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/ListTagsForResource +func (c *CloudFront) ListTagsForResource(input *ListTagsForResourceInput) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) + return out, req.Send() } -// GoString returns the string representation -func (s Aliases) GoString() string { - return s.String() +// ListTagsForResourceWithContext is the same as ListTagsForResource with the addition of +// the ability to pass a context and additional request options. +// +// See ListTagsForResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *CloudFront) ListTagsForResourceWithContext(ctx aws.Context, input *ListTagsForResourceInput, opts ...request.Option) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *Aliases) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Aliases"} - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } +const opTagResource = "TagResource2018_06_18" - if invalidParams.Len() > 0 { - return invalidParams +// TagResourceRequest generates a "aws/request.Request" representing the +// client's request for the TagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TagResource for more information on using the TagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagResourceRequest method. +// req, resp := client.TagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/TagResource +func (c *CloudFront) TagResourceRequest(input *TagResourceInput) (req *request.Request, output *TagResourceOutput) { + op := &request.Operation{ + Name: opTagResource, + HTTPMethod: "POST", + HTTPPath: "/2018-06-18/tagging?Operation=Tag", } - return nil -} -// SetItems sets the Items field's value. -func (s *Aliases) SetItems(v []*string) *Aliases { - s.Items = v - return s -} + if input == nil { + input = &TagResourceInput{} + } -// SetQuantity sets the Quantity field's value. -func (s *Aliases) SetQuantity(v int64) *Aliases { - s.Quantity = &v - return s + output = &TagResourceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return } -// A complex type that controls which HTTP methods CloudFront processes and -// forwards to your Amazon S3 bucket or your custom origin. There are three -// choices: -// -// * CloudFront forwards only GET and HEAD requests. +// TagResource API operation for Amazon CloudFront. // -// * CloudFront forwards only GET, HEAD, and OPTIONS requests. +// Add tags to a CloudFront resource. // -// * CloudFront forwards GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE -// requests. +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. // -// If you pick the third choice, you may need to restrict access to your Amazon -// S3 bucket or to your custom origin so users can't perform operations that -// you don't want them to. For example, you might not want users to have permissions -// to delete objects from your origin. 
-type AllowedMethods struct { - _ struct{} `type:"structure"` - - // A complex type that controls whether CloudFront caches the response to requests - // using the specified HTTP methods. There are two choices: - // - // * CloudFront caches responses to GET and HEAD requests. - // - // * CloudFront caches responses to GET, HEAD, and OPTIONS requests. - // - // If you pick the second choice for your Amazon S3 Origin, you may need to - // forward Access-Control-Request-Method, Access-Control-Request-Headers, and - // Origin headers for the responses to be cached correctly. - CachedMethods *CachedMethods `type:"structure"` - - // A complex type that contains the HTTP methods that you want CloudFront to - // process and forward to your origin. - // - // Items is a required field - Items []*string `locationNameList:"Method" type:"list" required:"true"` - - // The number of HTTP methods that you want CloudFront to forward to your origin. - // Valid values are 2 (for GET and HEAD requests), 3 (for GET, HEAD, and OPTIONS - // requests) and 7 (for GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE requests). - // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` +// See the AWS API reference guide for Amazon CloudFront's +// API operation TagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInvalidTagging "InvalidTagging" +// +// * ErrCodeNoSuchResource "NoSuchResource" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/TagResource +func (c *CloudFront) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + return out, req.Send() } -// String returns the string representation -func (s AllowedMethods) String() string { - return awsutil.Prettify(s) +// TagResourceWithContext is the same as TagResource with the addition of +// the ability to pass a context and additional request options. +// +// See TagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) TagResourceWithContext(ctx aws.Context, input *TagResourceInput, opts ...request.Option) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() } -// GoString returns the string representation -func (s AllowedMethods) GoString() string { - return s.String() -} +const opUntagResource = "UntagResource2018_06_18" -// Validate inspects the fields of the type to determine if they are valid. -func (s *AllowedMethods) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AllowedMethods"} - if s.Items == nil { - invalidParams.Add(request.NewErrParamRequired("Items")) - } - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } - if s.CachedMethods != nil { - if err := s.CachedMethods.Validate(); err != nil { - invalidParams.AddNested("CachedMethods", err.(request.ErrInvalidParams)) - } +// UntagResourceRequest generates a "aws/request.Request" representing the +// client's request for the UntagResource operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UntagResource for more information on using the UntagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UntagResourceRequest method. +// req, resp := client.UntagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UntagResource +func (c *CloudFront) UntagResourceRequest(input *UntagResourceInput) (req *request.Request, output *UntagResourceOutput) { + op := &request.Operation{ + Name: opUntagResource, + HTTPMethod: "POST", + HTTPPath: "/2018-06-18/tagging?Operation=Untag", } - if invalidParams.Len() > 0 { - return invalidParams + if input == nil { + input = &UntagResourceInput{} } - return nil -} -// SetCachedMethods sets the CachedMethods field's value. -func (s *AllowedMethods) SetCachedMethods(v *CachedMethods) *AllowedMethods { - s.CachedMethods = v - return s + output = &UntagResourceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return } -// SetItems sets the Items field's value. -func (s *AllowedMethods) SetItems(v []*string) *AllowedMethods { - s.Items = v - return s +// UntagResource API operation for Amazon CloudFront. +// +// Remove tags from a CloudFront resource. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation UntagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInvalidTagging "InvalidTagging" +// +// * ErrCodeNoSuchResource "NoSuchResource" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UntagResource +func (c *CloudFront) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + return out, req.Send() } -// SetQuantity sets the Quantity field's value. -func (s *AllowedMethods) SetQuantity(v int64) *AllowedMethods { - s.Quantity = &v - return s +// UntagResourceWithContext is the same as UntagResource with the addition of +// the ability to pass a context and additional request options. +// +// See UntagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *CloudFront) UntagResourceWithContext(ctx aws.Context, input *UntagResourceInput, opts ...request.Option) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() } -// A complex type that describes how CloudFront processes requests. +const opUpdateCloudFrontOriginAccessIdentity = "UpdateCloudFrontOriginAccessIdentity2018_06_18" + +// UpdateCloudFrontOriginAccessIdentityRequest generates a "aws/request.Request" representing the +// client's request for the UpdateCloudFrontOriginAccessIdentity operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // -// You must create at least as many cache behaviors (including the default cache -// behavior) as you have origins if you want CloudFront to distribute objects -// from all of the origins. Each cache behavior specifies the one origin from -// which you want CloudFront to get objects. If you have two origins and only -// the default cache behavior, the default cache behavior will cause CloudFront -// to get objects from one of the origins, but the other origin is never used. +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. // -// For the current limit on the number of cache behaviors that you can add to -// a distribution, see Amazon CloudFront Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_cloudfront) -// in the AWS General Reference. +// See UpdateCloudFrontOriginAccessIdentity for more information on using the UpdateCloudFrontOriginAccessIdentity +// API call, and error handling. // -// If you don't want to specify any cache behaviors, include only an empty CacheBehaviors -// element. Don't include an empty CacheBehavior element, or CloudFront returns -// a MalformedXML error. +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. // -// To delete all cache behaviors in an existing distribution, update the distribution -// configuration and include only an empty CacheBehaviors element. // -// To add, change, or remove one or more cache behaviors, update the distribution -// configuration and specify all of the cache behaviors that you want to include -// in the updated distribution. +// // Example sending a request using the UpdateCloudFrontOriginAccessIdentityRequest method. +// req, resp := client.UpdateCloudFrontOriginAccessIdentityRequest(params) // -// For more information about cache behaviors, see Cache Behaviors (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesCacheBehavior) -// in the Amazon CloudFront Developer Guide. -type CacheBehavior struct { - _ struct{} `type:"structure"` - - // A complex type that controls which HTTP methods CloudFront processes and - // forwards to your Amazon S3 bucket or your custom origin. There are three - // choices: - // - // * CloudFront forwards only GET and HEAD requests. - // - // * CloudFront forwards only GET, HEAD, and OPTIONS requests. - // - // * CloudFront forwards GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE - // requests. 
- // - // If you pick the third choice, you may need to restrict access to your Amazon - // S3 bucket or to your custom origin so users can't perform operations that - // you don't want them to. For example, you might not want users to have permissions - // to delete objects from your origin. - AllowedMethods *AllowedMethods `type:"structure"` +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateCloudFrontOriginAccessIdentity +func (c *CloudFront) UpdateCloudFrontOriginAccessIdentityRequest(input *UpdateCloudFrontOriginAccessIdentityInput) (req *request.Request, output *UpdateCloudFrontOriginAccessIdentityOutput) { + op := &request.Operation{ + Name: opUpdateCloudFrontOriginAccessIdentity, + HTTPMethod: "PUT", + HTTPPath: "/2018-06-18/origin-access-identity/cloudfront/{Id}/config", + } - // Whether you want CloudFront to automatically compress certain files for this - // cache behavior. If so, specify true; if not, specify false. For more information, - // see Serving Compressed Files (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html) - // in the Amazon CloudFront Developer Guide. - Compress *bool `type:"boolean"` + if input == nil { + input = &UpdateCloudFrontOriginAccessIdentityInput{} + } - // The default amount of time that you want objects to stay in CloudFront caches - // before CloudFront forwards another request to your origin to determine whether - // the object has been updated. The value that you specify applies only when - // your origin does not add HTTP headers such as Cache-Control max-age, Cache-Control - // s-maxage, and Expires to objects. For more information, see Specifying How - // Long Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) - // in the Amazon CloudFront Developer Guide. - DefaultTTL *int64 `type:"long"` + output = &UpdateCloudFrontOriginAccessIdentityOutput{} + req = c.newRequest(op, input, output) + return +} - // A complex type that specifies how CloudFront handles query strings and cookies. - // - // ForwardedValues is a required field +// UpdateCloudFrontOriginAccessIdentity API operation for Amazon CloudFront. +// +// Update an origin access identity. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation UpdateCloudFrontOriginAccessIdentity for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeIllegalUpdate "IllegalUpdate" +// Origin and CallerReference cannot be updated. +// +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. +// +// * ErrCodeMissingBody "MissingBody" +// This operation requires a body. Ensure that the body is present and the Content-Type +// header is set. +// +// * ErrCodeNoSuchCloudFrontOriginAccessIdentity "NoSuchCloudFrontOriginAccessIdentity" +// The specified origin access identity does not exist. +// +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. 
+// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInconsistentQuantities "InconsistentQuantities" +// The value of Quantity and the size of Items don't match. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateCloudFrontOriginAccessIdentity +func (c *CloudFront) UpdateCloudFrontOriginAccessIdentity(input *UpdateCloudFrontOriginAccessIdentityInput) (*UpdateCloudFrontOriginAccessIdentityOutput, error) { + req, out := c.UpdateCloudFrontOriginAccessIdentityRequest(input) + return out, req.Send() +} + +// UpdateCloudFrontOriginAccessIdentityWithContext is the same as UpdateCloudFrontOriginAccessIdentity with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateCloudFrontOriginAccessIdentity for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) UpdateCloudFrontOriginAccessIdentityWithContext(ctx aws.Context, input *UpdateCloudFrontOriginAccessIdentityInput, opts ...request.Option) (*UpdateCloudFrontOriginAccessIdentityOutput, error) { + req, out := c.UpdateCloudFrontOriginAccessIdentityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateDistribution = "UpdateDistribution2018_06_18" + +// UpdateDistributionRequest generates a "aws/request.Request" representing the +// client's request for the UpdateDistribution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateDistribution for more information on using the UpdateDistribution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateDistributionRequest method. +// req, resp := client.UpdateDistributionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateDistribution +func (c *CloudFront) UpdateDistributionRequest(input *UpdateDistributionInput) (req *request.Request, output *UpdateDistributionOutput) { + op := &request.Operation{ + Name: opUpdateDistribution, + HTTPMethod: "PUT", + HTTPPath: "/2018-06-18/distribution/{Id}/config", + } + + if input == nil { + input = &UpdateDistributionInput{} + } + + output = &UpdateDistributionOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateDistribution API operation for Amazon CloudFront. +// +// Updates the configuration for a web distribution. +// +// When you update a distribution, there are more required fields than when +// you create a distribution. When you update your distribution by using this +// API action, follow the steps here to get the current configuration and then +// make your updates, to make sure that you include all of the required fields. 
+// To view a summary, see Required Fields for Create Distribution and Update +// Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-overview-required-fields.html) +// in the Amazon CloudFront Developer Guide. +// +// The update process includes getting the current distribution configuration, +// updating the XML document that is returned to make your changes, and then +// submitting an UpdateDistribution request to make the updates. +// +// For information about updating a distribution using the CloudFront console +// instead, see Creating a Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating-console.html) +// in the Amazon CloudFront Developer Guide. +// +// To update a web distribution using the CloudFront API +// +// Submit a GetDistributionConfig request to get the current configuration and +// an Etag header for the distribution. +// +// If you update the distribution again, you must get a new Etag header. +// +// Update the XML document that was returned in the response to your GetDistributionConfig +// request to include your changes. +// +// When you edit the XML file, be aware of the following: +// +// You must strip out the ETag parameter that is returned. +// +// Additional fields are required when you update a distribution. There may +// be fields included in the XML file for features that you haven't configured +// for your distribution. This is expected and required to successfully update +// the distribution. +// +// You can't change the value of CallerReference. If you try to change this +// value, CloudFront returns an IllegalUpdate error. +// +// The new configuration replaces the existing configuration; the values that +// you specify in an UpdateDistribution request are not merged into your existing +// configuration. When you add, delete, or replace values in an element that +// allows multiple values (for example, CNAME), you must specify all of the +// values that you want to appear in the updated distribution. In addition, +// you must update the corresponding Quantity element. +// +// Submit an UpdateDistribution request to update the configuration for your +// distribution: +// +// In the request body, include the XML document that you updated in Step 2. +// The request body must include an XML document with a DistributionConfig element. +// +// Set the value of the HTTP If-Match header to the value of the ETag header +// that CloudFront returned when you submitted the GetDistributionConfig request +// in Step 1. +// +// Review the response to the UpdateDistribution request to confirm that the +// configuration was successfully updated. +// +// Optional: Submit a GetDistribution request to confirm that your changes have +// propagated. When propagation is complete, the value of Status is Deployed. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation UpdateDistribution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeCNAMEAlreadyExists "CNAMEAlreadyExists" +// +// * ErrCodeIllegalUpdate "IllegalUpdate" +// Origin and CallerReference cannot be updated. 
+// +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. +// +// * ErrCodeMissingBody "MissingBody" +// This operation requires a body. Ensure that the body is present and the Content-Type +// header is set. +// +// * ErrCodeNoSuchDistribution "NoSuchDistribution" +// The specified distribution does not exist. +// +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. +// +// * ErrCodeTooManyDistributionCNAMEs "TooManyDistributionCNAMEs" +// Your request contains more CNAMEs than are allowed per distribution. +// +// * ErrCodeInvalidDefaultRootObject "InvalidDefaultRootObject" +// The default root object file name is too big or contains an invalid character. +// +// * ErrCodeInvalidRelativePath "InvalidRelativePath" +// The relative path is too big, is not URL-encoded, or does not begin with +// a slash (/). +// +// * ErrCodeInvalidErrorCode "InvalidErrorCode" +// +// * ErrCodeInvalidResponseCode "InvalidResponseCode" +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInvalidOriginAccessIdentity "InvalidOriginAccessIdentity" +// The origin access identity is not valid or doesn't exist. +// +// * ErrCodeTooManyTrustedSigners "TooManyTrustedSigners" +// Your request contains more trusted signers than are allowed per distribution. +// +// * ErrCodeTrustedSignerDoesNotExist "TrustedSignerDoesNotExist" +// One or more of your trusted signers don't exist. +// +// * ErrCodeInvalidViewerCertificate "InvalidViewerCertificate" +// +// * ErrCodeInvalidMinimumProtocolVersion "InvalidMinimumProtocolVersion" +// +// * ErrCodeInvalidRequiredProtocol "InvalidRequiredProtocol" +// This operation requires the HTTPS protocol. Ensure that you specify the HTTPS +// protocol in your request, or omit the RequiredProtocols element from your +// distribution configuration. +// +// * ErrCodeNoSuchOrigin "NoSuchOrigin" +// No origin exists with the specified Origin Id. +// +// * ErrCodeTooManyOrigins "TooManyOrigins" +// You cannot create more origins for the distribution. +// +// * ErrCodeTooManyCacheBehaviors "TooManyCacheBehaviors" +// You cannot create more cache behaviors for the distribution. +// +// * ErrCodeTooManyCookieNamesInWhiteList "TooManyCookieNamesInWhiteList" +// Your request contains more cookie names in the whitelist than are allowed +// per cache behavior. +// +// * ErrCodeInvalidForwardCookies "InvalidForwardCookies" +// Your request contains forward cookies option which doesn't match with the +// expectation for the whitelisted list of cookie names. Either list of cookie +// names has been specified when not allowed or list of cookie names is missing +// when expected. +// +// * ErrCodeTooManyHeadersInForwardedValues "TooManyHeadersInForwardedValues" +// +// * ErrCodeInvalidHeadersForS3Origin "InvalidHeadersForS3Origin" +// +// * ErrCodeInconsistentQuantities "InconsistentQuantities" +// The value of Quantity and the size of Items don't match. +// +// * ErrCodeTooManyCertificates "TooManyCertificates" +// You cannot create anymore custom SSL/TLS certificates. 
+// +// * ErrCodeInvalidLocationCode "InvalidLocationCode" +// +// * ErrCodeInvalidGeoRestrictionParameter "InvalidGeoRestrictionParameter" +// +// * ErrCodeInvalidTTLOrder "InvalidTTLOrder" +// +// * ErrCodeInvalidWebACLId "InvalidWebACLId" +// +// * ErrCodeTooManyOriginCustomHeaders "TooManyOriginCustomHeaders" +// +// * ErrCodeTooManyQueryStringParameters "TooManyQueryStringParameters" +// +// * ErrCodeInvalidQueryStringParameters "InvalidQueryStringParameters" +// +// * ErrCodeTooManyDistributionsWithLambdaAssociations "TooManyDistributionsWithLambdaAssociations" +// Processing your request would cause the maximum number of distributions with +// Lambda function associations per owner to be exceeded. +// +// * ErrCodeTooManyLambdaFunctionAssociations "TooManyLambdaFunctionAssociations" +// Your request contains more Lambda function associations than are allowed +// per distribution. +// +// * ErrCodeInvalidLambdaFunctionAssociation "InvalidLambdaFunctionAssociation" +// The specified Lambda function association is invalid. +// +// * ErrCodeInvalidOriginReadTimeout "InvalidOriginReadTimeout" +// +// * ErrCodeInvalidOriginKeepaliveTimeout "InvalidOriginKeepaliveTimeout" +// +// * ErrCodeNoSuchFieldLevelEncryptionConfig "NoSuchFieldLevelEncryptionConfig" +// The specified configuration for field-level encryption doesn't exist. +// +// * ErrCodeIllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior "IllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior" +// The specified configuration for field-level encryption can't be associated +// with the specified cache behavior. +// +// * ErrCodeTooManyDistributionsAssociatedToFieldLevelEncryptionConfig "TooManyDistributionsAssociatedToFieldLevelEncryptionConfig" +// The maximum number of distributions have been associated with the specified +// configuration for field-level encryption. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateDistribution +func (c *CloudFront) UpdateDistribution(input *UpdateDistributionInput) (*UpdateDistributionOutput, error) { + req, out := c.UpdateDistributionRequest(input) + return out, req.Send() +} + +// UpdateDistributionWithContext is the same as UpdateDistribution with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateDistribution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) UpdateDistributionWithContext(ctx aws.Context, input *UpdateDistributionInput, opts ...request.Option) (*UpdateDistributionOutput, error) { + req, out := c.UpdateDistributionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateFieldLevelEncryptionConfig = "UpdateFieldLevelEncryptionConfig2018_06_18" + +// UpdateFieldLevelEncryptionConfigRequest generates a "aws/request.Request" representing the +// client's request for the UpdateFieldLevelEncryptionConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
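// Editorial sketch (not part of the generated SDK documentation): the
// read-modify-write flow described above for UpdateDistribution, expressed
// against this client. Field names used below on GetDistributionConfigInput/
// Output (Id, DistributionConfig, ETag), UpdateDistributionInput (Id, IfMatch,
// DistributionConfig), and DistributionConfig (Comment) are assumed from the
// CloudFront API and are not shown in this excerpt; the distribution ID is a
// placeholder.
//
//    // 1. Get the current configuration and its ETag.
//    cfgOut, err := svc.GetDistributionConfig(&cloudfront.GetDistributionConfigInput{
//        Id: aws.String("EDFDVBD6EXAMPLE"),
//    })
//    if err != nil {
//        return err
//    }
//
//    // 2. Modify the returned DistributionConfig in place; the full
//    //    configuration, not a partial update, is what gets submitted.
//    cfg := cfgOut.DistributionConfig
//    cfg.Comment = aws.String("updated via the API")
//
//    // 3. Submit the update, passing the ETag from step 1 as IfMatch.
//    _, err = svc.UpdateDistribution(&cloudfront.UpdateDistributionInput{
//        Id:                 aws.String("EDFDVBD6EXAMPLE"),
//        IfMatch:            cfgOut.ETag,
//        DistributionConfig: cfg,
//    })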
+// +// See UpdateFieldLevelEncryptionConfig for more information on using the UpdateFieldLevelEncryptionConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateFieldLevelEncryptionConfigRequest method. +// req, resp := client.UpdateFieldLevelEncryptionConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateFieldLevelEncryptionConfig +func (c *CloudFront) UpdateFieldLevelEncryptionConfigRequest(input *UpdateFieldLevelEncryptionConfigInput) (req *request.Request, output *UpdateFieldLevelEncryptionConfigOutput) { + op := &request.Operation{ + Name: opUpdateFieldLevelEncryptionConfig, + HTTPMethod: "PUT", + HTTPPath: "/2018-06-18/field-level-encryption/{Id}/config", + } + + if input == nil { + input = &UpdateFieldLevelEncryptionConfigInput{} + } + + output = &UpdateFieldLevelEncryptionConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateFieldLevelEncryptionConfig API operation for Amazon CloudFront. +// +// Update a field-level encryption configuration. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation UpdateFieldLevelEncryptionConfig for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeIllegalUpdate "IllegalUpdate" +// Origin and CallerReference cannot be updated. +// +// * ErrCodeInconsistentQuantities "InconsistentQuantities" +// The value of Quantity and the size of Items don't match. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. +// +// * ErrCodeNoSuchFieldLevelEncryptionProfile "NoSuchFieldLevelEncryptionProfile" +// The specified profile for field-level encryption doesn't exist. +// +// * ErrCodeNoSuchFieldLevelEncryptionConfig "NoSuchFieldLevelEncryptionConfig" +// The specified configuration for field-level encryption doesn't exist. +// +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. +// +// * ErrCodeTooManyFieldLevelEncryptionQueryArgProfiles "TooManyFieldLevelEncryptionQueryArgProfiles" +// The maximum number of query arg profiles for field-level encryption have +// been created. +// +// * ErrCodeTooManyFieldLevelEncryptionContentTypeProfiles "TooManyFieldLevelEncryptionContentTypeProfiles" +// The maximum number of content type profiles for field-level encryption have +// been created. +// +// * ErrCodeQueryArgProfileEmpty "QueryArgProfileEmpty" +// No profile specified for the field-level encryption query argument. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateFieldLevelEncryptionConfig +func (c *CloudFront) UpdateFieldLevelEncryptionConfig(input *UpdateFieldLevelEncryptionConfigInput) (*UpdateFieldLevelEncryptionConfigOutput, error) { + req, out := c.UpdateFieldLevelEncryptionConfigRequest(input) + return out, req.Send() +} + +// UpdateFieldLevelEncryptionConfigWithContext is the same as UpdateFieldLevelEncryptionConfig with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateFieldLevelEncryptionConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) UpdateFieldLevelEncryptionConfigWithContext(ctx aws.Context, input *UpdateFieldLevelEncryptionConfigInput, opts ...request.Option) (*UpdateFieldLevelEncryptionConfigOutput, error) { + req, out := c.UpdateFieldLevelEncryptionConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateFieldLevelEncryptionProfile = "UpdateFieldLevelEncryptionProfile2018_06_18" + +// UpdateFieldLevelEncryptionProfileRequest generates a "aws/request.Request" representing the +// client's request for the UpdateFieldLevelEncryptionProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateFieldLevelEncryptionProfile for more information on using the UpdateFieldLevelEncryptionProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateFieldLevelEncryptionProfileRequest method. +// req, resp := client.UpdateFieldLevelEncryptionProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateFieldLevelEncryptionProfile +func (c *CloudFront) UpdateFieldLevelEncryptionProfileRequest(input *UpdateFieldLevelEncryptionProfileInput) (req *request.Request, output *UpdateFieldLevelEncryptionProfileOutput) { + op := &request.Operation{ + Name: opUpdateFieldLevelEncryptionProfile, + HTTPMethod: "PUT", + HTTPPath: "/2018-06-18/field-level-encryption-profile/{Id}/config", + } + + if input == nil { + input = &UpdateFieldLevelEncryptionProfileInput{} + } + + output = &UpdateFieldLevelEncryptionProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateFieldLevelEncryptionProfile API operation for Amazon CloudFront. +// +// Update a field-level encryption profile. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation UpdateFieldLevelEncryptionProfile for usage and error information. 
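// Editorial sketch (not part of the generated SDK documentation): each
// *WithContext variant defined in this file accepts an aws.Context, which a
// standard library context.Context satisfies, so a deadline can be applied to
// any of these update calls. A minimal example using
// UpdateFieldLevelEncryptionConfigWithContext (construction of input elided):
//
//    // imports assumed: "context", "time"
//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
//    defer cancel()
//
//    out, err := svc.UpdateFieldLevelEncryptionConfigWithContext(ctx, input)
//    if err != nil {
//        // A deadline overrun surfaces as a request error; check it with
//        // awserr.Error as described in the operation documentation above.
//        return err
//    }
//    _ = out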
+// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeFieldLevelEncryptionProfileAlreadyExists "FieldLevelEncryptionProfileAlreadyExists" +// The specified profile for field-level encryption already exists. +// +// * ErrCodeIllegalUpdate "IllegalUpdate" +// Origin and CallerReference cannot be updated. +// +// * ErrCodeInconsistentQuantities "InconsistentQuantities" +// The value of Quantity and the size of Items don't match. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. +// +// * ErrCodeNoSuchPublicKey "NoSuchPublicKey" +// The specified public key doesn't exist. +// +// * ErrCodeNoSuchFieldLevelEncryptionProfile "NoSuchFieldLevelEncryptionProfile" +// The specified profile for field-level encryption doesn't exist. +// +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. +// +// * ErrCodeFieldLevelEncryptionProfileSizeExceeded "FieldLevelEncryptionProfileSizeExceeded" +// The maximum size of a profile for field-level encryption was exceeded. +// +// * ErrCodeTooManyFieldLevelEncryptionEncryptionEntities "TooManyFieldLevelEncryptionEncryptionEntities" +// The maximum number of encryption entities for field-level encryption have +// been created. +// +// * ErrCodeTooManyFieldLevelEncryptionFieldPatterns "TooManyFieldLevelEncryptionFieldPatterns" +// The maximum number of field patterns for field-level encryption have been +// created. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateFieldLevelEncryptionProfile +func (c *CloudFront) UpdateFieldLevelEncryptionProfile(input *UpdateFieldLevelEncryptionProfileInput) (*UpdateFieldLevelEncryptionProfileOutput, error) { + req, out := c.UpdateFieldLevelEncryptionProfileRequest(input) + return out, req.Send() +} + +// UpdateFieldLevelEncryptionProfileWithContext is the same as UpdateFieldLevelEncryptionProfile with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateFieldLevelEncryptionProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) UpdateFieldLevelEncryptionProfileWithContext(ctx aws.Context, input *UpdateFieldLevelEncryptionProfileInput, opts ...request.Option) (*UpdateFieldLevelEncryptionProfileOutput, error) { + req, out := c.UpdateFieldLevelEncryptionProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdatePublicKey = "UpdatePublicKey2018_06_18" + +// UpdatePublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the UpdatePublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdatePublicKey for more information on using the UpdatePublicKey +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdatePublicKeyRequest method. +// req, resp := client.UpdatePublicKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdatePublicKey +func (c *CloudFront) UpdatePublicKeyRequest(input *UpdatePublicKeyInput) (req *request.Request, output *UpdatePublicKeyOutput) { + op := &request.Operation{ + Name: opUpdatePublicKey, + HTTPMethod: "PUT", + HTTPPath: "/2018-06-18/public-key/{Id}/config", + } + + if input == nil { + input = &UpdatePublicKeyInput{} + } + + output = &UpdatePublicKeyOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdatePublicKey API operation for Amazon CloudFront. +// +// Update public key information. Note that the only value you can change is +// the comment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation UpdatePublicKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeCannotChangeImmutablePublicKeyFields "CannotChangeImmutablePublicKeyFields" +// You can't change the value of a public key. +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. +// +// * ErrCodeIllegalUpdate "IllegalUpdate" +// Origin and CallerReference cannot be updated. +// +// * ErrCodeNoSuchPublicKey "NoSuchPublicKey" +// The specified public key doesn't exist. +// +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdatePublicKey +func (c *CloudFront) UpdatePublicKey(input *UpdatePublicKeyInput) (*UpdatePublicKeyOutput, error) { + req, out := c.UpdatePublicKeyRequest(input) + return out, req.Send() +} + +// UpdatePublicKeyWithContext is the same as UpdatePublicKey with the addition of +// the ability to pass a context and additional request options. +// +// See UpdatePublicKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) UpdatePublicKeyWithContext(ctx aws.Context, input *UpdatePublicKeyInput, opts ...request.Option) (*UpdatePublicKeyOutput, error) { + req, out := c.UpdatePublicKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateStreamingDistribution = "UpdateStreamingDistribution2018_06_18" + +// UpdateStreamingDistributionRequest generates a "aws/request.Request" representing the +// client's request for the UpdateStreamingDistribution operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateStreamingDistribution for more information on using the UpdateStreamingDistribution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateStreamingDistributionRequest method. +// req, resp := client.UpdateStreamingDistributionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateStreamingDistribution +func (c *CloudFront) UpdateStreamingDistributionRequest(input *UpdateStreamingDistributionInput) (req *request.Request, output *UpdateStreamingDistributionOutput) { + op := &request.Operation{ + Name: opUpdateStreamingDistribution, + HTTPMethod: "PUT", + HTTPPath: "/2018-06-18/streaming-distribution/{Id}/config", + } + + if input == nil { + input = &UpdateStreamingDistributionInput{} + } + + output = &UpdateStreamingDistributionOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateStreamingDistribution API operation for Amazon CloudFront. +// +// Update a streaming distribution. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudFront's +// API operation UpdateStreamingDistribution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDenied "AccessDenied" +// Access denied. +// +// * ErrCodeCNAMEAlreadyExists "CNAMEAlreadyExists" +// +// * ErrCodeIllegalUpdate "IllegalUpdate" +// Origin and CallerReference cannot be updated. +// +// * ErrCodeInvalidIfMatchVersion "InvalidIfMatchVersion" +// The If-Match version is missing or not valid for the distribution. +// +// * ErrCodeMissingBody "MissingBody" +// This operation requires a body. Ensure that the body is present and the Content-Type +// header is set. +// +// * ErrCodeNoSuchStreamingDistribution "NoSuchStreamingDistribution" +// The specified streaming distribution does not exist. +// +// * ErrCodePreconditionFailed "PreconditionFailed" +// The precondition given in one or more of the request-header fields evaluated +// to false. +// +// * ErrCodeTooManyStreamingDistributionCNAMEs "TooManyStreamingDistributionCNAMEs" +// +// * ErrCodeInvalidArgument "InvalidArgument" +// The argument is invalid. +// +// * ErrCodeInvalidOriginAccessIdentity "InvalidOriginAccessIdentity" +// The origin access identity is not valid or doesn't exist. +// +// * ErrCodeTooManyTrustedSigners "TooManyTrustedSigners" +// Your request contains more trusted signers than are allowed per distribution. +// +// * ErrCodeTrustedSignerDoesNotExist "TrustedSignerDoesNotExist" +// One or more of your trusted signers don't exist. +// +// * ErrCodeInconsistentQuantities "InconsistentQuantities" +// The value of Quantity and the size of Items don't match. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18/UpdateStreamingDistribution +func (c *CloudFront) UpdateStreamingDistribution(input *UpdateStreamingDistributionInput) (*UpdateStreamingDistributionOutput, error) { + req, out := c.UpdateStreamingDistributionRequest(input) + return out, req.Send() +} + +// UpdateStreamingDistributionWithContext is the same as UpdateStreamingDistribution with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateStreamingDistribution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudFront) UpdateStreamingDistributionWithContext(ctx aws.Context, input *UpdateStreamingDistributionInput, opts ...request.Option) (*UpdateStreamingDistributionOutput, error) { + req, out := c.UpdateStreamingDistributionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// A complex type that lists the AWS accounts, if any, that you included in +// the TrustedSigners complex type for this distribution. These are the accounts +// that you want to allow to create signed URLs for private content. +// +// The Signer complex type lists the AWS account number of the trusted signer +// or self if the signer is the AWS account that created the distribution. The +// Signer element also includes the IDs of any active CloudFront key pairs that +// are associated with the trusted signer's AWS account. If no KeyPairId element +// appears for a Signer, that signer can't create signed URLs. +// +// For more information, see Serving Private Content through CloudFront (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) +// in the Amazon CloudFront Developer Guide. +type ActiveTrustedSigners struct { + _ struct{} `type:"structure"` + + // Enabled is true if any of the AWS accounts listed in the TrustedSigners complex + // type for this RTMP distribution have active CloudFront key pairs. If not, + // Enabled is false. + // + // For more information, see ActiveTrustedSigners. + // + // Enabled is a required field + Enabled *bool `type:"boolean" required:"true"` + + // A complex type that contains one Signer complex type for each trusted signer + // that is specified in the TrustedSigners complex type. + // + // For more information, see ActiveTrustedSigners. + Items []*Signer `locationNameList:"Signer" type:"list"` + + // A complex type that contains one Signer complex type for each trusted signer + // specified in the TrustedSigners complex type. + // + // For more information, see ActiveTrustedSigners. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s ActiveTrustedSigners) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ActiveTrustedSigners) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. +func (s *ActiveTrustedSigners) SetEnabled(v bool) *ActiveTrustedSigners { + s.Enabled = &v + return s +} + +// SetItems sets the Items field's value. 
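// Editorial sketch (not part of the generated SDK documentation):
// ActiveTrustedSigners is returned by CloudFront rather than supplied by the
// caller, so a typical use is checking whether signed URLs can currently be
// produced. The Signer field name AwsAccountNumber is assumed from the API
// shape and is not defined in this excerpt. Assuming ats is an
// *ActiveTrustedSigners taken from an API response:
//
//    // imports assumed: "fmt", "github.com/aws/aws-sdk-go/aws"
//    if aws.BoolValue(ats.Enabled) {
//        for _, signer := range ats.Items {
//            // Each Items entry is a *Signer for one trusted AWS account.
//            fmt.Println(aws.StringValue(signer.AwsAccountNumber))
//        }
//    }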
+func (s *ActiveTrustedSigners) SetItems(v []*Signer) *ActiveTrustedSigners { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *ActiveTrustedSigners) SetQuantity(v int64) *ActiveTrustedSigners { + s.Quantity = &v + return s +} + +// A complex type that contains information about CNAMEs (alternate domain names), +// if any, for this distribution. +type Aliases struct { + _ struct{} `type:"structure"` + + // A complex type that contains the CNAME aliases, if any, that you want to + // associate with this distribution. + Items []*string `locationNameList:"CNAME" type:"list"` + + // The number of CNAME aliases, if any, that you want to associate with this + // distribution. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s Aliases) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Aliases) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Aliases) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Aliases"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetItems sets the Items field's value. +func (s *Aliases) SetItems(v []*string) *Aliases { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *Aliases) SetQuantity(v int64) *Aliases { + s.Quantity = &v + return s +} + +// A complex type that controls which HTTP methods CloudFront processes and +// forwards to your Amazon S3 bucket or your custom origin. There are three +// choices: +// +// * CloudFront forwards only GET and HEAD requests. +// +// * CloudFront forwards only GET, HEAD, and OPTIONS requests. +// +// * CloudFront forwards GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE +// requests. +// +// If you pick the third choice, you may need to restrict access to your Amazon +// S3 bucket or to your custom origin so users can't perform operations that +// you don't want them to. For example, you might not want users to have permissions +// to delete objects from your origin. +type AllowedMethods struct { + _ struct{} `type:"structure"` + + // A complex type that controls whether CloudFront caches the response to requests + // using the specified HTTP methods. There are two choices: + // + // * CloudFront caches responses to GET and HEAD requests. + // + // * CloudFront caches responses to GET, HEAD, and OPTIONS requests. + // + // If you pick the second choice for your Amazon S3 Origin, you may need to + // forward Access-Control-Request-Method, Access-Control-Request-Headers, and + // Origin headers for the responses to be cached correctly. + CachedMethods *CachedMethods `type:"structure"` + + // A complex type that contains the HTTP methods that you want CloudFront to + // process and forward to your origin. + // + // Items is a required field + Items []*string `locationNameList:"Method" type:"list" required:"true"` + + // The number of HTTP methods that you want CloudFront to forward to your origin. + // Valid values are 2 (for GET and HEAD requests), 3 (for GET, HEAD, and OPTIONS + // requests) and 7 (for GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE requests). 
+ // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s AllowedMethods) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AllowedMethods) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AllowedMethods) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AllowedMethods"} + if s.Items == nil { + invalidParams.Add(request.NewErrParamRequired("Items")) + } + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + if s.CachedMethods != nil { + if err := s.CachedMethods.Validate(); err != nil { + invalidParams.AddNested("CachedMethods", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCachedMethods sets the CachedMethods field's value. +func (s *AllowedMethods) SetCachedMethods(v *CachedMethods) *AllowedMethods { + s.CachedMethods = v + return s +} + +// SetItems sets the Items field's value. +func (s *AllowedMethods) SetItems(v []*string) *AllowedMethods { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *AllowedMethods) SetQuantity(v int64) *AllowedMethods { + s.Quantity = &v + return s +} + +// A complex type that describes how CloudFront processes requests. +// +// You must create at least as many cache behaviors (including the default cache +// behavior) as you have origins if you want CloudFront to distribute objects +// from all of the origins. Each cache behavior specifies the one origin from +// which you want CloudFront to get objects. If you have two origins and only +// the default cache behavior, the default cache behavior will cause CloudFront +// to get objects from one of the origins, but the other origin is never used. +// +// For the current limit on the number of cache behaviors that you can add to +// a distribution, see Amazon CloudFront Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_cloudfront) +// in the AWS General Reference. +// +// If you don't want to specify any cache behaviors, include only an empty CacheBehaviors +// element. Don't include an empty CacheBehavior element, or CloudFront returns +// a MalformedXML error. +// +// To delete all cache behaviors in an existing distribution, update the distribution +// configuration and include only an empty CacheBehaviors element. +// +// To add, change, or remove one or more cache behaviors, update the distribution +// configuration and specify all of the cache behaviors that you want to include +// in the updated distribution. +// +// For more information about cache behaviors, see Cache Behaviors (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesCacheBehavior) +// in the Amazon CloudFront Developer Guide. +type CacheBehavior struct { + _ struct{} `type:"structure"` + + // A complex type that controls which HTTP methods CloudFront processes and + // forwards to your Amazon S3 bucket or your custom origin. There are three + // choices: + // + // * CloudFront forwards only GET and HEAD requests. + // + // * CloudFront forwards only GET, HEAD, and OPTIONS requests. + // + // * CloudFront forwards GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE + // requests. 
+ // + // If you pick the third choice, you may need to restrict access to your Amazon + // S3 bucket or to your custom origin so users can't perform operations that + // you don't want them to. For example, you might not want users to have permissions + // to delete objects from your origin. + AllowedMethods *AllowedMethods `type:"structure"` + + // Whether you want CloudFront to automatically compress certain files for this + // cache behavior. If so, specify true; if not, specify false. For more information, + // see Serving Compressed Files (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html) + // in the Amazon CloudFront Developer Guide. + Compress *bool `type:"boolean"` + + // The default amount of time that you want objects to stay in CloudFront caches + // before CloudFront forwards another request to your origin to determine whether + // the object has been updated. The value that you specify applies only when + // your origin does not add HTTP headers such as Cache-Control max-age, Cache-Control + // s-maxage, and Expires to objects. For more information, see Specifying How + // Long Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) + // in the Amazon CloudFront Developer Guide. + DefaultTTL *int64 `type:"long"` + + // The value of ID for the field-level encryption configuration that you want + // CloudFront to use for encrypting specific fields of data for a cache behavior + // or for the default cache behavior in your distribution. + FieldLevelEncryptionId *string `type:"string"` + + // A complex type that specifies how CloudFront handles query strings and cookies. + // + // ForwardedValues is a required field + ForwardedValues *ForwardedValues `type:"structure" required:"true"` + + // A complex type that contains zero or more Lambda function associations for + // a cache behavior. + LambdaFunctionAssociations *LambdaFunctionAssociations `type:"structure"` + + // The maximum amount of time that you want objects to stay in CloudFront caches + // before CloudFront forwards another request to your origin to determine whether + // the object has been updated. The value that you specify applies only when + // your origin adds HTTP headers such as Cache-Control max-age, Cache-Control + // s-maxage, and Expires to objects. For more information, see Specifying How + // Long Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) + // in the Amazon CloudFront Developer Guide. + MaxTTL *int64 `type:"long"` + + // The minimum amount of time that you want objects to stay in CloudFront caches + // before CloudFront forwards another request to your origin to determine whether + // the object has been updated. For more information, see Specifying How Long + // Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) + // in the Amazon Amazon CloudFront Developer Guide. + // + // You must specify 0 for MinTTL if you configure CloudFront to forward all + // headers to your origin (under Headers, if you specify 1 for Quantity and + // * for Name). + // + // MinTTL is a required field + MinTTL *int64 `type:"long" required:"true"` + + // The pattern (for example, images/*.jpg) that specifies which requests to + // apply the behavior to. 
When CloudFront receives a viewer request, the requested + // path is compared with path patterns in the order in which cache behaviors + // are listed in the distribution. + // + // You can optionally include a slash (/) at the beginning of the path pattern. + // For example, /images/*.jpg. CloudFront behavior is the same with or without + // the leading /. + // + // The path pattern for the default cache behavior is * and cannot be changed. + // If the request for an object does not match the path pattern for any cache + // behaviors, CloudFront applies the behavior in the default cache behavior. + // + // For more information, see Path Pattern (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesPathPattern) + // in the Amazon CloudFront Developer Guide. + // + // PathPattern is a required field + PathPattern *string `type:"string" required:"true"` + + // Indicates whether you want to distribute media files in the Microsoft Smooth + // Streaming format using the origin that is associated with this cache behavior. + // If so, specify true; if not, specify false. If you specify true for SmoothStreaming, + // you can still distribute other content using this cache behavior if the content + // matches the value of PathPattern. + SmoothStreaming *bool `type:"boolean"` + + // The value of ID for the origin that you want CloudFront to route requests + // to when a request matches the path pattern either for a cache behavior or + // for the default cache behavior in your distribution. + // + // TargetOriginId is a required field + TargetOriginId *string `type:"string" required:"true"` + + // A complex type that specifies the AWS accounts, if any, that you want to + // allow to create signed URLs for private content. + // + // If you want to require signed URLs in requests for objects in the target + // origin that match the PathPattern for this cache behavior, specify true for + // Enabled, and specify the applicable values for Quantity and Items. For more + // information, see Serving Private Content through CloudFront (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) + // in the Amazon Amazon CloudFront Developer Guide. + // + // If you don't want to require signed URLs in requests for objects that match + // PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. + // + // To add, change, or remove one or more trusted signers, change Enabled to + // true (if it's currently false), change Quantity as applicable, and specify + // all of the trusted signers that you want to include in the updated distribution. + // + // TrustedSigners is a required field + TrustedSigners *TrustedSigners `type:"structure" required:"true"` + + // The protocol that viewers can use to access the files in the origin specified + // by TargetOriginId when a request matches the path pattern in PathPattern. + // You can specify the following options: + // + // * allow-all: Viewers can use HTTP or HTTPS. + // + // * redirect-to-https: If a viewer submits an HTTP request, CloudFront returns + // an HTTP status code of 301 (Moved Permanently) to the viewer along with + // the HTTPS URL. The viewer then resubmits the request using the new URL. + // + // + // * https-only: If a viewer sends an HTTP request, CloudFront returns an + // HTTP status code of 403 (Forbidden). 
+ // + // For more information about requiring the HTTPS protocol, see Using an HTTPS + // Connection to Access Your Objects (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html) + // in the Amazon CloudFront Developer Guide. + // + // The only way to guarantee that viewers retrieve an object that was fetched + // from the origin using HTTPS is never to use any other protocol to fetch the + // object. If you have recently changed from HTTP to HTTPS, we recommend that + // you clear your objects' cache because cached objects are protocol agnostic. + // That means that an edge location will return an object from the cache regardless + // of whether the current request protocol matches the protocol used previously. + // For more information, see Specifying How Long Objects and Errors Stay in + // a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) + // in the Amazon CloudFront Developer Guide. + // + // ViewerProtocolPolicy is a required field + ViewerProtocolPolicy *string `type:"string" required:"true" enum:"ViewerProtocolPolicy"` +} + +// String returns the string representation +func (s CacheBehavior) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CacheBehavior) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CacheBehavior) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CacheBehavior"} + if s.ForwardedValues == nil { + invalidParams.Add(request.NewErrParamRequired("ForwardedValues")) + } + if s.MinTTL == nil { + invalidParams.Add(request.NewErrParamRequired("MinTTL")) + } + if s.PathPattern == nil { + invalidParams.Add(request.NewErrParamRequired("PathPattern")) + } + if s.TargetOriginId == nil { + invalidParams.Add(request.NewErrParamRequired("TargetOriginId")) + } + if s.TrustedSigners == nil { + invalidParams.Add(request.NewErrParamRequired("TrustedSigners")) + } + if s.ViewerProtocolPolicy == nil { + invalidParams.Add(request.NewErrParamRequired("ViewerProtocolPolicy")) + } + if s.AllowedMethods != nil { + if err := s.AllowedMethods.Validate(); err != nil { + invalidParams.AddNested("AllowedMethods", err.(request.ErrInvalidParams)) + } + } + if s.ForwardedValues != nil { + if err := s.ForwardedValues.Validate(); err != nil { + invalidParams.AddNested("ForwardedValues", err.(request.ErrInvalidParams)) + } + } + if s.LambdaFunctionAssociations != nil { + if err := s.LambdaFunctionAssociations.Validate(); err != nil { + invalidParams.AddNested("LambdaFunctionAssociations", err.(request.ErrInvalidParams)) + } + } + if s.TrustedSigners != nil { + if err := s.TrustedSigners.Validate(); err != nil { + invalidParams.AddNested("TrustedSigners", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllowedMethods sets the AllowedMethods field's value. +func (s *CacheBehavior) SetAllowedMethods(v *AllowedMethods) *CacheBehavior { + s.AllowedMethods = v + return s +} + +// SetCompress sets the Compress field's value. +func (s *CacheBehavior) SetCompress(v bool) *CacheBehavior { + s.Compress = &v + return s +} + +// SetDefaultTTL sets the DefaultTTL field's value. +func (s *CacheBehavior) SetDefaultTTL(v int64) *CacheBehavior { + s.DefaultTTL = &v + return s +} + +// SetFieldLevelEncryptionId sets the FieldLevelEncryptionId field's value. 
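// Editorial sketch (not part of the generated SDK documentation): CacheBehavior
// values are usually assembled with the chained Set* helpers below, after which
// Validate enforces the required fields listed above. Construction of the
// ForwardedValues and TrustedSigners values is elided because their field names
// are defined elsewhere in this file and would be assumptions here; the literal
// values are illustrative placeholders.
//
//    behavior := (&cloudfront.CacheBehavior{}).
//        SetPathPattern("images/*.jpg").
//        SetTargetOriginId("myS3Origin").
//        SetViewerProtocolPolicy("redirect-to-https").
//        SetMinTTL(0).
//        SetCompress(true).
//        SetForwardedValues(forwardedValues). // *cloudfront.ForwardedValues, built elsewhere
//        SetTrustedSigners(trustedSigners)    // *cloudfront.TrustedSigners, built elsewhere
//
//    if err := behavior.Validate(); err != nil {
//        return err
//    }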
+func (s *CacheBehavior) SetFieldLevelEncryptionId(v string) *CacheBehavior { + s.FieldLevelEncryptionId = &v + return s +} + +// SetForwardedValues sets the ForwardedValues field's value. +func (s *CacheBehavior) SetForwardedValues(v *ForwardedValues) *CacheBehavior { + s.ForwardedValues = v + return s +} + +// SetLambdaFunctionAssociations sets the LambdaFunctionAssociations field's value. +func (s *CacheBehavior) SetLambdaFunctionAssociations(v *LambdaFunctionAssociations) *CacheBehavior { + s.LambdaFunctionAssociations = v + return s +} + +// SetMaxTTL sets the MaxTTL field's value. +func (s *CacheBehavior) SetMaxTTL(v int64) *CacheBehavior { + s.MaxTTL = &v + return s +} + +// SetMinTTL sets the MinTTL field's value. +func (s *CacheBehavior) SetMinTTL(v int64) *CacheBehavior { + s.MinTTL = &v + return s +} + +// SetPathPattern sets the PathPattern field's value. +func (s *CacheBehavior) SetPathPattern(v string) *CacheBehavior { + s.PathPattern = &v + return s +} + +// SetSmoothStreaming sets the SmoothStreaming field's value. +func (s *CacheBehavior) SetSmoothStreaming(v bool) *CacheBehavior { + s.SmoothStreaming = &v + return s +} + +// SetTargetOriginId sets the TargetOriginId field's value. +func (s *CacheBehavior) SetTargetOriginId(v string) *CacheBehavior { + s.TargetOriginId = &v + return s +} + +// SetTrustedSigners sets the TrustedSigners field's value. +func (s *CacheBehavior) SetTrustedSigners(v *TrustedSigners) *CacheBehavior { + s.TrustedSigners = v + return s +} + +// SetViewerProtocolPolicy sets the ViewerProtocolPolicy field's value. +func (s *CacheBehavior) SetViewerProtocolPolicy(v string) *CacheBehavior { + s.ViewerProtocolPolicy = &v + return s +} + +// A complex type that contains zero or more CacheBehavior elements. +type CacheBehaviors struct { + _ struct{} `type:"structure"` + + // Optional: A complex type that contains cache behaviors for this distribution. + // If Quantity is 0, you can omit Items. + Items []*CacheBehavior `locationNameList:"CacheBehavior" type:"list"` + + // The number of cache behaviors for this distribution. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s CacheBehaviors) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CacheBehaviors) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CacheBehaviors) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CacheBehaviors"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetItems sets the Items field's value. +func (s *CacheBehaviors) SetItems(v []*CacheBehavior) *CacheBehaviors { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *CacheBehaviors) SetQuantity(v int64) *CacheBehaviors { + s.Quantity = &v + return s +} + +// A complex type that controls whether CloudFront caches the response to requests +// using the specified HTTP methods. There are two choices: +// +// * CloudFront caches responses to GET and HEAD requests. 
+// +// * CloudFront caches responses to GET, HEAD, and OPTIONS requests. +// +// If you pick the second choice for your Amazon S3 Origin, you may need to +// forward Access-Control-Request-Method, Access-Control-Request-Headers, and +// Origin headers for the responses to be cached correctly. +type CachedMethods struct { + _ struct{} `type:"structure"` + + // A complex type that contains the HTTP methods that you want CloudFront to + // cache responses to. + // + // Items is a required field + Items []*string `locationNameList:"Method" type:"list" required:"true"` + + // The number of HTTP methods for which you want CloudFront to cache responses. + // Valid values are 2 (for caching responses to GET and HEAD requests) and 3 + // (for caching responses to GET, HEAD, and OPTIONS requests). + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s CachedMethods) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CachedMethods) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CachedMethods) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CachedMethods"} + if s.Items == nil { + invalidParams.Add(request.NewErrParamRequired("Items")) + } + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetItems sets the Items field's value. +func (s *CachedMethods) SetItems(v []*string) *CachedMethods { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *CachedMethods) SetQuantity(v int64) *CachedMethods { + s.Quantity = &v + return s +} + +// A field-level encryption content type profile. +type ContentTypeProfile struct { + _ struct{} `type:"structure"` + + // The content type for a field-level encryption content type-profile mapping. + // + // ContentType is a required field + ContentType *string `type:"string" required:"true"` + + // The format for a field-level encryption content type-profile mapping. + // + // Format is a required field + Format *string `type:"string" required:"true" enum:"Format"` + + // The profile ID for a field-level encryption content type-profile mapping. + ProfileId *string `type:"string"` +} + +// String returns the string representation +func (s ContentTypeProfile) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContentTypeProfile) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ContentTypeProfile) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ContentTypeProfile"} + if s.ContentType == nil { + invalidParams.Add(request.NewErrParamRequired("ContentType")) + } + if s.Format == nil { + invalidParams.Add(request.NewErrParamRequired("Format")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContentType sets the ContentType field's value. +func (s *ContentTypeProfile) SetContentType(v string) *ContentTypeProfile { + s.ContentType = &v + return s +} + +// SetFormat sets the Format field's value. +func (s *ContentTypeProfile) SetFormat(v string) *ContentTypeProfile { + s.Format = &v + return s +} + +// SetProfileId sets the ProfileId field's value. 
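The `CachedMethods` comments above spell out the only two valid shapes: `Quantity` of 2 for GET/HEAD, or 3 for GET/HEAD/OPTIONS. As a minimal, illustrative sketch (not part of the vendored file) of how the generated setters and `Validate()` are typically chained for this type:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	// Cache responses to GET and HEAD only, so Quantity must be 2.
	cached := (&cloudfront.CachedMethods{}).
		SetQuantity(2).
		SetItems([]*string{aws.String("GET"), aws.String("HEAD")})

	// Validate reports the missing required fields (Items, Quantity) before any API call.
	if err := cached.Validate(); err != nil {
		fmt.Println("invalid CachedMethods:", err)
		return
	}
	fmt.Println(cached)
}
```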
+func (s *ContentTypeProfile) SetProfileId(v string) *ContentTypeProfile { + s.ProfileId = &v + return s +} + +// The configuration for a field-level encryption content type-profile mapping. +type ContentTypeProfileConfig struct { + _ struct{} `type:"structure"` + + // The configuration for a field-level encryption content type-profile. + ContentTypeProfiles *ContentTypeProfiles `type:"structure"` + + // The setting in a field-level encryption content type-profile mapping that + // specifies what to do when an unknown content type is provided for the profile. + // If true, content is forwarded without being encrypted when the content type + // is unknown. If false (the default), an error is returned when the content + // type is unknown. + // + // ForwardWhenContentTypeIsUnknown is a required field + ForwardWhenContentTypeIsUnknown *bool `type:"boolean" required:"true"` +} + +// String returns the string representation +func (s ContentTypeProfileConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContentTypeProfileConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ContentTypeProfileConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ContentTypeProfileConfig"} + if s.ForwardWhenContentTypeIsUnknown == nil { + invalidParams.Add(request.NewErrParamRequired("ForwardWhenContentTypeIsUnknown")) + } + if s.ContentTypeProfiles != nil { + if err := s.ContentTypeProfiles.Validate(); err != nil { + invalidParams.AddNested("ContentTypeProfiles", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContentTypeProfiles sets the ContentTypeProfiles field's value. +func (s *ContentTypeProfileConfig) SetContentTypeProfiles(v *ContentTypeProfiles) *ContentTypeProfileConfig { + s.ContentTypeProfiles = v + return s +} + +// SetForwardWhenContentTypeIsUnknown sets the ForwardWhenContentTypeIsUnknown field's value. +func (s *ContentTypeProfileConfig) SetForwardWhenContentTypeIsUnknown(v bool) *ContentTypeProfileConfig { + s.ForwardWhenContentTypeIsUnknown = &v + return s +} + +// Field-level encryption content type-profile. +type ContentTypeProfiles struct { + _ struct{} `type:"structure"` + + // Items in a field-level encryption content type-profile mapping. + Items []*ContentTypeProfile `locationNameList:"ContentTypeProfile" type:"list"` + + // The number of field-level encryption content type-profile mappings. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s ContentTypeProfiles) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContentTypeProfiles) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ContentTypeProfiles) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ContentTypeProfiles"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetItems sets the Items field's value. +func (s *ContentTypeProfiles) SetItems(v []*ContentTypeProfile) *ContentTypeProfiles { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *ContentTypeProfiles) SetQuantity(v int64) *ContentTypeProfiles { + s.Quantity = &v + return s +} + +// A complex type that specifies whether you want CloudFront to forward cookies +// to the origin and, if so, which ones. For more information about forwarding +// cookies to the origin, see How CloudFront Forwards, Caches, and Logs Cookies +// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html) +// in the Amazon CloudFront Developer Guide. +type CookieNames struct { + _ struct{} `type:"structure"` + + // A complex type that contains one Name element for each cookie that you want + // CloudFront to forward to the origin for this cache behavior. + Items []*string `locationNameList:"Name" type:"list"` + + // The number of different cookies that you want CloudFront to forward to the + // origin for this cache behavior. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s CookieNames) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CookieNames) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CookieNames) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CookieNames"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetItems sets the Items field's value. +func (s *CookieNames) SetItems(v []*string) *CookieNames { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *CookieNames) SetQuantity(v int64) *CookieNames { + s.Quantity = &v + return s +} + +// A complex type that specifies whether you want CloudFront to forward cookies +// to the origin and, if so, which ones. For more information about forwarding +// cookies to the origin, see How CloudFront Forwards, Caches, and Logs Cookies +// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html) +// in the Amazon CloudFront Developer Guide. +type CookiePreference struct { + _ struct{} `type:"structure"` + + // Specifies which cookies to forward to the origin for this cache behavior: + // all, none, or the list of cookies specified in the WhitelistedNames complex + // type. + // + // Amazon S3 doesn't process cookies. When the cache behavior is forwarding + // requests to an Amazon S3 origin, specify none for the Forward element. + // + // Forward is a required field + Forward *string `type:"string" required:"true" enum:"ItemSelection"` + + // Required if you specify whitelist for the value of Forward:. 
A complex type + // that specifies how many different cookies you want CloudFront to forward + // to the origin for this cache behavior and, if you want to forward selected + // cookies, the names of those cookies. + // + // If you specify all or none for the value of Forward, omit WhitelistedNames. + // If you change the value of Forward from whitelist to all or none and you + // don't delete the WhitelistedNames element and its child elements, CloudFront + // deletes them automatically. + // + // For the current limit on the number of cookie names that you can whitelist + // for each cache behavior, see Amazon CloudFront Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_cloudfront) + // in the AWS General Reference. + WhitelistedNames *CookieNames `type:"structure"` +} + +// String returns the string representation +func (s CookiePreference) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CookiePreference) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CookiePreference) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CookiePreference"} + if s.Forward == nil { + invalidParams.Add(request.NewErrParamRequired("Forward")) + } + if s.WhitelistedNames != nil { + if err := s.WhitelistedNames.Validate(); err != nil { + invalidParams.AddNested("WhitelistedNames", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetForward sets the Forward field's value. +func (s *CookiePreference) SetForward(v string) *CookiePreference { + s.Forward = &v + return s +} + +// SetWhitelistedNames sets the WhitelistedNames field's value. +func (s *CookiePreference) SetWhitelistedNames(v *CookieNames) *CookiePreference { + s.WhitelistedNames = v + return s +} + +// The request to create a new origin access identity. +type CreateCloudFrontOriginAccessIdentityInput struct { + _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityConfig"` + + // The current configuration information for the identity. + // + // CloudFrontOriginAccessIdentityConfig is a required field + CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `locationName:"CloudFrontOriginAccessIdentityConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreateCloudFrontOriginAccessIdentityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCloudFrontOriginAccessIdentityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
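To make the `Forward`/`WhitelistedNames` relationship described above concrete, here is a minimal sketch (cookie names are hypothetical) of building a whitelist-style `CookiePreference` with the setters defined in this file:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	// Forward only the named cookies; Quantity must match the number of Items.
	names := (&cloudfront.CookieNames{}).
		SetQuantity(2).
		SetItems([]*string{aws.String("session-id"), aws.String("user-pref")})

	// WhitelistedNames is only meaningful when Forward is "whitelist";
	// with "all" or "none" it would be omitted, as the comments above note.
	pref := (&cloudfront.CookiePreference{}).
		SetForward("whitelist").
		SetWhitelistedNames(names)

	if err := pref.Validate(); err != nil {
		fmt.Println("invalid CookiePreference:", err)
		return
	}
	fmt.Println(pref)
}
```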
+func (s *CreateCloudFrontOriginAccessIdentityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCloudFrontOriginAccessIdentityInput"} + if s.CloudFrontOriginAccessIdentityConfig == nil { + invalidParams.Add(request.NewErrParamRequired("CloudFrontOriginAccessIdentityConfig")) + } + if s.CloudFrontOriginAccessIdentityConfig != nil { + if err := s.CloudFrontOriginAccessIdentityConfig.Validate(); err != nil { + invalidParams.AddNested("CloudFrontOriginAccessIdentityConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCloudFrontOriginAccessIdentityConfig sets the CloudFrontOriginAccessIdentityConfig field's value. +func (s *CreateCloudFrontOriginAccessIdentityInput) SetCloudFrontOriginAccessIdentityConfig(v *OriginAccessIdentityConfig) *CreateCloudFrontOriginAccessIdentityInput { + s.CloudFrontOriginAccessIdentityConfig = v + return s +} + +// The returned result of the corresponding request. +type CreateCloudFrontOriginAccessIdentityOutput struct { + _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentity"` + + // The origin access identity's information. + CloudFrontOriginAccessIdentity *OriginAccessIdentity `type:"structure"` + + // The current version of the origin access identity created. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // The fully qualified URI of the new origin access identity just created. For + // example: https://cloudfront.amazonaws.com/2010-11-01/origin-access-identity/cloudfront/E74FTE3AJFJ256A. + Location *string `location:"header" locationName:"Location" type:"string"` +} + +// String returns the string representation +func (s CreateCloudFrontOriginAccessIdentityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCloudFrontOriginAccessIdentityOutput) GoString() string { + return s.String() +} + +// SetCloudFrontOriginAccessIdentity sets the CloudFrontOriginAccessIdentity field's value. +func (s *CreateCloudFrontOriginAccessIdentityOutput) SetCloudFrontOriginAccessIdentity(v *OriginAccessIdentity) *CreateCloudFrontOriginAccessIdentityOutput { + s.CloudFrontOriginAccessIdentity = v + return s +} + +// SetETag sets the ETag field's value. +func (s *CreateCloudFrontOriginAccessIdentityOutput) SetETag(v string) *CreateCloudFrontOriginAccessIdentityOutput { + s.ETag = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreateCloudFrontOriginAccessIdentityOutput) SetLocation(v string) *CreateCloudFrontOriginAccessIdentityOutput { + s.Location = &v + return s +} + +// The request to create a new distribution. +type CreateDistributionInput struct { + _ struct{} `type:"structure" payload:"DistributionConfig"` + + // The distribution's configuration information. + // + // DistributionConfig is a required field + DistributionConfig *DistributionConfig `locationName:"DistributionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreateDistributionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDistributionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateDistributionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDistributionInput"} + if s.DistributionConfig == nil { + invalidParams.Add(request.NewErrParamRequired("DistributionConfig")) + } + if s.DistributionConfig != nil { + if err := s.DistributionConfig.Validate(); err != nil { + invalidParams.AddNested("DistributionConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDistributionConfig sets the DistributionConfig field's value. +func (s *CreateDistributionInput) SetDistributionConfig(v *DistributionConfig) *CreateDistributionInput { + s.DistributionConfig = v + return s +} + +// The returned result of the corresponding request. +type CreateDistributionOutput struct { + _ struct{} `type:"structure" payload:"Distribution"` + + // The distribution's information. + Distribution *Distribution `type:"structure"` + + // The current version of the distribution created. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // The fully qualified URI of the new distribution resource just created. For + // example: https://cloudfront.amazonaws.com/2010-11-01/distribution/EDFDVBD632BHDS5. + Location *string `location:"header" locationName:"Location" type:"string"` +} + +// String returns the string representation +func (s CreateDistributionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDistributionOutput) GoString() string { + return s.String() +} + +// SetDistribution sets the Distribution field's value. +func (s *CreateDistributionOutput) SetDistribution(v *Distribution) *CreateDistributionOutput { + s.Distribution = v + return s +} + +// SetETag sets the ETag field's value. +func (s *CreateDistributionOutput) SetETag(v string) *CreateDistributionOutput { + s.ETag = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreateDistributionOutput) SetLocation(v string) *CreateDistributionOutput { + s.Location = &v + return s +} + +// The request to create a new distribution with tags. +type CreateDistributionWithTagsInput struct { + _ struct{} `type:"structure" payload:"DistributionConfigWithTags"` + + // The distribution's configuration information. + // + // DistributionConfigWithTags is a required field + DistributionConfigWithTags *DistributionConfigWithTags `locationName:"DistributionConfigWithTags" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreateDistributionWithTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDistributionWithTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateDistributionWithTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDistributionWithTagsInput"} + if s.DistributionConfigWithTags == nil { + invalidParams.Add(request.NewErrParamRequired("DistributionConfigWithTags")) + } + if s.DistributionConfigWithTags != nil { + if err := s.DistributionConfigWithTags.Validate(); err != nil { + invalidParams.AddNested("DistributionConfigWithTags", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDistributionConfigWithTags sets the DistributionConfigWithTags field's value. +func (s *CreateDistributionWithTagsInput) SetDistributionConfigWithTags(v *DistributionConfigWithTags) *CreateDistributionWithTagsInput { + s.DistributionConfigWithTags = v + return s +} + +// The returned result of the corresponding request. +type CreateDistributionWithTagsOutput struct { + _ struct{} `type:"structure" payload:"Distribution"` + + // The distribution's information. + Distribution *Distribution `type:"structure"` + + // The current version of the distribution created. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // The fully qualified URI of the new distribution resource just created. For + // example: https://cloudfront.amazonaws.com/2010-11-01/distribution/EDFDVBD632BHDS5. + Location *string `location:"header" locationName:"Location" type:"string"` +} + +// String returns the string representation +func (s CreateDistributionWithTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDistributionWithTagsOutput) GoString() string { + return s.String() +} + +// SetDistribution sets the Distribution field's value. +func (s *CreateDistributionWithTagsOutput) SetDistribution(v *Distribution) *CreateDistributionWithTagsOutput { + s.Distribution = v + return s +} + +// SetETag sets the ETag field's value. +func (s *CreateDistributionWithTagsOutput) SetETag(v string) *CreateDistributionWithTagsOutput { + s.ETag = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreateDistributionWithTagsOutput) SetLocation(v string) *CreateDistributionWithTagsOutput { + s.Location = &v + return s +} + +type CreateFieldLevelEncryptionConfigInput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionConfig"` + + // The request to create a new field-level encryption configuration. + // + // FieldLevelEncryptionConfig is a required field + FieldLevelEncryptionConfig *FieldLevelEncryptionConfig `locationName:"FieldLevelEncryptionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreateFieldLevelEncryptionConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFieldLevelEncryptionConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateFieldLevelEncryptionConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateFieldLevelEncryptionConfigInput"} + if s.FieldLevelEncryptionConfig == nil { + invalidParams.Add(request.NewErrParamRequired("FieldLevelEncryptionConfig")) + } + if s.FieldLevelEncryptionConfig != nil { + if err := s.FieldLevelEncryptionConfig.Validate(); err != nil { + invalidParams.AddNested("FieldLevelEncryptionConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFieldLevelEncryptionConfig sets the FieldLevelEncryptionConfig field's value. +func (s *CreateFieldLevelEncryptionConfigInput) SetFieldLevelEncryptionConfig(v *FieldLevelEncryptionConfig) *CreateFieldLevelEncryptionConfigInput { + s.FieldLevelEncryptionConfig = v + return s +} + +type CreateFieldLevelEncryptionConfigOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryption"` + + // The current version of the field level encryption configuration. For example: + // E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Returned when you create a new field-level encryption configuration. + FieldLevelEncryption *FieldLevelEncryption `type:"structure"` + + // The fully qualified URI of the new configuration resource just created. For + // example: https://cloudfront.amazonaws.com/2010-11-01/field-level-encryption-config/EDFDVBD632BHDS5. + Location *string `location:"header" locationName:"Location" type:"string"` +} + +// String returns the string representation +func (s CreateFieldLevelEncryptionConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFieldLevelEncryptionConfigOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *CreateFieldLevelEncryptionConfigOutput) SetETag(v string) *CreateFieldLevelEncryptionConfigOutput { + s.ETag = &v + return s +} + +// SetFieldLevelEncryption sets the FieldLevelEncryption field's value. +func (s *CreateFieldLevelEncryptionConfigOutput) SetFieldLevelEncryption(v *FieldLevelEncryption) *CreateFieldLevelEncryptionConfigOutput { + s.FieldLevelEncryption = v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreateFieldLevelEncryptionConfigOutput) SetLocation(v string) *CreateFieldLevelEncryptionConfigOutput { + s.Location = &v + return s +} + +type CreateFieldLevelEncryptionProfileInput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionProfileConfig"` + + // The request to create a field-level encryption profile. + // + // FieldLevelEncryptionProfileConfig is a required field + FieldLevelEncryptionProfileConfig *FieldLevelEncryptionProfileConfig `locationName:"FieldLevelEncryptionProfileConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreateFieldLevelEncryptionProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFieldLevelEncryptionProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateFieldLevelEncryptionProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateFieldLevelEncryptionProfileInput"} + if s.FieldLevelEncryptionProfileConfig == nil { + invalidParams.Add(request.NewErrParamRequired("FieldLevelEncryptionProfileConfig")) + } + if s.FieldLevelEncryptionProfileConfig != nil { + if err := s.FieldLevelEncryptionProfileConfig.Validate(); err != nil { + invalidParams.AddNested("FieldLevelEncryptionProfileConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFieldLevelEncryptionProfileConfig sets the FieldLevelEncryptionProfileConfig field's value. +func (s *CreateFieldLevelEncryptionProfileInput) SetFieldLevelEncryptionProfileConfig(v *FieldLevelEncryptionProfileConfig) *CreateFieldLevelEncryptionProfileInput { + s.FieldLevelEncryptionProfileConfig = v + return s +} + +type CreateFieldLevelEncryptionProfileOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionProfile"` + + // The current version of the field level encryption profile. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Returned when you create a new field-level encryption profile. + FieldLevelEncryptionProfile *FieldLevelEncryptionProfile `type:"structure"` + + // The fully qualified URI of the new profile resource just created. For example: + // https://cloudfront.amazonaws.com/2010-11-01/field-level-encryption-profile/EDFDVBD632BHDS5. + Location *string `location:"header" locationName:"Location" type:"string"` +} + +// String returns the string representation +func (s CreateFieldLevelEncryptionProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFieldLevelEncryptionProfileOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *CreateFieldLevelEncryptionProfileOutput) SetETag(v string) *CreateFieldLevelEncryptionProfileOutput { + s.ETag = &v + return s +} + +// SetFieldLevelEncryptionProfile sets the FieldLevelEncryptionProfile field's value. +func (s *CreateFieldLevelEncryptionProfileOutput) SetFieldLevelEncryptionProfile(v *FieldLevelEncryptionProfile) *CreateFieldLevelEncryptionProfileOutput { + s.FieldLevelEncryptionProfile = v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreateFieldLevelEncryptionProfileOutput) SetLocation(v string) *CreateFieldLevelEncryptionProfileOutput { + s.Location = &v + return s +} + +// The request to create an invalidation. +type CreateInvalidationInput struct { + _ struct{} `type:"structure" payload:"InvalidationBatch"` + + // The distribution's id. + // + // DistributionId is a required field + DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"` + + // The batch information for the invalidation. + // + // InvalidationBatch is a required field + InvalidationBatch *InvalidationBatch `locationName:"InvalidationBatch" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreateInvalidationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInvalidationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateInvalidationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateInvalidationInput"} + if s.DistributionId == nil { + invalidParams.Add(request.NewErrParamRequired("DistributionId")) + } + if s.InvalidationBatch == nil { + invalidParams.Add(request.NewErrParamRequired("InvalidationBatch")) + } + if s.InvalidationBatch != nil { + if err := s.InvalidationBatch.Validate(); err != nil { + invalidParams.AddNested("InvalidationBatch", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDistributionId sets the DistributionId field's value. +func (s *CreateInvalidationInput) SetDistributionId(v string) *CreateInvalidationInput { + s.DistributionId = &v + return s +} + +// SetInvalidationBatch sets the InvalidationBatch field's value. +func (s *CreateInvalidationInput) SetInvalidationBatch(v *InvalidationBatch) *CreateInvalidationInput { + s.InvalidationBatch = v + return s +} + +// The returned result of the corresponding request. +type CreateInvalidationOutput struct { + _ struct{} `type:"structure" payload:"Invalidation"` + + // The invalidation's information. + Invalidation *Invalidation `type:"structure"` + + // The fully qualified URI of the distribution and invalidation batch request, + // including the Invalidation ID. + Location *string `location:"header" locationName:"Location" type:"string"` +} + +// String returns the string representation +func (s CreateInvalidationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInvalidationOutput) GoString() string { + return s.String() +} + +// SetInvalidation sets the Invalidation field's value. +func (s *CreateInvalidationOutput) SetInvalidation(v *Invalidation) *CreateInvalidationOutput { + s.Invalidation = v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreateInvalidationOutput) SetLocation(v string) *CreateInvalidationOutput { + s.Location = &v + return s +} + +type CreatePublicKeyInput struct { + _ struct{} `type:"structure" payload:"PublicKeyConfig"` + + // The request to add a public key to CloudFront. + // + // PublicKeyConfig is a required field + PublicKeyConfig *PublicKeyConfig `locationName:"PublicKeyConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreatePublicKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePublicKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreatePublicKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePublicKeyInput"} + if s.PublicKeyConfig == nil { + invalidParams.Add(request.NewErrParamRequired("PublicKeyConfig")) + } + if s.PublicKeyConfig != nil { + if err := s.PublicKeyConfig.Validate(); err != nil { + invalidParams.AddNested("PublicKeyConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPublicKeyConfig sets the PublicKeyConfig field's value. 
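As a rough sketch of how `CreateInvalidationInput` and its `Validate()` method fit together: the `InvalidationBatch`, `Paths`, and `CallerReference` shapes used below come from the same SDK package but are not reproduced in this excerpt, so treat them as assumptions; the distribution ID and path are hypothetical.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	// Assumed shape of InvalidationBatch/Paths (defined elsewhere in this package):
	// a list of object paths plus a unique CallerReference.
	batch := &cloudfront.InvalidationBatch{
		CallerReference: aws.String("example-2018-06-18-001"), // hypothetical
		Paths: &cloudfront.Paths{
			Quantity: aws.Int64(1),
			Items:    []*string{aws.String("/index.html")},
		},
	}

	input := (&cloudfront.CreateInvalidationInput{}).
		SetDistributionId("EDFDVBD632BHDS5"). // hypothetical distribution ID
		SetInvalidationBatch(batch)

	// Validate surfaces missing required fields before the request is sent.
	if err := input.Validate(); err != nil {
		fmt.Println("invalid CreateInvalidationInput:", err)
		return
	}
	fmt.Println(input)
}
```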
+func (s *CreatePublicKeyInput) SetPublicKeyConfig(v *PublicKeyConfig) *CreatePublicKeyInput { + s.PublicKeyConfig = v + return s +} + +type CreatePublicKeyOutput struct { + _ struct{} `type:"structure" payload:"PublicKey"` + + // The current version of the public key. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // The fully qualified URI of the new public key resource just created. For + // example: https://cloudfront.amazonaws.com/2010-11-01/cloudfront-public-key/EDFDVBD632BHDS5. + Location *string `location:"header" locationName:"Location" type:"string"` + + // Returned when you add a public key. + PublicKey *PublicKey `type:"structure"` +} + +// String returns the string representation +func (s CreatePublicKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePublicKeyOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *CreatePublicKeyOutput) SetETag(v string) *CreatePublicKeyOutput { + s.ETag = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreatePublicKeyOutput) SetLocation(v string) *CreatePublicKeyOutput { + s.Location = &v + return s +} + +// SetPublicKey sets the PublicKey field's value. +func (s *CreatePublicKeyOutput) SetPublicKey(v *PublicKey) *CreatePublicKeyOutput { + s.PublicKey = v + return s +} + +// The request to create a new streaming distribution. +type CreateStreamingDistributionInput struct { + _ struct{} `type:"structure" payload:"StreamingDistributionConfig"` + + // The streaming distribution's configuration information. + // + // StreamingDistributionConfig is a required field + StreamingDistributionConfig *StreamingDistributionConfig `locationName:"StreamingDistributionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreateStreamingDistributionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStreamingDistributionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateStreamingDistributionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateStreamingDistributionInput"} + if s.StreamingDistributionConfig == nil { + invalidParams.Add(request.NewErrParamRequired("StreamingDistributionConfig")) + } + if s.StreamingDistributionConfig != nil { + if err := s.StreamingDistributionConfig.Validate(); err != nil { + invalidParams.AddNested("StreamingDistributionConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStreamingDistributionConfig sets the StreamingDistributionConfig field's value. +func (s *CreateStreamingDistributionInput) SetStreamingDistributionConfig(v *StreamingDistributionConfig) *CreateStreamingDistributionInput { + s.StreamingDistributionConfig = v + return s +} + +// The returned result of the corresponding request. +type CreateStreamingDistributionOutput struct { + _ struct{} `type:"structure" payload:"StreamingDistribution"` + + // The current version of the streaming distribution created. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // The fully qualified URI of the new streaming distribution resource just created. 
+ // For example: https://cloudfront.amazonaws.com/2010-11-01/streaming-distribution/EGTXBD79H29TRA8. + Location *string `location:"header" locationName:"Location" type:"string"` + + // The streaming distribution's information. + StreamingDistribution *StreamingDistribution `type:"structure"` +} + +// String returns the string representation +func (s CreateStreamingDistributionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStreamingDistributionOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *CreateStreamingDistributionOutput) SetETag(v string) *CreateStreamingDistributionOutput { + s.ETag = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreateStreamingDistributionOutput) SetLocation(v string) *CreateStreamingDistributionOutput { + s.Location = &v + return s +} + +// SetStreamingDistribution sets the StreamingDistribution field's value. +func (s *CreateStreamingDistributionOutput) SetStreamingDistribution(v *StreamingDistribution) *CreateStreamingDistributionOutput { + s.StreamingDistribution = v + return s +} + +// The request to create a new streaming distribution with tags. +type CreateStreamingDistributionWithTagsInput struct { + _ struct{} `type:"structure" payload:"StreamingDistributionConfigWithTags"` + + // The streaming distribution's configuration information. + // + // StreamingDistributionConfigWithTags is a required field + StreamingDistributionConfigWithTags *StreamingDistributionConfigWithTags `locationName:"StreamingDistributionConfigWithTags" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s CreateStreamingDistributionWithTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStreamingDistributionWithTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateStreamingDistributionWithTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateStreamingDistributionWithTagsInput"} + if s.StreamingDistributionConfigWithTags == nil { + invalidParams.Add(request.NewErrParamRequired("StreamingDistributionConfigWithTags")) + } + if s.StreamingDistributionConfigWithTags != nil { + if err := s.StreamingDistributionConfigWithTags.Validate(); err != nil { + invalidParams.AddNested("StreamingDistributionConfigWithTags", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStreamingDistributionConfigWithTags sets the StreamingDistributionConfigWithTags field's value. +func (s *CreateStreamingDistributionWithTagsInput) SetStreamingDistributionConfigWithTags(v *StreamingDistributionConfigWithTags) *CreateStreamingDistributionWithTagsInput { + s.StreamingDistributionConfigWithTags = v + return s +} + +// The returned result of the corresponding request. +type CreateStreamingDistributionWithTagsOutput struct { + _ struct{} `type:"structure" payload:"StreamingDistribution"` + + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // The fully qualified URI of the new streaming distribution resource just created. + // For example: https://cloudfront.amazonaws.com/2010-11-01/streaming-distribution/EGTXBD79H29TRA8. 
+ Location *string `location:"header" locationName:"Location" type:"string"` + + // The streaming distribution's information. + StreamingDistribution *StreamingDistribution `type:"structure"` +} + +// String returns the string representation +func (s CreateStreamingDistributionWithTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStreamingDistributionWithTagsOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *CreateStreamingDistributionWithTagsOutput) SetETag(v string) *CreateStreamingDistributionWithTagsOutput { + s.ETag = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *CreateStreamingDistributionWithTagsOutput) SetLocation(v string) *CreateStreamingDistributionWithTagsOutput { + s.Location = &v + return s +} + +// SetStreamingDistribution sets the StreamingDistribution field's value. +func (s *CreateStreamingDistributionWithTagsOutput) SetStreamingDistribution(v *StreamingDistribution) *CreateStreamingDistributionWithTagsOutput { + s.StreamingDistribution = v + return s +} + +// A complex type that controls: +// +// * Whether CloudFront replaces HTTP status codes in the 4xx and 5xx range +// with custom error messages before returning the response to the viewer. +// +// +// * How long CloudFront caches HTTP status codes in the 4xx and 5xx range. +// +// For more information about custom error pages, see Customizing Error Responses +// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) +// in the Amazon CloudFront Developer Guide. +type CustomErrorResponse struct { + _ struct{} `type:"structure"` + + // The minimum amount of time, in seconds, that you want CloudFront to cache + // the HTTP status code specified in ErrorCode. When this time period has elapsed, + // CloudFront queries your origin to see whether the problem that caused the + // error has been resolved and the requested object is now available. + // + // If you don't want to specify a value, include an empty element, , + // in the XML document. + // + // For more information, see Customizing Error Responses (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) + // in the Amazon CloudFront Developer Guide. + ErrorCachingMinTTL *int64 `type:"long"` + + // The HTTP status code for which you want to specify a custom error page and/or + // a caching duration. + // + // ErrorCode is a required field + ErrorCode *int64 `type:"integer" required:"true"` + + // The HTTP status code that you want CloudFront to return to the viewer along + // with the custom error page. There are a variety of reasons that you might + // want CloudFront to return a status code different from the status code that + // your origin returned to CloudFront, for example: + // + // * Some Internet devices (some firewalls and corporate proxies, for example) + // intercept HTTP 4xx and 5xx and prevent the response from being returned + // to the viewer. If you substitute 200, the response typically won't be + // intercepted. + // + // * If you don't care about distinguishing among different client errors + // or server errors, you can specify 400 or 500 as the ResponseCode for all + // 4xx or 5xx errors. + // + // * You might want to return a 200 status code (OK) and static website so + // your customers don't know that your website is down. 
+ // + // If you specify a value for ResponseCode, you must also specify a value for + // ResponsePagePath. If you don't want to specify a value, include an empty + // element, , in the XML document. + ResponseCode *string `type:"string"` + + // The path to the custom error page that you want CloudFront to return to a + // viewer when your origin returns the HTTP status code specified by ErrorCode, + // for example, /4xx-errors/403-forbidden.html. If you want to store your objects + // and your custom error pages in different locations, your distribution must + // include a cache behavior for which the following is true: + // + // * The value of PathPattern matches the path to your custom error messages. + // For example, suppose you saved custom error pages for 4xx errors in an + // Amazon S3 bucket in a directory named /4xx-errors. Your distribution must + // include a cache behavior for which the path pattern routes requests for + // your custom error pages to that location, for example, /4xx-errors/*. + // + // + // * The value of TargetOriginId specifies the value of the ID element for + // the origin that contains your custom error pages. + // + // If you specify a value for ResponsePagePath, you must also specify a value + // for ResponseCode. If you don't want to specify a value, include an empty + // element, , in the XML document. + // + // We recommend that you store custom error pages in an Amazon S3 bucket. If + // you store custom error pages on an HTTP server and the server starts to return + // 5xx errors, CloudFront can't get the files that you want to return to viewers + // because the origin server is unavailable. + ResponsePagePath *string `type:"string"` +} + +// String returns the string representation +func (s CustomErrorResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CustomErrorResponse) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CustomErrorResponse) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CustomErrorResponse"} + if s.ErrorCode == nil { + invalidParams.Add(request.NewErrParamRequired("ErrorCode")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetErrorCachingMinTTL sets the ErrorCachingMinTTL field's value. +func (s *CustomErrorResponse) SetErrorCachingMinTTL(v int64) *CustomErrorResponse { + s.ErrorCachingMinTTL = &v + return s +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *CustomErrorResponse) SetErrorCode(v int64) *CustomErrorResponse { + s.ErrorCode = &v + return s +} + +// SetResponseCode sets the ResponseCode field's value. +func (s *CustomErrorResponse) SetResponseCode(v string) *CustomErrorResponse { + s.ResponseCode = &v + return s +} + +// SetResponsePagePath sets the ResponsePagePath field's value. +func (s *CustomErrorResponse) SetResponsePagePath(v string) *CustomErrorResponse { + s.ResponsePagePath = &v + return s +} + +// A complex type that controls: +// +// * Whether CloudFront replaces HTTP status codes in the 4xx and 5xx range +// with custom error messages before returning the response to the viewer. +// +// * How long CloudFront caches HTTP status codes in the 4xx and 5xx range. 
+// +// For more information about custom error pages, see Customizing Error Responses +// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) +// in the Amazon CloudFront Developer Guide. +type CustomErrorResponses struct { + _ struct{} `type:"structure"` + + // A complex type that contains a CustomErrorResponse element for each HTTP + // status code for which you want to specify a custom error page and/or a caching + // duration. + Items []*CustomErrorResponse `locationNameList:"CustomErrorResponse" type:"list"` + + // The number of HTTP status codes for which you want to specify a custom error + // page and/or a caching duration. If Quantity is 0, you can omit Items. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s CustomErrorResponses) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CustomErrorResponses) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CustomErrorResponses) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CustomErrorResponses"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetItems sets the Items field's value. +func (s *CustomErrorResponses) SetItems(v []*CustomErrorResponse) *CustomErrorResponses { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *CustomErrorResponses) SetQuantity(v int64) *CustomErrorResponses { + s.Quantity = &v + return s +} + +// A complex type that contains the list of Custom Headers for each origin. +type CustomHeaders struct { + _ struct{} `type:"structure"` + + // Optional: A list that contains one OriginCustomHeader element for each custom + // header that you want CloudFront to forward to the origin. If Quantity is + // 0, omit Items. + Items []*OriginCustomHeader `locationNameList:"OriginCustomHeader" type:"list"` + + // The number of custom headers, if any, for this distribution. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s CustomHeaders) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CustomHeaders) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CustomHeaders) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CustomHeaders"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetItems sets the Items field's value. 
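Tying the `CustomErrorResponse` fields above together, here is a minimal, illustrative sketch (status codes, page path, and TTL are hypothetical) of a distribution-level error-page mapping built with the generated setters:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	// Serve a custom page for 404s, return 200 to the viewer, and cache the
	// error response for five minutes.
	resp := (&cloudfront.CustomErrorResponse{}).
		SetErrorCode(404).
		SetResponseCode("200").
		SetResponsePagePath("/4xx-errors/404-not-found.html").
		SetErrorCachingMinTTL(300)

	responses := (&cloudfront.CustomErrorResponses{}).
		SetQuantity(1).
		SetItems([]*cloudfront.CustomErrorResponse{resp})

	if err := responses.Validate(); err != nil {
		fmt.Println("invalid CustomErrorResponses:", err)
		return
	}
	fmt.Println(responses)
}
```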
+func (s *CustomHeaders) SetItems(v []*OriginCustomHeader) *CustomHeaders { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *CustomHeaders) SetQuantity(v int64) *CustomHeaders { + s.Quantity = &v + return s +} + +// A customer origin or an Amazon S3 bucket configured as a website endpoint. +type CustomOriginConfig struct { + _ struct{} `type:"structure"` + + // The HTTP port the custom origin listens on. + // + // HTTPPort is a required field + HTTPPort *int64 `type:"integer" required:"true"` + + // The HTTPS port the custom origin listens on. + // + // HTTPSPort is a required field + HTTPSPort *int64 `type:"integer" required:"true"` + + // You can create a custom keep-alive timeout. All timeout units are in seconds. + // The default keep-alive timeout is 5 seconds, but you can configure custom + // timeout lengths using the CloudFront API. The minimum timeout length is 1 + // second; the maximum is 60 seconds. + // + // If you need to increase the maximum time limit, contact the AWS Support Center + // (https://console.aws.amazon.com/support/home#/). + OriginKeepaliveTimeout *int64 `type:"integer"` + + // The origin protocol policy to apply to your origin. + // + // OriginProtocolPolicy is a required field + OriginProtocolPolicy *string `type:"string" required:"true" enum:"OriginProtocolPolicy"` + + // You can create a custom origin read timeout. All timeout units are in seconds. + // The default origin read timeout is 30 seconds, but you can configure custom + // timeout lengths using the CloudFront API. The minimum timeout length is 4 + // seconds; the maximum is 60 seconds. + // + // If you need to increase the maximum time limit, contact the AWS Support Center + // (https://console.aws.amazon.com/support/home#/). + OriginReadTimeout *int64 `type:"integer"` + + // The SSL/TLS protocols that you want CloudFront to use when communicating + // with your origin over HTTPS. + OriginSslProtocols *OriginSslProtocols `type:"structure"` +} + +// String returns the string representation +func (s CustomOriginConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CustomOriginConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CustomOriginConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CustomOriginConfig"} + if s.HTTPPort == nil { + invalidParams.Add(request.NewErrParamRequired("HTTPPort")) + } + if s.HTTPSPort == nil { + invalidParams.Add(request.NewErrParamRequired("HTTPSPort")) + } + if s.OriginProtocolPolicy == nil { + invalidParams.Add(request.NewErrParamRequired("OriginProtocolPolicy")) + } + if s.OriginSslProtocols != nil { + if err := s.OriginSslProtocols.Validate(); err != nil { + invalidParams.AddNested("OriginSslProtocols", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetHTTPPort sets the HTTPPort field's value. +func (s *CustomOriginConfig) SetHTTPPort(v int64) *CustomOriginConfig { + s.HTTPPort = &v + return s +} + +// SetHTTPSPort sets the HTTPSPort field's value. +func (s *CustomOriginConfig) SetHTTPSPort(v int64) *CustomOriginConfig { + s.HTTPSPort = &v + return s +} + +// SetOriginKeepaliveTimeout sets the OriginKeepaliveTimeout field's value. 
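A minimal sketch of a custom origin definition using the `CustomOriginConfig` fields documented above; the port numbers are conventional defaults, and the `"https-only"` string is an assumption about the `OriginProtocolPolicy` enum, which is defined elsewhere in this package:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	origin := (&cloudfront.CustomOriginConfig{}).
		SetHTTPPort(80).
		SetHTTPSPort(443).
		SetOriginProtocolPolicy("https-only"). // assumed enum value
		SetOriginKeepaliveTimeout(5).          // default documented above
		SetOriginReadTimeout(30)               // default documented above

	// HTTPPort, HTTPSPort, and OriginProtocolPolicy are the required fields.
	if err := origin.Validate(); err != nil {
		fmt.Println("invalid CustomOriginConfig:", err)
		return
	}
	fmt.Println(origin)
}
```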
+func (s *CustomOriginConfig) SetOriginKeepaliveTimeout(v int64) *CustomOriginConfig { + s.OriginKeepaliveTimeout = &v + return s +} + +// SetOriginProtocolPolicy sets the OriginProtocolPolicy field's value. +func (s *CustomOriginConfig) SetOriginProtocolPolicy(v string) *CustomOriginConfig { + s.OriginProtocolPolicy = &v + return s +} + +// SetOriginReadTimeout sets the OriginReadTimeout field's value. +func (s *CustomOriginConfig) SetOriginReadTimeout(v int64) *CustomOriginConfig { + s.OriginReadTimeout = &v + return s +} + +// SetOriginSslProtocols sets the OriginSslProtocols field's value. +func (s *CustomOriginConfig) SetOriginSslProtocols(v *OriginSslProtocols) *CustomOriginConfig { + s.OriginSslProtocols = v + return s +} + +// A complex type that describes the default cache behavior if you don't specify +// a CacheBehavior element or if files don't match any of the values of PathPattern +// in CacheBehavior elements. You must create exactly one default cache behavior. +type DefaultCacheBehavior struct { + _ struct{} `type:"structure"` + + // A complex type that controls which HTTP methods CloudFront processes and + // forwards to your Amazon S3 bucket or your custom origin. There are three + // choices: + // + // * CloudFront forwards only GET and HEAD requests. + // + // * CloudFront forwards only GET, HEAD, and OPTIONS requests. + // + // * CloudFront forwards GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE + // requests. + // + // If you pick the third choice, you may need to restrict access to your Amazon + // S3 bucket or to your custom origin so users can't perform operations that + // you don't want them to. For example, you might not want users to have permissions + // to delete objects from your origin. + AllowedMethods *AllowedMethods `type:"structure"` + + // Whether you want CloudFront to automatically compress certain files for this + // cache behavior. If so, specify true; if not, specify false. For more information, + // see Serving Compressed Files (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html) + // in the Amazon CloudFront Developer Guide. + Compress *bool `type:"boolean"` + + // The default amount of time that you want objects to stay in CloudFront caches + // before CloudFront forwards another request to your origin to determine whether + // the object has been updated. The value that you specify applies only when + // your origin does not add HTTP headers such as Cache-Control max-age, Cache-Control + // s-maxage, and Expires to objects. For more information, see Specifying How + // Long Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) + // in the Amazon CloudFront Developer Guide. + DefaultTTL *int64 `type:"long"` + + // The value of ID for the field-level encryption configuration that you want + // CloudFront to use for encrypting specific fields of data for a cache behavior + // or for the default cache behavior in your distribution. + FieldLevelEncryptionId *string `type:"string"` + + // A complex type that specifies how CloudFront handles query strings and cookies. + // + // ForwardedValues is a required field ForwardedValues *ForwardedValues `type:"structure" required:"true"` - // A complex type that contains zero or more Lambda function associations for - // a cache behavior. 
- LambdaFunctionAssociations *LambdaFunctionAssociations `type:"structure"` + // A complex type that contains zero or more Lambda function associations for + // a cache behavior. + LambdaFunctionAssociations *LambdaFunctionAssociations `type:"structure"` + + MaxTTL *int64 `type:"long"` + + // The minimum amount of time that you want objects to stay in CloudFront caches + // before CloudFront forwards another request to your origin to determine whether + // the object has been updated. For more information, see Specifying How Long + // Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) + // in the Amazon Amazon CloudFront Developer Guide. + // + // You must specify 0 for MinTTL if you configure CloudFront to forward all + // headers to your origin (under Headers, if you specify 1 for Quantity and + // * for Name). + // + // MinTTL is a required field + MinTTL *int64 `type:"long" required:"true"` + + // Indicates whether you want to distribute media files in the Microsoft Smooth + // Streaming format using the origin that is associated with this cache behavior. + // If so, specify true; if not, specify false. If you specify true for SmoothStreaming, + // you can still distribute other content using this cache behavior if the content + // matches the value of PathPattern. + SmoothStreaming *bool `type:"boolean"` + + // The value of ID for the origin that you want CloudFront to route requests + // to when a request matches the path pattern either for a cache behavior or + // for the default cache behavior in your distribution. + // + // TargetOriginId is a required field + TargetOriginId *string `type:"string" required:"true"` + + // A complex type that specifies the AWS accounts, if any, that you want to + // allow to create signed URLs for private content. + // + // If you want to require signed URLs in requests for objects in the target + // origin that match the PathPattern for this cache behavior, specify true for + // Enabled, and specify the applicable values for Quantity and Items. For more + // information, see Serving Private Content through CloudFront (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) + // in the Amazon Amazon CloudFront Developer Guide. + // + // If you don't want to require signed URLs in requests for objects that match + // PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. + // + // To add, change, or remove one or more trusted signers, change Enabled to + // true (if it's currently false), change Quantity as applicable, and specify + // all of the trusted signers that you want to include in the updated distribution. + // + // TrustedSigners is a required field + TrustedSigners *TrustedSigners `type:"structure" required:"true"` + + // The protocol that viewers can use to access the files in the origin specified + // by TargetOriginId when a request matches the path pattern in PathPattern. + // You can specify the following options: + // + // * allow-all: Viewers can use HTTP or HTTPS. + // + // * redirect-to-https: If a viewer submits an HTTP request, CloudFront returns + // an HTTP status code of 301 (Moved Permanently) to the viewer along with + // the HTTPS URL. The viewer then resubmits the request using the new URL. + // + // * https-only: If a viewer sends an HTTP request, CloudFront returns an + // HTTP status code of 403 (Forbidden). 
+ // + // For more information about requiring the HTTPS protocol, see Using an HTTPS + // Connection to Access Your Objects (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html) + // in the Amazon CloudFront Developer Guide. + // + // The only way to guarantee that viewers retrieve an object that was fetched + // from the origin using HTTPS is never to use any other protocol to fetch the + // object. If you have recently changed from HTTP to HTTPS, we recommend that + // you clear your objects' cache because cached objects are protocol agnostic. + // That means that an edge location will return an object from the cache regardless + // of whether the current request protocol matches the protocol used previously. + // For more information, see Specifying How Long Objects and Errors Stay in + // a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) + // in the Amazon CloudFront Developer Guide. + // + // ViewerProtocolPolicy is a required field + ViewerProtocolPolicy *string `type:"string" required:"true" enum:"ViewerProtocolPolicy"` +} + +// String returns the string representation +func (s DefaultCacheBehavior) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DefaultCacheBehavior) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DefaultCacheBehavior) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DefaultCacheBehavior"} + if s.ForwardedValues == nil { + invalidParams.Add(request.NewErrParamRequired("ForwardedValues")) + } + if s.MinTTL == nil { + invalidParams.Add(request.NewErrParamRequired("MinTTL")) + } + if s.TargetOriginId == nil { + invalidParams.Add(request.NewErrParamRequired("TargetOriginId")) + } + if s.TrustedSigners == nil { + invalidParams.Add(request.NewErrParamRequired("TrustedSigners")) + } + if s.ViewerProtocolPolicy == nil { + invalidParams.Add(request.NewErrParamRequired("ViewerProtocolPolicy")) + } + if s.AllowedMethods != nil { + if err := s.AllowedMethods.Validate(); err != nil { + invalidParams.AddNested("AllowedMethods", err.(request.ErrInvalidParams)) + } + } + if s.ForwardedValues != nil { + if err := s.ForwardedValues.Validate(); err != nil { + invalidParams.AddNested("ForwardedValues", err.(request.ErrInvalidParams)) + } + } + if s.LambdaFunctionAssociations != nil { + if err := s.LambdaFunctionAssociations.Validate(); err != nil { + invalidParams.AddNested("LambdaFunctionAssociations", err.(request.ErrInvalidParams)) + } + } + if s.TrustedSigners != nil { + if err := s.TrustedSigners.Validate(); err != nil { + invalidParams.AddNested("TrustedSigners", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllowedMethods sets the AllowedMethods field's value. +func (s *DefaultCacheBehavior) SetAllowedMethods(v *AllowedMethods) *DefaultCacheBehavior { + s.AllowedMethods = v + return s +} + +// SetCompress sets the Compress field's value. +func (s *DefaultCacheBehavior) SetCompress(v bool) *DefaultCacheBehavior { + s.Compress = &v + return s +} + +// SetDefaultTTL sets the DefaultTTL field's value. +func (s *DefaultCacheBehavior) SetDefaultTTL(v int64) *DefaultCacheBehavior { + s.DefaultTTL = &v + return s +} + +// SetFieldLevelEncryptionId sets the FieldLevelEncryptionId field's value. 
+func (s *DefaultCacheBehavior) SetFieldLevelEncryptionId(v string) *DefaultCacheBehavior { + s.FieldLevelEncryptionId = &v + return s +} + +// SetForwardedValues sets the ForwardedValues field's value. +func (s *DefaultCacheBehavior) SetForwardedValues(v *ForwardedValues) *DefaultCacheBehavior { + s.ForwardedValues = v + return s +} + +// SetLambdaFunctionAssociations sets the LambdaFunctionAssociations field's value. +func (s *DefaultCacheBehavior) SetLambdaFunctionAssociations(v *LambdaFunctionAssociations) *DefaultCacheBehavior { + s.LambdaFunctionAssociations = v + return s +} + +// SetMaxTTL sets the MaxTTL field's value. +func (s *DefaultCacheBehavior) SetMaxTTL(v int64) *DefaultCacheBehavior { + s.MaxTTL = &v + return s +} + +// SetMinTTL sets the MinTTL field's value. +func (s *DefaultCacheBehavior) SetMinTTL(v int64) *DefaultCacheBehavior { + s.MinTTL = &v + return s +} + +// SetSmoothStreaming sets the SmoothStreaming field's value. +func (s *DefaultCacheBehavior) SetSmoothStreaming(v bool) *DefaultCacheBehavior { + s.SmoothStreaming = &v + return s +} + +// SetTargetOriginId sets the TargetOriginId field's value. +func (s *DefaultCacheBehavior) SetTargetOriginId(v string) *DefaultCacheBehavior { + s.TargetOriginId = &v + return s +} + +// SetTrustedSigners sets the TrustedSigners field's value. +func (s *DefaultCacheBehavior) SetTrustedSigners(v *TrustedSigners) *DefaultCacheBehavior { + s.TrustedSigners = v + return s +} + +// SetViewerProtocolPolicy sets the ViewerProtocolPolicy field's value. +func (s *DefaultCacheBehavior) SetViewerProtocolPolicy(v string) *DefaultCacheBehavior { + s.ViewerProtocolPolicy = &v + return s +} + +// Deletes a origin access identity. +type DeleteCloudFrontOriginAccessIdentityInput struct { + _ struct{} `type:"structure"` + + // The origin access identity's ID. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + + // The value of the ETag header you received from a previous GET or PUT request. + // For example: E2QWRUHAPOMQZL. + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` +} + +// String returns the string representation +func (s DeleteCloudFrontOriginAccessIdentityInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCloudFrontOriginAccessIdentityInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteCloudFrontOriginAccessIdentityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteCloudFrontOriginAccessIdentityInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *DeleteCloudFrontOriginAccessIdentityInput) SetId(v string) *DeleteCloudFrontOriginAccessIdentityInput { + s.Id = &v + return s +} + +// SetIfMatch sets the IfMatch field's value. 
+func (s *DeleteCloudFrontOriginAccessIdentityInput) SetIfMatch(v string) *DeleteCloudFrontOriginAccessIdentityInput { + s.IfMatch = &v + return s +} + +type DeleteCloudFrontOriginAccessIdentityOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteCloudFrontOriginAccessIdentityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCloudFrontOriginAccessIdentityOutput) GoString() string { + return s.String() +} + +// This action deletes a web distribution. To delete a web distribution using +// the CloudFront API, perform the following steps. +// +// To delete a web distribution using the CloudFront API: +// +// Disable the web distribution +// +// Submit a GET Distribution Config request to get the current configuration +// and the Etag header for the distribution. +// +// Update the XML document that was returned in the response to your GET Distribution +// Config request to change the value of Enabled to false. +// +// Submit a PUT Distribution Config request to update the configuration for +// your distribution. In the request body, include the XML document that you +// updated in Step 3. Set the value of the HTTP If-Match header to the value +// of the ETag header that CloudFront returned when you submitted the GET Distribution +// Config request in Step 2. +// +// Review the response to the PUT Distribution Config request to confirm that +// the distribution was successfully disabled. +// +// Submit a GET Distribution request to confirm that your changes have propagated. +// When propagation is complete, the value of Status is Deployed. +// +// Submit a DELETE Distribution request. Set the value of the HTTP If-Match +// header to the value of the ETag header that CloudFront returned when you +// submitted the GET Distribution Config request in Step 6. +// +// Review the response to your DELETE Distribution request to confirm that the +// distribution was successfully deleted. +// +// For information about deleting a distribution using the CloudFront console, +// see Deleting a Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowToDeleteDistribution.html) +// in the Amazon CloudFront Developer Guide. +type DeleteDistributionInput struct { + _ struct{} `type:"structure"` + + // The distribution ID. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + + // The value of the ETag header that you received when you disabled the distribution. + // For example: E2QWRUHAPOMQZL. + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` +} + +// String returns the string representation +func (s DeleteDistributionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDistributionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDistributionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDistributionInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *DeleteDistributionInput) SetId(v string) *DeleteDistributionInput { + s.Id = &v + return s +} + +// SetIfMatch sets the IfMatch field's value. 
+func (s *DeleteDistributionInput) SetIfMatch(v string) *DeleteDistributionInput { + s.IfMatch = &v + return s +} + +type DeleteDistributionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDistributionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDistributionOutput) GoString() string { + return s.String() +} + +type DeleteFieldLevelEncryptionConfigInput struct { + _ struct{} `type:"structure"` + + // The ID of the configuration you want to delete from CloudFront. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + + // The value of the ETag header that you received when retrieving the configuration + // identity to delete. For example: E2QWRUHAPOMQZL. + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` +} + +// String returns the string representation +func (s DeleteFieldLevelEncryptionConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFieldLevelEncryptionConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteFieldLevelEncryptionConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteFieldLevelEncryptionConfigInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *DeleteFieldLevelEncryptionConfigInput) SetId(v string) *DeleteFieldLevelEncryptionConfigInput { + s.Id = &v + return s +} - // The maximum amount of time that you want objects to stay in CloudFront caches - // before CloudFront forwards another request to your origin to determine whether - // the object has been updated. The value that you specify applies only when - // your origin adds HTTP headers such as Cache-Control max-age, Cache-Control - // s-maxage, and Expires to objects. For more information, see Specifying How - // Long Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) - // in the Amazon CloudFront Developer Guide. - MaxTTL *int64 `type:"long"` +// SetIfMatch sets the IfMatch field's value. +func (s *DeleteFieldLevelEncryptionConfigInput) SetIfMatch(v string) *DeleteFieldLevelEncryptionConfigInput { + s.IfMatch = &v + return s +} - // The minimum amount of time that you want objects to stay in CloudFront caches - // before CloudFront forwards another request to your origin to determine whether - // the object has been updated. For more information, see Specifying How Long - // Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) - // in the Amazon Amazon CloudFront Developer Guide. - // - // You must specify 0 for MinTTL if you configure CloudFront to forward all - // headers to your origin (under Headers, if you specify 1 for Quantity and - // * for Name). - // - // MinTTL is a required field - MinTTL *int64 `type:"long" required:"true"` +type DeleteFieldLevelEncryptionConfigOutput struct { + _ struct{} `type:"structure"` +} - // The pattern (for example, images/*.jpg) that specifies which requests to - // apply the behavior to. 
When CloudFront receives a viewer request, the requested - // path is compared with path patterns in the order in which cache behaviors - // are listed in the distribution. - // - // You can optionally include a slash (/) at the beginning of the path pattern. - // For example, /images/*.jpg. CloudFront behavior is the same with or without - // the leading /. - // - // The path pattern for the default cache behavior is * and cannot be changed. - // If the request for an object does not match the path pattern for any cache - // behaviors, CloudFront applies the behavior in the default cache behavior. - // - // For more information, see Path Pattern (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesPathPattern) - // in the Amazon CloudFront Developer Guide. - // - // PathPattern is a required field - PathPattern *string `type:"string" required:"true"` +// String returns the string representation +func (s DeleteFieldLevelEncryptionConfigOutput) String() string { + return awsutil.Prettify(s) +} - // Indicates whether you want to distribute media files in the Microsoft Smooth - // Streaming format using the origin that is associated with this cache behavior. - // If so, specify true; if not, specify false. If you specify true for SmoothStreaming, - // you can still distribute other content using this cache behavior if the content - // matches the value of PathPattern. - SmoothStreaming *bool `type:"boolean"` +// GoString returns the string representation +func (s DeleteFieldLevelEncryptionConfigOutput) GoString() string { + return s.String() +} - // The value of ID for the origin that you want CloudFront to route requests - // to when a request matches the path pattern either for a cache behavior or - // for the default cache behavior. - // - // TargetOriginId is a required field - TargetOriginId *string `type:"string" required:"true"` +type DeleteFieldLevelEncryptionProfileInput struct { + _ struct{} `type:"structure"` - // A complex type that specifies the AWS accounts, if any, that you want to - // allow to create signed URLs for private content. - // - // If you want to require signed URLs in requests for objects in the target - // origin that match the PathPattern for this cache behavior, specify true for - // Enabled, and specify the applicable values for Quantity and Items. For more - // information, see Serving Private Content through CloudFront (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) - // in the Amazon Amazon CloudFront Developer Guide. - // - // If you don't want to require signed URLs in requests for objects that match - // PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. - // - // To add, change, or remove one or more trusted signers, change Enabled to - // true (if it's currently false), change Quantity as applicable, and specify - // all of the trusted signers that you want to include in the updated distribution. + // Request the ID of the profile you want to delete from CloudFront. // - // TrustedSigners is a required field - TrustedSigners *TrustedSigners `type:"structure" required:"true"` + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` - // The protocol that viewers can use to access the files in the origin specified - // by TargetOriginId when a request matches the path pattern in PathPattern. 
- // You can specify the following options: - // - // * allow-all: Viewers can use HTTP or HTTPS. - // - // * redirect-to-https: If a viewer submits an HTTP request, CloudFront returns - // an HTTP status code of 301 (Moved Permanently) to the viewer along with - // the HTTPS URL. The viewer then resubmits the request using the new URL. - // - // - // * https-only: If a viewer sends an HTTP request, CloudFront returns an - // HTTP status code of 403 (Forbidden). - // - // For more information about requiring the HTTPS protocol, see Using an HTTPS - // Connection to Access Your Objects (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html) - // in the Amazon CloudFront Developer Guide. - // - // The only way to guarantee that viewers retrieve an object that was fetched - // from the origin using HTTPS is never to use any other protocol to fetch the - // object. If you have recently changed from HTTP to HTTPS, we recommend that - // you clear your objects' cache because cached objects are protocol agnostic. - // That means that an edge location will return an object from the cache regardless - // of whether the current request protocol matches the protocol used previously. - // For more information, see Specifying How Long Objects and Errors Stay in - // a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) - // in the Amazon CloudFront Developer Guide. - // - // ViewerProtocolPolicy is a required field - ViewerProtocolPolicy *string `type:"string" required:"true" enum:"ViewerProtocolPolicy"` + // The value of the ETag header that you received when retrieving the profile + // to delete. For example: E2QWRUHAPOMQZL. + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` } // String returns the string representation -func (s CacheBehavior) String() string { +func (s DeleteFieldLevelEncryptionProfileInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CacheBehavior) GoString() string { +func (s DeleteFieldLevelEncryptionProfileInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CacheBehavior) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CacheBehavior"} - if s.ForwardedValues == nil { - invalidParams.Add(request.NewErrParamRequired("ForwardedValues")) - } - if s.MinTTL == nil { - invalidParams.Add(request.NewErrParamRequired("MinTTL")) - } - if s.PathPattern == nil { - invalidParams.Add(request.NewErrParamRequired("PathPattern")) - } - if s.TargetOriginId == nil { - invalidParams.Add(request.NewErrParamRequired("TargetOriginId")) - } - if s.TrustedSigners == nil { - invalidParams.Add(request.NewErrParamRequired("TrustedSigners")) - } - if s.ViewerProtocolPolicy == nil { - invalidParams.Add(request.NewErrParamRequired("ViewerProtocolPolicy")) - } - if s.AllowedMethods != nil { - if err := s.AllowedMethods.Validate(); err != nil { - invalidParams.AddNested("AllowedMethods", err.(request.ErrInvalidParams)) - } - } - if s.ForwardedValues != nil { - if err := s.ForwardedValues.Validate(); err != nil { - invalidParams.AddNested("ForwardedValues", err.(request.ErrInvalidParams)) - } - } - if s.LambdaFunctionAssociations != nil { - if err := s.LambdaFunctionAssociations.Validate(); err != nil { - invalidParams.AddNested("LambdaFunctionAssociations", err.(request.ErrInvalidParams)) - } - } - if s.TrustedSigners != nil { - if err := s.TrustedSigners.Validate(); err != nil { - invalidParams.AddNested("TrustedSigners", err.(request.ErrInvalidParams)) - } +func (s *DeleteFieldLevelEncryptionProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteFieldLevelEncryptionProfileInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) } if invalidParams.Len() > 0 { @@ -3625,117 +7408,123 @@ func (s *CacheBehavior) Validate() error { return nil } -// SetAllowedMethods sets the AllowedMethods field's value. -func (s *CacheBehavior) SetAllowedMethods(v *AllowedMethods) *CacheBehavior { - s.AllowedMethods = v +// SetId sets the Id field's value. +func (s *DeleteFieldLevelEncryptionProfileInput) SetId(v string) *DeleteFieldLevelEncryptionProfileInput { + s.Id = &v return s } -// SetCompress sets the Compress field's value. -func (s *CacheBehavior) SetCompress(v bool) *CacheBehavior { - s.Compress = &v +// SetIfMatch sets the IfMatch field's value. +func (s *DeleteFieldLevelEncryptionProfileInput) SetIfMatch(v string) *DeleteFieldLevelEncryptionProfileInput { + s.IfMatch = &v return s } -// SetDefaultTTL sets the DefaultTTL field's value. -func (s *CacheBehavior) SetDefaultTTL(v int64) *CacheBehavior { - s.DefaultTTL = &v - return s +type DeleteFieldLevelEncryptionProfileOutput struct { + _ struct{} `type:"structure"` } -// SetForwardedValues sets the ForwardedValues field's value. -func (s *CacheBehavior) SetForwardedValues(v *ForwardedValues) *CacheBehavior { - s.ForwardedValues = v - return s +// String returns the string representation +func (s DeleteFieldLevelEncryptionProfileOutput) String() string { + return awsutil.Prettify(s) } -// SetLambdaFunctionAssociations sets the LambdaFunctionAssociations field's value. -func (s *CacheBehavior) SetLambdaFunctionAssociations(v *LambdaFunctionAssociations) *CacheBehavior { - s.LambdaFunctionAssociations = v - return s +// GoString returns the string representation +func (s DeleteFieldLevelEncryptionProfileOutput) GoString() string { + return s.String() } -// SetMaxTTL sets the MaxTTL field's value. 
-func (s *CacheBehavior) SetMaxTTL(v int64) *CacheBehavior { - s.MaxTTL = &v - return s +type DeletePublicKeyInput struct { + _ struct{} `type:"structure"` + + // The ID of the public key you want to remove from CloudFront. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + + // The value of the ETag header that you received when retrieving the public + // key identity to delete. For example: E2QWRUHAPOMQZL. + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` } -// SetMinTTL sets the MinTTL field's value. -func (s *CacheBehavior) SetMinTTL(v int64) *CacheBehavior { - s.MinTTL = &v - return s +// String returns the string representation +func (s DeletePublicKeyInput) String() string { + return awsutil.Prettify(s) } -// SetPathPattern sets the PathPattern field's value. -func (s *CacheBehavior) SetPathPattern(v string) *CacheBehavior { - s.PathPattern = &v - return s +// GoString returns the string representation +func (s DeletePublicKeyInput) GoString() string { + return s.String() } -// SetSmoothStreaming sets the SmoothStreaming field's value. -func (s *CacheBehavior) SetSmoothStreaming(v bool) *CacheBehavior { - s.SmoothStreaming = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeletePublicKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePublicKeyInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetTargetOriginId sets the TargetOriginId field's value. -func (s *CacheBehavior) SetTargetOriginId(v string) *CacheBehavior { - s.TargetOriginId = &v +// SetId sets the Id field's value. +func (s *DeletePublicKeyInput) SetId(v string) *DeletePublicKeyInput { + s.Id = &v return s } -// SetTrustedSigners sets the TrustedSigners field's value. -func (s *CacheBehavior) SetTrustedSigners(v *TrustedSigners) *CacheBehavior { - s.TrustedSigners = v +// SetIfMatch sets the IfMatch field's value. +func (s *DeletePublicKeyInput) SetIfMatch(v string) *DeletePublicKeyInput { + s.IfMatch = &v return s } -// SetViewerProtocolPolicy sets the ViewerProtocolPolicy field's value. -func (s *CacheBehavior) SetViewerProtocolPolicy(v string) *CacheBehavior { - s.ViewerProtocolPolicy = &v - return s +type DeletePublicKeyOutput struct { + _ struct{} `type:"structure"` } -// A complex type that contains zero or more CacheBehavior elements. -type CacheBehaviors struct { +// String returns the string representation +func (s DeletePublicKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePublicKeyOutput) GoString() string { + return s.String() +} + +// The request to delete a streaming distribution. +type DeleteStreamingDistributionInput struct { _ struct{} `type:"structure"` - // Optional: A complex type that contains cache behaviors for this distribution. - // If Quantity is 0, you can omit Items. - Items []*CacheBehavior `locationNameList:"CacheBehavior" type:"list"` + // The distribution ID. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` - // The number of cache behaviors for this distribution. - // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // The value of the ETag header that you received when you disabled the streaming + // distribution. 
For example: E2QWRUHAPOMQZL. + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` } // String returns the string representation -func (s CacheBehaviors) String() string { +func (s DeleteStreamingDistributionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CacheBehaviors) GoString() string { +func (s DeleteStreamingDistributionInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CacheBehaviors) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CacheBehaviors"} - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } - if s.Items != nil { - for i, v := range s.Items { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) - } - } +func (s *DeleteStreamingDistributionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteStreamingDistributionInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) } if invalidParams.Len() > 0 { @@ -3744,190 +7533,478 @@ func (s *CacheBehaviors) Validate() error { return nil } -// SetItems sets the Items field's value. -func (s *CacheBehaviors) SetItems(v []*CacheBehavior) *CacheBehaviors { - s.Items = v +// SetId sets the Id field's value. +func (s *DeleteStreamingDistributionInput) SetId(v string) *DeleteStreamingDistributionInput { + s.Id = &v return s } -// SetQuantity sets the Quantity field's value. -func (s *CacheBehaviors) SetQuantity(v int64) *CacheBehaviors { - s.Quantity = &v +// SetIfMatch sets the IfMatch field's value. +func (s *DeleteStreamingDistributionInput) SetIfMatch(v string) *DeleteStreamingDistributionInput { + s.IfMatch = &v return s } -// A complex type that controls whether CloudFront caches the response to requests -// using the specified HTTP methods. There are two choices: -// -// * CloudFront caches responses to GET and HEAD requests. -// -// * CloudFront caches responses to GET, HEAD, and OPTIONS requests. -// -// If you pick the second choice for your Amazon S3 Origin, you may need to -// forward Access-Control-Request-Method, Access-Control-Request-Headers, and -// Origin headers for the responses to be cached correctly. -type CachedMethods struct { +type DeleteStreamingDistributionOutput struct { _ struct{} `type:"structure"` +} - // A complex type that contains the HTTP methods that you want CloudFront to - // cache responses to. +// String returns the string representation +func (s DeleteStreamingDistributionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStreamingDistributionOutput) GoString() string { + return s.String() +} + +// The distribution's information. +type Distribution struct { + _ struct{} `type:"structure"` + + // The ARN (Amazon Resource Name) for the distribution. For example: arn:aws:cloudfront::123456789012:distribution/EDFDVBD632BHDS5, + // where 123456789012 is your AWS account ID. // - // Items is a required field - Items []*string `locationNameList:"Method" type:"list" required:"true"` + // ARN is a required field + ARN *string `type:"string" required:"true"` - // The number of HTTP methods for which you want CloudFront to cache responses. 
- // Valid values are 2 (for caching responses to GET and HEAD requests) and 3 - // (for caching responses to GET, HEAD, and OPTIONS requests). + // CloudFront automatically adds this element to the response only if you've + // set up the distribution to serve private content with signed URLs. The element + // lists the key pair IDs that CloudFront is aware of for each trusted signer. + // The Signer child element lists the AWS account number of the trusted signer + // (or an empty Self element if the signer is you). The Signer element also + // includes the IDs of any active key pairs associated with the trusted signer's + // AWS account. If no KeyPairId element appears for a Signer, that signer can't + // create working signed URLs. // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // ActiveTrustedSigners is a required field + ActiveTrustedSigners *ActiveTrustedSigners `type:"structure" required:"true"` + + // The current configuration information for the distribution. Send a GET request + // to the /CloudFront API version/distribution ID/config resource. + // + // DistributionConfig is a required field + DistributionConfig *DistributionConfig `type:"structure" required:"true"` + + // The domain name corresponding to the distribution, for example, d111111abcdef8.cloudfront.net. + // + // DomainName is a required field + DomainName *string `type:"string" required:"true"` + + // The identifier for the distribution. For example: EDFDVBD632BHDS5. + // + // Id is a required field + Id *string `type:"string" required:"true"` + + // The number of invalidation batches currently in progress. + // + // InProgressInvalidationBatches is a required field + InProgressInvalidationBatches *int64 `type:"integer" required:"true"` + + // The date and time the distribution was last modified. + // + // LastModifiedTime is a required field + LastModifiedTime *time.Time `type:"timestamp" required:"true"` + + // This response element indicates the current status of the distribution. When + // the status is Deployed, the distribution's information is fully propagated + // to all CloudFront edge locations. + // + // Status is a required field + Status *string `type:"string" required:"true"` } // String returns the string representation -func (s CachedMethods) String() string { +func (s Distribution) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CachedMethods) GoString() string { +func (s Distribution) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CachedMethods) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CachedMethods"} - if s.Items == nil { - invalidParams.Add(request.NewErrParamRequired("Items")) - } - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } +// SetARN sets the ARN field's value. +func (s *Distribution) SetARN(v string) *Distribution { + s.ARN = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetActiveTrustedSigners sets the ActiveTrustedSigners field's value. +func (s *Distribution) SetActiveTrustedSigners(v *ActiveTrustedSigners) *Distribution { + s.ActiveTrustedSigners = v + return s } -// SetItems sets the Items field's value. -func (s *CachedMethods) SetItems(v []*string) *CachedMethods { - s.Items = v +// SetDistributionConfig sets the DistributionConfig field's value. 
+func (s *Distribution) SetDistributionConfig(v *DistributionConfig) *Distribution { + s.DistributionConfig = v return s } -// SetQuantity sets the Quantity field's value. -func (s *CachedMethods) SetQuantity(v int64) *CachedMethods { - s.Quantity = &v +// SetDomainName sets the DomainName field's value. +func (s *Distribution) SetDomainName(v string) *Distribution { + s.DomainName = &v return s } -// A complex type that specifies whether you want CloudFront to forward cookies -// to the origin and, if so, which ones. For more information about forwarding -// cookies to the origin, see How CloudFront Forwards, Caches, and Logs Cookies -// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html) -// in the Amazon CloudFront Developer Guide. -type CookieNames struct { +// SetId sets the Id field's value. +func (s *Distribution) SetId(v string) *Distribution { + s.Id = &v + return s +} + +// SetInProgressInvalidationBatches sets the InProgressInvalidationBatches field's value. +func (s *Distribution) SetInProgressInvalidationBatches(v int64) *Distribution { + s.InProgressInvalidationBatches = &v + return s +} + +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *Distribution) SetLastModifiedTime(v time.Time) *Distribution { + s.LastModifiedTime = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *Distribution) SetStatus(v string) *Distribution { + s.Status = &v + return s +} + +// A distribution configuration. +type DistributionConfig struct { _ struct{} `type:"structure"` - // A complex type that contains one Name element for each cookie that you want - // CloudFront to forward to the origin for this cache behavior. - Items []*string `locationNameList:"Name" type:"list"` + // A complex type that contains information about CNAMEs (alternate domain names), + // if any, for this distribution. + Aliases *Aliases `type:"structure"` - // The number of different cookies that you want CloudFront to forward to the - // origin for this cache behavior. + // A complex type that contains zero or more CacheBehavior elements. + CacheBehaviors *CacheBehaviors `type:"structure"` + + // A unique value (for example, a date-time stamp) that ensures that the request + // can't be replayed. + // + // If the value of CallerReference is new (regardless of the content of the + // DistributionConfig object), CloudFront creates a new distribution. + // + // If CallerReference is a value you already sent in a previous request to create + // a distribution, and if the content of the DistributionConfig is identical + // to the original request (ignoring white space), CloudFront returns the same + // the response that it returned to the original request. + // + // If CallerReference is a value you already sent in a previous request to create + // a distribution but the content of the DistributionConfig is different from + // the original request, CloudFront returns a DistributionAlreadyExists error. + // + // CallerReference is a required field + CallerReference *string `type:"string" required:"true"` + + // Any comments you want to include about the distribution. + // + // If you don't want to specify a comment, include an empty Comment element. + // + // To delete an existing comment, update the distribution configuration and + // include an empty Comment element. + // + // To add or change a comment, update the distribution configuration and specify + // the new comment. 
+ // + // Comment is a required field + Comment *string `type:"string" required:"true"` + + // A complex type that controls the following: + // + // * Whether CloudFront replaces HTTP status codes in the 4xx and 5xx range + // with custom error messages before returning the response to the viewer. + // + // * How long CloudFront caches HTTP status codes in the 4xx and 5xx range. + // + // For more information about custom error pages, see Customizing Error Responses + // (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) + // in the Amazon CloudFront Developer Guide. + CustomErrorResponses *CustomErrorResponses `type:"structure"` + + // A complex type that describes the default cache behavior if you don't specify + // a CacheBehavior element or if files don't match any of the values of PathPattern + // in CacheBehavior elements. You must create exactly one default cache behavior. + // + // DefaultCacheBehavior is a required field + DefaultCacheBehavior *DefaultCacheBehavior `type:"structure" required:"true"` + + // The object that you want CloudFront to request from your origin (for example, + // index.html) when a viewer requests the root URL for your distribution (http://www.example.com) + // instead of an object in your distribution (http://www.example.com/product-description.html). + // Specifying a default root object avoids exposing the contents of your distribution. // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` -} + // Specify only the object name, for example, index.html. Don't add a / before + // the object name. + // + // If you don't want to specify a default root object when you create a distribution, + // include an empty DefaultRootObject element. + // + // To delete the default root object from an existing distribution, update the + // distribution configuration and include an empty DefaultRootObject element. + // + // To replace the default root object, update the distribution configuration + // and specify the new object. + // + // For more information about the default root object, see Creating a Default + // Root Object (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html) + // in the Amazon CloudFront Developer Guide. + DefaultRootObject *string `type:"string"` -// String returns the string representation -func (s CookieNames) String() string { - return awsutil.Prettify(s) -} + // From this field, you can enable or disable the selected distribution. + // + // Enabled is a required field + Enabled *bool `type:"boolean" required:"true"` -// GoString returns the string representation -func (s CookieNames) GoString() string { - return s.String() -} + // (Optional) Specify the maximum HTTP version that you want viewers to use + // to communicate with CloudFront. The default value for new web distributions + // is http2. Viewers that don't support HTTP/2 automatically use an earlier + // HTTP version. + // + // For viewers and CloudFront to use HTTP/2, viewers must support TLS 1.2 or + // later, and must support Server Name Identification (SNI). + // + // In general, configuring CloudFront to communicate with viewers using HTTP/2 + // reduces latency. You can improve performance by optimizing for HTTP/2. For + // more information, do an Internet search for "http/2 optimization." + HttpVersion *string `type:"string" enum:"HttpVersion"` -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *CookieNames) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CookieNames"} - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } + // If you want CloudFront to respond to IPv6 DNS requests with an IPv6 address + // for your distribution, specify true. If you specify false, CloudFront responds + // to IPv6 DNS requests with the DNS response code NOERROR and with no IP addresses. + // This allows viewers to submit a second request, for an IPv4 address for your + // distribution. + // + // In general, you should enable IPv6 if you have users on IPv6 networks who + // want to access your content. However, if you're using signed URLs or signed + // cookies to restrict access to your content, and if you're using a custom + // policy that includes the IpAddress parameter to restrict the IP addresses + // that can access your content, don't enable IPv6. If you want to restrict + // access to some content by IP address and not restrict access to other content + // (or restrict access but not by IP address), you can create two distributions. + // For more information, see Creating a Signed URL Using a Custom Policy (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html) + // in the Amazon CloudFront Developer Guide. + // + // If you're using an Amazon Route 53 alias resource record set to route traffic + // to your CloudFront distribution, you need to create a second alias resource + // record set when both of the following are true: + // + // * You enable IPv6 for the distribution + // + // * You're using alternate domain names in the URLs for your objects + // + // For more information, see Routing Traffic to an Amazon CloudFront Web Distribution + // by Using Your Domain Name (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html) + // in the Amazon Route 53 Developer Guide. + // + // If you created a CNAME resource record set, either with Amazon Route 53 or + // with another DNS service, you don't need to make any changes. A CNAME record + // will route traffic to your distribution regardless of the IP address format + // of the viewer request. + IsIPV6Enabled *bool `type:"boolean"` - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} + // A complex type that controls whether access logs are written for the distribution. + // + // For more information about logging, see Access Logs (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html) + // in the Amazon CloudFront Developer Guide. + Logging *LoggingConfig `type:"structure"` -// SetItems sets the Items field's value. -func (s *CookieNames) SetItems(v []*string) *CookieNames { - s.Items = v - return s -} + // A complex type that contains information about origins for this distribution. + // + // Origins is a required field + Origins *Origins `type:"structure" required:"true"` -// SetQuantity sets the Quantity field's value. -func (s *CookieNames) SetQuantity(v int64) *CookieNames { - s.Quantity = &v - return s -} + // The price class that corresponds with the maximum price that you want to + // pay for CloudFront service. If you specify PriceClass_All, CloudFront responds + // to requests for your objects from all CloudFront edge locations. 
+ // + // If you specify a price class other than PriceClass_All, CloudFront serves + // your objects from the CloudFront edge location that has the lowest latency + // among the edge locations in your price class. Viewers who are in or near + // regions that are excluded from your specified price class may encounter slower + // performance. + // + // For more information about price classes, see Choosing the Price Class for + // a CloudFront Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html) + // in the Amazon CloudFront Developer Guide. For information about CloudFront + // pricing, including how price classes (such as Price Class 100) map to CloudFront + // regions, see Amazon CloudFront Pricing (https://aws.amazon.com/cloudfront/pricing/). + // For price class information, scroll down to see the table at the bottom of + // the page. + PriceClass *string `type:"string" enum:"PriceClass"` -// A complex type that specifies whether you want CloudFront to forward cookies -// to the origin and, if so, which ones. For more information about forwarding -// cookies to the origin, see How CloudFront Forwards, Caches, and Logs Cookies -// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html) -// in the Amazon CloudFront Developer Guide. -type CookiePreference struct { - _ struct{} `type:"structure"` + // A complex type that identifies ways in which you want to restrict distribution + // of your content. + Restrictions *Restrictions `type:"structure"` - // Specifies which cookies to forward to the origin for this cache behavior: - // all, none, or the list of cookies specified in the WhitelistedNames complex - // type. + // A complex type that specifies the following: // - // Amazon S3 doesn't process cookies. When the cache behavior is forwarding - // requests to an Amazon S3 origin, specify none for the Forward element. + // * Whether you want viewers to use HTTP or HTTPS to request your objects. // - // Forward is a required field - Forward *string `type:"string" required:"true" enum:"ItemSelection"` - - // Required if you specify whitelist for the value of Forward:. A complex type - // that specifies how many different cookies you want CloudFront to forward - // to the origin for this cache behavior and, if you want to forward selected - // cookies, the names of those cookies. + // * If you want viewers to use HTTPS, whether you're using an alternate + // domain name such as example.com or the CloudFront domain name for your + // distribution, such as d111111abcdef8.cloudfront.net. // - // If you specify all or none for the value of Forward, omit WhitelistedNames. - // If you change the value of Forward from whitelist to all or none and you - // don't delete the WhitelistedNames element and its child elements, CloudFront - // deletes them automatically. + // * If you're using an alternate domain name, whether AWS Certificate Manager + // (ACM) provided the certificate, or you purchased a certificate from a + // third-party certificate authority and imported it into ACM or uploaded + // it to the IAM certificate store. // - // For the current limit on the number of cookie names that you can whitelist - // for each cache behavior, see Amazon CloudFront Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_cloudfront) - // in the AWS General Reference. 
- WhitelistedNames *CookieNames `type:"structure"` + // You must specify only one of the following values: + // + // * ViewerCertificate$ACMCertificateArn + // + // * ViewerCertificate$IAMCertificateId + // + // * ViewerCertificate$CloudFrontDefaultCertificate + // + // Don't specify false for CloudFrontDefaultCertificate. + // + // If you want viewers to use HTTP instead of HTTPS to request your objects: + // Specify the following value: + // + // true + // + // In addition, specify allow-all for ViewerProtocolPolicy for all of your cache + // behaviors. + // + // If you want viewers to use HTTPS to request your objects: Choose the type + // of certificate that you want to use based on whether you're using an alternate + // domain name for your objects or the CloudFront domain name: + // + // * If you're using an alternate domain name, such as example.com: Specify + // one of the following values, depending on whether ACM provided your certificate + // or you purchased your certificate from third-party certificate authority: + // + // ARN for ACM SSL/TLS certificate where + // ARN for ACM SSL/TLS certificate is the ARN for the ACM SSL/TLS certificate + // that you want to use for this distribution. + // + // IAM certificate ID where IAM certificate + // ID is the ID that IAM returned when you added the certificate to the IAM + // certificate store. + // + // If you specify ACMCertificateArn or IAMCertificateId, you must also specify + // a value for SSLSupportMethod. + // + // If you choose to use an ACM certificate or a certificate in the IAM certificate + // store, we recommend that you use only an alternate domain name in your + // object URLs (https://example.com/logo.jpg). If you use the domain name + // that is associated with your CloudFront distribution (such as https://d111111abcdef8.cloudfront.net/logo.jpg) + // and the viewer supports SNI, then CloudFront behaves normally. However, + // if the browser does not support SNI, the user's experience depends on + // the value that you choose for SSLSupportMethod: + // + // vip: The viewer displays a warning because there is a mismatch between the + // CloudFront domain name and the domain name in your SSL/TLS certificate. + // + // sni-only: CloudFront drops the connection with the browser without returning + // the object. + // + // * If you're using the CloudFront domain name for your distribution, such + // as d111111abcdef8.cloudfront.net: Specify the following value: + // + // true + // + // If you want viewers to use HTTPS, you must also specify one of the following + // values in your cache behaviors: + // + // * https-only + // + // * redirect-to-https + // + // You can also optionally require that CloudFront use HTTPS to communicate + // with your origin by specifying one of the following values for the applicable + // origins: + // + // * https-only + // + // * match-viewer + // + // For more information, see Using Alternate Domain Names and HTTPS (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html#CNAMEsAndHTTPS) + // in the Amazon CloudFront Developer Guide. + ViewerCertificate *ViewerCertificate `type:"structure"` + + // A unique identifier that specifies the AWS WAF web ACL, if any, to associate + // with this distribution. + // + // AWS WAF is a web application firewall that lets you monitor the HTTP and + // HTTPS requests that are forwarded to CloudFront, and lets you control access + // to your content. 
Based on conditions that you specify, such as the IP addresses + // that requests originate from or the values of query strings, CloudFront responds + // to requests either with the requested content or with an HTTP 403 status + // code (Forbidden). You can also configure CloudFront to return a custom error + // page when a request is blocked. For more information about AWS WAF, see the + // AWS WAF Developer Guide (http://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html). + WebACLId *string `type:"string"` } // String returns the string representation -func (s CookiePreference) String() string { +func (s DistributionConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CookiePreference) GoString() string { +func (s DistributionConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CookiePreference) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CookiePreference"} - if s.Forward == nil { - invalidParams.Add(request.NewErrParamRequired("Forward")) +func (s *DistributionConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DistributionConfig"} + if s.CallerReference == nil { + invalidParams.Add(request.NewErrParamRequired("CallerReference")) } - if s.WhitelistedNames != nil { - if err := s.WhitelistedNames.Validate(); err != nil { - invalidParams.AddNested("WhitelistedNames", err.(request.ErrInvalidParams)) + if s.Comment == nil { + invalidParams.Add(request.NewErrParamRequired("Comment")) + } + if s.DefaultCacheBehavior == nil { + invalidParams.Add(request.NewErrParamRequired("DefaultCacheBehavior")) + } + if s.Enabled == nil { + invalidParams.Add(request.NewErrParamRequired("Enabled")) + } + if s.Origins == nil { + invalidParams.Add(request.NewErrParamRequired("Origins")) + } + if s.Aliases != nil { + if err := s.Aliases.Validate(); err != nil { + invalidParams.AddNested("Aliases", err.(request.ErrInvalidParams)) + } + } + if s.CacheBehaviors != nil { + if err := s.CacheBehaviors.Validate(); err != nil { + invalidParams.AddNested("CacheBehaviors", err.(request.ErrInvalidParams)) + } + } + if s.CustomErrorResponses != nil { + if err := s.CustomErrorResponses.Validate(); err != nil { + invalidParams.AddNested("CustomErrorResponses", err.(request.ErrInvalidParams)) + } + } + if s.DefaultCacheBehavior != nil { + if err := s.DefaultCacheBehavior.Validate(); err != nil { + invalidParams.AddNested("DefaultCacheBehavior", err.(request.ErrInvalidParams)) + } + } + if s.Logging != nil { + if err := s.Logging.Validate(); err != nil { + invalidParams.AddNested("Logging", err.(request.ErrInvalidParams)) + } + } + if s.Origins != nil { + if err := s.Origins.Validate(); err != nil { + invalidParams.AddNested("Origins", err.(request.ErrInvalidParams)) + } + } + if s.Restrictions != nil { + if err := s.Restrictions.Validate(); err != nil { + invalidParams.AddNested("Restrictions", err.(request.ErrInvalidParams)) } } @@ -3937,136 +8014,147 @@ func (s *CookiePreference) Validate() error { return nil } -// SetForward sets the Forward field's value. -func (s *CookiePreference) SetForward(v string) *CookiePreference { - s.Forward = &v +// SetAliases sets the Aliases field's value. +func (s *DistributionConfig) SetAliases(v *Aliases) *DistributionConfig { + s.Aliases = v return s } -// SetWhitelistedNames sets the WhitelistedNames field's value. 
-func (s *CookiePreference) SetWhitelistedNames(v *CookieNames) *CookiePreference { - s.WhitelistedNames = v +// SetCacheBehaviors sets the CacheBehaviors field's value. +func (s *DistributionConfig) SetCacheBehaviors(v *CacheBehaviors) *DistributionConfig { + s.CacheBehaviors = v return s } -// The request to create a new origin access identity. -type CreateCloudFrontOriginAccessIdentityInput struct { - _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityConfig"` - - // The current configuration information for the identity. - // - // CloudFrontOriginAccessIdentityConfig is a required field - CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `locationName:"CloudFrontOriginAccessIdentityConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` +// SetCallerReference sets the CallerReference field's value. +func (s *DistributionConfig) SetCallerReference(v string) *DistributionConfig { + s.CallerReference = &v + return s } -// String returns the string representation -func (s CreateCloudFrontOriginAccessIdentityInput) String() string { - return awsutil.Prettify(s) +// SetComment sets the Comment field's value. +func (s *DistributionConfig) SetComment(v string) *DistributionConfig { + s.Comment = &v + return s } -// GoString returns the string representation -func (s CreateCloudFrontOriginAccessIdentityInput) GoString() string { - return s.String() +// SetCustomErrorResponses sets the CustomErrorResponses field's value. +func (s *DistributionConfig) SetCustomErrorResponses(v *CustomErrorResponses) *DistributionConfig { + s.CustomErrorResponses = v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateCloudFrontOriginAccessIdentityInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateCloudFrontOriginAccessIdentityInput"} - if s.CloudFrontOriginAccessIdentityConfig == nil { - invalidParams.Add(request.NewErrParamRequired("CloudFrontOriginAccessIdentityConfig")) - } - if s.CloudFrontOriginAccessIdentityConfig != nil { - if err := s.CloudFrontOriginAccessIdentityConfig.Validate(); err != nil { - invalidParams.AddNested("CloudFrontOriginAccessIdentityConfig", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetDefaultCacheBehavior sets the DefaultCacheBehavior field's value. +func (s *DistributionConfig) SetDefaultCacheBehavior(v *DefaultCacheBehavior) *DistributionConfig { + s.DefaultCacheBehavior = v + return s } -// SetCloudFrontOriginAccessIdentityConfig sets the CloudFrontOriginAccessIdentityConfig field's value. -func (s *CreateCloudFrontOriginAccessIdentityInput) SetCloudFrontOriginAccessIdentityConfig(v *OriginAccessIdentityConfig) *CreateCloudFrontOriginAccessIdentityInput { - s.CloudFrontOriginAccessIdentityConfig = v +// SetDefaultRootObject sets the DefaultRootObject field's value. +func (s *DistributionConfig) SetDefaultRootObject(v string) *DistributionConfig { + s.DefaultRootObject = &v return s } -// The returned result of the corresponding request. -type CreateCloudFrontOriginAccessIdentityOutput struct { - _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentity"` +// SetEnabled sets the Enabled field's value. +func (s *DistributionConfig) SetEnabled(v bool) *DistributionConfig { + s.Enabled = &v + return s +} - // The origin access identity's information. 
- CloudFrontOriginAccessIdentity *OriginAccessIdentity `type:"structure"` +// SetHttpVersion sets the HttpVersion field's value. +func (s *DistributionConfig) SetHttpVersion(v string) *DistributionConfig { + s.HttpVersion = &v + return s +} - // The current version of the origin access identity created. - ETag *string `location:"header" locationName:"ETag" type:"string"` +// SetIsIPV6Enabled sets the IsIPV6Enabled field's value. +func (s *DistributionConfig) SetIsIPV6Enabled(v bool) *DistributionConfig { + s.IsIPV6Enabled = &v + return s +} - // The fully qualified URI of the new origin access identity just created. For - // example: https://cloudfront.amazonaws.com/2010-11-01/origin-access-identity/cloudfront/E74FTE3AJFJ256A. - Location *string `location:"header" locationName:"Location" type:"string"` +// SetLogging sets the Logging field's value. +func (s *DistributionConfig) SetLogging(v *LoggingConfig) *DistributionConfig { + s.Logging = v + return s } -// String returns the string representation -func (s CreateCloudFrontOriginAccessIdentityOutput) String() string { - return awsutil.Prettify(s) +// SetOrigins sets the Origins field's value. +func (s *DistributionConfig) SetOrigins(v *Origins) *DistributionConfig { + s.Origins = v + return s } -// GoString returns the string representation -func (s CreateCloudFrontOriginAccessIdentityOutput) GoString() string { - return s.String() +// SetPriceClass sets the PriceClass field's value. +func (s *DistributionConfig) SetPriceClass(v string) *DistributionConfig { + s.PriceClass = &v + return s } -// SetCloudFrontOriginAccessIdentity sets the CloudFrontOriginAccessIdentity field's value. -func (s *CreateCloudFrontOriginAccessIdentityOutput) SetCloudFrontOriginAccessIdentity(v *OriginAccessIdentity) *CreateCloudFrontOriginAccessIdentityOutput { - s.CloudFrontOriginAccessIdentity = v +// SetRestrictions sets the Restrictions field's value. +func (s *DistributionConfig) SetRestrictions(v *Restrictions) *DistributionConfig { + s.Restrictions = v return s } -// SetETag sets the ETag field's value. -func (s *CreateCloudFrontOriginAccessIdentityOutput) SetETag(v string) *CreateCloudFrontOriginAccessIdentityOutput { - s.ETag = &v +// SetViewerCertificate sets the ViewerCertificate field's value. +func (s *DistributionConfig) SetViewerCertificate(v *ViewerCertificate) *DistributionConfig { + s.ViewerCertificate = v return s } -// SetLocation sets the Location field's value. -func (s *CreateCloudFrontOriginAccessIdentityOutput) SetLocation(v string) *CreateCloudFrontOriginAccessIdentityOutput { - s.Location = &v +// SetWebACLId sets the WebACLId field's value. +func (s *DistributionConfig) SetWebACLId(v string) *DistributionConfig { + s.WebACLId = &v return s } -// The request to create a new distribution. -type CreateDistributionInput struct { - _ struct{} `type:"structure" payload:"DistributionConfig"` +// A distribution Configuration and a list of tags to be associated with the +// distribution. +type DistributionConfigWithTags struct { + _ struct{} `type:"structure"` - // The distribution's configuration information. + // A distribution configuration. // // DistributionConfig is a required field - DistributionConfig *DistributionConfig `locationName:"DistributionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` + DistributionConfig *DistributionConfig `type:"structure" required:"true"` + + // A complex type that contains zero or more Tag elements. 
+ // + // Tags is a required field + Tags *Tags `type:"structure" required:"true"` } // String returns the string representation -func (s CreateDistributionInput) String() string { +func (s DistributionConfigWithTags) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDistributionInput) GoString() string { +func (s DistributionConfigWithTags) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateDistributionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateDistributionInput"} +func (s *DistributionConfigWithTags) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DistributionConfigWithTags"} if s.DistributionConfig == nil { invalidParams.Add(request.NewErrParamRequired("DistributionConfig")) } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } if s.DistributionConfig != nil { if err := s.DistributionConfig.Validate(); err != nil { invalidParams.AddNested("DistributionConfig", err.(request.ErrInvalidParams)) } } + if s.Tags != nil { + if err := s.Tags.Validate(); err != nil { + invalidParams.AddNested("Tags", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -4075,262 +8163,445 @@ func (s *CreateDistributionInput) Validate() error { } // SetDistributionConfig sets the DistributionConfig field's value. -func (s *CreateDistributionInput) SetDistributionConfig(v *DistributionConfig) *CreateDistributionInput { +func (s *DistributionConfigWithTags) SetDistributionConfig(v *DistributionConfig) *DistributionConfigWithTags { s.DistributionConfig = v return s } -// The returned result of the corresponding request. -type CreateDistributionOutput struct { - _ struct{} `type:"structure" payload:"Distribution"` +// SetTags sets the Tags field's value. +func (s *DistributionConfigWithTags) SetTags(v *Tags) *DistributionConfigWithTags { + s.Tags = v + return s +} - // The distribution's information. - Distribution *Distribution `type:"structure"` +// A distribution list. +type DistributionList struct { + _ struct{} `type:"structure"` - // The current version of the distribution created. - ETag *string `location:"header" locationName:"ETag" type:"string"` + // A flag that indicates whether more distributions remain to be listed. If + // your results were truncated, you can make a follow-up pagination request + // using the Marker request parameter to retrieve more distributions in the + // list. + // + // IsTruncated is a required field + IsTruncated *bool `type:"boolean" required:"true"` - // The fully qualified URI of the new distribution resource just created. For - // example: https://cloudfront.amazonaws.com/2010-11-01/distribution/EDFDVBD632BHDS5. - Location *string `location:"header" locationName:"Location" type:"string"` + // A complex type that contains one DistributionSummary element for each distribution + // that was created by the current AWS account. + Items []*DistributionSummary `locationNameList:"DistributionSummary" type:"list"` + + // The value you provided for the Marker request parameter. + // + // Marker is a required field + Marker *string `type:"string" required:"true"` + + // The value you provided for the MaxItems request parameter. 
+ // + // MaxItems is a required field + MaxItems *int64 `type:"integer" required:"true"` + + // If IsTruncated is true, this element is present and contains the value you + // can use for the Marker request parameter to continue listing your distributions + // where they left off. + NextMarker *string `type:"string"` + + // The number of distributions that were created by the current AWS account. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s CreateDistributionOutput) String() string { +func (s DistributionList) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDistributionOutput) GoString() string { +func (s DistributionList) GoString() string { return s.String() } -// SetDistribution sets the Distribution field's value. -func (s *CreateDistributionOutput) SetDistribution(v *Distribution) *CreateDistributionOutput { - s.Distribution = v +// SetIsTruncated sets the IsTruncated field's value. +func (s *DistributionList) SetIsTruncated(v bool) *DistributionList { + s.IsTruncated = &v return s } -// SetETag sets the ETag field's value. -func (s *CreateDistributionOutput) SetETag(v string) *CreateDistributionOutput { - s.ETag = &v +// SetItems sets the Items field's value. +func (s *DistributionList) SetItems(v []*DistributionSummary) *DistributionList { + s.Items = v return s } -// SetLocation sets the Location field's value. -func (s *CreateDistributionOutput) SetLocation(v string) *CreateDistributionOutput { - s.Location = &v +// SetMarker sets the Marker field's value. +func (s *DistributionList) SetMarker(v string) *DistributionList { + s.Marker = &v return s } -// The request to create a new distribution with tags. -type CreateDistributionWithTagsInput struct { - _ struct{} `type:"structure" payload:"DistributionConfigWithTags"` - - // The distribution's configuration information. - // - // DistributionConfigWithTags is a required field - DistributionConfigWithTags *DistributionConfigWithTags `locationName:"DistributionConfigWithTags" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` +// SetMaxItems sets the MaxItems field's value. +func (s *DistributionList) SetMaxItems(v int64) *DistributionList { + s.MaxItems = &v + return s } -// String returns the string representation -func (s CreateDistributionWithTagsInput) String() string { - return awsutil.Prettify(s) +// SetNextMarker sets the NextMarker field's value. +func (s *DistributionList) SetNextMarker(v string) *DistributionList { + s.NextMarker = &v + return s } -// GoString returns the string representation -func (s CreateDistributionWithTagsInput) GoString() string { - return s.String() +// SetQuantity sets the Quantity field's value. +func (s *DistributionList) SetQuantity(v int64) *DistributionList { + s.Quantity = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. 
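The `Marker`, `NextMarker`, and `IsTruncated` fields above drive CloudFront's pagination of `DistributionList`. A hedged sketch of walking the full list (the `ListDistributionsInput`/`ListDistributionsOutput` types belong to the same generated package but are not shown in this hunk):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	conn := cloudfront.New(session.Must(session.NewSession()))

	// Follow NextMarker while IsTruncated is true, as described by the
	// DistributionList fields above.
	var marker *string
	for {
		out, err := conn.ListDistributions(&cloudfront.ListDistributionsInput{Marker: marker})
		if err != nil {
			log.Fatal(err)
		}
		list := out.DistributionList
		if list == nil {
			break
		}
		for _, d := range list.Items {
			fmt.Println(aws.StringValue(d.Id), aws.StringValue(d.DomainName))
		}
		if !aws.BoolValue(list.IsTruncated) {
			break
		}
		marker = list.NextMarker
	}
}
```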
-func (s *CreateDistributionWithTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateDistributionWithTagsInput"} - if s.DistributionConfigWithTags == nil { - invalidParams.Add(request.NewErrParamRequired("DistributionConfigWithTags")) - } - if s.DistributionConfigWithTags != nil { - if err := s.DistributionConfigWithTags.Validate(); err != nil { - invalidParams.AddNested("DistributionConfigWithTags", err.(request.ErrInvalidParams)) - } - } +// A summary of the information about a CloudFront distribution. +type DistributionSummary struct { + _ struct{} `type:"structure"` - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} + // The ARN (Amazon Resource Name) for the distribution. For example: arn:aws:cloudfront::123456789012:distribution/EDFDVBD632BHDS5, + // where 123456789012 is your AWS account ID. + // + // ARN is a required field + ARN *string `type:"string" required:"true"` -// SetDistributionConfigWithTags sets the DistributionConfigWithTags field's value. -func (s *CreateDistributionWithTagsInput) SetDistributionConfigWithTags(v *DistributionConfigWithTags) *CreateDistributionWithTagsInput { - s.DistributionConfigWithTags = v - return s -} + // A complex type that contains information about CNAMEs (alternate domain names), + // if any, for this distribution. + // + // Aliases is a required field + Aliases *Aliases `type:"structure" required:"true"` -// The returned result of the corresponding request. -type CreateDistributionWithTagsOutput struct { - _ struct{} `type:"structure" payload:"Distribution"` + // A complex type that contains zero or more CacheBehavior elements. + // + // CacheBehaviors is a required field + CacheBehaviors *CacheBehaviors `type:"structure" required:"true"` - // The distribution's information. - Distribution *Distribution `type:"structure"` + // The comment originally specified when this distribution was created. + // + // Comment is a required field + Comment *string `type:"string" required:"true"` - // The current version of the distribution created. - ETag *string `location:"header" locationName:"ETag" type:"string"` + // A complex type that contains zero or more CustomErrorResponses elements. + // + // CustomErrorResponses is a required field + CustomErrorResponses *CustomErrorResponses `type:"structure" required:"true"` - // The fully qualified URI of the new distribution resource just created. For - // example: https://cloudfront.amazonaws.com/2010-11-01/distribution/EDFDVBD632BHDS5. - Location *string `location:"header" locationName:"Location" type:"string"` + // A complex type that describes the default cache behavior if you don't specify + // a CacheBehavior element or if files don't match any of the values of PathPattern + // in CacheBehavior elements. You must create exactly one default cache behavior. + // + // DefaultCacheBehavior is a required field + DefaultCacheBehavior *DefaultCacheBehavior `type:"structure" required:"true"` + + // The domain name that corresponds to the distribution, for example, d111111abcdef8.cloudfront.net. + // + // DomainName is a required field + DomainName *string `type:"string" required:"true"` + + // Whether the distribution is enabled to accept user requests for content. + // + // Enabled is a required field + Enabled *bool `type:"boolean" required:"true"` + + // Specify the maximum HTTP version that you want viewers to use to communicate + // with CloudFront. The default value for new web distributions is http2. 
Viewers + // that don't support HTTP/2 will automatically use an earlier version. + // + // HttpVersion is a required field + HttpVersion *string `type:"string" required:"true" enum:"HttpVersion"` + + // The identifier for the distribution. For example: EDFDVBD632BHDS5. + // + // Id is a required field + Id *string `type:"string" required:"true"` + + // Whether CloudFront responds to IPv6 DNS requests with an IPv6 address for + // your distribution. + // + // IsIPV6Enabled is a required field + IsIPV6Enabled *bool `type:"boolean" required:"true"` + + // The date and time the distribution was last modified. + // + // LastModifiedTime is a required field + LastModifiedTime *time.Time `type:"timestamp" required:"true"` + + // A complex type that contains information about origins for this distribution. + // + // Origins is a required field + Origins *Origins `type:"structure" required:"true"` + + // PriceClass is a required field + PriceClass *string `type:"string" required:"true" enum:"PriceClass"` + + // A complex type that identifies ways in which you want to restrict distribution + // of your content. + // + // Restrictions is a required field + Restrictions *Restrictions `type:"structure" required:"true"` + + // The current status of the distribution. When the status is Deployed, the + // distribution's information is propagated to all CloudFront edge locations. + // + // Status is a required field + Status *string `type:"string" required:"true"` + + // A complex type that specifies the following: + // + // * Whether you want viewers to use HTTP or HTTPS to request your objects. + // + // * If you want viewers to use HTTPS, whether you're using an alternate + // domain name such as example.com or the CloudFront domain name for your + // distribution, such as d111111abcdef8.cloudfront.net. + // + // * If you're using an alternate domain name, whether AWS Certificate Manager + // (ACM) provided the certificate, or you purchased a certificate from a + // third-party certificate authority and imported it into ACM or uploaded + // it to the IAM certificate store. + // + // You must specify only one of the following values: + // + // * ViewerCertificate$ACMCertificateArn + // + // * ViewerCertificate$IAMCertificateId + // + // * ViewerCertificate$CloudFrontDefaultCertificate + // + // Don't specify false for CloudFrontDefaultCertificate. + // + // If you want viewers to use HTTP instead of HTTPS to request your objects: + // Specify the following value: + // + // true + // + // In addition, specify allow-all for ViewerProtocolPolicy for all of your cache + // behaviors. + // + // If you want viewers to use HTTPS to request your objects: Choose the type + // of certificate that you want to use based on whether you're using an alternate + // domain name for your objects or the CloudFront domain name: + // + // * If you're using an alternate domain name, such as example.com: Specify + // one of the following values, depending on whether ACM provided your certificate + // or you purchased your certificate from third-party certificate authority: + // + // ARN for ACM SSL/TLS certificate where + // ARN for ACM SSL/TLS certificate is the ARN for the ACM SSL/TLS certificate + // that you want to use for this distribution. + // + // IAM certificate ID where IAM certificate + // ID is the ID that IAM returned when you added the certificate to the IAM + // certificate store. + // + // If you specify ACMCertificateArn or IAMCertificateId, you must also specify + // a value for SSLSupportMethod. 
+ // + // If you choose to use an ACM certificate or a certificate in the IAM certificate + // store, we recommend that you use only an alternate domain name in your + // object URLs (https://example.com/logo.jpg). If you use the domain name + // that is associated with your CloudFront distribution (such as https://d111111abcdef8.cloudfront.net/logo.jpg) + // and the viewer supports SNI, then CloudFront behaves normally. However, + // if the browser does not support SNI, the user's experience depends on + // the value that you choose for SSLSupportMethod: + // + // vip: The viewer displays a warning because there is a mismatch between the + // CloudFront domain name and the domain name in your SSL/TLS certificate. + // + // sni-only: CloudFront drops the connection with the browser without returning + // the object. + // + // * If you're using the CloudFront domain name for your distribution, such + // as d111111abcdef8.cloudfront.net: Specify the following value: + // + // true + // + // If you want viewers to use HTTPS, you must also specify one of the following + // values in your cache behaviors: + // + // * https-only + // + // * redirect-to-https + // + // You can also optionally require that CloudFront use HTTPS to communicate + // with your origin by specifying one of the following values for the applicable + // origins: + // + // * https-only + // + // * match-viewer + // + // For more information, see Using Alternate Domain Names and HTTPS (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html#CNAMEsAndHTTPS) + // in the Amazon CloudFront Developer Guide. + // + // ViewerCertificate is a required field + ViewerCertificate *ViewerCertificate `type:"structure" required:"true"` + + // The Web ACL Id (if any) associated with the distribution. + // + // WebACLId is a required field + WebACLId *string `type:"string" required:"true"` } // String returns the string representation -func (s CreateDistributionWithTagsOutput) String() string { +func (s DistributionSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDistributionWithTagsOutput) GoString() string { +func (s DistributionSummary) GoString() string { return s.String() } -// SetDistribution sets the Distribution field's value. -func (s *CreateDistributionWithTagsOutput) SetDistribution(v *Distribution) *CreateDistributionWithTagsOutput { - s.Distribution = v +// SetARN sets the ARN field's value. +func (s *DistributionSummary) SetARN(v string) *DistributionSummary { + s.ARN = &v return s } -// SetETag sets the ETag field's value. -func (s *CreateDistributionWithTagsOutput) SetETag(v string) *CreateDistributionWithTagsOutput { - s.ETag = &v +// SetAliases sets the Aliases field's value. +func (s *DistributionSummary) SetAliases(v *Aliases) *DistributionSummary { + s.Aliases = v return s } -// SetLocation sets the Location field's value. -func (s *CreateDistributionWithTagsOutput) SetLocation(v string) *CreateDistributionWithTagsOutput { - s.Location = &v +// SetCacheBehaviors sets the CacheBehaviors field's value. +func (s *DistributionSummary) SetCacheBehaviors(v *CacheBehaviors) *DistributionSummary { + s.CacheBehaviors = v return s } -// The request to create an invalidation. -type CreateInvalidationInput struct { - _ struct{} `type:"structure" payload:"InvalidationBatch"` - - // The distribution's id. 
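The `ViewerCertificate` notes above reduce to choosing exactly one certificate source and, for ACM or IAM certificates, an `SSLSupportMethod`. A minimal illustrative sketch (the helper names are invented; the `SSLSupportMethodSniOnly` constant is defined elsewhere in this generated package):

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

// viewerCertForAlias serves an alternate domain name (CNAME) over SNI using
// an ACM certificate, per the ACMCertificateArn + SSLSupportMethod rule above.
func viewerCertForAlias(acmCertificateARN string) *cloudfront.ViewerCertificate {
	return (&cloudfront.ViewerCertificate{}).
		SetACMCertificateArn(acmCertificateARN).
		SetSSLSupportMethod(cloudfront.SSLSupportMethodSniOnly)
}

// viewerCertDefault uses the *.cloudfront.net certificate instead, i.e.
// CloudFrontDefaultCertificate set to true and no ARN or IAM certificate ID.
func viewerCertDefault() *cloudfront.ViewerCertificate {
	return &cloudfront.ViewerCertificate{
		CloudFrontDefaultCertificate: aws.Bool(true),
	}
}
```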
- // - // DistributionId is a required field - DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"` +// SetComment sets the Comment field's value. +func (s *DistributionSummary) SetComment(v string) *DistributionSummary { + s.Comment = &v + return s +} - // The batch information for the invalidation. - // - // InvalidationBatch is a required field - InvalidationBatch *InvalidationBatch `locationName:"InvalidationBatch" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` +// SetCustomErrorResponses sets the CustomErrorResponses field's value. +func (s *DistributionSummary) SetCustomErrorResponses(v *CustomErrorResponses) *DistributionSummary { + s.CustomErrorResponses = v + return s } -// String returns the string representation -func (s CreateInvalidationInput) String() string { - return awsutil.Prettify(s) +// SetDefaultCacheBehavior sets the DefaultCacheBehavior field's value. +func (s *DistributionSummary) SetDefaultCacheBehavior(v *DefaultCacheBehavior) *DistributionSummary { + s.DefaultCacheBehavior = v + return s } -// GoString returns the string representation -func (s CreateInvalidationInput) GoString() string { - return s.String() +// SetDomainName sets the DomainName field's value. +func (s *DistributionSummary) SetDomainName(v string) *DistributionSummary { + s.DomainName = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateInvalidationInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateInvalidationInput"} - if s.DistributionId == nil { - invalidParams.Add(request.NewErrParamRequired("DistributionId")) - } - if s.InvalidationBatch == nil { - invalidParams.Add(request.NewErrParamRequired("InvalidationBatch")) - } - if s.InvalidationBatch != nil { - if err := s.InvalidationBatch.Validate(); err != nil { - invalidParams.AddNested("InvalidationBatch", err.(request.ErrInvalidParams)) - } - } +// SetEnabled sets the Enabled field's value. +func (s *DistributionSummary) SetEnabled(v bool) *DistributionSummary { + s.Enabled = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetHttpVersion sets the HttpVersion field's value. +func (s *DistributionSummary) SetHttpVersion(v string) *DistributionSummary { + s.HttpVersion = &v + return s } -// SetDistributionId sets the DistributionId field's value. -func (s *CreateInvalidationInput) SetDistributionId(v string) *CreateInvalidationInput { - s.DistributionId = &v +// SetId sets the Id field's value. +func (s *DistributionSummary) SetId(v string) *DistributionSummary { + s.Id = &v return s } -// SetInvalidationBatch sets the InvalidationBatch field's value. -func (s *CreateInvalidationInput) SetInvalidationBatch(v *InvalidationBatch) *CreateInvalidationInput { - s.InvalidationBatch = v +// SetIsIPV6Enabled sets the IsIPV6Enabled field's value. +func (s *DistributionSummary) SetIsIPV6Enabled(v bool) *DistributionSummary { + s.IsIPV6Enabled = &v return s } -// The returned result of the corresponding request. -type CreateInvalidationOutput struct { - _ struct{} `type:"structure" payload:"Invalidation"` +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *DistributionSummary) SetLastModifiedTime(v time.Time) *DistributionSummary { + s.LastModifiedTime = &v + return s +} - // The invalidation's information. - Invalidation *Invalidation `type:"structure"` +// SetOrigins sets the Origins field's value. 
+func (s *DistributionSummary) SetOrigins(v *Origins) *DistributionSummary { + s.Origins = v + return s +} - // The fully qualified URI of the distribution and invalidation batch request, - // including the Invalidation ID. - Location *string `location:"header" locationName:"Location" type:"string"` +// SetPriceClass sets the PriceClass field's value. +func (s *DistributionSummary) SetPriceClass(v string) *DistributionSummary { + s.PriceClass = &v + return s } -// String returns the string representation -func (s CreateInvalidationOutput) String() string { - return awsutil.Prettify(s) +// SetRestrictions sets the Restrictions field's value. +func (s *DistributionSummary) SetRestrictions(v *Restrictions) *DistributionSummary { + s.Restrictions = v + return s } -// GoString returns the string representation -func (s CreateInvalidationOutput) GoString() string { - return s.String() +// SetStatus sets the Status field's value. +func (s *DistributionSummary) SetStatus(v string) *DistributionSummary { + s.Status = &v + return s } -// SetInvalidation sets the Invalidation field's value. -func (s *CreateInvalidationOutput) SetInvalidation(v *Invalidation) *CreateInvalidationOutput { - s.Invalidation = v +// SetViewerCertificate sets the ViewerCertificate field's value. +func (s *DistributionSummary) SetViewerCertificate(v *ViewerCertificate) *DistributionSummary { + s.ViewerCertificate = v return s } -// SetLocation sets the Location field's value. -func (s *CreateInvalidationOutput) SetLocation(v string) *CreateInvalidationOutput { - s.Location = &v +// SetWebACLId sets the WebACLId field's value. +func (s *DistributionSummary) SetWebACLId(v string) *DistributionSummary { + s.WebACLId = &v return s } -// The request to create a new streaming distribution. -type CreateStreamingDistributionInput struct { - _ struct{} `type:"structure" payload:"StreamingDistributionConfig"` +// Complex data type for field-level encryption profiles that includes all of +// the encryption entities. +type EncryptionEntities struct { + _ struct{} `type:"structure"` - // The streaming distribution's configuration information. + // An array of field patterns in a field-level encryption content type-profile + // mapping. + Items []*EncryptionEntity `locationNameList:"EncryptionEntity" type:"list"` + + // Number of field pattern items in a field-level encryption content type-profile + // mapping. // - // StreamingDistributionConfig is a required field - StreamingDistributionConfig *StreamingDistributionConfig `locationName:"StreamingDistributionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s CreateStreamingDistributionInput) String() string { +func (s EncryptionEntities) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateStreamingDistributionInput) GoString() string { +func (s EncryptionEntities) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateStreamingDistributionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateStreamingDistributionInput"} - if s.StreamingDistributionConfig == nil { - invalidParams.Add(request.NewErrParamRequired("StreamingDistributionConfig")) +func (s *EncryptionEntities) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EncryptionEntities"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) } - if s.StreamingDistributionConfig != nil { - if err := s.StreamingDistributionConfig.Validate(); err != nil { - invalidParams.AddNested("StreamingDistributionConfig", err.(request.ErrInvalidParams)) + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } } } @@ -4340,84 +8611,71 @@ func (s *CreateStreamingDistributionInput) Validate() error { return nil } -// SetStreamingDistributionConfig sets the StreamingDistributionConfig field's value. -func (s *CreateStreamingDistributionInput) SetStreamingDistributionConfig(v *StreamingDistributionConfig) *CreateStreamingDistributionInput { - s.StreamingDistributionConfig = v +// SetItems sets the Items field's value. +func (s *EncryptionEntities) SetItems(v []*EncryptionEntity) *EncryptionEntities { + s.Items = v return s } -// The returned result of the corresponding request. -type CreateStreamingDistributionOutput struct { - _ struct{} `type:"structure" payload:"StreamingDistribution"` - - // The current version of the streaming distribution created. - ETag *string `location:"header" locationName:"ETag" type:"string"` - - // The fully qualified URI of the new streaming distribution resource just created. - // For example: https://cloudfront.amazonaws.com/2010-11-01/streaming-distribution/EGTXBD79H29TRA8. - Location *string `location:"header" locationName:"Location" type:"string"` - - // The streaming distribution's information. - StreamingDistribution *StreamingDistribution `type:"structure"` -} - -// String returns the string representation -func (s CreateStreamingDistributionOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s CreateStreamingDistributionOutput) GoString() string { - return s.String() -} - -// SetETag sets the ETag field's value. -func (s *CreateStreamingDistributionOutput) SetETag(v string) *CreateStreamingDistributionOutput { - s.ETag = &v +// SetQuantity sets the Quantity field's value. +func (s *EncryptionEntities) SetQuantity(v int64) *EncryptionEntities { + s.Quantity = &v return s } -// SetLocation sets the Location field's value. -func (s *CreateStreamingDistributionOutput) SetLocation(v string) *CreateStreamingDistributionOutput { - s.Location = &v - return s -} +// Complex data type for field-level encryption profiles that includes the encryption +// key and field pattern specifications. +type EncryptionEntity struct { + _ struct{} `type:"structure"` -// SetStreamingDistribution sets the StreamingDistribution field's value. -func (s *CreateStreamingDistributionOutput) SetStreamingDistribution(v *StreamingDistribution) *CreateStreamingDistributionOutput { - s.StreamingDistribution = v - return s -} + // Field patterns in a field-level encryption content type profile specify the + // fields that you want to be encrypted. You can provide the full field name, + // or any beginning characters followed by a wildcard (*). 
You can't overlap + // field patterns. For example, you can't have both ABC* and AB*. Note that + // field patterns are case-sensitive. + // + // FieldPatterns is a required field + FieldPatterns *FieldPatterns `type:"structure" required:"true"` -// The request to create a new streaming distribution with tags. -type CreateStreamingDistributionWithTagsInput struct { - _ struct{} `type:"structure" payload:"StreamingDistributionConfigWithTags"` + // The provider associated with the public key being used for encryption. This + // value must also be provided with the private key for applications to be able + // to decrypt data. + // + // ProviderId is a required field + ProviderId *string `type:"string" required:"true"` - // The streaming distribution's configuration information. + // The public key associated with a set of field-level encryption patterns, + // to be used when encrypting the fields that match the patterns. // - // StreamingDistributionConfigWithTags is a required field - StreamingDistributionConfigWithTags *StreamingDistributionConfigWithTags `locationName:"StreamingDistributionConfigWithTags" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` + // PublicKeyId is a required field + PublicKeyId *string `type:"string" required:"true"` } // String returns the string representation -func (s CreateStreamingDistributionWithTagsInput) String() string { +func (s EncryptionEntity) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateStreamingDistributionWithTagsInput) GoString() string { +func (s EncryptionEntity) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateStreamingDistributionWithTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateStreamingDistributionWithTagsInput"} - if s.StreamingDistributionConfigWithTags == nil { - invalidParams.Add(request.NewErrParamRequired("StreamingDistributionConfigWithTags")) +func (s *EncryptionEntity) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EncryptionEntity"} + if s.FieldPatterns == nil { + invalidParams.Add(request.NewErrParamRequired("FieldPatterns")) } - if s.StreamingDistributionConfigWithTags != nil { - if err := s.StreamingDistributionConfigWithTags.Validate(); err != nil { - invalidParams.AddNested("StreamingDistributionConfigWithTags", err.(request.ErrInvalidParams)) + if s.ProviderId == nil { + invalidParams.Add(request.NewErrParamRequired("ProviderId")) + } + if s.PublicKeyId == nil { + invalidParams.Add(request.NewErrParamRequired("PublicKeyId")) + } + if s.FieldPatterns != nil { + if err := s.FieldPatterns.Validate(); err != nil { + invalidParams.AddNested("FieldPatterns", err.(request.ErrInvalidParams)) } } @@ -4427,150 +8685,124 @@ func (s *CreateStreamingDistributionWithTagsInput) Validate() error { return nil } -// SetStreamingDistributionConfigWithTags sets the StreamingDistributionConfigWithTags field's value. -func (s *CreateStreamingDistributionWithTagsInput) SetStreamingDistributionConfigWithTags(v *StreamingDistributionConfigWithTags) *CreateStreamingDistributionWithTagsInput { - s.StreamingDistributionConfigWithTags = v +// SetFieldPatterns sets the FieldPatterns field's value. +func (s *EncryptionEntity) SetFieldPatterns(v *FieldPatterns) *EncryptionEntity { + s.FieldPatterns = v return s } -// The returned result of the corresponding request. 
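Tying the pieces above together: an `EncryptionEntity` associates a public key and provider with a set of field patterns, and `EncryptionEntities` wraps one or more of them with a `Quantity`. An illustrative helper, not part of the SDK (`FieldPatterns` is defined elsewhere in this package):

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

// encryptionEntities wraps a single EncryptionEntity that encrypts fields
// matching the given patterns with the named public key and provider.
func encryptionEntities(publicKeyID, providerID string, patterns []string) *cloudfront.EncryptionEntities {
	entity := (&cloudfront.EncryptionEntity{}).
		SetPublicKeyId(publicKeyID).
		SetProviderId(providerID).
		SetFieldPatterns((&cloudfront.FieldPatterns{}).
			SetItems(aws.StringSlice(patterns)).
			SetQuantity(int64(len(patterns))))

	return (&cloudfront.EncryptionEntities{}).
		SetItems([]*cloudfront.EncryptionEntity{entity}).
		SetQuantity(1)
}
```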
-type CreateStreamingDistributionWithTagsOutput struct { - _ struct{} `type:"structure" payload:"StreamingDistribution"` +// SetProviderId sets the ProviderId field's value. +func (s *EncryptionEntity) SetProviderId(v string) *EncryptionEntity { + s.ProviderId = &v + return s +} - ETag *string `location:"header" locationName:"ETag" type:"string"` +// SetPublicKeyId sets the PublicKeyId field's value. +func (s *EncryptionEntity) SetPublicKeyId(v string) *EncryptionEntity { + s.PublicKeyId = &v + return s +} - // The fully qualified URI of the new streaming distribution resource just created. - // For example: https://cloudfront.amazonaws.com/2010-11-01/streaming-distribution/EGTXBD79H29TRA8. - Location *string `location:"header" locationName:"Location" type:"string"` +// A complex data type that includes the profile configurations and other options +// specified for field-level encryption. +type FieldLevelEncryption struct { + _ struct{} `type:"structure"` + + // A complex data type that includes the profile configurations specified for + // field-level encryption. + // + // FieldLevelEncryptionConfig is a required field + FieldLevelEncryptionConfig *FieldLevelEncryptionConfig `type:"structure" required:"true"` + + // The configuration ID for a field-level encryption configuration which includes + // a set of profiles that specify certain selected data fields to be encrypted + // by specific public keys. + // + // Id is a required field + Id *string `type:"string" required:"true"` - // The streaming distribution's information. - StreamingDistribution *StreamingDistribution `type:"structure"` + // The last time the field-level encryption configuration was changed. + // + // LastModifiedTime is a required field + LastModifiedTime *time.Time `type:"timestamp" required:"true"` } // String returns the string representation -func (s CreateStreamingDistributionWithTagsOutput) String() string { +func (s FieldLevelEncryption) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateStreamingDistributionWithTagsOutput) GoString() string { +func (s FieldLevelEncryption) GoString() string { return s.String() } -// SetETag sets the ETag field's value. -func (s *CreateStreamingDistributionWithTagsOutput) SetETag(v string) *CreateStreamingDistributionWithTagsOutput { - s.ETag = &v +// SetFieldLevelEncryptionConfig sets the FieldLevelEncryptionConfig field's value. +func (s *FieldLevelEncryption) SetFieldLevelEncryptionConfig(v *FieldLevelEncryptionConfig) *FieldLevelEncryption { + s.FieldLevelEncryptionConfig = v return s } -// SetLocation sets the Location field's value. -func (s *CreateStreamingDistributionWithTagsOutput) SetLocation(v string) *CreateStreamingDistributionWithTagsOutput { - s.Location = &v +// SetId sets the Id field's value. +func (s *FieldLevelEncryption) SetId(v string) *FieldLevelEncryption { + s.Id = &v return s } -// SetStreamingDistribution sets the StreamingDistribution field's value. -func (s *CreateStreamingDistributionWithTagsOutput) SetStreamingDistribution(v *StreamingDistribution) *CreateStreamingDistributionWithTagsOutput { - s.StreamingDistribution = v +// SetLastModifiedTime sets the LastModifiedTime field's value. 
+func (s *FieldLevelEncryption) SetLastModifiedTime(v time.Time) *FieldLevelEncryption { + s.LastModifiedTime = &v return s } -// A complex type that controls: -// -// * Whether CloudFront replaces HTTP status codes in the 4xx and 5xx range -// with custom error messages before returning the response to the viewer. -// -// -// * How long CloudFront caches HTTP status codes in the 4xx and 5xx range. -// -// For more information about custom error pages, see Customizing Error Responses -// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) -// in the Amazon CloudFront Developer Guide. -type CustomErrorResponse struct { +// A complex data type that includes the profile configurations specified for +// field-level encryption. +type FieldLevelEncryptionConfig struct { _ struct{} `type:"structure"` - // The minimum amount of time, in seconds, that you want CloudFront to cache - // the HTTP status code specified in ErrorCode. When this time period has elapsed, - // CloudFront queries your origin to see whether the problem that caused the - // error has been resolved and the requested object is now available. - // - // If you don't want to specify a value, include an empty element, , - // in the XML document. + // A unique number that ensures the request can't be replayed. // - // For more information, see Customizing Error Responses (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) - // in the Amazon CloudFront Developer Guide. - ErrorCachingMinTTL *int64 `type:"long"` + // CallerReference is a required field + CallerReference *string `type:"string" required:"true"` - // The HTTP status code for which you want to specify a custom error page and/or - // a caching duration. - // - // ErrorCode is a required field - ErrorCode *int64 `type:"integer" required:"true"` + // An optional comment about the configuration. + Comment *string `type:"string"` - // The HTTP status code that you want CloudFront to return to the viewer along - // with the custom error page. There are a variety of reasons that you might - // want CloudFront to return a status code different from the status code that - // your origin returned to CloudFront, for example: - // - // * Some Internet devices (some firewalls and corporate proxies, for example) - // intercept HTTP 4xx and 5xx and prevent the response from being returned - // to the viewer. If you substitute 200, the response typically won't be - // intercepted. - // - // * If you don't care about distinguishing among different client errors - // or server errors, you can specify 400 or 500 as the ResponseCode for all - // 4xx or 5xx errors. - // - // * You might want to return a 200 status code (OK) and static website so - // your customers don't know that your website is down. - // - // If you specify a value for ResponseCode, you must also specify a value for - // ResponsePagePath. If you don't want to specify a value, include an empty - // element, , in the XML document. - ResponseCode *string `type:"string"` + // A complex data type that specifies when to forward content if a content type + // isn't recognized and profiles to use as by default in a request if a query + // argument doesn't specify a profile to use. 
+ ContentTypeProfileConfig *ContentTypeProfileConfig `type:"structure"` - // The path to the custom error page that you want CloudFront to return to a - // viewer when your origin returns the HTTP status code specified by ErrorCode, - // for example, /4xx-errors/403-forbidden.html. If you want to store your objects - // and your custom error pages in different locations, your distribution must - // include a cache behavior for which the following is true: - // - // * The value of PathPattern matches the path to your custom error messages. - // For example, suppose you saved custom error pages for 4xx errors in an - // Amazon S3 bucket in a directory named /4xx-errors. Your distribution must - // include a cache behavior for which the path pattern routes requests for - // your custom error pages to that location, for example, /4xx-errors/*. - // - // - // * The value of TargetOriginId specifies the value of the ID element for - // the origin that contains your custom error pages. - // - // If you specify a value for ResponsePagePath, you must also specify a value - // for ResponseCode. If you don't want to specify a value, include an empty - // element, , in the XML document. - // - // We recommend that you store custom error pages in an Amazon S3 bucket. If - // you store custom error pages on an HTTP server and the server starts to return - // 5xx errors, CloudFront can't get the files that you want to return to viewers - // because the origin server is unavailable. - ResponsePagePath *string `type:"string"` + // A complex data type that specifies when to forward content if a profile isn't + // found and the profile that can be provided as a query argument in a request. + QueryArgProfileConfig *QueryArgProfileConfig `type:"structure"` } // String returns the string representation -func (s CustomErrorResponse) String() string { +func (s FieldLevelEncryptionConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CustomErrorResponse) GoString() string { +func (s FieldLevelEncryptionConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CustomErrorResponse) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CustomErrorResponse"} - if s.ErrorCode == nil { - invalidParams.Add(request.NewErrParamRequired("ErrorCode")) +func (s *FieldLevelEncryptionConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FieldLevelEncryptionConfig"} + if s.CallerReference == nil { + invalidParams.Add(request.NewErrParamRequired("CallerReference")) + } + if s.ContentTypeProfileConfig != nil { + if err := s.ContentTypeProfileConfig.Validate(); err != nil { + invalidParams.AddNested("ContentTypeProfileConfig", err.(request.ErrInvalidParams)) + } + } + if s.QueryArgProfileConfig != nil { + if err := s.QueryArgProfileConfig.Validate(); err != nil { + invalidParams.AddNested("QueryArgProfileConfig", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -4579,227 +8811,188 @@ func (s *CustomErrorResponse) Validate() error { return nil } -// SetErrorCachingMinTTL sets the ErrorCachingMinTTL field's value. -func (s *CustomErrorResponse) SetErrorCachingMinTTL(v int64) *CustomErrorResponse { - s.ErrorCachingMinTTL = &v +// SetCallerReference sets the CallerReference field's value. 
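For the `CustomErrorResponse` type documented above, `ResponseCode` and `ResponsePagePath` must be provided together, and `ErrorCachingMinTTL` bounds how long CloudFront caches the mapped status. A hypothetical example value (not taken from the SDK):

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

// customErrorResponses maps a 404 from the origin to a static error page,
// returned to viewers as a 200 and cached for five minutes. ResponseCode and
// ResponsePagePath are set together, as the documentation above requires.
func customErrorResponses() *cloudfront.CustomErrorResponses {
	return &cloudfront.CustomErrorResponses{
		Quantity: aws.Int64(1),
		Items: []*cloudfront.CustomErrorResponse{{
			ErrorCode:          aws.Int64(404),
			ResponseCode:       aws.String("200"),
			ResponsePagePath:   aws.String("/4xx-errors/404-not-found.html"),
			ErrorCachingMinTTL: aws.Int64(300),
		}},
	}
}
```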
+func (s *FieldLevelEncryptionConfig) SetCallerReference(v string) *FieldLevelEncryptionConfig { + s.CallerReference = &v return s } -// SetErrorCode sets the ErrorCode field's value. -func (s *CustomErrorResponse) SetErrorCode(v int64) *CustomErrorResponse { - s.ErrorCode = &v +// SetComment sets the Comment field's value. +func (s *FieldLevelEncryptionConfig) SetComment(v string) *FieldLevelEncryptionConfig { + s.Comment = &v return s } -// SetResponseCode sets the ResponseCode field's value. -func (s *CustomErrorResponse) SetResponseCode(v string) *CustomErrorResponse { - s.ResponseCode = &v +// SetContentTypeProfileConfig sets the ContentTypeProfileConfig field's value. +func (s *FieldLevelEncryptionConfig) SetContentTypeProfileConfig(v *ContentTypeProfileConfig) *FieldLevelEncryptionConfig { + s.ContentTypeProfileConfig = v return s } -// SetResponsePagePath sets the ResponsePagePath field's value. -func (s *CustomErrorResponse) SetResponsePagePath(v string) *CustomErrorResponse { - s.ResponsePagePath = &v +// SetQueryArgProfileConfig sets the QueryArgProfileConfig field's value. +func (s *FieldLevelEncryptionConfig) SetQueryArgProfileConfig(v *QueryArgProfileConfig) *FieldLevelEncryptionConfig { + s.QueryArgProfileConfig = v return s } -// A complex type that controls: -// -// * Whether CloudFront replaces HTTP status codes in the 4xx and 5xx range -// with custom error messages before returning the response to the viewer. -// -// * How long CloudFront caches HTTP status codes in the 4xx and 5xx range. -// -// For more information about custom error pages, see Customizing Error Responses -// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) -// in the Amazon CloudFront Developer Guide. -type CustomErrorResponses struct { +// List of field-level encrpytion configurations. +type FieldLevelEncryptionList struct { _ struct{} `type:"structure"` - // A complex type that contains a CustomErrorResponse element for each HTTP - // status code for which you want to specify a custom error page and/or a caching - // duration. - Items []*CustomErrorResponse `locationNameList:"CustomErrorResponse" type:"list"` + // An array of field-level encryption items. + Items []*FieldLevelEncryptionSummary `locationNameList:"FieldLevelEncryptionSummary" type:"list"` - // The number of HTTP status codes for which you want to specify a custom error - // page and/or a caching duration. If Quantity is 0, you can omit Items. + // The maximum number of elements you want in the response body. + // + // MaxItems is a required field + MaxItems *int64 `type:"integer" required:"true"` + + // If there are more elements to be listed, this element is present and contains + // the value that you can use for the Marker request parameter to continue listing + // your configurations where you left off. + NextMarker *string `type:"string"` + + // The number of field-level encryption items. // // Quantity is a required field Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s CustomErrorResponses) String() string { +func (s FieldLevelEncryptionList) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CustomErrorResponses) GoString() string { +func (s FieldLevelEncryptionList) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
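Only `CallerReference` is required in a `FieldLevelEncryptionConfig`; the content-type and query-argument profile blocks are optional. A minimal sketch using the generated setters (the helper name is invented for illustration):

```go
package example

import (
	"time"

	"github.com/aws/aws-sdk-go/service/cloudfront"
)

// minimalFieldLevelEncryptionConfig sets only the required CallerReference
// plus an optional Comment, leaving ContentTypeProfileConfig and
// QueryArgProfileConfig unset.
func minimalFieldLevelEncryptionConfig() (*cloudfront.FieldLevelEncryptionConfig, error) {
	cfg := (&cloudfront.FieldLevelEncryptionConfig{}).
		SetCallerReference(time.Now().Format(time.RFC3339Nano)).
		SetComment("example field-level encryption configuration")

	if err := cfg.Validate(); err != nil {
		return nil, err
	}
	return cfg, nil
}
```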
-func (s *CustomErrorResponses) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CustomErrorResponses"} - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } - if s.Items != nil { - for i, v := range s.Items { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) - } - } - } +// SetItems sets the Items field's value. +func (s *FieldLevelEncryptionList) SetItems(v []*FieldLevelEncryptionSummary) *FieldLevelEncryptionList { + s.Items = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetMaxItems sets the MaxItems field's value. +func (s *FieldLevelEncryptionList) SetMaxItems(v int64) *FieldLevelEncryptionList { + s.MaxItems = &v + return s } -// SetItems sets the Items field's value. -func (s *CustomErrorResponses) SetItems(v []*CustomErrorResponse) *CustomErrorResponses { - s.Items = v +// SetNextMarker sets the NextMarker field's value. +func (s *FieldLevelEncryptionList) SetNextMarker(v string) *FieldLevelEncryptionList { + s.NextMarker = &v return s } // SetQuantity sets the Quantity field's value. -func (s *CustomErrorResponses) SetQuantity(v int64) *CustomErrorResponses { +func (s *FieldLevelEncryptionList) SetQuantity(v int64) *FieldLevelEncryptionList { s.Quantity = &v return s } -// A complex type that contains the list of Custom Headers for each origin. -type CustomHeaders struct { +// A complex data type for field-level encryption profiles. +type FieldLevelEncryptionProfile struct { _ struct{} `type:"structure"` - // Optional: A list that contains one OriginCustomHeader element for each custom - // header that you want CloudFront to forward to the origin. If Quantity is - // 0, omit Items. - Items []*OriginCustomHeader `locationNameList:"OriginCustomHeader" type:"list"` + // A complex data type that includes the profile name and the encryption entities + // for the field-level encryption profile. + // + // FieldLevelEncryptionProfileConfig is a required field + FieldLevelEncryptionProfileConfig *FieldLevelEncryptionProfileConfig `type:"structure" required:"true"` - // The number of custom headers, if any, for this distribution. + // The ID for a field-level encryption profile configuration which includes + // a set of profiles that specify certain selected data fields to be encrypted + // by specific public keys. // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // Id is a required field + Id *string `type:"string" required:"true"` + + // The last time the field-level encryption profile was updated. + // + // LastModifiedTime is a required field + LastModifiedTime *time.Time `type:"timestamp" required:"true"` } // String returns the string representation -func (s CustomHeaders) String() string { +func (s FieldLevelEncryptionProfile) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CustomHeaders) GoString() string { +func (s FieldLevelEncryptionProfile) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *CustomHeaders) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CustomHeaders"} - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } - if s.Items != nil { - for i, v := range s.Items { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) - } - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetFieldLevelEncryptionProfileConfig sets the FieldLevelEncryptionProfileConfig field's value. +func (s *FieldLevelEncryptionProfile) SetFieldLevelEncryptionProfileConfig(v *FieldLevelEncryptionProfileConfig) *FieldLevelEncryptionProfile { + s.FieldLevelEncryptionProfileConfig = v + return s } -// SetItems sets the Items field's value. -func (s *CustomHeaders) SetItems(v []*OriginCustomHeader) *CustomHeaders { - s.Items = v +// SetId sets the Id field's value. +func (s *FieldLevelEncryptionProfile) SetId(v string) *FieldLevelEncryptionProfile { + s.Id = &v return s } -// SetQuantity sets the Quantity field's value. -func (s *CustomHeaders) SetQuantity(v int64) *CustomHeaders { - s.Quantity = &v +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *FieldLevelEncryptionProfile) SetLastModifiedTime(v time.Time) *FieldLevelEncryptionProfile { + s.LastModifiedTime = &v return s } -// A customer origin. -type CustomOriginConfig struct { +// A complex data type of profiles for the field-level encryption. +type FieldLevelEncryptionProfileConfig struct { _ struct{} `type:"structure"` - // The HTTP port the custom origin listens on. - // - // HTTPPort is a required field - HTTPPort *int64 `type:"integer" required:"true"` - - // The HTTPS port the custom origin listens on. - // - // HTTPSPort is a required field - HTTPSPort *int64 `type:"integer" required:"true"` - - // You can create a custom keep-alive timeout. All timeout units are in seconds. - // The default keep-alive timeout is 5 seconds, but you can configure custom - // timeout lengths using the CloudFront API. The minimum timeout length is 1 - // second; the maximum is 60 seconds. + // A unique number that ensures the request can't be replayed. // - // If you need to increase the maximum time limit, contact the AWS Support Center - // (https://console.aws.amazon.com/support/home#/). - OriginKeepaliveTimeout *int64 `type:"integer"` + // CallerReference is a required field + CallerReference *string `type:"string" required:"true"` - // The origin protocol policy to apply to your origin. - // - // OriginProtocolPolicy is a required field - OriginProtocolPolicy *string `type:"string" required:"true" enum:"OriginProtocolPolicy"` + // An optional comment for the field-level encryption profile. + Comment *string `type:"string"` - // You can create a custom origin read timeout. All timeout units are in seconds. - // The default origin read timeout is 30 seconds, but you can configure custom - // timeout lengths using the CloudFront API. The minimum timeout length is 4 - // seconds; the maximum is 60 seconds. + // A complex data type of encryption entities for the field-level encryption + // profile that include the public key ID, provider, and field patterns for + // specifying which fields to encrypt with this key. // - // If you need to increase the maximum time limit, contact the AWS Support Center - // (https://console.aws.amazon.com/support/home#/). 
- OriginReadTimeout *int64 `type:"integer"` + // EncryptionEntities is a required field + EncryptionEntities *EncryptionEntities `type:"structure" required:"true"` - // The SSL/TLS protocols that you want CloudFront to use when communicating - // with your origin over HTTPS. - OriginSslProtocols *OriginSslProtocols `type:"structure"` + // Profile name for the field-level encryption profile. + // + // Name is a required field + Name *string `type:"string" required:"true"` } // String returns the string representation -func (s CustomOriginConfig) String() string { +func (s FieldLevelEncryptionProfileConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CustomOriginConfig) GoString() string { +func (s FieldLevelEncryptionProfileConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CustomOriginConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CustomOriginConfig"} - if s.HTTPPort == nil { - invalidParams.Add(request.NewErrParamRequired("HTTPPort")) +func (s *FieldLevelEncryptionProfileConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FieldLevelEncryptionProfileConfig"} + if s.CallerReference == nil { + invalidParams.Add(request.NewErrParamRequired("CallerReference")) } - if s.HTTPSPort == nil { - invalidParams.Add(request.NewErrParamRequired("HTTPSPort")) + if s.EncryptionEntities == nil { + invalidParams.Add(request.NewErrParamRequired("EncryptionEntities")) } - if s.OriginProtocolPolicy == nil { - invalidParams.Add(request.NewErrParamRequired("OriginProtocolPolicy")) + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) } - if s.OriginSslProtocols != nil { - if err := s.OriginSslProtocols.Validate(); err != nil { - invalidParams.AddNested("OriginSslProtocols", err.(request.ErrInvalidParams)) + if s.EncryptionEntities != nil { + if err := s.EncryptionEntities.Validate(); err != nil { + invalidParams.AddNested("EncryptionEntities", err.(request.ErrInvalidParams)) } } @@ -4809,318 +9002,251 @@ func (s *CustomOriginConfig) Validate() error { return nil } -// SetHTTPPort sets the HTTPPort field's value. -func (s *CustomOriginConfig) SetHTTPPort(v int64) *CustomOriginConfig { - s.HTTPPort = &v - return s -} - -// SetHTTPSPort sets the HTTPSPort field's value. -func (s *CustomOriginConfig) SetHTTPSPort(v int64) *CustomOriginConfig { - s.HTTPSPort = &v - return s -} - -// SetOriginKeepaliveTimeout sets the OriginKeepaliveTimeout field's value. -func (s *CustomOriginConfig) SetOriginKeepaliveTimeout(v int64) *CustomOriginConfig { - s.OriginKeepaliveTimeout = &v +// SetCallerReference sets the CallerReference field's value. +func (s *FieldLevelEncryptionProfileConfig) SetCallerReference(v string) *FieldLevelEncryptionProfileConfig { + s.CallerReference = &v return s } -// SetOriginProtocolPolicy sets the OriginProtocolPolicy field's value. -func (s *CustomOriginConfig) SetOriginProtocolPolicy(v string) *CustomOriginConfig { - s.OriginProtocolPolicy = &v +// SetComment sets the Comment field's value. +func (s *FieldLevelEncryptionProfileConfig) SetComment(v string) *FieldLevelEncryptionProfileConfig { + s.Comment = &v return s } -// SetOriginReadTimeout sets the OriginReadTimeout field's value. -func (s *CustomOriginConfig) SetOriginReadTimeout(v int64) *CustomOriginConfig { - s.OriginReadTimeout = &v +// SetEncryptionEntities sets the EncryptionEntities field's value. 
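For `CustomOriginConfig`, only the two ports and `OriginProtocolPolicy` are required; the keep-alive and read timeouts described above default to 5 and 30 seconds. An illustrative sketch (the `OriginProtocolPolicyHttpsOnly` constant is defined elsewhere in this generated package):

```go
package example

import (
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

// httpsOnlyOrigin describes a custom (non-S3) origin that CloudFront contacts
// over HTTPS only, keeping the documented default timeouts of 5 seconds
// (keep-alive) and 30 seconds (read).
func httpsOnlyOrigin() *cloudfront.CustomOriginConfig {
	return (&cloudfront.CustomOriginConfig{}).
		SetHTTPPort(80).
		SetHTTPSPort(443).
		SetOriginProtocolPolicy(cloudfront.OriginProtocolPolicyHttpsOnly).
		SetOriginKeepaliveTimeout(5).
		SetOriginReadTimeout(30)
}
```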
+func (s *FieldLevelEncryptionProfileConfig) SetEncryptionEntities(v *EncryptionEntities) *FieldLevelEncryptionProfileConfig { + s.EncryptionEntities = v return s } -// SetOriginSslProtocols sets the OriginSslProtocols field's value. -func (s *CustomOriginConfig) SetOriginSslProtocols(v *OriginSslProtocols) *CustomOriginConfig { - s.OriginSslProtocols = v +// SetName sets the Name field's value. +func (s *FieldLevelEncryptionProfileConfig) SetName(v string) *FieldLevelEncryptionProfileConfig { + s.Name = &v return s } -// A complex type that describes the default cache behavior if you don't specify -// a CacheBehavior element or if files don't match any of the values of PathPattern -// in CacheBehavior elements. You must create exactly one default cache behavior. -type DefaultCacheBehavior struct { +// List of field-level encryption profiles. +type FieldLevelEncryptionProfileList struct { _ struct{} `type:"structure"` - // A complex type that controls which HTTP methods CloudFront processes and - // forwards to your Amazon S3 bucket or your custom origin. There are three - // choices: - // - // * CloudFront forwards only GET and HEAD requests. - // - // * CloudFront forwards only GET, HEAD, and OPTIONS requests. + // The field-level encryption profile items. + Items []*FieldLevelEncryptionProfileSummary `locationNameList:"FieldLevelEncryptionProfileSummary" type:"list"` + + // The maximum number of field-level encryption profiles you want in the response + // body. // - // * CloudFront forwards GET, HEAD, OPTIONS, PUT, PATCH, POST, and DELETE - // requests. + // MaxItems is a required field + MaxItems *int64 `type:"integer" required:"true"` + + // If there are more elements to be listed, this element is present and contains + // the value that you can use for the Marker request parameter to continue listing + // your profiles where you left off. + NextMarker *string `type:"string"` + + // The number of field-level encryption profiles. // - // If you pick the third choice, you may need to restrict access to your Amazon - // S3 bucket or to your custom origin so users can't perform operations that - // you don't want them to. For example, you might not want users to have permissions - // to delete objects from your origin. - AllowedMethods *AllowedMethods `type:"structure"` + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} - // Whether you want CloudFront to automatically compress certain files for this - // cache behavior. If so, specify true; if not, specify false. For more information, - // see Serving Compressed Files (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html) - // in the Amazon CloudFront Developer Guide. - Compress *bool `type:"boolean"` +// String returns the string representation +func (s FieldLevelEncryptionProfileList) String() string { + return awsutil.Prettify(s) +} - // The default amount of time that you want objects to stay in CloudFront caches - // before CloudFront forwards another request to your origin to determine whether - // the object has been updated. The value that you specify applies only when - // your origin does not add HTTP headers such as Cache-Control max-age, Cache-Control - // s-maxage, and Expires to objects. For more information, see Specifying How - // Long Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) - // in the Amazon CloudFront Developer Guide. 
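// Illustrative sketch (editor-added, not part of the generated diff): one way a
// caller might assemble the new FieldLevelEncryptionProfileConfig type using the
// fluent setters added above. All field values below are placeholders, and the
// EncryptionEntities value is left empty because its members are defined
// elsewhere in this file and would still need to be populated.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	profileCfg := (&cloudfront.FieldLevelEncryptionProfileConfig{}).
		SetCallerReference("example-caller-reference"). // placeholder unique value
		SetName("example-profile").
		SetComment("example field-level encryption profile").
		SetEncryptionEntities(&cloudfront.EncryptionEntities{}) // required members defined elsewhere in this file

	// Validate reports the required members (CallerReference, EncryptionEntities,
	// Name) that are missing or invalid before the config is sent to the API.
	if err := profileCfg.Validate(); err != nil {
		fmt.Println("invalid profile config:", err)
	}
}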
- DefaultTTL *int64 `type:"long"` +// GoString returns the string representation +func (s FieldLevelEncryptionProfileList) GoString() string { + return s.String() +} - // A complex type that specifies how CloudFront handles query strings and cookies. - // - // ForwardedValues is a required field - ForwardedValues *ForwardedValues `type:"structure" required:"true"` +// SetItems sets the Items field's value. +func (s *FieldLevelEncryptionProfileList) SetItems(v []*FieldLevelEncryptionProfileSummary) *FieldLevelEncryptionProfileList { + s.Items = v + return s +} - // A complex type that contains zero or more Lambda function associations for - // a cache behavior. - LambdaFunctionAssociations *LambdaFunctionAssociations `type:"structure"` +// SetMaxItems sets the MaxItems field's value. +func (s *FieldLevelEncryptionProfileList) SetMaxItems(v int64) *FieldLevelEncryptionProfileList { + s.MaxItems = &v + return s +} - MaxTTL *int64 `type:"long"` +// SetNextMarker sets the NextMarker field's value. +func (s *FieldLevelEncryptionProfileList) SetNextMarker(v string) *FieldLevelEncryptionProfileList { + s.NextMarker = &v + return s +} - // The minimum amount of time that you want objects to stay in CloudFront caches - // before CloudFront forwards another request to your origin to determine whether - // the object has been updated. For more information, see Specifying How Long - // Objects and Errors Stay in a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) - // in the Amazon Amazon CloudFront Developer Guide. - // - // You must specify 0 for MinTTL if you configure CloudFront to forward all - // headers to your origin (under Headers, if you specify 1 for Quantity and - // * for Name). - // - // MinTTL is a required field - MinTTL *int64 `type:"long" required:"true"` +// SetQuantity sets the Quantity field's value. +func (s *FieldLevelEncryptionProfileList) SetQuantity(v int64) *FieldLevelEncryptionProfileList { + s.Quantity = &v + return s +} - // Indicates whether you want to distribute media files in the Microsoft Smooth - // Streaming format using the origin that is associated with this cache behavior. - // If so, specify true; if not, specify false. If you specify true for SmoothStreaming, - // you can still distribute other content using this cache behavior if the content - // matches the value of PathPattern. - SmoothStreaming *bool `type:"boolean"` +// The field-level encryption profile summary. +type FieldLevelEncryptionProfileSummary struct { + _ struct{} `type:"structure"` - // The value of ID for the origin that you want CloudFront to route requests - // to when a request matches the path pattern either for a cache behavior or - // for the default cache behavior. - // - // TargetOriginId is a required field - TargetOriginId *string `type:"string" required:"true"` + // An optional comment for the field-level encryption profile summary. + Comment *string `type:"string"` - // A complex type that specifies the AWS accounts, if any, that you want to - // allow to create signed URLs for private content. - // - // If you want to require signed URLs in requests for objects in the target - // origin that match the PathPattern for this cache behavior, specify true for - // Enabled, and specify the applicable values for Quantity and Items. 
For more - // information, see Serving Private Content through CloudFront (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) - // in the Amazon Amazon CloudFront Developer Guide. - // - // If you don't want to require signed URLs in requests for objects that match - // PathPattern, specify false for Enabled and 0 for Quantity. Omit Items. + // A complex data type of encryption entities for the field-level encryption + // profile that include the public key ID, provider, and field patterns for + // specifying which fields to encrypt with this key. // - // To add, change, or remove one or more trusted signers, change Enabled to - // true (if it's currently false), change Quantity as applicable, and specify - // all of the trusted signers that you want to include in the updated distribution. - // - // TrustedSigners is a required field - TrustedSigners *TrustedSigners `type:"structure" required:"true"` + // EncryptionEntities is a required field + EncryptionEntities *EncryptionEntities `type:"structure" required:"true"` - // The protocol that viewers can use to access the files in the origin specified - // by TargetOriginId when a request matches the path pattern in PathPattern. - // You can specify the following options: - // - // * allow-all: Viewers can use HTTP or HTTPS. - // - // * redirect-to-https: If a viewer submits an HTTP request, CloudFront returns - // an HTTP status code of 301 (Moved Permanently) to the viewer along with - // the HTTPS URL. The viewer then resubmits the request using the new URL. - // - // * https-only: If a viewer sends an HTTP request, CloudFront returns an - // HTTP status code of 403 (Forbidden). + // ID for the field-level encryption profile summary. // - // For more information about requiring the HTTPS protocol, see Using an HTTPS - // Connection to Access Your Objects (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html) - // in the Amazon CloudFront Developer Guide. + // Id is a required field + Id *string `type:"string" required:"true"` + + // The time when the the field-level encryption profile summary was last updated. // - // The only way to guarantee that viewers retrieve an object that was fetched - // from the origin using HTTPS is never to use any other protocol to fetch the - // object. If you have recently changed from HTTP to HTTPS, we recommend that - // you clear your objects' cache because cached objects are protocol agnostic. - // That means that an edge location will return an object from the cache regardless - // of whether the current request protocol matches the protocol used previously. - // For more information, see Specifying How Long Objects and Errors Stay in - // a CloudFront Edge Cache (Expiration) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html) - // in the Amazon CloudFront Developer Guide. + // LastModifiedTime is a required field + LastModifiedTime *time.Time `type:"timestamp" required:"true"` + + // Name for the field-level encryption profile summary. 
// - // ViewerProtocolPolicy is a required field - ViewerProtocolPolicy *string `type:"string" required:"true" enum:"ViewerProtocolPolicy"` + // Name is a required field + Name *string `type:"string" required:"true"` } // String returns the string representation -func (s DefaultCacheBehavior) String() string { +func (s FieldLevelEncryptionProfileSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DefaultCacheBehavior) GoString() string { +func (s FieldLevelEncryptionProfileSummary) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DefaultCacheBehavior) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DefaultCacheBehavior"} - if s.ForwardedValues == nil { - invalidParams.Add(request.NewErrParamRequired("ForwardedValues")) - } - if s.MinTTL == nil { - invalidParams.Add(request.NewErrParamRequired("MinTTL")) - } - if s.TargetOriginId == nil { - invalidParams.Add(request.NewErrParamRequired("TargetOriginId")) - } - if s.TrustedSigners == nil { - invalidParams.Add(request.NewErrParamRequired("TrustedSigners")) - } - if s.ViewerProtocolPolicy == nil { - invalidParams.Add(request.NewErrParamRequired("ViewerProtocolPolicy")) - } - if s.AllowedMethods != nil { - if err := s.AllowedMethods.Validate(); err != nil { - invalidParams.AddNested("AllowedMethods", err.(request.ErrInvalidParams)) - } - } - if s.ForwardedValues != nil { - if err := s.ForwardedValues.Validate(); err != nil { - invalidParams.AddNested("ForwardedValues", err.(request.ErrInvalidParams)) - } - } - if s.LambdaFunctionAssociations != nil { - if err := s.LambdaFunctionAssociations.Validate(); err != nil { - invalidParams.AddNested("LambdaFunctionAssociations", err.(request.ErrInvalidParams)) - } - } - if s.TrustedSigners != nil { - if err := s.TrustedSigners.Validate(); err != nil { - invalidParams.AddNested("TrustedSigners", err.(request.ErrInvalidParams)) - } - } +// SetComment sets the Comment field's value. +func (s *FieldLevelEncryptionProfileSummary) SetComment(v string) *FieldLevelEncryptionProfileSummary { + s.Comment = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetEncryptionEntities sets the EncryptionEntities field's value. +func (s *FieldLevelEncryptionProfileSummary) SetEncryptionEntities(v *EncryptionEntities) *FieldLevelEncryptionProfileSummary { + s.EncryptionEntities = v + return s } -// SetAllowedMethods sets the AllowedMethods field's value. -func (s *DefaultCacheBehavior) SetAllowedMethods(v *AllowedMethods) *DefaultCacheBehavior { - s.AllowedMethods = v +// SetId sets the Id field's value. +func (s *FieldLevelEncryptionProfileSummary) SetId(v string) *FieldLevelEncryptionProfileSummary { + s.Id = &v return s } -// SetCompress sets the Compress field's value. -func (s *DefaultCacheBehavior) SetCompress(v bool) *DefaultCacheBehavior { - s.Compress = &v +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *FieldLevelEncryptionProfileSummary) SetLastModifiedTime(v time.Time) *FieldLevelEncryptionProfileSummary { + s.LastModifiedTime = &v return s } -// SetDefaultTTL sets the DefaultTTL field's value. -func (s *DefaultCacheBehavior) SetDefaultTTL(v int64) *DefaultCacheBehavior { - s.DefaultTTL = &v +// SetName sets the Name field's value. 
+func (s *FieldLevelEncryptionProfileSummary) SetName(v string) *FieldLevelEncryptionProfileSummary { + s.Name = &v return s } -// SetForwardedValues sets the ForwardedValues field's value. -func (s *DefaultCacheBehavior) SetForwardedValues(v *ForwardedValues) *DefaultCacheBehavior { - s.ForwardedValues = v - return s +// A summary of a field-level encryption item. +type FieldLevelEncryptionSummary struct { + _ struct{} `type:"structure"` + + // An optional comment about the field-level encryption item. + Comment *string `type:"string"` + + // A summary of a content type-profile mapping. + ContentTypeProfileConfig *ContentTypeProfileConfig `type:"structure"` + + // The unique ID of a field-level encryption item. + // + // Id is a required field + Id *string `type:"string" required:"true"` + + // The last time that the summary of field-level encryption items was modified. + // + // LastModifiedTime is a required field + LastModifiedTime *time.Time `type:"timestamp" required:"true"` + + // A summary of a query argument-profile mapping. + QueryArgProfileConfig *QueryArgProfileConfig `type:"structure"` } -// SetLambdaFunctionAssociations sets the LambdaFunctionAssociations field's value. -func (s *DefaultCacheBehavior) SetLambdaFunctionAssociations(v *LambdaFunctionAssociations) *DefaultCacheBehavior { - s.LambdaFunctionAssociations = v - return s +// String returns the string representation +func (s FieldLevelEncryptionSummary) String() string { + return awsutil.Prettify(s) } -// SetMaxTTL sets the MaxTTL field's value. -func (s *DefaultCacheBehavior) SetMaxTTL(v int64) *DefaultCacheBehavior { - s.MaxTTL = &v - return s +// GoString returns the string representation +func (s FieldLevelEncryptionSummary) GoString() string { + return s.String() } -// SetMinTTL sets the MinTTL field's value. -func (s *DefaultCacheBehavior) SetMinTTL(v int64) *DefaultCacheBehavior { - s.MinTTL = &v +// SetComment sets the Comment field's value. +func (s *FieldLevelEncryptionSummary) SetComment(v string) *FieldLevelEncryptionSummary { + s.Comment = &v return s } -// SetSmoothStreaming sets the SmoothStreaming field's value. -func (s *DefaultCacheBehavior) SetSmoothStreaming(v bool) *DefaultCacheBehavior { - s.SmoothStreaming = &v +// SetContentTypeProfileConfig sets the ContentTypeProfileConfig field's value. +func (s *FieldLevelEncryptionSummary) SetContentTypeProfileConfig(v *ContentTypeProfileConfig) *FieldLevelEncryptionSummary { + s.ContentTypeProfileConfig = v return s } -// SetTargetOriginId sets the TargetOriginId field's value. -func (s *DefaultCacheBehavior) SetTargetOriginId(v string) *DefaultCacheBehavior { - s.TargetOriginId = &v +// SetId sets the Id field's value. +func (s *FieldLevelEncryptionSummary) SetId(v string) *FieldLevelEncryptionSummary { + s.Id = &v return s } -// SetTrustedSigners sets the TrustedSigners field's value. -func (s *DefaultCacheBehavior) SetTrustedSigners(v *TrustedSigners) *DefaultCacheBehavior { - s.TrustedSigners = v +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *FieldLevelEncryptionSummary) SetLastModifiedTime(v time.Time) *FieldLevelEncryptionSummary { + s.LastModifiedTime = &v return s } -// SetViewerProtocolPolicy sets the ViewerProtocolPolicy field's value. -func (s *DefaultCacheBehavior) SetViewerProtocolPolicy(v string) *DefaultCacheBehavior { - s.ViewerProtocolPolicy = &v +// SetQueryArgProfileConfig sets the QueryArgProfileConfig field's value. 
+func (s *FieldLevelEncryptionSummary) SetQueryArgProfileConfig(v *QueryArgProfileConfig) *FieldLevelEncryptionSummary { + s.QueryArgProfileConfig = v return s } -// Deletes a origin access identity. -type DeleteCloudFrontOriginAccessIdentityInput struct { +// A complex data type that includes the field patterns to match for field-level +// encryption. +type FieldPatterns struct { _ struct{} `type:"structure"` - // The origin access identity's ID. - // - // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + // An array of the field-level encryption field patterns. + Items []*string `locationNameList:"FieldPattern" type:"list"` - // The value of the ETag header you received from a previous GET or PUT request. - // For example: E2QWRUHAPOMQZL. - IfMatch *string `location:"header" locationName:"If-Match" type:"string"` + // The number of field-level encryption field patterns. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s DeleteCloudFrontOriginAccessIdentityInput) String() string { +func (s FieldPatterns) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteCloudFrontOriginAccessIdentityInput) GoString() string { +func (s FieldPatterns) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteCloudFrontOriginAccessIdentityInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteCloudFrontOriginAccessIdentityInput"} - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) +func (s *FieldPatterns) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FieldPatterns"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) } if invalidParams.Len() > 0 { @@ -5129,95 +9255,100 @@ func (s *DeleteCloudFrontOriginAccessIdentityInput) Validate() error { return nil } -// SetId sets the Id field's value. -func (s *DeleteCloudFrontOriginAccessIdentityInput) SetId(v string) *DeleteCloudFrontOriginAccessIdentityInput { - s.Id = &v +// SetItems sets the Items field's value. +func (s *FieldPatterns) SetItems(v []*string) *FieldPatterns { + s.Items = v return s } -// SetIfMatch sets the IfMatch field's value. -func (s *DeleteCloudFrontOriginAccessIdentityInput) SetIfMatch(v string) *DeleteCloudFrontOriginAccessIdentityInput { - s.IfMatch = &v +// SetQuantity sets the Quantity field's value. +func (s *FieldPatterns) SetQuantity(v int64) *FieldPatterns { + s.Quantity = &v return s } -type DeleteCloudFrontOriginAccessIdentityOutput struct { +// A complex type that specifies how CloudFront handles query strings and cookies. +type ForwardedValues struct { _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteCloudFrontOriginAccessIdentityOutput) String() string { - return awsutil.Prettify(s) -} -// GoString returns the string representation -func (s DeleteCloudFrontOriginAccessIdentityOutput) GoString() string { - return s.String() -} + // A complex type that specifies whether you want CloudFront to forward cookies + // to the origin and, if so, which ones. For more information about forwarding + // cookies to the origin, see How CloudFront Forwards, Caches, and Logs Cookies + // (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html) + // in the Amazon CloudFront Developer Guide. 
+ // + // Cookies is a required field + Cookies *CookiePreference `type:"structure" required:"true"` -// This action deletes a web distribution. To delete a web distribution using -// the CloudFront API, perform the following steps. -// -// To delete a web distribution using the CloudFront API: -// -// Disable the web distribution -// -// Submit a GET Distribution Config request to get the current configuration -// and the Etag header for the distribution. -// -// Update the XML document that was returned in the response to your GET Distribution -// Config request to change the value of Enabled to false. -// -// Submit a PUT Distribution Config request to update the configuration for -// your distribution. In the request body, include the XML document that you -// updated in Step 3. Set the value of the HTTP If-Match header to the value -// of the ETag header that CloudFront returned when you submitted the GET Distribution -// Config request in Step 2. -// -// Review the response to the PUT Distribution Config request to confirm that -// the distribution was successfully disabled. -// -// Submit a GET Distribution request to confirm that your changes have propagated. -// When propagation is complete, the value of Status is Deployed. -// -// Submit a DELETE Distribution request. Set the value of the HTTP If-Match -// header to the value of the ETag header that CloudFront returned when you -// submitted the GET Distribution Config request in Step 6. -// -// Review the response to your DELETE Distribution request to confirm that the -// distribution was successfully deleted. -// -// For information about deleting a distribution using the CloudFront console, -// see Deleting a Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowToDeleteDistribution.html) -// in the Amazon CloudFront Developer Guide. -type DeleteDistributionInput struct { - _ struct{} `type:"structure"` + // A complex type that specifies the Headers, if any, that you want CloudFront + // to base caching on for this cache behavior. + Headers *Headers `type:"structure"` - // The distribution ID. + // Indicates whether you want CloudFront to forward query strings to the origin + // that is associated with this cache behavior and cache based on the query + // string parameters. CloudFront behavior depends on the value of QueryString + // and on the values that you specify for QueryStringCacheKeys, if any: // - // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + // If you specify true for QueryString and you don't specify any values for + // QueryStringCacheKeys, CloudFront forwards all query string parameters to + // the origin and caches based on all query string parameters. Depending on + // how many query string parameters and values you have, this can adversely + // affect performance because CloudFront must forward more requests to the origin. + // + // If you specify true for QueryString and you specify one or more values for + // QueryStringCacheKeys, CloudFront forwards all query string parameters to + // the origin, but it only caches based on the query string parameters that + // you specify. + // + // If you specify false for QueryString, CloudFront doesn't forward any query + // string parameters to the origin, and doesn't cache based on query string + // parameters. 
+ // + // For more information, see Configuring CloudFront to Cache Based on Query + // String Parameters (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html) + // in the Amazon CloudFront Developer Guide. + // + // QueryString is a required field + QueryString *bool `type:"boolean" required:"true"` - // The value of the ETag header that you received when you disabled the distribution. - // For example: E2QWRUHAPOMQZL. - IfMatch *string `location:"header" locationName:"If-Match" type:"string"` + // A complex type that contains information about the query string parameters + // that you want CloudFront to use for caching for this cache behavior. + QueryStringCacheKeys *QueryStringCacheKeys `type:"structure"` } // String returns the string representation -func (s DeleteDistributionInput) String() string { +func (s ForwardedValues) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDistributionInput) GoString() string { +func (s ForwardedValues) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDistributionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDistributionInput"} - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) +func (s *ForwardedValues) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ForwardedValues"} + if s.Cookies == nil { + invalidParams.Add(request.NewErrParamRequired("Cookies")) + } + if s.QueryString == nil { + invalidParams.Add(request.NewErrParamRequired("QueryString")) + } + if s.Cookies != nil { + if err := s.Cookies.Validate(); err != nil { + invalidParams.AddNested("Cookies", err.(request.ErrInvalidParams)) + } + } + if s.Headers != nil { + if err := s.Headers.Validate(); err != nil { + invalidParams.AddNested("Headers", err.(request.ErrInvalidParams)) + } + } + if s.QueryStringCacheKeys != nil { + if err := s.QueryStringCacheKeys.Validate(); err != nil { + invalidParams.AddNested("QueryStringCacheKeys", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -5226,54 +9357,91 @@ func (s *DeleteDistributionInput) Validate() error { return nil } -// SetId sets the Id field's value. -func (s *DeleteDistributionInput) SetId(v string) *DeleteDistributionInput { - s.Id = &v +// SetCookies sets the Cookies field's value. +func (s *ForwardedValues) SetCookies(v *CookiePreference) *ForwardedValues { + s.Cookies = v return s } -// SetIfMatch sets the IfMatch field's value. -func (s *DeleteDistributionInput) SetIfMatch(v string) *DeleteDistributionInput { - s.IfMatch = &v +// SetHeaders sets the Headers field's value. +func (s *ForwardedValues) SetHeaders(v *Headers) *ForwardedValues { + s.Headers = v return s } -type DeleteDistributionOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteDistributionOutput) String() string { - return awsutil.Prettify(s) +// SetQueryString sets the QueryString field's value. +func (s *ForwardedValues) SetQueryString(v bool) *ForwardedValues { + s.QueryString = &v + return s } -// GoString returns the string representation -func (s DeleteDistributionOutput) GoString() string { - return s.String() +// SetQueryStringCacheKeys sets the QueryStringCacheKeys field's value. 
+func (s *ForwardedValues) SetQueryStringCacheKeys(v *QueryStringCacheKeys) *ForwardedValues { + s.QueryStringCacheKeys = v + return s } -type DeleteServiceLinkedRoleInput struct { +// A complex type that controls the countries in which your content is distributed. +// CloudFront determines the location of your users using MaxMind GeoIP databases. +type GeoRestriction struct { _ struct{} `type:"structure"` - // RoleName is a required field - RoleName *string `location:"uri" locationName:"RoleName" type:"string" required:"true"` + // A complex type that contains a Location element for each country in which + // you want CloudFront either to distribute your content (whitelist) or not + // distribute your content (blacklist). + // + // The Location element is a two-letter, uppercase country code for a country + // that you want to include in your blacklist or whitelist. Include one Location + // element for each country. + // + // CloudFront and MaxMind both use ISO 3166 country codes. For the current list + // of countries and the corresponding codes, see ISO 3166-1-alpha-2 code on + // the International Organization for Standardization website. You can also + // refer to the country list on the CloudFront console, which includes both + // country names and codes. + Items []*string `locationNameList:"Location" type:"list"` + + // When geo restriction is enabled, this is the number of countries in your + // whitelist or blacklist. Otherwise, when it is not enabled, Quantity is 0, + // and you can omit Items. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` + + // The method that you want to use to restrict distribution of your content + // by country: + // + // * none: No geo restriction is enabled, meaning access to content is not + // restricted by client geo location. + // + // * blacklist: The Location elements specify the countries in which you + // don't want CloudFront to distribute your content. + // + // * whitelist: The Location elements specify the countries in which you + // want CloudFront to distribute your content. + // + // RestrictionType is a required field + RestrictionType *string `type:"string" required:"true" enum:"GeoRestrictionType"` } // String returns the string representation -func (s DeleteServiceLinkedRoleInput) String() string { +func (s GeoRestriction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteServiceLinkedRoleInput) GoString() string { +func (s GeoRestriction) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteServiceLinkedRoleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteServiceLinkedRoleInput"} - if s.RoleName == nil { - invalidParams.Add(request.NewErrParamRequired("RoleName")) +func (s *GeoRestriction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GeoRestriction"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + if s.RestrictionType == nil { + invalidParams.Add(request.NewErrParamRequired("RestrictionType")) } if invalidParams.Len() > 0 { @@ -5282,53 +9450,48 @@ func (s *DeleteServiceLinkedRoleInput) Validate() error { return nil } -// SetRoleName sets the RoleName field's value. -func (s *DeleteServiceLinkedRoleInput) SetRoleName(v string) *DeleteServiceLinkedRoleInput { - s.RoleName = &v +// SetItems sets the Items field's value. 
+func (s *GeoRestriction) SetItems(v []*string) *GeoRestriction { + s.Items = v return s } -type DeleteServiceLinkedRoleOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteServiceLinkedRoleOutput) String() string { - return awsutil.Prettify(s) +// SetQuantity sets the Quantity field's value. +func (s *GeoRestriction) SetQuantity(v int64) *GeoRestriction { + s.Quantity = &v + return s } -// GoString returns the string representation -func (s DeleteServiceLinkedRoleOutput) GoString() string { - return s.String() +// SetRestrictionType sets the RestrictionType field's value. +func (s *GeoRestriction) SetRestrictionType(v string) *GeoRestriction { + s.RestrictionType = &v + return s } -// The request to delete a streaming distribution. -type DeleteStreamingDistributionInput struct { +// The origin access identity's configuration information. For more information, +// see CloudFrontOriginAccessIdentityConfigComplexType. +type GetCloudFrontOriginAccessIdentityConfigInput struct { _ struct{} `type:"structure"` - // The distribution ID. + // The identity's ID. // // Id is a required field Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` - - // The value of the ETag header that you received when you disabled the streaming - // distribution. For example: E2QWRUHAPOMQZL. - IfMatch *string `location:"header" locationName:"If-Match" type:"string"` } // String returns the string representation -func (s DeleteStreamingDistributionInput) String() string { +func (s GetCloudFrontOriginAccessIdentityConfigInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteStreamingDistributionInput) GoString() string { +func (s GetCloudFrontOriginAccessIdentityConfigInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteStreamingDistributionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteStreamingDistributionInput"} +func (s *GetCloudFrontOriginAccessIdentityConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCloudFrontOriginAccessIdentityConfigInput"} if s.Id == nil { invalidParams.Add(request.NewErrParamRequired("Id")) } @@ -5340,479 +9503,142 @@ func (s *DeleteStreamingDistributionInput) Validate() error { } // SetId sets the Id field's value. -func (s *DeleteStreamingDistributionInput) SetId(v string) *DeleteStreamingDistributionInput { +func (s *GetCloudFrontOriginAccessIdentityConfigInput) SetId(v string) *GetCloudFrontOriginAccessIdentityConfigInput { s.Id = &v return s } -// SetIfMatch sets the IfMatch field's value. -func (s *DeleteStreamingDistributionInput) SetIfMatch(v string) *DeleteStreamingDistributionInput { - s.IfMatch = &v - return s -} - -type DeleteStreamingDistributionOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteStreamingDistributionOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DeleteStreamingDistributionOutput) GoString() string { - return s.String() -} - -// The distribution's information. -type Distribution struct { - _ struct{} `type:"structure"` - - // The ARN (Amazon Resource Name) for the distribution. For example: arn:aws:cloudfront::123456789012:distribution/EDFDVBD632BHDS5, - // where 123456789012 is your AWS account ID. 
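// Illustrative sketch (editor-added, not part of the generated diff): building a
// whitelist-style GeoRestriction with the setters defined above. The country
// codes are placeholder ISO 3166-1 alpha-2 values; Quantity is set to match the
// number of Items, mirroring the other CloudFront list types in this file.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	countries := []*string{aws.String("DE"), aws.String("FR")}

	geo := (&cloudfront.GeoRestriction{}).
		SetRestrictionType("whitelist"). // "none", "blacklist", or "whitelist" per the field docs
		SetQuantity(int64(len(countries))).
		SetItems(countries)

	// Validate flags missing required members (Quantity, RestrictionType).
	if err := geo.Validate(); err != nil {
		fmt.Println("invalid geo restriction:", err)
	}
}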
- // - // ARN is a required field - ARN *string `type:"string" required:"true"` - - // CloudFront automatically adds this element to the response only if you've - // set up the distribution to serve private content with signed URLs. The element - // lists the key pair IDs that CloudFront is aware of for each trusted signer. - // The Signer child element lists the AWS account number of the trusted signer - // (or an empty Self element if the signer is you). The Signer element also - // includes the IDs of any active key pairs associated with the trusted signer's - // AWS account. If no KeyPairId element appears for a Signer, that signer can't - // create working signed URLs. - // - // ActiveTrustedSigners is a required field - ActiveTrustedSigners *ActiveTrustedSigners `type:"structure" required:"true"` - - // The current configuration information for the distribution. Send a GET request - // to the /CloudFront API version/distribution ID/config resource. - // - // DistributionConfig is a required field - DistributionConfig *DistributionConfig `type:"structure" required:"true"` - - // The domain name corresponding to the distribution, for example, d111111abcdef8.cloudfront.net. - // - // DomainName is a required field - DomainName *string `type:"string" required:"true"` - - // The identifier for the distribution. For example: EDFDVBD632BHDS5. - // - // Id is a required field - Id *string `type:"string" required:"true"` - - // The number of invalidation batches currently in progress. - // - // InProgressInvalidationBatches is a required field - InProgressInvalidationBatches *int64 `type:"integer" required:"true"` +// The returned result of the corresponding request. +type GetCloudFrontOriginAccessIdentityConfigOutput struct { + _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityConfig"` - // The date and time the distribution was last modified. - // - // LastModifiedTime is a required field - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + // The origin access identity's configuration information. + CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `type:"structure"` - // This response element indicates the current status of the distribution. When - // the status is Deployed, the distribution's information is fully propagated - // to all CloudFront edge locations. - // - // Status is a required field - Status *string `type:"string" required:"true"` + // The current version of the configuration. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` } // String returns the string representation -func (s Distribution) String() string { +func (s GetCloudFrontOriginAccessIdentityConfigOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Distribution) GoString() string { +func (s GetCloudFrontOriginAccessIdentityConfigOutput) GoString() string { return s.String() } -// SetARN sets the ARN field's value. -func (s *Distribution) SetARN(v string) *Distribution { - s.ARN = &v - return s -} - -// SetActiveTrustedSigners sets the ActiveTrustedSigners field's value. -func (s *Distribution) SetActiveTrustedSigners(v *ActiveTrustedSigners) *Distribution { - s.ActiveTrustedSigners = v - return s -} - -// SetDistributionConfig sets the DistributionConfig field's value. 
-func (s *Distribution) SetDistributionConfig(v *DistributionConfig) *Distribution { - s.DistributionConfig = v - return s -} - -// SetDomainName sets the DomainName field's value. -func (s *Distribution) SetDomainName(v string) *Distribution { - s.DomainName = &v - return s -} - -// SetId sets the Id field's value. -func (s *Distribution) SetId(v string) *Distribution { - s.Id = &v - return s -} - -// SetInProgressInvalidationBatches sets the InProgressInvalidationBatches field's value. -func (s *Distribution) SetInProgressInvalidationBatches(v int64) *Distribution { - s.InProgressInvalidationBatches = &v - return s -} - -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *Distribution) SetLastModifiedTime(v time.Time) *Distribution { - s.LastModifiedTime = &v +// SetCloudFrontOriginAccessIdentityConfig sets the CloudFrontOriginAccessIdentityConfig field's value. +func (s *GetCloudFrontOriginAccessIdentityConfigOutput) SetCloudFrontOriginAccessIdentityConfig(v *OriginAccessIdentityConfig) *GetCloudFrontOriginAccessIdentityConfigOutput { + s.CloudFrontOriginAccessIdentityConfig = v return s } -// SetStatus sets the Status field's value. -func (s *Distribution) SetStatus(v string) *Distribution { - s.Status = &v +// SetETag sets the ETag field's value. +func (s *GetCloudFrontOriginAccessIdentityConfigOutput) SetETag(v string) *GetCloudFrontOriginAccessIdentityConfigOutput { + s.ETag = &v return s } -// A distribution configuration. -type DistributionConfig struct { +// The request to get an origin access identity's information. +type GetCloudFrontOriginAccessIdentityInput struct { _ struct{} `type:"structure"` - // A complex type that contains information about CNAMEs (alternate domain names), - // if any, for this distribution. - Aliases *Aliases `type:"structure"` - - // A complex type that contains zero or more CacheBehavior elements. - CacheBehaviors *CacheBehaviors `type:"structure"` - - // A unique value (for example, a date-time stamp) that ensures that the request - // can't be replayed. - // - // If the value of CallerReference is new (regardless of the content of the - // DistributionConfig object), CloudFront creates a new distribution. - // - // If CallerReference is a value you already sent in a previous request to create - // a distribution, and if the content of the DistributionConfig is identical - // to the original request (ignoring white space), CloudFront returns the same - // the response that it returned to the original request. - // - // If CallerReference is a value you already sent in a previous request to create - // a distribution but the content of the DistributionConfig is different from - // the original request, CloudFront returns a DistributionAlreadyExists error. + // The identity's ID. // - // CallerReference is a required field - CallerReference *string `type:"string" required:"true"` + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` +} - // Any comments you want to include about the distribution. - // - // If you don't want to specify a comment, include an empty Comment element. - // - // To delete an existing comment, update the distribution configuration and - // include an empty Comment element. - // - // To add or change a comment, update the distribution configuration and specify - // the new comment. 
- // - // Comment is a required field - Comment *string `type:"string" required:"true"` +// String returns the string representation +func (s GetCloudFrontOriginAccessIdentityInput) String() string { + return awsutil.Prettify(s) +} - // A complex type that controls the following: - // - // * Whether CloudFront replaces HTTP status codes in the 4xx and 5xx range - // with custom error messages before returning the response to the viewer. - // - // * How long CloudFront caches HTTP status codes in the 4xx and 5xx range. - // - // For more information about custom error pages, see Customizing Error Responses - // (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html) - // in the Amazon CloudFront Developer Guide. - CustomErrorResponses *CustomErrorResponses `type:"structure"` +// GoString returns the string representation +func (s GetCloudFrontOriginAccessIdentityInput) GoString() string { + return s.String() +} - // A complex type that describes the default cache behavior if you don't specify - // a CacheBehavior element or if files don't match any of the values of PathPattern - // in CacheBehavior elements. You must create exactly one default cache behavior. - // - // DefaultCacheBehavior is a required field - DefaultCacheBehavior *DefaultCacheBehavior `type:"structure" required:"true"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCloudFrontOriginAccessIdentityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCloudFrontOriginAccessIdentityInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } - // The object that you want CloudFront to request from your origin (for example, - // index.html) when a viewer requests the root URL for your distribution (http://www.example.com) - // instead of an object in your distribution (http://www.example.com/product-description.html). - // Specifying a default root object avoids exposing the contents of your distribution. - // - // Specify only the object name, for example, index.html. Don't add a / before - // the object name. - // - // If you don't want to specify a default root object when you create a distribution, - // include an empty DefaultRootObject element. - // - // To delete the default root object from an existing distribution, update the - // distribution configuration and include an empty DefaultRootObject element. - // - // To replace the default root object, update the distribution configuration - // and specify the new object. - // - // For more information about the default root object, see Creating a Default - // Root Object (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html) - // in the Amazon CloudFront Developer Guide. - DefaultRootObject *string `type:"string"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // From this field, you can enable or disable the selected distribution. - // - // If you specify false for Enabled but you specify values for Bucket and Prefix, - // the values are automatically deleted. - // - // Enabled is a required field - Enabled *bool `type:"boolean" required:"true"` +// SetId sets the Id field's value. +func (s *GetCloudFrontOriginAccessIdentityInput) SetId(v string) *GetCloudFrontOriginAccessIdentityInput { + s.Id = &v + return s +} - // (Optional) Specify the maximum HTTP version that you want viewers to use - // to communicate with CloudFront. 
The default value for new web distributions - // is http2. Viewers that don't support HTTP/2 automatically use an earlier - // HTTP version. - // - // For viewers and CloudFront to use HTTP/2, viewers must support TLS 1.2 or - // later, and must support Server Name Identification (SNI). - // - // In general, configuring CloudFront to communicate with viewers using HTTP/2 - // reduces latency. You can improve performance by optimizing for HTTP/2. For - // more information, do an Internet search for "http/2 optimization." - HttpVersion *string `type:"string" enum:"HttpVersion"` +// The returned result of the corresponding request. +type GetCloudFrontOriginAccessIdentityOutput struct { + _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentity"` - // If you want CloudFront to respond to IPv6 DNS requests with an IPv6 address - // for your distribution, specify true. If you specify false, CloudFront responds - // to IPv6 DNS requests with the DNS response code NOERROR and with no IP addresses. - // This allows viewers to submit a second request, for an IPv4 address for your - // distribution. - // - // In general, you should enable IPv6 if you have users on IPv6 networks who - // want to access your content. However, if you're using signed URLs or signed - // cookies to restrict access to your content, and if you're using a custom - // policy that includes the IpAddress parameter to restrict the IP addresses - // that can access your content, don't enable IPv6. If you want to restrict - // access to some content by IP address and not restrict access to other content - // (or restrict access but not by IP address), you can create two distributions. - // For more information, see Creating a Signed URL Using a Custom Policy (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html) - // in the Amazon CloudFront Developer Guide. - // - // If you're using an Amazon Route 53 alias resource record set to route traffic - // to your CloudFront distribution, you need to create a second alias resource - // record set when both of the following are true: - // - // * You enable IPv6 for the distribution - // - // * You're using alternate domain names in the URLs for your objects - // - // For more information, see Routing Traffic to an Amazon CloudFront Web Distribution - // by Using Your Domain Name (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html) - // in the Amazon Route 53 Developer Guide. - // - // If you created a CNAME resource record set, either with Amazon Route 53 or - // with another DNS service, you don't need to make any changes. A CNAME record - // will route traffic to your distribution regardless of the IP address format - // of the viewer request. - IsIPV6Enabled *bool `type:"boolean"` + // The origin access identity's information. + CloudFrontOriginAccessIdentity *OriginAccessIdentity `type:"structure"` - // A complex type that controls whether access logs are written for the distribution. - // - // For more information about logging, see Access Logs (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html) - // in the Amazon CloudFront Developer Guide. - Logging *LoggingConfig `type:"structure"` + // The current version of the origin access identity's information. For example: + // E2QWRUHAPOMQZL. 
+ ETag *string `location:"header" locationName:"ETag" type:"string"` +} - // A complex type that contains information about origins for this distribution. - // - // Origins is a required field - Origins *Origins `type:"structure" required:"true"` +// String returns the string representation +func (s GetCloudFrontOriginAccessIdentityOutput) String() string { + return awsutil.Prettify(s) +} - // The price class that corresponds with the maximum price that you want to - // pay for CloudFront service. If you specify PriceClass_All, CloudFront responds - // to requests for your objects from all CloudFront edge locations. - // - // If you specify a price class other than PriceClass_All, CloudFront serves - // your objects from the CloudFront edge location that has the lowest latency - // among the edge locations in your price class. Viewers who are in or near - // regions that are excluded from your specified price class may encounter slower - // performance. - // - // For more information about price classes, see Choosing the Price Class for - // a CloudFront Distribution (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html) - // in the Amazon CloudFront Developer Guide. For information about CloudFront - // pricing, including how price classes map to CloudFront regions, see Amazon - // CloudFront Pricing (https://aws.amazon.com/cloudfront/pricing/). - PriceClass *string `type:"string" enum:"PriceClass"` +// GoString returns the string representation +func (s GetCloudFrontOriginAccessIdentityOutput) GoString() string { + return s.String() +} - // A complex type that identifies ways in which you want to restrict distribution - // of your content. - Restrictions *Restrictions `type:"structure"` +// SetCloudFrontOriginAccessIdentity sets the CloudFrontOriginAccessIdentity field's value. +func (s *GetCloudFrontOriginAccessIdentityOutput) SetCloudFrontOriginAccessIdentity(v *OriginAccessIdentity) *GetCloudFrontOriginAccessIdentityOutput { + s.CloudFrontOriginAccessIdentity = v + return s +} - // A complex type that specifies the following: - // - // * Whether you want viewers to use HTTP or HTTPS to request your objects. - // - // * If you want viewers to use HTTPS, whether you're using an alternate - // domain name such as example.com or the CloudFront domain name for your - // distribution, such as d111111abcdef8.cloudfront.net. - // - // * If you're using an alternate domain name, whether AWS Certificate Manager - // (ACM) provided the certificate, or you purchased a certificate from a - // third-party certificate authority and imported it into ACM or uploaded - // it to the IAM certificate store. - // - // You must specify only one of the following values: - // - // * ViewerCertificate$ACMCertificateArn - // - // * ViewerCertificate$IAMCertificateId - // - // * ViewerCertificate$CloudFrontDefaultCertificate - // - // Don't specify false for CloudFrontDefaultCertificate. - // - // If you want viewers to use HTTP instead of HTTPS to request your objects: - // Specify the following value: - // - // true - // - // In addition, specify allow-all for ViewerProtocolPolicy for all of your cache - // behaviors. 
- // - // If you want viewers to use HTTPS to request your objects: Choose the type - // of certificate that you want to use based on whether you're using an alternate - // domain name for your objects or the CloudFront domain name: - // - // * If you're using an alternate domain name, such as example.com: Specify - // one of the following values, depending on whether ACM provided your certificate - // or you purchased your certificate from third-party certificate authority: - // - // ARN for ACM SSL/TLS certificate where - // ARN for ACM SSL/TLS certificate is the ARN for the ACM SSL/TLS certificate - // that you want to use for this distribution. - // - // IAM certificate ID where IAM certificate - // ID is the ID that IAM returned when you added the certificate to the IAM - // certificate store. - // - // If you specify ACMCertificateArn or IAMCertificateId, you must also specify - // a value for SSLSupportMethod. - // - // If you choose to use an ACM certificate or a certificate in the IAM certificate - // store, we recommend that you use only an alternate domain name in your - // object URLs (https://example.com/logo.jpg). If you use the domain name - // that is associated with your CloudFront distribution (such as https://d111111abcdef8.cloudfront.net/logo.jpg) - // and the viewer supports SNI, then CloudFront behaves normally. However, - // if the browser does not support SNI, the user's experience depends on - // the value that you choose for SSLSupportMethod: - // - // vip: The viewer displays a warning because there is a mismatch between the - // CloudFront domain name and the domain name in your SSL/TLS certificate. - // - // sni-only: CloudFront drops the connection with the browser without returning - // the object. - // - // * If you're using the CloudFront domain name for your distribution, such - // as d111111abcdef8.cloudfront.net: Specify the following value: - // - // true - // - // If you want viewers to use HTTPS, you must also specify one of the following - // values in your cache behaviors: - // - // * https-only - // - // * redirect-to-https - // - // You can also optionally require that CloudFront use HTTPS to communicate - // with your origin by specifying one of the following values for the applicable - // origins: - // - // * https-only - // - // * match-viewer - // - // For more information, see Using Alternate Domain Names and HTTPS (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html#CNAMEsAndHTTPS) - // in the Amazon CloudFront Developer Guide. - ViewerCertificate *ViewerCertificate `type:"structure"` +// SetETag sets the ETag field's value. +func (s *GetCloudFrontOriginAccessIdentityOutput) SetETag(v string) *GetCloudFrontOriginAccessIdentityOutput { + s.ETag = &v + return s +} - // A unique identifier that specifies the AWS WAF web ACL, if any, to associate - // with this distribution. +// The request to get a distribution configuration. +type GetDistributionConfigInput struct { + _ struct{} `type:"structure"` + + // The distribution's ID. // - // AWS WAF is a web application firewall that lets you monitor the HTTP and - // HTTPS requests that are forwarded to CloudFront, and lets you control access - // to your content. Based on conditions that you specify, such as the IP addresses - // that requests originate from or the values of query strings, CloudFront responds - // to requests either with the requested content or with an HTTP 403 status - // code (Forbidden). 
You can also configure CloudFront to return a custom error - // page when a request is blocked. For more information about AWS WAF, see the - // AWS WAF Developer Guide (http://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html). - WebACLId *string `type:"string"` + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` } // String returns the string representation -func (s DistributionConfig) String() string { +func (s GetDistributionConfigInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DistributionConfig) GoString() string { +func (s GetDistributionConfigInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DistributionConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DistributionConfig"} - if s.CallerReference == nil { - invalidParams.Add(request.NewErrParamRequired("CallerReference")) - } - if s.Comment == nil { - invalidParams.Add(request.NewErrParamRequired("Comment")) - } - if s.DefaultCacheBehavior == nil { - invalidParams.Add(request.NewErrParamRequired("DefaultCacheBehavior")) - } - if s.Enabled == nil { - invalidParams.Add(request.NewErrParamRequired("Enabled")) - } - if s.Origins == nil { - invalidParams.Add(request.NewErrParamRequired("Origins")) - } - if s.Aliases != nil { - if err := s.Aliases.Validate(); err != nil { - invalidParams.AddNested("Aliases", err.(request.ErrInvalidParams)) - } - } - if s.CacheBehaviors != nil { - if err := s.CacheBehaviors.Validate(); err != nil { - invalidParams.AddNested("CacheBehaviors", err.(request.ErrInvalidParams)) - } - } - if s.CustomErrorResponses != nil { - if err := s.CustomErrorResponses.Validate(); err != nil { - invalidParams.AddNested("CustomErrorResponses", err.(request.ErrInvalidParams)) - } - } - if s.DefaultCacheBehavior != nil { - if err := s.DefaultCacheBehavior.Validate(); err != nil { - invalidParams.AddNested("DefaultCacheBehavior", err.(request.ErrInvalidParams)) - } - } - if s.Logging != nil { - if err := s.Logging.Validate(); err != nil { - invalidParams.AddNested("Logging", err.(request.ErrInvalidParams)) - } - } - if s.Origins != nil { - if err := s.Origins.Validate(); err != nil { - invalidParams.AddNested("Origins", err.(request.ErrInvalidParams)) - } - } - if s.Restrictions != nil { - if err := s.Restrictions.Validate(); err != nil { - invalidParams.AddNested("Restrictions", err.(request.ErrInvalidParams)) - } +func (s *GetDistributionConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDistributionConfigInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) } if invalidParams.Len() > 0 { @@ -5821,146 +9647,283 @@ func (s *DistributionConfig) Validate() error { return nil } -// SetAliases sets the Aliases field's value. -func (s *DistributionConfig) SetAliases(v *Aliases) *DistributionConfig { - s.Aliases = v +// SetId sets the Id field's value. +func (s *GetDistributionConfigInput) SetId(v string) *GetDistributionConfigInput { + s.Id = &v return s } -// SetCacheBehaviors sets the CacheBehaviors field's value. -func (s *DistributionConfig) SetCacheBehaviors(v *CacheBehaviors) *DistributionConfig { - s.CacheBehaviors = v - return s +// The returned result of the corresponding request. 
+type GetDistributionConfigOutput struct { + _ struct{} `type:"structure" payload:"DistributionConfig"` + + // The distribution's configuration information. + DistributionConfig *DistributionConfig `type:"structure"` + + // The current version of the configuration. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` } -// SetCallerReference sets the CallerReference field's value. -func (s *DistributionConfig) SetCallerReference(v string) *DistributionConfig { - s.CallerReference = &v +// String returns the string representation +func (s GetDistributionConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDistributionConfigOutput) GoString() string { + return s.String() +} + +// SetDistributionConfig sets the DistributionConfig field's value. +func (s *GetDistributionConfigOutput) SetDistributionConfig(v *DistributionConfig) *GetDistributionConfigOutput { + s.DistributionConfig = v return s } -// SetComment sets the Comment field's value. -func (s *DistributionConfig) SetComment(v string) *DistributionConfig { - s.Comment = &v +// SetETag sets the ETag field's value. +func (s *GetDistributionConfigOutput) SetETag(v string) *GetDistributionConfigOutput { + s.ETag = &v return s } -// SetCustomErrorResponses sets the CustomErrorResponses field's value. -func (s *DistributionConfig) SetCustomErrorResponses(v *CustomErrorResponses) *DistributionConfig { - s.CustomErrorResponses = v +// The request to get a distribution's information. +type GetDistributionInput struct { + _ struct{} `type:"structure"` + + // The distribution's ID. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetDistributionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDistributionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetDistributionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDistributionInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *GetDistributionInput) SetId(v string) *GetDistributionInput { + s.Id = &v return s } -// SetDefaultCacheBehavior sets the DefaultCacheBehavior field's value. -func (s *DistributionConfig) SetDefaultCacheBehavior(v *DefaultCacheBehavior) *DistributionConfig { - s.DefaultCacheBehavior = v +// The returned result of the corresponding request. +type GetDistributionOutput struct { + _ struct{} `type:"structure" payload:"Distribution"` + + // The distribution's information. + Distribution *Distribution `type:"structure"` + + // The current version of the distribution's information. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` +} + +// String returns the string representation +func (s GetDistributionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDistributionOutput) GoString() string { + return s.String() +} + +// SetDistribution sets the Distribution field's value. 
+func (s *GetDistributionOutput) SetDistribution(v *Distribution) *GetDistributionOutput { + s.Distribution = v return s } -// SetDefaultRootObject sets the DefaultRootObject field's value. -func (s *DistributionConfig) SetDefaultRootObject(v string) *DistributionConfig { - s.DefaultRootObject = &v +// SetETag sets the ETag field's value. +func (s *GetDistributionOutput) SetETag(v string) *GetDistributionOutput { + s.ETag = &v return s } -// SetEnabled sets the Enabled field's value. -func (s *DistributionConfig) SetEnabled(v bool) *DistributionConfig { - s.Enabled = &v +type GetFieldLevelEncryptionConfigInput struct { + _ struct{} `type:"structure"` + + // Request the ID for the field-level encryption configuration information. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetFieldLevelEncryptionConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetFieldLevelEncryptionConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetFieldLevelEncryptionConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetFieldLevelEncryptionConfigInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *GetFieldLevelEncryptionConfigInput) SetId(v string) *GetFieldLevelEncryptionConfigInput { + s.Id = &v return s } -// SetHttpVersion sets the HttpVersion field's value. -func (s *DistributionConfig) SetHttpVersion(v string) *DistributionConfig { - s.HttpVersion = &v +type GetFieldLevelEncryptionConfigOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionConfig"` + + // The current version of the field level encryption configuration. For example: + // E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Return the field-level encryption configuration information. + FieldLevelEncryptionConfig *FieldLevelEncryptionConfig `type:"structure"` +} + +// String returns the string representation +func (s GetFieldLevelEncryptionConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetFieldLevelEncryptionConfigOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *GetFieldLevelEncryptionConfigOutput) SetETag(v string) *GetFieldLevelEncryptionConfigOutput { + s.ETag = &v return s } -// SetIsIPV6Enabled sets the IsIPV6Enabled field's value. -func (s *DistributionConfig) SetIsIPV6Enabled(v bool) *DistributionConfig { - s.IsIPV6Enabled = &v +// SetFieldLevelEncryptionConfig sets the FieldLevelEncryptionConfig field's value. +func (s *GetFieldLevelEncryptionConfigOutput) SetFieldLevelEncryptionConfig(v *FieldLevelEncryptionConfig) *GetFieldLevelEncryptionConfigOutput { + s.FieldLevelEncryptionConfig = v return s } -// SetLogging sets the Logging field's value. -func (s *DistributionConfig) SetLogging(v *LoggingConfig) *DistributionConfig { - s.Logging = v +type GetFieldLevelEncryptionInput struct { + _ struct{} `type:"structure"` + + // Request the ID for the field-level encryption configuration information. 
+ // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetFieldLevelEncryptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetFieldLevelEncryptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetFieldLevelEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetFieldLevelEncryptionInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *GetFieldLevelEncryptionInput) SetId(v string) *GetFieldLevelEncryptionInput { + s.Id = &v return s } -// SetOrigins sets the Origins field's value. -func (s *DistributionConfig) SetOrigins(v *Origins) *DistributionConfig { - s.Origins = v - return s +type GetFieldLevelEncryptionOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryption"` + + // The current version of the field level encryption configuration. For example: + // E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Return the field-level encryption configuration information. + FieldLevelEncryption *FieldLevelEncryption `type:"structure"` } -// SetPriceClass sets the PriceClass field's value. -func (s *DistributionConfig) SetPriceClass(v string) *DistributionConfig { - s.PriceClass = &v - return s +// String returns the string representation +func (s GetFieldLevelEncryptionOutput) String() string { + return awsutil.Prettify(s) } -// SetRestrictions sets the Restrictions field's value. -func (s *DistributionConfig) SetRestrictions(v *Restrictions) *DistributionConfig { - s.Restrictions = v - return s +// GoString returns the string representation +func (s GetFieldLevelEncryptionOutput) GoString() string { + return s.String() } -// SetViewerCertificate sets the ViewerCertificate field's value. -func (s *DistributionConfig) SetViewerCertificate(v *ViewerCertificate) *DistributionConfig { - s.ViewerCertificate = v +// SetETag sets the ETag field's value. +func (s *GetFieldLevelEncryptionOutput) SetETag(v string) *GetFieldLevelEncryptionOutput { + s.ETag = &v return s } -// SetWebACLId sets the WebACLId field's value. -func (s *DistributionConfig) SetWebACLId(v string) *DistributionConfig { - s.WebACLId = &v +// SetFieldLevelEncryption sets the FieldLevelEncryption field's value. +func (s *GetFieldLevelEncryptionOutput) SetFieldLevelEncryption(v *FieldLevelEncryption) *GetFieldLevelEncryptionOutput { + s.FieldLevelEncryption = v return s } -// A distribution Configuration and a list of tags to be associated with the -// distribution. -type DistributionConfigWithTags struct { +type GetFieldLevelEncryptionProfileConfigInput struct { _ struct{} `type:"structure"` - // A distribution configuration. - // - // DistributionConfig is a required field - DistributionConfig *DistributionConfig `type:"structure" required:"true"` - - // A complex type that contains zero or more Tag elements. + // Get the ID for the field-level encryption profile configuration information. 
// - // Tags is a required field - Tags *Tags `type:"structure" required:"true"` + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` } // String returns the string representation -func (s DistributionConfigWithTags) String() string { +func (s GetFieldLevelEncryptionProfileConfigInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DistributionConfigWithTags) GoString() string { +func (s GetFieldLevelEncryptionProfileConfigInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DistributionConfigWithTags) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DistributionConfigWithTags"} - if s.DistributionConfig == nil { - invalidParams.Add(request.NewErrParamRequired("DistributionConfig")) - } - if s.Tags == nil { - invalidParams.Add(request.NewErrParamRequired("Tags")) - } - if s.DistributionConfig != nil { - if err := s.DistributionConfig.Validate(); err != nil { - invalidParams.AddNested("DistributionConfig", err.(request.ErrInvalidParams)) - } - } - if s.Tags != nil { - if err := s.Tags.Validate(); err != nil { - invalidParams.AddNested("Tags", err.(request.ErrInvalidParams)) - } +func (s *GetFieldLevelEncryptionProfileConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetFieldLevelEncryptionProfileConfigInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) } if invalidParams.Len() > 0 { @@ -5969,488 +9932,430 @@ func (s *DistributionConfigWithTags) Validate() error { return nil } -// SetDistributionConfig sets the DistributionConfig field's value. -func (s *DistributionConfigWithTags) SetDistributionConfig(v *DistributionConfig) *DistributionConfigWithTags { - s.DistributionConfig = v +// SetId sets the Id field's value. +func (s *GetFieldLevelEncryptionProfileConfigInput) SetId(v string) *GetFieldLevelEncryptionProfileConfigInput { + s.Id = &v return s } -// SetTags sets the Tags field's value. -func (s *DistributionConfigWithTags) SetTags(v *Tags) *DistributionConfigWithTags { - s.Tags = v - return s -} +type GetFieldLevelEncryptionProfileConfigOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionProfileConfig"` -// A distribution list. -type DistributionList struct { - _ struct{} `type:"structure"` + // The current version of the field-level encryption profile configuration result. + // For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` - // A flag that indicates whether more distributions remain to be listed. If - // your results were truncated, you can make a follow-up pagination request - // using the Marker request parameter to retrieve more distributions in the - // list. - // - // IsTruncated is a required field - IsTruncated *bool `type:"boolean" required:"true"` + // Return the field-level encryption profile configuration information. + FieldLevelEncryptionProfileConfig *FieldLevelEncryptionProfileConfig `type:"structure"` +} - // A complex type that contains one DistributionSummary element for each distribution - // that was created by the current AWS account. 
- Items []*DistributionSummary `locationNameList:"DistributionSummary" type:"list"` +// String returns the string representation +func (s GetFieldLevelEncryptionProfileConfigOutput) String() string { + return awsutil.Prettify(s) +} - // The value you provided for the Marker request parameter. - // - // Marker is a required field - Marker *string `type:"string" required:"true"` +// GoString returns the string representation +func (s GetFieldLevelEncryptionProfileConfigOutput) GoString() string { + return s.String() +} - // The value you provided for the MaxItems request parameter. - // - // MaxItems is a required field - MaxItems *int64 `type:"integer" required:"true"` +// SetETag sets the ETag field's value. +func (s *GetFieldLevelEncryptionProfileConfigOutput) SetETag(v string) *GetFieldLevelEncryptionProfileConfigOutput { + s.ETag = &v + return s +} - // If IsTruncated is true, this element is present and contains the value you - // can use for the Marker request parameter to continue listing your distributions - // where they left off. - NextMarker *string `type:"string"` +// SetFieldLevelEncryptionProfileConfig sets the FieldLevelEncryptionProfileConfig field's value. +func (s *GetFieldLevelEncryptionProfileConfigOutput) SetFieldLevelEncryptionProfileConfig(v *FieldLevelEncryptionProfileConfig) *GetFieldLevelEncryptionProfileConfigOutput { + s.FieldLevelEncryptionProfileConfig = v + return s +} - // The number of distributions that were created by the current AWS account. +type GetFieldLevelEncryptionProfileInput struct { + _ struct{} `type:"structure"` + + // Get the ID for the field-level encryption profile information. // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` } // String returns the string representation -func (s DistributionList) String() string { +func (s GetFieldLevelEncryptionProfileInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DistributionList) GoString() string { +func (s GetFieldLevelEncryptionProfileInput) GoString() string { return s.String() } -// SetIsTruncated sets the IsTruncated field's value. -func (s *DistributionList) SetIsTruncated(v bool) *DistributionList { - s.IsTruncated = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetFieldLevelEncryptionProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetFieldLevelEncryptionProfileInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetItems sets the Items field's value. -func (s *DistributionList) SetItems(v []*DistributionSummary) *DistributionList { - s.Items = v +// SetId sets the Id field's value. +func (s *GetFieldLevelEncryptionProfileInput) SetId(v string) *GetFieldLevelEncryptionProfileInput { + s.Id = &v return s } -// SetMarker sets the Marker field's value. -func (s *DistributionList) SetMarker(v string) *DistributionList { - s.Marker = &v - return s +type GetFieldLevelEncryptionProfileOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionProfile"` + + // The current version of the field level encryption profile. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Return the field-level encryption profile information. 
+ FieldLevelEncryptionProfile *FieldLevelEncryptionProfile `type:"structure"` } -// SetMaxItems sets the MaxItems field's value. -func (s *DistributionList) SetMaxItems(v int64) *DistributionList { - s.MaxItems = &v - return s +// String returns the string representation +func (s GetFieldLevelEncryptionProfileOutput) String() string { + return awsutil.Prettify(s) } -// SetNextMarker sets the NextMarker field's value. -func (s *DistributionList) SetNextMarker(v string) *DistributionList { - s.NextMarker = &v +// GoString returns the string representation +func (s GetFieldLevelEncryptionProfileOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *GetFieldLevelEncryptionProfileOutput) SetETag(v string) *GetFieldLevelEncryptionProfileOutput { + s.ETag = &v return s } -// SetQuantity sets the Quantity field's value. -func (s *DistributionList) SetQuantity(v int64) *DistributionList { - s.Quantity = &v +// SetFieldLevelEncryptionProfile sets the FieldLevelEncryptionProfile field's value. +func (s *GetFieldLevelEncryptionProfileOutput) SetFieldLevelEncryptionProfile(v *FieldLevelEncryptionProfile) *GetFieldLevelEncryptionProfileOutput { + s.FieldLevelEncryptionProfile = v return s } -// A summary of the information about a CloudFront distribution. -type DistributionSummary struct { +// The request to get an invalidation's information. +type GetInvalidationInput struct { _ struct{} `type:"structure"` - // The ARN (Amazon Resource Name) for the distribution. For example: arn:aws:cloudfront::123456789012:distribution/EDFDVBD632BHDS5, - // where 123456789012 is your AWS account ID. + // The distribution's ID. // - // ARN is a required field - ARN *string `type:"string" required:"true"` + // DistributionId is a required field + DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"` - // A complex type that contains information about CNAMEs (alternate domain names), - // if any, for this distribution. + // The identifier for the invalidation request, for example, IDFDVBD632BHDS5. // - // Aliases is a required field - Aliases *Aliases `type:"structure" required:"true"` + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` +} - // A complex type that contains zero or more CacheBehavior elements. - // - // CacheBehaviors is a required field - CacheBehaviors *CacheBehaviors `type:"structure" required:"true"` +// String returns the string representation +func (s GetInvalidationInput) String() string { + return awsutil.Prettify(s) +} - // The comment originally specified when this distribution was created. - // - // Comment is a required field - Comment *string `type:"string" required:"true"` +// GoString returns the string representation +func (s GetInvalidationInput) GoString() string { + return s.String() +} - // A complex type that contains zero or more CustomErrorResponses elements. - // - // CustomErrorResponses is a required field - CustomErrorResponses *CustomErrorResponses `type:"structure" required:"true"` +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetInvalidationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInvalidationInput"} + if s.DistributionId == nil { + invalidParams.Add(request.NewErrParamRequired("DistributionId")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } - // A complex type that describes the default cache behavior if you don't specify - // a CacheBehavior element or if files don't match any of the values of PathPattern - // in CacheBehavior elements. You must create exactly one default cache behavior. - // - // DefaultCacheBehavior is a required field - DefaultCacheBehavior *DefaultCacheBehavior `type:"structure" required:"true"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // The domain name that corresponds to the distribution, for example, d111111abcdef8.cloudfront.net. - // - // DomainName is a required field - DomainName *string `type:"string" required:"true"` +// SetDistributionId sets the DistributionId field's value. +func (s *GetInvalidationInput) SetDistributionId(v string) *GetInvalidationInput { + s.DistributionId = &v + return s +} + +// SetId sets the Id field's value. +func (s *GetInvalidationInput) SetId(v string) *GetInvalidationInput { + s.Id = &v + return s +} + +// The returned result of the corresponding request. +type GetInvalidationOutput struct { + _ struct{} `type:"structure" payload:"Invalidation"` + + // The invalidation's information. For more information, see Invalidation Complex + // Type (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/InvalidationDatatype.html). + Invalidation *Invalidation `type:"structure"` +} + +// String returns the string representation +func (s GetInvalidationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetInvalidationOutput) GoString() string { + return s.String() +} - // Whether the distribution is enabled to accept user requests for content. - // - // Enabled is a required field - Enabled *bool `type:"boolean" required:"true"` +// SetInvalidation sets the Invalidation field's value. +func (s *GetInvalidationOutput) SetInvalidation(v *Invalidation) *GetInvalidationOutput { + s.Invalidation = v + return s +} - // Specify the maximum HTTP version that you want viewers to use to communicate - // with CloudFront. The default value for new web distributions is http2. Viewers - // that don't support HTTP/2 will automatically use an earlier version. - // - // HttpVersion is a required field - HttpVersion *string `type:"string" required:"true" enum:"HttpVersion"` +type GetPublicKeyConfigInput struct { + _ struct{} `type:"structure"` - // The identifier for the distribution. For example: EDFDVBD632BHDS5. + // Request the ID for the public key configuration. // // Id is a required field - Id *string `type:"string" required:"true"` + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` +} - // Whether CloudFront responds to IPv6 DNS requests with an IPv6 address for - // your distribution. - // - // IsIPV6Enabled is a required field - IsIPV6Enabled *bool `type:"boolean" required:"true"` +// String returns the string representation +func (s GetPublicKeyConfigInput) String() string { + return awsutil.Prettify(s) +} - // The date and time the distribution was last modified. 
- // - // LastModifiedTime is a required field - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` +// GoString returns the string representation +func (s GetPublicKeyConfigInput) GoString() string { + return s.String() +} - // A complex type that contains information about origins for this distribution. - // - // Origins is a required field - Origins *Origins `type:"structure" required:"true"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetPublicKeyConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPublicKeyConfigInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } - // PriceClass is a required field - PriceClass *string `type:"string" required:"true" enum:"PriceClass"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // A complex type that identifies ways in which you want to restrict distribution - // of your content. - // - // Restrictions is a required field - Restrictions *Restrictions `type:"structure" required:"true"` +// SetId sets the Id field's value. +func (s *GetPublicKeyConfigInput) SetId(v string) *GetPublicKeyConfigInput { + s.Id = &v + return s +} - // The current status of the distribution. When the status is Deployed, the - // distribution's information is propagated to all CloudFront edge locations. - // - // Status is a required field - Status *string `type:"string" required:"true"` +type GetPublicKeyConfigOutput struct { + _ struct{} `type:"structure" payload:"PublicKeyConfig"` - // A complex type that specifies the following: - // - // * Whether you want viewers to use HTTP or HTTPS to request your objects. - // - // * If you want viewers to use HTTPS, whether you're using an alternate - // domain name such as example.com or the CloudFront domain name for your - // distribution, such as d111111abcdef8.cloudfront.net. - // - // * If you're using an alternate domain name, whether AWS Certificate Manager - // (ACM) provided the certificate, or you purchased a certificate from a - // third-party certificate authority and imported it into ACM or uploaded - // it to the IAM certificate store. - // - // You must specify only one of the following values: - // - // * ViewerCertificate$ACMCertificateArn - // - // * ViewerCertificate$IAMCertificateId - // - // * ViewerCertificate$CloudFrontDefaultCertificate - // - // Don't specify false for CloudFrontDefaultCertificate. - // - // If you want viewers to use HTTP instead of HTTPS to request your objects: - // Specify the following value: - // - // true - // - // In addition, specify allow-all for ViewerProtocolPolicy for all of your cache - // behaviors. - // - // If you want viewers to use HTTPS to request your objects: Choose the type - // of certificate that you want to use based on whether you're using an alternate - // domain name for your objects or the CloudFront domain name: - // - // * If you're using an alternate domain name, such as example.com: Specify - // one of the following values, depending on whether ACM provided your certificate - // or you purchased your certificate from third-party certificate authority: - // - // ARN for ACM SSL/TLS certificate where - // ARN for ACM SSL/TLS certificate is the ARN for the ACM SSL/TLS certificate - // that you want to use for this distribution. - // - // IAM certificate ID where IAM certificate - // ID is the ID that IAM returned when you added the certificate to the IAM - // certificate store. 
- // - // If you specify ACMCertificateArn or IAMCertificateId, you must also specify - // a value for SSLSupportMethod. - // - // If you choose to use an ACM certificate or a certificate in the IAM certificate - // store, we recommend that you use only an alternate domain name in your - // object URLs (https://example.com/logo.jpg). If you use the domain name - // that is associated with your CloudFront distribution (such as https://d111111abcdef8.cloudfront.net/logo.jpg) - // and the viewer supports SNI, then CloudFront behaves normally. However, - // if the browser does not support SNI, the user's experience depends on - // the value that you choose for SSLSupportMethod: - // - // vip: The viewer displays a warning because there is a mismatch between the - // CloudFront domain name and the domain name in your SSL/TLS certificate. - // - // sni-only: CloudFront drops the connection with the browser without returning - // the object. - // - // * If you're using the CloudFront domain name for your distribution, such - // as d111111abcdef8.cloudfront.net: Specify the following value: - // - // true - // - // If you want viewers to use HTTPS, you must also specify one of the following - // values in your cache behaviors: - // - // * https-only - // - // * redirect-to-https - // - // You can also optionally require that CloudFront use HTTPS to communicate - // with your origin by specifying one of the following values for the applicable - // origins: - // - // * https-only - // - // * match-viewer - // - // For more information, see Using Alternate Domain Names and HTTPS (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html#CNAMEsAndHTTPS) - // in the Amazon CloudFront Developer Guide. - // - // ViewerCertificate is a required field - ViewerCertificate *ViewerCertificate `type:"structure" required:"true"` + // The current version of the public key configuration. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` - // The Web ACL Id (if any) associated with the distribution. - // - // WebACLId is a required field - WebACLId *string `type:"string" required:"true"` + // Return the result for the public key configuration. + PublicKeyConfig *PublicKeyConfig `type:"structure"` } // String returns the string representation -func (s DistributionSummary) String() string { +func (s GetPublicKeyConfigOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DistributionSummary) GoString() string { +func (s GetPublicKeyConfigOutput) GoString() string { return s.String() } -// SetARN sets the ARN field's value. -func (s *DistributionSummary) SetARN(v string) *DistributionSummary { - s.ARN = &v +// SetETag sets the ETag field's value. +func (s *GetPublicKeyConfigOutput) SetETag(v string) *GetPublicKeyConfigOutput { + s.ETag = &v return s } -// SetAliases sets the Aliases field's value. -func (s *DistributionSummary) SetAliases(v *Aliases) *DistributionSummary { - s.Aliases = v +// SetPublicKeyConfig sets the PublicKeyConfig field's value. +func (s *GetPublicKeyConfigOutput) SetPublicKeyConfig(v *PublicKeyConfig) *GetPublicKeyConfigOutput { + s.PublicKeyConfig = v return s } -// SetCacheBehaviors sets the CacheBehaviors field's value. -func (s *DistributionSummary) SetCacheBehaviors(v *CacheBehaviors) *DistributionSummary { - s.CacheBehaviors = v - return s +type GetPublicKeyInput struct { + _ struct{} `type:"structure"` + + // Request the ID for the public key. 
+ // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` } -// SetComment sets the Comment field's value. -func (s *DistributionSummary) SetComment(v string) *DistributionSummary { - s.Comment = &v - return s +// String returns the string representation +func (s GetPublicKeyInput) String() string { + return awsutil.Prettify(s) } -// SetCustomErrorResponses sets the CustomErrorResponses field's value. -func (s *DistributionSummary) SetCustomErrorResponses(v *CustomErrorResponses) *DistributionSummary { - s.CustomErrorResponses = v - return s +// GoString returns the string representation +func (s GetPublicKeyInput) GoString() string { + return s.String() } -// SetDefaultCacheBehavior sets the DefaultCacheBehavior field's value. -func (s *DistributionSummary) SetDefaultCacheBehavior(v *DefaultCacheBehavior) *DistributionSummary { - s.DefaultCacheBehavior = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetPublicKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPublicKeyInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetDomainName sets the DomainName field's value. -func (s *DistributionSummary) SetDomainName(v string) *DistributionSummary { - s.DomainName = &v +// SetId sets the Id field's value. +func (s *GetPublicKeyInput) SetId(v string) *GetPublicKeyInput { + s.Id = &v return s } -// SetEnabled sets the Enabled field's value. -func (s *DistributionSummary) SetEnabled(v bool) *DistributionSummary { - s.Enabled = &v +type GetPublicKeyOutput struct { + _ struct{} `type:"structure" payload:"PublicKey"` + + // The current version of the public key. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Return the public key. + PublicKey *PublicKey `type:"structure"` +} + +// String returns the string representation +func (s GetPublicKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPublicKeyOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *GetPublicKeyOutput) SetETag(v string) *GetPublicKeyOutput { + s.ETag = &v return s } -// SetHttpVersion sets the HttpVersion field's value. -func (s *DistributionSummary) SetHttpVersion(v string) *DistributionSummary { - s.HttpVersion = &v +// SetPublicKey sets the PublicKey field's value. +func (s *GetPublicKeyOutput) SetPublicKey(v *PublicKey) *GetPublicKeyOutput { + s.PublicKey = v return s } +// To request to get a streaming distribution configuration. +type GetStreamingDistributionConfigInput struct { + _ struct{} `type:"structure"` + + // The streaming distribution's ID. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetStreamingDistributionConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetStreamingDistributionConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetStreamingDistributionConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetStreamingDistributionConfigInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetId sets the Id field's value. -func (s *DistributionSummary) SetId(v string) *DistributionSummary { +func (s *GetStreamingDistributionConfigInput) SetId(v string) *GetStreamingDistributionConfigInput { s.Id = &v return s } -// SetIsIPV6Enabled sets the IsIPV6Enabled field's value. -func (s *DistributionSummary) SetIsIPV6Enabled(v bool) *DistributionSummary { - s.IsIPV6Enabled = &v - return s -} - -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *DistributionSummary) SetLastModifiedTime(v time.Time) *DistributionSummary { - s.LastModifiedTime = &v - return s -} +// The returned result of the corresponding request. +type GetStreamingDistributionConfigOutput struct { + _ struct{} `type:"structure" payload:"StreamingDistributionConfig"` -// SetOrigins sets the Origins field's value. -func (s *DistributionSummary) SetOrigins(v *Origins) *DistributionSummary { - s.Origins = v - return s -} + // The current version of the configuration. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` -// SetPriceClass sets the PriceClass field's value. -func (s *DistributionSummary) SetPriceClass(v string) *DistributionSummary { - s.PriceClass = &v - return s + // The streaming distribution's configuration information. + StreamingDistributionConfig *StreamingDistributionConfig `type:"structure"` } -// SetRestrictions sets the Restrictions field's value. -func (s *DistributionSummary) SetRestrictions(v *Restrictions) *DistributionSummary { - s.Restrictions = v - return s +// String returns the string representation +func (s GetStreamingDistributionConfigOutput) String() string { + return awsutil.Prettify(s) } -// SetStatus sets the Status field's value. -func (s *DistributionSummary) SetStatus(v string) *DistributionSummary { - s.Status = &v - return s +// GoString returns the string representation +func (s GetStreamingDistributionConfigOutput) GoString() string { + return s.String() } -// SetViewerCertificate sets the ViewerCertificate field's value. -func (s *DistributionSummary) SetViewerCertificate(v *ViewerCertificate) *DistributionSummary { - s.ViewerCertificate = v +// SetETag sets the ETag field's value. +func (s *GetStreamingDistributionConfigOutput) SetETag(v string) *GetStreamingDistributionConfigOutput { + s.ETag = &v return s } -// SetWebACLId sets the WebACLId field's value. -func (s *DistributionSummary) SetWebACLId(v string) *DistributionSummary { - s.WebACLId = &v +// SetStreamingDistributionConfig sets the StreamingDistributionConfig field's value. +func (s *GetStreamingDistributionConfigOutput) SetStreamingDistributionConfig(v *StreamingDistributionConfig) *GetStreamingDistributionConfigOutput { + s.StreamingDistributionConfig = v return s } -// A complex type that specifies how CloudFront handles query strings and cookies. -type ForwardedValues struct { +// The request to get a streaming distribution's information. +type GetStreamingDistributionInput struct { _ struct{} `type:"structure"` - // A complex type that specifies whether you want CloudFront to forward cookies - // to the origin and, if so, which ones. 
For more information about forwarding - // cookies to the origin, see How CloudFront Forwards, Caches, and Logs Cookies - // (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html) - // in the Amazon CloudFront Developer Guide. - // - // Cookies is a required field - Cookies *CookiePreference `type:"structure" required:"true"` - - // A complex type that specifies the Headers, if any, that you want CloudFront - // to base caching on for this cache behavior. - Headers *Headers `type:"structure"` - - // Indicates whether you want CloudFront to forward query strings to the origin - // that is associated with this cache behavior and cache based on the query - // string parameters. CloudFront behavior depends on the value of QueryString - // and on the values that you specify for QueryStringCacheKeys, if any: - // - // If you specify true for QueryString and you don't specify any values for - // QueryStringCacheKeys, CloudFront forwards all query string parameters to - // the origin and caches based on all query string parameters. Depending on - // how many query string parameters and values you have, this can adversely - // affect performance because CloudFront must forward more requests to the origin. - // - // If you specify true for QueryString and you specify one or more values for - // QueryStringCacheKeys, CloudFront forwards all query string parameters to - // the origin, but it only caches based on the query string parameters that - // you specify. - // - // If you specify false for QueryString, CloudFront doesn't forward any query - // string parameters to the origin, and doesn't cache based on query string - // parameters. - // - // For more information, see Configuring CloudFront to Cache Based on Query - // String Parameters (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html) - // in the Amazon CloudFront Developer Guide. + // The streaming distribution's ID. // - // QueryString is a required field - QueryString *bool `type:"boolean" required:"true"` - - // A complex type that contains information about the query string parameters - // that you want CloudFront to use for caching for this cache behavior. - QueryStringCacheKeys *QueryStringCacheKeys `type:"structure"` + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` } // String returns the string representation -func (s ForwardedValues) String() string { +func (s GetStreamingDistributionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ForwardedValues) GoString() string { +func (s GetStreamingDistributionInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ForwardedValues) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ForwardedValues"} - if s.Cookies == nil { - invalidParams.Add(request.NewErrParamRequired("Cookies")) - } - if s.QueryString == nil { - invalidParams.Add(request.NewErrParamRequired("QueryString")) - } - if s.Cookies != nil { - if err := s.Cookies.Validate(); err != nil { - invalidParams.AddNested("Cookies", err.(request.ErrInvalidParams)) - } - } - if s.Headers != nil { - if err := s.Headers.Validate(); err != nil { - invalidParams.AddNested("Headers", err.(request.ErrInvalidParams)) - } - } - if s.QueryStringCacheKeys != nil { - if err := s.QueryStringCacheKeys.Validate(); err != nil { - invalidParams.AddNested("QueryStringCacheKeys", err.(request.ErrInvalidParams)) - } +func (s *GetStreamingDistributionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetStreamingDistributionInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) } if invalidParams.Len() > 0 { @@ -6459,92 +10364,113 @@ func (s *ForwardedValues) Validate() error { return nil } -// SetCookies sets the Cookies field's value. -func (s *ForwardedValues) SetCookies(v *CookiePreference) *ForwardedValues { - s.Cookies = v +// SetId sets the Id field's value. +func (s *GetStreamingDistributionInput) SetId(v string) *GetStreamingDistributionInput { + s.Id = &v return s } -// SetHeaders sets the Headers field's value. -func (s *ForwardedValues) SetHeaders(v *Headers) *ForwardedValues { - s.Headers = v - return s +// The returned result of the corresponding request. +type GetStreamingDistributionOutput struct { + _ struct{} `type:"structure" payload:"StreamingDistribution"` + + // The current version of the streaming distribution's information. For example: + // E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // The streaming distribution's information. + StreamingDistribution *StreamingDistribution `type:"structure"` } -// SetQueryString sets the QueryString field's value. -func (s *ForwardedValues) SetQueryString(v bool) *ForwardedValues { - s.QueryString = &v +// String returns the string representation +func (s GetStreamingDistributionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetStreamingDistributionOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *GetStreamingDistributionOutput) SetETag(v string) *GetStreamingDistributionOutput { + s.ETag = &v return s } -// SetQueryStringCacheKeys sets the QueryStringCacheKeys field's value. -func (s *ForwardedValues) SetQueryStringCacheKeys(v *QueryStringCacheKeys) *ForwardedValues { - s.QueryStringCacheKeys = v +// SetStreamingDistribution sets the StreamingDistribution field's value. +func (s *GetStreamingDistributionOutput) SetStreamingDistribution(v *StreamingDistribution) *GetStreamingDistributionOutput { + s.StreamingDistribution = v return s } -// A complex type that controls the countries in which your content is distributed. -// CloudFront determines the location of your users using MaxMind GeoIP databases. -type GeoRestriction struct { +// A complex type that specifies the request headers, if any, that you want +// CloudFront to base caching on for this cache behavior. +// +// For the headers that you specify, CloudFront caches separate versions of +// a specified object based on the header values in viewer requests. 
For example, +// suppose viewer requests for logo.jpg contain a custom product header that +// has a value of either acme or apex, and you configure CloudFront to cache +// your content based on values in the product header. CloudFront forwards the +// product header to the origin and caches the response from the origin once +// for each header value. For more information about caching based on header +// values, see How CloudFront Forwards and Caches Headers (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html) +// in the Amazon CloudFront Developer Guide. +type Headers struct { _ struct{} `type:"structure"` - // A complex type that contains a Location element for each country in which - // you want CloudFront either to distribute your content (whitelist) or not - // distribute your content (blacklist). + // A list that contains one Name element for each header that you want CloudFront + // to use for caching in this cache behavior. If Quantity is 0, omit Items. + Items []*string `locationNameList:"Name" type:"list"` + + // The number of different headers that you want CloudFront to base caching + // on for this cache behavior. You can configure each cache behavior in a web + // distribution to do one of the following: // - // The Location element is a two-letter, uppercase country code for a country - // that you want to include in your blacklist or whitelist. Include one Location - // element for each country. + // * Forward all headers to your origin: Specify 1 for Quantity and * for + // Name. // - // CloudFront and MaxMind both use ISO 3166 country codes. For the current list - // of countries and the corresponding codes, see ISO 3166-1-alpha-2 code on - // the International Organization for Standardization website. You can also - // refer to the country list on the CloudFront console, which includes both - // country names and codes. - Items []*string `locationNameList:"Location" type:"list"` - - // When geo restriction is enabled, this is the number of countries in your - // whitelist or blacklist. Otherwise, when it is not enabled, Quantity is 0, - // and you can omit Items. + // CloudFront doesn't cache the objects that are associated with this cache + // behavior. Instead, CloudFront sends every request to the origin. // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` - - // The method that you want to use to restrict distribution of your content - // by country: + // * Forward a whitelist of headers you specify: Specify the number of headers + // that you want CloudFront to base caching on. Then specify the header names + // in Name elements. CloudFront caches your objects based on the values in + // the specified headers. // - // * none: No geo restriction is enabled, meaning access to content is not - // restricted by client geo location. + // * Forward only the default headers: Specify 0 for Quantity and omit Items. + // In this configuration, CloudFront doesn't cache based on the values in + // the request headers. // - // * blacklist: The Location elements specify the countries in which you - // don't want CloudFront to distribute your content. + // Regardless of which option you choose, CloudFront forwards headers to your + // origin based on whether the origin is an S3 bucket or a custom origin. See + // the following documentation: // - // * whitelist: The Location elements specify the countries in which you - // want CloudFront to distribute your content. 
+ // * S3 bucket: See HTTP Request Headers That CloudFront Removes or Updates + // (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorS3Origin.html#request-s3-removed-headers) // - // RestrictionType is a required field - RestrictionType *string `type:"string" required:"true" enum:"GeoRestrictionType"` + // * Custom origin: See HTTP Request Headers and CloudFront Behavior (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#request-custom-headers-behavior) + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s GeoRestriction) String() string { +func (s Headers) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GeoRestriction) GoString() string { +func (s Headers) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GeoRestriction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GeoRestriction"} +func (s *Headers) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Headers"} if s.Quantity == nil { invalidParams.Add(request.NewErrParamRequired("Quantity")) } - if s.RestrictionType == nil { - invalidParams.Add(request.NewErrParamRequired("RestrictionType")) - } if invalidParams.Len() > 0 { return invalidParams @@ -6553,121 +10479,132 @@ func (s *GeoRestriction) Validate() error { } // SetItems sets the Items field's value. -func (s *GeoRestriction) SetItems(v []*string) *GeoRestriction { +func (s *Headers) SetItems(v []*string) *Headers { s.Items = v return s } // SetQuantity sets the Quantity field's value. -func (s *GeoRestriction) SetQuantity(v int64) *GeoRestriction { +func (s *Headers) SetQuantity(v int64) *Headers { s.Quantity = &v return s } -// SetRestrictionType sets the RestrictionType field's value. -func (s *GeoRestriction) SetRestrictionType(v string) *GeoRestriction { - s.RestrictionType = &v - return s -} - -// The origin access identity's configuration information. For more information, -// see CloudFrontOriginAccessIdentityConfigComplexType. -type GetCloudFrontOriginAccessIdentityConfigInput struct { +// An invalidation. +type Invalidation struct { _ struct{} `type:"structure"` - // The identity's ID. + // The date and time the invalidation request was first made. + // + // CreateTime is a required field + CreateTime *time.Time `type:"timestamp" required:"true"` + + // The identifier for the invalidation request. For example: IDFDVBD632BHDS5. // // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + Id *string `type:"string" required:"true"` + + // The current invalidation information for the batch request. + // + // InvalidationBatch is a required field + InvalidationBatch *InvalidationBatch `type:"structure" required:"true"` + + // The status of the invalidation request. When the invalidation batch is finished, + // the status is Completed. 
+ // + // Status is a required field + Status *string `type:"string" required:"true"` } // String returns the string representation -func (s GetCloudFrontOriginAccessIdentityConfigInput) String() string { +func (s Invalidation) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetCloudFrontOriginAccessIdentityConfigInput) GoString() string { +func (s Invalidation) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetCloudFrontOriginAccessIdentityConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetCloudFrontOriginAccessIdentityConfigInput"} - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCreateTime sets the CreateTime field's value. +func (s *Invalidation) SetCreateTime(v time.Time) *Invalidation { + s.CreateTime = &v + return s } // SetId sets the Id field's value. -func (s *GetCloudFrontOriginAccessIdentityConfigInput) SetId(v string) *GetCloudFrontOriginAccessIdentityConfigInput { +func (s *Invalidation) SetId(v string) *Invalidation { s.Id = &v return s } -// The returned result of the corresponding request. -type GetCloudFrontOriginAccessIdentityConfigOutput struct { - _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityConfig"` - - // The origin access identity's configuration information. - CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `type:"structure"` - - // The current version of the configuration. For example: E2QWRUHAPOMQZL. - ETag *string `location:"header" locationName:"ETag" type:"string"` -} - -// String returns the string representation -func (s GetCloudFrontOriginAccessIdentityConfigOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s GetCloudFrontOriginAccessIdentityConfigOutput) GoString() string { - return s.String() -} - -// SetCloudFrontOriginAccessIdentityConfig sets the CloudFrontOriginAccessIdentityConfig field's value. -func (s *GetCloudFrontOriginAccessIdentityConfigOutput) SetCloudFrontOriginAccessIdentityConfig(v *OriginAccessIdentityConfig) *GetCloudFrontOriginAccessIdentityConfigOutput { - s.CloudFrontOriginAccessIdentityConfig = v +// SetInvalidationBatch sets the InvalidationBatch field's value. +func (s *Invalidation) SetInvalidationBatch(v *InvalidationBatch) *Invalidation { + s.InvalidationBatch = v return s } -// SetETag sets the ETag field's value. -func (s *GetCloudFrontOriginAccessIdentityConfigOutput) SetETag(v string) *GetCloudFrontOriginAccessIdentityConfigOutput { - s.ETag = &v +// SetStatus sets the Status field's value. +func (s *Invalidation) SetStatus(v string) *Invalidation { + s.Status = &v return s } -// The request to get an origin access identity's information. -type GetCloudFrontOriginAccessIdentityInput struct { +// An invalidation batch. +type InvalidationBatch struct { _ struct{} `type:"structure"` - // The identity's ID. + // A value that you specify to uniquely identify an invalidation request. CloudFront + // uses the value to prevent you from accidentally resubmitting an identical + // request. Whenever you create a new invalidation request, you must specify + // a new value for CallerReference and change other values in the request as + // applicable. 
One way to ensure that the value of CallerReference is unique + // is to use a timestamp, for example, 20120301090000. // - // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + // If you make a second invalidation request with the same value for CallerReference, + // and if the rest of the request is the same, CloudFront doesn't create a new + // invalidation request. Instead, CloudFront returns information about the invalidation + // request that you previously created with the same CallerReference. + // + // If CallerReference is a value you already sent in a previous invalidation + // batch request but the content of any Path is different from the original + // request, CloudFront returns an InvalidationBatchAlreadyExists error. + // + // CallerReference is a required field + CallerReference *string `type:"string" required:"true"` + + // A complex type that contains information about the objects that you want + // to invalidate. For more information, see Specifying the Objects to Invalidate + // (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html#invalidation-specifying-objects) + // in the Amazon CloudFront Developer Guide. + // + // Paths is a required field + Paths *Paths `type:"structure" required:"true"` } // String returns the string representation -func (s GetCloudFrontOriginAccessIdentityInput) String() string { +func (s InvalidationBatch) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetCloudFrontOriginAccessIdentityInput) GoString() string { +func (s InvalidationBatch) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetCloudFrontOriginAccessIdentityInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetCloudFrontOriginAccessIdentityInput"} - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) +func (s *InvalidationBatch) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InvalidationBatch"} + if s.CallerReference == nil { + invalidParams.Add(request.NewErrParamRequired("CallerReference")) + } + if s.Paths == nil { + invalidParams.Add(request.NewErrParamRequired("Paths")) + } + if s.Paths != nil { + if err := s.Paths.Validate(); err != nil { + invalidParams.AddNested("Paths", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -6676,143 +10613,256 @@ func (s *GetCloudFrontOriginAccessIdentityInput) Validate() error { return nil } -// SetId sets the Id field's value. -func (s *GetCloudFrontOriginAccessIdentityInput) SetId(v string) *GetCloudFrontOriginAccessIdentityInput { - s.Id = &v +// SetCallerReference sets the CallerReference field's value. +func (s *InvalidationBatch) SetCallerReference(v string) *InvalidationBatch { + s.CallerReference = &v return s } -// The returned result of the corresponding request. -type GetCloudFrontOriginAccessIdentityOutput struct { - _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentity"` +// SetPaths sets the Paths field's value. +func (s *InvalidationBatch) SetPaths(v *Paths) *InvalidationBatch { + s.Paths = v + return s +} - // The origin access identity's information. - CloudFrontOriginAccessIdentity *OriginAccessIdentity `type:"structure"` +// The InvalidationList complex type describes the list of invalidation objects. 
+// For more information about invalidation, see Invalidating Objects (Web Distributions +// Only) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html) +// in the Amazon CloudFront Developer Guide. +type InvalidationList struct { + _ struct{} `type:"structure"` - // The current version of the origin access identity's information. For example: - // E2QWRUHAPOMQZL. - ETag *string `location:"header" locationName:"ETag" type:"string"` + // A flag that indicates whether more invalidation batch requests remain to + // be listed. If your results were truncated, you can make a follow-up pagination + // request using the Marker request parameter to retrieve more invalidation + // batches in the list. + // + // IsTruncated is a required field + IsTruncated *bool `type:"boolean" required:"true"` + + // A complex type that contains one InvalidationSummary element for each invalidation + // batch created by the current AWS account. + Items []*InvalidationSummary `locationNameList:"InvalidationSummary" type:"list"` + + // The value that you provided for the Marker request parameter. + // + // Marker is a required field + Marker *string `type:"string" required:"true"` + + // The value that you provided for the MaxItems request parameter. + // + // MaxItems is a required field + MaxItems *int64 `type:"integer" required:"true"` + + // If IsTruncated is true, this element is present and contains the value that + // you can use for the Marker request parameter to continue listing your invalidation + // batches where they left off. + NextMarker *string `type:"string"` + + // The number of invalidation batches that were created by the current AWS account. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s GetCloudFrontOriginAccessIdentityOutput) String() string { +func (s InvalidationList) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetCloudFrontOriginAccessIdentityOutput) GoString() string { +func (s InvalidationList) GoString() string { return s.String() } -// SetCloudFrontOriginAccessIdentity sets the CloudFrontOriginAccessIdentity field's value. -func (s *GetCloudFrontOriginAccessIdentityOutput) SetCloudFrontOriginAccessIdentity(v *OriginAccessIdentity) *GetCloudFrontOriginAccessIdentityOutput { - s.CloudFrontOriginAccessIdentity = v +// SetIsTruncated sets the IsTruncated field's value. +func (s *InvalidationList) SetIsTruncated(v bool) *InvalidationList { + s.IsTruncated = &v return s } -// SetETag sets the ETag field's value. -func (s *GetCloudFrontOriginAccessIdentityOutput) SetETag(v string) *GetCloudFrontOriginAccessIdentityOutput { - s.ETag = &v +// SetItems sets the Items field's value. +func (s *InvalidationList) SetItems(v []*InvalidationSummary) *InvalidationList { + s.Items = v return s } -// The request to get a distribution configuration. -type GetDistributionConfigInput struct { +// SetMarker sets the Marker field's value. +func (s *InvalidationList) SetMarker(v string) *InvalidationList { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *InvalidationList) SetMaxItems(v int64) *InvalidationList { + s.MaxItems = &v + return s +} + +// SetNextMarker sets the NextMarker field's value. +func (s *InvalidationList) SetNextMarker(v string) *InvalidationList { + s.NextMarker = &v + return s +} + +// SetQuantity sets the Quantity field's value. 
+func (s *InvalidationList) SetQuantity(v int64) *InvalidationList { + s.Quantity = &v + return s +} + +// A summary of an invalidation request. +type InvalidationSummary struct { _ struct{} `type:"structure"` - // The distribution's ID. + // CreateTime is a required field + CreateTime *time.Time `type:"timestamp" required:"true"` + + // The unique ID for an invalidation request. + // + // Id is a required field + Id *string `type:"string" required:"true"` + + // The status of an invalidation request. // - // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + // Status is a required field + Status *string `type:"string" required:"true"` } // String returns the string representation -func (s GetDistributionConfigInput) String() string { +func (s InvalidationSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDistributionConfigInput) GoString() string { +func (s InvalidationSummary) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetDistributionConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetDistributionConfigInput"} - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCreateTime sets the CreateTime field's value. +func (s *InvalidationSummary) SetCreateTime(v time.Time) *InvalidationSummary { + s.CreateTime = &v + return s } // SetId sets the Id field's value. -func (s *GetDistributionConfigInput) SetId(v string) *GetDistributionConfigInput { +func (s *InvalidationSummary) SetId(v string) *InvalidationSummary { s.Id = &v return s } -// The returned result of the corresponding request. -type GetDistributionConfigOutput struct { - _ struct{} `type:"structure" payload:"DistributionConfig"` +// SetStatus sets the Status field's value. +func (s *InvalidationSummary) SetStatus(v string) *InvalidationSummary { + s.Status = &v + return s +} - // The distribution's configuration information. - DistributionConfig *DistributionConfig `type:"structure"` +// A complex type that lists the active CloudFront key pairs, if any, that are +// associated with AwsAccountNumber. +// +// For more information, see ActiveTrustedSigners. +type KeyPairIds struct { + _ struct{} `type:"structure"` - // The current version of the configuration. For example: E2QWRUHAPOMQZL. - ETag *string `location:"header" locationName:"ETag" type:"string"` + // A complex type that lists the active CloudFront key pairs, if any, that are + // associated with AwsAccountNumber. + // + // For more information, see ActiveTrustedSigners. + Items []*string `locationNameList:"KeyPairId" type:"list"` + + // The number of active CloudFront key pairs for AwsAccountNumber. + // + // For more information, see ActiveTrustedSigners. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s GetDistributionConfigOutput) String() string { +func (s KeyPairIds) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDistributionConfigOutput) GoString() string { +func (s KeyPairIds) GoString() string { return s.String() } -// SetDistributionConfig sets the DistributionConfig field's value. 
-func (s *GetDistributionConfigOutput) SetDistributionConfig(v *DistributionConfig) *GetDistributionConfigOutput { - s.DistributionConfig = v +// SetItems sets the Items field's value. +func (s *KeyPairIds) SetItems(v []*string) *KeyPairIds { + s.Items = v return s } -// SetETag sets the ETag field's value. -func (s *GetDistributionConfigOutput) SetETag(v string) *GetDistributionConfigOutput { - s.ETag = &v +// SetQuantity sets the Quantity field's value. +func (s *KeyPairIds) SetQuantity(v int64) *KeyPairIds { + s.Quantity = &v return s } -// The request to get a distribution's information. -type GetDistributionInput struct { +// A complex type that contains a Lambda function association. +type LambdaFunctionAssociation struct { _ struct{} `type:"structure"` - // The distribution's ID. + // Specifies the event type that triggers a Lambda function invocation. You + // can specify the following values: // - // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + // * viewer-request: The function executes when CloudFront receives a request + // from a viewer and before it checks to see whether the requested object + // is in the edge cache. + // + // * origin-request: The function executes only when CloudFront forwards + // a request to your origin. When the requested object is in the edge cache, + // the function doesn't execute. + // + // * origin-response: The function executes after CloudFront receives a response + // from the origin and before it caches the object in the response. When + // the requested object is in the edge cache, the function doesn't execute. + // + // If the origin returns an HTTP status code other than HTTP 200 (OK), the function + // doesn't execute. + // + // * viewer-response: The function executes before CloudFront returns the + // requested object to the viewer. The function executes regardless of whether + // the object was already in the edge cache. + // + // If the origin returns an HTTP status code other than HTTP 200 (OK), the function + // doesn't execute. + // + // EventType is a required field + EventType *string `type:"string" required:"true" enum:"EventType"` + + // A flag that allows a Lambda function to have read access to the body content. + // For more information, see Accessing the Request Body by Choosing the Include + // Body Option (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-include-body-access.html) + // in the Amazon CloudFront Developer Guide. + IncludeBody *bool `type:"boolean"` + + // The ARN of the Lambda function. You must specify the ARN of a function version; + // you can't specify a Lambda alias or $LATEST. + // + // LambdaFunctionARN is a required field + LambdaFunctionARN *string `type:"string" required:"true"` } // String returns the string representation -func (s GetDistributionInput) String() string { +func (s LambdaFunctionAssociation) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDistributionInput) GoString() string { +func (s LambdaFunctionAssociation) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *GetDistributionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetDistributionInput"} - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) +func (s *LambdaFunctionAssociation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LambdaFunctionAssociation"} + if s.EventType == nil { + invalidParams.Add(request.NewErrParamRequired("EventType")) + } + if s.LambdaFunctionARN == nil { + invalidParams.Add(request.NewErrParamRequired("LambdaFunctionARN")) } if invalidParams.Len() > 0 { @@ -6821,78 +10871,73 @@ func (s *GetDistributionInput) Validate() error { return nil } -// SetId sets the Id field's value. -func (s *GetDistributionInput) SetId(v string) *GetDistributionInput { - s.Id = &v +// SetEventType sets the EventType field's value. +func (s *LambdaFunctionAssociation) SetEventType(v string) *LambdaFunctionAssociation { + s.EventType = &v return s } -// The returned result of the corresponding request. -type GetDistributionOutput struct { - _ struct{} `type:"structure" payload:"Distribution"` - - // The distribution's information. - Distribution *Distribution `type:"structure"` - - // The current version of the distribution's information. For example: E2QWRUHAPOMQZL. - ETag *string `location:"header" locationName:"ETag" type:"string"` -} - -// String returns the string representation -func (s GetDistributionOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s GetDistributionOutput) GoString() string { - return s.String() -} - -// SetDistribution sets the Distribution field's value. -func (s *GetDistributionOutput) SetDistribution(v *Distribution) *GetDistributionOutput { - s.Distribution = v +// SetIncludeBody sets the IncludeBody field's value. +func (s *LambdaFunctionAssociation) SetIncludeBody(v bool) *LambdaFunctionAssociation { + s.IncludeBody = &v return s } -// SetETag sets the ETag field's value. -func (s *GetDistributionOutput) SetETag(v string) *GetDistributionOutput { - s.ETag = &v +// SetLambdaFunctionARN sets the LambdaFunctionARN field's value. +func (s *LambdaFunctionAssociation) SetLambdaFunctionARN(v string) *LambdaFunctionAssociation { + s.LambdaFunctionARN = &v return s } -// The request to get an invalidation's information. -type GetInvalidationInput struct { +// A complex type that specifies a list of Lambda functions associations for +// a cache behavior. +// +// If you want to invoke one or more Lambda functions triggered by requests +// that match the PathPattern of the cache behavior, specify the applicable +// values for Quantity and Items. Note that there can be up to 4 LambdaFunctionAssociation +// items in this list (one for each possible value of EventType) and each EventType +// can be associated with the Lambda function only once. +// +// If you don't want to invoke any Lambda functions for the requests that match +// PathPattern, specify 0 for Quantity and omit Items. +type LambdaFunctionAssociations struct { _ struct{} `type:"structure"` - // The distribution's ID. - // - // DistributionId is a required field - DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"` + // Optional: A complex type that contains LambdaFunctionAssociation items for + // this cache behavior. If Quantity is 0, you can omit Items. 
+ Items []*LambdaFunctionAssociation `locationNameList:"LambdaFunctionAssociation" type:"list"` - // The identifier for the invalidation request, for example, IDFDVBD632BHDS5. + // The number of Lambda function associations for this cache behavior. // - // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s GetInvalidationInput) String() string { +func (s LambdaFunctionAssociations) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInvalidationInput) GoString() string { +func (s LambdaFunctionAssociations) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetInvalidationInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetInvalidationInput"} - if s.DistributionId == nil { - invalidParams.Add(request.NewErrParamRequired("DistributionId")) +func (s *LambdaFunctionAssociations) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LambdaFunctionAssociations"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) } - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -6901,140 +10946,118 @@ func (s *GetInvalidationInput) Validate() error { return nil } -// SetDistributionId sets the DistributionId field's value. -func (s *GetInvalidationInput) SetDistributionId(v string) *GetInvalidationInput { - s.DistributionId = &v - return s -} - -// SetId sets the Id field's value. -func (s *GetInvalidationInput) SetId(v string) *GetInvalidationInput { - s.Id = &v +// SetItems sets the Items field's value. +func (s *LambdaFunctionAssociations) SetItems(v []*LambdaFunctionAssociation) *LambdaFunctionAssociations { + s.Items = v return s } -// The returned result of the corresponding request. -type GetInvalidationOutput struct { - _ struct{} `type:"structure" payload:"Invalidation"` - - // The invalidation's information. For more information, see Invalidation Complex - // Type (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/InvalidationDatatype.html). - Invalidation *Invalidation `type:"structure"` -} - -// String returns the string representation -func (s GetInvalidationOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s GetInvalidationOutput) GoString() string { - return s.String() -} - -// SetInvalidation sets the Invalidation field's value. -func (s *GetInvalidationOutput) SetInvalidation(v *Invalidation) *GetInvalidationOutput { - s.Invalidation = v +// SetQuantity sets the Quantity field's value. +func (s *LambdaFunctionAssociations) SetQuantity(v int64) *LambdaFunctionAssociations { + s.Quantity = &v return s } -// To request to get a streaming distribution configuration. -type GetStreamingDistributionConfigInput struct { +// The request to list origin access identities. +type ListCloudFrontOriginAccessIdentitiesInput struct { _ struct{} `type:"structure"` - // The streaming distribution's ID. 
- // - // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + // Use this when paginating results to indicate where to begin in your list + // of origin access identities. The results include identities in the list that + // occur after the marker. To get the next page of results, set the Marker to + // the value of the NextMarker from the current page's response (which is also + // the ID of the last identity on that page). + Marker *string `location:"querystring" locationName:"Marker" type:"string"` + + // The maximum number of origin access identities you want in the response body. + MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` } // String returns the string representation -func (s GetStreamingDistributionConfigInput) String() string { +func (s ListCloudFrontOriginAccessIdentitiesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetStreamingDistributionConfigInput) GoString() string { +func (s ListCloudFrontOriginAccessIdentitiesInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetStreamingDistributionConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetStreamingDistributionConfigInput"} - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetMarker sets the Marker field's value. +func (s *ListCloudFrontOriginAccessIdentitiesInput) SetMarker(v string) *ListCloudFrontOriginAccessIdentitiesInput { + s.Marker = &v + return s } -// SetId sets the Id field's value. -func (s *GetStreamingDistributionConfigInput) SetId(v string) *GetStreamingDistributionConfigInput { - s.Id = &v +// SetMaxItems sets the MaxItems field's value. +func (s *ListCloudFrontOriginAccessIdentitiesInput) SetMaxItems(v int64) *ListCloudFrontOriginAccessIdentitiesInput { + s.MaxItems = &v return s } // The returned result of the corresponding request. -type GetStreamingDistributionConfigOutput struct { - _ struct{} `type:"structure" payload:"StreamingDistributionConfig"` - - // The current version of the configuration. For example: E2QWRUHAPOMQZL. - ETag *string `location:"header" locationName:"ETag" type:"string"` +type ListCloudFrontOriginAccessIdentitiesOutput struct { + _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityList"` - // The streaming distribution's configuration information. - StreamingDistributionConfig *StreamingDistributionConfig `type:"structure"` + // The CloudFrontOriginAccessIdentityList type. + CloudFrontOriginAccessIdentityList *OriginAccessIdentityList `type:"structure"` } // String returns the string representation -func (s GetStreamingDistributionConfigOutput) String() string { +func (s ListCloudFrontOriginAccessIdentitiesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetStreamingDistributionConfigOutput) GoString() string { +func (s ListCloudFrontOriginAccessIdentitiesOutput) GoString() string { return s.String() } -// SetETag sets the ETag field's value. -func (s *GetStreamingDistributionConfigOutput) SetETag(v string) *GetStreamingDistributionConfigOutput { - s.ETag = &v - return s -} - -// SetStreamingDistributionConfig sets the StreamingDistributionConfig field's value. 
-func (s *GetStreamingDistributionConfigOutput) SetStreamingDistributionConfig(v *StreamingDistributionConfig) *GetStreamingDistributionConfigOutput { - s.StreamingDistributionConfig = v +// SetCloudFrontOriginAccessIdentityList sets the CloudFrontOriginAccessIdentityList field's value. +func (s *ListCloudFrontOriginAccessIdentitiesOutput) SetCloudFrontOriginAccessIdentityList(v *OriginAccessIdentityList) *ListCloudFrontOriginAccessIdentitiesOutput { + s.CloudFrontOriginAccessIdentityList = v return s } -// The request to get a streaming distribution's information. -type GetStreamingDistributionInput struct { +// The request to list distributions that are associated with a specified AWS +// WAF web ACL. +type ListDistributionsByWebACLIdInput struct { _ struct{} `type:"structure"` - // The streaming distribution's ID. + // Use Marker and MaxItems to control pagination of results. If you have more + // than MaxItems distributions that satisfy the request, the response includes + // a NextMarker element. To get the next page of results, submit another request. + // For the value of Marker, specify the value of NextMarker from the last response. + // (For the first request, omit Marker.) + Marker *string `location:"querystring" locationName:"Marker" type:"string"` + + // The maximum number of distributions that you want CloudFront to return in + // the response body. The maximum and default values are both 100. + MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` + + // The ID of the AWS WAF web ACL that you want to list the associated distributions. + // If you specify "null" for the ID, the request returns a list of the distributions + // that aren't associated with a web ACL. // - // Id is a required field - Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + // WebACLId is a required field + WebACLId *string `location:"uri" locationName:"WebACLId" type:"string" required:"true"` } // String returns the string representation -func (s GetStreamingDistributionInput) String() string { +func (s ListDistributionsByWebACLIdInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetStreamingDistributionInput) GoString() string { +func (s ListDistributionsByWebACLIdInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetStreamingDistributionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetStreamingDistributionInput"} - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) +func (s *ListDistributionsByWebACLIdInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListDistributionsByWebACLIdInput"} + if s.WebACLId == nil { + invalidParams.Add(request.NewErrParamRequired("WebACLId")) } if invalidParams.Len() > 0 { @@ -7043,538 +11066,462 @@ func (s *GetStreamingDistributionInput) Validate() error { return nil } -// SetId sets the Id field's value. -func (s *GetStreamingDistributionInput) SetId(v string) *GetStreamingDistributionInput { - s.Id = &v +// SetMarker sets the Marker field's value. +func (s *ListDistributionsByWebACLIdInput) SetMarker(v string) *ListDistributionsByWebACLIdInput { + s.Marker = &v return s } -// The returned result of the corresponding request. -type GetStreamingDistributionOutput struct { - _ struct{} `type:"structure" payload:"StreamingDistribution"` +// SetMaxItems sets the MaxItems field's value. 
+func (s *ListDistributionsByWebACLIdInput) SetMaxItems(v int64) *ListDistributionsByWebACLIdInput { + s.MaxItems = &v + return s +} - // The current version of the streaming distribution's information. For example: - // E2QWRUHAPOMQZL. - ETag *string `location:"header" locationName:"ETag" type:"string"` +// SetWebACLId sets the WebACLId field's value. +func (s *ListDistributionsByWebACLIdInput) SetWebACLId(v string) *ListDistributionsByWebACLIdInput { + s.WebACLId = &v + return s +} - // The streaming distribution's information. - StreamingDistribution *StreamingDistribution `type:"structure"` +// The response to a request to list the distributions that are associated with +// a specified AWS WAF web ACL. +type ListDistributionsByWebACLIdOutput struct { + _ struct{} `type:"structure" payload:"DistributionList"` + + // The DistributionList type. + DistributionList *DistributionList `type:"structure"` } // String returns the string representation -func (s GetStreamingDistributionOutput) String() string { +func (s ListDistributionsByWebACLIdOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetStreamingDistributionOutput) GoString() string { +func (s ListDistributionsByWebACLIdOutput) GoString() string { return s.String() } -// SetETag sets the ETag field's value. -func (s *GetStreamingDistributionOutput) SetETag(v string) *GetStreamingDistributionOutput { - s.ETag = &v - return s -} - -// SetStreamingDistribution sets the StreamingDistribution field's value. -func (s *GetStreamingDistributionOutput) SetStreamingDistribution(v *StreamingDistribution) *GetStreamingDistributionOutput { - s.StreamingDistribution = v +// SetDistributionList sets the DistributionList field's value. +func (s *ListDistributionsByWebACLIdOutput) SetDistributionList(v *DistributionList) *ListDistributionsByWebACLIdOutput { + s.DistributionList = v return s } -// A complex type that specifies the request headers, if any, that you want -// CloudFront to base caching on for this cache behavior. -// -// For the headers that you specify, CloudFront caches separate versions of -// a specified object based on the header values in viewer requests. For example, -// suppose viewer requests for logo.jpg contain a custom product header that -// has a value of either acme or apex, and you configure CloudFront to cache -// your content based on values in the product header. CloudFront forwards the -// product header to the origin and caches the response from the origin once -// for each header value. For more information about caching based on header -// values, see How CloudFront Forwards and Caches Headers (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html) -// in the Amazon CloudFront Developer Guide. -type Headers struct { +// The request to list your distributions. +type ListDistributionsInput struct { _ struct{} `type:"structure"` - // A list that contains one Name element for each header that you want CloudFront - // to use for caching in this cache behavior. If Quantity is 0, omit Items. - Items []*string `locationNameList:"Name" type:"list"` + // Use this when paginating results to indicate where to begin in your list + // of distributions. The results include distributions in the list that occur + // after the marker. To get the next page of results, set the Marker to the + // value of the NextMarker from the current page's response (which is also the + // ID of the last distribution on that page). 
+ Marker *string `location:"querystring" locationName:"Marker" type:"string"` - // The number of different headers that you want CloudFront to base caching - // on for this cache behavior. You can configure each cache behavior in a web - // distribution to do one of the following: - // - // * Forward all headers to your origin: Specify 1 for Quantity and * for - // Name. - // - // CloudFront doesn't cache the objects that are associated with this cache - // behavior. Instead, CloudFront sends every request to the origin. - // - // * Forward a whitelist of headers you specify: Specify the number of headers - // that you want CloudFront to base caching on. Then specify the header names - // in Name elements. CloudFront caches your objects based on the values in - // the specified headers. - // - // * Forward only the default headers: Specify 0 for Quantity and omit Items. - // In this configuration, CloudFront doesn't cache based on the values in - // the request headers. - // - // Regardless of which option you choose, CloudFront forwards headers to your - // origin based on whether the origin is an S3 bucket or a custom origin. See - // the following documentation: - // - // * S3 bucket: See HTTP Request Headers That CloudFront Removes or Updates - // (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorS3Origin.html#request-s3-removed-headers) - // - // * Custom origin: See HTTP Request Headers and CloudFront Behavior (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#request-custom-headers-behavior) - // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // The maximum number of distributions you want in the response body. + MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` } // String returns the string representation -func (s Headers) String() string { +func (s ListDistributionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Headers) GoString() string { +func (s ListDistributionsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *Headers) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Headers"} - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetMarker sets the Marker field's value. +func (s *ListDistributionsInput) SetMarker(v string) *ListDistributionsInput { + s.Marker = &v + return s } -// SetItems sets the Items field's value. -func (s *Headers) SetItems(v []*string) *Headers { - s.Items = v +// SetMaxItems sets the MaxItems field's value. +func (s *ListDistributionsInput) SetMaxItems(v int64) *ListDistributionsInput { + s.MaxItems = &v return s } -// SetQuantity sets the Quantity field's value. -func (s *Headers) SetQuantity(v int64) *Headers { - s.Quantity = &v - return s +// The returned result of the corresponding request. +type ListDistributionsOutput struct { + _ struct{} `type:"structure" payload:"DistributionList"` + + // The DistributionList type. + DistributionList *DistributionList `type:"structure"` } -// An invalidation. 
-type Invalidation struct { - _ struct{} `type:"structure"` +// String returns the string representation +func (s ListDistributionsOutput) String() string { + return awsutil.Prettify(s) +} - // The date and time the invalidation request was first made. - // - // CreateTime is a required field - CreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` +// GoString returns the string representation +func (s ListDistributionsOutput) GoString() string { + return s.String() +} - // The identifier for the invalidation request. For example: IDFDVBD632BHDS5. - // - // Id is a required field - Id *string `type:"string" required:"true"` +// SetDistributionList sets the DistributionList field's value. +func (s *ListDistributionsOutput) SetDistributionList(v *DistributionList) *ListDistributionsOutput { + s.DistributionList = v + return s +} - // The current invalidation information for the batch request. - // - // InvalidationBatch is a required field - InvalidationBatch *InvalidationBatch `type:"structure" required:"true"` +type ListFieldLevelEncryptionConfigsInput struct { + _ struct{} `type:"structure"` - // The status of the invalidation request. When the invalidation batch is finished, - // the status is Completed. - // - // Status is a required field - Status *string `type:"string" required:"true"` + // Use this when paginating results to indicate where to begin in your list + // of configurations. The results include configurations in the list that occur + // after the marker. To get the next page of results, set the Marker to the + // value of the NextMarker from the current page's response (which is also the + // ID of the last configuration on that page). + Marker *string `location:"querystring" locationName:"Marker" type:"string"` + + // The maximum number of field-level encryption configurations you want in the + // response body. + MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` } // String returns the string representation -func (s Invalidation) String() string { +func (s ListFieldLevelEncryptionConfigsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Invalidation) GoString() string { +func (s ListFieldLevelEncryptionConfigsInput) GoString() string { return s.String() } -// SetCreateTime sets the CreateTime field's value. -func (s *Invalidation) SetCreateTime(v time.Time) *Invalidation { - s.CreateTime = &v +// SetMarker sets the Marker field's value. +func (s *ListFieldLevelEncryptionConfigsInput) SetMarker(v string) *ListFieldLevelEncryptionConfigsInput { + s.Marker = &v return s } -// SetId sets the Id field's value. -func (s *Invalidation) SetId(v string) *Invalidation { - s.Id = &v +// SetMaxItems sets the MaxItems field's value. +func (s *ListFieldLevelEncryptionConfigsInput) SetMaxItems(v int64) *ListFieldLevelEncryptionConfigsInput { + s.MaxItems = &v return s } -// SetInvalidationBatch sets the InvalidationBatch field's value. -func (s *Invalidation) SetInvalidationBatch(v *InvalidationBatch) *Invalidation { - s.InvalidationBatch = v - return s +type ListFieldLevelEncryptionConfigsOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionList"` + + // Returns a list of all field-level encryption configurations that have been + // created in CloudFront for this account. + FieldLevelEncryptionList *FieldLevelEncryptionList `type:"structure"` } -// SetStatus sets the Status field's value. 
-func (s *Invalidation) SetStatus(v string) *Invalidation { - s.Status = &v +// String returns the string representation +func (s ListFieldLevelEncryptionConfigsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListFieldLevelEncryptionConfigsOutput) GoString() string { + return s.String() +} + +// SetFieldLevelEncryptionList sets the FieldLevelEncryptionList field's value. +func (s *ListFieldLevelEncryptionConfigsOutput) SetFieldLevelEncryptionList(v *FieldLevelEncryptionList) *ListFieldLevelEncryptionConfigsOutput { + s.FieldLevelEncryptionList = v return s } -// An invalidation batch. -type InvalidationBatch struct { +type ListFieldLevelEncryptionProfilesInput struct { _ struct{} `type:"structure"` - // A value that you specify to uniquely identify an invalidation request. CloudFront - // uses the value to prevent you from accidentally resubmitting an identical - // request. Whenever you create a new invalidation request, you must specify - // a new value for CallerReference and change other values in the request as - // applicable. One way to ensure that the value of CallerReference is unique - // is to use a timestamp, for example, 20120301090000. - // - // If you make a second invalidation request with the same value for CallerReference, - // and if the rest of the request is the same, CloudFront doesn't create a new - // invalidation request. Instead, CloudFront returns information about the invalidation - // request that you previously created with the same CallerReference. - // - // If CallerReference is a value you already sent in a previous invalidation - // batch request but the content of any Path is different from the original - // request, CloudFront returns an InvalidationBatchAlreadyExists error. - // - // CallerReference is a required field - CallerReference *string `type:"string" required:"true"` + // Use this when paginating results to indicate where to begin in your list + // of profiles. The results include profiles in the list that occur after the + // marker. To get the next page of results, set the Marker to the value of the + // NextMarker from the current page's response (which is also the ID of the + // last profile on that page). + Marker *string `location:"querystring" locationName:"Marker" type:"string"` - // A complex type that contains information about the objects that you want - // to invalidate. For more information, see Specifying the Objects to Invalidate - // (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html#invalidation-specifying-objects) - // in the Amazon CloudFront Developer Guide. - // - // Paths is a required field - Paths *Paths `type:"structure" required:"true"` + // The maximum number of field-level encryption profiles you want in the response + // body. + MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` } // String returns the string representation -func (s InvalidationBatch) String() string { +func (s ListFieldLevelEncryptionProfilesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InvalidationBatch) GoString() string { +func (s ListFieldLevelEncryptionProfilesInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *InvalidationBatch) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "InvalidationBatch"} - if s.CallerReference == nil { - invalidParams.Add(request.NewErrParamRequired("CallerReference")) - } - if s.Paths == nil { - invalidParams.Add(request.NewErrParamRequired("Paths")) - } - if s.Paths != nil { - if err := s.Paths.Validate(); err != nil { - invalidParams.AddNested("Paths", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetMarker sets the Marker field's value. +func (s *ListFieldLevelEncryptionProfilesInput) SetMarker(v string) *ListFieldLevelEncryptionProfilesInput { + s.Marker = &v + return s } -// SetCallerReference sets the CallerReference field's value. -func (s *InvalidationBatch) SetCallerReference(v string) *InvalidationBatch { - s.CallerReference = &v +// SetMaxItems sets the MaxItems field's value. +func (s *ListFieldLevelEncryptionProfilesInput) SetMaxItems(v int64) *ListFieldLevelEncryptionProfilesInput { + s.MaxItems = &v return s } -// SetPaths sets the Paths field's value. -func (s *InvalidationBatch) SetPaths(v *Paths) *InvalidationBatch { - s.Paths = v - return s +type ListFieldLevelEncryptionProfilesOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionProfileList"` + + // Returns a list of the field-level encryption profiles that have been created + // in CloudFront for this account. + FieldLevelEncryptionProfileList *FieldLevelEncryptionProfileList `type:"structure"` } -// The InvalidationList complex type describes the list of invalidation objects. -// For more information about invalidation, see Invalidating Objects (Web Distributions -// Only) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html) -// in the Amazon CloudFront Developer Guide. -type InvalidationList struct { - _ struct{} `type:"structure"` +// String returns the string representation +func (s ListFieldLevelEncryptionProfilesOutput) String() string { + return awsutil.Prettify(s) +} - // A flag that indicates whether more invalidation batch requests remain to - // be listed. If your results were truncated, you can make a follow-up pagination - // request using the Marker request parameter to retrieve more invalidation - // batches in the list. - // - // IsTruncated is a required field - IsTruncated *bool `type:"boolean" required:"true"` +// GoString returns the string representation +func (s ListFieldLevelEncryptionProfilesOutput) GoString() string { + return s.String() +} - // A complex type that contains one InvalidationSummary element for each invalidation - // batch created by the current AWS account. - Items []*InvalidationSummary `locationNameList:"InvalidationSummary" type:"list"` +// SetFieldLevelEncryptionProfileList sets the FieldLevelEncryptionProfileList field's value. +func (s *ListFieldLevelEncryptionProfilesOutput) SetFieldLevelEncryptionProfileList(v *FieldLevelEncryptionProfileList) *ListFieldLevelEncryptionProfilesOutput { + s.FieldLevelEncryptionProfileList = v + return s +} - // The value that you provided for the Marker request parameter. - // - // Marker is a required field - Marker *string `type:"string" required:"true"` +// The request to list invalidations. +type ListInvalidationsInput struct { + _ struct{} `type:"structure"` - // The value that you provided for the MaxItems request parameter. + // The distribution's ID. 
// - // MaxItems is a required field - MaxItems *int64 `type:"integer" required:"true"` + // DistributionId is a required field + DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"` - // If IsTruncated is true, this element is present and contains the value that - // you can use for the Marker request parameter to continue listing your invalidation - // batches where they left off. - NextMarker *string `type:"string"` + // Use this parameter when paginating results to indicate where to begin in + // your list of invalidation batches. Because the results are returned in decreasing + // order from most recent to oldest, the most recent results are on the first + // page, the second page will contain earlier results, and so on. To get the + // next page of results, set Marker to the value of the NextMarker from the + // current page's response. This value is the same as the ID of the last invalidation + // batch on that page. + Marker *string `location:"querystring" locationName:"Marker" type:"string"` - // The number of invalidation batches that were created by the current AWS account. - // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // The maximum number of invalidation batches that you want in the response + // body. + MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` } // String returns the string representation -func (s InvalidationList) String() string { +func (s ListInvalidationsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InvalidationList) GoString() string { +func (s ListInvalidationsInput) GoString() string { return s.String() } -// SetIsTruncated sets the IsTruncated field's value. -func (s *InvalidationList) SetIsTruncated(v bool) *InvalidationList { - s.IsTruncated = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListInvalidationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListInvalidationsInput"} + if s.DistributionId == nil { + invalidParams.Add(request.NewErrParamRequired("DistributionId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetItems sets the Items field's value. -func (s *InvalidationList) SetItems(v []*InvalidationSummary) *InvalidationList { - s.Items = v +// SetDistributionId sets the DistributionId field's value. +func (s *ListInvalidationsInput) SetDistributionId(v string) *ListInvalidationsInput { + s.DistributionId = &v return s } // SetMarker sets the Marker field's value. -func (s *InvalidationList) SetMarker(v string) *InvalidationList { +func (s *ListInvalidationsInput) SetMarker(v string) *ListInvalidationsInput { s.Marker = &v return s } // SetMaxItems sets the MaxItems field's value. -func (s *InvalidationList) SetMaxItems(v int64) *InvalidationList { +func (s *ListInvalidationsInput) SetMaxItems(v int64) *ListInvalidationsInput { s.MaxItems = &v return s } -// SetNextMarker sets the NextMarker field's value. -func (s *InvalidationList) SetNextMarker(v string) *InvalidationList { - s.NextMarker = &v - return s +// The returned result of the corresponding request. +type ListInvalidationsOutput struct { + _ struct{} `type:"structure" payload:"InvalidationList"` + + // Information about invalidation batches. + InvalidationList *InvalidationList `type:"structure"` } -// SetQuantity sets the Quantity field's value. 
-func (s *InvalidationList) SetQuantity(v int64) *InvalidationList { - s.Quantity = &v +// String returns the string representation +func (s ListInvalidationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInvalidationsOutput) GoString() string { + return s.String() +} + +// SetInvalidationList sets the InvalidationList field's value. +func (s *ListInvalidationsOutput) SetInvalidationList(v *InvalidationList) *ListInvalidationsOutput { + s.InvalidationList = v return s } -// A summary of an invalidation request. -type InvalidationSummary struct { +type ListPublicKeysInput struct { _ struct{} `type:"structure"` - // CreateTime is a required field - CreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` - - // The unique ID for an invalidation request. - // - // Id is a required field - Id *string `type:"string" required:"true"` + // Use this when paginating results to indicate where to begin in your list + // of public keys. The results include public keys in the list that occur after + // the marker. To get the next page of results, set the Marker to the value + // of the NextMarker from the current page's response (which is also the ID + // of the last public key on that page). + Marker *string `location:"querystring" locationName:"Marker" type:"string"` - // The status of an invalidation request. - // - // Status is a required field - Status *string `type:"string" required:"true"` + // The maximum number of public keys you want in the response body. + MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` } // String returns the string representation -func (s InvalidationSummary) String() string { +func (s ListPublicKeysInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InvalidationSummary) GoString() string { +func (s ListPublicKeysInput) GoString() string { return s.String() } -// SetCreateTime sets the CreateTime field's value. -func (s *InvalidationSummary) SetCreateTime(v time.Time) *InvalidationSummary { - s.CreateTime = &v +// SetMarker sets the Marker field's value. +func (s *ListPublicKeysInput) SetMarker(v string) *ListPublicKeysInput { + s.Marker = &v return s } -// SetId sets the Id field's value. -func (s *InvalidationSummary) SetId(v string) *InvalidationSummary { - s.Id = &v +// SetMaxItems sets the MaxItems field's value. +func (s *ListPublicKeysInput) SetMaxItems(v int64) *ListPublicKeysInput { + s.MaxItems = &v return s } -// SetStatus sets the Status field's value. -func (s *InvalidationSummary) SetStatus(v string) *InvalidationSummary { - s.Status = &v +type ListPublicKeysOutput struct { + _ struct{} `type:"structure" payload:"PublicKeyList"` + + // Returns a list of all public keys that have been added to CloudFront for + // this account. + PublicKeyList *PublicKeyList `type:"structure"` +} + +// String returns the string representation +func (s ListPublicKeysOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPublicKeysOutput) GoString() string { + return s.String() +} + +// SetPublicKeyList sets the PublicKeyList field's value. +func (s *ListPublicKeysOutput) SetPublicKeyList(v *PublicKeyList) *ListPublicKeysOutput { + s.PublicKeyList = v return s } -// A complex type that lists the active CloudFront key pairs, if any, that are -// associated with AwsAccountNumber. 
-// -// For more information, see ActiveTrustedSigners. -type KeyPairIds struct { +// The request to list your streaming distributions. +type ListStreamingDistributionsInput struct { _ struct{} `type:"structure"` - // A complex type that lists the active CloudFront key pairs, if any, that are - // associated with AwsAccountNumber. - // - // For more information, see ActiveTrustedSigners. - Items []*string `locationNameList:"KeyPairId" type:"list"` + // The value that you provided for the Marker request parameter. + Marker *string `location:"querystring" locationName:"Marker" type:"string"` - // The number of active CloudFront key pairs for AwsAccountNumber. - // - // For more information, see ActiveTrustedSigners. - // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // The value that you provided for the MaxItems request parameter. + MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` } // String returns the string representation -func (s KeyPairIds) String() string { +func (s ListStreamingDistributionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s KeyPairIds) GoString() string { +func (s ListStreamingDistributionsInput) GoString() string { return s.String() } -// SetItems sets the Items field's value. -func (s *KeyPairIds) SetItems(v []*string) *KeyPairIds { - s.Items = v +// SetMarker sets the Marker field's value. +func (s *ListStreamingDistributionsInput) SetMarker(v string) *ListStreamingDistributionsInput { + s.Marker = &v return s } -// SetQuantity sets the Quantity field's value. -func (s *KeyPairIds) SetQuantity(v int64) *KeyPairIds { - s.Quantity = &v +// SetMaxItems sets the MaxItems field's value. +func (s *ListStreamingDistributionsInput) SetMaxItems(v int64) *ListStreamingDistributionsInput { + s.MaxItems = &v return s } -// A complex type that contains a Lambda function association. -type LambdaFunctionAssociation struct { - _ struct{} `type:"structure"` - - // Specifies the event type that triggers a Lambda function invocation. You - // can specify the following values: - // - // * viewer-request: The function executes when CloudFront receives a request - // from a viewer and before it checks to see whether the requested object - // is in the edge cache. - // - // * origin-request: The function executes only when CloudFront forwards - // a request to your origin. When the requested object is in the edge cache, - // the function doesn't execute. - // - // * origin-response: The function executes after CloudFront receives a response - // from the origin and before it caches the object in the response. When - // the requested object is in the edge cache, the function doesn't execute. - // - // If the origin returns an HTTP status code other than HTTP 200 (OK), the function - // doesn't execute. - // - // * viewer-response: The function executes before CloudFront returns the - // requested object to the viewer. The function executes regardless of whether - // the object was already in the edge cache. - // - // If the origin returns an HTTP status code other than HTTP 200 (OK), the function - // doesn't execute. - EventType *string `type:"string" enum:"EventType"` +// The returned result of the corresponding request. +type ListStreamingDistributionsOutput struct { + _ struct{} `type:"structure" payload:"StreamingDistributionList"` - // The ARN of the Lambda function. 
You must specify the ARN of a function version; - // you can't specify a Lambda alias or $LATEST. - LambdaFunctionARN *string `type:"string"` + // The StreamingDistributionList type. + StreamingDistributionList *StreamingDistributionList `type:"structure"` } // String returns the string representation -func (s LambdaFunctionAssociation) String() string { +func (s ListStreamingDistributionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LambdaFunctionAssociation) GoString() string { +func (s ListStreamingDistributionsOutput) GoString() string { return s.String() } -// SetEventType sets the EventType field's value. -func (s *LambdaFunctionAssociation) SetEventType(v string) *LambdaFunctionAssociation { - s.EventType = &v - return s -} - -// SetLambdaFunctionARN sets the LambdaFunctionARN field's value. -func (s *LambdaFunctionAssociation) SetLambdaFunctionARN(v string) *LambdaFunctionAssociation { - s.LambdaFunctionARN = &v +// SetStreamingDistributionList sets the StreamingDistributionList field's value. +func (s *ListStreamingDistributionsOutput) SetStreamingDistributionList(v *StreamingDistributionList) *ListStreamingDistributionsOutput { + s.StreamingDistributionList = v return s } -// A complex type that specifies a list of Lambda functions associations for -// a cache behavior. -// -// If you want to invoke one or more Lambda functions triggered by requests -// that match the PathPattern of the cache behavior, specify the applicable -// values for Quantity and Items. Note that there can be up to 4 LambdaFunctionAssociation -// items in this list (one for each possible value of EventType) and each EventType -// can be associated with the Lambda function only once. -// -// If you don't want to invoke any Lambda functions for the requests that match -// PathPattern, specify 0 for Quantity and omit Items. -type LambdaFunctionAssociations struct { +// The request to list tags for a CloudFront resource. +type ListTagsForResourceInput struct { _ struct{} `type:"structure"` - // Optional: A complex type that contains LambdaFunctionAssociation items for - // this cache behavior. If Quantity is 0, you can omit Items. - Items []*LambdaFunctionAssociation `locationNameList:"LambdaFunctionAssociation" type:"list"` - - // The number of Lambda function associations for this cache behavior. + // An ARN of a CloudFront resource. // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // Resource is a required field + Resource *string `location:"querystring" locationName:"Resource" type:"string" required:"true"` } // String returns the string representation -func (s LambdaFunctionAssociations) String() string { +func (s ListTagsForResourceInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LambdaFunctionAssociations) GoString() string { +func (s ListTagsForResourceInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *LambdaFunctionAssociations) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "LambdaFunctionAssociations"} - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) +func (s *ListTagsForResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} + if s.Resource == nil { + invalidParams.Add(request.NewErrParamRequired("Resource")) } if invalidParams.Len() > 0 { @@ -7583,118 +11530,253 @@ func (s *LambdaFunctionAssociations) Validate() error { return nil } -// SetItems sets the Items field's value. -func (s *LambdaFunctionAssociations) SetItems(v []*LambdaFunctionAssociation) *LambdaFunctionAssociations { - s.Items = v - return s -} - -// SetQuantity sets the Quantity field's value. -func (s *LambdaFunctionAssociations) SetQuantity(v int64) *LambdaFunctionAssociations { - s.Quantity = &v +// SetResource sets the Resource field's value. +func (s *ListTagsForResourceInput) SetResource(v string) *ListTagsForResourceInput { + s.Resource = &v return s } -// The request to list origin access identities. -type ListCloudFrontOriginAccessIdentitiesInput struct { - _ struct{} `type:"structure"` - - // Use this when paginating results to indicate where to begin in your list - // of origin access identities. The results include identities in the list that - // occur after the marker. To get the next page of results, set the Marker to - // the value of the NextMarker from the current page's response (which is also - // the ID of the last identity on that page). - Marker *string `location:"querystring" locationName:"Marker" type:"string"` +// The returned result of the corresponding request. +type ListTagsForResourceOutput struct { + _ struct{} `type:"structure" payload:"Tags"` - // The maximum number of origin access identities you want in the response body. - MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` + // A complex type that contains zero or more Tag elements. + // + // Tags is a required field + Tags *Tags `type:"structure" required:"true"` } // String returns the string representation -func (s ListCloudFrontOriginAccessIdentitiesInput) String() string { +func (s ListTagsForResourceOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListCloudFrontOriginAccessIdentitiesInput) GoString() string { +func (s ListTagsForResourceOutput) GoString() string { return s.String() } -// SetMarker sets the Marker field's value. -func (s *ListCloudFrontOriginAccessIdentitiesInput) SetMarker(v string) *ListCloudFrontOriginAccessIdentitiesInput { - s.Marker = &v +// SetTags sets the Tags field's value. +func (s *ListTagsForResourceOutput) SetTags(v *Tags) *ListTagsForResourceOutput { + s.Tags = v return s } -// SetMaxItems sets the MaxItems field's value. -func (s *ListCloudFrontOriginAccessIdentitiesInput) SetMaxItems(v int64) *ListCloudFrontOriginAccessIdentitiesInput { - s.MaxItems = &v - return s -} +// A complex type that controls whether access logs are written for the distribution. +type LoggingConfig struct { + _ struct{} `type:"structure"` -// The returned result of the corresponding request. -type ListCloudFrontOriginAccessIdentitiesOutput struct { - _ struct{} `type:"structure" payload:"CloudFrontOriginAccessIdentityList"` + // The Amazon S3 bucket to store the access logs in, for example, myawslogbucket.s3.amazonaws.com. 
+ // + // Bucket is a required field + Bucket *string `type:"string" required:"true"` - // The CloudFrontOriginAccessIdentityList type. - CloudFrontOriginAccessIdentityList *OriginAccessIdentityList `type:"structure"` + // Specifies whether you want CloudFront to save access logs to an Amazon S3 + // bucket. If you don't want to enable logging when you create a distribution + // or if you want to disable logging for an existing distribution, specify false + // for Enabled, and specify empty Bucket and Prefix elements. If you specify + // false for Enabled but you specify values for Bucket, prefix, and IncludeCookies, + // the values are automatically deleted. + // + // Enabled is a required field + Enabled *bool `type:"boolean" required:"true"` + + // Specifies whether you want CloudFront to include cookies in access logs, + // specify true for IncludeCookies. If you choose to include cookies in logs, + // CloudFront logs all cookies regardless of how you configure the cache behaviors + // for this distribution. If you don't want to include cookies when you create + // a distribution or if you want to disable include cookies for an existing + // distribution, specify false for IncludeCookies. + // + // IncludeCookies is a required field + IncludeCookies *bool `type:"boolean" required:"true"` + + // An optional string that you want CloudFront to prefix to the access log filenames + // for this distribution, for example, myprefix/. If you want to enable logging, + // but you don't want to specify a prefix, you still must include an empty Prefix + // element in the Logging element. + // + // Prefix is a required field + Prefix *string `type:"string" required:"true"` } // String returns the string representation -func (s ListCloudFrontOriginAccessIdentitiesOutput) String() string { +func (s LoggingConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListCloudFrontOriginAccessIdentitiesOutput) GoString() string { +func (s LoggingConfig) GoString() string { return s.String() } -// SetCloudFrontOriginAccessIdentityList sets the CloudFrontOriginAccessIdentityList field's value. -func (s *ListCloudFrontOriginAccessIdentitiesOutput) SetCloudFrontOriginAccessIdentityList(v *OriginAccessIdentityList) *ListCloudFrontOriginAccessIdentitiesOutput { - s.CloudFrontOriginAccessIdentityList = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *LoggingConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LoggingConfig"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Enabled == nil { + invalidParams.Add(request.NewErrParamRequired("Enabled")) + } + if s.IncludeCookies == nil { + invalidParams.Add(request.NewErrParamRequired("IncludeCookies")) + } + if s.Prefix == nil { + invalidParams.Add(request.NewErrParamRequired("Prefix")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *LoggingConfig) SetBucket(v string) *LoggingConfig { + s.Bucket = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *LoggingConfig) SetEnabled(v bool) *LoggingConfig { + s.Enabled = &v + return s +} + +// SetIncludeCookies sets the IncludeCookies field's value. +func (s *LoggingConfig) SetIncludeCookies(v bool) *LoggingConfig { + s.IncludeCookies = &v + return s +} + +// SetPrefix sets the Prefix field's value. 
+func (s *LoggingConfig) SetPrefix(v string) *LoggingConfig { + s.Prefix = &v return s } -// The request to list distributions that are associated with a specified AWS -// WAF web ACL. -type ListDistributionsByWebACLIdInput struct { +// A complex type that describes the Amazon S3 bucket or the HTTP server (for +// example, a web server) from which CloudFront gets your files. You must create +// at least one origin. +// +// For the current limit on the number of origins that you can create for a +// distribution, see Amazon CloudFront Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_cloudfront) +// in the AWS General Reference. +type Origin struct { _ struct{} `type:"structure"` - // Use Marker and MaxItems to control pagination of results. If you have more - // than MaxItems distributions that satisfy the request, the response includes - // a NextMarker element. To get the next page of results, submit another request. - // For the value of Marker, specify the value of NextMarker from the last response. - // (For the first request, omit Marker.) - Marker *string `location:"querystring" locationName:"Marker" type:"string"` + // A complex type that contains names and values for the custom headers that + // you want. + CustomHeaders *CustomHeaders `type:"structure"` - // The maximum number of distributions that you want CloudFront to return in - // the response body. The maximum and default values are both 100. - MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` + // A complex type that contains information about a custom origin. If the origin + // is an Amazon S3 bucket, use the S3OriginConfig element instead. + CustomOriginConfig *CustomOriginConfig `type:"structure"` - // The ID of the AWS WAF web ACL that you want to list the associated distributions. - // If you specify "null" for the ID, the request returns a list of the distributions - // that aren't associated with a web ACL. + // Amazon S3 origins: The DNS name of the Amazon S3 bucket from which you want + // CloudFront to get objects for this origin, for example, myawsbucket.s3.amazonaws.com. + // If you set up your bucket to be configured as a website endpoint, enter the + // Amazon S3 static website hosting endpoint for the bucket. // - // WebACLId is a required field - WebACLId *string `location:"uri" locationName:"WebACLId" type:"string" required:"true"` + // Constraints for Amazon S3 origins: + // + // * If you configured Amazon S3 Transfer Acceleration for your bucket, don't + // specify the s3-accelerate endpoint for DomainName. + // + // * The bucket name must be between 3 and 63 characters long (inclusive). + // + // * The bucket name must contain only lowercase characters, numbers, periods, + // underscores, and dashes. + // + // * The bucket name must not contain adjacent periods. + // + // Custom Origins: The DNS domain name for the HTTP server from which you want + // CloudFront to get objects for this origin, for example, www.example.com. + // + // Constraints for custom origins: + // + // * DomainName must be a valid DNS name that contains only a-z, A-Z, 0-9, + // dot (.), hyphen (-), or underscore (_) characters. + // + // * The name cannot exceed 128 characters. + // + // DomainName is a required field + DomainName *string `type:"string" required:"true"` + + // A unique identifier for the origin. The value of Id must be unique within + // the distribution. 
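A short sketch of populating the `LoggingConfig` type documented above. All four fields are required even when logging is disabled, which is why the "off" case still sets empty `Bucket` and `Prefix` values; the bucket name and prefix are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	// Logging enabled: bucket and prefix are placeholder values.
	enabled := &cloudfront.LoggingConfig{
		Bucket:         aws.String("myawslogbucket.s3.amazonaws.com"),
		Enabled:        aws.Bool(true),
		IncludeCookies: aws.Bool(false),
		Prefix:         aws.String("myprefix/"),
	}

	// Logging disabled: Enabled is false, but Bucket and Prefix must still be
	// present (as empty strings), per the field documentation above.
	disabled := &cloudfront.LoggingConfig{
		Bucket:         aws.String(""),
		Enabled:        aws.Bool(false),
		IncludeCookies: aws.Bool(false),
		Prefix:         aws.String(""),
	}

	for _, lc := range []*cloudfront.LoggingConfig{enabled, disabled} {
		if err := lc.Validate(); err != nil {
			log.Fatal(err) // any nil required field is reported here
		}
	}
}
```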
+ // + // When you specify the value of TargetOriginId for the default cache behavior + // or for another cache behavior, you indicate the origin to which you want + // the cache behavior to route requests by specifying the value of the Id element + // for that origin. When a request matches the path pattern for that cache behavior, + // CloudFront routes the request to the specified origin. For more information, + // see Cache Behavior Settings (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesCacheBehavior) + // in the Amazon CloudFront Developer Guide. + // + // Id is a required field + Id *string `type:"string" required:"true"` + + // An optional element that causes CloudFront to request your content from a + // directory in your Amazon S3 bucket or your custom origin. When you include + // the OriginPath element, specify the directory name, beginning with a /. CloudFront + // appends the directory name to the value of DomainName, for example, example.com/production. + // Do not include a / at the end of the directory name. + // + // For example, suppose you've specified the following values for your distribution: + // + // * DomainName: An Amazon S3 bucket named myawsbucket. + // + // * OriginPath: /production + // + // * CNAME: example.com + // + // When a user enters example.com/index.html in a browser, CloudFront sends + // a request to Amazon S3 for myawsbucket/production/index.html. + // + // When a user enters example.com/acme/index.html in a browser, CloudFront sends + // a request to Amazon S3 for myawsbucket/production/acme/index.html. + OriginPath *string `type:"string"` + + // A complex type that contains information about the Amazon S3 origin. If the + // origin is a custom origin, use the CustomOriginConfig element instead. + S3OriginConfig *S3OriginConfig `type:"structure"` } // String returns the string representation -func (s ListDistributionsByWebACLIdInput) String() string { +func (s Origin) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListDistributionsByWebACLIdInput) GoString() string { +func (s Origin) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListDistributionsByWebACLIdInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListDistributionsByWebACLIdInput"} - if s.WebACLId == nil { - invalidParams.Add(request.NewErrParamRequired("WebACLId")) +func (s *Origin) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Origin"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.CustomHeaders != nil { + if err := s.CustomHeaders.Validate(); err != nil { + invalidParams.AddNested("CustomHeaders", err.(request.ErrInvalidParams)) + } + } + if s.CustomOriginConfig != nil { + if err := s.CustomOriginConfig.Validate(); err != nil { + invalidParams.AddNested("CustomOriginConfig", err.(request.ErrInvalidParams)) + } + } + if s.S3OriginConfig != nil { + if err := s.S3OriginConfig.Validate(); err != nil { + invalidParams.AddNested("S3OriginConfig", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -7703,148 +11785,137 @@ func (s *ListDistributionsByWebACLIdInput) Validate() error { return nil } -// SetMarker sets the Marker field's value. 
-func (s *ListDistributionsByWebACLIdInput) SetMarker(v string) *ListDistributionsByWebACLIdInput { - s.Marker = &v +// SetCustomHeaders sets the CustomHeaders field's value. +func (s *Origin) SetCustomHeaders(v *CustomHeaders) *Origin { + s.CustomHeaders = v return s } -// SetMaxItems sets the MaxItems field's value. -func (s *ListDistributionsByWebACLIdInput) SetMaxItems(v int64) *ListDistributionsByWebACLIdInput { - s.MaxItems = &v +// SetCustomOriginConfig sets the CustomOriginConfig field's value. +func (s *Origin) SetCustomOriginConfig(v *CustomOriginConfig) *Origin { + s.CustomOriginConfig = v return s } -// SetWebACLId sets the WebACLId field's value. -func (s *ListDistributionsByWebACLIdInput) SetWebACLId(v string) *ListDistributionsByWebACLIdInput { - s.WebACLId = &v +// SetDomainName sets the DomainName field's value. +func (s *Origin) SetDomainName(v string) *Origin { + s.DomainName = &v return s } -// The response to a request to list the distributions that are associated with -// a specified AWS WAF web ACL. -type ListDistributionsByWebACLIdOutput struct { - _ struct{} `type:"structure" payload:"DistributionList"` - - // The DistributionList type. - DistributionList *DistributionList `type:"structure"` -} - -// String returns the string representation -func (s ListDistributionsByWebACLIdOutput) String() string { - return awsutil.Prettify(s) +// SetId sets the Id field's value. +func (s *Origin) SetId(v string) *Origin { + s.Id = &v + return s } -// GoString returns the string representation -func (s ListDistributionsByWebACLIdOutput) GoString() string { - return s.String() +// SetOriginPath sets the OriginPath field's value. +func (s *Origin) SetOriginPath(v string) *Origin { + s.OriginPath = &v + return s } -// SetDistributionList sets the DistributionList field's value. -func (s *ListDistributionsByWebACLIdOutput) SetDistributionList(v *DistributionList) *ListDistributionsByWebACLIdOutput { - s.DistributionList = v +// SetS3OriginConfig sets the S3OriginConfig field's value. +func (s *Origin) SetS3OriginConfig(v *S3OriginConfig) *Origin { + s.S3OriginConfig = v return s } -// The request to list your distributions. -type ListDistributionsInput struct { +// CloudFront origin access identity. +type OriginAccessIdentity struct { _ struct{} `type:"structure"` - // Use this when paginating results to indicate where to begin in your list - // of distributions. The results include distributions in the list that occur - // after the marker. To get the next page of results, set the Marker to the - // value of the NextMarker from the current page's response (which is also the - // ID of the last distribution on that page). - Marker *string `location:"querystring" locationName:"Marker" type:"string"` + // The current configuration information for the identity. + CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `type:"structure"` - // The maximum number of distributions you want in the response body. - MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` + // The ID for the origin access identity, for example, E74FTE3AJFJ256A. + // + // Id is a required field + Id *string `type:"string" required:"true"` + + // The Amazon S3 canonical user ID for the origin access identity, used when + // giving the origin access identity read permission to an object in Amazon + // S3. 
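A sketch of an S3-backed `Origin` as described above. The bucket, path, and ID values are placeholders, and the `S3OriginConfig.OriginAccessIdentity` field is defined elsewhere in this file rather than in the hunk above:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	origin := &cloudfront.Origin{
		// Id must be unique within the distribution and is what cache behaviors
		// reference via TargetOriginId.
		Id:         aws.String("myS3Origin"),
		DomainName: aws.String("myawsbucket.s3.amazonaws.com"), // placeholder bucket
		// OriginPath begins with "/" and has no trailing "/".
		OriginPath: aws.String("/production"),
		// For an S3 origin, set S3OriginConfig (not CustomOriginConfig).
		// An empty OriginAccessIdentity means no OAI is attached.
		S3OriginConfig: &cloudfront.S3OriginConfig{
			OriginAccessIdentity: aws.String(""),
		},
	}
	if err := origin.Validate(); err != nil {
		// DomainName and Id are required; nested configs are validated as well.
		log.Fatal(err)
	}
}
```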
+ // + // S3CanonicalUserId is a required field + S3CanonicalUserId *string `type:"string" required:"true"` } // String returns the string representation -func (s ListDistributionsInput) String() string { +func (s OriginAccessIdentity) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListDistributionsInput) GoString() string { +func (s OriginAccessIdentity) GoString() string { return s.String() } -// SetMarker sets the Marker field's value. -func (s *ListDistributionsInput) SetMarker(v string) *ListDistributionsInput { - s.Marker = &v +// SetCloudFrontOriginAccessIdentityConfig sets the CloudFrontOriginAccessIdentityConfig field's value. +func (s *OriginAccessIdentity) SetCloudFrontOriginAccessIdentityConfig(v *OriginAccessIdentityConfig) *OriginAccessIdentity { + s.CloudFrontOriginAccessIdentityConfig = v return s } -// SetMaxItems sets the MaxItems field's value. -func (s *ListDistributionsInput) SetMaxItems(v int64) *ListDistributionsInput { - s.MaxItems = &v +// SetId sets the Id field's value. +func (s *OriginAccessIdentity) SetId(v string) *OriginAccessIdentity { + s.Id = &v return s } -// The returned result of the corresponding request. -type ListDistributionsOutput struct { - _ struct{} `type:"structure" payload:"DistributionList"` - - // The DistributionList type. - DistributionList *DistributionList `type:"structure"` -} - -// String returns the string representation -func (s ListDistributionsOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s ListDistributionsOutput) GoString() string { - return s.String() -} - -// SetDistributionList sets the DistributionList field's value. -func (s *ListDistributionsOutput) SetDistributionList(v *DistributionList) *ListDistributionsOutput { - s.DistributionList = v +// SetS3CanonicalUserId sets the S3CanonicalUserId field's value. +func (s *OriginAccessIdentity) SetS3CanonicalUserId(v string) *OriginAccessIdentity { + s.S3CanonicalUserId = &v return s } -// The request to list invalidations. -type ListInvalidationsInput struct { +// Origin access identity configuration. Send a GET request to the /CloudFront +// API version/CloudFront/identity ID/config resource. +type OriginAccessIdentityConfig struct { _ struct{} `type:"structure"` - // The distribution's ID. + // A unique number that ensures the request can't be replayed. + // + // If the CallerReference is new (no matter the content of the CloudFrontOriginAccessIdentityConfig + // object), a new origin access identity is created. + // + // If the CallerReference is a value already sent in a previous identity request, + // and the content of the CloudFrontOriginAccessIdentityConfig is identical + // to the original request (ignoring white space), the response includes the + // same information returned to the original request. // - // DistributionId is a required field - DistributionId *string `location:"uri" locationName:"DistributionId" type:"string" required:"true"` - - // Use this parameter when paginating results to indicate where to begin in - // your list of invalidation batches. Because the results are returned in decreasing - // order from most recent to oldest, the most recent results are on the first - // page, the second page will contain earlier results, and so on. To get the - // next page of results, set Marker to the value of the NextMarker from the - // current page's response. This value is the same as the ID of the last invalidation - // batch on that page. 
- Marker *string `location:"querystring" locationName:"Marker" type:"string"` + // If the CallerReference is a value you already sent in a previous request + // to create an identity, but the content of the CloudFrontOriginAccessIdentityConfig + // is different from the original request, CloudFront returns a CloudFrontOriginAccessIdentityAlreadyExists + // error. + // + // CallerReference is a required field + CallerReference *string `type:"string" required:"true"` - // The maximum number of invalidation batches that you want in the response - // body. - MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` + // Any comments you want to include about the origin access identity. + // + // Comment is a required field + Comment *string `type:"string" required:"true"` } // String returns the string representation -func (s ListInvalidationsInput) String() string { +func (s OriginAccessIdentityConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListInvalidationsInput) GoString() string { +func (s OriginAccessIdentityConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListInvalidationsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListInvalidationsInput"} - if s.DistributionId == nil { - invalidParams.Add(request.NewErrParamRequired("DistributionId")) +func (s *OriginAccessIdentityConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OriginAccessIdentityConfig"} + if s.CallerReference == nil { + invalidParams.Add(request.NewErrParamRequired("CallerReference")) + } + if s.Comment == nil { + invalidParams.Add(request.NewErrParamRequired("Comment")) } if invalidParams.Len() > 0 { @@ -7853,232 +11924,254 @@ func (s *ListInvalidationsInput) Validate() error { return nil } -// SetDistributionId sets the DistributionId field's value. -func (s *ListInvalidationsInput) SetDistributionId(v string) *ListInvalidationsInput { - s.DistributionId = &v +// SetCallerReference sets the CallerReference field's value. +func (s *OriginAccessIdentityConfig) SetCallerReference(v string) *OriginAccessIdentityConfig { + s.CallerReference = &v return s } -// SetMarker sets the Marker field's value. -func (s *ListInvalidationsInput) SetMarker(v string) *ListInvalidationsInput { - s.Marker = &v +// SetComment sets the Comment field's value. +func (s *OriginAccessIdentityConfig) SetComment(v string) *OriginAccessIdentityConfig { + s.Comment = &v return s } -// SetMaxItems sets the MaxItems field's value. -func (s *ListInvalidationsInput) SetMaxItems(v int64) *ListInvalidationsInput { - s.MaxItems = &v - return s -} +// Lists the origin access identities for CloudFront.Send a GET request to the +// /CloudFront API version/origin-access-identity/cloudfront resource. The response +// includes a CloudFrontOriginAccessIdentityList element with zero or more CloudFrontOriginAccessIdentitySummary +// child elements. By default, your entire list of origin access identities +// is returned in one single page. If the list is long, you can paginate it +// using the MaxItems and Marker parameters. +type OriginAccessIdentityList struct { + _ struct{} `type:"structure"` -// The returned result of the corresponding request. -type ListInvalidationsOutput struct { - _ struct{} `type:"structure" payload:"InvalidationList"` + // A flag that indicates whether more origin access identities remain to be + // listed. 
If your results were truncated, you can make a follow-up pagination + // request using the Marker request parameter to retrieve more items in the + // list. + // + // IsTruncated is a required field + IsTruncated *bool `type:"boolean" required:"true"` - // Information about invalidation batches. - InvalidationList *InvalidationList `type:"structure"` + // A complex type that contains one CloudFrontOriginAccessIdentitySummary element + // for each origin access identity that was created by the current AWS account. + Items []*OriginAccessIdentitySummary `locationNameList:"CloudFrontOriginAccessIdentitySummary" type:"list"` + + // Use this when paginating results to indicate where to begin in your list + // of origin access identities. The results include identities in the list that + // occur after the marker. To get the next page of results, set the Marker to + // the value of the NextMarker from the current page's response (which is also + // the ID of the last identity on that page). + // + // Marker is a required field + Marker *string `type:"string" required:"true"` + + // The maximum number of origin access identities you want in the response body. + // + // MaxItems is a required field + MaxItems *int64 `type:"integer" required:"true"` + + // If IsTruncated is true, this element is present and contains the value you + // can use for the Marker request parameter to continue listing your origin + // access identities where they left off. + NextMarker *string `type:"string"` + + // The number of CloudFront origin access identities that were created by the + // current AWS account. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s ListInvalidationsOutput) String() string { +func (s OriginAccessIdentityList) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListInvalidationsOutput) GoString() string { +func (s OriginAccessIdentityList) GoString() string { return s.String() } -// SetInvalidationList sets the InvalidationList field's value. -func (s *ListInvalidationsOutput) SetInvalidationList(v *InvalidationList) *ListInvalidationsOutput { - s.InvalidationList = v +// SetIsTruncated sets the IsTruncated field's value. +func (s *OriginAccessIdentityList) SetIsTruncated(v bool) *OriginAccessIdentityList { + s.IsTruncated = &v return s } -// The request to list your streaming distributions. -type ListStreamingDistributionsInput struct { - _ struct{} `type:"structure"` - - // The value that you provided for the Marker request parameter. - Marker *string `location:"querystring" locationName:"Marker" type:"string"` - - // The value that you provided for the MaxItems request parameter. - MaxItems *int64 `location:"querystring" locationName:"MaxItems" type:"integer"` -} - -// String returns the string representation -func (s ListStreamingDistributionsInput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s ListStreamingDistributionsInput) GoString() string { - return s.String() +// SetItems sets the Items field's value. +func (s *OriginAccessIdentityList) SetItems(v []*OriginAccessIdentitySummary) *OriginAccessIdentityList { + s.Items = v + return s } // SetMarker sets the Marker field's value. 
-func (s *ListStreamingDistributionsInput) SetMarker(v string) *ListStreamingDistributionsInput { +func (s *OriginAccessIdentityList) SetMarker(v string) *OriginAccessIdentityList { s.Marker = &v return s } // SetMaxItems sets the MaxItems field's value. -func (s *ListStreamingDistributionsInput) SetMaxItems(v int64) *ListStreamingDistributionsInput { +func (s *OriginAccessIdentityList) SetMaxItems(v int64) *OriginAccessIdentityList { s.MaxItems = &v return s } -// The returned result of the corresponding request. -type ListStreamingDistributionsOutput struct { - _ struct{} `type:"structure" payload:"StreamingDistributionList"` +// SetNextMarker sets the NextMarker field's value. +func (s *OriginAccessIdentityList) SetNextMarker(v string) *OriginAccessIdentityList { + s.NextMarker = &v + return s +} - // The StreamingDistributionList type. - StreamingDistributionList *StreamingDistributionList `type:"structure"` +// SetQuantity sets the Quantity field's value. +func (s *OriginAccessIdentityList) SetQuantity(v int64) *OriginAccessIdentityList { + s.Quantity = &v + return s +} + +// Summary of the information about a CloudFront origin access identity. +type OriginAccessIdentitySummary struct { + _ struct{} `type:"structure"` + + // The comment for this origin access identity, as originally specified when + // created. + // + // Comment is a required field + Comment *string `type:"string" required:"true"` + + // The ID for the origin access identity. For example: E74FTE3AJFJ256A. + // + // Id is a required field + Id *string `type:"string" required:"true"` + + // The Amazon S3 canonical user ID for the origin access identity, which you + // use when giving the origin access identity read permission to an object in + // Amazon S3. + // + // S3CanonicalUserId is a required field + S3CanonicalUserId *string `type:"string" required:"true"` } // String returns the string representation -func (s ListStreamingDistributionsOutput) String() string { +func (s OriginAccessIdentitySummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListStreamingDistributionsOutput) GoString() string { +func (s OriginAccessIdentitySummary) GoString() string { return s.String() } -// SetStreamingDistributionList sets the StreamingDistributionList field's value. -func (s *ListStreamingDistributionsOutput) SetStreamingDistributionList(v *StreamingDistributionList) *ListStreamingDistributionsOutput { - s.StreamingDistributionList = v +// SetComment sets the Comment field's value. +func (s *OriginAccessIdentitySummary) SetComment(v string) *OriginAccessIdentitySummary { + s.Comment = &v return s } -// The request to list tags for a CloudFront resource. -type ListTagsForResourceInput struct { +// SetId sets the Id field's value. +func (s *OriginAccessIdentitySummary) SetId(v string) *OriginAccessIdentitySummary { + s.Id = &v + return s +} + +// SetS3CanonicalUserId sets the S3CanonicalUserId field's value. +func (s *OriginAccessIdentitySummary) SetS3CanonicalUserId(v string) *OriginAccessIdentitySummary { + s.S3CanonicalUserId = &v + return s +} + +// A complex type that contains HeaderName and HeaderValue elements, if any, +// for this distribution. +type OriginCustomHeader struct { _ struct{} `type:"structure"` - // An ARN of a CloudFront resource. + // The name of a header that you want CloudFront to forward to your origin. 
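The `IsTruncated`/`NextMarker` fields above drive manual pagination. A sketch, with placeholder session wiring, of walking the full origin access identity list the way the field documentation describes:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	svc := cloudfront.New(session.Must(session.NewSession())) // placeholder session

	input := &cloudfront.ListCloudFrontOriginAccessIdentitiesInput{
		MaxItems: aws.Int64(100),
	}
	for {
		out, err := svc.ListCloudFrontOriginAccessIdentities(input)
		if err != nil {
			log.Fatal(err)
		}
		list := out.CloudFrontOriginAccessIdentityList
		for _, summary := range list.Items {
			fmt.Println(aws.StringValue(summary.Id), aws.StringValue(summary.Comment))
		}
		// IsTruncated says whether another page exists; NextMarker is the
		// Marker value to send to fetch that page.
		if !aws.BoolValue(list.IsTruncated) {
			break
		}
		input.Marker = list.NextMarker
	}
}
```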
+ // For more information, see Forwarding Custom Headers to Your Origin (Web Distributions + // Only) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html) + // in the Amazon Amazon CloudFront Developer Guide. // - // Resource is a required field - Resource *string `location:"querystring" locationName:"Resource" type:"string" required:"true"` + // HeaderName is a required field + HeaderName *string `type:"string" required:"true"` + + // The value for the header that you specified in the HeaderName field. + // + // HeaderValue is a required field + HeaderValue *string `type:"string" required:"true"` } // String returns the string representation -func (s ListTagsForResourceInput) String() string { +func (s OriginCustomHeader) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTagsForResourceInput) GoString() string { +func (s OriginCustomHeader) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListTagsForResourceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} - if s.Resource == nil { - invalidParams.Add(request.NewErrParamRequired("Resource")) +func (s *OriginCustomHeader) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OriginCustomHeader"} + if s.HeaderName == nil { + invalidParams.Add(request.NewErrParamRequired("HeaderName")) } - - if invalidParams.Len() > 0 { - return invalidParams + if s.HeaderValue == nil { + invalidParams.Add(request.NewErrParamRequired("HeaderValue")) } - return nil -} - -// SetResource sets the Resource field's value. -func (s *ListTagsForResourceInput) SetResource(v string) *ListTagsForResourceInput { - s.Resource = &v - return s -} - -// The returned result of the corresponding request. -type ListTagsForResourceOutput struct { - _ struct{} `type:"structure" payload:"Tags"` - - // A complex type that contains zero or more Tag elements. - // - // Tags is a required field - Tags *Tags `type:"structure" required:"true"` -} -// String returns the string representation -func (s ListTagsForResourceOutput) String() string { - return awsutil.Prettify(s) + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// GoString returns the string representation -func (s ListTagsForResourceOutput) GoString() string { - return s.String() +// SetHeaderName sets the HeaderName field's value. +func (s *OriginCustomHeader) SetHeaderName(v string) *OriginCustomHeader { + s.HeaderName = &v + return s } -// SetTags sets the Tags field's value. -func (s *ListTagsForResourceOutput) SetTags(v *Tags) *ListTagsForResourceOutput { - s.Tags = v +// SetHeaderValue sets the HeaderValue field's value. +func (s *OriginCustomHeader) SetHeaderValue(v string) *OriginCustomHeader { + s.HeaderValue = &v return s } -// A complex type that controls whether access logs are written for the distribution. -type LoggingConfig struct { +// A complex type that contains information about the SSL/TLS protocols that +// CloudFront can use when establishing an HTTPS connection with your origin. +type OriginSslProtocols struct { _ struct{} `type:"structure"` - // The Amazon S3 bucket to store the access logs in, for example, myawslogbucket.s3.amazonaws.com. - // - // Bucket is a required field - Bucket *string `type:"string" required:"true"` - - // Specifies whether you want CloudFront to save access logs to an Amazon S3 - // bucket. 
If you don't want to enable logging when you create a distribution - // or if you want to disable logging for an existing distribution, specify false - // for Enabled, and specify empty Bucket and Prefix elements. If you specify - // false for Enabled but you specify values for Bucket, prefix, and IncludeCookies, - // the values are automatically deleted. - // - // Enabled is a required field - Enabled *bool `type:"boolean" required:"true"` - - // Specifies whether you want CloudFront to include cookies in access logs, - // specify true for IncludeCookies. If you choose to include cookies in logs, - // CloudFront logs all cookies regardless of how you configure the cache behaviors - // for this distribution. If you don't want to include cookies when you create - // a distribution or if you want to disable include cookies for an existing - // distribution, specify false for IncludeCookies. + // A list that contains allowed SSL/TLS protocols for this distribution. // - // IncludeCookies is a required field - IncludeCookies *bool `type:"boolean" required:"true"` + // Items is a required field + Items []*string `locationNameList:"SslProtocol" type:"list" required:"true"` - // An optional string that you want CloudFront to prefix to the access log filenames - // for this distribution, for example, myprefix/. If you want to enable logging, - // but you don't want to specify a prefix, you still must include an empty Prefix - // element in the Logging element. + // The number of SSL/TLS protocols that you want to allow CloudFront to use + // when establishing an HTTPS connection with this origin. // - // Prefix is a required field - Prefix *string `type:"string" required:"true"` + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s LoggingConfig) String() string { +func (s OriginSslProtocols) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoggingConfig) GoString() string { +func (s OriginSslProtocols) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *LoggingConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "LoggingConfig"} - if s.Bucket == nil { - invalidParams.Add(request.NewErrParamRequired("Bucket")) - } - if s.Enabled == nil { - invalidParams.Add(request.NewErrParamRequired("Enabled")) - } - if s.IncludeCookies == nil { - invalidParams.Add(request.NewErrParamRequired("IncludeCookies")) +func (s *OriginSslProtocols) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OriginSslProtocols"} + if s.Items == nil { + invalidParams.Add(request.NewErrParamRequired("Items")) } - if s.Prefix == nil { - invalidParams.Add(request.NewErrParamRequired("Prefix")) + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) } if invalidParams.Len() > 0 { @@ -8087,148 +12180,58 @@ func (s *LoggingConfig) Validate() error { return nil } -// SetBucket sets the Bucket field's value. -func (s *LoggingConfig) SetBucket(v string) *LoggingConfig { - s.Bucket = &v - return s -} - -// SetEnabled sets the Enabled field's value. -func (s *LoggingConfig) SetEnabled(v bool) *LoggingConfig { - s.Enabled = &v - return s -} - -// SetIncludeCookies sets the IncludeCookies field's value. -func (s *LoggingConfig) SetIncludeCookies(v bool) *LoggingConfig { - s.IncludeCookies = &v +// SetItems sets the Items field's value. 
+func (s *OriginSslProtocols) SetItems(v []*string) *OriginSslProtocols { + s.Items = v return s } -// SetPrefix sets the Prefix field's value. -func (s *LoggingConfig) SetPrefix(v string) *LoggingConfig { - s.Prefix = &v +// SetQuantity sets the Quantity field's value. +func (s *OriginSslProtocols) SetQuantity(v int64) *OriginSslProtocols { + s.Quantity = &v return s } -// A complex type that describes the Amazon S3 bucket or the HTTP server (for -// example, a web server) from which CloudFront gets your files. You must create -// at least one origin. -// -// For the current limit on the number of origins that you can create for a -// distribution, see Amazon CloudFront Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_cloudfront) -// in the AWS General Reference. -type Origin struct { +// A complex type that contains information about origins for this distribution. +type Origins struct { _ struct{} `type:"structure"` - // A complex type that contains names and values for the custom headers that - // you want. - CustomHeaders *CustomHeaders `type:"structure"` - - // A complex type that contains information about a custom origin. If the origin - // is an Amazon S3 bucket, use the S3OriginConfig element instead. - CustomOriginConfig *CustomOriginConfig `type:"structure"` - - // Amazon S3 origins: The DNS name of the Amazon S3 bucket from which you want - // CloudFront to get objects for this origin, for example, myawsbucket.s3.amazonaws.com. - // - // Constraints for Amazon S3 origins: - // - // * If you configured Amazon S3 Transfer Acceleration for your bucket, don't - // specify the s3-accelerate endpoint for DomainName. - // - // * The bucket name must be between 3 and 63 characters long (inclusive). - // - // * The bucket name must contain only lowercase characters, numbers, periods, - // underscores, and dashes. - // - // * The bucket name must not contain adjacent periods. - // - // Custom Origins: The DNS domain name for the HTTP server from which you want - // CloudFront to get objects for this origin, for example, www.example.com. - // - // Constraints for custom origins: - // - // * DomainName must be a valid DNS name that contains only a-z, A-Z, 0-9, - // dot (.), hyphen (-), or underscore (_) characters. - // - // * The name cannot exceed 128 characters. - // - // DomainName is a required field - DomainName *string `type:"string" required:"true"` - - // A unique identifier for the origin. The value of Id must be unique within - // the distribution. - // - // When you specify the value of TargetOriginId for the default cache behavior - // or for another cache behavior, you indicate the origin to which you want - // the cache behavior to route requests by specifying the value of the Id element - // for that origin. When a request matches the path pattern for that cache behavior, - // CloudFront routes the request to the specified origin. For more information, - // see Cache Behavior Settings (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesCacheBehavior) - // in the Amazon CloudFront Developer Guide. - // - // Id is a required field - Id *string `type:"string" required:"true"` + // A complex type that contains origins for this distribution. + Items []*Origin `locationNameList:"Origin" min:"1" type:"list"` - // An optional element that causes CloudFront to request your content from a - // directory in your Amazon S3 bucket or your custom origin. 
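The `OriginSslProtocols` type above uses the same `Items`/`Quantity` pairing as the other list types. A sketch follows; the protocol strings are illustrative values only, and the accepted set is defined by the `SslProtocol` enum elsewhere in this file:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	protocols := []string{"TLSv1.1", "TLSv1.2"} // illustrative protocol names
	ssl := &cloudfront.OriginSslProtocols{
		Items:    aws.StringSlice(protocols),
		Quantity: aws.Int64(int64(len(protocols))), // Quantity mirrors len(Items)
	}
	if err := ssl.Validate(); err != nil {
		log.Fatal(err) // both Items and Quantity are required here
	}
}
```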
When you include - // the OriginPath element, specify the directory name, beginning with a /. CloudFront - // appends the directory name to the value of DomainName, for example, example.com/production. - // Do not include a / at the end of the directory name. - // - // For example, suppose you've specified the following values for your distribution: - // - // * DomainName: An Amazon S3 bucket named myawsbucket. - // - // * OriginPath: /production - // - // * CNAME: example.com - // - // When a user enters example.com/index.html in a browser, CloudFront sends - // a request to Amazon S3 for myawsbucket/production/index.html. + // The number of origins for this distribution. // - // When a user enters example.com/acme/index.html in a browser, CloudFront sends - // a request to Amazon S3 for myawsbucket/production/acme/index.html. - OriginPath *string `type:"string"` - - // A complex type that contains information about the Amazon S3 origin. If the - // origin is a custom origin, use the CustomOriginConfig element instead. - S3OriginConfig *S3OriginConfig `type:"structure"` + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s Origin) String() string { +func (s Origins) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Origin) GoString() string { +func (s Origins) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *Origin) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Origin"} - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) - } - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) - } - if s.CustomHeaders != nil { - if err := s.CustomHeaders.Validate(); err != nil { - invalidParams.AddNested("CustomHeaders", err.(request.ErrInvalidParams)) - } +func (s *Origins) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Origins"} + if s.Items != nil && len(s.Items) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Items", 1)) } - if s.CustomOriginConfig != nil { - if err := s.CustomOriginConfig.Validate(); err != nil { - invalidParams.AddNested("CustomOriginConfig", err.(request.ErrInvalidParams)) - } + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) } - if s.S3OriginConfig != nil { - if err := s.S3OriginConfig.Validate(); err != nil { - invalidParams.AddNested("S3OriginConfig", err.(request.ErrInvalidParams)) + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } } } @@ -8238,137 +12241,166 @@ func (s *Origin) Validate() error { return nil } -// SetCustomHeaders sets the CustomHeaders field's value. -func (s *Origin) SetCustomHeaders(v *CustomHeaders) *Origin { - s.CustomHeaders = v +// SetItems sets the Items field's value. +func (s *Origins) SetItems(v []*Origin) *Origins { + s.Items = v + return s +} + +// SetQuantity sets the Quantity field's value. +func (s *Origins) SetQuantity(v int64) *Origins { + s.Quantity = &v return s } -// SetCustomOriginConfig sets the CustomOriginConfig field's value. 
-func (s *Origin) SetCustomOriginConfig(v *CustomOriginConfig) *Origin { - s.CustomOriginConfig = v - return s +// A complex type that contains information about the objects that you want +// to invalidate. For more information, see Specifying the Objects to Invalidate +// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html#invalidation-specifying-objects) +// in the Amazon CloudFront Developer Guide. +type Paths struct { + _ struct{} `type:"structure"` + + // A complex type that contains a list of the paths that you want to invalidate. + Items []*string `locationNameList:"Path" type:"list"` + + // The number of objects that you want to invalidate. + // + // Quantity is a required field + Quantity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s Paths) String() string { + return awsutil.Prettify(s) } -// SetDomainName sets the DomainName field's value. -func (s *Origin) SetDomainName(v string) *Origin { - s.DomainName = &v - return s +// GoString returns the string representation +func (s Paths) GoString() string { + return s.String() } -// SetId sets the Id field's value. -func (s *Origin) SetId(v string) *Origin { - s.Id = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *Paths) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Paths"} + if s.Quantity == nil { + invalidParams.Add(request.NewErrParamRequired("Quantity")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetOriginPath sets the OriginPath field's value. -func (s *Origin) SetOriginPath(v string) *Origin { - s.OriginPath = &v +// SetItems sets the Items field's value. +func (s *Paths) SetItems(v []*string) *Paths { + s.Items = v return s } -// SetS3OriginConfig sets the S3OriginConfig field's value. -func (s *Origin) SetS3OriginConfig(v *S3OriginConfig) *Origin { - s.S3OriginConfig = v +// SetQuantity sets the Quantity field's value. +func (s *Paths) SetQuantity(v int64) *Paths { + s.Quantity = &v return s } -// CloudFront origin access identity. -type OriginAccessIdentity struct { +// A complex data type of public keys you add to CloudFront to use with features +// like field-level encryption. +type PublicKey struct { _ struct{} `type:"structure"` - // The current configuration information for the identity. - CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `type:"structure"` + // A time you added a public key to CloudFront. + // + // CreatedTime is a required field + CreatedTime *time.Time `type:"timestamp" required:"true"` - // The ID for the origin access identity, for example, E74FTE3AJFJ256A. + // A unique ID assigned to a public key you've added to CloudFront. // // Id is a required field Id *string `type:"string" required:"true"` - // The Amazon S3 canonical user ID for the origin access identity, used when - // giving the origin access identity read permission to an object in Amazon - // S3. + // A complex data type for a public key you add to CloudFront to use with features + // like field-level encryption. 
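Both `Origins` and `Paths` above follow the same CloudFront list convention: `Quantity` must state how many entries `Items` carries. A sketch of building each; the path and origin values are placeholders, and the `Origin` literal reuses only fields shown earlier in this file:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudfront"
)

func main() {
	originItems := []*cloudfront.Origin{
		{
			Id:         aws.String("myS3Origin"),                   // placeholder
			DomainName: aws.String("myawsbucket.s3.amazonaws.com"), // placeholder
			S3OriginConfig: &cloudfront.S3OriginConfig{
				OriginAccessIdentity: aws.String(""),
			},
		},
	}
	origins := &cloudfront.Origins{
		Items:    originItems,
		Quantity: aws.Int64(int64(len(originItems))), // Quantity mirrors len(Items)
	}

	pathItems := aws.StringSlice([]string{"/index.html", "/images/*"}) // placeholder paths
	paths := &cloudfront.Paths{
		Items:    pathItems,
		Quantity: aws.Int64(int64(len(pathItems))),
	}

	if err := origins.Validate(); err != nil {
		// Quantity is required; a non-nil Items list must hold at least one
		// entry, and each Origin is validated in turn.
		log.Fatal(err)
	}
	if err := paths.Validate(); err != nil {
		log.Fatal(err) // Paths only requires Quantity; Items may be empty
	}
}
```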
// - // S3CanonicalUserId is a required field - S3CanonicalUserId *string `type:"string" required:"true"` + // PublicKeyConfig is a required field + PublicKeyConfig *PublicKeyConfig `type:"structure" required:"true"` } // String returns the string representation -func (s OriginAccessIdentity) String() string { +func (s PublicKey) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OriginAccessIdentity) GoString() string { +func (s PublicKey) GoString() string { return s.String() } -// SetCloudFrontOriginAccessIdentityConfig sets the CloudFrontOriginAccessIdentityConfig field's value. -func (s *OriginAccessIdentity) SetCloudFrontOriginAccessIdentityConfig(v *OriginAccessIdentityConfig) *OriginAccessIdentity { - s.CloudFrontOriginAccessIdentityConfig = v +// SetCreatedTime sets the CreatedTime field's value. +func (s *PublicKey) SetCreatedTime(v time.Time) *PublicKey { + s.CreatedTime = &v return s } // SetId sets the Id field's value. -func (s *OriginAccessIdentity) SetId(v string) *OriginAccessIdentity { +func (s *PublicKey) SetId(v string) *PublicKey { s.Id = &v return s } -// SetS3CanonicalUserId sets the S3CanonicalUserId field's value. -func (s *OriginAccessIdentity) SetS3CanonicalUserId(v string) *OriginAccessIdentity { - s.S3CanonicalUserId = &v +// SetPublicKeyConfig sets the PublicKeyConfig field's value. +func (s *PublicKey) SetPublicKeyConfig(v *PublicKeyConfig) *PublicKey { + s.PublicKeyConfig = v return s } -// Origin access identity configuration. Send a GET request to the /CloudFront -// API version/CloudFront/identity ID/config resource. -type OriginAccessIdentityConfig struct { +// Information about a public key you add to CloudFront to use with features +// like field-level encryption. +type PublicKeyConfig struct { _ struct{} `type:"structure"` // A unique number that ensures the request can't be replayed. // - // If the CallerReference is new (no matter the content of the CloudFrontOriginAccessIdentityConfig - // object), a new origin access identity is created. - // - // If the CallerReference is a value already sent in a previous identity request, - // and the content of the CloudFrontOriginAccessIdentityConfig is identical - // to the original request (ignoring white space), the response includes the - // same information returned to the original request. - // - // If the CallerReference is a value you already sent in a previous request - // to create an identity, but the content of the CloudFrontOriginAccessIdentityConfig - // is different from the original request, CloudFront returns a CloudFrontOriginAccessIdentityAlreadyExists - // error. - // // CallerReference is a required field CallerReference *string `type:"string" required:"true"` - // Any comments you want to include about the origin access identity. + // An optional comment about a public key. + Comment *string `type:"string"` + + // The encoded public key that you want to add to CloudFront to use with features + // like field-level encryption. // - // Comment is a required field - Comment *string `type:"string" required:"true"` + // EncodedKey is a required field + EncodedKey *string `type:"string" required:"true"` + + // The name for a public key you add to CloudFront to use with features like + // field-level encryption. 
+ // + // Name is a required field + Name *string `type:"string" required:"true"` } // String returns the string representation -func (s OriginAccessIdentityConfig) String() string { +func (s PublicKeyConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OriginAccessIdentityConfig) GoString() string { +func (s PublicKeyConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *OriginAccessIdentityConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OriginAccessIdentityConfig"} +func (s *PublicKeyConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PublicKeyConfig"} if s.CallerReference == nil { invalidParams.Add(request.NewErrParamRequired("CallerReference")) } - if s.Comment == nil { - invalidParams.Add(request.NewErrParamRequired("Comment")) + if s.EncodedKey == nil { + invalidParams.Add(request.NewErrParamRequired("EncodedKey")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) } if invalidParams.Len() > 0 { @@ -8378,253 +12410,190 @@ func (s *OriginAccessIdentityConfig) Validate() error { } // SetCallerReference sets the CallerReference field's value. -func (s *OriginAccessIdentityConfig) SetCallerReference(v string) *OriginAccessIdentityConfig { +func (s *PublicKeyConfig) SetCallerReference(v string) *PublicKeyConfig { s.CallerReference = &v return s } // SetComment sets the Comment field's value. -func (s *OriginAccessIdentityConfig) SetComment(v string) *OriginAccessIdentityConfig { +func (s *PublicKeyConfig) SetComment(v string) *PublicKeyConfig { s.Comment = &v return s } -// Lists the origin access identities for CloudFront.Send a GET request to the -// /CloudFront API version/origin-access-identity/cloudfront resource. The response -// includes a CloudFrontOriginAccessIdentityList element with zero or more CloudFrontOriginAccessIdentitySummary -// child elements. By default, your entire list of origin access identities -// is returned in one single page. If the list is long, you can paginate it -// using the MaxItems and Marker parameters. -type OriginAccessIdentityList struct { - _ struct{} `type:"structure"` +// SetEncodedKey sets the EncodedKey field's value. +func (s *PublicKeyConfig) SetEncodedKey(v string) *PublicKeyConfig { + s.EncodedKey = &v + return s +} - // A flag that indicates whether more origin access identities remain to be - // listed. If your results were truncated, you can make a follow-up pagination - // request using the Marker request parameter to retrieve more items in the - // list. - // - // IsTruncated is a required field - IsTruncated *bool `type:"boolean" required:"true"` +// SetName sets the Name field's value. +func (s *PublicKeyConfig) SetName(v string) *PublicKeyConfig { + s.Name = &v + return s +} - // A complex type that contains one CloudFrontOriginAccessIdentitySummary element - // for each origin access identity that was created by the current AWS account. - Items []*OriginAccessIdentitySummary `locationNameList:"CloudFrontOriginAccessIdentitySummary" type:"list"` +// A list of public keys you've added to CloudFront to use with features like +// field-level encryption. +type PublicKeyList struct { + _ struct{} `type:"structure"` - // Use this when paginating results to indicate where to begin in your list - // of origin access identities. The results include identities in the list that - // occur after the marker. 
To get the next page of results, set the Marker to - // the value of the NextMarker from the current page's response (which is also - // the ID of the last identity on that page). - // - // Marker is a required field - Marker *string `type:"string" required:"true"` + // An array of information about a public key you add to CloudFront to use with + // features like field-level encryption. + Items []*PublicKeySummary `locationNameList:"PublicKeySummary" type:"list"` - // The maximum number of origin access identities you want in the response body. + // The maximum number of public keys you want in the response body. // // MaxItems is a required field MaxItems *int64 `type:"integer" required:"true"` - // If IsTruncated is true, this element is present and contains the value you - // can use for the Marker request parameter to continue listing your origin - // access identities where they left off. + // If there are more elements to be listed, this element is present and contains + // the value that you can use for the Marker request parameter to continue listing + // your public keys where you left off. NextMarker *string `type:"string"` - // The number of CloudFront origin access identities that were created by the - // current AWS account. + // The number of public keys you added to CloudFront to use with features like + // field-level encryption. // // Quantity is a required field Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s OriginAccessIdentityList) String() string { +func (s PublicKeyList) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OriginAccessIdentityList) GoString() string { +func (s PublicKeyList) GoString() string { return s.String() } -// SetIsTruncated sets the IsTruncated field's value. -func (s *OriginAccessIdentityList) SetIsTruncated(v bool) *OriginAccessIdentityList { - s.IsTruncated = &v - return s -} - // SetItems sets the Items field's value. -func (s *OriginAccessIdentityList) SetItems(v []*OriginAccessIdentitySummary) *OriginAccessIdentityList { +func (s *PublicKeyList) SetItems(v []*PublicKeySummary) *PublicKeyList { s.Items = v return s } -// SetMarker sets the Marker field's value. -func (s *OriginAccessIdentityList) SetMarker(v string) *OriginAccessIdentityList { - s.Marker = &v - return s -} - // SetMaxItems sets the MaxItems field's value. -func (s *OriginAccessIdentityList) SetMaxItems(v int64) *OriginAccessIdentityList { +func (s *PublicKeyList) SetMaxItems(v int64) *PublicKeyList { s.MaxItems = &v return s } // SetNextMarker sets the NextMarker field's value. -func (s *OriginAccessIdentityList) SetNextMarker(v string) *OriginAccessIdentityList { +func (s *PublicKeyList) SetNextMarker(v string) *PublicKeyList { s.NextMarker = &v return s } // SetQuantity sets the Quantity field's value. -func (s *OriginAccessIdentityList) SetQuantity(v int64) *OriginAccessIdentityList { +func (s *PublicKeyList) SetQuantity(v int64) *PublicKeyList { s.Quantity = &v return s } -// Summary of the information about a CloudFront origin access identity. -type OriginAccessIdentitySummary struct { +// Public key information summary. +type PublicKeySummary struct { _ struct{} `type:"structure"` - // The comment for this origin access identity, as originally specified when - // created. + // Comment for public key information summary. + Comment *string `type:"string"` + + // Creation time for public key information summary. 
// - // Comment is a required field - Comment *string `type:"string" required:"true"` + // CreatedTime is a required field + CreatedTime *time.Time `type:"timestamp" required:"true"` - // The ID for the origin access identity. For example: E74FTE3AJFJ256A. + // Encoded key for public key information summary. + // + // EncodedKey is a required field + EncodedKey *string `type:"string" required:"true"` + + // ID for public key information summary. // // Id is a required field Id *string `type:"string" required:"true"` - // The Amazon S3 canonical user ID for the origin access identity, which you - // use when giving the origin access identity read permission to an object in - // Amazon S3. + // Name for public key information summary. // - // S3CanonicalUserId is a required field - S3CanonicalUserId *string `type:"string" required:"true"` + // Name is a required field + Name *string `type:"string" required:"true"` } // String returns the string representation -func (s OriginAccessIdentitySummary) String() string { +func (s PublicKeySummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OriginAccessIdentitySummary) GoString() string { +func (s PublicKeySummary) GoString() string { return s.String() } // SetComment sets the Comment field's value. -func (s *OriginAccessIdentitySummary) SetComment(v string) *OriginAccessIdentitySummary { +func (s *PublicKeySummary) SetComment(v string) *PublicKeySummary { s.Comment = &v return s } -// SetId sets the Id field's value. -func (s *OriginAccessIdentitySummary) SetId(v string) *OriginAccessIdentitySummary { - s.Id = &v +// SetCreatedTime sets the CreatedTime field's value. +func (s *PublicKeySummary) SetCreatedTime(v time.Time) *PublicKeySummary { + s.CreatedTime = &v return s } -// SetS3CanonicalUserId sets the S3CanonicalUserId field's value. -func (s *OriginAccessIdentitySummary) SetS3CanonicalUserId(v string) *OriginAccessIdentitySummary { - s.S3CanonicalUserId = &v +// SetEncodedKey sets the EncodedKey field's value. +func (s *PublicKeySummary) SetEncodedKey(v string) *PublicKeySummary { + s.EncodedKey = &v return s } -// A complex type that contains HeaderName and HeaderValue elements, if any, -// for this distribution. -type OriginCustomHeader struct { - _ struct{} `type:"structure"` - - // The name of a header that you want CloudFront to forward to your origin. - // For more information, see Forwarding Custom Headers to Your Origin (Web Distributions - // Only) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html) - // in the Amazon Amazon CloudFront Developer Guide. - // - // HeaderName is a required field - HeaderName *string `type:"string" required:"true"` - - // The value for the header that you specified in the HeaderName field. - // - // HeaderValue is a required field - HeaderValue *string `type:"string" required:"true"` -} - -// String returns the string representation -func (s OriginCustomHeader) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s OriginCustomHeader) GoString() string { - return s.String() -} - -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *OriginCustomHeader) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OriginCustomHeader"} - if s.HeaderName == nil { - invalidParams.Add(request.NewErrParamRequired("HeaderName")) - } - if s.HeaderValue == nil { - invalidParams.Add(request.NewErrParamRequired("HeaderValue")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetHeaderName sets the HeaderName field's value. -func (s *OriginCustomHeader) SetHeaderName(v string) *OriginCustomHeader { - s.HeaderName = &v +// SetId sets the Id field's value. +func (s *PublicKeySummary) SetId(v string) *PublicKeySummary { + s.Id = &v return s } -// SetHeaderValue sets the HeaderValue field's value. -func (s *OriginCustomHeader) SetHeaderValue(v string) *OriginCustomHeader { - s.HeaderValue = &v +// SetName sets the Name field's value. +func (s *PublicKeySummary) SetName(v string) *PublicKeySummary { + s.Name = &v return s } -// A complex type that contains information about the SSL/TLS protocols that -// CloudFront can use when establishing an HTTPS connection with your origin. -type OriginSslProtocols struct { +// Query argument-profile mapping for field-level encryption. +type QueryArgProfile struct { _ struct{} `type:"structure"` - // A list that contains allowed SSL/TLS protocols for this distribution. + // ID of profile to use for field-level encryption query argument-profile mapping // - // Items is a required field - Items []*string `locationNameList:"SslProtocol" type:"list" required:"true"` + // ProfileId is a required field + ProfileId *string `type:"string" required:"true"` - // The number of SSL/TLS protocols that you want to allow CloudFront to use - // when establishing an HTTPS connection with this origin. + // Query argument for field-level encryption query argument-profile mapping. // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // QueryArg is a required field + QueryArg *string `type:"string" required:"true"` } // String returns the string representation -func (s OriginSslProtocols) String() string { +func (s QueryArgProfile) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OriginSslProtocols) GoString() string { +func (s QueryArgProfile) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *OriginSslProtocols) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OriginSslProtocols"} - if s.Items == nil { - invalidParams.Add(request.NewErrParamRequired("Items")) +func (s *QueryArgProfile) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "QueryArgProfile"} + if s.ProfileId == nil { + invalidParams.Add(request.NewErrParamRequired("ProfileId")) } - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) + if s.QueryArg == nil { + invalidParams.Add(request.NewErrParamRequired("QueryArg")) } if invalidParams.Len() > 0 { @@ -8633,58 +12602,52 @@ func (s *OriginSslProtocols) Validate() error { return nil } -// SetItems sets the Items field's value. -func (s *OriginSslProtocols) SetItems(v []*string) *OriginSslProtocols { - s.Items = v +// SetProfileId sets the ProfileId field's value. +func (s *QueryArgProfile) SetProfileId(v string) *QueryArgProfile { + s.ProfileId = &v return s } -// SetQuantity sets the Quantity field's value. 
-func (s *OriginSslProtocols) SetQuantity(v int64) *OriginSslProtocols { - s.Quantity = &v +// SetQueryArg sets the QueryArg field's value. +func (s *QueryArgProfile) SetQueryArg(v string) *QueryArgProfile { + s.QueryArg = &v return s } -// A complex type that contains information about origins for this distribution. -type Origins struct { +// Configuration for query argument-profile mapping for field-level encryption. +type QueryArgProfileConfig struct { _ struct{} `type:"structure"` - // A complex type that contains origins for this distribution. - Items []*Origin `locationNameList:"Origin" min:"1" type:"list"` - - // The number of origins for this distribution. + // Flag to set if you want a request to be forwarded to the origin even if the + // profile specified by the field-level encryption query argument, fle-profile, + // is unknown. // - // Quantity is a required field - Quantity *int64 `type:"integer" required:"true"` + // ForwardWhenQueryArgProfileIsUnknown is a required field + ForwardWhenQueryArgProfileIsUnknown *bool `type:"boolean" required:"true"` + + // Profiles specified for query argument-profile mapping for field-level encryption. + QueryArgProfiles *QueryArgProfiles `type:"structure"` } // String returns the string representation -func (s Origins) String() string { +func (s QueryArgProfileConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Origins) GoString() string { +func (s QueryArgProfileConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *Origins) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Origins"} - if s.Items != nil && len(s.Items) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Items", 1)) - } - if s.Quantity == nil { - invalidParams.Add(request.NewErrParamRequired("Quantity")) - } - if s.Items != nil { - for i, v := range s.Items { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) - } +func (s *QueryArgProfileConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "QueryArgProfileConfig"} + if s.ForwardWhenQueryArgProfileIsUnknown == nil { + invalidParams.Add(request.NewErrParamRequired("ForwardWhenQueryArgProfileIsUnknown")) + } + if s.QueryArgProfiles != nil { + if err := s.QueryArgProfiles.Validate(); err != nil { + invalidParams.AddNested("QueryArgProfiles", err.(request.ErrInvalidParams)) } } @@ -8694,50 +12657,57 @@ func (s *Origins) Validate() error { return nil } -// SetItems sets the Items field's value. -func (s *Origins) SetItems(v []*Origin) *Origins { - s.Items = v +// SetForwardWhenQueryArgProfileIsUnknown sets the ForwardWhenQueryArgProfileIsUnknown field's value. +func (s *QueryArgProfileConfig) SetForwardWhenQueryArgProfileIsUnknown(v bool) *QueryArgProfileConfig { + s.ForwardWhenQueryArgProfileIsUnknown = &v return s } -// SetQuantity sets the Quantity field's value. -func (s *Origins) SetQuantity(v int64) *Origins { - s.Quantity = &v +// SetQueryArgProfiles sets the QueryArgProfiles field's value. +func (s *QueryArgProfileConfig) SetQueryArgProfiles(v *QueryArgProfiles) *QueryArgProfileConfig { + s.QueryArgProfiles = v return s } -// A complex type that contains information about the objects that you want -// to invalidate. 
For more information, see Specifying the Objects to Invalidate -// (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html#invalidation-specifying-objects) -// in the Amazon CloudFront Developer Guide. -type Paths struct { +// Query argument-profile mapping for field-level encryption. +type QueryArgProfiles struct { _ struct{} `type:"structure"` - // A complex type that contains a list of the paths that you want to invalidate. - Items []*string `locationNameList:"Path" type:"list"` + // Number of items for query argument-profile mapping for field-level encryption. + Items []*QueryArgProfile `locationNameList:"QueryArgProfile" type:"list"` - // The number of objects that you want to invalidate. + // Number of profiles for query argument-profile mapping for field-level encryption. // // Quantity is a required field Quantity *int64 `type:"integer" required:"true"` } // String returns the string representation -func (s Paths) String() string { +func (s QueryArgProfiles) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Paths) GoString() string { +func (s QueryArgProfiles) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *Paths) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Paths"} +func (s *QueryArgProfiles) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "QueryArgProfiles"} if s.Quantity == nil { invalidParams.Add(request.NewErrParamRequired("Quantity")) } + if s.Items != nil { + for i, v := range s.Items { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Items", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -8746,13 +12716,13 @@ func (s *Paths) Validate() error { } // SetItems sets the Items field's value. -func (s *Paths) SetItems(v []*string) *Paths { +func (s *QueryArgProfiles) SetItems(v []*QueryArgProfile) *QueryArgProfiles { s.Items = v return s } // SetQuantity sets the Quantity field's value. -func (s *Paths) SetQuantity(v int64) *Paths { +func (s *QueryArgProfiles) SetQuantity(v int64) *QueryArgProfiles { s.Quantity = &v return s } @@ -9059,7 +13029,7 @@ type StreamingDistribution struct { Id *string `type:"string" required:"true"` // The date and time that the distribution was last modified. - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastModifiedTime *time.Time `type:"timestamp"` // The current status of the RTMP distribution. When the status is Deployed, // the distribution's information is propagated to all CloudFront edge locations. @@ -9474,7 +13444,7 @@ type StreamingDistributionSummary struct { // The date and time the distribution was last modified. // // LastModifiedTime is a required field - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + LastModifiedTime *time.Time `type:"timestamp" required:"true"` // PriceClass is a required field PriceClass *string `type:"string" required:"true" enum:"PriceClass"` @@ -9752,7 +13722,7 @@ type TagResourceInput struct { // A complex type that contains zero or more Tag elements. 
// // Tags is a required field - Tags *Tags `locationName:"Tags" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` + Tags *Tags `locationName:"Tags" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` } // String returns the string representation @@ -9949,7 +13919,7 @@ type UntagResourceInput struct { // A complex type that contains zero or more Tag key elements. // // TagKeys is a required field - TagKeys *TagKeys `locationName:"TagKeys" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` + TagKeys *TagKeys `locationName:"TagKeys" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` } // String returns the string representation @@ -10011,7 +13981,7 @@ type UpdateCloudFrontOriginAccessIdentityInput struct { // The identity's configuration information. // // CloudFrontOriginAccessIdentityConfig is a required field - CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `locationName:"CloudFrontOriginAccessIdentityConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` + CloudFrontOriginAccessIdentityConfig *OriginAccessIdentityConfig `locationName:"CloudFrontOriginAccessIdentityConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` // The identity's id. // @@ -10112,7 +14082,7 @@ type UpdateDistributionInput struct { // The distribution's configuration information. // // DistributionConfig is a required field - DistributionConfig *DistributionConfig `locationName:"DistributionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` + DistributionConfig *DistributionConfig `locationName:"DistributionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` // The distribution's id. // @@ -10206,6 +14176,304 @@ func (s *UpdateDistributionOutput) SetETag(v string) *UpdateDistributionOutput { return s } +type UpdateFieldLevelEncryptionConfigInput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionConfig"` + + // Request to update a field-level encryption configuration. + // + // FieldLevelEncryptionConfig is a required field + FieldLevelEncryptionConfig *FieldLevelEncryptionConfig `locationName:"FieldLevelEncryptionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` + + // The ID of the configuration you want to update. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + + // The value of the ETag header that you received when retrieving the configuration + // identity to update. For example: E2QWRUHAPOMQZL. + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` +} + +// String returns the string representation +func (s UpdateFieldLevelEncryptionConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateFieldLevelEncryptionConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateFieldLevelEncryptionConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateFieldLevelEncryptionConfigInput"} + if s.FieldLevelEncryptionConfig == nil { + invalidParams.Add(request.NewErrParamRequired("FieldLevelEncryptionConfig")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.FieldLevelEncryptionConfig != nil { + if err := s.FieldLevelEncryptionConfig.Validate(); err != nil { + invalidParams.AddNested("FieldLevelEncryptionConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFieldLevelEncryptionConfig sets the FieldLevelEncryptionConfig field's value. +func (s *UpdateFieldLevelEncryptionConfigInput) SetFieldLevelEncryptionConfig(v *FieldLevelEncryptionConfig) *UpdateFieldLevelEncryptionConfigInput { + s.FieldLevelEncryptionConfig = v + return s +} + +// SetId sets the Id field's value. +func (s *UpdateFieldLevelEncryptionConfigInput) SetId(v string) *UpdateFieldLevelEncryptionConfigInput { + s.Id = &v + return s +} + +// SetIfMatch sets the IfMatch field's value. +func (s *UpdateFieldLevelEncryptionConfigInput) SetIfMatch(v string) *UpdateFieldLevelEncryptionConfigInput { + s.IfMatch = &v + return s +} + +type UpdateFieldLevelEncryptionConfigOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryption"` + + // The value of the ETag header that you received when updating the configuration. + // For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Return the results of updating the configuration. + FieldLevelEncryption *FieldLevelEncryption `type:"structure"` +} + +// String returns the string representation +func (s UpdateFieldLevelEncryptionConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateFieldLevelEncryptionConfigOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *UpdateFieldLevelEncryptionConfigOutput) SetETag(v string) *UpdateFieldLevelEncryptionConfigOutput { + s.ETag = &v + return s +} + +// SetFieldLevelEncryption sets the FieldLevelEncryption field's value. +func (s *UpdateFieldLevelEncryptionConfigOutput) SetFieldLevelEncryption(v *FieldLevelEncryption) *UpdateFieldLevelEncryptionConfigOutput { + s.FieldLevelEncryption = v + return s +} + +type UpdateFieldLevelEncryptionProfileInput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionProfileConfig"` + + // Request to update a field-level encryption profile. + // + // FieldLevelEncryptionProfileConfig is a required field + FieldLevelEncryptionProfileConfig *FieldLevelEncryptionProfileConfig `locationName:"FieldLevelEncryptionProfileConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` + + // The ID of the field-level encryption profile request. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + + // The value of the ETag header that you received when retrieving the profile + // identity to update. For example: E2QWRUHAPOMQZL. 
+ IfMatch *string `location:"header" locationName:"If-Match" type:"string"` +} + +// String returns the string representation +func (s UpdateFieldLevelEncryptionProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateFieldLevelEncryptionProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateFieldLevelEncryptionProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateFieldLevelEncryptionProfileInput"} + if s.FieldLevelEncryptionProfileConfig == nil { + invalidParams.Add(request.NewErrParamRequired("FieldLevelEncryptionProfileConfig")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.FieldLevelEncryptionProfileConfig != nil { + if err := s.FieldLevelEncryptionProfileConfig.Validate(); err != nil { + invalidParams.AddNested("FieldLevelEncryptionProfileConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFieldLevelEncryptionProfileConfig sets the FieldLevelEncryptionProfileConfig field's value. +func (s *UpdateFieldLevelEncryptionProfileInput) SetFieldLevelEncryptionProfileConfig(v *FieldLevelEncryptionProfileConfig) *UpdateFieldLevelEncryptionProfileInput { + s.FieldLevelEncryptionProfileConfig = v + return s +} + +// SetId sets the Id field's value. +func (s *UpdateFieldLevelEncryptionProfileInput) SetId(v string) *UpdateFieldLevelEncryptionProfileInput { + s.Id = &v + return s +} + +// SetIfMatch sets the IfMatch field's value. +func (s *UpdateFieldLevelEncryptionProfileInput) SetIfMatch(v string) *UpdateFieldLevelEncryptionProfileInput { + s.IfMatch = &v + return s +} + +type UpdateFieldLevelEncryptionProfileOutput struct { + _ struct{} `type:"structure" payload:"FieldLevelEncryptionProfile"` + + // The result of the field-level encryption profile request. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Return the results of updating the profile. + FieldLevelEncryptionProfile *FieldLevelEncryptionProfile `type:"structure"` +} + +// String returns the string representation +func (s UpdateFieldLevelEncryptionProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateFieldLevelEncryptionProfileOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *UpdateFieldLevelEncryptionProfileOutput) SetETag(v string) *UpdateFieldLevelEncryptionProfileOutput { + s.ETag = &v + return s +} + +// SetFieldLevelEncryptionProfile sets the FieldLevelEncryptionProfile field's value. +func (s *UpdateFieldLevelEncryptionProfileOutput) SetFieldLevelEncryptionProfile(v *FieldLevelEncryptionProfile) *UpdateFieldLevelEncryptionProfileOutput { + s.FieldLevelEncryptionProfile = v + return s +} + +type UpdatePublicKeyInput struct { + _ struct{} `type:"structure" payload:"PublicKeyConfig"` + + // ID of the public key to be updated. + // + // Id is a required field + Id *string `location:"uri" locationName:"Id" type:"string" required:"true"` + + // The value of the ETag header that you received when retrieving the public + // key to update. For example: E2QWRUHAPOMQZL. + IfMatch *string `location:"header" locationName:"If-Match" type:"string"` + + // Request to update public key information. 
+ // + // PublicKeyConfig is a required field + PublicKeyConfig *PublicKeyConfig `locationName:"PublicKeyConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` +} + +// String returns the string representation +func (s UpdatePublicKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdatePublicKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdatePublicKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdatePublicKeyInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.PublicKeyConfig == nil { + invalidParams.Add(request.NewErrParamRequired("PublicKeyConfig")) + } + if s.PublicKeyConfig != nil { + if err := s.PublicKeyConfig.Validate(); err != nil { + invalidParams.AddNested("PublicKeyConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *UpdatePublicKeyInput) SetId(v string) *UpdatePublicKeyInput { + s.Id = &v + return s +} + +// SetIfMatch sets the IfMatch field's value. +func (s *UpdatePublicKeyInput) SetIfMatch(v string) *UpdatePublicKeyInput { + s.IfMatch = &v + return s +} + +// SetPublicKeyConfig sets the PublicKeyConfig field's value. +func (s *UpdatePublicKeyInput) SetPublicKeyConfig(v *PublicKeyConfig) *UpdatePublicKeyInput { + s.PublicKeyConfig = v + return s +} + +type UpdatePublicKeyOutput struct { + _ struct{} `type:"structure" payload:"PublicKey"` + + // The current version of the update public key result. For example: E2QWRUHAPOMQZL. + ETag *string `location:"header" locationName:"ETag" type:"string"` + + // Return the results of updating the public key. + PublicKey *PublicKey `type:"structure"` +} + +// String returns the string representation +func (s UpdatePublicKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdatePublicKeyOutput) GoString() string { + return s.String() +} + +// SetETag sets the ETag field's value. +func (s *UpdatePublicKeyOutput) SetETag(v string) *UpdatePublicKeyOutput { + s.ETag = &v + return s +} + +// SetPublicKey sets the PublicKey field's value. +func (s *UpdatePublicKeyOutput) SetPublicKey(v *PublicKey) *UpdatePublicKeyOutput { + s.PublicKey = v + return s +} + // The request to update a streaming distribution. type UpdateStreamingDistributionInput struct { _ struct{} `type:"structure" payload:"StreamingDistributionConfig"` @@ -10222,7 +14490,7 @@ type UpdateStreamingDistributionInput struct { // The streaming distribution's configuration information. 
// // StreamingDistributionConfig is a required field - StreamingDistributionConfig *StreamingDistributionConfig `locationName:"StreamingDistributionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2017-03-25/"` + StreamingDistributionConfig *StreamingDistributionConfig `locationName:"StreamingDistributionConfig" type:"structure" required:"true" xmlURI:"http://cloudfront.amazonaws.com/doc/2018-06-18/"` } // String returns the string representation @@ -10406,6 +14674,8 @@ type ViewerCertificate struct { // * ViewerCertificate$IAMCertificateId // // * ViewerCertificate$CloudFrontDefaultCertificate + // + // Deprecated: Certificate has been deprecated Certificate *string `deprecated:"true" type:"string"` // This field has been deprecated. Use one of the following fields instead: @@ -10415,6 +14685,8 @@ type ViewerCertificate struct { // * ViewerCertificate$IAMCertificateId // // * ViewerCertificate$CloudFrontDefaultCertificate + // + // Deprecated: CertificateSource has been deprecated CertificateSource *string `deprecated:"true" type:"string" enum:"CertificateSource"` // For information about how and when to use CloudFrontDefaultCertificate, see @@ -10565,6 +14837,11 @@ const ( EventTypeOriginResponse = "origin-response" ) +const ( + // FormatUrlencoded is a Format enum value + FormatUrlencoded = "URLEncoded" +) + const ( // GeoRestrictionTypeBlacklist is a GeoRestrictionType enum value GeoRestrictionTypeBlacklist = "blacklist" diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/doc.go b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/doc.go index 5fb8d3622a2..0a431962103 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/doc.go @@ -8,7 +8,7 @@ // errors. For detailed information about CloudFront features, see the Amazon // CloudFront Developer Guide. // -// See https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-03-25 for more information on this service. +// See https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2018-06-18 for more information on this service. // // See cloudfront package documentation for more information. // https://docs.aws.amazon.com/sdk-for-go/api/service/cloudfront/ diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/errors.go b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/errors.go index ed66a11e51b..ede41494f30 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/errors.go @@ -18,6 +18,12 @@ const ( // "CNAMEAlreadyExists". ErrCodeCNAMEAlreadyExists = "CNAMEAlreadyExists" + // ErrCodeCannotChangeImmutablePublicKeyFields for service response error code + // "CannotChangeImmutablePublicKeyFields". + // + // You can't change the value of a public key. + ErrCodeCannotChangeImmutablePublicKeyFields = "CannotChangeImmutablePublicKeyFields" + // ErrCodeDistributionAlreadyExists for service response error code // "DistributionAlreadyExists". // @@ -29,6 +35,43 @@ const ( // "DistributionNotDisabled". ErrCodeDistributionNotDisabled = "DistributionNotDisabled" + // ErrCodeFieldLevelEncryptionConfigAlreadyExists for service response error code + // "FieldLevelEncryptionConfigAlreadyExists". + // + // The specified configuration for field-level encryption already exists. 
+ ErrCodeFieldLevelEncryptionConfigAlreadyExists = "FieldLevelEncryptionConfigAlreadyExists" + + // ErrCodeFieldLevelEncryptionConfigInUse for service response error code + // "FieldLevelEncryptionConfigInUse". + // + // The specified configuration for field-level encryption is in use. + ErrCodeFieldLevelEncryptionConfigInUse = "FieldLevelEncryptionConfigInUse" + + // ErrCodeFieldLevelEncryptionProfileAlreadyExists for service response error code + // "FieldLevelEncryptionProfileAlreadyExists". + // + // The specified profile for field-level encryption already exists. + ErrCodeFieldLevelEncryptionProfileAlreadyExists = "FieldLevelEncryptionProfileAlreadyExists" + + // ErrCodeFieldLevelEncryptionProfileInUse for service response error code + // "FieldLevelEncryptionProfileInUse". + // + // The specified profile for field-level encryption is in use. + ErrCodeFieldLevelEncryptionProfileInUse = "FieldLevelEncryptionProfileInUse" + + // ErrCodeFieldLevelEncryptionProfileSizeExceeded for service response error code + // "FieldLevelEncryptionProfileSizeExceeded". + // + // The maximum size of a profile for field-level encryption was exceeded. + ErrCodeFieldLevelEncryptionProfileSizeExceeded = "FieldLevelEncryptionProfileSizeExceeded" + + // ErrCodeIllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior for service response error code + // "IllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior". + // + // The specified configuration for field-level encryption can't be associated + // with the specified cache behavior. + ErrCodeIllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior = "IllegalFieldLevelEncryptionConfigAssociationWithCacheBehavior" + // ErrCodeIllegalUpdate for service response error code // "IllegalUpdate". // @@ -180,6 +223,18 @@ const ( // The specified distribution does not exist. ErrCodeNoSuchDistribution = "NoSuchDistribution" + // ErrCodeNoSuchFieldLevelEncryptionConfig for service response error code + // "NoSuchFieldLevelEncryptionConfig". + // + // The specified configuration for field-level encryption doesn't exist. + ErrCodeNoSuchFieldLevelEncryptionConfig = "NoSuchFieldLevelEncryptionConfig" + + // ErrCodeNoSuchFieldLevelEncryptionProfile for service response error code + // "NoSuchFieldLevelEncryptionProfile". + // + // The specified profile for field-level encryption doesn't exist. + ErrCodeNoSuchFieldLevelEncryptionProfile = "NoSuchFieldLevelEncryptionProfile" + // ErrCodeNoSuchInvalidation for service response error code // "NoSuchInvalidation". // @@ -192,6 +247,12 @@ const ( // No origin exists with the specified Origin Id. ErrCodeNoSuchOrigin = "NoSuchOrigin" + // ErrCodeNoSuchPublicKey for service response error code + // "NoSuchPublicKey". + // + // The specified public key doesn't exist. + ErrCodeNoSuchPublicKey = "NoSuchPublicKey" + // ErrCodeNoSuchResource for service response error code // "NoSuchResource". ErrCodeNoSuchResource = "NoSuchResource" @@ -203,17 +264,17 @@ const ( ErrCodeNoSuchStreamingDistribution = "NoSuchStreamingDistribution" // ErrCodeOriginAccessIdentityAlreadyExists for service response error code - // "OriginAccessIdentityAlreadyExists". + // "CloudFrontOriginAccessIdentityAlreadyExists". // // If the CallerReference is a value you already sent in a previous request // to create an identity but the content of the CloudFrontOriginAccessIdentityConfig // is different from the original request, CloudFront returns a CloudFrontOriginAccessIdentityAlreadyExists // error. 
- ErrCodeOriginAccessIdentityAlreadyExists = "OriginAccessIdentityAlreadyExists" + ErrCodeOriginAccessIdentityAlreadyExists = "CloudFrontOriginAccessIdentityAlreadyExists" // ErrCodeOriginAccessIdentityInUse for service response error code - // "OriginAccessIdentityInUse". - ErrCodeOriginAccessIdentityInUse = "OriginAccessIdentityInUse" + // "CloudFrontOriginAccessIdentityInUse". + ErrCodeOriginAccessIdentityInUse = "CloudFrontOriginAccessIdentityInUse" // ErrCodePreconditionFailed for service response error code // "PreconditionFailed". @@ -222,9 +283,23 @@ const ( // to false. ErrCodePreconditionFailed = "PreconditionFailed" - // ErrCodeResourceInUse for service response error code - // "ResourceInUse". - ErrCodeResourceInUse = "ResourceInUse" + // ErrCodePublicKeyAlreadyExists for service response error code + // "PublicKeyAlreadyExists". + // + // The specified public key already exists. + ErrCodePublicKeyAlreadyExists = "PublicKeyAlreadyExists" + + // ErrCodePublicKeyInUse for service response error code + // "PublicKeyInUse". + // + // The specified public key is in use. + ErrCodePublicKeyInUse = "PublicKeyInUse" + + // ErrCodeQueryArgProfileEmpty for service response error code + // "QueryArgProfileEmpty". + // + // No profile specified for the field-level encryption query argument. + ErrCodeQueryArgProfileEmpty = "QueryArgProfileEmpty" // ErrCodeStreamingDistributionAlreadyExists for service response error code // "StreamingDistributionAlreadyExists". @@ -273,6 +348,13 @@ const ( // allowed. ErrCodeTooManyDistributions = "TooManyDistributions" + // ErrCodeTooManyDistributionsAssociatedToFieldLevelEncryptionConfig for service response error code + // "TooManyDistributionsAssociatedToFieldLevelEncryptionConfig". + // + // The maximum number of distributions have been associated with the specified + // configuration for field-level encryption. + ErrCodeTooManyDistributionsAssociatedToFieldLevelEncryptionConfig = "TooManyDistributionsAssociatedToFieldLevelEncryptionConfig" + // ErrCodeTooManyDistributionsWithLambdaAssociations for service response error code // "TooManyDistributionsWithLambdaAssociations". // @@ -280,6 +362,47 @@ const ( // Lambda function associations per owner to be exceeded. ErrCodeTooManyDistributionsWithLambdaAssociations = "TooManyDistributionsWithLambdaAssociations" + // ErrCodeTooManyFieldLevelEncryptionConfigs for service response error code + // "TooManyFieldLevelEncryptionConfigs". + // + // The maximum number of configurations for field-level encryption have been + // created. + ErrCodeTooManyFieldLevelEncryptionConfigs = "TooManyFieldLevelEncryptionConfigs" + + // ErrCodeTooManyFieldLevelEncryptionContentTypeProfiles for service response error code + // "TooManyFieldLevelEncryptionContentTypeProfiles". + // + // The maximum number of content type profiles for field-level encryption have + // been created. + ErrCodeTooManyFieldLevelEncryptionContentTypeProfiles = "TooManyFieldLevelEncryptionContentTypeProfiles" + + // ErrCodeTooManyFieldLevelEncryptionEncryptionEntities for service response error code + // "TooManyFieldLevelEncryptionEncryptionEntities". + // + // The maximum number of encryption entities for field-level encryption have + // been created. + ErrCodeTooManyFieldLevelEncryptionEncryptionEntities = "TooManyFieldLevelEncryptionEncryptionEntities" + + // ErrCodeTooManyFieldLevelEncryptionFieldPatterns for service response error code + // "TooManyFieldLevelEncryptionFieldPatterns". 
+ // + // The maximum number of field patterns for field-level encryption have been + // created. + ErrCodeTooManyFieldLevelEncryptionFieldPatterns = "TooManyFieldLevelEncryptionFieldPatterns" + + // ErrCodeTooManyFieldLevelEncryptionProfiles for service response error code + // "TooManyFieldLevelEncryptionProfiles". + // + // The maximum number of profiles for field-level encryption have been created. + ErrCodeTooManyFieldLevelEncryptionProfiles = "TooManyFieldLevelEncryptionProfiles" + + // ErrCodeTooManyFieldLevelEncryptionQueryArgProfiles for service response error code + // "TooManyFieldLevelEncryptionQueryArgProfiles". + // + // The maximum number of query arg profiles for field-level encryption have + // been created. + ErrCodeTooManyFieldLevelEncryptionQueryArgProfiles = "TooManyFieldLevelEncryptionQueryArgProfiles" + // ErrCodeTooManyHeadersInForwardedValues for service response error code // "TooManyHeadersInForwardedValues". ErrCodeTooManyHeadersInForwardedValues = "TooManyHeadersInForwardedValues" @@ -308,6 +431,13 @@ const ( // You cannot create more origins for the distribution. ErrCodeTooManyOrigins = "TooManyOrigins" + // ErrCodeTooManyPublicKeys for service response error code + // "TooManyPublicKeys". + // + // The maximum number of public keys for field-level encryption have been created. + // To create a new public key, delete one of the existing keys. + ErrCodeTooManyPublicKeys = "TooManyPublicKeys" + // ErrCodeTooManyQueryStringParameters for service response error code // "TooManyQueryStringParameters". ErrCodeTooManyQueryStringParameters = "TooManyQueryStringParameters" diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/service.go index 75dcf86e8dc..24b06e0a0f2 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudfront/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudfront/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "cloudfront" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "cloudfront" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CloudFront" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CloudFront client with a session. @@ -55,10 +56,11 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, - APIVersion: "2017-03-25", + APIVersion: "2018-06-18", }, handlers, ), diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/api.go new file mode 100644 index 00000000000..08f4efc32ee --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/api.go @@ -0,0 +1,3069 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package cloudhsmv2 + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opCopyBackupToRegion = "CopyBackupToRegion" + +// CopyBackupToRegionRequest generates a "aws/request.Request" representing the +// client's request for the CopyBackupToRegion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CopyBackupToRegion for more information on using the CopyBackupToRegion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CopyBackupToRegionRequest method. +// req, resp := client.CopyBackupToRegionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/CopyBackupToRegion +func (c *CloudHSMV2) CopyBackupToRegionRequest(input *CopyBackupToRegionInput) (req *request.Request, output *CopyBackupToRegionOutput) { + op := &request.Operation{ + Name: opCopyBackupToRegion, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CopyBackupToRegionInput{} + } + + output = &CopyBackupToRegionOutput{} + req = c.newRequest(op, input, output) + return +} + +// CopyBackupToRegion API operation for AWS CloudHSM V2. +// +// Copy an AWS CloudHSM cluster backup to a different region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation CopyBackupToRegion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/CopyBackupToRegion +func (c *CloudHSMV2) CopyBackupToRegion(input *CopyBackupToRegionInput) (*CopyBackupToRegionOutput, error) { + req, out := c.CopyBackupToRegionRequest(input) + return out, req.Send() +} + +// CopyBackupToRegionWithContext is the same as CopyBackupToRegion with the addition of +// the ability to pass a context and additional request options. +// +// See CopyBackupToRegion for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) CopyBackupToRegionWithContext(ctx aws.Context, input *CopyBackupToRegionInput, opts ...request.Option) (*CopyBackupToRegionOutput, error) { + req, out := c.CopyBackupToRegionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateCluster = "CreateCluster" + +// CreateClusterRequest generates a "aws/request.Request" representing the +// client's request for the CreateCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCluster for more information on using the CreateCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateClusterRequest method. +// req, resp := client.CreateClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/CreateCluster +func (c *CloudHSMV2) CreateClusterRequest(input *CreateClusterInput) (req *request.Request, output *CreateClusterOutput) { + op := &request.Operation{ + Name: opCreateCluster, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateClusterInput{} + } + + output = &CreateClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCluster API operation for AWS CloudHSM V2. +// +// Creates a new AWS CloudHSM cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation CreateCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/CreateCluster +func (c *CloudHSMV2) CreateCluster(input *CreateClusterInput) (*CreateClusterOutput, error) { + req, out := c.CreateClusterRequest(input) + return out, req.Send() +} + +// CreateClusterWithContext is the same as CreateCluster with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) CreateClusterWithContext(ctx aws.Context, input *CreateClusterInput, opts ...request.Option) (*CreateClusterOutput, error) { + req, out := c.CreateClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateHsm = "CreateHsm" + +// CreateHsmRequest generates a "aws/request.Request" representing the +// client's request for the CreateHsm operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateHsm for more information on using the CreateHsm +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateHsmRequest method. +// req, resp := client.CreateHsmRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/CreateHsm +func (c *CloudHSMV2) CreateHsmRequest(input *CreateHsmInput) (req *request.Request, output *CreateHsmOutput) { + op := &request.Operation{ + Name: opCreateHsm, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateHsmInput{} + } + + output = &CreateHsmOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateHsm API operation for AWS CloudHSM V2. +// +// Creates a new hardware security module (HSM) in the specified AWS CloudHSM +// cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation CreateHsm for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. 
+// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/CreateHsm +func (c *CloudHSMV2) CreateHsm(input *CreateHsmInput) (*CreateHsmOutput, error) { + req, out := c.CreateHsmRequest(input) + return out, req.Send() +} + +// CreateHsmWithContext is the same as CreateHsm with the addition of +// the ability to pass a context and additional request options. +// +// See CreateHsm for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) CreateHsmWithContext(ctx aws.Context, input *CreateHsmInput, opts ...request.Option) (*CreateHsmOutput, error) { + req, out := c.CreateHsmRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBackup = "DeleteBackup" + +// DeleteBackupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBackup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBackup for more information on using the DeleteBackup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBackupRequest method. +// req, resp := client.DeleteBackupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DeleteBackup +func (c *CloudHSMV2) DeleteBackupRequest(input *DeleteBackupInput) (req *request.Request, output *DeleteBackupOutput) { + op := &request.Operation{ + Name: opDeleteBackup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteBackupInput{} + } + + output = &DeleteBackupOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteBackup API operation for AWS CloudHSM V2. +// +// Deletes a specified AWS CloudHSM backup. A backup can be restored up to 7 +// days after the DeleteBackup request. For more information on restoring a +// backup, see RestoreBackup +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation DeleteBackup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. 
+// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DeleteBackup +func (c *CloudHSMV2) DeleteBackup(input *DeleteBackupInput) (*DeleteBackupOutput, error) { + req, out := c.DeleteBackupRequest(input) + return out, req.Send() +} + +// DeleteBackupWithContext is the same as DeleteBackup with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBackup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) DeleteBackupWithContext(ctx aws.Context, input *DeleteBackupInput, opts ...request.Option) (*DeleteBackupOutput, error) { + req, out := c.DeleteBackupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteCluster = "DeleteCluster" + +// DeleteClusterRequest generates a "aws/request.Request" representing the +// client's request for the DeleteCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteCluster for more information on using the DeleteCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteClusterRequest method. +// req, resp := client.DeleteClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DeleteCluster +func (c *CloudHSMV2) DeleteClusterRequest(input *DeleteClusterInput) (req *request.Request, output *DeleteClusterOutput) { + op := &request.Operation{ + Name: opDeleteCluster, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteClusterInput{} + } + + output = &DeleteClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteCluster API operation for AWS CloudHSM V2. +// +// Deletes the specified AWS CloudHSM cluster. Before you can delete a cluster, +// you must delete all HSMs in the cluster. To see if the cluster contains any +// HSMs, use DescribeClusters. To delete an HSM, use DeleteHsm. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation DeleteCluster for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DeleteCluster +func (c *CloudHSMV2) DeleteCluster(input *DeleteClusterInput) (*DeleteClusterOutput, error) { + req, out := c.DeleteClusterRequest(input) + return out, req.Send() +} + +// DeleteClusterWithContext is the same as DeleteCluster with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) DeleteClusterWithContext(ctx aws.Context, input *DeleteClusterInput, opts ...request.Option) (*DeleteClusterOutput, error) { + req, out := c.DeleteClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteHsm = "DeleteHsm" + +// DeleteHsmRequest generates a "aws/request.Request" representing the +// client's request for the DeleteHsm operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteHsm for more information on using the DeleteHsm +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteHsmRequest method. +// req, resp := client.DeleteHsmRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DeleteHsm +func (c *CloudHSMV2) DeleteHsmRequest(input *DeleteHsmInput) (req *request.Request, output *DeleteHsmOutput) { + op := &request.Operation{ + Name: opDeleteHsm, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteHsmInput{} + } + + output = &DeleteHsmOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteHsm API operation for AWS CloudHSM V2. +// +// Deletes the specified HSM. To specify an HSM, you can use its identifier +// (ID), the IP address of the HSM's elastic network interface (ENI), or the +// ID of the HSM's ENI. You need to specify only one of these values. 
To find +// these values, use DescribeClusters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation DeleteHsm for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DeleteHsm +func (c *CloudHSMV2) DeleteHsm(input *DeleteHsmInput) (*DeleteHsmOutput, error) { + req, out := c.DeleteHsmRequest(input) + return out, req.Send() +} + +// DeleteHsmWithContext is the same as DeleteHsm with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteHsm for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) DeleteHsmWithContext(ctx aws.Context, input *DeleteHsmInput, opts ...request.Option) (*DeleteHsmOutput, error) { + req, out := c.DeleteHsmRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeBackups = "DescribeBackups" + +// DescribeBackupsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeBackups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeBackups for more information on using the DescribeBackups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeBackupsRequest method. 
+// req, resp := client.DescribeBackupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DescribeBackups +func (c *CloudHSMV2) DescribeBackupsRequest(input *DescribeBackupsInput) (req *request.Request, output *DescribeBackupsOutput) { + op := &request.Operation{ + Name: opDescribeBackups, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeBackupsInput{} + } + + output = &DescribeBackupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeBackups API operation for AWS CloudHSM V2. +// +// Gets information about backups of AWS CloudHSM clusters. +// +// This is a paginated operation, which means that each response might contain +// only a subset of all the backups. When the response contains only a subset +// of backups, it includes a NextToken value. Use this value in a subsequent +// DescribeBackups request to get more backups. When you receive a response +// with no NextToken (or an empty or null value), that means there are no more +// backups to get. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation DescribeBackups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DescribeBackups +func (c *CloudHSMV2) DescribeBackups(input *DescribeBackupsInput) (*DescribeBackupsOutput, error) { + req, out := c.DescribeBackupsRequest(input) + return out, req.Send() +} + +// DescribeBackupsWithContext is the same as DescribeBackups with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeBackups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) DescribeBackupsWithContext(ctx aws.Context, input *DescribeBackupsInput, opts ...request.Option) (*DescribeBackupsOutput, error) { + req, out := c.DescribeBackupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +// DescribeBackupsPages iterates over the pages of a DescribeBackups operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeBackups method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeBackups operation. +// pageNum := 0 +// err := client.DescribeBackupsPages(params, +// func(page *DescribeBackupsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudHSMV2) DescribeBackupsPages(input *DescribeBackupsInput, fn func(*DescribeBackupsOutput, bool) bool) error { + return c.DescribeBackupsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeBackupsPagesWithContext same as DescribeBackupsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) DescribeBackupsPagesWithContext(ctx aws.Context, input *DescribeBackupsInput, fn func(*DescribeBackupsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeBackupsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeBackupsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeBackupsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeClusters = "DescribeClusters" + +// DescribeClustersRequest generates a "aws/request.Request" representing the +// client's request for the DescribeClusters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeClusters for more information on using the DescribeClusters +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeClustersRequest method. 
+// req, resp := client.DescribeClustersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DescribeClusters +func (c *CloudHSMV2) DescribeClustersRequest(input *DescribeClustersInput) (req *request.Request, output *DescribeClustersOutput) { + op := &request.Operation{ + Name: opDescribeClusters, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeClustersInput{} + } + + output = &DescribeClustersOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeClusters API operation for AWS CloudHSM V2. +// +// Gets information about AWS CloudHSM clusters. +// +// This is a paginated operation, which means that each response might contain +// only a subset of all the clusters. When the response contains only a subset +// of clusters, it includes a NextToken value. Use this value in a subsequent +// DescribeClusters request to get more clusters. When you receive a response +// with no NextToken (or an empty or null value), that means there are no more +// clusters to get. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation DescribeClusters for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/DescribeClusters +func (c *CloudHSMV2) DescribeClusters(input *DescribeClustersInput) (*DescribeClustersOutput, error) { + req, out := c.DescribeClustersRequest(input) + return out, req.Send() +} + +// DescribeClustersWithContext is the same as DescribeClusters with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeClusters for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) DescribeClustersWithContext(ctx aws.Context, input *DescribeClustersInput, opts ...request.Option) (*DescribeClustersOutput, error) { + req, out := c.DescribeClustersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeClustersPages iterates over the pages of a DescribeClusters operation, +// calling the "fn" function with the response data for each page. 
To stop +// iterating, return false from the fn function. +// +// See DescribeClusters method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeClusters operation. +// pageNum := 0 +// err := client.DescribeClustersPages(params, +// func(page *DescribeClustersOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudHSMV2) DescribeClustersPages(input *DescribeClustersInput, fn func(*DescribeClustersOutput, bool) bool) error { + return c.DescribeClustersPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeClustersPagesWithContext same as DescribeClustersPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) DescribeClustersPagesWithContext(ctx aws.Context, input *DescribeClustersInput, fn func(*DescribeClustersOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeClustersInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeClustersRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeClustersOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opInitializeCluster = "InitializeCluster" + +// InitializeClusterRequest generates a "aws/request.Request" representing the +// client's request for the InitializeCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See InitializeCluster for more information on using the InitializeCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the InitializeClusterRequest method. +// req, resp := client.InitializeClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/InitializeCluster +func (c *CloudHSMV2) InitializeClusterRequest(input *InitializeClusterInput) (req *request.Request, output *InitializeClusterOutput) { + op := &request.Operation{ + Name: opInitializeCluster, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &InitializeClusterInput{} + } + + output = &InitializeClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// InitializeCluster API operation for AWS CloudHSM V2. +// +// Claims an AWS CloudHSM cluster by submitting the cluster certificate issued +// by your issuing certificate authority (CA) and the CA's root certificate. 
+// Before you can claim a cluster, you must sign the cluster's certificate signing +// request (CSR) with your issuing CA. To get the cluster's CSR, use DescribeClusters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation InitializeCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/InitializeCluster +func (c *CloudHSMV2) InitializeCluster(input *InitializeClusterInput) (*InitializeClusterOutput, error) { + req, out := c.InitializeClusterRequest(input) + return out, req.Send() +} + +// InitializeClusterWithContext is the same as InitializeCluster with the addition of +// the ability to pass a context and additional request options. +// +// See InitializeCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) InitializeClusterWithContext(ctx aws.Context, input *InitializeClusterInput, opts ...request.Option) (*InitializeClusterOutput, error) { + req, out := c.InitializeClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListTags = "ListTags" + +// ListTagsRequest generates a "aws/request.Request" representing the +// client's request for the ListTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTags for more information on using the ListTags +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsRequest method. 
+// req, resp := client.ListTagsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/ListTags +func (c *CloudHSMV2) ListTagsRequest(input *ListTagsInput) (req *request.Request, output *ListTagsOutput) { + op := &request.Operation{ + Name: opListTags, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListTagsInput{} + } + + output = &ListTagsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTags API operation for AWS CloudHSM V2. +// +// Gets a list of tags for the specified AWS CloudHSM cluster. +// +// This is a paginated operation, which means that each response might contain +// only a subset of all the tags. When the response contains only a subset of +// tags, it includes a NextToken value. Use this value in a subsequent ListTags +// request to get more tags. When you receive a response with no NextToken (or +// an empty or null value), that means there are no more tags to get. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation ListTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/ListTags +func (c *CloudHSMV2) ListTags(input *ListTagsInput) (*ListTagsOutput, error) { + req, out := c.ListTagsRequest(input) + return out, req.Send() +} + +// ListTagsWithContext is the same as ListTags with the addition of +// the ability to pass a context and additional request options. +// +// See ListTags for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) ListTagsWithContext(ctx aws.Context, input *ListTagsInput, opts ...request.Option) (*ListTagsOutput, error) { + req, out := c.ListTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListTagsPages iterates over the pages of a ListTags operation, +// calling the "fn" function with the response data for each page. 
To stop +// iterating, return false from the fn function. +// +// See ListTags method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListTags operation. +// pageNum := 0 +// err := client.ListTagsPages(params, +// func(page *ListTagsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudHSMV2) ListTagsPages(input *ListTagsInput, fn func(*ListTagsOutput, bool) bool) error { + return c.ListTagsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListTagsPagesWithContext same as ListTagsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) ListTagsPagesWithContext(ctx aws.Context, input *ListTagsInput, fn func(*ListTagsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListTagsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListTagsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListTagsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opRestoreBackup = "RestoreBackup" + +// RestoreBackupRequest generates a "aws/request.Request" representing the +// client's request for the RestoreBackup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RestoreBackup for more information on using the RestoreBackup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RestoreBackupRequest method. +// req, resp := client.RestoreBackupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/RestoreBackup +func (c *CloudHSMV2) RestoreBackupRequest(input *RestoreBackupInput) (req *request.Request, output *RestoreBackupOutput) { + op := &request.Operation{ + Name: opRestoreBackup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RestoreBackupInput{} + } + + output = &RestoreBackupOutput{} + req = c.newRequest(op, input, output) + return +} + +// RestoreBackup API operation for AWS CloudHSM V2. +// +// Restores a specified AWS CloudHSM backup that is in the PENDING_DELETION +// state. For more information on deleting a backup, see DeleteBackup. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation RestoreBackup for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/RestoreBackup +func (c *CloudHSMV2) RestoreBackup(input *RestoreBackupInput) (*RestoreBackupOutput, error) { + req, out := c.RestoreBackupRequest(input) + return out, req.Send() +} + +// RestoreBackupWithContext is the same as RestoreBackup with the addition of +// the ability to pass a context and additional request options. +// +// See RestoreBackup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) RestoreBackupWithContext(ctx aws.Context, input *RestoreBackupInput, opts ...request.Option) (*RestoreBackupOutput, error) { + req, out := c.RestoreBackupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTagResource = "TagResource" + +// TagResourceRequest generates a "aws/request.Request" representing the +// client's request for the TagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TagResource for more information on using the TagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagResourceRequest method. +// req, resp := client.TagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/TagResource +func (c *CloudHSMV2) TagResourceRequest(input *TagResourceInput) (req *request.Request, output *TagResourceOutput) { + op := &request.Operation{ + Name: opTagResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TagResourceInput{} + } + + output = &TagResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// TagResource API operation for AWS CloudHSM V2. +// +// Adds or overwrites one or more tags for the specified AWS CloudHSM cluster. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation TagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/TagResource +func (c *CloudHSMV2) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + return out, req.Send() +} + +// TagResourceWithContext is the same as TagResource with the addition of +// the ability to pass a context and additional request options. +// +// See TagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) TagResourceWithContext(ctx aws.Context, input *TagResourceInput, opts ...request.Option) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUntagResource = "UntagResource" + +// UntagResourceRequest generates a "aws/request.Request" representing the +// client's request for the UntagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UntagResource for more information on using the UntagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UntagResourceRequest method. 
+// req, resp := client.UntagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/UntagResource +func (c *CloudHSMV2) UntagResourceRequest(input *UntagResourceInput) (req *request.Request, output *UntagResourceOutput) { + op := &request.Operation{ + Name: opUntagResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UntagResourceInput{} + } + + output = &UntagResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// UntagResource API operation for AWS CloudHSM V2. +// +// Removes the specified tag or tags from the specified AWS CloudHSM cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CloudHSM V2's +// API operation UntagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCloudHsmInternalFailureException "CloudHsmInternalFailureException" +// The request was rejected because of an AWS CloudHSM internal failure. The +// request can be retried. +// +// * ErrCodeCloudHsmServiceException "CloudHsmServiceException" +// The request was rejected because an error occurred. +// +// * ErrCodeCloudHsmResourceNotFoundException "CloudHsmResourceNotFoundException" +// The request was rejected because it refers to a resource that cannot be found. +// +// * ErrCodeCloudHsmInvalidRequestException "CloudHsmInvalidRequestException" +// The request was rejected because it is not a valid request. +// +// * ErrCodeCloudHsmAccessDeniedException "CloudHsmAccessDeniedException" +// The request was rejected because the requester does not have permission to +// perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28/UntagResource +func (c *CloudHSMV2) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + return out, req.Send() +} + +// UntagResourceWithContext is the same as UntagResource with the addition of +// the ability to pass a context and additional request options. +// +// See UntagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudHSMV2) UntagResourceWithContext(ctx aws.Context, input *UntagResourceInput, opts ...request.Option) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Contains information about a backup of an AWS CloudHSM cluster. +type Backup struct { + _ struct{} `type:"structure"` + + // The identifier (ID) of the backup. + // + // BackupId is a required field + BackupId *string `type:"string" required:"true"` + + // The state of the backup. + BackupState *string `type:"string" enum:"BackupState"` + + // The identifier (ID) of the cluster that was backed up. + ClusterId *string `type:"string"` + + CopyTimestamp *time.Time `type:"timestamp"` + + // The date and time when the backup was created. 
+ CreateTimestamp *time.Time `type:"timestamp"` + + // The date and time when the backup will be permanently deleted. + DeleteTimestamp *time.Time `type:"timestamp"` + + SourceBackup *string `type:"string"` + + SourceCluster *string `type:"string"` + + SourceRegion *string `type:"string"` +} + +// String returns the string representation +func (s Backup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Backup) GoString() string { + return s.String() +} + +// SetBackupId sets the BackupId field's value. +func (s *Backup) SetBackupId(v string) *Backup { + s.BackupId = &v + return s +} + +// SetBackupState sets the BackupState field's value. +func (s *Backup) SetBackupState(v string) *Backup { + s.BackupState = &v + return s +} + +// SetClusterId sets the ClusterId field's value. +func (s *Backup) SetClusterId(v string) *Backup { + s.ClusterId = &v + return s +} + +// SetCopyTimestamp sets the CopyTimestamp field's value. +func (s *Backup) SetCopyTimestamp(v time.Time) *Backup { + s.CopyTimestamp = &v + return s +} + +// SetCreateTimestamp sets the CreateTimestamp field's value. +func (s *Backup) SetCreateTimestamp(v time.Time) *Backup { + s.CreateTimestamp = &v + return s +} + +// SetDeleteTimestamp sets the DeleteTimestamp field's value. +func (s *Backup) SetDeleteTimestamp(v time.Time) *Backup { + s.DeleteTimestamp = &v + return s +} + +// SetSourceBackup sets the SourceBackup field's value. +func (s *Backup) SetSourceBackup(v string) *Backup { + s.SourceBackup = &v + return s +} + +// SetSourceCluster sets the SourceCluster field's value. +func (s *Backup) SetSourceCluster(v string) *Backup { + s.SourceCluster = &v + return s +} + +// SetSourceRegion sets the SourceRegion field's value. +func (s *Backup) SetSourceRegion(v string) *Backup { + s.SourceRegion = &v + return s +} + +// Contains one or more certificates or a certificate signing request (CSR). +type Certificates struct { + _ struct{} `type:"structure"` + + // The HSM hardware certificate issued (signed) by AWS CloudHSM. + AwsHardwareCertificate *string `type:"string"` + + // The cluster certificate issued (signed) by the issuing certificate authority + // (CA) of the cluster's owner. + ClusterCertificate *string `type:"string"` + + // The cluster's certificate signing request (CSR). The CSR exists only when + // the cluster's state is UNINITIALIZED. + ClusterCsr *string `type:"string"` + + // The HSM certificate issued (signed) by the HSM hardware. + HsmCertificate *string `type:"string"` + + // The HSM hardware certificate issued (signed) by the hardware manufacturer. + ManufacturerHardwareCertificate *string `type:"string"` +} + +// String returns the string representation +func (s Certificates) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Certificates) GoString() string { + return s.String() +} + +// SetAwsHardwareCertificate sets the AwsHardwareCertificate field's value. +func (s *Certificates) SetAwsHardwareCertificate(v string) *Certificates { + s.AwsHardwareCertificate = &v + return s +} + +// SetClusterCertificate sets the ClusterCertificate field's value. +func (s *Certificates) SetClusterCertificate(v string) *Certificates { + s.ClusterCertificate = &v + return s +} + +// SetClusterCsr sets the ClusterCsr field's value. +func (s *Certificates) SetClusterCsr(v string) *Certificates { + s.ClusterCsr = &v + return s +} + +// SetHsmCertificate sets the HsmCertificate field's value. 
+func (s *Certificates) SetHsmCertificate(v string) *Certificates { + s.HsmCertificate = &v + return s +} + +// SetManufacturerHardwareCertificate sets the ManufacturerHardwareCertificate field's value. +func (s *Certificates) SetManufacturerHardwareCertificate(v string) *Certificates { + s.ManufacturerHardwareCertificate = &v + return s +} + +// Contains information about an AWS CloudHSM cluster. +type Cluster struct { + _ struct{} `type:"structure"` + + // The cluster's backup policy. + BackupPolicy *string `type:"string" enum:"BackupPolicy"` + + // Contains one or more certificates or a certificate signing request (CSR). + Certificates *Certificates `type:"structure"` + + // The cluster's identifier (ID). + ClusterId *string `type:"string"` + + // The date and time when the cluster was created. + CreateTimestamp *time.Time `type:"timestamp"` + + // The type of HSM that the cluster contains. + HsmType *string `type:"string"` + + // Contains information about the HSMs in the cluster. + Hsms []*Hsm `type:"list"` + + // The default password for the cluster's Pre-Crypto Officer (PRECO) user. + PreCoPassword *string `min:"7" type:"string"` + + // The identifier (ID) of the cluster's security group. + SecurityGroup *string `type:"string"` + + // The identifier (ID) of the backup used to create the cluster. This value + // exists only when the cluster was created from a backup. + SourceBackupId *string `type:"string"` + + // The cluster's state. + State *string `type:"string" enum:"ClusterState"` + + // A description of the cluster's state. + StateMessage *string `type:"string"` + + // A map of the cluster's subnets and their corresponding Availability Zones. + SubnetMapping map[string]*string `type:"map"` + + // The identifier (ID) of the virtual private cloud (VPC) that contains the + // cluster. + VpcId *string `type:"string"` +} + +// String returns the string representation +func (s Cluster) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Cluster) GoString() string { + return s.String() +} + +// SetBackupPolicy sets the BackupPolicy field's value. +func (s *Cluster) SetBackupPolicy(v string) *Cluster { + s.BackupPolicy = &v + return s +} + +// SetCertificates sets the Certificates field's value. +func (s *Cluster) SetCertificates(v *Certificates) *Cluster { + s.Certificates = v + return s +} + +// SetClusterId sets the ClusterId field's value. +func (s *Cluster) SetClusterId(v string) *Cluster { + s.ClusterId = &v + return s +} + +// SetCreateTimestamp sets the CreateTimestamp field's value. +func (s *Cluster) SetCreateTimestamp(v time.Time) *Cluster { + s.CreateTimestamp = &v + return s +} + +// SetHsmType sets the HsmType field's value. +func (s *Cluster) SetHsmType(v string) *Cluster { + s.HsmType = &v + return s +} + +// SetHsms sets the Hsms field's value. +func (s *Cluster) SetHsms(v []*Hsm) *Cluster { + s.Hsms = v + return s +} + +// SetPreCoPassword sets the PreCoPassword field's value. +func (s *Cluster) SetPreCoPassword(v string) *Cluster { + s.PreCoPassword = &v + return s +} + +// SetSecurityGroup sets the SecurityGroup field's value. +func (s *Cluster) SetSecurityGroup(v string) *Cluster { + s.SecurityGroup = &v + return s +} + +// SetSourceBackupId sets the SourceBackupId field's value. +func (s *Cluster) SetSourceBackupId(v string) *Cluster { + s.SourceBackupId = &v + return s +} + +// SetState sets the State field's value. 
+func (s *Cluster) SetState(v string) *Cluster { + s.State = &v + return s +} + +// SetStateMessage sets the StateMessage field's value. +func (s *Cluster) SetStateMessage(v string) *Cluster { + s.StateMessage = &v + return s +} + +// SetSubnetMapping sets the SubnetMapping field's value. +func (s *Cluster) SetSubnetMapping(v map[string]*string) *Cluster { + s.SubnetMapping = v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *Cluster) SetVpcId(v string) *Cluster { + s.VpcId = &v + return s +} + +type CopyBackupToRegionInput struct { + _ struct{} `type:"structure"` + + // The ID of the backup that will be copied to the destination region. + // + // BackupId is a required field + BackupId *string `type:"string" required:"true"` + + // The AWS region that will contain your copied CloudHSM cluster backup. + // + // DestinationRegion is a required field + DestinationRegion *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CopyBackupToRegionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyBackupToRegionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CopyBackupToRegionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CopyBackupToRegionInput"} + if s.BackupId == nil { + invalidParams.Add(request.NewErrParamRequired("BackupId")) + } + if s.DestinationRegion == nil { + invalidParams.Add(request.NewErrParamRequired("DestinationRegion")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBackupId sets the BackupId field's value. +func (s *CopyBackupToRegionInput) SetBackupId(v string) *CopyBackupToRegionInput { + s.BackupId = &v + return s +} + +// SetDestinationRegion sets the DestinationRegion field's value. +func (s *CopyBackupToRegionInput) SetDestinationRegion(v string) *CopyBackupToRegionInput { + s.DestinationRegion = &v + return s +} + +type CopyBackupToRegionOutput struct { + _ struct{} `type:"structure"` + + // Information on the backup that will be copied to the destination region, + // including CreateTimestamp, SourceBackup, SourceCluster, and Source Region. + // CreateTimestamp of the destination backup will be the same as that of the + // source backup. + // + // You will need to use the sourceBackupID returned in this operation to use + // the DescribeBackups operation on the backup that will be copied to the destination + // region. + DestinationBackup *DestinationBackup `type:"structure"` +} + +// String returns the string representation +func (s CopyBackupToRegionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyBackupToRegionOutput) GoString() string { + return s.String() +} + +// SetDestinationBackup sets the DestinationBackup field's value. +func (s *CopyBackupToRegionOutput) SetDestinationBackup(v *DestinationBackup) *CopyBackupToRegionOutput { + s.DestinationBackup = v + return s +} + +type CreateClusterInput struct { + _ struct{} `type:"structure"` + + // The type of HSM to use in the cluster. Currently the only allowed value is + // hsm1.medium. + // + // HsmType is a required field + HsmType *string `type:"string" required:"true"` + + // The identifier (ID) of the cluster backup to restore. Use this value to restore + // the cluster from a backup instead of creating a new cluster. 
To find the + // backup ID, use DescribeBackups. + SourceBackupId *string `type:"string"` + + // The identifiers (IDs) of the subnets where you are creating the cluster. + // You must specify at least one subnet. If you specify multiple subnets, they + // must meet the following criteria: + // + // * All subnets must be in the same virtual private cloud (VPC). + // + // * You can specify only one subnet per Availability Zone. + // + // SubnetIds is a required field + SubnetIds []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s CreateClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateClusterInput"} + if s.HsmType == nil { + invalidParams.Add(request.NewErrParamRequired("HsmType")) + } + if s.SubnetIds == nil { + invalidParams.Add(request.NewErrParamRequired("SubnetIds")) + } + if s.SubnetIds != nil && len(s.SubnetIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SubnetIds", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetHsmType sets the HsmType field's value. +func (s *CreateClusterInput) SetHsmType(v string) *CreateClusterInput { + s.HsmType = &v + return s +} + +// SetSourceBackupId sets the SourceBackupId field's value. +func (s *CreateClusterInput) SetSourceBackupId(v string) *CreateClusterInput { + s.SourceBackupId = &v + return s +} + +// SetSubnetIds sets the SubnetIds field's value. +func (s *CreateClusterInput) SetSubnetIds(v []*string) *CreateClusterInput { + s.SubnetIds = v + return s +} + +type CreateClusterOutput struct { + _ struct{} `type:"structure"` + + // Information about the cluster that was created. + Cluster *Cluster `type:"structure"` +} + +// String returns the string representation +func (s CreateClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateClusterOutput) GoString() string { + return s.String() +} + +// SetCluster sets the Cluster field's value. +func (s *CreateClusterOutput) SetCluster(v *Cluster) *CreateClusterOutput { + s.Cluster = v + return s +} + +type CreateHsmInput struct { + _ struct{} `type:"structure"` + + // The Availability Zone where you are creating the HSM. To find the cluster's + // Availability Zones, use DescribeClusters. + // + // AvailabilityZone is a required field + AvailabilityZone *string `type:"string" required:"true"` + + // The identifier (ID) of the HSM's cluster. To find the cluster ID, use DescribeClusters. + // + // ClusterId is a required field + ClusterId *string `type:"string" required:"true"` + + // The HSM's IP address. If you specify an IP address, use an available address + // from the subnet that maps to the Availability Zone where you are creating + // the HSM. If you don't specify an IP address, one is chosen for you from that + // subnet. + IpAddress *string `type:"string"` +} + +// String returns the string representation +func (s CreateHsmInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateHsmInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateHsmInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateHsmInput"} + if s.AvailabilityZone == nil { + invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) + } + if s.ClusterId == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateHsmInput) SetAvailabilityZone(v string) *CreateHsmInput { + s.AvailabilityZone = &v + return s +} + +// SetClusterId sets the ClusterId field's value. +func (s *CreateHsmInput) SetClusterId(v string) *CreateHsmInput { + s.ClusterId = &v + return s +} + +// SetIpAddress sets the IpAddress field's value. +func (s *CreateHsmInput) SetIpAddress(v string) *CreateHsmInput { + s.IpAddress = &v + return s +} + +type CreateHsmOutput struct { + _ struct{} `type:"structure"` + + // Information about the HSM that was created. + Hsm *Hsm `type:"structure"` +} + +// String returns the string representation +func (s CreateHsmOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateHsmOutput) GoString() string { + return s.String() +} + +// SetHsm sets the Hsm field's value. +func (s *CreateHsmOutput) SetHsm(v *Hsm) *CreateHsmOutput { + s.Hsm = v + return s +} + +type DeleteBackupInput struct { + _ struct{} `type:"structure"` + + // The ID of the backup to be deleted. To find the ID of a backup, use the DescribeBackups + // operation. + // + // BackupId is a required field + BackupId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBackupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBackupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBackupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBackupInput"} + if s.BackupId == nil { + invalidParams.Add(request.NewErrParamRequired("BackupId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBackupId sets the BackupId field's value. +func (s *DeleteBackupInput) SetBackupId(v string) *DeleteBackupInput { + s.BackupId = &v + return s +} + +type DeleteBackupOutput struct { + _ struct{} `type:"structure"` + + // Information on the Backup object deleted. + Backup *Backup `type:"structure"` +} + +// String returns the string representation +func (s DeleteBackupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBackupOutput) GoString() string { + return s.String() +} + +// SetBackup sets the Backup field's value. +func (s *DeleteBackupOutput) SetBackup(v *Backup) *DeleteBackupOutput { + s.Backup = v + return s +} + +type DeleteClusterInput struct { + _ struct{} `type:"structure"` + + // The identifier (ID) of the cluster that you are deleting. To find the cluster + // ID, use DescribeClusters. 
+ // + // ClusterId is a required field + ClusterId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteClusterInput"} + if s.ClusterId == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClusterId sets the ClusterId field's value. +func (s *DeleteClusterInput) SetClusterId(v string) *DeleteClusterInput { + s.ClusterId = &v + return s +} + +type DeleteClusterOutput struct { + _ struct{} `type:"structure"` + + // Information about the cluster that was deleted. + Cluster *Cluster `type:"structure"` +} + +// String returns the string representation +func (s DeleteClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteClusterOutput) GoString() string { + return s.String() +} + +// SetCluster sets the Cluster field's value. +func (s *DeleteClusterOutput) SetCluster(v *Cluster) *DeleteClusterOutput { + s.Cluster = v + return s +} + +type DeleteHsmInput struct { + _ struct{} `type:"structure"` + + // The identifier (ID) of the cluster that contains the HSM that you are deleting. + // + // ClusterId is a required field + ClusterId *string `type:"string" required:"true"` + + // The identifier (ID) of the elastic network interface (ENI) of the HSM that + // you are deleting. + EniId *string `type:"string"` + + // The IP address of the elastic network interface (ENI) of the HSM that you + // are deleting. + EniIp *string `type:"string"` + + // The identifier (ID) of the HSM that you are deleting. + HsmId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteHsmInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteHsmInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteHsmInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteHsmInput"} + if s.ClusterId == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClusterId sets the ClusterId field's value. +func (s *DeleteHsmInput) SetClusterId(v string) *DeleteHsmInput { + s.ClusterId = &v + return s +} + +// SetEniId sets the EniId field's value. +func (s *DeleteHsmInput) SetEniId(v string) *DeleteHsmInput { + s.EniId = &v + return s +} + +// SetEniIp sets the EniIp field's value. +func (s *DeleteHsmInput) SetEniIp(v string) *DeleteHsmInput { + s.EniIp = &v + return s +} + +// SetHsmId sets the HsmId field's value. +func (s *DeleteHsmInput) SetHsmId(v string) *DeleteHsmInput { + s.HsmId = &v + return s +} + +type DeleteHsmOutput struct { + _ struct{} `type:"structure"` + + // The identifier (ID) of the HSM that was deleted. 
+ HsmId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteHsmOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteHsmOutput) GoString() string { + return s.String() +} + +// SetHsmId sets the HsmId field's value. +func (s *DeleteHsmOutput) SetHsmId(v string) *DeleteHsmOutput { + s.HsmId = &v + return s +} + +type DescribeBackupsInput struct { + _ struct{} `type:"structure"` + + // One or more filters to limit the items returned in the response. + // + // Use the backupIds filter to return only the specified backups. Specify backups + // by their backup identifier (ID). + // + // Use the sourceBackupIds filter to return only the backups created from a + // source backup. The sourceBackupID of a source backup is returned by the CopyBackupToRegion + // operation. + // + // Use the clusterIds filter to return only the backups for the specified clusters. + // Specify clusters by their cluster identifier (ID). + // + // Use the states filter to return only backups that match the specified state. + Filters map[string][]*string `type:"map"` + + // The maximum number of backups to return in the response. When there are more + // backups than the number you specify, the response contains a NextToken value. + MaxResults *int64 `min:"1" type:"integer"` + + // The NextToken value that you received in the previous response. Use this + // value to get more backups. + NextToken *string `type:"string"` + + SortAscending *bool `type:"boolean"` +} + +// String returns the string representation +func (s DescribeBackupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeBackupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeBackupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeBackupsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeBackupsInput) SetFilters(v map[string][]*string) *DescribeBackupsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeBackupsInput) SetMaxResults(v int64) *DescribeBackupsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeBackupsInput) SetNextToken(v string) *DescribeBackupsInput { + s.NextToken = &v + return s +} + +// SetSortAscending sets the SortAscending field's value. +func (s *DescribeBackupsInput) SetSortAscending(v bool) *DescribeBackupsInput { + s.SortAscending = &v + return s +} + +type DescribeBackupsOutput struct { + _ struct{} `type:"structure"` + + // A list of backups. + Backups []*Backup `type:"list"` + + // An opaque string that indicates that the response contains only a subset + // of backups. Use this value in a subsequent DescribeBackups request to get + // more backups. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeBackupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeBackupsOutput) GoString() string { + return s.String() +} + +// SetBackups sets the Backups field's value. 
+func (s *DescribeBackupsOutput) SetBackups(v []*Backup) *DescribeBackupsOutput { + s.Backups = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeBackupsOutput) SetNextToken(v string) *DescribeBackupsOutput { + s.NextToken = &v + return s +} + +type DescribeClustersInput struct { + _ struct{} `type:"structure"` + + // One or more filters to limit the items returned in the response. + // + // Use the clusterIds filter to return only the specified clusters. Specify + // clusters by their cluster identifier (ID). + // + // Use the vpcIds filter to return only the clusters in the specified virtual + // private clouds (VPCs). Specify VPCs by their VPC identifier (ID). + // + // Use the states filter to return only clusters that match the specified state. + Filters map[string][]*string `type:"map"` + + // The maximum number of clusters to return in the response. When there are + // more clusters than the number you specify, the response contains a NextToken + // value. + MaxResults *int64 `min:"1" type:"integer"` + + // The NextToken value that you received in the previous response. Use this + // value to get more clusters. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeClustersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeClustersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeClustersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeClustersInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeClustersInput) SetFilters(v map[string][]*string) *DescribeClustersInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeClustersInput) SetMaxResults(v int64) *DescribeClustersInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeClustersInput) SetNextToken(v string) *DescribeClustersInput { + s.NextToken = &v + return s +} + +type DescribeClustersOutput struct { + _ struct{} `type:"structure"` + + // A list of clusters. + Clusters []*Cluster `type:"list"` + + // An opaque string that indicates that the response contains only a subset + // of clusters. Use this value in a subsequent DescribeClusters request to get + // more clusters. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeClustersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeClustersOutput) GoString() string { + return s.String() +} + +// SetClusters sets the Clusters field's value. +func (s *DescribeClustersOutput) SetClusters(v []*Cluster) *DescribeClustersOutput { + s.Clusters = v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeClustersOutput) SetNextToken(v string) *DescribeClustersOutput { + s.NextToken = &v + return s +} + +type DestinationBackup struct { + _ struct{} `type:"structure"` + + CreateTimestamp *time.Time `type:"timestamp"` + + SourceBackup *string `type:"string"` + + SourceCluster *string `type:"string"` + + SourceRegion *string `type:"string"` +} + +// String returns the string representation +func (s DestinationBackup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DestinationBackup) GoString() string { + return s.String() +} + +// SetCreateTimestamp sets the CreateTimestamp field's value. +func (s *DestinationBackup) SetCreateTimestamp(v time.Time) *DestinationBackup { + s.CreateTimestamp = &v + return s +} + +// SetSourceBackup sets the SourceBackup field's value. +func (s *DestinationBackup) SetSourceBackup(v string) *DestinationBackup { + s.SourceBackup = &v + return s +} + +// SetSourceCluster sets the SourceCluster field's value. +func (s *DestinationBackup) SetSourceCluster(v string) *DestinationBackup { + s.SourceCluster = &v + return s +} + +// SetSourceRegion sets the SourceRegion field's value. +func (s *DestinationBackup) SetSourceRegion(v string) *DestinationBackup { + s.SourceRegion = &v + return s +} + +// Contains information about a hardware security module (HSM) in an AWS CloudHSM +// cluster. +type Hsm struct { + _ struct{} `type:"structure"` + + // The Availability Zone that contains the HSM. + AvailabilityZone *string `type:"string"` + + // The identifier (ID) of the cluster that contains the HSM. + ClusterId *string `type:"string"` + + // The identifier (ID) of the HSM's elastic network interface (ENI). + EniId *string `type:"string"` + + // The IP address of the HSM's elastic network interface (ENI). + EniIp *string `type:"string"` + + // The HSM's identifier (ID). + // + // HsmId is a required field + HsmId *string `type:"string" required:"true"` + + // The HSM's state. + State *string `type:"string" enum:"HsmState"` + + // A description of the HSM's state. + StateMessage *string `type:"string"` + + // The subnet that contains the HSM's elastic network interface (ENI). + SubnetId *string `type:"string"` +} + +// String returns the string representation +func (s Hsm) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Hsm) GoString() string { + return s.String() +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *Hsm) SetAvailabilityZone(v string) *Hsm { + s.AvailabilityZone = &v + return s +} + +// SetClusterId sets the ClusterId field's value. +func (s *Hsm) SetClusterId(v string) *Hsm { + s.ClusterId = &v + return s +} + +// SetEniId sets the EniId field's value. +func (s *Hsm) SetEniId(v string) *Hsm { + s.EniId = &v + return s +} + +// SetEniIp sets the EniIp field's value. +func (s *Hsm) SetEniIp(v string) *Hsm { + s.EniIp = &v + return s +} + +// SetHsmId sets the HsmId field's value. +func (s *Hsm) SetHsmId(v string) *Hsm { + s.HsmId = &v + return s +} + +// SetState sets the State field's value. +func (s *Hsm) SetState(v string) *Hsm { + s.State = &v + return s +} + +// SetStateMessage sets the StateMessage field's value. +func (s *Hsm) SetStateMessage(v string) *Hsm { + s.StateMessage = &v + return s +} + +// SetSubnetId sets the SubnetId field's value. 
+func (s *Hsm) SetSubnetId(v string) *Hsm { + s.SubnetId = &v + return s +} + +type InitializeClusterInput struct { + _ struct{} `type:"structure"` + + // The identifier (ID) of the cluster that you are claiming. To find the cluster + // ID, use DescribeClusters. + // + // ClusterId is a required field + ClusterId *string `type:"string" required:"true"` + + // The cluster certificate issued (signed) by your issuing certificate authority + // (CA). The certificate must be in PEM format and can contain a maximum of + // 5000 characters. + // + // SignedCert is a required field + SignedCert *string `type:"string" required:"true"` + + // The issuing certificate of the issuing certificate authority (CA) that issued + // (signed) the cluster certificate. This can be a root (self-signed) certificate + // or a certificate chain that begins with the certificate that issued the cluster + // certificate and ends with a root certificate. The certificate or certificate + // chain must be in PEM format and can contain a maximum of 5000 characters. + // + // TrustAnchor is a required field + TrustAnchor *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s InitializeClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InitializeClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InitializeClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InitializeClusterInput"} + if s.ClusterId == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterId")) + } + if s.SignedCert == nil { + invalidParams.Add(request.NewErrParamRequired("SignedCert")) + } + if s.TrustAnchor == nil { + invalidParams.Add(request.NewErrParamRequired("TrustAnchor")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClusterId sets the ClusterId field's value. +func (s *InitializeClusterInput) SetClusterId(v string) *InitializeClusterInput { + s.ClusterId = &v + return s +} + +// SetSignedCert sets the SignedCert field's value. +func (s *InitializeClusterInput) SetSignedCert(v string) *InitializeClusterInput { + s.SignedCert = &v + return s +} + +// SetTrustAnchor sets the TrustAnchor field's value. +func (s *InitializeClusterInput) SetTrustAnchor(v string) *InitializeClusterInput { + s.TrustAnchor = &v + return s +} + +type InitializeClusterOutput struct { + _ struct{} `type:"structure"` + + // The cluster's state. + State *string `type:"string" enum:"ClusterState"` + + // A description of the cluster's state. + StateMessage *string `type:"string"` +} + +// String returns the string representation +func (s InitializeClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InitializeClusterOutput) GoString() string { + return s.String() +} + +// SetState sets the State field's value. +func (s *InitializeClusterOutput) SetState(v string) *InitializeClusterOutput { + s.State = &v + return s +} + +// SetStateMessage sets the StateMessage field's value. +func (s *InitializeClusterOutput) SetStateMessage(v string) *InitializeClusterOutput { + s.StateMessage = &v + return s +} + +type ListTagsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of tags to return in the response. When there are more + // tags than the number you specify, the response contains a NextToken value. 
+ MaxResults *int64 `min:"1" type:"integer"` + + // The NextToken value that you received in the previous response. Use this + // value to get more tags. + NextToken *string `type:"string"` + + // The cluster identifier (ID) for the cluster whose tags you are getting. To + // find the cluster ID, use DescribeClusters. + // + // ResourceId is a required field + ResourceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ListTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListTagsInput) SetMaxResults(v int64) *ListTagsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListTagsInput) SetNextToken(v string) *ListTagsInput { + s.NextToken = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *ListTagsInput) SetResourceId(v string) *ListTagsInput { + s.ResourceId = &v + return s +} + +type ListTagsOutput struct { + _ struct{} `type:"structure"` + + // An opaque string that indicates that the response contains only a subset + // of tags. Use this value in a subsequent ListTags request to get more tags. + NextToken *string `type:"string"` + + // A list of tags. + // + // TagList is a required field + TagList []*Tag `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s ListTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListTagsOutput) SetNextToken(v string) *ListTagsOutput { + s.NextToken = &v + return s +} + +// SetTagList sets the TagList field's value. +func (s *ListTagsOutput) SetTagList(v []*Tag) *ListTagsOutput { + s.TagList = v + return s +} + +type RestoreBackupInput struct { + _ struct{} `type:"structure"` + + // The ID of the backup to be restored. To find the ID of a backup, use the + // DescribeBackups operation. + // + // BackupId is a required field + BackupId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s RestoreBackupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreBackupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RestoreBackupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreBackupInput"} + if s.BackupId == nil { + invalidParams.Add(request.NewErrParamRequired("BackupId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBackupId sets the BackupId field's value. 
+func (s *RestoreBackupInput) SetBackupId(v string) *RestoreBackupInput { + s.BackupId = &v + return s +} + +type RestoreBackupOutput struct { + _ struct{} `type:"structure"` + + // Information on the Backup object created. + Backup *Backup `type:"structure"` +} + +// String returns the string representation +func (s RestoreBackupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreBackupOutput) GoString() string { + return s.String() +} + +// SetBackup sets the Backup field's value. +func (s *RestoreBackupOutput) SetBackup(v *Backup) *RestoreBackupOutput { + s.Backup = v + return s +} + +// Contains a tag. A tag is a key-value pair. +type Tag struct { + _ struct{} `type:"structure"` + + // The key of the tag. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // The value of the tag. + // + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type TagResourceInput struct { + _ struct{} `type:"structure"` + + // The cluster identifier (ID) for the cluster that you are tagging. To find + // the cluster ID, use DescribeClusters. + // + // ResourceId is a required field + ResourceId *string `type:"string" required:"true"` + + // A list of one or more tags. + // + // TagList is a required field + TagList []*Tag `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s TagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagResourceInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.TagList == nil { + invalidParams.Add(request.NewErrParamRequired("TagList")) + } + if s.TagList != nil && len(s.TagList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TagList", 1)) + } + if s.TagList != nil { + for i, v := range s.TagList { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TagList", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. 
+func (s *TagResourceInput) SetResourceId(v string) *TagResourceInput { + s.ResourceId = &v + return s +} + +// SetTagList sets the TagList field's value. +func (s *TagResourceInput) SetTagList(v []*Tag) *TagResourceInput { + s.TagList = v + return s +} + +type TagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s TagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceOutput) GoString() string { + return s.String() +} + +type UntagResourceInput struct { + _ struct{} `type:"structure"` + + // The cluster identifier (ID) for the cluster whose tags you are removing. + // To find the cluster ID, use DescribeClusters. + // + // ResourceId is a required field + ResourceId *string `type:"string" required:"true"` + + // A list of one or more tag keys for the tags that you are removing. Specify + // only the tag keys, not the tag values. + // + // TagKeyList is a required field + TagKeyList []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s UntagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagResourceInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.TagKeyList == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeyList")) + } + if s.TagKeyList != nil && len(s.TagKeyList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TagKeyList", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. +func (s *UntagResourceInput) SetResourceId(v string) *UntagResourceInput { + s.ResourceId = &v + return s +} + +// SetTagKeyList sets the TagKeyList field's value. 
+func (s *UntagResourceInput) SetTagKeyList(v []*string) *UntagResourceInput { + s.TagKeyList = v + return s +} + +type UntagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UntagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceOutput) GoString() string { + return s.String() +} + +const ( + // BackupPolicyDefault is a BackupPolicy enum value + BackupPolicyDefault = "DEFAULT" +) + +const ( + // BackupStateCreateInProgress is a BackupState enum value + BackupStateCreateInProgress = "CREATE_IN_PROGRESS" + + // BackupStateReady is a BackupState enum value + BackupStateReady = "READY" + + // BackupStateDeleted is a BackupState enum value + BackupStateDeleted = "DELETED" + + // BackupStatePendingDeletion is a BackupState enum value + BackupStatePendingDeletion = "PENDING_DELETION" +) + +const ( + // ClusterStateCreateInProgress is a ClusterState enum value + ClusterStateCreateInProgress = "CREATE_IN_PROGRESS" + + // ClusterStateUninitialized is a ClusterState enum value + ClusterStateUninitialized = "UNINITIALIZED" + + // ClusterStateInitializeInProgress is a ClusterState enum value + ClusterStateInitializeInProgress = "INITIALIZE_IN_PROGRESS" + + // ClusterStateInitialized is a ClusterState enum value + ClusterStateInitialized = "INITIALIZED" + + // ClusterStateActive is a ClusterState enum value + ClusterStateActive = "ACTIVE" + + // ClusterStateUpdateInProgress is a ClusterState enum value + ClusterStateUpdateInProgress = "UPDATE_IN_PROGRESS" + + // ClusterStateDeleteInProgress is a ClusterState enum value + ClusterStateDeleteInProgress = "DELETE_IN_PROGRESS" + + // ClusterStateDeleted is a ClusterState enum value + ClusterStateDeleted = "DELETED" + + // ClusterStateDegraded is a ClusterState enum value + ClusterStateDegraded = "DEGRADED" +) + +const ( + // HsmStateCreateInProgress is a HsmState enum value + HsmStateCreateInProgress = "CREATE_IN_PROGRESS" + + // HsmStateActive is a HsmState enum value + HsmStateActive = "ACTIVE" + + // HsmStateDegraded is a HsmState enum value + HsmStateDegraded = "DEGRADED" + + // HsmStateDeleteInProgress is a HsmState enum value + HsmStateDeleteInProgress = "DELETE_IN_PROGRESS" + + // HsmStateDeleted is a HsmState enum value + HsmStateDeleted = "DELETED" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/doc.go b/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/doc.go new file mode 100644 index 00000000000..fd7b0ef8e83 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/doc.go @@ -0,0 +1,29 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package cloudhsmv2 provides the client and types for making API +// requests to AWS CloudHSM V2. +// +// For more information about AWS CloudHSM, see AWS CloudHSM (http://aws.amazon.com/cloudhsm/) +// and the AWS CloudHSM User Guide (http://docs.aws.amazon.com/cloudhsm/latest/userguide/). +// +// See https://docs.aws.amazon.com/goto/WebAPI/cloudhsmv2-2017-04-28 for more information on this service. +// +// See cloudhsmv2 package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/cloudhsmv2/ +// +// Using the Client +// +// To contact AWS CloudHSM V2 with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. 
+// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS CloudHSM V2 client CloudHSMV2 for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/cloudhsmv2/#New +package cloudhsmv2 diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/errors.go b/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/errors.go new file mode 100644 index 00000000000..542f2f40481 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/errors.go @@ -0,0 +1,38 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package cloudhsmv2 + +const ( + + // ErrCodeCloudHsmAccessDeniedException for service response error code + // "CloudHsmAccessDeniedException". + // + // The request was rejected because the requester does not have permission to + // perform the requested operation. + ErrCodeCloudHsmAccessDeniedException = "CloudHsmAccessDeniedException" + + // ErrCodeCloudHsmInternalFailureException for service response error code + // "CloudHsmInternalFailureException". + // + // The request was rejected because of an AWS CloudHSM internal failure. The + // request can be retried. + ErrCodeCloudHsmInternalFailureException = "CloudHsmInternalFailureException" + + // ErrCodeCloudHsmInvalidRequestException for service response error code + // "CloudHsmInvalidRequestException". + // + // The request was rejected because it is not a valid request. + ErrCodeCloudHsmInvalidRequestException = "CloudHsmInvalidRequestException" + + // ErrCodeCloudHsmResourceNotFoundException for service response error code + // "CloudHsmResourceNotFoundException". + // + // The request was rejected because it refers to a resource that cannot be found. + ErrCodeCloudHsmResourceNotFoundException = "CloudHsmResourceNotFoundException" + + // ErrCodeCloudHsmServiceException for service response error code + // "CloudHsmServiceException". + // + // The request was rejected because an error occurred. + ErrCodeCloudHsmServiceException = "CloudHsmServiceException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/service.go new file mode 100644 index 00000000000..c86db6ae7b0 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudhsmv2/service.go @@ -0,0 +1,100 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package cloudhsmv2 + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// CloudHSMV2 provides the API operation methods for making requests to +// AWS CloudHSM V2. See this package's package overview docs +// for details on the service. +// +// CloudHSMV2 methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. 
+type CloudHSMV2 struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "cloudhsmv2" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CloudHSM V2" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the CloudHSMV2 client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a CloudHSMV2 client from just a session. +// svc := cloudhsmv2.New(mySession) +// +// // Create a CloudHSMV2 client with additional configuration +// svc := cloudhsmv2.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *CloudHSMV2 { + c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "cloudhsm" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *CloudHSMV2 { + svc := &CloudHSMV2{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2017-04-28", + JSONVersion: "1.1", + TargetPrefix: "BaldrApiService", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a CloudHSMV2 operation and runs any +// custom request initialization. +func (c *CloudHSMV2) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudsearch/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudsearch/api.go index 1440c67821f..dbbf3a16aed 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudsearch/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudsearch/api.go @@ -14,8 +14,8 @@ const opBuildSuggesters = "BuildSuggesters" // BuildSuggestersRequest generates a "aws/request.Request" representing the // client's request for the BuildSuggesters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -100,8 +100,8 @@ const opCreateDomain = "CreateDomain" // CreateDomainRequest generates a "aws/request.Request" representing the // client's request for the CreateDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -185,8 +185,8 @@ const opDefineAnalysisScheme = "DefineAnalysisScheme" // DefineAnalysisSchemeRequest generates a "aws/request.Request" representing the // client's request for the DefineAnalysisScheme operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -278,8 +278,8 @@ const opDefineExpression = "DefineExpression" // DefineExpressionRequest generates a "aws/request.Request" representing the // client's request for the DefineExpression operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -371,8 +371,8 @@ const opDefineIndexField = "DefineIndexField" // DefineIndexFieldRequest generates a "aws/request.Request" representing the // client's request for the DefineIndexField operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -468,8 +468,8 @@ const opDefineSuggester = "DefineSuggester" // DefineSuggesterRequest generates a "aws/request.Request" representing the // client's request for the DefineSuggester operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -563,8 +563,8 @@ const opDeleteAnalysisScheme = "DeleteAnalysisScheme" // DeleteAnalysisSchemeRequest generates a "aws/request.Request" representing the // client's request for the DeleteAnalysisScheme operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -652,8 +652,8 @@ const opDeleteDomain = "DeleteDomain" // DeleteDomainRequest generates a "aws/request.Request" representing the // client's request for the DeleteDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -735,8 +735,8 @@ const opDeleteExpression = "DeleteExpression" // DeleteExpressionRequest generates a "aws/request.Request" representing the // client's request for the DeleteExpression operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -824,8 +824,8 @@ const opDeleteIndexField = "DeleteIndexField" // DeleteIndexFieldRequest generates a "aws/request.Request" representing the // client's request for the DeleteIndexField operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -913,8 +913,8 @@ const opDeleteSuggester = "DeleteSuggester" // DeleteSuggesterRequest generates a "aws/request.Request" representing the // client's request for the DeleteSuggester operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1002,8 +1002,8 @@ const opDescribeAnalysisSchemes = "DescribeAnalysisSchemes" // DescribeAnalysisSchemesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAnalysisSchemes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1092,8 +1092,8 @@ const opDescribeAvailabilityOptions = "DescribeAvailabilityOptions" // DescribeAvailabilityOptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAvailabilityOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1189,8 +1189,8 @@ const opDescribeDomains = "DescribeDomains" // DescribeDomainsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDomains operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1274,8 +1274,8 @@ const opDescribeExpressions = "DescribeExpressions" // DescribeExpressionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeExpressions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1363,8 +1363,8 @@ const opDescribeIndexFields = "DescribeIndexFields" // DescribeIndexFieldsRequest generates a "aws/request.Request" representing the // client's request for the DescribeIndexFields operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1452,8 +1452,8 @@ const opDescribeScalingParameters = "DescribeScalingParameters" // DescribeScalingParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeScalingParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1539,8 +1539,8 @@ const opDescribeServiceAccessPolicies = "DescribeServiceAccessPolicies" // DescribeServiceAccessPoliciesRequest generates a "aws/request.Request" representing the // client's request for the DescribeServiceAccessPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1628,8 +1628,8 @@ const opDescribeSuggesters = "DescribeSuggesters" // DescribeSuggestersRequest generates a "aws/request.Request" representing the // client's request for the DescribeSuggesters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1718,8 +1718,8 @@ const opIndexDocuments = "IndexDocuments" // IndexDocumentsRequest generates a "aws/request.Request" representing the // client's request for the IndexDocuments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1804,8 +1804,8 @@ const opListDomainNames = "ListDomainNames" // ListDomainNamesRequest generates a "aws/request.Request" representing the // client's request for the ListDomainNames operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1880,8 +1880,8 @@ const opUpdateAvailabilityOptions = "UpdateAvailabilityOptions" // UpdateAvailabilityOptionsRequest generates a "aws/request.Request" representing the // client's request for the UpdateAvailabilityOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1978,8 +1978,8 @@ const opUpdateScalingParameters = "UpdateScalingParameters" // UpdateScalingParametersRequest generates a "aws/request.Request" representing the // client's request for the UpdateScalingParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2075,8 +2075,8 @@ const opUpdateServiceAccessPolicies = "UpdateServiceAccessPolicies" // UpdateServiceAccessPoliciesRequest generates a "aws/request.Request" representing the // client's request for the UpdateServiceAccessPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5477,7 +5477,7 @@ type OptionStatus struct { // A timestamp for when this option was created. 
// // CreationDate is a required field - CreationDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreationDate *time.Time `type:"timestamp" required:"true"` // Indicates that the option will be deleted once processing is complete. PendingDeletion *bool `type:"boolean"` @@ -5499,7 +5499,7 @@ type OptionStatus struct { // A timestamp for when this option was last updated. // // UpdateDate is a required field - UpdateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + UpdateDate *time.Time `type:"timestamp" required:"true"` // A unique integer that indicates when this option was last updated. UpdateVersion *int64 `type:"integer"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudsearch/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudsearch/service.go index 6ef7a62d4a1..850bc137051 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudsearch/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudsearch/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "cloudsearch" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "cloudsearch" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CloudSearch" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CloudSearch client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/api.go index 646949fa2df..bde82b40d98 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/api.go @@ -15,8 +15,8 @@ const opAddTags = "AddTags" // AddTagsRequest generates a "aws/request.Request" representing the // client's request for the AddTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -73,11 +73,11 @@ func (c *CloudTrail) AddTagsRequest(input *AddTagsInput) (req *request.Request, // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // This exception is thrown when the specified resource is not found. // -// * ErrCodeARNInvalidException "ARNInvalidException" +// * ErrCodeARNInvalidException "CloudTrailARNInvalidException" // This exception is thrown when an operation is called with an invalid trail // ARN. 
The format of a trail ARN is: // -// arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail +// arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // * ErrCodeResourceTypeNotSupportedException "ResourceTypeNotSupportedException" // This exception is thrown when the specified resource type is not supported @@ -113,6 +113,12 @@ func (c *CloudTrail) AddTagsRequest(input *AddTagsInput) (req *request.Request, // * ErrCodeOperationNotPermittedException "OperationNotPermittedException" // This exception is thrown when the requested operation is not permitted. // +// * ErrCodeNotOrganizationMasterAccountException "NotOrganizationMasterAccountException" +// This exception is thrown when the AWS account making the request to create +// or update an organization trail is not the master account for an organization +// in AWS Organizations. For more information, see Prepare For Creating a Trail +// For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01/AddTags func (c *CloudTrail) AddTags(input *AddTagsInput) (*AddTagsOutput, error) { req, out := c.AddTagsRequest(input) @@ -139,8 +145,8 @@ const opCreateTrail = "CreateTrail" // CreateTrailRequest generates a "aws/request.Request" representing the // client's request for the CreateTrail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -271,6 +277,35 @@ func (c *CloudTrail) CreateTrailRequest(input *CreateTrailInput) (req *request.R // * ErrCodeOperationNotPermittedException "OperationNotPermittedException" // This exception is thrown when the requested operation is not permitted. // +// * ErrCodeAccessNotEnabledException "CloudTrailAccessNotEnabledException" +// This exception is thrown when trusted access has not been enabled between +// AWS CloudTrail and AWS Organizations. For more information, see Enabling +// Trusted Access with Other AWS Services (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html) +// and Prepare For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeInsufficientDependencyServiceAccessPermissionException "InsufficientDependencyServiceAccessPermissionException" +// This exception is thrown when the IAM user or role that is used to create +// the organization trail is lacking one or more required permissions for creating +// an organization trail in a required service. For more information, see Prepare +// For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeNotOrganizationMasterAccountException "NotOrganizationMasterAccountException" +// This exception is thrown when the AWS account making the request to create +// or update an organization trail is not the master account for an organization +// in AWS Organizations. 
For more information, see Prepare For Creating a Trail +// For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeOrganizationsNotInUseException "OrganizationsNotInUseException" +// This exception is thrown when the request is made from an AWS account that +// is not a member of an organization. To make this request, sign in using the +// credentials of an account that belongs to an organization. +// +// * ErrCodeOrganizationNotInAllFeaturesModeException "OrganizationNotInAllFeaturesModeException" +// This exception is thrown when AWS Organizations is not configured to support +// all features. All features must be enabled in AWS Organization to support +// creating an organization trail. For more information, see Prepare For Creating +// a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01/CreateTrail func (c *CloudTrail) CreateTrail(input *CreateTrailInput) (*CreateTrailOutput, error) { req, out := c.CreateTrailRequest(input) @@ -297,8 +332,8 @@ const opDeleteTrail = "DeleteTrail" // DeleteTrailRequest generates a "aws/request.Request" representing the // client's request for the DeleteTrail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -372,6 +407,24 @@ func (c *CloudTrail) DeleteTrailRequest(input *DeleteTrailInput) (req *request.R // This exception is thrown when an operation is called on a trail from a region // other than the region in which the trail was created. // +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// This exception is thrown when the requested operation is not supported. +// +// * ErrCodeOperationNotPermittedException "OperationNotPermittedException" +// This exception is thrown when the requested operation is not permitted. +// +// * ErrCodeNotOrganizationMasterAccountException "NotOrganizationMasterAccountException" +// This exception is thrown when the AWS account making the request to create +// or update an organization trail is not the master account for an organization +// in AWS Organizations. For more information, see Prepare For Creating a Trail +// For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeInsufficientDependencyServiceAccessPermissionException "InsufficientDependencyServiceAccessPermissionException" +// This exception is thrown when the IAM user or role that is used to create +// the organization trail is lacking one or more required permissions for creating +// an organization trail in a required service. For more information, see Prepare +// For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01/DeleteTrail func (c *CloudTrail) DeleteTrail(input *DeleteTrailInput) (*DeleteTrailOutput, error) { req, out := c.DeleteTrailRequest(input) @@ -398,8 +451,8 @@ const opDescribeTrails = "DescribeTrails" // DescribeTrailsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTrails operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -481,8 +534,8 @@ const opGetEventSelectors = "GetEventSelectors" // GetEventSelectorsRequest generates a "aws/request.Request" representing the // client's request for the GetEventSelectors operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -524,12 +577,13 @@ func (c *CloudTrail) GetEventSelectorsRequest(input *GetEventSelectorsInput) (re // Describes the settings for the event selectors that you configured for your // trail. The information returned for your event selectors includes the following: // -// * The S3 objects that you are logging for data events. +// * If your event selector includes read-only events, write-only events, +// or all events. This applies to both management events and data events. // // * If your event selector includes management events. // -// * If your event selector includes read-only events, write-only events, -// or all. +// * If your event selector includes data events, the Amazon S3 objects or +// AWS Lambda functions that you are logging for data events. // // For more information, see Logging Data and Management Events for Trails // (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html) @@ -594,8 +648,8 @@ const opGetTrailStatus = "GetTrailStatus" // GetTrailStatusRequest generates a "aws/request.Request" representing the // client's request for the GetTrailStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -693,8 +747,8 @@ const opListPublicKeys = "ListPublicKeys" // ListPublicKeysRequest generates a "aws/request.Request" representing the // client's request for the ListPublicKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
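For illustration, the revised `GetEventSelectors` behavior documented in the hunk above (read/write scope, management events, and S3 or Lambda data resources) can be exercised with a short Go sketch like the one below. It assumes a configured session; the trail name `my-trail` is a placeholder, and the `EventSelectors` field on the output is assumed from the SDK's usual request/response shape rather than shown explicitly in this diff.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudtrail"
)

func main() {
	// Region and credentials come from the environment / shared config.
	sess := session.Must(session.NewSession())
	svc := cloudtrail.New(sess)

	// "my-trail" is a placeholder trail name.
	out, err := svc.GetEventSelectors(&cloudtrail.GetEventSelectorsInput{
		TrailName: aws.String("my-trail"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Each selector reports its read/write scope and any S3 or Lambda
	// data resources it logs, per the updated documentation.
	for _, sel := range out.EventSelectors {
		fmt.Printf("read/write type: %s\n", aws.StringValue(sel.ReadWriteType))
		for _, dr := range sel.DataResources {
			fmt.Printf("  data resource %s: %v\n",
				aws.StringValue(dr.Type), aws.StringValueSlice(dr.Values))
		}
	}
}
```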
@@ -789,8 +843,8 @@ const opListTags = "ListTags" // ListTagsRequest generates a "aws/request.Request" representing the // client's request for the ListTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -842,11 +896,11 @@ func (c *CloudTrail) ListTagsRequest(input *ListTagsInput) (req *request.Request // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // This exception is thrown when the specified resource is not found. // -// * ErrCodeARNInvalidException "ARNInvalidException" +// * ErrCodeARNInvalidException "CloudTrailARNInvalidException" // This exception is thrown when an operation is called with an invalid trail // ARN. The format of a trail ARN is: // -// arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail +// arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // * ErrCodeResourceTypeNotSupportedException "ResourceTypeNotSupportedException" // This exception is thrown when the specified resource type is not supported @@ -903,8 +957,8 @@ const opLookupEvents = "LookupEvents" // LookupEventsRequest generates a "aws/request.Request" representing the // client's request for the LookupEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -949,10 +1003,11 @@ func (c *CloudTrail) LookupEventsRequest(input *LookupEventsInput) (req *request // LookupEvents API operation for AWS CloudTrail. // -// Looks up API activity events captured by CloudTrail that create, update, -// or delete resources in your account. Events for a region can be looked up -// for the times in which you had CloudTrail turned on in that region during -// the last seven days. Lookup supports the following attributes: +// Looks up management events (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-management-events) +// captured by CloudTrail. Events for a region can be looked up in that region +// during the last 90 days. Lookup supports the following attributes: +// +// * AWS access key // // * Event ID // @@ -960,13 +1015,15 @@ func (c *CloudTrail) LookupEventsRequest(input *LookupEventsInput) (req *request // // * Event source // +// * Read only +// // * Resource name // // * Resource type // // * User name // -// All attributes are optional. The default number of results returned is 10, +// All attributes are optional. The default number of results returned is 50, // with a maximum of 50 possible. The response includes a token that you can // use to get the next page of results. // @@ -1074,8 +1131,8 @@ const opPutEventSelectors = "PutEventSelectors" // PutEventSelectorsRequest generates a "aws/request.Request" representing the // client's request for the PutEventSelectors operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1114,10 +1171,13 @@ func (c *CloudTrail) PutEventSelectorsRequest(input *PutEventSelectorsInput) (re // PutEventSelectors API operation for AWS CloudTrail. // -// Configures an event selector for your trail. Use event selectors to specify -// whether you want your trail to log management and/or data events. When an -// event occurs in your account, CloudTrail evaluates the event selectors in -// all trails. For each trail, if the event matches any event selector, the +// Configures an event selector for your trail. Use event selectors to further +// specify the management and data event settings for your trail. By default, +// trails created without specific event selectors will be configured to log +// all read and write management events, and no data events. +// +// When an event occurs in your account, CloudTrail evaluates the event selectors +// in all trails. For each trail, if the event matches any event selector, the // trail processes and logs the event. If the event doesn't match any event // selector, the trail doesn't log the event. // @@ -1141,6 +1201,7 @@ func (c *CloudTrail) PutEventSelectorsRequest(input *PutEventSelectorsInput) (re // // You can configure up to five event selectors for each trail. For more information, // see Logging Data and Management Events for Trails (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html) +// and Limits in AWS CloudTrail (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html) // in the AWS CloudTrail User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1176,12 +1237,21 @@ func (c *CloudTrail) PutEventSelectorsRequest(input *PutEventSelectorsInput) (re // // * ErrCodeInvalidEventSelectorsException "InvalidEventSelectorsException" // This exception is thrown when the PutEventSelectors operation is called with -// an invalid number of event selectors, data resources, or an invalid value -// for a parameter: +// a number of event selectors or data resources that is not valid. The combination +// of event selectors and data resources is not valid. A trail can have up to +// 5 event selectors. A trail is limited to 250 data resources. These data resources +// can be distributed across event selectors, but the overall total cannot exceed +// 250. +// +// You can: // // * Specify a valid number of event selectors (1 to 5) for a trail. // // * Specify a valid number of data resources (1 to 250) for an event selector. +// The limit of number of resources on an individual event selector is configurable +// up to 250. However, this upper limit is allowed only if the total number +// of data resources does not exceed 250 across all event selectors for a +// trail. // // * Specify a valid value for a parameter. For example, specifying the ReadWriteType // parameter with a value of read-only is invalid. @@ -1192,6 +1262,18 @@ func (c *CloudTrail) PutEventSelectorsRequest(input *PutEventSelectorsInput) (re // * ErrCodeOperationNotPermittedException "OperationNotPermittedException" // This exception is thrown when the requested operation is not permitted. 
// +// * ErrCodeNotOrganizationMasterAccountException "NotOrganizationMasterAccountException" +// This exception is thrown when the AWS account making the request to create +// or update an organization trail is not the master account for an organization +// in AWS Organizations. For more information, see Prepare For Creating a Trail +// For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeInsufficientDependencyServiceAccessPermissionException "InsufficientDependencyServiceAccessPermissionException" +// This exception is thrown when the IAM user or role that is used to create +// the organization trail is lacking one or more required permissions for creating +// an organization trail in a required service. For more information, see Prepare +// For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01/PutEventSelectors func (c *CloudTrail) PutEventSelectors(input *PutEventSelectorsInput) (*PutEventSelectorsOutput, error) { req, out := c.PutEventSelectorsRequest(input) @@ -1218,8 +1300,8 @@ const opRemoveTags = "RemoveTags" // RemoveTagsRequest generates a "aws/request.Request" representing the // client's request for the RemoveTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1271,11 +1353,11 @@ func (c *CloudTrail) RemoveTagsRequest(input *RemoveTagsInput) (req *request.Req // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // This exception is thrown when the specified resource is not found. // -// * ErrCodeARNInvalidException "ARNInvalidException" +// * ErrCodeARNInvalidException "CloudTrailARNInvalidException" // This exception is thrown when an operation is called with an invalid trail // ARN. The format of a trail ARN is: // -// arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail +// arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // * ErrCodeResourceTypeNotSupportedException "ResourceTypeNotSupportedException" // This exception is thrown when the specified resource type is not supported @@ -1307,6 +1389,12 @@ func (c *CloudTrail) RemoveTagsRequest(input *RemoveTagsInput) (req *request.Req // * ErrCodeOperationNotPermittedException "OperationNotPermittedException" // This exception is thrown when the requested operation is not permitted. // +// * ErrCodeNotOrganizationMasterAccountException "NotOrganizationMasterAccountException" +// This exception is thrown when the AWS account making the request to create +// or update an organization trail is not the master account for an organization +// in AWS Organizations. For more information, see Prepare For Creating a Trail +// For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01/RemoveTags func (c *CloudTrail) RemoveTags(input *RemoveTagsInput) (*RemoveTagsOutput, error) { req, out := c.RemoveTagsRequest(input) @@ -1333,8 +1421,8 @@ const opStartLogging = "StartLogging" // StartLoggingRequest generates a "aws/request.Request" representing the // client's request for the StartLogging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1410,6 +1498,24 @@ func (c *CloudTrail) StartLoggingRequest(input *StartLoggingInput) (req *request // This exception is thrown when an operation is called on a trail from a region // other than the region in which the trail was created. // +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// This exception is thrown when the requested operation is not supported. +// +// * ErrCodeOperationNotPermittedException "OperationNotPermittedException" +// This exception is thrown when the requested operation is not permitted. +// +// * ErrCodeNotOrganizationMasterAccountException "NotOrganizationMasterAccountException" +// This exception is thrown when the AWS account making the request to create +// or update an organization trail is not the master account for an organization +// in AWS Organizations. For more information, see Prepare For Creating a Trail +// For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeInsufficientDependencyServiceAccessPermissionException "InsufficientDependencyServiceAccessPermissionException" +// This exception is thrown when the IAM user or role that is used to create +// the organization trail is lacking one or more required permissions for creating +// an organization trail in a required service. For more information, see Prepare +// For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01/StartLogging func (c *CloudTrail) StartLogging(input *StartLoggingInput) (*StartLoggingOutput, error) { req, out := c.StartLoggingRequest(input) @@ -1436,8 +1542,8 @@ const opStopLogging = "StopLogging" // StopLoggingRequest generates a "aws/request.Request" representing the // client's request for the StopLogging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1515,6 +1621,24 @@ func (c *CloudTrail) StopLoggingRequest(input *StopLoggingInput) (req *request.R // This exception is thrown when an operation is called on a trail from a region // other than the region in which the trail was created. // +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// This exception is thrown when the requested operation is not supported. 
+// +// * ErrCodeOperationNotPermittedException "OperationNotPermittedException" +// This exception is thrown when the requested operation is not permitted. +// +// * ErrCodeNotOrganizationMasterAccountException "NotOrganizationMasterAccountException" +// This exception is thrown when the AWS account making the request to create +// or update an organization trail is not the master account for an organization +// in AWS Organizations. For more information, see Prepare For Creating a Trail +// For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeInsufficientDependencyServiceAccessPermissionException "InsufficientDependencyServiceAccessPermissionException" +// This exception is thrown when the IAM user or role that is used to create +// the organization trail is lacking one or more required permissions for creating +// an organization trail in a required service. For more information, see Prepare +// For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01/StopLogging func (c *CloudTrail) StopLogging(input *StopLoggingInput) (*StopLoggingOutput, error) { req, out := c.StopLoggingRequest(input) @@ -1541,8 +1665,8 @@ const opUpdateTrail = "UpdateTrail" // UpdateTrailRequest generates a "aws/request.Request" representing the // client's request for the UpdateTrail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1677,6 +1801,35 @@ func (c *CloudTrail) UpdateTrailRequest(input *UpdateTrailInput) (req *request.R // * ErrCodeOperationNotPermittedException "OperationNotPermittedException" // This exception is thrown when the requested operation is not permitted. // +// * ErrCodeAccessNotEnabledException "CloudTrailAccessNotEnabledException" +// This exception is thrown when trusted access has not been enabled between +// AWS CloudTrail and AWS Organizations. For more information, see Enabling +// Trusted Access with Other AWS Services (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html) +// and Prepare For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeInsufficientDependencyServiceAccessPermissionException "InsufficientDependencyServiceAccessPermissionException" +// This exception is thrown when the IAM user or role that is used to create +// the organization trail is lacking one or more required permissions for creating +// an organization trail in a required service. For more information, see Prepare +// For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeOrganizationsNotInUseException "OrganizationsNotInUseException" +// This exception is thrown when the request is made from an AWS account that +// is not a member of an organization. 
To make this request, sign in using the +// credentials of an account that belongs to an organization. +// +// * ErrCodeNotOrganizationMasterAccountException "NotOrganizationMasterAccountException" +// This exception is thrown when the AWS account making the request to create +// or update an organization trail is not the master account for an organization +// in AWS Organizations. For more information, see Prepare For Creating a Trail +// For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// +// * ErrCodeOrganizationNotInAllFeaturesModeException "OrganizationNotInAllFeaturesModeException" +// This exception is thrown when AWS Organizations is not configured to support +// all features. All features must be enabled in AWS Organization to support +// creating an organization trail. For more information, see Prepare For Creating +// a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). +// // See also, https://docs.aws.amazon.com/goto/WebAPI/cloudtrail-2013-11-01/UpdateTrail func (c *CloudTrail) UpdateTrail(input *UpdateTrailInput) (*UpdateTrailOutput, error) { req, out := c.UpdateTrailRequest(input) @@ -1706,7 +1859,7 @@ type AddTagsInput struct { // Specifies the ARN of the trail to which one or more tags will be added. The // format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // ResourceId is a required field ResourceId *string `type:"string" required:"true"` @@ -1810,6 +1963,12 @@ type CreateTrailInput struct { // The default is false. IsMultiRegionTrail *bool `type:"boolean"` + // Specifies whether the trail is created for all accounts in an organization + // in AWS Organizations, or only for the current AWS account. The default is + // false, and cannot be true unless the call is made on behalf of an AWS account + // that is the master account for an organization in AWS Organizations. + IsOrganizationTrail *bool `type:"boolean"` + // Specifies the KMS key ID to use to encrypt the logs delivered by CloudTrail. // The value can be an alias name prefixed by "alias/", a fully specified ARN // to an alias, a fully specified ARN to a key, or a globally unique identifier. @@ -1818,9 +1977,9 @@ type CreateTrailInput struct { // // * alias/MyAliasName // - // * arn:aws:kms:us-east-1:123456789012:alias/MyAliasName + // * arn:aws:kms:us-east-2:123456789012:alias/MyAliasName // - // * arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012 + // * arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012 // // * 12345678-1234-1234-1234-123456789012 KmsKeyId *string `type:"string"` @@ -1915,6 +2074,12 @@ func (s *CreateTrailInput) SetIsMultiRegionTrail(v bool) *CreateTrailInput { return s } +// SetIsOrganizationTrail sets the IsOrganizationTrail field's value. +func (s *CreateTrailInput) SetIsOrganizationTrail(v bool) *CreateTrailInput { + s.IsOrganizationTrail = &v + return s +} + // SetKmsKeyId sets the KmsKeyId field's value. func (s *CreateTrailInput) SetKmsKeyId(v string) *CreateTrailInput { s.KmsKeyId = &v @@ -1965,10 +2130,13 @@ type CreateTrailOutput struct { // Specifies whether the trail exists in one region or in all regions. IsMultiRegionTrail *bool `type:"boolean"` + // Specifies whether the trail is an organization trail. 
+ IsOrganizationTrail *bool `type:"boolean"` + // Specifies the KMS key ID that encrypts the logs delivered by CloudTrail. // The value is a fully specified ARN to a KMS key in the format: // - // arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012 + // arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012 KmsKeyId *string `type:"string"` // Specifies whether log file integrity validation is enabled. @@ -1989,16 +2157,18 @@ type CreateTrailOutput struct { // Specifies the ARN of the Amazon SNS topic that CloudTrail uses to send notifications // when log files are delivered. The format of a topic ARN is: // - // arn:aws:sns:us-east-1:123456789012:MyTopic + // arn:aws:sns:us-east-2:123456789012:MyTopic SnsTopicARN *string `type:"string"` // This field is deprecated. Use SnsTopicARN. + // + // Deprecated: SnsTopicName has been deprecated SnsTopicName *string `deprecated:"true" type:"string"` // Specifies the ARN of the trail that was created. The format of a trail ARN // is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail TrailARN *string `type:"string"` } @@ -2036,6 +2206,12 @@ func (s *CreateTrailOutput) SetIsMultiRegionTrail(v bool) *CreateTrailOutput { return s } +// SetIsOrganizationTrail sets the IsOrganizationTrail field's value. +func (s *CreateTrailOutput) SetIsOrganizationTrail(v bool) *CreateTrailOutput { + s.IsOrganizationTrail = &v + return s +} + // SetKmsKeyId sets the KmsKeyId field's value. func (s *CreateTrailOutput) SetKmsKeyId(v string) *CreateTrailOutput { s.KmsKeyId = &v @@ -2084,41 +2260,93 @@ func (s *CreateTrailOutput) SetTrailARN(v string) *CreateTrailOutput { return s } -// The Amazon S3 objects that you specify in your event selectors for your trail -// to log data events. Data events are object-level API operations that access -// S3 objects, such as GetObject, DeleteObject, and PutObject. You can specify -// up to 250 S3 buckets and object prefixes for a trail. +// The Amazon S3 buckets or AWS Lambda functions that you specify in your event +// selectors for your trail to log data events. Data events provide insight +// into the resource operations performed on or within a resource itself. These +// are also known as data plane operations. You can specify up to 250 data resources +// for a trail. // -// Example +// The total number of allowed data resources is 250. This number can be distributed +// between 1 and 5 event selectors, but the total cannot exceed 250 across all +// selectors. +// +// The following example demonstrates how logging works when you configure logging +// of all data events for an S3 bucket named bucket-1. In this example, the +// CloudTrail user spcified an empty prefix, and the option to log both Read +// and Write data events. +// +// A user uploads an image file to bucket-1. // -// You create an event selector for a trail and specify an S3 bucket and an -// empty prefix, such as arn:aws:s3:::bucket-1/. +// The PutObject API operation is an Amazon S3 object-level API. It is recorded +// as a data event in CloudTrail. Because the CloudTrail user specified an S3 +// bucket with an empty prefix, events that occur on any object in that bucket +// are logged. The trail processes and logs the event. // -// You upload an image file to bucket-1. +// A user uploads an object to an Amazon S3 bucket named arn:aws:s3:::bucket-2. 
// -// The PutObject API operation occurs on an object in the S3 bucket that you -// specified in the event selector. The trail processes and logs the event. +// The PutObject API operation occurred for an object in an S3 bucket that the +// CloudTrail user didn't specify for the trail. The trail doesn’t log the event. // -// You upload another image file to a different S3 bucket named arn:aws:s3:::bucket-2. +// The following example demonstrates how logging works when you configure logging +// of AWS Lambda data events for a Lambda function named MyLambdaFunction, but +// not for all AWS Lambda functions. // -// The event occurs on an object in an S3 bucket that you didn't specify in -// the event selector. The trail doesn’t log the event. +// A user runs a script that includes a call to the MyLambdaFunction function +// and the MyOtherLambdaFunction function. +// +// The Invoke API operation on MyLambdaFunction is an AWS Lambda API. It is +// recorded as a data event in CloudTrail. Because the CloudTrail user specified +// logging data events for MyLambdaFunction, any invocations of that function +// are logged. The trail processes and logs the event. +// +// The Invoke API operation on MyOtherLambdaFunction is an AWS Lambda API. Because +// the CloudTrail user did not specify logging data events for all Lambda functions, +// the Invoke operation for MyOtherLambdaFunction does not match the function +// specified for the trail. The trail doesn’t log the event. type DataResource struct { _ struct{} `type:"structure"` - // The resource type in which you want to log data events. You can specify only - // the following value: AWS::S3::Object. + // The resource type in which you want to log data events. You can specify AWS::S3::Object + // or AWS::Lambda::Function resources. Type *string `type:"string"` - // A list of ARN-like strings for the specified S3 objects. + // An array of Amazon Resource Name (ARN) strings or partial ARN strings for + // the specified objects. + // + // * To log data events for all objects in all S3 buckets in your AWS account, + // specify the prefix as arn:aws:s3:::. + // + // This will also enable logging of data event activity performed by any user + // or role in your AWS account, even if that activity is performed on a bucket + // that belongs to another AWS account. + // + // * To log data events for all objects in all S3 buckets that include my-bucket + // in their names, specify the prefix as aws:s3:::my-bucket. The trail logs + // data events for all objects in all buckets whose name contains a match + // for my-bucket. // - // To log data events for all objects in an S3 bucket, specify the bucket and - // an empty object prefix such as arn:aws:s3:::bucket-1/. The trail logs data - // events for all objects in this S3 bucket. + // * To log data events for all objects in an S3 bucket, specify the bucket + // and an empty object prefix such as arn:aws:s3:::bucket-1/. The trail logs + // data events for all objects in this S3 bucket. // - // To log data events for specific objects, specify the S3 bucket and object - // prefix such as arn:aws:s3:::bucket-1/example-images. The trail logs data - // events for objects in this S3 bucket that match the prefix. + // * To log data events for specific objects, specify the S3 bucket and object + // prefix such as arn:aws:s3:::bucket-1/example-images. The trail logs data + // events for objects in this S3 bucket that match the prefix. 
+ // + // * To log data events for all functions in your AWS account, specify the + // prefix as arn:aws:lambda. + // + // This will also enable logging of Invoke activity performed by any user or + // role in your AWS account, even if that activity is performed on a function + // that belongs to another AWS account. + // + // * To log data eents for a specific Lambda function, specify the function + // ARN. + // + // Lambda function ARNs are exact. Unlike S3, you cannot use matching. For example, + // if you specify a function ARN arn:aws:lambda:us-west-2:111111111111:function:helloworld, + // data events will only be logged for arn:aws:lambda:us-west-2:111111111111:function:helloworld. + // They will not be logged for arn:aws:lambda:us-west-2:111111111111:function:helloworld2. Values []*string `type:"list"` } @@ -2149,7 +2377,7 @@ type DeleteTrailInput struct { _ struct{} `type:"structure"` // Specifies the name or the CloudTrail ARN of the trail to be deleted. The - // format of a trail ARN is: arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // format of a trail ARN is: arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // Name is a required field Name *string `type:"string" required:"true"` @@ -2206,13 +2434,16 @@ type DescribeTrailsInput struct { // Specifies whether to include shadow trails in the response. A shadow trail // is the replication in a region of a trail that was created in a different - // region. The default is true. + // region, or in the case of an organization trail, the replication of an organization + // trail in member accounts. If you do not include shadow trails, organization + // trails in a member account and region replication trails will not be returned. + // The default is true. IncludeShadowTrails *bool `locationName:"includeShadowTrails" type:"boolean"` // Specifies a list of trail names, trail ARNs, or both, of the trails to describe. // The format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // If an empty list is specified, information for the trail in the current region // is returned. @@ -2283,6 +2514,11 @@ func (s *DescribeTrailsOutput) SetTrailList(v []*Trail) *DescribeTrailsOutput { type Event struct { _ struct{} `type:"structure"` + // The AWS access key ID that was used to sign the request. If the request was + // made with temporary security credentials, this is the access key ID of the + // temporary credentials. + AccessKeyId *string `type:"string"` + // A JSON string that contains a representation of the event returned. CloudTrailEvent *string `type:"string"` @@ -2296,7 +2532,10 @@ type Event struct { EventSource *string `type:"string"` // The date and time of the event returned. - EventTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EventTime *time.Time `type:"timestamp"` + + // Information about whether the event is a write event or a read event. + ReadOnly *string `type:"string"` // A list of resources referenced by the event returned. Resources []*Resource `type:"list"` @@ -2316,6 +2555,12 @@ func (s Event) GoString() string { return s.String() } +// SetAccessKeyId sets the AccessKeyId field's value. +func (s *Event) SetAccessKeyId(v string) *Event { + s.AccessKeyId = &v + return s +} + // SetCloudTrailEvent sets the CloudTrailEvent field's value. 
func (s *Event) SetCloudTrailEvent(v string) *Event { s.CloudTrailEvent = &v @@ -2346,6 +2591,12 @@ func (s *Event) SetEventTime(v time.Time) *Event { return s } +// SetReadOnly sets the ReadOnly field's value. +func (s *Event) SetReadOnly(v string) *Event { + s.ReadOnly = &v + return s +} + // SetResources sets the Resources field's value. func (s *Event) SetResources(v []*Resource) *Event { s.Resources = v @@ -2358,20 +2609,26 @@ func (s *Event) SetUsername(v string) *Event { return s } -// Use event selectors to specify whether you want your trail to log management -// and/or data events. When an event occurs in your account, CloudTrail evaluates -// the event selector for all trails. For each trail, if the event matches any -// event selector, the trail processes and logs the event. If the event doesn't -// match any event selector, the trail doesn't log the event. +// Use event selectors to further specify the management and data event settings +// for your trail. By default, trails created without specific event selectors +// will be configured to log all read and write management events, and no data +// events. When an event occurs in your account, CloudTrail evaluates the event +// selector for all trails. For each trail, if the event matches any event selector, +// the trail processes and logs the event. If the event doesn't match any event +// selector, the trail doesn't log the event. // // You can configure up to five event selectors for a trail. type EventSelector struct { _ struct{} `type:"structure"` - // CloudTrail supports logging only data events for S3 objects. You can specify - // up to 250 S3 buckets and object prefixes for a trail. + // CloudTrail supports data event logging for Amazon S3 objects and AWS Lambda + // functions. You can specify up to 250 resources for an individual event selector, + // but the total number of data resources cannot exceed 250 across all event + // selectors in a trail. This limit does not apply if you configure resource + // logging for all data events. // // For more information, see Data Events (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html#logging-data-events) + // and Limits in AWS CloudTrail (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html) // in the AWS CloudTrail User Guide. DataResources []*DataResource `type:"list"` @@ -2434,13 +2691,13 @@ type GetEventSelectorsInput struct { // * Be between 3 and 128 characters // // * Have no adjacent periods, underscores or dashes. Names like my-_namespace - // and my--namespace are invalid. + // and my--namespace are not valid. // // * Not be in IP address format (for example, 192.168.5.4) // // If you specify a trail ARN, it must be in the format: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // TrailName is a required field TrailName *string `type:"string" required:"true"` @@ -2515,7 +2772,7 @@ type GetTrailStatusInput struct { // status. To get the status of a shadow trail (a replication of the trail in // another region), you must specify its ARN. 
The format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // Name is a required field Name *string `type:"string" required:"true"` @@ -2564,7 +2821,7 @@ type GetTrailStatusOutput struct { // Displays the most recent date and time when CloudTrail delivered logs to // CloudWatch Logs. - LatestCloudWatchLogsDeliveryTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LatestCloudWatchLogsDeliveryTime *time.Time `type:"timestamp"` // This field is deprecated. LatestDeliveryAttemptSucceeded *string `type:"string"` @@ -2585,7 +2842,7 @@ type GetTrailStatusOutput struct { // Specifies the date and time that CloudTrail last delivered log files to an // account's Amazon S3 bucket. - LatestDeliveryTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LatestDeliveryTime *time.Time `type:"timestamp"` // Displays any Amazon S3 error that CloudTrail encountered when attempting // to deliver a digest file to the designated bucket. For more information see @@ -2600,7 +2857,7 @@ type GetTrailStatusOutput struct { // Specifies the date and time that CloudTrail last delivered a digest file // to an account's Amazon S3 bucket. - LatestDigestDeliveryTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LatestDigestDeliveryTime *time.Time `type:"timestamp"` // This field is deprecated. LatestNotificationAttemptSucceeded *string `type:"string"` @@ -2615,15 +2872,15 @@ type GetTrailStatusOutput struct { // Specifies the date and time of the most recent Amazon SNS notification that // CloudTrail has written a new log file to an account's Amazon S3 bucket. - LatestNotificationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LatestNotificationTime *time.Time `type:"timestamp"` // Specifies the most recent date and time when CloudTrail started recording // API calls for an AWS account. - StartLoggingTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartLoggingTime *time.Time `type:"timestamp"` // Specifies the most recent date and time when CloudTrail stopped recording // API calls for an AWS account. - StopLoggingTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StopLoggingTime *time.Time `type:"timestamp"` // This field is deprecated. TimeLoggingStarted *string `type:"string"` @@ -2750,7 +3007,7 @@ type ListPublicKeysInput struct { // Optionally specifies, in UTC, the end of the time range to look up public // keys for CloudTrail digest files. If not specified, the current time is used. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // Reserved for future use. NextToken *string `type:"string"` @@ -2758,7 +3015,7 @@ type ListPublicKeysInput struct { // Optionally specifies, in UTC, the start of the time range to look up public // keys for CloudTrail digest files. If not specified, the current time is used, // and the current public key is returned. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -2835,7 +3092,7 @@ type ListTagsInput struct { // Specifies a list of trail ARNs whose tags will be listed. The list has a // limit of 20 ARNs. 
The format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // ResourceIdList is a required field ResourceIdList []*string `type:"list" required:"true"` @@ -2970,14 +3227,14 @@ type LookupEventsInput struct { // Specifies that only events that occur before or at the specified time are // returned. If the specified end time is before the specified start time, an // error is returned. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // Contains a list of lookup attributes. Currently the list can contain only // one item. LookupAttributes []*LookupAttribute `type:"list"` // The number of events to return. Possible values are 1 through 50. The default - // is 10. + // is 50. MaxResults *int64 `min:"1" type:"integer"` // The token to use to get the next page of results after a previous API call. @@ -2990,7 +3247,7 @@ type LookupEventsInput struct { // Specifies that only events that occur after or at the specified time are // returned. If the specified start time is after the specified end time, an // error is returned. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -3103,10 +3360,10 @@ type PublicKey struct { Fingerprint *string `type:"string"` // The ending time of validity of the public key. - ValidityEndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ValidityEndTime *time.Time `type:"timestamp"` // The starting time of validity of the public key. - ValidityStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ValidityStartTime *time.Time `type:"timestamp"` // The DER encoded public key value in PKCS#1 format. // @@ -3174,7 +3431,7 @@ type PutEventSelectorsInput struct { // // If you specify a trail ARN, it must be in the format: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // TrailName is a required field TrailName *string `type:"string" required:"true"` @@ -3227,7 +3484,7 @@ type PutEventSelectorsOutput struct { // Specifies the ARN of the trail that was updated with event selectors. The // format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail TrailARN *string `type:"string"` } @@ -3260,7 +3517,7 @@ type RemoveTagsInput struct { // Specifies the ARN of the trail from which tags should be removed. The format // of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // ResourceId is a required field ResourceId *string `type:"string" required:"true"` @@ -3410,7 +3667,7 @@ type StartLoggingInput struct { // Specifies the name or the CloudTrail ARN of the trail for which CloudTrail // logs AWS API calls. The format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // Name is a required field Name *string `type:"string" required:"true"` @@ -3469,7 +3726,7 @@ type StopLoggingInput struct { // Specifies the name or the CloudTrail ARN of the trail for which CloudTrail // will stop logging AWS API calls. 
The format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // Name is a required field Name *string `type:"string" required:"true"` @@ -3595,10 +3852,13 @@ type Trail struct { // Specifies whether the trail belongs only to one region or exists in all regions. IsMultiRegionTrail *bool `type:"boolean"` + // Specifies whether the trail is an organization trail. + IsOrganizationTrail *bool `type:"boolean"` + // Specifies the KMS key ID that encrypts the logs delivered by CloudTrail. // The value is a fully specified ARN to a KMS key in the format: // - // arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012 + // arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012 KmsKeyId *string `type:"string"` // Specifies whether log file validation is enabled. @@ -3620,15 +3880,17 @@ type Trail struct { // Specifies the ARN of the Amazon SNS topic that CloudTrail uses to send notifications // when log files are delivered. The format of a topic ARN is: // - // arn:aws:sns:us-east-1:123456789012:MyTopic + // arn:aws:sns:us-east-2:123456789012:MyTopic SnsTopicARN *string `type:"string"` // This field is deprecated. Use SnsTopicARN. + // + // Deprecated: SnsTopicName has been deprecated SnsTopicName *string `deprecated:"true" type:"string"` // Specifies the ARN of the trail. The format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail TrailARN *string `type:"string"` } @@ -3678,6 +3940,12 @@ func (s *Trail) SetIsMultiRegionTrail(v bool) *Trail { return s } +// SetIsOrganizationTrail sets the IsOrganizationTrail field's value. +func (s *Trail) SetIsOrganizationTrail(v bool) *Trail { + s.IsOrganizationTrail = &v + return s +} + // SetKmsKeyId sets the KmsKeyId field's value. func (s *Trail) SetKmsKeyId(v string) *Trail { s.KmsKeyId = &v @@ -3763,6 +4031,17 @@ type UpdateTrailInput struct { // it was created, and its shadow trails in other regions will be deleted. IsMultiRegionTrail *bool `type:"boolean"` + // Specifies whether the trail is applied to all accounts in an organization + // in AWS Organizations, or only for the current AWS account. The default is + // false, and cannot be true unless the call is made on behalf of an AWS account + // that is the master account for an organization in AWS Organizations. If the + // trail is not an organization trail and this is set to true, the trail will + // be created in all AWS accounts that belong to the organization. If the trail + // is an organization trail and this is set to false, the trail will remain + // in the current AWS account but be deleted from all member accounts in the + // organization. + IsOrganizationTrail *bool `type:"boolean"` + // Specifies the KMS key ID to use to encrypt the logs delivered by CloudTrail. // The value can be an alias name prefixed by "alias/", a fully specified ARN // to an alias, a fully specified ARN to a key, or a globally unique identifier. 
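For illustration, the new `IsOrganizationTrail` field on `UpdateTrailInput` and the organization-related error codes added in this change might be used from the Go SDK roughly as sketched below. The trail name and session setup are placeholders, and, per the documentation above, the call only succeeds when made from the organization's master account.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudtrail"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := cloudtrail.New(sess)

	// "my-trail" is a placeholder; IsOrganizationTrail requires the
	// request to come from the organization's master account.
	_, err := svc.UpdateTrail(&cloudtrail.UpdateTrailInput{
		Name:                aws.String("my-trail"),
		IsOrganizationTrail: aws.Bool(true),
	})
	if err != nil {
		// The organization-trail error codes introduced here surface as
		// awserr.Error values, so a type assertion on Code() distinguishes them.
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case cloudtrail.ErrCodeNotOrganizationMasterAccountException,
				cloudtrail.ErrCodeAccessNotEnabledException,
				cloudtrail.ErrCodeOrganizationsNotInUseException,
				cloudtrail.ErrCodeOrganizationNotInAllFeaturesModeException,
				cloudtrail.ErrCodeInsufficientDependencyServiceAccessPermissionException:
				log.Fatalf("organization trail prerequisite not met: %s", aerr.Message())
			}
		}
		log.Fatal(err)
	}
	fmt.Println("trail updated to an organization trail")
}
```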
@@ -3771,9 +4050,9 @@ type UpdateTrailInput struct { // // * alias/MyAliasName // - // * arn:aws:kms:us-east-1:123456789012:alias/MyAliasName + // * arn:aws:kms:us-east-2:123456789012:alias/MyAliasName // - // * arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012 + // * arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012 // // * 12345678-1234-1234-1234-123456789012 KmsKeyId *string `type:"string"` @@ -3795,7 +4074,7 @@ type UpdateTrailInput struct { // // If Name is a trail ARN, it must be in the format: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail // // Name is a required field Name *string `type:"string" required:"true"` @@ -3868,6 +4147,12 @@ func (s *UpdateTrailInput) SetIsMultiRegionTrail(v bool) *UpdateTrailInput { return s } +// SetIsOrganizationTrail sets the IsOrganizationTrail field's value. +func (s *UpdateTrailInput) SetIsOrganizationTrail(v bool) *UpdateTrailInput { + s.IsOrganizationTrail = &v + return s +} + // SetKmsKeyId sets the KmsKeyId field's value. func (s *UpdateTrailInput) SetKmsKeyId(v string) *UpdateTrailInput { s.KmsKeyId = &v @@ -3918,10 +4203,13 @@ type UpdateTrailOutput struct { // Specifies whether the trail exists in one region or in all regions. IsMultiRegionTrail *bool `type:"boolean"` + // Specifies whether the trail is an organization trail. + IsOrganizationTrail *bool `type:"boolean"` + // Specifies the KMS key ID that encrypts the logs delivered by CloudTrail. // The value is a fully specified ARN to a KMS key in the format: // - // arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012 + // arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012 KmsKeyId *string `type:"string"` // Specifies whether log file integrity validation is enabled. @@ -3942,16 +4230,18 @@ type UpdateTrailOutput struct { // Specifies the ARN of the Amazon SNS topic that CloudTrail uses to send notifications // when log files are delivered. The format of a topic ARN is: // - // arn:aws:sns:us-east-1:123456789012:MyTopic + // arn:aws:sns:us-east-2:123456789012:MyTopic SnsTopicARN *string `type:"string"` // This field is deprecated. Use SnsTopicARN. + // + // Deprecated: SnsTopicName has been deprecated SnsTopicName *string `deprecated:"true" type:"string"` // Specifies the ARN of the trail that was updated. The format of a trail ARN // is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail TrailARN *string `type:"string"` } @@ -3989,6 +4279,12 @@ func (s *UpdateTrailOutput) SetIsMultiRegionTrail(v bool) *UpdateTrailOutput { return s } +// SetIsOrganizationTrail sets the IsOrganizationTrail field's value. +func (s *UpdateTrailOutput) SetIsOrganizationTrail(v bool) *UpdateTrailOutput { + s.IsOrganizationTrail = &v + return s +} + // SetKmsKeyId sets the KmsKeyId field's value. 
func (s *UpdateTrailOutput) SetKmsKeyId(v string) *UpdateTrailOutput { s.KmsKeyId = &v @@ -4044,6 +4340,9 @@ const ( // LookupAttributeKeyEventName is a LookupAttributeKey enum value LookupAttributeKeyEventName = "EventName" + // LookupAttributeKeyReadOnly is a LookupAttributeKey enum value + LookupAttributeKeyReadOnly = "ReadOnly" + // LookupAttributeKeyUsername is a LookupAttributeKey enum value LookupAttributeKeyUsername = "Username" @@ -4055,6 +4354,9 @@ const ( // LookupAttributeKeyEventSource is a LookupAttributeKey enum value LookupAttributeKeyEventSource = "EventSource" + + // LookupAttributeKeyAccessKeyId is a LookupAttributeKey enum value + LookupAttributeKeyAccessKeyId = "AccessKeyId" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/errors.go b/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/errors.go index 0da999b3325..4fdd9a37b55 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/errors.go @@ -5,13 +5,22 @@ package cloudtrail const ( // ErrCodeARNInvalidException for service response error code - // "ARNInvalidException". + // "CloudTrailARNInvalidException". // // This exception is thrown when an operation is called with an invalid trail // ARN. The format of a trail ARN is: // - // arn:aws:cloudtrail:us-east-1:123456789012:trail/MyTrail - ErrCodeARNInvalidException = "ARNInvalidException" + // arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail + ErrCodeARNInvalidException = "CloudTrailARNInvalidException" + + // ErrCodeAccessNotEnabledException for service response error code + // "CloudTrailAccessNotEnabledException". + // + // This exception is thrown when trusted access has not been enabled between + // AWS CloudTrail and AWS Organizations. For more information, see Enabling + // Trusted Access with Other AWS Services (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html) + // and Prepare For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). + ErrCodeAccessNotEnabledException = "CloudTrailAccessNotEnabledException" // ErrCodeCloudWatchLogsDeliveryUnavailableException for service response error code // "CloudWatchLogsDeliveryUnavailableException". @@ -19,6 +28,15 @@ const ( // Cannot set a CloudWatch Logs delivery for this region. ErrCodeCloudWatchLogsDeliveryUnavailableException = "CloudWatchLogsDeliveryUnavailableException" + // ErrCodeInsufficientDependencyServiceAccessPermissionException for service response error code + // "InsufficientDependencyServiceAccessPermissionException". + // + // This exception is thrown when the IAM user or role that is used to create + // the organization trail is lacking one or more required permissions for creating + // an organization trail in a required service. For more information, see Prepare + // For Creating a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). + ErrCodeInsufficientDependencyServiceAccessPermissionException = "InsufficientDependencyServiceAccessPermissionException" + // ErrCodeInsufficientEncryptionPolicyException for service response error code // "InsufficientEncryptionPolicyException". // @@ -54,12 +72,21 @@ const ( // "InvalidEventSelectorsException". 
// // This exception is thrown when the PutEventSelectors operation is called with - // an invalid number of event selectors, data resources, or an invalid value - // for a parameter: + // a number of event selectors or data resources that is not valid. The combination + // of event selectors and data resources is not valid. A trail can have up to + // 5 event selectors. A trail is limited to 250 data resources. These data resources + // can be distributed across event selectors, but the overall total cannot exceed + // 250. + // + // You can: // // * Specify a valid number of event selectors (1 to 5) for a trail. // // * Specify a valid number of data resources (1 to 250) for an event selector. + // The limit of number of resources on an individual event selector is configurable + // up to 250. However, this upper limit is allowed only if the total number + // of data resources does not exceed 250 across all event selectors for a + // trail. // // * Specify a valid value for a parameter. For example, specifying the ReadWriteType // parameter with a value of read-only is invalid. @@ -187,12 +214,38 @@ const ( // This exception is thrown when the maximum number of trails is reached. ErrCodeMaximumNumberOfTrailsExceededException = "MaximumNumberOfTrailsExceededException" + // ErrCodeNotOrganizationMasterAccountException for service response error code + // "NotOrganizationMasterAccountException". + // + // This exception is thrown when the AWS account making the request to create + // or update an organization trail is not the master account for an organization + // in AWS Organizations. For more information, see Prepare For Creating a Trail + // For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). + ErrCodeNotOrganizationMasterAccountException = "NotOrganizationMasterAccountException" + // ErrCodeOperationNotPermittedException for service response error code // "OperationNotPermittedException". // // This exception is thrown when the requested operation is not permitted. ErrCodeOperationNotPermittedException = "OperationNotPermittedException" + // ErrCodeOrganizationNotInAllFeaturesModeException for service response error code + // "OrganizationNotInAllFeaturesModeException". + // + // This exception is thrown when AWS Organizations is not configured to support + // all features. All features must be enabled in AWS Organization to support + // creating an organization trail. For more information, see Prepare For Creating + // a Trail For Your Organization (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-an-organizational-trail-prepare.html). + ErrCodeOrganizationNotInAllFeaturesModeException = "OrganizationNotInAllFeaturesModeException" + + // ErrCodeOrganizationsNotInUseException for service response error code + // "OrganizationsNotInUseException". + // + // This exception is thrown when the request is made from an AWS account that + // is not a member of an organization. To make this request, sign in using the + // credentials of an account that belongs to an organization. + ErrCodeOrganizationsNotInUseException = "OrganizationsNotInUseException" + // ErrCodeResourceNotFoundException for service response error code // "ResourceNotFoundException". 
// diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/service.go index 49198fc6390..5f1d4ddc41f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudtrail/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "cloudtrail" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "cloudtrail" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CloudTrail" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CloudTrail client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatch/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatch/api.go index 2675be8486d..3d0f003dea7 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatch/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatch/api.go @@ -17,8 +17,8 @@ const opDeleteAlarms = "DeleteAlarms" // DeleteAlarmsRequest generates a "aws/request.Request" representing the // client's request for the DeleteAlarms operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -98,8 +98,8 @@ const opDeleteDashboards = "DeleteDashboards" // DeleteDashboardsRequest generates a "aws/request.Request" representing the // client's request for the DeleteDashboards operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -184,8 +184,8 @@ const opDescribeAlarmHistory = "DescribeAlarmHistory" // DescribeAlarmHistoryRequest generates a "aws/request.Request" representing the // client's request for the DescribeAlarmHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -323,8 +323,8 @@ const opDescribeAlarms = "DescribeAlarms" // DescribeAlarmsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAlarms operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -460,8 +460,8 @@ const opDescribeAlarmsForMetric = "DescribeAlarmsForMetric" // DescribeAlarmsForMetricRequest generates a "aws/request.Request" representing the // client's request for the DescribeAlarmsForMetric operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -535,8 +535,8 @@ const opDisableAlarmActions = "DisableAlarmActions" // DisableAlarmActionsRequest generates a "aws/request.Request" representing the // client's request for the DisableAlarmActions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -612,8 +612,8 @@ const opEnableAlarmActions = "EnableAlarmActions" // EnableAlarmActionsRequest generates a "aws/request.Request" representing the // client's request for the EnableAlarmActions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -688,8 +688,8 @@ const opGetDashboard = "GetDashboard" // GetDashboardRequest generates a "aws/request.Request" representing the // client's request for the GetDashboard operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -773,12 +773,125 @@ func (c *CloudWatch) GetDashboardWithContext(ctx aws.Context, input *GetDashboar return out, req.Send() } +const opGetMetricData = "GetMetricData" + +// GetMetricDataRequest generates a "aws/request.Request" representing the +// client's request for the GetMetricData operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetMetricData for more information on using the GetMetricData +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the GetMetricDataRequest method. +// req, resp := client.GetMetricDataRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/monitoring-2010-08-01/GetMetricData +func (c *CloudWatch) GetMetricDataRequest(input *GetMetricDataInput) (req *request.Request, output *GetMetricDataOutput) { + op := &request.Operation{ + Name: opGetMetricData, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetMetricDataInput{} + } + + output = &GetMetricDataOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetMetricData API operation for Amazon CloudWatch. +// +// You can use the GetMetricData API to retrieve as many as 100 different metrics +// in a single request, with a total of as many as 100,800 datapoints. You can +// also optionally perform math expressions on the values of the returned statistics, +// to create new time series that represent new insights into your data. For +// example, using Lambda metrics, you could divide the Errors metric by the +// Invocations metric to get an error rate time series. For more information +// about metric math expressions, see Metric Math Syntax and Functions (http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html#metric-math-syntax) +// in the Amazon CloudWatch User Guide. +// +// Calls to the GetMetricData API have a different pricing structure than calls +// to GetMetricStatistics. For more information about pricing, see Amazon CloudWatch +// Pricing (https://aws.amazon.com/cloudwatch/pricing/). +// +// Amazon CloudWatch retains metric data as follows: +// +// * Data points with a period of less than 60 seconds are available for +// 3 hours. These data points are high-resolution metrics and are available +// only for custom metrics that have been defined with a StorageResolution +// of 1. +// +// * Data points with a period of 60 seconds (1-minute) are available for +// 15 days. +// +// * Data points with a period of 300 seconds (5-minute) are available for +// 63 days. +// +// * Data points with a period of 3600 seconds (1 hour) are available for +// 455 days (15 months). +// +// Data points that are initially published with a shorter period are aggregated +// together for long-term storage. For example, if you collect data using a +// period of 1 minute, the data remains available for 15 days with 1-minute +// resolution. After 15 days, this data is still available, but is aggregated +// and retrievable only with a resolution of 5 minutes. After 63 days, the data +// is further aggregated and is available with a resolution of 1 hour. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch's +// API operation GetMetricData for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The next token specified is invalid. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/monitoring-2010-08-01/GetMetricData +func (c *CloudWatch) GetMetricData(input *GetMetricDataInput) (*GetMetricDataOutput, error) { + req, out := c.GetMetricDataRequest(input) + return out, req.Send() +} + +// GetMetricDataWithContext is the same as GetMetricData with the addition of +// the ability to pass a context and additional request options. +// +// See GetMetricData for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatch) GetMetricDataWithContext(ctx aws.Context, input *GetMetricDataInput, opts ...request.Option) (*GetMetricDataOutput, error) { + req, out := c.GetMetricDataRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetMetricStatistics = "GetMetricStatistics" // GetMetricStatisticsRequest generates a "aws/request.Request" representing the // client's request for the GetMetricStatistics operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -839,6 +952,9 @@ func (c *CloudWatch) GetMetricStatisticsRequest(input *GetMetricStatisticsInput) // // * The Min and the Max values of the statistic set are equal. // +// Percentile statistics are not available for metrics when any of the metric +// values are negative numbers. +// // Amazon CloudWatch retains metric data as follows: // // * Data points with a period of less than 60 seconds are available for @@ -911,12 +1027,100 @@ func (c *CloudWatch) GetMetricStatisticsWithContext(ctx aws.Context, input *GetM return out, req.Send() } +const opGetMetricWidgetImage = "GetMetricWidgetImage" + +// GetMetricWidgetImageRequest generates a "aws/request.Request" representing the +// client's request for the GetMetricWidgetImage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetMetricWidgetImage for more information on using the GetMetricWidgetImage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetMetricWidgetImageRequest method. 
+// req, resp := client.GetMetricWidgetImageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/monitoring-2010-08-01/GetMetricWidgetImage +func (c *CloudWatch) GetMetricWidgetImageRequest(input *GetMetricWidgetImageInput) (req *request.Request, output *GetMetricWidgetImageOutput) { + op := &request.Operation{ + Name: opGetMetricWidgetImage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetMetricWidgetImageInput{} + } + + output = &GetMetricWidgetImageOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetMetricWidgetImage API operation for Amazon CloudWatch. +// +// You can use the GetMetricWidgetImage API to retrieve a snapshot graph of +// one or more Amazon CloudWatch metrics as a bitmap image. You can then embed +// this image into your services and products, such as wiki pages, reports, +// and documents. You could also retrieve images regularly, such as every minute, +// and create your own custom live dashboard. +// +// The graph you retrieve can include all CloudWatch metric graph features, +// including metric math and horizontal and vertical annotations. +// +// There is a limit of 20 transactions per second for this API. Each GetMetricWidgetImage +// action has the following limits: +// +// * As many as 100 metrics in the graph. +// +// * Up to 100 KB uncompressed payload. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch's +// API operation GetMetricWidgetImage for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/monitoring-2010-08-01/GetMetricWidgetImage +func (c *CloudWatch) GetMetricWidgetImage(input *GetMetricWidgetImageInput) (*GetMetricWidgetImageOutput, error) { + req, out := c.GetMetricWidgetImageRequest(input) + return out, req.Send() +} + +// GetMetricWidgetImageWithContext is the same as GetMetricWidgetImage with the addition of +// the ability to pass a context and additional request options. +// +// See GetMetricWidgetImage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatch) GetMetricWidgetImageWithContext(ctx aws.Context, input *GetMetricWidgetImageInput, opts ...request.Option) (*GetMetricWidgetImageOutput, error) { + req, out := c.GetMetricWidgetImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListDashboards = "ListDashboards" // ListDashboardsRequest generates a "aws/request.Request" representing the // client's request for the ListDashboards operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
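The `GetMetricWidgetImage` operation added above returns a rendered graph as PNG bytes. Below is a rough, hedged sketch of how a caller might exercise it with this SDK; the region, instance ID, and output file are placeholders, and the widget JSON keys follow the metric-widget structure referenced in the comments only illustratively.

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := cloudwatch.New(sess)

	// Widget definition: one EC2 CPUUtilization metric over the last three hours.
	// The instance ID and widget fields here are placeholders for illustration.
	widget := `{
	  "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-0123456789abcdef0"]],
	  "start": "-PT3H",
	  "period": 300,
	  "title": "CPU utilization"
	}`

	out, err := svc.GetMetricWidgetImage(&cloudwatch.GetMetricWidgetImageInput{
		MetricWidget: aws.String(widget),
	})
	if err != nil {
		log.Fatal(err)
	}

	// MetricWidgetImage holds the raw PNG bytes; base64 handling is done by the SDK.
	if err := os.WriteFile("widget.png", out.MetricWidgetImage, 0o644); err != nil {
		log.Fatal(err)
	}
}
```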
@@ -959,6 +1163,10 @@ func (c *CloudWatch) ListDashboardsRequest(input *ListDashboardsInput) (req *req // only those dashboards with names starting with the prefix are listed. Otherwise, // all dashboards in your account are listed. // +// ListDashboards returns up to 1000 results on one page. If there are more +// than 1000 dashboards, you can call ListDashboards again and include the value +// you received for NextToken in the first call, to receive the next 1000 results. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -999,8 +1207,8 @@ const opListMetrics = "ListMetrics" // ListMetricsRequest generates a "aws/request.Request" representing the // client's request for the ListMetrics operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1045,15 +1253,15 @@ func (c *CloudWatch) ListMetricsRequest(input *ListMetricsInput) (req *request.R // ListMetrics API operation for Amazon CloudWatch. // -// List the specified metrics. You can use the returned metrics with GetMetricStatistics -// to obtain statistical data. +// List the specified metrics. You can use the returned metrics with GetMetricData +// or GetMetricStatistics to obtain statistical data. // // Up to 500 results are returned for any one call. To retrieve additional results, // use the returned token with subsequent calls. // // After you create a metric, allow up to fifteen minutes before the metric // appears. Statistics about the metric, however, are available sooner using -// GetMetricStatistics. +// GetMetricData or GetMetricStatistics. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1145,8 +1353,8 @@ const opPutDashboard = "PutDashboard" // PutDashboardRequest generates a "aws/request.Request" representing the // client's request for the PutDashboard operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1189,8 +1397,8 @@ func (c *CloudWatch) PutDashboardRequest(input *PutDashboardInput) (req *request // dashboard. If you update a dashboard, the entire contents are replaced with // what you specify here. // -// You can have up to 500 dashboards per account. All dashboards in your account -// are global, not region-specific. +// There is no limit to the number of dashboards in your account. All dashboards +// in your account are global, not region-specific. // // A simple way to create a dashboard using PutDashboard is to copy an existing // dashboard. 
To copy an existing dashboard using the console, you can load @@ -1245,8 +1453,8 @@ const opPutMetricAlarm = "PutMetricAlarm" // PutMetricAlarmRequest generates a "aws/request.Request" representing the // client's request for the PutMetricAlarm operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1325,10 +1533,11 @@ func (c *CloudWatch) PutMetricAlarmRequest(input *PutMetricAlarmInput) (req *req // If you are using temporary security credentials granted using AWS STS, you // cannot stop or terminate an EC2 instance using alarm actions. // -// You must create at least one stop, terminate, or reboot alarm using either -// the Amazon EC2 or CloudWatch consoles to create the EC2ActionsAccess IAM -// role. After this IAM role is created, you can create stop, terminate, or -// reboot alarms using a command-line interface or API. +// The first time you create an alarm in the AWS Management Console, the CLI, +// or by using the PutMetricAlarm API, CloudWatch creates the necessary service-linked +// role for you. The service-linked role is called AWSServiceRoleForCloudWatchEvents. +// For more information about service-linked roles, see AWS service-linked role +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1367,8 +1576,8 @@ const opPutMetricData = "PutMetricData" // PutMetricDataRequest generates a "aws/request.Request" representing the // client's request for the PutMetricData operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1409,12 +1618,21 @@ func (c *CloudWatch) PutMetricDataRequest(input *PutMetricDataInput) (req *reque // PutMetricData API operation for Amazon CloudWatch. // -// Publishes metric data points to Amazon CloudWatch. CloudWatch associates -// the data points with the specified metric. If the specified metric does not -// exist, CloudWatch creates the metric. When CloudWatch creates a metric, it -// can take up to fifteen minutes for the metric to appear in calls to ListMetrics. +// Publishes metric data to Amazon CloudWatch. CloudWatch associates the data +// with the specified metric. If the specified metric does not exist, CloudWatch +// creates the metric. When CloudWatch creates a metric, it can take up to fifteen +// minutes for the metric to appear in calls to ListMetrics. +// +// You can publish either individual data points in the Value field, or arrays +// of values and the number of times each value occurred during the period by +// using the Values and Counts fields in the MetricDatum structure. 
Using the +// Values and Counts method enables you to publish up to 150 values per metric +// with one PutMetricData request, and supports retrieving percentile statistics +// on this data. // // Each PutMetricData request is limited to 40 KB in size for HTTP POST requests. +// You can send a payload compressed by gzip. Each request is also limited to +// no more than 20 different metrics. // // Although the Value parameter accepts numbers of type Double, CloudWatch rejects // values that are either too small or too large. Values must be in the range @@ -1428,16 +1646,19 @@ func (c *CloudWatch) PutMetricDataRequest(input *PutMetricDataInput) (req *reque // in the Amazon CloudWatch User Guide. // // Data points with time stamps from 24 hours ago or longer can take at least -// 48 hours to become available for GetMetricStatistics from the time they are -// submitted. +// 48 hours to become available for GetMetricData or GetMetricStatistics from +// the time they are submitted. // -// CloudWatch needs raw data points to calculate percentile statistics. If you -// publish data using a statistic set instead, you can only retrieve percentile -// statistics for this data if one of the following conditions is true: +// CloudWatch needs raw data points to calculate percentile statistics. These +// raw data points could be published individually or as part of Values and +// Counts arrays. If you publish data using statistic sets in the StatisticValues +// field instead, you can only retrieve percentile statistics for this data +// if one of the following conditions is true: // -// * The SampleCount value of the statistic set is 1 +// * The SampleCount value of the statistic set is 1 and Min, Max, and Sum +// are all equal. // -// * The Min and the Max values of the statistic set are equal +// * The Min and Max are equal, and Sum is equal to Min multiplied by SampleCount. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1485,8 +1706,8 @@ const opSetAlarmState = "SetAlarmState" // SetAlarmStateRequest generates a "aws/request.Request" representing the // client's request for the SetAlarmState operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1589,7 +1810,7 @@ type AlarmHistoryItem struct { HistorySummary *string `min:"1" type:"string"` // The time stamp for the alarm history item. - Timestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Timestamp *time.Time `type:"timestamp"` } // String returns the string representation @@ -1645,7 +1866,7 @@ type DashboardEntry struct { // The time stamp of when the dashboard was last modified, either by an API // call or through the console. This number is expressed as the number of milliseconds // since Jan 1, 1970 00:00:00 UTC. - LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastModified *time.Time `type:"timestamp"` // The size of the dashboard, in bytes. Size *int64 `type:"long"` @@ -1742,7 +1963,7 @@ type Datapoint struct { Sum *float64 `type:"double"` // The time stamp used for the data point. 
- Timestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Timestamp *time.Time `type:"timestamp"` // The standard unit for the data point. Unit *string `type:"string" enum:"StandardUnit"` @@ -1917,7 +2138,7 @@ type DescribeAlarmHistoryInput struct { AlarmName *string `min:"1" type:"string"` // The ending date to retrieve alarm history. - EndDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndDate *time.Time `type:"timestamp"` // The type of alarm histories to retrieve. HistoryItemType *string `type:"string" enum:"HistoryItemType"` @@ -1930,7 +2151,7 @@ type DescribeAlarmHistoryInput struct { NextToken *string `type:"string"` // The starting date to retrieve alarm history. - StartDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartDate *time.Time `type:"timestamp"` } // String returns the string representation @@ -2588,6 +2809,162 @@ func (s *GetDashboardOutput) SetDashboardName(v string) *GetDashboardOutput { return s } +type GetMetricDataInput struct { + _ struct{} `type:"structure"` + + // The time stamp indicating the latest data to be returned. + // + // For better performance, specify StartTime and EndTime values that align with + // the value of the metric's Period and sync up with the beginning and end of + // an hour. For example, if the Period of a metric is 5 minutes, specifying + // 12:05 or 12:30 as EndTime can get a faster response from CloudWatch then + // setting 12:07 or 12:29 as the EndTime. + // + // EndTime is a required field + EndTime *time.Time `type:"timestamp" required:"true"` + + // The maximum number of data points the request should return before paginating. + // If you omit this, the default of 100,800 is used. + MaxDatapoints *int64 `type:"integer"` + + // The metric queries to be returned. A single GetMetricData call can include + // as many as 100 MetricDataQuery structures. Each of these structures can specify + // either a metric to retrieve, or a math expression to perform on retrieved + // data. + // + // MetricDataQueries is a required field + MetricDataQueries []*MetricDataQuery `type:"list" required:"true"` + + // Include this value, if it was returned by the previous call, to get the next + // set of data points. + NextToken *string `type:"string"` + + // The order in which data points should be returned. TimestampDescending returns + // the newest data first and paginates when the MaxDatapoints limit is reached. + // TimestampAscending returns the oldest data first and paginates when the MaxDatapoints + // limit is reached. + ScanBy *string `type:"string" enum:"ScanBy"` + + // The time stamp indicating the earliest data to be returned. + // + // For better performance, specify StartTime and EndTime values that align with + // the value of the metric's Period and sync up with the beginning and end of + // an hour. For example, if the Period of a metric is 5 minutes, specifying + // 12:05 or 12:30 as StartTime can get a faster response from CloudWatch then + // setting 12:07 or 12:29 as the StartTime. + // + // StartTime is a required field + StartTime *time.Time `type:"timestamp" required:"true"` +} + +// String returns the string representation +func (s GetMetricDataInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMetricDataInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetMetricDataInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetMetricDataInput"} + if s.EndTime == nil { + invalidParams.Add(request.NewErrParamRequired("EndTime")) + } + if s.MetricDataQueries == nil { + invalidParams.Add(request.NewErrParamRequired("MetricDataQueries")) + } + if s.StartTime == nil { + invalidParams.Add(request.NewErrParamRequired("StartTime")) + } + if s.MetricDataQueries != nil { + for i, v := range s.MetricDataQueries { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "MetricDataQueries", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndTime sets the EndTime field's value. +func (s *GetMetricDataInput) SetEndTime(v time.Time) *GetMetricDataInput { + s.EndTime = &v + return s +} + +// SetMaxDatapoints sets the MaxDatapoints field's value. +func (s *GetMetricDataInput) SetMaxDatapoints(v int64) *GetMetricDataInput { + s.MaxDatapoints = &v + return s +} + +// SetMetricDataQueries sets the MetricDataQueries field's value. +func (s *GetMetricDataInput) SetMetricDataQueries(v []*MetricDataQuery) *GetMetricDataInput { + s.MetricDataQueries = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetMetricDataInput) SetNextToken(v string) *GetMetricDataInput { + s.NextToken = &v + return s +} + +// SetScanBy sets the ScanBy field's value. +func (s *GetMetricDataInput) SetScanBy(v string) *GetMetricDataInput { + s.ScanBy = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *GetMetricDataInput) SetStartTime(v time.Time) *GetMetricDataInput { + s.StartTime = &v + return s +} + +type GetMetricDataOutput struct { + _ struct{} `type:"structure"` + + // The metrics that are returned, including the metric name, namespace, and + // dimensions. + MetricDataResults []*MetricDataResult `type:"list"` + + // A token that marks the next batch of returned results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s GetMetricDataOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMetricDataOutput) GoString() string { + return s.String() +} + +// SetMetricDataResults sets the MetricDataResults field's value. +func (s *GetMetricDataOutput) SetMetricDataResults(v []*MetricDataResult) *GetMetricDataOutput { + s.MetricDataResults = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetMetricDataOutput) SetNextToken(v string) *GetMetricDataOutput { + s.NextToken = &v + return s +} + type GetMetricStatisticsInput struct { _ struct{} `type:"structure"` @@ -2608,11 +2985,12 @@ type GetMetricStatisticsInput struct { // time stamp. The time stamp must be in ISO 8601 UTC format (for example, 2016-10-10T23:00:00Z). // // EndTime is a required field - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + EndTime *time.Time `type:"timestamp" required:"true"` // The percentile statistics. Specify values between p0.0 and p100. When calling // GetMetricStatistics, you must specify either Statistics or ExtendedStatistics, - // but not both. + // but not both. Percentile statistics are not available for metrics when any + // of the metric values are negative numbers. ExtendedStatistics []*string `min:"1" type:"list"` // The name of the metric, with or without spaces. 
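The `GetMetricDataInput` and `GetMetricDataOutput` types defined above pair with the `GetMetricData` operation documented earlier. As a minimal sketch of a call that retrieves two Lambda metrics and derives an error rate with a metric math expression (mirroring the Errors/Invocations example in the operation's comments); the region, period, and metric choices are assumptions for illustration only.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := cloudwatch.New(sess)

	now := time.Now()
	out, err := svc.GetMetricData(&cloudwatch.GetMetricDataInput{
		// Align StartTime/EndTime with the period, as the field docs suggest.
		StartTime: aws.Time(now.Add(-3 * time.Hour).Truncate(5 * time.Minute)),
		EndTime:   aws.Time(now.Truncate(5 * time.Minute)),
		MetricDataQueries: []*cloudwatch.MetricDataQuery{
			{
				// Raw Lambda error counts; used only as input to the expression below.
				Id:         aws.String("errs"),
				ReturnData: aws.Bool(false),
				MetricStat: &cloudwatch.MetricStat{
					Metric: &cloudwatch.Metric{
						Namespace:  aws.String("AWS/Lambda"),
						MetricName: aws.String("Errors"),
					},
					Period: aws.Int64(300),
					Stat:   aws.String("Sum"),
				},
			},
			{
				Id:         aws.String("invocations"),
				ReturnData: aws.Bool(false),
				MetricStat: &cloudwatch.MetricStat{
					Metric: &cloudwatch.Metric{
						Namespace:  aws.String("AWS/Lambda"),
						MetricName: aws.String("Invocations"),
					},
					Period: aws.Int64(300),
					Stat:   aws.String("Sum"),
				},
			},
			{
				// Error rate derived with metric math, as in the Lambda example above.
				Id:         aws.String("errorRate"),
				Expression: aws.String("errs / invocations"),
				Label:      aws.String("Lambda error rate"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, r := range out.MetricDataResults {
		fmt.Printf("%s (%s): %d data points\n",
			aws.StringValue(r.Label), aws.StringValue(r.StatusCode), len(r.Values))
	}
}
```

Note that each query `Id` must start with a lowercase letter, and a query supplies either `Expression` or `MetricStat`, never both, as the `MetricDataQuery` comments above describe.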
@@ -2674,7 +3052,7 @@ type GetMetricStatisticsInput struct { // you receive data timestamped between 15:02:15 and 15:07:15. // // StartTime is a required field - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + StartTime *time.Time `type:"timestamp" required:"true"` // The metric statistics, other than percentile. For percentile statistics, // use ExtendedStatistics. When calling GetMetricStatistics, you must specify @@ -2682,8 +3060,8 @@ type GetMetricStatisticsInput struct { Statistics []*string `min:"1" type:"list"` // The unit for a given metric. Metrics may be reported in multiple units. Not - // supplying a unit results in all units being returned. If the metric only - // ever reports one unit, specifying a unit has no effect. + // supplying a unit results in all units being returned. If you specify only + // a unit that the metric does not report, the results of the call are null. Unit *string `type:"string" enum:"StandardUnit"` } @@ -2833,6 +3211,115 @@ func (s *GetMetricStatisticsOutput) SetLabel(v string) *GetMetricStatisticsOutpu return s } +type GetMetricWidgetImageInput struct { + _ struct{} `type:"structure"` + + // A JSON string that defines the bitmap graph to be retrieved. The string includes + // the metrics to include in the graph, statistics, annotations, title, axis + // limits, and so on. You can include only one MetricWidget parameter in each + // GetMetricWidgetImage call. + // + // For more information about the syntax of MetricWidget see CloudWatch-Metric-Widget-Structure. + // + // If any metric on the graph could not load all the requested data points, + // an orange triangle with an exclamation point appears next to the graph legend. + // + // MetricWidget is a required field + MetricWidget *string `type:"string" required:"true"` + + // The format of the resulting image. Only PNG images are supported. + // + // The default is png. If you specify png, the API returns an HTTP response + // with the content-type set to text/xml. The image data is in a MetricWidgetImage + // field. For example: + // + // + // + // + // + // + // + // iVBORw0KGgoAAAANSUhEUgAAAlgAAAGQEAYAAAAip... + // + // + // + // + // + // + // + // 6f0d4192-4d42-11e8-82c1-f539a07e0e3b + // + // + // + // + // + // The image/png setting is intended only for custom HTTP requests. For most + // use cases, and all actions using an AWS SDK, you should use png. If you specify + // image/png, the HTTP response has a content-type set to image/png, and the + // body of the response is a PNG image. + OutputFormat *string `type:"string"` +} + +// String returns the string representation +func (s GetMetricWidgetImageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMetricWidgetImageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetMetricWidgetImageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetMetricWidgetImageInput"} + if s.MetricWidget == nil { + invalidParams.Add(request.NewErrParamRequired("MetricWidget")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMetricWidget sets the MetricWidget field's value. +func (s *GetMetricWidgetImageInput) SetMetricWidget(v string) *GetMetricWidgetImageInput { + s.MetricWidget = &v + return s +} + +// SetOutputFormat sets the OutputFormat field's value. 
+func (s *GetMetricWidgetImageInput) SetOutputFormat(v string) *GetMetricWidgetImageInput { + s.OutputFormat = &v + return s +} + +type GetMetricWidgetImageOutput struct { + _ struct{} `type:"structure"` + + // The image of the graph, in the output format specified. + // + // MetricWidgetImage is automatically base64 encoded/decoded by the SDK. + MetricWidgetImage []byte `type:"blob"` +} + +// String returns the string representation +func (s GetMetricWidgetImageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetMetricWidgetImageOutput) GoString() string { + return s.String() +} + +// SetMetricWidgetImage sets the MetricWidgetImage field's value. +func (s *GetMetricWidgetImageOutput) SetMetricWidgetImage(v []byte) *GetMetricWidgetImageOutput { + s.MetricWidgetImage = v + return s +} + type ListDashboardsInput struct { _ struct{} `type:"structure"` @@ -3009,6 +3496,39 @@ func (s *ListMetricsOutput) SetNextToken(v string) *ListMetricsOutput { return s } +// A message returned by the GetMetricDataAPI, including a code and a description. +type MessageData struct { + _ struct{} `type:"structure"` + + // The error code or status code associated with the message. + Code *string `type:"string"` + + // The message text. + Value *string `type:"string"` +} + +// String returns the string representation +func (s MessageData) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MessageData) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *MessageData) SetCode(v string) *MessageData { + s.Code = &v + return s +} + +// SetValue sets the Value field's value. +func (s *MessageData) SetValue(v string) *MessageData { + s.Value = &v + return s +} + // Represents a specific metric. type Metric struct { _ struct{} `type:"structure"` @@ -3033,6 +3553,32 @@ func (s Metric) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *Metric) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Metric"} + if s.MetricName != nil && len(*s.MetricName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MetricName", 1)) + } + if s.Namespace != nil && len(*s.Namespace) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Namespace", 1)) + } + if s.Dimensions != nil { + for i, v := range s.Dimensions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Dimensions", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetDimensions sets the Dimensions field's value. func (s *Metric) SetDimensions(v []*Dimension) *Metric { s.Dimensions = v @@ -3067,7 +3613,7 @@ type MetricAlarm struct { AlarmArn *string `min:"1" type:"string"` // The time stamp of the last update to the alarm configuration. - AlarmConfigurationUpdatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + AlarmConfigurationUpdatedTimestamp *time.Time `type:"timestamp"` // The description of the alarm. AlarmDescription *string `type:"string"` @@ -3123,7 +3669,7 @@ type MetricAlarm struct { StateReasonData *string `type:"string"` // The time stamp of the last update to the alarm state. - StateUpdatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StateUpdatedTimestamp *time.Time `type:"timestamp"` // The state value for the alarm. 
StateValue *string `type:"string" enum:"StateValue"` @@ -3303,11 +3849,211 @@ func (s *MetricAlarm) SetUnit(v string) *MetricAlarm { return s } +// This structure indicates the metric data to return, and whether this call +// is just retrieving a batch set of data for one metric, or is performing a +// math expression on metric data. A single GetMetricData call can include up +// to 100 MetricDataQuery structures. +type MetricDataQuery struct { + _ struct{} `type:"structure"` + + // The math expression to be performed on the returned data, if this structure + // is performing a math expression. For more information about metric math expressions, + // see Metric Math Syntax and Functions (http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html#metric-math-syntax) + // in the Amazon CloudWatch User Guide. + // + // Within one MetricDataQuery structure, you must specify either Expression + // or MetricStat but not both. + Expression *string `min:"1" type:"string"` + + // A short name used to tie this structure to the results in the response. This + // name must be unique within a single call to GetMetricData. If you are performing + // math expressions on this set of data, this name represents that data and + // can serve as a variable in the mathematical expression. The valid characters + // are letters, numbers, and underscore. The first character must be a lowercase + // letter. + // + // Id is a required field + Id *string `min:"1" type:"string" required:"true"` + + // A human-readable label for this metric or expression. This is especially + // useful if this is an expression, so that you know what the value represents. + // If the metric or expression is shown in a CloudWatch dashboard widget, the + // label is shown. If Label is omitted, CloudWatch generates a default. + Label *string `type:"string"` + + // The metric to be returned, along with statistics, period, and units. Use + // this parameter only if this structure is performing a data retrieval and + // not performing a math expression on the returned data. + // + // Within one MetricDataQuery structure, you must specify either Expression + // or MetricStat but not both. + MetricStat *MetricStat `type:"structure"` + + // Indicates whether to return the time stamps and raw data values of this metric. + // If you are performing this call just to do math expressions and do not also + // need the raw data returned, you can specify False. If you omit this, the + // default of True is used. + ReturnData *bool `type:"boolean"` +} + +// String returns the string representation +func (s MetricDataQuery) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricDataQuery) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *MetricDataQuery) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MetricDataQuery"} + if s.Expression != nil && len(*s.Expression) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Expression", 1)) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.MetricStat != nil { + if err := s.MetricStat.Validate(); err != nil { + invalidParams.AddNested("MetricStat", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExpression sets the Expression field's value. +func (s *MetricDataQuery) SetExpression(v string) *MetricDataQuery { + s.Expression = &v + return s +} + +// SetId sets the Id field's value. +func (s *MetricDataQuery) SetId(v string) *MetricDataQuery { + s.Id = &v + return s +} + +// SetLabel sets the Label field's value. +func (s *MetricDataQuery) SetLabel(v string) *MetricDataQuery { + s.Label = &v + return s +} + +// SetMetricStat sets the MetricStat field's value. +func (s *MetricDataQuery) SetMetricStat(v *MetricStat) *MetricDataQuery { + s.MetricStat = v + return s +} + +// SetReturnData sets the ReturnData field's value. +func (s *MetricDataQuery) SetReturnData(v bool) *MetricDataQuery { + s.ReturnData = &v + return s +} + +// A GetMetricData call returns an array of MetricDataResult structures. Each +// of these structures includes the data points for that metric, along with +// the time stamps of those data points and other identifying information. +type MetricDataResult struct { + _ struct{} `type:"structure"` + + // The short name you specified to represent this metric. + Id *string `min:"1" type:"string"` + + // The human-readable label associated with the data. + Label *string `type:"string"` + + // A list of messages with additional information about the data returned. + Messages []*MessageData `type:"list"` + + // The status of the returned data. Complete indicates that all data points + // in the requested time range were returned. PartialData means that an incomplete + // set of data points were returned. You can use the NextToken value that was + // returned and repeat your request to get more data points. NextToken is not + // returned if you are performing a math expression. InternalError indicates + // that an error occurred. Retry your request using NextToken, if present. + StatusCode *string `type:"string" enum:"StatusCode"` + + // The time stamps for the data points, formatted in Unix timestamp format. + // The number of time stamps always matches the number of values and the value + // for Timestamps[x] is Values[x]. + Timestamps []*time.Time `type:"list"` + + // The data points for the metric corresponding to Timestamps. The number of + // values always matches the number of time stamps and the time stamp for Values[x] + // is Timestamps[x]. + Values []*float64 `type:"list"` +} + +// String returns the string representation +func (s MetricDataResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricDataResult) GoString() string { + return s.String() +} + +// SetId sets the Id field's value. +func (s *MetricDataResult) SetId(v string) *MetricDataResult { + s.Id = &v + return s +} + +// SetLabel sets the Label field's value. 
+func (s *MetricDataResult) SetLabel(v string) *MetricDataResult { + s.Label = &v + return s +} + +// SetMessages sets the Messages field's value. +func (s *MetricDataResult) SetMessages(v []*MessageData) *MetricDataResult { + s.Messages = v + return s +} + +// SetStatusCode sets the StatusCode field's value. +func (s *MetricDataResult) SetStatusCode(v string) *MetricDataResult { + s.StatusCode = &v + return s +} + +// SetTimestamps sets the Timestamps field's value. +func (s *MetricDataResult) SetTimestamps(v []*time.Time) *MetricDataResult { + s.Timestamps = v + return s +} + +// SetValues sets the Values field's value. +func (s *MetricDataResult) SetValues(v []*float64) *MetricDataResult { + s.Values = v + return s +} + // Encapsulates the information sent to either create a metric or add new values // to be aggregated into an existing metric. type MetricDatum struct { _ struct{} `type:"structure"` + // Array of numbers that is used along with the Values array. Each number in + // the Count array is the number of times the corresponding value in the Values + // array occurred during the period. + // + // If you omit the Counts array, the default of 1 is used as the value for each + // count. If you include a Counts array, it must include the same amount of + // values as the Values array. + Counts []*float64 `type:"list"` + // The dimensions associated with the metric. Dimensions []*Dimension `type:"list"` @@ -3332,7 +4078,7 @@ type MetricDatum struct { // The time the metric data was received, expressed as the number of milliseconds // since Jan 1, 1970 00:00:00 UTC. - Timestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Timestamp *time.Time `type:"timestamp"` // The unit of the metric. Unit *string `type:"string" enum:"StandardUnit"` @@ -3345,6 +4091,19 @@ type MetricDatum struct { // In addition, special values (for example, NaN, +Infinity, -Infinity) are // not supported. Value *float64 `type:"double"` + + // Array of numbers representing the values for the metric during the period. + // Each unique value is listed just once in this array, and the corresponding + // number in the Counts array specifies the number of times that value occurred + // during the period. You can include up to 150 unique values in each PutMetricData + // action that specifies a Values array. + // + // Although the Values array accepts numbers of type Double, CloudWatch rejects + // values that are either too small or too large. Values must be in the range + // of 8.515920e-109 to 1.174271e+108 (Base 10) or 2e-360 to 2e360 (Base 2). + // In addition, special values (for example, NaN, +Infinity, -Infinity) are + // not supported. + Values []*float64 `type:"list"` } // String returns the string representation @@ -3391,6 +4150,12 @@ func (s *MetricDatum) Validate() error { return nil } +// SetCounts sets the Counts field's value. +func (s *MetricDatum) SetCounts(v []*float64) *MetricDatum { + s.Counts = v + return s +} + // SetDimensions sets the Dimensions field's value. func (s *MetricDatum) SetDimensions(v []*Dimension) *MetricDatum { s.Dimensions = v @@ -3433,6 +4198,98 @@ func (s *MetricDatum) SetValue(v float64) *MetricDatum { return s } +// SetValues sets the Values field's value. +func (s *MetricDatum) SetValues(v []*float64) *MetricDatum { + s.Values = v + return s +} + +// This structure defines the metric to be returned, along with the statistics, +// period, and units. 
+type MetricStat struct { + _ struct{} `type:"structure"` + + // The metric to return, including the metric name, namespace, and dimensions. + // + // Metric is a required field + Metric *Metric `type:"structure" required:"true"` + + // The period to use when retrieving the metric. + // + // Period is a required field + Period *int64 `min:"1" type:"integer" required:"true"` + + // The statistic to return. It can include any CloudWatch statistic or extended + // statistic. + // + // Stat is a required field + Stat *string `type:"string" required:"true"` + + // The unit to use for the returned data points. + Unit *string `type:"string" enum:"StandardUnit"` +} + +// String returns the string representation +func (s MetricStat) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricStat) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MetricStat) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MetricStat"} + if s.Metric == nil { + invalidParams.Add(request.NewErrParamRequired("Metric")) + } + if s.Period == nil { + invalidParams.Add(request.NewErrParamRequired("Period")) + } + if s.Period != nil && *s.Period < 1 { + invalidParams.Add(request.NewErrParamMinValue("Period", 1)) + } + if s.Stat == nil { + invalidParams.Add(request.NewErrParamRequired("Stat")) + } + if s.Metric != nil { + if err := s.Metric.Validate(); err != nil { + invalidParams.AddNested("Metric", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMetric sets the Metric field's value. +func (s *MetricStat) SetMetric(v *Metric) *MetricStat { + s.Metric = v + return s +} + +// SetPeriod sets the Period field's value. +func (s *MetricStat) SetPeriod(v int64) *MetricStat { + s.Period = &v + return s +} + +// SetStat sets the Stat field's value. +func (s *MetricStat) SetStat(v string) *MetricStat { + s.Stat = &v + return s +} + +// SetUnit sets the Unit field's value. +func (s *MetricStat) SetUnit(v string) *MetricStat { + s.Unit = &v + return s +} + type PutDashboardInput struct { _ struct{} `type:"structure"` @@ -3535,11 +4392,11 @@ type PutMetricAlarmInput struct { // // Valid Values: arn:aws:automate:region:ec2:stop | arn:aws:automate:region:ec2:terminate // | arn:aws:automate:region:ec2:recover | arn:aws:sns:region:account-id:sns-topic-name - // | arn:aws:autoscaling:region:account-id:scalingPolicy:policy-id autoScalingGroupName/group-friendly-name:policyName/policy-friendly-name + // | arn:aws:autoscaling:region:account-id:scalingPolicy:policy-idautoScalingGroupName/group-friendly-name:policyName/policy-friendly-name // - // Valid Values (for use with IAM roles): arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Stop/1.0 - // | arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Terminate/1.0 - // | arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Reboot/1.0 + // Valid Values (for use with IAM roles): arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Stop/1.0 + // | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Terminate/1.0 + // | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Reboot/1.0 AlarmActions []*string `type:"list"` // The description for the alarm. 
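The new `Values` and `Counts` fields on `MetricDatum`, described earlier for `PutMetricData`, let a single request carry a compressed distribution of observations. A small sketch under assumed names follows; the `MyApp/Frontend` namespace and `RequestLatency` metric are invented for illustration.

```go
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := cloudwatch.New(sess)

	// Publish a latency distribution: each entry in Values is a distinct observed
	// value, and the matching entry in Counts is how many times it occurred
	// during the period. Namespace and metric name are placeholders.
	_, err := svc.PutMetricData(&cloudwatch.PutMetricDataInput{
		Namespace: aws.String("MyApp/Frontend"),
		MetricData: []*cloudwatch.MetricDatum{
			{
				MetricName: aws.String("RequestLatency"),
				Timestamp:  aws.Time(time.Now()),
				Unit:       aws.String(cloudwatch.StandardUnitMilliseconds),
				Values:     []*float64{aws.Float64(12), aws.Float64(45), aws.Float64(310)},
				Counts:     []*float64{aws.Float64(90), aws.Float64(8), aws.Float64(2)},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```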
@@ -3556,7 +4413,10 @@ type PutMetricAlarmInput struct { // ComparisonOperator is a required field ComparisonOperator *string `type:"string" required:"true" enum:"ComparisonOperator"` - // The number of datapoints that must be breaching to trigger the alarm. + // The number of datapoints that must be breaching to trigger the alarm. This + // is used only if you are setting an "M out of N" alarm. In that case, this + // value is the M. For more information, see Evaluating an Alarm (http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html#alarm-evaluation) + // in the Amazon CloudWatch User Guide. DatapointsToAlarm *int64 `min:"1" type:"integer"` // The dimensions for the metric associated with the alarm. @@ -3573,6 +4433,10 @@ type PutMetricAlarmInput struct { EvaluateLowSampleCountPercentile *string `min:"1" type:"string"` // The number of periods over which data is compared to the specified threshold. + // If you are setting an alarm which requires that a number of consecutive data + // points be breaching to trigger the alarm, this value specifies that number. + // If you are setting an "M out of N" alarm, this value is the N. + // // An alarm's total current evaluation period can be no longer than one day, // so this number multiplied by Period cannot be more than 86,400 seconds. // @@ -3590,11 +4454,11 @@ type PutMetricAlarmInput struct { // // Valid Values: arn:aws:automate:region:ec2:stop | arn:aws:automate:region:ec2:terminate // | arn:aws:automate:region:ec2:recover | arn:aws:sns:region:account-id:sns-topic-name - // | arn:aws:autoscaling:region:account-id:scalingPolicy:policy-id autoScalingGroupName/group-friendly-name:policyName/policy-friendly-name + // | arn:aws:autoscaling:region:account-id:scalingPolicy:policy-idautoScalingGroupName/group-friendly-name:policyName/policy-friendly-name // - // Valid Values (for use with IAM roles): arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Stop/1.0 - // | arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Terminate/1.0 - // | arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Reboot/1.0 + // Valid Values (for use with IAM roles): >arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Stop/1.0 + // | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Terminate/1.0 + // | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Reboot/1.0 InsufficientDataActions []*string `type:"list"` // The name for the metric associated with the alarm. 
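`DatapointsToAlarm` and `EvaluationPeriods` together express the "M out of N" behaviour documented above. Here is a hedged example of a 3-out-of-5 CPU alarm; the alarm name, instance ID, and SNS topic ARN are placeholders, not values prescribed by this changeset.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := cloudwatch.New(sess)

	// "3 out of 5" alarm: DatapointsToAlarm is M, EvaluationPeriods is N.
	_, err := svc.PutMetricAlarm(&cloudwatch.PutMetricAlarmInput{
		AlarmName:  aws.String("ec2-high-cpu"),
		Namespace:  aws.String("AWS/EC2"),
		MetricName: aws.String("CPUUtilization"),
		Dimensions: []*cloudwatch.Dimension{
			{Name: aws.String("InstanceId"), Value: aws.String("i-0123456789abcdef0")},
		},
		Statistic:          aws.String(cloudwatch.StatisticAverage),
		Period:             aws.Int64(300),
		EvaluationPeriods:  aws.Int64(5), // N
		DatapointsToAlarm:  aws.Int64(3), // M
		Threshold:          aws.Float64(80),
		ComparisonOperator: aws.String(cloudwatch.ComparisonOperatorGreaterThanThreshold),
		AlarmActions:       []*string{aws.String("arn:aws:sns:us-east-1:123456789012:ops-alerts")},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```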
@@ -3612,18 +4476,18 @@ type PutMetricAlarmInput struct { // // Valid Values: arn:aws:automate:region:ec2:stop | arn:aws:automate:region:ec2:terminate // | arn:aws:automate:region:ec2:recover | arn:aws:sns:region:account-id:sns-topic-name - // | arn:aws:autoscaling:region:account-id:scalingPolicy:policy-id autoScalingGroupName/group-friendly-name:policyName/policy-friendly-name + // | arn:aws:autoscaling:region:account-id:scalingPolicy:policy-idautoScalingGroupName/group-friendly-name:policyName/policy-friendly-name // - // Valid Values (for use with IAM roles): arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Stop/1.0 - // | arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Terminate/1.0 - // | arn:aws:swf:region:{account-id}:action/actions/AWS_EC2.InstanceId.Reboot/1.0 + // Valid Values (for use with IAM roles): arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Stop/1.0 + // | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Terminate/1.0 + // | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Reboot/1.0 OKActions []*string `type:"list"` // The period, in seconds, over which the specified statistic is applied. Valid // values are 10, 30, and any multiple of 60. // // Be sure to specify 10 or 30 only for metrics that are stored by a PutMetricData - // call with a StorageResolution of 1. If you specify a Period of 10 or 30 for + // call with a StorageResolution of 1. If you specify a period of 10 or 30 for // a metric that does not have sub-minute resolution, the alarm still attempts // to gather data at the period rate that you specify. In this case, it does // not receive data for the attempts that do not correspond to a one-minute @@ -3873,7 +4737,8 @@ func (s PutMetricAlarmOutput) GoString() string { type PutMetricDataInput struct { _ struct{} `type:"structure"` - // The data for the metric. + // The data for the metric. The array can include no more than 20 metrics per + // call. // // MetricData is a required field MetricData []*MetricDatum `type:"list" required:"true"` @@ -4151,6 +5016,14 @@ const ( HistoryItemTypeAction = "Action" ) +const ( + // ScanByTimestampDescending is a ScanBy enum value + ScanByTimestampDescending = "TimestampDescending" + + // ScanByTimestampAscending is a ScanBy enum value + ScanByTimestampAscending = "TimestampAscending" +) + const ( // StandardUnitSeconds is a StandardUnit enum value StandardUnitSeconds = "Seconds" @@ -4261,3 +5134,14 @@ const ( // StatisticMaximum is a Statistic enum value StatisticMaximum = "Maximum" ) + +const ( + // StatusCodeComplete is a StatusCode enum value + StatusCodeComplete = "Complete" + + // StatusCodeInternalError is a StatusCode enum value + StatusCodeInternalError = "InternalError" + + // StatusCodePartialData is a StatusCode enum value + StatusCodePartialData = "PartialData" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatch/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatch/service.go index 4b0aa76edcd..0d478662240 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatch/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatch/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "monitoring" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "monitoring" // Name of service. 
+ EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CloudWatch" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CloudWatch client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/api.go index 0a933b7474e..61e522a7b85 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/api.go @@ -17,8 +17,8 @@ const opDeleteRule = "DeleteRule" // DeleteRuleRequest generates a "aws/request.Request" representing the // client's request for the DeleteRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -61,11 +61,10 @@ func (c *CloudWatchEvents) DeleteRuleRequest(input *DeleteRuleInput) (req *reque // // Deletes the specified rule. // -// You must remove all targets from a rule using RemoveTargets before you can -// delete the rule. +// Before you can delete the rule, you must remove all targets, using RemoveTargets. // // When you delete a rule, incoming events might continue to match to the deleted -// rule. Please allow a short period of time for changes to take effect. +// rule. Allow a short period of time for changes to take effect. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -107,8 +106,8 @@ const opDescribeEventBus = "DescribeEventBus" // DescribeEventBusRequest generates a "aws/request.Request" representing the // client's request for the DescribeEventBus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -191,8 +190,8 @@ const opDescribeRule = "DescribeRule" // DescribeRuleRequest generates a "aws/request.Request" representing the // client's request for the DescribeRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -233,6 +232,9 @@ func (c *CloudWatchEvents) DescribeRuleRequest(input *DescribeRuleInput) (req *r // // Describes the specified rule. // +// DescribeRule does not list the targets of a rule. To see the targets associated +// with a rule, use ListTargetsByRule. 
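The note added to `DescribeRule` points callers at `ListTargetsByRule` for a rule's targets. A minimal sketch of that two-call pattern, assuming a configured session; the rule name is a placeholder:

```go
// Minimal sketch: DescribeRule returns the rule itself; its targets are
// fetched separately with ListTargetsByRule.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchevents"
)

func main() {
	svc := cloudwatchevents.New(session.Must(session.NewSession()))

	rule, err := svc.DescribeRule(&cloudwatchevents.DescribeRuleInput{
		Name: aws.String("example-rule"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("schedule:", aws.StringValue(rule.ScheduleExpression))

	// Targets are not included in the DescribeRule output.
	targets, err := svc.ListTargetsByRule(&cloudwatchevents.ListTargetsByRuleInput{
		Rule: aws.String("example-rule"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range targets.Targets {
		fmt.Println("target:", aws.StringValue(t.Arn))
	}
}
```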
+// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -273,8 +275,8 @@ const opDisableRule = "DisableRule" // DisableRuleRequest generates a "aws/request.Request" representing the // client's request for the DisableRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -319,7 +321,7 @@ func (c *CloudWatchEvents) DisableRuleRequest(input *DisableRuleInput) (req *req // won't self-trigger if it has a schedule expression. // // When you disable a rule, incoming events might continue to match to the disabled -// rule. Please allow a short period of time for changes to take effect. +// rule. Allow a short period of time for changes to take effect. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -364,8 +366,8 @@ const opEnableRule = "EnableRule" // EnableRuleRequest generates a "aws/request.Request" representing the // client's request for the EnableRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -409,8 +411,8 @@ func (c *CloudWatchEvents) EnableRuleRequest(input *EnableRuleInput) (req *reque // Enables the specified rule. If the rule does not exist, the operation fails. // // When you enable a rule, incoming events might not immediately start matching -// to a newly enabled rule. Please allow a short period of time for changes -// to take effect. +// to a newly enabled rule. Allow a short period of time for changes to take +// effect. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -455,8 +457,8 @@ const opListRuleNamesByTarget = "ListRuleNamesByTarget" // ListRuleNamesByTargetRequest generates a "aws/request.Request" representing the // client's request for the ListRuleNamesByTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -535,8 +537,8 @@ const opListRules = "ListRules" // ListRulesRequest generates a "aws/request.Request" representing the // client's request for the ListRules operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -578,6 +580,9 @@ func (c *CloudWatchEvents) ListRulesRequest(input *ListRulesInput) (req *request // Lists your Amazon CloudWatch Events rules. You can either list all the rules // or you can provide a prefix to match to the rule names. // +// ListRules does not list the targets of a rule. To see the targets associated +// with a rule, use ListTargetsByRule. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -615,8 +620,8 @@ const opListTargetsByRule = "ListTargetsByRule" // ListTargetsByRuleRequest generates a "aws/request.Request" representing the // client's request for the ListTargetsByRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -697,8 +702,8 @@ const opPutEvents = "PutEvents" // PutEventsRequest generates a "aws/request.Request" representing the // client's request for the PutEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -777,8 +782,8 @@ const opPutPermission = "PutPermission" // PutPermissionRequest generates a "aws/request.Request" representing the // client's request for the PutPermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -819,18 +824,28 @@ func (c *CloudWatchEvents) PutPermissionRequest(input *PutPermissionInput) (req // PutPermission API operation for Amazon CloudWatch Events. // -// Running PutPermission permits the specified AWS account to put events to -// your account's default event bus. CloudWatch Events rules in your account -// are triggered by these events arriving to your default event bus. +// Running PutPermission permits the specified AWS account or AWS organization +// to put events to your account's default event bus. CloudWatch Events rules +// in your account are triggered by these events arriving to your default event +// bus. // // For another account to send events to your account, that external account // must have a CloudWatch Events rule with your account's default event bus // as a target. // // To enable multiple AWS accounts to put events to your default event bus, -// run PutPermission once for each of these accounts. +// run PutPermission once for each of these accounts. 
Or, if all the accounts +// are members of the same AWS organization, you can run PutPermission once +// specifying Principal as "*" and specifying the AWS organization ID in Condition, +// to grant permissions to all accounts in that organization. +// +// If you grant permissions using an organization, then accounts in that organization +// must specify a RoleArn with proper permissions when they use PutTarget to +// add your account's event bus as a target. For more information, see Sending +// and Receiving Events Between AWS Accounts (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEvents-CrossAccountEventDelivery.html) +// in the Amazon CloudWatch Events User Guide. // -// The permission policy on the default event bus cannot exceed 10KB in size. +// The permission policy on the default event bus cannot exceed 10 KB in size. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -878,8 +893,8 @@ const opPutRule = "PutRule" // PutRuleRequest generates a "aws/request.Request" representing the // client's request for the PutRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -921,14 +936,14 @@ func (c *CloudWatchEvents) PutRuleRequest(input *PutRuleInput) (req *request.Req // Creates or updates the specified rule. Rules are enabled by default, or based // on value of the state. You can disable a rule using DisableRule. // -// If you are updating an existing rule, the rule is completely replaced with -// what you specify in this PutRule command. If you omit arguments in PutRule, -// the old values for those arguments are not kept. Instead, they are replaced -// with null values. +// If you are updating an existing rule, the rule is replaced with what you +// specify in this PutRule command. If you omit arguments in PutRule, the old +// values for those arguments are not kept. Instead, they are replaced with +// null values. // // When you create or update a rule, incoming events might not immediately start -// matching to new or updated rules. Please allow a short period of time for -// changes to take effect. +// matching to new or updated rules. Allow a short period of time for changes +// to take effect. // // A rule must contain at least an EventPattern or ScheduleExpression. Rules // with EventPatterns are triggered when a matching event is observed. Rules @@ -941,6 +956,20 @@ func (c *CloudWatchEvents) PutRuleRequest(input *PutRuleInput) (req *request.Req // and rules. Be sure to use the correct ARN characters when creating event // patterns so that they match the ARN syntax in the event you want to match. // +// In CloudWatch Events, it is possible to create rules that lead to infinite +// loops, where a rule is fired repeatedly. For example, a rule might detect +// that ACLs have changed on an S3 bucket, and trigger software to change them +// to the desired state. If the rule is not written carefully, the subsequent +// change to the ACLs fires the rule again, creating an infinite loop. +// +// To prevent this, write the rules so that the triggered actions do not re-fire +// the same rule. 
For example, your rule could fire only if ACLs are found to +// be in a bad state, instead of after any change. +// +// An infinite loop can quickly cause higher than expected charges. We recommend +// that you use budgeting, which alerts you when charges exceed your specified +// limit. For more information, see Managing Your Costs with Budgets (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-managing-costs.html). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -987,8 +1016,8 @@ const opPutTargets = "PutTargets" // PutTargetsRequest generates a "aws/request.Request" representing the // client's request for the PutTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1036,11 +1065,15 @@ func (c *CloudWatchEvents) PutTargetsRequest(input *PutTargetsInput) (req *reque // // * EC2 instances // +// * SSM Run Command +// +// * SSM Automation +// // * AWS Lambda functions // -// * Streams in Amazon Kinesis Streams +// * Data streams in Amazon Kinesis Data Streams // -// * Delivery streams in Amazon Kinesis Firehose +// * Data delivery streams in Amazon Kinesis Data Firehose // // * Amazon ECS tasks // @@ -1048,50 +1081,59 @@ func (c *CloudWatchEvents) PutTargetsRequest(input *PutTargetsInput) (req *reque // // * AWS Batch jobs // -// * Pipelines in Amazon Code Pipeline +// * AWS CodeBuild projects +// +// * Pipelines in AWS CodePipeline // // * Amazon Inspector assessment templates // // * Amazon SNS topics // -// * Amazon SQS queues +// * Amazon SQS queues, including FIFO queues // // * The default event bus of another AWS account // -// Note that creating rules with built-in targets is supported only in the AWS -// Management Console. +// Creating rules with built-in targets is supported only in the AWS Management +// Console. The built-in targets are EC2 CreateSnapshot API call, EC2 RebootInstances +// API call, EC2 StopInstances API call, and EC2 TerminateInstances API call. // // For some target types, PutTargets provides target-specific parameters. If -// the target is an Amazon Kinesis stream, you can optionally specify which -// shard the event goes to by using the KinesisParameters argument. To invoke -// a command on multiple EC2 instances with one rule, you can use the RunCommandParameters +// the target is a Kinesis data stream, you can optionally specify which shard +// the event goes to by using the KinesisParameters argument. To invoke a command +// on multiple EC2 instances with one rule, you can use the RunCommandParameters // field. // // To be able to make API calls against the resources that you own, Amazon CloudWatch // Events needs the appropriate permissions. For AWS Lambda and Amazon SNS resources, -// CloudWatch Events relies on resource-based policies. For EC2 instances, Amazon -// Kinesis streams, and AWS Step Functions state machines, CloudWatch Events -// relies on IAM roles that you specify in the RoleARN argument in PutTargets. 
-// For more information, see Authentication and Access Control (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/auth-and-access-control-cwe.html) +// CloudWatch Events relies on resource-based policies. For EC2 instances, Kinesis +// data streams, and AWS Step Functions state machines, CloudWatch Events relies +// on IAM roles that you specify in the RoleARN argument in PutTargets. For +// more information, see Authentication and Access Control (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/auth-and-access-control-cwe.html) // in the Amazon CloudWatch Events User Guide. // // If another AWS account is in the same region and has granted you permission -// (using PutPermission), you can send events to that account by setting that -// account's event bus as a target of the rules in your account. To send the -// matched events to the other account, specify that account's event bus as -// the Arn when you run PutTargets. If your account sends events to another -// account, your account is charged for each sent event. Each event sent to -// antoher account is charged as a custom event. The account receiving the event -// is not charged. For more information on pricing, see Amazon CloudWatch Pricing -// (https://aws.amazon.com/cloudwatch/pricing/). +// (using PutPermission), you can send events to that account. Set that account's +// event bus as a target of the rules in your account. To send the matched events +// to the other account, specify that account's event bus as the Arn value when +// you run PutTargets. If your account sends events to another account, your +// account is charged for each sent event. Each event sent to another account +// is charged as a custom event. The account receiving the event is not charged. +// For more information, see Amazon CloudWatch Pricing (https://aws.amazon.com/cloudwatch/pricing/). +// +// If you are setting the event bus of another account as the target, and that +// account granted permission to your account through an organization instead +// of directly by the account ID, then you must specify a RoleArn with proper +// permissions in the Target structure. For more information, see Sending and +// Receiving Events Between AWS Accounts (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEvents-CrossAccountEventDelivery.html) +// in the Amazon CloudWatch Events User Guide. // // For more information about enabling cross-account events, see PutPermission. // -// Input, InputPath and InputTransformer are mutually exclusive and optional +// Input, InputPath, and InputTransformer are mutually exclusive and optional // parameters of a target. When a rule is triggered due to a matched event: // // * If none of the following arguments are specified for a target, then -// the entire event is passed to the target in JSON form (unless the target +// the entire event is passed to the target in JSON format (unless the target // is Amazon EC2 Run Command or Amazon ECS task, in which case nothing from // the event is passed to the target). // @@ -1110,8 +1152,8 @@ func (c *CloudWatchEvents) PutTargetsRequest(input *PutTargetsInput) (req *reque // not bracket notation. // // When you add targets to a rule and the associated rule triggers soon after, -// new or updated targets might not be immediately invoked. Please allow a short -// period of time for changes to take effect. +// new or updated targets might not be immediately invoked. Allow a short period +// of time for changes to take effect. 
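The reworked `PutTargets` documentation covers adding another account's default event bus as a target, and the `RoleArn` that is required when the receiving account granted permission through an AWS organization rather than by account ID. A minimal sketch of such a call; the rule name, account IDs, and role ARN are placeholders:

```go
// Minimal sketch: add another account's default event bus as a rule target.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchevents"
)

func main() {
	svc := cloudwatchevents.New(session.Must(session.NewSession()))

	out, err := svc.PutTargets(&cloudwatchevents.PutTargetsInput{
		Rule: aws.String("example-rule"),
		Targets: []*cloudwatchevents.Target{
			{
				Id:  aws.String("cross-account-bus"),
				Arn: aws.String("arn:aws:events:us-east-1:111111111111:event-bus/default"),
				// A role with permission to put events to the other account's
				// bus; needed when that account granted access via an
				// organization instead of your account ID directly.
				RoleArn: aws.String("arn:aws:iam::123456789012:role/example-events-role"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if aws.Int64Value(out.FailedEntryCount) > 0 {
		log.Printf("%d target(s) failed", aws.Int64Value(out.FailedEntryCount))
	}
}
```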
// // This action can partially fail if too many requests are made at the same // time. If that happens, FailedEntryCount is non-zero in the response and each @@ -1164,8 +1206,8 @@ const opRemovePermission = "RemovePermission" // RemovePermissionRequest generates a "aws/request.Request" representing the // client's request for the RemovePermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1254,8 +1296,8 @@ const opRemoveTargets = "RemoveTargets" // RemoveTargetsRequest generates a "aws/request.Request" representing the // client's request for the RemoveTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1298,8 +1340,8 @@ func (c *CloudWatchEvents) RemoveTargetsRequest(input *RemoveTargetsInput) (req // those targets are no longer be invoked. // // When you remove a target, when the associated rule triggers, removed targets -// might continue to be invoked. Please allow a short period of time for changes -// to take effect. +// might continue to be invoked. Allow a short period of time for changes to +// take effect. // // This action can partially fail if too many requests are made at the same // time. If that happens, FailedEntryCount is non-zero in the response and each @@ -1349,8 +1391,8 @@ const opTestEventPattern = "TestEventPattern" // TestEventPatternRequest generates a "aws/request.Request" representing the // client's request for the TestEventPattern operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1432,6 +1474,71 @@ func (c *CloudWatchEvents) TestEventPatternWithContext(ctx aws.Context, input *T return out, req.Send() } +// This structure specifies the VPC subnets and security groups for the task, +// and whether a public IP address is to be used. This structure is relevant +// only for ECS tasks that use the awsvpc network mode. +type AwsVpcConfiguration struct { + _ struct{} `type:"structure"` + + // Specifies whether the task's elastic network interface receives a public + // IP address. You can specify ENABLED only when LaunchType in EcsParameters + // is set to FARGATE. + AssignPublicIp *string `type:"string" enum:"AssignPublicIp"` + + // Specifies the security groups associated with the task. These security groups + // must all be in the same VPC. You can specify as many as five security groups. + // If you do not specify a security group, the default security group for the + // VPC is used. + SecurityGroups []*string `type:"list"` + + // Specifies the subnets associated with the task. 
These subnets must all be + // in the same VPC. You can specify as many as 16 subnets. + // + // Subnets is a required field + Subnets []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s AwsVpcConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AwsVpcConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AwsVpcConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AwsVpcConfiguration"} + if s.Subnets == nil { + invalidParams.Add(request.NewErrParamRequired("Subnets")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssignPublicIp sets the AssignPublicIp field's value. +func (s *AwsVpcConfiguration) SetAssignPublicIp(v string) *AwsVpcConfiguration { + s.AssignPublicIp = &v + return s +} + +// SetSecurityGroups sets the SecurityGroups field's value. +func (s *AwsVpcConfiguration) SetSecurityGroups(v []*string) *AwsVpcConfiguration { + s.SecurityGroups = v + return s +} + +// SetSubnets sets the Subnets field's value. +func (s *AwsVpcConfiguration) SetSubnets(v []*string) *AwsVpcConfiguration { + s.Subnets = v + return s +} + // The array properties for the submitted job, such as the size of the array. // The array size can be between 2 and 10,000. If you specify array properties // for a job, it becomes an array job. This parameter is used only if the target @@ -1484,7 +1591,7 @@ type BatchParameters struct { // The retry strategy to use for failed jobs, if the target is an AWS Batch // job. The retry strategy is the number of times to retry the failed job execution. - // Valid values are 1 to 10. When you specify a retry strategy here, it overrides + // Valid values are 1–10. When you specify a retry strategy here, it overrides // the retry strategy defined in the job definition. RetryStrategy *BatchRetryStrategy `type:"structure"` } @@ -1546,7 +1653,7 @@ type BatchRetryStrategy struct { _ struct{} `type:"structure"` // The number of times to attempt to retry, if the job fails. Valid values are - // 1 to 10. + // 1–10. Attempts *int64 `type:"integer"` } @@ -1566,6 +1673,80 @@ func (s *BatchRetryStrategy) SetAttempts(v int64) *BatchRetryStrategy { return s } +// A JSON string which you can use to limit the event bus permissions you are +// granting to only accounts that fulfill the condition. Currently, the only +// supported condition is membership in a certain AWS organization. The string +// must contain Type, Key, and Value fields. The Value field specifies the ID +// of the AWS organization. Following is an example value for Condition: +// +// '{"Type" : "StringEquals", "Key": "aws:PrincipalOrgID", "Value": "o-1234567890"}' +type Condition struct { + _ struct{} `type:"structure"` + + // Specifies the key for the condition. Currently the only supported key is + // aws:PrincipalOrgID. + // + // Key is a required field + Key *string `type:"string" required:"true"` + + // Specifies the type of condition. Currently the only supported value is StringEquals. + // + // Type is a required field + Type *string `type:"string" required:"true"` + + // Specifies the value for the key. Currently, this must be the ID of the organization. 
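The new `Condition` structure is what `PutPermission` accepts for organization-wide grants: `Principal` set to `"*"` plus a `StringEquals` condition on `aws:PrincipalOrgID`. A minimal sketch of that call, assuming a configured session; the statement ID and organization ID are placeholders:

```go
// Minimal sketch: grant every account in an AWS organization permission to
// put events to this account's default event bus.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchevents"
)

func main() {
	svc := cloudwatchevents.New(session.Must(session.NewSession()))

	_, err := svc.PutPermission(&cloudwatchevents.PutPermissionInput{
		Action:      aws.String("events:PutEvents"),
		Principal:   aws.String("*"),
		StatementId: aws.String("AllowOrgAccounts"),
		Condition: &cloudwatchevents.Condition{
			Type:  aws.String("StringEquals"),
			Key:   aws.String("aws:PrincipalOrgID"),
			Value: aws.String("o-1234567890"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```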
+ // + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Condition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Condition) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Condition) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Condition"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Condition) SetKey(v string) *Condition { + s.Key = &v + return s +} + +// SetType sets the Type field's value. +func (s *Condition) SetType(v string) *Condition { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Condition) SetValue(v string) *Condition { + s.Value = &v + return s +} + type DeleteRuleInput struct { _ struct{} `type:"structure"` @@ -1851,16 +2032,43 @@ func (s DisableRuleOutput) GoString() string { return s.String() } -// The custom parameters to be used when the target is an Amazon ECS cluster. +// The custom parameters to be used when the target is an Amazon ECS task. type EcsParameters struct { _ struct{} `type:"structure"` - // The number of tasks to create based on the TaskDefinition. The default is - // one. + // Specifies an ECS task group for the task. The maximum length is 255 characters. + Group *string `type:"string"` + + // Specifies the launch type on which your task is running. The launch type + // that you specify here must match one of the launch type (compatibilities) + // of the target task. The FARGATE value is supported only in the Regions where + // AWS Fargate with Amazon ECS is supported. For more information, see AWS Fargate + // on Amazon ECS (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS-Fargate.html) + // in the Amazon Elastic Container Service Developer Guide. + LaunchType *string `type:"string" enum:"LaunchType"` + + // Use this structure if the ECS task uses the awsvpc network mode. This structure + // specifies the VPC subnets and security groups associated with the task, and + // whether a public IP address is to be used. This structure is required if + // LaunchType is FARGATE because the awsvpc mode is required for Fargate tasks. + // + // If you specify NetworkConfiguration when the target ECS task does not use + // the awsvpc network mode, the task fails. + NetworkConfiguration *NetworkConfiguration `type:"structure"` + + // Specifies the platform version for the task. Specify only the numeric portion + // of the platform version, such as 1.1.0. + // + // This structure is used only if LaunchType is FARGATE. For more information + // about valid platform versions, see AWS Fargate Platform Versions (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html) + // in the Amazon Elastic Container Service Developer Guide. + PlatformVersion *string `type:"string"` + + // The number of tasks to create based on TaskDefinition. The default is 1. TaskCount *int64 `min:"1" type:"integer"` // The ARN of the task definition to use if the event target is an Amazon ECS - // cluster. + // task. 
// // TaskDefinitionArn is a required field TaskDefinitionArn *string `min:"1" type:"string" required:"true"` @@ -1888,6 +2096,11 @@ func (s *EcsParameters) Validate() error { if s.TaskDefinitionArn != nil && len(*s.TaskDefinitionArn) < 1 { invalidParams.Add(request.NewErrParamMinLen("TaskDefinitionArn", 1)) } + if s.NetworkConfiguration != nil { + if err := s.NetworkConfiguration.Validate(); err != nil { + invalidParams.AddNested("NetworkConfiguration", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -1895,6 +2108,30 @@ func (s *EcsParameters) Validate() error { return nil } +// SetGroup sets the Group field's value. +func (s *EcsParameters) SetGroup(v string) *EcsParameters { + s.Group = &v + return s +} + +// SetLaunchType sets the LaunchType field's value. +func (s *EcsParameters) SetLaunchType(v string) *EcsParameters { + s.LaunchType = &v + return s +} + +// SetNetworkConfiguration sets the NetworkConfiguration field's value. +func (s *EcsParameters) SetNetworkConfiguration(v *NetworkConfiguration) *EcsParameters { + s.NetworkConfiguration = v + return s +} + +// SetPlatformVersion sets the PlatformVersion field's value. +func (s *EcsParameters) SetPlatformVersion(v string) *EcsParameters { + s.PlatformVersion = &v + return s +} + // SetTaskCount sets the TaskCount field's value. func (s *EcsParameters) SetTaskCount(v int64) *EcsParameters { s.TaskCount = &v @@ -1967,13 +2204,53 @@ func (s EnableRuleOutput) GoString() string { type InputTransformer struct { _ struct{} `type:"structure"` - // Map of JSON paths to be extracted from the event. These are key-value pairs, - // where each value is a JSON path. You must use JSON dot notation, not bracket - // notation. + // Map of JSON paths to be extracted from the event. You can then insert these + // in the template in InputTemplate to produce the output you want to be sent + // to the target. + // + // InputPathsMap is an array key-value pairs, where each value is a valid JSON + // path. You can have as many as 10 key-value pairs. You must use JSON dot notation, + // not bracket notation. + // + // The keys cannot start with "AWS." InputPathsMap map[string]*string `type:"map"` - // Input template where you can use the values of the keys from InputPathsMap - // to customize the data sent to the target. + // Input template where you specify placeholders that will be filled with the + // values of the keys from InputPathsMap to customize the data sent to the target. + // Enclose each InputPathsMaps value in brackets: The InputTemplate + // must be valid JSON. + // + // If InputTemplate is a JSON object (surrounded by curly braces), the following + // restrictions apply: + // + // * The placeholder cannot be used as an object key. + // + // * Object values cannot include quote marks. + // + // The following example shows the syntax for using InputPathsMap and InputTemplate. 
+ // + // "InputTransformer": + // + // { + // + // "InputPathsMap": {"instance": "$.detail.instance","status": "$.detail.status"}, + // + // "InputTemplate": " is in state " + // + // } + // + // To have the InputTemplate include quote marks within a JSON string, escape + // each quote marks with a slash, as in the following example: + // + // "InputTransformer": + // + // { + // + // "InputPathsMap": {"instance": "$.detail.instance","status": "$.detail.status"}, + // + // "InputTemplate": " is in state \"\"" + // + // } // // InputTemplate is a required field InputTemplate *string `min:"1" type:"string" required:"true"` @@ -2018,9 +2295,9 @@ func (s *InputTransformer) SetInputTemplate(v string) *InputTransformer { } // This object enables you to specify a JSON path to extract from the event -// and use as the partition key for the Amazon Kinesis stream, so that you can -// control the shard to which the event goes. If you do not include this parameter, -// the default is to use the eventId as the partition key. +// and use as the partition key for the Amazon Kinesis data stream, so that +// you can control the shard to which the event goes. If you do not include +// this parameter, the default is to use the eventId as the partition key. type KinesisParameters struct { _ struct{} `type:"structure"` @@ -2350,6 +2627,47 @@ func (s *ListTargetsByRuleOutput) SetTargets(v []*Target) *ListTargetsByRuleOutp return s } +// This structure specifies the network configuration for an ECS task. +type NetworkConfiguration struct { + _ struct{} `type:"structure"` + + // Use this structure to specify the VPC subnets and security groups for the + // task, and whether a public IP address is to be used. This structure is relevant + // only for ECS tasks that use the awsvpc network mode. + AwsvpcConfiguration *AwsVpcConfiguration `locationName:"awsvpcConfiguration" type:"structure"` +} + +// String returns the string representation +func (s NetworkConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NetworkConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *NetworkConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NetworkConfiguration"} + if s.AwsvpcConfiguration != nil { + if err := s.AwsvpcConfiguration.Validate(); err != nil { + invalidParams.AddNested("AwsvpcConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAwsvpcConfiguration sets the AwsvpcConfiguration field's value. +func (s *NetworkConfiguration) SetAwsvpcConfiguration(v *AwsVpcConfiguration) *NetworkConfiguration { + s.AwsvpcConfiguration = v + return s +} + type PutEventsInput struct { _ struct{} `type:"structure"` @@ -2442,12 +2760,12 @@ type PutEventsRequestEntry struct { // primarily concerns. Any number, including zero, may be present. Resources []*string `type:"list"` - // The source of the event. + // The source of the event. This field is required. Source *string `type:"string"` - // The timestamp of the event, per RFC3339 (https://www.rfc-editor.org/rfc/rfc3339.txt). - // If no timestamp is provided, the timestamp of the PutEvents call is used. - Time *time.Time `type:"timestamp" timestampFormat:"unix"` + // The time stamp of the event, per RFC3339 (https://www.rfc-editor.org/rfc/rfc3339.txt). 
+ // If no time stamp is provided, the time stamp of the PutEvents call is used. + Time *time.Time `type:"timestamp"` } // String returns the string representation @@ -2541,15 +2859,28 @@ type PutPermissionInput struct { // Action is a required field Action *string `min:"1" type:"string" required:"true"` + // This parameter enables you to limit the permission to accounts that fulfill + // a certain condition, such as being a member of a certain AWS organization. + // For more information about AWS Organizations, see What Is AWS Organizations + // (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) + // in the AWS Organizations User Guide. + // + // If you specify Condition with an AWS organization ID, and specify "*" as + // the value for Principal, you grant permission to all the accounts in the + // named organization. + // + // The Condition is a JSON string which must contain Type, Key, and Value fields. + Condition *Condition `type:"structure"` + // The 12-digit AWS account ID that you are permitting to put events to your // default event bus. Specify "*" to permit any account to put events to your // default event bus. // - // If you specify "*", avoid creating rules that may match undesirable events. - // To create more secure rules, make sure that the event pattern for each rule - // contains an account field with a specific account ID from which to receive - // events. Rules with an account field do not match any events sent from other - // accounts. + // If you specify "*" without specifying Condition, avoid creating rules that + // may match undesirable events. To create more secure rules, make sure that + // the event pattern for each rule contains an account field with a specific + // account ID from which to receive events. Rules with an account field do not + // match any events sent from other accounts. // // Principal is a required field Principal *string `min:"1" type:"string" required:"true"` @@ -2593,6 +2924,11 @@ func (s *PutPermissionInput) Validate() error { if s.StatementId != nil && len(*s.StatementId) < 1 { invalidParams.Add(request.NewErrParamMinLen("StatementId", 1)) } + if s.Condition != nil { + if err := s.Condition.Validate(); err != nil { + invalidParams.AddNested("Condition", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -2606,6 +2942,12 @@ func (s *PutPermissionInput) SetAction(v string) *PutPermissionInput { return s } +// SetCondition sets the Condition field's value. +func (s *PutPermissionInput) SetCondition(v *Condition) *PutPermissionInput { + s.Condition = v + return s +} + // SetPrincipal sets the Principal field's value. func (s *PutPermissionInput) SetPrincipal(v string) *PutPermissionInput { s.Principal = &v @@ -3275,10 +3617,40 @@ func (s *RunCommandTarget) SetValues(v []*string) *RunCommandTarget { return s } -// Targets are the resources to be invoked when a rule is triggered. Target -// types include EC2 instances, AWS Lambda functions, Amazon Kinesis streams, -// Amazon ECS tasks, AWS Step Functions state machines, Run Command, and built-in -// targets. +// This structure includes the custom parameter to be used when the target is +// an SQS FIFO queue. +type SqsParameters struct { + _ struct{} `type:"structure"` + + // The FIFO message group ID to use as the target. 
+ MessageGroupId *string `type:"string"` +} + +// String returns the string representation +func (s SqsParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SqsParameters) GoString() string { + return s.String() +} + +// SetMessageGroupId sets the MessageGroupId field's value. +func (s *SqsParameters) SetMessageGroupId(v string) *SqsParameters { + s.MessageGroupId = &v + return s +} + +// Targets are the resources to be invoked when a rule is triggered. For a complete +// list of services and resources that can be set as a target, see PutTargets. +// +// If you are setting the event bus of another account as the target, and that +// account granted permission to your account through an organization instead +// of directly by the account ID, then you must specify a RoleArn with proper +// permissions in the Target structure. For more information, see Sending and +// Receiving Events Between AWS Accounts (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEvents-CrossAccountEventDelivery.html) +// in the Amazon CloudWatch Events User Guide. type Target struct { _ struct{} `type:"structure"` @@ -3287,10 +3659,9 @@ type Target struct { // Arn is a required field Arn *string `min:"1" type:"string" required:"true"` - // Contains the job definition, job name, and other parameters if the event - // target is an AWS Batch job. For more information about AWS Batch, see Jobs - // (http://docs.aws.amazon.com/batch/latest/userguide/jobs.html) in the AWS - // Batch User Guide. + // If the event target is an AWS Batch job, this contains the job definition, + // job name, and other parameters. For more information, see Jobs (http://docs.aws.amazon.com/batch/latest/userguide/jobs.html) + // in the AWS Batch User Guide. BatchParameters *BatchParameters `type:"structure"` // Contains the Amazon ECS task definition and task count to be used, if the @@ -3319,9 +3690,9 @@ type Target struct { // then use that data to send customized input to the target. InputTransformer *InputTransformer `type:"structure"` - // The custom parameter you can use to control shard assignment, when the target - // is an Amazon Kinesis stream. If you do not include this parameter, the default - // is to use the eventId as the partition key. + // The custom parameter you can use to control the shard assignment, when the + // target is a Kinesis data stream. If you do not include this parameter, the + // default is to use the eventId as the partition key. KinesisParameters *KinesisParameters `type:"structure"` // The Amazon Resource Name (ARN) of the IAM role to be used for this target @@ -3331,6 +3702,12 @@ type Target struct { // Parameters used when you are using the rule to invoke Amazon EC2 Run Command. RunCommandParameters *RunCommandParameters `type:"structure"` + + // Contains the message group ID to use when the target is a FIFO queue. + // + // If you specify an SQS FIFO queue as a target, the queue must have content-based + // deduplication enabled. + SqsParameters *SqsParameters `type:"structure"` } // String returns the string representation @@ -3453,6 +3830,12 @@ func (s *Target) SetRunCommandParameters(v *RunCommandParameters) *Target { return s } +// SetSqsParameters sets the SqsParameters field's value. 
+func (s *Target) SetSqsParameters(v *SqsParameters) *Target { + s.SqsParameters = v + return s +} + type TestEventPatternInput struct { _ struct{} `type:"structure"` @@ -3529,6 +3912,22 @@ func (s *TestEventPatternOutput) SetResult(v bool) *TestEventPatternOutput { return s } +const ( + // AssignPublicIpEnabled is a AssignPublicIp enum value + AssignPublicIpEnabled = "ENABLED" + + // AssignPublicIpDisabled is a AssignPublicIp enum value + AssignPublicIpDisabled = "DISABLED" +) + +const ( + // LaunchTypeEc2 is a LaunchType enum value + LaunchTypeEc2 = "EC2" + + // LaunchTypeFargate is a LaunchType enum value + LaunchTypeFargate = "FARGATE" +) + const ( // RuleStateEnabled is a RuleState enum value RuleStateEnabled = "ENABLED" diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/doc.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/doc.go index b55c8709d1b..16976e0d1f9 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/doc.go @@ -13,8 +13,9 @@ // * Automatically invoke an AWS Lambda function to update DNS entries when // an event notifies you that Amazon EC2 instance enters the running state. // -// * Direct specific API records from CloudTrail to an Amazon Kinesis stream -// for detailed analysis of potential security or availability risks. +// * Direct specific API records from AWS CloudTrail to an Amazon Kinesis +// data stream for detailed analysis of potential security or availability +// risks. // // * Periodically invoke a built-in target to create a snapshot of an Amazon // EBS volume. diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/service.go index 1fb24f16b01..2a7f6969cd8 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchevents/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "events" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "events" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CloudWatch Events" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CloudWatchEvents client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/api.go index aaebe84d35d..f873b25cc9c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/api.go @@ -16,8 +16,8 @@ const opAssociateKmsKey = "AssociateKmsKey" // AssociateKmsKeyRequest generates a "aws/request.Request" representing the // client's request for the AssociateKmsKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -119,8 +119,8 @@ const opCancelExportTask = "CancelExportTask" // CancelExportTaskRequest generates a "aws/request.Request" representing the // client's request for the CancelExportTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -211,8 +211,8 @@ const opCreateExportTask = "CreateExportTask" // CreateExportTaskRequest generates a "aws/request.Request" representing the // client's request for the CreateExportTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -316,8 +316,8 @@ const opCreateLogGroup = "CreateLogGroup" // CreateLogGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateLogGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -430,8 +430,8 @@ const opCreateLogStream = "CreateLogStream" // CreateLogStreamRequest generates a "aws/request.Request" representing the // client's request for the CreateLogStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -531,8 +531,8 @@ const opDeleteDestination = "DeleteDestination" // DeleteDestinationRequest generates a "aws/request.Request" representing the // client's request for the DeleteDestination operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -623,8 +623,8 @@ const opDeleteLogGroup = "DeleteLogGroup" // DeleteLogGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteLogGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -714,8 +714,8 @@ const opDeleteLogStream = "DeleteLogStream" // DeleteLogStreamRequest generates a "aws/request.Request" representing the // client's request for the DeleteLogStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -805,8 +805,8 @@ const opDeleteMetricFilter = "DeleteMetricFilter" // DeleteMetricFilterRequest generates a "aws/request.Request" representing the // client's request for the DeleteMetricFilter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -895,8 +895,8 @@ const opDeleteResourcePolicy = "DeleteResourcePolicy" // DeleteResourcePolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteResourcePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -983,8 +983,8 @@ const opDeleteRetentionPolicy = "DeleteRetentionPolicy" // DeleteRetentionPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteRetentionPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1076,8 +1076,8 @@ const opDeleteSubscriptionFilter = "DeleteSubscriptionFilter" // DeleteSubscriptionFilterRequest generates a "aws/request.Request" representing the // client's request for the DeleteSubscriptionFilter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1166,8 +1166,8 @@ const opDescribeDestinations = "DescribeDestinations" // DescribeDestinationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDestinations operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1281,6 +1281,7 @@ func (c *CloudWatchLogs) DescribeDestinationsPages(input *DescribeDestinationsIn // for more information on using Contexts. func (c *CloudWatchLogs) DescribeDestinationsPagesWithContext(ctx aws.Context, input *DescribeDestinationsInput, fn func(*DescribeDestinationsOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ + EndPageOnSameToken: true, NewRequest: func() (*request.Request, error) { var inCpy *DescribeDestinationsInput if input != nil { @@ -1305,8 +1306,8 @@ const opDescribeExportTasks = "DescribeExportTasks" // DescribeExportTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeExportTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1388,8 +1389,8 @@ const opDescribeLogGroups = "DescribeLogGroups" // DescribeLogGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeLogGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1503,6 +1504,7 @@ func (c *CloudWatchLogs) DescribeLogGroupsPages(input *DescribeLogGroupsInput, f // for more information on using Contexts. func (c *CloudWatchLogs) DescribeLogGroupsPagesWithContext(ctx aws.Context, input *DescribeLogGroupsInput, fn func(*DescribeLogGroupsOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ + EndPageOnSameToken: true, NewRequest: func() (*request.Request, error) { var inCpy *DescribeLogGroupsInput if input != nil { @@ -1527,8 +1529,8 @@ const opDescribeLogStreams = "DescribeLogStreams" // DescribeLogStreamsRequest generates a "aws/request.Request" representing the // client's request for the DescribeLogStreams operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1649,6 +1651,7 @@ func (c *CloudWatchLogs) DescribeLogStreamsPages(input *DescribeLogStreamsInput, // for more information on using Contexts. 
func (c *CloudWatchLogs) DescribeLogStreamsPagesWithContext(ctx aws.Context, input *DescribeLogStreamsInput, fn func(*DescribeLogStreamsOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ + EndPageOnSameToken: true, NewRequest: func() (*request.Request, error) { var inCpy *DescribeLogStreamsInput if input != nil { @@ -1673,8 +1676,8 @@ const opDescribeMetricFilters = "DescribeMetricFilters" // DescribeMetricFiltersRequest generates a "aws/request.Request" representing the // client's request for the DescribeMetricFilters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1792,6 +1795,7 @@ func (c *CloudWatchLogs) DescribeMetricFiltersPages(input *DescribeMetricFilters // for more information on using Contexts. func (c *CloudWatchLogs) DescribeMetricFiltersPagesWithContext(ctx aws.Context, input *DescribeMetricFiltersInput, fn func(*DescribeMetricFiltersOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ + EndPageOnSameToken: true, NewRequest: func() (*request.Request, error) { var inCpy *DescribeMetricFiltersInput if input != nil { @@ -1816,8 +1820,8 @@ const opDescribeResourcePolicies = "DescribeResourcePolicies" // DescribeResourcePoliciesRequest generates a "aws/request.Request" representing the // client's request for the DescribeResourcePolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1898,8 +1902,8 @@ const opDescribeSubscriptionFilters = "DescribeSubscriptionFilters" // DescribeSubscriptionFiltersRequest generates a "aws/request.Request" representing the // client's request for the DescribeSubscriptionFilters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2017,6 +2021,7 @@ func (c *CloudWatchLogs) DescribeSubscriptionFiltersPages(input *DescribeSubscri // for more information on using Contexts. func (c *CloudWatchLogs) DescribeSubscriptionFiltersPagesWithContext(ctx aws.Context, input *DescribeSubscriptionFiltersInput, fn func(*DescribeSubscriptionFiltersOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ + EndPageOnSameToken: true, NewRequest: func() (*request.Request, error) { var inCpy *DescribeSubscriptionFiltersInput if input != nil { @@ -2041,8 +2046,8 @@ const opDisassociateKmsKey = "DisassociateKmsKey" // DisassociateKmsKeyRequest generates a "aws/request.Request" representing the // client's request for the DisassociateKmsKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2139,8 +2144,8 @@ const opFilterLogEvents = "FilterLogEvents" // FilterLogEventsRequest generates a "aws/request.Request" representing the // client's request for the FilterLogEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2264,6 +2269,7 @@ func (c *CloudWatchLogs) FilterLogEventsPages(input *FilterLogEventsInput, fn fu // for more information on using Contexts. func (c *CloudWatchLogs) FilterLogEventsPagesWithContext(ctx aws.Context, input *FilterLogEventsInput, fn func(*FilterLogEventsOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ + EndPageOnSameToken: true, NewRequest: func() (*request.Request, error) { var inCpy *FilterLogEventsInput if input != nil { @@ -2288,8 +2294,8 @@ const opGetLogEvents = "GetLogEvents" // GetLogEventsRequest generates a "aws/request.Request" representing the // client's request for the GetLogEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2410,6 +2416,7 @@ func (c *CloudWatchLogs) GetLogEventsPages(input *GetLogEventsInput, fn func(*Ge // for more information on using Contexts. func (c *CloudWatchLogs) GetLogEventsPagesWithContext(ctx aws.Context, input *GetLogEventsInput, fn func(*GetLogEventsOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ + EndPageOnSameToken: true, NewRequest: func() (*request.Request, error) { var inCpy *GetLogEventsInput if input != nil { @@ -2434,8 +2441,8 @@ const opListTagsLogGroup = "ListTagsLogGroup" // ListTagsLogGroupRequest generates a "aws/request.Request" representing the // client's request for the ListTagsLogGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2516,8 +2523,8 @@ const opPutDestination = "PutDestination" // PutDestinationRequest generates a "aws/request.Request" representing the // client's request for the PutDestination operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
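The paginator hunks above add `EndPageOnSameToken` to the CloudWatch Logs `Pages` helpers because this service marks the end of results by echoing back the caller's token rather than returning a null one. A minimal sketch of consuming `FilterLogEvents` through the page callback, assuming a default session; the log group name and filter pattern are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	svc := cloudwatchlogs.New(session.Must(session.NewSession()))

	// "example-group" is a placeholder; substitute a real log group name.
	input := &cloudwatchlogs.FilterLogEventsInput{
		LogGroupName:  aws.String("example-group"),
		FilterPattern: aws.String("ERROR"),
	}

	err := svc.FilterLogEventsPages(input,
		func(page *cloudwatchlogs.FilterLogEventsOutput, lastPage bool) bool {
			for _, e := range page.Events {
				fmt.Println(aws.StringValue(e.Message))
			}
			// Returning true asks for the next page; with EndPageOnSameToken set,
			// the paginator stops on its own once the service repeats the token.
			return true
		})
	if err != nil {
		log.Fatal(err)
	}
}
```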
@@ -2611,8 +2618,8 @@ const opPutDestinationPolicy = "PutDestinationPolicy" // PutDestinationPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutDestinationPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2701,8 +2708,8 @@ const opPutLogEvents = "PutLogEvents" // PutLogEventsRequest generates a "aws/request.Request" representing the // client's request for the PutLogEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2762,14 +2769,19 @@ func (c *CloudWatchLogs) PutLogEventsRequest(input *PutLogEventsInput) (req *req // retention period of the log group. // // * The log events in the batch must be in chronological ordered by their -// time stamp (the time the event occurred, expressed as the number of milliseconds -// after Jan 1, 1970 00:00:00 UTC). +// time stamp. The time stamp is the time the event occurred, expressed as +// the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In AWS Tools +// for PowerShell and the AWS SDK for .NET, the timestamp is specified in +// .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.) // // * The maximum number of log events in a batch is 10,000. // // * A batch of log events in a single request cannot span more than 24 hours. // Otherwise, the operation fails. // +// If a call to PutLogEvents returns "UnrecognizedClientException" the most +// likely cause is an invalid AWS access key ID or secret key. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -2793,6 +2805,9 @@ func (c *CloudWatchLogs) PutLogEventsRequest(input *PutLogEventsInput) (req *req // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service cannot complete the request. // +// * ErrCodeUnrecognizedClientException "UnrecognizedClientException" +// The most likely cause is an invalid AWS access key ID or secret key. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutLogEvents func (c *CloudWatchLogs) PutLogEvents(input *PutLogEventsInput) (*PutLogEventsOutput, error) { req, out := c.PutLogEventsRequest(input) @@ -2819,8 +2834,8 @@ const opPutMetricFilter = "PutMetricFilter" // PutMetricFilterRequest generates a "aws/request.Request" representing the // client's request for the PutMetricFilter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
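The `PutLogEvents` hunk above spells out the batch rules (millisecond timestamps, at most 10,000 events, no more than 24 hours per batch) and introduces `ErrCodeUnrecognizedClientException` for bad credentials. A minimal sketch of a single put that respects the timestamp format and checks for that error code; the group and stream names are placeholders, and the sequence-token handling needed for follow-up puts is omitted:

```go
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	svc := cloudwatchlogs.New(session.Must(session.NewSession()))

	// Timestamps are milliseconds since Jan 1, 1970 00:00:00 UTC.
	nowMillis := time.Now().UnixNano() / int64(time.Millisecond)

	_, err := svc.PutLogEvents(&cloudwatchlogs.PutLogEventsInput{
		LogGroupName:  aws.String("example-group"),  // placeholder
		LogStreamName: aws.String("example-stream"), // placeholder
		LogEvents: []*cloudwatchlogs.InputLogEvent{
			{Message: aws.String("hello from the SDK"), Timestamp: aws.Int64(nowMillis)},
		},
		// SequenceToken must be supplied on subsequent calls to an existing stream.
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == cloudwatchlogs.ErrCodeUnrecognizedClientException {
		// Per the updated docs, the most likely cause is an invalid access key ID or secret key.
		log.Fatalf("credential problem: %v", aerr)
	}
	if err != nil {
		log.Fatal(err)
	}
}
```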
@@ -2917,8 +2932,8 @@ const opPutResourcePolicy = "PutResourcePolicy" // PutResourcePolicyRequest generates a "aws/request.Request" representing the // client's request for the PutResourcePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2959,7 +2974,7 @@ func (c *CloudWatchLogs) PutResourcePolicyRequest(input *PutResourcePolicyInput) // // Creates or updates a resource policy allowing other AWS services to put log // events to this account, such as Amazon Route 53. An account can have up to -// 50 resource policies per region. +// 10 resource policies per region. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3004,8 +3019,8 @@ const opPutRetentionPolicy = "PutRetentionPolicy" // PutRetentionPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutRetentionPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3096,8 +3111,8 @@ const opPutSubscriptionFilter = "PutSubscriptionFilter" // PutSubscriptionFilterRequest generates a "aws/request.Request" representing the // client's request for the PutSubscriptionFilter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3209,8 +3224,8 @@ const opTagLogGroup = "TagLogGroup" // TagLogGroupRequest generates a "aws/request.Request" representing the // client's request for the TagLogGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3300,8 +3315,8 @@ const opTestMetricFilter = "TestMetricFilter" // TestMetricFilterRequest generates a "aws/request.Request" representing the // client's request for the TestMetricFilter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3384,8 +3399,8 @@ const opUntagLogGroup = "UntagLogGroup" // UntagLogGroupRequest generates a "aws/request.Request" representing the // client's request for the UntagLogGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4622,7 +4637,7 @@ type DescribeLogStreamsInput struct { // The prefix to match. // - // iIf orderBy is LastEventTime,you cannot specify this parameter. + // If orderBy is LastEventTime,you cannot specify this parameter. LogStreamNamePrefix *string `locationName:"logStreamNamePrefix" min:"1" type:"string"` // The token for the next set of items to return. (You received this token from @@ -4761,11 +4776,14 @@ type DescribeMetricFiltersInput struct { // The name of the log group. LogGroupName *string `locationName:"logGroupName" min:"1" type:"string"` - // The name of the CloudWatch metric to which the monitored log information - // should be published. For example, you may publish to a metric called ErrorCount. + // Filters results to include only those with the specified metric name. If + // you include this parameter in your request, you must also include the metricNamespace + // parameter. MetricName *string `locationName:"metricName" type:"string"` - // The namespace of the CloudWatch metric. + // Filters results to include only those in the specified namespace. If you + // include this parameter in your request, you must also include the metricName + // parameter. MetricNamespace *string `locationName:"metricNamespace" type:"string"` // The token for the next set of items to return. (You received this token from @@ -5370,7 +5388,10 @@ type FilterLogEventsInput struct { // not returned. EndTime *int64 `locationName:"endTime" type:"long"` - // The filter pattern to use. If not provided, all the events are matched. + // The filter pattern to use. For more information, see Filter and Pattern Syntax + // (http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html). + // + // If not provided, all the events are matched. FilterPattern *string `locationName:"filterPattern" type:"string"` // If the value is true, the operation makes a best effort to provide responses @@ -5383,12 +5404,24 @@ type FilterLogEventsInput struct { // The maximum number of events to return. The default is 10,000 events. Limit *int64 `locationName:"limit" min:"1" type:"integer"` - // The name of the log group. + // The name of the log group to search. // // LogGroupName is a required field LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` - // Optional list of log stream names. + // Filters the results to include only events from log streams that have names + // starting with this prefix. + // + // If you specify a value for both logStreamNamePrefix and logStreamNames, but + // the value for logStreamNamePrefix does not match any log stream names specified + // in logStreamNames, the action returns an InvalidParameterException error. + LogStreamNamePrefix *string `locationName:"logStreamNamePrefix" min:"1" type:"string"` + + // Filters the results to only logs from the log streams in this list. 
+ // + // If you specify a value for both logStreamNamePrefix and logStreamNames, but + // the value for logStreamNamePrefix does not match any log stream names specified + // in logStreamNames, the action returns an InvalidParameterException error. LogStreamNames []*string `locationName:"logStreamNames" min:"1" type:"list"` // The token for the next set of events to return. (You received this token @@ -5423,6 +5456,9 @@ func (s *FilterLogEventsInput) Validate() error { if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) } + if s.LogStreamNamePrefix != nil && len(*s.LogStreamNamePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamNamePrefix", 1)) + } if s.LogStreamNames != nil && len(s.LogStreamNames) < 1 { invalidParams.Add(request.NewErrParamMinLen("LogStreamNames", 1)) } @@ -5466,6 +5502,12 @@ func (s *FilterLogEventsInput) SetLogGroupName(v string) *FilterLogEventsInput { return s } +// SetLogStreamNamePrefix sets the LogStreamNamePrefix field's value. +func (s *FilterLogEventsInput) SetLogStreamNamePrefix(v string) *FilterLogEventsInput { + s.LogStreamNamePrefix = &v + return s +} + // SetLogStreamNames sets the LogStreamNames field's value. func (s *FilterLogEventsInput) SetLogStreamNames(v []*string) *FilterLogEventsInput { s.LogStreamNames = v @@ -5593,8 +5635,8 @@ type GetLogEventsInput struct { _ struct{} `type:"structure"` // The end of the time range, expressed as the number of milliseconds after - // Jan 1, 1970 00:00:00 UTC. Events with a time stamp later than this time are - // not included. + // Jan 1, 1970 00:00:00 UTC. Events with a time stamp equal to or later than + // this time are not included. EndTime *int64 `locationName:"endTime" type:"long"` // The maximum number of log events returned. If you don't specify a value, @@ -5622,8 +5664,9 @@ type GetLogEventsInput struct { StartFromHead *bool `locationName:"startFromHead" type:"boolean"` // The start of the time range, expressed as the number of milliseconds after - // Jan 1, 1970 00:00:00 UTC. Events with a time stamp earlier than this time - // are not included. + // Jan 1, 1970 00:00:00 UTC. Events with a time stamp equal to this time or + // later than this time are included. Events with a time stamp earlier than + // this time are not included. StartTime *int64 `locationName:"startTime" type:"long"` } @@ -5714,11 +5757,13 @@ type GetLogEventsOutput struct { Events []*OutputLogEvent `locationName:"events" type:"list"` // The token for the next set of items in the backward direction. The token - // expires after 24 hours. + // expires after 24 hours. This token will never be null. If you have reached + // the end of the stream, it will return the same token you passed in. NextBackwardToken *string `locationName:"nextBackwardToken" min:"1" type:"string"` // The token for the next set of items in the forward direction. The token expires - // after 24 hours. + // after 24 hours. If you have reached the end of the stream, it will return + // the same token you passed in. NextForwardToken *string `locationName:"nextForwardToken" min:"1" type:"string"` } @@ -5760,7 +5805,7 @@ type InputLogEvent struct { // Message is a required field Message *string `locationName:"message" min:"1" type:"string" required:"true"` - // The time the event occurred, expressed as the number of milliseconds fter + // The time the event occurred, expressed as the number of milliseconds after // Jan 1, 1970 00:00:00 UTC. 
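The `GetLogEventsOutput` changes above document that `NextBackwardToken` and `NextForwardToken` are never null and simply repeat once the end of the stream is reached, which is exactly why the paginators in this patch gain `EndPageOnSameToken`. A minimal sketch of paging `GetLogEvents` by hand on that rule, with placeholder group and stream names:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	svc := cloudwatchlogs.New(session.Must(session.NewSession()))

	input := &cloudwatchlogs.GetLogEventsInput{
		LogGroupName:  aws.String("example-group"),  // placeholder
		LogStreamName: aws.String("example-stream"), // placeholder
		StartFromHead: aws.Bool(true),
	}

	for {
		out, err := svc.GetLogEvents(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range out.Events {
			fmt.Println(aws.StringValue(e.Message))
		}
		// The forward token repeats once the stream is exhausted; stop there.
		if aws.StringValue(out.NextForwardToken) == aws.StringValue(input.NextToken) {
			break
		}
		input.NextToken = out.NextForwardToken
	}
}
```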
// // Timestamp is a required field @@ -6707,9 +6752,9 @@ type PutResourcePolicyInput struct { // to put DNS query logs in to the specified log group. Replace "logArn" with // the ARN of your CloudWatch Logs resource, such as a log group or log stream. // - // { "Version": "2012-10-17" "Statement": [ { "Sid": "Route53LogsToCloudWatchLogs", + // { "Version": "2012-10-17", "Statement": [ { "Sid": "Route53LogsToCloudWatchLogs", // "Effect": "Allow", "Principal": { "Service": [ "route53.amazonaws.com" ] - // }, "Action":"logs:PutLogEvents", "Resource": logArn } ] } + // }, "Action":"logs:PutLogEvents", "Resource": "logArn" } ] } PolicyDocument *string `locationName:"policyDocument" min:"1" type:"string"` // Name of the new policy. This parameter is required. diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/errors.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/errors.go index 772141f53a7..d8f3338b355 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/errors.go @@ -57,4 +57,10 @@ const ( // // The service cannot complete the request. ErrCodeServiceUnavailableException = "ServiceUnavailableException" + + // ErrCodeUnrecognizedClientException for service response error code + // "UnrecognizedClientException". + // + // The most likely cause is an invalid AWS access key ID or secret key. + ErrCodeUnrecognizedClientException = "UnrecognizedClientException" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/service.go index 8e6094d58a5..8d5f929df84 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "logs" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "logs" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CloudWatch Logs" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CloudWatchLogs client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/codebuild/api.go b/vendor/github.com/aws/aws-sdk-go/service/codebuild/api.go index 433c34bfff3..73e940d776c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codebuild/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codebuild/api.go @@ -15,8 +15,8 @@ const opBatchDeleteBuilds = "BatchDeleteBuilds" // BatchDeleteBuildsRequest generates a "aws/request.Request" representing the // client's request for the BatchDeleteBuilds operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -94,8 +94,8 @@ const opBatchGetBuilds = "BatchGetBuilds" // BatchGetBuildsRequest generates a "aws/request.Request" representing the // client's request for the BatchGetBuilds operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -173,8 +173,8 @@ const opBatchGetProjects = "BatchGetProjects" // BatchGetProjectsRequest generates a "aws/request.Request" representing the // client's request for the BatchGetProjects operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -252,8 +252,8 @@ const opCreateProject = "CreateProject" // CreateProjectRequest generates a "aws/request.Request" representing the // client's request for the CreateProject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -338,8 +338,8 @@ const opCreateWebhook = "CreateWebhook" // CreateWebhookRequest generates a "aws/request.Request" representing the // client's request for the CreateWebhook operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -379,17 +379,17 @@ func (c *CodeBuild) CreateWebhookRequest(input *CreateWebhookInput) (req *reques // CreateWebhook API operation for AWS CodeBuild. // // For an existing AWS CodeBuild build project that has its source code stored -// in a GitHub repository, enables AWS CodeBuild to begin automatically rebuilding +// in a GitHub or Bitbucket repository, enables AWS CodeBuild to start rebuilding // the source code every time a code change is pushed to the repository. // // If you enable webhooks for an AWS CodeBuild project, and the project is used -// as a build step in AWS CodePipeline, then two identical builds will be created +// as a build step in AWS CodePipeline, then two identical builds are created // for each commit. One build is triggered through webhooks, and one through -// AWS CodePipeline. Because billing is on a per-build basis, you will be billed +// AWS CodePipeline. Because billing is on a per-build basis, you are billed // for both builds. Therefore, if you are using AWS CodePipeline, we recommend -// that you disable webhooks in CodeBuild. In the AWS CodeBuild console, clear -// the Webhook box. 
For more information, see step 9 in Change a Build Project's -// Settings (http://docs.aws.amazon.com/codebuild/latest/userguide/change-project.html#change-project-console). +// that you disable webhooks in AWS CodeBuild. In the AWS CodeBuild console, +// clear the Webhook box. For more information, see step 5 in Change a Build +// Project's Settings (http://docs.aws.amazon.com/codebuild/latest/userguide/change-project.html#change-project-console). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -438,8 +438,8 @@ const opDeleteProject = "DeleteProject" // DeleteProjectRequest generates a "aws/request.Request" representing the // client's request for the DeleteProject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -517,8 +517,8 @@ const opDeleteWebhook = "DeleteWebhook" // DeleteWebhookRequest generates a "aws/request.Request" representing the // client's request for the DeleteWebhook operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -558,7 +558,7 @@ func (c *CodeBuild) DeleteWebhookRequest(input *DeleteWebhookInput) (req *reques // DeleteWebhook API operation for AWS CodeBuild. // // For an existing AWS CodeBuild build project that has its source code stored -// in a GitHub repository, stops AWS CodeBuild from automatically rebuilding +// in a GitHub or Bitbucket repository, stops AWS CodeBuild from rebuilding // the source code every time a code change is pushed to the repository. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -604,8 +604,8 @@ const opInvalidateProjectCache = "InvalidateProjectCache" // InvalidateProjectCacheRequest generates a "aws/request.Request" representing the // client's request for the InvalidateProjectCache operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -686,8 +686,8 @@ const opListBuilds = "ListBuilds" // ListBuildsRequest generates a "aws/request.Request" representing the // client's request for the ListBuilds operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -765,8 +765,8 @@ const opListBuildsForProject = "ListBuildsForProject" // ListBuildsForProjectRequest generates a "aws/request.Request" representing the // client's request for the ListBuildsForProject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -848,8 +848,8 @@ const opListCuratedEnvironmentImages = "ListCuratedEnvironmentImages" // ListCuratedEnvironmentImagesRequest generates a "aws/request.Request" representing the // client's request for the ListCuratedEnvironmentImages operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -922,8 +922,8 @@ const opListProjects = "ListProjects" // ListProjectsRequest generates a "aws/request.Request" representing the // client's request for the ListProjects operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1002,8 +1002,8 @@ const opStartBuild = "StartBuild" // StartBuildRequest generates a "aws/request.Request" representing the // client's request for the StartBuild operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1087,8 +1087,8 @@ const opStopBuild = "StopBuild" // StopBuildRequest generates a "aws/request.Request" representing the // client's request for the StopBuild operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1169,8 +1169,8 @@ const opUpdateProject = "UpdateProject" // UpdateProjectRequest generates a "aws/request.Request" representing the // client's request for the UpdateProject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1247,6 +1247,93 @@ func (c *CodeBuild) UpdateProjectWithContext(ctx aws.Context, input *UpdateProje return out, req.Send() } +const opUpdateWebhook = "UpdateWebhook" + +// UpdateWebhookRequest generates a "aws/request.Request" representing the +// client's request for the UpdateWebhook operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateWebhook for more information on using the UpdateWebhook +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateWebhookRequest method. +// req, resp := client.UpdateWebhookRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/UpdateWebhook +func (c *CodeBuild) UpdateWebhookRequest(input *UpdateWebhookInput) (req *request.Request, output *UpdateWebhookOutput) { + op := &request.Operation{ + Name: opUpdateWebhook, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateWebhookInput{} + } + + output = &UpdateWebhookOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateWebhook API operation for AWS CodeBuild. +// +// Updates the webhook associated with an AWS CodeBuild build project. +// +// If you use Bitbucket for your repository, rotateSecret is ignored. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodeBuild's +// API operation UpdateWebhook for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The input value that was provided is not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified AWS resource cannot be found. +// +// * ErrCodeOAuthProviderException "OAuthProviderException" +// There was a problem with the underlying OAuth provider. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codebuild-2016-10-06/UpdateWebhook +func (c *CodeBuild) UpdateWebhook(input *UpdateWebhookInput) (*UpdateWebhookOutput, error) { + req, out := c.UpdateWebhookRequest(input) + return out, req.Send() +} + +// UpdateWebhookWithContext is the same as UpdateWebhook with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateWebhook for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CodeBuild) UpdateWebhookWithContext(ctx aws.Context, input *UpdateWebhookInput, opts ...request.Option) (*UpdateWebhookOutput, error) { + req, out := c.UpdateWebhookRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + type BatchDeleteBuildsInput struct { _ struct{} `type:"structure"` @@ -1476,7 +1563,7 @@ type Build struct { // Information about the output artifacts for the build. Artifacts *BuildArtifacts `locationName:"artifacts" type:"structure"` - // Whether the build has finished. True if completed; otherwise, false. + // Whether the build is complete. True if complete; otherwise, false. BuildComplete *bool `locationName:"buildComplete" type:"boolean"` // The current status of the build. Valid values include: @@ -1500,8 +1587,15 @@ type Build struct { // The current build phase. CurrentPhase *string `locationName:"currentPhase" type:"string"` + // The AWS Key Management Service (AWS KMS) customer master key (CMK) to be + // used for encrypting the build output artifacts. + // + // This is expressed either as the Amazon Resource Name (ARN) of the CMK or, + // if specified, the CMK's alias (using the format alias/alias-name). + EncryptionKey *string `locationName:"encryptionKey" min:"1" type:"string"` + // When the build process ended, expressed in Unix time format. - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `locationName:"endTime" type:"timestamp"` // Information about the build environment for this build. Environment *ProjectEnvironment `locationName:"environment" type:"structure"` @@ -1515,7 +1609,7 @@ type Build struct { // codepipeline/my-demo-pipeline). // // * If an AWS Identity and Access Management (IAM) user started the build, - // the user's name (for example MyUserName). + // the user's name (for example, MyUserName). // // * If the Jenkins plugin for AWS CodeBuild started the build, the string // CodeBuild-Jenkins-Plugin. @@ -1527,13 +1621,57 @@ type Build struct { // Describes a network interface. NetworkInterface *NetworkInterface `locationName:"networkInterface" type:"structure"` - // Information about all previous build phases that are completed and information + // Information about all previous build phases that are complete and information // about any current build phase that is not yet complete. Phases []*BuildPhase `locationName:"phases" type:"list"` - // The name of the build project. + // The name of the AWS CodeBuild project. ProjectName *string `locationName:"projectName" min:"1" type:"string"` + // The number of minutes a build is allowed to be queued before it times out. + QueuedTimeoutInMinutes *int64 `locationName:"queuedTimeoutInMinutes" type:"integer"` + + // An identifier for the version of this build's source code. + // + // * For AWS CodeCommit, GitHub, GitHub Enterprise, and BitBucket, the commit + // ID. + // + // * For AWS CodePipeline, the source revision provided by AWS CodePipeline. + // + // + // * For Amazon Simple Storage Service (Amazon S3), this does not apply. + ResolvedSourceVersion *string `locationName:"resolvedSourceVersion" min:"1" type:"string"` + + // An array of ProjectArtifacts objects. + SecondaryArtifacts []*BuildArtifacts `locationName:"secondaryArtifacts" type:"list"` + + // An array of ProjectSourceVersion objects. Each ProjectSourceVersion must + // be one of: + // + // * For AWS CodeCommit: the commit ID to use. + // + // * For GitHub: the commit ID, pull request ID, branch name, or tag name + // that corresponds to the version of the source code you want to build. + // If a pull request ID is specified, it must use the format pr/pull-request-ID + // (for example, pr/25). If a branch name is specified, the branch's HEAD + // commit ID is used. 
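The new `UpdateWebhook` operation added above lets an existing project's webhook be modified in place; per its doc comment, `rotateSecret` is ignored for Bitbucket repositories. A minimal sketch of calling it, assuming `UpdateWebhookInput` exposes `ProjectName` and `RotateSecret` as that documentation implies; the project name is a placeholder:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/codebuild"
)

func main() {
	cb := codebuild.New(session.Must(session.NewSession()))

	// ProjectName is a placeholder; RotateSecret asks CodeBuild to rotate the
	// webhook secret (ignored for Bitbucket, per the operation's documentation).
	_, err := cb.UpdateWebhook(&codebuild.UpdateWebhookInput{
		ProjectName:  aws.String("example-project"),
		RotateSecret: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```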
If not specified, the default branch's HEAD commit + // ID is used. + // + // * For Bitbucket: the commit ID, branch name, or tag name that corresponds + // to the version of the source code you want to build. If a branch name + // is specified, the branch's HEAD commit ID is used. If not specified, the + // default branch's HEAD commit ID is used. + // + // * For Amazon Simple Storage Service (Amazon S3): the version ID of the + // object that represents the build input ZIP file to use. + SecondarySourceVersions []*ProjectSourceVersion `locationName:"secondarySourceVersions" type:"list"` + + // An array of ProjectSource objects. + SecondarySources []*ProjectSource `locationName:"secondarySources" type:"list"` + + // The name of a service role used for this build. + ServiceRole *string `locationName:"serviceRole" min:"1" type:"string"` + // Information about the source code to be built. Source *ProjectSource `locationName:"source" type:"structure"` @@ -1541,7 +1679,7 @@ type Build struct { SourceVersion *string `locationName:"sourceVersion" min:"1" type:"string"` // When the build process started, expressed in Unix time format. - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` // How long, in minutes, for AWS CodeBuild to wait before timing out this build // if it does not get marked as completed. @@ -1600,6 +1738,12 @@ func (s *Build) SetCurrentPhase(v string) *Build { return s } +// SetEncryptionKey sets the EncryptionKey field's value. +func (s *Build) SetEncryptionKey(v string) *Build { + s.EncryptionKey = &v + return s +} + // SetEndTime sets the EndTime field's value. func (s *Build) SetEndTime(v time.Time) *Build { s.EndTime = &v @@ -1648,6 +1792,42 @@ func (s *Build) SetProjectName(v string) *Build { return s } +// SetQueuedTimeoutInMinutes sets the QueuedTimeoutInMinutes field's value. +func (s *Build) SetQueuedTimeoutInMinutes(v int64) *Build { + s.QueuedTimeoutInMinutes = &v + return s +} + +// SetResolvedSourceVersion sets the ResolvedSourceVersion field's value. +func (s *Build) SetResolvedSourceVersion(v string) *Build { + s.ResolvedSourceVersion = &v + return s +} + +// SetSecondaryArtifacts sets the SecondaryArtifacts field's value. +func (s *Build) SetSecondaryArtifacts(v []*BuildArtifacts) *Build { + s.SecondaryArtifacts = v + return s +} + +// SetSecondarySourceVersions sets the SecondarySourceVersions field's value. +func (s *Build) SetSecondarySourceVersions(v []*ProjectSourceVersion) *Build { + s.SecondarySourceVersions = v + return s +} + +// SetSecondarySources sets the SecondarySources field's value. +func (s *Build) SetSecondarySources(v []*ProjectSource) *Build { + s.SecondarySources = v + return s +} + +// SetServiceRole sets the ServiceRole field's value. +func (s *Build) SetServiceRole(v string) *Build { + s.ServiceRole = &v + return s +} + // SetSource sets the Source field's value. func (s *Build) SetSource(v *ProjectSource) *Build { s.Source = v @@ -1682,21 +1862,33 @@ func (s *Build) SetVpcConfig(v *VpcConfig) *Build { type BuildArtifacts struct { _ struct{} `type:"structure"` + // An identifier for this artifact definition. + ArtifactIdentifier *string `locationName:"artifactIdentifier" type:"string"` + + // Information that tells you if encryption for build artifacts is disabled. + EncryptionDisabled *bool `locationName:"encryptionDisabled" type:"boolean"` + // Information about the location of the build artifacts. 
Location *string `locationName:"location" type:"string"` // The MD5 hash of the build artifact. // - // You can use this hash along with a checksum tool to confirm both file integrity + // You can use this hash along with a checksum tool to confirm file integrity // and authenticity. // // This value is available only if the build project's packaging value is set // to ZIP. Md5sum *string `locationName:"md5sum" type:"string"` + // If this flag is set, a name specified in the build spec file overrides the + // artifact name. The name specified in a build spec file is calculated at build + // time and uses the Shell Command Language. For example, you can append a date + // and time to your artifact name so that it is always unique. + OverrideArtifactName *bool `locationName:"overrideArtifactName" type:"boolean"` + // The SHA-256 hash of the build artifact. // - // You can use this hash along with a checksum tool to confirm both file integrity + // You can use this hash along with a checksum tool to confirm file integrity // and authenticity. // // This value is available only if the build project's packaging value is set @@ -1714,6 +1906,18 @@ func (s BuildArtifacts) GoString() string { return s.String() } +// SetArtifactIdentifier sets the ArtifactIdentifier field's value. +func (s *BuildArtifacts) SetArtifactIdentifier(v string) *BuildArtifacts { + s.ArtifactIdentifier = &v + return s +} + +// SetEncryptionDisabled sets the EncryptionDisabled field's value. +func (s *BuildArtifacts) SetEncryptionDisabled(v bool) *BuildArtifacts { + s.EncryptionDisabled = &v + return s +} + // SetLocation sets the Location field's value. func (s *BuildArtifacts) SetLocation(v string) *BuildArtifacts { s.Location = &v @@ -1726,6 +1930,12 @@ func (s *BuildArtifacts) SetMd5sum(v string) *BuildArtifacts { return s } +// SetOverrideArtifactName sets the OverrideArtifactName field's value. +func (s *BuildArtifacts) SetOverrideArtifactName(v bool) *BuildArtifacts { + s.OverrideArtifactName = &v + return s +} + // SetSha256sum sets the Sha256sum field's value. func (s *BuildArtifacts) SetSha256sum(v string) *BuildArtifacts { s.Sha256sum = &v @@ -1778,7 +1988,7 @@ type BuildPhase struct { DurationInSeconds *int64 `locationName:"durationInSeconds" type:"long"` // When the build phase ended, expressed in Unix time format. - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `locationName:"endTime" type:"timestamp"` // The current status of the build phase. Valid values include: // @@ -1788,6 +1998,9 @@ type BuildPhase struct { // // * IN_PROGRESS: The build phase is still in progress. // + // * QUEUED: The build has been submitted and is queued behind other submitted + // builds. + // // * STOPPED: The build phase stopped. // // * SUCCEEDED: The build phase succeeded. @@ -1813,6 +2026,9 @@ type BuildPhase struct { // // * PROVISIONING: The build environment is being set up. // + // * QUEUED: The build has been submitted and is queued behind other submitted + // builds. + // // * SUBMITTED: The build has been submitted. // // * UPLOAD_ARTIFACTS: Build output artifacts are being uploaded to the output @@ -1820,7 +2036,7 @@ type BuildPhase struct { PhaseType *string `locationName:"phaseType" type:"string" enum:"BuildPhaseType"` // When the build phase started, expressed in Unix time format. 
- StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` } // String returns the string representation @@ -1869,6 +2085,70 @@ func (s *BuildPhase) SetStartTime(v time.Time) *BuildPhase { return s } +// Information about Amazon CloudWatch Logs for a build project. +type CloudWatchLogsConfig struct { + _ struct{} `type:"structure"` + + // The group name of the logs in Amazon CloudWatch Logs. For more information, + // see Working with Log Groups and Log Streams (http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html). + GroupName *string `locationName:"groupName" type:"string"` + + // The current status of the logs in Amazon CloudWatch Logs for a build project. + // Valid values are: + // + // * ENABLED: Amazon CloudWatch Logs are enabled for this build project. + // + // * DISABLED: Amazon CloudWatch Logs are not enabled for this build project. + // + // Status is a required field + Status *string `locationName:"status" type:"string" required:"true" enum:"LogsConfigStatusType"` + + // The prefix of the stream name of the Amazon CloudWatch Logs. For more information, + // see Working with Log Groups and Log Streams (http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html). + StreamName *string `locationName:"streamName" type:"string"` +} + +// String returns the string representation +func (s CloudWatchLogsConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudWatchLogsConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CloudWatchLogsConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CloudWatchLogsConfig"} + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *CloudWatchLogsConfig) SetGroupName(v string) *CloudWatchLogsConfig { + s.GroupName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *CloudWatchLogsConfig) SetStatus(v string) *CloudWatchLogsConfig { + s.Status = &v + return s +} + +// SetStreamName sets the StreamName field's value. +func (s *CloudWatchLogsConfig) SetStreamName(v string) *CloudWatchLogsConfig { + s.StreamName = &v + return s +} + type CreateProjectInput struct { _ struct{} `type:"structure"` @@ -1877,7 +2157,7 @@ type CreateProjectInput struct { // Artifacts is a required field Artifacts *ProjectArtifacts `locationName:"artifacts" type:"structure" required:"true"` - // Set this to true to generate a publicly-accessible URL for your project's + // Set this to true to generate a publicly accessible URL for your project's // build badge. BadgeEnabled *bool `locationName:"badgeEnabled" type:"boolean"` @@ -1891,7 +2171,7 @@ type CreateProjectInput struct { // The AWS Key Management Service (AWS KMS) customer master key (CMK) to be // used for encrypting the build output artifacts. // - // You can specify either the CMK's Amazon Resource Name (ARN) or, if available, + // You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, // the CMK's alias (using the format alias/alias-name). 
EncryptionKey *string `locationName:"encryptionKey" min:"1" type:"string"` @@ -1900,15 +2180,30 @@ type CreateProjectInput struct { // Environment is a required field Environment *ProjectEnvironment `locationName:"environment" type:"structure" required:"true"` + // Information about logs for the build project. These can be logs in Amazon + // CloudWatch Logs, logs uploaded to a specified S3 bucket, or both. + LogsConfig *LogsConfig `locationName:"logsConfig" type:"structure"` + // The name of the build project. // // Name is a required field Name *string `locationName:"name" min:"2" type:"string" required:"true"` + // The number of minutes a build is allowed to be queued before it times out. + QueuedTimeoutInMinutes *int64 `locationName:"queuedTimeoutInMinutes" min:"5" type:"integer"` + + // An array of ProjectArtifacts objects. + SecondaryArtifacts []*ProjectArtifacts `locationName:"secondaryArtifacts" type:"list"` + + // An array of ProjectSource objects. + SecondarySources []*ProjectSource `locationName:"secondarySources" type:"list"` + // The ARN of the AWS Identity and Access Management (IAM) role that enables // AWS CodeBuild to interact with dependent AWS services on behalf of the AWS // account. - ServiceRole *string `locationName:"serviceRole" min:"1" type:"string"` + // + // ServiceRole is a required field + ServiceRole *string `locationName:"serviceRole" min:"1" type:"string" required:"true"` // Information about the build input source code for the build project. // @@ -1922,8 +2217,8 @@ type CreateProjectInput struct { Tags []*Tag `locationName:"tags" type:"list"` // How long, in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait - // until timing out any build that has not been marked as completed. The default - // is 60 minutes. + // before it times out any build that has not been marked as completed. The + // default is 60 minutes. TimeoutInMinutes *int64 `locationName:"timeoutInMinutes" min:"5" type:"integer"` // VpcConfig enables AWS CodeBuild to access resources in an Amazon VPC. 
@@ -1958,6 +2253,12 @@ func (s *CreateProjectInput) Validate() error { if s.Name != nil && len(*s.Name) < 2 { invalidParams.Add(request.NewErrParamMinLen("Name", 2)) } + if s.QueuedTimeoutInMinutes != nil && *s.QueuedTimeoutInMinutes < 5 { + invalidParams.Add(request.NewErrParamMinValue("QueuedTimeoutInMinutes", 5)) + } + if s.ServiceRole == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceRole")) + } if s.ServiceRole != nil && len(*s.ServiceRole) < 1 { invalidParams.Add(request.NewErrParamMinLen("ServiceRole", 1)) } @@ -1982,6 +2283,31 @@ func (s *CreateProjectInput) Validate() error { invalidParams.AddNested("Environment", err.(request.ErrInvalidParams)) } } + if s.LogsConfig != nil { + if err := s.LogsConfig.Validate(); err != nil { + invalidParams.AddNested("LogsConfig", err.(request.ErrInvalidParams)) + } + } + if s.SecondaryArtifacts != nil { + for i, v := range s.SecondaryArtifacts { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SecondaryArtifacts", i), err.(request.ErrInvalidParams)) + } + } + } + if s.SecondarySources != nil { + for i, v := range s.SecondarySources { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SecondarySources", i), err.(request.ErrInvalidParams)) + } + } + } if s.Source != nil { if err := s.Source.Validate(); err != nil { invalidParams.AddNested("Source", err.(request.ErrInvalidParams)) @@ -2045,12 +2371,36 @@ func (s *CreateProjectInput) SetEnvironment(v *ProjectEnvironment) *CreateProjec return s } +// SetLogsConfig sets the LogsConfig field's value. +func (s *CreateProjectInput) SetLogsConfig(v *LogsConfig) *CreateProjectInput { + s.LogsConfig = v + return s +} + // SetName sets the Name field's value. func (s *CreateProjectInput) SetName(v string) *CreateProjectInput { s.Name = &v return s } +// SetQueuedTimeoutInMinutes sets the QueuedTimeoutInMinutes field's value. +func (s *CreateProjectInput) SetQueuedTimeoutInMinutes(v int64) *CreateProjectInput { + s.QueuedTimeoutInMinutes = &v + return s +} + +// SetSecondaryArtifacts sets the SecondaryArtifacts field's value. +func (s *CreateProjectInput) SetSecondaryArtifacts(v []*ProjectArtifacts) *CreateProjectInput { + s.SecondaryArtifacts = v + return s +} + +// SetSecondarySources sets the SecondarySources field's value. +func (s *CreateProjectInput) SetSecondarySources(v []*ProjectSource) *CreateProjectInput { + s.SecondarySources = v + return s +} + // SetServiceRole sets the ServiceRole field's value. func (s *CreateProjectInput) SetServiceRole(v string) *CreateProjectInput { s.ServiceRole = &v @@ -2107,7 +2457,13 @@ func (s *CreateProjectOutput) SetProject(v *Project) *CreateProjectOutput { type CreateWebhookInput struct { _ struct{} `type:"structure"` - // The name of the build project. + // A regular expression used to determine which repository branches are built + // when a webhook is triggered. If the name of a branch matches the regular + // expression, then it is built. If branchFilter is empty, then all branches + // are built. + BranchFilter *string `locationName:"branchFilter" type:"string"` + + // The name of the AWS CodeBuild project. // // ProjectName is a required field ProjectName *string `locationName:"projectName" min:"2" type:"string" required:"true"` @@ -2139,6 +2495,12 @@ func (s *CreateWebhookInput) Validate() error { return nil } +// SetBranchFilter sets the BranchFilter field's value. 
+func (s *CreateWebhookInput) SetBranchFilter(v string) *CreateWebhookInput { + s.BranchFilter = &v + return s +} + // SetProjectName sets the ProjectName field's value. func (s *CreateWebhookInput) SetProjectName(v string) *CreateWebhookInput { s.ProjectName = &v @@ -2148,8 +2510,8 @@ func (s *CreateWebhookInput) SetProjectName(v string) *CreateWebhookInput { type CreateWebhookOutput struct { _ struct{} `type:"structure"` - // Information about a webhook in GitHub that connects repository events to - // a build project in AWS CodeBuild. + // Information about a webhook that connects repository events to a build project + // in AWS CodeBuild. Webhook *Webhook `locationName:"webhook" type:"structure"` } @@ -2227,7 +2589,7 @@ func (s DeleteProjectOutput) GoString() string { type DeleteWebhookInput struct { _ struct{} `type:"structure"` - // The name of the build project. + // The name of the AWS CodeBuild project. // // ProjectName is a required field ProjectName *string `locationName:"projectName" min:"2" type:"string" required:"true"` @@ -2408,9 +2770,9 @@ type EnvironmentVariable struct { // The value of the environment variable. // - // We strongly discourage using environment variables to store sensitive values, - // especially AWS secret key IDs and secret access keys. Environment variables - // can be displayed in plain text using tools such as the AWS CodeBuild console + // We strongly discourage the use of environment variables to store sensitive + // values, especially AWS secret key IDs and secret access keys. Environment + // variables can be displayed in plain text using the AWS CodeBuild console // and the AWS Command Line Interface (AWS CLI). // // Value is a required field @@ -2467,7 +2829,7 @@ func (s *EnvironmentVariable) SetValue(v string) *EnvironmentVariable { type InvalidateProjectCacheInput struct { _ struct{} `type:"structure"` - // The name of the build project that the cache will be reset for. + // The name of the AWS CodeBuild build project that the cache is reset for. // // ProjectName is a required field ProjectName *string `locationName:"projectName" min:"1" type:"string" required:"true"` @@ -2530,7 +2892,7 @@ type ListBuildsForProjectInput struct { // until no more next tokens are returned. NextToken *string `locationName:"nextToken" type:"string"` - // The name of the build project. + // The name of the AWS CodeBuild project. // // ProjectName is a required field ProjectName *string `locationName:"projectName" min:"1" type:"string" required:"true"` @@ -2750,13 +3112,12 @@ type ListProjectsInput struct { // The criterion to be used to list build project names. Valid values include: // - // * CREATED_TIME: List the build project names based on when each build - // project was created. + // * CREATED_TIME: List based on when each build project was created. // - // * LAST_MODIFIED_TIME: List the build project names based on when information - // about each build project was last changed. + // * LAST_MODIFIED_TIME: List based on when information about each build + // project was last changed. // - // * NAME: List the build project names based on each build project's name. + // * NAME: List based on each build project's name. // // Use sortOrder to specify in what order to list the build project names based // on the preceding criteria. @@ -2764,9 +3125,9 @@ type ListProjectsInput struct { // The order in which to list build projects. Valid values include: // - // * ASCENDING: List the build project names in ascending order. 
+ // * ASCENDING: List in ascending order. // - // * DESCENDING: List the build project names in descending order. + // * DESCENDING: List in descending order. // // Use sortBy to specify the criterion to be used to list build project names. SortOrder *string `locationName:"sortOrder" type:"string" enum:"SortOrderType"` @@ -2849,16 +3210,81 @@ func (s *ListProjectsOutput) SetProjects(v []*string) *ListProjectsOutput { return s } +// Information about logs for a build project. These can be logs in Amazon CloudWatch +// Logs, built in a specified S3 bucket, or both. +type LogsConfig struct { + _ struct{} `type:"structure"` + + // Information about Amazon CloudWatch Logs for a build project. Amazon CloudWatch + // Logs are enabled by default. + CloudWatchLogs *CloudWatchLogsConfig `locationName:"cloudWatchLogs" type:"structure"` + + // Information about logs built to an S3 bucket for a build project. S3 logs + // are not enabled by default. + S3Logs *S3LogsConfig `locationName:"s3Logs" type:"structure"` +} + +// String returns the string representation +func (s LogsConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LogsConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LogsConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LogsConfig"} + if s.CloudWatchLogs != nil { + if err := s.CloudWatchLogs.Validate(); err != nil { + invalidParams.AddNested("CloudWatchLogs", err.(request.ErrInvalidParams)) + } + } + if s.S3Logs != nil { + if err := s.S3Logs.Validate(); err != nil { + invalidParams.AddNested("S3Logs", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCloudWatchLogs sets the CloudWatchLogs field's value. +func (s *LogsConfig) SetCloudWatchLogs(v *CloudWatchLogsConfig) *LogsConfig { + s.CloudWatchLogs = v + return s +} + +// SetS3Logs sets the S3Logs field's value. +func (s *LogsConfig) SetS3Logs(v *S3LogsConfig) *LogsConfig { + s.S3Logs = v + return s +} + // Information about build logs in Amazon CloudWatch Logs. type LogsLocation struct { _ struct{} `type:"structure"` + // Information about Amazon CloudWatch Logs for a build project. + CloudWatchLogs *CloudWatchLogsConfig `locationName:"cloudWatchLogs" type:"structure"` + // The URL to an individual build log in Amazon CloudWatch Logs. DeepLink *string `locationName:"deepLink" type:"string"` // The name of the Amazon CloudWatch Logs group for the build logs. GroupName *string `locationName:"groupName" type:"string"` + // The URL to a build log in an S3 bucket. + S3DeepLink *string `locationName:"s3DeepLink" type:"string"` + + // Information about S3 logs for a build project. + S3Logs *S3LogsConfig `locationName:"s3Logs" type:"structure"` + // The name of the Amazon CloudWatch Logs stream for the build logs. StreamName *string `locationName:"streamName" type:"string"` } @@ -2873,6 +3299,12 @@ func (s LogsLocation) GoString() string { return s.String() } +// SetCloudWatchLogs sets the CloudWatchLogs field's value. +func (s *LogsLocation) SetCloudWatchLogs(v *CloudWatchLogsConfig) *LogsLocation { + s.CloudWatchLogs = v + return s +} + // SetDeepLink sets the DeepLink field's value. 
func (s *LogsLocation) SetDeepLink(v string) *LogsLocation { s.DeepLink = &v @@ -2885,14 +3317,26 @@ func (s *LogsLocation) SetGroupName(v string) *LogsLocation { return s } -// SetStreamName sets the StreamName field's value. -func (s *LogsLocation) SetStreamName(v string) *LogsLocation { - s.StreamName = &v +// SetS3DeepLink sets the S3DeepLink field's value. +func (s *LogsLocation) SetS3DeepLink(v string) *LogsLocation { + s.S3DeepLink = &v return s } -// Describes a network interface. -type NetworkInterface struct { +// SetS3Logs sets the S3Logs field's value. +func (s *LogsLocation) SetS3Logs(v *S3LogsConfig) *LogsLocation { + s.S3Logs = v + return s +} + +// SetStreamName sets the StreamName field's value. +func (s *LogsLocation) SetStreamName(v string) *LogsLocation { + s.StreamName = &v + return s +} + +// Describes a network interface. +type NetworkInterface struct { _ struct{} `type:"structure"` // The ID of the network interface. @@ -2925,12 +3369,12 @@ func (s *NetworkInterface) SetSubnetId(v string) *NetworkInterface { } // Additional information about a build phase that has an error. You can use -// this information to help troubleshoot a failed build. +// this information for troubleshooting. type PhaseContext struct { _ struct{} `type:"structure"` - // An explanation of the build phase's context. This explanation might include - // a command ID and an exit code. + // An explanation of the build phase's context. This might include a command + // ID and an exit code. Message *string `locationName:"message" type:"string"` // The status code for the context of the build phase. @@ -2976,7 +3420,7 @@ type Project struct { Cache *ProjectCache `locationName:"cache" type:"structure"` // When the build project was created, expressed in Unix time format. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // A description that makes the build project easy to identify. Description *string `locationName:"description" type:"string"` @@ -2984,8 +3428,8 @@ type Project struct { // The AWS Key Management Service (AWS KMS) customer master key (CMK) to be // used for encrypting the build output artifacts. // - // This is expressed either as the CMK's Amazon Resource Name (ARN) or, if specified, - // the CMK's alias (using the format alias/alias-name). + // This is expressed either as the Amazon Resource Name (ARN) of the CMK or, + // if specified, the CMK's alias (using the format alias/alias-name). EncryptionKey *string `locationName:"encryptionKey" min:"1" type:"string"` // Information about the build environment for this build project. @@ -2993,11 +3437,24 @@ type Project struct { // When the build project's settings were last modified, expressed in Unix time // format. - LastModified *time.Time `locationName:"lastModified" type:"timestamp" timestampFormat:"unix"` + LastModified *time.Time `locationName:"lastModified" type:"timestamp"` + + // Information about logs for the build project. A project can create logs in + // Amazon CloudWatch Logs, an S3 bucket, or both. + LogsConfig *LogsConfig `locationName:"logsConfig" type:"structure"` // The name of the build project. Name *string `locationName:"name" min:"2" type:"string"` + // The number of minutes a build is allowed to be queued before it times out. + QueuedTimeoutInMinutes *int64 `locationName:"queuedTimeoutInMinutes" min:"5" type:"integer"` + + // An array of ProjectArtifacts objects. 
+ SecondaryArtifacts []*ProjectArtifacts `locationName:"secondaryArtifacts" type:"list"` + + // An array of ProjectSource objects. + SecondarySources []*ProjectSource `locationName:"secondarySources" type:"list"` + // The ARN of the AWS Identity and Access Management (IAM) role that enables // AWS CodeBuild to interact with dependent AWS services on behalf of the AWS // account. @@ -3017,11 +3474,11 @@ type Project struct { // The default is 60 minutes. TimeoutInMinutes *int64 `locationName:"timeoutInMinutes" min:"5" type:"integer"` - // Information about the VPC configuration that AWS CodeBuild will access. + // Information about the VPC configuration that AWS CodeBuild accesses. VpcConfig *VpcConfig `locationName:"vpcConfig" type:"structure"` - // Information about a webhook in GitHub that connects repository events to - // a build project in AWS CodeBuild. + // Information about a webhook that connects repository events to a build project + // in AWS CodeBuild. Webhook *Webhook `locationName:"webhook" type:"structure"` } @@ -3089,12 +3546,36 @@ func (s *Project) SetLastModified(v time.Time) *Project { return s } +// SetLogsConfig sets the LogsConfig field's value. +func (s *Project) SetLogsConfig(v *LogsConfig) *Project { + s.LogsConfig = v + return s +} + // SetName sets the Name field's value. func (s *Project) SetName(v string) *Project { s.Name = &v return s } +// SetQueuedTimeoutInMinutes sets the QueuedTimeoutInMinutes field's value. +func (s *Project) SetQueuedTimeoutInMinutes(v int64) *Project { + s.QueuedTimeoutInMinutes = &v + return s +} + +// SetSecondaryArtifacts sets the SecondaryArtifacts field's value. +func (s *Project) SetSecondaryArtifacts(v []*ProjectArtifacts) *Project { + s.SecondaryArtifacts = v + return s +} + +// SetSecondarySources sets the SecondarySources field's value. +func (s *Project) SetSecondarySources(v []*ProjectSource) *Project { + s.SecondarySources = v + return s +} + // SetServiceRole sets the ServiceRole field's value. func (s *Project) SetServiceRole(v string) *Project { s.ServiceRole = &v @@ -3135,46 +3616,65 @@ func (s *Project) SetWebhook(v *Webhook) *Project { type ProjectArtifacts struct { _ struct{} `type:"structure"` - // Information about the build output artifact location, as follows: + // An identifier for this artifact definition. + ArtifactIdentifier *string `locationName:"artifactIdentifier" type:"string"` + + // Set to true if you do not want your output artifacts encrypted. This option + // is valid only if your artifacts type is Amazon Simple Storage Service (Amazon + // S3). If this is set with another artifacts type, an invalidInputException + // is thrown. + EncryptionDisabled *bool `locationName:"encryptionDisabled" type:"boolean"` + + // Information about the build output artifact location: // - // * If type is set to CODEPIPELINE, then AWS CodePipeline will ignore this - // value if specified. This is because AWS CodePipeline manages its build - // output locations instead of AWS CodeBuild. + // * If type is set to CODEPIPELINE, AWS CodePipeline ignores this value + // if specified. This is because AWS CodePipeline manages its build output + // locations instead of AWS CodeBuild. // - // * If type is set to NO_ARTIFACTS, then this value will be ignored if specified, - // because no build output will be produced. + // * If type is set to NO_ARTIFACTS, this value is ignored if specified, + // because no build output is produced. // // * If type is set to S3, this is the name of the output bucket. 
Location *string `locationName:"location" type:"string"` - // Along with path and namespaceType, the pattern that AWS CodeBuild will use - // to name and store the output artifact, as follows: + // Along with path and namespaceType, the pattern that AWS CodeBuild uses to + // name and store the output artifact: // - // * If type is set to CODEPIPELINE, then AWS CodePipeline will ignore this - // value if specified. This is because AWS CodePipeline manages its build - // output names instead of AWS CodeBuild. + // * If type is set to CODEPIPELINE, AWS CodePipeline ignores this value + // if specified. This is because AWS CodePipeline manages its build output + // names instead of AWS CodeBuild. // - // * If type is set to NO_ARTIFACTS, then this value will be ignored if specified, - // because no build output will be produced. + // * If type is set to NO_ARTIFACTS, this value is ignored if specified, + // because no build output is produced. // // * If type is set to S3, this is the name of the output artifact object. + // If you set the name to be a forward slash ("/"), the artifact is stored + // in the root of the output bucket. // - // For example, if path is set to MyArtifacts, namespaceType is set to BUILD_ID, - // and name is set to MyArtifact.zip, then the output artifact would be stored - // in MyArtifacts/build-ID/MyArtifact.zip. + // For example: + // + // * If path is set to MyArtifacts, namespaceType is set to BUILD_ID, and + // name is set to MyArtifact.zip, then the output artifact is stored in MyArtifacts/build-ID/MyArtifact.zip. + // + // + // * If path is empty, namespaceType is set to NONE, and name is set to + // "/", the output artifact is stored in the root of the output bucket. + // + // * If path is set to MyArtifacts, namespaceType is set to BUILD_ID, and + // name is set to "/", the output artifact is stored in MyArtifacts/build-ID. Name *string `locationName:"name" type:"string"` - // Along with path and name, the pattern that AWS CodeBuild will use to determine - // the name and location to store the output artifact, as follows: + // Along with path and name, the pattern that AWS CodeBuild uses to determine + // the name and location to store the output artifact: // - // * If type is set to CODEPIPELINE, then AWS CodePipeline will ignore this - // value if specified. This is because AWS CodePipeline manages its build - // output names instead of AWS CodeBuild. + // * If type is set to CODEPIPELINE, AWS CodePipeline ignores this value + // if specified. This is because AWS CodePipeline manages its build output + // names instead of AWS CodeBuild. // - // * If type is set to NO_ARTIFACTS, then this value will be ignored if specified, - // because no build output will be produced. + // * If type is set to NO_ARTIFACTS, this value is ignored if specified, + // because no build output is produced. // - // * If type is set to S3, then valid values include: + // * If type is set to S3, valid values include: // // BUILD_ID: Include the build ID in the location of the build output artifact. // @@ -3182,55 +3682,60 @@ type ProjectArtifacts struct { // not specified. // // For example, if path is set to MyArtifacts, namespaceType is set to BUILD_ID, - // and name is set to MyArtifact.zip, then the output artifact would be stored - // in MyArtifacts/build-ID/MyArtifact.zip. + // and name is set to MyArtifact.zip, the output artifact is stored in MyArtifacts/build-ID/MyArtifact.zip. 
NamespaceType *string `locationName:"namespaceType" type:"string" enum:"ArtifactNamespace"` - // The type of build output artifact to create, as follows: + // If this flag is set, a name specified in the build spec file overrides the + // artifact name. The name specified in a build spec file is calculated at build + // time and uses the Shell Command Language. For example, you can append a date + // and time to your artifact name so that it is always unique. + OverrideArtifactName *bool `locationName:"overrideArtifactName" type:"boolean"` + + // The type of build output artifact to create: // - // * If type is set to CODEPIPELINE, then AWS CodePipeline will ignore this - // value if specified. This is because AWS CodePipeline manages its build - // output artifacts instead of AWS CodeBuild. + // * If type is set to CODEPIPELINE, AWS CodePipeline ignores this value + // if specified. This is because AWS CodePipeline manages its build output + // artifacts instead of AWS CodeBuild. // - // * If type is set to NO_ARTIFACTS, then this value will be ignored if specified, - // because no build output will be produced. + // * If type is set to NO_ARTIFACTS, this value is ignored if specified, + // because no build output is produced. // // * If type is set to S3, valid values include: // - // NONE: AWS CodeBuild will create in the output bucket a folder containing - // the build output. This is the default if packaging is not specified. + // NONE: AWS CodeBuild creates in the output bucket a folder that contains the + // build output. This is the default if packaging is not specified. // - // ZIP: AWS CodeBuild will create in the output bucket a ZIP file containing + // ZIP: AWS CodeBuild creates in the output bucket a ZIP file that contains // the build output. Packaging *string `locationName:"packaging" type:"string" enum:"ArtifactPackaging"` - // Along with namespaceType and name, the pattern that AWS CodeBuild will use - // to name and store the output artifact, as follows: + // Along with namespaceType and name, the pattern that AWS CodeBuild uses to + // name and store the output artifact: // - // * If type is set to CODEPIPELINE, then AWS CodePipeline will ignore this - // value if specified. This is because AWS CodePipeline manages its build - // output names instead of AWS CodeBuild. + // * If type is set to CODEPIPELINE, AWS CodePipeline ignores this value + // if specified. This is because AWS CodePipeline manages its build output + // names instead of AWS CodeBuild. // - // * If type is set to NO_ARTIFACTS, then this value will be ignored if specified, - // because no build output will be produced. + // * If type is set to NO_ARTIFACTS, this value is ignored if specified, + // because no build output is produced. // // * If type is set to S3, this is the path to the output artifact. If path - // is not specified, then path will not be used. + // is not specified, path is not used. // // For example, if path is set to MyArtifacts, namespaceType is set to NONE, - // and name is set to MyArtifact.zip, then the output artifact would be stored - // in the output bucket at MyArtifacts/MyArtifact.zip. + // and name is set to MyArtifact.zip, the output artifact is stored in the output + // bucket at MyArtifacts/MyArtifact.zip. Path *string `locationName:"path" type:"string"` // The type of build output artifact. Valid values include: // - // * CODEPIPELINE: The build project will have build output generated through - // AWS CodePipeline. 
+ // * CODEPIPELINE: The build project has build output generated through AWS + // CodePipeline. // - // * NO_ARTIFACTS: The build project will not produce any build output. + // * NO_ARTIFACTS: The build project does not produce any build output. // - // * S3: The build project will store build output in Amazon Simple Storage - // Service (Amazon S3). + // * S3: The build project stores build output in Amazon Simple Storage Service + // (Amazon S3). // // Type is a required field Type *string `locationName:"type" type:"string" required:"true" enum:"ArtifactsType"` @@ -3259,6 +3764,18 @@ func (s *ProjectArtifacts) Validate() error { return nil } +// SetArtifactIdentifier sets the ArtifactIdentifier field's value. +func (s *ProjectArtifacts) SetArtifactIdentifier(v string) *ProjectArtifacts { + s.ArtifactIdentifier = &v + return s +} + +// SetEncryptionDisabled sets the EncryptionDisabled field's value. +func (s *ProjectArtifacts) SetEncryptionDisabled(v bool) *ProjectArtifacts { + s.EncryptionDisabled = &v + return s +} + // SetLocation sets the Location field's value. func (s *ProjectArtifacts) SetLocation(v string) *ProjectArtifacts { s.Location = &v @@ -3277,6 +3794,12 @@ func (s *ProjectArtifacts) SetNamespaceType(v string) *ProjectArtifacts { return s } +// SetOverrideArtifactName sets the OverrideArtifactName field's value. +func (s *ProjectArtifacts) SetOverrideArtifactName(v bool) *ProjectArtifacts { + s.OverrideArtifactName = &v + return s +} + // SetPackaging sets the Packaging field's value. func (s *ProjectArtifacts) SetPackaging(v string) *ProjectArtifacts { s.Packaging = &v @@ -3299,12 +3822,15 @@ func (s *ProjectArtifacts) SetType(v string) *ProjectArtifacts { type ProjectBadge struct { _ struct{} `type:"structure"` - // Set this to true to generate a publicly-accessible URL for your project's + // Set this to true to generate a publicly accessible URL for your project's // build badge. BadgeEnabled *bool `locationName:"badgeEnabled" type:"boolean"` // The publicly-accessible URL through which you can access the build badge // for your project. + // + // The publicly accessible URL through which you can access the build badge + // for your project. BadgeRequestUrl *string `locationName:"badgeRequestUrl" type:"string"` } @@ -3334,18 +3860,18 @@ func (s *ProjectBadge) SetBadgeRequestUrl(v string) *ProjectBadge { type ProjectCache struct { _ struct{} `type:"structure"` - // Information about the cache location, as follows: + // Information about the cache location: // - // * NO_CACHE: This value will be ignored. + // * NO_CACHE: This value is ignored. // // * S3: This is the S3 bucket name/prefix. Location *string `locationName:"location" type:"string"` // The type of cache used by the build project. Valid values include: // - // * NO_CACHE: The build project will not use any cache. + // * NO_CACHE: The build project does not use any cache. // - // * S3: The build project will read and write from/to S3. + // * S3: The build project reads and writes from and to S3. // // Type is a required field Type *string `locationName:"type" type:"string" required:"true" enum:"CacheType"` @@ -3393,7 +3919,7 @@ type ProjectEnvironment struct { // The certificate to use with this build project. Certificate *string `locationName:"certificate" type:"string"` - // Information about the compute resources the build project will use. Available + // Information about the compute resources the build project uses. 
Available // values include: // // * BUILD_GENERAL1_SMALL: Use up to 3 GB memory and 2 vCPUs for builds. @@ -3414,20 +3940,27 @@ type ProjectEnvironment struct { // Image is a required field Image *string `locationName:"image" min:"1" type:"string" required:"true"` - // If set to true, enables running the Docker daemon inside a Docker container; - // otherwise, false or not specified (the default). This value must be set to - // true only if this build project will be used to build Docker images, and - // the specified build environment image is not one provided by AWS CodeBuild - // with Docker support. Otherwise, all associated builds that attempt to interact - // with the Docker daemon will fail. Note that you must also start the Docker - // daemon so that your builds can interact with it as needed. One way to do - // this is to initialize the Docker daemon in the install phase of your build - // spec by running the following build commands. (Do not run the following build - // commands if the specified build environment image is provided by AWS CodeBuild - // with Docker support.) + // Enables running the Docker daemon inside a Docker container. Set to true + // only if the build project is be used to build Docker images, and the specified + // build environment image is not provided by AWS CodeBuild with Docker support. + // Otherwise, all associated builds that attempt to interact with the Docker + // daemon fail. You must also start the Docker daemon so that builds can interact + // with it. One way to do this is to initialize the Docker daemon during the + // install phase of your build spec by running the following build commands. + // (Do not run these commands if the specified build environment image is provided + // by AWS CodeBuild with Docker support.) + // + // If the operating system's base image is Ubuntu Linux: + // + // - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 + // --storage-driver=overlay& - timeout 15 sh -c "until docker info; do echo + // .; sleep 1; done" + // + // If the operating system's base image is Alpine Linux, add the -t argument + // to timeout: // // - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 - // --storage-driver=overlay& - timeout -t 15 sh -c "until docker info; do echo + // --storage-driver=overlay& - timeout 15 -t sh -c "until docker info; do echo // .; sleep 1; done" PrivilegedMode *bool `locationName:"privilegedMode" type:"boolean"` @@ -3545,7 +4078,7 @@ type ProjectSource struct { // // * For source code settings that are specified in the source action of // a pipeline in AWS CodePipeline, location should not be specified. If it - // is specified, AWS CodePipeline will ignore it. This is because AWS CodePipeline + // is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline // uses the settings in a pipeline's source action instead of this value. // // * For source code in an AWS CodeCommit repository, the HTTPS clone URL @@ -3553,34 +4086,47 @@ type ProjectSource struct { // example, https://git-codecommit.region-ID.amazonaws.com/v1/repos/repo-name). // // * For source code in an Amazon Simple Storage Service (Amazon S3) input - // bucket, the path to the ZIP file that contains the source code (for example, - // bucket-name/path/to/object-name.zip) + // bucket, one of the following. + // + // The path to the ZIP file that contains the source code (for example, bucket-name/path/to/object-name.zip). 
+ // + // + // The path to the folder that contains the source code (for example, bucket-name/path/to/source-code/folder/). + // // // * For source code in a GitHub repository, the HTTPS clone URL to the repository - // that contains the source and the build spec. Also, you must connect your - // AWS account to your GitHub account. To do this, use the AWS CodeBuild - // console to begin creating a build project. When you use the console to - // connect (or reconnect) with GitHub, on the GitHub Authorize application - // page that displays, for Organization access, choose Request access next - // to each repository you want to allow AWS CodeBuild to have access to. - // Then choose Authorize application. (After you have connected to your GitHub - // account, you do not need to finish creating the build project, and you - // may then leave the AWS CodeBuild console.) To instruct AWS CodeBuild to - // then use this connection, in the source object, set the auth object's - // type value to OAUTH. + // that contains the source and the build spec. You must connect your AWS + // account to your GitHub account. Use the AWS CodeBuild console to start + // creating a build project. When you use the console to connect (or reconnect) + // with GitHub, on the GitHub Authorize application page, for Organization + // access, choose Request access next to each repository you want to allow + // AWS CodeBuild to have access to, and then choose Authorize application. + // (After you have connected to your GitHub account, you do not need to finish + // creating the build project. You can leave the AWS CodeBuild console.) + // To instruct AWS CodeBuild to use this connection, in the source object, + // set the auth object's type value to OAUTH. // // * For source code in a Bitbucket repository, the HTTPS clone URL to the - // repository that contains the source and the build spec. Also, you must - // connect your AWS account to your Bitbucket account. To do this, use the - // AWS CodeBuild console to begin creating a build project. When you use - // the console to connect (or reconnect) with Bitbucket, on the Bitbucket - // Confirm access to your account page that displays, choose Grant access. - // (After you have connected to your Bitbucket account, you do not need to - // finish creating the build project, and you may then leave the AWS CodeBuild - // console.) To instruct AWS CodeBuild to then use this connection, in the - // source object, set the auth object's type value to OAUTH. + // repository that contains the source and the build spec. You must connect + // your AWS account to your Bitbucket account. Use the AWS CodeBuild console + // to start creating a build project. When you use the console to connect + // (or reconnect) with Bitbucket, on the Bitbucket Confirm access to your + // account page, choose Grant access. (After you have connected to your Bitbucket + // account, you do not need to finish creating the build project. You can + // leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this + // connection, in the source object, set the auth object's type value to + // OAUTH. Location *string `locationName:"location" type:"string"` + // Set to true to report the status of a build's start and finish to your source + // provider. This option is valid only when your source provider is GitHub, + // GitHub Enterprise, or Bitbucket. If this is set and you use a different source + // provider, an invalidInputException is thrown. 
+ ReportBuildStatus *bool `locationName:"reportBuildStatus" type:"boolean"` + + // An identifier for this project source. + SourceIdentifier *string `locationName:"sourceIdentifier" type:"string"` + // The type of repository that contains the source code to be built. Valid values // include: // @@ -3593,6 +4139,8 @@ type ProjectSource struct { // // * GITHUB: The source code is in a GitHub repository. // + // * NO_SOURCE: The project does not have input source code. + // // * S3: The source code is in an Amazon Simple Storage Service (Amazon S3) // input bucket. // @@ -3658,12 +4206,149 @@ func (s *ProjectSource) SetLocation(v string) *ProjectSource { return s } +// SetReportBuildStatus sets the ReportBuildStatus field's value. +func (s *ProjectSource) SetReportBuildStatus(v bool) *ProjectSource { + s.ReportBuildStatus = &v + return s +} + +// SetSourceIdentifier sets the SourceIdentifier field's value. +func (s *ProjectSource) SetSourceIdentifier(v string) *ProjectSource { + s.SourceIdentifier = &v + return s +} + // SetType sets the Type field's value. func (s *ProjectSource) SetType(v string) *ProjectSource { s.Type = &v return s } +// A source identifier and its corresponding version. +type ProjectSourceVersion struct { + _ struct{} `type:"structure"` + + // An identifier for a source in the build project. + // + // SourceIdentifier is a required field + SourceIdentifier *string `locationName:"sourceIdentifier" type:"string" required:"true"` + + // The source version for the corresponding source identifier. If specified, + // must be one of: + // + // * For AWS CodeCommit: the commit ID to use. + // + // * For GitHub: the commit ID, pull request ID, branch name, or tag name + // that corresponds to the version of the source code you want to build. + // If a pull request ID is specified, it must use the format pr/pull-request-ID + // (for example, pr/25). If a branch name is specified, the branch's HEAD + // commit ID is used. If not specified, the default branch's HEAD commit + // ID is used. + // + // * For Bitbucket: the commit ID, branch name, or tag name that corresponds + // to the version of the source code you want to build. If a branch name + // is specified, the branch's HEAD commit ID is used. If not specified, the + // default branch's HEAD commit ID is used. + // + // * For Amazon Simple Storage Service (Amazon S3): the version ID of the + // object that represents the build input ZIP file to use. + // + // SourceVersion is a required field + SourceVersion *string `locationName:"sourceVersion" type:"string" required:"true"` +} + +// String returns the string representation +func (s ProjectSourceVersion) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProjectSourceVersion) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ProjectSourceVersion) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ProjectSourceVersion"} + if s.SourceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SourceIdentifier")) + } + if s.SourceVersion == nil { + invalidParams.Add(request.NewErrParamRequired("SourceVersion")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSourceIdentifier sets the SourceIdentifier field's value. 
+func (s *ProjectSourceVersion) SetSourceIdentifier(v string) *ProjectSourceVersion { + s.SourceIdentifier = &v + return s +} + +// SetSourceVersion sets the SourceVersion field's value. +func (s *ProjectSourceVersion) SetSourceVersion(v string) *ProjectSourceVersion { + s.SourceVersion = &v + return s +} + +// Information about S3 logs for a build project. +type S3LogsConfig struct { + _ struct{} `type:"structure"` + + // The ARN of an S3 bucket and the path prefix for S3 logs. If your Amazon S3 + // bucket name is my-bucket, and your path prefix is build-log, then acceptable + // formats are my-bucket/build-log or arn:aws:s3:::my-bucket/build-log. + Location *string `locationName:"location" type:"string"` + + // The current status of the S3 build logs. Valid values are: + // + // * ENABLED: S3 build logs are enabled for this build project. + // + // * DISABLED: S3 build logs are not enabled for this build project. + // + // Status is a required field + Status *string `locationName:"status" type:"string" required:"true" enum:"LogsConfigStatusType"` +} + +// String returns the string representation +func (s S3LogsConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3LogsConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *S3LogsConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3LogsConfig"} + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLocation sets the Location field's value. +func (s *S3LogsConfig) SetLocation(v string) *S3LogsConfig { + s.Location = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *S3LogsConfig) SetStatus(v string) *S3LogsConfig { + s.Status = &v + return s +} + // Information about the authorization settings for AWS CodeBuild to access // the source code to be built. // @@ -3729,6 +4414,22 @@ type StartBuildInput struct { // one already defined in the build project. BuildspecOverride *string `locationName:"buildspecOverride" type:"string"` + // A ProjectCache object specified for this build that overrides the one defined + // in the build project. + CacheOverride *ProjectCache `locationName:"cacheOverride" type:"structure"` + + // The name of a certificate for this build that overrides the one specified + // in the build project. + CertificateOverride *string `locationName:"certificateOverride" type:"string"` + + // The name of a compute type for this build that overrides the one specified + // in the build project. + ComputeTypeOverride *string `locationName:"computeTypeOverride" type:"string" enum:"ComputeType"` + + // A container type for this build that overrides the one specified in the build + // project. + EnvironmentTypeOverride *string `locationName:"environmentTypeOverride" type:"string" enum:"EnvironmentType"` + // A set of environment variables that overrides, for this build only, the latest // ones already defined in the build project. EnvironmentVariablesOverride []*EnvironmentVariable `locationName:"environmentVariablesOverride" type:"list"` @@ -3737,13 +4438,72 @@ type StartBuildInput struct { // for this build only, any previous depth of history defined in the build project. GitCloneDepthOverride *int64 `locationName:"gitCloneDepthOverride" type:"integer"` - // The name of the build project to start running a build. 
+ // A unique, case sensitive identifier you provide to ensure the idempotency + // of the StartBuild request. The token is included in the StartBuild request + // and is valid for 12 hours. If you repeat the StartBuild request with the + // same token, but change a parameter, AWS CodeBuild returns a parameter mismatch + // error. + IdempotencyToken *string `locationName:"idempotencyToken" type:"string"` + + // The name of an image for this build that overrides the one specified in the + // build project. + ImageOverride *string `locationName:"imageOverride" min:"1" type:"string"` + + // Enable this flag to override the insecure SSL setting that is specified in + // the build project. The insecure SSL setting determines whether to ignore + // SSL warnings while connecting to the project source code. This override applies + // only if the build's source is GitHub Enterprise. + InsecureSslOverride *bool `locationName:"insecureSslOverride" type:"boolean"` + + // Log settings for this build that override the log settings defined in the + // build project. + LogsConfigOverride *LogsConfig `locationName:"logsConfigOverride" type:"structure"` + + // Enable this flag to override privileged mode in the build project. + PrivilegedModeOverride *bool `locationName:"privilegedModeOverride" type:"boolean"` + + // The name of the AWS CodeBuild build project to start running a build. // // ProjectName is a required field ProjectName *string `locationName:"projectName" min:"1" type:"string" required:"true"` + // The number of minutes a build is allowed to be queued before it times out. + QueuedTimeoutInMinutesOverride *int64 `locationName:"queuedTimeoutInMinutesOverride" min:"5" type:"integer"` + + // Set to true to report to your source provider the status of a build's start + // and completion. If you use this option with a source provider other than + // GitHub, GitHub Enterprise, or Bitbucket, an invalidInputException is thrown. + ReportBuildStatusOverride *bool `locationName:"reportBuildStatusOverride" type:"boolean"` + + // An array of ProjectArtifacts objects. + SecondaryArtifactsOverride []*ProjectArtifacts `locationName:"secondaryArtifactsOverride" type:"list"` + + // An array of ProjectSource objects. + SecondarySourcesOverride []*ProjectSource `locationName:"secondarySourcesOverride" type:"list"` + + // An array of ProjectSourceVersion objects that specify one or more versions + // of the project's secondary sources to be used for this build only. + SecondarySourcesVersionOverride []*ProjectSourceVersion `locationName:"secondarySourcesVersionOverride" type:"list"` + + // The name of a service role for this build that overrides the one specified + // in the build project. + ServiceRoleOverride *string `locationName:"serviceRoleOverride" min:"1" type:"string"` + + // An authorization type for this build that overrides the one defined in the + // build project. This override applies only if the build project's source is + // BitBucket or GitHub. + SourceAuthOverride *SourceAuth `locationName:"sourceAuthOverride" type:"structure"` + + // A location that overrides, for this build, the source location for the one + // defined in the build project. + SourceLocationOverride *string `locationName:"sourceLocationOverride" type:"string"` + + // A source input type, for this build, that overrides the source input defined + // in the build project. 
+ SourceTypeOverride *string `locationName:"sourceTypeOverride" type:"string" enum:"SourceType"` + // A version of the build input to be built, for this build only. If not specified, - // the latest version will be used. If specified, must be one of: + // the latest version is used. If specified, must be one of: // // * For AWS CodeCommit: the commit ID to use. // @@ -3751,16 +4511,16 @@ type StartBuildInput struct { // that corresponds to the version of the source code you want to build. // If a pull request ID is specified, it must use the format pr/pull-request-ID // (for example pr/25). If a branch name is specified, the branch's HEAD - // commit ID will be used. If not specified, the default branch's HEAD commit - // ID will be used. + // commit ID is used. If not specified, the default branch's HEAD commit + // ID is used. // // * For Bitbucket: the commit ID, branch name, or tag name that corresponds // to the version of the source code you want to build. If a branch name - // is specified, the branch's HEAD commit ID will be used. If not specified, - // the default branch's HEAD commit ID will be used. + // is specified, the branch's HEAD commit ID is used. If not specified, the + // default branch's HEAD commit ID is used. // // * For Amazon Simple Storage Service (Amazon S3): the version ID of the - // object representing the build input ZIP file to use. + // object that represents the build input ZIP file to use. SourceVersion *string `locationName:"sourceVersion" type:"string"` // The number of build timeout minutes, from 5 to 480 (8 hours), that overrides, @@ -3781,12 +4541,21 @@ func (s StartBuildInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *StartBuildInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "StartBuildInput"} + if s.ImageOverride != nil && len(*s.ImageOverride) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ImageOverride", 1)) + } if s.ProjectName == nil { invalidParams.Add(request.NewErrParamRequired("ProjectName")) } if s.ProjectName != nil && len(*s.ProjectName) < 1 { invalidParams.Add(request.NewErrParamMinLen("ProjectName", 1)) } + if s.QueuedTimeoutInMinutesOverride != nil && *s.QueuedTimeoutInMinutesOverride < 5 { + invalidParams.Add(request.NewErrParamMinValue("QueuedTimeoutInMinutesOverride", 5)) + } + if s.ServiceRoleOverride != nil && len(*s.ServiceRoleOverride) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceRoleOverride", 1)) + } if s.TimeoutInMinutesOverride != nil && *s.TimeoutInMinutesOverride < 5 { invalidParams.Add(request.NewErrParamMinValue("TimeoutInMinutesOverride", 5)) } @@ -3795,6 +4564,11 @@ func (s *StartBuildInput) Validate() error { invalidParams.AddNested("ArtifactsOverride", err.(request.ErrInvalidParams)) } } + if s.CacheOverride != nil { + if err := s.CacheOverride.Validate(); err != nil { + invalidParams.AddNested("CacheOverride", err.(request.ErrInvalidParams)) + } + } if s.EnvironmentVariablesOverride != nil { for i, v := range s.EnvironmentVariablesOverride { if v == nil { @@ -3805,6 +4579,46 @@ func (s *StartBuildInput) Validate() error { } } } + if s.LogsConfigOverride != nil { + if err := s.LogsConfigOverride.Validate(); err != nil { + invalidParams.AddNested("LogsConfigOverride", err.(request.ErrInvalidParams)) + } + } + if s.SecondaryArtifactsOverride != nil { + for i, v := range s.SecondaryArtifactsOverride { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + 
invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SecondaryArtifactsOverride", i), err.(request.ErrInvalidParams)) + } + } + } + if s.SecondarySourcesOverride != nil { + for i, v := range s.SecondarySourcesOverride { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SecondarySourcesOverride", i), err.(request.ErrInvalidParams)) + } + } + } + if s.SecondarySourcesVersionOverride != nil { + for i, v := range s.SecondarySourcesVersionOverride { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SecondarySourcesVersionOverride", i), err.(request.ErrInvalidParams)) + } + } + } + if s.SourceAuthOverride != nil { + if err := s.SourceAuthOverride.Validate(); err != nil { + invalidParams.AddNested("SourceAuthOverride", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -3824,6 +4638,30 @@ func (s *StartBuildInput) SetBuildspecOverride(v string) *StartBuildInput { return s } +// SetCacheOverride sets the CacheOverride field's value. +func (s *StartBuildInput) SetCacheOverride(v *ProjectCache) *StartBuildInput { + s.CacheOverride = v + return s +} + +// SetCertificateOverride sets the CertificateOverride field's value. +func (s *StartBuildInput) SetCertificateOverride(v string) *StartBuildInput { + s.CertificateOverride = &v + return s +} + +// SetComputeTypeOverride sets the ComputeTypeOverride field's value. +func (s *StartBuildInput) SetComputeTypeOverride(v string) *StartBuildInput { + s.ComputeTypeOverride = &v + return s +} + +// SetEnvironmentTypeOverride sets the EnvironmentTypeOverride field's value. +func (s *StartBuildInput) SetEnvironmentTypeOverride(v string) *StartBuildInput { + s.EnvironmentTypeOverride = &v + return s +} + // SetEnvironmentVariablesOverride sets the EnvironmentVariablesOverride field's value. func (s *StartBuildInput) SetEnvironmentVariablesOverride(v []*EnvironmentVariable) *StartBuildInput { s.EnvironmentVariablesOverride = v @@ -3836,12 +4674,96 @@ func (s *StartBuildInput) SetGitCloneDepthOverride(v int64) *StartBuildInput { return s } +// SetIdempotencyToken sets the IdempotencyToken field's value. +func (s *StartBuildInput) SetIdempotencyToken(v string) *StartBuildInput { + s.IdempotencyToken = &v + return s +} + +// SetImageOverride sets the ImageOverride field's value. +func (s *StartBuildInput) SetImageOverride(v string) *StartBuildInput { + s.ImageOverride = &v + return s +} + +// SetInsecureSslOverride sets the InsecureSslOverride field's value. +func (s *StartBuildInput) SetInsecureSslOverride(v bool) *StartBuildInput { + s.InsecureSslOverride = &v + return s +} + +// SetLogsConfigOverride sets the LogsConfigOverride field's value. +func (s *StartBuildInput) SetLogsConfigOverride(v *LogsConfig) *StartBuildInput { + s.LogsConfigOverride = v + return s +} + +// SetPrivilegedModeOverride sets the PrivilegedModeOverride field's value. +func (s *StartBuildInput) SetPrivilegedModeOverride(v bool) *StartBuildInput { + s.PrivilegedModeOverride = &v + return s +} + // SetProjectName sets the ProjectName field's value. func (s *StartBuildInput) SetProjectName(v string) *StartBuildInput { s.ProjectName = &v return s } +// SetQueuedTimeoutInMinutesOverride sets the QueuedTimeoutInMinutesOverride field's value. 
+func (s *StartBuildInput) SetQueuedTimeoutInMinutesOverride(v int64) *StartBuildInput { + s.QueuedTimeoutInMinutesOverride = &v + return s +} + +// SetReportBuildStatusOverride sets the ReportBuildStatusOverride field's value. +func (s *StartBuildInput) SetReportBuildStatusOverride(v bool) *StartBuildInput { + s.ReportBuildStatusOverride = &v + return s +} + +// SetSecondaryArtifactsOverride sets the SecondaryArtifactsOverride field's value. +func (s *StartBuildInput) SetSecondaryArtifactsOverride(v []*ProjectArtifacts) *StartBuildInput { + s.SecondaryArtifactsOverride = v + return s +} + +// SetSecondarySourcesOverride sets the SecondarySourcesOverride field's value. +func (s *StartBuildInput) SetSecondarySourcesOverride(v []*ProjectSource) *StartBuildInput { + s.SecondarySourcesOverride = v + return s +} + +// SetSecondarySourcesVersionOverride sets the SecondarySourcesVersionOverride field's value. +func (s *StartBuildInput) SetSecondarySourcesVersionOverride(v []*ProjectSourceVersion) *StartBuildInput { + s.SecondarySourcesVersionOverride = v + return s +} + +// SetServiceRoleOverride sets the ServiceRoleOverride field's value. +func (s *StartBuildInput) SetServiceRoleOverride(v string) *StartBuildInput { + s.ServiceRoleOverride = &v + return s +} + +// SetSourceAuthOverride sets the SourceAuthOverride field's value. +func (s *StartBuildInput) SetSourceAuthOverride(v *SourceAuth) *StartBuildInput { + s.SourceAuthOverride = v + return s +} + +// SetSourceLocationOverride sets the SourceLocationOverride field's value. +func (s *StartBuildInput) SetSourceLocationOverride(v string) *StartBuildInput { + s.SourceLocationOverride = &v + return s +} + +// SetSourceTypeOverride sets the SourceTypeOverride field's value. +func (s *StartBuildInput) SetSourceTypeOverride(v string) *StartBuildInput { + s.SourceTypeOverride = &v + return s +} + // SetSourceVersion sets the SourceVersion field's value. func (s *StartBuildInput) SetSourceVersion(v string) *StartBuildInput { s.SourceVersion = &v @@ -3999,7 +4921,7 @@ type UpdateProjectInput struct { // project. Artifacts *ProjectArtifacts `locationName:"artifacts" type:"structure"` - // Set this to true to generate a publicly-accessible URL for your project's + // Set this to true to generate a publicly accessible URL for your project's // build badge. BadgeEnabled *bool `locationName:"badgeEnabled" type:"boolean"` @@ -4013,13 +4935,17 @@ type UpdateProjectInput struct { // The replacement AWS Key Management Service (AWS KMS) customer master key // (CMK) to be used for encrypting the build output artifacts. // - // You can specify either the CMK's Amazon Resource Name (ARN) or, if available, + // You can specify either the Amazon Resource Name (ARN)of the CMK or, if available, // the CMK's alias (using the format alias/alias-name). EncryptionKey *string `locationName:"encryptionKey" min:"1" type:"string"` // Information to be changed about the build environment for the build project. Environment *ProjectEnvironment `locationName:"environment" type:"structure"` + // Information about logs for the build project. A project can create logs in + // Amazon CloudWatch Logs, logs in an S3 bucket, or both. + LogsConfig *LogsConfig `locationName:"logsConfig" type:"structure"` + // The name of the build project. // // You cannot change a build project's name. 
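Reviewer aid (not part of the vendored diff): the `StartBuildInput` additions above expose per-build overrides such as an idempotency token, log configuration, queued timeout, and build-status reporting. A hypothetical helper that exercises a few of them is sketched below; the project name, token, and chosen override values are placeholders chosen for illustration.

```go
package codebuildutil

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/codebuild"
)

// startBuildWithOverrides starts a build using several of the per-build
// override fields added in this SDK update. projectName and token are
// supplied by the caller.
func startBuildWithOverrides(svc *codebuild.CodeBuild, projectName, token string) (*codebuild.Build, error) {
	out, err := svc.StartBuild(&codebuild.StartBuildInput{
		ProjectName: aws.String(projectName),
		// Retrying StartBuild with the same token and parameters within
		// 12 hours returns the original build instead of starting a new one.
		IdempotencyToken: aws.String(token),
		// Queue timeout override in minutes; values below 5 fail validation.
		QueuedTimeoutInMinutesOverride: aws.Int64(15),
		// Only valid for GitHub, GitHub Enterprise, or Bitbucket sources.
		ReportBuildStatusOverride: aws.Bool(true),
		// Disable S3 logs for this build only; CloudWatch Logs are untouched.
		LogsConfigOverride: &codebuild.LogsConfig{
			S3Logs: &codebuild.S3LogsConfig{
				Status: aws.String(codebuild.LogsConfigStatusTypeDisabled),
			},
		},
	})
	if err != nil {
		return nil, err
	}
	return out.Build, nil
}
```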
@@ -4027,6 +4953,15 @@ type UpdateProjectInput struct { // Name is a required field Name *string `locationName:"name" min:"1" type:"string" required:"true"` + // The number of minutes a build is allowed to be queued before it times out. + QueuedTimeoutInMinutes *int64 `locationName:"queuedTimeoutInMinutes" min:"5" type:"integer"` + + // An array of ProjectSource objects. + SecondaryArtifacts []*ProjectArtifacts `locationName:"secondaryArtifacts" type:"list"` + + // An array of ProjectSource objects. + SecondarySources []*ProjectSource `locationName:"secondarySources" type:"list"` + // The replacement ARN of the AWS Identity and Access Management (IAM) role // that enables AWS CodeBuild to interact with dependent AWS services on behalf // of the AWS account. @@ -4072,6 +5007,9 @@ func (s *UpdateProjectInput) Validate() error { if s.Name != nil && len(*s.Name) < 1 { invalidParams.Add(request.NewErrParamMinLen("Name", 1)) } + if s.QueuedTimeoutInMinutes != nil && *s.QueuedTimeoutInMinutes < 5 { + invalidParams.Add(request.NewErrParamMinValue("QueuedTimeoutInMinutes", 5)) + } if s.ServiceRole != nil && len(*s.ServiceRole) < 1 { invalidParams.Add(request.NewErrParamMinLen("ServiceRole", 1)) } @@ -4093,6 +5031,31 @@ func (s *UpdateProjectInput) Validate() error { invalidParams.AddNested("Environment", err.(request.ErrInvalidParams)) } } + if s.LogsConfig != nil { + if err := s.LogsConfig.Validate(); err != nil { + invalidParams.AddNested("LogsConfig", err.(request.ErrInvalidParams)) + } + } + if s.SecondaryArtifacts != nil { + for i, v := range s.SecondaryArtifacts { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SecondaryArtifacts", i), err.(request.ErrInvalidParams)) + } + } + } + if s.SecondarySources != nil { + for i, v := range s.SecondarySources { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SecondarySources", i), err.(request.ErrInvalidParams)) + } + } + } if s.Source != nil { if err := s.Source.Validate(); err != nil { invalidParams.AddNested("Source", err.(request.ErrInvalidParams)) @@ -4156,12 +5119,36 @@ func (s *UpdateProjectInput) SetEnvironment(v *ProjectEnvironment) *UpdateProjec return s } +// SetLogsConfig sets the LogsConfig field's value. +func (s *UpdateProjectInput) SetLogsConfig(v *LogsConfig) *UpdateProjectInput { + s.LogsConfig = v + return s +} + // SetName sets the Name field's value. func (s *UpdateProjectInput) SetName(v string) *UpdateProjectInput { s.Name = &v return s } +// SetQueuedTimeoutInMinutes sets the QueuedTimeoutInMinutes field's value. +func (s *UpdateProjectInput) SetQueuedTimeoutInMinutes(v int64) *UpdateProjectInput { + s.QueuedTimeoutInMinutes = &v + return s +} + +// SetSecondaryArtifacts sets the SecondaryArtifacts field's value. +func (s *UpdateProjectInput) SetSecondaryArtifacts(v []*ProjectArtifacts) *UpdateProjectInput { + s.SecondaryArtifacts = v + return s +} + +// SetSecondarySources sets the SecondarySources field's value. +func (s *UpdateProjectInput) SetSecondarySources(v []*ProjectSource) *UpdateProjectInput { + s.SecondarySources = v + return s +} + // SetServiceRole sets the ServiceRole field's value. func (s *UpdateProjectInput) SetServiceRole(v string) *UpdateProjectInput { s.ServiceRole = &v @@ -4215,7 +5202,95 @@ func (s *UpdateProjectOutput) SetProject(v *Project) *UpdateProjectOutput { return s } -// Information about the VPC configuration that AWS CodeBuild will access. 
+type UpdateWebhookInput struct { + _ struct{} `type:"structure"` + + // A regular expression used to determine which repository branches are built + // when a webhook is triggered. If the name of a branch matches the regular + // expression, then it is built. If branchFilter is empty, then all branches + // are built. + BranchFilter *string `locationName:"branchFilter" type:"string"` + + // The name of the AWS CodeBuild project. + // + // ProjectName is a required field + ProjectName *string `locationName:"projectName" min:"2" type:"string" required:"true"` + + // A boolean value that specifies whether the associated GitHub repository's + // secret token should be updated. If you use Bitbucket for your repository, + // rotateSecret is ignored. + RotateSecret *bool `locationName:"rotateSecret" type:"boolean"` +} + +// String returns the string representation +func (s UpdateWebhookInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateWebhookInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateWebhookInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateWebhookInput"} + if s.ProjectName == nil { + invalidParams.Add(request.NewErrParamRequired("ProjectName")) + } + if s.ProjectName != nil && len(*s.ProjectName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("ProjectName", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBranchFilter sets the BranchFilter field's value. +func (s *UpdateWebhookInput) SetBranchFilter(v string) *UpdateWebhookInput { + s.BranchFilter = &v + return s +} + +// SetProjectName sets the ProjectName field's value. +func (s *UpdateWebhookInput) SetProjectName(v string) *UpdateWebhookInput { + s.ProjectName = &v + return s +} + +// SetRotateSecret sets the RotateSecret field's value. +func (s *UpdateWebhookInput) SetRotateSecret(v bool) *UpdateWebhookInput { + s.RotateSecret = &v + return s +} + +type UpdateWebhookOutput struct { + _ struct{} `type:"structure"` + + // Information about a repository's webhook that is associated with a project + // in AWS CodeBuild. + Webhook *Webhook `locationName:"webhook" type:"structure"` +} + +// String returns the string representation +func (s UpdateWebhookOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateWebhookOutput) GoString() string { + return s.String() +} + +// SetWebhook sets the Webhook field's value. +func (s *UpdateWebhookOutput) SetWebhook(v *Webhook) *UpdateWebhookOutput { + s.Webhook = v + return s +} + +// Information about the VPC configuration that AWS CodeBuild accesses. type VpcConfig struct { _ struct{} `type:"structure"` @@ -4270,17 +5345,27 @@ func (s *VpcConfig) SetVpcId(v string) *VpcConfig { return s } -// Information about a webhook in GitHub that connects repository events to -// a build project in AWS CodeBuild. +// Information about a webhook that connects repository events to a build project +// in AWS CodeBuild. type Webhook struct { _ struct{} `type:"structure"` - // This is the server endpoint that will receive the webhook payload. + // A regular expression used to determine which repository branches are built + // when a webhook is triggered. If the name of a branch matches the regular + // expression, then it is built. If branchFilter is empty, then all branches + // are built. 
+ BranchFilter *string `locationName:"branchFilter" type:"string"` + + // A timestamp that indicates the last time a repository's secret token was + // modified. + LastModifiedSecret *time.Time `locationName:"lastModifiedSecret" type:"timestamp"` + + // The AWS CodeBuild endpoint where webhook events are sent. PayloadUrl *string `locationName:"payloadUrl" min:"1" type:"string"` - // Use this secret while creating a webhook in GitHub for Enterprise. The secret - // allows webhook requests sent by GitHub for Enterprise to be authenticated - // by AWS CodeBuild. + // The secret token of the associated repository. + // + // A Bitbucket webhook does not support secret. Secret *string `locationName:"secret" min:"1" type:"string"` // The URL to the webhook. @@ -4297,6 +5382,18 @@ func (s Webhook) GoString() string { return s.String() } +// SetBranchFilter sets the BranchFilter field's value. +func (s *Webhook) SetBranchFilter(v string) *Webhook { + s.BranchFilter = &v + return s +} + +// SetLastModifiedSecret sets the LastModifiedSecret field's value. +func (s *Webhook) SetLastModifiedSecret(v time.Time) *Webhook { + s.LastModifiedSecret = &v + return s +} + // SetPayloadUrl sets the PayloadUrl field's value. func (s *Webhook) SetPayloadUrl(v string) *Webhook { s.PayloadUrl = &v @@ -4346,6 +5443,9 @@ const ( // BuildPhaseTypeSubmitted is a BuildPhaseType enum value BuildPhaseTypeSubmitted = "SUBMITTED" + // BuildPhaseTypeQueued is a BuildPhaseType enum value + BuildPhaseTypeQueued = "QUEUED" + // BuildPhaseTypeProvisioning is a BuildPhaseType enum value BuildPhaseTypeProvisioning = "PROVISIONING" @@ -4394,6 +5494,9 @@ const ( ) const ( + // EnvironmentTypeWindowsContainer is a EnvironmentType enum value + EnvironmentTypeWindowsContainer = "WINDOWS_CONTAINER" + // EnvironmentTypeLinuxContainer is a EnvironmentType enum value EnvironmentTypeLinuxContainer = "LINUX_CONTAINER" ) @@ -4433,6 +5536,17 @@ const ( // LanguageTypeBase is a LanguageType enum value LanguageTypeBase = "BASE" + + // LanguageTypePhp is a LanguageType enum value + LanguageTypePhp = "PHP" +) + +const ( + // LogsConfigStatusTypeEnabled is a LogsConfigStatusType enum value + LogsConfigStatusTypeEnabled = "ENABLED" + + // LogsConfigStatusTypeDisabled is a LogsConfigStatusType enum value + LogsConfigStatusTypeDisabled = "DISABLED" ) const ( @@ -4444,6 +5558,9 @@ const ( // PlatformTypeUbuntu is a PlatformType enum value PlatformTypeUbuntu = "UBUNTU" + + // PlatformTypeWindowsServer is a PlatformType enum value + PlatformTypeWindowsServer = "WINDOWS_SERVER" ) const ( @@ -4488,6 +5605,9 @@ const ( // SourceTypeGithubEnterprise is a SourceType enum value SourceTypeGithubEnterprise = "GITHUB_ENTERPRISE" + + // SourceTypeNoSource is a SourceType enum value + SourceTypeNoSource = "NO_SOURCE" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/codebuild/doc.go b/vendor/github.com/aws/aws-sdk-go/service/codebuild/doc.go index 5bab7179bba..6d2b7f003f0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codebuild/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codebuild/doc.go @@ -10,7 +10,7 @@ // for the most popular programming languages and build tools, such as Apache // Maven, Gradle, and more. You can also fully customize build environments // in AWS CodeBuild to use your own build tools. AWS CodeBuild scales automatically -// to meet peak build requests, and you pay only for the build time you consume. +// to meet peak build requests. You pay only for the build time you consume. 
// For more information about AWS CodeBuild, see the AWS CodeBuild User Guide. // // AWS CodeBuild supports these operations: @@ -18,27 +18,28 @@ // * BatchDeleteBuilds: Deletes one or more builds. // // * BatchGetProjects: Gets information about one or more build projects. -// A build project defines how AWS CodeBuild will run a build. This includes +// A build project defines how AWS CodeBuild runs a build. This includes // information such as where to get the source code to build, the build environment // to use, the build commands to run, and where to store the build output. -// A build environment represents a combination of operating system, programming -// language runtime, and tools that AWS CodeBuild will use to run a build. -// Also, you can add tags to build projects to help manage your resources -// and costs. +// A build environment is a representation of operating system, programming +// language runtime, and tools that AWS CodeBuild uses to run a build. You +// can add tags to build projects to help manage your resources and costs. // // * CreateProject: Creates a build project. // // * CreateWebhook: For an existing AWS CodeBuild build project that has -// its source code stored in a GitHub repository, enables AWS CodeBuild to -// begin automatically rebuilding the source code every time a code change +// its source code stored in a GitHub or Bitbucket repository, enables AWS +// CodeBuild to start rebuilding the source code every time a code change // is pushed to the repository. // +// * UpdateWebhook: Changes the settings of an existing webhook. +// // * DeleteProject: Deletes a build project. // // * DeleteWebhook: For an existing AWS CodeBuild build project that has -// its source code stored in a GitHub repository, stops AWS CodeBuild from -// automatically rebuilding the source code every time a code change is pushed -// to the repository. +// its source code stored in a GitHub or Bitbucket repository, stops AWS +// CodeBuild from rebuilding the source code every time a code change is +// pushed to the repository. // // * ListProjects: Gets a list of build project names, with each build project // name representing a single build project. diff --git a/vendor/github.com/aws/aws-sdk-go/service/codebuild/service.go b/vendor/github.com/aws/aws-sdk-go/service/codebuild/service.go index d45aed63318..b9ff2b6f76e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codebuild/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codebuild/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "codebuild" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "codebuild" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CodeBuild" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CodeBuild client with a session. 
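As a rough usage sketch of the CodeBuild webhook surface vendored above, the snippet below builds a client and calls `UpdateWebhook` with a branch filter. It relies only on the `UpdateWebhookInput` fields and chainable setters shown in this diff; the project name, the branch regex, and the session/credential setup are placeholders, not values from this change.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/codebuild"
)

func main() {
	// Assumes credentials and region come from the environment/shared config.
	sess := session.Must(session.NewSession())
	client := codebuild.New(sess)

	// BranchFilter is the new field added in this SDK update; only branches
	// matching the regular expression trigger a rebuild.
	input := &codebuild.UpdateWebhookInput{}
	input.SetProjectName("example-project"). // placeholder project name
		SetBranchFilter(`^release/.*`) // placeholder branch filter

	out, err := client.UpdateWebhook(input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```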
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/codecommit/api.go b/vendor/github.com/aws/aws-sdk-go/service/codecommit/api.go index 9458d149bc8..56b34429955 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codecommit/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codecommit/api.go @@ -17,8 +17,8 @@ const opBatchGetRepositories = "BatchGetRepositories" // BatchGetRepositoriesRequest generates a "aws/request.Request" representing the // client's request for the BatchGetRepositories operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -128,8 +128,8 @@ const opCreateBranch = "CreateBranch" // CreateBranchRequest generates a "aws/request.Request" representing the // client's request for the CreateBranch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -256,8 +256,8 @@ const opCreatePullRequest = "CreatePullRequest" // CreatePullRequestRequest generates a "aws/request.Request" representing the // client's request for the CreatePullRequest operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -433,8 +433,8 @@ const opCreateRepository = "CreateRepository" // CreateRepositoryRequest generates a "aws/request.Request" representing the // client's request for the CreateRepository operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -543,8 +543,8 @@ const opDeleteBranch = "DeleteBranch" // DeleteBranchRequest generates a "aws/request.Request" representing the // client's request for the DeleteBranch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -659,8 +659,8 @@ const opDeleteCommentContent = "DeleteCommentContent" // DeleteCommentContentRequest generates a "aws/request.Request" representing the // client's request for the DeleteCommentContent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -746,12 +746,174 @@ func (c *CodeCommit) DeleteCommentContentWithContext(ctx aws.Context, input *Del return out, req.Send() } +const opDeleteFile = "DeleteFile" + +// DeleteFileRequest generates a "aws/request.Request" representing the +// client's request for the DeleteFile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteFile for more information on using the DeleteFile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteFileRequest method. +// req, resp := client.DeleteFileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/DeleteFile +func (c *CodeCommit) DeleteFileRequest(input *DeleteFileInput) (req *request.Request, output *DeleteFileOutput) { + op := &request.Operation{ + Name: opDeleteFile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteFileInput{} + } + + output = &DeleteFileOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteFile API operation for AWS CodeCommit. +// +// Deletes a specified file from a specified branch. A commit is created on +// the branch that contains the revision. The file will still exist in the commits +// prior to the commit that contains the deletion. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodeCommit's +// API operation DeleteFile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeRepositoryNameRequiredException "RepositoryNameRequiredException" +// A repository name is required but was not specified. +// +// * ErrCodeInvalidRepositoryNameException "InvalidRepositoryNameException" +// At least one specified repository name is not valid. +// +// This exception only occurs when a specified repository name is not valid. +// Other exceptions occur when a required repository parameter is missing, or +// when a specified repository does not exist. +// +// * ErrCodeRepositoryDoesNotExistException "RepositoryDoesNotExistException" +// The specified repository does not exist. +// +// * ErrCodeParentCommitIdRequiredException "ParentCommitIdRequiredException" +// A parent commit ID is required. 
To view the full commit ID of a branch in +// a repository, use GetBranch or a Git command (for example, git pull or git +// log). +// +// * ErrCodeInvalidParentCommitIdException "InvalidParentCommitIdException" +// The parent commit ID is not valid. The commit ID cannot be empty, and must +// match the head commit ID for the branch of the repository where you want +// to add or update a file. +// +// * ErrCodeParentCommitDoesNotExistException "ParentCommitDoesNotExistException" +// The parent commit ID is not valid because it does not exist. The specified +// parent commit ID does not exist in the specified branch of the repository. +// +// * ErrCodeParentCommitIdOutdatedException "ParentCommitIdOutdatedException" +// The file could not be added because the provided parent commit ID is not +// the current tip of the specified branch. To view the full commit ID of the +// current head of the branch, use GetBranch. +// +// * ErrCodePathRequiredException "PathRequiredException" +// The folderPath for a location cannot be null. +// +// * ErrCodeInvalidPathException "InvalidPathException" +// The specified path is not valid. +// +// * ErrCodeFileDoesNotExistException "FileDoesNotExistException" +// The specified file does not exist. Verify that you have provided the correct +// name of the file, including its full path and extension. +// +// * ErrCodeBranchNameRequiredException "BranchNameRequiredException" +// A branch name is required but was not specified. +// +// * ErrCodeInvalidBranchNameException "InvalidBranchNameException" +// The specified reference name is not valid. +// +// * ErrCodeBranchDoesNotExistException "BranchDoesNotExistException" +// The specified branch does not exist. +// +// * ErrCodeBranchNameIsTagNameException "BranchNameIsTagNameException" +// The specified branch name is not valid because it is a tag name. Type the +// name of a current branch in the repository. For a list of valid branch names, +// use ListBranches. +// +// * ErrCodeNameLengthExceededException "NameLengthExceededException" +// The user name is not valid because it has exceeded the character limit for +// file names. File names, including the path to the file, cannot exceed the +// character limit. +// +// * ErrCodeInvalidEmailException "InvalidEmailException" +// The specified email address either contains one or more characters that are +// not allowed, or it exceeds the maximum number of characters allowed for an +// email address. +// +// * ErrCodeCommitMessageLengthExceededException "CommitMessageLengthExceededException" +// The commit message is too long. Provide a shorter string. +// +// * ErrCodeEncryptionIntegrityChecksFailedException "EncryptionIntegrityChecksFailedException" +// An encryption integrity check failed. +// +// * ErrCodeEncryptionKeyAccessDeniedException "EncryptionKeyAccessDeniedException" +// An encryption key could not be accessed. +// +// * ErrCodeEncryptionKeyDisabledException "EncryptionKeyDisabledException" +// The encryption key is disabled. +// +// * ErrCodeEncryptionKeyNotFoundException "EncryptionKeyNotFoundException" +// No encryption key was found. +// +// * ErrCodeEncryptionKeyUnavailableException "EncryptionKeyUnavailableException" +// The encryption key is not available. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/DeleteFile +func (c *CodeCommit) DeleteFile(input *DeleteFileInput) (*DeleteFileOutput, error) { + req, out := c.DeleteFileRequest(input) + return out, req.Send() +} + +// DeleteFileWithContext is the same as DeleteFile with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteFile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CodeCommit) DeleteFileWithContext(ctx aws.Context, input *DeleteFileInput, opts ...request.Option) (*DeleteFileOutput, error) { + req, out := c.DeleteFileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteRepository = "DeleteRepository" // DeleteRepositoryRequest generates a "aws/request.Request" representing the // client's request for the DeleteRepository operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -856,8 +1018,8 @@ const opDescribePullRequestEvents = "DescribePullRequestEvents" // DescribePullRequestEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribePullRequestEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1032,8 +1194,8 @@ const opGetBlob = "GetBlob" // GetBlobRequest generates a "aws/request.Request" representing the // client's request for the GetBlob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1150,8 +1312,8 @@ const opGetBranch = "GetBranch" // GetBranchRequest generates a "aws/request.Request" representing the // client's request for the GetBranch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1264,8 +1426,8 @@ const opGetComment = "GetComment" // GetCommentRequest generates a "aws/request.Request" representing the // client's request for the GetComment operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1355,8 +1517,8 @@ const opGetCommentsForComparedCommit = "GetCommentsForComparedCommit" // GetCommentsForComparedCommitRequest generates a "aws/request.Request" representing the // client's request for the GetCommentsForComparedCommit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1531,8 +1693,8 @@ const opGetCommentsForPullRequest = "GetCommentsForPullRequest" // GetCommentsForPullRequestRequest generates a "aws/request.Request" representing the // client's request for the GetCommentsForPullRequest operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1724,8 +1886,8 @@ const opGetCommit = "GetCommit" // GetCommitRequest generates a "aws/request.Request" representing the // client's request for the GetCommit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1838,8 +2000,8 @@ const opGetDifferences = "GetDifferences" // GetDifferencesRequest generates a "aws/request.Request" representing the // client's request for the GetDifferences operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2021,59 +2183,58 @@ func (c *CodeCommit) GetDifferencesPagesWithContext(ctx aws.Context, input *GetD return p.Err() } -const opGetMergeConflicts = "GetMergeConflicts" +const opGetFile = "GetFile" -// GetMergeConflictsRequest generates a "aws/request.Request" representing the -// client's request for the GetMergeConflicts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetFileRequest generates a "aws/request.Request" representing the +// client's request for the GetFile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetMergeConflicts for more information on using the GetMergeConflicts +// See GetFile for more information on using the GetFile // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetMergeConflictsRequest method. -// req, resp := client.GetMergeConflictsRequest(params) +// // Example sending a request using the GetFileRequest method. +// req, resp := client.GetFileRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetMergeConflicts -func (c *CodeCommit) GetMergeConflictsRequest(input *GetMergeConflictsInput) (req *request.Request, output *GetMergeConflictsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetFile +func (c *CodeCommit) GetFileRequest(input *GetFileInput) (req *request.Request, output *GetFileOutput) { op := &request.Operation{ - Name: opGetMergeConflicts, + Name: opGetFile, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetMergeConflictsInput{} + input = &GetFileInput{} } - output = &GetMergeConflictsOutput{} + output = &GetFileOutput{} req = c.newRequest(op, input, output) return } -// GetMergeConflicts API operation for AWS CodeCommit. +// GetFile API operation for AWS CodeCommit. // -// Returns information about merge conflicts between the before and after commit -// IDs for a pull request in a repository. +// Returns the base-64 encoded contents of a specified file and its metadata. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS CodeCommit's -// API operation GetMergeConflicts for usage and error information. +// API operation GetFile for usage and error information. // // Returned Error Codes: // * ErrCodeRepositoryNameRequiredException "RepositoryNameRequiredException" @@ -2089,34 +2250,22 @@ func (c *CodeCommit) GetMergeConflictsRequest(input *GetMergeConflictsInput) (re // * ErrCodeRepositoryDoesNotExistException "RepositoryDoesNotExistException" // The specified repository does not exist. // -// * ErrCodeMergeOptionRequiredException "MergeOptionRequiredException" -// A merge option or stategy is required, and none was provided. -// -// * ErrCodeInvalidMergeOptionException "InvalidMergeOptionException" -// The specified merge option is not valid. The only valid value is FAST_FORWARD_MERGE. -// -// * ErrCodeInvalidDestinationCommitSpecifierException "InvalidDestinationCommitSpecifierException" -// The destination commit specifier is not valid. You must provide a valid branch -// name, tag, or full commit ID. -// -// * ErrCodeInvalidSourceCommitSpecifierException "InvalidSourceCommitSpecifierException" -// The source commit specifier is not valid. You must provide a valid branch -// name, tag, or full commit ID. -// -// * ErrCodeCommitRequiredException "CommitRequiredException" -// A commit was not specified. +// * ErrCodeInvalidCommitException "InvalidCommitException" +// The specified commit is not valid. 
// // * ErrCodeCommitDoesNotExistException "CommitDoesNotExistException" // The specified commit does not exist or no commit was specified, and the specified // repository has no default branch. // -// * ErrCodeInvalidCommitException "InvalidCommitException" -// The specified commit is not valid. +// * ErrCodePathRequiredException "PathRequiredException" +// The folderPath for a location cannot be null. // -// * ErrCodeTipsDivergenceExceededException "TipsDivergenceExceededException" -// The divergence between the tips of the provided commit specifiers is too -// great to determine whether there might be any merge conflicts. Locally compare -// the specifiers using git diff or a diff tool. +// * ErrCodeInvalidPathException "InvalidPathException" +// The specified path is not valid. +// +// * ErrCodeFileDoesNotExistException "FileDoesNotExistException" +// The specified file does not exist. Verify that you have provided the correct +// name of the file, including its full path and extension. // // * ErrCodeEncryptionIntegrityChecksFailedException "EncryptionIntegrityChecksFailedException" // An encryption integrity check failed. @@ -2133,93 +2282,116 @@ func (c *CodeCommit) GetMergeConflictsRequest(input *GetMergeConflictsInput) (re // * ErrCodeEncryptionKeyUnavailableException "EncryptionKeyUnavailableException" // The encryption key is not available. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetMergeConflicts -func (c *CodeCommit) GetMergeConflicts(input *GetMergeConflictsInput) (*GetMergeConflictsOutput, error) { - req, out := c.GetMergeConflictsRequest(input) +// * ErrCodeFileTooLargeException "FileTooLargeException" +// The specified file exceeds the file size limit for AWS CodeCommit. For more +// information about limits in AWS CodeCommit, see AWS CodeCommit User Guide +// (http://docs.aws.amazon.com/codecommit/latest/userguide/limits.html). +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetFile +func (c *CodeCommit) GetFile(input *GetFileInput) (*GetFileOutput, error) { + req, out := c.GetFileRequest(input) return out, req.Send() } -// GetMergeConflictsWithContext is the same as GetMergeConflicts with the addition of +// GetFileWithContext is the same as GetFile with the addition of // the ability to pass a context and additional request options. // -// See GetMergeConflicts for details on how to use this API operation. +// See GetFile for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CodeCommit) GetMergeConflictsWithContext(ctx aws.Context, input *GetMergeConflictsInput, opts ...request.Option) (*GetMergeConflictsOutput, error) { - req, out := c.GetMergeConflictsRequest(input) +func (c *CodeCommit) GetFileWithContext(ctx aws.Context, input *GetFileInput, opts ...request.Option) (*GetFileOutput, error) { + req, out := c.GetFileRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetPullRequest = "GetPullRequest" +const opGetFolder = "GetFolder" -// GetPullRequestRequest generates a "aws/request.Request" representing the -// client's request for the GetPullRequest operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// GetFolderRequest generates a "aws/request.Request" representing the +// client's request for the GetFolder operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetPullRequest for more information on using the GetPullRequest +// See GetFolder for more information on using the GetFolder // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetPullRequestRequest method. -// req, resp := client.GetPullRequestRequest(params) +// // Example sending a request using the GetFolderRequest method. +// req, resp := client.GetFolderRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetPullRequest -func (c *CodeCommit) GetPullRequestRequest(input *GetPullRequestInput) (req *request.Request, output *GetPullRequestOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetFolder +func (c *CodeCommit) GetFolderRequest(input *GetFolderInput) (req *request.Request, output *GetFolderOutput) { op := &request.Operation{ - Name: opGetPullRequest, + Name: opGetFolder, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetPullRequestInput{} + input = &GetFolderInput{} } - output = &GetPullRequestOutput{} + output = &GetFolderOutput{} req = c.newRequest(op, input, output) return } -// GetPullRequest API operation for AWS CodeCommit. +// GetFolder API operation for AWS CodeCommit. // -// Gets information about a pull request in a specified repository. +// Returns the contents of a specified folder in a repository. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS CodeCommit's -// API operation GetPullRequest for usage and error information. +// API operation GetFolder for usage and error information. // // Returned Error Codes: -// * ErrCodePullRequestDoesNotExistException "PullRequestDoesNotExistException" -// The pull request ID could not be found. Make sure that you have specified -// the correct repository name and pull request ID, and then try again. +// * ErrCodeRepositoryNameRequiredException "RepositoryNameRequiredException" +// A repository name is required but was not specified. // -// * ErrCodeInvalidPullRequestIdException "InvalidPullRequestIdException" -// The pull request ID is not valid. Make sure that you have provided the full -// ID and that the pull request is in the specified repository, and then try -// again. +// * ErrCodeInvalidRepositoryNameException "InvalidRepositoryNameException" +// At least one specified repository name is not valid. // -// * ErrCodePullRequestIdRequiredException "PullRequestIdRequiredException" -// A pull request ID is required, but none was provided. +// This exception only occurs when a specified repository name is not valid. +// Other exceptions occur when a required repository parameter is missing, or +// when a specified repository does not exist. 
+// +// * ErrCodeRepositoryDoesNotExistException "RepositoryDoesNotExistException" +// The specified repository does not exist. +// +// * ErrCodeInvalidCommitException "InvalidCommitException" +// The specified commit is not valid. +// +// * ErrCodeCommitDoesNotExistException "CommitDoesNotExistException" +// The specified commit does not exist or no commit was specified, and the specified +// repository has no default branch. +// +// * ErrCodePathRequiredException "PathRequiredException" +// The folderPath for a location cannot be null. +// +// * ErrCodeInvalidPathException "InvalidPathException" +// The specified path is not valid. +// +// * ErrCodeFolderDoesNotExistException "FolderDoesNotExistException" +// The specified folder does not exist. Either the folder name is not correct, +// or you did not provide the full path to the folder. // // * ErrCodeEncryptionIntegrityChecksFailedException "EncryptionIntegrityChecksFailedException" // An encryption integrity check failed. @@ -2236,57 +2408,294 @@ func (c *CodeCommit) GetPullRequestRequest(input *GetPullRequestInput) (req *req // * ErrCodeEncryptionKeyUnavailableException "EncryptionKeyUnavailableException" // The encryption key is not available. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetPullRequest -func (c *CodeCommit) GetPullRequest(input *GetPullRequestInput) (*GetPullRequestOutput, error) { - req, out := c.GetPullRequestRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetFolder +func (c *CodeCommit) GetFolder(input *GetFolderInput) (*GetFolderOutput, error) { + req, out := c.GetFolderRequest(input) return out, req.Send() } -// GetPullRequestWithContext is the same as GetPullRequest with the addition of +// GetFolderWithContext is the same as GetFolder with the addition of // the ability to pass a context and additional request options. // -// See GetPullRequest for details on how to use this API operation. +// See GetFolder for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CodeCommit) GetPullRequestWithContext(ctx aws.Context, input *GetPullRequestInput, opts ...request.Option) (*GetPullRequestOutput, error) { - req, out := c.GetPullRequestRequest(input) +func (c *CodeCommit) GetFolderWithContext(ctx aws.Context, input *GetFolderInput, opts ...request.Option) (*GetFolderOutput, error) { + req, out := c.GetFolderRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetRepository = "GetRepository" +const opGetMergeConflicts = "GetMergeConflicts" -// GetRepositoryRequest generates a "aws/request.Request" representing the -// client's request for the GetRepository operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetMergeConflictsRequest generates a "aws/request.Request" representing the +// client's request for the GetMergeConflicts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See GetRepository for more information on using the GetRepository +// See GetMergeConflicts for more information on using the GetMergeConflicts // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetRepositoryRequest method. -// req, resp := client.GetRepositoryRequest(params) +// // Example sending a request using the GetMergeConflictsRequest method. +// req, resp := client.GetMergeConflictsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetRepository -func (c *CodeCommit) GetRepositoryRequest(input *GetRepositoryInput) (req *request.Request, output *GetRepositoryOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetMergeConflicts +func (c *CodeCommit) GetMergeConflictsRequest(input *GetMergeConflictsInput) (req *request.Request, output *GetMergeConflictsOutput) { op := &request.Operation{ - Name: opGetRepository, + Name: opGetMergeConflicts, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetMergeConflictsInput{} + } + + output = &GetMergeConflictsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetMergeConflicts API operation for AWS CodeCommit. +// +// Returns information about merge conflicts between the before and after commit +// IDs for a pull request in a repository. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodeCommit's +// API operation GetMergeConflicts for usage and error information. +// +// Returned Error Codes: +// * ErrCodeRepositoryNameRequiredException "RepositoryNameRequiredException" +// A repository name is required but was not specified. +// +// * ErrCodeInvalidRepositoryNameException "InvalidRepositoryNameException" +// At least one specified repository name is not valid. +// +// This exception only occurs when a specified repository name is not valid. +// Other exceptions occur when a required repository parameter is missing, or +// when a specified repository does not exist. +// +// * ErrCodeRepositoryDoesNotExistException "RepositoryDoesNotExistException" +// The specified repository does not exist. +// +// * ErrCodeMergeOptionRequiredException "MergeOptionRequiredException" +// A merge option or stategy is required, and none was provided. +// +// * ErrCodeInvalidMergeOptionException "InvalidMergeOptionException" +// The specified merge option is not valid. The only valid value is FAST_FORWARD_MERGE. +// +// * ErrCodeInvalidDestinationCommitSpecifierException "InvalidDestinationCommitSpecifierException" +// The destination commit specifier is not valid. You must provide a valid branch +// name, tag, or full commit ID. +// +// * ErrCodeInvalidSourceCommitSpecifierException "InvalidSourceCommitSpecifierException" +// The source commit specifier is not valid. You must provide a valid branch +// name, tag, or full commit ID. +// +// * ErrCodeCommitRequiredException "CommitRequiredException" +// A commit was not specified. 
+// +// * ErrCodeCommitDoesNotExistException "CommitDoesNotExistException" +// The specified commit does not exist or no commit was specified, and the specified +// repository has no default branch. +// +// * ErrCodeInvalidCommitException "InvalidCommitException" +// The specified commit is not valid. +// +// * ErrCodeTipsDivergenceExceededException "TipsDivergenceExceededException" +// The divergence between the tips of the provided commit specifiers is too +// great to determine whether there might be any merge conflicts. Locally compare +// the specifiers using git diff or a diff tool. +// +// * ErrCodeEncryptionIntegrityChecksFailedException "EncryptionIntegrityChecksFailedException" +// An encryption integrity check failed. +// +// * ErrCodeEncryptionKeyAccessDeniedException "EncryptionKeyAccessDeniedException" +// An encryption key could not be accessed. +// +// * ErrCodeEncryptionKeyDisabledException "EncryptionKeyDisabledException" +// The encryption key is disabled. +// +// * ErrCodeEncryptionKeyNotFoundException "EncryptionKeyNotFoundException" +// No encryption key was found. +// +// * ErrCodeEncryptionKeyUnavailableException "EncryptionKeyUnavailableException" +// The encryption key is not available. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetMergeConflicts +func (c *CodeCommit) GetMergeConflicts(input *GetMergeConflictsInput) (*GetMergeConflictsOutput, error) { + req, out := c.GetMergeConflictsRequest(input) + return out, req.Send() +} + +// GetMergeConflictsWithContext is the same as GetMergeConflicts with the addition of +// the ability to pass a context and additional request options. +// +// See GetMergeConflicts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CodeCommit) GetMergeConflictsWithContext(ctx aws.Context, input *GetMergeConflictsInput, opts ...request.Option) (*GetMergeConflictsOutput, error) { + req, out := c.GetMergeConflictsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetPullRequest = "GetPullRequest" + +// GetPullRequestRequest generates a "aws/request.Request" representing the +// client's request for the GetPullRequest operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetPullRequest for more information on using the GetPullRequest +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetPullRequestRequest method. 
+// req, resp := client.GetPullRequestRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetPullRequest +func (c *CodeCommit) GetPullRequestRequest(input *GetPullRequestInput) (req *request.Request, output *GetPullRequestOutput) { + op := &request.Operation{ + Name: opGetPullRequest, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetPullRequestInput{} + } + + output = &GetPullRequestOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetPullRequest API operation for AWS CodeCommit. +// +// Gets information about a pull request in a specified repository. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodeCommit's +// API operation GetPullRequest for usage and error information. +// +// Returned Error Codes: +// * ErrCodePullRequestDoesNotExistException "PullRequestDoesNotExistException" +// The pull request ID could not be found. Make sure that you have specified +// the correct repository name and pull request ID, and then try again. +// +// * ErrCodeInvalidPullRequestIdException "InvalidPullRequestIdException" +// The pull request ID is not valid. Make sure that you have provided the full +// ID and that the pull request is in the specified repository, and then try +// again. +// +// * ErrCodePullRequestIdRequiredException "PullRequestIdRequiredException" +// A pull request ID is required, but none was provided. +// +// * ErrCodeEncryptionIntegrityChecksFailedException "EncryptionIntegrityChecksFailedException" +// An encryption integrity check failed. +// +// * ErrCodeEncryptionKeyAccessDeniedException "EncryptionKeyAccessDeniedException" +// An encryption key could not be accessed. +// +// * ErrCodeEncryptionKeyDisabledException "EncryptionKeyDisabledException" +// The encryption key is disabled. +// +// * ErrCodeEncryptionKeyNotFoundException "EncryptionKeyNotFoundException" +// No encryption key was found. +// +// * ErrCodeEncryptionKeyUnavailableException "EncryptionKeyUnavailableException" +// The encryption key is not available. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetPullRequest +func (c *CodeCommit) GetPullRequest(input *GetPullRequestInput) (*GetPullRequestOutput, error) { + req, out := c.GetPullRequestRequest(input) + return out, req.Send() +} + +// GetPullRequestWithContext is the same as GetPullRequest with the addition of +// the ability to pass a context and additional request options. +// +// See GetPullRequest for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CodeCommit) GetPullRequestWithContext(ctx aws.Context, input *GetPullRequestInput, opts ...request.Option) (*GetPullRequestOutput, error) { + req, out := c.GetPullRequestRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetRepository = "GetRepository" + +// GetRepositoryRequest generates a "aws/request.Request" representing the +// client's request for the GetRepository operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetRepository for more information on using the GetRepository +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetRepositoryRequest method. +// req, resp := client.GetRepositoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codecommit-2015-04-13/GetRepository +func (c *CodeCommit) GetRepositoryRequest(input *GetRepositoryInput) (req *request.Request, output *GetRepositoryOutput) { + op := &request.Operation{ + Name: opGetRepository, HTTPMethod: "POST", HTTPPath: "/", } @@ -2372,8 +2781,8 @@ const opGetRepositoryTriggers = "GetRepositoryTriggers" // GetRepositoryTriggersRequest generates a "aws/request.Request" representing the // client's request for the GetRepositoryTriggers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2476,8 +2885,8 @@ const opListBranches = "ListBranches" // ListBranchesRequest generates a "aws/request.Request" representing the // client's request for the ListBranches operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2639,8 +3048,8 @@ const opListPullRequests = "ListPullRequests" // ListPullRequestsRequest generates a "aws/request.Request" representing the // client's request for the ListPullRequests operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2817,8 +3226,8 @@ const opListRepositories = "ListRepositories" // ListRepositoriesRequest generates a "aws/request.Request" representing the // client's request for the ListRepositories operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2958,8 +3367,8 @@ const opMergePullRequestByFastForward = "MergePullRequestByFastForward" // MergePullRequestByFastForwardRequest generates a "aws/request.Request" representing the // client's request for the MergePullRequestByFastForward operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3094,8 +3503,8 @@ const opPostCommentForComparedCommit = "PostCommentForComparedCommit" // PostCommentForComparedCommitRequest generates a "aws/request.Request" representing the // client's request for the PostCommentForComparedCommit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3187,7 +3596,7 @@ func (c *CodeCommit) PostCommentForComparedCommitRequest(input *PostCommentForCo // is not valid in respect to the current file version. // // * ErrCodePathRequiredException "PathRequiredException" -// The filePath for a location cannot be empty or null. +// The folderPath for a location cannot be null. // // * ErrCodeInvalidFilePositionException "InvalidFilePositionException" // The position is not valid. Make sure that the line number exists in the version @@ -3254,8 +3663,8 @@ const opPostCommentForPullRequest = "PostCommentForPullRequest" // PostCommentForPullRequestRequest generates a "aws/request.Request" representing the // client's request for the PostCommentForPullRequest operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3364,7 +3773,7 @@ func (c *CodeCommit) PostCommentForPullRequestRequest(input *PostCommentForPullR // is not valid in respect to the current file version. // // * ErrCodePathRequiredException "PathRequiredException" -// The filePath for a location cannot be empty or null. +// The folderPath for a location cannot be null. // // * ErrCodeInvalidFilePositionException "InvalidFilePositionException" // The position is not valid. Make sure that the line number exists in the version @@ -3402,7 +3811,7 @@ func (c *CodeCommit) PostCommentForPullRequestRequest(input *PostCommentForPullR // The specified path does not exist. // // * ErrCodePathRequiredException "PathRequiredException" -// The filePath for a location cannot be empty or null. +// The folderPath for a location cannot be null. // // * ErrCodeBeforeCommitIdAndAfterCommitIdAreSameException "BeforeCommitIdAndAfterCommitIdAreSameException" // The before commit ID and the after commit ID are the same, which is not valid. 
@@ -3434,8 +3843,8 @@ const opPostCommentReply = "PostCommentReply" // PostCommentReplyRequest generates a "aws/request.Request" representing the // client's request for the PostCommentReply operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3543,8 +3952,8 @@ const opPutFile = "PutFile" // PutFileRequest generates a "aws/request.Request" representing the // client's request for the PutFile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3583,7 +3992,8 @@ func (c *CodeCommit) PutFileRequest(input *PutFileInput) (req *request.Request, // PutFile API operation for AWS CodeCommit. // -// Adds or updates a file in an AWS CodeCommit repository. +// Adds or updates a file in a branch in an AWS CodeCommit repository, and generates +// a commit for the addition in the specified branch. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3617,8 +4027,8 @@ func (c *CodeCommit) PutFileRequest(input *PutFileInput) (req *request.Request, // to add or update a file. // // * ErrCodeParentCommitDoesNotExistException "ParentCommitDoesNotExistException" -// The parent commit ID is not valid. The specified parent commit ID does not -// exist in the specified branch of the repository. +// The parent commit ID is not valid because it does not exist. The specified +// parent commit ID does not exist in the specified branch of the repository. // // * ErrCodeParentCommitIdOutdatedException "ParentCommitIdOutdatedException" // The file could not be added because the provided parent commit ID is not @@ -3635,7 +4045,7 @@ func (c *CodeCommit) PutFileRequest(input *PutFileInput) (req *request.Request, // than 2 GB, add them using a Git client. // // * ErrCodePathRequiredException "PathRequiredException" -// The filePath for a location cannot be empty or null. +// The folderPath for a location cannot be null. // // * ErrCodeInvalidPathException "InvalidPathException" // The specified path is not valid. @@ -3659,7 +4069,7 @@ func (c *CodeCommit) PutFileRequest(input *PutFileInput) (req *request.Request, // mode permissions, see PutFile. // // * ErrCodeNameLengthExceededException "NameLengthExceededException" -// The file name is not valid because it has exceeded the character limit for +// The user name is not valid because it has exceeded the character limit for // file names. File names, including the path to the file, cannot exceed the // character limit. // @@ -3671,6 +4081,9 @@ func (c *CodeCommit) PutFileRequest(input *PutFileInput) (req *request.Request, // * ErrCodeCommitMessageLengthExceededException "CommitMessageLengthExceededException" // The commit message is too long. Provide a shorter string. 
// +// * ErrCodeInvalidDeletionParameterException "InvalidDeletionParameterException" +// The specified deletion parameter is not valid. +// // * ErrCodeEncryptionIntegrityChecksFailedException "EncryptionIntegrityChecksFailedException" // An encryption integrity check failed. // @@ -3729,8 +4142,8 @@ const opPutRepositoryTriggers = "PutRepositoryTriggers" // PutRepositoryTriggersRequest generates a "aws/request.Request" representing the // client's request for the PutRepositoryTriggers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3879,8 +4292,8 @@ const opTestRepositoryTriggers = "TestRepositoryTriggers" // TestRepositoryTriggersRequest generates a "aws/request.Request" representing the // client's request for the TestRepositoryTriggers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4031,8 +4444,8 @@ const opUpdateComment = "UpdateComment" // UpdateCommentRequest generates a "aws/request.Request" representing the // client's request for the UpdateComment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4133,8 +4546,8 @@ const opUpdateDefaultBranch = "UpdateDefaultBranch" // UpdateDefaultBranchRequest generates a "aws/request.Request" representing the // client's request for the UpdateDefaultBranch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4252,8 +4665,8 @@ const opUpdatePullRequestDescription = "UpdatePullRequestDescription" // UpdatePullRequestDescriptionRequest generates a "aws/request.Request" representing the // client's request for the UpdatePullRequestDescription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4347,8 +4760,8 @@ const opUpdatePullRequestStatus = "UpdatePullRequestStatus" // UpdatePullRequestStatusRequest generates a "aws/request.Request" representing the // client's request for the UpdatePullRequestStatus operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4461,8 +4874,8 @@ const opUpdatePullRequestTitle = "UpdatePullRequestTitle" // UpdatePullRequestTitleRequest generates a "aws/request.Request" representing the // client's request for the UpdatePullRequestTitle operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4559,8 +4972,8 @@ const opUpdateRepositoryDescription = "UpdateRepositoryDescription" // UpdateRepositoryDescriptionRequest generates a "aws/request.Request" representing the // client's request for the UpdateRepositoryDescription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4674,8 +5087,8 @@ const opUpdateRepositoryName = "UpdateRepositoryName" // UpdateRepositoryNameRequest generates a "aws/request.Request" representing the // client's request for the UpdateRepositoryName operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4944,7 +5357,7 @@ type Comment struct { Content *string `locationName:"content" type:"string"` // The date and time the comment was created, in timestamp format. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` // A Boolean value indicating whether the comment has been deleted. Deleted *bool `locationName:"deleted" type:"boolean"` @@ -4953,7 +5366,7 @@ type Comment struct { InReplyTo *string `locationName:"inReplyTo" type:"string"` // The date and time the comment was most recently modified, in timestamp format. - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` } // String returns the string representation @@ -5713,23 +6126,220 @@ func (s *DeleteCommentContentOutput) SetComment(v *Comment) *DeleteCommentConten return s } -// Represents the input of a delete repository operation. -type DeleteRepositoryInput struct { +type DeleteFileInput struct { _ struct{} `type:"structure"` - // The name of the repository to delete. + // The name of the branch where the commit will be made deleting the file. 
+ // + // BranchName is a required field + BranchName *string `locationName:"branchName" min:"1" type:"string" required:"true"` + + // The commit message you want to include as part of deleting the file. Commit + // messages are limited to 256 KB. If no message is specified, a default message + // will be used. + CommitMessage *string `locationName:"commitMessage" type:"string"` + + // The email address for the commit that deletes the file. If no email address + // is specified, the email address will be left blank. + Email *string `locationName:"email" type:"string"` + + // The fully-qualified path to the file that will be deleted, including the + // full name and extension of that file. For example, /examples/file.md is a + // fully qualified path to a file named file.md in a folder named examples. + // + // FilePath is a required field + FilePath *string `locationName:"filePath" type:"string" required:"true"` + + // Specifies whether to delete the folder or directory that contains the file + // you want to delete if that file is the only object in the folder or directory. + // By default, empty folders will be deleted. This includes empty folders that + // are part of the directory structure. For example, if the path to a file is + // dir1/dir2/dir3/dir4, and dir2 and dir3 are empty, deleting the last file + // in dir4 will also delete the empty folders dir4, dir3, and dir2. + KeepEmptyFolders *bool `locationName:"keepEmptyFolders" type:"boolean"` + + // The name of the author of the commit that deletes the file. If no name is + // specified, the user's ARN will be used as the author name and committer name. + Name *string `locationName:"name" type:"string"` + + // The ID of the commit that is the tip of the branch where you want to create + // the commit that will delete the file. This must be the HEAD commit for the + // branch. The commit that deletes the file will be created from this commit + // ID. + // + // ParentCommitId is a required field + ParentCommitId *string `locationName:"parentCommitId" type:"string" required:"true"` + + // The name of the repository that contains the file to delete. // // RepositoryName is a required field RepositoryName *string `locationName:"repositoryName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s DeleteRepositoryInput) String() string { +func (s DeleteFileInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteRepositoryInput) GoString() string { +func (s DeleteFileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteFileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteFileInput"} + if s.BranchName == nil { + invalidParams.Add(request.NewErrParamRequired("BranchName")) + } + if s.BranchName != nil && len(*s.BranchName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BranchName", 1)) + } + if s.FilePath == nil { + invalidParams.Add(request.NewErrParamRequired("FilePath")) + } + if s.ParentCommitId == nil { + invalidParams.Add(request.NewErrParamRequired("ParentCommitId")) + } + if s.RepositoryName == nil { + invalidParams.Add(request.NewErrParamRequired("RepositoryName")) + } + if s.RepositoryName != nil && len(*s.RepositoryName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RepositoryName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBranchName sets the BranchName field's value. +func (s *DeleteFileInput) SetBranchName(v string) *DeleteFileInput { + s.BranchName = &v + return s +} + +// SetCommitMessage sets the CommitMessage field's value. +func (s *DeleteFileInput) SetCommitMessage(v string) *DeleteFileInput { + s.CommitMessage = &v + return s +} + +// SetEmail sets the Email field's value. +func (s *DeleteFileInput) SetEmail(v string) *DeleteFileInput { + s.Email = &v + return s +} + +// SetFilePath sets the FilePath field's value. +func (s *DeleteFileInput) SetFilePath(v string) *DeleteFileInput { + s.FilePath = &v + return s +} + +// SetKeepEmptyFolders sets the KeepEmptyFolders field's value. +func (s *DeleteFileInput) SetKeepEmptyFolders(v bool) *DeleteFileInput { + s.KeepEmptyFolders = &v + return s +} + +// SetName sets the Name field's value. +func (s *DeleteFileInput) SetName(v string) *DeleteFileInput { + s.Name = &v + return s +} + +// SetParentCommitId sets the ParentCommitId field's value. +func (s *DeleteFileInput) SetParentCommitId(v string) *DeleteFileInput { + s.ParentCommitId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *DeleteFileInput) SetRepositoryName(v string) *DeleteFileInput { + s.RepositoryName = &v + return s +} + +type DeleteFileOutput struct { + _ struct{} `type:"structure"` + + // The blob ID removed from the tree as part of deleting the file. + // + // BlobId is a required field + BlobId *string `locationName:"blobId" type:"string" required:"true"` + + // The full commit ID of the commit that contains the change that deletes the + // file. + // + // CommitId is a required field + CommitId *string `locationName:"commitId" type:"string" required:"true"` + + // The fully-qualified path to the file that will be deleted, including the + // full name and extension of that file. + // + // FilePath is a required field + FilePath *string `locationName:"filePath" type:"string" required:"true"` + + // The full SHA-1 pointer of the tree information for the commit that contains + // the delete file change. + // + // TreeId is a required field + TreeId *string `locationName:"treeId" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteFileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFileOutput) GoString() string { + return s.String() +} + +// SetBlobId sets the BlobId field's value. +func (s *DeleteFileOutput) SetBlobId(v string) *DeleteFileOutput { + s.BlobId = &v + return s +} + +// SetCommitId sets the CommitId field's value. 
+func (s *DeleteFileOutput) SetCommitId(v string) *DeleteFileOutput { + s.CommitId = &v + return s +} + +// SetFilePath sets the FilePath field's value. +func (s *DeleteFileOutput) SetFilePath(v string) *DeleteFileOutput { + s.FilePath = &v + return s +} + +// SetTreeId sets the TreeId field's value. +func (s *DeleteFileOutput) SetTreeId(v string) *DeleteFileOutput { + s.TreeId = &v + return s +} + +// Represents the input of a delete repository operation. +type DeleteRepositoryInput struct { + _ struct{} `type:"structure"` + + // The name of the repository to delete. + // + // RepositoryName is a required field + RepositoryName *string `locationName:"repositoryName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteRepositoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRepositoryInput) GoString() string { return s.String() } @@ -5938,6 +6548,102 @@ func (s *Difference) SetChangeType(v string) *Difference { return s } +// Returns information about a file in a repository. +type File struct { + _ struct{} `type:"structure"` + + // The fully-qualified path to the file in the repository. + AbsolutePath *string `locationName:"absolutePath" type:"string"` + + // The blob ID that contains the file information. + BlobId *string `locationName:"blobId" type:"string"` + + // The extrapolated file mode permissions for the file. Valid values include + // EXECUTABLE and NORMAL. + FileMode *string `locationName:"fileMode" type:"string" enum:"FileModeTypeEnum"` + + // The relative path of the file from the folder where the query originated. + RelativePath *string `locationName:"relativePath" type:"string"` +} + +// String returns the string representation +func (s File) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s File) GoString() string { + return s.String() +} + +// SetAbsolutePath sets the AbsolutePath field's value. +func (s *File) SetAbsolutePath(v string) *File { + s.AbsolutePath = &v + return s +} + +// SetBlobId sets the BlobId field's value. +func (s *File) SetBlobId(v string) *File { + s.BlobId = &v + return s +} + +// SetFileMode sets the FileMode field's value. +func (s *File) SetFileMode(v string) *File { + s.FileMode = &v + return s +} + +// SetRelativePath sets the RelativePath field's value. +func (s *File) SetRelativePath(v string) *File { + s.RelativePath = &v + return s +} + +// Returns information about a folder in a repository. +type Folder struct { + _ struct{} `type:"structure"` + + // The fully-qualified path of the folder in the repository. + AbsolutePath *string `locationName:"absolutePath" type:"string"` + + // The relative path of the specified folder from the folder where the query + // originated. + RelativePath *string `locationName:"relativePath" type:"string"` + + // The full SHA-1 pointer of the tree information for the commit that contains + // the folder. + TreeId *string `locationName:"treeId" type:"string"` +} + +// String returns the string representation +func (s Folder) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Folder) GoString() string { + return s.String() +} + +// SetAbsolutePath sets the AbsolutePath field's value. +func (s *Folder) SetAbsolutePath(v string) *Folder { + s.AbsolutePath = &v + return s +} + +// SetRelativePath sets the RelativePath field's value. 
+func (s *Folder) SetRelativePath(v string) *Folder { + s.RelativePath = &v + return s +} + +// SetTreeId sets the TreeId field's value. +func (s *Folder) SetTreeId(v string) *Folder { + s.TreeId = &v + return s +} + // Represents the input of a get blob operation. type GetBlobInput struct { _ struct{} `type:"structure"` @@ -6457,87 +7163,372 @@ func (s *GetCommitInput) SetRepositoryName(v string) *GetCommitInput { return s } -// Represents the output of a get commit operation. -type GetCommitOutput struct { - _ struct{} `type:"structure"` +// Represents the output of a get commit operation. +type GetCommitOutput struct { + _ struct{} `type:"structure"` + + // A commit data type object that contains information about the specified commit. + // + // Commit is a required field + Commit *Commit `locationName:"commit" type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetCommitOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCommitOutput) GoString() string { + return s.String() +} + +// SetCommit sets the Commit field's value. +func (s *GetCommitOutput) SetCommit(v *Commit) *GetCommitOutput { + s.Commit = v + return s +} + +type GetDifferencesInput struct { + _ struct{} `type:"structure"` + + // The branch, tag, HEAD, or other fully qualified reference used to identify + // a commit. + // + // AfterCommitSpecifier is a required field + AfterCommitSpecifier *string `locationName:"afterCommitSpecifier" type:"string" required:"true"` + + // The file path in which to check differences. Limits the results to this path. + // Can also be used to specify the changed name of a directory or folder, if + // it has changed. If not specified, differences will be shown for all paths. + AfterPath *string `locationName:"afterPath" type:"string"` + + // The branch, tag, HEAD, or other fully qualified reference used to identify + // a commit. For example, the full commit ID. Optional. If not specified, all + // changes prior to the afterCommitSpecifier value will be shown. If you do + // not use beforeCommitSpecifier in your request, consider limiting the results + // with maxResults. + BeforeCommitSpecifier *string `locationName:"beforeCommitSpecifier" type:"string"` + + // The file path in which to check for differences. Limits the results to this + // path. Can also be used to specify the previous name of a directory or folder. + // If beforePath and afterPath are not specified, differences will be shown + // for all paths. + BeforePath *string `locationName:"beforePath" type:"string"` + + // A non-negative integer used to limit the number of returned results. + MaxResults *int64 `type:"integer"` + + // An enumeration token that when provided in a request, returns the next batch + // of the results. + NextToken *string `type:"string"` + + // The name of the repository where you want to get differences. + // + // RepositoryName is a required field + RepositoryName *string `locationName:"repositoryName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetDifferencesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDifferencesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetDifferencesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDifferencesInput"} + if s.AfterCommitSpecifier == nil { + invalidParams.Add(request.NewErrParamRequired("AfterCommitSpecifier")) + } + if s.RepositoryName == nil { + invalidParams.Add(request.NewErrParamRequired("RepositoryName")) + } + if s.RepositoryName != nil && len(*s.RepositoryName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RepositoryName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAfterCommitSpecifier sets the AfterCommitSpecifier field's value. +func (s *GetDifferencesInput) SetAfterCommitSpecifier(v string) *GetDifferencesInput { + s.AfterCommitSpecifier = &v + return s +} + +// SetAfterPath sets the AfterPath field's value. +func (s *GetDifferencesInput) SetAfterPath(v string) *GetDifferencesInput { + s.AfterPath = &v + return s +} + +// SetBeforeCommitSpecifier sets the BeforeCommitSpecifier field's value. +func (s *GetDifferencesInput) SetBeforeCommitSpecifier(v string) *GetDifferencesInput { + s.BeforeCommitSpecifier = &v + return s +} + +// SetBeforePath sets the BeforePath field's value. +func (s *GetDifferencesInput) SetBeforePath(v string) *GetDifferencesInput { + s.BeforePath = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetDifferencesInput) SetMaxResults(v int64) *GetDifferencesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetDifferencesInput) SetNextToken(v string) *GetDifferencesInput { + s.NextToken = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *GetDifferencesInput) SetRepositoryName(v string) *GetDifferencesInput { + s.RepositoryName = &v + return s +} + +type GetDifferencesOutput struct { + _ struct{} `type:"structure"` + + // A differences data type object that contains information about the differences, + // including whether the difference is added, modified, or deleted (A, D, M). + Differences []*Difference `locationName:"differences" type:"list"` + + // An enumeration token that can be used in a request to return the next batch + // of the results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s GetDifferencesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDifferencesOutput) GoString() string { + return s.String() +} + +// SetDifferences sets the Differences field's value. +func (s *GetDifferencesOutput) SetDifferences(v []*Difference) *GetDifferencesOutput { + s.Differences = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetDifferencesOutput) SetNextToken(v string) *GetDifferencesOutput { + s.NextToken = &v + return s +} + +type GetFileInput struct { + _ struct{} `type:"structure"` + + // The fully-quaified reference that identifies the commit that contains the + // file. For example, you could specify a full commit ID, a tag, a branch name, + // or a reference such as refs/heads/master. If none is provided, then the head + // commit will be used. + CommitSpecifier *string `locationName:"commitSpecifier" type:"string"` + + // The fully-qualified path to the file, including the full name and extension + // of the file. For example, /examples/file.md is the fully-qualified path to + // a file named file.md in a folder named examples. 
+ // + // FilePath is a required field + FilePath *string `locationName:"filePath" type:"string" required:"true"` + + // The name of the repository that contains the file. + // + // RepositoryName is a required field + RepositoryName *string `locationName:"repositoryName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetFileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetFileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetFileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetFileInput"} + if s.FilePath == nil { + invalidParams.Add(request.NewErrParamRequired("FilePath")) + } + if s.RepositoryName == nil { + invalidParams.Add(request.NewErrParamRequired("RepositoryName")) + } + if s.RepositoryName != nil && len(*s.RepositoryName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RepositoryName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCommitSpecifier sets the CommitSpecifier field's value. +func (s *GetFileInput) SetCommitSpecifier(v string) *GetFileInput { + s.CommitSpecifier = &v + return s +} + +// SetFilePath sets the FilePath field's value. +func (s *GetFileInput) SetFilePath(v string) *GetFileInput { + s.FilePath = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *GetFileInput) SetRepositoryName(v string) *GetFileInput { + s.RepositoryName = &v + return s +} + +type GetFileOutput struct { + _ struct{} `type:"structure"` + + // The blob ID of the object that represents the file content. + // + // BlobId is a required field + BlobId *string `locationName:"blobId" type:"string" required:"true"` + + // The full commit ID of the commit that contains the content returned by GetFile. + // + // CommitId is a required field + CommitId *string `locationName:"commitId" type:"string" required:"true"` + + // The base-64 encoded binary data object that represents the content of the + // file. + // + // FileContent is automatically base64 encoded/decoded by the SDK. + // + // FileContent is a required field + FileContent []byte `locationName:"fileContent" type:"blob" required:"true"` + + // The extrapolated file mode permissions of the blob. Valid values include + // strings such as EXECUTABLE and not numeric values. + // + // The file mode permissions returned by this API are not the standard file + // mode permission values, such as 100644, but rather extrapolated values. See + // below for a full list of supported return values. + // + // FileMode is a required field + FileMode *string `locationName:"fileMode" type:"string" required:"true" enum:"FileModeTypeEnum"` - // A commit data type object that contains information about the specified commit. + // The fully qualified path to the specified file. This returns the name and + // extension of the file. // - // Commit is a required field - Commit *Commit `locationName:"commit" type:"structure" required:"true"` + // FilePath is a required field + FilePath *string `locationName:"filePath" type:"string" required:"true"` + + // The size of the contents of the file, in bytes. 
+ // + // FileSize is a required field + FileSize *int64 `locationName:"fileSize" type:"long" required:"true"` } // String returns the string representation -func (s GetCommitOutput) String() string { +func (s GetFileOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetCommitOutput) GoString() string { +func (s GetFileOutput) GoString() string { return s.String() } -// SetCommit sets the Commit field's value. -func (s *GetCommitOutput) SetCommit(v *Commit) *GetCommitOutput { - s.Commit = v +// SetBlobId sets the BlobId field's value. +func (s *GetFileOutput) SetBlobId(v string) *GetFileOutput { + s.BlobId = &v return s } -type GetDifferencesInput struct { - _ struct{} `type:"structure"` +// SetCommitId sets the CommitId field's value. +func (s *GetFileOutput) SetCommitId(v string) *GetFileOutput { + s.CommitId = &v + return s +} - // The branch, tag, HEAD, or other fully qualified reference used to identify - // a commit. - // - // AfterCommitSpecifier is a required field - AfterCommitSpecifier *string `locationName:"afterCommitSpecifier" type:"string" required:"true"` +// SetFileContent sets the FileContent field's value. +func (s *GetFileOutput) SetFileContent(v []byte) *GetFileOutput { + s.FileContent = v + return s +} - // The file path in which to check differences. Limits the results to this path. - // Can also be used to specify the changed name of a directory or folder, if - // it has changed. If not specified, differences will be shown for all paths. - AfterPath *string `locationName:"afterPath" type:"string"` +// SetFileMode sets the FileMode field's value. +func (s *GetFileOutput) SetFileMode(v string) *GetFileOutput { + s.FileMode = &v + return s +} - // The branch, tag, HEAD, or other fully qualified reference used to identify - // a commit. For example, the full commit ID. Optional. If not specified, all - // changes prior to the afterCommitSpecifier value will be shown. If you do - // not use beforeCommitSpecifier in your request, consider limiting the results - // with maxResults. - BeforeCommitSpecifier *string `locationName:"beforeCommitSpecifier" type:"string"` +// SetFilePath sets the FilePath field's value. +func (s *GetFileOutput) SetFilePath(v string) *GetFileOutput { + s.FilePath = &v + return s +} - // The file path in which to check for differences. Limits the results to this - // path. Can also be used to specify the previous name of a directory or folder. - // If beforePath and afterPath are not specified, differences will be shown - // for all paths. - BeforePath *string `locationName:"beforePath" type:"string"` +// SetFileSize sets the FileSize field's value. +func (s *GetFileOutput) SetFileSize(v int64) *GetFileOutput { + s.FileSize = &v + return s +} - // A non-negative integer used to limit the number of returned results. - MaxResults *int64 `type:"integer"` +type GetFolderInput struct { + _ struct{} `type:"structure"` - // An enumeration token that when provided in a request, returns the next batch - // of the results. - NextToken *string `type:"string"` + // A fully-qualified reference used to identify a commit that contains the version + // of the folder's content to return. A fully-qualified reference can be a commit + // ID, branch name, tag, or reference such as HEAD. If no specifier is provided, + // the folder content will be returned as it exists in the HEAD commit. 
+ CommitSpecifier *string `locationName:"commitSpecifier" type:"string"` - // The name of the repository where you want to get differences. + // The fully-qualified path to the folder whose contents will be returned, including + // the folder name. For example, /examples is a fully-qualified path to a folder + // named examples that was created off of the root directory (/) of a repository. + // + // FolderPath is a required field + FolderPath *string `locationName:"folderPath" type:"string" required:"true"` + + // The name of the repository. // // RepositoryName is a required field RepositoryName *string `locationName:"repositoryName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s GetDifferencesInput) String() string { +func (s GetFolderInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDifferencesInput) GoString() string { +func (s GetFolderInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetDifferencesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetDifferencesInput"} - if s.AfterCommitSpecifier == nil { - invalidParams.Add(request.NewErrParamRequired("AfterCommitSpecifier")) +func (s *GetFolderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetFolderInput"} + if s.FolderPath == nil { + invalidParams.Add(request.NewErrParamRequired("FolderPath")) } if s.RepositoryName == nil { invalidParams.Add(request.NewErrParamRequired("RepositoryName")) @@ -6552,79 +7543,105 @@ func (s *GetDifferencesInput) Validate() error { return nil } -// SetAfterCommitSpecifier sets the AfterCommitSpecifier field's value. -func (s *GetDifferencesInput) SetAfterCommitSpecifier(v string) *GetDifferencesInput { - s.AfterCommitSpecifier = &v +// SetCommitSpecifier sets the CommitSpecifier field's value. +func (s *GetFolderInput) SetCommitSpecifier(v string) *GetFolderInput { + s.CommitSpecifier = &v return s } -// SetAfterPath sets the AfterPath field's value. -func (s *GetDifferencesInput) SetAfterPath(v string) *GetDifferencesInput { - s.AfterPath = &v +// SetFolderPath sets the FolderPath field's value. +func (s *GetFolderInput) SetFolderPath(v string) *GetFolderInput { + s.FolderPath = &v return s } -// SetBeforeCommitSpecifier sets the BeforeCommitSpecifier field's value. -func (s *GetDifferencesInput) SetBeforeCommitSpecifier(v string) *GetDifferencesInput { - s.BeforeCommitSpecifier = &v +// SetRepositoryName sets the RepositoryName field's value. +func (s *GetFolderInput) SetRepositoryName(v string) *GetFolderInput { + s.RepositoryName = &v return s } -// SetBeforePath sets the BeforePath field's value. -func (s *GetDifferencesInput) SetBeforePath(v string) *GetDifferencesInput { - s.BeforePath = &v - return s -} +type GetFolderOutput struct { + _ struct{} `type:"structure"` -// SetMaxResults sets the MaxResults field's value. -func (s *GetDifferencesInput) SetMaxResults(v int64) *GetDifferencesInput { - s.MaxResults = &v - return s -} + // The full commit ID used as a reference for which version of the folder content + // is returned. + // + // CommitId is a required field + CommitId *string `locationName:"commitId" type:"string" required:"true"` -// SetNextToken sets the NextToken field's value. 
-func (s *GetDifferencesInput) SetNextToken(v string) *GetDifferencesInput { - s.NextToken = &v - return s -} + // The list of files that exist in the specified folder, if any. + Files []*File `locationName:"files" type:"list"` -// SetRepositoryName sets the RepositoryName field's value. -func (s *GetDifferencesInput) SetRepositoryName(v string) *GetDifferencesInput { - s.RepositoryName = &v - return s -} + // The fully-qualified path of the folder whose contents are returned. + // + // FolderPath is a required field + FolderPath *string `locationName:"folderPath" type:"string" required:"true"` -type GetDifferencesOutput struct { - _ struct{} `type:"structure"` + // The list of folders that exist beneath the specified folder, if any. + SubFolders []*Folder `locationName:"subFolders" type:"list"` - // A differences data type object that contains information about the differences, - // including whether the difference is added, modified, or deleted (A, D, M). - Differences []*Difference `locationName:"differences" type:"list"` + // The list of submodules that exist in the specified folder, if any. + SubModules []*SubModule `locationName:"subModules" type:"list"` - // An enumeration token that can be used in a request to return the next batch - // of the results. - NextToken *string `type:"string"` + // The list of symbolic links to other files and folders that exist in the specified + // folder, if any. + SymbolicLinks []*SymbolicLink `locationName:"symbolicLinks" type:"list"` + + // The full SHA-1 pointer of the tree information for the commit that contains + // the folder. + TreeId *string `locationName:"treeId" type:"string"` } // String returns the string representation -func (s GetDifferencesOutput) String() string { +func (s GetFolderOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDifferencesOutput) GoString() string { +func (s GetFolderOutput) GoString() string { return s.String() } -// SetDifferences sets the Differences field's value. -func (s *GetDifferencesOutput) SetDifferences(v []*Difference) *GetDifferencesOutput { - s.Differences = v +// SetCommitId sets the CommitId field's value. +func (s *GetFolderOutput) SetCommitId(v string) *GetFolderOutput { + s.CommitId = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *GetDifferencesOutput) SetNextToken(v string) *GetDifferencesOutput { - s.NextToken = &v +// SetFiles sets the Files field's value. +func (s *GetFolderOutput) SetFiles(v []*File) *GetFolderOutput { + s.Files = v + return s +} + +// SetFolderPath sets the FolderPath field's value. +func (s *GetFolderOutput) SetFolderPath(v string) *GetFolderOutput { + s.FolderPath = &v + return s +} + +// SetSubFolders sets the SubFolders field's value. +func (s *GetFolderOutput) SetSubFolders(v []*Folder) *GetFolderOutput { + s.SubFolders = v + return s +} + +// SetSubModules sets the SubModules field's value. +func (s *GetFolderOutput) SetSubModules(v []*SubModule) *GetFolderOutput { + s.SubModules = v + return s +} + +// SetSymbolicLinks sets the SymbolicLinks field's value. +func (s *GetFolderOutput) SetSymbolicLinks(v []*SymbolicLink) *GetFolderOutput { + s.SymbolicLinks = v + return s +} + +// SetTreeId sets the TreeId field's value. 
+func (s *GetFolderOutput) SetTreeId(v string) *GetFolderOutput { + s.TreeId = &v return s } @@ -7912,7 +8929,7 @@ type PullRequest struct { ClientRequestToken *string `locationName:"clientRequestToken" type:"string"` // The date and time the pull request was originally created, in timestamp format. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` // The user-defined description of the pull request. This description can be // used to clarify what should be reviewed and other details of the request. @@ -7920,7 +8937,7 @@ type PullRequest struct { // The day and time of the last user or system activity on the pull request, // in timestamp format. - LastActivityDate *time.Time `locationName:"lastActivityDate" type:"timestamp" timestampFormat:"unix"` + LastActivityDate *time.Time `locationName:"lastActivityDate" type:"timestamp"` // The system-generated ID of the pull request. PullRequestId *string `locationName:"pullRequestId" type:"string"` @@ -8002,6 +9019,60 @@ func (s *PullRequest) SetTitle(v string) *PullRequest { return s } +// Metadata about the pull request that is used when comparing the pull request +// source with its destination. +type PullRequestCreatedEventMetadata struct { + _ struct{} `type:"structure"` + + // The commit ID of the tip of the branch specified as the destination branch + // when the pull request was created. + DestinationCommitId *string `locationName:"destinationCommitId" type:"string"` + + // The commit ID of the most recent commit that the source branch and the destination + // branch have in common. + MergeBase *string `locationName:"mergeBase" type:"string"` + + // The name of the repository where the pull request was created. + RepositoryName *string `locationName:"repositoryName" min:"1" type:"string"` + + // The commit ID on the source branch used when the pull request was created. + SourceCommitId *string `locationName:"sourceCommitId" type:"string"` +} + +// String returns the string representation +func (s PullRequestCreatedEventMetadata) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PullRequestCreatedEventMetadata) GoString() string { + return s.String() +} + +// SetDestinationCommitId sets the DestinationCommitId field's value. +func (s *PullRequestCreatedEventMetadata) SetDestinationCommitId(v string) *PullRequestCreatedEventMetadata { + s.DestinationCommitId = &v + return s +} + +// SetMergeBase sets the MergeBase field's value. +func (s *PullRequestCreatedEventMetadata) SetMergeBase(v string) *PullRequestCreatedEventMetadata { + s.MergeBase = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *PullRequestCreatedEventMetadata) SetRepositoryName(v string) *PullRequestCreatedEventMetadata { + s.RepositoryName = &v + return s +} + +// SetSourceCommitId sets the SourceCommitId field's value. +func (s *PullRequestCreatedEventMetadata) SetSourceCommitId(v string) *PullRequestCreatedEventMetadata { + s.SourceCommitId = &v + return s +} + // Returns information about a pull request event. type PullRequestEvent struct { _ struct{} `type:"structure"` @@ -8012,7 +9083,10 @@ type PullRequestEvent struct { ActorArn *string `locationName:"actorArn" type:"string"` // The day and time of the pull request event, in timestamp format. 
- EventDate *time.Time `locationName:"eventDate" type:"timestamp" timestampFormat:"unix"` + EventDate *time.Time `locationName:"eventDate" type:"timestamp"` + + // Information about the source and destination branches for the pull request. + PullRequestCreatedEventMetadata *PullRequestCreatedEventMetadata `locationName:"pullRequestCreatedEventMetadata" type:"structure"` // The type of the pull request event, for example a status change event (PULL_REQUEST_STATUS_CHANGED) // or update event (PULL_REQUEST_SOURCE_REFERENCE_UPDATED). @@ -8053,6 +9127,12 @@ func (s *PullRequestEvent) SetEventDate(v time.Time) *PullRequestEvent { return s } +// SetPullRequestCreatedEventMetadata sets the PullRequestCreatedEventMetadata field's value. +func (s *PullRequestEvent) SetPullRequestCreatedEventMetadata(v *PullRequestCreatedEventMetadata) *PullRequestEvent { + s.PullRequestCreatedEventMetadata = v + return s +} + // SetPullRequestEventType sets the PullRequestEventType field's value. func (s *PullRequestEvent) SetPullRequestEventType(v string) *PullRequestEvent { s.PullRequestEventType = &v @@ -8138,6 +9218,10 @@ type PullRequestSourceReferenceUpdatedEventMetadata struct { // of the branch at the time the pull request was updated. BeforeCommitId *string `locationName:"beforeCommitId" type:"string"` + // The commit ID of the most recent commit that the source branch and the destination + // branch have in common. + MergeBase *string `locationName:"mergeBase" type:"string"` + // The name of the repository where the pull request was updated. RepositoryName *string `locationName:"repositoryName" min:"1" type:"string"` } @@ -8164,6 +9248,12 @@ func (s *PullRequestSourceReferenceUpdatedEventMetadata) SetBeforeCommitId(v str return s } +// SetMergeBase sets the MergeBase field's value. +func (s *PullRequestSourceReferenceUpdatedEventMetadata) SetMergeBase(v string) *PullRequestSourceReferenceUpdatedEventMetadata { + s.MergeBase = &v + return s +} + // SetRepositoryName sets the RepositoryName field's value. func (s *PullRequestSourceReferenceUpdatedEventMetadata) SetRepositoryName(v string) *PullRequestSourceReferenceUpdatedEventMetadata { s.RepositoryName = &v @@ -8206,6 +9296,10 @@ type PullRequestTarget struct { // into. Also known as the destination branch. DestinationReference *string `locationName:"destinationReference" type:"string"` + // The commit ID of the most recent commit that the source branch and the destination + // branch have in common. + MergeBase *string `locationName:"mergeBase" type:"string"` + // Returns metadata about the state of the merge, including whether the merge // has been made. MergeMetadata *MergeMetadata `locationName:"mergeMetadata" type:"structure"` @@ -8246,6 +9340,12 @@ func (s *PullRequestTarget) SetDestinationReference(v string) *PullRequestTarget return s } +// SetMergeBase sets the MergeBase field's value. +func (s *PullRequestTarget) SetMergeBase(v string) *PullRequestTarget { + s.MergeBase = &v + return s +} + // SetMergeMetadata sets the MergeMetadata field's value. func (s *PullRequestTarget) SetMergeMetadata(v *MergeMetadata) *PullRequestTarget { s.MergeMetadata = v @@ -8273,7 +9373,8 @@ func (s *PullRequestTarget) SetSourceReference(v string) *PullRequestTarget { type PutFileInput struct { _ struct{} `type:"structure"` - // The name of the branch where you want to add or update the file. + // The name of the branch where you want to add or update the file. If this + // is an empty repository, this branch will be created. 
// // BranchName is a required field BranchName *string `locationName:"branchName" min:"1" type:"string" required:"true"` @@ -8312,9 +9413,11 @@ type PutFileInput struct { Name *string `locationName:"name" type:"string"` // The full commit ID of the head commit in the branch where you want to add - // or update the file. If the commit ID does not match the ID of the head commit - // at the time of the operation, an error will occur, and the file will not - // be added or updated. + // or update the file. If this is an empty repository, no commit ID is required. + // If this is not an empty repository, a commit ID is required. + // + // The commit ID must match the ID of the head commit at the time of the operation, + // or an error will occur, and the file will not be added or updated. ParentCommitId *string `locationName:"parentCommitId" type:"string"` // The name of the repository where you want to add or update the file. @@ -8428,7 +9531,8 @@ type PutFileOutput struct { // CommitId is a required field CommitId *string `locationName:"commitId" type:"string" required:"true"` - // Tree information for the commit that contains this file change. + // The full SHA-1 pointer of the tree information for the commit that contains + // this file change. // // TreeId is a required field TreeId *string `locationName:"treeId" type:"string" required:"true"` @@ -8569,13 +9673,13 @@ type RepositoryMetadata struct { CloneUrlSsh *string `locationName:"cloneUrlSsh" type:"string"` // The date and time the repository was created, in timestamp format. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` // The repository's default branch name. DefaultBranch *string `locationName:"defaultBranch" min:"1" type:"string"` // The date and time the repository was last modified, in timestamp format. - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` // A comment or description about the repository. RepositoryDescription *string `locationName:"repositoryDescription" type:"string"` @@ -8817,6 +9921,101 @@ func (s *RepositoryTriggerExecutionFailure) SetTrigger(v string) *RepositoryTrig return s } +// Returns information about a submodule reference in a repository folder. +type SubModule struct { + _ struct{} `type:"structure"` + + // The fully qualified path to the folder that contains the reference to the + // submodule. + AbsolutePath *string `locationName:"absolutePath" type:"string"` + + // The commit ID that contains the reference to the submodule. + CommitId *string `locationName:"commitId" type:"string"` + + // The relative path of the submodule from the folder where the query originated. + RelativePath *string `locationName:"relativePath" type:"string"` +} + +// String returns the string representation +func (s SubModule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SubModule) GoString() string { + return s.String() +} + +// SetAbsolutePath sets the AbsolutePath field's value. +func (s *SubModule) SetAbsolutePath(v string) *SubModule { + s.AbsolutePath = &v + return s +} + +// SetCommitId sets the CommitId field's value. +func (s *SubModule) SetCommitId(v string) *SubModule { + s.CommitId = &v + return s +} + +// SetRelativePath sets the RelativePath field's value. 
+func (s *SubModule) SetRelativePath(v string) *SubModule { + s.RelativePath = &v + return s +} + +// Returns information about a symbolic link in a repository folder. +type SymbolicLink struct { + _ struct{} `type:"structure"` + + // The fully-qualified path to the folder that contains the symbolic link. + AbsolutePath *string `locationName:"absolutePath" type:"string"` + + // The blob ID that contains the information about the symbolic link. + BlobId *string `locationName:"blobId" type:"string"` + + // The file mode permissions of the blob that cotains information about the + // symbolic link. + FileMode *string `locationName:"fileMode" type:"string" enum:"FileModeTypeEnum"` + + // The relative path of the symbolic link from the folder where the query originated. + RelativePath *string `locationName:"relativePath" type:"string"` +} + +// String returns the string representation +func (s SymbolicLink) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SymbolicLink) GoString() string { + return s.String() +} + +// SetAbsolutePath sets the AbsolutePath field's value. +func (s *SymbolicLink) SetAbsolutePath(v string) *SymbolicLink { + s.AbsolutePath = &v + return s +} + +// SetBlobId sets the BlobId field's value. +func (s *SymbolicLink) SetBlobId(v string) *SymbolicLink { + s.BlobId = &v + return s +} + +// SetFileMode sets the FileMode field's value. +func (s *SymbolicLink) SetFileMode(v string) *SymbolicLink { + s.FileMode = &v + return s +} + +// SetRelativePath sets the RelativePath field's value. +func (s *SymbolicLink) SetRelativePath(v string) *SymbolicLink { + s.RelativePath = &v + return s +} + // Returns information about a target for a pull request. type Target struct { _ struct{} `type:"structure"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/codecommit/doc.go b/vendor/github.com/aws/aws-sdk-go/service/codecommit/doc.go index 39c77b1f90d..604881e7441 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codecommit/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codecommit/doc.go @@ -45,6 +45,13 @@ // // Files, by calling the following: // +// * DeleteFile, which deletes the content of a specified file from a specified +// branch. +// +// * GetFile, which returns the base-64 encoded content of a specified file. +// +// * GetFolder, which returns the contents of a specified folder or directory. +// // * PutFile, which adds or modifies a file in a specified repository and // branch. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/codecommit/errors.go b/vendor/github.com/aws/aws-sdk-go/service/codecommit/errors.go index 60c433ac55c..2301b41f556 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codecommit/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codecommit/errors.go @@ -204,6 +204,13 @@ const ( // than 2 GB, add them using a Git client. ErrCodeFileContentSizeLimitExceededException = "FileContentSizeLimitExceededException" + // ErrCodeFileDoesNotExistException for service response error code + // "FileDoesNotExistException". + // + // The specified file does not exist. Verify that you have provided the correct + // name of the file, including its full path and extension. + ErrCodeFileDoesNotExistException = "FileDoesNotExistException" + // ErrCodeFileNameConflictsWithDirectoryNameException for service response error code // "FileNameConflictsWithDirectoryNameException". 
// @@ -221,6 +228,13 @@ const ( // (http://docs.aws.amazon.com/codecommit/latest/userguide/limits.html). ErrCodeFileTooLargeException = "FileTooLargeException" + // ErrCodeFolderDoesNotExistException for service response error code + // "FolderDoesNotExistException". + // + // The specified folder does not exist. Either the folder name is not correct, + // or you did not provide the full path to the folder. + ErrCodeFolderDoesNotExistException = "FolderDoesNotExistException" + // ErrCodeIdempotencyParameterMismatchException for service response error code // "IdempotencyParameterMismatchException". // @@ -286,6 +300,12 @@ const ( // The specified continuation token is not valid. ErrCodeInvalidContinuationTokenException = "InvalidContinuationTokenException" + // ErrCodeInvalidDeletionParameterException for service response error code + // "InvalidDeletionParameterException". + // + // The specified deletion parameter is not valid. + ErrCodeInvalidDeletionParameterException = "InvalidDeletionParameterException" + // ErrCodeInvalidDescriptionException for service response error code // "InvalidDescriptionException". // @@ -549,7 +569,7 @@ const ( // ErrCodeNameLengthExceededException for service response error code // "NameLengthExceededException". // - // The file name is not valid because it has exceeded the character limit for + // The user name is not valid because it has exceeded the character limit for // file names. File names, including the path to the file, cannot exceed the // character limit. ErrCodeNameLengthExceededException = "NameLengthExceededException" @@ -557,8 +577,8 @@ const ( // ErrCodeParentCommitDoesNotExistException for service response error code // "ParentCommitDoesNotExistException". // - // The parent commit ID is not valid. The specified parent commit ID does not - // exist in the specified branch of the repository. + // The parent commit ID is not valid because it does not exist. The specified + // parent commit ID does not exist in the specified branch of the repository. ErrCodeParentCommitDoesNotExistException = "ParentCommitDoesNotExistException" // ErrCodeParentCommitIdOutdatedException for service response error code @@ -586,7 +606,7 @@ const ( // ErrCodePathRequiredException for service response error code // "PathRequiredException". // - // The filePath for a location cannot be empty or null. + // The folderPath for a location cannot be null. ErrCodePathRequiredException = "PathRequiredException" // ErrCodePullRequestAlreadyClosedException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/codecommit/service.go b/vendor/github.com/aws/aws-sdk-go/service/codecommit/service.go index cb25eaba7ff..c8cad394ab0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codecommit/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codecommit/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "codecommit" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "codecommit" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CodeCommit" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CodeCommit client with a session. 
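For context on how the constants and constructor above are consumed, here is a minimal sketch (not part of the vendored code) of creating the CodeCommit client from a session, as the `New` doc comment describes, and calling the new `GetFolder` operation added in this update. The region, repository name, and folder path are placeholder values, and the one-shot `GetFolder` wrapper is assumed to follow the usual generated pattern:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/codecommit"
)

func main() {
	// New creates a new instance of the CodeCommit client with a session.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := codecommit.New(sess)

	// GetFolder returns the files, subfolders, submodules, and symbolic links
	// beneath a path; repository name and folder path here are placeholders.
	out, err := svc.GetFolder(&codecommit.GetFolderInput{
		RepositoryName: aws.String("example-repo"),
		FolderPath:     aws.String("/"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range out.Files {
		fmt.Println(aws.StringValue(f.AbsolutePath))
	}
}
```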
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go index 52ad4bf9f02..290e541dbf3 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/api.go @@ -16,8 +16,8 @@ const opAddTagsToOnPremisesInstances = "AddTagsToOnPremisesInstances" // AddTagsToOnPremisesInstancesRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToOnPremisesInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -116,8 +116,8 @@ const opBatchGetApplicationRevisions = "BatchGetApplicationRevisions" // BatchGetApplicationRevisionsRequest generates a "aws/request.Request" representing the // client's request for the BatchGetApplicationRevisions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -210,8 +210,8 @@ const opBatchGetApplications = "BatchGetApplications" // BatchGetApplicationsRequest generates a "aws/request.Request" representing the // client's request for the BatchGetApplications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -298,8 +298,8 @@ const opBatchGetDeploymentGroups = "BatchGetDeploymentGroups" // BatchGetDeploymentGroupsRequest generates a "aws/request.Request" representing the // client's request for the BatchGetDeploymentGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -392,8 +392,8 @@ const opBatchGetDeploymentInstances = "BatchGetDeploymentInstances" // BatchGetDeploymentInstancesRequest generates a "aws/request.Request" representing the // client's request for the BatchGetDeploymentInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -487,8 +487,8 @@ const opBatchGetDeployments = "BatchGetDeployments" // BatchGetDeploymentsRequest generates a "aws/request.Request" representing the // client's request for the BatchGetDeployments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -572,8 +572,8 @@ const opBatchGetOnPremisesInstances = "BatchGetOnPremisesInstances" // BatchGetOnPremisesInstancesRequest generates a "aws/request.Request" representing the // client's request for the BatchGetOnPremisesInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -657,8 +657,8 @@ const opContinueDeployment = "ContinueDeployment" // ContinueDeploymentRequest generates a "aws/request.Request" representing the // client's request for the ContinueDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -758,8 +758,8 @@ const opCreateApplication = "CreateApplication" // CreateApplicationRequest generates a "aws/request.Request" representing the // client's request for the CreateApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -850,8 +850,8 @@ const opCreateDeployment = "CreateDeployment" // CreateDeploymentRequest generates a "aws/request.Request" representing the // client's request for the CreateDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -986,6 +986,9 @@ func (c *CodeDeploy) CreateDeploymentRequest(input *CreateDeploymentInput) (req // The IgnoreApplicationStopFailures value is invalid. For AWS Lambda deployments, // false is expected. For EC2/On-premises deployments, true or false is expected. 
// +// * ErrCodeInvalidGitHubAccountTokenException "InvalidGitHubAccountTokenException" +// The GitHub token is not valid. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/codedeploy-2014-10-06/CreateDeployment func (c *CodeDeploy) CreateDeployment(input *CreateDeploymentInput) (*CreateDeploymentOutput, error) { req, out := c.CreateDeploymentRequest(input) @@ -1012,8 +1015,8 @@ const opCreateDeploymentConfig = "CreateDeploymentConfig" // CreateDeploymentConfigRequest generates a "aws/request.Request" representing the // client's request for the CreateDeploymentConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1111,8 +1114,8 @@ const opCreateDeploymentGroup = "CreateDeploymentGroup" // CreateDeploymentGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateDeploymentGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1291,8 +1294,8 @@ const opDeleteApplication = "DeleteApplication" // DeleteApplicationRequest generates a "aws/request.Request" representing the // client's request for the DeleteApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1375,8 +1378,8 @@ const opDeleteDeploymentConfig = "DeleteDeploymentConfig" // DeleteDeploymentConfigRequest generates a "aws/request.Request" representing the // client's request for the DeleteDeploymentConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1468,8 +1471,8 @@ const opDeleteDeploymentGroup = "DeleteDeploymentGroup" // DeleteDeploymentGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteDeploymentGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1561,8 +1564,8 @@ const opDeleteGitHubAccountToken = "DeleteGitHubAccountToken" // DeleteGitHubAccountTokenRequest generates a "aws/request.Request" representing the // client's request for the DeleteGitHubAccountToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1652,8 +1655,8 @@ const opDeregisterOnPremisesInstance = "DeregisterOnPremisesInstance" // DeregisterOnPremisesInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeregisterOnPremisesInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1736,8 +1739,8 @@ const opGetApplication = "GetApplication" // GetApplicationRequest generates a "aws/request.Request" representing the // client's request for the GetApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1821,8 +1824,8 @@ const opGetApplicationRevision = "GetApplicationRevision" // GetApplicationRevisionRequest generates a "aws/request.Request" representing the // client's request for the GetApplicationRevision operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1915,8 +1918,8 @@ const opGetDeployment = "GetDeployment" // GetDeploymentRequest generates a "aws/request.Request" representing the // client's request for the GetDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2000,8 +2003,8 @@ const opGetDeploymentConfig = "GetDeploymentConfig" // GetDeploymentConfigRequest generates a "aws/request.Request" representing the // client's request for the GetDeploymentConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2086,8 +2089,8 @@ const opGetDeploymentGroup = "GetDeploymentGroup" // GetDeploymentGroupRequest generates a "aws/request.Request" representing the // client's request for the GetDeploymentGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2181,8 +2184,8 @@ const opGetDeploymentInstance = "GetDeploymentInstance" // GetDeploymentInstanceRequest generates a "aws/request.Request" representing the // client's request for the GetDeploymentInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2275,8 +2278,8 @@ const opGetOnPremisesInstance = "GetOnPremisesInstance" // GetOnPremisesInstanceRequest generates a "aws/request.Request" representing the // client's request for the GetOnPremisesInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2360,8 +2363,8 @@ const opListApplicationRevisions = "ListApplicationRevisions" // ListApplicationRevisionsRequest generates a "aws/request.Request" representing the // client's request for the ListApplicationRevisions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2523,8 +2526,8 @@ const opListApplications = "ListApplications" // ListApplicationsRequest generates a "aws/request.Request" representing the // client's request for the ListApplications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2658,8 +2661,8 @@ const opListDeploymentConfigs = "ListDeploymentConfigs" // ListDeploymentConfigsRequest generates a "aws/request.Request" representing the // client's request for the ListDeploymentConfigs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2793,8 +2796,8 @@ const opListDeploymentGroups = "ListDeploymentGroups" // ListDeploymentGroupsRequest generates a "aws/request.Request" representing the // client's request for the ListDeploymentGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2938,8 +2941,8 @@ const opListDeploymentInstances = "ListDeploymentInstances" // ListDeploymentInstancesRequest generates a "aws/request.Request" representing the // client's request for the ListDeploymentInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3098,8 +3101,8 @@ const opListDeployments = "ListDeployments" // ListDeploymentsRequest generates a "aws/request.Request" representing the // client's request for the ListDeployments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3259,8 +3262,8 @@ const opListGitHubAccountTokenNames = "ListGitHubAccountTokenNames" // ListGitHubAccountTokenNamesRequest generates a "aws/request.Request" representing the // client's request for the ListGitHubAccountTokenNames operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3344,8 +3347,8 @@ const opListOnPremisesInstances = "ListOnPremisesInstances" // ListOnPremisesInstancesRequest generates a "aws/request.Request" representing the // client's request for the ListOnPremisesInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3433,8 +3436,8 @@ const opPutLifecycleEventHookExecutionStatus = "PutLifecycleEventHookExecutionSt // PutLifecycleEventHookExecutionStatusRequest generates a "aws/request.Request" representing the // client's request for the PutLifecycleEventHookExecutionStatus operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3534,8 +3537,8 @@ const opRegisterApplicationRevision = "RegisterApplicationRevision" // RegisterApplicationRevisionRequest generates a "aws/request.Request" representing the // client's request for the RegisterApplicationRevision operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3630,8 +3633,8 @@ const opRegisterOnPremisesInstance = "RegisterOnPremisesInstance" // RegisterOnPremisesInstanceRequest generates a "aws/request.Request" representing the // client's request for the RegisterOnPremisesInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3744,8 +3747,8 @@ const opRemoveTagsFromOnPremisesInstances = "RemoveTagsFromOnPremisesInstances" // RemoveTagsFromOnPremisesInstancesRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromOnPremisesInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3844,8 +3847,8 @@ const opSkipWaitTimeForInstanceTermination = "SkipWaitTimeForInstanceTermination // SkipWaitTimeForInstanceTerminationRequest generates a "aws/request.Request" representing the // client's request for the SkipWaitTimeForInstanceTermination operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3941,8 +3944,8 @@ const opStopDeployment = "StopDeployment" // StopDeploymentRequest generates a "aws/request.Request" representing the // client's request for the StopDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -4029,8 +4032,8 @@ const opUpdateApplication = "UpdateApplication" // UpdateApplicationRequest generates a "aws/request.Request" representing the // client's request for the UpdateApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4120,8 +4123,8 @@ const opUpdateDeploymentGroup = "UpdateDeploymentGroup" // UpdateDeploymentGroupRequest generates a "aws/request.Request" representing the // client's request for the UpdateDeploymentGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4455,7 +4458,7 @@ type ApplicationInfo struct { ComputePlatform *string `locationName:"computePlatform" type:"string" enum:"ComputePlatform"` // The time at which the application was created. - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"unix"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The name for a connection to a GitHub account. GitHubAccountName *string `locationName:"gitHubAccountName" type:"string"` @@ -5102,7 +5105,8 @@ type BlueInstanceTerminationOption struct { Action *string `locationName:"action" type:"string" enum:"InstanceAction"` // The number of minutes to wait after a successful blue/green deployment before - // terminating instances from the original environment. + // terminating instances from the original environment. The maximum setting + // is 2880 minutes (2 days). TerminationWaitTimeInMinutes *int64 `locationName:"terminationWaitTimeInMinutes" type:"integer"` } @@ -6029,7 +6033,7 @@ type DeploymentConfigInfo struct { ComputePlatform *string `locationName:"computePlatform" type:"string" enum:"ComputePlatform"` // The time at which the deployment configuration was created. - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"unix"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The deployment configuration ID. DeploymentConfigId *string `locationName:"deploymentConfigId" type:"string"` @@ -6316,13 +6320,13 @@ type DeploymentInfo struct { BlueGreenDeploymentConfiguration *BlueGreenDeploymentConfiguration `locationName:"blueGreenDeploymentConfiguration" type:"structure"` // A timestamp indicating when the deployment was complete. - CompleteTime *time.Time `locationName:"completeTime" type:"timestamp" timestampFormat:"unix"` + CompleteTime *time.Time `locationName:"completeTime" type:"timestamp"` // The destination platform type for the deployment (Lambda or Server). ComputePlatform *string `locationName:"computePlatform" type:"string" enum:"ComputePlatform"` // A timestamp indicating when the deployment was created. 
- CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"unix"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The means by which the deployment was created: // @@ -6409,7 +6413,7 @@ type DeploymentInfo struct { // In some cases, the reported value of the start time may be later than the // complete time. This is due to differences in the clock settings of back-end // servers that participate in the deployment process. - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` // The current state of the deployment as a whole. Status *string `locationName:"status" type:"string" enum:"DeploymentStatus"` @@ -6678,10 +6682,10 @@ type DeploymentReadyOption struct { // after the new application revision is installed on the instances in the // replacement environment. // - // * STOP_DEPLOYMENT: Do not register new instances with load balancer unless - // traffic is rerouted manually. If traffic is not rerouted manually before - // the end of the specified wait period, the deployment status is changed - // to Stopped. + // * STOP_DEPLOYMENT: Do not register new instances with a load balancer + // unless traffic rerouting is started using ContinueDeployment. If traffic + // rerouting is not started before the end of the specified wait period, + // the deployment status is changed to Stopped. ActionOnTimeout *string `locationName:"actionOnTimeout" type:"string" enum:"DeploymentReadyAction"` // The number of minutes to wait before the status of a blue/green deployment @@ -7055,13 +7059,13 @@ type GenericRevisionInfo struct { Description *string `locationName:"description" type:"string"` // When the revision was first used by AWS CodeDeploy. - FirstUsedTime *time.Time `locationName:"firstUsedTime" type:"timestamp" timestampFormat:"unix"` + FirstUsedTime *time.Time `locationName:"firstUsedTime" type:"timestamp"` // When the revision was last used by AWS CodeDeploy. - LastUsedTime *time.Time `locationName:"lastUsedTime" type:"timestamp" timestampFormat:"unix"` + LastUsedTime *time.Time `locationName:"lastUsedTime" type:"timestamp"` // When the revision was registered with AWS CodeDeploy. - RegisterTime *time.Time `locationName:"registerTime" type:"timestamp" timestampFormat:"unix"` + RegisterTime *time.Time `locationName:"registerTime" type:"timestamp"` } // String returns the string representation @@ -7697,7 +7701,7 @@ type InstanceInfo struct { // If the on-premises instance was deregistered, the time at which the on-premises // instance was deregistered. - DeregisterTime *time.Time `locationName:"deregisterTime" type:"timestamp" timestampFormat:"unix"` + DeregisterTime *time.Time `locationName:"deregisterTime" type:"timestamp"` // The ARN of the IAM session associated with the on-premises instance. IamSessionArn *string `locationName:"iamSessionArn" type:"string"` @@ -7712,7 +7716,7 @@ type InstanceInfo struct { InstanceName *string `locationName:"instanceName" type:"string"` // The time at which the on-premises instance was registered. - RegisterTime *time.Time `locationName:"registerTime" type:"timestamp" timestampFormat:"unix"` + RegisterTime *time.Time `locationName:"registerTime" type:"timestamp"` // The tags currently associated with the on-premises instance. 
Tags []*Tag `locationName:"tags" type:"list"` @@ -7789,7 +7793,7 @@ type InstanceSummary struct { InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` // A timestamp indicating when the instance information was last updated. - LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp" timestampFormat:"unix"` + LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp"` // A list of lifecycle events for this instance. LifecycleEvents []*LifecycleEvent `locationName:"lifecycleEvents" type:"list"` @@ -7863,14 +7867,14 @@ type LastDeploymentInfo struct { // A timestamp indicating when the most recent deployment to the deployment // group started. - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"unix"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The deployment ID. DeploymentId *string `locationName:"deploymentId" type:"string"` // A timestamp indicating when the most recent deployment to the deployment // group completed. - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `locationName:"endTime" type:"timestamp"` // The status of the most recent deployment. Status *string `locationName:"status" type:"string" enum:"DeploymentStatus"` @@ -7918,14 +7922,14 @@ type LifecycleEvent struct { Diagnostics *Diagnostics `locationName:"diagnostics" type:"structure"` // A timestamp indicating when the deployment lifecycle event ended. - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `locationName:"endTime" type:"timestamp"` // The deployment lifecycle event name, such as ApplicationStop, BeforeInstall, // AfterInstall, ApplicationStart, or ValidateService. LifecycleEventName *string `locationName:"lifecycleEventName" type:"string"` // A timestamp indicating when the deployment lifecycle event started. - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` // The deployment lifecycle event status: // @@ -8757,11 +8761,15 @@ type LoadBalancerInfo struct { // An array containing information about the load balancer to use for load balancing // in a deployment. In Elastic Load Balancing, load balancers are used with // Classic Load Balancers. + // + // Adding more than one load balancer to the array is not supported. ElbInfoList []*ELBInfo `locationName:"elbInfoList" type:"list"` // An array containing information about the target group to use for load balancing // in a deployment. In Elastic Load Balancing, target groups are used with Application // Load Balancers. + // + // Adding more than one target group to the array is not supported. TargetGroupInfoList []*TargetGroupInfo `locationName:"targetGroupInfoList" type:"list"` } @@ -9775,12 +9783,12 @@ type TimeRange struct { // The end time of the time range. // // Specify null to leave the end time open-ended. - End *time.Time `locationName:"end" type:"timestamp" timestampFormat:"unix"` + End *time.Time `locationName:"end" type:"timestamp"` // The start time of the time range. // // Specify null to leave the start time open-ended. 
- Start *time.Time `locationName:"start" type:"timestamp" timestampFormat:"unix"` + Start *time.Time `locationName:"start" type:"timestamp"` } // String returns the string representation diff --git a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/errors.go b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/errors.go index 963a57a533f..c3af4b0ddb8 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/errors.go @@ -345,6 +345,12 @@ const ( // "DISALLOW", "OVERWRITE", and "RETAIN". ErrCodeInvalidFileExistsBehaviorException = "InvalidFileExistsBehaviorException" + // ErrCodeInvalidGitHubAccountTokenException for service response error code + // "InvalidGitHubAccountTokenException". + // + // The GitHub token is not valid. + ErrCodeInvalidGitHubAccountTokenException = "InvalidGitHubAccountTokenException" + // ErrCodeInvalidGitHubAccountTokenNameException for service response error code // "InvalidGitHubAccountTokenNameException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/service.go b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/service.go index e6524d04d73..dd1eaf41355 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codedeploy/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codedeploy/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "codedeploy" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "codedeploy" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CodeDeploy" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CodeDeploy client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/codepipeline/api.go b/vendor/github.com/aws/aws-sdk-go/service/codepipeline/api.go index 6d6c4d460cc..136d876e95d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codepipeline/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codepipeline/api.go @@ -17,8 +17,8 @@ const opAcknowledgeJob = "AcknowledgeJob" // AcknowledgeJobRequest generates a "aws/request.Request" representing the // client's request for the AcknowledgeJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -103,8 +103,8 @@ const opAcknowledgeThirdPartyJob = "AcknowledgeThirdPartyJob" // AcknowledgeThirdPartyJobRequest generates a "aws/request.Request" representing the // client's request for the AcknowledgeThirdPartyJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -192,8 +192,8 @@ const opCreateCustomActionType = "CreateCustomActionType" // CreateCustomActionTypeRequest generates a "aws/request.Request" representing the // client's request for the CreateCustomActionType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -276,8 +276,8 @@ const opCreatePipeline = "CreatePipeline" // CreatePipelineRequest generates a "aws/request.Request" representing the // client's request for the CreatePipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -374,8 +374,8 @@ const opDeleteCustomActionType = "DeleteCustomActionType" // DeleteCustomActionTypeRequest generates a "aws/request.Request" representing the // client's request for the DeleteCustomActionType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -419,8 +419,11 @@ func (c *CodePipeline) DeleteCustomActionTypeRequest(input *DeleteCustomActionTy // Marks a custom action as deleted. PollForJobs for the custom action will // fail after the action is marked for deletion. Only used for custom actions. // -// You cannot recreate a custom action after it has been deleted unless you -// increase the version number of the action. +// To re-create a custom action after it has been deleted you must use a string +// in the version field that has never been used before. This string can be +// an incremented version number, for example. To restore a deleted custom action, +// use a JSON file that is identical to the deleted action, including the original +// string in the version field. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -459,8 +462,8 @@ const opDeletePipeline = "DeletePipeline" // DeletePipelineRequest generates a "aws/request.Request" representing the // client's request for the DeletePipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -536,12 +539,179 @@ func (c *CodePipeline) DeletePipelineWithContext(ctx aws.Context, input *DeleteP return out, req.Send() } +const opDeleteWebhook = "DeleteWebhook" + +// DeleteWebhookRequest generates a "aws/request.Request" representing the +// client's request for the DeleteWebhook operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteWebhook for more information on using the DeleteWebhook +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteWebhookRequest method. +// req, resp := client.DeleteWebhookRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/DeleteWebhook +func (c *CodePipeline) DeleteWebhookRequest(input *DeleteWebhookInput) (req *request.Request, output *DeleteWebhookOutput) { + op := &request.Operation{ + Name: opDeleteWebhook, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteWebhookInput{} + } + + output = &DeleteWebhookOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteWebhook API operation for AWS CodePipeline. +// +// Deletes a previously created webhook by name. Deleting the webhook stops +// AWS CodePipeline from starting a pipeline every time an external event occurs. +// The API will return successfully when trying to delete a webhook that is +// already deleted. If a deleted webhook is re-created by calling PutWebhook +// with the same name, it will have a different URL. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodePipeline's +// API operation DeleteWebhook for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The validation was specified in an invalid format. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/DeleteWebhook +func (c *CodePipeline) DeleteWebhook(input *DeleteWebhookInput) (*DeleteWebhookOutput, error) { + req, out := c.DeleteWebhookRequest(input) + return out, req.Send() +} + +// DeleteWebhookWithContext is the same as DeleteWebhook with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteWebhook for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CodePipeline) DeleteWebhookWithContext(ctx aws.Context, input *DeleteWebhookInput, opts ...request.Option) (*DeleteWebhookOutput, error) { + req, out := c.DeleteWebhookRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeregisterWebhookWithThirdParty = "DeregisterWebhookWithThirdParty" + +// DeregisterWebhookWithThirdPartyRequest generates a "aws/request.Request" representing the +// client's request for the DeregisterWebhookWithThirdParty operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeregisterWebhookWithThirdParty for more information on using the DeregisterWebhookWithThirdParty +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeregisterWebhookWithThirdPartyRequest method. +// req, resp := client.DeregisterWebhookWithThirdPartyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/DeregisterWebhookWithThirdParty +func (c *CodePipeline) DeregisterWebhookWithThirdPartyRequest(input *DeregisterWebhookWithThirdPartyInput) (req *request.Request, output *DeregisterWebhookWithThirdPartyOutput) { + op := &request.Operation{ + Name: opDeregisterWebhookWithThirdParty, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeregisterWebhookWithThirdPartyInput{} + } + + output = &DeregisterWebhookWithThirdPartyOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeregisterWebhookWithThirdParty API operation for AWS CodePipeline. +// +// Removes the connection between the webhook that was created by CodePipeline +// and the external tool with events to be detected. Currently only supported +// for webhooks that target an action type of GitHub. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodePipeline's +// API operation DeregisterWebhookWithThirdParty for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The validation was specified in an invalid format. +// +// * ErrCodeWebhookNotFoundException "WebhookNotFoundException" +// The specified webhook was entered in an invalid format or cannot be found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/DeregisterWebhookWithThirdParty +func (c *CodePipeline) DeregisterWebhookWithThirdParty(input *DeregisterWebhookWithThirdPartyInput) (*DeregisterWebhookWithThirdPartyOutput, error) { + req, out := c.DeregisterWebhookWithThirdPartyRequest(input) + return out, req.Send() +} + +// DeregisterWebhookWithThirdPartyWithContext is the same as DeregisterWebhookWithThirdParty with the addition of +// the ability to pass a context and additional request options. +// +// See DeregisterWebhookWithThirdParty for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *CodePipeline) DeregisterWebhookWithThirdPartyWithContext(ctx aws.Context, input *DeregisterWebhookWithThirdPartyInput, opts ...request.Option) (*DeregisterWebhookWithThirdPartyOutput, error) { + req, out := c.DeregisterWebhookWithThirdPartyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDisableStageTransition = "DisableStageTransition" // DisableStageTransitionRequest generates a "aws/request.Request" representing the // client's request for the DisableStageTransition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -628,8 +798,8 @@ const opEnableStageTransition = "EnableStageTransition" // EnableStageTransitionRequest generates a "aws/request.Request" representing the // client's request for the EnableStageTransition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -715,8 +885,8 @@ const opGetJobDetails = "GetJobDetails" // GetJobDetailsRequest generates a "aws/request.Request" representing the // client's request for the GetJobDetails operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -802,8 +972,8 @@ const opGetPipeline = "GetPipeline" // GetPipelineRequest generates a "aws/request.Request" representing the // client's request for the GetPipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -890,8 +1060,8 @@ const opGetPipelineExecution = "GetPipelineExecution" // GetPipelineExecutionRequest generates a "aws/request.Request" representing the // client's request for the GetPipelineExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -978,8 +1148,8 @@ const opGetPipelineState = "GetPipelineState" // GetPipelineStateRequest generates a "aws/request.Request" representing the // client's request for the GetPipelineState operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1061,8 +1231,8 @@ const opGetThirdPartyJobDetails = "GetThirdPartyJobDetails" // GetThirdPartyJobDetailsRequest generates a "aws/request.Request" representing the // client's request for the GetThirdPartyJobDetails operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1155,8 +1325,8 @@ const opListActionTypes = "ListActionTypes" // ListActionTypesRequest generates a "aws/request.Request" representing the // client's request for the ListActionTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1239,8 +1409,8 @@ const opListPipelineExecutions = "ListPipelineExecutions" // ListPipelineExecutionsRequest generates a "aws/request.Request" representing the // client's request for the ListPipelineExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1325,8 +1495,8 @@ const opListPipelines = "ListPipelines" // ListPipelinesRequest generates a "aws/request.Request" representing the // client's request for the ListPipelines operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1375,6 +1545,9 @@ func (c *CodePipeline) ListPipelinesRequest(input *ListPipelinesInput) (req *req // API operation ListPipelines for usage and error information. // // Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The validation was specified in an invalid format. +// // * ErrCodeInvalidNextTokenException "InvalidNextTokenException" // The next token was specified in an invalid format. Make sure that the next // token you provided is the token returned by a previous call. 
@@ -1401,12 +1574,97 @@ func (c *CodePipeline) ListPipelinesWithContext(ctx aws.Context, input *ListPipe return out, req.Send() } +const opListWebhooks = "ListWebhooks" + +// ListWebhooksRequest generates a "aws/request.Request" representing the +// client's request for the ListWebhooks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListWebhooks for more information on using the ListWebhooks +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListWebhooksRequest method. +// req, resp := client.ListWebhooksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/ListWebhooks +func (c *CodePipeline) ListWebhooksRequest(input *ListWebhooksInput) (req *request.Request, output *ListWebhooksOutput) { + op := &request.Operation{ + Name: opListWebhooks, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListWebhooksInput{} + } + + output = &ListWebhooksOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListWebhooks API operation for AWS CodePipeline. +// +// Gets a listing of all the webhooks in this region for this account. The output +// lists all webhooks and includes the webhook URL and ARN, as well the configuration +// for each webhook. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodePipeline's +// API operation ListWebhooks for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The validation was specified in an invalid format. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The next token was specified in an invalid format. Make sure that the next +// token you provided is the token returned by a previous call. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/ListWebhooks +func (c *CodePipeline) ListWebhooks(input *ListWebhooksInput) (*ListWebhooksOutput, error) { + req, out := c.ListWebhooksRequest(input) + return out, req.Send() +} + +// ListWebhooksWithContext is the same as ListWebhooks with the addition of +// the ability to pass a context and additional request options. +// +// See ListWebhooks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CodePipeline) ListWebhooksWithContext(ctx aws.Context, input *ListWebhooksInput, opts ...request.Option) (*ListWebhooksOutput, error) { + req, out := c.ListWebhooksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opPollForJobs = "PollForJobs" // PollForJobsRequest generates a "aws/request.Request" representing the // client's request for the PollForJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1445,7 +1703,10 @@ func (c *CodePipeline) PollForJobsRequest(input *PollForJobsInput) (req *request // PollForJobs API operation for AWS CodePipeline. // -// Returns information about any jobs for AWS CodePipeline to act upon. +// Returns information about any jobs for AWS CodePipeline to act upon. PollForJobs +// is only valid for action types with "Custom" in the owner field. If the action +// type contains "AWS" or "ThirdParty" in the owner field, the PollForJobs action +// returns an error. // // When this API is called, AWS CodePipeline returns temporary credentials for // the Amazon S3 bucket used to store artifacts for the pipeline, if the action @@ -1492,8 +1753,8 @@ const opPollForThirdPartyJobs = "PollForThirdPartyJobs" // PollForThirdPartyJobsRequest generates a "aws/request.Request" representing the // client's request for the PollForThirdPartyJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1579,8 +1840,8 @@ const opPutActionRevision = "PutActionRevision" // PutActionRevisionRequest generates a "aws/request.Request" representing the // client's request for the PutActionRevision operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1667,8 +1928,8 @@ const opPutApprovalResult = "PutApprovalResult" // PutApprovalResultRequest generates a "aws/request.Request" representing the // client's request for the PutApprovalResult operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1762,8 +2023,8 @@ const opPutJobFailureResult = "PutJobFailureResult" // PutJobFailureResultRequest generates a "aws/request.Request" representing the // client's request for the PutJobFailureResult operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1850,8 +2111,8 @@ const opPutJobSuccessResult = "PutJobSuccessResult" // PutJobSuccessResultRequest generates a "aws/request.Request" representing the // client's request for the PutJobSuccessResult operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1938,8 +2199,8 @@ const opPutThirdPartyJobFailureResult = "PutThirdPartyJobFailureResult" // PutThirdPartyJobFailureResultRequest generates a "aws/request.Request" representing the // client's request for the PutThirdPartyJobFailureResult operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2029,8 +2290,8 @@ const opPutThirdPartyJobSuccessResult = "PutThirdPartyJobSuccessResult" // PutThirdPartyJobSuccessResultRequest generates a "aws/request.Request" representing the // client's request for the PutThirdPartyJobSuccessResult operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2116,213 +2377,395 @@ func (c *CodePipeline) PutThirdPartyJobSuccessResultWithContext(ctx aws.Context, return out, req.Send() } -const opRetryStageExecution = "RetryStageExecution" +const opPutWebhook = "PutWebhook" -// RetryStageExecutionRequest generates a "aws/request.Request" representing the -// client's request for the RetryStageExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// PutWebhookRequest generates a "aws/request.Request" representing the +// client's request for the PutWebhook operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RetryStageExecution for more information on using the RetryStageExecution +// See PutWebhook for more information on using the PutWebhook // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RetryStageExecutionRequest method. -// req, resp := client.RetryStageExecutionRequest(params) +// // Example sending a request using the PutWebhookRequest method. 
+// req, resp := client.PutWebhookRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/RetryStageExecution -func (c *CodePipeline) RetryStageExecutionRequest(input *RetryStageExecutionInput) (req *request.Request, output *RetryStageExecutionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/PutWebhook +func (c *CodePipeline) PutWebhookRequest(input *PutWebhookInput) (req *request.Request, output *PutWebhookOutput) { op := &request.Operation{ - Name: opRetryStageExecution, + Name: opPutWebhook, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &RetryStageExecutionInput{} + input = &PutWebhookInput{} } - output = &RetryStageExecutionOutput{} + output = &PutWebhookOutput{} req = c.newRequest(op, input, output) return } -// RetryStageExecution API operation for AWS CodePipeline. +// PutWebhook API operation for AWS CodePipeline. // -// Resumes the pipeline execution by retrying the last failed actions in a stage. +// Defines a webhook and returns a unique webhook URL generated by CodePipeline. +// This URL can be supplied to third party source hosting providers to call +// every time there's a code change. When CodePipeline receives a POST request +// on this URL, the pipeline defined in the webhook is started as long as the +// POST request satisfied the authentication and filtering requirements supplied +// when defining the webhook. RegisterWebhookWithThirdParty and DeregisterWebhookWithThirdParty +// APIs can be used to automatically configure supported third parties to call +// the generated webhook URL. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS CodePipeline's -// API operation RetryStageExecution for usage and error information. +// API operation PutWebhook for usage and error information. // // Returned Error Codes: // * ErrCodeValidationException "ValidationException" // The validation was specified in an invalid format. // -// * ErrCodePipelineNotFoundException "PipelineNotFoundException" -// The specified pipeline was specified in an invalid format or cannot be found. +// * ErrCodeLimitExceededException "LimitExceededException" +// The number of pipelines associated with the AWS account has exceeded the +// limit allowed for the account. // -// * ErrCodeStageNotFoundException "StageNotFoundException" -// The specified stage was specified in an invalid format or cannot be found. +// * ErrCodeInvalidWebhookFilterPatternException "InvalidWebhookFilterPatternException" +// The specified event filter rule is in an invalid format. // -// * ErrCodeStageNotRetryableException "StageNotRetryableException" -// The specified stage can't be retried because the pipeline structure or stage -// state changed after the stage was not completed; the stage contains no failed -// actions; one or more actions are still in progress; or another retry attempt -// is already in progress. +// * ErrCodeInvalidWebhookAuthenticationParametersException "InvalidWebhookAuthenticationParametersException" +// The specified authentication type is in an invalid format. 
// -// * ErrCodeNotLatestPipelineExecutionException "NotLatestPipelineExecutionException" -// The stage has failed in a later run of the pipeline and the pipelineExecutionId -// associated with the request is out of date. +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was specified in an invalid format or cannot be found. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/RetryStageExecution -func (c *CodePipeline) RetryStageExecution(input *RetryStageExecutionInput) (*RetryStageExecutionOutput, error) { - req, out := c.RetryStageExecutionRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/PutWebhook +func (c *CodePipeline) PutWebhook(input *PutWebhookInput) (*PutWebhookOutput, error) { + req, out := c.PutWebhookRequest(input) return out, req.Send() } -// RetryStageExecutionWithContext is the same as RetryStageExecution with the addition of +// PutWebhookWithContext is the same as PutWebhook with the addition of // the ability to pass a context and additional request options. // -// See RetryStageExecution for details on how to use this API operation. +// See PutWebhook for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CodePipeline) RetryStageExecutionWithContext(ctx aws.Context, input *RetryStageExecutionInput, opts ...request.Option) (*RetryStageExecutionOutput, error) { - req, out := c.RetryStageExecutionRequest(input) +func (c *CodePipeline) PutWebhookWithContext(ctx aws.Context, input *PutWebhookInput, opts ...request.Option) (*PutWebhookOutput, error) { + req, out := c.PutWebhookRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStartPipelineExecution = "StartPipelineExecution" +const opRegisterWebhookWithThirdParty = "RegisterWebhookWithThirdParty" -// StartPipelineExecutionRequest generates a "aws/request.Request" representing the -// client's request for the StartPipelineExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// RegisterWebhookWithThirdPartyRequest generates a "aws/request.Request" representing the +// client's request for the RegisterWebhookWithThirdParty operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StartPipelineExecution for more information on using the StartPipelineExecution +// See RegisterWebhookWithThirdParty for more information on using the RegisterWebhookWithThirdParty // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartPipelineExecutionRequest method. -// req, resp := client.StartPipelineExecutionRequest(params) +// // Example sending a request using the RegisterWebhookWithThirdPartyRequest method. 
+// req, resp := client.RegisterWebhookWithThirdPartyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/StartPipelineExecution -func (c *CodePipeline) StartPipelineExecutionRequest(input *StartPipelineExecutionInput) (req *request.Request, output *StartPipelineExecutionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/RegisterWebhookWithThirdParty +func (c *CodePipeline) RegisterWebhookWithThirdPartyRequest(input *RegisterWebhookWithThirdPartyInput) (req *request.Request, output *RegisterWebhookWithThirdPartyOutput) { op := &request.Operation{ - Name: opStartPipelineExecution, + Name: opRegisterWebhookWithThirdParty, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StartPipelineExecutionInput{} + input = &RegisterWebhookWithThirdPartyInput{} } - output = &StartPipelineExecutionOutput{} + output = &RegisterWebhookWithThirdPartyOutput{} req = c.newRequest(op, input, output) return } -// StartPipelineExecution API operation for AWS CodePipeline. +// RegisterWebhookWithThirdParty API operation for AWS CodePipeline. // -// Starts the specified pipeline. Specifically, it begins processing the latest -// commit to the source location specified as part of the pipeline. +// Configures a connection between the webhook that was created and the external +// tool with events to be detected. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS CodePipeline's -// API operation StartPipelineExecution for usage and error information. +// API operation RegisterWebhookWithThirdParty for usage and error information. // // Returned Error Codes: // * ErrCodeValidationException "ValidationException" // The validation was specified in an invalid format. // -// * ErrCodePipelineNotFoundException "PipelineNotFoundException" -// The specified pipeline was specified in an invalid format or cannot be found. +// * ErrCodeWebhookNotFoundException "WebhookNotFoundException" +// The specified webhook was entered in an invalid format or cannot be found. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/StartPipelineExecution -func (c *CodePipeline) StartPipelineExecution(input *StartPipelineExecutionInput) (*StartPipelineExecutionOutput, error) { - req, out := c.StartPipelineExecutionRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/RegisterWebhookWithThirdParty +func (c *CodePipeline) RegisterWebhookWithThirdParty(input *RegisterWebhookWithThirdPartyInput) (*RegisterWebhookWithThirdPartyOutput, error) { + req, out := c.RegisterWebhookWithThirdPartyRequest(input) return out, req.Send() } -// StartPipelineExecutionWithContext is the same as StartPipelineExecution with the addition of +// RegisterWebhookWithThirdPartyWithContext is the same as RegisterWebhookWithThirdParty with the addition of // the ability to pass a context and additional request options. // -// See StartPipelineExecution for details on how to use this API operation. +// See RegisterWebhookWithThirdParty for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
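The `PutWebhook` / `RegisterWebhookWithThirdParty` flow described in the doc comments above could be wired together roughly as follows. This is a sketch only: the helper name is hypothetical, and it assumes the caller already has a populated `*codepipeline.WebhookDefinition` (that type's fields are defined elsewhere in this file and are not shown here).

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/codepipeline"
)

// createAndRegisterWebhook creates (or updates) a webhook from a caller-supplied
// definition, then asks CodePipeline to register the generated URL with the
// supported third-party provider, and returns that URL.
func createAndRegisterWebhook(cp *codepipeline.CodePipeline, def *codepipeline.WebhookDefinition, name string) (string, error) {
	out, err := cp.PutWebhook(&codepipeline.PutWebhookInput{Webhook: def})
	if err != nil {
		return "", err
	}

	_, err = cp.RegisterWebhookWithThirdParty(&codepipeline.RegisterWebhookWithThirdPartyInput{
		WebhookName: aws.String(name),
	})
	if err != nil {
		return "", err
	}

	// The returned ListWebhookItem carries the unique URL CodePipeline generated.
	return aws.StringValue(out.Webhook.Url), nil
}
```

Per the `ListWebhookItem` documentation later in this hunk, deleting and re-creating a webhook invalidates the old URL, so a caller would typically persist the returned value.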
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *CodePipeline) StartPipelineExecutionWithContext(ctx aws.Context, input *StartPipelineExecutionInput, opts ...request.Option) (*StartPipelineExecutionOutput, error) { - req, out := c.StartPipelineExecutionRequest(input) +func (c *CodePipeline) RegisterWebhookWithThirdPartyWithContext(ctx aws.Context, input *RegisterWebhookWithThirdPartyInput, opts ...request.Option) (*RegisterWebhookWithThirdPartyOutput, error) { + req, out := c.RegisterWebhookWithThirdPartyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdatePipeline = "UpdatePipeline" +const opRetryStageExecution = "RetryStageExecution" -// UpdatePipelineRequest generates a "aws/request.Request" representing the -// client's request for the UpdatePipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// RetryStageExecutionRequest generates a "aws/request.Request" representing the +// client's request for the RetryStageExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdatePipeline for more information on using the UpdatePipeline +// See RetryStageExecution for more information on using the RetryStageExecution // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdatePipelineRequest method. -// req, resp := client.UpdatePipelineRequest(params) +// // Example sending a request using the RetryStageExecutionRequest method. +// req, resp := client.RetryStageExecutionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/UpdatePipeline -func (c *CodePipeline) UpdatePipelineRequest(input *UpdatePipelineInput) (req *request.Request, output *UpdatePipelineOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/RetryStageExecution +func (c *CodePipeline) RetryStageExecutionRequest(input *RetryStageExecutionInput) (req *request.Request, output *RetryStageExecutionOutput) { op := &request.Operation{ - Name: opUpdatePipeline, + Name: opRetryStageExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RetryStageExecutionInput{} + } + + output = &RetryStageExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// RetryStageExecution API operation for AWS CodePipeline. +// +// Resumes the pipeline execution by retrying the last failed actions in a stage. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodePipeline's +// API operation RetryStageExecution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The validation was specified in an invalid format. 
+// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was specified in an invalid format or cannot be found. +// +// * ErrCodeStageNotFoundException "StageNotFoundException" +// The specified stage was specified in an invalid format or cannot be found. +// +// * ErrCodeStageNotRetryableException "StageNotRetryableException" +// The specified stage can't be retried because the pipeline structure or stage +// state changed after the stage was not completed; the stage contains no failed +// actions; one or more actions are still in progress; or another retry attempt +// is already in progress. +// +// * ErrCodeNotLatestPipelineExecutionException "NotLatestPipelineExecutionException" +// The stage has failed in a later run of the pipeline and the pipelineExecutionId +// associated with the request is out of date. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/RetryStageExecution +func (c *CodePipeline) RetryStageExecution(input *RetryStageExecutionInput) (*RetryStageExecutionOutput, error) { + req, out := c.RetryStageExecutionRequest(input) + return out, req.Send() +} + +// RetryStageExecutionWithContext is the same as RetryStageExecution with the addition of +// the ability to pass a context and additional request options. +// +// See RetryStageExecution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CodePipeline) RetryStageExecutionWithContext(ctx aws.Context, input *RetryStageExecutionInput, opts ...request.Option) (*RetryStageExecutionOutput, error) { + req, out := c.RetryStageExecutionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartPipelineExecution = "StartPipelineExecution" + +// StartPipelineExecutionRequest generates a "aws/request.Request" representing the +// client's request for the StartPipelineExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartPipelineExecution for more information on using the StartPipelineExecution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartPipelineExecutionRequest method. 
+// req, resp := client.StartPipelineExecutionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/StartPipelineExecution +func (c *CodePipeline) StartPipelineExecutionRequest(input *StartPipelineExecutionInput) (req *request.Request, output *StartPipelineExecutionOutput) { + op := &request.Operation{ + Name: opStartPipelineExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartPipelineExecutionInput{} + } + + output = &StartPipelineExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartPipelineExecution API operation for AWS CodePipeline. +// +// Starts the specified pipeline. Specifically, it begins processing the latest +// commit to the source location specified as part of the pipeline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS CodePipeline's +// API operation StartPipelineExecution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The validation was specified in an invalid format. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was specified in an invalid format or cannot be found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/StartPipelineExecution +func (c *CodePipeline) StartPipelineExecution(input *StartPipelineExecutionInput) (*StartPipelineExecutionOutput, error) { + req, out := c.StartPipelineExecutionRequest(input) + return out, req.Send() +} + +// StartPipelineExecutionWithContext is the same as StartPipelineExecution with the addition of +// the ability to pass a context and additional request options. +// +// See StartPipelineExecution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CodePipeline) StartPipelineExecutionWithContext(ctx aws.Context, input *StartPipelineExecutionInput, opts ...request.Option) (*StartPipelineExecutionOutput, error) { + req, out := c.StartPipelineExecutionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdatePipeline = "UpdatePipeline" + +// UpdatePipelineRequest generates a "aws/request.Request" representing the +// client's request for the UpdatePipeline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdatePipeline for more information on using the UpdatePipeline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdatePipelineRequest method. 
+// req, resp := client.UpdatePipelineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/UpdatePipeline +func (c *CodePipeline) UpdatePipelineRequest(input *UpdatePipelineInput) (req *request.Request, output *UpdatePipelineOutput) { + op := &request.Operation{ + Name: opUpdatePipeline, HTTPMethod: "POST", HTTPPath: "/", } @@ -2366,6 +2809,10 @@ func (c *CodePipeline) UpdatePipelineRequest(input *UpdatePipelineInput) (req *r // * ErrCodeInvalidStructureException "InvalidStructureException" // The specified structure was specified in an invalid format. // +// * ErrCodeLimitExceededException "LimitExceededException" +// The number of pipelines associated with the AWS account has exceeded the +// limit allowed for the account. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/codepipeline-2015-07-09/UpdatePipeline func (c *CodePipeline) UpdatePipeline(input *UpdatePipelineInput) (*UpdatePipelineOutput, error) { req, out := c.UpdatePipelineRequest(input) @@ -2453,7 +2900,7 @@ type AcknowledgeJobInput struct { // response of the PollForJobs request that returned this job. // // Nonce is a required field - Nonce *string `locationName:"nonce" type:"string" required:"true"` + Nonce *string `locationName:"nonce" min:"1" type:"string" required:"true"` } // String returns the string representation @@ -2475,6 +2922,9 @@ func (s *AcknowledgeJobInput) Validate() error { if s.Nonce == nil { invalidParams.Add(request.NewErrParamRequired("Nonce")) } + if s.Nonce != nil && len(*s.Nonce) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Nonce", 1)) + } if invalidParams.Len() > 0 { return invalidParams @@ -2538,7 +2988,7 @@ type AcknowledgeThirdPartyJobInput struct { // response to a GetThirdPartyJobDetails request. // // Nonce is a required field - Nonce *string `locationName:"nonce" type:"string" required:"true"` + Nonce *string `locationName:"nonce" min:"1" type:"string" required:"true"` } // String returns the string representation @@ -2569,6 +3019,9 @@ func (s *AcknowledgeThirdPartyJobInput) Validate() error { if s.Nonce == nil { invalidParams.Add(request.NewErrParamRequired("Nonce")) } + if s.Nonce != nil && len(*s.Nonce) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Nonce", 1)) + } if invalidParams.Len() > 0 { return invalidParams @@ -2820,6 +3273,9 @@ type ActionDeclaration struct { // build artifact. OutputArtifacts []*OutputArtifact `locationName:"outputArtifacts" type:"list"` + // The action declaration's AWS Region, such as us-east-1. + Region *string `locationName:"region" min:"4" type:"string"` + // The ARN of the IAM service role that will perform the declared action. This // is assumed through the roleArn for the pipeline. RoleArn *string `locationName:"roleArn" type:"string"` @@ -2850,6 +3306,9 @@ func (s *ActionDeclaration) Validate() error { if s.Name != nil && len(*s.Name) < 1 { invalidParams.Add(request.NewErrParamMinLen("Name", 1)) } + if s.Region != nil && len(*s.Region) < 4 { + invalidParams.Add(request.NewErrParamMinLen("Region", 4)) + } if s.RunOrder != nil && *s.RunOrder < 1 { invalidParams.Add(request.NewErrParamMinValue("RunOrder", 1)) } @@ -2915,6 +3374,12 @@ func (s *ActionDeclaration) SetOutputArtifacts(v []*OutputArtifact) *ActionDecla return s } +// SetRegion sets the Region field's value. 
+func (s *ActionDeclaration) SetRegion(v string) *ActionDeclaration { + s.Region = &v + return s +} + // SetRoleArn sets the RoleArn field's value. func (s *ActionDeclaration) SetRoleArn(v string) *ActionDeclaration { s.RoleArn = &v @@ -2942,7 +3407,7 @@ type ActionExecution struct { ExternalExecutionUrl *string `locationName:"externalExecutionUrl" min:"1" type:"string"` // The last status change of the action. - LastStatusChange *time.Time `locationName:"lastStatusChange" type:"timestamp" timestampFormat:"unix"` + LastStatusChange *time.Time `locationName:"lastStatusChange" type:"timestamp"` // The ARN of the user who last changed the pipeline. LastUpdatedBy *string `locationName:"lastUpdatedBy" type:"string"` @@ -2955,7 +3420,7 @@ type ActionExecution struct { Status *string `locationName:"status" type:"string" enum:"ActionExecutionStatus"` // A summary of the run of the action. - Summary *string `locationName:"summary" type:"string"` + Summary *string `locationName:"summary" min:"1" type:"string"` // The system-generated token used to identify a unique approval request. The // token for each open approval request can be obtained using the GetPipelineState @@ -3036,7 +3501,7 @@ type ActionRevision struct { // in timestamp format. // // Created is a required field - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix" required:"true"` + Created *time.Time `locationName:"created" type:"timestamp" required:"true"` // The unique identifier of the change that set the state to this revision, // for example a deployment ID or timestamp. @@ -3256,7 +3721,7 @@ type ActionTypeId struct { // Provider is a required field Provider *string `locationName:"provider" min:"1" type:"string" required:"true"` - // A string that identifies the action type. + // A string that describes the action version. // // Version is a required field Version *string `locationName:"version" min:"1" type:"string" required:"true"` @@ -3596,7 +4061,7 @@ type ArtifactRevision struct { // The date and time when the most recent revision of the artifact was created, // in timestamp format. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // The name of an artifact. This name might be system-generated, such as "MyApp", // or might be defined by the user when an action is created. @@ -4056,7 +4521,7 @@ type CurrentRevision struct { // The date and time when the most recent revision of the artifact was created, // in timestamp format. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // The revision ID of the current version of an artifact. // @@ -4271,71 +4736,33 @@ func (s DeletePipelineOutput) GoString() string { return s.String() } -// Represents the input of a DisableStageTransition action. -type DisableStageTransitionInput struct { +type DeleteWebhookInput struct { _ struct{} `type:"structure"` - // The name of the pipeline in which you want to disable the flow of artifacts - // from one stage to another. - // - // PipelineName is a required field - PipelineName *string `locationName:"pipelineName" min:"1" type:"string" required:"true"` - - // The reason given to the user why a stage is disabled, such as waiting for - // manual approval or manual tests. This message is displayed in the pipeline - // console UI. 
- // - // Reason is a required field - Reason *string `locationName:"reason" min:"1" type:"string" required:"true"` - - // The name of the stage where you want to disable the inbound or outbound transition - // of artifacts. - // - // StageName is a required field - StageName *string `locationName:"stageName" min:"1" type:"string" required:"true"` - - // Specifies whether artifacts will be prevented from transitioning into the - // stage and being processed by the actions in that stage (inbound), or prevented - // from transitioning from the stage after they have been processed by the actions - // in that stage (outbound). + // The name of the webhook you want to delete. // - // TransitionType is a required field - TransitionType *string `locationName:"transitionType" type:"string" required:"true" enum:"StageTransitionType"` + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s DisableStageTransitionInput) String() string { +func (s DeleteWebhookInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DisableStageTransitionInput) GoString() string { +func (s DeleteWebhookInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DisableStageTransitionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DisableStageTransitionInput"} - if s.PipelineName == nil { - invalidParams.Add(request.NewErrParamRequired("PipelineName")) - } - if s.PipelineName != nil && len(*s.PipelineName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PipelineName", 1)) - } - if s.Reason == nil { - invalidParams.Add(request.NewErrParamRequired("Reason")) - } - if s.Reason != nil && len(*s.Reason) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Reason", 1)) - } - if s.StageName == nil { - invalidParams.Add(request.NewErrParamRequired("StageName")) - } - if s.StageName != nil && len(*s.StageName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("StageName", 1)) +func (s *DeleteWebhookInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteWebhookInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) } - if s.TransitionType == nil { - invalidParams.Add(request.NewErrParamRequired("TransitionType")) + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) } if invalidParams.Len() > 0 { @@ -4344,23 +4771,166 @@ func (s *DisableStageTransitionInput) Validate() error { return nil } -// SetPipelineName sets the PipelineName field's value. -func (s *DisableStageTransitionInput) SetPipelineName(v string) *DisableStageTransitionInput { - s.PipelineName = &v +// SetName sets the Name field's value. +func (s *DeleteWebhookInput) SetName(v string) *DeleteWebhookInput { + s.Name = &v return s } -// SetReason sets the Reason field's value. -func (s *DisableStageTransitionInput) SetReason(v string) *DisableStageTransitionInput { - s.Reason = &v - return s +type DeleteWebhookOutput struct { + _ struct{} `type:"structure"` } -// SetStageName sets the StageName field's value. 
-func (s *DisableStageTransitionInput) SetStageName(v string) *DisableStageTransitionInput { - s.StageName = &v - return s -} +// String returns the string representation +func (s DeleteWebhookOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteWebhookOutput) GoString() string { + return s.String() +} + +type DeregisterWebhookWithThirdPartyInput struct { + _ struct{} `type:"structure"` + + // The name of the webhook you want to deregister. + WebhookName *string `locationName:"webhookName" min:"1" type:"string"` +} + +// String returns the string representation +func (s DeregisterWebhookWithThirdPartyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterWebhookWithThirdPartyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeregisterWebhookWithThirdPartyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeregisterWebhookWithThirdPartyInput"} + if s.WebhookName != nil && len(*s.WebhookName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("WebhookName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWebhookName sets the WebhookName field's value. +func (s *DeregisterWebhookWithThirdPartyInput) SetWebhookName(v string) *DeregisterWebhookWithThirdPartyInput { + s.WebhookName = &v + return s +} + +type DeregisterWebhookWithThirdPartyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeregisterWebhookWithThirdPartyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterWebhookWithThirdPartyOutput) GoString() string { + return s.String() +} + +// Represents the input of a DisableStageTransition action. +type DisableStageTransitionInput struct { + _ struct{} `type:"structure"` + + // The name of the pipeline in which you want to disable the flow of artifacts + // from one stage to another. + // + // PipelineName is a required field + PipelineName *string `locationName:"pipelineName" min:"1" type:"string" required:"true"` + + // The reason given to the user why a stage is disabled, such as waiting for + // manual approval or manual tests. This message is displayed in the pipeline + // console UI. + // + // Reason is a required field + Reason *string `locationName:"reason" min:"1" type:"string" required:"true"` + + // The name of the stage where you want to disable the inbound or outbound transition + // of artifacts. + // + // StageName is a required field + StageName *string `locationName:"stageName" min:"1" type:"string" required:"true"` + + // Specifies whether artifacts will be prevented from transitioning into the + // stage and being processed by the actions in that stage (inbound), or prevented + // from transitioning from the stage after they have been processed by the actions + // in that stage (outbound). 
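Mirroring the creation flow, teardown of a webhook would plausibly deregister it from the third party first and then delete it, using the `DeregisterWebhookWithThirdPartyInput` and `DeleteWebhookInput` shapes introduced just above. The sketch assumes the `DeleteWebhook` and `DeregisterWebhookWithThirdParty` client operations added earlier in this file; the helper name is illustrative.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/codepipeline"
)

// deregisterAndDeleteWebhook is a hypothetical cleanup helper for a webhook
// previously created with PutWebhook and registered with a third party.
func deregisterAndDeleteWebhook(cp *codepipeline.CodePipeline, name string) error {
	if _, err := cp.DeregisterWebhookWithThirdParty(&codepipeline.DeregisterWebhookWithThirdPartyInput{
		WebhookName: aws.String(name),
	}); err != nil {
		return err
	}

	_, err := cp.DeleteWebhook(&codepipeline.DeleteWebhookInput{
		Name: aws.String(name),
	})
	return err
}
```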
+ // + // TransitionType is a required field + TransitionType *string `locationName:"transitionType" type:"string" required:"true" enum:"StageTransitionType"` +} + +// String returns the string representation +func (s DisableStageTransitionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableStageTransitionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DisableStageTransitionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisableStageTransitionInput"} + if s.PipelineName == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineName")) + } + if s.PipelineName != nil && len(*s.PipelineName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineName", 1)) + } + if s.Reason == nil { + invalidParams.Add(request.NewErrParamRequired("Reason")) + } + if s.Reason != nil && len(*s.Reason) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Reason", 1)) + } + if s.StageName == nil { + invalidParams.Add(request.NewErrParamRequired("StageName")) + } + if s.StageName != nil && len(*s.StageName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StageName", 1)) + } + if s.TransitionType == nil { + invalidParams.Add(request.NewErrParamRequired("TransitionType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPipelineName sets the PipelineName field's value. +func (s *DisableStageTransitionInput) SetPipelineName(v string) *DisableStageTransitionInput { + s.PipelineName = &v + return s +} + +// SetReason sets the Reason field's value. +func (s *DisableStageTransitionInput) SetReason(v string) *DisableStageTransitionInput { + s.Reason = &v + return s +} + +// SetStageName sets the StageName field's value. +func (s *DisableStageTransitionInput) SetStageName(v string) *DisableStageTransitionInput { + s.StageName = &v + return s +} // SetTransitionType sets the TransitionType field's value. func (s *DisableStageTransitionInput) SetTransitionType(v string) *DisableStageTransitionInput { @@ -4540,7 +5110,7 @@ type ErrorDetails struct { Code *string `locationName:"code" type:"string"` // The text of the error message. - Message *string `locationName:"message" type:"string"` + Message *string `locationName:"message" min:"1" type:"string"` } // String returns the string representation @@ -4579,7 +5149,7 @@ type ExecutionDetails struct { PercentComplete *int64 `locationName:"percentComplete" type:"integer"` // The summary of the current status of the actions. - Summary *string `locationName:"summary" type:"string"` + Summary *string `locationName:"summary" min:"1" type:"string"` } // String returns the string representation @@ -4598,6 +5168,9 @@ func (s *ExecutionDetails) Validate() error { if s.ExternalExecutionId != nil && len(*s.ExternalExecutionId) < 1 { invalidParams.Add(request.NewErrParamMinLen("ExternalExecutionId", 1)) } + if s.Summary != nil && len(*s.Summary) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Summary", 1)) + } if invalidParams.Len() > 0 { return invalidParams @@ -4633,7 +5206,7 @@ type FailureDetails struct { // The message about the failure. // // Message is a required field - Message *string `locationName:"message" type:"string" required:"true"` + Message *string `locationName:"message" min:"1" type:"string" required:"true"` // The type of the failure. 
// @@ -4660,6 +5233,9 @@ func (s *FailureDetails) Validate() error { if s.Message == nil { invalidParams.Add(request.NewErrParamRequired("Message")) } + if s.Message != nil && len(*s.Message) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Message", 1)) + } if s.Type == nil { invalidParams.Add(request.NewErrParamRequired("Type")) } @@ -4971,7 +5547,7 @@ type GetPipelineStateOutput struct { _ struct{} `type:"structure"` // The date and time the pipeline was created, in timestamp format. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // The name of the pipeline for which you want to get the state. PipelineName *string `locationName:"pipelineName" min:"1" type:"string"` @@ -4986,7 +5562,7 @@ type GetPipelineStateOutput struct { StageStates []*StageState `locationName:"stageStates" type:"list"` // The date and time the pipeline was last updated, in timestamp format. - Updated *time.Time `locationName:"updated" type:"timestamp" timestampFormat:"unix"` + Updated *time.Time `locationName:"updated" type:"timestamp"` } // String returns the string representation @@ -5178,7 +5754,7 @@ type Job struct { // A system-generated random number that AWS CodePipeline uses to ensure that // the job is being worked on by only one job worker. Use this number in an // AcknowledgeJob request. - Nonce *string `locationName:"nonce" type:"string"` + Nonce *string `locationName:"nonce" min:"1" type:"string"` } // String returns the string representation @@ -5234,7 +5810,7 @@ type JobData struct { // A system-generated token, such as a AWS CodeDeploy deployment ID, that a // job requires in order to continue the job asynchronously. - ContinuationToken *string `locationName:"continuationToken" type:"string"` + ContinuationToken *string `locationName:"continuationToken" min:"1" type:"string"` // Represents information about the key used to encrypt data in the artifact // store, such as an AWS Key Management Service (AWS KMS) key. @@ -5613,6 +6189,170 @@ func (s *ListPipelinesOutput) SetPipelines(v []*PipelineSummary) *ListPipelinesO return s } +// The detail returned for each webhook after listing webhooks, such as the +// webhook URL, the webhook name, and the webhook ARN. +type ListWebhookItem struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the webhook. + Arn *string `locationName:"arn" type:"string"` + + // The detail returned for each webhook, such as the webhook authentication + // type and filter rules. + // + // Definition is a required field + Definition *WebhookDefinition `locationName:"definition" type:"structure" required:"true"` + + // The number code of the error. + ErrorCode *string `locationName:"errorCode" type:"string"` + + // The text of the error message about the webhook. + ErrorMessage *string `locationName:"errorMessage" type:"string"` + + // The date and time a webhook was last successfully triggered, in timestamp + // format. + LastTriggered *time.Time `locationName:"lastTriggered" type:"timestamp"` + + // A unique URL generated by CodePipeline. When a POST request is made to this + // URL, the defined pipeline is started as long as the body of the post request + // satisfies the defined authentication and filtering conditions. Deleting and + // re-creating a webhook will make the old URL invalid and generate a new URL. 
+ // + // Url is a required field + Url *string `locationName:"url" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListWebhookItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListWebhookItem) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *ListWebhookItem) SetArn(v string) *ListWebhookItem { + s.Arn = &v + return s +} + +// SetDefinition sets the Definition field's value. +func (s *ListWebhookItem) SetDefinition(v *WebhookDefinition) *ListWebhookItem { + s.Definition = v + return s +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *ListWebhookItem) SetErrorCode(v string) *ListWebhookItem { + s.ErrorCode = &v + return s +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *ListWebhookItem) SetErrorMessage(v string) *ListWebhookItem { + s.ErrorMessage = &v + return s +} + +// SetLastTriggered sets the LastTriggered field's value. +func (s *ListWebhookItem) SetLastTriggered(v time.Time) *ListWebhookItem { + s.LastTriggered = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *ListWebhookItem) SetUrl(v string) *ListWebhookItem { + s.Url = &v + return s +} + +type ListWebhooksInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to return in a single call. To retrieve the + // remaining results, make another call with the returned nextToken value. + MaxResults *int64 `min:"1" type:"integer"` + + // The token that was returned from the previous ListWebhooks call, which can + // be used to return the next set of webhooks in the list. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListWebhooksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListWebhooksInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListWebhooksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListWebhooksInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListWebhooksInput) SetMaxResults(v int64) *ListWebhooksInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListWebhooksInput) SetNextToken(v string) *ListWebhooksInput { + s.NextToken = &v + return s +} + +type ListWebhooksOutput struct { + _ struct{} `type:"structure"` + + // If the amount of returned information is significantly large, an identifier + // is also returned and can be used in a subsequent ListWebhooks call to return + // the next set of webhooks in the list. + NextToken *string `min:"1" type:"string"` + + // The JSON detail returned for each webhook in the list output for the ListWebhooks + // call. 
+ Webhooks []*ListWebhookItem `locationName:"webhooks" type:"list"` +} + +// String returns the string representation +func (s ListWebhooksOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListWebhooksOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListWebhooksOutput) SetNextToken(v string) *ListWebhooksOutput { + s.NextToken = &v + return s +} + +// SetWebhooks sets the Webhooks field's value. +func (s *ListWebhooksOutput) SetWebhooks(v []*ListWebhookItem) *ListWebhooksOutput { + s.Webhooks = v + return s +} + // Represents information about the output of an action. type OutputArtifact struct { _ struct{} `type:"structure"` @@ -5712,9 +6452,15 @@ type PipelineDeclaration struct { // Represents information about the Amazon S3 bucket where artifacts are stored // for the pipeline. + ArtifactStore *ArtifactStore `locationName:"artifactStore" type:"structure"` + + // A mapping of artifactStore objects and their corresponding regions. There + // must be an artifact store for the pipeline region and for each cross-region + // action within the pipeline. You can only use either artifactStore or artifactStores, + // not both. // - // ArtifactStore is a required field - ArtifactStore *ArtifactStore `locationName:"artifactStore" type:"structure" required:"true"` + // If you create a cross-region action in your pipeline, you must use artifactStores. + ArtifactStores map[string]*ArtifactStore `locationName:"artifactStores" type:"map"` // The name of the action to be performed. // @@ -5751,9 +6497,6 @@ func (s PipelineDeclaration) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *PipelineDeclaration) Validate() error { invalidParams := request.ErrInvalidParams{Context: "PipelineDeclaration"} - if s.ArtifactStore == nil { - invalidParams.Add(request.NewErrParamRequired("ArtifactStore")) - } if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } @@ -5774,6 +6517,16 @@ func (s *PipelineDeclaration) Validate() error { invalidParams.AddNested("ArtifactStore", err.(request.ErrInvalidParams)) } } + if s.ArtifactStores != nil { + for i, v := range s.ArtifactStores { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ArtifactStores", i), err.(request.ErrInvalidParams)) + } + } + } if s.Stages != nil { for i, v := range s.Stages { if v == nil { @@ -5797,6 +6550,12 @@ func (s *PipelineDeclaration) SetArtifactStore(v *ArtifactStore) *PipelineDeclar return s } +// SetArtifactStores sets the ArtifactStores field's value. +func (s *PipelineDeclaration) SetArtifactStores(v map[string]*ArtifactStore) *PipelineDeclaration { + s.ArtifactStores = v + return s +} + // SetName sets the Name field's value. func (s *PipelineDeclaration) SetName(v string) *PipelineDeclaration { s.Name = &v @@ -5897,13 +6656,16 @@ type PipelineExecutionSummary struct { // The date and time of the last change to the pipeline execution, in timestamp // format. - LastUpdateTime *time.Time `locationName:"lastUpdateTime" type:"timestamp" timestampFormat:"unix"` + LastUpdateTime *time.Time `locationName:"lastUpdateTime" type:"timestamp"` // The ID of the pipeline execution. PipelineExecutionId *string `locationName:"pipelineExecutionId" type:"string"` + // A list of the source artifact revisions that initiated a pipeline execution. 
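The `artifactStore` / `artifactStores` split documented on `PipelineDeclaration` above means a cross-region pipeline swaps the single store for a region-keyed map, and each cross-region action carries its own `Region` (added earlier in this diff on `ActionDeclaration`). A hedged sketch, where the helpers and the pre-built stores are assumptions and `ArtifactStore`'s own fields are defined elsewhere in this file:

```go
package example

import "github.com/aws/aws-sdk-go/service/codepipeline"

// withCrossRegionStores switches a pipeline declaration from the single
// artifactStore field to the region-keyed artifactStores map; the two
// fields are mutually exclusive.
func withCrossRegionStores(decl *codepipeline.PipelineDeclaration, stores map[string]*codepipeline.ArtifactStore) *codepipeline.PipelineDeclaration {
	decl.ArtifactStore = nil // only one of artifactStore / artifactStores may be set
	return decl.SetArtifactStores(stores)
}

// markCrossRegion sets the target region on an action declaration, e.g. "us-west-2".
func markCrossRegion(action *codepipeline.ActionDeclaration, region string) *codepipeline.ActionDeclaration {
	return action.SetRegion(region)
}
```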
+ SourceRevisions []*SourceRevision `locationName:"sourceRevisions" type:"list"` + // The date and time when the pipeline execution began, in timestamp format. - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` // The status of the pipeline execution. // @@ -5941,6 +6703,12 @@ func (s *PipelineExecutionSummary) SetPipelineExecutionId(v string) *PipelineExe return s } +// SetSourceRevisions sets the SourceRevisions field's value. +func (s *PipelineExecutionSummary) SetSourceRevisions(v []*SourceRevision) *PipelineExecutionSummary { + s.SourceRevisions = v + return s +} + // SetStartTime sets the StartTime field's value. func (s *PipelineExecutionSummary) SetStartTime(v time.Time) *PipelineExecutionSummary { s.StartTime = &v @@ -5958,13 +6726,13 @@ type PipelineMetadata struct { _ struct{} `type:"structure"` // The date and time the pipeline was created, in timestamp format. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // The Amazon Resource Name (ARN) of the pipeline. PipelineArn *string `locationName:"pipelineArn" type:"string"` // The date and time the pipeline was last updated, in timestamp format. - Updated *time.Time `locationName:"updated" type:"timestamp" timestampFormat:"unix"` + Updated *time.Time `locationName:"updated" type:"timestamp"` } // String returns the string representation @@ -6000,13 +6768,13 @@ type PipelineSummary struct { _ struct{} `type:"structure"` // The date and time the pipeline was created, in timestamp format. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // The name of the pipeline. Name *string `locationName:"name" min:"1" type:"string"` // The date and time of the last update to the pipeline, in timestamp format. - Updated *time.Time `locationName:"updated" type:"timestamp" timestampFormat:"unix"` + Updated *time.Time `locationName:"updated" type:"timestamp"` // The version number of the pipeline. Version *int64 `locationName:"version" min:"1" type:"integer"` @@ -6464,7 +7232,7 @@ type PutApprovalResultOutput struct { _ struct{} `type:"structure"` // The timestamp showing when the approval or rejection was submitted. - ApprovedAt *time.Time `locationName:"approvedAt" type:"timestamp" timestampFormat:"unix"` + ApprovedAt *time.Time `locationName:"approvedAt" type:"timestamp"` } // String returns the string representation @@ -6566,7 +7334,7 @@ type PutJobSuccessResultInput struct { // action. It can be reused to return additional information about the progress // of the custom action. When the action is complete, no continuation token // should be supplied. - ContinuationToken *string `locationName:"continuationToken" type:"string"` + ContinuationToken *string `locationName:"continuationToken" min:"1" type:"string"` // The ID of the current revision of the artifact successfully worked upon by // the job. @@ -6596,6 +7364,9 @@ func (s PutJobSuccessResultInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
func (s *PutJobSuccessResultInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "PutJobSuccessResultInput"} + if s.ContinuationToken != nil && len(*s.ContinuationToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ContinuationToken", 1)) + } if s.JobId == nil { invalidParams.Add(request.NewErrParamRequired("JobId")) } @@ -6763,7 +7534,7 @@ type PutThirdPartyJobSuccessResultInput struct { // of the action. It can be reused to return additional information about the // progress of the partner action. When the action is complete, no continuation // token should be supplied. - ContinuationToken *string `locationName:"continuationToken" type:"string"` + ContinuationToken *string `locationName:"continuationToken" min:"1" type:"string"` // Represents information about a current revision. CurrentRevision *CurrentRevision `locationName:"currentRevision" type:"structure"` @@ -6798,6 +7569,9 @@ func (s *PutThirdPartyJobSuccessResultInput) Validate() error { if s.ClientToken != nil && len(*s.ClientToken) < 1 { invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) } + if s.ContinuationToken != nil && len(*s.ContinuationToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ContinuationToken", 1)) + } if s.JobId == nil { invalidParams.Add(request.NewErrParamRequired("JobId")) } @@ -6865,13 +7639,135 @@ func (s PutThirdPartyJobSuccessResultOutput) GoString() string { return s.String() } -// Represents the input of a RetryStageExecution action. -type RetryStageExecutionInput struct { +type PutWebhookInput struct { _ struct{} `type:"structure"` - // The ID of the pipeline execution in the failed stage to be retried. Use the - // GetPipelineState action to retrieve the current pipelineExecutionId of the - // failed stage + // The detail provided in an input file to create the webhook, such as the webhook + // name, the pipeline name, and the action name. Give the webhook a unique name + // which identifies the webhook being defined. You may choose to name the webhook + // after the pipeline and action it targets so that you can easily recognize + // what it's used for later. + // + // Webhook is a required field + Webhook *WebhookDefinition `locationName:"webhook" type:"structure" required:"true"` +} + +// String returns the string representation +func (s PutWebhookInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutWebhookInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutWebhookInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutWebhookInput"} + if s.Webhook == nil { + invalidParams.Add(request.NewErrParamRequired("Webhook")) + } + if s.Webhook != nil { + if err := s.Webhook.Validate(); err != nil { + invalidParams.AddNested("Webhook", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWebhook sets the Webhook field's value. +func (s *PutWebhookInput) SetWebhook(v *WebhookDefinition) *PutWebhookInput { + s.Webhook = v + return s +} + +type PutWebhookOutput struct { + _ struct{} `type:"structure"` + + // The detail returned from creating the webhook, such as the webhook name, + // webhook URL, and webhook ARN. 
+ Webhook *ListWebhookItem `locationName:"webhook" type:"structure"` +} + +// String returns the string representation +func (s PutWebhookOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutWebhookOutput) GoString() string { + return s.String() +} + +// SetWebhook sets the Webhook field's value. +func (s *PutWebhookOutput) SetWebhook(v *ListWebhookItem) *PutWebhookOutput { + s.Webhook = v + return s +} + +type RegisterWebhookWithThirdPartyInput struct { + _ struct{} `type:"structure"` + + // The name of an existing webhook created with PutWebhook to register with + // a supported third party. + WebhookName *string `locationName:"webhookName" min:"1" type:"string"` +} + +// String returns the string representation +func (s RegisterWebhookWithThirdPartyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterWebhookWithThirdPartyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RegisterWebhookWithThirdPartyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterWebhookWithThirdPartyInput"} + if s.WebhookName != nil && len(*s.WebhookName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("WebhookName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWebhookName sets the WebhookName field's value. +func (s *RegisterWebhookWithThirdPartyInput) SetWebhookName(v string) *RegisterWebhookWithThirdPartyInput { + s.WebhookName = &v + return s +} + +type RegisterWebhookWithThirdPartyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RegisterWebhookWithThirdPartyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterWebhookWithThirdPartyOutput) GoString() string { + return s.String() +} + +// Represents the input of a RetryStageExecution action. +type RetryStageExecutionInput struct { + _ struct{} `type:"structure"` + + // The ID of the pipeline execution in the failed stage to be retried. Use the + // GetPipelineState action to retrieve the current pipelineExecutionId of the + // failed stage // // PipelineExecutionId is a required field PipelineExecutionId *string `locationName:"pipelineExecutionId" type:"string" required:"true"` @@ -7016,6 +7912,66 @@ func (s *S3ArtifactLocation) SetObjectKey(v string) *S3ArtifactLocation { return s } +// Information about the version (or revision) of a source artifact that initiated +// a pipeline execution. +type SourceRevision struct { + _ struct{} `type:"structure"` + + // The name of the action that processed the revision to the source artifact. + // + // ActionName is a required field + ActionName *string `locationName:"actionName" min:"1" type:"string" required:"true"` + + // The system-generated unique ID that identifies the revision number of the + // artifact. + RevisionId *string `locationName:"revisionId" min:"1" type:"string"` + + // Summary information about the most recent revision of the artifact. For GitHub + // and AWS CodeCommit repositories, the commit message. For Amazon S3 buckets + // or actions, the user-provided content of a codepipeline-artifact-revision-summary + // key specified in the object metadata. 
+ RevisionSummary *string `locationName:"revisionSummary" min:"1" type:"string"` + + // The commit ID for the artifact revision. For artifacts stored in GitHub or + // AWS CodeCommit repositories, the commit ID is linked to a commit details + // page. + RevisionUrl *string `locationName:"revisionUrl" min:"1" type:"string"` +} + +// String returns the string representation +func (s SourceRevision) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SourceRevision) GoString() string { + return s.String() +} + +// SetActionName sets the ActionName field's value. +func (s *SourceRevision) SetActionName(v string) *SourceRevision { + s.ActionName = &v + return s +} + +// SetRevisionId sets the RevisionId field's value. +func (s *SourceRevision) SetRevisionId(v string) *SourceRevision { + s.RevisionId = &v + return s +} + +// SetRevisionSummary sets the RevisionSummary field's value. +func (s *SourceRevision) SetRevisionSummary(v string) *SourceRevision { + s.RevisionSummary = &v + return s +} + +// SetRevisionUrl sets the RevisionUrl field's value. +func (s *SourceRevision) SetRevisionUrl(v string) *SourceRevision { + s.RevisionUrl = &v + return s +} + // Represents information about a stage to a job worker. type StageContext struct { _ struct{} `type:"structure"` @@ -7219,6 +8175,9 @@ func (s *StageState) SetStageName(v string) *StageState { type StartPipelineExecutionInput struct { _ struct{} `type:"structure"` + // The system-generated unique ID used to identify a unique execution request. + ClientRequestToken *string `locationName:"clientRequestToken" min:"1" type:"string" idempotencyToken:"true"` + // The name of the pipeline to start. // // Name is a required field @@ -7238,6 +8197,9 @@ func (s StartPipelineExecutionInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *StartPipelineExecutionInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "StartPipelineExecutionInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 1)) + } if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } @@ -7251,6 +8213,12 @@ func (s *StartPipelineExecutionInput) Validate() error { return nil } +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *StartPipelineExecutionInput) SetClientRequestToken(v string) *StartPipelineExecutionInput { + s.ClientRequestToken = &v + return s +} + // SetName sets the Name field's value. func (s *StartPipelineExecutionInput) SetName(v string) *StartPipelineExecutionInput { s.Name = &v @@ -7334,7 +8302,7 @@ type ThirdPartyJobData struct { // A system-generated token, such as a AWS CodeDeploy deployment ID, that a // job requires in order to continue the job asynchronously. - ContinuationToken *string `locationName:"continuationToken" type:"string"` + ContinuationToken *string `locationName:"continuationToken" min:"1" type:"string"` // The encryption key used to encrypt and decrypt data in the artifact store // for the pipeline, such as an AWS Key Management Service (AWS KMS) key. This @@ -7428,7 +8396,7 @@ type ThirdPartyJobDetails struct { // A system-generated random number that AWS CodePipeline uses to ensure that // the job is being worked on by only one job worker. Use this number in an // AcknowledgeThirdPartyJob request. 
- Nonce *string `locationName:"nonce" type:"string"` + Nonce *string `locationName:"nonce" min:"1" type:"string"` } // String returns the string representation @@ -7472,7 +8440,7 @@ type TransitionState struct { Enabled *bool `locationName:"enabled" type:"boolean"` // The timestamp when the transition state was last changed. - LastChangedAt *time.Time `locationName:"lastChangedAt" type:"timestamp" timestampFormat:"unix"` + LastChangedAt *time.Time `locationName:"lastChangedAt" type:"timestamp"` // The ID of the user who last changed the transition state. LastChangedBy *string `locationName:"lastChangedBy" type:"string"` @@ -7580,6 +8548,272 @@ func (s *UpdatePipelineOutput) SetPipeline(v *PipelineDeclaration) *UpdatePipeli return s } +// The authentication applied to incoming webhook trigger requests. +type WebhookAuthConfiguration struct { + _ struct{} `type:"structure"` + + // The property used to configure acceptance of webhooks within a specific IP + // range. For IP, only the AllowedIPRange property must be set, and this property + // must be set to a valid CIDR range. + AllowedIPRange *string `min:"1" type:"string"` + + // The property used to configure GitHub authentication. For GITHUB_HMAC, only + // the SecretToken property must be set. + SecretToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s WebhookAuthConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WebhookAuthConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *WebhookAuthConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "WebhookAuthConfiguration"} + if s.AllowedIPRange != nil && len(*s.AllowedIPRange) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AllowedIPRange", 1)) + } + if s.SecretToken != nil && len(*s.SecretToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllowedIPRange sets the AllowedIPRange field's value. +func (s *WebhookAuthConfiguration) SetAllowedIPRange(v string) *WebhookAuthConfiguration { + s.AllowedIPRange = &v + return s +} + +// SetSecretToken sets the SecretToken field's value. +func (s *WebhookAuthConfiguration) SetSecretToken(v string) *WebhookAuthConfiguration { + s.SecretToken = &v + return s +} + +// Represents information about a webhook and its definition. +type WebhookDefinition struct { + _ struct{} `type:"structure"` + + // Supported options are GITHUB_HMAC, IP and UNAUTHENTICATED. + // + // * GITHUB_HMAC implements the authentication scheme described here: https://developer.github.com/webhooks/securing/ + // + // * IP will reject webhooks trigger requests unless they originate from + // an IP within the IP range whitelisted in the authentication configuration. + // + // * UNAUTHENTICATED will accept all webhook trigger requests regardless + // of origin. + // + // Authentication is a required field + Authentication *string `locationName:"authentication" type:"string" required:"true" enum:"WebhookAuthenticationType"` + + // Properties that configure the authentication applied to incoming webhook + // trigger requests. The required properties depend on the authentication type. + // For GITHUB_HMAC, only the SecretToken property must be set. For IP, only + // the AllowedIPRange property must be set to a valid CIDR range. 
For UNAUTHENTICATED, + // no properties can be set. + // + // AuthenticationConfiguration is a required field + AuthenticationConfiguration *WebhookAuthConfiguration `locationName:"authenticationConfiguration" type:"structure" required:"true"` + + // A list of rules applied to the body/payload sent in the POST request to a + // webhook URL. All defined rules must pass for the request to be accepted and + // the pipeline started. + // + // Filters is a required field + Filters []*WebhookFilterRule `locationName:"filters" type:"list" required:"true"` + + // The name of the webhook. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` + + // The name of the action in a pipeline you want to connect to the webhook. + // The action must be from the source (first) stage of the pipeline. + // + // TargetAction is a required field + TargetAction *string `locationName:"targetAction" min:"1" type:"string" required:"true"` + + // The name of the pipeline you want to connect to the webhook. + // + // TargetPipeline is a required field + TargetPipeline *string `locationName:"targetPipeline" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s WebhookDefinition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WebhookDefinition) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *WebhookDefinition) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "WebhookDefinition"} + if s.Authentication == nil { + invalidParams.Add(request.NewErrParamRequired("Authentication")) + } + if s.AuthenticationConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("AuthenticationConfiguration")) + } + if s.Filters == nil { + invalidParams.Add(request.NewErrParamRequired("Filters")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.TargetAction == nil { + invalidParams.Add(request.NewErrParamRequired("TargetAction")) + } + if s.TargetAction != nil && len(*s.TargetAction) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetAction", 1)) + } + if s.TargetPipeline == nil { + invalidParams.Add(request.NewErrParamRequired("TargetPipeline")) + } + if s.TargetPipeline != nil && len(*s.TargetPipeline) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetPipeline", 1)) + } + if s.AuthenticationConfiguration != nil { + if err := s.AuthenticationConfiguration.Validate(); err != nil { + invalidParams.AddNested("AuthenticationConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthentication sets the Authentication field's value. +func (s *WebhookDefinition) SetAuthentication(v string) *WebhookDefinition { + s.Authentication = &v + return s +} + +// SetAuthenticationConfiguration sets the AuthenticationConfiguration field's value. 
+func (s *WebhookDefinition) SetAuthenticationConfiguration(v *WebhookAuthConfiguration) *WebhookDefinition { + s.AuthenticationConfiguration = v + return s +} + +// SetFilters sets the Filters field's value. +func (s *WebhookDefinition) SetFilters(v []*WebhookFilterRule) *WebhookDefinition { + s.Filters = v + return s +} + +// SetName sets the Name field's value. +func (s *WebhookDefinition) SetName(v string) *WebhookDefinition { + s.Name = &v + return s +} + +// SetTargetAction sets the TargetAction field's value. +func (s *WebhookDefinition) SetTargetAction(v string) *WebhookDefinition { + s.TargetAction = &v + return s +} + +// SetTargetPipeline sets the TargetPipeline field's value. +func (s *WebhookDefinition) SetTargetPipeline(v string) *WebhookDefinition { + s.TargetPipeline = &v + return s +} + +// The event criteria that specify when a webhook notification is sent to your +// URL. +type WebhookFilterRule struct { + _ struct{} `type:"structure"` + + // A JsonPath expression that will be applied to the body/payload of the webhook. + // The value selected by JsonPath expression must match the value specified + // in the matchEquals field, otherwise the request will be ignored. More information + // on JsonPath expressions can be found here: https://github.com/json-path/JsonPath. + // + // JsonPath is a required field + JsonPath *string `locationName:"jsonPath" min:"1" type:"string" required:"true"` + + // The value selected by the JsonPath expression must match what is supplied + // in the MatchEquals field, otherwise the request will be ignored. Properties + // from the target action configuration can be included as placeholders in this + // value by surrounding the action configuration key with curly braces. For + // example, if the value supplied here is "refs/heads/{Branch}" and the target + // action has an action configuration property called "Branch" with a value + // of "master", the MatchEquals value will be evaluated as "refs/heads/master". + // A list of action configuration properties for built-in action types can be + // found here: Pipeline Structure Reference Action Requirements (http://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements). + MatchEquals *string `locationName:"matchEquals" min:"1" type:"string"` +} + +// String returns the string representation +func (s WebhookFilterRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WebhookFilterRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *WebhookFilterRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "WebhookFilterRule"} + if s.JsonPath == nil { + invalidParams.Add(request.NewErrParamRequired("JsonPath")) + } + if s.JsonPath != nil && len(*s.JsonPath) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JsonPath", 1)) + } + if s.MatchEquals != nil && len(*s.MatchEquals) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MatchEquals", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetJsonPath sets the JsonPath field's value. +func (s *WebhookFilterRule) SetJsonPath(v string) *WebhookFilterRule { + s.JsonPath = &v + return s +} + +// SetMatchEquals sets the MatchEquals field's value. 
+func (s *WebhookFilterRule) SetMatchEquals(v string) *WebhookFilterRule { + s.MatchEquals = &v + return s +} + const ( // ActionCategorySource is a ActionCategory enum value ActionCategorySource = "Source" @@ -7741,3 +8975,14 @@ const ( // StageTransitionTypeOutbound is a StageTransitionType enum value StageTransitionTypeOutbound = "Outbound" ) + +const ( + // WebhookAuthenticationTypeGithubHmac is a WebhookAuthenticationType enum value + WebhookAuthenticationTypeGithubHmac = "GITHUB_HMAC" + + // WebhookAuthenticationTypeIp is a WebhookAuthenticationType enum value + WebhookAuthenticationTypeIp = "IP" + + // WebhookAuthenticationTypeUnauthenticated is a WebhookAuthenticationType enum value + WebhookAuthenticationTypeUnauthenticated = "UNAUTHENTICATED" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/codepipeline/doc.go b/vendor/github.com/aws/aws-sdk-go/service/codepipeline/doc.go index bebd4b1bbe1..ecb1cd86257 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codepipeline/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codepipeline/doc.go @@ -11,10 +11,10 @@ // see the AWS CodePipeline User Guide (http://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html). // // You can use the AWS CodePipeline API to work with pipelines, stages, actions, -// gates, and transitions, as described below. +// and transitions, as described below. // // Pipelines are models of automated release processes. Each pipeline is uniquely -// named, and consists of actions, gates, and stages. +// named, and consists of stages, actions, and transitions. // // You can work with pipelines by calling: // @@ -43,24 +43,37 @@ // * UpdatePipeline, which updates a pipeline with edits or changes to the // structure of the pipeline. // -// Pipelines include stages, which are logical groupings of gates and actions. -// Each stage contains one or more actions that must complete before the next -// stage begins. A stage will result in success or failure. If a stage fails, -// then the pipeline stops at that stage and will remain stopped until either -// a new version of an artifact appears in the source location, or a user takes -// action to re-run the most recent artifact through the pipeline. You can call -// GetPipelineState, which displays the status of a pipeline, including the -// status of stages in the pipeline, or GetPipeline, which returns the entire -// structure of the pipeline, including the stages of that pipeline. For more -// information about the structure of stages and actions, also refer to the -// AWS CodePipeline Pipeline Structure Reference (http://docs.aws.amazon.com/codepipeline/latest/userguide/pipeline-structure.html). +// Pipelines include stages. Each stage contains one or more actions that must +// complete before the next stage begins. A stage will result in success or +// failure. If a stage fails, then the pipeline stops at that stage and will +// remain stopped until either a new version of an artifact appears in the source +// location, or a user takes action to re-run the most recent artifact through +// the pipeline. You can call GetPipelineState, which displays the status of +// a pipeline, including the status of stages in the pipeline, or GetPipeline, +// which returns the entire structure of the pipeline, including the stages +// of that pipeline. 
For more information about the structure of stages and +// actions, also refer to the AWS CodePipeline Pipeline Structure Reference +// (http://docs.aws.amazon.com/codepipeline/latest/userguide/pipeline-structure.html). // // Pipeline stages include actions, which are categorized into categories such // as source or build actions performed within a stage of a pipeline. For example, // you can use a source action to import artifacts into a pipeline from a source // such as Amazon S3. Like stages, you do not work with actions directly in // most cases, but you do define and interact with actions when working with -// pipeline operations such as CreatePipeline and GetPipelineState. +// pipeline operations such as CreatePipeline and GetPipelineState. Valid action +// categories are: +// +// * Source +// +// * Build +// +// * Test +// +// * Deploy +// +// * Approval +// +// * Invoke // // Pipelines also include transitions, which allow the transition of artifacts // from one stage to the next in a pipeline after the actions in one stage complete. diff --git a/vendor/github.com/aws/aws-sdk-go/service/codepipeline/errors.go b/vendor/github.com/aws/aws-sdk-go/service/codepipeline/errors.go index ca160b26a87..dfdffa34c79 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codepipeline/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codepipeline/errors.go @@ -83,6 +83,18 @@ const ( // The specified structure was specified in an invalid format. ErrCodeInvalidStructureException = "InvalidStructureException" + // ErrCodeInvalidWebhookAuthenticationParametersException for service response error code + // "InvalidWebhookAuthenticationParametersException". + // + // The specified authentication type is in an invalid format. + ErrCodeInvalidWebhookAuthenticationParametersException = "InvalidWebhookAuthenticationParametersException" + + // ErrCodeInvalidWebhookFilterPatternException for service response error code + // "InvalidWebhookFilterPatternException". + // + // The specified event filter rule is in an invalid format. + ErrCodeInvalidWebhookFilterPatternException = "InvalidWebhookFilterPatternException" + // ErrCodeJobNotFoundException for service response error code // "JobNotFoundException". // @@ -149,4 +161,10 @@ const ( // // The validation was specified in an invalid format. ErrCodeValidationException = "ValidationException" + + // ErrCodeWebhookNotFoundException for service response error code + // "WebhookNotFoundException". + // + // The specified webhook was entered in an invalid format or cannot be found. + ErrCodeWebhookNotFoundException = "WebhookNotFoundException" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/codepipeline/service.go b/vendor/github.com/aws/aws-sdk-go/service/codepipeline/service.go index 8b7737abb8e..397a00f9eb2 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/codepipeline/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/codepipeline/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "codepipeline" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "codepipeline" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "CodePipeline" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CodePipeline client with a session. 
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cognitoidentity/api.go b/vendor/github.com/aws/aws-sdk-go/service/cognitoidentity/api.go index 399ef91e212..ffafc65c278 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cognitoidentity/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cognitoidentity/api.go @@ -17,8 +17,8 @@ const opCreateIdentityPool = "CreateIdentityPool" // CreateIdentityPoolRequest generates a "aws/request.Request" representing the // client's request for the CreateIdentityPool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -126,8 +126,8 @@ const opDeleteIdentities = "DeleteIdentities" // DeleteIdentitiesRequest generates a "aws/request.Request" representing the // client's request for the DeleteIdentities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -214,8 +214,8 @@ const opDeleteIdentityPool = "DeleteIdentityPool" // DeleteIdentityPoolRequest generates a "aws/request.Request" representing the // client's request for the DeleteIdentityPool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -311,8 +311,8 @@ const opDescribeIdentity = "DescribeIdentity" // DescribeIdentityRequest generates a "aws/request.Request" representing the // client's request for the DescribeIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -406,8 +406,8 @@ const opDescribeIdentityPool = "DescribeIdentityPool" // DescribeIdentityPoolRequest generates a "aws/request.Request" representing the // client's request for the DescribeIdentityPool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -501,8 +501,8 @@ const opGetCredentialsForIdentity = "GetCredentialsForIdentity" // GetCredentialsForIdentityRequest generates a "aws/request.Request" representing the // client's request for the GetCredentialsForIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -610,8 +610,8 @@ const opGetId = "GetId" // GetIdRequest generates a "aws/request.Request" representing the // client's request for the GetId operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -716,8 +716,8 @@ const opGetIdentityPoolRoles = "GetIdentityPoolRoles" // GetIdentityPoolRolesRequest generates a "aws/request.Request" representing the // client's request for the GetIdentityPoolRoles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -814,8 +814,8 @@ const opGetOpenIdToken = "GetOpenIdToken" // GetOpenIdTokenRequest generates a "aws/request.Request" representing the // client's request for the GetOpenIdToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -920,8 +920,8 @@ const opGetOpenIdTokenForDeveloperIdentity = "GetOpenIdTokenForDeveloperIdentity // GetOpenIdTokenForDeveloperIdentityRequest generates a "aws/request.Request" representing the // client's request for the GetOpenIdTokenForDeveloperIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1035,8 +1035,8 @@ const opListIdentities = "ListIdentities" // ListIdentitiesRequest generates a "aws/request.Request" representing the // client's request for the ListIdentities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1129,8 +1129,8 @@ const opListIdentityPools = "ListIdentityPools" // ListIdentityPoolsRequest generates a "aws/request.Request" representing the // client's request for the ListIdentityPools operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1219,8 +1219,8 @@ const opLookupDeveloperIdentity = "LookupDeveloperIdentity" // LookupDeveloperIdentityRequest generates a "aws/request.Request" representing the // client's request for the LookupDeveloperIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1324,8 +1324,8 @@ const opMergeDeveloperIdentities = "MergeDeveloperIdentities" // MergeDeveloperIdentitiesRequest generates a "aws/request.Request" representing the // client's request for the MergeDeveloperIdentities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1428,8 +1428,8 @@ const opSetIdentityPoolRoles = "SetIdentityPoolRoles" // SetIdentityPoolRolesRequest generates a "aws/request.Request" representing the // client's request for the SetIdentityPoolRoles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1532,8 +1532,8 @@ const opUnlinkDeveloperIdentity = "UnlinkDeveloperIdentity" // UnlinkDeveloperIdentityRequest generates a "aws/request.Request" representing the // client's request for the UnlinkDeveloperIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1635,8 +1635,8 @@ const opUnlinkIdentity = "UnlinkIdentity" // UnlinkIdentityRequest generates a "aws/request.Request" representing the // client's request for the UnlinkIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1741,8 +1741,8 @@ const opUpdateIdentityPool = "UpdateIdentityPool" // UpdateIdentityPoolRequest generates a "aws/request.Request" representing the // client's request for the UpdateIdentityPool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1970,7 +1970,7 @@ type Credentials struct { AccessKeyId *string `type:"string"` // The date at which these credentials will expire. - Expiration *time.Time `type:"timestamp" timestampFormat:"unix"` + Expiration *time.Time `type:"timestamp"` // The Secret Access Key portion of the credentials SecretKey *string `type:"string"` @@ -2728,13 +2728,13 @@ type IdentityDescription struct { _ struct{} `type:"structure"` // Date on which the identity was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // A unique identifier in the format REGION:GUID. IdentityId *string `min:"1" type:"string"` // Date on which the identity was last modified. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // A set of optional name-value pairs that map provider names to provider tokens. Logins []*string `type:"list"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/cognitoidentity/service.go b/vendor/github.com/aws/aws-sdk-go/service/cognitoidentity/service.go index ee82b18ce31..9ebae103f36 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cognitoidentity/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cognitoidentity/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "cognito-identity" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "cognito-identity" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Cognito Identity" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CognitoIdentity client with a session. 
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/cognitoidentityprovider/api.go b/vendor/github.com/aws/aws-sdk-go/service/cognitoidentityprovider/api.go index 168fe2192df..3187edea36c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cognitoidentityprovider/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cognitoidentityprovider/api.go @@ -18,8 +18,8 @@ const opAddCustomAttributes = "AddCustomAttributes" // AddCustomAttributesRequest generates a "aws/request.Request" representing the // client's request for the AddCustomAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -116,8 +116,8 @@ const opAdminAddUserToGroup = "AdminAddUserToGroup" // AdminAddUserToGroupRequest generates a "aws/request.Request" representing the // client's request for the AdminAddUserToGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -217,8 +217,8 @@ const opAdminConfirmSignUp = "AdminConfirmSignUp" // AdminConfirmSignUpRequest generates a "aws/request.Request" representing the // client's request for the AdminConfirmSignUp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -337,8 +337,8 @@ const opAdminCreateUser = "AdminCreateUser" // AdminCreateUserRequest generates a "aws/request.Request" representing the // client's request for the AdminCreateUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -488,8 +488,8 @@ const opAdminDeleteUser = "AdminDeleteUser" // AdminDeleteUserRequest generates a "aws/request.Request" representing the // client's request for the AdminDeleteUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -589,8 +589,8 @@ const opAdminDeleteUserAttributes = "AdminDeleteUserAttributes" // AdminDeleteUserAttributesRequest generates a "aws/request.Request" representing the // client's request for the AdminDeleteUserAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -689,8 +689,8 @@ const opAdminDisableProviderForUser = "AdminDisableProviderForUser" // AdminDisableProviderForUserRequest generates a "aws/request.Request" representing the // client's request for the AdminDisableProviderForUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -820,8 +820,8 @@ const opAdminDisableUser = "AdminDisableUser" // AdminDisableUserRequest generates a "aws/request.Request" representing the // client's request for the AdminDisableUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -919,8 +919,8 @@ const opAdminEnableUser = "AdminEnableUser" // AdminEnableUserRequest generates a "aws/request.Request" representing the // client's request for the AdminEnableUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1018,8 +1018,8 @@ const opAdminForgetDevice = "AdminForgetDevice" // AdminForgetDeviceRequest generates a "aws/request.Request" representing the // client's request for the AdminForgetDevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1122,8 +1122,8 @@ const opAdminGetDevice = "AdminGetDevice" // AdminGetDeviceRequest generates a "aws/request.Request" representing the // client's request for the AdminGetDevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1221,8 +1221,8 @@ const opAdminGetUser = "AdminGetUser" // AdminGetUserRequest generates a "aws/request.Request" representing the // client's request for the AdminGetUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1321,8 +1321,8 @@ const opAdminInitiateAuth = "AdminInitiateAuth" // AdminInitiateAuthRequest generates a "aws/request.Request" representing the // client's request for the AdminInitiateAuth operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1455,8 +1455,8 @@ const opAdminLinkProviderForUser = "AdminLinkProviderForUser" // AdminLinkProviderForUserRequest generates a "aws/request.Request" representing the // client's request for the AdminLinkProviderForUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1576,8 +1576,8 @@ const opAdminListDevices = "AdminListDevices" // AdminListDevicesRequest generates a "aws/request.Request" representing the // client's request for the AdminListDevices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1675,8 +1675,8 @@ const opAdminListGroupsForUser = "AdminListGroupsForUser" // AdminListGroupsForUserRequest generates a "aws/request.Request" representing the // client's request for the AdminListGroupsForUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1774,8 +1774,8 @@ const opAdminListUserAuthEvents = "AdminListUserAuthEvents" // AdminListUserAuthEventsRequest generates a "aws/request.Request" representing the // client's request for the AdminListUserAuthEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1875,8 +1875,8 @@ const opAdminRemoveUserFromGroup = "AdminRemoveUserFromGroup" // AdminRemoveUserFromGroupRequest generates a "aws/request.Request" representing the // client's request for the AdminRemoveUserFromGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1976,8 +1976,8 @@ const opAdminResetUserPassword = "AdminResetUserPassword" // AdminResetUserPasswordRequest generates a "aws/request.Request" representing the // client's request for the AdminResetUserPassword operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2116,8 +2116,8 @@ const opAdminRespondToAuthChallenge = "AdminRespondToAuthChallenge" // AdminRespondToAuthChallengeRequest generates a "aws/request.Request" representing the // client's request for the AdminRespondToAuthChallenge operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2271,8 +2271,8 @@ const opAdminSetUserMFAPreference = "AdminSetUserMFAPreference" // AdminSetUserMFAPreferenceRequest generates a "aws/request.Request" representing the // client's request for the AdminSetUserMFAPreference operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2370,8 +2370,8 @@ const opAdminSetUserSettings = "AdminSetUserSettings" // AdminSetUserSettingsRequest generates a "aws/request.Request" representing the // client's request for the AdminSetUserSettings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2465,8 +2465,8 @@ const opAdminUpdateAuthEventFeedback = "AdminUpdateAuthEventFeedback" // AdminUpdateAuthEventFeedbackRequest generates a "aws/request.Request" representing the // client's request for the AdminUpdateAuthEventFeedback operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2567,8 +2567,8 @@ const opAdminUpdateDeviceStatus = "AdminUpdateDeviceStatus" // AdminUpdateDeviceStatusRequest generates a "aws/request.Request" representing the // client's request for the AdminUpdateDeviceStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2669,8 +2669,8 @@ const opAdminUpdateUserAttributes = "AdminUpdateUserAttributes" // AdminUpdateUserAttributesRequest generates a "aws/request.Request" representing the // client's request for the AdminUpdateUserAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2793,8 +2793,8 @@ const opAdminUserGlobalSignOut = "AdminUserGlobalSignOut" // AdminUserGlobalSignOutRequest generates a "aws/request.Request" representing the // client's request for the AdminUserGlobalSignOut operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2892,8 +2892,8 @@ const opAssociateSoftwareToken = "AssociateSoftwareToken" // AssociateSoftwareTokenRequest generates a "aws/request.Request" representing the // client's request for the AssociateSoftwareToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2987,8 +2987,8 @@ const opChangePassword = "ChangePassword" // ChangePasswordRequest generates a "aws/request.Request" representing the // client's request for the ChangePassword operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3099,8 +3099,8 @@ const opConfirmDevice = "ConfirmDevice" // ConfirmDeviceRequest generates a "aws/request.Request" representing the // client's request for the ConfirmDevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3218,8 +3218,8 @@ const opConfirmForgotPassword = "ConfirmForgotPassword" // ConfirmForgotPasswordRequest generates a "aws/request.Request" representing the // client's request for the ConfirmForgotPassword operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3350,8 +3350,8 @@ const opConfirmSignUp = "ConfirmSignUp" // ConfirmSignUpRequest generates a "aws/request.Request" representing the // client's request for the ConfirmSignUp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3482,8 +3482,8 @@ const opCreateGroup = "CreateGroup" // CreateGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3586,8 +3586,8 @@ const opCreateIdentityProvider = "CreateIdentityProvider" // CreateIdentityProviderRequest generates a "aws/request.Request" representing the // client's request for the CreateIdentityProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3688,8 +3688,8 @@ const opCreateResourceServer = "CreateResourceServer" // CreateResourceServerRequest generates a "aws/request.Request" representing the // client's request for the CreateResourceServer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3786,8 +3786,8 @@ const opCreateUserImportJob = "CreateUserImportJob" // CreateUserImportJobRequest generates a "aws/request.Request" representing the // client's request for the CreateUserImportJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3887,8 +3887,8 @@ const opCreateUserPool = "CreateUserPool" // CreateUserPoolRequest generates a "aws/request.Request" representing the // client's request for the CreateUserPool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3999,8 +3999,8 @@ const opCreateUserPoolClient = "CreateUserPoolClient" // CreateUserPoolClientRequest generates a "aws/request.Request" representing the // client's request for the CreateUserPoolClient operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4103,8 +4103,8 @@ const opCreateUserPoolDomain = "CreateUserPoolDomain" // CreateUserPoolDomainRequest generates a "aws/request.Request" representing the // client's request for the CreateUserPoolDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4164,6 +4164,10 @@ func (c *CognitoIdentityProvider) CreateUserPoolDomainRequest(input *CreateUserP // This exception is thrown when the Amazon Cognito service cannot find the // requested resource. // +// * ErrCodeLimitExceededException "LimitExceededException" +// This exception is thrown when a user exceeds the limit for a requested AWS +// resource. +// // * ErrCodeInternalErrorException "InternalErrorException" // This exception is thrown when Amazon Cognito encounters an internal error. // @@ -4193,8 +4197,8 @@ const opDeleteGroup = "DeleteGroup" // DeleteGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -4291,8 +4295,8 @@ const opDeleteIdentityProvider = "DeleteIdentityProvider" // DeleteIdentityProviderRequest generates a "aws/request.Request" representing the // client's request for the DeleteIdentityProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4390,8 +4394,8 @@ const opDeleteResourceServer = "DeleteResourceServer" // DeleteResourceServerRequest generates a "aws/request.Request" representing the // client's request for the DeleteResourceServer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4486,8 +4490,8 @@ const opDeleteUser = "DeleteUser" // DeleteUserRequest generates a "aws/request.Request" representing the // client's request for the DeleteUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4592,8 +4596,8 @@ const opDeleteUserAttributes = "DeleteUserAttributes" // DeleteUserAttributesRequest generates a "aws/request.Request" representing the // client's request for the DeleteUserAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4696,8 +4700,8 @@ const opDeleteUserPool = "DeleteUserPool" // DeleteUserPoolRequest generates a "aws/request.Request" representing the // client's request for the DeleteUserPool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4796,8 +4800,8 @@ const opDeleteUserPoolClient = "DeleteUserPoolClient" // DeleteUserPoolClientRequest generates a "aws/request.Request" representing the // client's request for the DeleteUserPoolClient operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4892,8 +4896,8 @@ const opDeleteUserPoolDomain = "DeleteUserPoolDomain" // DeleteUserPoolDomainRequest generates a "aws/request.Request" representing the // client's request for the DeleteUserPoolDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4982,8 +4986,8 @@ const opDescribeIdentityProvider = "DescribeIdentityProvider" // DescribeIdentityProviderRequest generates a "aws/request.Request" representing the // client's request for the DescribeIdentityProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5076,8 +5080,8 @@ const opDescribeResourceServer = "DescribeResourceServer" // DescribeResourceServerRequest generates a "aws/request.Request" representing the // client's request for the DescribeResourceServer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5170,8 +5174,8 @@ const opDescribeRiskConfiguration = "DescribeRiskConfiguration" // DescribeRiskConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DescribeRiskConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5267,8 +5271,8 @@ const opDescribeUserImportJob = "DescribeUserImportJob" // DescribeUserImportJobRequest generates a "aws/request.Request" representing the // client's request for the DescribeUserImportJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5361,8 +5365,8 @@ const opDescribeUserPool = "DescribeUserPool" // DescribeUserPoolRequest generates a "aws/request.Request" representing the // client's request for the DescribeUserPool operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5459,8 +5463,8 @@ const opDescribeUserPoolClient = "DescribeUserPoolClient" // DescribeUserPoolClientRequest generates a "aws/request.Request" representing the // client's request for the DescribeUserPoolClient operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5500,7 +5504,7 @@ func (c *CognitoIdentityProvider) DescribeUserPoolClientRequest(input *DescribeU // DescribeUserPoolClient API operation for Amazon Cognito Identity Provider. // // Client method for returning the configuration information and metadata of -// the specified user pool client. +// the specified user pool app client. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5554,8 +5558,8 @@ const opDescribeUserPoolDomain = "DescribeUserPoolDomain" // DescribeUserPoolDomainRequest generates a "aws/request.Request" representing the // client's request for the DescribeUserPoolDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5644,8 +5648,8 @@ const opForgetDevice = "ForgetDevice" // ForgetDeviceRequest generates a "aws/request.Request" representing the // client's request for the ForgetDevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5752,8 +5756,8 @@ const opForgotPassword = "ForgotPassword" // ForgotPasswordRequest generates a "aws/request.Request" representing the // client's request for the ForgotPassword operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5893,8 +5897,8 @@ const opGetCSVHeader = "GetCSVHeader" // GetCSVHeaderRequest generates a "aws/request.Request" representing the // client's request for the GetCSVHeader operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5988,8 +5992,8 @@ const opGetDevice = "GetDevice" // GetDeviceRequest generates a "aws/request.Request" representing the // client's request for the GetDevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6094,8 +6098,8 @@ const opGetGroup = "GetGroup" // GetGroupRequest generates a "aws/request.Request" representing the // client's request for the GetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6190,8 +6194,8 @@ const opGetIdentityProviderByIdentifier = "GetIdentityProviderByIdentifier" // GetIdentityProviderByIdentifierRequest generates a "aws/request.Request" representing the // client's request for the GetIdentityProviderByIdentifier operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6284,8 +6288,8 @@ const opGetSigningCertificate = "GetSigningCertificate" // GetSigningCertificateRequest generates a "aws/request.Request" representing the // client's request for the GetSigningCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6367,8 +6371,8 @@ const opGetUICustomization = "GetUICustomization" // GetUICustomizationRequest generates a "aws/request.Request" representing the // client's request for the GetUICustomization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6464,8 +6468,8 @@ const opGetUser = "GetUser" // GetUserRequest generates a "aws/request.Request" representing the // client's request for the GetUser operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6568,8 +6572,8 @@ const opGetUserAttributeVerificationCode = "GetUserAttributeVerificationCode" // GetUserAttributeVerificationCodeRequest generates a "aws/request.Request" representing the // client's request for the GetUserAttributeVerificationCode operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6705,8 +6709,8 @@ const opGetUserPoolMfaConfig = "GetUserPoolMfaConfig" // GetUserPoolMfaConfigRequest generates a "aws/request.Request" representing the // client's request for the GetUserPoolMfaConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6799,8 +6803,8 @@ const opGlobalSignOut = "GlobalSignOut" // GlobalSignOutRequest generates a "aws/request.Request" representing the // client's request for the GlobalSignOut operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6899,8 +6903,8 @@ const opInitiateAuth = "InitiateAuth" // InitiateAuthRequest generates a "aws/request.Request" representing the // client's request for the InitiateAuth operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7017,8 +7021,8 @@ const opListDevices = "ListDevices" // ListDevicesRequest generates a "aws/request.Request" representing the // client's request for the ListDevices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7123,8 +7127,8 @@ const opListGroups = "ListGroups" // ListGroupsRequest generates a "aws/request.Request" representing the // client's request for the ListGroups operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7219,8 +7223,8 @@ const opListIdentityProviders = "ListIdentityProviders" // ListIdentityProvidersRequest generates a "aws/request.Request" representing the // client's request for the ListIdentityProviders operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7313,8 +7317,8 @@ const opListResourceServers = "ListResourceServers" // ListResourceServersRequest generates a "aws/request.Request" representing the // client's request for the ListResourceServers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7407,8 +7411,8 @@ const opListUserImportJobs = "ListUserImportJobs" // ListUserImportJobsRequest generates a "aws/request.Request" representing the // client's request for the ListUserImportJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7501,8 +7505,8 @@ const opListUserPoolClients = "ListUserPoolClients" // ListUserPoolClientsRequest generates a "aws/request.Request" representing the // client's request for the ListUserPoolClients operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7595,8 +7599,8 @@ const opListUserPools = "ListUserPools" // ListUserPoolsRequest generates a "aws/request.Request" representing the // client's request for the ListUserPools operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7685,8 +7689,8 @@ const opListUsers = "ListUsers" // ListUsersRequest generates a "aws/request.Request" representing the // client's request for the ListUsers operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7779,8 +7783,8 @@ const opListUsersInGroup = "ListUsersInGroup" // ListUsersInGroupRequest generates a "aws/request.Request" representing the // client's request for the ListUsersInGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7875,8 +7879,8 @@ const opResendConfirmationCode = "ResendConfirmationCode" // ResendConfirmationCodeRequest generates a "aws/request.Request" representing the // client's request for the ResendConfirmationCode operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8007,8 +8011,8 @@ const opRespondToAuthChallenge = "RespondToAuthChallenge" // RespondToAuthChallengeRequest generates a "aws/request.Request" representing the // client's request for the RespondToAuthChallenge operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8160,8 +8164,8 @@ const opSetRiskConfiguration = "SetRiskConfiguration" // SetRiskConfigurationRequest generates a "aws/request.Request" representing the // client's request for the SetRiskConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8270,8 +8274,8 @@ const opSetUICustomization = "SetUICustomization" // SetUICustomizationRequest generates a "aws/request.Request" representing the // client's request for the SetUICustomization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -8375,8 +8379,8 @@ const opSetUserMFAPreference = "SetUserMFAPreference" // SetUserMFAPreferenceRequest generates a "aws/request.Request" representing the // client's request for the SetUserMFAPreference operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8474,8 +8478,8 @@ const opSetUserPoolMfaConfig = "SetUserPoolMfaConfig" // SetUserPoolMfaConfigRequest generates a "aws/request.Request" representing the // client's request for the SetUserPoolMfaConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8578,8 +8582,8 @@ const opSetUserSettings = "SetUserSettings" // SetUserSettingsRequest generates a "aws/request.Request" representing the // client's request for the SetUserSettings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8680,8 +8684,8 @@ const opSignUp = "SignUp" // SignUpRequest generates a "aws/request.Request" representing the // client's request for the SignUp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8813,8 +8817,8 @@ const opStartUserImportJob = "StartUserImportJob" // StartUserImportJobRequest generates a "aws/request.Request" representing the // client's request for the StartUserImportJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8910,8 +8914,8 @@ const opStopUserImportJob = "StopUserImportJob" // StopUserImportJobRequest generates a "aws/request.Request" representing the // client's request for the StopUserImportJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -9007,8 +9011,8 @@ const opUpdateAuthEventFeedback = "UpdateAuthEventFeedback" // UpdateAuthEventFeedbackRequest generates a "aws/request.Request" representing the // client's request for the UpdateAuthEventFeedback operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9109,8 +9113,8 @@ const opUpdateDeviceStatus = "UpdateDeviceStatus" // UpdateDeviceStatusRequest generates a "aws/request.Request" representing the // client's request for the UpdateDeviceStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9215,8 +9219,8 @@ const opUpdateGroup = "UpdateGroup" // UpdateGroupRequest generates a "aws/request.Request" representing the // client's request for the UpdateGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9311,8 +9315,8 @@ const opUpdateIdentityProvider = "UpdateIdentityProvider" // UpdateIdentityProviderRequest generates a "aws/request.Request" representing the // client's request for the UpdateIdentityProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9408,8 +9412,8 @@ const opUpdateResourceServer = "UpdateResourceServer" // UpdateResourceServerRequest generates a "aws/request.Request" representing the // client's request for the UpdateResourceServer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9502,8 +9506,8 @@ const opUpdateUserAttributes = "UpdateUserAttributes" // UpdateUserAttributesRequest generates a "aws/request.Request" representing the // client's request for the UpdateUserAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -9648,8 +9652,8 @@ const opUpdateUserPool = "UpdateUserPool" // UpdateUserPoolRequest generates a "aws/request.Request" representing the // client's request for the UpdateUserPool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9688,7 +9692,9 @@ func (c *CognitoIdentityProvider) UpdateUserPoolRequest(input *UpdateUserPoolInp // UpdateUserPool API operation for Amazon Cognito Identity Provider. // -// Updates the specified user pool with the specified attributes. +// Updates the specified user pool with the specified attributes. If you don't +// provide a value for an attribute, it will be set to the default value. You +// can get a list of the current user pool settings with . // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9766,8 +9772,8 @@ const opUpdateUserPoolClient = "UpdateUserPoolClient" // UpdateUserPoolClientRequest generates a "aws/request.Request" representing the // client's request for the UpdateUserPoolClient operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9806,8 +9812,10 @@ func (c *CognitoIdentityProvider) UpdateUserPoolClientRequest(input *UpdateUserP // UpdateUserPoolClient API operation for Amazon Cognito Identity Provider. // -// Allows the developer to update the specified user pool client and password -// policy. +// Updates the specified user pool app client with the specified attributes. +// If you don't provide a value for an attribute, it will be set to the default +// value. You can get a list of the current user pool app client settings with +// . // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9870,8 +9878,8 @@ const opVerifySoftwareToken = "VerifySoftwareToken" // VerifySoftwareTokenRequest generates a "aws/request.Request" representing the // client's request for the VerifySoftwareToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9911,7 +9919,8 @@ func (c *CognitoIdentityProvider) VerifySoftwareTokenRequest(input *VerifySoftwa // VerifySoftwareToken API operation for Amazon Cognito Identity Provider. // // Use this API to register a user's entered TOTP code and mark the user's software -// token MFA status as "verified" if successful, +// token MFA status as "verified" if successful. 
The request takes an access +// token or a session string, but not both. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9992,8 +10001,8 @@ const opVerifyUserAttribute = "VerifyUserAttribute" // VerifyUserAttributeRequest generates a "aws/request.Request" representing the // client's request for the VerifyUserAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11485,10 +11494,10 @@ type AdminGetUserOutput struct { UserAttributes []*AttributeType `type:"list"` // The date the user was created. - UserCreateDate *time.Time `type:"timestamp" timestampFormat:"unix"` + UserCreateDate *time.Time `type:"timestamp"` // The date the user was last modified. - UserLastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + UserLastModifiedDate *time.Time `type:"timestamp"` // The list of the user's MFA settings. UserMFASettingList []*string `type:"list"` @@ -11751,6 +11760,13 @@ type AdminInitiateAuthOutput struct { // is returned to you in the AdminInitiateAuth response if you need to pass // another challenge. // + // * MFA_SETUP: If MFA is required, users who do not have at least one of + // the MFA methods set up are presented with an MFA_SETUP challenge. The + // user must set up at least one MFA type to continue to authenticate. + // + // * SELECT_MFA_TYPE: Selects the MFA type. Valid MFA options are SMS_MFA + // for text SMS MFA, and SOFTWARE_TOKEN_MFA for TOTP software token MFA. + // // * SMS_MFA: Next challenge is to supply an SMS_MFA_CODE, delivered via // SMS. // @@ -13467,7 +13483,7 @@ type AuthEventType struct { ChallengeResponses []*ChallengeResponseType `type:"list"` // The creation date - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // The user context data captured at the time of an event request. It provides // additional information about the client from which event the request is received. @@ -13555,7 +13571,7 @@ type AuthenticationResultType struct { // The access token. AccessToken *string `type:"string"` - // The expiration period of the authentication result. + // The expiration period of the authentication result in seconds. ExpiresIn *int64 `type:"integer"` // The ID token. @@ -14851,7 +14867,22 @@ type CreateUserPoolClientInput struct { // user pool. AnalyticsConfiguration *AnalyticsConfigurationType `type:"structure"` - // A list of allowed callback URLs for the identity providers. + // A list of allowed redirect (callback) URLs for the identity providers. + // + // A redirect URI must: + // + // * Be an absolute URI. + // + // * Be registered with the authorization server. + // + // * Not include a fragment component. + // + // See OAuth 2.0 - Redirection Endpoint (https://tools.ietf.org/html/rfc6749#section-3.1.2). + // + // Amazon Cognito requires HTTPS over HTTP except for http://localhost for testing + // purposes only. + // + // App callback URLs such as myapp://example are also supported. CallbackURLs []*string `type:"list"` // The client name for the user pool client you would like to create. 
@@ -14860,6 +14891,21 @@ type CreateUserPoolClientInput struct { ClientName *string `min:"1" type:"string" required:"true"` // The default redirect URI. Must be in the CallbackURLs list. + // + // A redirect URI must: + // + // * Be an absolute URI. + // + // * Be registered with the authorization server. + // + // * Not include a fragment component. + // + // See OAuth 2.0 - Redirection Endpoint (https://tools.ietf.org/html/rfc6749#section-3.1.2). + // + // Amazon Cognito requires HTTPS over HTTP except for http://localhost for testing + // purposes only. + // + // App callback URLs such as myapp://example are also supported. DefaultRedirectURI *string `min:"1" type:"string"` // The explicit authentication flows. @@ -15049,6 +15095,17 @@ func (s *CreateUserPoolClientOutput) SetUserPoolClient(v *UserPoolClientType) *C type CreateUserPoolDomainInput struct { _ struct{} `type:"structure"` + // The configuration for a custom domain that hosts the sign-up and sign-in + // webpages for your application. + // + // Provide this parameter only if you want to use own custom domain for your + // user pool. Otherwise, you can exclude this parameter and use the Amazon Cognito + // hosted domain instead. + // + // For more information about the hosted domain and custom domains, see Configuring + // a User Pool Domain (http://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-assign-domain.html). + CustomDomainConfig *CustomDomainConfigType `type:"structure"` + // The domain string. // // Domain is a required field @@ -15085,6 +15142,11 @@ func (s *CreateUserPoolDomainInput) Validate() error { if s.UserPoolId != nil && len(*s.UserPoolId) < 1 { invalidParams.Add(request.NewErrParamMinLen("UserPoolId", 1)) } + if s.CustomDomainConfig != nil { + if err := s.CustomDomainConfig.Validate(); err != nil { + invalidParams.AddNested("CustomDomainConfig", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -15092,6 +15154,12 @@ func (s *CreateUserPoolDomainInput) Validate() error { return nil } +// SetCustomDomainConfig sets the CustomDomainConfig field's value. +func (s *CreateUserPoolDomainInput) SetCustomDomainConfig(v *CustomDomainConfigType) *CreateUserPoolDomainInput { + s.CustomDomainConfig = v + return s +} + // SetDomain sets the Domain field's value. func (s *CreateUserPoolDomainInput) SetDomain(v string) *CreateUserPoolDomainInput { s.Domain = &v @@ -15106,6 +15174,10 @@ func (s *CreateUserPoolDomainInput) SetUserPoolId(v string) *CreateUserPoolDomai type CreateUserPoolDomainOutput struct { _ struct{} `type:"structure"` + + // The Amazon CloudFront endpoint that you use as the target of the alias that + // you set up with your Domain Name Service (DNS) provider. + CloudFrontDomain *string `min:"1" type:"string"` } // String returns the string representation @@ -15118,6 +15190,12 @@ func (s CreateUserPoolDomainOutput) GoString() string { return s.String() } +// SetCloudFrontDomain sets the CloudFrontDomain field's value. +func (s *CreateUserPoolDomainOutput) SetCloudFrontDomain(v string) *CreateUserPoolDomainOutput { + s.CloudFrontDomain = &v + return s +} + // Represents the request to create a user pool. type CreateUserPoolInput struct { _ struct{} `type:"structure"` @@ -15422,6 +15500,50 @@ func (s *CreateUserPoolOutput) SetUserPool(v *UserPoolType) *CreateUserPoolOutpu return s } +// The configuration for a custom domain that hosts the sign-up and sign-in +// webpages for your application. 
+type CustomDomainConfigType struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of an AWS Certificate Manager SSL certificate. + // You use this certificate for the subdomain of your custom domain. + // + // CertificateArn is a required field + CertificateArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s CustomDomainConfigType) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CustomDomainConfigType) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CustomDomainConfigType) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CustomDomainConfigType"} + if s.CertificateArn == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateArn")) + } + if s.CertificateArn != nil && len(*s.CertificateArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("CertificateArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *CustomDomainConfigType) SetCertificateArn(v string) *CustomDomainConfigType { + s.CertificateArn = &v + return s +} + type DeleteGroupInput struct { _ struct{} `type:"structure"` @@ -16580,16 +16702,16 @@ type DeviceType struct { DeviceAttributes []*AttributeType `type:"list"` // The creation date of the device. - DeviceCreateDate *time.Time `type:"timestamp" timestampFormat:"unix"` + DeviceCreateDate *time.Time `type:"timestamp"` // The device key. DeviceKey *string `min:"1" type:"string"` // The date in which the device was last authenticated. - DeviceLastAuthenticatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + DeviceLastAuthenticatedDate *time.Time `type:"timestamp"` // The last modified date of the device. - DeviceLastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + DeviceLastModifiedDate *time.Time `type:"timestamp"` } // String returns the string representation @@ -16640,7 +16762,11 @@ type DomainDescriptionType struct { AWSAccountId *string `type:"string"` // The ARN of the CloudFront distribution. - CloudFrontDistribution *string `min:"20" type:"string"` + CloudFrontDistribution *string `type:"string"` + + // The configuration for a custom domain that hosts the sign-up and sign-in + // webpages for your application. + CustomDomainConfig *CustomDomainConfigType `type:"structure"` // The domain string. Domain *string `min:"1" type:"string"` @@ -16680,6 +16806,12 @@ func (s *DomainDescriptionType) SetCloudFrontDistribution(v string) *DomainDescr return s } +// SetCustomDomainConfig sets the CustomDomainConfig field's value. +func (s *DomainDescriptionType) SetCustomDomainConfig(v *CustomDomainConfigType) *DomainDescriptionType { + s.CustomDomainConfig = v + return s +} + // SetDomain sets the Domain field's value. func (s *DomainDescriptionType) SetDomain(v string) *DomainDescriptionType { s.Domain = &v @@ -16821,7 +16953,7 @@ type EventFeedbackType struct { _ struct{} `type:"structure"` // The event feedback date. - FeedbackDate *time.Time `type:"timestamp" timestampFormat:"unix"` + FeedbackDate *time.Time `type:"timestamp"` // The event feedback value. // @@ -17875,7 +18007,7 @@ type GroupType struct { _ struct{} `type:"structure"` // The date the group was created. 
- CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // A string containing the description of the group. Description *string `type:"string"` @@ -17884,7 +18016,7 @@ type GroupType struct { GroupName *string `min:"1" type:"string"` // The date the group was last modified. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // A nonnegative integer value that specifies the precedence of this group relative // to the other groups that a user can belong to in the user pool. If a user @@ -18003,13 +18135,13 @@ type IdentityProviderType struct { AttributeMapping map[string]*string `type:"map"` // The date the identity provider was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // A list of identity provider identifiers. IdpIdentifiers []*string `type:"list"` // The date the identity provider was last modified. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // The identity provider details, such as MetadataURL and MetadataFile. ProviderDetails map[string]*string `type:"map"` @@ -19288,9 +19420,9 @@ type ListUsersInput struct { // // * preferred_username // - // * cognito:user_status (called Enabled in the Console) (case-sensitive) + // * cognito:user_status (called Status in the Console) (case-insensitive) // - // * status (case-insensitive) + // * status (called Enabled in the Console) (case-sensitive) // // * sub // @@ -19837,10 +19969,10 @@ type ProviderDescription struct { _ struct{} `type:"structure"` // The date the provider was added to the user pool. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // The date the provider was last modified. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // The identity provider name. ProviderName *string `min:"1" type:"string"` @@ -20354,7 +20486,7 @@ type RiskConfigurationType struct { CompromisedCredentialsRiskConfiguration *CompromisedCredentialsRiskConfigurationType `type:"structure"` // The last modified date. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // The configuration to override the risk decision. RiskExceptionConfiguration *RiskExceptionConfigurationType `type:"structure"` @@ -20488,7 +20620,7 @@ type SchemaAttributeType struct { // Specifies whether the attribute type is developer only. DeveloperOnlyAttribute *bool `type:"boolean"` - // Specifies whether the attribute can be changed once it has been created. + // Specifies whether the value of the attribute can be changed. Mutable *bool `type:"boolean"` // A schema attribute of the name type. @@ -21626,13 +21758,13 @@ type UICustomizationType struct { ClientId *string `min:"1" type:"string"` // The creation date for the UI customization. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // The logo image for the UI customization. ImageUrl *string `type:"string"` // The last-modified date for the UI customization. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // The user pool ID for the user pool. 
UserPoolId *string `min:"1" type:"string"` @@ -22344,7 +22476,22 @@ type UpdateUserPoolClientInput struct { // user pool. AnalyticsConfiguration *AnalyticsConfigurationType `type:"structure"` - // A list of allowed callback URLs for the identity providers. + // A list of allowed redirect (callback) URLs for the identity providers. + // + // A redirect URI must: + // + // * Be an absolute URI. + // + // * Be registered with the authorization server. + // + // * Not include a fragment component. + // + // See OAuth 2.0 - Redirection Endpoint (https://tools.ietf.org/html/rfc6749#section-3.1.2). + // + // Amazon Cognito requires HTTPS over HTTP except for http://localhost for testing + // purposes only. + // + // App callback URLs such as myapp://example are also supported. CallbackURLs []*string `type:"list"` // The ID of the client associated with the user pool. @@ -22356,6 +22503,21 @@ type UpdateUserPoolClientInput struct { ClientName *string `min:"1" type:"string"` // The default redirect URI. Must be in the CallbackURLs list. + // + // A redirect URI must: + // + // * Be an absolute URI. + // + // * Be registered with the authorization server. + // + // * Not include a fragment component. + // + // See OAuth 2.0 - Redirection Endpoint (https://tools.ietf.org/html/rfc6749#section-3.1.2). + // + // Amazon Cognito requires HTTPS over HTTP except for http://localhost for testing + // purposes only. + // + // App callback URLs such as myapp://example are also supported. DefaultRedirectURI *string `min:"1" type:"string"` // Explicit authentication flows. @@ -22834,13 +22996,13 @@ type UserImportJobType struct { CloudWatchLogsRoleArn *string `min:"20" type:"string"` // The date when the user import job was completed. - CompletionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CompletionDate *time.Time `type:"timestamp"` // The message returned when the user import job is completed. CompletionMessage *string `min:"1" type:"string"` // The date the user import job was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // The number of users that could not be imported. FailedUsers *int64 `type:"long"` @@ -22861,7 +23023,7 @@ type UserImportJobType struct { SkippedUsers *int64 `type:"long"` // The date when the user import job was started. - StartDate *time.Time `type:"timestamp" timestampFormat:"unix"` + StartDate *time.Time `type:"timestamp"` // The status of the user import job. One of the following: // @@ -23084,7 +23246,22 @@ type UserPoolClientType struct { // The Amazon Pinpoint analytics configuration for the user pool client. AnalyticsConfiguration *AnalyticsConfigurationType `type:"structure"` - // A list of allowed callback URLs for the identity providers. + // A list of allowed redirect (callback) URLs for the identity providers. + // + // A redirect URI must: + // + // * Be an absolute URI. + // + // * Be registered with the authorization server. + // + // * Not include a fragment component. + // + // See OAuth 2.0 - Redirection Endpoint (https://tools.ietf.org/html/rfc6749#section-3.1.2). + // + // Amazon Cognito requires HTTPS over HTTP except for http://localhost for testing + // purposes only. + // + // App callback URLs such as myapp://example are also supported. CallbackURLs []*string `type:"list"` // The ID of the client associated with the user pool. 
@@ -23097,16 +23274,31 @@ type UserPoolClientType struct { ClientSecret *string `min:"1" type:"string"` // The date the user pool client was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // The default redirect URI. Must be in the CallbackURLs list. + // + // A redirect URI must: + // + // * Be an absolute URI. + // + // * Be registered with the authorization server. + // + // * Not include a fragment component. + // + // See OAuth 2.0 - Redirection Endpoint (https://tools.ietf.org/html/rfc6749#section-3.1.2). + // + // Amazon Cognito requires HTTPS over HTTP except for http://localhost for testing + // purposes only. + // + // App callback URLs such as myapp://example are also supported. DefaultRedirectURI *string `min:"1" type:"string"` // The explicit authentication flows. ExplicitAuthFlows []*string `type:"list"` // The date the user pool client was last modified. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // A list of allowed logout URLs for the identity providers. LogoutURLs []*string `type:"list"` @@ -23252,7 +23444,7 @@ type UserPoolDescriptionType struct { _ struct{} `type:"structure"` // The date the user pool description was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // The ID in a user pool description. Id *string `min:"1" type:"string"` @@ -23261,7 +23453,7 @@ type UserPoolDescriptionType struct { LambdaConfig *LambdaConfigType `type:"structure"` // The date the user pool description was last modified. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // The name in a user pool description. Name *string `min:"1" type:"string"` @@ -23365,11 +23557,16 @@ type UserPoolType struct { // Specifies the attributes that are aliased in a user pool. AliasAttributes []*string `type:"list"` + // The Amazon Resource Name (ARN) for the user pool. + Arn *string `min:"20" type:"string"` + // Specifies the attributes that are auto-verified in a user pool. AutoVerifiedAttributes []*string `type:"list"` // The date the user pool was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` + + CustomDomain *string `min:"1" type:"string"` // The device configuration. DeviceConfiguration *DeviceConfigurationType `type:"structure"` @@ -23395,11 +23592,11 @@ type UserPoolType struct { // The ID of the user pool. Id *string `min:"1" type:"string"` - // The AWS Lambda triggers associated with tue user pool. + // The AWS Lambda triggers associated with the user pool. LambdaConfig *LambdaConfigType `type:"structure"` // The date the user pool was last modified. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // Can be one of the following values: // @@ -23473,6 +23670,12 @@ func (s *UserPoolType) SetAliasAttributes(v []*string) *UserPoolType { return s } +// SetArn sets the Arn field's value. +func (s *UserPoolType) SetArn(v string) *UserPoolType { + s.Arn = &v + return s +} + // SetAutoVerifiedAttributes sets the AutoVerifiedAttributes field's value. 
func (s *UserPoolType) SetAutoVerifiedAttributes(v []*string) *UserPoolType { s.AutoVerifiedAttributes = v @@ -23485,6 +23688,12 @@ func (s *UserPoolType) SetCreationDate(v time.Time) *UserPoolType { return s } +// SetCustomDomain sets the CustomDomain field's value. +func (s *UserPoolType) SetCustomDomain(v string) *UserPoolType { + s.CustomDomain = &v + return s +} + // SetDeviceConfiguration sets the DeviceConfiguration field's value. func (s *UserPoolType) SetDeviceConfiguration(v *DeviceConfigurationType) *UserPoolType { s.DeviceConfiguration = v @@ -23637,10 +23846,10 @@ type UserType struct { MFAOptions []*MFAOptionType `type:"list"` // The creation date of the user. - UserCreateDate *time.Time `type:"timestamp" timestampFormat:"unix"` + UserCreateDate *time.Time `type:"timestamp"` // The last modified date of the user. - UserLastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + UserLastModifiedDate *time.Time `type:"timestamp"` // The user status. Can be one of the following: // @@ -24228,6 +24437,9 @@ const ( // IdentityProviderTypeTypeLoginWithAmazon is a IdentityProviderTypeType enum value IdentityProviderTypeTypeLoginWithAmazon = "LoginWithAmazon" + + // IdentityProviderTypeTypeOidc is a IdentityProviderTypeType enum value + IdentityProviderTypeTypeOidc = "OIDC" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/cognitoidentityprovider/service.go b/vendor/github.com/aws/aws-sdk-go/service/cognitoidentityprovider/service.go index 190d20711f8..68efbd80b6e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/cognitoidentityprovider/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/cognitoidentityprovider/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "cognito-idp" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "cognito-idp" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Cognito Identity Provider" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the CognitoIdentityProvider client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/configservice/api.go b/vendor/github.com/aws/aws-sdk-go/service/configservice/api.go index 7348923f3df..4d02ef3e884 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/configservice/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/configservice/api.go @@ -13,12 +13,276 @@ import ( "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" ) +const opBatchGetAggregateResourceConfig = "BatchGetAggregateResourceConfig" + +// BatchGetAggregateResourceConfigRequest generates a "aws/request.Request" representing the +// client's request for the BatchGetAggregateResourceConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See BatchGetAggregateResourceConfig for more information on using the BatchGetAggregateResourceConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchGetAggregateResourceConfigRequest method. +// req, resp := client.BatchGetAggregateResourceConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/BatchGetAggregateResourceConfig +func (c *ConfigService) BatchGetAggregateResourceConfigRequest(input *BatchGetAggregateResourceConfigInput) (req *request.Request, output *BatchGetAggregateResourceConfigOutput) { + op := &request.Operation{ + Name: opBatchGetAggregateResourceConfig, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchGetAggregateResourceConfigInput{} + } + + output = &BatchGetAggregateResourceConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchGetAggregateResourceConfig API operation for AWS Config. +// +// Returns the current configuration items for resources that are present in +// your AWS Config aggregator. The operation also returns a list of resources +// that are not processed in the current request. If there are no unprocessed +// resources, the operation returns an empty unprocessedResourceIdentifiers +// list. +// +// The API does not return results for deleted resources. +// +// The API does not return tags and relationships. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation BatchGetAggregateResourceConfig for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. +// +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/BatchGetAggregateResourceConfig +func (c *ConfigService) BatchGetAggregateResourceConfig(input *BatchGetAggregateResourceConfigInput) (*BatchGetAggregateResourceConfigOutput, error) { + req, out := c.BatchGetAggregateResourceConfigRequest(input) + return out, req.Send() +} + +// BatchGetAggregateResourceConfigWithContext is the same as BatchGetAggregateResourceConfig with the addition of +// the ability to pass a context and additional request options. +// +// See BatchGetAggregateResourceConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) BatchGetAggregateResourceConfigWithContext(ctx aws.Context, input *BatchGetAggregateResourceConfigInput, opts ...request.Option) (*BatchGetAggregateResourceConfigOutput, error) { + req, out := c.BatchGetAggregateResourceConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opBatchGetResourceConfig = "BatchGetResourceConfig" + +// BatchGetResourceConfigRequest generates a "aws/request.Request" representing the +// client's request for the BatchGetResourceConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BatchGetResourceConfig for more information on using the BatchGetResourceConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchGetResourceConfigRequest method. +// req, resp := client.BatchGetResourceConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/BatchGetResourceConfig +func (c *ConfigService) BatchGetResourceConfigRequest(input *BatchGetResourceConfigInput) (req *request.Request, output *BatchGetResourceConfigOutput) { + op := &request.Operation{ + Name: opBatchGetResourceConfig, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchGetResourceConfigInput{} + } + + output = &BatchGetResourceConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchGetResourceConfig API operation for AWS Config. +// +// Returns the current configuration for one or more requested resources. The +// operation also returns a list of resources that are not processed in the +// current request. If there are no unprocessed resources, the operation returns +// an empty unprocessedResourceKeys list. +// +// The API does not return results for deleted resources. +// +// The API does not return any tags for the requested resources. This information +// is filtered out of the supplementaryConfiguration section of the API response. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation BatchGetResourceConfig for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. +// +// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" +// There are no configuration recorders available to provide the role needed +// to describe your resources. Create a configuration recorder. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/BatchGetResourceConfig +func (c *ConfigService) BatchGetResourceConfig(input *BatchGetResourceConfigInput) (*BatchGetResourceConfigOutput, error) { + req, out := c.BatchGetResourceConfigRequest(input) + return out, req.Send() +} + +// BatchGetResourceConfigWithContext is the same as BatchGetResourceConfig with the addition of +// the ability to pass a context and additional request options. +// +// See BatchGetResourceConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) BatchGetResourceConfigWithContext(ctx aws.Context, input *BatchGetResourceConfigInput, opts ...request.Option) (*BatchGetResourceConfigOutput, error) { + req, out := c.BatchGetResourceConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteAggregationAuthorization = "DeleteAggregationAuthorization" + +// DeleteAggregationAuthorizationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAggregationAuthorization operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteAggregationAuthorization for more information on using the DeleteAggregationAuthorization +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAggregationAuthorizationRequest method. +// req, resp := client.DeleteAggregationAuthorizationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeleteAggregationAuthorization +func (c *ConfigService) DeleteAggregationAuthorizationRequest(input *DeleteAggregationAuthorizationInput) (req *request.Request, output *DeleteAggregationAuthorizationOutput) { + op := &request.Operation{ + Name: opDeleteAggregationAuthorization, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteAggregationAuthorizationInput{} + } + + output = &DeleteAggregationAuthorizationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteAggregationAuthorization API operation for AWS Config. +// +// Deletes the authorization granted to the specified configuration aggregator +// account in a specified region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation DeleteAggregationAuthorization for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeleteAggregationAuthorization +func (c *ConfigService) DeleteAggregationAuthorization(input *DeleteAggregationAuthorizationInput) (*DeleteAggregationAuthorizationOutput, error) { + req, out := c.DeleteAggregationAuthorizationRequest(input) + return out, req.Send() +} + +// DeleteAggregationAuthorizationWithContext is the same as DeleteAggregationAuthorization with the addition of +// the ability to pass a context and additional request options. 
+// +// See DeleteAggregationAuthorization for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) DeleteAggregationAuthorizationWithContext(ctx aws.Context, input *DeleteAggregationAuthorizationInput, opts ...request.Option) (*DeleteAggregationAuthorizationOutput, error) { + req, out := c.DeleteAggregationAuthorizationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteConfigRule = "DeleteConfigRule" // DeleteConfigRuleRequest generates a "aws/request.Request" representing the // client's request for the DeleteConfigRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -105,12 +369,94 @@ func (c *ConfigService) DeleteConfigRuleWithContext(ctx aws.Context, input *Dele return out, req.Send() } +const opDeleteConfigurationAggregator = "DeleteConfigurationAggregator" + +// DeleteConfigurationAggregatorRequest generates a "aws/request.Request" representing the +// client's request for the DeleteConfigurationAggregator operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteConfigurationAggregator for more information on using the DeleteConfigurationAggregator +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteConfigurationAggregatorRequest method. +// req, resp := client.DeleteConfigurationAggregatorRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeleteConfigurationAggregator +func (c *ConfigService) DeleteConfigurationAggregatorRequest(input *DeleteConfigurationAggregatorInput) (req *request.Request, output *DeleteConfigurationAggregatorOutput) { + op := &request.Operation{ + Name: opDeleteConfigurationAggregator, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteConfigurationAggregatorInput{} + } + + output = &DeleteConfigurationAggregatorOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteConfigurationAggregator API operation for AWS Config. +// +// Deletes the specified configuration aggregator and the aggregated data associated +// with the aggregator. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation DeleteConfigurationAggregator for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeleteConfigurationAggregator +func (c *ConfigService) DeleteConfigurationAggregator(input *DeleteConfigurationAggregatorInput) (*DeleteConfigurationAggregatorOutput, error) { + req, out := c.DeleteConfigurationAggregatorRequest(input) + return out, req.Send() +} + +// DeleteConfigurationAggregatorWithContext is the same as DeleteConfigurationAggregator with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteConfigurationAggregator for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) DeleteConfigurationAggregatorWithContext(ctx aws.Context, input *DeleteConfigurationAggregatorInput, opts ...request.Option) (*DeleteConfigurationAggregatorOutput, error) { + req, out := c.DeleteConfigurationAggregatorRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteConfigurationRecorder = "DeleteConfigurationRecorder" // DeleteConfigurationRecorderRequest generates a "aws/request.Request" representing the // client's request for the DeleteConfigurationRecorder operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -199,8 +545,8 @@ const opDeleteDeliveryChannel = "DeleteDeliveryChannel" // DeleteDeliveryChannelRequest generates a "aws/request.Request" representing the // client's request for the DeleteDeliveryChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -287,8 +633,8 @@ const opDeleteEvaluationResults = "DeleteEvaluationResults" // DeleteEvaluationResultsRequest generates a "aws/request.Request" representing the // client's request for the DeleteEvaluationResults operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -327,10 +673,10 @@ func (c *ConfigService) DeleteEvaluationResultsRequest(input *DeleteEvaluationRe // DeleteEvaluationResults API operation for AWS Config. // -// Deletes the evaluation results for the specified Config rule. You can specify -// one Config rule per request. After you delete the evaluation results, you -// can call the StartConfigRulesEvaluation API to start evaluating your AWS -// resources against the rule. +// Deletes the evaluation results for the specified AWS Config rule. You can +// specify one AWS Config rule per request. After you delete the evaluation +// results, you can call the StartConfigRulesEvaluation API to start evaluating +// your AWS resources against the rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -370,2700 +716,6424 @@ func (c *ConfigService) DeleteEvaluationResultsWithContext(ctx aws.Context, inpu return out, req.Send() } -const opDeliverConfigSnapshot = "DeliverConfigSnapshot" +const opDeletePendingAggregationRequest = "DeletePendingAggregationRequest" -// DeliverConfigSnapshotRequest generates a "aws/request.Request" representing the -// client's request for the DeliverConfigSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeletePendingAggregationRequestRequest generates a "aws/request.Request" representing the +// client's request for the DeletePendingAggregationRequest operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeliverConfigSnapshot for more information on using the DeliverConfigSnapshot +// See DeletePendingAggregationRequest for more information on using the DeletePendingAggregationRequest // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeliverConfigSnapshotRequest method. -// req, resp := client.DeliverConfigSnapshotRequest(params) +// // Example sending a request using the DeletePendingAggregationRequestRequest method. 
+// req, resp := client.DeletePendingAggregationRequestRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeliverConfigSnapshot -func (c *ConfigService) DeliverConfigSnapshotRequest(input *DeliverConfigSnapshotInput) (req *request.Request, output *DeliverConfigSnapshotOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeletePendingAggregationRequest +func (c *ConfigService) DeletePendingAggregationRequestRequest(input *DeletePendingAggregationRequestInput) (req *request.Request, output *DeletePendingAggregationRequestOutput) { op := &request.Operation{ - Name: opDeliverConfigSnapshot, + Name: opDeletePendingAggregationRequest, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DeliverConfigSnapshotInput{} + input = &DeletePendingAggregationRequestInput{} } - output = &DeliverConfigSnapshotOutput{} + output = &DeletePendingAggregationRequestOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeliverConfigSnapshot API operation for AWS Config. -// -// Schedules delivery of a configuration snapshot to the Amazon S3 bucket in -// the specified delivery channel. After the delivery has started, AWS Config -// sends following notifications using an Amazon SNS topic that you have specified. -// -// * Notification of starting the delivery. +// DeletePendingAggregationRequest API operation for AWS Config. // -// * Notification of delivery completed, if the delivery was successfully -// completed. -// -// * Notification of delivery failure, if the delivery failed to complete. +// Deletes pending authorization requests for a specified aggregator account +// in a specified region. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation DeliverConfigSnapshot for usage and error information. +// API operation DeletePendingAggregationRequest for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchDeliveryChannelException "NoSuchDeliveryChannelException" -// You have specified a delivery channel that does not exist. -// -// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" -// There are no configuration recorders available to provide the role needed -// to describe your resources. Create a configuration recorder. -// -// * ErrCodeNoRunningConfigurationRecorderException "NoRunningConfigurationRecorderException" -// There is no configuration recorder running. +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeliverConfigSnapshot -func (c *ConfigService) DeliverConfigSnapshot(input *DeliverConfigSnapshotInput) (*DeliverConfigSnapshotOutput, error) { - req, out := c.DeliverConfigSnapshotRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeletePendingAggregationRequest +func (c *ConfigService) DeletePendingAggregationRequest(input *DeletePendingAggregationRequestInput) (*DeletePendingAggregationRequestOutput, error) { + req, out := c.DeletePendingAggregationRequestRequest(input) return out, req.Send() } -// DeliverConfigSnapshotWithContext is the same as DeliverConfigSnapshot with the addition of +// DeletePendingAggregationRequestWithContext is the same as DeletePendingAggregationRequest with the addition of // the ability to pass a context and additional request options. // -// See DeliverConfigSnapshot for details on how to use this API operation. +// See DeletePendingAggregationRequest for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DeliverConfigSnapshotWithContext(ctx aws.Context, input *DeliverConfigSnapshotInput, opts ...request.Option) (*DeliverConfigSnapshotOutput, error) { - req, out := c.DeliverConfigSnapshotRequest(input) +func (c *ConfigService) DeletePendingAggregationRequestWithContext(ctx aws.Context, input *DeletePendingAggregationRequestInput, opts ...request.Option) (*DeletePendingAggregationRequestOutput, error) { + req, out := c.DeletePendingAggregationRequestRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeComplianceByConfigRule = "DescribeComplianceByConfigRule" +const opDeleteRetentionConfiguration = "DeleteRetentionConfiguration" -// DescribeComplianceByConfigRuleRequest generates a "aws/request.Request" representing the -// client's request for the DescribeComplianceByConfigRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteRetentionConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRetentionConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeComplianceByConfigRule for more information on using the DescribeComplianceByConfigRule +// See DeleteRetentionConfiguration for more information on using the DeleteRetentionConfiguration // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeComplianceByConfigRuleRequest method. -// req, resp := client.DescribeComplianceByConfigRuleRequest(params) +// // Example sending a request using the DeleteRetentionConfigurationRequest method. 
+// req, resp := client.DeleteRetentionConfigurationRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeComplianceByConfigRule -func (c *ConfigService) DescribeComplianceByConfigRuleRequest(input *DescribeComplianceByConfigRuleInput) (req *request.Request, output *DescribeComplianceByConfigRuleOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeleteRetentionConfiguration +func (c *ConfigService) DeleteRetentionConfigurationRequest(input *DeleteRetentionConfigurationInput) (req *request.Request, output *DeleteRetentionConfigurationOutput) { op := &request.Operation{ - Name: opDescribeComplianceByConfigRule, + Name: opDeleteRetentionConfiguration, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeComplianceByConfigRuleInput{} + input = &DeleteRetentionConfigurationInput{} } - output = &DescribeComplianceByConfigRuleOutput{} + output = &DeleteRetentionConfigurationOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DescribeComplianceByConfigRule API operation for AWS Config. -// -// Indicates whether the specified AWS Config rules are compliant. If a rule -// is noncompliant, this action returns the number of AWS resources that do -// not comply with the rule. -// -// A rule is compliant if all of the evaluated resources comply with it, and -// it is noncompliant if any of these resources do not comply. -// -// If AWS Config has no current evaluation results for the rule, it returns -// INSUFFICIENT_DATA. This result might indicate one of the following conditions: -// -// * AWS Config has never invoked an evaluation for the rule. To check whether -// it has, use the DescribeConfigRuleEvaluationStatus action to get the LastSuccessfulInvocationTime -// and LastFailedInvocationTime. -// -// * The rule's AWS Lambda function is failing to send evaluation results -// to AWS Config. Verify that the role that you assigned to your configuration -// recorder includes the config:PutEvaluations permission. If the rule is -// a custom rule, verify that the AWS Lambda execution role includes the -// config:PutEvaluations permission. +// DeleteRetentionConfiguration API operation for AWS Config. // -// * The rule's AWS Lambda function has returned NOT_APPLICABLE for all evaluation -// results. This can occur if the resources were deleted or removed from -// the rule's scope. +// Deletes the retention configuration. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation DescribeComplianceByConfigRule for usage and error information. +// API operation DeleteRetentionConfiguration for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One or more of the specified parameters are invalid. Verify that your parameters // are valid and try again. // -// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" -// One or more AWS Config rules in the request are invalid. Verify that the -// rule names are correct and try again. 
-// -// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The specified next token is invalid. Specify the NextToken string that was -// returned in the previous response to get the next page of results. +// * ErrCodeNoSuchRetentionConfigurationException "NoSuchRetentionConfigurationException" +// You have specified a retention configuration that does not exist. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeComplianceByConfigRule -func (c *ConfigService) DescribeComplianceByConfigRule(input *DescribeComplianceByConfigRuleInput) (*DescribeComplianceByConfigRuleOutput, error) { - req, out := c.DescribeComplianceByConfigRuleRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeleteRetentionConfiguration +func (c *ConfigService) DeleteRetentionConfiguration(input *DeleteRetentionConfigurationInput) (*DeleteRetentionConfigurationOutput, error) { + req, out := c.DeleteRetentionConfigurationRequest(input) return out, req.Send() } -// DescribeComplianceByConfigRuleWithContext is the same as DescribeComplianceByConfigRule with the addition of +// DeleteRetentionConfigurationWithContext is the same as DeleteRetentionConfiguration with the addition of // the ability to pass a context and additional request options. // -// See DescribeComplianceByConfigRule for details on how to use this API operation. +// See DeleteRetentionConfiguration for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DescribeComplianceByConfigRuleWithContext(ctx aws.Context, input *DescribeComplianceByConfigRuleInput, opts ...request.Option) (*DescribeComplianceByConfigRuleOutput, error) { - req, out := c.DescribeComplianceByConfigRuleRequest(input) +func (c *ConfigService) DeleteRetentionConfigurationWithContext(ctx aws.Context, input *DeleteRetentionConfigurationInput, opts ...request.Option) (*DeleteRetentionConfigurationOutput, error) { + req, out := c.DeleteRetentionConfigurationRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeComplianceByResource = "DescribeComplianceByResource" +const opDeliverConfigSnapshot = "DeliverConfigSnapshot" -// DescribeComplianceByResourceRequest generates a "aws/request.Request" representing the -// client's request for the DescribeComplianceByResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeliverConfigSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the DeliverConfigSnapshot operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeComplianceByResource for more information on using the DescribeComplianceByResource +// See DeliverConfigSnapshot for more information on using the DeliverConfigSnapshot // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. 
Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeComplianceByResourceRequest method. -// req, resp := client.DescribeComplianceByResourceRequest(params) +// // Example sending a request using the DeliverConfigSnapshotRequest method. +// req, resp := client.DeliverConfigSnapshotRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeComplianceByResource -func (c *ConfigService) DescribeComplianceByResourceRequest(input *DescribeComplianceByResourceInput) (req *request.Request, output *DescribeComplianceByResourceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeliverConfigSnapshot +func (c *ConfigService) DeliverConfigSnapshotRequest(input *DeliverConfigSnapshotInput) (req *request.Request, output *DeliverConfigSnapshotOutput) { op := &request.Operation{ - Name: opDescribeComplianceByResource, + Name: opDeliverConfigSnapshot, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeComplianceByResourceInput{} + input = &DeliverConfigSnapshotInput{} } - output = &DescribeComplianceByResourceOutput{} + output = &DeliverConfigSnapshotOutput{} req = c.newRequest(op, input, output) return } -// DescribeComplianceByResource API operation for AWS Config. -// -// Indicates whether the specified AWS resources are compliant. If a resource -// is noncompliant, this action returns the number of AWS Config rules that -// the resource does not comply with. -// -// A resource is compliant if it complies with all the AWS Config rules that -// evaluate it. It is noncompliant if it does not comply with one or more of -// these rules. +// DeliverConfigSnapshot API operation for AWS Config. // -// If AWS Config has no current evaluation results for the resource, it returns -// INSUFFICIENT_DATA. This result might indicate one of the following conditions -// about the rules that evaluate the resource: +// Schedules delivery of a configuration snapshot to the Amazon S3 bucket in +// the specified delivery channel. After the delivery has started, AWS Config +// sends the following notifications using an Amazon SNS topic that you have +// specified. // -// * AWS Config has never invoked an evaluation for the rule. To check whether -// it has, use the DescribeConfigRuleEvaluationStatus action to get the LastSuccessfulInvocationTime -// and LastFailedInvocationTime. +// * Notification of the start of the delivery. // -// * The rule's AWS Lambda function is failing to send evaluation results -// to AWS Config. Verify that the role that you assigned to your configuration -// recorder includes the config:PutEvaluations permission. If the rule is -// a custom rule, verify that the AWS Lambda execution role includes the -// config:PutEvaluations permission. +// * Notification of the completion of the delivery, if the delivery was +// successfully completed. // -// * The rule's AWS Lambda function has returned NOT_APPLICABLE for all evaluation -// results. This can occur if the resources were deleted or removed from -// the rule's scope. +// * Notification of delivery failure, if the delivery failed. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
// // See the AWS API reference guide for AWS Config's -// API operation DescribeComplianceByResource for usage and error information. +// API operation DeliverConfigSnapshot for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" -// One or more of the specified parameters are invalid. Verify that your parameters -// are valid and try again. +// * ErrCodeNoSuchDeliveryChannelException "NoSuchDeliveryChannelException" +// You have specified a delivery channel that does not exist. // -// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The specified next token is invalid. Specify the NextToken string that was -// returned in the previous response to get the next page of results. +// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" +// There are no configuration recorders available to provide the role needed +// to describe your resources. Create a configuration recorder. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeComplianceByResource -func (c *ConfigService) DescribeComplianceByResource(input *DescribeComplianceByResourceInput) (*DescribeComplianceByResourceOutput, error) { - req, out := c.DescribeComplianceByResourceRequest(input) +// * ErrCodeNoRunningConfigurationRecorderException "NoRunningConfigurationRecorderException" +// There is no configuration recorder running. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DeliverConfigSnapshot +func (c *ConfigService) DeliverConfigSnapshot(input *DeliverConfigSnapshotInput) (*DeliverConfigSnapshotOutput, error) { + req, out := c.DeliverConfigSnapshotRequest(input) return out, req.Send() } -// DescribeComplianceByResourceWithContext is the same as DescribeComplianceByResource with the addition of +// DeliverConfigSnapshotWithContext is the same as DeliverConfigSnapshot with the addition of // the ability to pass a context and additional request options. // -// See DescribeComplianceByResource for details on how to use this API operation. +// See DeliverConfigSnapshot for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DescribeComplianceByResourceWithContext(ctx aws.Context, input *DescribeComplianceByResourceInput, opts ...request.Option) (*DescribeComplianceByResourceOutput, error) { - req, out := c.DescribeComplianceByResourceRequest(input) +func (c *ConfigService) DeliverConfigSnapshotWithContext(ctx aws.Context, input *DeliverConfigSnapshotInput, opts ...request.Option) (*DeliverConfigSnapshotOutput, error) { + req, out := c.DeliverConfigSnapshotRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeConfigRuleEvaluationStatus = "DescribeConfigRuleEvaluationStatus" +const opDescribeAggregateComplianceByConfigRules = "DescribeAggregateComplianceByConfigRules" -// DescribeConfigRuleEvaluationStatusRequest generates a "aws/request.Request" representing the -// client's request for the DescribeConfigRuleEvaluationStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// DescribeAggregateComplianceByConfigRulesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAggregateComplianceByConfigRules operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeConfigRuleEvaluationStatus for more information on using the DescribeConfigRuleEvaluationStatus +// See DescribeAggregateComplianceByConfigRules for more information on using the DescribeAggregateComplianceByConfigRules // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeConfigRuleEvaluationStatusRequest method. -// req, resp := client.DescribeConfigRuleEvaluationStatusRequest(params) +// // Example sending a request using the DescribeAggregateComplianceByConfigRulesRequest method. +// req, resp := client.DescribeAggregateComplianceByConfigRulesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigRuleEvaluationStatus -func (c *ConfigService) DescribeConfigRuleEvaluationStatusRequest(input *DescribeConfigRuleEvaluationStatusInput) (req *request.Request, output *DescribeConfigRuleEvaluationStatusOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeAggregateComplianceByConfigRules +func (c *ConfigService) DescribeAggregateComplianceByConfigRulesRequest(input *DescribeAggregateComplianceByConfigRulesInput) (req *request.Request, output *DescribeAggregateComplianceByConfigRulesOutput) { op := &request.Operation{ - Name: opDescribeConfigRuleEvaluationStatus, + Name: opDescribeAggregateComplianceByConfigRules, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeConfigRuleEvaluationStatusInput{} + input = &DescribeAggregateComplianceByConfigRulesInput{} } - output = &DescribeConfigRuleEvaluationStatusOutput{} + output = &DescribeAggregateComplianceByConfigRulesOutput{} req = c.newRequest(op, input, output) return } -// DescribeConfigRuleEvaluationStatus API operation for AWS Config. +// DescribeAggregateComplianceByConfigRules API operation for AWS Config. // -// Returns status information for each of your AWS managed Config rules. The -// status includes information such as the last time AWS Config invoked the -// rule, the last time AWS Config failed to invoke the rule, and the related -// error for the last failure. +// Returns a list of compliant and noncompliant rules with the number of resources +// for compliant and noncompliant rules. +// +// The results can return an empty result page, but if you have a nextToken, +// the results are displayed on the next page. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation DescribeConfigRuleEvaluationStatus for usage and error information. +// API operation DescribeAggregateComplianceByConfigRules for usage and error information. 
// // Returned Error Codes: -// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" -// One or more AWS Config rules in the request are invalid. Verify that the -// rule names are correct and try again. +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. // -// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" -// One or more of the specified parameters are invalid. Verify that your parameters -// are valid and try again. +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. // // * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The specified next token is invalid. Specify the NextToken string that was +// The specified next token is invalid. Specify the nextToken string that was // returned in the previous response to get the next page of results. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigRuleEvaluationStatus -func (c *ConfigService) DescribeConfigRuleEvaluationStatus(input *DescribeConfigRuleEvaluationStatusInput) (*DescribeConfigRuleEvaluationStatusOutput, error) { - req, out := c.DescribeConfigRuleEvaluationStatusRequest(input) +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeAggregateComplianceByConfigRules +func (c *ConfigService) DescribeAggregateComplianceByConfigRules(input *DescribeAggregateComplianceByConfigRulesInput) (*DescribeAggregateComplianceByConfigRulesOutput, error) { + req, out := c.DescribeAggregateComplianceByConfigRulesRequest(input) return out, req.Send() } -// DescribeConfigRuleEvaluationStatusWithContext is the same as DescribeConfigRuleEvaluationStatus with the addition of +// DescribeAggregateComplianceByConfigRulesWithContext is the same as DescribeAggregateComplianceByConfigRules with the addition of // the ability to pass a context and additional request options. // -// See DescribeConfigRuleEvaluationStatus for details on how to use this API operation. +// See DescribeAggregateComplianceByConfigRules for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DescribeConfigRuleEvaluationStatusWithContext(ctx aws.Context, input *DescribeConfigRuleEvaluationStatusInput, opts ...request.Option) (*DescribeConfigRuleEvaluationStatusOutput, error) { - req, out := c.DescribeConfigRuleEvaluationStatusRequest(input) +func (c *ConfigService) DescribeAggregateComplianceByConfigRulesWithContext(ctx aws.Context, input *DescribeAggregateComplianceByConfigRulesInput, opts ...request.Option) (*DescribeAggregateComplianceByConfigRulesOutput, error) { + req, out := c.DescribeAggregateComplianceByConfigRulesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeConfigRules = "DescribeConfigRules" +const opDescribeAggregationAuthorizations = "DescribeAggregationAuthorizations" -// DescribeConfigRulesRequest generates a "aws/request.Request" representing the -// client's request for the DescribeConfigRules operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeAggregationAuthorizationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAggregationAuthorizations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeConfigRules for more information on using the DescribeConfigRules +// See DescribeAggregationAuthorizations for more information on using the DescribeAggregationAuthorizations // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeConfigRulesRequest method. -// req, resp := client.DescribeConfigRulesRequest(params) +// // Example sending a request using the DescribeAggregationAuthorizationsRequest method. +// req, resp := client.DescribeAggregationAuthorizationsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigRules -func (c *ConfigService) DescribeConfigRulesRequest(input *DescribeConfigRulesInput) (req *request.Request, output *DescribeConfigRulesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeAggregationAuthorizations +func (c *ConfigService) DescribeAggregationAuthorizationsRequest(input *DescribeAggregationAuthorizationsInput) (req *request.Request, output *DescribeAggregationAuthorizationsOutput) { op := &request.Operation{ - Name: opDescribeConfigRules, + Name: opDescribeAggregationAuthorizations, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeConfigRulesInput{} + input = &DescribeAggregationAuthorizationsInput{} } - output = &DescribeConfigRulesOutput{} + output = &DescribeAggregationAuthorizationsOutput{} req = c.newRequest(op, input, output) return } -// DescribeConfigRules API operation for AWS Config. +// DescribeAggregationAuthorizations API operation for AWS Config. // -// Returns details about your AWS Config rules. +// Returns a list of authorizations granted to various aggregator accounts and +// regions. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation DescribeConfigRules for usage and error information. +// API operation DescribeAggregationAuthorizations for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" -// One or more AWS Config rules in the request are invalid. Verify that the -// rule names are correct and try again. +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. // // * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The specified next token is invalid. Specify the NextToken string that was +// The specified next token is invalid. 
Specify the nextToken string that was // returned in the previous response to get the next page of results. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigRules -func (c *ConfigService) DescribeConfigRules(input *DescribeConfigRulesInput) (*DescribeConfigRulesOutput, error) { - req, out := c.DescribeConfigRulesRequest(input) +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeAggregationAuthorizations +func (c *ConfigService) DescribeAggregationAuthorizations(input *DescribeAggregationAuthorizationsInput) (*DescribeAggregationAuthorizationsOutput, error) { + req, out := c.DescribeAggregationAuthorizationsRequest(input) return out, req.Send() } -// DescribeConfigRulesWithContext is the same as DescribeConfigRules with the addition of +// DescribeAggregationAuthorizationsWithContext is the same as DescribeAggregationAuthorizations with the addition of // the ability to pass a context and additional request options. // -// See DescribeConfigRules for details on how to use this API operation. +// See DescribeAggregationAuthorizations for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DescribeConfigRulesWithContext(ctx aws.Context, input *DescribeConfigRulesInput, opts ...request.Option) (*DescribeConfigRulesOutput, error) { - req, out := c.DescribeConfigRulesRequest(input) +func (c *ConfigService) DescribeAggregationAuthorizationsWithContext(ctx aws.Context, input *DescribeAggregationAuthorizationsInput, opts ...request.Option) (*DescribeAggregationAuthorizationsOutput, error) { + req, out := c.DescribeAggregationAuthorizationsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeConfigurationRecorderStatus = "DescribeConfigurationRecorderStatus" +const opDescribeComplianceByConfigRule = "DescribeComplianceByConfigRule" -// DescribeConfigurationRecorderStatusRequest generates a "aws/request.Request" representing the -// client's request for the DescribeConfigurationRecorderStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeComplianceByConfigRuleRequest generates a "aws/request.Request" representing the +// client's request for the DescribeComplianceByConfigRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeConfigurationRecorderStatus for more information on using the DescribeConfigurationRecorderStatus +// See DescribeComplianceByConfigRule for more information on using the DescribeComplianceByConfigRule // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeConfigurationRecorderStatusRequest method. 
-// req, resp := client.DescribeConfigurationRecorderStatusRequest(params) +// // Example sending a request using the DescribeComplianceByConfigRuleRequest method. +// req, resp := client.DescribeComplianceByConfigRuleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationRecorderStatus -func (c *ConfigService) DescribeConfigurationRecorderStatusRequest(input *DescribeConfigurationRecorderStatusInput) (req *request.Request, output *DescribeConfigurationRecorderStatusOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeComplianceByConfigRule +func (c *ConfigService) DescribeComplianceByConfigRuleRequest(input *DescribeComplianceByConfigRuleInput) (req *request.Request, output *DescribeComplianceByConfigRuleOutput) { op := &request.Operation{ - Name: opDescribeConfigurationRecorderStatus, + Name: opDescribeComplianceByConfigRule, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeConfigurationRecorderStatusInput{} + input = &DescribeComplianceByConfigRuleInput{} } - output = &DescribeConfigurationRecorderStatusOutput{} + output = &DescribeComplianceByConfigRuleOutput{} req = c.newRequest(op, input, output) return } -// DescribeConfigurationRecorderStatus API operation for AWS Config. -// -// Returns the current status of the specified configuration recorder. If a -// configuration recorder is not specified, this action returns the status of -// all configuration recorder associated with the account. +// DescribeComplianceByConfigRule API operation for AWS Config. // -// Currently, you can specify only one configuration recorder per region in -// your account. +// Indicates whether the specified AWS Config rules are compliant. If a rule +// is noncompliant, this action returns the number of AWS resources that do +// not comply with the rule. // -// Returns awserr.Error for service API and SDK errors. Use runtime type assertions -// with awserr.Error's Code and Message methods to get detailed information about -// the error. +// A rule is compliant if all of the evaluated resources comply with it. It +// is noncompliant if any of these resources do not comply. // -// See the AWS API reference guide for AWS Config's -// API operation DescribeConfigurationRecorderStatus for usage and error information. +// If AWS Config has no current evaluation results for the rule, it returns +// INSUFFICIENT_DATA. This result might indicate one of the following conditions: // -// Returned Error Codes: -// * ErrCodeNoSuchConfigurationRecorderException "NoSuchConfigurationRecorderException" -// You have specified a configuration recorder that does not exist. +// * AWS Config has never invoked an evaluation for the rule. To check whether +// it has, use the DescribeConfigRuleEvaluationStatus action to get the LastSuccessfulInvocationTime +// and LastFailedInvocationTime. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationRecorderStatus -func (c *ConfigService) DescribeConfigurationRecorderStatus(input *DescribeConfigurationRecorderStatusInput) (*DescribeConfigurationRecorderStatusOutput, error) { - req, out := c.DescribeConfigurationRecorderStatusRequest(input) +// * The rule's AWS Lambda function is failing to send evaluation results +// to AWS Config. 
Verify that the role you assigned to your configuration +// recorder includes the config:PutEvaluations permission. If the rule is +// a custom rule, verify that the AWS Lambda execution role includes the +// config:PutEvaluations permission. +// +// * The rule's AWS Lambda function has returned NOT_APPLICABLE for all evaluation +// results. This can occur if the resources were deleted or removed from +// the rule's scope. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation DescribeComplianceByConfigRule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" +// One or more AWS Config rules in the request are invalid. Verify that the +// rule names are correct and try again. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeComplianceByConfigRule +func (c *ConfigService) DescribeComplianceByConfigRule(input *DescribeComplianceByConfigRuleInput) (*DescribeComplianceByConfigRuleOutput, error) { + req, out := c.DescribeComplianceByConfigRuleRequest(input) return out, req.Send() } -// DescribeConfigurationRecorderStatusWithContext is the same as DescribeConfigurationRecorderStatus with the addition of +// DescribeComplianceByConfigRuleWithContext is the same as DescribeComplianceByConfigRule with the addition of // the ability to pass a context and additional request options. // -// See DescribeConfigurationRecorderStatus for details on how to use this API operation. +// See DescribeComplianceByConfigRule for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DescribeConfigurationRecorderStatusWithContext(ctx aws.Context, input *DescribeConfigurationRecorderStatusInput, opts ...request.Option) (*DescribeConfigurationRecorderStatusOutput, error) { - req, out := c.DescribeConfigurationRecorderStatusRequest(input) +func (c *ConfigService) DescribeComplianceByConfigRuleWithContext(ctx aws.Context, input *DescribeComplianceByConfigRuleInput, opts ...request.Option) (*DescribeComplianceByConfigRuleOutput, error) { + req, out := c.DescribeComplianceByConfigRuleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeConfigurationRecorders = "DescribeConfigurationRecorders" +const opDescribeComplianceByResource = "DescribeComplianceByResource" -// DescribeConfigurationRecordersRequest generates a "aws/request.Request" representing the -// client's request for the DescribeConfigurationRecorders operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
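The `XxxRequest` form documented above is the lower-level path: it returns a `*request.Request` that is only sent when `Send` is called, which is where custom headers or retry logic can be attached. A minimal sketch of that pattern for `DescribeComplianceByConfigRuleRequest`, assuming a client built as in the earlier sketch; the package name, helper name, and rule name are hypothetical:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/configservice"
)

// checkRuleCompliance demonstrates the Request/Send pattern described in the
// generated documentation. "my-config-rule" is a hypothetical rule name.
func checkRuleCompliance(svc *configservice.ConfigService) {
	req, resp := svc.DescribeComplianceByConfigRuleRequest(&configservice.DescribeComplianceByConfigRuleInput{
		ConfigRuleNames: []*string{aws.String("my-config-rule")},
	})

	// resp is not populated until Send returns without error.
	if err := req.Send(); err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(resp)
}
```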
+// DescribeComplianceByResourceRequest generates a "aws/request.Request" representing the +// client's request for the DescribeComplianceByResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeConfigurationRecorders for more information on using the DescribeConfigurationRecorders +// See DescribeComplianceByResource for more information on using the DescribeComplianceByResource // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeConfigurationRecordersRequest method. -// req, resp := client.DescribeConfigurationRecordersRequest(params) +// // Example sending a request using the DescribeComplianceByResourceRequest method. +// req, resp := client.DescribeComplianceByResourceRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationRecorders -func (c *ConfigService) DescribeConfigurationRecordersRequest(input *DescribeConfigurationRecordersInput) (req *request.Request, output *DescribeConfigurationRecordersOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeComplianceByResource +func (c *ConfigService) DescribeComplianceByResourceRequest(input *DescribeComplianceByResourceInput) (req *request.Request, output *DescribeComplianceByResourceOutput) { op := &request.Operation{ - Name: opDescribeConfigurationRecorders, + Name: opDescribeComplianceByResource, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeConfigurationRecordersInput{} + input = &DescribeComplianceByResourceInput{} } - output = &DescribeConfigurationRecordersOutput{} + output = &DescribeComplianceByResourceOutput{} req = c.newRequest(op, input, output) return } -// DescribeConfigurationRecorders API operation for AWS Config. +// DescribeComplianceByResource API operation for AWS Config. // -// Returns the details for the specified configuration recorders. If the configuration -// recorder is not specified, this action returns the details for all configuration -// recorders associated with the account. +// Indicates whether the specified AWS resources are compliant. If a resource +// is noncompliant, this action returns the number of AWS Config rules that +// the resource does not comply with. // -// Currently, you can specify only one configuration recorder per region in -// your account. +// A resource is compliant if it complies with all the AWS Config rules that +// evaluate it. It is noncompliant if it does not comply with one or more of +// these rules. +// +// If AWS Config has no current evaluation results for the resource, it returns +// INSUFFICIENT_DATA. This result might indicate one of the following conditions +// about the rules that evaluate the resource: +// +// * AWS Config has never invoked an evaluation for the rule. To check whether +// it has, use the DescribeConfigRuleEvaluationStatus action to get the LastSuccessfulInvocationTime +// and LastFailedInvocationTime. 
+// +// * The rule's AWS Lambda function is failing to send evaluation results +// to AWS Config. Verify that the role that you assigned to your configuration +// recorder includes the config:PutEvaluations permission. If the rule is +// a custom rule, verify that the AWS Lambda execution role includes the +// config:PutEvaluations permission. +// +// * The rule's AWS Lambda function has returned NOT_APPLICABLE for all evaluation +// results. This can occur if the resources were deleted or removed from +// the rule's scope. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation DescribeConfigurationRecorders for usage and error information. +// API operation DescribeComplianceByResource for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchConfigurationRecorderException "NoSuchConfigurationRecorderException" -// You have specified a configuration recorder that does not exist. +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationRecorders -func (c *ConfigService) DescribeConfigurationRecorders(input *DescribeConfigurationRecordersInput) (*DescribeConfigurationRecordersOutput, error) { - req, out := c.DescribeConfigurationRecordersRequest(input) +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeComplianceByResource +func (c *ConfigService) DescribeComplianceByResource(input *DescribeComplianceByResourceInput) (*DescribeComplianceByResourceOutput, error) { + req, out := c.DescribeComplianceByResourceRequest(input) return out, req.Send() } -// DescribeConfigurationRecordersWithContext is the same as DescribeConfigurationRecorders with the addition of +// DescribeComplianceByResourceWithContext is the same as DescribeComplianceByResource with the addition of // the ability to pass a context and additional request options. // -// See DescribeConfigurationRecorders for details on how to use this API operation. +// See DescribeComplianceByResource for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DescribeConfigurationRecordersWithContext(ctx aws.Context, input *DescribeConfigurationRecordersInput, opts ...request.Option) (*DescribeConfigurationRecordersOutput, error) { - req, out := c.DescribeConfigurationRecordersRequest(input) +func (c *ConfigService) DescribeComplianceByResourceWithContext(ctx aws.Context, input *DescribeComplianceByResourceInput, opts ...request.Option) (*DescribeComplianceByResourceOutput, error) { + req, out := c.DescribeComplianceByResourceRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opDescribeDeliveryChannelStatus = "DescribeDeliveryChannelStatus" +const opDescribeConfigRuleEvaluationStatus = "DescribeConfigRuleEvaluationStatus" -// DescribeDeliveryChannelStatusRequest generates a "aws/request.Request" representing the -// client's request for the DescribeDeliveryChannelStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeConfigRuleEvaluationStatusRequest generates a "aws/request.Request" representing the +// client's request for the DescribeConfigRuleEvaluationStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeDeliveryChannelStatus for more information on using the DescribeDeliveryChannelStatus +// See DescribeConfigRuleEvaluationStatus for more information on using the DescribeConfigRuleEvaluationStatus // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeDeliveryChannelStatusRequest method. -// req, resp := client.DescribeDeliveryChannelStatusRequest(params) +// // Example sending a request using the DescribeConfigRuleEvaluationStatusRequest method. +// req, resp := client.DescribeConfigRuleEvaluationStatusRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeDeliveryChannelStatus -func (c *ConfigService) DescribeDeliveryChannelStatusRequest(input *DescribeDeliveryChannelStatusInput) (req *request.Request, output *DescribeDeliveryChannelStatusOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigRuleEvaluationStatus +func (c *ConfigService) DescribeConfigRuleEvaluationStatusRequest(input *DescribeConfigRuleEvaluationStatusInput) (req *request.Request, output *DescribeConfigRuleEvaluationStatusOutput) { op := &request.Operation{ - Name: opDescribeDeliveryChannelStatus, + Name: opDescribeConfigRuleEvaluationStatus, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeDeliveryChannelStatusInput{} + input = &DescribeConfigRuleEvaluationStatusInput{} } - output = &DescribeDeliveryChannelStatusOutput{} + output = &DescribeConfigRuleEvaluationStatusOutput{} req = c.newRequest(op, input, output) return } -// DescribeDeliveryChannelStatus API operation for AWS Config. -// -// Returns the current status of the specified delivery channel. If a delivery -// channel is not specified, this action returns the current status of all delivery -// channels associated with the account. +// DescribeConfigRuleEvaluationStatus API operation for AWS Config. // -// Currently, you can specify only one delivery channel per region in your account. +// Returns status information for each of your AWS managed Config rules. The +// status includes information such as the last time AWS Config invoked the +// rule, the last time AWS Config failed to invoke the rule, and the related +// error for the last failure. // // Returns awserr.Error for service API and SDK errors. 
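For the resource-oriented counterpart completed just above, `DescribeComplianceByResource`, the input narrows the query by resource type (and optionally resource ID). A sketch under the same assumptions as the earlier examples; the input field name and the resource type value follow the AWS Config API shapes and are illustrative:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/configservice"
)

// ec2InstanceCompliance lists compliance summaries for recorded EC2 instances.
// The resource type string is an example; any recorded type can be used.
func ec2InstanceCompliance(svc *configservice.ConfigService) {
	out, err := svc.DescribeComplianceByResource(&configservice.DescribeComplianceByResourceInput{
		ResourceType: aws.String("AWS::EC2::Instance"),
	})
	if err != nil {
		fmt.Println("DescribeComplianceByResource failed:", err)
		return
	}
	fmt.Println(out)
}
```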
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation DescribeDeliveryChannelStatus for usage and error information. +// API operation DescribeConfigRuleEvaluationStatus for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchDeliveryChannelException "NoSuchDeliveryChannelException" -// You have specified a delivery channel that does not exist. +// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" +// One or more AWS Config rules in the request are invalid. Verify that the +// rule names are correct and try again. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeDeliveryChannelStatus -func (c *ConfigService) DescribeDeliveryChannelStatus(input *DescribeDeliveryChannelStatusInput) (*DescribeDeliveryChannelStatusOutput, error) { - req, out := c.DescribeDeliveryChannelStatusRequest(input) +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigRuleEvaluationStatus +func (c *ConfigService) DescribeConfigRuleEvaluationStatus(input *DescribeConfigRuleEvaluationStatusInput) (*DescribeConfigRuleEvaluationStatusOutput, error) { + req, out := c.DescribeConfigRuleEvaluationStatusRequest(input) return out, req.Send() } -// DescribeDeliveryChannelStatusWithContext is the same as DescribeDeliveryChannelStatus with the addition of +// DescribeConfigRuleEvaluationStatusWithContext is the same as DescribeConfigRuleEvaluationStatus with the addition of // the ability to pass a context and additional request options. // -// See DescribeDeliveryChannelStatus for details on how to use this API operation. +// See DescribeConfigRuleEvaluationStatus for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DescribeDeliveryChannelStatusWithContext(ctx aws.Context, input *DescribeDeliveryChannelStatusInput, opts ...request.Option) (*DescribeDeliveryChannelStatusOutput, error) { - req, out := c.DescribeDeliveryChannelStatusRequest(input) +func (c *ConfigService) DescribeConfigRuleEvaluationStatusWithContext(ctx aws.Context, input *DescribeConfigRuleEvaluationStatusInput, opts ...request.Option) (*DescribeConfigRuleEvaluationStatusOutput, error) { + req, out := c.DescribeConfigRuleEvaluationStatusRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeDeliveryChannels = "DescribeDeliveryChannels" +const opDescribeConfigRules = "DescribeConfigRules" -// DescribeDeliveryChannelsRequest generates a "aws/request.Request" representing the -// client's request for the DescribeDeliveryChannels operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// DescribeConfigRulesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeConfigRules operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeDeliveryChannels for more information on using the DescribeDeliveryChannels +// See DescribeConfigRules for more information on using the DescribeConfigRules // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeDeliveryChannelsRequest method. -// req, resp := client.DescribeDeliveryChannelsRequest(params) +// // Example sending a request using the DescribeConfigRulesRequest method. +// req, resp := client.DescribeConfigRulesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeDeliveryChannels -func (c *ConfigService) DescribeDeliveryChannelsRequest(input *DescribeDeliveryChannelsInput) (req *request.Request, output *DescribeDeliveryChannelsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigRules +func (c *ConfigService) DescribeConfigRulesRequest(input *DescribeConfigRulesInput) (req *request.Request, output *DescribeConfigRulesOutput) { op := &request.Operation{ - Name: opDescribeDeliveryChannels, + Name: opDescribeConfigRules, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeDeliveryChannelsInput{} + input = &DescribeConfigRulesInput{} } - output = &DescribeDeliveryChannelsOutput{} + output = &DescribeConfigRulesOutput{} req = c.newRequest(op, input, output) return } -// DescribeDeliveryChannels API operation for AWS Config. -// -// Returns details about the specified delivery channel. If a delivery channel -// is not specified, this action returns the details of all delivery channels -// associated with the account. +// DescribeConfigRules API operation for AWS Config. // -// Currently, you can specify only one delivery channel per region in your account. +// Returns details about your AWS Config rules. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation DescribeDeliveryChannels for usage and error information. +// API operation DescribeConfigRules for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchDeliveryChannelException "NoSuchDeliveryChannelException" -// You have specified a delivery channel that does not exist. +// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" +// One or more AWS Config rules in the request are invalid. Verify that the +// rule names are correct and try again. 
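The error codes listed for `DescribeConfigRules` are surfaced through the `awserr.Error` interface mentioned in the comments, so callers typically type-assert and branch on `Code()`. A sketch of that handling, assuming an existing client; the helper and rule name are hypothetical:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/configservice"
)

// describeRule shows the runtime type assertion on awserr.Error that the
// generated documentation recommends. "my-config-rule" is hypothetical.
func describeRule(svc *configservice.ConfigService) {
	out, err := svc.DescribeConfigRules(&configservice.DescribeConfigRulesInput{
		ConfigRuleNames: []*string{aws.String("my-config-rule")},
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case configservice.ErrCodeNoSuchConfigRuleException:
				fmt.Println("rule not found:", aerr.Message())
			case configservice.ErrCodeInvalidNextTokenException:
				fmt.Println("stale pagination token:", aerr.Message())
			default:
				fmt.Println(aerr.Code(), aerr.Message())
			}
			return
		}
		fmt.Println("non-API error:", err)
		return
	}
	fmt.Println(out)
}
```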
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeDeliveryChannels -func (c *ConfigService) DescribeDeliveryChannels(input *DescribeDeliveryChannelsInput) (*DescribeDeliveryChannelsOutput, error) { - req, out := c.DescribeDeliveryChannelsRequest(input) +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigRules +func (c *ConfigService) DescribeConfigRules(input *DescribeConfigRulesInput) (*DescribeConfigRulesOutput, error) { + req, out := c.DescribeConfigRulesRequest(input) return out, req.Send() } -// DescribeDeliveryChannelsWithContext is the same as DescribeDeliveryChannels with the addition of +// DescribeConfigRulesWithContext is the same as DescribeConfigRules with the addition of // the ability to pass a context and additional request options. // -// See DescribeDeliveryChannels for details on how to use this API operation. +// See DescribeConfigRules for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) DescribeDeliveryChannelsWithContext(ctx aws.Context, input *DescribeDeliveryChannelsInput, opts ...request.Option) (*DescribeDeliveryChannelsOutput, error) { - req, out := c.DescribeDeliveryChannelsRequest(input) +func (c *ConfigService) DescribeConfigRulesWithContext(ctx aws.Context, input *DescribeConfigRulesInput, opts ...request.Option) (*DescribeConfigRulesOutput, error) { + req, out := c.DescribeConfigRulesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetComplianceDetailsByConfigRule = "GetComplianceDetailsByConfigRule" +const opDescribeConfigurationAggregatorSourcesStatus = "DescribeConfigurationAggregatorSourcesStatus" -// GetComplianceDetailsByConfigRuleRequest generates a "aws/request.Request" representing the -// client's request for the GetComplianceDetailsByConfigRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeConfigurationAggregatorSourcesStatusRequest generates a "aws/request.Request" representing the +// client's request for the DescribeConfigurationAggregatorSourcesStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetComplianceDetailsByConfigRule for more information on using the GetComplianceDetailsByConfigRule +// See DescribeConfigurationAggregatorSourcesStatus for more information on using the DescribeConfigurationAggregatorSourcesStatus // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetComplianceDetailsByConfigRuleRequest method. 
-// req, resp := client.GetComplianceDetailsByConfigRuleRequest(params) +// // Example sending a request using the DescribeConfigurationAggregatorSourcesStatusRequest method. +// req, resp := client.DescribeConfigurationAggregatorSourcesStatusRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceDetailsByConfigRule -func (c *ConfigService) GetComplianceDetailsByConfigRuleRequest(input *GetComplianceDetailsByConfigRuleInput) (req *request.Request, output *GetComplianceDetailsByConfigRuleOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationAggregatorSourcesStatus +func (c *ConfigService) DescribeConfigurationAggregatorSourcesStatusRequest(input *DescribeConfigurationAggregatorSourcesStatusInput) (req *request.Request, output *DescribeConfigurationAggregatorSourcesStatusOutput) { op := &request.Operation{ - Name: opGetComplianceDetailsByConfigRule, + Name: opDescribeConfigurationAggregatorSourcesStatus, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetComplianceDetailsByConfigRuleInput{} + input = &DescribeConfigurationAggregatorSourcesStatusInput{} } - output = &GetComplianceDetailsByConfigRuleOutput{} + output = &DescribeConfigurationAggregatorSourcesStatusOutput{} req = c.newRequest(op, input, output) return } -// GetComplianceDetailsByConfigRule API operation for AWS Config. +// DescribeConfigurationAggregatorSourcesStatus API operation for AWS Config. // -// Returns the evaluation results for the specified AWS Config rule. The results -// indicate which AWS resources were evaluated by the rule, when each resource -// was last evaluated, and whether each resource complies with the rule. +// Returns status information for sources within an aggregator. The status includes +// information about the last time AWS Config aggregated data from source accounts +// or AWS Config failed to aggregate data from source accounts with the related +// error code or message. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation GetComplianceDetailsByConfigRule for usage and error information. +// API operation DescribeConfigurationAggregatorSourcesStatus for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One or more of the specified parameters are invalid. Verify that your parameters // are valid and try again. // +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. +// // * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The specified next token is invalid. Specify the NextToken string that was +// The specified next token is invalid. Specify the nextToken string that was // returned in the previous response to get the next page of results. // -// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" -// One or more AWS Config rules in the request are invalid. Verify that the -// rule names are correct and try again. +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. 
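`DescribeConfigurationAggregatorSourcesStatus`, documented above, is scoped to a single aggregator, so the aggregator name is passed in the input. A short sketch under the same assumptions as before; the aggregator name is hypothetical and the input field name follows the AWS Config API shape:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/configservice"
)

// aggregatorSourceStatus lists source sync status for one aggregator.
// "my-aggregator" is a hypothetical aggregator name.
func aggregatorSourceStatus(svc *configservice.ConfigService) {
	out, err := svc.DescribeConfigurationAggregatorSourcesStatus(&configservice.DescribeConfigurationAggregatorSourcesStatusInput{
		ConfigurationAggregatorName: aws.String("my-aggregator"),
	})
	if err != nil {
		fmt.Println("DescribeConfigurationAggregatorSourcesStatus failed:", err)
		return
	}
	fmt.Println(out)
}
```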
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceDetailsByConfigRule -func (c *ConfigService) GetComplianceDetailsByConfigRule(input *GetComplianceDetailsByConfigRuleInput) (*GetComplianceDetailsByConfigRuleOutput, error) { - req, out := c.GetComplianceDetailsByConfigRuleRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationAggregatorSourcesStatus +func (c *ConfigService) DescribeConfigurationAggregatorSourcesStatus(input *DescribeConfigurationAggregatorSourcesStatusInput) (*DescribeConfigurationAggregatorSourcesStatusOutput, error) { + req, out := c.DescribeConfigurationAggregatorSourcesStatusRequest(input) return out, req.Send() } -// GetComplianceDetailsByConfigRuleWithContext is the same as GetComplianceDetailsByConfigRule with the addition of +// DescribeConfigurationAggregatorSourcesStatusWithContext is the same as DescribeConfigurationAggregatorSourcesStatus with the addition of // the ability to pass a context and additional request options. // -// See GetComplianceDetailsByConfigRule for details on how to use this API operation. +// See DescribeConfigurationAggregatorSourcesStatus for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) GetComplianceDetailsByConfigRuleWithContext(ctx aws.Context, input *GetComplianceDetailsByConfigRuleInput, opts ...request.Option) (*GetComplianceDetailsByConfigRuleOutput, error) { - req, out := c.GetComplianceDetailsByConfigRuleRequest(input) +func (c *ConfigService) DescribeConfigurationAggregatorSourcesStatusWithContext(ctx aws.Context, input *DescribeConfigurationAggregatorSourcesStatusInput, opts ...request.Option) (*DescribeConfigurationAggregatorSourcesStatusOutput, error) { + req, out := c.DescribeConfigurationAggregatorSourcesStatusRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetComplianceDetailsByResource = "GetComplianceDetailsByResource" +const opDescribeConfigurationAggregators = "DescribeConfigurationAggregators" -// GetComplianceDetailsByResourceRequest generates a "aws/request.Request" representing the -// client's request for the GetComplianceDetailsByResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeConfigurationAggregatorsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeConfigurationAggregators operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetComplianceDetailsByResource for more information on using the GetComplianceDetailsByResource +// See DescribeConfigurationAggregators for more information on using the DescribeConfigurationAggregators // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
// // -// // Example sending a request using the GetComplianceDetailsByResourceRequest method. -// req, resp := client.GetComplianceDetailsByResourceRequest(params) +// // Example sending a request using the DescribeConfigurationAggregatorsRequest method. +// req, resp := client.DescribeConfigurationAggregatorsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceDetailsByResource -func (c *ConfigService) GetComplianceDetailsByResourceRequest(input *GetComplianceDetailsByResourceInput) (req *request.Request, output *GetComplianceDetailsByResourceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationAggregators +func (c *ConfigService) DescribeConfigurationAggregatorsRequest(input *DescribeConfigurationAggregatorsInput) (req *request.Request, output *DescribeConfigurationAggregatorsOutput) { op := &request.Operation{ - Name: opGetComplianceDetailsByResource, + Name: opDescribeConfigurationAggregators, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetComplianceDetailsByResourceInput{} + input = &DescribeConfigurationAggregatorsInput{} } - output = &GetComplianceDetailsByResourceOutput{} + output = &DescribeConfigurationAggregatorsOutput{} req = c.newRequest(op, input, output) return } -// GetComplianceDetailsByResource API operation for AWS Config. +// DescribeConfigurationAggregators API operation for AWS Config. // -// Returns the evaluation results for the specified AWS resource. The results -// indicate which AWS Config rules were used to evaluate the resource, when -// each rule was last used, and whether the resource complies with each rule. +// Returns the details of one or more configuration aggregators. If the configuration +// aggregator is not specified, this action returns the details for all the +// configuration aggregators associated with the account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation GetComplianceDetailsByResource for usage and error information. +// API operation DescribeConfigurationAggregators for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One or more of the specified parameters are invalid. Verify that your parameters // are valid and try again. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceDetailsByResource -func (c *ConfigService) GetComplianceDetailsByResource(input *GetComplianceDetailsByResourceInput) (*GetComplianceDetailsByResourceOutput, error) { - req, out := c.GetComplianceDetailsByResourceRequest(input) +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. 
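The `InvalidNextTokenException` and `InvalidLimitException` codes above relate to manual pagination: the caller passes back the `NextToken` from the previous page and keeps the limit inside the allowed range. A sketch of that loop for `DescribeConfigurationAggregators`, assuming an existing client; the page size is illustrative and the field names follow the AWS Config API shapes:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/configservice"
)

// listAggregators pages through all configuration aggregators in the account.
func listAggregators(svc *configservice.ConfigService) {
	input := &configservice.DescribeConfigurationAggregatorsInput{
		Limit: aws.Int64(10), // illustrative page size
	}
	for {
		out, err := svc.DescribeConfigurationAggregators(input)
		if err != nil {
			fmt.Println("DescribeConfigurationAggregators failed:", err)
			return
		}
		fmt.Println(out.ConfigurationAggregators)

		// An empty NextToken means the last page has been reached.
		if aws.StringValue(out.NextToken) == "" {
			return
		}
		input.NextToken = out.NextToken
	}
}
```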
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationAggregators +func (c *ConfigService) DescribeConfigurationAggregators(input *DescribeConfigurationAggregatorsInput) (*DescribeConfigurationAggregatorsOutput, error) { + req, out := c.DescribeConfigurationAggregatorsRequest(input) return out, req.Send() } -// GetComplianceDetailsByResourceWithContext is the same as GetComplianceDetailsByResource with the addition of +// DescribeConfigurationAggregatorsWithContext is the same as DescribeConfigurationAggregators with the addition of // the ability to pass a context and additional request options. // -// See GetComplianceDetailsByResource for details on how to use this API operation. +// See DescribeConfigurationAggregators for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) GetComplianceDetailsByResourceWithContext(ctx aws.Context, input *GetComplianceDetailsByResourceInput, opts ...request.Option) (*GetComplianceDetailsByResourceOutput, error) { - req, out := c.GetComplianceDetailsByResourceRequest(input) +func (c *ConfigService) DescribeConfigurationAggregatorsWithContext(ctx aws.Context, input *DescribeConfigurationAggregatorsInput, opts ...request.Option) (*DescribeConfigurationAggregatorsOutput, error) { + req, out := c.DescribeConfigurationAggregatorsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetComplianceSummaryByConfigRule = "GetComplianceSummaryByConfigRule" +const opDescribeConfigurationRecorderStatus = "DescribeConfigurationRecorderStatus" -// GetComplianceSummaryByConfigRuleRequest generates a "aws/request.Request" representing the -// client's request for the GetComplianceSummaryByConfigRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeConfigurationRecorderStatusRequest generates a "aws/request.Request" representing the +// client's request for the DescribeConfigurationRecorderStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetComplianceSummaryByConfigRule for more information on using the GetComplianceSummaryByConfigRule +// See DescribeConfigurationRecorderStatus for more information on using the DescribeConfigurationRecorderStatus // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetComplianceSummaryByConfigRuleRequest method. -// req, resp := client.GetComplianceSummaryByConfigRuleRequest(params) +// // Example sending a request using the DescribeConfigurationRecorderStatusRequest method. 
+// req, resp := client.DescribeConfigurationRecorderStatusRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceSummaryByConfigRule -func (c *ConfigService) GetComplianceSummaryByConfigRuleRequest(input *GetComplianceSummaryByConfigRuleInput) (req *request.Request, output *GetComplianceSummaryByConfigRuleOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationRecorderStatus +func (c *ConfigService) DescribeConfigurationRecorderStatusRequest(input *DescribeConfigurationRecorderStatusInput) (req *request.Request, output *DescribeConfigurationRecorderStatusOutput) { op := &request.Operation{ - Name: opGetComplianceSummaryByConfigRule, + Name: opDescribeConfigurationRecorderStatus, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetComplianceSummaryByConfigRuleInput{} + input = &DescribeConfigurationRecorderStatusInput{} } - output = &GetComplianceSummaryByConfigRuleOutput{} + output = &DescribeConfigurationRecorderStatusOutput{} req = c.newRequest(op, input, output) return } -// GetComplianceSummaryByConfigRule API operation for AWS Config. +// DescribeConfigurationRecorderStatus API operation for AWS Config. // -// Returns the number of AWS Config rules that are compliant and noncompliant, -// up to a maximum of 25 for each. +// Returns the current status of the specified configuration recorder. If a +// configuration recorder is not specified, this action returns the status of +// all configuration recorders associated with the account. +// +// Currently, you can specify only one configuration recorder per region in +// your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation GetComplianceSummaryByConfigRule for usage and error information. -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceSummaryByConfigRule -func (c *ConfigService) GetComplianceSummaryByConfigRule(input *GetComplianceSummaryByConfigRuleInput) (*GetComplianceSummaryByConfigRuleOutput, error) { - req, out := c.GetComplianceSummaryByConfigRuleRequest(input) +// API operation DescribeConfigurationRecorderStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchConfigurationRecorderException "NoSuchConfigurationRecorderException" +// You have specified a configuration recorder that does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationRecorderStatus +func (c *ConfigService) DescribeConfigurationRecorderStatus(input *DescribeConfigurationRecorderStatusInput) (*DescribeConfigurationRecorderStatusOutput, error) { + req, out := c.DescribeConfigurationRecorderStatusRequest(input) return out, req.Send() } -// GetComplianceSummaryByConfigRuleWithContext is the same as GetComplianceSummaryByConfigRule with the addition of +// DescribeConfigurationRecorderStatusWithContext is the same as DescribeConfigurationRecorderStatus with the addition of // the ability to pass a context and additional request options. // -// See GetComplianceSummaryByConfigRule for details on how to use this API operation. +// See DescribeConfigurationRecorderStatus for details on how to use this API operation. 
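A common use of `DescribeConfigurationRecorderStatus`, documented just above, is to confirm that the single per-region recorder is actually recording. A sketch assuming an existing client; the output field names (`ConfigurationRecordersStatus`, `Recording`, `Name`) follow the AWS Config API shape and are not confirmed by this diff hunk:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/configservice"
)

// recorderIsRecording reports whether every configuration recorder in the
// account and region is currently recording.
func recorderIsRecording(svc *configservice.ConfigService) (bool, error) {
	out, err := svc.DescribeConfigurationRecorderStatus(&configservice.DescribeConfigurationRecorderStatusInput{})
	if err != nil {
		return false, err
	}
	for _, status := range out.ConfigurationRecordersStatus {
		if !aws.BoolValue(status.Recording) {
			fmt.Println("recorder not recording:", aws.StringValue(status.Name))
			return false, nil
		}
	}
	return true, nil
}
```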
// // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) GetComplianceSummaryByConfigRuleWithContext(ctx aws.Context, input *GetComplianceSummaryByConfigRuleInput, opts ...request.Option) (*GetComplianceSummaryByConfigRuleOutput, error) { - req, out := c.GetComplianceSummaryByConfigRuleRequest(input) +func (c *ConfigService) DescribeConfigurationRecorderStatusWithContext(ctx aws.Context, input *DescribeConfigurationRecorderStatusInput, opts ...request.Option) (*DescribeConfigurationRecorderStatusOutput, error) { + req, out := c.DescribeConfigurationRecorderStatusRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetComplianceSummaryByResourceType = "GetComplianceSummaryByResourceType" +const opDescribeConfigurationRecorders = "DescribeConfigurationRecorders" -// GetComplianceSummaryByResourceTypeRequest generates a "aws/request.Request" representing the -// client's request for the GetComplianceSummaryByResourceType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeConfigurationRecordersRequest generates a "aws/request.Request" representing the +// client's request for the DescribeConfigurationRecorders operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetComplianceSummaryByResourceType for more information on using the GetComplianceSummaryByResourceType +// See DescribeConfigurationRecorders for more information on using the DescribeConfigurationRecorders // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetComplianceSummaryByResourceTypeRequest method. -// req, resp := client.GetComplianceSummaryByResourceTypeRequest(params) +// // Example sending a request using the DescribeConfigurationRecordersRequest method. 
+// req, resp := client.DescribeConfigurationRecordersRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceSummaryByResourceType -func (c *ConfigService) GetComplianceSummaryByResourceTypeRequest(input *GetComplianceSummaryByResourceTypeInput) (req *request.Request, output *GetComplianceSummaryByResourceTypeOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationRecorders +func (c *ConfigService) DescribeConfigurationRecordersRequest(input *DescribeConfigurationRecordersInput) (req *request.Request, output *DescribeConfigurationRecordersOutput) { op := &request.Operation{ - Name: opGetComplianceSummaryByResourceType, + Name: opDescribeConfigurationRecorders, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetComplianceSummaryByResourceTypeInput{} + input = &DescribeConfigurationRecordersInput{} } - output = &GetComplianceSummaryByResourceTypeOutput{} + output = &DescribeConfigurationRecordersOutput{} req = c.newRequest(op, input, output) return } -// GetComplianceSummaryByResourceType API operation for AWS Config. +// DescribeConfigurationRecorders API operation for AWS Config. // -// Returns the number of resources that are compliant and the number that are -// noncompliant. You can specify one or more resource types to get these numbers -// for each resource type. The maximum number returned is 100. +// Returns the details for the specified configuration recorders. If the configuration +// recorder is not specified, this action returns the details for all configuration +// recorders associated with the account. +// +// Currently, you can specify only one configuration recorder per region in +// your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation GetComplianceSummaryByResourceType for usage and error information. +// API operation DescribeConfigurationRecorders for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" -// One or more of the specified parameters are invalid. Verify that your parameters -// are valid and try again. +// * ErrCodeNoSuchConfigurationRecorderException "NoSuchConfigurationRecorderException" +// You have specified a configuration recorder that does not exist. 
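The `WithContext` variants shown throughout this file take a non-nil context used for request cancellation. A sketch that bounds `DescribeConfigurationRecorders` with a timeout, assuming an existing client; the 30-second value and helper name are arbitrary:

```go
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/service/configservice"
)

// describeRecordersWithTimeout cancels the API call if it takes longer than
// 30 seconds. The timeout value is illustrative.
func describeRecordersWithTimeout(svc *configservice.ConfigService) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	out, err := svc.DescribeConfigurationRecordersWithContext(ctx, &configservice.DescribeConfigurationRecordersInput{})
	if err != nil {
		fmt.Println("DescribeConfigurationRecorders failed or timed out:", err)
		return
	}
	fmt.Println(out)
}
```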
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceSummaryByResourceType -func (c *ConfigService) GetComplianceSummaryByResourceType(input *GetComplianceSummaryByResourceTypeInput) (*GetComplianceSummaryByResourceTypeOutput, error) { - req, out := c.GetComplianceSummaryByResourceTypeRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeConfigurationRecorders +func (c *ConfigService) DescribeConfigurationRecorders(input *DescribeConfigurationRecordersInput) (*DescribeConfigurationRecordersOutput, error) { + req, out := c.DescribeConfigurationRecordersRequest(input) return out, req.Send() } -// GetComplianceSummaryByResourceTypeWithContext is the same as GetComplianceSummaryByResourceType with the addition of +// DescribeConfigurationRecordersWithContext is the same as DescribeConfigurationRecorders with the addition of // the ability to pass a context and additional request options. // -// See GetComplianceSummaryByResourceType for details on how to use this API operation. +// See DescribeConfigurationRecorders for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) GetComplianceSummaryByResourceTypeWithContext(ctx aws.Context, input *GetComplianceSummaryByResourceTypeInput, opts ...request.Option) (*GetComplianceSummaryByResourceTypeOutput, error) { - req, out := c.GetComplianceSummaryByResourceTypeRequest(input) +func (c *ConfigService) DescribeConfigurationRecordersWithContext(ctx aws.Context, input *DescribeConfigurationRecordersInput, opts ...request.Option) (*DescribeConfigurationRecordersOutput, error) { + req, out := c.DescribeConfigurationRecordersRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetDiscoveredResourceCounts = "GetDiscoveredResourceCounts" +const opDescribeDeliveryChannelStatus = "DescribeDeliveryChannelStatus" -// GetDiscoveredResourceCountsRequest generates a "aws/request.Request" representing the -// client's request for the GetDiscoveredResourceCounts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeDeliveryChannelStatusRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDeliveryChannelStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetDiscoveredResourceCounts for more information on using the GetDiscoveredResourceCounts +// See DescribeDeliveryChannelStatus for more information on using the DescribeDeliveryChannelStatus // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetDiscoveredResourceCountsRequest method. -// req, resp := client.GetDiscoveredResourceCountsRequest(params) +// // Example sending a request using the DescribeDeliveryChannelStatusRequest method. 
+// req, resp := client.DescribeDeliveryChannelStatusRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetDiscoveredResourceCounts -func (c *ConfigService) GetDiscoveredResourceCountsRequest(input *GetDiscoveredResourceCountsInput) (req *request.Request, output *GetDiscoveredResourceCountsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeDeliveryChannelStatus +func (c *ConfigService) DescribeDeliveryChannelStatusRequest(input *DescribeDeliveryChannelStatusInput) (req *request.Request, output *DescribeDeliveryChannelStatusOutput) { op := &request.Operation{ - Name: opGetDiscoveredResourceCounts, + Name: opDescribeDeliveryChannelStatus, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetDiscoveredResourceCountsInput{} + input = &DescribeDeliveryChannelStatusInput{} } - output = &GetDiscoveredResourceCountsOutput{} + output = &DescribeDeliveryChannelStatusOutput{} req = c.newRequest(op, input, output) return } -// GetDiscoveredResourceCounts API operation for AWS Config. -// -// Returns the resource types, the number of each resource type, and the total -// number of resources that AWS Config is recording in this region for your -// AWS account. -// -// Example -// -// AWS Config is recording three resource types in the US East (Ohio) Region -// for your account: 25 EC2 instances, 20 IAM users, and 15 S3 buckets. -// -// You make a call to the GetDiscoveredResourceCounts action and specify that -// you want all resource types. -// -// AWS Config returns the following: -// -// The resource types (EC2 instances, IAM users, and S3 buckets) -// -// The number of each resource type (25, 20, and 15) -// -// The total number of all resources (60) -// -// The response is paginated. By default, AWS Config lists 100 ResourceCount -// objects on each page. You can customize this number with the limit parameter. -// The response includes a nextToken string. To get the next page of results, -// run the request again and specify the string for the nextToken parameter. -// -// If you make a call to the GetDiscoveredResourceCounts action, you may not -// immediately receive resource counts in the following situations: -// -// You are a new AWS Config customer +// DescribeDeliveryChannelStatus API operation for AWS Config. // -// You just enabled resource recording +// Returns the current status of the specified delivery channel. If a delivery +// channel is not specified, this action returns the current status of all delivery +// channels associated with the account. // -// It may take a few minutes for AWS Config to record and count your resources. -// Wait a few minutes and then retry the GetDiscoveredResourceCounts action. +// Currently, you can specify only one delivery channel per region in your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation GetDiscoveredResourceCounts for usage and error information. +// API operation DescribeDeliveryChannelStatus for usage and error information. // // Returned Error Codes: -// * ErrCodeValidationException "ValidationException" -// The requested action is not valid. 
-// -// * ErrCodeInvalidLimitException "InvalidLimitException" -// The specified limit is outside the allowable range. -// -// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The specified next token is invalid. Specify the NextToken string that was -// returned in the previous response to get the next page of results. +// * ErrCodeNoSuchDeliveryChannelException "NoSuchDeliveryChannelException" +// You have specified a delivery channel that does not exist. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetDiscoveredResourceCounts -func (c *ConfigService) GetDiscoveredResourceCounts(input *GetDiscoveredResourceCountsInput) (*GetDiscoveredResourceCountsOutput, error) { - req, out := c.GetDiscoveredResourceCountsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeDeliveryChannelStatus +func (c *ConfigService) DescribeDeliveryChannelStatus(input *DescribeDeliveryChannelStatusInput) (*DescribeDeliveryChannelStatusOutput, error) { + req, out := c.DescribeDeliveryChannelStatusRequest(input) return out, req.Send() } -// GetDiscoveredResourceCountsWithContext is the same as GetDiscoveredResourceCounts with the addition of +// DescribeDeliveryChannelStatusWithContext is the same as DescribeDeliveryChannelStatus with the addition of // the ability to pass a context and additional request options. // -// See GetDiscoveredResourceCounts for details on how to use this API operation. +// See DescribeDeliveryChannelStatus for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) GetDiscoveredResourceCountsWithContext(ctx aws.Context, input *GetDiscoveredResourceCountsInput, opts ...request.Option) (*GetDiscoveredResourceCountsOutput, error) { - req, out := c.GetDiscoveredResourceCountsRequest(input) +func (c *ConfigService) DescribeDeliveryChannelStatusWithContext(ctx aws.Context, input *DescribeDeliveryChannelStatusInput, opts ...request.Option) (*DescribeDeliveryChannelStatusOutput, error) { + req, out := c.DescribeDeliveryChannelStatusRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetResourceConfigHistory = "GetResourceConfigHistory" +const opDescribeDeliveryChannels = "DescribeDeliveryChannels" -// GetResourceConfigHistoryRequest generates a "aws/request.Request" representing the -// client's request for the GetResourceConfigHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeDeliveryChannelsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDeliveryChannels operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetResourceConfigHistory for more information on using the GetResourceConfigHistory +// See DescribeDeliveryChannels for more information on using the DescribeDeliveryChannels // API call, and error handling. 
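As a hedged illustration (not part of the vendored file), here is a minimal sketch of calling the `DescribeDeliveryChannelStatus` wrapper added above; the standard aws-sdk-go session/client constructors and the empty input (which, per the doc comment, returns the status of every delivery channel in the account) are assumptions outside this diff:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	// Assumed standard SDK setup; credentials and region come from the environment.
	sess := session.Must(session.NewSession())
	svc := configservice.New(sess)

	// An empty input asks for the status of all delivery channels in the account.
	out, err := svc.DescribeDeliveryChannelStatus(&configservice.DescribeDeliveryChannelStatusInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```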
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetResourceConfigHistoryRequest method. -// req, resp := client.GetResourceConfigHistoryRequest(params) +// // Example sending a request using the DescribeDeliveryChannelsRequest method. +// req, resp := client.DescribeDeliveryChannelsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetResourceConfigHistory -func (c *ConfigService) GetResourceConfigHistoryRequest(input *GetResourceConfigHistoryInput) (req *request.Request, output *GetResourceConfigHistoryOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeDeliveryChannels +func (c *ConfigService) DescribeDeliveryChannelsRequest(input *DescribeDeliveryChannelsInput) (req *request.Request, output *DescribeDeliveryChannelsOutput) { op := &request.Operation{ - Name: opGetResourceConfigHistory, + Name: opDescribeDeliveryChannels, HTTPMethod: "POST", HTTPPath: "/", - Paginator: &request.Paginator{ - InputTokens: []string{"nextToken"}, - OutputTokens: []string{"nextToken"}, - LimitToken: "limit", - TruncationToken: "", - }, } if input == nil { - input = &GetResourceConfigHistoryInput{} + input = &DescribeDeliveryChannelsInput{} } - output = &GetResourceConfigHistoryOutput{} + output = &DescribeDeliveryChannelsOutput{} req = c.newRequest(op, input, output) return } -// GetResourceConfigHistory API operation for AWS Config. -// -// Returns a list of configuration items for the specified resource. The list -// contains details about each state of the resource during the specified time -// interval. +// DescribeDeliveryChannels API operation for AWS Config. // -// The response is paginated. By default, AWS Config returns a limit of 10 configuration -// items per page. You can customize this number with the limit parameter. The -// response includes a nextToken string. To get the next page of results, run -// the request again and specify the string for the nextToken parameter. +// Returns details about the specified delivery channel. If a delivery channel +// is not specified, this action returns the details of all delivery channels +// associated with the account. // -// Each call to the API is limited to span a duration of seven days. It is likely -// that the number of records returned is smaller than the specified limit. -// In such cases, you can make another call, using the nextToken. +// Currently, you can specify only one delivery channel per region in your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation GetResourceConfigHistory for usage and error information. +// API operation DescribeDeliveryChannels for usage and error information. // // Returned Error Codes: -// * ErrCodeValidationException "ValidationException" -// The requested action is not valid. -// -// * ErrCodeInvalidTimeRangeException "InvalidTimeRangeException" -// The specified time range is not valid. The earlier time is not chronologically -// before the later time. -// -// * ErrCodeInvalidLimitException "InvalidLimitException" -// The specified limit is outside the allowable range. 
-// -// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The specified next token is invalid. Specify the NextToken string that was -// returned in the previous response to get the next page of results. -// -// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" -// There are no configuration recorders available to provide the role needed -// to describe your resources. Create a configuration recorder. -// -// * ErrCodeResourceNotDiscoveredException "ResourceNotDiscoveredException" -// You have specified a resource that is either unknown or has not been discovered. +// * ErrCodeNoSuchDeliveryChannelException "NoSuchDeliveryChannelException" +// You have specified a delivery channel that does not exist. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetResourceConfigHistory -func (c *ConfigService) GetResourceConfigHistory(input *GetResourceConfigHistoryInput) (*GetResourceConfigHistoryOutput, error) { - req, out := c.GetResourceConfigHistoryRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeDeliveryChannels +func (c *ConfigService) DescribeDeliveryChannels(input *DescribeDeliveryChannelsInput) (*DescribeDeliveryChannelsOutput, error) { + req, out := c.DescribeDeliveryChannelsRequest(input) return out, req.Send() } -// GetResourceConfigHistoryWithContext is the same as GetResourceConfigHistory with the addition of +// DescribeDeliveryChannelsWithContext is the same as DescribeDeliveryChannels with the addition of // the ability to pass a context and additional request options. // -// See GetResourceConfigHistory for details on how to use this API operation. +// See DescribeDeliveryChannels for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) GetResourceConfigHistoryWithContext(ctx aws.Context, input *GetResourceConfigHistoryInput, opts ...request.Option) (*GetResourceConfigHistoryOutput, error) { - req, out := c.GetResourceConfigHistoryRequest(input) +func (c *ConfigService) DescribeDeliveryChannelsWithContext(ctx aws.Context, input *DescribeDeliveryChannelsInput, opts ...request.Option) (*DescribeDeliveryChannelsOutput, error) { + req, out := c.DescribeDeliveryChannelsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// GetResourceConfigHistoryPages iterates over the pages of a GetResourceConfigHistory operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. -// -// See GetResourceConfigHistory method for more information on how to use this operation. -// -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a GetResourceConfigHistory operation. 
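The `*WithContext` variants documented above accept a cancellable context. A hedged sketch of using `DescribeDeliveryChannelsWithContext` with a timeout follows; the `DeliveryChannels` output field name is an assumption not shown in this excerpt:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Cancel the API call if it takes longer than 30 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Empty input returns every delivery channel in the account
	// (currently at most one per region, per the doc comment).
	out, err := svc.DescribeDeliveryChannelsWithContext(ctx, &configservice.DescribeDeliveryChannelsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, ch := range out.DeliveryChannels { // assumed output field name
		fmt.Println(ch)
	}
}
```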
-// pageNum := 0 -// err := client.GetResourceConfigHistoryPages(params, -// func(page *GetResourceConfigHistoryOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *ConfigService) GetResourceConfigHistoryPages(input *GetResourceConfigHistoryInput, fn func(*GetResourceConfigHistoryOutput, bool) bool) error { - return c.GetResourceConfigHistoryPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// GetResourceConfigHistoryPagesWithContext same as GetResourceConfigHistoryPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *ConfigService) GetResourceConfigHistoryPagesWithContext(ctx aws.Context, input *GetResourceConfigHistoryInput, fn func(*GetResourceConfigHistoryOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *GetResourceConfigHistoryInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.GetResourceConfigHistoryRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*GetResourceConfigHistoryOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opListDiscoveredResources = "ListDiscoveredResources" +const opDescribePendingAggregationRequests = "DescribePendingAggregationRequests" -// ListDiscoveredResourcesRequest generates a "aws/request.Request" representing the -// client's request for the ListDiscoveredResources operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribePendingAggregationRequestsRequest generates a "aws/request.Request" representing the +// client's request for the DescribePendingAggregationRequests operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListDiscoveredResources for more information on using the ListDiscoveredResources +// See DescribePendingAggregationRequests for more information on using the DescribePendingAggregationRequests // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListDiscoveredResourcesRequest method. -// req, resp := client.ListDiscoveredResourcesRequest(params) +// // Example sending a request using the DescribePendingAggregationRequestsRequest method. 
+// req, resp := client.DescribePendingAggregationRequestsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/ListDiscoveredResources -func (c *ConfigService) ListDiscoveredResourcesRequest(input *ListDiscoveredResourcesInput) (req *request.Request, output *ListDiscoveredResourcesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribePendingAggregationRequests +func (c *ConfigService) DescribePendingAggregationRequestsRequest(input *DescribePendingAggregationRequestsInput) (req *request.Request, output *DescribePendingAggregationRequestsOutput) { op := &request.Operation{ - Name: opListDiscoveredResources, + Name: opDescribePendingAggregationRequests, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ListDiscoveredResourcesInput{} + input = &DescribePendingAggregationRequestsInput{} } - output = &ListDiscoveredResourcesOutput{} + output = &DescribePendingAggregationRequestsOutput{} req = c.newRequest(op, input, output) return } -// ListDiscoveredResources API operation for AWS Config. -// -// Accepts a resource type and returns a list of resource identifiers for the -// resources of that type. A resource identifier includes the resource type, -// ID, and (if available) the custom resource name. The results consist of resources -// that AWS Config has discovered, including those that AWS Config is not currently -// recording. You can narrow the results to include only resources that have -// specific resource IDs or a resource name. -// -// You can specify either resource IDs or a resource name but not both in the -// same request. +// DescribePendingAggregationRequests API operation for AWS Config. // -// The response is paginated. By default, AWS Config lists 100 resource identifiers -// on each page. You can customize this number with the limit parameter. The -// response includes a nextToken string. To get the next page of results, run -// the request again and specify the string for the nextToken parameter. +// Returns a list of all pending aggregation requests. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation ListDiscoveredResources for usage and error information. +// API operation DescribePendingAggregationRequests for usage and error information. // // Returned Error Codes: -// * ErrCodeValidationException "ValidationException" -// The requested action is not valid. -// -// * ErrCodeInvalidLimitException "InvalidLimitException" -// The specified limit is outside the allowable range. +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. // // * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The specified next token is invalid. Specify the NextToken string that was +// The specified next token is invalid. Specify the nextToken string that was // returned in the previous response to get the next page of results. // -// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" -// There are no configuration recorders available to provide the role needed -// to describe your resources. Create a configuration recorder. 
+// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/ListDiscoveredResources -func (c *ConfigService) ListDiscoveredResources(input *ListDiscoveredResourcesInput) (*ListDiscoveredResourcesOutput, error) { - req, out := c.ListDiscoveredResourcesRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribePendingAggregationRequests +func (c *ConfigService) DescribePendingAggregationRequests(input *DescribePendingAggregationRequestsInput) (*DescribePendingAggregationRequestsOutput, error) { + req, out := c.DescribePendingAggregationRequestsRequest(input) return out, req.Send() } -// ListDiscoveredResourcesWithContext is the same as ListDiscoveredResources with the addition of +// DescribePendingAggregationRequestsWithContext is the same as DescribePendingAggregationRequests with the addition of // the ability to pass a context and additional request options. // -// See ListDiscoveredResources for details on how to use this API operation. +// See DescribePendingAggregationRequests for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) ListDiscoveredResourcesWithContext(ctx aws.Context, input *ListDiscoveredResourcesInput, opts ...request.Option) (*ListDiscoveredResourcesOutput, error) { - req, out := c.ListDiscoveredResourcesRequest(input) +func (c *ConfigService) DescribePendingAggregationRequestsWithContext(ctx aws.Context, input *DescribePendingAggregationRequestsInput, opts ...request.Option) (*DescribePendingAggregationRequestsOutput, error) { + req, out := c.DescribePendingAggregationRequestsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opPutConfigRule = "PutConfigRule" +const opDescribeRetentionConfigurations = "DescribeRetentionConfigurations" -// PutConfigRuleRequest generates a "aws/request.Request" representing the -// client's request for the PutConfigRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeRetentionConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeRetentionConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See PutConfigRule for more information on using the PutConfigRule +// See DescribeRetentionConfigurations for more information on using the DescribeRetentionConfigurations // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the PutConfigRuleRequest method. -// req, resp := client.PutConfigRuleRequest(params) +// // Example sending a request using the DescribeRetentionConfigurationsRequest method. 
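The error codes above (`InvalidLimitException`, `InvalidNextTokenException`) imply the usual limit/token paging parameters. A hedged pagination sketch for `DescribePendingAggregationRequests`; the `Limit`, `NextToken`, and `PendingAggregationRequests` field names are assumptions not shown in this excerpt:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Page through all pending aggregation requests, 20 per call (field names assumed).
	input := &configservice.DescribePendingAggregationRequestsInput{Limit: aws.Int64(20)}
	for {
		out, err := svc.DescribePendingAggregationRequests(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, r := range out.PendingAggregationRequests {
			fmt.Println(r)
		}
		if aws.StringValue(out.NextToken) == "" {
			break // no more pages
		}
		input.NextToken = out.NextToken
	}
}
```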
+// req, resp := client.DescribeRetentionConfigurationsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigRule -func (c *ConfigService) PutConfigRuleRequest(input *PutConfigRuleInput) (req *request.Request, output *PutConfigRuleOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeRetentionConfigurations +func (c *ConfigService) DescribeRetentionConfigurationsRequest(input *DescribeRetentionConfigurationsInput) (req *request.Request, output *DescribeRetentionConfigurationsOutput) { op := &request.Operation{ - Name: opPutConfigRule, + Name: opDescribeRetentionConfigurations, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &PutConfigRuleInput{} + input = &DescribeRetentionConfigurationsInput{} } - output = &PutConfigRuleOutput{} + output = &DescribeRetentionConfigurationsOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// PutConfigRule API operation for AWS Config. -// -// Adds or updates an AWS Config rule for evaluating whether your AWS resources -// comply with your desired configurations. +// DescribeRetentionConfigurations API operation for AWS Config. // -// You can use this action for custom Config rules and AWS managed Config rules. -// A custom Config rule is a rule that you develop and maintain. An AWS managed -// Config rule is a customizable, predefined rule that AWS Config provides. +// Returns the details of one or more retention configurations. If the retention +// configuration name is not specified, this action returns the details for +// all the retention configurations for that account. // -// If you are adding a new custom Config rule, you must first create the AWS -// Lambda function that the rule invokes to evaluate your resources. When you -// use the PutConfigRule action to add the rule to AWS Config, you must specify -// the Amazon Resource Name (ARN) that AWS Lambda assigns to the function. Specify -// the ARN for the SourceIdentifier key. This key is part of the Source object, -// which is part of the ConfigRule object. -// -// If you are adding an AWS managed Config rule, specify the rule's identifier -// for the SourceIdentifier key. To reference AWS managed Config rule identifiers, -// see About AWS Managed Config Rules (http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html). -// -// For any new rule that you add, specify the ConfigRuleName in the ConfigRule -// object. Do not specify the ConfigRuleArn or the ConfigRuleId. These values -// are generated by AWS Config for new rules. -// -// If you are updating a rule that you added previously, you can specify the -// rule by ConfigRuleName, ConfigRuleId, or ConfigRuleArn in the ConfigRule -// data type that you use in this request. -// -// The maximum number of rules that AWS Config supports is 50. -// -// For more information about requesting a rule limit increase, see AWS Config -// Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_config) -// in the AWS General Reference Guide. 
-// -// For more information about developing and using AWS Config rules, see Evaluating -// AWS Resource Configurations with AWS Config (http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) -// in the AWS Config Developer Guide. +// Currently, AWS Config supports only one retention configuration per region +// in your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation PutConfigRule for usage and error information. +// API operation DescribeRetentionConfigurations for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One or more of the specified parameters are invalid. Verify that your parameters // are valid and try again. // -// * ErrCodeMaxNumberOfConfigRulesExceededException "MaxNumberOfConfigRulesExceededException" -// Failed to add the AWS Config rule because the account already contains the -// maximum number of 50 rules. Consider deleting any deactivated rules before -// adding new rules. -// -// * ErrCodeResourceInUseException "ResourceInUseException" -// The rule is currently being deleted or the rule is deleting your evaluation -// results. Try your request again later. -// -// * ErrCodeInsufficientPermissionsException "InsufficientPermissionsException" -// Indicates one of the following errors: -// -// * The rule cannot be created because the IAM role assigned to AWS Config -// lacks permissions to perform the config:Put* action. -// -// * The AWS Lambda function cannot be invoked. Check the function ARN, and -// check the function's permissions. +// * ErrCodeNoSuchRetentionConfigurationException "NoSuchRetentionConfigurationException" +// You have specified a retention configuration that does not exist. // -// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" -// There are no configuration recorders available to provide the role needed -// to describe your resources. Create a configuration recorder. +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigRule -func (c *ConfigService) PutConfigRule(input *PutConfigRuleInput) (*PutConfigRuleOutput, error) { - req, out := c.PutConfigRuleRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/DescribeRetentionConfigurations +func (c *ConfigService) DescribeRetentionConfigurations(input *DescribeRetentionConfigurationsInput) (*DescribeRetentionConfigurationsOutput, error) { + req, out := c.DescribeRetentionConfigurationsRequest(input) return out, req.Send() } -// PutConfigRuleWithContext is the same as PutConfigRule with the addition of +// DescribeRetentionConfigurationsWithContext is the same as DescribeRetentionConfigurations with the addition of // the ability to pass a context and additional request options. // -// See PutConfigRule for details on how to use this API operation. +// See DescribeRetentionConfigurations for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) PutConfigRuleWithContext(ctx aws.Context, input *PutConfigRuleInput, opts ...request.Option) (*PutConfigRuleOutput, error) { - req, out := c.PutConfigRuleRequest(input) +func (c *ConfigService) DescribeRetentionConfigurationsWithContext(ctx aws.Context, input *DescribeRetentionConfigurationsInput, opts ...request.Option) (*DescribeRetentionConfigurationsOutput, error) { + req, out := c.DescribeRetentionConfigurationsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opPutConfigurationRecorder = "PutConfigurationRecorder" +const opGetAggregateComplianceDetailsByConfigRule = "GetAggregateComplianceDetailsByConfigRule" -// PutConfigurationRecorderRequest generates a "aws/request.Request" representing the -// client's request for the PutConfigurationRecorder operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetAggregateComplianceDetailsByConfigRuleRequest generates a "aws/request.Request" representing the +// client's request for the GetAggregateComplianceDetailsByConfigRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See PutConfigurationRecorder for more information on using the PutConfigurationRecorder +// See GetAggregateComplianceDetailsByConfigRule for more information on using the GetAggregateComplianceDetailsByConfigRule // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the PutConfigurationRecorderRequest method. -// req, resp := client.PutConfigurationRecorderRequest(params) +// // Example sending a request using the GetAggregateComplianceDetailsByConfigRuleRequest method. 
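The doc comments repeatedly point to runtime type assertions on `awserr.Error`. A hedged sketch of that pattern against `DescribeRetentionConfigurations`, using the `ErrCodeNoSuchRetentionConfigurationException` constant listed above; the empty input (all retention configurations) is an assumption consistent with the doc comment:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Empty input returns all retention configurations (currently at most one per region).
	out, err := svc.DescribeRetentionConfigurations(&configservice.DescribeRetentionConfigurationsInput{})
	if err != nil {
		// Runtime type assertion on awserr.Error, as the doc comments describe.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == configservice.ErrCodeNoSuchRetentionConfigurationException {
			fmt.Println("no retention configuration exists yet:", aerr.Message())
			return
		}
		log.Fatal(err)
	}
	fmt.Println(out)
}
```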
+// req, resp := client.GetAggregateComplianceDetailsByConfigRuleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigurationRecorder -func (c *ConfigService) PutConfigurationRecorderRequest(input *PutConfigurationRecorderInput) (req *request.Request, output *PutConfigurationRecorderOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetAggregateComplianceDetailsByConfigRule +func (c *ConfigService) GetAggregateComplianceDetailsByConfigRuleRequest(input *GetAggregateComplianceDetailsByConfigRuleInput) (req *request.Request, output *GetAggregateComplianceDetailsByConfigRuleOutput) { op := &request.Operation{ - Name: opPutConfigurationRecorder, + Name: opGetAggregateComplianceDetailsByConfigRule, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &PutConfigurationRecorderInput{} + input = &GetAggregateComplianceDetailsByConfigRuleInput{} } - output = &PutConfigurationRecorderOutput{} + output = &GetAggregateComplianceDetailsByConfigRuleOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// PutConfigurationRecorder API operation for AWS Config. -// -// Creates a new configuration recorder to record the selected resource configurations. -// -// You can use this action to change the role roleARN and/or the recordingGroup -// of an existing recorder. To change the role, call the action on the existing -// configuration recorder and specify a role. +// GetAggregateComplianceDetailsByConfigRule API operation for AWS Config. // -// Currently, you can specify only one configuration recorder per region in -// your account. +// Returns the evaluation results for the specified AWS Config rule for a specific +// resource in a rule. The results indicate which AWS resources were evaluated +// by the rule, when each resource was last evaluated, and whether each resource +// complies with the rule. // -// If ConfigurationRecorder does not have the recordingGroup parameter specified, -// the default is to record all supported resource types. +// The results can return an empty result page. But if you have a nextToken, +// the results are displayed on the next page. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation PutConfigurationRecorder for usage and error information. +// API operation GetAggregateComplianceDetailsByConfigRule for usage and error information. // // Returned Error Codes: -// * ErrCodeMaxNumberOfConfigurationRecordersExceededException "MaxNumberOfConfigurationRecordersExceededException" -// You have reached the limit on the number of recorders you can create. +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. // -// * ErrCodeInvalidConfigurationRecorderNameException "InvalidConfigurationRecorderNameException" -// You have provided a configuration recorder name that is not valid. +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. // -// * ErrCodeInvalidRoleException "InvalidRoleException" -// You have provided a null or empty role ARN. 
+// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. // -// * ErrCodeInvalidRecordingGroupException "InvalidRecordingGroupException" -// AWS Config throws an exception if the recording group does not contain a -// valid list of resource types. Invalid values could also be incorrectly formatted. +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigurationRecorder -func (c *ConfigService) PutConfigurationRecorder(input *PutConfigurationRecorderInput) (*PutConfigurationRecorderOutput, error) { - req, out := c.PutConfigurationRecorderRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetAggregateComplianceDetailsByConfigRule +func (c *ConfigService) GetAggregateComplianceDetailsByConfigRule(input *GetAggregateComplianceDetailsByConfigRuleInput) (*GetAggregateComplianceDetailsByConfigRuleOutput, error) { + req, out := c.GetAggregateComplianceDetailsByConfigRuleRequest(input) return out, req.Send() } -// PutConfigurationRecorderWithContext is the same as PutConfigurationRecorder with the addition of +// GetAggregateComplianceDetailsByConfigRuleWithContext is the same as GetAggregateComplianceDetailsByConfigRule with the addition of // the ability to pass a context and additional request options. // -// See PutConfigurationRecorder for details on how to use this API operation. +// See GetAggregateComplianceDetailsByConfigRule for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) PutConfigurationRecorderWithContext(ctx aws.Context, input *PutConfigurationRecorderInput, opts ...request.Option) (*PutConfigurationRecorderOutput, error) { - req, out := c.PutConfigurationRecorderRequest(input) +func (c *ConfigService) GetAggregateComplianceDetailsByConfigRuleWithContext(ctx aws.Context, input *GetAggregateComplianceDetailsByConfigRuleInput, opts ...request.Option) (*GetAggregateComplianceDetailsByConfigRuleOutput, error) { + req, out := c.GetAggregateComplianceDetailsByConfigRuleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opPutDeliveryChannel = "PutDeliveryChannel" +const opGetAggregateConfigRuleComplianceSummary = "GetAggregateConfigRuleComplianceSummary" -// PutDeliveryChannelRequest generates a "aws/request.Request" representing the -// client's request for the PutDeliveryChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetAggregateConfigRuleComplianceSummaryRequest generates a "aws/request.Request" representing the +// client's request for the GetAggregateConfigRuleComplianceSummary operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
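The generated doc comments also describe a two-step Request/Send pattern for injecting custom request handling. A hedged sketch using `GetAggregateComplianceDetailsByConfigRuleRequest`; the input field names (`ConfigurationAggregatorName`, `ConfigRuleName`, `AccountId`, `AwsRegion`) and values are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Field names and values below are assumptions; they identify which
	// aggregated evaluation results to return.
	params := &configservice.GetAggregateComplianceDetailsByConfigRuleInput{
		ConfigurationAggregatorName: aws.String("example-aggregator"),
		ConfigRuleName:              aws.String("example-rule"),
		AccountId:                   aws.String("123456789012"),
		AwsRegion:                   aws.String("us-east-1"),
	}

	// Two-step Request/Send pattern from the doc comments: build the request,
	// optionally customize it, then send it.
	req, resp := svc.GetAggregateComplianceDetailsByConfigRuleRequest(params)
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```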
// -// See PutDeliveryChannel for more information on using the PutDeliveryChannel +// See GetAggregateConfigRuleComplianceSummary for more information on using the GetAggregateConfigRuleComplianceSummary // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the PutDeliveryChannelRequest method. -// req, resp := client.PutDeliveryChannelRequest(params) +// // Example sending a request using the GetAggregateConfigRuleComplianceSummaryRequest method. +// req, resp := client.GetAggregateConfigRuleComplianceSummaryRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutDeliveryChannel -func (c *ConfigService) PutDeliveryChannelRequest(input *PutDeliveryChannelInput) (req *request.Request, output *PutDeliveryChannelOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetAggregateConfigRuleComplianceSummary +func (c *ConfigService) GetAggregateConfigRuleComplianceSummaryRequest(input *GetAggregateConfigRuleComplianceSummaryInput) (req *request.Request, output *GetAggregateConfigRuleComplianceSummaryOutput) { op := &request.Operation{ - Name: opPutDeliveryChannel, + Name: opGetAggregateConfigRuleComplianceSummary, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &PutDeliveryChannelInput{} + input = &GetAggregateConfigRuleComplianceSummaryInput{} } - output = &PutDeliveryChannelOutput{} + output = &GetAggregateConfigRuleComplianceSummaryOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// PutDeliveryChannel API operation for AWS Config. -// -// Creates a delivery channel object to deliver configuration information to -// an Amazon S3 bucket and Amazon SNS topic. -// -// Before you can create a delivery channel, you must create a configuration -// recorder. +// GetAggregateConfigRuleComplianceSummary API operation for AWS Config. // -// You can use this action to change the Amazon S3 bucket or an Amazon SNS topic -// of the existing delivery channel. To change the Amazon S3 bucket or an Amazon -// SNS topic, call this action and specify the changed values for the S3 bucket -// and the SNS topic. If you specify a different value for either the S3 bucket -// or the SNS topic, this action will keep the existing value for the parameter -// that is not changed. +// Returns the number of compliant and noncompliant rules for one or more accounts +// and regions in an aggregator. // -// You can have only one delivery channel per region in your account. +// The results can return an empty result page, but if you have a nextToken, +// the results are displayed on the next page. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation PutDeliveryChannel for usage and error information. +// API operation GetAggregateConfigRuleComplianceSummary for usage and error information. 
// // Returned Error Codes: -// * ErrCodeMaxNumberOfDeliveryChannelsExceededException "MaxNumberOfDeliveryChannelsExceededException" -// You have reached the limit on the number of delivery channels you can create. -// -// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" -// There are no configuration recorders available to provide the role needed -// to describe your resources. Create a configuration recorder. -// -// * ErrCodeInvalidDeliveryChannelNameException "InvalidDeliveryChannelNameException" -// The specified delivery channel name is not valid. -// -// * ErrCodeNoSuchBucketException "NoSuchBucketException" -// The specified Amazon S3 bucket does not exist. +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. // -// * ErrCodeInvalidS3KeyPrefixException "InvalidS3KeyPrefixException" -// The specified Amazon S3 key prefix is not valid. +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. // -// * ErrCodeInvalidSNSTopicARNException "InvalidSNSTopicARNException" -// The specified Amazon SNS topic does not exist. +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. // -// * ErrCodeInsufficientDeliveryPolicyException "InsufficientDeliveryPolicyException" -// Your Amazon S3 bucket policy does not permit AWS Config to write to it. +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutDeliveryChannel -func (c *ConfigService) PutDeliveryChannel(input *PutDeliveryChannelInput) (*PutDeliveryChannelOutput, error) { - req, out := c.PutDeliveryChannelRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetAggregateConfigRuleComplianceSummary +func (c *ConfigService) GetAggregateConfigRuleComplianceSummary(input *GetAggregateConfigRuleComplianceSummaryInput) (*GetAggregateConfigRuleComplianceSummaryOutput, error) { + req, out := c.GetAggregateConfigRuleComplianceSummaryRequest(input) return out, req.Send() } -// PutDeliveryChannelWithContext is the same as PutDeliveryChannel with the addition of +// GetAggregateConfigRuleComplianceSummaryWithContext is the same as GetAggregateConfigRuleComplianceSummary with the addition of // the ability to pass a context and additional request options. // -// See PutDeliveryChannel for details on how to use this API operation. +// See GetAggregateConfigRuleComplianceSummary for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *ConfigService) PutDeliveryChannelWithContext(ctx aws.Context, input *PutDeliveryChannelInput, opts ...request.Option) (*PutDeliveryChannelOutput, error) { - req, out := c.PutDeliveryChannelRequest(input) +func (c *ConfigService) GetAggregateConfigRuleComplianceSummaryWithContext(ctx aws.Context, input *GetAggregateConfigRuleComplianceSummaryInput, opts ...request.Option) (*GetAggregateConfigRuleComplianceSummaryOutput, error) { + req, out := c.GetAggregateConfigRuleComplianceSummaryRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opPutEvaluations = "PutEvaluations" +const opGetAggregateDiscoveredResourceCounts = "GetAggregateDiscoveredResourceCounts" -// PutEvaluationsRequest generates a "aws/request.Request" representing the -// client's request for the PutEvaluations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetAggregateDiscoveredResourceCountsRequest generates a "aws/request.Request" representing the +// client's request for the GetAggregateDiscoveredResourceCounts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See PutEvaluations for more information on using the PutEvaluations +// See GetAggregateDiscoveredResourceCounts for more information on using the GetAggregateDiscoveredResourceCounts // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the PutEvaluationsRequest method. -// req, resp := client.PutEvaluationsRequest(params) +// // Example sending a request using the GetAggregateDiscoveredResourceCountsRequest method. +// req, resp := client.GetAggregateDiscoveredResourceCountsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutEvaluations -func (c *ConfigService) PutEvaluationsRequest(input *PutEvaluationsInput) (req *request.Request, output *PutEvaluationsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetAggregateDiscoveredResourceCounts +func (c *ConfigService) GetAggregateDiscoveredResourceCountsRequest(input *GetAggregateDiscoveredResourceCountsInput) (req *request.Request, output *GetAggregateDiscoveredResourceCountsOutput) { op := &request.Operation{ - Name: opPutEvaluations, + Name: opGetAggregateDiscoveredResourceCounts, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &PutEvaluationsInput{} + input = &GetAggregateDiscoveredResourceCountsInput{} } - output = &PutEvaluationsOutput{} + output = &GetAggregateDiscoveredResourceCountsOutput{} req = c.newRequest(op, input, output) return } -// PutEvaluations API operation for AWS Config. +// GetAggregateDiscoveredResourceCounts API operation for AWS Config. // -// Used by an AWS Lambda function to deliver evaluation results to AWS Config. -// This action is required in every AWS Lambda function that is invoked by an -// AWS Config rule. +// Returns the resource counts across accounts and regions that are present +// in your AWS Config aggregator. 
You can request the resource counts by providing +// filters and GroupByKey. +// +// For example, if the input contains accountID 12345678910 and region us-east-1 +// in filters, the API returns the count of resources in account ID 12345678910 +// and region us-east-1. If the input contains ACCOUNT_ID as a GroupByKey, the +// API returns resource counts for all source accounts that are present in your +// aggregator. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation PutEvaluations for usage and error information. +// API operation GetAggregateDiscoveredResourceCounts for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" -// One or more of the specified parameters are invalid. Verify that your parameters -// are valid and try again. +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. // -// * ErrCodeInvalidResultTokenException "InvalidResultTokenException" -// The specified ResultToken is invalid. +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. // -// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" -// One or more AWS Config rules in the request are invalid. Verify that the -// rule names are correct and try again. +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutEvaluations -func (c *ConfigService) PutEvaluations(input *PutEvaluationsInput) (*PutEvaluationsOutput, error) { - req, out := c.PutEvaluationsRequest(input) +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetAggregateDiscoveredResourceCounts +func (c *ConfigService) GetAggregateDiscoveredResourceCounts(input *GetAggregateDiscoveredResourceCountsInput) (*GetAggregateDiscoveredResourceCountsOutput, error) { + req, out := c.GetAggregateDiscoveredResourceCountsRequest(input) return out, req.Send() } -// PutEvaluationsWithContext is the same as PutEvaluations with the addition of +// GetAggregateDiscoveredResourceCountsWithContext is the same as GetAggregateDiscoveredResourceCounts with the addition of // the ability to pass a context and additional request options. // -// See PutEvaluations for details on how to use this API operation. +// See GetAggregateDiscoveredResourceCounts for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
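Following the `ACCOUNT_ID` grouping example in the doc comment above, a hedged sketch of `GetAggregateDiscoveredResourceCounts`; the input and output field names (`ConfigurationAggregatorName`, `GroupByKey`, `TotalDiscoveredResources`, `GroupedResourceCounts`) are assumptions not shown in this excerpt:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Group the aggregated resource counts by source account, mirroring the
	// ACCOUNT_ID example in the doc comment. Field names are assumptions.
	out, err := svc.GetAggregateDiscoveredResourceCounts(&configservice.GetAggregateDiscoveredResourceCountsInput{
		ConfigurationAggregatorName: aws.String("example-aggregator"),
		GroupByKey:                  aws.String("ACCOUNT_ID"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("total:", aws.Int64Value(out.TotalDiscoveredResources))
	for _, c := range out.GroupedResourceCounts {
		fmt.Println(aws.StringValue(c.GroupName), aws.Int64Value(c.ResourceCount))
	}
}
```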
-func (c *ConfigService) PutEvaluationsWithContext(ctx aws.Context, input *PutEvaluationsInput, opts ...request.Option) (*PutEvaluationsOutput, error) { - req, out := c.PutEvaluationsRequest(input) +func (c *ConfigService) GetAggregateDiscoveredResourceCountsWithContext(ctx aws.Context, input *GetAggregateDiscoveredResourceCountsInput, opts ...request.Option) (*GetAggregateDiscoveredResourceCountsOutput, error) { + req, out := c.GetAggregateDiscoveredResourceCountsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStartConfigRulesEvaluation = "StartConfigRulesEvaluation" +const opGetAggregateResourceConfig = "GetAggregateResourceConfig" -// StartConfigRulesEvaluationRequest generates a "aws/request.Request" representing the -// client's request for the StartConfigRulesEvaluation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetAggregateResourceConfigRequest generates a "aws/request.Request" representing the +// client's request for the GetAggregateResourceConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StartConfigRulesEvaluation for more information on using the StartConfigRulesEvaluation +// See GetAggregateResourceConfig for more information on using the GetAggregateResourceConfig // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartConfigRulesEvaluationRequest method. -// req, resp := client.StartConfigRulesEvaluationRequest(params) +// // Example sending a request using the GetAggregateResourceConfigRequest method. +// req, resp := client.GetAggregateResourceConfigRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StartConfigRulesEvaluation -func (c *ConfigService) StartConfigRulesEvaluationRequest(input *StartConfigRulesEvaluationInput) (req *request.Request, output *StartConfigRulesEvaluationOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetAggregateResourceConfig +func (c *ConfigService) GetAggregateResourceConfigRequest(input *GetAggregateResourceConfigInput) (req *request.Request, output *GetAggregateResourceConfigOutput) { op := &request.Operation{ - Name: opStartConfigRulesEvaluation, + Name: opGetAggregateResourceConfig, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StartConfigRulesEvaluationInput{} + input = &GetAggregateResourceConfigInput{} } - output = &StartConfigRulesEvaluationOutput{} + output = &GetAggregateResourceConfigOutput{} req = c.newRequest(op, input, output) return } -// StartConfigRulesEvaluation API operation for AWS Config. -// -// Runs an on-demand evaluation for the specified Config rules against the last -// known configuration state of the resources. Use StartConfigRulesEvaluation -// when you want to test a rule that you updated is working as expected. 
StartConfigRulesEvaluation -// does not re-record the latest configuration state for your resources; it -// re-runs an evaluation against the last known state of your resources. -// -// You can specify up to 25 Config rules per request. -// -// An existing StartConfigRulesEvaluation call must complete for the specified -// rules before you can call the API again. If you chose to have AWS Config -// stream to an Amazon SNS topic, you will receive a ConfigRuleEvaluationStarted -// notification when the evaluation starts. +// GetAggregateResourceConfig API operation for AWS Config. // -// You don't need to call the StartConfigRulesEvaluation API to run an evaluation -// for a new rule. When you create a new rule, AWS Config automatically evaluates -// your resources against the rule. -// -// The StartConfigRulesEvaluation API is useful if you want to run on-demand -// evaluations, such as the following example: -// -// You have a custom rule that evaluates your IAM resources every 24 hours. -// -// You update your Lambda function to add additional conditions to your rule. -// -// Instead of waiting for the next periodic evaluation, you call the StartConfigRulesEvaluation -// API. -// -// AWS Config invokes your Lambda function and evaluates your IAM resources. -// -// Your custom rule will still run periodic evaluations every 24 hours. +// Returns configuration item that is aggregated for your specific resource +// in a specific source account and region. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation StartConfigRulesEvaluation for usage and error information. +// API operation GetAggregateResourceConfig for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" -// One or more AWS Config rules in the request are invalid. Verify that the -// rule names are correct and try again. +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. // -// * ErrCodeLimitExceededException "LimitExceededException" -// This exception is thrown if an evaluation is in progress or if you call the -// StartConfigRulesEvaluation API more than once per minute. +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. // -// * ErrCodeResourceInUseException "ResourceInUseException" -// The rule is currently being deleted or the rule is deleting your evaluation -// results. Try your request again later. +// * ErrCodeOversizedConfigurationItemException "OversizedConfigurationItemException" +// The configuration item size is outside the allowable range. // -// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" -// One or more of the specified parameters are invalid. Verify that your parameters -// are valid and try again. +// * ErrCodeResourceNotDiscoveredException "ResourceNotDiscoveredException" +// You have specified a resource that is either unknown or has not been discovered. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StartConfigRulesEvaluation -func (c *ConfigService) StartConfigRulesEvaluation(input *StartConfigRulesEvaluationInput) (*StartConfigRulesEvaluationOutput, error) { - req, out := c.StartConfigRulesEvaluationRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetAggregateResourceConfig +func (c *ConfigService) GetAggregateResourceConfig(input *GetAggregateResourceConfigInput) (*GetAggregateResourceConfigOutput, error) { + req, out := c.GetAggregateResourceConfigRequest(input) return out, req.Send() } -// StartConfigRulesEvaluationWithContext is the same as StartConfigRulesEvaluation with the addition of +// GetAggregateResourceConfigWithContext is the same as GetAggregateResourceConfig with the addition of // the ability to pass a context and additional request options. // -// See StartConfigRulesEvaluation for details on how to use this API operation. +// See GetAggregateResourceConfig for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) StartConfigRulesEvaluationWithContext(ctx aws.Context, input *StartConfigRulesEvaluationInput, opts ...request.Option) (*StartConfigRulesEvaluationOutput, error) { - req, out := c.StartConfigRulesEvaluationRequest(input) +func (c *ConfigService) GetAggregateResourceConfigWithContext(ctx aws.Context, input *GetAggregateResourceConfigInput, opts ...request.Option) (*GetAggregateResourceConfigOutput, error) { + req, out := c.GetAggregateResourceConfigRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStartConfigurationRecorder = "StartConfigurationRecorder" +const opGetComplianceDetailsByConfigRule = "GetComplianceDetailsByConfigRule" -// StartConfigurationRecorderRequest generates a "aws/request.Request" representing the -// client's request for the StartConfigurationRecorder operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetComplianceDetailsByConfigRuleRequest generates a "aws/request.Request" representing the +// client's request for the GetComplianceDetailsByConfigRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StartConfigurationRecorder for more information on using the StartConfigurationRecorder +// See GetComplianceDetailsByConfigRule for more information on using the GetComplianceDetailsByConfigRule // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartConfigurationRecorderRequest method. -// req, resp := client.StartConfigurationRecorderRequest(params) +// // Example sending a request using the GetComplianceDetailsByConfigRuleRequest method. 
+// req, resp := client.GetComplianceDetailsByConfigRuleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StartConfigurationRecorder -func (c *ConfigService) StartConfigurationRecorderRequest(input *StartConfigurationRecorderInput) (req *request.Request, output *StartConfigurationRecorderOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceDetailsByConfigRule +func (c *ConfigService) GetComplianceDetailsByConfigRuleRequest(input *GetComplianceDetailsByConfigRuleInput) (req *request.Request, output *GetComplianceDetailsByConfigRuleOutput) { op := &request.Operation{ - Name: opStartConfigurationRecorder, + Name: opGetComplianceDetailsByConfigRule, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StartConfigurationRecorderInput{} + input = &GetComplianceDetailsByConfigRuleInput{} } - output = &StartConfigurationRecorderOutput{} + output = &GetComplianceDetailsByConfigRuleOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// StartConfigurationRecorder API operation for AWS Config. -// -// Starts recording configurations of the AWS resources you have selected to -// record in your AWS account. +// GetComplianceDetailsByConfigRule API operation for AWS Config. // -// You must have created at least one delivery channel to successfully start -// the configuration recorder. +// Returns the evaluation results for the specified AWS Config rule. The results +// indicate which AWS resources were evaluated by the rule, when each resource +// was last evaluated, and whether each resource complies with the rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation StartConfigurationRecorder for usage and error information. +// API operation GetComplianceDetailsByConfigRule for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchConfigurationRecorderException "NoSuchConfigurationRecorderException" -// You have specified a configuration recorder that does not exist. +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. // -// * ErrCodeNoAvailableDeliveryChannelException "NoAvailableDeliveryChannelException" -// There is no delivery channel available to record configurations. +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StartConfigurationRecorder -func (c *ConfigService) StartConfigurationRecorder(input *StartConfigurationRecorderInput) (*StartConfigurationRecorderOutput, error) { - req, out := c.StartConfigurationRecorderRequest(input) +// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" +// One or more AWS Config rules in the request are invalid. Verify that the +// rule names are correct and try again. 
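For orientation only, here is a hedged sketch of querying per-resource evaluation results for a single rule; the rule name is a placeholder and the NON_COMPLIANT filter is optional. The by-resource variant documented below takes a resource type and ID instead of a rule name but otherwise follows the same shape.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// List noncompliant evaluation results for one rule (rule name is a placeholder).
	out, err := svc.GetComplianceDetailsByConfigRule(&configservice.GetComplianceDetailsByConfigRuleInput{
		ConfigRuleName:  aws.String("example-rule"),
		ComplianceTypes: []*string{aws.String("NON_COMPLIANT")},
		Limit:           aws.Int64(10),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range out.EvaluationResults {
		// Each result identifies the evaluated resource and its compliance state.
		fmt.Println(aws.StringValue(r.ComplianceType), r.EvaluationResultIdentifier)
	}
}
```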
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceDetailsByConfigRule +func (c *ConfigService) GetComplianceDetailsByConfigRule(input *GetComplianceDetailsByConfigRuleInput) (*GetComplianceDetailsByConfigRuleOutput, error) { + req, out := c.GetComplianceDetailsByConfigRuleRequest(input) return out, req.Send() } -// StartConfigurationRecorderWithContext is the same as StartConfigurationRecorder with the addition of +// GetComplianceDetailsByConfigRuleWithContext is the same as GetComplianceDetailsByConfigRule with the addition of // the ability to pass a context and additional request options. // -// See StartConfigurationRecorder for details on how to use this API operation. +// See GetComplianceDetailsByConfigRule for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) StartConfigurationRecorderWithContext(ctx aws.Context, input *StartConfigurationRecorderInput, opts ...request.Option) (*StartConfigurationRecorderOutput, error) { - req, out := c.StartConfigurationRecorderRequest(input) +func (c *ConfigService) GetComplianceDetailsByConfigRuleWithContext(ctx aws.Context, input *GetComplianceDetailsByConfigRuleInput, opts ...request.Option) (*GetComplianceDetailsByConfigRuleOutput, error) { + req, out := c.GetComplianceDetailsByConfigRuleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStopConfigurationRecorder = "StopConfigurationRecorder" +const opGetComplianceDetailsByResource = "GetComplianceDetailsByResource" -// StopConfigurationRecorderRequest generates a "aws/request.Request" representing the -// client's request for the StopConfigurationRecorder operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetComplianceDetailsByResourceRequest generates a "aws/request.Request" representing the +// client's request for the GetComplianceDetailsByResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StopConfigurationRecorder for more information on using the StopConfigurationRecorder +// See GetComplianceDetailsByResource for more information on using the GetComplianceDetailsByResource // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StopConfigurationRecorderRequest method. -// req, resp := client.StopConfigurationRecorderRequest(params) +// // Example sending a request using the GetComplianceDetailsByResourceRequest method. 
+// req, resp := client.GetComplianceDetailsByResourceRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StopConfigurationRecorder -func (c *ConfigService) StopConfigurationRecorderRequest(input *StopConfigurationRecorderInput) (req *request.Request, output *StopConfigurationRecorderOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceDetailsByResource +func (c *ConfigService) GetComplianceDetailsByResourceRequest(input *GetComplianceDetailsByResourceInput) (req *request.Request, output *GetComplianceDetailsByResourceOutput) { op := &request.Operation{ - Name: opStopConfigurationRecorder, + Name: opGetComplianceDetailsByResource, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StopConfigurationRecorderInput{} + input = &GetComplianceDetailsByResourceInput{} } - output = &StopConfigurationRecorderOutput{} + output = &GetComplianceDetailsByResourceOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// StopConfigurationRecorder API operation for AWS Config. +// GetComplianceDetailsByResource API operation for AWS Config. // -// Stops recording configurations of the AWS resources you have selected to -// record in your AWS account. +// Returns the evaluation results for the specified AWS resource. The results +// indicate which AWS Config rules were used to evaluate the resource, when +// each rule was last used, and whether the resource complies with each rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Config's -// API operation StopConfigurationRecorder for usage and error information. +// API operation GetComplianceDetailsByResource for usage and error information. // // Returned Error Codes: -// * ErrCodeNoSuchConfigurationRecorderException "NoSuchConfigurationRecorderException" -// You have specified a configuration recorder that does not exist. +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StopConfigurationRecorder -func (c *ConfigService) StopConfigurationRecorder(input *StopConfigurationRecorderInput) (*StopConfigurationRecorderOutput, error) { - req, out := c.StopConfigurationRecorderRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceDetailsByResource +func (c *ConfigService) GetComplianceDetailsByResource(input *GetComplianceDetailsByResourceInput) (*GetComplianceDetailsByResourceOutput, error) { + req, out := c.GetComplianceDetailsByResourceRequest(input) return out, req.Send() } -// StopConfigurationRecorderWithContext is the same as StopConfigurationRecorder with the addition of +// GetComplianceDetailsByResourceWithContext is the same as GetComplianceDetailsByResource with the addition of // the ability to pass a context and additional request options. // -// See StopConfigurationRecorder for details on how to use this API operation. 
+// See GetComplianceDetailsByResource for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ConfigService) StopConfigurationRecorderWithContext(ctx aws.Context, input *StopConfigurationRecorderInput, opts ...request.Option) (*StopConfigurationRecorderOutput, error) { - req, out := c.StopConfigurationRecorderRequest(input) +func (c *ConfigService) GetComplianceDetailsByResourceWithContext(ctx aws.Context, input *GetComplianceDetailsByResourceInput, opts ...request.Option) (*GetComplianceDetailsByResourceOutput, error) { + req, out := c.GetComplianceDetailsByResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetComplianceSummaryByConfigRule = "GetComplianceSummaryByConfigRule" + +// GetComplianceSummaryByConfigRuleRequest generates a "aws/request.Request" representing the +// client's request for the GetComplianceSummaryByConfigRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetComplianceSummaryByConfigRule for more information on using the GetComplianceSummaryByConfigRule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetComplianceSummaryByConfigRuleRequest method. +// req, resp := client.GetComplianceSummaryByConfigRuleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceSummaryByConfigRule +func (c *ConfigService) GetComplianceSummaryByConfigRuleRequest(input *GetComplianceSummaryByConfigRuleInput) (req *request.Request, output *GetComplianceSummaryByConfigRuleOutput) { + op := &request.Operation{ + Name: opGetComplianceSummaryByConfigRule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetComplianceSummaryByConfigRuleInput{} + } + + output = &GetComplianceSummaryByConfigRuleOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetComplianceSummaryByConfigRule API operation for AWS Config. +// +// Returns the number of AWS Config rules that are compliant and noncompliant, +// up to a maximum of 25 for each. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation GetComplianceSummaryByConfigRule for usage and error information. 
+// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceSummaryByConfigRule +func (c *ConfigService) GetComplianceSummaryByConfigRule(input *GetComplianceSummaryByConfigRuleInput) (*GetComplianceSummaryByConfigRuleOutput, error) { + req, out := c.GetComplianceSummaryByConfigRuleRequest(input) + return out, req.Send() +} + +// GetComplianceSummaryByConfigRuleWithContext is the same as GetComplianceSummaryByConfigRule with the addition of +// the ability to pass a context and additional request options. +// +// See GetComplianceSummaryByConfigRule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) GetComplianceSummaryByConfigRuleWithContext(ctx aws.Context, input *GetComplianceSummaryByConfigRuleInput, opts ...request.Option) (*GetComplianceSummaryByConfigRuleOutput, error) { + req, out := c.GetComplianceSummaryByConfigRuleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetComplianceSummaryByResourceType = "GetComplianceSummaryByResourceType" + +// GetComplianceSummaryByResourceTypeRequest generates a "aws/request.Request" representing the +// client's request for the GetComplianceSummaryByResourceType operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetComplianceSummaryByResourceType for more information on using the GetComplianceSummaryByResourceType +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetComplianceSummaryByResourceTypeRequest method. +// req, resp := client.GetComplianceSummaryByResourceTypeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceSummaryByResourceType +func (c *ConfigService) GetComplianceSummaryByResourceTypeRequest(input *GetComplianceSummaryByResourceTypeInput) (req *request.Request, output *GetComplianceSummaryByResourceTypeOutput) { + op := &request.Operation{ + Name: opGetComplianceSummaryByResourceType, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetComplianceSummaryByResourceTypeInput{} + } + + output = &GetComplianceSummaryByResourceTypeOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetComplianceSummaryByResourceType API operation for AWS Config. +// +// Returns the number of resources that are compliant and the number that are +// noncompliant. You can specify one or more resource types to get these numbers +// for each resource type. The maximum number returned is 100. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
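To make the two compliance-summary operations concrete, a rough usage sketch follows (not part of the generated file); the S3 resource type is just an example value.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Overall rule compliance counts (capped at 25 compliant and 25 noncompliant).
	byRule, err := svc.GetComplianceSummaryByConfigRule(&configservice.GetComplianceSummaryByConfigRuleInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(byRule.ComplianceSummary)

	// Per-resource-type compliance counts for one example resource type.
	byType, err := svc.GetComplianceSummaryByResourceType(&configservice.GetComplianceSummaryByResourceTypeInput{
		ResourceTypes: []*string{aws.String("AWS::S3::Bucket")},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range byType.ComplianceSummariesByResourceType {
		fmt.Println(aws.StringValue(s.ResourceType), s.ComplianceSummary)
	}
}
```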
+// +// See the AWS API reference guide for AWS Config's +// API operation GetComplianceSummaryByResourceType for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetComplianceSummaryByResourceType +func (c *ConfigService) GetComplianceSummaryByResourceType(input *GetComplianceSummaryByResourceTypeInput) (*GetComplianceSummaryByResourceTypeOutput, error) { + req, out := c.GetComplianceSummaryByResourceTypeRequest(input) + return out, req.Send() +} + +// GetComplianceSummaryByResourceTypeWithContext is the same as GetComplianceSummaryByResourceType with the addition of +// the ability to pass a context and additional request options. +// +// See GetComplianceSummaryByResourceType for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) GetComplianceSummaryByResourceTypeWithContext(ctx aws.Context, input *GetComplianceSummaryByResourceTypeInput, opts ...request.Option) (*GetComplianceSummaryByResourceTypeOutput, error) { + req, out := c.GetComplianceSummaryByResourceTypeRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// Indicates whether an AWS resource or AWS Config rule is compliant and provides -// the number of contributors that affect the compliance. -type Compliance struct { +const opGetDiscoveredResourceCounts = "GetDiscoveredResourceCounts" + +// GetDiscoveredResourceCountsRequest generates a "aws/request.Request" representing the +// client's request for the GetDiscoveredResourceCounts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetDiscoveredResourceCounts for more information on using the GetDiscoveredResourceCounts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetDiscoveredResourceCountsRequest method. +// req, resp := client.GetDiscoveredResourceCountsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetDiscoveredResourceCounts +func (c *ConfigService) GetDiscoveredResourceCountsRequest(input *GetDiscoveredResourceCountsInput) (req *request.Request, output *GetDiscoveredResourceCountsOutput) { + op := &request.Operation{ + Name: opGetDiscoveredResourceCounts, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetDiscoveredResourceCountsInput{} + } + + output = &GetDiscoveredResourceCountsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetDiscoveredResourceCounts API operation for AWS Config. 
+// +// Returns the resource types, the number of each resource type, and the total +// number of resources that AWS Config is recording in this region for your +// AWS account. +// +// Example +// +// AWS Config is recording three resource types in the US East (Ohio) Region +// for your account: 25 EC2 instances, 20 IAM users, and 15 S3 buckets. +// +// You make a call to the GetDiscoveredResourceCounts action and specify that +// you want all resource types. +// +// AWS Config returns the following: +// +// The resource types (EC2 instances, IAM users, and S3 buckets). +// +// The number of each resource type (25, 20, and 15). +// +// The total number of all resources (60). +// +// The response is paginated. By default, AWS Config lists 100 ResourceCount +// objects on each page. You can customize this number with the limit parameter. +// The response includes a nextToken string. To get the next page of results, +// run the request again and specify the string for the nextToken parameter. +// +// If you make a call to the GetDiscoveredResourceCounts action, you might not +// immediately receive resource counts in the following situations: +// +// You are a new AWS Config customer. +// +// You just enabled resource recording. +// +// It might take a few minutes for AWS Config to record and count your resources. +// Wait a few minutes and then retry the GetDiscoveredResourceCounts action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation GetDiscoveredResourceCounts for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. +// +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetDiscoveredResourceCounts +func (c *ConfigService) GetDiscoveredResourceCounts(input *GetDiscoveredResourceCountsInput) (*GetDiscoveredResourceCountsOutput, error) { + req, out := c.GetDiscoveredResourceCountsRequest(input) + return out, req.Send() +} + +// GetDiscoveredResourceCountsWithContext is the same as GetDiscoveredResourceCounts with the addition of +// the ability to pass a context and additional request options. +// +// See GetDiscoveredResourceCounts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) GetDiscoveredResourceCountsWithContext(ctx aws.Context, input *GetDiscoveredResourceCountsInput, opts ...request.Option) (*GetDiscoveredResourceCountsOutput, error) { + req, out := c.GetDiscoveredResourceCountsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
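The nextToken flow described above can be driven by hand; the following is a minimal sketch under those assumptions, not an excerpt from the SDK.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	input := &configservice.GetDiscoveredResourceCountsInput{Limit: aws.Int64(100)}
	for {
		out, err := svc.GetDiscoveredResourceCounts(input)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("total resources:", aws.Int64Value(out.TotalDiscoveredResources))
		for _, rc := range out.ResourceCounts {
			fmt.Println(aws.StringValue(rc.ResourceType), aws.Int64Value(rc.Count))
		}
		// Follow the nextToken until the service stops returning one.
		if aws.StringValue(out.NextToken) == "" {
			return
		}
		input.NextToken = out.NextToken
	}
}
```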
+ return out, req.Send() +} + +const opGetResourceConfigHistory = "GetResourceConfigHistory" + +// GetResourceConfigHistoryRequest generates a "aws/request.Request" representing the +// client's request for the GetResourceConfigHistory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetResourceConfigHistory for more information on using the GetResourceConfigHistory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetResourceConfigHistoryRequest method. +// req, resp := client.GetResourceConfigHistoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetResourceConfigHistory +func (c *ConfigService) GetResourceConfigHistoryRequest(input *GetResourceConfigHistoryInput) (req *request.Request, output *GetResourceConfigHistoryOutput) { + op := &request.Operation{ + Name: opGetResourceConfigHistory, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &GetResourceConfigHistoryInput{} + } + + output = &GetResourceConfigHistoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetResourceConfigHistory API operation for AWS Config. +// +// Returns a list of configuration items for the specified resource. The list +// contains details about each state of the resource during the specified time +// interval. If you specified a retention period to retain your ConfigurationItems +// between a minimum of 30 days and a maximum of 7 years (2557 days), AWS Config +// returns the ConfigurationItems for the specified retention period. +// +// The response is paginated. By default, AWS Config returns a limit of 10 configuration +// items per page. You can customize this number with the limit parameter. The +// response includes a nextToken string. To get the next page of results, run +// the request again and specify the string for the nextToken parameter. +// +// Each call to the API is limited to span a duration of seven days. It is likely +// that the number of records returned is smaller than the specified limit. +// In such cases, you can make another call, using the nextToken. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation GetResourceConfigHistory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. +// +// * ErrCodeInvalidTimeRangeException "InvalidTimeRangeException" +// The specified time range is not valid. The earlier time is not chronologically +// before the later time. +// +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. 
+// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" +// There are no configuration recorders available to provide the role needed +// to describe your resources. Create a configuration recorder. +// +// * ErrCodeResourceNotDiscoveredException "ResourceNotDiscoveredException" +// You have specified a resource that is either unknown or has not been discovered. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/GetResourceConfigHistory +func (c *ConfigService) GetResourceConfigHistory(input *GetResourceConfigHistoryInput) (*GetResourceConfigHistoryOutput, error) { + req, out := c.GetResourceConfigHistoryRequest(input) + return out, req.Send() +} + +// GetResourceConfigHistoryWithContext is the same as GetResourceConfigHistory with the addition of +// the ability to pass a context and additional request options. +// +// See GetResourceConfigHistory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) GetResourceConfigHistoryWithContext(ctx aws.Context, input *GetResourceConfigHistoryInput, opts ...request.Option) (*GetResourceConfigHistoryOutput, error) { + req, out := c.GetResourceConfigHistoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetResourceConfigHistoryPages iterates over the pages of a GetResourceConfigHistory operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetResourceConfigHistory method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetResourceConfigHistory operation. +// pageNum := 0 +// err := client.GetResourceConfigHistoryPages(params, +// func(page *GetResourceConfigHistoryOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ConfigService) GetResourceConfigHistoryPages(input *GetResourceConfigHistoryInput, fn func(*GetResourceConfigHistoryOutput, bool) bool) error { + return c.GetResourceConfigHistoryPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetResourceConfigHistoryPagesWithContext same as GetResourceConfigHistoryPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
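Building on the Pages helper documented above, a hedged end-to-end sketch might look like this; the security-group ID and the seven-day window are placeholders chosen to stay within the documented per-call limit.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Walk the last seven days of configuration history for one resource
	// (type and ID are placeholders), letting the SDK handle nextToken.
	err := svc.GetResourceConfigHistoryPages(&configservice.GetResourceConfigHistoryInput{
		ResourceType: aws.String("AWS::EC2::SecurityGroup"),
		ResourceId:   aws.String("sg-0123456789abcdef0"),
		EarlierTime:  aws.Time(time.Now().AddDate(0, 0, -7)),
		LaterTime:    aws.Time(time.Now()),
		Limit:        aws.Int64(10),
	}, func(page *configservice.GetResourceConfigHistoryOutput, lastPage bool) bool {
		for _, ci := range page.ConfigurationItems {
			fmt.Println(ci.ConfigurationItemCaptureTime, aws.StringValue(ci.ConfigurationItemStatus))
		}
		return true // keep requesting pages
	})
	if err != nil {
		log.Fatal(err)
	}
}
```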
+func (c *ConfigService) GetResourceConfigHistoryPagesWithContext(ctx aws.Context, input *GetResourceConfigHistoryInput, fn func(*GetResourceConfigHistoryOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetResourceConfigHistoryInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetResourceConfigHistoryRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetResourceConfigHistoryOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListAggregateDiscoveredResources = "ListAggregateDiscoveredResources" + +// ListAggregateDiscoveredResourcesRequest generates a "aws/request.Request" representing the +// client's request for the ListAggregateDiscoveredResources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAggregateDiscoveredResources for more information on using the ListAggregateDiscoveredResources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAggregateDiscoveredResourcesRequest method. +// req, resp := client.ListAggregateDiscoveredResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/ListAggregateDiscoveredResources +func (c *ConfigService) ListAggregateDiscoveredResourcesRequest(input *ListAggregateDiscoveredResourcesInput) (req *request.Request, output *ListAggregateDiscoveredResourcesOutput) { + op := &request.Operation{ + Name: opListAggregateDiscoveredResources, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListAggregateDiscoveredResourcesInput{} + } + + output = &ListAggregateDiscoveredResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAggregateDiscoveredResources API operation for AWS Config. +// +// Accepts a resource type and returns a list of resource identifiers that are +// aggregated for a specific resource type across accounts and regions. A resource +// identifier includes the resource type, ID, (if available) the custom resource +// name, source account, and source region. You can narrow the results to include +// only resources that have specific resource IDs, or a resource name, or source +// account ID, or source region. +// +// For example, if the input consists of accountID 12345678910 and the region +// is us-east-1 for resource type AWS::EC2::Instance then the API returns all +// the EC2 instance identifiers of accountID 12345678910 and region us-east-1. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation ListAggregateDiscoveredResources for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. +// +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// * ErrCodeNoSuchConfigurationAggregatorException "NoSuchConfigurationAggregatorException" +// You have specified a configuration aggregator that does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/ListAggregateDiscoveredResources +func (c *ConfigService) ListAggregateDiscoveredResources(input *ListAggregateDiscoveredResourcesInput) (*ListAggregateDiscoveredResourcesOutput, error) { + req, out := c.ListAggregateDiscoveredResourcesRequest(input) + return out, req.Send() +} + +// ListAggregateDiscoveredResourcesWithContext is the same as ListAggregateDiscoveredResources with the addition of +// the ability to pass a context and additional request options. +// +// See ListAggregateDiscoveredResources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) ListAggregateDiscoveredResourcesWithContext(ctx aws.Context, input *ListAggregateDiscoveredResourcesInput, opts ...request.Option) (*ListAggregateDiscoveredResourcesOutput, error) { + req, out := c.ListAggregateDiscoveredResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListDiscoveredResources = "ListDiscoveredResources" + +// ListDiscoveredResourcesRequest generates a "aws/request.Request" representing the +// client's request for the ListDiscoveredResources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListDiscoveredResources for more information on using the ListDiscoveredResources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListDiscoveredResourcesRequest method. +// req, resp := client.ListDiscoveredResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/ListDiscoveredResources +func (c *ConfigService) ListDiscoveredResourcesRequest(input *ListDiscoveredResourcesInput) (req *request.Request, output *ListDiscoveredResourcesOutput) { + op := &request.Operation{ + Name: opListDiscoveredResources, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListDiscoveredResourcesInput{} + } + + output = &ListDiscoveredResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListDiscoveredResources API operation for AWS Config. 
+// +// Accepts a resource type and returns a list of resource identifiers for the +// resources of that type. A resource identifier includes the resource type, +// ID, and (if available) the custom resource name. The results consist of resources +// that AWS Config has discovered, including those that AWS Config is not currently +// recording. You can narrow the results to include only resources that have +// specific resource IDs or a resource name. +// +// You can specify either resource IDs or a resource name, but not both, in +// the same request. +// +// The response is paginated. By default, AWS Config lists 100 resource identifiers +// on each page. You can customize this number with the limit parameter. The +// response includes a nextToken string. To get the next page of results, run +// the request again and specify the string for the nextToken parameter. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation ListDiscoveredResources for usage and error information. +// +// Returned Error Codes: +// * ErrCodeValidationException "ValidationException" +// The requested action is not valid. +// +// * ErrCodeInvalidLimitException "InvalidLimitException" +// The specified limit is outside the allowable range. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The specified next token is invalid. Specify the nextToken string that was +// returned in the previous response to get the next page of results. +// +// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" +// There are no configuration recorders available to provide the role needed +// to describe your resources. Create a configuration recorder. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/ListDiscoveredResources +func (c *ConfigService) ListDiscoveredResources(input *ListDiscoveredResourcesInput) (*ListDiscoveredResourcesOutput, error) { + req, out := c.ListDiscoveredResourcesRequest(input) + return out, req.Send() +} + +// ListDiscoveredResourcesWithContext is the same as ListDiscoveredResources with the addition of +// the ability to pass a context and additional request options. +// +// See ListDiscoveredResources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) ListDiscoveredResourcesWithContext(ctx aws.Context, input *ListDiscoveredResourcesInput, opts ...request.Option) (*ListDiscoveredResourcesOutput, error) { + req, out := c.ListDiscoveredResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutAggregationAuthorization = "PutAggregationAuthorization" + +// PutAggregationAuthorizationRequest generates a "aws/request.Request" representing the +// client's request for the PutAggregationAuthorization operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
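As a rough illustration of the listing behaviour described above (not part of the vendored code), the first page of EC2 instance identifiers could be fetched like so; the resource type and limit are example values.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// First page of EC2 instance identifiers known to AWS Config.
	out, err := svc.ListDiscoveredResources(&configservice.ListDiscoveredResourcesInput{
		ResourceType: aws.String("AWS::EC2::Instance"),
		Limit:        aws.Int64(100),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range out.ResourceIdentifiers {
		fmt.Println(aws.StringValue(id.ResourceId), aws.StringValue(id.ResourceName))
	}
	// out.NextToken, when non-empty, is passed back on a follow-up call
	// to retrieve the next page.
}
```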
+// the "output" return value is not valid until after Send returns without error. +// +// See PutAggregationAuthorization for more information on using the PutAggregationAuthorization +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutAggregationAuthorizationRequest method. +// req, resp := client.PutAggregationAuthorizationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutAggregationAuthorization +func (c *ConfigService) PutAggregationAuthorizationRequest(input *PutAggregationAuthorizationInput) (req *request.Request, output *PutAggregationAuthorizationOutput) { + op := &request.Operation{ + Name: opPutAggregationAuthorization, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutAggregationAuthorizationInput{} + } + + output = &PutAggregationAuthorizationOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutAggregationAuthorization API operation for AWS Config. +// +// Authorizes the aggregator account and region to collect data from the source +// account and region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation PutAggregationAuthorization for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutAggregationAuthorization +func (c *ConfigService) PutAggregationAuthorization(input *PutAggregationAuthorizationInput) (*PutAggregationAuthorizationOutput, error) { + req, out := c.PutAggregationAuthorizationRequest(input) + return out, req.Send() +} + +// PutAggregationAuthorizationWithContext is the same as PutAggregationAuthorization with the addition of +// the ability to pass a context and additional request options. +// +// See PutAggregationAuthorization for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) PutAggregationAuthorizationWithContext(ctx aws.Context, input *PutAggregationAuthorizationInput, opts ...request.Option) (*PutAggregationAuthorizationOutput, error) { + req, out := c.PutAggregationAuthorizationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutConfigRule = "PutConfigRule" + +// PutConfigRuleRequest generates a "aws/request.Request" representing the +// client's request for the PutConfigRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See PutConfigRule for more information on using the PutConfigRule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutConfigRuleRequest method. +// req, resp := client.PutConfigRuleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigRule +func (c *ConfigService) PutConfigRuleRequest(input *PutConfigRuleInput) (req *request.Request, output *PutConfigRuleOutput) { + op := &request.Operation{ + Name: opPutConfigRule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutConfigRuleInput{} + } + + output = &PutConfigRuleOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutConfigRule API operation for AWS Config. +// +// Adds or updates an AWS Config rule for evaluating whether your AWS resources +// comply with your desired configurations. +// +// You can use this action for custom AWS Config rules and AWS managed Config +// rules. A custom AWS Config rule is a rule that you develop and maintain. +// An AWS managed Config rule is a customizable, predefined rule that AWS Config +// provides. +// +// If you are adding a new custom AWS Config rule, you must first create the +// AWS Lambda function that the rule invokes to evaluate your resources. When +// you use the PutConfigRule action to add the rule to AWS Config, you must +// specify the Amazon Resource Name (ARN) that AWS Lambda assigns to the function. +// Specify the ARN for the SourceIdentifier key. This key is part of the Source +// object, which is part of the ConfigRule object. +// +// If you are adding an AWS managed Config rule, specify the rule's identifier +// for the SourceIdentifier key. To reference AWS managed Config rule identifiers, +// see About AWS Managed Config Rules (http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html). +// +// For any new rule that you add, specify the ConfigRuleName in the ConfigRule +// object. Do not specify the ConfigRuleArn or the ConfigRuleId. These values +// are generated by AWS Config for new rules. +// +// If you are updating a rule that you added previously, you can specify the +// rule by ConfigRuleName, ConfigRuleId, or ConfigRuleArn in the ConfigRule +// data type that you use in this request. +// +// The maximum number of rules that AWS Config supports is 50. +// +// For information about requesting a rule limit increase, see AWS Config Limits +// (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_config) +// in the AWS General Reference Guide. +// +// For more information about developing and using AWS Config rules, see Evaluating +// AWS Resource Configurations with AWS Config (http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) +// in the AWS Config Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Config's +// API operation PutConfigRule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// * ErrCodeMaxNumberOfConfigRulesExceededException "MaxNumberOfConfigRulesExceededException" +// Failed to add the AWS Config rule because the account already contains the +// maximum number of 50 rules. Consider deleting any deactivated rules before +// you add new rules. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The rule is currently being deleted or the rule is deleting your evaluation +// results. Try your request again later. +// +// * ErrCodeInsufficientPermissionsException "InsufficientPermissionsException" +// Indicates one of the following errors: +// +// * The rule cannot be created because the IAM role assigned to AWS Config +// lacks permissions to perform the config:Put* action. +// +// * The AWS Lambda function cannot be invoked. Check the function ARN, and +// check the function's permissions. +// +// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" +// There are no configuration recorders available to provide the role needed +// to describe your resources. Create a configuration recorder. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigRule +func (c *ConfigService) PutConfigRule(input *PutConfigRuleInput) (*PutConfigRuleOutput, error) { + req, out := c.PutConfigRuleRequest(input) + return out, req.Send() +} + +// PutConfigRuleWithContext is the same as PutConfigRule with the addition of +// the ability to pass a context and additional request options. +// +// See PutConfigRule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) PutConfigRuleWithContext(ctx aws.Context, input *PutConfigRuleInput, opts ...request.Option) (*PutConfigRuleOutput, error) { + req, out := c.PutConfigRuleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutConfigurationAggregator = "PutConfigurationAggregator" + +// PutConfigurationAggregatorRequest generates a "aws/request.Request" representing the +// client's request for the PutConfigurationAggregator operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutConfigurationAggregator for more information on using the PutConfigurationAggregator +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutConfigurationAggregatorRequest method. 
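For context, a hedged sketch of creating an account-based aggregator follows; the aggregator name and account IDs are placeholders, and an organization-based aggregator would use OrganizationAggregationSource instead.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Aggregate from two individual source accounts across all regions
	// (aggregator name and account IDs are placeholders).
	out, err := svc.PutConfigurationAggregator(&configservice.PutConfigurationAggregatorInput{
		ConfigurationAggregatorName: aws.String("example-aggregator"),
		AccountAggregationSources: []*configservice.AccountAggregationSource{
			{
				AccountIds:    []*string{aws.String("111111111111"), aws.String("222222222222")},
				AllAwsRegions: aws.Bool(true),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.ConfigurationAggregator)
}
```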
+// req, resp := client.PutConfigurationAggregatorRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigurationAggregator
+func (c *ConfigService) PutConfigurationAggregatorRequest(input *PutConfigurationAggregatorInput) (req *request.Request, output *PutConfigurationAggregatorOutput) {
+ op := &request.Operation{
+ Name: opPutConfigurationAggregator,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &PutConfigurationAggregatorInput{}
+ }
+
+ output = &PutConfigurationAggregatorOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// PutConfigurationAggregator API operation for AWS Config.
+//
+// Creates and updates the configuration aggregator with the selected source
+// accounts and regions. The source account can be individual account(s) or
+// an organization.
+//
+// AWS Config should be enabled in source accounts and regions you want to aggregate.
+//
+// If your source type is an organization, you must be signed in to the master
+// account and all features must be enabled in your organization. AWS Config
+// calls EnableAwsServiceAccess API to enable integration between AWS Config
+// and AWS Organizations.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for AWS Config's
+// API operation PutConfigurationAggregator for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeInvalidParameterValueException "InvalidParameterValueException"
+// One or more of the specified parameters are invalid. Verify that your parameters
+// are valid and try again.
+//
+// * ErrCodeLimitExceededException "LimitExceededException"
+// For StartConfigRulesEvaluation API, this exception is thrown if an evaluation
+// is in progress or if you call the StartConfigRulesEvaluation API more than
+// once per minute.
+//
+// For PutConfigurationAggregator API, this exception is thrown if the number
+// of accounts and aggregators exceeds the limit.
+//
+// * ErrCodeInvalidRoleException "InvalidRoleException"
+// You have provided a null or empty role ARN.
+//
+// * ErrCodeOrganizationAccessDeniedException "OrganizationAccessDeniedException"
+// No permission to call the EnableAWSServiceAccess API.
+//
+// * ErrCodeNoAvailableOrganizationException "NoAvailableOrganizationException"
+// Organization is no longer available.
+//
+// * ErrCodeOrganizationAllFeaturesNotEnabledException "OrganizationAllFeaturesNotEnabledException"
+// The configuration aggregator cannot be created because the organization does
+// not have all features enabled.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigurationAggregator
+func (c *ConfigService) PutConfigurationAggregator(input *PutConfigurationAggregatorInput) (*PutConfigurationAggregatorOutput, error) {
+ req, out := c.PutConfigurationAggregatorRequest(input)
+ return out, req.Send()
+}
+
+// PutConfigurationAggregatorWithContext is the same as PutConfigurationAggregator with the addition of
+// the ability to pass a context and additional request options.
+//
+// See PutConfigurationAggregator for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur.
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) PutConfigurationAggregatorWithContext(ctx aws.Context, input *PutConfigurationAggregatorInput, opts ...request.Option) (*PutConfigurationAggregatorOutput, error) { + req, out := c.PutConfigurationAggregatorRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutConfigurationRecorder = "PutConfigurationRecorder" + +// PutConfigurationRecorderRequest generates a "aws/request.Request" representing the +// client's request for the PutConfigurationRecorder operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutConfigurationRecorder for more information on using the PutConfigurationRecorder +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutConfigurationRecorderRequest method. +// req, resp := client.PutConfigurationRecorderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigurationRecorder +func (c *ConfigService) PutConfigurationRecorderRequest(input *PutConfigurationRecorderInput) (req *request.Request, output *PutConfigurationRecorderOutput) { + op := &request.Operation{ + Name: opPutConfigurationRecorder, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutConfigurationRecorderInput{} + } + + output = &PutConfigurationRecorderOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutConfigurationRecorder API operation for AWS Config. +// +// Creates a new configuration recorder to record the selected resource configurations. +// +// You can use this action to change the role roleARN or the recordingGroup +// of an existing recorder. To change the role, call the action on the existing +// configuration recorder and specify a role. +// +// Currently, you can specify only one configuration recorder per region in +// your account. +// +// If ConfigurationRecorder does not have the recordingGroup parameter specified, +// the default is to record all supported resource types. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation PutConfigurationRecorder for usage and error information. +// +// Returned Error Codes: +// * ErrCodeMaxNumberOfConfigurationRecordersExceededException "MaxNumberOfConfigurationRecordersExceededException" +// You have reached the limit of the number of recorders you can create. +// +// * ErrCodeInvalidConfigurationRecorderNameException "InvalidConfigurationRecorderNameException" +// You have provided a configuration recorder name that is not valid. 
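A minimal recorder sketch, assuming a placeholder role ARN, is shown below; it records all supported resource types as described above.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Create (or update) the single per-region recorder; the role ARN is a
	// placeholder and must allow AWS Config to describe your resources.
	_, err := svc.PutConfigurationRecorder(&configservice.PutConfigurationRecorderInput{
		ConfigurationRecorder: &configservice.ConfigurationRecorder{
			Name:    aws.String("default"),
			RoleARN: aws.String("arn:aws:iam::111111111111:role/example-config-role"),
			RecordingGroup: &configservice.RecordingGroup{
				AllSupported:               aws.Bool(true),
				IncludeGlobalResourceTypes: aws.Bool(true),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```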
+// +// * ErrCodeInvalidRoleException "InvalidRoleException" +// You have provided a null or empty role ARN. +// +// * ErrCodeInvalidRecordingGroupException "InvalidRecordingGroupException" +// AWS Config throws an exception if the recording group does not contain a +// valid list of resource types. Invalid values might also be incorrectly formatted. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutConfigurationRecorder +func (c *ConfigService) PutConfigurationRecorder(input *PutConfigurationRecorderInput) (*PutConfigurationRecorderOutput, error) { + req, out := c.PutConfigurationRecorderRequest(input) + return out, req.Send() +} + +// PutConfigurationRecorderWithContext is the same as PutConfigurationRecorder with the addition of +// the ability to pass a context and additional request options. +// +// See PutConfigurationRecorder for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) PutConfigurationRecorderWithContext(ctx aws.Context, input *PutConfigurationRecorderInput, opts ...request.Option) (*PutConfigurationRecorderOutput, error) { + req, out := c.PutConfigurationRecorderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutDeliveryChannel = "PutDeliveryChannel" + +// PutDeliveryChannelRequest generates a "aws/request.Request" representing the +// client's request for the PutDeliveryChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutDeliveryChannel for more information on using the PutDeliveryChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutDeliveryChannelRequest method. +// req, resp := client.PutDeliveryChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutDeliveryChannel +func (c *ConfigService) PutDeliveryChannelRequest(input *PutDeliveryChannelInput) (req *request.Request, output *PutDeliveryChannelOutput) { + op := &request.Operation{ + Name: opPutDeliveryChannel, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutDeliveryChannelInput{} + } + + output = &PutDeliveryChannelOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutDeliveryChannel API operation for AWS Config. +// +// Creates a delivery channel object to deliver configuration information to +// an Amazon S3 bucket and Amazon SNS topic. +// +// Before you can create a delivery channel, you must create a configuration +// recorder. +// +// You can use this action to change the Amazon S3 bucket or an Amazon SNS topic +// of the existing delivery channel. 
To change the Amazon S3 bucket or an Amazon +// SNS topic, call this action and specify the changed values for the S3 bucket +// and the SNS topic. If you specify a different value for either the S3 bucket +// or the SNS topic, this action will keep the existing value for the parameter +// that is not changed. +// +// You can have only one delivery channel per region in your account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation PutDeliveryChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeMaxNumberOfDeliveryChannelsExceededException "MaxNumberOfDeliveryChannelsExceededException" +// You have reached the limit of the number of delivery channels you can create. +// +// * ErrCodeNoAvailableConfigurationRecorderException "NoAvailableConfigurationRecorderException" +// There are no configuration recorders available to provide the role needed +// to describe your resources. Create a configuration recorder. +// +// * ErrCodeInvalidDeliveryChannelNameException "InvalidDeliveryChannelNameException" +// The specified delivery channel name is not valid. +// +// * ErrCodeNoSuchBucketException "NoSuchBucketException" +// The specified Amazon S3 bucket does not exist. +// +// * ErrCodeInvalidS3KeyPrefixException "InvalidS3KeyPrefixException" +// The specified Amazon S3 key prefix is not valid. +// +// * ErrCodeInvalidSNSTopicARNException "InvalidSNSTopicARNException" +// The specified Amazon SNS topic does not exist. +// +// * ErrCodeInsufficientDeliveryPolicyException "InsufficientDeliveryPolicyException" +// Your Amazon S3 bucket policy does not permit AWS Config to write to it. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutDeliveryChannel +func (c *ConfigService) PutDeliveryChannel(input *PutDeliveryChannelInput) (*PutDeliveryChannelOutput, error) { + req, out := c.PutDeliveryChannelRequest(input) + return out, req.Send() +} + +// PutDeliveryChannelWithContext is the same as PutDeliveryChannel with the addition of +// the ability to pass a context and additional request options. +// +// See PutDeliveryChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) PutDeliveryChannelWithContext(ctx aws.Context, input *PutDeliveryChannelInput, opts ...request.Option) (*PutDeliveryChannelOutput, error) { + req, out := c.PutDeliveryChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutEvaluations = "PutEvaluations" + +// PutEvaluationsRequest generates a "aws/request.Request" representing the +// client's request for the PutEvaluations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutEvaluations for more information on using the PutEvaluations +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutEvaluationsRequest method. +// req, resp := client.PutEvaluationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutEvaluations +func (c *ConfigService) PutEvaluationsRequest(input *PutEvaluationsInput) (req *request.Request, output *PutEvaluationsOutput) { + op := &request.Operation{ + Name: opPutEvaluations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutEvaluationsInput{} + } + + output = &PutEvaluationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutEvaluations API operation for AWS Config. +// +// Used by an AWS Lambda function to deliver evaluation results to AWS Config. +// This action is required in every AWS Lambda function that is invoked by an +// AWS Config rule. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation PutEvaluations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// * ErrCodeInvalidResultTokenException "InvalidResultTokenException" +// The specified ResultToken is invalid. +// +// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" +// One or more AWS Config rules in the request are invalid. Verify that the +// rule names are correct and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutEvaluations +func (c *ConfigService) PutEvaluations(input *PutEvaluationsInput) (*PutEvaluationsOutput, error) { + req, out := c.PutEvaluationsRequest(input) + return out, req.Send() +} + +// PutEvaluationsWithContext is the same as PutEvaluations with the addition of +// the ability to pass a context and additional request options. +// +// See PutEvaluations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) PutEvaluationsWithContext(ctx aws.Context, input *PutEvaluationsInput, opts ...request.Option) (*PutEvaluationsOutput, error) { + req, out := c.PutEvaluationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutRetentionConfiguration = "PutRetentionConfiguration" + +// PutRetentionConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutRetentionConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See PutRetentionConfiguration for more information on using the PutRetentionConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutRetentionConfigurationRequest method. +// req, resp := client.PutRetentionConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutRetentionConfiguration +func (c *ConfigService) PutRetentionConfigurationRequest(input *PutRetentionConfigurationInput) (req *request.Request, output *PutRetentionConfigurationOutput) { + op := &request.Operation{ + Name: opPutRetentionConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutRetentionConfigurationInput{} + } + + output = &PutRetentionConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutRetentionConfiguration API operation for AWS Config. +// +// Creates and updates the retention configuration with details about retention +// period (number of days) that AWS Config stores your historical information. +// The API creates the RetentionConfiguration object and names the object as +// default. When you have a RetentionConfiguration object named default, calling +// the API modifies the default object. +// +// Currently, AWS Config supports only one retention configuration per region +// in your account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation PutRetentionConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// * ErrCodeMaxNumberOfRetentionConfigurationsExceededException "MaxNumberOfRetentionConfigurationsExceededException" +// Failed to add the retention configuration because a retention configuration +// with that name already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/PutRetentionConfiguration +func (c *ConfigService) PutRetentionConfiguration(input *PutRetentionConfigurationInput) (*PutRetentionConfigurationOutput, error) { + req, out := c.PutRetentionConfigurationRequest(input) + return out, req.Send() +} + +// PutRetentionConfigurationWithContext is the same as PutRetentionConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See PutRetentionConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) PutRetentionConfigurationWithContext(ctx aws.Context, input *PutRetentionConfigurationInput, opts ...request.Option) (*PutRetentionConfigurationOutput, error) { + req, out := c.PutRetentionConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opStartConfigRulesEvaluation = "StartConfigRulesEvaluation" + +// StartConfigRulesEvaluationRequest generates a "aws/request.Request" representing the +// client's request for the StartConfigRulesEvaluation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartConfigRulesEvaluation for more information on using the StartConfigRulesEvaluation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartConfigRulesEvaluationRequest method. +// req, resp := client.StartConfigRulesEvaluationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StartConfigRulesEvaluation +func (c *ConfigService) StartConfigRulesEvaluationRequest(input *StartConfigRulesEvaluationInput) (req *request.Request, output *StartConfigRulesEvaluationOutput) { + op := &request.Operation{ + Name: opStartConfigRulesEvaluation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartConfigRulesEvaluationInput{} + } + + output = &StartConfigRulesEvaluationOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartConfigRulesEvaluation API operation for AWS Config. +// +// Runs an on-demand evaluation for the specified AWS Config rules against the +// last known configuration state of the resources. Use StartConfigRulesEvaluation +// when you want to test that a rule you updated is working as expected. StartConfigRulesEvaluation +// does not re-record the latest configuration state for your resources. It +// re-runs an evaluation against the last known state of your resources. +// +// You can specify up to 25 AWS Config rules per request. +// +// An existing StartConfigRulesEvaluation call for the specified rules must +// complete before you can call the API again. If you chose to have AWS Config +// stream to an Amazon SNS topic, you will receive a ConfigRuleEvaluationStarted +// notification when the evaluation starts. +// +// You don't need to call the StartConfigRulesEvaluation API to run an evaluation +// for a new rule. When you create a rule, AWS Config evaluates your resources +// against the rule automatically. +// +// The StartConfigRulesEvaluation API is useful if you want to run on-demand +// evaluations, such as the following example: +// +// You have a custom rule that evaluates your IAM resources every 24 hours. +// +// You update your Lambda function to add additional conditions to your rule. +// +// Instead of waiting for the next periodic evaluation, you call the StartConfigRulesEvaluation +// API. +// +// AWS Config invokes your Lambda function and evaluates your IAM resources. +// +// Your custom rule will still run periodic evaluations every 24 hours. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Config's +// API operation StartConfigRulesEvaluation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchConfigRuleException "NoSuchConfigRuleException" +// One or more AWS Config rules in the request are invalid. Verify that the +// rule names are correct and try again. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// For StartConfigRulesEvaluation API, this exception is thrown if an evaluation +// is in progress or if you call the StartConfigRulesEvaluation API more than +// once per minute. +// +// For PutConfigurationAggregator API, this exception is thrown if the number +// of accounts and aggregators exceeds the limit. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The rule is currently being deleted or the rule is deleting your evaluation +// results. Try your request again later. +// +// * ErrCodeInvalidParameterValueException "InvalidParameterValueException" +// One or more of the specified parameters are invalid. Verify that your parameters +// are valid and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StartConfigRulesEvaluation +func (c *ConfigService) StartConfigRulesEvaluation(input *StartConfigRulesEvaluationInput) (*StartConfigRulesEvaluationOutput, error) { + req, out := c.StartConfigRulesEvaluationRequest(input) + return out, req.Send() +} + +// StartConfigRulesEvaluationWithContext is the same as StartConfigRulesEvaluation with the addition of +// the ability to pass a context and additional request options. +// +// See StartConfigRulesEvaluation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) StartConfigRulesEvaluationWithContext(ctx aws.Context, input *StartConfigRulesEvaluationInput, opts ...request.Option) (*StartConfigRulesEvaluationOutput, error) { + req, out := c.StartConfigRulesEvaluationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartConfigurationRecorder = "StartConfigurationRecorder" + +// StartConfigurationRecorderRequest generates a "aws/request.Request" representing the +// client's request for the StartConfigurationRecorder operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartConfigurationRecorder for more information on using the StartConfigurationRecorder +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartConfigurationRecorderRequest method. 
+// req, resp := client.StartConfigurationRecorderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StartConfigurationRecorder +func (c *ConfigService) StartConfigurationRecorderRequest(input *StartConfigurationRecorderInput) (req *request.Request, output *StartConfigurationRecorderOutput) { + op := &request.Operation{ + Name: opStartConfigurationRecorder, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartConfigurationRecorderInput{} + } + + output = &StartConfigurationRecorderOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// StartConfigurationRecorder API operation for AWS Config. +// +// Starts recording configurations of the AWS resources you have selected to +// record in your AWS account. +// +// You must have created at least one delivery channel to successfully start +// the configuration recorder. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation StartConfigurationRecorder for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchConfigurationRecorderException "NoSuchConfigurationRecorderException" +// You have specified a configuration recorder that does not exist. +// +// * ErrCodeNoAvailableDeliveryChannelException "NoAvailableDeliveryChannelException" +// There is no delivery channel available to record configurations. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StartConfigurationRecorder +func (c *ConfigService) StartConfigurationRecorder(input *StartConfigurationRecorderInput) (*StartConfigurationRecorderOutput, error) { + req, out := c.StartConfigurationRecorderRequest(input) + return out, req.Send() +} + +// StartConfigurationRecorderWithContext is the same as StartConfigurationRecorder with the addition of +// the ability to pass a context and additional request options. +// +// See StartConfigurationRecorder for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) StartConfigurationRecorderWithContext(ctx aws.Context, input *StartConfigurationRecorderInput, opts ...request.Option) (*StartConfigurationRecorderOutput, error) { + req, out := c.StartConfigurationRecorderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopConfigurationRecorder = "StopConfigurationRecorder" + +// StopConfigurationRecorderRequest generates a "aws/request.Request" representing the +// client's request for the StopConfigurationRecorder operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See StopConfigurationRecorder for more information on using the StopConfigurationRecorder +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopConfigurationRecorderRequest method. +// req, resp := client.StopConfigurationRecorderRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StopConfigurationRecorder +func (c *ConfigService) StopConfigurationRecorderRequest(input *StopConfigurationRecorderInput) (req *request.Request, output *StopConfigurationRecorderOutput) { + op := &request.Operation{ + Name: opStopConfigurationRecorder, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopConfigurationRecorderInput{} + } + + output = &StopConfigurationRecorderOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// StopConfigurationRecorder API operation for AWS Config. +// +// Stops recording configurations of the AWS resources you have selected to +// record in your AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Config's +// API operation StopConfigurationRecorder for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchConfigurationRecorderException "NoSuchConfigurationRecorderException" +// You have specified a configuration recorder that does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12/StopConfigurationRecorder +func (c *ConfigService) StopConfigurationRecorder(input *StopConfigurationRecorderInput) (*StopConfigurationRecorderOutput, error) { + req, out := c.StopConfigurationRecorderRequest(input) + return out, req.Send() +} + +// StopConfigurationRecorderWithContext is the same as StopConfigurationRecorder with the addition of +// the ability to pass a context and additional request options. +// +// See StopConfigurationRecorder for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ConfigService) StopConfigurationRecorderWithContext(ctx aws.Context, input *StopConfigurationRecorderInput, opts ...request.Option) (*StopConfigurationRecorderOutput, error) { + req, out := c.StopConfigurationRecorderRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// A collection of accounts and regions. +type AccountAggregationSource struct { + _ struct{} `type:"structure"` + + // The 12-digit account ID of the account being aggregated. + // + // AccountIds is a required field + AccountIds []*string `min:"1" type:"list" required:"true"` + + // If true, aggregate existing AWS Config regions and future regions. + AllAwsRegions *bool `type:"boolean"` + + // The source regions being aggregated. 
+ AwsRegions []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s AccountAggregationSource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccountAggregationSource) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AccountAggregationSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AccountAggregationSource"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } + if s.AccountIds != nil && len(s.AccountIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AccountIds", 1)) + } + if s.AwsRegions != nil && len(s.AwsRegions) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AwsRegions", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountIds sets the AccountIds field's value. +func (s *AccountAggregationSource) SetAccountIds(v []*string) *AccountAggregationSource { + s.AccountIds = v + return s +} + +// SetAllAwsRegions sets the AllAwsRegions field's value. +func (s *AccountAggregationSource) SetAllAwsRegions(v bool) *AccountAggregationSource { + s.AllAwsRegions = &v + return s +} + +// SetAwsRegions sets the AwsRegions field's value. +func (s *AccountAggregationSource) SetAwsRegions(v []*string) *AccountAggregationSource { + s.AwsRegions = v + return s +} + +// Indicates whether an AWS Config rule is compliant based on account ID, region, +// compliance, and rule name. +// +// A rule is compliant if all of the resources that the rule evaluated comply +// with it. It is noncompliant if any of these resources do not comply. +type AggregateComplianceByConfigRule struct { + _ struct{} `type:"structure"` + + // The 12-digit account ID of the source account. + AccountId *string `type:"string"` + + // The source region from where the data is aggregated. + AwsRegion *string `min:"1" type:"string"` + + // Indicates whether an AWS resource or AWS Config rule is compliant and provides + // the number of contributors that affect the compliance. + Compliance *Compliance `type:"structure"` + + // The name of the AWS Config rule. + ConfigRuleName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s AggregateComplianceByConfigRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AggregateComplianceByConfigRule) GoString() string { + return s.String() +} + +// SetAccountId sets the AccountId field's value. +func (s *AggregateComplianceByConfigRule) SetAccountId(v string) *AggregateComplianceByConfigRule { + s.AccountId = &v + return s +} + +// SetAwsRegion sets the AwsRegion field's value. +func (s *AggregateComplianceByConfigRule) SetAwsRegion(v string) *AggregateComplianceByConfigRule { + s.AwsRegion = &v + return s +} + +// SetCompliance sets the Compliance field's value. +func (s *AggregateComplianceByConfigRule) SetCompliance(v *Compliance) *AggregateComplianceByConfigRule { + s.Compliance = v + return s +} + +// SetConfigRuleName sets the ConfigRuleName field's value. +func (s *AggregateComplianceByConfigRule) SetConfigRuleName(v string) *AggregateComplianceByConfigRule { + s.ConfigRuleName = &v + return s +} + +// Returns the number of compliant and noncompliant rules for one or more accounts +// and regions in an aggregator. 
+type AggregateComplianceCount struct {
+ _ struct{} `type:"structure"`
+
+ // The number of compliant and noncompliant AWS Config rules.
+ ComplianceSummary *ComplianceSummary `type:"structure"`
+
+ // The 12-digit account ID or region based on the GroupByKey value.
+ GroupName *string `min:"1" type:"string"`
+}
+
+// String returns the string representation
+func (s AggregateComplianceCount) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AggregateComplianceCount) GoString() string {
+ return s.String()
+}
+
+// SetComplianceSummary sets the ComplianceSummary field's value.
+func (s *AggregateComplianceCount) SetComplianceSummary(v *ComplianceSummary) *AggregateComplianceCount {
+ s.ComplianceSummary = v
+ return s
+}
+
+// SetGroupName sets the GroupName field's value.
+func (s *AggregateComplianceCount) SetGroupName(v string) *AggregateComplianceCount {
+ s.GroupName = &v
+ return s
+}
+
+// The details of an AWS Config evaluation for an account ID and region in an
+// aggregator. Provides the AWS resource that was evaluated, the compliance
+// of the resource, related time stamps, and supplementary information.
+type AggregateEvaluationResult struct {
+ _ struct{} `type:"structure"`
+
+ // The 12-digit account ID of the source account.
+ AccountId *string `type:"string"`
+
+ // Supplementary information about how the aggregate evaluation determined the
+ // compliance.
+ Annotation *string `min:"1" type:"string"`
+
+ // The source region from where the data is aggregated.
+ AwsRegion *string `min:"1" type:"string"`
+
+ // The resource compliance status.
+ //
+ // For the AggregationEvaluationResult data type, AWS Config supports only the
+ // COMPLIANT and NON_COMPLIANT values. AWS Config does not support the NOT_APPLICABLE
+ // and INSUFFICIENT_DATA values.
+ ComplianceType *string `type:"string" enum:"ComplianceType"`
+
+ // The time when the AWS Config rule evaluated the AWS resource.
+ ConfigRuleInvokedTime *time.Time `type:"timestamp"`
+
+ // Uniquely identifies the evaluation result.
+ EvaluationResultIdentifier *EvaluationResultIdentifier `type:"structure"`
+
+ // The time when AWS Config recorded the aggregate evaluation result.
+ ResultRecordedTime *time.Time `type:"timestamp"`
+}
+
+// String returns the string representation
+func (s AggregateEvaluationResult) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AggregateEvaluationResult) GoString() string {
+ return s.String()
+}
+
+// SetAccountId sets the AccountId field's value.
+func (s *AggregateEvaluationResult) SetAccountId(v string) *AggregateEvaluationResult {
+ s.AccountId = &v
+ return s
+}
+
+// SetAnnotation sets the Annotation field's value.
+func (s *AggregateEvaluationResult) SetAnnotation(v string) *AggregateEvaluationResult {
+ s.Annotation = &v
+ return s
+}
+
+// SetAwsRegion sets the AwsRegion field's value.
+func (s *AggregateEvaluationResult) SetAwsRegion(v string) *AggregateEvaluationResult {
+ s.AwsRegion = &v
+ return s
+}
+
+// SetComplianceType sets the ComplianceType field's value.
+func (s *AggregateEvaluationResult) SetComplianceType(v string) *AggregateEvaluationResult {
+ s.ComplianceType = &v
+ return s
+}
+
+// SetConfigRuleInvokedTime sets the ConfigRuleInvokedTime field's value.
+func (s *AggregateEvaluationResult) SetConfigRuleInvokedTime(v time.Time) *AggregateEvaluationResult { + s.ConfigRuleInvokedTime = &v + return s +} + +// SetEvaluationResultIdentifier sets the EvaluationResultIdentifier field's value. +func (s *AggregateEvaluationResult) SetEvaluationResultIdentifier(v *EvaluationResultIdentifier) *AggregateEvaluationResult { + s.EvaluationResultIdentifier = v + return s +} + +// SetResultRecordedTime sets the ResultRecordedTime field's value. +func (s *AggregateEvaluationResult) SetResultRecordedTime(v time.Time) *AggregateEvaluationResult { + s.ResultRecordedTime = &v + return s +} + +// The details that identify a resource that is collected by AWS Config aggregator, +// including the resource type, ID, (if available) the custom resource name, +// the source account, and source region. +type AggregateResourceIdentifier struct { + _ struct{} `type:"structure"` + + // The ID of the AWS resource. + // + // ResourceId is a required field + ResourceId *string `min:"1" type:"string" required:"true"` + + // The name of the AWS resource. + ResourceName *string `type:"string"` + + // The type of the AWS resource. + // + // ResourceType is a required field + ResourceType *string `type:"string" required:"true" enum:"ResourceType"` + + // The 12-digit account ID of the source account. + // + // SourceAccountId is a required field + SourceAccountId *string `type:"string" required:"true"` + + // The source region where data is aggregated. + // + // SourceRegion is a required field + SourceRegion *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AggregateResourceIdentifier) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AggregateResourceIdentifier) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AggregateResourceIdentifier) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AggregateResourceIdentifier"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) + } + if s.SourceAccountId == nil { + invalidParams.Add(request.NewErrParamRequired("SourceAccountId")) + } + if s.SourceRegion == nil { + invalidParams.Add(request.NewErrParamRequired("SourceRegion")) + } + if s.SourceRegion != nil && len(*s.SourceRegion) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SourceRegion", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. +func (s *AggregateResourceIdentifier) SetResourceId(v string) *AggregateResourceIdentifier { + s.ResourceId = &v + return s +} + +// SetResourceName sets the ResourceName field's value. +func (s *AggregateResourceIdentifier) SetResourceName(v string) *AggregateResourceIdentifier { + s.ResourceName = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *AggregateResourceIdentifier) SetResourceType(v string) *AggregateResourceIdentifier { + s.ResourceType = &v + return s +} + +// SetSourceAccountId sets the SourceAccountId field's value. 
+func (s *AggregateResourceIdentifier) SetSourceAccountId(v string) *AggregateResourceIdentifier { + s.SourceAccountId = &v + return s +} + +// SetSourceRegion sets the SourceRegion field's value. +func (s *AggregateResourceIdentifier) SetSourceRegion(v string) *AggregateResourceIdentifier { + s.SourceRegion = &v + return s +} + +// The current sync status between the source and the aggregator account. +type AggregatedSourceStatus struct { + _ struct{} `type:"structure"` + + // The region authorized to collect aggregated data. + AwsRegion *string `min:"1" type:"string"` + + // The error code that AWS Config returned when the source account aggregation + // last failed. + LastErrorCode *string `type:"string"` + + // The message indicating that the source account aggregation failed due to + // an error. + LastErrorMessage *string `type:"string"` + + // Filters the last updated status type. + // + // * Valid value FAILED indicates errors while moving data. + // + // * Valid value SUCCEEDED indicates the data was successfully moved. + // + // * Valid value OUTDATED indicates the data is not the most recent. + LastUpdateStatus *string `type:"string" enum:"AggregatedSourceStatusType"` + + // The time of the last update. + LastUpdateTime *time.Time `type:"timestamp"` + + // The source account ID or an organization. + SourceId *string `type:"string"` + + // The source account or an organization. + SourceType *string `type:"string" enum:"AggregatedSourceType"` +} + +// String returns the string representation +func (s AggregatedSourceStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AggregatedSourceStatus) GoString() string { + return s.String() +} + +// SetAwsRegion sets the AwsRegion field's value. +func (s *AggregatedSourceStatus) SetAwsRegion(v string) *AggregatedSourceStatus { + s.AwsRegion = &v + return s +} + +// SetLastErrorCode sets the LastErrorCode field's value. +func (s *AggregatedSourceStatus) SetLastErrorCode(v string) *AggregatedSourceStatus { + s.LastErrorCode = &v + return s +} + +// SetLastErrorMessage sets the LastErrorMessage field's value. +func (s *AggregatedSourceStatus) SetLastErrorMessage(v string) *AggregatedSourceStatus { + s.LastErrorMessage = &v + return s +} + +// SetLastUpdateStatus sets the LastUpdateStatus field's value. +func (s *AggregatedSourceStatus) SetLastUpdateStatus(v string) *AggregatedSourceStatus { + s.LastUpdateStatus = &v + return s +} + +// SetLastUpdateTime sets the LastUpdateTime field's value. +func (s *AggregatedSourceStatus) SetLastUpdateTime(v time.Time) *AggregatedSourceStatus { + s.LastUpdateTime = &v + return s +} + +// SetSourceId sets the SourceId field's value. +func (s *AggregatedSourceStatus) SetSourceId(v string) *AggregatedSourceStatus { + s.SourceId = &v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *AggregatedSourceStatus) SetSourceType(v string) *AggregatedSourceStatus { + s.SourceType = &v + return s +} + +// An object that represents the authorizations granted to aggregator accounts +// and regions. +type AggregationAuthorization struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the aggregation object. + AggregationAuthorizationArn *string `type:"string"` + + // The 12-digit account ID of the account authorized to aggregate data. + AuthorizedAccountId *string `type:"string"` + + // The region authorized to collect aggregated data. 
+ AuthorizedAwsRegion *string `min:"1" type:"string"`
+
+ // The time stamp when the aggregation authorization was created.
+ CreationTime *time.Time `type:"timestamp"`
+}
+
+// String returns the string representation
+func (s AggregationAuthorization) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AggregationAuthorization) GoString() string {
+ return s.String()
+}
+
+// SetAggregationAuthorizationArn sets the AggregationAuthorizationArn field's value.
+func (s *AggregationAuthorization) SetAggregationAuthorizationArn(v string) *AggregationAuthorization {
+ s.AggregationAuthorizationArn = &v
+ return s
+}
+
+// SetAuthorizedAccountId sets the AuthorizedAccountId field's value.
+func (s *AggregationAuthorization) SetAuthorizedAccountId(v string) *AggregationAuthorization {
+ s.AuthorizedAccountId = &v
+ return s
+}
+
+// SetAuthorizedAwsRegion sets the AuthorizedAwsRegion field's value.
+func (s *AggregationAuthorization) SetAuthorizedAwsRegion(v string) *AggregationAuthorization {
+ s.AuthorizedAwsRegion = &v
+ return s
+}
+
+// SetCreationTime sets the CreationTime field's value.
+func (s *AggregationAuthorization) SetCreationTime(v time.Time) *AggregationAuthorization {
+ s.CreationTime = &v
+ return s
+}
+
+// The detailed configuration of a specified resource.
+type BaseConfigurationItem struct {
+ _ struct{} `type:"structure"`
+
+ // The 12-digit AWS account ID associated with the resource.
+ AccountId *string `locationName:"accountId" type:"string"`
+
+ // The Amazon Resource Name (ARN) of the resource.
+ Arn *string `locationName:"arn" type:"string"`
+
+ // The Availability Zone associated with the resource.
+ AvailabilityZone *string `locationName:"availabilityZone" type:"string"`
+
+ // The region where the resource resides.
+ AwsRegion *string `locationName:"awsRegion" min:"1" type:"string"`
+
+ // The description of the resource configuration.
+ Configuration *string `locationName:"configuration" type:"string"`
+
+ // The time when the configuration recording was initiated.
+ ConfigurationItemCaptureTime *time.Time `locationName:"configurationItemCaptureTime" type:"timestamp"`
+
+ // The configuration item status.
+ ConfigurationItemStatus *string `locationName:"configurationItemStatus" type:"string" enum:"ConfigurationItemStatus"`
+
+ // An identifier that indicates the ordering of the configuration items of a
+ // resource.
+ ConfigurationStateId *string `locationName:"configurationStateId" type:"string"`
+
+ // The time stamp when the resource was created.
+ ResourceCreationTime *time.Time `locationName:"resourceCreationTime" type:"timestamp"`
+
+ // The ID of the resource (for example, sg-xxxxxx).
+ ResourceId *string `locationName:"resourceId" min:"1" type:"string"`
+
+ // The custom name of the resource, if available.
+ ResourceName *string `locationName:"resourceName" type:"string"`
+
+ // The type of AWS resource.
+ ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"`
+
+ // Configuration attributes that AWS Config returns for certain resource types
+ // to supplement the information returned for the configuration parameter.
+ SupplementaryConfiguration map[string]*string `locationName:"supplementaryConfiguration" type:"map"`
+
+ // The version number of the resource configuration.
+ Version *string `locationName:"version" type:"string"` +} + +// String returns the string representation +func (s BaseConfigurationItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BaseConfigurationItem) GoString() string { + return s.String() +} + +// SetAccountId sets the AccountId field's value. +func (s *BaseConfigurationItem) SetAccountId(v string) *BaseConfigurationItem { + s.AccountId = &v + return s +} + +// SetArn sets the Arn field's value. +func (s *BaseConfigurationItem) SetArn(v string) *BaseConfigurationItem { + s.Arn = &v + return s +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *BaseConfigurationItem) SetAvailabilityZone(v string) *BaseConfigurationItem { + s.AvailabilityZone = &v + return s +} + +// SetAwsRegion sets the AwsRegion field's value. +func (s *BaseConfigurationItem) SetAwsRegion(v string) *BaseConfigurationItem { + s.AwsRegion = &v + return s +} + +// SetConfiguration sets the Configuration field's value. +func (s *BaseConfigurationItem) SetConfiguration(v string) *BaseConfigurationItem { + s.Configuration = &v + return s +} + +// SetConfigurationItemCaptureTime sets the ConfigurationItemCaptureTime field's value. +func (s *BaseConfigurationItem) SetConfigurationItemCaptureTime(v time.Time) *BaseConfigurationItem { + s.ConfigurationItemCaptureTime = &v + return s +} + +// SetConfigurationItemStatus sets the ConfigurationItemStatus field's value. +func (s *BaseConfigurationItem) SetConfigurationItemStatus(v string) *BaseConfigurationItem { + s.ConfigurationItemStatus = &v + return s +} + +// SetConfigurationStateId sets the ConfigurationStateId field's value. +func (s *BaseConfigurationItem) SetConfigurationStateId(v string) *BaseConfigurationItem { + s.ConfigurationStateId = &v + return s +} + +// SetResourceCreationTime sets the ResourceCreationTime field's value. +func (s *BaseConfigurationItem) SetResourceCreationTime(v time.Time) *BaseConfigurationItem { + s.ResourceCreationTime = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *BaseConfigurationItem) SetResourceId(v string) *BaseConfigurationItem { + s.ResourceId = &v + return s +} + +// SetResourceName sets the ResourceName field's value. +func (s *BaseConfigurationItem) SetResourceName(v string) *BaseConfigurationItem { + s.ResourceName = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *BaseConfigurationItem) SetResourceType(v string) *BaseConfigurationItem { + s.ResourceType = &v + return s +} + +// SetSupplementaryConfiguration sets the SupplementaryConfiguration field's value. +func (s *BaseConfigurationItem) SetSupplementaryConfiguration(v map[string]*string) *BaseConfigurationItem { + s.SupplementaryConfiguration = v + return s +} + +// SetVersion sets the Version field's value. +func (s *BaseConfigurationItem) SetVersion(v string) *BaseConfigurationItem { + s.Version = &v + return s +} + +type BatchGetAggregateResourceConfigInput struct { + _ struct{} `type:"structure"` + + // The name of the configuration aggregator. + // + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` + + // A list of aggregate ResourceIdentifiers objects. 
+ // + // ResourceIdentifiers is a required field + ResourceIdentifiers []*AggregateResourceIdentifier `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s BatchGetAggregateResourceConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchGetAggregateResourceConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchGetAggregateResourceConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchGetAggregateResourceConfigInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) + } + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) + } + if s.ResourceIdentifiers == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceIdentifiers")) + } + if s.ResourceIdentifiers != nil && len(s.ResourceIdentifiers) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceIdentifiers", 1)) + } + if s.ResourceIdentifiers != nil { + for i, v := range s.ResourceIdentifiers { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ResourceIdentifiers", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *BatchGetAggregateResourceConfigInput) SetConfigurationAggregatorName(v string) *BatchGetAggregateResourceConfigInput { + s.ConfigurationAggregatorName = &v + return s +} + +// SetResourceIdentifiers sets the ResourceIdentifiers field's value. +func (s *BatchGetAggregateResourceConfigInput) SetResourceIdentifiers(v []*AggregateResourceIdentifier) *BatchGetAggregateResourceConfigInput { + s.ResourceIdentifiers = v + return s +} + +type BatchGetAggregateResourceConfigOutput struct { + _ struct{} `type:"structure"` + + // A list that contains the current configuration of one or more resources. + BaseConfigurationItems []*BaseConfigurationItem `type:"list"` + + // A list of resource identifiers that were not processed with current scope. + // The list is empty if all the resources are processed. + UnprocessedResourceIdentifiers []*AggregateResourceIdentifier `type:"list"` +} + +// String returns the string representation +func (s BatchGetAggregateResourceConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchGetAggregateResourceConfigOutput) GoString() string { + return s.String() +} + +// SetBaseConfigurationItems sets the BaseConfigurationItems field's value. +func (s *BatchGetAggregateResourceConfigOutput) SetBaseConfigurationItems(v []*BaseConfigurationItem) *BatchGetAggregateResourceConfigOutput { + s.BaseConfigurationItems = v + return s +} + +// SetUnprocessedResourceIdentifiers sets the UnprocessedResourceIdentifiers field's value. 
+func (s *BatchGetAggregateResourceConfigOutput) SetUnprocessedResourceIdentifiers(v []*AggregateResourceIdentifier) *BatchGetAggregateResourceConfigOutput {
+ s.UnprocessedResourceIdentifiers = v
+ return s
+}
+
+type BatchGetResourceConfigInput struct {
+ _ struct{} `type:"structure"`
+
+ // A list of resource keys to be processed with the current request. Each element
+ // in the list consists of the resource type and resource ID.
+ //
+ // ResourceKeys is a required field
+ ResourceKeys []*ResourceKey `locationName:"resourceKeys" min:"1" type:"list" required:"true"`
+}
+
+// String returns the string representation
+func (s BatchGetResourceConfigInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s BatchGetResourceConfigInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *BatchGetResourceConfigInput) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "BatchGetResourceConfigInput"}
+ if s.ResourceKeys == nil {
+ invalidParams.Add(request.NewErrParamRequired("ResourceKeys"))
+ }
+ if s.ResourceKeys != nil && len(s.ResourceKeys) < 1 {
+ invalidParams.Add(request.NewErrParamMinLen("ResourceKeys", 1))
+ }
+ if s.ResourceKeys != nil {
+ for i, v := range s.ResourceKeys {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ResourceKeys", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetResourceKeys sets the ResourceKeys field's value.
+func (s *BatchGetResourceConfigInput) SetResourceKeys(v []*ResourceKey) *BatchGetResourceConfigInput {
+ s.ResourceKeys = v
+ return s
+}
+
+type BatchGetResourceConfigOutput struct {
+ _ struct{} `type:"structure"`
+
+ // A list that contains the current configuration of one or more resources.
+ BaseConfigurationItems []*BaseConfigurationItem `locationName:"baseConfigurationItems" type:"list"`
+
+ // A list of resource keys that were not processed with the current response.
+ // The unprocessedResourceKeys value is in the same form as ResourceKeys, so
+ // the value can be directly provided to a subsequent BatchGetResourceConfig
+ // operation. If there are no unprocessed resource keys, the response contains
+ // an empty unprocessedResourceKeys list.
+ UnprocessedResourceKeys []*ResourceKey `locationName:"unprocessedResourceKeys" min:"1" type:"list"`
+}
+
+// String returns the string representation
+func (s BatchGetResourceConfigOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s BatchGetResourceConfigOutput) GoString() string {
+ return s.String()
+}
+
+// SetBaseConfigurationItems sets the BaseConfigurationItems field's value.
+func (s *BatchGetResourceConfigOutput) SetBaseConfigurationItems(v []*BaseConfigurationItem) *BatchGetResourceConfigOutput {
+ s.BaseConfigurationItems = v
+ return s
+}
+
+// SetUnprocessedResourceKeys sets the UnprocessedResourceKeys field's value.
+func (s *BatchGetResourceConfigOutput) SetUnprocessedResourceKeys(v []*ResourceKey) *BatchGetResourceConfigOutput {
+ s.UnprocessedResourceKeys = v
+ return s
+}
+
+// Indicates whether an AWS resource or AWS Config rule is compliant and provides
+// the number of contributors that affect the compliance.
+type Compliance struct { + _ struct{} `type:"structure"` + + // The number of AWS resources or AWS Config rules that cause a result of NON_COMPLIANT, + // up to a maximum number. + ComplianceContributorCount *ComplianceContributorCount `type:"structure"` + + // Indicates whether an AWS resource or AWS Config rule is compliant. + // + // A resource is compliant if it complies with all of the AWS Config rules that + // evaluate it. A resource is noncompliant if it does not comply with one or + // more of these rules. + // + // A rule is compliant if all of the resources that the rule evaluates comply + // with it. A rule is noncompliant if any of these resources do not comply. + // + // AWS Config returns the INSUFFICIENT_DATA value when no evaluation results + // are available for the AWS resource or AWS Config rule. + // + // For the Compliance data type, AWS Config supports only COMPLIANT, NON_COMPLIANT, + // and INSUFFICIENT_DATA values. AWS Config does not support the NOT_APPLICABLE + // value for the Compliance data type. + ComplianceType *string `type:"string" enum:"ComplianceType"` +} + +// String returns the string representation +func (s Compliance) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Compliance) GoString() string { + return s.String() +} + +// SetComplianceContributorCount sets the ComplianceContributorCount field's value. +func (s *Compliance) SetComplianceContributorCount(v *ComplianceContributorCount) *Compliance { + s.ComplianceContributorCount = v + return s +} + +// SetComplianceType sets the ComplianceType field's value. +func (s *Compliance) SetComplianceType(v string) *Compliance { + s.ComplianceType = &v + return s +} + +// Indicates whether an AWS Config rule is compliant. A rule is compliant if +// all of the resources that the rule evaluated comply with it. A rule is noncompliant +// if any of these resources do not comply. +type ComplianceByConfigRule struct { + _ struct{} `type:"structure"` + + // Indicates whether the AWS Config rule is compliant. + Compliance *Compliance `type:"structure"` + + // The name of the AWS Config rule. + ConfigRuleName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ComplianceByConfigRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceByConfigRule) GoString() string { + return s.String() +} + +// SetCompliance sets the Compliance field's value. +func (s *ComplianceByConfigRule) SetCompliance(v *Compliance) *ComplianceByConfigRule { + s.Compliance = v + return s +} + +// SetConfigRuleName sets the ConfigRuleName field's value. +func (s *ComplianceByConfigRule) SetConfigRuleName(v string) *ComplianceByConfigRule { + s.ConfigRuleName = &v + return s +} + +// Indicates whether an AWS resource that is evaluated according to one or more +// AWS Config rules is compliant. A resource is compliant if it complies with +// all of the rules that evaluate it. A resource is noncompliant if it does +// not comply with one or more of these rules. +type ComplianceByResource struct { + _ struct{} `type:"structure"` + + // Indicates whether the AWS resource complies with all of the AWS Config rules + // that evaluated it. + Compliance *Compliance `type:"structure"` + + // The ID of the AWS resource that was evaluated. + ResourceId *string `min:"1" type:"string"` + + // The type of the AWS resource that was evaluated. 
+ ResourceType *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ComplianceByResource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceByResource) GoString() string { + return s.String() +} + +// SetCompliance sets the Compliance field's value. +func (s *ComplianceByResource) SetCompliance(v *Compliance) *ComplianceByResource { + s.Compliance = v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *ComplianceByResource) SetResourceId(v string) *ComplianceByResource { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ComplianceByResource) SetResourceType(v string) *ComplianceByResource { + s.ResourceType = &v + return s +} + +// The number of AWS resources or AWS Config rules responsible for the current +// compliance of the item, up to a maximum number. +type ComplianceContributorCount struct { + _ struct{} `type:"structure"` + + // Indicates whether the maximum count is reached. + CapExceeded *bool `type:"boolean"` + + // The number of AWS resources or AWS Config rules responsible for the current + // compliance of the item. + CappedCount *int64 `type:"integer"` +} + +// String returns the string representation +func (s ComplianceContributorCount) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceContributorCount) GoString() string { + return s.String() +} + +// SetCapExceeded sets the CapExceeded field's value. +func (s *ComplianceContributorCount) SetCapExceeded(v bool) *ComplianceContributorCount { + s.CapExceeded = &v + return s +} + +// SetCappedCount sets the CappedCount field's value. +func (s *ComplianceContributorCount) SetCappedCount(v int64) *ComplianceContributorCount { + s.CappedCount = &v + return s +} + +// The number of AWS Config rules or AWS resources that are compliant and noncompliant. +type ComplianceSummary struct { + _ struct{} `type:"structure"` + + // The time that AWS Config created the compliance summary. + ComplianceSummaryTimestamp *time.Time `type:"timestamp"` + + // The number of AWS Config rules or AWS resources that are compliant, up to + // a maximum of 25 for rules and 100 for resources. + CompliantResourceCount *ComplianceContributorCount `type:"structure"` + + // The number of AWS Config rules or AWS resources that are noncompliant, up + // to a maximum of 25 for rules and 100 for resources. + NonCompliantResourceCount *ComplianceContributorCount `type:"structure"` +} + +// String returns the string representation +func (s ComplianceSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceSummary) GoString() string { + return s.String() +} + +// SetComplianceSummaryTimestamp sets the ComplianceSummaryTimestamp field's value. +func (s *ComplianceSummary) SetComplianceSummaryTimestamp(v time.Time) *ComplianceSummary { + s.ComplianceSummaryTimestamp = &v + return s +} + +// SetCompliantResourceCount sets the CompliantResourceCount field's value. +func (s *ComplianceSummary) SetCompliantResourceCount(v *ComplianceContributorCount) *ComplianceSummary { + s.CompliantResourceCount = v + return s +} + +// SetNonCompliantResourceCount sets the NonCompliantResourceCount field's value. 
+func (s *ComplianceSummary) SetNonCompliantResourceCount(v *ComplianceContributorCount) *ComplianceSummary { + s.NonCompliantResourceCount = v + return s +} + +// The number of AWS resources of a specific type that are compliant or noncompliant, +// up to a maximum of 100 for each. +type ComplianceSummaryByResourceType struct { + _ struct{} `type:"structure"` + + // The number of AWS resources that are compliant or noncompliant, up to a maximum + // of 100 for each. + ComplianceSummary *ComplianceSummary `type:"structure"` + + // The type of AWS resource. + ResourceType *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ComplianceSummaryByResourceType) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceSummaryByResourceType) GoString() string { + return s.String() +} + +// SetComplianceSummary sets the ComplianceSummary field's value. +func (s *ComplianceSummaryByResourceType) SetComplianceSummary(v *ComplianceSummary) *ComplianceSummaryByResourceType { + s.ComplianceSummary = v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ComplianceSummaryByResourceType) SetResourceType(v string) *ComplianceSummaryByResourceType { + s.ResourceType = &v + return s +} + +// Provides status of the delivery of the snapshot or the configuration history +// to the specified Amazon S3 bucket. Also provides the status of notifications +// about the Amazon S3 delivery to the specified Amazon SNS topic. +type ConfigExportDeliveryInfo struct { + _ struct{} `type:"structure"` + + // The time of the last attempted delivery. + LastAttemptTime *time.Time `locationName:"lastAttemptTime" type:"timestamp"` + + // The error code from the last attempted delivery. + LastErrorCode *string `locationName:"lastErrorCode" type:"string"` + + // The error message from the last attempted delivery. + LastErrorMessage *string `locationName:"lastErrorMessage" type:"string"` + + // Status of the last attempted delivery. + LastStatus *string `locationName:"lastStatus" type:"string" enum:"DeliveryStatus"` + + // The time of the last successful delivery. + LastSuccessfulTime *time.Time `locationName:"lastSuccessfulTime" type:"timestamp"` + + // The time that the next delivery occurs. + NextDeliveryTime *time.Time `locationName:"nextDeliveryTime" type:"timestamp"` +} + +// String returns the string representation +func (s ConfigExportDeliveryInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigExportDeliveryInfo) GoString() string { + return s.String() +} + +// SetLastAttemptTime sets the LastAttemptTime field's value. +func (s *ConfigExportDeliveryInfo) SetLastAttemptTime(v time.Time) *ConfigExportDeliveryInfo { + s.LastAttemptTime = &v + return s +} + +// SetLastErrorCode sets the LastErrorCode field's value. +func (s *ConfigExportDeliveryInfo) SetLastErrorCode(v string) *ConfigExportDeliveryInfo { + s.LastErrorCode = &v + return s +} + +// SetLastErrorMessage sets the LastErrorMessage field's value. +func (s *ConfigExportDeliveryInfo) SetLastErrorMessage(v string) *ConfigExportDeliveryInfo { + s.LastErrorMessage = &v + return s +} + +// SetLastStatus sets the LastStatus field's value. +func (s *ConfigExportDeliveryInfo) SetLastStatus(v string) *ConfigExportDeliveryInfo { + s.LastStatus = &v + return s +} + +// SetLastSuccessfulTime sets the LastSuccessfulTime field's value. 
+func (s *ConfigExportDeliveryInfo) SetLastSuccessfulTime(v time.Time) *ConfigExportDeliveryInfo { + s.LastSuccessfulTime = &v + return s +} + +// SetNextDeliveryTime sets the NextDeliveryTime field's value. +func (s *ConfigExportDeliveryInfo) SetNextDeliveryTime(v time.Time) *ConfigExportDeliveryInfo { + s.NextDeliveryTime = &v + return s +} + +// An AWS Config rule represents an AWS Lambda function that you create for +// a custom rule or a predefined function for an AWS managed rule. The function +// evaluates configuration items to assess whether your AWS resources comply +// with your desired configurations. This function can run when AWS Config detects +// a configuration change to an AWS resource and at a periodic frequency that +// you choose (for example, every 24 hours). +// +// You can use the AWS CLI and AWS SDKs if you want to create a rule that triggers +// evaluations for your resources when AWS Config delivers the configuration +// snapshot. For more information, see ConfigSnapshotDeliveryProperties. +// +// For more information about developing and using AWS Config rules, see Evaluating +// AWS Resource Configurations with AWS Config (http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) +// in the AWS Config Developer Guide. +type ConfigRule struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the AWS Config rule. + ConfigRuleArn *string `type:"string"` + + // The ID of the AWS Config rule. + ConfigRuleId *string `type:"string"` + + // The name that you assign to the AWS Config rule. The name is required if + // you are adding a new rule. + ConfigRuleName *string `min:"1" type:"string"` + + // Indicates whether the AWS Config rule is active or is currently being deleted + // by AWS Config. It can also indicate the evaluation status for the AWS Config + // rule. + // + // AWS Config sets the state of the rule to EVALUATING temporarily after you + // use the StartConfigRulesEvaluation request to evaluate your resources against + // the AWS Config rule. + // + // AWS Config sets the state of the rule to DELETING_RESULTS temporarily after + // you use the DeleteEvaluationResults request to delete the current evaluation + // results for the AWS Config rule. + // + // AWS Config temporarily sets the state of a rule to DELETING after you use + // the DeleteConfigRule request to delete the rule. After AWS Config deletes + // the rule, the rule and all of its evaluations are erased and are no longer + // available. + ConfigRuleState *string `type:"string" enum:"ConfigRuleState"` + + // Service principal name of the service that created the rule. + // + // The field is populated only if the service linked rule is created by a service. + // The field is empty if you create your own rule. + CreatedBy *string `min:"1" type:"string"` + + // The description that you provide for the AWS Config rule. + Description *string `type:"string"` + + // A string, in JSON format, that is passed to the AWS Config rule Lambda function. + InputParameters *string `min:"1" type:"string"` + + // The maximum frequency with which AWS Config runs evaluations for a rule. + // You can specify a value for MaximumExecutionFrequency when: + // + // * You are using an AWS managed rule that is triggered at a periodic frequency. + // + // * Your custom rule is triggered when AWS Config delivers the configuration + // snapshot. For more information, see ConfigSnapshotDeliveryProperties. 
+ // + // By default, rules with a periodic trigger are evaluated every 24 hours. To + // change the frequency, specify a valid value for the MaximumExecutionFrequency + // parameter. + MaximumExecutionFrequency *string `type:"string" enum:"MaximumExecutionFrequency"` + + // Defines which resources can trigger an evaluation for the rule. The scope + // can include one or more resource types, a combination of one resource type + // and one resource ID, or a combination of a tag key and value. Specify a scope + // to constrain the resources that can trigger an evaluation for the rule. If + // you do not specify a scope, evaluations are triggered when any resource in + // the recording group changes. + Scope *Scope `type:"structure"` + + // Provides the rule owner (AWS or customer), the rule identifier, and the notifications + // that cause the function to evaluate your AWS resources. + // + // Source is a required field + Source *Source `type:"structure" required:"true"` +} + +// String returns the string representation +func (s ConfigRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfigRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfigRule"} + if s.ConfigRuleName != nil && len(*s.ConfigRuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigRuleName", 1)) + } + if s.CreatedBy != nil && len(*s.CreatedBy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CreatedBy", 1)) + } + if s.InputParameters != nil && len(*s.InputParameters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InputParameters", 1)) + } + if s.Source == nil { + invalidParams.Add(request.NewErrParamRequired("Source")) + } + if s.Scope != nil { + if err := s.Scope.Validate(); err != nil { + invalidParams.AddNested("Scope", err.(request.ErrInvalidParams)) + } + } + if s.Source != nil { + if err := s.Source.Validate(); err != nil { + invalidParams.AddNested("Source", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConfigRuleArn sets the ConfigRuleArn field's value. +func (s *ConfigRule) SetConfigRuleArn(v string) *ConfigRule { + s.ConfigRuleArn = &v + return s +} + +// SetConfigRuleId sets the ConfigRuleId field's value. +func (s *ConfigRule) SetConfigRuleId(v string) *ConfigRule { + s.ConfigRuleId = &v + return s +} + +// SetConfigRuleName sets the ConfigRuleName field's value. +func (s *ConfigRule) SetConfigRuleName(v string) *ConfigRule { + s.ConfigRuleName = &v + return s +} + +// SetConfigRuleState sets the ConfigRuleState field's value. +func (s *ConfigRule) SetConfigRuleState(v string) *ConfigRule { + s.ConfigRuleState = &v + return s +} + +// SetCreatedBy sets the CreatedBy field's value. +func (s *ConfigRule) SetCreatedBy(v string) *ConfigRule { + s.CreatedBy = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ConfigRule) SetDescription(v string) *ConfigRule { + s.Description = &v + return s +} + +// SetInputParameters sets the InputParameters field's value. +func (s *ConfigRule) SetInputParameters(v string) *ConfigRule { + s.InputParameters = &v + return s +} + +// SetMaximumExecutionFrequency sets the MaximumExecutionFrequency field's value. 
+func (s *ConfigRule) SetMaximumExecutionFrequency(v string) *ConfigRule { + s.MaximumExecutionFrequency = &v + return s +} + +// SetScope sets the Scope field's value. +func (s *ConfigRule) SetScope(v *Scope) *ConfigRule { + s.Scope = v + return s +} + +// SetSource sets the Source field's value. +func (s *ConfigRule) SetSource(v *Source) *ConfigRule { + s.Source = v + return s +} + +// Filters the compliance results based on account ID, region, compliance type, +// and rule name. +type ConfigRuleComplianceFilters struct { + _ struct{} `type:"structure"` + + // The 12-digit account ID of the source account. + AccountId *string `type:"string"` + + // The source region where the data is aggregated. + AwsRegion *string `min:"1" type:"string"` + + // The rule compliance status. + // + // For the ConfigRuleComplianceFilters data type, AWS Config supports only COMPLIANT + // and NON_COMPLIANT. AWS Config does not support the NOT_APPLICABLE and the + // INSUFFICIENT_DATA values. + ComplianceType *string `type:"string" enum:"ComplianceType"` + + // The name of the AWS Config rule. + ConfigRuleName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ConfigRuleComplianceFilters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigRuleComplianceFilters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfigRuleComplianceFilters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfigRuleComplianceFilters"} + if s.AwsRegion != nil && len(*s.AwsRegion) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AwsRegion", 1)) + } + if s.ConfigRuleName != nil && len(*s.ConfigRuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigRuleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountId sets the AccountId field's value. +func (s *ConfigRuleComplianceFilters) SetAccountId(v string) *ConfigRuleComplianceFilters { + s.AccountId = &v + return s +} + +// SetAwsRegion sets the AwsRegion field's value. +func (s *ConfigRuleComplianceFilters) SetAwsRegion(v string) *ConfigRuleComplianceFilters { + s.AwsRegion = &v + return s +} + +// SetComplianceType sets the ComplianceType field's value. +func (s *ConfigRuleComplianceFilters) SetComplianceType(v string) *ConfigRuleComplianceFilters { + s.ComplianceType = &v + return s +} + +// SetConfigRuleName sets the ConfigRuleName field's value. +func (s *ConfigRuleComplianceFilters) SetConfigRuleName(v string) *ConfigRuleComplianceFilters { + s.ConfigRuleName = &v + return s +} + +// Filters the results based on the account IDs and regions. +type ConfigRuleComplianceSummaryFilters struct { + _ struct{} `type:"structure"` + + // The 12-digit account ID of the source account. + AccountId *string `type:"string"` + + // The source region where the data is aggregated. + AwsRegion *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ConfigRuleComplianceSummaryFilters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigRuleComplianceSummaryFilters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ConfigRuleComplianceSummaryFilters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfigRuleComplianceSummaryFilters"} + if s.AwsRegion != nil && len(*s.AwsRegion) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AwsRegion", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountId sets the AccountId field's value. +func (s *ConfigRuleComplianceSummaryFilters) SetAccountId(v string) *ConfigRuleComplianceSummaryFilters { + s.AccountId = &v + return s +} + +// SetAwsRegion sets the AwsRegion field's value. +func (s *ConfigRuleComplianceSummaryFilters) SetAwsRegion(v string) *ConfigRuleComplianceSummaryFilters { + s.AwsRegion = &v + return s +} + +// Status information for your AWS managed Config rules. The status includes +// information such as the last time the rule ran, the last time it failed, +// and the related error for the last failure. +// +// This action does not return status information about custom AWS Config rules. +type ConfigRuleEvaluationStatus struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the AWS Config rule. + ConfigRuleArn *string `type:"string"` + + // The ID of the AWS Config rule. + ConfigRuleId *string `type:"string"` + + // The name of the AWS Config rule. + ConfigRuleName *string `min:"1" type:"string"` + + // The time that you first activated the AWS Config rule. + FirstActivatedTime *time.Time `type:"timestamp"` + + // Indicates whether AWS Config has evaluated your resources against the rule + // at least once. + // + // * true - AWS Config has evaluated your AWS resources against the rule + // at least once. + // + // * false - AWS Config has not once finished evaluating your AWS resources + // against the rule. + FirstEvaluationStarted *bool `type:"boolean"` + + // The error code that AWS Config returned when the rule last failed. + LastErrorCode *string `type:"string"` + + // The error message that AWS Config returned when the rule last failed. + LastErrorMessage *string `type:"string"` + + // The time that AWS Config last failed to evaluate your AWS resources against + // the rule. + LastFailedEvaluationTime *time.Time `type:"timestamp"` + + // The time that AWS Config last failed to invoke the AWS Config rule to evaluate + // your AWS resources. + LastFailedInvocationTime *time.Time `type:"timestamp"` + + // The time that AWS Config last successfully evaluated your AWS resources against + // the rule. + LastSuccessfulEvaluationTime *time.Time `type:"timestamp"` + + // The time that AWS Config last successfully invoked the AWS Config rule to + // evaluate your AWS resources. + LastSuccessfulInvocationTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s ConfigRuleEvaluationStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigRuleEvaluationStatus) GoString() string { + return s.String() +} + +// SetConfigRuleArn sets the ConfigRuleArn field's value. +func (s *ConfigRuleEvaluationStatus) SetConfigRuleArn(v string) *ConfigRuleEvaluationStatus { + s.ConfigRuleArn = &v + return s +} + +// SetConfigRuleId sets the ConfigRuleId field's value. +func (s *ConfigRuleEvaluationStatus) SetConfigRuleId(v string) *ConfigRuleEvaluationStatus { + s.ConfigRuleId = &v + return s +} + +// SetConfigRuleName sets the ConfigRuleName field's value. 
+func (s *ConfigRuleEvaluationStatus) SetConfigRuleName(v string) *ConfigRuleEvaluationStatus { + s.ConfigRuleName = &v + return s +} + +// SetFirstActivatedTime sets the FirstActivatedTime field's value. +func (s *ConfigRuleEvaluationStatus) SetFirstActivatedTime(v time.Time) *ConfigRuleEvaluationStatus { + s.FirstActivatedTime = &v + return s +} + +// SetFirstEvaluationStarted sets the FirstEvaluationStarted field's value. +func (s *ConfigRuleEvaluationStatus) SetFirstEvaluationStarted(v bool) *ConfigRuleEvaluationStatus { + s.FirstEvaluationStarted = &v + return s +} + +// SetLastErrorCode sets the LastErrorCode field's value. +func (s *ConfigRuleEvaluationStatus) SetLastErrorCode(v string) *ConfigRuleEvaluationStatus { + s.LastErrorCode = &v + return s +} + +// SetLastErrorMessage sets the LastErrorMessage field's value. +func (s *ConfigRuleEvaluationStatus) SetLastErrorMessage(v string) *ConfigRuleEvaluationStatus { + s.LastErrorMessage = &v + return s +} + +// SetLastFailedEvaluationTime sets the LastFailedEvaluationTime field's value. +func (s *ConfigRuleEvaluationStatus) SetLastFailedEvaluationTime(v time.Time) *ConfigRuleEvaluationStatus { + s.LastFailedEvaluationTime = &v + return s +} + +// SetLastFailedInvocationTime sets the LastFailedInvocationTime field's value. +func (s *ConfigRuleEvaluationStatus) SetLastFailedInvocationTime(v time.Time) *ConfigRuleEvaluationStatus { + s.LastFailedInvocationTime = &v + return s +} + +// SetLastSuccessfulEvaluationTime sets the LastSuccessfulEvaluationTime field's value. +func (s *ConfigRuleEvaluationStatus) SetLastSuccessfulEvaluationTime(v time.Time) *ConfigRuleEvaluationStatus { + s.LastSuccessfulEvaluationTime = &v + return s +} + +// SetLastSuccessfulInvocationTime sets the LastSuccessfulInvocationTime field's value. +func (s *ConfigRuleEvaluationStatus) SetLastSuccessfulInvocationTime(v time.Time) *ConfigRuleEvaluationStatus { + s.LastSuccessfulInvocationTime = &v + return s +} + +// Provides options for how often AWS Config delivers configuration snapshots +// to the Amazon S3 bucket in your delivery channel. +// +// If you want to create a rule that triggers evaluations for your resources +// when AWS Config delivers the configuration snapshot, see the following: +// +// The frequency for a rule that triggers evaluations for your resources when +// AWS Config delivers the configuration snapshot is set by one of two values, +// depending on which is less frequent: +// +// * The value for the deliveryFrequency parameter within the delivery channel +// configuration, which sets how often AWS Config delivers configuration +// snapshots. This value also sets how often AWS Config invokes evaluations +// for AWS Config rules. +// +// * The value for the MaximumExecutionFrequency parameter, which sets the +// maximum frequency with which AWS Config invokes evaluations for the rule. +// For more information, see ConfigRule. +// +// If the deliveryFrequency value is less frequent than the MaximumExecutionFrequency +// value for a rule, AWS Config invokes the rule only as often as the deliveryFrequency +// value. +// +// For example, you want your rule to run evaluations when AWS Config delivers +// the configuration snapshot. +// +// You specify the MaximumExecutionFrequency value for Six_Hours. +// +// You then specify the delivery channel deliveryFrequency value for TwentyFour_Hours. 
+//
+// Because the value for deliveryFrequency is less frequent than MaximumExecutionFrequency,
+// AWS Config invokes evaluations for the rule every 24 hours.
+//
+// You should set the MaximumExecutionFrequency value to be at least as frequent
+// as the deliveryFrequency value. You can view the deliveryFrequency value
+// by using the DescribeDeliveryChannels action.
+//
+// To update the deliveryFrequency with which AWS Config delivers your configuration
+// snapshots, use the PutDeliveryChannel action.
+type ConfigSnapshotDeliveryProperties struct {
+	_ struct{} `type:"structure"`
+
+	// The frequency with which AWS Config delivers configuration snapshots.
+	DeliveryFrequency *string `locationName:"deliveryFrequency" type:"string" enum:"MaximumExecutionFrequency"`
+}
+
+// String returns the string representation
+func (s ConfigSnapshotDeliveryProperties) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ConfigSnapshotDeliveryProperties) GoString() string {
+	return s.String()
+}
+
+// SetDeliveryFrequency sets the DeliveryFrequency field's value.
+func (s *ConfigSnapshotDeliveryProperties) SetDeliveryFrequency(v string) *ConfigSnapshotDeliveryProperties {
+	s.DeliveryFrequency = &v
+	return s
+}
+
+// A list that contains the status of the delivery of the configuration stream
+// notification to the Amazon SNS topic.
+type ConfigStreamDeliveryInfo struct {
+	_ struct{} `type:"structure"`
+
+	// The error code from the last attempted delivery.
+	LastErrorCode *string `locationName:"lastErrorCode" type:"string"`
+
+	// The error message from the last attempted delivery.
+	LastErrorMessage *string `locationName:"lastErrorMessage" type:"string"`
+
+	// Status of the last attempted delivery.
+	//
+	// Note Providing an SNS topic on a DeliveryChannel (http://docs.aws.amazon.com/config/latest/APIReference/API_DeliveryChannel.html)
+	// for AWS Config is optional. If the SNS delivery is turned off, the last status
+	// will be Not_Applicable.
+	LastStatus *string `locationName:"lastStatus" type:"string" enum:"DeliveryStatus"`
+
+	// The time from the last status change.
+	LastStatusChangeTime *time.Time `locationName:"lastStatusChangeTime" type:"timestamp"`
+}
+
+// String returns the string representation
+func (s ConfigStreamDeliveryInfo) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ConfigStreamDeliveryInfo) GoString() string {
+	return s.String()
+}
+
+// SetLastErrorCode sets the LastErrorCode field's value.
+func (s *ConfigStreamDeliveryInfo) SetLastErrorCode(v string) *ConfigStreamDeliveryInfo {
+	s.LastErrorCode = &v
+	return s
+}
+
+// SetLastErrorMessage sets the LastErrorMessage field's value.
+func (s *ConfigStreamDeliveryInfo) SetLastErrorMessage(v string) *ConfigStreamDeliveryInfo {
+	s.LastErrorMessage = &v
+	return s
+}
+
+// SetLastStatus sets the LastStatus field's value.
+func (s *ConfigStreamDeliveryInfo) SetLastStatus(v string) *ConfigStreamDeliveryInfo {
+	s.LastStatus = &v
+	return s
+}
+
+// SetLastStatusChangeTime sets the LastStatusChangeTime field's value.
+func (s *ConfigStreamDeliveryInfo) SetLastStatusChangeTime(v time.Time) *ConfigStreamDeliveryInfo {
+	s.LastStatusChangeTime = &v
+	return s
+}
+
+// The details about the configuration aggregator, including information about
+// source accounts, regions, and metadata of the aggregator.
+type ConfigurationAggregator struct { + _ struct{} `type:"structure"` + + // Provides a list of source accounts and regions to be aggregated. + AccountAggregationSources []*AccountAggregationSource `type:"list"` + + // The Amazon Resource Name (ARN) of the aggregator. + ConfigurationAggregatorArn *string `type:"string"` + + // The name of the aggregator. + ConfigurationAggregatorName *string `min:"1" type:"string"` + + // The time stamp when the configuration aggregator was created. + CreationTime *time.Time `type:"timestamp"` + + // The time of the last update. + LastUpdatedTime *time.Time `type:"timestamp"` + + // Provides an organization and list of regions to be aggregated. + OrganizationAggregationSource *OrganizationAggregationSource `type:"structure"` +} + +// String returns the string representation +func (s ConfigurationAggregator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigurationAggregator) GoString() string { + return s.String() +} + +// SetAccountAggregationSources sets the AccountAggregationSources field's value. +func (s *ConfigurationAggregator) SetAccountAggregationSources(v []*AccountAggregationSource) *ConfigurationAggregator { + s.AccountAggregationSources = v + return s +} + +// SetConfigurationAggregatorArn sets the ConfigurationAggregatorArn field's value. +func (s *ConfigurationAggregator) SetConfigurationAggregatorArn(v string) *ConfigurationAggregator { + s.ConfigurationAggregatorArn = &v + return s +} + +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *ConfigurationAggregator) SetConfigurationAggregatorName(v string) *ConfigurationAggregator { + s.ConfigurationAggregatorName = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *ConfigurationAggregator) SetCreationTime(v time.Time) *ConfigurationAggregator { + s.CreationTime = &v + return s +} + +// SetLastUpdatedTime sets the LastUpdatedTime field's value. +func (s *ConfigurationAggregator) SetLastUpdatedTime(v time.Time) *ConfigurationAggregator { + s.LastUpdatedTime = &v + return s +} + +// SetOrganizationAggregationSource sets the OrganizationAggregationSource field's value. +func (s *ConfigurationAggregator) SetOrganizationAggregationSource(v *OrganizationAggregationSource) *ConfigurationAggregator { + s.OrganizationAggregationSource = v + return s +} + +// A list that contains detailed configurations of a specified resource. +type ConfigurationItem struct { + _ struct{} `type:"structure"` + + // The 12-digit AWS account ID associated with the resource. + AccountId *string `locationName:"accountId" type:"string"` + + // The Amazon Resource Name (ARN) of the resource. + Arn *string `locationName:"arn" type:"string"` + + // The Availability Zone associated with the resource. + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` + + // The region where the resource resides. + AwsRegion *string `locationName:"awsRegion" min:"1" type:"string"` + + // The description of the resource configuration. + Configuration *string `locationName:"configuration" type:"string"` + + // The time when the configuration recording was initiated. + ConfigurationItemCaptureTime *time.Time `locationName:"configurationItemCaptureTime" type:"timestamp"` + + // Unique MD5 hash that represents the configuration item's state. + // + // You can use MD5 hash to compare the states of two or more configuration items + // that are associated with the same resource. 
+ ConfigurationItemMD5Hash *string `locationName:"configurationItemMD5Hash" type:"string"` + + // The configuration item status. + ConfigurationItemStatus *string `locationName:"configurationItemStatus" type:"string" enum:"ConfigurationItemStatus"` + + // An identifier that indicates the ordering of the configuration items of a + // resource. + ConfigurationStateId *string `locationName:"configurationStateId" type:"string"` + + // A list of CloudTrail event IDs. + // + // A populated field indicates that the current configuration was initiated + // by the events recorded in the CloudTrail log. For more information about + // CloudTrail, see What Is AWS CloudTrail (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/what_is_cloud_trail_top_level.html). + // + // An empty field indicates that the current configuration was not initiated + // by any event. + RelatedEvents []*string `locationName:"relatedEvents" type:"list"` + + // A list of related AWS resources. + Relationships []*Relationship `locationName:"relationships" type:"list"` + + // The time stamp when the resource was created. + ResourceCreationTime *time.Time `locationName:"resourceCreationTime" type:"timestamp"` + + // The ID of the resource (for example, sg-xxxxxx). + ResourceId *string `locationName:"resourceId" min:"1" type:"string"` + + // The custom name of the resource, if available. + ResourceName *string `locationName:"resourceName" type:"string"` + + // The type of AWS resource. + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // Configuration attributes that AWS Config returns for certain resource types + // to supplement the information returned for the configuration parameter. + SupplementaryConfiguration map[string]*string `locationName:"supplementaryConfiguration" type:"map"` + + // A mapping of key value tags associated with the resource. + Tags map[string]*string `locationName:"tags" type:"map"` + + // The version number of the resource configuration. + Version *string `locationName:"version" type:"string"` +} + +// String returns the string representation +func (s ConfigurationItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigurationItem) GoString() string { + return s.String() +} + +// SetAccountId sets the AccountId field's value. +func (s *ConfigurationItem) SetAccountId(v string) *ConfigurationItem { + s.AccountId = &v + return s +} + +// SetArn sets the Arn field's value. +func (s *ConfigurationItem) SetArn(v string) *ConfigurationItem { + s.Arn = &v + return s +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *ConfigurationItem) SetAvailabilityZone(v string) *ConfigurationItem { + s.AvailabilityZone = &v + return s +} + +// SetAwsRegion sets the AwsRegion field's value. +func (s *ConfigurationItem) SetAwsRegion(v string) *ConfigurationItem { + s.AwsRegion = &v + return s +} + +// SetConfiguration sets the Configuration field's value. +func (s *ConfigurationItem) SetConfiguration(v string) *ConfigurationItem { + s.Configuration = &v + return s +} + +// SetConfigurationItemCaptureTime sets the ConfigurationItemCaptureTime field's value. +func (s *ConfigurationItem) SetConfigurationItemCaptureTime(v time.Time) *ConfigurationItem { + s.ConfigurationItemCaptureTime = &v + return s +} + +// SetConfigurationItemMD5Hash sets the ConfigurationItemMD5Hash field's value. 
+func (s *ConfigurationItem) SetConfigurationItemMD5Hash(v string) *ConfigurationItem { + s.ConfigurationItemMD5Hash = &v + return s +} + +// SetConfigurationItemStatus sets the ConfigurationItemStatus field's value. +func (s *ConfigurationItem) SetConfigurationItemStatus(v string) *ConfigurationItem { + s.ConfigurationItemStatus = &v + return s +} + +// SetConfigurationStateId sets the ConfigurationStateId field's value. +func (s *ConfigurationItem) SetConfigurationStateId(v string) *ConfigurationItem { + s.ConfigurationStateId = &v + return s +} + +// SetRelatedEvents sets the RelatedEvents field's value. +func (s *ConfigurationItem) SetRelatedEvents(v []*string) *ConfigurationItem { + s.RelatedEvents = v + return s +} + +// SetRelationships sets the Relationships field's value. +func (s *ConfigurationItem) SetRelationships(v []*Relationship) *ConfigurationItem { + s.Relationships = v + return s +} + +// SetResourceCreationTime sets the ResourceCreationTime field's value. +func (s *ConfigurationItem) SetResourceCreationTime(v time.Time) *ConfigurationItem { + s.ResourceCreationTime = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *ConfigurationItem) SetResourceId(v string) *ConfigurationItem { + s.ResourceId = &v + return s +} + +// SetResourceName sets the ResourceName field's value. +func (s *ConfigurationItem) SetResourceName(v string) *ConfigurationItem { + s.ResourceName = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ConfigurationItem) SetResourceType(v string) *ConfigurationItem { + s.ResourceType = &v + return s +} + +// SetSupplementaryConfiguration sets the SupplementaryConfiguration field's value. +func (s *ConfigurationItem) SetSupplementaryConfiguration(v map[string]*string) *ConfigurationItem { + s.SupplementaryConfiguration = v + return s +} + +// SetTags sets the Tags field's value. +func (s *ConfigurationItem) SetTags(v map[string]*string) *ConfigurationItem { + s.Tags = v + return s +} + +// SetVersion sets the Version field's value. +func (s *ConfigurationItem) SetVersion(v string) *ConfigurationItem { + s.Version = &v + return s +} + +// An object that represents the recording of configuration changes of an AWS +// resource. +type ConfigurationRecorder struct { + _ struct{} `type:"structure"` + + // The name of the recorder. By default, AWS Config automatically assigns the + // name "default" when creating the configuration recorder. You cannot change + // the assigned name. + Name *string `locationName:"name" min:"1" type:"string"` + + // Specifies the types of AWS resources for which AWS Config records configuration + // changes. + RecordingGroup *RecordingGroup `locationName:"recordingGroup" type:"structure"` + + // Amazon Resource Name (ARN) of the IAM role used to describe the AWS resources + // associated with the account. + RoleARN *string `locationName:"roleARN" type:"string"` +} + +// String returns the string representation +func (s ConfigurationRecorder) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigurationRecorder) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ConfigurationRecorder) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfigurationRecorder"} + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *ConfigurationRecorder) SetName(v string) *ConfigurationRecorder { + s.Name = &v + return s +} + +// SetRecordingGroup sets the RecordingGroup field's value. +func (s *ConfigurationRecorder) SetRecordingGroup(v *RecordingGroup) *ConfigurationRecorder { + s.RecordingGroup = v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *ConfigurationRecorder) SetRoleARN(v string) *ConfigurationRecorder { + s.RoleARN = &v + return s +} + +// The current status of the configuration recorder. +type ConfigurationRecorderStatus struct { + _ struct{} `type:"structure"` + + // The error code indicating that the recording failed. + LastErrorCode *string `locationName:"lastErrorCode" type:"string"` + + // The message indicating that the recording failed due to an error. + LastErrorMessage *string `locationName:"lastErrorMessage" type:"string"` + + // The time the recorder was last started. + LastStartTime *time.Time `locationName:"lastStartTime" type:"timestamp"` + + // The last (previous) status of the recorder. + LastStatus *string `locationName:"lastStatus" type:"string" enum:"RecorderStatus"` + + // The time when the status was last changed. + LastStatusChangeTime *time.Time `locationName:"lastStatusChangeTime" type:"timestamp"` + + // The time the recorder was last stopped. + LastStopTime *time.Time `locationName:"lastStopTime" type:"timestamp"` + + // The name of the configuration recorder. + Name *string `locationName:"name" type:"string"` + + // Specifies whether or not the recorder is currently recording. + Recording *bool `locationName:"recording" type:"boolean"` +} + +// String returns the string representation +func (s ConfigurationRecorderStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigurationRecorderStatus) GoString() string { + return s.String() +} + +// SetLastErrorCode sets the LastErrorCode field's value. +func (s *ConfigurationRecorderStatus) SetLastErrorCode(v string) *ConfigurationRecorderStatus { + s.LastErrorCode = &v + return s +} + +// SetLastErrorMessage sets the LastErrorMessage field's value. +func (s *ConfigurationRecorderStatus) SetLastErrorMessage(v string) *ConfigurationRecorderStatus { + s.LastErrorMessage = &v + return s +} + +// SetLastStartTime sets the LastStartTime field's value. +func (s *ConfigurationRecorderStatus) SetLastStartTime(v time.Time) *ConfigurationRecorderStatus { + s.LastStartTime = &v + return s +} + +// SetLastStatus sets the LastStatus field's value. +func (s *ConfigurationRecorderStatus) SetLastStatus(v string) *ConfigurationRecorderStatus { + s.LastStatus = &v + return s +} + +// SetLastStatusChangeTime sets the LastStatusChangeTime field's value. +func (s *ConfigurationRecorderStatus) SetLastStatusChangeTime(v time.Time) *ConfigurationRecorderStatus { + s.LastStatusChangeTime = &v + return s +} + +// SetLastStopTime sets the LastStopTime field's value. +func (s *ConfigurationRecorderStatus) SetLastStopTime(v time.Time) *ConfigurationRecorderStatus { + s.LastStopTime = &v + return s +} + +// SetName sets the Name field's value. 
+func (s *ConfigurationRecorderStatus) SetName(v string) *ConfigurationRecorderStatus { + s.Name = &v + return s +} + +// SetRecording sets the Recording field's value. +func (s *ConfigurationRecorderStatus) SetRecording(v bool) *ConfigurationRecorderStatus { + s.Recording = &v + return s +} + +type DeleteAggregationAuthorizationInput struct { + _ struct{} `type:"structure"` + + // The 12-digit account ID of the account authorized to aggregate data. + // + // AuthorizedAccountId is a required field + AuthorizedAccountId *string `type:"string" required:"true"` + + // The region authorized to collect aggregated data. + // + // AuthorizedAwsRegion is a required field + AuthorizedAwsRegion *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteAggregationAuthorizationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAggregationAuthorizationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteAggregationAuthorizationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteAggregationAuthorizationInput"} + if s.AuthorizedAccountId == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizedAccountId")) + } + if s.AuthorizedAwsRegion == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizedAwsRegion")) + } + if s.AuthorizedAwsRegion != nil && len(*s.AuthorizedAwsRegion) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthorizedAwsRegion", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthorizedAccountId sets the AuthorizedAccountId field's value. +func (s *DeleteAggregationAuthorizationInput) SetAuthorizedAccountId(v string) *DeleteAggregationAuthorizationInput { + s.AuthorizedAccountId = &v + return s +} + +// SetAuthorizedAwsRegion sets the AuthorizedAwsRegion field's value. +func (s *DeleteAggregationAuthorizationInput) SetAuthorizedAwsRegion(v string) *DeleteAggregationAuthorizationInput { + s.AuthorizedAwsRegion = &v + return s +} + +type DeleteAggregationAuthorizationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteAggregationAuthorizationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAggregationAuthorizationOutput) GoString() string { + return s.String() +} + +type DeleteConfigRuleInput struct { + _ struct{} `type:"structure"` + + // The name of the AWS Config rule that you want to delete. + // + // ConfigRuleName is a required field + ConfigRuleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteConfigRuleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteConfigRuleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteConfigRuleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteConfigRuleInput"} + if s.ConfigRuleName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigRuleName")) + } + if s.ConfigRuleName != nil && len(*s.ConfigRuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigRuleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConfigRuleName sets the ConfigRuleName field's value. +func (s *DeleteConfigRuleInput) SetConfigRuleName(v string) *DeleteConfigRuleInput { + s.ConfigRuleName = &v + return s +} + +type DeleteConfigRuleOutput struct { _ struct{} `type:"structure"` +} - // The number of AWS resources or AWS Config rules that cause a result of NON_COMPLIANT, - // up to a maximum number. - ComplianceContributorCount *ComplianceContributorCount `type:"structure"` +// String returns the string representation +func (s DeleteConfigRuleOutput) String() string { + return awsutil.Prettify(s) +} - // Indicates whether an AWS resource or AWS Config rule is compliant. - // - // A resource is compliant if it complies with all of the AWS Config rules that - // evaluate it, and it is noncompliant if it does not comply with one or more - // of these rules. - // - // A rule is compliant if all of the resources that the rule evaluates comply - // with it, and it is noncompliant if any of these resources do not comply. - // - // AWS Config returns the INSUFFICIENT_DATA value when no evaluation results - // are available for the AWS resource or Config rule. +// GoString returns the string representation +func (s DeleteConfigRuleOutput) GoString() string { + return s.String() +} + +type DeleteConfigurationAggregatorInput struct { + _ struct{} `type:"structure"` + + // The name of the configuration aggregator. // - // For the Compliance data type, AWS Config supports only COMPLIANT, NON_COMPLIANT, - // and INSUFFICIENT_DATA values. AWS Config does not support the NOT_APPLICABLE - // value for the Compliance data type. - ComplianceType *string `type:"string" enum:"ComplianceType"` + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s Compliance) String() string { +func (s DeleteConfigurationAggregatorInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Compliance) GoString() string { +func (s DeleteConfigurationAggregatorInput) GoString() string { return s.String() } -// SetComplianceContributorCount sets the ComplianceContributorCount field's value. -func (s *Compliance) SetComplianceContributorCount(v *ComplianceContributorCount) *Compliance { - s.ComplianceContributorCount = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteConfigurationAggregatorInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteConfigurationAggregatorInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) + } + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetComplianceType sets the ComplianceType field's value. 
-func (s *Compliance) SetComplianceType(v string) *Compliance { - s.ComplianceType = &v +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *DeleteConfigurationAggregatorInput) SetConfigurationAggregatorName(v string) *DeleteConfigurationAggregatorInput { + s.ConfigurationAggregatorName = &v return s } -// Indicates whether an AWS Config rule is compliant. A rule is compliant if -// all of the resources that the rule evaluated comply with it, and it is noncompliant -// if any of these resources do not comply. -type ComplianceByConfigRule struct { +type DeleteConfigurationAggregatorOutput struct { _ struct{} `type:"structure"` +} - // Indicates whether the AWS Config rule is compliant. - Compliance *Compliance `type:"structure"` +// String returns the string representation +func (s DeleteConfigurationAggregatorOutput) String() string { + return awsutil.Prettify(s) +} - // The name of the AWS Config rule. - ConfigRuleName *string `min:"1" type:"string"` +// GoString returns the string representation +func (s DeleteConfigurationAggregatorOutput) GoString() string { + return s.String() +} + +// The request object for the DeleteConfigurationRecorder action. +type DeleteConfigurationRecorderInput struct { + _ struct{} `type:"structure"` + + // The name of the configuration recorder to be deleted. You can retrieve the + // name of your configuration recorder by using the DescribeConfigurationRecorders + // action. + // + // ConfigurationRecorderName is a required field + ConfigurationRecorderName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ComplianceByConfigRule) String() string { +func (s DeleteConfigurationRecorderInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ComplianceByConfigRule) GoString() string { +func (s DeleteConfigurationRecorderInput) GoString() string { return s.String() } -// SetCompliance sets the Compliance field's value. -func (s *ComplianceByConfigRule) SetCompliance(v *Compliance) *ComplianceByConfigRule { - s.Compliance = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteConfigurationRecorderInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteConfigurationRecorderInput"} + if s.ConfigurationRecorderName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationRecorderName")) + } + if s.ConfigurationRecorderName != nil && len(*s.ConfigurationRecorderName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationRecorderName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetConfigRuleName sets the ConfigRuleName field's value. -func (s *ComplianceByConfigRule) SetConfigRuleName(v string) *ComplianceByConfigRule { - s.ConfigRuleName = &v +// SetConfigurationRecorderName sets the ConfigurationRecorderName field's value. +func (s *DeleteConfigurationRecorderInput) SetConfigurationRecorderName(v string) *DeleteConfigurationRecorderInput { + s.ConfigurationRecorderName = &v return s } -// Indicates whether an AWS resource that is evaluated according to one or more -// AWS Config rules is compliant. A resource is compliant if it complies with -// all of the rules that evaluate it, and it is noncompliant if it does not -// comply with one or more of these rules. 
-type ComplianceByResource struct { +type DeleteConfigurationRecorderOutput struct { _ struct{} `type:"structure"` +} - // Indicates whether the AWS resource complies with all of the AWS Config rules - // that evaluated it. - Compliance *Compliance `type:"structure"` +// String returns the string representation +func (s DeleteConfigurationRecorderOutput) String() string { + return awsutil.Prettify(s) +} - // The ID of the AWS resource that was evaluated. - ResourceId *string `min:"1" type:"string"` +// GoString returns the string representation +func (s DeleteConfigurationRecorderOutput) GoString() string { + return s.String() +} - // The type of the AWS resource that was evaluated. - ResourceType *string `min:"1" type:"string"` +// The input for the DeleteDeliveryChannel action. The action accepts the following +// data, in JSON format. +type DeleteDeliveryChannelInput struct { + _ struct{} `type:"structure"` + + // The name of the delivery channel to delete. + // + // DeliveryChannelName is a required field + DeliveryChannelName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ComplianceByResource) String() string { +func (s DeleteDeliveryChannelInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ComplianceByResource) GoString() string { +func (s DeleteDeliveryChannelInput) GoString() string { return s.String() } -// SetCompliance sets the Compliance field's value. -func (s *ComplianceByResource) SetCompliance(v *Compliance) *ComplianceByResource { - s.Compliance = v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDeliveryChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDeliveryChannelInput"} + if s.DeliveryChannelName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryChannelName")) + } + if s.DeliveryChannelName != nil && len(*s.DeliveryChannelName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryChannelName", 1)) + } -// SetResourceId sets the ResourceId field's value. -func (s *ComplianceByResource) SetResourceId(v string) *ComplianceByResource { - s.ResourceId = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetResourceType sets the ResourceType field's value. -func (s *ComplianceByResource) SetResourceType(v string) *ComplianceByResource { - s.ResourceType = &v +// SetDeliveryChannelName sets the DeliveryChannelName field's value. +func (s *DeleteDeliveryChannelInput) SetDeliveryChannelName(v string) *DeleteDeliveryChannelInput { + s.DeliveryChannelName = &v return s } -// The number of AWS resources or AWS Config rules responsible for the current -// compliance of the item, up to a maximum number. -type ComplianceContributorCount struct { +type DeleteDeliveryChannelOutput struct { _ struct{} `type:"structure"` +} - // Indicates whether the maximum count is reached. - CapExceeded *bool `type:"boolean"` +// String returns the string representation +func (s DeleteDeliveryChannelOutput) String() string { + return awsutil.Prettify(s) +} - // The number of AWS resources or AWS Config rules responsible for the current - // compliance of the item. 
- CappedCount *int64 `type:"integer"` +// GoString returns the string representation +func (s DeleteDeliveryChannelOutput) GoString() string { + return s.String() +} + +type DeleteEvaluationResultsInput struct { + _ struct{} `type:"structure"` + + // The name of the AWS Config rule for which you want to delete the evaluation + // results. + // + // ConfigRuleName is a required field + ConfigRuleName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ComplianceContributorCount) String() string { +func (s DeleteEvaluationResultsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ComplianceContributorCount) GoString() string { +func (s DeleteEvaluationResultsInput) GoString() string { return s.String() } -// SetCapExceeded sets the CapExceeded field's value. -func (s *ComplianceContributorCount) SetCapExceeded(v bool) *ComplianceContributorCount { - s.CapExceeded = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteEvaluationResultsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteEvaluationResultsInput"} + if s.ConfigRuleName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigRuleName")) + } + if s.ConfigRuleName != nil && len(*s.ConfigRuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigRuleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetCappedCount sets the CappedCount field's value. -func (s *ComplianceContributorCount) SetCappedCount(v int64) *ComplianceContributorCount { - s.CappedCount = &v +// SetConfigRuleName sets the ConfigRuleName field's value. +func (s *DeleteEvaluationResultsInput) SetConfigRuleName(v string) *DeleteEvaluationResultsInput { + s.ConfigRuleName = &v return s } -// The number of AWS Config rules or AWS resources that are compliant and noncompliant. -type ComplianceSummary struct { +// The output when you delete the evaluation results for the specified AWS Config +// rule. +type DeleteEvaluationResultsOutput struct { _ struct{} `type:"structure"` +} - // The time that AWS Config created the compliance summary. - ComplianceSummaryTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` +// String returns the string representation +func (s DeleteEvaluationResultsOutput) String() string { + return awsutil.Prettify(s) +} - // The number of AWS Config rules or AWS resources that are compliant, up to - // a maximum of 25 for rules and 100 for resources. - CompliantResourceCount *ComplianceContributorCount `type:"structure"` +// GoString returns the string representation +func (s DeleteEvaluationResultsOutput) GoString() string { + return s.String() +} - // The number of AWS Config rules or AWS resources that are noncompliant, up - // to a maximum of 25 for rules and 100 for resources. - NonCompliantResourceCount *ComplianceContributorCount `type:"structure"` +type DeletePendingAggregationRequestInput struct { + _ struct{} `type:"structure"` + + // The 12-digit account ID of the account requesting to aggregate data. + // + // RequesterAccountId is a required field + RequesterAccountId *string `type:"string" required:"true"` + + // The region requesting to aggregate data. 
+ // + // RequesterAwsRegion is a required field + RequesterAwsRegion *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ComplianceSummary) String() string { +func (s DeletePendingAggregationRequestInput) String() string { return awsutil.Prettify(s) } -// GoString returns the string representation -func (s ComplianceSummary) GoString() string { - return s.String() +// GoString returns the string representation +func (s DeletePendingAggregationRequestInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeletePendingAggregationRequestInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePendingAggregationRequestInput"} + if s.RequesterAccountId == nil { + invalidParams.Add(request.NewErrParamRequired("RequesterAccountId")) + } + if s.RequesterAwsRegion == nil { + invalidParams.Add(request.NewErrParamRequired("RequesterAwsRegion")) + } + if s.RequesterAwsRegion != nil && len(*s.RequesterAwsRegion) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RequesterAwsRegion", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRequesterAccountId sets the RequesterAccountId field's value. +func (s *DeletePendingAggregationRequestInput) SetRequesterAccountId(v string) *DeletePendingAggregationRequestInput { + s.RequesterAccountId = &v + return s } -// SetComplianceSummaryTimestamp sets the ComplianceSummaryTimestamp field's value. -func (s *ComplianceSummary) SetComplianceSummaryTimestamp(v time.Time) *ComplianceSummary { - s.ComplianceSummaryTimestamp = &v +// SetRequesterAwsRegion sets the RequesterAwsRegion field's value. +func (s *DeletePendingAggregationRequestInput) SetRequesterAwsRegion(v string) *DeletePendingAggregationRequestInput { + s.RequesterAwsRegion = &v return s } -// SetCompliantResourceCount sets the CompliantResourceCount field's value. -func (s *ComplianceSummary) SetCompliantResourceCount(v *ComplianceContributorCount) *ComplianceSummary { - s.CompliantResourceCount = v - return s +type DeletePendingAggregationRequestOutput struct { + _ struct{} `type:"structure"` } -// SetNonCompliantResourceCount sets the NonCompliantResourceCount field's value. -func (s *ComplianceSummary) SetNonCompliantResourceCount(v *ComplianceContributorCount) *ComplianceSummary { - s.NonCompliantResourceCount = v - return s +// String returns the string representation +func (s DeletePendingAggregationRequestOutput) String() string { + return awsutil.Prettify(s) } -// The number of AWS resources of a specific type that are compliant or noncompliant, -// up to a maximum of 100 for each compliance. -type ComplianceSummaryByResourceType struct { - _ struct{} `type:"structure"` +// GoString returns the string representation +func (s DeletePendingAggregationRequestOutput) GoString() string { + return s.String() +} - // The number of AWS resources that are compliant or noncompliant, up to a maximum - // of 100 for each compliance. - ComplianceSummary *ComplianceSummary `type:"structure"` +type DeleteRetentionConfigurationInput struct { + _ struct{} `type:"structure"` - // The type of AWS resource. - ResourceType *string `min:"1" type:"string"` + // The name of the retention configuration to delete. 
+ // + // RetentionConfigurationName is a required field + RetentionConfigurationName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ComplianceSummaryByResourceType) String() string { +func (s DeleteRetentionConfigurationInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ComplianceSummaryByResourceType) GoString() string { +func (s DeleteRetentionConfigurationInput) GoString() string { return s.String() } -// SetComplianceSummary sets the ComplianceSummary field's value. -func (s *ComplianceSummaryByResourceType) SetComplianceSummary(v *ComplianceSummary) *ComplianceSummaryByResourceType { - s.ComplianceSummary = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteRetentionConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRetentionConfigurationInput"} + if s.RetentionConfigurationName == nil { + invalidParams.Add(request.NewErrParamRequired("RetentionConfigurationName")) + } + if s.RetentionConfigurationName != nil && len(*s.RetentionConfigurationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RetentionConfigurationName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetResourceType sets the ResourceType field's value. -func (s *ComplianceSummaryByResourceType) SetResourceType(v string) *ComplianceSummaryByResourceType { - s.ResourceType = &v +// SetRetentionConfigurationName sets the RetentionConfigurationName field's value. +func (s *DeleteRetentionConfigurationInput) SetRetentionConfigurationName(v string) *DeleteRetentionConfigurationInput { + s.RetentionConfigurationName = &v return s } -// Provides status of the delivery of the snapshot or the configuration history -// to the specified Amazon S3 bucket. Also provides the status of notifications -// about the Amazon S3 delivery to the specified Amazon SNS topic. -type ConfigExportDeliveryInfo struct { +type DeleteRetentionConfigurationOutput struct { _ struct{} `type:"structure"` +} - // The time of the last attempted delivery. - LastAttemptTime *time.Time `locationName:"lastAttemptTime" type:"timestamp" timestampFormat:"unix"` - - // The error code from the last attempted delivery. - LastErrorCode *string `locationName:"lastErrorCode" type:"string"` - - // The error message from the last attempted delivery. - LastErrorMessage *string `locationName:"lastErrorMessage" type:"string"` +// String returns the string representation +func (s DeleteRetentionConfigurationOutput) String() string { + return awsutil.Prettify(s) +} - // Status of the last attempted delivery. - LastStatus *string `locationName:"lastStatus" type:"string" enum:"DeliveryStatus"` +// GoString returns the string representation +func (s DeleteRetentionConfigurationOutput) GoString() string { + return s.String() +} - // The time of the last successful delivery. - LastSuccessfulTime *time.Time `locationName:"lastSuccessfulTime" type:"timestamp" timestampFormat:"unix"` +// The input for the DeliverConfigSnapshot action. +type DeliverConfigSnapshotInput struct { + _ struct{} `type:"structure"` - // The time that the next delivery occurs. - NextDeliveryTime *time.Time `locationName:"nextDeliveryTime" type:"timestamp" timestampFormat:"unix"` + // The name of the delivery channel through which the snapshot is delivered. 
+ // + // DeliveryChannelName is a required field + DeliveryChannelName *string `locationName:"deliveryChannelName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ConfigExportDeliveryInfo) String() string { +func (s DeliverConfigSnapshotInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ConfigExportDeliveryInfo) GoString() string { +func (s DeliverConfigSnapshotInput) GoString() string { return s.String() } -// SetLastAttemptTime sets the LastAttemptTime field's value. -func (s *ConfigExportDeliveryInfo) SetLastAttemptTime(v time.Time) *ConfigExportDeliveryInfo { - s.LastAttemptTime = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeliverConfigSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeliverConfigSnapshotInput"} + if s.DeliveryChannelName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryChannelName")) + } + if s.DeliveryChannelName != nil && len(*s.DeliveryChannelName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryChannelName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetLastErrorCode sets the LastErrorCode field's value. -func (s *ConfigExportDeliveryInfo) SetLastErrorCode(v string) *ConfigExportDeliveryInfo { - s.LastErrorCode = &v +// SetDeliveryChannelName sets the DeliveryChannelName field's value. +func (s *DeliverConfigSnapshotInput) SetDeliveryChannelName(v string) *DeliverConfigSnapshotInput { + s.DeliveryChannelName = &v return s } -// SetLastErrorMessage sets the LastErrorMessage field's value. -func (s *ConfigExportDeliveryInfo) SetLastErrorMessage(v string) *ConfigExportDeliveryInfo { - s.LastErrorMessage = &v - return s +// The output for the DeliverConfigSnapshot action, in JSON format. +type DeliverConfigSnapshotOutput struct { + _ struct{} `type:"structure"` + + // The ID of the snapshot that is being created. + ConfigSnapshotId *string `locationName:"configSnapshotId" type:"string"` } -// SetLastStatus sets the LastStatus field's value. -func (s *ConfigExportDeliveryInfo) SetLastStatus(v string) *ConfigExportDeliveryInfo { - s.LastStatus = &v - return s +// String returns the string representation +func (s DeliverConfigSnapshotOutput) String() string { + return awsutil.Prettify(s) } -// SetLastSuccessfulTime sets the LastSuccessfulTime field's value. -func (s *ConfigExportDeliveryInfo) SetLastSuccessfulTime(v time.Time) *ConfigExportDeliveryInfo { - s.LastSuccessfulTime = &v - return s +// GoString returns the string representation +func (s DeliverConfigSnapshotOutput) GoString() string { + return s.String() } -// SetNextDeliveryTime sets the NextDeliveryTime field's value. -func (s *ConfigExportDeliveryInfo) SetNextDeliveryTime(v time.Time) *ConfigExportDeliveryInfo { - s.NextDeliveryTime = &v +// SetConfigSnapshotId sets the ConfigSnapshotId field's value. +func (s *DeliverConfigSnapshotOutput) SetConfigSnapshotId(v string) *DeliverConfigSnapshotOutput { + s.ConfigSnapshotId = &v return s } -// An AWS Config rule represents an AWS Lambda function that you create for -// a custom rule or a predefined function for an AWS managed rule. The function -// evaluates configuration items to assess whether your AWS resources comply -// with your desired configurations. 
This function can run when AWS Config detects -// a configuration change to an AWS resource and at a periodic frequency that -// you choose (for example, every 24 hours). -// -// You can use the AWS CLI and AWS SDKs if you want to create a rule that triggers -// evaluations for your resources when AWS Config delivers the configuration -// snapshot. For more information, see ConfigSnapshotDeliveryProperties. -// -// For more information about developing and using AWS Config rules, see Evaluating -// AWS Resource Configurations with AWS Config (http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) -// in the AWS Config Developer Guide. -type ConfigRule struct { +// The channel through which AWS Config delivers notifications and updated configuration +// states. +type DeliveryChannel struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the AWS Config rule. - ConfigRuleArn *string `type:"string"` - - // The ID of the AWS Config rule. - ConfigRuleId *string `type:"string"` - - // The name that you assign to the AWS Config rule. The name is required if - // you are adding a new rule. - ConfigRuleName *string `min:"1" type:"string"` - - // Indicates whether the AWS Config rule is active or is currently being deleted - // by AWS Config. It can also indicate the evaluation status for the Config - // rule. - // - // AWS Config sets the state of the rule to EVALUATING temporarily after you - // use the StartConfigRulesEvaluation request to evaluate your resources against - // the Config rule. - // - // AWS Config sets the state of the rule to DELETING_RESULTS temporarily after - // you use the DeleteEvaluationResults request to delete the current evaluation - // results for the Config rule. - // - // AWS Config sets the state of a rule to DELETING temporarily after you use - // the DeleteConfigRule request to delete the rule. After AWS Config deletes - // the rule, the rule and all of its evaluations are erased and are no longer - // available. - ConfigRuleState *string `type:"string" enum:"ConfigRuleState"` - - // The description that you provide for the AWS Config rule. - Description *string `type:"string"` + // The options for how often AWS Config delivers configuration snapshots to + // the Amazon S3 bucket. + ConfigSnapshotDeliveryProperties *ConfigSnapshotDeliveryProperties `locationName:"configSnapshotDeliveryProperties" type:"structure"` - // A string in JSON format that is passed to the AWS Config rule Lambda function. - InputParameters *string `min:"1" type:"string"` + // The name of the delivery channel. By default, AWS Config assigns the name + // "default" when creating the delivery channel. To change the delivery channel + // name, you must use the DeleteDeliveryChannel action to delete your current + // delivery channel, and then you must use the PutDeliveryChannel command to + // create a delivery channel that has the desired name. + Name *string `locationName:"name" min:"1" type:"string"` - // The maximum frequency with which AWS Config runs evaluations for a rule. - // You can specify a value for MaximumExecutionFrequency when: - // - // * You are using an AWS managed rule that is triggered at a periodic frequency. - // - // * Your custom rule is triggered when AWS Config delivers the configuration - // snapshot. For more information, see ConfigSnapshotDeliveryProperties. + // The name of the Amazon S3 bucket to which AWS Config delivers configuration + // snapshots and configuration history files. 
// - // By default, rules with a periodic trigger are evaluated every 24 hours. To - // change the frequency, specify a valid value for the MaximumExecutionFrequency - // parameter. - MaximumExecutionFrequency *string `type:"string" enum:"MaximumExecutionFrequency"` + // If you specify a bucket that belongs to another AWS account, that bucket + // must have policies that grant access permissions to AWS Config. For more + // information, see Permissions for the Amazon S3 Bucket (http://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-policy.html) + // in the AWS Config Developer Guide. + S3BucketName *string `locationName:"s3BucketName" type:"string"` - // Defines which resources can trigger an evaluation for the rule. The scope - // can include one or more resource types, a combination of one resource type - // and one resource ID, or a combination of a tag key and value. Specify a scope - // to constrain the resources that can trigger an evaluation for the rule. If - // you do not specify a scope, evaluations are triggered when any resource in - // the recording group changes. - Scope *Scope `type:"structure"` + // The prefix for the specified Amazon S3 bucket. + S3KeyPrefix *string `locationName:"s3KeyPrefix" type:"string"` - // Provides the rule owner (AWS or customer), the rule identifier, and the notifications - // that cause the function to evaluate your AWS resources. + // The Amazon Resource Name (ARN) of the Amazon SNS topic to which AWS Config + // sends notifications about configuration changes. // - // Source is a required field - Source *Source `type:"structure" required:"true"` + // If you choose a topic from another account, the topic must have policies + // that grant access permissions to AWS Config. For more information, see Permissions + // for the Amazon SNS Topic (http://docs.aws.amazon.com/config/latest/developerguide/sns-topic-policy.html) + // in the AWS Config Developer Guide. + SnsTopicARN *string `locationName:"snsTopicARN" type:"string"` } // String returns the string representation -func (s ConfigRule) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s ConfigRule) GoString() string { - return s.String() -} - -// Validate inspects the fields of the type to determine if they are valid. -func (s *ConfigRule) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ConfigRule"} - if s.ConfigRuleName != nil && len(*s.ConfigRuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ConfigRuleName", 1)) - } - if s.InputParameters != nil && len(*s.InputParameters) < 1 { - invalidParams.Add(request.NewErrParamMinLen("InputParameters", 1)) - } - if s.Source == nil { - invalidParams.Add(request.NewErrParamRequired("Source")) - } - if s.Scope != nil { - if err := s.Scope.Validate(); err != nil { - invalidParams.AddNested("Scope", err.(request.ErrInvalidParams)) - } - } - if s.Source != nil { - if err := s.Source.Validate(); err != nil { - invalidParams.AddNested("Source", err.(request.ErrInvalidParams)) - } +func (s DeliveryChannel) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeliveryChannel) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeliveryChannel) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeliveryChannel"} + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) } if invalidParams.Len() > 0 { @@ -3072,670 +7142,661 @@ func (s *ConfigRule) Validate() error { return nil } -// SetConfigRuleArn sets the ConfigRuleArn field's value. -func (s *ConfigRule) SetConfigRuleArn(v string) *ConfigRule { - s.ConfigRuleArn = &v +// SetConfigSnapshotDeliveryProperties sets the ConfigSnapshotDeliveryProperties field's value. +func (s *DeliveryChannel) SetConfigSnapshotDeliveryProperties(v *ConfigSnapshotDeliveryProperties) *DeliveryChannel { + s.ConfigSnapshotDeliveryProperties = v return s } -// SetConfigRuleId sets the ConfigRuleId field's value. -func (s *ConfigRule) SetConfigRuleId(v string) *ConfigRule { - s.ConfigRuleId = &v +// SetName sets the Name field's value. +func (s *DeliveryChannel) SetName(v string) *DeliveryChannel { + s.Name = &v return s } -// SetConfigRuleName sets the ConfigRuleName field's value. -func (s *ConfigRule) SetConfigRuleName(v string) *ConfigRule { - s.ConfigRuleName = &v +// SetS3BucketName sets the S3BucketName field's value. +func (s *DeliveryChannel) SetS3BucketName(v string) *DeliveryChannel { + s.S3BucketName = &v return s } -// SetConfigRuleState sets the ConfigRuleState field's value. -func (s *ConfigRule) SetConfigRuleState(v string) *ConfigRule { - s.ConfigRuleState = &v +// SetS3KeyPrefix sets the S3KeyPrefix field's value. +func (s *DeliveryChannel) SetS3KeyPrefix(v string) *DeliveryChannel { + s.S3KeyPrefix = &v return s } -// SetDescription sets the Description field's value. -func (s *ConfigRule) SetDescription(v string) *ConfigRule { - s.Description = &v +// SetSnsTopicARN sets the SnsTopicARN field's value. +func (s *DeliveryChannel) SetSnsTopicARN(v string) *DeliveryChannel { + s.SnsTopicARN = &v return s } -// SetInputParameters sets the InputParameters field's value. -func (s *ConfigRule) SetInputParameters(v string) *ConfigRule { - s.InputParameters = &v +// The status of a specified delivery channel. +// +// Valid values: Success | Failure +type DeliveryChannelStatus struct { + _ struct{} `type:"structure"` + + // A list that contains the status of the delivery of the configuration history + // to the specified Amazon S3 bucket. + ConfigHistoryDeliveryInfo *ConfigExportDeliveryInfo `locationName:"configHistoryDeliveryInfo" type:"structure"` + + // A list containing the status of the delivery of the snapshot to the specified + // Amazon S3 bucket. + ConfigSnapshotDeliveryInfo *ConfigExportDeliveryInfo `locationName:"configSnapshotDeliveryInfo" type:"structure"` + + // A list containing the status of the delivery of the configuration stream + // notification to the specified Amazon SNS topic. + ConfigStreamDeliveryInfo *ConfigStreamDeliveryInfo `locationName:"configStreamDeliveryInfo" type:"structure"` + + // The name of the delivery channel. + Name *string `locationName:"name" type:"string"` +} + +// String returns the string representation +func (s DeliveryChannelStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeliveryChannelStatus) GoString() string { + return s.String() +} + +// SetConfigHistoryDeliveryInfo sets the ConfigHistoryDeliveryInfo field's value. 
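As a rough, assumption-laden sketch of how the delivery-channel types above might be exercised (the session wiring, the channel name "default", and the log-and-exit error handling are placeholders, not part of this vendored change), a caller could run an input through its client-side `Validate()` before invoking the operation:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	// Standard SDK client wiring; region and credentials come from the environment.
	svc := configservice.New(session.Must(session.NewSession()))

	// Request an on-demand snapshot through a delivery channel. The channel
	// name "default" is a placeholder; the min:"1" constraint on
	// DeliveryChannelName is enforced client-side by Validate before any call.
	in := &configservice.DeliverConfigSnapshotInput{
		DeliveryChannelName: aws.String("default"),
	}
	if err := in.Validate(); err != nil {
		log.Fatalf("invalid input: %v", err)
	}
	out, err := svc.DeliverConfigSnapshot(in)
	if err != nil {
		log.Fatalf("DeliverConfigSnapshot: %v", err)
	}
	log.Printf("snapshot ID: %s", aws.StringValue(out.ConfigSnapshotId))

	// Removing the channel afterwards takes the same name-only input shape.
	if _, err := svc.DeleteDeliveryChannel(&configservice.DeleteDeliveryChannelInput{
		DeliveryChannelName: aws.String("default"),
	}); err != nil {
		log.Fatalf("DeleteDeliveryChannel: %v", err)
	}
}
```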
+func (s *DeliveryChannelStatus) SetConfigHistoryDeliveryInfo(v *ConfigExportDeliveryInfo) *DeliveryChannelStatus { + s.ConfigHistoryDeliveryInfo = v return s } -// SetMaximumExecutionFrequency sets the MaximumExecutionFrequency field's value. -func (s *ConfigRule) SetMaximumExecutionFrequency(v string) *ConfigRule { - s.MaximumExecutionFrequency = &v +// SetConfigSnapshotDeliveryInfo sets the ConfigSnapshotDeliveryInfo field's value. +func (s *DeliveryChannelStatus) SetConfigSnapshotDeliveryInfo(v *ConfigExportDeliveryInfo) *DeliveryChannelStatus { + s.ConfigSnapshotDeliveryInfo = v return s } -// SetScope sets the Scope field's value. -func (s *ConfigRule) SetScope(v *Scope) *ConfigRule { - s.Scope = v +// SetConfigStreamDeliveryInfo sets the ConfigStreamDeliveryInfo field's value. +func (s *DeliveryChannelStatus) SetConfigStreamDeliveryInfo(v *ConfigStreamDeliveryInfo) *DeliveryChannelStatus { + s.ConfigStreamDeliveryInfo = v return s } -// SetSource sets the Source field's value. -func (s *ConfigRule) SetSource(v *Source) *ConfigRule { - s.Source = v +// SetName sets the Name field's value. +func (s *DeliveryChannelStatus) SetName(v string) *DeliveryChannelStatus { + s.Name = &v return s } -// Status information for your AWS managed Config rules. The status includes -// information such as the last time the rule ran, the last time it failed, -// and the related error for the last failure. -// -// This action does not return status information about custom Config rules. -type ConfigRuleEvaluationStatus struct { +type DescribeAggregateComplianceByConfigRulesInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the AWS Config rule. - ConfigRuleArn *string `type:"string"` - - // The ID of the AWS Config rule. - ConfigRuleId *string `type:"string"` - - // The name of the AWS Config rule. - ConfigRuleName *string `min:"1" type:"string"` - - // The time that you first activated the AWS Config rule. - FirstActivatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` - - // Indicates whether AWS Config has evaluated your resources against the rule - // at least once. - // - // * true - AWS Config has evaluated your AWS resources against the rule - // at least once. + // The name of the configuration aggregator. // - // * false - AWS Config has not once finished evaluating your AWS resources - // against the rule. - FirstEvaluationStarted *bool `type:"boolean"` - - // The error code that AWS Config returned when the rule last failed. - LastErrorCode *string `type:"string"` - - // The error message that AWS Config returned when the rule last failed. - LastErrorMessage *string `type:"string"` - - // The time that AWS Config last failed to evaluate your AWS resources against - // the rule. - LastFailedEvaluationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` - // The time that AWS Config last failed to invoke the AWS Config rule to evaluate - // your AWS resources. - LastFailedInvocationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // Filters the results by ConfigRuleComplianceFilters object. + Filters *ConfigRuleComplianceFilters `type:"structure"` - // The time that AWS Config last successfully evaluated your AWS resources against - // the rule. - LastSuccessfulEvaluationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // The maximum number of evaluation results returned on each page. 
The default + // is maximum. If you specify 0, AWS Config uses the default. + Limit *int64 `type:"integer"` - // The time that AWS Config last successfully invoked the AWS Config rule to - // evaluate your AWS resources. - LastSuccessfulInvocationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s ConfigRuleEvaluationStatus) String() string { +func (s DescribeAggregateComplianceByConfigRulesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ConfigRuleEvaluationStatus) GoString() string { +func (s DescribeAggregateComplianceByConfigRulesInput) GoString() string { return s.String() } -// SetConfigRuleArn sets the ConfigRuleArn field's value. -func (s *ConfigRuleEvaluationStatus) SetConfigRuleArn(v string) *ConfigRuleEvaluationStatus { - s.ConfigRuleArn = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeAggregateComplianceByConfigRulesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAggregateComplianceByConfigRulesInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) + } + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) + } + if s.Filters != nil { + if err := s.Filters.Validate(); err != nil { + invalidParams.AddNested("Filters", err.(request.ErrInvalidParams)) + } + } -// SetConfigRuleId sets the ConfigRuleId field's value. -func (s *ConfigRuleEvaluationStatus) SetConfigRuleId(v string) *ConfigRuleEvaluationStatus { - s.ConfigRuleId = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetConfigRuleName sets the ConfigRuleName field's value. -func (s *ConfigRuleEvaluationStatus) SetConfigRuleName(v string) *ConfigRuleEvaluationStatus { - s.ConfigRuleName = &v +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *DescribeAggregateComplianceByConfigRulesInput) SetConfigurationAggregatorName(v string) *DescribeAggregateComplianceByConfigRulesInput { + s.ConfigurationAggregatorName = &v return s } -// SetFirstActivatedTime sets the FirstActivatedTime field's value. -func (s *ConfigRuleEvaluationStatus) SetFirstActivatedTime(v time.Time) *ConfigRuleEvaluationStatus { - s.FirstActivatedTime = &v +// SetFilters sets the Filters field's value. +func (s *DescribeAggregateComplianceByConfigRulesInput) SetFilters(v *ConfigRuleComplianceFilters) *DescribeAggregateComplianceByConfigRulesInput { + s.Filters = v return s } -// SetFirstEvaluationStarted sets the FirstEvaluationStarted field's value. -func (s *ConfigRuleEvaluationStatus) SetFirstEvaluationStarted(v bool) *ConfigRuleEvaluationStatus { - s.FirstEvaluationStarted = &v +// SetLimit sets the Limit field's value. +func (s *DescribeAggregateComplianceByConfigRulesInput) SetLimit(v int64) *DescribeAggregateComplianceByConfigRulesInput { + s.Limit = &v return s } -// SetLastErrorCode sets the LastErrorCode field's value. -func (s *ConfigRuleEvaluationStatus) SetLastErrorCode(v string) *ConfigRuleEvaluationStatus { - s.LastErrorCode = &v +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeAggregateComplianceByConfigRulesInput) SetNextToken(v string) *DescribeAggregateComplianceByConfigRulesInput { + s.NextToken = &v return s } -// SetLastErrorMessage sets the LastErrorMessage field's value. -func (s *ConfigRuleEvaluationStatus) SetLastErrorMessage(v string) *ConfigRuleEvaluationStatus { - s.LastErrorMessage = &v - return s +type DescribeAggregateComplianceByConfigRulesOutput struct { + _ struct{} `type:"structure"` + + // Returns a list of AggregateComplianceByConfigRule object. + AggregateComplianceByConfigRules []*AggregateComplianceByConfigRule `type:"list"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } -// SetLastFailedEvaluationTime sets the LastFailedEvaluationTime field's value. -func (s *ConfigRuleEvaluationStatus) SetLastFailedEvaluationTime(v time.Time) *ConfigRuleEvaluationStatus { - s.LastFailedEvaluationTime = &v - return s +// String returns the string representation +func (s DescribeAggregateComplianceByConfigRulesOutput) String() string { + return awsutil.Prettify(s) } -// SetLastFailedInvocationTime sets the LastFailedInvocationTime field's value. -func (s *ConfigRuleEvaluationStatus) SetLastFailedInvocationTime(v time.Time) *ConfigRuleEvaluationStatus { - s.LastFailedInvocationTime = &v - return s +// GoString returns the string representation +func (s DescribeAggregateComplianceByConfigRulesOutput) GoString() string { + return s.String() } -// SetLastSuccessfulEvaluationTime sets the LastSuccessfulEvaluationTime field's value. -func (s *ConfigRuleEvaluationStatus) SetLastSuccessfulEvaluationTime(v time.Time) *ConfigRuleEvaluationStatus { - s.LastSuccessfulEvaluationTime = &v +// SetAggregateComplianceByConfigRules sets the AggregateComplianceByConfigRules field's value. +func (s *DescribeAggregateComplianceByConfigRulesOutput) SetAggregateComplianceByConfigRules(v []*AggregateComplianceByConfigRule) *DescribeAggregateComplianceByConfigRulesOutput { + s.AggregateComplianceByConfigRules = v return s } -// SetLastSuccessfulInvocationTime sets the LastSuccessfulInvocationTime field's value. -func (s *ConfigRuleEvaluationStatus) SetLastSuccessfulInvocationTime(v time.Time) *ConfigRuleEvaluationStatus { - s.LastSuccessfulInvocationTime = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeAggregateComplianceByConfigRulesOutput) SetNextToken(v string) *DescribeAggregateComplianceByConfigRulesOutput { + s.NextToken = &v return s } -// Provides options for how often AWS Config delivers configuration snapshots -// to the Amazon S3 bucket in your delivery channel. -// -// If you want to create a rule that triggers evaluations for your resources -// when AWS Config delivers the configuration snapshot, see the following: -// -// The frequency for a rule that triggers evaluations for your resources when -// AWS Config delivers the configuration snapshot is set by one of two values, -// depending on which is less frequent: -// -// * The value for the deliveryFrequency parameter within the delivery channel -// configuration, which sets how often AWS Config delivers configuration -// snapshots. This value also sets how often AWS Config invokes evaluations -// for Config rules. -// -// * The value for the MaximumExecutionFrequency parameter, which sets the -// maximum frequency with which AWS Config invokes evaluations for the rule. -// For more information, see ConfigRule. 
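A minimal sketch of the NextToken pagination that the DescribeAggregateComplianceByConfigRules input and output above describe. The aggregator name "example", the page size, and the stop-when-no-token loop are assumptions layered on top of the generated types, not something this diff adds:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// Page through aggregate compliance results for a placeholder aggregator
	// named "example". Each response's NextToken feeds the next request; the
	// loop stops when no further token is returned.
	input := &configservice.DescribeAggregateComplianceByConfigRulesInput{
		ConfigurationAggregatorName: aws.String("example"),
		Limit:                       aws.Int64(20),
	}
	for {
		out, err := svc.DescribeAggregateComplianceByConfigRules(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, rule := range out.AggregateComplianceByConfigRules {
			fmt.Println(rule) // String() pretty-prints via awsutil.Prettify
		}
		if aws.StringValue(out.NextToken) == "" {
			break
		}
		input.NextToken = out.NextToken
	}
}
```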
-// -// If the deliveryFrequency value is less frequent than the MaximumExecutionFrequency -// value for a rule, AWS Config invokes the rule only as often as the deliveryFrequency -// value. -// -// For example, you want your rule to run evaluations when AWS Config delivers -// the configuration snapshot. -// -// You specify the MaximumExecutionFrequency value for Six_Hours. -// -// You then specify the delivery channel deliveryFrequency value for TwentyFour_Hours. -// -// Because the value for deliveryFrequency is less frequent than MaximumExecutionFrequency, -// AWS Config invokes evaluations for the rule every 24 hours. -// -// You should set the MaximumExecutionFrequency value to be at least as frequent -// as the deliveryFrequency value. You can view the deliveryFrequency value -// by using the DescribeDeliveryChannnels action. -// -// To update the deliveryFrequency with which AWS Config delivers your configuration -// snapshots, use the PutDeliveryChannel action. -type ConfigSnapshotDeliveryProperties struct { +type DescribeAggregationAuthorizationsInput struct { _ struct{} `type:"structure"` - // The frequency with which AWS Config delivers configuration snapshots. - DeliveryFrequency *string `locationName:"deliveryFrequency" type:"string" enum:"MaximumExecutionFrequency"` + // The maximum number of AggregationAuthorizations returned on each page. The + // default is maximum. If you specify 0, AWS Config uses the default. + Limit *int64 `type:"integer"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s ConfigSnapshotDeliveryProperties) String() string { +func (s DescribeAggregationAuthorizationsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ConfigSnapshotDeliveryProperties) GoString() string { +func (s DescribeAggregationAuthorizationsInput) GoString() string { return s.String() } -// SetDeliveryFrequency sets the DeliveryFrequency field's value. -func (s *ConfigSnapshotDeliveryProperties) SetDeliveryFrequency(v string) *ConfigSnapshotDeliveryProperties { - s.DeliveryFrequency = &v +// SetLimit sets the Limit field's value. +func (s *DescribeAggregationAuthorizationsInput) SetLimit(v int64) *DescribeAggregationAuthorizationsInput { + s.Limit = &v return s } -// A list that contains the status of the delivery of the configuration stream -// notification to the Amazon SNS topic. -type ConfigStreamDeliveryInfo struct { - _ struct{} `type:"structure"` - - // The error code from the last attempted delivery. - LastErrorCode *string `locationName:"lastErrorCode" type:"string"` +// SetNextToken sets the NextToken field's value. +func (s *DescribeAggregationAuthorizationsInput) SetNextToken(v string) *DescribeAggregationAuthorizationsInput { + s.NextToken = &v + return s +} - // The error message from the last attempted delivery. - LastErrorMessage *string `locationName:"lastErrorMessage" type:"string"` +type DescribeAggregationAuthorizationsOutput struct { + _ struct{} `type:"structure"` - // Status of the last attempted delivery. - // - // Note Providing an SNS topic on a DeliveryChannel (http://docs.aws.amazon.com/config/latest/APIReference/API_DeliveryChannel.html) - // for AWS Config is optional. If the SNS delivery is turned off, the last status - // will be Not_Applicable. 
- LastStatus *string `locationName:"lastStatus" type:"string" enum:"DeliveryStatus"` + // Returns a list of authorizations granted to various aggregator accounts and + // regions. + AggregationAuthorizations []*AggregationAuthorization `type:"list"` - // The time from the last status change. - LastStatusChangeTime *time.Time `locationName:"lastStatusChangeTime" type:"timestamp" timestampFormat:"unix"` + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s ConfigStreamDeliveryInfo) String() string { +func (s DescribeAggregationAuthorizationsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ConfigStreamDeliveryInfo) GoString() string { +func (s DescribeAggregationAuthorizationsOutput) GoString() string { return s.String() } -// SetLastErrorCode sets the LastErrorCode field's value. -func (s *ConfigStreamDeliveryInfo) SetLastErrorCode(v string) *ConfigStreamDeliveryInfo { - s.LastErrorCode = &v - return s -} - -// SetLastErrorMessage sets the LastErrorMessage field's value. -func (s *ConfigStreamDeliveryInfo) SetLastErrorMessage(v string) *ConfigStreamDeliveryInfo { - s.LastErrorMessage = &v - return s -} - -// SetLastStatus sets the LastStatus field's value. -func (s *ConfigStreamDeliveryInfo) SetLastStatus(v string) *ConfigStreamDeliveryInfo { - s.LastStatus = &v +// SetAggregationAuthorizations sets the AggregationAuthorizations field's value. +func (s *DescribeAggregationAuthorizationsOutput) SetAggregationAuthorizations(v []*AggregationAuthorization) *DescribeAggregationAuthorizationsOutput { + s.AggregationAuthorizations = v return s } -// SetLastStatusChangeTime sets the LastStatusChangeTime field's value. -func (s *ConfigStreamDeliveryInfo) SetLastStatusChangeTime(v time.Time) *ConfigStreamDeliveryInfo { - s.LastStatusChangeTime = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeAggregationAuthorizationsOutput) SetNextToken(v string) *DescribeAggregationAuthorizationsOutput { + s.NextToken = &v return s } -// A list that contains detailed configurations of a specified resource. -type ConfigurationItem struct { +type DescribeComplianceByConfigRuleInput struct { _ struct{} `type:"structure"` - // The 12 digit AWS account ID associated with the resource. - AccountId *string `locationName:"accountId" type:"string"` - - // The Amazon Resource Name (ARN) of the resource. - Arn *string `locationName:"arn" type:"string"` - - // The Availability Zone associated with the resource. - AvailabilityZone *string `locationName:"availabilityZone" type:"string"` - - // The region where the resource resides. - AwsRegion *string `locationName:"awsRegion" type:"string"` - - // The description of the resource configuration. - Configuration *string `locationName:"configuration" type:"string"` - - // The time when the configuration recording was initiated. - ConfigurationItemCaptureTime *time.Time `locationName:"configurationItemCaptureTime" type:"timestamp" timestampFormat:"unix"` - - // Unique MD5 hash that represents the configuration item's state. - // - // You can use MD5 hash to compare the states of two or more configuration items - // that are associated with the same resource. - ConfigurationItemMD5Hash *string `locationName:"configurationItemMD5Hash" type:"string"` - - // The configuration item status. 
- ConfigurationItemStatus *string `locationName:"configurationItemStatus" type:"string" enum:"ConfigurationItemStatus"` - - // An identifier that indicates the ordering of the configuration items of a - // resource. - ConfigurationStateId *string `locationName:"configurationStateId" type:"string"` - - // A list of CloudTrail event IDs. - // - // A populated field indicates that the current configuration was initiated - // by the events recorded in the CloudTrail log. For more information about - // CloudTrail, see What is AWS CloudTrail? (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/what_is_cloud_trail_top_level.html). + // Filters the results by compliance. // - // An empty field indicates that the current configuration was not initiated - // by any event. - RelatedEvents []*string `locationName:"relatedEvents" type:"list"` - - // A list of related AWS resources. - Relationships []*Relationship `locationName:"relationships" type:"list"` - - // The time stamp when the resource was created. - ResourceCreationTime *time.Time `locationName:"resourceCreationTime" type:"timestamp" timestampFormat:"unix"` - - // The ID of the resource (for example., sg-xxxxxx). - ResourceId *string `locationName:"resourceId" type:"string"` - - // The custom name of the resource, if available. - ResourceName *string `locationName:"resourceName" type:"string"` - - // The type of AWS resource. - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` - - // Configuration attributes that AWS Config returns for certain resource types - // to supplement the information returned for the configuration parameter. - SupplementaryConfiguration map[string]*string `locationName:"supplementaryConfiguration" type:"map"` + // The allowed values are COMPLIANT, NON_COMPLIANT, and INSUFFICIENT_DATA. + ComplianceTypes []*string `type:"list"` - // A mapping of key value tags associated with the resource. - Tags map[string]*string `locationName:"tags" type:"map"` + // Specify one or more AWS Config rule names to filter the results by rule. + ConfigRuleNames []*string `type:"list"` - // The version number of the resource configuration. - Version *string `locationName:"version" type:"string"` + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s ConfigurationItem) String() string { +func (s DescribeComplianceByConfigRuleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ConfigurationItem) GoString() string { +func (s DescribeComplianceByConfigRuleInput) GoString() string { return s.String() } -// SetAccountId sets the AccountId field's value. -func (s *ConfigurationItem) SetAccountId(v string) *ConfigurationItem { - s.AccountId = &v +// SetComplianceTypes sets the ComplianceTypes field's value. +func (s *DescribeComplianceByConfigRuleInput) SetComplianceTypes(v []*string) *DescribeComplianceByConfigRuleInput { + s.ComplianceTypes = v return s } -// SetArn sets the Arn field's value. -func (s *ConfigurationItem) SetArn(v string) *ConfigurationItem { - s.Arn = &v +// SetConfigRuleNames sets the ConfigRuleNames field's value. +func (s *DescribeComplianceByConfigRuleInput) SetConfigRuleNames(v []*string) *DescribeComplianceByConfigRuleInput { + s.ConfigRuleNames = v return s } -// SetAvailabilityZone sets the AvailabilityZone field's value. 
-func (s *ConfigurationItem) SetAvailabilityZone(v string) *ConfigurationItem { - s.AvailabilityZone = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeComplianceByConfigRuleInput) SetNextToken(v string) *DescribeComplianceByConfigRuleInput { + s.NextToken = &v return s } -// SetAwsRegion sets the AwsRegion field's value. -func (s *ConfigurationItem) SetAwsRegion(v string) *ConfigurationItem { - s.AwsRegion = &v - return s +type DescribeComplianceByConfigRuleOutput struct { + _ struct{} `type:"structure"` + + // Indicates whether each of the specified AWS Config rules is compliant. + ComplianceByConfigRules []*ComplianceByConfigRule `type:"list"` + + // The string that you use in a subsequent request to get the next page of results + // in a paginated response. + NextToken *string `type:"string"` } -// SetConfiguration sets the Configuration field's value. -func (s *ConfigurationItem) SetConfiguration(v string) *ConfigurationItem { - s.Configuration = &v - return s +// String returns the string representation +func (s DescribeComplianceByConfigRuleOutput) String() string { + return awsutil.Prettify(s) } -// SetConfigurationItemCaptureTime sets the ConfigurationItemCaptureTime field's value. -func (s *ConfigurationItem) SetConfigurationItemCaptureTime(v time.Time) *ConfigurationItem { - s.ConfigurationItemCaptureTime = &v - return s +// GoString returns the string representation +func (s DescribeComplianceByConfigRuleOutput) GoString() string { + return s.String() } -// SetConfigurationItemMD5Hash sets the ConfigurationItemMD5Hash field's value. -func (s *ConfigurationItem) SetConfigurationItemMD5Hash(v string) *ConfigurationItem { - s.ConfigurationItemMD5Hash = &v +// SetComplianceByConfigRules sets the ComplianceByConfigRules field's value. +func (s *DescribeComplianceByConfigRuleOutput) SetComplianceByConfigRules(v []*ComplianceByConfigRule) *DescribeComplianceByConfigRuleOutput { + s.ComplianceByConfigRules = v return s } -// SetConfigurationItemStatus sets the ConfigurationItemStatus field's value. -func (s *ConfigurationItem) SetConfigurationItemStatus(v string) *ConfigurationItem { - s.ConfigurationItemStatus = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeComplianceByConfigRuleOutput) SetNextToken(v string) *DescribeComplianceByConfigRuleOutput { + s.NextToken = &v return s } -// SetConfigurationStateId sets the ConfigurationStateId field's value. -func (s *ConfigurationItem) SetConfigurationStateId(v string) *ConfigurationItem { - s.ConfigurationStateId = &v - return s +type DescribeComplianceByResourceInput struct { + _ struct{} `type:"structure"` + + // Filters the results by compliance. + // + // The allowed values are COMPLIANT and NON_COMPLIANT. + ComplianceTypes []*string `type:"list"` + + // The maximum number of evaluation results returned on each page. The default + // is 10. You cannot specify a number greater than 100. If you specify 0, AWS + // Config uses the default. + Limit *int64 `type:"integer"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` + + // The ID of the AWS resource for which you want compliance information. You + // can specify only one resource ID. If you specify a resource ID, you must + // also specify a type for ResourceType. + ResourceId *string `min:"1" type:"string"` + + // The types of AWS resources for which you want compliance information (for + // example, AWS::EC2::Instance). 
For this action, you can specify that the resource + // type is an AWS account by specifying AWS::::Account. + ResourceType *string `min:"1" type:"string"` } -// SetRelatedEvents sets the RelatedEvents field's value. -func (s *ConfigurationItem) SetRelatedEvents(v []*string) *ConfigurationItem { - s.RelatedEvents = v - return s +// String returns the string representation +func (s DescribeComplianceByResourceInput) String() string { + return awsutil.Prettify(s) } -// SetRelationships sets the Relationships field's value. -func (s *ConfigurationItem) SetRelationships(v []*Relationship) *ConfigurationItem { - s.Relationships = v - return s +// GoString returns the string representation +func (s DescribeComplianceByResourceInput) GoString() string { + return s.String() } -// SetResourceCreationTime sets the ResourceCreationTime field's value. -func (s *ConfigurationItem) SetResourceCreationTime(v time.Time) *ConfigurationItem { - s.ResourceCreationTime = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeComplianceByResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeComplianceByResourceInput"} + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + if s.ResourceType != nil && len(*s.ResourceType) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceType", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetComplianceTypes sets the ComplianceTypes field's value. +func (s *DescribeComplianceByResourceInput) SetComplianceTypes(v []*string) *DescribeComplianceByResourceInput { + s.ComplianceTypes = v return s } -// SetResourceId sets the ResourceId field's value. -func (s *ConfigurationItem) SetResourceId(v string) *ConfigurationItem { - s.ResourceId = &v +// SetLimit sets the Limit field's value. +func (s *DescribeComplianceByResourceInput) SetLimit(v int64) *DescribeComplianceByResourceInput { + s.Limit = &v return s } -// SetResourceName sets the ResourceName field's value. -func (s *ConfigurationItem) SetResourceName(v string) *ConfigurationItem { - s.ResourceName = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeComplianceByResourceInput) SetNextToken(v string) *DescribeComplianceByResourceInput { + s.NextToken = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *DescribeComplianceByResourceInput) SetResourceId(v string) *DescribeComplianceByResourceInput { + s.ResourceId = &v return s } // SetResourceType sets the ResourceType field's value. -func (s *ConfigurationItem) SetResourceType(v string) *ConfigurationItem { +func (s *DescribeComplianceByResourceInput) SetResourceType(v string) *DescribeComplianceByResourceInput { s.ResourceType = &v return s } -// SetSupplementaryConfiguration sets the SupplementaryConfiguration field's value. -func (s *ConfigurationItem) SetSupplementaryConfiguration(v map[string]*string) *ConfigurationItem { - s.SupplementaryConfiguration = v - return s +type DescribeComplianceByResourceOutput struct { + _ struct{} `type:"structure"` + + // Indicates whether the specified AWS resource complies with all of the AWS + // Config rules that evaluate it. + ComplianceByResources []*ComplianceByResource `type:"list"` + + // The string that you use in a subsequent request to get the next page of results + // in a paginated response. + NextToken *string `type:"string"` } -// SetTags sets the Tags field's value. 
-func (s *ConfigurationItem) SetTags(v map[string]*string) *ConfigurationItem { - s.Tags = v +// String returns the string representation +func (s DescribeComplianceByResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeComplianceByResourceOutput) GoString() string { + return s.String() +} + +// SetComplianceByResources sets the ComplianceByResources field's value. +func (s *DescribeComplianceByResourceOutput) SetComplianceByResources(v []*ComplianceByResource) *DescribeComplianceByResourceOutput { + s.ComplianceByResources = v return s } -// SetVersion sets the Version field's value. -func (s *ConfigurationItem) SetVersion(v string) *ConfigurationItem { - s.Version = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeComplianceByResourceOutput) SetNextToken(v string) *DescribeComplianceByResourceOutput { + s.NextToken = &v return s } -// An object that represents the recording of configuration changes of an AWS -// resource. -type ConfigurationRecorder struct { +type DescribeConfigRuleEvaluationStatusInput struct { _ struct{} `type:"structure"` - // The name of the recorder. By default, AWS Config automatically assigns the - // name "default" when creating the configuration recorder. You cannot change - // the assigned name. - Name *string `locationName:"name" min:"1" type:"string"` + // The name of the AWS managed Config rules for which you want status information. + // If you do not specify any names, AWS Config returns status information for + // all AWS managed Config rules that you use. + ConfigRuleNames []*string `type:"list"` - // Specifies the types of AWS resource for which AWS Config records configuration - // changes. - RecordingGroup *RecordingGroup `locationName:"recordingGroup" type:"structure"` + // The number of rule evaluation results that you want returned. + // + // This parameter is required if the rule limit for your account is more than + // the default of 50 rules. + // + // For information about requesting a rule limit increase, see AWS Config Limits + // (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_config) + // in the AWS General Reference Guide. + Limit *int64 `type:"integer"` - // Amazon Resource Name (ARN) of the IAM role used to describe the AWS resources - // associated with the account. - RoleARN *string `locationName:"roleARN" type:"string"` + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s ConfigurationRecorder) String() string { +func (s DescribeConfigRuleEvaluationStatusInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ConfigurationRecorder) GoString() string { +func (s DescribeConfigRuleEvaluationStatusInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ConfigurationRecorder) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ConfigurationRecorder"} - if s.Name != nil && len(*s.Name) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Name", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetName sets the Name field's value. 
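A hedged sketch of DescribeComplianceByResource as the field documentation above describes it; the instance ID, resource type, and NON_COMPLIANT filter are illustrative values only:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	svc := configservice.New(session.Must(session.NewSession()))

	// List non-compliant findings for a single (placeholder) EC2 instance.
	// ResourceId is only meaningful when ResourceType is also set, as the
	// field documentation above notes.
	out, err := svc.DescribeComplianceByResource(&configservice.DescribeComplianceByResourceInput{
		ResourceType:    aws.String("AWS::EC2::Instance"),
		ResourceId:      aws.String("i-0123456789abcdef0"),
		ComplianceTypes: []*string{aws.String("NON_COMPLIANT")},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range out.ComplianceByResources {
		fmt.Println(c)
	}
}
```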
-func (s *ConfigurationRecorder) SetName(v string) *ConfigurationRecorder { - s.Name = &v +// SetConfigRuleNames sets the ConfigRuleNames field's value. +func (s *DescribeConfigRuleEvaluationStatusInput) SetConfigRuleNames(v []*string) *DescribeConfigRuleEvaluationStatusInput { + s.ConfigRuleNames = v return s } -// SetRecordingGroup sets the RecordingGroup field's value. -func (s *ConfigurationRecorder) SetRecordingGroup(v *RecordingGroup) *ConfigurationRecorder { - s.RecordingGroup = v +// SetLimit sets the Limit field's value. +func (s *DescribeConfigRuleEvaluationStatusInput) SetLimit(v int64) *DescribeConfigRuleEvaluationStatusInput { + s.Limit = &v return s } -// SetRoleARN sets the RoleARN field's value. -func (s *ConfigurationRecorder) SetRoleARN(v string) *ConfigurationRecorder { - s.RoleARN = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeConfigRuleEvaluationStatusInput) SetNextToken(v string) *DescribeConfigRuleEvaluationStatusInput { + s.NextToken = &v return s } -// The current status of the configuration recorder. -type ConfigurationRecorderStatus struct { +type DescribeConfigRuleEvaluationStatusOutput struct { _ struct{} `type:"structure"` - // The error code indicating that the recording failed. - LastErrorCode *string `locationName:"lastErrorCode" type:"string"` + // Status information about your AWS managed Config rules. + ConfigRulesEvaluationStatus []*ConfigRuleEvaluationStatus `type:"list"` - // The message indicating that the recording failed due to an error. - LastErrorMessage *string `locationName:"lastErrorMessage" type:"string"` + // The string that you use in a subsequent request to get the next page of results + // in a paginated response. + NextToken *string `type:"string"` +} - // The time the recorder was last started. - LastStartTime *time.Time `locationName:"lastStartTime" type:"timestamp" timestampFormat:"unix"` +// String returns the string representation +func (s DescribeConfigRuleEvaluationStatusOutput) String() string { + return awsutil.Prettify(s) +} - // The last (previous) status of the recorder. - LastStatus *string `locationName:"lastStatus" type:"string" enum:"RecorderStatus"` +// GoString returns the string representation +func (s DescribeConfigRuleEvaluationStatusOutput) GoString() string { + return s.String() +} - // The time when the status was last changed. - LastStatusChangeTime *time.Time `locationName:"lastStatusChangeTime" type:"timestamp" timestampFormat:"unix"` +// SetConfigRulesEvaluationStatus sets the ConfigRulesEvaluationStatus field's value. +func (s *DescribeConfigRuleEvaluationStatusOutput) SetConfigRulesEvaluationStatus(v []*ConfigRuleEvaluationStatus) *DescribeConfigRuleEvaluationStatusOutput { + s.ConfigRulesEvaluationStatus = v + return s +} - // The time the recorder was last stopped. - LastStopTime *time.Time `locationName:"lastStopTime" type:"timestamp" timestampFormat:"unix"` +// SetNextToken sets the NextToken field's value. +func (s *DescribeConfigRuleEvaluationStatusOutput) SetNextToken(v string) *DescribeConfigRuleEvaluationStatusOutput { + s.NextToken = &v + return s +} - // The name of the configuration recorder. - Name *string `locationName:"name" type:"string"` +type DescribeConfigRulesInput struct { + _ struct{} `type:"structure"` - // Specifies whether the recorder is currently recording or not. - Recording *bool `locationName:"recording" type:"boolean"` + // The names of the AWS Config rules for which you want details. 
If you do not + // specify any names, AWS Config returns details for all your rules. + ConfigRuleNames []*string `type:"list"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s ConfigurationRecorderStatus) String() string { +func (s DescribeConfigRulesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ConfigurationRecorderStatus) GoString() string { +func (s DescribeConfigRulesInput) GoString() string { return s.String() } -// SetLastErrorCode sets the LastErrorCode field's value. -func (s *ConfigurationRecorderStatus) SetLastErrorCode(v string) *ConfigurationRecorderStatus { - s.LastErrorCode = &v +// SetConfigRuleNames sets the ConfigRuleNames field's value. +func (s *DescribeConfigRulesInput) SetConfigRuleNames(v []*string) *DescribeConfigRulesInput { + s.ConfigRuleNames = v return s } -// SetLastErrorMessage sets the LastErrorMessage field's value. -func (s *ConfigurationRecorderStatus) SetLastErrorMessage(v string) *ConfigurationRecorderStatus { - s.LastErrorMessage = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeConfigRulesInput) SetNextToken(v string) *DescribeConfigRulesInput { + s.NextToken = &v return s } -// SetLastStartTime sets the LastStartTime field's value. -func (s *ConfigurationRecorderStatus) SetLastStartTime(v time.Time) *ConfigurationRecorderStatus { - s.LastStartTime = &v - return s -} +type DescribeConfigRulesOutput struct { + _ struct{} `type:"structure"` -// SetLastStatus sets the LastStatus field's value. -func (s *ConfigurationRecorderStatus) SetLastStatus(v string) *ConfigurationRecorderStatus { - s.LastStatus = &v - return s + // The details about your AWS Config rules. + ConfigRules []*ConfigRule `type:"list"` + + // The string that you use in a subsequent request to get the next page of results + // in a paginated response. + NextToken *string `type:"string"` } -// SetLastStatusChangeTime sets the LastStatusChangeTime field's value. -func (s *ConfigurationRecorderStatus) SetLastStatusChangeTime(v time.Time) *ConfigurationRecorderStatus { - s.LastStatusChangeTime = &v - return s +// String returns the string representation +func (s DescribeConfigRulesOutput) String() string { + return awsutil.Prettify(s) } -// SetLastStopTime sets the LastStopTime field's value. -func (s *ConfigurationRecorderStatus) SetLastStopTime(v time.Time) *ConfigurationRecorderStatus { - s.LastStopTime = &v - return s +// GoString returns the string representation +func (s DescribeConfigRulesOutput) GoString() string { + return s.String() } -// SetName sets the Name field's value. -func (s *ConfigurationRecorderStatus) SetName(v string) *ConfigurationRecorderStatus { - s.Name = &v +// SetConfigRules sets the ConfigRules field's value. +func (s *DescribeConfigRulesOutput) SetConfigRules(v []*ConfigRule) *DescribeConfigRulesOutput { + s.ConfigRules = v return s } -// SetRecording sets the Recording field's value. -func (s *ConfigurationRecorderStatus) SetRecording(v bool) *ConfigurationRecorderStatus { - s.Recording = &v +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeConfigRulesOutput) SetNextToken(v string) *DescribeConfigRulesOutput { + s.NextToken = &v return s } -type DeleteConfigRuleInput struct { +type DescribeConfigurationAggregatorSourcesStatusInput struct { _ struct{} `type:"structure"` - // The name of the AWS Config rule that you want to delete. + // The name of the configuration aggregator. // - // ConfigRuleName is a required field - ConfigRuleName *string `min:"1" type:"string" required:"true"` + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` + + // The maximum number of AggregatorSourceStatus returned on each page. The default + // is maximum. If you specify 0, AWS Config uses the default. + Limit *int64 `type:"integer"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` + + // Filters the status type. + // + // * Valid value FAILED indicates errors while moving data. + // + // * Valid value SUCCEEDED indicates the data was successfully moved. + // + // * Valid value OUTDATED indicates the data is not the most recent. + UpdateStatus []*string `min:"1" type:"list"` } // String returns the string representation -func (s DeleteConfigRuleInput) String() string { +func (s DescribeConfigurationAggregatorSourcesStatusInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteConfigRuleInput) GoString() string { +func (s DescribeConfigurationAggregatorSourcesStatusInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteConfigRuleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteConfigRuleInput"} - if s.ConfigRuleName == nil { - invalidParams.Add(request.NewErrParamRequired("ConfigRuleName")) +func (s *DescribeConfigurationAggregatorSourcesStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeConfigurationAggregatorSourcesStatusInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) } - if s.ConfigRuleName != nil && len(*s.ConfigRuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ConfigRuleName", 1)) + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) + } + if s.UpdateStatus != nil && len(s.UpdateStatus) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UpdateStatus", 1)) } if invalidParams.Len() > 0 { @@ -3744,1027 +7805,1073 @@ func (s *DeleteConfigRuleInput) Validate() error { return nil } -// SetConfigRuleName sets the ConfigRuleName field's value. -func (s *DeleteConfigRuleInput) SetConfigRuleName(v string) *DeleteConfigRuleInput { - s.ConfigRuleName = &v +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *DescribeConfigurationAggregatorSourcesStatusInput) SetConfigurationAggregatorName(v string) *DescribeConfigurationAggregatorSourcesStatusInput { + s.ConfigurationAggregatorName = &v return s } -type DeleteConfigRuleOutput struct { - _ struct{} `type:"structure"` +// SetLimit sets the Limit field's value. 
+func (s *DescribeConfigurationAggregatorSourcesStatusInput) SetLimit(v int64) *DescribeConfigurationAggregatorSourcesStatusInput { + s.Limit = &v + return s } -// String returns the string representation -func (s DeleteConfigRuleOutput) String() string { - return awsutil.Prettify(s) +// SetNextToken sets the NextToken field's value. +func (s *DescribeConfigurationAggregatorSourcesStatusInput) SetNextToken(v string) *DescribeConfigurationAggregatorSourcesStatusInput { + s.NextToken = &v + return s } -// GoString returns the string representation -func (s DeleteConfigRuleOutput) GoString() string { - return s.String() +// SetUpdateStatus sets the UpdateStatus field's value. +func (s *DescribeConfigurationAggregatorSourcesStatusInput) SetUpdateStatus(v []*string) *DescribeConfigurationAggregatorSourcesStatusInput { + s.UpdateStatus = v + return s } -// The request object for the DeleteConfigurationRecorder action. -type DeleteConfigurationRecorderInput struct { +type DescribeConfigurationAggregatorSourcesStatusOutput struct { _ struct{} `type:"structure"` - // The name of the configuration recorder to be deleted. You can retrieve the - // name of your configuration recorder by using the DescribeConfigurationRecorders - // action. - // - // ConfigurationRecorderName is a required field - ConfigurationRecorderName *string `min:"1" type:"string" required:"true"` + // Returns an AggregatedSourceStatus object. + AggregatedSourceStatusList []*AggregatedSourceStatus `type:"list"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s DeleteConfigurationRecorderInput) String() string { +func (s DescribeConfigurationAggregatorSourcesStatusOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteConfigurationRecorderInput) GoString() string { +func (s DescribeConfigurationAggregatorSourcesStatusOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteConfigurationRecorderInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteConfigurationRecorderInput"} - if s.ConfigurationRecorderName == nil { - invalidParams.Add(request.NewErrParamRequired("ConfigurationRecorderName")) - } - if s.ConfigurationRecorderName != nil && len(*s.ConfigurationRecorderName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ConfigurationRecorderName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetAggregatedSourceStatusList sets the AggregatedSourceStatusList field's value. +func (s *DescribeConfigurationAggregatorSourcesStatusOutput) SetAggregatedSourceStatusList(v []*AggregatedSourceStatus) *DescribeConfigurationAggregatorSourcesStatusOutput { + s.AggregatedSourceStatusList = v + return s } -// SetConfigurationRecorderName sets the ConfigurationRecorderName field's value. -func (s *DeleteConfigurationRecorderInput) SetConfigurationRecorderName(v string) *DeleteConfigurationRecorderInput { - s.ConfigurationRecorderName = &v +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeConfigurationAggregatorSourcesStatusOutput) SetNextToken(v string) *DescribeConfigurationAggregatorSourcesStatusOutput { + s.NextToken = &v return s } -type DeleteConfigurationRecorderOutput struct { +type DescribeConfigurationAggregatorsInput struct { _ struct{} `type:"structure"` + + // The name of the configuration aggregators. + ConfigurationAggregatorNames []*string `type:"list"` + + // The maximum number of configuration aggregators returned on each page. The + // default is maximum. If you specify 0, AWS Config uses the default. + Limit *int64 `type:"integer"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s DeleteConfigurationRecorderOutput) String() string { +func (s DescribeConfigurationAggregatorsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteConfigurationRecorderOutput) GoString() string { +func (s DescribeConfigurationAggregatorsInput) GoString() string { return s.String() } -// The input for the DeleteDeliveryChannel action. The action accepts the following -// data in JSON format. -type DeleteDeliveryChannelInput struct { +// SetConfigurationAggregatorNames sets the ConfigurationAggregatorNames field's value. +func (s *DescribeConfigurationAggregatorsInput) SetConfigurationAggregatorNames(v []*string) *DescribeConfigurationAggregatorsInput { + s.ConfigurationAggregatorNames = v + return s +} + +// SetLimit sets the Limit field's value. +func (s *DescribeConfigurationAggregatorsInput) SetLimit(v int64) *DescribeConfigurationAggregatorsInput { + s.Limit = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeConfigurationAggregatorsInput) SetNextToken(v string) *DescribeConfigurationAggregatorsInput { + s.NextToken = &v + return s +} + +type DescribeConfigurationAggregatorsOutput struct { _ struct{} `type:"structure"` - // The name of the delivery channel to delete. - // - // DeliveryChannelName is a required field - DeliveryChannelName *string `min:"1" type:"string" required:"true"` + // Returns a ConfigurationAggregators object. + ConfigurationAggregators []*ConfigurationAggregator `type:"list"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s DeleteDeliveryChannelInput) String() string { +func (s DescribeConfigurationAggregatorsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDeliveryChannelInput) GoString() string { +func (s DescribeConfigurationAggregatorsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDeliveryChannelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDeliveryChannelInput"} - if s.DeliveryChannelName == nil { - invalidParams.Add(request.NewErrParamRequired("DeliveryChannelName")) - } - if s.DeliveryChannelName != nil && len(*s.DeliveryChannelName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DeliveryChannelName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetConfigurationAggregators sets the ConfigurationAggregators field's value. 
+func (s *DescribeConfigurationAggregatorsOutput) SetConfigurationAggregators(v []*ConfigurationAggregator) *DescribeConfigurationAggregatorsOutput { + s.ConfigurationAggregators = v + return s } -// SetDeliveryChannelName sets the DeliveryChannelName field's value. -func (s *DeleteDeliveryChannelInput) SetDeliveryChannelName(v string) *DeleteDeliveryChannelInput { - s.DeliveryChannelName = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeConfigurationAggregatorsOutput) SetNextToken(v string) *DescribeConfigurationAggregatorsOutput { + s.NextToken = &v return s } -type DeleteDeliveryChannelOutput struct { +// The input for the DescribeConfigurationRecorderStatus action. +type DescribeConfigurationRecorderStatusInput struct { _ struct{} `type:"structure"` + + // The name(s) of the configuration recorder. If the name is not specified, + // the action returns the current status of all the configuration recorders + // associated with the account. + ConfigurationRecorderNames []*string `type:"list"` } // String returns the string representation -func (s DeleteDeliveryChannelOutput) String() string { +func (s DescribeConfigurationRecorderStatusInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDeliveryChannelOutput) GoString() string { +func (s DescribeConfigurationRecorderStatusInput) GoString() string { return s.String() } -type DeleteEvaluationResultsInput struct { +// SetConfigurationRecorderNames sets the ConfigurationRecorderNames field's value. +func (s *DescribeConfigurationRecorderStatusInput) SetConfigurationRecorderNames(v []*string) *DescribeConfigurationRecorderStatusInput { + s.ConfigurationRecorderNames = v + return s +} + +// The output for the DescribeConfigurationRecorderStatus action, in JSON format. +type DescribeConfigurationRecorderStatusOutput struct { _ struct{} `type:"structure"` - // The name of the Config rule for which you want to delete the evaluation results. - // - // ConfigRuleName is a required field - ConfigRuleName *string `min:"1" type:"string" required:"true"` + // A list that contains status of the specified recorders. + ConfigurationRecordersStatus []*ConfigurationRecorderStatus `type:"list"` } // String returns the string representation -func (s DeleteEvaluationResultsInput) String() string { +func (s DescribeConfigurationRecorderStatusOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteEvaluationResultsInput) GoString() string { +func (s DescribeConfigurationRecorderStatusOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteEvaluationResultsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteEvaluationResultsInput"} - if s.ConfigRuleName == nil { - invalidParams.Add(request.NewErrParamRequired("ConfigRuleName")) - } - if s.ConfigRuleName != nil && len(*s.ConfigRuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ConfigRuleName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetConfigRuleName sets the ConfigRuleName field's value. -func (s *DeleteEvaluationResultsInput) SetConfigRuleName(v string) *DeleteEvaluationResultsInput { - s.ConfigRuleName = &v +// SetConfigurationRecordersStatus sets the ConfigurationRecordersStatus field's value. 
+func (s *DescribeConfigurationRecorderStatusOutput) SetConfigurationRecordersStatus(v []*ConfigurationRecorderStatus) *DescribeConfigurationRecorderStatusOutput { + s.ConfigurationRecordersStatus = v return s } -// The output when you delete the evaluation results for the specified Config -// rule. -type DeleteEvaluationResultsOutput struct { +// The input for the DescribeConfigurationRecorders action. +type DescribeConfigurationRecordersInput struct { _ struct{} `type:"structure"` + + // A list of configuration recorder names. + ConfigurationRecorderNames []*string `type:"list"` } // String returns the string representation -func (s DeleteEvaluationResultsOutput) String() string { +func (s DescribeConfigurationRecordersInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteEvaluationResultsOutput) GoString() string { +func (s DescribeConfigurationRecordersInput) GoString() string { return s.String() } -// The input for the DeliverConfigSnapshot action. -type DeliverConfigSnapshotInput struct { +// SetConfigurationRecorderNames sets the ConfigurationRecorderNames field's value. +func (s *DescribeConfigurationRecordersInput) SetConfigurationRecorderNames(v []*string) *DescribeConfigurationRecordersInput { + s.ConfigurationRecorderNames = v + return s +} + +// The output for the DescribeConfigurationRecorders action. +type DescribeConfigurationRecordersOutput struct { _ struct{} `type:"structure"` - // The name of the delivery channel through which the snapshot is delivered. - // - // DeliveryChannelName is a required field - DeliveryChannelName *string `locationName:"deliveryChannelName" min:"1" type:"string" required:"true"` + // A list that contains the descriptions of the specified configuration recorders. + ConfigurationRecorders []*ConfigurationRecorder `type:"list"` } // String returns the string representation -func (s DeliverConfigSnapshotInput) String() string { +func (s DescribeConfigurationRecordersOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeliverConfigSnapshotInput) GoString() string { +func (s DescribeConfigurationRecordersOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeliverConfigSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeliverConfigSnapshotInput"} - if s.DeliveryChannelName == nil { - invalidParams.Add(request.NewErrParamRequired("DeliveryChannelName")) - } - if s.DeliveryChannelName != nil && len(*s.DeliveryChannelName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DeliveryChannelName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetDeliveryChannelName sets the DeliveryChannelName field's value. -func (s *DeliverConfigSnapshotInput) SetDeliveryChannelName(v string) *DeliverConfigSnapshotInput { - s.DeliveryChannelName = &v +// SetConfigurationRecorders sets the ConfigurationRecorders field's value. +func (s *DescribeConfigurationRecordersOutput) SetConfigurationRecorders(v []*ConfigurationRecorder) *DescribeConfigurationRecordersOutput { + s.ConfigurationRecorders = v return s } -// The output for the DeliverConfigSnapshot action in JSON format. -type DeliverConfigSnapshotOutput struct { +// The input for the DeliveryChannelStatus action. 
+type DescribeDeliveryChannelStatusInput struct { _ struct{} `type:"structure"` - // The ID of the snapshot that is being created. - ConfigSnapshotId *string `locationName:"configSnapshotId" type:"string"` + // A list of delivery channel names. + DeliveryChannelNames []*string `type:"list"` } // String returns the string representation -func (s DeliverConfigSnapshotOutput) String() string { +func (s DescribeDeliveryChannelStatusInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeliverConfigSnapshotOutput) GoString() string { +func (s DescribeDeliveryChannelStatusInput) GoString() string { return s.String() } -// SetConfigSnapshotId sets the ConfigSnapshotId field's value. -func (s *DeliverConfigSnapshotOutput) SetConfigSnapshotId(v string) *DeliverConfigSnapshotOutput { - s.ConfigSnapshotId = &v +// SetDeliveryChannelNames sets the DeliveryChannelNames field's value. +func (s *DescribeDeliveryChannelStatusInput) SetDeliveryChannelNames(v []*string) *DescribeDeliveryChannelStatusInput { + s.DeliveryChannelNames = v return s } -// The channel through which AWS Config delivers notifications and updated configuration -// states. -type DeliveryChannel struct { +// The output for the DescribeDeliveryChannelStatus action. +type DescribeDeliveryChannelStatusOutput struct { _ struct{} `type:"structure"` - // The options for how often AWS Config delivers configuration snapshots to - // the Amazon S3 bucket. - ConfigSnapshotDeliveryProperties *ConfigSnapshotDeliveryProperties `locationName:"configSnapshotDeliveryProperties" type:"structure"` + // A list that contains the status of a specified delivery channel. + DeliveryChannelsStatus []*DeliveryChannelStatus `type:"list"` +} - // The name of the delivery channel. By default, AWS Config assigns the name - // "default" when creating the delivery channel. To change the delivery channel - // name, you must use the DeleteDeliveryChannel action to delete your current - // delivery channel, and then you must use the PutDeliveryChannel command to - // create a delivery channel that has the desired name. - Name *string `locationName:"name" min:"1" type:"string"` +// String returns the string representation +func (s DescribeDeliveryChannelStatusOutput) String() string { + return awsutil.Prettify(s) +} - // The name of the Amazon S3 bucket to which AWS Config delivers configuration - // snapshots and configuration history files. - // - // If you specify a bucket that belongs to another AWS account, that bucket - // must have policies that grant access permissions to AWS Config. For more - // information, see Permissions for the Amazon S3 Bucket (http://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-policy.html) - // in the AWS Config Developer Guide. - S3BucketName *string `locationName:"s3BucketName" type:"string"` +// GoString returns the string representation +func (s DescribeDeliveryChannelStatusOutput) GoString() string { + return s.String() +} - // The prefix for the specified Amazon S3 bucket. - S3KeyPrefix *string `locationName:"s3KeyPrefix" type:"string"` +// SetDeliveryChannelsStatus sets the DeliveryChannelsStatus field's value. +func (s *DescribeDeliveryChannelStatusOutput) SetDeliveryChannelsStatus(v []*DeliveryChannelStatus) *DescribeDeliveryChannelStatusOutput { + s.DeliveryChannelsStatus = v + return s +} - // The Amazon Resource Name (ARN) of the Amazon SNS topic to which AWS Config - // sends notifications about configuration changes. 
- // - // If you choose a topic from another account, the topic must have policies - // that grant access permissions to AWS Config. For more information, see Permissions - // for the Amazon SNS Topic (http://docs.aws.amazon.com/config/latest/developerguide/sns-topic-policy.html) - // in the AWS Config Developer Guide. - SnsTopicARN *string `locationName:"snsTopicARN" type:"string"` +// The input for the DescribeDeliveryChannels action. +type DescribeDeliveryChannelsInput struct { + _ struct{} `type:"structure"` + + // A list of delivery channel names. + DeliveryChannelNames []*string `type:"list"` } // String returns the string representation -func (s DeliveryChannel) String() string { +func (s DescribeDeliveryChannelsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeliveryChannel) GoString() string { +func (s DescribeDeliveryChannelsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeliveryChannel) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeliveryChannel"} - if s.Name != nil && len(*s.Name) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Name", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetConfigSnapshotDeliveryProperties sets the ConfigSnapshotDeliveryProperties field's value. -func (s *DeliveryChannel) SetConfigSnapshotDeliveryProperties(v *ConfigSnapshotDeliveryProperties) *DeliveryChannel { - s.ConfigSnapshotDeliveryProperties = v +// SetDeliveryChannelNames sets the DeliveryChannelNames field's value. +func (s *DescribeDeliveryChannelsInput) SetDeliveryChannelNames(v []*string) *DescribeDeliveryChannelsInput { + s.DeliveryChannelNames = v return s } -// SetName sets the Name field's value. -func (s *DeliveryChannel) SetName(v string) *DeliveryChannel { - s.Name = &v - return s +// The output for the DescribeDeliveryChannels action. +type DescribeDeliveryChannelsOutput struct { + _ struct{} `type:"structure"` + + // A list that contains the descriptions of the specified delivery channel. + DeliveryChannels []*DeliveryChannel `type:"list"` } -// SetS3BucketName sets the S3BucketName field's value. -func (s *DeliveryChannel) SetS3BucketName(v string) *DeliveryChannel { - s.S3BucketName = &v - return s +// String returns the string representation +func (s DescribeDeliveryChannelsOutput) String() string { + return awsutil.Prettify(s) } -// SetS3KeyPrefix sets the S3KeyPrefix field's value. -func (s *DeliveryChannel) SetS3KeyPrefix(v string) *DeliveryChannel { - s.S3KeyPrefix = &v - return s +// GoString returns the string representation +func (s DescribeDeliveryChannelsOutput) GoString() string { + return s.String() } -// SetSnsTopicARN sets the SnsTopicARN field's value. -func (s *DeliveryChannel) SetSnsTopicARN(v string) *DeliveryChannel { - s.SnsTopicARN = &v +// SetDeliveryChannels sets the DeliveryChannels field's value. +func (s *DescribeDeliveryChannelsOutput) SetDeliveryChannels(v []*DeliveryChannel) *DescribeDeliveryChannelsOutput { + s.DeliveryChannels = v return s } -// The status of a specified delivery channel. -// -// Valid values: Success | Failure -type DeliveryChannelStatus struct { +type DescribePendingAggregationRequestsInput struct { _ struct{} `type:"structure"` - // A list that contains the status of the delivery of the configuration history - // to the specified Amazon S3 bucket. 
- ConfigHistoryDeliveryInfo *ConfigExportDeliveryInfo `locationName:"configHistoryDeliveryInfo" type:"structure"` - - // A list containing the status of the delivery of the snapshot to the specified - // Amazon S3 bucket. - ConfigSnapshotDeliveryInfo *ConfigExportDeliveryInfo `locationName:"configSnapshotDeliveryInfo" type:"structure"` - - // A list containing the status of the delivery of the configuration stream - // notification to the specified Amazon SNS topic. - ConfigStreamDeliveryInfo *ConfigStreamDeliveryInfo `locationName:"configStreamDeliveryInfo" type:"structure"` + // The maximum number of evaluation results returned on each page. The default + // is maximum. If you specify 0, AWS Config uses the default. + Limit *int64 `type:"integer"` - // The name of the delivery channel. - Name *string `locationName:"name" type:"string"` + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s DeliveryChannelStatus) String() string { +func (s DescribePendingAggregationRequestsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeliveryChannelStatus) GoString() string { +func (s DescribePendingAggregationRequestsInput) GoString() string { return s.String() } -// SetConfigHistoryDeliveryInfo sets the ConfigHistoryDeliveryInfo field's value. -func (s *DeliveryChannelStatus) SetConfigHistoryDeliveryInfo(v *ConfigExportDeliveryInfo) *DeliveryChannelStatus { - s.ConfigHistoryDeliveryInfo = v +// SetLimit sets the Limit field's value. +func (s *DescribePendingAggregationRequestsInput) SetLimit(v int64) *DescribePendingAggregationRequestsInput { + s.Limit = &v return s } -// SetConfigSnapshotDeliveryInfo sets the ConfigSnapshotDeliveryInfo field's value. -func (s *DeliveryChannelStatus) SetConfigSnapshotDeliveryInfo(v *ConfigExportDeliveryInfo) *DeliveryChannelStatus { - s.ConfigSnapshotDeliveryInfo = v +// SetNextToken sets the NextToken field's value. +func (s *DescribePendingAggregationRequestsInput) SetNextToken(v string) *DescribePendingAggregationRequestsInput { + s.NextToken = &v return s } -// SetConfigStreamDeliveryInfo sets the ConfigStreamDeliveryInfo field's value. -func (s *DeliveryChannelStatus) SetConfigStreamDeliveryInfo(v *ConfigStreamDeliveryInfo) *DeliveryChannelStatus { - s.ConfigStreamDeliveryInfo = v +type DescribePendingAggregationRequestsOutput struct { + _ struct{} `type:"structure"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` + + // Returns a PendingAggregationRequests object. + PendingAggregationRequests []*PendingAggregationRequest `type:"list"` +} + +// String returns the string representation +func (s DescribePendingAggregationRequestsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePendingAggregationRequestsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribePendingAggregationRequestsOutput) SetNextToken(v string) *DescribePendingAggregationRequestsOutput { + s.NextToken = &v return s } -// SetName sets the Name field's value. 
-func (s *DeliveryChannelStatus) SetName(v string) *DeliveryChannelStatus { - s.Name = &v +// SetPendingAggregationRequests sets the PendingAggregationRequests field's value. +func (s *DescribePendingAggregationRequestsOutput) SetPendingAggregationRequests(v []*PendingAggregationRequest) *DescribePendingAggregationRequestsOutput { + s.PendingAggregationRequests = v return s } -type DescribeComplianceByConfigRuleInput struct { +type DescribeRetentionConfigurationsInput struct { _ struct{} `type:"structure"` - // Filters the results by compliance. - // - // The allowed values are COMPLIANT, NON_COMPLIANT, and INSUFFICIENT_DATA. - ComplianceTypes []*string `type:"list"` - - // Specify one or more AWS Config rule names to filter the results by rule. - ConfigRuleNames []*string `type:"list"` - - // The NextToken string returned on a previous page that you use to get the + // The nextToken string returned on a previous page that you use to get the // next page of results in a paginated response. NextToken *string `type:"string"` + + // A list of names of retention configurations for which you want details. If + // you do not specify a name, AWS Config returns details for all the retention + // configurations for that account. + // + // Currently, AWS Config supports only one retention configuration per region + // in your account. + RetentionConfigurationNames []*string `type:"list"` } // String returns the string representation -func (s DescribeComplianceByConfigRuleInput) String() string { +func (s DescribeRetentionConfigurationsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeComplianceByConfigRuleInput) GoString() string { +func (s DescribeRetentionConfigurationsInput) GoString() string { return s.String() } -// SetComplianceTypes sets the ComplianceTypes field's value. -func (s *DescribeComplianceByConfigRuleInput) SetComplianceTypes(v []*string) *DescribeComplianceByConfigRuleInput { - s.ComplianceTypes = v - return s -} - -// SetConfigRuleNames sets the ConfigRuleNames field's value. -func (s *DescribeComplianceByConfigRuleInput) SetConfigRuleNames(v []*string) *DescribeComplianceByConfigRuleInput { - s.ConfigRuleNames = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeRetentionConfigurationsInput) SetNextToken(v string) *DescribeRetentionConfigurationsInput { + s.NextToken = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeComplianceByConfigRuleInput) SetNextToken(v string) *DescribeComplianceByConfigRuleInput { - s.NextToken = &v +// SetRetentionConfigurationNames sets the RetentionConfigurationNames field's value. +func (s *DescribeRetentionConfigurationsInput) SetRetentionConfigurationNames(v []*string) *DescribeRetentionConfigurationsInput { + s.RetentionConfigurationNames = v return s } -type DescribeComplianceByConfigRuleOutput struct { +type DescribeRetentionConfigurationsOutput struct { _ struct{} `type:"structure"` - // Indicates whether each of the specified AWS Config rules is compliant. - ComplianceByConfigRules []*ComplianceByConfigRule `type:"list"` - - // The string that you use in a subsequent request to get the next page of results - // in a paginated response. + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. NextToken *string `type:"string"` + + // Returns a retention configuration object. 
+ RetentionConfigurations []*RetentionConfiguration `type:"list"` } // String returns the string representation -func (s DescribeComplianceByConfigRuleOutput) String() string { +func (s DescribeRetentionConfigurationsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeComplianceByConfigRuleOutput) GoString() string { +func (s DescribeRetentionConfigurationsOutput) GoString() string { return s.String() } -// SetComplianceByConfigRules sets the ComplianceByConfigRules field's value. -func (s *DescribeComplianceByConfigRuleOutput) SetComplianceByConfigRules(v []*ComplianceByConfigRule) *DescribeComplianceByConfigRuleOutput { - s.ComplianceByConfigRules = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeRetentionConfigurationsOutput) SetNextToken(v string) *DescribeRetentionConfigurationsOutput { + s.NextToken = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeComplianceByConfigRuleOutput) SetNextToken(v string) *DescribeComplianceByConfigRuleOutput { - s.NextToken = &v +// SetRetentionConfigurations sets the RetentionConfigurations field's value. +func (s *DescribeRetentionConfigurationsOutput) SetRetentionConfigurations(v []*RetentionConfiguration) *DescribeRetentionConfigurationsOutput { + s.RetentionConfigurations = v return s } -type DescribeComplianceByResourceInput struct { +// Identifies an AWS resource and indicates whether it complies with the AWS +// Config rule that it was evaluated against. +type Evaluation struct { _ struct{} `type:"structure"` - // Filters the results by compliance. - // - // The allowed values are COMPLIANT, NON_COMPLIANT, and INSUFFICIENT_DATA. - ComplianceTypes []*string `type:"list"` + // Supplementary information about how the evaluation determined the compliance. + Annotation *string `min:"1" type:"string"` - // The maximum number of evaluation results returned on each page. The default - // is 10. You cannot specify a limit greater than 100. If you specify 0, AWS - // Config uses the default. - Limit *int64 `type:"integer"` + // The ID of the AWS resource that was evaluated. + // + // ComplianceResourceId is a required field + ComplianceResourceId *string `min:"1" type:"string" required:"true"` - // The NextToken string returned on a previous page that you use to get the - // next page of results in a paginated response. - NextToken *string `type:"string"` + // The type of AWS resource that was evaluated. + // + // ComplianceResourceType is a required field + ComplianceResourceType *string `min:"1" type:"string" required:"true"` - // The ID of the AWS resource for which you want compliance information. You - // can specify only one resource ID. If you specify a resource ID, you must - // also specify a type for ResourceType. - ResourceId *string `min:"1" type:"string"` + // Indicates whether the AWS resource complies with the AWS Config rule that + // it was evaluated against. + // + // For the Evaluation data type, AWS Config supports only the COMPLIANT, NON_COMPLIANT, + // and NOT_APPLICABLE values. AWS Config does not support the INSUFFICIENT_DATA + // value for this data type. + // + // Similarly, AWS Config does not accept INSUFFICIENT_DATA as the value for + // ComplianceType from a PutEvaluations request. For example, an AWS Lambda + // function for a custom AWS Config rule cannot pass an INSUFFICIENT_DATA value + // to AWS Config. 
+ // + // ComplianceType is a required field + ComplianceType *string `type:"string" required:"true" enum:"ComplianceType"` - // The types of AWS resources for which you want compliance information; for - // example, AWS::EC2::Instance. For this action, you can specify that the resource - // type is an AWS account by specifying AWS::::Account. - ResourceType *string `min:"1" type:"string"` + // The time of the event in AWS Config that triggered the evaluation. For event-based + // evaluations, the time indicates when AWS Config created the configuration + // item that triggered the evaluation. For periodic evaluations, the time indicates + // when AWS Config triggered the evaluation at the frequency that you specified + // (for example, every 24 hours). + // + // OrderingTimestamp is a required field + OrderingTimestamp *time.Time `type:"timestamp" required:"true"` } // String returns the string representation -func (s DescribeComplianceByResourceInput) String() string { +func (s Evaluation) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeComplianceByResourceInput) GoString() string { +func (s Evaluation) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeComplianceByResourceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeComplianceByResourceInput"} - if s.ResourceId != nil && len(*s.ResourceId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) +func (s *Evaluation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Evaluation"} + if s.Annotation != nil && len(*s.Annotation) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Annotation", 1)) } - if s.ResourceType != nil && len(*s.ResourceType) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ResourceType", 1)) + if s.ComplianceResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ComplianceResourceId")) } - - if invalidParams.Len() > 0 { - return invalidParams + if s.ComplianceResourceId != nil && len(*s.ComplianceResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ComplianceResourceId", 1)) + } + if s.ComplianceResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ComplianceResourceType")) + } + if s.ComplianceResourceType != nil && len(*s.ComplianceResourceType) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ComplianceResourceType", 1)) + } + if s.ComplianceType == nil { + invalidParams.Add(request.NewErrParamRequired("ComplianceType")) + } + if s.OrderingTimestamp == nil { + invalidParams.Add(request.NewErrParamRequired("OrderingTimestamp")) } - return nil -} - -// SetComplianceTypes sets the ComplianceTypes field's value. -func (s *DescribeComplianceByResourceInput) SetComplianceTypes(v []*string) *DescribeComplianceByResourceInput { - s.ComplianceTypes = v - return s -} - -// SetLimit sets the Limit field's value. -func (s *DescribeComplianceByResourceInput) SetLimit(v int64) *DescribeComplianceByResourceInput { - s.Limit = &v - return s -} -// SetNextToken sets the NextToken field's value. -func (s *DescribeComplianceByResourceInput) SetNextToken(v string) *DescribeComplianceByResourceInput { - s.NextToken = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetResourceId sets the ResourceId field's value. 
-func (s *DescribeComplianceByResourceInput) SetResourceId(v string) *DescribeComplianceByResourceInput { - s.ResourceId = &v +// SetAnnotation sets the Annotation field's value. +func (s *Evaluation) SetAnnotation(v string) *Evaluation { + s.Annotation = &v return s } -// SetResourceType sets the ResourceType field's value. -func (s *DescribeComplianceByResourceInput) SetResourceType(v string) *DescribeComplianceByResourceInput { - s.ResourceType = &v +// SetComplianceResourceId sets the ComplianceResourceId field's value. +func (s *Evaluation) SetComplianceResourceId(v string) *Evaluation { + s.ComplianceResourceId = &v return s } -type DescribeComplianceByResourceOutput struct { - _ struct{} `type:"structure"` - - // Indicates whether the specified AWS resource complies with all of the AWS - // Config rules that evaluate it. - ComplianceByResources []*ComplianceByResource `type:"list"` - - // The string that you use in a subsequent request to get the next page of results - // in a paginated response. - NextToken *string `type:"string"` -} - -// String returns the string representation -func (s DescribeComplianceByResourceOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DescribeComplianceByResourceOutput) GoString() string { - return s.String() +// SetComplianceResourceType sets the ComplianceResourceType field's value. +func (s *Evaluation) SetComplianceResourceType(v string) *Evaluation { + s.ComplianceResourceType = &v + return s } -// SetComplianceByResources sets the ComplianceByResources field's value. -func (s *DescribeComplianceByResourceOutput) SetComplianceByResources(v []*ComplianceByResource) *DescribeComplianceByResourceOutput { - s.ComplianceByResources = v +// SetComplianceType sets the ComplianceType field's value. +func (s *Evaluation) SetComplianceType(v string) *Evaluation { + s.ComplianceType = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeComplianceByResourceOutput) SetNextToken(v string) *DescribeComplianceByResourceOutput { - s.NextToken = &v +// SetOrderingTimestamp sets the OrderingTimestamp field's value. +func (s *Evaluation) SetOrderingTimestamp(v time.Time) *Evaluation { + s.OrderingTimestamp = &v return s } -type DescribeConfigRuleEvaluationStatusInput struct { +// The details of an AWS Config evaluation. Provides the AWS resource that was +// evaluated, the compliance of the resource, related time stamps, and supplementary +// information. +type EvaluationResult struct { _ struct{} `type:"structure"` - // The name of the AWS managed Config rules for which you want status information. - // If you do not specify any names, AWS Config returns status information for - // all AWS managed Config rules that you use. - ConfigRuleNames []*string `type:"list"` + // Supplementary information about how the evaluation determined the compliance. + Annotation *string `min:"1" type:"string"` - // The number of rule evaluation results that you want returned. - // - // This parameter is required if the rule limit for your account is more than - // the default of 50 rules. + // Indicates whether the AWS resource complies with the AWS Config rule that + // evaluated it. // - // For more information about requesting a rule limit increase, see AWS Config - // Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_config) - // in the AWS General Reference Guide. 
- Limit *int64 `type:"integer"` + // For the EvaluationResult data type, AWS Config supports only the COMPLIANT, + // NON_COMPLIANT, and NOT_APPLICABLE values. AWS Config does not support the + // INSUFFICIENT_DATA value for the EvaluationResult data type. + ComplianceType *string `type:"string" enum:"ComplianceType"` - // The NextToken string returned on a previous page that you use to get the - // next page of results in a paginated response. - NextToken *string `type:"string"` + // The time when the AWS Config rule evaluated the AWS resource. + ConfigRuleInvokedTime *time.Time `type:"timestamp"` + + // Uniquely identifies the evaluation result. + EvaluationResultIdentifier *EvaluationResultIdentifier `type:"structure"` + + // The time when AWS Config recorded the evaluation result. + ResultRecordedTime *time.Time `type:"timestamp"` + + // An encrypted token that associates an evaluation with an AWS Config rule. + // The token identifies the rule, the AWS resource being evaluated, and the + // event that triggered the evaluation. + ResultToken *string `type:"string"` } // String returns the string representation -func (s DescribeConfigRuleEvaluationStatusInput) String() string { +func (s EvaluationResult) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeConfigRuleEvaluationStatusInput) GoString() string { +func (s EvaluationResult) GoString() string { return s.String() } -// SetConfigRuleNames sets the ConfigRuleNames field's value. -func (s *DescribeConfigRuleEvaluationStatusInput) SetConfigRuleNames(v []*string) *DescribeConfigRuleEvaluationStatusInput { - s.ConfigRuleNames = v +// SetAnnotation sets the Annotation field's value. +func (s *EvaluationResult) SetAnnotation(v string) *EvaluationResult { + s.Annotation = &v return s } -// SetLimit sets the Limit field's value. -func (s *DescribeConfigRuleEvaluationStatusInput) SetLimit(v int64) *DescribeConfigRuleEvaluationStatusInput { - s.Limit = &v +// SetComplianceType sets the ComplianceType field's value. +func (s *EvaluationResult) SetComplianceType(v string) *EvaluationResult { + s.ComplianceType = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeConfigRuleEvaluationStatusInput) SetNextToken(v string) *DescribeConfigRuleEvaluationStatusInput { - s.NextToken = &v +// SetConfigRuleInvokedTime sets the ConfigRuleInvokedTime field's value. +func (s *EvaluationResult) SetConfigRuleInvokedTime(v time.Time) *EvaluationResult { + s.ConfigRuleInvokedTime = &v return s } -type DescribeConfigRuleEvaluationStatusOutput struct { - _ struct{} `type:"structure"` - - // Status information about your AWS managed Config rules. - ConfigRulesEvaluationStatus []*ConfigRuleEvaluationStatus `type:"list"` - - // The string that you use in a subsequent request to get the next page of results - // in a paginated response. - NextToken *string `type:"string"` -} - -// String returns the string representation -func (s DescribeConfigRuleEvaluationStatusOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DescribeConfigRuleEvaluationStatusOutput) GoString() string { - return s.String() +// SetEvaluationResultIdentifier sets the EvaluationResultIdentifier field's value. 
+func (s *EvaluationResult) SetEvaluationResultIdentifier(v *EvaluationResultIdentifier) *EvaluationResult { + s.EvaluationResultIdentifier = v + return s } -// SetConfigRulesEvaluationStatus sets the ConfigRulesEvaluationStatus field's value. -func (s *DescribeConfigRuleEvaluationStatusOutput) SetConfigRulesEvaluationStatus(v []*ConfigRuleEvaluationStatus) *DescribeConfigRuleEvaluationStatusOutput { - s.ConfigRulesEvaluationStatus = v +// SetResultRecordedTime sets the ResultRecordedTime field's value. +func (s *EvaluationResult) SetResultRecordedTime(v time.Time) *EvaluationResult { + s.ResultRecordedTime = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeConfigRuleEvaluationStatusOutput) SetNextToken(v string) *DescribeConfigRuleEvaluationStatusOutput { - s.NextToken = &v +// SetResultToken sets the ResultToken field's value. +func (s *EvaluationResult) SetResultToken(v string) *EvaluationResult { + s.ResultToken = &v return s } -type DescribeConfigRulesInput struct { +// Uniquely identifies an evaluation result. +type EvaluationResultIdentifier struct { _ struct{} `type:"structure"` - // The names of the AWS Config rules for which you want details. If you do not - // specify any names, AWS Config returns details for all your rules. - ConfigRuleNames []*string `type:"list"` + // Identifies an AWS Config rule used to evaluate an AWS resource, and provides + // the type and ID of the evaluated resource. + EvaluationResultQualifier *EvaluationResultQualifier `type:"structure"` - // The NextToken string returned on a previous page that you use to get the - // next page of results in a paginated response. - NextToken *string `type:"string"` + // The time of the event that triggered the evaluation of your AWS resources. + // The time can indicate when AWS Config delivered a configuration item change + // notification, or it can indicate when AWS Config delivered the configuration + // snapshot, depending on which event triggered the evaluation. + OrderingTimestamp *time.Time `type:"timestamp"` } // String returns the string representation -func (s DescribeConfigRulesInput) String() string { +func (s EvaluationResultIdentifier) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeConfigRulesInput) GoString() string { +func (s EvaluationResultIdentifier) GoString() string { return s.String() } -// SetConfigRuleNames sets the ConfigRuleNames field's value. -func (s *DescribeConfigRulesInput) SetConfigRuleNames(v []*string) *DescribeConfigRulesInput { - s.ConfigRuleNames = v +// SetEvaluationResultQualifier sets the EvaluationResultQualifier field's value. +func (s *EvaluationResultIdentifier) SetEvaluationResultQualifier(v *EvaluationResultQualifier) *EvaluationResultIdentifier { + s.EvaluationResultQualifier = v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeConfigRulesInput) SetNextToken(v string) *DescribeConfigRulesInput { - s.NextToken = &v +// SetOrderingTimestamp sets the OrderingTimestamp field's value. +func (s *EvaluationResultIdentifier) SetOrderingTimestamp(v time.Time) *EvaluationResultIdentifier { + s.OrderingTimestamp = &v return s } -type DescribeConfigRulesOutput struct { +// Identifies an AWS Config rule that evaluated an AWS resource, and provides +// the type and ID of the resource that the rule evaluated. +type EvaluationResultQualifier struct { _ struct{} `type:"structure"` - // The details about your AWS Config rules. 
- ConfigRules []*ConfigRule `type:"list"` + // The name of the AWS Config rule that was used in the evaluation. + ConfigRuleName *string `min:"1" type:"string"` - // The string that you use in a subsequent request to get the next page of results - // in a paginated response. - NextToken *string `type:"string"` + // The ID of the evaluated AWS resource. + ResourceId *string `min:"1" type:"string"` + + // The type of AWS resource that was evaluated. + ResourceType *string `min:"1" type:"string"` } // String returns the string representation -func (s DescribeConfigRulesOutput) String() string { +func (s EvaluationResultQualifier) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeConfigRulesOutput) GoString() string { +func (s EvaluationResultQualifier) GoString() string { return s.String() } -// SetConfigRules sets the ConfigRules field's value. -func (s *DescribeConfigRulesOutput) SetConfigRules(v []*ConfigRule) *DescribeConfigRulesOutput { - s.ConfigRules = v +// SetConfigRuleName sets the ConfigRuleName field's value. +func (s *EvaluationResultQualifier) SetConfigRuleName(v string) *EvaluationResultQualifier { + s.ConfigRuleName = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeConfigRulesOutput) SetNextToken(v string) *DescribeConfigRulesOutput { - s.NextToken = &v +// SetResourceId sets the ResourceId field's value. +func (s *EvaluationResultQualifier) SetResourceId(v string) *EvaluationResultQualifier { + s.ResourceId = &v return s } -// The input for the DescribeConfigurationRecorderStatus action. -type DescribeConfigurationRecorderStatusInput struct { +// SetResourceType sets the ResourceType field's value. +func (s *EvaluationResultQualifier) SetResourceType(v string) *EvaluationResultQualifier { + s.ResourceType = &v + return s +} + +type GetAggregateComplianceDetailsByConfigRuleInput struct { _ struct{} `type:"structure"` - // The name(s) of the configuration recorder. If the name is not specified, - // the action returns the current status of all the configuration recorders - // associated with the account. - ConfigurationRecorderNames []*string `type:"list"` -} + // The 12-digit account ID of the source account. + // + // AccountId is a required field + AccountId *string `type:"string" required:"true"` -// String returns the string representation -func (s DescribeConfigurationRecorderStatusInput) String() string { - return awsutil.Prettify(s) -} + // The source region from where the data is aggregated. + // + // AwsRegion is a required field + AwsRegion *string `min:"1" type:"string" required:"true"` -// GoString returns the string representation -func (s DescribeConfigurationRecorderStatusInput) GoString() string { - return s.String() -} + // The resource compliance status. + // + // For the GetAggregateComplianceDetailsByConfigRuleRequest data type, AWS Config + // supports only the COMPLIANT and NON_COMPLIANT. AWS Config does not support + // the NOT_APPLICABLE and INSUFFICIENT_DATA values. + ComplianceType *string `type:"string" enum:"ComplianceType"` -// SetConfigurationRecorderNames sets the ConfigurationRecorderNames field's value. -func (s *DescribeConfigurationRecorderStatusInput) SetConfigurationRecorderNames(v []*string) *DescribeConfigurationRecorderStatusInput { - s.ConfigurationRecorderNames = v - return s -} + // The name of the AWS Config rule for which you want compliance information. 
+ // + // ConfigRuleName is a required field + ConfigRuleName *string `min:"1" type:"string" required:"true"` -// The output for the DescribeConfigurationRecorderStatus action in JSON format. -type DescribeConfigurationRecorderStatusOutput struct { - _ struct{} `type:"structure"` + // The name of the configuration aggregator. + // + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` - // A list that contains status of the specified recorders. - ConfigurationRecordersStatus []*ConfigurationRecorderStatus `type:"list"` + // The maximum number of evaluation results returned on each page. The default + // is 50. You cannot specify a number greater than 100. If you specify 0, AWS + // Config uses the default. + Limit *int64 `type:"integer"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s DescribeConfigurationRecorderStatusOutput) String() string { +func (s GetAggregateComplianceDetailsByConfigRuleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeConfigurationRecorderStatusOutput) GoString() string { +func (s GetAggregateComplianceDetailsByConfigRuleInput) GoString() string { return s.String() } -// SetConfigurationRecordersStatus sets the ConfigurationRecordersStatus field's value. -func (s *DescribeConfigurationRecorderStatusOutput) SetConfigurationRecordersStatus(v []*ConfigurationRecorderStatus) *DescribeConfigurationRecorderStatusOutput { - s.ConfigurationRecordersStatus = v - return s -} - -// The input for the DescribeConfigurationRecorders action. -type DescribeConfigurationRecordersInput struct { - _ struct{} `type:"structure"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetAggregateComplianceDetailsByConfigRuleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAggregateComplianceDetailsByConfigRuleInput"} + if s.AccountId == nil { + invalidParams.Add(request.NewErrParamRequired("AccountId")) + } + if s.AwsRegion == nil { + invalidParams.Add(request.NewErrParamRequired("AwsRegion")) + } + if s.AwsRegion != nil && len(*s.AwsRegion) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AwsRegion", 1)) + } + if s.ConfigRuleName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigRuleName")) + } + if s.ConfigRuleName != nil && len(*s.ConfigRuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigRuleName", 1)) + } + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) + } + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) + } - // A list of configuration recorder names. - ConfigurationRecorderNames []*string `type:"list"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// String returns the string representation -func (s DescribeConfigurationRecordersInput) String() string { - return awsutil.Prettify(s) +// SetAccountId sets the AccountId field's value. 
+func (s *GetAggregateComplianceDetailsByConfigRuleInput) SetAccountId(v string) *GetAggregateComplianceDetailsByConfigRuleInput { + s.AccountId = &v + return s } -// GoString returns the string representation -func (s DescribeConfigurationRecordersInput) GoString() string { - return s.String() +// SetAwsRegion sets the AwsRegion field's value. +func (s *GetAggregateComplianceDetailsByConfigRuleInput) SetAwsRegion(v string) *GetAggregateComplianceDetailsByConfigRuleInput { + s.AwsRegion = &v + return s } -// SetConfigurationRecorderNames sets the ConfigurationRecorderNames field's value. -func (s *DescribeConfigurationRecordersInput) SetConfigurationRecorderNames(v []*string) *DescribeConfigurationRecordersInput { - s.ConfigurationRecorderNames = v +// SetComplianceType sets the ComplianceType field's value. +func (s *GetAggregateComplianceDetailsByConfigRuleInput) SetComplianceType(v string) *GetAggregateComplianceDetailsByConfigRuleInput { + s.ComplianceType = &v return s } -// The output for the DescribeConfigurationRecorders action. -type DescribeConfigurationRecordersOutput struct { - _ struct{} `type:"structure"` - - // A list that contains the descriptions of the specified configuration recorders. - ConfigurationRecorders []*ConfigurationRecorder `type:"list"` +// SetConfigRuleName sets the ConfigRuleName field's value. +func (s *GetAggregateComplianceDetailsByConfigRuleInput) SetConfigRuleName(v string) *GetAggregateComplianceDetailsByConfigRuleInput { + s.ConfigRuleName = &v + return s } -// String returns the string representation -func (s DescribeConfigurationRecordersOutput) String() string { - return awsutil.Prettify(s) +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *GetAggregateComplianceDetailsByConfigRuleInput) SetConfigurationAggregatorName(v string) *GetAggregateComplianceDetailsByConfigRuleInput { + s.ConfigurationAggregatorName = &v + return s } -// GoString returns the string representation -func (s DescribeConfigurationRecordersOutput) GoString() string { - return s.String() +// SetLimit sets the Limit field's value. +func (s *GetAggregateComplianceDetailsByConfigRuleInput) SetLimit(v int64) *GetAggregateComplianceDetailsByConfigRuleInput { + s.Limit = &v + return s } -// SetConfigurationRecorders sets the ConfigurationRecorders field's value. -func (s *DescribeConfigurationRecordersOutput) SetConfigurationRecorders(v []*ConfigurationRecorder) *DescribeConfigurationRecordersOutput { - s.ConfigurationRecorders = v +// SetNextToken sets the NextToken field's value. +func (s *GetAggregateComplianceDetailsByConfigRuleInput) SetNextToken(v string) *GetAggregateComplianceDetailsByConfigRuleInput { + s.NextToken = &v return s } -// The input for the DeliveryChannelStatus action. -type DescribeDeliveryChannelStatusInput struct { +type GetAggregateComplianceDetailsByConfigRuleOutput struct { _ struct{} `type:"structure"` - // A list of delivery channel names. - DeliveryChannelNames []*string `type:"list"` + // Returns an AggregateEvaluationResults object. + AggregateEvaluationResults []*AggregateEvaluationResult `type:"list"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. 
+ NextToken *string `type:"string"` } // String returns the string representation -func (s DescribeDeliveryChannelStatusInput) String() string { +func (s GetAggregateComplianceDetailsByConfigRuleOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeDeliveryChannelStatusInput) GoString() string { +func (s GetAggregateComplianceDetailsByConfigRuleOutput) GoString() string { return s.String() } -// SetDeliveryChannelNames sets the DeliveryChannelNames field's value. -func (s *DescribeDeliveryChannelStatusInput) SetDeliveryChannelNames(v []*string) *DescribeDeliveryChannelStatusInput { - s.DeliveryChannelNames = v +// SetAggregateEvaluationResults sets the AggregateEvaluationResults field's value. +func (s *GetAggregateComplianceDetailsByConfigRuleOutput) SetAggregateEvaluationResults(v []*AggregateEvaluationResult) *GetAggregateComplianceDetailsByConfigRuleOutput { + s.AggregateEvaluationResults = v return s } -// The output for the DescribeDeliveryChannelStatus action. -type DescribeDeliveryChannelStatusOutput struct { +// SetNextToken sets the NextToken field's value. +func (s *GetAggregateComplianceDetailsByConfigRuleOutput) SetNextToken(v string) *GetAggregateComplianceDetailsByConfigRuleOutput { + s.NextToken = &v + return s +} + +type GetAggregateConfigRuleComplianceSummaryInput struct { _ struct{} `type:"structure"` - // A list that contains the status of a specified delivery channel. - DeliveryChannelsStatus []*DeliveryChannelStatus `type:"list"` + // The name of the configuration aggregator. + // + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` + + // Filters the results based on the ConfigRuleComplianceSummaryFilters object. + Filters *ConfigRuleComplianceSummaryFilters `type:"structure"` + + // Groups the result based on ACCOUNT_ID or AWS_REGION. + GroupByKey *string `type:"string" enum:"ConfigRuleComplianceSummaryGroupKey"` + + // The maximum number of evaluation results returned on each page. The default + // is 1000. You cannot specify a number greater than 1000. If you specify 0, + // AWS Config uses the default. + Limit *int64 `type:"integer"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s DescribeDeliveryChannelStatusOutput) String() string { +func (s GetAggregateConfigRuleComplianceSummaryInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeDeliveryChannelStatusOutput) GoString() string { +func (s GetAggregateConfigRuleComplianceSummaryInput) GoString() string { return s.String() } -// SetDeliveryChannelsStatus sets the DeliveryChannelsStatus field's value. -func (s *DescribeDeliveryChannelStatusOutput) SetDeliveryChannelsStatus(v []*DeliveryChannelStatus) *DescribeDeliveryChannelStatusOutput { - s.DeliveryChannelsStatus = v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetAggregateConfigRuleComplianceSummaryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAggregateConfigRuleComplianceSummaryInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) + } + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) + } + if s.Filters != nil { + if err := s.Filters.Validate(); err != nil { + invalidParams.AddNested("Filters", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// The input for the DescribeDeliveryChannels action. -type DescribeDeliveryChannelsInput struct { - _ struct{} `type:"structure"` +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *GetAggregateConfigRuleComplianceSummaryInput) SetConfigurationAggregatorName(v string) *GetAggregateConfigRuleComplianceSummaryInput { + s.ConfigurationAggregatorName = &v + return s +} - // A list of delivery channel names. - DeliveryChannelNames []*string `type:"list"` +// SetFilters sets the Filters field's value. +func (s *GetAggregateConfigRuleComplianceSummaryInput) SetFilters(v *ConfigRuleComplianceSummaryFilters) *GetAggregateConfigRuleComplianceSummaryInput { + s.Filters = v + return s } -// String returns the string representation -func (s DescribeDeliveryChannelsInput) String() string { - return awsutil.Prettify(s) +// SetGroupByKey sets the GroupByKey field's value. +func (s *GetAggregateConfigRuleComplianceSummaryInput) SetGroupByKey(v string) *GetAggregateConfigRuleComplianceSummaryInput { + s.GroupByKey = &v + return s } -// GoString returns the string representation -func (s DescribeDeliveryChannelsInput) GoString() string { - return s.String() +// SetLimit sets the Limit field's value. +func (s *GetAggregateConfigRuleComplianceSummaryInput) SetLimit(v int64) *GetAggregateConfigRuleComplianceSummaryInput { + s.Limit = &v + return s } -// SetDeliveryChannelNames sets the DeliveryChannelNames field's value. -func (s *DescribeDeliveryChannelsInput) SetDeliveryChannelNames(v []*string) *DescribeDeliveryChannelsInput { - s.DeliveryChannelNames = v +// SetNextToken sets the NextToken field's value. +func (s *GetAggregateConfigRuleComplianceSummaryInput) SetNextToken(v string) *GetAggregateConfigRuleComplianceSummaryInput { + s.NextToken = &v return s } -// The output for the DescribeDeliveryChannels action. -type DescribeDeliveryChannelsOutput struct { +type GetAggregateConfigRuleComplianceSummaryOutput struct { _ struct{} `type:"structure"` - // A list that contains the descriptions of the specified delivery channel. - DeliveryChannels []*DeliveryChannel `type:"list"` + // Returns a list of AggregateComplianceCounts object. + AggregateComplianceCounts []*AggregateComplianceCount `type:"list"` + + // Groups the result based on ACCOUNT_ID or AWS_REGION. + GroupByKey *string `min:"1" type:"string"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. 
+ NextToken *string `type:"string"` } // String returns the string representation -func (s DescribeDeliveryChannelsOutput) String() string { +func (s GetAggregateConfigRuleComplianceSummaryOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeDeliveryChannelsOutput) GoString() string { +func (s GetAggregateConfigRuleComplianceSummaryOutput) GoString() string { return s.String() } -// SetDeliveryChannels sets the DeliveryChannels field's value. -func (s *DescribeDeliveryChannelsOutput) SetDeliveryChannels(v []*DeliveryChannel) *DescribeDeliveryChannelsOutput { - s.DeliveryChannels = v +// SetAggregateComplianceCounts sets the AggregateComplianceCounts field's value. +func (s *GetAggregateConfigRuleComplianceSummaryOutput) SetAggregateComplianceCounts(v []*AggregateComplianceCount) *GetAggregateConfigRuleComplianceSummaryOutput { + s.AggregateComplianceCounts = v return s } -// Identifies an AWS resource and indicates whether it complies with the AWS -// Config rule that it was evaluated against. -type Evaluation struct { - _ struct{} `type:"structure"` +// SetGroupByKey sets the GroupByKey field's value. +func (s *GetAggregateConfigRuleComplianceSummaryOutput) SetGroupByKey(v string) *GetAggregateConfigRuleComplianceSummaryOutput { + s.GroupByKey = &v + return s +} - // Supplementary information about how the evaluation determined the compliance. - Annotation *string `min:"1" type:"string"` +// SetNextToken sets the NextToken field's value. +func (s *GetAggregateConfigRuleComplianceSummaryOutput) SetNextToken(v string) *GetAggregateConfigRuleComplianceSummaryOutput { + s.NextToken = &v + return s +} - // The ID of the AWS resource that was evaluated. - // - // ComplianceResourceId is a required field - ComplianceResourceId *string `min:"1" type:"string" required:"true"` +type GetAggregateDiscoveredResourceCountsInput struct { + _ struct{} `type:"structure"` - // The type of AWS resource that was evaluated. + // The name of the configuration aggregator. // - // ComplianceResourceType is a required field - ComplianceResourceType *string `min:"1" type:"string" required:"true"` + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` - // Indicates whether the AWS resource complies with the AWS Config rule that - // it was evaluated against. - // - // For the Evaluation data type, AWS Config supports only the COMPLIANT, NON_COMPLIANT, - // and NOT_APPLICABLE values. AWS Config does not support the INSUFFICIENT_DATA - // value for this data type. - // - // Similarly, AWS Config does not accept INSUFFICIENT_DATA as the value for - // ComplianceType from a PutEvaluations request. For example, an AWS Lambda - // function for a custom Config rule cannot pass an INSUFFICIENT_DATA value - // to AWS Config. - // - // ComplianceType is a required field - ComplianceType *string `type:"string" required:"true" enum:"ComplianceType"` + // Filters the results based on the ResourceCountFilters object. + Filters *ResourceCountFilters `type:"structure"` - // The time of the event in AWS Config that triggered the evaluation. For event-based - // evaluations, the time indicates when AWS Config created the configuration - // item that triggered the evaluation. For periodic evaluations, the time indicates - // when AWS Config triggered the evaluation at the frequency that you specified - // (for example, every 24 hours). 
- // - // OrderingTimestamp is a required field - OrderingTimestamp *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + // The key to group the resource counts. + GroupByKey *string `type:"string" enum:"ResourceCountGroupKey"` + + // The maximum number of GroupedResourceCount objects returned on each page. + // The default is 1000. You cannot specify a number greater than 1000. If you + // specify 0, AWS Config uses the default. + Limit *int64 `type:"integer"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` } // String returns the string representation -func (s Evaluation) String() string { +func (s GetAggregateDiscoveredResourceCountsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Evaluation) GoString() string { +func (s GetAggregateDiscoveredResourceCountsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *Evaluation) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Evaluation"} - if s.Annotation != nil && len(*s.Annotation) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Annotation", 1)) - } - if s.ComplianceResourceId == nil { - invalidParams.Add(request.NewErrParamRequired("ComplianceResourceId")) - } - if s.ComplianceResourceId != nil && len(*s.ComplianceResourceId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ComplianceResourceId", 1)) - } - if s.ComplianceResourceType == nil { - invalidParams.Add(request.NewErrParamRequired("ComplianceResourceType")) +func (s *GetAggregateDiscoveredResourceCountsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAggregateDiscoveredResourceCountsInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) } - if s.ComplianceResourceType != nil && len(*s.ComplianceResourceType) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ComplianceResourceType", 1)) - } - if s.ComplianceType == nil { - invalidParams.Add(request.NewErrParamRequired("ComplianceType")) + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) } - if s.OrderingTimestamp == nil { - invalidParams.Add(request.NewErrParamRequired("OrderingTimestamp")) + if s.Filters != nil { + if err := s.Filters.Validate(); err != nil { + invalidParams.AddNested("Filters", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -4773,191 +8880,171 @@ func (s *Evaluation) Validate() error { return nil } -// SetAnnotation sets the Annotation field's value. -func (s *Evaluation) SetAnnotation(v string) *Evaluation { - s.Annotation = &v - return s -} - -// SetComplianceResourceId sets the ComplianceResourceId field's value. -func (s *Evaluation) SetComplianceResourceId(v string) *Evaluation { - s.ComplianceResourceId = &v +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *GetAggregateDiscoveredResourceCountsInput) SetConfigurationAggregatorName(v string) *GetAggregateDiscoveredResourceCountsInput { + s.ConfigurationAggregatorName = &v return s } -// SetComplianceResourceType sets the ComplianceResourceType field's value. 
-func (s *Evaluation) SetComplianceResourceType(v string) *Evaluation { - s.ComplianceResourceType = &v +// SetFilters sets the Filters field's value. +func (s *GetAggregateDiscoveredResourceCountsInput) SetFilters(v *ResourceCountFilters) *GetAggregateDiscoveredResourceCountsInput { + s.Filters = v return s } -// SetComplianceType sets the ComplianceType field's value. -func (s *Evaluation) SetComplianceType(v string) *Evaluation { - s.ComplianceType = &v +// SetGroupByKey sets the GroupByKey field's value. +func (s *GetAggregateDiscoveredResourceCountsInput) SetGroupByKey(v string) *GetAggregateDiscoveredResourceCountsInput { + s.GroupByKey = &v return s } -// SetOrderingTimestamp sets the OrderingTimestamp field's value. -func (s *Evaluation) SetOrderingTimestamp(v time.Time) *Evaluation { - s.OrderingTimestamp = &v +// SetLimit sets the Limit field's value. +func (s *GetAggregateDiscoveredResourceCountsInput) SetLimit(v int64) *GetAggregateDiscoveredResourceCountsInput { + s.Limit = &v return s } -// The details of an AWS Config evaluation. Provides the AWS resource that was -// evaluated, the compliance of the resource, related timestamps, and supplementary -// information. -type EvaluationResult struct { - _ struct{} `type:"structure"` - - // Supplementary information about how the evaluation determined the compliance. - Annotation *string `min:"1" type:"string"` +// SetNextToken sets the NextToken field's value. +func (s *GetAggregateDiscoveredResourceCountsInput) SetNextToken(v string) *GetAggregateDiscoveredResourceCountsInput { + s.NextToken = &v + return s +} - // Indicates whether the AWS resource complies with the AWS Config rule that - // evaluated it. - // - // For the EvaluationResult data type, AWS Config supports only the COMPLIANT, - // NON_COMPLIANT, and NOT_APPLICABLE values. AWS Config does not support the - // INSUFFICIENT_DATA value for the EvaluationResult data type. - ComplianceType *string `type:"string" enum:"ComplianceType"` +type GetAggregateDiscoveredResourceCountsOutput struct { + _ struct{} `type:"structure"` - // The time when the AWS Config rule evaluated the AWS resource. - ConfigRuleInvokedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // The key passed into the request object. If GroupByKey is not provided, the + // result will be empty. + GroupByKey *string `min:"1" type:"string"` - // Uniquely identifies the evaluation result. - EvaluationResultIdentifier *EvaluationResultIdentifier `type:"structure"` + // Returns a list of GroupedResourceCount objects. + GroupedResourceCounts []*GroupedResourceCount `type:"list"` - // The time when AWS Config recorded the evaluation result. - ResultRecordedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` - // An encrypted token that associates an evaluation with an AWS Config rule. - // The token identifies the rule, the AWS resource being evaluated, and the - // event that triggered the evaluation. - ResultToken *string `type:"string"` + // The total number of resources that are present in an aggregator with the + // filters that you provide. 
+ // + // TotalDiscoveredResources is a required field + TotalDiscoveredResources *int64 `type:"long" required:"true"` } // String returns the string representation -func (s EvaluationResult) String() string { +func (s GetAggregateDiscoveredResourceCountsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EvaluationResult) GoString() string { +func (s GetAggregateDiscoveredResourceCountsOutput) GoString() string { return s.String() } -// SetAnnotation sets the Annotation field's value. -func (s *EvaluationResult) SetAnnotation(v string) *EvaluationResult { - s.Annotation = &v - return s -} - -// SetComplianceType sets the ComplianceType field's value. -func (s *EvaluationResult) SetComplianceType(v string) *EvaluationResult { - s.ComplianceType = &v - return s -} - -// SetConfigRuleInvokedTime sets the ConfigRuleInvokedTime field's value. -func (s *EvaluationResult) SetConfigRuleInvokedTime(v time.Time) *EvaluationResult { - s.ConfigRuleInvokedTime = &v +// SetGroupByKey sets the GroupByKey field's value. +func (s *GetAggregateDiscoveredResourceCountsOutput) SetGroupByKey(v string) *GetAggregateDiscoveredResourceCountsOutput { + s.GroupByKey = &v return s } -// SetEvaluationResultIdentifier sets the EvaluationResultIdentifier field's value. -func (s *EvaluationResult) SetEvaluationResultIdentifier(v *EvaluationResultIdentifier) *EvaluationResult { - s.EvaluationResultIdentifier = v +// SetGroupedResourceCounts sets the GroupedResourceCounts field's value. +func (s *GetAggregateDiscoveredResourceCountsOutput) SetGroupedResourceCounts(v []*GroupedResourceCount) *GetAggregateDiscoveredResourceCountsOutput { + s.GroupedResourceCounts = v return s } -// SetResultRecordedTime sets the ResultRecordedTime field's value. -func (s *EvaluationResult) SetResultRecordedTime(v time.Time) *EvaluationResult { - s.ResultRecordedTime = &v +// SetNextToken sets the NextToken field's value. +func (s *GetAggregateDiscoveredResourceCountsOutput) SetNextToken(v string) *GetAggregateDiscoveredResourceCountsOutput { + s.NextToken = &v return s } -// SetResultToken sets the ResultToken field's value. -func (s *EvaluationResult) SetResultToken(v string) *EvaluationResult { - s.ResultToken = &v +// SetTotalDiscoveredResources sets the TotalDiscoveredResources field's value. +func (s *GetAggregateDiscoveredResourceCountsOutput) SetTotalDiscoveredResources(v int64) *GetAggregateDiscoveredResourceCountsOutput { + s.TotalDiscoveredResources = &v return s } -// Uniquely identifies an evaluation result. -type EvaluationResultIdentifier struct { +type GetAggregateResourceConfigInput struct { _ struct{} `type:"structure"` - // Identifies an AWS Config rule used to evaluate an AWS resource, and provides - // the type and ID of the evaluated resource. - EvaluationResultQualifier *EvaluationResultQualifier `type:"structure"` + // The name of the configuration aggregator. + // + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` - // The time of the event that triggered the evaluation of your AWS resources. - // The time can indicate when AWS Config delivered a configuration item change - // notification, or it can indicate when AWS Config delivered the configuration - // snapshot, depending on which event triggered the evaluation. - OrderingTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + // An object that identifies aggregate resource. 
+ // + // ResourceIdentifier is a required field + ResourceIdentifier *AggregateResourceIdentifier `type:"structure" required:"true"` } // String returns the string representation -func (s EvaluationResultIdentifier) String() string { +func (s GetAggregateResourceConfigInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EvaluationResultIdentifier) GoString() string { +func (s GetAggregateResourceConfigInput) GoString() string { return s.String() } -// SetEvaluationResultQualifier sets the EvaluationResultQualifier field's value. -func (s *EvaluationResultIdentifier) SetEvaluationResultQualifier(v *EvaluationResultQualifier) *EvaluationResultIdentifier { - s.EvaluationResultQualifier = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetAggregateResourceConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAggregateResourceConfigInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) + } + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) + } + if s.ResourceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceIdentifier")) + } + if s.ResourceIdentifier != nil { + if err := s.ResourceIdentifier.Validate(); err != nil { + invalidParams.AddNested("ResourceIdentifier", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *GetAggregateResourceConfigInput) SetConfigurationAggregatorName(v string) *GetAggregateResourceConfigInput { + s.ConfigurationAggregatorName = &v return s } -// SetOrderingTimestamp sets the OrderingTimestamp field's value. -func (s *EvaluationResultIdentifier) SetOrderingTimestamp(v time.Time) *EvaluationResultIdentifier { - s.OrderingTimestamp = &v +// SetResourceIdentifier sets the ResourceIdentifier field's value. +func (s *GetAggregateResourceConfigInput) SetResourceIdentifier(v *AggregateResourceIdentifier) *GetAggregateResourceConfigInput { + s.ResourceIdentifier = v return s } -// Identifies an AWS Config rule that evaluated an AWS resource, and provides -// the type and ID of the resource that the rule evaluated. -type EvaluationResultQualifier struct { +type GetAggregateResourceConfigOutput struct { _ struct{} `type:"structure"` - // The name of the AWS Config rule that was used in the evaluation. - ConfigRuleName *string `min:"1" type:"string"` - - // The ID of the evaluated AWS resource. - ResourceId *string `min:"1" type:"string"` - - // The type of AWS resource that was evaluated. - ResourceType *string `min:"1" type:"string"` + // Returns a ConfigurationItem object. + ConfigurationItem *ConfigurationItem `type:"structure"` } // String returns the string representation -func (s EvaluationResultQualifier) String() string { +func (s GetAggregateResourceConfigOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EvaluationResultQualifier) GoString() string { +func (s GetAggregateResourceConfigOutput) GoString() string { return s.String() } -// SetConfigRuleName sets the ConfigRuleName field's value. 
-func (s *EvaluationResultQualifier) SetConfigRuleName(v string) *EvaluationResultQualifier { - s.ConfigRuleName = &v - return s -} - -// SetResourceId sets the ResourceId field's value. -func (s *EvaluationResultQualifier) SetResourceId(v string) *EvaluationResultQualifier { - s.ResourceId = &v - return s -} - -// SetResourceType sets the ResourceType field's value. -func (s *EvaluationResultQualifier) SetResourceType(v string) *EvaluationResultQualifier { - s.ResourceType = &v +// SetConfigurationItem sets the ConfigurationItem field's value. +func (s *GetAggregateResourceConfigOutput) SetConfigurationItem(v *ConfigurationItem) *GetAggregateResourceConfigOutput { + s.ConfigurationItem = v return s } @@ -4975,11 +9062,11 @@ type GetComplianceDetailsByConfigRuleInput struct { ConfigRuleName *string `min:"1" type:"string" required:"true"` // The maximum number of evaluation results returned on each page. The default - // is 10. You cannot specify a limit greater than 100. If you specify 0, AWS + // is 10. You cannot specify a number greater than 100. If you specify 0, AWS // Config uses the default. Limit *int64 `type:"integer"` - // The NextToken string returned on a previous page that you use to get the + // The nextToken string returned on a previous page that you use to get the // next page of results in a paginated response. NextToken *string `type:"string"` } @@ -5076,7 +9163,7 @@ type GetComplianceDetailsByResourceInput struct { // The allowed values are COMPLIANT, NON_COMPLIANT, and NOT_APPLICABLE. ComplianceTypes []*string `type:"list"` - // The NextToken string returned on a previous page that you use to get the + // The nextToken string returned on a previous page that you use to get the // next page of results in a paginated response. NextToken *string `type:"string"` @@ -5224,9 +9311,8 @@ type GetComplianceSummaryByResourceTypeInput struct { // Specify one or more resource types to get the number of resources that are // compliant and the number that are noncompliant for each resource type. // - // For this request, you can specify an AWS resource type such as AWS::EC2::Instance, - // and you can specify that the resource type is an AWS account by specifying - // AWS::::Account. + // For this request, you can specify an AWS resource type such as AWS::EC2::Instance. + // You can specify that the resource type is an AWS account by specifying AWS::::Account. ResourceTypes []*string `type:"list"` } @@ -5275,7 +9361,7 @@ type GetDiscoveredResourceCountsInput struct { _ struct{} `type:"structure"` // The maximum number of ResourceCount objects returned on each page. The default - // is 100. You cannot specify a limit greater than 100. If you specify 0, AWS + // is 100. You cannot specify a number greater than 100. If you specify 0, AWS // Config uses the default. Limit *int64 `locationName:"limit" type:"integer"` @@ -5284,7 +9370,7 @@ type GetDiscoveredResourceCountsInput struct { NextToken *string `locationName:"nextToken" type:"string"` // The comma-separated list that specifies the resource types that you want - // the AWS Config to return. For example, ("AWS::EC2::Instance", "AWS::IAM::User"). + // AWS Config to return (for example, "AWS::EC2::Instance", "AWS::IAM::User"). // // If a value for resourceTypes is not specified, AWS Config returns all resource // types that AWS Config is recording in the region for your account. @@ -5346,96 +9432,273 @@ type GetDiscoveredResourceCountsOutput struct { // a total of 60 resources. 
// // You make a call to the GetDiscoveredResourceCounts action and specify the - // resource type, "AWS::EC2::Instances" in the request. + // resource type, "AWS::EC2::Instances", in the request. // // AWS Config returns 25 for totalDiscoveredResources. TotalDiscoveredResources *int64 `locationName:"totalDiscoveredResources" type:"long"` } // String returns the string representation -func (s GetDiscoveredResourceCountsOutput) String() string { +func (s GetDiscoveredResourceCountsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDiscoveredResourceCountsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *GetDiscoveredResourceCountsOutput) SetNextToken(v string) *GetDiscoveredResourceCountsOutput { + s.NextToken = &v + return s +} + +// SetResourceCounts sets the ResourceCounts field's value. +func (s *GetDiscoveredResourceCountsOutput) SetResourceCounts(v []*ResourceCount) *GetDiscoveredResourceCountsOutput { + s.ResourceCounts = v + return s +} + +// SetTotalDiscoveredResources sets the TotalDiscoveredResources field's value. +func (s *GetDiscoveredResourceCountsOutput) SetTotalDiscoveredResources(v int64) *GetDiscoveredResourceCountsOutput { + s.TotalDiscoveredResources = &v + return s +} + +// The input for the GetResourceConfigHistory action. +type GetResourceConfigHistoryInput struct { + _ struct{} `type:"structure"` + + // The chronological order for configuration items listed. By default, the results + // are listed in reverse chronological order. + ChronologicalOrder *string `locationName:"chronologicalOrder" type:"string" enum:"ChronologicalOrder"` + + // The time stamp that indicates an earlier time. If not specified, the action + // returns paginated results that contain configuration items that start when + // the first configuration item was recorded. + EarlierTime *time.Time `locationName:"earlierTime" type:"timestamp"` + + // The time stamp that indicates a later time. If not specified, current time + // is taken. + LaterTime *time.Time `locationName:"laterTime" type:"timestamp"` + + // The maximum number of configuration items returned on each page. The default + // is 10. You cannot specify a number greater than 100. If you specify 0, AWS + // Config uses the default. + Limit *int64 `locationName:"limit" type:"integer"` + + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `locationName:"nextToken" type:"string"` + + // The ID of the resource (for example., sg-xxxxxx). + // + // ResourceId is a required field + ResourceId *string `locationName:"resourceId" min:"1" type:"string" required:"true"` + + // The resource type. + // + // ResourceType is a required field + ResourceType *string `locationName:"resourceType" type:"string" required:"true" enum:"ResourceType"` +} + +// String returns the string representation +func (s GetResourceConfigHistoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetResourceConfigHistoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetResourceConfigHistoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetResourceConfigHistoryInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChronologicalOrder sets the ChronologicalOrder field's value. +func (s *GetResourceConfigHistoryInput) SetChronologicalOrder(v string) *GetResourceConfigHistoryInput { + s.ChronologicalOrder = &v + return s +} + +// SetEarlierTime sets the EarlierTime field's value. +func (s *GetResourceConfigHistoryInput) SetEarlierTime(v time.Time) *GetResourceConfigHistoryInput { + s.EarlierTime = &v + return s +} + +// SetLaterTime sets the LaterTime field's value. +func (s *GetResourceConfigHistoryInput) SetLaterTime(v time.Time) *GetResourceConfigHistoryInput { + s.LaterTime = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *GetResourceConfigHistoryInput) SetLimit(v int64) *GetResourceConfigHistoryInput { + s.Limit = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetResourceConfigHistoryInput) SetNextToken(v string) *GetResourceConfigHistoryInput { + s.NextToken = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *GetResourceConfigHistoryInput) SetResourceId(v string) *GetResourceConfigHistoryInput { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *GetResourceConfigHistoryInput) SetResourceType(v string) *GetResourceConfigHistoryInput { + s.ResourceType = &v + return s +} + +// The output for the GetResourceConfigHistory action. +type GetResourceConfigHistoryOutput struct { + _ struct{} `type:"structure"` + + // A list that contains the configuration history of one or more resources. + ConfigurationItems []*ConfigurationItem `locationName:"configurationItems" type:"list"` + + // The string that you use in a subsequent request to get the next page of results + // in a paginated response. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s GetResourceConfigHistoryOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDiscoveredResourceCountsOutput) GoString() string { +func (s GetResourceConfigHistoryOutput) GoString() string { return s.String() } +// SetConfigurationItems sets the ConfigurationItems field's value. +func (s *GetResourceConfigHistoryOutput) SetConfigurationItems(v []*ConfigurationItem) *GetResourceConfigHistoryOutput { + s.ConfigurationItems = v + return s +} + // SetNextToken sets the NextToken field's value. -func (s *GetDiscoveredResourceCountsOutput) SetNextToken(v string) *GetDiscoveredResourceCountsOutput { +func (s *GetResourceConfigHistoryOutput) SetNextToken(v string) *GetResourceConfigHistoryOutput { s.NextToken = &v return s } -// SetResourceCounts sets the ResourceCounts field's value. -func (s *GetDiscoveredResourceCountsOutput) SetResourceCounts(v []*ResourceCount) *GetDiscoveredResourceCountsOutput { - s.ResourceCounts = v +// The count of resources that are grouped by the group name. 
+type GroupedResourceCount struct { + _ struct{} `type:"structure"` + + // The name of the group that can be region, account ID, or resource type. For + // example, region1, region2 if the region was chosen as GroupByKey. + // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The number of resources in the group. + // + // ResourceCount is a required field + ResourceCount *int64 `type:"long" required:"true"` +} + +// String returns the string representation +func (s GroupedResourceCount) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GroupedResourceCount) GoString() string { + return s.String() +} + +// SetGroupName sets the GroupName field's value. +func (s *GroupedResourceCount) SetGroupName(v string) *GroupedResourceCount { + s.GroupName = &v return s } -// SetTotalDiscoveredResources sets the TotalDiscoveredResources field's value. -func (s *GetDiscoveredResourceCountsOutput) SetTotalDiscoveredResources(v int64) *GetDiscoveredResourceCountsOutput { - s.TotalDiscoveredResources = &v +// SetResourceCount sets the ResourceCount field's value. +func (s *GroupedResourceCount) SetResourceCount(v int64) *GroupedResourceCount { + s.ResourceCount = &v return s } -// The input for the GetResourceConfigHistory action. -type GetResourceConfigHistoryInput struct { +type ListAggregateDiscoveredResourcesInput struct { _ struct{} `type:"structure"` - // The chronological order for configuration items listed. By default the results - // are listed in reverse chronological order. - ChronologicalOrder *string `locationName:"chronologicalOrder" type:"string" enum:"ChronologicalOrder"` - - // The time stamp that indicates an earlier time. If not specified, the action - // returns paginated results that contain configuration items that start from - // when the first configuration item was recorded. - EarlierTime *time.Time `locationName:"earlierTime" type:"timestamp" timestampFormat:"unix"` + // The name of the configuration aggregator. + // + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` - // The time stamp that indicates a later time. If not specified, current time - // is taken. - LaterTime *time.Time `locationName:"laterTime" type:"timestamp" timestampFormat:"unix"` + // Filters the results based on the ResourceFilters object. + Filters *ResourceFilters `type:"structure"` - // The maximum number of configuration items returned on each page. The default - // is 10. You cannot specify a limit greater than 100. If you specify 0, AWS + // The maximum number of resource identifiers returned on each page. The default + // is 100. You cannot specify a number greater than 100. If you specify 0, AWS // Config uses the default. - Limit *int64 `locationName:"limit" type:"integer"` + Limit *int64 `type:"integer"` // The nextToken string returned on a previous page that you use to get the // next page of results in a paginated response. - NextToken *string `locationName:"nextToken" type:"string"` - - // The ID of the resource (for example., sg-xxxxxx). - // - // ResourceId is a required field - ResourceId *string `locationName:"resourceId" type:"string" required:"true"` + NextToken *string `type:"string"` - // The resource type. + // The type of resources that you want AWS Config to list in the response. 
// // ResourceType is a required field - ResourceType *string `locationName:"resourceType" type:"string" required:"true" enum:"ResourceType"` + ResourceType *string `type:"string" required:"true" enum:"ResourceType"` } // String returns the string representation -func (s GetResourceConfigHistoryInput) String() string { +func (s ListAggregateDiscoveredResourcesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetResourceConfigHistoryInput) GoString() string { +func (s ListAggregateDiscoveredResourcesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetResourceConfigHistoryInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetResourceConfigHistoryInput"} - if s.ResourceId == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceId")) +func (s *ListAggregateDiscoveredResourcesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAggregateDiscoveredResourcesInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) + } + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) } if s.ResourceType == nil { invalidParams.Add(request.NewErrParamRequired("ResourceType")) } + if s.Filters != nil { + if err := s.Filters.Validate(); err != nil { + invalidParams.AddNested("Filters", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -5443,79 +9706,66 @@ func (s *GetResourceConfigHistoryInput) Validate() error { return nil } -// SetChronologicalOrder sets the ChronologicalOrder field's value. -func (s *GetResourceConfigHistoryInput) SetChronologicalOrder(v string) *GetResourceConfigHistoryInput { - s.ChronologicalOrder = &v - return s -} - -// SetEarlierTime sets the EarlierTime field's value. -func (s *GetResourceConfigHistoryInput) SetEarlierTime(v time.Time) *GetResourceConfigHistoryInput { - s.EarlierTime = &v +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *ListAggregateDiscoveredResourcesInput) SetConfigurationAggregatorName(v string) *ListAggregateDiscoveredResourcesInput { + s.ConfigurationAggregatorName = &v return s } -// SetLaterTime sets the LaterTime field's value. -func (s *GetResourceConfigHistoryInput) SetLaterTime(v time.Time) *GetResourceConfigHistoryInput { - s.LaterTime = &v +// SetFilters sets the Filters field's value. +func (s *ListAggregateDiscoveredResourcesInput) SetFilters(v *ResourceFilters) *ListAggregateDiscoveredResourcesInput { + s.Filters = v return s } // SetLimit sets the Limit field's value. -func (s *GetResourceConfigHistoryInput) SetLimit(v int64) *GetResourceConfigHistoryInput { +func (s *ListAggregateDiscoveredResourcesInput) SetLimit(v int64) *ListAggregateDiscoveredResourcesInput { s.Limit = &v return s } // SetNextToken sets the NextToken field's value. -func (s *GetResourceConfigHistoryInput) SetNextToken(v string) *GetResourceConfigHistoryInput { +func (s *ListAggregateDiscoveredResourcesInput) SetNextToken(v string) *ListAggregateDiscoveredResourcesInput { s.NextToken = &v return s } -// SetResourceId sets the ResourceId field's value. 
-func (s *GetResourceConfigHistoryInput) SetResourceId(v string) *GetResourceConfigHistoryInput { - s.ResourceId = &v - return s -} - // SetResourceType sets the ResourceType field's value. -func (s *GetResourceConfigHistoryInput) SetResourceType(v string) *GetResourceConfigHistoryInput { +func (s *ListAggregateDiscoveredResourcesInput) SetResourceType(v string) *ListAggregateDiscoveredResourcesInput { s.ResourceType = &v return s } -// The output for the GetResourceConfigHistory action. -type GetResourceConfigHistoryOutput struct { +type ListAggregateDiscoveredResourcesOutput struct { _ struct{} `type:"structure"` - // A list that contains the configuration history of one or more resources. - ConfigurationItems []*ConfigurationItem `locationName:"configurationItems" type:"list"` + // The nextToken string returned on a previous page that you use to get the + // next page of results in a paginated response. + NextToken *string `type:"string"` - // The string that you use in a subsequent request to get the next page of results - // in a paginated response. - NextToken *string `locationName:"nextToken" type:"string"` + // Returns a list of ResourceIdentifiers objects. + ResourceIdentifiers []*AggregateResourceIdentifier `type:"list"` } // String returns the string representation -func (s GetResourceConfigHistoryOutput) String() string { +func (s ListAggregateDiscoveredResourcesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetResourceConfigHistoryOutput) GoString() string { +func (s ListAggregateDiscoveredResourcesOutput) GoString() string { return s.String() } -// SetConfigurationItems sets the ConfigurationItems field's value. -func (s *GetResourceConfigHistoryOutput) SetConfigurationItems(v []*ConfigurationItem) *GetResourceConfigHistoryOutput { - s.ConfigurationItems = v +// SetNextToken sets the NextToken field's value. +func (s *ListAggregateDiscoveredResourcesOutput) SetNextToken(v string) *ListAggregateDiscoveredResourcesOutput { + s.NextToken = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *GetResourceConfigHistoryOutput) SetNextToken(v string) *GetResourceConfigHistoryOutput { - s.NextToken = &v +// SetResourceIdentifiers sets the ResourceIdentifiers field's value. +func (s *ListAggregateDiscoveredResourcesOutput) SetResourceIdentifiers(v []*AggregateResourceIdentifier) *ListAggregateDiscoveredResourcesOutput { + s.ResourceIdentifiers = v return s } @@ -5527,7 +9777,7 @@ type ListDiscoveredResourcesInput struct { IncludeDeletedResources *bool `locationName:"includeDeletedResources" type:"boolean"` // The maximum number of resource identifiers returned on each page. The default - // is 100. You cannot specify a limit greater than 100. If you specify 0, AWS + // is 100. You cannot specify a number greater than 100. If you specify 0, AWS // Config uses the default. Limit *int64 `locationName:"limit" type:"integer"` @@ -5644,6 +9894,180 @@ func (s *ListDiscoveredResourcesOutput) SetResourceIdentifiers(v []*ResourceIden return s } +// This object contains regions to setup the aggregator and an IAM role to retrieve +// organization details. +type OrganizationAggregationSource struct { + _ struct{} `type:"structure"` + + // If true, aggregate existing AWS Config regions and future regions. + AllAwsRegions *bool `type:"boolean"` + + // The source regions being aggregated. 
+ AwsRegions []*string `min:"1" type:"list"` + + // ARN of the IAM role used to retreive AWS Organization details associated + // with the aggregator account. + // + // RoleArn is a required field + RoleArn *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s OrganizationAggregationSource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OrganizationAggregationSource) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *OrganizationAggregationSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OrganizationAggregationSource"} + if s.AwsRegions != nil && len(s.AwsRegions) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AwsRegions", 1)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllAwsRegions sets the AllAwsRegions field's value. +func (s *OrganizationAggregationSource) SetAllAwsRegions(v bool) *OrganizationAggregationSource { + s.AllAwsRegions = &v + return s +} + +// SetAwsRegions sets the AwsRegions field's value. +func (s *OrganizationAggregationSource) SetAwsRegions(v []*string) *OrganizationAggregationSource { + s.AwsRegions = v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *OrganizationAggregationSource) SetRoleArn(v string) *OrganizationAggregationSource { + s.RoleArn = &v + return s +} + +// An object that represents the account ID and region of an aggregator account +// that is requesting authorization but is not yet authorized. +type PendingAggregationRequest struct { + _ struct{} `type:"structure"` + + // The 12-digit account ID of the account requesting to aggregate data. + RequesterAccountId *string `type:"string"` + + // The region requesting to aggregate data. + RequesterAwsRegion *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PendingAggregationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PendingAggregationRequest) GoString() string { + return s.String() +} + +// SetRequesterAccountId sets the RequesterAccountId field's value. +func (s *PendingAggregationRequest) SetRequesterAccountId(v string) *PendingAggregationRequest { + s.RequesterAccountId = &v + return s +} + +// SetRequesterAwsRegion sets the RequesterAwsRegion field's value. +func (s *PendingAggregationRequest) SetRequesterAwsRegion(v string) *PendingAggregationRequest { + s.RequesterAwsRegion = &v + return s +} + +type PutAggregationAuthorizationInput struct { + _ struct{} `type:"structure"` + + // The 12-digit account ID of the account authorized to aggregate data. + // + // AuthorizedAccountId is a required field + AuthorizedAccountId *string `type:"string" required:"true"` + + // The region authorized to collect aggregated data. + // + // AuthorizedAwsRegion is a required field + AuthorizedAwsRegion *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutAggregationAuthorizationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutAggregationAuthorizationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PutAggregationAuthorizationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutAggregationAuthorizationInput"} + if s.AuthorizedAccountId == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizedAccountId")) + } + if s.AuthorizedAwsRegion == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizedAwsRegion")) + } + if s.AuthorizedAwsRegion != nil && len(*s.AuthorizedAwsRegion) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthorizedAwsRegion", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthorizedAccountId sets the AuthorizedAccountId field's value. +func (s *PutAggregationAuthorizationInput) SetAuthorizedAccountId(v string) *PutAggregationAuthorizationInput { + s.AuthorizedAccountId = &v + return s +} + +// SetAuthorizedAwsRegion sets the AuthorizedAwsRegion field's value. +func (s *PutAggregationAuthorizationInput) SetAuthorizedAwsRegion(v string) *PutAggregationAuthorizationInput { + s.AuthorizedAwsRegion = &v + return s +} + +type PutAggregationAuthorizationOutput struct { + _ struct{} `type:"structure"` + + // Returns an AggregationAuthorization object. + AggregationAuthorization *AggregationAuthorization `type:"structure"` +} + +// String returns the string representation +func (s PutAggregationAuthorizationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutAggregationAuthorizationOutput) GoString() string { + return s.String() +} + +// SetAggregationAuthorization sets the AggregationAuthorization field's value. +func (s *PutAggregationAuthorizationOutput) SetAggregationAuthorization(v *AggregationAuthorization) *PutAggregationAuthorizationOutput { + s.AggregationAuthorization = v + return s +} + type PutConfigRuleInput struct { _ struct{} `type:"structure"` @@ -5669,9 +10093,85 @@ func (s *PutConfigRuleInput) Validate() error { if s.ConfigRule == nil { invalidParams.Add(request.NewErrParamRequired("ConfigRule")) } - if s.ConfigRule != nil { - if err := s.ConfigRule.Validate(); err != nil { - invalidParams.AddNested("ConfigRule", err.(request.ErrInvalidParams)) + if s.ConfigRule != nil { + if err := s.ConfigRule.Validate(); err != nil { + invalidParams.AddNested("ConfigRule", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConfigRule sets the ConfigRule field's value. +func (s *PutConfigRuleInput) SetConfigRule(v *ConfigRule) *PutConfigRuleInput { + s.ConfigRule = v + return s +} + +type PutConfigRuleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutConfigRuleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutConfigRuleOutput) GoString() string { + return s.String() +} + +type PutConfigurationAggregatorInput struct { + _ struct{} `type:"structure"` + + // A list of AccountAggregationSource object. + AccountAggregationSources []*AccountAggregationSource `type:"list"` + + // The name of the configuration aggregator. + // + // ConfigurationAggregatorName is a required field + ConfigurationAggregatorName *string `min:"1" type:"string" required:"true"` + + // An OrganizationAggregationSource object. 
+ OrganizationAggregationSource *OrganizationAggregationSource `type:"structure"` +} + +// String returns the string representation +func (s PutConfigurationAggregatorInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutConfigurationAggregatorInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutConfigurationAggregatorInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutConfigurationAggregatorInput"} + if s.ConfigurationAggregatorName == nil { + invalidParams.Add(request.NewErrParamRequired("ConfigurationAggregatorName")) + } + if s.ConfigurationAggregatorName != nil && len(*s.ConfigurationAggregatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConfigurationAggregatorName", 1)) + } + if s.AccountAggregationSources != nil { + for i, v := range s.AccountAggregationSources { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AccountAggregationSources", i), err.(request.ErrInvalidParams)) + } + } + } + if s.OrganizationAggregationSource != nil { + if err := s.OrganizationAggregationSource.Validate(); err != nil { + invalidParams.AddNested("OrganizationAggregationSource", err.(request.ErrInvalidParams)) } } @@ -5681,26 +10181,47 @@ func (s *PutConfigRuleInput) Validate() error { return nil } -// SetConfigRule sets the ConfigRule field's value. -func (s *PutConfigRuleInput) SetConfigRule(v *ConfigRule) *PutConfigRuleInput { - s.ConfigRule = v +// SetAccountAggregationSources sets the AccountAggregationSources field's value. +func (s *PutConfigurationAggregatorInput) SetAccountAggregationSources(v []*AccountAggregationSource) *PutConfigurationAggregatorInput { + s.AccountAggregationSources = v return s } -type PutConfigRuleOutput struct { +// SetConfigurationAggregatorName sets the ConfigurationAggregatorName field's value. +func (s *PutConfigurationAggregatorInput) SetConfigurationAggregatorName(v string) *PutConfigurationAggregatorInput { + s.ConfigurationAggregatorName = &v + return s +} + +// SetOrganizationAggregationSource sets the OrganizationAggregationSource field's value. +func (s *PutConfigurationAggregatorInput) SetOrganizationAggregationSource(v *OrganizationAggregationSource) *PutConfigurationAggregatorInput { + s.OrganizationAggregationSource = v + return s +} + +type PutConfigurationAggregatorOutput struct { _ struct{} `type:"structure"` + + // Returns a ConfigurationAggregator object. + ConfigurationAggregator *ConfigurationAggregator `type:"structure"` } // String returns the string representation -func (s PutConfigRuleOutput) String() string { +func (s PutConfigurationAggregatorOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PutConfigRuleOutput) GoString() string { +func (s PutConfigurationAggregatorOutput) GoString() string { return s.String() } +// SetConfigurationAggregator sets the ConfigurationAggregator field's value. +func (s *PutConfigurationAggregatorOutput) SetConfigurationAggregator(v *ConfigurationAggregator) *PutConfigurationAggregatorOutput { + s.ConfigurationAggregator = v + return s +} + // The input for the PutConfigurationRecorder action. 
type PutConfigurationRecorderInput struct { _ struct{} `type:"structure"` @@ -5765,7 +10286,7 @@ type PutDeliveryChannelInput struct { _ struct{} `type:"structure"` // The configuration delivery channel object that delivers the configuration - // information to an Amazon S3 bucket, and to an Amazon SNS topic. + // information to an Amazon S3 bucket and to an Amazon SNS topic. // // DeliveryChannel is a required field DeliveryChannel *DeliveryChannel `type:"structure" required:"true"` @@ -5828,7 +10349,7 @@ type PutEvaluationsInput struct { Evaluations []*Evaluation `type:"list"` // An encrypted token that associates an evaluation with an AWS Config rule. - // Identifies the rule and the event that triggered the evaluation + // Identifies the rule and the event that triggered the evaluation. // // ResultToken is a required field ResultToken *string `type:"string" required:"true"` @@ -5917,6 +10438,72 @@ func (s *PutEvaluationsOutput) SetFailedEvaluations(v []*Evaluation) *PutEvaluat return s } +type PutRetentionConfigurationInput struct { + _ struct{} `type:"structure"` + + // Number of days AWS Config stores your historical information. + // + // Currently, only applicable to the configuration item history. + // + // RetentionPeriodInDays is a required field + RetentionPeriodInDays *int64 `min:"30" type:"integer" required:"true"` +} + +// String returns the string representation +func (s PutRetentionConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutRetentionConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutRetentionConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutRetentionConfigurationInput"} + if s.RetentionPeriodInDays == nil { + invalidParams.Add(request.NewErrParamRequired("RetentionPeriodInDays")) + } + if s.RetentionPeriodInDays != nil && *s.RetentionPeriodInDays < 30 { + invalidParams.Add(request.NewErrParamMinValue("RetentionPeriodInDays", 30)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRetentionPeriodInDays sets the RetentionPeriodInDays field's value. +func (s *PutRetentionConfigurationInput) SetRetentionPeriodInDays(v int64) *PutRetentionConfigurationInput { + s.RetentionPeriodInDays = &v + return s +} + +type PutRetentionConfigurationOutput struct { + _ struct{} `type:"structure"` + + // Returns a retention configuration object. + RetentionConfiguration *RetentionConfiguration `type:"structure"` +} + +// String returns the string representation +func (s PutRetentionConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutRetentionConfigurationOutput) GoString() string { + return s.String() +} + +// SetRetentionConfiguration sets the RetentionConfiguration field's value. +func (s *PutRetentionConfigurationOutput) SetRetentionConfiguration(v *RetentionConfiguration) *PutRetentionConfigurationOutput { + s.RetentionConfiguration = v + return s +} + // Specifies the types of AWS resource for which AWS Config records configuration // changes. // @@ -5944,7 +10531,7 @@ func (s *PutEvaluationsOutput) SetFailedEvaluations(v []*Evaluation) *PutEvaluat // If you don't want AWS Config to record all resources, you can specify which // types of resources it will record with the resourceTypes parameter. 
// -// For a list of supported resource types, see Supported resource types (http://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html#supported-resources). +// For a list of supported resource types, see Supported Resource Types (http://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html#supported-resources). // // For more information, see Selecting Which Resources AWS Config Records (http://docs.aws.amazon.com/config/latest/developerguide/select-resources.html). type RecordingGroup struct { @@ -5954,8 +10541,7 @@ type RecordingGroup struct { // type of regional resource. // // If you set this option to true, when AWS Config adds support for a new type - // of regional resource, it automatically starts recording resources of that - // type. + // of regional resource, it starts recording resources of that type automatically. // // If you set this option to true, you cannot enumerate a list of resourceTypes. AllSupported *bool `locationName:"allSupported" type:"boolean"` @@ -5967,7 +10553,7 @@ type RecordingGroup struct { // to true. // // If you set this option to true, when AWS Config adds support for a new type - // of global resource, it automatically starts recording resources of that type. + // of global resource, it starts recording resources of that type automatically. // // The configuration details for any global resource are the same in all regions. // To prevent duplicate configuration items, you should consider customizing @@ -6026,7 +10612,7 @@ type Relationship struct { RelationshipName *string `locationName:"relationshipName" type:"string"` // The ID of the related resource (for example, sg-xxxxxx). - ResourceId *string `locationName:"resourceId" type:"string"` + ResourceId *string `locationName:"resourceId" min:"1" type:"string"` // The custom name of the related resource, if available. ResourceName *string `locationName:"resourceName" type:"string"` @@ -6076,7 +10662,7 @@ type ResourceCount struct { // The number of resources. Count *int64 `locationName:"count" type:"long"` - // The resource type, for example "AWS::EC2::Instance". + // The resource type (for example, "AWS::EC2::Instance"). ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` } @@ -6102,16 +10688,139 @@ func (s *ResourceCount) SetResourceType(v string) *ResourceCount { return s } +// Filters the resource count based on account ID, region, and resource type. +type ResourceCountFilters struct { + _ struct{} `type:"structure"` + + // The 12-digit ID of the account. + AccountId *string `type:"string"` + + // The region where the account is located. + Region *string `min:"1" type:"string"` + + // The type of the AWS resource. + ResourceType *string `type:"string" enum:"ResourceType"` +} + +// String returns the string representation +func (s ResourceCountFilters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceCountFilters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResourceCountFilters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceCountFilters"} + if s.Region != nil && len(*s.Region) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Region", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountId sets the AccountId field's value. 
+func (s *ResourceCountFilters) SetAccountId(v string) *ResourceCountFilters { + s.AccountId = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *ResourceCountFilters) SetRegion(v string) *ResourceCountFilters { + s.Region = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ResourceCountFilters) SetResourceType(v string) *ResourceCountFilters { + s.ResourceType = &v + return s +} + +// Filters the results by resource account ID, region, resource ID, and resource +// name. +type ResourceFilters struct { + _ struct{} `type:"structure"` + + // The 12-digit source account ID. + AccountId *string `type:"string"` + + // The source region. + Region *string `min:"1" type:"string"` + + // The ID of the resource. + ResourceId *string `min:"1" type:"string"` + + // The name of the resource. + ResourceName *string `type:"string"` +} + +// String returns the string representation +func (s ResourceFilters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceFilters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResourceFilters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceFilters"} + if s.Region != nil && len(*s.Region) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Region", 1)) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccountId sets the AccountId field's value. +func (s *ResourceFilters) SetAccountId(v string) *ResourceFilters { + s.AccountId = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *ResourceFilters) SetRegion(v string) *ResourceFilters { + s.Region = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *ResourceFilters) SetResourceId(v string) *ResourceFilters { + s.ResourceId = &v + return s +} + +// SetResourceName sets the ResourceName field's value. +func (s *ResourceFilters) SetResourceName(v string) *ResourceFilters { + s.ResourceName = &v + return s +} + // The details that identify a resource that is discovered by AWS Config, including // the resource type, ID, and (if available) the custom resource name. type ResourceIdentifier struct { _ struct{} `type:"structure"` // The time that the resource was deleted. - ResourceDeletionTime *time.Time `locationName:"resourceDeletionTime" type:"timestamp" timestampFormat:"unix"` + ResourceDeletionTime *time.Time `locationName:"resourceDeletionTime" type:"timestamp"` - // The ID of the resource (for example., sg-xxxxxx). - ResourceId *string `locationName:"resourceId" type:"string"` + // The ID of the resource (for example, sg-xxxxxx). + ResourceId *string `locationName:"resourceId" min:"1" type:"string"` // The custom name of the resource (if available). ResourceName *string `locationName:"resourceName" type:"string"` @@ -6154,6 +10863,104 @@ func (s *ResourceIdentifier) SetResourceType(v string) *ResourceIdentifier { return s } +// The details that identify a resource within AWS Config, including the resource +// type and resource ID. +type ResourceKey struct { + _ struct{} `type:"structure"` + + // The ID of the resource (for example., sg-xxxxxx). 
+ // + // ResourceId is a required field + ResourceId *string `locationName:"resourceId" min:"1" type:"string" required:"true"` + + // The resource type. + // + // ResourceType is a required field + ResourceType *string `locationName:"resourceType" type:"string" required:"true" enum:"ResourceType"` +} + +// String returns the string representation +func (s ResourceKey) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceKey) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResourceKey) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceKey"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. +func (s *ResourceKey) SetResourceId(v string) *ResourceKey { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ResourceKey) SetResourceType(v string) *ResourceKey { + s.ResourceType = &v + return s +} + +// An object with the name of the retention configuration and the retention +// period in days. The object stores the configuration for data retention in +// AWS Config. +type RetentionConfiguration struct { + _ struct{} `type:"structure"` + + // The name of the retention configuration object. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // Number of days AWS Config stores your historical information. + // + // Currently, only applicable to the configuration item history. + // + // RetentionPeriodInDays is a required field + RetentionPeriodInDays *int64 `min:"30" type:"integer" required:"true"` +} + +// String returns the string representation +func (s RetentionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RetentionConfiguration) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *RetentionConfiguration) SetName(v string) *RetentionConfiguration { + s.Name = &v + return s +} + +// SetRetentionPeriodInDays sets the RetentionPeriodInDays field's value. +func (s *RetentionConfiguration) SetRetentionPeriodInDays(v int64) *RetentionConfiguration { + s.RetentionPeriodInDays = &v + return s +} + // Defines which resources trigger an evaluation for an AWS Config rule. The // scope can include one or more resource types, a combination of a tag key // and value, or a combination of one resource type and one resource ID. Specify @@ -6163,7 +10970,7 @@ func (s *ResourceIdentifier) SetResourceType(v string) *ResourceIdentifier { type Scope struct { _ struct{} `type:"structure"` - // The IDs of the only AWS resource that you want to trigger an evaluation for + // The ID of the only AWS resource that you want to trigger an evaluation for // the rule. If you specify a resource ID, you must specify one resource type // for ComplianceResourceTypes. ComplianceResourceId *string `min:"1" type:"string"` @@ -6320,13 +11127,18 @@ type SourceDetail struct { // to evaluate your AWS resources. 
EventSource *string `type:"string" enum:"EventSource"` - // The frequency that you want AWS Config to run evaluations for a custom rule - // with a periodic trigger. If you specify a value for MaximumExecutionFrequency, + // The frequency at which you want AWS Config to run evaluations for a custom + // rule with a periodic trigger. If you specify a value for MaximumExecutionFrequency, // then MessageType must use the ScheduledNotification value. // // By default, rules with a periodic trigger are evaluated every 24 hours. To // change the frequency, specify a valid value for the MaximumExecutionFrequency // parameter. + // + // Based on the valid value you choose, AWS Config runs evaluations once for + // each valid value. For example, if you choose Three_Hours, AWS Config runs + // evaluations once every three hours. In this case, Three_Hours is the frequency + // of this rule. MaximumExecutionFrequency *string `type:"string" enum:"MaximumExecutionFrequency"` // The type of notification that triggers AWS Config to run an evaluation for @@ -6347,7 +11159,8 @@ type SourceDetail struct { // when AWS Config delivers a configuration snapshot. // // If you want your custom rule to be triggered by configuration changes, specify - // both ConfigurationItemChangeNotification and OversizedConfigurationItemChangeNotification. + // two SourceDetail objects, one for ConfigurationItemChangeNotification and + // one for OversizedConfigurationItemChangeNotification. MessageType *string `type:"string" enum:"MessageType"` } @@ -6382,7 +11195,7 @@ func (s *SourceDetail) SetMessageType(v string) *SourceDetail { type StartConfigRulesEvaluationInput struct { _ struct{} `type:"structure"` - // The list of names of Config rules that you want to run evaluations for. + // The list of names of AWS Config rules that you want to run evaluations for. ConfigRuleNames []*string `min:"1" type:"list"` } @@ -6415,7 +11228,7 @@ func (s *StartConfigRulesEvaluationInput) SetConfigRuleNames(v []*string) *Start return s } -// The output when you start the evaluation for the specified Config rule. +// The output when you start the evaluation for the specified AWS Config rule. 
type StartConfigRulesEvaluationOutput struct { _ struct{} `type:"structure"` } @@ -6544,6 +11357,25 @@ func (s StopConfigurationRecorderOutput) GoString() string { return s.String() } +const ( + // AggregatedSourceStatusTypeFailed is a AggregatedSourceStatusType enum value + AggregatedSourceStatusTypeFailed = "FAILED" + + // AggregatedSourceStatusTypeSucceeded is a AggregatedSourceStatusType enum value + AggregatedSourceStatusTypeSucceeded = "SUCCEEDED" + + // AggregatedSourceStatusTypeOutdated is a AggregatedSourceStatusType enum value + AggregatedSourceStatusTypeOutdated = "OUTDATED" +) + +const ( + // AggregatedSourceTypeAccount is a AggregatedSourceType enum value + AggregatedSourceTypeAccount = "ACCOUNT" + + // AggregatedSourceTypeOrganization is a AggregatedSourceType enum value + AggregatedSourceTypeOrganization = "ORGANIZATION" +) + const ( // ChronologicalOrderReverse is a ChronologicalOrder enum value ChronologicalOrderReverse = "Reverse" @@ -6566,6 +11398,14 @@ const ( ComplianceTypeInsufficientData = "INSUFFICIENT_DATA" ) +const ( + // ConfigRuleComplianceSummaryGroupKeyAccountId is a ConfigRuleComplianceSummaryGroupKey enum value + ConfigRuleComplianceSummaryGroupKeyAccountId = "ACCOUNT_ID" + + // ConfigRuleComplianceSummaryGroupKeyAwsRegion is a ConfigRuleComplianceSummaryGroupKey enum value + ConfigRuleComplianceSummaryGroupKeyAwsRegion = "AWS_REGION" +) + const ( // ConfigRuleStateActive is a ConfigRuleState enum value ConfigRuleStateActive = "ACTIVE" @@ -6663,6 +11503,17 @@ const ( RecorderStatusFailure = "Failure" ) +const ( + // ResourceCountGroupKeyResourceType is a ResourceCountGroupKey enum value + ResourceCountGroupKeyResourceType = "RESOURCE_TYPE" + + // ResourceCountGroupKeyAccountId is a ResourceCountGroupKey enum value + ResourceCountGroupKeyAccountId = "ACCOUNT_ID" + + // ResourceCountGroupKeyAwsRegion is a ResourceCountGroupKey enum value + ResourceCountGroupKeyAwsRegion = "AWS_REGION" +) + const ( // ResourceTypeAwsEc2CustomerGateway is a ResourceType enum value ResourceTypeAwsEc2CustomerGateway = "AWS::EC2::CustomerGateway" @@ -6813,4 +11664,46 @@ const ( // ResourceTypeAwsCloudFrontStreamingDistribution is a ResourceType enum value ResourceTypeAwsCloudFrontStreamingDistribution = "AWS::CloudFront::StreamingDistribution" + + // ResourceTypeAwsWafRuleGroup is a ResourceType enum value + ResourceTypeAwsWafRuleGroup = "AWS::WAF::RuleGroup" + + // ResourceTypeAwsWafregionalRuleGroup is a ResourceType enum value + ResourceTypeAwsWafregionalRuleGroup = "AWS::WAFRegional::RuleGroup" + + // ResourceTypeAwsLambdaFunction is a ResourceType enum value + ResourceTypeAwsLambdaFunction = "AWS::Lambda::Function" + + // ResourceTypeAwsElasticBeanstalkApplication is a ResourceType enum value + ResourceTypeAwsElasticBeanstalkApplication = "AWS::ElasticBeanstalk::Application" + + // ResourceTypeAwsElasticBeanstalkApplicationVersion is a ResourceType enum value + ResourceTypeAwsElasticBeanstalkApplicationVersion = "AWS::ElasticBeanstalk::ApplicationVersion" + + // ResourceTypeAwsElasticBeanstalkEnvironment is a ResourceType enum value + ResourceTypeAwsElasticBeanstalkEnvironment = "AWS::ElasticBeanstalk::Environment" + + // ResourceTypeAwsElasticLoadBalancingLoadBalancer is a ResourceType enum value + ResourceTypeAwsElasticLoadBalancingLoadBalancer = "AWS::ElasticLoadBalancing::LoadBalancer" + + // ResourceTypeAwsXrayEncryptionConfig is a ResourceType enum value + ResourceTypeAwsXrayEncryptionConfig = "AWS::XRay::EncryptionConfig" + + // ResourceTypeAwsSsmAssociationCompliance 
is a ResourceType enum value + ResourceTypeAwsSsmAssociationCompliance = "AWS::SSM::AssociationCompliance" + + // ResourceTypeAwsSsmPatchCompliance is a ResourceType enum value + ResourceTypeAwsSsmPatchCompliance = "AWS::SSM::PatchCompliance" + + // ResourceTypeAwsShieldProtection is a ResourceType enum value + ResourceTypeAwsShieldProtection = "AWS::Shield::Protection" + + // ResourceTypeAwsShieldRegionalProtection is a ResourceType enum value + ResourceTypeAwsShieldRegionalProtection = "AWS::ShieldRegional::Protection" + + // ResourceTypeAwsConfigResourceCompliance is a ResourceType enum value + ResourceTypeAwsConfigResourceCompliance = "AWS::Config::ResourceCompliance" + + // ResourceTypeAwsCodePipelinePipeline is a ResourceType enum value + ResourceTypeAwsCodePipelinePipeline = "AWS::CodePipeline::Pipeline" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/configservice/doc.go b/vendor/github.com/aws/aws-sdk-go/service/configservice/doc.go index 39b2880e12d..853be226732 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/configservice/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/configservice/doc.go @@ -8,24 +8,20 @@ // get the current and historical configurations of each AWS resource and also // to get information about the relationship between the resources. An AWS resource // can be an Amazon Compute Cloud (Amazon EC2) instance, an Elastic Block Store -// (EBS) volume, an Elastic network Interface (ENI), or a security group. For +// (EBS) volume, an elastic network Interface (ENI), or a security group. For // a complete list of resources currently supported by AWS Config, see Supported // AWS Resources (http://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html#supported-resources). // // You can access and manage AWS Config through the AWS Management Console, // the AWS Command Line Interface (AWS CLI), the AWS Config API, or the AWS -// SDKs for AWS Config -// -// This reference guide contains documentation for the AWS Config API and the -// AWS CLI commands that you can use to manage AWS Config. -// +// SDKs for AWS Config. This reference guide contains documentation for the +// AWS Config API and the AWS CLI commands that you can use to manage AWS Config. // The AWS Config API uses the Signature Version 4 protocol for signing requests. // For more information about how to sign a request with this protocol, see // Signature Version 4 Signing Process (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). -// // For detailed information about AWS Config features and their associated actions // or commands, as well as how to work with AWS Management Console, see What -// Is AWS Config? (http://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) +// Is AWS Config (http://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) // in the AWS Config Developer Guide. // // See https://docs.aws.amazon.com/goto/WebAPI/config-2014-11-12 for more information on this service. diff --git a/vendor/github.com/aws/aws-sdk-go/service/configservice/errors.go b/vendor/github.com/aws/aws-sdk-go/service/configservice/errors.go index cc1554087ec..4ffa6dbdc2b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/configservice/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/configservice/errors.go @@ -43,7 +43,7 @@ const ( // ErrCodeInvalidNextTokenException for service response error code // "InvalidNextTokenException". 
// - // The specified next token is invalid. Specify the NextToken string that was + // The specified next token is invalid. Specify the nextToken string that was // returned in the previous response to get the next page of results. ErrCodeInvalidNextTokenException = "InvalidNextTokenException" @@ -58,7 +58,7 @@ const ( // "InvalidRecordingGroupException". // // AWS Config throws an exception if the recording group does not contain a - // valid list of resource types. Invalid values could also be incorrectly formatted. + // valid list of resource types. Invalid values might also be incorrectly formatted. ErrCodeInvalidRecordingGroupException = "InvalidRecordingGroupException" // ErrCodeInvalidResultTokenException for service response error code @@ -102,8 +102,12 @@ const ( // ErrCodeLimitExceededException for service response error code // "LimitExceededException". // - // This exception is thrown if an evaluation is in progress or if you call the - // StartConfigRulesEvaluation API more than once per minute. + // For StartConfigRulesEvaluation API, this exception is thrown if an evaluation + // is in progress or if you call the StartConfigRulesEvaluation API more than + // once per minute. + // + // For PutConfigurationAggregator API, this exception is thrown if the number + // of accounts and aggregators exceeds the limit. ErrCodeLimitExceededException = "LimitExceededException" // ErrCodeMaxNumberOfConfigRulesExceededException for service response error code @@ -111,21 +115,28 @@ const ( // // Failed to add the AWS Config rule because the account already contains the // maximum number of 50 rules. Consider deleting any deactivated rules before - // adding new rules. + // you add new rules. ErrCodeMaxNumberOfConfigRulesExceededException = "MaxNumberOfConfigRulesExceededException" // ErrCodeMaxNumberOfConfigurationRecordersExceededException for service response error code // "MaxNumberOfConfigurationRecordersExceededException". // - // You have reached the limit on the number of recorders you can create. + // You have reached the limit of the number of recorders you can create. ErrCodeMaxNumberOfConfigurationRecordersExceededException = "MaxNumberOfConfigurationRecordersExceededException" // ErrCodeMaxNumberOfDeliveryChannelsExceededException for service response error code // "MaxNumberOfDeliveryChannelsExceededException". // - // You have reached the limit on the number of delivery channels you can create. + // You have reached the limit of the number of delivery channels you can create. ErrCodeMaxNumberOfDeliveryChannelsExceededException = "MaxNumberOfDeliveryChannelsExceededException" + // ErrCodeMaxNumberOfRetentionConfigurationsExceededException for service response error code + // "MaxNumberOfRetentionConfigurationsExceededException". + // + // Failed to add the retention configuration because a retention configuration + // with that name already exists. + ErrCodeMaxNumberOfRetentionConfigurationsExceededException = "MaxNumberOfRetentionConfigurationsExceededException" + // ErrCodeNoAvailableConfigurationRecorderException for service response error code // "NoAvailableConfigurationRecorderException". // @@ -139,6 +150,12 @@ const ( // There is no delivery channel available to record configurations. ErrCodeNoAvailableDeliveryChannelException = "NoAvailableDeliveryChannelException" + // ErrCodeNoAvailableOrganizationException for service response error code + // "NoAvailableOrganizationException". + // + // Organization does is no longer available. 
+ ErrCodeNoAvailableOrganizationException = "NoAvailableOrganizationException" + // ErrCodeNoRunningConfigurationRecorderException for service response error code // "NoRunningConfigurationRecorderException". // @@ -158,6 +175,12 @@ const ( // rule names are correct and try again. ErrCodeNoSuchConfigRuleException = "NoSuchConfigRuleException" + // ErrCodeNoSuchConfigurationAggregatorException for service response error code + // "NoSuchConfigurationAggregatorException". + // + // You have specified a configuration aggregator that does not exist. + ErrCodeNoSuchConfigurationAggregatorException = "NoSuchConfigurationAggregatorException" + // ErrCodeNoSuchConfigurationRecorderException for service response error code // "NoSuchConfigurationRecorderException". // @@ -170,6 +193,31 @@ const ( // You have specified a delivery channel that does not exist. ErrCodeNoSuchDeliveryChannelException = "NoSuchDeliveryChannelException" + // ErrCodeNoSuchRetentionConfigurationException for service response error code + // "NoSuchRetentionConfigurationException". + // + // You have specified a retention configuration that does not exist. + ErrCodeNoSuchRetentionConfigurationException = "NoSuchRetentionConfigurationException" + + // ErrCodeOrganizationAccessDeniedException for service response error code + // "OrganizationAccessDeniedException". + // + // No permission to call the EnableAWSServiceAccess API. + ErrCodeOrganizationAccessDeniedException = "OrganizationAccessDeniedException" + + // ErrCodeOrganizationAllFeaturesNotEnabledException for service response error code + // "OrganizationAllFeaturesNotEnabledException". + // + // The configuration aggregator cannot be created because organization does + // not have all features enabled. + ErrCodeOrganizationAllFeaturesNotEnabledException = "OrganizationAllFeaturesNotEnabledException" + + // ErrCodeOversizedConfigurationItemException for service response error code + // "OversizedConfigurationItemException". + // + // The configuration item size is outside the allowable range. + ErrCodeOversizedConfigurationItemException = "OversizedConfigurationItemException" + // ErrCodeResourceInUseException for service response error code // "ResourceInUseException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/configservice/service.go b/vendor/github.com/aws/aws-sdk-go/service/configservice/service.go index 3593126add4..2fdea95561f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/configservice/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/configservice/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "config" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "config" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Config Service" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ConfigService client with a session. 
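The `New` constructor documented in the hunk above is how callers obtain a `ConfigService` client, and the request/response structs vendored earlier in this update (for example `PutRetentionConfigurationInput`) are passed through it. A minimal sketch of exercising one of the newly vendored operations follows; it assumes the canonical `github.com/aws/aws-sdk-go` import path rather than the repository's vendored path, a hard-coded region, and default credential resolution.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
	// New builds a ConfigService client from a session, as documented above.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := configservice.New(sess)

	// PutRetentionConfiguration is one of the operations added by this SDK update;
	// the generated Validate() shown earlier rejects values below the 30-day minimum
	// before the request is ever sent.
	out, err := svc.PutRetentionConfiguration(&configservice.PutRetentionConfigurationInput{
		RetentionPeriodInDays: aws.Int64(90),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.RetentionConfiguration.Name))
}
```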
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/api.go b/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/api.go index 22bb4d1aaef..56a7773685f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/api.go @@ -15,8 +15,8 @@ const opAddTagsToResource = "AddTagsToResource" // AddTagsToResourceRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -55,10 +55,10 @@ func (c *DatabaseMigrationService) AddTagsToResourceRequest(input *AddTagsToReso // AddTagsToResource API operation for AWS Database Migration Service. // -// Adds metadata tags to a DMS resource, including replication instance, endpoint, -// security group, and migration task. These tags can also be used with cost -// allocation reporting to track cost associated with DMS resources, or used -// in a Condition statement in an IAM policy for DMS. +// Adds metadata tags to an AWS DMS resource, including replication instance, +// endpoint, security group, and migration task. These tags can also be used +// with cost allocation reporting to track cost associated with DMS resources, +// or used in a Condition statement in an IAM policy for DMS. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -97,8 +97,8 @@ const opCreateEndpoint = "CreateEndpoint" // CreateEndpointRequest generates a "aws/request.Request" representing the // client's request for the CreateEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -192,8 +192,8 @@ const opCreateEventSubscription = "CreateEventSubscription" // CreateEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the CreateEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
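The doc comments being corrected throughout these hunks all describe the same generated two-step calling convention: `XxxRequest` returns a `request.Request` together with an output value that is only populated after `Send` returns without error, while the plain `Xxx` method wraps both steps. A minimal sketch of both forms, assuming the canonical import path and an arbitrary existing operation (`DescribeEndpoints`) purely for illustration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	dms "github.com/aws/aws-sdk-go/service/databasemigrationservice"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := dms.New(sess)

	// Two-step form: the output value is not valid until Send returns without error.
	req, out := svc.DescribeEndpointsRequest(&dms.DescribeEndpointsInput{})
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("endpoints:", len(out.Endpoints))

	// One-step convenience form that wraps the same request and send.
	if _, err := svc.DescribeEndpoints(&dms.DescribeEndpointsInput{}); err != nil {
		log.Fatal(err)
	}
}
```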
@@ -246,9 +246,9 @@ func (c *DatabaseMigrationService) CreateEventSubscriptionRequest(input *CreateE // will be notified of events generated from all AWS DMS sources belonging to // your customer account. // -// For more information about AWS DMS events, see Working with Events and Notifications -// (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html) in the -// AWS Database MIgration Service User Guide. +// For more information about AWS DMS events, see Working with Events and Notifications +// (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html) in the +// AWS Database Migration Service User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -299,8 +299,8 @@ const opCreateReplicationInstance = "CreateReplicationInstance" // CreateReplicationInstanceRequest generates a "aws/request.Request" representing the // client's request for the CreateReplicationInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -407,8 +407,8 @@ const opCreateReplicationSubnetGroup = "CreateReplicationSubnetGroup" // CreateReplicationSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateReplicationSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -502,8 +502,8 @@ const opCreateReplicationTask = "CreateReplicationTask" // CreateReplicationTaskRequest generates a "aws/request.Request" representing the // client's request for the CreateReplicationTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -597,8 +597,8 @@ const opDeleteCertificate = "DeleteCertificate" // DeleteCertificateRequest generates a "aws/request.Request" representing the // client's request for the DeleteCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -680,8 +680,8 @@ const opDeleteEndpoint = "DeleteEndpoint" // DeleteEndpointRequest generates a "aws/request.Request" representing the // client's request for the DeleteEndpoint operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -766,8 +766,8 @@ const opDeleteEventSubscription = "DeleteEventSubscription" // DeleteEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the DeleteEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -849,8 +849,8 @@ const opDeleteReplicationInstance = "DeleteReplicationInstance" // DeleteReplicationInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeleteReplicationInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -935,8 +935,8 @@ const opDeleteReplicationSubnetGroup = "DeleteReplicationSubnetGroup" // DeleteReplicationSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteReplicationSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1018,8 +1018,8 @@ const opDeleteReplicationTask = "DeleteReplicationTask" // DeleteReplicationTaskRequest generates a "aws/request.Request" representing the // client's request for the DeleteReplicationTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1101,8 +1101,8 @@ const opDescribeAccountAttributes = "DescribeAccountAttributes" // DescribeAccountAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAccountAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1180,8 +1180,8 @@ const opDescribeCertificates = "DescribeCertificates" // DescribeCertificatesRequest generates a "aws/request.Request" representing the // client's request for the DescribeCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1315,8 +1315,8 @@ const opDescribeConnections = "DescribeConnections" // DescribeConnectionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1451,8 +1451,8 @@ const opDescribeEndpointTypes = "DescribeEndpointTypes" // DescribeEndpointTypesRequest generates a "aws/request.Request" representing the // client's request for the DescribeEndpointTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1581,8 +1581,8 @@ const opDescribeEndpoints = "DescribeEndpoints" // DescribeEndpointsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEndpoints operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1716,8 +1716,8 @@ const opDescribeEventCategories = "DescribeEventCategories" // DescribeEventCategoriesRequest generates a "aws/request.Request" representing the // client's request for the DescribeEventCategories operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1758,7 +1758,7 @@ func (c *DatabaseMigrationService) DescribeEventCategoriesRequest(input *Describ // // Lists categories for all event source types, or, if specified, for a specified // source type. 
You can see a list of the event categories and source types -// in Working with Events and Notifications (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html) +// in Working with Events and Notifications (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html) // in the AWS Database Migration Service User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1793,8 +1793,8 @@ const opDescribeEventSubscriptions = "DescribeEventSubscriptions" // DescribeEventSubscriptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEventSubscriptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1933,8 +1933,8 @@ const opDescribeEvents = "DescribeEvents" // DescribeEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1981,7 +1981,8 @@ func (c *DatabaseMigrationService) DescribeEventsRequest(input *DescribeEventsIn // // Lists events for a given source identifier and source type. You can also // specify a start and end time. For more information on AWS DMS events, see -// Working with Events and Notifications (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html). +// Working with Events and Notifications (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html) +// in the AWS Database Migration User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2065,8 +2066,8 @@ const opDescribeOrderableReplicationInstances = "DescribeOrderableReplicationIns // DescribeOrderableReplicationInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeOrderableReplicationInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2196,8 +2197,8 @@ const opDescribeRefreshSchemasStatus = "DescribeRefreshSchemasStatus" // DescribeRefreshSchemasStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeRefreshSchemasStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2279,8 +2280,8 @@ const opDescribeReplicationInstanceTaskLogs = "DescribeReplicationInstanceTaskLo // DescribeReplicationInstanceTaskLogsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReplicationInstanceTaskLogs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2418,8 +2419,8 @@ const opDescribeReplicationInstances = "DescribeReplicationInstances" // DescribeReplicationInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeReplicationInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2554,8 +2555,8 @@ const opDescribeReplicationSubnetGroups = "DescribeReplicationSubnetGroups" // DescribeReplicationSubnetGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReplicationSubnetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2689,8 +2690,8 @@ const opDescribeReplicationTaskAssessmentResults = "DescribeReplicationTaskAsses // DescribeReplicationTaskAssessmentResultsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReplicationTaskAssessmentResults operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2825,8 +2826,8 @@ const opDescribeReplicationTasks = "DescribeReplicationTasks" // DescribeReplicationTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeReplicationTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2961,8 +2962,8 @@ const opDescribeSchemas = "DescribeSchemas" // DescribeSchemasRequest generates a "aws/request.Request" representing the // client's request for the DescribeSchemas operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3100,8 +3101,8 @@ const opDescribeTableStatistics = "DescribeTableStatistics" // DescribeTableStatisticsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTableStatistics operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3244,8 +3245,8 @@ const opImportCertificate = "ImportCertificate" // ImportCertificateRequest generates a "aws/request.Request" representing the // client's request for the ImportCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3329,8 +3330,8 @@ const opListTagsForResource = "ListTagsForResource" // ListTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3408,8 +3409,8 @@ const opModifyEndpoint = "ModifyEndpoint" // ModifyEndpointRequest generates a "aws/request.Request" representing the // client's request for the ModifyEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3500,8 +3501,8 @@ const opModifyEventSubscription = "ModifyEventSubscription" // ModifyEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the ModifyEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3588,8 +3589,8 @@ const opModifyReplicationInstance = "ModifyReplicationInstance" // ModifyReplicationInstanceRequest generates a "aws/request.Request" representing the // client's request for the ModifyReplicationInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3687,8 +3688,8 @@ const opModifyReplicationSubnetGroup = "ModifyReplicationSubnetGroup" // ModifyReplicationSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyReplicationSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3782,8 +3783,8 @@ const opModifyReplicationTask = "ModifyReplicationTask" // ModifyReplicationTaskRequest generates a "aws/request.Request" representing the // client's request for the ModifyReplicationTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3827,8 +3828,9 @@ func (c *DatabaseMigrationService) ModifyReplicationTaskRequest(input *ModifyRep // You can't modify the task endpoints. The task must be stopped before you // can modify it. // -// For more information about AWS DMS tasks, see the AWS DMS user guide at -// Working with Migration Tasks (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.html) +// For more information about AWS DMS tasks, see Working with Migration Tasks +// (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.html) in the +// AWS Database Migration Service User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3877,8 +3879,8 @@ const opRebootReplicationInstance = "RebootReplicationInstance" // RebootReplicationInstanceRequest generates a "aws/request.Request" representing the // client's request for the RebootReplicationInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3961,8 +3963,8 @@ const opRefreshSchemas = "RefreshSchemas" // RefreshSchemasRequest generates a "aws/request.Request" representing the // client's request for the RefreshSchemas operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4052,8 +4054,8 @@ const opReloadTables = "ReloadTables" // ReloadTablesRequest generates a "aws/request.Request" representing the // client's request for the ReloadTables operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4135,8 +4137,8 @@ const opRemoveTagsFromResource = "RemoveTagsFromResource" // RemoveTagsFromResourceRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4214,8 +4216,8 @@ const opStartReplicationTask = "StartReplicationTask" // StartReplicationTaskRequest generates a "aws/request.Request" representing the // client's request for the StartReplicationTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4256,8 +4258,9 @@ func (c *DatabaseMigrationService) StartReplicationTaskRequest(input *StartRepli // // Starts the replication task. // -// For more information about AWS DMS tasks, see the AWS DMS user guide at -// Working with Migration Tasks (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.html) +// For more information about AWS DMS tasks, see Working with Migration Tasks +// (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.html) in the +// AWS Database Migration Service User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4274,6 +4277,9 @@ func (c *DatabaseMigrationService) StartReplicationTaskRequest(input *StartRepli // The resource is in a state that prevents it from being used for database // migration. // +// * ErrCodeAccessDeniedFault "AccessDeniedFault" +// AWS DMS was denied access to the endpoint. 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01/StartReplicationTask func (c *DatabaseMigrationService) StartReplicationTask(input *StartReplicationTaskInput) (*StartReplicationTaskOutput, error) { req, out := c.StartReplicationTaskRequest(input) @@ -4300,8 +4306,8 @@ const opStartReplicationTaskAssessment = "StartReplicationTaskAssessment" // StartReplicationTaskAssessmentRequest generates a "aws/request.Request" representing the // client's request for the StartReplicationTaskAssessment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4384,8 +4390,8 @@ const opStopReplicationTask = "StopReplicationTask" // StopReplicationTaskRequest generates a "aws/request.Request" representing the // client's request for the StopReplicationTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4467,8 +4473,8 @@ const opTestConnection = "TestConnection" // TestConnectionRequest generates a "aws/request.Request" representing the // client's request for the TestConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4695,7 +4701,7 @@ type Certificate struct { CertificateArn *string `type:"string"` // The date that the certificate was created. - CertificateCreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CertificateCreationDate *time.Time `type:"timestamp"` // The customer-assigned name of the certificate. Valid characters are A-z and // 0-9. @@ -4719,10 +4725,10 @@ type Certificate struct { SigningAlgorithm *string `type:"string"` // The beginning date that the certificate is valid. - ValidFromDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ValidFromDate *time.Time `type:"timestamp"` // The final date that the certificate is valid. - ValidToDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ValidToDate *time.Time `type:"timestamp"` } // String returns the string representation @@ -4875,12 +4881,37 @@ type CreateEndpointInput struct { // The name of the endpoint database. DatabaseName *string `type:"string"` + // The settings in JSON format for the DMS transfer type of source endpoint. + // + // Possible attributes include the following: + // + // * serviceAccessRoleArn - The IAM role that has permission to access the + // Amazon S3 bucket. + // + // * bucketName - The name of the S3 bucket to use. + // + // * compressionType - An optional parameter to use GZIP to compress the + // target files. To use GZIP, set this value to NONE (the default). 
To keep + // the files uncompressed, don't use this value. + // + // Shorthand syntax for these attributes is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string + // + // JSON syntax for these attributes is as follows: { "ServiceAccessRoleArn": + // "string", "BucketName": "string", "CompressionType": "none"|"gzip" } + DmsTransferSettings *DmsTransferSettings `type:"structure"` + // Settings in JSON format for the target Amazon DynamoDB endpoint. For more - // information about the available settings, see the Using Object Mapping to - // Migrate Data to DynamoDB section at Using an Amazon DynamoDB Database as - // a Target for AWS Database Migration Service (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html). + // information about the available settings, see Using Object Mapping to Migrate + // Data to DynamoDB (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html) + // in the AWS Database Migration Service User Guide. DynamoDbSettings *DynamoDbSettings `type:"structure"` + // Settings in JSON format for the target Elasticsearch endpoint. For more information + // about the available settings, see Extra Connection Attributes When Using + // Elasticsearch as a Target for AWS DMS (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.Configuration) + // in the AWS Database Migration User Guide. + ElasticsearchSettings *ElasticsearchSettings `type:"structure"` + // The database endpoint identifier. Identifiers must begin with a letter; must // contain only ASCII letters, digits, and hyphens; and must not end with a // hyphen or contain two consecutive hyphens. @@ -4893,54 +4924,66 @@ type CreateEndpointInput struct { // EndpointType is a required field EndpointType *string `type:"string" required:"true" enum:"ReplicationEndpointTypeValue"` - // The type of engine for the endpoint. Valid values, depending on the EndPointType, - // include mysql, oracle, postgres, mariadb, aurora, redshift, S3, sybase, dynamodb, - // mongodb, and sqlserver. + // The type of engine for the endpoint. Valid values, depending on the EndPointType + // value, include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, + // redshift, s3, db2, azuredb, sybase, dynamodb, mongodb, and sqlserver. // // EngineName is a required field EngineName *string `type:"string" required:"true"` + // The external table definition. + ExternalTableDefinition *string `type:"string"` + // Additional attributes associated with the connection. ExtraConnectionAttributes *string `type:"string"` - // The KMS key identifier that will be used to encrypt the connection parameters. - // If you do not specify a value for the KmsKeyId parameter, then AWS DMS will - // use your default encryption key. AWS KMS creates the default encryption key - // for your AWS account. Your AWS account has a different default encryption - // key for each AWS region. + // Settings in JSON format for the target Amazon Kinesis Data Streams endpoint. + // For more information about the available settings, see Using Object Mapping + // to Migrate Data to a Kinesis Data Stream (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping + // ) in the AWS Database Migration User Guide. + KinesisSettings *KinesisSettings `type:"structure"` + + // The AWS KMS key identifier to use to encrypt the connection parameters. 
If + // you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your + // default encryption key. AWS KMS creates the default encryption key for your + // AWS account. Your AWS account has a different default encryption key for + // each AWS Region. KmsKeyId *string `type:"string"` // Settings in JSON format for the source MongoDB endpoint. For more information - // about the available settings, see the Configuration Properties When Using - // MongoDB as a Source for AWS Database Migration Service section at Using - // Amazon S3 as a Target for AWS Database Migration Service (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html). + // about the available settings, see the configuration properties section in + // Using MongoDB as a Target for AWS Database Migration Service (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html) + // in the AWS Database Migration Service User Guide. MongoDbSettings *MongoDbSettings `type:"structure"` - // The password to be used to login to the endpoint database. + // The password to be used to log in to the endpoint database. Password *string `type:"string"` // The port used by the endpoint database. Port *int64 `type:"integer"` - // Settings in JSON format for the target S3 endpoint. For more information - // about the available settings, see the Extra Connection Attributes section - // at Using Amazon S3 as a Target for AWS Database Migration Service (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html). + // Settings in JSON format for the target Amazon S3 endpoint. For more information + // about the available settings, see Extra Connection Attributes When Using + // Amazon S3 as a Target for AWS DMS (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring) + // in the AWS Database Migration Service User Guide. S3Settings *S3Settings `type:"structure"` // The name of the server where the endpoint database resides. ServerName *string `type:"string"` - // The SSL mode to use for the SSL connection. - // - // SSL mode can be one of four values: none, require, verify-ca, verify-full. - // - // The default value is none. + // The Amazon Resource Name (ARN) for the service access role that you want + // to use to create the endpoint. + ServiceAccessRoleArn *string `type:"string"` + + // The Secure Sockets Layer (SSL) mode to use for the SSL connection. The SSL + // mode can be one of four values: none, require, verify-ca, verify-full. The + // default value is none. SslMode *string `type:"string" enum:"DmsSslModeValue"` // Tags to be added to the endpoint. Tags []*Tag `type:"list"` - // The user name to be used to login to the endpoint database. + // The user name to be used to log in to the endpoint database. Username *string `type:"string"` } @@ -4971,6 +5014,11 @@ func (s *CreateEndpointInput) Validate() error { invalidParams.AddNested("DynamoDbSettings", err.(request.ErrInvalidParams)) } } + if s.ElasticsearchSettings != nil { + if err := s.ElasticsearchSettings.Validate(); err != nil { + invalidParams.AddNested("ElasticsearchSettings", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -4990,12 +5038,24 @@ func (s *CreateEndpointInput) SetDatabaseName(v string) *CreateEndpointInput { return s } +// SetDmsTransferSettings sets the DmsTransferSettings field's value. 
+func (s *CreateEndpointInput) SetDmsTransferSettings(v *DmsTransferSettings) *CreateEndpointInput { + s.DmsTransferSettings = v + return s +} + // SetDynamoDbSettings sets the DynamoDbSettings field's value. func (s *CreateEndpointInput) SetDynamoDbSettings(v *DynamoDbSettings) *CreateEndpointInput { s.DynamoDbSettings = v return s } +// SetElasticsearchSettings sets the ElasticsearchSettings field's value. +func (s *CreateEndpointInput) SetElasticsearchSettings(v *ElasticsearchSettings) *CreateEndpointInput { + s.ElasticsearchSettings = v + return s +} + // SetEndpointIdentifier sets the EndpointIdentifier field's value. func (s *CreateEndpointInput) SetEndpointIdentifier(v string) *CreateEndpointInput { s.EndpointIdentifier = &v @@ -5014,12 +5074,24 @@ func (s *CreateEndpointInput) SetEngineName(v string) *CreateEndpointInput { return s } +// SetExternalTableDefinition sets the ExternalTableDefinition field's value. +func (s *CreateEndpointInput) SetExternalTableDefinition(v string) *CreateEndpointInput { + s.ExternalTableDefinition = &v + return s +} + // SetExtraConnectionAttributes sets the ExtraConnectionAttributes field's value. func (s *CreateEndpointInput) SetExtraConnectionAttributes(v string) *CreateEndpointInput { s.ExtraConnectionAttributes = &v return s } +// SetKinesisSettings sets the KinesisSettings field's value. +func (s *CreateEndpointInput) SetKinesisSettings(v *KinesisSettings) *CreateEndpointInput { + s.KinesisSettings = v + return s +} + // SetKmsKeyId sets the KmsKeyId field's value. func (s *CreateEndpointInput) SetKmsKeyId(v string) *CreateEndpointInput { s.KmsKeyId = &v @@ -5056,6 +5128,12 @@ func (s *CreateEndpointInput) SetServerName(v string) *CreateEndpointInput { return s } +// SetServiceAccessRoleArn sets the ServiceAccessRoleArn field's value. +func (s *CreateEndpointInput) SetServiceAccessRoleArn(v string) *CreateEndpointInput { + s.ServiceAccessRoleArn = &v + return s +} + // SetSslMode sets the SslMode field's value. func (s *CreateEndpointInput) SetSslMode(v string) *CreateEndpointInput { s.SslMode = &v @@ -5106,7 +5184,7 @@ type CreateEventSubscriptionInput struct { // A list of event categories for a source type that you want to subscribe to. // You can see a list of the categories for a given source type by calling the - // DescribeEventCategories action or in the topic Working with Events and Notifications + // DescribeEventCategories action or in the topic Working with Events and Notifications // (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Events.html) in the // AWS Database Migration Service User Guide. EventCategories []*string `type:"list"` @@ -5132,7 +5210,7 @@ type CreateEventSubscriptionInput struct { // Valid values: replication-instance | migration-task SourceType *string `type:"string"` - // The name of the DMS event notification subscription. + // The name of the AWS DMS event notification subscription. // // Constraints: The name must be less than 255 characters. // @@ -5254,14 +5332,17 @@ type CreateReplicationInstanceInput struct { // Example: us-east-1d AvailabilityZone *string `type:"string"` + // A list of DNS name servers supported for the replication instance. + DnsNameServers *string `type:"string"` + // The engine version number of the replication instance. EngineVersion *string `type:"string"` - // The KMS key identifier that will be used to encrypt the content on the replication - // instance. If you do not specify a value for the KmsKeyId parameter, then - // AWS DMS will use your default encryption key. 
AWS KMS creates the default - // encryption key for your AWS account. Your AWS account has a different default - // encryption key for each AWS region. + // The AWS KMS key identifier that is used to encrypt the content on the replication + // instance. If you don't specify a value for the KmsKeyId parameter, then AWS + // DMS uses your default encryption key. AWS KMS creates the default encryption + // key for your AWS account. Your AWS account has a different default encryption + // key for each AWS Region. KmsKeyId *string `type:"string"` // Specifies if the replication instance is a Multi-AZ deployment. You cannot @@ -5367,6 +5448,12 @@ func (s *CreateReplicationInstanceInput) SetAvailabilityZone(v string) *CreateRe return s } +// SetDnsNameServers sets the DnsNameServers field's value. +func (s *CreateReplicationInstanceInput) SetDnsNameServers(v string) *CreateReplicationInstanceInput { + s.DnsNameServers = &v + return s +} + // SetEngineVersion sets the EngineVersion field's value. func (s *CreateReplicationInstanceInput) SetEngineVersion(v string) *CreateReplicationInstanceInput { s.EngineVersion = &v @@ -5557,8 +5644,34 @@ func (s *CreateReplicationSubnetGroupOutput) SetReplicationSubnetGroup(v *Replic type CreateReplicationTaskInput struct { _ struct{} `type:"structure"` - // The start time for the Change Data Capture (CDC) operation. - CdcStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // Indicates when you want a change data capture (CDC) operation to start. Use + // either CdcStartPosition or CdcStartTime to specify when you want a CDC operation + // to start. Specifying both values results in an error. + // + // The value can be in date, checkpoint, or LSN/SCN format. + // + // Date Example: --cdc-start-position “2018-03-08T12:12:12” + // + // Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93" + // + // LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373” + CdcStartPosition *string `type:"string"` + + // Indicates the start time for a change data capture (CDC) operation. Use either + // CdcStartTime or CdcStartPosition to specify when you want a CDC operation + // to start. Specifying both values results in an error. + // + // Timestamp Example: --cdc-start-time “2018-03-08T12:12:12” + CdcStartTime *time.Time `type:"timestamp"` + + // Indicates when you want a change data capture (CDC) operation to stop. The + // value can be either server time or commit time. + // + // Server time example: --cdc-stop-position “server_time:3018-02-09T12:12:12” + // + // Commit time example: --cdc-stop-position “commit_time: 3018-02-09T12:12:12 + // “ + CdcStopPosition *string `type:"string"` // The migration type. // @@ -5585,7 +5698,8 @@ type CreateReplicationTaskInput struct { // Settings for the task, such as target metadata settings. For a complete list // of task settings, see Task Settings for AWS Database Migration Service Tasks - // (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html). + // (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html) + // in the AWS Database Migration User Guide. ReplicationTaskSettings *string `type:"string"` // The Amazon Resource Name (ARN) string that uniquely identifies the endpoint. 
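The hunk above adds the `CdcStartPosition`, `CdcStartTime`, and `CdcStopPosition` fields to `CreateReplicationTaskInput` (their setters follow in the next hunk). As an illustrative sketch only, and not part of the vendored code, the new setters could be combined roughly as follows; the position strings are placeholders in the formats quoted by the doc comments, the upstream `github.com/aws/aws-sdk-go` import path is assumed, and the task's other required fields (endpoint and instance ARNs, table mappings) are omitted.

```go
// Illustrative sketch only; not part of the vendored SDK code in this diff.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/databasemigrationservice"
)

func main() {
	input := &databasemigrationservice.CreateReplicationTaskInput{}

	// Start change data capture at a native log position (LSN format placeholder).
	input.SetCdcStartPosition("mysql-bin-changelog.000024:373")

	// Stop it at a given server time. Setting CdcStartTime as well would be an
	// error, since CdcStartPosition and CdcStartTime are mutually exclusive.
	input.SetCdcStopPosition("server_time:2018-02-09T12:12:12")

	fmt.Println(input)
}
```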
@@ -5649,12 +5763,24 @@ func (s *CreateReplicationTaskInput) Validate() error { return nil } +// SetCdcStartPosition sets the CdcStartPosition field's value. +func (s *CreateReplicationTaskInput) SetCdcStartPosition(v string) *CreateReplicationTaskInput { + s.CdcStartPosition = &v + return s +} + // SetCdcStartTime sets the CdcStartTime field's value. func (s *CreateReplicationTaskInput) SetCdcStartTime(v time.Time) *CreateReplicationTaskInput { s.CdcStartTime = &v return s } +// SetCdcStopPosition sets the CdcStopPosition field's value. +func (s *CreateReplicationTaskInput) SetCdcStopPosition(v string) *CreateReplicationTaskInput { + s.CdcStopPosition = &v + return s +} + // SetMigrationType sets the MigrationType field's value. func (s *CreateReplicationTaskInput) SetMigrationType(v string) *CreateReplicationTaskInput { s.MigrationType = &v @@ -6731,7 +6857,7 @@ type DescribeEventsInput struct { Duration *int64 `type:"integer"` // The end time for the events to be listed. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // A list of event categories for a source type that you want to subscribe to. EventCategories []*string `type:"list"` @@ -6764,7 +6890,7 @@ type DescribeEventsInput struct { SourceType *string `type:"string" enum:"SourceType"` // The start time for the events to be listed. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -7766,6 +7892,39 @@ func (s *DescribeTableStatisticsOutput) SetTableStatistics(v []*TableStatistics) return s } +// The settings in JSON format for the DMS Transfer type source endpoint. +type DmsTransferSettings struct { + _ struct{} `type:"structure"` + + // The name of the S3 bucket to use. + BucketName *string `type:"string"` + + // The IAM role that has permission to access the Amazon S3 bucket. + ServiceAccessRoleArn *string `type:"string"` +} + +// String returns the string representation +func (s DmsTransferSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DmsTransferSettings) GoString() string { + return s.String() +} + +// SetBucketName sets the BucketName field's value. +func (s *DmsTransferSettings) SetBucketName(v string) *DmsTransferSettings { + s.BucketName = &v + return s +} + +// SetServiceAccessRoleArn sets the ServiceAccessRoleArn field's value. +func (s *DmsTransferSettings) SetServiceAccessRoleArn(v string) *DmsTransferSettings { + s.ServiceAccessRoleArn = &v + return s +} + type DynamoDbSettings struct { _ struct{} `type:"structure"` @@ -7804,6 +7963,78 @@ func (s *DynamoDbSettings) SetServiceAccessRoleArn(v string) *DynamoDbSettings { return s } +type ElasticsearchSettings struct { + _ struct{} `type:"structure"` + + // The endpoint for the ElasticSearch cluster. + // + // EndpointUri is a required field + EndpointUri *string `type:"string" required:"true"` + + // The maximum number of seconds that DMS retries failed API requests to the + // Elasticsearch cluster. + ErrorRetryDuration *int64 `type:"integer"` + + // The maximum percentage of records that can fail to be written before a full + // load operation stops. + FullLoadErrorPercentage *int64 `type:"integer"` + + // The Amazon Resource Name (ARN) used by service to access the IAM role. 
+ // + // ServiceAccessRoleArn is a required field + ServiceAccessRoleArn *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ElasticsearchSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ElasticsearchSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ElasticsearchSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ElasticsearchSettings"} + if s.EndpointUri == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointUri")) + } + if s.ServiceAccessRoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceAccessRoleArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndpointUri sets the EndpointUri field's value. +func (s *ElasticsearchSettings) SetEndpointUri(v string) *ElasticsearchSettings { + s.EndpointUri = &v + return s +} + +// SetErrorRetryDuration sets the ErrorRetryDuration field's value. +func (s *ElasticsearchSettings) SetErrorRetryDuration(v int64) *ElasticsearchSettings { + s.ErrorRetryDuration = &v + return s +} + +// SetFullLoadErrorPercentage sets the FullLoadErrorPercentage field's value. +func (s *ElasticsearchSettings) SetFullLoadErrorPercentage(v int64) *ElasticsearchSettings { + s.FullLoadErrorPercentage = &v + return s +} + +// SetServiceAccessRoleArn sets the ServiceAccessRoleArn field's value. +func (s *ElasticsearchSettings) SetServiceAccessRoleArn(v string) *ElasticsearchSettings { + s.ServiceAccessRoleArn = &v + return s +} + type Endpoint struct { _ struct{} `type:"structure"` @@ -7813,10 +8044,33 @@ type Endpoint struct { // The name of the database at the endpoint. DatabaseName *string `type:"string"` + // The settings in JSON format for the DMS transfer type of source endpoint. + // + // Possible attributes include the following: + // + // * serviceAccessRoleArn - The IAM role that has permission to access the + // Amazon S3 bucket. + // + // * bucketName - The name of the S3 bucket to use. + // + // * compressionType - An optional parameter to use GZIP to compress the + // target files. To use GZIP, set this value to NONE (the default). To keep + // the files uncompressed, don't use this value. + // + // Shorthand syntax for these attributes is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string + // + // JSON syntax for these attributes is as follows: { "ServiceAccessRoleArn": + // "string", "BucketName": "string", "CompressionType": "none"|"gzip" } + DmsTransferSettings *DmsTransferSettings `type:"structure"` + // The settings for the target DynamoDB database. For more information, see // the DynamoDBSettings structure. DynamoDbSettings *DynamoDbSettings `type:"structure"` + // The settings for the Elasticsearch source endpoint. For more information, + // see the ElasticsearchSettings structure. + ElasticsearchSettings *ElasticsearchSettings `type:"structure"` + // The Amazon Resource Name (ARN) string that uniquely identifies the endpoint. EndpointArn *string `type:"string"` @@ -7828,9 +8082,13 @@ type Endpoint struct { // The type of endpoint. EndpointType *string `type:"string" enum:"ReplicationEndpointTypeValue"` + // The expanded name for the engine name. For example, if the EngineName parameter + // is "aurora," this value would be "Amazon Aurora MySQL." 
+ EngineDisplayName *string `type:"string"` + // The database engine name. Valid values, depending on the EndPointType, include - // mysql, oracle, postgres, mariadb, aurora, redshift, S3, sybase, dynamodb, - // mongodb, and sqlserver. + // mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, + // db2, azuredb, sybase, sybase, dynamodb, mongodb, and sqlserver. EngineName *string `type:"string"` // Value returned by a call to CreateEndpoint that can be used for cross-account @@ -7838,14 +8096,21 @@ type Endpoint struct { // with a cross-account. ExternalId *string `type:"string"` + // The external table definition. + ExternalTableDefinition *string `type:"string"` + // Additional connection attributes used to connect to the endpoint. ExtraConnectionAttributes *string `type:"string"` - // The KMS key identifier that will be used to encrypt the connection parameters. - // If you do not specify a value for the KmsKeyId parameter, then AWS DMS will - // use your default encryption key. AWS KMS creates the default encryption key - // for your AWS account. Your AWS account has a different default encryption - // key for each AWS region. + // The settings for the Amazon Kinesis source endpoint. For more information, + // see the KinesisSettings structure. + KinesisSettings *KinesisSettings `type:"structure"` + + // The AWS KMS key identifier that is used to encrypt the content on the replication + // instance. If you don't specify a value for the KmsKeyId parameter, then AWS + // DMS uses your default encryption key. AWS KMS creates the default encryption + // key for your AWS account. Your AWS account has a different default encryption + // key for each AWS Region. KmsKeyId *string `type:"string"` // The settings for the MongoDB source endpoint. For more information, see the @@ -7862,6 +8127,9 @@ type Endpoint struct { // The name of the server at the endpoint. ServerName *string `type:"string"` + // The Amazon Resource Name (ARN) used by the service access IAM role. + ServiceAccessRoleArn *string `type:"string"` + // The SSL mode used to connect to the endpoint. // // SSL mode can be one of four values: none, require, verify-ca, verify-full. @@ -7898,12 +8166,24 @@ func (s *Endpoint) SetDatabaseName(v string) *Endpoint { return s } +// SetDmsTransferSettings sets the DmsTransferSettings field's value. +func (s *Endpoint) SetDmsTransferSettings(v *DmsTransferSettings) *Endpoint { + s.DmsTransferSettings = v + return s +} + // SetDynamoDbSettings sets the DynamoDbSettings field's value. func (s *Endpoint) SetDynamoDbSettings(v *DynamoDbSettings) *Endpoint { s.DynamoDbSettings = v return s } +// SetElasticsearchSettings sets the ElasticsearchSettings field's value. +func (s *Endpoint) SetElasticsearchSettings(v *ElasticsearchSettings) *Endpoint { + s.ElasticsearchSettings = v + return s +} + // SetEndpointArn sets the EndpointArn field's value. func (s *Endpoint) SetEndpointArn(v string) *Endpoint { s.EndpointArn = &v @@ -7922,6 +8202,12 @@ func (s *Endpoint) SetEndpointType(v string) *Endpoint { return s } +// SetEngineDisplayName sets the EngineDisplayName field's value. +func (s *Endpoint) SetEngineDisplayName(v string) *Endpoint { + s.EngineDisplayName = &v + return s +} + // SetEngineName sets the EngineName field's value. func (s *Endpoint) SetEngineName(v string) *Endpoint { s.EngineName = &v @@ -7934,12 +8220,24 @@ func (s *Endpoint) SetExternalId(v string) *Endpoint { return s } +// SetExternalTableDefinition sets the ExternalTableDefinition field's value. 
+func (s *Endpoint) SetExternalTableDefinition(v string) *Endpoint { + s.ExternalTableDefinition = &v + return s +} + // SetExtraConnectionAttributes sets the ExtraConnectionAttributes field's value. func (s *Endpoint) SetExtraConnectionAttributes(v string) *Endpoint { s.ExtraConnectionAttributes = &v return s } +// SetKinesisSettings sets the KinesisSettings field's value. +func (s *Endpoint) SetKinesisSettings(v *KinesisSettings) *Endpoint { + s.KinesisSettings = v + return s +} + // SetKmsKeyId sets the KmsKeyId field's value. func (s *Endpoint) SetKmsKeyId(v string) *Endpoint { s.KmsKeyId = &v @@ -7970,6 +8268,12 @@ func (s *Endpoint) SetServerName(v string) *Endpoint { return s } +// SetServiceAccessRoleArn sets the ServiceAccessRoleArn field's value. +func (s *Endpoint) SetServiceAccessRoleArn(v string) *Endpoint { + s.ServiceAccessRoleArn = &v + return s +} + // SetSslMode sets the SslMode field's value. func (s *Endpoint) SetSslMode(v string) *Endpoint { s.SslMode = &v @@ -7992,7 +8296,7 @@ type Event struct { _ struct{} `type:"structure"` // The date of the event. - Date *time.Time `type:"timestamp" timestampFormat:"unix"` + Date *time.Time `type:"timestamp"` // The event categories available for the specified source type. EventCategories []*string `type:"list"` @@ -8338,6 +8642,49 @@ func (s *ImportCertificateOutput) SetCertificate(v *Certificate) *ImportCertific return s } +type KinesisSettings struct { + _ struct{} `type:"structure"` + + // The output format for the records created on the endpoint. The message format + // is JSON. + MessageFormat *string `type:"string" enum:"MessageFormatValue"` + + // The Amazon Resource Name (ARN) for the IAM role that DMS uses to write to + // the Amazon Kinesis data stream. + ServiceAccessRoleArn *string `type:"string"` + + // The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint. + StreamArn *string `type:"string"` +} + +// String returns the string representation +func (s KinesisSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisSettings) GoString() string { + return s.String() +} + +// SetMessageFormat sets the MessageFormat field's value. +func (s *KinesisSettings) SetMessageFormat(v string) *KinesisSettings { + s.MessageFormat = &v + return s +} + +// SetServiceAccessRoleArn sets the ServiceAccessRoleArn field's value. +func (s *KinesisSettings) SetServiceAccessRoleArn(v string) *KinesisSettings { + s.ServiceAccessRoleArn = &v + return s +} + +// SetStreamArn sets the StreamArn field's value. +func (s *KinesisSettings) SetStreamArn(v string) *KinesisSettings { + s.StreamArn = &v + return s +} + type ListTagsForResourceInput struct { _ struct{} `type:"structure"` @@ -8409,12 +8756,39 @@ type ModifyEndpointInput struct { // The name of the endpoint database. DatabaseName *string `type:"string"` + // The settings in JSON format for the DMS transfer type of source endpoint. + // + // Attributes include the following: + // + // * serviceAccessRoleArn - The IAM role that has permission to access the + // Amazon S3 bucket. + // + // * BucketName - The name of the S3 bucket to use. + // + // * compressionType - An optional parameter to use GZIP to compress the + // target files. Set to NONE (the default) or do not use to leave the files + // uncompressed. 
+ // + // Shorthand syntax: ServiceAccessRoleArn=string ,BucketName=string,CompressionType=string + // + // JSON syntax: + // + // { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": + // "none"|"gzip" } + DmsTransferSettings *DmsTransferSettings `type:"structure"` + // Settings in JSON format for the target Amazon DynamoDB endpoint. For more - // information about the available settings, see the Using Object Mapping to - // Migrate Data to DynamoDB section at Using an Amazon DynamoDB Database as - // a Target for AWS Database Migration Service (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html). + // information about the available settings, see Using Object Mapping to Migrate + // Data to DynamoDB (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html) + // in the AWS Database Migration Service User Guide. DynamoDbSettings *DynamoDbSettings `type:"structure"` + // Settings in JSON format for the target Elasticsearch endpoint. For more information + // about the available settings, see Extra Connection Attributes When Using + // Elasticsearch as a Target for AWS DMS (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Elasticsearch.html#CHAP_Target.Elasticsearch.Configuration) + // in the AWS Database Migration User Guide. + ElasticsearchSettings *ElasticsearchSettings `type:"structure"` + // The Amazon Resource Name (ARN) string that uniquely identifies the endpoint. // // EndpointArn is a required field @@ -8429,18 +8803,27 @@ type ModifyEndpointInput struct { EndpointType *string `type:"string" enum:"ReplicationEndpointTypeValue"` // The type of engine for the endpoint. Valid values, depending on the EndPointType, - // include mysql, oracle, postgres, mariadb, aurora, redshift, S3, sybase, dynamodb, - // mongodb, and sqlserver. + // include mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, + // s3, db2, azuredb, sybase, sybase, dynamodb, mongodb, and sqlserver. EngineName *string `type:"string"` + // The external table definition. + ExternalTableDefinition *string `type:"string"` + // Additional attributes associated with the connection. To reset this parameter, // pass the empty string ("") as an argument. ExtraConnectionAttributes *string `type:"string"` + // Settings in JSON format for the target Amazon Kinesis Data Streams endpoint. + // For more information about the available settings, see Using Object Mapping + // to Migrate Data to a Kinesis Data Stream (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html#CHAP_Target.Kinesis.ObjectMapping + // ) in the AWS Database Migration User Guide. + KinesisSettings *KinesisSettings `type:"structure"` + // Settings in JSON format for the source MongoDB endpoint. For more information - // about the available settings, see the Configuration Properties When Using - // MongoDB as a Source for AWS Database Migration Service section at Using - // Amazon S3 as a Target for AWS Database Migration Service (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html). + // about the available settings, see the configuration properties section in + // Using MongoDB as a Target for AWS Database Migration Service (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html) + // in the AWS Database Migration Service User Guide. MongoDbSettings *MongoDbSettings `type:"structure"` // The password to be used to login to the endpoint database. 
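The surrounding hunks introduce the nested `DmsTransferSettings`, `ElasticsearchSettings`, and `KinesisSettings` structures on `ModifyEndpointInput`, mirroring `CreateEndpointInput`. The sketch below is illustrative only and not part of the vendored code: it wires the two settings types whose setters and validation appear in this update into a `ModifyEndpointInput`, using placeholder ARNs, bucket name, and endpoint URI, and the upstream `github.com/aws/aws-sdk-go` import path.

```go
// Illustrative sketch only; not part of the vendored SDK code in this diff.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/service/databasemigrationservice"
)

func main() {
	// Target Elasticsearch settings: EndpointUri and ServiceAccessRoleArn are the
	// two required fields per the Validate method added in this update.
	es := &databasemigrationservice.ElasticsearchSettings{}
	es.SetEndpointUri("https://search-example.us-east-1.es.amazonaws.com")    // placeholder
	es.SetServiceAccessRoleArn("arn:aws:iam::123456789012:role/dms-es-access") // placeholder

	// DMS transfer settings for an S3-backed source endpoint.
	transfer := &databasemigrationservice.DmsTransferSettings{}
	transfer.SetBucketName("example-dms-bucket")                                   // placeholder
	transfer.SetServiceAccessRoleArn("arn:aws:iam::123456789012:role/dms-s3-role") // placeholder

	input := &databasemigrationservice.ModifyEndpointInput{}
	input.SetEndpointArn("arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE") // placeholder
	input.SetElasticsearchSettings(es)
	input.SetDmsTransferSettings(transfer)

	// Validate surfaces missing required fields, including the nested
	// ElasticsearchSettings checks added in this update, before any API call.
	if err := input.Validate(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(input)
}
```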
@@ -8449,14 +8832,19 @@ type ModifyEndpointInput struct { // The port used by the endpoint database. Port *int64 `type:"integer"` - // Settings in JSON format for the target S3 endpoint. For more information - // about the available settings, see the Extra Connection Attributes section - // at Using Amazon S3 as a Target for AWS Database Migration Service (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html). + // Settings in JSON format for the target Amazon S3 endpoint. For more information + // about the available settings, see Extra Connection Attributes When Using + // Amazon S3 as a Target for AWS DMS (http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html#CHAP_Target.S3.Configuring) + // in the AWS Database Migration Service User Guide. S3Settings *S3Settings `type:"structure"` // The name of the server where the endpoint database resides. ServerName *string `type:"string"` + // The Amazon Resource Name (ARN) for the service access role you want to use + // to modify the endpoint. + ServiceAccessRoleArn *string `type:"string"` + // The SSL mode to be used. // // SSL mode can be one of four values: none, require, verify-ca, verify-full. @@ -8489,6 +8877,11 @@ func (s *ModifyEndpointInput) Validate() error { invalidParams.AddNested("DynamoDbSettings", err.(request.ErrInvalidParams)) } } + if s.ElasticsearchSettings != nil { + if err := s.ElasticsearchSettings.Validate(); err != nil { + invalidParams.AddNested("ElasticsearchSettings", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -8508,12 +8901,24 @@ func (s *ModifyEndpointInput) SetDatabaseName(v string) *ModifyEndpointInput { return s } +// SetDmsTransferSettings sets the DmsTransferSettings field's value. +func (s *ModifyEndpointInput) SetDmsTransferSettings(v *DmsTransferSettings) *ModifyEndpointInput { + s.DmsTransferSettings = v + return s +} + // SetDynamoDbSettings sets the DynamoDbSettings field's value. func (s *ModifyEndpointInput) SetDynamoDbSettings(v *DynamoDbSettings) *ModifyEndpointInput { s.DynamoDbSettings = v return s } +// SetElasticsearchSettings sets the ElasticsearchSettings field's value. +func (s *ModifyEndpointInput) SetElasticsearchSettings(v *ElasticsearchSettings) *ModifyEndpointInput { + s.ElasticsearchSettings = v + return s +} + // SetEndpointArn sets the EndpointArn field's value. func (s *ModifyEndpointInput) SetEndpointArn(v string) *ModifyEndpointInput { s.EndpointArn = &v @@ -8538,12 +8943,24 @@ func (s *ModifyEndpointInput) SetEngineName(v string) *ModifyEndpointInput { return s } +// SetExternalTableDefinition sets the ExternalTableDefinition field's value. +func (s *ModifyEndpointInput) SetExternalTableDefinition(v string) *ModifyEndpointInput { + s.ExternalTableDefinition = &v + return s +} + // SetExtraConnectionAttributes sets the ExtraConnectionAttributes field's value. func (s *ModifyEndpointInput) SetExtraConnectionAttributes(v string) *ModifyEndpointInput { s.ExtraConnectionAttributes = &v return s } +// SetKinesisSettings sets the KinesisSettings field's value. +func (s *ModifyEndpointInput) SetKinesisSettings(v *KinesisSettings) *ModifyEndpointInput { + s.KinesisSettings = v + return s +} + // SetMongoDbSettings sets the MongoDbSettings field's value. 
func (s *ModifyEndpointInput) SetMongoDbSettings(v *MongoDbSettings) *ModifyEndpointInput { s.MongoDbSettings = v @@ -8574,6 +8991,12 @@ func (s *ModifyEndpointInput) SetServerName(v string) *ModifyEndpointInput { return s } +// SetServiceAccessRoleArn sets the ServiceAccessRoleArn field's value. +func (s *ModifyEndpointInput) SetServiceAccessRoleArn(v string) *ModifyEndpointInput { + s.ServiceAccessRoleArn = &v + return s +} + // SetSslMode sets the SslMode field's value. func (s *ModifyEndpointInput) SetSslMode(v string) *ModifyEndpointInput { s.SslMode = &v @@ -8984,8 +9407,34 @@ func (s *ModifyReplicationSubnetGroupOutput) SetReplicationSubnetGroup(v *Replic type ModifyReplicationTaskInput struct { _ struct{} `type:"structure"` - // The start time for the Change Data Capture (CDC) operation. - CdcStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // Indicates when you want a change data capture (CDC) operation to start. Use + // either CdcStartPosition or CdcStartTime to specify when you want a CDC operation + // to start. Specifying both values results in an error. + // + // The value can be in date, checkpoint, or LSN/SCN format. + // + // Date Example: --cdc-start-position “2018-03-08T12:12:12” + // + // Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93" + // + // LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373” + CdcStartPosition *string `type:"string"` + + // Indicates the start time for a change data capture (CDC) operation. Use either + // CdcStartTime or CdcStartPosition to specify when you want a CDC operation + // to start. Specifying both values results in an error. + // + // Timestamp Example: --cdc-start-time “2018-03-08T12:12:12” + CdcStartTime *time.Time `type:"timestamp"` + + // Indicates when you want a change data capture (CDC) operation to stop. The + // value can be either server time or commit time. + // + // Server time example: --cdc-stop-position “server_time:3018-02-09T12:12:12” + // + // Commit time example: --cdc-stop-position “commit_time: 3018-02-09T12:12:12 + // “ + CdcStopPosition *string `type:"string"` // The migration type. // @@ -9042,12 +9491,24 @@ func (s *ModifyReplicationTaskInput) Validate() error { return nil } +// SetCdcStartPosition sets the CdcStartPosition field's value. +func (s *ModifyReplicationTaskInput) SetCdcStartPosition(v string) *ModifyReplicationTaskInput { + s.CdcStartPosition = &v + return s +} + // SetCdcStartTime sets the CdcStartTime field's value. func (s *ModifyReplicationTaskInput) SetCdcStartTime(v time.Time) *ModifyReplicationTaskInput { s.CdcStartTime = &v return s } +// SetCdcStopPosition sets the CdcStopPosition field's value. +func (s *ModifyReplicationTaskInput) SetCdcStopPosition(v string) *ModifyReplicationTaskInput { + s.CdcStopPosition = &v + return s +} + // SetMigrationType sets the MigrationType field's value. func (s *ModifyReplicationTaskInput) SetMigrationType(v string) *ModifyReplicationTaskInput { s.MigrationType = &v @@ -9140,6 +9601,13 @@ type MongoDbSettings struct { // Default value is false. ExtractDocId *string `type:"string"` + // The AWS KMS key identifier that is used to encrypt the content on the replication + // instance. If you don't specify a value for the KmsKeyId parameter, then AWS + // DMS uses your default encryption key. AWS KMS creates the default encryption + // key for your AWS account. 
Your AWS account has a different default encryption + // key for each AWS Region. + KmsKeyId *string `type:"string"` + // Specifies either document or table mode. // // Valid values: NONE, ONE @@ -9207,6 +9675,12 @@ func (s *MongoDbSettings) SetExtractDocId(v string) *MongoDbSettings { return s } +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *MongoDbSettings) SetKmsKeyId(v string) *MongoDbSettings { + s.KmsKeyId = &v + return s +} + // SetNestingLevel sets the NestingLevel field's value. func (s *MongoDbSettings) SetNestingLevel(v string) *MongoDbSettings { s.NestingLevel = &v @@ -9477,7 +9951,7 @@ type RefreshSchemasStatus struct { LastFailureMessage *string `type:"string"` // The date the schema was last refreshed. - LastRefreshDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastRefreshDate *time.Time `type:"timestamp"` // The Amazon Resource Name (ARN) of the replication instance. ReplicationInstanceArn *string `type:"string"` @@ -9529,7 +10003,16 @@ func (s *RefreshSchemasStatus) SetStatus(v string) *RefreshSchemasStatus { type ReloadTablesInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the replication instance. + // Options for reload. Specify data-reload to reload the data and re-validate + // it if validation is enabled. Specify validate-only to re-validate the table. + // This option applies only when validation is enabled for the task. + // + // Valid values: data-reload, validate-only + // + // Default value is data-reload. + ReloadOption *string `type:"string" enum:"ReloadOptionValue"` + + // The Amazon Resource Name (ARN) of the replication task. // // ReplicationTaskArn is a required field ReplicationTaskArn *string `type:"string" required:"true"` @@ -9566,6 +10049,12 @@ func (s *ReloadTablesInput) Validate() error { return nil } +// SetReloadOption sets the ReloadOption field's value. +func (s *ReloadTablesInput) SetReloadOption(v string) *ReloadTablesInput { + s.ReloadOption = &v + return s +} + // SetReplicationTaskArn sets the ReplicationTaskArn field's value. func (s *ReloadTablesInput) SetReplicationTaskArn(v string) *ReloadTablesInput { s.ReplicationTaskArn = &v @@ -9682,17 +10171,24 @@ type ReplicationInstance struct { // The Availability Zone for the instance. AvailabilityZone *string `type:"string"` + // The DNS name servers for the replication instance. + DnsNameServers *string `type:"string"` + // The engine version number of the replication instance. EngineVersion *string `type:"string"` + // The expiration date of the free replication instance that is part of the + // Free DMS program. + FreeUntil *time.Time `type:"timestamp"` + // The time the replication instance was created. - InstanceCreateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + InstanceCreateTime *time.Time `type:"timestamp"` - // The KMS key identifier that is used to encrypt the content on the replication - // instance. If you do not specify a value for the KmsKeyId parameter, then - // AWS DMS will use your default encryption key. AWS KMS creates the default - // encryption key for your AWS account. Your AWS account has a different default - // encryption key for each AWS region. + // The AWS KMS key identifier that is used to encrypt the content on the replication + // instance. If you don't specify a value for the KmsKeyId parameter, then AWS + // DMS uses your default encryption key. AWS KMS creates the default encryption + // key for your AWS account. 
Your AWS account has a different default encryption + // key for each AWS Region. KmsKeyId *string `type:"string"` // Specifies if the replication instance is a Multi-AZ deployment. You cannot @@ -9734,12 +10230,16 @@ type ReplicationInstance struct { ReplicationInstanceIdentifier *string `type:"string"` // The private IP address of the replication instance. + // + // Deprecated: ReplicationInstancePrivateIpAddress has been deprecated ReplicationInstancePrivateIpAddress *string `deprecated:"true" type:"string"` // The private IP address of the replication instance. ReplicationInstancePrivateIpAddresses []*string `type:"list"` // The public IP address of the replication instance. + // + // Deprecated: ReplicationInstancePublicIpAddress has been deprecated ReplicationInstancePublicIpAddress *string `deprecated:"true" type:"string"` // The public IP address of the replication instance. @@ -9786,12 +10286,24 @@ func (s *ReplicationInstance) SetAvailabilityZone(v string) *ReplicationInstance return s } +// SetDnsNameServers sets the DnsNameServers field's value. +func (s *ReplicationInstance) SetDnsNameServers(v string) *ReplicationInstance { + s.DnsNameServers = &v + return s +} + // SetEngineVersion sets the EngineVersion field's value. func (s *ReplicationInstance) SetEngineVersion(v string) *ReplicationInstance { s.EngineVersion = &v return s } +// SetFreeUntil sets the FreeUntil field's value. +func (s *ReplicationInstance) SetFreeUntil(v time.Time) *ReplicationInstance { + s.FreeUntil = &v + return s +} + // SetInstanceCreateTime sets the InstanceCreateTime field's value. func (s *ReplicationInstance) SetInstanceCreateTime(v time.Time) *ReplicationInstance { s.InstanceCreateTime = &v @@ -10053,12 +10565,39 @@ func (s *ReplicationSubnetGroup) SetVpcId(v string) *ReplicationSubnetGroup { type ReplicationTask struct { _ struct{} `type:"structure"` + // Indicates when you want a change data capture (CDC) operation to start. Use + // either CdcStartPosition or CdcStartTime to specify when you want a CDC operation + // to start. Specifying both values results in an error. + // + // The value can be in date, checkpoint, or LSN/SCN format. + // + // Date Example: --cdc-start-position “2018-03-08T12:12:12” + // + // Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93" + // + // LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373” + CdcStartPosition *string `type:"string"` + + // Indicates when you want a change data capture (CDC) operation to stop. The + // value can be either server time or commit time. + // + // Server time example: --cdc-stop-position “server_time:3018-02-09T12:12:12” + // + // Commit time example: --cdc-stop-position “commit_time: 3018-02-09T12:12:12 + // “ + CdcStopPosition *string `type:"string"` + // The last error (failure) message generated for the replication instance. LastFailureMessage *string `type:"string"` // The type of migration. MigrationType *string `type:"string" enum:"MigrationTypeValue"` + // Indicates the last checkpoint that occurred during a change data capture + // (CDC) operation. You can provide this value to the CdcStartPosition parameter + // to start a CDC operation that begins at that checkpoint. + RecoveryCheckpoint *string `type:"string"` + // The Amazon Resource Name (ARN) of the replication instance. 
ReplicationInstanceArn *string `type:"string"` @@ -10066,9 +10605,9 @@ type ReplicationTask struct { ReplicationTaskArn *string `type:"string"` // The date the replication task was created. - ReplicationTaskCreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ReplicationTaskCreationDate *time.Time `type:"timestamp"` - // The replication task identifier. + // The user-assigned replication task identifier or name. // // Constraints: // @@ -10083,7 +10622,7 @@ type ReplicationTask struct { ReplicationTaskSettings *string `type:"string"` // The date the replication task is scheduled to start. - ReplicationTaskStartDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ReplicationTaskStartDate *time.Time `type:"timestamp"` // The statistics for the task, including elapsed time, tables loaded, and table // errors. @@ -10115,6 +10654,18 @@ func (s ReplicationTask) GoString() string { return s.String() } +// SetCdcStartPosition sets the CdcStartPosition field's value. +func (s *ReplicationTask) SetCdcStartPosition(v string) *ReplicationTask { + s.CdcStartPosition = &v + return s +} + +// SetCdcStopPosition sets the CdcStopPosition field's value. +func (s *ReplicationTask) SetCdcStopPosition(v string) *ReplicationTask { + s.CdcStopPosition = &v + return s +} + // SetLastFailureMessage sets the LastFailureMessage field's value. func (s *ReplicationTask) SetLastFailureMessage(v string) *ReplicationTask { s.LastFailureMessage = &v @@ -10127,6 +10678,12 @@ func (s *ReplicationTask) SetMigrationType(v string) *ReplicationTask { return s } +// SetRecoveryCheckpoint sets the RecoveryCheckpoint field's value. +func (s *ReplicationTask) SetRecoveryCheckpoint(v string) *ReplicationTask { + s.RecoveryCheckpoint = &v + return s +} + // SetReplicationInstanceArn sets the ReplicationInstanceArn field's value. func (s *ReplicationTask) SetReplicationInstanceArn(v string) *ReplicationTask { s.ReplicationInstanceArn = &v @@ -10220,7 +10777,7 @@ type ReplicationTaskAssessmentResult struct { ReplicationTaskIdentifier *string `type:"string"` // The date the task assessment was completed. - ReplicationTaskLastAssessmentDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ReplicationTaskLastAssessmentDate *time.Time `type:"timestamp"` // The URL of the S3 object containing the task assessment results. S3ObjectUrl *string `type:"string"` @@ -10370,6 +10927,7 @@ type S3Settings struct { // carriage return (\n). CsvRowDelimiter *string `type:"string"` + // The external table definition. ExternalTableDefinition *string `type:"string"` // The Amazon Resource Name (ARN) used by the service access IAM role. @@ -10492,8 +11050,34 @@ func (s *StartReplicationTaskAssessmentOutput) SetReplicationTask(v *Replication type StartReplicationTaskInput struct { _ struct{} `type:"structure"` - // The start time for the Change Data Capture (CDC) operation. - CdcStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // Indicates when you want a change data capture (CDC) operation to start. Use + // either CdcStartPosition or CdcStartTime to specify when you want a CDC operation + // to start. Specifying both values results in an error. + // + // The value can be in date, checkpoint, or LSN/SCN format. 
+ // + // Date Example: --cdc-start-position “2018-03-08T12:12:12” + // + // Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93" + // + // LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373” + CdcStartPosition *string `type:"string"` + + // Indicates the start time for a change data capture (CDC) operation. Use either + // CdcStartTime or CdcStartPosition to specify when you want a CDC operation + // to start. Specifying both values results in an error. + // + // Timestamp Example: --cdc-start-time “2018-03-08T12:12:12” + CdcStartTime *time.Time `type:"timestamp"` + + // Indicates when you want a change data capture (CDC) operation to stop. The + // value can be either server time or commit time. + // + // Server time example: --cdc-stop-position “server_time:3018-02-09T12:12:12” + // + // Commit time example: --cdc-stop-position “commit_time: 3018-02-09T12:12:12 + // “ + CdcStopPosition *string `type:"string"` // The Amazon Resource Name (ARN) of the replication task to be started. // @@ -10532,12 +11116,24 @@ func (s *StartReplicationTaskInput) Validate() error { return nil } +// SetCdcStartPosition sets the CdcStartPosition field's value. +func (s *StartReplicationTaskInput) SetCdcStartPosition(v string) *StartReplicationTaskInput { + s.CdcStartPosition = &v + return s +} + // SetCdcStartTime sets the CdcStartTime field's value. func (s *StartReplicationTaskInput) SetCdcStartTime(v time.Time) *StartReplicationTaskInput { s.CdcStartTime = &v return s } +// SetCdcStopPosition sets the CdcStopPosition field's value. +func (s *StartReplicationTaskInput) SetCdcStopPosition(v string) *StartReplicationTaskInput { + s.CdcStopPosition = &v + return s +} + // SetReplicationTaskArn sets the ReplicationTaskArn field's value. func (s *StartReplicationTaskInput) SetReplicationTaskArn(v string) *StartReplicationTaskInput { s.ReplicationTaskArn = &v @@ -10681,9 +11277,13 @@ type SupportedEndpointType struct { // The type of endpoint. EndpointType *string `type:"string" enum:"ReplicationEndpointTypeValue"` + // The expanded name for the engine name. For example, if the EngineName parameter + // is "aurora," this value would be "Amazon Aurora MySQL." + EngineDisplayName *string `type:"string"` + // The database engine name. Valid values, depending on the EndPointType, include - // mysql, oracle, postgres, mariadb, aurora, redshift, S3, sybase, dynamodb, - // mongodb, and sqlserver. + // mysql, oracle, postgres, mariadb, aurora, aurora-postgresql, redshift, s3, + // db2, azuredb, sybase, sybase, dynamodb, mongodb, and sqlserver. EngineName *string `type:"string"` // Indicates if Change Data Capture (CDC) is supported. @@ -10706,6 +11306,12 @@ func (s *SupportedEndpointType) SetEndpointType(v string) *SupportedEndpointType return s } +// SetEngineDisplayName sets the EngineDisplayName field's value. +func (s *SupportedEndpointType) SetEngineDisplayName(v string) *SupportedEndpointType { + s.EngineDisplayName = &v + return s +} + // SetEngineName sets the EngineName field's value. func (s *SupportedEndpointType) SetEngineName(v string) *SupportedEndpointType { s.EngineName = &v @@ -10743,7 +11349,7 @@ type TableStatistics struct { Inserts *int64 `type:"long"` // The last time the table was updated. - LastUpdateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdateTime *time.Time `type:"timestamp"` // The schema name. 
SchemaName *string `type:"string"` @@ -10793,6 +11399,9 @@ type TableStatistics struct { // * Error—The table could not be validated because of an unexpected error. ValidationState *string `type:"string"` + // Additional details about the state of validation. + ValidationStateDetails *string `type:"string"` + // The number of records that could not be validated. ValidationSuspendedRecords *int64 `type:"long"` } @@ -10891,6 +11500,12 @@ func (s *TableStatistics) SetValidationState(v string) *TableStatistics { return s } +// SetValidationStateDetails sets the ValidationStateDetails field's value. +func (s *TableStatistics) SetValidationStateDetails(v string) *TableStatistics { + s.ValidationStateDetails = &v + return s +} + // SetValidationSuspendedRecords sets the ValidationSuspendedRecords field's value. func (s *TableStatistics) SetValidationSuspendedRecords(v int64) *TableStatistics { s.ValidationSuspendedRecords = &v @@ -11115,6 +11730,11 @@ const ( DmsSslModeValueVerifyFull = "verify-full" ) +const ( + // MessageFormatValueJson is a MessageFormatValue enum value + MessageFormatValueJson = "json" +) + const ( // MigrationTypeValueFullLoad is a MigrationTypeValue enum value MigrationTypeValueFullLoad = "full-load" @@ -11145,6 +11765,14 @@ const ( RefreshSchemasStatusTypeValueRefreshing = "refreshing" ) +const ( + // ReloadOptionValueDataReload is a ReloadOptionValue enum value + ReloadOptionValueDataReload = "data-reload" + + // ReloadOptionValueValidateOnly is a ReloadOptionValue enum value + ReloadOptionValueValidateOnly = "validate-only" +) + const ( // ReplicationEndpointTypeValueSource is a ReplicationEndpointTypeValue enum value ReplicationEndpointTypeValueSource = "source" diff --git a/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/doc.go b/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/doc.go index ed8b2a33665..9820993c6c0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/doc.go @@ -11,8 +11,9 @@ // between different database platforms, such as Oracle to MySQL or SQL Server // to PostgreSQL. // -// For more information about AWS DMS, see the AWS DMS user guide at What Is -// AWS Database Migration Service? (http://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) +// For more information about AWS DMS, see What Is AWS Database Migration Service? +// (http://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) in the AWS +// Database Migration User Guide. // // See https://docs.aws.amazon.com/goto/WebAPI/dms-2016-01-01 for more information on this service. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/service.go b/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/service.go index bf5b476130b..8ee775e03ca 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "dms" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "dms" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. 
+ ServiceID = "Database Migration Service" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the DatabaseMigrationService client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/waiters.go new file mode 100644 index 00000000000..225f2d58ccf --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/databasemigrationservice/waiters.go @@ -0,0 +1,563 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package databasemigrationservice + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// WaitUntilEndpointDeleted uses the AWS Database Migration Service API operation +// DescribeEndpoints to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *DatabaseMigrationService) WaitUntilEndpointDeleted(input *DescribeEndpointsInput) error { + return c.WaitUntilEndpointDeletedWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilEndpointDeletedWithContext is an extended version of WaitUntilEndpointDeleted. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DatabaseMigrationService) WaitUntilEndpointDeletedWithContext(ctx aws.Context, input *DescribeEndpointsInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilEndpointDeleted", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ResourceNotFoundFault", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Endpoints[].Status", + Expected: "active", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "Endpoints[].Status", + Expected: "creating", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeEndpointsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeEndpointsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilReplicationInstanceAvailable uses the AWS Database Migration Service API operation +// DescribeReplicationInstances to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. 
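+//
+// A minimal usage sketch, not part of the generated API surface: it assumes an
+// already-constructed client, and the filter name and value are illustrative.
+//
+//    // Block until the filtered replication instance reports "available",
+//    // retrying up to 60 times at 60-second intervals (see the waiter below).
+//    err := client.WaitUntilReplicationInstanceAvailable(&DescribeReplicationInstancesInput{
+//        Filters: []*Filter{{
+//            Name:   aws.String("replication-instance-id"),
+//            Values: []*string{aws.String("example-instance")},
+//        }},
+//    })
+//    if err != nil {
+//        // the instance never reached "available", or a request failed
+//    }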
+func (c *DatabaseMigrationService) WaitUntilReplicationInstanceAvailable(input *DescribeReplicationInstancesInput) error { + return c.WaitUntilReplicationInstanceAvailableWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilReplicationInstanceAvailableWithContext is an extended version of WaitUntilReplicationInstanceAvailable. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DatabaseMigrationService) WaitUntilReplicationInstanceAvailableWithContext(ctx aws.Context, input *DescribeReplicationInstancesInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilReplicationInstanceAvailable", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(60 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "ReplicationInstances[].ReplicationInstanceStatus", + Expected: "available", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationInstances[].ReplicationInstanceStatus", + Expected: "deleting", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationInstances[].ReplicationInstanceStatus", + Expected: "incompatible-credentials", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationInstances[].ReplicationInstanceStatus", + Expected: "incompatible-network", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationInstances[].ReplicationInstanceStatus", + Expected: "inaccessible-encryption-credentials", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeReplicationInstancesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeReplicationInstancesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilReplicationInstanceDeleted uses the AWS Database Migration Service API operation +// DescribeReplicationInstances to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *DatabaseMigrationService) WaitUntilReplicationInstanceDeleted(input *DescribeReplicationInstancesInput) error { + return c.WaitUntilReplicationInstanceDeletedWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilReplicationInstanceDeletedWithContext is an extended version of WaitUntilReplicationInstanceDeleted. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
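+//
+// A minimal sketch of bounding the wait with a context; the client, input, and
+// timeout below are assumptions for illustration only.
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
+//    defer cancel()
+//    // input is a previously built *DescribeReplicationInstancesInput.
+//    err := client.WaitUntilReplicationInstanceDeletedWithContext(ctx, input,
+//        request.WithWaiterMaxAttempts(120))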
+func (c *DatabaseMigrationService) WaitUntilReplicationInstanceDeletedWithContext(ctx aws.Context, input *DescribeReplicationInstancesInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilReplicationInstanceDeleted", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(15 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationInstances[].ReplicationInstanceStatus", + Expected: "available", + }, + { + State: request.SuccessWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ResourceNotFoundFault", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeReplicationInstancesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeReplicationInstancesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilReplicationTaskDeleted uses the AWS Database Migration Service API operation +// DescribeReplicationTasks to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *DatabaseMigrationService) WaitUntilReplicationTaskDeleted(input *DescribeReplicationTasksInput) error { + return c.WaitUntilReplicationTaskDeletedWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilReplicationTaskDeletedWithContext is an extended version of WaitUntilReplicationTaskDeleted. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DatabaseMigrationService) WaitUntilReplicationTaskDeletedWithContext(ctx aws.Context, input *DescribeReplicationTasksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilReplicationTaskDeleted", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(15 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "ready", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "creating", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "stopped", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "running", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "failed", + }, + { + State: request.SuccessWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ResourceNotFoundFault", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeReplicationTasksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeReplicationTasksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilReplicationTaskReady uses the AWS Database Migration Service API operation +// DescribeReplicationTasks to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *DatabaseMigrationService) WaitUntilReplicationTaskReady(input *DescribeReplicationTasksInput) error { + return c.WaitUntilReplicationTaskReadyWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilReplicationTaskReadyWithContext is an extended version of WaitUntilReplicationTaskReady. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DatabaseMigrationService) WaitUntilReplicationTaskReadyWithContext(ctx aws.Context, input *DescribeReplicationTasksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilReplicationTaskReady", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(15 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "ready", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "starting", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "running", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "stopping", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "stopped", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "failed", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "modifying", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "testing", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "deleting", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeReplicationTasksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeReplicationTasksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilReplicationTaskRunning uses the AWS Database Migration Service API operation +// DescribeReplicationTasks to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. 
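+//
+// A minimal sketch; the client variable and the task ARN are placeholders.
+//
+//    err := client.WaitUntilReplicationTaskRunning(&DescribeReplicationTasksInput{
+//        Filters: []*Filter{{
+//            Name:   aws.String("replication-task-arn"),
+//            Values: []*string{aws.String(taskArn)},
+//        }},
+//    })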
+func (c *DatabaseMigrationService) WaitUntilReplicationTaskRunning(input *DescribeReplicationTasksInput) error { + return c.WaitUntilReplicationTaskRunningWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilReplicationTaskRunningWithContext is an extended version of WaitUntilReplicationTaskRunning. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DatabaseMigrationService) WaitUntilReplicationTaskRunningWithContext(ctx aws.Context, input *DescribeReplicationTasksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilReplicationTaskRunning", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(15 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "running", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "ready", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "creating", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "stopping", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "stopped", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "failed", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "modifying", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "testing", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "deleting", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeReplicationTasksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeReplicationTasksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilReplicationTaskStopped uses the AWS Database Migration Service API operation +// DescribeReplicationTasks to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *DatabaseMigrationService) WaitUntilReplicationTaskStopped(input *DescribeReplicationTasksInput) error { + return c.WaitUntilReplicationTaskStoppedWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilReplicationTaskStoppedWithContext is an extended version of WaitUntilReplicationTaskStopped. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DatabaseMigrationService) WaitUntilReplicationTaskStoppedWithContext(ctx aws.Context, input *DescribeReplicationTasksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilReplicationTaskStopped", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(15 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "stopped", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "ready", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "creating", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "starting", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "running", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "failed", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "modifying", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "testing", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "ReplicationTasks[].Status", + Expected: "deleting", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeReplicationTasksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeReplicationTasksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilTestConnectionSucceeds uses the AWS Database Migration Service API operation +// TestConnection to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *DatabaseMigrationService) WaitUntilTestConnectionSucceeds(input *TestConnectionInput) error { + return c.WaitUntilTestConnectionSucceedsWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilTestConnectionSucceedsWithContext is an extended version of WaitUntilTestConnectionSucceeds. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
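+//
+// A minimal sketch; the client variable and both ARN values are placeholders.
+// The waiter succeeds when Connection.Status reaches "successful" and fails
+// on "failed" (see the acceptors below).
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
+//    defer cancel()
+//    err := client.WaitUntilTestConnectionSucceedsWithContext(ctx, &TestConnectionInput{
+//        EndpointArn:            aws.String(endpointArn),
+//        ReplicationInstanceArn: aws.String(replicationInstanceArn),
+//    })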
+func (c *DatabaseMigrationService) WaitUntilTestConnectionSucceedsWithContext(ctx aws.Context, input *TestConnectionInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilTestConnectionSucceeds", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathWaiterMatch, Argument: "Connection.Status", + Expected: "successful", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathWaiterMatch, Argument: "Connection.Status", + Expected: "failed", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *TestConnectionInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.TestConnectionRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/datapipeline/api.go b/vendor/github.com/aws/aws-sdk-go/service/datapipeline/api.go new file mode 100644 index 00000000000..24426726a98 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/datapipeline/api.go @@ -0,0 +1,4660 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package datapipeline + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +const opActivatePipeline = "ActivatePipeline" + +// ActivatePipelineRequest generates a "aws/request.Request" representing the +// client's request for the ActivatePipeline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ActivatePipeline for more information on using the ActivatePipeline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ActivatePipelineRequest method. +// req, resp := client.ActivatePipelineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ActivatePipeline +func (c *DataPipeline) ActivatePipelineRequest(input *ActivatePipelineInput) (req *request.Request, output *ActivatePipelineOutput) { + op := &request.Operation{ + Name: opActivatePipeline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ActivatePipelineInput{} + } + + output = &ActivatePipelineOutput{} + req = c.newRequest(op, input, output) + return +} + +// ActivatePipeline API operation for AWS Data Pipeline. +// +// Validates the specified pipeline and starts processing pipeline tasks. If +// the pipeline does not pass validation, activation fails. +// +// If you need to pause the pipeline to investigate an issue with a component, +// such as a data source or script, call DeactivatePipeline. 
+// +// To activate a finished pipeline, modify the end date for the pipeline and +// then activate it. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation ActivatePipeline for usage and error information. +// +// Returned Error Codes: +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ActivatePipeline +func (c *DataPipeline) ActivatePipeline(input *ActivatePipelineInput) (*ActivatePipelineOutput, error) { + req, out := c.ActivatePipelineRequest(input) + return out, req.Send() +} + +// ActivatePipelineWithContext is the same as ActivatePipeline with the addition of +// the ability to pass a context and additional request options. +// +// See ActivatePipeline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) ActivatePipelineWithContext(ctx aws.Context, input *ActivatePipelineInput, opts ...request.Option) (*ActivatePipelineOutput, error) { + req, out := c.ActivatePipelineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAddTags = "AddTags" + +// AddTagsRequest generates a "aws/request.Request" representing the +// client's request for the AddTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddTags for more information on using the AddTags +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddTagsRequest method. 
+// req, resp := client.AddTagsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/AddTags +func (c *DataPipeline) AddTagsRequest(input *AddTagsInput) (req *request.Request, output *AddTagsOutput) { + op := &request.Operation{ + Name: opAddTags, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddTagsInput{} + } + + output = &AddTagsOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddTags API operation for AWS Data Pipeline. +// +// Adds or modifies tags for the specified pipeline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation AddTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/AddTags +func (c *DataPipeline) AddTags(input *AddTagsInput) (*AddTagsOutput, error) { + req, out := c.AddTagsRequest(input) + return out, req.Send() +} + +// AddTagsWithContext is the same as AddTags with the addition of +// the ability to pass a context and additional request options. +// +// See AddTags for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) AddTagsWithContext(ctx aws.Context, input *AddTagsInput, opts ...request.Option) (*AddTagsOutput, error) { + req, out := c.AddTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreatePipeline = "CreatePipeline" + +// CreatePipelineRequest generates a "aws/request.Request" representing the +// client's request for the CreatePipeline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreatePipeline for more information on using the CreatePipeline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreatePipelineRequest method. 
+// req, resp := client.CreatePipelineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/CreatePipeline +func (c *DataPipeline) CreatePipelineRequest(input *CreatePipelineInput) (req *request.Request, output *CreatePipelineOutput) { + op := &request.Operation{ + Name: opCreatePipeline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreatePipelineInput{} + } + + output = &CreatePipelineOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreatePipeline API operation for AWS Data Pipeline. +// +// Creates a new, empty pipeline. Use PutPipelineDefinition to populate the +// pipeline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation CreatePipeline for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/CreatePipeline +func (c *DataPipeline) CreatePipeline(input *CreatePipelineInput) (*CreatePipelineOutput, error) { + req, out := c.CreatePipelineRequest(input) + return out, req.Send() +} + +// CreatePipelineWithContext is the same as CreatePipeline with the addition of +// the ability to pass a context and additional request options. +// +// See CreatePipeline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) CreatePipelineWithContext(ctx aws.Context, input *CreatePipelineInput, opts ...request.Option) (*CreatePipelineOutput, error) { + req, out := c.CreatePipelineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeactivatePipeline = "DeactivatePipeline" + +// DeactivatePipelineRequest generates a "aws/request.Request" representing the +// client's request for the DeactivatePipeline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeactivatePipeline for more information on using the DeactivatePipeline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeactivatePipelineRequest method. 
+// req, resp := client.DeactivatePipelineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/DeactivatePipeline +func (c *DataPipeline) DeactivatePipelineRequest(input *DeactivatePipelineInput) (req *request.Request, output *DeactivatePipelineOutput) { + op := &request.Operation{ + Name: opDeactivatePipeline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeactivatePipelineInput{} + } + + output = &DeactivatePipelineOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeactivatePipeline API operation for AWS Data Pipeline. +// +// Deactivates the specified running pipeline. The pipeline is set to the DEACTIVATING +// state until the deactivation process completes. +// +// To resume a deactivated pipeline, use ActivatePipeline. By default, the pipeline +// resumes from the last completed execution. Optionally, you can specify the +// date and time to resume the pipeline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation DeactivatePipeline for usage and error information. +// +// Returned Error Codes: +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/DeactivatePipeline +func (c *DataPipeline) DeactivatePipeline(input *DeactivatePipelineInput) (*DeactivatePipelineOutput, error) { + req, out := c.DeactivatePipelineRequest(input) + return out, req.Send() +} + +// DeactivatePipelineWithContext is the same as DeactivatePipeline with the addition of +// the ability to pass a context and additional request options. +// +// See DeactivatePipeline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) DeactivatePipelineWithContext(ctx aws.Context, input *DeactivatePipelineInput, opts ...request.Option) (*DeactivatePipelineOutput, error) { + req, out := c.DeactivatePipelineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeletePipeline = "DeletePipeline" + +// DeletePipelineRequest generates a "aws/request.Request" representing the +// client's request for the DeletePipeline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeletePipeline for more information on using the DeletePipeline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeletePipelineRequest method. +// req, resp := client.DeletePipelineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/DeletePipeline +func (c *DataPipeline) DeletePipelineRequest(input *DeletePipelineInput) (req *request.Request, output *DeletePipelineOutput) { + op := &request.Operation{ + Name: opDeletePipeline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeletePipelineInput{} + } + + output = &DeletePipelineOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeletePipeline API operation for AWS Data Pipeline. +// +// Deletes a pipeline, its pipeline definition, and its run history. AWS Data +// Pipeline attempts to cancel instances associated with the pipeline that are +// currently being processed by task runners. +// +// Deleting a pipeline cannot be undone. You cannot query or restore a deleted +// pipeline. To temporarily pause a pipeline instead of deleting it, call SetStatus +// with the status set to PAUSE on individual components. Components that are +// paused by SetStatus can be resumed. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation DeletePipeline for usage and error information. +// +// Returned Error Codes: +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/DeletePipeline +func (c *DataPipeline) DeletePipeline(input *DeletePipelineInput) (*DeletePipelineOutput, error) { + req, out := c.DeletePipelineRequest(input) + return out, req.Send() +} + +// DeletePipelineWithContext is the same as DeletePipeline with the addition of +// the ability to pass a context and additional request options. +// +// See DeletePipeline for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
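+//
+// A minimal sketch; the client variable, timeout, and pipeline ID are
+// placeholders for illustration only.
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
+//    defer cancel()
+//    _, err := client.DeletePipelineWithContext(ctx, &DeletePipelineInput{
+//        PipelineId: aws.String("df-EXAMPLE"),
+//    })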
+func (c *DataPipeline) DeletePipelineWithContext(ctx aws.Context, input *DeletePipelineInput, opts ...request.Option) (*DeletePipelineOutput, error) { + req, out := c.DeletePipelineRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeObjects = "DescribeObjects" + +// DescribeObjectsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeObjects operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeObjects for more information on using the DescribeObjects +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeObjectsRequest method. +// req, resp := client.DescribeObjectsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/DescribeObjects +func (c *DataPipeline) DescribeObjectsRequest(input *DescribeObjectsInput) (req *request.Request, output *DescribeObjectsOutput) { + op := &request.Operation{ + Name: opDescribeObjects, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"marker"}, + OutputTokens: []string{"marker"}, + LimitToken: "", + TruncationToken: "hasMoreResults", + }, + } + + if input == nil { + input = &DescribeObjectsInput{} + } + + output = &DescribeObjectsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeObjects API operation for AWS Data Pipeline. +// +// Gets the object definitions for a set of objects associated with the pipeline. +// Object definitions are composed of a set of fields that define the properties +// of the object. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation DescribeObjects for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/DescribeObjects +func (c *DataPipeline) DescribeObjects(input *DescribeObjectsInput) (*DescribeObjectsOutput, error) { + req, out := c.DescribeObjectsRequest(input) + return out, req.Send() +} + +// DescribeObjectsWithContext is the same as DescribeObjects with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeObjects for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) DescribeObjectsWithContext(ctx aws.Context, input *DescribeObjectsInput, opts ...request.Option) (*DescribeObjectsOutput, error) { + req, out := c.DescribeObjectsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeObjectsPages iterates over the pages of a DescribeObjects operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeObjects method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeObjects operation. +// pageNum := 0 +// err := client.DescribeObjectsPages(params, +// func(page *DescribeObjectsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *DataPipeline) DescribeObjectsPages(input *DescribeObjectsInput, fn func(*DescribeObjectsOutput, bool) bool) error { + return c.DescribeObjectsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeObjectsPagesWithContext same as DescribeObjectsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) DescribeObjectsPagesWithContext(ctx aws.Context, input *DescribeObjectsInput, fn func(*DescribeObjectsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeObjectsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeObjectsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeObjectsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribePipelines = "DescribePipelines" + +// DescribePipelinesRequest generates a "aws/request.Request" representing the +// client's request for the DescribePipelines operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribePipelines for more information on using the DescribePipelines +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribePipelinesRequest method. +// req, resp := client.DescribePipelinesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/DescribePipelines +func (c *DataPipeline) DescribePipelinesRequest(input *DescribePipelinesInput) (req *request.Request, output *DescribePipelinesOutput) { + op := &request.Operation{ + Name: opDescribePipelines, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribePipelinesInput{} + } + + output = &DescribePipelinesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribePipelines API operation for AWS Data Pipeline. +// +// Retrieves metadata about one or more pipelines. The information retrieved +// includes the name of the pipeline, the pipeline identifier, its current state, +// and the user account that owns the pipeline. Using account credentials, you +// can retrieve metadata about pipelines that you or your IAM users have created. +// If you are using an IAM user account, you can retrieve metadata about only +// those pipelines for which you have read permissions. +// +// To retrieve the full pipeline definition instead of metadata about the pipeline, +// call GetPipelineDefinition. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation DescribePipelines for usage and error information. +// +// Returned Error Codes: +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/DescribePipelines +func (c *DataPipeline) DescribePipelines(input *DescribePipelinesInput) (*DescribePipelinesOutput, error) { + req, out := c.DescribePipelinesRequest(input) + return out, req.Send() +} + +// DescribePipelinesWithContext is the same as DescribePipelines with the addition of +// the ability to pass a context and additional request options. +// +// See DescribePipelines for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *DataPipeline) DescribePipelinesWithContext(ctx aws.Context, input *DescribePipelinesInput, opts ...request.Option) (*DescribePipelinesOutput, error) { + req, out := c.DescribePipelinesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opEvaluateExpression = "EvaluateExpression" + +// EvaluateExpressionRequest generates a "aws/request.Request" representing the +// client's request for the EvaluateExpression operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See EvaluateExpression for more information on using the EvaluateExpression +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the EvaluateExpressionRequest method. +// req, resp := client.EvaluateExpressionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/EvaluateExpression +func (c *DataPipeline) EvaluateExpressionRequest(input *EvaluateExpressionInput) (req *request.Request, output *EvaluateExpressionOutput) { + op := &request.Operation{ + Name: opEvaluateExpression, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &EvaluateExpressionInput{} + } + + output = &EvaluateExpressionOutput{} + req = c.newRequest(op, input, output) + return +} + +// EvaluateExpression API operation for AWS Data Pipeline. +// +// Task runners call EvaluateExpression to evaluate a string in the context +// of the specified object. For example, a task runner can evaluate SQL queries +// stored in Amazon S3. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation EvaluateExpression for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeTaskNotFoundException "TaskNotFoundException" +// The specified task was not found. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/EvaluateExpression +func (c *DataPipeline) EvaluateExpression(input *EvaluateExpressionInput) (*EvaluateExpressionOutput, error) { + req, out := c.EvaluateExpressionRequest(input) + return out, req.Send() +} + +// EvaluateExpressionWithContext is the same as EvaluateExpression with the addition of +// the ability to pass a context and additional request options. +// +// See EvaluateExpression for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) EvaluateExpressionWithContext(ctx aws.Context, input *EvaluateExpressionInput, opts ...request.Option) (*EvaluateExpressionOutput, error) { + req, out := c.EvaluateExpressionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetPipelineDefinition = "GetPipelineDefinition" + +// GetPipelineDefinitionRequest generates a "aws/request.Request" representing the +// client's request for the GetPipelineDefinition operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetPipelineDefinition for more information on using the GetPipelineDefinition +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetPipelineDefinitionRequest method. +// req, resp := client.GetPipelineDefinitionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/GetPipelineDefinition +func (c *DataPipeline) GetPipelineDefinitionRequest(input *GetPipelineDefinitionInput) (req *request.Request, output *GetPipelineDefinitionOutput) { + op := &request.Operation{ + Name: opGetPipelineDefinition, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetPipelineDefinitionInput{} + } + + output = &GetPipelineDefinitionOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetPipelineDefinition API operation for AWS Data Pipeline. +// +// Gets the definition of the specified pipeline. You can call GetPipelineDefinition +// to retrieve the pipeline definition that you provided using PutPipelineDefinition. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation GetPipelineDefinition for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. 
Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/GetPipelineDefinition +func (c *DataPipeline) GetPipelineDefinition(input *GetPipelineDefinitionInput) (*GetPipelineDefinitionOutput, error) { + req, out := c.GetPipelineDefinitionRequest(input) + return out, req.Send() +} + +// GetPipelineDefinitionWithContext is the same as GetPipelineDefinition with the addition of +// the ability to pass a context and additional request options. +// +// See GetPipelineDefinition for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) GetPipelineDefinitionWithContext(ctx aws.Context, input *GetPipelineDefinitionInput, opts ...request.Option) (*GetPipelineDefinitionOutput, error) { + req, out := c.GetPipelineDefinitionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListPipelines = "ListPipelines" + +// ListPipelinesRequest generates a "aws/request.Request" representing the +// client's request for the ListPipelines operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListPipelines for more information on using the ListPipelines +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListPipelinesRequest method. +// req, resp := client.ListPipelinesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ListPipelines +func (c *DataPipeline) ListPipelinesRequest(input *ListPipelinesInput) (req *request.Request, output *ListPipelinesOutput) { + op := &request.Operation{ + Name: opListPipelines, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"marker"}, + OutputTokens: []string{"marker"}, + LimitToken: "", + TruncationToken: "hasMoreResults", + }, + } + + if input == nil { + input = &ListPipelinesInput{} + } + + output = &ListPipelinesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListPipelines API operation for AWS Data Pipeline. +// +// Lists the pipeline identifiers for all active pipelines that you have permission +// to access. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation ListPipelines for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ListPipelines +func (c *DataPipeline) ListPipelines(input *ListPipelinesInput) (*ListPipelinesOutput, error) { + req, out := c.ListPipelinesRequest(input) + return out, req.Send() +} + +// ListPipelinesWithContext is the same as ListPipelines with the addition of +// the ability to pass a context and additional request options. +// +// See ListPipelines for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) ListPipelinesWithContext(ctx aws.Context, input *ListPipelinesInput, opts ...request.Option) (*ListPipelinesOutput, error) { + req, out := c.ListPipelinesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListPipelinesPages iterates over the pages of a ListPipelines operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListPipelines method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListPipelines operation. +// pageNum := 0 +// err := client.ListPipelinesPages(params, +// func(page *ListPipelinesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *DataPipeline) ListPipelinesPages(input *ListPipelinesInput, fn func(*ListPipelinesOutput, bool) bool) error { + return c.ListPipelinesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListPipelinesPagesWithContext same as ListPipelinesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) ListPipelinesPagesWithContext(ctx aws.Context, input *ListPipelinesInput, fn func(*ListPipelinesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListPipelinesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListPipelinesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil
+ },
+ }
+
+ cont := true
+ for p.Next() && cont {
+ cont = fn(p.Page().(*ListPipelinesOutput), !p.HasNextPage())
+ }
+ return p.Err()
+}
+
+const opPollForTask = "PollForTask"
+
+// PollForTaskRequest generates a "aws/request.Request" representing the
+// client's request for the PollForTask operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See PollForTask for more information on using the PollForTask
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the PollForTaskRequest method.
+// req, resp := client.PollForTaskRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/PollForTask
+func (c *DataPipeline) PollForTaskRequest(input *PollForTaskInput) (req *request.Request, output *PollForTaskOutput) {
+ op := &request.Operation{
+ Name: opPollForTask,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &PollForTaskInput{}
+ }
+
+ output = &PollForTaskOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// PollForTask API operation for AWS Data Pipeline.
+//
+// Task runners call PollForTask to receive a task to perform from AWS Data
+// Pipeline. The task runner specifies which tasks it can perform by setting
+// a value for the workerGroup parameter. The task returned can come from any
+// of the pipelines that match the workerGroup value passed in by the task runner
+// and that were launched using the IAM user credentials specified by the task
+// runner.
+//
+// If tasks are ready in the work queue, PollForTask returns a response immediately.
+// If no tasks are available in the queue, PollForTask uses long-polling and
+// holds on to a poll connection for up to 90 seconds, during which time the
+// first newly scheduled task is handed to the task runner. To accommodate this,
+// set the socket timeout in your task runner to 90 seconds. The task runner
+// should not call PollForTask again on the same workerGroup until it receives
+// a response, and this can take up to 90 seconds.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for AWS Data Pipeline's
+// API operation PollForTask for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeInternalServiceError "InternalServiceError"
+// An internal service error occurred.
+//
+// * ErrCodeInvalidRequestException "InvalidRequestException"
+// The request was not valid. Verify that your request was properly formatted,
+// that the signature was generated with the correct credentials, and that you
+// haven't exceeded any of the service limits for your account.
+//
+// * ErrCodeTaskNotFoundException "TaskNotFoundException"
+// The specified task was not found.
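Because PollForTask can hold a connection open for up to 90 seconds, a task runner's HTTP client needs a timeout above that window. The following is a minimal, illustrative sketch of a single poll with such a client, separate from the vendored SDK code; the worker group name is a placeholder.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func main() {
	// Give the HTTP client a timeout comfortably above the 90-second
	// long-poll window so in-flight PollForTask calls are not cut short.
	sess := session.Must(session.NewSession())
	svc := datapipeline.New(sess, aws.NewConfig().WithHTTPClient(&http.Client{
		Timeout: 2 * time.Minute,
	}))

	// "example-worker-group" is a placeholder worker group name.
	out, err := svc.PollForTask(&datapipeline.PollForTaskInput{
		WorkerGroup: aws.String("example-worker-group"),
	})
	if err != nil {
		log.Fatal(err)
	}
	if out.TaskObject == nil {
		fmt.Println("no task available")
		return
	}
	fmt.Println("received task:", aws.StringValue(out.TaskObject.TaskId))
}
```

A real task runner would make this call in a loop and hand each returned TaskObject off to a worker.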
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/PollForTask +func (c *DataPipeline) PollForTask(input *PollForTaskInput) (*PollForTaskOutput, error) { + req, out := c.PollForTaskRequest(input) + return out, req.Send() +} + +// PollForTaskWithContext is the same as PollForTask with the addition of +// the ability to pass a context and additional request options. +// +// See PollForTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) PollForTaskWithContext(ctx aws.Context, input *PollForTaskInput, opts ...request.Option) (*PollForTaskOutput, error) { + req, out := c.PollForTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutPipelineDefinition = "PutPipelineDefinition" + +// PutPipelineDefinitionRequest generates a "aws/request.Request" representing the +// client's request for the PutPipelineDefinition operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutPipelineDefinition for more information on using the PutPipelineDefinition +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutPipelineDefinitionRequest method. +// req, resp := client.PutPipelineDefinitionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/PutPipelineDefinition +func (c *DataPipeline) PutPipelineDefinitionRequest(input *PutPipelineDefinitionInput) (req *request.Request, output *PutPipelineDefinitionOutput) { + op := &request.Operation{ + Name: opPutPipelineDefinition, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutPipelineDefinitionInput{} + } + + output = &PutPipelineDefinitionOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutPipelineDefinition API operation for AWS Data Pipeline. +// +// Adds tasks, schedules, and preconditions to the specified pipeline. You can +// use PutPipelineDefinition to populate a new pipeline. +// +// PutPipelineDefinition also validates the configuration as it adds it to the +// pipeline. Changes to the pipeline are saved unless one of the following three +// validation errors exists in the pipeline. +// +// An object is missing a name or identifier field. +// A string or reference field is empty. +// The number of objects in the pipeline exceeds the maximum allowed objects. +// +// The pipeline is in a FINISHED state. +// Pipeline object definitions are passed to the PutPipelineDefinition action +// and returned by the GetPipelineDefinition action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
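To make the validation behavior above concrete, here is a hedged usage sketch, separate from the vendored code: it uploads a placeholder definition and inspects the output's Errored and ValidationErrors fields, which is where definition-level problems are reported (request-level failures still come back as an error). The pipeline ID and the single Default object are illustrative assumptions.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(session.Must(session.NewSession()))

	// "df-0123456789EXAMPLE" is a placeholder pipeline ID; the single object
	// below only illustrates the shape of a definition.
	out, err := svc.PutPipelineDefinition(&datapipeline.PutPipelineDefinitionInput{
		PipelineId: aws.String("df-0123456789EXAMPLE"),
		PipelineObjects: []*datapipeline.PipelineObject{
			{
				Id:   aws.String("Default"),
				Name: aws.String("Default"),
				Fields: []*datapipeline.Field{
					{Key: aws.String("scheduleType"), StringValue: aws.String("ondemand")},
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err) // request-level failure, e.g. PipelineNotFoundException
	}

	// Validation problems are returned in the response body, not as an error.
	if aws.BoolValue(out.Errored) {
		for _, ve := range out.ValidationErrors {
			fmt.Println("object", aws.StringValue(ve.Id), "errors:", aws.StringValueSlice(ve.Errors))
		}
	}
}
```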
+// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation PutPipelineDefinition for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/PutPipelineDefinition +func (c *DataPipeline) PutPipelineDefinition(input *PutPipelineDefinitionInput) (*PutPipelineDefinitionOutput, error) { + req, out := c.PutPipelineDefinitionRequest(input) + return out, req.Send() +} + +// PutPipelineDefinitionWithContext is the same as PutPipelineDefinition with the addition of +// the ability to pass a context and additional request options. +// +// See PutPipelineDefinition for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) PutPipelineDefinitionWithContext(ctx aws.Context, input *PutPipelineDefinitionInput, opts ...request.Option) (*PutPipelineDefinitionOutput, error) { + req, out := c.PutPipelineDefinitionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opQueryObjects = "QueryObjects" + +// QueryObjectsRequest generates a "aws/request.Request" representing the +// client's request for the QueryObjects operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See QueryObjects for more information on using the QueryObjects +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the QueryObjectsRequest method. 
+// req, resp := client.QueryObjectsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/QueryObjects +func (c *DataPipeline) QueryObjectsRequest(input *QueryObjectsInput) (req *request.Request, output *QueryObjectsOutput) { + op := &request.Operation{ + Name: opQueryObjects, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"marker"}, + OutputTokens: []string{"marker"}, + LimitToken: "limit", + TruncationToken: "hasMoreResults", + }, + } + + if input == nil { + input = &QueryObjectsInput{} + } + + output = &QueryObjectsOutput{} + req = c.newRequest(op, input, output) + return +} + +// QueryObjects API operation for AWS Data Pipeline. +// +// Queries the specified pipeline for the names of objects that match the specified +// set of conditions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation QueryObjects for usage and error information. +// +// Returned Error Codes: +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/QueryObjects +func (c *DataPipeline) QueryObjects(input *QueryObjectsInput) (*QueryObjectsOutput, error) { + req, out := c.QueryObjectsRequest(input) + return out, req.Send() +} + +// QueryObjectsWithContext is the same as QueryObjects with the addition of +// the ability to pass a context and additional request options. +// +// See QueryObjects for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) QueryObjectsWithContext(ctx aws.Context, input *QueryObjectsInput, opts ...request.Option) (*QueryObjectsOutput, error) { + req, out := c.QueryObjectsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// QueryObjectsPages iterates over the pages of a QueryObjects operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See QueryObjects method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a QueryObjects operation. 
+// pageNum := 0 +// err := client.QueryObjectsPages(params, +// func(page *QueryObjectsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *DataPipeline) QueryObjectsPages(input *QueryObjectsInput, fn func(*QueryObjectsOutput, bool) bool) error { + return c.QueryObjectsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// QueryObjectsPagesWithContext same as QueryObjectsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) QueryObjectsPagesWithContext(ctx aws.Context, input *QueryObjectsInput, fn func(*QueryObjectsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *QueryObjectsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.QueryObjectsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*QueryObjectsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opRemoveTags = "RemoveTags" + +// RemoveTagsRequest generates a "aws/request.Request" representing the +// client's request for the RemoveTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RemoveTags for more information on using the RemoveTags +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveTagsRequest method. +// req, resp := client.RemoveTagsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/RemoveTags +func (c *DataPipeline) RemoveTagsRequest(input *RemoveTagsInput) (req *request.Request, output *RemoveTagsOutput) { + op := &request.Operation{ + Name: opRemoveTags, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RemoveTagsInput{} + } + + output = &RemoveTagsOutput{} + req = c.newRequest(op, input, output) + return +} + +// RemoveTags API operation for AWS Data Pipeline. +// +// Removes existing tags from the specified pipeline. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation RemoveTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. 
Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/RemoveTags +func (c *DataPipeline) RemoveTags(input *RemoveTagsInput) (*RemoveTagsOutput, error) { + req, out := c.RemoveTagsRequest(input) + return out, req.Send() +} + +// RemoveTagsWithContext is the same as RemoveTags with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveTags for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) RemoveTagsWithContext(ctx aws.Context, input *RemoveTagsInput, opts ...request.Option) (*RemoveTagsOutput, error) { + req, out := c.RemoveTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opReportTaskProgress = "ReportTaskProgress" + +// ReportTaskProgressRequest generates a "aws/request.Request" representing the +// client's request for the ReportTaskProgress operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ReportTaskProgress for more information on using the ReportTaskProgress +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ReportTaskProgressRequest method. +// req, resp := client.ReportTaskProgressRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ReportTaskProgress +func (c *DataPipeline) ReportTaskProgressRequest(input *ReportTaskProgressInput) (req *request.Request, output *ReportTaskProgressOutput) { + op := &request.Operation{ + Name: opReportTaskProgress, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ReportTaskProgressInput{} + } + + output = &ReportTaskProgressOutput{} + req = c.newRequest(op, input, output) + return +} + +// ReportTaskProgress API operation for AWS Data Pipeline. +// +// Task runners call ReportTaskProgress when assigned a task to acknowledge +// that it has the task. If the web service does not receive this acknowledgement +// within 2 minutes, it assigns the task in a subsequent PollForTask call. After +// this initial acknowledgement, the task runner only needs to report progress +// every 15 minutes to maintain its ownership of the task. You can change this +// reporting time from 15 minutes by specifying a reportProgressTimeout field +// in your pipeline. 
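As an illustration of the acknowledgement-and-progress contract described above (a sketch, not part of the vendored code), a task runner might keep ownership of a task by reporting progress on a timer while the work runs. The task ID below is a placeholder that would normally come from a PollForTask response.

```go
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

// reportProgress keeps ownership of a task by calling ReportTaskProgress on an
// interval until done is closed. taskID would come from a prior PollForTask call.
func reportProgress(svc *datapipeline.DataPipeline, taskID string, done <-chan struct{}) {
	ticker := time.NewTicker(60 * time.Second) // well inside the 15-minute ownership window
	defer ticker.Stop()

	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			out, err := svc.ReportTaskProgress(&datapipeline.ReportTaskProgressInput{
				TaskId: aws.String(taskID),
			})
			if err != nil {
				log.Println("ReportTaskProgress failed:", err)
				continue
			}
			if aws.BoolValue(out.Canceled) {
				log.Println("task was canceled by the service; stop working on it")
				return
			}
		}
	}
}

func main() {
	svc := datapipeline.New(session.Must(session.NewSession()))

	done := make(chan struct{})
	// "example-task-id" is a placeholder; a real runner would use the TaskId
	// returned by PollForTask.
	go reportProgress(svc, "example-task-id", done)

	// ... do the task's work here, then tell the reporter to stop.
	time.Sleep(5 * time.Second)
	close(done)
}
```

Stopping when the output's Canceled flag is set mirrors the cancellation behavior these comments describe.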
+// +// If a task runner does not report its status after 5 minutes, AWS Data Pipeline +// assumes that the task runner is unable to process the task and reassigns +// the task in a subsequent response to PollForTask. Task runners should call +// ReportTaskProgress every 60 seconds. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation ReportTaskProgress for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// * ErrCodeTaskNotFoundException "TaskNotFoundException" +// The specified task was not found. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ReportTaskProgress +func (c *DataPipeline) ReportTaskProgress(input *ReportTaskProgressInput) (*ReportTaskProgressOutput, error) { + req, out := c.ReportTaskProgressRequest(input) + return out, req.Send() +} + +// ReportTaskProgressWithContext is the same as ReportTaskProgress with the addition of +// the ability to pass a context and additional request options. +// +// See ReportTaskProgress for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) ReportTaskProgressWithContext(ctx aws.Context, input *ReportTaskProgressInput, opts ...request.Option) (*ReportTaskProgressOutput, error) { + req, out := c.ReportTaskProgressRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opReportTaskRunnerHeartbeat = "ReportTaskRunnerHeartbeat" + +// ReportTaskRunnerHeartbeatRequest generates a "aws/request.Request" representing the +// client's request for the ReportTaskRunnerHeartbeat operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ReportTaskRunnerHeartbeat for more information on using the ReportTaskRunnerHeartbeat +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ReportTaskRunnerHeartbeatRequest method. 
+// req, resp := client.ReportTaskRunnerHeartbeatRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ReportTaskRunnerHeartbeat +func (c *DataPipeline) ReportTaskRunnerHeartbeatRequest(input *ReportTaskRunnerHeartbeatInput) (req *request.Request, output *ReportTaskRunnerHeartbeatOutput) { + op := &request.Operation{ + Name: opReportTaskRunnerHeartbeat, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ReportTaskRunnerHeartbeatInput{} + } + + output = &ReportTaskRunnerHeartbeatOutput{} + req = c.newRequest(op, input, output) + return +} + +// ReportTaskRunnerHeartbeat API operation for AWS Data Pipeline. +// +// Task runners call ReportTaskRunnerHeartbeat every 15 minutes to indicate +// that they are operational. If the AWS Data Pipeline Task Runner is launched +// on a resource managed by AWS Data Pipeline, the web service can use this +// call to detect when the task runner application has failed and restart a +// new instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation ReportTaskRunnerHeartbeat for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ReportTaskRunnerHeartbeat +func (c *DataPipeline) ReportTaskRunnerHeartbeat(input *ReportTaskRunnerHeartbeatInput) (*ReportTaskRunnerHeartbeatOutput, error) { + req, out := c.ReportTaskRunnerHeartbeatRequest(input) + return out, req.Send() +} + +// ReportTaskRunnerHeartbeatWithContext is the same as ReportTaskRunnerHeartbeat with the addition of +// the ability to pass a context and additional request options. +// +// See ReportTaskRunnerHeartbeat for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) ReportTaskRunnerHeartbeatWithContext(ctx aws.Context, input *ReportTaskRunnerHeartbeatInput, opts ...request.Option) (*ReportTaskRunnerHeartbeatOutput, error) { + req, out := c.ReportTaskRunnerHeartbeatRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetStatus = "SetStatus" + +// SetStatusRequest generates a "aws/request.Request" representing the +// client's request for the SetStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See SetStatus for more information on using the SetStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetStatusRequest method. +// req, resp := client.SetStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/SetStatus +func (c *DataPipeline) SetStatusRequest(input *SetStatusInput) (req *request.Request, output *SetStatusOutput) { + op := &request.Operation{ + Name: opSetStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &SetStatusInput{} + } + + output = &SetStatusOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// SetStatus API operation for AWS Data Pipeline. +// +// Requests that the status of the specified physical or logical pipeline objects +// be updated in the specified pipeline. This update might not occur immediately, +// but is eventually consistent. The status that can be set depends on the type +// of object (for example, DataNode or Activity). You cannot perform this operation +// on FINISHED pipelines and attempting to do so returns InvalidRequestException. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation SetStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/SetStatus +func (c *DataPipeline) SetStatus(input *SetStatusInput) (*SetStatusOutput, error) { + req, out := c.SetStatusRequest(input) + return out, req.Send() +} + +// SetStatusWithContext is the same as SetStatus with the addition of +// the ability to pass a context and additional request options. +// +// See SetStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) SetStatusWithContext(ctx aws.Context, input *SetStatusInput, opts ...request.Option) (*SetStatusOutput, error) { + req, out := c.SetStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send()
+}
+
+const opSetTaskStatus = "SetTaskStatus"
+
+// SetTaskStatusRequest generates a "aws/request.Request" representing the
+// client's request for the SetTaskStatus operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See SetTaskStatus for more information on using the SetTaskStatus
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the SetTaskStatusRequest method.
+// req, resp := client.SetTaskStatusRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/SetTaskStatus
+func (c *DataPipeline) SetTaskStatusRequest(input *SetTaskStatusInput) (req *request.Request, output *SetTaskStatusOutput) {
+ op := &request.Operation{
+ Name: opSetTaskStatus,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &SetTaskStatusInput{}
+ }
+
+ output = &SetTaskStatusOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// SetTaskStatus API operation for AWS Data Pipeline.
+//
+// Task runners call SetTaskStatus to notify AWS Data Pipeline that a task is
+// completed and provide information about the final status. A task runner makes
+// this call regardless of whether the task was successful. A task runner does
+// not need to call SetTaskStatus for tasks that are canceled by the web service
+// during a call to ReportTaskProgress.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for AWS Data Pipeline's
+// API operation SetTaskStatus for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeInternalServiceError "InternalServiceError"
+// An internal service error occurred.
+//
+// * ErrCodeTaskNotFoundException "TaskNotFoundException"
+// The specified task was not found.
+//
+// * ErrCodeInvalidRequestException "InvalidRequestException"
+// The request was not valid. Verify that your request was properly formatted,
+// that the signature was generated with the correct credentials, and that you
+// haven't exceeded any of the service limits for your account.
+//
+// * ErrCodePipelineNotFoundException "PipelineNotFoundException"
+// The specified pipeline was not found. Verify that you used the correct user
+// and account identifiers.
+//
+// * ErrCodePipelineDeletedException "PipelineDeletedException"
+// The specified pipeline has been deleted.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/SetTaskStatus
+func (c *DataPipeline) SetTaskStatus(input *SetTaskStatusInput) (*SetTaskStatusOutput, error) {
+ req, out := c.SetTaskStatusRequest(input)
+ return out, req.Send()
+}
+
+// SetTaskStatusWithContext is the same as SetTaskStatus with the addition of
+// the ability to pass a context and additional request options.
+//
+// See SetTaskStatus for details on how to use this API operation.
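To round out the task-runner flow, here is a hedged sketch of reporting a task's final status, separate from the vendored code; the task ID is a placeholder that would come from an earlier PollForTask response, and the status strings are the documented FINISHED and FAILED values.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

// finishTask reports whether the work for a task succeeded or failed.
// taskID is assumed to come from a prior PollForTask response.
func finishTask(svc *datapipeline.DataPipeline, taskID string, taskErr error) error {
	input := &datapipeline.SetTaskStatusInput{
		TaskId:     aws.String(taskID),
		TaskStatus: aws.String("FINISHED"),
	}
	if taskErr != nil {
		input.TaskStatus = aws.String("FAILED")
		input.ErrorId = aws.String("ExampleError") // placeholder error code
		input.ErrorMessage = aws.String(taskErr.Error())
	}
	_, err := svc.SetTaskStatus(input)
	return err
}

func main() {
	svc := datapipeline.New(session.Must(session.NewSession()))

	// "example-task-id" is a placeholder task identifier.
	if err := finishTask(svc, "example-task-id", nil); err != nil {
		log.Fatal(err)
	}
}
```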
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) SetTaskStatusWithContext(ctx aws.Context, input *SetTaskStatusInput, opts ...request.Option) (*SetTaskStatusOutput, error) { + req, out := c.SetTaskStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opValidatePipelineDefinition = "ValidatePipelineDefinition" + +// ValidatePipelineDefinitionRequest generates a "aws/request.Request" representing the +// client's request for the ValidatePipelineDefinition operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ValidatePipelineDefinition for more information on using the ValidatePipelineDefinition +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ValidatePipelineDefinitionRequest method. +// req, resp := client.ValidatePipelineDefinitionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ValidatePipelineDefinition +func (c *DataPipeline) ValidatePipelineDefinitionRequest(input *ValidatePipelineDefinitionInput) (req *request.Request, output *ValidatePipelineDefinitionOutput) { + op := &request.Operation{ + Name: opValidatePipelineDefinition, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ValidatePipelineDefinitionInput{} + } + + output = &ValidatePipelineDefinitionOutput{} + req = c.newRequest(op, input, output) + return +} + +// ValidatePipelineDefinition API operation for AWS Data Pipeline. +// +// Validates the specified pipeline definition to ensure that it is well formed +// and can be run without error. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Data Pipeline's +// API operation ValidatePipelineDefinition for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceError "InternalServiceError" +// An internal service error occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request was not valid. Verify that your request was properly formatted, +// that the signature was generated with the correct credentials, and that you +// haven't exceeded any of the service limits for your account. +// +// * ErrCodePipelineNotFoundException "PipelineNotFoundException" +// The specified pipeline was not found. Verify that you used the correct user +// and account identifiers. +// +// * ErrCodePipelineDeletedException "PipelineDeletedException" +// The specified pipeline has been deleted. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29/ValidatePipelineDefinition +func (c *DataPipeline) ValidatePipelineDefinition(input *ValidatePipelineDefinitionInput) (*ValidatePipelineDefinitionOutput, error) { + req, out := c.ValidatePipelineDefinitionRequest(input) + return out, req.Send() +} + +// ValidatePipelineDefinitionWithContext is the same as ValidatePipelineDefinition with the addition of +// the ability to pass a context and additional request options. +// +// See ValidatePipelineDefinition for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DataPipeline) ValidatePipelineDefinitionWithContext(ctx aws.Context, input *ValidatePipelineDefinitionInput, opts ...request.Option) (*ValidatePipelineDefinitionOutput, error) { + req, out := c.ValidatePipelineDefinitionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Contains the parameters for ActivatePipeline. +type ActivatePipelineInput struct { + _ struct{} `type:"structure"` + + // A list of parameter values to pass to the pipeline at activation. + ParameterValues []*ParameterValue `locationName:"parameterValues" type:"list"` + + // The ID of the pipeline. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // The date and time to resume the pipeline. By default, the pipeline resumes + // from the last completed execution. + StartTimestamp *time.Time `locationName:"startTimestamp" type:"timestamp"` +} + +// String returns the string representation +func (s ActivatePipelineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ActivatePipelineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ActivatePipelineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ActivatePipelineInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + if s.ParameterValues != nil { + for i, v := range s.ParameterValues { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParameterValues", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetParameterValues sets the ParameterValues field's value. +func (s *ActivatePipelineInput) SetParameterValues(v []*ParameterValue) *ActivatePipelineInput { + s.ParameterValues = v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *ActivatePipelineInput) SetPipelineId(v string) *ActivatePipelineInput { + s.PipelineId = &v + return s +} + +// SetStartTimestamp sets the StartTimestamp field's value. +func (s *ActivatePipelineInput) SetStartTimestamp(v time.Time) *ActivatePipelineInput { + s.StartTimestamp = &v + return s +} + +// Contains the output of ActivatePipeline. 
+type ActivatePipelineOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ActivatePipelineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ActivatePipelineOutput) GoString() string { + return s.String() +} + +// Contains the parameters for AddTags. +type AddTagsInput struct { + _ struct{} `type:"structure"` + + // The ID of the pipeline. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // The tags to add, as key/value pairs. + // + // Tags is a required field + Tags []*Tag `locationName:"tags" type:"list" required:"true"` +} + +// String returns the string representation +func (s AddTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddTagsInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPipelineId sets the PipelineId field's value. +func (s *AddTagsInput) SetPipelineId(v string) *AddTagsInput { + s.PipelineId = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *AddTagsInput) SetTags(v []*Tag) *AddTagsInput { + s.Tags = v + return s +} + +// Contains the output of AddTags. +type AddTagsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsOutput) GoString() string { + return s.String() +} + +// Contains the parameters for CreatePipeline. +type CreatePipelineInput struct { + _ struct{} `type:"structure"` + + // The description for the pipeline. + Description *string `locationName:"description" type:"string"` + + // The name for the pipeline. You can use the same name for multiple pipelines + // associated with your AWS account, because AWS Data Pipeline assigns each + // pipeline a unique pipeline identifier. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` + + // A list of tags to associate with the pipeline at creation. Tags let you control + // access to pipelines. For more information, see Controlling User Access to + // Pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html) + // in the AWS Data Pipeline Developer Guide. + Tags []*Tag `locationName:"tags" type:"list"` + + // A unique identifier. This identifier is not the same as the pipeline identifier + // assigned by AWS Data Pipeline. You are responsible for defining the format + // and ensuring the uniqueness of this identifier. 
You use this parameter to + // ensure idempotency during repeated calls to CreatePipeline. For example, + // if the first call to CreatePipeline does not succeed, you can pass in the + // same unique identifier and pipeline name combination on a subsequent call + // to CreatePipeline. CreatePipeline ensures that if a pipeline already exists + // with the same name and unique identifier, a new pipeline is not created. + // Instead, you'll receive the pipeline identifier from the previous attempt. + // The uniqueness of the name and unique identifier combination is scoped to + // the AWS account or IAM user credentials. + // + // UniqueId is a required field + UniqueId *string `locationName:"uniqueId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreatePipelineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePipelineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreatePipelineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePipelineInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.UniqueId == nil { + invalidParams.Add(request.NewErrParamRequired("UniqueId")) + } + if s.UniqueId != nil && len(*s.UniqueId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UniqueId", 1)) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *CreatePipelineInput) SetDescription(v string) *CreatePipelineInput { + s.Description = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreatePipelineInput) SetName(v string) *CreatePipelineInput { + s.Name = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreatePipelineInput) SetTags(v []*Tag) *CreatePipelineInput { + s.Tags = v + return s +} + +// SetUniqueId sets the UniqueId field's value. +func (s *CreatePipelineInput) SetUniqueId(v string) *CreatePipelineInput { + s.UniqueId = &v + return s +} + +// Contains the output of CreatePipeline. +type CreatePipelineOutput struct { + _ struct{} `type:"structure"` + + // The ID that AWS Data Pipeline assigns the newly created pipeline. For example, + // df-06372391ZG65EXAMPLE. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreatePipelineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePipelineOutput) GoString() string { + return s.String() +} + +// SetPipelineId sets the PipelineId field's value. +func (s *CreatePipelineOutput) SetPipelineId(v string) *CreatePipelineOutput { + s.PipelineId = &v + return s +} + +// Contains the parameters for DeactivatePipeline. +type DeactivatePipelineInput struct { + _ struct{} `type:"structure"` + + // Indicates whether to cancel any running objects. The default is true, which + // sets the state of any running objects to CANCELED. 
If this value is false, + // the pipeline is deactivated after all running objects finish. + CancelActive *bool `locationName:"cancelActive" type:"boolean"` + + // The ID of the pipeline. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeactivatePipelineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeactivatePipelineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeactivatePipelineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeactivatePipelineInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCancelActive sets the CancelActive field's value. +func (s *DeactivatePipelineInput) SetCancelActive(v bool) *DeactivatePipelineInput { + s.CancelActive = &v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *DeactivatePipelineInput) SetPipelineId(v string) *DeactivatePipelineInput { + s.PipelineId = &v + return s +} + +// Contains the output of DeactivatePipeline. +type DeactivatePipelineOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeactivatePipelineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeactivatePipelineOutput) GoString() string { + return s.String() +} + +// Contains the parameters for DeletePipeline. +type DeletePipelineInput struct { + _ struct{} `type:"structure"` + + // The ID of the pipeline. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeletePipelineInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePipelineInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeletePipelineInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePipelineInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPipelineId sets the PipelineId field's value. +func (s *DeletePipelineInput) SetPipelineId(v string) *DeletePipelineInput { + s.PipelineId = &v + return s +} + +type DeletePipelineOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeletePipelineOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePipelineOutput) GoString() string { + return s.String() +} + +// Contains the parameters for DescribeObjects. 
+type DescribeObjectsInput struct { + _ struct{} `type:"structure"` + + // Indicates whether any expressions in the object should be evaluated when + // the object descriptions are returned. + EvaluateExpressions *bool `locationName:"evaluateExpressions" type:"boolean"` + + // The starting point for the results to be returned. For the first call, this + // value should be empty. As long as there are more results, continue to call + // DescribeObjects with the marker value from the previous call to retrieve + // the next set of results. + Marker *string `locationName:"marker" type:"string"` + + // The IDs of the pipeline objects that contain the definitions to be described. + // You can pass as many as 25 identifiers in a single call to DescribeObjects. + // + // ObjectIds is a required field + ObjectIds []*string `locationName:"objectIds" type:"list" required:"true"` + + // The ID of the pipeline that contains the object definitions. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeObjectsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeObjectsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeObjectsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeObjectsInput"} + if s.ObjectIds == nil { + invalidParams.Add(request.NewErrParamRequired("ObjectIds")) + } + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEvaluateExpressions sets the EvaluateExpressions field's value. +func (s *DescribeObjectsInput) SetEvaluateExpressions(v bool) *DescribeObjectsInput { + s.EvaluateExpressions = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeObjectsInput) SetMarker(v string) *DescribeObjectsInput { + s.Marker = &v + return s +} + +// SetObjectIds sets the ObjectIds field's value. +func (s *DescribeObjectsInput) SetObjectIds(v []*string) *DescribeObjectsInput { + s.ObjectIds = v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *DescribeObjectsInput) SetPipelineId(v string) *DescribeObjectsInput { + s.PipelineId = &v + return s +} + +// Contains the output of DescribeObjects. +type DescribeObjectsOutput struct { + _ struct{} `type:"structure"` + + // Indicates whether there are more results to return. + HasMoreResults *bool `locationName:"hasMoreResults" type:"boolean"` + + // The starting point for the next page of results. To view the next page of + // results, call DescribeObjects again with this marker value. If the value + // is null, there are no more results. + Marker *string `locationName:"marker" type:"string"` + + // An array of object definitions. 
+ // + // PipelineObjects is a required field + PipelineObjects []*PipelineObject `locationName:"pipelineObjects" type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeObjectsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeObjectsOutput) GoString() string { + return s.String() +} + +// SetHasMoreResults sets the HasMoreResults field's value. +func (s *DescribeObjectsOutput) SetHasMoreResults(v bool) *DescribeObjectsOutput { + s.HasMoreResults = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeObjectsOutput) SetMarker(v string) *DescribeObjectsOutput { + s.Marker = &v + return s +} + +// SetPipelineObjects sets the PipelineObjects field's value. +func (s *DescribeObjectsOutput) SetPipelineObjects(v []*PipelineObject) *DescribeObjectsOutput { + s.PipelineObjects = v + return s +} + +// Contains the parameters for DescribePipelines. +type DescribePipelinesInput struct { + _ struct{} `type:"structure"` + + // The IDs of the pipelines to describe. You can pass as many as 25 identifiers + // in a single call. To obtain pipeline IDs, call ListPipelines. + // + // PipelineIds is a required field + PipelineIds []*string `locationName:"pipelineIds" type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribePipelinesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePipelinesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribePipelinesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribePipelinesInput"} + if s.PipelineIds == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPipelineIds sets the PipelineIds field's value. +func (s *DescribePipelinesInput) SetPipelineIds(v []*string) *DescribePipelinesInput { + s.PipelineIds = v + return s +} + +// Contains the output of DescribePipelines. +type DescribePipelinesOutput struct { + _ struct{} `type:"structure"` + + // An array of descriptions for the specified pipelines. + // + // PipelineDescriptionList is a required field + PipelineDescriptionList []*PipelineDescription `locationName:"pipelineDescriptionList" type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribePipelinesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePipelinesOutput) GoString() string { + return s.String() +} + +// SetPipelineDescriptionList sets the PipelineDescriptionList field's value. +func (s *DescribePipelinesOutput) SetPipelineDescriptionList(v []*PipelineDescription) *DescribePipelinesOutput { + s.PipelineDescriptionList = v + return s +} + +// Contains the parameters for EvaluateExpression. +type EvaluateExpressionInput struct { + _ struct{} `type:"structure"` + + // The expression to evaluate. + // + // Expression is a required field + Expression *string `locationName:"expression" type:"string" required:"true"` + + // The ID of the object. + // + // ObjectId is a required field + ObjectId *string `locationName:"objectId" min:"1" type:"string" required:"true"` + + // The ID of the pipeline. 
+ // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s EvaluateExpressionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EvaluateExpressionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *EvaluateExpressionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EvaluateExpressionInput"} + if s.Expression == nil { + invalidParams.Add(request.NewErrParamRequired("Expression")) + } + if s.ObjectId == nil { + invalidParams.Add(request.NewErrParamRequired("ObjectId")) + } + if s.ObjectId != nil && len(*s.ObjectId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ObjectId", 1)) + } + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExpression sets the Expression field's value. +func (s *EvaluateExpressionInput) SetExpression(v string) *EvaluateExpressionInput { + s.Expression = &v + return s +} + +// SetObjectId sets the ObjectId field's value. +func (s *EvaluateExpressionInput) SetObjectId(v string) *EvaluateExpressionInput { + s.ObjectId = &v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *EvaluateExpressionInput) SetPipelineId(v string) *EvaluateExpressionInput { + s.PipelineId = &v + return s +} + +// Contains the output of EvaluateExpression. +type EvaluateExpressionOutput struct { + _ struct{} `type:"structure"` + + // The evaluated expression. + // + // EvaluatedExpression is a required field + EvaluatedExpression *string `locationName:"evaluatedExpression" type:"string" required:"true"` +} + +// String returns the string representation +func (s EvaluateExpressionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EvaluateExpressionOutput) GoString() string { + return s.String() +} + +// SetEvaluatedExpression sets the EvaluatedExpression field's value. +func (s *EvaluateExpressionOutput) SetEvaluatedExpression(v string) *EvaluateExpressionOutput { + s.EvaluatedExpression = &v + return s +} + +// A key-value pair that describes a property of a pipeline object. The value +// is specified as either a string value (StringValue) or a reference to another +// object (RefValue) but not as both. +type Field struct { + _ struct{} `type:"structure"` + + // The field identifier. + // + // Key is a required field + Key *string `locationName:"key" min:"1" type:"string" required:"true"` + + // The field value, expressed as the identifier of another object. + RefValue *string `locationName:"refValue" min:"1" type:"string"` + + // The field value, expressed as a String. + StringValue *string `locationName:"stringValue" type:"string"` +} + +// String returns the string representation +func (s Field) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Field) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *Field) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Field"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.RefValue != nil && len(*s.RefValue) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RefValue", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Field) SetKey(v string) *Field { + s.Key = &v + return s +} + +// SetRefValue sets the RefValue field's value. +func (s *Field) SetRefValue(v string) *Field { + s.RefValue = &v + return s +} + +// SetStringValue sets the StringValue field's value. +func (s *Field) SetStringValue(v string) *Field { + s.StringValue = &v + return s +} + +// Contains the parameters for GetPipelineDefinition. +type GetPipelineDefinitionInput struct { + _ struct{} `type:"structure"` + + // The ID of the pipeline. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // The version of the pipeline definition to retrieve. Set this parameter to + // latest (default) to use the last definition saved to the pipeline or active + // to use the last definition that was activated. + Version *string `locationName:"version" type:"string"` +} + +// String returns the string representation +func (s GetPipelineDefinitionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPipelineDefinitionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetPipelineDefinitionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPipelineDefinitionInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPipelineId sets the PipelineId field's value. +func (s *GetPipelineDefinitionInput) SetPipelineId(v string) *GetPipelineDefinitionInput { + s.PipelineId = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *GetPipelineDefinitionInput) SetVersion(v string) *GetPipelineDefinitionInput { + s.Version = &v + return s +} + +// Contains the output of GetPipelineDefinition. +type GetPipelineDefinitionOutput struct { + _ struct{} `type:"structure"` + + // The parameter objects used in the pipeline definition. + ParameterObjects []*ParameterObject `locationName:"parameterObjects" type:"list"` + + // The parameter values used in the pipeline definition. + ParameterValues []*ParameterValue `locationName:"parameterValues" type:"list"` + + // The objects defined in the pipeline. + PipelineObjects []*PipelineObject `locationName:"pipelineObjects" type:"list"` +} + +// String returns the string representation +func (s GetPipelineDefinitionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPipelineDefinitionOutput) GoString() string { + return s.String() +} + +// SetParameterObjects sets the ParameterObjects field's value. 
+func (s *GetPipelineDefinitionOutput) SetParameterObjects(v []*ParameterObject) *GetPipelineDefinitionOutput { + s.ParameterObjects = v + return s +} + +// SetParameterValues sets the ParameterValues field's value. +func (s *GetPipelineDefinitionOutput) SetParameterValues(v []*ParameterValue) *GetPipelineDefinitionOutput { + s.ParameterValues = v + return s +} + +// SetPipelineObjects sets the PipelineObjects field's value. +func (s *GetPipelineDefinitionOutput) SetPipelineObjects(v []*PipelineObject) *GetPipelineDefinitionOutput { + s.PipelineObjects = v + return s +} + +// Identity information for the EC2 instance that is hosting the task runner. +// You can get this value by calling a metadata URI from the EC2 instance. For +// more information, see Instance Metadata (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html) +// in the Amazon Elastic Compute Cloud User Guide. Passing in this value proves +// that your task runner is running on an EC2 instance, and ensures the proper +// AWS Data Pipeline service charges are applied to your pipeline. +type InstanceIdentity struct { + _ struct{} `type:"structure"` + + // A description of an EC2 instance that is generated when the instance is launched + // and exposed to the instance via the instance metadata service in the form + // of a JSON representation of an object. + Document *string `locationName:"document" type:"string"` + + // A signature which can be used to verify the accuracy and authenticity of + // the information provided in the instance identity document. + Signature *string `locationName:"signature" type:"string"` +} + +// String returns the string representation +func (s InstanceIdentity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceIdentity) GoString() string { + return s.String() +} + +// SetDocument sets the Document field's value. +func (s *InstanceIdentity) SetDocument(v string) *InstanceIdentity { + s.Document = &v + return s +} + +// SetSignature sets the Signature field's value. +func (s *InstanceIdentity) SetSignature(v string) *InstanceIdentity { + s.Signature = &v + return s +} + +// Contains the parameters for ListPipelines. +type ListPipelinesInput struct { + _ struct{} `type:"structure"` + + // The starting point for the results to be returned. For the first call, this + // value should be empty. As long as there are more results, continue to call + // ListPipelines with the marker value from the previous call to retrieve the + // next set of results. + Marker *string `locationName:"marker" type:"string"` +} + +// String returns the string representation +func (s ListPipelinesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPipelinesInput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *ListPipelinesInput) SetMarker(v string) *ListPipelinesInput { + s.Marker = &v + return s +} + +// Contains the output of ListPipelines. +type ListPipelinesOutput struct { + _ struct{} `type:"structure"` + + // Indicates whether there are more results that can be obtained by a subsequent + // call. + HasMoreResults *bool `locationName:"hasMoreResults" type:"boolean"` + + // The starting point for the next page of results. To view the next page of + // results, call ListPipelinesOutput again with this marker value. If the value + // is null, there are no more results. 
+ Marker *string `locationName:"marker" type:"string"` + + // The pipeline identifiers. If you require additional information about the + // pipelines, you can use these identifiers to call DescribePipelines and GetPipelineDefinition. + // + // PipelineIdList is a required field + PipelineIdList []*PipelineIdName `locationName:"pipelineIdList" type:"list" required:"true"` +} + +// String returns the string representation +func (s ListPipelinesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPipelinesOutput) GoString() string { + return s.String() +} + +// SetHasMoreResults sets the HasMoreResults field's value. +func (s *ListPipelinesOutput) SetHasMoreResults(v bool) *ListPipelinesOutput { + s.HasMoreResults = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListPipelinesOutput) SetMarker(v string) *ListPipelinesOutput { + s.Marker = &v + return s +} + +// SetPipelineIdList sets the PipelineIdList field's value. +func (s *ListPipelinesOutput) SetPipelineIdList(v []*PipelineIdName) *ListPipelinesOutput { + s.PipelineIdList = v + return s +} + +// Contains a logical operation for comparing the value of a field with a specified +// value. +type Operator struct { + _ struct{} `type:"structure"` + + // The logical operation to be performed: equal (EQ), equal reference (REF_EQ), + // less than or equal (LE), greater than or equal (GE), or between (BETWEEN). + // Equal reference (REF_EQ) can be used only with reference fields. The other + // comparison types can be used only with String fields. The comparison types + // you can use apply only to certain object fields, as detailed below. + // + // The comparison operators EQ and REF_EQ act on the following fields: + // + // * name + // * @sphere + // * parent + // * @componentParent + // * @instanceParent + // * @status + // * @scheduledStartTime + // * @scheduledEndTime + // * @actualStartTime + // * @actualEndTime + // The comparison operators GE, LE, and BETWEEN act on the following fields: + // + // * @scheduledStartTime + // * @scheduledEndTime + // * @actualStartTime + // * @actualEndTime + // Note that fields beginning with the at sign (@) are read-only and set by + // the web service. When you name fields, you should choose names containing + // only alpha-numeric values, as symbols may be reserved by AWS Data Pipeline. + // User-defined fields that you add to a pipeline should prefix their name with + // the string "my". + Type *string `locationName:"type" type:"string" enum:"OperatorType"` + + // The value that the actual field value will be compared with. + Values []*string `locationName:"values" type:"list"` +} + +// String returns the string representation +func (s Operator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Operator) GoString() string { + return s.String() +} + +// SetType sets the Type field's value. +func (s *Operator) SetType(v string) *Operator { + s.Type = &v + return s +} + +// SetValues sets the Values field's value. +func (s *Operator) SetValues(v []*string) *Operator { + s.Values = v + return s +} + +// The attributes allowed or specified with a parameter object. +type ParameterAttribute struct { + _ struct{} `type:"structure"` + + // The field identifier. + // + // Key is a required field + Key *string `locationName:"key" min:"1" type:"string" required:"true"` + + // The field value, expressed as a String. 
+ // + // StringValue is a required field + StringValue *string `locationName:"stringValue" type:"string" required:"true"` +} + +// String returns the string representation +func (s ParameterAttribute) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterAttribute) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParameterAttribute) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParameterAttribute"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.StringValue == nil { + invalidParams.Add(request.NewErrParamRequired("StringValue")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *ParameterAttribute) SetKey(v string) *ParameterAttribute { + s.Key = &v + return s +} + +// SetStringValue sets the StringValue field's value. +func (s *ParameterAttribute) SetStringValue(v string) *ParameterAttribute { + s.StringValue = &v + return s +} + +// Contains information about a parameter object. +type ParameterObject struct { + _ struct{} `type:"structure"` + + // The attributes of the parameter object. + // + // Attributes is a required field + Attributes []*ParameterAttribute `locationName:"attributes" type:"list" required:"true"` + + // The ID of the parameter object. + // + // Id is a required field + Id *string `locationName:"id" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ParameterObject) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterObject) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParameterObject) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParameterObject"} + if s.Attributes == nil { + invalidParams.Add(request.NewErrParamRequired("Attributes")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.Attributes != nil { + for i, v := range s.Attributes { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Attributes", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributes sets the Attributes field's value. +func (s *ParameterObject) SetAttributes(v []*ParameterAttribute) *ParameterObject { + s.Attributes = v + return s +} + +// SetId sets the Id field's value. +func (s *ParameterObject) SetId(v string) *ParameterObject { + s.Id = &v + return s +} + +// A value or list of parameter values. +type ParameterValue struct { + _ struct{} `type:"structure"` + + // The ID of the parameter value. + // + // Id is a required field + Id *string `locationName:"id" min:"1" type:"string" required:"true"` + + // The field value, expressed as a String. 
+ // + // StringValue is a required field + StringValue *string `locationName:"stringValue" type:"string" required:"true"` +} + +// String returns the string representation +func (s ParameterValue) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterValue) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParameterValue) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParameterValue"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.StringValue == nil { + invalidParams.Add(request.NewErrParamRequired("StringValue")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *ParameterValue) SetId(v string) *ParameterValue { + s.Id = &v + return s +} + +// SetStringValue sets the StringValue field's value. +func (s *ParameterValue) SetStringValue(v string) *ParameterValue { + s.StringValue = &v + return s +} + +// Contains pipeline metadata. +type PipelineDescription struct { + _ struct{} `type:"structure"` + + // Description of the pipeline. + Description *string `locationName:"description" type:"string"` + + // A list of read-only fields that contain metadata about the pipeline: @userId, + // @accountId, and @pipelineState. + // + // Fields is a required field + Fields []*Field `locationName:"fields" type:"list" required:"true"` + + // The name of the pipeline. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` + + // The pipeline identifier that was assigned by AWS Data Pipeline. This is a + // string of the form df-297EG78HU43EEXAMPLE. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // A list of tags to associated with a pipeline. Tags let you control access + // to pipelines. For more information, see Controlling User Access to Pipelines + // (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html) + // in the AWS Data Pipeline Developer Guide. + Tags []*Tag `locationName:"tags" type:"list"` +} + +// String returns the string representation +func (s PipelineDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PipelineDescription) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *PipelineDescription) SetDescription(v string) *PipelineDescription { + s.Description = &v + return s +} + +// SetFields sets the Fields field's value. +func (s *PipelineDescription) SetFields(v []*Field) *PipelineDescription { + s.Fields = v + return s +} + +// SetName sets the Name field's value. +func (s *PipelineDescription) SetName(v string) *PipelineDescription { + s.Name = &v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *PipelineDescription) SetPipelineId(v string) *PipelineDescription { + s.PipelineId = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *PipelineDescription) SetTags(v []*Tag) *PipelineDescription { + s.Tags = v + return s +} + +// Contains the name and identifier of a pipeline. 
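An illustrative aside, not part of the vendored SDK file: ListPipelines pages its results through the marker and hasMoreResults fields described above. A minimal client-side pagination loop might look like the sketch below; credentials and region are assumed to come from the default session configuration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(session.Must(session.NewSession()))

	var marker *string
	for {
		page, err := svc.ListPipelines(&datapipeline.ListPipelinesInput{Marker: marker})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range page.PipelineIdList {
			fmt.Printf("%s (%s)\n", aws.StringValue(p.Name), aws.StringValue(p.Id))
		}
		if !aws.BoolValue(page.HasMoreResults) {
			break
		}
		// Feed the returned marker back in to retrieve the next page.
		marker = page.Marker
	}
}
```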
+type PipelineIdName struct { + _ struct{} `type:"structure"` + + // The ID of the pipeline that was assigned by AWS Data Pipeline. This is a + // string of the form df-297EG78HU43EEXAMPLE. + Id *string `locationName:"id" min:"1" type:"string"` + + // The name of the pipeline. + Name *string `locationName:"name" min:"1" type:"string"` +} + +// String returns the string representation +func (s PipelineIdName) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PipelineIdName) GoString() string { + return s.String() +} + +// SetId sets the Id field's value. +func (s *PipelineIdName) SetId(v string) *PipelineIdName { + s.Id = &v + return s +} + +// SetName sets the Name field's value. +func (s *PipelineIdName) SetName(v string) *PipelineIdName { + s.Name = &v + return s +} + +// Contains information about a pipeline object. This can be a logical, physical, +// or physical attempt pipeline object. The complete set of components of a +// pipeline defines the pipeline. +type PipelineObject struct { + _ struct{} `type:"structure"` + + // Key-value pairs that define the properties of the object. + // + // Fields is a required field + Fields []*Field `locationName:"fields" type:"list" required:"true"` + + // The ID of the object. + // + // Id is a required field + Id *string `locationName:"id" min:"1" type:"string" required:"true"` + + // The name of the object. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PipelineObject) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PipelineObject) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PipelineObject) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PipelineObject"} + if s.Fields == nil { + invalidParams.Add(request.NewErrParamRequired("Fields")) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.Fields != nil { + for i, v := range s.Fields { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Fields", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFields sets the Fields field's value. +func (s *PipelineObject) SetFields(v []*Field) *PipelineObject { + s.Fields = v + return s +} + +// SetId sets the Id field's value. +func (s *PipelineObject) SetId(v string) *PipelineObject { + s.Id = &v + return s +} + +// SetName sets the Name field's value. +func (s *PipelineObject) SetName(v string) *PipelineObject { + s.Name = &v + return s +} + +// Contains the parameters for PollForTask. +type PollForTaskInput struct { + _ struct{} `type:"structure"` + + // The public DNS name of the calling task runner. + Hostname *string `locationName:"hostname" min:"1" type:"string"` + + // Identity information for the EC2 instance that is hosting the task runner. + // You can get this value from the instance using http://169.254.169.254/latest/meta-data/instance-id. 
+ // For more information, see Instance Metadata (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html) + // in the Amazon Elastic Compute Cloud User Guide. Passing in this value proves + // that your task runner is running on an EC2 instance, and ensures the proper + // AWS Data Pipeline service charges are applied to your pipeline. + InstanceIdentity *InstanceIdentity `locationName:"instanceIdentity" type:"structure"` + + // The type of task the task runner is configured to accept and process. The + // worker group is set as a field on objects in the pipeline when they are created. + // You can only specify a single value for workerGroup in the call to PollForTask. + // There are no wildcard values permitted in workerGroup; the string must be + // an exact, case-sensitive, match. + // + // WorkerGroup is a required field + WorkerGroup *string `locationName:"workerGroup" type:"string" required:"true"` +} + +// String returns the string representation +func (s PollForTaskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PollForTaskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PollForTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PollForTaskInput"} + if s.Hostname != nil && len(*s.Hostname) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Hostname", 1)) + } + if s.WorkerGroup == nil { + invalidParams.Add(request.NewErrParamRequired("WorkerGroup")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetHostname sets the Hostname field's value. +func (s *PollForTaskInput) SetHostname(v string) *PollForTaskInput { + s.Hostname = &v + return s +} + +// SetInstanceIdentity sets the InstanceIdentity field's value. +func (s *PollForTaskInput) SetInstanceIdentity(v *InstanceIdentity) *PollForTaskInput { + s.InstanceIdentity = v + return s +} + +// SetWorkerGroup sets the WorkerGroup field's value. +func (s *PollForTaskInput) SetWorkerGroup(v string) *PollForTaskInput { + s.WorkerGroup = &v + return s +} + +// Contains the output of PollForTask. +type PollForTaskOutput struct { + _ struct{} `type:"structure"` + + // The information needed to complete the task that is being assigned to the + // task runner. One of the fields returned in this object is taskId, which contains + // an identifier for the task being assigned. The calling task runner uses taskId + // in subsequent calls to ReportTaskProgress and SetTaskStatus. + TaskObject *TaskObject `locationName:"taskObject" type:"structure"` +} + +// String returns the string representation +func (s PollForTaskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PollForTaskOutput) GoString() string { + return s.String() +} + +// SetTaskObject sets the TaskObject field's value. +func (s *PollForTaskOutput) SetTaskObject(v *TaskObject) *PollForTaskOutput { + s.TaskObject = v + return s +} + +// Contains the parameters for PutPipelineDefinition. +type PutPipelineDefinitionInput struct { + _ struct{} `type:"structure"` + + // The parameter objects used with the pipeline. + ParameterObjects []*ParameterObject `locationName:"parameterObjects" type:"list"` + + // The parameter values used with the pipeline. + ParameterValues []*ParameterValue `locationName:"parameterValues" type:"list"` + + // The ID of the pipeline. 
+ // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // The objects that define the pipeline. These objects overwrite the existing + // pipeline definition. + // + // PipelineObjects is a required field + PipelineObjects []*PipelineObject `locationName:"pipelineObjects" type:"list" required:"true"` +} + +// String returns the string representation +func (s PutPipelineDefinitionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutPipelineDefinitionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutPipelineDefinitionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutPipelineDefinitionInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + if s.PipelineObjects == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineObjects")) + } + if s.ParameterObjects != nil { + for i, v := range s.ParameterObjects { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParameterObjects", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ParameterValues != nil { + for i, v := range s.ParameterValues { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParameterValues", i), err.(request.ErrInvalidParams)) + } + } + } + if s.PipelineObjects != nil { + for i, v := range s.PipelineObjects { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PipelineObjects", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetParameterObjects sets the ParameterObjects field's value. +func (s *PutPipelineDefinitionInput) SetParameterObjects(v []*ParameterObject) *PutPipelineDefinitionInput { + s.ParameterObjects = v + return s +} + +// SetParameterValues sets the ParameterValues field's value. +func (s *PutPipelineDefinitionInput) SetParameterValues(v []*ParameterValue) *PutPipelineDefinitionInput { + s.ParameterValues = v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *PutPipelineDefinitionInput) SetPipelineId(v string) *PutPipelineDefinitionInput { + s.PipelineId = &v + return s +} + +// SetPipelineObjects sets the PipelineObjects field's value. +func (s *PutPipelineDefinitionInput) SetPipelineObjects(v []*PipelineObject) *PutPipelineDefinitionInput { + s.PipelineObjects = v + return s +} + +// Contains the output of PutPipelineDefinition. +type PutPipelineDefinitionOutput struct { + _ struct{} `type:"structure"` + + // Indicates whether there were validation errors, and the pipeline definition + // is stored but cannot be activated until you correct the pipeline and call + // PutPipelineDefinition to commit the corrected pipeline. + // + // Errored is a required field + Errored *bool `locationName:"errored" type:"boolean" required:"true"` + + // The validation errors that are associated with the objects defined in pipelineObjects. 
+ ValidationErrors []*ValidationError `locationName:"validationErrors" type:"list"` + + // The validation warnings that are associated with the objects defined in pipelineObjects. + ValidationWarnings []*ValidationWarning `locationName:"validationWarnings" type:"list"` +} + +// String returns the string representation +func (s PutPipelineDefinitionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutPipelineDefinitionOutput) GoString() string { + return s.String() +} + +// SetErrored sets the Errored field's value. +func (s *PutPipelineDefinitionOutput) SetErrored(v bool) *PutPipelineDefinitionOutput { + s.Errored = &v + return s +} + +// SetValidationErrors sets the ValidationErrors field's value. +func (s *PutPipelineDefinitionOutput) SetValidationErrors(v []*ValidationError) *PutPipelineDefinitionOutput { + s.ValidationErrors = v + return s +} + +// SetValidationWarnings sets the ValidationWarnings field's value. +func (s *PutPipelineDefinitionOutput) SetValidationWarnings(v []*ValidationWarning) *PutPipelineDefinitionOutput { + s.ValidationWarnings = v + return s +} + +// Defines the query to run against an object. +type Query struct { + _ struct{} `type:"structure"` + + // List of selectors that define the query. An object must satisfy all of the + // selectors to match the query. + Selectors []*Selector `locationName:"selectors" type:"list"` +} + +// String returns the string representation +func (s Query) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Query) GoString() string { + return s.String() +} + +// SetSelectors sets the Selectors field's value. +func (s *Query) SetSelectors(v []*Selector) *Query { + s.Selectors = v + return s +} + +// Contains the parameters for QueryObjects. +type QueryObjectsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of object names that QueryObjects will return in a single + // call. The default value is 100. + Limit *int64 `locationName:"limit" type:"integer"` + + // The starting point for the results to be returned. For the first call, this + // value should be empty. As long as there are more results, continue to call + // QueryObjects with the marker value from the previous call to retrieve the + // next set of results. + Marker *string `locationName:"marker" type:"string"` + + // The ID of the pipeline. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // The query that defines the objects to be returned. The Query object can contain + // a maximum of ten selectors. The conditions in the query are limited to top-level + // String fields in the object. These filters can be applied to components, + // instances, and attempts. + Query *Query `locationName:"query" type:"structure"` + + // Indicates whether the query applies to components or instances. The possible + // values are: COMPONENT, INSTANCE, and ATTEMPT. + // + // Sphere is a required field + Sphere *string `locationName:"sphere" type:"string" required:"true"` +} + +// String returns the string representation +func (s QueryObjectsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QueryObjectsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *QueryObjectsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "QueryObjectsInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + if s.Sphere == nil { + invalidParams.Add(request.NewErrParamRequired("Sphere")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLimit sets the Limit field's value. +func (s *QueryObjectsInput) SetLimit(v int64) *QueryObjectsInput { + s.Limit = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *QueryObjectsInput) SetMarker(v string) *QueryObjectsInput { + s.Marker = &v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *QueryObjectsInput) SetPipelineId(v string) *QueryObjectsInput { + s.PipelineId = &v + return s +} + +// SetQuery sets the Query field's value. +func (s *QueryObjectsInput) SetQuery(v *Query) *QueryObjectsInput { + s.Query = v + return s +} + +// SetSphere sets the Sphere field's value. +func (s *QueryObjectsInput) SetSphere(v string) *QueryObjectsInput { + s.Sphere = &v + return s +} + +// Contains the output of QueryObjects. +type QueryObjectsOutput struct { + _ struct{} `type:"structure"` + + // Indicates whether there are more results that can be obtained by a subsequent + // call. + HasMoreResults *bool `locationName:"hasMoreResults" type:"boolean"` + + // The identifiers that match the query selectors. + Ids []*string `locationName:"ids" type:"list"` + + // The starting point for the next page of results. To view the next page of + // results, call QueryObjects again with this marker value. If the value is + // null, there are no more results. + Marker *string `locationName:"marker" type:"string"` +} + +// String returns the string representation +func (s QueryObjectsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QueryObjectsOutput) GoString() string { + return s.String() +} + +// SetHasMoreResults sets the HasMoreResults field's value. +func (s *QueryObjectsOutput) SetHasMoreResults(v bool) *QueryObjectsOutput { + s.HasMoreResults = &v + return s +} + +// SetIds sets the Ids field's value. +func (s *QueryObjectsOutput) SetIds(v []*string) *QueryObjectsOutput { + s.Ids = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *QueryObjectsOutput) SetMarker(v string) *QueryObjectsOutput { + s.Marker = &v + return s +} + +// Contains the parameters for RemoveTags. +type RemoveTagsInput struct { + _ struct{} `type:"structure"` + + // The ID of the pipeline. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // The keys of the tags to remove. + // + // TagKeys is a required field + TagKeys []*string `locationName:"tagKeys" type:"list" required:"true"` +} + +// String returns the string representation +func (s RemoveTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
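An illustrative aside, not part of the vendored SDK file: the Query, Selector, and Operator types above combine into a QueryObjects filter. The sketch below queries INSTANCE objects whose read-only @status field equals RUNNING; the pipeline ID reuses the df-06372391ZG65EXAMPLE placeholder from the comments above, and the status value is an assumption for illustration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(session.Must(session.NewSession()))

	// Select INSTANCE objects whose read-only @status field equals RUNNING.
	out, err := svc.QueryObjects(&datapipeline.QueryObjectsInput{
		PipelineId: aws.String("df-06372391ZG65EXAMPLE"), // placeholder pipeline ID
		Sphere:     aws.String("INSTANCE"),
		Query: &datapipeline.Query{
			Selectors: []*datapipeline.Selector{{
				FieldName: aws.String("@status"),
				Operator: &datapipeline.Operator{
					Type:   aws.String("EQ"),
					Values: []*string{aws.String("RUNNING")},
				},
			}},
		},
		Limit: aws.Int64(25),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("matching object ids:", aws.StringValueSlice(out.Ids))
}
```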
+func (s *RemoveTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveTagsInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPipelineId sets the PipelineId field's value. +func (s *RemoveTagsInput) SetPipelineId(v string) *RemoveTagsInput { + s.PipelineId = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *RemoveTagsInput) SetTagKeys(v []*string) *RemoveTagsInput { + s.TagKeys = v + return s +} + +// Contains the output of RemoveTags. +type RemoveTagsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RemoveTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveTagsOutput) GoString() string { + return s.String() +} + +// Contains the parameters for ReportTaskProgress. +type ReportTaskProgressInput struct { + _ struct{} `type:"structure"` + + // Key-value pairs that define the properties of the ReportTaskProgressInput + // object. + Fields []*Field `locationName:"fields" type:"list"` + + // The ID of the task assigned to the task runner. This value is provided in + // the response for PollForTask. + // + // TaskId is a required field + TaskId *string `locationName:"taskId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ReportTaskProgressInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReportTaskProgressInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReportTaskProgressInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReportTaskProgressInput"} + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) + } + if s.TaskId != nil && len(*s.TaskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 1)) + } + if s.Fields != nil { + for i, v := range s.Fields { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Fields", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFields sets the Fields field's value. +func (s *ReportTaskProgressInput) SetFields(v []*Field) *ReportTaskProgressInput { + s.Fields = v + return s +} + +// SetTaskId sets the TaskId field's value. +func (s *ReportTaskProgressInput) SetTaskId(v string) *ReportTaskProgressInput { + s.TaskId = &v + return s +} + +// Contains the output of ReportTaskProgress. +type ReportTaskProgressOutput struct { + _ struct{} `type:"structure"` + + // If true, the calling task runner should cancel processing of the task. The + // task runner does not need to call SetTaskStatus for canceled tasks. 
+ // + // Canceled is a required field + Canceled *bool `locationName:"canceled" type:"boolean" required:"true"` +} + +// String returns the string representation +func (s ReportTaskProgressOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReportTaskProgressOutput) GoString() string { + return s.String() +} + +// SetCanceled sets the Canceled field's value. +func (s *ReportTaskProgressOutput) SetCanceled(v bool) *ReportTaskProgressOutput { + s.Canceled = &v + return s +} + +// Contains the parameters for ReportTaskRunnerHeartbeat. +type ReportTaskRunnerHeartbeatInput struct { + _ struct{} `type:"structure"` + + // The public DNS name of the task runner. + Hostname *string `locationName:"hostname" min:"1" type:"string"` + + // The ID of the task runner. This value should be unique across your AWS account. + // In the case of AWS Data Pipeline Task Runner launched on a resource managed + // by AWS Data Pipeline, the web service provides a unique identifier when it + // launches the application. If you have written a custom task runner, you should + // assign a unique identifier for the task runner. + // + // TaskrunnerId is a required field + TaskrunnerId *string `locationName:"taskrunnerId" min:"1" type:"string" required:"true"` + + // The type of task the task runner is configured to accept and process. The + // worker group is set as a field on objects in the pipeline when they are created. + // You can only specify a single value for workerGroup. There are no wildcard + // values permitted in workerGroup; the string must be an exact, case-sensitive, + // match. + WorkerGroup *string `locationName:"workerGroup" type:"string"` +} + +// String returns the string representation +func (s ReportTaskRunnerHeartbeatInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReportTaskRunnerHeartbeatInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReportTaskRunnerHeartbeatInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReportTaskRunnerHeartbeatInput"} + if s.Hostname != nil && len(*s.Hostname) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Hostname", 1)) + } + if s.TaskrunnerId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskrunnerId")) + } + if s.TaskrunnerId != nil && len(*s.TaskrunnerId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskrunnerId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetHostname sets the Hostname field's value. +func (s *ReportTaskRunnerHeartbeatInput) SetHostname(v string) *ReportTaskRunnerHeartbeatInput { + s.Hostname = &v + return s +} + +// SetTaskrunnerId sets the TaskrunnerId field's value. +func (s *ReportTaskRunnerHeartbeatInput) SetTaskrunnerId(v string) *ReportTaskRunnerHeartbeatInput { + s.TaskrunnerId = &v + return s +} + +// SetWorkerGroup sets the WorkerGroup field's value. +func (s *ReportTaskRunnerHeartbeatInput) SetWorkerGroup(v string) *ReportTaskRunnerHeartbeatInput { + s.WorkerGroup = &v + return s +} + +// Contains the output of ReportTaskRunnerHeartbeat. +type ReportTaskRunnerHeartbeatOutput struct { + _ struct{} `type:"structure"` + + // Indicates whether the calling task runner should terminate. 
+ // + // Terminate is a required field + Terminate *bool `locationName:"terminate" type:"boolean" required:"true"` +} + +// String returns the string representation +func (s ReportTaskRunnerHeartbeatOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReportTaskRunnerHeartbeatOutput) GoString() string { + return s.String() +} + +// SetTerminate sets the Terminate field's value. +func (s *ReportTaskRunnerHeartbeatOutput) SetTerminate(v bool) *ReportTaskRunnerHeartbeatOutput { + s.Terminate = &v + return s +} + +// A comparision that is used to determine whether a query should return this +// object. +type Selector struct { + _ struct{} `type:"structure"` + + // The name of the field that the operator will be applied to. The field name + // is the "key" portion of the field definition in the pipeline definition syntax + // that is used by the AWS Data Pipeline API. If the field is not set on the + // object, the condition fails. + FieldName *string `locationName:"fieldName" type:"string"` + + // Contains a logical operation for comparing the value of a field with a specified + // value. + Operator *Operator `locationName:"operator" type:"structure"` +} + +// String returns the string representation +func (s Selector) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Selector) GoString() string { + return s.String() +} + +// SetFieldName sets the FieldName field's value. +func (s *Selector) SetFieldName(v string) *Selector { + s.FieldName = &v + return s +} + +// SetOperator sets the Operator field's value. +func (s *Selector) SetOperator(v *Operator) *Selector { + s.Operator = v + return s +} + +// Contains the parameters for SetStatus. +type SetStatusInput struct { + _ struct{} `type:"structure"` + + // The IDs of the objects. The corresponding objects can be either physical + // or components, but not a mix of both types. + // + // ObjectIds is a required field + ObjectIds []*string `locationName:"objectIds" type:"list" required:"true"` + + // The ID of the pipeline that contains the objects. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // The status to be set on all the objects specified in objectIds. For components, + // use PAUSE or RESUME. For instances, use TRY_CANCEL, RERUN, or MARK_FINISHED. + // + // Status is a required field + Status *string `locationName:"status" type:"string" required:"true"` +} + +// String returns the string representation +func (s SetStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SetStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetStatusInput"} + if s.ObjectIds == nil { + invalidParams.Add(request.NewErrParamRequired("ObjectIds")) + } + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetObjectIds sets the ObjectIds field's value. 
+func (s *SetStatusInput) SetObjectIds(v []*string) *SetStatusInput { + s.ObjectIds = v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *SetStatusInput) SetPipelineId(v string) *SetStatusInput { + s.PipelineId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *SetStatusInput) SetStatus(v string) *SetStatusInput { + s.Status = &v + return s +} + +type SetStatusOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s SetStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetStatusOutput) GoString() string { + return s.String() +} + +// Contains the parameters for SetTaskStatus. +type SetTaskStatusInput struct { + _ struct{} `type:"structure"` + + // If an error occurred during the task, this value specifies the error code. + // This value is set on the physical attempt object. It is used to display error + // information to the user. It should not start with string "Service_" which + // is reserved by the system. + ErrorId *string `locationName:"errorId" type:"string"` + + // If an error occurred during the task, this value specifies a text description + // of the error. This value is set on the physical attempt object. It is used + // to display error information to the user. The web service does not parse + // this value. + ErrorMessage *string `locationName:"errorMessage" type:"string"` + + // If an error occurred during the task, this value specifies the stack trace + // associated with the error. This value is set on the physical attempt object. + // It is used to display error information to the user. The web service does + // not parse this value. + ErrorStackTrace *string `locationName:"errorStackTrace" type:"string"` + + // The ID of the task assigned to the task runner. This value is provided in + // the response for PollForTask. + // + // TaskId is a required field + TaskId *string `locationName:"taskId" min:"1" type:"string" required:"true"` + + // If FINISHED, the task successfully completed. If FAILED, the task ended unsuccessfully. + // Preconditions use false. + // + // TaskStatus is a required field + TaskStatus *string `locationName:"taskStatus" type:"string" required:"true" enum:"TaskStatus"` +} + +// String returns the string representation +func (s SetTaskStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetTaskStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SetTaskStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetTaskStatusInput"} + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) + } + if s.TaskId != nil && len(*s.TaskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 1)) + } + if s.TaskStatus == nil { + invalidParams.Add(request.NewErrParamRequired("TaskStatus")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetErrorId sets the ErrorId field's value. +func (s *SetTaskStatusInput) SetErrorId(v string) *SetTaskStatusInput { + s.ErrorId = &v + return s +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *SetTaskStatusInput) SetErrorMessage(v string) *SetTaskStatusInput { + s.ErrorMessage = &v + return s +} + +// SetErrorStackTrace sets the ErrorStackTrace field's value. 
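An illustrative aside, not part of the vendored SDK file: PollForTask, ReportTaskProgress, and SetTaskStatus together describe the life cycle of a custom task runner. The compressed loop below sketches that flow; the worker group name is a placeholder, and real work, retries, and backoff are elided.

```go
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(session.Must(session.NewSession()))

	for {
		polled, err := svc.PollForTask(&datapipeline.PollForTaskInput{
			// Placeholder; must exactly match the workerGroup field set on the
			// pipeline objects (no wildcards are permitted).
			WorkerGroup: aws.String("example-worker-group"),
		})
		if err != nil {
			log.Println("poll error:", err)
			time.Sleep(30 * time.Second)
			continue
		}
		task := polled.TaskObject
		if task == nil {
			continue // no task was assigned; poll again
		}

		// A true canceled flag means the task runner should stop working on the
		// task and does not need to call SetTaskStatus for it.
		progress, err := svc.ReportTaskProgress(&datapipeline.ReportTaskProgressInput{TaskId: task.TaskId})
		if err != nil || aws.BoolValue(progress.Canceled) {
			continue
		}

		// ... run the work described by task.Objects here ...

		if _, err := svc.SetTaskStatus(&datapipeline.SetTaskStatusInput{
			TaskId:     task.TaskId,
			TaskStatus: aws.String("FINISHED"),
		}); err != nil {
			log.Println("set task status error:", err)
		}
	}
}
```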
+func (s *SetTaskStatusInput) SetErrorStackTrace(v string) *SetTaskStatusInput { + s.ErrorStackTrace = &v + return s +} + +// SetTaskId sets the TaskId field's value. +func (s *SetTaskStatusInput) SetTaskId(v string) *SetTaskStatusInput { + s.TaskId = &v + return s +} + +// SetTaskStatus sets the TaskStatus field's value. +func (s *SetTaskStatusInput) SetTaskStatus(v string) *SetTaskStatusInput { + s.TaskStatus = &v + return s +} + +// Contains the output of SetTaskStatus. +type SetTaskStatusOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s SetTaskStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetTaskStatusOutput) GoString() string { + return s.String() +} + +// Tags are key/value pairs defined by a user and associated with a pipeline +// to control access. AWS Data Pipeline allows you to associate ten tags per +// pipeline. For more information, see Controlling User Access to Pipelines +// (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html) +// in the AWS Data Pipeline Developer Guide. +type Tag struct { + _ struct{} `type:"structure"` + + // The key name of a tag defined by a user. For more information, see Controlling + // User Access to Pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html) + // in the AWS Data Pipeline Developer Guide. + // + // Key is a required field + Key *string `locationName:"key" min:"1" type:"string" required:"true"` + + // The optional value portion of a tag defined by a user. For more information, + // see Controlling User Access to Pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html) + // in the AWS Data Pipeline Developer Guide. + // + // Value is a required field + Value *string `locationName:"value" type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +// Contains information about a pipeline task that is assigned to a task runner. +type TaskObject struct { + _ struct{} `type:"structure"` + + // The ID of the pipeline task attempt object. AWS Data Pipeline uses this value + // to track how many times a task is attempted. + AttemptId *string `locationName:"attemptId" min:"1" type:"string"` + + // Connection information for the location where the task runner will publish + // the output of the task. + Objects map[string]*PipelineObject `locationName:"objects" type:"map"` + + // The ID of the pipeline that provided the task. 
+ PipelineId *string `locationName:"pipelineId" min:"1" type:"string"` + + // An internal identifier for the task. This ID is passed to the SetTaskStatus + // and ReportTaskProgress actions. + TaskId *string `locationName:"taskId" min:"1" type:"string"` +} + +// String returns the string representation +func (s TaskObject) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TaskObject) GoString() string { + return s.String() +} + +// SetAttemptId sets the AttemptId field's value. +func (s *TaskObject) SetAttemptId(v string) *TaskObject { + s.AttemptId = &v + return s +} + +// SetObjects sets the Objects field's value. +func (s *TaskObject) SetObjects(v map[string]*PipelineObject) *TaskObject { + s.Objects = v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *TaskObject) SetPipelineId(v string) *TaskObject { + s.PipelineId = &v + return s +} + +// SetTaskId sets the TaskId field's value. +func (s *TaskObject) SetTaskId(v string) *TaskObject { + s.TaskId = &v + return s +} + +// Contains the parameters for ValidatePipelineDefinition. +type ValidatePipelineDefinitionInput struct { + _ struct{} `type:"structure"` + + // The parameter objects used with the pipeline. + ParameterObjects []*ParameterObject `locationName:"parameterObjects" type:"list"` + + // The parameter values used with the pipeline. + ParameterValues []*ParameterValue `locationName:"parameterValues" type:"list"` + + // The ID of the pipeline. + // + // PipelineId is a required field + PipelineId *string `locationName:"pipelineId" min:"1" type:"string" required:"true"` + + // The objects that define the pipeline changes to validate against the pipeline. + // + // PipelineObjects is a required field + PipelineObjects []*PipelineObject `locationName:"pipelineObjects" type:"list" required:"true"` +} + +// String returns the string representation +func (s ValidatePipelineDefinitionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidatePipelineDefinitionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ValidatePipelineDefinitionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ValidatePipelineDefinitionInput"} + if s.PipelineId == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineId")) + } + if s.PipelineId != nil && len(*s.PipelineId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PipelineId", 1)) + } + if s.PipelineObjects == nil { + invalidParams.Add(request.NewErrParamRequired("PipelineObjects")) + } + if s.ParameterObjects != nil { + for i, v := range s.ParameterObjects { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParameterObjects", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ParameterValues != nil { + for i, v := range s.ParameterValues { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParameterValues", i), err.(request.ErrInvalidParams)) + } + } + } + if s.PipelineObjects != nil { + for i, v := range s.PipelineObjects { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PipelineObjects", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetParameterObjects sets the ParameterObjects field's value. +func (s *ValidatePipelineDefinitionInput) SetParameterObjects(v []*ParameterObject) *ValidatePipelineDefinitionInput { + s.ParameterObjects = v + return s +} + +// SetParameterValues sets the ParameterValues field's value. +func (s *ValidatePipelineDefinitionInput) SetParameterValues(v []*ParameterValue) *ValidatePipelineDefinitionInput { + s.ParameterValues = v + return s +} + +// SetPipelineId sets the PipelineId field's value. +func (s *ValidatePipelineDefinitionInput) SetPipelineId(v string) *ValidatePipelineDefinitionInput { + s.PipelineId = &v + return s +} + +// SetPipelineObjects sets the PipelineObjects field's value. +func (s *ValidatePipelineDefinitionInput) SetPipelineObjects(v []*PipelineObject) *ValidatePipelineDefinitionInput { + s.PipelineObjects = v + return s +} + +// Contains the output of ValidatePipelineDefinition. +type ValidatePipelineDefinitionOutput struct { + _ struct{} `type:"structure"` + + // Indicates whether there were validation errors. + // + // Errored is a required field + Errored *bool `locationName:"errored" type:"boolean" required:"true"` + + // Any validation errors that were found. + ValidationErrors []*ValidationError `locationName:"validationErrors" type:"list"` + + // Any validation warnings that were found. + ValidationWarnings []*ValidationWarning `locationName:"validationWarnings" type:"list"` +} + +// String returns the string representation +func (s ValidatePipelineDefinitionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidatePipelineDefinitionOutput) GoString() string { + return s.String() +} + +// SetErrored sets the Errored field's value. +func (s *ValidatePipelineDefinitionOutput) SetErrored(v bool) *ValidatePipelineDefinitionOutput { + s.Errored = &v + return s +} + +// SetValidationErrors sets the ValidationErrors field's value. +func (s *ValidatePipelineDefinitionOutput) SetValidationErrors(v []*ValidationError) *ValidatePipelineDefinitionOutput { + s.ValidationErrors = v + return s +} + +// SetValidationWarnings sets the ValidationWarnings field's value. 
+func (s *ValidatePipelineDefinitionOutput) SetValidationWarnings(v []*ValidationWarning) *ValidatePipelineDefinitionOutput { + s.ValidationWarnings = v + return s +} + +// Defines a validation error. Validation errors prevent pipeline activation. +// The set of validation errors that can be returned are defined by AWS Data +// Pipeline. +type ValidationError struct { + _ struct{} `type:"structure"` + + // A description of the validation error. + Errors []*string `locationName:"errors" type:"list"` + + // The identifier of the object that contains the validation error. + Id *string `locationName:"id" min:"1" type:"string"` +} + +// String returns the string representation +func (s ValidationError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidationError) GoString() string { + return s.String() +} + +// SetErrors sets the Errors field's value. +func (s *ValidationError) SetErrors(v []*string) *ValidationError { + s.Errors = v + return s +} + +// SetId sets the Id field's value. +func (s *ValidationError) SetId(v string) *ValidationError { + s.Id = &v + return s +} + +// Defines a validation warning. Validation warnings do not prevent pipeline +// activation. The set of validation warnings that can be returned are defined +// by AWS Data Pipeline. +type ValidationWarning struct { + _ struct{} `type:"structure"` + + // The identifier of the object that contains the validation warning. + Id *string `locationName:"id" min:"1" type:"string"` + + // A description of the validation warning. + Warnings []*string `locationName:"warnings" type:"list"` +} + +// String returns the string representation +func (s ValidationWarning) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidationWarning) GoString() string { + return s.String() +} + +// SetId sets the Id field's value. +func (s *ValidationWarning) SetId(v string) *ValidationWarning { + s.Id = &v + return s +} + +// SetWarnings sets the Warnings field's value. +func (s *ValidationWarning) SetWarnings(v []*string) *ValidationWarning { + s.Warnings = v + return s +} + +const ( + // OperatorTypeEq is a OperatorType enum value + OperatorTypeEq = "EQ" + + // OperatorTypeRefEq is a OperatorType enum value + OperatorTypeRefEq = "REF_EQ" + + // OperatorTypeLe is a OperatorType enum value + OperatorTypeLe = "LE" + + // OperatorTypeGe is a OperatorType enum value + OperatorTypeGe = "GE" + + // OperatorTypeBetween is a OperatorType enum value + OperatorTypeBetween = "BETWEEN" +) + +const ( + // TaskStatusFinished is a TaskStatus enum value + TaskStatusFinished = "FINISHED" + + // TaskStatusFailed is a TaskStatus enum value + TaskStatusFailed = "FAILED" + + // TaskStatusFalse is a TaskStatus enum value + TaskStatusFalse = "FALSE" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/datapipeline/doc.go b/vendor/github.com/aws/aws-sdk-go/service/datapipeline/doc.go new file mode 100644 index 00000000000..79ee00a1eec --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/datapipeline/doc.go @@ -0,0 +1,49 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package datapipeline provides the client and types for making API +// requests to AWS Data Pipeline. +// +// AWS Data Pipeline configures and manages a data-driven workflow called a +// pipeline. 
AWS Data Pipeline handles the details of scheduling and ensuring +// that data dependencies are met so that your application can focus on processing +// the data. +// +// AWS Data Pipeline provides a JAR implementation of a task runner called AWS +// Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for +// common data management scenarios, such as performing database queries and +// running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can +// use AWS Data Pipeline Task Runner as your task runner, or you can write your +// own task runner to provide custom data management. +// +// AWS Data Pipeline implements two main sets of functionality. Use the first +// set to create a pipeline and define data sources, schedules, dependencies, +// and the transforms to be performed on the data. Use the second set in your +// task runner application to receive the next task ready for processing. The +// logic for performing the task, such as querying the data, running data analysis, +// or converting the data from one format to another, is contained within the +// task runner. The task runner performs the task assigned to it by the web +// service, reporting progress to the web service as it does so. When the task +// is done, the task runner reports the final success or failure of the task +// to the web service. +// +// See https://docs.aws.amazon.com/goto/WebAPI/datapipeline-2012-10-29 for more information on this service. +// +// See datapipeline package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/datapipeline/ +// +// Using the Client +// +// To contact AWS Data Pipeline with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS Data Pipeline client DataPipeline for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/datapipeline/#New +package datapipeline diff --git a/vendor/github.com/aws/aws-sdk-go/service/datapipeline/errors.go b/vendor/github.com/aws/aws-sdk-go/service/datapipeline/errors.go new file mode 100644 index 00000000000..f60b7da9f17 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/datapipeline/errors.go @@ -0,0 +1,39 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package datapipeline + +const ( + + // ErrCodeInternalServiceError for service response error code + // "InternalServiceError". + // + // An internal service error occurred. + ErrCodeInternalServiceError = "InternalServiceError" + + // ErrCodeInvalidRequestException for service response error code + // "InvalidRequestException". + // + // The request was not valid. Verify that your request was properly formatted, + // that the signature was generated with the correct credentials, and that you + // haven't exceeded any of the service limits for your account. + ErrCodeInvalidRequestException = "InvalidRequestException" + + // ErrCodePipelineDeletedException for service response error code + // "PipelineDeletedException". + // + // The specified pipeline has been deleted. 
+ ErrCodePipelineDeletedException = "PipelineDeletedException" + + // ErrCodePipelineNotFoundException for service response error code + // "PipelineNotFoundException". + // + // The specified pipeline was not found. Verify that you used the correct user + // and account identifiers. + ErrCodePipelineNotFoundException = "PipelineNotFoundException" + + // ErrCodeTaskNotFoundException for service response error code + // "TaskNotFoundException". + // + // The specified task was not found. + ErrCodeTaskNotFoundException = "TaskNotFoundException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/datapipeline/service.go b/vendor/github.com/aws/aws-sdk-go/service/datapipeline/service.go new file mode 100644 index 00000000000..ebd5c29b2fc --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/datapipeline/service.go @@ -0,0 +1,97 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package datapipeline + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// DataPipeline provides the API operation methods for making requests to +// AWS Data Pipeline. See this package's package overview docs +// for details on the service. +// +// DataPipeline methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type DataPipeline struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "datapipeline" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Data Pipeline" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the DataPipeline client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a DataPipeline client from just a session. +// svc := datapipeline.New(mySession) +// +// // Create a DataPipeline client with additional configuration +// svc := datapipeline.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *DataPipeline { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. 
+func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *DataPipeline { + svc := &DataPipeline{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2012-10-29", + JSONVersion: "1.1", + TargetPrefix: "DataPipeline", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a DataPipeline operation and runs any +// custom request initialization. +func (c *DataPipeline) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dax/api.go b/vendor/github.com/aws/aws-sdk-go/service/dax/api.go index 99c044f7a5c..4b836d462bd 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/dax/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/dax/api.go @@ -14,8 +14,8 @@ const opCreateCluster = "CreateCluster" // CreateClusterRequest generates a "aws/request.Request" representing the // client's request for the CreateCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -100,6 +100,8 @@ func (c *DAX) CreateClusterRequest(input *CreateClusterInput) (req *request.Requ // * ErrCodeTagQuotaPerResourceExceeded "TagQuotaPerResourceExceeded" // You have exceeded the maximum number of tags for this DAX cluster. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -132,8 +134,8 @@ const opCreateParameterGroup = "CreateParameterGroup" // CreateParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -192,6 +194,8 @@ func (c *DAX) CreateParameterGroupRequest(input *CreateParameterGroupInput) (req // * ErrCodeInvalidParameterGroupStateFault "InvalidParameterGroupStateFault" // One or more parameters in a parameter group are in an invalid state. 
// +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -224,8 +228,8 @@ const opCreateSubnetGroup = "CreateSubnetGroup" // CreateSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -288,6 +292,8 @@ func (c *DAX) CreateSubnetGroupRequest(input *CreateSubnetGroupInput) (req *requ // * ErrCodeInvalidSubnet "InvalidSubnet" // An invalid subnet identifier was specified. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // See also, https://docs.aws.amazon.com/goto/WebAPI/dax-2017-04-19/CreateSubnetGroup func (c *DAX) CreateSubnetGroup(input *CreateSubnetGroupInput) (*CreateSubnetGroupOutput, error) { req, out := c.CreateSubnetGroupRequest(input) @@ -314,8 +320,8 @@ const opDecreaseReplicationFactor = "DecreaseReplicationFactor" // DecreaseReplicationFactorRequest generates a "aws/request.Request" representing the // client's request for the DecreaseReplicationFactor operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -376,6 +382,8 @@ func (c *DAX) DecreaseReplicationFactorRequest(input *DecreaseReplicationFactorI // * ErrCodeInvalidClusterStateFault "InvalidClusterStateFault" // The requested DAX cluster is not in the available state. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -408,8 +416,8 @@ const opDeleteCluster = "DeleteCluster" // DeleteClusterRequest generates a "aws/request.Request" representing the // client's request for the DeleteCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -467,6 +475,8 @@ func (c *DAX) DeleteClusterRequest(input *DeleteClusterInput) (req *request.Requ // * ErrCodeInvalidClusterStateFault "InvalidClusterStateFault" // The requested DAX cluster is not in the available state. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. 
// @@ -499,8 +509,8 @@ const opDeleteParameterGroup = "DeleteParameterGroup" // DeleteParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -556,6 +566,8 @@ func (c *DAX) DeleteParameterGroupRequest(input *DeleteParameterGroupInput) (req // * ErrCodeParameterGroupNotFoundFault "ParameterGroupNotFoundFault" // The specified parameter group does not exist. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -588,8 +600,8 @@ const opDeleteSubnetGroup = "DeleteSubnetGroup" // DeleteSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -646,6 +658,8 @@ func (c *DAX) DeleteSubnetGroupRequest(input *DeleteSubnetGroupInput) (req *requ // * ErrCodeSubnetGroupNotFoundFault "SubnetGroupNotFoundFault" // The requested subnet group name does not refer to an existing subnet group. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // See also, https://docs.aws.amazon.com/goto/WebAPI/dax-2017-04-19/DeleteSubnetGroup func (c *DAX) DeleteSubnetGroup(input *DeleteSubnetGroupInput) (*DeleteSubnetGroupOutput, error) { req, out := c.DeleteSubnetGroupRequest(input) @@ -672,8 +686,8 @@ const opDescribeClusters = "DescribeClusters" // DescribeClustersRequest generates a "aws/request.Request" representing the // client's request for the DescribeClusters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -741,6 +755,8 @@ func (c *DAX) DescribeClustersRequest(input *DescribeClustersInput) (req *reques // * ErrCodeClusterNotFoundFault "ClusterNotFoundFault" // The requested cluster ID does not refer to an existing DAX cluster. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -773,8 +789,8 @@ const opDescribeDefaultParameters = "DescribeDefaultParameters" // DescribeDefaultParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeDefaultParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -823,6 +839,8 @@ func (c *DAX) DescribeDefaultParametersRequest(input *DescribeDefaultParametersI // API operation DescribeDefaultParameters for usage and error information. // // Returned Error Codes: +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -855,8 +873,8 @@ const opDescribeEvents = "DescribeEvents" // DescribeEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -910,6 +928,8 @@ func (c *DAX) DescribeEventsRequest(input *DescribeEventsInput) (req *request.Re // API operation DescribeEvents for usage and error information. // // Returned Error Codes: +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -942,8 +962,8 @@ const opDescribeParameterGroups = "DescribeParameterGroups" // DescribeParameterGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeParameterGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -996,6 +1016,8 @@ func (c *DAX) DescribeParameterGroupsRequest(input *DescribeParameterGroupsInput // * ErrCodeParameterGroupNotFoundFault "ParameterGroupNotFoundFault" // The specified parameter group does not exist. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -1028,8 +1050,8 @@ const opDescribeParameters = "DescribeParameters" // DescribeParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1081,6 +1103,8 @@ func (c *DAX) DescribeParametersRequest(input *DescribeParametersInput) (req *re // * ErrCodeParameterGroupNotFoundFault "ParameterGroupNotFoundFault" // The specified parameter group does not exist. 
// +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -1113,8 +1137,8 @@ const opDescribeSubnetGroups = "DescribeSubnetGroups" // DescribeSubnetGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSubnetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1167,6 +1191,8 @@ func (c *DAX) DescribeSubnetGroupsRequest(input *DescribeSubnetGroupsInput) (req // * ErrCodeSubnetGroupNotFoundFault "SubnetGroupNotFoundFault" // The requested subnet group name does not refer to an existing subnet group. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // See also, https://docs.aws.amazon.com/goto/WebAPI/dax-2017-04-19/DescribeSubnetGroups func (c *DAX) DescribeSubnetGroups(input *DescribeSubnetGroupsInput) (*DescribeSubnetGroupsOutput, error) { req, out := c.DescribeSubnetGroupsRequest(input) @@ -1193,8 +1219,8 @@ const opIncreaseReplicationFactor = "IncreaseReplicationFactor" // IncreaseReplicationFactorRequest generates a "aws/request.Request" representing the // client's request for the IncreaseReplicationFactor operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1262,6 +1288,8 @@ func (c *DAX) IncreaseReplicationFactorRequest(input *IncreaseReplicationFactorI // * ErrCodeNodeQuotaForCustomerExceededFault "NodeQuotaForCustomerExceededFault" // You have attempted to exceed the maximum number of nodes for your AWS account. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -1294,8 +1322,8 @@ const opListTags = "ListTags" // ListTagsRequest generates a "aws/request.Request" representing the // client's request for the ListTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1354,6 +1382,8 @@ func (c *DAX) ListTagsRequest(input *ListTagsInput) (req *request.Request, outpu // * ErrCodeInvalidClusterStateFault "InvalidClusterStateFault" // The requested DAX cluster is not in the available state. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. 
// @@ -1386,8 +1416,8 @@ const opRebootNode = "RebootNode" // RebootNodeRequest generates a "aws/request.Request" representing the // client's request for the RebootNode operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1446,6 +1476,8 @@ func (c *DAX) RebootNodeRequest(input *RebootNodeInput) (req *request.Request, o // * ErrCodeInvalidClusterStateFault "InvalidClusterStateFault" // The requested DAX cluster is not in the available state. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -1478,8 +1510,8 @@ const opTagResource = "TagResource" // TagResourceRequest generates a "aws/request.Request" representing the // client's request for the TagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1541,6 +1573,8 @@ func (c *DAX) TagResourceRequest(input *TagResourceInput) (req *request.Request, // * ErrCodeInvalidClusterStateFault "InvalidClusterStateFault" // The requested DAX cluster is not in the available state. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -1573,8 +1607,8 @@ const opUntagResource = "UntagResource" // UntagResourceRequest generates a "aws/request.Request" representing the // client's request for the UntagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1636,6 +1670,8 @@ func (c *DAX) UntagResourceRequest(input *UntagResourceInput) (req *request.Requ // * ErrCodeInvalidClusterStateFault "InvalidClusterStateFault" // The requested DAX cluster is not in the available state. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -1668,8 +1704,8 @@ const opUpdateCluster = "UpdateCluster" // UpdateClusterRequest generates a "aws/request.Request" representing the // client's request for the UpdateCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1732,6 +1768,8 @@ func (c *DAX) UpdateClusterRequest(input *UpdateClusterInput) (req *request.Requ // * ErrCodeParameterGroupNotFoundFault "ParameterGroupNotFoundFault" // The specified parameter group does not exist. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -1764,8 +1802,8 @@ const opUpdateParameterGroup = "UpdateParameterGroup" // UpdateParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the UpdateParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1821,6 +1859,8 @@ func (c *DAX) UpdateParameterGroupRequest(input *UpdateParameterGroupInput) (req // * ErrCodeParameterGroupNotFoundFault "ParameterGroupNotFoundFault" // The specified parameter group does not exist. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // The value for a parameter is invalid. // @@ -1853,8 +1893,8 @@ const opUpdateSubnetGroup = "UpdateSubnetGroup" // UpdateSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the UpdateSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1916,6 +1956,8 @@ func (c *DAX) UpdateSubnetGroupRequest(input *UpdateSubnetGroupInput) (req *requ // * ErrCodeInvalidSubnet "InvalidSubnet" // An invalid subnet identifier was specified. // +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// // See also, https://docs.aws.amazon.com/goto/WebAPI/dax-2017-04-19/UpdateSubnetGroup func (c *DAX) UpdateSubnetGroup(input *UpdateSubnetGroupInput) (*UpdateSubnetGroupOutput, error) { req, out := c.UpdateSubnetGroupRequest(input) @@ -1989,6 +2031,10 @@ type Cluster struct { // than 30 minutes, and is performed automatically within the maintenance window. PreferredMaintenanceWindow *string `type:"string"` + // The description of the server-side encryption status on the specified DAX + // cluster. + SSEDescription *SSEDescription `type:"structure"` + // A list of security groups, and the status of each, for the nodes in the cluster. SecurityGroups []*SecurityGroupMembership `type:"list"` @@ -2084,6 +2130,12 @@ func (s *Cluster) SetPreferredMaintenanceWindow(v string) *Cluster { return s } +// SetSSEDescription sets the SSEDescription field's value. +func (s *Cluster) SetSSEDescription(v *SSEDescription) *Cluster { + s.SSEDescription = v + return s +} + // SetSecurityGroups sets the SecurityGroups field's value. 
func (s *Cluster) SetSecurityGroups(v []*SecurityGroupMembership) *Cluster { s.SecurityGroups = v @@ -2189,6 +2241,9 @@ type CreateClusterInput struct { // ReplicationFactor is a required field ReplicationFactor *int64 `type:"integer" required:"true"` + // Represents the settings used to enable server-side encryption on the cluster. + SSESpecification *SSESpecification `type:"structure"` + // A list of security group IDs to be assigned to each node in the DAX cluster. // (Each of the security group ID is system-generated.) // @@ -2231,6 +2286,11 @@ func (s *CreateClusterInput) Validate() error { if s.ReplicationFactor == nil { invalidParams.Add(request.NewErrParamRequired("ReplicationFactor")) } + if s.SSESpecification != nil { + if err := s.SSESpecification.Validate(); err != nil { + invalidParams.AddNested("SSESpecification", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -2292,6 +2352,12 @@ func (s *CreateClusterInput) SetReplicationFactor(v int64) *CreateClusterInput { return s } +// SetSSESpecification sets the SSESpecification field's value. +func (s *CreateClusterInput) SetSSESpecification(v *SSESpecification) *CreateClusterInput { + s.SSESpecification = v + return s +} + // SetSecurityGroupIds sets the SecurityGroupIds field's value. func (s *CreateClusterInput) SetSecurityGroupIds(v []*string) *CreateClusterInput { s.SecurityGroupIds = v @@ -2925,7 +2991,7 @@ type DescribeEventsInput struct { // The end of the time interval for which to retrieve events, specified in ISO // 8601 format. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The maximum number of results to include in the response. If more results // exist than the specified MaxResults value, a token is included in the response @@ -2949,7 +3015,7 @@ type DescribeEventsInput struct { // The beginning of the time interval to retrieve events for, specified in ISO // 8601 format. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -3343,7 +3409,7 @@ type Event struct { _ struct{} `type:"structure"` // The date and time when the event occurred. - Date *time.Time `type:"timestamp" timestampFormat:"unix"` + Date *time.Time `type:"timestamp"` // A user-defined message associated with the event. Message *string `type:"string"` @@ -3573,7 +3639,7 @@ type Node struct { Endpoint *Endpoint `type:"structure"` // The date and time (in UNIX epoch format) when the node was launched. - NodeCreateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + NodeCreateTime *time.Time `type:"timestamp"` // A system-generated identifier for the node. NodeId *string `type:"string"` @@ -3992,6 +4058,79 @@ func (s *RebootNodeOutput) SetCluster(v *Cluster) *RebootNodeOutput { return s } +// The description of the server-side encryption status on the specified DAX +// cluster. +type SSEDescription struct { + _ struct{} `type:"structure"` + + // The current state of server-side encryption: + // + // * ENABLING - Server-side encryption is being enabled. + // + // * ENABLED - Server-side encryption is enabled. + // + // * DISABLING - Server-side encryption is being disabled. + // + // * DISABLED - Server-side encryption is disabled. 
+ Status *string `type:"string" enum:"SSEStatus"` +} + +// String returns the string representation +func (s SSEDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSEDescription) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *SSEDescription) SetStatus(v string) *SSEDescription { + s.Status = &v + return s +} + +// Represents the settings used to enable server-side encryption. +type SSESpecification struct { + _ struct{} `type:"structure"` + + // Indicates whether server-side encryption is enabled (true) or disabled (false) + // on the cluster. + // + // Enabled is a required field + Enabled *bool `type:"boolean" required:"true"` +} + +// String returns the string representation +func (s SSESpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSESpecification) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SSESpecification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SSESpecification"} + if s.Enabled == nil { + invalidParams.Add(request.NewErrParamRequired("Enabled")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnabled sets the Enabled field's value. +func (s *SSESpecification) SetEnabled(v bool) *SSESpecification { + s.Enabled = &v + return s +} + // An individual VPC security group and its status. type SecurityGroupMembership struct { _ struct{} `type:"structure"` @@ -4609,6 +4748,20 @@ const ( ParameterTypeNodeTypeSpecific = "NODE_TYPE_SPECIFIC" ) +const ( + // SSEStatusEnabling is a SSEStatus enum value + SSEStatusEnabling = "ENABLING" + + // SSEStatusEnabled is a SSEStatus enum value + SSEStatusEnabled = "ENABLED" + + // SSEStatusDisabling is a SSEStatus enum value + SSEStatusDisabling = "DISABLING" + + // SSEStatusDisabled is a SSEStatus enum value + SSEStatusDisabled = "DISABLED" +) + const ( // SourceTypeCluster is a SourceType enum value SourceTypeCluster = "CLUSTER" diff --git a/vendor/github.com/aws/aws-sdk-go/service/dax/errors.go b/vendor/github.com/aws/aws-sdk-go/service/dax/errors.go index 24aaf1a2327..f5ef87e87a0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/dax/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/dax/errors.go @@ -108,6 +108,10 @@ const ( // You have attempted to exceed the maximum number of parameter groups. ErrCodeParameterGroupQuotaExceededFault = "ParameterGroupQuotaExceededFault" + // ErrCodeServiceLinkedRoleNotFoundFault for service response error code + // "ServiceLinkedRoleNotFoundFault". + ErrCodeServiceLinkedRoleNotFoundFault = "ServiceLinkedRoleNotFoundFault" + // ErrCodeSubnetGroupAlreadyExistsFault for service response error code // "SubnetGroupAlreadyExistsFault". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/dax/service.go b/vendor/github.com/aws/aws-sdk-go/service/dax/service.go index a80ed1441a0..545ea0312c3 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/dax/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/dax/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "dax" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. 
+ ServiceName = "dax" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "DAX" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the DAX client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/devicefarm/api.go b/vendor/github.com/aws/aws-sdk-go/service/devicefarm/api.go index 31a9a3328ab..3a5a56339a0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/devicefarm/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/devicefarm/api.go @@ -14,8 +14,8 @@ const opCreateDevicePool = "CreateDevicePool" // CreateDevicePoolRequest generates a "aws/request.Request" representing the // client's request for the CreateDevicePool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -98,12 +98,101 @@ func (c *DeviceFarm) CreateDevicePoolWithContext(ctx aws.Context, input *CreateD return out, req.Send() } +const opCreateInstanceProfile = "CreateInstanceProfile" + +// CreateInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the CreateInstanceProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateInstanceProfile for more information on using the CreateInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateInstanceProfileRequest method. +// req, resp := client.CreateInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/CreateInstanceProfile +func (c *DeviceFarm) CreateInstanceProfileRequest(input *CreateInstanceProfileInput) (req *request.Request, output *CreateInstanceProfileOutput) { + op := &request.Operation{ + Name: opCreateInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateInstanceProfileInput{} + } + + output = &CreateInstanceProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateInstanceProfile API operation for AWS Device Farm. +// +// Creates a profile that can be applied to one or more private fleet device +// instances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Device Farm's +// API operation CreateInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/CreateInstanceProfile +func (c *DeviceFarm) CreateInstanceProfile(input *CreateInstanceProfileInput) (*CreateInstanceProfileOutput, error) { + req, out := c.CreateInstanceProfileRequest(input) + return out, req.Send() +} + +// CreateInstanceProfileWithContext is the same as CreateInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See CreateInstanceProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) CreateInstanceProfileWithContext(ctx aws.Context, input *CreateInstanceProfileInput, opts ...request.Option) (*CreateInstanceProfileOutput, error) { + req, out := c.CreateInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateNetworkProfile = "CreateNetworkProfile" // CreateNetworkProfileRequest generates a "aws/request.Request" representing the // client's request for the CreateNetworkProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -190,8 +279,8 @@ const opCreateProject = "CreateProject" // CreateProjectRequest generates a "aws/request.Request" representing the // client's request for the CreateProject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -278,8 +367,8 @@ const opCreateRemoteAccessSession = "CreateRemoteAccessSession" // CreateRemoteAccessSessionRequest generates a "aws/request.Request" representing the // client's request for the CreateRemoteAccessSession operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -366,8 +455,8 @@ const opCreateUpload = "CreateUpload" // CreateUploadRequest generates a "aws/request.Request" representing the // client's request for the CreateUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -450,12 +539,98 @@ func (c *DeviceFarm) CreateUploadWithContext(ctx aws.Context, input *CreateUploa return out, req.Send() } +const opCreateVPCEConfiguration = "CreateVPCEConfiguration" + +// CreateVPCEConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the CreateVPCEConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateVPCEConfiguration for more information on using the CreateVPCEConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateVPCEConfigurationRequest method. +// req, resp := client.CreateVPCEConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/CreateVPCEConfiguration +func (c *DeviceFarm) CreateVPCEConfigurationRequest(input *CreateVPCEConfigurationInput) (req *request.Request, output *CreateVPCEConfigurationOutput) { + op := &request.Operation{ + Name: opCreateVPCEConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateVPCEConfigurationInput{} + } + + output = &CreateVPCEConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateVPCEConfiguration API operation for AWS Device Farm. +// +// Creates a configuration record in Device Farm for your Amazon Virtual Private +// Cloud (VPC) endpoint. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation CreateVPCEConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/CreateVPCEConfiguration +func (c *DeviceFarm) CreateVPCEConfiguration(input *CreateVPCEConfigurationInput) (*CreateVPCEConfigurationOutput, error) { + req, out := c.CreateVPCEConfigurationRequest(input) + return out, req.Send() +} + +// CreateVPCEConfigurationWithContext is the same as CreateVPCEConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See CreateVPCEConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) CreateVPCEConfigurationWithContext(ctx aws.Context, input *CreateVPCEConfigurationInput, opts ...request.Option) (*CreateVPCEConfigurationOutput, error) { + req, out := c.CreateVPCEConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteDevicePool = "DeleteDevicePool" // DeleteDevicePoolRequest generates a "aws/request.Request" representing the // client's request for the DeleteDevicePool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -539,12 +714,100 @@ func (c *DeviceFarm) DeleteDevicePoolWithContext(ctx aws.Context, input *DeleteD return out, req.Send() } +const opDeleteInstanceProfile = "DeleteInstanceProfile" + +// DeleteInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the DeleteInstanceProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteInstanceProfile for more information on using the DeleteInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteInstanceProfileRequest method. +// req, resp := client.DeleteInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/DeleteInstanceProfile +func (c *DeviceFarm) DeleteInstanceProfileRequest(input *DeleteInstanceProfileInput) (req *request.Request, output *DeleteInstanceProfileOutput) { + op := &request.Operation{ + Name: opDeleteInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteInstanceProfileInput{} + } + + output = &DeleteInstanceProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteInstanceProfile API operation for AWS Device Farm. +// +// Deletes a profile that can be applied to one or more private device instances. 
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation DeleteInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/DeleteInstanceProfile +func (c *DeviceFarm) DeleteInstanceProfile(input *DeleteInstanceProfileInput) (*DeleteInstanceProfileOutput, error) { + req, out := c.DeleteInstanceProfileRequest(input) + return out, req.Send() +} + +// DeleteInstanceProfileWithContext is the same as DeleteInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteInstanceProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) DeleteInstanceProfileWithContext(ctx aws.Context, input *DeleteInstanceProfileInput, opts ...request.Option) (*DeleteInstanceProfileOutput, error) { + req, out := c.DeleteInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteNetworkProfile = "DeleteNetworkProfile" // DeleteNetworkProfileRequest generates a "aws/request.Request" representing the // client's request for the DeleteNetworkProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -631,8 +894,8 @@ const opDeleteProject = "DeleteProject" // DeleteProjectRequest generates a "aws/request.Request" representing the // client's request for the DeleteProject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -721,8 +984,8 @@ const opDeleteRemoteAccessSession = "DeleteRemoteAccessSession" // DeleteRemoteAccessSessionRequest generates a "aws/request.Request" representing the // client's request for the DeleteRemoteAccessSession operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -809,8 +1072,8 @@ const opDeleteRun = "DeleteRun" // DeleteRunRequest generates a "aws/request.Request" representing the // client's request for the DeleteRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -899,8 +1162,8 @@ const opDeleteUpload = "DeleteUpload" // DeleteUploadRequest generates a "aws/request.Request" representing the // client's request for the DeleteUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -983,12 +1246,101 @@ func (c *DeviceFarm) DeleteUploadWithContext(ctx aws.Context, input *DeleteUploa return out, req.Send() } +const opDeleteVPCEConfiguration = "DeleteVPCEConfiguration" + +// DeleteVPCEConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteVPCEConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteVPCEConfiguration for more information on using the DeleteVPCEConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteVPCEConfigurationRequest method. +// req, resp := client.DeleteVPCEConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/DeleteVPCEConfiguration +func (c *DeviceFarm) DeleteVPCEConfigurationRequest(input *DeleteVPCEConfigurationInput) (req *request.Request, output *DeleteVPCEConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteVPCEConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteVPCEConfigurationInput{} + } + + output = &DeleteVPCEConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteVPCEConfiguration API operation for AWS Device Farm. +// +// Deletes a configuration for your Amazon Virtual Private Cloud (VPC) endpoint. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation DeleteVPCEConfiguration for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// There was an error with the update request, or you do not have sufficient +// permissions to update this VPC endpoint configuration. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/DeleteVPCEConfiguration +func (c *DeviceFarm) DeleteVPCEConfiguration(input *DeleteVPCEConfigurationInput) (*DeleteVPCEConfigurationOutput, error) { + req, out := c.DeleteVPCEConfigurationRequest(input) + return out, req.Send() +} + +// DeleteVPCEConfigurationWithContext is the same as DeleteVPCEConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteVPCEConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) DeleteVPCEConfigurationWithContext(ctx aws.Context, input *DeleteVPCEConfigurationInput, opts ...request.Option) (*DeleteVPCEConfigurationOutput, error) { + req, out := c.DeleteVPCEConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetAccountSettings = "GetAccountSettings" // GetAccountSettingsRequest generates a "aws/request.Request" representing the // client's request for the GetAccountSettings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1076,8 +1428,8 @@ const opGetDevice = "GetDevice" // GetDeviceRequest generates a "aws/request.Request" representing the // client's request for the GetDevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1160,12 +1512,101 @@ func (c *DeviceFarm) GetDeviceWithContext(ctx aws.Context, input *GetDeviceInput return out, req.Send() } -const opGetDevicePool = "GetDevicePool" +const opGetDeviceInstance = "GetDeviceInstance" + +// GetDeviceInstanceRequest generates a "aws/request.Request" representing the +// client's request for the GetDeviceInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See GetDeviceInstance for more information on using the GetDeviceInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetDeviceInstanceRequest method. +// req, resp := client.GetDeviceInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/GetDeviceInstance +func (c *DeviceFarm) GetDeviceInstanceRequest(input *GetDeviceInstanceInput) (req *request.Request, output *GetDeviceInstanceOutput) { + op := &request.Operation{ + Name: opGetDeviceInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetDeviceInstanceInput{} + } + + output = &GetDeviceInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetDeviceInstance API operation for AWS Device Farm. +// +// Returns information about a device instance belonging to a private device +// fleet. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation GetDeviceInstance for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/GetDeviceInstance +func (c *DeviceFarm) GetDeviceInstance(input *GetDeviceInstanceInput) (*GetDeviceInstanceOutput, error) { + req, out := c.GetDeviceInstanceRequest(input) + return out, req.Send() +} + +// GetDeviceInstanceWithContext is the same as GetDeviceInstance with the addition of +// the ability to pass a context and additional request options. +// +// See GetDeviceInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) GetDeviceInstanceWithContext(ctx aws.Context, input *GetDeviceInstanceInput, opts ...request.Option) (*GetDeviceInstanceOutput, error) { + req, out := c.GetDeviceInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetDevicePool = "GetDevicePool" // GetDevicePoolRequest generates a "aws/request.Request" representing the // client's request for the GetDevicePool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1252,8 +1693,8 @@ const opGetDevicePoolCompatibility = "GetDevicePoolCompatibility" // GetDevicePoolCompatibilityRequest generates a "aws/request.Request" representing the // client's request for the GetDevicePoolCompatibility operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1336,12 +1777,100 @@ func (c *DeviceFarm) GetDevicePoolCompatibilityWithContext(ctx aws.Context, inpu return out, req.Send() } +const opGetInstanceProfile = "GetInstanceProfile" + +// GetInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the GetInstanceProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetInstanceProfile for more information on using the GetInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetInstanceProfileRequest method. +// req, resp := client.GetInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/GetInstanceProfile +func (c *DeviceFarm) GetInstanceProfileRequest(input *GetInstanceProfileInput) (req *request.Request, output *GetInstanceProfileOutput) { + op := &request.Operation{ + Name: opGetInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetInstanceProfileInput{} + } + + output = &GetInstanceProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetInstanceProfile API operation for AWS Device Farm. +// +// Returns information about the specified instance profile. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation GetInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/GetInstanceProfile +func (c *DeviceFarm) GetInstanceProfile(input *GetInstanceProfileInput) (*GetInstanceProfileOutput, error) { + req, out := c.GetInstanceProfileRequest(input) + return out, req.Send() +} + +// GetInstanceProfileWithContext is the same as GetInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See GetInstanceProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) GetInstanceProfileWithContext(ctx aws.Context, input *GetInstanceProfileInput, opts ...request.Option) (*GetInstanceProfileOutput, error) { + req, out := c.GetInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetJob = "GetJob" // GetJobRequest generates a "aws/request.Request" representing the // client's request for the GetJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1428,8 +1957,8 @@ const opGetNetworkProfile = "GetNetworkProfile" // GetNetworkProfileRequest generates a "aws/request.Request" representing the // client's request for the GetNetworkProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1516,8 +2045,8 @@ const opGetOfferingStatus = "GetOfferingStatus" // GetOfferingStatusRequest generates a "aws/request.Request" representing the // client's request for the GetOfferingStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1669,8 +2198,8 @@ const opGetProject = "GetProject" // GetProjectRequest generates a "aws/request.Request" representing the // client's request for the GetProject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1757,8 +2286,8 @@ const opGetRemoteAccessSession = "GetRemoteAccessSession" // GetRemoteAccessSessionRequest generates a "aws/request.Request" representing the // client's request for the GetRemoteAccessSession operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1845,8 +2374,8 @@ const opGetRun = "GetRun" // GetRunRequest generates a "aws/request.Request" representing the // client's request for the GetRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1933,8 +2462,8 @@ const opGetSuite = "GetSuite" // GetSuiteRequest generates a "aws/request.Request" representing the // client's request for the GetSuite operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2021,8 +2550,8 @@ const opGetTest = "GetTest" // GetTestRequest generates a "aws/request.Request" representing the // client's request for the GetTest operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2109,8 +2638,8 @@ const opGetUpload = "GetUpload" // GetUploadRequest generates a "aws/request.Request" representing the // client's request for the GetUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2193,12 +2722,98 @@ func (c *DeviceFarm) GetUploadWithContext(ctx aws.Context, input *GetUploadInput return out, req.Send() } +const opGetVPCEConfiguration = "GetVPCEConfiguration" + +// GetVPCEConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetVPCEConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See GetVPCEConfiguration for more information on using the GetVPCEConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetVPCEConfigurationRequest method. +// req, resp := client.GetVPCEConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/GetVPCEConfiguration +func (c *DeviceFarm) GetVPCEConfigurationRequest(input *GetVPCEConfigurationInput) (req *request.Request, output *GetVPCEConfigurationOutput) { + op := &request.Operation{ + Name: opGetVPCEConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetVPCEConfigurationInput{} + } + + output = &GetVPCEConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetVPCEConfiguration API operation for AWS Device Farm. +// +// Returns information about the configuration settings for your Amazon Virtual +// Private Cloud (VPC) endpoint. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation GetVPCEConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/GetVPCEConfiguration +func (c *DeviceFarm) GetVPCEConfiguration(input *GetVPCEConfigurationInput) (*GetVPCEConfigurationOutput, error) { + req, out := c.GetVPCEConfigurationRequest(input) + return out, req.Send() +} + +// GetVPCEConfigurationWithContext is the same as GetVPCEConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetVPCEConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) GetVPCEConfigurationWithContext(ctx aws.Context, input *GetVPCEConfigurationInput, opts ...request.Option) (*GetVPCEConfigurationOutput, error) { + req, out := c.GetVPCEConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opInstallToRemoteAccessSession = "InstallToRemoteAccessSession" // InstallToRemoteAccessSessionRequest generates a "aws/request.Request" representing the // client's request for the InstallToRemoteAccessSession operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2287,8 +2902,8 @@ const opListArtifacts = "ListArtifacts" // ListArtifactsRequest generates a "aws/request.Request" representing the // client's request for the ListArtifacts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2427,64 +3042,59 @@ func (c *DeviceFarm) ListArtifactsPagesWithContext(ctx aws.Context, input *ListA return p.Err() } -const opListDevicePools = "ListDevicePools" +const opListDeviceInstances = "ListDeviceInstances" -// ListDevicePoolsRequest generates a "aws/request.Request" representing the -// client's request for the ListDevicePools operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListDeviceInstancesRequest generates a "aws/request.Request" representing the +// client's request for the ListDeviceInstances operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListDevicePools for more information on using the ListDevicePools +// See ListDeviceInstances for more information on using the ListDeviceInstances // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListDevicePoolsRequest method. -// req, resp := client.ListDevicePoolsRequest(params) +// // Example sending a request using the ListDeviceInstancesRequest method. +// req, resp := client.ListDeviceInstancesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListDevicePools -func (c *DeviceFarm) ListDevicePoolsRequest(input *ListDevicePoolsInput) (req *request.Request, output *ListDevicePoolsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListDeviceInstances +func (c *DeviceFarm) ListDeviceInstancesRequest(input *ListDeviceInstancesInput) (req *request.Request, output *ListDeviceInstancesOutput) { op := &request.Operation{ - Name: opListDevicePools, + Name: opListDeviceInstances, HTTPMethod: "POST", HTTPPath: "/", - Paginator: &request.Paginator{ - InputTokens: []string{"nextToken"}, - OutputTokens: []string{"nextToken"}, - LimitToken: "", - TruncationToken: "", - }, } if input == nil { - input = &ListDevicePoolsInput{} + input = &ListDeviceInstancesInput{} } - output = &ListDevicePoolsOutput{} + output = &ListDeviceInstancesOutput{} req = c.newRequest(op, input, output) return } -// ListDevicePools API operation for AWS Device Farm. +// ListDeviceInstances API operation for AWS Device Farm. // -// Gets information about device pools. +// Returns information about the private device instances associated with one +// or more AWS accounts. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Device Farm's -// API operation ListDevicePools for usage and error information. +// API operation ListDeviceInstances for usage and error information. // // Returned Error Codes: // * ErrCodeArgumentException "ArgumentException" @@ -2499,59 +3109,153 @@ func (c *DeviceFarm) ListDevicePoolsRequest(input *ListDevicePoolsInput) (req *r // * ErrCodeServiceAccountException "ServiceAccountException" // There was a problem with the service account. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListDevicePools -func (c *DeviceFarm) ListDevicePools(input *ListDevicePoolsInput) (*ListDevicePoolsOutput, error) { - req, out := c.ListDevicePoolsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListDeviceInstances +func (c *DeviceFarm) ListDeviceInstances(input *ListDeviceInstancesInput) (*ListDeviceInstancesOutput, error) { + req, out := c.ListDeviceInstancesRequest(input) return out, req.Send() } -// ListDevicePoolsWithContext is the same as ListDevicePools with the addition of +// ListDeviceInstancesWithContext is the same as ListDeviceInstances with the addition of // the ability to pass a context and additional request options. // -// See ListDevicePools for details on how to use this API operation. +// See ListDeviceInstances for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *DeviceFarm) ListDevicePoolsWithContext(ctx aws.Context, input *ListDevicePoolsInput, opts ...request.Option) (*ListDevicePoolsOutput, error) { - req, out := c.ListDevicePoolsRequest(input) +func (c *DeviceFarm) ListDeviceInstancesWithContext(ctx aws.Context, input *ListDeviceInstancesInput, opts ...request.Option) (*ListDeviceInstancesOutput, error) { + req, out := c.ListDeviceInstancesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListDevicePoolsPages iterates over the pages of a ListDevicePools operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. +const opListDevicePools = "ListDevicePools" + +// ListDevicePoolsRequest generates a "aws/request.Request" representing the +// client's request for the ListDevicePools operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // -// See ListDevicePools method for more information on how to use this operation. +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. // -// Note: This operation can generate multiple requests to a service. +// See ListDevicePools for more information on using the ListDevicePools +// API call, and error handling. // -// // Example iterating over at most 3 pages of a ListDevicePools operation. 
-// pageNum := 0 -// err := client.ListDevicePoolsPages(params, -// func(page *ListDevicePoolsOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. // -func (c *DeviceFarm) ListDevicePoolsPages(input *ListDevicePoolsInput, fn func(*ListDevicePoolsOutput, bool) bool) error { - return c.ListDevicePoolsPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// ListDevicePoolsPagesWithContext same as ListDevicePoolsPages except -// it takes a Context and allows setting request options on the pages. // -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *DeviceFarm) ListDevicePoolsPagesWithContext(ctx aws.Context, input *ListDevicePoolsInput, fn func(*ListDevicePoolsOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { +// // Example sending a request using the ListDevicePoolsRequest method. +// req, resp := client.ListDevicePoolsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListDevicePools +func (c *DeviceFarm) ListDevicePoolsRequest(input *ListDevicePoolsInput) (req *request.Request, output *ListDevicePoolsOutput) { + op := &request.Operation{ + Name: opListDevicePools, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListDevicePoolsInput{} + } + + output = &ListDevicePoolsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListDevicePools API operation for AWS Device Farm. +// +// Gets information about device pools. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation ListDevicePools for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListDevicePools +func (c *DeviceFarm) ListDevicePools(input *ListDevicePoolsInput) (*ListDevicePoolsOutput, error) { + req, out := c.ListDevicePoolsRequest(input) + return out, req.Send() +} + +// ListDevicePoolsWithContext is the same as ListDevicePools with the addition of +// the ability to pass a context and additional request options. +// +// See ListDevicePools for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) ListDevicePoolsWithContext(ctx aws.Context, input *ListDevicePoolsInput, opts ...request.Option) (*ListDevicePoolsOutput, error) { + req, out := c.ListDevicePoolsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListDevicePoolsPages iterates over the pages of a ListDevicePools operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListDevicePools method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListDevicePools operation. +// pageNum := 0 +// err := client.ListDevicePoolsPages(params, +// func(page *ListDevicePoolsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *DeviceFarm) ListDevicePoolsPages(input *ListDevicePoolsInput, fn func(*ListDevicePoolsOutput, bool) bool) error { + return c.ListDevicePoolsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListDevicePoolsPagesWithContext same as ListDevicePoolsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) ListDevicePoolsPagesWithContext(ctx aws.Context, input *ListDevicePoolsInput, fn func(*ListDevicePoolsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { var inCpy *ListDevicePoolsInput if input != nil { tmp := *input @@ -2575,8 +3279,8 @@ const opListDevices = "ListDevices" // ListDevicesRequest generates a "aws/request.Request" representing the // client's request for the ListDevices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2715,12 +3419,100 @@ func (c *DeviceFarm) ListDevicesPagesWithContext(ctx aws.Context, input *ListDev return p.Err() } +const opListInstanceProfiles = "ListInstanceProfiles" + +// ListInstanceProfilesRequest generates a "aws/request.Request" representing the +// client's request for the ListInstanceProfiles operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListInstanceProfiles for more information on using the ListInstanceProfiles +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the ListInstanceProfilesRequest method. +// req, resp := client.ListInstanceProfilesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListInstanceProfiles +func (c *DeviceFarm) ListInstanceProfilesRequest(input *ListInstanceProfilesInput) (req *request.Request, output *ListInstanceProfilesOutput) { + op := &request.Operation{ + Name: opListInstanceProfiles, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListInstanceProfilesInput{} + } + + output = &ListInstanceProfilesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListInstanceProfiles API operation for AWS Device Farm. +// +// Returns information about all the instance profiles in an AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation ListInstanceProfiles for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListInstanceProfiles +func (c *DeviceFarm) ListInstanceProfiles(input *ListInstanceProfilesInput) (*ListInstanceProfilesOutput, error) { + req, out := c.ListInstanceProfilesRequest(input) + return out, req.Send() +} + +// ListInstanceProfilesWithContext is the same as ListInstanceProfiles with the addition of +// the ability to pass a context and additional request options. +// +// See ListInstanceProfiles for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) ListInstanceProfilesWithContext(ctx aws.Context, input *ListInstanceProfilesInput, opts ...request.Option) (*ListInstanceProfilesOutput, error) { + req, out := c.ListInstanceProfilesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListJobs = "ListJobs" // ListJobsRequest generates a "aws/request.Request" representing the // client's request for the ListJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2863,8 +3655,8 @@ const opListNetworkProfiles = "ListNetworkProfiles" // ListNetworkProfilesRequest generates a "aws/request.Request" representing the // client's request for the ListNetworkProfiles operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2951,8 +3743,8 @@ const opListOfferingPromotions = "ListOfferingPromotions" // ListOfferingPromotionsRequest generates a "aws/request.Request" representing the // client's request for the ListOfferingPromotions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3047,8 +3839,8 @@ const opListOfferingTransactions = "ListOfferingTransactions" // ListOfferingTransactionsRequest generates a "aws/request.Request" representing the // client's request for the ListOfferingTransactions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3200,8 +3992,8 @@ const opListOfferings = "ListOfferings" // ListOfferingsRequest generates a "aws/request.Request" representing the // client's request for the ListOfferings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3353,8 +4145,8 @@ const opListProjects = "ListProjects" // ListProjectsRequest generates a "aws/request.Request" representing the // client's request for the ListProjects operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3497,8 +4289,8 @@ const opListRemoteAccessSessions = "ListRemoteAccessSessions" // ListRemoteAccessSessionsRequest generates a "aws/request.Request" representing the // client's request for the ListRemoteAccessSessions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3585,8 +4377,8 @@ const opListRuns = "ListRuns" // ListRunsRequest generates a "aws/request.Request" representing the // client's request for the ListRuns operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3729,8 +4521,8 @@ const opListSamples = "ListSamples" // ListSamplesRequest generates a "aws/request.Request" representing the // client's request for the ListSamples operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3775,7 +4567,7 @@ func (c *DeviceFarm) ListSamplesRequest(input *ListSamplesInput) (req *request.R // ListSamples API operation for AWS Device Farm. // -// Gets information about samples, given an AWS Device Farm project ARN +// Gets information about samples, given an AWS Device Farm job ARN. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3873,8 +4665,8 @@ const opListSuites = "ListSuites" // ListSuitesRequest generates a "aws/request.Request" representing the // client's request for the ListSuites operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4017,8 +4809,8 @@ const opListTests = "ListTests" // ListTestsRequest generates a "aws/request.Request" representing the // client's request for the ListTests operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4161,8 +4953,8 @@ const opListUniqueProblems = "ListUniqueProblems" // ListUniqueProblemsRequest generates a "aws/request.Request" representing the // client's request for the ListUniqueProblems operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4305,8 +5097,8 @@ const opListUploads = "ListUploads" // ListUploadsRequest generates a "aws/request.Request" representing the // client's request for the ListUploads operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4445,12 +5237,95 @@ func (c *DeviceFarm) ListUploadsPagesWithContext(ctx aws.Context, input *ListUpl return p.Err() } +const opListVPCEConfigurations = "ListVPCEConfigurations" + +// ListVPCEConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the ListVPCEConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListVPCEConfigurations for more information on using the ListVPCEConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListVPCEConfigurationsRequest method. +// req, resp := client.ListVPCEConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListVPCEConfigurations +func (c *DeviceFarm) ListVPCEConfigurationsRequest(input *ListVPCEConfigurationsInput) (req *request.Request, output *ListVPCEConfigurationsOutput) { + op := &request.Operation{ + Name: opListVPCEConfigurations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListVPCEConfigurationsInput{} + } + + output = &ListVPCEConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListVPCEConfigurations API operation for AWS Device Farm. +// +// Returns information about all Amazon Virtual Private Cloud (VPC) endpoint +// configurations in the AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation ListVPCEConfigurations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/ListVPCEConfigurations +func (c *DeviceFarm) ListVPCEConfigurations(input *ListVPCEConfigurationsInput) (*ListVPCEConfigurationsOutput, error) { + req, out := c.ListVPCEConfigurationsRequest(input) + return out, req.Send() +} + +// ListVPCEConfigurationsWithContext is the same as ListVPCEConfigurations with the addition of +// the ability to pass a context and additional request options. +// +// See ListVPCEConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) ListVPCEConfigurationsWithContext(ctx aws.Context, input *ListVPCEConfigurationsInput, opts ...request.Option) (*ListVPCEConfigurationsOutput, error) { + req, out := c.ListVPCEConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPurchaseOffering = "PurchaseOffering" // PurchaseOfferingRequest generates a "aws/request.Request" representing the // client's request for the PurchaseOffering operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4546,8 +5421,8 @@ const opRenewOffering = "RenewOffering" // RenewOfferingRequest generates a "aws/request.Request" representing the // client's request for the RenewOffering operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4642,8 +5517,8 @@ const opScheduleRun = "ScheduleRun" // ScheduleRunRequest generates a "aws/request.Request" representing the // client's request for the ScheduleRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4729,58 +5604,63 @@ func (c *DeviceFarm) ScheduleRunWithContext(ctx aws.Context, input *ScheduleRunI return out, req.Send() } -const opStopRemoteAccessSession = "StopRemoteAccessSession" +const opStopJob = "StopJob" -// StopRemoteAccessSessionRequest generates a "aws/request.Request" representing the -// client's request for the StopRemoteAccessSession operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// StopJobRequest generates a "aws/request.Request" representing the +// client's request for the StopJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StopRemoteAccessSession for more information on using the StopRemoteAccessSession +// See StopJob for more information on using the StopJob // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StopRemoteAccessSessionRequest method. 
-// req, resp := client.StopRemoteAccessSessionRequest(params) +// // Example sending a request using the StopJobRequest method. +// req, resp := client.StopJobRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/StopRemoteAccessSession -func (c *DeviceFarm) StopRemoteAccessSessionRequest(input *StopRemoteAccessSessionInput) (req *request.Request, output *StopRemoteAccessSessionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/StopJob +func (c *DeviceFarm) StopJobRequest(input *StopJobInput) (req *request.Request, output *StopJobOutput) { op := &request.Operation{ - Name: opStopRemoteAccessSession, + Name: opStopJob, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StopRemoteAccessSessionInput{} + input = &StopJobInput{} } - output = &StopRemoteAccessSessionOutput{} + output = &StopJobOutput{} req = c.newRequest(op, input, output) return } -// StopRemoteAccessSession API operation for AWS Device Farm. +// StopJob API operation for AWS Device Farm. // -// Ends a specified remote access session. +// Initiates a stop request for the current job. AWS Device Farm will immediately +// stop the job on the device where tests have not started executing, and you +// will not be billed for this device. On the device where tests have started +// executing, Setup Suite and Teardown Suite tests will run to completion before +// stopping execution on the device. You will be billed for Setup, Teardown, +// and any tests that were in progress or already completed. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Device Farm's -// API operation StopRemoteAccessSession for usage and error information. +// API operation StopJob for usage and error information. // // Returned Error Codes: // * ErrCodeArgumentException "ArgumentException" @@ -4795,34 +5675,122 @@ func (c *DeviceFarm) StopRemoteAccessSessionRequest(input *StopRemoteAccessSessi // * ErrCodeServiceAccountException "ServiceAccountException" // There was a problem with the service account. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/StopRemoteAccessSession -func (c *DeviceFarm) StopRemoteAccessSession(input *StopRemoteAccessSessionInput) (*StopRemoteAccessSessionOutput, error) { - req, out := c.StopRemoteAccessSessionRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/StopJob +func (c *DeviceFarm) StopJob(input *StopJobInput) (*StopJobOutput, error) { + req, out := c.StopJobRequest(input) return out, req.Send() } -// StopRemoteAccessSessionWithContext is the same as StopRemoteAccessSession with the addition of +// StopJobWithContext is the same as StopJob with the addition of // the ability to pass a context and additional request options. // -// See StopRemoteAccessSession for details on how to use this API operation. +// See StopJob for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
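The comment block that closes above spells out the context contract shared by every `WithContext` variant: the context must be non-nil, it is used for request cancellation, and a nil context panics. A sketch of honoring that contract with the newly added `ListVPCEConfigurationsWithContext`; the session setup and the 30-second timeout are assumptions for illustration:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/devicefarm"
)

func main() {
	sess := session.Must(session.NewSession())
	client := devicefarm.New(sess)

	// The context must be non-nil; it is used for request cancellation.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	out, err := client.ListVPCEConfigurationsWithContext(ctx, &devicefarm.ListVPCEConfigurationsInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```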
-func (c *DeviceFarm) StopRemoteAccessSessionWithContext(ctx aws.Context, input *StopRemoteAccessSessionInput, opts ...request.Option) (*StopRemoteAccessSessionOutput, error) { - req, out := c.StopRemoteAccessSessionRequest(input) +func (c *DeviceFarm) StopJobWithContext(ctx aws.Context, input *StopJobInput, opts ...request.Option) (*StopJobOutput, error) { + req, out := c.StopJobRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStopRun = "StopRun" +const opStopRemoteAccessSession = "StopRemoteAccessSession" -// StopRunRequest generates a "aws/request.Request" representing the +// StopRemoteAccessSessionRequest generates a "aws/request.Request" representing the +// client's request for the StopRemoteAccessSession operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopRemoteAccessSession for more information on using the StopRemoteAccessSession +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopRemoteAccessSessionRequest method. +// req, resp := client.StopRemoteAccessSessionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/StopRemoteAccessSession +func (c *DeviceFarm) StopRemoteAccessSessionRequest(input *StopRemoteAccessSessionInput) (req *request.Request, output *StopRemoteAccessSessionOutput) { + op := &request.Operation{ + Name: opStopRemoteAccessSession, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopRemoteAccessSessionInput{} + } + + output = &StopRemoteAccessSessionOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopRemoteAccessSession API operation for AWS Device Farm. +// +// Ends a specified remote access session. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation StopRemoteAccessSession for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/StopRemoteAccessSession +func (c *DeviceFarm) StopRemoteAccessSession(input *StopRemoteAccessSessionInput) (*StopRemoteAccessSessionOutput, error) { + req, out := c.StopRemoteAccessSessionRequest(input) + return out, req.Send() +} + +// StopRemoteAccessSessionWithContext is the same as StopRemoteAccessSession with the addition of +// the ability to pass a context and additional request options. 
+// +// See StopRemoteAccessSession for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) StopRemoteAccessSessionWithContext(ctx aws.Context, input *StopRemoteAccessSessionInput, opts ...request.Option) (*StopRemoteAccessSessionOutput, error) { + req, out := c.StopRemoteAccessSessionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopRun = "StopRun" + +// StopRunRequest generates a "aws/request.Request" representing the // client's request for the StopRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4910,12 +5878,100 @@ func (c *DeviceFarm) StopRunWithContext(ctx aws.Context, input *StopRunInput, op return out, req.Send() } +const opUpdateDeviceInstance = "UpdateDeviceInstance" + +// UpdateDeviceInstanceRequest generates a "aws/request.Request" representing the +// client's request for the UpdateDeviceInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateDeviceInstance for more information on using the UpdateDeviceInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateDeviceInstanceRequest method. +// req, resp := client.UpdateDeviceInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateDeviceInstance +func (c *DeviceFarm) UpdateDeviceInstanceRequest(input *UpdateDeviceInstanceInput) (req *request.Request, output *UpdateDeviceInstanceOutput) { + op := &request.Operation{ + Name: opUpdateDeviceInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateDeviceInstanceInput{} + } + + output = &UpdateDeviceInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateDeviceInstance API operation for AWS Device Farm. +// +// Updates information about an existing private device instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation UpdateDeviceInstance for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. 
+// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateDeviceInstance +func (c *DeviceFarm) UpdateDeviceInstance(input *UpdateDeviceInstanceInput) (*UpdateDeviceInstanceOutput, error) { + req, out := c.UpdateDeviceInstanceRequest(input) + return out, req.Send() +} + +// UpdateDeviceInstanceWithContext is the same as UpdateDeviceInstance with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateDeviceInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) UpdateDeviceInstanceWithContext(ctx aws.Context, input *UpdateDeviceInstanceInput, opts ...request.Option) (*UpdateDeviceInstanceOutput, error) { + req, out := c.UpdateDeviceInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateDevicePool = "UpdateDevicePool" // UpdateDevicePoolRequest generates a "aws/request.Request" representing the // client's request for the UpdateDevicePool operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4944,26 +6000,290 @@ func (c *DeviceFarm) UpdateDevicePoolRequest(input *UpdateDevicePoolInput) (req } if input == nil { - input = &UpdateDevicePoolInput{} + input = &UpdateDevicePoolInput{} + } + + output = &UpdateDevicePoolOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateDevicePool API operation for AWS Device Farm. +// +// Modifies the name, description, and rules in a device pool given the attributes +// and the pool ARN. Rule updates are all-or-nothing, meaning they can only +// be updated as a whole (or not at all). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation UpdateDevicePool for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateDevicePool +func (c *DeviceFarm) UpdateDevicePool(input *UpdateDevicePoolInput) (*UpdateDevicePoolOutput, error) { + req, out := c.UpdateDevicePoolRequest(input) + return out, req.Send() +} + +// UpdateDevicePoolWithContext is the same as UpdateDevicePool with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateDevicePool for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) UpdateDevicePoolWithContext(ctx aws.Context, input *UpdateDevicePoolInput, opts ...request.Option) (*UpdateDevicePoolOutput, error) { + req, out := c.UpdateDevicePoolRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateInstanceProfile = "UpdateInstanceProfile" + +// UpdateInstanceProfileRequest generates a "aws/request.Request" representing the +// client's request for the UpdateInstanceProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateInstanceProfile for more information on using the UpdateInstanceProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateInstanceProfileRequest method. +// req, resp := client.UpdateInstanceProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateInstanceProfile +func (c *DeviceFarm) UpdateInstanceProfileRequest(input *UpdateInstanceProfileInput) (req *request.Request, output *UpdateInstanceProfileOutput) { + op := &request.Operation{ + Name: opUpdateInstanceProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateInstanceProfileInput{} + } + + output = &UpdateInstanceProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateInstanceProfile API operation for AWS Device Farm. +// +// Updates information about an existing private device instance profile. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation UpdateInstanceProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateInstanceProfile +func (c *DeviceFarm) UpdateInstanceProfile(input *UpdateInstanceProfileInput) (*UpdateInstanceProfileOutput, error) { + req, out := c.UpdateInstanceProfileRequest(input) + return out, req.Send() +} + +// UpdateInstanceProfileWithContext is the same as UpdateInstanceProfile with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateInstanceProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) UpdateInstanceProfileWithContext(ctx aws.Context, input *UpdateInstanceProfileInput, opts ...request.Option) (*UpdateInstanceProfileOutput, error) { + req, out := c.UpdateInstanceProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateNetworkProfile = "UpdateNetworkProfile" + +// UpdateNetworkProfileRequest generates a "aws/request.Request" representing the +// client's request for the UpdateNetworkProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateNetworkProfile for more information on using the UpdateNetworkProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateNetworkProfileRequest method. +// req, resp := client.UpdateNetworkProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateNetworkProfile +func (c *DeviceFarm) UpdateNetworkProfileRequest(input *UpdateNetworkProfileInput) (req *request.Request, output *UpdateNetworkProfileOutput) { + op := &request.Operation{ + Name: opUpdateNetworkProfile, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateNetworkProfileInput{} + } + + output = &UpdateNetworkProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateNetworkProfile API operation for AWS Device Farm. +// +// Updates the network profile with specific settings. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Device Farm's +// API operation UpdateNetworkProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeArgumentException "ArgumentException" +// An invalid argument was specified. +// +// * ErrCodeNotFoundException "NotFoundException" +// The specified entity was not found. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit was exceeded. +// +// * ErrCodeServiceAccountException "ServiceAccountException" +// There was a problem with the service account. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateNetworkProfile +func (c *DeviceFarm) UpdateNetworkProfile(input *UpdateNetworkProfileInput) (*UpdateNetworkProfileOutput, error) { + req, out := c.UpdateNetworkProfileRequest(input) + return out, req.Send() +} + +// UpdateNetworkProfileWithContext is the same as UpdateNetworkProfile with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateNetworkProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DeviceFarm) UpdateNetworkProfileWithContext(ctx aws.Context, input *UpdateNetworkProfileInput, opts ...request.Option) (*UpdateNetworkProfileOutput, error) { + req, out := c.UpdateNetworkProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateProject = "UpdateProject" + +// UpdateProjectRequest generates a "aws/request.Request" representing the +// client's request for the UpdateProject operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateProject for more information on using the UpdateProject +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateProjectRequest method. +// req, resp := client.UpdateProjectRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateProject +func (c *DeviceFarm) UpdateProjectRequest(input *UpdateProjectInput) (req *request.Request, output *UpdateProjectOutput) { + op := &request.Operation{ + Name: opUpdateProject, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateProjectInput{} } - output = &UpdateDevicePoolOutput{} + output = &UpdateProjectOutput{} req = c.newRequest(op, input, output) return } -// UpdateDevicePool API operation for AWS Device Farm. +// UpdateProject API operation for AWS Device Farm. // -// Modifies the name, description, and rules in a device pool given the attributes -// and the pool ARN. Rule updates are all-or-nothing, meaning they can only -// be updated as a whole (or not at all). +// Modifies the specified project name, given the project ARN and a new name. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Device Farm's -// API operation UpdateDevicePool for usage and error information. +// API operation UpdateProject for usage and error information. 
// // Returned Error Codes: // * ErrCodeArgumentException "ArgumentException" @@ -4978,80 +6298,80 @@ func (c *DeviceFarm) UpdateDevicePoolRequest(input *UpdateDevicePoolInput) (req // * ErrCodeServiceAccountException "ServiceAccountException" // There was a problem with the service account. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateDevicePool -func (c *DeviceFarm) UpdateDevicePool(input *UpdateDevicePoolInput) (*UpdateDevicePoolOutput, error) { - req, out := c.UpdateDevicePoolRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateProject +func (c *DeviceFarm) UpdateProject(input *UpdateProjectInput) (*UpdateProjectOutput, error) { + req, out := c.UpdateProjectRequest(input) return out, req.Send() } -// UpdateDevicePoolWithContext is the same as UpdateDevicePool with the addition of +// UpdateProjectWithContext is the same as UpdateProject with the addition of // the ability to pass a context and additional request options. // -// See UpdateDevicePool for details on how to use this API operation. +// See UpdateProject for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *DeviceFarm) UpdateDevicePoolWithContext(ctx aws.Context, input *UpdateDevicePoolInput, opts ...request.Option) (*UpdateDevicePoolOutput, error) { - req, out := c.UpdateDevicePoolRequest(input) +func (c *DeviceFarm) UpdateProjectWithContext(ctx aws.Context, input *UpdateProjectInput, opts ...request.Option) (*UpdateProjectOutput, error) { + req, out := c.UpdateProjectRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateNetworkProfile = "UpdateNetworkProfile" +const opUpdateUpload = "UpdateUpload" -// UpdateNetworkProfileRequest generates a "aws/request.Request" representing the -// client's request for the UpdateNetworkProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateUploadRequest generates a "aws/request.Request" representing the +// client's request for the UpdateUpload operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateNetworkProfile for more information on using the UpdateNetworkProfile +// See UpdateUpload for more information on using the UpdateUpload // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateNetworkProfileRequest method. -// req, resp := client.UpdateNetworkProfileRequest(params) +// // Example sending a request using the UpdateUploadRequest method. 
+// req, resp := client.UpdateUploadRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateNetworkProfile -func (c *DeviceFarm) UpdateNetworkProfileRequest(input *UpdateNetworkProfileInput) (req *request.Request, output *UpdateNetworkProfileOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateUpload +func (c *DeviceFarm) UpdateUploadRequest(input *UpdateUploadInput) (req *request.Request, output *UpdateUploadOutput) { op := &request.Operation{ - Name: opUpdateNetworkProfile, + Name: opUpdateUpload, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateNetworkProfileInput{} + input = &UpdateUploadInput{} } - output = &UpdateNetworkProfileOutput{} + output = &UpdateUploadOutput{} req = c.newRequest(op, input, output) return } -// UpdateNetworkProfile API operation for AWS Device Farm. +// UpdateUpload API operation for AWS Device Farm. // -// Updates the network profile with specific settings. +// Update an uploaded test specification (test spec). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Device Farm's -// API operation UpdateNetworkProfile for usage and error information. +// API operation UpdateUpload for usage and error information. // // Returned Error Codes: // * ErrCodeArgumentException "ArgumentException" @@ -5066,80 +6386,81 @@ func (c *DeviceFarm) UpdateNetworkProfileRequest(input *UpdateNetworkProfileInpu // * ErrCodeServiceAccountException "ServiceAccountException" // There was a problem with the service account. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateNetworkProfile -func (c *DeviceFarm) UpdateNetworkProfile(input *UpdateNetworkProfileInput) (*UpdateNetworkProfileOutput, error) { - req, out := c.UpdateNetworkProfileRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateUpload +func (c *DeviceFarm) UpdateUpload(input *UpdateUploadInput) (*UpdateUploadOutput, error) { + req, out := c.UpdateUploadRequest(input) return out, req.Send() } -// UpdateNetworkProfileWithContext is the same as UpdateNetworkProfile with the addition of +// UpdateUploadWithContext is the same as UpdateUpload with the addition of // the ability to pass a context and additional request options. // -// See UpdateNetworkProfile for details on how to use this API operation. +// See UpdateUpload for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *DeviceFarm) UpdateNetworkProfileWithContext(ctx aws.Context, input *UpdateNetworkProfileInput, opts ...request.Option) (*UpdateNetworkProfileOutput, error) { - req, out := c.UpdateNetworkProfileRequest(input) +func (c *DeviceFarm) UpdateUploadWithContext(ctx aws.Context, input *UpdateUploadInput, opts ...request.Option) (*UpdateUploadOutput, error) { + req, out := c.UpdateUploadRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opUpdateProject = "UpdateProject" +const opUpdateVPCEConfiguration = "UpdateVPCEConfiguration" -// UpdateProjectRequest generates a "aws/request.Request" representing the -// client's request for the UpdateProject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateVPCEConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the UpdateVPCEConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateProject for more information on using the UpdateProject +// See UpdateVPCEConfiguration for more information on using the UpdateVPCEConfiguration // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateProjectRequest method. -// req, resp := client.UpdateProjectRequest(params) +// // Example sending a request using the UpdateVPCEConfigurationRequest method. +// req, resp := client.UpdateVPCEConfigurationRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateProject -func (c *DeviceFarm) UpdateProjectRequest(input *UpdateProjectInput) (req *request.Request, output *UpdateProjectOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateVPCEConfiguration +func (c *DeviceFarm) UpdateVPCEConfigurationRequest(input *UpdateVPCEConfigurationInput) (req *request.Request, output *UpdateVPCEConfigurationOutput) { op := &request.Operation{ - Name: opUpdateProject, + Name: opUpdateVPCEConfiguration, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateProjectInput{} + input = &UpdateVPCEConfigurationInput{} } - output = &UpdateProjectOutput{} + output = &UpdateVPCEConfigurationOutput{} req = c.newRequest(op, input, output) return } -// UpdateProject API operation for AWS Device Farm. +// UpdateVPCEConfiguration API operation for AWS Device Farm. // -// Modifies the specified project name, given the project ARN and a new name. +// Updates information about an existing Amazon Virtual Private Cloud (VPC) +// endpoint configuration. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Device Farm's -// API operation UpdateProject for usage and error information. +// API operation UpdateVPCEConfiguration for usage and error information. // // Returned Error Codes: // * ErrCodeArgumentException "ArgumentException" @@ -5148,29 +6469,30 @@ func (c *DeviceFarm) UpdateProjectRequest(input *UpdateProjectInput) (req *reque // * ErrCodeNotFoundException "NotFoundException" // The specified entity was not found. // -// * ErrCodeLimitExceededException "LimitExceededException" -// A limit was exceeded. -// // * ErrCodeServiceAccountException "ServiceAccountException" // There was a problem with the service account. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateProject -func (c *DeviceFarm) UpdateProject(input *UpdateProjectInput) (*UpdateProjectOutput, error) { - req, out := c.UpdateProjectRequest(input) +// * ErrCodeInvalidOperationException "InvalidOperationException" +// There was an error with the update request, or you do not have sufficient +// permissions to update this VPC endpoint configuration. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/devicefarm-2015-06-23/UpdateVPCEConfiguration +func (c *DeviceFarm) UpdateVPCEConfiguration(input *UpdateVPCEConfigurationInput) (*UpdateVPCEConfigurationOutput, error) { + req, out := c.UpdateVPCEConfigurationRequest(input) return out, req.Send() } -// UpdateProjectWithContext is the same as UpdateProject with the addition of +// UpdateVPCEConfigurationWithContext is the same as UpdateVPCEConfiguration with the addition of // the ability to pass a context and additional request options. // -// See UpdateProject for details on how to use this API operation. +// See UpdateVPCEConfiguration for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *DeviceFarm) UpdateProjectWithContext(ctx aws.Context, input *UpdateProjectInput, opts ...request.Option) (*UpdateProjectOutput, error) { - req, out := c.UpdateProjectRequest(input) +func (c *DeviceFarm) UpdateVPCEConfigurationWithContext(ctx aws.Context, input *UpdateVPCEConfigurationInput, opts ...request.Option) (*UpdateVPCEConfigurationOutput, error) { + req, out := c.UpdateVPCEConfigurationRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() @@ -5195,6 +6517,15 @@ type AccountSettings struct { // represents one of the IDs returned by the ListOfferings command. MaxSlots map[string]*int64 `locationName:"maxSlots" type:"map"` + // When set to true, for private devices, Device Farm will not sign your app + // again. For public devices, Device Farm always signs your apps again and this + // parameter has no effect. + // + // For more information about how Device Farm re-signs your app(s), see Do you + // modify my app? (https://aws.amazon.com/device-farm/faq/) in the AWS Device + // Farm FAQs. + SkipAppResign *bool `locationName:"skipAppResign" type:"boolean"` + // Information about an AWS account's usage of free trial device minutes. TrialMinutes *TrialMinutes `locationName:"trialMinutes" type:"structure"` @@ -5240,6 +6571,12 @@ func (s *AccountSettings) SetMaxSlots(v map[string]*int64) *AccountSettings { return s } +// SetSkipAppResign sets the SkipAppResign field's value. +func (s *AccountSettings) SetSkipAppResign(v bool) *AccountSettings { + s.SkipAppResign = &v + return s +} + // SetTrialMinutes sets the TrialMinutes field's value. func (s *AccountSettings) SetTrialMinutes(v *TrialMinutes) *AccountSettings { s.TrialMinutes = v @@ -5594,6 +6931,108 @@ func (s *CreateDevicePoolOutput) SetDevicePool(v *DevicePool) *CreateDevicePoolO return s } +type CreateInstanceProfileInput struct { + _ struct{} `type:"structure"` + + // The description of your instance profile. + Description *string `locationName:"description" type:"string"` + + // An array of strings specifying the list of app packages that should not be + // cleaned up from the device after a test run is over. 
+ // + // The list of packages is only considered if you set packageCleanup to true. + ExcludeAppPackagesFromCleanup []*string `locationName:"excludeAppPackagesFromCleanup" type:"list"` + + // The name of your instance profile. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // When set to true, Device Farm will remove app packages after a test run. + // The default value is false for private devices. + PackageCleanup *bool `locationName:"packageCleanup" type:"boolean"` + + // When set to true, Device Farm will reboot the instance after a test run. + // The default value is true. + RebootAfterUse *bool `locationName:"rebootAfterUse" type:"boolean"` +} + +// String returns the string representation +func (s CreateInstanceProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstanceProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateInstanceProfileInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *CreateInstanceProfileInput) SetDescription(v string) *CreateInstanceProfileInput { + s.Description = &v + return s +} + +// SetExcludeAppPackagesFromCleanup sets the ExcludeAppPackagesFromCleanup field's value. +func (s *CreateInstanceProfileInput) SetExcludeAppPackagesFromCleanup(v []*string) *CreateInstanceProfileInput { + s.ExcludeAppPackagesFromCleanup = v + return s +} + +// SetName sets the Name field's value. +func (s *CreateInstanceProfileInput) SetName(v string) *CreateInstanceProfileInput { + s.Name = &v + return s +} + +// SetPackageCleanup sets the PackageCleanup field's value. +func (s *CreateInstanceProfileInput) SetPackageCleanup(v bool) *CreateInstanceProfileInput { + s.PackageCleanup = &v + return s +} + +// SetRebootAfterUse sets the RebootAfterUse field's value. +func (s *CreateInstanceProfileInput) SetRebootAfterUse(v bool) *CreateInstanceProfileInput { + s.RebootAfterUse = &v + return s +} + +type CreateInstanceProfileOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about your instance profile. + InstanceProfile *InstanceProfile `locationName:"instanceProfile" type:"structure"` +} + +// String returns the string representation +func (s CreateInstanceProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstanceProfileOutput) GoString() string { + return s.String() +} + +// SetInstanceProfile sets the InstanceProfile field's value. +func (s *CreateInstanceProfileOutput) SetInstanceProfile(v *InstanceProfile) *CreateInstanceProfileOutput { + s.InstanceProfile = v + return s +} + type CreateNetworkProfileInput struct { _ struct{} `type:"structure"` @@ -5841,13 +7280,15 @@ func (s *CreateProjectOutput) SetProject(v *Project) *CreateProjectOutput { return s } -// Creates the configuration settings for a remote access session, including -// the device model and type. +// Configuration settings for a remote access session, including billing method. 
type CreateRemoteAccessSessionConfiguration struct { _ struct{} `type:"structure"` - // Returns the billing method for purposes of configuring a remote access session. + // The billing method for the remote access session. BillingMethod *string `locationName:"billingMethod" type:"string" enum:"BillingMethod"` + + // An array of Amazon Resource Names (ARNs) included in the VPC endpoint configuration. + VpceConfigurationArns []*string `locationName:"vpceConfigurationArns" type:"list"` } // String returns the string representation @@ -5866,6 +7307,12 @@ func (s *CreateRemoteAccessSessionConfiguration) SetBillingMethod(v string) *Cre return s } +// SetVpceConfigurationArns sets the VpceConfigurationArns field's value. +func (s *CreateRemoteAccessSessionConfiguration) SetVpceConfigurationArns(v []*string) *CreateRemoteAccessSessionConfiguration { + s.VpceConfigurationArns = v + return s +} + // Creates and submits a request to start a remote access session. type CreateRemoteAccessSessionInput struct { _ struct{} `type:"structure"` @@ -5885,6 +7332,10 @@ type CreateRemoteAccessSessionInput struct { // DeviceArn is a required field DeviceArn *string `locationName:"deviceArn" min:"32" type:"string" required:"true"` + // The Amazon Resource Name (ARN) of the device instance for which you want + // to create a remote access session. + InstanceArn *string `locationName:"instanceArn" min:"32" type:"string"` + // The interaction mode of the remote access session. Valid values are: // // * INTERACTIVE: You can interact with the iOS device by viewing, touching, @@ -5919,6 +7370,15 @@ type CreateRemoteAccessSessionInput struct { // Set to true to enable remote recording for the remote access session. RemoteRecordEnabled *bool `locationName:"remoteRecordEnabled" type:"boolean"` + // When set to true, for private devices, Device Farm will not sign your app + // again. For public devices, Device Farm always signs your apps again and this + // parameter has no effect. + // + // For more information about how Device Farm re-signs your app(s), see Do you + // modify my app? (https://aws.amazon.com/device-farm/faq/) in the AWS Device + // Farm FAQs. + SkipAppResign *bool `locationName:"skipAppResign" type:"boolean"` + // The public key of the ssh key pair you want to use for connecting to remote // devices in your remote debugging session. This is only required if remoteDebugEnabled // is set to true. @@ -5944,6 +7404,9 @@ func (s *CreateRemoteAccessSessionInput) Validate() error { if s.DeviceArn != nil && len(*s.DeviceArn) < 32 { invalidParams.Add(request.NewErrParamMinLen("DeviceArn", 32)) } + if s.InstanceArn != nil && len(*s.InstanceArn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("InstanceArn", 32)) + } if s.ProjectArn == nil { invalidParams.Add(request.NewErrParamRequired("ProjectArn")) } @@ -5978,6 +7441,12 @@ func (s *CreateRemoteAccessSessionInput) SetDeviceArn(v string) *CreateRemoteAcc return s } +// SetInstanceArn sets the InstanceArn field's value. +func (s *CreateRemoteAccessSessionInput) SetInstanceArn(v string) *CreateRemoteAccessSessionInput { + s.InstanceArn = &v + return s +} + // SetInteractionMode sets the InteractionMode field's value. func (s *CreateRemoteAccessSessionInput) SetInteractionMode(v string) *CreateRemoteAccessSessionInput { s.InteractionMode = &v @@ -6014,6 +7483,12 @@ func (s *CreateRemoteAccessSessionInput) SetRemoteRecordEnabled(v bool) *CreateR return s } +// SetSkipAppResign sets the SkipAppResign field's value. 
+func (s *CreateRemoteAccessSessionInput) SetSkipAppResign(v bool) *CreateRemoteAccessSessionInput { + s.SkipAppResign = &v + return s +} + // SetSshPublicKey sets the SshPublicKey field's value. func (s *CreateRemoteAccessSessionInput) SetSshPublicKey(v string) *CreateRemoteAccessSessionInput { s.SshPublicKey = &v @@ -6073,7 +7548,7 @@ type CreateUploadInput struct { // // * IOS_APP: An iOS upload. // - // * WEB_APP: A web appliction upload. + // * WEB_APP: A web application upload. // // * EXTERNAL_DATA: An external data upload. // @@ -6102,39 +7577,157 @@ type CreateUploadInput struct { // // * XCTEST_TEST_PACKAGE: An XCode test package upload. // - // * XCTEST_UI_TEST_PACKAGE: An XCode UI test package upload. + // * XCTEST_UI_TEST_PACKAGE: An XCode UI test package upload. + // + // * APPIUM_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload. + // + // * APPIUM_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload. + // + // * APPIUM_PYTHON_TEST_SPEC: An Appium Python test spec upload. + // + // * APPIUM_WEB_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload. + // + // * APPIUM_WEB_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload. + // + // * APPIUM_WEB_PYTHON_TEST_SPEC: An Appium Python test spec upload. + // + // * INSTRUMENTATION_TEST_SPEC: An instrumentation test spec upload. + // + // * XCTEST_UI_TEST_SPEC: An XCode UI test spec upload. + // + // Note If you call CreateUpload with WEB_APP specified, AWS Device Farm throws + // an ArgumentException error. + // + // Type is a required field + Type *string `locationName:"type" type:"string" required:"true" enum:"UploadType"` +} + +// String returns the string representation +func (s CreateUploadInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateUploadInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateUploadInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateUploadInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.ProjectArn == nil { + invalidParams.Add(request.NewErrParamRequired("ProjectArn")) + } + if s.ProjectArn != nil && len(*s.ProjectArn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ProjectArn", 32)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContentType sets the ContentType field's value. +func (s *CreateUploadInput) SetContentType(v string) *CreateUploadInput { + s.ContentType = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateUploadInput) SetName(v string) *CreateUploadInput { + s.Name = &v + return s +} + +// SetProjectArn sets the ProjectArn field's value. +func (s *CreateUploadInput) SetProjectArn(v string) *CreateUploadInput { + s.ProjectArn = &v + return s +} + +// SetType sets the Type field's value. +func (s *CreateUploadInput) SetType(v string) *CreateUploadInput { + s.Type = &v + return s +} + +// Represents the result of a create upload request. +type CreateUploadOutput struct { + _ struct{} `type:"structure"` + + // The newly created upload. 
+ Upload *Upload `locationName:"upload" type:"structure"` +} + +// String returns the string representation +func (s CreateUploadOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateUploadOutput) GoString() string { + return s.String() +} + +// SetUpload sets the Upload field's value. +func (s *CreateUploadOutput) SetUpload(v *Upload) *CreateUploadOutput { + s.Upload = v + return s +} + +type CreateVPCEConfigurationInput struct { + _ struct{} `type:"structure"` + + // The DNS name of the service running in your VPC that you want Device Farm + // to test. + // + // ServiceDnsName is a required field + ServiceDnsName *string `locationName:"serviceDnsName" type:"string" required:"true"` + + // An optional description, providing more details about your VPC endpoint configuration. + VpceConfigurationDescription *string `locationName:"vpceConfigurationDescription" type:"string"` + + // The friendly name you give to your VPC endpoint configuration, to manage + // your configurations more easily. // - // Note If you call CreateUpload with WEB_APP specified, AWS Device Farm throws - // an ArgumentException error. + // VpceConfigurationName is a required field + VpceConfigurationName *string `locationName:"vpceConfigurationName" type:"string" required:"true"` + + // The name of the VPC endpoint service running inside your AWS account that + // you want Device Farm to test. // - // Type is a required field - Type *string `locationName:"type" type:"string" required:"true" enum:"UploadType"` + // VpceServiceName is a required field + VpceServiceName *string `locationName:"vpceServiceName" type:"string" required:"true"` } // String returns the string representation -func (s CreateUploadInput) String() string { +func (s CreateVPCEConfigurationInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateUploadInput) GoString() string { +func (s CreateVPCEConfigurationInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateUploadInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateUploadInput"} - if s.Name == nil { - invalidParams.Add(request.NewErrParamRequired("Name")) +func (s *CreateVPCEConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateVPCEConfigurationInput"} + if s.ServiceDnsName == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceDnsName")) } - if s.ProjectArn == nil { - invalidParams.Add(request.NewErrParamRequired("ProjectArn")) - } - if s.ProjectArn != nil && len(*s.ProjectArn) < 32 { - invalidParams.Add(request.NewErrParamMinLen("ProjectArn", 32)) + if s.VpceConfigurationName == nil { + invalidParams.Add(request.NewErrParamRequired("VpceConfigurationName")) } - if s.Type == nil { - invalidParams.Add(request.NewErrParamRequired("Type")) + if s.VpceServiceName == nil { + invalidParams.Add(request.NewErrParamRequired("VpceServiceName")) } if invalidParams.Len() > 0 { @@ -6143,51 +7736,50 @@ func (s *CreateUploadInput) Validate() error { return nil } -// SetContentType sets the ContentType field's value. -func (s *CreateUploadInput) SetContentType(v string) *CreateUploadInput { - s.ContentType = &v +// SetServiceDnsName sets the ServiceDnsName field's value. 
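The `CreateUploadInput` hunks above mark `Name`, `ProjectArn`, and `Type` as required (with `ContentType` optional), extend the upload type list with the new test spec types, and note that a `WEB_APP` type is rejected with an `ArgumentException`. A sketch of creating one of the new test spec uploads; the project ARN, file name, and content type are placeholders, and the `CreateUpload` client method itself sits in an earlier, unshown part of this file:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/devicefarm"
)

func main() {
	sess := session.Must(session.NewSession())
	client := devicefarm.New(sess)

	// Name, ProjectArn, and Type are required; ContentType is optional.
	// The ARN below is a placeholder, not a real project.
	out, err := client.CreateUpload(&devicefarm.CreateUploadInput{
		Name:        aws.String("my-tests.yml"),
		ProjectArn:  aws.String("arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE"),
		Type:        aws.String("APPIUM_PYTHON_TEST_SPEC"), // one of the new test spec upload types
		ContentType: aws.String("application/octet-stream"),
	})
	if err != nil {
		log.Fatal(err) // e.g. WEB_APP is rejected with an ArgumentException
	}
	fmt.Println(out.Upload) // the newly created upload
}
```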
+func (s *CreateVPCEConfigurationInput) SetServiceDnsName(v string) *CreateVPCEConfigurationInput { + s.ServiceDnsName = &v return s } -// SetName sets the Name field's value. -func (s *CreateUploadInput) SetName(v string) *CreateUploadInput { - s.Name = &v +// SetVpceConfigurationDescription sets the VpceConfigurationDescription field's value. +func (s *CreateVPCEConfigurationInput) SetVpceConfigurationDescription(v string) *CreateVPCEConfigurationInput { + s.VpceConfigurationDescription = &v return s } -// SetProjectArn sets the ProjectArn field's value. -func (s *CreateUploadInput) SetProjectArn(v string) *CreateUploadInput { - s.ProjectArn = &v +// SetVpceConfigurationName sets the VpceConfigurationName field's value. +func (s *CreateVPCEConfigurationInput) SetVpceConfigurationName(v string) *CreateVPCEConfigurationInput { + s.VpceConfigurationName = &v return s } -// SetType sets the Type field's value. -func (s *CreateUploadInput) SetType(v string) *CreateUploadInput { - s.Type = &v +// SetVpceServiceName sets the VpceServiceName field's value. +func (s *CreateVPCEConfigurationInput) SetVpceServiceName(v string) *CreateVPCEConfigurationInput { + s.VpceServiceName = &v return s } -// Represents the result of a create upload request. -type CreateUploadOutput struct { +type CreateVPCEConfigurationOutput struct { _ struct{} `type:"structure"` - // The newly created upload. - Upload *Upload `locationName:"upload" type:"structure"` + // An object containing information about your VPC endpoint configuration. + VpceConfiguration *VPCEConfiguration `locationName:"vpceConfiguration" type:"structure"` } // String returns the string representation -func (s CreateUploadOutput) String() string { +func (s CreateVPCEConfigurationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateUploadOutput) GoString() string { +func (s CreateVPCEConfigurationOutput) GoString() string { return s.String() } -// SetUpload sets the Upload field's value. -func (s *CreateUploadOutput) SetUpload(v *Upload) *CreateUploadOutput { - s.Upload = v +// SetVpceConfiguration sets the VpceConfiguration field's value. +func (s *CreateVPCEConfigurationOutput) SetVpceConfiguration(v *VPCEConfiguration) *CreateVPCEConfigurationOutput { + s.VpceConfiguration = v return s } @@ -6299,6 +7891,62 @@ func (s DeleteDevicePoolOutput) GoString() string { return s.String() } +type DeleteInstanceProfileInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the instance profile you are requesting + // to delete. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteInstanceProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInstanceProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInstanceProfileInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. 
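Illustrative sketch (not part of the vendored change): the new `CreateVPCEConfigurationInput`/`Output` types above are presumably exposed through a generated `CreateVPCEConfiguration` client method, as is usual for this SDK. Every value below is a placeholder; the three required fields match the `Validate()` method in the hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/devicefarm"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := devicefarm.New(sess)

	// ServiceDnsName, VpceConfigurationName, and VpceServiceName are required;
	// the description is optional. All values here are placeholders.
	out, err := svc.CreateVPCEConfiguration(&devicefarm.CreateVPCEConfigurationInput{
		VpceConfigurationName:        aws.String("example-vpce-config"),
		VpceServiceName:              aws.String("com.amazonaws.vpce.us-west-2.vpce-svc-EXAMPLE"),
		ServiceDnsName:               aws.String("service.internal.example.com"),
		VpceConfigurationDescription: aws.String("Private backend used by the test suite"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.VpceConfiguration)
}
```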
+func (s *DeleteInstanceProfileInput) SetArn(v string) *DeleteInstanceProfileInput { + s.Arn = &v + return s +} + +type DeleteInstanceProfileOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteInstanceProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInstanceProfileOutput) GoString() string { + return s.String() +} + type DeleteNetworkProfileInput struct { _ struct{} `type:"structure"` @@ -6586,6 +8234,62 @@ func (s DeleteUploadOutput) GoString() string { return s.String() } +type DeleteVPCEConfigurationInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the VPC endpoint configuration you want + // to delete. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteVPCEConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVPCEConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteVPCEConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteVPCEConfigurationInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *DeleteVPCEConfigurationInput) SetArn(v string) *DeleteVPCEConfigurationInput { + s.Arn = &v + return s +} + +type DeleteVPCEConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteVPCEConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVPCEConfigurationOutput) GoString() string { + return s.String() +} + // Represents a device type that an app is tested against. type Device struct { _ struct{} `type:"structure"` @@ -6593,6 +8297,9 @@ type Device struct { // The device's ARN. Arn *string `locationName:"arn" min:"32" type:"string"` + // Reflects how likely a device will be available for a test run. + Availability *string `locationName:"availability" type:"string" enum:"DeviceAvailability"` + // The device's carrier. Carrier *string `locationName:"carrier" type:"string"` @@ -6621,6 +8328,9 @@ type Device struct { // The device's image name. Image *string `locationName:"image" type:"string"` + // The instances belonging to this device. + Instances []*DeviceInstance `locationName:"instances" type:"list"` + // The device's manufacturer name. Manufacturer *string `locationName:"manufacturer" type:"string"` @@ -6677,6 +8387,12 @@ func (s *Device) SetArn(v string) *Device { return s } +// SetAvailability sets the Availability field's value. +func (s *Device) SetAvailability(v string) *Device { + s.Availability = &v + return s +} + // SetCarrier sets the Carrier field's value. func (s *Device) SetCarrier(v string) *Device { s.Carrier = &v @@ -6719,6 +8435,12 @@ func (s *Device) SetImage(v string) *Device { return s } +// SetInstances sets the Instances field's value. 
+func (s *Device) SetInstances(v []*DeviceInstance) *Device { + s.Instances = v + return s +} + // SetManufacturer sets the Manufacturer field's value. func (s *Device) SetManufacturer(v string) *Device { s.Manufacturer = &v @@ -6737,51 +8459,225 @@ func (s *Device) SetModel(v string) *Device { return s } -// SetModelId sets the ModelId field's value. -func (s *Device) SetModelId(v string) *Device { - s.ModelId = &v - return s +// SetModelId sets the ModelId field's value. +func (s *Device) SetModelId(v string) *Device { + s.ModelId = &v + return s +} + +// SetName sets the Name field's value. +func (s *Device) SetName(v string) *Device { + s.Name = &v + return s +} + +// SetOs sets the Os field's value. +func (s *Device) SetOs(v string) *Device { + s.Os = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *Device) SetPlatform(v string) *Device { + s.Platform = &v + return s +} + +// SetRadio sets the Radio field's value. +func (s *Device) SetRadio(v string) *Device { + s.Radio = &v + return s +} + +// SetRemoteAccessEnabled sets the RemoteAccessEnabled field's value. +func (s *Device) SetRemoteAccessEnabled(v bool) *Device { + s.RemoteAccessEnabled = &v + return s +} + +// SetRemoteDebugEnabled sets the RemoteDebugEnabled field's value. +func (s *Device) SetRemoteDebugEnabled(v bool) *Device { + s.RemoteDebugEnabled = &v + return s +} + +// SetResolution sets the Resolution field's value. +func (s *Device) SetResolution(v *Resolution) *Device { + s.Resolution = v + return s +} + +// Represents a device filter used to select a set of devices to be included +// in a test run. This data structure is passed in as the "deviceSelectionConfiguration" +// parameter to ScheduleRun. For an example of the JSON request syntax, see +// ScheduleRun. +// +// It is also passed in as the "filters" parameter to ListDevices. For an example +// of the JSON request syntax, see ListDevices. +type DeviceFilter struct { + _ struct{} `type:"structure"` + + // The aspect of a device such as platform or model used as the selection criteria + // in a device filter. + // + // Allowed values include: + // + // * ARN: The Amazon Resource Name (ARN) of the device. For example, "arn:aws:devicefarm:us-west-2::device:12345Example". + // + // * PLATFORM: The device platform. Valid values are "ANDROID" or "IOS". + // + // * OS_VERSION: The operating system version. For example, "10.3.2". + // + // * MODEL: The device model. For example, "iPad 5th Gen". + // + // * AVAILABILITY: The current availability of the device. Valid values are + // "AVAILABLE", "HIGHLY_AVAILABLE", "BUSY", or "TEMPORARY_NOT_AVAILABLE". + // + // * FORM_FACTOR: The device form factor. Valid values are "PHONE" or "TABLET". + // + // * MANUFACTURER: The device manufacturer. For example, "Apple". + // + // * REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. + // + // * REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. + // + // * INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. + // + // * INSTANCE_LABELS: The label of the device instance. + // + // * FLEET_TYPE: The fleet type. Valid values are "PUBLIC" or "PRIVATE". + Attribute *string `locationName:"attribute" type:"string" enum:"DeviceFilterAttribute"` + + // The filter operator. + // + // * The EQUALS operator is available for every attribute except INSTANCE_LABELS. + // + // * The CONTAINS operator is available for the INSTANCE_LABELS and MODEL + // attributes. 
+ // + // * The IN and NOT_IN operators are available for the ARN, OS_VERSION, MODEL, + // MANUFACTURER, and INSTANCE_ARN attributes. + // + // * The LESS_THAN, GREATER_THAN, LESS_THAN_OR_EQUALS, and GREATER_THAN_OR_EQUALS + // operators are also available for the OS_VERSION attribute. + Operator *string `locationName:"operator" type:"string" enum:"DeviceFilterOperator"` + + // An array of one or more filter values used in a device filter. + // + // Operator Values + // + // * The IN and NOT operators can take a values array that has more than + // one element. + // + // * The other operators require an array with a single element. + // + // Attribute Values + // + // * The PLATFORM attribute can be set to "ANDROID" or "IOS". + // + // * The AVAILABILITY attribute can be set to "AVAILABLE", "HIGHLY_AVAILABLE", + // "BUSY", or "TEMPORARY_NOT_AVAILABLE". + // + // * The FORM_FACTOR attribute can be set to "PHONE" or "TABLET". + // + // * The FLEET_TYPE attribute can be set to "PUBLIC" or "PRIVATE". + Values []*string `locationName:"values" type:"list"` +} + +// String returns the string representation +func (s DeviceFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeviceFilter) GoString() string { + return s.String() +} + +// SetAttribute sets the Attribute field's value. +func (s *DeviceFilter) SetAttribute(v string) *DeviceFilter { + s.Attribute = &v + return s +} + +// SetOperator sets the Operator field's value. +func (s *DeviceFilter) SetOperator(v string) *DeviceFilter { + s.Operator = &v + return s +} + +// SetValues sets the Values field's value. +func (s *DeviceFilter) SetValues(v []*string) *DeviceFilter { + s.Values = v + return s +} + +// Represents the device instance. +type DeviceInstance struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the device instance. + Arn *string `locationName:"arn" min:"32" type:"string"` + + // The Amazon Resource Name (ARN) of the device. + DeviceArn *string `locationName:"deviceArn" min:"32" type:"string"` + + // A object containing information about the instance profile. + InstanceProfile *InstanceProfile `locationName:"instanceProfile" type:"structure"` + + // An array of strings describing the device instance. + Labels []*string `locationName:"labels" type:"list"` + + // The status of the device instance. Valid values are listed below. + Status *string `locationName:"status" type:"string" enum:"InstanceStatus"` + + // Unique device identifier for the device instance. + Udid *string `locationName:"udid" type:"string"` +} + +// String returns the string representation +func (s DeviceInstance) String() string { + return awsutil.Prettify(s) } -// SetName sets the Name field's value. -func (s *Device) SetName(v string) *Device { - s.Name = &v - return s +// GoString returns the string representation +func (s DeviceInstance) GoString() string { + return s.String() } -// SetOs sets the Os field's value. -func (s *Device) SetOs(v string) *Device { - s.Os = &v +// SetArn sets the Arn field's value. +func (s *DeviceInstance) SetArn(v string) *DeviceInstance { + s.Arn = &v return s } -// SetPlatform sets the Platform field's value. -func (s *Device) SetPlatform(v string) *Device { - s.Platform = &v +// SetDeviceArn sets the DeviceArn field's value. +func (s *DeviceInstance) SetDeviceArn(v string) *DeviceInstance { + s.DeviceArn = &v return s } -// SetRadio sets the Radio field's value. 
-func (s *Device) SetRadio(v string) *Device { - s.Radio = &v +// SetInstanceProfile sets the InstanceProfile field's value. +func (s *DeviceInstance) SetInstanceProfile(v *InstanceProfile) *DeviceInstance { + s.InstanceProfile = v return s } -// SetRemoteAccessEnabled sets the RemoteAccessEnabled field's value. -func (s *Device) SetRemoteAccessEnabled(v bool) *Device { - s.RemoteAccessEnabled = &v +// SetLabels sets the Labels field's value. +func (s *DeviceInstance) SetLabels(v []*string) *DeviceInstance { + s.Labels = v return s } -// SetRemoteDebugEnabled sets the RemoteDebugEnabled field's value. -func (s *Device) SetRemoteDebugEnabled(v bool) *Device { - s.RemoteDebugEnabled = &v +// SetStatus sets the Status field's value. +func (s *DeviceInstance) SetStatus(v string) *DeviceInstance { + s.Status = &v return s } -// SetResolution sets the Resolution field's value. -func (s *Device) SetResolution(v *Resolution) *Device { - s.Resolution = v +// SetUdid sets the Udid field's value. +func (s *DeviceInstance) SetUdid(v string) *DeviceInstance { + s.Udid = &v return s } @@ -6940,6 +8836,158 @@ func (s *DevicePoolCompatibilityResult) SetIncompatibilityMessages(v []*Incompat return s } +// Represents the device filters used in a test run as well as the maximum number +// of devices to be included in the run. It is passed in as the deviceSelectionConfiguration +// request parameter in ScheduleRun. +type DeviceSelectionConfiguration struct { + _ struct{} `type:"structure"` + + // Used to dynamically select a set of devices for a test run. A filter is made + // up of an attribute, an operator, and one or more values. + // + // * Attribute: The aspect of a device such as platform or model used as + // the selection criteria in a device filter. + // + // Allowed values include: + // + // ARN: The Amazon Resource Name (ARN) of the device. For example, "arn:aws:devicefarm:us-west-2::device:12345Example". + // + // PLATFORM: The device platform. Valid values are "ANDROID" or "IOS". + // + // OS_VERSION: The operating system version. For example, "10.3.2". + // + // MODEL: The device model. For example, "iPad 5th Gen". + // + // AVAILABILITY: The current availability of the device. Valid values are "AVAILABLE", + // "HIGHLY_AVAILABLE", "BUSY", or "TEMPORARY_NOT_AVAILABLE". + // + // FORM_FACTOR: The device form factor. Valid values are "PHONE" or "TABLET". + // + // MANUFACTURER: The device manufacturer. For example, "Apple". + // + // REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. + // + // REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. + // + // INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. + // + // INSTANCE_LABELS: The label of the device instance. + // + // FLEET_TYPE: The fleet type. Valid values are "PUBLIC" or "PRIVATE". + // + // * Operator: The filter operator. + // + // The EQUALS operator is available for every attribute except INSTANCE_LABELS. + // + // The CONTAINS operator is available for the INSTANCE_LABELS and MODEL attributes. + // + // The IN and NOT_IN operators are available for the ARN, OS_VERSION, MODEL, + // MANUFACTURER, and INSTANCE_ARN attributes. + // + // The LESS_THAN, GREATER_THAN, LESS_THAN_OR_EQUALS, and GREATER_THAN_OR_EQUALS + // operators are also available for the OS_VERSION attribute. + // + // * Values: An array of one or more filter values. + // + // The IN and NOT operators can take a values array that has more than one element. 
+ // + // The other operators require an array with a single element. + // + // In a request, the AVAILABILITY attribute takes "AVAILABLE", "HIGHLY_AVAILABLE", + // "BUSY", or "TEMPORARY_NOT_AVAILABLE" as values. + // + // Filters is a required field + Filters []*DeviceFilter `locationName:"filters" type:"list" required:"true"` + + // The maximum number of devices to be included in a test run. + // + // MaxDevices is a required field + MaxDevices *int64 `locationName:"maxDevices" type:"integer" required:"true"` +} + +// String returns the string representation +func (s DeviceSelectionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeviceSelectionConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeviceSelectionConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeviceSelectionConfiguration"} + if s.Filters == nil { + invalidParams.Add(request.NewErrParamRequired("Filters")) + } + if s.MaxDevices == nil { + invalidParams.Add(request.NewErrParamRequired("MaxDevices")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DeviceSelectionConfiguration) SetFilters(v []*DeviceFilter) *DeviceSelectionConfiguration { + s.Filters = v + return s +} + +// SetMaxDevices sets the MaxDevices field's value. +func (s *DeviceSelectionConfiguration) SetMaxDevices(v int64) *DeviceSelectionConfiguration { + s.MaxDevices = &v + return s +} + +// Contains the run results requested by the device selection configuration +// as well as how many devices were returned. For an example of the JSON response +// syntax, see ScheduleRun. +type DeviceSelectionResult struct { + _ struct{} `type:"structure"` + + // The filters in a device selection result. + Filters []*DeviceFilter `locationName:"filters" type:"list"` + + // The number of devices that matched the device filter selection criteria. + MatchedDevicesCount *int64 `locationName:"matchedDevicesCount" type:"integer"` + + // The maximum number of devices to be selected by a device filter and included + // in a test run. + MaxDevices *int64 `locationName:"maxDevices" type:"integer"` +} + +// String returns the string representation +func (s DeviceSelectionResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeviceSelectionResult) GoString() string { + return s.String() +} + +// SetFilters sets the Filters field's value. +func (s *DeviceSelectionResult) SetFilters(v []*DeviceFilter) *DeviceSelectionResult { + s.Filters = v + return s +} + +// SetMatchedDevicesCount sets the MatchedDevicesCount field's value. +func (s *DeviceSelectionResult) SetMatchedDevicesCount(v int64) *DeviceSelectionResult { + s.MatchedDevicesCount = &v + return s +} + +// SetMaxDevices sets the MaxDevices field's value. +func (s *DeviceSelectionResult) SetMaxDevices(v int64) *DeviceSelectionResult { + s.MaxDevices = &v + return s +} + // Represents configuration information about a test run, such as the execution // timeout (in minutes). type ExecutionConfiguration struct { @@ -6955,6 +9003,19 @@ type ExecutionConfiguration struct { // The number of minutes a test run will execute before it times out. 
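Illustrative sketch (not part of the vendored change): `DeviceFilter` and `DeviceSelectionConfiguration` are the core of the device-selection API added in this diff. The same filter slice is usable both as the `Filters` field that this diff adds to `ListDevicesInput` and, per the doc comments above, inside a `DeviceSelectionConfiguration` for `ScheduleRun` (the corresponding `ScheduleRunInput` field is not shown in this hunk, so it is only referenced in a comment).

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/devicefarm"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := devicefarm.New(sess)

	// Android devices on OS 8.0.0 or newer. EQUALS takes a single-element
	// Values array; the comparison operators apply to OS_VERSION per the docs.
	filters := []*devicefarm.DeviceFilter{
		{
			Attribute: aws.String("PLATFORM"),
			Operator:  aws.String("EQUALS"),
			Values:    []*string{aws.String("ANDROID")},
		},
		{
			Attribute: aws.String("OS_VERSION"),
			Operator:  aws.String("GREATER_THAN_OR_EQUALS"),
			Values:    []*string{aws.String("8.0.0")},
		},
	}

	// Preview which devices the filters match.
	devices, err := svc.ListDevices(&devicefarm.ListDevicesInput{Filters: filters})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(devices.Devices)

	// The same filters, capped at five devices, form a selection configuration
	// intended for ScheduleRun (both fields are required).
	selection := &devicefarm.DeviceSelectionConfiguration{
		Filters:    filters,
		MaxDevices: aws.Int64(5),
	}
	fmt.Println(selection)
}
```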
JobTimeoutMinutes *int64 `locationName:"jobTimeoutMinutes" type:"integer"` + + // When set to true, for private devices, Device Farm will not sign your app + // again. For public devices, Device Farm always signs your apps again and this + // parameter has no effect. + // + // For more information about how Device Farm re-signs your app(s), see Do you + // modify my app? (https://aws.amazon.com/device-farm/faq/) in the AWS Device + // Farm FAQs. + SkipAppResign *bool `locationName:"skipAppResign" type:"boolean"` + + // Set to true to enable video capture; otherwise, set to false. The default + // is true. + VideoCapture *bool `locationName:"videoCapture" type:"boolean"` } // String returns the string representation @@ -6985,6 +9046,18 @@ func (s *ExecutionConfiguration) SetJobTimeoutMinutes(v int64) *ExecutionConfigu return s } +// SetSkipAppResign sets the SkipAppResign field's value. +func (s *ExecutionConfiguration) SetSkipAppResign(v bool) *ExecutionConfiguration { + s.SkipAppResign = &v + return s +} + +// SetVideoCapture sets the VideoCapture field's value. +func (s *ExecutionConfiguration) SetVideoCapture(v bool) *ExecutionConfiguration { + s.VideoCapture = &v + return s +} + // Represents the request sent to retrieve the account settings. type GetAccountSettingsInput struct { _ struct{} `type:"structure"` @@ -7067,6 +9140,71 @@ func (s *GetDeviceInput) SetArn(v string) *GetDeviceInput { return s } +type GetDeviceInstanceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the instance you're requesting information + // about. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetDeviceInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDeviceInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetDeviceInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDeviceInstanceInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *GetDeviceInstanceInput) SetArn(v string) *GetDeviceInstanceInput { + s.Arn = &v + return s +} + +type GetDeviceInstanceOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about your device instance. + DeviceInstance *DeviceInstance `locationName:"deviceInstance" type:"structure"` +} + +// String returns the string representation +func (s GetDeviceInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDeviceInstanceOutput) GoString() string { + return s.String() +} + +// SetDeviceInstance sets the DeviceInstance field's value. +func (s *GetDeviceInstanceOutput) SetDeviceInstance(v *DeviceInstance) *GetDeviceInstanceOutput { + s.DeviceInstance = v + return s +} + // Represents the result of a get device request. type GetDeviceOutput struct { _ struct{} `type:"structure"` @@ -7098,6 +9236,9 @@ type GetDevicePoolCompatibilityInput struct { // The ARN of the app that is associated with the specified device pool. 
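Illustrative sketch (not part of the vendored change): the two fields this diff adds to `ExecutionConfiguration`, `SkipAppResign` and `VideoCapture`, can be set through the generated fluent setters shown in the hunk. The resulting value is normally attached to a `ScheduleRun` request; that input type is an assumption here and is not part of this hunk, so the sketch only builds and prints the configuration.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/devicefarm"
)

func main() {
	// Build an execution configuration with the generated fluent setters:
	// an explicit 30-minute timeout, app re-signing skipped (only honored on
	// private devices, per the field docs), and video capture disabled
	// (it defaults to true).
	cfg := (&devicefarm.ExecutionConfiguration{}).
		SetJobTimeoutMinutes(30).
		SetSkipAppResign(true).
		SetVideoCapture(false)

	// String() pretty-prints the struct via awsutil.Prettify.
	fmt.Println(cfg)
}
```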
AppArn *string `locationName:"appArn" min:"32" type:"string"` + // An object containing information about the settings for a run. + Configuration *ScheduleRunConfiguration `locationName:"configuration" type:"structure"` + // The device pool's ARN. // // DevicePoolArn is a required field @@ -7164,6 +9305,11 @@ func (s *GetDevicePoolCompatibilityInput) Validate() error { if s.DevicePoolArn != nil && len(*s.DevicePoolArn) < 32 { invalidParams.Add(request.NewErrParamMinLen("DevicePoolArn", 32)) } + if s.Configuration != nil { + if err := s.Configuration.Validate(); err != nil { + invalidParams.AddNested("Configuration", err.(request.ErrInvalidParams)) + } + } if s.Test != nil { if err := s.Test.Validate(); err != nil { invalidParams.AddNested("Test", err.(request.ErrInvalidParams)) @@ -7182,6 +9328,12 @@ func (s *GetDevicePoolCompatibilityInput) SetAppArn(v string) *GetDevicePoolComp return s } +// SetConfiguration sets the Configuration field's value. +func (s *GetDevicePoolCompatibilityInput) SetConfiguration(v *ScheduleRunConfiguration) *GetDevicePoolCompatibilityInput { + s.Configuration = v + return s +} + // SetDevicePoolArn sets the DevicePoolArn field's value. func (s *GetDevicePoolCompatibilityInput) SetDevicePoolArn(v string) *GetDevicePoolCompatibilityInput { s.DevicePoolArn = &v @@ -7207,55 +9359,120 @@ type GetDevicePoolCompatibilityOutput struct { // Information about compatible devices. CompatibleDevices []*DevicePoolCompatibilityResult `locationName:"compatibleDevices" type:"list"` - // Information about incompatible devices. - IncompatibleDevices []*DevicePoolCompatibilityResult `locationName:"incompatibleDevices" type:"list"` + // Information about incompatible devices. + IncompatibleDevices []*DevicePoolCompatibilityResult `locationName:"incompatibleDevices" type:"list"` +} + +// String returns the string representation +func (s GetDevicePoolCompatibilityOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDevicePoolCompatibilityOutput) GoString() string { + return s.String() +} + +// SetCompatibleDevices sets the CompatibleDevices field's value. +func (s *GetDevicePoolCompatibilityOutput) SetCompatibleDevices(v []*DevicePoolCompatibilityResult) *GetDevicePoolCompatibilityOutput { + s.CompatibleDevices = v + return s +} + +// SetIncompatibleDevices sets the IncompatibleDevices field's value. +func (s *GetDevicePoolCompatibilityOutput) SetIncompatibleDevices(v []*DevicePoolCompatibilityResult) *GetDevicePoolCompatibilityOutput { + s.IncompatibleDevices = v + return s +} + +// Represents a request to the get device pool operation. +type GetDevicePoolInput struct { + _ struct{} `type:"structure"` + + // The device pool's ARN. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetDevicePoolInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDevicePoolInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetDevicePoolInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDevicePoolInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *GetDevicePoolInput) SetArn(v string) *GetDevicePoolInput { + s.Arn = &v + return s +} + +// Represents the result of a get device pool request. +type GetDevicePoolOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about the requested device pool. + DevicePool *DevicePool `locationName:"devicePool" type:"structure"` } // String returns the string representation -func (s GetDevicePoolCompatibilityOutput) String() string { +func (s GetDevicePoolOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDevicePoolCompatibilityOutput) GoString() string { +func (s GetDevicePoolOutput) GoString() string { return s.String() } -// SetCompatibleDevices sets the CompatibleDevices field's value. -func (s *GetDevicePoolCompatibilityOutput) SetCompatibleDevices(v []*DevicePoolCompatibilityResult) *GetDevicePoolCompatibilityOutput { - s.CompatibleDevices = v - return s -} - -// SetIncompatibleDevices sets the IncompatibleDevices field's value. -func (s *GetDevicePoolCompatibilityOutput) SetIncompatibleDevices(v []*DevicePoolCompatibilityResult) *GetDevicePoolCompatibilityOutput { - s.IncompatibleDevices = v +// SetDevicePool sets the DevicePool field's value. +func (s *GetDevicePoolOutput) SetDevicePool(v *DevicePool) *GetDevicePoolOutput { + s.DevicePool = v return s } -// Represents a request to the get device pool operation. -type GetDevicePoolInput struct { +type GetInstanceProfileInput struct { _ struct{} `type:"structure"` - // The device pool's ARN. + // The Amazon Resource Name (ARN) of your instance profile. // // Arn is a required field Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` } // String returns the string representation -func (s GetDevicePoolInput) String() string { +func (s GetInstanceProfileInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDevicePoolInput) GoString() string { +func (s GetInstanceProfileInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetDevicePoolInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetDevicePoolInput"} +func (s *GetInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInstanceProfileInput"} if s.Arn == nil { invalidParams.Add(request.NewErrParamRequired("Arn")) } @@ -7270,32 +9487,31 @@ func (s *GetDevicePoolInput) Validate() error { } // SetArn sets the Arn field's value. -func (s *GetDevicePoolInput) SetArn(v string) *GetDevicePoolInput { +func (s *GetInstanceProfileInput) SetArn(v string) *GetInstanceProfileInput { s.Arn = &v return s } -// Represents the result of a get device pool request. -type GetDevicePoolOutput struct { +type GetInstanceProfileOutput struct { _ struct{} `type:"structure"` - // An object containing information about the requested device pool. 
- DevicePool *DevicePool `locationName:"devicePool" type:"structure"` + // An object containing information about your instance profile. + InstanceProfile *InstanceProfile `locationName:"instanceProfile" type:"structure"` } // String returns the string representation -func (s GetDevicePoolOutput) String() string { +func (s GetInstanceProfileOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDevicePoolOutput) GoString() string { +func (s GetInstanceProfileOutput) GoString() string { return s.String() } -// SetDevicePool sets the DevicePool field's value. -func (s *GetDevicePoolOutput) SetDevicePool(v *DevicePool) *GetDevicePoolOutput { - s.DevicePool = v +// SetInstanceProfile sets the InstanceProfile field's value. +func (s *GetInstanceProfileOutput) SetInstanceProfile(v *InstanceProfile) *GetInstanceProfileOutput { + s.InstanceProfile = v return s } @@ -7911,6 +10127,71 @@ func (s *GetUploadOutput) SetUpload(v *Upload) *GetUploadOutput { return s } +type GetVPCEConfigurationInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the VPC endpoint configuration you want + // to describe. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetVPCEConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetVPCEConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetVPCEConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetVPCEConfigurationInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *GetVPCEConfigurationInput) SetArn(v string) *GetVPCEConfigurationInput { + s.Arn = &v + return s +} + +type GetVPCEConfigurationOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about your VPC endpoint configuration. + VpceConfiguration *VPCEConfiguration `locationName:"vpceConfiguration" type:"structure"` +} + +// String returns the string representation +func (s GetVPCEConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetVPCEConfigurationOutput) GoString() string { + return s.String() +} + +// SetVpceConfiguration sets the VpceConfiguration field's value. +func (s *GetVPCEConfigurationOutput) SetVpceConfiguration(v *VPCEConfiguration) *GetVPCEConfigurationOutput { + s.VpceConfiguration = v + return s +} + // Represents information about incompatibility. type IncompatibilityMessage struct { _ struct{} `type:"structure"` @@ -8045,6 +10326,80 @@ func (s *InstallToRemoteAccessSessionOutput) SetAppUpload(v *Upload) *InstallToR return s } +// Represents the instance profile. +type InstanceProfile struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the instance profile. + Arn *string `locationName:"arn" min:"32" type:"string"` + + // The description of the instance profile. 
+ Description *string `locationName:"description" type:"string"` + + // An array of strings specifying the list of app packages that should not be + // cleaned up from the device after a test run is over. + // + // The list of packages is only considered if you set packageCleanup to true. + ExcludeAppPackagesFromCleanup []*string `locationName:"excludeAppPackagesFromCleanup" type:"list"` + + // The name of the instance profile. + Name *string `locationName:"name" type:"string"` + + // When set to true, Device Farm will remove app packages after a test run. + // The default value is false for private devices. + PackageCleanup *bool `locationName:"packageCleanup" type:"boolean"` + + // When set to true, Device Farm will reboot the instance after a test run. + // The default value is true. + RebootAfterUse *bool `locationName:"rebootAfterUse" type:"boolean"` +} + +// String returns the string representation +func (s InstanceProfile) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceProfile) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *InstanceProfile) SetArn(v string) *InstanceProfile { + s.Arn = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *InstanceProfile) SetDescription(v string) *InstanceProfile { + s.Description = &v + return s +} + +// SetExcludeAppPackagesFromCleanup sets the ExcludeAppPackagesFromCleanup field's value. +func (s *InstanceProfile) SetExcludeAppPackagesFromCleanup(v []*string) *InstanceProfile { + s.ExcludeAppPackagesFromCleanup = v + return s +} + +// SetName sets the Name field's value. +func (s *InstanceProfile) SetName(v string) *InstanceProfile { + s.Name = &v + return s +} + +// SetPackageCleanup sets the PackageCleanup field's value. +func (s *InstanceProfile) SetPackageCleanup(v bool) *InstanceProfile { + s.PackageCleanup = &v + return s +} + +// SetRebootAfterUse sets the RebootAfterUse field's value. +func (s *InstanceProfile) SetRebootAfterUse(v bool) *InstanceProfile { + s.RebootAfterUse = &v + return s +} + // Represents a device. type Job struct { _ struct{} `type:"structure"` @@ -8056,7 +10411,7 @@ type Job struct { Counters *Counters `locationName:"counters" type:"structure"` // When the job was created. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // The device (phone or tablet). Device *Device `locationName:"device" type:"structure"` @@ -8064,6 +10419,9 @@ type Job struct { // Represents the total (metered or unmetered) minutes used by the job. DeviceMinutes *DeviceMinutes `locationName:"deviceMinutes" type:"structure"` + // The Amazon Resource Name (ARN) of the instance. + InstanceArn *string `locationName:"instanceArn" min:"32" type:"string"` + // A message about the job's result. Message *string `locationName:"message" type:"string"` @@ -8090,7 +10448,7 @@ type Job struct { Result *string `locationName:"result" type:"string" enum:"ExecutionResult"` // The job's start time. - Started *time.Time `locationName:"started" type:"timestamp" timestampFormat:"unix"` + Started *time.Time `locationName:"started" type:"timestamp"` // The job's status. // @@ -8116,7 +10474,7 @@ type Job struct { Status *string `locationName:"status" type:"string" enum:"ExecutionStatus"` // The job's stop time. 
- Stopped *time.Time `locationName:"stopped" type:"timestamp" timestampFormat:"unix"` + Stopped *time.Time `locationName:"stopped" type:"timestamp"` // The job's type. // @@ -8152,6 +10510,13 @@ type Job struct { // // * XCTEST_UI: The XCode UI test type. Type *string `locationName:"type" type:"string" enum:"TestType"` + + // This value is set to true if video capture is enabled; otherwise, it is set + // to false. + VideoCapture *bool `locationName:"videoCapture" type:"boolean"` + + // The endpoint for streaming device video. + VideoEndpoint *string `locationName:"videoEndpoint" type:"string"` } // String returns the string representation @@ -8194,6 +10559,12 @@ func (s *Job) SetDeviceMinutes(v *DeviceMinutes) *Job { return s } +// SetInstanceArn sets the InstanceArn field's value. +func (s *Job) SetInstanceArn(v string) *Job { + s.InstanceArn = &v + return s +} + // SetMessage sets the Message field's value. func (s *Job) SetMessage(v string) *Job { s.Message = &v @@ -8236,58 +10607,158 @@ func (s *Job) SetType(v string) *Job { return s } -// Represents a request to the list artifacts operation. -type ListArtifactsInput struct { +// SetVideoCapture sets the VideoCapture field's value. +func (s *Job) SetVideoCapture(v bool) *Job { + s.VideoCapture = &v + return s +} + +// SetVideoEndpoint sets the VideoEndpoint field's value. +func (s *Job) SetVideoEndpoint(v string) *Job { + s.VideoEndpoint = &v + return s +} + +// Represents a request to the list artifacts operation. +type ListArtifactsInput struct { + _ struct{} `type:"structure"` + + // The Run, Job, Suite, or Test ARN. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` + + // An identifier that was returned from the previous call to this operation, + // which can be used to return the next set of items in the list. + NextToken *string `locationName:"nextToken" min:"4" type:"string"` + + // The artifacts' type. + // + // Allowed values include: + // + // * FILE: The artifacts are files. + // + // * LOG: The artifacts are logs. + // + // * SCREENSHOT: The artifacts are screenshots. + // + // Type is a required field + Type *string `locationName:"type" type:"string" required:"true" enum:"ArtifactCategory"` +} + +// String returns the string representation +func (s ListArtifactsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListArtifactsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListArtifactsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListArtifactsInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + if s.NextToken != nil && len(*s.NextToken) < 4 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 4)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *ListArtifactsInput) SetArn(v string) *ListArtifactsInput { + s.Arn = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListArtifactsInput) SetNextToken(v string) *ListArtifactsInput { + s.NextToken = &v + return s +} + +// SetType sets the Type field's value. 
+func (s *ListArtifactsInput) SetType(v string) *ListArtifactsInput { + s.Type = &v + return s +} + +// Represents the result of a list artifacts operation. +type ListArtifactsOutput struct { + _ struct{} `type:"structure"` + + // Information about the artifacts. + Artifacts []*Artifact `locationName:"artifacts" type:"list"` + + // If the number of items that are returned is significantly large, this is + // an identifier that is also returned, which can be used in a subsequent call + // to this operation to return the next set of items in the list. + NextToken *string `locationName:"nextToken" min:"4" type:"string"` +} + +// String returns the string representation +func (s ListArtifactsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListArtifactsOutput) GoString() string { + return s.String() +} + +// SetArtifacts sets the Artifacts field's value. +func (s *ListArtifactsOutput) SetArtifacts(v []*Artifact) *ListArtifactsOutput { + s.Artifacts = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListArtifactsOutput) SetNextToken(v string) *ListArtifactsOutput { + s.NextToken = &v + return s +} + +type ListDeviceInstancesInput struct { _ struct{} `type:"structure"` - // The Run, Job, Suite, or Test ARN. - // - // Arn is a required field - Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` + // An integer specifying the maximum number of items you want to return in the + // API response. + MaxResults *int64 `locationName:"maxResults" type:"integer"` // An identifier that was returned from the previous call to this operation, // which can be used to return the next set of items in the list. NextToken *string `locationName:"nextToken" min:"4" type:"string"` - - // The artifacts' type. - // - // Allowed values include: - // - // * FILE: The artifacts are files. - // - // * LOG: The artifacts are logs. - // - // * SCREENSHOT: The artifacts are screenshots. - // - // Type is a required field - Type *string `locationName:"type" type:"string" required:"true" enum:"ArtifactCategory"` } // String returns the string representation -func (s ListArtifactsInput) String() string { +func (s ListDeviceInstancesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListArtifactsInput) GoString() string { +func (s ListDeviceInstancesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListArtifactsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListArtifactsInput"} - if s.Arn == nil { - invalidParams.Add(request.NewErrParamRequired("Arn")) - } - if s.Arn != nil && len(*s.Arn) < 32 { - invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) - } +func (s *ListDeviceInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListDeviceInstancesInput"} if s.NextToken != nil && len(*s.NextToken) < 4 { invalidParams.Add(request.NewErrParamMinLen("NextToken", 4)) } - if s.Type == nil { - invalidParams.Add(request.NewErrParamRequired("Type")) - } if invalidParams.Len() > 0 { return invalidParams @@ -8295,55 +10766,47 @@ func (s *ListArtifactsInput) Validate() error { return nil } -// SetArn sets the Arn field's value. -func (s *ListArtifactsInput) SetArn(v string) *ListArtifactsInput { - s.Arn = &v +// SetMaxResults sets the MaxResults field's value. 
+func (s *ListDeviceInstancesInput) SetMaxResults(v int64) *ListDeviceInstancesInput { + s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. -func (s *ListArtifactsInput) SetNextToken(v string) *ListArtifactsInput { +func (s *ListDeviceInstancesInput) SetNextToken(v string) *ListDeviceInstancesInput { s.NextToken = &v return s } -// SetType sets the Type field's value. -func (s *ListArtifactsInput) SetType(v string) *ListArtifactsInput { - s.Type = &v - return s -} - -// Represents the result of a list artifacts operation. -type ListArtifactsOutput struct { +type ListDeviceInstancesOutput struct { _ struct{} `type:"structure"` - // Information about the artifacts. - Artifacts []*Artifact `locationName:"artifacts" type:"list"` + // An object containing information about your device instances. + DeviceInstances []*DeviceInstance `locationName:"deviceInstances" type:"list"` - // If the number of items that are returned is significantly large, this is - // an identifier that is also returned, which can be used in a subsequent call - // to this operation to return the next set of items in the list. + // An identifier that can be used in the next call to this operation to return + // the next set of items in the list. NextToken *string `locationName:"nextToken" min:"4" type:"string"` } // String returns the string representation -func (s ListArtifactsOutput) String() string { +func (s ListDeviceInstancesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListArtifactsOutput) GoString() string { +func (s ListDeviceInstancesOutput) GoString() string { return s.String() } -// SetArtifacts sets the Artifacts field's value. -func (s *ListArtifactsOutput) SetArtifacts(v []*Artifact) *ListArtifactsOutput { - s.Artifacts = v +// SetDeviceInstances sets the DeviceInstances field's value. +func (s *ListDeviceInstancesOutput) SetDeviceInstances(v []*DeviceInstance) *ListDeviceInstancesOutput { + s.DeviceInstances = v return s } // SetNextToken sets the NextToken field's value. -func (s *ListArtifactsOutput) SetNextToken(v string) *ListArtifactsOutput { +func (s *ListDeviceInstancesOutput) SetNextToken(v string) *ListDeviceInstancesOutput { s.NextToken = &v return s } @@ -8461,6 +10924,61 @@ type ListDevicesInput struct { // The Amazon Resource Name (ARN) of the project. Arn *string `locationName:"arn" min:"32" type:"string"` + // Used to select a set of devices. A filter is made up of an attribute, an + // operator, and one or more values. + // + // * Attribute: The aspect of a device such as platform or model used as + // the selction criteria in a device filter. + // + // Allowed values include: + // + // ARN: The Amazon Resource Name (ARN) of the device. For example, "arn:aws:devicefarm:us-west-2::device:12345Example". + // + // PLATFORM: The device platform. Valid values are "ANDROID" or "IOS". + // + // OS_VERSION: The operating system version. For example, "10.3.2". + // + // MODEL: The device model. For example, "iPad 5th Gen". + // + // AVAILABILITY: The current availability of the device. Valid values are "AVAILABLE", + // "HIGHLY_AVAILABLE", "BUSY", or "TEMPORARY_NOT_AVAILABLE". + // + // FORM_FACTOR: The device form factor. Valid values are "PHONE" or "TABLET". + // + // MANUFACTURER: The device manufacturer. For example, "Apple". + // + // REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. + // + // REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. 
+ // + // INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. + // + // INSTANCE_LABELS: The label of the device instance. + // + // FLEET_TYPE: The fleet type. Valid values are "PUBLIC" or "PRIVATE". + // + // * Operator: The filter operator. + // + // The EQUALS operator is available for every attribute except INSTANCE_LABELS. + // + // The CONTAINS operator is available for the INSTANCE_LABELS and MODEL attributes. + // + // The IN and NOT_IN operators are available for the ARN, OS_VERSION, MODEL, + // MANUFACTURER, and INSTANCE_ARN attributes. + // + // The LESS_THAN, GREATER_THAN, LESS_THAN_OR_EQUALS, and GREATER_THAN_OR_EQUALS + // operators are also available for the OS_VERSION attribute. + // + // * Values: An array of one or more filter values. + // + // The IN and NOT operators can take a values array that has more than one element. + // + // The other operators require an array with a single element. + // + // In a request, the AVAILABILITY attribute takes "AVAILABLE", "HIGHLY_AVAILABLE", + // "BUSY", or "TEMPORARY_NOT_AVAILABLE" as values. + Filters []*DeviceFilter `locationName:"filters" type:"list"` + // An identifier that was returned from the previous call to this operation, // which can be used to return the next set of items in the list. NextToken *string `locationName:"nextToken" min:"4" type:"string"` @@ -8498,6 +11016,12 @@ func (s *ListDevicesInput) SetArn(v string) *ListDevicesInput { return s } +// SetFilters sets the Filters field's value. +func (s *ListDevicesInput) SetFilters(v []*DeviceFilter) *ListDevicesInput { + s.Filters = v + return s +} + // SetNextToken sets the NextToken field's value. func (s *ListDevicesInput) SetNextToken(v string) *ListDevicesInput { s.NextToken = &v @@ -8539,6 +11063,86 @@ func (s *ListDevicesOutput) SetNextToken(v string) *ListDevicesOutput { return s } +type ListInstanceProfilesInput struct { + _ struct{} `type:"structure"` + + // An integer specifying the maximum number of items you want to return in the + // API response. + MaxResults *int64 `locationName:"maxResults" type:"integer"` + + // An identifier that was returned from the previous call to this operation, + // which can be used to return the next set of items in the list. + NextToken *string `locationName:"nextToken" min:"4" type:"string"` +} + +// String returns the string representation +func (s ListInstanceProfilesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInstanceProfilesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListInstanceProfilesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListInstanceProfilesInput"} + if s.NextToken != nil && len(*s.NextToken) < 4 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 4)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListInstanceProfilesInput) SetMaxResults(v int64) *ListInstanceProfilesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListInstanceProfilesInput) SetNextToken(v string) *ListInstanceProfilesInput { + s.NextToken = &v + return s +} + +type ListInstanceProfilesOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about your instance profiles. 
+ InstanceProfiles []*InstanceProfile `locationName:"instanceProfiles" type:"list"` + + // An identifier that can be used in the next call to this operation to return + // the next set of items in the list. + NextToken *string `locationName:"nextToken" min:"4" type:"string"` +} + +// String returns the string representation +func (s ListInstanceProfilesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListInstanceProfilesOutput) GoString() string { + return s.String() +} + +// SetInstanceProfiles sets the InstanceProfiles field's value. +func (s *ListInstanceProfilesOutput) SetInstanceProfiles(v []*InstanceProfile) *ListInstanceProfilesOutput { + s.InstanceProfiles = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListInstanceProfilesOutput) SetNextToken(v string) *ListInstanceProfilesOutput { + s.NextToken = &v + return s +} + // Represents a request to the list jobs operation. type ListJobsInput struct { _ struct{} `type:"structure"` @@ -9216,8 +11820,7 @@ func (s *ListRunsOutput) SetRuns(v []*Run) *ListRunsOutput { type ListSamplesInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the project for which you want to list - // samples. + // The Amazon Resource Name (ARN) of the job used to list samples. // // Arn is a required field Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` @@ -9602,6 +12205,62 @@ type ListUploadsInput struct { // An identifier that was returned from the previous call to this operation, // which can be used to return the next set of items in the list. NextToken *string `locationName:"nextToken" min:"4" type:"string"` + + // The type of upload. + // + // Must be one of the following values: + // + // * ANDROID_APP: An Android upload. + // + // * IOS_APP: An iOS upload. + // + // * WEB_APP: A web appliction upload. + // + // * EXTERNAL_DATA: An external data upload. + // + // * APPIUM_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload. + // + // * APPIUM_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package + // upload. + // + // * APPIUM_PYTHON_TEST_PACKAGE: An Appium Python test package upload. + // + // * APPIUM_WEB_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package + // upload. + // + // * APPIUM_WEB_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package + // upload. + // + // * APPIUM_WEB_PYTHON_TEST_PACKAGE: An Appium Python test package upload. + // + // * CALABASH_TEST_PACKAGE: A Calabash test package upload. + // + // * INSTRUMENTATION_TEST_PACKAGE: An instrumentation upload. + // + // * UIAUTOMATION_TEST_PACKAGE: A uiautomation test package upload. + // + // * UIAUTOMATOR_TEST_PACKAGE: A uiautomator test package upload. + // + // * XCTEST_TEST_PACKAGE: An XCode test package upload. + // + // * XCTEST_UI_TEST_PACKAGE: An XCode UI test package upload. + // + // * APPIUM_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload. + // + // * APPIUM_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload. + // + // * APPIUM_PYTHON_TEST_SPEC: An Appium Python test spec upload. + // + // * APPIUM_WEB_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload. + // + // * APPIUM_WEB_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload. + // + // * APPIUM_WEB_PYTHON_TEST_SPEC: An Appium Python test spec upload. + // + // * INSTRUMENTATION_TEST_SPEC: An instrumentation test spec upload. + // + // * XCTEST_UI_TEST_SPEC: An XCode UI test spec upload. 
+ Type *string `locationName:"type" type:"string" enum:"UploadType"` } // String returns the string representation @@ -9645,6 +12304,12 @@ func (s *ListUploadsInput) SetNextToken(v string) *ListUploadsInput { return s } +// SetType sets the Type field's value. +func (s *ListUploadsInput) SetType(v string) *ListUploadsInput { + s.Type = &v + return s +} + // Represents the result of a list uploads request. type ListUploadsOutput struct { _ struct{} `type:"structure"` @@ -9669,14 +12334,95 @@ func (s ListUploadsOutput) GoString() string { } // SetNextToken sets the NextToken field's value. -func (s *ListUploadsOutput) SetNextToken(v string) *ListUploadsOutput { +func (s *ListUploadsOutput) SetNextToken(v string) *ListUploadsOutput { + s.NextToken = &v + return s +} + +// SetUploads sets the Uploads field's value. +func (s *ListUploadsOutput) SetUploads(v []*Upload) *ListUploadsOutput { + s.Uploads = v + return s +} + +type ListVPCEConfigurationsInput struct { + _ struct{} `type:"structure"` + + // An integer specifying the maximum number of items you want to return in the + // API response. + MaxResults *int64 `locationName:"maxResults" type:"integer"` + + // An identifier that was returned from the previous call to this operation, + // which can be used to return the next set of items in the list. + NextToken *string `locationName:"nextToken" min:"4" type:"string"` +} + +// String returns the string representation +func (s ListVPCEConfigurationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVPCEConfigurationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListVPCEConfigurationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListVPCEConfigurationsInput"} + if s.NextToken != nil && len(*s.NextToken) < 4 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 4)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListVPCEConfigurationsInput) SetMaxResults(v int64) *ListVPCEConfigurationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListVPCEConfigurationsInput) SetNextToken(v string) *ListVPCEConfigurationsInput { + s.NextToken = &v + return s +} + +type ListVPCEConfigurationsOutput struct { + _ struct{} `type:"structure"` + + // An identifier that was returned from the previous call to this operation, + // which can be used to return the next set of items in the list. + NextToken *string `locationName:"nextToken" min:"4" type:"string"` + + // An array of VPCEConfiguration objects containing information about your VPC + // endpoint configuration. + VpceConfigurations []*VPCEConfiguration `locationName:"vpceConfigurations" type:"list"` +} + +// String returns the string representation +func (s ListVPCEConfigurationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVPCEConfigurationsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListVPCEConfigurationsOutput) SetNextToken(v string) *ListVPCEConfigurationsOutput { s.NextToken = &v return s } -// SetUploads sets the Uploads field's value. 
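Illustrative sketch (not part of the vendored change): the optional `Type` field this diff adds to `ListUploadsInput` filters the listing by `UploadType`, for example to return only the new test-spec uploads. `Arn` is the existing required project ARN field (not shown in this hunk) and is a placeholder below; `Uploads` and `NextToken` on the output are shown above.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/devicefarm"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := devicefarm.New(sess)

	// List only Appium Python test spec uploads for a project (placeholder ARN),
	// following NextToken until the listing is exhausted.
	input := &devicefarm.ListUploadsInput{
		Arn:  aws.String("arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE-PROJECT-ID"),
		Type: aws.String("APPIUM_PYTHON_TEST_SPEC"),
	}
	for {
		page, err := svc.ListUploads(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, u := range page.Uploads {
			fmt.Println(aws.StringValue(u.Name))
		}
		if page.NextToken == nil {
			break
		}
		input.NextToken = page.NextToken
	}
}
```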
-func (s *ListUploadsOutput) SetUploads(v []*Upload) *ListUploadsOutput { - s.Uploads = v +// SetVpceConfigurations sets the VpceConfigurations field's value. +func (s *ListVPCEConfigurationsOutput) SetVpceConfigurations(v []*VPCEConfiguration) *ListVPCEConfigurationsOutput { + s.VpceConfigurations = v return s } @@ -9994,7 +12740,7 @@ type OfferingStatus struct { _ struct{} `type:"structure"` // The date on which the offering is effective. - EffectiveOn *time.Time `locationName:"effectiveOn" type:"timestamp" timestampFormat:"unix"` + EffectiveOn *time.Time `locationName:"effectiveOn" type:"timestamp"` // Represents the metadata of an offering status. Offering *Offering `locationName:"offering" type:"structure"` @@ -10048,7 +12794,7 @@ type OfferingTransaction struct { Cost *MonetaryAmount `locationName:"cost" type:"structure"` // The date on which an offering transaction was created. - CreatedOn *time.Time `locationName:"createdOn" type:"timestamp" timestampFormat:"unix"` + CreatedOn *time.Time `locationName:"createdOn" type:"timestamp"` // The ID that corresponds to a device offering promotion. OfferingPromotionId *string `locationName:"offeringPromotionId" min:"4" type:"string"` @@ -10236,7 +12982,7 @@ type Project struct { Arn *string `locationName:"arn" min:"32" type:"string"` // When the project was created. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // The default number of minutes (at the project level) a test run will execute // before it times out. Default value is 60 minutes. @@ -10464,7 +13210,7 @@ type RemoteAccessSession struct { ClientId *string `locationName:"clientId" type:"string"` // The date and time the remote access session was created. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // The device (phone or tablet) used in the remote access session. Device *Device `locationName:"device" type:"structure"` @@ -10484,6 +13230,9 @@ type RemoteAccessSession struct { // Only returned if remote debugging is enabled for the remote access session. HostAddress *string `locationName:"hostAddress" type:"string"` + // The Amazon Resource Name (ARN) of the instance. + InstanceArn *string `locationName:"instanceArn" min:"32" type:"string"` + // The interaction mode of the remote access session. Valid values are: // // * INTERACTIVE: You can interact with the iOS device by viewing, touching, @@ -10533,8 +13282,17 @@ type RemoteAccessSession struct { // * STOPPED: A stopped condition. Result *string `locationName:"result" type:"string" enum:"ExecutionResult"` + // When set to true, for private devices, Device Farm will not sign your app + // again. For public devices, Device Farm always signs your apps again and this + // parameter has no effect. + // + // For more information about how Device Farm re-signs your app(s), see Do you + // modify my app? (https://aws.amazon.com/device-farm/faq/) in the AWS Device + // Farm FAQs. + SkipAppResign *bool `locationName:"skipAppResign" type:"boolean"` + // The date and time the remote access session was started. - Started *time.Time `locationName:"started" type:"timestamp" timestampFormat:"unix"` + Started *time.Time `locationName:"started" type:"timestamp"` // The status of the remote access session. 
Can be any of the following: // @@ -10558,7 +13316,7 @@ type RemoteAccessSession struct { Status *string `locationName:"status" type:"string" enum:"ExecutionStatus"` // The date and time the remote access session was stopped. - Stopped *time.Time `locationName:"stopped" type:"timestamp" timestampFormat:"unix"` + Stopped *time.Time `locationName:"stopped" type:"timestamp"` } // String returns the string representation @@ -10625,6 +13383,12 @@ func (s *RemoteAccessSession) SetHostAddress(v string) *RemoteAccessSession { return s } +// SetInstanceArn sets the InstanceArn field's value. +func (s *RemoteAccessSession) SetInstanceArn(v string) *RemoteAccessSession { + s.InstanceArn = &v + return s +} + // SetInteractionMode sets the InteractionMode field's value. func (s *RemoteAccessSession) SetInteractionMode(v string) *RemoteAccessSession { s.InteractionMode = &v @@ -10667,6 +13431,12 @@ func (s *RemoteAccessSession) SetResult(v string) *RemoteAccessSession { return s } +// SetSkipAppResign sets the SkipAppResign field's value. +func (s *RemoteAccessSession) SetSkipAppResign(v bool) *RemoteAccessSession { + s.SkipAppResign = &v + return s +} + // SetStarted sets the Started field's value. func (s *RemoteAccessSession) SetStarted(v time.Time) *RemoteAccessSession { s.Started = &v @@ -10789,25 +13559,35 @@ func (s *Resolution) SetWidth(v int64) *Resolution { return s } -// Represents a condition for a device pool. +// Represents a condition for a device pool. It is passed in as the rules parameter +// to CreateDevicePool and UpdateDevicePool. type Rule struct { _ struct{} `type:"structure"` - // The rule's stringified attribute. For example, specify the value as "\"abc\"". + // The rule's attribute. It is the aspect of a device such as platform or model + // used as selection criteria to create or update a device pool. // // Allowed values include: // - // * ARN: The ARN. + // * ARN: The Amazon Resource Name (ARN) of a device. For example, "arn:aws:devicefarm:us-west-2::device:12345Example". // - // * FORM_FACTOR: The form factor (for example, phone or tablet). + // * PLATFORM: The device platform. Valid values are "ANDROID" or "IOS". // - // * MANUFACTURER: The manufacturer. + // * FORM_FACTOR: The device form factor. Valid values are "PHONE" or "TABLET". // - // * PLATFORM: The platform (for example, Android or iOS). + // * MANUFACTURER: The device manufacturer. For example, "Apple". // // * REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. // + // * REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. + // // * APPIUM_VERSION: The Appium version for the test. + // + // * INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. + // + // * INSTANCE_LABELS: The label of the device instance. + // + // * FLEET_TYPE: The fleet type. Valid values are "PUBLIC" or "PRIVATE". Attribute *string `locationName:"attribute" type:"string" enum:"DeviceAttribute"` // The rule's operator. @@ -10826,6 +13606,12 @@ type Rule struct { Operator *string `locationName:"operator" type:"string" enum:"RuleOperator"` // The rule's value. + // + // The value must be passed in as a string using escaped quotes. + // + // For example: + // + // "value": "\"ANDROID\"" Value *string `locationName:"value" type:"string"` } @@ -10879,7 +13665,7 @@ type Run struct { Counters *Counters `locationName:"counters" type:"structure"` // When the run was created. 
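
// Illustrative sketch (editorial note, not part of the generated SDK): constructing a Rule with
// the escaped-quote value format described above, for use in the Rules list passed to
// CreateDevicePool or UpdateDevicePool. Assumes the aws and devicefarm imports from the earlier sketch.
//
//	rule := &devicefarm.Rule{
//		Attribute: aws.String(devicefarm.DeviceAttributePlatform),
//		Operator:  aws.String(devicefarm.RuleOperatorEquals),
//		// The value is a JSON-encoded string, so the quotes are part of the value itself.
//		Value: aws.String(`"ANDROID"`),
//	}
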
- Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // Output CustomerArtifactPaths object for the test run. CustomerArtifactPaths *CustomerArtifactPaths `locationName:"customerArtifactPaths" type:"structure"` @@ -10890,6 +13676,9 @@ type Run struct { // The ARN of the device pool for the run. DevicePoolArn *string `locationName:"devicePoolArn" min:"32" type:"string"` + // The results of a device filter used to select the devices for a test run. + DeviceSelectionResult *DeviceSelectionResult `locationName:"deviceSelectionResult" type:"structure"` + // For fuzz tests, this is the number of events, between 1 and 10000, that the // UI fuzz test should perform. EventCount *int64 `locationName:"eventCount" type:"integer"` @@ -10956,8 +13745,17 @@ type Run struct { // the same seed value between tests ensures identical event sequences. Seed *int64 `locationName:"seed" type:"integer"` + // When set to true, for private devices, Device Farm will not sign your app + // again. For public devices, Device Farm always signs your apps again and this + // parameter has no effect. + // + // For more information about how Device Farm re-signs your app(s), see Do you + // modify my app? (https://aws.amazon.com/device-farm/faq/) in the AWS Device + // Farm FAQs. + SkipAppResign *bool `locationName:"skipAppResign" type:"boolean"` + // The run's start time. - Started *time.Time `locationName:"started" type:"timestamp" timestampFormat:"unix"` + Started *time.Time `locationName:"started" type:"timestamp"` // The run's status. // @@ -10983,7 +13781,10 @@ type Run struct { Status *string `locationName:"status" type:"string" enum:"ExecutionStatus"` // The run's stop time. - Stopped *time.Time `locationName:"stopped" type:"timestamp" timestampFormat:"unix"` + Stopped *time.Time `locationName:"stopped" type:"timestamp"` + + // The ARN of the YAML-formatted test specification for the run. + TestSpecArn *string `locationName:"testSpecArn" min:"32" type:"string"` // The total number of jobs for the run. TotalJobs *int64 `locationName:"totalJobs" type:"integer"` @@ -11023,8 +13824,7 @@ type Run struct { // * XCTEST_UI: The XCode UI test type. Type *string `locationName:"type" type:"string" enum:"TestType"` - // A pre-signed Amazon S3 URL that can be used with a corresponding GET request - // to download the symbol file for the run. + // The Device Farm console URL for the recording of the run. WebUrl *string `locationName:"webUrl" type:"string"` } @@ -11092,6 +13892,12 @@ func (s *Run) SetDevicePoolArn(v string) *Run { return s } +// SetDeviceSelectionResult sets the DeviceSelectionResult field's value. +func (s *Run) SetDeviceSelectionResult(v *DeviceSelectionResult) *Run { + s.DeviceSelectionResult = v + return s +} + // SetEventCount sets the EventCount field's value. func (s *Run) SetEventCount(v int64) *Run { s.EventCount = &v @@ -11170,6 +13976,12 @@ func (s *Run) SetSeed(v int64) *Run { return s } +// SetSkipAppResign sets the SkipAppResign field's value. +func (s *Run) SetSkipAppResign(v bool) *Run { + s.SkipAppResign = &v + return s +} + // SetStarted sets the Started field's value. func (s *Run) SetStarted(v time.Time) *Run { s.Started = &v @@ -11188,6 +14000,12 @@ func (s *Run) SetStopped(v time.Time) *Run { return s } +// SetTestSpecArn sets the TestSpecArn field's value. +func (s *Run) SetTestSpecArn(v string) *Run { + s.TestSpecArn = &v + return s +} + // SetTotalJobs sets the TotalJobs field's value. 
func (s *Run) SetTotalJobs(v int64) *Run { s.TotalJobs = &v @@ -11321,6 +14139,9 @@ type ScheduleRunConfiguration struct { // Information about the radio states for the run. Radios *Radios `locationName:"radios" type:"structure"` + + // An array of Amazon Resource Names (ARNs) for your VPC endpoint configurations. + VpceConfigurationArns []*string `locationName:"vpceConfigurationArns" type:"list"` } // String returns the string representation @@ -11402,6 +14223,12 @@ func (s *ScheduleRunConfiguration) SetRadios(v *Radios) *ScheduleRunConfiguratio return s } +// SetVpceConfigurationArns sets the VpceConfigurationArns field's value. +func (s *ScheduleRunConfiguration) SetVpceConfigurationArns(v []*string) *ScheduleRunConfiguration { + s.VpceConfigurationArns = v + return s +} + // Represents a request to the schedule run operation. type ScheduleRunInput struct { _ struct{} `type:"structure"` @@ -11414,8 +14241,14 @@ type ScheduleRunInput struct { // The ARN of the device pool for the run to be scheduled. // - // DevicePoolArn is a required field - DevicePoolArn *string `locationName:"devicePoolArn" min:"32" type:"string" required:"true"` + // Either devicePoolArn or deviceSelectionConfiguration are required in a request. + DevicePoolArn *string `locationName:"devicePoolArn" min:"32" type:"string"` + + // The filter criteria used to dynamically select a set of devices for a test + // run, as well as the maximum number of devices to be included in the run. + // + // Either devicePoolArn or deviceSelectionConfiguration are required in a request. + DeviceSelectionConfiguration *DeviceSelectionConfiguration `locationName:"deviceSelectionConfiguration" type:"structure"` // Specifies configuration information about a test run, such as the execution // timeout (in minutes). @@ -11451,9 +14284,6 @@ func (s *ScheduleRunInput) Validate() error { if s.AppArn != nil && len(*s.AppArn) < 32 { invalidParams.Add(request.NewErrParamMinLen("AppArn", 32)) } - if s.DevicePoolArn == nil { - invalidParams.Add(request.NewErrParamRequired("DevicePoolArn")) - } if s.DevicePoolArn != nil && len(*s.DevicePoolArn) < 32 { invalidParams.Add(request.NewErrParamMinLen("DevicePoolArn", 32)) } @@ -11471,6 +14301,11 @@ func (s *ScheduleRunInput) Validate() error { invalidParams.AddNested("Configuration", err.(request.ErrInvalidParams)) } } + if s.DeviceSelectionConfiguration != nil { + if err := s.DeviceSelectionConfiguration.Validate(); err != nil { + invalidParams.AddNested("DeviceSelectionConfiguration", err.(request.ErrInvalidParams)) + } + } if s.Test != nil { if err := s.Test.Validate(); err != nil { invalidParams.AddNested("Test", err.(request.ErrInvalidParams)) @@ -11501,6 +14336,12 @@ func (s *ScheduleRunInput) SetDevicePoolArn(v string) *ScheduleRunInput { return s } +// SetDeviceSelectionConfiguration sets the DeviceSelectionConfiguration field's value. +func (s *ScheduleRunInput) SetDeviceSelectionConfiguration(v *DeviceSelectionConfiguration) *ScheduleRunInput { + s.DeviceSelectionConfiguration = v + return s +} + // SetExecutionConfiguration sets the ExecutionConfiguration field's value. func (s *ScheduleRunInput) SetExecutionConfiguration(v *ExecutionConfiguration) *ScheduleRunInput { s.ExecutionConfiguration = v @@ -11549,15 +14390,22 @@ func (s *ScheduleRunOutput) SetRun(v *Run) *ScheduleRunOutput { return s } -// Represents additional test settings. +// Represents test settings. This data structure is passed in as the "test" +// parameter to ScheduleRun. 
For an example of the JSON request syntax, see +// ScheduleRun. type ScheduleRunTest struct { _ struct{} `type:"structure"` // The test's filter. Filter *string `locationName:"filter" type:"string"` - // The test's parameters, such as the following test framework parameters and - // fixture settings: + // The test's parameters, such as test framework parameters and fixture settings. + // Parameters are represented by name-value pairs of strings. + // + // For all tests: + // + // * app_performance_monitoring: Performance monitoring is enabled by default. + // Set this parameter to "false" to disable it. // // For Calabash tests: // @@ -11568,14 +14416,14 @@ type ScheduleRunTest struct { // // For Appium tests (all types): // - // * appium_version: The Appium version. Currently supported values are "1.4.16", - // "1.6.3", "latest", and "default". + // * appium_version: The Appium version. Currently supported values are "1.7.2", + // "1.7.1", "1.6.5", "latest", and "default". // - // “latest” will run the latest Appium version supported by Device Farm (1.6.3). + // “latest” will run the latest Appium version supported by Device Farm (1.7.2). // // For “default”, Device Farm will choose a compatible version of Appium for - // the device. The current behavior is to run 1.4.16 on Android devices and - // iOS 9 and earlier, 1.6.3 for iOS 10 and later. + // the device. The current behavior is to run 1.7.2 on Android devices and + // iOS 9 and earlier, 1.7.2 for iOS 10 and later. // // This behavior is subject to change. // @@ -11634,6 +14482,9 @@ type ScheduleRunTest struct { // The ARN of the uploaded test that will be run. TestPackageArn *string `locationName:"testPackageArn" min:"32" type:"string"` + // The ARN of the YAML-formatted test specification. + TestSpecArn *string `locationName:"testSpecArn" min:"32" type:"string"` + // The test's type. // // Must be one of the following values: @@ -11688,6 +14539,9 @@ func (s *ScheduleRunTest) Validate() error { if s.TestPackageArn != nil && len(*s.TestPackageArn) < 32 { invalidParams.Add(request.NewErrParamMinLen("TestPackageArn", 32)) } + if s.TestSpecArn != nil && len(*s.TestSpecArn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("TestSpecArn", 32)) + } if s.Type == nil { invalidParams.Add(request.NewErrParamRequired("Type")) } @@ -11716,12 +14570,83 @@ func (s *ScheduleRunTest) SetTestPackageArn(v string) *ScheduleRunTest { return s } +// SetTestSpecArn sets the TestSpecArn field's value. +func (s *ScheduleRunTest) SetTestSpecArn(v string) *ScheduleRunTest { + s.TestSpecArn = &v + return s +} + // SetType sets the Type field's value. func (s *ScheduleRunTest) SetType(v string) *ScheduleRunTest { s.Type = &v return s } +type StopJobInput struct { + _ struct{} `type:"structure"` + + // Represents the Amazon Resource Name (ARN) of the Device Farm job you wish + // to stop. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` +} + +// String returns the string representation +func (s StopJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
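
// Illustrative sketch (editorial note, not part of the generated SDK): scheduling a run whose
// test uses the new testSpecArn field to point at a custom-environment YAML test spec. All ARNs
// are placeholders, and svc is assumed to be a *devicefarm.DeviceFarm client built as in the
// first sketch.
//
//	func scheduleRunWithTestSpec(svc *devicefarm.DeviceFarm,
//		projectArn, appArn, devicePoolArn, testPkgArn, testSpecArn string) (*devicefarm.Run, error) {
//		out, err := svc.ScheduleRun(&devicefarm.ScheduleRunInput{
//			ProjectArn:    aws.String(projectArn),
//			AppArn:        aws.String(appArn),
//			DevicePoolArn: aws.String(devicePoolArn), // a deviceSelectionConfiguration may be supplied instead
//			Test: &devicefarm.ScheduleRunTest{
//				Type:           aws.String(devicefarm.TestTypeAppiumJavaJunit),
//				TestPackageArn: aws.String(testPkgArn),
//				TestSpecArn:    aws.String(testSpecArn),
//			},
//		})
//		if err != nil {
//			return nil, err
//		}
//		return out.Run, nil
//	}
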
+func (s *StopJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopJobInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *StopJobInput) SetArn(v string) *StopJobInput { + s.Arn = &v + return s +} + +type StopJobOutput struct { + _ struct{} `type:"structure"` + + // The job that was stopped. + Job *Job `locationName:"job" type:"structure"` +} + +// String returns the string representation +func (s StopJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopJobOutput) GoString() string { + return s.String() +} + +// SetJob sets the Job field's value. +func (s *StopJobOutput) SetJob(v *Job) *StopJobOutput { + s.Job = v + return s +} + // Represents the request to stop the remote access session. type StopRemoteAccessSessionInput struct { _ struct{} `type:"structure"` @@ -11868,7 +14793,7 @@ type Suite struct { Counters *Counters `locationName:"counters" type:"structure"` // When the suite was created. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // Represents the total (metered or unmetered) minutes used by the test suite. DeviceMinutes *DeviceMinutes `locationName:"deviceMinutes" type:"structure"` @@ -11899,7 +14824,7 @@ type Suite struct { Result *string `locationName:"result" type:"string" enum:"ExecutionResult"` // The suite's start time. - Started *time.Time `locationName:"started" type:"timestamp" timestampFormat:"unix"` + Started *time.Time `locationName:"started" type:"timestamp"` // The suite's status. // @@ -11925,7 +14850,7 @@ type Suite struct { Status *string `locationName:"status" type:"string" enum:"ExecutionStatus"` // The suite's stop time. - Stopped *time.Time `locationName:"stopped" type:"timestamp" timestampFormat:"unix"` + Stopped *time.Time `locationName:"stopped" type:"timestamp"` // The suite's type. // @@ -12050,7 +14975,7 @@ type Test struct { Counters *Counters `locationName:"counters" type:"structure"` // When the test was created. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // Represents the total (metered or unmetered) minutes used by the test. DeviceMinutes *DeviceMinutes `locationName:"deviceMinutes" type:"structure"` @@ -12081,7 +15006,7 @@ type Test struct { Result *string `locationName:"result" type:"string" enum:"ExecutionResult"` // The test's start time. - Started *time.Time `locationName:"started" type:"timestamp" timestampFormat:"unix"` + Started *time.Time `locationName:"started" type:"timestamp"` // The test's status. // @@ -12107,7 +15032,7 @@ type Test struct { Status *string `locationName:"status" type:"string" enum:"ExecutionStatus"` // The test's stop time. - Stopped *time.Time `locationName:"stopped" type:"timestamp" timestampFormat:"unix"` + Stopped *time.Time `locationName:"stopped" type:"timestamp"` // The test's type. // @@ -12287,6 +15212,92 @@ func (s *UniqueProblem) SetProblems(v []*Problem) *UniqueProblem { return s } +type UpdateDeviceInstanceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the device instance. 
+ // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` + + // An array of strings that you want to associate with the device instance. + Labels []*string `locationName:"labels" type:"list"` + + // The Amazon Resource Name (ARN) of the profile that you want to associate + // with the device instance. + ProfileArn *string `locationName:"profileArn" min:"32" type:"string"` +} + +// String returns the string representation +func (s UpdateDeviceInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateDeviceInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateDeviceInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateDeviceInstanceInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + if s.ProfileArn != nil && len(*s.ProfileArn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ProfileArn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *UpdateDeviceInstanceInput) SetArn(v string) *UpdateDeviceInstanceInput { + s.Arn = &v + return s +} + +// SetLabels sets the Labels field's value. +func (s *UpdateDeviceInstanceInput) SetLabels(v []*string) *UpdateDeviceInstanceInput { + s.Labels = v + return s +} + +// SetProfileArn sets the ProfileArn field's value. +func (s *UpdateDeviceInstanceInput) SetProfileArn(v string) *UpdateDeviceInstanceInput { + s.ProfileArn = &v + return s +} + +type UpdateDeviceInstanceOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about your device instance. + DeviceInstance *DeviceInstance `locationName:"deviceInstance" type:"structure"` +} + +// String returns the string representation +func (s UpdateDeviceInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateDeviceInstanceOutput) GoString() string { + return s.String() +} + +// SetDeviceInstance sets the DeviceInstance field's value. +func (s *UpdateDeviceInstanceOutput) SetDeviceInstance(v *DeviceInstance) *UpdateDeviceInstanceOutput { + s.DeviceInstance = v + return s +} + // Represents a request to the update device pool operation. type UpdateDevicePoolInput struct { _ struct{} `type:"structure"` @@ -12342,44 +15353,158 @@ func (s *UpdateDevicePoolInput) SetArn(v string) *UpdateDevicePoolInput { } // SetDescription sets the Description field's value. -func (s *UpdateDevicePoolInput) SetDescription(v string) *UpdateDevicePoolInput { +func (s *UpdateDevicePoolInput) SetDescription(v string) *UpdateDevicePoolInput { + s.Description = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateDevicePoolInput) SetName(v string) *UpdateDevicePoolInput { + s.Name = &v + return s +} + +// SetRules sets the Rules field's value. +func (s *UpdateDevicePoolInput) SetRules(v []*Rule) *UpdateDevicePoolInput { + s.Rules = v + return s +} + +// Represents the result of an update device pool request. +type UpdateDevicePoolOutput struct { + _ struct{} `type:"structure"` + + // The device pool you just updated. 
+ DevicePool *DevicePool `locationName:"devicePool" type:"structure"` +} + +// String returns the string representation +func (s UpdateDevicePoolOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateDevicePoolOutput) GoString() string { + return s.String() +} + +// SetDevicePool sets the DevicePool field's value. +func (s *UpdateDevicePoolOutput) SetDevicePool(v *DevicePool) *UpdateDevicePoolOutput { + s.DevicePool = v + return s +} + +type UpdateInstanceProfileInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the instance profile. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` + + // The updated description for your instance profile. + Description *string `locationName:"description" type:"string"` + + // An array of strings specifying the list of app packages that should not be + // cleaned up from the device after a test run is over. + // + // The list of packages is only considered if you set packageCleanup to true. + ExcludeAppPackagesFromCleanup []*string `locationName:"excludeAppPackagesFromCleanup" type:"list"` + + // The updated name for your instance profile. + Name *string `locationName:"name" type:"string"` + + // The updated choice for whether you want to specify package cleanup. The default + // value is false for private devices. + PackageCleanup *bool `locationName:"packageCleanup" type:"boolean"` + + // The updated choice for whether you want to reboot the device after use. The + // default value is true. + RebootAfterUse *bool `locationName:"rebootAfterUse" type:"boolean"` +} + +// String returns the string representation +func (s UpdateInstanceProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateInstanceProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateInstanceProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateInstanceProfileInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *UpdateInstanceProfileInput) SetArn(v string) *UpdateInstanceProfileInput { + s.Arn = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateInstanceProfileInput) SetDescription(v string) *UpdateInstanceProfileInput { s.Description = &v return s } +// SetExcludeAppPackagesFromCleanup sets the ExcludeAppPackagesFromCleanup field's value. +func (s *UpdateInstanceProfileInput) SetExcludeAppPackagesFromCleanup(v []*string) *UpdateInstanceProfileInput { + s.ExcludeAppPackagesFromCleanup = v + return s +} + // SetName sets the Name field's value. -func (s *UpdateDevicePoolInput) SetName(v string) *UpdateDevicePoolInput { +func (s *UpdateInstanceProfileInput) SetName(v string) *UpdateInstanceProfileInput { s.Name = &v return s } -// SetRules sets the Rules field's value. -func (s *UpdateDevicePoolInput) SetRules(v []*Rule) *UpdateDevicePoolInput { - s.Rules = v +// SetPackageCleanup sets the PackageCleanup field's value. 
+func (s *UpdateInstanceProfileInput) SetPackageCleanup(v bool) *UpdateInstanceProfileInput { + s.PackageCleanup = &v return s } -// Represents the result of an update device pool request. -type UpdateDevicePoolOutput struct { +// SetRebootAfterUse sets the RebootAfterUse field's value. +func (s *UpdateInstanceProfileInput) SetRebootAfterUse(v bool) *UpdateInstanceProfileInput { + s.RebootAfterUse = &v + return s +} + +type UpdateInstanceProfileOutput struct { _ struct{} `type:"structure"` - // The device pool you just updated. - DevicePool *DevicePool `locationName:"devicePool" type:"structure"` + // An object containing information about your instance profile. + InstanceProfile *InstanceProfile `locationName:"instanceProfile" type:"structure"` } // String returns the string representation -func (s UpdateDevicePoolOutput) String() string { +func (s UpdateInstanceProfileOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateDevicePoolOutput) GoString() string { +func (s UpdateInstanceProfileOutput) GoString() string { return s.String() } -// SetDevicePool sets the DevicePool field's value. -func (s *UpdateDevicePoolOutput) SetDevicePool(v *DevicePool) *UpdateDevicePoolOutput { - s.DevicePool = v +// SetInstanceProfile sets the InstanceProfile field's value. +func (s *UpdateInstanceProfileOutput) SetInstanceProfile(v *InstanceProfile) *UpdateInstanceProfileOutput { + s.InstanceProfile = v return s } @@ -12637,6 +15762,203 @@ func (s *UpdateProjectOutput) SetProject(v *Project) *UpdateProjectOutput { return s } +type UpdateUploadInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the uploaded test spec. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` + + // The upload's content type (for example, "application/x-yaml"). + ContentType *string `locationName:"contentType" type:"string"` + + // Set to true if the YAML file has changed and needs to be updated; otherwise, + // set to false. + EditContent *bool `locationName:"editContent" type:"boolean"` + + // The upload's test spec file name. The name should not contain the '/' character. + // The test spec file name must end with the .yaml or .yml file extension. + Name *string `locationName:"name" type:"string"` +} + +// String returns the string representation +func (s UpdateUploadInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateUploadInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateUploadInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateUploadInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *UpdateUploadInput) SetArn(v string) *UpdateUploadInput { + s.Arn = &v + return s +} + +// SetContentType sets the ContentType field's value. +func (s *UpdateUploadInput) SetContentType(v string) *UpdateUploadInput { + s.ContentType = &v + return s +} + +// SetEditContent sets the EditContent field's value. 
+func (s *UpdateUploadInput) SetEditContent(v bool) *UpdateUploadInput { + s.EditContent = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateUploadInput) SetName(v string) *UpdateUploadInput { + s.Name = &v + return s +} + +type UpdateUploadOutput struct { + _ struct{} `type:"structure"` + + // A test spec uploaded to Device Farm. + Upload *Upload `locationName:"upload" type:"structure"` +} + +// String returns the string representation +func (s UpdateUploadOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateUploadOutput) GoString() string { + return s.String() +} + +// SetUpload sets the Upload field's value. +func (s *UpdateUploadOutput) SetUpload(v *Upload) *UpdateUploadOutput { + s.Upload = v + return s +} + +type UpdateVPCEConfigurationInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the VPC endpoint configuration you want + // to update. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"32" type:"string" required:"true"` + + // The DNS (domain) name used to connect to your private service in your Amazon + // VPC. The DNS name must not already be in use on the Internet. + ServiceDnsName *string `locationName:"serviceDnsName" type:"string"` + + // An optional description, providing more details about your VPC endpoint configuration. + VpceConfigurationDescription *string `locationName:"vpceConfigurationDescription" type:"string"` + + // The friendly name you give to your VPC endpoint configuration, to manage + // your configurations more easily. + VpceConfigurationName *string `locationName:"vpceConfigurationName" type:"string"` + + // The name of the VPC endpoint service running inside your AWS account that + // you want Device Farm to test. + VpceServiceName *string `locationName:"vpceServiceName" type:"string"` +} + +// String returns the string representation +func (s UpdateVPCEConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateVPCEConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateVPCEConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateVPCEConfigurationInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Arn != nil && len(*s.Arn) < 32 { + invalidParams.Add(request.NewErrParamMinLen("Arn", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *UpdateVPCEConfigurationInput) SetArn(v string) *UpdateVPCEConfigurationInput { + s.Arn = &v + return s +} + +// SetServiceDnsName sets the ServiceDnsName field's value. +func (s *UpdateVPCEConfigurationInput) SetServiceDnsName(v string) *UpdateVPCEConfigurationInput { + s.ServiceDnsName = &v + return s +} + +// SetVpceConfigurationDescription sets the VpceConfigurationDescription field's value. +func (s *UpdateVPCEConfigurationInput) SetVpceConfigurationDescription(v string) *UpdateVPCEConfigurationInput { + s.VpceConfigurationDescription = &v + return s +} + +// SetVpceConfigurationName sets the VpceConfigurationName field's value. 
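
// Illustrative sketch (editorial note, not part of the generated SDK): updating an existing test
// spec upload after editing its YAML locally. The returned Upload carries a pre-signed S3 URL
// (its Url field) used to store the file through a corresponding PUT request. The ARN is a
// placeholder; svc is assumed to be a *devicefarm.DeviceFarm client as in the earlier sketches.
//
//	func replaceTestSpec(svc *devicefarm.DeviceFarm, specArn string) (*devicefarm.Upload, error) {
//		out, err := svc.UpdateUpload(&devicefarm.UpdateUploadInput{
//			Arn:         aws.String(specArn),
//			Name:        aws.String("custom-test-spec.yml"), // must end in .yaml or .yml
//			ContentType: aws.String("application/x-yaml"),
//			EditContent: aws.Bool(true), // true because the YAML content has changed
//		})
//		if err != nil {
//			return nil, err
//		}
//		return out.Upload, nil
//	}
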
+func (s *UpdateVPCEConfigurationInput) SetVpceConfigurationName(v string) *UpdateVPCEConfigurationInput { + s.VpceConfigurationName = &v + return s +} + +// SetVpceServiceName sets the VpceServiceName field's value. +func (s *UpdateVPCEConfigurationInput) SetVpceServiceName(v string) *UpdateVPCEConfigurationInput { + s.VpceServiceName = &v + return s +} + +type UpdateVPCEConfigurationOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about your VPC endpoint configuration. + VpceConfiguration *VPCEConfiguration `locationName:"vpceConfiguration" type:"structure"` +} + +// String returns the string representation +func (s UpdateVPCEConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateVPCEConfigurationOutput) GoString() string { + return s.String() +} + +// SetVpceConfiguration sets the VpceConfiguration field's value. +func (s *UpdateVPCEConfigurationOutput) SetVpceConfiguration(v *VPCEConfiguration) *UpdateVPCEConfigurationOutput { + s.VpceConfiguration = v + return s +} + // An app or a set of one or more tests to upload or that have been uploaded. type Upload struct { _ struct{} `type:"structure"` @@ -12644,11 +15966,18 @@ type Upload struct { // The upload's ARN. Arn *string `locationName:"arn" min:"32" type:"string"` + // The upload's category. Allowed values include: + // + // * CURATED: An upload managed by AWS Device Farm. + // + // * PRIVATE: An upload managed by the AWS Device Farm customer. + Category *string `locationName:"category" type:"string" enum:"UploadCategory"` + // The upload's content type (for example, "application/octet-stream"). ContentType *string `locationName:"contentType" type:"string"` // When the upload was created. - Created *time.Time `locationName:"created" type:"timestamp" timestampFormat:"unix"` + Created *time.Time `locationName:"created" type:"timestamp"` // A message about the upload's result. Message *string `locationName:"message" type:"string"` @@ -12712,6 +16041,22 @@ type Upload struct { // * XCTEST_TEST_PACKAGE: An XCode test package upload. // // * XCTEST_UI_TEST_PACKAGE: An XCode UI test package upload. + // + // * APPIUM_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload. + // + // * APPIUM_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload. + // + // * APPIUM_PYTHON_TEST_SPEC: An Appium Python test spec upload. + // + // * APPIUM_WEB_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload. + // + // * APPIUM_WEB_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload. + // + // * APPIUM_WEB_PYTHON_TEST_SPEC: An Appium Python test spec upload. + // + // * INSTRUMENTATION_TEST_SPEC: An instrumentation test spec upload. + // + // * XCTEST_UI_TEST_SPEC: An XCode UI test spec upload. Type *string `locationName:"type" type:"string" enum:"UploadType"` // The pre-signed Amazon S3 URL that was used to store a file through a corresponding @@ -12735,6 +16080,12 @@ func (s *Upload) SetArn(v string) *Upload { return s } +// SetCategory sets the Category field's value. +func (s *Upload) SetCategory(v string) *Upload { + s.Category = &v + return s +} + // SetContentType sets the ContentType field's value. func (s *Upload) SetContentType(v string) *Upload { s.ContentType = &v @@ -12783,6 +16134,69 @@ func (s *Upload) SetUrl(v string) *Upload { return s } +// Represents an Amazon Virtual Private Cloud (VPC) endpoint configuration. 
+type VPCEConfiguration struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the VPC endpoint configuration. + Arn *string `locationName:"arn" min:"32" type:"string"` + + // The DNS name that maps to the private IP address of the service you want + // to access. + ServiceDnsName *string `locationName:"serviceDnsName" type:"string"` + + // An optional description, providing more details about your VPC endpoint configuration. + VpceConfigurationDescription *string `locationName:"vpceConfigurationDescription" type:"string"` + + // The friendly name you give to your VPC endpoint configuration, to manage + // your configurations more easily. + VpceConfigurationName *string `locationName:"vpceConfigurationName" type:"string"` + + // The name of the VPC endpoint service running inside your AWS account that + // you want Device Farm to test. + VpceServiceName *string `locationName:"vpceServiceName" type:"string"` +} + +// String returns the string representation +func (s VPCEConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VPCEConfiguration) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *VPCEConfiguration) SetArn(v string) *VPCEConfiguration { + s.Arn = &v + return s +} + +// SetServiceDnsName sets the ServiceDnsName field's value. +func (s *VPCEConfiguration) SetServiceDnsName(v string) *VPCEConfiguration { + s.ServiceDnsName = &v + return s +} + +// SetVpceConfigurationDescription sets the VpceConfigurationDescription field's value. +func (s *VPCEConfiguration) SetVpceConfigurationDescription(v string) *VPCEConfiguration { + s.VpceConfigurationDescription = &v + return s +} + +// SetVpceConfigurationName sets the VpceConfigurationName field's value. +func (s *VPCEConfiguration) SetVpceConfigurationName(v string) *VPCEConfiguration { + s.VpceConfigurationName = &v + return s +} + +// SetVpceServiceName sets the VpceServiceName field's value. 
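
// Illustrative sketch (editorial note, not part of the generated SDK): listing VPC endpoint
// configurations and passing their ARNs to a run through ScheduleRunConfiguration's
// vpceConfigurationArns field. Assumes the same imports and client as the earlier sketches.
//
//	func runConfigWithVPCE(svc *devicefarm.DeviceFarm) (*devicefarm.ScheduleRunConfiguration, error) {
//		out, err := svc.ListVPCEConfigurations(&devicefarm.ListVPCEConfigurationsInput{
//			MaxResults: aws.Int64(10),
//		})
//		if err != nil {
//			return nil, err
//		}
//		arns := make([]*string, 0, len(out.VpceConfigurations))
//		for _, cfg := range out.VpceConfigurations {
//			arns = append(arns, cfg.Arn)
//		}
//		return &devicefarm.ScheduleRunConfiguration{VpceConfigurationArns: arns}, nil
//	}
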
+func (s *VPCEConfiguration) SetVpceServiceName(v string) *VPCEConfiguration { + s.VpceServiceName = &v + return s +} + const ( // ArtifactCategoryScreenshot is a ArtifactCategory enum value ArtifactCategoryScreenshot = "SCREENSHOT" @@ -12875,6 +16289,9 @@ const ( // ArtifactTypeCustomerArtifactLog is a ArtifactType enum value ArtifactTypeCustomerArtifactLog = "CUSTOMER_ARTIFACT_LOG" + + // ArtifactTypeTestspecOutput is a ArtifactType enum value + ArtifactTypeTestspecOutput = "TESTSPEC_OUTPUT" ) const ( @@ -12911,6 +16328,93 @@ const ( // DeviceAttributeAppiumVersion is a DeviceAttribute enum value DeviceAttributeAppiumVersion = "APPIUM_VERSION" + + // DeviceAttributeInstanceArn is a DeviceAttribute enum value + DeviceAttributeInstanceArn = "INSTANCE_ARN" + + // DeviceAttributeInstanceLabels is a DeviceAttribute enum value + DeviceAttributeInstanceLabels = "INSTANCE_LABELS" + + // DeviceAttributeFleetType is a DeviceAttribute enum value + DeviceAttributeFleetType = "FLEET_TYPE" +) + +const ( + // DeviceAvailabilityTemporaryNotAvailable is a DeviceAvailability enum value + DeviceAvailabilityTemporaryNotAvailable = "TEMPORARY_NOT_AVAILABLE" + + // DeviceAvailabilityBusy is a DeviceAvailability enum value + DeviceAvailabilityBusy = "BUSY" + + // DeviceAvailabilityAvailable is a DeviceAvailability enum value + DeviceAvailabilityAvailable = "AVAILABLE" + + // DeviceAvailabilityHighlyAvailable is a DeviceAvailability enum value + DeviceAvailabilityHighlyAvailable = "HIGHLY_AVAILABLE" +) + +const ( + // DeviceFilterAttributeArn is a DeviceFilterAttribute enum value + DeviceFilterAttributeArn = "ARN" + + // DeviceFilterAttributePlatform is a DeviceFilterAttribute enum value + DeviceFilterAttributePlatform = "PLATFORM" + + // DeviceFilterAttributeOsVersion is a DeviceFilterAttribute enum value + DeviceFilterAttributeOsVersion = "OS_VERSION" + + // DeviceFilterAttributeModel is a DeviceFilterAttribute enum value + DeviceFilterAttributeModel = "MODEL" + + // DeviceFilterAttributeAvailability is a DeviceFilterAttribute enum value + DeviceFilterAttributeAvailability = "AVAILABILITY" + + // DeviceFilterAttributeFormFactor is a DeviceFilterAttribute enum value + DeviceFilterAttributeFormFactor = "FORM_FACTOR" + + // DeviceFilterAttributeManufacturer is a DeviceFilterAttribute enum value + DeviceFilterAttributeManufacturer = "MANUFACTURER" + + // DeviceFilterAttributeRemoteAccessEnabled is a DeviceFilterAttribute enum value + DeviceFilterAttributeRemoteAccessEnabled = "REMOTE_ACCESS_ENABLED" + + // DeviceFilterAttributeRemoteDebugEnabled is a DeviceFilterAttribute enum value + DeviceFilterAttributeRemoteDebugEnabled = "REMOTE_DEBUG_ENABLED" + + // DeviceFilterAttributeInstanceArn is a DeviceFilterAttribute enum value + DeviceFilterAttributeInstanceArn = "INSTANCE_ARN" + + // DeviceFilterAttributeInstanceLabels is a DeviceFilterAttribute enum value + DeviceFilterAttributeInstanceLabels = "INSTANCE_LABELS" + + // DeviceFilterAttributeFleetType is a DeviceFilterAttribute enum value + DeviceFilterAttributeFleetType = "FLEET_TYPE" +) + +const ( + // DeviceFilterOperatorEquals is a DeviceFilterOperator enum value + DeviceFilterOperatorEquals = "EQUALS" + + // DeviceFilterOperatorLessThan is a DeviceFilterOperator enum value + DeviceFilterOperatorLessThan = "LESS_THAN" + + // DeviceFilterOperatorLessThanOrEquals is a DeviceFilterOperator enum value + DeviceFilterOperatorLessThanOrEquals = "LESS_THAN_OR_EQUALS" + + // DeviceFilterOperatorGreaterThan is a DeviceFilterOperator enum value + 
DeviceFilterOperatorGreaterThan = "GREATER_THAN" + + // DeviceFilterOperatorGreaterThanOrEquals is a DeviceFilterOperator enum value + DeviceFilterOperatorGreaterThanOrEquals = "GREATER_THAN_OR_EQUALS" + + // DeviceFilterOperatorIn is a DeviceFilterOperator enum value + DeviceFilterOperatorIn = "IN" + + // DeviceFilterOperatorNotIn is a DeviceFilterOperator enum value + DeviceFilterOperatorNotIn = "NOT_IN" + + // DeviceFilterOperatorContains is a DeviceFilterOperator enum value + DeviceFilterOperatorContains = "CONTAINS" ) const ( @@ -12963,6 +16467,9 @@ const ( const ( // ExecutionResultCodeParsingFailed is a ExecutionResultCode enum value ExecutionResultCodeParsingFailed = "PARSING_FAILED" + + // ExecutionResultCodeVpcEndpointSetupFailed is a ExecutionResultCode enum value + ExecutionResultCodeVpcEndpointSetupFailed = "VPC_ENDPOINT_SETUP_FAILED" ) const ( @@ -12994,6 +16501,20 @@ const ( ExecutionStatusStopping = "STOPPING" ) +const ( + // InstanceStatusInUse is a InstanceStatus enum value + InstanceStatusInUse = "IN_USE" + + // InstanceStatusPreparing is a InstanceStatus enum value + InstanceStatusPreparing = "PREPARING" + + // InstanceStatusAvailable is a InstanceStatus enum value + InstanceStatusAvailable = "AVAILABLE" + + // InstanceStatusNotAvailable is a InstanceStatus enum value + InstanceStatusNotAvailable = "NOT_AVAILABLE" +) + const ( // InteractionModeInteractive is a InteractionMode enum value InteractionModeInteractive = "INTERACTIVE" @@ -13160,6 +16681,14 @@ const ( TestTypeRemoteAccessReplay = "REMOTE_ACCESS_REPLAY" ) +const ( + // UploadCategoryCurated is a UploadCategory enum value + UploadCategoryCurated = "CURATED" + + // UploadCategoryPrivate is a UploadCategory enum value + UploadCategoryPrivate = "PRIVATE" +) + const ( // UploadStatusInitialized is a UploadStatus enum value UploadStatusInitialized = "INITIALIZED" @@ -13222,4 +16751,28 @@ const ( // UploadTypeXctestUiTestPackage is a UploadType enum value UploadTypeXctestUiTestPackage = "XCTEST_UI_TEST_PACKAGE" + + // UploadTypeAppiumJavaJunitTestSpec is a UploadType enum value + UploadTypeAppiumJavaJunitTestSpec = "APPIUM_JAVA_JUNIT_TEST_SPEC" + + // UploadTypeAppiumJavaTestngTestSpec is a UploadType enum value + UploadTypeAppiumJavaTestngTestSpec = "APPIUM_JAVA_TESTNG_TEST_SPEC" + + // UploadTypeAppiumPythonTestSpec is a UploadType enum value + UploadTypeAppiumPythonTestSpec = "APPIUM_PYTHON_TEST_SPEC" + + // UploadTypeAppiumWebJavaJunitTestSpec is a UploadType enum value + UploadTypeAppiumWebJavaJunitTestSpec = "APPIUM_WEB_JAVA_JUNIT_TEST_SPEC" + + // UploadTypeAppiumWebJavaTestngTestSpec is a UploadType enum value + UploadTypeAppiumWebJavaTestngTestSpec = "APPIUM_WEB_JAVA_TESTNG_TEST_SPEC" + + // UploadTypeAppiumWebPythonTestSpec is a UploadType enum value + UploadTypeAppiumWebPythonTestSpec = "APPIUM_WEB_PYTHON_TEST_SPEC" + + // UploadTypeInstrumentationTestSpec is a UploadType enum value + UploadTypeInstrumentationTestSpec = "INSTRUMENTATION_TEST_SPEC" + + // UploadTypeXctestUiTestSpec is a UploadType enum value + UploadTypeXctestUiTestSpec = "XCTEST_UI_TEST_SPEC" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/devicefarm/errors.go b/vendor/github.com/aws/aws-sdk-go/service/devicefarm/errors.go index bcb0d460108..2d9e5abdfbf 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/devicefarm/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/devicefarm/errors.go @@ -16,6 +16,13 @@ const ( // An entity with the same name already exists. 
ErrCodeIdempotencyException = "IdempotencyException" + // ErrCodeInvalidOperationException for service response error code + // "InvalidOperationException". + // + // There was an error with the update request, or you do not have sufficient + // permissions to update this VPC endpoint configuration. + ErrCodeInvalidOperationException = "InvalidOperationException" + // ErrCodeLimitExceededException for service response error code // "LimitExceededException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/devicefarm/service.go b/vendor/github.com/aws/aws-sdk-go/service/devicefarm/service.go index 76cf7ca614a..0f2354a4e1c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/devicefarm/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/devicefarm/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "devicefarm" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "devicefarm" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Device Farm" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the DeviceFarm client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/directconnect/api.go b/vendor/github.com/aws/aws-sdk-go/service/directconnect/api.go index a4843870857..212dfa91a58 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/directconnect/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/directconnect/api.go @@ -15,8 +15,8 @@ const opAllocateConnectionOnInterconnect = "AllocateConnectionOnInterconnect" // AllocateConnectionOnInterconnectRequest generates a "aws/request.Request" representing the // client's request for the AllocateConnectionOnInterconnect operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -37,6 +37,8 @@ const opAllocateConnectionOnInterconnect = "AllocateConnectionOnInterconnect" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/AllocateConnectionOnInterconnect +// +// Deprecated: AllocateConnectionOnInterconnect has been deprecated func (c *DirectConnect) AllocateConnectionOnInterconnectRequest(input *AllocateConnectionOnInterconnectInput) (req *request.Request, output *Connection) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, AllocateConnectionOnInterconnect, has been deprecated") @@ -58,14 +60,14 @@ func (c *DirectConnect) AllocateConnectionOnInterconnectRequest(input *AllocateC // AllocateConnectionOnInterconnect API operation for AWS Direct Connect. // -// Deprecated in favor of AllocateHostedConnection. +// Deprecated. Use AllocateHostedConnection instead. // // Creates a hosted connection on an interconnect. 
// // Allocates a VLAN number and a specified amount of bandwidth for use by a -// hosted connection on the given interconnect. +// hosted connection on the specified interconnect. // -// This is intended for use by AWS Direct Connect partners only. +// Intended for use by AWS Direct Connect partners only. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -75,15 +77,15 @@ func (c *DirectConnect) AllocateConnectionOnInterconnectRequest(input *AllocateC // API operation AllocateConnectionOnInterconnect for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/AllocateConnectionOnInterconnect +// +// Deprecated: AllocateConnectionOnInterconnect has been deprecated func (c *DirectConnect) AllocateConnectionOnInterconnect(input *AllocateConnectionOnInterconnectInput) (*Connection, error) { req, out := c.AllocateConnectionOnInterconnectRequest(input) return out, req.Send() @@ -98,6 +100,8 @@ func (c *DirectConnect) AllocateConnectionOnInterconnect(input *AllocateConnecti // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: AllocateConnectionOnInterconnectWithContext has been deprecated func (c *DirectConnect) AllocateConnectionOnInterconnectWithContext(ctx aws.Context, input *AllocateConnectionOnInterconnectInput, opts ...request.Option) (*Connection, error) { req, out := c.AllocateConnectionOnInterconnectRequest(input) req.SetContext(ctx) @@ -109,8 +113,8 @@ const opAllocateHostedConnection = "AllocateHostedConnection" // AllocateHostedConnectionRequest generates a "aws/request.Request" representing the // client's request for the AllocateHostedConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -149,13 +153,13 @@ func (c *DirectConnect) AllocateHostedConnectionRequest(input *AllocateHostedCon // AllocateHostedConnection API operation for AWS Direct Connect. // -// Creates a hosted connection on an interconnect or a link aggregation group -// (LAG). +// Creates a hosted connection on the specified interconnect or a link aggregation +// group (LAG). // // Allocates a VLAN number and a specified amount of bandwidth for use by a -// hosted connection on the given interconnect or LAG. +// hosted connection on the specified interconnect or LAG. // -// This is intended for use by AWS Direct Connect partners only. 
+// Intended for use by AWS Direct Connect partners only. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -165,13 +169,11 @@ func (c *DirectConnect) AllocateHostedConnectionRequest(input *AllocateHostedCon // API operation AllocateHostedConnection for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/AllocateHostedConnection func (c *DirectConnect) AllocateHostedConnection(input *AllocateHostedConnectionInput) (*Connection, error) { @@ -199,8 +201,8 @@ const opAllocatePrivateVirtualInterface = "AllocatePrivateVirtualInterface" // AllocatePrivateVirtualInterfaceRequest generates a "aws/request.Request" representing the // client's request for the AllocatePrivateVirtualInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -239,12 +241,11 @@ func (c *DirectConnect) AllocatePrivateVirtualInterfaceRequest(input *AllocatePr // AllocatePrivateVirtualInterface API operation for AWS Direct Connect. // -// Provisions a private virtual interface to be owned by another AWS customer. +// Provisions a private virtual interface to be owned by the specified AWS account. // -// Virtual interfaces created using this action must be confirmed by the virtual -// interface owner by using the ConfirmPrivateVirtualInterface action. Until -// then, the virtual interface will be in 'Confirming' state, and will not be -// available for handling traffic. +// Virtual interfaces created using this action must be confirmed by the owner +// using ConfirmPrivateVirtualInterface. Until then, the virtual interface is +// in the Confirming state and is not available to handle traffic. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -254,13 +255,11 @@ func (c *DirectConnect) AllocatePrivateVirtualInterfaceRequest(input *AllocatePr // API operation AllocatePrivateVirtualInterface for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. 
+// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/AllocatePrivateVirtualInterface func (c *DirectConnect) AllocatePrivateVirtualInterface(input *AllocatePrivateVirtualInterfaceInput) (*VirtualInterface, error) { @@ -288,8 +287,8 @@ const opAllocatePublicVirtualInterface = "AllocatePublicVirtualInterface" // AllocatePublicVirtualInterfaceRequest generates a "aws/request.Request" representing the // client's request for the AllocatePublicVirtualInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -328,19 +327,19 @@ func (c *DirectConnect) AllocatePublicVirtualInterfaceRequest(input *AllocatePub // AllocatePublicVirtualInterface API operation for AWS Direct Connect. // -// Provisions a public virtual interface to be owned by a different customer. +// Provisions a public virtual interface to be owned by the specified AWS account. // // The owner of a connection calls this function to provision a public virtual -// interface which will be owned by another AWS customer. +// interface to be owned by the specified AWS account. // -// Virtual interfaces created using this function must be confirmed by the virtual -// interface owner by calling ConfirmPublicVirtualInterface. Until this step -// has been completed, the virtual interface will be in 'Confirming' state, -// and will not be available for handling traffic. +// Virtual interfaces created using this function must be confirmed by the owner +// using ConfirmPublicVirtualInterface. Until this step has been completed, +// the virtual interface is in the confirming state and is not available to +// handle traffic. // -// When creating an IPv6 public virtual interface (addressFamily is 'ipv6'), -// the customer and amazon address fields should be left blank to use auto-assigned -// IPv6 space. Custom IPv6 Addresses are currently not supported. +// When creating an IPv6 public virtual interface, omit the Amazon address and +// customer address. IPv6 addresses are automatically assigned from the Amazon +// pool of IPv6 addresses; you cannot specify custom IPv6 addresses. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -350,13 +349,11 @@ func (c *DirectConnect) AllocatePublicVirtualInterfaceRequest(input *AllocatePub // API operation AllocatePublicVirtualInterface for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/AllocatePublicVirtualInterface func (c *DirectConnect) AllocatePublicVirtualInterface(input *AllocatePublicVirtualInterfaceInput) (*VirtualInterface, error) { @@ -384,8 +381,8 @@ const opAssociateConnectionWithLag = "AssociateConnectionWithLag" // AssociateConnectionWithLagRequest generates a "aws/request.Request" representing the // client's request for the AssociateConnectionWithLag operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -426,12 +423,12 @@ func (c *DirectConnect) AssociateConnectionWithLagRequest(input *AssociateConnec // // Associates an existing connection with a link aggregation group (LAG). The // connection is interrupted and re-established as a member of the LAG (connectivity -// to AWS will be interrupted). The connection must be hosted on the same AWS -// Direct Connect endpoint as the LAG, and its bandwidth must match the bandwidth -// for the LAG. You can reassociate a connection that's currently associated -// with a different LAG; however, if removing the connection will cause the -// original LAG to fall below its setting for minimum number of operational -// connections, the request fails. +// to AWS is interrupted). The connection must be hosted on the same AWS Direct +// Connect endpoint as the LAG, and its bandwidth must match the bandwidth for +// the LAG. You can re-associate a connection that's currently associated with +// a different LAG; however, if removing the connection would cause the original +// LAG to fall below its setting for minimum number of operational connections, +// the request fails. // // Any virtual interfaces that are directly associated with the connection are // automatically re-associated with the LAG. If the connection was originally @@ -450,13 +447,11 @@ func (c *DirectConnect) AssociateConnectionWithLagRequest(input *AssociateConnec // API operation AssociateConnectionWithLag for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/AssociateConnectionWithLag func (c *DirectConnect) AssociateConnectionWithLag(input *AssociateConnectionWithLagInput) (*Connection, error) { @@ -484,8 +479,8 @@ const opAssociateHostedConnection = "AssociateHostedConnection" // AssociateHostedConnectionRequest generates a "aws/request.Request" representing the // client's request for the AssociateHostedConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -530,7 +525,7 @@ func (c *DirectConnect) AssociateHostedConnectionRequest(input *AssociateHostedC // fails. This action temporarily interrupts the hosted connection's connectivity // to AWS as it is being migrated. // -// This is intended for use by AWS Direct Connect partners only. +// Intended for use by AWS Direct Connect partners only. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -540,13 +535,11 @@ func (c *DirectConnect) AssociateHostedConnectionRequest(input *AssociateHostedC // API operation AssociateHostedConnection for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/AssociateHostedConnection func (c *DirectConnect) AssociateHostedConnection(input *AssociateHostedConnectionInput) (*Connection, error) { @@ -574,8 +567,8 @@ const opAssociateVirtualInterface = "AssociateVirtualInterface" // AssociateVirtualInterfaceRequest generates a "aws/request.Request" representing the // client's request for the AssociateVirtualInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -624,11 +617,10 @@ func (c *DirectConnect) AssociateVirtualInterfaceRequest(input *AssociateVirtual // with a LAG; hosted connections must be migrated along with their virtual // interfaces using AssociateHostedConnection. // -// In order to reassociate a virtual interface to a new connection or LAG, the -// requester must own either the virtual interface itself or the connection -// to which the virtual interface is currently associated. Additionally, the -// requester must own the connection or LAG to which the virtual interface will -// be newly associated. +// To reassociate a virtual interface to a new connection or LAG, the requester +// must own either the virtual interface itself or the connection to which the +// virtual interface is currently associated. Additionally, the requester must +// own the connection or LAG for the association. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -638,13 +630,11 @@ func (c *DirectConnect) AssociateVirtualInterfaceRequest(input *AssociateVirtual // API operation AssociateVirtualInterface for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/AssociateVirtualInterface func (c *DirectConnect) AssociateVirtualInterface(input *AssociateVirtualInterfaceInput) (*VirtualInterface, error) { @@ -672,8 +662,8 @@ const opConfirmConnection = "ConfirmConnection" // ConfirmConnectionRequest generates a "aws/request.Request" representing the // client's request for the ConfirmConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -712,11 +702,11 @@ func (c *DirectConnect) ConfirmConnectionRequest(input *ConfirmConnectionInput) // ConfirmConnection API operation for AWS Direct Connect. // -// Confirm the creation of a hosted connection on an interconnect. +// Confirms the creation of the specified hosted connection on an interconnect. // -// Upon creation, the hosted connection is initially in the 'Ordering' state, -// and will remain in this state until the owner calls ConfirmConnection to -// confirm creation of the hosted connection. +// Upon creation, the hosted connection is initially in the Ordering state, +// and remains in this state until the owner confirms creation of the hosted +// connection. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -726,13 +716,11 @@ func (c *DirectConnect) ConfirmConnectionRequest(input *ConfirmConnectionInput) // API operation ConfirmConnection for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/ConfirmConnection func (c *DirectConnect) ConfirmConnection(input *ConfirmConnectionInput) (*ConfirmConnectionOutput, error) { @@ -760,8 +748,8 @@ const opConfirmPrivateVirtualInterface = "ConfirmPrivateVirtualInterface" // ConfirmPrivateVirtualInterfaceRequest generates a "aws/request.Request" representing the // client's request for the ConfirmPrivateVirtualInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -800,11 +788,11 @@ func (c *DirectConnect) ConfirmPrivateVirtualInterfaceRequest(input *ConfirmPriv // ConfirmPrivateVirtualInterface API operation for AWS Direct Connect. // -// Accept ownership of a private virtual interface created by another customer. +// Accepts ownership of a private virtual interface created by another AWS account. // -// After the virtual interface owner calls this function, the virtual interface -// will be created and attached to the given virtual private gateway or direct -// connect gateway, and will be available for handling traffic. +// After the virtual interface owner makes this call, the virtual interface +// is created and attached to the specified virtual private gateway or Direct +// Connect gateway, and is made available to handle traffic. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -814,13 +802,11 @@ func (c *DirectConnect) ConfirmPrivateVirtualInterfaceRequest(input *ConfirmPriv // API operation ConfirmPrivateVirtualInterface for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/ConfirmPrivateVirtualInterface func (c *DirectConnect) ConfirmPrivateVirtualInterface(input *ConfirmPrivateVirtualInterfaceInput) (*ConfirmPrivateVirtualInterfaceOutput, error) { @@ -848,8 +834,8 @@ const opConfirmPublicVirtualInterface = "ConfirmPublicVirtualInterface" // ConfirmPublicVirtualInterfaceRequest generates a "aws/request.Request" representing the // client's request for the ConfirmPublicVirtualInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -888,10 +874,10 @@ func (c *DirectConnect) ConfirmPublicVirtualInterfaceRequest(input *ConfirmPubli // ConfirmPublicVirtualInterface API operation for AWS Direct Connect. // -// Accept ownership of a public virtual interface created by another customer. +// Accepts ownership of a public virtual interface created by another AWS account. // -// After the virtual interface owner calls this function, the specified virtual -// interface will be created and made available for handling traffic. +// After the virtual interface owner makes this call, the specified virtual +// interface is created and made available to handle traffic. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -901,13 +887,11 @@ func (c *DirectConnect) ConfirmPublicVirtualInterfaceRequest(input *ConfirmPubli // API operation ConfirmPublicVirtualInterface for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/ConfirmPublicVirtualInterface func (c *DirectConnect) ConfirmPublicVirtualInterface(input *ConfirmPublicVirtualInterfaceInput) (*ConfirmPublicVirtualInterfaceOutput, error) { @@ -935,8 +919,8 @@ const opCreateBGPPeer = "CreateBGPPeer" // CreateBGPPeerRequest generates a "aws/request.Request" representing the // client's request for the CreateBGPPeer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -975,16 +959,18 @@ func (c *DirectConnect) CreateBGPPeerRequest(input *CreateBGPPeerInput) (req *re // CreateBGPPeer API operation for AWS Direct Connect. // -// Creates a new BGP peer on a specified virtual interface. The BGP peer cannot -// be in the same address family (IPv4/IPv6) of an existing BGP peer on the -// virtual interface. +// Creates a BGP peer on the specified virtual interface. // -// You must create a BGP peer for the corresponding address family in order -// to access AWS resources that also use that address family. +// You must create a BGP peer for the corresponding address family (IPv4/IPv6) +// in order to access AWS resources that also use that address family. // -// When creating a IPv6 BGP peer, the Amazon address and customer address fields -// must be left blank. IPv6 addresses are automatically assigned from Amazon's -// pool of IPv6 addresses; you cannot specify custom IPv6 addresses. +// If logical redundancy is not supported by the connection, interconnect, or +// LAG, the BGP peer cannot be in the same address family as an existing BGP +// peer on the virtual interface. 
+// +// When creating a IPv6 BGP peer, omit the Amazon address and customer address. +// IPv6 addresses are automatically assigned from the Amazon pool of IPv6 addresses; +// you cannot specify custom IPv6 addresses. // // For a public virtual interface, the Autonomous System Number (ASN) must be // private or already whitelisted for the virtual interface. @@ -997,13 +983,11 @@ func (c *DirectConnect) CreateBGPPeerRequest(input *CreateBGPPeerInput) (req *re // API operation CreateBGPPeer for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/CreateBGPPeer func (c *DirectConnect) CreateBGPPeer(input *CreateBGPPeerInput) (*CreateBGPPeerOutput, error) { @@ -1031,8 +1015,8 @@ const opCreateConnection = "CreateConnection" // CreateConnectionRequest generates a "aws/request.Request" representing the // client's request for the CreateConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1071,24 +1055,20 @@ func (c *DirectConnect) CreateConnectionRequest(input *CreateConnectionInput) (r // CreateConnection API operation for AWS Direct Connect. // -// Creates a new connection between the customer network and a specific AWS -// Direct Connect location. +// Creates a connection between a customer network and a specific AWS Direct +// Connect location. // // A connection links your internal network to an AWS Direct Connect location -// over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end -// of the cable is connected to your router, the other to an AWS Direct Connect -// router. An AWS Direct Connect location provides access to Amazon Web Services -// in the region it is associated with. You can establish connections with AWS -// Direct Connect locations in multiple regions, but a connection in one region -// does not provide connectivity to other regions. +// over a standard Ethernet fiber-optic cable. One end of the cable is connected +// to your router, the other to an AWS Direct Connect router. // -// To find the locations for your region, use DescribeLocations. +// To find the locations for your Region, use DescribeLocations. // // You can automatically add the new connection to a link aggregation group // (LAG) by specifying a LAG ID in the request. This ensures that the new connection // is allocated on the same AWS Direct Connect endpoint that hosts the specified // LAG. If there are no available ports on the endpoint, the request fails and -// no connection will be created. +// no connection is created. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1098,13 +1078,11 @@ func (c *DirectConnect) CreateConnectionRequest(input *CreateConnectionInput) (r // API operation CreateConnection for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/CreateConnection func (c *DirectConnect) CreateConnection(input *CreateConnectionInput) (*Connection, error) { @@ -1132,8 +1110,8 @@ const opCreateDirectConnectGateway = "CreateDirectConnectGateway" // CreateDirectConnectGatewayRequest generates a "aws/request.Request" representing the // client's request for the CreateDirectConnectGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1172,14 +1150,13 @@ func (c *DirectConnect) CreateDirectConnectGatewayRequest(input *CreateDirectCon // CreateDirectConnectGateway API operation for AWS Direct Connect. // -// Creates a new direct connect gateway. A direct connect gateway is an intermediate -// object that enables you to connect a set of virtual interfaces and virtual -// private gateways. direct connect gateways are global and visible in any AWS -// region after they are created. The virtual interfaces and virtual private -// gateways that are connected through a direct connect gateway can be in different -// regions. This enables you to connect to a VPC in any region, regardless of -// the region in which the virtual interfaces are located, and pass traffic -// between them. +// Creates a Direct Connect gateway, which is an intermediate object that enables +// you to connect a set of virtual interfaces and virtual private gateways. +// A Direct Connect gateway is global and visible in any AWS Region after it +// is created. The virtual interfaces and virtual private gateways that are +// connected through a Direct Connect gateway can be in different AWS Regions. +// This enables you to connect to a VPC in any Region, regardless of the Region +// in which the virtual interfaces are located, and pass traffic between them. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1189,13 +1166,11 @@ func (c *DirectConnect) CreateDirectConnectGatewayRequest(input *CreateDirectCon // API operation CreateDirectConnectGateway for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. 
+// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/CreateDirectConnectGateway func (c *DirectConnect) CreateDirectConnectGateway(input *CreateDirectConnectGatewayInput) (*CreateDirectConnectGatewayOutput, error) { @@ -1223,8 +1198,8 @@ const opCreateDirectConnectGatewayAssociation = "CreateDirectConnectGatewayAssoc // CreateDirectConnectGatewayAssociationRequest generates a "aws/request.Request" representing the // client's request for the CreateDirectConnectGatewayAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1263,9 +1238,9 @@ func (c *DirectConnect) CreateDirectConnectGatewayAssociationRequest(input *Crea // CreateDirectConnectGatewayAssociation API operation for AWS Direct Connect. // -// Creates an association between a direct connect gateway and a virtual private -// gateway (VGW). The VGW must be attached to a VPC and must not be associated -// with another direct connect gateway. +// Creates an association between a Direct Connect gateway and a virtual private +// gateway. The virtual private gateway must be attached to a VPC and must not +// be associated with another Direct Connect gateway. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1275,13 +1250,11 @@ func (c *DirectConnect) CreateDirectConnectGatewayAssociationRequest(input *Crea // API operation CreateDirectConnectGatewayAssociation for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/CreateDirectConnectGatewayAssociation func (c *DirectConnect) CreateDirectConnectGatewayAssociation(input *CreateDirectConnectGatewayAssociationInput) (*CreateDirectConnectGatewayAssociationOutput, error) { @@ -1309,8 +1282,8 @@ const opCreateInterconnect = "CreateInterconnect" // CreateInterconnectRequest generates a "aws/request.Request" representing the // client's request for the CreateInterconnect operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1349,30 +1322,29 @@ func (c *DirectConnect) CreateInterconnectRequest(input *CreateInterconnectInput // CreateInterconnect API operation for AWS Direct Connect. // -// Creates a new interconnect between a AWS Direct Connect partner's network -// and a specific AWS Direct Connect location. +// Creates an interconnect between an AWS Direct Connect partner's network and +// a specific AWS Direct Connect location. // // An interconnect is a connection which is capable of hosting other connections. -// The AWS Direct Connect partner can use an interconnect to provide sub-1Gbps -// AWS Direct Connect service to tier 2 customers who do not have their own -// connections. Like a standard connection, an interconnect links the AWS Direct -// Connect partner's network to an AWS Direct Connect location over a standard -// 1 Gbps or 10 Gbps Ethernet fiber-optic cable. One end is connected to the -// partner's router, the other to an AWS Direct Connect router. +// The partner can use an interconnect to provide sub-1Gbps AWS Direct Connect +// service to tier 2 customers who do not have their own connections. Like a +// standard connection, an interconnect links the partner's network to an AWS +// Direct Connect location over a standard Ethernet fiber-optic cable. One end +// is connected to the partner's router, the other to an AWS Direct Connect +// router. // // You can automatically add the new interconnect to a link aggregation group // (LAG) by specifying a LAG ID in the request. This ensures that the new interconnect // is allocated on the same AWS Direct Connect endpoint that hosts the specified // LAG. If there are no available ports on the endpoint, the request fails and -// no interconnect will be created. +// no interconnect is created. // // For each end customer, the AWS Direct Connect partner provisions a connection // on their interconnect by calling AllocateConnectionOnInterconnect. The end // customer can then connect to AWS resources by creating a virtual interface -// on their connection, using the VLAN assigned to them by the AWS Direct Connect -// partner. +// on their connection, using the VLAN assigned to them by the partner. // -// This is intended for use by AWS Direct Connect partners only. +// Intended for use by AWS Direct Connect partners only. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1382,13 +1354,11 @@ func (c *DirectConnect) CreateInterconnectRequest(input *CreateInterconnectInput // API operation CreateInterconnect for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/CreateInterconnect func (c *DirectConnect) CreateInterconnect(input *CreateInterconnectInput) (*Interconnect, error) { @@ -1416,8 +1386,8 @@ const opCreateLag = "CreateLag" // CreateLagRequest generates a "aws/request.Request" representing the // client's request for the CreateLag operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1456,14 +1426,14 @@ func (c *DirectConnect) CreateLagRequest(input *CreateLagInput) (req *request.Re // CreateLag API operation for AWS Direct Connect. // -// Creates a new link aggregation group (LAG) with the specified number of bundled +// Creates a link aggregation group (LAG) with the specified number of bundled // physical connections between the customer network and a specific AWS Direct // Connect location. A LAG is a logical interface that uses the Link Aggregation -// Control Protocol (LACP) to aggregate multiple 1 gigabit or 10 gigabit interfaces, -// allowing you to treat them as a single interface. +// Control Protocol (LACP) to aggregate multiple interfaces, enabling you to +// treat them as a single interface. // -// All connections in a LAG must use the same bandwidth (for example, 10 Gbps), -// and must terminate at the same AWS Direct Connect endpoint. +// All connections in a LAG must use the same bandwidth and must terminate at +// the same AWS Direct Connect endpoint. // // You can have up to 10 connections per LAG. Regardless of this limit, if you // request more connections for the LAG than AWS Direct Connect can allocate @@ -1490,13 +1460,11 @@ func (c *DirectConnect) CreateLagRequest(input *CreateLagInput) (req *request.Re // API operation CreateLag for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/CreateLag func (c *DirectConnect) CreateLag(input *CreateLagInput) (*Lag, error) { @@ -1524,8 +1492,8 @@ const opCreatePrivateVirtualInterface = "CreatePrivateVirtualInterface" // CreatePrivateVirtualInterfaceRequest generates a "aws/request.Request" representing the // client's request for the CreatePrivateVirtualInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1564,9 +1532,13 @@ func (c *DirectConnect) CreatePrivateVirtualInterfaceRequest(input *CreatePrivat // CreatePrivateVirtualInterface API operation for AWS Direct Connect. // -// Creates a new private virtual interface. A virtual interface is the VLAN -// that transports AWS Direct Connect traffic. A private virtual interface supports -// sending traffic to a single virtual private cloud (VPC). +// Creates a private virtual interface. A virtual interface is the VLAN that +// transports AWS Direct Connect traffic. A private virtual interface can be +// connected to either a Direct Connect gateway or a Virtual Private Gateway +// (VGW). Connecting the private virtual interface to a Direct Connect gateway +// enables the possibility for connecting to multiple VPCs, including VPCs in +// different AWS Regions. Connecting the private virtual interface to a VGW +// only provides access to a single VPC within the same Region. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1576,13 +1548,11 @@ func (c *DirectConnect) CreatePrivateVirtualInterfaceRequest(input *CreatePrivat // API operation CreatePrivateVirtualInterface for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/CreatePrivateVirtualInterface func (c *DirectConnect) CreatePrivateVirtualInterface(input *CreatePrivateVirtualInterfaceInput) (*VirtualInterface, error) { @@ -1610,8 +1580,8 @@ const opCreatePublicVirtualInterface = "CreatePublicVirtualInterface" // CreatePublicVirtualInterfaceRequest generates a "aws/request.Request" representing the // client's request for the CreatePublicVirtualInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1650,14 +1620,13 @@ func (c *DirectConnect) CreatePublicVirtualInterfaceRequest(input *CreatePublicV // CreatePublicVirtualInterface API operation for AWS Direct Connect. // -// Creates a new public virtual interface. A virtual interface is the VLAN that +// Creates a public virtual interface. A virtual interface is the VLAN that // transports AWS Direct Connect traffic. A public virtual interface supports -// sending traffic to public services of AWS such as Amazon Simple Storage Service -// (Amazon S3). +// sending traffic to public services of AWS such as Amazon S3. // -// When creating an IPv6 public virtual interface (addressFamily is 'ipv6'), -// the customer and amazon address fields should be left blank to use auto-assigned -// IPv6 space. Custom IPv6 Addresses are currently not supported. 
+// When creating an IPv6 public virtual interface (addressFamily is ipv6), leave +// the customer and amazon address fields blank to use auto-assigned IPv6 space. +// Custom IPv6 addresses are not supported. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1667,13 +1636,11 @@ func (c *DirectConnect) CreatePublicVirtualInterfaceRequest(input *CreatePublicV // API operation CreatePublicVirtualInterface for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/CreatePublicVirtualInterface func (c *DirectConnect) CreatePublicVirtualInterface(input *CreatePublicVirtualInterfaceInput) (*VirtualInterface, error) { @@ -1701,8 +1668,8 @@ const opDeleteBGPPeer = "DeleteBGPPeer" // DeleteBGPPeerRequest generates a "aws/request.Request" representing the // client's request for the DeleteBGPPeer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1741,9 +1708,10 @@ func (c *DirectConnect) DeleteBGPPeerRequest(input *DeleteBGPPeerInput) (req *re // DeleteBGPPeer API operation for AWS Direct Connect. // -// Deletes a BGP peer on the specified virtual interface that matches the specified -// customer address and ASN. You cannot delete the last BGP peer from a virtual -// interface. +// Deletes the specified BGP peer on the specified virtual interface with the +// specified customer address and ASN. +// +// You cannot delete the last BGP peer from a virtual interface. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1753,13 +1721,11 @@ func (c *DirectConnect) DeleteBGPPeerRequest(input *DeleteBGPPeerInput) (req *re // API operation DeleteBGPPeer for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DeleteBGPPeer func (c *DirectConnect) DeleteBGPPeer(input *DeleteBGPPeerInput) (*DeleteBGPPeerOutput, error) { @@ -1787,8 +1753,8 @@ const opDeleteConnection = "DeleteConnection" // DeleteConnectionRequest generates a "aws/request.Request" representing the // client's request for the DeleteConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1827,12 +1793,12 @@ func (c *DirectConnect) DeleteConnectionRequest(input *DeleteConnectionInput) (r // DeleteConnection API operation for AWS Direct Connect. // -// Deletes the connection. +// Deletes the specified connection. // // Deleting a connection only stops the AWS Direct Connect port hour and data -// transfer charges. You need to cancel separately with the providers any services -// or charges for cross-connects or network circuits that connect you to the -// AWS Direct Connect location. +// transfer charges. If you are partnering with any third parties to connect +// with the AWS Direct Connect location, you must cancel your service with them +// separately. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1842,13 +1808,11 @@ func (c *DirectConnect) DeleteConnectionRequest(input *DeleteConnectionInput) (r // API operation DeleteConnection for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DeleteConnection func (c *DirectConnect) DeleteConnection(input *DeleteConnectionInput) (*Connection, error) { @@ -1876,8 +1840,8 @@ const opDeleteDirectConnectGateway = "DeleteDirectConnectGateway" // DeleteDirectConnectGatewayRequest generates a "aws/request.Request" representing the // client's request for the DeleteDirectConnectGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1916,9 +1880,10 @@ func (c *DirectConnect) DeleteDirectConnectGatewayRequest(input *DeleteDirectCon // DeleteDirectConnectGateway API operation for AWS Direct Connect. // -// Deletes a direct connect gateway. 
You must first delete all virtual interfaces -// that are attached to the direct connect gateway and disassociate all virtual -// private gateways that are associated with the direct connect gateway. +// Deletes the specified Direct Connect gateway. You must first delete all virtual +// interfaces that are attached to the Direct Connect gateway and disassociate +// all virtual private gateways that are associated with the Direct Connect +// gateway. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1928,13 +1893,11 @@ func (c *DirectConnect) DeleteDirectConnectGatewayRequest(input *DeleteDirectCon // API operation DeleteDirectConnectGateway for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DeleteDirectConnectGateway func (c *DirectConnect) DeleteDirectConnectGateway(input *DeleteDirectConnectGatewayInput) (*DeleteDirectConnectGatewayOutput, error) { @@ -1962,8 +1925,8 @@ const opDeleteDirectConnectGatewayAssociation = "DeleteDirectConnectGatewayAssoc // DeleteDirectConnectGatewayAssociationRequest generates a "aws/request.Request" representing the // client's request for the DeleteDirectConnectGatewayAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2002,8 +1965,8 @@ func (c *DirectConnect) DeleteDirectConnectGatewayAssociationRequest(input *Dele // DeleteDirectConnectGatewayAssociation API operation for AWS Direct Connect. // -// Deletes the association between a direct connect gateway and a virtual private -// gateway. +// Deletes the association between the specified Direct Connect gateway and +// virtual private gateway. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2013,13 +1976,11 @@ func (c *DirectConnect) DeleteDirectConnectGatewayAssociationRequest(input *Dele // API operation DeleteDirectConnectGatewayAssociation for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. 
+// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DeleteDirectConnectGatewayAssociation func (c *DirectConnect) DeleteDirectConnectGatewayAssociation(input *DeleteDirectConnectGatewayAssociationInput) (*DeleteDirectConnectGatewayAssociationOutput, error) { @@ -2047,8 +2008,8 @@ const opDeleteInterconnect = "DeleteInterconnect" // DeleteInterconnectRequest generates a "aws/request.Request" representing the // client's request for the DeleteInterconnect operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2089,7 +2050,7 @@ func (c *DirectConnect) DeleteInterconnectRequest(input *DeleteInterconnectInput // // Deletes the specified interconnect. // -// This is intended for use by AWS Direct Connect partners only. +// Intended for use by AWS Direct Connect partners only. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2099,13 +2060,11 @@ func (c *DirectConnect) DeleteInterconnectRequest(input *DeleteInterconnectInput // API operation DeleteInterconnect for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DeleteInterconnect func (c *DirectConnect) DeleteInterconnect(input *DeleteInterconnectInput) (*DeleteInterconnectOutput, error) { @@ -2133,8 +2092,8 @@ const opDeleteLag = "DeleteLag" // DeleteLagRequest generates a "aws/request.Request" representing the // client's request for the DeleteLag operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2173,8 +2132,8 @@ func (c *DirectConnect) DeleteLagRequest(input *DeleteLagInput) (req *request.Re // DeleteLag API operation for AWS Direct Connect. // -// Deletes a link aggregation group (LAG). You cannot delete a LAG if it has -// active virtual interfaces or hosted connections. +// Deletes the specified link aggregation group (LAG). You cannot delete a LAG +// if it has active virtual interfaces or hosted connections. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2184,13 +2143,11 @@ func (c *DirectConnect) DeleteLagRequest(input *DeleteLagInput) (req *request.Re // API operation DeleteLag for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DeleteLag func (c *DirectConnect) DeleteLag(input *DeleteLagInput) (*Lag, error) { @@ -2218,8 +2175,8 @@ const opDeleteVirtualInterface = "DeleteVirtualInterface" // DeleteVirtualInterfaceRequest generates a "aws/request.Request" representing the // client's request for the DeleteVirtualInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2268,13 +2225,11 @@ func (c *DirectConnect) DeleteVirtualInterfaceRequest(input *DeleteVirtualInterf // API operation DeleteVirtualInterface for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DeleteVirtualInterface func (c *DirectConnect) DeleteVirtualInterface(input *DeleteVirtualInterfaceInput) (*DeleteVirtualInterfaceOutput, error) { @@ -2302,8 +2257,8 @@ const opDescribeConnectionLoa = "DescribeConnectionLoa" // DescribeConnectionLoaRequest generates a "aws/request.Request" representing the // client's request for the DescribeConnectionLoa operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2324,6 +2279,8 @@ const opDescribeConnectionLoa = "DescribeConnectionLoa" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeConnectionLoa +// +// Deprecated: DescribeConnectionLoa has been deprecated func (c *DirectConnect) DescribeConnectionLoaRequest(input *DescribeConnectionLoaInput) (req *request.Request, output *DescribeConnectionLoaOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, DescribeConnectionLoa, has been deprecated") @@ -2345,15 +2302,15 @@ func (c *DirectConnect) DescribeConnectionLoaRequest(input *DescribeConnectionLo // DescribeConnectionLoa API operation for AWS Direct Connect. // -// Deprecated in favor of DescribeLoa. +// Deprecated. Use DescribeLoa instead. // -// Returns the LOA-CFA for a Connection. +// Gets the LOA-CFA for a connection. // // The Letter of Authorization - Connecting Facility Assignment (LOA-CFA) is // a document that your APN partner or service provider uses when establishing // your cross connect to AWS at the colocation facility. For more information, // see Requesting Cross Connects at AWS Direct Connect Locations (http://docs.aws.amazon.com/directconnect/latest/UserGuide/Colocation.html) -// in the AWS Direct Connect user guide. +// in the AWS Direct Connect User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2363,15 +2320,15 @@ func (c *DirectConnect) DescribeConnectionLoaRequest(input *DescribeConnectionLo // API operation DescribeConnectionLoa for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeConnectionLoa +// +// Deprecated: DescribeConnectionLoa has been deprecated func (c *DirectConnect) DescribeConnectionLoa(input *DescribeConnectionLoaInput) (*DescribeConnectionLoaOutput, error) { req, out := c.DescribeConnectionLoaRequest(input) return out, req.Send() @@ -2386,6 +2343,8 @@ func (c *DirectConnect) DescribeConnectionLoa(input *DescribeConnectionLoaInput) // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: DescribeConnectionLoaWithContext has been deprecated func (c *DirectConnect) DescribeConnectionLoaWithContext(ctx aws.Context, input *DescribeConnectionLoaInput, opts ...request.Option) (*DescribeConnectionLoaOutput, error) { req, out := c.DescribeConnectionLoaRequest(input) req.SetContext(ctx) @@ -2397,8 +2356,8 @@ const opDescribeConnections = "DescribeConnections" // DescribeConnectionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2437,9 +2396,7 @@ func (c *DirectConnect) DescribeConnectionsRequest(input *DescribeConnectionsInp // DescribeConnections API operation for AWS Direct Connect. // -// Displays all connections in this region. -// -// If a connection ID is provided, the call returns only that particular connection. +// Displays the specified connection or all connections in this Region. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2449,13 +2406,11 @@ func (c *DirectConnect) DescribeConnectionsRequest(input *DescribeConnectionsInp // API operation DescribeConnections for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeConnections func (c *DirectConnect) DescribeConnections(input *DescribeConnectionsInput) (*Connections, error) { @@ -2483,8 +2438,8 @@ const opDescribeConnectionsOnInterconnect = "DescribeConnectionsOnInterconnect" // DescribeConnectionsOnInterconnectRequest generates a "aws/request.Request" representing the // client's request for the DescribeConnectionsOnInterconnect operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2505,6 +2460,8 @@ const opDescribeConnectionsOnInterconnect = "DescribeConnectionsOnInterconnect" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeConnectionsOnInterconnect +// +// Deprecated: DescribeConnectionsOnInterconnect has been deprecated func (c *DirectConnect) DescribeConnectionsOnInterconnectRequest(input *DescribeConnectionsOnInterconnectInput) (req *request.Request, output *Connections) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, DescribeConnectionsOnInterconnect, has been deprecated") @@ -2526,11 +2483,11 @@ func (c *DirectConnect) DescribeConnectionsOnInterconnectRequest(input *Describe // DescribeConnectionsOnInterconnect API operation for AWS Direct Connect. // -// Deprecated in favor of DescribeHostedConnections. +// Deprecated. Use DescribeHostedConnections instead. // -// Returns a list of connections that have been provisioned on the given interconnect. +// Lists the connections that have been provisioned on the specified interconnect. // -// This is intended for use by AWS Direct Connect partners only. 
+// Intended for use by AWS Direct Connect partners only. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2540,15 +2497,15 @@ func (c *DirectConnect) DescribeConnectionsOnInterconnectRequest(input *Describe // API operation DescribeConnectionsOnInterconnect for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeConnectionsOnInterconnect +// +// Deprecated: DescribeConnectionsOnInterconnect has been deprecated func (c *DirectConnect) DescribeConnectionsOnInterconnect(input *DescribeConnectionsOnInterconnectInput) (*Connections, error) { req, out := c.DescribeConnectionsOnInterconnectRequest(input) return out, req.Send() @@ -2563,6 +2520,8 @@ func (c *DirectConnect) DescribeConnectionsOnInterconnect(input *DescribeConnect // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: DescribeConnectionsOnInterconnectWithContext has been deprecated func (c *DirectConnect) DescribeConnectionsOnInterconnectWithContext(ctx aws.Context, input *DescribeConnectionsOnInterconnectInput, opts ...request.Option) (*Connections, error) { req, out := c.DescribeConnectionsOnInterconnectRequest(input) req.SetContext(ctx) @@ -2574,8 +2533,8 @@ const opDescribeDirectConnectGatewayAssociations = "DescribeDirectConnectGateway // DescribeDirectConnectGatewayAssociationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDirectConnectGatewayAssociations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2614,13 +2573,14 @@ func (c *DirectConnect) DescribeDirectConnectGatewayAssociationsRequest(input *D // DescribeDirectConnectGatewayAssociations API operation for AWS Direct Connect. // -// Returns a list of all direct connect gateway and virtual private gateway -// (VGW) associations. Either a direct connect gateway ID or a VGW ID must be -// provided in the request. If a direct connect gateway ID is provided, the -// response returns all VGWs associated with the direct connect gateway. If -// a VGW ID is provided, the response returns all direct connect gateways associated -// with the VGW. If both are provided, the response only returns the association -// that matches both the direct connect gateway and the VGW. +// Lists the associations between your Direct Connect gateways and virtual private +// gateways. 
You must specify a Direct Connect gateway, a virtual private gateway, +// or both. If you specify a Direct Connect gateway, the response contains all +// virtual private gateways associated with the Direct Connect gateway. If you +// specify a virtual private gateway, the response contains all Direct Connect +// gateways associated with the virtual private gateway. If you specify both, +// the response contains the association between the Direct Connect gateway +// and the virtual private gateway. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2630,13 +2590,11 @@ func (c *DirectConnect) DescribeDirectConnectGatewayAssociationsRequest(input *D // API operation DescribeDirectConnectGatewayAssociations for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeDirectConnectGatewayAssociations func (c *DirectConnect) DescribeDirectConnectGatewayAssociations(input *DescribeDirectConnectGatewayAssociationsInput) (*DescribeDirectConnectGatewayAssociationsOutput, error) { @@ -2664,8 +2622,8 @@ const opDescribeDirectConnectGatewayAttachments = "DescribeDirectConnectGatewayA // DescribeDirectConnectGatewayAttachmentsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDirectConnectGatewayAttachments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2704,13 +2662,13 @@ func (c *DirectConnect) DescribeDirectConnectGatewayAttachmentsRequest(input *De // DescribeDirectConnectGatewayAttachments API operation for AWS Direct Connect. // -// Returns a list of all direct connect gateway and virtual interface (VIF) -// attachments. Either a direct connect gateway ID or a VIF ID must be provided -// in the request. If a direct connect gateway ID is provided, the response -// returns all VIFs attached to the direct connect gateway. If a VIF ID is provided, -// the response returns all direct connect gateways attached to the VIF. If -// both are provided, the response only returns the attachment that matches -// both the direct connect gateway and the VIF. +// Lists the attachments between your Direct Connect gateways and virtual interfaces. +// You must specify a Direct Connect gateway, a virtual interface, or both. +// If you specify a Direct Connect gateway, the response contains all virtual +// interfaces attached to the Direct Connect gateway. If you specify a virtual +// interface, the response contains all Direct Connect gateways attached to +// the virtual interface. 
If you specify both, the response contains the attachment +// between the Direct Connect gateway and the virtual interface. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2720,13 +2678,11 @@ func (c *DirectConnect) DescribeDirectConnectGatewayAttachmentsRequest(input *De // API operation DescribeDirectConnectGatewayAttachments for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeDirectConnectGatewayAttachments func (c *DirectConnect) DescribeDirectConnectGatewayAttachments(input *DescribeDirectConnectGatewayAttachmentsInput) (*DescribeDirectConnectGatewayAttachmentsOutput, error) { @@ -2754,8 +2710,8 @@ const opDescribeDirectConnectGateways = "DescribeDirectConnectGateways" // DescribeDirectConnectGatewaysRequest generates a "aws/request.Request" representing the // client's request for the DescribeDirectConnectGateways operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2794,11 +2750,8 @@ func (c *DirectConnect) DescribeDirectConnectGatewaysRequest(input *DescribeDire // DescribeDirectConnectGateways API operation for AWS Direct Connect. // -// Returns a list of direct connect gateways in your account. Deleted direct -// connect gateways are not returned. You can provide a direct connect gateway -// ID in the request to return information about the specific direct connect -// gateway only. Otherwise, if a direct connect gateway ID is not provided, -// information about all of your direct connect gateways is returned. +// Lists all your Direct Connect gateways or only the specified Direct Connect +// gateway. Deleted Direct Connect gateways are not returned. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2808,13 +2761,11 @@ func (c *DirectConnect) DescribeDirectConnectGatewaysRequest(input *DescribeDire // API operation DescribeDirectConnectGateways for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. 
+// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeDirectConnectGateways func (c *DirectConnect) DescribeDirectConnectGateways(input *DescribeDirectConnectGatewaysInput) (*DescribeDirectConnectGatewaysOutput, error) { @@ -2842,8 +2793,8 @@ const opDescribeHostedConnections = "DescribeHostedConnections" // DescribeHostedConnectionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeHostedConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2882,10 +2833,10 @@ func (c *DirectConnect) DescribeHostedConnectionsRequest(input *DescribeHostedCo // DescribeHostedConnections API operation for AWS Direct Connect. // -// Returns a list of hosted connections that have been provisioned on the given +// Lists the hosted connections that have been provisioned on the specified // interconnect or link aggregation group (LAG). // -// This is intended for use by AWS Direct Connect partners only. +// Intended for use by AWS Direct Connect partners only. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2895,13 +2846,11 @@ func (c *DirectConnect) DescribeHostedConnectionsRequest(input *DescribeHostedCo // API operation DescribeHostedConnections for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeHostedConnections func (c *DirectConnect) DescribeHostedConnections(input *DescribeHostedConnectionsInput) (*Connections, error) { @@ -2929,8 +2878,8 @@ const opDescribeInterconnectLoa = "DescribeInterconnectLoa" // DescribeInterconnectLoaRequest generates a "aws/request.Request" representing the // client's request for the DescribeInterconnectLoa operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
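To make the `DescribeDirectConnectGatewayAssociations` behavior described above concrete, here is a minimal sketch that lists the virtual private gateways associated with one Direct Connect gateway. The gateway ID is a placeholder, and the field names `DirectConnectGatewayId` (input) and `DirectConnectGatewayAssociations` (output) are assumed from the generated structs, which are not shown in this hunk:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	svc := directconnect.New(session.Must(session.NewSession()))

	// Supplying a virtual private gateway ID instead (or as well) narrows the
	// result, exactly as the operation description above explains.
	out, err := svc.DescribeDirectConnectGatewayAssociations(
		&directconnect.DescribeDirectConnectGatewayAssociationsInput{
			DirectConnectGatewayId: aws.String("5f294f92-bafb-4011-916d-9b0bexample"), // placeholder
		})
	if err != nil {
		log.Fatal(err)
	}
	for _, assoc := range out.DirectConnectGatewayAssociations {
		// Each association pairs the Direct Connect gateway with one virtual
		// private gateway; the generated String method prints all fields.
		fmt.Println(assoc)
	}
}
```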
@@ -2951,6 +2900,8 @@ const opDescribeInterconnectLoa = "DescribeInterconnectLoa" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeInterconnectLoa +// +// Deprecated: DescribeInterconnectLoa has been deprecated func (c *DirectConnect) DescribeInterconnectLoaRequest(input *DescribeInterconnectLoaInput) (req *request.Request, output *DescribeInterconnectLoaOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, DescribeInterconnectLoa, has been deprecated") @@ -2972,15 +2923,15 @@ func (c *DirectConnect) DescribeInterconnectLoaRequest(input *DescribeInterconne // DescribeInterconnectLoa API operation for AWS Direct Connect. // -// Deprecated in favor of DescribeLoa. +// Deprecated. Use DescribeLoa instead. // -// Returns the LOA-CFA for an Interconnect. +// Gets the LOA-CFA for the specified interconnect. // // The Letter of Authorization - Connecting Facility Assignment (LOA-CFA) is // a document that is used when establishing your cross connect to AWS at the // colocation facility. For more information, see Requesting Cross Connects // at AWS Direct Connect Locations (http://docs.aws.amazon.com/directconnect/latest/UserGuide/Colocation.html) -// in the AWS Direct Connect user guide. +// in the AWS Direct Connect User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2990,15 +2941,15 @@ func (c *DirectConnect) DescribeInterconnectLoaRequest(input *DescribeInterconne // API operation DescribeInterconnectLoa for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeInterconnectLoa +// +// Deprecated: DescribeInterconnectLoa has been deprecated func (c *DirectConnect) DescribeInterconnectLoa(input *DescribeInterconnectLoaInput) (*DescribeInterconnectLoaOutput, error) { req, out := c.DescribeInterconnectLoaRequest(input) return out, req.Send() @@ -3013,6 +2964,8 @@ func (c *DirectConnect) DescribeInterconnectLoa(input *DescribeInterconnectLoaIn // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: DescribeInterconnectLoaWithContext has been deprecated func (c *DirectConnect) DescribeInterconnectLoaWithContext(ctx aws.Context, input *DescribeInterconnectLoaInput, opts ...request.Option) (*DescribeInterconnectLoaOutput, error) { req, out := c.DescribeInterconnectLoaRequest(input) req.SetContext(ctx) @@ -3024,8 +2977,8 @@ const opDescribeInterconnects = "DescribeInterconnects" // DescribeInterconnectsRequest generates a "aws/request.Request" representing the // client's request for the DescribeInterconnects operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3064,9 +3017,7 @@ func (c *DirectConnect) DescribeInterconnectsRequest(input *DescribeInterconnect // DescribeInterconnects API operation for AWS Direct Connect. // -// Returns a list of interconnects owned by the AWS account. -// -// If an interconnect ID is provided, it will only return this particular interconnect. +// Lists the interconnects owned by the AWS account or only the specified interconnect. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3076,13 +3027,11 @@ func (c *DirectConnect) DescribeInterconnectsRequest(input *DescribeInterconnect // API operation DescribeInterconnects for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeInterconnects func (c *DirectConnect) DescribeInterconnects(input *DescribeInterconnectsInput) (*DescribeInterconnectsOutput, error) { @@ -3110,8 +3059,8 @@ const opDescribeLags = "DescribeLags" // DescribeLagsRequest generates a "aws/request.Request" representing the // client's request for the DescribeLags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3150,9 +3099,7 @@ func (c *DirectConnect) DescribeLagsRequest(input *DescribeLagsInput) (req *requ // DescribeLags API operation for AWS Direct Connect. // -// Describes the link aggregation groups (LAGs) in your account. -// -// If a LAG ID is provided, only information about the specified LAG is returned. +// Describes all your link aggregation groups (LAG) or the specified LAG. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3162,13 +3109,11 @@ func (c *DirectConnect) DescribeLagsRequest(input *DescribeLagsInput) (req *requ // API operation DescribeLags for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. 
// -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeLags func (c *DirectConnect) DescribeLags(input *DescribeLagsInput) (*DescribeLagsOutput, error) { @@ -3196,8 +3141,8 @@ const opDescribeLoa = "DescribeLoa" // DescribeLoaRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoa operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3236,14 +3181,14 @@ func (c *DirectConnect) DescribeLoaRequest(input *DescribeLoaInput) (req *reques // DescribeLoa API operation for AWS Direct Connect. // -// Returns the LOA-CFA for a connection, interconnect, or link aggregation group +// Gets the LOA-CFA for a connection, interconnect, or link aggregation group // (LAG). // // The Letter of Authorization - Connecting Facility Assignment (LOA-CFA) is // a document that is used when establishing your cross connect to AWS at the // colocation facility. For more information, see Requesting Cross Connects // at AWS Direct Connect Locations (http://docs.aws.amazon.com/directconnect/latest/UserGuide/Colocation.html) -// in the AWS Direct Connect user guide. +// in the AWS Direct Connect User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3253,13 +3198,11 @@ func (c *DirectConnect) DescribeLoaRequest(input *DescribeLoaInput) (req *reques // API operation DescribeLoa for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeLoa func (c *DirectConnect) DescribeLoa(input *DescribeLoaInput) (*Loa, error) { @@ -3287,8 +3230,8 @@ const opDescribeLocations = "DescribeLocations" // DescribeLocationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeLocations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
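A short sketch of the LOA-CFA retrieval that `DescribeLoa` documents above: request the document as a PDF and write it to disk so it can be handed to the colocation provider. The connection ID is a placeholder, and the `LoaContent`/`LoaContentType` field names are assumed from the generated `Loa` struct, which is not shown here:

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	svc := directconnect.New(session.Must(session.NewSession()))

	// The same call accepts an interconnect or LAG ID in place of the
	// connection ID, per the description above.
	loa, err := svc.DescribeLoa(&directconnect.DescribeLoaInput{
		ConnectionId:   aws.String("dxcon-fg5678gh"), // placeholder
		LoaContentType: aws.String(directconnect.LoaContentTypeApplicationPdf),
	})
	if err != nil {
		log.Fatal(err)
	}
	// LoaContent holds the raw document bytes.
	if err := os.WriteFile("loa-cfa.pdf", loa.LoaContent, 0o644); err != nil {
		log.Fatal(err)
	}
}
```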
@@ -3327,9 +3270,8 @@ func (c *DirectConnect) DescribeLocationsRequest(input *DescribeLocationsInput) // DescribeLocations API operation for AWS Direct Connect. // -// Returns the list of AWS Direct Connect locations in the current AWS region. -// These are the locations that may be selected when calling CreateConnection -// or CreateInterconnect. +// Lists the AWS Direct Connect locations in the current AWS Region. These are +// the locations that can be selected when calling CreateConnection or CreateInterconnect. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3339,13 +3281,11 @@ func (c *DirectConnect) DescribeLocationsRequest(input *DescribeLocationsInput) // API operation DescribeLocations for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeLocations func (c *DirectConnect) DescribeLocations(input *DescribeLocationsInput) (*DescribeLocationsOutput, error) { @@ -3373,8 +3313,8 @@ const opDescribeTags = "DescribeTags" // DescribeTagsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3413,7 +3353,7 @@ func (c *DirectConnect) DescribeTagsRequest(input *DescribeTagsInput) (req *requ // DescribeTags API operation for AWS Direct Connect. // -// Describes the tags associated with the specified Direct Connect resources. +// Describes the tags associated with the specified AWS Direct Connect resources. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3423,13 +3363,11 @@ func (c *DirectConnect) DescribeTagsRequest(input *DescribeTagsInput) (req *requ // API operation DescribeTags for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. 
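`DescribeLocations` is the simplest call touched by this hunk; a sketch that prints the location codes that can be passed to `CreateConnection` or `CreateInterconnect`, assuming the generated `Location` struct exposes `LocationCode` and `LocationName`:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	svc := directconnect.New(session.Must(session.NewSession()))

	// No parameters: the call returns the locations available in the
	// client's configured Region.
	out, err := svc.DescribeLocations(&directconnect.DescribeLocationsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, loc := range out.Locations {
		// LocationCode is the value used when creating a connection or interconnect.
		fmt.Printf("%s\t%s\n", aws.StringValue(loc.LocationCode), aws.StringValue(loc.LocationName))
	}
}
```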
// // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeTags func (c *DirectConnect) DescribeTags(input *DescribeTagsInput) (*DescribeTagsOutput, error) { @@ -3457,8 +3395,8 @@ const opDescribeVirtualGateways = "DescribeVirtualGateways" // DescribeVirtualGatewaysRequest generates a "aws/request.Request" representing the // client's request for the DescribeVirtualGateways operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3497,13 +3435,10 @@ func (c *DirectConnect) DescribeVirtualGatewaysRequest(input *DescribeVirtualGat // DescribeVirtualGateways API operation for AWS Direct Connect. // -// Returns a list of virtual private gateways owned by the AWS account. +// Lists the virtual private gateways owned by the AWS account. // // You can create one or more AWS Direct Connect private virtual interfaces -// linking to a virtual private gateway. A virtual private gateway can be managed -// via Amazon Virtual Private Cloud (VPC) console or the EC2 CreateVpnGateway -// (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateVpnGateway.html) -// action. +// linked to a virtual private gateway. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3513,13 +3448,11 @@ func (c *DirectConnect) DescribeVirtualGatewaysRequest(input *DescribeVirtualGat // API operation DescribeVirtualGateways for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeVirtualGateways func (c *DirectConnect) DescribeVirtualGateways(input *DescribeVirtualGatewaysInput) (*DescribeVirtualGatewaysOutput, error) { @@ -3547,8 +3480,8 @@ const opDescribeVirtualInterfaces = "DescribeVirtualInterfaces" // DescribeVirtualInterfacesRequest generates a "aws/request.Request" representing the // client's request for the DescribeVirtualInterfaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3594,7 +3527,7 @@ func (c *DirectConnect) DescribeVirtualInterfacesRequest(input *DescribeVirtualI // a single virtual interface is returned. // // A virtual interface (VLAN) transmits the traffic between the AWS Direct Connect -// location and the customer. 
+// location and the customer network. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3604,13 +3537,11 @@ func (c *DirectConnect) DescribeVirtualInterfacesRequest(input *DescribeVirtualI // API operation DescribeVirtualInterfaces for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DescribeVirtualInterfaces func (c *DirectConnect) DescribeVirtualInterfaces(input *DescribeVirtualInterfacesInput) (*DescribeVirtualInterfacesOutput, error) { @@ -3638,8 +3569,8 @@ const opDisassociateConnectionFromLag = "DisassociateConnectionFromLag" // DisassociateConnectionFromLagRequest generates a "aws/request.Request" representing the // client's request for the DisassociateConnectionFromLag operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3685,7 +3616,7 @@ func (c *DirectConnect) DisassociateConnectionFromLagRequest(input *Disassociate // remain associated with the LAG. A disassociated connection owned by an AWS // Direct Connect partner is automatically converted to an interconnect. // -// If disassociating the connection will cause the LAG to fall below its setting +// If disassociating the connection would cause the LAG to fall below its setting // for minimum number of operational connections, the request fails, except // when it's the last member of the LAG. If all connections are disassociated, // the LAG continues to exist as an empty LAG with no physical connections. @@ -3698,13 +3629,11 @@ func (c *DirectConnect) DisassociateConnectionFromLagRequest(input *Disassociate // API operation DisassociateConnectionFromLag for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. 
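The minimum-links caveat above matters whenever a member is detached from a LAG. A hedged sketch of `DisassociateConnectionFromLag` with placeholder IDs taken from the examples in this file:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	svc := directconnect.New(session.Must(session.NewSession()))

	// The connection is detached from the LAG but keeps running as a
	// standalone connection, as the description above explains.
	conn, err := svc.DisassociateConnectionFromLag(&directconnect.DisassociateConnectionFromLagInput{
		ConnectionId: aws.String("dxcon-abc123"), // placeholder
		LagId:        aws.String("dxlag-abc123"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("connection state:", aws.StringValue(conn.ConnectionState))
}
```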
// // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/DisassociateConnectionFromLag func (c *DirectConnect) DisassociateConnectionFromLag(input *DisassociateConnectionFromLagInput) (*Connection, error) { @@ -3732,8 +3661,8 @@ const opTagResource = "TagResource" // TagResourceRequest generates a "aws/request.Request" representing the // client's request for the TagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3772,12 +3701,11 @@ func (c *DirectConnect) TagResourceRequest(input *TagResourceInput) (req *reques // TagResource API operation for AWS Direct Connect. // -// Adds the specified tags to the specified Direct Connect resource. Each Direct -// Connect resource can have a maximum of 50 tags. +// Adds the specified tags to the specified AWS Direct Connect resource. Each +// resource can have a maximum of 50 tags. // // Each tag consists of a key and an optional value. If a tag with the same -// key is already associated with the Direct Connect resource, this action updates -// its value. +// key is already associated with the resource, this action updates its value. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3791,16 +3719,13 @@ func (c *DirectConnect) TagResourceRequest(input *TagResourceInput) (req *reques // A tag key was specified more than once. // // * ErrCodeTooManyTagsException "TooManyTagsException" -// You have reached the limit on the number of tags that can be assigned to -// a Direct Connect resource. +// You have reached the limit on the number of tags that can be assigned. // -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/TagResource func (c *DirectConnect) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { @@ -3828,8 +3753,8 @@ const opUntagResource = "UntagResource" // UntagResourceRequest generates a "aws/request.Request" representing the // client's request for the UntagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3868,7 +3793,7 @@ func (c *DirectConnect) UntagResourceRequest(input *UntagResourceInput) (req *re // UntagResource API operation for AWS Direct Connect. 
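A sketch of the tagging round trip described above: add a tag (re-using an existing key updates its value, and at most 50 tags are allowed per resource) and then remove it. The ARN is a placeholder, and the `TagKeys` field on the untag input is assumed from the generated struct:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	svc := directconnect.New(session.Must(session.NewSession()))
	arn := aws.String("arn:aws:directconnect:us-east-1:123456789012:dxcon/dxcon-abc123") // placeholder ARN

	// Add (or update) a single key/value tag on the resource.
	_, err := svc.TagResource(&directconnect.TagResourceInput{
		ResourceArn: arn,
		Tags: []*directconnect.Tag{
			{Key: aws.String("environment"), Value: aws.String("production")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Removing the tag later only needs the key.
	_, err = svc.UntagResource(&directconnect.UntagResourceInput{
		ResourceArn: arn,
		TagKeys:     []*string{aws.String("environment")},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```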
// -// Removes one or more tags from the specified Direct Connect resource. +// Removes one or more tags from the specified AWS Direct Connect resource. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3878,13 +3803,11 @@ func (c *DirectConnect) UntagResourceRequest(input *UntagResourceInput) (req *re // API operation UntagResource for usage and error information. // // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/UntagResource func (c *DirectConnect) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { @@ -3912,8 +3835,8 @@ const opUpdateLag = "UpdateLag" // UpdateLagRequest generates a "aws/request.Request" representing the // client's request for the UpdateLag operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3952,7 +3875,7 @@ func (c *DirectConnect) UpdateLagRequest(input *UpdateLagInput) (req *request.Re // UpdateLag API operation for AWS Direct Connect. // -// Updates the attributes of a link aggregation group (LAG). +// Updates the attributes of the specified link aggregation group (LAG). // // You can update the following attributes: // @@ -3962,11 +3885,11 @@ func (c *DirectConnect) UpdateLagRequest(input *UpdateLagInput) (req *request.Re // for the LAG itself to be operational. // // When you create a LAG, the default value for the minimum number of operational -// connections is zero (0). If you update this value, and the number of operational -// connections falls below the specified value, the LAG will automatically go -// down to avoid overutilization of the remaining connections. Adjusting this -// value should be done with care as it could force the LAG down if the value -// is set higher than the current number of operational connections. +// connections is zero (0). If you update this value and the number of operational +// connections falls below the specified value, the LAG automatically goes down +// to avoid over-utilization of the remaining connections. Adjust this value +// with care, as it could force the LAG down if it is set higher than the current +// number of operational connections. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3976,13 +3899,11 @@ func (c *DirectConnect) UpdateLagRequest(input *UpdateLagInput) (req *request.Re // API operation UpdateLag for usage and error information. 
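The `MinimumLinks` warning above is easiest to see in a call. A sketch that raises the threshold on a placeholder LAG; `MinimumLinks` on the input and `LagState` on the returned `Lag` are assumed generated field names, not shown in this hunk:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	svc := directconnect.New(session.Must(session.NewSession()))

	// As the documentation above warns, setting this higher than the number
	// of currently operational connections forces the LAG down.
	lag, err := svc.UpdateLag(&directconnect.UpdateLagInput{
		LagId:        aws.String("dxlag-abc123"), // placeholder
		MinimumLinks: aws.Int64(2),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("LAG %s: state=%s minimumLinks=%d\n",
		aws.StringValue(lag.LagId),
		aws.StringValue(lag.LagState),
		aws.Int64Value(lag.MinimumLinks))
}
```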
// // Returned Error Codes: -// * ErrCodeServerException "ServerException" -// A server-side error occurred during the API call. The error message will -// contain additional details about the cause. +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. // -// * ErrCodeClientException "ClientException" -// The API was called with invalid parameters. The error message will contain -// additional details about the cause. +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/UpdateLag func (c *DirectConnect) UpdateLag(input *UpdateLagInput) (*Lag, error) { @@ -4006,54 +3927,123 @@ func (c *DirectConnect) UpdateLagWithContext(ctx aws.Context, input *UpdateLagIn return out, req.Send() } -// Container for the parameters to the AllocateConnectionOnInterconnect operation. +const opUpdateVirtualInterfaceAttributes = "UpdateVirtualInterfaceAttributes" + +// UpdateVirtualInterfaceAttributesRequest generates a "aws/request.Request" representing the +// client's request for the UpdateVirtualInterfaceAttributes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateVirtualInterfaceAttributes for more information on using the UpdateVirtualInterfaceAttributes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateVirtualInterfaceAttributesRequest method. +// req, resp := client.UpdateVirtualInterfaceAttributesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/UpdateVirtualInterfaceAttributes +func (c *DirectConnect) UpdateVirtualInterfaceAttributesRequest(input *UpdateVirtualInterfaceAttributesInput) (req *request.Request, output *UpdateVirtualInterfaceAttributesOutput) { + op := &request.Operation{ + Name: opUpdateVirtualInterfaceAttributes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateVirtualInterfaceAttributesInput{} + } + + output = &UpdateVirtualInterfaceAttributesOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateVirtualInterfaceAttributes API operation for AWS Direct Connect. +// +// Updates the specified attributes of the specified virtual private interface. +// +// Setting the MTU of a virtual interface to 9001 (jumbo frames) can cause an +// update to the underlying physical connection if it wasn't updated to support +// jumbo frames. Updating the connection disrupts network connectivity for all +// virtual interfaces associated with the connection for up to 30 seconds. To +// check whether your connection supports jumbo frames, call DescribeConnections. +// To check whether your virtual interface supports jumbo frames, call DescribeVirtualInterfaces. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
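A sketch of the newly added `UpdateVirtualInterfaceAttributes` operation enabling jumbo frames, assuming the input carries `VirtualInterfaceId` and `Mtu` fields as the MTU discussion above implies; the interface ID is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	svc := directconnect.New(session.Must(session.NewSession()))

	// Per the operation description above, this may briefly disrupt
	// connectivity if the underlying connection has to be updated to
	// support jumbo frames.
	out, err := svc.UpdateVirtualInterfaceAttributes(&directconnect.UpdateVirtualInterfaceAttributesInput{
		VirtualInterfaceId: aws.String("dxvif-123dfg56"), // placeholder
		Mtu:                aws.Int64(9001),
	})
	if err != nil {
		log.Fatal(err)
	}
	// The generated String method prints the updated interface attributes.
	fmt.Println(out)
}
```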
+// +// See the AWS API reference guide for AWS Direct Connect's +// API operation UpdateVirtualInterfaceAttributes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "DirectConnectServerException" +// A server-side error occurred. +// +// * ErrCodeClientException "DirectConnectClientException" +// One or more parameters are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25/UpdateVirtualInterfaceAttributes +func (c *DirectConnect) UpdateVirtualInterfaceAttributes(input *UpdateVirtualInterfaceAttributesInput) (*UpdateVirtualInterfaceAttributesOutput, error) { + req, out := c.UpdateVirtualInterfaceAttributesRequest(input) + return out, req.Send() +} + +// UpdateVirtualInterfaceAttributesWithContext is the same as UpdateVirtualInterfaceAttributes with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateVirtualInterfaceAttributes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectConnect) UpdateVirtualInterfaceAttributesWithContext(ctx aws.Context, input *UpdateVirtualInterfaceAttributesInput, opts ...request.Option) (*UpdateVirtualInterfaceAttributesOutput, error) { + req, out := c.UpdateVirtualInterfaceAttributesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + type AllocateConnectionOnInterconnectInput struct { _ struct{} `type:"structure"` - // Bandwidth of the connection. - // - // Example: "500Mbps" - // - // Default: None - // - // Values: 50Mbps, 100Mbps, 200Mbps, 300Mbps, 400Mbps, or 500Mbps + // The bandwidth of the connection, in Mbps. The possible values are 50Mbps, + // 100Mbps, 200Mbps, 300Mbps, 400Mbps, and 500Mbps. // // Bandwidth is a required field Bandwidth *string `locationName:"bandwidth" type:"string" required:"true"` - // Name of the provisioned connection. - // - // Example: "500M Connection to AWS" - // - // Default: None + // The name of the provisioned connection. // // ConnectionName is a required field ConnectionName *string `locationName:"connectionName" type:"string" required:"true"` - // ID of the interconnect on which the connection will be provisioned. - // - // Example: dxcon-456abc78 - // - // Default: None + // The ID of the interconnect on which the connection will be provisioned. For + // example, dxcon-456abc78. // // InterconnectId is a required field InterconnectId *string `locationName:"interconnectId" type:"string" required:"true"` - // Numeric account Id of the customer for whom the connection will be provisioned. - // - // Example: 123443215678 - // - // Default: None + // The ID of the AWS account of the customer for whom the connection will be + // provisioned. // // OwnerAccount is a required field OwnerAccount *string `locationName:"ownerAccount" type:"string" required:"true"` // The dedicated VLAN provisioned to the connection. // - // Example: 101 - // - // Default: None - // // Vlan is a required field Vlan *int64 `locationName:"vlan" type:"integer" required:"true"` } @@ -4123,54 +4113,32 @@ func (s *AllocateConnectionOnInterconnectInput) SetVlan(v int64) *AllocateConnec return s } -// Container for the parameters to theHostedConnection operation. 
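Every operation in this file also has a `*WithContext` variant with the cancellation semantics described above. A minimal sketch using `DescribeConnectionsWithContext` with a timeout; the pattern is identical for `UpdateVirtualInterfaceAttributesWithContext` and the rest:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	svc := directconnect.New(session.Must(session.NewSession()))

	// The request is abandoned if the context is cancelled or times out.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	out, err := svc.DescribeConnectionsWithContext(ctx, &directconnect.DescribeConnectionsInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("connections:", len(out.Connections))
}
```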
type AllocateHostedConnectionInput struct { _ struct{} `type:"structure"` - // The bandwidth of the connection. - // - // Example: 500Mbps - // - // Default: None - // - // Values: 50Mbps, 100Mbps, 200Mbps, 300Mbps, 400Mbps, or 500Mbps + // The bandwidth of the hosted connection, in Mbps. The possible values are + // 50Mbps, 100Mbps, 200Mbps, 300Mbps, 400Mbps, and 500Mbps. // // Bandwidth is a required field Bandwidth *string `locationName:"bandwidth" type:"string" required:"true"` - // The ID of the interconnect or LAG on which the connection will be provisioned. - // - // Example: dxcon-456abc78 or dxlag-abc123 - // - // Default: None + // The ID of the interconnect or LAG. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // The name of the provisioned connection. - // - // Example: "500M Connection to AWS" - // - // Default: None + // The name of the hosted connection. // // ConnectionName is a required field ConnectionName *string `locationName:"connectionName" type:"string" required:"true"` - // The numeric account ID of the customer for whom the connection will be provisioned. - // - // Example: 123443215678 - // - // Default: None + // The ID of the AWS account ID of the customer for the connection. // // OwnerAccount is a required field OwnerAccount *string `locationName:"ownerAccount" type:"string" required:"true"` // The dedicated VLAN provisioned to the hosted connection. // - // Example: 101 - // - // Default: None - // // Vlan is a required field Vlan *int64 `locationName:"vlan" type:"integer" required:"true"` } @@ -4240,27 +4208,20 @@ func (s *AllocateHostedConnectionInput) SetVlan(v int64) *AllocateHostedConnecti return s } -// Container for the parameters to the AllocatePrivateVirtualInterface operation. type AllocatePrivateVirtualInterfaceInput struct { _ struct{} `type:"structure"` - // The connection ID on which the private virtual interface is provisioned. - // - // Default: None + // The ID of the connection on which the private virtual interface is provisioned. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // Detailed information for the private virtual interface to be provisioned. - // - // Default: None + // Information about the private virtual interface. // // NewPrivateVirtualInterfaceAllocation is a required field NewPrivateVirtualInterfaceAllocation *NewPrivateVirtualInterfaceAllocation `locationName:"newPrivateVirtualInterfaceAllocation" type:"structure" required:"true"` - // The AWS account that will own the new private virtual interface. - // - // Default: None + // The ID of the AWS account that owns the virtual private interface. // // OwnerAccount is a required field OwnerAccount *string `locationName:"ownerAccount" type:"string" required:"true"` @@ -4318,27 +4279,20 @@ func (s *AllocatePrivateVirtualInterfaceInput) SetOwnerAccount(v string) *Alloca return s } -// Container for the parameters to the AllocatePublicVirtualInterface operation. type AllocatePublicVirtualInterfaceInput struct { _ struct{} `type:"structure"` - // The connection ID on which the public virtual interface is provisioned. - // - // Default: None + // The ID of the connection on which the public virtual interface is provisioned. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // Detailed information for the public virtual interface to be provisioned. 
- // - // Default: None + // Information about the public virtual interface. // // NewPublicVirtualInterfaceAllocation is a required field NewPublicVirtualInterfaceAllocation *NewPublicVirtualInterfaceAllocation `locationName:"newPublicVirtualInterfaceAllocation" type:"structure" required:"true"` - // The AWS account that will own the new public virtual interface. - // - // Default: None + // The ID of the AWS account that owns the public virtual interface. // // OwnerAccount is a required field OwnerAccount *string `locationName:"ownerAccount" type:"string" required:"true"` @@ -4396,24 +4350,15 @@ func (s *AllocatePublicVirtualInterfaceInput) SetOwnerAccount(v string) *Allocat return s } -// Container for the parameters to the AssociateConnectionWithLag operation. type AssociateConnectionWithLagInput struct { _ struct{} `type:"structure"` - // The ID of the connection. - // - // Example: dxcon-abc123 - // - // Default: None + // The ID of the connection. For example, dxcon-abc123. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // The ID of the LAG with which to associate the connection. - // - // Example: dxlag-abc123 - // - // Default: None + // The ID of the LAG with which to associate the connection. For example, dxlag-abc123. // // LagId is a required field LagId *string `locationName:"lagId" type:"string" required:"true"` @@ -4457,25 +4402,16 @@ func (s *AssociateConnectionWithLagInput) SetLagId(v string) *AssociateConnectio return s } -// Container for the parameters to the AssociateHostedConnection operation. type AssociateHostedConnectionInput struct { _ struct{} `type:"structure"` // The ID of the hosted connection. // - // Example: dxcon-abc123 - // - // Default: None - // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` // The ID of the interconnect or the LAG. // - // Example: dxcon-abc123 or dxlag-abc123 - // - // Default: None - // // ParentConnectionId is a required field ParentConnectionId *string `locationName:"parentConnectionId" type:"string" required:"true"` } @@ -4518,25 +4454,16 @@ func (s *AssociateHostedConnectionInput) SetParentConnectionId(v string) *Associ return s } -// Container for the parameters to the AssociateVirtualInterface operation. type AssociateVirtualInterfaceInput struct { _ struct{} `type:"structure"` - // The ID of the LAG or connection with which to associate the virtual interface. - // - // Example: dxlag-abc123 or dxcon-abc123 - // - // Default: None + // The ID of the LAG or connection. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` // The ID of the virtual interface. // - // Example: dxvif-123dfg56 - // - // Default: None - // // VirtualInterfaceId is a required field VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string" required:"true"` } @@ -4579,58 +4506,56 @@ func (s *AssociateVirtualInterfaceInput) SetVirtualInterfaceId(v string) *Associ return s } -// A structure containing information about a BGP peer. +// Information about a BGP peer. type BGPPeer struct { _ struct{} `type:"structure"` - // Indicates the address family for the BGP peer. - // - // * ipv4: IPv4 address family - // - // * ipv6: IPv6 address family + // The address family for the BGP peer. AddressFamily *string `locationName:"addressFamily" type:"string" enum:"AddressFamily"` - // IP address assigned to the Amazon interface. 
- // - // Example: 192.168.1.1/30 or 2001:db8::1/125 + // The IP address assigned to the Amazon interface. AmazonAddress *string `locationName:"amazonAddress" type:"string"` // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. - // - // Example: 65000 Asn *int64 `locationName:"asn" type:"integer"` // The authentication key for BGP configuration. - // - // Example: asdf34example AuthKey *string `locationName:"authKey" type:"string"` - // The state of the BGP peer. + // The Direct Connect endpoint on which the BGP peer terminates. + AwsDeviceV2 *string `locationName:"awsDeviceV2" type:"string"` + + // The ID of the BGP peer. + BgpPeerId *string `locationName:"bgpPeerId" type:"string"` + + // The state of the BGP peer. The following are the possible values: // - // * Verifying: The BGP peering addresses or ASN require validation before - // the BGP peer can be created. This state only applies to BGP peers on a - // public virtual interface. + // * verifying: The BGP peering addresses or ASN require validation before + // the BGP peer can be created. This state applies only to public virtual + // interfaces. // - // * Pending: The BGP peer has been created, and is in this state until it + // * pending: The BGP peer is created, and remains in this state until it // is ready to be established. // - // * Available: The BGP peer can be established. + // * available: The BGP peer is ready to be established. // - // * Deleting: The BGP peer is in the process of being deleted. + // * deleting: The BGP peer is being deleted. // - // * Deleted: The BGP peer has been deleted and cannot be established. + // * deleted: The BGP peer is deleted and cannot be established. BgpPeerState *string `locationName:"bgpPeerState" type:"string" enum:"BGPPeerState"` - // The Up/Down state of the BGP peer. + // The status of the BGP peer. The following are the possible values: // - // * Up: The BGP peer is established. + // * up: The BGP peer is established. This state does not indicate the state + // of the routing function. Ensure that you are receiving routes over the + // BGP session. // - // * Down: The BGP peer is down. + // * down: The BGP peer is down. + // + // * unknown: The BGP peer status is unknown. BgpStatus *string `locationName:"bgpStatus" type:"string" enum:"BGPStatus"` - // IP address assigned to the customer interface. - // - // Example: 192.168.1.2/30 or 2001:db8::2/125 + // The IP address assigned to the customer interface. CustomerAddress *string `locationName:"customerAddress" type:"string"` } @@ -4668,6 +4593,18 @@ func (s *BGPPeer) SetAuthKey(v string) *BGPPeer { return s } +// SetAwsDeviceV2 sets the AwsDeviceV2 field's value. +func (s *BGPPeer) SetAwsDeviceV2(v string) *BGPPeer { + s.AwsDeviceV2 = &v + return s +} + +// SetBgpPeerId sets the BgpPeerId field's value. +func (s *BGPPeer) SetBgpPeerId(v string) *BGPPeer { + s.BgpPeerId = &v + return s +} + // SetBgpPeerState sets the BgpPeerState field's value. func (s *BGPPeer) SetBgpPeerState(v string) *BGPPeer { s.BgpPeerState = &v @@ -4686,16 +4623,10 @@ func (s *BGPPeer) SetCustomerAddress(v string) *BGPPeer { return s } -// Container for the parameters to the ConfirmConnection operation. type ConfirmConnectionInput struct { _ struct{} `type:"structure"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the hosted connection. 
// // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` @@ -4730,32 +4661,31 @@ func (s *ConfirmConnectionInput) SetConnectionId(v string) *ConfirmConnectionInp return s } -// The response received when ConfirmConnection is called. type ConfirmConnectionOutput struct { _ struct{} `type:"structure"` - // State of the connection. + // The state of the connection. The following are the possible values: // - // * Ordering: The initial state of a hosted connection provisioned on an + // * ordering: The initial state of a hosted connection provisioned on an // interconnect. The connection stays in the ordering state until the owner // of the hosted connection confirms or declines the connection order. // - // * Requested: The initial state of a standard connection. The connection + // * requested: The initial state of a standard connection. The connection // stays in the requested state until the Letter of Authorization (LOA) is // sent to the customer. // - // * Pending: The connection has been approved, and is being initialized. + // * pending: The connection has been approved and is being initialized. // - // * Available: The network link is up, and the connection is ready for use. + // * available: The network link is up and the connection is ready for use. // - // * Down: The network link is down. + // * down: The network link is down. // - // * Deleting: The connection is in the process of being deleted. + // * deleting: The connection is being deleted. // - // * Deleted: The connection has been deleted. + // * deleted: The connection has been deleted. // - // * Rejected: A hosted connection in the 'Ordering' state will enter the - // 'Rejected' state if it is deleted by the end customer. + // * rejected: A hosted connection in the ordering state enters the rejected + // state if it is deleted by the customer. ConnectionState *string `locationName:"connectionState" type:"string" enum:"ConnectionState"` } @@ -4775,33 +4705,17 @@ func (s *ConfirmConnectionOutput) SetConnectionState(v string) *ConfirmConnectio return s } -// Container for the parameters to the ConfirmPrivateVirtualInterface operation. type ConfirmPrivateVirtualInterfaceInput struct { _ struct{} `type:"structure"` - // ID of the direct connect gateway that will be attached to the virtual interface. - // - // A direct connect gateway can be managed via the AWS Direct Connect console - // or the CreateDirectConnectGateway action. - // - // Default: None + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // ID of the virtual private gateway that will be attached to the virtual interface. - // - // A virtual private gateway can be managed via the Amazon Virtual Private Cloud - // (VPC) console or the EC2 CreateVpnGateway (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateVpnGateway.html) - // action. - // - // Default: None + // The ID of the virtual private gateway. VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string"` // The ID of the virtual interface. // - // Example: dxvif-123dfg56 - // - // Default: None - // // VirtualInterfaceId is a required field VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string" required:"true"` } @@ -4847,37 +4761,36 @@ func (s *ConfirmPrivateVirtualInterfaceInput) SetVirtualInterfaceId(v string) *C return s } -// The response received when ConfirmPrivateVirtualInterface is called. 
type ConfirmPrivateVirtualInterfaceOutput struct { _ struct{} `type:"structure"` - // State of the virtual interface. + // The state of the virtual interface. The following are the possible values: // - // * Confirming: The creation of the virtual interface is pending confirmation + // * confirming: The creation of the virtual interface is pending confirmation // from the virtual interface owner. If the owner of the virtual interface // is different from the owner of the connection on which it is provisioned, // then the virtual interface will remain in this state until it is confirmed // by the virtual interface owner. // - // * Verifying: This state only applies to public virtual interfaces. Each + // * verifying: This state only applies to public virtual interfaces. Each // public virtual interface needs validation before the virtual interface // can be created. // - // * Pending: A virtual interface is in this state from the time that it + // * pending: A virtual interface is in this state from the time that it // is created until the virtual interface is ready to forward traffic. // - // * Available: A virtual interface that is able to forward traffic. + // * available: A virtual interface that is able to forward traffic. // - // * Down: A virtual interface that is BGP down. + // * down: A virtual interface that is BGP down. // - // * Deleting: A virtual interface is in this state immediately after calling + // * deleting: A virtual interface is in this state immediately after calling // DeleteVirtualInterface until it can no longer forward traffic. // - // * Deleted: A virtual interface that cannot forward traffic. + // * deleted: A virtual interface that cannot forward traffic. // - // * Rejected: The virtual interface owner has declined creation of the virtual - // interface. If a virtual interface in the 'Confirming' state is deleted - // by the virtual interface owner, the virtual interface will enter the 'Rejected' + // * rejected: The virtual interface owner has declined creation of the virtual + // interface. If a virtual interface in the Confirming state is deleted by + // the virtual interface owner, the virtual interface enters the Rejected // state. VirtualInterfaceState *string `locationName:"virtualInterfaceState" type:"string" enum:"VirtualInterfaceState"` } @@ -4898,16 +4811,11 @@ func (s *ConfirmPrivateVirtualInterfaceOutput) SetVirtualInterfaceState(v string return s } -// Container for the parameters to the ConfirmPublicVirtualInterface operation. type ConfirmPublicVirtualInterfaceInput struct { _ struct{} `type:"structure"` // The ID of the virtual interface. // - // Example: dxvif-123dfg56 - // - // Default: None - // // VirtualInterfaceId is a required field VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string" required:"true"` } @@ -4941,37 +4849,36 @@ func (s *ConfirmPublicVirtualInterfaceInput) SetVirtualInterfaceId(v string) *Co return s } -// The response received when ConfirmPublicVirtualInterface is called. type ConfirmPublicVirtualInterfaceOutput struct { _ struct{} `type:"structure"` - // State of the virtual interface. + // The state of the virtual interface. The following are the possible values: // - // * Confirming: The creation of the virtual interface is pending confirmation + // * confirming: The creation of the virtual interface is pending confirmation // from the virtual interface owner. 
If the owner of the virtual interface // is different from the owner of the connection on which it is provisioned, // then the virtual interface will remain in this state until it is confirmed // by the virtual interface owner. // - // * Verifying: This state only applies to public virtual interfaces. Each + // * verifying: This state only applies to public virtual interfaces. Each // public virtual interface needs validation before the virtual interface // can be created. // - // * Pending: A virtual interface is in this state from the time that it + // * pending: A virtual interface is in this state from the time that it // is created until the virtual interface is ready to forward traffic. // - // * Available: A virtual interface that is able to forward traffic. + // * available: A virtual interface that is able to forward traffic. // - // * Down: A virtual interface that is BGP down. + // * down: A virtual interface that is BGP down. // - // * Deleting: A virtual interface is in this state immediately after calling + // * deleting: A virtual interface is in this state immediately after calling // DeleteVirtualInterface until it can no longer forward traffic. // - // * Deleted: A virtual interface that cannot forward traffic. + // * deleted: A virtual interface that cannot forward traffic. // - // * Rejected: The virtual interface owner has declined creation of the virtual - // interface. If a virtual interface in the 'Confirming' state is deleted - // by the virtual interface owner, the virtual interface will enter the 'Rejected' + // * rejected: The virtual interface owner has declined creation of the virtual + // interface. If a virtual interface in the Confirming state is deleted by + // the virtual interface owner, the virtual interface enters the Rejected // state. VirtualInterfaceState *string `locationName:"virtualInterfaceState" type:"string" enum:"VirtualInterfaceState"` } @@ -4992,91 +4899,75 @@ func (s *ConfirmPublicVirtualInterfaceOutput) SetVirtualInterfaceState(v string) return s } -// A connection represents the physical network connection between the AWS Direct -// Connect location and the customer. +// Information about an AWS Direct Connect connection. type Connection struct { _ struct{} `type:"structure"` - // The Direct Connection endpoint which the physical connection terminates on. - AwsDevice *string `locationName:"awsDevice" type:"string"` + // The Direct Connect endpoint on which the physical connection terminates. + AwsDevice *string `locationName:"awsDevice" deprecated:"true" type:"string"` - // Bandwidth of the connection. - // - // Example: 1Gbps (for regular connections), or 500Mbps (for hosted connections) - // - // Default: None + // The Direct Connect endpoint on which the physical connection terminates. + AwsDeviceV2 *string `locationName:"awsDeviceV2" type:"string"` + + // The bandwidth of the connection. Bandwidth *string `locationName:"bandwidth" type:"string"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the connection. ConnectionId *string `locationName:"connectionId" type:"string"` // The name of the connection. - // - // Example: "My Connection to AWS" - // - // Default: None ConnectionName *string `locationName:"connectionName" type:"string"` - // State of the connection. + // The state of the connection. 
The following are the possible values: // - // * Ordering: The initial state of a hosted connection provisioned on an + // * ordering: The initial state of a hosted connection provisioned on an // interconnect. The connection stays in the ordering state until the owner // of the hosted connection confirms or declines the connection order. // - // * Requested: The initial state of a standard connection. The connection + // * requested: The initial state of a standard connection. The connection // stays in the requested state until the Letter of Authorization (LOA) is // sent to the customer. // - // * Pending: The connection has been approved, and is being initialized. + // * pending: The connection has been approved and is being initialized. // - // * Available: The network link is up, and the connection is ready for use. + // * available: The network link is up and the connection is ready for use. // - // * Down: The network link is down. + // * down: The network link is down. // - // * Deleting: The connection is in the process of being deleted. + // * deleting: The connection is being deleted. // - // * Deleted: The connection has been deleted. + // * deleted: The connection has been deleted. // - // * Rejected: A hosted connection in the 'Ordering' state will enter the - // 'Rejected' state if it is deleted by the end customer. + // * rejected: A hosted connection in the ordering state enters the rejected + // state if it is deleted by the customer. ConnectionState *string `locationName:"connectionState" type:"string" enum:"ConnectionState"` + // Indicates whether the connection supports a secondary BGP peer in the same + // address family (IPv4/IPv6). + HasLogicalRedundancy *string `locationName:"hasLogicalRedundancy" type:"string" enum:"HasLogicalRedundancy"` + + // Indicates whether jumbo frames (9001 MTU) are supported. + JumboFrameCapable *bool `locationName:"jumboFrameCapable" type:"boolean"` + // The ID of the LAG. - // - // Example: dxlag-fg5678gh LagId *string `locationName:"lagId" type:"string"` // The time of the most recent call to DescribeLoa for this connection. - LoaIssueTime *time.Time `locationName:"loaIssueTime" type:"timestamp" timestampFormat:"unix"` + LoaIssueTime *time.Time `locationName:"loaIssueTime" type:"timestamp"` - // Where the connection is located. - // - // Example: EqSV5 - // - // Default: None + // The location of the connection. Location *string `locationName:"location" type:"string"` - // The AWS account that will own the new connection. + // The ID of the AWS account that owns the connection. OwnerAccount *string `locationName:"ownerAccount" type:"string"` // The name of the AWS Direct Connect service provider associated with the connection. PartnerName *string `locationName:"partnerName" type:"string"` - // The AWS region where the connection is located. - // - // Example: us-east-1 - // - // Default: None + // The AWS Region where the connection is located. Region *string `locationName:"region" type:"string"` - // The VLAN ID. - // - // Example: 101 + // The ID of the VLAN. Vlan *int64 `locationName:"vlan" type:"integer"` } @@ -5096,6 +4987,12 @@ func (s *Connection) SetAwsDevice(v string) *Connection { return s } +// SetAwsDeviceV2 sets the AwsDeviceV2 field's value. +func (s *Connection) SetAwsDeviceV2(v string) *Connection { + s.AwsDeviceV2 = &v + return s +} + // SetBandwidth sets the Bandwidth field's value. 
func (s *Connection) SetBandwidth(v string) *Connection { s.Bandwidth = &v @@ -5120,6 +5017,18 @@ func (s *Connection) SetConnectionState(v string) *Connection { return s } +// SetHasLogicalRedundancy sets the HasLogicalRedundancy field's value. +func (s *Connection) SetHasLogicalRedundancy(v string) *Connection { + s.HasLogicalRedundancy = &v + return s +} + +// SetJumboFrameCapable sets the JumboFrameCapable field's value. +func (s *Connection) SetJumboFrameCapable(v bool) *Connection { + s.JumboFrameCapable = &v + return s +} + // SetLagId sets the LagId field's value. func (s *Connection) SetLagId(v string) *Connection { s.LagId = &v @@ -5162,11 +5071,10 @@ func (s *Connection) SetVlan(v int64) *Connection { return s } -// A structure containing a list of connections. type Connections struct { _ struct{} `type:"structure"` - // A list of connections. + // The connections. Connections []*Connection `locationName:"connections" type:"list"` } @@ -5186,20 +5094,13 @@ func (s *Connections) SetConnections(v []*Connection) *Connections { return s } -// Container for the parameters to the CreateBGPPeer operation. type CreateBGPPeerInput struct { _ struct{} `type:"structure"` - // Detailed information for the BGP peer to be created. - // - // Default: None + // Information about the BGP peer. NewBGPPeer *NewBGPPeer `locationName:"newBGPPeer" type:"structure"` - // The ID of the virtual interface on which the BGP peer will be provisioned. - // - // Example: dxvif-456abc78 - // - // Default: None + // The ID of the virtual interface. VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string"` } @@ -5225,12 +5126,10 @@ func (s *CreateBGPPeerInput) SetVirtualInterfaceId(v string) *CreateBGPPeerInput return s } -// The response received when CreateBGPPeer is called. type CreateBGPPeerOutput struct { _ struct{} `type:"structure"` - // A virtual interface (VLAN) transmits the traffic between the AWS Direct Connect - // location and the customer. + // The virtual interface. VirtualInterface *VirtualInterface `locationName:"virtualInterface" type:"structure"` } @@ -5250,38 +5149,23 @@ func (s *CreateBGPPeerOutput) SetVirtualInterface(v *VirtualInterface) *CreateBG return s } -// Container for the parameters to the CreateConnection operation. type CreateConnectionInput struct { _ struct{} `type:"structure"` - // Bandwidth of the connection. - // - // Example: 1Gbps - // - // Default: None + // The bandwidth of the connection. // // Bandwidth is a required field Bandwidth *string `locationName:"bandwidth" type:"string" required:"true"` // The name of the connection. // - // Example: "My Connection to AWS" - // - // Default: None - // // ConnectionName is a required field ConnectionName *string `locationName:"connectionName" type:"string" required:"true"` // The ID of the LAG. - // - // Example: dxlag-fg5678gh LagId *string `locationName:"lagId" type:"string"` - // Where the connection is located. - // - // Example: EqSV5 - // - // Default: None + // The location of the connection. // // Location is a required field Location *string `locationName:"location" type:"string" required:"true"` @@ -5340,26 +5224,16 @@ func (s *CreateConnectionInput) SetLocation(v string) *CreateConnectionInput { return s } -// Container for the parameters to the CreateDirectConnectGatewayAssociation -// operation. type CreateDirectConnectGatewayAssociationInput struct { _ struct{} `type:"structure"` - // The ID of the direct connect gateway. 
- // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" - // - // Default: None + // The ID of the Direct Connect gateway. // // DirectConnectGatewayId is a required field DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string" required:"true"` // The ID of the virtual private gateway. // - // Example: "vgw-abc123ef" - // - // Default: None - // // VirtualGatewayId is a required field VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string" required:"true"` } @@ -5402,12 +5276,10 @@ func (s *CreateDirectConnectGatewayAssociationInput) SetVirtualGatewayId(v strin return s } -// Container for the response from the CreateDirectConnectGatewayAssociation -// API call type CreateDirectConnectGatewayAssociationOutput struct { _ struct{} `type:"structure"` - // The direct connect gateway association to be created. + // The association to be created. DirectConnectGatewayAssociation *GatewayAssociation `locationName:"directConnectGatewayAssociation" type:"structure"` } @@ -5427,24 +5299,16 @@ func (s *CreateDirectConnectGatewayAssociationOutput) SetDirectConnectGatewayAss return s } -// Container for the parameters to the CreateDirectConnectGateway operation. type CreateDirectConnectGatewayInput struct { _ struct{} `type:"structure"` // The autonomous system number (ASN) for Border Gateway Protocol (BGP) to be // configured on the Amazon side of the connection. The ASN must be in the private - // range of 64,512 to 65,534 or 4,200,000,000 to 4,294,967,294 - // - // Example: 65200 - // - // Default: 64512 + // range of 64,512 to 65,534 or 4,200,000,000 to 4,294,967,294. The default + // is 64512. AmazonSideAsn *int64 `locationName:"amazonSideAsn" type:"long"` - // The name of the direct connect gateway. - // - // Example: "My direct connect gateway" - // - // Default: None + // The name of the Direct Connect gateway. // // DirectConnectGatewayName is a required field DirectConnectGatewayName *string `locationName:"directConnectGatewayName" type:"string" required:"true"` @@ -5485,11 +5349,10 @@ func (s *CreateDirectConnectGatewayInput) SetDirectConnectGatewayName(v string) return s } -// Container for the response from the CreateDirectConnectGateway API call type CreateDirectConnectGatewayOutput struct { _ struct{} `type:"structure"` - // The direct connect gateway to be created. + // The Direct Connect gateway. DirectConnectGateway *Gateway `locationName:"directConnectGateway" type:"structure"` } @@ -5509,40 +5372,23 @@ func (s *CreateDirectConnectGatewayOutput) SetDirectConnectGateway(v *Gateway) * return s } -// Container for the parameters to the CreateInterconnect operation. type CreateInterconnectInput struct { _ struct{} `type:"structure"` - // The port bandwidth - // - // Example: 1Gbps - // - // Default: None - // - // Available values: 1Gbps,10Gbps + // The port bandwidth, in Gbps. The possible values are 1 and 10. // // Bandwidth is a required field Bandwidth *string `locationName:"bandwidth" type:"string" required:"true"` // The name of the interconnect. // - // Example: "1G Interconnect to AWS" - // - // Default: None - // // InterconnectName is a required field InterconnectName *string `locationName:"interconnectName" type:"string" required:"true"` // The ID of the LAG. - // - // Example: dxlag-fg5678gh LagId *string `locationName:"lagId" type:"string"` - // Where the interconnect is located - // - // Example: EqSV5 - // - // Default: None + // The location of the interconnect. 
// // Location is a required field Location *string `locationName:"location" type:"string" required:"true"` @@ -5601,38 +5447,24 @@ func (s *CreateInterconnectInput) SetLocation(v string) *CreateInterconnectInput return s } -// Container for the parameters to the CreateLag operation. type CreateLagInput struct { _ struct{} `type:"structure"` // The ID of an existing connection to migrate to the LAG. - // - // Default: None ConnectionId *string `locationName:"connectionId" type:"string"` // The bandwidth of the individual physical connections bundled by the LAG. - // - // Default: None - // - // Available values: 1Gbps, 10Gbps + // The possible values are 1Gbps and 10Gbps. // // ConnectionsBandwidth is a required field ConnectionsBandwidth *string `locationName:"connectionsBandwidth" type:"string" required:"true"` // The name of the LAG. // - // Example: "3x10G LAG to AWS" - // - // Default: None - // // LagName is a required field LagName *string `locationName:"lagName" type:"string" required:"true"` - // The AWS Direct Connect location in which the LAG should be allocated. - // - // Example: EqSV5 - // - // Default: None + // The location for the LAG. // // Location is a required field Location *string `locationName:"location" type:"string" required:"true"` @@ -5640,8 +5472,6 @@ type CreateLagInput struct { // The number of physical connections initially provisioned and bundled by the // LAG. // - // Default: None - // // NumberOfConnections is a required field NumberOfConnections *int64 `locationName:"numberOfConnections" type:"integer" required:"true"` } @@ -5708,23 +5538,15 @@ func (s *CreateLagInput) SetNumberOfConnections(v int64) *CreateLagInput { return s } -// Container for the parameters to the CreatePrivateVirtualInterface operation. type CreatePrivateVirtualInterfaceInput struct { _ struct{} `type:"structure"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the connection. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // Detailed information for the private virtual interface to be created. - // - // Default: None + // Information about the private virtual interface. // // NewPrivateVirtualInterface is a required field NewPrivateVirtualInterface *NewPrivateVirtualInterface `locationName:"newPrivateVirtualInterface" type:"structure" required:"true"` @@ -5773,23 +5595,15 @@ func (s *CreatePrivateVirtualInterfaceInput) SetNewPrivateVirtualInterface(v *Ne return s } -// Container for the parameters to the CreatePublicVirtualInterface operation. type CreatePublicVirtualInterfaceInput struct { _ struct{} `type:"structure"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the connection. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // Detailed information for the public virtual interface to be created. - // - // Default: None + // Information about the public virtual interface. 
// // NewPublicVirtualInterface is a required field NewPublicVirtualInterface *NewPublicVirtualInterface `locationName:"newPublicVirtualInterface" type:"structure" required:"true"` @@ -5838,25 +5652,19 @@ func (s *CreatePublicVirtualInterfaceInput) SetNewPublicVirtualInterface(v *NewP return s } -// Container for the parameters to the DeleteBGPPeer operation. type DeleteBGPPeerInput struct { _ struct{} `type:"structure"` // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. - // - // Example: 65000 Asn *int64 `locationName:"asn" type:"integer"` - // IP address assigned to the customer interface. - // - // Example: 192.168.1.2/30 or 2001:db8::2/125 + // The ID of the BGP peer. + BgpPeerId *string `locationName:"bgpPeerId" type:"string"` + + // The IP address assigned to the customer interface. CustomerAddress *string `locationName:"customerAddress" type:"string"` - // The ID of the virtual interface from which the BGP peer will be deleted. - // - // Example: dxvif-456abc78 - // - // Default: None + // The ID of the virtual interface. VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string"` } @@ -5876,6 +5684,12 @@ func (s *DeleteBGPPeerInput) SetAsn(v int64) *DeleteBGPPeerInput { return s } +// SetBgpPeerId sets the BgpPeerId field's value. +func (s *DeleteBGPPeerInput) SetBgpPeerId(v string) *DeleteBGPPeerInput { + s.BgpPeerId = &v + return s +} + // SetCustomerAddress sets the CustomerAddress field's value. func (s *DeleteBGPPeerInput) SetCustomerAddress(v string) *DeleteBGPPeerInput { s.CustomerAddress = &v @@ -5888,12 +5702,10 @@ func (s *DeleteBGPPeerInput) SetVirtualInterfaceId(v string) *DeleteBGPPeerInput return s } -// The response received when DeleteBGPPeer is called. type DeleteBGPPeerOutput struct { _ struct{} `type:"structure"` - // A virtual interface (VLAN) transmits the traffic between the AWS Direct Connect - // location and the customer. + // The virtual interface. VirtualInterface *VirtualInterface `locationName:"virtualInterface" type:"structure"` } @@ -5913,16 +5725,10 @@ func (s *DeleteBGPPeerOutput) SetVirtualInterface(v *VirtualInterface) *DeleteBG return s } -// Container for the parameters to the DeleteConnection operation. type DeleteConnectionInput struct { _ struct{} `type:"structure"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the connection. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` @@ -5957,26 +5763,16 @@ func (s *DeleteConnectionInput) SetConnectionId(v string) *DeleteConnectionInput return s } -// Container for the parameters to the DeleteDirectConnectGatewayAssociation -// operation. type DeleteDirectConnectGatewayAssociationInput struct { _ struct{} `type:"structure"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" - // - // Default: None + // The ID of the Direct Connect gateway. // // DirectConnectGatewayId is a required field DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string" required:"true"` // The ID of the virtual private gateway. 
// - // Example: "vgw-abc123ef" - // - // Default: None - // // VirtualGatewayId is a required field VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string" required:"true"` } @@ -6019,12 +5815,10 @@ func (s *DeleteDirectConnectGatewayAssociationInput) SetVirtualGatewayId(v strin return s } -// Container for the response from the DeleteDirectConnectGatewayAssociation -// API call type DeleteDirectConnectGatewayAssociationOutput struct { _ struct{} `type:"structure"` - // The direct connect gateway association to be deleted. + // The association to be deleted. DirectConnectGatewayAssociation *GatewayAssociation `locationName:"directConnectGatewayAssociation" type:"structure"` } @@ -6044,15 +5838,10 @@ func (s *DeleteDirectConnectGatewayAssociationOutput) SetDirectConnectGatewayAss return s } -// Container for the parameters to the DeleteDirectConnectGateway operation. type DeleteDirectConnectGatewayInput struct { _ struct{} `type:"structure"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" - // - // Default: None + // The ID of the Direct Connect gateway. // // DirectConnectGatewayId is a required field DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string" required:"true"` @@ -6087,11 +5876,10 @@ func (s *DeleteDirectConnectGatewayInput) SetDirectConnectGatewayId(v string) *D return s } -// Container for the response from the DeleteDirectConnectGateway API call type DeleteDirectConnectGatewayOutput struct { _ struct{} `type:"structure"` - // The direct connect gateway to be deleted. + // The Direct Connect gateway. DirectConnectGateway *Gateway `locationName:"directConnectGateway" type:"structure"` } @@ -6111,14 +5899,11 @@ func (s *DeleteDirectConnectGatewayOutput) SetDirectConnectGateway(v *Gateway) * return s } -// Container for the parameters to the DeleteInterconnect operation. type DeleteInterconnectInput struct { _ struct{} `type:"structure"` // The ID of the interconnect. // - // Example: dxcon-abc123 - // // InterconnectId is a required field InterconnectId *string `locationName:"interconnectId" type:"string" required:"true"` } @@ -6152,26 +5937,25 @@ func (s *DeleteInterconnectInput) SetInterconnectId(v string) *DeleteInterconnec return s } -// The response received when DeleteInterconnect is called. type DeleteInterconnectOutput struct { _ struct{} `type:"structure"` - // State of the interconnect. + // The state of the interconnect. The following are the possible values: // - // * Requested: The initial state of an interconnect. The interconnect stays + // * requested: The initial state of an interconnect. The interconnect stays // in the requested state until the Letter of Authorization (LOA) is sent // to the customer. // - // * Pending: The interconnect has been approved, and is being initialized. + // * pending: The interconnect is approved, and is being initialized. // - // * Available: The network link is up, and the interconnect is ready for + // * available: The network link is up, and the interconnect is ready for // use. // - // * Down: The network link is down. + // * down: The network link is down. // - // * Deleting: The interconnect is in the process of being deleted. + // * deleting: The interconnect is being deleted. // - // * Deleted: The interconnect has been deleted. + // * deleted: The interconnect is deleted. 
InterconnectState *string `locationName:"interconnectState" type:"string" enum:"InterconnectState"` } @@ -6191,15 +5975,10 @@ func (s *DeleteInterconnectOutput) SetInterconnectState(v string) *DeleteInterco return s } -// Container for the parameters to the DeleteLag operation. type DeleteLagInput struct { _ struct{} `type:"structure"` - // The ID of the LAG to delete. - // - // Example: dxlag-abc123 - // - // Default: None + // The ID of the LAG. // // LagId is a required field LagId *string `locationName:"lagId" type:"string" required:"true"` @@ -6234,16 +6013,11 @@ func (s *DeleteLagInput) SetLagId(v string) *DeleteLagInput { return s } -// Container for the parameters to the DeleteVirtualInterface operation. type DeleteVirtualInterfaceInput struct { _ struct{} `type:"structure"` // The ID of the virtual interface. // - // Example: dxvif-123dfg56 - // - // Default: None - // // VirtualInterfaceId is a required field VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string" required:"true"` } @@ -6277,37 +6051,36 @@ func (s *DeleteVirtualInterfaceInput) SetVirtualInterfaceId(v string) *DeleteVir return s } -// The response received when DeleteVirtualInterface is called. type DeleteVirtualInterfaceOutput struct { _ struct{} `type:"structure"` - // State of the virtual interface. + // The state of the virtual interface. The following are the possible values: // - // * Confirming: The creation of the virtual interface is pending confirmation + // * confirming: The creation of the virtual interface is pending confirmation // from the virtual interface owner. If the owner of the virtual interface // is different from the owner of the connection on which it is provisioned, // then the virtual interface will remain in this state until it is confirmed // by the virtual interface owner. // - // * Verifying: This state only applies to public virtual interfaces. Each + // * verifying: This state only applies to public virtual interfaces. Each // public virtual interface needs validation before the virtual interface // can be created. // - // * Pending: A virtual interface is in this state from the time that it + // * pending: A virtual interface is in this state from the time that it // is created until the virtual interface is ready to forward traffic. // - // * Available: A virtual interface that is able to forward traffic. + // * available: A virtual interface that is able to forward traffic. // - // * Down: A virtual interface that is BGP down. + // * down: A virtual interface that is BGP down. // - // * Deleting: A virtual interface is in this state immediately after calling + // * deleting: A virtual interface is in this state immediately after calling // DeleteVirtualInterface until it can no longer forward traffic. // - // * Deleted: A virtual interface that cannot forward traffic. + // * deleted: A virtual interface that cannot forward traffic. // - // * Rejected: The virtual interface owner has declined creation of the virtual - // interface. If a virtual interface in the 'Confirming' state is deleted - // by the virtual interface owner, the virtual interface will enter the 'Rejected' + // * rejected: The virtual interface owner has declined creation of the virtual + // interface. If a virtual interface in the Confirming state is deleted by + // the virtual interface owner, the virtual interface enters the Rejected // state. 
VirtualInterfaceState *string `locationName:"virtualInterfaceState" type:"string" enum:"VirtualInterfaceState"` } @@ -6328,31 +6101,21 @@ func (s *DeleteVirtualInterfaceOutput) SetVirtualInterfaceState(v string) *Delet return s } -// Container for the parameters to the DescribeConnectionLoa operation. type DescribeConnectionLoaInput struct { _ struct{} `type:"structure"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the connection. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // A standard media type indicating the content type of the LOA-CFA document. - // Currently, the only supported value is "application/pdf". - // - // Default: application/pdf + // The standard media type for the LOA-CFA document. The only supported value + // is application/pdf. LoaContentType *string `locationName:"loaContentType" type:"string" enum:"LoaContentType"` // The name of the APN partner or service provider who establishes connectivity - // on your behalf. If you supply this parameter, the LOA-CFA lists the provider + // on your behalf. If you specify this parameter, the LOA-CFA lists the provider // name alongside your company name as the requester of the cross connect. - // - // Default: None ProviderName *string `locationName:"providerName" type:"string"` } @@ -6397,12 +6160,10 @@ func (s *DescribeConnectionLoaInput) SetProviderName(v string) *DescribeConnecti return s } -// The response received when DescribeConnectionLoa is called. type DescribeConnectionLoaOutput struct { _ struct{} `type:"structure"` - // A structure containing the Letter of Authorization - Connecting Facility - // Assignment (LOA-CFA) for a connection. + // The Letter of Authorization - Connecting Facility Assignment (LOA-CFA). Loa *Loa `locationName:"loa" type:"structure"` } @@ -6422,16 +6183,10 @@ func (s *DescribeConnectionLoaOutput) SetLoa(v *Loa) *DescribeConnectionLoaOutpu return s } -// Container for the parameters to the DescribeConnections operation. type DescribeConnectionsInput struct { _ struct{} `type:"structure"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the connection. ConnectionId *string `locationName:"connectionId" type:"string"` } @@ -6451,15 +6206,10 @@ func (s *DescribeConnectionsInput) SetConnectionId(v string) *DescribeConnection return s } -// Container for the parameters to the DescribeConnectionsOnInterconnect operation. type DescribeConnectionsOnInterconnectInput struct { _ struct{} `type:"structure"` - // ID of the interconnect on which a list of connection is provisioned. - // - // Example: dxcon-abc123 - // - // Default: None + // The ID of the interconnect. // // InterconnectId is a required field InterconnectId *string `locationName:"interconnectId" type:"string" required:"true"` @@ -6494,36 +6244,19 @@ func (s *DescribeConnectionsOnInterconnectInput) SetInterconnectId(v string) *De return s } -// Container for the parameters to the DescribeDirectConnectGatewayAssociations -// operation. type DescribeDirectConnectGatewayAssociationsInput struct { _ struct{} `type:"structure"` - // The ID of the direct connect gateway. 
- // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" - // - // Default: None + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // The maximum number of direct connect gateway associations to return per page. - // - // Example: 15 - // - // Default: None + // The maximum number of associations to return per page. MaxResults *int64 `locationName:"maxResults" type:"integer"` - // The token provided in the previous describe result to retrieve the next page - // of the result. - // - // Default: None + // The token provided in the previous call to retrieve the next page. NextToken *string `locationName:"nextToken" type:"string"` // The ID of the virtual private gateway. - // - // Example: "vgw-abc123ef" - // - // Default: None VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string"` } @@ -6561,15 +6294,13 @@ func (s *DescribeDirectConnectGatewayAssociationsInput) SetVirtualGatewayId(v st return s } -// Container for the response from the DescribeDirectConnectGatewayAssociations -// API call type DescribeDirectConnectGatewayAssociationsOutput struct { _ struct{} `type:"structure"` - // Information about the direct connect gateway associations. + // The associations. DirectConnectGatewayAssociations []*GatewayAssociation `locationName:"directConnectGatewayAssociations" type:"list"` - // Token to retrieve the next page of the result. + // The token to retrieve the next page. NextToken *string `locationName:"nextToken" type:"string"` } @@ -6595,36 +6326,19 @@ func (s *DescribeDirectConnectGatewayAssociationsOutput) SetNextToken(v string) return s } -// Container for the parameters to the DescribeDirectConnectGatewayAttachments -// operation. type DescribeDirectConnectGatewayAttachmentsInput struct { _ struct{} `type:"structure"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" - // - // Default: None + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // The maximum number of direct connect gateway attachments to return per page. - // - // Example: 15 - // - // Default: None + // The maximum number of attachments to return per page. MaxResults *int64 `locationName:"maxResults" type:"integer"` - // The token provided in the previous describe result to retrieve the next page - // of the result. - // - // Default: None + // The token provided in the previous call to retrieve the next page. NextToken *string `locationName:"nextToken" type:"string"` // The ID of the virtual interface. - // - // Example: "dxvif-abc123ef" - // - // Default: None VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string"` } @@ -6662,15 +6376,13 @@ func (s *DescribeDirectConnectGatewayAttachmentsInput) SetVirtualInterfaceId(v s return s } -// Container for the response from the DescribeDirectConnectGatewayAttachments -// API call type DescribeDirectConnectGatewayAttachmentsOutput struct { _ struct{} `type:"structure"` - // Information about the direct connect gateway attachments. + // The attachments. DirectConnectGatewayAttachments []*GatewayAttachment `locationName:"directConnectGatewayAttachments" type:"list"` - // Token to retrieve the next page of the result. + // The token to retrieve the next page. 
NextToken *string `locationName:"nextToken" type:"string"` } @@ -6696,28 +6408,16 @@ func (s *DescribeDirectConnectGatewayAttachmentsOutput) SetNextToken(v string) * return s } -// Container for the parameters to the DescribeDirectConnectGateways operation. type DescribeDirectConnectGatewaysInput struct { _ struct{} `type:"structure"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" - // - // Default: None + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // The maximum number of direct connect gateways to return per page. - // - // Example: 15 - // - // Default: None + // The maximum number of Direct Connect gateways to return per page. MaxResults *int64 `locationName:"maxResults" type:"integer"` - // The token provided in the previous describe result to retrieve the next page - // of the result. - // - // Default: None + // The token provided in the previous call to retrieve the next page. NextToken *string `locationName:"nextToken" type:"string"` } @@ -6749,14 +6449,13 @@ func (s *DescribeDirectConnectGatewaysInput) SetNextToken(v string) *DescribeDir return s } -// Container for the response from the DescribeDirectConnectGateways API call type DescribeDirectConnectGatewaysOutput struct { _ struct{} `type:"structure"` - // Information about the direct connect gateways. + // The Direct Connect gateways. DirectConnectGateways []*Gateway `locationName:"directConnectGateways" type:"list"` - // Token to retrieve the next page of the result. + // The token to retrieve the next page. NextToken *string `locationName:"nextToken" type:"string"` } @@ -6782,15 +6481,10 @@ func (s *DescribeDirectConnectGatewaysOutput) SetNextToken(v string) *DescribeDi return s } -// Container for the parameters to the DescribeHostedConnections operation. type DescribeHostedConnectionsInput struct { _ struct{} `type:"structure"` - // The ID of the interconnect or LAG on which the hosted connections are provisioned. - // - // Example: dxcon-abc123 or dxlag-abc123 - // - // Default: None + // The ID of the interconnect or LAG. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` @@ -6825,28 +6519,21 @@ func (s *DescribeHostedConnectionsInput) SetConnectionId(v string) *DescribeHost return s } -// Container for the parameters to the DescribeInterconnectLoa operation. type DescribeInterconnectLoaInput struct { _ struct{} `type:"structure"` // The ID of the interconnect. // - // Example: dxcon-abc123 - // // InterconnectId is a required field InterconnectId *string `locationName:"interconnectId" type:"string" required:"true"` - // A standard media type indicating the content type of the LOA-CFA document. - // Currently, the only supported value is "application/pdf". - // - // Default: application/pdf + // The standard media type for the LOA-CFA document. The only supported value + // is application/pdf. LoaContentType *string `locationName:"loaContentType" type:"string" enum:"LoaContentType"` // The name of the service provider who establishes connectivity on your behalf. // If you supply this parameter, the LOA-CFA lists the provider name alongside // your company name as the requester of the cross connect. 
- // - // Default: None ProviderName *string `locationName:"providerName" type:"string"` } @@ -6891,12 +6578,10 @@ func (s *DescribeInterconnectLoaInput) SetProviderName(v string) *DescribeInterc return s } -// The response received when DescribeInterconnectLoa is called. type DescribeInterconnectLoaOutput struct { _ struct{} `type:"structure"` - // A structure containing the Letter of Authorization - Connecting Facility - // Assignment (LOA-CFA) for a connection. + // The Letter of Authorization - Connecting Facility Assignment (LOA-CFA). Loa *Loa `locationName:"loa" type:"structure"` } @@ -6916,13 +6601,10 @@ func (s *DescribeInterconnectLoaOutput) SetLoa(v *Loa) *DescribeInterconnectLoaO return s } -// Container for the parameters to the DescribeInterconnects operation. type DescribeInterconnectsInput struct { _ struct{} `type:"structure"` // The ID of the interconnect. - // - // Example: dxcon-abc123 InterconnectId *string `locationName:"interconnectId" type:"string"` } @@ -6942,11 +6624,10 @@ func (s *DescribeInterconnectsInput) SetInterconnectId(v string) *DescribeInterc return s } -// A structure containing a list of interconnects. type DescribeInterconnectsOutput struct { _ struct{} `type:"structure"` - // A list of interconnects. + // The interconnects. Interconnects []*Interconnect `locationName:"interconnects" type:"list"` } @@ -6966,15 +6647,10 @@ func (s *DescribeInterconnectsOutput) SetInterconnects(v []*Interconnect) *Descr return s } -// Container for the parameters to the DescribeLags operation. type DescribeLagsInput struct { _ struct{} `type:"structure"` // The ID of the LAG. - // - // Example: dxlag-abc123 - // - // Default: None LagId *string `locationName:"lagId" type:"string"` } @@ -6994,11 +6670,10 @@ func (s *DescribeLagsInput) SetLagId(v string) *DescribeLagsInput { return s } -// A structure containing a list of LAGs. type DescribeLagsOutput struct { _ struct{} `type:"structure"` - // A list of LAGs. + // The LAGs. Lags []*Lag `locationName:"lags" type:"list"` } @@ -7018,31 +6693,21 @@ func (s *DescribeLagsOutput) SetLags(v []*Lag) *DescribeLagsOutput { return s } -// Container for the parameters to the DescribeLoa operation. type DescribeLoaInput struct { _ struct{} `type:"structure"` - // The ID of a connection, LAG, or interconnect for which to get the LOA-CFA - // information. - // - // Example: dxcon-abc123 or dxlag-abc123 - // - // Default: None + // The ID of a connection, LAG, or interconnect. // // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // A standard media type indicating the content type of the LOA-CFA document. - // Currently, the only supported value is "application/pdf". - // - // Default: application/pdf + // The standard media type for the LOA-CFA document. The only supported value + // is application/pdf. LoaContentType *string `locationName:"loaContentType" type:"string" enum:"LoaContentType"` // The name of the service provider who establishes connectivity on your behalf. - // If you supply this parameter, the LOA-CFA lists the provider name alongside + // If you specify this parameter, the LOA-CFA lists the provider name alongside // your company name as the requester of the cross connect. 
- // - // Default: None ProviderName *string `locationName:"providerName" type:"string"` } @@ -7101,15 +6766,10 @@ func (s DescribeLocationsInput) GoString() string { return s.String() } -// A location is a network facility where AWS Direct Connect routers are available -// to be connected. Generally, these are colocation hubs where many network -// providers have equipment, and where cross connects can be delivered. Locations -// include a name and facility code, and must be provided when creating a connection. type DescribeLocationsOutput struct { _ struct{} `type:"structure"` - // A list of colocation hubs where network providers have equipment. Most regions - // have multiple locations available. + // The locations. Locations []*Location `locationName:"locations" type:"list"` } @@ -7129,11 +6789,10 @@ func (s *DescribeLocationsOutput) SetLocations(v []*Location) *DescribeLocations return s } -// Container for the parameters to the DescribeTags operation. type DescribeTagsInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Names (ARNs) of the Direct Connect resources. + // The Amazon Resource Names (ARNs) of the resources. // // ResourceArns is a required field ResourceArns []*string `locationName:"resourceArns" type:"list" required:"true"` @@ -7168,7 +6827,6 @@ func (s *DescribeTagsInput) SetResourceArns(v []*string) *DescribeTagsInput { return s } -// The response received when DescribeTags is called. type DescribeTagsOutput struct { _ struct{} `type:"structure"` @@ -7206,11 +6864,10 @@ func (s DescribeVirtualGatewaysInput) GoString() string { return s.String() } -// A structure containing a list of virtual private gateways. type DescribeVirtualGatewaysOutput struct { _ struct{} `type:"structure"` - // A list of virtual private gateways. + // The virtual private gateways. VirtualGateways []*VirtualGateway `locationName:"virtualGateways" type:"list"` } @@ -7230,23 +6887,13 @@ func (s *DescribeVirtualGatewaysOutput) SetVirtualGateways(v []*VirtualGateway) return s } -// Container for the parameters to the DescribeVirtualInterfaces operation. type DescribeVirtualInterfacesInput struct { _ struct{} `type:"structure"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the connection. ConnectionId *string `locationName:"connectionId" type:"string"` // The ID of the virtual interface. - // - // Example: dxvif-123dfg56 - // - // Default: None VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string"` } @@ -7272,11 +6919,10 @@ func (s *DescribeVirtualInterfacesInput) SetVirtualInterfaceId(v string) *Descri return s } -// A structure containing a list of virtual interfaces. type DescribeVirtualInterfacesOutput struct { _ struct{} `type:"structure"` - // A list of virtual interfaces. + // The virtual interfaces VirtualInterfaces []*VirtualInterface `locationName:"virtualInterfaces" type:"list"` } @@ -7296,24 +6942,15 @@ func (s *DescribeVirtualInterfacesOutput) SetVirtualInterfaces(v []*VirtualInter return s } -// Container for the parameters to the DisassociateConnectionFromLag operation. type DisassociateConnectionFromLagInput struct { _ struct{} `type:"structure"` - // The ID of the connection to disassociate from the LAG. - // - // Example: dxcon-abc123 - // - // Default: None + // The ID of the connection. For example, dxcon-abc123. 
// // ConnectionId is a required field ConnectionId *string `locationName:"connectionId" type:"string" required:"true"` - // The ID of the LAG. - // - // Example: dxlag-abc123 - // - // Default: None + // The ID of the LAG. For example, dxlag-abc123. // // LagId is a required field LagId *string `locationName:"lagId" type:"string" required:"true"` @@ -7357,7 +6994,7 @@ func (s *DisassociateConnectionFromLagInput) SetLagId(v string) *DisassociateCon return s } -// A direct connect gateway is an intermediate object that enables you to connect +// Information about a Direct Connect gateway, which enables you to connect // virtual interfaces and virtual private gateways. type Gateway struct { _ struct{} `type:"structure"` @@ -7365,33 +7002,27 @@ type Gateway struct { // The autonomous system number (ASN) for the Amazon side of the connection. AmazonSideAsn *int64 `locationName:"amazonSideAsn" type:"long"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // The name of the direct connect gateway. - // - // Example: "My direct connect gateway" - // - // Default: None + // The name of the Direct Connect gateway. DirectConnectGatewayName *string `locationName:"directConnectGatewayName" type:"string"` - // State of the direct connect gateway. + // The state of the Direct Connect gateway. The following are the possible values: // - // * Pending: The initial state after calling CreateDirectConnectGateway. + // * pending: The initial state after calling CreateDirectConnectGateway. // - // * Available: The direct connect gateway is ready for use. + // * available: The Direct Connect gateway is ready for use. // - // * Deleting: The initial state after calling DeleteDirectConnectGateway. + // * deleting: The initial state after calling DeleteDirectConnectGateway. // - // * Deleted: The direct connect gateway is deleted and cannot pass traffic. + // * deleted: The Direct Connect gateway is deleted and cannot pass traffic. DirectConnectGatewayState *string `locationName:"directConnectGatewayState" type:"string" enum:"GatewayState"` - // The AWS account ID of the owner of the direct connect gateway. + // The ID of the AWS account that owns the Direct Connect gateway. OwnerAccount *string `locationName:"ownerAccount" type:"string"` - // Error message when the state of an object fails to advance. + // The error message if the state of an object failed to advance. StateChangeError *string `locationName:"stateChangeError" type:"string"` } @@ -7441,44 +7072,38 @@ func (s *Gateway) SetStateChangeError(v string) *Gateway { return s } -// The association between a direct connect gateway and virtual private gateway. +// Information about an association between a Direct Connect gateway and a virtual +// private gateway. type GatewayAssociation struct { _ struct{} `type:"structure"` - // State of the direct connect gateway association. + // The state of the association. The following are the possible values: // - // * Associating: The initial state after calling CreateDirectConnectGatewayAssociation. + // * associating: The initial state after calling CreateDirectConnectGatewayAssociation. // - // * Associated: The direct connect gateway and virtual private gateway are + // * associated: The Direct Connect gateway and virtual private gateway are // successfully associated and ready to pass traffic. 
// - // * Disassociating: The initial state after calling DeleteDirectConnectGatewayAssociation. + // * disassociating: The initial state after calling DeleteDirectConnectGatewayAssociation. // - // * Disassociated: The virtual private gateway is successfully disassociated - // from the direct connect gateway. Traffic flow between the direct connect - // gateway and virtual private gateway stops. + // * disassociated: The virtual private gateway is disassociated from the + // Direct Connect gateway. Traffic flow between the Direct Connect gateway + // and virtual private gateway is stopped. AssociationState *string `locationName:"associationState" type:"string" enum:"GatewayAssociationState"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // Error message when the state of an object fails to advance. + // The error message if the state of an object failed to advance. StateChangeError *string `locationName:"stateChangeError" type:"string"` - // The ID of the virtual private gateway to a VPC. This only applies to private - // virtual interfaces. - // - // Example: vgw-123er56 + // The ID of the virtual private gateway. Applies only to private virtual interfaces. VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string"` - // The AWS account ID of the owner of the virtual private gateway. + // The ID of the AWS account that owns the virtual private gateway. VirtualGatewayOwnerAccount *string `locationName:"virtualGatewayOwnerAccount" type:"string"` - // The region in which the virtual private gateway is located. - // - // Example: us-east-1 + // The AWS Region where the virtual private gateway is located. VirtualGatewayRegion *string `locationName:"virtualGatewayRegion" type:"string"` } @@ -7528,47 +7153,39 @@ func (s *GatewayAssociation) SetVirtualGatewayRegion(v string) *GatewayAssociati return s } -// The association between a direct connect gateway and virtual interface. +// Information about an attachment between a Direct Connect gateway and a virtual +// interface. type GatewayAttachment struct { _ struct{} `type:"structure"` - // State of the direct connect gateway attachment. + // The state of the attachment. The following are the possible values: // - // * Attaching: The initial state after a virtual interface is created using - // the direct connect gateway. + // * attaching: The initial state after a virtual interface is created using + // the Direct Connect gateway. // - // * Attached: The direct connect gateway and virtual interface are successfully - // attached and ready to pass traffic. + // * attached: The Direct Connect gateway and virtual interface are attached + // and ready to pass traffic. // - // * Detaching: The initial state after calling DeleteVirtualInterface on - // a virtual interface that is attached to a direct connect gateway. + // * detaching: The initial state after calling DeleteVirtualInterface. // - // * Detached: The virtual interface is successfully detached from the direct - // connect gateway. Traffic flow between the direct connect gateway and virtual - // interface stops. + // * detached: The virtual interface is detached from the Direct Connect + // gateway. Traffic flow between the Direct Connect gateway and virtual interface + // is stopped. 
AttachmentState *string `locationName:"attachmentState" type:"string" enum:"GatewayAttachmentState"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // Error message when the state of an object fails to advance. + // The error message if the state of an object failed to advance. StateChangeError *string `locationName:"stateChangeError" type:"string"` // The ID of the virtual interface. - // - // Example: dxvif-123dfg56 - // - // Default: None VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string"` - // The AWS account ID of the owner of the virtual interface. + // The ID of the AWS account that owns the virtual interface. VirtualInterfaceOwnerAccount *string `locationName:"virtualInterfaceOwnerAccount" type:"string"` - // The region in which the virtual interface is located. - // - // Example: us-east-1 + // The AWS Region where the virtual interface is located. VirtualInterfaceRegion *string `locationName:"virtualInterfaceRegion" type:"string"` } @@ -7618,79 +7235,60 @@ func (s *GatewayAttachment) SetVirtualInterfaceRegion(v string) *GatewayAttachme return s } -// An interconnect is a connection that can host other connections. -// -// Like a standard AWS Direct Connect connection, an interconnect represents -// the physical connection between an AWS Direct Connect partner's network and -// a specific Direct Connect location. An AWS Direct Connect partner who owns -// an interconnect can provision hosted connections on the interconnect for -// their end customers, thereby providing the end customers with connectivity -// to AWS services. -// -// The resources of the interconnect, including bandwidth and VLAN numbers, -// are shared by all of the hosted connections on the interconnect, and the -// owner of the interconnect determines how these resources are assigned. +// Information about an interconnect. type Interconnect struct { _ struct{} `type:"structure"` - // The Direct Connection endpoint which the physical connection terminates on. - AwsDevice *string `locationName:"awsDevice" type:"string"` + // The Direct Connect endpoint on which the physical connection terminates. + AwsDevice *string `locationName:"awsDevice" deprecated:"true" type:"string"` - // Bandwidth of the connection. - // - // Example: 1Gbps - // - // Default: None + // The Direct Connect endpoint on which the physical connection terminates. + AwsDeviceV2 *string `locationName:"awsDeviceV2" type:"string"` + + // The bandwidth of the connection. Bandwidth *string `locationName:"bandwidth" type:"string"` + // Indicates whether the interconnect supports a secondary BGP in the same address + // family (IPv4/IPv6). + HasLogicalRedundancy *string `locationName:"hasLogicalRedundancy" type:"string" enum:"HasLogicalRedundancy"` + // The ID of the interconnect. - // - // Example: dxcon-abc123 InterconnectId *string `locationName:"interconnectId" type:"string"` // The name of the interconnect. - // - // Example: "1G Interconnect to AWS" InterconnectName *string `locationName:"interconnectName" type:"string"` - // State of the interconnect. + // The state of the interconnect. The following are the possible values: // - // * Requested: The initial state of an interconnect. The interconnect stays + // * requested: The initial state of an interconnect. 
The interconnect stays // in the requested state until the Letter of Authorization (LOA) is sent // to the customer. // - // * Pending: The interconnect has been approved, and is being initialized. + // * pending: The interconnect is approved, and is being initialized. // - // * Available: The network link is up, and the interconnect is ready for + // * available: The network link is up, and the interconnect is ready for // use. // - // * Down: The network link is down. + // * down: The network link is down. // - // * Deleting: The interconnect is in the process of being deleted. + // * deleting: The interconnect is being deleted. // - // * Deleted: The interconnect has been deleted. + // * deleted: The interconnect is deleted. InterconnectState *string `locationName:"interconnectState" type:"string" enum:"InterconnectState"` + // Indicates whether jumbo frames (9001 MTU) are supported. + JumboFrameCapable *bool `locationName:"jumboFrameCapable" type:"boolean"` + // The ID of the LAG. - // - // Example: dxlag-fg5678gh LagId *string `locationName:"lagId" type:"string"` - // The time of the most recent call to DescribeInterconnectLoa for this Interconnect. - LoaIssueTime *time.Time `locationName:"loaIssueTime" type:"timestamp" timestampFormat:"unix"` + // The time of the most recent call to DescribeLoa for this connection. + LoaIssueTime *time.Time `locationName:"loaIssueTime" type:"timestamp"` - // Where the connection is located. - // - // Example: EqSV5 - // - // Default: None + // The location of the connection. Location *string `locationName:"location" type:"string"` - // The AWS region where the connection is located. - // - // Example: us-east-1 - // - // Default: None + // The AWS Region where the connection is located. Region *string `locationName:"region" type:"string"` } @@ -7710,12 +7308,24 @@ func (s *Interconnect) SetAwsDevice(v string) *Interconnect { return s } +// SetAwsDeviceV2 sets the AwsDeviceV2 field's value. +func (s *Interconnect) SetAwsDeviceV2(v string) *Interconnect { + s.AwsDeviceV2 = &v + return s +} + // SetBandwidth sets the Bandwidth field's value. func (s *Interconnect) SetBandwidth(v string) *Interconnect { s.Bandwidth = &v return s } +// SetHasLogicalRedundancy sets the HasLogicalRedundancy field's value. +func (s *Interconnect) SetHasLogicalRedundancy(v string) *Interconnect { + s.HasLogicalRedundancy = &v + return s +} + // SetInterconnectId sets the InterconnectId field's value. func (s *Interconnect) SetInterconnectId(v string) *Interconnect { s.InterconnectId = &v @@ -7734,6 +7344,12 @@ func (s *Interconnect) SetInterconnectState(v string) *Interconnect { return s } +// SetJumboFrameCapable sets the JumboFrameCapable field's value. +func (s *Interconnect) SetJumboFrameCapable(v bool) *Interconnect { + s.JumboFrameCapable = &v + return s +} + // SetLagId sets the LagId field's value. func (s *Interconnect) SetLagId(v string) *Interconnect { s.LagId = &v @@ -7758,81 +7374,71 @@ func (s *Interconnect) SetRegion(v string) *Interconnect { return s } -// Describes a link aggregation group (LAG). A LAG is a connection that uses -// the Link Aggregation Control Protocol (LACP) to logically aggregate a bundle -// of physical connections. Like an interconnect, it can host other connections. -// All connections in a LAG must terminate on the same physical AWS Direct Connect -// endpoint, and must be the same bandwidth. +// Information about a link aggregation group (LAG). 
type Lag struct { _ struct{} `type:"structure"` // Indicates whether the LAG can host other connections. - // - // This is intended for use by AWS Direct Connect partners only. AllowsHostedConnections *bool `locationName:"allowsHostedConnections" type:"boolean"` - // The AWS Direct Connection endpoint that hosts the LAG. - AwsDevice *string `locationName:"awsDevice" type:"string"` + // The Direct Connect endpoint that hosts the LAG. + AwsDevice *string `locationName:"awsDevice" deprecated:"true" type:"string"` - // A list of connections bundled by this LAG. + // The Direct Connect endpoint that hosts the LAG. + AwsDeviceV2 *string `locationName:"awsDeviceV2" type:"string"` + + // The connections bundled by the LAG. Connections []*Connection `locationName:"connections" type:"list"` // The individual bandwidth of the physical connections bundled by the LAG. - // - // Available values: 1Gbps, 10Gbps + // The possible values are 1Gbps and 10Gbps. ConnectionsBandwidth *string `locationName:"connectionsBandwidth" type:"string"` + // Indicates whether the LAG supports a secondary BGP peer in the same address + // family (IPv4/IPv6). + HasLogicalRedundancy *string `locationName:"hasLogicalRedundancy" type:"string" enum:"HasLogicalRedundancy"` + + // Indicates whether jumbo frames (9001 MTU) are supported. + JumboFrameCapable *bool `locationName:"jumboFrameCapable" type:"boolean"` + // The ID of the LAG. - // - // Example: dxlag-fg5678gh LagId *string `locationName:"lagId" type:"string"` // The name of the LAG. LagName *string `locationName:"lagName" type:"string"` - // The state of the LAG. + // The state of the LAG. The following are the possible values: // - // * Requested: The initial state of a LAG. The LAG stays in the requested + // * requested: The initial state of a LAG. The LAG stays in the requested // state until the Letter of Authorization (LOA) is available. // - // * Pending: The LAG has been approved, and is being initialized. + // * pending: The LAG has been approved and is being initialized. // - // * Available: The network link is established, and the LAG is ready for + // * available: The network link is established and the LAG is ready for // use. // - // * Down: The network link is down. + // * down: The network link is down. // - // * Deleting: The LAG is in the process of being deleted. + // * deleting: The LAG is being deleted. // - // * Deleted: The LAG has been deleted. + // * deleted: The LAG is deleted. LagState *string `locationName:"lagState" type:"string" enum:"LagState"` - // Where the connection is located. - // - // Example: EqSV5 - // - // Default: None + // The location of the LAG. Location *string `locationName:"location" type:"string"` // The minimum number of physical connections that must be operational for the - // LAG itself to be operational. If the number of operational connections drops - // below this setting, the LAG state changes to down. This value can help to - // ensure that a LAG is not overutilized if a significant number of its bundled - // connections go down. + // LAG itself to be operational. MinimumLinks *int64 `locationName:"minimumLinks" type:"integer"` // The number of physical connections bundled by the LAG, up to a maximum of // 10. NumberOfConnections *int64 `locationName:"numberOfConnections" type:"integer"` - // The owner of the LAG. + // The ID of the AWS account that owns the LAG. OwnerAccount *string `locationName:"ownerAccount" type:"string"` - // The AWS region where the connection is located. 
- // - // Example: us-east-1 - // - // Default: None + // The AWS Region where the connection is located. Region *string `locationName:"region" type:"string"` } @@ -7858,6 +7464,12 @@ func (s *Lag) SetAwsDevice(v string) *Lag { return s } +// SetAwsDeviceV2 sets the AwsDeviceV2 field's value. +func (s *Lag) SetAwsDeviceV2(v string) *Lag { + s.AwsDeviceV2 = &v + return s +} + // SetConnections sets the Connections field's value. func (s *Lag) SetConnections(v []*Connection) *Lag { s.Connections = v @@ -7870,6 +7482,18 @@ func (s *Lag) SetConnectionsBandwidth(v string) *Lag { return s } +// SetHasLogicalRedundancy sets the HasLogicalRedundancy field's value. +func (s *Lag) SetHasLogicalRedundancy(v string) *Lag { + s.HasLogicalRedundancy = &v + return s +} + +// SetJumboFrameCapable sets the JumboFrameCapable field's value. +func (s *Lag) SetJumboFrameCapable(v bool) *Lag { + s.JumboFrameCapable = &v + return s +} + // SetLagId sets the LagId field's value. func (s *Lag) SetLagId(v string) *Lag { s.LagId = &v @@ -7918,8 +7542,8 @@ func (s *Lag) SetRegion(v string) *Lag { return s } -// A structure containing the Letter of Authorization - Connecting Facility -// Assignment (LOA-CFA) for a connection. +// Information about a Letter of Authorization - Connecting Facility Assignment +// (LOA-CFA) for a connection. type Loa struct { _ struct{} `type:"structure"` @@ -7928,10 +7552,8 @@ type Loa struct { // LoaContent is automatically base64 encoded/decoded by the SDK. LoaContent []byte `locationName:"loaContent" type:"blob"` - // A standard media type indicating the content type of the LOA-CFA document. - // Currently, the only supported value is "application/pdf". - // - // Default: application/pdf + // The standard media type for the LOA-CFA document. The only supported value + // is application/pdf. LoaContentType *string `locationName:"loaContentType" type:"string" enum:"LoaContentType"` } @@ -7957,17 +7579,19 @@ func (s *Loa) SetLoaContentType(v string) *Loa { return s } -// An AWS Direct Connect location where connections and interconnects can be -// requested. +// Information about an AWS Direct Connect location. type Location struct { _ struct{} `type:"structure"` - // The code used to indicate the AWS Direct Connect location. + // The code for the location. LocationCode *string `locationName:"locationCode" type:"string"` - // The name of the AWS Direct Connect location. The name includes the colocation - // partner name and the physical site of the lit building. + // The name of the location. This includes the name of the colocation partner + // and the physical site of the building. LocationName *string `locationName:"locationName" type:"string"` + + // The AWS Region for the location. + Region *string `locationName:"region" type:"string"` } // String returns the string representation @@ -7992,35 +7616,29 @@ func (s *Location) SetLocationName(v string) *Location { return s } -// A structure containing information about a new BGP peer. +// SetRegion sets the Region field's value. +func (s *Location) SetRegion(v string) *Location { + s.Region = &v + return s +} + +// Information about a new BGP peer. type NewBGPPeer struct { _ struct{} `type:"structure"` - // Indicates the address family for the BGP peer. - // - // * ipv4: IPv4 address family - // - // * ipv6: IPv6 address family + // The address family for the BGP peer. AddressFamily *string `locationName:"addressFamily" type:"string" enum:"AddressFamily"` - // IP address assigned to the Amazon interface. 
- // - // Example: 192.168.1.1/30 or 2001:db8::1/125 + // The IP address assigned to the Amazon interface. AmazonAddress *string `locationName:"amazonAddress" type:"string"` // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. - // - // Example: 65000 Asn *int64 `locationName:"asn" type:"integer"` // The authentication key for BGP configuration. - // - // Example: asdf34example AuthKey *string `locationName:"authKey" type:"string"` - // IP address assigned to the customer interface. - // - // Example: 192.168.1.2/30 or 2001:db8::2/125 + // The IP address assigned to the customer interface. CustomerAddress *string `locationName:"customerAddress" type:"string"` } @@ -8064,60 +7682,43 @@ func (s *NewBGPPeer) SetCustomerAddress(v string) *NewBGPPeer { return s } -// A structure containing information about a new private virtual interface. +// Information about a private virtual interface. type NewPrivateVirtualInterface struct { _ struct{} `type:"structure"` - // Indicates the address family for the BGP peer. - // - // * ipv4: IPv4 address family - // - // * ipv6: IPv6 address family + // The address family for the BGP peer. AddressFamily *string `locationName:"addressFamily" type:"string" enum:"AddressFamily"` - // IP address assigned to the Amazon interface. - // - // Example: 192.168.1.1/30 or 2001:db8::1/125 + // The IP address assigned to the Amazon interface. AmazonAddress *string `locationName:"amazonAddress" type:"string"` // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. // - // Example: 65000 - // // Asn is a required field Asn *int64 `locationName:"asn" type:"integer" required:"true"` // The authentication key for BGP configuration. - // - // Example: asdf34example AuthKey *string `locationName:"authKey" type:"string"` - // IP address assigned to the customer interface. - // - // Example: 192.168.1.2/30 or 2001:db8::2/125 + // The IP address assigned to the customer interface. CustomerAddress *string `locationName:"customerAddress" type:"string"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // The ID of the virtual private gateway to a VPC. This only applies to private - // virtual interfaces. - // - // Example: vgw-123er56 + // The maximum transmission unit (MTU), in bytes. The supported values are 1500 + // and 9001. The default value is 1500. + Mtu *int64 `locationName:"mtu" type:"integer"` + + // The ID of the virtual private gateway. VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string"` - // The name of the virtual interface assigned by the customer. - // - // Example: "My VPC" + // The name of the virtual interface assigned by the customer network. // // VirtualInterfaceName is a required field VirtualInterfaceName *string `locationName:"virtualInterfaceName" type:"string" required:"true"` - // The VLAN ID. - // - // Example: 101 + // The ID of the VLAN. // // Vlan is a required field Vlan *int64 `locationName:"vlan" type:"integer" required:"true"` @@ -8188,6 +7789,12 @@ func (s *NewPrivateVirtualInterface) SetDirectConnectGatewayId(v string) *NewPri return s } +// SetMtu sets the Mtu field's value. +func (s *NewPrivateVirtualInterface) SetMtu(v int64) *NewPrivateVirtualInterface { + s.Mtu = &v + return s +} + // SetVirtualGatewayId sets the VirtualGatewayId field's value. 
func (s *NewPrivateVirtualInterface) SetVirtualGatewayId(v string) *NewPrivateVirtualInterface { s.VirtualGatewayId = &v @@ -8206,50 +7813,37 @@ func (s *NewPrivateVirtualInterface) SetVlan(v int64) *NewPrivateVirtualInterfac return s } -// A structure containing information about a private virtual interface that -// will be provisioned on a connection. +// Information about a private virtual interface to be provisioned on a connection. type NewPrivateVirtualInterfaceAllocation struct { _ struct{} `type:"structure"` - // Indicates the address family for the BGP peer. - // - // * ipv4: IPv4 address family - // - // * ipv6: IPv6 address family + // The address family for the BGP peer. AddressFamily *string `locationName:"addressFamily" type:"string" enum:"AddressFamily"` - // IP address assigned to the Amazon interface. - // - // Example: 192.168.1.1/30 or 2001:db8::1/125 + // The IP address assigned to the Amazon interface. AmazonAddress *string `locationName:"amazonAddress" type:"string"` // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. // - // Example: 65000 - // // Asn is a required field Asn *int64 `locationName:"asn" type:"integer" required:"true"` // The authentication key for BGP configuration. - // - // Example: asdf34example AuthKey *string `locationName:"authKey" type:"string"` - // IP address assigned to the customer interface. - // - // Example: 192.168.1.2/30 or 2001:db8::2/125 + // The IP address assigned to the customer interface. CustomerAddress *string `locationName:"customerAddress" type:"string"` - // The name of the virtual interface assigned by the customer. - // - // Example: "My VPC" + // The maximum transmission unit (MTU), in bytes. The supported values are 1500 + // and 9001. The default value is 1500. + Mtu *int64 `locationName:"mtu" type:"integer"` + + // The name of the virtual interface assigned by the customer network. // // VirtualInterfaceName is a required field VirtualInterfaceName *string `locationName:"virtualInterfaceName" type:"string" required:"true"` - // The VLAN ID. - // - // Example: 101 + // The ID of the VLAN. // // Vlan is a required field Vlan *int64 `locationName:"vlan" type:"integer" required:"true"` @@ -8314,6 +7908,12 @@ func (s *NewPrivateVirtualInterfaceAllocation) SetCustomerAddress(v string) *New return s } +// SetMtu sets the Mtu field's value. +func (s *NewPrivateVirtualInterfaceAllocation) SetMtu(v int64) *NewPrivateVirtualInterfaceAllocation { + s.Mtu = &v + return s +} + // SetVirtualInterfaceName sets the VirtualInterfaceName field's value. func (s *NewPrivateVirtualInterfaceAllocation) SetVirtualInterfaceName(v string) *NewPrivateVirtualInterfaceAllocation { s.VirtualInterfaceName = &v @@ -8326,53 +7926,37 @@ func (s *NewPrivateVirtualInterfaceAllocation) SetVlan(v int64) *NewPrivateVirtu return s } -// A structure containing information about a new public virtual interface. +// Information about a public virtual interface. type NewPublicVirtualInterface struct { _ struct{} `type:"structure"` - // Indicates the address family for the BGP peer. - // - // * ipv4: IPv4 address family - // - // * ipv6: IPv6 address family + // The address family for the BGP peer. AddressFamily *string `locationName:"addressFamily" type:"string" enum:"AddressFamily"` - // IP address assigned to the Amazon interface. - // - // Example: 192.168.1.1/30 or 2001:db8::1/125 + // The IP address assigned to the Amazon interface. 
AmazonAddress *string `locationName:"amazonAddress" type:"string"` // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. // - // Example: 65000 - // // Asn is a required field Asn *int64 `locationName:"asn" type:"integer" required:"true"` // The authentication key for BGP configuration. - // - // Example: asdf34example AuthKey *string `locationName:"authKey" type:"string"` - // IP address assigned to the customer interface. - // - // Example: 192.168.1.2/30 or 2001:db8::2/125 + // The IP address assigned to the customer interface. CustomerAddress *string `locationName:"customerAddress" type:"string"` - // A list of routes to be advertised to the AWS network in this region (public - // virtual interface). + // The routes to be advertised to the AWS network in this Region. Applies to + // public virtual interfaces. RouteFilterPrefixes []*RouteFilterPrefix `locationName:"routeFilterPrefixes" type:"list"` - // The name of the virtual interface assigned by the customer. - // - // Example: "My VPC" + // The name of the virtual interface assigned by the customer network. // // VirtualInterfaceName is a required field VirtualInterfaceName *string `locationName:"virtualInterfaceName" type:"string" required:"true"` - // The VLAN ID. - // - // Example: 101 + // The ID of the VLAN. // // Vlan is a required field Vlan *int64 `locationName:"vlan" type:"integer" required:"true"` @@ -8455,54 +8039,37 @@ func (s *NewPublicVirtualInterface) SetVlan(v int64) *NewPublicVirtualInterface return s } -// A structure containing information about a public virtual interface that -// will be provisioned on a connection. +// Information about a public virtual interface to be provisioned on a connection. type NewPublicVirtualInterfaceAllocation struct { _ struct{} `type:"structure"` - // Indicates the address family for the BGP peer. - // - // * ipv4: IPv4 address family - // - // * ipv6: IPv6 address family + // The address family for the BGP peer. AddressFamily *string `locationName:"addressFamily" type:"string" enum:"AddressFamily"` - // IP address assigned to the Amazon interface. - // - // Example: 192.168.1.1/30 or 2001:db8::1/125 + // The IP address assigned to the Amazon interface. AmazonAddress *string `locationName:"amazonAddress" type:"string"` // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. // - // Example: 65000 - // // Asn is a required field Asn *int64 `locationName:"asn" type:"integer" required:"true"` // The authentication key for BGP configuration. - // - // Example: asdf34example AuthKey *string `locationName:"authKey" type:"string"` - // IP address assigned to the customer interface. - // - // Example: 192.168.1.2/30 or 2001:db8::2/125 + // The IP address assigned to the customer interface. CustomerAddress *string `locationName:"customerAddress" type:"string"` - // A list of routes to be advertised to the AWS network in this region (public - // virtual interface). + // The routes to be advertised to the AWS network in this Region. Applies to + // public virtual interfaces. RouteFilterPrefixes []*RouteFilterPrefix `locationName:"routeFilterPrefixes" type:"list"` - // The name of the virtual interface assigned by the customer. - // - // Example: "My VPC" + // The name of the virtual interface assigned by the customer network. // // VirtualInterfaceName is a required field VirtualInterfaceName *string `locationName:"virtualInterfaceName" type:"string" required:"true"` - // The VLAN ID. - // - // Example: 101 + // The ID of the VLAN. 
// // Vlan is a required field Vlan *int64 `locationName:"vlan" type:"integer" required:"true"` @@ -8585,11 +8152,11 @@ func (s *NewPublicVirtualInterfaceAllocation) SetVlan(v int64) *NewPublicVirtual return s } -// The tags associated with a Direct Connect resource. +// Information about a tag associated with an AWS Direct Connect resource. type ResourceTag struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the Direct Connect resource. + // The Amazon Resource Name (ARN) of the resource. ResourceArn *string `locationName:"resourceArn" type:"string"` // The tags. @@ -8618,17 +8185,13 @@ func (s *ResourceTag) SetTags(v []*Tag) *ResourceTag { return s } -// A route filter prefix that the customer can advertise through Border Gateway -// Protocol (BGP) over a public virtual interface. +// Information about a route filter prefix that a customer can advertise through +// Border Gateway Protocol (BGP) over a public virtual interface. type RouteFilterPrefix struct { _ struct{} `type:"structure"` - // CIDR notation for the advertised route. Multiple routes are separated by - // commas. - // - // IPv6 CIDRs must be at least a /64 or shorter - // - // Example: 10.10.10.0/24,10.10.11.0/24,2001:db8::/64 + // The CIDR block for the advertised route. Separate multiple routes using commas. + // An IPv6 CIDR must use /64 or shorter. Cidr *string `locationName:"cidr" type:"string"` } @@ -8652,12 +8215,12 @@ func (s *RouteFilterPrefix) SetCidr(v string) *RouteFilterPrefix { type Tag struct { _ struct{} `type:"structure"` - // The key of the tag. + // The key. // // Key is a required field Key *string `locationName:"key" min:"1" type:"string" required:"true"` - // The value of the tag. + // The value. Value *string `locationName:"value" type:"string"` } @@ -8699,18 +8262,15 @@ func (s *Tag) SetValue(v string) *Tag { return s } -// Container for the parameters to the TagResource operation. type TagResourceInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the Direct Connect resource. - // - // Example: arn:aws:directconnect:us-east-1:123456789012:dxcon/dxcon-fg5678gh + // The Amazon Resource Name (ARN) of the resource. // // ResourceArn is a required field ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` - // The list of tags to add. + // The tags to add. // // Tags is a required field Tags []*Tag `locationName:"tags" min:"1" type:"list" required:"true"` @@ -8767,7 +8327,6 @@ func (s *TagResourceInput) SetTags(v []*Tag) *TagResourceInput { return s } -// The response received when TagResource is called. type TagResourceOutput struct { _ struct{} `type:"structure"` } @@ -8782,16 +8341,15 @@ func (s TagResourceOutput) GoString() string { return s.String() } -// Container for the parameters to the UntagResource operation. type UntagResourceInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the Direct Connect resource. + // The Amazon Resource Name (ARN) of the resource. // // ResourceArn is a required field ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` - // The list of tag keys to remove. + // The tag keys of the tags to remove. // // TagKeys is a required field TagKeys []*string `locationName:"tagKeys" type:"list" required:"true"` @@ -8835,7 +8393,6 @@ func (s *UntagResourceInput) SetTagKeys(v []*string) *UntagResourceInput { return s } -// The response received when UntagResource is called. 
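
The tagging shapes above (`Tag`, `TagResourceInput`, `UntagResourceInput`) are consumed through the generated `directconnect` client. A minimal sketch of tagging a connection by ARN, assuming default credentials and region from the environment and a placeholder ARN:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directconnect"
)

func main() {
	// Standard SDK session; credentials and region come from the environment.
	sess := session.Must(session.NewSession())
	svc := directconnect.New(sess)

	// Placeholder ARN for an existing Direct Connect connection.
	arn := "arn:aws:directconnect:us-east-1:111122223333:dxcon/dxcon-ffffffff"

	_, err := svc.TagResource(&directconnect.TagResourceInput{
		ResourceArn: aws.String(arn),
		Tags: []*directconnect.Tag{
			{Key: aws.String("Environment"), Value: aws.String("test")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("tagged", arn)
}
```
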
type UntagResourceOutput struct { _ struct{} `type:"structure"` } @@ -8850,30 +8407,19 @@ func (s UntagResourceOutput) GoString() string { return s.String() } -// Container for the parameters to the UpdateLag operation. type UpdateLagInput struct { _ struct{} `type:"structure"` - // The ID of the LAG to update. - // - // Example: dxlag-abc123 - // - // Default: None + // The ID of the LAG. // // LagId is a required field LagId *string `locationName:"lagId" type:"string" required:"true"` - // The name for the LAG. - // - // Example: "3x10G LAG to AWS" - // - // Default: None + // The name of the LAG. LagName *string `locationName:"lagName" type:"string"` // The minimum number of physical connections that must be operational for the // LAG itself to be operational. - // - // Default: None MinimumLinks *int64 `locationName:"minimumLinks" type:"integer"` } @@ -8918,30 +8464,323 @@ func (s *UpdateLagInput) SetMinimumLinks(v int64) *UpdateLagInput { return s } -// You can create one or more AWS Direct Connect private virtual interfaces -// linking to your virtual private gateway. -// -// Virtual private gateways can be managed using the Amazon Virtual Private -// Cloud (Amazon VPC) console or the Amazon EC2 CreateVpnGateway action (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-CreateVpnGateway.html). -type VirtualGateway struct { +type UpdateVirtualInterfaceAttributesInput struct { _ struct{} `type:"structure"` - // The ID of the virtual private gateway to a VPC. This only applies to private - // virtual interfaces. + // The maximum transmission unit (MTU), in bytes. The supported values are 1500 + // and 9001. The default value is 1500. + Mtu *int64 `locationName:"mtu" type:"integer"` + + // The ID of the virtual private interface. // - // Example: vgw-123er56 + // VirtualInterfaceId is a required field + VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateVirtualInterfaceAttributesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateVirtualInterfaceAttributesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateVirtualInterfaceAttributesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateVirtualInterfaceAttributesInput"} + if s.VirtualInterfaceId == nil { + invalidParams.Add(request.NewErrParamRequired("VirtualInterfaceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMtu sets the Mtu field's value. +func (s *UpdateVirtualInterfaceAttributesInput) SetMtu(v int64) *UpdateVirtualInterfaceAttributesInput { + s.Mtu = &v + return s +} + +// SetVirtualInterfaceId sets the VirtualInterfaceId field's value. +func (s *UpdateVirtualInterfaceAttributesInput) SetVirtualInterfaceId(v string) *UpdateVirtualInterfaceAttributesInput { + s.VirtualInterfaceId = &v + return s +} + +// Information about a virtual interface. +type UpdateVirtualInterfaceAttributesOutput struct { + _ struct{} `type:"structure"` + + // The address family for the BGP peer. + AddressFamily *string `locationName:"addressFamily" type:"string" enum:"AddressFamily"` + + // The IP address assigned to the Amazon interface. + AmazonAddress *string `locationName:"amazonAddress" type:"string"` + + // The autonomous system number (ASN) for the Amazon side of the connection. 
+ AmazonSideAsn *int64 `locationName:"amazonSideAsn" type:"long"` + + // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. + Asn *int64 `locationName:"asn" type:"integer"` + + // The authentication key for BGP configuration. + AuthKey *string `locationName:"authKey" type:"string"` + + // The Direct Connect endpoint on which the virtual interface terminates. + AwsDeviceV2 *string `locationName:"awsDeviceV2" type:"string"` + + // The BGP peers configured on this virtual interface. + BgpPeers []*BGPPeer `locationName:"bgpPeers" type:"list"` + + // The ID of the connection. + ConnectionId *string `locationName:"connectionId" type:"string"` + + // The IP address assigned to the customer interface. + CustomerAddress *string `locationName:"customerAddress" type:"string"` + + // The customer router configuration. + CustomerRouterConfig *string `locationName:"customerRouterConfig" type:"string"` + + // The ID of the Direct Connect gateway. + DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` + + // Indicates whether jumbo frames (9001 MTU) are supported. + JumboFrameCapable *bool `locationName:"jumboFrameCapable" type:"boolean"` + + // The location of the connection. + Location *string `locationName:"location" type:"string"` + + // The maximum transmission unit (MTU), in bytes. The supported values are 1500 + // and 9001. The default value is 1500. + Mtu *int64 `locationName:"mtu" type:"integer"` + + // The ID of the AWS account that owns the virtual interface. + OwnerAccount *string `locationName:"ownerAccount" type:"string"` + + // The AWS Region where the virtual interface is located. + Region *string `locationName:"region" type:"string"` + + // The routes to be advertised to the AWS network in this Region. Applies to + // public virtual interfaces. + RouteFilterPrefixes []*RouteFilterPrefix `locationName:"routeFilterPrefixes" type:"list"` + + // The ID of the virtual private gateway. Applies only to private virtual interfaces. VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string"` - // State of the virtual private gateway. + // The ID of the virtual interface. + VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string"` + + // The name of the virtual interface assigned by the customer network. + VirtualInterfaceName *string `locationName:"virtualInterfaceName" type:"string"` + + // The state of the virtual interface. The following are the possible values: // - // * Pending: This is the initial state after calling CreateVpnGateway. + // * confirming: The creation of the virtual interface is pending confirmation + // from the virtual interface owner. If the owner of the virtual interface + // is different from the owner of the connection on which it is provisioned, + // then the virtual interface will remain in this state until it is confirmed + // by the virtual interface owner. + // + // * verifying: This state only applies to public virtual interfaces. Each + // public virtual interface needs validation before the virtual interface + // can be created. + // + // * pending: A virtual interface is in this state from the time that it + // is created until the virtual interface is ready to forward traffic. + // + // * available: A virtual interface that is able to forward traffic. + // + // * down: A virtual interface that is BGP down. + // + // * deleting: A virtual interface is in this state immediately after calling + // DeleteVirtualInterface until it can no longer forward traffic. 
+ // + // * deleted: A virtual interface that cannot forward traffic. + // + // * rejected: The virtual interface owner has declined creation of the virtual + // interface. If a virtual interface in the Confirming state is deleted by + // the virtual interface owner, the virtual interface enters the Rejected + // state. + VirtualInterfaceState *string `locationName:"virtualInterfaceState" type:"string" enum:"VirtualInterfaceState"` + + // The type of virtual interface. The possible values are private and public. + VirtualInterfaceType *string `locationName:"virtualInterfaceType" type:"string"` + + // The ID of the VLAN. + Vlan *int64 `locationName:"vlan" type:"integer"` +} + +// String returns the string representation +func (s UpdateVirtualInterfaceAttributesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateVirtualInterfaceAttributesOutput) GoString() string { + return s.String() +} + +// SetAddressFamily sets the AddressFamily field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetAddressFamily(v string) *UpdateVirtualInterfaceAttributesOutput { + s.AddressFamily = &v + return s +} + +// SetAmazonAddress sets the AmazonAddress field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetAmazonAddress(v string) *UpdateVirtualInterfaceAttributesOutput { + s.AmazonAddress = &v + return s +} + +// SetAmazonSideAsn sets the AmazonSideAsn field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetAmazonSideAsn(v int64) *UpdateVirtualInterfaceAttributesOutput { + s.AmazonSideAsn = &v + return s +} + +// SetAsn sets the Asn field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetAsn(v int64) *UpdateVirtualInterfaceAttributesOutput { + s.Asn = &v + return s +} + +// SetAuthKey sets the AuthKey field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetAuthKey(v string) *UpdateVirtualInterfaceAttributesOutput { + s.AuthKey = &v + return s +} + +// SetAwsDeviceV2 sets the AwsDeviceV2 field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetAwsDeviceV2(v string) *UpdateVirtualInterfaceAttributesOutput { + s.AwsDeviceV2 = &v + return s +} + +// SetBgpPeers sets the BgpPeers field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetBgpPeers(v []*BGPPeer) *UpdateVirtualInterfaceAttributesOutput { + s.BgpPeers = v + return s +} + +// SetConnectionId sets the ConnectionId field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetConnectionId(v string) *UpdateVirtualInterfaceAttributesOutput { + s.ConnectionId = &v + return s +} + +// SetCustomerAddress sets the CustomerAddress field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetCustomerAddress(v string) *UpdateVirtualInterfaceAttributesOutput { + s.CustomerAddress = &v + return s +} + +// SetCustomerRouterConfig sets the CustomerRouterConfig field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetCustomerRouterConfig(v string) *UpdateVirtualInterfaceAttributesOutput { + s.CustomerRouterConfig = &v + return s +} + +// SetDirectConnectGatewayId sets the DirectConnectGatewayId field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetDirectConnectGatewayId(v string) *UpdateVirtualInterfaceAttributesOutput { + s.DirectConnectGatewayId = &v + return s +} + +// SetJumboFrameCapable sets the JumboFrameCapable field's value. 
+func (s *UpdateVirtualInterfaceAttributesOutput) SetJumboFrameCapable(v bool) *UpdateVirtualInterfaceAttributesOutput { + s.JumboFrameCapable = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetLocation(v string) *UpdateVirtualInterfaceAttributesOutput { + s.Location = &v + return s +} + +// SetMtu sets the Mtu field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetMtu(v int64) *UpdateVirtualInterfaceAttributesOutput { + s.Mtu = &v + return s +} + +// SetOwnerAccount sets the OwnerAccount field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetOwnerAccount(v string) *UpdateVirtualInterfaceAttributesOutput { + s.OwnerAccount = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetRegion(v string) *UpdateVirtualInterfaceAttributesOutput { + s.Region = &v + return s +} + +// SetRouteFilterPrefixes sets the RouteFilterPrefixes field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetRouteFilterPrefixes(v []*RouteFilterPrefix) *UpdateVirtualInterfaceAttributesOutput { + s.RouteFilterPrefixes = v + return s +} + +// SetVirtualGatewayId sets the VirtualGatewayId field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetVirtualGatewayId(v string) *UpdateVirtualInterfaceAttributesOutput { + s.VirtualGatewayId = &v + return s +} + +// SetVirtualInterfaceId sets the VirtualInterfaceId field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetVirtualInterfaceId(v string) *UpdateVirtualInterfaceAttributesOutput { + s.VirtualInterfaceId = &v + return s +} + +// SetVirtualInterfaceName sets the VirtualInterfaceName field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetVirtualInterfaceName(v string) *UpdateVirtualInterfaceAttributesOutput { + s.VirtualInterfaceName = &v + return s +} + +// SetVirtualInterfaceState sets the VirtualInterfaceState field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetVirtualInterfaceState(v string) *UpdateVirtualInterfaceAttributesOutput { + s.VirtualInterfaceState = &v + return s +} + +// SetVirtualInterfaceType sets the VirtualInterfaceType field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetVirtualInterfaceType(v string) *UpdateVirtualInterfaceAttributesOutput { + s.VirtualInterfaceType = &v + return s +} + +// SetVlan sets the Vlan field's value. +func (s *UpdateVirtualInterfaceAttributesOutput) SetVlan(v int64) *UpdateVirtualInterfaceAttributesOutput { + s.Vlan = &v + return s +} + +// Information about a virtual private gateway for a private virtual interface. +type VirtualGateway struct { + _ struct{} `type:"structure"` + + // The ID of the virtual private gateway. + VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string"` + + // The state of the virtual private gateway. The following are the possible + // values: + // + // * pending: Initial state after creating the virtual private gateway. // - // * Available: Ready for use by a private virtual interface. + // * available: Ready for use by a private virtual interface. // - // * Deleting: This is the initial state after calling DeleteVpnGateway. + // * deleting: Initial state after deleting the virtual private gateway. // - // * Deleted: In this state, a private virtual interface is unable to send - // traffic over this gateway. + // * deleted: The virtual private gateway is deleted. The private virtual + // interface is unable to send traffic over this gateway. 
VirtualGatewayState *string `locationName:"virtualGatewayState" type:"string"` } @@ -8967,131 +8806,106 @@ func (s *VirtualGateway) SetVirtualGatewayState(v string) *VirtualGateway { return s } -// A virtual interface (VLAN) transmits the traffic between the AWS Direct Connect -// location and the customer. +// Information about a virtual interface. type VirtualInterface struct { _ struct{} `type:"structure"` - // Indicates the address family for the BGP peer. - // - // * ipv4: IPv4 address family - // - // * ipv6: IPv6 address family + // The address family for the BGP peer. AddressFamily *string `locationName:"addressFamily" type:"string" enum:"AddressFamily"` - // IP address assigned to the Amazon interface. - // - // Example: 192.168.1.1/30 or 2001:db8::1/125 + // The IP address assigned to the Amazon interface. AmazonAddress *string `locationName:"amazonAddress" type:"string"` // The autonomous system number (ASN) for the Amazon side of the connection. AmazonSideAsn *int64 `locationName:"amazonSideAsn" type:"long"` // The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. - // - // Example: 65000 Asn *int64 `locationName:"asn" type:"integer"` // The authentication key for BGP configuration. - // - // Example: asdf34example AuthKey *string `locationName:"authKey" type:"string"` - // A list of the BGP peers configured on this virtual interface. + // The Direct Connect endpoint on which the virtual interface terminates. + AwsDeviceV2 *string `locationName:"awsDeviceV2" type:"string"` + + // The BGP peers configured on this virtual interface. BgpPeers []*BGPPeer `locationName:"bgpPeers" type:"list"` - // The ID of the connection. This field is also used as the ID type for operations - // that use multiple connection types (LAG, interconnect, and/or connection). - // - // Example: dxcon-fg5678gh - // - // Default: None + // The ID of the connection. ConnectionId *string `locationName:"connectionId" type:"string"` - // IP address assigned to the customer interface. - // - // Example: 192.168.1.2/30 or 2001:db8::2/125 + // The IP address assigned to the customer interface. CustomerAddress *string `locationName:"customerAddress" type:"string"` - // Information for generating the customer router configuration. + // The customer router configuration. CustomerRouterConfig *string `locationName:"customerRouterConfig" type:"string"` - // The ID of the direct connect gateway. - // - // Example: "abcd1234-dcba-5678-be23-cdef9876ab45" + // The ID of the Direct Connect gateway. DirectConnectGatewayId *string `locationName:"directConnectGatewayId" type:"string"` - // Where the connection is located. - // - // Example: EqSV5 - // - // Default: None + // Indicates whether jumbo frames (9001 MTU) are supported. + JumboFrameCapable *bool `locationName:"jumboFrameCapable" type:"boolean"` + + // The location of the connection. Location *string `locationName:"location" type:"string"` - // The AWS account that will own the new virtual interface. + // The maximum transmission unit (MTU), in bytes. The supported values are 1500 + // and 9001. The default value is 1500. + Mtu *int64 `locationName:"mtu" type:"integer"` + + // The ID of the AWS account that owns the virtual interface. OwnerAccount *string `locationName:"ownerAccount" type:"string"` - // A list of routes to be advertised to the AWS network in this region (public - // virtual interface). + // The AWS Region where the virtual interface is located. 
+ Region *string `locationName:"region" type:"string"` + + // The routes to be advertised to the AWS network in this Region. Applies to + // public virtual interfaces. RouteFilterPrefixes []*RouteFilterPrefix `locationName:"routeFilterPrefixes" type:"list"` - // The ID of the virtual private gateway to a VPC. This only applies to private - // virtual interfaces. - // - // Example: vgw-123er56 + // The ID of the virtual private gateway. Applies only to private virtual interfaces. VirtualGatewayId *string `locationName:"virtualGatewayId" type:"string"` // The ID of the virtual interface. - // - // Example: dxvif-123dfg56 - // - // Default: None VirtualInterfaceId *string `locationName:"virtualInterfaceId" type:"string"` - // The name of the virtual interface assigned by the customer. - // - // Example: "My VPC" + // The name of the virtual interface assigned by the customer network. VirtualInterfaceName *string `locationName:"virtualInterfaceName" type:"string"` - // State of the virtual interface. + // The state of the virtual interface. The following are the possible values: // - // * Confirming: The creation of the virtual interface is pending confirmation + // * confirming: The creation of the virtual interface is pending confirmation // from the virtual interface owner. If the owner of the virtual interface // is different from the owner of the connection on which it is provisioned, // then the virtual interface will remain in this state until it is confirmed // by the virtual interface owner. // - // * Verifying: This state only applies to public virtual interfaces. Each + // * verifying: This state only applies to public virtual interfaces. Each // public virtual interface needs validation before the virtual interface // can be created. // - // * Pending: A virtual interface is in this state from the time that it + // * pending: A virtual interface is in this state from the time that it // is created until the virtual interface is ready to forward traffic. // - // * Available: A virtual interface that is able to forward traffic. + // * available: A virtual interface that is able to forward traffic. // - // * Down: A virtual interface that is BGP down. + // * down: A virtual interface that is BGP down. // - // * Deleting: A virtual interface is in this state immediately after calling + // * deleting: A virtual interface is in this state immediately after calling // DeleteVirtualInterface until it can no longer forward traffic. // - // * Deleted: A virtual interface that cannot forward traffic. + // * deleted: A virtual interface that cannot forward traffic. // - // * Rejected: The virtual interface owner has declined creation of the virtual - // interface. If a virtual interface in the 'Confirming' state is deleted - // by the virtual interface owner, the virtual interface will enter the 'Rejected' + // * rejected: The virtual interface owner has declined creation of the virtual + // interface. If a virtual interface in the Confirming state is deleted by + // the virtual interface owner, the virtual interface enters the Rejected // state. VirtualInterfaceState *string `locationName:"virtualInterfaceState" type:"string" enum:"VirtualInterfaceState"` - // The type of virtual interface. - // - // Example: private (Amazon VPC) or public (Amazon S3, Amazon DynamoDB, and - // so on.) + // The type of virtual interface. The possible values are private and public. VirtualInterfaceType *string `locationName:"virtualInterfaceType" type:"string"` - // The VLAN ID. 
- // - // Example: 101 + // The ID of the VLAN. Vlan *int64 `locationName:"vlan" type:"integer"` } @@ -9135,6 +8949,12 @@ func (s *VirtualInterface) SetAuthKey(v string) *VirtualInterface { return s } +// SetAwsDeviceV2 sets the AwsDeviceV2 field's value. +func (s *VirtualInterface) SetAwsDeviceV2(v string) *VirtualInterface { + s.AwsDeviceV2 = &v + return s +} + // SetBgpPeers sets the BgpPeers field's value. func (s *VirtualInterface) SetBgpPeers(v []*BGPPeer) *VirtualInterface { s.BgpPeers = v @@ -9165,18 +8985,36 @@ func (s *VirtualInterface) SetDirectConnectGatewayId(v string) *VirtualInterface return s } +// SetJumboFrameCapable sets the JumboFrameCapable field's value. +func (s *VirtualInterface) SetJumboFrameCapable(v bool) *VirtualInterface { + s.JumboFrameCapable = &v + return s +} + // SetLocation sets the Location field's value. func (s *VirtualInterface) SetLocation(v string) *VirtualInterface { s.Location = &v return s } +// SetMtu sets the Mtu field's value. +func (s *VirtualInterface) SetMtu(v int64) *VirtualInterface { + s.Mtu = &v + return s +} + // SetOwnerAccount sets the OwnerAccount field's value. func (s *VirtualInterface) SetOwnerAccount(v string) *VirtualInterface { s.OwnerAccount = &v return s } +// SetRegion sets the Region field's value. +func (s *VirtualInterface) SetRegion(v string) *VirtualInterface { + s.Region = &v + return s +} + // SetRouteFilterPrefixes sets the RouteFilterPrefixes field's value. func (s *VirtualInterface) SetRouteFilterPrefixes(v []*RouteFilterPrefix) *VirtualInterface { s.RouteFilterPrefixes = v @@ -9219,11 +9057,6 @@ func (s *VirtualInterface) SetVlan(v int64) *VirtualInterface { return s } -// Indicates the address family for the BGP peer. -// -// * ipv4: IPv4 address family -// -// * ipv6: IPv6 address family const ( // AddressFamilyIpv4 is a AddressFamily enum value AddressFamilyIpv4 = "ipv4" @@ -9232,20 +9065,6 @@ const ( AddressFamilyIpv6 = "ipv6" ) -// The state of the BGP peer. -// -// * Verifying: The BGP peering addresses or ASN require validation before -// the BGP peer can be created. This state only applies to BGP peers on a -// public virtual interface. -// -// * Pending: The BGP peer has been created, and is in this state until it -// is ready to be established. -// -// * Available: The BGP peer can be established. -// -// * Deleting: The BGP peer is in the process of being deleted. -// -// * Deleted: The BGP peer has been deleted and cannot be established. const ( // BGPPeerStateVerifying is a BGPPeerState enum value BGPPeerStateVerifying = "verifying" @@ -9263,11 +9082,6 @@ const ( BGPPeerStateDeleted = "deleted" ) -// The Up/Down state of the BGP peer. -// -// * Up: The BGP peer is established. -// -// * Down: The BGP peer is down. const ( // BGPStatusUp is a BGPStatus enum value BGPStatusUp = "up" @@ -9276,28 +9090,6 @@ const ( BGPStatusDown = "down" ) -// State of the connection. -// -// * Ordering: The initial state of a hosted connection provisioned on an -// interconnect. The connection stays in the ordering state until the owner -// of the hosted connection confirms or declines the connection order. -// -// * Requested: The initial state of a standard connection. The connection -// stays in the requested state until the Letter of Authorization (LOA) is -// sent to the customer. -// -// * Pending: The connection has been approved, and is being initialized. -// -// * Available: The network link is up, and the connection is ready for use. -// -// * Down: The network link is down. 
-// -// * Deleting: The connection is in the process of being deleted. -// -// * Deleted: The connection has been deleted. -// -// * Rejected: A hosted connection in the 'Ordering' state will enter the -// 'Rejected' state if it is deleted by the end customer. const ( // ConnectionStateOrdering is a ConnectionState enum value ConnectionStateOrdering = "ordering" @@ -9324,18 +9116,6 @@ const ( ConnectionStateRejected = "rejected" ) -// State of the direct connect gateway association. -// -// * Associating: The initial state after calling CreateDirectConnectGatewayAssociation. -// -// * Associated: The direct connect gateway and virtual private gateway are -// successfully associated and ready to pass traffic. -// -// * Disassociating: The initial state after calling DeleteDirectConnectGatewayAssociation. -// -// * Disassociated: The virtual private gateway is successfully disassociated -// from the direct connect gateway. Traffic flow between the direct connect -// gateway and virtual private gateway stops. const ( // GatewayAssociationStateAssociating is a GatewayAssociationState enum value GatewayAssociationStateAssociating = "associating" @@ -9350,20 +9130,6 @@ const ( GatewayAssociationStateDisassociated = "disassociated" ) -// State of the direct connect gateway attachment. -// -// * Attaching: The initial state after a virtual interface is created using -// the direct connect gateway. -// -// * Attached: The direct connect gateway and virtual interface are successfully -// attached and ready to pass traffic. -// -// * Detaching: The initial state after calling DeleteVirtualInterface on -// a virtual interface that is attached to a direct connect gateway. -// -// * Detached: The virtual interface is successfully detached from the direct -// connect gateway. Traffic flow between the direct connect gateway and virtual -// interface stops. const ( // GatewayAttachmentStateAttaching is a GatewayAttachmentState enum value GatewayAttachmentStateAttaching = "attaching" @@ -9378,15 +9144,6 @@ const ( GatewayAttachmentStateDetached = "detached" ) -// State of the direct connect gateway. -// -// * Pending: The initial state after calling CreateDirectConnectGateway. -// -// * Available: The direct connect gateway is ready for use. -// -// * Deleting: The initial state after calling DeleteDirectConnectGateway. -// -// * Deleted: The direct connect gateway is deleted and cannot pass traffic. const ( // GatewayStatePending is a GatewayState enum value GatewayStatePending = "pending" @@ -9401,22 +9158,17 @@ const ( GatewayStateDeleted = "deleted" ) -// State of the interconnect. -// -// * Requested: The initial state of an interconnect. The interconnect stays -// in the requested state until the Letter of Authorization (LOA) is sent -// to the customer. -// -// * Pending: The interconnect has been approved, and is being initialized. -// -// * Available: The network link is up, and the interconnect is ready for -// use. -// -// * Down: The network link is down. -// -// * Deleting: The interconnect is in the process of being deleted. -// -// * Deleted: The interconnect has been deleted. 
+const ( + // HasLogicalRedundancyUnknown is a HasLogicalRedundancy enum value + HasLogicalRedundancyUnknown = "unknown" + + // HasLogicalRedundancyYes is a HasLogicalRedundancy enum value + HasLogicalRedundancyYes = "yes" + + // HasLogicalRedundancyNo is a HasLogicalRedundancy enum value + HasLogicalRedundancyNo = "no" +) + const ( // InterconnectStateRequested is a InterconnectState enum value InterconnectStateRequested = "requested" @@ -9437,21 +9189,6 @@ const ( InterconnectStateDeleted = "deleted" ) -// The state of the LAG. -// -// * Requested: The initial state of a LAG. The LAG stays in the requested -// state until the Letter of Authorization (LOA) is available. -// -// * Pending: The LAG has been approved, and is being initialized. -// -// * Available: The network link is established, and the LAG is ready for -// use. -// -// * Down: The network link is down. -// -// * Deleting: The LAG is in the process of being deleted. -// -// * Deleted: The LAG has been deleted. const ( // LagStateRequested is a LagState enum value LagStateRequested = "requested" @@ -9472,43 +9209,11 @@ const ( LagStateDeleted = "deleted" ) -// A standard media type indicating the content type of the LOA-CFA document. -// Currently, the only supported value is "application/pdf". -// -// Default: application/pdf const ( // LoaContentTypeApplicationPdf is a LoaContentType enum value LoaContentTypeApplicationPdf = "application/pdf" ) -// State of the virtual interface. -// -// * Confirming: The creation of the virtual interface is pending confirmation -// from the virtual interface owner. If the owner of the virtual interface -// is different from the owner of the connection on which it is provisioned, -// then the virtual interface will remain in this state until it is confirmed -// by the virtual interface owner. -// -// * Verifying: This state only applies to public virtual interfaces. Each -// public virtual interface needs validation before the virtual interface -// can be created. -// -// * Pending: A virtual interface is in this state from the time that it -// is created until the virtual interface is ready to forward traffic. -// -// * Available: A virtual interface that is able to forward traffic. -// -// * Down: A virtual interface that is BGP down. -// -// * Deleting: A virtual interface is in this state immediately after calling -// DeleteVirtualInterface until it can no longer forward traffic. -// -// * Deleted: A virtual interface that cannot forward traffic. -// -// * Rejected: The virtual interface owner has declined creation of the virtual -// interface. If a virtual interface in the 'Confirming' state is deleted -// by the virtual interface owner, the virtual interface will enter the 'Rejected' -// state. const ( // VirtualInterfaceStateConfirming is a VirtualInterfaceState enum value VirtualInterfaceStateConfirming = "confirming" diff --git a/vendor/github.com/aws/aws-sdk-go/service/directconnect/doc.go b/vendor/github.com/aws/aws-sdk-go/service/directconnect/doc.go index 277fbe626ff..de61cdbfa78 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/directconnect/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/directconnect/doc.go @@ -4,17 +4,14 @@ // requests to AWS Direct Connect. // // AWS Direct Connect links your internal network to an AWS Direct Connect location -// over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end -// of the cable is connected to your router, the other to an AWS Direct Connect -// router. 
With this connection in place, you can create virtual interfaces -// directly to the AWS cloud (for example, to Amazon Elastic Compute Cloud (Amazon -// EC2) and Amazon Simple Storage Service (Amazon S3)) and to Amazon Virtual -// Private Cloud (Amazon VPC), bypassing Internet service providers in your -// network path. An AWS Direct Connect location provides access to AWS in the -// region it is associated with, as well as access to other US regions. For -// example, you can provision a single connection to any AWS Direct Connect -// location in the US and use it to access public AWS services in all US Regions -// and AWS GovCloud (US). +// over a standard Ethernet fiber-optic cable. One end of the cable is connected +// to your router, the other to an AWS Direct Connect router. With this connection +// in place, you can create virtual interfaces directly to the AWS cloud (for +// example, to Amazon EC2 and Amazon S3) and to Amazon VPC, bypassing Internet +// service providers in your network path. A connection provides access to all +// AWS Regions except the China (Beijing) and (China) Ningxia Regions. AWS resources +// in the China Regions can only be accessed through locations associated with +// those Regions. // // See https://docs.aws.amazon.com/goto/WebAPI/directconnect-2012-10-25 for more information on this service. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/directconnect/errors.go b/vendor/github.com/aws/aws-sdk-go/service/directconnect/errors.go index cbf9bf2f536..9d6f810311e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/directconnect/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/directconnect/errors.go @@ -5,11 +5,10 @@ package directconnect const ( // ErrCodeClientException for service response error code - // "ClientException". + // "DirectConnectClientException". // - // The API was called with invalid parameters. The error message will contain - // additional details about the cause. - ErrCodeClientException = "ClientException" + // One or more parameters are not valid. + ErrCodeClientException = "DirectConnectClientException" // ErrCodeDuplicateTagKeysException for service response error code // "DuplicateTagKeysException". @@ -18,16 +17,14 @@ const ( ErrCodeDuplicateTagKeysException = "DuplicateTagKeysException" // ErrCodeServerException for service response error code - // "ServerException". + // "DirectConnectServerException". // - // A server-side error occurred during the API call. The error message will - // contain additional details about the cause. - ErrCodeServerException = "ServerException" + // A server-side error occurred. + ErrCodeServerException = "DirectConnectServerException" // ErrCodeTooManyTagsException for service response error code // "TooManyTagsException". // - // You have reached the limit on the number of tags that can be assigned to - // a Direct Connect resource. + // You have reached the limit on the number of tags that can be assigned. 
ErrCodeTooManyTagsException = "TooManyTagsException" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/directconnect/service.go b/vendor/github.com/aws/aws-sdk-go/service/directconnect/service.go index 1798ae11a1a..bb182821c59 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/directconnect/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/directconnect/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "directconnect" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "directconnect" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Direct Connect" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the DirectConnect client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/directoryservice/api.go b/vendor/github.com/aws/aws-sdk-go/service/directoryservice/api.go index 9040732ca3d..f7f0fa7c476 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/directoryservice/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/directoryservice/api.go @@ -11,12 +11,104 @@ import ( "github.com/aws/aws-sdk-go/aws/request" ) +const opAcceptSharedDirectory = "AcceptSharedDirectory" + +// AcceptSharedDirectoryRequest generates a "aws/request.Request" representing the +// client's request for the AcceptSharedDirectory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AcceptSharedDirectory for more information on using the AcceptSharedDirectory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AcceptSharedDirectoryRequest method. +// req, resp := client.AcceptSharedDirectoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/AcceptSharedDirectory +func (c *DirectoryService) AcceptSharedDirectoryRequest(input *AcceptSharedDirectoryInput) (req *request.Request, output *AcceptSharedDirectoryOutput) { + op := &request.Operation{ + Name: opAcceptSharedDirectory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AcceptSharedDirectoryInput{} + } + + output = &AcceptSharedDirectoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// AcceptSharedDirectory API operation for AWS Directory Service. +// +// Accepts a directory sharing request that was sent from the directory owner +// account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
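The comment above refers to asserting errors to `awserr.Error` and reading their Code and Message, but the generated docs never show the pattern. Below is a minimal sketch of how the new AcceptSharedDirectory operation might be called and its service errors inspected; the helper name, the placeholder package name, and the `SharedDirectoryId` field are assumptions for illustration, not taken from this diff.

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directoryservice"
)

// acceptShare accepts a pending directory sharing request and inspects any
// service error via the awserr.Error Code/Message accessors mentioned above.
func acceptShare(sharedDirectoryID string) {
	sess := session.Must(session.NewSession())
	svc := directoryservice.New(sess)

	out, err := svc.AcceptSharedDirectory(&directoryservice.AcceptSharedDirectoryInput{
		SharedDirectoryId: aws.String(sharedDirectoryID), // field name assumed from the API shape
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case directoryservice.ErrCodeDirectoryAlreadySharedException:
				fmt.Println("directory already shared:", aerr.Message())
			default:
				fmt.Println(aerr.Code(), aerr.Message())
			}
			return
		}
		fmt.Println(err)
		return
	}
	fmt.Println(out)
}
```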
+// +// See the AWS API reference guide for AWS Directory Service's +// API operation AcceptSharedDirectory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters are not valid. +// +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeDirectoryAlreadySharedException "DirectoryAlreadySharedException" +// The specified directory has already been shared with this AWS account. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/AcceptSharedDirectory +func (c *DirectoryService) AcceptSharedDirectory(input *AcceptSharedDirectoryInput) (*AcceptSharedDirectoryOutput, error) { + req, out := c.AcceptSharedDirectoryRequest(input) + return out, req.Send() +} + +// AcceptSharedDirectoryWithContext is the same as AcceptSharedDirectory with the addition of +// the ability to pass a context and additional request options. +// +// See AcceptSharedDirectory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectoryService) AcceptSharedDirectoryWithContext(ctx aws.Context, input *AcceptSharedDirectoryInput, opts ...request.Option) (*AcceptSharedDirectoryOutput, error) { + req, out := c.AcceptSharedDirectoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opAddIpRoutes = "AddIpRoutes" // AddIpRoutesRequest generates a "aws/request.Request" representing the // client's request for the AddIpRoutes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -122,8 +214,8 @@ const opAddTagsToResource = "AddTagsToResource" // AddTagsToResourceRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -215,8 +307,8 @@ const opCancelSchemaExtension = "CancelSchemaExtension" // CancelSchemaExtensionRequest generates a "aws/request.Request" representing the // client's request for the CancelSchemaExtension operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -303,8 +395,8 @@ const opConnectDirectory = "ConnectDirectory" // ConnectDirectoryRequest generates a "aws/request.Request" representing the // client's request for the ConnectDirectory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -398,8 +490,8 @@ const opCreateAlias = "CreateAlias" // CreateAliasRequest generates a "aws/request.Request" representing the // client's request for the CreateAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -494,8 +586,8 @@ const opCreateComputer = "CreateComputer" // CreateComputerRequest generates a "aws/request.Request" representing the // client's request for the CreateComputer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -595,8 +687,8 @@ const opCreateConditionalForwarder = "CreateConditionalForwarder" // CreateConditionalForwarderRequest generates a "aws/request.Request" representing the // client's request for the CreateConditionalForwarder operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -694,8 +786,8 @@ const opCreateDirectory = "CreateDirectory" // CreateDirectoryRequest generates a "aws/request.Request" representing the // client's request for the CreateDirectory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -785,12 +877,107 @@ func (c *DirectoryService) CreateDirectoryWithContext(ctx aws.Context, input *Cr return out, req.Send() } +const opCreateLogSubscription = "CreateLogSubscription" + +// CreateLogSubscriptionRequest generates a "aws/request.Request" representing the +// client's request for the CreateLogSubscription operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateLogSubscription for more information on using the CreateLogSubscription +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateLogSubscriptionRequest method. +// req, resp := client.CreateLogSubscriptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/CreateLogSubscription +func (c *DirectoryService) CreateLogSubscriptionRequest(input *CreateLogSubscriptionInput) (req *request.Request, output *CreateLogSubscriptionOutput) { + op := &request.Operation{ + Name: opCreateLogSubscription, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateLogSubscriptionInput{} + } + + output = &CreateLogSubscriptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateLogSubscription API operation for AWS Directory Service. +// +// Creates a subscription to forward real time Directory Service domain controller +// security logs to the specified CloudWatch log group in your AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Directory Service's +// API operation CreateLogSubscription for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExistsException" +// The specified entity already exists. +// +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// The operation is not supported. +// +// * ErrCodeInsufficientPermissionsException "InsufficientPermissionsException" +// The account does not have sufficient permission to perform the operation. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/CreateLogSubscription +func (c *DirectoryService) CreateLogSubscription(input *CreateLogSubscriptionInput) (*CreateLogSubscriptionOutput, error) { + req, out := c.CreateLogSubscriptionRequest(input) + return out, req.Send() +} + +// CreateLogSubscriptionWithContext is the same as CreateLogSubscription with the addition of +// the ability to pass a context and additional request options. +// +// See CreateLogSubscription for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
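The *WithContext variants accept an aws.Context for request cancellation, as the comment above notes. A minimal sketch using a deadline-bound context with the new CreateLogSubscription operation follows; the helper name and the `DirectoryId`/`LogGroupName` field names are assumptions based on the API shape, not taken from this diff.

```go
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/directoryservice"
)

// createLogSubscription bounds the call with a 30-second deadline; the SDK
// uses the supplied context for request cancellation.
func createLogSubscription(svc *directoryservice.DirectoryService, directoryID, logGroup string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	_, err := svc.CreateLogSubscriptionWithContext(ctx, &directoryservice.CreateLogSubscriptionInput{
		DirectoryId:  aws.String(directoryID), // field names assumed from the API shape
		LogGroupName: aws.String(logGroup),
	})
	if err != nil {
		return fmt.Errorf("creating log subscription: %v", err)
	}
	return nil
}
```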
+func (c *DirectoryService) CreateLogSubscriptionWithContext(ctx aws.Context, input *CreateLogSubscriptionInput, opts ...request.Option) (*CreateLogSubscriptionOutput, error) { + req, out := c.CreateLogSubscriptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateMicrosoftAD = "CreateMicrosoftAD" // CreateMicrosoftADRequest generates a "aws/request.Request" representing the // client's request for the CreateMicrosoftAD operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -829,7 +1016,7 @@ func (c *DirectoryService) CreateMicrosoftADRequest(input *CreateMicrosoftADInpu // CreateMicrosoftAD API operation for AWS Directory Service. // -// Creates a Microsoft AD in the AWS cloud. +// Creates an AWS Managed Microsoft AD directory. // // Before you call CreateMicrosoftAD, ensure that all of the required permissions // have been explicitly granted through a policy. For details about what permissions @@ -887,8 +1074,8 @@ const opCreateSnapshot = "CreateSnapshot" // CreateSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -982,8 +1169,8 @@ const opCreateTrust = "CreateTrust" // CreateTrustRequest generates a "aws/request.Request" representing the // client's request for the CreateTrust operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1024,12 +1211,13 @@ func (c *DirectoryService) CreateTrustRequest(input *CreateTrustInput) (req *req // // AWS Directory Service for Microsoft Active Directory allows you to configure // trust relationships. For example, you can establish a trust between your -// Microsoft AD in the AWS cloud, and your existing on-premises Microsoft Active -// Directory. This would allow you to provide users and groups access to resources -// in either domain, with a single set of credentials. +// AWS Managed Microsoft AD directory, and your existing on-premises Microsoft +// Active Directory. This would allow you to provide users and groups access +// to resources in either domain, with a single set of credentials. // // This action initiates the creation of the AWS side of a trust relationship -// between a Microsoft AD in the AWS cloud and an external domain. +// between an AWS Managed Microsoft AD directory and an external domain. You +// can create either a forest trust or an external trust. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1083,8 +1271,8 @@ const opDeleteConditionalForwarder = "DeleteConditionalForwarder" // DeleteConditionalForwarderRequest generates a "aws/request.Request" representing the // client's request for the DeleteConditionalForwarder operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1177,8 +1365,8 @@ const opDeleteDirectory = "DeleteDirectory" // DeleteDirectoryRequest generates a "aws/request.Request" representing the // client's request for the DeleteDirectory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1263,12 +1451,100 @@ func (c *DirectoryService) DeleteDirectoryWithContext(ctx aws.Context, input *De return out, req.Send() } +const opDeleteLogSubscription = "DeleteLogSubscription" + +// DeleteLogSubscriptionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLogSubscription operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLogSubscription for more information on using the DeleteLogSubscription +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLogSubscriptionRequest method. +// req, resp := client.DeleteLogSubscriptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DeleteLogSubscription +func (c *DirectoryService) DeleteLogSubscriptionRequest(input *DeleteLogSubscriptionInput) (req *request.Request, output *DeleteLogSubscriptionOutput) { + op := &request.Operation{ + Name: opDeleteLogSubscription, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLogSubscriptionInput{} + } + + output = &DeleteLogSubscriptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteLogSubscription API operation for AWS Directory Service. +// +// Deletes the specified log subscription. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Directory Service's +// API operation DeleteLogSubscription for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// The operation is not supported. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DeleteLogSubscription +func (c *DirectoryService) DeleteLogSubscription(input *DeleteLogSubscriptionInput) (*DeleteLogSubscriptionOutput, error) { + req, out := c.DeleteLogSubscriptionRequest(input) + return out, req.Send() +} + +// DeleteLogSubscriptionWithContext is the same as DeleteLogSubscription with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLogSubscription for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectoryService) DeleteLogSubscriptionWithContext(ctx aws.Context, input *DeleteLogSubscriptionInput, opts ...request.Option) (*DeleteLogSubscriptionOutput, error) { + req, out := c.DeleteLogSubscriptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteSnapshot = "DeleteSnapshot" // DeleteSnapshotRequest generates a "aws/request.Request" representing the // client's request for the DeleteSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1355,8 +1631,8 @@ const opDeleteTrust = "DeleteTrust" // DeleteTrustRequest generates a "aws/request.Request" representing the // client's request for the DeleteTrust operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1395,8 +1671,8 @@ func (c *DirectoryService) DeleteTrustRequest(input *DeleteTrustInput) (req *req // DeleteTrust API operation for AWS Directory Service. // -// Deletes an existing trust relationship between your Microsoft AD in the AWS -// cloud and an external domain. +// Deletes an existing trust relationship between your AWS Managed Microsoft +// AD directory and an external domain. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1447,8 +1723,8 @@ const opDeregisterEventTopic = "DeregisterEventTopic" // DeregisterEventTopicRequest generates a "aws/request.Request" representing the // client's request for the DeregisterEventTopic operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1535,8 +1811,8 @@ const opDescribeConditionalForwarders = "DescribeConditionalForwarders" // DescribeConditionalForwardersRequest generates a "aws/request.Request" representing the // client's request for the DescribeConditionalForwarders operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1632,8 +1908,8 @@ const opDescribeDirectories = "DescribeDirectories" // DescribeDirectoriesRequest generates a "aws/request.Request" representing the // client's request for the DescribeDirectories operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1734,8 +2010,8 @@ const opDescribeDomainControllers = "DescribeDomainControllers" // DescribeDomainControllersRequest generates a "aws/request.Request" representing the // client's request for the DescribeDomainControllers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1884,8 +2160,8 @@ const opDescribeEventTopics = "DescribeEventTopics" // DescribeEventTopicsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEventTopics operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1972,75 +2248,71 @@ func (c *DirectoryService) DescribeEventTopicsWithContext(ctx aws.Context, input return out, req.Send() } -const opDescribeSnapshots = "DescribeSnapshots" +const opDescribeSharedDirectories = "DescribeSharedDirectories" -// DescribeSnapshotsRequest generates a "aws/request.Request" representing the -// client's request for the DescribeSnapshots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeSharedDirectoriesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSharedDirectories operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeSnapshots for more information on using the DescribeSnapshots +// See DescribeSharedDirectories for more information on using the DescribeSharedDirectories // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeSnapshotsRequest method. -// req, resp := client.DescribeSnapshotsRequest(params) +// // Example sending a request using the DescribeSharedDirectoriesRequest method. +// req, resp := client.DescribeSharedDirectoriesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DescribeSnapshots -func (c *DirectoryService) DescribeSnapshotsRequest(input *DescribeSnapshotsInput) (req *request.Request, output *DescribeSnapshotsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DescribeSharedDirectories +func (c *DirectoryService) DescribeSharedDirectoriesRequest(input *DescribeSharedDirectoriesInput) (req *request.Request, output *DescribeSharedDirectoriesOutput) { op := &request.Operation{ - Name: opDescribeSnapshots, + Name: opDescribeSharedDirectories, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeSnapshotsInput{} + input = &DescribeSharedDirectoriesInput{} } - output = &DescribeSnapshotsOutput{} + output = &DescribeSharedDirectoriesOutput{} req = c.newRequest(op, input, output) return } -// DescribeSnapshots API operation for AWS Directory Service. -// -// Obtains information about the directory snapshots that belong to this account. +// DescribeSharedDirectories API operation for AWS Directory Service. // -// This operation supports pagination with the use of the NextToken request -// and response parameters. If more results are available, the DescribeSnapshots.NextToken -// member contains a token that you pass in the next call to DescribeSnapshots -// to retrieve the next set of items. -// -// You can also specify a maximum number of return results with the Limit parameter. +// Returns the shared directories in your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Directory Service's -// API operation DescribeSnapshots for usage and error information. +// API operation DescribeSharedDirectories for usage and error information. // // Returned Error Codes: // * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" // The specified entity could not be found. // +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The NextToken value is not valid. +// // * ErrCodeInvalidParameterException "InvalidParameterException" // One or more parameters are not valid. // -// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" -// The NextToken value is not valid. +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// The operation is not supported. 
// // * ErrCodeClientException "ClientException" // A client exception has occurred. @@ -2048,63 +2320,161 @@ func (c *DirectoryService) DescribeSnapshotsRequest(input *DescribeSnapshotsInpu // * ErrCodeServiceException "ServiceException" // An exception has occurred in AWS Directory Service. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DescribeSnapshots -func (c *DirectoryService) DescribeSnapshots(input *DescribeSnapshotsInput) (*DescribeSnapshotsOutput, error) { - req, out := c.DescribeSnapshotsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DescribeSharedDirectories +func (c *DirectoryService) DescribeSharedDirectories(input *DescribeSharedDirectoriesInput) (*DescribeSharedDirectoriesOutput, error) { + req, out := c.DescribeSharedDirectoriesRequest(input) return out, req.Send() } -// DescribeSnapshotsWithContext is the same as DescribeSnapshots with the addition of +// DescribeSharedDirectoriesWithContext is the same as DescribeSharedDirectories with the addition of // the ability to pass a context and additional request options. // -// See DescribeSnapshots for details on how to use this API operation. +// See DescribeSharedDirectories for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *DirectoryService) DescribeSnapshotsWithContext(ctx aws.Context, input *DescribeSnapshotsInput, opts ...request.Option) (*DescribeSnapshotsOutput, error) { - req, out := c.DescribeSnapshotsRequest(input) +func (c *DirectoryService) DescribeSharedDirectoriesWithContext(ctx aws.Context, input *DescribeSharedDirectoriesInput, opts ...request.Option) (*DescribeSharedDirectoriesOutput, error) { + req, out := c.DescribeSharedDirectoriesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeTrusts = "DescribeTrusts" +const opDescribeSnapshots = "DescribeSnapshots" -// DescribeTrustsRequest generates a "aws/request.Request" representing the -// client's request for the DescribeTrusts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeSnapshotsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSnapshots operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeTrusts for more information on using the DescribeTrusts +// See DescribeSnapshots for more information on using the DescribeSnapshots // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeTrustsRequest method. -// req, resp := client.DescribeTrustsRequest(params) +// // Example sending a request using the DescribeSnapshotsRequest method. 
+// req, resp := client.DescribeSnapshotsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DescribeTrusts -func (c *DirectoryService) DescribeTrustsRequest(input *DescribeTrustsInput) (req *request.Request, output *DescribeTrustsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DescribeSnapshots +func (c *DirectoryService) DescribeSnapshotsRequest(input *DescribeSnapshotsInput) (req *request.Request, output *DescribeSnapshotsOutput) { op := &request.Operation{ - Name: opDescribeTrusts, + Name: opDescribeSnapshots, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeTrustsInput{} + input = &DescribeSnapshotsInput{} + } + + output = &DescribeSnapshotsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeSnapshots API operation for AWS Directory Service. +// +// Obtains information about the directory snapshots that belong to this account. +// +// This operation supports pagination with the use of the NextToken request +// and response parameters. If more results are available, the DescribeSnapshots.NextToken +// member contains a token that you pass in the next call to DescribeSnapshots +// to retrieve the next set of items. +// +// You can also specify a maximum number of return results with the Limit parameter. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Directory Service's +// API operation DescribeSnapshots for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters are not valid. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The NextToken value is not valid. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DescribeSnapshots +func (c *DirectoryService) DescribeSnapshots(input *DescribeSnapshotsInput) (*DescribeSnapshotsOutput, error) { + req, out := c.DescribeSnapshotsRequest(input) + return out, req.Send() +} + +// DescribeSnapshotsWithContext is the same as DescribeSnapshots with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSnapshots for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectoryService) DescribeSnapshotsWithContext(ctx aws.Context, input *DescribeSnapshotsInput, opts ...request.Option) (*DescribeSnapshotsOutput, error) { + req, out := c.DescribeSnapshotsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeTrusts = "DescribeTrusts" + +// DescribeTrustsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeTrusts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeTrusts for more information on using the DescribeTrusts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeTrustsRequest method. +// req, resp := client.DescribeTrustsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/DescribeTrusts +func (c *DirectoryService) DescribeTrustsRequest(input *DescribeTrustsInput) (req *request.Request, output *DescribeTrustsOutput) { + op := &request.Operation{ + Name: opDescribeTrusts, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeTrustsInput{} } output = &DescribeTrustsOutput{} @@ -2171,8 +2541,8 @@ const opDisableRadius = "DisableRadius" // DisableRadiusRequest generates a "aws/request.Request" representing the // client's request for the DisableRadius operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2212,7 +2582,8 @@ func (c *DirectoryService) DisableRadiusRequest(input *DisableRadiusInput) (req // DisableRadius API operation for AWS Directory Service. // // Disables multi-factor authentication (MFA) with the Remote Authentication -// Dial In User Service (RADIUS) server for an AD Connector directory. +// Dial In User Service (RADIUS) server for an AD Connector or Microsoft AD +// directory. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2257,8 +2628,8 @@ const opDisableSso = "DisableSso" // DisableSsoRequest generates a "aws/request.Request" representing the // client's request for the DisableSso operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2348,8 +2719,8 @@ const opEnableRadius = "EnableRadius" // EnableRadiusRequest generates a "aws/request.Request" representing the // client's request for the EnableRadius operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
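The DescribeSnapshots documentation earlier in this hunk describes pagination through the NextToken request and response parameters and an optional Limit. A minimal sketch of that loop is below; the helper name and page size are assumptions for illustration.

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/directoryservice"
)

// listAllSnapshots walks every page of DescribeSnapshots results by feeding
// the returned NextToken back into the next request.
func listAllSnapshots(svc *directoryservice.DirectoryService) error {
	input := &directoryservice.DescribeSnapshotsInput{
		Limit: aws.Int64(50), // maximum number of results per page
	}
	for {
		page, err := svc.DescribeSnapshots(input)
		if err != nil {
			return err
		}
		for _, s := range page.Snapshots {
			fmt.Println(aws.StringValue(s.SnapshotId))
		}
		if page.NextToken == nil {
			return nil // no more pages
		}
		input.NextToken = page.NextToken
	}
}
```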
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2389,7 +2760,8 @@ func (c *DirectoryService) EnableRadiusRequest(input *EnableRadiusInput) (req *r // EnableRadius API operation for AWS Directory Service. // // Enables multi-factor authentication (MFA) with the Remote Authentication -// Dial In User Service (RADIUS) server for an AD Connector directory. +// Dial In User Service (RADIUS) server for an AD Connector or Microsoft AD +// directory. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2440,8 +2812,8 @@ const opEnableSso = "EnableSso" // EnableSsoRequest generates a "aws/request.Request" representing the // client's request for the EnableSso operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2531,8 +2903,8 @@ const opGetDirectoryLimits = "GetDirectoryLimits" // GetDirectoryLimitsRequest generates a "aws/request.Request" representing the // client's request for the GetDirectoryLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2616,8 +2988,8 @@ const opGetSnapshotLimits = "GetSnapshotLimits" // GetSnapshotLimitsRequest generates a "aws/request.Request" representing the // client's request for the GetSnapshotLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2701,8 +3073,8 @@ const opListIpRoutes = "ListIpRoutes" // ListIpRoutesRequest generates a "aws/request.Request" representing the // client's request for the ListIpRoutes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2788,12 +3160,100 @@ func (c *DirectoryService) ListIpRoutesWithContext(ctx aws.Context, input *ListI return out, req.Send() } +const opListLogSubscriptions = "ListLogSubscriptions" + +// ListLogSubscriptionsRequest generates a "aws/request.Request" representing the +// client's request for the ListLogSubscriptions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListLogSubscriptions for more information on using the ListLogSubscriptions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListLogSubscriptionsRequest method. +// req, resp := client.ListLogSubscriptionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/ListLogSubscriptions +func (c *DirectoryService) ListLogSubscriptionsRequest(input *ListLogSubscriptionsInput) (req *request.Request, output *ListLogSubscriptionsOutput) { + op := &request.Operation{ + Name: opListLogSubscriptions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListLogSubscriptionsInput{} + } + + output = &ListLogSubscriptionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListLogSubscriptions API operation for AWS Directory Service. +// +// Lists the active log subscriptions for the AWS account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Directory Service's +// API operation ListLogSubscriptions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The NextToken value is not valid. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/ListLogSubscriptions +func (c *DirectoryService) ListLogSubscriptions(input *ListLogSubscriptionsInput) (*ListLogSubscriptionsOutput, error) { + req, out := c.ListLogSubscriptionsRequest(input) + return out, req.Send() +} + +// ListLogSubscriptionsWithContext is the same as ListLogSubscriptions with the addition of +// the ability to pass a context and additional request options. +// +// See ListLogSubscriptions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectoryService) ListLogSubscriptionsWithContext(ctx aws.Context, input *ListLogSubscriptionsInput, opts ...request.Option) (*ListLogSubscriptionsOutput, error) { + req, out := c.ListLogSubscriptionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListSchemaExtensions = "ListSchemaExtensions" // ListSchemaExtensionsRequest generates a "aws/request.Request" representing the // client's request for the ListSchemaExtensions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2880,8 +3340,8 @@ const opListTagsForResource = "ListTagsForResource" // ListTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2971,8 +3431,8 @@ const opRegisterEventTopic = "RegisterEventTopic" // RegisterEventTopicRequest generates a "aws/request.Request" representing the // client's request for the RegisterEventTopic operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3060,12 +3520,104 @@ func (c *DirectoryService) RegisterEventTopicWithContext(ctx aws.Context, input return out, req.Send() } +const opRejectSharedDirectory = "RejectSharedDirectory" + +// RejectSharedDirectoryRequest generates a "aws/request.Request" representing the +// client's request for the RejectSharedDirectory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RejectSharedDirectory for more information on using the RejectSharedDirectory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RejectSharedDirectoryRequest method. +// req, resp := client.RejectSharedDirectoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/RejectSharedDirectory +func (c *DirectoryService) RejectSharedDirectoryRequest(input *RejectSharedDirectoryInput) (req *request.Request, output *RejectSharedDirectoryOutput) { + op := &request.Operation{ + Name: opRejectSharedDirectory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RejectSharedDirectoryInput{} + } + + output = &RejectSharedDirectoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// RejectSharedDirectory API operation for AWS Directory Service. +// +// Rejects a directory sharing request that was sent from the directory owner +// account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
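+//
+// Illustrative sketch only (assumes svc is an already-configured *DirectoryService
+// client; the identifier below is a placeholder):
+//
+//    // Reject a pending share request received in the directory consumer account.
+//    out, err := svc.RejectSharedDirectory(&RejectSharedDirectoryInput{
+//        SharedDirectoryId: aws.String("d-1234567890"),
+//    })
+//    if err != nil {
+//        // Use awserr.Error type assertions to inspect the error codes listed below.
+//    } else {
+//        fmt.Println(out)
+//    }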
+// +// See the AWS API reference guide for AWS Directory Service's +// API operation RejectSharedDirectory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters are not valid. +// +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeDirectoryAlreadySharedException "DirectoryAlreadySharedException" +// The specified directory has already been shared with this AWS account. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/RejectSharedDirectory +func (c *DirectoryService) RejectSharedDirectory(input *RejectSharedDirectoryInput) (*RejectSharedDirectoryOutput, error) { + req, out := c.RejectSharedDirectoryRequest(input) + return out, req.Send() +} + +// RejectSharedDirectoryWithContext is the same as RejectSharedDirectory with the addition of +// the ability to pass a context and additional request options. +// +// See RejectSharedDirectory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectoryService) RejectSharedDirectoryWithContext(ctx aws.Context, input *RejectSharedDirectoryInput, opts ...request.Option) (*RejectSharedDirectoryOutput, error) { + req, out := c.RejectSharedDirectoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opRemoveIpRoutes = "RemoveIpRoutes" // RemoveIpRoutesRequest generates a "aws/request.Request" representing the // client's request for the RemoveIpRoutes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3155,8 +3707,8 @@ const opRemoveTagsFromResource = "RemoveTagsFromResource" // RemoveTagsFromResourceRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3239,73 +3791,172 @@ func (c *DirectoryService) RemoveTagsFromResourceWithContext(ctx aws.Context, in return out, req.Send() } -const opRestoreFromSnapshot = "RestoreFromSnapshot" +const opResetUserPassword = "ResetUserPassword" -// RestoreFromSnapshotRequest generates a "aws/request.Request" representing the -// client's request for the RestoreFromSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// ResetUserPasswordRequest generates a "aws/request.Request" representing the +// client's request for the ResetUserPassword operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RestoreFromSnapshot for more information on using the RestoreFromSnapshot +// See ResetUserPassword for more information on using the ResetUserPassword // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RestoreFromSnapshotRequest method. -// req, resp := client.RestoreFromSnapshotRequest(params) +// // Example sending a request using the ResetUserPasswordRequest method. +// req, resp := client.ResetUserPasswordRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/RestoreFromSnapshot -func (c *DirectoryService) RestoreFromSnapshotRequest(input *RestoreFromSnapshotInput) (req *request.Request, output *RestoreFromSnapshotOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/ResetUserPassword +func (c *DirectoryService) ResetUserPasswordRequest(input *ResetUserPasswordInput) (req *request.Request, output *ResetUserPasswordOutput) { op := &request.Operation{ - Name: opRestoreFromSnapshot, + Name: opResetUserPassword, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &RestoreFromSnapshotInput{} + input = &ResetUserPasswordInput{} } - output = &RestoreFromSnapshotOutput{} + output = &ResetUserPasswordOutput{} req = c.newRequest(op, input, output) return } -// RestoreFromSnapshot API operation for AWS Directory Service. +// ResetUserPassword API operation for AWS Directory Service. // -// Restores a directory using an existing directory snapshot. -// -// When you restore a directory from a snapshot, any changes made to the directory -// after the snapshot date are overwritten. -// -// This action returns as soon as the restore operation is initiated. You can -// monitor the progress of the restore operation by calling the DescribeDirectories -// operation with the directory identifier. When the DirectoryDescription.Stage -// value changes to Active, the restore operation is complete. +// Resets the password for any user in your AWS Managed Microsoft AD or Simple +// AD directory. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Directory Service's -// API operation RestoreFromSnapshot for usage and error information. +// API operation ResetUserPassword for usage and error information. // // Returned Error Codes: -// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" -// The specified entity could not be found. +// * ErrCodeDirectoryUnavailableException "DirectoryUnavailableException" +// The specified directory is unavailable or could not be found. // -// * ErrCodeInvalidParameterException "InvalidParameterException" -// One or more parameters are not valid. 
+// * ErrCodeUserDoesNotExistException "UserDoesNotExistException" +// The user provided a username that does not exist in your directory. +// +// * ErrCodeInvalidPasswordException "InvalidPasswordException" +// The new password provided by the user does not meet the password complexity +// requirements defined in your directory. +// +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// The operation is not supported. +// +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/ResetUserPassword +func (c *DirectoryService) ResetUserPassword(input *ResetUserPasswordInput) (*ResetUserPasswordOutput, error) { + req, out := c.ResetUserPasswordRequest(input) + return out, req.Send() +} + +// ResetUserPasswordWithContext is the same as ResetUserPassword with the addition of +// the ability to pass a context and additional request options. +// +// See ResetUserPassword for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectoryService) ResetUserPasswordWithContext(ctx aws.Context, input *ResetUserPasswordInput, opts ...request.Option) (*ResetUserPasswordOutput, error) { + req, out := c.ResetUserPasswordRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRestoreFromSnapshot = "RestoreFromSnapshot" + +// RestoreFromSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the RestoreFromSnapshot operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RestoreFromSnapshot for more information on using the RestoreFromSnapshot +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RestoreFromSnapshotRequest method. +// req, resp := client.RestoreFromSnapshotRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/RestoreFromSnapshot +func (c *DirectoryService) RestoreFromSnapshotRequest(input *RestoreFromSnapshotInput) (req *request.Request, output *RestoreFromSnapshotOutput) { + op := &request.Operation{ + Name: opRestoreFromSnapshot, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RestoreFromSnapshotInput{} + } + + output = &RestoreFromSnapshotOutput{} + req = c.newRequest(op, input, output) + return +} + +// RestoreFromSnapshot API operation for AWS Directory Service. +// +// Restores a directory using an existing directory snapshot. 
+// +// When you restore a directory from a snapshot, any changes made to the directory +// after the snapshot date are overwritten. +// +// This action returns as soon as the restore operation is initiated. You can +// monitor the progress of the restore operation by calling the DescribeDirectories +// operation with the directory identifier. When the DirectoryDescription.Stage +// value changes to Active, the restore operation is complete. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Directory Service's +// API operation RestoreFromSnapshot for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters are not valid. // // * ErrCodeClientException "ClientException" // A client exception has occurred. @@ -3335,12 +3986,136 @@ func (c *DirectoryService) RestoreFromSnapshotWithContext(ctx aws.Context, input return out, req.Send() } +const opShareDirectory = "ShareDirectory" + +// ShareDirectoryRequest generates a "aws/request.Request" representing the +// client's request for the ShareDirectory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ShareDirectory for more information on using the ShareDirectory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ShareDirectoryRequest method. +// req, resp := client.ShareDirectoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/ShareDirectory +func (c *DirectoryService) ShareDirectoryRequest(input *ShareDirectoryInput) (req *request.Request, output *ShareDirectoryOutput) { + op := &request.Operation{ + Name: opShareDirectory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ShareDirectoryInput{} + } + + output = &ShareDirectoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// ShareDirectory API operation for AWS Directory Service. +// +// Shares a specified directory (DirectoryId) in your AWS account (directory +// owner) with another AWS account (directory consumer). With this operation +// you can use your directory from any AWS account and from any Amazon VPC within +// an AWS Region. +// +// When you share your AWS Managed Microsoft AD directory, AWS Directory Service +// creates a shared directory in the directory consumer account. This shared +// directory contains the metadata to provide access to the directory within +// the directory owner account. The shared directory is visible in all VPCs +// in the directory consumer account. +// +// The ShareMethod parameter determines whether the specified directory can +// be shared between AWS accounts inside the same AWS organization (ORGANIZATIONS). 
+// It also determines whether you can share the directory with any other AWS +// account either inside or outside of the organization (HANDSHAKE). +// +// The ShareNotes parameter is only used when HANDSHAKE is called, which sends +// a directory sharing request to the directory consumer. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Directory Service's +// API operation ShareDirectory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDirectoryAlreadySharedException "DirectoryAlreadySharedException" +// The specified directory has already been shared with this AWS account. +// +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeInvalidTargetException "InvalidTargetException" +// The specified shared target is not valid. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters are not valid. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeShareLimitExceededException "ShareLimitExceededException" +// The maximum number of AWS accounts that you can share with this directory +// has been reached. +// +// * ErrCodeOrganizationsException "OrganizationsException" +// Exception encountered while trying to access your AWS organization. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// You do not have sufficient access to perform this action. +// +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// The operation is not supported. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/ShareDirectory +func (c *DirectoryService) ShareDirectory(input *ShareDirectoryInput) (*ShareDirectoryOutput, error) { + req, out := c.ShareDirectoryRequest(input) + return out, req.Send() +} + +// ShareDirectoryWithContext is the same as ShareDirectory with the addition of +// the ability to pass a context and additional request options. +// +// See ShareDirectory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectoryService) ShareDirectoryWithContext(ctx aws.Context, input *ShareDirectoryInput, opts ...request.Option) (*ShareDirectoryOutput, error) { + req, out := c.ShareDirectoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opStartSchemaExtension = "StartSchemaExtension" // StartSchemaExtensionRequest generates a "aws/request.Request" representing the // client's request for the StartSchemaExtension operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
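// Illustrative sketch for the ShareDirectory operation added above. It assumes svc is
// an already-configured *DirectoryService client, the identifiers are placeholders,
// and the ShareTarget field and type names are assumptions not shown in this diff:
//
//    out, err := svc.ShareDirectory(&ShareDirectoryInput{
//        DirectoryId: aws.String("d-1234567890"),
//        ShareMethod: aws.String("HANDSHAKE"),
//        ShareNotes:  aws.String("Share request for the consumer account"),
//        ShareTarget: &ShareTarget{ // assumed structure: consumer account ID and target type
//            Id:   aws.String("111122223333"),
//            Type: aws.String("ACCOUNT"),
//        },
//    })
//    if err == nil {
//        fmt.Println(out)
//    }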
@@ -3431,12 +4206,103 @@ func (c *DirectoryService) StartSchemaExtensionWithContext(ctx aws.Context, inpu return out, req.Send() } +const opUnshareDirectory = "UnshareDirectory" + +// UnshareDirectoryRequest generates a "aws/request.Request" representing the +// client's request for the UnshareDirectory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UnshareDirectory for more information on using the UnshareDirectory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UnshareDirectoryRequest method. +// req, resp := client.UnshareDirectoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/UnshareDirectory +func (c *DirectoryService) UnshareDirectoryRequest(input *UnshareDirectoryInput) (req *request.Request, output *UnshareDirectoryOutput) { + op := &request.Operation{ + Name: opUnshareDirectory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UnshareDirectoryInput{} + } + + output = &UnshareDirectoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// UnshareDirectory API operation for AWS Directory Service. +// +// Stops the directory sharing between the directory owner and consumer accounts. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Directory Service's +// API operation UnshareDirectory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeInvalidTargetException "InvalidTargetException" +// The specified shared target is not valid. +// +// * ErrCodeDirectoryNotSharedException "DirectoryNotSharedException" +// The specified directory has not been shared with this AWS account. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/UnshareDirectory +func (c *DirectoryService) UnshareDirectory(input *UnshareDirectoryInput) (*UnshareDirectoryOutput, error) { + req, out := c.UnshareDirectoryRequest(input) + return out, req.Send() +} + +// UnshareDirectoryWithContext is the same as UnshareDirectory with the addition of +// the ability to pass a context and additional request options. +// +// See UnshareDirectory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *DirectoryService) UnshareDirectoryWithContext(ctx aws.Context, input *UnshareDirectoryInput, opts ...request.Option) (*UnshareDirectoryOutput, error) { + req, out := c.UnshareDirectoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateConditionalForwarder = "UpdateConditionalForwarder" // UpdateConditionalForwarderRequest generates a "aws/request.Request" representing the // client's request for the UpdateConditionalForwarder operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3529,8 +4395,8 @@ const opUpdateNumberOfDomainControllers = "UpdateNumberOfDomainControllers" // UpdateNumberOfDomainControllersRequest generates a "aws/request.Request" representing the // client's request for the UpdateNumberOfDomainControllers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3632,8 +4498,8 @@ const opUpdateRadius = "UpdateRadius" // UpdateRadiusRequest generates a "aws/request.Request" representing the // client's request for the UpdateRadius operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3673,7 +4539,7 @@ func (c *DirectoryService) UpdateRadiusRequest(input *UpdateRadiusInput) (req *r // UpdateRadius API operation for AWS Directory Service. // // Updates the Remote Authentication Dial In User Service (RADIUS) server information -// for an AD Connector directory. +// for an AD Connector or Microsoft AD directory. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3717,62 +4583,59 @@ func (c *DirectoryService) UpdateRadiusWithContext(ctx aws.Context, input *Updat return out, req.Send() } -const opVerifyTrust = "VerifyTrust" +const opUpdateTrust = "UpdateTrust" -// VerifyTrustRequest generates a "aws/request.Request" representing the -// client's request for the VerifyTrust operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateTrustRequest generates a "aws/request.Request" representing the +// client's request for the UpdateTrust operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See VerifyTrust for more information on using the VerifyTrust +// See UpdateTrust for more information on using the UpdateTrust // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the VerifyTrustRequest method. -// req, resp := client.VerifyTrustRequest(params) +// // Example sending a request using the UpdateTrustRequest method. +// req, resp := client.UpdateTrustRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/VerifyTrust -func (c *DirectoryService) VerifyTrustRequest(input *VerifyTrustInput) (req *request.Request, output *VerifyTrustOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/UpdateTrust +func (c *DirectoryService) UpdateTrustRequest(input *UpdateTrustInput) (req *request.Request, output *UpdateTrustOutput) { op := &request.Operation{ - Name: opVerifyTrust, + Name: opUpdateTrust, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &VerifyTrustInput{} + input = &UpdateTrustInput{} } - output = &VerifyTrustOutput{} + output = &UpdateTrustOutput{} req = c.newRequest(op, input, output) return } -// VerifyTrust API operation for AWS Directory Service. -// -// AWS Directory Service for Microsoft Active Directory allows you to configure -// and verify trust relationships. +// UpdateTrust API operation for AWS Directory Service. // -// This action verifies a trust relationship between your Microsoft AD in the -// AWS cloud and an external domain. +// Updates the trust that has been set up between your AWS Managed Microsoft +// AD directory and an on-premises Active Directory. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Directory Service's -// API operation VerifyTrust for usage and error information. +// API operation UpdateTrust for usage and error information. // // Returned Error Codes: // * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" @@ -3787,12 +4650,104 @@ func (c *DirectoryService) VerifyTrustRequest(input *VerifyTrustInput) (req *req // * ErrCodeServiceException "ServiceException" // An exception has occurred in AWS Directory Service. // -// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" -// The operation is not supported. -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/VerifyTrust -func (c *DirectoryService) VerifyTrust(input *VerifyTrustInput) (*VerifyTrustOutput, error) { - req, out := c.VerifyTrustRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/UpdateTrust +func (c *DirectoryService) UpdateTrust(input *UpdateTrustInput) (*UpdateTrustOutput, error) { + req, out := c.UpdateTrustRequest(input) + return out, req.Send() +} + +// UpdateTrustWithContext is the same as UpdateTrust with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateTrust for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DirectoryService) UpdateTrustWithContext(ctx aws.Context, input *UpdateTrustInput, opts ...request.Option) (*UpdateTrustOutput, error) { + req, out := c.UpdateTrustRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opVerifyTrust = "VerifyTrust" + +// VerifyTrustRequest generates a "aws/request.Request" representing the +// client's request for the VerifyTrust operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See VerifyTrust for more information on using the VerifyTrust +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the VerifyTrustRequest method. +// req, resp := client.VerifyTrustRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/VerifyTrust +func (c *DirectoryService) VerifyTrustRequest(input *VerifyTrustInput) (req *request.Request, output *VerifyTrustOutput) { + op := &request.Operation{ + Name: opVerifyTrust, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &VerifyTrustInput{} + } + + output = &VerifyTrustOutput{} + req = c.newRequest(op, input, output) + return +} + +// VerifyTrust API operation for AWS Directory Service. +// +// AWS Directory Service for Microsoft Active Directory allows you to configure +// and verify trust relationships. +// +// This action verifies a trust relationship between your AWS Managed Microsoft +// AD directory and an external domain. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Directory Service's +// API operation VerifyTrust for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityDoesNotExistException "EntityDoesNotExistException" +// The specified entity could not be found. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters are not valid. +// +// * ErrCodeClientException "ClientException" +// A client exception has occurred. +// +// * ErrCodeServiceException "ServiceException" +// An exception has occurred in AWS Directory Service. +// +// * ErrCodeUnsupportedOperationException "UnsupportedOperationException" +// The operation is not supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ds-2015-04-16/VerifyTrust +func (c *DirectoryService) VerifyTrust(input *VerifyTrustInput) (*VerifyTrustOutput, error) { + req, out := c.VerifyTrustRequest(input) return out, req.Send() } @@ -3812,6 +4767,68 @@ func (c *DirectoryService) VerifyTrustWithContext(ctx aws.Context, input *Verify return out, req.Send() } +type AcceptSharedDirectoryInput struct { + _ struct{} `type:"structure"` + + // Identifier of the shared directory in the directory consumer account. This + // identifier is different for each directory owner account. 
+ // + // SharedDirectoryId is a required field + SharedDirectoryId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s AcceptSharedDirectoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AcceptSharedDirectoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AcceptSharedDirectoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AcceptSharedDirectoryInput"} + if s.SharedDirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("SharedDirectoryId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSharedDirectoryId sets the SharedDirectoryId field's value. +func (s *AcceptSharedDirectoryInput) SetSharedDirectoryId(v string) *AcceptSharedDirectoryInput { + s.SharedDirectoryId = &v + return s +} + +type AcceptSharedDirectoryOutput struct { + _ struct{} `type:"structure"` + + // The shared directory in the directory consumer account. + SharedDirectory *SharedDirectory `type:"structure"` +} + +// String returns the string representation +func (s AcceptSharedDirectoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AcceptSharedDirectoryOutput) GoString() string { + return s.String() +} + +// SetSharedDirectory sets the SharedDirectory field's value. +func (s *AcceptSharedDirectoryOutput) SetSharedDirectory(v *SharedDirectory) *AcceptSharedDirectoryOutput { + s.SharedDirectory = v + return s +} + type AddIpRoutesInput struct { _ struct{} `type:"structure"` @@ -4223,7 +5240,7 @@ type ConnectDirectoryInput struct { // A textual description for the directory. Description *string `type:"string"` - // The fully-qualified name of the on-premises directory, such as corp.example.com. + // The fully qualified name of the on-premises directory, such as corp.example.com. // // Name is a required field Name *string `type:"string" required:"true"` @@ -4664,7 +5681,7 @@ type CreateDirectoryInput struct { Name *string `type:"string" required:"true"` // The password for the directory administrator. The directory creation process - // creates a directory administrator account with the username Administrator + // creates a directory administrator account with the user name Administrator // and this password. // // Password is a required field @@ -4777,7 +5794,78 @@ func (s *CreateDirectoryOutput) SetDirectoryId(v string) *CreateDirectoryOutput return s } -// Creates a Microsoft AD in the AWS cloud. +type CreateLogSubscriptionInput struct { + _ struct{} `type:"structure"` + + // Identifier (ID) of the directory to which you want to subscribe and receive + // real-time logs to your specified CloudWatch log group. + // + // DirectoryId is a required field + DirectoryId *string `type:"string" required:"true"` + + // The name of the CloudWatch log group where the real-time domain controller + // logs are forwarded. + // + // LogGroupName is a required field + LogGroupName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateLogSubscriptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLogSubscriptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateLogSubscriptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLogSubscriptionInput"} + if s.DirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("DirectoryId")) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *CreateLogSubscriptionInput) SetDirectoryId(v string) *CreateLogSubscriptionInput { + s.DirectoryId = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *CreateLogSubscriptionInput) SetLogGroupName(v string) *CreateLogSubscriptionInput { + s.LogGroupName = &v + return s +} + +type CreateLogSubscriptionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CreateLogSubscriptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLogSubscriptionOutput) GoString() string { + return s.String() +} + +// Creates an AWS Managed Microsoft AD directory. type CreateMicrosoftADInput struct { _ struct{} `type:"structure"` @@ -4785,8 +5873,8 @@ type CreateMicrosoftADInput struct { // console Directory Details page after the directory is created. Description *string `type:"string"` - // AWS Microsoft AD is available in two editions: Standard and Enterprise. Enterprise - // is the default. + // AWS Managed Microsoft AD is available in two editions: Standard and Enterprise. + // Enterprise is the default. Edition *string `type:"string" enum:"DirectoryEdition"` // The fully qualified domain name for the directory, such as corp.example.com. @@ -4980,19 +6068,19 @@ func (s *CreateSnapshotOutput) SetSnapshotId(v string) *CreateSnapshotOutput { // AWS Directory Service for Microsoft Active Directory allows you to configure // trust relationships. For example, you can establish a trust between your -// Microsoft AD in the AWS cloud, and your existing on-premises Microsoft Active -// Directory. This would allow you to provide users and groups access to resources -// in either domain, with a single set of credentials. +// AWS Managed Microsoft AD directory, and your existing on-premises Microsoft +// Active Directory. This would allow you to provide users and groups access +// to resources in either domain, with a single set of credentials. // // This action initiates the creation of the AWS side of a trust relationship -// between a Microsoft AD in the AWS cloud and an external domain. +// between an AWS Managed Microsoft AD directory and an external domain. type CreateTrustInput struct { _ struct{} `type:"structure"` // The IP addresses of the remote DNS server associated with RemoteDomainName. ConditionalForwarderIpAddrs []*string `type:"list"` - // The Directory ID of the Microsoft AD in the AWS cloud for which to establish + // The Directory ID of the AWS Managed Microsoft AD directory for which to establish // the trust relationship. // // DirectoryId is a required field @@ -5004,6 +6092,9 @@ type CreateTrustInput struct { // RemoteDomainName is a required field RemoteDomainName *string `type:"string" required:"true"` + // Optional parameter to enable selective authentication for the trust. 
+ SelectiveAuth *string `type:"string" enum:"SelectiveAuth"` + // The direction of the trust relationship. // // TrustDirection is a required field @@ -5015,7 +6106,7 @@ type CreateTrustInput struct { // TrustPassword is a required field TrustPassword *string `min:"1" type:"string" required:"true"` - // The trust relationship type. + // The trust relationship type. Forest is the default. TrustType *string `type:"string" enum:"TrustType"` } @@ -5072,6 +6163,12 @@ func (s *CreateTrustInput) SetRemoteDomainName(v string) *CreateTrustInput { return s } +// SetSelectiveAuth sets the SelectiveAuth field's value. +func (s *CreateTrustInput) SetSelectiveAuth(v string) *CreateTrustInput { + s.SelectiveAuth = &v + return s +} + // SetTrustDirection sets the TrustDirection field's value. func (s *CreateTrustInput) SetTrustDirection(v string) *CreateTrustInput { s.TrustDirection = &v @@ -5246,6 +6343,58 @@ func (s *DeleteDirectoryOutput) SetDirectoryId(v string) *DeleteDirectoryOutput return s } +type DeleteLogSubscriptionInput struct { + _ struct{} `type:"structure"` + + // Identifier (ID) of the directory whose log subscription you want to delete. + // + // DirectoryId is a required field + DirectoryId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteLogSubscriptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLogSubscriptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLogSubscriptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLogSubscriptionInput"} + if s.DirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("DirectoryId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *DeleteLogSubscriptionInput) SetDirectoryId(v string) *DeleteLogSubscriptionInput { + s.DirectoryId = &v + return s +} + +type DeleteLogSubscriptionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteLogSubscriptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLogSubscriptionOutput) GoString() string { + return s.String() +} + // Contains the inputs for the DeleteSnapshot operation. type DeleteSnapshotInput struct { _ struct{} `type:"structure"` @@ -5309,8 +6458,8 @@ func (s *DeleteSnapshotOutput) SetSnapshotId(v string) *DeleteSnapshotOutput { return s } -// Deletes the local side of an existing trust relationship between the Microsoft -// AD in the AWS cloud and the external domain. +// Deletes the local side of an existing trust relationship between the AWS +// Managed Microsoft AD directory and the external domain. type DeleteTrustInput struct { _ struct{} `type:"structure"` @@ -5779,62 +6928,162 @@ func (s *DescribeEventTopicsOutput) SetEventTopics(v []*EventTopic) *DescribeEve return s } -// Contains the inputs for the DescribeSnapshots operation. -type DescribeSnapshotsInput struct { +type DescribeSharedDirectoriesInput struct { _ struct{} `type:"structure"` - // The identifier of the directory for which to retrieve snapshot information. - DirectoryId *string `type:"string"` - - // The maximum number of objects to return. + // The number of shared directories to return in the response object. 
Limit *int64 `type:"integer"` - // The DescribeSnapshotsResult.NextToken value from a previous call to DescribeSnapshots. - // Pass null if this is the first call. + // The DescribeSharedDirectoriesResult.NextToken value from a previous call + // to DescribeSharedDirectories. Pass null if this is the first call. NextToken *string `type:"string"` - // A list of identifiers of the snapshots to obtain the information for. If - // this member is null or empty, all snapshots are returned using the Limit - // and NextToken members. - SnapshotIds []*string `type:"list"` + // Returns the identifier of the directory in the directory owner account. + // + // OwnerDirectoryId is a required field + OwnerDirectoryId *string `type:"string" required:"true"` + + // A list of identifiers of all shared directories in your account. + SharedDirectoryIds []*string `type:"list"` } // String returns the string representation -func (s DescribeSnapshotsInput) String() string { +func (s DescribeSharedDirectoriesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeSnapshotsInput) GoString() string { +func (s DescribeSharedDirectoriesInput) GoString() string { return s.String() } -// SetDirectoryId sets the DirectoryId field's value. -func (s *DescribeSnapshotsInput) SetDirectoryId(v string) *DescribeSnapshotsInput { - s.DirectoryId = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeSharedDirectoriesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeSharedDirectoriesInput"} + if s.OwnerDirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("OwnerDirectoryId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } // SetLimit sets the Limit field's value. -func (s *DescribeSnapshotsInput) SetLimit(v int64) *DescribeSnapshotsInput { +func (s *DescribeSharedDirectoriesInput) SetLimit(v int64) *DescribeSharedDirectoriesInput { s.Limit = &v return s } // SetNextToken sets the NextToken field's value. -func (s *DescribeSnapshotsInput) SetNextToken(v string) *DescribeSnapshotsInput { +func (s *DescribeSharedDirectoriesInput) SetNextToken(v string) *DescribeSharedDirectoriesInput { s.NextToken = &v return s } -// SetSnapshotIds sets the SnapshotIds field's value. -func (s *DescribeSnapshotsInput) SetSnapshotIds(v []*string) *DescribeSnapshotsInput { - s.SnapshotIds = v +// SetOwnerDirectoryId sets the OwnerDirectoryId field's value. +func (s *DescribeSharedDirectoriesInput) SetOwnerDirectoryId(v string) *DescribeSharedDirectoriesInput { + s.OwnerDirectoryId = &v return s } -// Contains the results of the DescribeSnapshots operation. -type DescribeSnapshotsOutput struct { +// SetSharedDirectoryIds sets the SharedDirectoryIds field's value. +func (s *DescribeSharedDirectoriesInput) SetSharedDirectoryIds(v []*string) *DescribeSharedDirectoriesInput { + s.SharedDirectoryIds = v + return s +} + +type DescribeSharedDirectoriesOutput struct { + _ struct{} `type:"structure"` + + // If not null, token that indicates that more results are available. Pass this + // value for the NextToken parameter in a subsequent call to DescribeSharedDirectories + // to retrieve the next set of items. + NextToken *string `type:"string"` + + // A list of all shared directories in your account. 
+ SharedDirectories []*SharedDirectory `type:"list"` +} + +// String returns the string representation +func (s DescribeSharedDirectoriesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSharedDirectoriesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSharedDirectoriesOutput) SetNextToken(v string) *DescribeSharedDirectoriesOutput { + s.NextToken = &v + return s +} + +// SetSharedDirectories sets the SharedDirectories field's value. +func (s *DescribeSharedDirectoriesOutput) SetSharedDirectories(v []*SharedDirectory) *DescribeSharedDirectoriesOutput { + s.SharedDirectories = v + return s +} + +// Contains the inputs for the DescribeSnapshots operation. +type DescribeSnapshotsInput struct { + _ struct{} `type:"structure"` + + // The identifier of the directory for which to retrieve snapshot information. + DirectoryId *string `type:"string"` + + // The maximum number of objects to return. + Limit *int64 `type:"integer"` + + // The DescribeSnapshotsResult.NextToken value from a previous call to DescribeSnapshots. + // Pass null if this is the first call. + NextToken *string `type:"string"` + + // A list of identifiers of the snapshots to obtain the information for. If + // this member is null or empty, all snapshots are returned using the Limit + // and NextToken members. + SnapshotIds []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeSnapshotsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSnapshotsInput) GoString() string { + return s.String() +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *DescribeSnapshotsInput) SetDirectoryId(v string) *DescribeSnapshotsInput { + s.DirectoryId = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *DescribeSnapshotsInput) SetLimit(v int64) *DescribeSnapshotsInput { + s.Limit = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSnapshotsInput) SetNextToken(v string) *DescribeSnapshotsInput { + s.NextToken = &v + return s +} + +// SetSnapshotIds sets the SnapshotIds field's value. +func (s *DescribeSnapshotsInput) SetSnapshotIds(v []*string) *DescribeSnapshotsInput { + s.SnapshotIds = v + return s +} + +// Contains the results of the DescribeSnapshots operation. +type DescribeSnapshotsOutput struct { _ struct{} `type:"structure"` // If not null, more results are available. Pass this value in the NextToken @@ -5872,9 +7121,9 @@ func (s *DescribeSnapshotsOutput) SetSnapshots(v []*Snapshot) *DescribeSnapshots return s } -// Describes the trust relationships for a particular Microsoft AD in the AWS -// cloud. If no input parameters are are provided, such as directory ID or trust -// ID, this request describes all the trust relationships. +// Describes the trust relationships for a particular AWS Managed Microsoft +// AD directory. If no input parameters are are provided, such as directory +// ID or trust ID, this request describes all the trust relationships. type DescribeTrustsInput struct { _ struct{} `type:"structure"` @@ -5982,8 +7231,8 @@ type DirectoryConnectSettings struct { // CustomerDnsIps is a required field CustomerDnsIps []*string `type:"list" required:"true"` - // The username of an account in the on-premises directory that is used to connect - // to the directory. 
This account must have the following privileges: + // The user name of an account in the on-premises directory that is used to + // connect to the directory. This account must have the following permissions: // // * Read users and groups // @@ -6074,7 +7323,7 @@ type DirectoryConnectSettingsDescription struct { // The IP addresses of the AD Connector servers. ConnectIps []*string `type:"list"` - // The username of the service account in the on-premises directory. + // The user name of the service account in the on-premises directory. CustomerUserName *string `min:"1" type:"string"` // The security group identifier for the AD Connector directory. @@ -6172,11 +7421,14 @@ type DirectoryDescription struct { Edition *string `type:"string" enum:"DirectoryEdition"` // Specifies when the directory was created. - LaunchTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LaunchTime *time.Time `type:"timestamp"` - // The fully-qualified name of the directory. + // The fully qualified name of the directory. Name *string `type:"string"` + // Describes the AWS Managed Microsoft AD directory in the directory owner account. + OwnerDirectoryDescription *OwnerDirectoryDescription `type:"structure"` + // A RadiusSettings object that contains information about the RADIUS server // configured for this directory. RadiusSettings *RadiusSettings `type:"structure"` @@ -6184,13 +7436,26 @@ type DirectoryDescription struct { // The status of the RADIUS MFA server connection. RadiusStatus *string `type:"string" enum:"RadiusStatus"` + // The method used when sharing a directory to determine whether the directory + // should be shared within your AWS organization (ORGANIZATIONS) or with any + // AWS account by sending a shared directory request (HANDSHAKE). + ShareMethod *string `type:"string" enum:"ShareMethod"` + + // A directory share request that is sent by the directory owner to the directory + // consumer. The request includes a typed message to help the directory consumer + // administrator determine whether to approve or reject the share invitation. + ShareNotes *string `type:"string"` + + // Current directory status of the shared AWS Managed Microsoft AD directory. + ShareStatus *string `type:"string" enum:"ShareStatus"` + // The short name of the directory. ShortName *string `type:"string"` // The directory size. Size *string `type:"string" enum:"DirectorySize"` - // Indicates if single-sign on is enabled for the directory. For more information, + // Indicates if single sign-on is enabled for the directory. For more information, // see EnableSso and DisableSso. SsoEnabled *bool `type:"boolean"` @@ -6198,7 +7463,7 @@ type DirectoryDescription struct { Stage *string `type:"string" enum:"DirectoryStage"` // The date and time that the stage was last updated. - StageLastUpdatedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StageLastUpdatedDateTime *time.Time `type:"timestamp"` // Additional information about the directory stage. StageReason *string `type:"string"` @@ -6282,6 +7547,12 @@ func (s *DirectoryDescription) SetName(v string) *DirectoryDescription { return s } +// SetOwnerDirectoryDescription sets the OwnerDirectoryDescription field's value. +func (s *DirectoryDescription) SetOwnerDirectoryDescription(v *OwnerDirectoryDescription) *DirectoryDescription { + s.OwnerDirectoryDescription = v + return s +} + // SetRadiusSettings sets the RadiusSettings field's value. 
func (s *DirectoryDescription) SetRadiusSettings(v *RadiusSettings) *DirectoryDescription { s.RadiusSettings = v @@ -6294,6 +7565,24 @@ func (s *DirectoryDescription) SetRadiusStatus(v string) *DirectoryDescription { return s } +// SetShareMethod sets the ShareMethod field's value. +func (s *DirectoryDescription) SetShareMethod(v string) *DirectoryDescription { + s.ShareMethod = &v + return s +} + +// SetShareNotes sets the ShareNotes field's value. +func (s *DirectoryDescription) SetShareNotes(v string) *DirectoryDescription { + s.ShareNotes = &v + return s +} + +// SetShareStatus sets the ShareStatus field's value. +func (s *DirectoryDescription) SetShareStatus(v string) *DirectoryDescription { + s.ShareStatus = &v + return s +} + // SetShortName sets the ShortName field's value. func (s *DirectoryDescription) SetShortName(v string) *DirectoryDescription { s.ShortName = &v @@ -6355,13 +7644,14 @@ type DirectoryLimits struct { // Indicates if the cloud directory limit has been reached. CloudOnlyDirectoriesLimitReached *bool `type:"boolean"` - // The current number of Microsoft AD directories in the region. + // The current number of AWS Managed Microsoft AD directories in the region. CloudOnlyMicrosoftADCurrentCount *int64 `type:"integer"` - // The maximum number of Microsoft AD directories allowed in the region. + // The maximum number of AWS Managed Microsoft AD directories allowed in the + // region. CloudOnlyMicrosoftADLimit *int64 `type:"integer"` - // Indicates if the Microsoft AD directory limit has been reached. + // Indicates if the AWS Managed Microsoft AD directory limit has been reached. CloudOnlyMicrosoftADLimitReached *bool `type:"boolean"` // The current number of connected directories in the region. @@ -6703,13 +7993,13 @@ type DomainController struct { DomainControllerId *string `type:"string"` // Specifies when the domain controller was created. - LaunchTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LaunchTime *time.Time `type:"timestamp"` // The status of the domain controller. Status *string `type:"string" enum:"DomainControllerStatus"` // The date and time that the status was last updated. - StatusLastUpdatedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StatusLastUpdatedDateTime *time.Time `type:"timestamp"` // A description of the domain controller state. StatusReason *string `type:"string"` @@ -6957,7 +8247,7 @@ type EventTopic struct { _ struct{} `type:"structure"` // The date and time of when you associated your directory with the SNS topic. - CreatedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDateTime *time.Time `type:"timestamp"` // The Directory ID of an AWS Directory Service directory that will publish // status messages to an SNS topic. @@ -7158,7 +8448,7 @@ type IpRouteInfo struct { _ struct{} `type:"structure"` // The date and time the address block was added to the directory. - AddedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + AddedDateTime *time.Time `type:"timestamp"` // IP address block in the IpRoute. CidrIp *string `type:"string"` @@ -7314,6 +8604,82 @@ func (s *ListIpRoutesOutput) SetNextToken(v string) *ListIpRoutesOutput { return s } +type ListLogSubscriptionsInput struct { + _ struct{} `type:"structure"` + + // If a DirectoryID is provided, lists only the log subscription associated + // with that directory. If no DirectoryId is provided, lists all log subscriptions + // associated with your AWS account. 
If there are no log subscriptions for the + // AWS account or the directory, an empty list will be returned. + DirectoryId *string `type:"string"` + + // The maximum number of items returned. + Limit *int64 `type:"integer"` + + // The token for the next set of items to return. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListLogSubscriptionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListLogSubscriptionsInput) GoString() string { + return s.String() +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *ListLogSubscriptionsInput) SetDirectoryId(v string) *ListLogSubscriptionsInput { + s.DirectoryId = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *ListLogSubscriptionsInput) SetLimit(v int64) *ListLogSubscriptionsInput { + s.Limit = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListLogSubscriptionsInput) SetNextToken(v string) *ListLogSubscriptionsInput { + s.NextToken = &v + return s +} + +type ListLogSubscriptionsOutput struct { + _ struct{} `type:"structure"` + + // A list of active LogSubscription objects for calling the AWS account. + LogSubscriptions []*LogSubscription `type:"list"` + + // The token for the next set of items to return. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListLogSubscriptionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListLogSubscriptionsOutput) GoString() string { + return s.String() +} + +// SetLogSubscriptions sets the LogSubscriptions field's value. +func (s *ListLogSubscriptionsOutput) SetLogSubscriptions(v []*LogSubscription) *ListLogSubscriptionsOutput { + s.LogSubscriptions = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListLogSubscriptionsOutput) SetNextToken(v string) *ListLogSubscriptionsOutput { + s.NextToken = &v + return s +} + type ListSchemaExtensionsInput struct { _ struct{} `type:"structure"` @@ -7494,6 +8860,121 @@ func (s *ListTagsForResourceOutput) SetTags(v []*Tag) *ListTagsForResourceOutput return s } +// Represents a log subscription, which tracks real-time data from a chosen +// log group to a specified destination. +type LogSubscription struct { + _ struct{} `type:"structure"` + + // Identifier (ID) of the directory that you want to associate with the log + // subscription. + DirectoryId *string `type:"string"` + + // The name of the log group. + LogGroupName *string `min:"1" type:"string"` + + // The date and time that the log subscription was created. + SubscriptionCreatedDateTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s LogSubscription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LogSubscription) GoString() string { + return s.String() +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *LogSubscription) SetDirectoryId(v string) *LogSubscription { + s.DirectoryId = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *LogSubscription) SetLogGroupName(v string) *LogSubscription { + s.LogGroupName = &v + return s +} + +// SetSubscriptionCreatedDateTime sets the SubscriptionCreatedDateTime field's value. 
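
The `ListLogSubscriptionsInput`/`ListLogSubscriptionsOutput` pair above is paginated via `NextToken`. As a rough sketch of how a caller might page through the log subscriptions for a directory (the directory ID is a placeholder and error handling is minimal; this is not part of the vendored SDK change itself):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directoryservice"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := directoryservice.New(sess)

	input := &directoryservice.ListLogSubscriptionsInput{
		DirectoryId: aws.String("d-1234567890"), // placeholder directory ID
		Limit:       aws.Int64(10),
	}

	for {
		out, err := svc.ListLogSubscriptions(input)
		if err != nil {
			log.Fatalf("listing log subscriptions: %v", err)
		}
		for _, sub := range out.LogSubscriptions {
			fmt.Printf("directory %s forwards to log group %s (since %s)\n",
				aws.StringValue(sub.DirectoryId),
				aws.StringValue(sub.LogGroupName),
				aws.TimeValue(sub.SubscriptionCreatedDateTime))
		}
		if out.NextToken == nil {
			break
		}
		input.NextToken = out.NextToken
	}
}
```
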
+func (s *LogSubscription) SetSubscriptionCreatedDateTime(v time.Time) *LogSubscription { + s.SubscriptionCreatedDateTime = &v + return s +} + +// Describes the directory owner account details that have been shared to the +// directory consumer account. +type OwnerDirectoryDescription struct { + _ struct{} `type:"structure"` + + // Identifier of the directory owner account. + AccountId *string `type:"string"` + + // Identifier of the AWS Managed Microsoft AD directory in the directory owner + // account. + DirectoryId *string `type:"string"` + + // IP address of the directory’s domain controllers. + DnsIpAddrs []*string `type:"list"` + + // A RadiusSettings object that contains information about the RADIUS server. + RadiusSettings *RadiusSettings `type:"structure"` + + // Information about the status of the RADIUS server. + RadiusStatus *string `type:"string" enum:"RadiusStatus"` + + // Information about the VPC settings for the directory. + VpcSettings *DirectoryVpcSettingsDescription `type:"structure"` +} + +// String returns the string representation +func (s OwnerDirectoryDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OwnerDirectoryDescription) GoString() string { + return s.String() +} + +// SetAccountId sets the AccountId field's value. +func (s *OwnerDirectoryDescription) SetAccountId(v string) *OwnerDirectoryDescription { + s.AccountId = &v + return s +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *OwnerDirectoryDescription) SetDirectoryId(v string) *OwnerDirectoryDescription { + s.DirectoryId = &v + return s +} + +// SetDnsIpAddrs sets the DnsIpAddrs field's value. +func (s *OwnerDirectoryDescription) SetDnsIpAddrs(v []*string) *OwnerDirectoryDescription { + s.DnsIpAddrs = v + return s +} + +// SetRadiusSettings sets the RadiusSettings field's value. +func (s *OwnerDirectoryDescription) SetRadiusSettings(v *RadiusSettings) *OwnerDirectoryDescription { + s.RadiusSettings = v + return s +} + +// SetRadiusStatus sets the RadiusStatus field's value. +func (s *OwnerDirectoryDescription) SetRadiusStatus(v string) *OwnerDirectoryDescription { + s.RadiusStatus = &v + return s +} + +// SetVpcSettings sets the VpcSettings field's value. +func (s *OwnerDirectoryDescription) SetVpcSettings(v *DirectoryVpcSettingsDescription) *OwnerDirectoryDescription { + s.VpcSettings = v + return s +} + // Contains information about a Remote Authentication Dial In User Service (RADIUS) // server. type RadiusSettings struct { @@ -7521,7 +9002,7 @@ type RadiusSettings struct { // The amount of time, in seconds, to wait for the RADIUS server to respond. RadiusTimeout *int64 `min:"1" type:"integer"` - // Not currently used. + // Required for enabling RADIUS on the directory. SharedSecret *string `min:"8" type:"string"` // Not currently used. @@ -7680,38 +9161,31 @@ func (s RegisterEventTopicOutput) GoString() string { return s.String() } -type RemoveIpRoutesInput struct { +type RejectSharedDirectoryInput struct { _ struct{} `type:"structure"` - // IP address blocks that you want to remove. - // - // CidrIps is a required field - CidrIps []*string `type:"list" required:"true"` - - // Identifier (ID) of the directory from which you want to remove the IP addresses. + // Identifier of the shared directory in the directory consumer account. This + // identifier is different for each directory owner account. 
// - // DirectoryId is a required field - DirectoryId *string `type:"string" required:"true"` + // SharedDirectoryId is a required field + SharedDirectoryId *string `type:"string" required:"true"` } // String returns the string representation -func (s RemoveIpRoutesInput) String() string { +func (s RejectSharedDirectoryInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RemoveIpRoutesInput) GoString() string { +func (s RejectSharedDirectoryInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *RemoveIpRoutesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RemoveIpRoutesInput"} - if s.CidrIps == nil { - invalidParams.Add(request.NewErrParamRequired("CidrIps")) - } - if s.DirectoryId == nil { - invalidParams.Add(request.NewErrParamRequired("DirectoryId")) +func (s *RejectSharedDirectoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RejectSharedDirectoryInput"} + if s.SharedDirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("SharedDirectoryId")) } if invalidParams.Len() > 0 { @@ -7720,17 +9194,86 @@ func (s *RemoveIpRoutesInput) Validate() error { return nil } -// SetCidrIps sets the CidrIps field's value. -func (s *RemoveIpRoutesInput) SetCidrIps(v []*string) *RemoveIpRoutesInput { - s.CidrIps = v +// SetSharedDirectoryId sets the SharedDirectoryId field's value. +func (s *RejectSharedDirectoryInput) SetSharedDirectoryId(v string) *RejectSharedDirectoryInput { + s.SharedDirectoryId = &v return s } -// SetDirectoryId sets the DirectoryId field's value. -func (s *RemoveIpRoutesInput) SetDirectoryId(v string) *RemoveIpRoutesInput { - s.DirectoryId = &v - return s -} +type RejectSharedDirectoryOutput struct { + _ struct{} `type:"structure"` + + // Identifier of the shared directory in the directory consumer account. + SharedDirectoryId *string `type:"string"` +} + +// String returns the string representation +func (s RejectSharedDirectoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RejectSharedDirectoryOutput) GoString() string { + return s.String() +} + +// SetSharedDirectoryId sets the SharedDirectoryId field's value. +func (s *RejectSharedDirectoryOutput) SetSharedDirectoryId(v string) *RejectSharedDirectoryOutput { + s.SharedDirectoryId = &v + return s +} + +type RemoveIpRoutesInput struct { + _ struct{} `type:"structure"` + + // IP address blocks that you want to remove. + // + // CidrIps is a required field + CidrIps []*string `type:"list" required:"true"` + + // Identifier (ID) of the directory from which you want to remove the IP addresses. + // + // DirectoryId is a required field + DirectoryId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s RemoveIpRoutesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveIpRoutesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RemoveIpRoutesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveIpRoutesInput"} + if s.CidrIps == nil { + invalidParams.Add(request.NewErrParamRequired("CidrIps")) + } + if s.DirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("DirectoryId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCidrIps sets the CidrIps field's value. +func (s *RemoveIpRoutesInput) SetCidrIps(v []*string) *RemoveIpRoutesInput { + s.CidrIps = v + return s +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *RemoveIpRoutesInput) SetDirectoryId(v string) *RemoveIpRoutesInput { + s.DirectoryId = &v + return s +} type RemoveIpRoutesOutput struct { _ struct{} `type:"structure"` @@ -7812,31 +9355,372 @@ func (s RemoveTagsFromResourceOutput) GoString() string { return s.String() } +type ResetUserPasswordInput struct { + _ struct{} `type:"structure"` + + // Identifier of the AWS Managed Microsoft AD or Simple AD directory in which + // the user resides. + // + // DirectoryId is a required field + DirectoryId *string `type:"string" required:"true"` + + // The new password that will be reset. + // + // NewPassword is a required field + NewPassword *string `min:"1" type:"string" required:"true"` + + // The user name of the user whose password will be reset. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ResetUserPasswordInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetUserPasswordInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResetUserPasswordInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResetUserPasswordInput"} + if s.DirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("DirectoryId")) + } + if s.NewPassword == nil { + invalidParams.Add(request.NewErrParamRequired("NewPassword")) + } + if s.NewPassword != nil && len(*s.NewPassword) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NewPassword", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *ResetUserPasswordInput) SetDirectoryId(v string) *ResetUserPasswordInput { + s.DirectoryId = &v + return s +} + +// SetNewPassword sets the NewPassword field's value. +func (s *ResetUserPasswordInput) SetNewPassword(v string) *ResetUserPasswordInput { + s.NewPassword = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ResetUserPasswordInput) SetUserName(v string) *ResetUserPasswordInput { + s.UserName = &v + return s +} + +type ResetUserPasswordOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ResetUserPasswordOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetUserPasswordOutput) GoString() string { + return s.String() +} + // An object representing the inputs for the RestoreFromSnapshot operation. 
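
Since `ResetUserPasswordInput` above requires all three of `DirectoryId`, `UserName`, and `NewPassword`, and the operation can fail with the password- and user-related error codes added in `errors.go` further down, a caller would typically branch on `awserr.Error.Code()`. A minimal sketch (the identifiers are placeholders, not values from this changeset):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directoryservice"
)

func main() {
	svc := directoryservice.New(session.Must(session.NewSession()))

	_, err := svc.ResetUserPassword(&directoryservice.ResetUserPasswordInput{
		DirectoryId: aws.String("d-1234567890"),  // placeholder
		UserName:    aws.String("example.user"),  // placeholder
		NewPassword: aws.String("S0me-Passw0rd"), // placeholder
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case directoryservice.ErrCodeInvalidPasswordException:
				log.Fatal("new password does not meet the directory's complexity rules")
			case directoryservice.ErrCodeUserDoesNotExistException:
				log.Fatal("user name not found in the directory")
			}
		}
		log.Fatalf("resetting password: %v", err)
	}
	fmt.Println("password reset accepted")
}
```
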
type RestoreFromSnapshotInput struct { _ struct{} `type:"structure"` - // The identifier of the snapshot to restore from. + // The identifier of the snapshot to restore from. + // + // SnapshotId is a required field + SnapshotId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s RestoreFromSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreFromSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RestoreFromSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreFromSnapshotInput"} + if s.SnapshotId == nil { + invalidParams.Add(request.NewErrParamRequired("SnapshotId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSnapshotId sets the SnapshotId field's value. +func (s *RestoreFromSnapshotInput) SetSnapshotId(v string) *RestoreFromSnapshotInput { + s.SnapshotId = &v + return s +} + +// Contains the results of the RestoreFromSnapshot operation. +type RestoreFromSnapshotOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RestoreFromSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreFromSnapshotOutput) GoString() string { + return s.String() +} + +// Information about a schema extension. +type SchemaExtensionInfo struct { + _ struct{} `type:"structure"` + + // A description of the schema extension. + Description *string `type:"string"` + + // The identifier of the directory to which the schema extension is applied. + DirectoryId *string `type:"string"` + + // The date and time that the schema extension was completed. + EndDateTime *time.Time `type:"timestamp"` + + // The identifier of the schema extension. + SchemaExtensionId *string `type:"string"` + + // The current status of the schema extension. + SchemaExtensionStatus *string `type:"string" enum:"SchemaExtensionStatus"` + + // The reason for the SchemaExtensionStatus. + SchemaExtensionStatusReason *string `type:"string"` + + // The date and time that the schema extension started being applied to the + // directory. + StartDateTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s SchemaExtensionInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SchemaExtensionInfo) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *SchemaExtensionInfo) SetDescription(v string) *SchemaExtensionInfo { + s.Description = &v + return s +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *SchemaExtensionInfo) SetDirectoryId(v string) *SchemaExtensionInfo { + s.DirectoryId = &v + return s +} + +// SetEndDateTime sets the EndDateTime field's value. +func (s *SchemaExtensionInfo) SetEndDateTime(v time.Time) *SchemaExtensionInfo { + s.EndDateTime = &v + return s +} + +// SetSchemaExtensionId sets the SchemaExtensionId field's value. +func (s *SchemaExtensionInfo) SetSchemaExtensionId(v string) *SchemaExtensionInfo { + s.SchemaExtensionId = &v + return s +} + +// SetSchemaExtensionStatus sets the SchemaExtensionStatus field's value. 
+func (s *SchemaExtensionInfo) SetSchemaExtensionStatus(v string) *SchemaExtensionInfo { + s.SchemaExtensionStatus = &v + return s +} + +// SetSchemaExtensionStatusReason sets the SchemaExtensionStatusReason field's value. +func (s *SchemaExtensionInfo) SetSchemaExtensionStatusReason(v string) *SchemaExtensionInfo { + s.SchemaExtensionStatusReason = &v + return s +} + +// SetStartDateTime sets the StartDateTime field's value. +func (s *SchemaExtensionInfo) SetStartDateTime(v time.Time) *SchemaExtensionInfo { + s.StartDateTime = &v + return s +} + +type ShareDirectoryInput struct { + _ struct{} `type:"structure"` + + // Identifier of the AWS Managed Microsoft AD directory that you want to share + // with other AWS accounts. + // + // DirectoryId is a required field + DirectoryId *string `type:"string" required:"true"` + + // The method used when sharing a directory to determine whether the directory + // should be shared within your AWS organization (ORGANIZATIONS) or with any + // AWS account by sending a directory sharing request (HANDSHAKE). + // + // ShareMethod is a required field + ShareMethod *string `type:"string" required:"true" enum:"ShareMethod"` + + // A directory share request that is sent by the directory owner to the directory + // consumer. The request includes a typed message to help the directory consumer + // administrator determine whether to approve or reject the share invitation. + ShareNotes *string `type:"string"` + + // Identifier for the directory consumer account with whom the directory is + // to be shared. + // + // ShareTarget is a required field + ShareTarget *ShareTarget `type:"structure" required:"true"` +} + +// String returns the string representation +func (s ShareDirectoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ShareDirectoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ShareDirectoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ShareDirectoryInput"} + if s.DirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("DirectoryId")) + } + if s.ShareMethod == nil { + invalidParams.Add(request.NewErrParamRequired("ShareMethod")) + } + if s.ShareTarget == nil { + invalidParams.Add(request.NewErrParamRequired("ShareTarget")) + } + if s.ShareTarget != nil { + if err := s.ShareTarget.Validate(); err != nil { + invalidParams.AddNested("ShareTarget", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *ShareDirectoryInput) SetDirectoryId(v string) *ShareDirectoryInput { + s.DirectoryId = &v + return s +} + +// SetShareMethod sets the ShareMethod field's value. +func (s *ShareDirectoryInput) SetShareMethod(v string) *ShareDirectoryInput { + s.ShareMethod = &v + return s +} + +// SetShareNotes sets the ShareNotes field's value. +func (s *ShareDirectoryInput) SetShareNotes(v string) *ShareDirectoryInput { + s.ShareNotes = &v + return s +} + +// SetShareTarget sets the ShareTarget field's value. +func (s *ShareDirectoryInput) SetShareTarget(v *ShareTarget) *ShareDirectoryInput { + s.ShareTarget = v + return s +} + +type ShareDirectoryOutput struct { + _ struct{} `type:"structure"` + + // Identifier of the directory that is stored in the directory consumer account + // that is shared from the specified directory (DirectoryId). 
+ SharedDirectoryId *string `type:"string"` +} + +// String returns the string representation +func (s ShareDirectoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ShareDirectoryOutput) GoString() string { + return s.String() +} + +// SetSharedDirectoryId sets the SharedDirectoryId field's value. +func (s *ShareDirectoryOutput) SetSharedDirectoryId(v string) *ShareDirectoryOutput { + s.SharedDirectoryId = &v + return s +} + +// Identifier that contains details about the directory consumer account. +type ShareTarget struct { + _ struct{} `type:"structure"` + + // Identifier of the directory consumer account. // - // SnapshotId is a required field - SnapshotId *string `type:"string" required:"true"` + // Id is a required field + Id *string `min:"1" type:"string" required:"true"` + + // Type of identifier to be used in the Id field. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"TargetType"` } // String returns the string representation -func (s RestoreFromSnapshotInput) String() string { +func (s ShareTarget) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RestoreFromSnapshotInput) GoString() string { +func (s ShareTarget) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *RestoreFromSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RestoreFromSnapshotInput"} - if s.SnapshotId == nil { - invalidParams.Add(request.NewErrParamRequired("SnapshotId")) +func (s *ShareTarget) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ShareTarget"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) } if invalidParams.Len() > 0 { @@ -7845,103 +9729,119 @@ func (s *RestoreFromSnapshotInput) Validate() error { return nil } -// SetSnapshotId sets the SnapshotId field's value. -func (s *RestoreFromSnapshotInput) SetSnapshotId(v string) *RestoreFromSnapshotInput { - s.SnapshotId = &v +// SetId sets the Id field's value. +func (s *ShareTarget) SetId(v string) *ShareTarget { + s.Id = &v return s } -// Contains the results of the RestoreFromSnapshot operation. -type RestoreFromSnapshotOutput struct { - _ struct{} `type:"structure"` +// SetType sets the Type field's value. +func (s *ShareTarget) SetType(v string) *ShareTarget { + s.Type = &v + return s } -// String returns the string representation -func (s RestoreFromSnapshotOutput) String() string { - return awsutil.Prettify(s) -} +// Details about the shared directory in the directory owner account for which +// the share request in the directory consumer account has been accepted. +type SharedDirectory struct { + _ struct{} `type:"structure"` -// GoString returns the string representation -func (s RestoreFromSnapshotOutput) GoString() string { - return s.String() -} + // The date and time that the shared directory was created. + CreatedDateTime *time.Time `type:"timestamp"` -// Information about a schema extension. -type SchemaExtensionInfo struct { - _ struct{} `type:"structure"` + // The date and time that the shared directory was last updated. + LastUpdatedDateTime *time.Time `type:"timestamp"` - // A description of the schema extension. 
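
With `ShareDirectoryInput`, `ShareDirectoryOutput`, and `ShareTarget` now in place, sharing an AWS Managed Microsoft AD directory with another account by handshake boils down to one call, and the returned `SharedDirectoryId` identifies the shared copy in the consumer account. A sketch using the `ShareMethod`/`TargetType` enum values and error codes introduced later in this file (account and directory IDs are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directoryservice"
)

func main() {
	svc := directoryservice.New(session.Must(session.NewSession()))

	out, err := svc.ShareDirectory(&directoryservice.ShareDirectoryInput{
		DirectoryId: aws.String("d-1234567890"), // placeholder owner directory
		ShareMethod: aws.String(directoryservice.ShareMethodHandshake),
		ShareNotes:  aws.String("Shared for the analytics workload"),
		ShareTarget: &directoryservice.ShareTarget{
			Id:   aws.String("111122223333"), // placeholder consumer account ID
			Type: aws.String(directoryservice.TargetTypeAccount),
		},
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case directoryservice.ErrCodeDirectoryAlreadySharedException:
				log.Fatal("directory is already shared with that account")
			case directoryservice.ErrCodeShareLimitExceededException:
				log.Fatal("sharing limit for this directory has been reached")
			}
		}
		log.Fatalf("sharing directory: %v", err)
	}
	fmt.Println("shared directory ID:", aws.StringValue(out.SharedDirectoryId))
}
```

Undoing the share goes through `UnshareDirectory`, whose `UnshareTarget` mirrors `ShareTarget`, and the consumer-side state can be tracked through the new `ShareStatus` values.
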
- Description *string `type:"string"` + // Identifier of the directory owner account, which contains the directory that + // has been shared to the consumer account. + OwnerAccountId *string `type:"string"` - // The identifier of the directory to which the schema extension is applied. - DirectoryId *string `type:"string"` + // Identifier of the directory in the directory owner account. + OwnerDirectoryId *string `type:"string"` - // The date and time that the schema extension was completed. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // The method used when sharing a directory to determine whether the directory + // should be shared within your AWS organization (ORGANIZATIONS) or with any + // AWS account by sending a shared directory request (HANDSHAKE). + ShareMethod *string `type:"string" enum:"ShareMethod"` - // The identifier of the schema extension. - SchemaExtensionId *string `type:"string"` + // A directory share request that is sent by the directory owner to the directory + // consumer. The request includes a typed message to help the directory consumer + // administrator determine whether to approve or reject the share invitation. + ShareNotes *string `type:"string"` - // The current status of the schema extension. - SchemaExtensionStatus *string `type:"string" enum:"SchemaExtensionStatus"` + // Current directory status of the shared AWS Managed Microsoft AD directory. + ShareStatus *string `type:"string" enum:"ShareStatus"` - // The reason for the SchemaExtensionStatus. - SchemaExtensionStatusReason *string `type:"string"` + // Identifier of the directory consumer account that has access to the shared + // directory (OwnerDirectoryId) in the directory owner account. + SharedAccountId *string `type:"string"` - // The date and time that the schema extension started being applied to the - // directory. - StartDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // Identifier of the shared directory in the directory consumer account. This + // identifier is different for each directory owner account. + SharedDirectoryId *string `type:"string"` } // String returns the string representation -func (s SchemaExtensionInfo) String() string { +func (s SharedDirectory) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SchemaExtensionInfo) GoString() string { +func (s SharedDirectory) GoString() string { return s.String() } -// SetDescription sets the Description field's value. -func (s *SchemaExtensionInfo) SetDescription(v string) *SchemaExtensionInfo { - s.Description = &v +// SetCreatedDateTime sets the CreatedDateTime field's value. +func (s *SharedDirectory) SetCreatedDateTime(v time.Time) *SharedDirectory { + s.CreatedDateTime = &v return s } -// SetDirectoryId sets the DirectoryId field's value. -func (s *SchemaExtensionInfo) SetDirectoryId(v string) *SchemaExtensionInfo { - s.DirectoryId = &v +// SetLastUpdatedDateTime sets the LastUpdatedDateTime field's value. +func (s *SharedDirectory) SetLastUpdatedDateTime(v time.Time) *SharedDirectory { + s.LastUpdatedDateTime = &v return s } -// SetEndDateTime sets the EndDateTime field's value. -func (s *SchemaExtensionInfo) SetEndDateTime(v time.Time) *SchemaExtensionInfo { - s.EndDateTime = &v +// SetOwnerAccountId sets the OwnerAccountId field's value. +func (s *SharedDirectory) SetOwnerAccountId(v string) *SharedDirectory { + s.OwnerAccountId = &v return s } -// SetSchemaExtensionId sets the SchemaExtensionId field's value. 
-func (s *SchemaExtensionInfo) SetSchemaExtensionId(v string) *SchemaExtensionInfo { - s.SchemaExtensionId = &v +// SetOwnerDirectoryId sets the OwnerDirectoryId field's value. +func (s *SharedDirectory) SetOwnerDirectoryId(v string) *SharedDirectory { + s.OwnerDirectoryId = &v return s } -// SetSchemaExtensionStatus sets the SchemaExtensionStatus field's value. -func (s *SchemaExtensionInfo) SetSchemaExtensionStatus(v string) *SchemaExtensionInfo { - s.SchemaExtensionStatus = &v +// SetShareMethod sets the ShareMethod field's value. +func (s *SharedDirectory) SetShareMethod(v string) *SharedDirectory { + s.ShareMethod = &v return s } -// SetSchemaExtensionStatusReason sets the SchemaExtensionStatusReason field's value. -func (s *SchemaExtensionInfo) SetSchemaExtensionStatusReason(v string) *SchemaExtensionInfo { - s.SchemaExtensionStatusReason = &v +// SetShareNotes sets the ShareNotes field's value. +func (s *SharedDirectory) SetShareNotes(v string) *SharedDirectory { + s.ShareNotes = &v return s } -// SetStartDateTime sets the StartDateTime field's value. -func (s *SchemaExtensionInfo) SetStartDateTime(v time.Time) *SchemaExtensionInfo { - s.StartDateTime = &v +// SetShareStatus sets the ShareStatus field's value. +func (s *SharedDirectory) SetShareStatus(v string) *SharedDirectory { + s.ShareStatus = &v + return s +} + +// SetSharedAccountId sets the SharedAccountId field's value. +func (s *SharedDirectory) SetSharedAccountId(v string) *SharedDirectory { + s.SharedAccountId = &v + return s +} + +// SetSharedDirectoryId sets the SharedDirectoryId field's value. +func (s *SharedDirectory) SetSharedDirectoryId(v string) *SharedDirectory { + s.SharedDirectoryId = &v return s } @@ -7959,7 +9859,7 @@ type Snapshot struct { SnapshotId *string `type:"string"` // The date and time that the snapshot was taken. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // The snapshot status. Status *string `type:"string" enum:"SnapshotStatus"` @@ -8226,26 +10126,29 @@ func (s *Tag) SetValue(v string) *Tag { return s } -// Describes a trust relationship between an Microsoft AD in the AWS cloud and -// an external domain. +// Describes a trust relationship between an AWS Managed Microsoft AD directory +// and an external domain. type Trust struct { _ struct{} `type:"structure"` // The date and time that the trust relationship was created. - CreatedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDateTime *time.Time `type:"timestamp"` // The Directory ID of the AWS directory involved in the trust relationship. DirectoryId *string `type:"string"` // The date and time that the trust relationship was last updated. - LastUpdatedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdatedDateTime *time.Time `type:"timestamp"` // The Fully Qualified Domain Name (FQDN) of the external domain involved in // the trust relationship. RemoteDomainName *string `type:"string"` + // Current state of selective authentication for the trust. + SelectiveAuth *string `type:"string" enum:"SelectiveAuth"` + // The date and time that the TrustState was last updated. - StateLastUpdatedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StateLastUpdatedDateTime *time.Time `type:"timestamp"` // The trust relationship direction. TrustDirection *string `type:"string" enum:"TrustDirection"` @@ -8259,7 +10162,7 @@ type Trust struct { // The reason for the TrustState. 
TrustStateReason *string `type:"string"` - // The trust relationship type. + // The trust relationship type. Forest is the default. TrustType *string `type:"string" enum:"TrustType"` } @@ -8297,6 +10200,12 @@ func (s *Trust) SetRemoteDomainName(v string) *Trust { return s } +// SetSelectiveAuth sets the SelectiveAuth field's value. +func (s *Trust) SetSelectiveAuth(v string) *Trust { + s.SelectiveAuth = &v + return s +} + // SetStateLastUpdatedDateTime sets the StateLastUpdatedDateTime field's value. func (s *Trust) SetStateLastUpdatedDateTime(v time.Time) *Trust { s.StateLastUpdatedDateTime = &v @@ -8333,6 +10242,146 @@ func (s *Trust) SetTrustType(v string) *Trust { return s } +type UnshareDirectoryInput struct { + _ struct{} `type:"structure"` + + // The identifier of the AWS Managed Microsoft AD directory that you want to + // stop sharing. + // + // DirectoryId is a required field + DirectoryId *string `type:"string" required:"true"` + + // Identifier for the directory consumer account with whom the directory has + // to be unshared. + // + // UnshareTarget is a required field + UnshareTarget *UnshareTarget `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UnshareDirectoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UnshareDirectoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UnshareDirectoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UnshareDirectoryInput"} + if s.DirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("DirectoryId")) + } + if s.UnshareTarget == nil { + invalidParams.Add(request.NewErrParamRequired("UnshareTarget")) + } + if s.UnshareTarget != nil { + if err := s.UnshareTarget.Validate(); err != nil { + invalidParams.AddNested("UnshareTarget", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *UnshareDirectoryInput) SetDirectoryId(v string) *UnshareDirectoryInput { + s.DirectoryId = &v + return s +} + +// SetUnshareTarget sets the UnshareTarget field's value. +func (s *UnshareDirectoryInput) SetUnshareTarget(v *UnshareTarget) *UnshareDirectoryInput { + s.UnshareTarget = v + return s +} + +type UnshareDirectoryOutput struct { + _ struct{} `type:"structure"` + + // Identifier of the directory stored in the directory consumer account that + // is to be unshared from the specified directory (DirectoryId). + SharedDirectoryId *string `type:"string"` +} + +// String returns the string representation +func (s UnshareDirectoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UnshareDirectoryOutput) GoString() string { + return s.String() +} + +// SetSharedDirectoryId sets the SharedDirectoryId field's value. +func (s *UnshareDirectoryOutput) SetSharedDirectoryId(v string) *UnshareDirectoryOutput { + s.SharedDirectoryId = &v + return s +} + +// Identifier that contains details about the directory consumer account with +// whom the directory is being unshared. +type UnshareTarget struct { + _ struct{} `type:"structure"` + + // Identifier of the directory consumer account. + // + // Id is a required field + Id *string `min:"1" type:"string" required:"true"` + + // Type of identifier to be used in the Id field. 
+ // + // Type is a required field + Type *string `type:"string" required:"true" enum:"TargetType"` +} + +// String returns the string representation +func (s UnshareTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UnshareTarget) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UnshareTarget) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UnshareTarget"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *UnshareTarget) SetId(v string) *UnshareTarget { + s.Id = &v + return s +} + +// SetType sets the Type field's value. +func (s *UnshareTarget) SetType(v string) *UnshareTarget { + s.Type = &v + return s +} + // Updates a conditional forwarder. type UpdateConditionalForwarderInput struct { _ struct{} `type:"structure"` @@ -8561,8 +10610,87 @@ func (s UpdateRadiusOutput) GoString() string { return s.String() } -// Initiates the verification of an existing trust relationship between a Microsoft -// AD in the AWS cloud and an external domain. +type UpdateTrustInput struct { + _ struct{} `type:"structure"` + + // Updates selective authentication for the trust. + SelectiveAuth *string `type:"string" enum:"SelectiveAuth"` + + // Identifier of the trust relationship. + // + // TrustId is a required field + TrustId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateTrustInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTrustInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateTrustInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateTrustInput"} + if s.TrustId == nil { + invalidParams.Add(request.NewErrParamRequired("TrustId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSelectiveAuth sets the SelectiveAuth field's value. +func (s *UpdateTrustInput) SetSelectiveAuth(v string) *UpdateTrustInput { + s.SelectiveAuth = &v + return s +} + +// SetTrustId sets the TrustId field's value. +func (s *UpdateTrustInput) SetTrustId(v string) *UpdateTrustInput { + s.TrustId = &v + return s +} + +type UpdateTrustOutput struct { + _ struct{} `type:"structure"` + + // The AWS request identifier. + RequestId *string `type:"string"` + + // Identifier of the trust relationship. + TrustId *string `type:"string"` +} + +// String returns the string representation +func (s UpdateTrustOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTrustOutput) GoString() string { + return s.String() +} + +// SetRequestId sets the RequestId field's value. +func (s *UpdateTrustOutput) SetRequestId(v string) *UpdateTrustOutput { + s.RequestId = &v + return s +} + +// SetTrustId sets the TrustId field's value. 
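
The new `UpdateTrustInput`/`UpdateTrustOutput` pair above carries only a trust identifier and the optional `SelectiveAuth` setting, so toggling selective authentication on an existing trust is a single call. A sketch using the `SelectiveAuth` enum values declared near the end of this file (the trust ID is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/directoryservice"
)

func main() {
	svc := directoryservice.New(session.Must(session.NewSession()))

	out, err := svc.UpdateTrust(&directoryservice.UpdateTrustInput{
		TrustId:       aws.String("t-1234567890"), // placeholder trust ID
		SelectiveAuth: aws.String(directoryservice.SelectiveAuthEnabled),
	})
	if err != nil {
		log.Fatalf("updating trust: %v", err)
	}
	fmt.Printf("request %s updated trust %s\n",
		aws.StringValue(out.RequestId), aws.StringValue(out.TrustId))
}
```
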
+func (s *UpdateTrustOutput) SetTrustId(v string) *UpdateTrustOutput { + s.TrustId = &v + return s +} + +// Initiates the verification of an existing trust relationship between an AWS +// Managed Microsoft AD directory and an external domain. type VerifyTrustInput struct { _ struct{} `type:"structure"` @@ -8685,6 +10813,9 @@ const ( // DirectoryTypeMicrosoftAd is a DirectoryType enum value DirectoryTypeMicrosoftAd = "MicrosoftAD" + + // DirectoryTypeSharedMicrosoftAd is a DirectoryType enum value + DirectoryTypeSharedMicrosoftAd = "SharedMicrosoftAD" ) const ( @@ -8789,6 +10920,51 @@ const ( SchemaExtensionStatusCompleted = "Completed" ) +const ( + // SelectiveAuthEnabled is a SelectiveAuth enum value + SelectiveAuthEnabled = "Enabled" + + // SelectiveAuthDisabled is a SelectiveAuth enum value + SelectiveAuthDisabled = "Disabled" +) + +const ( + // ShareMethodOrganizations is a ShareMethod enum value + ShareMethodOrganizations = "ORGANIZATIONS" + + // ShareMethodHandshake is a ShareMethod enum value + ShareMethodHandshake = "HANDSHAKE" +) + +const ( + // ShareStatusShared is a ShareStatus enum value + ShareStatusShared = "Shared" + + // ShareStatusPendingAcceptance is a ShareStatus enum value + ShareStatusPendingAcceptance = "PendingAcceptance" + + // ShareStatusRejected is a ShareStatus enum value + ShareStatusRejected = "Rejected" + + // ShareStatusRejecting is a ShareStatus enum value + ShareStatusRejecting = "Rejecting" + + // ShareStatusRejectFailed is a ShareStatus enum value + ShareStatusRejectFailed = "RejectFailed" + + // ShareStatusSharing is a ShareStatus enum value + ShareStatusSharing = "Sharing" + + // ShareStatusShareFailed is a ShareStatus enum value + ShareStatusShareFailed = "ShareFailed" + + // ShareStatusDeleted is a ShareStatus enum value + ShareStatusDeleted = "Deleted" + + // ShareStatusDeleting is a ShareStatus enum value + ShareStatusDeleting = "Deleting" +) + const ( // SnapshotStatusCreating is a SnapshotStatus enum value SnapshotStatusCreating = "Creating" @@ -8808,6 +10984,11 @@ const ( SnapshotTypeManual = "Manual" ) +const ( + // TargetTypeAccount is a TargetType enum value + TargetTypeAccount = "ACCOUNT" +) + const ( // TopicStatusRegistered is a TopicStatus enum value TopicStatusRegistered = "Registered" @@ -8849,6 +11030,15 @@ const ( // TrustStateVerified is a TrustState enum value TrustStateVerified = "Verified" + // TrustStateUpdating is a TrustState enum value + TrustStateUpdating = "Updating" + + // TrustStateUpdateFailed is a TrustState enum value + TrustStateUpdateFailed = "UpdateFailed" + + // TrustStateUpdated is a TrustState enum value + TrustStateUpdated = "Updated" + // TrustStateDeleting is a TrustState enum value TrustStateDeleting = "Deleting" @@ -8862,4 +11052,7 @@ const ( const ( // TrustTypeForest is a TrustType enum value TrustTypeForest = "Forest" + + // TrustTypeExternal is a TrustType enum value + TrustTypeExternal = "External" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/directoryservice/errors.go b/vendor/github.com/aws/aws-sdk-go/service/directoryservice/errors.go index f7560305b9c..b5bdd25a10f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/directoryservice/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/directoryservice/errors.go @@ -4,6 +4,12 @@ package directoryservice const ( + // ErrCodeAccessDeniedException for service response error code + // "AccessDeniedException". + // + // You do not have sufficient access to perform this action. 
+ ErrCodeAccessDeniedException = "AccessDeniedException" + // ErrCodeAuthenticationFailedException for service response error code // "AuthenticationFailedException". // @@ -16,6 +22,12 @@ const ( // A client exception has occurred. ErrCodeClientException = "ClientException" + // ErrCodeDirectoryAlreadySharedException for service response error code + // "DirectoryAlreadySharedException". + // + // The specified directory has already been shared with this AWS account. + ErrCodeDirectoryAlreadySharedException = "DirectoryAlreadySharedException" + // ErrCodeDirectoryLimitExceededException for service response error code // "DirectoryLimitExceededException". // @@ -24,6 +36,12 @@ const ( // the region. ErrCodeDirectoryLimitExceededException = "DirectoryLimitExceededException" + // ErrCodeDirectoryNotSharedException for service response error code + // "DirectoryNotSharedException". + // + // The specified directory has not been shared with this AWS account. + ErrCodeDirectoryNotSharedException = "DirectoryNotSharedException" + // ErrCodeDirectoryUnavailableException for service response error code // "DirectoryUnavailableException". // @@ -67,6 +85,19 @@ const ( // One or more parameters are not valid. ErrCodeInvalidParameterException = "InvalidParameterException" + // ErrCodeInvalidPasswordException for service response error code + // "InvalidPasswordException". + // + // The new password provided by the user does not meet the password complexity + // requirements defined in your directory. + ErrCodeInvalidPasswordException = "InvalidPasswordException" + + // ErrCodeInvalidTargetException for service response error code + // "InvalidTargetException". + // + // The specified shared target is not valid. + ErrCodeInvalidTargetException = "InvalidTargetException" + // ErrCodeIpRouteLimitExceededException for service response error code // "IpRouteLimitExceededException". // @@ -74,12 +105,25 @@ const ( // is 100 IP address blocks. ErrCodeIpRouteLimitExceededException = "IpRouteLimitExceededException" + // ErrCodeOrganizationsException for service response error code + // "OrganizationsException". + // + // Exception encountered while trying to access your AWS organization. + ErrCodeOrganizationsException = "OrganizationsException" + // ErrCodeServiceException for service response error code // "ServiceException". // // An exception has occurred in AWS Directory Service. ErrCodeServiceException = "ServiceException" + // ErrCodeShareLimitExceededException for service response error code + // "ShareLimitExceededException". + // + // The maximum number of AWS accounts that you can share with this directory + // has been reached. + ErrCodeShareLimitExceededException = "ShareLimitExceededException" + // ErrCodeSnapshotLimitExceededException for service response error code // "SnapshotLimitExceededException". // @@ -99,4 +143,10 @@ const ( // // The operation is not supported. ErrCodeUnsupportedOperationException = "UnsupportedOperationException" + + // ErrCodeUserDoesNotExistException for service response error code + // "UserDoesNotExistException". + // + // The user provided a username that does not exist in your directory. 
+ ErrCodeUserDoesNotExistException = "UserDoesNotExistException" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/directoryservice/service.go b/vendor/github.com/aws/aws-sdk-go/service/directoryservice/service.go index 59299ae5dfd..0743c9632ca 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/directoryservice/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/directoryservice/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "ds" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "ds" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Directory Service" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the DirectoryService client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/dlm/api.go b/vendor/github.com/aws/aws-sdk-go/service/dlm/api.go new file mode 100644 index 00000000000..8a135ba6567 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dlm/api.go @@ -0,0 +1,1356 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package dlm + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opCreateLifecyclePolicy = "CreateLifecyclePolicy" + +// CreateLifecyclePolicyRequest generates a "aws/request.Request" representing the +// client's request for the CreateLifecyclePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateLifecyclePolicy for more information on using the CreateLifecyclePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateLifecyclePolicyRequest method. +// req, resp := client.CreateLifecyclePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/CreateLifecyclePolicy +func (c *DLM) CreateLifecyclePolicyRequest(input *CreateLifecyclePolicyInput) (req *request.Request, output *CreateLifecyclePolicyOutput) { + op := &request.Operation{ + Name: opCreateLifecyclePolicy, + HTTPMethod: "POST", + HTTPPath: "/policies", + } + + if input == nil { + input = &CreateLifecyclePolicyInput{} + } + + output = &CreateLifecyclePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateLifecyclePolicy API operation for Amazon Data Lifecycle Manager. +// +// Creates a policy to manage the lifecycle of the specified AWS resources. +// You can create up to 100 lifecycle policies. 
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Data Lifecycle Manager's +// API operation CreateLifecyclePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// Bad request. The request is missing required parameters or has invalid parameters. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because a limit was exceeded. +// +// * ErrCodeInternalServerException "InternalServerException" +// The service failed in an unexpected way. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/CreateLifecyclePolicy +func (c *DLM) CreateLifecyclePolicy(input *CreateLifecyclePolicyInput) (*CreateLifecyclePolicyOutput, error) { + req, out := c.CreateLifecyclePolicyRequest(input) + return out, req.Send() +} + +// CreateLifecyclePolicyWithContext is the same as CreateLifecyclePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See CreateLifecyclePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DLM) CreateLifecyclePolicyWithContext(ctx aws.Context, input *CreateLifecyclePolicyInput, opts ...request.Option) (*CreateLifecyclePolicyOutput, error) { + req, out := c.CreateLifecyclePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteLifecyclePolicy = "DeleteLifecyclePolicy" + +// DeleteLifecyclePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLifecyclePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLifecyclePolicy for more information on using the DeleteLifecyclePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLifecyclePolicyRequest method. +// req, resp := client.DeleteLifecyclePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/DeleteLifecyclePolicy +func (c *DLM) DeleteLifecyclePolicyRequest(input *DeleteLifecyclePolicyInput) (req *request.Request, output *DeleteLifecyclePolicyOutput) { + op := &request.Operation{ + Name: opDeleteLifecyclePolicy, + HTTPMethod: "DELETE", + HTTPPath: "/policies/{policyId}/", + } + + if input == nil { + input = &DeleteLifecyclePolicyInput{} + } + + output = &DeleteLifecyclePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteLifecyclePolicy API operation for Amazon Data Lifecycle Manager. 
+// +// Deletes the specified lifecycle policy and halts the automated operations +// that the policy specified. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Data Lifecycle Manager's +// API operation DeleteLifecyclePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A requested resource was not found. +// +// * ErrCodeInternalServerException "InternalServerException" +// The service failed in an unexpected way. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because a limit was exceeded. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/DeleteLifecyclePolicy +func (c *DLM) DeleteLifecyclePolicy(input *DeleteLifecyclePolicyInput) (*DeleteLifecyclePolicyOutput, error) { + req, out := c.DeleteLifecyclePolicyRequest(input) + return out, req.Send() +} + +// DeleteLifecyclePolicyWithContext is the same as DeleteLifecyclePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLifecyclePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DLM) DeleteLifecyclePolicyWithContext(ctx aws.Context, input *DeleteLifecyclePolicyInput, opts ...request.Option) (*DeleteLifecyclePolicyOutput, error) { + req, out := c.DeleteLifecyclePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetLifecyclePolicies = "GetLifecyclePolicies" + +// GetLifecyclePoliciesRequest generates a "aws/request.Request" representing the +// client's request for the GetLifecyclePolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetLifecyclePolicies for more information on using the GetLifecyclePolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetLifecyclePoliciesRequest method. +// req, resp := client.GetLifecyclePoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/GetLifecyclePolicies +func (c *DLM) GetLifecyclePoliciesRequest(input *GetLifecyclePoliciesInput) (req *request.Request, output *GetLifecyclePoliciesOutput) { + op := &request.Operation{ + Name: opGetLifecyclePolicies, + HTTPMethod: "GET", + HTTPPath: "/policies", + } + + if input == nil { + input = &GetLifecyclePoliciesInput{} + } + + output = &GetLifecyclePoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLifecyclePolicies API operation for Amazon Data Lifecycle Manager. 
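
As the doc comments above spell out, every DLM operation is exposed both as a one-shot method and as a `*Request` constructor whose `Send()` you call yourself, which is the hook for custom handlers or retry logic. A sketch of the two-step form for `DeleteLifecyclePolicy` (the policy ID is a placeholder, and the `PolicyId` field name is assumed from the `{policyId}` URI path rather than shown in this hunk):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dlm"
)

func main() {
	client := dlm.New(session.Must(session.NewSession()))

	// Build the request without sending it, then send explicitly.
	req, resp := client.DeleteLifecyclePolicyRequest(&dlm.DeleteLifecyclePolicyInput{
		PolicyId: aws.String("policy-0123456789abcdef0"), // placeholder; field name assumed
	})
	if err := req.Send(); err != nil {
		log.Fatalf("deleting lifecycle policy: %v", err)
	}
	fmt.Println(resp) // resp is now filled
}
```
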
+// +// Gets summary information about all or the specified data lifecycle policies. +// +// To get complete information about a policy, use GetLifecyclePolicy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Data Lifecycle Manager's +// API operation GetLifecyclePolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A requested resource was not found. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// Bad request. The request is missing required parameters or has invalid parameters. +// +// * ErrCodeInternalServerException "InternalServerException" +// The service failed in an unexpected way. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because a limit was exceeded. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/GetLifecyclePolicies +func (c *DLM) GetLifecyclePolicies(input *GetLifecyclePoliciesInput) (*GetLifecyclePoliciesOutput, error) { + req, out := c.GetLifecyclePoliciesRequest(input) + return out, req.Send() +} + +// GetLifecyclePoliciesWithContext is the same as GetLifecyclePolicies with the addition of +// the ability to pass a context and additional request options. +// +// See GetLifecyclePolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DLM) GetLifecyclePoliciesWithContext(ctx aws.Context, input *GetLifecyclePoliciesInput, opts ...request.Option) (*GetLifecyclePoliciesOutput, error) { + req, out := c.GetLifecyclePoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetLifecyclePolicy = "GetLifecyclePolicy" + +// GetLifecyclePolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetLifecyclePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetLifecyclePolicy for more information on using the GetLifecyclePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetLifecyclePolicyRequest method. 
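
The `*WithContext` variants described above take an `aws.Context` (satisfied by a standard `context.Context`) for cancellation, so a deadline can be attached to a DLM call without extra plumbing. A sketch for `GetLifecyclePoliciesWithContext` with a 30-second timeout:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dlm"
)

func main() {
	client := dlm.New(session.Must(session.NewSession()))

	// Cancel the API call if it has not completed within 30 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	out, err := client.GetLifecyclePoliciesWithContext(ctx, &dlm.GetLifecyclePoliciesInput{})
	if err != nil {
		log.Fatalf("listing lifecycle policies: %v", err)
	}
	fmt.Println(out)
}
```
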
+// req, resp := client.GetLifecyclePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/GetLifecyclePolicy +func (c *DLM) GetLifecyclePolicyRequest(input *GetLifecyclePolicyInput) (req *request.Request, output *GetLifecyclePolicyOutput) { + op := &request.Operation{ + Name: opGetLifecyclePolicy, + HTTPMethod: "GET", + HTTPPath: "/policies/{policyId}/", + } + + if input == nil { + input = &GetLifecyclePolicyInput{} + } + + output = &GetLifecyclePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLifecyclePolicy API operation for Amazon Data Lifecycle Manager. +// +// Gets detailed information about the specified lifecycle policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Data Lifecycle Manager's +// API operation GetLifecyclePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A requested resource was not found. +// +// * ErrCodeInternalServerException "InternalServerException" +// The service failed in an unexpected way. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because a limit was exceeded. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/GetLifecyclePolicy +func (c *DLM) GetLifecyclePolicy(input *GetLifecyclePolicyInput) (*GetLifecyclePolicyOutput, error) { + req, out := c.GetLifecyclePolicyRequest(input) + return out, req.Send() +} + +// GetLifecyclePolicyWithContext is the same as GetLifecyclePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetLifecyclePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DLM) GetLifecyclePolicyWithContext(ctx aws.Context, input *GetLifecyclePolicyInput, opts ...request.Option) (*GetLifecyclePolicyOutput, error) { + req, out := c.GetLifecyclePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateLifecyclePolicy = "UpdateLifecyclePolicy" + +// UpdateLifecyclePolicyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateLifecyclePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateLifecyclePolicy for more information on using the UpdateLifecyclePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateLifecyclePolicyRequest method. 
+// req, resp := client.UpdateLifecyclePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/UpdateLifecyclePolicy +func (c *DLM) UpdateLifecyclePolicyRequest(input *UpdateLifecyclePolicyInput) (req *request.Request, output *UpdateLifecyclePolicyOutput) { + op := &request.Operation{ + Name: opUpdateLifecyclePolicy, + HTTPMethod: "PATCH", + HTTPPath: "/policies/{policyId}", + } + + if input == nil { + input = &UpdateLifecyclePolicyInput{} + } + + output = &UpdateLifecyclePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateLifecyclePolicy API operation for Amazon Data Lifecycle Manager. +// +// Updates the specified lifecycle policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Data Lifecycle Manager's +// API operation UpdateLifecyclePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// A requested resource was not found. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// Bad request. The request is missing required parameters or has invalid parameters. +// +// * ErrCodeInternalServerException "InternalServerException" +// The service failed in an unexpected way. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because a limit was exceeded. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12/UpdateLifecyclePolicy +func (c *DLM) UpdateLifecyclePolicy(input *UpdateLifecyclePolicyInput) (*UpdateLifecyclePolicyOutput, error) { + req, out := c.UpdateLifecyclePolicyRequest(input) + return out, req.Send() +} + +// UpdateLifecyclePolicyWithContext is the same as UpdateLifecyclePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateLifecyclePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DLM) UpdateLifecyclePolicyWithContext(ctx aws.Context, input *UpdateLifecyclePolicyInput, opts ...request.Option) (*UpdateLifecyclePolicyOutput, error) { + req, out := c.UpdateLifecyclePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type CreateLifecyclePolicyInput struct { + _ struct{} `type:"structure"` + + // A description of the lifecycle policy. The characters ^[0-9A-Za-z _-]+$ are + // supported. + // + // Description is a required field + Description *string `type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the IAM role used to run the operations + // specified by the lifecycle policy. + // + // ExecutionRoleArn is a required field + ExecutionRoleArn *string `type:"string" required:"true"` + + // The configuration details of the lifecycle policy. + // + // Target tags cannot be re-used across lifecycle policies. 
+ // + // PolicyDetails is a required field + PolicyDetails *PolicyDetails `type:"structure" required:"true"` + + // The desired activation state of the lifecycle policy after creation. + // + // State is a required field + State *string `type:"string" required:"true" enum:"SettablePolicyStateValues"` +} + +// String returns the string representation +func (s CreateLifecyclePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLifecyclePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateLifecyclePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLifecyclePolicyInput"} + if s.Description == nil { + invalidParams.Add(request.NewErrParamRequired("Description")) + } + if s.ExecutionRoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("ExecutionRoleArn")) + } + if s.PolicyDetails == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDetails")) + } + if s.State == nil { + invalidParams.Add(request.NewErrParamRequired("State")) + } + if s.PolicyDetails != nil { + if err := s.PolicyDetails.Validate(); err != nil { + invalidParams.AddNested("PolicyDetails", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *CreateLifecyclePolicyInput) SetDescription(v string) *CreateLifecyclePolicyInput { + s.Description = &v + return s +} + +// SetExecutionRoleArn sets the ExecutionRoleArn field's value. +func (s *CreateLifecyclePolicyInput) SetExecutionRoleArn(v string) *CreateLifecyclePolicyInput { + s.ExecutionRoleArn = &v + return s +} + +// SetPolicyDetails sets the PolicyDetails field's value. +func (s *CreateLifecyclePolicyInput) SetPolicyDetails(v *PolicyDetails) *CreateLifecyclePolicyInput { + s.PolicyDetails = v + return s +} + +// SetState sets the State field's value. +func (s *CreateLifecyclePolicyInput) SetState(v string) *CreateLifecyclePolicyInput { + s.State = &v + return s +} + +type CreateLifecyclePolicyOutput struct { + _ struct{} `type:"structure"` + + // The identifier of the lifecycle policy. + PolicyId *string `type:"string"` +} + +// String returns the string representation +func (s CreateLifecyclePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLifecyclePolicyOutput) GoString() string { + return s.String() +} + +// SetPolicyId sets the PolicyId field's value. +func (s *CreateLifecyclePolicyOutput) SetPolicyId(v string) *CreateLifecyclePolicyOutput { + s.PolicyId = &v + return s +} + +// Specifies when to create snapshots of EBS volumes. +type CreateRule struct { + _ struct{} `type:"structure"` + + // The interval. The supported values are 12 and 24. + // + // Interval is a required field + Interval *int64 `min:"1" type:"integer" required:"true"` + + // The interval unit. + // + // IntervalUnit is a required field + IntervalUnit *string `type:"string" required:"true" enum:"IntervalUnitValues"` + + // The time, in UTC, to start the operation. + // + // The operation occurs within a one-hour window following the specified time. 
+ Times []*string `type:"list"` +} + +// String returns the string representation +func (s CreateRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateRule"} + if s.Interval == nil { + invalidParams.Add(request.NewErrParamRequired("Interval")) + } + if s.Interval != nil && *s.Interval < 1 { + invalidParams.Add(request.NewErrParamMinValue("Interval", 1)) + } + if s.IntervalUnit == nil { + invalidParams.Add(request.NewErrParamRequired("IntervalUnit")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInterval sets the Interval field's value. +func (s *CreateRule) SetInterval(v int64) *CreateRule { + s.Interval = &v + return s +} + +// SetIntervalUnit sets the IntervalUnit field's value. +func (s *CreateRule) SetIntervalUnit(v string) *CreateRule { + s.IntervalUnit = &v + return s +} + +// SetTimes sets the Times field's value. +func (s *CreateRule) SetTimes(v []*string) *CreateRule { + s.Times = v + return s +} + +type DeleteLifecyclePolicyInput struct { + _ struct{} `type:"structure"` + + // The identifier of the lifecycle policy. + // + // PolicyId is a required field + PolicyId *string `location:"uri" locationName:"policyId" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteLifecyclePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLifecyclePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLifecyclePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLifecyclePolicyInput"} + if s.PolicyId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyId sets the PolicyId field's value. +func (s *DeleteLifecyclePolicyInput) SetPolicyId(v string) *DeleteLifecyclePolicyInput { + s.PolicyId = &v + return s +} + +type DeleteLifecyclePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteLifecyclePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLifecyclePolicyOutput) GoString() string { + return s.String() +} + +type GetLifecyclePoliciesInput struct { + _ struct{} `type:"structure"` + + // The identifiers of the data lifecycle policies. + PolicyIds []*string `location:"querystring" locationName:"policyIds" type:"list"` + + // The resource type. + ResourceTypes []*string `location:"querystring" locationName:"resourceTypes" min:"1" type:"list"` + + // The activation state. + State *string `location:"querystring" locationName:"state" type:"string" enum:"GettablePolicyStateValues"` + + // The tags to add to objects created by the policy. + // + // Tags are strings in the format key=value. + // + // These user-defined tags are added in addition to the AWS-added lifecycle + // tags. + TagsToAdd []*string `location:"querystring" locationName:"tagsToAdd" type:"list"` + + // The target tag for a policy. + // + // Tags are strings in the format key=value. 
+ TargetTags []*string `location:"querystring" locationName:"targetTags" min:"1" type:"list"` +} + +// String returns the string representation +func (s GetLifecyclePoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLifecyclePoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLifecyclePoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLifecyclePoliciesInput"} + if s.ResourceTypes != nil && len(s.ResourceTypes) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceTypes", 1)) + } + if s.TargetTags != nil && len(s.TargetTags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetTags", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyIds sets the PolicyIds field's value. +func (s *GetLifecyclePoliciesInput) SetPolicyIds(v []*string) *GetLifecyclePoliciesInput { + s.PolicyIds = v + return s +} + +// SetResourceTypes sets the ResourceTypes field's value. +func (s *GetLifecyclePoliciesInput) SetResourceTypes(v []*string) *GetLifecyclePoliciesInput { + s.ResourceTypes = v + return s +} + +// SetState sets the State field's value. +func (s *GetLifecyclePoliciesInput) SetState(v string) *GetLifecyclePoliciesInput { + s.State = &v + return s +} + +// SetTagsToAdd sets the TagsToAdd field's value. +func (s *GetLifecyclePoliciesInput) SetTagsToAdd(v []*string) *GetLifecyclePoliciesInput { + s.TagsToAdd = v + return s +} + +// SetTargetTags sets the TargetTags field's value. +func (s *GetLifecyclePoliciesInput) SetTargetTags(v []*string) *GetLifecyclePoliciesInput { + s.TargetTags = v + return s +} + +type GetLifecyclePoliciesOutput struct { + _ struct{} `type:"structure"` + + // Summary information about the lifecycle policies. + Policies []*LifecyclePolicySummary `type:"list"` +} + +// String returns the string representation +func (s GetLifecyclePoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLifecyclePoliciesOutput) GoString() string { + return s.String() +} + +// SetPolicies sets the Policies field's value. +func (s *GetLifecyclePoliciesOutput) SetPolicies(v []*LifecyclePolicySummary) *GetLifecyclePoliciesOutput { + s.Policies = v + return s +} + +type GetLifecyclePolicyInput struct { + _ struct{} `type:"structure"` + + // The identifier of the lifecycle policy. + // + // PolicyId is a required field + PolicyId *string `location:"uri" locationName:"policyId" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetLifecyclePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLifecyclePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLifecyclePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLifecyclePolicyInput"} + if s.PolicyId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyId sets the PolicyId field's value. 
+func (s *GetLifecyclePolicyInput) SetPolicyId(v string) *GetLifecyclePolicyInput { + s.PolicyId = &v + return s +} + +type GetLifecyclePolicyOutput struct { + _ struct{} `type:"structure"` + + // Detailed information about the lifecycle policy. + Policy *LifecyclePolicy `type:"structure"` +} + +// String returns the string representation +func (s GetLifecyclePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLifecyclePolicyOutput) GoString() string { + return s.String() +} + +// SetPolicy sets the Policy field's value. +func (s *GetLifecyclePolicyOutput) SetPolicy(v *LifecyclePolicy) *GetLifecyclePolicyOutput { + s.Policy = v + return s +} + +// Detailed information about a lifecycle policy. +type LifecyclePolicy struct { + _ struct{} `type:"structure"` + + // The local date and time when the lifecycle policy was created. + DateCreated *time.Time `type:"timestamp"` + + // The local date and time when the lifecycle policy was last modified. + DateModified *time.Time `type:"timestamp"` + + // The description of the lifecycle policy. + Description *string `type:"string"` + + // The Amazon Resource Name (ARN) of the IAM role used to run the operations + // specified by the lifecycle policy. + ExecutionRoleArn *string `type:"string"` + + // The configuration of the lifecycle policy + PolicyDetails *PolicyDetails `type:"structure"` + + // The identifier of the lifecycle policy. + PolicyId *string `type:"string"` + + // The activation state of the lifecycle policy. + State *string `type:"string" enum:"GettablePolicyStateValues"` +} + +// String returns the string representation +func (s LifecyclePolicy) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecyclePolicy) GoString() string { + return s.String() +} + +// SetDateCreated sets the DateCreated field's value. +func (s *LifecyclePolicy) SetDateCreated(v time.Time) *LifecyclePolicy { + s.DateCreated = &v + return s +} + +// SetDateModified sets the DateModified field's value. +func (s *LifecyclePolicy) SetDateModified(v time.Time) *LifecyclePolicy { + s.DateModified = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *LifecyclePolicy) SetDescription(v string) *LifecyclePolicy { + s.Description = &v + return s +} + +// SetExecutionRoleArn sets the ExecutionRoleArn field's value. +func (s *LifecyclePolicy) SetExecutionRoleArn(v string) *LifecyclePolicy { + s.ExecutionRoleArn = &v + return s +} + +// SetPolicyDetails sets the PolicyDetails field's value. +func (s *LifecyclePolicy) SetPolicyDetails(v *PolicyDetails) *LifecyclePolicy { + s.PolicyDetails = v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *LifecyclePolicy) SetPolicyId(v string) *LifecyclePolicy { + s.PolicyId = &v + return s +} + +// SetState sets the State field's value. +func (s *LifecyclePolicy) SetState(v string) *LifecyclePolicy { + s.State = &v + return s +} + +// Summary information about a lifecycle policy. +type LifecyclePolicySummary struct { + _ struct{} `type:"structure"` + + // The description of the lifecycle policy. + Description *string `type:"string"` + + // The identifier of the lifecycle policy. + PolicyId *string `type:"string"` + + // The activation state of the lifecycle policy. 
+ State *string `type:"string" enum:"GettablePolicyStateValues"` +} + +// String returns the string representation +func (s LifecyclePolicySummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecyclePolicySummary) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *LifecyclePolicySummary) SetDescription(v string) *LifecyclePolicySummary { + s.Description = &v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *LifecyclePolicySummary) SetPolicyId(v string) *LifecyclePolicySummary { + s.PolicyId = &v + return s +} + +// SetState sets the State field's value. +func (s *LifecyclePolicySummary) SetState(v string) *LifecyclePolicySummary { + s.State = &v + return s +} + +// Specifies the configuration of a lifecycle policy. +type PolicyDetails struct { + _ struct{} `type:"structure"` + + // The resource type. + ResourceTypes []*string `min:"1" type:"list"` + + // The schedule of policy-defined actions. + Schedules []*Schedule `min:"1" type:"list"` + + // The single tag that identifies targeted resources for this policy. + TargetTags []*Tag `min:"1" type:"list"` +} + +// String returns the string representation +func (s PolicyDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyDetails) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PolicyDetails) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PolicyDetails"} + if s.ResourceTypes != nil && len(s.ResourceTypes) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceTypes", 1)) + } + if s.Schedules != nil && len(s.Schedules) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Schedules", 1)) + } + if s.TargetTags != nil && len(s.TargetTags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetTags", 1)) + } + if s.Schedules != nil { + for i, v := range s.Schedules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Schedules", i), err.(request.ErrInvalidParams)) + } + } + } + if s.TargetTags != nil { + for i, v := range s.TargetTags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TargetTags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceTypes sets the ResourceTypes field's value. +func (s *PolicyDetails) SetResourceTypes(v []*string) *PolicyDetails { + s.ResourceTypes = v + return s +} + +// SetSchedules sets the Schedules field's value. +func (s *PolicyDetails) SetSchedules(v []*Schedule) *PolicyDetails { + s.Schedules = v + return s +} + +// SetTargetTags sets the TargetTags field's value. +func (s *PolicyDetails) SetTargetTags(v []*Tag) *PolicyDetails { + s.TargetTags = v + return s +} + +// Specifies the number of snapshots to keep for each EBS volume. +type RetainRule struct { + _ struct{} `type:"structure"` + + // The number of snapshots to keep for each volume, up to a maximum of 1000. 
+ // + // Count is a required field + Count *int64 `min:"1" type:"integer" required:"true"` +} + +// String returns the string representation +func (s RetainRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RetainRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RetainRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RetainRule"} + if s.Count == nil { + invalidParams.Add(request.NewErrParamRequired("Count")) + } + if s.Count != nil && *s.Count < 1 { + invalidParams.Add(request.NewErrParamMinValue("Count", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCount sets the Count field's value. +func (s *RetainRule) SetCount(v int64) *RetainRule { + s.Count = &v + return s +} + +// Specifies a schedule. +type Schedule struct { + _ struct{} `type:"structure"` + + CopyTags *bool `type:"boolean"` + + // The create rule. + CreateRule *CreateRule `type:"structure"` + + // The name of the schedule. + Name *string `type:"string"` + + // The retain rule. + RetainRule *RetainRule `type:"structure"` + + // The tags to apply to policy-created resources. These user-defined tags are + // in addition to the AWS-added lifecycle tags. + TagsToAdd []*Tag `type:"list"` +} + +// String returns the string representation +func (s Schedule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Schedule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Schedule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Schedule"} + if s.CreateRule != nil { + if err := s.CreateRule.Validate(); err != nil { + invalidParams.AddNested("CreateRule", err.(request.ErrInvalidParams)) + } + } + if s.RetainRule != nil { + if err := s.RetainRule.Validate(); err != nil { + invalidParams.AddNested("RetainRule", err.(request.ErrInvalidParams)) + } + } + if s.TagsToAdd != nil { + for i, v := range s.TagsToAdd { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TagsToAdd", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCopyTags sets the CopyTags field's value. +func (s *Schedule) SetCopyTags(v bool) *Schedule { + s.CopyTags = &v + return s +} + +// SetCreateRule sets the CreateRule field's value. +func (s *Schedule) SetCreateRule(v *CreateRule) *Schedule { + s.CreateRule = v + return s +} + +// SetName sets the Name field's value. +func (s *Schedule) SetName(v string) *Schedule { + s.Name = &v + return s +} + +// SetRetainRule sets the RetainRule field's value. +func (s *Schedule) SetRetainRule(v *RetainRule) *Schedule { + s.RetainRule = v + return s +} + +// SetTagsToAdd sets the TagsToAdd field's value. +func (s *Schedule) SetTagsToAdd(v []*Tag) *Schedule { + s.TagsToAdd = v + return s +} + +// Specifies a tag for a resource. +type Tag struct { + _ struct{} `type:"structure"` + + // The tag key. + // + // Key is a required field + Key *string `type:"string" required:"true"` + + // The tag value. 
+ // + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type UpdateLifecyclePolicyInput struct { + _ struct{} `type:"structure"` + + // A description of the lifecycle policy. + Description *string `type:"string"` + + // The Amazon Resource Name (ARN) of the IAM role used to run the operations + // specified by the lifecycle policy. + ExecutionRoleArn *string `type:"string"` + + // The configuration of the lifecycle policy. + // + // Target tags cannot be re-used across policies. + PolicyDetails *PolicyDetails `type:"structure"` + + // The identifier of the lifecycle policy. + // + // PolicyId is a required field + PolicyId *string `location:"uri" locationName:"policyId" type:"string" required:"true"` + + // The desired activation state of the lifecycle policy after creation. + State *string `type:"string" enum:"SettablePolicyStateValues"` +} + +// String returns the string representation +func (s UpdateLifecyclePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateLifecyclePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateLifecyclePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateLifecyclePolicyInput"} + if s.PolicyId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyId")) + } + if s.PolicyDetails != nil { + if err := s.PolicyDetails.Validate(); err != nil { + invalidParams.AddNested("PolicyDetails", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *UpdateLifecyclePolicyInput) SetDescription(v string) *UpdateLifecyclePolicyInput { + s.Description = &v + return s +} + +// SetExecutionRoleArn sets the ExecutionRoleArn field's value. +func (s *UpdateLifecyclePolicyInput) SetExecutionRoleArn(v string) *UpdateLifecyclePolicyInput { + s.ExecutionRoleArn = &v + return s +} + +// SetPolicyDetails sets the PolicyDetails field's value. +func (s *UpdateLifecyclePolicyInput) SetPolicyDetails(v *PolicyDetails) *UpdateLifecyclePolicyInput { + s.PolicyDetails = v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *UpdateLifecyclePolicyInput) SetPolicyId(v string) *UpdateLifecyclePolicyInput { + s.PolicyId = &v + return s +} + +// SetState sets the State field's value. 
+func (s *UpdateLifecyclePolicyInput) SetState(v string) *UpdateLifecyclePolicyInput { + s.State = &v + return s +} + +type UpdateLifecyclePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateLifecyclePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateLifecyclePolicyOutput) GoString() string { + return s.String() +} + +const ( + // GettablePolicyStateValuesEnabled is a GettablePolicyStateValues enum value + GettablePolicyStateValuesEnabled = "ENABLED" + + // GettablePolicyStateValuesDisabled is a GettablePolicyStateValues enum value + GettablePolicyStateValuesDisabled = "DISABLED" + + // GettablePolicyStateValuesError is a GettablePolicyStateValues enum value + GettablePolicyStateValuesError = "ERROR" +) + +const ( + // IntervalUnitValuesHours is a IntervalUnitValues enum value + IntervalUnitValuesHours = "HOURS" +) + +const ( + // ResourceTypeValuesVolume is a ResourceTypeValues enum value + ResourceTypeValuesVolume = "VOLUME" +) + +const ( + // SettablePolicyStateValuesEnabled is a SettablePolicyStateValues enum value + SettablePolicyStateValuesEnabled = "ENABLED" + + // SettablePolicyStateValuesDisabled is a SettablePolicyStateValues enum value + SettablePolicyStateValuesDisabled = "DISABLED" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/dlm/doc.go b/vendor/github.com/aws/aws-sdk-go/service/dlm/doc.go new file mode 100644 index 00000000000..52f3edd7316 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dlm/doc.go @@ -0,0 +1,35 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package dlm provides the client and types for making API +// requests to Amazon Data Lifecycle Manager. +// +// With Amazon Data Lifecycle Manager, you can manage the lifecycle of your +// AWS resources. You create lifecycle policies, which are used to automate +// operations on the specified resources. +// +// Amazon DLM supports Amazon EBS volumes and snapshots. For information about +// using Amazon DLM with Amazon EBS, see Automating the Amazon EBS Snapshot +// Lifecycle (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html) +// in the Amazon EC2 User Guide. +// +// See https://docs.aws.amazon.com/goto/WebAPI/dlm-2018-01-12 for more information on this service. +// +// See dlm package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/dlm/ +// +// Using the Client +// +// To contact Amazon Data Lifecycle Manager with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon Data Lifecycle Manager client DLM for more +// information on creating client for this service. 
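+//
+// A minimal usage sketch, assuming a default session is sufficient; the
+// sess, svc, and out variable names are placeholders, and the calls shown
+// (session.NewSession, dlm.New, GetLifecyclePolicies) are described
+// elsewhere in this package:
+//
+//    sess := session.Must(session.NewSession()) // "github.com/aws/aws-sdk-go/aws/session"
+//    svc := dlm.New(sess)
+//
+//    // List summary information for all lifecycle policies.
+//    out, err := svc.GetLifecyclePolicies(&dlm.GetLifecyclePoliciesInput{})
+//    if err != nil {
+//        // handle the request error
+//    }
+//    fmt.Println(out.Policies)
+//
+// See the New function documentation for details on creating the client: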
+// https://docs.aws.amazon.com/sdk-for-go/api/service/dlm/#New +package dlm diff --git a/vendor/github.com/aws/aws-sdk-go/service/dlm/errors.go b/vendor/github.com/aws/aws-sdk-go/service/dlm/errors.go new file mode 100644 index 00000000000..f2f7e4b92bb --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dlm/errors.go @@ -0,0 +1,30 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package dlm + +const ( + + // ErrCodeInternalServerException for service response error code + // "InternalServerException". + // + // The service failed in an unexpected way. + ErrCodeInternalServerException = "InternalServerException" + + // ErrCodeInvalidRequestException for service response error code + // "InvalidRequestException". + // + // Bad request. The request is missing required parameters or has invalid parameters. + ErrCodeInvalidRequestException = "InvalidRequestException" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceededException". + // + // The request failed because a limit was exceeded. + ErrCodeLimitExceededException = "LimitExceededException" + + // ErrCodeResourceNotFoundException for service response error code + // "ResourceNotFoundException". + // + // A requested resource was not found. + ErrCodeResourceNotFoundException = "ResourceNotFoundException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/dlm/service.go b/vendor/github.com/aws/aws-sdk-go/service/dlm/service.go new file mode 100644 index 00000000000..63b2a803edb --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/dlm/service.go @@ -0,0 +1,99 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package dlm + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/restjson" +) + +// DLM provides the API operation methods for making requests to +// Amazon Data Lifecycle Manager. See this package's package overview docs +// for details on the service. +// +// DLM methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type DLM struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "DLM" // Name of service. + EndpointsID = "dlm" // ID to lookup a service endpoint with. + ServiceID = "DLM" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the DLM client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a DLM client from just a session. +// svc := dlm.New(mySession) +// +// // Create a DLM client with additional configuration +// svc := dlm.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *DLM { + c := p.ClientConfig(EndpointsID, cfgs...) 
+ if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "dlm" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *DLM { + svc := &DLM{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2018-01-12", + JSONVersion: "1.1", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(restjson.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(restjson.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(restjson.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(restjson.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a DLM operation and runs any +// custom request initialization. +func (c *DLM) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go index 43abb412355..bdfa0beda36 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go @@ -4,10 +4,12 @@ package dynamodb import ( "fmt" + "net/url" "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/crr" "github.com/aws/aws-sdk-go/aws/request" "github.com/aws/aws-sdk-go/private/protocol" "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" @@ -17,8 +19,8 @@ const opBatchGetItem = "BatchGetItem" // BatchGetItemRequest generates a "aws/request.Request" representing the // client's request for the BatchGetItem operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -58,6 +60,27 @@ func (c *DynamoDB) BatchGetItemRequest(input *BatchGetItemInput) (req *request.R output = &BatchGetItemOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -212,8 +235,8 @@ const opBatchWriteItem = "BatchWriteItem" // BatchWriteItemRequest generates a "aws/request.Request" representing the // client's request for the BatchWriteItem operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -247,6 +270,27 @@ func (c *DynamoDB) BatchWriteItemRequest(input *BatchWriteItemInput) (req *reque output = &BatchWriteItemOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -316,6 +360,9 @@ func (c *DynamoDB) BatchWriteItemRequest(input *BatchWriteItemInput) (req *reque // BatchWriteItem request. For example, you cannot put and delete the same // item in the same BatchWriteItem request. // +// * Your request contains at least two items with identical hash and range +// keys (which essentially is two put operations). +// // * There are more than 25 requests in the batch. // // * Any individual item in a batch exceeds 400 KB. @@ -375,8 +422,8 @@ const opCreateBackup = "CreateBackup" // CreateBackupRequest generates a "aws/request.Request" representing the // client's request for the CreateBackup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -410,6 +457,27 @@ func (c *DynamoDB) CreateBackupRequest(input *CreateBackupInput) (req *request.R output = &CreateBackupOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -455,11 +523,11 @@ func (c *DynamoDB) CreateBackupRequest(input *CreateBackupInput) (req *request.R // // Returned Error Codes: // * ErrCodeTableNotFoundException "TableNotFoundException" -// A table with the name TableName does not currently exist within the subscriber's -// account. +// A source table with the name TableName does not currently exist within the +// subscriber's account. // // * ErrCodeTableInUseException "TableInUseException" -// A table by that name is either being created or deleted. +// A target table with the specified name is either being created or deleted. // // * ErrCodeContinuousBackupsUnavailableException "ContinuousBackupsUnavailableException" // Backups have not yet been enabled for this table. @@ -469,17 +537,11 @@ func (c *DynamoDB) CreateBackupRequest(input *CreateBackupInput) (req *request.R // table. 
The backups is either being created, deleted or restored to a table. // // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -516,8 +578,8 @@ const opCreateGlobalTable = "CreateGlobalTable" // CreateGlobalTableRequest generates a "aws/request.Request" representing the // client's request for the CreateGlobalTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -551,6 +613,27 @@ func (c *DynamoDB) CreateGlobalTableRequest(input *CreateGlobalTableInput) (req output = &CreateGlobalTableOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -560,16 +643,35 @@ func (c *DynamoDB) CreateGlobalTableRequest(input *CreateGlobalTableInput) (req // relationship between two or more DynamoDB tables with the same table name // in the provided regions. // -// Tables can only be added as the replicas of a global table group under the -// following conditions: +// If you want to add a new replica table to a global table, each of the following +// conditions must be true: +// +// * The table must have the same primary key as all of the other replicas. +// +// * The table must have the same name as all of the other replicas. +// +// * The table must have DynamoDB Streams enabled, with the stream containing +// both the new and the old images of the item. +// +// * None of the replica tables in the global table can contain any data. +// +// If global secondary indexes are specified, then the following conditions +// must also be met: // -// * The tables must have the same name. +// * The global secondary indexes must have the same name. // -// * The tables must contain no items. +// * The global secondary indexes must have the same hash key and sort key +// (if present). // -// * The tables must have the same hash key and sort key (if present). 
+// Write capacity settings should be set consistently across your replica tables +// and secondary indexes. DynamoDB strongly recommends enabling auto scaling +// to manage the write capacity settings for all of your global tables replicas +// and indexes. // -// * The tables must have DynamoDB Streams enabled (NEW_AND_OLD_IMAGES). +// If you prefer to manage write capacity settings manually, you should provision +// equal replicated write capacity units to your replica tables. You should +// also provision equal replicated write capacity units to matching secondary +// indexes across your global table. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -580,17 +682,11 @@ func (c *DynamoDB) CreateGlobalTableRequest(input *CreateGlobalTableInput) (req // // Returned Error Codes: // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -605,8 +701,8 @@ func (c *DynamoDB) CreateGlobalTableRequest(input *CreateGlobalTableInput) (req // The specified global table already exists. // // * ErrCodeTableNotFoundException "TableNotFoundException" -// A table with the name TableName does not currently exist within the subscriber's -// account. +// A source table with the name TableName does not currently exist within the +// subscriber's account. // // See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/CreateGlobalTable func (c *DynamoDB) CreateGlobalTable(input *CreateGlobalTableInput) (*CreateGlobalTableOutput, error) { @@ -634,8 +730,8 @@ const opCreateTable = "CreateTable" // CreateTableRequest generates a "aws/request.Request" representing the // client's request for the CreateTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -669,6 +765,27 @@ func (c *DynamoDB) CreateTableRequest(input *CreateTableInput) (req *request.Req output = &CreateTableOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -704,17 +821,11 @@ func (c *DynamoDB) CreateTableRequest(input *CreateTableInput) (req *request.Req // in the CREATING state. // // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -751,8 +862,8 @@ const opDeleteBackup = "DeleteBackup" // DeleteBackupRequest generates a "aws/request.Request" representing the // client's request for the DeleteBackup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -786,6 +897,27 @@ func (c *DynamoDB) DeleteBackupRequest(input *DeleteBackupInput) (req *request.R output = &DeleteBackupOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -811,17 +943,11 @@ func (c *DynamoDB) DeleteBackupRequest(input *DeleteBackupInput) (req *request.R // table. The backups is either being created, deleted or restored to a table. // // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. 
These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -858,8 +984,8 @@ const opDeleteItem = "DeleteItem" // DeleteItemRequest generates a "aws/request.Request" representing the // client's request for the DeleteItem operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -893,6 +1019,27 @@ func (c *DynamoDB) DeleteItemRequest(input *DeleteItemInput) (req *request.Reque output = &DeleteItemOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -969,8 +1116,8 @@ const opDeleteTable = "DeleteTable" // DeleteTableRequest generates a "aws/request.Request" representing the // client's request for the DeleteTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1004,6 +1151,27 @@ func (c *DynamoDB) DeleteTableRequest(input *DeleteTableInput) (req *request.Req output = &DeleteTableOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -1046,17 +1214,11 @@ func (c *DynamoDB) DeleteTableRequest(input *DeleteTableInput) (req *request.Req // might not be specified correctly, or its status might not be ACTIVE. // // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. 
// // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -1093,8 +1255,8 @@ const opDescribeBackup = "DescribeBackup" // DescribeBackupRequest generates a "aws/request.Request" representing the // client's request for the DescribeBackup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1128,6 +1290,27 @@ func (c *DynamoDB) DescribeBackupRequest(input *DescribeBackupInput) (req *reque output = &DescribeBackupOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -1177,8 +1360,8 @@ const opDescribeContinuousBackups = "DescribeContinuousBackups" // DescribeContinuousBackupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeContinuousBackups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1212,13 +1395,42 @@ func (c *DynamoDB) DescribeContinuousBackupsRequest(input *DescribeContinuousBac output = &DescribeContinuousBackupsOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } // DescribeContinuousBackups API operation for Amazon DynamoDB. // -// Checks the status of the backup restore settings on the specified table. -// If backups are enabled, ContinuousBackupsStatus will bet set to ENABLED. +// Checks the status of continuous backups and point in time recovery on the +// specified table. 
Continuous backups are ENABLED on all tables at table creation. +// If point in time recovery is enabled, PointInTimeRecoveryStatus will be set +// to ENABLED. +// +// Once continuous backups and point in time recovery are enabled, you can restore +// to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime. +// +// LatestRestorableDateTime is typically 5 minutes before the current time. +// You can restore your table to any point in time during the last 35 days. // // You can call DescribeContinuousBackups at a maximum rate of 10 times per // second. @@ -1232,8 +1444,8 @@ func (c *DynamoDB) DescribeContinuousBackupsRequest(input *DescribeContinuousBac // // Returned Error Codes: // * ErrCodeTableNotFoundException "TableNotFoundException" -// A table with the name TableName does not currently exist within the subscriber's -// account. +// A source table with the name TableName does not currently exist within the +// subscriber's account. // // * ErrCodeInternalServerError "InternalServerError" // An error occurred on the server side. @@ -1260,12 +1472,143 @@ func (c *DynamoDB) DescribeContinuousBackupsWithContext(ctx aws.Context, input * return out, req.Send() } +const opDescribeEndpoints = "DescribeEndpoints" + +// DescribeEndpointsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeEndpoints operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeEndpoints for more information on using the DescribeEndpoints +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeEndpointsRequest method. +// req, resp := client.DescribeEndpointsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeEndpoints +func (c *DynamoDB) DescribeEndpointsRequest(input *DescribeEndpointsInput) (req *request.Request, output *DescribeEndpointsOutput) { + op := &request.Operation{ + Name: opDescribeEndpoints, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeEndpointsInput{} + } + + output = &DescribeEndpointsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeEndpoints API operation for Amazon DynamoDB. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeEndpoints for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeEndpoints +func (c *DynamoDB) DescribeEndpoints(input *DescribeEndpointsInput) (*DescribeEndpointsOutput, error) { + req, out := c.DescribeEndpointsRequest(input) + return out, req.Send() +} + +// DescribeEndpointsWithContext is the same as DescribeEndpoints with the addition of +// the ability to pass a context and additional request options. 
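
> The DescribeContinuousBackups documentation added above describes the point in time recovery status and the EarliestRestorableDateTime/LatestRestorableDateTime restore window. Purely as an illustrative sketch (not part of the diff), this is roughly how a caller could read that information through the vendored SDK; the table name is a placeholder and the session is assumed to pick up credentials and region from the environment.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))

	// "example" is a placeholder table name.
	out, err := svc.DescribeContinuousBackups(&dynamodb.DescribeContinuousBackupsInput{
		TableName: aws.String("example"),
	})
	if err != nil {
		log.Fatal(err)
	}

	desc := out.ContinuousBackupsDescription
	fmt.Println("continuous backups:", aws.StringValue(desc.ContinuousBackupsStatus))
	if pitr := desc.PointInTimeRecoveryDescription; pitr != nil {
		fmt.Println("PITR status:", aws.StringValue(pitr.PointInTimeRecoveryStatus))
		fmt.Println("restore window:",
			aws.TimeValue(pitr.EarliestRestorableDateTime), "-",
			aws.TimeValue(pitr.LatestRestorableDateTime))
	}
}
```
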
+// +// See DescribeEndpoints for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeEndpointsWithContext(ctx aws.Context, input *DescribeEndpointsInput, opts ...request.Option) (*DescribeEndpointsOutput, error) { + req, out := c.DescribeEndpointsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type discovererDescribeEndpoints struct { + Client *DynamoDB + Required bool + EndpointCache *crr.EndpointCache + Params map[string]*string + Key string +} + +func (d *discovererDescribeEndpoints) Discover() (crr.Endpoint, error) { + input := &DescribeEndpointsInput{} + + resp, err := d.Client.DescribeEndpoints(input) + if err != nil { + return crr.Endpoint{}, err + } + + endpoint := crr.Endpoint{ + Key: d.Key, + } + + for _, e := range resp.Endpoints { + if e.Address == nil { + continue + } + + cachedInMinutes := aws.Int64Value(e.CachePeriodInMinutes) + u, err := url.Parse(*e.Address) + if err != nil { + continue + } + + addr := crr.WeightedAddress{ + URL: u, + Expired: time.Now().Add(time.Duration(cachedInMinutes) * time.Minute), + } + + endpoint.Add(addr) + } + + d.EndpointCache.Add(endpoint) + + return endpoint, nil +} + +func (d *discovererDescribeEndpoints) Handler(r *request.Request) { + endpointKey := crr.BuildEndpointKey(d.Params) + d.Key = endpointKey + + endpoint, err := d.EndpointCache.Get(d, endpointKey, d.Required) + if err != nil { + r.Error = err + return + } + + if endpoint.URL != nil && len(endpoint.URL.String()) > 0 { + r.HTTPRequest.URL = endpoint.URL + } +} + const opDescribeGlobalTable = "DescribeGlobalTable" // DescribeGlobalTableRequest generates a "aws/request.Request" representing the // client's request for the DescribeGlobalTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1299,6 +1642,27 @@ func (c *DynamoDB) DescribeGlobalTableRequest(input *DescribeGlobalTableInput) ( output = &DescribeGlobalTableOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -1342,79 +1706,203 @@ func (c *DynamoDB) DescribeGlobalTableWithContext(ctx aws.Context, input *Descri return out, req.Send() } -const opDescribeLimits = "DescribeLimits" +const opDescribeGlobalTableSettings = "DescribeGlobalTableSettings" -// DescribeLimitsRequest generates a "aws/request.Request" representing the -// client's request for the DescribeLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
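
> The discovererDescribeEndpoints type and the "crr.endpointdiscovery" build handler added throughout these hunks only take effect when the caller opts in via EnableEndpointDiscovery on the client configuration; the discoverer then calls DescribeEndpoints, caches the returned addresses for CachePeriodInMinutes, and rewrites the request URL. A minimal sketch of opting in, assuming default credentials and a region chosen only for illustration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	// Opting in on the client config causes the "crr.endpointdiscovery" build
	// handler added in these hunks to resolve and cache service endpoints via
	// DescribeEndpoints before requests are sent.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:                  aws.String("us-west-2"), // illustrative region
		EnableEndpointDiscovery: aws.Bool(true),
	}))
	svc := dynamodb.New(sess)

	// Any operation wired up with the discoverer now goes through the cached
	// endpoint; ListTables is used here only as a convenient read-only call.
	out, err := svc.ListTables(&dynamodb.ListTablesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, name := range out.TableNames {
		fmt.Println(aws.StringValue(name))
	}
}
```
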
+// DescribeGlobalTableSettingsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeGlobalTableSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeLimits for more information on using the DescribeLimits +// See DescribeGlobalTableSettings for more information on using the DescribeGlobalTableSettings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeLimitsRequest method. -// req, resp := client.DescribeLimitsRequest(params) +// // Example sending a request using the DescribeGlobalTableSettingsRequest method. +// req, resp := client.DescribeGlobalTableSettingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeLimits -func (c *DynamoDB) DescribeLimitsRequest(input *DescribeLimitsInput) (req *request.Request, output *DescribeLimitsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeGlobalTableSettings +func (c *DynamoDB) DescribeGlobalTableSettingsRequest(input *DescribeGlobalTableSettingsInput) (req *request.Request, output *DescribeGlobalTableSettingsOutput) { op := &request.Operation{ - Name: opDescribeLimits, + Name: opDescribeGlobalTableSettings, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeLimitsInput{} + input = &DescribeGlobalTableSettingsInput{} } - output = &DescribeLimitsOutput{} + output = &DescribeGlobalTableSettingsOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } -// DescribeLimits API operation for Amazon DynamoDB. +// DescribeGlobalTableSettings API operation for Amazon DynamoDB. // -// Returns the current provisioned-capacity limits for your AWS account in a -// region, both for the region as a whole and for any one DynamoDB table that -// you create there. +// Describes region specific settings for a global table. // -// When you establish an AWS account, the account has initial limits on the -// maximum read capacity units and write capacity units that you can provision -// across all of your DynamoDB tables in a given region. Also, there are per-table -// limits that apply when you create a table there. For more information, see -// Limits (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) -// page in the Amazon DynamoDB Developer Guide. +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
// -// Although you can increase these limits by filing a case at AWS Support Center -// (https://console.aws.amazon.com/support/home#/), obtaining the increase is -// not instantaneous. The DescribeLimits action lets you write code to compare -// the capacity you are currently using to those limits imposed by your account -// so that you have enough time to apply for an increase before you hit a limit. +// See the AWS API reference guide for Amazon DynamoDB's +// API operation DescribeGlobalTableSettings for usage and error information. // -// For example, you could use one of the AWS SDKs to do the following: +// Returned Error Codes: +// * ErrCodeGlobalTableNotFoundException "GlobalTableNotFoundException" +// The specified global table does not exist. // -// Call DescribeLimits for a particular region to obtain your current account -// limits on provisioned capacity there. +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. // -// Create a variable to hold the aggregate read capacity units provisioned for -// all your tables in that region, and one to hold the aggregate write capacity -// units. Zero them both. +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeGlobalTableSettings +func (c *DynamoDB) DescribeGlobalTableSettings(input *DescribeGlobalTableSettingsInput) (*DescribeGlobalTableSettingsOutput, error) { + req, out := c.DescribeGlobalTableSettingsRequest(input) + return out, req.Send() +} + +// DescribeGlobalTableSettingsWithContext is the same as DescribeGlobalTableSettings with the addition of +// the ability to pass a context and additional request options. // -// Call ListTables to obtain a list of all your DynamoDB tables. +// See DescribeGlobalTableSettings for details on how to use this API operation. // -// For each table name listed by ListTables, do the following: +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) DescribeGlobalTableSettingsWithContext(ctx aws.Context, input *DescribeGlobalTableSettingsInput, opts ...request.Option) (*DescribeGlobalTableSettingsOutput, error) { + req, out := c.DescribeGlobalTableSettingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeLimits = "DescribeLimits" + +// DescribeLimitsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeLimits operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeLimits for more information on using the DescribeLimits +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeLimitsRequest method. 
+// req, resp := client.DescribeLimitsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/DescribeLimits +func (c *DynamoDB) DescribeLimitsRequest(input *DescribeLimitsInput) (req *request.Request, output *DescribeLimitsOutput) { + op := &request.Operation{ + Name: opDescribeLimits, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeLimitsInput{} + } + + output = &DescribeLimitsOutput{} + req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } + return +} + +// DescribeLimits API operation for Amazon DynamoDB. +// +// Returns the current provisioned-capacity limits for your AWS account in a +// region, both for the region as a whole and for any one DynamoDB table that +// you create there. +// +// When you establish an AWS account, the account has initial limits on the +// maximum read capacity units and write capacity units that you can provision +// across all of your DynamoDB tables in a given region. Also, there are per-table +// limits that apply when you create a table there. For more information, see +// Limits (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) +// page in the Amazon DynamoDB Developer Guide. +// +// Although you can increase these limits by filing a case at AWS Support Center +// (https://console.aws.amazon.com/support/home#/), obtaining the increase is +// not instantaneous. The DescribeLimits action lets you write code to compare +// the capacity you are currently using to those limits imposed by your account +// so that you have enough time to apply for an increase before you hit a limit. +// +// For example, you could use one of the AWS SDKs to do the following: +// +// Call DescribeLimits for a particular region to obtain your current account +// limits on provisioned capacity there. +// +// Create a variable to hold the aggregate read capacity units provisioned for +// all your tables in that region, and one to hold the aggregate write capacity +// units. Zero them both. +// +// Call ListTables to obtain a list of all your DynamoDB tables. +// +// For each table name listed by ListTables, do the following: // // Call DescribeTable with the table name. // @@ -1481,8 +1969,8 @@ const opDescribeTable = "DescribeTable" // DescribeTableRequest generates a "aws/request.Request" representing the // client's request for the DescribeTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
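
> The DescribeLimits documentation above spells out a bookkeeping procedure: fetch the account and per-table limits, then walk every table (and its global secondary indexes) with ListTables/DescribeTable, summing the provisioned read and write capacity. The sketch below is one possible rendering of that procedure against the vendored SDK; it assumes provisioned-capacity tables, skips tables it cannot describe, and omits retry handling.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))

	limits, err := svc.DescribeLimits(&dynamodb.DescribeLimitsInput{})
	if err != nil {
		log.Fatal(err)
	}

	var totalRead, totalWrite int64
	err = svc.ListTablesPages(&dynamodb.ListTablesInput{},
		func(page *dynamodb.ListTablesOutput, lastPage bool) bool {
			for _, name := range page.TableNames {
				desc, err := svc.DescribeTable(&dynamodb.DescribeTableInput{TableName: name})
				if err != nil {
					continue // sketch only: skip tables we cannot describe
				}
				t := desc.Table
				totalRead += aws.Int64Value(t.ProvisionedThroughput.ReadCapacityUnits)
				totalWrite += aws.Int64Value(t.ProvisionedThroughput.WriteCapacityUnits)
				// Global secondary indexes consume provisioned capacity too.
				for _, gsi := range t.GlobalSecondaryIndexes {
					totalRead += aws.Int64Value(gsi.ProvisionedThroughput.ReadCapacityUnits)
					totalWrite += aws.Int64Value(gsi.ProvisionedThroughput.WriteCapacityUnits)
				}
			}
			return true
		})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("read capacity:  %d of %d account limit\n",
		totalRead, aws.Int64Value(limits.AccountMaxReadCapacityUnits))
	fmt.Printf("write capacity: %d of %d account limit\n",
		totalWrite, aws.Int64Value(limits.AccountMaxWriteCapacityUnits))
}
```
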
@@ -1516,6 +2004,27 @@ func (c *DynamoDB) DescribeTableRequest(input *DescribeTableInput) (req *request output = &DescribeTableOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -1572,8 +2081,8 @@ const opDescribeTimeToLive = "DescribeTimeToLive" // DescribeTimeToLiveRequest generates a "aws/request.Request" representing the // client's request for the DescribeTimeToLive operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1607,6 +2116,27 @@ func (c *DynamoDB) DescribeTimeToLiveRequest(input *DescribeTimeToLiveInput) (re output = &DescribeTimeToLiveOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -1655,8 +2185,8 @@ const opGetItem = "GetItem" // GetItemRequest generates a "aws/request.Request" representing the // client's request for the GetItem operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1690,6 +2220,27 @@ func (c *DynamoDB) GetItemRequest(input *GetItemInput) (req *request.Request, ou output = &GetItemOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -1753,8 +2304,8 @@ const opListBackups = "ListBackups" // ListBackupsRequest generates a "aws/request.Request" representing the // client's request for the ListBackups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1788,6 +2339,27 @@ func (c *DynamoDB) ListBackupsRequest(input *ListBackupsInput) (req *request.Req output = &ListBackupsOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -1840,8 +2412,8 @@ const opListGlobalTables = "ListGlobalTables" // ListGlobalTablesRequest generates a "aws/request.Request" representing the // client's request for the ListGlobalTables operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1875,6 +2447,27 @@ func (c *DynamoDB) ListGlobalTablesRequest(input *ListGlobalTablesInput) (req *r output = &ListGlobalTablesOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -1919,8 +2512,8 @@ const opListTables = "ListTables" // ListTablesRequest generates a "aws/request.Request" representing the // client's request for the ListTables operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1960,6 +2553,27 @@ func (c *DynamoDB) ListTablesRequest(input *ListTablesInput) (req *request.Reque output = &ListTablesOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -2056,8 +2670,8 @@ const opListTagsOfResource = "ListTagsOfResource" // ListTagsOfResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsOfResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2091,6 +2705,27 @@ func (c *DynamoDB) ListTagsOfResourceRequest(input *ListTagsOfResourceInput) (re output = &ListTagsOfResourceOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -2143,8 +2778,8 @@ const opPutItem = "PutItem" // PutItemRequest generates a "aws/request.Request" representing the // client's request for the PutItem operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2178,6 +2813,27 @@ func (c *DynamoDB) PutItemRequest(input *PutItemInput) (req *request.Request, ou output = &PutItemOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -2284,8 +2940,8 @@ const opQuery = "Query" // QueryRequest generates a "aws/request.Request" representing the // client's request for the Query operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2325,6 +2981,27 @@ func (c *DynamoDB) QueryRequest(input *QueryInput) (req *request.Request, output output = &QueryOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -2478,8 +3155,8 @@ const opRestoreTableFromBackup = "RestoreTableFromBackup" // RestoreTableFromBackupRequest generates a "aws/request.Request" representing the // client's request for the RestoreTableFromBackup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2513,13 +3190,34 @@ func (c *DynamoDB) RestoreTableFromBackupRequest(input *RestoreTableFromBackupIn output = &RestoreTableFromBackupOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } // RestoreTableFromBackup API operation for Amazon DynamoDB. // // Creates a new table from an existing backup. Any number of users can execute -// up to 10 concurrent restores in a given account. +// up to 4 concurrent restores (any type of restore) in a given account. // // You can call RestoreTableFromBackup at a maximum rate of 10 times per second. // @@ -2546,10 +3244,10 @@ func (c *DynamoDB) RestoreTableFromBackupRequest(input *RestoreTableFromBackupIn // // Returned Error Codes: // * ErrCodeTableAlreadyExistsException "TableAlreadyExistsException" -// A table with the name already exists. +// A target table with the specified name already exists. // // * ErrCodeTableInUseException "TableInUseException" -// A table by that name is either being created or deleted. +// A target table with the specified name is either being created or deleted. // // * ErrCodeBackupNotFoundException "BackupNotFoundException" // Backup not found for the given BackupARN. @@ -2559,17 +3257,11 @@ func (c *DynamoDB) RestoreTableFromBackupRequest(input *RestoreTableFromBackupIn // table. The backups is either being created, deleted or restored to a table. // // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -2602,132 +3294,320 @@ func (c *DynamoDB) RestoreTableFromBackupWithContext(ctx aws.Context, input *Res return out, req.Send() } -const opScan = "Scan" +const opRestoreTableToPointInTime = "RestoreTableToPointInTime" -// ScanRequest generates a "aws/request.Request" representing the -// client's request for the Scan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
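
> The RestoreTableFromBackup documentation above notes that a restore creates a brand-new table and that at most 4 concurrent restores of any type are allowed per account. A minimal sketch of issuing such a restore through the vendored SDK; the backup ARN and table names below are placeholders, and the target table must not already exist.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))

	// Placeholder ARN and table names for illustration only.
	out, err := svc.RestoreTableFromBackup(&dynamodb.RestoreTableFromBackupInput{
		BackupArn:       aws.String("arn:aws:dynamodb:us-west-2:123456789012:table/example/backup/01234567890123-abcdefgh"),
		TargetTableName: aws.String("example-restored"),
	})
	if err != nil {
		// Per the documentation above, LimitExceededException is returned when
		// more than 4 concurrent restores are already running in the account.
		log.Fatal(err)
	}
	fmt.Println("restore status:", aws.StringValue(out.TableDescription.TableStatus))
}
```
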
+// RestoreTableToPointInTimeRequest generates a "aws/request.Request" representing the +// client's request for the RestoreTableToPointInTime operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See Scan for more information on using the Scan +// See RestoreTableToPointInTime for more information on using the RestoreTableToPointInTime // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ScanRequest method. -// req, resp := client.ScanRequest(params) +// // Example sending a request using the RestoreTableToPointInTimeRequest method. +// req, resp := client.RestoreTableToPointInTimeRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/Scan -func (c *DynamoDB) ScanRequest(input *ScanInput) (req *request.Request, output *ScanOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/RestoreTableToPointInTime +func (c *DynamoDB) RestoreTableToPointInTimeRequest(input *RestoreTableToPointInTimeInput) (req *request.Request, output *RestoreTableToPointInTimeOutput) { op := &request.Operation{ - Name: opScan, + Name: opRestoreTableToPointInTime, HTTPMethod: "POST", HTTPPath: "/", - Paginator: &request.Paginator{ - InputTokens: []string{"ExclusiveStartKey"}, - OutputTokens: []string{"LastEvaluatedKey"}, - LimitToken: "Limit", - TruncationToken: "", - }, } if input == nil { - input = &ScanInput{} + input = &RestoreTableToPointInTimeInput{} } - output = &ScanOutput{} + output = &RestoreTableToPointInTimeOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } -// Scan API operation for Amazon DynamoDB. +// RestoreTableToPointInTime API operation for Amazon DynamoDB. // -// The Scan operation returns one or more items and item attributes by accessing -// every item in a table or a secondary index. To have DynamoDB return fewer -// items, you can provide a FilterExpression operation. +// Restores the specified table to the specified point in time within EarliestRestorableDateTime +// and LatestRestorableDateTime. You can restore your table to any point in +// time during the last 35 days. Any number of users can execute up to 4 concurrent +// restores (any type of restore) in a given account. // -// If the total number of scanned items exceeds the maximum data set size limit -// of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey -// value to continue the scan in a subsequent operation. The results also include -// the number of items exceeding the limit. A scan can result in no table data -// meeting the filter criteria. 
+// When you restore using point in time recovery, DynamoDB restores your table +// data to the state based on the selected date and time (day:hour:minute:second) +// to a new table. // -// A single Scan operation will read up to the maximum number of items set (if -// using the Limit parameter) or a maximum of 1 MB of data and then apply any -// filtering to the results using FilterExpression. If LastEvaluatedKey is present -// in the response, you will need to paginate the result set. For more information, -// see Paginating the Results (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.Pagination) -// in the Amazon DynamoDB Developer Guide. +// Along with data, the following are also included on the new restored table +// using point in time recovery: // -// Scan operations proceed sequentially; however, for faster performance on -// a large table or secondary index, applications can request a parallel Scan -// operation by providing the Segment and TotalSegments parameters. For more -// information, see Parallel Scan (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.ParallelScan) -// in the Amazon DynamoDB Developer Guide. +// * Global secondary indexes (GSIs) // -// Scan uses eventually consistent reads when accessing the data in a table; -// therefore, the result set might not include the changes to data in the table -// immediately before the operation began. If you need a consistent copy of -// the data, as of the time that the Scan begins, you can set the ConsistentRead -// parameter to true. +// * Local secondary indexes (LSIs) +// +// * Provisioned read and write capacity +// +// * Encryption settings +// +// All these settings come from the current settings of the source table at +// the time of restore. +// +// You must manually set up the following on the restored table: +// +// * Auto scaling policies +// +// * IAM policies +// +// * Cloudwatch metrics and alarms +// +// * Tags +// +// * Stream settings +// +// * Time to Live (TTL) settings +// +// * Point in time recovery settings // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon DynamoDB's -// API operation Scan for usage and error information. +// API operation RestoreTableToPointInTime for usage and error information. // // Returned Error Codes: -// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" -// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry -// requests that receive this exception. Your request is eventually successful, -// unless your retry queue is too large to finish. Reduce the frequency of requests -// and use exponential backoff. For more information, go to Error Retries and -// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) -// in the Amazon DynamoDB Developer Guide. +// * ErrCodeTableAlreadyExistsException "TableAlreadyExistsException" +// A target table with the specified name already exists. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The operation tried to access a nonexistent table or index. The resource -// might not be specified correctly, or its status might not be ACTIVE. 
+// * ErrCodeTableNotFoundException "TableNotFoundException" +// A source table with the name TableName does not currently exist within the +// subscriber's account. +// +// * ErrCodeTableInUseException "TableInUseException" +// A target table with the specified name is either being created or deleted. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// There is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeInvalidRestoreTimeException "InvalidRestoreTimeException" +// An invalid restore time was specified. RestoreDateTime must be between EarliestRestorableDateTime +// and LatestRestorableDateTime. +// +// * ErrCodePointInTimeRecoveryUnavailableException "PointInTimeRecoveryUnavailableException" +// Point in time recovery has not yet been enabled for this source table. // // * ErrCodeInternalServerError "InternalServerError" // An error occurred on the server side. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/Scan -func (c *DynamoDB) Scan(input *ScanInput) (*ScanOutput, error) { - req, out := c.ScanRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/RestoreTableToPointInTime +func (c *DynamoDB) RestoreTableToPointInTime(input *RestoreTableToPointInTimeInput) (*RestoreTableToPointInTimeOutput, error) { + req, out := c.RestoreTableToPointInTimeRequest(input) return out, req.Send() } -// ScanWithContext is the same as Scan with the addition of +// RestoreTableToPointInTimeWithContext is the same as RestoreTableToPointInTime with the addition of // the ability to pass a context and additional request options. // -// See Scan for details on how to use this API operation. +// See RestoreTableToPointInTime for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *DynamoDB) ScanWithContext(ctx aws.Context, input *ScanInput, opts ...request.Option) (*ScanOutput, error) { - req, out := c.ScanRequest(input) +func (c *DynamoDB) RestoreTableToPointInTimeWithContext(ctx aws.Context, input *RestoreTableToPointInTimeInput, opts ...request.Option) (*RestoreTableToPointInTimeOutput, error) { + req, out := c.RestoreTableToPointInTimeRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ScanPages iterates over the pages of a Scan operation, -// calling the "fn" function with the response data for each page. To stop +const opScan = "Scan" + +// ScanRequest generates a "aws/request.Request" representing the +// client's request for the Scan operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
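
> Complementing the RestoreTableToPointInTime description and error codes above, the following sketch restores a table to its latest restorable time through the vendored SDK; table names are placeholders, and the comment notes the RestoreDateTime alternative for choosing an explicit timestamp within the restore window.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))

	// Table names are placeholders. UseLatestRestorableTime targets
	// LatestRestorableDateTime; to pick an explicit point in time instead,
	// set RestoreDateTime to a *time.Time within the restore window.
	out, err := svc.RestoreTableToPointInTime(&dynamodb.RestoreTableToPointInTimeInput{
		SourceTableName:         aws.String("example"),
		TargetTableName:         aws.String("example-pitr-restore"),
		UseLatestRestorableTime: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("restore status:", aws.StringValue(out.TableDescription.TableStatus))
}
```
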
+// the "output" return value is not valid until after Send returns without error. +// +// See Scan for more information on using the Scan +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ScanRequest method. +// req, resp := client.ScanRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/Scan +func (c *DynamoDB) ScanRequest(input *ScanInput) (req *request.Request, output *ScanOutput) { + op := &request.Operation{ + Name: opScan, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"ExclusiveStartKey"}, + OutputTokens: []string{"LastEvaluatedKey"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &ScanInput{} + } + + output = &ScanOutput{} + req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } + return +} + +// Scan API operation for Amazon DynamoDB. +// +// The Scan operation returns one or more items and item attributes by accessing +// every item in a table or a secondary index. To have DynamoDB return fewer +// items, you can provide a FilterExpression operation. +// +// If the total number of scanned items exceeds the maximum data set size limit +// of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey +// value to continue the scan in a subsequent operation. The results also include +// the number of items exceeding the limit. A scan can result in no table data +// meeting the filter criteria. +// +// A single Scan operation will read up to the maximum number of items set (if +// using the Limit parameter) or a maximum of 1 MB of data and then apply any +// filtering to the results using FilterExpression. If LastEvaluatedKey is present +// in the response, you will need to paginate the result set. For more information, +// see Paginating the Results (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.Pagination) +// in the Amazon DynamoDB Developer Guide. +// +// Scan operations proceed sequentially; however, for faster performance on +// a large table or secondary index, applications can request a parallel Scan +// operation by providing the Segment and TotalSegments parameters. For more +// information, see Parallel Scan (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.ParallelScan) +// in the Amazon DynamoDB Developer Guide. +// +// Scan uses eventually consistent reads when accessing the data in a table; +// therefore, the result set might not include the changes to data in the table +// immediately before the operation began. If you need a consistent copy of +// the data, as of the time that the Scan begins, you can set the ConsistentRead +// parameter to true. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation Scan for usage and error information. +// +// Returned Error Codes: +// * ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException" +// Your request rate is too high. The AWS SDKs for DynamoDB automatically retry +// requests that receive this exception. Your request is eventually successful, +// unless your retry queue is too large to finish. Reduce the frequency of requests +// and use exponential backoff. For more information, go to Error Retries and +// Exponential Backoff (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff) +// in the Amazon DynamoDB Developer Guide. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The operation tried to access a nonexistent table or index. The resource +// might not be specified correctly, or its status might not be ACTIVE. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/Scan +func (c *DynamoDB) Scan(input *ScanInput) (*ScanOutput, error) { + req, out := c.ScanRequest(input) + return out, req.Send() +} + +// ScanWithContext is the same as Scan with the addition of +// the ability to pass a context and additional request options. +// +// See Scan for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) ScanWithContext(ctx aws.Context, input *ScanInput, opts ...request.Option) (*ScanOutput, error) { + req, out := c.ScanRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ScanPages iterates over the pages of a Scan operation, +// calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // // See Scan method for more information on how to use this operation. @@ -2780,8 +3660,8 @@ const opTagResource = "TagResource" // TagResourceRequest generates a "aws/request.Request" representing the // client's request for the TagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
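
> The Scan documentation above calls out the 1 MB page limit, the LastEvaluatedKey continuation token, and the eventually consistent default, and ScanPages is described as the paginating helper. A short sketch of leaning on ScanPages rather than driving ExclusiveStartKey by hand; the table name is a placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))

	// ScanPages follows LastEvaluatedKey/ExclusiveStartKey automatically, so
	// the callback sees every page until the scan is exhausted or the
	// callback returns false.
	var total int64
	err := svc.ScanPages(&dynamodb.ScanInput{
		TableName:      aws.String("example"),        // placeholder table name
		ConsistentRead: aws.Bool(true),               // opt out of the eventually consistent default
	}, func(page *dynamodb.ScanOutput, lastPage bool) bool {
		total += aws.Int64Value(page.Count)
		return true // keep paginating
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("items scanned:", total)
}
```
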
@@ -2817,6 +3697,27 @@ func (c *DynamoDB) TagResourceRequest(input *TagResourceInput) (req *request.Req req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -2839,17 +3740,11 @@ func (c *DynamoDB) TagResourceRequest(input *TagResourceInput) (req *request.Req // // Returned Error Codes: // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -2895,8 +3790,8 @@ const opUntagResource = "UntagResource" // UntagResourceRequest generates a "aws/request.Request" representing the // client's request for the UntagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2932,6 +3827,27 @@ func (c *DynamoDB) UntagResourceRequest(input *UntagResourceInput) (req *request req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -2952,17 +3868,11 @@ func (c *DynamoDB) UntagResourceRequest(input *UntagResourceInput) (req *request // // Returned Error Codes: // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. 
+// There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -3004,12 +3914,129 @@ func (c *DynamoDB) UntagResourceWithContext(ctx aws.Context, input *UntagResourc return out, req.Send() } +const opUpdateContinuousBackups = "UpdateContinuousBackups" + +// UpdateContinuousBackupsRequest generates a "aws/request.Request" representing the +// client's request for the UpdateContinuousBackups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateContinuousBackups for more information on using the UpdateContinuousBackups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateContinuousBackupsRequest method. +// req, resp := client.UpdateContinuousBackupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateContinuousBackups +func (c *DynamoDB) UpdateContinuousBackupsRequest(input *UpdateContinuousBackupsInput) (req *request.Request, output *UpdateContinuousBackupsOutput) { + op := &request.Operation{ + Name: opUpdateContinuousBackups, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateContinuousBackupsInput{} + } + + output = &UpdateContinuousBackupsOutput{} + req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } + return +} + +// UpdateContinuousBackups API operation for Amazon DynamoDB. +// +// UpdateContinuousBackups enables or disables point in time recovery for the +// specified table. A successful UpdateContinuousBackups call returns the current +// ContinuousBackupsDescription. Continuous backups are ENABLED on all tables +// at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus +// will be set to ENABLED. +// +// Once continuous backups and point in time recovery are enabled, you can restore +// to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime. 
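
> As described above (and continued below), UpdateContinuousBackups toggles point in time recovery for a table and returns the current ContinuousBackupsDescription. A minimal sketch of enabling it through the vendored SDK, with a placeholder table name:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))

	// "example" is a placeholder; the call enables point in time recovery and
	// returns the resulting ContinuousBackupsDescription.
	out, err := svc.UpdateContinuousBackups(&dynamodb.UpdateContinuousBackupsInput{
		TableName: aws.String("example"),
		PointInTimeRecoverySpecification: &dynamodb.PointInTimeRecoverySpecification{
			PointInTimeRecoveryEnabled: aws.Bool(true),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	desc := out.ContinuousBackupsDescription
	fmt.Println("PITR status:",
		aws.StringValue(desc.PointInTimeRecoveryDescription.PointInTimeRecoveryStatus))
}
```
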
+// +// LatestRestorableDateTime is typically 5 minutes before the current time. +// You can restore your table to any point in time during the last 35 days.. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UpdateContinuousBackups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTableNotFoundException "TableNotFoundException" +// A source table with the name TableName does not currently exist within the +// subscriber's account. +// +// * ErrCodeContinuousBackupsUnavailableException "ContinuousBackupsUnavailableException" +// Backups have not yet been enabled for this table. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateContinuousBackups +func (c *DynamoDB) UpdateContinuousBackups(input *UpdateContinuousBackupsInput) (*UpdateContinuousBackupsOutput, error) { + req, out := c.UpdateContinuousBackupsRequest(input) + return out, req.Send() +} + +// UpdateContinuousBackupsWithContext is the same as UpdateContinuousBackups with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateContinuousBackups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) UpdateContinuousBackupsWithContext(ctx aws.Context, input *UpdateContinuousBackupsInput, opts ...request.Option) (*UpdateContinuousBackupsOutput, error) { + req, out := c.UpdateContinuousBackupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateGlobalTable = "UpdateGlobalTable" // UpdateGlobalTableRequest generates a "aws/request.Request" representing the // client's request for the UpdateGlobalTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3043,6 +4070,27 @@ func (c *DynamoDB) UpdateGlobalTableRequest(input *UpdateGlobalTableInput) (req output = &UpdateGlobalTableOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -3051,13 +4099,24 @@ func (c *DynamoDB) UpdateGlobalTableRequest(input *UpdateGlobalTableInput) (req // Adds or removes replicas in the specified global table. The global table // must already exist to be able to use this operation. 
Any replica to be added // must be empty, must have the same name as the global table, must have the -// same key schema, must have DynamoDB Streams enabled, and cannot have any -// local secondary indexes (LSIs). +// same key schema, and must have DynamoDB Streams enabled and must have same +// provisioned and maximum write capacity units. // // Although you can use UpdateGlobalTable to add replicas and remove replicas // in a single request, for simplicity we recommend that you issue separate // requests for adding or removing replicas. // +// If global secondary indexes are specified, then the following conditions +// must also be met: +// +// * The global secondary indexes must have the same name. +// +// * The global secondary indexes must have the same hash key and sort key +// (if present). +// +// * The global secondary indexes must have the same provisioned and maximum +// write capacity units. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -3079,8 +4138,8 @@ func (c *DynamoDB) UpdateGlobalTableRequest(input *UpdateGlobalTableInput) (req // The specified replica is no longer part of the global table. // // * ErrCodeTableNotFoundException "TableNotFoundException" -// A table with the name TableName does not currently exist within the subscriber's -// account. +// A source table with the name TableName does not currently exist within the +// subscriber's account. // // See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateGlobalTable func (c *DynamoDB) UpdateGlobalTable(input *UpdateGlobalTableInput) (*UpdateGlobalTableOutput, error) { @@ -3104,45 +4163,193 @@ func (c *DynamoDB) UpdateGlobalTableWithContext(ctx aws.Context, input *UpdateGl return out, req.Send() } -const opUpdateItem = "UpdateItem" +const opUpdateGlobalTableSettings = "UpdateGlobalTableSettings" -// UpdateItemRequest generates a "aws/request.Request" representing the -// client's request for the UpdateItem operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateGlobalTableSettingsRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGlobalTableSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateItem for more information on using the UpdateItem +// See UpdateGlobalTableSettings for more information on using the UpdateGlobalTableSettings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateItemRequest method. -// req, resp := client.UpdateItemRequest(params) +// // Example sending a request using the UpdateGlobalTableSettingsRequest method. 
+// req, resp := client.UpdateGlobalTableSettingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateItem -func (c *DynamoDB) UpdateItemRequest(input *UpdateItemInput) (req *request.Request, output *UpdateItemOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateGlobalTableSettings +func (c *DynamoDB) UpdateGlobalTableSettingsRequest(input *UpdateGlobalTableSettingsInput) (req *request.Request, output *UpdateGlobalTableSettingsOutput) { op := &request.Operation{ - Name: opUpdateItem, + Name: opUpdateGlobalTableSettings, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateItemInput{} + input = &UpdateGlobalTableSettingsInput{} } - output = &UpdateItemOutput{} + output = &UpdateGlobalTableSettingsOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } + return +} + +// UpdateGlobalTableSettings API operation for Amazon DynamoDB. +// +// Updates settings for a global table. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon DynamoDB's +// API operation UpdateGlobalTableSettings for usage and error information. +// +// Returned Error Codes: +// * ErrCodeGlobalTableNotFoundException "GlobalTableNotFoundException" +// The specified global table does not exist. +// +// * ErrCodeReplicaNotFoundException "ReplicaNotFoundException" +// The specified replica is no longer part of the global table. +// +// * ErrCodeIndexNotFoundException "IndexNotFoundException" +// The operation tried to access a nonexistent index. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// There is no limit to the number of daily on-demand backups that can be taken. +// +// Up to 10 simultaneous table operations are allowed per account. These operations +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. +// +// For tables with secondary indexes, only one of those tables can be in the +// CREATING state at any point in time. Do not attempt to create more than one +// such table simultaneously. +// +// The total limit of tables in the ACTIVE state is 250. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to recreate an existing table, or tried to delete a table currently +// in the CREATING state. +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateGlobalTableSettings +func (c *DynamoDB) UpdateGlobalTableSettings(input *UpdateGlobalTableSettingsInput) (*UpdateGlobalTableSettingsOutput, error) { + req, out := c.UpdateGlobalTableSettingsRequest(input) + return out, req.Send() +} + +// UpdateGlobalTableSettingsWithContext is the same as UpdateGlobalTableSettings with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateGlobalTableSettings for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *DynamoDB) UpdateGlobalTableSettingsWithContext(ctx aws.Context, input *UpdateGlobalTableSettingsInput, opts ...request.Option) (*UpdateGlobalTableSettingsOutput, error) { + req, out := c.UpdateGlobalTableSettingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateItem = "UpdateItem" + +// UpdateItemRequest generates a "aws/request.Request" representing the +// client's request for the UpdateItem operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateItem for more information on using the UpdateItem +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateItemRequest method. +// req, resp := client.UpdateItemRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10/UpdateItem +func (c *DynamoDB) UpdateItemRequest(input *UpdateItemInput) (req *request.Request, output *UpdateItemOutput) { + op := &request.Operation{ + Name: opUpdateItem, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateItemInput{} + } + + output = &UpdateItemOutput{} + req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -3213,8 +4420,8 @@ const opUpdateTable = "UpdateTable" // UpdateTableRequest generates a "aws/request.Request" representing the // client's request for the UpdateTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3248,6 +4455,27 @@ func (c *DynamoDB) UpdateTableRequest(input *UpdateTableInput) (req *request.Req output = &UpdateTableOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -3290,17 +4518,11 @@ func (c *DynamoDB) UpdateTableRequest(input *UpdateTableInput) (req *request.Req // might not be specified correctly, or its status might not be ACTIVE. // // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -3337,8 +4559,8 @@ const opUpdateTimeToLive = "UpdateTimeToLive" // UpdateTimeToLiveRequest generates a "aws/request.Request" representing the // client's request for the UpdateTimeToLive operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3372,6 +4594,27 @@ func (c *DynamoDB) UpdateTimeToLiveRequest(input *UpdateTimeToLiveInput) (req *r output = &UpdateTimeToLiveOutput{} req = c.newRequest(op, input, output) + if aws.BoolValue(req.Config.EnableEndpointDiscovery) { + de := discovererDescribeEndpoints{ + Required: false, + EndpointCache: c.endpointCache, + Params: map[string]*string{ + "op": aws.String(req.Operation.Name), + }, + Client: c, + } + + for k, v := range de.Params { + if v == nil { + delete(de.Params, k) + } + } + + req.Handlers.Build.PushFrontNamed(request.NamedHandler{ + Name: "crr.endpointdiscovery", + Fn: de.Handler, + }) + } return } @@ -3424,17 +4667,11 @@ func (c *DynamoDB) UpdateTimeToLiveRequest(input *UpdateTimeToLiveInput) (req *r // might not be specified correctly, or its status might not be ACTIVE. // // * ErrCodeLimitExceededException "LimitExceededException" -// Up to 50 CreateBackup operations are allowed per second, per account. There -// is no limit to the number of daily on-demand backups that can be taken. +// There is no limit to the number of daily on-demand backups that can be taken. 
// // Up to 10 simultaneous table operations are allowed per account. These operations -// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. -// -// For tables with secondary indexes, only one of those tables can be in the -// CREATING state at any point in time. Do not attempt to create more than one -// such table simultaneously. -// -// The total limit of tables in the ACTIVE state is 250. +// include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, +// and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -3780,186 +5017,622 @@ func (s *AttributeValueUpdate) SetValue(v *AttributeValue) *AttributeValueUpdate return s } -// Contains the description of the backup created for the table. -type BackupDescription struct { +// Represents the properties of the scaling policy. +type AutoScalingPolicyDescription struct { _ struct{} `type:"structure"` - // Contains the details of the backup created for the table. - BackupDetails *BackupDetails `type:"structure"` - - // Contains the details of the table when the backup was created. - SourceTableDetails *SourceTableDetails `type:"structure"` + // The name of the scaling policy. + PolicyName *string `min:"1" type:"string"` - // Contains the details of the features enabled on the table when the backup - // was created. For example, LSIs, GSIs, streams, TTL. - SourceTableFeatureDetails *SourceTableFeatureDetails `type:"structure"` + // Represents a target tracking scaling policy configuration. + TargetTrackingScalingPolicyConfiguration *AutoScalingTargetTrackingScalingPolicyConfigurationDescription `type:"structure"` } // String returns the string representation -func (s BackupDescription) String() string { +func (s AutoScalingPolicyDescription) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s BackupDescription) GoString() string { +func (s AutoScalingPolicyDescription) GoString() string { return s.String() } -// SetBackupDetails sets the BackupDetails field's value. -func (s *BackupDescription) SetBackupDetails(v *BackupDetails) *BackupDescription { - s.BackupDetails = v +// SetPolicyName sets the PolicyName field's value. +func (s *AutoScalingPolicyDescription) SetPolicyName(v string) *AutoScalingPolicyDescription { + s.PolicyName = &v return s } -// SetSourceTableDetails sets the SourceTableDetails field's value. -func (s *BackupDescription) SetSourceTableDetails(v *SourceTableDetails) *BackupDescription { - s.SourceTableDetails = v +// SetTargetTrackingScalingPolicyConfiguration sets the TargetTrackingScalingPolicyConfiguration field's value. +func (s *AutoScalingPolicyDescription) SetTargetTrackingScalingPolicyConfiguration(v *AutoScalingTargetTrackingScalingPolicyConfigurationDescription) *AutoScalingPolicyDescription { + s.TargetTrackingScalingPolicyConfiguration = v return s } -// SetSourceTableFeatureDetails sets the SourceTableFeatureDetails field's value. -func (s *BackupDescription) SetSourceTableFeatureDetails(v *SourceTableFeatureDetails) *BackupDescription { - s.SourceTableFeatureDetails = v +// Represents the autoscaling policy to be modified. +type AutoScalingPolicyUpdate struct { + _ struct{} `type:"structure"` + + // The name of the scaling policy. + PolicyName *string `min:"1" type:"string"` + + // Represents a target tracking scaling policy configuration. 
+ // + // TargetTrackingScalingPolicyConfiguration is a required field + TargetTrackingScalingPolicyConfiguration *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AutoScalingPolicyUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AutoScalingPolicyUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AutoScalingPolicyUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AutoScalingPolicyUpdate"} + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.TargetTrackingScalingPolicyConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("TargetTrackingScalingPolicyConfiguration")) + } + if s.TargetTrackingScalingPolicyConfiguration != nil { + if err := s.TargetTrackingScalingPolicyConfiguration.Validate(); err != nil { + invalidParams.AddNested("TargetTrackingScalingPolicyConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyName sets the PolicyName field's value. +func (s *AutoScalingPolicyUpdate) SetPolicyName(v string) *AutoScalingPolicyUpdate { + s.PolicyName = &v return s } -// Contains the details of the backup created for the table. -type BackupDetails struct { +// SetTargetTrackingScalingPolicyConfiguration sets the TargetTrackingScalingPolicyConfiguration field's value. +func (s *AutoScalingPolicyUpdate) SetTargetTrackingScalingPolicyConfiguration(v *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate) *AutoScalingPolicyUpdate { + s.TargetTrackingScalingPolicyConfiguration = v + return s +} + +// Represents the autoscaling settings for a global table or global secondary +// index. +type AutoScalingSettingsDescription struct { _ struct{} `type:"structure"` - // ARN associated with the backup. - // - // BackupArn is a required field - BackupArn *string `min:"37" type:"string" required:"true"` + // Disabled autoscaling for this global table or global secondary index. + AutoScalingDisabled *bool `type:"boolean"` - // Time at which the backup was created. This is the request time of the backup. - // - // BackupCreationDateTime is a required field - BackupCreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + // Role ARN used for configuring autoScaling policy. + AutoScalingRoleArn *string `type:"string"` - // Name of the requested backup. - // - // BackupName is a required field - BackupName *string `min:"3" type:"string" required:"true"` + // The maximum capacity units that a global table or global secondary index + // should be scaled up to. + MaximumUnits *int64 `min:"1" type:"long"` - // Size of the backup in bytes. - BackupSizeBytes *int64 `type:"long"` + // The minimum capacity units that a global table or global secondary index + // should be scaled down to. + MinimumUnits *int64 `min:"1" type:"long"` - // Backup can be in one of the following states: CREATING, ACTIVE, DELETED. - // - // BackupStatus is a required field - BackupStatus *string `type:"string" required:"true" enum:"BackupStatus"` + // Information about the scaling policies. 
+ ScalingPolicies []*AutoScalingPolicyDescription `type:"list"` } // String returns the string representation -func (s BackupDetails) String() string { +func (s AutoScalingSettingsDescription) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s BackupDetails) GoString() string { +func (s AutoScalingSettingsDescription) GoString() string { return s.String() } -// SetBackupArn sets the BackupArn field's value. -func (s *BackupDetails) SetBackupArn(v string) *BackupDetails { - s.BackupArn = &v +// SetAutoScalingDisabled sets the AutoScalingDisabled field's value. +func (s *AutoScalingSettingsDescription) SetAutoScalingDisabled(v bool) *AutoScalingSettingsDescription { + s.AutoScalingDisabled = &v return s } -// SetBackupCreationDateTime sets the BackupCreationDateTime field's value. -func (s *BackupDetails) SetBackupCreationDateTime(v time.Time) *BackupDetails { - s.BackupCreationDateTime = &v +// SetAutoScalingRoleArn sets the AutoScalingRoleArn field's value. +func (s *AutoScalingSettingsDescription) SetAutoScalingRoleArn(v string) *AutoScalingSettingsDescription { + s.AutoScalingRoleArn = &v return s } -// SetBackupName sets the BackupName field's value. -func (s *BackupDetails) SetBackupName(v string) *BackupDetails { - s.BackupName = &v +// SetMaximumUnits sets the MaximumUnits field's value. +func (s *AutoScalingSettingsDescription) SetMaximumUnits(v int64) *AutoScalingSettingsDescription { + s.MaximumUnits = &v return s } -// SetBackupSizeBytes sets the BackupSizeBytes field's value. -func (s *BackupDetails) SetBackupSizeBytes(v int64) *BackupDetails { - s.BackupSizeBytes = &v +// SetMinimumUnits sets the MinimumUnits field's value. +func (s *AutoScalingSettingsDescription) SetMinimumUnits(v int64) *AutoScalingSettingsDescription { + s.MinimumUnits = &v return s } -// SetBackupStatus sets the BackupStatus field's value. -func (s *BackupDetails) SetBackupStatus(v string) *BackupDetails { - s.BackupStatus = &v +// SetScalingPolicies sets the ScalingPolicies field's value. +func (s *AutoScalingSettingsDescription) SetScalingPolicies(v []*AutoScalingPolicyDescription) *AutoScalingSettingsDescription { + s.ScalingPolicies = v return s } -// Contains details for the backup. -type BackupSummary struct { +// Represents the autoscaling settings to be modified for a global table or +// global secondary index. +type AutoScalingSettingsUpdate struct { _ struct{} `type:"structure"` - // ARN associated with the backup. - BackupArn *string `min:"37" type:"string"` - - // Time at which the backup was created. - BackupCreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` - - // Name of the specified backup. - BackupName *string `min:"3" type:"string"` - - // Size of the backup in bytes. - BackupSizeBytes *int64 `type:"long"` + // Disabled autoscaling for this global table or global secondary index. + AutoScalingDisabled *bool `type:"boolean"` - // Backup can be in one of the following states: CREATING, ACTIVE, DELETED. - BackupStatus *string `type:"string" enum:"BackupStatus"` + // Role ARN used for configuring autoscaling policy. + AutoScalingRoleArn *string `min:"1" type:"string"` - // ARN associated with the table. - TableArn *string `type:"string"` + // The maximum capacity units that a global table or global secondary index + // should be scaled up to. + MaximumUnits *int64 `min:"1" type:"long"` - // Unique identifier for the table. 
- TableId *string `type:"string"` + // The minimum capacity units that a global table or global secondary index + // should be scaled down to. + MinimumUnits *int64 `min:"1" type:"long"` - // Name of the table. - TableName *string `min:"3" type:"string"` + // The scaling policy to apply for scaling target global table or global secondary + // index capacity units. + ScalingPolicyUpdate *AutoScalingPolicyUpdate `type:"structure"` } // String returns the string representation -func (s BackupSummary) String() string { +func (s AutoScalingSettingsUpdate) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s BackupSummary) GoString() string { +func (s AutoScalingSettingsUpdate) GoString() string { return s.String() } -// SetBackupArn sets the BackupArn field's value. -func (s *BackupSummary) SetBackupArn(v string) *BackupSummary { - s.BackupArn = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *AutoScalingSettingsUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AutoScalingSettingsUpdate"} + if s.AutoScalingRoleArn != nil && len(*s.AutoScalingRoleArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AutoScalingRoleArn", 1)) + } + if s.MaximumUnits != nil && *s.MaximumUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaximumUnits", 1)) + } + if s.MinimumUnits != nil && *s.MinimumUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("MinimumUnits", 1)) + } + if s.ScalingPolicyUpdate != nil { + if err := s.ScalingPolicyUpdate.Validate(); err != nil { + invalidParams.AddNested("ScalingPolicyUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAutoScalingDisabled sets the AutoScalingDisabled field's value. +func (s *AutoScalingSettingsUpdate) SetAutoScalingDisabled(v bool) *AutoScalingSettingsUpdate { + s.AutoScalingDisabled = &v return s } -// SetBackupCreationDateTime sets the BackupCreationDateTime field's value. -func (s *BackupSummary) SetBackupCreationDateTime(v time.Time) *BackupSummary { - s.BackupCreationDateTime = &v +// SetAutoScalingRoleArn sets the AutoScalingRoleArn field's value. +func (s *AutoScalingSettingsUpdate) SetAutoScalingRoleArn(v string) *AutoScalingSettingsUpdate { + s.AutoScalingRoleArn = &v return s } -// SetBackupName sets the BackupName field's value. -func (s *BackupSummary) SetBackupName(v string) *BackupSummary { - s.BackupName = &v +// SetMaximumUnits sets the MaximumUnits field's value. +func (s *AutoScalingSettingsUpdate) SetMaximumUnits(v int64) *AutoScalingSettingsUpdate { + s.MaximumUnits = &v return s } -// SetBackupSizeBytes sets the BackupSizeBytes field's value. -func (s *BackupSummary) SetBackupSizeBytes(v int64) *BackupSummary { - s.BackupSizeBytes = &v +// SetMinimumUnits sets the MinimumUnits field's value. +func (s *AutoScalingSettingsUpdate) SetMinimumUnits(v int64) *AutoScalingSettingsUpdate { + s.MinimumUnits = &v return s } -// SetBackupStatus sets the BackupStatus field's value. +// SetScalingPolicyUpdate sets the ScalingPolicyUpdate field's value. +func (s *AutoScalingSettingsUpdate) SetScalingPolicyUpdate(v *AutoScalingPolicyUpdate) *AutoScalingSettingsUpdate { + s.ScalingPolicyUpdate = v + return s +} + +// Represents the properties of a target tracking scaling policy. 
+type AutoScalingTargetTrackingScalingPolicyConfigurationDescription struct { + _ struct{} `type:"structure"` + + // Indicates whether scale in by the target tracking policy is disabled. If + // the value is true, scale in is disabled and the target tracking policy won't + // remove capacity from the scalable resource. Otherwise, scale in is enabled + // and the target tracking policy can remove capacity from the scalable resource. + // The default value is false. + DisableScaleIn *bool `type:"boolean"` + + // The amount of time, in seconds, after a scale in activity completes before + // another scale in activity can start. The cooldown period is used to block + // subsequent scale in requests until it has expired. You should scale in conservatively + // to protect your application's availability. However, if another alarm triggers + // a scale out policy during the cooldown period after a scale-in, application + // autoscaling scales out your scalable target immediately. + ScaleInCooldown *int64 `type:"integer"` + + // The amount of time, in seconds, after a scale out activity completes before + // another scale out activity can start. While the cooldown period is in effect, + // the capacity that has been added by the previous scale out event that initiated + // the cooldown is calculated as part of the desired capacity for the next scale + // out. You should continuously (but not excessively) scale out. + ScaleOutCooldown *int64 `type:"integer"` + + // The target value for the metric. The range is 8.515920e-109 to 1.174271e+108 + // (Base 10) or 2e-360 to 2e360 (Base 2). + // + // TargetValue is a required field + TargetValue *float64 `type:"double" required:"true"` +} + +// String returns the string representation +func (s AutoScalingTargetTrackingScalingPolicyConfigurationDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AutoScalingTargetTrackingScalingPolicyConfigurationDescription) GoString() string { + return s.String() +} + +// SetDisableScaleIn sets the DisableScaleIn field's value. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationDescription) SetDisableScaleIn(v bool) *AutoScalingTargetTrackingScalingPolicyConfigurationDescription { + s.DisableScaleIn = &v + return s +} + +// SetScaleInCooldown sets the ScaleInCooldown field's value. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationDescription) SetScaleInCooldown(v int64) *AutoScalingTargetTrackingScalingPolicyConfigurationDescription { + s.ScaleInCooldown = &v + return s +} + +// SetScaleOutCooldown sets the ScaleOutCooldown field's value. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationDescription) SetScaleOutCooldown(v int64) *AutoScalingTargetTrackingScalingPolicyConfigurationDescription { + s.ScaleOutCooldown = &v + return s +} + +// SetTargetValue sets the TargetValue field's value. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationDescription) SetTargetValue(v float64) *AutoScalingTargetTrackingScalingPolicyConfigurationDescription { + s.TargetValue = &v + return s +} + +// Represents the settings of a target tracking scaling policy that will be +// modified. +type AutoScalingTargetTrackingScalingPolicyConfigurationUpdate struct { + _ struct{} `type:"structure"` + + // Indicates whether scale in by the target tracking policy is disabled. If + // the value is true, scale in is disabled and the target tracking policy won't + // remove capacity from the scalable resource. 
Otherwise, scale in is enabled + // and the target tracking policy can remove capacity from the scalable resource. + // The default value is false. + DisableScaleIn *bool `type:"boolean"` + + // The amount of time, in seconds, after a scale in activity completes before + // another scale in activity can start. The cooldown period is used to block + // subsequent scale in requests until it has expired. You should scale in conservatively + // to protect your application's availability. However, if another alarm triggers + // a scale out policy during the cooldown period after a scale-in, application + // autoscaling scales out your scalable target immediately. + ScaleInCooldown *int64 `type:"integer"` + + // The amount of time, in seconds, after a scale out activity completes before + // another scale out activity can start. While the cooldown period is in effect, + // the capacity that has been added by the previous scale out event that initiated + // the cooldown is calculated as part of the desired capacity for the next scale + // out. You should continuously (but not excessively) scale out. + ScaleOutCooldown *int64 `type:"integer"` + + // The target value for the metric. The range is 8.515920e-109 to 1.174271e+108 + // (Base 10) or 2e-360 to 2e360 (Base 2). + // + // TargetValue is a required field + TargetValue *float64 `type:"double" required:"true"` +} + +// String returns the string representation +func (s AutoScalingTargetTrackingScalingPolicyConfigurationUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AutoScalingTargetTrackingScalingPolicyConfigurationUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AutoScalingTargetTrackingScalingPolicyConfigurationUpdate"} + if s.TargetValue == nil { + invalidParams.Add(request.NewErrParamRequired("TargetValue")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDisableScaleIn sets the DisableScaleIn field's value. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate) SetDisableScaleIn(v bool) *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate { + s.DisableScaleIn = &v + return s +} + +// SetScaleInCooldown sets the ScaleInCooldown field's value. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate) SetScaleInCooldown(v int64) *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate { + s.ScaleInCooldown = &v + return s +} + +// SetScaleOutCooldown sets the ScaleOutCooldown field's value. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate) SetScaleOutCooldown(v int64) *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate { + s.ScaleOutCooldown = &v + return s +} + +// SetTargetValue sets the TargetValue field's value. +func (s *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate) SetTargetValue(v float64) *AutoScalingTargetTrackingScalingPolicyConfigurationUpdate { + s.TargetValue = &v + return s +} + +// Contains the description of the backup created for the table. +type BackupDescription struct { + _ struct{} `type:"structure"` + + // Contains the details of the backup created for the table. + BackupDetails *BackupDetails `type:"structure"` + + // Contains the details of the table when the backup was created. 
+ SourceTableDetails *SourceTableDetails `type:"structure"` + + // Contains the details of the features enabled on the table when the backup + // was created. For example, LSIs, GSIs, streams, TTL. + SourceTableFeatureDetails *SourceTableFeatureDetails `type:"structure"` +} + +// String returns the string representation +func (s BackupDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BackupDescription) GoString() string { + return s.String() +} + +// SetBackupDetails sets the BackupDetails field's value. +func (s *BackupDescription) SetBackupDetails(v *BackupDetails) *BackupDescription { + s.BackupDetails = v + return s +} + +// SetSourceTableDetails sets the SourceTableDetails field's value. +func (s *BackupDescription) SetSourceTableDetails(v *SourceTableDetails) *BackupDescription { + s.SourceTableDetails = v + return s +} + +// SetSourceTableFeatureDetails sets the SourceTableFeatureDetails field's value. +func (s *BackupDescription) SetSourceTableFeatureDetails(v *SourceTableFeatureDetails) *BackupDescription { + s.SourceTableFeatureDetails = v + return s +} + +// Contains the details of the backup created for the table. +type BackupDetails struct { + _ struct{} `type:"structure"` + + // ARN associated with the backup. + // + // BackupArn is a required field + BackupArn *string `min:"37" type:"string" required:"true"` + + // Time at which the backup was created. This is the request time of the backup. + // + // BackupCreationDateTime is a required field + BackupCreationDateTime *time.Time `type:"timestamp" required:"true"` + + // Time at which the automatic on-demand backup created by DynamoDB will expire. + // This SYSTEM on-demand backup expires automatically 35 days after its creation. + BackupExpiryDateTime *time.Time `type:"timestamp"` + + // Name of the requested backup. + // + // BackupName is a required field + BackupName *string `min:"3" type:"string" required:"true"` + + // Size of the backup in bytes. + BackupSizeBytes *int64 `type:"long"` + + // Backup can be in one of the following states: CREATING, ACTIVE, DELETED. + // + // BackupStatus is a required field + BackupStatus *string `type:"string" required:"true" enum:"BackupStatus"` + + // BackupType: + // + // * USER - On-demand backup created by you. + // + // * SYSTEM - On-demand backup automatically created by DynamoDB. + // + // BackupType is a required field + BackupType *string `type:"string" required:"true" enum:"BackupType"` +} + +// String returns the string representation +func (s BackupDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BackupDetails) GoString() string { + return s.String() +} + +// SetBackupArn sets the BackupArn field's value. +func (s *BackupDetails) SetBackupArn(v string) *BackupDetails { + s.BackupArn = &v + return s +} + +// SetBackupCreationDateTime sets the BackupCreationDateTime field's value. +func (s *BackupDetails) SetBackupCreationDateTime(v time.Time) *BackupDetails { + s.BackupCreationDateTime = &v + return s +} + +// SetBackupExpiryDateTime sets the BackupExpiryDateTime field's value. +func (s *BackupDetails) SetBackupExpiryDateTime(v time.Time) *BackupDetails { + s.BackupExpiryDateTime = &v + return s +} + +// SetBackupName sets the BackupName field's value. +func (s *BackupDetails) SetBackupName(v string) *BackupDetails { + s.BackupName = &v + return s +} + +// SetBackupSizeBytes sets the BackupSizeBytes field's value. 
+func (s *BackupDetails) SetBackupSizeBytes(v int64) *BackupDetails { + s.BackupSizeBytes = &v + return s +} + +// SetBackupStatus sets the BackupStatus field's value. +func (s *BackupDetails) SetBackupStatus(v string) *BackupDetails { + s.BackupStatus = &v + return s +} + +// SetBackupType sets the BackupType field's value. +func (s *BackupDetails) SetBackupType(v string) *BackupDetails { + s.BackupType = &v + return s +} + +// Contains details for the backup. +type BackupSummary struct { + _ struct{} `type:"structure"` + + // ARN associated with the backup. + BackupArn *string `min:"37" type:"string"` + + // Time at which the backup was created. + BackupCreationDateTime *time.Time `type:"timestamp"` + + // Time at which the automatic on-demand backup created by DynamoDB will expire. + // This SYSTEM on-demand backup expires automatically 35 days after its creation. + BackupExpiryDateTime *time.Time `type:"timestamp"` + + // Name of the specified backup. + BackupName *string `min:"3" type:"string"` + + // Size of the backup in bytes. + BackupSizeBytes *int64 `type:"long"` + + // Backup can be in one of the following states: CREATING, ACTIVE, DELETED. + BackupStatus *string `type:"string" enum:"BackupStatus"` + + // BackupType: + // + // * USER - On-demand backup created by you. + // + // * SYSTEM - On-demand backup automatically created by DynamoDB. + BackupType *string `type:"string" enum:"BackupType"` + + // ARN associated with the table. + TableArn *string `type:"string"` + + // Unique identifier for the table. + TableId *string `type:"string"` + + // Name of the table. + TableName *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s BackupSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BackupSummary) GoString() string { + return s.String() +} + +// SetBackupArn sets the BackupArn field's value. +func (s *BackupSummary) SetBackupArn(v string) *BackupSummary { + s.BackupArn = &v + return s +} + +// SetBackupCreationDateTime sets the BackupCreationDateTime field's value. +func (s *BackupSummary) SetBackupCreationDateTime(v time.Time) *BackupSummary { + s.BackupCreationDateTime = &v + return s +} + +// SetBackupExpiryDateTime sets the BackupExpiryDateTime field's value. +func (s *BackupSummary) SetBackupExpiryDateTime(v time.Time) *BackupSummary { + s.BackupExpiryDateTime = &v + return s +} + +// SetBackupName sets the BackupName field's value. +func (s *BackupSummary) SetBackupName(v string) *BackupSummary { + s.BackupName = &v + return s +} + +// SetBackupSizeBytes sets the BackupSizeBytes field's value. +func (s *BackupSummary) SetBackupSizeBytes(v int64) *BackupSummary { + s.BackupSizeBytes = &v + return s +} + +// SetBackupStatus sets the BackupStatus field's value. func (s *BackupSummary) SetBackupStatus(v string) *BackupSummary { s.BackupStatus = &v return s } +// SetBackupType sets the BackupType field's value. +func (s *BackupSummary) SetBackupType(v string) *BackupSummary { + s.BackupType = &v + return s +} + // SetTableArn sets the TableArn field's value. func (s *BackupSummary) SetTableArn(v string) *BackupSummary { s.TableArn = &v @@ -4584,15 +6257,18 @@ func (s *ConsumedCapacity) SetTableName(v string) *ConsumedCapacity { return s } -// Represents the backup and restore settings on the table when the backup was -// created. +// Represents the continuous backups and point in time recovery settings on +// the table. 
type ContinuousBackupsDescription struct { _ struct{} `type:"structure"` - // ContinuousBackupsStatus can be one of the following states : ENABLED, DISABLED + // ContinuousBackupsStatus can be one of the following states: ENABLED, DISABLED // // ContinuousBackupsStatus is a required field ContinuousBackupsStatus *string `type:"string" required:"true" enum:"ContinuousBackupsStatus"` + + // The description of the point in time recovery settings applied to the table. + PointInTimeRecoveryDescription *PointInTimeRecoveryDescription `type:"structure"` } // String returns the string representation @@ -4611,6 +6287,12 @@ func (s *ContinuousBackupsDescription) SetContinuousBackupsStatus(v string) *Con return s } +// SetPointInTimeRecoveryDescription sets the PointInTimeRecoveryDescription field's value. +func (s *ContinuousBackupsDescription) SetPointInTimeRecoveryDescription(v *PointInTimeRecoveryDescription) *ContinuousBackupsDescription { + s.PointInTimeRecoveryDescription = v + return s +} + type CreateBackupInput struct { _ struct{} `type:"structure"` @@ -5152,11 +6834,6 @@ func (s *CreateTableInput) Validate() error { invalidParams.AddNested("ProvisionedThroughput", err.(request.ErrInvalidParams)) } } - if s.SSESpecification != nil { - if err := s.SSESpecification.Validate(); err != nil { - invalidParams.AddNested("SSESpecification", err.(request.ErrInvalidParams)) - } - } if invalidParams.Len() > 0 { return invalidParams @@ -5842,8 +7519,8 @@ func (s *DescribeBackupOutput) SetBackupDescription(v *BackupDescription) *Descr type DescribeContinuousBackupsInput struct { _ struct{} `type:"structure"` - // Name of the table for which the customer wants to check the backup and restore - // settings. + // Name of the table for which the customer wants to check the continuous backups + // and point in time recovery settings. // // TableName is a required field TableName *string `min:"3" type:"string" required:"true"` @@ -5884,7 +7561,8 @@ func (s *DescribeContinuousBackupsInput) SetTableName(v string) *DescribeContinu type DescribeContinuousBackupsOutput struct { _ struct{} `type:"structure"` - // ContinuousBackupsDescription can be one of the following : ENABLED, DISABLED. + // Represents the continuous backups and point in time recovery settings on + // the table. ContinuousBackupsDescription *ContinuousBackupsDescription `type:"structure"` } @@ -5904,6 +7582,43 @@ func (s *DescribeContinuousBackupsOutput) SetContinuousBackupsDescription(v *Con return s } +type DescribeEndpointsInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DescribeEndpointsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEndpointsInput) GoString() string { + return s.String() +} + +type DescribeEndpointsOutput struct { + _ struct{} `type:"structure"` + + // Endpoints is a required field + Endpoints []*Endpoint `type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeEndpointsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEndpointsOutput) GoString() string { + return s.String() +} + +// SetEndpoints sets the Endpoints field's value. 
+func (s *DescribeEndpointsOutput) SetEndpoints(v []*Endpoint) *DescribeEndpointsOutput { + s.Endpoints = v + return s +} + type DescribeGlobalTableInput struct { _ struct{} `type:"structure"` @@ -5968,6 +7683,79 @@ func (s *DescribeGlobalTableOutput) SetGlobalTableDescription(v *GlobalTableDesc return s } +type DescribeGlobalTableSettingsInput struct { + _ struct{} `type:"structure"` + + // The name of the global table to describe. + // + // GlobalTableName is a required field + GlobalTableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeGlobalTableSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeGlobalTableSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeGlobalTableSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeGlobalTableSettingsInput"} + if s.GlobalTableName == nil { + invalidParams.Add(request.NewErrParamRequired("GlobalTableName")) + } + if s.GlobalTableName != nil && len(*s.GlobalTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *DescribeGlobalTableSettingsInput) SetGlobalTableName(v string) *DescribeGlobalTableSettingsInput { + s.GlobalTableName = &v + return s +} + +type DescribeGlobalTableSettingsOutput struct { + _ struct{} `type:"structure"` + + // The name of the global table. + GlobalTableName *string `min:"3" type:"string"` + + // The region specific settings for the global table. + ReplicaSettings []*ReplicaSettingsDescription `type:"list"` +} + +// String returns the string representation +func (s DescribeGlobalTableSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeGlobalTableSettingsOutput) GoString() string { + return s.String() +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *DescribeGlobalTableSettingsOutput) SetGlobalTableName(v string) *DescribeGlobalTableSettingsOutput { + s.GlobalTableName = &v + return s +} + +// SetReplicaSettings sets the ReplicaSettings field's value. +func (s *DescribeGlobalTableSettingsOutput) SetReplicaSettings(v []*ReplicaSettingsDescription) *DescribeGlobalTableSettingsOutput { + s.ReplicaSettings = v + return s +} + // Represents the input of a DescribeLimits operation. Has no content. type DescribeLimitsInput struct { _ struct{} `type:"structure"` @@ -6170,6 +7958,38 @@ func (s *DescribeTimeToLiveOutput) SetTimeToLiveDescription(v *TimeToLiveDescrip return s } +type Endpoint struct { + _ struct{} `type:"structure"` + + // Address is a required field + Address *string `type:"string" required:"true"` + + // CachePeriodInMinutes is a required field + CachePeriodInMinutes *int64 `type:"long" required:"true"` +} + +// String returns the string representation +func (s Endpoint) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Endpoint) GoString() string { + return s.String() +} + +// SetAddress sets the Address field's value. +func (s *Endpoint) SetAddress(v string) *Endpoint { + s.Address = &v + return s +} + +// SetCachePeriodInMinutes sets the CachePeriodInMinutes field's value. 
+func (s *Endpoint) SetCachePeriodInMinutes(v int64) *Endpoint { + s.CachePeriodInMinutes = &v + return s +} + // Represents a condition to be compared with an attribute value. This condition // can be used with DeleteItem, PutItem or UpdateItem operations; if the comparison // evaluates to true, the operation succeeds; if not, the operation fails. You @@ -6988,7 +8808,7 @@ type GlobalTableDescription struct { _ struct{} `type:"structure"` // The creation time of the global table. - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The unique identifier of the global table. GlobalTableArn *string `type:"string"` @@ -7051,6 +8871,78 @@ func (s *GlobalTableDescription) SetReplicationGroup(v []*ReplicaDescription) *G return s } +// Represents the settings of a global secondary index for a global table that +// will be modified. +type GlobalTableGlobalSecondaryIndexSettingsUpdate struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index. The name must be unique among all + // other indexes on this table. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // AutoScaling settings for managing a global secondary index's write capacity + // units. + ProvisionedWriteCapacityAutoScalingSettingsUpdate *AutoScalingSettingsUpdate `type:"structure"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. + ProvisionedWriteCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s GlobalTableGlobalSecondaryIndexSettingsUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlobalTableGlobalSecondaryIndexSettingsUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GlobalTableGlobalSecondaryIndexSettingsUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GlobalTableGlobalSecondaryIndexSettingsUpdate"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.ProvisionedWriteCapacityUnits != nil && *s.ProvisionedWriteCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("ProvisionedWriteCapacityUnits", 1)) + } + if s.ProvisionedWriteCapacityAutoScalingSettingsUpdate != nil { + if err := s.ProvisionedWriteCapacityAutoScalingSettingsUpdate.Validate(); err != nil { + invalidParams.AddNested("ProvisionedWriteCapacityAutoScalingSettingsUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *GlobalTableGlobalSecondaryIndexSettingsUpdate) SetIndexName(v string) *GlobalTableGlobalSecondaryIndexSettingsUpdate { + s.IndexName = &v + return s +} + +// SetProvisionedWriteCapacityAutoScalingSettingsUpdate sets the ProvisionedWriteCapacityAutoScalingSettingsUpdate field's value. 
+func (s *GlobalTableGlobalSecondaryIndexSettingsUpdate) SetProvisionedWriteCapacityAutoScalingSettingsUpdate(v *AutoScalingSettingsUpdate) *GlobalTableGlobalSecondaryIndexSettingsUpdate { + s.ProvisionedWriteCapacityAutoScalingSettingsUpdate = v + return s +} + +// SetProvisionedWriteCapacityUnits sets the ProvisionedWriteCapacityUnits field's value. +func (s *GlobalTableGlobalSecondaryIndexSettingsUpdate) SetProvisionedWriteCapacityUnits(v int64) *GlobalTableGlobalSecondaryIndexSettingsUpdate { + s.ProvisionedWriteCapacityUnits = &v + return s +} + // Information about item collections, if any, that were affected by the operation. // ItemCollectionMetrics is only returned if the request asked for it. If the // table does not have any local secondary indexes, this information is not @@ -7312,7 +9204,21 @@ func (s *KeysAndAttributes) SetProjectionExpression(v string) *KeysAndAttributes type ListBackupsInput struct { _ struct{} `type:"structure"` - // LastEvaluatedBackupARN returned by the previous ListBackups call. + // The backups from the table specified by BackupType are listed. + // + // Where BackupType can be: + // + // * USER - On-demand backup created by you. + // + // * SYSTEM - On-demand backup automatically created by DynamoDB. + // + // * ALL - All types of on-demand backups (USER and SYSTEM). + BackupType *string `type:"string" enum:"BackupTypeFilter"` + + // LastEvaluatedBackupArn is the ARN of the backup last evaluated when the current + // page of results was returned, inclusive of the current page of results. This + // value may be specified as the ExclusiveStartBackupArn of a new ListBackups + // operation in order to fetch the next page of results. ExclusiveStartBackupArn *string `min:"37" type:"string"` // Maximum number of backups to return at once. @@ -7322,11 +9228,11 @@ type ListBackupsInput struct { TableName *string `min:"3" type:"string"` // Only backups created after this time are listed. TimeRangeLowerBound is inclusive. - TimeRangeLowerBound *time.Time `type:"timestamp" timestampFormat:"unix"` + TimeRangeLowerBound *time.Time `type:"timestamp"` // Only backups created before this time are listed. TimeRangeUpperBound is // exclusive. - TimeRangeUpperBound *time.Time `type:"timestamp" timestampFormat:"unix"` + TimeRangeUpperBound *time.Time `type:"timestamp"` } // String returns the string representation @@ -7358,6 +9264,12 @@ func (s *ListBackupsInput) Validate() error { return nil } +// SetBackupType sets the BackupType field's value. +func (s *ListBackupsInput) SetBackupType(v string) *ListBackupsInput { + s.BackupType = &v + return s +} + // SetExclusiveStartBackupArn sets the ExclusiveStartBackupArn field's value. func (s *ListBackupsInput) SetExclusiveStartBackupArn(v string) *ListBackupsInput { s.ExclusiveStartBackupArn = &v @@ -7394,7 +9306,17 @@ type ListBackupsOutput struct { // List of BackupSummary objects. BackupSummaries []*BackupSummary `type:"list"` - // Last evaluated BackupARN. + // The ARN of the backup last evaluated when the current page of results was + // returned, inclusive of the current page of results. This value may be specified + // as the ExclusiveStartBackupArn of a new ListBackups operation in order to + // fetch the next page of results. + // + // If LastEvaluatedBackupArn is empty, then the last page of results has been + // processed and there are no more results to be retrieved. + // + // If LastEvaluatedBackupArn is not empty, this may or may not indicate there + // is more data to be returned. 
All results are guaranteed to have been returned + // if and only if no value for LastEvaluatedBackupArn is returned. LastEvaluatedBackupArn *string `min:"37" type:"string"` } @@ -7921,25 +9843,114 @@ func (s LocalSecondaryIndexInfo) String() string { } // GoString returns the string representation -func (s LocalSecondaryIndexInfo) GoString() string { +func (s LocalSecondaryIndexInfo) GoString() string { + return s.String() +} + +// SetIndexName sets the IndexName field's value. +func (s *LocalSecondaryIndexInfo) SetIndexName(v string) *LocalSecondaryIndexInfo { + s.IndexName = &v + return s +} + +// SetKeySchema sets the KeySchema field's value. +func (s *LocalSecondaryIndexInfo) SetKeySchema(v []*KeySchemaElement) *LocalSecondaryIndexInfo { + s.KeySchema = v + return s +} + +// SetProjection sets the Projection field's value. +func (s *LocalSecondaryIndexInfo) SetProjection(v *Projection) *LocalSecondaryIndexInfo { + s.Projection = v + return s +} + +// The description of the point in time settings applied to the table. +type PointInTimeRecoveryDescription struct { + _ struct{} `type:"structure"` + + // Specifies the earliest point in time you can restore your table to. It You + // can restore your table to any point in time during the last 35 days. + EarliestRestorableDateTime *time.Time `type:"timestamp"` + + // LatestRestorableDateTime is typically 5 minutes before the current time. + LatestRestorableDateTime *time.Time `type:"timestamp"` + + // The current state of point in time recovery: + // + // * ENABLING - Point in time recovery is being enabled. + // + // * ENABLED - Point in time recovery is enabled. + // + // * DISABLED - Point in time recovery is disabled. + PointInTimeRecoveryStatus *string `type:"string" enum:"PointInTimeRecoveryStatus"` +} + +// String returns the string representation +func (s PointInTimeRecoveryDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PointInTimeRecoveryDescription) GoString() string { + return s.String() +} + +// SetEarliestRestorableDateTime sets the EarliestRestorableDateTime field's value. +func (s *PointInTimeRecoveryDescription) SetEarliestRestorableDateTime(v time.Time) *PointInTimeRecoveryDescription { + s.EarliestRestorableDateTime = &v + return s +} + +// SetLatestRestorableDateTime sets the LatestRestorableDateTime field's value. +func (s *PointInTimeRecoveryDescription) SetLatestRestorableDateTime(v time.Time) *PointInTimeRecoveryDescription { + s.LatestRestorableDateTime = &v + return s +} + +// SetPointInTimeRecoveryStatus sets the PointInTimeRecoveryStatus field's value. +func (s *PointInTimeRecoveryDescription) SetPointInTimeRecoveryStatus(v string) *PointInTimeRecoveryDescription { + s.PointInTimeRecoveryStatus = &v + return s +} + +// Represents the settings used to enable point in time recovery. +type PointInTimeRecoverySpecification struct { + _ struct{} `type:"structure"` + + // Indicates whether point in time recovery is enabled (true) or disabled (false) + // on the table. + // + // PointInTimeRecoveryEnabled is a required field + PointInTimeRecoveryEnabled *bool `type:"boolean" required:"true"` +} + +// String returns the string representation +func (s PointInTimeRecoverySpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PointInTimeRecoverySpecification) GoString() string { return s.String() } -// SetIndexName sets the IndexName field's value. 
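// Illustrative sketch of paging through backups with the new BackupType filter; the table
// name "orders" is a placeholder. The loop follows the LastEvaluatedBackupArn contract
// documented above: an absent value means the final page has been returned.
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func listAllBackups(svc *dynamodb.DynamoDB) error {
	input := &dynamodb.ListBackupsInput{
		TableName:  aws.String("orders"),
		BackupType: aws.String(dynamodb.BackupTypeFilterAll), // USER and SYSTEM backups
		Limit:      aws.Int64(25),
	}
	for {
		out, err := svc.ListBackups(input)
		if err != nil {
			return err
		}
		for _, b := range out.BackupSummaries {
			fmt.Println(aws.StringValue(b.BackupArn))
		}
		if out.LastEvaluatedBackupArn == nil {
			return nil // no more pages
		}
		input.ExclusiveStartBackupArn = out.LastEvaluatedBackupArn
	}
}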
-func (s *LocalSecondaryIndexInfo) SetIndexName(v string) *LocalSecondaryIndexInfo { - s.IndexName = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *PointInTimeRecoverySpecification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PointInTimeRecoverySpecification"} + if s.PointInTimeRecoveryEnabled == nil { + invalidParams.Add(request.NewErrParamRequired("PointInTimeRecoveryEnabled")) + } -// SetKeySchema sets the KeySchema field's value. -func (s *LocalSecondaryIndexInfo) SetKeySchema(v []*KeySchemaElement) *LocalSecondaryIndexInfo { - s.KeySchema = v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetProjection sets the Projection field's value. -func (s *LocalSecondaryIndexInfo) SetProjection(v *Projection) *LocalSecondaryIndexInfo { - s.Projection = v +// SetPointInTimeRecoveryEnabled sets the PointInTimeRecoveryEnabled field's value. +func (s *PointInTimeRecoverySpecification) SetPointInTimeRecoveryEnabled(v bool) *PointInTimeRecoverySpecification { + s.PointInTimeRecoveryEnabled = &v return s } @@ -8079,10 +10090,10 @@ type ProvisionedThroughputDescription struct { _ struct{} `type:"structure"` // The date and time of the last provisioned throughput decrease for this table. - LastDecreaseDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastDecreaseDateTime *time.Time `type:"timestamp"` // The date and time of the last provisioned throughput increase for this table. - LastIncreaseDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastIncreaseDateTime *time.Time `type:"timestamp"` // The number of provisioned throughput decreases for this table during this // UTC calendar day. For current maximums on provisioned throughput decreases, @@ -8717,9 +10728,8 @@ type QueryInput struct { // // Items with the same partition key value are stored in sorted order by sort // key. If the sort key data type is Number, the results are stored in numeric - // order. For type String, the results are stored in order of ASCII character - // code values. For type Binary, DynamoDB treats each byte of the binary data - // as unsigned. + // order. For type String, the results are stored in order of UTF-8 bytes. For + // type Binary, DynamoDB treats each byte of the binary data as unsigned. // // If ScanIndexForward is true, DynamoDB returns the results in the order in // which they are stored (by sort key value). This is the default behavior. @@ -8900,175 +10910,517 @@ func (s *QueryInput) SetLimit(v int64) *QueryInput { return s } -// SetProjectionExpression sets the ProjectionExpression field's value. -func (s *QueryInput) SetProjectionExpression(v string) *QueryInput { - s.ProjectionExpression = &v - return s +// SetProjectionExpression sets the ProjectionExpression field's value. +func (s *QueryInput) SetProjectionExpression(v string) *QueryInput { + s.ProjectionExpression = &v + return s +} + +// SetQueryFilter sets the QueryFilter field's value. +func (s *QueryInput) SetQueryFilter(v map[string]*Condition) *QueryInput { + s.QueryFilter = v + return s +} + +// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. +func (s *QueryInput) SetReturnConsumedCapacity(v string) *QueryInput { + s.ReturnConsumedCapacity = &v + return s +} + +// SetScanIndexForward sets the ScanIndexForward field's value. 
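// Illustrative sketch: enabling point-in-time recovery through the PointInTimeRecoverySpecification
// type added here, via the service's UpdateContinuousBackups call. "orders" is a placeholder table.
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func enablePITR(svc *dynamodb.DynamoDB, table string) error {
	out, err := svc.UpdateContinuousBackups(&dynamodb.UpdateContinuousBackupsInput{
		TableName: aws.String(table),
		PointInTimeRecoverySpecification: &dynamodb.PointInTimeRecoverySpecification{
			PointInTimeRecoveryEnabled: aws.Bool(true),
		},
	})
	if err != nil {
		return err
	}
	if d := out.ContinuousBackupsDescription; d != nil && d.PointInTimeRecoveryDescription != nil {
		fmt.Println("PITR status:", aws.StringValue(d.PointInTimeRecoveryDescription.PointInTimeRecoveryStatus))
	}
	return nil
}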
+func (s *QueryInput) SetScanIndexForward(v bool) *QueryInput { + s.ScanIndexForward = &v + return s +} + +// SetSelect sets the Select field's value. +func (s *QueryInput) SetSelect(v string) *QueryInput { + s.Select = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *QueryInput) SetTableName(v string) *QueryInput { + s.TableName = &v + return s +} + +// Represents the output of a Query operation. +type QueryOutput struct { + _ struct{} `type:"structure"` + + // The capacity units consumed by the Query operation. The data returned includes + // the total provisioned throughput consumed, along with statistics for the + // table and any indexes involved in the operation. ConsumedCapacity is only + // returned if the ReturnConsumedCapacity parameter was specified For more information, + // see Provisioned Throughput (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html) + // in the Amazon DynamoDB Developer Guide. + ConsumedCapacity *ConsumedCapacity `type:"structure"` + + // The number of items in the response. + // + // If you used a QueryFilter in the request, then Count is the number of items + // returned after the filter was applied, and ScannedCount is the number of + // matching items before the filter was applied. + // + // If you did not use a filter in the request, then Count and ScannedCount are + // the same. + Count *int64 `type:"integer"` + + // An array of item attributes that match the query criteria. Each element in + // this array consists of an attribute name and the value for that attribute. + Items []map[string]*AttributeValue `type:"list"` + + // The primary key of the item where the operation stopped, inclusive of the + // previous result set. Use this value to start a new operation, excluding this + // value in the new request. + // + // If LastEvaluatedKey is empty, then the "last page" of results has been processed + // and there is no more data to be retrieved. + // + // If LastEvaluatedKey is not empty, it does not necessarily mean that there + // is more data in the result set. The only way to know when you have reached + // the end of the result set is when LastEvaluatedKey is empty. + LastEvaluatedKey map[string]*AttributeValue `type:"map"` + + // The number of items evaluated, before any QueryFilter is applied. A high + // ScannedCount value with few, or no, Count results indicates an inefficient + // Query operation. For more information, see Count and ScannedCount (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#Count) + // in the Amazon DynamoDB Developer Guide. + // + // If you did not use a filter in the request, then ScannedCount is the same + // as Count. + ScannedCount *int64 `type:"integer"` +} + +// String returns the string representation +func (s QueryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QueryOutput) GoString() string { + return s.String() +} + +// SetConsumedCapacity sets the ConsumedCapacity field's value. +func (s *QueryOutput) SetConsumedCapacity(v *ConsumedCapacity) *QueryOutput { + s.ConsumedCapacity = v + return s +} + +// SetCount sets the Count field's value. +func (s *QueryOutput) SetCount(v int64) *QueryOutput { + s.Count = &v + return s +} + +// SetItems sets the Items field's value. 
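// Illustrative sketch of the LastEvaluatedKey contract described above: keep issuing Query
// calls, feeding each page's LastEvaluatedKey back in as ExclusiveStartKey, until it comes
// back empty. Table, key names, and values are placeholders.
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func queryAll(svc *dynamodb.DynamoDB) ([]map[string]*dynamodb.AttributeValue, error) {
	input := &dynamodb.QueryInput{
		TableName:                aws.String("orders"),
		KeyConditionExpression:   aws.String("#pk = :pk"),
		ExpressionAttributeNames: map[string]*string{"#pk": aws.String("customer_id")},
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":pk": {S: aws.String("c-123")},
		},
		ScanIndexForward: aws.Bool(false), // descending sort-key order
	}
	var items []map[string]*dynamodb.AttributeValue
	for {
		out, err := svc.Query(input)
		if err != nil {
			return nil, err
		}
		items = append(items, out.Items...)
		if len(out.LastEvaluatedKey) == 0 {
			return items, nil // empty LastEvaluatedKey means the result set is complete
		}
		input.ExclusiveStartKey = out.LastEvaluatedKey
	}
}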
+func (s *QueryOutput) SetItems(v []map[string]*AttributeValue) *QueryOutput { + s.Items = v + return s +} + +// SetLastEvaluatedKey sets the LastEvaluatedKey field's value. +func (s *QueryOutput) SetLastEvaluatedKey(v map[string]*AttributeValue) *QueryOutput { + s.LastEvaluatedKey = v + return s +} + +// SetScannedCount sets the ScannedCount field's value. +func (s *QueryOutput) SetScannedCount(v int64) *QueryOutput { + s.ScannedCount = &v + return s +} + +// Represents the properties of a replica. +type Replica struct { + _ struct{} `type:"structure"` + + // The region where the replica needs to be created. + RegionName *string `type:"string"` +} + +// String returns the string representation +func (s Replica) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Replica) GoString() string { + return s.String() +} + +// SetRegionName sets the RegionName field's value. +func (s *Replica) SetRegionName(v string) *Replica { + s.RegionName = &v + return s +} + +// Contains the details of the replica. +type ReplicaDescription struct { + _ struct{} `type:"structure"` + + // The name of the region. + RegionName *string `type:"string"` +} + +// String returns the string representation +func (s ReplicaDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicaDescription) GoString() string { + return s.String() +} + +// SetRegionName sets the RegionName field's value. +func (s *ReplicaDescription) SetRegionName(v string) *ReplicaDescription { + s.RegionName = &v + return s +} + +// Represents the properties of a global secondary index. +type ReplicaGlobalSecondaryIndexSettingsDescription struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index. The name must be unique among all + // other indexes on this table. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // The current status of the global secondary index: + // + // * CREATING - The global secondary index is being created. + // + // * UPDATING - The global secondary index is being updated. + // + // * DELETING - The global secondary index is being deleted. + // + // * ACTIVE - The global secondary index is ready for use. + IndexStatus *string `type:"string" enum:"IndexStatus"` + + // Autoscaling settings for a global secondary index replica's read capacity + // units. + ProvisionedReadCapacityAutoScalingSettings *AutoScalingSettingsDescription `type:"structure"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. + ProvisionedReadCapacityUnits *int64 `min:"1" type:"long"` + + // AutoScaling settings for a global secondary index replica's write capacity + // units. + ProvisionedWriteCapacityAutoScalingSettings *AutoScalingSettingsDescription `type:"structure"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. + ProvisionedWriteCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s ReplicaGlobalSecondaryIndexSettingsDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicaGlobalSecondaryIndexSettingsDescription) GoString() string { + return s.String() +} + +// SetIndexName sets the IndexName field's value. 
+func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetIndexName(v string) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.IndexName = &v + return s +} + +// SetIndexStatus sets the IndexStatus field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetIndexStatus(v string) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.IndexStatus = &v + return s +} + +// SetProvisionedReadCapacityAutoScalingSettings sets the ProvisionedReadCapacityAutoScalingSettings field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetProvisionedReadCapacityAutoScalingSettings(v *AutoScalingSettingsDescription) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.ProvisionedReadCapacityAutoScalingSettings = v + return s +} + +// SetProvisionedReadCapacityUnits sets the ProvisionedReadCapacityUnits field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetProvisionedReadCapacityUnits(v int64) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.ProvisionedReadCapacityUnits = &v + return s +} + +// SetProvisionedWriteCapacityAutoScalingSettings sets the ProvisionedWriteCapacityAutoScalingSettings field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetProvisionedWriteCapacityAutoScalingSettings(v *AutoScalingSettingsDescription) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.ProvisionedWriteCapacityAutoScalingSettings = v + return s +} + +// SetProvisionedWriteCapacityUnits sets the ProvisionedWriteCapacityUnits field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsDescription) SetProvisionedWriteCapacityUnits(v int64) *ReplicaGlobalSecondaryIndexSettingsDescription { + s.ProvisionedWriteCapacityUnits = &v + return s +} + +// Represents the settings of a global secondary index for a global table that +// will be modified. +type ReplicaGlobalSecondaryIndexSettingsUpdate struct { + _ struct{} `type:"structure"` + + // The name of the global secondary index. The name must be unique among all + // other indexes on this table. + // + // IndexName is a required field + IndexName *string `min:"3" type:"string" required:"true"` + + // Autoscaling settings for managing a global secondary index replica's read + // capacity units. + ProvisionedReadCapacityAutoScalingSettingsUpdate *AutoScalingSettingsUpdate `type:"structure"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. + ProvisionedReadCapacityUnits *int64 `min:"1" type:"long"` +} + +// String returns the string representation +func (s ReplicaGlobalSecondaryIndexSettingsUpdate) String() string { + return awsutil.Prettify(s) } -// SetQueryFilter sets the QueryFilter field's value. -func (s *QueryInput) SetQueryFilter(v map[string]*Condition) *QueryInput { - s.QueryFilter = v - return s +// GoString returns the string representation +func (s ReplicaGlobalSecondaryIndexSettingsUpdate) GoString() string { + return s.String() } -// SetReturnConsumedCapacity sets the ReturnConsumedCapacity field's value. -func (s *QueryInput) SetReturnConsumedCapacity(v string) *QueryInput { - s.ReturnConsumedCapacity = &v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ReplicaGlobalSecondaryIndexSettingsUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicaGlobalSecondaryIndexSettingsUpdate"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 3)) + } + if s.ProvisionedReadCapacityUnits != nil && *s.ProvisionedReadCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("ProvisionedReadCapacityUnits", 1)) + } + if s.ProvisionedReadCapacityAutoScalingSettingsUpdate != nil { + if err := s.ProvisionedReadCapacityAutoScalingSettingsUpdate.Validate(); err != nil { + invalidParams.AddNested("ProvisionedReadCapacityAutoScalingSettingsUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetScanIndexForward sets the ScanIndexForward field's value. -func (s *QueryInput) SetScanIndexForward(v bool) *QueryInput { - s.ScanIndexForward = &v +// SetIndexName sets the IndexName field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsUpdate) SetIndexName(v string) *ReplicaGlobalSecondaryIndexSettingsUpdate { + s.IndexName = &v return s } -// SetSelect sets the Select field's value. -func (s *QueryInput) SetSelect(v string) *QueryInput { - s.Select = &v +// SetProvisionedReadCapacityAutoScalingSettingsUpdate sets the ProvisionedReadCapacityAutoScalingSettingsUpdate field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsUpdate) SetProvisionedReadCapacityAutoScalingSettingsUpdate(v *AutoScalingSettingsUpdate) *ReplicaGlobalSecondaryIndexSettingsUpdate { + s.ProvisionedReadCapacityAutoScalingSettingsUpdate = v return s } -// SetTableName sets the TableName field's value. -func (s *QueryInput) SetTableName(v string) *QueryInput { - s.TableName = &v +// SetProvisionedReadCapacityUnits sets the ProvisionedReadCapacityUnits field's value. +func (s *ReplicaGlobalSecondaryIndexSettingsUpdate) SetProvisionedReadCapacityUnits(v int64) *ReplicaGlobalSecondaryIndexSettingsUpdate { + s.ProvisionedReadCapacityUnits = &v return s } -// Represents the output of a Query operation. -type QueryOutput struct { +// Represents the properties of a replica. +type ReplicaSettingsDescription struct { _ struct{} `type:"structure"` - // The capacity units consumed by the Query operation. The data returned includes - // the total provisioned throughput consumed, along with statistics for the - // table and any indexes involved in the operation. ConsumedCapacity is only - // returned if the ReturnConsumedCapacity parameter was specified For more information, - // see Provisioned Throughput (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html) + // The region name of the replica. + // + // RegionName is a required field + RegionName *string `type:"string" required:"true"` + + // Replica global secondary index settings for the global table. + ReplicaGlobalSecondaryIndexSettings []*ReplicaGlobalSecondaryIndexSettingsDescription `type:"list"` + + // Autoscaling settings for a global table replica's read capacity units. + ReplicaProvisionedReadCapacityAutoScalingSettings *AutoScalingSettingsDescription `type:"structure"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. 
For more information, see Specifying + // Read and Write Requirements (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput) // in the Amazon DynamoDB Developer Guide. - ConsumedCapacity *ConsumedCapacity `type:"structure"` + ReplicaProvisionedReadCapacityUnits *int64 `min:"1" type:"long"` - // The number of items in the response. - // - // If you used a QueryFilter in the request, then Count is the number of items - // returned after the filter was applied, and ScannedCount is the number of - // matching items before the filter was applied. - // - // If you did not use a filter in the request, then Count and ScannedCount are - // the same. - Count *int64 `type:"integer"` + // AutoScaling settings for a global table replica's write capacity units. + ReplicaProvisionedWriteCapacityAutoScalingSettings *AutoScalingSettingsDescription `type:"structure"` - // An array of item attributes that match the query criteria. Each element in - // this array consists of an attribute name and the value for that attribute. - Items []map[string]*AttributeValue `type:"list"` + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. For more information, see Specifying Read and Write + // Requirements (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput) + // in the Amazon DynamoDB Developer Guide. + ReplicaProvisionedWriteCapacityUnits *int64 `min:"1" type:"long"` - // The primary key of the item where the operation stopped, inclusive of the - // previous result set. Use this value to start a new operation, excluding this - // value in the new request. + // The current state of the region: // - // If LastEvaluatedKey is empty, then the "last page" of results has been processed - // and there is no more data to be retrieved. + // * CREATING - The region is being created. // - // If LastEvaluatedKey is not empty, it does not necessarily mean that there - // is more data in the result set. The only way to know when you have reached - // the end of the result set is when LastEvaluatedKey is empty. - LastEvaluatedKey map[string]*AttributeValue `type:"map"` - - // The number of items evaluated, before any QueryFilter is applied. A high - // ScannedCount value with few, or no, Count results indicates an inefficient - // Query operation. For more information, see Count and ScannedCount (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#Count) - // in the Amazon DynamoDB Developer Guide. + // * UPDATING - The region is being updated. // - // If you did not use a filter in the request, then ScannedCount is the same - // as Count. - ScannedCount *int64 `type:"integer"` + // * DELETING - The region is being deleted. + // + // * ACTIVE - The region is ready for use. + ReplicaStatus *string `type:"string" enum:"ReplicaStatus"` } // String returns the string representation -func (s QueryOutput) String() string { +func (s ReplicaSettingsDescription) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s QueryOutput) GoString() string { +func (s ReplicaSettingsDescription) GoString() string { return s.String() } -// SetConsumedCapacity sets the ConsumedCapacity field's value. -func (s *QueryOutput) SetConsumedCapacity(v *ConsumedCapacity) *QueryOutput { - s.ConsumedCapacity = v +// SetRegionName sets the RegionName field's value. 
+func (s *ReplicaSettingsDescription) SetRegionName(v string) *ReplicaSettingsDescription { + s.RegionName = &v return s } -// SetCount sets the Count field's value. -func (s *QueryOutput) SetCount(v int64) *QueryOutput { - s.Count = &v +// SetReplicaGlobalSecondaryIndexSettings sets the ReplicaGlobalSecondaryIndexSettings field's value. +func (s *ReplicaSettingsDescription) SetReplicaGlobalSecondaryIndexSettings(v []*ReplicaGlobalSecondaryIndexSettingsDescription) *ReplicaSettingsDescription { + s.ReplicaGlobalSecondaryIndexSettings = v return s } -// SetItems sets the Items field's value. -func (s *QueryOutput) SetItems(v []map[string]*AttributeValue) *QueryOutput { - s.Items = v +// SetReplicaProvisionedReadCapacityAutoScalingSettings sets the ReplicaProvisionedReadCapacityAutoScalingSettings field's value. +func (s *ReplicaSettingsDescription) SetReplicaProvisionedReadCapacityAutoScalingSettings(v *AutoScalingSettingsDescription) *ReplicaSettingsDescription { + s.ReplicaProvisionedReadCapacityAutoScalingSettings = v return s } -// SetLastEvaluatedKey sets the LastEvaluatedKey field's value. -func (s *QueryOutput) SetLastEvaluatedKey(v map[string]*AttributeValue) *QueryOutput { - s.LastEvaluatedKey = v +// SetReplicaProvisionedReadCapacityUnits sets the ReplicaProvisionedReadCapacityUnits field's value. +func (s *ReplicaSettingsDescription) SetReplicaProvisionedReadCapacityUnits(v int64) *ReplicaSettingsDescription { + s.ReplicaProvisionedReadCapacityUnits = &v return s } -// SetScannedCount sets the ScannedCount field's value. -func (s *QueryOutput) SetScannedCount(v int64) *QueryOutput { - s.ScannedCount = &v +// SetReplicaProvisionedWriteCapacityAutoScalingSettings sets the ReplicaProvisionedWriteCapacityAutoScalingSettings field's value. +func (s *ReplicaSettingsDescription) SetReplicaProvisionedWriteCapacityAutoScalingSettings(v *AutoScalingSettingsDescription) *ReplicaSettingsDescription { + s.ReplicaProvisionedWriteCapacityAutoScalingSettings = v return s } -// Represents the properties of a replica. -type Replica struct { +// SetReplicaProvisionedWriteCapacityUnits sets the ReplicaProvisionedWriteCapacityUnits field's value. +func (s *ReplicaSettingsDescription) SetReplicaProvisionedWriteCapacityUnits(v int64) *ReplicaSettingsDescription { + s.ReplicaProvisionedWriteCapacityUnits = &v + return s +} + +// SetReplicaStatus sets the ReplicaStatus field's value. +func (s *ReplicaSettingsDescription) SetReplicaStatus(v string) *ReplicaSettingsDescription { + s.ReplicaStatus = &v + return s +} + +// Represents the settings for a global table in a region that will be modified. +type ReplicaSettingsUpdate struct { _ struct{} `type:"structure"` - // The region where the replica needs to be created. - RegionName *string `type:"string"` + // The region of the replica to be added. + // + // RegionName is a required field + RegionName *string `type:"string" required:"true"` + + // Represents the settings of a global secondary index for a global table that + // will be modified. + ReplicaGlobalSecondaryIndexSettingsUpdate []*ReplicaGlobalSecondaryIndexSettingsUpdate `min:"1" type:"list"` + + // Autoscaling settings for managing a global table replica's read capacity + // units. + ReplicaProvisionedReadCapacityAutoScalingSettingsUpdate *AutoScalingSettingsUpdate `type:"structure"` + + // The maximum number of strongly consistent reads consumed per second before + // DynamoDB returns a ThrottlingException. 
For more information, see Specifying + // Read and Write Requirements (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithTables.html#ProvisionedThroughput) + // in the Amazon DynamoDB Developer Guide. + ReplicaProvisionedReadCapacityUnits *int64 `min:"1" type:"long"` } // String returns the string representation -func (s Replica) String() string { +func (s ReplicaSettingsUpdate) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Replica) GoString() string { +func (s ReplicaSettingsUpdate) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReplicaSettingsUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicaSettingsUpdate"} + if s.RegionName == nil { + invalidParams.Add(request.NewErrParamRequired("RegionName")) + } + if s.ReplicaGlobalSecondaryIndexSettingsUpdate != nil && len(s.ReplicaGlobalSecondaryIndexSettingsUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ReplicaGlobalSecondaryIndexSettingsUpdate", 1)) + } + if s.ReplicaProvisionedReadCapacityUnits != nil && *s.ReplicaProvisionedReadCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("ReplicaProvisionedReadCapacityUnits", 1)) + } + if s.ReplicaGlobalSecondaryIndexSettingsUpdate != nil { + for i, v := range s.ReplicaGlobalSecondaryIndexSettingsUpdate { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReplicaGlobalSecondaryIndexSettingsUpdate", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ReplicaProvisionedReadCapacityAutoScalingSettingsUpdate != nil { + if err := s.ReplicaProvisionedReadCapacityAutoScalingSettingsUpdate.Validate(); err != nil { + invalidParams.AddNested("ReplicaProvisionedReadCapacityAutoScalingSettingsUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetRegionName sets the RegionName field's value. -func (s *Replica) SetRegionName(v string) *Replica { +func (s *ReplicaSettingsUpdate) SetRegionName(v string) *ReplicaSettingsUpdate { s.RegionName = &v return s } -// Contains the details of the replica. -type ReplicaDescription struct { - _ struct{} `type:"structure"` - - // The name of the region. - RegionName *string `type:"string"` -} - -// String returns the string representation -func (s ReplicaDescription) String() string { - return awsutil.Prettify(s) +// SetReplicaGlobalSecondaryIndexSettingsUpdate sets the ReplicaGlobalSecondaryIndexSettingsUpdate field's value. +func (s *ReplicaSettingsUpdate) SetReplicaGlobalSecondaryIndexSettingsUpdate(v []*ReplicaGlobalSecondaryIndexSettingsUpdate) *ReplicaSettingsUpdate { + s.ReplicaGlobalSecondaryIndexSettingsUpdate = v + return s } -// GoString returns the string representation -func (s ReplicaDescription) GoString() string { - return s.String() +// SetReplicaProvisionedReadCapacityAutoScalingSettingsUpdate sets the ReplicaProvisionedReadCapacityAutoScalingSettingsUpdate field's value. +func (s *ReplicaSettingsUpdate) SetReplicaProvisionedReadCapacityAutoScalingSettingsUpdate(v *AutoScalingSettingsUpdate) *ReplicaSettingsUpdate { + s.ReplicaProvisionedReadCapacityAutoScalingSettingsUpdate = v + return s } -// SetRegionName sets the RegionName field's value. 
-func (s *ReplicaDescription) SetRegionName(v string) *ReplicaDescription { - s.RegionName = &v +// SetReplicaProvisionedReadCapacityUnits sets the ReplicaProvisionedReadCapacityUnits field's value. +func (s *ReplicaSettingsUpdate) SetReplicaProvisionedReadCapacityUnits(v int64) *ReplicaSettingsUpdate { + s.ReplicaProvisionedReadCapacityUnits = &v return s } @@ -9138,7 +11490,7 @@ type RestoreSummary struct { // Point in time or source backup time. // // RestoreDateTime is a required field - RestoreDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + RestoreDateTime *time.Time `type:"timestamp" required:"true"` // Indicates if a restore is in progress or not. // @@ -9186,38 +11538,126 @@ func (s *RestoreSummary) SetSourceTableArn(v string) *RestoreSummary { return s } -type RestoreTableFromBackupInput struct { +type RestoreTableFromBackupInput struct { + _ struct{} `type:"structure"` + + // The ARN associated with the backup. + // + // BackupArn is a required field + BackupArn *string `min:"37" type:"string" required:"true"` + + // The name of the new table to which the backup must be restored. + // + // TargetTableName is a required field + TargetTableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s RestoreTableFromBackupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreTableFromBackupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RestoreTableFromBackupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreTableFromBackupInput"} + if s.BackupArn == nil { + invalidParams.Add(request.NewErrParamRequired("BackupArn")) + } + if s.BackupArn != nil && len(*s.BackupArn) < 37 { + invalidParams.Add(request.NewErrParamMinLen("BackupArn", 37)) + } + if s.TargetTableName == nil { + invalidParams.Add(request.NewErrParamRequired("TargetTableName")) + } + if s.TargetTableName != nil && len(*s.TargetTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TargetTableName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBackupArn sets the BackupArn field's value. +func (s *RestoreTableFromBackupInput) SetBackupArn(v string) *RestoreTableFromBackupInput { + s.BackupArn = &v + return s +} + +// SetTargetTableName sets the TargetTableName field's value. +func (s *RestoreTableFromBackupInput) SetTargetTableName(v string) *RestoreTableFromBackupInput { + s.TargetTableName = &v + return s +} + +type RestoreTableFromBackupOutput struct { + _ struct{} `type:"structure"` + + // The description of the table created from an existing backup. + TableDescription *TableDescription `type:"structure"` +} + +// String returns the string representation +func (s RestoreTableFromBackupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreTableFromBackupOutput) GoString() string { + return s.String() +} + +// SetTableDescription sets the TableDescription field's value. +func (s *RestoreTableFromBackupOutput) SetTableDescription(v *TableDescription) *RestoreTableFromBackupOutput { + s.TableDescription = v + return s +} + +type RestoreTableToPointInTimeInput struct { _ struct{} `type:"structure"` - // The ARN associated with the backup. + // Time in the past to restore the table to. 
+ RestoreDateTime *time.Time `type:"timestamp"` + + // Name of the source table that is being restored. // - // BackupArn is a required field - BackupArn *string `min:"37" type:"string" required:"true"` + // SourceTableName is a required field + SourceTableName *string `min:"3" type:"string" required:"true"` - // The name of the new table to which the backup must be restored. + // The name of the new table to which it must be restored to. // // TargetTableName is a required field TargetTableName *string `min:"3" type:"string" required:"true"` + + // Restore the table to the latest possible time. LatestRestorableDateTime is + // typically 5 minutes before the current time. + UseLatestRestorableTime *bool `type:"boolean"` } // String returns the string representation -func (s RestoreTableFromBackupInput) String() string { +func (s RestoreTableToPointInTimeInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RestoreTableFromBackupInput) GoString() string { +func (s RestoreTableToPointInTimeInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *RestoreTableFromBackupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RestoreTableFromBackupInput"} - if s.BackupArn == nil { - invalidParams.Add(request.NewErrParamRequired("BackupArn")) +func (s *RestoreTableToPointInTimeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreTableToPointInTimeInput"} + if s.SourceTableName == nil { + invalidParams.Add(request.NewErrParamRequired("SourceTableName")) } - if s.BackupArn != nil && len(*s.BackupArn) < 37 { - invalidParams.Add(request.NewErrParamMinLen("BackupArn", 37)) + if s.SourceTableName != nil && len(*s.SourceTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("SourceTableName", 3)) } if s.TargetTableName == nil { invalidParams.Add(request.NewErrParamRequired("TargetTableName")) @@ -9232,37 +11672,49 @@ func (s *RestoreTableFromBackupInput) Validate() error { return nil } -// SetBackupArn sets the BackupArn field's value. -func (s *RestoreTableFromBackupInput) SetBackupArn(v string) *RestoreTableFromBackupInput { - s.BackupArn = &v +// SetRestoreDateTime sets the RestoreDateTime field's value. +func (s *RestoreTableToPointInTimeInput) SetRestoreDateTime(v time.Time) *RestoreTableToPointInTimeInput { + s.RestoreDateTime = &v + return s +} + +// SetSourceTableName sets the SourceTableName field's value. +func (s *RestoreTableToPointInTimeInput) SetSourceTableName(v string) *RestoreTableToPointInTimeInput { + s.SourceTableName = &v return s } // SetTargetTableName sets the TargetTableName field's value. -func (s *RestoreTableFromBackupInput) SetTargetTableName(v string) *RestoreTableFromBackupInput { +func (s *RestoreTableToPointInTimeInput) SetTargetTableName(v string) *RestoreTableToPointInTimeInput { s.TargetTableName = &v return s } -type RestoreTableFromBackupOutput struct { +// SetUseLatestRestorableTime sets the UseLatestRestorableTime field's value. +func (s *RestoreTableToPointInTimeInput) SetUseLatestRestorableTime(v bool) *RestoreTableToPointInTimeInput { + s.UseLatestRestorableTime = &v + return s +} + +type RestoreTableToPointInTimeOutput struct { _ struct{} `type:"structure"` - // The description of the table created from an existing backup. + // Represents the properties of a table. 
TableDescription *TableDescription `type:"structure"` } // String returns the string representation -func (s RestoreTableFromBackupOutput) String() string { +func (s RestoreTableToPointInTimeOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RestoreTableFromBackupOutput) GoString() string { +func (s RestoreTableToPointInTimeOutput) GoString() string { return s.String() } // SetTableDescription sets the TableDescription field's value. -func (s *RestoreTableFromBackupOutput) SetTableDescription(v *TableDescription) *RestoreTableFromBackupOutput { +func (s *RestoreTableToPointInTimeOutput) SetTableDescription(v *TableDescription) *RestoreTableToPointInTimeOutput { s.TableDescription = v return s } @@ -9271,6 +11723,16 @@ func (s *RestoreTableFromBackupOutput) SetTableDescription(v *TableDescription) type SSEDescription struct { _ struct{} `type:"structure"` + // The KMS master key ARN used for the KMS encryption. + KMSMasterKeyArn *string `type:"string"` + + // Server-side encryption type: + // + // * AES256 - Server-side encryption which uses the AES256 algorithm. + // + // * KMS - Server-side encryption which uses AWS Key Management Service. + SSEType *string `type:"string" enum:"SSEType"` + // The current state of server-side encryption: // // * ENABLING - Server-side encryption is being enabled. @@ -9280,6 +11742,8 @@ type SSEDescription struct { // * DISABLING - Server-side encryption is being disabled. // // * DISABLED - Server-side encryption is disabled. + // + // * UPDATING - Server-side encryption is being updated. Status *string `type:"string" enum:"SSEStatus"` } @@ -9293,6 +11757,18 @@ func (s SSEDescription) GoString() string { return s.String() } +// SetKMSMasterKeyArn sets the KMSMasterKeyArn field's value. +func (s *SSEDescription) SetKMSMasterKeyArn(v string) *SSEDescription { + s.KMSMasterKeyArn = &v + return s +} + +// SetSSEType sets the SSEType field's value. +func (s *SSEDescription) SetSSEType(v string) *SSEDescription { + s.SSEType = &v + return s +} + // SetStatus sets the Status field's value. func (s *SSEDescription) SetStatus(v string) *SSEDescription { s.Status = &v @@ -9305,9 +11781,21 @@ type SSESpecification struct { // Indicates whether server-side encryption is enabled (true) or disabled (false) // on the table. + Enabled *bool `type:"boolean"` + + // The KMS Master Key (CMK) which should be used for the KMS encryption. To + // specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or + // alias ARN. Note that you should only provide this parameter if the key is + // different from the default DynamoDB KMS Master Key alias/aws/dynamodb. + KMSMasterKeyId *string `type:"string"` + + // Server-side encryption type: // - // Enabled is a required field - Enabled *bool `type:"boolean" required:"true"` + // * AES256 - Server-side encryption which uses the AES256 algorithm. + // + // * KMS - Server-side encryption which uses AWS Key Management Service. + // (default) + SSEType *string `type:"string" enum:"SSEType"` } // String returns the string representation @@ -9320,25 +11808,24 @@ func (s SSESpecification) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *SSESpecification) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SSESpecification"} - if s.Enabled == nil { - invalidParams.Add(request.NewErrParamRequired("Enabled")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - // SetEnabled sets the Enabled field's value. func (s *SSESpecification) SetEnabled(v bool) *SSESpecification { s.Enabled = &v return s } +// SetKMSMasterKeyId sets the KMSMasterKeyId field's value. +func (s *SSESpecification) SetKMSMasterKeyId(v string) *SSESpecification { + s.KMSMasterKeyId = &v + return s +} + +// SetSSEType sets the SSEType field's value. +func (s *SSESpecification) SetSSEType(v string) *SSESpecification { + s.SSEType = &v + return s +} + // Represents the input of a Scan operation. type ScanInput struct { _ struct{} `type:"structure"` @@ -9839,7 +12326,7 @@ type SourceTableDetails struct { // Time when the source table was created. // // TableCreationDateTime is a required field - TableCreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + TableCreationDateTime *time.Time `type:"timestamp" required:"true"` // Unique identifier for the table for which the backup was created. // @@ -10043,7 +12530,7 @@ type TableDescription struct { // The date and time when the table was created, in UNIX epoch time (http://www.epochconverter.com/) // format. - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The global secondary indexes, if any, on the table. Each index is scoped // to a given partition key value. Each element is composed of: @@ -10656,6 +13143,90 @@ func (s UntagResourceOutput) GoString() string { return s.String() } +type UpdateContinuousBackupsInput struct { + _ struct{} `type:"structure"` + + // Represents the settings used to enable point in time recovery. + // + // PointInTimeRecoverySpecification is a required field + PointInTimeRecoverySpecification *PointInTimeRecoverySpecification `type:"structure" required:"true"` + + // The name of the table. + // + // TableName is a required field + TableName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateContinuousBackupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateContinuousBackupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateContinuousBackupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateContinuousBackupsInput"} + if s.PointInTimeRecoverySpecification == nil { + invalidParams.Add(request.NewErrParamRequired("PointInTimeRecoverySpecification")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 3)) + } + if s.PointInTimeRecoverySpecification != nil { + if err := s.PointInTimeRecoverySpecification.Validate(); err != nil { + invalidParams.AddNested("PointInTimeRecoverySpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPointInTimeRecoverySpecification sets the PointInTimeRecoverySpecification field's value. 
+func (s *UpdateContinuousBackupsInput) SetPointInTimeRecoverySpecification(v *PointInTimeRecoverySpecification) *UpdateContinuousBackupsInput { + s.PointInTimeRecoverySpecification = v + return s +} + +// SetTableName sets the TableName field's value. +func (s *UpdateContinuousBackupsInput) SetTableName(v string) *UpdateContinuousBackupsInput { + s.TableName = &v + return s +} + +type UpdateContinuousBackupsOutput struct { + _ struct{} `type:"structure"` + + // Represents the continuous backups and point in time recovery settings on + // the table. + ContinuousBackupsDescription *ContinuousBackupsDescription `type:"structure"` +} + +// String returns the string representation +func (s UpdateContinuousBackupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateContinuousBackupsOutput) GoString() string { + return s.String() +} + +// SetContinuousBackupsDescription sets the ContinuousBackupsDescription field's value. +func (s *UpdateContinuousBackupsOutput) SetContinuousBackupsDescription(v *ContinuousBackupsDescription) *UpdateContinuousBackupsOutput { + s.ContinuousBackupsDescription = v + return s +} + // Represents the new provisioned throughput settings to be applied to a global // secondary index. type UpdateGlobalSecondaryIndexAction struct { @@ -10811,6 +13382,152 @@ func (s *UpdateGlobalTableOutput) SetGlobalTableDescription(v *GlobalTableDescri return s } +type UpdateGlobalTableSettingsInput struct { + _ struct{} `type:"structure"` + + // Represents the settings of a global secondary index for a global table that + // will be modified. + GlobalTableGlobalSecondaryIndexSettingsUpdate []*GlobalTableGlobalSecondaryIndexSettingsUpdate `min:"1" type:"list"` + + // The name of the global table + // + // GlobalTableName is a required field + GlobalTableName *string `min:"3" type:"string" required:"true"` + + // AutoScaling settings for managing provisioned write capacity for the global + // table. + GlobalTableProvisionedWriteCapacityAutoScalingSettingsUpdate *AutoScalingSettingsUpdate `type:"structure"` + + // The maximum number of writes consumed per second before DynamoDB returns + // a ThrottlingException. + GlobalTableProvisionedWriteCapacityUnits *int64 `min:"1" type:"long"` + + // Represents the settings for a global table in a region that will be modified. + ReplicaSettingsUpdate []*ReplicaSettingsUpdate `min:"1" type:"list"` +} + +// String returns the string representation +func (s UpdateGlobalTableSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGlobalTableSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateGlobalTableSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGlobalTableSettingsInput"} + if s.GlobalTableGlobalSecondaryIndexSettingsUpdate != nil && len(s.GlobalTableGlobalSecondaryIndexSettingsUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableGlobalSecondaryIndexSettingsUpdate", 1)) + } + if s.GlobalTableName == nil { + invalidParams.Add(request.NewErrParamRequired("GlobalTableName")) + } + if s.GlobalTableName != nil && len(*s.GlobalTableName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GlobalTableName", 3)) + } + if s.GlobalTableProvisionedWriteCapacityUnits != nil && *s.GlobalTableProvisionedWriteCapacityUnits < 1 { + invalidParams.Add(request.NewErrParamMinValue("GlobalTableProvisionedWriteCapacityUnits", 1)) + } + if s.ReplicaSettingsUpdate != nil && len(s.ReplicaSettingsUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ReplicaSettingsUpdate", 1)) + } + if s.GlobalTableGlobalSecondaryIndexSettingsUpdate != nil { + for i, v := range s.GlobalTableGlobalSecondaryIndexSettingsUpdate { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "GlobalTableGlobalSecondaryIndexSettingsUpdate", i), err.(request.ErrInvalidParams)) + } + } + } + if s.GlobalTableProvisionedWriteCapacityAutoScalingSettingsUpdate != nil { + if err := s.GlobalTableProvisionedWriteCapacityAutoScalingSettingsUpdate.Validate(); err != nil { + invalidParams.AddNested("GlobalTableProvisionedWriteCapacityAutoScalingSettingsUpdate", err.(request.ErrInvalidParams)) + } + } + if s.ReplicaSettingsUpdate != nil { + for i, v := range s.ReplicaSettingsUpdate { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReplicaSettingsUpdate", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGlobalTableGlobalSecondaryIndexSettingsUpdate sets the GlobalTableGlobalSecondaryIndexSettingsUpdate field's value. +func (s *UpdateGlobalTableSettingsInput) SetGlobalTableGlobalSecondaryIndexSettingsUpdate(v []*GlobalTableGlobalSecondaryIndexSettingsUpdate) *UpdateGlobalTableSettingsInput { + s.GlobalTableGlobalSecondaryIndexSettingsUpdate = v + return s +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *UpdateGlobalTableSettingsInput) SetGlobalTableName(v string) *UpdateGlobalTableSettingsInput { + s.GlobalTableName = &v + return s +} + +// SetGlobalTableProvisionedWriteCapacityAutoScalingSettingsUpdate sets the GlobalTableProvisionedWriteCapacityAutoScalingSettingsUpdate field's value. +func (s *UpdateGlobalTableSettingsInput) SetGlobalTableProvisionedWriteCapacityAutoScalingSettingsUpdate(v *AutoScalingSettingsUpdate) *UpdateGlobalTableSettingsInput { + s.GlobalTableProvisionedWriteCapacityAutoScalingSettingsUpdate = v + return s +} + +// SetGlobalTableProvisionedWriteCapacityUnits sets the GlobalTableProvisionedWriteCapacityUnits field's value. +func (s *UpdateGlobalTableSettingsInput) SetGlobalTableProvisionedWriteCapacityUnits(v int64) *UpdateGlobalTableSettingsInput { + s.GlobalTableProvisionedWriteCapacityUnits = &v + return s +} + +// SetReplicaSettingsUpdate sets the ReplicaSettingsUpdate field's value. 
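// Illustrative sketch of adjusting per-region settings on a global table with the new
// UpdateGlobalTableSettings operation; the global table name, region, and capacity figures
// are placeholders.
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func tuneGlobalTable(svc *dynamodb.DynamoDB) error {
	_, err := svc.UpdateGlobalTableSettings(&dynamodb.UpdateGlobalTableSettingsInput{
		GlobalTableName:                          aws.String("orders-global"),
		GlobalTableProvisionedWriteCapacityUnits: aws.Int64(50),
		ReplicaSettingsUpdate: []*dynamodb.ReplicaSettingsUpdate{
			{
				RegionName:                          aws.String("us-west-2"),
				ReplicaProvisionedReadCapacityUnits: aws.Int64(25),
			},
		},
	})
	return err
}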
+func (s *UpdateGlobalTableSettingsInput) SetReplicaSettingsUpdate(v []*ReplicaSettingsUpdate) *UpdateGlobalTableSettingsInput { + s.ReplicaSettingsUpdate = v + return s +} + +type UpdateGlobalTableSettingsOutput struct { + _ struct{} `type:"structure"` + + // The name of the global table. + GlobalTableName *string `min:"3" type:"string"` + + // The region specific settings for the global table. + ReplicaSettings []*ReplicaSettingsDescription `type:"list"` +} + +// String returns the string representation +func (s UpdateGlobalTableSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGlobalTableSettingsOutput) GoString() string { + return s.String() +} + +// SetGlobalTableName sets the GlobalTableName field's value. +func (s *UpdateGlobalTableSettingsOutput) SetGlobalTableName(v string) *UpdateGlobalTableSettingsOutput { + s.GlobalTableName = &v + return s +} + +// SetReplicaSettings sets the ReplicaSettings field's value. +func (s *UpdateGlobalTableSettingsOutput) SetReplicaSettings(v []*ReplicaSettingsDescription) *UpdateGlobalTableSettingsOutput { + s.ReplicaSettings = v + return s +} + // Represents the input of an UpdateItem operation. type UpdateItemInput struct { _ struct{} `type:"structure"` @@ -11243,6 +13960,9 @@ type UpdateTableInput struct { // The new provisioned throughput settings for the specified table or index. ProvisionedThroughput *ProvisionedThroughput `type:"structure"` + // The new server-side encryption settings for the specified table. + SSESpecification *SSESpecification `type:"structure"` + // Represents the DynamoDB Streams configuration for the table. // // You will receive a ResourceInUseException if you attempt to enable a stream @@ -11325,6 +14045,12 @@ func (s *UpdateTableInput) SetProvisionedThroughput(v *ProvisionedThroughput) *U return s } +// SetSSESpecification sets the SSESpecification field's value. +func (s *UpdateTableInput) SetSSESpecification(v *SSESpecification) *UpdateTableInput { + s.SSESpecification = v + return s +} + // SetStreamSpecification sets the StreamSpecification field's value. 
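// Illustrative sketch of the new SSESpecification support on UpdateTable: switching a table
// to KMS-managed server-side encryption. The table name and CMK alias are placeholders;
// omit KMSMasterKeyId to use the default alias/aws/dynamodb key.
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func switchToKMSEncryption(svc *dynamodb.DynamoDB) error {
	_, err := svc.UpdateTable(&dynamodb.UpdateTableInput{
		TableName: aws.String("orders"),
		SSESpecification: &dynamodb.SSESpecification{
			Enabled:        aws.Bool(true),
			SSEType:        aws.String(dynamodb.SSETypeKms),
			KMSMasterKeyId: aws.String("alias/orders-table-key"), // hypothetical CMK alias
		},
	})
	return err
}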
func (s *UpdateTableInput) SetStreamSpecification(v *StreamSpecification) *UpdateTableInput { s.StreamSpecification = v @@ -11504,6 +14230,25 @@ const ( BackupStatusAvailable = "AVAILABLE" ) +const ( + // BackupTypeUser is a BackupType enum value + BackupTypeUser = "USER" + + // BackupTypeSystem is a BackupType enum value + BackupTypeSystem = "SYSTEM" +) + +const ( + // BackupTypeFilterUser is a BackupTypeFilter enum value + BackupTypeFilterUser = "USER" + + // BackupTypeFilterSystem is a BackupTypeFilter enum value + BackupTypeFilterSystem = "SYSTEM" + + // BackupTypeFilterAll is a BackupTypeFilter enum value + BackupTypeFilterAll = "ALL" +) + const ( // ComparisonOperatorEq is a ComparisonOperator enum value ComparisonOperatorEq = "EQ" @@ -11597,6 +14342,14 @@ const ( KeyTypeRange = "RANGE" ) +const ( + // PointInTimeRecoveryStatusEnabled is a PointInTimeRecoveryStatus enum value + PointInTimeRecoveryStatusEnabled = "ENABLED" + + // PointInTimeRecoveryStatusDisabled is a PointInTimeRecoveryStatus enum value + PointInTimeRecoveryStatusDisabled = "DISABLED" +) + const ( // ProjectionTypeAll is a ProjectionType enum value ProjectionTypeAll = "ALL" @@ -11608,6 +14361,20 @@ const ( ProjectionTypeInclude = "INCLUDE" ) +const ( + // ReplicaStatusCreating is a ReplicaStatus enum value + ReplicaStatusCreating = "CREATING" + + // ReplicaStatusUpdating is a ReplicaStatus enum value + ReplicaStatusUpdating = "UPDATING" + + // ReplicaStatusDeleting is a ReplicaStatus enum value + ReplicaStatusDeleting = "DELETING" + + // ReplicaStatusActive is a ReplicaStatus enum value + ReplicaStatusActive = "ACTIVE" +) + // Determines the level of detail about provisioned throughput consumption that // is returned in the response: // @@ -11671,6 +14438,17 @@ const ( // SSEStatusDisabled is a SSEStatus enum value SSEStatusDisabled = "DISABLED" + + // SSEStatusUpdating is a SSEStatus enum value + SSEStatusUpdating = "UPDATING" +) + +const ( + // SSETypeAes256 is a SSEType enum value + SSETypeAes256 = "AES256" + + // SSETypeKms is a SSEType enum value + SSETypeKms = "KMS" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go index 4f898d96737..4abbbe6632f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go @@ -41,12 +41,25 @@ const ( // The specified global table does not exist. ErrCodeGlobalTableNotFoundException = "GlobalTableNotFoundException" + // ErrCodeIndexNotFoundException for service response error code + // "IndexNotFoundException". + // + // The operation tried to access a nonexistent index. + ErrCodeIndexNotFoundException = "IndexNotFoundException" + // ErrCodeInternalServerError for service response error code // "InternalServerError". // // An error occurred on the server side. ErrCodeInternalServerError = "InternalServerError" + // ErrCodeInvalidRestoreTimeException for service response error code + // "InvalidRestoreTimeException". + // + // An invalid restore time was specified. RestoreDateTime must be between EarliestRestorableDateTime + // and LatestRestorableDateTime. + ErrCodeInvalidRestoreTimeException = "InvalidRestoreTimeException" + // ErrCodeItemCollectionSizeLimitExceededException for service response error code // "ItemCollectionSizeLimitExceededException". // @@ -57,17 +70,11 @@ const ( // ErrCodeLimitExceededException for service response error code // "LimitExceededException". 
// - // Up to 50 CreateBackup operations are allowed per second, per account. There - // is no limit to the number of daily on-demand backups that can be taken. + // There is no limit to the number of daily on-demand backups that can be taken. // // Up to 10 simultaneous table operations are allowed per account. These operations - // include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, and RestoreTableFromBackup. - // - // For tables with secondary indexes, only one of those tables can be in the - // CREATING state at any point in time. Do not attempt to create more than one - // such table simultaneously. - // - // The total limit of tables in the ACTIVE state is 250. + // include CreateTable, UpdateTable, DeleteTable,UpdateTimeToLive, RestoreTableFromBackup, + // and RestoreTableToPointInTime. // // For tables with secondary indexes, only one of those tables can be in the // CREATING state at any point in time. Do not attempt to create more than one @@ -76,6 +83,12 @@ const ( // The total limit of tables in the ACTIVE state is 250. ErrCodeLimitExceededException = "LimitExceededException" + // ErrCodePointInTimeRecoveryUnavailableException for service response error code + // "PointInTimeRecoveryUnavailableException". + // + // Point in time recovery has not yet been enabled for this source table. + ErrCodePointInTimeRecoveryUnavailableException = "PointInTimeRecoveryUnavailableException" + // ErrCodeProvisionedThroughputExceededException for service response error code // "ProvisionedThroughputExceededException". // @@ -117,19 +130,19 @@ const ( // ErrCodeTableAlreadyExistsException for service response error code // "TableAlreadyExistsException". // - // A table with the name already exists. + // A target table with the specified name already exists. ErrCodeTableAlreadyExistsException = "TableAlreadyExistsException" // ErrCodeTableInUseException for service response error code // "TableInUseException". // - // A table by that name is either being created or deleted. + // A target table with the specified name is either being created or deleted. ErrCodeTableInUseException = "TableInUseException" // ErrCodeTableNotFoundException for service response error code // "TableNotFoundException". // - // A table with the name TableName does not currently exist within the subscriber's - // account. + // A source table with the name TableName does not currently exist within the + // subscriber's account. ErrCodeTableNotFoundException = "TableNotFoundException" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/service.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/service.go index 80dcd19fd97..edcb5b8598e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/service.go @@ -6,6 +6,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/client" "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/crr" "github.com/aws/aws-sdk-go/aws/request" "github.com/aws/aws-sdk-go/aws/signer/v4" "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" @@ -19,6 +20,7 @@ import ( // modify mutate any of the struct's properties though. 
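// Illustrative sketch of dispatching on the restore-related error codes introduced in this
// update; the input value is assumed to be a previously built RestoreTableToPointInTimeInput.
package example

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func restoreWithDiagnostics(svc *dynamodb.DynamoDB, input *dynamodb.RestoreTableToPointInTimeInput) error {
	_, err := svc.RestoreTableToPointInTime(input)
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case dynamodb.ErrCodeInvalidRestoreTimeException:
			log.Println("RestoreDateTime falls outside the EarliestRestorableDateTime/LatestRestorableDateTime window")
		case dynamodb.ErrCodePointInTimeRecoveryUnavailableException:
			log.Println("point in time recovery is not enabled on the source table")
		case dynamodb.ErrCodeTableAlreadyExistsException:
			log.Println("the target table name already exists")
		}
	}
	return err
}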
type DynamoDB struct { *client.Client + endpointCache *crr.EndpointCache } // Used for custom client initialization logic @@ -29,8 +31,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "dynamodb" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "dynamodb" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "DynamoDB" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the DynamoDB client with a session. @@ -55,6 +58,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, @@ -65,6 +69,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio handlers, ), } + svc.endpointCache = crr.NewEndpointCache(10) // Handlers svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) diff --git a/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go b/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go index 3fe0cf0ca79..bf2962f21a1 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go @@ -17,8 +17,8 @@ const opAcceptReservedInstancesExchangeQuote = "AcceptReservedInstancesExchangeQ // AcceptReservedInstancesExchangeQuoteRequest generates a "aws/request.Request" representing the // client's request for the AcceptReservedInstancesExchangeQuote operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -92,8 +92,8 @@ const opAcceptVpcEndpointConnections = "AcceptVpcEndpointConnections" // AcceptVpcEndpointConnectionsRequest generates a "aws/request.Request" representing the // client's request for the AcceptVpcEndpointConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -167,8 +167,8 @@ const opAcceptVpcPeeringConnection = "AcceptVpcPeeringConnection" // AcceptVpcPeeringConnectionRequest generates a "aws/request.Request" representing the // client's request for the AcceptVpcPeeringConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
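The DynamoDB changes above add error codes for point-in-time recovery and restores, and the generated doc comments describe handling them with runtime type assertions on `awserr.Error`. Below is a minimal sketch of that pattern applied to the newly added codes; the `classifyRestoreError` helper and the error fabricated with `awserr.New` are illustrative only, and the canonical `github.com` import path is used rather than the vendored one.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// classifyRestoreError demonstrates the awserr type-assertion pattern from the
// generated doc comments, switching on the error codes added in this update.
func classifyRestoreError(err error) {
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case dynamodb.ErrCodePointInTimeRecoveryUnavailableException:
			fmt.Println("enable point in time recovery on the source table first:", aerr.Message())
		case dynamodb.ErrCodeInvalidRestoreTimeException:
			fmt.Println("RestoreDateTime must fall between EarliestRestorableDateTime and LatestRestorableDateTime:", aerr.Message())
		case dynamodb.ErrCodeTableAlreadyExistsException:
			fmt.Println("a target table with that name already exists:", aerr.Message())
		default:
			fmt.Println(aerr.Code(), aerr.Message())
		}
		return
	}
	fmt.Println(err)
}

func main() {
	// Fabricate an error purely for illustration; a real one would come back
	// from a RestoreTable* call.
	err := awserr.New(dynamodb.ErrCodePointInTimeRecoveryUnavailableException,
		"Point in time recovery has not yet been enabled for this source table.", nil)
	classifyRestoreError(err)
}
```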
@@ -243,12 +243,101 @@ func (c *EC2) AcceptVpcPeeringConnectionWithContext(ctx aws.Context, input *Acce return out, req.Send() } +const opAdvertiseByoipCidr = "AdvertiseByoipCidr" + +// AdvertiseByoipCidrRequest generates a "aws/request.Request" representing the +// client's request for the AdvertiseByoipCidr operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AdvertiseByoipCidr for more information on using the AdvertiseByoipCidr +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AdvertiseByoipCidrRequest method. +// req, resp := client.AdvertiseByoipCidrRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AdvertiseByoipCidr +func (c *EC2) AdvertiseByoipCidrRequest(input *AdvertiseByoipCidrInput) (req *request.Request, output *AdvertiseByoipCidrOutput) { + op := &request.Operation{ + Name: opAdvertiseByoipCidr, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AdvertiseByoipCidrInput{} + } + + output = &AdvertiseByoipCidrOutput{} + req = c.newRequest(op, input, output) + return +} + +// AdvertiseByoipCidr API operation for Amazon Elastic Compute Cloud. +// +// Advertises an IPv4 address range that is provisioned for use with your AWS +// resources through bring your own IP addresses (BYOIP). +// +// You can perform this operation at most once every 10 seconds, even if you +// specify different address ranges each time. +// +// We recommend that you stop advertising the BYOIP CIDR from other locations +// when you advertise it from AWS. To minimize down time, you can configure +// your AWS resources to use an address from a BYOIP CIDR before it is advertised, +// and then simultaneously stop advertising it from the current location and +// start advertising it through AWS. +// +// It can take a few minutes before traffic to the specified addresses starts +// routing to AWS because of BGP propagation delays. +// +// To stop advertising the BYOIP CIDR, use WithdrawByoipCidr. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation AdvertiseByoipCidr for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AdvertiseByoipCidr +func (c *EC2) AdvertiseByoipCidr(input *AdvertiseByoipCidrInput) (*AdvertiseByoipCidrOutput, error) { + req, out := c.AdvertiseByoipCidrRequest(input) + return out, req.Send() +} + +// AdvertiseByoipCidrWithContext is the same as AdvertiseByoipCidr with the addition of +// the ability to pass a context and additional request options. +// +// See AdvertiseByoipCidr for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) AdvertiseByoipCidrWithContext(ctx aws.Context, input *AdvertiseByoipCidrInput, opts ...request.Option) (*AdvertiseByoipCidrOutput, error) { + req, out := c.AdvertiseByoipCidrRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opAllocateAddress = "AllocateAddress" // AllocateAddressRequest generates a "aws/request.Request" representing the // client's request for the AllocateAddress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -287,17 +376,28 @@ func (c *EC2) AllocateAddressRequest(input *AllocateAddressInput) (req *request. // AllocateAddress API operation for Amazon Elastic Compute Cloud. // -// Allocates an Elastic IP address. +// Allocates an Elastic IP address to your AWS account. After you allocate the +// Elastic IP address you can associate it with an instance or network interface. +// After you release an Elastic IP address, it is released to the IP address +// pool and can be allocated to a different AWS account. +// +// You can allocate an Elastic IP address from an address pool owned by AWS +// or from an address pool created from a public IPv4 address range that you +// have brought to AWS for use with your AWS resources using bring your own +// IP addresses (BYOIP). For more information, see Bring Your Own IP Addresses +// (BYOIP) (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// [EC2-VPC] If you release an Elastic IP address, you might be able to recover +// it. You cannot recover an Elastic IP address that you released after it is +// allocated to another AWS account. You cannot recover an Elastic IP address +// for EC2-Classic. To attempt to recover an Elastic IP address that you released, +// specify it in this operation. // // An Elastic IP address is for use either in the EC2-Classic platform or in // a VPC. By default, you can allocate 5 Elastic IP addresses for EC2-Classic // per region and 5 Elastic IP addresses for EC2-VPC per region. // -// If you release an Elastic IP address for use in a VPC, you might be able -// to recover it. To recover an Elastic IP address that you released, specify -// it in the Address parameter. Note that you cannot recover an Elastic IP address -// that you released after it is allocated to another AWS account. -// // For more information, see Elastic IP Addresses (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) // in the Amazon Elastic Compute Cloud User Guide. // @@ -333,8 +433,8 @@ const opAllocateHosts = "AllocateHosts" // AllocateHostsRequest generates a "aws/request.Request" representing the // client's request for the AllocateHosts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
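The AdvertiseByoipCidr operation added above follows the SDK's standard WithContext pattern, whose signature appears in this hunk. The sketch below drives it end to end; the `Cidr` field and the region are assumptions not shown in the diff, and the CIDR itself is a documentation placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	// The Cidr field is assumed to match the upstream input shape; it is not
	// shown in this hunk, and the range below is a placeholder.
	input := &ec2.AdvertiseByoipCidrInput{
		Cidr: aws.String("203.0.113.0/24"),
	}

	// WithContext variant added by this update; the context must be non-nil.
	out, err := svc.AdvertiseByoipCidrWithContext(aws.BackgroundContext(), input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```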
@@ -373,9 +473,8 @@ func (c *EC2) AllocateHostsRequest(input *AllocateHostsInput) (req *request.Requ // AllocateHosts API operation for Amazon Elastic Compute Cloud. // -// Allocates a Dedicated Host to your account. At minimum you need to specify -// the instance size type, Availability Zone, and quantity of hosts you want -// to allocate. +// Allocates a Dedicated Host to your account. At a minimum, specify the instance +// size type, Availability Zone, and quantity of hosts to allocate. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -409,8 +508,8 @@ const opAssignIpv6Addresses = "AssignIpv6Addresses" // AssignIpv6AddressesRequest generates a "aws/request.Request" representing the // client's request for the AssignIpv6Addresses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -490,8 +589,8 @@ const opAssignPrivateIpAddresses = "AssignPrivateIpAddresses" // AssignPrivateIpAddressesRequest generates a "aws/request.Request" representing the // client's request for the AssignPrivateIpAddresses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -533,16 +632,23 @@ func (c *EC2) AssignPrivateIpAddressesRequest(input *AssignPrivateIpAddressesInp // AssignPrivateIpAddresses API operation for Amazon Elastic Compute Cloud. // // Assigns one or more secondary private IP addresses to the specified network -// interface. You can specify one or more specific secondary IP addresses, or -// you can specify the number of secondary IP addresses to be automatically -// assigned within the subnet's CIDR block range. The number of secondary IP -// addresses that you can assign to an instance varies by instance type. For -// information about instance types, see Instance Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) +// interface. +// +// You can specify one or more specific secondary IP addresses, or you can specify +// the number of secondary IP addresses to be automatically assigned within +// the subnet's CIDR block range. The number of secondary IP addresses that +// you can assign to an instance varies by instance type. For information about +// instance types, see Instance Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) // in the Amazon Elastic Compute Cloud User Guide. For more information about // Elastic IP addresses, see Elastic IP Addresses (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) // in the Amazon Elastic Compute Cloud User Guide. // -// AssignPrivateIpAddresses is available only in EC2-VPC. +// When you move a secondary private IP address to another network interface, +// any Elastic IP address that is associated with the IP address is also moved. 
+// +// Remapping an IP address is an asynchronous operation. When you move an IP +// address from one network interface to another, check network/interfaces/macs/mac/local-ipv4s +// in the instance metadata to confirm that the remapping is complete. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -576,8 +682,8 @@ const opAssociateAddress = "AssociateAddress" // AssociateAddressRequest generates a "aws/request.Request" representing the // client's request for the AssociateAddress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -617,6 +723,7 @@ func (c *EC2) AssociateAddressRequest(input *AssociateAddressInput) (req *reques // AssociateAddress API operation for Amazon Elastic Compute Cloud. // // Associates an Elastic IP address with an instance or a network interface. +// Before you can use an Elastic IP address, you must allocate it to your account. // // An Elastic IP address is for use in either the EC2-Classic platform or in // a VPC. For more information, see Elastic IP Addresses (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) @@ -673,8 +780,8 @@ const opAssociateDhcpOptions = "AssociateDhcpOptions" // AssociateDhcpOptionsRequest generates a "aws/request.Request" representing the // client's request for the AssociateDhcpOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -760,8 +867,8 @@ const opAssociateIamInstanceProfile = "AssociateIamInstanceProfile" // AssociateIamInstanceProfileRequest generates a "aws/request.Request" representing the // client's request for the AssociateIamInstanceProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -835,8 +942,8 @@ const opAssociateRouteTable = "AssociateRouteTable" // AssociateRouteTableRequest generates a "aws/request.Request" representing the // client's request for the AssociateRouteTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
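The updated AssociateAddress documentation notes that an Elastic IP address must be allocated to the account before it can be associated. Here is a sketch of that allocate-then-associate flow; the `Domain`, `AllocationId`, and `InstanceId` fields are assumed from the upstream input shapes, and the instance ID is a placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ec2.New(sess)

	// Allocate first, as the updated AssociateAddress documentation requires.
	alloc, err := svc.AllocateAddress(&ec2.AllocateAddressInput{
		Domain: aws.String("vpc"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Then associate the allocation with an instance (placeholder ID).
	assoc, err := svc.AssociateAddress(&ec2.AssociateAddressInput{
		AllocationId: alloc.AllocationId,
		InstanceId:   aws.String("i-0123456789abcdef0"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(assoc.AssociationId))
}
```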
@@ -881,7 +988,7 @@ func (c *EC2) AssociateRouteTableRequest(input *AssociateRouteTableInput) (req * // an association ID, which you need in order to disassociate the route table // from the subnet later. A route table can be associated with multiple subnets. // -// For more information about route tables, see Route Tables (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) +// For more information, see Route Tables (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -916,8 +1023,8 @@ const opAssociateSubnetCidrBlock = "AssociateSubnetCidrBlock" // AssociateSubnetCidrBlockRequest generates a "aws/request.Request" representing the // client's request for the AssociateSubnetCidrBlock operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -992,8 +1099,8 @@ const opAssociateVpcCidrBlock = "AssociateVpcCidrBlock" // AssociateVpcCidrBlockRequest generates a "aws/request.Request" representing the // client's request for the AssociateVpcCidrBlock operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1072,8 +1179,8 @@ const opAttachClassicLinkVpc = "AttachClassicLinkVpc" // AttachClassicLinkVpcRequest generates a "aws/request.Request" representing the // client's request for the AttachClassicLinkVpc operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1157,8 +1264,8 @@ const opAttachInternetGateway = "AttachInternetGateway" // AttachInternetGatewayRequest generates a "aws/request.Request" representing the // client's request for the AttachInternetGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1199,8 +1306,8 @@ func (c *EC2) AttachInternetGatewayRequest(input *AttachInternetGatewayInput) (r // AttachInternetGateway API operation for Amazon Elastic Compute Cloud. // -// Attaches an Internet gateway to a VPC, enabling connectivity between the -// Internet and the VPC. For more information about your VPC and Internet gateway, +// Attaches an internet gateway to a VPC, enabling connectivity between the +// internet and the VPC. 
For more information about your VPC and internet gateway, // see the Amazon Virtual Private Cloud User Guide (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1235,8 +1342,8 @@ const opAttachNetworkInterface = "AttachNetworkInterface" // AttachNetworkInterfaceRequest generates a "aws/request.Request" representing the // client's request for the AttachNetworkInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1309,8 +1416,8 @@ const opAttachVolume = "AttachVolume" // AttachVolumeRequest generates a "aws/request.Request" representing the // client's request for the AttachVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1374,8 +1481,6 @@ func (c *EC2) AttachVolumeRequest(input *AttachVolumeInput) (req *request.Reques // the product. For example, you can't detach a volume from a Windows instance // and attach it to a Linux instance. // -// For an overview of the AWS Marketplace, see Introducing AWS Marketplace (https://aws.amazon.com/marketplace/help/200900000). -// // For more information about EBS volumes, see Attaching Amazon EBS Volumes // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-attaching-volume.html) // in the Amazon Elastic Compute Cloud User Guide. @@ -1412,8 +1517,8 @@ const opAttachVpnGateway = "AttachVpnGateway" // AttachVpnGatewayRequest generates a "aws/request.Request" representing the // client's request for the AttachVpnGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1490,8 +1595,8 @@ const opAuthorizeSecurityGroupEgress = "AuthorizeSecurityGroupEgress" // AuthorizeSecurityGroupEgressRequest generates a "aws/request.Request" representing the // client's request for the AuthorizeSecurityGroupEgress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1583,8 +1688,8 @@ const opAuthorizeSecurityGroupIngress = "AuthorizeSecurityGroupIngress" // AuthorizeSecurityGroupIngressRequest generates a "aws/request.Request" representing the // client's request for the AuthorizeSecurityGroupIngress operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1677,8 +1782,8 @@ const opBundleInstance = "BundleInstance" // BundleInstanceRequest generates a "aws/request.Request" representing the // client's request for the BundleInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1725,8 +1830,6 @@ func (c *EC2) BundleInstanceRequest(input *BundleInstanceInput) (req *request.Re // This action is not applicable for Linux/Unix instances or Windows instances // that are backed by Amazon EBS. // -// For more information, see Creating an Instance Store-Backed Windows AMI (http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/Creating_InstanceStoreBacked_WinAMI.html). -// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1759,8 +1862,8 @@ const opCancelBundleTask = "CancelBundleTask" // CancelBundleTaskRequest generates a "aws/request.Request" representing the // client's request for the CancelBundleTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1829,12 +1932,93 @@ func (c *EC2) CancelBundleTaskWithContext(ctx aws.Context, input *CancelBundleTa return out, req.Send() } +const opCancelCapacityReservation = "CancelCapacityReservation" + +// CancelCapacityReservationRequest generates a "aws/request.Request" representing the +// client's request for the CancelCapacityReservation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelCapacityReservation for more information on using the CancelCapacityReservation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelCapacityReservationRequest method. 
+// req, resp := client.CancelCapacityReservationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelCapacityReservation +func (c *EC2) CancelCapacityReservationRequest(input *CancelCapacityReservationInput) (req *request.Request, output *CancelCapacityReservationOutput) { + op := &request.Operation{ + Name: opCancelCapacityReservation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelCapacityReservationInput{} + } + + output = &CancelCapacityReservationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelCapacityReservation API operation for Amazon Elastic Compute Cloud. +// +// Cancels the specified Capacity Reservation, releases the reserved capacity, +// and changes the Capacity Reservation's state to cancelled. +// +// Instances running in the reserved capacity continue running until you stop +// them. Stopped instances that target the Capacity Reservation can no longer +// launch. Modify these instances to either target a different Capacity Reservation, +// launch On-Demand Instance capacity, or run in any open Capacity Reservation +// that has matching attributes and sufficient capacity. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CancelCapacityReservation for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelCapacityReservation +func (c *EC2) CancelCapacityReservation(input *CancelCapacityReservationInput) (*CancelCapacityReservationOutput, error) { + req, out := c.CancelCapacityReservationRequest(input) + return out, req.Send() +} + +// CancelCapacityReservationWithContext is the same as CancelCapacityReservation with the addition of +// the ability to pass a context and additional request options. +// +// See CancelCapacityReservation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CancelCapacityReservationWithContext(ctx aws.Context, input *CancelCapacityReservationInput, opts ...request.Option) (*CancelCapacityReservationOutput, error) { + req, out := c.CancelCapacityReservationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCancelConversionTask = "CancelConversionTask" // CancelConversionTaskRequest generates a "aws/request.Request" representing the // client's request for the CancelConversionTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
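The new CancelCapacityReservation operation documents the usual request/Send pattern in its generated comment. Expanding that comment into a runnable sketch, assuming the `CapacityReservationId` field from the upstream input shape and a placeholder reservation ID:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession())
	client := ec2.New(sess)

	// Build the request without sending it yet, as the generated example shows.
	req, resp := client.CancelCapacityReservationRequest(&ec2.CancelCapacityReservationInput{
		CapacityReservationId: aws.String("cr-0123456789abcdef0"), // placeholder
	})

	// Send issues the call; resp is not valid until Send returns without error.
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```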
@@ -1916,8 +2100,8 @@ const opCancelExportTask = "CancelExportTask" // CancelExportTaskRequest generates a "aws/request.Request" representing the // client's request for the CancelExportTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1995,8 +2179,8 @@ const opCancelImportTask = "CancelImportTask" // CancelImportTaskRequest generates a "aws/request.Request" representing the // client's request for the CancelImportTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2069,8 +2253,8 @@ const opCancelReservedInstancesListing = "CancelReservedInstancesListing" // CancelReservedInstancesListingRequest generates a "aws/request.Request" representing the // client's request for the CancelReservedInstancesListing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2147,8 +2331,8 @@ const opCancelSpotFleetRequests = "CancelSpotFleetRequests" // CancelSpotFleetRequestsRequest generates a "aws/request.Request" representing the // client's request for the CancelSpotFleetRequests operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2228,8 +2412,8 @@ const opCancelSpotInstanceRequests = "CancelSpotInstanceRequests" // CancelSpotInstanceRequestsRequest generates a "aws/request.Request" representing the // client's request for the CancelSpotInstanceRequests operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2268,11 +2452,7 @@ func (c *EC2) CancelSpotInstanceRequestsRequest(input *CancelSpotInstanceRequest // CancelSpotInstanceRequests API operation for Amazon Elastic Compute Cloud. // -// Cancels one or more Spot Instance requests. Spot Instances are instances -// that Amazon EC2 starts on your behalf when the maximum price that you specify -// exceeds the current Spot price. 
For more information, see Spot Instance Requests -// (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) in -// the Amazon Elastic Compute Cloud User Guide. +// Cancels one or more Spot Instance requests. // // Canceling a Spot Instance request does not terminate running Spot Instances // associated with the request. @@ -2309,8 +2489,8 @@ const opConfirmProductInstance = "ConfirmProductInstance" // ConfirmProductInstanceRequest generates a "aws/request.Request" representing the // client's request for the ConfirmProductInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2385,8 +2565,8 @@ const opCopyFpgaImage = "CopyFpgaImage" // CopyFpgaImageRequest generates a "aws/request.Request" representing the // client's request for the CopyFpgaImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2459,8 +2639,8 @@ const opCopyImage = "CopyImage" // CopyImageRequest generates a "aws/request.Request" representing the // client's request for the CopyImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2503,6 +2683,11 @@ func (c *EC2) CopyImageRequest(input *CopyImageInput) (req *request.Request, out // region. You specify the destination region by using its endpoint when making // the request. // +// Copies of encrypted backing snapshots for the AMI are encrypted. Copies of +// unencrypted backing snapshots remain unencrypted, unless you set Encrypted +// during the copy operation. You cannot create an unencrypted copy of an encrypted +// backing snapshot. +// // For more information about the prerequisites and limits when copying an AMI, // see Copying an AMI (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html) // in the Amazon Elastic Compute Cloud User Guide. @@ -2539,8 +2724,8 @@ const opCopySnapshot = "CopySnapshot" // CopySnapshotRequest generates a "aws/request.Request" representing the // client's request for the CopySnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
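The revised CopyImage documentation explains that copies of unencrypted backing snapshots stay unencrypted unless `Encrypted` is set during the copy. Below is a sketch of a cross-region copy with encryption enabled; the `SourceImageId`, `SourceRegion`, `Name`, and `Encrypted` fields are assumed from the upstream input shape, and the identifiers are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// The client is created in the destination region; the AMI is pulled
	// from SourceRegion.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := ec2.New(sess)

	out, err := svc.CopyImage(&ec2.CopyImageInput{
		SourceImageId: aws.String("ami-0123456789abcdef0"), // placeholder
		SourceRegion:  aws.String("us-east-1"),
		Name:          aws.String("copied-example"),
		// Setting Encrypted encrypts copies of unencrypted backing snapshots,
		// as described in the updated CopyImage documentation.
		Encrypted: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.ImageId))
}
```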
@@ -2594,7 +2779,7 @@ func (c *EC2) CopySnapshotRequest(input *CopySnapshotInput) (req *request.Reques // To copy an encrypted snapshot that has been shared from another account, // you must have permissions for the CMK used to encrypt the snapshot. // -// Snapshots created by the CopySnapshot action have an arbitrary volume ID +// Snapshots created by copying another snapshot have an arbitrary volume ID // that should not be used for any purpose. // // For more information, see Copying an Amazon EBS Snapshot (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html) @@ -2628,12 +2813,109 @@ func (c *EC2) CopySnapshotWithContext(ctx aws.Context, input *CopySnapshotInput, return out, req.Send() } +const opCreateCapacityReservation = "CreateCapacityReservation" + +// CreateCapacityReservationRequest generates a "aws/request.Request" representing the +// client's request for the CreateCapacityReservation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCapacityReservation for more information on using the CreateCapacityReservation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateCapacityReservationRequest method. +// req, resp := client.CreateCapacityReservationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateCapacityReservation +func (c *EC2) CreateCapacityReservationRequest(input *CreateCapacityReservationInput) (req *request.Request, output *CreateCapacityReservationOutput) { + op := &request.Operation{ + Name: opCreateCapacityReservation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateCapacityReservationInput{} + } + + output = &CreateCapacityReservationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCapacityReservation API operation for Amazon Elastic Compute Cloud. +// +// Creates a new Capacity Reservation with the specified attributes. +// +// Capacity Reservations enable you to reserve capacity for your Amazon EC2 +// instances in a specific Availability Zone for any duration. This gives you +// the flexibility to selectively add capacity reservations and still get the +// Regional RI discounts for that usage. By creating Capacity Reservations, +// you ensure that you always have access to Amazon EC2 capacity when you need +// it, for as long as you need it. For more information, see Capacity Reservations +// (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// Your request to create a Capacity Reservation could fail if Amazon EC2 does +// not have sufficient capacity to fulfill the request. If your request fails +// due to Amazon EC2 capacity constraints, either try again at a later time, +// try in a different Availability Zone, or request a smaller capacity reservation. 
+// If your application is flexible across instance types and sizes, try to create +// a Capacity Reservation with different instance attributes. +// +// Your request could also fail if the requested quantity exceeds your On-Demand +// Instance limit for the selected instance type. If your request fails due +// to limit constraints, increase your On-Demand Instance limit for the required +// instance type and try again. For more information about increasing your instance +// limits, see Amazon EC2 Service Limits (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CreateCapacityReservation for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateCapacityReservation +func (c *EC2) CreateCapacityReservation(input *CreateCapacityReservationInput) (*CreateCapacityReservationOutput, error) { + req, out := c.CreateCapacityReservationRequest(input) + return out, req.Send() +} + +// CreateCapacityReservationWithContext is the same as CreateCapacityReservation with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCapacityReservation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CreateCapacityReservationWithContext(ctx aws.Context, input *CreateCapacityReservationInput, opts ...request.Option) (*CreateCapacityReservationOutput, error) { + req, out := c.CreateCapacityReservationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateCustomerGateway = "CreateCustomerGateway" // CreateCustomerGatewayRequest generates a "aws/request.Request" representing the // client's request for the CreateCustomerGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2730,8 +3012,8 @@ const opCreateDefaultSubnet = "CreateDefaultSubnet" // CreateDefaultSubnetRequest generates a "aws/request.Request" representing the // client's request for the CreateDefaultSubnet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2808,8 +3090,8 @@ const opCreateDefaultVpc = "CreateDefaultVpc" // CreateDefaultVpcRequest generates a "aws/request.Request" representing the // client's request for the CreateDefaultVpc operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2854,12 +3136,12 @@ func (c *EC2) CreateDefaultVpcRequest(input *CreateDefaultVpcInput) (req *reques // in the Amazon Virtual Private Cloud User Guide. You cannot specify the components // of the default VPC yourself. // -// You can create a default VPC if you deleted your previous default VPC. You -// cannot have more than one default VPC per region. +// If you deleted your previous default VPC, you can create a default VPC. You +// cannot have more than one default VPC per Region. // // If your account supports EC2-Classic, you cannot use this action to create -// a default VPC in a region that supports EC2-Classic. If you want a default -// VPC in a region that supports EC2-Classic, see "I really want a default VPC +// a default VPC in a Region that supports EC2-Classic. If you want a default +// VPC in a Region that supports EC2-Classic, see "I really want a default VPC // for my existing EC2 account. Is that possible?" in the Default VPCs FAQ (http://aws.amazon.com/vpc/faqs/#Default_VPCs). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2894,8 +3176,8 @@ const opCreateDhcpOptions = "CreateDhcpOptions" // CreateDhcpOptionsRequest generates a "aws/request.Request" representing the // client's request for the CreateDhcpOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2943,9 +3225,9 @@ func (c *EC2) CreateDhcpOptionsRequest(input *CreateDhcpOptionsInput) (req *requ // * domain-name-servers - The IP addresses of up to four domain name servers, // or AmazonProvidedDNS. The default DHCP option set specifies AmazonProvidedDNS. // If specifying more than one domain name server, specify the IP addresses -// in a single parameter, separated by commas. If you want your instance -// to receive a custom DNS hostname as specified in domain-name, you must -// set domain-name-servers to a custom DNS server. +// in a single parameter, separated by commas. ITo have your instance to +// receive a custom DNS hostname as specified in domain-name, you must set +// domain-name-servers to a custom DNS server. // // * domain-name - If you're using AmazonProvidedDNS in us-east-1, specify // ec2.internal. If you're using AmazonProvidedDNS in another region, specify @@ -2969,10 +3251,9 @@ func (c *EC2) CreateDhcpOptionsRequest(input *CreateDhcpOptionsInput) (req *requ // // Your VPC automatically starts out with a set of DHCP options that includes // only a DNS server that we provide (AmazonProvidedDNS). If you create a set -// of options, and if your VPC has an Internet gateway, make sure to set the +// of options, and if your VPC has an internet gateway, make sure to set the // domain-name-servers option either to AmazonProvidedDNS or to a domain name -// server of your choice. 
For more information about DHCP options, see DHCP -// Options Sets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html) +// server of your choice. For more information, see DHCP Options Sets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3007,8 +3288,8 @@ const opCreateEgressOnlyInternetGateway = "CreateEgressOnlyInternetGateway" // CreateEgressOnlyInternetGatewayRequest generates a "aws/request.Request" representing the // client's request for the CreateEgressOnlyInternetGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3047,9 +3328,9 @@ func (c *EC2) CreateEgressOnlyInternetGatewayRequest(input *CreateEgressOnlyInte // CreateEgressOnlyInternetGateway API operation for Amazon Elastic Compute Cloud. // -// [IPv6 only] Creates an egress-only Internet gateway for your VPC. An egress-only -// Internet gateway is used to enable outbound communication over IPv6 from -// instances in your VPC to the Internet, and prevents hosts outside of your +// [IPv6 only] Creates an egress-only internet gateway for your VPC. An egress-only +// internet gateway is used to enable outbound communication over IPv6 from +// instances in your VPC to the internet, and prevents hosts outside of your // VPC from initiating an IPv6 connection with your instance. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3080,12 +3361,92 @@ func (c *EC2) CreateEgressOnlyInternetGatewayWithContext(ctx aws.Context, input return out, req.Send() } +const opCreateFleet = "CreateFleet" + +// CreateFleetRequest generates a "aws/request.Request" representing the +// client's request for the CreateFleet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateFleet for more information on using the CreateFleet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateFleetRequest method. +// req, resp := client.CreateFleetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFleet +func (c *EC2) CreateFleetRequest(input *CreateFleetInput) (req *request.Request, output *CreateFleetOutput) { + op := &request.Operation{ + Name: opCreateFleet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateFleetInput{} + } + + output = &CreateFleetOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateFleet API operation for Amazon Elastic Compute Cloud. +// +// Launches an EC2 Fleet. 
+// +// You can create a single EC2 Fleet that includes multiple launch specifications +// that vary by instance type, AMI, Availability Zone, or subnet. +// +// For more information, see Launching an EC2 Fleet (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-fleet.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CreateFleet for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFleet +func (c *EC2) CreateFleet(input *CreateFleetInput) (*CreateFleetOutput, error) { + req, out := c.CreateFleetRequest(input) + return out, req.Send() +} + +// CreateFleetWithContext is the same as CreateFleet with the addition of +// the ability to pass a context and additional request options. +// +// See CreateFleet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CreateFleetWithContext(ctx aws.Context, input *CreateFleetInput, opts ...request.Option) (*CreateFleetOutput, error) { + req, out := c.CreateFleetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateFlowLogs = "CreateFlowLogs" // CreateFlowLogsRequest generates a "aws/request.Request" representing the // client's request for the CreateFlowLogs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3124,16 +3485,19 @@ func (c *EC2) CreateFlowLogsRequest(input *CreateFlowLogsInput) (req *request.Re // CreateFlowLogs API operation for Amazon Elastic Compute Cloud. // -// Creates one or more flow logs to capture IP traffic for a specific network -// interface, subnet, or VPC. Flow logs are delivered to a specified log group -// in Amazon CloudWatch Logs. If you specify a VPC or subnet in the request, -// a log stream is created in CloudWatch Logs for each network interface in -// the subnet or VPC. Log streams can include information about accepted and -// rejected traffic to a network interface. You can view the data in your log -// streams using Amazon CloudWatch Logs. +// Creates one or more flow logs to capture information about IP traffic for +// a specific network interface, subnet, or VPC. // -// In your request, you must also specify an IAM role that has permission to -// publish logs to CloudWatch Logs. +// Flow log data for a monitored network interface is recorded as flow log records, +// which are log events consisting of fields that describe the traffic flow. +// For more information, see Flow Log Records (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html#flow-log-records) +// in the Amazon Virtual Private Cloud User Guide. 
+// +// When publishing to CloudWatch Logs, flow log records are published to a log +// group, and each network interface has a unique log stream in the log group. +// When publishing to Amazon S3, flow log records for all of the monitored network +// interfaces are published to a single log file object that is stored in the +// specified bucket. // // For more information, see VPC Flow Logs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html) // in the Amazon Virtual Private Cloud User Guide. @@ -3170,8 +3534,8 @@ const opCreateFpgaImage = "CreateFpgaImage" // CreateFpgaImageRequest generates a "aws/request.Request" representing the // client's request for the CreateFpgaImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3251,8 +3615,8 @@ const opCreateImage = "CreateImage" // CreateImageRequest generates a "aws/request.Request" representing the // client's request for the CreateImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3334,8 +3698,8 @@ const opCreateInstanceExportTask = "CreateInstanceExportTask" // CreateInstanceExportTaskRequest generates a "aws/request.Request" representing the // client's request for the CreateInstanceExportTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3413,8 +3777,8 @@ const opCreateInternetGateway = "CreateInternetGateway" // CreateInternetGatewayRequest generates a "aws/request.Request" representing the // client's request for the CreateInternetGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3453,10 +3817,10 @@ func (c *EC2) CreateInternetGatewayRequest(input *CreateInternetGatewayInput) (r // CreateInternetGateway API operation for Amazon Elastic Compute Cloud. // -// Creates an Internet gateway for use with a VPC. After creating the Internet +// Creates an internet gateway for use with a VPC. After creating the internet // gateway, you attach it to a VPC using AttachInternetGateway. 
// -// For more information about your VPC and Internet gateway, see the Amazon +// For more information about your VPC and internet gateway, see the Amazon // Virtual Private Cloud User Guide (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3491,8 +3855,8 @@ const opCreateKeyPair = "CreateKeyPair" // CreateKeyPairRequest generates a "aws/request.Request" representing the // client's request for the CreateKeyPair operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3577,8 +3941,8 @@ const opCreateLaunchTemplate = "CreateLaunchTemplate" // CreateLaunchTemplateRequest generates a "aws/request.Request" representing the // client's request for the CreateLaunchTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3653,8 +4017,8 @@ const opCreateLaunchTemplateVersion = "CreateLaunchTemplateVersion" // CreateLaunchTemplateVersionRequest generates a "aws/request.Request" representing the // client's request for the CreateLaunchTemplateVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3731,8 +4095,8 @@ const opCreateNatGateway = "CreateNatGateway" // CreateNatGatewayRequest generates a "aws/request.Request" representing the // client's request for the CreateNatGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3771,11 +4135,12 @@ func (c *EC2) CreateNatGatewayRequest(input *CreateNatGatewayInput) (req *reques // CreateNatGateway API operation for Amazon Elastic Compute Cloud. // -// Creates a NAT gateway in the specified subnet. A NAT gateway can be used -// to enable instances in a private subnet to connect to the Internet. This -// action creates a network interface in the specified subnet with a private -// IP address from the IP address range of the subnet. For more information, -// see NAT Gateways (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html) +// Creates a NAT gateway in the specified public subnet. This action creates +// a network interface in the specified subnet with a private IP address from +// the IP address range of the subnet. 
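To illustrate the internet gateway workflow referenced above (create, then attach with AttachInternetGateway), a small sketch with a placeholder VPC ID:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	igw, err := svc.CreateInternetGateway(&ec2.CreateInternetGatewayInput{})
	if err != nil {
		log.Fatal(err)
	}

	// An internet gateway only carries traffic once it is attached to a VPC.
	_, err = svc.AttachInternetGateway(&ec2.AttachInternetGatewayInput{
		InternetGatewayId: igw.InternetGateway.InternetGatewayId,
		VpcId:             aws.String("vpc-0abc1234"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("attached", aws.StringValue(igw.InternetGateway.InternetGatewayId))
}
```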
Internet-bound traffic from a private +// subnet can be routed to the NAT gateway, therefore enabling instances in +// the private subnet to connect to the internet. For more information, see +// NAT Gateways (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3810,8 +4175,8 @@ const opCreateNetworkAcl = "CreateNetworkAcl" // CreateNetworkAclRequest generates a "aws/request.Request" representing the // client's request for the CreateNetworkAcl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3853,7 +4218,7 @@ func (c *EC2) CreateNetworkAclRequest(input *CreateNetworkAclInput) (req *reques // Creates a network ACL in a VPC. Network ACLs provide an optional layer of // security (in addition to security groups) for the instances in your VPC. // -// For more information about network ACLs, see Network ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) +// For more information, see Network ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3888,8 +4253,8 @@ const opCreateNetworkAclEntry = "CreateNetworkAclEntry" // CreateNetworkAclEntryRequest generates a "aws/request.Request" representing the // client's request for the CreateNetworkAclEntry operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3980,8 +4345,8 @@ const opCreateNetworkInterface = "CreateNetworkInterface" // CreateNetworkInterfaceRequest generates a "aws/request.Request" representing the // client's request for the CreateNetworkInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4058,8 +4423,8 @@ const opCreateNetworkInterfacePermission = "CreateNetworkInterfacePermission" // CreateNetworkInterfacePermissionRequest generates a "aws/request.Request" representing the // client's request for the CreateNetworkInterfacePermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
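The NAT gateway description above pairs an Elastic IP allocation with a public subnet; a minimal sketch of that sequence (the subnet ID is a placeholder) might be:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	// A NAT gateway needs an Elastic IP allocation and a public subnet.
	eip, err := svc.AllocateAddress(&ec2.AllocateAddressInput{Domain: aws.String("vpc")})
	if err != nil {
		log.Fatal(err)
	}

	ngw, err := svc.CreateNatGateway(&ec2.CreateNatGatewayInput{
		SubnetId:     aws.String("subnet-0abc1234"), // a public subnet (placeholder)
		AllocationId: eip.AllocationId,
	})
	if err != nil {
		log.Fatal(err)
	}

	id := ngw.NatGateway.NatGatewayId
	// The gateway takes a few minutes to become available.
	if err := svc.WaitUntilNatGatewayAvailable(&ec2.DescribeNatGatewaysInput{
		NatGatewayIds: []*string{id},
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("NAT gateway ready:", aws.StringValue(id))
}
```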
@@ -4098,8 +4463,8 @@ func (c *EC2) CreateNetworkInterfacePermissionRequest(input *CreateNetworkInterf // CreateNetworkInterfacePermission API operation for Amazon Elastic Compute Cloud. // -// Grants an AWS authorized partner account permission to attach the specified -// network interface to an instance in their account. +// Grants an AWS-authorized account permission to attach the specified network +// interface to an instance in their account. // // You can grant permission to a single AWS account only, and only one account // at a time. @@ -4136,8 +4501,8 @@ const opCreatePlacementGroup = "CreatePlacementGroup" // CreatePlacementGroupRequest generates a "aws/request.Request" representing the // client's request for the CreatePlacementGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4220,8 +4585,8 @@ const opCreateReservedInstancesListing = "CreateReservedInstancesListing" // CreateReservedInstancesListingRequest generates a "aws/request.Request" representing the // client's request for the CreateReservedInstancesListing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4317,8 +4682,8 @@ const opCreateRoute = "CreateRoute" // CreateRouteRequest generates a "aws/request.Request" representing the // client's request for the CreateRoute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4359,9 +4724,9 @@ func (c *EC2) CreateRouteRequest(input *CreateRouteInput) (req *request.Request, // // Creates a route in a route table within a VPC. // -// You must specify one of the following targets: Internet gateway or virtual +// You must specify one of the following targets: internet gateway or virtual // private gateway, NAT instance, NAT gateway, VPC peering connection, network -// interface, or egress-only Internet gateway. +// interface, or egress-only internet gateway. // // When determining how to route traffic, we use the route with the most specific // match. For example, traffic is destined for the IPv4 address 192.0.2.3, and @@ -4410,8 +4775,8 @@ const opCreateRouteTable = "CreateRouteTable" // CreateRouteTableRequest generates a "aws/request.Request" representing the // client's request for the CreateRouteTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
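A sketch of the CreateRoute call discussed above, adding a default route that targets an internet gateway; all IDs are placeholders, and any of the other target types (NAT gateway, peering connection, and so on) would be supplied through its own field on the input.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	// Route all internet-bound IPv4 traffic to an internet gateway. Because of
	// longest-prefix matching, more specific routes in the table still win.
	_, err := svc.CreateRoute(&ec2.CreateRouteInput{
		RouteTableId:         aws.String("rtb-0abc1234"),
		DestinationCidrBlock: aws.String("0.0.0.0/0"),
		GatewayId:            aws.String("igw-0abc1234"), // or NatGatewayId, VpcPeeringConnectionId, ...
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("route created")
}
```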
// the "output" return value is not valid until after Send returns without error. @@ -4453,7 +4818,7 @@ func (c *EC2) CreateRouteTableRequest(input *CreateRouteTableInput) (req *reques // Creates a route table for the specified VPC. After you create a route table, // you can add routes and associate the table with a subnet. // -// For more information about route tables, see Route Tables (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) +// For more information, see Route Tables (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -4488,8 +4853,8 @@ const opCreateSecurityGroup = "CreateSecurityGroup" // CreateSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4588,8 +4953,8 @@ const opCreateSnapshot = "CreateSnapshot" // CreateSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4654,7 +5019,8 @@ func (c *EC2) CreateSnapshotRequest(input *CreateSnapshotInput) (req *request.Re // protected. // // You can tag your snapshots during creation. For more information, see Tagging -// Your Amazon EC2 Resources (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html). +// Your Amazon EC2 Resources (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) +// in the Amazon Elastic Compute Cloud User Guide. // // For more information, see Amazon Elastic Block Store (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) // and Amazon EBS Encryption (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) @@ -4692,8 +5058,8 @@ const opCreateSpotDatafeedSubscription = "CreateSpotDatafeedSubscription" // CreateSpotDatafeedSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the CreateSpotDatafeedSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4735,7 +5101,7 @@ func (c *EC2) CreateSpotDatafeedSubscriptionRequest(input *CreateSpotDatafeedSub // Creates a data feed for Spot Instances, enabling you to view Spot Instance // usage logs. You can create one data feed per AWS account. 
For more information, // see Spot Instance Data Feed (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-data-feeds.html) -// in the Amazon Elastic Compute Cloud User Guide. +// in the Amazon EC2 User Guide for Linux Instances. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4769,8 +5135,8 @@ const opCreateSubnet = "CreateSubnet" // CreateSubnetRequest generates a "aws/request.Request" representing the // client's request for the CreateSubnet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4811,13 +5177,13 @@ func (c *EC2) CreateSubnetRequest(input *CreateSubnetInput) (req *request.Reques // // Creates a subnet in an existing VPC. // -// When you create each subnet, you provide the VPC ID and the IPv4 CIDR block -// you want for the subnet. After you create a subnet, you can't change its -// CIDR block. The size of the subnet's IPv4 CIDR block can be the same as a -// VPC's IPv4 CIDR block, or a subset of a VPC's IPv4 CIDR block. If you create -// more than one subnet in a VPC, the subnets' CIDR blocks must not overlap. -// The smallest IPv4 subnet (and VPC) you can create uses a /28 netmask (16 -// IPv4 addresses), and the largest uses a /16 netmask (65,536 IPv4 addresses). +// When you create each subnet, you provide the VPC ID and IPv4 CIDR block for +// the subnet. After you create a subnet, you can't change its CIDR block. The +// size of the subnet's IPv4 CIDR block can be the same as a VPC's IPv4 CIDR +// block, or a subset of a VPC's IPv4 CIDR block. If you create more than one +// subnet in a VPC, the subnets' CIDR blocks must not overlap. The smallest +// IPv4 subnet (and VPC) you can create uses a /28 netmask (16 IPv4 addresses), +// and the largest uses a /16 netmask (65,536 IPv4 addresses). // // If you've associated an IPv6 CIDR block with your VPC, you can create a subnet // with an IPv6 CIDR block that uses a /64 prefix length. @@ -4869,8 +5235,8 @@ const opCreateTags = "CreateTags" // CreateTagsRequest generates a "aws/request.Request" representing the // client's request for the CreateTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4953,8 +5319,8 @@ const opCreateVolume = "CreateVolume" // CreateVolumeRequest generates a "aws/request.Request" representing the // client's request for the CreateVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
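Following the CreateSubnet description above (a VPC ID plus an IPv4 CIDR that fits inside the VPC's block), a minimal sketch with placeholder identifiers:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	// The CIDR must fall inside the VPC's block (/28 to /16) and must not
	// overlap any other subnet in the VPC; it cannot be changed afterwards.
	out, err := svc.CreateSubnet(&ec2.CreateSubnetInput{
		VpcId:            aws.String("vpc-0abc1234"),
		CidrBlock:        aws.String("10.0.1.0/24"),
		AvailabilityZone: aws.String("us-east-1a"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("subnet:", aws.StringValue(out.Subnet.SubnetId))
}
```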
@@ -5008,7 +5374,8 @@ func (c *EC2) CreateVolumeRequest(input *CreateVolumeInput) (req *request.Reques // in the Amazon Elastic Compute Cloud User Guide. // // You can tag your volumes during creation. For more information, see Tagging -// Your Amazon EC2 Resources (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html). +// Your Amazon EC2 Resources (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) +// in the Amazon Elastic Compute Cloud User Guide. // // For more information, see Creating an Amazon EBS Volume (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-volume.html) // in the Amazon Elastic Compute Cloud User Guide. @@ -5045,8 +5412,8 @@ const opCreateVpc = "CreateVpc" // CreateVpcRequest generates a "aws/request.Request" representing the // client's request for the CreateVpc operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5087,8 +5454,8 @@ func (c *EC2) CreateVpcRequest(input *CreateVpcInput) (req *request.Request, out // // Creates a VPC with the specified IPv4 CIDR block. The smallest VPC you can // create uses a /28 netmask (16 IPv4 addresses), and the largest uses a /16 -// netmask (65,536 IPv4 addresses). To help you decide how big to make your -// VPC, see Your VPC and Subnets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) +// netmask (65,536 IPv4 addresses). For more information about how large to +// make your VPC, see Your VPC and Subnets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) // in the Amazon Virtual Private Cloud User Guide. // // You can optionally request an Amazon-provided IPv6 CIDR block for the VPC. @@ -5096,8 +5463,8 @@ func (c *EC2) CreateVpcRequest(input *CreateVpcInput) (req *request.Request, out // pool of IPv6 addresses. You cannot choose the IPv6 range for your VPC. // // By default, each instance you launch in the VPC has the default DHCP options, -// which includes only a default DNS server that we provide (AmazonProvidedDNS). -// For more information about DHCP options, see DHCP Options Sets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html) +// which include only a default DNS server that we provide (AmazonProvidedDNS). +// For more information, see DHCP Options Sets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html) // in the Amazon Virtual Private Cloud User Guide. // // You can specify the instance tenancy value for the VPC when you create it. @@ -5137,8 +5504,8 @@ const opCreateVpcEndpoint = "CreateVpcEndpoint" // CreateVpcEndpointRequest generates a "aws/request.Request" representing the // client's request for the CreateVpcEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
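A sketch of the CreateVpc call documented above, requesting an Amazon-provided IPv6 block alongside the IPv4 CIDR; the CIDR and tenancy values are examples only.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	out, err := svc.CreateVpc(&ec2.CreateVpcInput{
		CidrBlock:                   aws.String("10.0.0.0/16"), // anywhere from /28 to /16
		AmazonProvidedIpv6CidrBlock: aws.Bool(true),            // optional Amazon-provided IPv6 block
		InstanceTenancy:             aws.String("default"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("vpc:", aws.StringValue(out.Vpc.VpcId))
}
```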
@@ -5227,8 +5594,8 @@ const opCreateVpcEndpointConnectionNotification = "CreateVpcEndpointConnectionNo // CreateVpcEndpointConnectionNotificationRequest generates a "aws/request.Request" representing the // client's request for the CreateVpcEndpointConnectionNotification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5307,8 +5674,8 @@ const opCreateVpcEndpointServiceConfiguration = "CreateVpcEndpointServiceConfigu // CreateVpcEndpointServiceConfigurationRequest generates a "aws/request.Request" representing the // client's request for the CreateVpcEndpointServiceConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5388,8 +5755,8 @@ const opCreateVpcPeeringConnection = "CreateVpcPeeringConnection" // CreateVpcPeeringConnectionRequest generates a "aws/request.Request" representing the // client's request for the CreateVpcPeeringConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5430,7 +5797,7 @@ func (c *EC2) CreateVpcPeeringConnectionRequest(input *CreateVpcPeeringConnectio // // Requests a VPC peering connection between two VPCs: a requester VPC that // you own and an accepter VPC with which to create the connection. The accepter -// VPC can belong to another AWS account and can be in a different region to +// VPC can belong to another AWS account and can be in a different Region to // the requester VPC. The requester VPC and accepter VPC cannot have overlapping // CIDR blocks. // @@ -5477,8 +5844,8 @@ const opCreateVpnConnection = "CreateVpnConnection" // CreateVpnConnectionRequest generates a "aws/request.Request" representing the // client's request for the CreateVpnConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5569,8 +5936,8 @@ const opCreateVpnConnectionRoute = "CreateVpnConnectionRoute" // CreateVpnConnectionRouteRequest generates a "aws/request.Request" representing the // client's request for the CreateVpnConnectionRoute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
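For the cross-account, cross-Region peering behavior described above, a sketch of the requester side; the account ID, Region, and VPC IDs are placeholders, and the accepter still has to approve the request with AcceptVpcPeeringConnection.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	out, err := svc.CreateVpcPeeringConnection(&ec2.CreateVpcPeeringConnectionInput{
		VpcId:       aws.String("vpc-1111aaaa"), // requester VPC
		PeerVpcId:   aws.String("vpc-2222bbbb"), // accepter VPC
		PeerOwnerId: aws.String("123456789012"), // only when the accepter VPC is in another account
		PeerRegion:  aws.String("us-west-2"),    // only when the accepter VPC is in another Region
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("peering:", aws.StringValue(out.VpcPeeringConnection.VpcPeeringConnectionId))
}
```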
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5652,8 +6019,8 @@ const opCreateVpnGateway = "CreateVpnGateway" // CreateVpnGatewayRequest generates a "aws/request.Request" representing the // client's request for the CreateVpnGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5732,8 +6099,8 @@ const opDeleteCustomerGateway = "DeleteCustomerGateway" // DeleteCustomerGatewayRequest generates a "aws/request.Request" representing the // client's request for the DeleteCustomerGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5809,8 +6176,8 @@ const opDeleteDhcpOptions = "DeleteDhcpOptions" // DeleteDhcpOptionsRequest generates a "aws/request.Request" representing the // client's request for the DeleteDhcpOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5888,8 +6255,8 @@ const opDeleteEgressOnlyInternetGateway = "DeleteEgressOnlyInternetGateway" // DeleteEgressOnlyInternetGatewayRequest generates a "aws/request.Request" representing the // client's request for the DeleteEgressOnlyInternetGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5928,7 +6295,7 @@ func (c *EC2) DeleteEgressOnlyInternetGatewayRequest(input *DeleteEgressOnlyInte // DeleteEgressOnlyInternetGateway API operation for Amazon Elastic Compute Cloud. // -// Deletes an egress-only Internet gateway. +// Deletes an egress-only internet gateway. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5958,12 +6325,92 @@ func (c *EC2) DeleteEgressOnlyInternetGatewayWithContext(ctx aws.Context, input return out, req.Send() } +const opDeleteFleets = "DeleteFleets" + +// DeleteFleetsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteFleets operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteFleets for more information on using the DeleteFleets +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteFleetsRequest method. +// req, resp := client.DeleteFleetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFleets +func (c *EC2) DeleteFleetsRequest(input *DeleteFleetsInput) (req *request.Request, output *DeleteFleetsOutput) { + op := &request.Operation{ + Name: opDeleteFleets, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteFleetsInput{} + } + + output = &DeleteFleetsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteFleets API operation for Amazon Elastic Compute Cloud. +// +// Deletes the specified EC2 Fleet. +// +// After you delete an EC2 Fleet, it launches no new instances. You must specify +// whether an EC2 Fleet should also terminate its instances. If you terminate +// the instances, the EC2 Fleet enters the deleted_terminating state. Otherwise, +// the EC2 Fleet enters the deleted_running state, and the instances continue +// to run until they are interrupted or you terminate them manually. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DeleteFleets for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFleets +func (c *EC2) DeleteFleets(input *DeleteFleetsInput) (*DeleteFleetsOutput, error) { + req, out := c.DeleteFleetsRequest(input) + return out, req.Send() +} + +// DeleteFleetsWithContext is the same as DeleteFleets with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteFleets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DeleteFleetsWithContext(ctx aws.Context, input *DeleteFleetsInput, opts ...request.Option) (*DeleteFleetsOutput, error) { + req, out := c.DeleteFleetsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteFlowLogs = "DeleteFlowLogs" // DeleteFlowLogsRequest generates a "aws/request.Request" representing the // client's request for the DeleteFlowLogs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
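Matching the DeleteFleets documentation above, which requires the caller to state whether the fleet's instances should also be terminated, a sketch with a placeholder fleet ID:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	out, err := svc.DeleteFleets(&ec2.DeleteFleetsInput{
		FleetIds: []*string{aws.String("fleet-0a1b2c3d-0000-0000-0000-000000000000")}, // placeholder
		// Required: terminate the fleet's instances (deleted_terminating) or
		// leave them running (deleted_running).
		TerminateInstances: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range out.SuccessfulFleetDeletions {
		fmt.Println(aws.StringValue(d.FleetId), "->", aws.StringValue(d.CurrentFleetState))
	}
}
```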
@@ -6036,8 +6483,8 @@ const opDeleteFpgaImage = "DeleteFpgaImage" // DeleteFpgaImageRequest generates a "aws/request.Request" representing the // client's request for the DeleteFpgaImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6110,8 +6557,8 @@ const opDeleteInternetGateway = "DeleteInternetGateway" // DeleteInternetGatewayRequest generates a "aws/request.Request" representing the // client's request for the DeleteInternetGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6152,7 +6599,7 @@ func (c *EC2) DeleteInternetGatewayRequest(input *DeleteInternetGatewayInput) (r // DeleteInternetGateway API operation for Amazon Elastic Compute Cloud. // -// Deletes the specified Internet gateway. You must detach the Internet gateway +// Deletes the specified internet gateway. You must detach the internet gateway // from the VPC before you can delete it. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -6187,8 +6634,8 @@ const opDeleteKeyPair = "DeleteKeyPair" // DeleteKeyPairRequest generates a "aws/request.Request" representing the // client's request for the DeleteKeyPair operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6263,8 +6710,8 @@ const opDeleteLaunchTemplate = "DeleteLaunchTemplate" // DeleteLaunchTemplateRequest generates a "aws/request.Request" representing the // client's request for the DeleteLaunchTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6338,8 +6785,8 @@ const opDeleteLaunchTemplateVersions = "DeleteLaunchTemplateVersions" // DeleteLaunchTemplateVersionsRequest generates a "aws/request.Request" representing the // client's request for the DeleteLaunchTemplateVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
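The DeleteInternetGateway documentation above requires detaching the gateway from its VPC first; a sketch of that order of operations with placeholder IDs:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	igwID := aws.String("igw-0abc1234") // placeholder

	// Detach from the VPC first; deleting an attached gateway fails.
	if _, err := svc.DetachInternetGateway(&ec2.DetachInternetGatewayInput{
		InternetGatewayId: igwID,
		VpcId:             aws.String("vpc-0abc1234"),
	}); err != nil {
		log.Fatal(err)
	}

	if _, err := svc.DeleteInternetGateway(&ec2.DeleteInternetGatewayInput{
		InternetGatewayId: igwID,
	}); err != nil {
		log.Fatal(err)
	}
}
```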
@@ -6415,8 +6862,8 @@ const opDeleteNatGateway = "DeleteNatGateway" // DeleteNatGatewayRequest generates a "aws/request.Request" representing the // client's request for the DeleteNatGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6491,8 +6938,8 @@ const opDeleteNetworkAcl = "DeleteNetworkAcl" // DeleteNetworkAclRequest generates a "aws/request.Request" representing the // client's request for the DeleteNetworkAcl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6568,8 +7015,8 @@ const opDeleteNetworkAclEntry = "DeleteNetworkAclEntry" // DeleteNetworkAclEntryRequest generates a "aws/request.Request" representing the // client's request for the DeleteNetworkAclEntry operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6645,8 +7092,8 @@ const opDeleteNetworkInterface = "DeleteNetworkInterface" // DeleteNetworkInterfaceRequest generates a "aws/request.Request" representing the // client's request for the DeleteNetworkInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6722,8 +7169,8 @@ const opDeleteNetworkInterfacePermission = "DeleteNetworkInterfacePermission" // DeleteNetworkInterfacePermissionRequest generates a "aws/request.Request" representing the // client's request for the DeleteNetworkInterfacePermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6799,8 +7246,8 @@ const opDeletePlacementGroup = "DeletePlacementGroup" // DeletePlacementGroupRequest generates a "aws/request.Request" representing the // client's request for the DeletePlacementGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6878,8 +7325,8 @@ const opDeleteRoute = "DeleteRoute" // DeleteRouteRequest generates a "aws/request.Request" representing the // client's request for the DeleteRoute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6954,8 +7401,8 @@ const opDeleteRouteTable = "DeleteRouteTable" // DeleteRouteTableRequest generates a "aws/request.Request" representing the // client's request for the DeleteRouteTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7032,8 +7479,8 @@ const opDeleteSecurityGroup = "DeleteSecurityGroup" // DeleteSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7112,8 +7559,8 @@ const opDeleteSnapshot = "DeleteSnapshot" // DeleteSnapshotRequest generates a "aws/request.Request" representing the // client's request for the DeleteSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7202,8 +7649,8 @@ const opDeleteSpotDatafeedSubscription = "DeleteSpotDatafeedSubscription" // DeleteSpotDatafeedSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the DeleteSpotDatafeedSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7278,8 +7725,8 @@ const opDeleteSubnet = "DeleteSubnet" // DeleteSubnetRequest generates a "aws/request.Request" representing the // client's request for the DeleteSubnet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7355,8 +7802,8 @@ const opDeleteTags = "DeleteTags" // DeleteTagsRequest generates a "aws/request.Request" representing the // client's request for the DeleteTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7435,8 +7882,8 @@ const opDeleteVolume = "DeleteVolume" // DeleteVolumeRequest generates a "aws/request.Request" representing the // client's request for the DeleteVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7480,7 +7927,7 @@ func (c *EC2) DeleteVolumeRequest(input *DeleteVolumeInput) (req *request.Reques // Deletes the specified EBS volume. The volume must be in the available state // (not attached to an instance). // -// The volume may remain in the deleting state for several minutes. +// The volume can remain in the deleting state for several minutes. // // For more information, see Deleting an Amazon EBS Volume (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-deleting-volume.html) // in the Amazon Elastic Compute Cloud User Guide. @@ -7517,8 +7964,8 @@ const opDeleteVpc = "DeleteVpc" // DeleteVpcRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpc operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7597,8 +8044,8 @@ const opDeleteVpcEndpointConnectionNotifications = "DeleteVpcEndpointConnectionN // DeleteVpcEndpointConnectionNotificationsRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpcEndpointConnectionNotifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7671,8 +8118,8 @@ const opDeleteVpcEndpointServiceConfigurations = "DeleteVpcEndpointServiceConfig // DeleteVpcEndpointServiceConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpcEndpointServiceConfigurations operation. 
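The DeleteVolume documentation above requires the volume to be in the available state and notes that it can remain in the deleting state for several minutes; a sketch that waits for availability before deleting (the volume ID is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	volID := aws.String("vol-0abc1234") // placeholder

	// The volume must be detached (state "available") before it can be deleted.
	if err := svc.WaitUntilVolumeAvailable(&ec2.DescribeVolumesInput{
		VolumeIds: []*string{volID},
	}); err != nil {
		log.Fatal(err)
	}

	if _, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: volID}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("deletion requested; the volume can stay in the deleting state for several minutes")
}
```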
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7748,8 +8195,8 @@ const opDeleteVpcEndpoints = "DeleteVpcEndpoints" // DeleteVpcEndpointsRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpcEndpoints operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7825,8 +8272,8 @@ const opDeleteVpcPeeringConnection = "DeleteVpcPeeringConnection" // DeleteVpcPeeringConnectionRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpcPeeringConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7903,8 +8350,8 @@ const opDeleteVpnConnection = "DeleteVpnConnection" // DeleteVpnConnectionRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpnConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7988,8 +8435,8 @@ const opDeleteVpnConnectionRoute = "DeleteVpnConnectionRoute" // DeleteVpnConnectionRouteRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpnConnectionRoute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8067,8 +8514,8 @@ const opDeleteVpnGateway = "DeleteVpnGateway" // DeleteVpnGatewayRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpnGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -8143,12 +8590,92 @@ func (c *EC2) DeleteVpnGatewayWithContext(ctx aws.Context, input *DeleteVpnGatew return out, req.Send() } +const opDeprovisionByoipCidr = "DeprovisionByoipCidr" + +// DeprovisionByoipCidrRequest generates a "aws/request.Request" representing the +// client's request for the DeprovisionByoipCidr operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeprovisionByoipCidr for more information on using the DeprovisionByoipCidr +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeprovisionByoipCidrRequest method. +// req, resp := client.DeprovisionByoipCidrRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeprovisionByoipCidr +func (c *EC2) DeprovisionByoipCidrRequest(input *DeprovisionByoipCidrInput) (req *request.Request, output *DeprovisionByoipCidrOutput) { + op := &request.Operation{ + Name: opDeprovisionByoipCidr, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeprovisionByoipCidrInput{} + } + + output = &DeprovisionByoipCidrOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeprovisionByoipCidr API operation for Amazon Elastic Compute Cloud. +// +// Releases the specified address range that you provisioned for use with your +// AWS resources through bring your own IP addresses (BYOIP) and deletes the +// corresponding address pool. +// +// Before you can release an address range, you must stop advertising it using +// WithdrawByoipCidr and you must not have any IP addresses allocated from its +// address range. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DeprovisionByoipCidr for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeprovisionByoipCidr +func (c *EC2) DeprovisionByoipCidr(input *DeprovisionByoipCidrInput) (*DeprovisionByoipCidrOutput, error) { + req, out := c.DeprovisionByoipCidrRequest(input) + return out, req.Send() +} + +// DeprovisionByoipCidrWithContext is the same as DeprovisionByoipCidr with the addition of +// the ability to pass a context and additional request options. +// +// See DeprovisionByoipCidr for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DeprovisionByoipCidrWithContext(ctx aws.Context, input *DeprovisionByoipCidrInput, opts ...request.Option) (*DeprovisionByoipCidrOutput, error) { + req, out := c.DeprovisionByoipCidrRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDeregisterImage = "DeregisterImage" // DeregisterImageRequest generates a "aws/request.Request" representing the // client's request for the DeregisterImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8231,8 +8758,8 @@ const opDescribeAccountAttributes = "DescribeAccountAttributes" // DescribeAccountAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAccountAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8323,8 +8850,8 @@ const opDescribeAddresses = "DescribeAddresses" // DescribeAddressesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAddresses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8401,8 +8928,8 @@ const opDescribeAggregateIdFormat = "DescribeAggregateIdFormat" // DescribeAggregateIdFormatRequest generates a "aws/request.Request" representing the // client's request for the DescribeAggregateIdFormat operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8489,8 +9016,8 @@ const opDescribeAvailabilityZones = "DescribeAvailabilityZones" // DescribeAvailabilityZonesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAvailabilityZones operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8569,8 +9096,8 @@ const opDescribeBundleTasks = "DescribeBundleTasks" // DescribeBundleTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeBundleTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
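For the BYOIP deprovisioning operation added above, a tentative sketch; the CIDR is a placeholder, and the range is assumed to have its advertisement already withdrawn with no addresses still allocated from it.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	// The range must no longer be advertised (WithdrawByoipCidr) and must have
	// no addresses allocated from it before it can be deprovisioned.
	if _, err := svc.DeprovisionByoipCidr(&ec2.DeprovisionByoipCidrInput{
		Cidr: aws.String("203.0.113.0/24"), // placeholder range
	}); err != nil {
		log.Fatal(err)
	}
}
```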
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8644,12 +9171,164 @@ func (c *EC2) DescribeBundleTasksWithContext(ctx aws.Context, input *DescribeBun return out, req.Send() } +const opDescribeByoipCidrs = "DescribeByoipCidrs" + +// DescribeByoipCidrsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeByoipCidrs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeByoipCidrs for more information on using the DescribeByoipCidrs +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeByoipCidrsRequest method. +// req, resp := client.DescribeByoipCidrsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeByoipCidrs +func (c *EC2) DescribeByoipCidrsRequest(input *DescribeByoipCidrsInput) (req *request.Request, output *DescribeByoipCidrsOutput) { + op := &request.Operation{ + Name: opDescribeByoipCidrs, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeByoipCidrsInput{} + } + + output = &DescribeByoipCidrsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeByoipCidrs API operation for Amazon Elastic Compute Cloud. +// +// Describes the IP address ranges that were specified in calls to ProvisionByoipCidr. +// +// To describe the address pools that were created when you provisioned the +// address ranges, use DescribePublicIpv4Pools. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeByoipCidrs for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeByoipCidrs +func (c *EC2) DescribeByoipCidrs(input *DescribeByoipCidrsInput) (*DescribeByoipCidrsOutput, error) { + req, out := c.DescribeByoipCidrsRequest(input) + return out, req.Send() +} + +// DescribeByoipCidrsWithContext is the same as DescribeByoipCidrs with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeByoipCidrs for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeByoipCidrsWithContext(ctx aws.Context, input *DescribeByoipCidrsInput, opts ...request.Option) (*DescribeByoipCidrsOutput, error) { + req, out := c.DescribeByoipCidrsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeCapacityReservations = "DescribeCapacityReservations" + +// DescribeCapacityReservationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeCapacityReservations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeCapacityReservations for more information on using the DescribeCapacityReservations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeCapacityReservationsRequest method. +// req, resp := client.DescribeCapacityReservationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeCapacityReservations +func (c *EC2) DescribeCapacityReservationsRequest(input *DescribeCapacityReservationsInput) (req *request.Request, output *DescribeCapacityReservationsOutput) { + op := &request.Operation{ + Name: opDescribeCapacityReservations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeCapacityReservationsInput{} + } + + output = &DescribeCapacityReservationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeCapacityReservations API operation for Amazon Elastic Compute Cloud. +// +// Describes one or more of your Capacity Reservations. The results describe +// only the Capacity Reservations in the AWS Region that you're currently using. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeCapacityReservations for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeCapacityReservations +func (c *EC2) DescribeCapacityReservations(input *DescribeCapacityReservationsInput) (*DescribeCapacityReservationsOutput, error) { + req, out := c.DescribeCapacityReservationsRequest(input) + return out, req.Send() +} + +// DescribeCapacityReservationsWithContext is the same as DescribeCapacityReservations with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeCapacityReservations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeCapacityReservationsWithContext(ctx aws.Context, input *DescribeCapacityReservationsInput, opts ...request.Option) (*DescribeCapacityReservationsOutput, error) { + req, out := c.DescribeCapacityReservationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDescribeClassicLinkInstances = "DescribeClassicLinkInstances" // DescribeClassicLinkInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeClassicLinkInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8690,7 +9369,7 @@ func (c *EC2) DescribeClassicLinkInstancesRequest(input *DescribeClassicLinkInst // // Describes one or more of your linked EC2-Classic instances. This request // only returns information about EC2-Classic instances linked to a VPC through -// ClassicLink; you cannot use this request to return information about other +// ClassicLink. You cannot use this request to return information about other // instances. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -8725,8 +9404,8 @@ const opDescribeConversionTasks = "DescribeConversionTasks" // DescribeConversionTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeConversionTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8803,8 +9482,8 @@ const opDescribeCustomerGateways = "DescribeCustomerGateways" // DescribeCustomerGatewaysRequest generates a "aws/request.Request" representing the // client's request for the DescribeCustomerGateways operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8881,8 +9560,8 @@ const opDescribeDhcpOptions = "DescribeDhcpOptions" // DescribeDhcpOptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDhcpOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8923,7 +9602,7 @@ func (c *EC2) DescribeDhcpOptionsRequest(input *DescribeDhcpOptionsInput) (req * // // Describes one or more of your DHCP options sets. // -// For more information about DHCP options sets, see DHCP Options Sets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html) +// For more information, see DHCP Options Sets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions @@ -8958,8 +9637,8 @@ const opDescribeEgressOnlyInternetGateways = "DescribeEgressOnlyInternetGateways // DescribeEgressOnlyInternetGatewaysRequest generates a "aws/request.Request" representing the // client's request for the DescribeEgressOnlyInternetGateways operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8998,7 +9677,7 @@ func (c *EC2) DescribeEgressOnlyInternetGatewaysRequest(input *DescribeEgressOnl // DescribeEgressOnlyInternetGateways API operation for Amazon Elastic Compute Cloud. // -// Describes one or more of your egress-only Internet gateways. +// Describes one or more of your egress-only internet gateways. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9032,8 +9711,8 @@ const opDescribeElasticGpus = "DescribeElasticGpus" // DescribeElasticGpusRequest generates a "aws/request.Request" representing the // client's request for the DescribeElasticGpus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9107,8 +9786,8 @@ const opDescribeExportTasks = "DescribeExportTasks" // DescribeExportTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeExportTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9177,12 +9856,234 @@ func (c *EC2) DescribeExportTasksWithContext(ctx aws.Context, input *DescribeExp return out, req.Send() } +const opDescribeFleetHistory = "DescribeFleetHistory" + +// DescribeFleetHistoryRequest generates a "aws/request.Request" representing the +// client's request for the DescribeFleetHistory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeFleetHistory for more information on using the DescribeFleetHistory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeFleetHistoryRequest method. 
+// req, resp := client.DescribeFleetHistoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFleetHistory +func (c *EC2) DescribeFleetHistoryRequest(input *DescribeFleetHistoryInput) (req *request.Request, output *DescribeFleetHistoryOutput) { + op := &request.Operation{ + Name: opDescribeFleetHistory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeFleetHistoryInput{} + } + + output = &DescribeFleetHistoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeFleetHistory API operation for Amazon Elastic Compute Cloud. +// +// Describes the events for the specified EC2 Fleet during the specified time. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeFleetHistory for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFleetHistory +func (c *EC2) DescribeFleetHistory(input *DescribeFleetHistoryInput) (*DescribeFleetHistoryOutput, error) { + req, out := c.DescribeFleetHistoryRequest(input) + return out, req.Send() +} + +// DescribeFleetHistoryWithContext is the same as DescribeFleetHistory with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeFleetHistory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeFleetHistoryWithContext(ctx aws.Context, input *DescribeFleetHistoryInput, opts ...request.Option) (*DescribeFleetHistoryOutput, error) { + req, out := c.DescribeFleetHistoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeFleetInstances = "DescribeFleetInstances" + +// DescribeFleetInstancesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeFleetInstances operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeFleetInstances for more information on using the DescribeFleetInstances +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeFleetInstancesRequest method. 
+// req, resp := client.DescribeFleetInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFleetInstances +func (c *EC2) DescribeFleetInstancesRequest(input *DescribeFleetInstancesInput) (req *request.Request, output *DescribeFleetInstancesOutput) { + op := &request.Operation{ + Name: opDescribeFleetInstances, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeFleetInstancesInput{} + } + + output = &DescribeFleetInstancesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeFleetInstances API operation for Amazon Elastic Compute Cloud. +// +// Describes the running instances for the specified EC2 Fleet. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeFleetInstances for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFleetInstances +func (c *EC2) DescribeFleetInstances(input *DescribeFleetInstancesInput) (*DescribeFleetInstancesOutput, error) { + req, out := c.DescribeFleetInstancesRequest(input) + return out, req.Send() +} + +// DescribeFleetInstancesWithContext is the same as DescribeFleetInstances with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeFleetInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeFleetInstancesWithContext(ctx aws.Context, input *DescribeFleetInstancesInput, opts ...request.Option) (*DescribeFleetInstancesOutput, error) { + req, out := c.DescribeFleetInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeFleets = "DescribeFleets" + +// DescribeFleetsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeFleets operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeFleets for more information on using the DescribeFleets +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeFleetsRequest method. 
+// req, resp := client.DescribeFleetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFleets +func (c *EC2) DescribeFleetsRequest(input *DescribeFleetsInput) (req *request.Request, output *DescribeFleetsOutput) { + op := &request.Operation{ + Name: opDescribeFleets, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeFleetsInput{} + } + + output = &DescribeFleetsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeFleets API operation for Amazon Elastic Compute Cloud. +// +// Describes one or more of your EC2 Fleets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeFleets for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFleets +func (c *EC2) DescribeFleets(input *DescribeFleetsInput) (*DescribeFleetsOutput, error) { + req, out := c.DescribeFleetsRequest(input) + return out, req.Send() +} + +// DescribeFleetsWithContext is the same as DescribeFleets with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeFleets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeFleetsWithContext(ctx aws.Context, input *DescribeFleetsInput, opts ...request.Option) (*DescribeFleetsOutput, error) { + req, out := c.DescribeFleetsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeFlowLogs = "DescribeFlowLogs" // DescribeFlowLogsRequest generates a "aws/request.Request" representing the // client's request for the DescribeFlowLogs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9257,8 +10158,8 @@ const opDescribeFpgaImageAttribute = "DescribeFpgaImageAttribute" // DescribeFpgaImageAttributeRequest generates a "aws/request.Request" representing the // client's request for the DescribeFpgaImageAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9331,8 +10232,8 @@ const opDescribeFpgaImages = "DescribeFpgaImages" // DescribeFpgaImagesRequest generates a "aws/request.Request" representing the // client's request for the DescribeFpgaImages operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9407,8 +10308,8 @@ const opDescribeHostReservationOfferings = "DescribeHostReservationOfferings" // DescribeHostReservationOfferingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeHostReservationOfferings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9447,14 +10348,14 @@ func (c *EC2) DescribeHostReservationOfferingsRequest(input *DescribeHostReserva // DescribeHostReservationOfferings API operation for Amazon Elastic Compute Cloud. // -// Describes the Dedicated Host Reservations that are available to purchase. +// Describes the Dedicated Host reservations that are available to purchase. // -// The results describe all the Dedicated Host Reservation offerings, including +// The results describe all the Dedicated Host reservation offerings, including // offerings that may not match the instance family and region of your Dedicated -// Hosts. When purchasing an offering, ensure that the the instance family and -// region of the offering matches that of the Dedicated Host/s it will be associated -// with. For an overview of supported instance types, see Dedicated Hosts Overview -// (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html) +// Hosts. When purchasing an offering, ensure that the instance family and Region +// of the offering matches that of the Dedicated Hosts with which it is to be +// associated. For more information about supported instance types, see Dedicated +// Hosts Overview (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -9489,8 +10390,8 @@ const opDescribeHostReservations = "DescribeHostReservations" // DescribeHostReservationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeHostReservations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9529,8 +10430,7 @@ func (c *EC2) DescribeHostReservationsRequest(input *DescribeHostReservationsInp // DescribeHostReservations API operation for Amazon Elastic Compute Cloud. // -// Describes Dedicated Host Reservations which are associated with Dedicated -// Hosts in your account. +// Describes reservations that are associated with Dedicated Hosts in your account. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9564,8 +10464,8 @@ const opDescribeHosts = "DescribeHosts" // DescribeHostsRequest generates a "aws/request.Request" representing the // client's request for the DescribeHosts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9608,7 +10508,7 @@ func (c *EC2) DescribeHostsRequest(input *DescribeHostsInput) (req *request.Requ // // The results describe only the Dedicated Hosts in the region you're currently // using. All listed instances consume capacity on your Dedicated Host. Dedicated -// Hosts that have recently been released will be listed with the state released. +// Hosts that have recently been released are listed with the state released. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9642,8 +10542,8 @@ const opDescribeIamInstanceProfileAssociations = "DescribeIamInstanceProfileAsso // DescribeIamInstanceProfileAssociationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeIamInstanceProfileAssociations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9716,8 +10616,8 @@ const opDescribeIdFormat = "DescribeIdFormat" // DescribeIdFormatRequest generates a "aws/request.Request" representing the // client's request for the DescribeIdFormat operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9808,8 +10708,8 @@ const opDescribeIdentityIdFormat = "DescribeIdentityIdFormat" // DescribeIdentityIdFormatRequest generates a "aws/request.Request" representing the // client's request for the DescribeIdentityIdFormat operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9898,8 +10798,8 @@ const opDescribeImageAttribute = "DescribeImageAttribute" // DescribeImageAttributeRequest generates a "aws/request.Request" representing the // client's request for the DescribeImageAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9973,8 +10873,8 @@ const opDescribeImages = "DescribeImages" // DescribeImagesRequest generates a "aws/request.Request" representing the // client's request for the DescribeImages operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10053,8 +10953,8 @@ const opDescribeImportImageTasks = "DescribeImportImageTasks" // DescribeImportImageTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeImportImageTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10128,8 +11028,8 @@ const opDescribeImportSnapshotTasks = "DescribeImportSnapshotTasks" // DescribeImportSnapshotTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeImportSnapshotTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10202,8 +11102,8 @@ const opDescribeInstanceAttribute = "DescribeInstanceAttribute" // DescribeInstanceAttributeRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstanceAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10280,8 +11180,8 @@ const opDescribeInstanceCreditSpecifications = "DescribeInstanceCreditSpecificat // DescribeInstanceCreditSpecificationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstanceCreditSpecifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -10320,14 +11220,19 @@ func (c *EC2) DescribeInstanceCreditSpecificationsRequest(input *DescribeInstanc // DescribeInstanceCreditSpecifications API operation for Amazon Elastic Compute Cloud. // -// Describes the credit option for CPU usage of one or more of your T2 instances. -// The credit options are standard and unlimited. +// Describes the credit option for CPU usage of one or more of your T2 or T3 +// instances. The credit options are standard and unlimited. // -// If you do not specify an instance ID, Amazon EC2 returns only the T2 instances -// with the unlimited credit option. If you specify one or more instance IDs, -// Amazon EC2 returns the credit option (standard or unlimited) of those instances. -// If you specify an instance ID that is not valid, such as an instance that -// is not a T2 instance, an error is returned. +// If you do not specify an instance ID, Amazon EC2 returns T2 and T3 instances +// with the unlimited credit option, as well as instances that were previously +// configured as T2 or T3 with the unlimited credit option. For example, if +// you resize a T2 instance, while it is configured as unlimited, to an M4 instance, +// Amazon EC2 returns the M4 instance. +// +// If you specify one or more instance IDs, Amazon EC2 returns the credit option +// (standard or unlimited) of those instances. If you specify an instance ID +// that is not valid, such as an instance that is not a T2 or T3 instance, an +// error is returned. // // Recently terminated instances might appear in the returned results. This // interval is usually less than one hour. @@ -10337,7 +11242,7 @@ func (c *EC2) DescribeInstanceCreditSpecificationsRequest(input *DescribeInstanc // all, the call fails. If you specify only instance IDs in an unaffected zone, // the call works normally. // -// For more information, see T2 Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) +// For more information, see Burstable Performance Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -10372,8 +11277,8 @@ const opDescribeInstanceStatus = "DescribeInstanceStatus" // DescribeInstanceStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstanceStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10523,8 +11428,8 @@ const opDescribeInstances = "DescribeInstances" // DescribeInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -10668,8 +11573,8 @@ const opDescribeInternetGateways = "DescribeInternetGateways" // DescribeInternetGatewaysRequest generates a "aws/request.Request" representing the // client's request for the DescribeInternetGateways operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10708,7 +11613,7 @@ func (c *EC2) DescribeInternetGatewaysRequest(input *DescribeInternetGatewaysInp // DescribeInternetGateways API operation for Amazon Elastic Compute Cloud. // -// Describes one or more of your Internet gateways. +// Describes one or more of your internet gateways. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -10742,8 +11647,8 @@ const opDescribeKeyPairs = "DescribeKeyPairs" // DescribeKeyPairsRequest generates a "aws/request.Request" representing the // client's request for the DescribeKeyPairs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10819,8 +11724,8 @@ const opDescribeLaunchTemplateVersions = "DescribeLaunchTemplateVersions" // DescribeLaunchTemplateVersionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeLaunchTemplateVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10894,8 +11799,8 @@ const opDescribeLaunchTemplates = "DescribeLaunchTemplates" // DescribeLaunchTemplatesRequest generates a "aws/request.Request" representing the // client's request for the DescribeLaunchTemplates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10968,8 +11873,8 @@ const opDescribeMovingAddresses = "DescribeMovingAddresses" // DescribeMovingAddressesRequest generates a "aws/request.Request" representing the // client's request for the DescribeMovingAddresses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -11044,8 +11949,8 @@ const opDescribeNatGateways = "DescribeNatGateways" // DescribeNatGatewaysRequest generates a "aws/request.Request" representing the // client's request for the DescribeNatGateways operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11090,7 +11995,7 @@ func (c *EC2) DescribeNatGatewaysRequest(input *DescribeNatGatewaysInput) (req * // DescribeNatGateways API operation for Amazon Elastic Compute Cloud. // -// Describes one or more of the your NAT gateways. +// Describes one or more of your NAT gateways. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -11174,8 +12079,8 @@ const opDescribeNetworkAcls = "DescribeNetworkAcls" // DescribeNetworkAclsRequest generates a "aws/request.Request" representing the // client's request for the DescribeNetworkAcls operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11216,7 +12121,7 @@ func (c *EC2) DescribeNetworkAclsRequest(input *DescribeNetworkAclsInput) (req * // // Describes one or more of your network ACLs. // -// For more information about network ACLs, see Network ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) +// For more information, see Network ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -11251,8 +12156,8 @@ const opDescribeNetworkInterfaceAttribute = "DescribeNetworkInterfaceAttribute" // DescribeNetworkInterfaceAttributeRequest generates a "aws/request.Request" representing the // client's request for the DescribeNetworkInterfaceAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11326,8 +12231,8 @@ const opDescribeNetworkInterfacePermissions = "DescribeNetworkInterfacePermissio // DescribeNetworkInterfacePermissionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeNetworkInterfacePermissions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -11400,8 +12305,8 @@ const opDescribeNetworkInterfaces = "DescribeNetworkInterfaces" // DescribeNetworkInterfacesRequest generates a "aws/request.Request" representing the // client's request for the DescribeNetworkInterfaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11427,6 +12332,12 @@ func (c *EC2) DescribeNetworkInterfacesRequest(input *DescribeNetworkInterfacesI Name: opDescribeNetworkInterfaces, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { @@ -11470,12 +12381,62 @@ func (c *EC2) DescribeNetworkInterfacesWithContext(ctx aws.Context, input *Descr return out, req.Send() } +// DescribeNetworkInterfacesPages iterates over the pages of a DescribeNetworkInterfaces operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeNetworkInterfaces method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeNetworkInterfaces operation. +// pageNum := 0 +// err := client.DescribeNetworkInterfacesPages(params, +// func(page *DescribeNetworkInterfacesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *EC2) DescribeNetworkInterfacesPages(input *DescribeNetworkInterfacesInput, fn func(*DescribeNetworkInterfacesOutput, bool) bool) error { + return c.DescribeNetworkInterfacesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeNetworkInterfacesPagesWithContext same as DescribeNetworkInterfacesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeNetworkInterfacesPagesWithContext(ctx aws.Context, input *DescribeNetworkInterfacesInput, fn func(*DescribeNetworkInterfacesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeNetworkInterfacesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeNetworkInterfacesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeNetworkInterfacesOutput), !p.HasNextPage()) + } + return p.Err() +} + const opDescribePlacementGroups = "DescribePlacementGroups" // DescribePlacementGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribePlacementGroups operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11550,8 +12511,8 @@ const opDescribePrefixLists = "DescribePrefixLists" // DescribePrefixListsRequest generates a "aws/request.Request" representing the // client's request for the DescribePrefixLists operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11594,7 +12555,8 @@ func (c *EC2) DescribePrefixListsRequest(input *DescribePrefixListsInput) (req * // the prefix list name and prefix list ID of the service and the IP address // range for the service. A prefix list ID is required for creating an outbound // security group rule that allows traffic from a VPC to access an AWS service -// through a gateway VPC endpoint. +// through a gateway VPC endpoint. Currently, the services that support this +// action are Amazon S3 and Amazon DynamoDB. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -11628,8 +12590,8 @@ const opDescribePrincipalIdFormat = "DescribePrincipalIdFormat" // DescribePrincipalIdFormatRequest generates a "aws/request.Request" representing the // client's request for the DescribePrincipalIdFormat operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11712,12 +12674,86 @@ func (c *EC2) DescribePrincipalIdFormatWithContext(ctx aws.Context, input *Descr return out, req.Send() } +const opDescribePublicIpv4Pools = "DescribePublicIpv4Pools" + +// DescribePublicIpv4PoolsRequest generates a "aws/request.Request" representing the +// client's request for the DescribePublicIpv4Pools operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribePublicIpv4Pools for more information on using the DescribePublicIpv4Pools +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribePublicIpv4PoolsRequest method. 
+// req, resp := client.DescribePublicIpv4PoolsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePublicIpv4Pools +func (c *EC2) DescribePublicIpv4PoolsRequest(input *DescribePublicIpv4PoolsInput) (req *request.Request, output *DescribePublicIpv4PoolsOutput) { + op := &request.Operation{ + Name: opDescribePublicIpv4Pools, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribePublicIpv4PoolsInput{} + } + + output = &DescribePublicIpv4PoolsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribePublicIpv4Pools API operation for Amazon Elastic Compute Cloud. +// +// Describes the specified IPv4 address pools. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribePublicIpv4Pools for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePublicIpv4Pools +func (c *EC2) DescribePublicIpv4Pools(input *DescribePublicIpv4PoolsInput) (*DescribePublicIpv4PoolsOutput, error) { + req, out := c.DescribePublicIpv4PoolsRequest(input) + return out, req.Send() +} + +// DescribePublicIpv4PoolsWithContext is the same as DescribePublicIpv4Pools with the addition of +// the ability to pass a context and additional request options. +// +// See DescribePublicIpv4Pools for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribePublicIpv4PoolsWithContext(ctx aws.Context, input *DescribePublicIpv4PoolsInput, opts ...request.Option) (*DescribePublicIpv4PoolsOutput, error) { + req, out := c.DescribePublicIpv4PoolsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeRegions = "DescribeRegions" // DescribeRegionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeRegions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11793,8 +12829,8 @@ const opDescribeReservedInstances = "DescribeReservedInstances" // DescribeReservedInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -11870,8 +12906,8 @@ const opDescribeReservedInstancesListings = "DescribeReservedInstancesListings" // DescribeReservedInstancesListingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedInstancesListings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11965,8 +13001,8 @@ const opDescribeReservedInstancesModifications = "DescribeReservedInstancesModif // DescribeReservedInstancesModificationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedInstancesModifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12101,8 +13137,8 @@ const opDescribeReservedInstancesOfferings = "DescribeReservedInstancesOfferings // DescribeReservedInstancesOfferingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedInstancesOfferings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12242,8 +13278,8 @@ const opDescribeRouteTables = "DescribeRouteTables" // DescribeRouteTablesRequest generates a "aws/request.Request" representing the // client's request for the DescribeRouteTables operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12269,6 +13305,12 @@ func (c *EC2) DescribeRouteTablesRequest(input *DescribeRouteTablesInput) (req * Name: opDescribeRouteTables, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { @@ -12289,7 +13331,7 @@ func (c *EC2) DescribeRouteTablesRequest(input *DescribeRouteTablesInput) (req * // with the main route table. This command does not return the subnet ID for // implicit associations. // -// For more information about route tables, see Route Tables (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) +// For more information, see Route Tables (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions @@ -12320,12 +13362,62 @@ func (c *EC2) DescribeRouteTablesWithContext(ctx aws.Context, input *DescribeRou return out, req.Send() } +// DescribeRouteTablesPages iterates over the pages of a DescribeRouteTables operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeRouteTables method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeRouteTables operation. +// pageNum := 0 +// err := client.DescribeRouteTablesPages(params, +// func(page *DescribeRouteTablesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *EC2) DescribeRouteTablesPages(input *DescribeRouteTablesInput, fn func(*DescribeRouteTablesOutput, bool) bool) error { + return c.DescribeRouteTablesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeRouteTablesPagesWithContext same as DescribeRouteTablesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeRouteTablesPagesWithContext(ctx aws.Context, input *DescribeRouteTablesInput, fn func(*DescribeRouteTablesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeRouteTablesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeRouteTablesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeRouteTablesOutput), !p.HasNextPage()) + } + return p.Err() +} + const opDescribeScheduledInstanceAvailability = "DescribeScheduledInstanceAvailability" // DescribeScheduledInstanceAvailabilityRequest generates a "aws/request.Request" representing the // client's request for the DescribeScheduledInstanceAvailability operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12406,8 +13498,8 @@ const opDescribeScheduledInstances = "DescribeScheduledInstances" // DescribeScheduledInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeScheduledInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -12480,8 +13572,8 @@ const opDescribeSecurityGroupReferences = "DescribeSecurityGroupReferences" // DescribeSecurityGroupReferencesRequest generates a "aws/request.Request" representing the // client's request for the DescribeSecurityGroupReferences operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12555,8 +13647,8 @@ const opDescribeSecurityGroups = "DescribeSecurityGroups" // DescribeSecurityGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12582,6 +13674,12 @@ func (c *EC2) DescribeSecurityGroupsRequest(input *DescribeSecurityGroupsInput) Name: opDescribeSecurityGroups, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { @@ -12632,12 +13730,62 @@ func (c *EC2) DescribeSecurityGroupsWithContext(ctx aws.Context, input *Describe return out, req.Send() } +// DescribeSecurityGroupsPages iterates over the pages of a DescribeSecurityGroups operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeSecurityGroups method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeSecurityGroups operation. +// pageNum := 0 +// err := client.DescribeSecurityGroupsPages(params, +// func(page *DescribeSecurityGroupsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *EC2) DescribeSecurityGroupsPages(input *DescribeSecurityGroupsInput, fn func(*DescribeSecurityGroupsOutput, bool) bool) error { + return c.DescribeSecurityGroupsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeSecurityGroupsPagesWithContext same as DescribeSecurityGroupsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *EC2) DescribeSecurityGroupsPagesWithContext(ctx aws.Context, input *DescribeSecurityGroupsInput, fn func(*DescribeSecurityGroupsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeSecurityGroupsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeSecurityGroupsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeSecurityGroupsOutput), !p.HasNextPage()) + } + return p.Err() +} + const opDescribeSnapshotAttribute = "DescribeSnapshotAttribute" // DescribeSnapshotAttributeRequest generates a "aws/request.Request" representing the // client's request for the DescribeSnapshotAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12714,8 +13862,8 @@ const opDescribeSnapshots = "DescribeSnapshots" // DescribeSnapshotsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSnapshots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12889,8 +14037,8 @@ const opDescribeSpotDatafeedSubscription = "DescribeSpotDatafeedSubscription" // DescribeSpotDatafeedSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the DescribeSpotDatafeedSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -12931,7 +14079,7 @@ func (c *EC2) DescribeSpotDatafeedSubscriptionRequest(input *DescribeSpotDatafee // // Describes the data feed for Spot Instances. For more information, see Spot // Instance Data Feed (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-data-feeds.html) -// in the Amazon Elastic Compute Cloud User Guide. +// in the Amazon EC2 User Guide for Linux Instances. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -12965,8 +14113,8 @@ const opDescribeSpotFleetInstances = "DescribeSpotFleetInstances" // DescribeSpotFleetInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeSpotFleetInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13039,8 +14187,8 @@ const opDescribeSpotFleetRequestHistory = "DescribeSpotFleetRequestHistory" // DescribeSpotFleetRequestHistoryRequest generates a "aws/request.Request" representing the // client's request for the DescribeSpotFleetRequestHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13084,7 +14232,7 @@ func (c *EC2) DescribeSpotFleetRequestHistoryRequest(input *DescribeSpotFleetReq // // Spot Fleet events are delayed by up to 30 seconds before they can be described. // This ensures that you can query by the last evaluated time and not miss a -// recorded event. +// recorded event. Spot Fleet events are available for 48 hours. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -13118,8 +14266,8 @@ const opDescribeSpotFleetRequests = "DescribeSpotFleetRequests" // DescribeSpotFleetRequestsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSpotFleetRequests operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13251,8 +14399,8 @@ const opDescribeSpotInstanceRequests = "DescribeSpotInstanceRequests" // DescribeSpotInstanceRequestsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSpotInstanceRequests operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13291,11 +14439,7 @@ func (c *EC2) DescribeSpotInstanceRequestsRequest(input *DescribeSpotInstanceReq // DescribeSpotInstanceRequests API operation for Amazon Elastic Compute Cloud. // -// Describes the Spot Instance requests that belong to your account. Spot Instances -// are instances that Amazon EC2 launches when the Spot price that you specify -// exceeds the current Spot price. For more information, see Spot Instance Requests -// (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) in -// the Amazon Elastic Compute Cloud User Guide. +// Describes the specified Spot Instance requests. // // You can use DescribeSpotInstanceRequests to find a running Spot Instance // by examining the response. If the status of the Spot Instance is fulfilled, @@ -13303,8 +14447,8 @@ func (c *EC2) DescribeSpotInstanceRequestsRequest(input *DescribeSpotInstanceReq // instance. 
Alternatively, you can use DescribeInstances with a filter to look // for instances where the instance lifecycle is spot. // -// Spot Instance requests are deleted 4 hours after they are canceled and their -// instances are terminated. +// Spot Instance requests are deleted four hours after they are canceled and +// their instances are terminated. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -13338,8 +14482,8 @@ const opDescribeSpotPriceHistory = "DescribeSpotPriceHistory" // DescribeSpotPriceHistoryRequest generates a "aws/request.Request" representing the // client's request for the DescribeSpotPriceHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13386,7 +14530,7 @@ func (c *EC2) DescribeSpotPriceHistoryRequest(input *DescribeSpotPriceHistoryInp // // Describes the Spot price history. For more information, see Spot Instance // Pricing History (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances-history.html) -// in the Amazon Elastic Compute Cloud User Guide. +// in the Amazon EC2 User Guide for Linux Instances. // // When you specify a start and end time, this operation returns the prices // of the instance types within the time range that you specified and the time @@ -13475,8 +14619,8 @@ const opDescribeStaleSecurityGroups = "DescribeStaleSecurityGroups" // DescribeStaleSecurityGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeStaleSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13552,8 +14696,8 @@ const opDescribeSubnets = "DescribeSubnets" // DescribeSubnetsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSubnets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13594,7 +14738,7 @@ func (c *EC2) DescribeSubnetsRequest(input *DescribeSubnetsInput) (req *request. // // Describes one or more of your subnets. // -// For more information about subnets, see Your VPC and Subnets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) +// For more information, see Your VPC and Subnets (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions @@ -13629,8 +14773,8 @@ const opDescribeTags = "DescribeTags" // DescribeTagsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13762,8 +14906,8 @@ const opDescribeVolumeAttribute = "DescribeVolumeAttribute" // DescribeVolumeAttributeRequest generates a "aws/request.Request" representing the // client's request for the DescribeVolumeAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13840,8 +14984,8 @@ const opDescribeVolumeStatus = "DescribeVolumeStatus" // DescribeVolumeStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeVolumeStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -13903,8 +15047,9 @@ func (c *EC2) DescribeVolumeStatusRequest(input *DescribeVolumeStatusInput) (req // status of the volume is ok. If the check fails, the overall status is impaired. // If the status is insufficient-data, then the checks may still be taking place // on your volume at the time. We recommend that you retry the request. For -// more information on volume status, see Monitoring the Status of Your Volumes -// (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-status.html). +// more information about volume status, see Monitoring the Status of Your Volumes +// (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-status.html) +// in the Amazon Elastic Compute Cloud User Guide. // // Events: Reflect the cause of a volume status and may require you to take // action. For example, if your volume returns an impaired status, then the @@ -14004,8 +15149,8 @@ const opDescribeVolumes = "DescribeVolumes" // DescribeVolumesRequest generates a "aws/request.Request" representing the // client's request for the DescribeVolumes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
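// The DescribeVolumeStatus documentation above distinguishes the ok, impaired,
// and insufficient-data statuses. A small sketch of branching on those values;
// the volume ID is a placeholder and the handling of each case is left to the caller.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.DescribeVolumeStatus(&ec2.DescribeVolumeStatusInput{
		VolumeIds: []*string{aws.String("vol-0123456789abcdef0")}, // hypothetical volume ID
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, item := range out.VolumeStatuses {
		switch aws.StringValue(item.VolumeStatus.Status) {
		case "ok":
			fmt.Println(aws.StringValue(item.VolumeId), "status checks passed")
		case "impaired":
			fmt.Println(aws.StringValue(item.VolumeId), "needs attention")
		default:
			// insufficient-data: checks may still be running, so retry later.
			fmt.Println(aws.StringValue(item.VolumeId), "status not yet available")
		}
	}
}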
@@ -14144,8 +15289,8 @@ const opDescribeVolumesModifications = "DescribeVolumesModifications" // DescribeVolumesModificationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVolumesModifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14196,7 +15341,8 @@ func (c *EC2) DescribeVolumesModificationsRequest(input *DescribeVolumesModifica // You can also use CloudWatch Events to check the status of a modification // to an EBS volume. For information about CloudWatch Events, see the Amazon // CloudWatch Events User Guide (http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/). -// For more information, see Monitoring Volume Modifications" (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#monitoring_mods). +// For more information, see Monitoring Volume Modifications" (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#monitoring_mods) +// in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -14230,8 +15376,8 @@ const opDescribeVpcAttribute = "DescribeVpcAttribute" // DescribeVpcAttributeRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14305,8 +15451,8 @@ const opDescribeVpcClassicLink = "DescribeVpcClassicLink" // DescribeVpcClassicLinkRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcClassicLink operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14379,8 +15525,8 @@ const opDescribeVpcClassicLinkDnsSupport = "DescribeVpcClassicLinkDnsSupport" // DescribeVpcClassicLinkDnsSupportRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcClassicLinkDnsSupport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
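// DescribeVolumesModifications, documented above, is the polling alternative to
// watching CloudWatch Events for a volume modification. A sketch of reading the
// state and progress for one volume; the VolumesModifications, ModificationState,
// and Progress field names are assumptions based on the generated output types,
// and the volume ID passed by the caller is a placeholder.
package awsexamples

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// volumeModificationProgress returns the current modification state (for
// example "modifying", "optimizing", or "completed") and the percent progress.
func volumeModificationProgress(svc *ec2.EC2, volumeID string) (string, int64, error) {
	out, err := svc.DescribeVolumesModifications(&ec2.DescribeVolumesModificationsInput{
		VolumeIds: []*string{aws.String(volumeID)},
	})
	if err != nil {
		return "", 0, err
	}
	if len(out.VolumesModifications) == 0 {
		return "", 0, fmt.Errorf("no modification found for %s", volumeID)
	}
	m := out.VolumesModifications[0]
	return aws.StringValue(m.ModificationState), aws.Int64Value(m.Progress), nil
}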
@@ -14459,8 +15605,8 @@ const opDescribeVpcEndpointConnectionNotifications = "DescribeVpcEndpointConnect // DescribeVpcEndpointConnectionNotificationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcEndpointConnectionNotifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14534,8 +15680,8 @@ const opDescribeVpcEndpointConnections = "DescribeVpcEndpointConnections" // DescribeVpcEndpointConnectionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcEndpointConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14609,8 +15755,8 @@ const opDescribeVpcEndpointServiceConfigurations = "DescribeVpcEndpointServiceCo // DescribeVpcEndpointServiceConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcEndpointServiceConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14683,8 +15829,8 @@ const opDescribeVpcEndpointServicePermissions = "DescribeVpcEndpointServicePermi // DescribeVpcEndpointServicePermissionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcEndpointServicePermissions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14758,8 +15904,8 @@ const opDescribeVpcEndpointServices = "DescribeVpcEndpointServices" // DescribeVpcEndpointServicesRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcEndpointServices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14832,8 +15978,8 @@ const opDescribeVpcEndpoints = "DescribeVpcEndpoints" // DescribeVpcEndpointsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcEndpoints operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14906,8 +16052,8 @@ const opDescribeVpcPeeringConnections = "DescribeVpcPeeringConnections" // DescribeVpcPeeringConnectionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcPeeringConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -14980,8 +16126,8 @@ const opDescribeVpcs = "DescribeVpcs" // DescribeVpcsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15054,8 +16200,8 @@ const opDescribeVpnConnections = "DescribeVpnConnections" // DescribeVpnConnectionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpnConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15132,8 +16278,8 @@ const opDescribeVpnGateways = "DescribeVpnGateways" // DescribeVpnGatewaysRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpnGateways operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15210,8 +16356,8 @@ const opDetachClassicLinkVpc = "DetachClassicLinkVpc" // DetachClassicLinkVpcRequest generates a "aws/request.Request" representing the // client's request for the DetachClassicLinkVpc operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
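// The *WithContext variants described above all require a non-nil context and
// panic otherwise. A sketch of calling one of them with a deadline so a stalled
// request is cancelled rather than blocking; the 30-second timeout is an
// arbitrary illustrative choice.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Never pass a nil context; derive one with a timeout instead.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	out, err := svc.DescribeVpcsWithContext(ctx, &ec2.DescribeVpcsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, vpc := range out.Vpcs {
		fmt.Println(aws.StringValue(vpc.VpcId), aws.StringValue(vpc.CidrBlock))
	}
}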
@@ -15286,8 +16432,8 @@ const opDetachInternetGateway = "DetachInternetGateway" // DetachInternetGatewayRequest generates a "aws/request.Request" representing the // client's request for the DetachInternetGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15328,8 +16474,8 @@ func (c *EC2) DetachInternetGatewayRequest(input *DetachInternetGatewayInput) (r // DetachInternetGateway API operation for Amazon Elastic Compute Cloud. // -// Detaches an Internet gateway from a VPC, disabling connectivity between the -// Internet and the VPC. The VPC must not contain any running instances with +// Detaches an internet gateway from a VPC, disabling connectivity between the +// internet and the VPC. The VPC must not contain any running instances with // Elastic IP addresses or public IPv4 addresses. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -15364,8 +16510,8 @@ const opDetachNetworkInterface = "DetachNetworkInterface" // DetachNetworkInterfaceRequest generates a "aws/request.Request" representing the // client's request for the DetachNetworkInterface operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15440,8 +16586,8 @@ const opDetachVolume = "DetachVolume" // DetachVolumeRequest generates a "aws/request.Request" representing the // client's request for the DetachVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15527,8 +16673,8 @@ const opDetachVpnGateway = "DetachVpnGateway" // DetachVpnGatewayRequest generates a "aws/request.Request" representing the // client's request for the DetachVpnGateway operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15610,8 +16756,8 @@ const opDisableVgwRoutePropagation = "DisableVgwRoutePropagation" // DisableVgwRoutePropagationRequest generates a "aws/request.Request" representing the // client's request for the DisableVgwRoutePropagation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15687,8 +16833,8 @@ const opDisableVpcClassicLink = "DisableVpcClassicLink" // DisableVpcClassicLinkRequest generates a "aws/request.Request" representing the // client's request for the DisableVpcClassicLink operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15762,8 +16908,8 @@ const opDisableVpcClassicLinkDnsSupport = "DisableVpcClassicLinkDnsSupport" // DisableVpcClassicLinkDnsSupportRequest generates a "aws/request.Request" representing the // client's request for the DisableVpcClassicLinkDnsSupport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15804,8 +16950,8 @@ func (c *EC2) DisableVpcClassicLinkDnsSupportRequest(input *DisableVpcClassicLin // // Disables ClassicLink DNS support for a VPC. If disabled, DNS hostnames resolve // to public IP addresses when addressed between a linked EC2-Classic instance -// and instances in the VPC to which it's linked. For more information about -// ClassicLink, see ClassicLink (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html) +// and instances in the VPC to which it's linked. For more information, see +// ClassicLink (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -15840,8 +16986,8 @@ const opDisassociateAddress = "DisassociateAddress" // DisassociateAddressRequest generates a "aws/request.Request" representing the // client's request for the DisassociateAddress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -15924,8 +17070,8 @@ const opDisassociateIamInstanceProfile = "DisassociateIamInstanceProfile" // DisassociateIamInstanceProfileRequest generates a "aws/request.Request" representing the // client's request for the DisassociateIamInstanceProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
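// The doc comments above repeatedly note that failures come back as awserr.Error
// values whose Code and Message methods carry the service-level detail. A sketch
// of that type assertion around one of the calls; the association ID and the
// error code mentioned in the comment are illustrative only.
package awsexamples

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func disassociateEIP(svc *ec2.EC2, associationID string) error {
	_, err := svc.DisassociateAddress(&ec2.DisassociateAddressInput{
		AssociationId: aws.String(associationID),
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			// Code() is the EC2 error code, e.g. "InvalidAssociationID.NotFound";
			// Message() is the human-readable description.
			log.Printf("EC2 error %s: %s", aerr.Code(), aerr.Message())
		}
		return err
	}
	return nil
}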
@@ -16000,8 +17146,8 @@ const opDisassociateRouteTable = "DisassociateRouteTable" // DisassociateRouteTableRequest generates a "aws/request.Request" representing the // client's request for the DisassociateRouteTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16081,8 +17227,8 @@ const opDisassociateSubnetCidrBlock = "DisassociateSubnetCidrBlock" // DisassociateSubnetCidrBlockRequest generates a "aws/request.Request" representing the // client's request for the DisassociateSubnetCidrBlock operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16157,8 +17303,8 @@ const opDisassociateVpcCidrBlock = "DisassociateVpcCidrBlock" // DisassociateVpcCidrBlockRequest generates a "aws/request.Request" representing the // client's request for the DisassociateVpcCidrBlock operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16237,8 +17383,8 @@ const opEnableVgwRoutePropagation = "EnableVgwRoutePropagation" // EnableVgwRoutePropagationRequest generates a "aws/request.Request" representing the // client's request for the EnableVgwRoutePropagation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16314,8 +17460,8 @@ const opEnableVolumeIO = "EnableVolumeIO" // EnableVolumeIORequest generates a "aws/request.Request" representing the // client's request for the EnableVolumeIO operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16391,8 +17537,8 @@ const opEnableVpcClassicLink = "EnableVpcClassicLink" // EnableVpcClassicLinkRequest generates a "aws/request.Request" representing the // client's request for the EnableVpcClassicLink operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16433,7 +17579,7 @@ func (c *EC2) EnableVpcClassicLinkRequest(input *EnableVpcClassicLinkInput) (req // // Enables a VPC for ClassicLink. You can then link EC2-Classic instances to // your ClassicLink-enabled VPC to allow communication over private IP addresses. -// You cannot enable your VPC for ClassicLink if any of your VPC's route tables +// You cannot enable your VPC for ClassicLink if any of your VPC route tables // have existing routes for address ranges within the 10.0.0.0/8 IP address // range, excluding local routes for VPCs in the 10.0.0.0/16 and 10.1.0.0/16 // IP address ranges. For more information, see ClassicLink (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html) @@ -16471,8 +17617,8 @@ const opEnableVpcClassicLinkDnsSupport = "EnableVpcClassicLinkDnsSupport" // EnableVpcClassicLinkDnsSupportRequest generates a "aws/request.Request" representing the // client's request for the EnableVpcClassicLinkDnsSupport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16515,8 +17661,8 @@ func (c *EC2) EnableVpcClassicLinkDnsSupportRequest(input *EnableVpcClassicLinkD // the DNS hostname of a linked EC2-Classic instance resolves to its private // IP address when addressed from an instance in the VPC to which it's linked. // Similarly, the DNS hostname of an instance in a VPC resolves to its private -// IP address when addressed from a linked EC2-Classic instance. For more information -// about ClassicLink, see ClassicLink (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html) +// IP address when addressed from a linked EC2-Classic instance. For more information, +// see ClassicLink (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -16551,8 +17697,8 @@ const opGetConsoleOutput = "GetConsoleOutput" // GetConsoleOutputRequest generates a "aws/request.Request" representing the // client's request for the GetConsoleOutput operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16591,24 +17737,23 @@ func (c *EC2) GetConsoleOutputRequest(input *GetConsoleOutputInput) (req *reques // GetConsoleOutput API operation for Amazon Elastic Compute Cloud. // -// Gets the console output for the specified instance. -// -// Instances do not have a physical monitor through which you can view their -// console output. They also lack physical controls that allow you to power -// up, reboot, or shut them down. To allow these actions, we provide them through -// the Amazon EC2 API and command line interface. 
+// Gets the console output for the specified instance. For Linux instances, +// the instance console output displays the exact console output that would +// normally be displayed on a physical monitor attached to a computer. For Windows +// instances, the instance console output includes the last three system event +// log errors. // -// Instance console output is buffered and posted shortly after instance boot, -// reboot, and termination. Amazon EC2 preserves the most recent 64 KB output, -// which is available for at least one hour after the most recent post. +// By default, the console output returns buffered information that was posted +// shortly after an instance transition state (start, stop, reboot, or terminate). +// This information is available for at least one hour after the most recent +// post. Only the most recent 64 KB of console output is available. // -// For Linux instances, the instance console output displays the exact console -// output that would normally be displayed on a physical monitor attached to -// a computer. This output is buffered because the instance produces it and -// then posts it to a store where the instance's owner can retrieve it. +// You can optionally retrieve the latest serial console output at any time +// during the instance lifecycle. This option is supported on instance types +// that use the Nitro hypervisor. // -// For Windows instances, the instance console output includes output from the -// EC2Config service. +// For more information, see Instance Console Output (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-console.html#instance-console-console-output) +// in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -16642,8 +17787,8 @@ const opGetConsoleScreenshot = "GetConsoleScreenshot" // GetConsoleScreenshotRequest generates a "aws/request.Request" representing the // client's request for the GetConsoleScreenshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16718,8 +17863,8 @@ const opGetHostReservationPurchasePreview = "GetHostReservationPurchasePreview" // GetHostReservationPurchasePreviewRequest generates a "aws/request.Request" representing the // client's request for the GetHostReservationPurchasePreview operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16797,8 +17942,8 @@ const opGetLaunchTemplateData = "GetLaunchTemplateData" // GetLaunchTemplateDataRequest generates a "aws/request.Request" representing the // client's request for the GetLaunchTemplateData operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16872,8 +18017,8 @@ const opGetPasswordData = "GetPasswordData" // GetPasswordDataRequest generates a "aws/request.Request" representing the // client's request for the GetPasswordData operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -16963,8 +18108,8 @@ const opGetReservedInstancesExchangeQuote = "GetReservedInstancesExchangeQuote" // GetReservedInstancesExchangeQuoteRequest generates a "aws/request.Request" representing the // client's request for the GetReservedInstancesExchangeQuote operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17040,8 +18185,8 @@ const opImportImage = "ImportImage" // ImportImageRequest generates a "aws/request.Request" representing the // client's request for the ImportImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17117,8 +18262,8 @@ const opImportInstance = "ImportInstance" // ImportInstanceRequest generates a "aws/request.Request" representing the // client's request for the ImportInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17197,8 +18342,8 @@ const opImportKeyPair = "ImportKeyPair" // ImportKeyPairRequest generates a "aws/request.Request" representing the // client's request for the ImportKeyPair operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17278,8 +18423,8 @@ const opImportSnapshot = "ImportSnapshot" // ImportSnapshotRequest generates a "aws/request.Request" representing the // client's request for the ImportSnapshot operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17352,8 +18497,8 @@ const opImportVolume = "ImportVolume" // ImportVolumeRequest generates a "aws/request.Request" representing the // client's request for the ImportVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17426,12 +18571,167 @@ func (c *EC2) ImportVolumeWithContext(ctx aws.Context, input *ImportVolumeInput, return out, req.Send() } +const opModifyCapacityReservation = "ModifyCapacityReservation" + +// ModifyCapacityReservationRequest generates a "aws/request.Request" representing the +// client's request for the ModifyCapacityReservation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyCapacityReservation for more information on using the ModifyCapacityReservation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyCapacityReservationRequest method. +// req, resp := client.ModifyCapacityReservationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyCapacityReservation +func (c *EC2) ModifyCapacityReservationRequest(input *ModifyCapacityReservationInput) (req *request.Request, output *ModifyCapacityReservationOutput) { + op := &request.Operation{ + Name: opModifyCapacityReservation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyCapacityReservationInput{} + } + + output = &ModifyCapacityReservationOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyCapacityReservation API operation for Amazon Elastic Compute Cloud. +// +// Modifies a Capacity Reservation's capacity and the conditions under which +// it is to be released. You cannot change a Capacity Reservation's instance +// type, EBS optimization, instance store settings, platform, Availability Zone, +// or instance eligibility. If you need to modify any of these attributes, we +// recommend that you cancel the Capacity Reservation, and then create a new +// one with the required attributes. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyCapacityReservation for usage and error information. 
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyCapacityReservation +func (c *EC2) ModifyCapacityReservation(input *ModifyCapacityReservationInput) (*ModifyCapacityReservationOutput, error) { + req, out := c.ModifyCapacityReservationRequest(input) + return out, req.Send() +} + +// ModifyCapacityReservationWithContext is the same as ModifyCapacityReservation with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyCapacityReservation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyCapacityReservationWithContext(ctx aws.Context, input *ModifyCapacityReservationInput, opts ...request.Option) (*ModifyCapacityReservationOutput, error) { + req, out := c.ModifyCapacityReservationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyFleet = "ModifyFleet" + +// ModifyFleetRequest generates a "aws/request.Request" representing the +// client's request for the ModifyFleet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyFleet for more information on using the ModifyFleet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyFleetRequest method. +// req, resp := client.ModifyFleetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyFleet +func (c *EC2) ModifyFleetRequest(input *ModifyFleetInput) (req *request.Request, output *ModifyFleetOutput) { + op := &request.Operation{ + Name: opModifyFleet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyFleetInput{} + } + + output = &ModifyFleetOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyFleet API operation for Amazon Elastic Compute Cloud. +// +// Modifies the specified EC2 Fleet. +// +// While the EC2 Fleet is being modified, it is in the modifying state. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyFleet for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyFleet +func (c *EC2) ModifyFleet(input *ModifyFleetInput) (*ModifyFleetOutput, error) { + req, out := c.ModifyFleetRequest(input) + return out, req.Send() +} + +// ModifyFleetWithContext is the same as ModifyFleet with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyFleet for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyFleetWithContext(ctx aws.Context, input *ModifyFleetInput, opts ...request.Option) (*ModifyFleetOutput, error) { + req, out := c.ModifyFleetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opModifyFpgaImageAttribute = "ModifyFpgaImageAttribute" // ModifyFpgaImageAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifyFpgaImageAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17504,8 +18804,8 @@ const opModifyHosts = "ModifyHosts" // ModifyHostsRequest generates a "aws/request.Request" representing the // client's request for the ModifyHosts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17545,12 +18845,12 @@ func (c *EC2) ModifyHostsRequest(input *ModifyHostsInput) (req *request.Request, // ModifyHosts API operation for Amazon Elastic Compute Cloud. // // Modify the auto-placement setting of a Dedicated Host. When auto-placement -// is enabled, AWS will place instances that you launch with a tenancy of host, -// but without targeting a specific host ID, onto any available Dedicated Host -// in your account which has auto-placement enabled. When auto-placement is -// disabled, you need to provide a host ID if you want the instance to launch -// onto a specific host. If no host ID is provided, the instance will be launched -// onto a suitable host which has auto-placement enabled. +// is enabled, any instances that you launch with a tenancy of host but without +// a specific host ID are placed onto any available Dedicated Host in your account +// that has auto-placement enabled. When auto-placement is disabled, you need +// to provide a host ID to have the instance launch onto a specific host. If +// no host ID is provided, the instance is launched onto a suitable host with +// auto-placement enabled. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -17584,8 +18884,8 @@ const opModifyIdFormat = "ModifyIdFormat" // ModifyIdFormatRequest generates a "aws/request.Request" representing the // client's request for the ModifyIdFormat operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -17682,8 +18982,8 @@ const opModifyIdentityIdFormat = "ModifyIdentityIdFormat" // ModifyIdentityIdFormatRequest generates a "aws/request.Request" representing the // client's request for the ModifyIdentityIdFormat operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17780,8 +19080,8 @@ const opModifyImageAttribute = "ModifyImageAttribute" // ModifyImageAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifyImageAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17865,8 +19165,8 @@ const opModifyInstanceAttribute = "ModifyInstanceAttribute" // ModifyInstanceAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifyInstanceAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17910,6 +19210,12 @@ func (c *EC2) ModifyInstanceAttributeRequest(input *ModifyInstanceAttributeInput // Modifies the specified attribute of the specified instance. You can specify // only one attribute at a time. // +// Note: Using this action to change the security groups associated with an +// elastic network interface (ENI) attached to an instance in a VPC can result +// in an error if the instance has more than one ENI. To change the security +// groups associated with an ENI attached to an instance that has multiple ENIs, +// we recommend that you use the ModifyNetworkInterfaceAttribute action. +// // To modify some attributes, the instance must be stopped. For more information, // see Modifying Attributes of a Stopped Instance (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_ChangingAttributesWhileInstanceStopped.html) // in the Amazon Elastic Compute Cloud User Guide. @@ -17942,12 +19248,89 @@ func (c *EC2) ModifyInstanceAttributeWithContext(ctx aws.Context, input *ModifyI return out, req.Send() } +const opModifyInstanceCapacityReservationAttributes = "ModifyInstanceCapacityReservationAttributes" + +// ModifyInstanceCapacityReservationAttributesRequest generates a "aws/request.Request" representing the +// client's request for the ModifyInstanceCapacityReservationAttributes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See ModifyInstanceCapacityReservationAttributes for more information on using the ModifyInstanceCapacityReservationAttributes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyInstanceCapacityReservationAttributesRequest method. +// req, resp := client.ModifyInstanceCapacityReservationAttributesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceCapacityReservationAttributes +func (c *EC2) ModifyInstanceCapacityReservationAttributesRequest(input *ModifyInstanceCapacityReservationAttributesInput) (req *request.Request, output *ModifyInstanceCapacityReservationAttributesOutput) { + op := &request.Operation{ + Name: opModifyInstanceCapacityReservationAttributes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyInstanceCapacityReservationAttributesInput{} + } + + output = &ModifyInstanceCapacityReservationAttributesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyInstanceCapacityReservationAttributes API operation for Amazon Elastic Compute Cloud. +// +// Modifies the Capacity Reservation settings for a stopped instance. Use this +// action to configure an instance to target a specific Capacity Reservation, +// run in any open Capacity Reservation with matching attributes, or run On-Demand +// Instance capacity. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyInstanceCapacityReservationAttributes for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceCapacityReservationAttributes +func (c *EC2) ModifyInstanceCapacityReservationAttributes(input *ModifyInstanceCapacityReservationAttributesInput) (*ModifyInstanceCapacityReservationAttributesOutput, error) { + req, out := c.ModifyInstanceCapacityReservationAttributesRequest(input) + return out, req.Send() +} + +// ModifyInstanceCapacityReservationAttributesWithContext is the same as ModifyInstanceCapacityReservationAttributes with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyInstanceCapacityReservationAttributes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyInstanceCapacityReservationAttributesWithContext(ctx aws.Context, input *ModifyInstanceCapacityReservationAttributesInput, opts ...request.Option) (*ModifyInstanceCapacityReservationAttributesOutput, error) { + req, out := c.ModifyInstanceCapacityReservationAttributesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opModifyInstanceCreditSpecification = "ModifyInstanceCreditSpecification" // ModifyInstanceCreditSpecificationRequest generates a "aws/request.Request" representing the // client's request for the ModifyInstanceCreditSpecification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -17986,10 +19369,10 @@ func (c *EC2) ModifyInstanceCreditSpecificationRequest(input *ModifyInstanceCred // ModifyInstanceCreditSpecification API operation for Amazon Elastic Compute Cloud. // -// Modifies the credit option for CPU usage on a running or stopped T2 instance. -// The credit options are standard and unlimited. +// Modifies the credit option for CPU usage on a running or stopped T2 or T3 +// instance. The credit options are standard and unlimited. // -// For more information, see T2 Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) +// For more information, see Burstable Performance Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -18024,8 +19407,8 @@ const opModifyInstancePlacement = "ModifyInstancePlacement" // ModifyInstancePlacementRequest generates a "aws/request.Request" representing the // client's request for the ModifyInstancePlacement operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18119,8 +19502,8 @@ const opModifyLaunchTemplate = "ModifyLaunchTemplate" // ModifyLaunchTemplateRequest generates a "aws/request.Request" representing the // client's request for the ModifyLaunchTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18195,8 +19578,8 @@ const opModifyNetworkInterfaceAttribute = "ModifyNetworkInterfaceAttribute" // ModifyNetworkInterfaceAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifyNetworkInterfaceAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -18272,8 +19655,8 @@ const opModifyReservedInstances = "ModifyReservedInstances" // ModifyReservedInstancesRequest generates a "aws/request.Request" representing the // client's request for the ModifyReservedInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18352,8 +19735,8 @@ const opModifySnapshotAttribute = "ModifySnapshotAttribute" // ModifySnapshotAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifySnapshotAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18404,7 +19787,7 @@ func (c *EC2) ModifySnapshotAttributeRequest(input *ModifySnapshotAttributeInput // be made public. Snapshots encrypted with your default CMK cannot be shared // with other accounts. // -// For more information on modifying snapshot permissions, see Sharing Snapshots +// For more information about modifying snapshot permissions, see Sharing Snapshots // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html) // in the Amazon Elastic Compute Cloud User Guide. // @@ -18440,8 +19823,8 @@ const opModifySpotFleetRequest = "ModifySpotFleetRequest" // ModifySpotFleetRequestRequest generates a "aws/request.Request" representing the // client's request for the ModifySpotFleetRequest operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18536,8 +19919,8 @@ const opModifySubnetAttribute = "ModifySubnetAttribute" // ModifySubnetAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifySubnetAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18612,8 +19995,8 @@ const opModifyVolume = "ModifyVolume" // ModifyVolumeRequest generates a "aws/request.Request" representing the // client's request for the ModifyVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -18677,10 +20060,9 @@ func (c *EC2) ModifyVolumeRequest(input *ModifyVolumeInput) (req *request.Reques // // With previous-generation instance types, resizing an EBS volume may require // detaching and reattaching the volume or stopping and restarting the instance. -// For more information about modifying an EBS volume running Linux, see Modifying -// the Size, IOPS, or Type of an EBS Volume on Linux (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html). -// For more information about modifying an EBS volume running Windows, see Modifying -// the Size, IOPS, or Type of an EBS Volume on Windows (http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ebs-expand-volume.html). +// For more information, see Modifying the Size, IOPS, or Type of an EBS Volume +// on Linux (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html) +// and Modifying the Size, IOPS, or Type of an EBS Volume on Windows (http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ebs-expand-volume.html). // // If you reach the maximum volume modification rate per volume limit, you will // need to wait at least six hours before applying further modifications to @@ -18718,8 +20100,8 @@ const opModifyVolumeAttribute = "ModifyVolumeAttribute" // ModifyVolumeAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifyVolumeAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18803,8 +20185,8 @@ const opModifyVpcAttribute = "ModifyVpcAttribute" // ModifyVpcAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifyVpcAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18879,8 +20261,8 @@ const opModifyVpcEndpoint = "ModifyVpcEndpoint" // ModifyVpcEndpointRequest generates a "aws/request.Request" representing the // client's request for the ModifyVpcEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -18956,8 +20338,8 @@ const opModifyVpcEndpointConnectionNotification = "ModifyVpcEndpointConnectionNo // ModifyVpcEndpointConnectionNotificationRequest generates a "aws/request.Request" representing the // client's request for the ModifyVpcEndpointConnectionNotification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19032,8 +20414,8 @@ const opModifyVpcEndpointServiceConfiguration = "ModifyVpcEndpointServiceConfigu // ModifyVpcEndpointServiceConfigurationRequest generates a "aws/request.Request" representing the // client's request for the ModifyVpcEndpointServiceConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19109,8 +20491,8 @@ const opModifyVpcEndpointServicePermissions = "ModifyVpcEndpointServicePermissio // ModifyVpcEndpointServicePermissionsRequest generates a "aws/request.Request" representing the // client's request for the ModifyVpcEndpointServicePermissions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19153,6 +20535,11 @@ func (c *EC2) ModifyVpcEndpointServicePermissionsRequest(input *ModifyVpcEndpoin // You can add or remove permissions for service consumers (IAM users, IAM roles, // and AWS accounts) to connect to your endpoint service. // +// If you grant permissions to all principals, the service is public. Any users +// who know the name of a public service can send a request to attach an endpoint. +// If the service does not require manual approval, attachments are automatically +// approved. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -19185,8 +20572,8 @@ const opModifyVpcPeeringConnectionOptions = "ModifyVpcPeeringConnectionOptions" // ModifyVpcPeeringConnectionOptionsRequest generates a "aws/request.Request" representing the // client's request for the ModifyVpcPeeringConnectionOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19238,12 +20625,16 @@ func (c *EC2) ModifyVpcPeeringConnectionOptionsRequest(input *ModifyVpcPeeringCo // * Enable/disable the ability to resolve public DNS hostnames to private // IP addresses when queried from instances in the peer VPC. // -// If the peered VPCs are in different accounts, each owner must initiate a -// separate request to modify the peering connection options, depending on whether -// their VPC was the requester or accepter for the VPC peering connection. 
If -// the peered VPCs are in the same account, you can modify the requester and -// accepter options in the same request. To confirm which VPC is the accepter -// and requester for a VPC peering connection, use the DescribeVpcPeeringConnections +// If the peered VPCs are in the same AWS account, you can enable DNS resolution +// for queries from the local VPC. This ensures that queries from the local +// VPC resolve to private IP addresses in the peer VPC. This option is not available +// if the peered VPCs are in different AWS accounts or different regions. For +// peered VPCs in different AWS accounts, each AWS account owner must initiate +// a separate request to modify the peering connection options. For inter-region +// peering connections, you must use the region for the requester VPC to modify +// the requester VPC peering options and the region for the accepter VPC to +// modify the accepter VPC peering options. To verify which VPCs are the accepter +// and the requester for a VPC peering connection, use the DescribeVpcPeeringConnections // command. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -19278,8 +20669,8 @@ const opModifyVpcTenancy = "ModifyVpcTenancy" // ModifyVpcTenancyRequest generates a "aws/request.Request" representing the // client's request for the ModifyVpcTenancy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19326,7 +20717,7 @@ func (c *EC2) ModifyVpcTenancyRequest(input *ModifyVpcTenancyInput) (req *reques // into the VPC have a tenancy of default, unless you specify otherwise during // launch. The tenancy of any existing instances in the VPC is not affected. // -// For more information about Dedicated Instances, see Dedicated Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html) +// For more information, see Dedicated Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -19361,8 +20752,8 @@ const opMonitorInstances = "MonitorInstances" // MonitorInstancesRequest generates a "aws/request.Request" representing the // client's request for the MonitorInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19440,8 +20831,8 @@ const opMoveAddressToVpc = "MoveAddressToVpc" // MoveAddressToVpcRequest generates a "aws/request.Request" representing the // client's request for the MoveAddressToVpc operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -19516,12 +20907,102 @@ func (c *EC2) MoveAddressToVpcWithContext(ctx aws.Context, input *MoveAddressToV return out, req.Send() } +const opProvisionByoipCidr = "ProvisionByoipCidr" + +// ProvisionByoipCidrRequest generates a "aws/request.Request" representing the +// client's request for the ProvisionByoipCidr operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ProvisionByoipCidr for more information on using the ProvisionByoipCidr +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ProvisionByoipCidrRequest method. +// req, resp := client.ProvisionByoipCidrRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ProvisionByoipCidr +func (c *EC2) ProvisionByoipCidrRequest(input *ProvisionByoipCidrInput) (req *request.Request, output *ProvisionByoipCidrOutput) { + op := &request.Operation{ + Name: opProvisionByoipCidr, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ProvisionByoipCidrInput{} + } + + output = &ProvisionByoipCidrOutput{} + req = c.newRequest(op, input, output) + return +} + +// ProvisionByoipCidr API operation for Amazon Elastic Compute Cloud. +// +// Provisions an address range for use with your AWS resources through bring +// your own IP addresses (BYOIP) and creates a corresponding address pool. After +// the address range is provisioned, it is ready to be advertised using AdvertiseByoipCidr. +// +// AWS verifies that you own the address range and are authorized to advertise +// it. You must ensure that the address range is registered to you and that +// you created an RPKI ROA to authorize Amazon ASNs 16509 and 14618 to advertise +// the address range. For more information, see Bring Your Own IP Addresses +// (BYOIP) (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-byoip.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// Provisioning an address range is an asynchronous operation, so the call returns +// immediately, but the address range is not ready to use until its status changes +// from pending-provision to provisioned. To monitor the status of an address +// range, use DescribeByoipCidrs. To allocate an Elastic IP address from your +// address pool, use AllocateAddress with either the specific address from the +// address pool or the ID of the address pool. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ProvisionByoipCidr for usage and error information. 
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ProvisionByoipCidr +func (c *EC2) ProvisionByoipCidr(input *ProvisionByoipCidrInput) (*ProvisionByoipCidrOutput, error) { + req, out := c.ProvisionByoipCidrRequest(input) + return out, req.Send() +} + +// ProvisionByoipCidrWithContext is the same as ProvisionByoipCidr with the addition of +// the ability to pass a context and additional request options. +// +// See ProvisionByoipCidr for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ProvisionByoipCidrWithContext(ctx aws.Context, input *ProvisionByoipCidrInput, opts ...request.Option) (*ProvisionByoipCidrOutput, error) { + req, out := c.ProvisionByoipCidrRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPurchaseHostReservation = "PurchaseHostReservation" // PurchaseHostReservationRequest generates a "aws/request.Request" representing the // client's request for the PurchaseHostReservation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19597,8 +21078,8 @@ const opPurchaseReservedInstancesOffering = "PurchaseReservedInstancesOffering" // PurchaseReservedInstancesOfferingRequest generates a "aws/request.Request" representing the // client's request for the PurchaseReservedInstancesOffering operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19680,8 +21161,8 @@ const opPurchaseScheduledInstances = "PurchaseScheduledInstances" // PurchaseScheduledInstancesRequest generates a "aws/request.Request" representing the // client's request for the PurchaseScheduledInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19763,8 +21244,8 @@ const opRebootInstances = "RebootInstances" // RebootInstancesRequest generates a "aws/request.Request" representing the // client's request for the RebootInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -19849,8 +21330,8 @@ const opRegisterImage = "RegisterImage" // RegisterImageRequest generates a "aws/request.Request" representing the // client's request for the RegisterImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -19909,10 +21390,13 @@ func (c *EC2) RegisterImageRequest(input *RegisterImageInput) (req *request.Requ // Some Linux distributions, such as Red Hat Enterprise Linux (RHEL) and SUSE // Linux Enterprise Server (SLES), use the EC2 billing product code associated // with an AMI to verify the subscription status for package updates. Creating -// an AMI from an EBS snapshot does not maintain this billing code, and subsequent -// instances launched from such an AMI will not be able to connect to package -// update infrastructure. To create an AMI that must retain billing codes, see -// CreateImage. +// an AMI from an EBS snapshot does not maintain this billing code, and instances +// launched from such an AMI are not able to connect to package update infrastructure. +// If you purchase a Reserved Instance offering for one of these Linux distributions +// and launch instances using an AMI that does not contain the required billing +// code, your Reserved Instance is not applied to these instances. +// +// To create an AMI for operating systems that require a billing code, see CreateImage. // // If needed, you can deregister an AMI at any time. Any modifications you make // to an AMI backed by an instance store volume invalidates its registration. @@ -19951,8 +21435,8 @@ const opRejectVpcEndpointConnections = "RejectVpcEndpointConnections" // RejectVpcEndpointConnectionsRequest generates a "aws/request.Request" representing the // client's request for the RejectVpcEndpointConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20026,8 +21510,8 @@ const opRejectVpcPeeringConnection = "RejectVpcPeeringConnection" // RejectVpcPeeringConnectionRequest generates a "aws/request.Request" representing the // client's request for the RejectVpcPeeringConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20104,8 +21588,8 @@ const opReleaseAddress = "ReleaseAddress" // ReleaseAddressRequest generates a "aws/request.Request" representing the // client's request for the ReleaseAddress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20197,8 +21681,8 @@ const opReleaseHosts = "ReleaseHosts" // ReleaseHostsRequest generates a "aws/request.Request" representing the // client's request for the ReleaseHosts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20240,15 +21724,14 @@ func (c *EC2) ReleaseHostsRequest(input *ReleaseHostsInput) (req *request.Reques // When you no longer want to use an On-Demand Dedicated Host it can be released. // On-Demand billing is stopped and the host goes into released state. The host // ID of Dedicated Hosts that have been released can no longer be specified -// in another request, e.g., ModifyHosts. You must stop or terminate all instances -// on a host before it can be released. +// in another request, for example, to modify the host. You must stop or terminate +// all instances on a host before it can be released. // -// When Dedicated Hosts are released, it make take some time for them to stop +// When Dedicated Hosts are released, it may take some time for them to stop // counting toward your limit and you may receive capacity errors when trying -// to allocate new Dedicated hosts. Try waiting a few minutes, and then try -// again. +// to allocate new Dedicated Hosts. Wait a few minutes and then try again. // -// Released hosts will still appear in a DescribeHosts response. +// Released hosts still appear in a DescribeHosts response. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -20282,8 +21765,8 @@ const opReplaceIamInstanceProfileAssociation = "ReplaceIamInstanceProfileAssocia // ReplaceIamInstanceProfileAssociationRequest generates a "aws/request.Request" representing the // client's request for the ReplaceIamInstanceProfileAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20361,8 +21844,8 @@ const opReplaceNetworkAclAssociation = "ReplaceNetworkAclAssociation" // ReplaceNetworkAclAssociationRequest generates a "aws/request.Request" representing the // client's request for the ReplaceNetworkAclAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20403,7 +21886,7 @@ func (c *EC2) ReplaceNetworkAclAssociationRequest(input *ReplaceNetworkAclAssoci // // Changes which network ACL a subnet is associated with. 
By default when you // create a subnet, it's automatically associated with the default network ACL. -// For more information about network ACLs, see Network ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) +// For more information, see Network ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) // in the Amazon Virtual Private Cloud User Guide. // // This is an idempotent operation. @@ -20440,8 +21923,8 @@ const opReplaceNetworkAclEntry = "ReplaceNetworkAclEntry" // ReplaceNetworkAclEntryRequest generates a "aws/request.Request" representing the // client's request for the ReplaceNetworkAclEntry operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20482,8 +21965,8 @@ func (c *EC2) ReplaceNetworkAclEntryRequest(input *ReplaceNetworkAclEntryInput) // ReplaceNetworkAclEntry API operation for Amazon Elastic Compute Cloud. // -// Replaces an entry (rule) in a network ACL. For more information about network -// ACLs, see Network ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) +// Replaces an entry (rule) in a network ACL. For more information, see Network +// ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -20518,8 +22001,8 @@ const opReplaceRoute = "ReplaceRoute" // ReplaceRouteRequest generates a "aws/request.Request" representing the // client's request for the ReplaceRoute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20561,11 +22044,11 @@ func (c *EC2) ReplaceRouteRequest(input *ReplaceRouteInput) (req *request.Reques // ReplaceRoute API operation for Amazon Elastic Compute Cloud. // // Replaces an existing route within a route table in a VPC. You must provide -// only one of the following: Internet gateway or virtual private gateway, NAT +// only one of the following: internet gateway or virtual private gateway, NAT // instance, NAT gateway, VPC peering connection, network interface, or egress-only -// Internet gateway. +// internet gateway. // -// For more information about route tables, see Route Tables (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) +// For more information, see Route Tables (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -20600,8 +22083,8 @@ const opReplaceRouteTableAssociation = "ReplaceRouteTableAssociation" // ReplaceRouteTableAssociationRequest generates a "aws/request.Request" representing the // client's request for the ReplaceRouteTableAssociation operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20682,8 +22165,8 @@ const opReportInstanceStatus = "ReportInstanceStatus" // ReportInstanceStatusRequest generates a "aws/request.Request" representing the // client's request for the ReportInstanceStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20764,8 +22247,8 @@ const opRequestSpotFleet = "RequestSpotFleet" // RequestSpotFleetRequest generates a "aws/request.Request" representing the // client's request for the RequestSpotFleet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -20806,6 +22289,10 @@ func (c *EC2) RequestSpotFleetRequest(input *RequestSpotFleetInput) (req *reques // // Creates a Spot Fleet request. // +// The Spot Fleet request specifies the total target capacity and the On-Demand +// target capacity. Amazon EC2 calculates the difference between the total capacity +// and On-Demand capacity, and launches the difference as Spot capacity. +// // You can submit a single request that includes multiple launch specifications // that vary by instance type, AMI, Availability Zone, or subnet. // @@ -20820,10 +22307,11 @@ func (c *EC2) RequestSpotFleetRequest(input *RequestSpotFleetInput) (req *reques // pools, you can improve the availability of your fleet. // // You can specify tags for the Spot Instances. You cannot tag other resource -// types in a Spot Fleet request; only the instance resource type is supported. +// types in a Spot Fleet request because only the instance resource type is +// supported. // // For more information, see Spot Fleet Requests (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html) -// in the Amazon Elastic Compute Cloud User Guide. +// in the Amazon EC2 User Guide for Linux Instances. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -20857,8 +22345,8 @@ const opRequestSpotInstances = "RequestSpotInstances" // RequestSpotInstancesRequest generates a "aws/request.Request" representing the // client's request for the RequestSpotInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -20897,10 +22385,10 @@ func (c *EC2) RequestSpotInstancesRequest(input *RequestSpotInstancesInput) (req // RequestSpotInstances API operation for Amazon Elastic Compute Cloud. // -// Creates a Spot Instance request. Spot Instances are instances that Amazon -// EC2 launches when the maximum price that you specify exceeds the current -// Spot price. For more information, see Spot Instance Requests (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) -// in the Amazon Elastic Compute Cloud User Guide. +// Creates a Spot Instance request. +// +// For more information, see Spot Instance Requests (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) +// in the Amazon EC2 User Guide for Linux Instances. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -20934,8 +22422,8 @@ const opResetFpgaImageAttribute = "ResetFpgaImageAttribute" // ResetFpgaImageAttributeRequest generates a "aws/request.Request" representing the // client's request for the ResetFpgaImageAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21009,8 +22497,8 @@ const opResetImageAttribute = "ResetImageAttribute" // ResetImageAttributeRequest generates a "aws/request.Request" representing the // client's request for the ResetImageAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21087,8 +22575,8 @@ const opResetInstanceAttribute = "ResetInstanceAttribute" // ResetInstanceAttributeRequest generates a "aws/request.Request" representing the // client's request for the ResetInstanceAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21171,8 +22659,8 @@ const opResetNetworkInterfaceAttribute = "ResetNetworkInterfaceAttribute" // ResetNetworkInterfaceAttributeRequest generates a "aws/request.Request" representing the // client's request for the ResetNetworkInterfaceAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -21248,8 +22736,8 @@ const opResetSnapshotAttribute = "ResetSnapshotAttribute" // ResetSnapshotAttributeRequest generates a "aws/request.Request" representing the // client's request for the ResetSnapshotAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21292,7 +22780,7 @@ func (c *EC2) ResetSnapshotAttributeRequest(input *ResetSnapshotAttributeInput) // // Resets permission settings for the specified snapshot. // -// For more information on modifying snapshot permissions, see Sharing Snapshots +// For more information about modifying snapshot permissions, see Sharing Snapshots // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html) // in the Amazon Elastic Compute Cloud User Guide. // @@ -21328,8 +22816,8 @@ const opRestoreAddressToClassic = "RestoreAddressToClassic" // RestoreAddressToClassicRequest generates a "aws/request.Request" representing the // client's request for the RestoreAddressToClassic operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21405,8 +22893,8 @@ const opRevokeSecurityGroupEgress = "RevokeSecurityGroupEgress" // RevokeSecurityGroupEgressRequest generates a "aws/request.Request" representing the // client's request for the RevokeSecurityGroupEgress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21493,8 +22981,8 @@ const opRevokeSecurityGroupIngress = "RevokeSecurityGroupIngress" // RevokeSecurityGroupIngressRequest generates a "aws/request.Request" representing the // client's request for the RevokeSecurityGroupIngress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21584,8 +23072,8 @@ const opRunInstances = "RunInstances" // RunInstancesRequest generates a "aws/request.Request" representing the // client's request for the RunInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -21711,8 +23199,8 @@ const opRunScheduledInstances = "RunScheduledInstances" // RunScheduledInstancesRequest generates a "aws/request.Request" representing the // client's request for the RunScheduledInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21795,8 +23283,8 @@ const opStartInstances = "StartInstances" // StartInstancesRequest generates a "aws/request.Request" representing the // client's request for the StartInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21891,8 +23379,8 @@ const opStopInstances = "StopInstances" // StopInstancesRequest generates a "aws/request.Request" representing the // client's request for the StopInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -21997,8 +23485,8 @@ const opTerminateInstances = "TerminateInstances" // TerminateInstancesRequest generates a "aws/request.Request" representing the // client's request for the TerminateInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -22095,8 +23583,8 @@ const opUnassignIpv6Addresses = "UnassignIpv6Addresses" // UnassignIpv6AddressesRequest generates a "aws/request.Request" representing the // client's request for the UnassignIpv6Addresses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -22169,8 +23657,8 @@ const opUnassignPrivateIpAddresses = "UnassignPrivateIpAddresses" // UnassignPrivateIpAddressesRequest generates a "aws/request.Request" representing the // client's request for the UnassignPrivateIpAddresses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -22245,8 +23733,8 @@ const opUnmonitorInstances = "UnmonitorInstances" // UnmonitorInstancesRequest generates a "aws/request.Request" representing the // client's request for the UnmonitorInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -22321,8 +23809,8 @@ const opUpdateSecurityGroupRuleDescriptionsEgress = "UpdateSecurityGroupRuleDesc // UpdateSecurityGroupRuleDescriptionsEgressRequest generates a "aws/request.Request" representing the // client's request for the UpdateSecurityGroupRuleDescriptionsEgress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -22401,8 +23889,8 @@ const opUpdateSecurityGroupRuleDescriptionsIngress = "UpdateSecurityGroupRuleDes // UpdateSecurityGroupRuleDescriptionsIngressRequest generates a "aws/request.Request" representing the // client's request for the UpdateSecurityGroupRuleDescriptionsIngress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -22477,6 +23965,87 @@ func (c *EC2) UpdateSecurityGroupRuleDescriptionsIngressWithContext(ctx aws.Cont return out, req.Send() } +const opWithdrawByoipCidr = "WithdrawByoipCidr" + +// WithdrawByoipCidrRequest generates a "aws/request.Request" representing the +// client's request for the WithdrawByoipCidr operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See WithdrawByoipCidr for more information on using the WithdrawByoipCidr +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the WithdrawByoipCidrRequest method. 
+// req, resp := client.WithdrawByoipCidrRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/WithdrawByoipCidr +func (c *EC2) WithdrawByoipCidrRequest(input *WithdrawByoipCidrInput) (req *request.Request, output *WithdrawByoipCidrOutput) { + op := &request.Operation{ + Name: opWithdrawByoipCidr, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &WithdrawByoipCidrInput{} + } + + output = &WithdrawByoipCidrOutput{} + req = c.newRequest(op, input, output) + return +} + +// WithdrawByoipCidr API operation for Amazon Elastic Compute Cloud. +// +// Stops advertising an IPv4 address range that is provisioned as an address +// pool. +// +// You can perform this operation at most once every 10 seconds, even if you +// specify different address ranges each time. +// +// It can take a few minutes before traffic to the specified addresses stops +// routing to AWS because of BGP propagation delays. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation WithdrawByoipCidr for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/WithdrawByoipCidr +func (c *EC2) WithdrawByoipCidr(input *WithdrawByoipCidrInput) (*WithdrawByoipCidrOutput, error) { + req, out := c.WithdrawByoipCidrRequest(input) + return out, req.Send() +} + +// WithdrawByoipCidrWithContext is the same as WithdrawByoipCidr with the addition of +// the ability to pass a context and additional request options. +// +// See WithdrawByoipCidr for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) WithdrawByoipCidrWithContext(ctx aws.Context, input *WithdrawByoipCidrInput, opts ...request.Option) (*WithdrawByoipCidrOutput, error) { + req, out := c.WithdrawByoipCidrRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + // Contains the parameters for accepting the quote. type AcceptReservedInstancesExchangeQuoteInput struct { _ struct{} `type:"structure"` @@ -22660,7 +24229,6 @@ func (s *AcceptVpcEndpointConnectionsOutput) SetUnsuccessful(v []*UnsuccessfulIt return s } -// Contains the parameters for AcceptVpcPeeringConnection. type AcceptVpcPeeringConnectionInput struct { _ struct{} `type:"structure"` @@ -22697,7 +24265,6 @@ func (s *AcceptVpcPeeringConnectionInput) SetVpcPeeringConnectionId(v string) *A return s } -// Contains the output of AcceptVpcPeeringConnection. type AcceptVpcPeeringConnectionOutput struct { _ struct{} `type:"structure"` @@ -22861,6 +24428,9 @@ type Address struct { // The Elastic IP address. PublicIp *string `locationName:"publicIp" type:"string"` + // The ID of an address pool. + PublicIpv4Pool *string `locationName:"publicIpv4Pool" type:"string"` + // Any tags assigned to the Elastic IP address. 
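//
// Example (illustrative sketch, not part of the generated service code): withdrawing
// a BYOIP address range with a context, following the request pattern documented
// above. It assumes an existing *ec2.EC2 client named svc and a ctx value; the Cidr
// field on WithdrawByoipCidrInput is assumed to mirror the other BYOIP inputs in
// this file, and the address range shown is a hypothetical documentation value.
//
//    out, err := svc.WithdrawByoipCidrWithContext(ctx, &ec2.WithdrawByoipCidrInput{
//        Cidr: aws.String("203.0.113.0/24"),
//    })
//    if err != nil {
//        log.Fatal(err) // use awserr.Error type assertions for the error code and message
//    }
//    fmt.Println(out.ByoipCidr)
//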
Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` } @@ -22923,17 +24493,96 @@ func (s *Address) SetPublicIp(v string) *Address { return s } +// SetPublicIpv4Pool sets the PublicIpv4Pool field's value. +func (s *Address) SetPublicIpv4Pool(v string) *Address { + s.PublicIpv4Pool = &v + return s +} + // SetTags sets the Tags field's value. func (s *Address) SetTags(v []*Tag) *Address { s.Tags = v return s } -// Contains the parameters for AllocateAddress. +type AdvertiseByoipCidrInput struct { + _ struct{} `type:"structure"` + + // The IPv4 address range, in CIDR notation. + // + // Cidr is a required field + Cidr *string `type:"string" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` +} + +// String returns the string representation +func (s AdvertiseByoipCidrInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AdvertiseByoipCidrInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AdvertiseByoipCidrInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AdvertiseByoipCidrInput"} + if s.Cidr == nil { + invalidParams.Add(request.NewErrParamRequired("Cidr")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCidr sets the Cidr field's value. +func (s *AdvertiseByoipCidrInput) SetCidr(v string) *AdvertiseByoipCidrInput { + s.Cidr = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *AdvertiseByoipCidrInput) SetDryRun(v bool) *AdvertiseByoipCidrInput { + s.DryRun = &v + return s +} + +type AdvertiseByoipCidrOutput struct { + _ struct{} `type:"structure"` + + // Information about the address range. + ByoipCidr *ByoipCidr `locationName:"byoipCidr" type:"structure"` +} + +// String returns the string representation +func (s AdvertiseByoipCidrOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AdvertiseByoipCidrOutput) GoString() string { + return s.String() +} + +// SetByoipCidr sets the ByoipCidr field's value. +func (s *AdvertiseByoipCidrOutput) SetByoipCidr(v *ByoipCidr) *AdvertiseByoipCidrOutput { + s.ByoipCidr = v + return s +} + type AllocateAddressInput struct { _ struct{} `type:"structure"` - // [EC2-VPC] The Elastic IP address to recover. + // [EC2-VPC] The Elastic IP address to recover or an IPv4 address from an address + // pool. Address *string `type:"string"` // Set to vpc to allocate the address for use with instances in a VPC. @@ -22946,6 +24595,11 @@ type AllocateAddressInput struct { // the required permissions, the error response is DryRunOperation. Otherwise, // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` + + // The ID of an address pool that you own. Use this parameter to let Amazon + // EC2 select an address from the address pool. To specify a specific address + // from the address pool, use the Address parameter instead. + PublicIpv4Pool *string `type:"string"` } // String returns the string representation @@ -22976,7 +24630,12 @@ func (s *AllocateAddressInput) SetDryRun(v bool) *AllocateAddressInput { return s } -// Contains the output of AllocateAddress. 
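//
// Example (illustrative sketch): building and validating an AdvertiseByoipCidrInput
// with the setters defined above. The address range is a hypothetical documentation
// value; Cidr is the only required field, so omitting it fails validation.
//
//    input := &ec2.AdvertiseByoipCidrInput{}
//    input.SetCidr("203.0.113.0/24")
//    input.SetDryRun(true)
//    if err := input.Validate(); err != nil {
//        log.Fatal(err)
//    }
//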
+// SetPublicIpv4Pool sets the PublicIpv4Pool field's value. +func (s *AllocateAddressInput) SetPublicIpv4Pool(v string) *AllocateAddressInput { + s.PublicIpv4Pool = &v + return s +} + type AllocateAddressOutput struct { _ struct{} `type:"structure"` @@ -22990,6 +24649,9 @@ type AllocateAddressOutput struct { // The Elastic IP address. PublicIp *string `locationName:"publicIp" type:"string"` + + // The ID of an address pool. + PublicIpv4Pool *string `locationName:"publicIpv4Pool" type:"string"` } // String returns the string representation @@ -23020,7 +24682,12 @@ func (s *AllocateAddressOutput) SetPublicIp(v string) *AllocateAddressOutput { return s } -// Contains the parameters for AllocateHosts. +// SetPublicIpv4Pool sets the PublicIpv4Pool field's value. +func (s *AllocateAddressOutput) SetPublicIpv4Pool(v string) *AllocateAddressOutput { + s.PublicIpv4Pool = &v + return s +} + type AllocateHostsInput struct { _ struct{} `type:"structure"` @@ -23036,23 +24703,25 @@ type AllocateHostsInput struct { // AvailabilityZone is a required field AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` - // Unique, case-sensitive identifier you provide to ensure idempotency of the - // request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html) + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html) // in the Amazon Elastic Compute Cloud User Guide. ClientToken *string `locationName:"clientToken" type:"string"` - // Specify the instance type that you want your Dedicated Hosts to be configured - // for. When you specify the instance type, that is the only instance type that - // you can launch onto that host. + // Specify the instance type for which to configure your Dedicated Hosts. When + // you specify the instance type, that is the only instance type that you can + // launch onto that host. // // InstanceType is a required field InstanceType *string `locationName:"instanceType" type:"string" required:"true"` - // The number of Dedicated Hosts you want to allocate to your account with these - // parameters. + // The number of Dedicated Hosts to allocate to your account with these parameters. // // Quantity is a required field Quantity *int64 `locationName:"quantity" type:"integer" required:"true"` + + // The tags to apply to the Dedicated Host during creation. + TagSpecifications []*TagSpecification `locationName:"TagSpecification" locationNameList:"item" type:"list"` } // String returns the string representation @@ -23114,12 +24783,18 @@ func (s *AllocateHostsInput) SetQuantity(v int64) *AllocateHostsInput { return s } +// SetTagSpecifications sets the TagSpecifications field's value. +func (s *AllocateHostsInput) SetTagSpecifications(v []*TagSpecification) *AllocateHostsInput { + s.TagSpecifications = v + return s +} + // Contains the output of AllocateHosts. type AllocateHostsOutput struct { _ struct{} `type:"structure"` - // The ID of the allocated Dedicated Host. This is used when you want to launch - // an instance onto a specific host. + // The ID of the allocated Dedicated Host. This is used to launch an instance + // onto a specific host. 
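//
// Example (illustrative sketch): allocating an Elastic IP address from an address
// pool that you own by setting the new PublicIpv4Pool field. The pool ID is a
// hypothetical value and svc is an assumed, existing *ec2.EC2 client.
//
//    out, err := svc.AllocateAddress(&ec2.AllocateAddressInput{
//        Domain:         aws.String("vpc"),
//        PublicIpv4Pool: aws.String("ipv4pool-ec2-0123456789abcdef0"),
//    })
//    if err == nil {
//        fmt.Println(aws.StringValue(out.PublicIp), aws.StringValue(out.PublicIpv4Pool))
//    }
//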
HostIds []*string `locationName:"hostIdSet" locationNameList:"item" type:"list"` } @@ -23350,7 +25025,6 @@ func (s AssignPrivateIpAddressesOutput) GoString() string { return s.String() } -// Contains the parameters for AssociateAddress. type AssociateAddressInput struct { _ struct{} `type:"structure"` @@ -23442,7 +25116,6 @@ func (s *AssociateAddressInput) SetPublicIp(v string) *AssociateAddressInput { return s } -// Contains the output of AssociateAddress. type AssociateAddressOutput struct { _ struct{} `type:"structure"` @@ -23467,7 +25140,6 @@ func (s *AssociateAddressOutput) SetAssociationId(v string) *AssociateAddressOut return s } -// Contains the parameters for AssociateDhcpOptions. type AssociateDhcpOptionsInput struct { _ struct{} `type:"structure"` @@ -23622,7 +25294,6 @@ func (s *AssociateIamInstanceProfileOutput) SetIamInstanceProfileAssociation(v * return s } -// Contains the parameters for AssociateRouteTable. type AssociateRouteTableInput struct { _ struct{} `type:"structure"` @@ -23687,11 +25358,11 @@ func (s *AssociateRouteTableInput) SetSubnetId(v string) *AssociateRouteTableInp return s } -// Contains the output of AssociateRouteTable. type AssociateRouteTableOutput struct { _ struct{} `type:"structure"` - // The route table association ID (needed to disassociate the route table). + // The route table association ID. This ID is required for disassociating the + // route table. AssociationId *string `locationName:"associationId" type:"string"` } @@ -23894,7 +25565,6 @@ func (s *AssociateVpcCidrBlockOutput) SetVpcId(v string) *AssociateVpcCidrBlockO return s } -// Contains the parameters for AttachClassicLinkVpc. type AttachClassicLinkVpcInput struct { _ struct{} `type:"structure"` @@ -23974,7 +25644,6 @@ func (s *AttachClassicLinkVpcInput) SetVpcId(v string) *AttachClassicLinkVpcInpu return s } -// Contains the output of AttachClassicLinkVpc. type AttachClassicLinkVpcOutput struct { _ struct{} `type:"structure"` @@ -23998,7 +25667,6 @@ func (s *AttachClassicLinkVpcOutput) SetReturn(v bool) *AttachClassicLinkVpcOutp return s } -// Contains the parameters for AttachInternetGateway. type AttachInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -24008,7 +25676,7 @@ type AttachInternetGatewayInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The ID of the Internet gateway. + // The ID of the internet gateway. // // InternetGatewayId is a required field InternetGatewayId *string `locationName:"internetGatewayId" type:"string" required:"true"` @@ -24397,7 +26065,6 @@ func (s *AttributeValue) SetValue(v string) *AttributeValue { return s } -// Contains the parameters for AuthorizeSecurityGroupEgress. type AuthorizeSecurityGroupEgressInput struct { _ struct{} `type:"structure"` @@ -24529,7 +26196,6 @@ func (s AuthorizeSecurityGroupEgressOutput) GoString() string { return s.String() } -// Contains the parameters for AuthorizeSecurityGroupIngress. type AuthorizeSecurityGroupIngressInput struct { _ struct{} `type:"structure"` @@ -24689,6 +26355,9 @@ type AvailabilityZone struct { // The state of the Availability Zone. State *string `locationName:"zoneState" type:"string" enum:"AvailabilityZoneState"` + // The ID of the Availability Zone. + ZoneId *string `locationName:"zoneId" type:"string"` + // The name of the Availability Zone. 
ZoneName *string `locationName:"zoneName" type:"string"` } @@ -24721,6 +26390,12 @@ func (s *AvailabilityZone) SetState(v string) *AvailabilityZone { return s } +// SetZoneId sets the ZoneId field's value. +func (s *AvailabilityZone) SetZoneId(v string) *AvailabilityZone { + s.ZoneId = &v + return s +} + // SetZoneName sets the ZoneName field's value. func (s *AvailabilityZone) SetZoneName(v string) *AvailabilityZone { s.ZoneName = &v @@ -24755,7 +26430,7 @@ func (s *AvailabilityZoneMessage) SetMessage(v string) *AvailabilityZoneMessage type AvailableCapacity struct { _ struct{} `type:"structure"` - // The total number of instances that the Dedicated Host supports. + // The total number of instances supported by the Dedicated Host. AvailableInstanceCapacity []*InstanceCapacity `locationName:"availableInstanceCapacity" locationNameList:"item" type:"list"` // The number of vCPUs available on the Dedicated Host. @@ -24824,10 +26499,13 @@ type BlockDeviceMapping struct { // The virtual device name (ephemeralN). Instance store volumes are numbered // starting from 0. An instance type with 2 available instance store volumes - // can specify mappings for ephemeral0 and ephemeral1.The number of available + // can specify mappings for ephemeral0 and ephemeral1. The number of available // instance store volumes depends on the instance type. After you connect to // the instance, you must mount the volume. // + // NVMe instance store volumes are automatically enumerated and assigned a device + // name. Including them in your block device mapping has no effect. + // // Constraints: For M3 instances, you must specify instance store volumes in // the block device mapping for the instance. When you launch an M3 instance, // we ignore any instance store volumes specified in the block device mapping @@ -24983,7 +26661,7 @@ type BundleTask struct { Progress *string `locationName:"progress" type:"string"` // The time this task started. - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` // The state of the task. State *string `locationName:"state" type:"string" enum:"BundleTaskState"` @@ -24992,7 +26670,7 @@ type BundleTask struct { Storage *Storage `locationName:"storage" type:"structure"` // The time of the most recent update for the task. - UpdateTime *time.Time `locationName:"updateTime" type:"timestamp" timestampFormat:"iso8601"` + UpdateTime *time.Time `locationName:"updateTime" type:"timestamp"` } // String returns the string representation @@ -25086,6 +26764,59 @@ func (s *BundleTaskError) SetMessage(v string) *BundleTaskError { return s } +// Information about an address range that is provisioned for use with your +// AWS resources through bring your own IP addresses (BYOIP). +type ByoipCidr struct { + _ struct{} `type:"structure"` + + // The public IPv4 address range, in CIDR notation. + Cidr *string `locationName:"cidr" type:"string"` + + // The description of the address range. + Description *string `locationName:"description" type:"string"` + + // The state of the address pool. + State *string `locationName:"state" type:"string" enum:"ByoipCidrState"` + + // Upon success, contains the ID of the address pool. Otherwise, contains an + // error message. 
+ StatusMessage *string `locationName:"statusMessage" type:"string"` +} + +// String returns the string representation +func (s ByoipCidr) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ByoipCidr) GoString() string { + return s.String() +} + +// SetCidr sets the Cidr field's value. +func (s *ByoipCidr) SetCidr(v string) *ByoipCidr { + s.Cidr = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ByoipCidr) SetDescription(v string) *ByoipCidr { + s.Description = &v + return s +} + +// SetState sets the State field's value. +func (s *ByoipCidr) SetState(v string) *ByoipCidr { + s.State = &v + return s +} + +// SetStatusMessage sets the StatusMessage field's value. +func (s *ByoipCidr) SetStatusMessage(v string) *ByoipCidr { + s.StatusMessage = &v + return s +} + // Contains the parameters for CancelBundleTask. type CancelBundleTaskInput struct { _ struct{} `type:"structure"` @@ -25161,6 +26892,79 @@ func (s *CancelBundleTaskOutput) SetBundleTask(v *BundleTask) *CancelBundleTaskO return s } +type CancelCapacityReservationInput struct { + _ struct{} `type:"structure"` + + // The ID of the Capacity Reservation to be cancelled. + // + // CapacityReservationId is a required field + CapacityReservationId *string `type:"string" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` +} + +// String returns the string representation +func (s CancelCapacityReservationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelCapacityReservationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelCapacityReservationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelCapacityReservationInput"} + if s.CapacityReservationId == nil { + invalidParams.Add(request.NewErrParamRequired("CapacityReservationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCapacityReservationId sets the CapacityReservationId field's value. +func (s *CancelCapacityReservationInput) SetCapacityReservationId(v string) *CancelCapacityReservationInput { + s.CapacityReservationId = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CancelCapacityReservationInput) SetDryRun(v bool) *CancelCapacityReservationInput { + s.DryRun = &v + return s +} + +type CancelCapacityReservationOutput struct { + _ struct{} `type:"structure"` + + // Returns true if the request succeeds; otherwise, it returns an error. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s CancelCapacityReservationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelCapacityReservationOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. +func (s *CancelCapacityReservationOutput) SetReturn(v bool) *CancelCapacityReservationOutput { + s.Return = &v + return s +} + // Contains the parameters for CancelConversionTask. 
type CancelConversionTaskInput struct { _ struct{} `type:"structure"` @@ -25767,6 +27571,401 @@ func (s *CancelledSpotInstanceRequest) SetState(v string) *CancelledSpotInstance return s } +// Describes a Capacity Reservation. +type CapacityReservation struct { + _ struct{} `type:"structure"` + + // The Availability Zone in which the capacity is reserved. + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` + + // The remaining capacity. Indicates the number of instances that can be launched + // in the Capacity Reservation. + AvailableInstanceCount *int64 `locationName:"availableInstanceCount" type:"integer"` + + // The ID of the Capacity Reservation. + CapacityReservationId *string `locationName:"capacityReservationId" type:"string"` + + // The date and time at which the Capacity Reservation was created. + CreateDate *time.Time `locationName:"createDate" type:"timestamp"` + + // Indicates whether the Capacity Reservation supports EBS-optimized instances. + // This optimization provides dedicated throughput to Amazon EBS and an optimized + // configuration stack to provide optimal I/O performance. This optimization + // isn't available with all instance types. Additional usage charges apply when + // using an EBS- optimized instance. + EbsOptimized *bool `locationName:"ebsOptimized" type:"boolean"` + + // The date and time at which the Capacity Reservation expires. When a Capacity + // Reservation expires, the reserved capacity is released and you can no longer + // launch instances into it. The Capacity Reservation's state changes to expired + // when it reaches its end date and time. + EndDate *time.Time `locationName:"endDate" type:"timestamp"` + + // Indicates the way in which the Capacity Reservation ends. A Capacity Reservation + // can have one of the following end types: + // + // * unlimited - The Capacity Reservation remains active until you explicitly + // cancel it. + // + // * limited - The Capacity Reservation expires automatically at a specified + // date and time. + EndDateType *string `locationName:"endDateType" type:"string" enum:"EndDateType"` + + // Indicates whether the Capacity Reservation supports instances with temporary, + // block-level storage. + EphemeralStorage *bool `locationName:"ephemeralStorage" type:"boolean"` + + // Indicates the type of instance launches that the Capacity Reservation accepts. + // The options include: + // + // * open - The Capacity Reservation accepts all instances that have matching + // attributes (instance type, platform, and Availability Zone). Instances + // that have matching attributes launch into the Capacity Reservation automatically + // without specifying any additional parameters. + // + // * targeted - The Capacity Reservation only accepts instances that have + // matching attributes (instance type, platform, and Availability Zone), + // and explicitly target the Capacity Reservation. This ensures that only + // permitted instances can use the reserved capacity. + InstanceMatchCriteria *string `locationName:"instanceMatchCriteria" type:"string" enum:"InstanceMatchCriteria"` + + // The type of operating system for which the Capacity Reservation reserves + // capacity. + InstancePlatform *string `locationName:"instancePlatform" type:"string" enum:"CapacityReservationInstancePlatform"` + + // The type of instance for which the Capacity Reservation reserves capacity. + InstanceType *string `locationName:"instanceType" type:"string"` + + // The current state of the Capacity Reservation. 
A Capacity Reservation can + // be in one of the following states: + // + // * active - The Capacity Reservation is active and the capacity is available + // for your use. + // + // * cancelled - The Capacity Reservation expired automatically at the date + // and time specified in your request. The reserved capacity is no longer + // available for your use. + // + // * expired - The Capacity Reservation was manually cancelled. The reserved + // capacity is no longer available for your use. + // + // * pending - The Capacity Reservation request was successful but the capacity + // provisioning is still pending. + // + // * failed - The Capacity Reservation request has failed. A request might + // fail due to invalid request parameters, capacity constraints, or instance + // limit constraints. Failed requests are retained for 60 minutes. + State *string `locationName:"state" type:"string" enum:"CapacityReservationState"` + + // Any tags assigned to the Capacity Reservation. + Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` + + // Indicates the tenancy of the Capacity Reservation. A Capacity Reservation + // can have one of the following tenancy settings: + // + // * default - The Capacity Reservation is created on hardware that is shared + // with other AWS accounts. + // + // * dedicated - The Capacity Reservation is created on single-tenant hardware + // that is dedicated to a single AWS account. + Tenancy *string `locationName:"tenancy" type:"string" enum:"CapacityReservationTenancy"` + + // The number of instances for which the Capacity Reservation reserves capacity. + TotalInstanceCount *int64 `locationName:"totalInstanceCount" type:"integer"` +} + +// String returns the string representation +func (s CapacityReservation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CapacityReservation) GoString() string { + return s.String() +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CapacityReservation) SetAvailabilityZone(v string) *CapacityReservation { + s.AvailabilityZone = &v + return s +} + +// SetAvailableInstanceCount sets the AvailableInstanceCount field's value. +func (s *CapacityReservation) SetAvailableInstanceCount(v int64) *CapacityReservation { + s.AvailableInstanceCount = &v + return s +} + +// SetCapacityReservationId sets the CapacityReservationId field's value. +func (s *CapacityReservation) SetCapacityReservationId(v string) *CapacityReservation { + s.CapacityReservationId = &v + return s +} + +// SetCreateDate sets the CreateDate field's value. +func (s *CapacityReservation) SetCreateDate(v time.Time) *CapacityReservation { + s.CreateDate = &v + return s +} + +// SetEbsOptimized sets the EbsOptimized field's value. +func (s *CapacityReservation) SetEbsOptimized(v bool) *CapacityReservation { + s.EbsOptimized = &v + return s +} + +// SetEndDate sets the EndDate field's value. +func (s *CapacityReservation) SetEndDate(v time.Time) *CapacityReservation { + s.EndDate = &v + return s +} + +// SetEndDateType sets the EndDateType field's value. +func (s *CapacityReservation) SetEndDateType(v string) *CapacityReservation { + s.EndDateType = &v + return s +} + +// SetEphemeralStorage sets the EphemeralStorage field's value. +func (s *CapacityReservation) SetEphemeralStorage(v bool) *CapacityReservation { + s.EphemeralStorage = &v + return s +} + +// SetInstanceMatchCriteria sets the InstanceMatchCriteria field's value. 
+func (s *CapacityReservation) SetInstanceMatchCriteria(v string) *CapacityReservation { + s.InstanceMatchCriteria = &v + return s +} + +// SetInstancePlatform sets the InstancePlatform field's value. +func (s *CapacityReservation) SetInstancePlatform(v string) *CapacityReservation { + s.InstancePlatform = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *CapacityReservation) SetInstanceType(v string) *CapacityReservation { + s.InstanceType = &v + return s +} + +// SetState sets the State field's value. +func (s *CapacityReservation) SetState(v string) *CapacityReservation { + s.State = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CapacityReservation) SetTags(v []*Tag) *CapacityReservation { + s.Tags = v + return s +} + +// SetTenancy sets the Tenancy field's value. +func (s *CapacityReservation) SetTenancy(v string) *CapacityReservation { + s.Tenancy = &v + return s +} + +// SetTotalInstanceCount sets the TotalInstanceCount field's value. +func (s *CapacityReservation) SetTotalInstanceCount(v int64) *CapacityReservation { + s.TotalInstanceCount = &v + return s +} + +// Describes an instance's Capacity Reservation targeting option. You can specify +// only one option at a time. Use the CapacityReservationPreference parameter +// to configure the instance to run as an On-Demand Instance or to run in any +// open Capacity Reservation that has matching attributes (instance type, platform, +// Availability Zone). Use the CapacityReservationTarget parameter to explicitly +// target a specific Capacity Reservation. +type CapacityReservationSpecification struct { + _ struct{} `type:"structure"` + + // Indicates the instance's Capacity Reservation preferences. Possible preferences + // include: + // + // * open - The instance can run in any open Capacity Reservation that has + // matching attributes (instance type, platform, Availability Zone). + // + // * none - The instance avoids running in a Capacity Reservation even if + // one is available. The instance runs as an On-Demand Instance. + CapacityReservationPreference *string `type:"string" enum:"CapacityReservationPreference"` + + // Information about the target Capacity Reservation. + CapacityReservationTarget *CapacityReservationTarget `type:"structure"` +} + +// String returns the string representation +func (s CapacityReservationSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CapacityReservationSpecification) GoString() string { + return s.String() +} + +// SetCapacityReservationPreference sets the CapacityReservationPreference field's value. +func (s *CapacityReservationSpecification) SetCapacityReservationPreference(v string) *CapacityReservationSpecification { + s.CapacityReservationPreference = &v + return s +} + +// SetCapacityReservationTarget sets the CapacityReservationTarget field's value. +func (s *CapacityReservationSpecification) SetCapacityReservationTarget(v *CapacityReservationTarget) *CapacityReservationSpecification { + s.CapacityReservationTarget = v + return s +} + +// Describes the instance's Capacity Reservation targeting preferences. The +// action returns the capacityReservationPreference response element if the +// instance is configured to run in On-Demand capacity, or if it is configured +// in run in any open Capacity Reservation that has matching attributes (instance +// type, platform, Availability Zone). 
The action returns the capacityReservationTarget +// response element if the instance explicily targets a specific Capacity Reservation. +type CapacityReservationSpecificationResponse struct { + _ struct{} `type:"structure"` + + // Describes the instance's Capacity Reservation preferences. Possible preferences + // include: + // + // * open - The instance can run in any open Capacity Reservation that has + // matching attributes (instance type, platform, Availability Zone). + // + // * none - The instance avoids running in a Capacity Reservation even if + // one is available. The instance runs in On-Demand capacity. + CapacityReservationPreference *string `locationName:"capacityReservationPreference" type:"string" enum:"CapacityReservationPreference"` + + // Information about the targeted Capacity Reservation. + CapacityReservationTarget *CapacityReservationTargetResponse `locationName:"capacityReservationTarget" type:"structure"` +} + +// String returns the string representation +func (s CapacityReservationSpecificationResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CapacityReservationSpecificationResponse) GoString() string { + return s.String() +} + +// SetCapacityReservationPreference sets the CapacityReservationPreference field's value. +func (s *CapacityReservationSpecificationResponse) SetCapacityReservationPreference(v string) *CapacityReservationSpecificationResponse { + s.CapacityReservationPreference = &v + return s +} + +// SetCapacityReservationTarget sets the CapacityReservationTarget field's value. +func (s *CapacityReservationSpecificationResponse) SetCapacityReservationTarget(v *CapacityReservationTargetResponse) *CapacityReservationSpecificationResponse { + s.CapacityReservationTarget = v + return s +} + +// Describes a target Capacity Reservation. +type CapacityReservationTarget struct { + _ struct{} `type:"structure"` + + // The ID of the Capacity Reservation. + CapacityReservationId *string `type:"string"` +} + +// String returns the string representation +func (s CapacityReservationTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CapacityReservationTarget) GoString() string { + return s.String() +} + +// SetCapacityReservationId sets the CapacityReservationId field's value. +func (s *CapacityReservationTarget) SetCapacityReservationId(v string) *CapacityReservationTarget { + s.CapacityReservationId = &v + return s +} + +// Describes a target Capacity Reservation. +type CapacityReservationTargetResponse struct { + _ struct{} `type:"structure"` + + // The ID of the Capacity Reservation. + CapacityReservationId *string `locationName:"capacityReservationId" type:"string"` +} + +// String returns the string representation +func (s CapacityReservationTargetResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CapacityReservationTargetResponse) GoString() string { + return s.String() +} + +// SetCapacityReservationId sets the CapacityReservationId field's value. +func (s *CapacityReservationTargetResponse) SetCapacityReservationId(v string) *CapacityReservationTargetResponse { + s.CapacityReservationId = &v + return s +} + +// Provides authorization for Amazon to bring a specific IP address range to +// a specific AWS account using bring your own IP addresses (BYOIP). 
+type CidrAuthorizationContext struct { + _ struct{} `type:"structure"` + + // The plain-text authorization message for the prefix and account. + // + // Message is a required field + Message *string `type:"string" required:"true"` + + // The signed authorization message for the prefix and account. + // + // Signature is a required field + Signature *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CidrAuthorizationContext) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CidrAuthorizationContext) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CidrAuthorizationContext) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CidrAuthorizationContext"} + if s.Message == nil { + invalidParams.Add(request.NewErrParamRequired("Message")) + } + if s.Signature == nil { + invalidParams.Add(request.NewErrParamRequired("Signature")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMessage sets the Message field's value. +func (s *CidrAuthorizationContext) SetMessage(v string) *CidrAuthorizationContext { + s.Message = &v + return s +} + +// SetSignature sets the Signature field's value. +func (s *CidrAuthorizationContext) SetSignature(v string) *CidrAuthorizationContext { + s.Signature = &v + return s +} + // Describes an IPv4 CIDR block. type CidrBlock struct { _ struct{} `type:"structure"` @@ -25975,13 +28174,13 @@ type ClientData struct { Comment *string `type:"string"` // The time that the disk upload ends. - UploadEnd *time.Time `type:"timestamp" timestampFormat:"iso8601"` + UploadEnd *time.Time `type:"timestamp"` // The size of the uploaded disk image, in GiB. UploadSize *float64 `type:"double"` // The time that the disk upload starts. - UploadStart *time.Time `type:"timestamp" timestampFormat:"iso8601"` + UploadStart *time.Time `type:"timestamp"` } // String returns the string representation @@ -26202,9 +28401,7 @@ type ConversionTask struct { _ struct{} `type:"structure"` // The ID of the conversion task. - // - // ConversionTaskId is a required field - ConversionTaskId *string `locationName:"conversionTaskId" type:"string" required:"true"` + ConversionTaskId *string `locationName:"conversionTaskId" type:"string"` // The time when the task expires. If the upload isn't complete before the expiration // time, we automatically cancel the task. @@ -26219,9 +28416,7 @@ type ConversionTask struct { ImportVolume *ImportVolumeTaskDetails `locationName:"importVolume" type:"structure"` // The state of the conversion task. - // - // State is a required field - State *string `locationName:"state" type:"string" required:"true" enum:"ConversionTaskState"` + State *string `locationName:"state" type:"string" enum:"ConversionTaskState"` // The status message related to the conversion task. StatusMessage *string `locationName:"statusMessage" type:"string"` @@ -26416,10 +28611,12 @@ type CopyImageInput struct { DryRun *bool `locationName:"dryRun" type:"boolean"` // Specifies whether the destination snapshots of the copied image should be - // encrypted. The default CMK for EBS is used unless a non-default AWS Key Management - // Service (AWS KMS) CMK is specified with KmsKeyId. For more information, see - // Amazon EBS Encryption (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) - // in the Amazon Elastic Compute Cloud User Guide. 
+ // encrypted. You can encrypt a copy of an unencrypted snapshot, but you cannot + // create an unencrypted copy of an encrypted snapshot. The default CMK for + // EBS is used unless you specify a non-default AWS Key Management Service (AWS + // KMS) CMK using KmsKeyId. For more information, see Amazon EBS Encryption + // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) in + // the Amazon Elastic Compute Cloud User Guide. Encrypted *bool `locationName:"encrypted" type:"boolean"` // An identifier for the AWS Key Management Service (AWS KMS) customer master @@ -26580,10 +28777,10 @@ type CopySnapshotInput struct { // copy operation. This parameter is only valid for specifying the destination // region in a PresignedUrl parameter, where it is required. // - // CopySnapshot sends the snapshot copy to the regional endpoint that you send - // the HTTP request to, such as ec2.us-east-1.amazonaws.com (in the AWS CLI, - // this is specified with the --region parameter or the default region in your - // AWS configuration file). + // The snapshot copy is sent to the regional endpoint that you sent the HTTP + // request to (for example, ec2.us-east-1.amazonaws.com). With the AWS CLI, + // this is specified using the --region parameter or the default region in your + // AWS configuration file. DestinationRegion *string `locationName:"destinationRegion" type:"string"` // Checks whether you have the required permissions for the action, without @@ -26593,12 +28790,11 @@ type CopySnapshotInput struct { DryRun *bool `locationName:"dryRun" type:"boolean"` // Specifies whether the destination snapshot should be encrypted. You can encrypt - // a copy of an unencrypted snapshot using this flag, but you cannot use it - // to create an unencrypted copy from an encrypted snapshot. Your default CMK - // for EBS is used unless a non-default AWS Key Management Service (AWS KMS) - // CMK is specified with KmsKeyId. For more information, see Amazon EBS Encryption - // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) in - // the Amazon Elastic Compute Cloud User Guide. + // a copy of an unencrypted snapshot, but you cannot use it to create an unencrypted + // copy of an encrypted snapshot. Your default CMK for EBS is used unless you + // specify a non-default AWS Key Management Service (AWS KMS) CMK using KmsKeyId. + // For more information, see Amazon EBS Encryption (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) + // in the Amazon Elastic Compute Cloud User Guide. Encrypted *bool `locationName:"encrypted" type:"boolean"` // An identifier for the AWS Key Management Service (AWS KMS) customer master @@ -26628,9 +28824,9 @@ type CopySnapshotInput struct { // will eventually fail. KmsKeyId *string `locationName:"kmsKeyId" type:"string"` - // The pre-signed URL parameter is required when copying an encrypted snapshot - // with the Amazon EC2 Query API; it is available as an optional parameter in - // all other cases. For more information, see Query Requests (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Query-Requests.html). + // When you copy an encrypted source snapshot using the Amazon EC2 Query API, + // you must supply a pre-signed URL. This parameter is optional for unencrypted + // snapshots. For more information, see Query Requests (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Query-Requests.html). 
// // The PresignedUrl should use the snapshot source endpoint, the CopySnapshot // action, and include the SourceRegion, SourceSnapshotId, and DestinationRegion @@ -26752,6 +28948,311 @@ func (s *CopySnapshotOutput) SetSnapshotId(v string) *CopySnapshotOutput { return s } +// The CPU options for the instance. +type CpuOptions struct { + _ struct{} `type:"structure"` + + // The number of CPU cores for the instance. + CoreCount *int64 `locationName:"coreCount" type:"integer"` + + // The number of threads per CPU core. + ThreadsPerCore *int64 `locationName:"threadsPerCore" type:"integer"` +} + +// String returns the string representation +func (s CpuOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CpuOptions) GoString() string { + return s.String() +} + +// SetCoreCount sets the CoreCount field's value. +func (s *CpuOptions) SetCoreCount(v int64) *CpuOptions { + s.CoreCount = &v + return s +} + +// SetThreadsPerCore sets the ThreadsPerCore field's value. +func (s *CpuOptions) SetThreadsPerCore(v int64) *CpuOptions { + s.ThreadsPerCore = &v + return s +} + +// The CPU options for the instance. Both the core count and threads per core +// must be specified in the request. +type CpuOptionsRequest struct { + _ struct{} `type:"structure"` + + // The number of CPU cores for the instance. + CoreCount *int64 `type:"integer"` + + // The number of threads per CPU core. To disable Intel Hyper-Threading Technology + // for the instance, specify a value of 1. Otherwise, specify the default value + // of 2. + ThreadsPerCore *int64 `type:"integer"` +} + +// String returns the string representation +func (s CpuOptionsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CpuOptionsRequest) GoString() string { + return s.String() +} + +// SetCoreCount sets the CoreCount field's value. +func (s *CpuOptionsRequest) SetCoreCount(v int64) *CpuOptionsRequest { + s.CoreCount = &v + return s +} + +// SetThreadsPerCore sets the ThreadsPerCore field's value. +func (s *CpuOptionsRequest) SetThreadsPerCore(v int64) *CpuOptionsRequest { + s.ThreadsPerCore = &v + return s +} + +type CreateCapacityReservationInput struct { + _ struct{} `type:"structure"` + + // The Availability Zone in which to create the Capacity Reservation. + // + // AvailabilityZone is a required field + AvailabilityZone *string `type:"string" required:"true"` + + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). + // + // Constraint: Maximum 64 ASCII characters. + ClientToken *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // Indicates whether the Capacity Reservation supports EBS-optimized instances. + // This optimization provides dedicated throughput to Amazon EBS and an optimized + // configuration stack to provide optimal I/O performance. This optimization + // isn't available with all instance types. Additional usage charges apply when + // using an EBS- optimized instance. 
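//
// Example (illustrative sketch): a CpuOptionsRequest that disables Intel
// Hyper-Threading by requesting one thread per core, as described above. The core
// count is a hypothetical value, and attaching this struct to an instance launch
// request is assumed to follow the CpuOptions parameter on that call.
//
//    cpuOptions := &ec2.CpuOptionsRequest{
//        CoreCount:      aws.Int64(4),
//        ThreadsPerCore: aws.Int64(1),
//    }
//    _ = cpuOptions
//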
+ EbsOptimized *bool `type:"boolean"` + + // The date and time at which the Capacity Reservation expires. When a Capacity + // Reservation expires, the reserved capacity is released and you can no longer + // launch instances into it. The Capacity Reservation's state changes to expired + // when it reaches its end date and time. + // + // You must provide an EndDate value if EndDateType is limited. Omit EndDate + // if EndDateType is unlimited. + // + // If the EndDateType is limited, the Capacity Reservation is cancelled within + // an hour from the specified time. For example, if you specify 5/31/2019, 13:30:55, + // the Capacity Reservation is guaranteed to end between 13:30:55 and 14:30:55 + // on 5/31/2019. + EndDate *time.Time `type:"timestamp"` + + // Indicates the way in which the Capacity Reservation ends. A Capacity Reservation + // can have one of the following end types: + // + // * unlimited - The Capacity Reservation remains active until you explicitly + // cancel it. Do not provide an EndDate if the EndDateType is unlimited. + // + // * limited - The Capacity Reservation expires automatically at a specified + // date and time. You must provide an EndDate value if the EndDateType value + // is limited. + EndDateType *string `type:"string" enum:"EndDateType"` + + // Indicates whether the Capacity Reservation supports instances with temporary, + // block-level storage. + EphemeralStorage *bool `type:"boolean"` + + // The number of instances for which to reserve capacity. + // + // InstanceCount is a required field + InstanceCount *int64 `type:"integer" required:"true"` + + // Indicates the type of instance launches that the Capacity Reservation accepts. + // The options include: + // + // * open - The Capacity Reservation automatically matches all instances + // that have matching attributes (instance type, platform, and Availability + // Zone). Instances that have matching attributes run in the Capacity Reservation + // automatically without specifying any additional parameters. + // + // * targeted - The Capacity Reservation only accepts instances that have + // matching attributes (instance type, platform, and Availability Zone), + // and explicitly target the Capacity Reservation. This ensures that only + // permitted instances can use the reserved capacity. + // + // Default: open + InstanceMatchCriteria *string `type:"string" enum:"InstanceMatchCriteria"` + + // The type of operating system for which to reserve capacity. + // + // InstancePlatform is a required field + InstancePlatform *string `type:"string" required:"true" enum:"CapacityReservationInstancePlatform"` + + // The instance type for which to reserve capacity. For more information, see + // Instance Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) + // in the Amazon Elastic Compute Cloud User Guide. + // + // InstanceType is a required field + InstanceType *string `type:"string" required:"true"` + + // The tags to apply to the Capacity Reservation during launch. + TagSpecifications []*TagSpecification `locationNameList:"item" type:"list"` + + // Indicates the tenancy of the Capacity Reservation. A Capacity Reservation + // can have one of the following tenancy settings: + // + // * default - The Capacity Reservation is created on hardware that is shared + // with other AWS accounts. + // + // * dedicated - The Capacity Reservation is created on single-tenant hardware + // that is dedicated to a single AWS account. 
+ Tenancy *string `type:"string" enum:"CapacityReservationTenancy"` +} + +// String returns the string representation +func (s CreateCapacityReservationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCapacityReservationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateCapacityReservationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCapacityReservationInput"} + if s.AvailabilityZone == nil { + invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) + } + if s.InstanceCount == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceCount")) + } + if s.InstancePlatform == nil { + invalidParams.Add(request.NewErrParamRequired("InstancePlatform")) + } + if s.InstanceType == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateCapacityReservationInput) SetAvailabilityZone(v string) *CreateCapacityReservationInput { + s.AvailabilityZone = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateCapacityReservationInput) SetClientToken(v string) *CreateCapacityReservationInput { + s.ClientToken = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CreateCapacityReservationInput) SetDryRun(v bool) *CreateCapacityReservationInput { + s.DryRun = &v + return s +} + +// SetEbsOptimized sets the EbsOptimized field's value. +func (s *CreateCapacityReservationInput) SetEbsOptimized(v bool) *CreateCapacityReservationInput { + s.EbsOptimized = &v + return s +} + +// SetEndDate sets the EndDate field's value. +func (s *CreateCapacityReservationInput) SetEndDate(v time.Time) *CreateCapacityReservationInput { + s.EndDate = &v + return s +} + +// SetEndDateType sets the EndDateType field's value. +func (s *CreateCapacityReservationInput) SetEndDateType(v string) *CreateCapacityReservationInput { + s.EndDateType = &v + return s +} + +// SetEphemeralStorage sets the EphemeralStorage field's value. +func (s *CreateCapacityReservationInput) SetEphemeralStorage(v bool) *CreateCapacityReservationInput { + s.EphemeralStorage = &v + return s +} + +// SetInstanceCount sets the InstanceCount field's value. +func (s *CreateCapacityReservationInput) SetInstanceCount(v int64) *CreateCapacityReservationInput { + s.InstanceCount = &v + return s +} + +// SetInstanceMatchCriteria sets the InstanceMatchCriteria field's value. +func (s *CreateCapacityReservationInput) SetInstanceMatchCriteria(v string) *CreateCapacityReservationInput { + s.InstanceMatchCriteria = &v + return s +} + +// SetInstancePlatform sets the InstancePlatform field's value. +func (s *CreateCapacityReservationInput) SetInstancePlatform(v string) *CreateCapacityReservationInput { + s.InstancePlatform = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *CreateCapacityReservationInput) SetInstanceType(v string) *CreateCapacityReservationInput { + s.InstanceType = &v + return s +} + +// SetTagSpecifications sets the TagSpecifications field's value. +func (s *CreateCapacityReservationInput) SetTagSpecifications(v []*TagSpecification) *CreateCapacityReservationInput { + s.TagSpecifications = v + return s +} + +// SetTenancy sets the Tenancy field's value. 
+func (s *CreateCapacityReservationInput) SetTenancy(v string) *CreateCapacityReservationInput { + s.Tenancy = &v + return s +} + +type CreateCapacityReservationOutput struct { + _ struct{} `type:"structure"` + + // Information about the Capacity Reservation. + CapacityReservation *CapacityReservation `locationName:"capacityReservation" type:"structure"` +} + +// String returns the string representation +func (s CreateCapacityReservationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCapacityReservationOutput) GoString() string { + return s.String() +} + +// SetCapacityReservation sets the CapacityReservation field's value. +func (s *CreateCapacityReservationOutput) SetCapacityReservation(v *CapacityReservation) *CreateCapacityReservationOutput { + s.CapacityReservation = v + return s +} + // Contains the parameters for CreateCustomerGateway. type CreateCustomerGatewayInput struct { _ struct{} `type:"structure"` @@ -26931,7 +29432,6 @@ func (s *CreateDefaultSubnetOutput) SetSubnet(v *Subnet) *CreateDefaultSubnetOut return s } -// Contains the parameters for CreateDefaultVpc. type CreateDefaultVpcInput struct { _ struct{} `type:"structure"` @@ -26958,7 +29458,6 @@ func (s *CreateDefaultVpcInput) SetDryRun(v bool) *CreateDefaultVpcInput { return s } -// Contains the output of CreateDefaultVpc. type CreateDefaultVpcOutput struct { _ struct{} `type:"structure"` @@ -26982,7 +29481,6 @@ func (s *CreateDefaultVpcOutput) SetVpc(v *Vpc) *CreateDefaultVpcOutput { return s } -// Contains the parameters for CreateDhcpOptions. type CreateDhcpOptionsInput struct { _ struct{} `type:"structure"` @@ -27033,7 +29531,6 @@ func (s *CreateDhcpOptionsInput) SetDryRun(v bool) *CreateDhcpOptionsInput { return s } -// Contains the output of CreateDhcpOptions. type CreateDhcpOptionsOutput struct { _ struct{} `type:"structure"` @@ -27060,8 +29557,8 @@ func (s *CreateDhcpOptionsOutput) SetDhcpOptions(v *DhcpOptions) *CreateDhcpOpti type CreateEgressOnlyInternetGatewayInput struct { _ struct{} `type:"structure"` - // Unique, case-sensitive identifier you provide to ensure the idempotency of - // the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html). + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html). ClientToken *string `type:"string"` // Checks whether you have the required permissions for the action, without @@ -27070,7 +29567,7 @@ type CreateEgressOnlyInternetGatewayInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // The ID of the VPC for which to create the egress-only Internet gateway. + // The ID of the VPC for which to create the egress-only internet gateway. // // VpcId is a required field VpcId *string `type:"string" required:"true"` @@ -27120,11 +29617,11 @@ func (s *CreateEgressOnlyInternetGatewayInput) SetVpcId(v string) *CreateEgressO type CreateEgressOnlyInternetGatewayOutput struct { _ struct{} `type:"structure"` - // Unique, case-sensitive identifier you provide to ensure the idempotency of - // the request. + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. 
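//
// Example (illustrative sketch): reserving capacity for two instances with the new
// Capacity Reservation types above. The zone, platform, and instance type are
// hypothetical values; svc is an assumed *ec2.EC2 client, and the call assumes the
// matching CreateCapacityReservation client method elsewhere in this file.
//
//    out, err := svc.CreateCapacityReservation(&ec2.CreateCapacityReservationInput{
//        AvailabilityZone: aws.String("us-east-1a"),
//        InstanceCount:    aws.Int64(2),
//        InstancePlatform: aws.String("Linux/UNIX"),
//        InstanceType:     aws.String("m5.large"),
//        EndDateType:      aws.String("unlimited"),
//    })
//    if err == nil {
//        fmt.Println(aws.StringValue(out.CapacityReservation.CapacityReservationId))
//    }
//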
ClientToken *string `locationName:"clientToken" type:"string"` - // Information about the egress-only Internet gateway. + // Information about the egress-only internet gateway. EgressOnlyInternetGateway *EgressOnlyInternetGateway `locationName:"egressOnlyInternetGateway" type:"structure"` } @@ -27150,24 +29647,397 @@ func (s *CreateEgressOnlyInternetGatewayOutput) SetEgressOnlyInternetGateway(v * return s } -// Contains the parameters for CreateFlowLogs. -type CreateFlowLogsInput struct { +// Describes the instances that could not be launched by the fleet. +type CreateFleetError struct { + _ struct{} `type:"structure"` + + // The error code that indicates why the instance could not be launched. For + // more information about error codes, see Error Codes (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html.html). + ErrorCode *string `locationName:"errorCode" type:"string"` + + // The error message that describes why the instance could not be launched. + // For more information about error messages, see ee Error Codes (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html.html). + ErrorMessage *string `locationName:"errorMessage" type:"string"` + + // The launch templates and overrides that were used for launching the instances. + // Any parameters that you specify in the Overrides override the same parameters + // in the launch template. + LaunchTemplateAndOverrides *LaunchTemplateAndOverridesResponse `locationName:"launchTemplateAndOverrides" type:"structure"` + + // Indicates if the instance that could not be launched was a Spot Instance + // or On-Demand Instance. + Lifecycle *string `locationName:"lifecycle" type:"string" enum:"InstanceLifecycle"` +} + +// String returns the string representation +func (s CreateFleetError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFleetError) GoString() string { + return s.String() +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *CreateFleetError) SetErrorCode(v string) *CreateFleetError { + s.ErrorCode = &v + return s +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *CreateFleetError) SetErrorMessage(v string) *CreateFleetError { + s.ErrorMessage = &v + return s +} + +// SetLaunchTemplateAndOverrides sets the LaunchTemplateAndOverrides field's value. +func (s *CreateFleetError) SetLaunchTemplateAndOverrides(v *LaunchTemplateAndOverridesResponse) *CreateFleetError { + s.LaunchTemplateAndOverrides = v + return s +} + +// SetLifecycle sets the Lifecycle field's value. +func (s *CreateFleetError) SetLifecycle(v string) *CreateFleetError { + s.Lifecycle = &v + return s +} + +type CreateFleetInput struct { _ struct{} `type:"structure"` // Unique, case-sensitive identifier you provide to ensure the idempotency of - // the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html). + // the request. For more information, see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). ClientToken *string `type:"string"` - // The ARN for the IAM role that's used to post flow logs to a CloudWatch Logs - // log group. + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. 
Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // Indicates whether running instances should be terminated if the total target + // capacity of the EC2 Fleet is decreased below the current size of the EC2 + // Fleet. + ExcessCapacityTerminationPolicy *string `type:"string" enum:"FleetExcessCapacityTerminationPolicy"` + + // The configuration for the EC2 Fleet. // - // DeliverLogsPermissionArn is a required field - DeliverLogsPermissionArn *string `type:"string" required:"true"` + // LaunchTemplateConfigs is a required field + LaunchTemplateConfigs []*FleetLaunchTemplateConfigRequest `locationNameList:"item" type:"list" required:"true"` - // The name of the CloudWatch log group. + // The allocation strategy of On-Demand Instances in an EC2 Fleet. + OnDemandOptions *OnDemandOptionsRequest `type:"structure"` + + // Indicates whether EC2 Fleet should replace unhealthy instances. + ReplaceUnhealthyInstances *bool `type:"boolean"` + + // Describes the configuration of Spot Instances in an EC2 Fleet. + SpotOptions *SpotOptionsRequest `type:"structure"` + + // The key-value pair for tagging the EC2 Fleet request on creation. The value + // for ResourceType must be fleet, otherwise the fleet request fails. To tag + // instances at launch, specify the tags in the launch template (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html#create-launch-template). + // For information about tagging after launch, see Tagging Your Resources (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-resources). + TagSpecifications []*TagSpecification `locationName:"TagSpecification" locationNameList:"item" type:"list"` + + // The TotalTargetCapacity, OnDemandTargetCapacity, SpotTargetCapacity, and + // DefaultCapacityType structure. // - // LogGroupName is a required field - LogGroupName *string `type:"string" required:"true"` + // TargetCapacitySpecification is a required field + TargetCapacitySpecification *TargetCapacitySpecificationRequest `type:"structure" required:"true"` + + // Indicates whether running instances should be terminated when the EC2 Fleet + // expires. + TerminateInstancesWithExpiration *bool `type:"boolean"` + + // The type of the request. By default, the EC2 Fleet places an asynchronous + // request for your desired capacity, and maintains it by replenishing interrupted + // Spot Instances (maintain). A value of instant places a synchronous one-time + // request, and returns errors for any instances that could not be launched. + // A value of request places an asynchronous one-time request without maintaining + // capacity or submitting requests in alternative capacity pools if capacity + // is unavailable. For more information, see EC2 Fleet Request Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-fleet-configuration-strategies.html#ec2-fleet-request-type) + // in the Amazon Elastic Compute Cloud User Guide. + Type *string `type:"string" enum:"FleetType"` + + // The start date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). + // The default is to start fulfilling the request immediately. + ValidFrom *time.Time `type:"timestamp"` + + // The end date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). + // At this point, no new EC2 Fleet requests are placed or able to fulfill the + // request. The default end date is 7 days from the current date. 
+ ValidUntil *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s CreateFleetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFleetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateFleetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateFleetInput"} + if s.LaunchTemplateConfigs == nil { + invalidParams.Add(request.NewErrParamRequired("LaunchTemplateConfigs")) + } + if s.TargetCapacitySpecification == nil { + invalidParams.Add(request.NewErrParamRequired("TargetCapacitySpecification")) + } + if s.LaunchTemplateConfigs != nil { + for i, v := range s.LaunchTemplateConfigs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "LaunchTemplateConfigs", i), err.(request.ErrInvalidParams)) + } + } + } + if s.TargetCapacitySpecification != nil { + if err := s.TargetCapacitySpecification.Validate(); err != nil { + invalidParams.AddNested("TargetCapacitySpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateFleetInput) SetClientToken(v string) *CreateFleetInput { + s.ClientToken = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CreateFleetInput) SetDryRun(v bool) *CreateFleetInput { + s.DryRun = &v + return s +} + +// SetExcessCapacityTerminationPolicy sets the ExcessCapacityTerminationPolicy field's value. +func (s *CreateFleetInput) SetExcessCapacityTerminationPolicy(v string) *CreateFleetInput { + s.ExcessCapacityTerminationPolicy = &v + return s +} + +// SetLaunchTemplateConfigs sets the LaunchTemplateConfigs field's value. +func (s *CreateFleetInput) SetLaunchTemplateConfigs(v []*FleetLaunchTemplateConfigRequest) *CreateFleetInput { + s.LaunchTemplateConfigs = v + return s +} + +// SetOnDemandOptions sets the OnDemandOptions field's value. +func (s *CreateFleetInput) SetOnDemandOptions(v *OnDemandOptionsRequest) *CreateFleetInput { + s.OnDemandOptions = v + return s +} + +// SetReplaceUnhealthyInstances sets the ReplaceUnhealthyInstances field's value. +func (s *CreateFleetInput) SetReplaceUnhealthyInstances(v bool) *CreateFleetInput { + s.ReplaceUnhealthyInstances = &v + return s +} + +// SetSpotOptions sets the SpotOptions field's value. +func (s *CreateFleetInput) SetSpotOptions(v *SpotOptionsRequest) *CreateFleetInput { + s.SpotOptions = v + return s +} + +// SetTagSpecifications sets the TagSpecifications field's value. +func (s *CreateFleetInput) SetTagSpecifications(v []*TagSpecification) *CreateFleetInput { + s.TagSpecifications = v + return s +} + +// SetTargetCapacitySpecification sets the TargetCapacitySpecification field's value. +func (s *CreateFleetInput) SetTargetCapacitySpecification(v *TargetCapacitySpecificationRequest) *CreateFleetInput { + s.TargetCapacitySpecification = v + return s +} + +// SetTerminateInstancesWithExpiration sets the TerminateInstancesWithExpiration field's value. +func (s *CreateFleetInput) SetTerminateInstancesWithExpiration(v bool) *CreateFleetInput { + s.TerminateInstancesWithExpiration = &v + return s +} + +// SetType sets the Type field's value. 
+func (s *CreateFleetInput) SetType(v string) *CreateFleetInput { + s.Type = &v + return s +} + +// SetValidFrom sets the ValidFrom field's value. +func (s *CreateFleetInput) SetValidFrom(v time.Time) *CreateFleetInput { + s.ValidFrom = &v + return s +} + +// SetValidUntil sets the ValidUntil field's value. +func (s *CreateFleetInput) SetValidUntil(v time.Time) *CreateFleetInput { + s.ValidUntil = &v + return s +} + +// Describes the instances that were launched by the fleet. +type CreateFleetInstance struct { + _ struct{} `type:"structure"` + + // The IDs of the instances. + InstanceIds []*string `locationName:"instanceIds" locationNameList:"item" type:"list"` + + // The instance type. + InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` + + // The launch templates and overrides that were used for launching the instances. + // Any parameters that you specify in the Overrides override the same parameters + // in the launch template. + LaunchTemplateAndOverrides *LaunchTemplateAndOverridesResponse `locationName:"launchTemplateAndOverrides" type:"structure"` + + // Indicates if the instance that was launched is a Spot Instance or On-Demand + // Instance. + Lifecycle *string `locationName:"lifecycle" type:"string" enum:"InstanceLifecycle"` + + // The value is Windows for Windows instances; otherwise blank. + Platform *string `locationName:"platform" type:"string" enum:"PlatformValues"` +} + +// String returns the string representation +func (s CreateFleetInstance) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFleetInstance) GoString() string { + return s.String() +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *CreateFleetInstance) SetInstanceIds(v []*string) *CreateFleetInstance { + s.InstanceIds = v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *CreateFleetInstance) SetInstanceType(v string) *CreateFleetInstance { + s.InstanceType = &v + return s +} + +// SetLaunchTemplateAndOverrides sets the LaunchTemplateAndOverrides field's value. +func (s *CreateFleetInstance) SetLaunchTemplateAndOverrides(v *LaunchTemplateAndOverridesResponse) *CreateFleetInstance { + s.LaunchTemplateAndOverrides = v + return s +} + +// SetLifecycle sets the Lifecycle field's value. +func (s *CreateFleetInstance) SetLifecycle(v string) *CreateFleetInstance { + s.Lifecycle = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *CreateFleetInstance) SetPlatform(v string) *CreateFleetInstance { + s.Platform = &v + return s +} + +type CreateFleetOutput struct { + _ struct{} `type:"structure"` + + // Information about the instances that could not be launched by the fleet. + // Valid only when Type is set to instant. + Errors []*CreateFleetError `locationName:"errorSet" locationNameList:"item" type:"list"` + + // The ID of the EC2 Fleet. + FleetId *string `locationName:"fleetId" type:"string"` + + // Information about the instances that were launched by the fleet. Valid only + // when Type is set to instant. + Instances []*CreateFleetInstance `locationName:"fleetInstanceSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s CreateFleetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFleetOutput) GoString() string { + return s.String() +} + +// SetErrors sets the Errors field's value. 
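For reference, a minimal sketch of an `instant` EC2 Fleet request built from the `CreateFleetInput` and `CreateFleetOutput` types above. The launch template ID is a placeholder, and the nested `FleetLaunchTemplateSpecificationRequest`, `TotalTargetCapacity`, and `DefaultTargetCapacityType` names are assumed from the SDK version vendored here rather than shown in this hunk:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.CreateFleet(&ec2.CreateFleetInput{
		// "instant" places a synchronous one-time request and returns the
		// launched instances (and any launch errors) in the response.
		Type: aws.String("instant"),
		LaunchTemplateConfigs: []*ec2.FleetLaunchTemplateConfigRequest{{
			LaunchTemplateSpecification: &ec2.FleetLaunchTemplateSpecificationRequest{
				LaunchTemplateId: aws.String("lt-0123456789abcdef0"), // placeholder
				Version:          aws.String("$Latest"),
			},
		}},
		TargetCapacitySpecification: &ec2.TargetCapacitySpecificationRequest{
			TotalTargetCapacity:       aws.Int64(2),
			DefaultTargetCapacityType: aws.String("spot"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("fleet:", aws.StringValue(out.FleetId))
	for _, i := range out.Instances {
		fmt.Println("launched:", aws.StringValueSlice(i.InstanceIds))
	}
	for _, e := range out.Errors {
		fmt.Println("error:", aws.StringValue(e.ErrorCode), aws.StringValue(e.ErrorMessage))
	}
}
```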
+func (s *CreateFleetOutput) SetErrors(v []*CreateFleetError) *CreateFleetOutput { + s.Errors = v + return s +} + +// SetFleetId sets the FleetId field's value. +func (s *CreateFleetOutput) SetFleetId(v string) *CreateFleetOutput { + s.FleetId = &v + return s +} + +// SetInstances sets the Instances field's value. +func (s *CreateFleetOutput) SetInstances(v []*CreateFleetInstance) *CreateFleetOutput { + s.Instances = v + return s +} + +type CreateFlowLogsInput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html). + ClientToken *string `type:"string"` + + // The ARN for the IAM role that's used to post flow logs to a log group. + DeliverLogsPermissionArn *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // Specifies the destination to which the flow log data is to be published. + // Flow log data can be published to an CloudWatch Logs log group or an Amazon + // S3 bucket. The value specified for this parameter depends on the value specified + // for LogDestinationType. + // + // If LogDestinationType is not specified or cloud-watch-logs, specify the Amazon + // Resource Name (ARN) of the CloudWatch Logs log group. + // + // If LogDestinationType is s3, specify the ARN of the Amazon S3 bucket. You + // can also specify a subfolder in the bucket. To specify a subfolder in the + // bucket, use the following ARN format: bucket_ARN/subfolder_name/. For example, + // to specify a subfolder named my-logs in a bucket named my-bucket, use the + // following ARN: arn:aws:s3:::my-bucket/my-logs/. You cannot use AWSLogs as + // a subfolder name. This is a reserved term. + LogDestination *string `type:"string"` + + // Specifies the type of destination to which the flow log data is to be published. + // Flow log data can be published to CloudWatch Logs or Amazon S3. To publish + // flow log data to CloudWatch Logs, specify cloud-watch-logs. To publish flow + // log data to Amazon S3, specify s3. + // + // Default: cloud-watch-logs + LogDestinationType *string `type:"string" enum:"LogDestinationType"` + + // The name of the log group. + LogGroupName *string `type:"string"` // One or more subnet, network interface, or VPC IDs. // @@ -27200,12 +30070,6 @@ func (s CreateFlowLogsInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *CreateFlowLogsInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreateFlowLogsInput"} - if s.DeliverLogsPermissionArn == nil { - invalidParams.Add(request.NewErrParamRequired("DeliverLogsPermissionArn")) - } - if s.LogGroupName == nil { - invalidParams.Add(request.NewErrParamRequired("LogGroupName")) - } if s.ResourceIds == nil { invalidParams.Add(request.NewErrParamRequired("ResourceIds")) } @@ -27234,6 +30098,24 @@ func (s *CreateFlowLogsInput) SetDeliverLogsPermissionArn(v string) *CreateFlowL return s } +// SetDryRun sets the DryRun field's value. 
+func (s *CreateFlowLogsInput) SetDryRun(v bool) *CreateFlowLogsInput { + s.DryRun = &v + return s +} + +// SetLogDestination sets the LogDestination field's value. +func (s *CreateFlowLogsInput) SetLogDestination(v string) *CreateFlowLogsInput { + s.LogDestination = &v + return s +} + +// SetLogDestinationType sets the LogDestinationType field's value. +func (s *CreateFlowLogsInput) SetLogDestinationType(v string) *CreateFlowLogsInput { + s.LogDestinationType = &v + return s +} + // SetLogGroupName sets the LogGroupName field's value. func (s *CreateFlowLogsInput) SetLogGroupName(v string) *CreateFlowLogsInput { s.LogGroupName = &v @@ -27258,12 +30140,11 @@ func (s *CreateFlowLogsInput) SetTrafficType(v string) *CreateFlowLogsInput { return s } -// Contains the output of CreateFlowLogs. type CreateFlowLogsOutput struct { _ struct{} `type:"structure"` - // Unique, case-sensitive identifier you provide to ensure the idempotency of - // the request. + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. ClientToken *string `locationName:"clientToken" type:"string"` // The IDs of the flow logs. @@ -27425,7 +30306,9 @@ func (s *CreateFpgaImageOutput) SetFpgaImageId(v string) *CreateFpgaImageOutput type CreateImageInput struct { _ struct{} `type:"structure"` - // Information about one or more block device mappings. + // Information about one or more block device mappings. This parameter cannot + // be used to modify the encryption status of existing volumes or snapshots. + // To create an AMI with encrypted snapshots, use the CopyImage action. BlockDeviceMappings []*BlockDeviceMapping `locationName:"blockDeviceMapping" locationNameList:"BlockDeviceMapping" type:"list"` // A description for the new image. @@ -27635,7 +30518,6 @@ func (s *CreateInstanceExportTaskOutput) SetExportTask(v *ExportTask) *CreateIns return s } -// Contains the parameters for CreateInternetGateway. type CreateInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -27662,11 +30544,10 @@ func (s *CreateInternetGatewayInput) SetDryRun(v bool) *CreateInternetGatewayInp return s } -// Contains the output of CreateInternetGateway. type CreateInternetGatewayOutput struct { _ struct{} `type:"structure"` - // Information about the Internet gateway. + // Information about the internet gateway. InternetGateway *InternetGateway `locationName:"internetGateway" type:"structure"` } @@ -27686,7 +30567,6 @@ func (s *CreateInternetGatewayOutput) SetInternetGateway(v *InternetGateway) *Cr return s } -// Contains the parameters for CreateKeyPair. type CreateKeyPairInput struct { _ struct{} `type:"structure"` @@ -28026,7 +30906,6 @@ func (s *CreateLaunchTemplateVersionOutput) SetLaunchTemplateVersion(v *LaunchTe return s } -// Contains the parameters for CreateNatGateway. type CreateNatGatewayInput struct { _ struct{} `type:"structure"` @@ -28037,8 +30916,8 @@ type CreateNatGatewayInput struct { // AllocationId is a required field AllocationId *string `type:"string" required:"true"` - // Unique, case-sensitive identifier you provide to ensure the idempotency of - // the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). 
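The new `LogDestination` and `LogDestinationType` fields allow flow logs to be published straight to Amazon S3. A sketch of such a request, assuming the pre-existing `ResourceType` and `TrafficType` fields of `CreateFlowLogsInput` (not shown in this hunk) and using the bucket ARN format described above:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.CreateFlowLogs(&ec2.CreateFlowLogsInput{
		ResourceIds:  []*string{aws.String("vpc-0123456789abcdef0")}, // placeholder VPC ID
		ResourceType: aws.String("VPC"),
		TrafficType:  aws.String("ALL"),
		// Publish directly to S3 instead of CloudWatch Logs. The bucket ARN
		// may include a subfolder, but not one named AWSLogs.
		LogDestinationType: aws.String("s3"),
		LogDestination:     aws.String("arn:aws:s3:::my-bucket/my-logs/"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("flow logs:", aws.StringValueSlice(out.FlowLogIds))
}
```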
// // Constraint: Maximum 64 ASCII characters. ClientToken *string `type:"string"` @@ -28093,7 +30972,6 @@ func (s *CreateNatGatewayInput) SetSubnetId(v string) *CreateNatGatewayInput { return s } -// Contains the output of CreateNatGateway. type CreateNatGatewayOutput struct { _ struct{} `type:"structure"` @@ -28127,7 +31005,6 @@ func (s *CreateNatGatewayOutput) SetNatGateway(v *NatGateway) *CreateNatGatewayO return s } -// Contains the parameters for CreateNetworkAclEntry. type CreateNetworkAclEntryInput struct { _ struct{} `type:"structure"` @@ -28146,8 +31023,8 @@ type CreateNetworkAclEntryInput struct { // Egress is a required field Egress *bool `locationName:"egress" type:"boolean" required:"true"` - // ICMP protocol: The ICMP or ICMPv6 type and code. Required if specifying the - // ICMP protocol, or protocol 58 (ICMPv6) with an IPv6 CIDR block. + // ICMP protocol: The ICMP or ICMPv6 type and code. Required if specifying protocol + // 1 (ICMP) or protocol 58 (ICMPv6) with an IPv6 CIDR block. IcmpTypeCode *IcmpTypeCode `locationName:"Icmp" type:"structure"` // The IPv6 network range to allow or deny, in CIDR notation (for example 2001:db8:1234:1a00::/64). @@ -28158,16 +31035,17 @@ type CreateNetworkAclEntryInput struct { // NetworkAclId is a required field NetworkAclId *string `locationName:"networkAclId" type:"string" required:"true"` - // TCP or UDP protocols: The range of ports the rule applies to. + // TCP or UDP protocols: The range of ports the rule applies to. Required if + // specifying protocol 6 (TCP) or 17 (UDP). PortRange *PortRange `locationName:"portRange" type:"structure"` - // The protocol. A value of -1 or all means all protocols. If you specify all, - // -1, or a protocol number other than tcp, udp, or icmp, traffic on all ports - // is allowed, regardless of any ports or ICMP types or codes you specify. If - // you specify protocol 58 (ICMPv6) and specify an IPv4 CIDR block, traffic - // for all ICMP types and codes allowed, regardless of any that you specify. - // If you specify protocol 58 (ICMPv6) and specify an IPv6 CIDR block, you must - // specify an ICMP type and code. + // The protocol number. A value of "-1" means all protocols. If you specify + // "-1" or a protocol number other than "6" (TCP), "17" (UDP), or "1" (ICMP), + // traffic on all ports is allowed, regardless of any ports or ICMP types or + // codes that you specify. If you specify protocol "58" (ICMPv6) and specify + // an IPv4 CIDR block, traffic for all ICMP types and codes allowed, regardless + // of any that you specify. If you specify protocol "58" (ICMPv6) and specify + // an IPv6 CIDR block, you must specify an ICMP type and code. // // Protocol is a required field Protocol *string `locationName:"protocol" type:"string" required:"true"` @@ -28296,7 +31174,6 @@ func (s CreateNetworkAclEntryOutput) GoString() string { return s.String() } -// Contains the parameters for CreateNetworkAcl. type CreateNetworkAclInput struct { _ struct{} `type:"structure"` @@ -28347,7 +31224,6 @@ func (s *CreateNetworkAclInput) SetVpcId(v string) *CreateNetworkAclInput { return s } -// Contains the output of CreateNetworkAcl. 
type CreateNetworkAclOutput struct { _ struct{} `type:"structure"` @@ -28441,16 +31317,6 @@ func (s *CreateNetworkInterfaceInput) Validate() error { if s.SubnetId == nil { invalidParams.Add(request.NewErrParamRequired("SubnetId")) } - if s.PrivateIpAddresses != nil { - for i, v := range s.PrivateIpAddresses { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PrivateIpAddresses", i), err.(request.ErrInvalidParams)) - } - } - } if invalidParams.Len() > 0 { return invalidParams @@ -28836,7 +31702,6 @@ func (s *CreateReservedInstancesListingOutput) SetReservedInstancesListings(v [] return s } -// Contains the parameters for CreateRoute. type CreateRouteInput struct { _ struct{} `type:"structure"` @@ -28854,10 +31719,10 @@ type CreateRouteInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // [IPv6 traffic only] The ID of an egress-only Internet gateway. + // [IPv6 traffic only] The ID of an egress-only internet gateway. EgressOnlyInternetGatewayId *string `locationName:"egressOnlyInternetGatewayId" type:"string"` - // The ID of an Internet gateway or virtual private gateway attached to your + // The ID of an internet gateway or virtual private gateway attached to your // VPC. GatewayId *string `locationName:"gatewayId" type:"string"` @@ -28963,7 +31828,6 @@ func (s *CreateRouteInput) SetVpcPeeringConnectionId(v string) *CreateRouteInput return s } -// Contains the output of CreateRoute. type CreateRouteOutput struct { _ struct{} `type:"structure"` @@ -28987,7 +31851,6 @@ func (s *CreateRouteOutput) SetReturn(v bool) *CreateRouteOutput { return s } -// Contains the parameters for CreateRouteTable. type CreateRouteTableInput struct { _ struct{} `type:"structure"` @@ -29038,7 +31901,6 @@ func (s *CreateRouteTableInput) SetVpcId(v string) *CreateRouteTableInput { return s } -// Contains the output of CreateRouteTable. type CreateRouteTableOutput struct { _ struct{} `type:"structure"` @@ -29062,7 +31924,6 @@ func (s *CreateRouteTableOutput) SetRouteTable(v *RouteTable) *CreateRouteTableO return s } -// Contains the parameters for CreateSecurityGroup. type CreateSecurityGroupInput struct { _ struct{} `type:"structure"` @@ -29148,7 +32009,6 @@ func (s *CreateSecurityGroupInput) SetVpcId(v string) *CreateSecurityGroupInput return s } -// Contains the output of CreateSecurityGroup. type CreateSecurityGroupOutput struct { _ struct{} `type:"structure"` @@ -29325,7 +32185,6 @@ func (s *CreateSpotDatafeedSubscriptionOutput) SetSpotDatafeedSubscription(v *Sp return s } -// Contains the parameters for CreateSubnet. type CreateSubnetInput struct { _ struct{} `type:"structure"` @@ -29412,7 +32271,6 @@ func (s *CreateSubnetInput) SetVpcId(v string) *CreateSubnetInput { return s } -// Contains the output of CreateSubnet. type CreateSubnetOutput struct { _ struct{} `type:"structure"` @@ -29436,7 +32294,6 @@ func (s *CreateSubnetOutput) SetSubnet(v *Subnet) *CreateSubnetOutput { return s } -// Contains the parameters for CreateTags. type CreateTagsInput struct { _ struct{} `type:"structure"` @@ -29446,7 +32303,7 @@ type CreateTagsInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The IDs of one or more resources to tag. For example, ami-1a2b3c4d. + // The IDs of one or more resources, separated by spaces. 
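A sketch of tagging several resources in one `CreateTags` call, as the updated `Resources` documentation describes. The `Tags` field and `ec2.Tag` type are assumed from the surrounding SDK, and the IDs are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Tag two resources (an instance and a volume) in a single request.
	_, err := svc.CreateTags(&ec2.CreateTagsInput{
		Resources: []*string{
			aws.String("i-0123456789abcdef0"),
			aws.String("vol-0123456789abcdef0"),
		},
		Tags: []*ec2.Tag{
			{Key: aws.String("Owner"), Value: aws.String("TeamA")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```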
// // Resources is a required field Resources []*string `locationName:"ResourceId" type:"list" required:"true"` @@ -29542,11 +32399,12 @@ type CreateVolumeInput struct { // in the Amazon Elastic Compute Cloud User Guide. Encrypted *bool `locationName:"encrypted" type:"boolean"` - // Only valid for Provisioned IOPS SSD volumes. The number of I/O operations - // per second (IOPS) to provision for the volume, with a maximum ratio of 50 - // IOPS/GiB. + // The number of I/O operations per second (IOPS) to provision for the volume, + // with a maximum ratio of 50 IOPS/GiB. Range is 100 to 32000 IOPS for volumes + // in most regions. For exceptions, see Amazon EBS Volume Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) + // in the Amazon Elastic Compute Cloud User Guide. // - // Constraint: Range is 100 to 20000 for Provisioned IOPS SSD volumes + // This parameter is valid only for Provisioned IOPS SSD (io1) volumes. Iops *int64 `type:"integer"` // An identifier for the AWS Key Management Service (AWS KMS) customer master @@ -29596,7 +32454,10 @@ type CreateVolumeInput struct { // IOPS SSD, st1 for Throughput Optimized HDD, sc1 for Cold HDD, or standard // for Magnetic volumes. // - // Default: standard + // Defaults: If no volume type is specified, the default is standard in us-east-1, + // eu-west-1, eu-central-1, us-west-2, us-west-1, sa-east-1, ap-northeast-1, + // ap-northeast-2, ap-southeast-1, ap-southeast-2, ap-south-1, us-gov-west-1, + // and cn-north-1. In all other regions, EBS defaults to gp2. VolumeType *string `type:"string" enum:"VolumeType"` } @@ -29906,7 +32767,7 @@ type CreateVpcEndpointInput struct { // true: enableDnsHostnames and enableDnsSupport. Use ModifyVpcAttribute to // set the VPC attributes. // - // Default: true + // Default: false PrivateDnsEnabled *bool `type:"boolean"` // (Gateway endpoint) One or more route table IDs. @@ -30161,7 +33022,6 @@ func (s *CreateVpcEndpointServiceConfigurationOutput) SetServiceConfiguration(v return s } -// Contains the parameters for CreateVpc. type CreateVpcInput struct { _ struct{} `type:"structure"` @@ -30241,7 +33101,6 @@ func (s *CreateVpcInput) SetInstanceTenancy(v string) *CreateVpcInput { return s } -// Contains the output of CreateVpc. type CreateVpcOutput struct { _ struct{} `type:"structure"` @@ -30265,7 +33124,6 @@ func (s *CreateVpcOutput) SetVpc(v *Vpc) *CreateVpcOutput { return s } -// Contains the parameters for CreateVpcPeeringConnection. type CreateVpcPeeringConnectionInput struct { _ struct{} `type:"structure"` @@ -30334,7 +33192,6 @@ func (s *CreateVpcPeeringConnectionInput) SetVpcId(v string) *CreateVpcPeeringCo return s } -// Contains the output of CreateVpcPeeringConnection. type CreateVpcPeeringConnectionOutput struct { _ struct{} `type:"structure"` @@ -30634,11 +33491,12 @@ func (s *CreateVpnGatewayOutput) SetVpnGateway(v *VpnGateway) *CreateVpnGatewayO return s } -// Describes the credit option for CPU usage of a T2 instance. +// Describes the credit option for CPU usage of a T2 or T3 instance. type CreditSpecification struct { _ struct{} `type:"structure"` - // The credit option for CPU usage of a T2 instance. + // The credit option for CPU usage of a T2 or T3 instance. Valid values are + // standard and unlimited. CpuCredits *string `locationName:"cpuCredits" type:"string"` } @@ -30658,12 +33516,12 @@ func (s *CreditSpecification) SetCpuCredits(v string) *CreditSpecification { return s } -// The credit option for CPU usage of a T2 instance. 
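A sketch of the updated `Iops` guidance for `CreateVolume`: Provisioned IOPS apply to `io1` volumes only, at a ratio of at most 50 IOPS/GiB and within the 100-32000 range. The `AvailabilityZone` and `Size` fields are existing `CreateVolumeInput` fields assumed from the surrounding SDK:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// A 100 GiB io1 volume provisioned at 5000 IOPS, i.e. the maximum
	// 50:1 IOPS/GiB ratio described above.
	vol, err := svc.CreateVolume(&ec2.CreateVolumeInput{
		AvailabilityZone: aws.String("us-east-1a"), // placeholder AZ
		VolumeType:       aws.String("io1"),
		Size:             aws.Int64(100),
		Iops:             aws.Int64(5000),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(vol)
}
```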
+// The credit option for CPU usage of a T2 or T3 instance. type CreditSpecificationRequest struct { _ struct{} `type:"structure"` - // The credit option for CPU usage of a T2 instance. Valid values are standard - // and unlimited. + // The credit option for CPU usage of a T2 or T3 instance. Valid values are + // standard and unlimited. // // CpuCredits is a required field CpuCredits *string `type:"string" required:"true"` @@ -30834,7 +33692,6 @@ func (s DeleteCustomerGatewayOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteDhcpOptions. type DeleteDhcpOptionsInput struct { _ struct{} `type:"structure"` @@ -30908,7 +33765,7 @@ type DeleteEgressOnlyInternetGatewayInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // The ID of the egress-only Internet gateway. + // The ID of the egress-only internet gateway. // // EgressOnlyInternetGatewayId is a required field EgressOnlyInternetGatewayId *string `type:"string" required:"true"` @@ -30972,10 +33829,220 @@ func (s *DeleteEgressOnlyInternetGatewayOutput) SetReturnCode(v bool) *DeleteEgr return s } -// Contains the parameters for DeleteFlowLogs. +// Describes an EC2 Fleet error. +type DeleteFleetError struct { + _ struct{} `type:"structure"` + + // The error code. + Code *string `locationName:"code" type:"string" enum:"DeleteFleetErrorCode"` + + // The description for the error code. + Message *string `locationName:"message" type:"string"` +} + +// String returns the string representation +func (s DeleteFleetError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFleetError) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *DeleteFleetError) SetCode(v string) *DeleteFleetError { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *DeleteFleetError) SetMessage(v string) *DeleteFleetError { + s.Message = &v + return s +} + +// Describes an EC2 Fleet that was not successfully deleted. +type DeleteFleetErrorItem struct { + _ struct{} `type:"structure"` + + // The error. + Error *DeleteFleetError `locationName:"error" type:"structure"` + + // The ID of the EC2 Fleet. + FleetId *string `locationName:"fleetId" type:"string"` +} + +// String returns the string representation +func (s DeleteFleetErrorItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFleetErrorItem) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *DeleteFleetErrorItem) SetError(v *DeleteFleetError) *DeleteFleetErrorItem { + s.Error = v + return s +} + +// SetFleetId sets the FleetId field's value. +func (s *DeleteFleetErrorItem) SetFleetId(v string) *DeleteFleetErrorItem { + s.FleetId = &v + return s +} + +// Describes an EC2 Fleet that was successfully deleted. +type DeleteFleetSuccessItem struct { + _ struct{} `type:"structure"` + + // The current state of the EC2 Fleet. + CurrentFleetState *string `locationName:"currentFleetState" type:"string" enum:"FleetStateCode"` + + // The ID of the EC2 Fleet. + FleetId *string `locationName:"fleetId" type:"string"` + + // The previous state of the EC2 Fleet. 
+ PreviousFleetState *string `locationName:"previousFleetState" type:"string" enum:"FleetStateCode"` +} + +// String returns the string representation +func (s DeleteFleetSuccessItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFleetSuccessItem) GoString() string { + return s.String() +} + +// SetCurrentFleetState sets the CurrentFleetState field's value. +func (s *DeleteFleetSuccessItem) SetCurrentFleetState(v string) *DeleteFleetSuccessItem { + s.CurrentFleetState = &v + return s +} + +// SetFleetId sets the FleetId field's value. +func (s *DeleteFleetSuccessItem) SetFleetId(v string) *DeleteFleetSuccessItem { + s.FleetId = &v + return s +} + +// SetPreviousFleetState sets the PreviousFleetState field's value. +func (s *DeleteFleetSuccessItem) SetPreviousFleetState(v string) *DeleteFleetSuccessItem { + s.PreviousFleetState = &v + return s +} + +type DeleteFleetsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The IDs of the EC2 Fleets. + // + // FleetIds is a required field + FleetIds []*string `locationName:"FleetId" type:"list" required:"true"` + + // Indicates whether to terminate instances for an EC2 Fleet if it is deleted + // successfully. + // + // TerminateInstances is a required field + TerminateInstances *bool `type:"boolean" required:"true"` +} + +// String returns the string representation +func (s DeleteFleetsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFleetsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteFleetsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteFleetsInput"} + if s.FleetIds == nil { + invalidParams.Add(request.NewErrParamRequired("FleetIds")) + } + if s.TerminateInstances == nil { + invalidParams.Add(request.NewErrParamRequired("TerminateInstances")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *DeleteFleetsInput) SetDryRun(v bool) *DeleteFleetsInput { + s.DryRun = &v + return s +} + +// SetFleetIds sets the FleetIds field's value. +func (s *DeleteFleetsInput) SetFleetIds(v []*string) *DeleteFleetsInput { + s.FleetIds = v + return s +} + +// SetTerminateInstances sets the TerminateInstances field's value. +func (s *DeleteFleetsInput) SetTerminateInstances(v bool) *DeleteFleetsInput { + s.TerminateInstances = &v + return s +} + +type DeleteFleetsOutput struct { + _ struct{} `type:"structure"` + + // Information about the EC2 Fleets that are successfully deleted. + SuccessfulFleetDeletions []*DeleteFleetSuccessItem `locationName:"successfulFleetDeletionSet" locationNameList:"item" type:"list"` + + // Information about the EC2 Fleets that are not successfully deleted. 
+ UnsuccessfulFleetDeletions []*DeleteFleetErrorItem `locationName:"unsuccessfulFleetDeletionSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DeleteFleetsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFleetsOutput) GoString() string { + return s.String() +} + +// SetSuccessfulFleetDeletions sets the SuccessfulFleetDeletions field's value. +func (s *DeleteFleetsOutput) SetSuccessfulFleetDeletions(v []*DeleteFleetSuccessItem) *DeleteFleetsOutput { + s.SuccessfulFleetDeletions = v + return s +} + +// SetUnsuccessfulFleetDeletions sets the UnsuccessfulFleetDeletions field's value. +func (s *DeleteFleetsOutput) SetUnsuccessfulFleetDeletions(v []*DeleteFleetErrorItem) *DeleteFleetsOutput { + s.UnsuccessfulFleetDeletions = v + return s +} + type DeleteFlowLogsInput struct { _ struct{} `type:"structure"` + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + // One or more flow log IDs. // // FlowLogIds is a required field @@ -31005,13 +34072,18 @@ func (s *DeleteFlowLogsInput) Validate() error { return nil } +// SetDryRun sets the DryRun field's value. +func (s *DeleteFlowLogsInput) SetDryRun(v bool) *DeleteFlowLogsInput { + s.DryRun = &v + return s +} + // SetFlowLogIds sets the FlowLogIds field's value. func (s *DeleteFlowLogsInput) SetFlowLogIds(v []*string) *DeleteFlowLogsInput { s.FlowLogIds = v return s } -// Contains the output of DeleteFlowLogs. type DeleteFlowLogsOutput struct { _ struct{} `type:"structure"` @@ -31108,7 +34180,6 @@ func (s *DeleteFpgaImageOutput) SetReturn(v bool) *DeleteFpgaImageOutput { return s } -// Contains the parameters for DeleteInternetGateway. type DeleteInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -31118,7 +34189,7 @@ type DeleteInternetGatewayInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The ID of the Internet gateway. + // The ID of the internet gateway. // // InternetGatewayId is a required field InternetGatewayId *string `locationName:"internetGatewayId" type:"string" required:"true"` @@ -31173,7 +34244,6 @@ func (s DeleteInternetGatewayOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteKeyPair. type DeleteKeyPairInput struct { _ struct{} `type:"structure"` @@ -31518,7 +34588,6 @@ func (s *DeleteLaunchTemplateVersionsResponseSuccessItem) SetVersionNumber(v int return s } -// Contains the parameters for DeleteNatGateway. type DeleteNatGatewayInput struct { _ struct{} `type:"structure"` @@ -31557,7 +34626,6 @@ func (s *DeleteNatGatewayInput) SetNatGatewayId(v string) *DeleteNatGatewayInput return s } -// Contains the output of DeleteNatGateway. type DeleteNatGatewayOutput struct { _ struct{} `type:"structure"` @@ -31581,7 +34649,6 @@ func (s *DeleteNatGatewayOutput) SetNatGatewayId(v string) *DeleteNatGatewayOutp return s } -// Contains the parameters for DeleteNetworkAclEntry. type DeleteNetworkAclEntryInput struct { _ struct{} `type:"structure"` @@ -31674,7 +34741,6 @@ func (s DeleteNetworkAclEntryOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteNetworkAcl. 
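A minimal sketch of deleting EC2 Fleets with `DeleteFleets` and splitting the response into successful and unsuccessful deletions, using the types above (the fleet ID is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.DeleteFleets(&ec2.DeleteFleetsInput{
		FleetIds:           []*string{aws.String("fleet-12345678-90ab-cdef-1234-567890abcdef")}, // placeholder
		TerminateInstances: aws.Bool(true),                                                       // also terminate the fleet's instances
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, ok := range out.SuccessfulFleetDeletions {
		fmt.Printf("%s: %s -> %s\n",
			aws.StringValue(ok.FleetId),
			aws.StringValue(ok.PreviousFleetState),
			aws.StringValue(ok.CurrentFleetState))
	}
	for _, bad := range out.UnsuccessfulFleetDeletions {
		// Print the whole item; it carries the fleet ID plus an error code
		// and message.
		fmt.Println("failed:", bad)
	}
}
```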
type DeleteNetworkAclInput struct { _ struct{} `type:"structure"` @@ -31954,7 +35020,6 @@ func (s DeletePlacementGroupOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteRoute. type DeleteRouteInput struct { _ struct{} `type:"structure"` @@ -32039,7 +35104,6 @@ func (s DeleteRouteOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteRouteTable. type DeleteRouteTableInput struct { _ struct{} `type:"structure"` @@ -32104,7 +35168,6 @@ func (s DeleteRouteTableOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteSecurityGroup. type DeleteSecurityGroupInput struct { _ struct{} `type:"structure"` @@ -32270,7 +35333,6 @@ func (s DeleteSpotDatafeedSubscriptionOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteSubnet. type DeleteSubnetInput struct { _ struct{} `type:"structure"` @@ -32335,7 +35397,6 @@ func (s DeleteSubnetOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteTags. type DeleteTagsInput struct { _ struct{} `type:"structure"` @@ -32345,7 +35406,7 @@ type DeleteTagsInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The IDs of one or more resources. + // The IDs of one or more resources, separated by spaces. // // Resources is a required field Resources []*string `locationName:"resourceId" type:"list" required:"true"` @@ -32702,7 +35763,6 @@ func (s *DeleteVpcEndpointsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *Delet return s } -// Contains the parameters for DeleteVpc. type DeleteVpcInput struct { _ struct{} `type:"structure"` @@ -32767,7 +35827,6 @@ func (s DeleteVpcOutput) GoString() string { return s.String() } -// Contains the parameters for DeleteVpcPeeringConnection. type DeleteVpcPeeringConnectionInput struct { _ struct{} `type:"structure"` @@ -32818,7 +35877,6 @@ func (s *DeleteVpcPeeringConnectionInput) SetVpcPeeringConnectionId(v string) *D return s } -// Contains the output of DeleteVpcPeeringConnection. type DeleteVpcPeeringConnectionOutput struct { _ struct{} `type:"structure"` @@ -33039,6 +36097,80 @@ func (s DeleteVpnGatewayOutput) GoString() string { return s.String() } +type DeprovisionByoipCidrInput struct { + _ struct{} `type:"structure"` + + // The public IPv4 address range, in CIDR notation. The prefix must be the same + // prefix that you specified when you provisioned the address range. + // + // Cidr is a required field + Cidr *string `type:"string" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` +} + +// String returns the string representation +func (s DeprovisionByoipCidrInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeprovisionByoipCidrInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeprovisionByoipCidrInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeprovisionByoipCidrInput"} + if s.Cidr == nil { + invalidParams.Add(request.NewErrParamRequired("Cidr")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCidr sets the Cidr field's value. 
+func (s *DeprovisionByoipCidrInput) SetCidr(v string) *DeprovisionByoipCidrInput { + s.Cidr = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *DeprovisionByoipCidrInput) SetDryRun(v bool) *DeprovisionByoipCidrInput { + s.DryRun = &v + return s +} + +type DeprovisionByoipCidrOutput struct { + _ struct{} `type:"structure"` + + // Information about the address range. + ByoipCidr *ByoipCidr `locationName:"byoipCidr" type:"structure"` +} + +// String returns the string representation +func (s DeprovisionByoipCidrOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeprovisionByoipCidrOutput) GoString() string { + return s.String() +} + +// SetByoipCidr sets the ByoipCidr field's value. +func (s *DeprovisionByoipCidrOutput) SetByoipCidr(v *ByoipCidr) *DeprovisionByoipCidrOutput { + s.ByoipCidr = v + return s +} + // Contains the parameters for DeregisterImage. type DeregisterImageInput struct { _ struct{} `type:"structure"` @@ -33104,7 +36236,6 @@ func (s DeregisterImageOutput) GoString() string { return s.String() } -// Contains the parameters for DescribeAccountAttributes. type DescribeAccountAttributesInput struct { _ struct{} `type:"structure"` @@ -33140,7 +36271,6 @@ func (s *DescribeAccountAttributesInput) SetDryRun(v bool) *DescribeAccountAttri return s } -// Contains the output of DescribeAccountAttributes. type DescribeAccountAttributesOutput struct { _ struct{} `type:"structure"` @@ -33164,7 +36294,6 @@ func (s *DescribeAccountAttributesOutput) SetAccountAttributes(v []*AccountAttri return s } -// Contains the parameters for DescribeAddresses. type DescribeAddressesInput struct { _ struct{} `type:"structure"` @@ -33201,17 +36330,15 @@ type DescribeAddressesInput struct { // // * public-ip - The Elastic IP address. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of the tag's key). If you want to - // list only resources where Purpose is X, see the tag:key=value filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. + // + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` // [EC2-Classic] One or more Elastic IP addresses. @@ -33254,7 +36381,6 @@ func (s *DescribeAddressesInput) SetPublicIps(v []*string) *DescribeAddressesInp return s } -// Contains the output of DescribeAddresses. 
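A minimal sketch of releasing a BYOIP range with the `DeprovisionByoipCidr` operation added in this change; the CIDR below is a placeholder and must match the originally provisioned prefix:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.DeprovisionByoipCidr(&ec2.DeprovisionByoipCidrInput{
		Cidr: aws.String("203.0.113.0/24"), // placeholder; must equal the provisioned prefix
	})
	if err != nil {
		log.Fatal(err)
	}
	// The response echoes the address range and its current state.
	fmt.Println(out.ByoipCidr)
}
```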
type DescribeAddressesOutput struct { _ struct{} `type:"structure"` @@ -33338,7 +36464,6 @@ func (s *DescribeAggregateIdFormatOutput) SetUseLongIdsAggregated(v bool) *Descr return s } -// Contains the parameters for DescribeAvailabilityZones. type DescribeAvailabilityZonesInput struct { _ struct{} `type:"structure"` @@ -33358,9 +36483,14 @@ type DescribeAvailabilityZonesInput struct { // * state - The state of the Availability Zone (available | information // | impaired | unavailable). // + // * zone-id - The ID of the Availability Zone (for example, use1-az1). + // // * zone-name - The name of the Availability Zone (for example, us-east-1a). Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + // The IDs of one or more Availability Zones. + ZoneIds []*string `locationName:"ZoneId" locationNameList:"ZoneId" type:"list"` + // The names of one or more Availability Zones. ZoneNames []*string `locationName:"ZoneName" locationNameList:"ZoneName" type:"list"` } @@ -33387,13 +36517,18 @@ func (s *DescribeAvailabilityZonesInput) SetFilters(v []*Filter) *DescribeAvaila return s } +// SetZoneIds sets the ZoneIds field's value. +func (s *DescribeAvailabilityZonesInput) SetZoneIds(v []*string) *DescribeAvailabilityZonesInput { + s.ZoneIds = v + return s +} + // SetZoneNames sets the ZoneNames field's value. func (s *DescribeAvailabilityZonesInput) SetZoneNames(v []*string) *DescribeAvailabilityZonesInput { s.ZoneNames = v return s } -// Contains the output of DescribeAvailabiltyZones. type DescribeAvailabilityZonesOutput struct { _ struct{} `type:"structure"` @@ -33510,7 +36645,202 @@ func (s *DescribeBundleTasksOutput) SetBundleTasks(v []*BundleTask) *DescribeBun return s } -// Contains the parameters for DescribeClassicLinkInstances. +type DescribeByoipCidrsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The maximum number of results to return with a single call. To retrieve the + // remaining results, make another call with the returned nextToken value. + // + // MaxResults is a required field + MaxResults *int64 `min:"5" type:"integer" required:"true"` + + // The token for the next page of results. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeByoipCidrsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeByoipCidrsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeByoipCidrsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeByoipCidrsInput"} + if s.MaxResults == nil { + invalidParams.Add(request.NewErrParamRequired("MaxResults")) + } + if s.MaxResults != nil && *s.MaxResults < 5 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 5)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. 
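A sketch of the new zone-ID lookup path for `DescribeAvailabilityZones`; the `ZoneName` and `State` fields on the returned `AvailabilityZone` values are assumed from the surrounding SDK:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Look up zones by the new zone IDs rather than by name.
	out, err := svc.DescribeAvailabilityZones(&ec2.DescribeAvailabilityZonesInput{
		ZoneIds: []*string{aws.String("use1-az1")}, // example zone ID from the doc above
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, az := range out.AvailabilityZones {
		fmt.Println(aws.StringValue(az.ZoneName), aws.StringValue(az.State))
	}
}
```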
+func (s *DescribeByoipCidrsInput) SetDryRun(v bool) *DescribeByoipCidrsInput { + s.DryRun = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeByoipCidrsInput) SetMaxResults(v int64) *DescribeByoipCidrsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeByoipCidrsInput) SetNextToken(v string) *DescribeByoipCidrsInput { + s.NextToken = &v + return s +} + +type DescribeByoipCidrsOutput struct { + _ struct{} `type:"structure"` + + // Information about your address ranges. + ByoipCidrs []*ByoipCidr `locationName:"byoipCidrSet" locationNameList:"item" type:"list"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeByoipCidrsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeByoipCidrsOutput) GoString() string { + return s.String() +} + +// SetByoipCidrs sets the ByoipCidrs field's value. +func (s *DescribeByoipCidrsOutput) SetByoipCidrs(v []*ByoipCidr) *DescribeByoipCidrsOutput { + s.ByoipCidrs = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeByoipCidrsOutput) SetNextToken(v string) *DescribeByoipCidrsOutput { + s.NextToken = &v + return s +} + +type DescribeCapacityReservationsInput struct { + _ struct{} `type:"structure"` + + // The ID of the Capacity Reservation. + CapacityReservationIds []*string `locationName:"CapacityReservationId" locationNameList:"item" type:"list"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // The maximum number of results to return for the request in a single page. + // The remaining results can be seen by sending another request with the returned + // nextToken value. + MaxResults *int64 `type:"integer"` + + // The token to retrieve the next page of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeCapacityReservationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCapacityReservationsInput) GoString() string { + return s.String() +} + +// SetCapacityReservationIds sets the CapacityReservationIds field's value. +func (s *DescribeCapacityReservationsInput) SetCapacityReservationIds(v []*string) *DescribeCapacityReservationsInput { + s.CapacityReservationIds = v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeCapacityReservationsInput) SetDryRun(v bool) *DescribeCapacityReservationsInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeCapacityReservationsInput) SetFilters(v []*Filter) *DescribeCapacityReservationsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. 
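A minimal sketch of paging through `DescribeByoipCidrs` with the required `MaxResults` (minimum 5) and the returned `NextToken`:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	input := &ec2.DescribeByoipCidrsInput{
		MaxResults: aws.Int64(5), // required; the minimum allowed value is 5
	}
	for {
		page, err := svc.DescribeByoipCidrs(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range page.ByoipCidrs {
			fmt.Println(c)
		}
		if page.NextToken == nil {
			break // a nil token means there are no more pages
		}
		input.NextToken = page.NextToken
	}
}
```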
+func (s *DescribeCapacityReservationsInput) SetMaxResults(v int64) *DescribeCapacityReservationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeCapacityReservationsInput) SetNextToken(v string) *DescribeCapacityReservationsInput { + s.NextToken = &v + return s +} + +type DescribeCapacityReservationsOutput struct { + _ struct{} `type:"structure"` + + // Information about the Capacity Reservations. + CapacityReservations []*CapacityReservation `locationName:"capacityReservationSet" locationNameList:"item" type:"list"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeCapacityReservationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCapacityReservationsOutput) GoString() string { + return s.String() +} + +// SetCapacityReservations sets the CapacityReservations field's value. +func (s *DescribeCapacityReservationsOutput) SetCapacityReservations(v []*CapacityReservation) *DescribeCapacityReservationsOutput { + s.CapacityReservations = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeCapacityReservationsOutput) SetNextToken(v string) *DescribeCapacityReservationsOutput { + s.NextToken = &v + return s +} + type DescribeClassicLinkInstancesInput struct { _ struct{} `type:"structure"` @@ -33527,20 +36857,19 @@ type DescribeClassicLinkInstancesInput struct { // // * instance-id - The ID of the instance. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * vpc-id - The ID of the VPC to which the instance is linked. // - // * vpc-id - The ID of the VPC that the instance is linked to. + // vpc-id - The ID of the VPC that the instance is linked to. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` // One or more instance IDs. Must be instances linked to a VPC through ClassicLink. @@ -33549,7 +36878,7 @@ type DescribeClassicLinkInstancesInput struct { // The maximum number of results to return for the request in a single page. // The remaining results of the initial request can be seen by sending another // request with the returned NextToken value. 
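A minimal sketch of looking up Capacity Reservations with the `DescribeCapacityReservations` types above; the reservation ID is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.DescribeCapacityReservations(&ec2.DescribeCapacityReservationsInput{
		// Look up a specific reservation; omit this field (and page with
		// NextToken) to list every reservation in the region.
		CapacityReservationIds: []*string{aws.String("cr-0123456789abcdef0")}, // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, cr := range out.CapacityReservations {
		fmt.Println(cr)
	}
}
```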
This value can be between 5 and - // 1000; if MaxResults is given a value larger than 1000, only 1000 results + // 1000. If MaxResults is given a value larger than 1000, only 1000 results // are returned. You cannot specify this parameter and the instance IDs parameter // in the same request. // @@ -33600,7 +36929,6 @@ func (s *DescribeClassicLinkInstancesInput) SetNextToken(v string) *DescribeClas return s } -// Contains the output of DescribeClassicLinkInstances. type DescribeClassicLinkInstancesOutput struct { _ struct{} `type:"structure"` @@ -33725,21 +37053,15 @@ type DescribeCustomerGatewaysInput struct { // * type - The type of customer gateway. Currently, the only supported type // is ipsec.1. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. + // + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` } @@ -33795,7 +37117,6 @@ func (s *DescribeCustomerGatewaysOutput) SetCustomerGateways(v []*CustomerGatewa return s } -// Contains the parameters for DescribeDhcpOptions. type DescribeDhcpOptionsInput struct { _ struct{} `type:"structure"` @@ -33818,21 +37139,15 @@ type DescribeDhcpOptionsInput struct { // // * value - The value for one of the options. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag: - The key/value combination of a tag assigned to the resource. 
+ // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. + // + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` } @@ -33864,7 +37179,6 @@ func (s *DescribeDhcpOptionsInput) SetFilters(v []*Filter) *DescribeDhcpOptionsI return s } -// Contains the output of DescribeDhcpOptions. type DescribeDhcpOptionsOutput struct { _ struct{} `type:"structure"` @@ -33897,12 +37211,12 @@ type DescribeEgressOnlyInternetGatewaysInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // One or more egress-only Internet gateway IDs. + // One or more egress-only internet gateway IDs. EgressOnlyInternetGatewayIds []*string `locationName:"EgressOnlyInternetGatewayId" locationNameList:"item" type:"list"` // The maximum number of results to return for the request in a single page. // The remaining results can be seen by sending another request with the returned - // NextToken value. This value can be between 5 and 1000; if MaxResults is given + // NextToken value. This value can be between 5 and 1000. If MaxResults is given // a value larger than 1000, only 1000 results are returned. MaxResults *int64 `type:"integer"` @@ -33947,7 +37261,7 @@ func (s *DescribeEgressOnlyInternetGatewaysInput) SetNextToken(v string) *Descri type DescribeEgressOnlyInternetGatewaysOutput struct { _ struct{} `type:"structure"` - // Information about the egress-only Internet gateways. + // Information about the egress-only internet gateways. EgressOnlyInternetGateways []*EgressOnlyInternetGateway `locationName:"egressOnlyInternetGatewaySet" locationNameList:"item" type:"list"` // The token to use to retrieve the next page of results. @@ -34142,21 +37456,538 @@ func (s *DescribeExportTasksOutput) SetExportTasks(v []*ExportTask) *DescribeExp return s } -// Contains the parameters for DescribeFlowLogs. +// Describes the instances that could not be launched by the fleet. +type DescribeFleetError struct { + _ struct{} `type:"structure"` + + // The error code that indicates why the instance could not be launched. For + // more information about error codes, see Error Codes (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html.html). + ErrorCode *string `locationName:"errorCode" type:"string"` + + // The error message that describes why the instance could not be launched. + // For more information about error messages, see ee Error Codes (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html.html). + ErrorMessage *string `locationName:"errorMessage" type:"string"` + + // The launch templates and overrides that were used for launching the instances. + // Any parameters that you specify in the Overrides override the same parameters + // in the launch template. + LaunchTemplateAndOverrides *LaunchTemplateAndOverridesResponse `locationName:"launchTemplateAndOverrides" type:"structure"` + + // Indicates if the instance that could not be launched was a Spot Instance + // or On-Demand Instance. 
+ Lifecycle *string `locationName:"lifecycle" type:"string" enum:"InstanceLifecycle"` +} + +// String returns the string representation +func (s DescribeFleetError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFleetError) GoString() string { + return s.String() +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *DescribeFleetError) SetErrorCode(v string) *DescribeFleetError { + s.ErrorCode = &v + return s +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *DescribeFleetError) SetErrorMessage(v string) *DescribeFleetError { + s.ErrorMessage = &v + return s +} + +// SetLaunchTemplateAndOverrides sets the LaunchTemplateAndOverrides field's value. +func (s *DescribeFleetError) SetLaunchTemplateAndOverrides(v *LaunchTemplateAndOverridesResponse) *DescribeFleetError { + s.LaunchTemplateAndOverrides = v + return s +} + +// SetLifecycle sets the Lifecycle field's value. +func (s *DescribeFleetError) SetLifecycle(v string) *DescribeFleetError { + s.Lifecycle = &v + return s +} + +type DescribeFleetHistoryInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The type of events to describe. By default, all events are described. + EventType *string `type:"string" enum:"FleetEventType"` + + // The ID of the EC2 Fleet. + // + // FleetId is a required field + FleetId *string `type:"string" required:"true"` + + // The maximum number of results to return in a single call. Specify a value + // between 1 and 1000. The default value is 1000. To retrieve the remaining + // results, make another call with the returned NextToken value. + MaxResults *int64 `type:"integer"` + + // The token for the next set of results. + NextToken *string `type:"string"` + + // The start date and time for the events, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). + // + // StartTime is a required field + StartTime *time.Time `type:"timestamp" required:"true"` +} + +// String returns the string representation +func (s DescribeFleetHistoryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFleetHistoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeFleetHistoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeFleetHistoryInput"} + if s.FleetId == nil { + invalidParams.Add(request.NewErrParamRequired("FleetId")) + } + if s.StartTime == nil { + invalidParams.Add(request.NewErrParamRequired("StartTime")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeFleetHistoryInput) SetDryRun(v bool) *DescribeFleetHistoryInput { + s.DryRun = &v + return s +} + +// SetEventType sets the EventType field's value. +func (s *DescribeFleetHistoryInput) SetEventType(v string) *DescribeFleetHistoryInput { + s.EventType = &v + return s +} + +// SetFleetId sets the FleetId field's value. 
+func (s *DescribeFleetHistoryInput) SetFleetId(v string) *DescribeFleetHistoryInput { + s.FleetId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeFleetHistoryInput) SetMaxResults(v int64) *DescribeFleetHistoryInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeFleetHistoryInput) SetNextToken(v string) *DescribeFleetHistoryInput { + s.NextToken = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *DescribeFleetHistoryInput) SetStartTime(v time.Time) *DescribeFleetHistoryInput { + s.StartTime = &v + return s +} + +type DescribeFleetHistoryOutput struct { + _ struct{} `type:"structure"` + + // The ID of the EC Fleet. + FleetId *string `locationName:"fleetId" type:"string"` + + // Information about the events in the history of the EC2 Fleet. + HistoryRecords []*HistoryRecordEntry `locationName:"historyRecordSet" locationNameList:"item" type:"list"` + + // The last date and time for the events, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). + // All records up to this time were retrieved. + // + // If nextToken indicates that there are more results, this value is not present. + LastEvaluatedTime *time.Time `locationName:"lastEvaluatedTime" type:"timestamp"` + + // The token for the next set of results. + NextToken *string `locationName:"nextToken" type:"string"` + + // The start date and time for the events, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). + StartTime *time.Time `locationName:"startTime" type:"timestamp"` +} + +// String returns the string representation +func (s DescribeFleetHistoryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFleetHistoryOutput) GoString() string { + return s.String() +} + +// SetFleetId sets the FleetId field's value. +func (s *DescribeFleetHistoryOutput) SetFleetId(v string) *DescribeFleetHistoryOutput { + s.FleetId = &v + return s +} + +// SetHistoryRecords sets the HistoryRecords field's value. +func (s *DescribeFleetHistoryOutput) SetHistoryRecords(v []*HistoryRecordEntry) *DescribeFleetHistoryOutput { + s.HistoryRecords = v + return s +} + +// SetLastEvaluatedTime sets the LastEvaluatedTime field's value. +func (s *DescribeFleetHistoryOutput) SetLastEvaluatedTime(v time.Time) *DescribeFleetHistoryOutput { + s.LastEvaluatedTime = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeFleetHistoryOutput) SetNextToken(v string) *DescribeFleetHistoryOutput { + s.NextToken = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *DescribeFleetHistoryOutput) SetStartTime(v time.Time) *DescribeFleetHistoryOutput { + s.StartTime = &v + return s +} + +type DescribeFleetInstancesInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + // + // * instance-type - The instance type. + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // The ID of the EC2 Fleet. + // + // FleetId is a required field + FleetId *string `type:"string" required:"true"` + + // The maximum number of results to return in a single call. 
Specify a value + // between 1 and 1000. The default value is 1000. To retrieve the remaining + // results, make another call with the returned NextToken value. + MaxResults *int64 `type:"integer"` + + // The token for the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeFleetInstancesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFleetInstancesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeFleetInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeFleetInstancesInput"} + if s.FleetId == nil { + invalidParams.Add(request.NewErrParamRequired("FleetId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeFleetInstancesInput) SetDryRun(v bool) *DescribeFleetInstancesInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeFleetInstancesInput) SetFilters(v []*Filter) *DescribeFleetInstancesInput { + s.Filters = v + return s +} + +// SetFleetId sets the FleetId field's value. +func (s *DescribeFleetInstancesInput) SetFleetId(v string) *DescribeFleetInstancesInput { + s.FleetId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeFleetInstancesInput) SetMaxResults(v int64) *DescribeFleetInstancesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeFleetInstancesInput) SetNextToken(v string) *DescribeFleetInstancesInput { + s.NextToken = &v + return s +} + +type DescribeFleetInstancesOutput struct { + _ struct{} `type:"structure"` + + // The running instances. This list is refreshed periodically and might be out + // of date. + ActiveInstances []*ActiveInstance `locationName:"activeInstanceSet" locationNameList:"item" type:"list"` + + // The ID of the EC2 Fleet. + FleetId *string `locationName:"fleetId" type:"string"` + + // The token for the next set of results. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeFleetInstancesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFleetInstancesOutput) GoString() string { + return s.String() +} + +// SetActiveInstances sets the ActiveInstances field's value. +func (s *DescribeFleetInstancesOutput) SetActiveInstances(v []*ActiveInstance) *DescribeFleetInstancesOutput { + s.ActiveInstances = v + return s +} + +// SetFleetId sets the FleetId field's value. +func (s *DescribeFleetInstancesOutput) SetFleetId(v string) *DescribeFleetInstancesOutput { + s.FleetId = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeFleetInstancesOutput) SetNextToken(v string) *DescribeFleetInstancesOutput { + s.NextToken = &v + return s +} + +type DescribeFleetsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. 
+ // + // * activity-status - The progress of the EC2 Fleet ( error | pending-fulfillment + // | pending-termination | fulfilled). + // + // * excess-capacity-termination-policy - Indicates whether to terminate + // running instances if the target capacity is decreased below the current + // EC2 Fleet size (true | false). + // + // * fleet-state - The state of the EC2 Fleet (submitted | active | deleted + // | failed | deleted-running | deleted-terminating | modifying). + // + // * replace-unhealthy-instances - Indicates whether EC2 Fleet should replace + // unhealthy instances (true | false). + // + // * type - The type of request (instant | request | maintain). + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // The ID of the EC2 Fleets. + FleetIds []*string `locationName:"FleetId" type:"list"` + + // The maximum number of results to return in a single call. Specify a value + // between 1 and 1000. The default value is 1000. To retrieve the remaining + // results, make another call with the returned NextToken value. + MaxResults *int64 `type:"integer"` + + // The token for the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeFleetsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFleetsInput) GoString() string { + return s.String() +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeFleetsInput) SetDryRun(v bool) *DescribeFleetsInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeFleetsInput) SetFilters(v []*Filter) *DescribeFleetsInput { + s.Filters = v + return s +} + +// SetFleetIds sets the FleetIds field's value. +func (s *DescribeFleetsInput) SetFleetIds(v []*string) *DescribeFleetsInput { + s.FleetIds = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeFleetsInput) SetMaxResults(v int64) *DescribeFleetsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeFleetsInput) SetNextToken(v string) *DescribeFleetsInput { + s.NextToken = &v + return s +} + +// Describes the instances that were launched by the fleet. +type DescribeFleetsInstances struct { + _ struct{} `type:"structure"` + + // The IDs of the instances. + InstanceIds []*string `locationName:"instanceIds" locationNameList:"item" type:"list"` + + // The instance type. + InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` + + // The launch templates and overrides that were used for launching the instances. + // Any parameters that you specify in the Overrides override the same parameters + // in the launch template. + LaunchTemplateAndOverrides *LaunchTemplateAndOverridesResponse `locationName:"launchTemplateAndOverrides" type:"structure"` + + // Indicates if the instance that was launched is a Spot Instance or On-Demand + // Instance. + Lifecycle *string `locationName:"lifecycle" type:"string" enum:"InstanceLifecycle"` + + // The value is Windows for Windows instances; otherwise blank. 
+ Platform *string `locationName:"platform" type:"string" enum:"PlatformValues"` +} + +// String returns the string representation +func (s DescribeFleetsInstances) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFleetsInstances) GoString() string { + return s.String() +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DescribeFleetsInstances) SetInstanceIds(v []*string) *DescribeFleetsInstances { + s.InstanceIds = v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *DescribeFleetsInstances) SetInstanceType(v string) *DescribeFleetsInstances { + s.InstanceType = &v + return s +} + +// SetLaunchTemplateAndOverrides sets the LaunchTemplateAndOverrides field's value. +func (s *DescribeFleetsInstances) SetLaunchTemplateAndOverrides(v *LaunchTemplateAndOverridesResponse) *DescribeFleetsInstances { + s.LaunchTemplateAndOverrides = v + return s +} + +// SetLifecycle sets the Lifecycle field's value. +func (s *DescribeFleetsInstances) SetLifecycle(v string) *DescribeFleetsInstances { + s.Lifecycle = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *DescribeFleetsInstances) SetPlatform(v string) *DescribeFleetsInstances { + s.Platform = &v + return s +} + +type DescribeFleetsOutput struct { + _ struct{} `type:"structure"` + + // Information about the EC2 Fleets. + Fleets []*FleetData `locationName:"fleetSet" locationNameList:"item" type:"list"` + + // The token for the next set of results. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeFleetsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFleetsOutput) GoString() string { + return s.String() +} + +// SetFleets sets the Fleets field's value. +func (s *DescribeFleetsOutput) SetFleets(v []*FleetData) *DescribeFleetsOutput { + s.Fleets = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeFleetsOutput) SetNextToken(v string) *DescribeFleetsOutput { + s.NextToken = &v + return s +} + type DescribeFlowLogsInput struct { _ struct{} `type:"structure"` + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + // One or more filters. // // * deliver-log-status - The status of the logs delivery (SUCCESS | FAILED). // + // * log-destination-type - The type of destination to which the flow log + // publishes data. Possible destination types include cloud-watch-logs and + // S3. + // // * flow-log-id - The ID of the flow log. // // * log-group-name - The name of the log group. // // * resource-id - The ID of the VPC, subnet, or network interface. // - // * traffic-type - The type of traffic (ACCEPT | REJECT | ALL) + // * traffic-type - The type of traffic (ACCEPT | REJECT | ALL). Filter []*Filter `locationNameList:"Filter" type:"list"` // One or more flow log IDs. @@ -34164,7 +37995,7 @@ type DescribeFlowLogsInput struct { // The maximum number of results to return for the request in a single page. // The remaining results can be seen by sending another request with the returned - // NextToken value. This value can be between 5 and 1000; if MaxResults is given + // NextToken value. 
This value can be between 5 and 1000. If MaxResults is given // a value larger than 1000, only 1000 results are returned. You cannot specify // this parameter and the flow log IDs parameter in the same request. MaxResults *int64 `type:"integer"` @@ -34183,6 +38014,12 @@ func (s DescribeFlowLogsInput) GoString() string { return s.String() } +// SetDryRun sets the DryRun field's value. +func (s *DescribeFlowLogsInput) SetDryRun(v bool) *DescribeFlowLogsInput { + s.DryRun = &v + return s +} + // SetFilter sets the Filter field's value. func (s *DescribeFlowLogsInput) SetFilter(v []*Filter) *DescribeFlowLogsInput { s.Filter = v @@ -34207,7 +38044,6 @@ func (s *DescribeFlowLogsInput) SetNextToken(v string) *DescribeFlowLogsInput { return s } -// Contains the output of DescribeFlowLogs. type DescribeFlowLogsOutput struct { _ struct{} `type:"structure"` @@ -34356,21 +38192,15 @@ type DescribeFpgaImagesInput struct { // // * state - The state of the AFI (pending | failed | available | unavailable). // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * update-time - The time of the most recent update. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -34489,22 +38319,23 @@ type DescribeHostReservationOfferingsInput struct { // One or more filters. // - // * instance-family - The instance family of the offering (e.g., m4). + // * instance-family - The instance family of the offering (for example, + // m4). // // * payment-option - The payment option (NoUpfront | PartialUpfront | AllUpfront). Filter []*Filter `locationNameList:"Filter" type:"list"` - // This is the maximum duration of the reservation you'd like to purchase, specified - // in seconds. Reservations are available in one-year and three-year terms. - // The number of seconds specified must be the number of seconds in a year (365x24x60x60) + // This is the maximum duration of the reservation to purchase, specified in + // seconds. Reservations are available in one-year and three-year terms. The + // number of seconds specified must be the number of seconds in a year (365x24x60x60) // times one of the supported durations (1 or 3). For example, specify 94608000 // for three years. 
MaxDuration *int64 `type:"integer"` // The maximum number of results to return for the request in a single page. // The remaining results can be seen by sending another request with the returned - // nextToken value. This value can be between 5 and 500; if maxResults is given - // a larger value than 500, you will receive an error. + // nextToken value. This value can be between 5 and 500. If maxResults is given + // a larger value than 500, you receive an error. MaxResults *int64 `type:"integer"` // This is the minimum duration of the reservation you'd like to purchase, specified @@ -34605,7 +38436,7 @@ type DescribeHostReservationsInput struct { // One or more filters. // - // * instance-family - The instance family (e.g., m4). + // * instance-family - The instance family (for example, m4). // // * payment-option - The payment option (NoUpfront | PartialUpfront | AllUpfront). // @@ -34618,8 +38449,8 @@ type DescribeHostReservationsInput struct { // The maximum number of results to return for the request in a single page. // The remaining results can be seen by sending another request with the returned - // nextToken value. This value can be between 5 and 500; if maxResults is given - // a larger value than 500, you will receive an error. + // nextToken value. This value can be between 5 and 500. If maxResults is given + // a larger value than 500, you receive an error. MaxResults *int64 `type:"integer"` // The token to use to retrieve the next page of results. @@ -34693,27 +38524,30 @@ func (s *DescribeHostReservationsOutput) SetNextToken(v string) *DescribeHostRes return s } -// Contains the parameters for DescribeHosts. type DescribeHostsInput struct { _ struct{} `type:"structure"` // One or more filters. // - // * instance-type - The instance type size that the Dedicated Host is configured - // to support. - // // * auto-placement - Whether auto-placement is enabled or disabled (on | // off). // + // * availability-zone - The Availability Zone of the host. + // + // * client-token - The idempotency token that you provided when you allocated + // the host. + // // * host-reservation-id - The ID of the reservation assigned to this host. // - // * client-token - The idempotency token you provided when you launched - // the instance + // * instance-type - The instance type size that the Dedicated Host is configured + // to support. // - // * state- The allocation state of the Dedicated Host (available | under-assessment + // * state - The allocation state of the Dedicated Host (available | under-assessment // | permanent-failure | released | released-permanent-failure). // - // * availability-zone - The Availability Zone of the host. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. Filter []*Filter `locationName:"filter" locationNameList:"Filter" type:"list"` // The IDs of the Dedicated Hosts. The IDs are used for targeted instance launches. @@ -34721,9 +38555,9 @@ type DescribeHostsInput struct { // The maximum number of results to return for the request in a single page. // The remaining results can be seen by sending another request with the returned - // nextToken value. This value can be between 5 and 500; if maxResults is given - // a larger value than 500, you will receive an error. You cannot specify this - // parameter and the host IDs parameter in the same request. + // nextToken value. This value can be between 5 and 500. 
If maxResults is given + // a larger value than 500, you receive an error. You cannot specify this parameter + // and the host IDs parameter in the same request. MaxResults *int64 `locationName:"maxResults" type:"integer"` // The token to retrieve the next page of results. @@ -34764,7 +38598,6 @@ func (s *DescribeHostsInput) SetNextToken(v string) *DescribeHostsInput { return s } -// Contains the output of DescribeHosts. type DescribeHostsOutput struct { _ struct{} `type:"structure"` @@ -34903,7 +38736,6 @@ func (s *DescribeIamInstanceProfileAssociationsOutput) SetNextToken(v string) *D return s } -// Contains the parameters for DescribeIdFormat. type DescribeIdFormatInput struct { _ struct{} `type:"structure"` @@ -34933,7 +38765,6 @@ func (s *DescribeIdFormatInput) SetResource(v string) *DescribeIdFormatInput { return s } -// Contains the output of DescribeIdFormat. type DescribeIdFormatOutput struct { _ struct{} `type:"structure"` @@ -34957,7 +38788,6 @@ func (s *DescribeIdFormatOutput) SetStatuses(v []*IdFormat) *DescribeIdFormatOut return s } -// Contains the parameters for DescribeIdentityIdFormat. type DescribeIdentityIdFormatInput struct { _ struct{} `type:"structure"` @@ -35012,7 +38842,6 @@ func (s *DescribeIdentityIdFormatInput) SetResource(v string) *DescribeIdentityI return s } -// Contains the output of DescribeIdentityIdFormat. type DescribeIdentityIdFormatOutput struct { _ struct{} `type:"structure"` @@ -35273,21 +39102,15 @@ type DescribeImagesInput struct { // * sriov-net-support - A value of simple indicates that enhanced networking // with the Intel 82599 VF interface is enabled. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * virtualization-type - The virtualization type (paravirtual | hvm). Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -35919,7 +39742,7 @@ type DescribeInstanceStatusInput struct { // example, 2014-09-15T17:15:20.000Z). // // * instance-state-code - The code for the instance state, as a 16-bit unsigned - // integer. The high byte is an opaque internal value and should be ignored. + // integer. The high byte is used for internal purposes and should be ignored. 
// The low byte is set based on the state represented. The valid values are // 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), // and 80 (stopped). @@ -36103,7 +39926,7 @@ type DescribeInstancesInput struct { // Scheduled Instance (spot | scheduled). // // * instance-state-code - The state of the instance, as a 16-bit unsigned - // integer. The high byte is an opaque internal value and should be ignored. + // integer. The high byte is used for internal purposes and should be ignored. // The low byte is set based on the state represented. The valid values are: // 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), // and 80 (stopped). @@ -36270,20 +40093,15 @@ type DescribeInstancesInput struct { // // * subnet-id - The ID of the subnet for the instance. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of the tag's key). If you want to - // list only resources where Purpose is X, see the tag:key=value filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources that have a tag with a specific key, regardless + // of the tag value. // // * tenancy - The tenancy of an instance (dedicated | default | host). // @@ -36301,7 +40119,7 @@ type DescribeInstancesInput struct { // The maximum number of results to return in a single call. To retrieve the // remaining results, make another call with the returned NextToken value. This // value can be between 5 and 1000. You cannot specify this parameter and the - // instance IDs parameter or tag filters in the same call. + // instance IDs parameter in the same call. MaxResults *int64 `locationName:"maxResults" type:"integer"` // The token to request the next page of results. @@ -36382,7 +40200,6 @@ func (s *DescribeInstancesOutput) SetReservations(v []*Reservation) *DescribeIns return s } -// Contains the parameters for DescribeInternetGateways. type DescribeInternetGatewaysInput struct { _ struct{} `type:"structure"` @@ -36401,26 +40218,20 @@ type DescribeInternetGatewaysInput struct { // // * internet-gateway-id - The ID of the Internet gateway. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. 
This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. + // + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` - // One or more Internet gateway IDs. + // One or more internet gateway IDs. // - // Default: Describes all your Internet gateways. + // Default: Describes all your internet gateways. InternetGatewayIds []*string `locationName:"internetGatewayId" locationNameList:"item" type:"list"` } @@ -36452,11 +40263,10 @@ func (s *DescribeInternetGatewaysInput) SetInternetGatewayIds(v []*string) *Desc return s } -// Contains the output of DescribeInternetGateways. type DescribeInternetGatewaysOutput struct { _ struct{} `type:"structure"` - // Information about one or more Internet gateways. + // Information about one or more internet gateways. InternetGateways []*InternetGateway `locationName:"internetGatewaySet" locationNameList:"item" type:"list"` } @@ -36476,7 +40286,6 @@ func (s *DescribeInternetGatewaysOutput) SetInternetGateways(v []*InternetGatewa return s } -// Contains the parameters for DescribeKeyPairs. type DescribeKeyPairsInput struct { _ struct{} `type:"structure"` @@ -36527,7 +40336,6 @@ func (s *DescribeKeyPairsInput) SetKeyNames(v []*string) *DescribeKeyPairsInput return s } -// Contains the output of DescribeKeyPairs. type DescribeKeyPairsOutput struct { _ struct{} `type:"structure"` @@ -36591,7 +40399,7 @@ type DescribeLaunchTemplateVersionsInput struct { // The maximum number of results to return in a single call. To retrieve the // remaining results, make another call with the returned NextToken value. This - // value can be between 5 and 1000. + // value can be between 1 and 200. MaxResults *int64 `type:"integer"` // The version number up to which to describe launch template versions. @@ -36732,17 +40540,15 @@ type DescribeLaunchTemplatesInput struct { // // * launch-template-name - The name of the launch template. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of the tag's key). 
If you want to - // list only resources where Purpose is X, see the tag:key=value filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. + // + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` // One or more launch template IDs. @@ -36753,7 +40559,7 @@ type DescribeLaunchTemplatesInput struct { // The maximum number of results to return in a single call. To retrieve the // remaining results, make another call with the returned NextToken value. This - // value can be between 5 and 1000. + // value can be between 1 and 200. MaxResults *int64 `type:"integer"` // The token to request the next page of results. @@ -36839,7 +40645,6 @@ func (s *DescribeLaunchTemplatesOutput) SetNextToken(v string) *DescribeLaunchTe return s } -// Contains the parameters for DescribeMovingAddresses. type DescribeMovingAddressesInput struct { _ struct{} `type:"structure"` @@ -36863,7 +40668,7 @@ type DescribeMovingAddressesInput struct { // Default: If no value is provided, the default is 1000. MaxResults *int64 `locationName:"maxResults" type:"integer"` - // The token to use to retrieve the next page of results. + // The token for the next page of results. NextToken *string `locationName:"nextToken" type:"string"` // One or more Elastic IP addresses. @@ -36910,7 +40715,6 @@ func (s *DescribeMovingAddressesInput) SetPublicIps(v []*string) *DescribeMoving return s } -// Contains the output of DescribeMovingAddresses. type DescribeMovingAddressesOutput struct { _ struct{} `type:"structure"` @@ -36944,7 +40748,6 @@ func (s *DescribeMovingAddressesOutput) SetNextToken(v string) *DescribeMovingAd return s } -// Contains the parameters for DescribeNatGateways. type DescribeNatGatewaysInput struct { _ struct{} `type:"structure"` @@ -36957,21 +40760,15 @@ type DescribeNatGatewaysInput struct { // // * subnet-id - The ID of the subnet in which the NAT gateway resides. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. 
This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * vpc-id - The ID of the VPC in which the NAT gateway resides. Filter []*Filter `locationNameList:"Filter" type:"list"` @@ -37025,7 +40822,6 @@ func (s *DescribeNatGatewaysInput) SetNextToken(v string) *DescribeNatGatewaysIn return s } -// Contains the output of DescribeNatGateways. type DescribeNatGatewaysOutput struct { _ struct{} `type:"structure"` @@ -37059,7 +40855,6 @@ func (s *DescribeNatGatewaysOutput) SetNextToken(v string) *DescribeNatGatewaysO return s } -// Contains the parameters for DescribeNetworkAcls. type DescribeNetworkAclsInput struct { _ struct{} `type:"structure"` @@ -37083,8 +40878,6 @@ type DescribeNetworkAclsInput struct { // // * entry.cidr - The IPv4 CIDR range specified in the entry. // - // * entry.egress - Indicates whether the entry applies to egress traffic. - // // * entry.icmp.code - The ICMP code specified in the entry, if any. // // * entry.icmp.type - The ICMP type specified in the entry, if any. @@ -37103,25 +40896,19 @@ type DescribeNetworkAclsInput struct { // * entry.rule-action - Allows or denies the matching traffic (allow | deny). // // * entry.rule-number - The number of an entry (in other words, rule) in - // the ACL's set of entries. + // the set of ACL entries. // // * network-acl-id - The ID of the network ACL. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * vpc-id - The ID of the VPC for the network ACL. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -37160,7 +40947,6 @@ func (s *DescribeNetworkAclsInput) SetNetworkAclIds(v []*string) *DescribeNetwor return s } -// Contains the output of DescribeNetworkAcls. type DescribeNetworkAclsOutput struct { _ struct{} `type:"structure"` @@ -37507,29 +41293,31 @@ type DescribeNetworkInterfacesInput struct { // // * subnet-id - The ID of the subnet for the network interface. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. 
- // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * vpc-id - The ID of the VPC for the network interface. Filters []*Filter `locationName:"filter" locationNameList:"Filter" type:"list"` + // The maximum number of items to return for this request. The request returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `type:"integer"` + // One or more network interface IDs. // // Default: Describes all your network interfaces. NetworkInterfaceIds []*string `locationName:"NetworkInterfaceId" locationNameList:"item" type:"list"` + + // The token to retrieve the next page of results. + NextToken *string `type:"string"` } // String returns the string representation @@ -37554,18 +41342,34 @@ func (s *DescribeNetworkInterfacesInput) SetFilters(v []*Filter) *DescribeNetwor return s } +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeNetworkInterfacesInput) SetMaxResults(v int64) *DescribeNetworkInterfacesInput { + s.MaxResults = &v + return s +} + // SetNetworkInterfaceIds sets the NetworkInterfaceIds field's value. func (s *DescribeNetworkInterfacesInput) SetNetworkInterfaceIds(v []*string) *DescribeNetworkInterfacesInput { s.NetworkInterfaceIds = v return s } +// SetNextToken sets the NextToken field's value. +func (s *DescribeNetworkInterfacesInput) SetNextToken(v string) *DescribeNetworkInterfacesInput { + s.NextToken = &v + return s +} + // Contains the output of DescribeNetworkInterfaces. type DescribeNetworkInterfacesOutput struct { _ struct{} `type:"structure"` // Information about one or more network interfaces. NetworkInterfaces []*NetworkInterface `locationName:"networkInterfaceSet" locationNameList:"item" type:"list"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation @@ -37584,6 +41388,12 @@ func (s *DescribeNetworkInterfacesOutput) SetNetworkInterfaces(v []*NetworkInter return s } +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeNetworkInterfacesOutput) SetNextToken(v string) *DescribeNetworkInterfacesOutput { + s.NextToken = &v + return s +} + // Contains the parameters for DescribePlacementGroups. type DescribePlacementGroupsInput struct { _ struct{} `type:"structure"` @@ -37662,7 +41472,6 @@ func (s *DescribePlacementGroupsOutput) SetPlacementGroups(v []*PlacementGroup) return s } -// Contains the parameters for DescribePrefixLists. type DescribePrefixListsInput struct { _ struct{} `type:"structure"` @@ -37735,7 +41544,6 @@ func (s *DescribePrefixListsInput) SetPrefixListIds(v []*string) *DescribePrefix return s } -// Contains the output of DescribePrefixLists. type DescribePrefixListsOutput struct { _ struct{} `type:"structure"` @@ -37862,7 +41670,97 @@ func (s *DescribePrincipalIdFormatOutput) SetPrincipals(v []*PrincipalIdFormat) return s } -// Contains the parameters for DescribeRegions. +type DescribePublicIpv4PoolsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to return with a single call. To retrieve the + // remaining results, make another call with the returned nextToken value. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next page of results. + NextToken *string `min:"1" type:"string"` + + // The IDs of the address pools. + PoolIds []*string `locationName:"PoolId" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DescribePublicIpv4PoolsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePublicIpv4PoolsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribePublicIpv4PoolsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribePublicIpv4PoolsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribePublicIpv4PoolsInput) SetMaxResults(v int64) *DescribePublicIpv4PoolsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribePublicIpv4PoolsInput) SetNextToken(v string) *DescribePublicIpv4PoolsInput { + s.NextToken = &v + return s +} + +// SetPoolIds sets the PoolIds field's value. +func (s *DescribePublicIpv4PoolsInput) SetPoolIds(v []*string) *DescribePublicIpv4PoolsInput { + s.PoolIds = v + return s +} + +type DescribePublicIpv4PoolsOutput struct { + _ struct{} `type:"structure"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` + + // Information about the address pools. + PublicIpv4Pools []*PublicIpv4Pool `locationName:"publicIpv4PoolSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DescribePublicIpv4PoolsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePublicIpv4PoolsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. 
+func (s *DescribePublicIpv4PoolsOutput) SetNextToken(v string) *DescribePublicIpv4PoolsOutput { + s.NextToken = &v + return s +} + +// SetPublicIpv4Pools sets the PublicIpv4Pools field's value. +func (s *DescribePublicIpv4PoolsOutput) SetPublicIpv4Pools(v []*PublicIpv4Pool) *DescribePublicIpv4PoolsOutput { + s.PublicIpv4Pools = v + return s +} + type DescribeRegionsInput struct { _ struct{} `type:"structure"` @@ -37911,7 +41809,6 @@ func (s *DescribeRegionsInput) SetRegionNames(v []*string) *DescribeRegionsInput return s } -// Contains the output of DescribeRegions. type DescribeRegionsOutput struct { _ struct{} `type:"structure"` @@ -37980,21 +41877,15 @@ type DescribeReservedInstancesInput struct { // * state - The state of the Reserved Instance (payment-pending | active // | payment-failed | retired). // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * usage-price - The usage price of the Reserved Instance, per hour (for // example, 0.84). @@ -38505,7 +42396,6 @@ func (s *DescribeReservedInstancesOutput) SetReservedInstances(v []*ReservedInst return s } -// Contains the parameters for DescribeRouteTables. type DescribeRouteTablesInput struct { _ struct{} `type:"structure"` @@ -38564,25 +42454,27 @@ type DescribeRouteTablesInput struct { // * route.vpc-peering-connection-id - The ID of a VPC peering connection // specified in a route in the table. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. 
+ // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * vpc-id - The ID of the VPC for the route table. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + // The maximum number of results to return in a single call. To retrieve the + // remaining results, make another call with the returned NextToken value. This + // value can be between 5 and 100. + MaxResults *int64 `type:"integer"` + + // The token to retrieve the next page of results. + NextToken *string `type:"string"` + // One or more route table IDs. // // Default: Describes all your route tables. @@ -38611,6 +42503,18 @@ func (s *DescribeRouteTablesInput) SetFilters(v []*Filter) *DescribeRouteTablesI return s } +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeRouteTablesInput) SetMaxResults(v int64) *DescribeRouteTablesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeRouteTablesInput) SetNextToken(v string) *DescribeRouteTablesInput { + s.NextToken = &v + return s +} + // SetRouteTableIds sets the RouteTableIds field's value. func (s *DescribeRouteTablesInput) SetRouteTableIds(v []*string) *DescribeRouteTablesInput { s.RouteTableIds = v @@ -38621,6 +42525,10 @@ func (s *DescribeRouteTablesInput) SetRouteTableIds(v []*string) *DescribeRouteT type DescribeRouteTablesOutput struct { _ struct{} `type:"structure"` + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` + // Information about one or more route tables. RouteTables []*RouteTable `locationName:"routeTableSet" locationNameList:"item" type:"list"` } @@ -38635,6 +42543,12 @@ func (s DescribeRouteTablesOutput) GoString() string { return s.String() } +// SetNextToken sets the NextToken field's value. +func (s *DescribeRouteTablesOutput) SetNextToken(v string) *DescribeRouteTablesOutput { + s.NextToken = &v + return s +} + // SetRouteTables sets the RouteTables field's value. func (s *DescribeRouteTablesOutput) SetRouteTables(v []*RouteTable) *DescribeRouteTablesOutput { s.RouteTables = v @@ -38923,7 +42837,7 @@ func (s *DescribeScheduledInstancesOutput) SetScheduledInstanceSet(v []*Schedule type DescribeSecurityGroupReferencesInput struct { _ struct{} `type:"structure"` - // Checks whether you have the required permissions for the operation, without + // Checks whether you have the required permissions for the action, without // actually making the request, and provides an error response. If you have // the required permissions, the error response is DryRunOperation. Otherwise, // it is UnauthorizedOperation. @@ -38993,7 +42907,6 @@ func (s *DescribeSecurityGroupReferencesOutput) SetSecurityGroupReferenceSet(v [ return s } -// Contains the parameters for DescribeSecurityGroups. 
type DescribeSecurityGroupsInput struct { _ struct{} `type:"structure"` @@ -39069,9 +42982,15 @@ type DescribeSecurityGroupsInput struct { // // * owner-id - The AWS account ID of the owner of the security group. // - // * tag-key - The key of a tag assigned to the security group. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the security group. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * vpc-id - The ID of the VPC specified when the security group was created. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -39146,7 +43065,6 @@ func (s *DescribeSecurityGroupsInput) SetNextToken(v string) *DescribeSecurityGr return s } -// Contains the output of DescribeSecurityGroups. type DescribeSecurityGroupsOutput struct { _ struct{} `type:"structure"` @@ -39316,21 +43234,15 @@ type DescribeSnapshotsInput struct { // // * status - The status of the snapshot (pending | completed | error). // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * volume-id - The ID of the volume the snapshot is for. // @@ -39582,8 +43494,8 @@ func (s *DescribeSpotFleetInstancesInput) SetSpotFleetRequestId(v string) *Descr type DescribeSpotFleetInstancesOutput struct { _ struct{} `type:"structure"` - // The running instances. Note that this list is refreshed periodically and - // might be out of date. + // The running instances. This list is refreshed periodically and might be out + // of date. 
// // ActiveInstances is a required field ActiveInstances []*ActiveInstance `locationName:"activeInstanceSet" locationNameList:"item" type:"list" required:"true"` @@ -39655,7 +43567,7 @@ type DescribeSpotFleetRequestHistoryInput struct { // The starting date and time for the events, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). // // StartTime is a required field - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"iso8601" required:"true"` + StartTime *time.Time `locationName:"startTime" type:"timestamp" required:"true"` } // String returns the string representation @@ -39735,7 +43647,7 @@ type DescribeSpotFleetRequestHistoryOutput struct { // If nextToken indicates that there are more results, this value is not present. // // LastEvaluatedTime is a required field - LastEvaluatedTime *time.Time `locationName:"lastEvaluatedTime" type:"timestamp" timestampFormat:"iso8601" required:"true"` + LastEvaluatedTime *time.Time `locationName:"lastEvaluatedTime" type:"timestamp" required:"true"` // The token required to retrieve the next set of results. This value is null // when there are no more results to return. @@ -39749,7 +43661,7 @@ type DescribeSpotFleetRequestHistoryOutput struct { // The starting date and time for the events, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). // // StartTime is a required field - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"iso8601" required:"true"` + StartTime *time.Time `locationName:"startTime" type:"timestamp" required:"true"` } // String returns the string representation @@ -39924,7 +43836,9 @@ type DescribeSpotInstanceRequestsInput struct { // for General Purpose SSD, io1 for Provisioned IOPS SSD, st1 for Throughput // Optimized HDD, sc1for Cold HDD, or standard for Magnetic. // - // * launch.group-id - The security group for the instance. + // * launch.group-id - The ID of the security group for the instance. + // + // * launch.group-name - The name of the security group for the instance. // // * launch.image-id - The ID of the AMI. // @@ -39975,7 +43889,7 @@ type DescribeSpotInstanceRequestsInput struct { // | cancelled | failed). Spot request status information can help you track // your Amazon EC2 Spot Instance requests. For more information, see Spot // Request Status (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) - // in the Amazon Elastic Compute Cloud User Guide. + // in the Amazon EC2 User Guide for Linux Instances. // // * status-code - The short code describing the most recent evaluation of // your Spot Instance request. @@ -39983,21 +43897,15 @@ type DescribeSpotInstanceRequestsInput struct { // * status-message - The message explaining the status of the Spot Instance // request. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. 
+ // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * type - The type of Spot Instance request (one-time | persistent). // @@ -40077,7 +43985,7 @@ type DescribeSpotPriceHistoryInput struct { // The date and time, up to the current date, from which to stop retrieving // the price history data, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `locationName:"endTime" type:"timestamp"` // One or more filters. // @@ -40093,9 +44001,9 @@ type DescribeSpotPriceHistoryInput struct { // * spot-price - The Spot price. The value must match exactly (or use wildcards; // greater than or less than comparison is not supported). // - // * timestamp - The timestamp of the Spot price history, in UTC format (for - // example, YYYY-MM-DDTHH:MM:SSZ). You can use wildcards (* and ?). Greater - // than or less than comparison is not supported. + // * timestamp - The time stamp of the Spot price history, in UTC format + // (for example, YYYY-MM-DDTHH:MM:SSZ). You can use wildcards (* and ?). + // Greater than or less than comparison is not supported. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` // Filters the results by the specified instance types. @@ -40114,7 +44022,7 @@ type DescribeSpotPriceHistoryInput struct { // The date and time, up to the past 90 days, from which to start retrieving // the price history data, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` } // String returns the string representation @@ -40218,7 +44126,7 @@ func (s *DescribeSpotPriceHistoryOutput) SetSpotPriceHistory(v []*SpotPrice) *De type DescribeStaleSecurityGroupsInput struct { _ struct{} `type:"structure"` - // Checks whether you have the required permissions for the operation, without + // Checks whether you have the required permissions for the action, without // actually making the request, and provides an error response. If you have // the required permissions, the error response is DryRunOperation. Otherwise, // it is UnauthorizedOperation. @@ -40325,7 +44233,6 @@ func (s *DescribeStaleSecurityGroupsOutput) SetStaleSecurityGroupSet(v []*StaleS return s } -// Contains the parameters for DescribeSubnets. type DescribeSubnetsInput struct { _ struct{} `type:"structure"` @@ -40363,21 +44270,15 @@ type DescribeSubnetsInput struct { // // * subnet-id - The ID of the subnet. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. 
This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * vpc-id - The ID of the VPC for the subnet. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -40416,7 +44317,6 @@ func (s *DescribeSubnetsInput) SetSubnetIds(v []*string) *DescribeSubnetsInput { return s } -// Contains the output of DescribeSubnets. type DescribeSubnetsOutput struct { _ struct{} `type:"structure"` @@ -40440,7 +44340,6 @@ func (s *DescribeSubnetsOutput) SetSubnets(v []*Subnet) *DescribeSubnetsOutput { return s } -// Contains the parameters for DescribeTags. type DescribeTagsInput struct { _ struct{} `type:"structure"` @@ -40454,13 +44353,17 @@ type DescribeTagsInput struct { // // * key - The tag key. // - // * resource-id - The resource ID. + // * resource-id - The ID of the resource. // - // * resource-type - The resource type (customer-gateway | dhcp-options | - // elastic-ip | fpga-image | image | instance | internet-gateway | launch-template - // | natgateway | network-acl | network-interface | reserved-instances | - // route-table | security-group | snapshot | spot-instances-request | subnet - // | volume | vpc | vpc-peering-connection | vpn-connection | vpn-gateway). + // * resource-type - The resource type (customer-gateway | dedicated-host + // | dhcp-options | elastic-ip | fleet | fpga-image | image | instance | + // internet-gateway | launch-template | natgateway | network-acl | network-interface + // | reserved-instances | route-table | security-group | snapshot | spot-instances-request + // | subnet | volume | vpc | vpc-peering-connection | vpn-connection | vpn-gateway). + // + // * tag: - The key/value combination of the tag. For example, specify + // "tag:Owner" for the filter name and "TeamA" for the filter value to find + // resources with the tag "Owner=TeamA". // // * value - The tag value. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -40508,15 +44411,14 @@ func (s *DescribeTagsInput) SetNextToken(v string) *DescribeTagsInput { return s } -// Contains the output of DescribeTags. type DescribeTagsOutput struct { _ struct{} `type:"structure"` // The token to use to retrieve the next page of results. This value is null - // when there are no more results to return.. + // when there are no more results to return. NextToken *string `locationName:"nextToken" type:"string"` - // A list of tags. + // The tags. 
Tags []*TagDescription `locationName:"tagSet" locationNameList:"item" type:"list"` } @@ -40547,7 +44449,9 @@ type DescribeVolumeAttributeInput struct { _ struct{} `type:"structure"` // The attribute of the volume. This parameter is required. - Attribute *string `type:"string" enum:"VolumeAttributeName"` + // + // Attribute is a required field + Attribute *string `type:"string" required:"true" enum:"VolumeAttributeName"` // Checks whether you have the required permissions for the action, without // actually making the request, and provides an error response. If you have @@ -40574,6 +44478,9 @@ func (s DescribeVolumeAttributeInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *DescribeVolumeAttributeInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "DescribeVolumeAttributeInput"} + if s.Attribute == nil { + invalidParams.Add(request.NewErrParamRequired("Attribute")) + } if s.VolumeId == nil { invalidParams.Add(request.NewErrParamRequired("VolumeId")) } @@ -40806,8 +44713,7 @@ type DescribeVolumesInput struct { // * attachment.instance-id - The ID of the instance the volume is attached // to. // - // * attachment.status - The attachment state (attaching | attached | detaching - // | detached). + // * attachment.status - The attachment state (attaching | attached | detaching). // // * availability-zone - The Availability Zone in which the volume was created. // @@ -40822,21 +44728,15 @@ type DescribeVolumesInput struct { // * status - The status of the volume (creating | available | in-use | deleting // | deleted | error). // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * volume-id - The volume ID. // @@ -41039,7 +44939,6 @@ func (s *DescribeVolumesOutput) SetVolumes(v []*Volume) *DescribeVolumesOutput { return s } -// Contains the parameters for DescribeVpcAttribute. type DescribeVpcAttributeInput struct { _ struct{} `type:"structure"` @@ -41104,7 +45003,6 @@ func (s *DescribeVpcAttributeInput) SetVpcId(v string) *DescribeVpcAttributeInpu return s } -// Contains the output of DescribeVpcAttribute. 
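// Illustrative sketch, not part of the generated SDK or of this diff: with this
// change, DescribeVolumeAttributeInput marks Attribute as required, so omitting it
// now fails client-side validation before a request is sent. The function name and
// the volume ID parameter are hypothetical; svc is assumed to be an initialized
// *EC2 client.
func exampleDescribeVolumeAutoEnableIO(svc *EC2, volumeID string) (*DescribeVolumeAttributeOutput, error) {
	input := &DescribeVolumeAttributeInput{
		// Attribute is now a required field; VolumeAttributeNameAutoEnableIo is
		// one of the enum values accepted by the API.
		Attribute: aws.String(VolumeAttributeNameAutoEnableIo),
		VolumeId:  aws.String(volumeID),
	}
	return svc.DescribeVolumeAttribute(input)
}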
type DescribeVpcAttributeOutput struct { _ struct{} `type:"structure"` @@ -41150,7 +45048,6 @@ func (s *DescribeVpcAttributeOutput) SetVpcId(v string) *DescribeVpcAttributeOut return s } -// Contains the parameters for DescribeVpcClassicLinkDnsSupport. type DescribeVpcClassicLinkDnsSupportInput struct { _ struct{} `type:"structure"` @@ -41211,7 +45108,6 @@ func (s *DescribeVpcClassicLinkDnsSupportInput) SetVpcIds(v []*string) *Describe return s } -// Contains the output of DescribeVpcClassicLinkDnsSupport. type DescribeVpcClassicLinkDnsSupportOutput struct { _ struct{} `type:"structure"` @@ -41244,7 +45140,6 @@ func (s *DescribeVpcClassicLinkDnsSupportOutput) SetVpcs(v []*ClassicLinkDnsSupp return s } -// Contains the parameters for DescribeVpcClassicLink. type DescribeVpcClassicLinkInput struct { _ struct{} `type:"structure"` @@ -41259,21 +45154,15 @@ type DescribeVpcClassicLinkInput struct { // * is-classic-link-enabled - Whether the VPC is enabled for ClassicLink // (true | false). // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. + // + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` // One or more VPCs for which you want to describe the ClassicLink status. @@ -41308,7 +45197,6 @@ func (s *DescribeVpcClassicLinkInput) SetVpcIds(v []*string) *DescribeVpcClassic return s } -// Contains the output of DescribeVpcClassicLink. type DescribeVpcClassicLinkOutput struct { _ struct{} `type:"structure"` @@ -41989,7 +45877,6 @@ func (s *DescribeVpcEndpointsOutput) SetVpcEndpoints(v []*VpcEndpoint) *Describe return s } -// Contains the parameters for DescribeVpcPeeringConnections. type DescribeVpcPeeringConnectionsInput struct { _ struct{} `type:"structure"` @@ -42024,21 +45911,15 @@ type DescribeVpcPeeringConnectionsInput struct { // * status-message - A message that provides more information about the // status of the VPC peering connection, if applicable. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. 
For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * vpc-peering-connection-id - The ID of the VPC peering connection. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -42077,7 +45958,6 @@ func (s *DescribeVpcPeeringConnectionsInput) SetVpcPeeringConnectionIds(v []*str return s } -// Contains the output of DescribeVpcPeeringConnections. type DescribeVpcPeeringConnectionsOutput struct { _ struct{} `type:"structure"` @@ -42101,7 +45981,6 @@ func (s *DescribeVpcPeeringConnectionsOutput) SetVpcPeeringConnections(v []*VpcP return s } -// Contains the parameters for DescribeVpcs. type DescribeVpcsInput struct { _ struct{} `type:"structure"` @@ -42142,21 +46021,15 @@ type DescribeVpcsInput struct { // // * state - The state of the VPC (pending | available). // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * vpc-id - The ID of the VPC. 
Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -42195,7 +46068,6 @@ func (s *DescribeVpcsInput) SetVpcIds(v []*string) *DescribeVpcsInput { return s } -// Contains the output of DescribeVpcs. type DescribeVpcsOutput struct { _ struct{} `type:"structure"` @@ -42250,21 +46122,15 @@ type DescribeVpnConnectionsInput struct { // * bgp-asn - The BGP Autonomous System Number (ASN) associated with a BGP // device. // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. - // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * type - The type of VPN connection. Currently the only supported type // is ipsec.1. @@ -42359,21 +46225,15 @@ type DescribeVpnGatewaysInput struct { // * state - The state of the virtual private gateway (pending | available // | deleting | deleted). // - // * tag:key=value - The key/value combination of a tag assigned to the resource. - // Specify the key of the tag in the filter name and the value of the tag - // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose - // for the filter name and X for the filter value. - // - // * tag-key - The key of a tag assigned to the resource. This filter is - // independent of the tag-value filter. For example, if you use both the - // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources - // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // * tag: - The key/value combination of a tag assigned to the resource. + // Use the tag key in the filter name and the tag value as the filter value. + // For example, to find all resources that have a tag with the key Owner + // and the value TeamA, specify tag:Owner for the filter name and TeamA for + // the filter value. // - // * tag-value - The value of a tag assigned to the resource. This filter - // is independent of the tag-key filter. + // * tag-key - The key of a tag assigned to the resource. Use this filter + // to find all resources assigned a tag with a specific key, regardless of + // the tag value. // // * type - The type of virtual private gateway. 
Currently the only supported // type is ipsec.1. @@ -42439,7 +46299,6 @@ func (s *DescribeVpnGatewaysOutput) SetVpnGateways(v []*VpnGateway) *DescribeVpn return s } -// Contains the parameters for DetachClassicLinkVpc. type DetachClassicLinkVpcInput struct { _ struct{} `type:"structure"` @@ -42504,7 +46363,6 @@ func (s *DetachClassicLinkVpcInput) SetVpcId(v string) *DetachClassicLinkVpcInpu return s } -// Contains the output of DetachClassicLinkVpc. type DetachClassicLinkVpcOutput struct { _ struct{} `type:"structure"` @@ -42528,7 +46386,6 @@ func (s *DetachClassicLinkVpcOutput) SetReturn(v bool) *DetachClassicLinkVpcOutp return s } -// Contains the parameters for DetachInternetGateway. type DetachInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -42538,7 +46395,7 @@ type DetachInternetGatewayInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The ID of the Internet gateway. + // The ID of the internet gateway. // // InternetGatewayId is a required field InternetGatewayId *string `locationName:"internetGatewayId" type:"string" required:"true"` @@ -42986,7 +46843,6 @@ func (s DisableVgwRoutePropagationOutput) GoString() string { return s.String() } -// Contains the parameters for DisableVpcClassicLinkDnsSupport. type DisableVpcClassicLinkDnsSupportInput struct { _ struct{} `type:"structure"` @@ -43010,7 +46866,6 @@ func (s *DisableVpcClassicLinkDnsSupportInput) SetVpcId(v string) *DisableVpcCla return s } -// Contains the output of DisableVpcClassicLinkDnsSupport. type DisableVpcClassicLinkDnsSupportOutput struct { _ struct{} `type:"structure"` @@ -43034,7 +46889,6 @@ func (s *DisableVpcClassicLinkDnsSupportOutput) SetReturn(v bool) *DisableVpcCla return s } -// Contains the parameters for DisableVpcClassicLink. type DisableVpcClassicLinkInput struct { _ struct{} `type:"structure"` @@ -43085,7 +46939,6 @@ func (s *DisableVpcClassicLinkInput) SetVpcId(v string) *DisableVpcClassicLinkIn return s } -// Contains the output of DisableVpcClassicLink. type DisableVpcClassicLinkOutput struct { _ struct{} `type:"structure"` @@ -43109,7 +46962,6 @@ func (s *DisableVpcClassicLinkOutput) SetReturn(v bool) *DisableVpcClassicLinkOu return s } -// Contains the parameters for DisassociateAddress. type DisassociateAddressInput struct { _ struct{} `type:"structure"` @@ -43229,7 +47081,6 @@ func (s *DisassociateIamInstanceProfileOutput) SetIamInstanceProfileAssociation( return s } -// Contains the parameters for DisassociateRouteTable. type DisassociateRouteTableInput struct { _ struct{} `type:"structure"` @@ -43514,9 +47365,7 @@ type DiskImageDescription struct { Checksum *string `locationName:"checksum" type:"string"` // The disk image format. - // - // Format is a required field - Format *string `locationName:"format" type:"string" required:"true" enum:"DiskImageFormat"` + Format *string `locationName:"format" type:"string" enum:"DiskImageFormat"` // A presigned URL for the import manifest stored in Amazon S3. For information // about creating a presigned URL for an Amazon S3 object, read the "Query String @@ -43526,14 +47375,10 @@ type DiskImageDescription struct { // // For information about the import manifest referenced by this API action, // see VM Import Manifest (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/manifest.html). 
- // - // ImportManifestUrl is a required field - ImportManifestUrl *string `locationName:"importManifestUrl" type:"string" required:"true"` + ImportManifestUrl *string `locationName:"importManifestUrl" type:"string"` // The size of the disk image, in GiB. - // - // Size is a required field - Size *int64 `locationName:"size" type:"long" required:"true"` + Size *int64 `locationName:"size" type:"long"` } // String returns the string representation @@ -43649,9 +47494,7 @@ type DiskImageVolumeDescription struct { _ struct{} `type:"structure"` // The volume identifier. - // - // Id is a required field - Id *string `locationName:"id" type:"string" required:"true"` + Id *string `locationName:"id" type:"string"` // The size of the volume, in GiB. Size *int64 `locationName:"size" type:"long"` @@ -43720,9 +47563,14 @@ type EbsBlockDevice struct { DeleteOnTermination *bool `locationName:"deleteOnTermination" type:"boolean"` // Indicates whether the EBS volume is encrypted. Encrypted volumes can only - // be attached to instances that support Amazon EBS encryption. If you are creating - // a volume from a snapshot, you can't specify an encryption value. This is - // because only blank volumes can be encrypted on creation. + // be attached to instances that support Amazon EBS encryption. + // + // If you are creating a volume from a snapshot, you cannot specify an encryption + // value. This is because only blank volumes can be encrypted on creation. If + // you are creating a snapshot from an existing EBS volume, you cannot specify + // an encryption value that differs from that of the EBS volume. We recommend + // that you omit the encryption value from the block device mappings when creating + // an image from an instance. Encrypted *bool `locationName:"encrypted" type:"boolean"` // The number of I/O operations per second (IOPS) that the volume supports. @@ -43743,8 +47591,8 @@ type EbsBlockDevice struct { // Identifier (key ID, key alias, ID ARN, or alias ARN) for a user-managed CMK // under which the EBS volume is encrypted. // - // Note: This parameter is only supported on BlockDeviceMapping objects called - // by RunInstances (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html), + // This parameter is only supported on BlockDeviceMapping objects called by + // RunInstances (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html), // RequestSpotFleet (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RequestSpotFleet.html), // and RequestSpotInstances (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RequestSpotInstances.html). KmsKeyId *string `type:"string"` @@ -43827,7 +47675,7 @@ type EbsInstanceBlockDevice struct { _ struct{} `type:"structure"` // The time stamp when the attachment initiated. - AttachTime *time.Time `locationName:"attachTime" type:"timestamp" timestampFormat:"iso8601"` + AttachTime *time.Time `locationName:"attachTime" type:"timestamp"` // Indicates whether the volume is deleted on instance termination. DeleteOnTermination *bool `locationName:"deleteOnTermination" type:"boolean"` @@ -43907,14 +47755,14 @@ func (s *EbsInstanceBlockDeviceSpecification) SetVolumeId(v string) *EbsInstance return s } -// Describes an egress-only Internet gateway. +// Describes an egress-only internet gateway. type EgressOnlyInternetGateway struct { _ struct{} `type:"structure"` - // Information about the attachment of the egress-only Internet gateway. + // Information about the attachment of the egress-only internet gateway. 
Attachments []*InternetGatewayAttachment `locationName:"attachmentSet" locationNameList:"item" type:"list"` - // The ID of the egress-only Internet gateway. + // The ID of the egress-only internet gateway. EgressOnlyInternetGatewayId *string `locationName:"egressOnlyInternetGatewayId" type:"string"` } @@ -44279,7 +48127,6 @@ func (s EnableVolumeIOOutput) GoString() string { return s.String() } -// Contains the parameters for EnableVpcClassicLinkDnsSupport. type EnableVpcClassicLinkDnsSupportInput struct { _ struct{} `type:"structure"` @@ -44303,7 +48150,6 @@ func (s *EnableVpcClassicLinkDnsSupportInput) SetVpcId(v string) *EnableVpcClass return s } -// Contains the output of EnableVpcClassicLinkDnsSupport. type EnableVpcClassicLinkDnsSupportOutput struct { _ struct{} `type:"structure"` @@ -44327,7 +48173,6 @@ func (s *EnableVpcClassicLinkDnsSupportOutput) SetReturn(v bool) *EnableVpcClass return s } -// Contains the parameters for EnableVpcClassicLink. type EnableVpcClassicLinkInput struct { _ struct{} `type:"structure"` @@ -44378,7 +48223,6 @@ func (s *EnableVpcClassicLinkInput) SetVpcId(v string) *EnableVpcClassicLinkInpu return s } -// Contains the output of EnableVpcClassicLink. type EnableVpcClassicLinkOutput struct { _ struct{} `type:"structure"` @@ -44434,9 +48278,9 @@ type EventInformation struct { // * cancelled - The Spot Fleet is canceled and has no running Spot Instances. // The Spot Fleet will be deleted two days after its instances were terminated. // - // * cancelled_running - The Spot Fleet is canceled and will not launch additional - // Spot Instances, but its existing Spot Instances continue to run until - // they are interrupted or terminated. + // * cancelled_running - The Spot Fleet is canceled and does not launch additional + // Spot Instances. Existing Spot Instances continue to run until they are + // interrupted or terminated. // // * cancelled_terminating - The Spot Fleet is canceled and its Spot Instances // are terminating. @@ -44682,8 +48526,30 @@ func (s *ExportToS3TaskSpecification) SetS3Prefix(v string) *ExportToS3TaskSpeci } // A filter name and value pair that is used to return a more specific list -// of results. Filters can be used to match a set of resources by various criteria, -// such as tags, attributes, or IDs. +// of results from a describe operation. Filters can be used to match a set +// of resources by specific criteria, such as tags, attributes, or IDs. The +// filters supported by a describe operation are documented with the describe +// operation. For example: +// +// * DescribeAvailabilityZones +// +// * DescribeImages +// +// * DescribeInstances +// +// * DescribeKeyPairs +// +// * DescribeSecurityGroups +// +// * DescribeSnapshots +// +// * DescribeSubnets +// +// * DescribeTags +// +// * DescribeVolumes +// +// * DescribeVpcs type Filter struct { _ struct{} `type:"structure"` @@ -44716,6 +48582,478 @@ func (s *Filter) SetValues(v []*string) *Filter { return s } +// Describes an EC2 Fleet. +type FleetData struct { + _ struct{} `type:"structure"` + + // The progress of the EC2 Fleet. If there is an error, the status is error. + // After all requests are placed, the status is pending_fulfillment. If the + // size of the EC2 Fleet is equal to or greater than its target capacity, the + // status is fulfilled. If the size of the EC2 Fleet is decreased, the status + // is pending_termination while instances are terminating. 
+ ActivityStatus *string `locationName:"activityStatus" type:"string" enum:"FleetActivityStatus"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. For more information, see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). + // + // Constraints: Maximum 64 ASCII characters + ClientToken *string `locationName:"clientToken" type:"string"` + + // The creation date and time of the EC2 Fleet. + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` + + // Information about the instances that could not be launched by the fleet. + // Valid only when Type is set to instant. + Errors []*DescribeFleetError `locationName:"errorSet" locationNameList:"item" type:"list"` + + // Indicates whether running instances should be terminated if the target capacity + // of the EC2 Fleet is decreased below the current size of the EC2 Fleet. + ExcessCapacityTerminationPolicy *string `locationName:"excessCapacityTerminationPolicy" type:"string" enum:"FleetExcessCapacityTerminationPolicy"` + + // The ID of the EC2 Fleet. + FleetId *string `locationName:"fleetId" type:"string"` + + // The state of the EC2 Fleet. + FleetState *string `locationName:"fleetState" type:"string" enum:"FleetStateCode"` + + // The number of units fulfilled by this request compared to the set target + // capacity. + FulfilledCapacity *float64 `locationName:"fulfilledCapacity" type:"double"` + + // The number of units fulfilled by this request compared to the set target + // On-Demand capacity. + FulfilledOnDemandCapacity *float64 `locationName:"fulfilledOnDemandCapacity" type:"double"` + + // Information about the instances that were launched by the fleet. Valid only + // when Type is set to instant. + Instances []*DescribeFleetsInstances `locationName:"fleetInstanceSet" locationNameList:"item" type:"list"` + + // The launch template and overrides. + LaunchTemplateConfigs []*FleetLaunchTemplateConfig `locationName:"launchTemplateConfigs" locationNameList:"item" type:"list"` + + // The allocation strategy of On-Demand Instances in an EC2 Fleet. + OnDemandOptions *OnDemandOptions `locationName:"onDemandOptions" type:"structure"` + + // Indicates whether EC2 Fleet should replace unhealthy instances. + ReplaceUnhealthyInstances *bool `locationName:"replaceUnhealthyInstances" type:"boolean"` + + // The configuration of Spot Instances in an EC2 Fleet. + SpotOptions *SpotOptions `locationName:"spotOptions" type:"structure"` + + // The tags for an EC2 Fleet resource. + Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` + + // The number of units to request. You can choose to set the target capacity + // in terms of instances or a performance characteristic that is important to + // your application workload, such as vCPUs, memory, or I/O. If the request + // type is maintain, you can specify a target capacity of 0 and add capacity + // later. + TargetCapacitySpecification *TargetCapacitySpecification `locationName:"targetCapacitySpecification" type:"structure"` + + // Indicates whether running instances should be terminated when the EC2 Fleet + // expires. + TerminateInstancesWithExpiration *bool `locationName:"terminateInstancesWithExpiration" type:"boolean"` + + // The type of request. Indicates whether the EC2 Fleet only requests the target + // capacity, or also attempts to maintain it. 
If you request a certain target + // capacity, EC2 Fleet only places the required requests; it does not attempt + // to replenish instances if capacity is diminished, and does not submit requests + // in alternative capacity pools if capacity is unavailable. To maintain a certain + // target capacity, EC2 Fleet places the required requests to meet this target + // capacity. It also automatically replenishes any interrupted Spot Instances. + // Default: maintain. + Type *string `locationName:"type" type:"string" enum:"FleetType"` + + // The start date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). + // The default is to start fulfilling the request immediately. + ValidFrom *time.Time `locationName:"validFrom" type:"timestamp"` + + // The end date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). + // At this point, no new instance requests are placed or able to fulfill the + // request. The default end date is 7 days from the current date. + ValidUntil *time.Time `locationName:"validUntil" type:"timestamp"` +} + +// String returns the string representation +func (s FleetData) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FleetData) GoString() string { + return s.String() +} + +// SetActivityStatus sets the ActivityStatus field's value. +func (s *FleetData) SetActivityStatus(v string) *FleetData { + s.ActivityStatus = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *FleetData) SetClientToken(v string) *FleetData { + s.ClientToken = &v + return s +} + +// SetCreateTime sets the CreateTime field's value. +func (s *FleetData) SetCreateTime(v time.Time) *FleetData { + s.CreateTime = &v + return s +} + +// SetErrors sets the Errors field's value. +func (s *FleetData) SetErrors(v []*DescribeFleetError) *FleetData { + s.Errors = v + return s +} + +// SetExcessCapacityTerminationPolicy sets the ExcessCapacityTerminationPolicy field's value. +func (s *FleetData) SetExcessCapacityTerminationPolicy(v string) *FleetData { + s.ExcessCapacityTerminationPolicy = &v + return s +} + +// SetFleetId sets the FleetId field's value. +func (s *FleetData) SetFleetId(v string) *FleetData { + s.FleetId = &v + return s +} + +// SetFleetState sets the FleetState field's value. +func (s *FleetData) SetFleetState(v string) *FleetData { + s.FleetState = &v + return s +} + +// SetFulfilledCapacity sets the FulfilledCapacity field's value. +func (s *FleetData) SetFulfilledCapacity(v float64) *FleetData { + s.FulfilledCapacity = &v + return s +} + +// SetFulfilledOnDemandCapacity sets the FulfilledOnDemandCapacity field's value. +func (s *FleetData) SetFulfilledOnDemandCapacity(v float64) *FleetData { + s.FulfilledOnDemandCapacity = &v + return s +} + +// SetInstances sets the Instances field's value. +func (s *FleetData) SetInstances(v []*DescribeFleetsInstances) *FleetData { + s.Instances = v + return s +} + +// SetLaunchTemplateConfigs sets the LaunchTemplateConfigs field's value. +func (s *FleetData) SetLaunchTemplateConfigs(v []*FleetLaunchTemplateConfig) *FleetData { + s.LaunchTemplateConfigs = v + return s +} + +// SetOnDemandOptions sets the OnDemandOptions field's value. +func (s *FleetData) SetOnDemandOptions(v *OnDemandOptions) *FleetData { + s.OnDemandOptions = v + return s +} + +// SetReplaceUnhealthyInstances sets the ReplaceUnhealthyInstances field's value. 
+func (s *FleetData) SetReplaceUnhealthyInstances(v bool) *FleetData { + s.ReplaceUnhealthyInstances = &v + return s +} + +// SetSpotOptions sets the SpotOptions field's value. +func (s *FleetData) SetSpotOptions(v *SpotOptions) *FleetData { + s.SpotOptions = v + return s +} + +// SetTags sets the Tags field's value. +func (s *FleetData) SetTags(v []*Tag) *FleetData { + s.Tags = v + return s +} + +// SetTargetCapacitySpecification sets the TargetCapacitySpecification field's value. +func (s *FleetData) SetTargetCapacitySpecification(v *TargetCapacitySpecification) *FleetData { + s.TargetCapacitySpecification = v + return s +} + +// SetTerminateInstancesWithExpiration sets the TerminateInstancesWithExpiration field's value. +func (s *FleetData) SetTerminateInstancesWithExpiration(v bool) *FleetData { + s.TerminateInstancesWithExpiration = &v + return s +} + +// SetType sets the Type field's value. +func (s *FleetData) SetType(v string) *FleetData { + s.Type = &v + return s +} + +// SetValidFrom sets the ValidFrom field's value. +func (s *FleetData) SetValidFrom(v time.Time) *FleetData { + s.ValidFrom = &v + return s +} + +// SetValidUntil sets the ValidUntil field's value. +func (s *FleetData) SetValidUntil(v time.Time) *FleetData { + s.ValidUntil = &v + return s +} + +// Describes a launch template and overrides. +type FleetLaunchTemplateConfig struct { + _ struct{} `type:"structure"` + + // The launch template. + LaunchTemplateSpecification *FleetLaunchTemplateSpecification `locationName:"launchTemplateSpecification" type:"structure"` + + // Any parameters that you specify override the same parameters in the launch + // template. + Overrides []*FleetLaunchTemplateOverrides `locationName:"overrides" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s FleetLaunchTemplateConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FleetLaunchTemplateConfig) GoString() string { + return s.String() +} + +// SetLaunchTemplateSpecification sets the LaunchTemplateSpecification field's value. +func (s *FleetLaunchTemplateConfig) SetLaunchTemplateSpecification(v *FleetLaunchTemplateSpecification) *FleetLaunchTemplateConfig { + s.LaunchTemplateSpecification = v + return s +} + +// SetOverrides sets the Overrides field's value. +func (s *FleetLaunchTemplateConfig) SetOverrides(v []*FleetLaunchTemplateOverrides) *FleetLaunchTemplateConfig { + s.Overrides = v + return s +} + +// Describes a launch template and overrides. +type FleetLaunchTemplateConfigRequest struct { + _ struct{} `type:"structure"` + + // The launch template to use. You must specify either the launch template ID + // or launch template name in the request. + LaunchTemplateSpecification *FleetLaunchTemplateSpecificationRequest `type:"structure"` + + // Any parameters that you specify override the same parameters in the launch + // template. + Overrides []*FleetLaunchTemplateOverridesRequest `locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s FleetLaunchTemplateConfigRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FleetLaunchTemplateConfigRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *FleetLaunchTemplateConfigRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FleetLaunchTemplateConfigRequest"} + if s.LaunchTemplateSpecification != nil { + if err := s.LaunchTemplateSpecification.Validate(); err != nil { + invalidParams.AddNested("LaunchTemplateSpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLaunchTemplateSpecification sets the LaunchTemplateSpecification field's value. +func (s *FleetLaunchTemplateConfigRequest) SetLaunchTemplateSpecification(v *FleetLaunchTemplateSpecificationRequest) *FleetLaunchTemplateConfigRequest { + s.LaunchTemplateSpecification = v + return s +} + +// SetOverrides sets the Overrides field's value. +func (s *FleetLaunchTemplateConfigRequest) SetOverrides(v []*FleetLaunchTemplateOverridesRequest) *FleetLaunchTemplateConfigRequest { + s.Overrides = v + return s +} + +// Describes overrides for a launch template. +type FleetLaunchTemplateOverrides struct { + _ struct{} `type:"structure"` + + // The Availability Zone in which to launch the instances. + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` + + // The instance type. + InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` + + // The maximum price per unit hour that you are willing to pay for a Spot Instance. + MaxPrice *string `locationName:"maxPrice" type:"string"` + + // The location where the instance launched, if applicable. + Placement *PlacementResponse `locationName:"placement" type:"structure"` + + // The priority for the launch template override. If AllocationStrategy is set + // to prioritized, EC2 Fleet uses priority to determine which launch template + // override to use first in fulfilling On-Demand capacity. The highest priority + // is launched first. Valid values are whole numbers starting at 0. The lower + // the number, the higher the priority. If no number is set, the override has + // the lowest priority. + Priority *float64 `locationName:"priority" type:"double"` + + // The ID of the subnet in which to launch the instances. + SubnetId *string `locationName:"subnetId" type:"string"` + + // The number of units provided by the specified instance type. + WeightedCapacity *float64 `locationName:"weightedCapacity" type:"double"` +} + +// String returns the string representation +func (s FleetLaunchTemplateOverrides) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FleetLaunchTemplateOverrides) GoString() string { + return s.String() +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *FleetLaunchTemplateOverrides) SetAvailabilityZone(v string) *FleetLaunchTemplateOverrides { + s.AvailabilityZone = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *FleetLaunchTemplateOverrides) SetInstanceType(v string) *FleetLaunchTemplateOverrides { + s.InstanceType = &v + return s +} + +// SetMaxPrice sets the MaxPrice field's value. +func (s *FleetLaunchTemplateOverrides) SetMaxPrice(v string) *FleetLaunchTemplateOverrides { + s.MaxPrice = &v + return s +} + +// SetPlacement sets the Placement field's value. +func (s *FleetLaunchTemplateOverrides) SetPlacement(v *PlacementResponse) *FleetLaunchTemplateOverrides { + s.Placement = v + return s +} + +// SetPriority sets the Priority field's value. 
+func (s *FleetLaunchTemplateOverrides) SetPriority(v float64) *FleetLaunchTemplateOverrides { + s.Priority = &v + return s +} + +// SetSubnetId sets the SubnetId field's value. +func (s *FleetLaunchTemplateOverrides) SetSubnetId(v string) *FleetLaunchTemplateOverrides { + s.SubnetId = &v + return s +} + +// SetWeightedCapacity sets the WeightedCapacity field's value. +func (s *FleetLaunchTemplateOverrides) SetWeightedCapacity(v float64) *FleetLaunchTemplateOverrides { + s.WeightedCapacity = &v + return s +} + +// Describes overrides for a launch template. +type FleetLaunchTemplateOverridesRequest struct { + _ struct{} `type:"structure"` + + // The Availability Zone in which to launch the instances. + AvailabilityZone *string `type:"string"` + + // The instance type. + InstanceType *string `type:"string" enum:"InstanceType"` + + // The maximum price per unit hour that you are willing to pay for a Spot Instance. + MaxPrice *string `type:"string"` + + // The location where the instance launched, if applicable. + Placement *Placement `type:"structure"` + + // The priority for the launch template override. If AllocationStrategy is set + // to prioritized, EC2 Fleet uses priority to determine which launch template + // override to use first in fulfilling On-Demand capacity. The highest priority + // is launched first. Valid values are whole numbers starting at 0. The lower + // the number, the higher the priority. If no number is set, the launch template + // override has the lowest priority. + Priority *float64 `type:"double"` + + // The ID of the subnet in which to launch the instances. + SubnetId *string `type:"string"` + + // The number of units provided by the specified instance type. + WeightedCapacity *float64 `type:"double"` +} + +// String returns the string representation +func (s FleetLaunchTemplateOverridesRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FleetLaunchTemplateOverridesRequest) GoString() string { + return s.String() +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *FleetLaunchTemplateOverridesRequest) SetAvailabilityZone(v string) *FleetLaunchTemplateOverridesRequest { + s.AvailabilityZone = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *FleetLaunchTemplateOverridesRequest) SetInstanceType(v string) *FleetLaunchTemplateOverridesRequest { + s.InstanceType = &v + return s +} + +// SetMaxPrice sets the MaxPrice field's value. +func (s *FleetLaunchTemplateOverridesRequest) SetMaxPrice(v string) *FleetLaunchTemplateOverridesRequest { + s.MaxPrice = &v + return s +} + +// SetPlacement sets the Placement field's value. +func (s *FleetLaunchTemplateOverridesRequest) SetPlacement(v *Placement) *FleetLaunchTemplateOverridesRequest { + s.Placement = v + return s +} + +// SetPriority sets the Priority field's value. +func (s *FleetLaunchTemplateOverridesRequest) SetPriority(v float64) *FleetLaunchTemplateOverridesRequest { + s.Priority = &v + return s +} + +// SetSubnetId sets the SubnetId field's value. +func (s *FleetLaunchTemplateOverridesRequest) SetSubnetId(v string) *FleetLaunchTemplateOverridesRequest { + s.SubnetId = &v + return s +} + +// SetWeightedCapacity sets the WeightedCapacity field's value. +func (s *FleetLaunchTemplateOverridesRequest) SetWeightedCapacity(v float64) *FleetLaunchTemplateOverridesRequest { + s.WeightedCapacity = &v + return s +} + // Describes a launch template. 
type FleetLaunchTemplateSpecification struct { _ struct{} `type:"structure"` @@ -44728,8 +49066,7 @@ type FleetLaunchTemplateSpecification struct { // or a template ID. LaunchTemplateName *string `locationName:"launchTemplateName" min:"3" type:"string"` - // The version number. By default, the default version of the launch template - // is used. + // The version number of the launch template. You must specify a version number. Version *string `locationName:"version" type:"string"` } @@ -44774,19 +49111,75 @@ func (s *FleetLaunchTemplateSpecification) SetVersion(v string) *FleetLaunchTemp return s } +// The launch template to use. You must specify either the launch template ID +// or launch template name in the request. +type FleetLaunchTemplateSpecificationRequest struct { + _ struct{} `type:"structure"` + + // The ID of the launch template. + LaunchTemplateId *string `type:"string"` + + // The name of the launch template. + LaunchTemplateName *string `min:"3" type:"string"` + + // The version number of the launch template. + Version *string `type:"string"` +} + +// String returns the string representation +func (s FleetLaunchTemplateSpecificationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FleetLaunchTemplateSpecificationRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *FleetLaunchTemplateSpecificationRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FleetLaunchTemplateSpecificationRequest"} + if s.LaunchTemplateName != nil && len(*s.LaunchTemplateName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LaunchTemplateName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *FleetLaunchTemplateSpecificationRequest) SetLaunchTemplateId(v string) *FleetLaunchTemplateSpecificationRequest { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *FleetLaunchTemplateSpecificationRequest) SetLaunchTemplateName(v string) *FleetLaunchTemplateSpecificationRequest { + s.LaunchTemplateName = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *FleetLaunchTemplateSpecificationRequest) SetVersion(v string) *FleetLaunchTemplateSpecificationRequest { + s.Version = &v + return s +} + // Describes a flow log. type FlowLog struct { _ struct{} `type:"structure"` // The date and time the flow log was created. - CreationTime *time.Time `locationName:"creationTime" type:"timestamp" timestampFormat:"iso8601"` + CreationTime *time.Time `locationName:"creationTime" type:"timestamp"` // Information about the error that occurred. Rate limited indicates that CloudWatch - // logs throttling has been applied for one or more network interfaces, or that - // you've reached the limit on the number of CloudWatch Logs log groups that - // you can create. Access error indicates that the IAM role associated with - // the flow log does not have sufficient permissions to publish to CloudWatch - // Logs. Unknown error indicates an internal error. + // Logs throttling has been applied for one or more network interfaces, or that + // you've reached the limit on the number of log groups that you can create. 
+ // Access error indicates that the IAM role associated with the flow log does + // not have sufficient permissions to publish to CloudWatch Logs. Unknown error + // indicates an internal error. DeliverLogsErrorMessage *string `locationName:"deliverLogsErrorMessage" type:"string"` // The ARN of the IAM role that posts logs to CloudWatch Logs. @@ -44801,6 +49194,18 @@ type FlowLog struct { // The status of the flow log (ACTIVE). FlowLogStatus *string `locationName:"flowLogStatus" type:"string"` + // Specifies the destination to which the flow log data is published. Flow log + // data can be published to an CloudWatch Logs log group or an Amazon S3 bucket. + // If the flow log publishes to CloudWatch Logs, this element indicates the + // Amazon Resource Name (ARN) of the CloudWatch Logs log group to which the + // data is published. If the flow log publishes to Amazon S3, this element indicates + // the ARN of the Amazon S3 bucket to which the data is published. + LogDestination *string `locationName:"logDestination" type:"string"` + + // Specifies the type of destination to which the flow log data is published. + // Flow log data can be published to CloudWatch Logs or Amazon S3. + LogDestinationType *string `locationName:"logDestinationType" type:"string" enum:"LogDestinationType"` + // The name of the flow log group. LogGroupName *string `locationName:"logGroupName" type:"string"` @@ -44857,6 +49262,18 @@ func (s *FlowLog) SetFlowLogStatus(v string) *FlowLog { return s } +// SetLogDestination sets the LogDestination field's value. +func (s *FlowLog) SetLogDestination(v string) *FlowLog { + s.LogDestination = &v + return s +} + +// SetLogDestinationType sets the LogDestinationType field's value. +func (s *FlowLog) SetLogDestinationType(v string) *FlowLog { + s.LogDestinationType = &v + return s +} + // SetLogGroupName sets the LogGroupName field's value. func (s *FlowLog) SetLogGroupName(v string) *FlowLog { s.LogGroupName = &v @@ -44880,7 +49297,7 @@ type FpgaImage struct { _ struct{} `type:"structure"` // The date and time the AFI was created. - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The description of the AFI. Description *string `locationName:"description" type:"string"` @@ -44919,7 +49336,7 @@ type FpgaImage struct { Tags []*Tag `locationName:"tags" locationNameList:"item" type:"list"` // The time of the most recent update to the AFI. - UpdateTime *time.Time `locationName:"updateTime" type:"timestamp" timestampFormat:"iso8601"` + UpdateTime *time.Time `locationName:"updateTime" type:"timestamp"` } // String returns the string representation @@ -45132,6 +49549,11 @@ type GetConsoleOutputInput struct { // // InstanceId is a required field InstanceId *string `type:"string" required:"true"` + + // When enabled, retrieves the latest console output for the instance. + // + // Default: disabled (false) + Latest *bool `type:"boolean"` } // String returns the string representation @@ -45169,6 +49591,12 @@ func (s *GetConsoleOutputInput) SetInstanceId(v string) *GetConsoleOutputInput { return s } +// SetLatest sets the Latest field's value. +func (s *GetConsoleOutputInput) SetLatest(v bool) *GetConsoleOutputInput { + s.Latest = &v + return s +} + // Contains the output of GetConsoleOutput. type GetConsoleOutputOutput struct { _ struct{} `type:"structure"` @@ -45176,12 +49604,12 @@ type GetConsoleOutputOutput struct { // The ID of the instance. 
InstanceId *string `locationName:"instanceId" type:"string"` - // The console output, Base64-encoded. If using a command line tool, the tool - // decodes the output for you. + // The console output, base64-encoded. If you are using a command line tool, + // the tool decodes the output for you. Output *string `locationName:"output" type:"string"` - // The time the output was last updated. - Timestamp *time.Time `locationName:"timestamp" type:"timestamp" timestampFormat:"iso8601"` + // The time at which the output was last updated. + Timestamp *time.Time `locationName:"timestamp" type:"timestamp"` } // String returns the string representation @@ -45309,8 +49737,7 @@ func (s *GetConsoleScreenshotOutput) SetInstanceId(v string) *GetConsoleScreensh type GetHostReservationPurchasePreviewInput struct { _ struct{} `type:"structure"` - // The ID/s of the Dedicated Host/s that the reservation will be associated - // with. + // The IDs of the Dedicated Hosts with which the reservation is associated. // // HostIdSet is a required field HostIdSet []*string `locationNameList:"item" type:"list" required:"true"` @@ -45366,7 +49793,7 @@ type GetHostReservationPurchasePreviewOutput struct { // are specified. At this time, the only supported currency is USD. CurrencyCode *string `locationName:"currencyCode" type:"string" enum:"CurrencyCodeValues"` - // The purchase information of the Dedicated Host Reservation and the Dedicated + // The purchase information of the Dedicated Host reservation and the Dedicated // Hosts associated with it. Purchase []*Purchase `locationName:"purchase" locationNameList:"item" type:"list"` @@ -45547,7 +49974,7 @@ type GetPasswordDataOutput struct { PasswordData *string `locationName:"passwordData" type:"string"` // The time the data was last updated. - Timestamp *time.Time `locationName:"timestamp" type:"timestamp" timestampFormat:"iso8601"` + Timestamp *time.Time `locationName:"timestamp" type:"timestamp"` } // String returns the string representation @@ -45660,7 +50087,7 @@ type GetReservedInstancesExchangeQuoteOutput struct { IsValidExchange *bool `locationName:"isValidExchange" type:"boolean"` // The new end date of the reservation term. - OutputReservedInstancesWillExpireAt *time.Time `locationName:"outputReservedInstancesWillExpireAt" type:"timestamp" timestampFormat:"iso8601"` + OutputReservedInstancesWillExpireAt *time.Time `locationName:"outputReservedInstancesWillExpireAt" type:"timestamp"` // The total true upfront charge for the exchange. PaymentDue *string `locationName:"paymentDue" type:"string"` @@ -45804,7 +50231,7 @@ type HistoryRecord struct { // The date and time of the event, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). // // Timestamp is a required field - Timestamp *time.Time `locationName:"timestamp" type:"timestamp" timestampFormat:"iso8601" required:"true"` + Timestamp *time.Time `locationName:"timestamp" type:"timestamp" required:"true"` } // String returns the string representation @@ -45835,10 +50262,55 @@ func (s *HistoryRecord) SetTimestamp(v time.Time) *HistoryRecord { return s } +// Describes an event in the history of an EC2 Fleet. +type HistoryRecordEntry struct { + _ struct{} `type:"structure"` + + // Information about the event. + EventInformation *EventInformation `locationName:"eventInformation" type:"structure"` + + // The event type. + EventType *string `locationName:"eventType" type:"string" enum:"FleetEventType"` + + // The date and time of the event, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). 
+ Timestamp *time.Time `locationName:"timestamp" type:"timestamp"` +} + +// String returns the string representation +func (s HistoryRecordEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HistoryRecordEntry) GoString() string { + return s.String() +} + +// SetEventInformation sets the EventInformation field's value. +func (s *HistoryRecordEntry) SetEventInformation(v *EventInformation) *HistoryRecordEntry { + s.EventInformation = v + return s +} + +// SetEventType sets the EventType field's value. +func (s *HistoryRecordEntry) SetEventType(v string) *HistoryRecordEntry { + s.EventType = &v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *HistoryRecordEntry) SetTimestamp(v time.Time) *HistoryRecordEntry { + s.Timestamp = &v + return s +} + // Describes the properties of the Dedicated Host. type Host struct { _ struct{} `type:"structure"` + // The time that the Dedicated Host was allocated. + AllocationTime *time.Time `locationName:"allocationTime" type:"timestamp"` + // Whether auto-placement is on or off. AutoPlacement *string `locationName:"autoPlacement" type:"string" enum:"AutoPlacement"` @@ -45848,8 +50320,8 @@ type Host struct { // The number of new instances that can be launched onto the Dedicated Host. AvailableCapacity *AvailableCapacity `locationName:"availableCapacity" type:"structure"` - // Unique, case-sensitive identifier you provide to ensure idempotency of the - // request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html) + // Unique, case-sensitive identifier that you provide to ensure idempotency + // of the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html) // in the Amazon Elastic Compute Cloud User Guide. ClientToken *string `locationName:"clientToken" type:"string"` @@ -45866,8 +50338,14 @@ type Host struct { // The IDs and instance type that are currently running on the Dedicated Host. Instances []*HostInstance `locationName:"instances" locationNameList:"item" type:"list"` + // The time that the Dedicated Host was released. + ReleaseTime *time.Time `locationName:"releaseTime" type:"timestamp"` + // The Dedicated Host's state. State *string `locationName:"state" type:"string" enum:"AllocationState"` + + // Any tags assigned to the Dedicated Host. + Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` } // String returns the string representation @@ -45880,6 +50358,12 @@ func (s Host) GoString() string { return s.String() } +// SetAllocationTime sets the AllocationTime field's value. +func (s *Host) SetAllocationTime(v time.Time) *Host { + s.AllocationTime = &v + return s +} + // SetAutoPlacement sets the AutoPlacement field's value. func (s *Host) SetAutoPlacement(v string) *Host { s.AutoPlacement = &v @@ -45928,12 +50412,24 @@ func (s *Host) SetInstances(v []*HostInstance) *Host { return s } +// SetReleaseTime sets the ReleaseTime field's value. +func (s *Host) SetReleaseTime(v time.Time) *Host { + s.ReleaseTime = &v + return s +} + // SetState sets the State field's value. func (s *Host) SetState(v string) *Host { s.State = &v return s } +// SetTags sets the Tags field's value. +func (s *Host) SetTags(v []*Tag) *Host { + s.Tags = v + return s +} + // Describes an instance running on a Dedicated Host. 
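The `Host` struct above gains `AllocationTime`, `ReleaseTime`, and `Tags`. A minimal sketch of reading the new fields, assuming the same imports and `*ec2.EC2` client (`svc`) as in the first sketch:

```go
// Assumes the imports and the *ec2.EC2 client ("svc") from the first sketch.
func printDedicatedHosts(svc *ec2.EC2) error {
	out, err := svc.DescribeHosts(&ec2.DescribeHostsInput{})
	if err != nil {
		return err
	}
	for _, h := range out.Hosts {
		// AllocationTime and Tags are the fields newly exposed above.
		fmt.Printf("%s allocated %s, %d tag(s)\n",
			aws.StringValue(h.HostId), aws.TimeValue(h.AllocationTime), len(h.Tags))
	}
	return nil
}
```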
type HostInstance struct { _ struct{} `type:"structure"` @@ -46112,7 +50608,7 @@ type HostReservation struct { Duration *int64 `locationName:"duration" type:"integer"` // The date and time that the reservation ends. - End *time.Time `locationName:"end" type:"timestamp" timestampFormat:"iso8601"` + End *time.Time `locationName:"end" type:"timestamp"` // The IDs of the Dedicated Hosts associated with the reservation. HostIdSet []*string `locationName:"hostIdSet" locationNameList:"item" type:"list"` @@ -46136,7 +50632,7 @@ type HostReservation struct { PaymentOption *string `locationName:"paymentOption" type:"string" enum:"PaymentOption"` // The date and time that the reservation started. - Start *time.Time `locationName:"start" type:"timestamp" timestampFormat:"iso8601"` + Start *time.Time `locationName:"start" type:"timestamp"` // The state of the reservation. State *string `locationName:"state" type:"string" enum:"ReservationState"` @@ -46283,7 +50779,7 @@ type IamInstanceProfileAssociation struct { State *string `locationName:"state" type:"string" enum:"IamInstanceProfileAssociationState"` // The time the IAM instance profile was associated with the instance. - Timestamp *time.Time `locationName:"timestamp" type:"timestamp" timestampFormat:"iso8601"` + Timestamp *time.Time `locationName:"timestamp" type:"timestamp"` } // String returns the string representation @@ -46399,7 +50895,7 @@ type IdFormat struct { // The date in UTC at which you are permanently switched over to using longer // IDs. If a deadline is not yet available for this resource type, this field // is not returned. - Deadline *time.Time `locationName:"deadline" type:"timestamp" timestampFormat:"iso8601"` + Deadline *time.Time `locationName:"deadline" type:"timestamp"` // The type of resource. Resource *string `locationName:"resource" type:"string"` @@ -46774,11 +51270,46 @@ type ImportImageInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` + // Specifies whether the destination AMI of the imported image should be encrypted. + // The default CMK for EBS is used unless you specify a non-default AWS Key + // Management Service (AWS KMS) CMK using KmsKeyId. For more information, see + // Amazon EBS Encryption (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) + // in the Amazon Elastic Compute Cloud User Guide. + Encrypted *bool `type:"boolean"` + // The target hypervisor platform. // // Valid values: xen Hypervisor *string `type:"string"` + // An identifier for the AWS Key Management Service (AWS KMS) customer master + // key (CMK) to use when creating the encrypted AMI. This parameter is only + // required if you want to use a non-default CMK; if this parameter is not specified, + // the default CMK for EBS is used. If a KmsKeyId is specified, the Encrypted + // flag must also be set. + // + // The CMK identifier may be provided in any of the following formats: + // + // * Key ID + // + // * Key alias, in the form alias/ExampleAlias + // + // * ARN using key ID. The ID ARN contains the arn:aws:kms namespace, followed + // by the region of the CMK, the AWS account ID of the CMK owner, the key + // namespace, and then the CMK ID. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef. + // + // * ARN using key alias. The alias ARN contains the arn:aws:kms namespace, + // followed by the region of the CMK, the AWS account ID of the CMK owner, + // the alias namespace, and then the CMK alias. 
For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias. + // + // + // AWS parses KmsKeyId asynchronously, meaning that the action you call may + // appear to complete even though you provided an invalid identifier. This action + // will eventually report failure. + // + // The specified CMK must exist in the region that the AMI is being copied to. + KmsKeyId *string `type:"string"` + // The license type to be used for the Amazon Machine Image (AMI) after importing. // // Note: You may only use BYOL if you have existing licenses with rights to @@ -46844,12 +51375,24 @@ func (s *ImportImageInput) SetDryRun(v bool) *ImportImageInput { return s } +// SetEncrypted sets the Encrypted field's value. +func (s *ImportImageInput) SetEncrypted(v bool) *ImportImageInput { + s.Encrypted = &v + return s +} + // SetHypervisor sets the Hypervisor field's value. func (s *ImportImageInput) SetHypervisor(v string) *ImportImageInput { s.Hypervisor = &v return s } +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *ImportImageInput) SetKmsKeyId(v string) *ImportImageInput { + s.KmsKeyId = &v + return s +} + // SetLicenseType sets the LicenseType field's value. func (s *ImportImageInput) SetLicenseType(v string) *ImportImageInput { s.LicenseType = &v @@ -46878,6 +51421,9 @@ type ImportImageOutput struct { // A description of the import task. Description *string `locationName:"description" type:"string"` + // Indicates whether the AMI is encypted. + Encrypted *bool `locationName:"encrypted" type:"boolean"` + // The target hypervisor of the import task. Hypervisor *string `locationName:"hypervisor" type:"string"` @@ -46887,6 +51433,10 @@ type ImportImageOutput struct { // The task ID of the import image task. ImportTaskId *string `locationName:"importTaskId" type:"string"` + // The identifier for the AWS Key Management Service (AWS KMS) customer master + // key (CMK) that was used to create the encrypted AMI. + KmsKeyId *string `locationName:"kmsKeyId" type:"string"` + // The license type of the virtual machine. LicenseType *string `locationName:"licenseType" type:"string"` @@ -46928,6 +51478,12 @@ func (s *ImportImageOutput) SetDescription(v string) *ImportImageOutput { return s } +// SetEncrypted sets the Encrypted field's value. +func (s *ImportImageOutput) SetEncrypted(v bool) *ImportImageOutput { + s.Encrypted = &v + return s +} + // SetHypervisor sets the Hypervisor field's value. func (s *ImportImageOutput) SetHypervisor(v string) *ImportImageOutput { s.Hypervisor = &v @@ -46946,6 +51502,12 @@ func (s *ImportImageOutput) SetImportTaskId(v string) *ImportImageOutput { return s } +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *ImportImageOutput) SetKmsKeyId(v string) *ImportImageOutput { + s.KmsKeyId = &v + return s +} + // SetLicenseType sets the LicenseType field's value. func (s *ImportImageOutput) SetLicenseType(v string) *ImportImageOutput { s.LicenseType = &v @@ -46994,6 +51556,9 @@ type ImportImageTask struct { // A description of the import task. Description *string `locationName:"description" type:"string"` + // Indicates whether the image is encrypted. + Encrypted *bool `locationName:"encrypted" type:"boolean"` + // The target hypervisor for the import task. // // Valid values: xen @@ -47005,6 +51570,10 @@ type ImportImageTask struct { // The ID of the import image task. 
ImportTaskId *string `locationName:"importTaskId" type:"string"` + // The identifier for the AWS Key Management Service (AWS KMS) customer master + // key (CMK) that was used to create the encrypted image. + KmsKeyId *string `locationName:"kmsKeyId" type:"string"` + // The license type of the virtual machine. LicenseType *string `locationName:"licenseType" type:"string"` @@ -47046,6 +51615,12 @@ func (s *ImportImageTask) SetDescription(v string) *ImportImageTask { return s } +// SetEncrypted sets the Encrypted field's value. +func (s *ImportImageTask) SetEncrypted(v bool) *ImportImageTask { + s.Encrypted = &v + return s +} + // SetHypervisor sets the Hypervisor field's value. func (s *ImportImageTask) SetHypervisor(v string) *ImportImageTask { s.Hypervisor = &v @@ -47064,6 +51639,12 @@ func (s *ImportImageTask) SetImportTaskId(v string) *ImportImageTask { return s } +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *ImportImageTask) SetKmsKeyId(v string) *ImportImageTask { + s.KmsKeyId = &v + return s +} + // SetLicenseType sets the LicenseType field's value. func (s *ImportImageTask) SetLicenseType(v string) *ImportImageTask { s.LicenseType = &v @@ -47343,9 +51924,7 @@ type ImportInstanceTaskDetails struct { Platform *string `locationName:"platform" type:"string" enum:"PlatformValues"` // One or more volumes. - // - // Volumes is a required field - Volumes []*ImportInstanceVolumeDetailItem `locationName:"volumes" locationNameList:"item" type:"list" required:"true"` + Volumes []*ImportInstanceVolumeDetailItem `locationName:"volumes" locationNameList:"item" type:"list"` } // String returns the string representation @@ -47387,35 +51966,25 @@ type ImportInstanceVolumeDetailItem struct { _ struct{} `type:"structure"` // The Availability Zone where the resulting instance will reside. - // - // AvailabilityZone is a required field - AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` // The number of bytes converted so far. - // - // BytesConverted is a required field - BytesConverted *int64 `locationName:"bytesConverted" type:"long" required:"true"` + BytesConverted *int64 `locationName:"bytesConverted" type:"long"` // A description of the task. Description *string `locationName:"description" type:"string"` // The image. - // - // Image is a required field - Image *DiskImageDescription `locationName:"image" type:"structure" required:"true"` + Image *DiskImageDescription `locationName:"image" type:"structure"` // The status of the import of this particular disk image. - // - // Status is a required field - Status *string `locationName:"status" type:"string" required:"true"` + Status *string `locationName:"status" type:"string"` // The status information or errors related to the disk image. StatusMessage *string `locationName:"statusMessage" type:"string"` // The volume. - // - // Volume is a required field - Volume *DiskImageVolumeDescription `locationName:"volume" type:"structure" required:"true"` + Volume *DiskImageVolumeDescription `locationName:"volume" type:"structure"` } // String returns the string representation @@ -47470,7 +52039,6 @@ func (s *ImportInstanceVolumeDetailItem) SetVolume(v *DiskImageVolumeDescription return s } -// Contains the parameters for ImportKeyPair. 
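The `Encrypted` and `KmsKeyId` parameters documented above for `ImportImageInput` are new in this SDK version. A hedged sketch of an encrypted AMI import, assuming the client from the first sketch; the bucket, object key, and CMK alias are placeholders, and `ImageDiskContainer`/`UserBucket` are assumed to be the existing disk-container types:

```go
// Assumes the imports and the *ec2.EC2 client ("svc") from the first sketch.
// The bucket, key, and CMK alias below are placeholders.
func importEncryptedImage(svc *ec2.EC2) (*ec2.ImportImageOutput, error) {
	return svc.ImportImage(&ec2.ImportImageInput{
		Description: aws.String("encrypted AMI import"),
		Encrypted:   aws.Bool(true),
		KmsKeyId:    aws.String("alias/ExampleAlias"),
		DiskContainers: []*ec2.ImageDiskContainer{{
			Format: aws.String("VHD"),
			UserBucket: &ec2.UserBucket{
				S3Bucket: aws.String("my-import-bucket"),
				S3Key:    aws.String("disks/example.vhd"),
			},
		}},
	})
}
```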
type ImportKeyPairInput struct { _ struct{} `type:"structure"` @@ -47538,7 +52106,6 @@ func (s *ImportKeyPairInput) SetPublicKeyMaterial(v []byte) *ImportKeyPairInput return s } -// Contains the output of ImportKeyPair. type ImportKeyPairOutput struct { _ struct{} `type:"structure"` @@ -47593,6 +52160,42 @@ type ImportSnapshotInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` + // Specifies whether the destination snapshot of the imported image should be + // encrypted. The default CMK for EBS is used unless you specify a non-default + // AWS Key Management Service (AWS KMS) CMK using KmsKeyId. For more information, + // see Amazon EBS Encryption (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) + // in the Amazon Elastic Compute Cloud User Guide. + Encrypted *bool `type:"boolean"` + + // An identifier for the AWS Key Management Service (AWS KMS) customer master + // key (CMK) to use when creating the encrypted snapshot. This parameter is + // only required if you want to use a non-default CMK; if this parameter is + // not specified, the default CMK for EBS is used. If a KmsKeyId is specified, + // the Encrypted flag must also be set. + // + // The CMK identifier may be provided in any of the following formats: + // + // * Key ID + // + // * Key alias, in the form alias/ExampleAlias + // + // * ARN using key ID. The ID ARN contains the arn:aws:kms namespace, followed + // by the region of the CMK, the AWS account ID of the CMK owner, the key + // namespace, and then the CMK ID. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef. + // + // * ARN using key alias. The alias ARN contains the arn:aws:kms namespace, + // followed by the region of the CMK, the AWS account ID of the CMK owner, + // the alias namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias. + // + // + // AWS parses KmsKeyId asynchronously, meaning that the action you call may + // appear to complete even though you provided an invalid identifier. This action + // will eventually report failure. + // + // The specified CMK must exist in the region that the snapshot is being copied + // to. + KmsKeyId *string `type:"string"` + // The name of the role to use when not using the default role, 'vmimport'. RoleName *string `type:"string"` } @@ -47637,6 +52240,18 @@ func (s *ImportSnapshotInput) SetDryRun(v bool) *ImportSnapshotInput { return s } +// SetEncrypted sets the Encrypted field's value. +func (s *ImportSnapshotInput) SetEncrypted(v bool) *ImportSnapshotInput { + s.Encrypted = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *ImportSnapshotInput) SetKmsKeyId(v string) *ImportSnapshotInput { + s.KmsKeyId = &v + return s +} + // SetRoleName sets the RoleName field's value. func (s *ImportSnapshotInput) SetRoleName(v string) *ImportSnapshotInput { s.RoleName = &v @@ -47854,27 +52469,19 @@ type ImportVolumeTaskDetails struct { _ struct{} `type:"structure"` // The Availability Zone where the resulting volume will reside. - // - // AvailabilityZone is a required field - AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` // The number of bytes converted so far. 
- // - // BytesConverted is a required field - BytesConverted *int64 `locationName:"bytesConverted" type:"long" required:"true"` + BytesConverted *int64 `locationName:"bytesConverted" type:"long"` // The description you provided when starting the import volume task. Description *string `locationName:"description" type:"string"` // The image. - // - // Image is a required field - Image *DiskImageDescription `locationName:"image" type:"structure" required:"true"` + Image *DiskImageDescription `locationName:"image" type:"structure"` // The volume. - // - // Volume is a required field - Volume *DiskImageVolumeDescription `locationName:"volume" type:"structure" required:"true"` + Volume *DiskImageVolumeDescription `locationName:"volume" type:"structure"` } // String returns the string representation @@ -47931,9 +52538,18 @@ type Instance struct { // Any block device mapping entries for the instance. BlockDeviceMappings []*InstanceBlockDeviceMapping `locationName:"blockDeviceMapping" locationNameList:"item" type:"list"` + // The ID of the Capacity Reservation. + CapacityReservationId *string `locationName:"capacityReservationId" type:"string"` + + // Information about the Capacity Reservation targeting option. + CapacityReservationSpecification *CapacityReservationSpecificationResponse `locationName:"capacityReservationSpecification" type:"structure"` + // The idempotency token you provided when you launched the instance, if applicable. ClientToken *string `locationName:"clientToken" type:"string"` + // The CPU options for the instance. + CpuOptions *CpuOptions `locationName:"cpuOptions" type:"structure"` + // Indicates whether the instance is optimized for Amazon EBS I/O. This optimization // provides dedicated throughput to Amazon EBS and an optimized configuration // stack to provide optimal I/O performance. This optimization isn't available @@ -47973,7 +52589,7 @@ type Instance struct { KeyName *string `locationName:"keyName" type:"string"` // The time the instance was launched. - LaunchTime *time.Time `locationName:"launchTime" type:"timestamp" timestampFormat:"iso8601"` + LaunchTime *time.Time `locationName:"launchTime" type:"timestamp"` // The monitoring for the instance. Monitoring *Monitoring `locationName:"monitoring" type:"structure"` @@ -48089,12 +52705,30 @@ func (s *Instance) SetBlockDeviceMappings(v []*InstanceBlockDeviceMapping) *Inst return s } +// SetCapacityReservationId sets the CapacityReservationId field's value. +func (s *Instance) SetCapacityReservationId(v string) *Instance { + s.CapacityReservationId = &v + return s +} + +// SetCapacityReservationSpecification sets the CapacityReservationSpecification field's value. +func (s *Instance) SetCapacityReservationSpecification(v *CapacityReservationSpecificationResponse) *Instance { + s.CapacityReservationSpecification = v + return s +} + // SetClientToken sets the ClientToken field's value. func (s *Instance) SetClientToken(v string) *Instance { s.ClientToken = &v return s } +// SetCpuOptions sets the CpuOptions field's value. +func (s *Instance) SetCpuOptions(v *CpuOptions) *Instance { + s.CpuOptions = v + return s +} + // SetEbsOptimized sets the EbsOptimized field's value. func (s *Instance) SetEbsOptimized(v bool) *Instance { s.EbsOptimized = &v @@ -48466,7 +53100,7 @@ func (s *InstanceCount) SetState(v string) *InstanceCount { return s } -// Describes the credit option for CPU usage of a T2 instance. +// Describes the credit option for CPU usage of a T2 or T3 instance. 
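The credit-option documentation above now covers T3 as well as T2 instances. A small sketch of switching an instance to unlimited CPU credits, assuming the client from the first sketch; the instance ID is supplied by the caller:

```go
// Assumes the imports and the *ec2.EC2 client ("svc") from the first sketch.
func enableUnlimitedCredits(svc *ec2.EC2, instanceID string) error {
	_, err := svc.ModifyInstanceCreditSpecification(&ec2.ModifyInstanceCreditSpecificationInput{
		InstanceCreditSpecifications: []*ec2.InstanceCreditSpecificationRequest{{
			InstanceId: aws.String(instanceID),
			CpuCredits: aws.String("unlimited"), // or "standard"
		}},
	})
	return err
}
```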
type InstanceCreditSpecification struct { _ struct{} `type:"structure"` @@ -48500,7 +53134,7 @@ func (s *InstanceCreditSpecification) SetInstanceId(v string) *InstanceCreditSpe return s } -// Describes the credit option for CPU usage of a T2 instance. +// Describes the credit option for CPU usage of a T2 or T3 instance. type InstanceCreditSpecificationRequest struct { _ struct{} `type:"structure"` @@ -48879,7 +53513,7 @@ type InstanceNetworkInterfaceAttachment struct { _ struct{} `type:"structure"` // The time stamp when the attachment initiated. - AttachTime *time.Time `locationName:"attachTime" type:"timestamp" timestampFormat:"iso8601"` + AttachTime *time.Time `locationName:"attachTime" type:"timestamp"` // The ID of the network interface attachment. AttachmentId *string `locationName:"attachmentId" type:"string"` @@ -49011,26 +53645,6 @@ func (s InstanceNetworkInterfaceSpecification) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *InstanceNetworkInterfaceSpecification) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "InstanceNetworkInterfaceSpecification"} - if s.PrivateIpAddresses != nil { - for i, v := range s.PrivateIpAddresses { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PrivateIpAddresses", i), err.(request.ErrInvalidParams)) - } - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - // SetAssociatePublicIpAddress sets the AssociatePublicIpAddress field's value. func (s *InstanceNetworkInterfaceSpecification) SetAssociatePublicIpAddress(v bool) *InstanceNetworkInterfaceSpecification { s.AssociatePublicIpAddress = &v @@ -49159,7 +53773,7 @@ func (s *InstancePrivateIpAddress) SetPrivateIpAddress(v string) *InstancePrivat type InstanceState struct { _ struct{} `type:"structure"` - // The low byte represents the state. The high byte is an opaque internal value + // The low byte represents the state. The high byte is used for internal purposes // and should be ignored. // // * 0 : pending @@ -49322,7 +53936,7 @@ type InstanceStatusDetails struct { // The time when a status check failed. For an instance that was launched and // impaired, this is the time when the instance was launched. - ImpairedSince *time.Time `locationName:"impairedSince" type:"timestamp" timestampFormat:"iso8601"` + ImpairedSince *time.Time `locationName:"impairedSince" type:"timestamp"` // The type of instance status. Name *string `locationName:"name" type:"string" enum:"StatusName"` @@ -49374,10 +53988,10 @@ type InstanceStatusEvent struct { Description *string `locationName:"description" type:"string"` // The latest scheduled end time for the event. - NotAfter *time.Time `locationName:"notAfter" type:"timestamp" timestampFormat:"iso8601"` + NotAfter *time.Time `locationName:"notAfter" type:"timestamp"` // The earliest scheduled start time for the event. - NotBefore *time.Time `locationName:"notBefore" type:"timestamp" timestampFormat:"iso8601"` + NotBefore *time.Time `locationName:"notBefore" type:"timestamp"` } // String returns the string representation @@ -49447,17 +54061,17 @@ func (s *InstanceStatusSummary) SetStatus(v string) *InstanceStatusSummary { return s } -// Describes an Internet gateway. +// Describes an internet gateway. type InternetGateway struct { _ struct{} `type:"structure"` - // Any VPCs attached to the Internet gateway. + // Any VPCs attached to the internet gateway. 
Attachments []*InternetGatewayAttachment `locationName:"attachmentSet" locationNameList:"item" type:"list"` - // The ID of the Internet gateway. + // The ID of the internet gateway. InternetGatewayId *string `locationName:"internetGatewayId" type:"string"` - // Any tags assigned to the Internet gateway. + // Any tags assigned to the internet gateway. Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` } @@ -49489,12 +54103,12 @@ func (s *InternetGateway) SetTags(v []*Tag) *InternetGateway { return s } -// Describes the attachment of a VPC to an Internet gateway or an egress-only -// Internet gateway. +// Describes the attachment of a VPC to an internet gateway or an egress-only +// internet gateway. type InternetGatewayAttachment struct { _ struct{} `type:"structure"` - // The current state of the attachment. For an Internet gateway, the state is + // The current state of the attachment. For an internet gateway, the state is // available when attached to a VPC; otherwise, this value is not returned. State *string `locationName:"state" type:"string" enum:"AttachmentStatus"` @@ -49549,11 +54163,9 @@ type IpPermission struct { // [EC2-VPC only] One or more IPv6 ranges. Ipv6Ranges []*Ipv6Range `locationName:"ipv6Ranges" locationNameList:"item" type:"list"` - // (EC2-VPC only; valid for AuthorizeSecurityGroupEgress, RevokeSecurityGroupEgress - // and DescribeSecurityGroups only) One or more prefix list IDs for an AWS service. - // In an AuthorizeSecurityGroupEgress request, this is the AWS service that - // you want to access through a VPC endpoint from instances associated with - // the security group. + // [EC2-VPC only] One or more prefix list IDs for an AWS service. With AuthorizeSecurityGroupEgress, + // this is the AWS service that you want to access through a VPC endpoint from + // instances associated with the security group. PrefixListIds []*PrefixListId `locationName:"prefixListIds" locationNameList:"item" type:"list"` // The end of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 code. @@ -49984,7 +54596,7 @@ type LaunchTemplate struct { _ struct{} `type:"structure"` // The time launch template was created. - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The principal that created the launch template. CreatedBy *string `locationName:"createdBy" type:"string"` @@ -50057,6 +54669,40 @@ func (s *LaunchTemplate) SetTags(v []*Tag) *LaunchTemplate { return s } +// Describes a launch template and overrides. +type LaunchTemplateAndOverridesResponse struct { + _ struct{} `type:"structure"` + + // The launch template. + LaunchTemplateSpecification *FleetLaunchTemplateSpecification `locationName:"launchTemplateSpecification" type:"structure"` + + // Any parameters that you specify override the same parameters in the launch + // template. + Overrides *FleetLaunchTemplateOverrides `locationName:"overrides" type:"structure"` +} + +// String returns the string representation +func (s LaunchTemplateAndOverridesResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateAndOverridesResponse) GoString() string { + return s.String() +} + +// SetLaunchTemplateSpecification sets the LaunchTemplateSpecification field's value. 
+func (s *LaunchTemplateAndOverridesResponse) SetLaunchTemplateSpecification(v *FleetLaunchTemplateSpecification) *LaunchTemplateAndOverridesResponse { + s.LaunchTemplateSpecification = v + return s +} + +// SetOverrides sets the Overrides field's value. +func (s *LaunchTemplateAndOverridesResponse) SetOverrides(v *FleetLaunchTemplateOverrides) *LaunchTemplateAndOverridesResponse { + s.Overrides = v + return s +} + // Describes a block device mapping. type LaunchTemplateBlockDeviceMapping struct { _ struct{} `type:"structure"` @@ -50166,6 +54812,91 @@ func (s *LaunchTemplateBlockDeviceMappingRequest) SetVirtualName(v string) *Laun return s } +// Describes an instance's Capacity Reservation targeting option. You can specify +// only one option at a time. Use the CapacityReservationPreference parameter +// to configure the instance to run in On-Demand capacity or to run in any open +// Capacity Reservation that has matching attributes (instance type, platform, +// Availability Zone). Use the CapacityReservationTarget parameter to explicitly +// target a specific Capacity Reservation. +type LaunchTemplateCapacityReservationSpecificationRequest struct { + _ struct{} `type:"structure"` + + // Indicates the instance's Capacity Reservation preferences. Possible preferences + // include: + // + // * open - The instance can run in any open Capacity Reservation that has + // matching attributes (instance type, platform, Availability Zone). + // + // * none - The instance avoids running in a Capacity Reservation even if + // one is available. The instance runs in On-Demand capacity. + CapacityReservationPreference *string `type:"string" enum:"CapacityReservationPreference"` + + // Information about the target Capacity Reservation. + CapacityReservationTarget *CapacityReservationTarget `type:"structure"` +} + +// String returns the string representation +func (s LaunchTemplateCapacityReservationSpecificationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateCapacityReservationSpecificationRequest) GoString() string { + return s.String() +} + +// SetCapacityReservationPreference sets the CapacityReservationPreference field's value. +func (s *LaunchTemplateCapacityReservationSpecificationRequest) SetCapacityReservationPreference(v string) *LaunchTemplateCapacityReservationSpecificationRequest { + s.CapacityReservationPreference = &v + return s +} + +// SetCapacityReservationTarget sets the CapacityReservationTarget field's value. +func (s *LaunchTemplateCapacityReservationSpecificationRequest) SetCapacityReservationTarget(v *CapacityReservationTarget) *LaunchTemplateCapacityReservationSpecificationRequest { + s.CapacityReservationTarget = v + return s +} + +// Information about the Capacity Reservation targeting option. +type LaunchTemplateCapacityReservationSpecificationResponse struct { + _ struct{} `type:"structure"` + + // Indicates the instance's Capacity Reservation preferences. Possible preferences + // include: + // + // * open - The instance can run in any open Capacity Reservation that has + // matching attributes (instance type, platform, Availability Zone). + // + // * none - The instance avoids running in a Capacity Reservation even if + // one is available. The instance runs in On-Demand capacity. + CapacityReservationPreference *string `locationName:"capacityReservationPreference" type:"string" enum:"CapacityReservationPreference"` + + // Information about the target Capacity Reservation. 
+ CapacityReservationTarget *CapacityReservationTargetResponse `locationName:"capacityReservationTarget" type:"structure"` +} + +// String returns the string representation +func (s LaunchTemplateCapacityReservationSpecificationResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateCapacityReservationSpecificationResponse) GoString() string { + return s.String() +} + +// SetCapacityReservationPreference sets the CapacityReservationPreference field's value. +func (s *LaunchTemplateCapacityReservationSpecificationResponse) SetCapacityReservationPreference(v string) *LaunchTemplateCapacityReservationSpecificationResponse { + s.CapacityReservationPreference = &v + return s +} + +// SetCapacityReservationTarget sets the CapacityReservationTarget field's value. +func (s *LaunchTemplateCapacityReservationSpecificationResponse) SetCapacityReservationTarget(v *CapacityReservationTargetResponse) *LaunchTemplateCapacityReservationSpecificationResponse { + s.CapacityReservationTarget = v + return s +} + // Describes a launch template and overrides. type LaunchTemplateConfig struct { _ struct{} `type:"structure"` @@ -50215,6 +54946,75 @@ func (s *LaunchTemplateConfig) SetOverrides(v []*LaunchTemplateOverrides) *Launc return s } +// The CPU options for the instance. +type LaunchTemplateCpuOptions struct { + _ struct{} `type:"structure"` + + // The number of CPU cores for the instance. + CoreCount *int64 `locationName:"coreCount" type:"integer"` + + // The number of threads per CPU core. + ThreadsPerCore *int64 `locationName:"threadsPerCore" type:"integer"` +} + +// String returns the string representation +func (s LaunchTemplateCpuOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateCpuOptions) GoString() string { + return s.String() +} + +// SetCoreCount sets the CoreCount field's value. +func (s *LaunchTemplateCpuOptions) SetCoreCount(v int64) *LaunchTemplateCpuOptions { + s.CoreCount = &v + return s +} + +// SetThreadsPerCore sets the ThreadsPerCore field's value. +func (s *LaunchTemplateCpuOptions) SetThreadsPerCore(v int64) *LaunchTemplateCpuOptions { + s.ThreadsPerCore = &v + return s +} + +// The CPU options for the instance. Both the core count and threads per core +// must be specified in the request. +type LaunchTemplateCpuOptionsRequest struct { + _ struct{} `type:"structure"` + + // The number of CPU cores for the instance. + CoreCount *int64 `type:"integer"` + + // The number of threads per CPU core. To disable Intel Hyper-Threading Technology + // for the instance, specify a value of 1. Otherwise, specify the default value + // of 2. + ThreadsPerCore *int64 `type:"integer"` +} + +// String returns the string representation +func (s LaunchTemplateCpuOptionsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateCpuOptionsRequest) GoString() string { + return s.String() +} + +// SetCoreCount sets the CoreCount field's value. +func (s *LaunchTemplateCpuOptionsRequest) SetCoreCount(v int64) *LaunchTemplateCpuOptionsRequest { + s.CoreCount = &v + return s +} + +// SetThreadsPerCore sets the ThreadsPerCore field's value. +func (s *LaunchTemplateCpuOptionsRequest) SetThreadsPerCore(v int64) *LaunchTemplateCpuOptionsRequest { + s.ThreadsPerCore = &v + return s +} + // Describes a block device for an EBS volume. 
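`LaunchTemplateCpuOptionsRequest` above notes that both the core count and threads per core must be specified. A sketch of creating a launch template with Hyper-Threading disabled, assuming the client from the first sketch; the template name, AMI ID, and instance type are placeholders:

```go
// Assumes the imports and the *ec2.EC2 client ("svc") from the first sketch.
func createTemplateWithCpuOptions(svc *ec2.EC2) error {
	_, err := svc.CreateLaunchTemplate(&ec2.CreateLaunchTemplateInput{
		LaunchTemplateName: aws.String("example-no-ht"), // placeholder
		LaunchTemplateData: &ec2.RequestLaunchTemplateData{
			ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder
			InstanceType: aws.String("c5.xlarge"),
			CpuOptions: &ec2.LaunchTemplateCpuOptionsRequest{
				CoreCount:      aws.Int64(2),
				ThreadsPerCore: aws.Int64(1), // 1 disables Hyper-Threading
			},
		},
	})
	return err
}
```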
type LaunchTemplateEbsBlockDevice struct { _ struct{} `type:"structure"` @@ -50695,26 +55495,6 @@ func (s LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) GoString() s return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "LaunchTemplateInstanceNetworkInterfaceSpecificationRequest"} - if s.PrivateIpAddresses != nil { - for i, v := range s.PrivateIpAddresses { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PrivateIpAddresses", i), err.(request.ErrInvalidParams)) - } - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - // SetAssociatePublicIpAddress sets the AssociatePublicIpAddress field's value. func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetAssociatePublicIpAddress(v bool) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { s.AssociatePublicIpAddress = &v @@ -50797,6 +55577,14 @@ type LaunchTemplateOverrides struct { // The instance type. InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` + // The priority for the launch template override. If OnDemandAllocationStrategy + // is set to prioritized, Spot Fleet uses priority to determine which launch + // template override to use first in fulfilling On-Demand capacity. The highest + // priority is launched first. Valid values are whole numbers starting at 0. + // The lower the number, the higher the priority. If no number is set, the launch + // template override has the lowest priority. + Priority *float64 `locationName:"priority" type:"double"` + // The maximum price per unit hour that you are willing to pay for a Spot Instance. SpotPrice *string `locationName:"spotPrice" type:"string"` @@ -50829,6 +55617,12 @@ func (s *LaunchTemplateOverrides) SetInstanceType(v string) *LaunchTemplateOverr return s } +// SetPriority sets the Priority field's value. +func (s *LaunchTemplateOverrides) SetPriority(v float64) *LaunchTemplateOverrides { + s.Priority = &v + return s +} + // SetSpotPrice sets the SpotPrice field's value. func (s *LaunchTemplateOverrides) SetSpotPrice(v string) *LaunchTemplateOverrides { s.SpotPrice = &v @@ -50988,7 +55782,7 @@ func (s *LaunchTemplatePlacementRequest) SetTenancy(v string) *LaunchTemplatePla } // The launch template to use. You must specify either the launch template ID -// or launch template name in the request. +// or launch template name in the request, but not both. type LaunchTemplateSpecification struct { _ struct{} `type:"structure"` @@ -51054,7 +55848,7 @@ type LaunchTemplateSpotMarketOptions struct { // active until all instances launch, the request is canceled, or this date // is reached. If the request is persistent, it remains active until it is canceled // or this date and time is reached. - ValidUntil *time.Time `locationName:"validUntil" type:"timestamp" timestampFormat:"iso8601"` + ValidUntil *time.Time `locationName:"validUntil" type:"timestamp"` } // String returns the string representation @@ -51120,7 +55914,7 @@ type LaunchTemplateSpotMarketOptionsRequest struct { // is reached. If the request is persistent, it remains active until it is canceled // or this date and time is reached. The default end date is 7 days from the // current date. 
- ValidUntil *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ValidUntil *time.Time `type:"timestamp"` } // String returns the string representation @@ -51201,7 +55995,8 @@ type LaunchTemplateTagSpecificationRequest struct { _ struct{} `type:"structure"` // The type of resource to tag. Currently, the resource types that support tagging - // on creation are instance and volume. + // on creation are instance and volume. To tag a resource after it has been + // created, see CreateTags. ResourceType *string `type:"string" enum:"ResourceType"` // The tags to apply to the resource. @@ -51235,7 +56030,7 @@ type LaunchTemplateVersion struct { _ struct{} `type:"structure"` // The time the version was created. - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The principal that created the version. CreatedBy *string `locationName:"createdBy" type:"string"` @@ -51520,6 +56315,226 @@ func (s *LoadPermissionRequest) SetUserId(v string) *LoadPermissionRequest { return s } +type ModifyCapacityReservationInput struct { + _ struct{} `type:"structure"` + + // The ID of the Capacity Reservation. + // + // CapacityReservationId is a required field + CapacityReservationId *string `type:"string" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The date and time at which the Capacity Reservation expires. When a Capacity + // Reservation expires, the reserved capacity is released and you can no longer + // launch instances into it. The Capacity Reservation's state changes to expired + // when it reaches its end date and time. + // + // The Capacity Reservation is cancelled within an hour from the specified time. + // For example, if you specify 5/31/2019, 13:30:55, the Capacity Reservation + // is guaranteed to end between 13:30:55 and 14:30:55 on 5/31/2019. + // + // You must provide an EndDate value if EndDateType is limited. Omit EndDate + // if EndDateType is unlimited. + EndDate *time.Time `type:"timestamp"` + + // Indicates the way in which the Capacity Reservation ends. A Capacity Reservation + // can have one of the following end types: + // + // * unlimited - The Capacity Reservation remains active until you explicitly + // cancel it. Do not provide an EndDate value if EndDateType is unlimited. + // + // * limited - The Capacity Reservation expires automatically at a specified + // date and time. You must provide an EndDate value if EndDateType is limited. + EndDateType *string `type:"string" enum:"EndDateType"` + + // The number of instances for which to reserve capacity. + InstanceCount *int64 `type:"integer"` +} + +// String returns the string representation +func (s ModifyCapacityReservationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyCapacityReservationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ModifyCapacityReservationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyCapacityReservationInput"} + if s.CapacityReservationId == nil { + invalidParams.Add(request.NewErrParamRequired("CapacityReservationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCapacityReservationId sets the CapacityReservationId field's value. +func (s *ModifyCapacityReservationInput) SetCapacityReservationId(v string) *ModifyCapacityReservationInput { + s.CapacityReservationId = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyCapacityReservationInput) SetDryRun(v bool) *ModifyCapacityReservationInput { + s.DryRun = &v + return s +} + +// SetEndDate sets the EndDate field's value. +func (s *ModifyCapacityReservationInput) SetEndDate(v time.Time) *ModifyCapacityReservationInput { + s.EndDate = &v + return s +} + +// SetEndDateType sets the EndDateType field's value. +func (s *ModifyCapacityReservationInput) SetEndDateType(v string) *ModifyCapacityReservationInput { + s.EndDateType = &v + return s +} + +// SetInstanceCount sets the InstanceCount field's value. +func (s *ModifyCapacityReservationInput) SetInstanceCount(v int64) *ModifyCapacityReservationInput { + s.InstanceCount = &v + return s +} + +type ModifyCapacityReservationOutput struct { + _ struct{} `type:"structure"` + + // Information about the Capacity Reservation. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s ModifyCapacityReservationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyCapacityReservationOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. +func (s *ModifyCapacityReservationOutput) SetReturn(v bool) *ModifyCapacityReservationOutput { + s.Return = &v + return s +} + +type ModifyFleetInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // Indicates whether running instances should be terminated if the total target + // capacity of the EC2 Fleet is decreased below the current size of the EC2 + // Fleet. + ExcessCapacityTerminationPolicy *string `type:"string" enum:"FleetExcessCapacityTerminationPolicy"` + + // The ID of the EC2 Fleet. + // + // FleetId is a required field + FleetId *string `type:"string" required:"true"` + + // The size of the EC2 Fleet. + // + // TargetCapacitySpecification is a required field + TargetCapacitySpecification *TargetCapacitySpecificationRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s ModifyFleetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyFleetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
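`ModifyCapacityReservationInput` above allows the instance count or end date of an existing Capacity Reservation to be changed. A rough sketch, assuming the client from the first sketch; the reservation ID and count come from the caller:

```go
// Assumes the imports and the *ec2.EC2 client ("svc") from the first sketch.
func resizeCapacityReservation(svc *ec2.EC2, reservationID string, count int64) error {
	_, err := svc.ModifyCapacityReservation(&ec2.ModifyCapacityReservationInput{
		CapacityReservationId: aws.String(reservationID),
		InstanceCount:         aws.Int64(count),
		// "unlimited" keeps the reservation active until explicitly cancelled;
		// EndDate is omitted in that case, as the field docs above describe.
		EndDateType: aws.String("unlimited"),
	})
	return err
}
```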
+func (s *ModifyFleetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyFleetInput"} + if s.FleetId == nil { + invalidParams.Add(request.NewErrParamRequired("FleetId")) + } + if s.TargetCapacitySpecification == nil { + invalidParams.Add(request.NewErrParamRequired("TargetCapacitySpecification")) + } + if s.TargetCapacitySpecification != nil { + if err := s.TargetCapacitySpecification.Validate(); err != nil { + invalidParams.AddNested("TargetCapacitySpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyFleetInput) SetDryRun(v bool) *ModifyFleetInput { + s.DryRun = &v + return s +} + +// SetExcessCapacityTerminationPolicy sets the ExcessCapacityTerminationPolicy field's value. +func (s *ModifyFleetInput) SetExcessCapacityTerminationPolicy(v string) *ModifyFleetInput { + s.ExcessCapacityTerminationPolicy = &v + return s +} + +// SetFleetId sets the FleetId field's value. +func (s *ModifyFleetInput) SetFleetId(v string) *ModifyFleetInput { + s.FleetId = &v + return s +} + +// SetTargetCapacitySpecification sets the TargetCapacitySpecification field's value. +func (s *ModifyFleetInput) SetTargetCapacitySpecification(v *TargetCapacitySpecificationRequest) *ModifyFleetInput { + s.TargetCapacitySpecification = v + return s +} + +type ModifyFleetOutput struct { + _ struct{} `type:"structure"` + + // Is true if the request succeeds, and an error otherwise. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s ModifyFleetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyFleetOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. +func (s *ModifyFleetOutput) SetReturn(v bool) *ModifyFleetOutput { + s.Return = &v + return s +} + type ModifyFpgaImageAttributeInput struct { _ struct{} `type:"structure"` @@ -51669,7 +56684,6 @@ func (s *ModifyFpgaImageAttributeOutput) SetFpgaImageAttribute(v *FpgaImageAttri return s } -// Contains the parameters for ModifyHosts. type ModifyHostsInput struct { _ struct{} `type:"structure"` @@ -51678,7 +56692,7 @@ type ModifyHostsInput struct { // AutoPlacement is a required field AutoPlacement *string `locationName:"autoPlacement" type:"string" required:"true" enum:"AutoPlacement"` - // The host IDs of the Dedicated Hosts you want to modify. + // The IDs of the Dedicated Hosts to modify. // // HostIds is a required field HostIds []*string `locationName:"hostId" locationNameList:"item" type:"list" required:"true"` @@ -51722,7 +56736,6 @@ func (s *ModifyHostsInput) SetHostIds(v []*string) *ModifyHostsInput { return s } -// Contains the output of ModifyHosts. type ModifyHostsOutput struct { _ struct{} `type:"structure"` @@ -51756,7 +56769,6 @@ func (s *ModifyHostsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *ModifyHostsO return s } -// Contains the parameters of ModifyIdFormat. type ModifyIdFormatInput struct { _ struct{} `type:"structure"` @@ -51832,7 +56844,6 @@ func (s ModifyIdFormatOutput) GoString() string { return s.String() } -// Contains the parameters of ModifyIdentityIdFormat. 
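`ModifyFleetInput` above requires both the fleet ID and a target capacity specification. A sketch of scaling an EC2 Fleet, assuming the client from the first sketch; the fleet ID and target capacity are caller-supplied:

```go
// Assumes the imports and the *ec2.EC2 client ("svc") from the first sketch.
func scaleFleet(svc *ec2.EC2, fleetID string, target int64) error {
	_, err := svc.ModifyFleet(&ec2.ModifyFleetInput{
		FleetId: aws.String(fleetID),
		TargetCapacitySpecification: &ec2.TargetCapacitySpecificationRequest{
			TotalTargetCapacity: aws.Int64(target),
		},
		// Optionally terminate instances when the fleet shrinks below its current size.
		ExcessCapacityTerminationPolicy: aws.String("termination"),
	})
	return err
}
```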
type ModifyIdentityIdFormatInput struct { _ struct{} `type:"structure"` @@ -52297,6 +57308,93 @@ func (s ModifyInstanceAttributeOutput) GoString() string { return s.String() } +type ModifyInstanceCapacityReservationAttributesInput struct { + _ struct{} `type:"structure"` + + // Information about the Capacity Reservation targeting option. + // + // CapacityReservationSpecification is a required field + CapacityReservationSpecification *CapacityReservationSpecification `type:"structure" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the instance to be modified. + // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ModifyInstanceCapacityReservationAttributesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceCapacityReservationAttributesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyInstanceCapacityReservationAttributesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyInstanceCapacityReservationAttributesInput"} + if s.CapacityReservationSpecification == nil { + invalidParams.Add(request.NewErrParamRequired("CapacityReservationSpecification")) + } + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCapacityReservationSpecification sets the CapacityReservationSpecification field's value. +func (s *ModifyInstanceCapacityReservationAttributesInput) SetCapacityReservationSpecification(v *CapacityReservationSpecification) *ModifyInstanceCapacityReservationAttributesInput { + s.CapacityReservationSpecification = v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyInstanceCapacityReservationAttributesInput) SetDryRun(v bool) *ModifyInstanceCapacityReservationAttributesInput { + s.DryRun = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ModifyInstanceCapacityReservationAttributesInput) SetInstanceId(v string) *ModifyInstanceCapacityReservationAttributesInput { + s.InstanceId = &v + return s +} + +type ModifyInstanceCapacityReservationAttributesOutput struct { + _ struct{} `type:"structure"` + + // Returns true if the request succeeds; otherwise, it returns an error. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s ModifyInstanceCapacityReservationAttributesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceCapacityReservationAttributesOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. 
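`ModifyInstanceCapacityReservationAttributesInput` above takes a `CapacityReservationSpecification` (defined elsewhere in this vendored update). A hedged sketch of pointing an instance at any open, matching Capacity Reservation, assuming the client from the first sketch:

```go
// Assumes the imports and the *ec2.EC2 client ("svc") from the first sketch.
func useOpenCapacityReservations(svc *ec2.EC2, instanceID string) error {
	_, err := svc.ModifyInstanceCapacityReservationAttributes(
		&ec2.ModifyInstanceCapacityReservationAttributesInput{
			InstanceId: aws.String(instanceID),
			CapacityReservationSpecification: &ec2.CapacityReservationSpecification{
				// "open" runs the instance in any matching open Capacity
				// Reservation; "none" keeps it on regular On-Demand capacity.
				CapacityReservationPreference: aws.String("open"),
			},
		})
	return err
}
```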
+func (s *ModifyInstanceCapacityReservationAttributesOutput) SetReturn(v bool) *ModifyInstanceCapacityReservationAttributesOutput { + s.Return = &v + return s +} + type ModifyInstanceCreditSpecificationInput struct { _ struct{} `type:"structure"` @@ -52392,7 +57490,6 @@ func (s *ModifyInstanceCreditSpecificationOutput) SetUnsuccessfulInstanceCreditS return s } -// Contains the parameters for ModifyInstancePlacement. type ModifyInstancePlacementInput struct { _ struct{} `type:"structure"` @@ -52471,7 +57568,6 @@ func (s *ModifyInstancePlacementInput) SetTenancy(v string) *ModifyInstancePlace return s } -// Contains the output of ModifyInstancePlacement. type ModifyInstancePlacementOutput struct { _ struct{} `type:"structure"` @@ -52796,9 +57892,8 @@ func (s *ModifyReservedInstancesOutput) SetReservedInstancesModificationId(v str type ModifySnapshotAttributeInput struct { _ struct{} `type:"structure"` - // The snapshot attribute to modify. - // - // Only volume creation permissions may be modified at the customer level. + // The snapshot attribute to modify. Only volume creation permissions can be + // modified. Attribute *string `type:"string" enum:"SnapshotAttributeName"` // A JSON representation of the snapshot attribute modification. @@ -52987,7 +58082,6 @@ func (s *ModifySpotFleetRequestOutput) SetReturn(v bool) *ModifySpotFleetRequest return s } -// Contains the parameters for ModifySubnetAttribute. type ModifySubnetAttributeInput struct { _ struct{} `type:"structure"` @@ -53151,19 +58245,17 @@ type ModifyVolumeInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // Target IOPS rate of the volume to be modified. + // The target IOPS rate of the volume. // - // Only valid for Provisioned IOPS SSD (io1) volumes. For more information about - // io1 IOPS configuration, see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_piops - // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_piops). + // This is only valid for Provisioned IOPS SSD (io1) volumes. For more information, + // see Provisioned IOPS SSD (io1) Volumes (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_piops). // // Default: If no IOPS value is specified, the existing value is retained. Iops *int64 `type:"integer"` - // Target size in GiB of the volume to be modified. Target volume size must - // be greater than or equal to than the existing size of the volume. For information - // about available EBS volume sizes, see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html - // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). + // The target size of the volume, in GiB. The target volume size must be greater + // than or equal to than the existing size of the volume. For information about + // available EBS volume sizes, see Amazon EBS Volume Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). // // Default: If no size is specified, the existing size is retained. Size *int64 `type:"integer"` @@ -53173,10 +58265,7 @@ type ModifyVolumeInput struct { // VolumeId is a required field VolumeId *string `type:"string" required:"true"` - // Target EBS volume type of the volume to be modified - // - // The API does not support modifications for volume type standard. You also - // cannot change the type of a volume to standard. + // The target EBS volume type of the volume. 
// // Default: If no type is specified, the existing type is retained. VolumeType *string `type:"string" enum:"VolumeType"` @@ -53238,7 +58327,7 @@ func (s *ModifyVolumeInput) SetVolumeType(v string) *ModifyVolumeInput { type ModifyVolumeOutput struct { _ struct{} `type:"structure"` - // A VolumeModification object. + // Information about the volume modification. VolumeModification *VolumeModification `locationName:"volumeModification" type:"structure"` } @@ -53258,7 +58347,6 @@ func (s *ModifyVolumeOutput) SetVolumeModification(v *VolumeModification) *Modif return s } -// Contains the parameters for ModifyVpcAttribute. type ModifyVpcAttributeInput struct { _ struct{} `type:"structure"` @@ -53273,8 +58361,8 @@ type ModifyVpcAttributeInput struct { // Indicates whether the DNS resolution is supported for the VPC. If enabled, // queries to the Amazon provided DNS server at the 169.254.169.253 IP address, // or the reserved IP address at the base of the VPC network range "plus two" - // will succeed. If disabled, the Amazon provided DNS service in the VPC that - // resolves public DNS hostnames to IP addresses is not enabled. + // succeed. If disabled, the Amazon provided DNS service in the VPC that resolves + // public DNS hostnames to IP addresses is not enabled. // // You cannot modify the DNS resolution and DNS hostnames attributes in the // same request. Use separate requests for each attribute. @@ -53698,8 +58786,9 @@ func (s *ModifyVpcEndpointServiceConfigurationOutput) SetReturn(v bool) *ModifyV type ModifyVpcEndpointServicePermissionsInput struct { _ struct{} `type:"structure"` - // One or more Amazon Resource Names (ARNs) of principals for which to allow - // permission. Specify * to allow all principals. + // The Amazon Resource Names (ARN) of one or more principals. Permissions are + // granted to the principals in this list. To grant permissions to all principals, + // specify an asterisk (*). AddAllowedPrincipals []*string `locationNameList:"item" type:"list"` // Checks whether you have the required permissions for the action, without @@ -53708,8 +58797,8 @@ type ModifyVpcEndpointServicePermissionsInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // One or more Amazon Resource Names (ARNs) of principals for which to remove - // permission. + // The Amazon Resource Names (ARN) of one or more principals. Permissions are + // revoked for principals in this list. RemoveAllowedPrincipals []*string `locationNameList:"item" type:"list"` // The ID of the service. @@ -53794,7 +58883,7 @@ type ModifyVpcPeeringConnectionOptionsInput struct { // The VPC peering connection options for the accepter VPC. AccepterPeeringConnectionOptions *PeeringConnectionOptionsRequest `type:"structure"` - // Checks whether you have the required permissions for the operation, without + // Checks whether you have the required permissions for the action, without // actually making the request, and provides an error response. If you have // the required permissions, the error response is DryRunOperation. Otherwise, // it is UnauthorizedOperation. @@ -53888,11 +58977,10 @@ func (s *ModifyVpcPeeringConnectionOptionsOutput) SetRequesterPeeringConnectionO return s } -// Contains the parameters for ModifyVpcTenancy. 
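To make the rewritten `AddAllowedPrincipals`/`RemoveAllowedPrincipals` wording above concrete, a hedged sketch of granting access to all principals while revoking a single account; the service ID and principal ARN are placeholders, and the `ModifyVpcEndpointServicePermissions` client operation itself is assumed from the wider SDK rather than shown in this hunk:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	conn := ec2.New(session.Must(session.NewSession()))

	// Grant access to every principal ("*") while revoking a single account,
	// matching the Add/RemoveAllowedPrincipals documentation above.
	_, err := conn.ModifyVpcEndpointServicePermissions(&ec2.ModifyVpcEndpointServicePermissionsInput{
		ServiceId:               aws.String("vpce-svc-0123456789abcdef0"),                    // placeholder
		AddAllowedPrincipals:    aws.StringSlice([]string{"*"}),                               // all principals
		RemoveAllowedPrincipals: aws.StringSlice([]string{"arn:aws:iam::111122223333:root"}), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
}
```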
type ModifyVpcTenancyInput struct { _ struct{} `type:"structure"` - // Checks whether you have the required permissions for the operation, without + // Checks whether you have the required permissions for the action, without // actually making the request, and provides an error response. If you have // the required permissions, the error response is DryRunOperation. Otherwise, // it is UnauthorizedOperation. @@ -53953,7 +59041,6 @@ func (s *ModifyVpcTenancyInput) SetVpcId(v string) *ModifyVpcTenancyInput { return s } -// Contains the output of ModifyVpcTenancy. type ModifyVpcTenancyOutput struct { _ struct{} `type:"structure"` @@ -54077,7 +59164,6 @@ func (s *Monitoring) SetState(v string) *Monitoring { return s } -// Contains the parameters for MoveAddressToVpc. type MoveAddressToVpcInput struct { _ struct{} `type:"structure"` @@ -54128,7 +59214,6 @@ func (s *MoveAddressToVpcInput) SetPublicIp(v string) *MoveAddressToVpcInput { return s } -// Contains the output of MoveAddressToVpc. type MoveAddressToVpcOutput struct { _ struct{} `type:"structure"` @@ -54200,10 +59285,10 @@ type NatGateway struct { _ struct{} `type:"structure"` // The date and time the NAT gateway was created. - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The date and time the NAT gateway was deleted, if applicable. - DeleteTime *time.Time `locationName:"deleteTime" type:"timestamp" timestampFormat:"iso8601"` + DeleteTime *time.Time `locationName:"deleteTime" type:"timestamp"` // If the NAT gateway could not be created, specifies the error code for the // failure. (InsufficientFreeAddressesInSubnet | Gateway.NotAttached | InvalidAllocationID.NotFound @@ -54532,7 +59617,7 @@ type NetworkAclEntry struct { // TCP or UDP protocols: The range of ports the rule applies to. PortRange *PortRange `locationName:"portRange" type:"structure"` - // The protocol. A value of -1 means all protocols. + // The protocol number. A value of "-1" means all protocols. Protocol *string `locationName:"protocol" type:"string"` // Indicates whether to allow or deny the traffic that matches the rule. @@ -54863,7 +59948,7 @@ type NetworkInterfaceAttachment struct { _ struct{} `type:"structure"` // The timestamp indicating when the attachment initiated. - AttachTime *time.Time `locationName:"attachTime" type:"timestamp" timestampFormat:"iso8601"` + AttachTime *time.Time `locationName:"attachTime" type:"timestamp"` // The ID of the network interface attachment. AttachmentId *string `locationName:"attachmentId" type:"string"` @@ -55178,6 +60263,104 @@ func (s *NewDhcpConfiguration) SetValues(v []*string) *NewDhcpConfiguration { return s } +// The allocation strategy of On-Demand Instances in an EC2 Fleet. +type OnDemandOptions struct { + _ struct{} `type:"structure"` + + // The order of the launch template overrides to use in fulfilling On-Demand + // capacity. If you specify lowest-price, EC2 Fleet uses price to determine + // the order, launching the lowest price first. If you specify prioritized, + // EC2 Fleet uses the priority that you assigned to each launch template override, + // launching the highest priority first. If you do not specify a value, EC2 + // Fleet defaults to lowest-price. + AllocationStrategy *string `locationName:"allocationStrategy" type:"string" enum:"FleetOnDemandAllocationStrategy"` + + // The minimum target capacity for On-Demand Instances in the fleet. 
If the + // minimum target capacity is not reached, the fleet launches no instances. + MinTargetCapacity *int64 `locationName:"minTargetCapacity" type:"integer"` + + // Indicates that the fleet uses a single instance type to launch all On-Demand + // Instances in the fleet. + SingleInstanceType *bool `locationName:"singleInstanceType" type:"boolean"` +} + +// String returns the string representation +func (s OnDemandOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OnDemandOptions) GoString() string { + return s.String() +} + +// SetAllocationStrategy sets the AllocationStrategy field's value. +func (s *OnDemandOptions) SetAllocationStrategy(v string) *OnDemandOptions { + s.AllocationStrategy = &v + return s +} + +// SetMinTargetCapacity sets the MinTargetCapacity field's value. +func (s *OnDemandOptions) SetMinTargetCapacity(v int64) *OnDemandOptions { + s.MinTargetCapacity = &v + return s +} + +// SetSingleInstanceType sets the SingleInstanceType field's value. +func (s *OnDemandOptions) SetSingleInstanceType(v bool) *OnDemandOptions { + s.SingleInstanceType = &v + return s +} + +// The allocation strategy of On-Demand Instances in an EC2 Fleet. +type OnDemandOptionsRequest struct { + _ struct{} `type:"structure"` + + // The order of the launch template overrides to use in fulfilling On-Demand + // capacity. If you specify lowest-price, EC2 Fleet uses price to determine + // the order, launching the lowest price first. If you specify prioritized, + // EC2 Fleet uses the priority that you assigned to each launch template override, + // launching the highest priority first. If you do not specify a value, EC2 + // Fleet defaults to lowest-price. + AllocationStrategy *string `type:"string" enum:"FleetOnDemandAllocationStrategy"` + + // The minimum target capacity for On-Demand Instances in the fleet. If the + // minimum target capacity is not reached, the fleet launches no instances. + MinTargetCapacity *int64 `type:"integer"` + + // Indicates that the fleet uses a single instance type to launch all On-Demand + // Instances in the fleet. + SingleInstanceType *bool `type:"boolean"` +} + +// String returns the string representation +func (s OnDemandOptionsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OnDemandOptionsRequest) GoString() string { + return s.String() +} + +// SetAllocationStrategy sets the AllocationStrategy field's value. +func (s *OnDemandOptionsRequest) SetAllocationStrategy(v string) *OnDemandOptionsRequest { + s.AllocationStrategy = &v + return s +} + +// SetMinTargetCapacity sets the MinTargetCapacity field's value. +func (s *OnDemandOptionsRequest) SetMinTargetCapacity(v int64) *OnDemandOptionsRequest { + s.MinTargetCapacity = &v + return s +} + +// SetSingleInstanceType sets the SingleInstanceType field's value. +func (s *OnDemandOptionsRequest) SetSingleInstanceType(v bool) *OnDemandOptionsRequest { + s.SingleInstanceType = &v + return s +} + // Describes the data that identifies an Amazon FPGA image (AFI) on the PCI // bus. type PciId struct { @@ -55239,11 +60422,11 @@ type PeeringConnectionOptions struct { AllowDnsResolutionFromRemoteVpc *bool `locationName:"allowDnsResolutionFromRemoteVpc" type:"boolean"` // If true, enables outbound communication from an EC2-Classic instance that's - // linked to a local VPC via ClassicLink to instances in a peer VPC. + // linked to a local VPC using ClassicLink to instances in a peer VPC. 
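For reference, a small sketch of populating the new `OnDemandOptionsRequest` with the documented `lowest-price` strategy. In practice this struct would be attached to a fleet-creation request (for example as the `OnDemandOptions` member of `CreateFleetInput`, which is outside this hunk), so the example only builds and prints it:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Ask the fleet to fill On-Demand capacity strictly by price, with a
	// minimum of two instances and a single instance type, using the three
	// fields documented on OnDemandOptionsRequest above.
	opts := (&ec2.OnDemandOptionsRequest{}).
		SetAllocationStrategy("lowest-price"). // the documented default
		SetMinTargetCapacity(2).
		SetSingleInstanceType(true)

	// The generated String()/GoString() methods pretty-print the struct.
	fmt.Println(opts)
}
```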
AllowEgressFromLocalClassicLinkToRemoteVpc *bool `locationName:"allowEgressFromLocalClassicLinkToRemoteVpc" type:"boolean"` // If true, enables outbound communication from instances in a local VPC to - // an EC2-Classic instance that's linked to a peer VPC via ClassicLink. + // an EC2-Classic instance that's linked to a peer VPC using ClassicLink. AllowEgressFromLocalVpcToRemoteClassicLink *bool `locationName:"allowEgressFromLocalVpcToRemoteClassicLink" type:"boolean"` } @@ -55284,11 +60467,11 @@ type PeeringConnectionOptionsRequest struct { AllowDnsResolutionFromRemoteVpc *bool `type:"boolean"` // If true, enables outbound communication from an EC2-Classic instance that's - // linked to a local VPC via ClassicLink to instances in a peer VPC. + // linked to a local VPC using ClassicLink to instances in a peer VPC. AllowEgressFromLocalClassicLinkToRemoteVpc *bool `type:"boolean"` // If true, enables outbound communication from instances in a local VPC to - // an EC2-Classic instance that's linked to a peer VPC via ClassicLink. + // an EC2-Classic instance that's linked to a peer VPC using ClassicLink. AllowEgressFromLocalVpcToRemoteClassicLink *bool `type:"boolean"` } @@ -55331,7 +60514,7 @@ type Placement struct { // The Availability Zone of the instance. AvailabilityZone *string `locationName:"availabilityZone" type:"string"` - // The name of the placement group the instance is in (for cluster compute instances). + // The name of the placement group the instance is in. GroupName *string `locationName:"groupName" type:"string"` // The ID of the Dedicated Host on which the instance resides. This parameter @@ -55435,6 +60618,30 @@ func (s *PlacementGroup) SetStrategy(v string) *PlacementGroup { return s } +// Describes the placement of an instance. +type PlacementResponse struct { + _ struct{} `type:"structure"` + + // The name of the placement group the instance is in. + GroupName *string `locationName:"groupName" type:"string"` +} + +// String returns the string representation +func (s PlacementResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PlacementResponse) GoString() string { + return s.String() +} + +// SetGroupName sets the GroupName field's value. +func (s *PlacementResponse) SetGroupName(v string) *PlacementResponse { + s.GroupName = &v + return s +} + // Describes a range of ports. type PortRange struct { _ struct{} `type:"structure"` @@ -55510,7 +60717,7 @@ func (s *PrefixList) SetPrefixListName(v string) *PrefixList { return s } -// [EC2-VPC only] The ID of the prefix. +// Describes a prefix list ID. type PrefixListId struct { _ struct{} `type:"structure"` @@ -55728,9 +60935,7 @@ type PrivateIpAddressSpecification struct { Primary *bool `locationName:"primary" type:"boolean"` // The private IPv4 addresses. - // - // PrivateIpAddress is a required field - PrivateIpAddress *string `locationName:"privateIpAddress" type:"string" required:"true"` + PrivateIpAddress *string `locationName:"privateIpAddress" type:"string"` } // String returns the string representation @@ -55743,19 +60948,6 @@ func (s PrivateIpAddressSpecification) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *PrivateIpAddressSpecification) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "PrivateIpAddressSpecification"} - if s.PrivateIpAddress == nil { - invalidParams.Add(request.NewErrParamRequired("PrivateIpAddress")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - // SetPrimary sets the Primary field's value. func (s *PrivateIpAddressSpecification) SetPrimary(v bool) *PrivateIpAddressSpecification { s.Primary = &v @@ -55805,7 +60997,7 @@ func (s *ProductCode) SetProductCodeType(v string) *ProductCode { type PropagatingVgw struct { _ struct{} `type:"structure"` - // The ID of the virtual private gateway (VGW). + // The ID of the virtual private gateway. GatewayId *string `locationName:"gatewayId" type:"string"` } @@ -55825,6 +61017,105 @@ func (s *PropagatingVgw) SetGatewayId(v string) *PropagatingVgw { return s } +type ProvisionByoipCidrInput struct { + _ struct{} `type:"structure"` + + // The public IPv4 address range, in CIDR notation. The most specific prefix + // that you can specify is /24. The address range cannot overlap with another + // address range that you've brought to this or another region. + // + // Cidr is a required field + Cidr *string `type:"string" required:"true"` + + // A signed document that proves that you are authorized to bring the specified + // IP address range to Amazon using BYOIP. + CidrAuthorizationContext *CidrAuthorizationContext `type:"structure"` + + // A description for the address range and the address pool. + Description *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` +} + +// String returns the string representation +func (s ProvisionByoipCidrInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProvisionByoipCidrInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ProvisionByoipCidrInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ProvisionByoipCidrInput"} + if s.Cidr == nil { + invalidParams.Add(request.NewErrParamRequired("Cidr")) + } + if s.CidrAuthorizationContext != nil { + if err := s.CidrAuthorizationContext.Validate(); err != nil { + invalidParams.AddNested("CidrAuthorizationContext", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCidr sets the Cidr field's value. +func (s *ProvisionByoipCidrInput) SetCidr(v string) *ProvisionByoipCidrInput { + s.Cidr = &v + return s +} + +// SetCidrAuthorizationContext sets the CidrAuthorizationContext field's value. +func (s *ProvisionByoipCidrInput) SetCidrAuthorizationContext(v *CidrAuthorizationContext) *ProvisionByoipCidrInput { + s.CidrAuthorizationContext = v + return s +} + +// SetDescription sets the Description field's value. +func (s *ProvisionByoipCidrInput) SetDescription(v string) *ProvisionByoipCidrInput { + s.Description = &v + return s +} + +// SetDryRun sets the DryRun field's value. 
+func (s *ProvisionByoipCidrInput) SetDryRun(v bool) *ProvisionByoipCidrInput { + s.DryRun = &v + return s +} + +type ProvisionByoipCidrOutput struct { + _ struct{} `type:"structure"` + + // Information about the address pool. + ByoipCidr *ByoipCidr `locationName:"byoipCidr" type:"structure"` +} + +// String returns the string representation +func (s ProvisionByoipCidrOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProvisionByoipCidrOutput) GoString() string { + return s.String() +} + +// SetByoipCidr sets the ByoipCidr field's value. +func (s *ProvisionByoipCidrOutput) SetByoipCidr(v *ByoipCidr) *ProvisionByoipCidrOutput { + s.ByoipCidr = v + return s +} + // Reserved. If you need to sustain traffic greater than the documented limits // (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html), // contact us through the Support Center (https://console.aws.amazon.com/support/home?). @@ -55834,7 +61125,7 @@ type ProvisionedBandwidth struct { // Reserved. If you need to sustain traffic greater than the documented limits // (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html), // contact us through the Support Center (https://console.aws.amazon.com/support/home?). - ProvisionTime *time.Time `locationName:"provisionTime" type:"timestamp" timestampFormat:"iso8601"` + ProvisionTime *time.Time `locationName:"provisionTime" type:"timestamp"` // Reserved. If you need to sustain traffic greater than the documented limits // (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html), @@ -55844,7 +61135,7 @@ type ProvisionedBandwidth struct { // Reserved. If you need to sustain traffic greater than the documented limits // (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html), // contact us through the Support Center (https://console.aws.amazon.com/support/home?). - RequestTime *time.Time `locationName:"requestTime" type:"timestamp" timestampFormat:"iso8601"` + RequestTime *time.Time `locationName:"requestTime" type:"timestamp"` // Reserved. If you need to sustain traffic greater than the documented limits // (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html), @@ -55897,6 +61188,117 @@ func (s *ProvisionedBandwidth) SetStatus(v string) *ProvisionedBandwidth { return s } +// Describes an address pool. +type PublicIpv4Pool struct { + _ struct{} `type:"structure"` + + // A description of the address pool. + Description *string `locationName:"description" type:"string"` + + // The address ranges. + PoolAddressRanges []*PublicIpv4PoolRange `locationName:"poolAddressRangeSet" locationNameList:"item" type:"list"` + + // The ID of the IPv4 address pool. + PoolId *string `locationName:"poolId" type:"string"` + + // The total number of addresses. + TotalAddressCount *int64 `locationName:"totalAddressCount" type:"integer"` + + // The total number of available addresses. + TotalAvailableAddressCount *int64 `locationName:"totalAvailableAddressCount" type:"integer"` +} + +// String returns the string representation +func (s PublicIpv4Pool) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PublicIpv4Pool) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *PublicIpv4Pool) SetDescription(v string) *PublicIpv4Pool { + s.Description = &v + return s +} + +// SetPoolAddressRanges sets the PoolAddressRanges field's value. 
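The new BYOIP types above pair with a `ProvisionByoipCidr` client operation that is assumed to ship in the same SDK revision; a hedged sketch follows, using the documentation range 203.0.113.0/24 as a placeholder and omitting the signed `CidrAuthorizationContext`, whose fields are not shown in this hunk:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	conn := ec2.New(session.Must(session.NewSession()))

	// The most specific prefix allowed is /24, per the Cidr documentation above.
	input := &ec2.ProvisionByoipCidrInput{
		Cidr:        aws.String("203.0.113.0/24"), // documentation range used as a placeholder
		Description: aws.String("example BYOIP range"),
		// CidrAuthorizationContext would normally carry the signed authorization
		// document; it is omitted here to keep the sketch to fields shown above.
	}
	if err := input.Validate(); err != nil { // enforces the required Cidr field
		log.Fatal(err)
	}

	out, err := conn.ProvisionByoipCidr(input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // ProvisionByoipCidrOutput.String() pretty-prints the returned ByoipCidr
}
```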
+func (s *PublicIpv4Pool) SetPoolAddressRanges(v []*PublicIpv4PoolRange) *PublicIpv4Pool { + s.PoolAddressRanges = v + return s +} + +// SetPoolId sets the PoolId field's value. +func (s *PublicIpv4Pool) SetPoolId(v string) *PublicIpv4Pool { + s.PoolId = &v + return s +} + +// SetTotalAddressCount sets the TotalAddressCount field's value. +func (s *PublicIpv4Pool) SetTotalAddressCount(v int64) *PublicIpv4Pool { + s.TotalAddressCount = &v + return s +} + +// SetTotalAvailableAddressCount sets the TotalAvailableAddressCount field's value. +func (s *PublicIpv4Pool) SetTotalAvailableAddressCount(v int64) *PublicIpv4Pool { + s.TotalAvailableAddressCount = &v + return s +} + +// Describes an address range of an IPv4 address pool. +type PublicIpv4PoolRange struct { + _ struct{} `type:"structure"` + + // The number of addresses in the range. + AddressCount *int64 `locationName:"addressCount" type:"integer"` + + // The number of available addresses in the range. + AvailableAddressCount *int64 `locationName:"availableAddressCount" type:"integer"` + + // The first IP address in the range. + FirstAddress *string `locationName:"firstAddress" type:"string"` + + // The last IP address in the range. + LastAddress *string `locationName:"lastAddress" type:"string"` +} + +// String returns the string representation +func (s PublicIpv4PoolRange) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PublicIpv4PoolRange) GoString() string { + return s.String() +} + +// SetAddressCount sets the AddressCount field's value. +func (s *PublicIpv4PoolRange) SetAddressCount(v int64) *PublicIpv4PoolRange { + s.AddressCount = &v + return s +} + +// SetAvailableAddressCount sets the AvailableAddressCount field's value. +func (s *PublicIpv4PoolRange) SetAvailableAddressCount(v int64) *PublicIpv4PoolRange { + s.AvailableAddressCount = &v + return s +} + +// SetFirstAddress sets the FirstAddress field's value. +func (s *PublicIpv4PoolRange) SetFirstAddress(v string) *PublicIpv4PoolRange { + s.FirstAddress = &v + return s +} + +// SetLastAddress sets the LastAddress field's value. +func (s *PublicIpv4PoolRange) SetLastAddress(v string) *PublicIpv4PoolRange { + s.LastAddress = &v + return s +} + // Describes the result of the purchase. type Purchase struct { _ struct{} `type:"structure"` @@ -55998,8 +61400,7 @@ type PurchaseHostReservationInput struct { // amounts are specified. At this time, the only supported currency is USD. CurrencyCode *string `type:"string" enum:"CurrencyCodeValues"` - // The ID/s of the Dedicated Host/s that the reservation will be associated - // with. + // The IDs of the Dedicated Hosts with which the reservation will be associated. // // HostIdSet is a required field HostIdSet []*string `locationNameList:"item" type:"list" required:"true"` @@ -56007,10 +61408,9 @@ type PurchaseHostReservationInput struct { // The specified limit is checked against the total upfront cost of the reservation // (calculated as the offering's upfront cost multiplied by the host count). // If the total upfront cost is greater than the specified price limit, the - // request will fail. This is used to ensure that the purchase does not exceed - // the expected upfront cost of the purchase. At this time, the only supported - // currency is USD. For example, to indicate a limit price of USD 100, specify - // 100.00. + // request fails. This is used to ensure that the purchase does not exceed the + // expected upfront cost of the purchase. 
At this time, the only supported currency + // is USD. For example, to indicate a limit price of USD 100, specify 100.00. LimitPrice *string `type:"string"` // The ID of the offering. @@ -56080,7 +61480,7 @@ type PurchaseHostReservationOutput struct { // Unique, case-sensitive identifier you provide to ensure idempotency of the // request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html) - // in the Amazon Elastic Compute Cloud User Guide + // in the Amazon Elastic Compute Cloud User Guide. ClientToken *string `locationName:"clientToken" type:"string"` // The currency in which the totalUpfrontPrice and totalHourlyPrice amounts @@ -56093,8 +61493,7 @@ type PurchaseHostReservationOutput struct { // The total hourly price of the reservation calculated per hour. TotalHourlyPrice *string `locationName:"totalHourlyPrice" type:"string"` - // The total amount that will be charged to your account when you purchase the - // reservation. + // The total amount charged to your account when you purchase the reservation. TotalUpfrontPrice *string `locationName:"totalUpfrontPrice" type:"string"` } @@ -56803,7 +62202,6 @@ func (s *RejectVpcEndpointConnectionsOutput) SetUnsuccessful(v []*UnsuccessfulIt return s } -// Contains the parameters for RejectVpcPeeringConnection. type RejectVpcPeeringConnectionInput struct { _ struct{} `type:"structure"` @@ -56854,7 +62252,6 @@ func (s *RejectVpcPeeringConnectionInput) SetVpcPeeringConnectionId(v string) *R return s } -// Contains the output of RejectVpcPeeringConnection. type RejectVpcPeeringConnectionOutput struct { _ struct{} `type:"structure"` @@ -56878,7 +62275,6 @@ func (s *RejectVpcPeeringConnectionOutput) SetReturn(v bool) *RejectVpcPeeringCo return s } -// Contains the parameters for ReleaseAddress. type ReleaseAddressInput struct { _ struct{} `type:"structure"` @@ -56937,11 +62333,10 @@ func (s ReleaseAddressOutput) GoString() string { return s.String() } -// Contains the parameters for ReleaseHosts. type ReleaseHostsInput struct { _ struct{} `type:"structure"` - // The IDs of the Dedicated Hosts you want to release. + // The IDs of the Dedicated Hosts to release. // // HostIds is a required field HostIds []*string `locationName:"hostId" locationNameList:"item" type:"list" required:"true"` @@ -56976,7 +62371,6 @@ func (s *ReleaseHostsInput) SetHostIds(v []*string) *ReleaseHostsInput { return s } -// Contains the output of ReleaseHosts. type ReleaseHostsOutput struct { _ struct{} `type:"structure"` @@ -57085,7 +62479,6 @@ func (s *ReplaceIamInstanceProfileAssociationOutput) SetIamInstanceProfileAssoci return s } -// Contains the parameters for ReplaceNetworkAclAssociation. type ReplaceNetworkAclAssociationInput struct { _ struct{} `type:"structure"` @@ -57151,7 +62544,6 @@ func (s *ReplaceNetworkAclAssociationInput) SetNetworkAclId(v string) *ReplaceNe return s } -// Contains the output of ReplaceNetworkAclAssociation. type ReplaceNetworkAclAssociationOutput struct { _ struct{} `type:"structure"` @@ -57175,7 +62567,6 @@ func (s *ReplaceNetworkAclAssociationOutput) SetNewAssociationId(v string) *Repl return s } -// Contains the parameters for ReplaceNetworkAclEntry. type ReplaceNetworkAclEntryInput struct { _ struct{} `type:"structure"` @@ -57195,8 +62586,8 @@ type ReplaceNetworkAclEntryInput struct { // Egress is a required field Egress *bool `locationName:"egress" type:"boolean" required:"true"` - // ICMP protocol: The ICMP or ICMPv6 type and code. 
Required if specifying the - // ICMP (1) protocol, or protocol 58 (ICMPv6) with an IPv6 CIDR block. + // ICMP protocol: The ICMP or ICMPv6 type and code. Required if specifying protocol + // 1 (ICMP) or protocol 58 (ICMPv6) with an IPv6 CIDR block. IcmpTypeCode *IcmpTypeCode `locationName:"Icmp" type:"structure"` // The IPv6 network range to allow or deny, in CIDR notation (for example 2001:bd8:1234:1a00::/64). @@ -57208,16 +62599,16 @@ type ReplaceNetworkAclEntryInput struct { NetworkAclId *string `locationName:"networkAclId" type:"string" required:"true"` // TCP or UDP protocols: The range of ports the rule applies to. Required if - // specifying TCP (6) or UDP (17) for the protocol. + // specifying protocol 6 (TCP) or 17 (UDP). PortRange *PortRange `locationName:"portRange" type:"structure"` - // The IP protocol. You can specify all or -1 to mean all protocols. If you - // specify all, -1, or a protocol number other than tcp, udp, or icmp, traffic - // on all ports is allowed, regardless of any ports or ICMP types or codes you - // specify. If you specify protocol 58 (ICMPv6) and specify an IPv4 CIDR block, - // traffic for all ICMP types and codes allowed, regardless of any that you - // specify. If you specify protocol 58 (ICMPv6) and specify an IPv6 CIDR block, - // you must specify an ICMP type and code. + // The protocol number. A value of "-1" means all protocols. If you specify + // "-1" or a protocol number other than "6" (TCP), "17" (UDP), or "1" (ICMP), + // traffic on all ports is allowed, regardless of any ports or ICMP types or + // codes that you specify. If you specify protocol "58" (ICMPv6) and specify + // an IPv4 CIDR block, traffic for all ICMP types and codes allowed, regardless + // of any that you specify. If you specify protocol "58" (ICMPv6) and specify + // an IPv6 CIDR block, you must specify an ICMP type and code. // // Protocol is a required field Protocol *string `locationName:"protocol" type:"string" required:"true"` @@ -57342,16 +62733,15 @@ func (s ReplaceNetworkAclEntryOutput) GoString() string { return s.String() } -// Contains the parameters for ReplaceRoute. type ReplaceRouteInput struct { _ struct{} `type:"structure"` - // The IPv4 CIDR address block used for the destination match. The value you - // provide must match the CIDR of an existing route in the table. + // The IPv4 CIDR address block used for the destination match. The value that + // you provide must match the CIDR of an existing route in the table. DestinationCidrBlock *string `locationName:"destinationCidrBlock" type:"string"` - // The IPv6 CIDR address block used for the destination match. The value you - // provide must match the CIDR of an existing route in the table. + // The IPv6 CIDR address block used for the destination match. The value that + // you provide must match the CIDR of an existing route in the table. DestinationIpv6CidrBlock *string `locationName:"destinationIpv6CidrBlock" type:"string"` // Checks whether you have the required permissions for the action, without @@ -57360,10 +62750,10 @@ type ReplaceRouteInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // [IPv6 traffic only] The ID of an egress-only Internet gateway. + // [IPv6 traffic only] The ID of an egress-only internet gateway. EgressOnlyInternetGatewayId *string `locationName:"egressOnlyInternetGatewayId" type:"string"` - // The ID of an Internet gateway or virtual private gateway. + // The ID of an internet gateway or virtual private gateway. 
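Since the `ReplaceNetworkAclEntry` documentation above now speaks in protocol numbers, a short illustrative sketch of an inbound TCP 443 allow rule; `RuleAction`, `RuleNumber`, and `CidrBlock` are assumed from the rest of the struct (they are not visible in this hunk), and the ACL ID is a placeholder:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	conn := ec2.New(session.Must(session.NewSession()))

	// Allow inbound TCP 443 from anywhere. Protocol is the protocol *number*
	// as a string ("6" = TCP), and a PortRange is required for TCP/UDP, as the
	// rewritten documentation above spells out.
	_, err := conn.ReplaceNetworkAclEntry(&ec2.ReplaceNetworkAclEntryInput{
		NetworkAclId: aws.String("acl-0123456789abcdef0"), // placeholder
		Egress:       aws.Bool(false),
		Protocol:     aws.String("6"),
		PortRange:    (&ec2.PortRange{}).SetFrom(443).SetTo(443),
		CidrBlock:    aws.String("0.0.0.0/0"),
		RuleAction:   aws.String("allow"), // assumed from the wider struct
		RuleNumber:   aws.Int64(100),      // assumed from the wider struct
	})
	if err != nil {
		log.Fatal(err)
	}
}
```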
GatewayId *string `locationName:"gatewayId" type:"string"` // The ID of a NAT instance in your VPC. @@ -57481,7 +62871,6 @@ func (s ReplaceRouteOutput) GoString() string { return s.String() } -// Contains the parameters for ReplaceRouteTableAssociation. type ReplaceRouteTableAssociationInput struct { _ struct{} `type:"structure"` @@ -57546,7 +62935,6 @@ func (s *ReplaceRouteTableAssociationInput) SetRouteTableId(v string) *ReplaceRo return s } -// Contains the output of ReplaceRouteTableAssociation. type ReplaceRouteTableAssociationOutput struct { _ struct{} `type:"structure"` @@ -57584,7 +62972,7 @@ type ReportInstanceStatusInput struct { DryRun *bool `locationName:"dryRun" type:"boolean"` // The time at which the reported instance health state ended. - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `locationName:"endTime" type:"timestamp"` // One or more instances. // @@ -57618,7 +63006,7 @@ type ReportInstanceStatusInput struct { ReasonCodes []*string `locationName:"reasonCode" locationNameList:"item" type:"list" required:"true"` // The time at which the reported instance health state began. - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` // The status of all instances listed. // @@ -57724,7 +63112,16 @@ type RequestLaunchTemplateData struct { // cannot be changed using this action. BlockDeviceMappings []*LaunchTemplateBlockDeviceMappingRequest `locationName:"BlockDeviceMapping" locationNameList:"BlockDeviceMapping" type:"list"` - // The credit option for CPU usage of the instance. Valid for T2 instances only. + // Information about the Capacity Reservation targeting option. + CapacityReservationSpecification *LaunchTemplateCapacityReservationSpecificationRequest `type:"structure"` + + // The CPU options for the instance. For more information, see Optimizing CPU + // Options (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html) + // in the Amazon Elastic Compute Cloud User Guide. + CpuOptions *LaunchTemplateCpuOptionsRequest `type:"structure"` + + // The credit option for CPU usage of the instance. Valid for T2 or T3 instances + // only. CreditSpecification *CreditSpecificationRequest `type:"structure"` // If set to true, you can't terminate the instance using the Amazon EC2 console, @@ -57800,9 +63197,10 @@ type RequestLaunchTemplateData struct { // group ID and security name in the same request. SecurityGroups []*string `locationName:"SecurityGroup" locationNameList:"SecurityGroup" type:"list"` - // The tags to apply to the resources during launch. You can tag instances and - // volumes. The specified tags are applied to all instances or volumes that - // are created during launch. + // The tags to apply to the resources during launch. You can only tag instances + // and volumes on launch. The specified tags are applied to all instances or + // volumes that are created during launch. To tag a resource after it has been + // created, see CreateTags. TagSpecifications []*LaunchTemplateTagSpecificationRequest `locationName:"TagSpecification" locationNameList:"LaunchTemplateTagSpecificationRequest" type:"list"` // The Base64-encoded user data to make available to the instance. 
For more @@ -57840,16 +63238,6 @@ func (s *RequestLaunchTemplateData) Validate() error { } } } - if s.NetworkInterfaces != nil { - for i, v := range s.NetworkInterfaces { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "NetworkInterfaces", i), err.(request.ErrInvalidParams)) - } - } - } if invalidParams.Len() > 0 { return invalidParams @@ -57863,6 +63251,18 @@ func (s *RequestLaunchTemplateData) SetBlockDeviceMappings(v []*LaunchTemplateBl return s } +// SetCapacityReservationSpecification sets the CapacityReservationSpecification field's value. +func (s *RequestLaunchTemplateData) SetCapacityReservationSpecification(v *LaunchTemplateCapacityReservationSpecificationRequest) *RequestLaunchTemplateData { + s.CapacityReservationSpecification = v + return s +} + +// SetCpuOptions sets the CpuOptions field's value. +func (s *RequestLaunchTemplateData) SetCpuOptions(v *LaunchTemplateCpuOptionsRequest) *RequestLaunchTemplateData { + s.CpuOptions = v + return s +} + // SetCreditSpecification sets the CreditSpecification field's value. func (s *RequestLaunchTemplateData) SetCreditSpecification(v *CreditSpecificationRequest) *RequestLaunchTemplateData { s.CreditSpecification = v @@ -58093,13 +63493,13 @@ type RequestSpotInstancesInput struct { // for termination and provides a Spot Instance termination notice, which gives // the instance a two-minute warning before it terminates. // - // Note that you can't specify an Availability Zone group or a launch group - // if you specify a duration. + // You can't specify an Availability Zone group or a launch group if you specify + // a duration. BlockDurationMinutes *int64 `locationName:"blockDurationMinutes" type:"integer"` // Unique, case-sensitive identifier that you provide to ensure the idempotency // of the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html) - // in the Amazon Elastic Compute Cloud User Guide. + // in the Amazon EC2 User Guide for Linux Instances. ClientToken *string `locationName:"clientToken" type:"string"` // Checks whether you have the required permissions for the action, without @@ -58139,14 +63539,14 @@ type RequestSpotInstancesInput struct { // launch, the request expires, or the request is canceled. If the request is // persistent, the request becomes active at this date and time and remains // active until it expires or is canceled. - ValidFrom *time.Time `locationName:"validFrom" type:"timestamp" timestampFormat:"iso8601"` + ValidFrom *time.Time `locationName:"validFrom" type:"timestamp"` // The end date of the request. If this is a one-time request, the request remains // active until all instances launch, the request is canceled, or this date // is reached. If the request is persistent, it remains active until it is canceled // or this date is reached. The default end date is 7 days from the current // date. 
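Several timestamp fields in this file lose their explicit `timestampFormat:"iso8601"` tag and become bare `type:"timestamp"` fields. A hedged sketch of passing `ValidUntil` as a plain `time.Time` through the generated setters; the launch specification, price, and type values are placeholders, and those setters are assumed from parts of the struct outside this hunk:

```go
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	conn := ec2.New(session.Must(session.NewSession()))

	// A one-time request that expires in an hour; ValidUntil is marshalled for
	// us now that the field is a bare `type:"timestamp"`.
	input := (&ec2.RequestSpotInstancesInput{}).
		SetSpotPrice("0.05"). // placeholder maximum price
		SetInstanceCount(1).
		SetType("one-time").
		SetValidUntil(time.Now().Add(time.Hour)).
		SetLaunchSpecification((&ec2.RequestSpotLaunchSpecification{}).
			SetImageId("ami-0123456789abcdef0"). // placeholder
			SetInstanceType("t3.micro"))

	if _, err := conn.RequestSpotInstances(input); err != nil {
		log.Fatal(err)
	}
}
```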
- ValidUntil *time.Time `locationName:"validUntil" type:"timestamp" timestampFormat:"iso8601"` + ValidUntil *time.Time `locationName:"validUntil" type:"timestamp"` } // String returns the string representation @@ -58355,16 +63755,6 @@ func (s *RequestSpotLaunchSpecification) Validate() error { invalidParams.AddNested("Monitoring", err.(request.ErrInvalidParams)) } } - if s.NetworkInterfaces != nil { - for i, v := range s.NetworkInterfaces { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "NetworkInterfaces", i), err.(request.ErrInvalidParams)) - } - } - } if invalidParams.Len() > 0 { return invalidParams @@ -58655,7 +64045,7 @@ type ReservedInstances struct { Duration *int64 `locationName:"duration" type:"long"` // The time when the Reserved Instance expires. - End *time.Time `locationName:"end" type:"timestamp" timestampFormat:"iso8601"` + End *time.Time `locationName:"end" type:"timestamp"` // The purchase price of the Reserved Instance. FixedPrice *float64 `locationName:"fixedPrice" type:"float"` @@ -58688,7 +64078,7 @@ type ReservedInstances struct { Scope *string `locationName:"scope" type:"string" enum:"scope"` // The date and time the Reserved Instance started. - Start *time.Time `locationName:"start" type:"timestamp" timestampFormat:"iso8601"` + Start *time.Time `locationName:"start" type:"timestamp"` // The state of the Reserved Instance purchase. State *string `locationName:"state" type:"string" enum:"ReservedInstanceState"` @@ -58913,7 +64303,7 @@ type ReservedInstancesListing struct { ClientToken *string `locationName:"clientToken" type:"string"` // The time the listing was created. - CreateDate *time.Time `locationName:"createDate" type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `locationName:"createDate" type:"timestamp"` // The number of instances in this state. InstanceCounts []*InstanceCount `locationName:"instanceCounts" locationNameList:"item" type:"list"` @@ -58938,7 +64328,7 @@ type ReservedInstancesListing struct { Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` // The last modified timestamp of the listing. - UpdateDate *time.Time `locationName:"updateDate" type:"timestamp" timestampFormat:"iso8601"` + UpdateDate *time.Time `locationName:"updateDate" type:"timestamp"` } // String returns the string representation @@ -59020,10 +64410,10 @@ type ReservedInstancesModification struct { ClientToken *string `locationName:"clientToken" type:"string"` // The time when the modification request was created. - CreateDate *time.Time `locationName:"createDate" type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `locationName:"createDate" type:"timestamp"` // The time for the modification to become effective. - EffectiveDate *time.Time `locationName:"effectiveDate" type:"timestamp" timestampFormat:"iso8601"` + EffectiveDate *time.Time `locationName:"effectiveDate" type:"timestamp"` // Contains target configurations along with their corresponding new Reserved // Instance IDs. @@ -59042,7 +64432,7 @@ type ReservedInstancesModification struct { StatusMessage *string `locationName:"statusMessage" type:"string"` // The time when the modification request was last updated. 
- UpdateDate *time.Time `locationName:"updateDate" type:"timestamp" timestampFormat:"iso8601"` + UpdateDate *time.Time `locationName:"updateDate" type:"timestamp"` } // String returns the string representation @@ -59741,6 +65131,14 @@ type ResponseLaunchTemplateData struct { // The block device mappings. BlockDeviceMappings []*LaunchTemplateBlockDeviceMapping `locationName:"blockDeviceMappingSet" locationNameList:"item" type:"list"` + // Information about the Capacity Reservation targeting option. + CapacityReservationSpecification *LaunchTemplateCapacityReservationSpecificationResponse `locationName:"capacityReservationSpecification" type:"structure"` + + // The CPU options for the instance. For more information, see Optimizing CPU + // Options (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html) + // in the Amazon Elastic Compute Cloud User Guide. + CpuOptions *LaunchTemplateCpuOptions `locationName:"cpuOptions" type:"structure"` + // The credit option for CPU usage of the instance. CreditSpecification *CreditSpecification `locationName:"creditSpecification" type:"structure"` @@ -59817,6 +65215,18 @@ func (s *ResponseLaunchTemplateData) SetBlockDeviceMappings(v []*LaunchTemplateB return s } +// SetCapacityReservationSpecification sets the CapacityReservationSpecification field's value. +func (s *ResponseLaunchTemplateData) SetCapacityReservationSpecification(v *LaunchTemplateCapacityReservationSpecificationResponse) *ResponseLaunchTemplateData { + s.CapacityReservationSpecification = v + return s +} + +// SetCpuOptions sets the CpuOptions field's value. +func (s *ResponseLaunchTemplateData) SetCpuOptions(v *LaunchTemplateCpuOptions) *ResponseLaunchTemplateData { + s.CpuOptions = v + return s +} + // SetCreditSpecification sets the CreditSpecification field's value. func (s *ResponseLaunchTemplateData) SetCreditSpecification(v *CreditSpecification) *ResponseLaunchTemplateData { s.CreditSpecification = v @@ -59931,7 +65341,6 @@ func (s *ResponseLaunchTemplateData) SetUserData(v string) *ResponseLaunchTempla return s } -// Contains the parameters for RestoreAddressToClassic. type RestoreAddressToClassicInput struct { _ struct{} `type:"structure"` @@ -59982,7 +65391,6 @@ func (s *RestoreAddressToClassicInput) SetPublicIp(v string) *RestoreAddressToCl return s } -// Contains the output of RestoreAddressToClassic. type RestoreAddressToClassicOutput struct { _ struct{} `type:"structure"` @@ -60015,7 +65423,6 @@ func (s *RestoreAddressToClassicOutput) SetStatus(v string) *RestoreAddressToCla return s } -// Contains the parameters for RevokeSecurityGroupEgress. type RevokeSecurityGroupEgressInput struct { _ struct{} `type:"structure"` @@ -60147,7 +65554,6 @@ func (s RevokeSecurityGroupEgressOutput) GoString() string { return s.String() } -// Contains the parameters for RevokeSecurityGroupIngress. type RevokeSecurityGroupIngressInput struct { _ struct{} `type:"structure"` @@ -60300,7 +65706,7 @@ type Route struct { // The prefix of the AWS service. DestinationPrefixListId *string `locationName:"destinationPrefixListId" type:"string"` - // The ID of the egress-only Internet gateway. + // The ID of the egress-only internet gateway. EgressOnlyInternetGatewayId *string `locationName:"egressOnlyInternetGatewayId" type:"string"` // The ID of a gateway attached to your VPC. @@ -60552,18 +65958,26 @@ type RunInstancesInput struct { // its encryption status is used for the volume encryption status. 
BlockDeviceMappings []*BlockDeviceMapping `locationName:"BlockDeviceMapping" locationNameList:"BlockDeviceMapping" type:"list"` + // Information about the Capacity Reservation targeting option. + CapacityReservationSpecification *CapacityReservationSpecification `type:"structure"` + // Unique, case-sensitive identifier you provide to ensure the idempotency of // the request. For more information, see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). // // Constraints: Maximum 64 ASCII characters ClientToken *string `locationName:"clientToken" type:"string"` + // The CPU options for the instance. For more information, see Optimizing CPU + // Options (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html) + // in the Amazon Elastic Compute Cloud User Guide. + CpuOptions *CpuOptionsRequest `type:"structure"` + // The credit option for CPU usage of the instance. Valid values are standard // and unlimited. To change this attribute after launch, use ModifyInstanceCreditSpecification. - // For more information, see T2 Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) + // For more information, see Burstable Performance Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html) // in the Amazon Elastic Compute Cloud User Guide. // - // Default: standard + // Default: standard (T2 instances) or unlimited (T3 instances) CreditSpecification *CreditSpecificationRequest `type:"structure"` // If you set this parameter to true, you can't terminate the instance using @@ -60608,6 +66022,9 @@ type RunInstancesInput struct { InstanceInitiatedShutdownBehavior *string `locationName:"instanceInitiatedShutdownBehavior" type:"string" enum:"ShutdownBehavior"` // The market (purchasing) option for the instances. + // + // For RunInstances, persistent Spot Instance requests are only supported when + // InstanceInterruptionBehavior is set to either hibernate or stop. InstanceMarketOptions *InstanceMarketOptionsRequest `type:"structure"` // The instance type. For more information, see Instance Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) @@ -60646,6 +66063,7 @@ type RunInstancesInput struct { // The launch template to use to launch the instances. Any parameters that you // specify in RunInstances override the same parameters in the launch template. + // You can specify either the name or ID of a launch template, but not both. LaunchTemplate *LaunchTemplateSpecification `type:"structure"` // The maximum number of instances to launch. If you specify more instances @@ -60711,9 +66129,10 @@ type RunInstancesInput struct { // [EC2-VPC] The ID of the subnet to launch the instance into. SubnetId *string `type:"string"` - // The tags to apply to the resources during launch. You can tag instances and - // volumes. The specified tags are applied to all instances or volumes that - // are created during launch. + // The tags to apply to the resources during launch. You can only tag instances + // and volumes on launch. The specified tags are applied to all instances or + // volumes that are created during launch. To tag a resource after it has been + // created, see CreateTags. TagSpecifications []*TagSpecification `locationName:"TagSpecification" locationNameList:"item" type:"list"` // The user data to make available to the instance. 
For more information, see @@ -60764,16 +66183,6 @@ func (s *RunInstancesInput) Validate() error { invalidParams.AddNested("Monitoring", err.(request.ErrInvalidParams)) } } - if s.NetworkInterfaces != nil { - for i, v := range s.NetworkInterfaces { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "NetworkInterfaces", i), err.(request.ErrInvalidParams)) - } - } - } if invalidParams.Len() > 0 { return invalidParams @@ -60793,12 +66202,24 @@ func (s *RunInstancesInput) SetBlockDeviceMappings(v []*BlockDeviceMapping) *Run return s } +// SetCapacityReservationSpecification sets the CapacityReservationSpecification field's value. +func (s *RunInstancesInput) SetCapacityReservationSpecification(v *CapacityReservationSpecification) *RunInstancesInput { + s.CapacityReservationSpecification = v + return s +} + // SetClientToken sets the ClientToken field's value. func (s *RunInstancesInput) SetClientToken(v string) *RunInstancesInput { s.ClientToken = &v return s } +// SetCpuOptions sets the CpuOptions field's value. +func (s *RunInstancesInput) SetCpuOptions(v *CpuOptionsRequest) *RunInstancesInput { + s.CpuOptions = v + return s +} + // SetCreditSpecification sets the CreditSpecification field's value. func (s *RunInstancesInput) SetCreditSpecification(v *CreditSpecificationRequest) *RunInstancesInput { s.CreditSpecification = v @@ -61193,7 +66614,7 @@ type ScheduledInstance struct { AvailabilityZone *string `locationName:"availabilityZone" type:"string"` // The date when the Scheduled Instance was purchased. - CreateDate *time.Time `locationName:"createDate" type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `locationName:"createDate" type:"timestamp"` // The hourly price for a single instance. HourlyPrice *string `locationName:"hourlyPrice" type:"string"` @@ -61208,13 +66629,13 @@ type ScheduledInstance struct { NetworkPlatform *string `locationName:"networkPlatform" type:"string"` // The time for the next schedule to start. - NextSlotStartTime *time.Time `locationName:"nextSlotStartTime" type:"timestamp" timestampFormat:"iso8601"` + NextSlotStartTime *time.Time `locationName:"nextSlotStartTime" type:"timestamp"` // The platform (Linux/UNIX or Windows). Platform *string `locationName:"platform" type:"string"` // The time that the previous schedule ended or will end. - PreviousSlotEndTime *time.Time `locationName:"previousSlotEndTime" type:"timestamp" timestampFormat:"iso8601"` + PreviousSlotEndTime *time.Time `locationName:"previousSlotEndTime" type:"timestamp"` // The schedule recurrence. Recurrence *ScheduledInstanceRecurrence `locationName:"recurrence" type:"structure"` @@ -61226,10 +66647,10 @@ type ScheduledInstance struct { SlotDurationInHours *int64 `locationName:"slotDurationInHours" type:"integer"` // The end date for the Scheduled Instance. - TermEndDate *time.Time `locationName:"termEndDate" type:"timestamp" timestampFormat:"iso8601"` + TermEndDate *time.Time `locationName:"termEndDate" type:"timestamp"` // The start date for the Scheduled Instance. - TermStartDate *time.Time `locationName:"termStartDate" type:"timestamp" timestampFormat:"iso8601"` + TermStartDate *time.Time `locationName:"termStartDate" type:"timestamp"` // The total number of hours for a single instance for the entire term. 
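Tying together the new `CpuOptions` and `CapacityReservationSpecification` members of `RunInstancesInput`, a hedged sketch of a single-instance launch; `CoreCount`/`ThreadsPerCore` and the `open` reservation preference come from the wider SDK rather than this hunk, and the AMI ID is a placeholder:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	conn := ec2.New(session.Must(session.NewSession()))

	// Launch one instance, pinning the new CPU options and targeting any open
	// Capacity Reservation. Only the top-level fields appear in this hunk; the
	// nested field names are assumptions drawn from the rest of the SDK.
	_, err := conn.RunInstances(&ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder
		InstanceType: aws.String("c5.xlarge"),
		MinCount:     aws.Int64(1),
		MaxCount:     aws.Int64(1),
		CpuOptions: &ec2.CpuOptionsRequest{
			CoreCount:      aws.Int64(2),
			ThreadsPerCore: aws.Int64(1), // disable hyper-threading
		},
		CapacityReservationSpecification: &ec2.CapacityReservationSpecification{
			CapacityReservationPreference: aws.String("open"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```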
TotalScheduledInstanceHours *int64 `locationName:"totalScheduledInstanceHours" type:"integer"` @@ -61346,7 +66767,7 @@ type ScheduledInstanceAvailability struct { AvailableInstanceCount *int64 `locationName:"availableInstanceCount" type:"integer"` // The time period for the first schedule to start. - FirstSlotStartTime *time.Time `locationName:"firstSlotStartTime" type:"timestamp" timestampFormat:"iso8601"` + FirstSlotStartTime *time.Time `locationName:"firstSlotStartTime" type:"timestamp"` // The hourly price for a single instance. HourlyPrice *string `locationName:"hourlyPrice" type:"string"` @@ -62316,14 +67737,10 @@ type SecurityGroupReference struct { _ struct{} `type:"structure"` // The ID of your security group. - // - // GroupId is a required field - GroupId *string `locationName:"groupId" type:"string" required:"true"` + GroupId *string `locationName:"groupId" type:"string"` // The ID of the VPC with the referencing security group. - // - // ReferencingVpcId is a required field - ReferencingVpcId *string `locationName:"referencingVpcId" type:"string" required:"true"` + ReferencingVpcId *string `locationName:"referencingVpcId" type:"string"` // The ID of the VPC peering connection. VpcPeeringConnectionId *string `locationName:"vpcPeeringConnectionId" type:"string"` @@ -62574,14 +67991,14 @@ type SlotDateTimeRangeRequest struct { // The earliest date and time, in UTC, for the Scheduled Instance to start. // // EarliestTime is a required field - EarliestTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + EarliestTime *time.Time `type:"timestamp" required:"true"` // The latest date and time, in UTC, for the Scheduled Instance to start. This // value must be later than or equal to the earliest date and at most three // months in the future. // // LatestTime is a required field - LatestTime *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + LatestTime *time.Time `type:"timestamp" required:"true"` } // String returns the string representation @@ -62627,10 +68044,10 @@ type SlotStartTimeRangeRequest struct { _ struct{} `type:"structure"` // The earliest date and time, in UTC, for the Scheduled Instance to start. - EarliestTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EarliestTime *time.Time `type:"timestamp"` // The latest date and time, in UTC, for the Scheduled Instance to start. - LatestTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LatestTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -62694,7 +68111,7 @@ type Snapshot struct { SnapshotId *string `locationName:"snapshotId" type:"string"` // The time stamp when the snapshot was initiated. - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `locationName:"startTime" type:"timestamp"` // The snapshot state. State *string `locationName:"status" type:"string" enum:"SnapshotState"` @@ -62981,9 +68398,16 @@ type SnapshotTaskDetail struct { // The size of the disk in the snapshot, in GiB. DiskImageSize *float64 `locationName:"diskImageSize" type:"double"` + // Indicates whether the snapshot is encrypted. + Encrypted *bool `locationName:"encrypted" type:"boolean"` + // The format of the disk image from which the snapshot is created. Format *string `locationName:"format" type:"string"` + // The identifier for the AWS Key Management Service (AWS KMS) customer master + // key (CMK) that was used to create the encrypted snapshot. 
+ KmsKeyId *string `locationName:"kmsKeyId" type:"string"` + // The percentage of completion for the import snapshot task. Progress *string `locationName:"progress" type:"string"` @@ -63025,12 +68449,24 @@ func (s *SnapshotTaskDetail) SetDiskImageSize(v float64) *SnapshotTaskDetail { return s } +// SetEncrypted sets the Encrypted field's value. +func (s *SnapshotTaskDetail) SetEncrypted(v bool) *SnapshotTaskDetail { + s.Encrypted = &v + return s +} + // SetFormat sets the Format field's value. func (s *SnapshotTaskDetail) SetFormat(v string) *SnapshotTaskDetail { s.Format = &v return s } +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *SnapshotTaskDetail) SetKmsKeyId(v string) *SnapshotTaskDetail { + s.KmsKeyId = &v + return s +} + // SetProgress sets the Progress field's value. func (s *SnapshotTaskDetail) SetProgress(v string) *SnapshotTaskDetail { s.Progress = &v @@ -63218,26 +68654,6 @@ func (s SpotFleetLaunchSpecification) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *SpotFleetLaunchSpecification) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SpotFleetLaunchSpecification"} - if s.NetworkInterfaces != nil { - for i, v := range s.NetworkInterfaces { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "NetworkInterfaces", i), err.(request.ErrInvalidParams)) - } - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - // SetAddressingType sets the AddressingType field's value. func (s *SpotFleetLaunchSpecification) SetAddressingType(v string) *SpotFleetLaunchSpecification { s.AddressingType = &v @@ -63386,7 +68802,7 @@ type SpotFleetRequestConfig struct { // The creation date and time of the request. // // CreateTime is a required field - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp" required:"true"` // The configuration of the Spot Fleet request. // @@ -63452,8 +68868,8 @@ type SpotFleetRequestConfigData struct { // by the Spot Fleet request. The default is lowestPrice. AllocationStrategy *string `locationName:"allocationStrategy" type:"string" enum:"AllocationStrategy"` - // A unique, case-sensitive identifier you provide to ensure idempotency of - // your listings. This helps avoid duplicate listings. For more information, + // A unique, case-sensitive identifier that you provide to ensure the idempotency + // of your listings. This helps to avoid duplicate listings. For more information, // see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). ClientToken *string `locationName:"clientToken" type:"string"` @@ -63463,7 +68879,7 @@ type SpotFleetRequestConfigData struct { ExcessCapacityTerminationPolicy *string `locationName:"excessCapacityTerminationPolicy" type:"string" enum:"ExcessCapacityTerminationPolicy"` // The number of units fulfilled by this request compared to the set target - // capacity. + // capacity. You cannot set this value. FulfilledCapacity *float64 `locationName:"fulfilledCapacity" type:"double"` // Grants the Spot Fleet permission to terminate Spot Instances on your behalf @@ -63476,6 +68892,12 @@ type SpotFleetRequestConfigData struct { // The behavior when a Spot Instance is interrupted. The default is terminate. 
InstanceInterruptionBehavior *string `locationName:"instanceInterruptionBehavior" type:"string" enum:"InstanceInterruptionBehavior"` + // The number of Spot pools across which to allocate your target Spot capacity. + // Valid only when Spot AllocationStrategy is set to lowest-price. Spot Fleet + // selects the cheapest Spot pools and evenly allocates your target Spot capacity + // across the number of Spot pools that you specify. + InstancePoolsToUseCount *int64 `locationName:"instancePoolsToUseCount" type:"integer"` + // The launch specifications for the Spot Fleet request. LaunchSpecifications []*SpotFleetLaunchSpecification `locationName:"launchSpecifications" locationNameList:"item" type:"list"` @@ -63491,6 +68913,25 @@ type SpotFleetRequestConfigData struct { // HS1, M1, M2, M3, and T1. LoadBalancersConfig *LoadBalancersConfig `locationName:"loadBalancersConfig" type:"structure"` + // The order of the launch template overrides to use in fulfilling On-Demand + // capacity. If you specify lowestPrice, Spot Fleet uses price to determine + // the order, launching the lowest price first. If you specify prioritized, + // Spot Fleet uses the priority that you assign to each Spot Fleet launch template + // override, launching the highest priority first. If you do not specify a value, + // Spot Fleet defaults to lowestPrice. + OnDemandAllocationStrategy *string `locationName:"onDemandAllocationStrategy" type:"string" enum:"OnDemandAllocationStrategy"` + + // The number of On-Demand units fulfilled by this request compared to the set + // target On-Demand capacity. + OnDemandFulfilledCapacity *float64 `locationName:"onDemandFulfilledCapacity" type:"double"` + + // The number of On-Demand units to request. You can choose to set the target + // capacity in terms of instances or a performance characteristic that is important + // to your application workload, such as vCPUs, memory, or I/O. If the request + // type is maintain, you can specify a target capacity of 0 and add capacity + // later. + OnDemandTargetCapacity *int64 `locationName:"onDemandTargetCapacity" type:"integer"` + // Indicates whether Spot Fleet should replace unhealthy instances. ReplaceUnhealthyInstances *bool `locationName:"replaceUnhealthyInstances" type:"boolean"` @@ -63511,24 +68952,23 @@ type SpotFleetRequestConfigData struct { // Fleet request expires. TerminateInstancesWithExpiration *bool `locationName:"terminateInstancesWithExpiration" type:"boolean"` - // The type of request. Indicates whether the fleet will only request the target - // capacity or also attempt to maintain it. When you request a certain target - // capacity, the fleet will only place the required requests. It will not attempt - // to replenish Spot Instances if capacity is diminished, nor will it submit - // requests in alternative Spot pools if capacity is not available. When you - // want to maintain a certain target capacity, fleet will place the required - // requests to meet this target capacity. It will also automatically replenish - // any interrupted instances. Default: maintain. + // The type of request. Indicates whether the Spot Fleet only requests the target + // capacity or also attempts to maintain it. When this value is request, the + // Spot Fleet only places the required requests. It does not attempt to replenish + // Spot Instances if capacity is diminished, nor does it submit requests in + // alternative Spot pools if capacity is not available. 
To maintain a certain + // target capacity, the Spot Fleet places the required requests to meet capacity + // and automatically replenishes any interrupted instances. Default: maintain. Type *string `locationName:"type" type:"string" enum:"FleetType"` // The start date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). // The default is to start fulfilling the request immediately. - ValidFrom *time.Time `locationName:"validFrom" type:"timestamp" timestampFormat:"iso8601"` + ValidFrom *time.Time `locationName:"validFrom" type:"timestamp"` // The end date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). // At this point, no new Spot Instance requests are placed or able to fulfill // the request. The default end date is 7 days from the current date. - ValidUntil *time.Time `locationName:"validUntil" type:"timestamp" timestampFormat:"iso8601"` + ValidUntil *time.Time `locationName:"validUntil" type:"timestamp"` } // String returns the string representation @@ -63550,16 +68990,6 @@ func (s *SpotFleetRequestConfigData) Validate() error { if s.TargetCapacity == nil { invalidParams.Add(request.NewErrParamRequired("TargetCapacity")) } - if s.LaunchSpecifications != nil { - for i, v := range s.LaunchSpecifications { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "LaunchSpecifications", i), err.(request.ErrInvalidParams)) - } - } - } if s.LaunchTemplateConfigs != nil { for i, v := range s.LaunchTemplateConfigs { if v == nil { @@ -63618,6 +69048,12 @@ func (s *SpotFleetRequestConfigData) SetInstanceInterruptionBehavior(v string) * return s } +// SetInstancePoolsToUseCount sets the InstancePoolsToUseCount field's value. +func (s *SpotFleetRequestConfigData) SetInstancePoolsToUseCount(v int64) *SpotFleetRequestConfigData { + s.InstancePoolsToUseCount = &v + return s +} + // SetLaunchSpecifications sets the LaunchSpecifications field's value. func (s *SpotFleetRequestConfigData) SetLaunchSpecifications(v []*SpotFleetLaunchSpecification) *SpotFleetRequestConfigData { s.LaunchSpecifications = v @@ -63636,6 +69072,24 @@ func (s *SpotFleetRequestConfigData) SetLoadBalancersConfig(v *LoadBalancersConf return s } +// SetOnDemandAllocationStrategy sets the OnDemandAllocationStrategy field's value. +func (s *SpotFleetRequestConfigData) SetOnDemandAllocationStrategy(v string) *SpotFleetRequestConfigData { + s.OnDemandAllocationStrategy = &v + return s +} + +// SetOnDemandFulfilledCapacity sets the OnDemandFulfilledCapacity field's value. +func (s *SpotFleetRequestConfigData) SetOnDemandFulfilledCapacity(v float64) *SpotFleetRequestConfigData { + s.OnDemandFulfilledCapacity = &v + return s +} + +// SetOnDemandTargetCapacity sets the OnDemandTargetCapacity field's value. +func (s *SpotFleetRequestConfigData) SetOnDemandTargetCapacity(v int64) *SpotFleetRequestConfigData { + s.OnDemandTargetCapacity = &v + return s +} + // SetReplaceUnhealthyInstances sets the ReplaceUnhealthyInstances field's value. func (s *SpotFleetRequestConfigData) SetReplaceUnhealthyInstances(v bool) *SpotFleetRequestConfigData { s.ReplaceUnhealthyInstances = &v @@ -63730,7 +69184,7 @@ type SpotInstanceRequest struct { // The date and time when the Spot Instance request was created, in UTC format // (for example, YYYY-MM-DDTHH:MM:SSZ). 
- CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // The fault codes for the Spot Instance request, if any. Fault *SpotInstanceStateFault `locationName:"fault" type:"structure"` @@ -63761,10 +69215,9 @@ type SpotInstanceRequest struct { // The maximum price per hour that you are willing to pay for a Spot Instance. SpotPrice *string `locationName:"spotPrice" type:"string"` - // The state of the Spot Instance request. Spot status information can help - // you track your Spot Instance requests. For more information, see Spot Status - // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) - // in the Amazon Elastic Compute Cloud User Guide. + // The state of the Spot Instance request. Spot status information helps track + // your Spot Instance requests. For more information, see Spot Status (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) + // in the Amazon EC2 User Guide for Linux Instances. State *string `locationName:"state" type:"string" enum:"SpotInstanceState"` // The status code and status message describing the Spot Instance request. @@ -63778,14 +69231,14 @@ type SpotInstanceRequest struct { // The start date of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). // The request becomes active at this date and time. - ValidFrom *time.Time `locationName:"validFrom" type:"timestamp" timestampFormat:"iso8601"` + ValidFrom *time.Time `locationName:"validFrom" type:"timestamp"` // The end date of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). // If this is a one-time request, it remains active until all instances launch, // the request is canceled, or this date is reached. If the request is persistent, // it remains active until it is canceled or this date is reached. The default // end date is 7 days from the current date. - ValidUntil *time.Time `locationName:"validUntil" type:"timestamp" timestampFormat:"iso8601"` + ValidUntil *time.Time `locationName:"validUntil" type:"timestamp"` } // String returns the string representation @@ -63950,7 +69403,7 @@ type SpotInstanceStatus struct { _ struct{} `type:"structure"` // The status code. For a list of status codes, see Spot Status Codes (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html#spot-instance-bid-status-understand) - // in the Amazon Elastic Compute Cloud User Guide. + // in the Amazon EC2 User Guide for Linux Instances. Code *string `locationName:"code" type:"string"` // The description for the status code. @@ -63958,7 +69411,7 @@ type SpotInstanceStatus struct { // The date and time of the most recent status update, in UTC format (for example, // YYYY-MM-DDTHH:MM:SSZ). - UpdateTime *time.Time `locationName:"updateTime" type:"timestamp" timestampFormat:"iso8601"` + UpdateTime *time.Time `locationName:"updateTime" type:"timestamp"` } // String returns the string representation @@ -64005,7 +69458,9 @@ type SpotMarketOptions struct { // default is the On-Demand price. MaxPrice *string `type:"string"` - // The Spot Instance request type. + // The Spot Instance request type. For RunInstances, persistent Spot Instance + // requests are only supported when InstanceInterruptionBehavior is set to either + // hibernate or stop. SpotInstanceType *string `type:"string" enum:"SpotInstanceType"` // The end date of the request. For a one-time request, the request remains @@ -64013,7 +69468,7 @@ type SpotMarketOptions struct { // is reached. 
If the request is persistent, it remains active until it is canceled // or this date and time is reached. The default end date is 7 days from the // current date. - ValidUntil *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ValidUntil *time.Time `type:"timestamp"` } // String returns the string representation @@ -64056,6 +69511,138 @@ func (s *SpotMarketOptions) SetValidUntil(v time.Time) *SpotMarketOptions { return s } +// Describes the configuration of Spot Instances in an EC2 Fleet. +type SpotOptions struct { + _ struct{} `type:"structure"` + + // Indicates how to allocate the target capacity across the Spot pools specified + // by the Spot Fleet request. The default is lowest-price. + AllocationStrategy *string `locationName:"allocationStrategy" type:"string" enum:"SpotAllocationStrategy"` + + // The behavior when a Spot Instance is interrupted. The default is terminate. + InstanceInterruptionBehavior *string `locationName:"instanceInterruptionBehavior" type:"string" enum:"SpotInstanceInterruptionBehavior"` + + // The number of Spot pools across which to allocate your target Spot capacity. + // Valid only when AllocationStrategy is set to lowestPrice. EC2 Fleet selects + // the cheapest Spot pools and evenly allocates your target Spot capacity across + // the number of Spot pools that you specify. + InstancePoolsToUseCount *int64 `locationName:"instancePoolsToUseCount" type:"integer"` + + // The minimum target capacity for Spot Instances in the fleet. If the minimum + // target capacity is not reached, the fleet launches no instances. + MinTargetCapacity *int64 `locationName:"minTargetCapacity" type:"integer"` + + // Indicates that the fleet uses a single instance type to launch all Spot Instances + // in the fleet. + SingleInstanceType *bool `locationName:"singleInstanceType" type:"boolean"` +} + +// String returns the string representation +func (s SpotOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SpotOptions) GoString() string { + return s.String() +} + +// SetAllocationStrategy sets the AllocationStrategy field's value. +func (s *SpotOptions) SetAllocationStrategy(v string) *SpotOptions { + s.AllocationStrategy = &v + return s +} + +// SetInstanceInterruptionBehavior sets the InstanceInterruptionBehavior field's value. +func (s *SpotOptions) SetInstanceInterruptionBehavior(v string) *SpotOptions { + s.InstanceInterruptionBehavior = &v + return s +} + +// SetInstancePoolsToUseCount sets the InstancePoolsToUseCount field's value. +func (s *SpotOptions) SetInstancePoolsToUseCount(v int64) *SpotOptions { + s.InstancePoolsToUseCount = &v + return s +} + +// SetMinTargetCapacity sets the MinTargetCapacity field's value. +func (s *SpotOptions) SetMinTargetCapacity(v int64) *SpotOptions { + s.MinTargetCapacity = &v + return s +} + +// SetSingleInstanceType sets the SingleInstanceType field's value. +func (s *SpotOptions) SetSingleInstanceType(v bool) *SpotOptions { + s.SingleInstanceType = &v + return s +} + +// Describes the configuration of Spot Instances in an EC2 Fleet request. +type SpotOptionsRequest struct { + _ struct{} `type:"structure"` + + // Indicates how to allocate the target capacity across the Spot pools specified + // by the Spot Fleet request. The default is lowestPrice. + AllocationStrategy *string `type:"string" enum:"SpotAllocationStrategy"` + + // The behavior when a Spot Instance is interrupted. The default is terminate. 
+ InstanceInterruptionBehavior *string `type:"string" enum:"SpotInstanceInterruptionBehavior"` + + // The number of Spot pools across which to allocate your target Spot capacity. + // Valid only when Spot AllocationStrategy is set to lowest-price. EC2 Fleet + // selects the cheapest Spot pools and evenly allocates your target Spot capacity + // across the number of Spot pools that you specify. + InstancePoolsToUseCount *int64 `type:"integer"` + + // The minimum target capacity for Spot Instances in the fleet. If the minimum + // target capacity is not reached, the fleet launches no instances. + MinTargetCapacity *int64 `type:"integer"` + + // Indicates that the fleet uses a single instance type to launch all Spot Instances + // in the fleet. + SingleInstanceType *bool `type:"boolean"` +} + +// String returns the string representation +func (s SpotOptionsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SpotOptionsRequest) GoString() string { + return s.String() +} + +// SetAllocationStrategy sets the AllocationStrategy field's value. +func (s *SpotOptionsRequest) SetAllocationStrategy(v string) *SpotOptionsRequest { + s.AllocationStrategy = &v + return s +} + +// SetInstanceInterruptionBehavior sets the InstanceInterruptionBehavior field's value. +func (s *SpotOptionsRequest) SetInstanceInterruptionBehavior(v string) *SpotOptionsRequest { + s.InstanceInterruptionBehavior = &v + return s +} + +// SetInstancePoolsToUseCount sets the InstancePoolsToUseCount field's value. +func (s *SpotOptionsRequest) SetInstancePoolsToUseCount(v int64) *SpotOptionsRequest { + s.InstancePoolsToUseCount = &v + return s +} + +// SetMinTargetCapacity sets the MinTargetCapacity field's value. +func (s *SpotOptionsRequest) SetMinTargetCapacity(v int64) *SpotOptionsRequest { + s.MinTargetCapacity = &v + return s +} + +// SetSingleInstanceType sets the SingleInstanceType field's value. +func (s *SpotOptionsRequest) SetSingleInstanceType(v bool) *SpotOptionsRequest { + s.SingleInstanceType = &v + return s +} + // Describes Spot Instance placement. type SpotPlacement struct { _ struct{} `type:"structure"` @@ -64121,7 +69708,7 @@ type SpotPrice struct { SpotPrice *string `locationName:"spotPrice" type:"string"` // The date and time the request was created, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). - Timestamp *time.Time `locationName:"timestamp" type:"timestamp" timestampFormat:"iso8601"` + Timestamp *time.Time `locationName:"timestamp" type:"timestamp"` } // String returns the string representation @@ -64246,9 +69833,7 @@ type StaleSecurityGroup struct { Description *string `locationName:"description" type:"string"` // The ID of the security group. - // - // GroupId is a required field - GroupId *string `locationName:"groupId" type:"string" required:"true"` + GroupId *string `locationName:"groupId" type:"string"` // The name of the security group. GroupName *string `locationName:"groupName" type:"string"` @@ -64402,19 +69987,23 @@ type StateReason struct { // The message for the state change. // - // * Server.InsufficientInstanceCapacity: There was insufficient instance - // capacity to satisfy the launch request. + // * Server.InsufficientInstanceCapacity: There was insufficient capacity + // available to satisfy the launch request. // - // * Server.InternalError: An internal error occurred during instance launch, - // resulting in termination. + // * Server.InternalError: An internal error caused the instance to terminate + // during launch. 
// // * Server.ScheduledStop: The instance was stopped due to a scheduled retirement. // - // * Server.SpotInstanceTermination: A Spot Instance was terminated due to - // an increase in the Spot price. + // * Server.SpotInstanceShutdown: The instance was stopped because the number + // of Spot requests with a maximum price equal to or higher than the Spot + // price exceeded available capacity or because of an increase in the Spot + // price. // - // * Client.InternalError: A client error caused the instance to terminate - // on launch. + // * Server.SpotInstanceTermination: The instance was terminated because + // the number of Spot requests with a maximum price equal to or higher than + // the Spot price exceeded available capacity or because of an increase in + // the Spot price. // // * Client.InstanceInitiatedShutdown: The instance was shut down using the // shutdown -h command from the instance. @@ -64422,14 +70011,17 @@ type StateReason struct { // * Client.InstanceTerminated: The instance was terminated or rebooted during // AMI creation. // + // * Client.InternalError: A client error caused the instance to terminate + // during launch. + // + // * Client.InvalidSnapshot.NotFound: The specified snapshot was not found. + // // * Client.UserInitiatedShutdown: The instance was shut down using the Amazon // EC2 API. // // * Client.VolumeLimitExceeded: The limit on the number of EBS volumes or // total storage was exceeded. Decrease usage or request an increase in your - // limits. - // - // * Client.InvalidSnapshot.NotFound: The specified snapshot was not found. + // account limits. Message *string `locationName:"message" type:"string"` } @@ -64612,8 +70204,8 @@ type Subnet struct { // The Availability Zone of the subnet. AvailabilityZone *string `locationName:"availabilityZone" type:"string"` - // The number of unused private IPv4 addresses in the subnet. Note that the - // IPv4 addresses for any stopped instances are considered unavailable. + // The number of unused private IPv4 addresses in the subnet. The IPv4 addresses + // for any stopped instances are considered unavailable. AvailableIpAddressCount *int64 `locationName:"availableIpAddressCount" type:"integer"` // The IPv4 CIDR block assigned to the subnet. @@ -64793,7 +70385,7 @@ func (s *SubnetIpv6CidrBlockAssociation) SetIpv6CidrBlockState(v *SubnetCidrBloc return s } -// Describes the T2 instance whose credit option for CPU usage was successfully +// Describes the T2 or T3 instance whose credit option for CPU usage was successfully // modified. type SuccessfulInstanceCreditSpecificationItem struct { _ struct{} `type:"structure"` @@ -64825,7 +70417,7 @@ type Tag struct { // The key of the tag. // // Constraints: Tag keys are case-sensitive and accept a maximum of 127 Unicode - // characters. May not begin with aws: + // characters. May not begin with aws:. Key *string `locationName:"key" type:"string"` // The value of the tag. @@ -64864,7 +70456,7 @@ type TagDescription struct { // The tag key. Key *string `locationName:"key" type:"string"` - // The ID of the resource. For example, ami-1a2b3c4d. + // The ID of the resource. ResourceId *string `locationName:"resourceId" type:"string"` // The resource type. @@ -64913,7 +70505,8 @@ type TagSpecification struct { _ struct{} `type:"structure"` // The type of resource to tag. Currently, the resource types that support tagging - // on creation are instance and volume. + // on creation are fleet, dedicated-host, instance, snapshot, and volume. 
To + // tag a resource after it has been created, see CreateTags. ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` // The tags to apply to the resource. @@ -64942,6 +70535,131 @@ func (s *TagSpecification) SetTags(v []*Tag) *TagSpecification { return s } +// The number of units to request. You can choose to set the target capacity +// in terms of instances or a performance characteristic that is important to +// your application workload, such as vCPUs, memory, or I/O. If the request +// type is maintain, you can specify a target capacity of 0 and add capacity +// later. +type TargetCapacitySpecification struct { + _ struct{} `type:"structure"` + + // The default TotalTargetCapacity, which is either Spot or On-Demand. + DefaultTargetCapacityType *string `locationName:"defaultTargetCapacityType" type:"string" enum:"DefaultTargetCapacityType"` + + // The number of On-Demand units to request. + OnDemandTargetCapacity *int64 `locationName:"onDemandTargetCapacity" type:"integer"` + + // The maximum number of Spot units to launch. + SpotTargetCapacity *int64 `locationName:"spotTargetCapacity" type:"integer"` + + // The number of units to request, filled using DefaultTargetCapacityType. + TotalTargetCapacity *int64 `locationName:"totalTargetCapacity" type:"integer"` +} + +// String returns the string representation +func (s TargetCapacitySpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TargetCapacitySpecification) GoString() string { + return s.String() +} + +// SetDefaultTargetCapacityType sets the DefaultTargetCapacityType field's value. +func (s *TargetCapacitySpecification) SetDefaultTargetCapacityType(v string) *TargetCapacitySpecification { + s.DefaultTargetCapacityType = &v + return s +} + +// SetOnDemandTargetCapacity sets the OnDemandTargetCapacity field's value. +func (s *TargetCapacitySpecification) SetOnDemandTargetCapacity(v int64) *TargetCapacitySpecification { + s.OnDemandTargetCapacity = &v + return s +} + +// SetSpotTargetCapacity sets the SpotTargetCapacity field's value. +func (s *TargetCapacitySpecification) SetSpotTargetCapacity(v int64) *TargetCapacitySpecification { + s.SpotTargetCapacity = &v + return s +} + +// SetTotalTargetCapacity sets the TotalTargetCapacity field's value. +func (s *TargetCapacitySpecification) SetTotalTargetCapacity(v int64) *TargetCapacitySpecification { + s.TotalTargetCapacity = &v + return s +} + +// The number of units to request. You can choose to set the target capacity +// in terms of instances or a performance characteristic that is important to +// your application workload, such as vCPUs, memory, or I/O. If the request +// type is maintain, you can specify a target capacity of 0 and add capacity +// later. +type TargetCapacitySpecificationRequest struct { + _ struct{} `type:"structure"` + + // The default TotalTargetCapacity, which is either Spot or On-Demand. + DefaultTargetCapacityType *string `type:"string" enum:"DefaultTargetCapacityType"` + + // The number of On-Demand units to request. + OnDemandTargetCapacity *int64 `type:"integer"` + + // The number of Spot units to request. + SpotTargetCapacity *int64 `type:"integer"` + + // The number of units to request, filled using DefaultTargetCapacityType. 
+ // + // TotalTargetCapacity is a required field + TotalTargetCapacity *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s TargetCapacitySpecificationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TargetCapacitySpecificationRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TargetCapacitySpecificationRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TargetCapacitySpecificationRequest"} + if s.TotalTargetCapacity == nil { + invalidParams.Add(request.NewErrParamRequired("TotalTargetCapacity")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDefaultTargetCapacityType sets the DefaultTargetCapacityType field's value. +func (s *TargetCapacitySpecificationRequest) SetDefaultTargetCapacityType(v string) *TargetCapacitySpecificationRequest { + s.DefaultTargetCapacityType = &v + return s +} + +// SetOnDemandTargetCapacity sets the OnDemandTargetCapacity field's value. +func (s *TargetCapacitySpecificationRequest) SetOnDemandTargetCapacity(v int64) *TargetCapacitySpecificationRequest { + s.OnDemandTargetCapacity = &v + return s +} + +// SetSpotTargetCapacity sets the SpotTargetCapacity field's value. +func (s *TargetCapacitySpecificationRequest) SetSpotTargetCapacity(v int64) *TargetCapacitySpecificationRequest { + s.SpotTargetCapacity = &v + return s +} + +// SetTotalTargetCapacity sets the TotalTargetCapacity field's value. +func (s *TargetCapacitySpecificationRequest) SetTotalTargetCapacity(v int64) *TargetCapacitySpecificationRequest { + s.TotalTargetCapacity = &v + return s +} + // Information about the Convertible Reserved Instance offering. type TargetConfiguration struct { _ struct{} `type:"structure"` @@ -65458,12 +71176,13 @@ func (s *UnmonitorInstancesOutput) SetInstanceMonitorings(v []*InstanceMonitorin return s } -// Describes the T2 instance whose credit option for CPU usage was not modified. +// Describes the T2 or T3 instance whose credit option for CPU usage was not +// modified. type UnsuccessfulInstanceCreditSpecificationItem struct { _ struct{} `type:"structure"` - // The applicable error for the T2 instance whose credit option for CPU usage - // was not modified. + // The applicable error for the T2 or T3 instance whose credit option for CPU + // usage was not modified. Error *UnsuccessfulInstanceCreditSpecificationItemError `locationName:"error" type:"structure"` // The ID of the instance. @@ -65492,8 +71211,8 @@ func (s *UnsuccessfulInstanceCreditSpecificationItem) SetInstanceId(v string) *U return s } -// Information about the error for the T2 instance whose credit option for CPU -// usage was not modified. +// Information about the error for the T2 or T3 instance whose credit option +// for CPU usage was not modified. type UnsuccessfulInstanceCreditSpecificationItemError struct { _ struct{} `type:"structure"` @@ -65599,7 +71318,6 @@ func (s *UnsuccessfulItemError) SetMessage(v string) *UnsuccessfulItemError { return s } -// Contains the parameters for UpdateSecurityGroupRuleDescriptionsEgress. type UpdateSecurityGroupRuleDescriptionsEgressInput struct { _ struct{} `type:"structure"` @@ -65671,7 +71389,6 @@ func (s *UpdateSecurityGroupRuleDescriptionsEgressInput) SetIpPermissions(v []*I return s } -// Contains the output of UpdateSecurityGroupRuleDescriptionsEgress. 
type UpdateSecurityGroupRuleDescriptionsEgressOutput struct { _ struct{} `type:"structure"` @@ -65695,7 +71412,6 @@ func (s *UpdateSecurityGroupRuleDescriptionsEgressOutput) SetReturn(v bool) *Upd return s } -// Contains the parameters for UpdateSecurityGroupRuleDescriptionsIngress. type UpdateSecurityGroupRuleDescriptionsIngressInput struct { _ struct{} `type:"structure"` @@ -65767,7 +71483,6 @@ func (s *UpdateSecurityGroupRuleDescriptionsIngressInput) SetIpPermissions(v []* return s } -// Contains the output of UpdateSecurityGroupRuleDescriptionsIngress. type UpdateSecurityGroupRuleDescriptionsIngressOutput struct { _ struct{} `type:"structure"` @@ -65985,7 +71700,7 @@ type VgwTelemetry struct { AcceptedRouteCount *int64 `locationName:"acceptedRouteCount" type:"integer"` // The date and time of the last change in status. - LastStatusChange *time.Time `locationName:"lastStatusChange" type:"timestamp" timestampFormat:"iso8601"` + LastStatusChange *time.Time `locationName:"lastStatusChange" type:"timestamp"` // The Internet-routable IP address of the virtual private gateway's outside // interface. @@ -66049,7 +71764,7 @@ type Volume struct { AvailabilityZone *string `locationName:"availabilityZone" type:"string"` // The time stamp when volume creation was initiated. - CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` + CreateTime *time.Time `locationName:"createTime" type:"timestamp"` // Indicates whether the volume will be encrypted. Encrypted *bool `locationName:"encrypted" type:"boolean"` @@ -66058,11 +71773,12 @@ type Volume struct { // For Provisioned IOPS SSD volumes, this represents the number of IOPS that // are provisioned for the volume. For General Purpose SSD volumes, this represents // the baseline performance of the volume and the rate at which the volume accumulates - // I/O credits for bursting. For more information on General Purpose SSD baseline - // performance, I/O credits, and bursting, see Amazon EBS Volume Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) + // I/O credits for bursting. For more information about General Purpose SSD + // baseline performance, I/O credits, and bursting, see Amazon EBS Volume Types + // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) // in the Amazon Elastic Compute Cloud User Guide. // - // Constraint: Range is 100-20000 IOPS for io1 volumes and 100-10000 IOPS for + // Constraint: Range is 100-32000 IOPS for io1 volumes and 100-10000 IOPS for // gp2 volumes. // // Condition: This parameter is required for requests to create io1 volumes; @@ -66181,7 +71897,7 @@ type VolumeAttachment struct { _ struct{} `type:"structure"` // The time stamp when the attachment initiated. - AttachTime *time.Time `locationName:"attachTime" type:"timestamp" timestampFormat:"iso8601"` + AttachTime *time.Time `locationName:"attachTime" type:"timestamp"` // Indicates whether the EBS volume is deleted on instance termination. DeleteOnTermination *bool `locationName:"deleteOnTermination" type:"boolean"` @@ -66290,41 +72006,41 @@ func (s *VolumeDetail) SetSize(v int64) *VolumeDetail { type VolumeModification struct { _ struct{} `type:"structure"` - // Modification completion or failure time. - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"iso8601"` + // The modification completion or failure time. + EndTime *time.Time `locationName:"endTime" type:"timestamp"` - // Current state of modification. 
Modification state is null for unmodified + // The current modification state. The modification state is null for unmodified // volumes. ModificationState *string `locationName:"modificationState" type:"string" enum:"VolumeModificationState"` - // Original IOPS rate of the volume being modified. + // The original IOPS rate of the volume. OriginalIops *int64 `locationName:"originalIops" type:"integer"` - // Original size of the volume being modified. + // The original size of the volume. OriginalSize *int64 `locationName:"originalSize" type:"integer"` - // Original EBS volume type of the volume being modified. + // The original EBS volume type of the volume. OriginalVolumeType *string `locationName:"originalVolumeType" type:"string" enum:"VolumeType"` - // Modification progress from 0 to 100%. + // The modification progress, from 0 to 100 percent complete. Progress *int64 `locationName:"progress" type:"long"` - // Modification start time - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"iso8601"` + // The modification start time. + StartTime *time.Time `locationName:"startTime" type:"timestamp"` - // Generic status message on modification progress or failure. + // A status message about the modification progress or failure. StatusMessage *string `locationName:"statusMessage" type:"string"` - // Target IOPS rate of the volume being modified. + // The target IOPS rate of the volume. TargetIops *int64 `locationName:"targetIops" type:"integer"` - // Target size of the volume being modified. + // The target size of the volume, in GiB. TargetSize *int64 `locationName:"targetSize" type:"integer"` - // Target EBS volume type of the volume being modified. + // The target EBS volume type of the volume. TargetVolumeType *string `locationName:"targetVolumeType" type:"string" enum:"VolumeType"` - // ID of the volume being modified. + // The ID of the volume. VolumeId *string `locationName:"volumeId" type:"string"` } @@ -66508,10 +72224,10 @@ type VolumeStatusEvent struct { EventType *string `locationName:"eventType" type:"string"` // The latest end time of the event. - NotAfter *time.Time `locationName:"notAfter" type:"timestamp" timestampFormat:"iso8601"` + NotAfter *time.Time `locationName:"notAfter" type:"timestamp"` // The earliest start time of the event. - NotBefore *time.Time `locationName:"notBefore" type:"timestamp" timestampFormat:"iso8601"` + NotBefore *time.Time `locationName:"notBefore" type:"timestamp"` } // String returns the string representation @@ -66899,7 +72615,7 @@ type VpcEndpoint struct { _ struct{} `type:"structure"` // The date and time the VPC endpoint was created. - CreationTimestamp *time.Time `locationName:"creationTimestamp" type:"timestamp" timestampFormat:"iso8601"` + CreationTimestamp *time.Time `locationName:"creationTimestamp" type:"timestamp"` // (Interface endpoint) The DNS entries for the endpoint. DnsEntries []*DnsEntry `locationName:"dnsEntrySet" locationNameList:"item" type:"list"` @@ -67033,7 +72749,7 @@ type VpcEndpointConnection struct { _ struct{} `type:"structure"` // The date and time the VPC endpoint was created. - CreationTimestamp *time.Time `locationName:"creationTimestamp" type:"timestamp" timestampFormat:"iso8601"` + CreationTimestamp *time.Time `locationName:"creationTimestamp" type:"timestamp"` // The ID of the service to which the endpoint is connected. 
ServiceId *string `locationName:"serviceId" type:"string"` @@ -67139,7 +72855,7 @@ type VpcPeeringConnection struct { AccepterVpcInfo *VpcPeeringConnectionVpcInfo `locationName:"accepterVpcInfo" type:"structure"` // The time that an unaccepted VPC peering connection will expire. - ExpirationTime *time.Time `locationName:"expirationTime" type:"timestamp" timestampFormat:"iso8601"` + ExpirationTime *time.Time `locationName:"expirationTime" type:"timestamp"` // Information about the requester VPC. CIDR block information is only returned // when describing an active VPC peering connection. @@ -67717,6 +73433,79 @@ func (s *VpnTunnelOptionsSpecification) SetTunnelInsideCidr(v string) *VpnTunnel return s } +type WithdrawByoipCidrInput struct { + _ struct{} `type:"structure"` + + // The public IPv4 address range, in CIDR notation. + // + // Cidr is a required field + Cidr *string `type:"string" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` +} + +// String returns the string representation +func (s WithdrawByoipCidrInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WithdrawByoipCidrInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *WithdrawByoipCidrInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "WithdrawByoipCidrInput"} + if s.Cidr == nil { + invalidParams.Add(request.NewErrParamRequired("Cidr")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCidr sets the Cidr field's value. +func (s *WithdrawByoipCidrInput) SetCidr(v string) *WithdrawByoipCidrInput { + s.Cidr = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *WithdrawByoipCidrInput) SetDryRun(v bool) *WithdrawByoipCidrInput { + s.DryRun = &v + return s +} + +type WithdrawByoipCidrOutput struct { + _ struct{} `type:"structure"` + + // Information about the address pool. + ByoipCidr *ByoipCidr `locationName:"byoipCidr" type:"structure"` +} + +// String returns the string representation +func (s WithdrawByoipCidrOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WithdrawByoipCidrOutput) GoString() string { + return s.String() +} + +// SetByoipCidr sets the ByoipCidr field's value. 
+func (s *WithdrawByoipCidrOutput) SetByoipCidr(v *ByoipCidr) *WithdrawByoipCidrOutput { + s.ByoipCidr = v + return s +} + const ( // AccountAttributeNameSupportedPlatforms is a AccountAttributeName enum value AccountAttributeNameSupportedPlatforms = "supported-platforms" @@ -67862,6 +73651,29 @@ const ( BundleTaskStateFailed = "failed" ) +const ( + // ByoipCidrStateAdvertised is a ByoipCidrState enum value + ByoipCidrStateAdvertised = "advertised" + + // ByoipCidrStateDeprovisioned is a ByoipCidrState enum value + ByoipCidrStateDeprovisioned = "deprovisioned" + + // ByoipCidrStateFailedDeprovision is a ByoipCidrState enum value + ByoipCidrStateFailedDeprovision = "failed-deprovision" + + // ByoipCidrStateFailedProvision is a ByoipCidrState enum value + ByoipCidrStateFailedProvision = "failed-provision" + + // ByoipCidrStatePendingDeprovision is a ByoipCidrState enum value + ByoipCidrStatePendingDeprovision = "pending-deprovision" + + // ByoipCidrStatePendingProvision is a ByoipCidrState enum value + ByoipCidrStatePendingProvision = "pending-provision" + + // ByoipCidrStateProvisioned is a ByoipCidrState enum value + ByoipCidrStateProvisioned = "provisioned" +) + const ( // CancelBatchErrorCodeFleetRequestIdDoesNotExist is a CancelBatchErrorCode enum value CancelBatchErrorCodeFleetRequestIdDoesNotExist = "fleetRequestIdDoesNotExist" @@ -67893,6 +73705,65 @@ const ( CancelSpotInstanceRequestStateCompleted = "completed" ) +const ( + // CapacityReservationInstancePlatformLinuxUnix is a CapacityReservationInstancePlatform enum value + CapacityReservationInstancePlatformLinuxUnix = "Linux/UNIX" + + // CapacityReservationInstancePlatformRedHatEnterpriseLinux is a CapacityReservationInstancePlatform enum value + CapacityReservationInstancePlatformRedHatEnterpriseLinux = "Red Hat Enterprise Linux" + + // CapacityReservationInstancePlatformSuselinux is a CapacityReservationInstancePlatform enum value + CapacityReservationInstancePlatformSuselinux = "SUSE Linux" + + // CapacityReservationInstancePlatformWindows is a CapacityReservationInstancePlatform enum value + CapacityReservationInstancePlatformWindows = "Windows" + + // CapacityReservationInstancePlatformWindowswithSqlserver is a CapacityReservationInstancePlatform enum value + CapacityReservationInstancePlatformWindowswithSqlserver = "Windows with SQL Server" + + // CapacityReservationInstancePlatformWindowswithSqlserverEnterprise is a CapacityReservationInstancePlatform enum value + CapacityReservationInstancePlatformWindowswithSqlserverEnterprise = "Windows with SQL Server Enterprise" + + // CapacityReservationInstancePlatformWindowswithSqlserverStandard is a CapacityReservationInstancePlatform enum value + CapacityReservationInstancePlatformWindowswithSqlserverStandard = "Windows with SQL Server Standard" + + // CapacityReservationInstancePlatformWindowswithSqlserverWeb is a CapacityReservationInstancePlatform enum value + CapacityReservationInstancePlatformWindowswithSqlserverWeb = "Windows with SQL Server Web" +) + +const ( + // CapacityReservationPreferenceOpen is a CapacityReservationPreference enum value + CapacityReservationPreferenceOpen = "open" + + // CapacityReservationPreferenceNone is a CapacityReservationPreference enum value + CapacityReservationPreferenceNone = "none" +) + +const ( + // CapacityReservationStateActive is a CapacityReservationState enum value + CapacityReservationStateActive = "active" + + // CapacityReservationStateExpired is a CapacityReservationState enum value + CapacityReservationStateExpired = "expired" + 
+ // CapacityReservationStateCancelled is a CapacityReservationState enum value + CapacityReservationStateCancelled = "cancelled" + + // CapacityReservationStatePending is a CapacityReservationState enum value + CapacityReservationStatePending = "pending" + + // CapacityReservationStateFailed is a CapacityReservationState enum value + CapacityReservationStateFailed = "failed" +) + +const ( + // CapacityReservationTenancyDefault is a CapacityReservationTenancy enum value + CapacityReservationTenancyDefault = "default" + + // CapacityReservationTenancyDedicated is a CapacityReservationTenancy enum value + CapacityReservationTenancyDedicated = "dedicated" +) + const ( // ConnectionNotificationStateEnabled is a ConnectionNotificationState enum value ConnectionNotificationStateEnabled = "Enabled" @@ -67938,6 +73809,28 @@ const ( DatafeedSubscriptionStateInactive = "Inactive" ) +const ( + // DefaultTargetCapacityTypeSpot is a DefaultTargetCapacityType enum value + DefaultTargetCapacityTypeSpot = "spot" + + // DefaultTargetCapacityTypeOnDemand is a DefaultTargetCapacityType enum value + DefaultTargetCapacityTypeOnDemand = "on-demand" +) + +const ( + // DeleteFleetErrorCodeFleetIdDoesNotExist is a DeleteFleetErrorCode enum value + DeleteFleetErrorCodeFleetIdDoesNotExist = "fleetIdDoesNotExist" + + // DeleteFleetErrorCodeFleetIdMalformed is a DeleteFleetErrorCode enum value + DeleteFleetErrorCodeFleetIdMalformed = "fleetIdMalformed" + + // DeleteFleetErrorCodeFleetNotInDeletableState is a DeleteFleetErrorCode enum value + DeleteFleetErrorCodeFleetNotInDeletableState = "fleetNotInDeletableState" + + // DeleteFleetErrorCodeUnexpectedError is a DeleteFleetErrorCode enum value + DeleteFleetErrorCodeUnexpectedError = "unexpectedError" +) + const ( // DeviceTypeEbs is a DeviceType enum value DeviceTypeEbs = "ebs" @@ -67978,6 +73871,14 @@ const ( ElasticGpuStatusImpaired = "IMPAIRED" ) +const ( + // EndDateTypeUnlimited is a EndDateType enum value + EndDateTypeUnlimited = "unlimited" + + // EndDateTypeLimited is a EndDateType enum value + EndDateTypeLimited = "limited" +) + const ( // EventCodeInstanceReboot is a EventCode enum value EventCodeInstanceReboot = "instance-reboot" @@ -68039,12 +73940,79 @@ const ( ExportTaskStateCompleted = "completed" ) +const ( + // FleetActivityStatusError is a FleetActivityStatus enum value + FleetActivityStatusError = "error" + + // FleetActivityStatusPendingFulfillment is a FleetActivityStatus enum value + FleetActivityStatusPendingFulfillment = "pending-fulfillment" + + // FleetActivityStatusPendingTermination is a FleetActivityStatus enum value + FleetActivityStatusPendingTermination = "pending-termination" + + // FleetActivityStatusFulfilled is a FleetActivityStatus enum value + FleetActivityStatusFulfilled = "fulfilled" +) + +const ( + // FleetEventTypeInstanceChange is a FleetEventType enum value + FleetEventTypeInstanceChange = "instance-change" + + // FleetEventTypeFleetChange is a FleetEventType enum value + FleetEventTypeFleetChange = "fleet-change" + + // FleetEventTypeServiceError is a FleetEventType enum value + FleetEventTypeServiceError = "service-error" +) + +const ( + // FleetExcessCapacityTerminationPolicyNoTermination is a FleetExcessCapacityTerminationPolicy enum value + FleetExcessCapacityTerminationPolicyNoTermination = "no-termination" + + // FleetExcessCapacityTerminationPolicyTermination is a FleetExcessCapacityTerminationPolicy enum value + FleetExcessCapacityTerminationPolicyTermination = "termination" +) + +const ( + // 
FleetOnDemandAllocationStrategyLowestPrice is a FleetOnDemandAllocationStrategy enum value + FleetOnDemandAllocationStrategyLowestPrice = "lowest-price" + + // FleetOnDemandAllocationStrategyPrioritized is a FleetOnDemandAllocationStrategy enum value + FleetOnDemandAllocationStrategyPrioritized = "prioritized" +) + +const ( + // FleetStateCodeSubmitted is a FleetStateCode enum value + FleetStateCodeSubmitted = "submitted" + + // FleetStateCodeActive is a FleetStateCode enum value + FleetStateCodeActive = "active" + + // FleetStateCodeDeleted is a FleetStateCode enum value + FleetStateCodeDeleted = "deleted" + + // FleetStateCodeFailed is a FleetStateCode enum value + FleetStateCodeFailed = "failed" + + // FleetStateCodeDeletedRunning is a FleetStateCode enum value + FleetStateCodeDeletedRunning = "deleted-running" + + // FleetStateCodeDeletedTerminating is a FleetStateCode enum value + FleetStateCodeDeletedTerminating = "deleted-terminating" + + // FleetStateCodeModifying is a FleetStateCode enum value + FleetStateCodeModifying = "modifying" +) + const ( // FleetTypeRequest is a FleetType enum value FleetTypeRequest = "request" // FleetTypeMaintain is a FleetType enum value FleetTypeMaintain = "maintain" + + // FleetTypeInstant is a FleetType enum value + FleetTypeInstant = "instant" ) const ( @@ -68241,6 +74209,14 @@ const ( InstanceInterruptionBehaviorTerminate = "terminate" ) +const ( + // InstanceLifecycleSpot is a InstanceLifecycle enum value + InstanceLifecycleSpot = "spot" + + // InstanceLifecycleOnDemand is a InstanceLifecycle enum value + InstanceLifecycleOnDemand = "on-demand" +) + const ( // InstanceLifecycleTypeSpot is a InstanceLifecycleType enum value InstanceLifecycleTypeSpot = "spot" @@ -68249,6 +74225,14 @@ const ( InstanceLifecycleTypeScheduled = "scheduled" ) +const ( + // InstanceMatchCriteriaOpen is a InstanceMatchCriteria enum value + InstanceMatchCriteriaOpen = "open" + + // InstanceMatchCriteriaTargeted is a InstanceMatchCriteria enum value + InstanceMatchCriteriaTargeted = "targeted" +) + const ( // InstanceStateNamePending is a InstanceStateName enum value InstanceStateNamePending = "pending" @@ -68294,6 +74278,27 @@ const ( // InstanceTypeT22xlarge is a InstanceType enum value InstanceTypeT22xlarge = "t2.2xlarge" + // InstanceTypeT3Nano is a InstanceType enum value + InstanceTypeT3Nano = "t3.nano" + + // InstanceTypeT3Micro is a InstanceType enum value + InstanceTypeT3Micro = "t3.micro" + + // InstanceTypeT3Small is a InstanceType enum value + InstanceTypeT3Small = "t3.small" + + // InstanceTypeT3Medium is a InstanceType enum value + InstanceTypeT3Medium = "t3.medium" + + // InstanceTypeT3Large is a InstanceType enum value + InstanceTypeT3Large = "t3.large" + + // InstanceTypeT3Xlarge is a InstanceType enum value + InstanceTypeT3Xlarge = "t3.xlarge" + + // InstanceTypeT32xlarge is a InstanceType enum value + InstanceTypeT32xlarge = "t3.2xlarge" + // InstanceTypeM1Small is a InstanceType enum value InstanceTypeM1Small = "m1.small" @@ -68381,6 +74386,78 @@ const ( // InstanceTypeR416xlarge is a InstanceType enum value InstanceTypeR416xlarge = "r4.16xlarge" + // InstanceTypeR5Large is a InstanceType enum value + InstanceTypeR5Large = "r5.large" + + // InstanceTypeR5Xlarge is a InstanceType enum value + InstanceTypeR5Xlarge = "r5.xlarge" + + // InstanceTypeR52xlarge is a InstanceType enum value + InstanceTypeR52xlarge = "r5.2xlarge" + + // InstanceTypeR54xlarge is a InstanceType enum value + InstanceTypeR54xlarge = "r5.4xlarge" + + // InstanceTypeR58xlarge is a 
InstanceType enum value + InstanceTypeR58xlarge = "r5.8xlarge" + + // InstanceTypeR512xlarge is a InstanceType enum value + InstanceTypeR512xlarge = "r5.12xlarge" + + // InstanceTypeR516xlarge is a InstanceType enum value + InstanceTypeR516xlarge = "r5.16xlarge" + + // InstanceTypeR524xlarge is a InstanceType enum value + InstanceTypeR524xlarge = "r5.24xlarge" + + // InstanceTypeR5Metal is a InstanceType enum value + InstanceTypeR5Metal = "r5.metal" + + // InstanceTypeR5aLarge is a InstanceType enum value + InstanceTypeR5aLarge = "r5a.large" + + // InstanceTypeR5aXlarge is a InstanceType enum value + InstanceTypeR5aXlarge = "r5a.xlarge" + + // InstanceTypeR5a2xlarge is a InstanceType enum value + InstanceTypeR5a2xlarge = "r5a.2xlarge" + + // InstanceTypeR5a4xlarge is a InstanceType enum value + InstanceTypeR5a4xlarge = "r5a.4xlarge" + + // InstanceTypeR5a12xlarge is a InstanceType enum value + InstanceTypeR5a12xlarge = "r5a.12xlarge" + + // InstanceTypeR5a24xlarge is a InstanceType enum value + InstanceTypeR5a24xlarge = "r5a.24xlarge" + + // InstanceTypeR5dLarge is a InstanceType enum value + InstanceTypeR5dLarge = "r5d.large" + + // InstanceTypeR5dXlarge is a InstanceType enum value + InstanceTypeR5dXlarge = "r5d.xlarge" + + // InstanceTypeR5d2xlarge is a InstanceType enum value + InstanceTypeR5d2xlarge = "r5d.2xlarge" + + // InstanceTypeR5d4xlarge is a InstanceType enum value + InstanceTypeR5d4xlarge = "r5d.4xlarge" + + // InstanceTypeR5d8xlarge is a InstanceType enum value + InstanceTypeR5d8xlarge = "r5d.8xlarge" + + // InstanceTypeR5d12xlarge is a InstanceType enum value + InstanceTypeR5d12xlarge = "r5d.12xlarge" + + // InstanceTypeR5d16xlarge is a InstanceType enum value + InstanceTypeR5d16xlarge = "r5d.16xlarge" + + // InstanceTypeR5d24xlarge is a InstanceType enum value + InstanceTypeR5d24xlarge = "r5d.24xlarge" + + // InstanceTypeR5dMetal is a InstanceType enum value + InstanceTypeR5dMetal = "r5d.metal" + // InstanceTypeX116xlarge is a InstanceType enum value InstanceTypeX116xlarge = "x1.16xlarge" @@ -68435,6 +74512,9 @@ const ( // InstanceTypeI316xlarge is a InstanceType enum value InstanceTypeI316xlarge = "i3.16xlarge" + // InstanceTypeI3Metal is a InstanceType enum value + InstanceTypeI3Metal = "i3.metal" + // InstanceTypeHi14xlarge is a InstanceType enum value InstanceTypeHi14xlarge = "hi1.4xlarge" @@ -68495,6 +74575,24 @@ const ( // InstanceTypeC518xlarge is a InstanceType enum value InstanceTypeC518xlarge = "c5.18xlarge" + // InstanceTypeC5dLarge is a InstanceType enum value + InstanceTypeC5dLarge = "c5d.large" + + // InstanceTypeC5dXlarge is a InstanceType enum value + InstanceTypeC5dXlarge = "c5d.xlarge" + + // InstanceTypeC5d2xlarge is a InstanceType enum value + InstanceTypeC5d2xlarge = "c5d.2xlarge" + + // InstanceTypeC5d4xlarge is a InstanceType enum value + InstanceTypeC5d4xlarge = "c5d.4xlarge" + + // InstanceTypeC5d9xlarge is a InstanceType enum value + InstanceTypeC5d9xlarge = "c5d.9xlarge" + + // InstanceTypeC5d18xlarge is a InstanceType enum value + InstanceTypeC5d18xlarge = "c5d.18xlarge" + // InstanceTypeCc14xlarge is a InstanceType enum value InstanceTypeCc14xlarge = "cc1.4xlarge" @@ -68516,6 +74614,9 @@ const ( // InstanceTypeG316xlarge is a InstanceType enum value InstanceTypeG316xlarge = "g3.16xlarge" + // InstanceTypeG3sXlarge is a InstanceType enum value + InstanceTypeG3sXlarge = "g3s.xlarge" + // InstanceTypeCg14xlarge is a InstanceType enum value InstanceTypeCg14xlarge = "cg1.4xlarge" @@ -68552,6 +74653,9 @@ const ( // InstanceTypeF12xlarge is a 
InstanceType enum value InstanceTypeF12xlarge = "f1.2xlarge" + // InstanceTypeF14xlarge is a InstanceType enum value + InstanceTypeF14xlarge = "f1.4xlarge" + // InstanceTypeF116xlarge is a InstanceType enum value InstanceTypeF116xlarge = "f1.16xlarge" @@ -68573,6 +74677,42 @@ const ( // InstanceTypeM524xlarge is a InstanceType enum value InstanceTypeM524xlarge = "m5.24xlarge" + // InstanceTypeM5aLarge is a InstanceType enum value + InstanceTypeM5aLarge = "m5a.large" + + // InstanceTypeM5aXlarge is a InstanceType enum value + InstanceTypeM5aXlarge = "m5a.xlarge" + + // InstanceTypeM5a2xlarge is a InstanceType enum value + InstanceTypeM5a2xlarge = "m5a.2xlarge" + + // InstanceTypeM5a4xlarge is a InstanceType enum value + InstanceTypeM5a4xlarge = "m5a.4xlarge" + + // InstanceTypeM5a12xlarge is a InstanceType enum value + InstanceTypeM5a12xlarge = "m5a.12xlarge" + + // InstanceTypeM5a24xlarge is a InstanceType enum value + InstanceTypeM5a24xlarge = "m5a.24xlarge" + + // InstanceTypeM5dLarge is a InstanceType enum value + InstanceTypeM5dLarge = "m5d.large" + + // InstanceTypeM5dXlarge is a InstanceType enum value + InstanceTypeM5dXlarge = "m5d.xlarge" + + // InstanceTypeM5d2xlarge is a InstanceType enum value + InstanceTypeM5d2xlarge = "m5d.2xlarge" + + // InstanceTypeM5d4xlarge is a InstanceType enum value + InstanceTypeM5d4xlarge = "m5d.4xlarge" + + // InstanceTypeM5d12xlarge is a InstanceType enum value + InstanceTypeM5d12xlarge = "m5d.12xlarge" + + // InstanceTypeM5d24xlarge is a InstanceType enum value + InstanceTypeM5d24xlarge = "m5d.24xlarge" + // InstanceTypeH12xlarge is a InstanceType enum value InstanceTypeH12xlarge = "h1.2xlarge" @@ -68584,6 +74724,33 @@ const ( // InstanceTypeH116xlarge is a InstanceType enum value InstanceTypeH116xlarge = "h1.16xlarge" + + // InstanceTypeZ1dLarge is a InstanceType enum value + InstanceTypeZ1dLarge = "z1d.large" + + // InstanceTypeZ1dXlarge is a InstanceType enum value + InstanceTypeZ1dXlarge = "z1d.xlarge" + + // InstanceTypeZ1d2xlarge is a InstanceType enum value + InstanceTypeZ1d2xlarge = "z1d.2xlarge" + + // InstanceTypeZ1d3xlarge is a InstanceType enum value + InstanceTypeZ1d3xlarge = "z1d.3xlarge" + + // InstanceTypeZ1d6xlarge is a InstanceType enum value + InstanceTypeZ1d6xlarge = "z1d.6xlarge" + + // InstanceTypeZ1d12xlarge is a InstanceType enum value + InstanceTypeZ1d12xlarge = "z1d.12xlarge" + + // InstanceTypeU6tb1Metal is a InstanceType enum value + InstanceTypeU6tb1Metal = "u-6tb1.metal" + + // InstanceTypeU9tb1Metal is a InstanceType enum value + InstanceTypeU9tb1Metal = "u-9tb1.metal" + + // InstanceTypeU12tb1Metal is a InstanceType enum value + InstanceTypeU12tb1Metal = "u-12tb1.metal" ) const ( @@ -68642,6 +74809,14 @@ const ( ListingStatusClosed = "closed" ) +const ( + // LogDestinationTypeCloudWatchLogs is a LogDestinationType enum value + LogDestinationTypeCloudWatchLogs = "cloud-watch-logs" + + // LogDestinationTypeS3 is a LogDestinationType enum value + LogDestinationTypeS3 = "s3" +) + const ( // MarketTypeSpot is a MarketType enum value MarketTypeSpot = "spot" @@ -68767,6 +74942,14 @@ const ( OfferingTypeValuesAllUpfront = "All Upfront" ) +const ( + // OnDemandAllocationStrategyLowestPrice is a OnDemandAllocationStrategy enum value + OnDemandAllocationStrategyLowestPrice = "lowestPrice" + + // OnDemandAllocationStrategyPrioritized is a OnDemandAllocationStrategy enum value + OnDemandAllocationStrategyPrioritized = "prioritized" +) + const ( // OperationTypeAdd is a OperationType enum value OperationTypeAdd = "add" @@ -68944,9 
+75127,21 @@ const ( // ResourceTypeCustomerGateway is a ResourceType enum value ResourceTypeCustomerGateway = "customer-gateway" + // ResourceTypeDedicatedHost is a ResourceType enum value + ResourceTypeDedicatedHost = "dedicated-host" + // ResourceTypeDhcpOptions is a ResourceType enum value ResourceTypeDhcpOptions = "dhcp-options" + // ResourceTypeElasticIp is a ResourceType enum value + ResourceTypeElasticIp = "elastic-ip" + + // ResourceTypeFleet is a ResourceType enum value + ResourceTypeFleet = "fleet" + + // ResourceTypeFpgaImage is a ResourceType enum value + ResourceTypeFpgaImage = "fpga-image" + // ResourceTypeImage is a ResourceType enum value ResourceTypeImage = "image" @@ -68956,6 +75151,12 @@ const ( // ResourceTypeInternetGateway is a ResourceType enum value ResourceTypeInternetGateway = "internet-gateway" + // ResourceTypeLaunchTemplate is a ResourceType enum value + ResourceTypeLaunchTemplate = "launch-template" + + // ResourceTypeNatgateway is a ResourceType enum value + ResourceTypeNatgateway = "natgateway" + // ResourceTypeNetworkAcl is a ResourceType enum value ResourceTypeNetworkAcl = "network-acl" @@ -68968,6 +75169,9 @@ const ( // ResourceTypeRouteTable is a ResourceType enum value ResourceTypeRouteTable = "route-table" + // ResourceTypeSecurityGroup is a ResourceType enum value + ResourceTypeSecurityGroup = "security-group" + // ResourceTypeSnapshot is a ResourceType enum value ResourceTypeSnapshot = "snapshot" @@ -68977,15 +75181,15 @@ const ( // ResourceTypeSubnet is a ResourceType enum value ResourceTypeSubnet = "subnet" - // ResourceTypeSecurityGroup is a ResourceType enum value - ResourceTypeSecurityGroup = "security-group" - // ResourceTypeVolume is a ResourceType enum value ResourceTypeVolume = "volume" // ResourceTypeVpc is a ResourceType enum value ResourceTypeVpc = "vpc" + // ResourceTypeVpcPeeringConnection is a ResourceType enum value + ResourceTypeVpcPeeringConnection = "vpc-peering-connection" + // ResourceTypeVpnConnection is a ResourceType enum value ResourceTypeVpnConnection = "vpn-connection" @@ -69072,6 +75276,25 @@ const ( SnapshotStateError = "error" ) +const ( + // SpotAllocationStrategyLowestPrice is a SpotAllocationStrategy enum value + SpotAllocationStrategyLowestPrice = "lowest-price" + + // SpotAllocationStrategyDiversified is a SpotAllocationStrategy enum value + SpotAllocationStrategyDiversified = "diversified" +) + +const ( + // SpotInstanceInterruptionBehaviorHibernate is a SpotInstanceInterruptionBehavior enum value + SpotInstanceInterruptionBehaviorHibernate = "hibernate" + + // SpotInstanceInterruptionBehaviorStop is a SpotInstanceInterruptionBehavior enum value + SpotInstanceInterruptionBehaviorStop = "stop" + + // SpotInstanceInterruptionBehaviorTerminate is a SpotInstanceInterruptionBehavior enum value + SpotInstanceInterruptionBehaviorTerminate = "terminate" +) + const ( // SpotInstanceStateOpen is a SpotInstanceState enum value SpotInstanceStateOpen = "open" diff --git a/vendor/github.com/aws/aws-sdk-go/service/ec2/doc.go b/vendor/github.com/aws/aws-sdk-go/service/ec2/doc.go index 909e05a14f3..c258e0e85c0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ec2/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ec2/doc.go @@ -3,9 +3,22 @@ // Package ec2 provides the client and types for making API // requests to Amazon Elastic Compute Cloud. // -// Amazon Elastic Compute Cloud (Amazon EC2) provides resizable computing capacity -// in the AWS Cloud. 
Using Amazon EC2 eliminates the need to invest in hardware -// up front, so you can develop and deploy applications faster. +// Amazon Elastic Compute Cloud (Amazon EC2) provides secure and resizable computing +// capacity in the AWS cloud. Using Amazon EC2 eliminates the need to invest +// in hardware up front, so you can develop and deploy applications faster. +// +// To learn more about Amazon EC2, Amazon EBS, and Amazon VPC, see the following +// resources: +// +// * Amazon EC2 product page (http://aws.amazon.com/ec2) +// +// * Amazon EC2 documentation (http://aws.amazon.com/documentation/ec2) +// +// * Amazon EBS product page (http://aws.amazon.com/ebs) +// +// * Amazon VPC product page (http://aws.amazon.com/vpc) +// +// * Amazon VPC documentation (http://aws.amazon.com/documentation/vpc) // // See https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15 for more information on this service. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/ec2/service.go b/vendor/github.com/aws/aws-sdk-go/service/ec2/service.go index ba4433d388e..6acbc43fe3d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ec2/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ec2/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "ec2" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "ec2" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "EC2" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the EC2 client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/ecr/api.go b/vendor/github.com/aws/aws-sdk-go/service/ecr/api.go index f12d5ccb4b4..ef61f6e8e53 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ecr/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ecr/api.go @@ -14,8 +14,8 @@ const opBatchCheckLayerAvailability = "BatchCheckLayerAvailability" // BatchCheckLayerAvailabilityRequest generates a "aws/request.Request" representing the // client's request for the BatchCheckLayerAvailability operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -106,8 +106,8 @@ const opBatchDeleteImage = "BatchDeleteImage" // BatchDeleteImageRequest generates a "aws/request.Request" representing the // client's request for the BatchDeleteImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -201,8 +201,8 @@ const opBatchGetImage = "BatchGetImage" // BatchGetImageRequest generates a "aws/request.Request" representing the // client's request for the BatchGetImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -289,8 +289,8 @@ const opCompleteLayerUpload = "CompleteLayerUpload" // CompleteLayerUploadRequest generates a "aws/request.Request" representing the // client's request for the CompleteLayerUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -399,8 +399,8 @@ const opCreateRepository = "CreateRepository" // CreateRepositoryRequest generates a "aws/request.Request" representing the // client's request for the CreateRepository operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -491,8 +491,8 @@ const opDeleteLifecyclePolicy = "DeleteLifecyclePolicy" // DeleteLifecyclePolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteLifecyclePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -581,8 +581,8 @@ const opDeleteRepository = "DeleteRepository" // DeleteRepositoryRequest generates a "aws/request.Request" representing the // client's request for the DeleteRepository operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -673,8 +673,8 @@ const opDeleteRepositoryPolicy = "DeleteRepositoryPolicy" // DeleteRepositoryPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteRepositoryPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -764,8 +764,8 @@ const opDescribeImages = "DescribeImages" // DescribeImagesRequest generates a "aws/request.Request" representing the // client's request for the DescribeImages operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -916,8 +916,8 @@ const opDescribeRepositories = "DescribeRepositories" // DescribeRepositoriesRequest generates a "aws/request.Request" representing the // client's request for the DescribeRepositories operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1059,8 +1059,8 @@ const opGetAuthorizationToken = "GetAuthorizationToken" // GetAuthorizationTokenRequest generates a "aws/request.Request" representing the // client's request for the GetAuthorizationToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1149,8 +1149,8 @@ const opGetDownloadUrlForLayer = "GetDownloadUrlForLayer" // GetDownloadUrlForLayerRequest generates a "aws/request.Request" representing the // client's request for the GetDownloadUrlForLayer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1249,8 +1249,8 @@ const opGetLifecyclePolicy = "GetLifecyclePolicy" // GetLifecyclePolicyRequest generates a "aws/request.Request" representing the // client's request for the GetLifecyclePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1339,8 +1339,8 @@ const opGetLifecyclePolicyPreview = "GetLifecyclePolicyPreview" // GetLifecyclePolicyPreviewRequest generates a "aws/request.Request" representing the // client's request for the GetLifecyclePolicyPreview operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1429,8 +1429,8 @@ const opGetRepositoryPolicy = "GetRepositoryPolicy" // GetRepositoryPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetRepositoryPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1520,8 +1520,8 @@ const opInitiateLayerUpload = "InitiateLayerUpload" // InitiateLayerUploadRequest generates a "aws/request.Request" representing the // client's request for the InitiateLayerUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1611,8 +1611,8 @@ const opListImages = "ListImages" // ListImagesRequest generates a "aws/request.Request" representing the // client's request for the ListImages operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1760,8 +1760,8 @@ const opPutImage = "PutImage" // PutImageRequest generates a "aws/request.Request" representing the // client's request for the PutImage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1865,8 +1865,8 @@ const opPutLifecyclePolicy = "PutLifecyclePolicy" // PutLifecyclePolicyRequest generates a "aws/request.Request" representing the // client's request for the PutLifecyclePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1953,8 +1953,8 @@ const opSetRepositoryPolicy = "SetRepositoryPolicy" // SetRepositoryPolicyRequest generates a "aws/request.Request" representing the // client's request for the SetRepositoryPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2040,8 +2040,8 @@ const opStartLifecyclePolicyPreview = "StartLifecyclePolicyPreview" // StartLifecyclePolicyPreviewRequest generates a "aws/request.Request" representing the // client's request for the StartLifecyclePolicyPreview operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2135,8 +2135,8 @@ const opUploadLayerPart = "UploadLayerPart" // UploadLayerPartRequest generates a "aws/request.Request" representing the // client's request for the UploadLayerPart operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2247,7 +2247,7 @@ type AuthorizationData struct { // The Unix time in seconds and milliseconds when the authorization token expires. // Authorization tokens are valid for 12 hours. - ExpiresAt *time.Time `locationName:"expiresAt" type:"timestamp" timestampFormat:"unix"` + ExpiresAt *time.Time `locationName:"expiresAt" type:"timestamp"` // The registry URL to use for this authorization token in a docker login command. // The Amazon ECR registry URL format is https://aws_account_id.dkr.ecr.region.amazonaws.com. @@ -2857,7 +2857,7 @@ type DeleteLifecyclePolicyOutput struct { _ struct{} `type:"structure"` // The time stamp of the last time that the lifecycle policy was run. - LastEvaluatedAt *time.Time `locationName:"lastEvaluatedAt" type:"timestamp" timestampFormat:"unix"` + LastEvaluatedAt *time.Time `locationName:"lastEvaluatedAt" type:"timestamp"` // The JSON lifecycle policy text. LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string"` @@ -3579,7 +3579,7 @@ type GetLifecyclePolicyOutput struct { _ struct{} `type:"structure"` // The time stamp of the last time that the lifecycle policy was run. - LastEvaluatedAt *time.Time `locationName:"lastEvaluatedAt" type:"timestamp" timestampFormat:"unix"` + LastEvaluatedAt *time.Time `locationName:"lastEvaluatedAt" type:"timestamp"` // The JSON lifecycle policy text. LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string"` @@ -3964,7 +3964,7 @@ type ImageDetail struct { // The date and time, expressed in standard JavaScript date format, at which // the current image was pushed to the repository. - ImagePushedAt *time.Time `locationName:"imagePushedAt" type:"timestamp" timestampFormat:"unix"` + ImagePushedAt *time.Time `locationName:"imagePushedAt" type:"timestamp"` // The size, in bytes, of the image in the repository. // @@ -4323,7 +4323,7 @@ type LifecyclePolicyPreviewResult struct { // The date and time, expressed in standard JavaScript date format, at which // the current image was pushed to the repository. 
- ImagePushedAt *time.Time `locationName:"imagePushedAt" type:"timestamp" timestampFormat:"unix"` + ImagePushedAt *time.Time `locationName:"imagePushedAt" type:"timestamp"` // The list of tags associated with this image. ImageTags []*string `locationName:"imageTags" type:"list"` @@ -4784,7 +4784,7 @@ type Repository struct { _ struct{} `type:"structure"` // The date and time, in JavaScript date format, when the repository was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` // The AWS account ID associated with the registry that contains the repository. RegistryId *string `locationName:"registryId" type:"string"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/ecr/service.go b/vendor/github.com/aws/aws-sdk-go/service/ecr/service.go index 95de12e25e6..7bdf2137099 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ecr/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ecr/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "ecr" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "ecr" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "ECR" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ECR client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/ecs/api.go b/vendor/github.com/aws/aws-sdk-go/service/ecs/api.go index 6f7d523e239..9ad4f31804f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ecs/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ecs/api.go @@ -15,8 +15,8 @@ const opCreateCluster = "CreateCluster" // CreateClusterRequest generates a "aws/request.Request" representing the // client's request for the CreateCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -113,8 +113,8 @@ const opCreateService = "CreateService" // CreateServiceRequest generates a "aws/request.Request" representing the // client's request for the CreateService operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -180,17 +180,21 @@ func (c *ECS) CreateServiceRequest(input *CreateServiceInput) (req *request.Requ // balancer are considered healthy if they are in the RUNNING state. 
Tasks for // services that do use a load balancer are considered healthy if they are in // the RUNNING state and the container instance they are hosted on is reported -// as healthy by the load balancer. The default value for minimumHealthyPercent -// is 50% in the console and 100% for the AWS CLI, the AWS SDKs, and the APIs. +// as healthy by the load balancer. The default value for a replica service +// for minimumHealthyPercent is 50% in the console and 100% for the AWS CLI, +// the AWS SDKs, and the APIs. The default value for a daemon service for minimumHealthyPercent +// is 0% for the AWS CLI, the AWS SDKs, and the APIs and 50% for the console. // // The maximumPercent parameter represents an upper limit on the number of your // service's tasks that are allowed in the RUNNING or PENDING state during a // deployment, as a percentage of the desiredCount (rounded down to the nearest // integer). This parameter enables you to define the deployment batch size. -// For example, if your service has a desiredCount of four tasks and a maximumPercent -// value of 200%, the scheduler can start four new tasks before stopping the -// four older tasks (provided that the cluster resources required to do this -// are available). The default value for maximumPercent is 200%. +// For example, if your replica service has a desiredCount of four tasks and +// a maximumPercent value of 200%, the scheduler can start four new tasks before +// stopping the four older tasks (provided that the cluster resources required +// to do this are available). The default value for a replica service for maximumPercent +// is 200%. If you are using a daemon service type, the maximumPercent should +// remain at 100%, which is the default value. // // When the service scheduler launches new tasks, it determines task placement // in your cluster using the following logic: @@ -203,11 +207,11 @@ func (c *ECS) CreateServiceRequest(input *CreateServiceInput) (req *request.Requ // Zones in this manner (although you can choose a different placement strategy) // with the placementStrategy parameter): // -// Sort the valid container instances by the fewest number of running tasks -// for this service in the same Availability Zone as the instance. For example, -// if zone A has one running service task and zones B and C each have zero, -// valid container instances in either zone B or C are considered optimal -// for placement. +// Sort the valid container instances, giving priority to instances that have +// the fewest number of running tasks for this service in their respective +// Availability Zone. For example, if zone A has one running service task +// and zones B and C each have zero, valid container instances in either +// zone B or C are considered optimal for placement. // // Place the new service task on a valid container instance in an optimal Availability // Zone (based on the previous steps), favoring container instances with @@ -235,16 +239,16 @@ func (c *ECS) CreateServiceRequest(input *CreateServiceInput) (req *request.Requ // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // * ErrCodeUnsupportedFeatureException "UnsupportedFeatureException" -// The specified task is not supported in this region. +// The specified task is not supported in this Region. 
// // * ErrCodePlatformUnknownException "PlatformUnknownException" // The specified platform version does not exist. // // * ErrCodePlatformTaskDefinitionIncompatibilityException "PlatformTaskDefinitionIncompatibilityException" -// The specified platform version does not satisfy the task definition’s required +// The specified platform version does not satisfy the task definition's required // capabilities. // // * ErrCodeAccessDeniedException "AccessDeniedException" @@ -272,12 +276,103 @@ func (c *ECS) CreateServiceWithContext(ctx aws.Context, input *CreateServiceInpu return out, req.Send() } +const opDeleteAccountSetting = "DeleteAccountSetting" + +// DeleteAccountSettingRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAccountSetting operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteAccountSetting for more information on using the DeleteAccountSetting +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAccountSettingRequest method. +// req, resp := client.DeleteAccountSettingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/DeleteAccountSetting +func (c *ECS) DeleteAccountSettingRequest(input *DeleteAccountSettingInput) (req *request.Request, output *DeleteAccountSettingOutput) { + op := &request.Operation{ + Name: opDeleteAccountSetting, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteAccountSettingInput{} + } + + output = &DeleteAccountSettingOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteAccountSetting API operation for Amazon EC2 Container Service. +// +// Modifies the ARN and resource ID format of a resource for a specified IAM +// user, IAM role, or the root user for an account. You can specify whether +// the new ARN and resource ID format are disabled for new resources that are +// created. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Service's +// API operation DeleteAccountSetting for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server issue. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/DeleteAccountSetting +func (c *ECS) DeleteAccountSetting(input *DeleteAccountSettingInput) (*DeleteAccountSettingOutput, error) { + req, out := c.DeleteAccountSettingRequest(input) + return out, req.Send() +} + +// DeleteAccountSettingWithContext is the same as DeleteAccountSetting with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteAccountSetting for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DeleteAccountSettingWithContext(ctx aws.Context, input *DeleteAccountSettingInput, opts ...request.Option) (*DeleteAccountSettingOutput, error) { + req, out := c.DeleteAccountSettingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteAttributes = "DeleteAttributes" // DeleteAttributesRequest generates a "aws/request.Request" representing the // client's request for the DeleteAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -328,12 +423,12 @@ func (c *ECS) DeleteAttributesRequest(input *DeleteAttributesInput) (req *reques // Returned Error Codes: // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // * ErrCodeTargetNotFoundException "TargetNotFoundException" // The specified target could not be found. You can view your available container // instances with ListContainerInstances. Amazon ECS container instances are -// cluster-specific and region-specific. +// cluster-specific and Region-specific. // // * ErrCodeInvalidParameterException "InvalidParameterException" // The specified parameter is invalid. Review the available parameters for the @@ -365,8 +460,8 @@ const opDeleteCluster = "DeleteCluster" // DeleteClusterRequest generates a "aws/request.Request" representing the // client's request for the DeleteCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -431,7 +526,7 @@ func (c *ECS) DeleteClusterRequest(input *DeleteClusterInput) (req *request.Requ // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. 
// // * ErrCodeClusterContainsContainerInstancesException "ClusterContainsContainerInstancesException" // You cannot delete a cluster that has registered container instances. You @@ -472,8 +567,8 @@ const opDeleteService = "DeleteService" // DeleteServiceRequest generates a "aws/request.Request" representing the // client's request for the DeleteService operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -520,12 +615,16 @@ func (c *ECS) DeleteServiceRequest(input *DeleteServiceInput) (req *request.Requ // // When you delete a service, if there are still running tasks that require // cleanup, the service status moves from ACTIVE to DRAINING, and the service -// is no longer visible in the console or in ListServices API operations. After -// the tasks have stopped, then the service status moves from DRAINING to INACTIVE. -// Services in the DRAINING or INACTIVE status can still be viewed with DescribeServices -// API operations. However, in the future, INACTIVE services may be cleaned -// up and purged from Amazon ECS record keeping, and DescribeServices API operations -// on those services return a ServiceNotFoundException error. +// is no longer visible in the console or in the ListServices API operation. +// After the tasks have stopped, then the service status moves from DRAINING +// to INACTIVE. Services in the DRAINING or INACTIVE status can still be viewed +// with the DescribeServices API operation. However, in the future, INACTIVE +// services may be cleaned up and purged from Amazon ECS record keeping, and +// DescribeServices calls on those services return a ServiceNotFoundException +// error. +// +// If you attempt to create a new service with the same name as an existing +// service in either ACTIVE or DRAINING status, you receive an error. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -549,11 +648,11 @@ func (c *ECS) DeleteServiceRequest(input *DeleteServiceInput) (req *request.Requ // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // * ErrCodeServiceNotFoundException "ServiceNotFoundException" // The specified service could not be found. You can view your available services -// with ListServices. Amazon ECS services are cluster-specific and region-specific. +// with ListServices. Amazon ECS services are cluster-specific and Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/DeleteService func (c *ECS) DeleteService(input *DeleteServiceInput) (*DeleteServiceOutput, error) { @@ -581,8 +680,8 @@ const opDeregisterContainerInstance = "DeregisterContainerInstance" // DeregisterContainerInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeregisterContainerInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -630,7 +729,7 @@ func (c *ECS) DeregisterContainerInstanceRequest(input *DeregisterContainerInsta // resources. // // Deregistering a container instance removes the instance from a cluster, but -// it does not terminate the EC2 instance; if you are finished using the instance, +// it does not terminate the EC2 instance. If you are finished using the instance, // be sure to terminate it in the Amazon EC2 console to stop billing. // // If you terminate a running container instance, Amazon ECS automatically deregisters @@ -659,7 +758,7 @@ func (c *ECS) DeregisterContainerInstanceRequest(input *DeregisterContainerInsta // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/DeregisterContainerInstance func (c *ECS) DeregisterContainerInstance(input *DeregisterContainerInstanceInput) (*DeregisterContainerInstanceOutput, error) { @@ -687,8 +786,8 @@ const opDeregisterTaskDefinition = "DeregisterTaskDefinition" // DeregisterTaskDefinitionRequest generates a "aws/request.Request" representing the // client's request for the DeregisterTaskDefinition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -739,7 +838,7 @@ func (c *ECS) DeregisterTaskDefinitionRequest(input *DeregisterTaskDefinitionInp // deregistration where these restrictions have not yet taken effect). // // At this time, INACTIVE task definitions remain discoverable in your account -// indefinitely; however, this behavior is subject to change in the future, +// indefinitely. However, this behavior is subject to change in the future, // so you should not rely on INACTIVE task definitions persisting beyond the // lifecycle of any associated tasks and services. // @@ -789,8 +888,8 @@ const opDescribeClusters = "DescribeClusters" // DescribeClustersRequest generates a "aws/request.Request" representing the // client's request for the DescribeClusters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -877,8 +976,8 @@ const opDescribeContainerInstances = "DescribeContainerInstances" // DescribeContainerInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeContainerInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -942,7 +1041,7 @@ func (c *ECS) DescribeContainerInstancesRequest(input *DescribeContainerInstance // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/DescribeContainerInstances func (c *ECS) DescribeContainerInstances(input *DescribeContainerInstancesInput) (*DescribeContainerInstancesOutput, error) { @@ -970,8 +1069,8 @@ const opDescribeServices = "DescribeServices" // DescribeServicesRequest generates a "aws/request.Request" representing the // client's request for the DescribeServices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1034,7 +1133,7 @@ func (c *ECS) DescribeServicesRequest(input *DescribeServicesInput) (req *reques // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/DescribeServices func (c *ECS) DescribeServices(input *DescribeServicesInput) (*DescribeServicesOutput, error) { @@ -1062,8 +1161,8 @@ const opDescribeTaskDefinition = "DescribeTaskDefinition" // DescribeTaskDefinitionRequest generates a "aws/request.Request" representing the // client's request for the DescribeTaskDefinition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1155,8 +1254,8 @@ const opDescribeTasks = "DescribeTasks" // DescribeTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1219,7 +1318,7 @@ func (c *ECS) DescribeTasksRequest(input *DescribeTasksInput) (req *request.Requ // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. 
Amazon ECS clusters are Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/DescribeTasks func (c *ECS) DescribeTasks(input *DescribeTasksInput) (*DescribeTasksOutput, error) { @@ -1247,8 +1346,8 @@ const opDiscoverPollEndpoint = "DiscoverPollEndpoint" // DiscoverPollEndpointRequest generates a "aws/request.Request" representing the // client's request for the DiscoverPollEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1330,12 +1429,100 @@ func (c *ECS) DiscoverPollEndpointWithContext(ctx aws.Context, input *DiscoverPo return out, req.Send() } +const opListAccountSettings = "ListAccountSettings" + +// ListAccountSettingsRequest generates a "aws/request.Request" representing the +// client's request for the ListAccountSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAccountSettings for more information on using the ListAccountSettings +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAccountSettingsRequest method. +// req, resp := client.ListAccountSettingsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/ListAccountSettings +func (c *ECS) ListAccountSettingsRequest(input *ListAccountSettingsInput) (req *request.Request, output *ListAccountSettingsOutput) { + op := &request.Operation{ + Name: opListAccountSettings, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListAccountSettingsInput{} + } + + output = &ListAccountSettingsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAccountSettings API operation for Amazon EC2 Container Service. +// +// Lists the account settings for an Amazon ECS resource for a specified principal. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Service's +// API operation ListAccountSettings for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server issue. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/ListAccountSettings +func (c *ECS) ListAccountSettings(input *ListAccountSettingsInput) (*ListAccountSettingsOutput, error) { + req, out := c.ListAccountSettingsRequest(input) + return out, req.Send() +} + +// ListAccountSettingsWithContext is the same as ListAccountSettings with the addition of +// the ability to pass a context and additional request options. +// +// See ListAccountSettings for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ListAccountSettingsWithContext(ctx aws.Context, input *ListAccountSettingsInput, opts ...request.Option) (*ListAccountSettingsOutput, error) { + req, out := c.ListAccountSettingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListAttributes = "ListAttributes" // ListAttributesRequest generates a "aws/request.Request" representing the // client's request for the ListAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1392,7 +1579,7 @@ func (c *ECS) ListAttributesRequest(input *ListAttributesInput) (req *request.Re // Returned Error Codes: // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // * ErrCodeInvalidParameterException "InvalidParameterException" // The specified parameter is invalid. Review the available parameters for the @@ -1424,8 +1611,8 @@ const opListClusters = "ListClusters" // ListClustersRequest generates a "aws/request.Request" representing the // client's request for the ListClusters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1568,8 +1755,8 @@ const opListContainerInstances = "ListContainerInstances" // ListContainerInstancesRequest generates a "aws/request.Request" representing the // client's request for the ListContainerInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1642,7 +1829,7 @@ func (c *ECS) ListContainerInstancesRequest(input *ListContainerInstancesInput) // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/ListContainerInstances func (c *ECS) ListContainerInstances(input *ListContainerInstancesInput) (*ListContainerInstancesOutput, error) { @@ -1720,8 +1907,8 @@ const opListServices = "ListServices" // ListServicesRequest generates a "aws/request.Request" representing the // client's request for the ListServices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1790,7 +1977,7 @@ func (c *ECS) ListServicesRequest(input *ListServicesInput) (req *request.Reques // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/ListServices func (c *ECS) ListServices(input *ListServicesInput) (*ListServicesOutput, error) { @@ -1864,12 +2051,104 @@ func (c *ECS) ListServicesPagesWithContext(ctx aws.Context, input *ListServicesI return p.Err() } +const opListTagsForResource = "ListTagsForResource" + +// ListTagsForResourceRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsForResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTagsForResource for more information on using the ListTagsForResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsForResourceRequest method. +// req, resp := client.ListTagsForResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/ListTagsForResource +func (c *ECS) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req *request.Request, output *ListTagsForResourceOutput) { + op := &request.Operation{ + Name: opListTagsForResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListTagsForResourceInput{} + } + + output = &ListTagsForResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTagsForResource API operation for Amazon EC2 Container Service. +// +// List the tags for an Amazon ECS resource. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Service's +// API operation ListTagsForResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server issue. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeClusterNotFoundException "ClusterNotFoundException" +// The specified cluster could not be found. You can view your available clusters +// with ListClusters. Amazon ECS clusters are Region-specific. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/ListTagsForResource +func (c *ECS) ListTagsForResource(input *ListTagsForResourceInput) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) + return out, req.Send() +} + +// ListTagsForResourceWithContext is the same as ListTagsForResource with the addition of +// the ability to pass a context and additional request options. +// +// See ListTagsForResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ListTagsForResourceWithContext(ctx aws.Context, input *ListTagsForResourceInput, opts ...request.Option) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListTaskDefinitionFamilies = "ListTaskDefinitionFamilies" // ListTaskDefinitionFamiliesRequest generates a "aws/request.Request" representing the // client's request for the ListTaskDefinitionFamilies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2018,8 +2297,8 @@ const opListTaskDefinitions = "ListTaskDefinitions" // ListTaskDefinitionsRequest generates a "aws/request.Request" representing the // client's request for the ListTaskDefinitions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2164,8 +2443,8 @@ const opListTasks = "ListTasks" // ListTasksRequest generates a "aws/request.Request" representing the // client's request for the ListTasks operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2239,11 +2518,11 @@ func (c *ECS) ListTasksRequest(input *ListTasksInput) (req *request.Request, out // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // * ErrCodeServiceNotFoundException "ServiceNotFoundException" // The specified service could not be found. You can view your available services -// with ListServices. Amazon ECS services are cluster-specific and region-specific. +// with ListServices. Amazon ECS services are cluster-specific and Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/ListTasks func (c *ECS) ListTasks(input *ListTasksInput) (*ListTasksOutput, error) { @@ -2317,124 +2596,216 @@ func (c *ECS) ListTasksPagesWithContext(ctx aws.Context, input *ListTasksInput, return p.Err() } -const opPutAttributes = "PutAttributes" +const opPutAccountSetting = "PutAccountSetting" -// PutAttributesRequest generates a "aws/request.Request" representing the -// client's request for the PutAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// PutAccountSettingRequest generates a "aws/request.Request" representing the +// client's request for the PutAccountSetting operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See PutAttributes for more information on using the PutAttributes +// See PutAccountSetting for more information on using the PutAccountSetting // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the PutAttributesRequest method. -// req, resp := client.PutAttributesRequest(params) +// // Example sending a request using the PutAccountSettingRequest method. 
+// req, resp := client.PutAccountSettingRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/PutAttributes -func (c *ECS) PutAttributesRequest(input *PutAttributesInput) (req *request.Request, output *PutAttributesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/PutAccountSetting +func (c *ECS) PutAccountSettingRequest(input *PutAccountSettingInput) (req *request.Request, output *PutAccountSettingOutput) { op := &request.Operation{ - Name: opPutAttributes, + Name: opPutAccountSetting, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &PutAttributesInput{} + input = &PutAccountSettingInput{} } - output = &PutAttributesOutput{} + output = &PutAccountSettingOutput{} req = c.newRequest(op, input, output) return } -// PutAttributes API operation for Amazon EC2 Container Service. +// PutAccountSetting API operation for Amazon EC2 Container Service. // -// Create or update an attribute on an Amazon ECS resource. If the attribute -// does not exist, it is created. If the attribute exists, its value is replaced -// with the specified value. To delete an attribute, use DeleteAttributes. For -// more information, see Attributes (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html#attributes) -// in the Amazon Elastic Container Service Developer Guide. +// Modifies the ARN and resource ID format of a resource for a specified IAM +// user, IAM role, or the root user for an account. You can specify whether +// the new ARN and resource ID format are enabled for new resources that are +// created. Enabling this setting is required to use new Amazon ECS features +// such as resource tagging. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon EC2 Container Service's -// API operation PutAttributes for usage and error information. +// API operation PutAccountSetting for usage and error information. // // Returned Error Codes: -// * ErrCodeClusterNotFoundException "ClusterNotFoundException" -// The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. -// -// * ErrCodeTargetNotFoundException "TargetNotFoundException" -// The specified target could not be found. You can view your available container -// instances with ListContainerInstances. Amazon ECS container instances are -// cluster-specific and region-specific. +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server issue. // -// * ErrCodeAttributeLimitExceededException "AttributeLimitExceededException" -// You can apply up to 10 custom attributes per resource. You can view the attributes -// of a resource with ListAttributes. You can remove existing attributes on -// a resource with DeleteAttributes. +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. // // * ErrCodeInvalidParameterException "InvalidParameterException" // The specified parameter is invalid. Review the available parameters for the // API request. 
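
> Editor's note (not part of the vendored change): the hunk above adds the `PutAccountSetting` wrapper, whose doc comment explains that enabling the new ARN/resource ID format is a prerequisite for resource tagging. A minimal caller sketch follows; the wrapper signature comes from this hunk, but the input field names (`Name`, `Value`) and the setting name `"serviceLongArnFormat"` are assumptions based on the AWS ECS API shapes and are not shown in this excerpt.

```go
// Hypothetical sketch of calling the new PutAccountSetting operation.
// Field names Name/Value and the setting name are assumed, not taken from this diff.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))

	// Opt the calling identity in to the new long ARN / resource ID format,
	// which the doc comment above notes is required for resource tagging.
	out, err := svc.PutAccountSetting(&ecs.PutAccountSettingInput{
		Name:  aws.String("serviceLongArnFormat"), // assumed setting name
		Value: aws.String("enabled"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```
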
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/PutAttributes -func (c *ECS) PutAttributes(input *PutAttributesInput) (*PutAttributesOutput, error) { - req, out := c.PutAttributesRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/PutAccountSetting +func (c *ECS) PutAccountSetting(input *PutAccountSettingInput) (*PutAccountSettingOutput, error) { + req, out := c.PutAccountSettingRequest(input) return out, req.Send() } -// PutAttributesWithContext is the same as PutAttributes with the addition of +// PutAccountSettingWithContext is the same as PutAccountSetting with the addition of // the ability to pass a context and additional request options. // -// See PutAttributes for details on how to use this API operation. +// See PutAccountSetting for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ECS) PutAttributesWithContext(ctx aws.Context, input *PutAttributesInput, opts ...request.Option) (*PutAttributesOutput, error) { - req, out := c.PutAttributesRequest(input) +func (c *ECS) PutAccountSettingWithContext(ctx aws.Context, input *PutAccountSettingInput, opts ...request.Option) (*PutAccountSettingOutput, error) { + req, out := c.PutAccountSettingRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRegisterContainerInstance = "RegisterContainerInstance" +const opPutAttributes = "PutAttributes" -// RegisterContainerInstanceRequest generates a "aws/request.Request" representing the -// client's request for the RegisterContainerInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// PutAttributesRequest generates a "aws/request.Request" representing the +// client's request for the PutAttributes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RegisterContainerInstance for more information on using the RegisterContainerInstance +// See PutAttributes for more information on using the PutAttributes // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RegisterContainerInstanceRequest method. -// req, resp := client.RegisterContainerInstanceRequest(params) -// +// // Example sending a request using the PutAttributesRequest method. 
+// req, resp := client.PutAttributesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/PutAttributes +func (c *ECS) PutAttributesRequest(input *PutAttributesInput) (req *request.Request, output *PutAttributesOutput) { + op := &request.Operation{ + Name: opPutAttributes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutAttributesInput{} + } + + output = &PutAttributesOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutAttributes API operation for Amazon EC2 Container Service. +// +// Create or update an attribute on an Amazon ECS resource. If the attribute +// does not exist, it is created. If the attribute exists, its value is replaced +// with the specified value. To delete an attribute, use DeleteAttributes. For +// more information, see Attributes (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html#attributes) +// in the Amazon Elastic Container Service Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Service's +// API operation PutAttributes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeClusterNotFoundException "ClusterNotFoundException" +// The specified cluster could not be found. You can view your available clusters +// with ListClusters. Amazon ECS clusters are Region-specific. +// +// * ErrCodeTargetNotFoundException "TargetNotFoundException" +// The specified target could not be found. You can view your available container +// instances with ListContainerInstances. Amazon ECS container instances are +// cluster-specific and Region-specific. +// +// * ErrCodeAttributeLimitExceededException "AttributeLimitExceededException" +// You can apply up to 10 custom attributes per resource. You can view the attributes +// of a resource with ListAttributes. You can remove existing attributes on +// a resource with DeleteAttributes. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/PutAttributes +func (c *ECS) PutAttributes(input *PutAttributesInput) (*PutAttributesOutput, error) { + req, out := c.PutAttributesRequest(input) + return out, req.Send() +} + +// PutAttributesWithContext is the same as PutAttributes with the addition of +// the ability to pass a context and additional request options. +// +// See PutAttributes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) PutAttributesWithContext(ctx aws.Context, input *PutAttributesInput, opts ...request.Option) (*PutAttributesOutput, error) { + req, out := c.PutAttributesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opRegisterContainerInstance = "RegisterContainerInstance" + +// RegisterContainerInstanceRequest generates a "aws/request.Request" representing the +// client's request for the RegisterContainerInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RegisterContainerInstance for more information on using the RegisterContainerInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RegisterContainerInstanceRequest method. +// req, resp := client.RegisterContainerInstanceRequest(params) +// // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) @@ -2511,8 +2882,8 @@ const opRegisterTaskDefinition = "RegisterTaskDefinition" // RegisterTaskDefinitionRequest generates a "aws/request.Request" representing the // client's request for the RegisterTaskDefinition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2568,7 +2939,7 @@ func (c *ECS) RegisterTaskDefinitionRequest(input *RegisterTaskDefinitionInput) // definition with the networkMode parameter. The available network modes correspond // to those described in Network settings (https://docs.docker.com/engine/reference/run/#/network-settings) // in the Docker run reference. If you specify the awsvpc network mode, the -// task is allocated an Elastic Network Interface, and you must specify a NetworkConfiguration +// task is allocated an elastic network interface, and you must specify a NetworkConfiguration // when you create a service or run a task with the task definition. For more // information, see Task Networking (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) // in the Amazon Elastic Container Service Developer Guide. @@ -2619,8 +2990,8 @@ const opRunTask = "RunTask" // RunTaskRequest generates a "aws/request.Request" representing the // client's request for the RunTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2672,9 +3043,8 @@ func (c *ECS) RunTaskRequest(input *RunTaskInput) (req *request.Request, output // The Amazon ECS API follows an eventual consistency model, due to the distributed // nature of the system supporting the API. This means that the result of an // API command you run that affects your Amazon ECS resources might not be immediately -// visible to all subsequent commands you run. 
You should keep this in mind -// when you carry out an API command that immediately follows a previous API -// command. +// visible to all subsequent commands you run. Keep this in mind when you carry +// out an API command that immediately follows a previous API command. // // To manage eventual consistency, you can do the following: // @@ -2682,7 +3052,7 @@ func (c *ECS) RunTaskRequest(input *RunTaskInput) (req *request.Request, output // it. Run the DescribeTasks command using an exponential backoff algorithm // to ensure that you allow enough time for the previous command to propagate // through the system. To do this, run the DescribeTasks command repeatedly, -// starting with a couple of seconds of wait time, and increasing gradually +// starting with a couple of seconds of wait time and increasing gradually // up to five minutes of wait time. // // * Add wait time between subsequent commands, even if the DescribeTasks @@ -2712,24 +3082,24 @@ func (c *ECS) RunTaskRequest(input *RunTaskInput) (req *request.Request, output // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // * ErrCodeUnsupportedFeatureException "UnsupportedFeatureException" -// The specified task is not supported in this region. +// The specified task is not supported in this Region. // // * ErrCodePlatformUnknownException "PlatformUnknownException" // The specified platform version does not exist. // // * ErrCodePlatformTaskDefinitionIncompatibilityException "PlatformTaskDefinitionIncompatibilityException" -// The specified platform version does not satisfy the task definition’s required +// The specified platform version does not satisfy the task definition's required // capabilities. // // * ErrCodeAccessDeniedException "AccessDeniedException" // You do not have authorization to perform the requested action. // // * ErrCodeBlockedException "BlockedException" -// Your AWS account has been blocked. Contact AWS Support (http://aws.amazon.com/contact-us/) -// for more information. +// Your AWS account has been blocked. For more information, Contact AWS Support +// (http://aws.amazon.com/contact-us/). // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/RunTask func (c *ECS) RunTask(input *RunTaskInput) (*RunTaskOutput, error) { @@ -2757,8 +3127,8 @@ const opStartTask = "StartTask" // StartTaskRequest generates a "aws/request.Request" representing the // client's request for the StartTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2826,7 +3196,7 @@ func (c *ECS) StartTaskRequest(input *StartTaskInput) (req *request.Request, out // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/StartTask func (c *ECS) StartTask(input *StartTaskInput) (*StartTaskOutput, error) { @@ -2854,8 +3224,8 @@ const opStopTask = "StopTask" // StopTaskRequest generates a "aws/request.Request" representing the // client's request for the StopTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2894,7 +3264,7 @@ func (c *ECS) StopTaskRequest(input *StopTaskInput) (req *request.Request, outpu // StopTask API operation for Amazon EC2 Container Service. // -// Stops a running task. +// Stops a running task. Any tags associated with the task will be deleted. // // When StopTask is called on a task, the equivalent of docker stop is issued // to the containers running in the task. This results in a SIGTERM and a default @@ -2929,7 +3299,7 @@ func (c *ECS) StopTaskRequest(input *StopTaskInput) (req *request.Request, outpu // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/StopTask func (c *ECS) StopTask(input *StopTaskInput) (*StopTaskOutput, error) { @@ -2957,8 +3327,8 @@ const opSubmitContainerStateChange = "SubmitContainerStateChange" // SubmitContainerStateChangeRequest generates a "aws/request.Request" representing the // client's request for the SubmitContainerStateChange operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3047,8 +3417,8 @@ const opSubmitTaskStateChange = "SubmitTaskStateChange" // SubmitTaskStateChangeRequest generates a "aws/request.Request" representing the // client's request for the SubmitTaskStateChange operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3133,12 +3503,205 @@ func (c *ECS) SubmitTaskStateChangeWithContext(ctx aws.Context, input *SubmitTas return out, req.Send() } +const opTagResource = "TagResource" + +// TagResourceRequest generates a "aws/request.Request" representing the +// client's request for the TagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See TagResource for more information on using the TagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagResourceRequest method. +// req, resp := client.TagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/TagResource +func (c *ECS) TagResourceRequest(input *TagResourceInput) (req *request.Request, output *TagResourceOutput) { + op := &request.Operation{ + Name: opTagResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TagResourceInput{} + } + + output = &TagResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// TagResource API operation for Amazon EC2 Container Service. +// +// Associates the specified tags to a resource with the specified resourceArn. +// If existing tags on a resource are not specified in the request parameters, +// they are not changed. When a resource is deleted, the tags associated with +// that resource are deleted as well. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Service's +// API operation TagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server issue. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeClusterNotFoundException "ClusterNotFoundException" +// The specified cluster could not be found. You can view your available clusters +// with ListClusters. Amazon ECS clusters are Region-specific. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/TagResource +func (c *ECS) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + return out, req.Send() +} + +// TagResourceWithContext is the same as TagResource with the addition of +// the ability to pass a context and additional request options. +// +// See TagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) TagResourceWithContext(ctx aws.Context, input *TagResourceInput, opts ...request.Option) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opUntagResource = "UntagResource" + +// UntagResourceRequest generates a "aws/request.Request" representing the +// client's request for the UntagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UntagResource for more information on using the UntagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UntagResourceRequest method. +// req, resp := client.UntagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/UntagResource +func (c *ECS) UntagResourceRequest(input *UntagResourceInput) (req *request.Request, output *UntagResourceOutput) { + op := &request.Operation{ + Name: opUntagResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UntagResourceInput{} + } + + output = &UntagResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// UntagResource API operation for Amazon EC2 Container Service. +// +// Deletes specified tags from a resource. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Service's +// API operation UntagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server issue. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeClusterNotFoundException "ClusterNotFoundException" +// The specified cluster could not be found. You can view your available clusters +// with ListClusters. Amazon ECS clusters are Region-specific. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/UntagResource +func (c *ECS) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + return out, req.Send() +} + +// UntagResourceWithContext is the same as UntagResource with the addition of +// the ability to pass a context and additional request options. +// +// See UntagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
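
> Editor's note (not part of the vendored change): the hunks above add the `ListTagsForResource`, `TagResource`, and `UntagResource` wrappers. A minimal sketch of how a caller such as the provider might use them follows; the top-level function signatures appear verbatim in this diff, while the input/output field names (`ResourceArn`, `Tags`, `TagKeys`, `Key`, `Value`) are assumptions based on the AWS ECS API shapes and are not shown in this excerpt.

```go
// Hypothetical sketch exercising the new ECS tagging operations.
// Input/output field names are assumed; the ARN below is a placeholder.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	svc := ecs.New(session.Must(session.NewSession()))
	clusterArn := "arn:aws:ecs:us-east-1:123456789012:cluster/example" // hypothetical ARN

	// Attach a tag to the cluster.
	if _, err := svc.TagResource(&ecs.TagResourceInput{
		ResourceArn: aws.String(clusterArn),
		Tags: []*ecs.Tag{
			{Key: aws.String("Environment"), Value: aws.String("test")},
		},
	}); err != nil {
		log.Fatal(err)
	}

	// Read the tags back.
	out, err := svc.ListTagsForResource(&ecs.ListTagsForResourceInput{
		ResourceArn: aws.String(clusterArn),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, tag := range out.Tags {
		fmt.Printf("%s=%s\n", aws.StringValue(tag.Key), aws.StringValue(tag.Value))
	}

	// Remove the tag by key.
	if _, err := svc.UntagResource(&ecs.UntagResourceInput{
		ResourceArn: aws.String(clusterArn),
		TagKeys:     []*string{aws.String("Environment")},
	}); err != nil {
		log.Fatal(err)
	}
}
```
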
+func (c *ECS) UntagResourceWithContext(ctx aws.Context, input *UntagResourceInput, opts ...request.Option) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateContainerAgent = "UpdateContainerAgent" // UpdateContainerAgentRequest generates a "aws/request.Request" representing the // client's request for the UpdateContainerAgent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3211,7 +3774,7 @@ func (c *ECS) UpdateContainerAgentRequest(input *UpdateContainerAgentInput) (req // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // * ErrCodeUpdateInProgressException "UpdateInProgressException" // There is already a current Amazon ECS container agent update in progress @@ -3257,8 +3820,8 @@ const opUpdateContainerInstancesState = "UpdateContainerInstancesState" // UpdateContainerInstancesStateRequest generates a "aws/request.Request" representing the // client's request for the UpdateContainerInstancesState operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3333,8 +3896,8 @@ func (c *ECS) UpdateContainerInstancesStateRequest(input *UpdateContainerInstanc // are available). If the maximum is 100%, then replacement tasks can't start // until the draining tasks have stopped. // -// Any PENDING or RUNNING tasks that do not belong to a service are not affected; -// you must wait for them to finish or stop them manually. +// Any PENDING or RUNNING tasks that do not belong to a service are not affected. +// You must wait for them to finish or stop them manually. // // A container instance has completed draining when it has no more RUNNING tasks. // You can verify this using ListTasks. @@ -3364,7 +3927,7 @@ func (c *ECS) UpdateContainerInstancesStateRequest(input *UpdateContainerInstanc // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // See also, https://docs.aws.amazon.com/goto/WebAPI/ecs-2014-11-13/UpdateContainerInstancesState func (c *ECS) UpdateContainerInstancesState(input *UpdateContainerInstancesStateInput) (*UpdateContainerInstancesStateOutput, error) { @@ -3392,8 +3955,8 @@ const opUpdateService = "UpdateService" // UpdateServiceRequest generates a "aws/request.Request" representing the // client's request for the UpdateService operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3439,8 +4002,17 @@ func (c *ECS) UpdateServiceRequest(input *UpdateServiceInput) (req *request.Requ // in a service by specifying the cluster that the service is running in and // a new desiredCount parameter. // -// You can use UpdateService to modify your task definition and deploy a new -// version of your service. +// If you have updated the Docker image of your application, you can create +// a new task definition with that image and deploy it to your service. The +// service scheduler uses the minimum healthy percent and maximum percent parameters +// (in the service's deployment configuration) to determine the deployment strategy. +// +// If your updated Docker image uses the same tag as what is in the existing +// task definition for your service (for example, my_image:latest), you do not +// need to create a new revision of your task definition. You can update the +// service using the forceNewDeployment option. The new tasks launched by the +// deployment pull the current image/tag combination from your repository when +// they start. // // You can also update the deployment configuration of a service. When a deployment // is triggered by updating the task definition of a service, the service scheduler @@ -3522,11 +4094,11 @@ func (c *ECS) UpdateServiceRequest(input *UpdateServiceInput) (req *request.Requ // // * ErrCodeClusterNotFoundException "ClusterNotFoundException" // The specified cluster could not be found. You can view your available clusters -// with ListClusters. Amazon ECS clusters are region-specific. +// with ListClusters. Amazon ECS clusters are Region-specific. // // * ErrCodeServiceNotFoundException "ServiceNotFoundException" // The specified service could not be found. You can view your available services -// with ListServices. Amazon ECS services are cluster-specific and region-specific. +// with ListServices. Amazon ECS services are cluster-specific and Region-specific. // // * ErrCodeServiceNotActiveException "ServiceNotActiveException" // The specified service is not active. You can't update a service that is inactive. @@ -3536,7 +4108,7 @@ func (c *ECS) UpdateServiceRequest(input *UpdateServiceInput) (req *request.Requ // The specified platform version does not exist. // // * ErrCodePlatformTaskDefinitionIncompatibilityException "PlatformTaskDefinitionIncompatibilityException" -// The specified platform version does not satisfy the task definition’s required +// The specified platform version does not satisfy the task definition's required // capabilities. // // * ErrCodeAccessDeniedException "AccessDeniedException" @@ -3568,7 +4140,7 @@ func (c *ECS) UpdateServiceWithContext(ctx aws.Context, input *UpdateServiceInpu type Attachment struct { _ struct{} `type:"structure"` - // Details of the attachment. For Elastic Network Interfaces, this includes + // Details of the attachment. For elastic network interfaces, this includes // the network interface ID, the MAC address, the subnet ID, and the private // IPv4 address. 
Details []*KeyValuePair `locationName:"details" type:"list"` @@ -3751,13 +4323,20 @@ type AwsVpcConfiguration struct { _ struct{} `type:"structure"` // Whether the task's elastic network interface receives a public IP address. + // The default value is DISABLED. AssignPublicIp *string `locationName:"assignPublicIp" type:"string" enum:"AssignPublicIp"` // The security groups associated with the task or service. If you do not specify - // a security group, the default security group for the VPC is used. + // a security group, the default security group for the VPC is used. There is + // a limit of 5 security groups able to be specified per AwsVpcConfiguration. + // + // All specified security groups must be from the same VPC. SecurityGroups []*string `locationName:"securityGroups" type:"list"` - // The subnets associated with the task or service. + // The subnets associated with the task or service. There is a limit of 16 subnets + // able to be specified per AwsVpcConfiguration. + // + // All specified subnets must be from the same VPC. // // Subnets is a required field Subnets []*string `locationName:"subnets" type:"list" required:"true"` @@ -3816,7 +4395,7 @@ type Cluster struct { ActiveServicesCount *int64 `locationName:"activeServicesCount" type:"integer"` // The Amazon Resource Name (ARN) that identifies the cluster. The ARN contains - // the arn:aws:ecs namespace, followed by the region of the cluster, the AWS + // the arn:aws:ecs namespace, followed by the Region of the cluster, the AWS // account ID of the cluster owner, the cluster namespace, and then the cluster // name. For example, arn:aws:ecs:region:012345678910:cluster/test.. ClusterArn *string `locationName:"clusterArn" type:"string"` @@ -3827,7 +4406,8 @@ type Cluster struct { // The number of tasks in the cluster that are in the PENDING state. PendingTasksCount *int64 `locationName:"pendingTasksCount" type:"integer"` - // The number of container instances registered into the cluster. + // The number of container instances registered into the cluster. This includes + // container instances in both ACTIVE and DRAINING status. RegisteredContainerInstancesCount *int64 `locationName:"registeredContainerInstancesCount" type:"integer"` // The number of tasks in the cluster that are in the RUNNING state. @@ -3857,6 +4437,12 @@ type Cluster struct { // indicates that you can register container instances with the cluster and // the associated instances can accept tasks. Status *string `locationName:"status" type:"string"` + + // The metadata that you apply to the cluster to help you categorize and organize + // them. Each tag consists of a key and an optional value, both of which you + // define. Tag keys can have a maximum character length of 128 characters, and + // tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` } // String returns the string representation @@ -3917,6 +4503,12 @@ func (s *Cluster) SetStatus(v string) *Cluster { return s } +// SetTags sets the Tags field's value. +func (s *Cluster) SetTags(v []*Tag) *Cluster { + s.Tags = v + return s +} + // A Docker container that is part of a task. type Container struct { _ struct{} `type:"structure"` @@ -3928,7 +4520,8 @@ type Container struct { ExitCode *int64 `locationName:"exitCode" type:"integer"` // The health status of the container. If health checks are not configured for - // this container in its task definition, then it reports health status as UNKNOWN. 
+ // this container in its task definition, then it reports the health status + // as UNKNOWN. HealthStatus *string `locationName:"healthStatus" type:"string" enum:"HealthStatus"` // The last known status of the container. @@ -4021,16 +4614,16 @@ type ContainerDefinition struct { _ struct{} `type:"structure"` // The command that is passed to the container. This parameter maps to Cmd in - // the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the COMMAND parameter to docker run (https://docs.docker.com/engine/reference/run/). // For more information, see https://docs.docker.com/engine/reference/builder/#cmd // (https://docs.docker.com/engine/reference/builder/#cmd). Command []*string `locationName:"command" type:"list"` // The number of cpu units reserved for the container. This parameter maps to - // CpuShares in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // CpuShares in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --cpu-shares option to docker run (https://docs.docker.com/engine/reference/run/). // // This field is optional for tasks using the Fargate launch type, and the only @@ -4066,7 +4659,7 @@ type ContainerDefinition struct { // uses the CPU value to calculate the relative CPU share ratios for running // containers. For more information, see CPU share constraint (https://docs.docker.com/engine/reference/run/#cpu-share-constraint) // in the Docker documentation. The minimum valid CPU share value that the Linux - // kernel allows is 2; however, the CPU parameter is not required, and you can + // kernel allows is 2. However, the CPU parameter is not required, and you can // use CPU values below 2 in your container definitions. For CPU values below // 2 (including null), the behavior varies based on your Amazon ECS container // agent version: @@ -4085,44 +4678,44 @@ type ContainerDefinition struct { Cpu *int64 `locationName:"cpu" type:"integer"` // When this parameter is true, networking is disabled within the container. - // This parameter maps to NetworkDisabled in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/). + // This parameter maps to NetworkDisabled in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/). // // This parameter is not supported for Windows containers. DisableNetworking *bool `locationName:"disableNetworking" type:"boolean"` // A list of DNS search domains that are presented to the container. 
This parameter - // maps to DnsSearch in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // maps to DnsSearch in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --dns-search option to docker run (https://docs.docker.com/engine/reference/run/). // // This parameter is not supported for Windows containers. DnsSearchDomains []*string `locationName:"dnsSearchDomains" type:"list"` // A list of DNS servers that are presented to the container. This parameter - // maps to Dns in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // maps to Dns in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --dns option to docker run (https://docs.docker.com/engine/reference/run/). // // This parameter is not supported for Windows containers. DnsServers []*string `locationName:"dnsServers" type:"list"` // A key/value map of labels to add to the container. This parameter maps to - // Labels in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // Labels in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --label option to docker run (https://docs.docker.com/engine/reference/run/). // This parameter requires version 1.18 of the Docker Remote API or greater // on your container instance. To check the Docker Remote API version on your // container instance, log in to your container instance and run the following - // command: sudo docker version | grep "Server API version" + // command: sudo docker version --format '{{.Server.APIVersion}}' DockerLabels map[string]*string `locationName:"dockerLabels" type:"map"` // A list of strings to provide custom labels for SELinux and AppArmor multi-level // security systems. This field is not valid for containers in tasks using the // Fargate launch type. // - // This parameter maps to SecurityOpt in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // This parameter maps to SecurityOpt in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --security-opt option to docker run (https://docs.docker.com/engine/reference/run/). // // The Amazon ECS container agent running on a container instance must register @@ -4140,16 +4733,16 @@ type ContainerDefinition struct { // agent or enter your commands and arguments as command array items instead. // // The entry point that is passed to the container. 
This parameter maps to Entrypoint - // in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --entrypoint option to docker run (https://docs.docker.com/engine/reference/run/). // For more information, see https://docs.docker.com/engine/reference/builder/#entrypoint // (https://docs.docker.com/engine/reference/builder/#entrypoint). EntryPoint []*string `locationName:"entryPoint" type:"list"` // The environment variables to pass to a container. This parameter maps to - // Env in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // Env in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --env option to docker run (https://docs.docker.com/engine/reference/run/). // // We do not recommend using plaintext environment variables for sensitive information, @@ -4171,25 +4764,28 @@ type ContainerDefinition struct { Essential *bool `locationName:"essential" type:"boolean"` // A list of hostnames and IP address mappings to append to the /etc/hosts file - // on the container. If using the Fargate launch type, this may be used to list - // non-Fargate hosts you want the container to talk to. This parameter maps - // to ExtraHosts in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) - // and the --add-host option to docker run (https://docs.docker.com/engine/reference/run/). + // on the container. This parameter maps to ExtraHosts in the Create a container + // (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section + // of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) and + // the --add-host option to docker run (https://docs.docker.com/engine/reference/run/). // - // This parameter is not supported for Windows containers. + // This parameter is not supported for Windows containers or tasks that use + // the awsvpc network mode. ExtraHosts []*HostEntry `locationName:"extraHosts" type:"list"` // The health check command and associated configuration parameters for the - // container. This parameter maps to HealthCheck in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // container. This parameter maps to HealthCheck in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the HEALTHCHECK parameter of docker run (https://docs.docker.com/engine/reference/run/). HealthCheck *HealthCheck `locationName:"healthCheck" type:"structure"` // The hostname to use for your container. 
This parameter maps to Hostname in - // the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --hostname option to docker run (https://docs.docker.com/engine/reference/run/). + // + // The hostname parameter is not supported if you are using the awsvpc network + // mode. Hostname *string `locationName:"hostname" type:"string"` // The image used to start a container. This string is passed directly to the @@ -4198,9 +4794,9 @@ type ContainerDefinition struct { // repository-url/image@digest. Up to 255 letters (uppercase and lowercase), // numbers, hyphens, underscores, colons, periods, forward slashes, and number // signs are allowed. This parameter maps to Image in the Create a container - // (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) - // and the IMAGE parameter of docker run (https://docs.docker.com/engine/reference/run/). + // (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section + // of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) and + // the IMAGE parameter of docker run (https://docs.docker.com/engine/reference/run/). // // * When a new task starts, the Amazon ECS container agent pulls the latest // version of the specified image and tag for the container to use. However, @@ -4223,6 +4819,13 @@ type ContainerDefinition struct { // name (for example, quay.io/assemblyline/ubuntu). Image *string `locationName:"image" type:"string"` + // When this parameter is true, this allows you to deploy containerized applications + // that require stdin or a tty to be allocated. This parameter maps to OpenStdin + // in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) + // and the --interactive option to docker run (https://docs.docker.com/engine/reference/run/). + Interactive *bool `locationName:"interactive" type:"boolean"` + // The link parameter allows containers to communicate with each other without // the need for port mappings. Only supported if the network mode of a task // definition is set to bridge. The name:internalName construct is analogous @@ -4230,8 +4833,8 @@ type ContainerDefinition struct { // numbers, hyphens, and underscores are allowed. For more information about // linking Docker containers, go to https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/ // (https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/). 
- // This parameter maps to Links in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // This parameter maps to Links in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --link option to docker run (https://docs.docker.com/engine/reference/commandline/run/). // // This parameter is not supported for Windows containers. @@ -4250,13 +4853,13 @@ type ContainerDefinition struct { // The log configuration specification for the container. // - // If using the Fargate launch type, the only supported value is awslogs. + // If you are using the Fargate launch type, the only supported value is awslogs. // - // This parameter maps to LogConfig in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // This parameter maps to LogConfig in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --log-driver option to docker run (https://docs.docker.com/engine/reference/run/). // By default, containers use the same logging driver that the Docker daemon - // uses; however the container may use a different logging driver than the Docker + // uses. However the container may use a different logging driver than the Docker // daemon by specifying a log driver with this parameter in the container definition. // To use a different logging driver for a container, the log system must be // configured properly on the container instance (or on a different log server @@ -4271,7 +4874,7 @@ type ContainerDefinition struct { // This parameter requires version 1.18 of the Docker Remote API or greater // on your container instance. To check the Docker Remote API version on your // container instance, log in to your container instance and run the following - // command: sudo docker version | grep "Server API version" + // command: sudo docker version --format '{{.Server.APIVersion}}' // // The Amazon ECS container agent running on a container instance must register // the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS @@ -4283,8 +4886,8 @@ type ContainerDefinition struct { // The hard limit (in MiB) of memory to present to the container. If your container // attempts to exceed the memory specified here, the container is killed. This - // parameter maps to Memory in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // parameter maps to Memory in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --memory option to docker run (https://docs.docker.com/engine/reference/run/). // // If your containers are part of a task using the Fargate launch type, this @@ -4296,7 +4899,7 @@ type ContainerDefinition struct { // in container definitions. 
If you specify both, memory must be greater than // memoryReservation. If you specify memoryReservation, then that value is subtracted // from the available memory resources for the container instance on which the - // container is placed; otherwise, the value of memory is used. + // container is placed. Otherwise, the value of memory is used. // // The Docker daemon reserves a minimum of 4 MiB of memory for a container, // so you should not specify fewer than 4 MiB of memory for your containers. @@ -4304,19 +4907,19 @@ type ContainerDefinition struct { // The soft limit (in MiB) of memory to reserve for the container. When system // memory is under heavy contention, Docker attempts to keep the container memory - // to this soft limit; however, your container can consume more memory when + // to this soft limit. However, your container can consume more memory when // it needs to, up to either the hard limit specified with the memory parameter // (if applicable), or all of the available memory on the container instance, // whichever comes first. This parameter maps to MemoryReservation in the Create - // a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --memory-reservation option to docker run (https://docs.docker.com/engine/reference/run/). // // You must specify a non-zero integer for one or both of memory or memoryReservation // in container definitions. If you specify both, memory must be greater than // memoryReservation. If you specify memoryReservation, then that value is subtracted // from the available memory resources for the container instance on which the - // container is placed; otherwise, the value of memory is used. + // container is placed. Otherwise, the value of memory is used. // // For example, if your container normally uses 128 MiB of memory, but occasionally // bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation @@ -4331,8 +4934,8 @@ type ContainerDefinition struct { // The mount points for data volumes in your container. // - // This parameter maps to Volumes in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // This parameter maps to Volumes in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --volume option to docker run (https://docs.docker.com/engine/reference/run/). // // Windows containers can mount whole directories on the same drive as $env:ProgramData. @@ -4344,8 +4947,8 @@ type ContainerDefinition struct { // in a task definition, the name of one container can be entered in the links // of another container to connect the containers. Up to 255 letters (uppercase // and lowercase), numbers, hyphens, and underscores are allowed. 
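The `memory` hard limit and `memoryReservation` soft limit documented above work together: when both are set, `memory` must be the larger value, and the documentation suggests a 128 MiB reservation with a 300 MiB hard limit for a bursty workload. A minimal sketch of that pattern; the container name, image, and `Essential` field are illustrative assumptions, not taken from this hunk.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	// Soft limit (memoryReservation) kept below the hard limit (memory),
	// as required when both are specified.
	def := &ecs.ContainerDefinition{
		Name:              aws.String("web"),          // illustrative name
		Image:             aws.String("nginx:latest"), // illustrative image
		Essential:         aws.Bool(true),             // assumed field, not shown in this hunk
		MemoryReservation: aws.Int64(128),             // MiB normally used
		Memory:            aws.Int64(300),             // MiB hard ceiling; the container is killed above this
	}
	fmt.Println(def)
}
```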
This parameter - // maps to name in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // maps to name in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --name option to docker run (https://docs.docker.com/engine/reference/run/). Name *string `locationName:"name" type:"string"` @@ -4360,8 +4963,8 @@ type ContainerDefinition struct { // There is no loopback for port mappings on Windows, so you cannot access a // container's mapped port from the host itself. // - // This parameter maps to PortBindings in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // This parameter maps to PortBindings in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --publish option to docker run (https://docs.docker.com/engine/reference/run/). // If the network mode of a task definition is set to none, then you can't specify // port mappings. If the network mode of a task definition is set to host, then @@ -4370,59 +4973,84 @@ type ContainerDefinition struct { // // After a task reaches the RUNNING status, manual and automatic host and container // port assignments are visible in the Network Bindings section of a container - // description for a selected task in the Amazon ECS console, or the networkBindings - // section DescribeTasks responses. + // description for a selected task in the Amazon ECS console. The assignments + // are also visible in the networkBindings section DescribeTasks responses. PortMappings []*PortMapping `locationName:"portMappings" type:"list"` // When this parameter is true, the container is given elevated privileges on // the host container instance (similar to the root user). This parameter maps - // to Privileged in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // to Privileged in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --privileged option to docker run (https://docs.docker.com/engine/reference/run/). // // This parameter is not supported for Windows containers or tasks using the // Fargate launch type. Privileged *bool `locationName:"privileged" type:"boolean"` + // When this parameter is true, a TTY is allocated. This parameter maps to Tty + // in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) + // and the --tty option to docker run (https://docs.docker.com/engine/reference/run/). + PseudoTerminal *bool `locationName:"pseudoTerminal" type:"boolean"` + // When this parameter is true, the container is given read-only access to its // root file system. 
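The new `interactive` and `pseudoTerminal` parameters added in this update map to Docker's `OpenStdin` and `Tty`, i.e. the equivalent of `docker run --interactive --tty`. A minimal sketch using the generated setters shown in this diff; the container name and image are illustrative assumptions.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	def := &ecs.ContainerDefinition{
		Name:  aws.String("shell"),         // illustrative
		Image: aws.String("amazonlinux:2"), // illustrative
	}
	def.SetInteractive(true)    // maps to OpenStdin (--interactive)
	def.SetPseudoTerminal(true) // maps to Tty (--tty)

	fmt.Println(def)
}
```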
This parameter maps to ReadonlyRootfs in the Create a container - // (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) - // and the --read-only option to docker run. + // (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) section + // of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) and + // the --read-only option to docker run (https://docs.docker.com/engine/reference/run/). // // This parameter is not supported for Windows containers. ReadonlyRootFilesystem *bool `locationName:"readonlyRootFilesystem" type:"boolean"` + // The private repository authentication credentials to use. + RepositoryCredentials *RepositoryCredentials `locationName:"repositoryCredentials" type:"structure"` + + // The secrets to pass to the container. + Secrets []*Secret `locationName:"secrets" type:"list"` + + // A list of namespaced kernel parameters to set in the container. This parameter + // maps to Sysctls in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) + // and the --sysctl option to docker run (https://docs.docker.com/engine/reference/run/). + // + // It is not recommended that you specify network-related systemControls parameters + // for multiple containers in a single task that also uses either the awsvpc + // or host network modes. For tasks that use the awsvpc network mode, the container + // that is started last determines which systemControls parameters take effect. + // For tasks that use the host network mode, it changes the container instance's + // namespaced kernel parameters as well as the containers. + SystemControls []*SystemControl `locationName:"systemControls" type:"list"` + // A list of ulimits to set in the container. This parameter maps to Ulimits - // in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --ulimit option to docker run (https://docs.docker.com/engine/reference/run/). // Valid naming values are displayed in the Ulimit data type. This parameter // requires version 1.18 of the Docker Remote API or greater on your container // instance. To check the Docker Remote API version on your container instance, // log in to your container instance and run the following command: sudo docker - // version | grep "Server API version" + // version --format '{{.Server.APIVersion}}' // // This parameter is not supported for Windows containers. Ulimits []*Ulimit `locationName:"ulimits" type:"list"` // The user name to use inside the container. 
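The `repositoryCredentials`, `secrets`, and `systemControls` fields introduced above can all be set on one container definition. A hedged sketch: the nested field names (`CredentialsParameter`, `Name`/`ValueFrom`, `Namespace`/`Value`) come from the corresponding SDK types rather than from this hunk, and the ARNs and values are purely illustrative.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	def := &ecs.ContainerDefinition{
		Name:  aws.String("app"),                       // illustrative
		Image: aws.String("example.com/private/app:1"), // illustrative private image
	}

	// Credentials for pulling the private image (assumed to reference a
	// Secrets Manager secret by ARN).
	def.SetRepositoryCredentials(&ecs.RepositoryCredentials{
		CredentialsParameter: aws.String("arn:aws:secretsmanager:us-east-1:111122223333:secret:dockerhub-example"),
	})

	// Secrets passed to the container (assumed Name/ValueFrom fields).
	def.SetSecrets([]*ecs.Secret{
		{
			Name:      aws.String("DB_PASSWORD"),
			ValueFrom: aws.String("arn:aws:ssm:us-east-1:111122223333:parameter/db-password-example"),
		},
	})

	// Namespaced kernel parameters, the --sysctl equivalent described above.
	def.SetSystemControls([]*ecs.SystemControl{
		{
			Namespace: aws.String("net.core.somaxconn"),
			Value:     aws.String("1024"),
		},
	})

	fmt.Println(def)
}
```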
This parameter maps to User in - // the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --user option to docker run (https://docs.docker.com/engine/reference/run/). // // This parameter is not supported for Windows containers. User *string `locationName:"user" type:"string"` // Data volumes to mount from another container. This parameter maps to VolumesFrom - // in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --volumes-from option to docker run (https://docs.docker.com/engine/reference/run/). VolumesFrom []*VolumeFrom `locationName:"volumesFrom" type:"list"` // The working directory in which to run commands inside the container. This - // parameter maps to WorkingDir in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // parameter maps to WorkingDir in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --workdir option to docker run (https://docs.docker.com/engine/reference/run/). WorkingDirectory *string `locationName:"workingDirectory" type:"string"` } @@ -4465,17 +5093,32 @@ func (s *ContainerDefinition) Validate() error { invalidParams.AddNested("LogConfiguration", err.(request.ErrInvalidParams)) } } - if s.Ulimits != nil { - for i, v := range s.Ulimits { + if s.RepositoryCredentials != nil { + if err := s.RepositoryCredentials.Validate(); err != nil { + invalidParams.AddNested("RepositoryCredentials", err.(request.ErrInvalidParams)) + } + } + if s.Secrets != nil { + for i, v := range s.Secrets { if v == nil { continue } if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Ulimits", i), err.(request.ErrInvalidParams)) + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Secrets", i), err.(request.ErrInvalidParams)) } } } - + if s.Ulimits != nil { + for i, v := range s.Ulimits { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Ulimits", i), err.(request.ErrInvalidParams)) + } + } + } + if invalidParams.Len() > 0 { return invalidParams } @@ -4566,6 +5209,12 @@ func (s *ContainerDefinition) SetImage(v string) *ContainerDefinition { return s } +// SetInteractive sets the Interactive field's value. +func (s *ContainerDefinition) SetInteractive(v bool) *ContainerDefinition { + s.Interactive = &v + return s +} + // SetLinks sets the Links field's value. 
func (s *ContainerDefinition) SetLinks(v []*string) *ContainerDefinition { s.Links = v @@ -4620,12 +5269,36 @@ func (s *ContainerDefinition) SetPrivileged(v bool) *ContainerDefinition { return s } +// SetPseudoTerminal sets the PseudoTerminal field's value. +func (s *ContainerDefinition) SetPseudoTerminal(v bool) *ContainerDefinition { + s.PseudoTerminal = &v + return s +} + // SetReadonlyRootFilesystem sets the ReadonlyRootFilesystem field's value. func (s *ContainerDefinition) SetReadonlyRootFilesystem(v bool) *ContainerDefinition { s.ReadonlyRootFilesystem = &v return s } +// SetRepositoryCredentials sets the RepositoryCredentials field's value. +func (s *ContainerDefinition) SetRepositoryCredentials(v *RepositoryCredentials) *ContainerDefinition { + s.RepositoryCredentials = v + return s +} + +// SetSecrets sets the Secrets field's value. +func (s *ContainerDefinition) SetSecrets(v []*Secret) *ContainerDefinition { + s.Secrets = v + return s +} + +// SetSystemControls sets the SystemControls field's value. +func (s *ContainerDefinition) SetSystemControls(v []*SystemControl) *ContainerDefinition { + s.SystemControls = v + return s +} + // SetUlimits sets the Ulimits field's value. func (s *ContainerDefinition) SetUlimits(v []*Ulimit) *ContainerDefinition { s.Ulimits = v @@ -4656,15 +5329,15 @@ type ContainerInstance struct { _ struct{} `type:"structure"` // This parameter returns true if the agent is connected to Amazon ECS. Registered - // instances with an agent that may be unhealthy or stopped return false. Instances - // without a connected agent can't accept placement requests. + // instances with an agent that may be unhealthy or stopped return false. Only + // instances connected to an agent can accept placement requests. AgentConnected *bool `locationName:"agentConnected" type:"boolean"` // The status of the most recent agent update. If an update has never been requested, // this value is NULL. AgentUpdateStatus *string `locationName:"agentUpdateStatus" type:"string" enum:"AgentUpdateStatus"` - // The Elastic Network Interfaces associated with the container instance. + // The elastic network interfaces associated with the container instance. Attachments []*Attachment `locationName:"attachments" type:"list"` // The attributes set for the container instance, either by the Amazon ECS container @@ -4672,7 +5345,7 @@ type ContainerInstance struct { Attributes []*Attribute `locationName:"attributes" type:"list"` // The Amazon Resource Name (ARN) of the container instance. The ARN contains - // the arn:aws:ecs namespace, followed by the region of the container instance, + // the arn:aws:ecs namespace, followed by the Region of the container instance, // the AWS account ID of the container instance owner, the container-instance // namespace, and then the container instance ID. For example, arn:aws:ecs:region:aws_account_id:container-instance/container_instance_ID. ContainerInstanceArn *string `locationName:"containerInstanceArn" type:"string"` @@ -4683,21 +5356,25 @@ type ContainerInstance struct { // The number of tasks on the container instance that are in the PENDING status. PendingTasksCount *int64 `locationName:"pendingTasksCount" type:"integer"` - // The Unix time stamp for when the container instance was registered. - RegisteredAt *time.Time `locationName:"registeredAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the container instance was registered. 
+ RegisteredAt *time.Time `locationName:"registeredAt" type:"timestamp"` - // For most resource types, this parameter describes the registered resources - // on the container instance that are in use by current tasks. For port resource - // types, this parameter describes the ports that were reserved by the Amazon - // ECS container agent when it registered the container instance with Amazon - // ECS. + // For CPU and memory resource types, this parameter describes the amount of + // each resource that was available on the container instance when the container + // agent registered it with Amazon ECS. This value represents the total amount + // of CPU and memory that can be allocated on this container instance to tasks. + // For port resource types, this parameter describes the ports that were reserved + // by the Amazon ECS container agent when it registered the container instance + // with Amazon ECS. RegisteredResources []*Resource `locationName:"registeredResources" type:"list"` - // For most resource types, this parameter describes the remaining resources - // of the container instance that are available for new tasks. For port resource - // types, this parameter describes the ports that are reserved by the Amazon - // ECS container agent and any containers that have reserved port mappings; - // any port that is not specified here is available for new tasks. + // For CPU and memory resource types, this parameter describes the remaining + // CPU and memory that has not already been allocated to tasks and is therefore + // available for new tasks. For port resource types, this parameter describes + // the ports that were reserved by the Amazon ECS container agent (at instance + // registration time) and any task containers that have reserved port mappings + // on the host (with the host or bridge network mode). Any port that is not + // specified here is available for new tasks. RemainingResources []*Resource `locationName:"remainingResources" type:"list"` // The number of tasks on the container instance that are in the RUNNING status. @@ -4711,6 +5388,12 @@ type ContainerInstance struct { // in the Amazon Elastic Container Service Developer Guide. Status *string `locationName:"status" type:"string"` + // The metadata that you apply to the container instance to help you categorize + // and organize them. Each tag consists of a key and an optional value, both + // of which you define. Tag keys can have a maximum character length of 128 + // characters, and tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` + // The version counter for the container instance. Every time a container instance // experiences a change that triggers a CloudWatch event, the version counter // is incremented. If you are replicating your Amazon ECS container instance @@ -4807,6 +5490,12 @@ func (s *ContainerInstance) SetStatus(v string) *ContainerInstance { return s } +// SetTags sets the Tags field's value. +func (s *ContainerInstance) SetTags(v []*Tag) *ContainerInstance { + s.Tags = v + return s +} + // SetVersion sets the Version field's value. func (s *ContainerInstance) SetVersion(v int64) *ContainerInstance { s.Version = &v @@ -4968,6 +5657,12 @@ type CreateClusterInput struct { // you create a cluster named default. Up to 255 letters (uppercase and lowercase), // numbers, hyphens, and underscores are allowed. 
ClusterName *string `locationName:"clusterName" type:"string"` + + // The metadata that you apply to the cluster to help you categorize and organize + // them. Each tag consists of a key and an optional value, both of which you + // define. Tag keys can have a maximum character length of 128 characters, and + // tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` } // String returns the string representation @@ -4980,12 +5675,38 @@ func (s CreateClusterInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateClusterInput"} + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetClusterName sets the ClusterName field's value. func (s *CreateClusterInput) SetClusterName(v string) *CreateClusterInput { s.ClusterName = &v return s } +// SetTags sets the Tags field's value. +func (s *CreateClusterInput) SetTags(v []*Tag) *CreateClusterInput { + s.Tags = v + return s +} + type CreateClusterOutput struct { _ struct{} `type:"structure"` @@ -5012,8 +5733,8 @@ func (s *CreateClusterOutput) SetCluster(v *Cluster) *CreateClusterOutput { type CreateServiceInput struct { _ struct{} `type:"structure"` - // Unique, case-sensitive identifier you provide to ensure the idempotency of - // the request. Up to 32 ASCII characters are allowed. + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. Up to 32 ASCII characters are allowed. ClientToken *string `locationName:"clientToken" type:"string"` // The short name or full Amazon Resource Name (ARN) of the cluster on which @@ -5027,16 +5748,20 @@ type CreateServiceInput struct { // The number of instantiations of the specified task definition to place and // keep running on your cluster. - // - // DesiredCount is a required field - DesiredCount *int64 `locationName:"desiredCount" type:"integer" required:"true"` + DesiredCount *int64 `locationName:"desiredCount" type:"integer"` + + // Specifies whether to enable Amazon ECS managed tags for the tasks within + // the service. For more information, see Tagging Your Amazon ECS Resources + // (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Using_Tags.html) + // in the Amazon Elastic Container Service Developer Guide. + EnableECSManagedTags *bool `locationName:"enableECSManagedTags" type:"boolean"` // The period of time, in seconds, that the Amazon ECS service scheduler should // ignore unhealthy Elastic Load Balancing target health checks after a task // has first started. This is only valid if your service is configured to use // a load balancer. If your service's tasks take a while to start and respond // to Elastic Load Balancing health checks, you can specify a health check grace - // period of up to 1,800 seconds during which the ECS service scheduler ignores + // period of up to 7,200 seconds during which the ECS service scheduler ignores // health check status. This grace period can prevent the ECS service scheduler // from marking tasks as unhealthy and stopping them before they have time to // come up. @@ -5062,18 +5787,25 @@ type CreateServiceInput struct { // balancer. 
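The new `Tags` field on `CreateClusterInput` (with the matching `Validate` logic above) allows a cluster to be tagged at creation time. A minimal sketch; the cluster name and tag key/value are illustrative, and `Tag` is assumed to carry `Key`/`Value` fields as elsewhere in this SDK update.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ecs.New(sess)

	// Create a cluster with a tag attached at creation time.
	out, err := svc.CreateCluster(&ecs.CreateClusterInput{
		ClusterName: aws.String("example"), // illustrative
		Tags: []*ecs.Tag{
			{Key: aws.String("environment"), Value: aws.String("test")}, // illustrative tag
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Cluster)
}
```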
When a task from this service is placed on a container instance, // the container instance and port combination is registered as a target in // the target group specified here. + // + // Services with tasks that use the awsvpc network mode (for example, those + // with the Fargate launch type) only support Application Load Balancers and + // Network Load Balancers. Classic Load Balancers are not supported. Also, when + // you create any target groups for these services, you must choose ip as the + // target type, not instance, because tasks that use the awsvpc network mode + // are associated with an elastic network interface, not an Amazon EC2 instance. LoadBalancers []*LoadBalancer `locationName:"loadBalancers" type:"list"` // The network configuration for the service. This parameter is required for - // task definitions that use the awsvpc network mode to receive their own Elastic - // Network Interface, and it is not supported for other network modes. For more + // task definitions that use the awsvpc network mode to receive their own elastic + // network interface, and it is not supported for other network modes. For more // information, see Task Networking (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) // in the Amazon Elastic Container Service Developer Guide. NetworkConfiguration *NetworkConfiguration `locationName:"networkConfiguration" type:"structure"` // An array of placement constraint objects to use for tasks in your service. // You can specify a maximum of 10 constraints per task (this limit includes - // constraints in the task definition and those specified at run time). + // constraints in the task definition and those specified at runtime). PlacementConstraints []*PlacementConstraint `locationName:"placementConstraints" type:"list"` // The placement strategy objects to use for tasks in your service. You can @@ -5084,6 +5816,12 @@ type CreateServiceInput struct { // the latest version is used by default. PlatformVersion *string `locationName:"platformVersion" type:"string"` + // Specifies whether to propagate the tags from the task definition or the service + // to the tasks. If no value is specified, the tags are not propagated. Tags + // can only be propagated to the tasks within the service during service creation. + // To add tags to a task after service creation, use the TagResource API action. + PropagateTags *string `locationName:"propagateTags" type:"string" enum:"PropagateTags"` + // The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon // ECS to make calls to your load balancer on your behalf. This parameter is // only permitted if you are using a load balancer with your service and your @@ -5106,14 +5844,48 @@ type CreateServiceInput struct { // in the IAM User Guide. Role *string `locationName:"role" type:"string"` + // The scheduling strategy to use for the service. For more information, see + // Services (http://docs.aws.amazon.com/AmazonECS/latest/developerguideecs_services.html). + // + // There are two service scheduler strategies available: + // + // * REPLICA-The replica scheduling strategy places and maintains the desired + // number of tasks across your cluster. By default, the service scheduler + // spreads tasks across Availability Zones. You can use task placement strategies + // and constraints to customize task placement decisions. 
+ // + // * DAEMON-The daemon scheduling strategy deploys exactly one task on each + // active container instance that meets all of the task placement constraints + // that you specify in your cluster. When you are using this strategy, there + // is no need to specify a desired number of tasks, a task placement strategy, + // or use Service Auto Scaling policies. + // + // Fargate tasks do not support the DAEMON scheduling strategy. + SchedulingStrategy *string `locationName:"schedulingStrategy" type:"string" enum:"SchedulingStrategy"` + // The name of your service. Up to 255 letters (uppercase and lowercase), numbers, // hyphens, and underscores are allowed. Service names must be unique within // a cluster, but you can have similarly named services in multiple clusters - // within a region or across multiple regions. + // within a Region or across multiple Regions. // // ServiceName is a required field ServiceName *string `locationName:"serviceName" type:"string" required:"true"` + // The details of the service discovery registries to assign to this service. + // For more information, see Service Discovery (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html). + // + // Service discovery is supported for Fargate tasks if you are using platform + // version v1.1.0 or later. For more information, see AWS Fargate Platform Versions + // (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html). + ServiceRegistries []*ServiceRegistry `locationName:"serviceRegistries" type:"list"` + + // The metadata that you apply to the service to help you categorize and organize + // them. Each tag consists of a key and an optional value, both of which you + // define. When a service is deleted, the tags are deleted as well. Tag keys + // can have a maximum character length of 128 characters, and tag values can + // have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` + // The family and revision (family:revision) or full ARN of the task definition // to run in your service. If a revision is not specified, the latest ACTIVE // revision is used. @@ -5135,9 +5907,6 @@ func (s CreateServiceInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *CreateServiceInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreateServiceInput"} - if s.DesiredCount == nil { - invalidParams.Add(request.NewErrParamRequired("DesiredCount")) - } if s.ServiceName == nil { invalidParams.Add(request.NewErrParamRequired("ServiceName")) } @@ -5149,6 +5918,16 @@ func (s *CreateServiceInput) Validate() error { invalidParams.AddNested("NetworkConfiguration", err.(request.ErrInvalidParams)) } } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -5180,6 +5959,12 @@ func (s *CreateServiceInput) SetDesiredCount(v int64) *CreateServiceInput { return s } +// SetEnableECSManagedTags sets the EnableECSManagedTags field's value. +func (s *CreateServiceInput) SetEnableECSManagedTags(v bool) *CreateServiceInput { + s.EnableECSManagedTags = &v + return s +} + // SetHealthCheckGracePeriodSeconds sets the HealthCheckGracePeriodSeconds field's value. 
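With the `schedulingStrategy` parameter documented above, `desiredCount` is no longer required (the `Validate` change above drops that check): a `DAEMON` service places exactly one task on each active container instance. A hedged sketch of creating such a service with tags and ECS managed tags enabled; the cluster, service, and task definition names are illustrative assumptions, and `DAEMON` is written as a plain string taken from the documentation above.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ecs.New(sess)

	input := &ecs.CreateServiceInput{
		Cluster:        aws.String("example"),     // illustrative cluster
		ServiceName:    aws.String("log-agent"),   // illustrative service name
		TaskDefinition: aws.String("log-agent:1"), // illustrative family:revision
		// DAEMON runs one task per active container instance, so no
		// DesiredCount is specified.
		SchedulingStrategy:   aws.String("DAEMON"),
		EnableECSManagedTags: aws.Bool(true),
		Tags: []*ecs.Tag{
			{Key: aws.String("team"), Value: aws.String("platform")}, // illustrative tag
		},
	}

	out, err := svc.CreateService(input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Service)
}
```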
func (s *CreateServiceInput) SetHealthCheckGracePeriodSeconds(v int64) *CreateServiceInput { s.HealthCheckGracePeriodSeconds = &v @@ -5222,18 +6007,42 @@ func (s *CreateServiceInput) SetPlatformVersion(v string) *CreateServiceInput { return s } +// SetPropagateTags sets the PropagateTags field's value. +func (s *CreateServiceInput) SetPropagateTags(v string) *CreateServiceInput { + s.PropagateTags = &v + return s +} + // SetRole sets the Role field's value. func (s *CreateServiceInput) SetRole(v string) *CreateServiceInput { s.Role = &v return s } +// SetSchedulingStrategy sets the SchedulingStrategy field's value. +func (s *CreateServiceInput) SetSchedulingStrategy(v string) *CreateServiceInput { + s.SchedulingStrategy = &v + return s +} + // SetServiceName sets the ServiceName field's value. func (s *CreateServiceInput) SetServiceName(v string) *CreateServiceInput { s.ServiceName = &v return s } +// SetServiceRegistries sets the ServiceRegistries field's value. +func (s *CreateServiceInput) SetServiceRegistries(v []*ServiceRegistry) *CreateServiceInput { + s.ServiceRegistries = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateServiceInput) SetTags(v []*Tag) *CreateServiceInput { + s.Tags = v + return s +} + // SetTaskDefinition sets the TaskDefinition field's value. func (s *CreateServiceInput) SetTaskDefinition(v string) *CreateServiceInput { s.TaskDefinition = &v @@ -5263,6 +6072,84 @@ func (s *CreateServiceOutput) SetService(v *Service) *CreateServiceOutput { return s } +type DeleteAccountSettingInput struct { + _ struct{} `type:"structure"` + + // The resource name for which to disable the new format. If serviceLongArnFormat + // is specified, the ARN for your Amazon ECS services is affected. If taskLongArnFormat + // is specified, the ARN and resource ID for your Amazon ECS tasks is affected. + // If containerInstanceLongArnFormat is specified, the ARN and resource ID for + // your Amazon ECS container instances is affected. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true" enum:"SettingName"` + + // The ARN of the principal, which can be an IAM user, IAM role, or the root + // user. If you specify the root user, it modifies the ARN and resource ID format + // for all IAM users, IAM roles, and the root user of the account unless an + // IAM user or role explicitly overrides these settings for themselves. If this + // field is omitted, the setting are changed only for the authenticated user. + PrincipalArn *string `locationName:"principalArn" type:"string"` +} + +// String returns the string representation +func (s DeleteAccountSettingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccountSettingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteAccountSettingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteAccountSettingInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *DeleteAccountSettingInput) SetName(v string) *DeleteAccountSettingInput { + s.Name = &v + return s +} + +// SetPrincipalArn sets the PrincipalArn field's value. 
+func (s *DeleteAccountSettingInput) SetPrincipalArn(v string) *DeleteAccountSettingInput { + s.PrincipalArn = &v + return s +} + +type DeleteAccountSettingOutput struct { + _ struct{} `type:"structure"` + + // The account setting for the specified principal ARN. + Setting *Setting `locationName:"setting" type:"structure"` +} + +// String returns the string representation +func (s DeleteAccountSettingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccountSettingOutput) GoString() string { + return s.String() +} + +// SetSetting sets the Setting field's value. +func (s *DeleteAccountSettingOutput) SetSetting(v *Setting) *DeleteAccountSettingOutput { + s.Setting = v + return s +} + type DeleteAttributesInput struct { _ struct{} `type:"structure"` @@ -5417,6 +6304,11 @@ type DeleteServiceInput struct { // is assumed. Cluster *string `locationName:"cluster" type:"string"` + // If true, allows you to delete a service even if it has not been scaled down + // to zero tasks. It is only necessary to use this if the service is using the + // REPLICA scheduling strategy. + Force *bool `locationName:"force" type:"boolean"` + // The name of the service to delete. // // Service is a required field @@ -5452,6 +6344,12 @@ func (s *DeleteServiceInput) SetCluster(v string) *DeleteServiceInput { return s } +// SetForce sets the Force field's value. +func (s *DeleteServiceInput) SetForce(v bool) *DeleteServiceInput { + s.Force = &v + return s +} + // SetService sets the Service field's value. func (s *DeleteServiceInput) SetService(v string) *DeleteServiceInput { s.Service = &v @@ -5485,8 +6383,8 @@ func (s *DeleteServiceOutput) SetService(v *Service) *DeleteServiceOutput { type Deployment struct { _ struct{} `type:"structure"` - // The Unix time stamp for when the service was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the service was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` // The most recent desired count of tasks that was specified for the service // to deploy or maintain. @@ -5499,7 +6397,7 @@ type Deployment struct { LaunchType *string `locationName:"launchType" type:"string" enum:"LaunchType"` // The VPC subnet and security group configuration for tasks that receive their - // own Elastic Network Interface by using the awsvpc networking mode. + // own elastic network interface by using the awsvpc networking mode. NetworkConfiguration *NetworkConfiguration `locationName:"networkConfiguration" type:"structure"` // The number of tasks in the deployment that are in the PENDING status. @@ -5511,17 +6409,17 @@ type Deployment struct { // The number of tasks in the deployment that are in the RUNNING status. RunningCount *int64 `locationName:"runningCount" type:"integer"` - // The status of the deployment. Valid values are PRIMARY (for the most recent - // deployment), ACTIVE (for previous deployments that still have tasks running, - // but are being replaced with the PRIMARY deployment), and INACTIVE (for deployments - // that have been completely replaced). + // The status of the deployment. Valid values are PRIMARY for the most recent + // deployment, ACTIVE for previous deployments that still have tasks running, + // but are being replaced with the PRIMARY deployment, and INACTIVE for deployments + // that have been completely replaced. 
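The new `Force` flag on `DeleteServiceInput` above allows a `REPLICA` service to be deleted without first scaling it down to zero tasks. A minimal sketch with illustrative cluster and service names.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ecs.New(sess)

	// Force-delete the service even though it still has running tasks.
	out, err := svc.DeleteService(&ecs.DeleteServiceInput{
		Cluster: aws.String("example"),   // illustrative
		Service: aws.String("log-agent"), // illustrative
		Force:   aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Service)
}
```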
Status *string `locationName:"status" type:"string"` // The most recent task definition that was specified for the service to use. TaskDefinition *string `locationName:"taskDefinition" type:"string"` - // The Unix time stamp for when the service was last updated. - UpdatedAt *time.Time `locationName:"updatedAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the service was last updated. + UpdatedAt *time.Time `locationName:"updatedAt" type:"timestamp"` } // String returns the string representation @@ -5650,7 +6548,7 @@ type DeregisterContainerInstanceInput struct { Cluster *string `locationName:"cluster" type:"string"` // The container instance ID or full ARN of the container instance to deregister. - // The ARN contains the arn:aws:ecs namespace, followed by the region of the + // The ARN contains the arn:aws:ecs namespace, followed by the Region of the // container instance, the AWS account ID of the container instance owner, the // container-instance namespace, and then the container instance ID. For example, // arn:aws:ecs:region:aws_account_id:container-instance/container_instance_ID. @@ -5889,10 +6787,16 @@ type DescribeContainerInstancesInput struct { // default cluster is assumed. Cluster *string `locationName:"cluster" type:"string"` - // A list of container instance IDs or full ARN entries. + // A list of up to 100 container instance IDs or full Amazon Resource Name (ARN) + // entries. // // ContainerInstances is a required field ContainerInstances []*string `locationName:"containerInstances" type:"list" required:"true"` + + // Specifies whether you want to see the resource tags for the container instance. + // If TAGS is specified, the tags are included in the response. If this field + // is omitted, tags are not included in the response. + Include []*string `locationName:"include" type:"list"` } // String returns the string representation @@ -5930,6 +6834,12 @@ func (s *DescribeContainerInstancesInput) SetContainerInstances(v []*string) *De return s } +// SetInclude sets the Include field's value. +func (s *DescribeContainerInstancesInput) SetInclude(v []*string) *DescribeContainerInstancesInput { + s.Include = v + return s +} + type DescribeContainerInstancesOutput struct { _ struct{} `type:"structure"` @@ -5970,6 +6880,11 @@ type DescribeServicesInput struct { // is assumed. Cluster *string `locationName:"cluster" type:"string"` + // Specifies whether you want to see the resource tags for the service. If TAGS + // is specified, the tags are included in the response. If this field is omitted, + // tags are not included in the response. + Include []*string `locationName:"include" type:"list"` + // A list of services to describe. You may specify up to 10 services to describe // in a single operation. // @@ -6006,6 +6921,12 @@ func (s *DescribeServicesInput) SetCluster(v string) *DescribeServicesInput { return s } +// SetInclude sets the Include field's value. +func (s *DescribeServicesInput) SetInclude(v []*string) *DescribeServicesInput { + s.Include = v + return s +} + // SetServices sets the Services field's value. func (s *DescribeServicesInput) SetServices(v []*string) *DescribeServicesInput { s.Services = v @@ -6047,6 +6968,11 @@ func (s *DescribeServicesOutput) SetServices(v []*Service) *DescribeServicesOutp type DescribeTaskDefinitionInput struct { _ struct{} `type:"structure"` + // Specifies whether to see the resource tags for the task definition. If TAGS + // is specified, the tags are included in the response. 
If this field is omitted, + // tags are not included in the response. + Include []*string `locationName:"include" type:"list"` + // The family for the latest ACTIVE revision, family and revision (family:revision) // for a specific revision in the family, or full Amazon Resource Name (ARN) // of the task definition to describe. @@ -6078,6 +7004,12 @@ func (s *DescribeTaskDefinitionInput) Validate() error { return nil } +// SetInclude sets the Include field's value. +func (s *DescribeTaskDefinitionInput) SetInclude(v []*string) *DescribeTaskDefinitionInput { + s.Include = v + return s +} + // SetTaskDefinition sets the TaskDefinition field's value. func (s *DescribeTaskDefinitionInput) SetTaskDefinition(v string) *DescribeTaskDefinitionInput { s.TaskDefinition = &v @@ -6087,6 +7019,12 @@ func (s *DescribeTaskDefinitionInput) SetTaskDefinition(v string) *DescribeTaskD type DescribeTaskDefinitionOutput struct { _ struct{} `type:"structure"` + // The metadata that is applied to the task definition to help you categorize + // and organize them. Each tag consists of a key and an optional value, both + // of which you define. Tag keys can have a maximum character length of 128 + // characters, and tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` + // The full task definition description. TaskDefinition *TaskDefinition `locationName:"taskDefinition" type:"structure"` } @@ -6101,6 +7039,12 @@ func (s DescribeTaskDefinitionOutput) GoString() string { return s.String() } +// SetTags sets the Tags field's value. +func (s *DescribeTaskDefinitionOutput) SetTags(v []*Tag) *DescribeTaskDefinitionOutput { + s.Tags = v + return s +} + // SetTaskDefinition sets the TaskDefinition field's value. func (s *DescribeTaskDefinitionOutput) SetTaskDefinition(v *TaskDefinition) *DescribeTaskDefinitionOutput { s.TaskDefinition = v @@ -6115,6 +7059,11 @@ type DescribeTasksInput struct { // is assumed. Cluster *string `locationName:"cluster" type:"string"` + // Specifies whether you want to see the resource tags for the task. If TAGS + // is specified, the tags are included in the response. If this field is omitted, + // tags are not included in the response. + Include []*string `locationName:"include" type:"list"` + // A list of up to 100 task IDs or full ARN entries. // // Tasks is a required field @@ -6150,6 +7099,12 @@ func (s *DescribeTasksInput) SetCluster(v string) *DescribeTasksInput { return s } +// SetInclude sets the Include field's value. +func (s *DescribeTasksInput) SetInclude(v []*string) *DescribeTasksInput { + s.Include = v + return s +} + // SetTasks sets the Tasks field's value. func (s *DescribeTasksInput) SetTasks(v []*string) *DescribeTasksInput { s.Tasks = v @@ -6249,12 +7204,12 @@ func (s *Device) SetPermissions(v []*string) *Device { type DiscoverPollEndpointInput struct { _ struct{} `type:"structure"` - // The short name or full Amazon Resource Name (ARN) of the cluster that the - // container instance belongs to. + // The short name or full Amazon Resource Name (ARN) of the cluster to which + // the container instance belongs. Cluster *string `locationName:"cluster" type:"string"` // The container instance ID or full ARN of the container instance. 
The ARN - // contains the arn:aws:ecs namespace, followed by the region of the container + // contains the arn:aws:ecs namespace, followed by the Region of the container // instance, the AWS account ID of the container instance owner, the container-instance // namespace, and then the container instance ID. For example, arn:aws:ecs:region:aws_account_id:container-instance/container_instance_ID. ContainerInstance *string `locationName:"containerInstance" type:"string"` @@ -6314,6 +7269,89 @@ func (s *DiscoverPollEndpointOutput) SetTelemetryEndpoint(v string) *DiscoverPol return s } +// This parameter is specified when you are using Docker volumes. Docker volumes +// are only supported when you are using the EC2 launch type. Windows containers +// only support the use of the local driver. To use bind mounts, specify a host +// instead. +type DockerVolumeConfiguration struct { + _ struct{} `type:"structure"` + + // If this value is true, the Docker volume is created if it does not already + // exist. + // + // This field is only used if the scope is shared. + Autoprovision *bool `locationName:"autoprovision" type:"boolean"` + + // The Docker volume driver to use. The driver value must match the driver name + // provided by Docker because it is used for task placement. If the driver was + // installed using the Docker plugin CLI, use docker plugin ls to retrieve the + // driver name from your container instance. If the driver was installed using + // another method, use Docker plugin discovery to retrieve the driver name. + // For more information, see Docker plugin discovery (https://docs.docker.com/engine/extend/plugin_api/#plugin-discovery). + // This parameter maps to Driver in the Create a volume (https://docs.docker.com/engine/api/v1.35/#operation/VolumeCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) + // and the xxdriver option to docker volume create (https://docs.docker.com/engine/reference/commandline/volume_create/). + Driver *string `locationName:"driver" type:"string"` + + // A map of Docker driver-specific options passed through. This parameter maps + // to DriverOpts in the Create a volume (https://docs.docker.com/engine/api/v1.35/#operation/VolumeCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) + // and the xxopt option to docker volume create (https://docs.docker.com/engine/reference/commandline/volume_create/). + DriverOpts map[string]*string `locationName:"driverOpts" type:"map"` + + // Custom metadata to add to your Docker volume. This parameter maps to Labels + // in the Create a volume (https://docs.docker.com/engine/api/v1.35/#operation/VolumeCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) + // and the xxlabel option to docker volume create (https://docs.docker.com/engine/reference/commandline/volume_create/). + Labels map[string]*string `locationName:"labels" type:"map"` + + // The scope for the Docker volume that determines its lifecycle. Docker volumes + // that are scoped to a task are automatically provisioned when the task starts + // and destroyed when the task stops. Docker volumes that are scoped as shared + // persist after the task stops. 
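The new `DockerVolumeConfiguration` type described above represents a named Docker volume, as opposed to a bind-mount `host` volume. A hedged sketch using the generated setters from this diff; attaching the configuration to a task definition volume through a `DockerVolumeConfiguration` field on `ecs.Volume` is an assumption based on the rest of this SDK update, and the driver, options, and labels are illustrative.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	// A shared, auto-provisioned volume using the default "local" driver.
	cfg := &ecs.DockerVolumeConfiguration{}
	cfg.SetScope("shared")     // persists after the task stops
	cfg.SetAutoprovision(true) // create the volume if it does not already exist
	cfg.SetDriver("local")     // illustrative driver name
	cfg.SetDriverOpts(aws.StringMap(map[string]string{
		"type": "tmpfs", // illustrative driver option
	}))
	cfg.SetLabels(aws.StringMap(map[string]string{
		"purpose": "scratch", // illustrative label
	}))

	// Assumed wiring into a task definition volume (field not shown in this hunk).
	vol := &ecs.Volume{
		Name:                      aws.String("scratch"),
		DockerVolumeConfiguration: cfg,
	}
	fmt.Println(vol)
}
```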
+ Scope *string `locationName:"scope" type:"string" enum:"Scope"` +} + +// String returns the string representation +func (s DockerVolumeConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DockerVolumeConfiguration) GoString() string { + return s.String() +} + +// SetAutoprovision sets the Autoprovision field's value. +func (s *DockerVolumeConfiguration) SetAutoprovision(v bool) *DockerVolumeConfiguration { + s.Autoprovision = &v + return s +} + +// SetDriver sets the Driver field's value. +func (s *DockerVolumeConfiguration) SetDriver(v string) *DockerVolumeConfiguration { + s.Driver = &v + return s +} + +// SetDriverOpts sets the DriverOpts field's value. +func (s *DockerVolumeConfiguration) SetDriverOpts(v map[string]*string) *DockerVolumeConfiguration { + s.DriverOpts = v + return s +} + +// SetLabels sets the Labels field's value. +func (s *DockerVolumeConfiguration) SetLabels(v map[string]*string) *DockerVolumeConfiguration { + s.Labels = v + return s +} + +// SetScope sets the Scope field's value. +func (s *DockerVolumeConfiguration) SetScope(v string) *DockerVolumeConfiguration { + s.Scope = &v + return s +} + // A failed resource. type Failure struct { _ struct{} `type:"structure"` @@ -6351,6 +7389,19 @@ func (s *Failure) SetReason(v string) *Failure { // that are specified in a container definition override any Docker health checks // that exist in the container image (such as those specified in a parent image // or from the image's Dockerfile). +// +// The following are notes about container health check support: +// +// * Container health checks require version 1.17.0 or greater of the Amazon +// ECS container agent. For more information, see Updating the Amazon ECS +// Container Agent (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-update.html). +// +// * Container health checks are supported for Fargate tasks if you are using +// platform version 1.1.0 or greater. For more information, see AWS Fargate +// Platform Versions (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html). +// +// * Container health checks are not supported for tasks that are part of +// a service that is configured to use a Classic Load Balancer. type HealthCheck struct { _ struct{} `type:"structure"` @@ -6362,8 +7413,8 @@ type HealthCheck struct { // [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ] // // An exit code of 0 indicates success, and non-zero exit code indicates failure. - // For more information, see HealthCheck in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/). + // For more information, see HealthCheck in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/). // // Command is a required field Command []*string `locationName:"command" type:"list" required:"true"` @@ -6374,7 +7425,7 @@ type HealthCheck struct { // The number of times to retry a failed health check before the container is // considered unhealthy. You may specify between 1 and 10 retries. The default - // value is 3 retries. + // value is 3. 
Retries *int64 `locationName:"retries" type:"integer"` // The optional grace period within which to provide containers time to bootstrap @@ -6389,7 +7440,7 @@ type HealthCheck struct { // The time period in seconds to wait for a health check to succeed before it // is considered a failure. You may specify between 2 and 60 seconds. The default - // value is 5 seconds. + // value is 5. Timeout *int64 `locationName:"timeout" type:"integer"` } @@ -6500,17 +7551,18 @@ func (s *HostEntry) SetIpAddress(v string) *HostEntry { return s } -// Details on a container instance host volume. +// Details on a container instance bind mount host volume. type HostVolumeProperties struct { _ struct{} `type:"structure"` - // The path on the host container instance that is presented to the container. - // If this parameter is empty, then the Docker daemon has assigned a host path - // for you. If the host parameter contains a sourcePath file location, then - // the data volume persists at the specified location on the host container - // instance until you delete it manually. If the sourcePath value does not exist - // on the host container instance, the Docker daemon creates it. If the location - // does exist, the contents of the source path folder are exported. + // When the host parameter is used, specify a sourcePath to declare the path + // on the host container instance that is presented to the container. If this + // parameter is empty, then the Docker daemon has assigned a host path for you. + // If the host parameter contains a sourcePath file location, then the data + // volume persists at the specified location on the host container instance + // until you delete it manually. If the sourcePath value does not exist on the + // host container instance, the Docker daemon creates it. If the location does + // exist, the contents of the source path folder are exported. // // If you are using the Fargate launch type, the sourcePath parameter is not // supported. @@ -6545,8 +7597,8 @@ type KernelCapabilities struct { // The Linux capabilities for the container that have been added to the default // configuration provided by Docker. This parameter maps to CapAdd in the Create - // a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --cap-add option to docker run (https://docs.docker.com/engine/reference/run/). // // If you are using tasks that use the Fargate launch type, the add parameter @@ -6564,8 +7616,8 @@ type KernelCapabilities struct { // The Linux capabilities for the container that have been removed from the // default configuration provided by Docker. This parameter maps to CapDrop - // in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --cap-drop option to docker run (https://docs.docker.com/engine/reference/run/). 
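The `HealthCheck` parameters documented a little above mirror Docker's `HEALTHCHECK` settings; the `CMD-SHELL` command form and the default retry and timeout values translate directly into the SDK type. A hedged sketch: the `Interval` and `StartPeriod` fields, and the `HealthCheck` field on `ContainerDefinition`, are assumed from the surrounding descriptions rather than shown as declarations in this hunk.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	// Container-level health check using the documented CMD-SHELL form.
	hc := &ecs.HealthCheck{
		Command: aws.StringSlice([]string{
			"CMD-SHELL", "curl -f http://localhost/ || exit 1",
		}),
		Retries: aws.Int64(3), // documented default
		Timeout: aws.Int64(5), // documented default, in seconds
		// Assumed fields, not shown as declarations in this hunk:
		Interval:    aws.Int64(30), // seconds between checks
		StartPeriod: aws.Int64(10), // bootstrap grace period in seconds
	}

	def := &ecs.ContainerDefinition{
		Name:        aws.String("web"),          // illustrative
		Image:       aws.String("nginx:latest"), // illustrative
		HealthCheck: hc,                         // assumed field on ContainerDefinition
	}
	fmt.Println(def)
}
```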
// // Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | @@ -6601,15 +7653,15 @@ func (s *KernelCapabilities) SetDrop(v []*string) *KernelCapabilities { return s } -// A key and value pair object. +// A key-value pair object. type KeyValuePair struct { _ struct{} `type:"structure"` - // The name of the key value pair. For environment variables, this is the name + // The name of the key-value pair. For environment variables, this is the name // of the environment variable. Name *string `locationName:"name" type:"string"` - // The value of the key value pair. For environment variables, this is the value + // The value of the key-value pair. For environment variables, this is the value // of the environment variable. Value *string `locationName:"value" type:"string"` } @@ -6648,8 +7700,8 @@ type LinuxParameters struct { Capabilities *KernelCapabilities `locationName:"capabilities" type:"structure"` // Any host devices to expose to the container. This parameter maps to Devices - // in the Create a container (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/#create-a-container) - // section of the Docker Remote API (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.27/) + // in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) + // section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) // and the --device option to docker run (https://docs.docker.com/engine/reference/run/). // // If you are using tasks that use the Fargate launch type, the devices parameter @@ -6661,8 +7713,22 @@ type LinuxParameters struct { // This parameter requires version 1.25 of the Docker Remote API or greater // on your container instance. To check the Docker Remote API version on your // container instance, log in to your container instance and run the following - // command: sudo docker version | grep "Server API version" + // command: sudo docker version --format '{{.Server.APIVersion}}' InitProcessEnabled *bool `locationName:"initProcessEnabled" type:"boolean"` + + // The value for the size (in MiB) of the /dev/shm volume. This parameter maps + // to the --shm-size option to docker run (https://docs.docker.com/engine/reference/run/). + // + // If you are using tasks that use the Fargate launch type, the sharedMemorySize + // parameter is not supported. + SharedMemorySize *int64 `locationName:"sharedMemorySize" type:"integer"` + + // The container path, mount options, and size (in MiB) of the tmpfs mount. + // This parameter maps to the --tmpfs option to docker run (https://docs.docker.com/engine/reference/run/). + // + // If you are using tasks that use the Fargate launch type, the tmpfs parameter + // is not supported. + Tmpfs []*Tmpfs `locationName:"tmpfs" type:"list"` } // String returns the string representation @@ -6688,6 +7754,16 @@ func (s *LinuxParameters) Validate() error { } } } + if s.Tmpfs != nil { + for i, v := range s.Tmpfs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tmpfs", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -6713,41 +7789,174 @@ func (s *LinuxParameters) SetInitProcessEnabled(v bool) *LinuxParameters { return s } -type ListAttributesInput struct { - _ struct{} `type:"structure"` +// SetSharedMemorySize sets the SharedMemorySize field's value. 
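The new `sharedMemorySize` and `tmpfs` parameters on `LinuxParameters` above correspond to `docker run --shm-size` and `--tmpfs`, and neither is supported on Fargate. A hedged sketch; the `Tmpfs` field names (`ContainerPath`, `Size`, `MountOptions`) come from the generated type rather than declarations shown in this hunk, and the paths and sizes are illustrative.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

func main() {
	lp := &ecs.LinuxParameters{}
	lp.SetSharedMemorySize(256) // /dev/shm size in MiB (--shm-size)

	// A tmpfs mount, the --tmpfs equivalent (field names assumed).
	lp.SetTmpfs([]*ecs.Tmpfs{
		{
			ContainerPath: aws.String("/run/cache"), // illustrative path
			Size:          aws.Int64(64),            // MiB
			MountOptions:  aws.StringSlice([]string{"rw", "noexec"}),
		},
	})

	def := &ecs.ContainerDefinition{
		Name:            aws.String("worker"),  // illustrative
		Image:           aws.String("busybox"), // illustrative
		LinuxParameters: lp,
	}
	fmt.Println(def)
}
```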
+func (s *LinuxParameters) SetSharedMemorySize(v int64) *LinuxParameters { + s.SharedMemorySize = &v + return s +} - // The name of the attribute with which to filter the results. - AttributeName *string `locationName:"attributeName" type:"string"` +// SetTmpfs sets the Tmpfs field's value. +func (s *LinuxParameters) SetTmpfs(v []*Tmpfs) *LinuxParameters { + s.Tmpfs = v + return s +} - // The value of the attribute with which to filter results. You must also specify - // an attribute name to use this parameter. - AttributeValue *string `locationName:"attributeValue" type:"string"` +type ListAccountSettingsInput struct { + _ struct{} `type:"structure"` - // The short name or full Amazon Resource Name (ARN) of the cluster to list - // attributes. If you do not specify a cluster, the default cluster is assumed. - Cluster *string `locationName:"cluster" type:"string"` + // Specifies whether to return the effective settings. If true, the account + // settings for the root user or the default setting for the principalArn. If + // false, the account settings for the principalArn are returned if they are + // set. Otherwise, no account settings are returned. + EffectiveSettings *bool `locationName:"effectiveSettings" type:"boolean"` - // The maximum number of cluster results returned by ListAttributes in paginated - // output. When this parameter is used, ListAttributes only returns maxResults - // results in a single page along with a nextToken response element. The remaining - // results of the initial request can be seen by sending another ListAttributes - // request with the returned nextToken value. This value can be between 1 and - // 100. If this parameter is not used, then ListAttributes returns up to 100 - // results and a nextToken value if applicable. + // The maximum number of account setting results returned by ListAccountSettings + // in paginated output. When this parameter is used, ListAccountSettings only + // returns maxResults results in a single page along with a nextToken response + // element. The remaining results of the initial request can be seen by sending + // another ListAccountSettings request with the returned nextToken value. This + // value can be between 1 and 10. If this parameter is not used, then ListAccountSettings + // returns up to 10 results and a nextToken value if applicable. MaxResults *int64 `locationName:"maxResults" type:"integer"` - // The nextToken value returned from a previous paginated ListAttributes request - // where maxResults was used and the results exceeded the value of that parameter. - // Pagination continues from the end of the previous results that returned the - // nextToken value. + // The resource name you want to list the account settings for. + Name *string `locationName:"name" type:"string" enum:"SettingName"` + + // The nextToken value returned from a previous paginated ListAccountSettings + // request where maxResults was used and the results exceeded the value of that + // parameter. Pagination continues from the end of the previous results that + // returned the nextToken value. // // This token should be treated as an opaque identifier that is only used to // retrieve the next items in a list and not for other programmatic purposes. NextToken *string `locationName:"nextToken" type:"string"` - // The type of the target with which to list attributes. - // - // TargetType is a required field + // The ARN of the principal, which can be an IAM user, IAM role, or the root + // user. 
If this field is omitted, the account settings are listed only for + // the authenticated user. + PrincipalArn *string `locationName:"principalArn" type:"string"` + + // The value of the account settings with which to filter results. You must + // also specify an account setting name to use this parameter. + Value *string `locationName:"value" type:"string"` +} + +// String returns the string representation +func (s ListAccountSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAccountSettingsInput) GoString() string { + return s.String() +} + +// SetEffectiveSettings sets the EffectiveSettings field's value. +func (s *ListAccountSettingsInput) SetEffectiveSettings(v bool) *ListAccountSettingsInput { + s.EffectiveSettings = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListAccountSettingsInput) SetMaxResults(v int64) *ListAccountSettingsInput { + s.MaxResults = &v + return s +} + +// SetName sets the Name field's value. +func (s *ListAccountSettingsInput) SetName(v string) *ListAccountSettingsInput { + s.Name = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListAccountSettingsInput) SetNextToken(v string) *ListAccountSettingsInput { + s.NextToken = &v + return s +} + +// SetPrincipalArn sets the PrincipalArn field's value. +func (s *ListAccountSettingsInput) SetPrincipalArn(v string) *ListAccountSettingsInput { + s.PrincipalArn = &v + return s +} + +// SetValue sets the Value field's value. +func (s *ListAccountSettingsInput) SetValue(v string) *ListAccountSettingsInput { + s.Value = &v + return s +} + +type ListAccountSettingsOutput struct { + _ struct{} `type:"structure"` + + // The nextToken value to include in a future ListAccountSettings request. When + // the results of a ListAccountSettings request exceed maxResults, this value + // can be used to retrieve the next page of results. This value is null when + // there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` + + // The account settings for the resource. + Settings []*Setting `locationName:"settings" type:"list"` +} + +// String returns the string representation +func (s ListAccountSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListAccountSettingsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListAccountSettingsOutput) SetNextToken(v string) *ListAccountSettingsOutput { + s.NextToken = &v + return s +} + +// SetSettings sets the Settings field's value. +func (s *ListAccountSettingsOutput) SetSettings(v []*Setting) *ListAccountSettingsOutput { + s.Settings = v + return s +} + +type ListAttributesInput struct { + _ struct{} `type:"structure"` + + // The name of the attribute with which to filter the results. + AttributeName *string `locationName:"attributeName" type:"string"` + + // The value of the attribute with which to filter results. You must also specify + // an attribute name to use this parameter. + AttributeValue *string `locationName:"attributeValue" type:"string"` + + // The short name or full Amazon Resource Name (ARN) of the cluster to list + // attributes. If you do not specify a cluster, the default cluster is assumed. + Cluster *string `locationName:"cluster" type:"string"` + + // The maximum number of cluster results returned by ListAttributes in paginated + // output. 
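// Editor's illustrative sketch (not part of the vendored SDK diff): how a caller
// might use the ListAccountSettingsInput/Output types defined above. It assumes
// a caller package importing "fmt", "github.com/aws/aws-sdk-go/aws", and
// "github.com/aws/aws-sdk-go/service/ecs", plus an already-constructed *ecs.ECS
// client whose generated ListAccountSettings method matches these types; both
// are assumptions, not shown in this excerpt.
func exampleListAccountSettings(svc *ecs.ECS) error {
	// "serviceLongArnFormat" is one of the SettingName values described later in
	// this file; EffectiveSettings(true) also returns the account-level default.
	input := &ecs.ListAccountSettingsInput{}
	input.SetName("serviceLongArnFormat")
	input.SetEffectiveSettings(true)

	out, err := svc.ListAccountSettings(input)
	if err != nil {
		return err
	}
	for _, setting := range out.Settings {
		fmt.Printf("%s=%s (principal %s)\n",
			aws.StringValue(setting.Name),
			aws.StringValue(setting.Value),
			aws.StringValue(setting.PrincipalArn))
	}
	return nil
}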
When this parameter is used, ListAttributes only returns maxResults + // results in a single page along with a nextToken response element. The remaining + // results of the initial request can be seen by sending another ListAttributes + // request with the returned nextToken value. This value can be between 1 and + // 100. If this parameter is not used, then ListAttributes returns up to 100 + // results and a nextToken value if applicable. + MaxResults *int64 `locationName:"maxResults" type:"integer"` + + // The nextToken value returned from a previous paginated ListAttributes request + // where maxResults was used and the results exceeded the value of that parameter. + // Pagination continues from the end of the previous results that returned the + // nextToken value. + // + // This token should be treated as an opaque identifier that is only used to + // retrieve the next items in a list and not for other programmatic purposes. + NextToken *string `locationName:"nextToken" type:"string"` + + // The type of the target with which to list attributes. + // + // TargetType is a required field TargetType *string `locationName:"targetType" type:"string" required:"true" enum:"TargetType"` } @@ -7050,7 +8259,7 @@ type ListServicesInput struct { // is assumed. Cluster *string `locationName:"cluster" type:"string"` - // The launch type for services you want to list. + // The launch type for the services to list. LaunchType *string `locationName:"launchType" type:"string" enum:"LaunchType"` // The maximum number of service results returned by ListServices in paginated @@ -7058,7 +8267,7 @@ type ListServicesInput struct { // results in a single page along with a nextToken response element. The remaining // results of the initial request can be seen by sending another ListServices // request with the returned nextToken value. This value can be between 1 and - // 10. If this parameter is not used, then ListServices returns up to 10 results + // 100. If this parameter is not used, then ListServices returns up to 10 results // and a nextToken value if applicable. MaxResults *int64 `locationName:"maxResults" type:"integer"` @@ -7070,6 +8279,9 @@ type ListServicesInput struct { // This token should be treated as an opaque identifier that is only used to // retrieve the next items in a list and not for other programmatic purposes. NextToken *string `locationName:"nextToken" type:"string"` + + // The scheduling strategy for services to list. + SchedulingStrategy *string `locationName:"schedulingStrategy" type:"string" enum:"SchedulingStrategy"` } // String returns the string representation @@ -7106,6 +8318,12 @@ func (s *ListServicesInput) SetNextToken(v string) *ListServicesInput { return s } +// SetSchedulingStrategy sets the SchedulingStrategy field's value. +func (s *ListServicesInput) SetSchedulingStrategy(v string) *ListServicesInput { + s.SchedulingStrategy = &v + return s +} + type ListServicesOutput struct { _ struct{} `type:"structure"` @@ -7142,6 +8360,69 @@ func (s *ListServicesOutput) SetServiceArns(v []*string) *ListServicesOutput { return s } +type ListTagsForResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) that identifies the resource for which to + // list the tags. Currently, the supported resources are Amazon ECS tasks, services, + // task definitions, clusters, and container instances. 
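// Editor's illustrative sketch (not part of the vendored SDK diff): filtering
// ListServices by the new SchedulingStrategy field shown above. Same assumed
// imports and *ecs.ECS client as the previous sketch; ListServicesPages is the
// pagination helper aws-sdk-go generates for paginated operations and is an
// assumption here, not shown in this excerpt.
func exampleListDaemonServices(svc *ecs.ECS, cluster string) ([]*string, error) {
	input := &ecs.ListServicesInput{}
	input.SetCluster(cluster)
	input.SetSchedulingStrategy("DAEMON") // REPLICA is the other valid value

	var serviceArns []*string
	err := svc.ListServicesPages(input, func(page *ecs.ListServicesOutput, lastPage bool) bool {
		serviceArns = append(serviceArns, page.ServiceArns...)
		return !lastPage // keep paging while a nextToken is returned
	})
	return serviceArns, err
}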
+ // + // ResourceArn is a required field + ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListTagsForResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsForResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *ListTagsForResourceInput) SetResourceArn(v string) *ListTagsForResourceInput { + s.ResourceArn = &v + return s +} + +type ListTagsForResourceOutput struct { + _ struct{} `type:"structure"` + + // The tags for the resource. + Tags []*Tag `locationName:"tags" type:"list"` +} + +// String returns the string representation +func (s ListTagsForResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceOutput) GoString() string { + return s.String() +} + +// SetTags sets the Tags field's value. +func (s *ListTagsForResourceOutput) SetTags(v []*Tag) *ListTagsForResourceOutput { + s.Tags = v + return s +} + type ListTaskDefinitionFamiliesInput struct { _ struct{} `type:"structure"` @@ -7382,21 +8663,21 @@ type ListTasksInput struct { // The task desired status with which to filter the ListTasks results. Specifying // a desiredStatus of STOPPED limits the results to tasks that Amazon ECS has - // set the desired status to STOPPED, which can be useful for debugging tasks + // set the desired status to STOPPED. This can be useful for debugging tasks // that are not starting properly or have died or finished. The default status // filter is RUNNING, which shows tasks that Amazon ECS has set the desired // status to RUNNING. // // Although you can filter results based on a desired status of PENDING, this - // does not return any results because Amazon ECS never sets the desired status - // of a task to that value (only a task's lastStatus may have a value of PENDING). + // does not return any results. Amazon ECS never sets the desired status of + // a task to that value (only a task's lastStatus may have a value of PENDING). DesiredStatus *string `locationName:"desiredStatus" type:"string" enum:"DesiredStatus"` // The name of the family with which to filter the ListTasks results. Specifying // a family limits the results to tasks that belong to that family. Family *string `locationName:"family" type:"string"` - // The launch type for services you want to list. + // The launch type for services to list. LaunchType *string `locationName:"launchType" type:"string" enum:"LaunchType"` // The maximum number of task results returned by ListTasks in paginated output. @@ -7526,6 +8807,13 @@ func (s *ListTasksOutput) SetTaskArns(v []*string) *ListTasksOutput { } // Details on a load balancer that is used with a service. +// +// Services with tasks that use the awsvpc network mode (for example, those +// with the Fargate launch type) only support Application Load Balancers and +// Network Load Balancers. Classic Load Balancers are not supported. 
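// Editor's illustrative sketch (not part of the vendored SDK diff): reading the
// tags on an ECS resource with the ListTagsForResource types defined above.
// Same assumed imports and client as the earlier sketches; it also assumes the
// Tag type (defined elsewhere in this file) exposes Key and Value string pointers.
func exampleListTags(svc *ecs.ECS, resourceArn string) error {
	input := &ecs.ListTagsForResourceInput{}
	input.SetResourceArn(resourceArn) // e.g. a cluster, service, or task ARN

	out, err := svc.ListTagsForResource(input)
	if err != nil {
		return err
	}
	for _, tag := range out.Tags {
		fmt.Printf("%s=%s\n", aws.StringValue(tag.Key), aws.StringValue(tag.Value))
	}
	return nil
}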
Also, when +// you create any target groups for these services, you must choose ip as the +// target type, not instance. Tasks that use the awsvpc network mode are associated +// with an elastic network interface, not an Amazon EC2 instance. type LoadBalancer struct { _ struct{} `type:"structure"` @@ -7544,6 +8832,11 @@ type LoadBalancer struct { // The full Amazon Resource Name (ARN) of the Elastic Load Balancing target // group associated with a service. + // + // If your service's task definition uses the awsvpc network mode (which is + // required for the Fargate launch type), you must choose ip as the target type, + // not instance, because tasks that use the awsvpc network mode are associated + // with an elastic network interface, not an Amazon EC2 instance. TargetGroupArn *string `locationName:"targetGroupArn" type:"string"` } @@ -7587,9 +8880,9 @@ type LogConfiguration struct { // The log driver to use for the container. The valid values listed for this // parameter are log drivers that the Amazon ECS container agent can communicate - // with by default. If using the Fargate launch type, the only supported value - // is awslogs. For more information about using the awslogs driver, see Using - // the awslogs Log Driver (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) + // with by default. If you are using the Fargate launch type, the only supported + // value is awslogs. For more information about using the awslogs driver, see + // Using the awslogs Log Driver (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) // in the Amazon Elastic Container Service Developer Guide. // // If you have a custom driver that is not listed above that you would like @@ -7602,7 +8895,7 @@ type LogConfiguration struct { // This parameter requires version 1.18 of the Docker Remote API or greater // on your container instance. To check the Docker Remote API version on your // container instance, log in to your container instance and run the following - // command: sudo docker version | grep "Server API version" + // command: sudo docker version --format '{{.Server.APIVersion}}' // // LogDriver is a required field LogDriver *string `locationName:"logDriver" type:"string" required:"true" enum:"LogDriver"` @@ -7611,7 +8904,7 @@ type LogConfiguration struct { // version 1.19 of the Docker Remote API or greater on your container instance. // To check the Docker Remote API version on your container instance, log in // to your container instance and run the following command: sudo docker version - // | grep "Server API version" + // --format '{{.Server.APIVersion}}' Options map[string]*string `locationName:"options" type:"map"` } @@ -7662,7 +8955,8 @@ type MountPoint struct { // value is false. ReadOnly *bool `locationName:"readOnly" type:"boolean"` - // The name of the volume to mount. + // The name of the volume to mount. Must be a volume name referenced in the + // name parameter of task definition volume. SourceVolume *string `locationName:"sourceVolume" type:"string"` } @@ -7753,6 +9047,8 @@ type NetworkConfiguration struct { _ struct{} `type:"structure"` // The VPC subnets and security groups associated with a task. + // + // All specified subnets and security groups must be from the same VPC. 
AwsvpcConfiguration *AwsVpcConfiguration `locationName:"awsvpcConfiguration" type:"structure"` } @@ -7787,7 +9083,7 @@ func (s *NetworkConfiguration) SetAwsvpcConfiguration(v *AwsVpcConfiguration) *N return s } -// An object representing the Elastic Network Interface for tasks that use the +// An object representing the elastic network interface for tasks that use the // awsvpc network mode. type NetworkInterface struct { _ struct{} `type:"structure"` @@ -7836,9 +9132,9 @@ func (s *NetworkInterface) SetPrivateIpv4Address(v string) *NetworkInterface { type PlacementConstraint struct { _ struct{} `type:"structure"` - // A cluster query language expression to apply to the constraint. Note you - // cannot specify an expression if the constraint type is distinctInstance. - // For more information, see Cluster Query Language (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query-language.html) + // A cluster query language expression to apply to the constraint. You cannot + // specify an expression if the constraint type is distinctInstance. For more + // information, see Cluster Query Language (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query-language.html) // in the Amazon Elastic Container Service Developer Guide. Expression *string `locationName:"expression" type:"string"` @@ -7921,9 +9217,9 @@ func (s *PlacementStrategy) SetType(v string) *PlacementStrategy { // to send or receive traffic. Port mappings are specified as part of the container // definition. // -// If using containers in a task with the awsvpc or host network mode, exposed -// ports should be specified using containerPort. The hostPort can be left blank -// or it must be the same value as the containerPort. +// If you are using containers in a task with the awsvpc or host network mode, +// exposed ports should be specified using containerPort. The hostPort can be +// left blank or it must be the same value as the containerPort. // // After a task reaches the RUNNING status, manual and automatic host and container // port assignments are visible in the networkBindings section of DescribeTasks @@ -7934,33 +9230,33 @@ type PortMapping struct { // The port number on the container that is bound to the user-specified or automatically // assigned host port. // - // If using containers in a task with the awsvpc or host network mode, exposed - // ports should be specified using containerPort. + // If you are using containers in a task with the awsvpc or host network mode, + // exposed ports should be specified using containerPort. // - // If using containers in a task with the bridge network mode and you specify - // a container port and not a host port, your container automatically receives - // a host port in the ephemeral port range (for more information, see hostPort). - // Port mappings that are automatically assigned in this way do not count toward - // the 100 reserved ports limit of a container instance. + // If you are using containers in a task with the bridge network mode and you + // specify a container port and not a host port, your container automatically + // receives a host port in the ephemeral port range. For more information, see + // hostPort. Port mappings that are automatically assigned in this way do not + // count toward the 100 reserved ports limit of a container instance. ContainerPort *int64 `locationName:"containerPort" type:"integer"` // The port number on the container instance to reserve for your container. 
// - // If using containers in a task with the awsvpc or host network mode, the hostPort - // can either be left blank or set to the same value as the containerPort. + // If you are using containers in a task with the awsvpc or host network mode, + // the hostPort can either be left blank or set to the same value as the containerPort. // - // If using containers in a task with the bridge network mode, you can specify - // a non-reserved host port for your container port mapping, or you can omit - // the hostPort (or set it to 0) while specifying a containerPort and your container - // automatically receives a port in the ephemeral port range for your container - // instance operating system and Docker version. + // If you are using containers in a task with the bridge network mode, you can + // specify a non-reserved host port for your container port mapping, or you + // can omit the hostPort (or set it to 0) while specifying a containerPort and + // your container automatically receives a port in the ephemeral port range + // for your container instance operating system and Docker version. // // The default ephemeral port range for Docker version 1.6.0 and later is listed - // on the instance under /proc/sys/net/ipv4/ip_local_port_range; if this kernel + // on the instance under /proc/sys/net/ipv4/ip_local_port_range. If this kernel // parameter is unavailable, the default ephemeral port range from 49153 through - // 65535 is used. You should not attempt to specify a host port in the ephemeral - // port range as these are reserved for automatic assignment. In general, ports - // below 32768 are outside of the ephemeral port range. + // 65535 is used. Do not attempt to specify a host port in the ephemeral port + // range as these are reserved for automatic assignment. In general, ports below + // 32768 are outside of the ephemeral port range. // // The default ephemeral port range from 49153 through 65535 is always used // for Docker versions before 1.6.0. @@ -8008,6 +9304,99 @@ func (s *PortMapping) SetProtocol(v string) *PortMapping { return s } +type PutAccountSettingInput struct { + _ struct{} `type:"structure"` + + // The resource name for which to enable the new format. If serviceLongArnFormat + // is specified, the ARN for your Amazon ECS services is affected. If taskLongArnFormat + // is specified, the ARN and resource ID for your Amazon ECS tasks is affected. + // If containerInstanceLongArnFormat is specified, the ARN and resource ID for + // your Amazon ECS container instances is affected. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true" enum:"SettingName"` + + // The ARN of the principal, which can be an IAM user, IAM role, or the root + // user. If you specify the root user, it modifies the ARN and resource ID format + // for all IAM users, IAM roles, and the root user of the account unless an + // IAM user or role explicitly overrides these settings for themselves. If this + // field is omitted, the setting are changed only for the authenticated user. + PrincipalArn *string `locationName:"principalArn" type:"string"` + + // The account setting value for the specified principal ARN. Accepted values + // are ENABLED and DISABLED. 
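// Editor's illustrative sketch (not part of the vendored SDK diff): PortMapping
// values that follow the containerPort/hostPort rules described above, built
// with the PortMapping setters defined in this file and referenced with the
// ecs package qualifier as a caller outside this package would.
func examplePortMappings() []*ecs.PortMapping {
	// bridge network mode: leave the host port unset to receive an ephemeral
	// host port from the container instance.
	dynamic := &ecs.PortMapping{}
	dynamic.SetContainerPort(8080)
	dynamic.SetProtocol("tcp")

	// awsvpc or host network mode: hostPort must be blank or equal to containerPort.
	static := &ecs.PortMapping{}
	static.SetContainerPort(443)
	static.SetHostPort(443)
	static.SetProtocol("tcp")

	return []*ecs.PortMapping{dynamic, static}
}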
+ // + // Value is a required field + Value *string `locationName:"value" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutAccountSettingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutAccountSettingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutAccountSettingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutAccountSettingInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *PutAccountSettingInput) SetName(v string) *PutAccountSettingInput { + s.Name = &v + return s +} + +// SetPrincipalArn sets the PrincipalArn field's value. +func (s *PutAccountSettingInput) SetPrincipalArn(v string) *PutAccountSettingInput { + s.PrincipalArn = &v + return s +} + +// SetValue sets the Value field's value. +func (s *PutAccountSettingInput) SetValue(v string) *PutAccountSettingInput { + s.Value = &v + return s +} + +type PutAccountSettingOutput struct { + _ struct{} `type:"structure"` + + // The current account setting for a resource. + Setting *Setting `locationName:"setting" type:"structure"` +} + +// String returns the string representation +func (s PutAccountSettingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutAccountSettingOutput) GoString() string { + return s.String() +} + +// SetSetting sets the Setting field's value. +func (s *PutAccountSettingOutput) SetSetting(v *Setting) *PutAccountSettingOutput { + s.Setting = v + return s +} + type PutAttributesInput struct { _ struct{} `type:"structure"` @@ -8115,6 +9504,12 @@ type RegisterContainerInstanceInput struct { // curl http://169.254.169.254/latest/dynamic/instance-identity/signature/ InstanceIdentityDocumentSignature *string `locationName:"instanceIdentityDocumentSignature" type:"string"` + // The metadata that you apply to the container instance to help you categorize + // and organize them. Each tag consists of a key and an optional value, both + // of which you define. Tag keys can have a maximum character length of 128 + // characters, and tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` + // The resources available on the instance. TotalResources []*Resource `locationName:"totalResources" type:"list"` @@ -8146,6 +9541,16 @@ func (s *RegisterContainerInstanceInput) Validate() error { } } } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -8183,6 +9588,12 @@ func (s *RegisterContainerInstanceInput) SetInstanceIdentityDocumentSignature(v return s } +// SetTags sets the Tags field's value. +func (s *RegisterContainerInstanceInput) SetTags(v []*Tag) *RegisterContainerInstanceInput { + s.Tags = v + return s +} + // SetTotalResources sets the TotalResources field's value. 
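// Editor's illustrative sketch (not part of the vendored SDK diff): opting a
// single principal into the new long ARN format with the PutAccountSettingInput
// type defined above. Same assumed imports and client as the earlier sketches;
// the principal ARN is hypothetical, and the value string follows the
// ENABLED/DISABLED wording in the docs above (verify the exact casing your
// service version expects).
func exampleOptInToLongArns(svc *ecs.ECS) error {
	input := &ecs.PutAccountSettingInput{}
	input.SetName("serviceLongArnFormat")
	input.SetValue("enabled")
	input.SetPrincipalArn("arn:aws:iam::123456789012:user/example") // hypothetical

	out, err := svc.PutAccountSetting(input)
	if err != nil {
		return err
	}
	fmt.Printf("%s is now %s\n",
		aws.StringValue(out.Setting.Name), aws.StringValue(out.Setting.Value))
	return nil
}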
func (s *RegisterContainerInstanceInput) SetTotalResources(v []*Resource) *RegisterContainerInstanceInput { s.TotalResources = v @@ -8229,18 +9640,18 @@ type RegisterTaskDefinitionInput struct { // The number of CPU units used by the task. It can be expressed as an integer // using CPU units, for example 1024, or as a string using vCPUs, for example - // 1 vCPU or 1 vcpu, in a task definition but will be converted to an integer - // indicating the CPU units when the task definition is registered. + // 1 vCPU or 1 vcpu, in a task definition. String values are converted to an + // integer indicating the CPU units when the task definition is registered. // // Task-level CPU and memory parameters are ignored for Windows containers. // We recommend specifying container-level resources for Windows containers. // - // If using the EC2 launch type, this field is optional. Supported values are - // between 128 CPU units (0.125 vCPUs) and 10240 CPU units (10 vCPUs). + // If you are using the EC2 launch type, this field is optional. Supported values + // are between 128 CPU units (0.125 vCPUs) and 10240 CPU units (10 vCPUs). // - // If using the Fargate launch type, this field is required and you must use - // one of the following values, which determines your range of supported values - // for the memory parameter: + // If you are using the Fargate launch type, this field is required and you + // must use one of the following values, which determines your range of supported + // values for the memory parameter: // // * 256 (.25 vCPU) - Available memory values: 512 (0.5 GB), 1024 (1 GB), // 2048 (2 GB) @@ -8270,10 +9681,41 @@ type RegisterTaskDefinitionInput struct { // Family is a required field Family *string `locationName:"family" type:"string" required:"true"` + // The IPC resource namespace to use for the containers in the task. The valid + // values are host, task, or none. If host is specified, then all containers + // within the tasks that specified the host IPC mode on the same container instance + // share the same IPC resources with the host Amazon EC2 instance. If task is + // specified, all containers within the specified task share the same IPC resources. + // If none is specified, then IPC resources within the containers of a task + // are private and not shared with other containers in a task or on the container + // instance. If no value is specified, then the IPC resource namespace sharing + // depends on the Docker daemon setting on the container instance. For more + // information, see IPC settings (https://docs.docker.com/engine/reference/run/#ipc-settings---ipc) + // in the Docker run reference. + // + // If the host IPC mode is used, be aware that there is a heightened risk of + // undesired IPC namespace expose. For more information, see Docker security + // (https://docs.docker.com/engine/security/security/). + // + // If you are setting namespaced kernel parameters using systemControls for + // the containers in the task, the following will apply to your IPC resource + // namespace. For more information, see System Controls (http://docs.aws.amazon.com/AmazonECS/latest/developerguidetask_definition_parameters.html) + // in the Amazon Elastic Container Service Developer Guide. + // + // * For tasks that use the host IPC mode, IPC namespace related systemControls + // are not supported. + // + // * For tasks that use the task IPC mode, IPC namespace related systemControls + // will apply to all containers within a task. 
+ // + // This parameter is not supported for Windows containers or tasks using the + // Fargate launch type. + IpcMode *string `locationName:"ipcMode" type:"string" enum:"IpcMode"` + // The amount of memory (in MiB) used by the task. It can be expressed as an // integer using MiB, for example 1024, or as a string using GB, for example - // 1GB or 1 GB, in a task definition but will be converted to an integer indicating - // the MiB when the task definition is registered. + // 1GB or 1 GB, in a task definition. String values are converted to an integer + // indicating the MiB when the task definition is registered. // // Task-level CPU and memory parameters are ignored for Windows containers. // We recommend specifying container-level resources for Windows containers. @@ -8302,45 +9744,73 @@ type RegisterTaskDefinitionInput struct { // The Docker networking mode to use for the containers in the task. The valid // values are none, bridge, awsvpc, and host. The default Docker network mode - // is bridge. If using the Fargate launch type, the awsvpc network mode is required. - // If using the EC2 launch type, any network mode can be used. If the network - // mode is set to none, you can't specify port mappings in your container definitions, - // and the task's containers do not have external connectivity. The host and - // awsvpc network modes offer the highest networking performance for containers - // because they use the EC2 network stack instead of the virtualized network - // stack provided by the bridge mode. + // is bridge. If you are using the Fargate launch type, the awsvpc network mode + // is required. If you are using the EC2 launch type, any network mode can be + // used. If the network mode is set to none, you cannot specify port mappings + // in your container definitions, and the tasks containers do not have external + // connectivity. The host and awsvpc network modes offer the highest networking + // performance for containers because they use the EC2 network stack instead + // of the virtualized network stack provided by the bridge mode. // // With the host and awsvpc network modes, exposed container ports are mapped // directly to the corresponding host port (for the host network mode) or the // attached elastic network interface port (for the awsvpc network mode), so // you cannot take advantage of dynamic host port mappings. // - // If the network mode is awsvpc, the task is allocated an Elastic Network Interface, - // and you must specify a NetworkConfiguration when you create a service or - // run a task with the task definition. For more information, see Task Networking + // If the network mode is awsvpc, the task is allocated an elastic network interface, + // and you must specify a NetworkConfiguration value when you create a service + // or run a task with the task definition. For more information, see Task Networking // (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) // in the Amazon Elastic Container Service Developer Guide. // - // If the network mode is host, you can't run multiple instantiations of the + // Currently, only Amazon ECS-optimized AMIs, other Amazon Linux variants with + // the ecs-init package, or AWS Fargate infrastructure support the awsvpc network + // mode. + // + // If the network mode is host, you cannot run multiple instantiations of the // same task on a single container instance when port mappings are used. // // Docker for Windows uses different network modes than Docker for Linux. 
When // you register a task definition with Windows containers, you must not specify - // a network mode. + // a network mode. If you use the console to register a task definition with + // Windows containers, you must choose the network mode object. // // For more information, see Network settings (https://docs.docker.com/engine/reference/run/#network-settings) // in the Docker run reference. NetworkMode *string `locationName:"networkMode" type:"string" enum:"NetworkMode"` + // The process namespace to use for the containers in the task. The valid values + // are host or task. If host is specified, then all containers within the tasks + // that specified the host PID mode on the same container instance share the + // same IPC resources with the host Amazon EC2 instance. If task is specified, + // all containers within the specified task share the same process namespace. + // If no value is specified, the default is a private namespace. For more information, + // see PID settings (https://docs.docker.com/engine/reference/run/#pid-settings---pid) + // in the Docker run reference. + // + // If the host PID mode is used, be aware that there is a heightened risk of + // undesired process namespace expose. For more information, see Docker security + // (https://docs.docker.com/engine/security/security/). + // + // This parameter is not supported for Windows containers or tasks using the + // Fargate launch type. + PidMode *string `locationName:"pidMode" type:"string" enum:"PidMode"` + // An array of placement constraint objects to use for the task. You can specify // a maximum of 10 constraints per task (this limit includes constraints in - // the task definition and those specified at run time). + // the task definition and those specified at runtime). PlacementConstraints []*TaskDefinitionPlacementConstraint `locationName:"placementConstraints" type:"list"` // The launch type required by the task. If no value is specified, it defaults // to EC2. RequiresCompatibilities []*string `locationName:"requiresCompatibilities" type:"list"` + // The metadata that you apply to the task definition to help you categorize + // and organize them. Each tag consists of a key and an optional value, both + // of which you define. Tag keys can have a maximum character length of 128 + // characters, and tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` + // The short name or full Amazon Resource Name (ARN) of the IAM role that containers // in this task can assume. All containers in this task are granted the permissions // that are specified in this role. For more information, see IAM Roles for @@ -8382,6 +9852,16 @@ func (s *RegisterTaskDefinitionInput) Validate() error { } } } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -8413,6 +9893,12 @@ func (s *RegisterTaskDefinitionInput) SetFamily(v string) *RegisterTaskDefinitio return s } +// SetIpcMode sets the IpcMode field's value. +func (s *RegisterTaskDefinitionInput) SetIpcMode(v string) *RegisterTaskDefinitionInput { + s.IpcMode = &v + return s +} + // SetMemory sets the Memory field's value. 
func (s *RegisterTaskDefinitionInput) SetMemory(v string) *RegisterTaskDefinitionInput { s.Memory = &v @@ -8425,6 +9911,12 @@ func (s *RegisterTaskDefinitionInput) SetNetworkMode(v string) *RegisterTaskDefi return s } +// SetPidMode sets the PidMode field's value. +func (s *RegisterTaskDefinitionInput) SetPidMode(v string) *RegisterTaskDefinitionInput { + s.PidMode = &v + return s +} + // SetPlacementConstraints sets the PlacementConstraints field's value. func (s *RegisterTaskDefinitionInput) SetPlacementConstraints(v []*TaskDefinitionPlacementConstraint) *RegisterTaskDefinitionInput { s.PlacementConstraints = v @@ -8437,6 +9929,12 @@ func (s *RegisterTaskDefinitionInput) SetRequiresCompatibilities(v []*string) *R return s } +// SetTags sets the Tags field's value. +func (s *RegisterTaskDefinitionInput) SetTags(v []*Tag) *RegisterTaskDefinitionInput { + s.Tags = v + return s +} + // SetTaskRoleArn sets the TaskRoleArn field's value. func (s *RegisterTaskDefinitionInput) SetTaskRoleArn(v string) *RegisterTaskDefinitionInput { s.TaskRoleArn = &v @@ -8452,6 +9950,9 @@ func (s *RegisterTaskDefinitionInput) SetVolumes(v []*Volume) *RegisterTaskDefin type RegisterTaskDefinitionOutput struct { _ struct{} `type:"structure"` + // The list of tags associated with the task definition. + Tags []*Tag `locationName:"tags" type:"list"` + // The full description of the registered task definition. TaskDefinition *TaskDefinition `locationName:"taskDefinition" type:"structure"` } @@ -8466,13 +9967,64 @@ func (s RegisterTaskDefinitionOutput) GoString() string { return s.String() } +// SetTags sets the Tags field's value. +func (s *RegisterTaskDefinitionOutput) SetTags(v []*Tag) *RegisterTaskDefinitionOutput { + s.Tags = v + return s +} + // SetTaskDefinition sets the TaskDefinition field's value. func (s *RegisterTaskDefinitionOutput) SetTaskDefinition(v *TaskDefinition) *RegisterTaskDefinitionOutput { s.TaskDefinition = v return s } -// Describes the resources available for a container instance. +// The repository credentials for private registry authentication. +type RepositoryCredentials struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the secret containing the private repository + // credentials. + // + // When you are using the Amazon ECS API, AWS CLI, or AWS SDK, if the secret + // exists in the same Region as the task that you are launching then you can + // use either the full ARN or the name of the secret. When you are using the + // AWS Management Console, you must specify the full ARN of the secret. + // + // CredentialsParameter is a required field + CredentialsParameter *string `locationName:"credentialsParameter" type:"string" required:"true"` +} + +// String returns the string representation +func (s RepositoryCredentials) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RepositoryCredentials) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RepositoryCredentials) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RepositoryCredentials"} + if s.CredentialsParameter == nil { + invalidParams.Add(request.NewErrParamRequired("CredentialsParameter")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCredentialsParameter sets the CredentialsParameter field's value. 
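// Editor's illustrative sketch (not part of the vendored SDK diff): registering
// a task definition that uses the new IpcMode, PidMode, and Tags fields shown
// above. The ContainerDefinition and Tag setters used here live elsewhere in
// this file and are assumed to match; the family name and image are hypothetical.
func exampleRegisterTaskDefinition(svc *ecs.ECS) (*ecs.TaskDefinition, error) {
	container := &ecs.ContainerDefinition{}
	container.SetName("web")
	container.SetImage("nginx:latest") // hypothetical image
	container.SetMemory(256)
	container.SetEssential(true)

	tag := &ecs.Tag{}
	tag.SetKey("team")
	tag.SetValue("platform")

	input := &ecs.RegisterTaskDefinitionInput{}
	input.SetFamily("example-web") // hypothetical family
	input.SetNetworkMode("bridge")
	input.SetIpcMode("task") // containers in this task share IPC resources
	input.SetPidMode("task") // containers in this task share a process namespace
	input.SetContainerDefinitions([]*ecs.ContainerDefinition{container})
	input.SetTags([]*ecs.Tag{tag})

	out, err := svc.RegisterTaskDefinition(input)
	if err != nil {
		return nil, err
	}
	return out.TaskDefinition, nil
}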
+func (s *RepositoryCredentials) SetCredentialsParameter(v string) *RepositoryCredentials { + s.CredentialsParameter = &v + return s +} + +// Describes the resources available for a container instance. type Resource struct { _ struct{} `type:"structure"` @@ -8487,7 +10039,8 @@ type Resource struct { // precision floating-point type. LongValue *int64 `locationName:"longValue" type:"long"` - // The name of the resource, such as cpu, memory, ports, or a user-defined resource. + // The name of the resource, such as CPU, MEMORY, PORTS, PORTS_UDP, or a user-defined + // resource. Name *string `locationName:"name" type:"string"` // When the stringSetValue type is set, the value of the resource must be a @@ -8556,6 +10109,11 @@ type RunTaskInput struct { // You can specify up to 10 tasks per call. Count *int64 `locationName:"count" type:"integer"` + // Specifies whether to enable Amazon ECS managed tags for the task. For more + // information, see Tagging Your Amazon ECS Resources (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Using_Tags.html) + // in the Amazon Elastic Container Service Developer Guide. + EnableECSManagedTags *bool `locationName:"enableECSManagedTags" type:"boolean"` + // The name of the task group to associate with the task. The default value // is the family name of the task definition (for example, family:my-family-name). Group *string `locationName:"group" type:"string"` @@ -8564,8 +10122,8 @@ type RunTaskInput struct { LaunchType *string `locationName:"launchType" type:"string" enum:"LaunchType"` // The network configuration for the task. This parameter is required for task - // definitions that use the awsvpc network mode to receive their own Elastic - // Network Interface, and it is not supported for other network modes. For more + // definitions that use the awsvpc network mode to receive their own elastic + // network interface, and it is not supported for other network modes. For more // information, see Task Networking (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) // in the Amazon Elastic Container Service Developer Guide. NetworkConfiguration *NetworkConfiguration `locationName:"networkConfiguration" type:"structure"` @@ -8584,7 +10142,7 @@ type RunTaskInput struct { // An array of placement constraint objects to use for the task. You can specify // up to 10 constraints per task (including constraints in the task definition - // and those specified at run time). + // and those specified at runtime). PlacementConstraints []*PlacementConstraint `locationName:"placementConstraints" type:"list"` // The placement strategy objects to use for the task. You can specify a maximum @@ -8595,7 +10153,11 @@ type RunTaskInput struct { // the latest version is used by default. PlatformVersion *string `locationName:"platformVersion" type:"string"` - // An optional tag specified when a task is started. For example if you automatically + // Specifies whether to propagate the tags from the task definition or the service + // to the task. If no value is specified, the tags are not propagated. + PropagateTags *string `locationName:"propagateTags" type:"string" enum:"PropagateTags"` + + // An optional tag specified when a task is started. For example, if you automatically // trigger a task to run a batch process job, you could apply a unique identifier // for that job to your task with the startedBy parameter. 
You can then identify // which tasks belong to that job by filtering the results of a ListTasks call @@ -8606,6 +10168,12 @@ type RunTaskInput struct { // contains the deployment ID of the service that starts it. StartedBy *string `locationName:"startedBy" type:"string"` + // The metadata that you apply to the task to help you categorize and organize + // them. Each tag consists of a key and an optional value, both of which you + // define. Tag keys can have a maximum character length of 128 characters, and + // tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` + // The family and revision (family:revision) or full ARN of the task definition // to run. If a revision is not specified, the latest ACTIVE revision is used. // @@ -8634,6 +10202,16 @@ func (s *RunTaskInput) Validate() error { invalidParams.AddNested("NetworkConfiguration", err.(request.ErrInvalidParams)) } } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -8653,6 +10231,12 @@ func (s *RunTaskInput) SetCount(v int64) *RunTaskInput { return s } +// SetEnableECSManagedTags sets the EnableECSManagedTags field's value. +func (s *RunTaskInput) SetEnableECSManagedTags(v bool) *RunTaskInput { + s.EnableECSManagedTags = &v + return s +} + // SetGroup sets the Group field's value. func (s *RunTaskInput) SetGroup(v string) *RunTaskInput { s.Group = &v @@ -8695,12 +10279,24 @@ func (s *RunTaskInput) SetPlatformVersion(v string) *RunTaskInput { return s } +// SetPropagateTags sets the PropagateTags field's value. +func (s *RunTaskInput) SetPropagateTags(v string) *RunTaskInput { + s.PropagateTags = &v + return s +} + // SetStartedBy sets the StartedBy field's value. func (s *RunTaskInput) SetStartedBy(v string) *RunTaskInput { s.StartedBy = &v return s } +// SetTags sets the Tags field's value. +func (s *RunTaskInput) SetTags(v []*Tag) *RunTaskInput { + s.Tags = v + return s +} + // SetTaskDefinition sets the TaskDefinition field's value. func (s *RunTaskInput) SetTaskDefinition(v string) *RunTaskInput { s.TaskDefinition = &v @@ -8740,6 +10336,60 @@ func (s *RunTaskOutput) SetTasks(v []*Task) *RunTaskOutput { return s } +// An object representing the secret to expose to your container. +type Secret struct { + _ struct{} `type:"structure"` + + // The value to set as the environment variable on the container. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // The secret to expose to the container. Supported values are either the full + // ARN or the name of the parameter in the AWS Systems Manager Parameter Store. + // + // ValueFrom is a required field + ValueFrom *string `locationName:"valueFrom" type:"string" required:"true"` +} + +// String returns the string representation +func (s Secret) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Secret) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
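// Editor's illustrative sketch (not part of the vendored SDK diff): running a
// task with the new tagging options on RunTaskInput shown above. Same assumed
// imports and client as the earlier sketches; the cluster and task definition
// names are hypothetical.
func exampleRunTask(svc *ecs.ECS) error {
	input := &ecs.RunTaskInput{}
	input.SetCluster("example-cluster")    // hypothetical
	input.SetTaskDefinition("example-web") // latest ACTIVE revision of the family
	input.SetCount(1)
	input.SetEnableECSManagedTags(true)
	input.SetPropagateTags("TASK_DEFINITION") // copy the task definition's tags to the task
	input.SetStartedBy("example-batch-job")   // used later to filter ListTasks

	out, err := svc.RunTask(input)
	if err != nil {
		return err
	}
	for _, task := range out.Tasks {
		fmt.Println("started", aws.StringValue(task.TaskArn))
	}
	return nil
}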
+func (s *Secret) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Secret"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.ValueFrom == nil { + invalidParams.Add(request.NewErrParamRequired("ValueFrom")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *Secret) SetName(v string) *Secret { + s.Name = &v + return s +} + +// SetValueFrom sets the ValueFrom field's value. +func (s *Secret) SetValueFrom(v string) *Secret { + s.ValueFrom = &v + return s +} + // Details on a service within a cluster type Service struct { _ struct{} `type:"structure"` @@ -8747,8 +10397,11 @@ type Service struct { // The Amazon Resource Name (ARN) of the cluster that hosts the service. ClusterArn *string `locationName:"clusterArn" type:"string"` - // The Unix time stamp for when the service was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the service was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // The principal that created the service. + CreatedBy *string `locationName:"createdBy" type:"string"` // Optional deployment parameters that control how many tasks run during the // deployment and the ordering of stopping and starting tasks. @@ -8762,6 +10415,11 @@ type Service struct { // CreateService, and it can be modified with UpdateService. DesiredCount *int64 `locationName:"desiredCount" type:"integer"` + // Specifies whether to enable Amazon ECS managed tags for the tasks in the + // service. For more information, see Tagging Your Amazon ECS Resources (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Using_Tags.html) + // in the Amazon Elastic Container Service Developer Guide. + EnableECSManagedTags *bool `locationName:"enableECSManagedTags" type:"boolean"` + // The event stream for your service. A maximum of 100 of the latest events // are displayed. Events []*ServiceEvent `locationName:"events" type:"list"` @@ -8777,10 +10435,17 @@ type Service struct { // A list of Elastic Load Balancing load balancer objects, containing the load // balancer name, the container name (as it appears in a container definition), // and the container port to access from the load balancer. + // + // Services with tasks that use the awsvpc network mode (for example, those + // with the Fargate launch type) only support Application Load Balancers and + // Network Load Balancers. Classic Load Balancers are not supported. Also, when + // you create any target groups for these services, you must choose ip as the + // target type, not instance, because tasks that use the awsvpc network mode + // are associated with an elastic network interface, not an Amazon EC2 instance. LoadBalancers []*LoadBalancer `locationName:"loadBalancers" type:"list"` // The VPC subnet and security group configuration for tasks that receive their - // own Elastic Network Interface by using the awsvpc networking mode. + // own elastic network interface by using the awsvpc networking mode. NetworkConfiguration *NetworkConfiguration `locationName:"networkConfiguration" type:"structure"` // The number of tasks in the cluster that are in the PENDING state. @@ -8797,6 +10462,10 @@ type Service struct { // in the Amazon Elastic Container Service Developer Guide. 
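// Editor's illustrative sketch (not part of the vendored SDK diff): a Secret
// entry built with only the setters defined above. The Systems Manager
// parameter name is hypothetical; such values are typically attached to the
// Secrets field of a ContainerDefinition (not shown in this excerpt).
func exampleSecret() *ecs.Secret {
	secret := &ecs.Secret{}
	secret.SetName("DB_PASSWORD")                    // environment variable name in the container
	secret.SetValueFrom("/example/prod/db-password") // SSM Parameter Store name or full ARN
	return secret
}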
PlatformVersion *string `locationName:"platformVersion" type:"string"` + // Specifies whether to propagate the tags from the task definition or the service + // to the task. If no value is specified, the tags are not propagated. + PropagateTags *string `locationName:"propagateTags" type:"string" enum:"PropagateTags"` + // The ARN of the IAM role associated with the service that allows the Amazon // ECS container agent to register container instances with an Elastic Load // Balancing load balancer. @@ -8805,20 +10474,45 @@ type Service struct { // The number of tasks in the cluster that are in the RUNNING state. RunningCount *int64 `locationName:"runningCount" type:"integer"` + // The scheduling strategy to use for the service. For more information, see + // Services (http://docs.aws.amazon.com/AmazonECS/latest/developerguideecs_services.html). + // + // There are two service scheduler strategies available: + // + // * REPLICA-The replica scheduling strategy places and maintains the desired + // number of tasks across your cluster. By default, the service scheduler + // spreads tasks across Availability Zones. You can use task placement strategies + // and constraints to customize task placement decisions. + // + // * DAEMON-The daemon scheduling strategy deploys exactly one task on each + // container instance in your cluster. When you are using this strategy, + // do not specify a desired number of tasks or any task placement strategies. + // + // Fargate tasks do not support the DAEMON scheduling strategy. + SchedulingStrategy *string `locationName:"schedulingStrategy" type:"string" enum:"SchedulingStrategy"` + // The ARN that identifies the service. The ARN contains the arn:aws:ecs namespace, - // followed by the region of the service, the AWS account ID of the service + // followed by the Region of the service, the AWS account ID of the service // owner, the service namespace, and then the service name. For example, arn:aws:ecs:region:012345678910:service/my-service. ServiceArn *string `locationName:"serviceArn" type:"string"` // The name of your service. Up to 255 letters (uppercase and lowercase), numbers, // hyphens, and underscores are allowed. Service names must be unique within // a cluster, but you can have similarly named services in multiple clusters - // within a region or across multiple regions. + // within a Region or across multiple Regions. ServiceName *string `locationName:"serviceName" type:"string"` + ServiceRegistries []*ServiceRegistry `locationName:"serviceRegistries" type:"list"` + // The status of the service. The valid values are ACTIVE, DRAINING, or INACTIVE. Status *string `locationName:"status" type:"string"` + // The metadata that you apply to the service to help you categorize and organize + // them. Each tag consists of a key and an optional value, both of which you + // define. Tag keys can have a maximum character length of 128 characters, and + // tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` + // The task definition to use for tasks in the service. This value is specified // when the service is created with CreateService, and it can be modified with // UpdateService. @@ -8847,6 +10541,12 @@ func (s *Service) SetCreatedAt(v time.Time) *Service { return s } +// SetCreatedBy sets the CreatedBy field's value. +func (s *Service) SetCreatedBy(v string) *Service { + s.CreatedBy = &v + return s +} + // SetDeploymentConfiguration sets the DeploymentConfiguration field's value. 
func (s *Service) SetDeploymentConfiguration(v *DeploymentConfiguration) *Service { s.DeploymentConfiguration = v @@ -8865,6 +10565,12 @@ func (s *Service) SetDesiredCount(v int64) *Service { return s } +// SetEnableECSManagedTags sets the EnableECSManagedTags field's value. +func (s *Service) SetEnableECSManagedTags(v bool) *Service { + s.EnableECSManagedTags = &v + return s +} + // SetEvents sets the Events field's value. func (s *Service) SetEvents(v []*ServiceEvent) *Service { s.Events = v @@ -8919,6 +10625,12 @@ func (s *Service) SetPlatformVersion(v string) *Service { return s } +// SetPropagateTags sets the PropagateTags field's value. +func (s *Service) SetPropagateTags(v string) *Service { + s.PropagateTags = &v + return s +} + // SetRoleArn sets the RoleArn field's value. func (s *Service) SetRoleArn(v string) *Service { s.RoleArn = &v @@ -8931,6 +10643,12 @@ func (s *Service) SetRunningCount(v int64) *Service { return s } +// SetSchedulingStrategy sets the SchedulingStrategy field's value. +func (s *Service) SetSchedulingStrategy(v string) *Service { + s.SchedulingStrategy = &v + return s +} + // SetServiceArn sets the ServiceArn field's value. func (s *Service) SetServiceArn(v string) *Service { s.ServiceArn = &v @@ -8943,12 +10661,24 @@ func (s *Service) SetServiceName(v string) *Service { return s } +// SetServiceRegistries sets the ServiceRegistries field's value. +func (s *Service) SetServiceRegistries(v []*ServiceRegistry) *Service { + s.ServiceRegistries = v + return s +} + // SetStatus sets the Status field's value. func (s *Service) SetStatus(v string) *Service { s.Status = &v return s } +// SetTags sets the Tags field's value. +func (s *Service) SetTags(v []*Tag) *Service { + s.Tags = v + return s +} + // SetTaskDefinition sets the TaskDefinition field's value. func (s *Service) SetTaskDefinition(v string) *Service { s.TaskDefinition = &v @@ -8959,8 +10689,8 @@ func (s *Service) SetTaskDefinition(v string) *Service { type ServiceEvent struct { _ struct{} `type:"structure"` - // The Unix time stamp for when the event was triggered. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the event was triggered. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` // The ID string of the event. Id *string `locationName:"id" type:"string"` @@ -8997,6 +10727,119 @@ func (s *ServiceEvent) SetMessage(v string) *ServiceEvent { return s } +// Details of the service registry. +type ServiceRegistry struct { + _ struct{} `type:"structure"` + + // The container name value, already specified in the task definition, to be + // used for your service discovery service. If the task definition that your + // service task specifies uses the bridge or host network mode, you must specify + // a containerName and containerPort combination from the task definition. If + // the task definition that your service task specifies uses the awsvpc network + // mode and a type SRV DNS record is used, you must specify either a containerName + // and containerPort combination or a port value, but not both. + ContainerName *string `locationName:"containerName" type:"string"` + + // The port value, already specified in the task definition, to be used for + // your service discovery service. If the task definition your service task + // specifies uses the bridge or host network mode, you must specify a containerName + // and containerPort combination from the task definition. 
If the task definition + // your service task specifies uses the awsvpc network mode and a type SRV DNS + // record is used, you must specify either a containerName and containerPort + // combination or a port value, but not both. + ContainerPort *int64 `locationName:"containerPort" type:"integer"` + + // The port value used if your service discovery service specified an SRV record. + // This field may be used if both the awsvpc network mode and SRV records are + // used. + Port *int64 `locationName:"port" type:"integer"` + + // The Amazon Resource Name (ARN) of the service registry. The currently supported + // service registry is Amazon Route 53 Auto Naming. For more information, see + // Service (https://docs.aws.amazon.com/Route53/latest/APIReference/API_autonaming_Service.html). + RegistryArn *string `locationName:"registryArn" type:"string"` +} + +// String returns the string representation +func (s ServiceRegistry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceRegistry) GoString() string { + return s.String() +} + +// SetContainerName sets the ContainerName field's value. +func (s *ServiceRegistry) SetContainerName(v string) *ServiceRegistry { + s.ContainerName = &v + return s +} + +// SetContainerPort sets the ContainerPort field's value. +func (s *ServiceRegistry) SetContainerPort(v int64) *ServiceRegistry { + s.ContainerPort = &v + return s +} + +// SetPort sets the Port field's value. +func (s *ServiceRegistry) SetPort(v int64) *ServiceRegistry { + s.Port = &v + return s +} + +// SetRegistryArn sets the RegistryArn field's value. +func (s *ServiceRegistry) SetRegistryArn(v string) *ServiceRegistry { + s.RegistryArn = &v + return s +} + +// The current account setting for a resource. +type Setting struct { + _ struct{} `type:"structure"` + + // The account resource name. + Name *string `locationName:"name" type:"string" enum:"SettingName"` + + // The ARN of the principal, which can be an IAM user, IAM role, or the root + // user. If this field is omitted, the authenticated user is assumed. + PrincipalArn *string `locationName:"principalArn" type:"string"` + + // The current account setting for the resource name. If ENABLED, then the resource + // will receive the new Amazon Resource Name (ARN) and resource identifier (ID) + // format. If DISABLED, then the resource will receive the old Amazon Resource + // Name (ARN) and resource identifier (ID) format. + Value *string `locationName:"value" type:"string"` +} + +// String returns the string representation +func (s Setting) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Setting) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *Setting) SetName(v string) *Setting { + s.Name = &v + return s +} + +// SetPrincipalArn sets the PrincipalArn field's value. +func (s *Setting) SetPrincipalArn(v string) *Setting { + s.PrincipalArn = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Setting) SetValue(v string) *Setting { + s.Value = &v + return s +} + type StartTaskInput struct { _ struct{} `type:"structure"` @@ -9012,12 +10855,17 @@ type StartTaskInput struct { // ContainerInstances is a required field ContainerInstances []*string `locationName:"containerInstances" type:"list" required:"true"` + // Specifies whether to enable Amazon ECS managed tags for the task. 
For more + // information, see Tagging Your Amazon ECS Resources (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Using_Tags.html) + // in the Amazon Elastic Container Service Developer Guide. + EnableECSManagedTags *bool `locationName:"enableECSManagedTags" type:"boolean"` + // The name of the task group to associate with the task. The default value // is the family name of the task definition (for example, family:my-family-name). Group *string `locationName:"group" type:"string"` // The VPC subnet and security group configuration for tasks that receive their - // own Elastic Network Interface by using the awsvpc networking mode. + // own elastic network interface by using the awsvpc networking mode. NetworkConfiguration *NetworkConfiguration `locationName:"networkConfiguration" type:"structure"` // A list of container overrides in JSON format that specify the name of a container @@ -9032,7 +10880,11 @@ type StartTaskInput struct { // the JSON formatting characters of the override structure. Overrides *TaskOverride `locationName:"overrides" type:"structure"` - // An optional tag specified when a task is started. For example if you automatically + // Specifies whether to propagate the tags from the task definition or the service + // to the task. If no value is specified, the tags are not propagated. + PropagateTags *string `locationName:"propagateTags" type:"string" enum:"PropagateTags"` + + // An optional tag specified when a task is started. For example, if you automatically // trigger a task to run a batch process job, you could apply a unique identifier // for that job to your task with the startedBy parameter. You can then identify // which tasks belong to that job by filtering the results of a ListTasks call @@ -9043,6 +10895,12 @@ type StartTaskInput struct { // contains the deployment ID of the service that starts it. StartedBy *string `locationName:"startedBy" type:"string"` + // The metadata that you apply to the task to help you categorize and organize + // them. Each tag consists of a key and an optional value, both of which you + // define. Tag keys can have a maximum character length of 128 characters, and + // tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` + // The family and revision (family:revision) or full ARN of the task definition // to start. If a revision is not specified, the latest ACTIVE revision is used. // @@ -9074,6 +10932,16 @@ func (s *StartTaskInput) Validate() error { invalidParams.AddNested("NetworkConfiguration", err.(request.ErrInvalidParams)) } } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -9093,6 +10961,12 @@ func (s *StartTaskInput) SetContainerInstances(v []*string) *StartTaskInput { return s } +// SetEnableECSManagedTags sets the EnableECSManagedTags field's value. +func (s *StartTaskInput) SetEnableECSManagedTags(v bool) *StartTaskInput { + s.EnableECSManagedTags = &v + return s +} + // SetGroup sets the Group field's value. func (s *StartTaskInput) SetGroup(v string) *StartTaskInput { s.Group = &v @@ -9111,12 +10985,24 @@ func (s *StartTaskInput) SetOverrides(v *TaskOverride) *StartTaskInput { return s } +// SetPropagateTags sets the PropagateTags field's value. 
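The `StartTaskInput` additions above (`EnableECSManagedTags`, `PropagateTags`, `Tags`) could be exercised roughly as in the sketch below. It is illustrative only: the cluster name, task definition, and container instance ARN are placeholders, and the `Cluster` field is assumed from the unchanged part of the struct.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

// startTaggedTask starts a task on a specific container instance, tags it,
// and asks ECS to copy the task definition's tags onto the task.
func startTaggedTask(svc *ecs.ECS, containerInstanceARN string) (*ecs.StartTaskOutput, error) {
	return svc.StartTask(&ecs.StartTaskInput{
		Cluster:              aws.String("default"),     // placeholder cluster
		ContainerInstances:   []*string{aws.String(containerInstanceARN)},
		TaskDefinition:       aws.String("my-family:1"), // placeholder task definition
		EnableECSManagedTags: aws.Bool(true),
		PropagateTags:        aws.String(ecs.PropagateTagsTaskDefinition),
		Tags: []*ecs.Tag{
			{Key: aws.String("team"), Value: aws.String("platform")},
		},
	})
}
```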
+func (s *StartTaskInput) SetPropagateTags(v string) *StartTaskInput { + s.PropagateTags = &v + return s +} + // SetStartedBy sets the StartedBy field's value. func (s *StartTaskInput) SetStartedBy(v string) *StartTaskInput { s.StartedBy = &v return s } +// SetTags sets the Tags field's value. +func (s *StartTaskInput) SetTags(v []*Tag) *StartTaskInput { + s.Tags = v + return s +} + // SetTaskDefinition sets the TaskDefinition field's value. func (s *StartTaskInput) SetTaskDefinition(v string) *StartTaskInput { s.TaskDefinition = &v @@ -9354,14 +11240,14 @@ type SubmitTaskStateChangeInput struct { // Any containers associated with the state change request. Containers []*ContainerStateChange `locationName:"containers" type:"list"` - // The Unix time stamp for when the task execution stopped. - ExecutionStoppedAt *time.Time `locationName:"executionStoppedAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the task execution stopped. + ExecutionStoppedAt *time.Time `locationName:"executionStoppedAt" type:"timestamp"` - // The Unix time stamp for when the container image pull began. - PullStartedAt *time.Time `locationName:"pullStartedAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the container image pull began. + PullStartedAt *time.Time `locationName:"pullStartedAt" type:"timestamp"` - // The Unix time stamp for when the container image pull completed. - PullStoppedAt *time.Time `locationName:"pullStoppedAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the container image pull completed. + PullStoppedAt *time.Time `locationName:"pullStoppedAt" type:"timestamp"` // The reason for the state change request. Reason *string `locationName:"reason" type:"string"` @@ -9480,6 +11366,186 @@ func (s *SubmitTaskStateChangeOutput) SetAcknowledgment(v string) *SubmitTaskSta return s } +// A list of namespaced kernel parameters to set in the container. This parameter +// maps to Sysctls in the Create a container (https://docs.docker.com/engine/api/v1.35/#operation/ContainerCreate) +// section of the Docker Remote API (https://docs.docker.com/engine/api/v1.35/) +// and the --sysctl option to docker run (https://docs.docker.com/engine/reference/run/). +// +// It is not recommended that you specify network-related systemControls parameters +// for multiple containers in a single task that also uses either the awsvpc +// or host network mode for the following reasons: +// +// * For tasks that use the awsvpc network mode, if you set systemControls +// for any container, it applies to all containers in the task. If you set +// different systemControls for multiple containers in a single task, the +// container that is started last determines which systemControls take effect. +// +// * For tasks that use the host network mode, the systemControls parameter +// applies to the container instance's kernel parameter as well as that of +// all containers of any tasks running on that container instance. +type SystemControl struct { + _ struct{} `type:"structure"` + + // The namespaced kernel parameter to set a value for. + Namespace *string `locationName:"namespace" type:"string"` + + // The value for the namespaced kernel parameter specified in namespace. 
+ Value *string `locationName:"value" type:"string"` +} + +// String returns the string representation +func (s SystemControl) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SystemControl) GoString() string { + return s.String() +} + +// SetNamespace sets the Namespace field's value. +func (s *SystemControl) SetNamespace(v string) *SystemControl { + s.Namespace = &v + return s +} + +// SetValue sets the Value field's value. +func (s *SystemControl) SetValue(v string) *SystemControl { + s.Value = &v + return s +} + +// The metadata that you apply to a resource to help you categorize and organize +// them. Each tag consists of a key and an optional value, both of which you +// define. Tag keys can have a maximum character length of 128 characters, and +// tag values can have a maximum length of 256 characters. +type Tag struct { + _ struct{} `type:"structure"` + + // One part of a key-value pair that make up a tag. A key is a general label + // that acts like a category for more specific tag values. + Key *string `locationName:"key" min:"1" type:"string"` + + // The optional part of a key-value pair that make up a tag. A value acts as + // a descriptor within a tag category (key). + Value *string `locationName:"value" type:"string"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type TagResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource to which to add tags. Currently, + // the supported resources are Amazon ECS tasks, services, task definitions, + // clusters, and container instances. + // + // ResourceArn is a required field + ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` + + // The tags to add to the resource. A tag is an array of key-value pairs. Tag + // keys can have a maximum character length of 128 characters, and tag values + // can have a maximum length of 256 characters. + // + // Tags is a required field + Tags []*Tag `locationName:"tags" type:"list" required:"true"` +} + +// String returns the string representation +func (s TagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
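Given the new `Tag` and `TagResourceInput` types above, tagging an existing ECS resource might look like the sketch below; the ARN is a caller-supplied placeholder, and the `TagResource` client method is assumed from the operations added elsewhere in this file.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

// tagResource attaches a single key/value tag to an ECS resource
// (cluster, service, task, task definition, or container instance).
func tagResource(svc *ecs.ECS, resourceARN string) error {
	_, err := svc.TagResource(&ecs.TagResourceInput{
		ResourceArn: aws.String(resourceARN), // placeholder ARN supplied by the caller
		Tags: []*ecs.Tag{
			{Key: aws.String("environment"), Value: aws.String("production")},
		},
	})
	return err
}
```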
+func (s *TagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *TagResourceInput) SetResourceArn(v string) *TagResourceInput { + s.ResourceArn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *TagResourceInput) SetTags(v []*Tag) *TagResourceInput { + s.Tags = v + return s +} + +type TagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s TagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceOutput) GoString() string { + return s.String() +} + // Details on a task in a cluster. type Task struct { _ struct{} `type:"structure"` @@ -9494,8 +11560,8 @@ type Task struct { // The connectivity status of a task. Connectivity *string `locationName:"connectivity" type:"string" enum:"Connectivity"` - // The Unix time stamp for when the task last went into CONNECTED status. - ConnectivityAt *time.Time `locationName:"connectivityAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the task last went into CONNECTED status. + ConnectivityAt *time.Time `locationName:"connectivityAt" type:"timestamp"` // The ARN of the container instances that host the task. ContainerInstanceArn *string `locationName:"containerInstanceArn" type:"string"` @@ -9503,17 +11569,18 @@ type Task struct { // The containers associated with the task. Containers []*Container `locationName:"containers" type:"list"` - // The number of CPU units used by the task. It can be expressed as an integer - // using CPU units, for example 1024, or as a string using vCPUs, for example - // 1 vCPU or 1 vcpu, in a task definition but is converted to an integer indicating - // the CPU units when the task definition is registered. + // The number of CPU units used by the task as expressed in a task definition. + // It can be expressed as an integer using CPU units, for example 1024. It can + // also be expressed as a string using vCPUs, for example 1 vCPU or 1 vcpu. + // String values are converted to an integer indicating the CPU units when the + // task definition is registered. // - // If using the EC2 launch type, this field is optional. Supported values are - // between 128 CPU units (0.125 vCPUs) and 10240 CPU units (10 vCPUs). + // If you are using the EC2 launch type, this field is optional. Supported values + // are between 128 CPU units (0.125 vCPUs) and 10240 CPU units (10 vCPUs). 
// - // If using the Fargate launch type, this field is required and you must use - // one of the following values, which determines your range of supported values - // for the memory parameter: + // If you are using the Fargate launch type, this field is required and you + // must use one of the following values, which determines your range of supported + // values for the memory parameter: // // * 256 (.25 vCPU) - Available memory values: 512 (0.5 GB), 1024 (1 GB), // 2048 (2 GB) @@ -9531,15 +11598,16 @@ type Task struct { // (30 GB) in increments of 1024 (1 GB) Cpu *string `locationName:"cpu" type:"string"` - // The Unix time stamp for when the task was created (the task entered the PENDING + // The Unix timestamp for when the task was created (the task entered the PENDING // state). - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The desired status of the task. + // The desired status of the task. For more information, see Task Lifecycle + // (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_life_cycle.html). DesiredStatus *string `locationName:"desiredStatus" type:"string"` - // The Unix time stamp for when the task execution stopped. - ExecutionStoppedAt *time.Time `locationName:"executionStoppedAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the task execution stopped. + ExecutionStoppedAt *time.Time `locationName:"executionStoppedAt" type:"timestamp"` // The name of the task group associated with the task. Group *string `locationName:"group" type:"string"` @@ -9557,22 +11625,24 @@ type Task struct { // override any Docker health checks that exist in the container image. HealthStatus *string `locationName:"healthStatus" type:"string" enum:"HealthStatus"` - // The last known status of the task. + // The last known status of the task. For more information, see Task Lifecycle + // (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_life_cycle.html). LastStatus *string `locationName:"lastStatus" type:"string"` // The launch type on which your task is running. LaunchType *string `locationName:"launchType" type:"string" enum:"LaunchType"` - // The amount of memory (in MiB) used by the task. It can be expressed as an - // integer using MiB, for example 1024, or as a string using GB, for example - // 1GB or 1 GB, in a task definition but is converted to an integer indicating - // the MiB when the task definition is registered. + // The amount of memory (in MiB) used by the task as expressed in a task definition. + // It can be expressed as an integer using MiB, for example 1024. It can also + // be expressed as a string using GB, for example 1GB or 1 GB. String values + // are converted to an integer indicating the MiB when the task definition is + // registered. // - // If using the EC2 launch type, this field is optional. + // If you are using the EC2 launch type, this field is optional. 
// - // If using the Fargate launch type, this field is required and you must use - // one of the following values, which determines your range of supported values - // for the cpu parameter: + // If you are using the Fargate launch type, this field is required and you + // must use one of the following values, which determines your range of supported + // values for the cpu parameter: // // * 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available cpu values: 256 (.25 // vCPU) @@ -9598,31 +11668,41 @@ type Task struct { // in the Amazon Elastic Container Service Developer Guide. PlatformVersion *string `locationName:"platformVersion" type:"string"` - // The Unix time stamp for when the container image pull began. - PullStartedAt *time.Time `locationName:"pullStartedAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the container image pull began. + PullStartedAt *time.Time `locationName:"pullStartedAt" type:"timestamp"` - // The Unix time stamp for when the container image pull completed. - PullStoppedAt *time.Time `locationName:"pullStoppedAt" type:"timestamp" timestampFormat:"unix"` + // The Unix timestamp for when the container image pull completed. + PullStoppedAt *time.Time `locationName:"pullStoppedAt" type:"timestamp"` - // The Unix time stamp for when the task started (the task transitioned from + // The Unix timestamp for when the task started (the task transitioned from // the PENDING state to the RUNNING state). - StartedAt *time.Time `locationName:"startedAt" type:"timestamp" timestampFormat:"unix"` + StartedAt *time.Time `locationName:"startedAt" type:"timestamp"` // The tag specified when a task is started. If the task is started by an Amazon // ECS service, then the startedBy parameter contains the deployment ID of the // service that starts it. StartedBy *string `locationName:"startedBy" type:"string"` - // The Unix time stamp for when the task was stopped (the task transitioned - // from the RUNNING state to the STOPPED state). - StoppedAt *time.Time `locationName:"stoppedAt" type:"timestamp" timestampFormat:"unix"` + // The stop code indicating why a task was stopped. The stoppedReason may contain + // additional details. + StopCode *string `locationName:"stopCode" type:"string" enum:"TaskStopCode"` + + // The Unix timestamp for when the task was stopped (the task transitioned from + // the RUNNING state to the STOPPED state). + StoppedAt *time.Time `locationName:"stoppedAt" type:"timestamp"` // The reason the task was stopped. StoppedReason *string `locationName:"stoppedReason" type:"string"` - // The Unix time stamp for when the task will stop (transitions from the RUNNING + // The Unix timestamp for when the task stops (transitions from the RUNNING // state to STOPPED). - StoppingAt *time.Time `locationName:"stoppingAt" type:"timestamp" timestampFormat:"unix"` + StoppingAt *time.Time `locationName:"stoppingAt" type:"timestamp"` + + // The metadata that you apply to the task to help you categorize and organize + // them. Each tag consists of a key and an optional value, both of which you + // define. Tag keys can have a maximum character length of 128 characters, and + // tag values can have a maximum length of 256 characters. + Tags []*Tag `locationName:"tags" type:"list"` // The Amazon Resource Name (ARN) of the task. TaskArn *string `locationName:"taskArn" type:"string"` @@ -9775,6 +11855,12 @@ func (s *Task) SetStartedBy(v string) *Task { return s } +// SetStopCode sets the StopCode field's value. 
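The new `StopCode` and `StoppedReason` fields on `Task` above can be inspected after the fact. A minimal sketch, assuming a placeholder cluster name and a caller-supplied task ARN, and using the `TaskStopCode` enum values added later in this file:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

// explainStoppedTask prints why a task stopped, using the TaskStopCode
// enum values introduced in this SDK update.
func explainStoppedTask(svc *ecs.ECS, taskARN string) error {
	out, err := svc.DescribeTasks(&ecs.DescribeTasksInput{
		Cluster: aws.String("default"), // placeholder cluster
		Tasks:   []*string{aws.String(taskARN)},
	})
	if err != nil {
		return err
	}
	for _, t := range out.Tasks {
		switch aws.StringValue(t.StopCode) {
		case ecs.TaskStopCodeEssentialContainerExited:
			fmt.Println("essential container exited:", aws.StringValue(t.StoppedReason))
		case ecs.TaskStopCodeTaskFailedToStart, ecs.TaskStopCodeUserInitiated:
			fmt.Println(aws.StringValue(t.StopCode), aws.StringValue(t.StoppedReason))
		}
	}
	return nil
}
```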
+func (s *Task) SetStopCode(v string) *Task { + s.StopCode = &v + return s +} + // SetStoppedAt sets the StoppedAt field's value. func (s *Task) SetStoppedAt(v time.Time) *Task { s.StoppedAt = &v @@ -9793,6 +11879,12 @@ func (s *Task) SetStoppingAt(v time.Time) *Task { return s } +// SetTags sets the Tags field's value. +func (s *Task) SetTags(v []*Tag) *Task { + s.Tags = v + return s +} + // SetTaskArn sets the TaskArn field's value. func (s *Task) SetTaskArn(v string) *Task { s.TaskArn = &v @@ -9826,10 +11918,11 @@ type TaskDefinition struct { // in the Amazon Elastic Container Service Developer Guide. ContainerDefinitions []*ContainerDefinition `locationName:"containerDefinitions" type:"list"` - // The number of cpu units used by the task. If using the EC2 launch type, this - // field is optional and any value can be used. If using the Fargate launch - // type, this field is required and you must use one of the following values, - // which determines your range of valid values for the memory parameter: + // The number of cpu units used by the task. If you are using the EC2 launch + // type, this field is optional and any value can be used. If you are using + // the Fargate launch type, this field is required and you must use one of the + // following values, which determines your range of valid values for the memory + // parameter: // // * 256 (.25 vCPU) - Available memory values: 512 (0.5 GB), 1024 (1 GB), // 2048 (2 GB) @@ -9854,6 +11947,37 @@ type TaskDefinition struct { // The family of your task definition, used as the definition name. Family *string `locationName:"family" type:"string"` + // The IPC resource namespace to use for the containers in the task. The valid + // values are host, task, or none. If host is specified, then all containers + // within the tasks that specified the host IPC mode on the same container instance + // share the same IPC resources with the host Amazon EC2 instance. If task is + // specified, all containers within the specified task share the same IPC resources. + // If none is specified, then IPC resources within the containers of a task + // are private and not shared with other containers in a task or on the container + // instance. If no value is specified, then the IPC resource namespace sharing + // depends on the Docker daemon setting on the container instance. For more + // information, see IPC settings (https://docs.docker.com/engine/reference/run/#ipc-settings---ipc) + // in the Docker run reference. + // + // If the host IPC mode is used, be aware that there is a heightened risk of + // undesired IPC namespace expose. For more information, see Docker security + // (https://docs.docker.com/engine/security/security/). + // + // If you are setting namespaced kernel parameters using systemControls for + // the containers in the task, the following will apply to your IPC resource + // namespace. For more information, see System Controls (http://docs.aws.amazon.com/AmazonECS/latest/developerguidetask_definition_parameters.html) + // in the Amazon Elastic Container Service Developer Guide. + // + // * For tasks that use the host IPC mode, IPC namespace related systemControls + // are not supported. + // + // * For tasks that use the task IPC mode, IPC namespace related systemControls + // will apply to all containers within a task. + // + // This parameter is not supported for Windows containers or tasks using the + // Fargate launch type. 
+ IpcMode *string `locationName:"ipcMode" type:"string" enum:"IpcMode"` + // The amount (in MiB) of memory used by the task. If using the EC2 launch type, // this field is optional and any value can be used. If using the Fargate launch // type, this field is required and you must use one of the following values, @@ -9877,30 +12001,30 @@ type TaskDefinition struct { // The Docker networking mode to use for the containers in the task. The valid // values are none, bridge, awsvpc, and host. The default Docker network mode - // is bridge. If using the Fargate launch type, the awsvpc network mode is required. - // If using the EC2 launch type, any network mode can be used. If the network - // mode is set to none, you can't specify port mappings in your container definitions, - // and the task's containers do not have external connectivity. The host and - // awsvpc network modes offer the highest networking performance for containers - // because they use the EC2 network stack instead of the virtualized network - // stack provided by the bridge mode. + // is bridge. If you are using the Fargate launch type, the awsvpc network mode + // is required. If you are using the EC2 launch type, any network mode can be + // used. If the network mode is set to none, you cannot specify port mappings + // in your container definitions, and the tasks containers do not have external + // connectivity. The host and awsvpc network modes offer the highest networking + // performance for containers because they use the EC2 network stack instead + // of the virtualized network stack provided by the bridge mode. // // With the host and awsvpc network modes, exposed container ports are mapped // directly to the corresponding host port (for the host network mode) or the // attached elastic network interface port (for the awsvpc network mode), so // you cannot take advantage of dynamic host port mappings. // - // If the network mode is awsvpc, the task is allocated an Elastic Network Interface, - // and you must specify a NetworkConfiguration when you create a service or - // run a task with the task definition. For more information, see Task Networking + // If the network mode is awsvpc, the task is allocated an elastic network interface, + // and you must specify a NetworkConfiguration value when you create a service + // or run a task with the task definition. For more information, see Task Networking // (http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) // in the Amazon Elastic Container Service Developer Guide. // - // Currently, only the Amazon ECS-optimized AMI, other Amazon Linux variants - // with the ecs-init package, or AWS Fargate infrastructure support the awsvpc - // network mode. + // Currently, only Amazon ECS-optimized AMIs, other Amazon Linux variants with + // the ecs-init package, or AWS Fargate infrastructure support the awsvpc network + // mode. // - // If the network mode is host, you can't run multiple instantiations of the + // If the network mode is host, you cannot run multiple instantiations of the // same task on a single container instance when port mappings are used. // // Docker for Windows uses different network modes than Docker for Linux. When @@ -9912,6 +12036,23 @@ type TaskDefinition struct { // in the Docker run reference. NetworkMode *string `locationName:"networkMode" type:"string" enum:"NetworkMode"` + // The process namespace to use for the containers in the task. The valid values + // are host or task. 
If host is specified, then all containers within the tasks + // that specified the host PID mode on the same container instance share the + // same IPC resources with the host Amazon EC2 instance. If task is specified, + // all containers within the specified task share the same process namespace. + // If no value is specified, the default is a private namespace. For more information, + // see PID settings (https://docs.docker.com/engine/reference/run/#pid-settings---pid) + // in the Docker run reference. + // + // If the host PID mode is used, be aware that there is a heightened risk of + // undesired process namespace expose. For more information, see Docker security + // (https://docs.docker.com/engine/security/security/). + // + // This parameter is not supported for Windows containers or tasks using the + // Fargate launch type. + PidMode *string `locationName:"pidMode" type:"string" enum:"PidMode"` + // An array of placement constraint objects to use for tasks. This field is // not valid if using the Fargate launch type for your task. PlacementConstraints []*TaskDefinitionPlacementConstraint `locationName:"placementConstraints" type:"list"` @@ -9925,7 +12066,7 @@ type TaskDefinition struct { // The revision of the task in a particular family. The revision is a version // number of a task definition in a family. When you register a task definition - // for the first time, the revision is 1; each time you register a new revision + // for the first time, the revision is 1. Each time you register a new revision // of a task definition in the same family, the revision value always increases // by one (even if you have deregistered previous revisions in this family). Revision *int64 `locationName:"revision" type:"integer"` @@ -9997,6 +12138,12 @@ func (s *TaskDefinition) SetFamily(v string) *TaskDefinition { return s } +// SetIpcMode sets the IpcMode field's value. +func (s *TaskDefinition) SetIpcMode(v string) *TaskDefinition { + s.IpcMode = &v + return s +} + // SetMemory sets the Memory field's value. func (s *TaskDefinition) SetMemory(v string) *TaskDefinition { s.Memory = &v @@ -10009,6 +12156,12 @@ func (s *TaskDefinition) SetNetworkMode(v string) *TaskDefinition { return s } +// SetPidMode sets the PidMode field's value. +func (s *TaskDefinition) SetPidMode(v string) *TaskDefinition { + s.PidMode = &v + return s +} + // SetPlacementConstraints sets the PlacementConstraints field's value. func (s *TaskDefinition) SetPlacementConstraints(v []*TaskDefinitionPlacementConstraint) *TaskDefinition { s.PlacementConstraints = v @@ -10145,6 +12298,75 @@ func (s *TaskOverride) SetTaskRoleArn(v string) *TaskOverride { return s } +// The container path, mount options, and size of the tmpfs mount. +type Tmpfs struct { + _ struct{} `type:"structure"` + + // The absolute file path where the tmpfs volume is to be mounted. + // + // ContainerPath is a required field + ContainerPath *string `locationName:"containerPath" type:"string" required:"true"` + + // The list of tmpfs volume mount options. 
+ // + // Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" + // | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | + // "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" + // | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" + // | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" + // | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol" + MountOptions []*string `locationName:"mountOptions" type:"list"` + + // The size (in MiB) of the tmpfs volume. + // + // Size is a required field + Size *int64 `locationName:"size" type:"integer" required:"true"` +} + +// String returns the string representation +func (s Tmpfs) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tmpfs) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tmpfs) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tmpfs"} + if s.ContainerPath == nil { + invalidParams.Add(request.NewErrParamRequired("ContainerPath")) + } + if s.Size == nil { + invalidParams.Add(request.NewErrParamRequired("Size")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContainerPath sets the ContainerPath field's value. +func (s *Tmpfs) SetContainerPath(v string) *Tmpfs { + s.ContainerPath = &v + return s +} + +// SetMountOptions sets the MountOptions field's value. +func (s *Tmpfs) SetMountOptions(v []*string) *Tmpfs { + s.MountOptions = v + return s +} + +// SetSize sets the Size field's value. +func (s *Tmpfs) SetSize(v int64) *Tmpfs { + s.Size = &v + return s +} + // The ulimit settings to pass to the container. type Ulimit struct { _ struct{} `type:"structure"` @@ -10212,6 +12434,74 @@ func (s *Ulimit) SetSoftLimit(v int64) *Ulimit { return s } +type UntagResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource from which to delete tags. + // Currently, the supported resources are Amazon ECS tasks, services, task definitions, + // clusters, and container instances. + // + // ResourceArn is a required field + ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` + + // The keys of the tags to be removed. + // + // TagKeys is a required field + TagKeys []*string `locationName:"tagKeys" type:"list" required:"true"` +} + +// String returns the string representation +func (s UntagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *UntagResourceInput) SetResourceArn(v string) *UntagResourceInput { + s.ResourceArn = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. 
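And the inverse operation, using the `UntagResourceInput` type above (sketch only; the ARN and tag key are placeholders):

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

// untagResource removes a tag key from an ECS resource.
func untagResource(svc *ecs.ECS, resourceARN string) error {
	_, err := svc.UntagResource(&ecs.UntagResourceInput{
		ResourceArn: aws.String(resourceARN),              // placeholder ARN
		TagKeys:     []*string{aws.String("environment")}, // placeholder tag key
	})
	return err
}
```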
+func (s *UntagResourceInput) SetTagKeys(v []*string) *UntagResourceInput { + s.TagKeys = v + return s +} + +type UntagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UntagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceOutput) GoString() string { + return s.String() +} + type UpdateContainerAgentInput struct { _ struct{} `type:"structure"` @@ -10396,8 +12686,11 @@ type UpdateServiceInput struct { // service. DesiredCount *int64 `locationName:"desiredCount" type:"integer"` - // Whether to force a new deployment of the service. By default, --no-force-new-deployment - // is assumed unless otherwise specified. + // Whether to force a new deployment of the service. Deployments are not forced + // by default. You can use this option to trigger a new deployment with no service + // definition changes. For example, you can update a service's tasks to use + // a newer Docker image with the same image/tag combination (my_image:latest) + // or to roll Fargate tasks onto a newer platform version. ForceNewDeployment *bool `locationName:"forceNewDeployment" type:"boolean"` // The period of time, in seconds, that the Amazon ECS service scheduler should @@ -10423,7 +12716,7 @@ type UpdateServiceInput struct { // network configuration, this does not trigger a new service deployment. NetworkConfiguration *NetworkConfiguration `locationName:"networkConfiguration" type:"structure"` - // The platform version you want to update your service to run. + // The platform version that your service should run. PlatformVersion *string `locationName:"platformVersion" type:"string"` // The name of the service to update. @@ -10588,15 +12881,26 @@ func (s *VersionInfo) SetDockerVersion(v string) *VersionInfo { return s } -// A data volume used in a task definition. +// A data volume used in a task definition. For tasks that use a Docker volume, +// specify a DockerVolumeConfiguration. For tasks that use a bind mount host +// volume, specify a host and optional sourcePath. For more information, see +// Using Data Volumes in Tasks (http://docs.aws.amazon.com/AmazonECS/latest/developerguideusing_data_volumes.html). type Volume struct { _ struct{} `type:"structure"` - // The contents of the host parameter determine whether your data volume persists - // on the host container instance and where it is stored. If the host parameter - // is empty, then the Docker daemon assigns a host path for your data volume, - // but the data is not guaranteed to persist after the containers associated - // with it stop running. + // This parameter is specified when you are using Docker volumes. Docker volumes + // are only supported when you are using the EC2 launch type. Windows containers + // only support the use of the local driver. To use bind mounts, specify a host + // instead. + DockerVolumeConfiguration *DockerVolumeConfiguration `locationName:"dockerVolumeConfiguration" type:"structure"` + + // This parameter is specified when you are using bind mount host volumes. Bind + // mount host volumes are supported when you are using either the EC2 or Fargate + // launch types. The contents of the host parameter determine whether your bind + // mount host volume persists on the host container instance and where it is + // stored. 
If the host parameter is empty, then the Docker daemon assigns a + // host path for your data volume, but the data is not guaranteed to persist + // after the containers associated with it stop running. // // Windows containers can mount whole directories on the same drive as $env:ProgramData. // Windows containers cannot mount directories on a different drive, and mount @@ -10620,6 +12924,12 @@ func (s Volume) GoString() string { return s.String() } +// SetDockerVolumeConfiguration sets the DockerVolumeConfiguration field's value. +func (s *Volume) SetDockerVolumeConfiguration(v *DockerVolumeConfiguration) *Volume { + s.DockerVolumeConfiguration = v + return s +} + // SetHost sets the Host field's value. func (s *Volume) SetHost(v *HostVolumeProperties) *Volume { s.Host = v @@ -10699,6 +13009,9 @@ const ( const ( // ClusterFieldStatistics is a ClusterField enum value ClusterFieldStatistics = "STATISTICS" + + // ClusterFieldTags is a ClusterField enum value + ClusterFieldTags = "TAGS" ) const ( @@ -10717,6 +13030,11 @@ const ( ConnectivityDisconnected = "DISCONNECTED" ) +const ( + // ContainerInstanceFieldTags is a ContainerInstanceField enum value + ContainerInstanceFieldTags = "TAGS" +) + const ( // ContainerInstanceStatusActive is a ContainerInstanceStatus enum value ContainerInstanceStatusActive = "ACTIVE" @@ -10758,6 +13076,17 @@ const ( HealthStatusUnknown = "UNKNOWN" ) +const ( + // IpcModeHost is a IpcMode enum value + IpcModeHost = "host" + + // IpcModeTask is a IpcMode enum value + IpcModeTask = "task" + + // IpcModeNone is a IpcMode enum value + IpcModeNone = "none" +) + const ( // LaunchTypeEc2 is a LaunchType enum value LaunchTypeEc2 = "EC2" @@ -10803,6 +13132,14 @@ const ( NetworkModeNone = "none" ) +const ( + // PidModeHost is a PidMode enum value + PidModeHost = "host" + + // PidModeTask is a PidMode enum value + PidModeTask = "task" +) + const ( // PlacementConstraintTypeDistinctInstance is a PlacementConstraintType enum value PlacementConstraintTypeDistinctInstance = "distinctInstance" @@ -10822,6 +13159,46 @@ const ( PlacementStrategyTypeBinpack = "binpack" ) +const ( + // PropagateTagsTaskDefinition is a PropagateTags enum value + PropagateTagsTaskDefinition = "TASK_DEFINITION" + + // PropagateTagsService is a PropagateTags enum value + PropagateTagsService = "SERVICE" +) + +const ( + // SchedulingStrategyReplica is a SchedulingStrategy enum value + SchedulingStrategyReplica = "REPLICA" + + // SchedulingStrategyDaemon is a SchedulingStrategy enum value + SchedulingStrategyDaemon = "DAEMON" +) + +const ( + // ScopeTask is a Scope enum value + ScopeTask = "task" + + // ScopeShared is a Scope enum value + ScopeShared = "shared" +) + +const ( + // ServiceFieldTags is a ServiceField enum value + ServiceFieldTags = "TAGS" +) + +const ( + // SettingNameServiceLongArnFormat is a SettingName enum value + SettingNameServiceLongArnFormat = "serviceLongArnFormat" + + // SettingNameTaskLongArnFormat is a SettingName enum value + SettingNameTaskLongArnFormat = "taskLongArnFormat" + + // SettingNameContainerInstanceLongArnFormat is a SettingName enum value + SettingNameContainerInstanceLongArnFormat = "containerInstanceLongArnFormat" +) + const ( // SortOrderAsc is a SortOrder enum value SortOrderAsc = "ASC" @@ -10846,6 +13223,11 @@ const ( TaskDefinitionFamilyStatusAll = "ALL" ) +const ( + // TaskDefinitionFieldTags is a TaskDefinitionField enum value + TaskDefinitionFieldTags = "TAGS" +) + const ( // TaskDefinitionPlacementConstraintTypeMemberOf is a 
TaskDefinitionPlacementConstraintType enum value TaskDefinitionPlacementConstraintTypeMemberOf = "memberOf" @@ -10859,6 +13241,22 @@ const ( TaskDefinitionStatusInactive = "INACTIVE" ) +const ( + // TaskFieldTags is a TaskField enum value + TaskFieldTags = "TAGS" +) + +const ( + // TaskStopCodeTaskFailedToStart is a TaskStopCode enum value + TaskStopCodeTaskFailedToStart = "TaskFailedToStart" + + // TaskStopCodeEssentialContainerExited is a TaskStopCode enum value + TaskStopCodeEssentialContainerExited = "EssentialContainerExited" + + // TaskStopCodeUserInitiated is a TaskStopCode enum value + TaskStopCodeUserInitiated = "UserInitiated" +) + const ( // TransportProtocolTcp is a TransportProtocol enum value TransportProtocolTcp = "tcp" diff --git a/vendor/github.com/aws/aws-sdk-go/service/ecs/errors.go b/vendor/github.com/aws/aws-sdk-go/service/ecs/errors.go index 619efc778fe..f2d458fc51b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ecs/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ecs/errors.go @@ -21,8 +21,8 @@ const ( // ErrCodeBlockedException for service response error code // "BlockedException". // - // Your AWS account has been blocked. Contact AWS Support (http://aws.amazon.com/contact-us/) - // for more information. + // Your AWS account has been blocked. For more information, Contact AWS Support + // (http://aws.amazon.com/contact-us/). ErrCodeBlockedException = "BlockedException" // ErrCodeClientException for service response error code @@ -59,7 +59,7 @@ const ( // "ClusterNotFoundException". // // The specified cluster could not be found. You can view your available clusters - // with ListClusters. Amazon ECS clusters are region-specific. + // with ListClusters. Amazon ECS clusters are Region-specific. ErrCodeClusterNotFoundException = "ClusterNotFoundException" // ErrCodeInvalidParameterException for service response error code @@ -89,7 +89,7 @@ const ( // ErrCodePlatformTaskDefinitionIncompatibilityException for service response error code // "PlatformTaskDefinitionIncompatibilityException". // - // The specified platform version does not satisfy the task definition’s required + // The specified platform version does not satisfy the task definition's required // capabilities. ErrCodePlatformTaskDefinitionIncompatibilityException = "PlatformTaskDefinitionIncompatibilityException" @@ -99,6 +99,12 @@ const ( // The specified platform version does not exist. ErrCodePlatformUnknownException = "PlatformUnknownException" + // ErrCodeResourceNotFoundException for service response error code + // "ResourceNotFoundException". + // + // The specified resource could not be found. + ErrCodeResourceNotFoundException = "ResourceNotFoundException" + // ErrCodeServerException for service response error code // "ServerException". // @@ -116,7 +122,7 @@ const ( // "ServiceNotFoundException". // // The specified service could not be found. You can view your available services - // with ListServices. Amazon ECS services are cluster-specific and region-specific. + // with ListServices. Amazon ECS services are cluster-specific and Region-specific. ErrCodeServiceNotFoundException = "ServiceNotFoundException" // ErrCodeTargetNotFoundException for service response error code @@ -124,13 +130,13 @@ const ( // // The specified target could not be found. You can view your available container // instances with ListContainerInstances. Amazon ECS container instances are - // cluster-specific and region-specific. + // cluster-specific and Region-specific. 
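The `SettingName` values added above (`serviceLongArnFormat`, `taskLongArnFormat`, `containerInstanceLongArnFormat`) pair with the account-setting operations introduced in this SDK version. The sketch below assumes a `PutAccountSetting` client method with `Name`/`Value` fields and a lowercase `enabled` value string; none of that is shown in this hunk, so treat it as illustrative only.

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
)

// optInToLongTaskARNs opts the calling identity into the new long ARN
// format for tasks. PutAccountSetting and the "enabled" value string are
// assumptions based on the Setting type and SettingName enum added here.
func optInToLongTaskARNs(svc *ecs.ECS) error {
	out, err := svc.PutAccountSetting(&ecs.PutAccountSettingInput{
		Name:  aws.String(ecs.SettingNameTaskLongArnFormat),
		Value: aws.String("enabled"),
	})
	if err != nil {
		return err
	}
	fmt.Println(aws.StringValue(out.Setting.Name), "=", aws.StringValue(out.Setting.Value))
	return nil
}
```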
ErrCodeTargetNotFoundException = "TargetNotFoundException" // ErrCodeUnsupportedFeatureException for service response error code // "UnsupportedFeatureException". // - // The specified task is not supported in this region. + // The specified task is not supported in this Region. ErrCodeUnsupportedFeatureException = "UnsupportedFeatureException" // ErrCodeUpdateInProgressException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/ecs/service.go b/vendor/github.com/aws/aws-sdk-go/service/ecs/service.go index 6082b928218..c268614ecb9 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ecs/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ecs/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "ecs" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "ecs" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "ECS" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ECS client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/efs/api.go b/vendor/github.com/aws/aws-sdk-go/service/efs/api.go index 413b0b2c675..f24886fa39b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/efs/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/efs/api.go @@ -17,8 +17,8 @@ const opCreateFileSystem = "CreateFileSystem" // CreateFileSystemRequest generates a "aws/request.Request" representing the // client's request for the CreateFileSystem operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -124,8 +124,19 @@ func (c *EFS) CreateFileSystemRequest(input *CreateFileSystemInput) (req *reques // the creation token you provided. // // * ErrCodeFileSystemLimitExceeded "FileSystemLimitExceeded" -// Returned if the AWS account has already created maximum number of file systems -// allowed per account. +// Returned if the AWS account has already created the maximum number of file +// systems allowed per account. +// +// * ErrCodeInsufficientThroughputCapacity "InsufficientThroughputCapacity" +// Returned if there's not enough capacity to provision additional throughput. +// This value might be returned when you try to create a file system in provisioned +// throughput mode, when you attempt to increase the provisioned throughput +// of an existing file system, or when you attempt to change an existing file +// system from bursting to provisioned throughput mode. +// +// * ErrCodeThroughputLimitExceeded "ThroughputLimitExceeded" +// Returned if the throughput mode or amount of provisioned throughput can't +// be changed because the throughput limit of 1024 MiB/s has been reached. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/CreateFileSystem func (c *EFS) CreateFileSystem(input *CreateFileSystemInput) (*FileSystemDescription, error) { @@ -153,8 +164,8 @@ const opCreateMountTarget = "CreateMountTarget" // CreateMountTargetRequest generates a "aws/request.Request" representing the // client's request for the CreateMountTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -305,11 +316,11 @@ func (c *EFS) CreateMountTargetRequest(input *CreateMountTargetInput) (req *requ // Returned if an error occurred on the server side. // // * ErrCodeFileSystemNotFound "FileSystemNotFound" -// Returned if the specified FileSystemId does not exist in the requester's +// Returned if the specified FileSystemId value doesn't exist in the requester's // AWS account. // // * ErrCodeIncorrectFileSystemLifeCycleState "IncorrectFileSystemLifeCycleState" -// Returned if the file system's life cycle state is not "created". +// Returned if the file system's lifecycle state is not "available". // // * ErrCodeMountTargetConflict "MountTargetConflict" // Returned if the mount target would violate one of the specified restrictions @@ -327,18 +338,19 @@ func (c *EFS) CreateMountTargetRequest(input *CreateMountTargetInput) (req *requ // the subnet. // // * ErrCodeNetworkInterfaceLimitExceeded "NetworkInterfaceLimitExceeded" -// The calling account has reached the ENI limit for the specific AWS region. -// Client should try to delete some ENIs or get its account limit raised. For -// more information, see Amazon VPC Limits (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html) -// in the Amazon Virtual Private Cloud User Guide (see the Network interfaces -// per VPC entry in the table). +// The calling account has reached the limit for elastic network interfaces +// for the specific AWS Region. The client should try to delete some elastic +// network interfaces or get the account limit raised. For more information, +// see Amazon VPC Limits (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html) +// in the Amazon VPC User Guide (see the Network interfaces per VPC entry in +// the table). // // * ErrCodeSecurityGroupLimitExceeded "SecurityGroupLimitExceeded" // Returned if the size of SecurityGroups specified in the request is greater // than five. // // * ErrCodeSecurityGroupNotFound "SecurityGroupNotFound" -// Returned if one of the specified security groups does not exist in the subnet's +// Returned if one of the specified security groups doesn't exist in the subnet's // VPC. // // * ErrCodeUnsupportedAvailabilityZone "UnsupportedAvailabilityZone" @@ -369,8 +381,8 @@ const opCreateTags = "CreateTags" // CreateTagsRequest generates a "aws/request.Request" representing the // client's request for the CreateTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -435,7 +447,7 @@ func (c *EFS) CreateTagsRequest(input *CreateTagsInput) (req *request.Request, o // Returned if an error occurred on the server side. // // * ErrCodeFileSystemNotFound "FileSystemNotFound" -// Returned if the specified FileSystemId does not exist in the requester's +// Returned if the specified FileSystemId value doesn't exist in the requester's // AWS account. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/CreateTags @@ -464,8 +476,8 @@ const opDeleteFileSystem = "DeleteFileSystem" // DeleteFileSystemRequest generates a "aws/request.Request" representing the // client's request for the DeleteFileSystem operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -539,7 +551,7 @@ func (c *EFS) DeleteFileSystemRequest(input *DeleteFileSystemInput) (req *reques // Returned if an error occurred on the server side. // // * ErrCodeFileSystemNotFound "FileSystemNotFound" -// Returned if the specified FileSystemId does not exist in the requester's +// Returned if the specified FileSystemId value doesn't exist in the requester's // AWS account. // // * ErrCodeFileSystemInUse "FileSystemInUse" @@ -571,8 +583,8 @@ const opDeleteMountTarget = "DeleteMountTarget" // DeleteMountTargetRequest generates a "aws/request.Request" representing the // client's request for the DeleteMountTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -688,8 +700,8 @@ const opDeleteTags = "DeleteTags" // DeleteTagsRequest generates a "aws/request.Request" representing the // client's request for the DeleteTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -755,7 +767,7 @@ func (c *EFS) DeleteTagsRequest(input *DeleteTagsInput) (req *request.Request, o // Returned if an error occurred on the server side. // // * ErrCodeFileSystemNotFound "FileSystemNotFound" -// Returned if the specified FileSystemId does not exist in the requester's +// Returned if the specified FileSystemId value doesn't exist in the requester's // AWS account. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/DeleteTags @@ -784,8 +796,8 @@ const opDescribeFileSystems = "DescribeFileSystems" // DescribeFileSystemsRequest generates a "aws/request.Request" representing the // client's request for the DescribeFileSystems operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -867,7 +879,7 @@ func (c *EFS) DescribeFileSystemsRequest(input *DescribeFileSystemsInput) (req * // Returned if an error occurred on the server side. // // * ErrCodeFileSystemNotFound "FileSystemNotFound" -// Returned if the specified FileSystemId does not exist in the requester's +// Returned if the specified FileSystemId value doesn't exist in the requester's // AWS account. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/DescribeFileSystems @@ -896,8 +908,8 @@ const opDescribeMountTargetSecurityGroups = "DescribeMountTargetSecurityGroups" // DescribeMountTargetSecurityGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeMountTargetSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -996,8 +1008,8 @@ const opDescribeMountTargets = "DescribeMountTargets" // DescribeMountTargetsRequest generates a "aws/request.Request" representing the // client's request for the DescribeMountTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1060,7 +1072,7 @@ func (c *EFS) DescribeMountTargetsRequest(input *DescribeMountTargetsInput) (req // Returned if an error occurred on the server side. // // * ErrCodeFileSystemNotFound "FileSystemNotFound" -// Returned if the specified FileSystemId does not exist in the requester's +// Returned if the specified FileSystemId value doesn't exist in the requester's // AWS account. // // * ErrCodeMountTargetNotFound "MountTargetNotFound" @@ -1093,8 +1105,8 @@ const opDescribeTags = "DescribeTags" // DescribeTagsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1156,7 +1168,7 @@ func (c *EFS) DescribeTagsRequest(input *DescribeTagsInput) (req *request.Reques // Returned if an error occurred on the server side. // // * ErrCodeFileSystemNotFound "FileSystemNotFound" -// Returned if the specified FileSystemId does not exist in the requester's +// Returned if the specified FileSystemId value doesn't exist in the requester's // AWS account. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/DescribeTags @@ -1185,8 +1197,8 @@ const opModifyMountTargetSecurityGroups = "ModifyMountTargetSecurityGroups" // ModifyMountTargetSecurityGroupsRequest generates a "aws/request.Request" representing the // client's request for the ModifyMountTargetSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1271,7 +1283,7 @@ func (c *EFS) ModifyMountTargetSecurityGroupsRequest(input *ModifyMountTargetSec // than five. // // * ErrCodeSecurityGroupNotFound "SecurityGroupNotFound" -// Returned if one of the specified security groups does not exist in the subnet's +// Returned if one of the specified security groups doesn't exist in the subnet's // VPC. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/ModifyMountTargetSecurityGroups @@ -1296,6 +1308,112 @@ func (c *EFS) ModifyMountTargetSecurityGroupsWithContext(ctx aws.Context, input return out, req.Send() } +const opUpdateFileSystem = "UpdateFileSystem" + +// UpdateFileSystemRequest generates a "aws/request.Request" representing the +// client's request for the UpdateFileSystem operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateFileSystem for more information on using the UpdateFileSystem +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateFileSystemRequest method. +// req, resp := client.UpdateFileSystemRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/UpdateFileSystem +func (c *EFS) UpdateFileSystemRequest(input *UpdateFileSystemInput) (req *request.Request, output *UpdateFileSystemOutput) { + op := &request.Operation{ + Name: opUpdateFileSystem, + HTTPMethod: "PUT", + HTTPPath: "/2015-02-01/file-systems/{FileSystemId}", + } + + if input == nil { + input = &UpdateFileSystemInput{} + } + + output = &UpdateFileSystemOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateFileSystem API operation for Amazon Elastic File System. +// +// Updates the throughput mode or the amount of provisioned throughput of an +// existing file system. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic File System's +// API operation UpdateFileSystem for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequest "BadRequest" +// Returned if the request is malformed or contains an error such as an invalid +// parameter value or a missing required parameter. +// +// * ErrCodeFileSystemNotFound "FileSystemNotFound" +// Returned if the specified FileSystemId value doesn't exist in the requester's +// AWS account. +// +// * ErrCodeIncorrectFileSystemLifeCycleState "IncorrectFileSystemLifeCycleState" +// Returned if the file system's lifecycle state is not "available". +// +// * ErrCodeInsufficientThroughputCapacity "InsufficientThroughputCapacity" +// Returned if there's not enough capacity to provision additional throughput. +// This value might be returned when you try to create a file system in provisioned +// throughput mode, when you attempt to increase the provisioned throughput +// of an existing file system, or when you attempt to change an existing file +// system from bursting to provisioned throughput mode. +// +// * ErrCodeInternalServerError "InternalServerError" +// Returned if an error occurred on the server side. +// +// * ErrCodeThroughputLimitExceeded "ThroughputLimitExceeded" +// Returned if the throughput mode or amount of provisioned throughput can't +// be changed because the throughput limit of 1024 MiB/s has been reached. +// +// * ErrCodeTooManyRequests "TooManyRequests" +// Returned if you don’t wait at least 24 hours before changing the throughput +// mode, or decreasing the Provisioned Throughput value. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/UpdateFileSystem +func (c *EFS) UpdateFileSystem(input *UpdateFileSystemInput) (*UpdateFileSystemOutput, error) { + req, out := c.UpdateFileSystemRequest(input) + return out, req.Send() +} + +// UpdateFileSystemWithContext is the same as UpdateFileSystem with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateFileSystem for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EFS) UpdateFileSystemWithContext(ctx aws.Context, input *UpdateFileSystemInput, opts ...request.Option) (*UpdateFileSystemOutput, error) { + req, out := c.UpdateFileSystemRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + type CreateFileSystemInput struct { _ struct{} `type:"structure"` @@ -1305,30 +1423,29 @@ type CreateFileSystemInput struct { // CreationToken is a required field CreationToken *string `min:"1" type:"string" required:"true"` - // A boolean value that, if true, creates an encrypted file system. When creating + // A Boolean value that, if true, creates an encrypted file system. When creating // an encrypted file system, you have the option of specifying a CreateFileSystemRequest$KmsKeyId // for an existing AWS Key Management Service (AWS KMS) customer master key // (CMK). If you don't specify a CMK, then the default CMK for Amazon EFS, /aws/elasticfilesystem, // is used to protect the encrypted file system. Encrypted *bool `type:"boolean"` - // The id of the AWS KMS CMK that will be used to protect the encrypted file - // system. This parameter is only required if you want to use a non-default - // CMK. If this parameter is not specified, the default CMK for Amazon EFS is - // used. 
This id can be in one of the following formats: + // The ID of the AWS KMS CMK to be used to protect the encrypted file system. + // This parameter is only required if you want to use a non-default CMK. If + // this parameter is not specified, the default CMK for Amazon EFS is used. + // This ID can be in one of the following formats: // - // * Key ID - A unique identifier of the key. For example, 1234abcd-12ab-34cd-56ef-1234567890ab. + // * Key ID - A unique identifier of the key, for example, 1234abcd-12ab-34cd-56ef-1234567890ab. // - // * ARN - An Amazon Resource Name for the key. For example, arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab. + // * ARN - An Amazon Resource Name (ARN) for the key, for example, arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab. // // * Key alias - A previously created display name for a key. For example, // alias/projectKey1. // - // * Key alias ARN - An Amazon Resource Name for a key alias. For example, - // arn:aws:kms:us-west-2:444455556666:alias/projectKey1. + // * Key alias ARN - An ARN for a key alias, for example, arn:aws:kms:us-west-2:444455556666:alias/projectKey1. // - // Note that if the KmsKeyId is specified, the CreateFileSystemRequest$Encrypted - // parameter must be set to true. + // If KmsKeyId is specified, the CreateFileSystemRequest$Encrypted parameter + // must be set to true. KmsKeyId *string `min:"1" type:"string"` // The PerformanceMode of the file system. We recommend generalPurpose performance @@ -1337,6 +1454,20 @@ type CreateFileSystemInput struct { // with a tradeoff of slightly higher latencies for most file operations. This // can't be changed after the file system has been created. PerformanceMode *string `type:"string" enum:"PerformanceMode"` + + // The throughput, measured in MiB/s, that you want to provision for a file + // system that you're creating. The limit on throughput is 1024 MiB/s. You can + // get these limits increased by contacting AWS Support. For more information, + // see Amazon EFS Limits That You Can Increase (http://docs.aws.amazon.com/efs/latest/ug/limits.html#soft-limits) + // in the Amazon EFS User Guide. + ProvisionedThroughputInMibps *float64 `type:"double"` + + // The throughput mode for the file system to be created. There are two throughput + // modes to choose from for your file system: bursting and provisioned. You + // can decrease your file system's throughput in Provisioned Throughput mode + // or change between the throughput modes as long as it’s been more than 24 + // hours since the last decrease or throughput mode change. + ThroughputMode *string `type:"string" enum:"ThroughputMode"` } // String returns the string representation @@ -1392,6 +1523,18 @@ func (s *CreateFileSystemInput) SetPerformanceMode(v string) *CreateFileSystemIn return s } +// SetProvisionedThroughputInMibps sets the ProvisionedThroughputInMibps field's value. +func (s *CreateFileSystemInput) SetProvisionedThroughputInMibps(v float64) *CreateFileSystemInput { + s.ProvisionedThroughputInMibps = &v + return s +} + +// SetThroughputMode sets the ThroughputMode field's value. +func (s *CreateFileSystemInput) SetThroughputMode(v string) *CreateFileSystemInput { + s.ThroughputMode = &v + return s +} + type CreateMountTargetInput struct { _ struct{} `type:"structure"` @@ -2117,14 +2260,14 @@ type FileSystemDescription struct { // Time that the file system was created, in seconds (since 1970-01-01T00:00:00Z). 
// // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + CreationTime *time.Time `type:"timestamp" required:"true"` // Opaque string specified in the request. // // CreationToken is a required field CreationToken *string `min:"1" type:"string" required:"true"` - // A boolean value that, if true, indicates that the file system is encrypted. + // A Boolean value that, if true, indicates that the file system is encrypted. Encrypted *bool `type:"boolean"` // ID of the file system, assigned by Amazon EFS. @@ -2132,7 +2275,7 @@ type FileSystemDescription struct { // FileSystemId is a required field FileSystemId *string `type:"string" required:"true"` - // The id of an AWS Key Management Service (AWS KMS) customer master key (CMK) + // The ID of an AWS Key Management Service (AWS KMS) customer master key (CMK) // that was used to protect the encrypted file system. KmsKeyId *string `min:"1" type:"string"` @@ -2163,18 +2306,32 @@ type FileSystemDescription struct { // PerformanceMode is a required field PerformanceMode *string `type:"string" required:"true" enum:"PerformanceMode"` + // The throughput, measured in MiB/s, that you want to provision for a file + // system. The limit on throughput is 1024 MiB/s. You can get these limits increased + // by contacting AWS Support. For more information, see Amazon EFS Limits That + // You Can Increase (http://docs.aws.amazon.com/efs/latest/ug/limits.html#soft-limits) + // in the Amazon EFS User Guide. + ProvisionedThroughputInMibps *float64 `type:"double"` + // Latest known metered size (in bytes) of data stored in the file system, in - // bytes, in its Value field, and the time at which that size was determined - // in its Timestamp field. The Timestamp value is the integer number of seconds - // since 1970-01-01T00:00:00Z. Note that the value does not represent the size - // of a consistent snapshot of the file system, but it is eventually consistent - // when there are no writes to the file system. That is, the value will represent - // actual size only if the file system is not modified for a period longer than - // a couple of hours. Otherwise, the value is not the exact size the file system - // was at any instant in time. + // its Value field, and the time at which that size was determined in its Timestamp + // field. The Timestamp value is the integer number of seconds since 1970-01-01T00:00:00Z. + // The SizeInBytes value doesn't represent the size of a consistent snapshot + // of the file system, but it is eventually consistent when there are no writes + // to the file system. That is, SizeInBytes represents actual size only if the + // file system is not modified for a period longer than a couple of hours. Otherwise, + // the value is not the exact size that the file system was at any point in + // time. // // SizeInBytes is a required field SizeInBytes *FileSystemSize `type:"structure" required:"true"` + + // The throughput mode for a file system. There are two throughput modes to + // choose from for your file system: bursting and provisioned. You can decrease + // your file system's throughput in Provisioned Throughput mode or change between + // the throughput modes as long as it’s been more than 24 hours since the last + // decrease or throughput mode change. 
+ ThroughputMode *string `type:"string" enum:"ThroughputMode"` } // String returns the string representation @@ -2247,12 +2404,24 @@ func (s *FileSystemDescription) SetPerformanceMode(v string) *FileSystemDescript return s } +// SetProvisionedThroughputInMibps sets the ProvisionedThroughputInMibps field's value. +func (s *FileSystemDescription) SetProvisionedThroughputInMibps(v float64) *FileSystemDescription { + s.ProvisionedThroughputInMibps = &v + return s +} + // SetSizeInBytes sets the SizeInBytes field's value. func (s *FileSystemDescription) SetSizeInBytes(v *FileSystemSize) *FileSystemDescription { s.SizeInBytes = v return s } +// SetThroughputMode sets the ThroughputMode field's value. +func (s *FileSystemDescription) SetThroughputMode(v string) *FileSystemDescription { + s.ThroughputMode = &v + return s +} + // Latest known metered size (in bytes) of data stored in the file system, in // its Value field, and the time at which that size was determined in its Timestamp // field. Note that the value does not represent the size of a consistent snapshot @@ -2266,7 +2435,7 @@ type FileSystemSize struct { // Time at which the size of data, returned in the Value field, was determined. // The value is the integer number of seconds since 1970-01-01T00:00:00Z. - Timestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + Timestamp *time.Time `type:"timestamp"` // Latest known metered size (in bytes) of data stored in the file system. // @@ -2501,6 +2670,235 @@ func (s *Tag) SetValue(v string) *Tag { return s } +type UpdateFileSystemInput struct { + _ struct{} `type:"structure"` + + // The ID of the file system that you want to update. + // + // FileSystemId is a required field + FileSystemId *string `location:"uri" locationName:"FileSystemId" type:"string" required:"true"` + + // (Optional) The amount of throughput, in MiB/s, that you want to provision + // for your file system. If you're not updating the amount of provisioned throughput + // for your file system, you don't need to provide this value in your request. + ProvisionedThroughputInMibps *float64 `type:"double"` + + // (Optional) The throughput mode that you want your file system to use. If + // you're not updating your throughput mode, you don't need to provide this + // value in your request. + ThroughputMode *string `type:"string" enum:"ThroughputMode"` +} + +// String returns the string representation +func (s UpdateFileSystemInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateFileSystemInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateFileSystemInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateFileSystemInput"} + if s.FileSystemId == nil { + invalidParams.Add(request.NewErrParamRequired("FileSystemId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFileSystemId sets the FileSystemId field's value. +func (s *UpdateFileSystemInput) SetFileSystemId(v string) *UpdateFileSystemInput { + s.FileSystemId = &v + return s +} + +// SetProvisionedThroughputInMibps sets the ProvisionedThroughputInMibps field's value. +func (s *UpdateFileSystemInput) SetProvisionedThroughputInMibps(v float64) *UpdateFileSystemInput { + s.ProvisionedThroughputInMibps = &v + return s +} + +// SetThroughputMode sets the ThroughputMode field's value. 
+func (s *UpdateFileSystemInput) SetThroughputMode(v string) *UpdateFileSystemInput { + s.ThroughputMode = &v + return s +} + +// Description of the file system. +type UpdateFileSystemOutput struct { + _ struct{} `type:"structure"` + + // Time that the file system was created, in seconds (since 1970-01-01T00:00:00Z). + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` + + // Opaque string specified in the request. + // + // CreationToken is a required field + CreationToken *string `min:"1" type:"string" required:"true"` + + // A Boolean value that, if true, indicates that the file system is encrypted. + Encrypted *bool `type:"boolean"` + + // ID of the file system, assigned by Amazon EFS. + // + // FileSystemId is a required field + FileSystemId *string `type:"string" required:"true"` + + // The ID of an AWS Key Management Service (AWS KMS) customer master key (CMK) + // that was used to protect the encrypted file system. + KmsKeyId *string `min:"1" type:"string"` + + // Lifecycle phase of the file system. + // + // LifeCycleState is a required field + LifeCycleState *string `type:"string" required:"true" enum:"LifeCycleState"` + + // You can add tags to a file system, including a Name tag. For more information, + // see CreateTags. If the file system has a Name tag, Amazon EFS returns the + // value in this field. + Name *string `type:"string"` + + // Current number of mount targets that the file system has. For more information, + // see CreateMountTarget. + // + // NumberOfMountTargets is a required field + NumberOfMountTargets *int64 `type:"integer" required:"true"` + + // AWS account that created the file system. If the file system was created + // by an IAM user, the parent account to which the user belongs is the owner. + // + // OwnerId is a required field + OwnerId *string `type:"string" required:"true"` + + // The PerformanceMode of the file system. + // + // PerformanceMode is a required field + PerformanceMode *string `type:"string" required:"true" enum:"PerformanceMode"` + + // The throughput, measured in MiB/s, that you want to provision for a file + // system. The limit on throughput is 1024 MiB/s. You can get these limits increased + // by contacting AWS Support. For more information, see Amazon EFS Limits That + // You Can Increase (http://docs.aws.amazon.com/efs/latest/ug/limits.html#soft-limits) + // in the Amazon EFS User Guide. + ProvisionedThroughputInMibps *float64 `type:"double"` + + // Latest known metered size (in bytes) of data stored in the file system, in + // its Value field, and the time at which that size was determined in its Timestamp + // field. The Timestamp value is the integer number of seconds since 1970-01-01T00:00:00Z. + // The SizeInBytes value doesn't represent the size of a consistent snapshot + // of the file system, but it is eventually consistent when there are no writes + // to the file system. That is, SizeInBytes represents actual size only if the + // file system is not modified for a period longer than a couple of hours. Otherwise, + // the value is not the exact size that the file system was at any point in + // time. + // + // SizeInBytes is a required field + SizeInBytes *FileSystemSize `type:"structure" required:"true"` + + // The throughput mode for a file system. There are two throughput modes to + // choose from for your file system: bursting and provisioned. 
You can decrease + // your file system's throughput in Provisioned Throughput mode or change between + // the throughput modes as long as it’s been more than 24 hours since the last + // decrease or throughput mode change. + ThroughputMode *string `type:"string" enum:"ThroughputMode"` +} + +// String returns the string representation +func (s UpdateFileSystemOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateFileSystemOutput) GoString() string { + return s.String() +} + +// SetCreationTime sets the CreationTime field's value. +func (s *UpdateFileSystemOutput) SetCreationTime(v time.Time) *UpdateFileSystemOutput { + s.CreationTime = &v + return s +} + +// SetCreationToken sets the CreationToken field's value. +func (s *UpdateFileSystemOutput) SetCreationToken(v string) *UpdateFileSystemOutput { + s.CreationToken = &v + return s +} + +// SetEncrypted sets the Encrypted field's value. +func (s *UpdateFileSystemOutput) SetEncrypted(v bool) *UpdateFileSystemOutput { + s.Encrypted = &v + return s +} + +// SetFileSystemId sets the FileSystemId field's value. +func (s *UpdateFileSystemOutput) SetFileSystemId(v string) *UpdateFileSystemOutput { + s.FileSystemId = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *UpdateFileSystemOutput) SetKmsKeyId(v string) *UpdateFileSystemOutput { + s.KmsKeyId = &v + return s +} + +// SetLifeCycleState sets the LifeCycleState field's value. +func (s *UpdateFileSystemOutput) SetLifeCycleState(v string) *UpdateFileSystemOutput { + s.LifeCycleState = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateFileSystemOutput) SetName(v string) *UpdateFileSystemOutput { + s.Name = &v + return s +} + +// SetNumberOfMountTargets sets the NumberOfMountTargets field's value. +func (s *UpdateFileSystemOutput) SetNumberOfMountTargets(v int64) *UpdateFileSystemOutput { + s.NumberOfMountTargets = &v + return s +} + +// SetOwnerId sets the OwnerId field's value. +func (s *UpdateFileSystemOutput) SetOwnerId(v string) *UpdateFileSystemOutput { + s.OwnerId = &v + return s +} + +// SetPerformanceMode sets the PerformanceMode field's value. +func (s *UpdateFileSystemOutput) SetPerformanceMode(v string) *UpdateFileSystemOutput { + s.PerformanceMode = &v + return s +} + +// SetProvisionedThroughputInMibps sets the ProvisionedThroughputInMibps field's value. +func (s *UpdateFileSystemOutput) SetProvisionedThroughputInMibps(v float64) *UpdateFileSystemOutput { + s.ProvisionedThroughputInMibps = &v + return s +} + +// SetSizeInBytes sets the SizeInBytes field's value. +func (s *UpdateFileSystemOutput) SetSizeInBytes(v *FileSystemSize) *UpdateFileSystemOutput { + s.SizeInBytes = v + return s +} + +// SetThroughputMode sets the ThroughputMode field's value. 
+func (s *UpdateFileSystemOutput) SetThroughputMode(v string) *UpdateFileSystemOutput { + s.ThroughputMode = &v + return s +} + const ( // LifeCycleStateCreating is a LifeCycleState enum value LifeCycleStateCreating = "creating" @@ -2508,6 +2906,9 @@ const ( // LifeCycleStateAvailable is a LifeCycleState enum value LifeCycleStateAvailable = "available" + // LifeCycleStateUpdating is a LifeCycleState enum value + LifeCycleStateUpdating = "updating" + // LifeCycleStateDeleting is a LifeCycleState enum value LifeCycleStateDeleting = "deleting" @@ -2522,3 +2923,11 @@ const ( // PerformanceModeMaxIo is a PerformanceMode enum value PerformanceModeMaxIo = "maxIO" ) + +const ( + // ThroughputModeBursting is a ThroughputMode enum value + ThroughputModeBursting = "bursting" + + // ThroughputModeProvisioned is a ThroughputMode enum value + ThroughputModeProvisioned = "provisioned" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/efs/errors.go b/vendor/github.com/aws/aws-sdk-go/service/efs/errors.go index 950e4ca5fcc..b616a864c7d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/efs/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/efs/errors.go @@ -34,21 +34,21 @@ const ( // ErrCodeFileSystemLimitExceeded for service response error code // "FileSystemLimitExceeded". // - // Returned if the AWS account has already created maximum number of file systems - // allowed per account. + // Returned if the AWS account has already created the maximum number of file + // systems allowed per account. ErrCodeFileSystemLimitExceeded = "FileSystemLimitExceeded" // ErrCodeFileSystemNotFound for service response error code // "FileSystemNotFound". // - // Returned if the specified FileSystemId does not exist in the requester's + // Returned if the specified FileSystemId value doesn't exist in the requester's // AWS account. ErrCodeFileSystemNotFound = "FileSystemNotFound" // ErrCodeIncorrectFileSystemLifeCycleState for service response error code // "IncorrectFileSystemLifeCycleState". // - // Returned if the file system's life cycle state is not "created". + // Returned if the file system's lifecycle state is not "available". ErrCodeIncorrectFileSystemLifeCycleState = "IncorrectFileSystemLifeCycleState" // ErrCodeIncorrectMountTargetState for service response error code @@ -57,6 +57,16 @@ const ( // Returned if the mount target is not in the correct state for the operation. ErrCodeIncorrectMountTargetState = "IncorrectMountTargetState" + // ErrCodeInsufficientThroughputCapacity for service response error code + // "InsufficientThroughputCapacity". + // + // Returned if there's not enough capacity to provision additional throughput. + // This value might be returned when you try to create a file system in provisioned + // throughput mode, when you attempt to increase the provisioned throughput + // of an existing file system, or when you attempt to change an existing file + // system from bursting to provisioned throughput mode. + ErrCodeInsufficientThroughputCapacity = "InsufficientThroughputCapacity" + // ErrCodeInternalServerError for service response error code // "InternalServerError". // @@ -87,11 +97,12 @@ const ( // ErrCodeNetworkInterfaceLimitExceeded for service response error code // "NetworkInterfaceLimitExceeded". // - // The calling account has reached the ENI limit for the specific AWS region. - // Client should try to delete some ENIs or get its account limit raised. 
For - // more information, see Amazon VPC Limits (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html) - // in the Amazon Virtual Private Cloud User Guide (see the Network interfaces - // per VPC entry in the table). + // The calling account has reached the limit for elastic network interfaces + // for the specific AWS Region. The client should try to delete some elastic + // network interfaces or get the account limit raised. For more information, + // see Amazon VPC Limits (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html) + // in the Amazon VPC User Guide (see the Network interfaces per VPC entry in + // the table). ErrCodeNetworkInterfaceLimitExceeded = "NetworkInterfaceLimitExceeded" // ErrCodeNoFreeAddressesInSubnet for service response error code @@ -111,7 +122,7 @@ const ( // ErrCodeSecurityGroupNotFound for service response error code // "SecurityGroupNotFound". // - // Returned if one of the specified security groups does not exist in the subnet's + // Returned if one of the specified security groups doesn't exist in the subnet's // VPC. ErrCodeSecurityGroupNotFound = "SecurityGroupNotFound" @@ -121,6 +132,20 @@ const ( // Returned if there is no subnet with ID SubnetId provided in the request. ErrCodeSubnetNotFound = "SubnetNotFound" + // ErrCodeThroughputLimitExceeded for service response error code + // "ThroughputLimitExceeded". + // + // Returned if the throughput mode or amount of provisioned throughput can't + // be changed because the throughput limit of 1024 MiB/s has been reached. + ErrCodeThroughputLimitExceeded = "ThroughputLimitExceeded" + + // ErrCodeTooManyRequests for service response error code + // "TooManyRequests". + // + // Returned if you don’t wait at least 24 hours before changing the throughput + // mode, or decreasing the Provisioned Throughput value. + ErrCodeTooManyRequests = "TooManyRequests" + // ErrCodeUnsupportedAvailabilityZone for service response error code // "UnsupportedAvailabilityZone". ErrCodeUnsupportedAvailabilityZone = "UnsupportedAvailabilityZone" diff --git a/vendor/github.com/aws/aws-sdk-go/service/efs/service.go b/vendor/github.com/aws/aws-sdk-go/service/efs/service.go index 4d6bb2cdcef..6b1a11c900a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/efs/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/efs/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "elasticfilesystem" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "elasticfilesystem" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "EFS" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the EFS client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/eks/api.go b/vendor/github.com/aws/aws-sdk-go/service/eks/api.go new file mode 100644 index 00000000000..a4fd7cd4935 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/eks/api.go @@ -0,0 +1,1025 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package eks + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opCreateCluster = "CreateCluster" + +// CreateClusterRequest generates a "aws/request.Request" representing the +// client's request for the CreateCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCluster for more information on using the CreateCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateClusterRequest method. +// req, resp := client.CreateClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01/CreateCluster +func (c *EKS) CreateClusterRequest(input *CreateClusterInput) (req *request.Request, output *CreateClusterOutput) { + op := &request.Operation{ + Name: opCreateCluster, + HTTPMethod: "POST", + HTTPPath: "/clusters", + } + + if input == nil { + input = &CreateClusterInput{} + } + + output = &CreateClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCluster API operation for Amazon Elastic Container Service for Kubernetes. +// +// Creates an Amazon EKS control plane. +// +// The Amazon EKS control plane consists of control plane instances that run +// the Kubernetes software, like etcd and the API server. The control plane +// runs in an account managed by AWS, and the Kubernetes API is exposed via +// the Amazon EKS API server endpoint. +// +// Amazon EKS worker nodes run in your AWS account and connect to your cluster's +// control plane via the Kubernetes API server endpoint and a certificate file +// that is created for your cluster. +// +// The cluster control plane is provisioned across multiple Availability Zones +// and fronted by an Elastic Load Balancing Network Load Balancer. Amazon EKS +// also provisions elastic network interfaces in your VPC subnets to provide +// connectivity from the control plane instances to the worker nodes (for example, +// to support kubectl exec, logs, and proxy data flows). +// +// After you create an Amazon EKS cluster, you must configure your Kubernetes +// tooling to communicate with the API server and launch worker nodes into your +// cluster. For more information, see Managing Cluster Authentication (http://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html) +// and Launching Amazon EKS Worker Nodes (http://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html)in +// the Amazon EKS User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Container Service for Kubernetes's +// API operation CreateCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUseException "ResourceInUseException" +// The specified resource is in use. 
+// +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// You have encountered a service limit on the specified resource. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is unavailable. Back off and retry the operation. +// +// * ErrCodeUnsupportedAvailabilityZoneException "UnsupportedAvailabilityZoneException" +// At least one of your specified cluster subnets is in an Availability Zone +// that does not support Amazon EKS. The exception output specifies the supported +// Availability Zones for your account, from which you can choose subnets for +// your cluster. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01/CreateCluster +func (c *EKS) CreateCluster(input *CreateClusterInput) (*CreateClusterOutput, error) { + req, out := c.CreateClusterRequest(input) + return out, req.Send() +} + +// CreateClusterWithContext is the same as CreateCluster with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EKS) CreateClusterWithContext(ctx aws.Context, input *CreateClusterInput, opts ...request.Option) (*CreateClusterOutput, error) { + req, out := c.CreateClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteCluster = "DeleteCluster" + +// DeleteClusterRequest generates a "aws/request.Request" representing the +// client's request for the DeleteCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteCluster for more information on using the DeleteCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteClusterRequest method. 
+// req, resp := client.DeleteClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01/DeleteCluster +func (c *EKS) DeleteClusterRequest(input *DeleteClusterInput) (req *request.Request, output *DeleteClusterOutput) { + op := &request.Operation{ + Name: opDeleteCluster, + HTTPMethod: "DELETE", + HTTPPath: "/clusters/{name}", + } + + if input == nil { + input = &DeleteClusterInput{} + } + + output = &DeleteClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteCluster API operation for Amazon Elastic Container Service for Kubernetes. +// +// Deletes the Amazon EKS cluster control plane. +// +// If you have active services in your cluster that are associated with a load +// balancer, you must delete those services before deleting the cluster so that +// the load balancers are deleted properly. Otherwise, you can have orphaned +// resources in your VPC that prevent you from being able to delete the VPC. +// For more information, see Deleting a Cluster (http://docs.aws.amazon.com/eks/latest/userguide/delete-cluster.html) +// in the Amazon EKS User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Container Service for Kubernetes's +// API operation DeleteCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUseException "ResourceInUseException" +// The specified resource is in use. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. You can view your available clusters +// with ListClusters. Amazon EKS clusters are Region-specific. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is unavailable. Back off and retry the operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01/DeleteCluster +func (c *EKS) DeleteCluster(input *DeleteClusterInput) (*DeleteClusterOutput, error) { + req, out := c.DeleteClusterRequest(input) + return out, req.Send() +} + +// DeleteClusterWithContext is the same as DeleteCluster with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EKS) DeleteClusterWithContext(ctx aws.Context, input *DeleteClusterInput, opts ...request.Option) (*DeleteClusterOutput, error) { + req, out := c.DeleteClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeCluster = "DescribeCluster" + +// DescribeClusterRequest generates a "aws/request.Request" representing the +// client's request for the DescribeCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeCluster for more information on using the DescribeCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeClusterRequest method. +// req, resp := client.DescribeClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01/DescribeCluster +func (c *EKS) DescribeClusterRequest(input *DescribeClusterInput) (req *request.Request, output *DescribeClusterOutput) { + op := &request.Operation{ + Name: opDescribeCluster, + HTTPMethod: "GET", + HTTPPath: "/clusters/{name}", + } + + if input == nil { + input = &DescribeClusterInput{} + } + + output = &DescribeClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeCluster API operation for Amazon Elastic Container Service for Kubernetes. +// +// Returns descriptive information about an Amazon EKS cluster. +// +// The API server endpoint and certificate authority data returned by this operation +// are required for kubelet and kubectl to communicate with your Kubernetes +// API server. For more information, see Create a kubeconfig for Amazon EKS +// (http://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html). +// +// The API server endpoint and certificate authority data are not available +// until the cluster reaches the ACTIVE state. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Container Service for Kubernetes's +// API operation DescribeCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. You can view your available clusters +// with ListClusters. Amazon EKS clusters are Region-specific. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is unavailable. Back off and retry the operation. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01/DescribeCluster +func (c *EKS) DescribeCluster(input *DescribeClusterInput) (*DescribeClusterOutput, error) { + req, out := c.DescribeClusterRequest(input) + return out, req.Send() +} + +// DescribeClusterWithContext is the same as DescribeCluster with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EKS) DescribeClusterWithContext(ctx aws.Context, input *DescribeClusterInput, opts ...request.Option) (*DescribeClusterOutput, error) { + req, out := c.DescribeClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListClusters = "ListClusters" + +// ListClustersRequest generates a "aws/request.Request" representing the +// client's request for the ListClusters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListClusters for more information on using the ListClusters +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListClustersRequest method. +// req, resp := client.ListClustersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01/ListClusters +func (c *EKS) ListClustersRequest(input *ListClustersInput) (req *request.Request, output *ListClustersOutput) { + op := &request.Operation{ + Name: opListClusters, + HTTPMethod: "GET", + HTTPPath: "/clusters", + } + + if input == nil { + input = &ListClustersInput{} + } + + output = &ListClustersOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListClusters API operation for Amazon Elastic Container Service for Kubernetes. +// +// Lists the Amazon EKS clusters in your AWS account in the specified Region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Container Service for Kubernetes's +// API operation ListClusters for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// * ErrCodeClientException "ClientException" +// These errors are usually caused by a client action, such as using an action +// or resource on behalf of a user that doesn't have permissions to use the +// action or resource, or specifying an identifier that is not valid. +// +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. 
+// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is unavailable. Back off and retry the operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01/ListClusters +func (c *EKS) ListClusters(input *ListClustersInput) (*ListClustersOutput, error) { + req, out := c.ListClustersRequest(input) + return out, req.Send() +} + +// ListClustersWithContext is the same as ListClusters with the addition of +// the ability to pass a context and additional request options. +// +// See ListClusters for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EKS) ListClustersWithContext(ctx aws.Context, input *ListClustersInput, opts ...request.Option) (*ListClustersOutput, error) { + req, out := c.ListClustersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// An object representing the certificate-authority-data for your cluster. +type Certificate struct { + _ struct{} `type:"structure"` + + // The base64 encoded certificate data required to communicate with your cluster. + // Add this to the certificate-authority-data section of the kubeconfig file + // for your cluster. + Data *string `locationName:"data" type:"string"` +} + +// String returns the string representation +func (s Certificate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Certificate) GoString() string { + return s.String() +} + +// SetData sets the Data field's value. +func (s *Certificate) SetData(v string) *Certificate { + s.Data = &v + return s +} + +// An object representing an Amazon EKS cluster. +type Cluster struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the cluster. + Arn *string `locationName:"arn" type:"string"` + + // The certificate-authority-data for your cluster. + CertificateAuthority *Certificate `locationName:"certificateAuthority" type:"structure"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. + ClientRequestToken *string `locationName:"clientRequestToken" type:"string"` + + // The Unix epoch time stamp in seconds for when the cluster was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // The endpoint for your Kubernetes API server. + Endpoint *string `locationName:"endpoint" type:"string"` + + // The name of the cluster. + Name *string `locationName:"name" type:"string"` + + // The platform version of your Amazon EKS cluster. For more information, see + // Platform Versions (eks/latest/userguide/platform-versions.html) in the Amazon + // EKS User Guide. + PlatformVersion *string `locationName:"platformVersion" type:"string"` + + // The VPC subnets and security groups used by the cluster control plane. Amazon + // EKS VPC resources have specific requirements to work properly with Kubernetes. + // For more information, see Cluster VPC Considerations (http://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) + // and Cluster Security Group Considerations (http://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) + // in the Amazon EKS User Guide. 
+ ResourcesVpcConfig *VpcConfigResponse `locationName:"resourcesVpcConfig" type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM role that provides permissions + // for the Kubernetes control plane to make calls to AWS API operations on your + // behalf. + RoleArn *string `locationName:"roleArn" type:"string"` + + // The current status of the cluster. + Status *string `locationName:"status" type:"string" enum:"ClusterStatus"` + + // The Kubernetes server version for the cluster. + Version *string `locationName:"version" type:"string"` +} + +// String returns the string representation +func (s Cluster) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Cluster) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *Cluster) SetArn(v string) *Cluster { + s.Arn = &v + return s +} + +// SetCertificateAuthority sets the CertificateAuthority field's value. +func (s *Cluster) SetCertificateAuthority(v *Certificate) *Cluster { + s.CertificateAuthority = v + return s +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *Cluster) SetClientRequestToken(v string) *Cluster { + s.ClientRequestToken = &v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *Cluster) SetCreatedAt(v time.Time) *Cluster { + s.CreatedAt = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *Cluster) SetEndpoint(v string) *Cluster { + s.Endpoint = &v + return s +} + +// SetName sets the Name field's value. +func (s *Cluster) SetName(v string) *Cluster { + s.Name = &v + return s +} + +// SetPlatformVersion sets the PlatformVersion field's value. +func (s *Cluster) SetPlatformVersion(v string) *Cluster { + s.PlatformVersion = &v + return s +} + +// SetResourcesVpcConfig sets the ResourcesVpcConfig field's value. +func (s *Cluster) SetResourcesVpcConfig(v *VpcConfigResponse) *Cluster { + s.ResourcesVpcConfig = v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *Cluster) SetRoleArn(v string) *Cluster { + s.RoleArn = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *Cluster) SetStatus(v string) *Cluster { + s.Status = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *Cluster) SetVersion(v string) *Cluster { + s.Version = &v + return s +} + +type CreateClusterInput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. + ClientRequestToken *string `locationName:"clientRequestToken" type:"string" idempotencyToken:"true"` + + // The unique name to give to your cluster. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` + + // The VPC subnets and security groups used by the cluster control plane. Amazon + // EKS VPC resources have specific requirements to work properly with Kubernetes. + // For more information, see Cluster VPC Considerations (http://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) + // and Cluster Security Group Considerations (http://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) + // in the Amazon EKS User Guide. You must specify at least two subnets. You + // may specify up to 5 security groups, but we recommend that you use a dedicated + // security group for your cluster control plane. 
+ // + // ResourcesVpcConfig is a required field + ResourcesVpcConfig *VpcConfigRequest `locationName:"resourcesVpcConfig" type:"structure" required:"true"` + + // The Amazon Resource Name (ARN) of the IAM role that provides permissions + // for Amazon EKS to make calls to other AWS API operations on your behalf. + // For more information, see Amazon EKS Service IAM Role (http://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html) + // in the Amazon EKS User Guide. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // The desired Kubernetes version for your cluster. If you do not specify a + // value here, the latest version available in Amazon EKS is used. + Version *string `locationName:"version" type:"string"` +} + +// String returns the string representation +func (s CreateClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateClusterInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.ResourcesVpcConfig == nil { + invalidParams.Add(request.NewErrParamRequired("ResourcesVpcConfig")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.ResourcesVpcConfig != nil { + if err := s.ResourcesVpcConfig.Validate(); err != nil { + invalidParams.AddNested("ResourcesVpcConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *CreateClusterInput) SetClientRequestToken(v string) *CreateClusterInput { + s.ClientRequestToken = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateClusterInput) SetName(v string) *CreateClusterInput { + s.Name = &v + return s +} + +// SetResourcesVpcConfig sets the ResourcesVpcConfig field's value. +func (s *CreateClusterInput) SetResourcesVpcConfig(v *VpcConfigRequest) *CreateClusterInput { + s.ResourcesVpcConfig = v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CreateClusterInput) SetRoleArn(v string) *CreateClusterInput { + s.RoleArn = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *CreateClusterInput) SetVersion(v string) *CreateClusterInput { + s.Version = &v + return s +} + +type CreateClusterOutput struct { + _ struct{} `type:"structure"` + + // The full description of your new cluster. + Cluster *Cluster `locationName:"cluster" type:"structure"` +} + +// String returns the string representation +func (s CreateClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateClusterOutput) GoString() string { + return s.String() +} + +// SetCluster sets the Cluster field's value. +func (s *CreateClusterOutput) SetCluster(v *Cluster) *CreateClusterOutput { + s.Cluster = v + return s +} + +type DeleteClusterInput struct { + _ struct{} `type:"structure"` + + // The name of the cluster to delete. 
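A rough sketch of a CreateCluster call using the required fields documented above (Name, RoleArn, and ResourcesVpcConfig with at least two subnets). The role ARN, subnet IDs, and security group ID are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := eks.New(sess)

	// Name, RoleArn, and ResourcesVpcConfig.SubnetIds are the required fields;
	// the ARN, subnet IDs, and security group ID below are placeholders.
	input := &eks.CreateClusterInput{
		Name:    aws.String("example"),
		RoleArn: aws.String("arn:aws:iam::123456789012:role/eks-service-role"),
		ResourcesVpcConfig: &eks.VpcConfigRequest{
			SubnetIds:        aws.StringSlice([]string{"subnet-aaaa1111", "subnet-bbbb2222"}),
			SecurityGroupIds: aws.StringSlice([]string{"sg-cccc3333"}),
		},
	}

	out, err := svc.CreateCluster(input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.Cluster.Status)) // typically CREATING right after the call
}
```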
+ // + // Name is a required field + Name *string `location:"uri" locationName:"name" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteClusterInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *DeleteClusterInput) SetName(v string) *DeleteClusterInput { + s.Name = &v + return s +} + +type DeleteClusterOutput struct { + _ struct{} `type:"structure"` + + // The full description of the cluster to delete. + Cluster *Cluster `locationName:"cluster" type:"structure"` +} + +// String returns the string representation +func (s DeleteClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteClusterOutput) GoString() string { + return s.String() +} + +// SetCluster sets the Cluster field's value. +func (s *DeleteClusterOutput) SetCluster(v *Cluster) *DeleteClusterOutput { + s.Cluster = v + return s +} + +type DescribeClusterInput struct { + _ struct{} `type:"structure"` + + // The name of the cluster to describe. + // + // Name is a required field + Name *string `location:"uri" locationName:"name" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeClusterInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *DescribeClusterInput) SetName(v string) *DescribeClusterInput { + s.Name = &v + return s +} + +type DescribeClusterOutput struct { + _ struct{} `type:"structure"` + + // The full description of your specified cluster. + Cluster *Cluster `locationName:"cluster" type:"structure"` +} + +// String returns the string representation +func (s DescribeClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeClusterOutput) GoString() string { + return s.String() +} + +// SetCluster sets the Cluster field's value. +func (s *DescribeClusterOutput) SetCluster(v *Cluster) *DescribeClusterOutput { + s.Cluster = v + return s +} + +type ListClustersInput struct { + _ struct{} `type:"structure"` + + // The maximum number of cluster results returned by ListClusters in paginated + // output. When this parameter is used, ListClusters only returns maxResults + // results in a single page along with a nextToken response element. The remaining + // results of the initial request can be seen by sending another ListClusters + // request with the returned nextToken value. This value can be between 1 and + // 100. 
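For illustration, a DescribeCluster caller can read the API endpoint and certificate data off the returned Cluster. CertificateAuthority.Data is already base64 encoded, so it can be placed into a kubeconfig's certificate-authority-data field as-is, or decoded to recover the PEM certificate; the cluster name below is a placeholder.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := eks.New(sess)

	out, err := svc.DescribeCluster(&eks.DescribeClusterInput{Name: aws.String("example")})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("endpoint:", aws.StringValue(out.Cluster.Endpoint))

	// The certificate data is only populated once the cluster is available.
	if ca := out.Cluster.CertificateAuthority; ca != nil && ca.Data != nil {
		pem, err := base64.StdEncoding.DecodeString(aws.StringValue(ca.Data))
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("decoded CA certificate: %d bytes\n", len(pem))
	}
}
```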
If this parameter is not used, then ListClusters returns up to 100 results + // and a nextToken value if applicable. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The nextToken value returned from a previous paginated ListClusters request + // where maxResults was used and the results exceeded the value of that parameter. + // Pagination continues from the end of the previous results that returned the + // nextToken value. + // + // This token should be treated as an opaque identifier that is only used to + // retrieve the next items in a list and not for other programmatic purposes. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListClustersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListClustersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListClustersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListClustersInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListClustersInput) SetMaxResults(v int64) *ListClustersInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListClustersInput) SetNextToken(v string) *ListClustersInput { + s.NextToken = &v + return s +} + +type ListClustersOutput struct { + _ struct{} `type:"structure"` + + // A list of all of the clusters for your account in the specified Region. + Clusters []*string `locationName:"clusters" type:"list"` + + // The nextToken value to include in a future ListClusters request. When the + // results of a ListClusters request exceed maxResults, this value can be used + // to retrieve the next page of results. This value is null when there are no + // more results to return. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListClustersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListClustersOutput) GoString() string { + return s.String() +} + +// SetClusters sets the Clusters field's value. +func (s *ListClustersOutput) SetClusters(v []*string) *ListClustersOutput { + s.Clusters = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListClustersOutput) SetNextToken(v string) *ListClustersOutput { + s.NextToken = &v + return s +} + +// An object representing an Amazon EKS cluster VPC configuration request. +type VpcConfigRequest struct { + _ struct{} `type:"structure"` + + // Specify one or more security groups for the cross-account elastic network + // interfaces that Amazon EKS creates to use to allow communication between + // your worker nodes and the Kubernetes control plane. + SecurityGroupIds []*string `locationName:"securityGroupIds" type:"list"` + + // Specify subnets for your Amazon EKS worker nodes. Amazon EKS creates cross-account + // elastic network interfaces in these subnets to allow communication between + // your worker nodes and the Kubernetes control plane. 
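Because no page-iteration helper is generated for ListClusters in this package, callers can walk the nextToken manually. A sketch:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/eks"
)

// listAllClusters follows NextToken until the service reports no further pages.
func listAllClusters(svc *eks.EKS) ([]string, error) {
	var names []string
	input := &eks.ListClustersInput{MaxResults: aws.Int64(100)}
	for {
		out, err := svc.ListClusters(input)
		if err != nil {
			return nil, err
		}
		names = append(names, aws.StringValueSlice(out.Clusters)...)
		// NextToken is nil once the last page has been returned.
		if out.NextToken == nil {
			return names, nil
		}
		input.NextToken = out.NextToken
	}
}
```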
+ // + // SubnetIds is a required field + SubnetIds []*string `locationName:"subnetIds" type:"list" required:"true"` +} + +// String returns the string representation +func (s VpcConfigRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VpcConfigRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *VpcConfigRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "VpcConfigRequest"} + if s.SubnetIds == nil { + invalidParams.Add(request.NewErrParamRequired("SubnetIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *VpcConfigRequest) SetSecurityGroupIds(v []*string) *VpcConfigRequest { + s.SecurityGroupIds = v + return s +} + +// SetSubnetIds sets the SubnetIds field's value. +func (s *VpcConfigRequest) SetSubnetIds(v []*string) *VpcConfigRequest { + s.SubnetIds = v + return s +} + +// An object representing an Amazon EKS cluster VPC configuration response. +type VpcConfigResponse struct { + _ struct{} `type:"structure"` + + // The security groups associated with the cross-account elastic network interfaces + // that are used to allow communication between your worker nodes and the Kubernetes + // control plane. + SecurityGroupIds []*string `locationName:"securityGroupIds" type:"list"` + + // The subnets associated with your cluster. + SubnetIds []*string `locationName:"subnetIds" type:"list"` + + // The VPC associated with your cluster. + VpcId *string `locationName:"vpcId" type:"string"` +} + +// String returns the string representation +func (s VpcConfigResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VpcConfigResponse) GoString() string { + return s.String() +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *VpcConfigResponse) SetSecurityGroupIds(v []*string) *VpcConfigResponse { + s.SecurityGroupIds = v + return s +} + +// SetSubnetIds sets the SubnetIds field's value. +func (s *VpcConfigResponse) SetSubnetIds(v []*string) *VpcConfigResponse { + s.SubnetIds = v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *VpcConfigResponse) SetVpcId(v string) *VpcConfigResponse { + s.VpcId = &v + return s +} + +const ( + // ClusterStatusCreating is a ClusterStatus enum value + ClusterStatusCreating = "CREATING" + + // ClusterStatusActive is a ClusterStatus enum value + ClusterStatusActive = "ACTIVE" + + // ClusterStatusDeleting is a ClusterStatus enum value + ClusterStatusDeleting = "DELETING" + + // ClusterStatusFailed is a ClusterStatus enum value + ClusterStatusFailed = "FAILED" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/eks/doc.go b/vendor/github.com/aws/aws-sdk-go/service/eks/doc.go new file mode 100644 index 00000000000..0f194107613 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/eks/doc.go @@ -0,0 +1,40 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package eks provides the client and types for making API +// requests to Amazon Elastic Container Service for Kubernetes. +// +// Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed +// service that makes it easy for you to run Kubernetes on AWS without needing +// to stand up or maintain your own Kubernetes control plane. 
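The ClusterStatus constants above can be used to branch on a cluster's lifecycle state. A small illustrative helper:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/eks"
)

// statusNote maps a Cluster.Status value onto a short human-readable note
// using the generated ClusterStatus constants.
func statusNote(c *eks.Cluster) string {
	switch aws.StringValue(c.Status) {
	case eks.ClusterStatusCreating:
		return "control plane is still being provisioned"
	case eks.ClusterStatusActive:
		return "cluster is ready to serve the Kubernetes API"
	case eks.ClusterStatusDeleting:
		return "cluster is being torn down"
	case eks.ClusterStatusFailed:
		return "cluster provisioning failed"
	default:
		return "unknown status"
	}
}
```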
Kubernetes is +// an open-source system for automating the deployment, scaling, and management +// of containerized applications. +// +// Amazon EKS runs up-to-date versions of the open-source Kubernetes software, +// so you can use all the existing plugins and tooling from the Kubernetes community. +// Applications running on Amazon EKS are fully compatible with applications +// running on any standard Kubernetes environment, whether running in on-premises +// data centers or public clouds. This means that you can easily migrate any +// standard Kubernetes application to Amazon EKS without any code modification +// required. +// +// See https://docs.aws.amazon.com/goto/WebAPI/eks-2017-11-01 for more information on this service. +// +// See eks package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/eks/ +// +// Using the Client +// +// To contact Amazon Elastic Container Service for Kubernetes with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon Elastic Container Service for Kubernetes client EKS for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/eks/#New +package eks diff --git a/vendor/github.com/aws/aws-sdk-go/service/eks/errors.go b/vendor/github.com/aws/aws-sdk-go/service/eks/errors.go new file mode 100644 index 00000000000..98a2410c563 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/eks/errors.go @@ -0,0 +1,61 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package eks + +const ( + + // ErrCodeClientException for service response error code + // "ClientException". + // + // These errors are usually caused by a client action, such as using an action + // or resource on behalf of a user that doesn't have permissions to use the + // action or resource, or specifying an identifier that is not valid. + ErrCodeClientException = "ClientException" + + // ErrCodeInvalidParameterException for service response error code + // "InvalidParameterException". + // + // The specified parameter is invalid. Review the available parameters for the + // API request. + ErrCodeInvalidParameterException = "InvalidParameterException" + + // ErrCodeResourceInUseException for service response error code + // "ResourceInUseException". + // + // The specified resource is in use. + ErrCodeResourceInUseException = "ResourceInUseException" + + // ErrCodeResourceLimitExceededException for service response error code + // "ResourceLimitExceededException". + // + // You have encountered a service limit on the specified resource. + ErrCodeResourceLimitExceededException = "ResourceLimitExceededException" + + // ErrCodeResourceNotFoundException for service response error code + // "ResourceNotFoundException". + // + // The specified resource could not be found. You can view your available clusters + // with ListClusters. Amazon EKS clusters are Region-specific. + ErrCodeResourceNotFoundException = "ResourceNotFoundException" + + // ErrCodeServerException for service response error code + // "ServerException". 
+ // + // These errors are usually caused by a server-side issue. + ErrCodeServerException = "ServerException" + + // ErrCodeServiceUnavailableException for service response error code + // "ServiceUnavailableException". + // + // The service is unavailable. Back off and retry the operation. + ErrCodeServiceUnavailableException = "ServiceUnavailableException" + + // ErrCodeUnsupportedAvailabilityZoneException for service response error code + // "UnsupportedAvailabilityZoneException". + // + // At least one of your specified cluster subnets is in an Availability Zone + // that does not support Amazon EKS. The exception output specifies the supported + // Availability Zones for your account, from which you can choose subnets for + // your cluster. + ErrCodeUnsupportedAvailabilityZoneException = "UnsupportedAvailabilityZoneException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/eks/service.go b/vendor/github.com/aws/aws-sdk-go/service/eks/service.go new file mode 100644 index 00000000000..e72b2c362e0 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/eks/service.go @@ -0,0 +1,99 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package eks + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/restjson" +) + +// EKS provides the API operation methods for making requests to +// Amazon Elastic Container Service for Kubernetes. See this package's package overview docs +// for details on the service. +// +// EKS methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type EKS struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "eks" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "EKS" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the EKS client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a EKS client from just a session. +// svc := eks.New(mySession) +// +// // Create a EKS client with additional configuration +// svc := eks.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *EKS { + c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "eks" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. 
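Tying the New constructor to the error codes in errors.go, a caller might build a regional client and branch on awserr codes as sketched below; the region and cluster name are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

func main() {
	sess := session.Must(session.NewSession())

	// Extra configuration can be layered on at client construction time.
	svc := eks.New(sess, aws.NewConfig().WithRegion("us-west-2"))

	_, err := svc.DescribeCluster(&eks.DescribeClusterInput{Name: aws.String("example")})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case eks.ErrCodeResourceNotFoundException:
				fmt.Println("no such cluster in this region")
				return
			case eks.ErrCodeServiceUnavailableException:
				fmt.Println("EKS is unavailable; back off and retry")
				return
			}
		}
		log.Fatal(err)
	}
	fmt.Println("cluster exists")
}
```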
+func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *EKS { + svc := &EKS{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2017-11-01", + JSONVersion: "1.1", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(restjson.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(restjson.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(restjson.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(restjson.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a EKS operation and runs any +// custom request initialization. +func (c *EKS) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/eks/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/eks/waiters.go new file mode 100644 index 00000000000..022255cf053 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/eks/waiters.go @@ -0,0 +1,122 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package eks + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// WaitUntilClusterActive uses the Amazon EKS API operation +// DescribeCluster to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *EKS) WaitUntilClusterActive(input *DescribeClusterInput) error { + return c.WaitUntilClusterActiveWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilClusterActiveWithContext is an extended version of WaitUntilClusterActive. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
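A typical use of this waiter is to block until a newly created cluster reaches ACTIVE. The context form also accepts request.WaiterOption values to override the default 40 attempts at 30-second intervals; the cluster name below is a placeholder.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := eks.New(sess)

	describe := &eks.DescribeClusterInput{Name: aws.String("example")}

	// Simple form: polls DescribeCluster up to 40 times, 30 seconds apart.
	if err := svc.WaitUntilClusterActive(describe); err != nil {
		log.Fatal(err)
	}

	// Context form with overridden polling, for callers that want a hard deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
	defer cancel()
	err := svc.WaitUntilClusterActiveWithContext(ctx, describe,
		request.WithWaiterDelay(request.ConstantWaiterDelay(15*time.Second)),
		request.WithWaiterMaxAttempts(80),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("cluster is ACTIVE")
}
```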
+func (c *EKS) WaitUntilClusterActiveWithContext(ctx aws.Context, input *DescribeClusterInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilClusterActive", + MaxAttempts: 40, + Delay: request.ConstantWaiterDelay(30 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.FailureWaiterState, + Matcher: request.PathWaiterMatch, Argument: "cluster.status", + Expected: "DELETING", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathWaiterMatch, Argument: "cluster.status", + Expected: "FAILED", + }, + { + State: request.SuccessWaiterState, + Matcher: request.PathWaiterMatch, Argument: "cluster.status", + Expected: "ACTIVE", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeClusterInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeClusterRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilClusterDeleted uses the Amazon EKS API operation +// DescribeCluster to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *EKS) WaitUntilClusterDeleted(input *DescribeClusterInput) error { + return c.WaitUntilClusterDeletedWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilClusterDeletedWithContext is an extended version of WaitUntilClusterDeleted. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EKS) WaitUntilClusterDeletedWithContext(ctx aws.Context, input *DescribeClusterInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilClusterDeleted", + MaxAttempts: 40, + Delay: request.ConstantWaiterDelay(30 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.FailureWaiterState, + Matcher: request.PathWaiterMatch, Argument: "cluster.status", + Expected: "ACTIVE", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathWaiterMatch, Argument: "cluster.status", + Expected: "CREATING", + }, + { + State: request.SuccessWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ResourceNotFoundException", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeClusterInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeClusterRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) 
+ + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticache/api.go b/vendor/github.com/aws/aws-sdk-go/service/elasticache/api.go index a1ad3a61b36..20d01f6dfce 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elasticache/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elasticache/api.go @@ -3,6 +3,7 @@ package elasticache import ( + "fmt" "time" "github.com/aws/aws-sdk-go/aws" @@ -16,8 +17,8 @@ const opAddTagsToResource = "AddTagsToResource" // AddTagsToResourceRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -65,7 +66,7 @@ func (c *ElastiCache) AddTagsToResourceRequest(input *AddTagsToResourceInput) (r // by your tags. You can apply tags that represent business categories (such // as cost centers, application names, or owners) to organize your costs across // multiple services. For more information, see Using Cost Allocation Tags in -// Amazon ElastiCache (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Tagging.html) +// Amazon ElastiCache (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Tagging.html) // in the ElastiCache User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -116,8 +117,8 @@ const opAuthorizeCacheSecurityGroupIngress = "AuthorizeCacheSecurityGroupIngress // AuthorizeCacheSecurityGroupIngressRequest generates a "aws/request.Request" representing the // client's request for the AuthorizeCacheSecurityGroupIngress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -214,8 +215,8 @@ const opCopySnapshot = "CopySnapshot" // CopySnapshotRequest generates a "aws/request.Request" representing the // client's request for the CopySnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -262,8 +263,8 @@ func (c *ElastiCache) CopySnapshotRequest(input *CopySnapshotInput) (req *reques // create their own Amazon S3 buckets and copy snapshots to it. To control access // to your snapshots, use an IAM policy to control who has the ability to use // the CopySnapshot operation. For more information about using IAM to control -// the use of ElastiCache operations, see Exporting Snapshots (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html) -// and Authentication & Access Control (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/IAM.html). 
+// the use of ElastiCache operations, see Exporting Snapshots (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html) +// and Authentication & Access Control (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/IAM.html). // // You could receive the following error messages. // @@ -272,19 +273,19 @@ func (c *ElastiCache) CopySnapshotRequest(input *CopySnapshotInput) (req *reques // * Error Message: The S3 bucket %s is outside of the region. // // Solution: Create an Amazon S3 bucket in the same region as your snapshot. -// For more information, see Step 1: Create an Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html#Snapshots.Exporting.CreateBucket) +// For more information, see Step 1: Create an Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html#Snapshots.Exporting.CreateBucket) // in the ElastiCache User Guide. // // * Error Message: The S3 bucket %s does not exist. // // Solution: Create an Amazon S3 bucket in the same region as your snapshot. -// For more information, see Step 1: Create an Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html#Snapshots.Exporting.CreateBucket) +// For more information, see Step 1: Create an Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html#Snapshots.Exporting.CreateBucket) // in the ElastiCache User Guide. // // * Error Message: The S3 bucket %s is not owned by the authenticated user. // // Solution: Create an Amazon S3 bucket in the same region as your snapshot. -// For more information, see Step 1: Create an Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html#Snapshots.Exporting.CreateBucket) +// For more information, see Step 1: Create an Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html#Snapshots.Exporting.CreateBucket) // in the ElastiCache User Guide. // // * Error Message: The authenticated user does not have sufficient permissions @@ -303,21 +304,21 @@ func (c *ElastiCache) CopySnapshotRequest(input *CopySnapshotInput) (req *reques // on the S3 Bucket. // // Solution: Add List and Read permissions on the bucket. For more information, -// see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html#Snapshots.Exporting.GrantAccess) +// see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html#Snapshots.Exporting.GrantAccess) // in the ElastiCache User Guide. // // * Error Message: ElastiCache has not been granted WRITE permissions %s // on the S3 Bucket. // // Solution: Add Upload/Delete permissions on the bucket. For more information, -// see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html#Snapshots.Exporting.GrantAccess) +// see Step 2: Grant ElastiCache Access to Your Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html#Snapshots.Exporting.GrantAccess) // in the ElastiCache User Guide. // // * Error Message: ElastiCache has not been granted READ_ACP permissions // %s on the S3 Bucket. // // Solution: Add View Permissions on the bucket. 
For more information, see Step -// 2: Grant ElastiCache Access to Your Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html#Snapshots.Exporting.GrantAccess) +// 2: Grant ElastiCache Access to Your Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html#Snapshots.Exporting.GrantAccess) // in the ElastiCache User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -374,8 +375,8 @@ const opCreateCacheCluster = "CreateCacheCluster" // CreateCacheClusterRequest generates a "aws/request.Request" representing the // client's request for the CreateCacheCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -417,9 +418,7 @@ func (c *ElastiCache) CreateCacheClusterRequest(input *CreateCacheClusterInput) // Creates a cluster. All nodes in the cluster run the same protocol-compliant // cache engine software, either Memcached or Redis. // -// Due to current limitations on Redis (cluster mode disabled), this operation -// or parameter is not supported on Redis (cluster mode enabled) replication -// groups. +// This operation is not supported for Redis (cluster mode enabled) clusters. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -506,8 +505,8 @@ const opCreateCacheParameterGroup = "CreateCacheParameterGroup" // CreateCacheParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateCacheParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -558,7 +557,7 @@ func (c *ElastiCache) CreateCacheParameterGroupRequest(input *CreateCacheParamet // * ModifyCacheParameterGroup (http://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheParameterGroup.html) // in the ElastiCache API Reference. // -// * Parameters and Parameter Groups (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/ParameterGroups.html) +// * Parameters and Parameter Groups (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.html) // in the ElastiCache User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -612,8 +611,8 @@ const opCreateCacheSecurityGroup = "CreateCacheSecurityGroup" // CreateCacheSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateCacheSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -707,8 +706,8 @@ const opCreateCacheSubnetGroup = "CreateCacheSubnetGroup" // CreateCacheSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateCacheSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -801,8 +800,8 @@ const opCreateReplicationGroup = "CreateReplicationGroup" // CreateReplicationGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateReplicationGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -860,7 +859,7 @@ func (c *ElastiCache) CreateReplicationGroupRequest(input *CreateReplicationGrou // group after it has been created. However, if you need to increase or decrease // the number of node groups (console: shards), you can avail yourself of ElastiCache // for Redis' enhanced backup and restore. For more information, see Restoring -// From a Backup with Cluster Resizing (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/backups-restoring.html) +// From a Backup with Cluster Resizing (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups-restoring.html) // in the ElastiCache User Guide. // // This operation is valid for Redis only. @@ -955,8 +954,8 @@ const opCreateSnapshot = "CreateSnapshot" // CreateSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1066,12 +1065,132 @@ func (c *ElastiCache) CreateSnapshotWithContext(ctx aws.Context, input *CreateSn return out, req.Send() } +const opDecreaseReplicaCount = "DecreaseReplicaCount" + +// DecreaseReplicaCountRequest generates a "aws/request.Request" representing the +// client's request for the DecreaseReplicaCount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DecreaseReplicaCount for more information on using the DecreaseReplicaCount +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DecreaseReplicaCountRequest method. 
+// req, resp := client.DecreaseReplicaCountRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/elasticache-2015-02-02/DecreaseReplicaCount +func (c *ElastiCache) DecreaseReplicaCountRequest(input *DecreaseReplicaCountInput) (req *request.Request, output *DecreaseReplicaCountOutput) { + op := &request.Operation{ + Name: opDecreaseReplicaCount, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DecreaseReplicaCountInput{} + } + + output = &DecreaseReplicaCountOutput{} + req = c.newRequest(op, input, output) + return +} + +// DecreaseReplicaCount API operation for Amazon ElastiCache. +// +// Dynamically decreases the number of replics in a Redis (cluster mode disabled) +// replication group or the number of replica nodes in one or more node groups +// (shards) of a Redis (cluster mode enabled) replication group. This operation +// is performed with no cluster down time. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon ElastiCache's +// API operation DecreaseReplicaCount for usage and error information. +// +// Returned Error Codes: +// * ErrCodeReplicationGroupNotFoundFault "ReplicationGroupNotFoundFault" +// The specified replication group does not exist. +// +// * ErrCodeInvalidReplicationGroupStateFault "InvalidReplicationGroupState" +// The requested replication group is not in the available state. +// +// * ErrCodeInvalidCacheClusterStateFault "InvalidCacheClusterState" +// The requested cluster is not in the available state. +// +// * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" +// The VPC network is in an invalid state. +// +// * ErrCodeInsufficientCacheClusterCapacityFault "InsufficientCacheClusterCapacity" +// The requested cache node type is not available in the specified Availability +// Zone. +// +// * ErrCodeClusterQuotaForCustomerExceededFault "ClusterQuotaForCustomerExceeded" +// The request cannot be processed because it would exceed the allowed number +// of clusters per customer. +// +// * ErrCodeNodeGroupsPerReplicationGroupQuotaExceededFault "NodeGroupsPerReplicationGroupQuotaExceeded" +// The request cannot be processed because it would exceed the maximum allowed +// number of node groups (shards) in a single replication group. The default +// maximum is 15 +// +// * ErrCodeNodeQuotaForCustomerExceededFault "NodeQuotaForCustomerExceeded" +// The request cannot be processed because it would exceed the allowed number +// of cache nodes per customer. +// +// * ErrCodeServiceLinkedRoleNotFoundFault "ServiceLinkedRoleNotFoundFault" +// The specified service linked role (SLR) was not found. +// +// * ErrCodeNoOperationFault "NoOperationFault" +// The operation was not performed because no changes were required. +// +// * ErrCodeInvalidParameterValueException "InvalidParameterValue" +// The value for a parameter is invalid. +// +// * ErrCodeInvalidParameterCombinationException "InvalidParameterCombination" +// Two or more incompatible parameters were specified. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/elasticache-2015-02-02/DecreaseReplicaCount +func (c *ElastiCache) DecreaseReplicaCount(input *DecreaseReplicaCountInput) (*DecreaseReplicaCountOutput, error) { + req, out := c.DecreaseReplicaCountRequest(input) + return out, req.Send() +} + +// DecreaseReplicaCountWithContext is the same as DecreaseReplicaCount with the addition of +// the ability to pass a context and additional request options. +// +// See DecreaseReplicaCount for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElastiCache) DecreaseReplicaCountWithContext(ctx aws.Context, input *DecreaseReplicaCountInput, opts ...request.Option) (*DecreaseReplicaCountOutput, error) { + req, out := c.DecreaseReplicaCountRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteCacheCluster = "DeleteCacheCluster" // DeleteCacheClusterRequest generates a "aws/request.Request" representing the // client's request for the DeleteCacheCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1119,9 +1238,7 @@ func (c *ElastiCache) DeleteCacheClusterRequest(input *DeleteCacheClusterInput) // of a replication group or node group (shard) that has Multi-AZ mode enabled // or a cluster from a Redis (cluster mode enabled) replication group. // -// Due to current limitations on Redis (cluster mode disabled), this operation -// or parameter is not supported on Redis (cluster mode enabled) replication -// groups. +// This operation is not valid for Redis (cluster mode enabled) clusters. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1187,8 +1304,8 @@ const opDeleteCacheParameterGroup = "DeleteCacheParameterGroup" // DeleteCacheParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteCacheParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1280,8 +1397,8 @@ const opDeleteCacheSecurityGroup = "DeleteCacheSecurityGroup" // DeleteCacheSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteCacheSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
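A hedged sketch of calling DecreaseReplicaCount. The input struct is not shown in this hunk, so the field names used here (ReplicationGroupId, NewReplicaCount, ApplyImmediately) are assumed from the released aws-sdk-go elasticache package, and the replication group ID is a placeholder.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticache"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := elasticache.New(sess)

	// Field names are assumed from the released SDK; the group ID is a placeholder.
	_, err := svc.DecreaseReplicaCount(&elasticache.DecreaseReplicaCountInput{
		ReplicationGroupId: aws.String("example-redis"),
		NewReplicaCount:    aws.Int64(1),
		ApplyImmediately:   aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("replica decrease requested; no cluster down time is expected")
}
```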
// the "output" return value is not valid until after Send returns without error. @@ -1373,8 +1490,8 @@ const opDeleteCacheSubnetGroup = "DeleteCacheSubnetGroup" // DeleteCacheSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteCacheSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1460,8 +1577,8 @@ const opDeleteReplicationGroup = "DeleteReplicationGroup" // DeleteReplicationGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteReplicationGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1576,8 +1693,8 @@ const opDeleteSnapshot = "DeleteSnapshot" // DeleteSnapshotRequest generates a "aws/request.Request" representing the // client's request for the DeleteSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1669,8 +1786,8 @@ const opDescribeCacheClusters = "DescribeCacheClusters" // DescribeCacheClustersRequest generates a "aws/request.Request" representing the // client's request for the DescribeCacheClusters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1831,8 +1948,8 @@ const opDescribeCacheEngineVersions = "DescribeCacheEngineVersions" // DescribeCacheEngineVersionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeCacheEngineVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1961,8 +2078,8 @@ const opDescribeCacheParameterGroups = "DescribeCacheParameterGroups" // DescribeCacheParameterGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeCacheParameterGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2105,8 +2222,8 @@ const opDescribeCacheParameters = "DescribeCacheParameters" // DescribeCacheParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeCacheParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2247,8 +2364,8 @@ const opDescribeCacheSecurityGroups = "DescribeCacheSecurityGroups" // DescribeCacheSecurityGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeCacheSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2390,8 +2507,8 @@ const opDescribeCacheSubnetGroups = "DescribeCacheSubnetGroups" // DescribeCacheSubnetGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeCacheSubnetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2527,8 +2644,8 @@ const opDescribeEngineDefaultParameters = "DescribeEngineDefaultParameters" // DescribeEngineDefaultParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeEngineDefaultParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2666,8 +2783,8 @@ const opDescribeEvents = "DescribeEvents" // DescribeEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2809,8 +2926,8 @@ const opDescribeReplicationGroups = "DescribeReplicationGroups" // DescribeReplicationGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReplicationGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2954,8 +3071,8 @@ const opDescribeReservedCacheNodes = "DescribeReservedCacheNodes" // DescribeReservedCacheNodesRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedCacheNodes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3096,8 +3213,8 @@ const opDescribeReservedCacheNodesOfferings = "DescribeReservedCacheNodesOfferin // DescribeReservedCacheNodesOfferingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedCacheNodesOfferings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3237,8 +3354,8 @@ const opDescribeSnapshots = "DescribeSnapshots" // DescribeSnapshotsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSnapshots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3382,12 +3499,129 @@ func (c *ElastiCache) DescribeSnapshotsPagesWithContext(ctx aws.Context, input * return p.Err() } +const opIncreaseReplicaCount = "IncreaseReplicaCount" + +// IncreaseReplicaCountRequest generates a "aws/request.Request" representing the +// client's request for the IncreaseReplicaCount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See IncreaseReplicaCount for more information on using the IncreaseReplicaCount +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the IncreaseReplicaCountRequest method. 
+// req, resp := client.IncreaseReplicaCountRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/elasticache-2015-02-02/IncreaseReplicaCount +func (c *ElastiCache) IncreaseReplicaCountRequest(input *IncreaseReplicaCountInput) (req *request.Request, output *IncreaseReplicaCountOutput) { + op := &request.Operation{ + Name: opIncreaseReplicaCount, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &IncreaseReplicaCountInput{} + } + + output = &IncreaseReplicaCountOutput{} + req = c.newRequest(op, input, output) + return +} + +// IncreaseReplicaCount API operation for Amazon ElastiCache. +// +// Dynamically increases the number of replicas in a Redis (cluster mode disabled) +// replication group or the number of replica nodes in one or more node groups +// (shards) of a Redis (cluster mode enabled) replication group. This operation +// is performed with no cluster down time. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon ElastiCache's +// API operation IncreaseReplicaCount for usage and error information. +// +// Returned Error Codes: +// * ErrCodeReplicationGroupNotFoundFault "ReplicationGroupNotFoundFault" +// The specified replication group does not exist. +// +// * ErrCodeInvalidReplicationGroupStateFault "InvalidReplicationGroupState" +// The requested replication group is not in the available state. +// +// * ErrCodeInvalidCacheClusterStateFault "InvalidCacheClusterState" +// The requested cluster is not in the available state. +// +// * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" +// The VPC network is in an invalid state. +// +// * ErrCodeInsufficientCacheClusterCapacityFault "InsufficientCacheClusterCapacity" +// The requested cache node type is not available in the specified Availability +// Zone. +// +// * ErrCodeClusterQuotaForCustomerExceededFault "ClusterQuotaForCustomerExceeded" +// The request cannot be processed because it would exceed the allowed number +// of clusters per customer. +// +// * ErrCodeNodeGroupsPerReplicationGroupQuotaExceededFault "NodeGroupsPerReplicationGroupQuotaExceeded" +// The request cannot be processed because it would exceed the maximum allowed +// number of node groups (shards) in a single replication group. The default +// maximum is 15. +// +// * ErrCodeNodeQuotaForCustomerExceededFault "NodeQuotaForCustomerExceeded" +// The request cannot be processed because it would exceed the allowed number +// of cache nodes per customer. +// +// * ErrCodeNoOperationFault "NoOperationFault" +// The operation was not performed because no changes were required. +// +// * ErrCodeInvalidParameterValueException "InvalidParameterValue" +// The value for a parameter is invalid. +// +// * ErrCodeInvalidParameterCombinationException "InvalidParameterCombination" +// Two or more incompatible parameters were specified.
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/elasticache-2015-02-02/IncreaseReplicaCount +func (c *ElastiCache) IncreaseReplicaCount(input *IncreaseReplicaCountInput) (*IncreaseReplicaCountOutput, error) { + req, out := c.IncreaseReplicaCountRequest(input) + return out, req.Send() +} + +// IncreaseReplicaCountWithContext is the same as IncreaseReplicaCount with the addition of +// the ability to pass a context and additional request options. +// +// See IncreaseReplicaCount for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElastiCache) IncreaseReplicaCountWithContext(ctx aws.Context, input *IncreaseReplicaCountInput, opts ...request.Option) (*IncreaseReplicaCountOutput, error) { + req, out := c.IncreaseReplicaCountRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListAllowedNodeTypeModifications = "ListAllowedNodeTypeModifications" // ListAllowedNodeTypeModificationsRequest generates a "aws/request.Request" representing the // client's request for the ListAllowedNodeTypeModifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3479,8 +3713,8 @@ const opListTagsForResource = "ListTagsForResource" // ListTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3524,9 +3758,11 @@ func (c *ElastiCache) ListTagsForResourceRequest(input *ListTagsForResourceInput // optional. You can use cost allocation tags to categorize and track your AWS // costs. // +// If the cluster is not in the available state, ListTagsForResource returns +// an error. +// // You can have a maximum of 50 cost allocation tags on an ElastiCache resource. -// For more information, see Using Cost Allocation Tags in Amazon ElastiCache -// (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/BestPractices.html). +// For more information, see Monitoring Costs with Tags (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Tagging.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3571,8 +3807,8 @@ const opModifyCacheCluster = "ModifyCacheCluster" // ModifyCacheClusterRequest generates a "aws/request.Request" representing the // client's request for the ModifyCacheCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
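The IncreaseReplicaCount operation added above can be called directly or with a context. A hedged sketch of a caller, assuming a placeholder replication group id and the usual `aws` pointer helpers, showing the convenience call plus the runtime type assertion on `awserr.Error` that the doc comment recommends:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticache"
)

func main() {
	svc := elasticache.New(session.Must(session.NewSession()))

	// Add replicas to an existing replication group (placeholder id), applied immediately.
	out, err := svc.IncreaseReplicaCountWithContext(aws.BackgroundContext(), &elasticache.IncreaseReplicaCountInput{
		ReplicationGroupId: aws.String("example-repl-group"), // placeholder
		NewReplicaCount:    aws.Int64(2),
		ApplyImmediately:   aws.Bool(true),
	})
	if err != nil {
		// Runtime type assertion to inspect the service error code, as described above.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == elasticache.ErrCodeReplicationGroupNotFoundFault {
			log.Fatalf("replication group not found: %s", aerr.Message())
		}
		log.Fatal(err)
	}
	fmt.Println(out.ReplicationGroup)
}
```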
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3687,8 +3923,8 @@ const opModifyCacheParameterGroup = "ModifyCacheParameterGroup" // ModifyCacheParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyCacheParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3779,8 +4015,8 @@ const opModifyCacheSubnetGroup = "ModifyCacheSubnetGroup" // ModifyCacheSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyCacheSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3869,8 +4105,8 @@ const opModifyReplicationGroup = "ModifyReplicationGroup" // ModifyReplicationGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyReplicationGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3911,9 +4147,16 @@ func (c *ElastiCache) ModifyReplicationGroupRequest(input *ModifyReplicationGrou // // Modifies the settings for a replication group. // -// Due to current limitations on Redis (cluster mode disabled), this operation -// or parameter is not supported on Redis (cluster mode enabled) replication -// groups. +// For Redis (cluster mode enabled) clusters, this operation cannot be used +// to change a cluster's node type or engine version. For more information, +// see: +// +// * Scaling for Amazon ElastiCache for Redis—Redis (cluster mode enabled) +// (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/scaling-redis-cluster-mode-enabled.html) +// in the ElastiCache User Guide +// +// * ModifyReplicationGroupShardConfiguration (http://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroupShardConfiguration.html) +// in the ElastiCache API Reference // // This operation is valid for Redis only. // @@ -3995,8 +4238,8 @@ const opModifyReplicationGroupShardConfiguration = "ModifyReplicationGroupShardC // ModifyReplicationGroupShardConfigurationRequest generates a "aws/request.Request" representing the // client's request for the ModifyReplicationGroupShardConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4035,14 +4278,8 @@ func (c *ElastiCache) ModifyReplicationGroupShardConfigurationRequest(input *Mod // ModifyReplicationGroupShardConfiguration API operation for Amazon ElastiCache. // -// Performs horizontal scaling on a Redis (cluster mode enabled) cluster with -// no downtime. Requires Redis engine version 3.2.10 or newer. For information -// on upgrading your engine to a newer version, see Upgrading Engine Versions -// (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/VersionManagement.html) -// in the Amazon ElastiCache User Guide. -// -// For more information on ElastiCache for Redis online horizontal scaling, -// see ElastiCache for Redis Horizontal Scaling (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/redis-cluster-resharding-online.html) +// Modifies a replication group's shards (node groups) by allowing you to add +// shards, remove shards, or rebalance the keyspaces among existing shards. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4109,8 +4346,8 @@ const opPurchaseReservedCacheNodesOffering = "PurchaseReservedCacheNodesOffering // PurchaseReservedCacheNodesOfferingRequest generates a "aws/request.Request" representing the // client's request for the PurchaseReservedCacheNodesOffering operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4201,8 +4438,8 @@ const opRebootCacheCluster = "RebootCacheCluster" // RebootCacheClusterRequest generates a "aws/request.Request" representing the // client's request for the RebootCacheCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4256,7 +4493,7 @@ func (c *ElastiCache) RebootCacheClusterRequest(input *RebootCacheClusterInput) // enabled) clusters. // // If you make changes to parameters that require a Redis (cluster mode enabled) -// cluster reboot for the changes to be applied, see Rebooting a Cluster (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Clusters.Rebooting.htm) +// cluster reboot for the changes to be applied, see Rebooting a Cluster (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Clusters.Rebooting.html) // for an alternate process. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -4299,8 +4536,8 @@ const opRemoveTagsFromResource = "RemoveTagsFromResource" // RemoveTagsFromResourceRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromResource operation.
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4387,8 +4624,8 @@ const opResetCacheParameterGroup = "ResetCacheParameterGroup" // ResetCacheParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the ResetCacheParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4480,8 +4717,8 @@ const opRevokeCacheSecurityGroupIngress = "RevokeCacheSecurityGroupIngress" // RevokeCacheSecurityGroupIngressRequest generates a "aws/request.Request" representing the // client's request for the RevokeCacheSecurityGroupIngress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4574,8 +4811,8 @@ const opTestFailover = "TestFailover" // TestFailoverRequest generates a "aws/request.Request" representing the // client's request for the TestFailover operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4650,13 +4887,13 @@ func (c *ElastiCache) TestFailoverRequest(input *TestFailoverInput) (req *reques // // For more information see: // -// Viewing ElastiCache Events (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/ECEvents.Viewing.html) +// Viewing ElastiCache Events (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ECEvents.Viewing.html) // in the ElastiCache User Guide // // DescribeEvents (http://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeEvents.html) // in the ElastiCache API Reference // -// Also see, Testing Multi-AZ with Automatic Failover (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoFailover.html#auto-failover-test) +// Also see, Testing Multi-AZ with Automatic Failover (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html#auto-failover-test) // in the ElastiCache User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -4685,6 +4922,7 @@ func (c *ElastiCache) TestFailoverRequest(input *TestFailoverInput) (req *reques // The specified replication group does not exist. // // * ErrCodeTestFailoverNotAvailableFault "TestFailoverNotAvailableFault" +// The TestFailover action is not available. 
// // * ErrCodeInvalidParameterValueException "InvalidParameterValue" // The value for a parameter is invalid. @@ -4907,6 +5145,9 @@ type CacheCluster struct { // is created. To enable at-rest encryption on a cluster you must set AtRestEncryptionEnabled // to true when you create a cluster. // + // Required: Only available when creating a replication group in an Amazon VPC + // using redis version 3.2.6 or 4.x. + // // Default: false AtRestEncryptionEnabled *bool `type:"boolean"` @@ -4919,7 +5160,7 @@ type CacheCluster struct { AutoMinorVersionUpgrade *bool `type:"boolean"` // The date and time when the cluster was created. - CacheClusterCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CacheClusterCreateTime *time.Time `type:"timestamp"` // The user-supplied identifier of the cluster. This identifier is a unique // key that identifies a cluster. @@ -4966,6 +5207,9 @@ type CacheCluster struct { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // + // R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, + // cache.r4.8xlarge, cache.r4.16xlarge + // // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -4984,10 +5228,13 @@ type CacheCluster struct { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // - // For a complete listing of node types and specifications, see Amazon ElastiCache - // Product Features and Details (http://aws.amazon.com/elasticache/details) - // and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) - // or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). + // For a complete listing of node types and specifications, see: + // + // * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) + // + // * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) + // + // * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) CacheNodeType *string `type:"string"` // A list of cache nodes that are members of the cluster. @@ -5088,6 +5335,9 @@ type CacheCluster struct { // is created. To enable in-transit encryption on a cluster you must set TransitEncryptionEnabled // to true when you create a cluster. // + // Required: Only available when creating a replication group in an Amazon VPC + // using redis version 3.2.6 or 4.x. + // // Default: false TransitEncryptionEnabled *bool `type:"boolean"` } @@ -5264,7 +5514,7 @@ type CacheEngineVersion struct { // The name of the cache parameter group family associated with this cache engine. // - // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 + // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 | redis4.0 CacheParameterGroupFamily *string `type:"string"` // The name of the cache engine. 
@@ -5352,6 +5602,9 @@ func (s *CacheEngineVersion) SetEngineVersion(v string) *CacheEngineVersion { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // +// R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, +// cache.r4.8xlarge, cache.r4.16xlarge +// // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -5370,15 +5623,18 @@ func (s *CacheEngineVersion) SetEngineVersion(v string) *CacheEngineVersion { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // -// For a complete listing of node types and specifications, see Amazon ElastiCache -// Product Features and Details (http://aws.amazon.com/elasticache/details) -// and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) -// or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). +// For a complete listing of node types and specifications, see: +// +// * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) +// +// * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) +// +// * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) type CacheNode struct { _ struct{} `type:"structure"` // The date and time when the cache node was created. - CacheNodeCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CacheNodeCreateTime *time.Time `type:"timestamp"` // The cache node identifier. A node ID is a numeric identifier (0001, 0002, // etc.). The combination of cluster ID and node ID uniquely identifies every @@ -5469,7 +5725,7 @@ type CacheNodeTypeSpecificParameter struct { // Indicates whether a change to the parameter is applied immediately or requires // a reboot for the change to be applied. You can force a reboot or wait until // the next maintenance window's reboot. For more information, see Rebooting - // a Cluster (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Clusters.Rebooting.html). + // a Cluster (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Clusters.Rebooting.html). ChangeType *string `type:"string" enum:"ChangeType"` // The valid data type for the parameter. @@ -5597,7 +5853,7 @@ type CacheParameterGroup struct { // The name of the cache parameter group family that this cache parameter group // is compatible with. // - // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 + // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 | redis4.0 CacheParameterGroupFamily *string `type:"string"` // The name of the cache parameter group. @@ -5855,6 +6111,93 @@ func (s *CacheSubnetGroup) SetVpcId(v string) *CacheSubnetGroup { return s } +// Node group (shard) configuration options when adding or removing replicas. +// Each node group (shard) configuration has the following members: NodeGroupId, +// NewReplicaCount, and PreferredAvailabilityZones. 
+type ConfigureShard struct { + _ struct{} `type:"structure"` + + // The number of replicas you want in this node group at the end of this operation. + // The maximum value for NewReplicaCount is 5. The minimum value depends upon + // the type of Redis replication group you are working with. + // + // The minimum number of replicas in a shard or replication group is: + // + // * Redis (cluster mode disabled) + // + // If Multi-AZ with Automatic Failover is enabled: 1 + // + // If Multi-AZ with Automatic Failover is not enabled: 0 + // + // * Redis (cluster mode enabled): 0 (though you will not be able to failover + // to a replica if your primary node fails) + // + // NewReplicaCount is a required field + NewReplicaCount *int64 `type:"integer" required:"true"` + + // The 4-digit id for the node group you are configuring. For Redis (cluster + // mode disabled) replication groups, the node group id is always 0001. To find + // a Redis (cluster mode enabled)'s node group's (shard's) id, see Finding a + // Shard's Id (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/shard-find-id.html). + // + // NodeGroupId is a required field + NodeGroupId *string `min:"1" type:"string" required:"true"` + + // A list of PreferredAvailabilityZone strings that specify which availability + // zones the replication group's nodes are to be in. The number of PreferredAvailabilityZone + // values must equal the value of NewReplicaCount plus 1 to account for the + // primary node. If this member of ReplicaConfiguration is omitted, ElastiCache + // for Redis selects the availability zone for each of the replicas. + PreferredAvailabilityZones []*string `locationNameList:"PreferredAvailabilityZone" type:"list"` +} + +// String returns the string representation +func (s ConfigureShard) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfigureShard) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfigureShard) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfigureShard"} + if s.NewReplicaCount == nil { + invalidParams.Add(request.NewErrParamRequired("NewReplicaCount")) + } + if s.NodeGroupId == nil { + invalidParams.Add(request.NewErrParamRequired("NodeGroupId")) + } + if s.NodeGroupId != nil && len(*s.NodeGroupId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NodeGroupId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNewReplicaCount sets the NewReplicaCount field's value. +func (s *ConfigureShard) SetNewReplicaCount(v int64) *ConfigureShard { + s.NewReplicaCount = &v + return s +} + +// SetNodeGroupId sets the NodeGroupId field's value. +func (s *ConfigureShard) SetNodeGroupId(v string) *ConfigureShard { + s.NodeGroupId = &v + return s +} + +// SetPreferredAvailabilityZones sets the PreferredAvailabilityZones field's value. +func (s *ConfigureShard) SetPreferredAvailabilityZones(v []*string) *ConfigureShard { + s.PreferredAvailabilityZones = v + return s +} + // Represents the input of a CopySnapshotMessage operation. type CopySnapshotInput struct { _ struct{} `type:"structure"` @@ -5869,10 +6212,10 @@ type CopySnapshotInput struct { // // When using this parameter to export a snapshot, be sure Amazon ElastiCache // has the needed permissions to this S3 bucket.
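ConfigureShard, defined above, is the per-shard element of the ReplicaConfiguration list on the new increase/decrease replica inputs. A minimal sketch of building and validating one, using a hypothetical shard id, placeholder availability zones, and a placeholder replication group id:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elasticache"
)

func main() {
	// Two replicas in node group 0001; the number of preferred AZs must equal
	// NewReplicaCount plus 1 to account for the primary.
	shard := &elasticache.ConfigureShard{
		NodeGroupId:     aws.String("0001"),
		NewReplicaCount: aws.Int64(2),
		PreferredAvailabilityZones: aws.StringSlice([]string{ // placeholder AZs
			"us-west-2a", "us-west-2b", "us-west-2c",
		}),
	}
	if err := shard.Validate(); err != nil {
		fmt.Println("invalid shard configuration:", err)
		return
	}

	// The shard configuration is passed via ReplicaConfiguration on, for example,
	// IncreaseReplicaCountInput.
	input := &elasticache.IncreaseReplicaCountInput{
		ReplicationGroupId:   aws.String("example-repl-group"), // placeholder
		ApplyImmediately:     aws.Bool(true),
		ReplicaConfiguration: []*elasticache.ConfigureShard{shard},
	}
	fmt.Println(input)
}
```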
For more information, see Step - // 2: Grant ElastiCache Access to Your Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html#Snapshots.Exporting.GrantAccess) + // 2: Grant ElastiCache Access to Your Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html#Snapshots.Exporting.GrantAccess) // in the Amazon ElastiCache User Guide. // - // For more information, see Exporting a Snapshot (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Snapshots.Exporting.html) + // For more information, see Exporting a Snapshot (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Snapshots.Exporting.html) // in the Amazon ElastiCache User Guide. TargetBucket *string `type:"string"` @@ -5968,13 +6311,6 @@ type CreateCacheClusterInput struct { // Reserved parameter. The password used to access a password protected server. // - // This parameter is valid only if: - // - // * The parameter TransitEncryptionEnabled was set to true when the cluster - // was created. - // - // * The line requirepass was added to the database configuration file. - // // Password constraints: // // * Must be only printable ASCII characters. @@ -6040,6 +6376,9 @@ type CreateCacheClusterInput struct { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // + // R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, + // cache.r4.8xlarge, cache.r4.16xlarge + // // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -6058,10 +6397,13 @@ type CreateCacheClusterInput struct { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // - // For a complete listing of node types and specifications, see Amazon ElastiCache - // Product Features and Details (http://aws.amazon.com/elasticache/details) - // and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) - // or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). + // For a complete listing of node types and specifications, see: + // + // * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) + // + // * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) + // + // * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) CacheNodeType *string `type:"string"` // The name of the parameter group to associate with this cluster. If this argument @@ -6083,7 +6425,7 @@ type CreateCacheClusterInput struct { // // If you're going to launch your cluster in an Amazon VPC, you need to create // a subnet group before you start creating a cluster. For more information, - // see Subnets and Subnet Groups (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SubnetGroups.html). + // see Subnets and Subnet Groups (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/SubnetGroups.html). 
CacheSubnetGroupName *string `type:"string"` // The name of the cache engine to be used for this cluster. @@ -6096,7 +6438,7 @@ type CreateCacheClusterInput struct { // operation. // // Important: You can upgrade to a newer engine version (see Selecting a Cache - // Engine and Version (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SelectEngine.html#VersionManagement)), + // Engine and Version (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/SelectEngine.html#VersionManagement)), // but you cannot downgrade to an earlier engine version. If you want to use // an earlier engine version, you must delete the existing cluster or replication // group and create it anew with the earlier engine version. @@ -6175,10 +6517,6 @@ type CreateCacheClusterInput struct { // Example: sun:23:00-mon:01:30 PreferredMaintenanceWindow *string `type:"string"` - // Due to current limitations on Redis (cluster mode disabled), this operation - // or parameter is not supported on Redis (cluster mode enabled) replication - // groups. - // // The ID of the replication group to which this cluster should belong. If this // parameter is specified, the cluster is added to the specified replication // group as a read replica; otherwise, the cluster is a standalone primary that @@ -6220,7 +6558,7 @@ type CreateCacheClusterInput struct { // // This parameter is only valid if the Engine parameter is redis. // - // Default: 0 (i.e., automatic backups are disabled for this cluster). + // Default: 0 (i.e., automatic backups are disabled for this cache cluster). SnapshotRetentionLimit *int64 `type:"integer"` // The daily time range (in UTC) during which ElastiCache begins taking a daily @@ -6429,7 +6767,7 @@ type CreateCacheParameterGroupInput struct { // The name of the cache parameter group family that the cache parameter group // can be used with. // - // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 + // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 | redis4.0 // // CacheParameterGroupFamily is a required field CacheParameterGroupFamily *string `type:"string" required:"true"` @@ -6712,20 +7050,19 @@ type CreateReplicationGroupInput struct { // must set AtRestEncryptionEnabled to true when you create the replication // group. // - // This parameter is valid only if the Engine parameter is redis and the cluster - // is being created in an Amazon VPC. + // Required: Only available when creating a replication group in an Amazon VPC + // using redis version 3.2.6 or 4.x. // // Default: false AtRestEncryptionEnabled *bool `type:"boolean"` // Reserved parameter. The password used to access a password protected server. // - // This parameter is valid only if: + // AuthToken can be specified only on replication groups where TransitEncryptionEnabled + // is true. // - // * The parameter TransitEncryptionEnabled was set to true when the cluster - // was created. - // - // * The line requirepass was added to the database configuration file. + // For HIPAA compliance, you must specify TransitEncryptionEnabled as true, + // an AuthToken, and a CacheSubnetGroup. 
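The updated AuthToken documentation above states that HIPAA compliance requires TransitEncryptionEnabled set to true, an AuthToken, and a CacheSubnetGroup. A hedged sketch of a CreateReplicationGroupInput combining those settings; all ids, the node type, and the token value are placeholders, and ReplicationGroupDescription is assumed to be the required description field from the full struct definition (not shown in this hunk):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticache"
)

func main() {
	svc := elasticache.New(session.Must(session.NewSession()))

	// An encrypted Redis replication group: per the doc comments above, in-transit
	// and at-rest encryption are only available in a VPC with redis 3.2.6 or 4.x.
	out, err := svc.CreateReplicationGroup(&elasticache.CreateReplicationGroupInput{
		ReplicationGroupId:          aws.String("example-secure-group"), // placeholder
		ReplicationGroupDescription: aws.String("encrypted example group"),
		Engine:                      aws.String("redis"),
		EngineVersion:               aws.String("3.2.6"),
		CacheNodeType:               aws.String("cache.m4.large"),          // placeholder
		NumCacheClusters:            aws.Int64(2),                          // 1 primary plus 1 replica
		CacheSubnetGroupName:        aws.String("example-subnet-group"),    // placeholder
		TransitEncryptionEnabled:    aws.Bool(true),
		AtRestEncryptionEnabled:     aws.Bool(true),
		AuthToken:                   aws.String("example-auth-token-1234"), // placeholder, printable ASCII
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.ReplicationGroup)
}
```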
// // Password constraints: // @@ -6799,6 +7136,9 @@ type CreateReplicationGroupInput struct { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // + // R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, + // cache.r4.8xlarge, cache.r4.16xlarge + // // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -6817,10 +7157,13 @@ type CreateReplicationGroupInput struct { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // - // For a complete listing of node types and specifications, see Amazon ElastiCache - // Product Features and Details (http://aws.amazon.com/elasticache/details) - // and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) - // or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). + // For a complete listing of node types and specifications, see: + // + // * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) + // + // * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) + // + // * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) CacheNodeType *string `type:"string"` // The name of the parameter group to associate with this replication group. @@ -6843,7 +7186,7 @@ type CreateReplicationGroupInput struct { // // If you're going to launch your cluster in an Amazon VPC, you need to create // a subnet group before you start creating a cluster. For more information, - // see Subnets and Subnet Groups (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SubnetGroups.html). + // see Subnets and Subnet Groups (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/SubnetGroups.html). CacheSubnetGroupName *string `type:"string"` // The name of the cache engine to be used for the clusters in this replication @@ -6855,7 +7198,7 @@ type CreateReplicationGroupInput struct { // operation. // // Important: You can upgrade to a newer engine version (see Selecting a Cache - // Engine and Version (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SelectEngine.html#VersionManagement)) + // Engine and Version (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/SelectEngine.html#VersionManagement)) // in the ElastiCache User Guide, but you cannot downgrade to an earlier engine // version. If you want to use an earlier engine version, you must delete the // existing cluster or replication group and create it anew with the earlier @@ -6863,12 +7206,15 @@ type CreateReplicationGroupInput struct { EngineVersion *string `type:"string"` // A list of node group (shard) configuration options. Each node group (shard) - // configuration has the following: Slots, PrimaryAvailabilityZone, ReplicaAvailabilityZones, - // ReplicaCount. + // configuration has the following members: PrimaryAvailabilityZone, ReplicaAvailabilityZones, + // ReplicaCount, and Slots. 
// // If you're creating a Redis (cluster mode disabled) or a Redis (cluster mode // enabled) replication group, you can use this parameter to individually configure - // each node group (shard), or you can omit this parameter. + // each node group (shard), or you can omit this parameter. However, when seeding + // a Redis (cluster mode enabled) cluster from a S3 rdb file, you must configure + // each node group (shard) using this parameter because you must specify the + // slots for each node group. NodeGroupConfiguration []*NodeGroupConfiguration `locationNameList:"NodeGroupConfiguration" type:"list"` // The Amazon Resource Name (ARN) of the Amazon Simple Notification Service @@ -6887,7 +7233,7 @@ type CreateReplicationGroupInput struct { // (it will default to 1), or you can explicitly set it to a value between 2 // and 6. // - // The maximum permitted value for NumCacheClusters is 6 (primary plus 5 replicas). + // The maximum permitted value for NumCacheClusters is 6 (1 primary plus 5 replicas). NumCacheClusters *int64 `type:"integer"` // An optional parameter that specifies the number of node groups (shards) for @@ -7014,7 +7360,7 @@ type CreateReplicationGroupInput struct { SnapshotWindow *string `type:"string"` // A list of cost allocation tags to be added to this resource. A tag is a key-value - // pair. A tag key does not have to be accompanied by a tag value. + // pair. Tags []*Tag `locationNameList:"Tag" type:"list"` // A flag that enables in-transit encryption when set to true. @@ -7024,12 +7370,18 @@ type CreateReplicationGroupInput struct { // to true when you create a cluster. // // This parameter is valid only if the Engine parameter is redis, the EngineVersion - // parameter is 3.2.4 or later, and the cluster is being created in an Amazon + // parameter is 3.2.6 or 4.x, and the cluster is being created in an Amazon // VPC. // // If you enable in-transit encryption, you must also specify a value for CacheSubnetGroup. // + // Required: Only available when creating a replication group in an Amazon VPC + // using redis version 3.2.6 or 4.x. + // // Default: false + // + // For HIPAA compliance, you must specify TransitEncryptionEnabled as true, + // an AuthToken, and a CacheSubnetGroup. TransitEncryptionEnabled *bool `type:"boolean"` } @@ -7052,6 +7404,16 @@ func (s *CreateReplicationGroupInput) Validate() error { if s.ReplicationGroupId == nil { invalidParams.Add(request.NewErrParamRequired("ReplicationGroupId")) } + if s.NodeGroupConfiguration != nil { + for i, v := range s.NodeGroupConfiguration { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "NodeGroupConfiguration", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -7333,6 +7695,137 @@ func (s *CreateSnapshotOutput) SetSnapshot(v *Snapshot) *CreateSnapshotOutput { return s } +type DecreaseReplicaCountInput struct { + _ struct{} `type:"structure"` + + // If True, the number of replica nodes is decreased immediately. If False, + // the number of replica nodes is decreased during the next maintenance window. + // + // ApplyImmediately is a required field + ApplyImmediately *bool `type:"boolean" required:"true"` + + // The number of read replica nodes you want at the completion of this operation. + // For Redis (cluster mode disabled) replication groups, this is the number + // of replica nodes in the replication group. 
For Redis (cluster mode enabled) + // replication groups, this is the number of replica nodes in each of the replication + // group's node groups. + // + // The minimum number of replicas in a shard or replication group is: + // + // * Redis (cluster mode disabled) + // + // If Multi-AZ with Automatic Failover is enabled: 1 + // + // If Multi-AZ with Automatic Failover is not enabled: 0 + // + // * Redis (cluster mode enabled): 0 (though you will not be able to failover + // to a replica if your primary node fails) + NewReplicaCount *int64 `type:"integer"` + + // A list of ConfigureShard objects that can be used to configure each shard + // in a Redis (cluster mode enabled) replication group. The ConfigureShard has + // three members: NewReplicaCount, NodeGroupId, and PreferredAvailabilityZones. + ReplicaConfiguration []*ConfigureShard `locationNameList:"ConfigureShard" type:"list"` + + // A list of the node ids to remove from the replication group or node group + // (shard). + ReplicasToRemove []*string `type:"list"` + + // The id of the replication group from which you want to remove replica nodes. + // + // ReplicationGroupId is a required field + ReplicationGroupId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DecreaseReplicaCountInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DecreaseReplicaCountInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DecreaseReplicaCountInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DecreaseReplicaCountInput"} + if s.ApplyImmediately == nil { + invalidParams.Add(request.NewErrParamRequired("ApplyImmediately")) + } + if s.ReplicationGroupId == nil { + invalidParams.Add(request.NewErrParamRequired("ReplicationGroupId")) + } + if s.ReplicaConfiguration != nil { + for i, v := range s.ReplicaConfiguration { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReplicaConfiguration", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplyImmediately sets the ApplyImmediately field's value. +func (s *DecreaseReplicaCountInput) SetApplyImmediately(v bool) *DecreaseReplicaCountInput { + s.ApplyImmediately = &v + return s +} + +// SetNewReplicaCount sets the NewReplicaCount field's value. +func (s *DecreaseReplicaCountInput) SetNewReplicaCount(v int64) *DecreaseReplicaCountInput { + s.NewReplicaCount = &v + return s +} + +// SetReplicaConfiguration sets the ReplicaConfiguration field's value. +func (s *DecreaseReplicaCountInput) SetReplicaConfiguration(v []*ConfigureShard) *DecreaseReplicaCountInput { + s.ReplicaConfiguration = v + return s +} + +// SetReplicasToRemove sets the ReplicasToRemove field's value. +func (s *DecreaseReplicaCountInput) SetReplicasToRemove(v []*string) *DecreaseReplicaCountInput { + s.ReplicasToRemove = v + return s +} + +// SetReplicationGroupId sets the ReplicationGroupId field's value. +func (s *DecreaseReplicaCountInput) SetReplicationGroupId(v string) *DecreaseReplicaCountInput { + s.ReplicationGroupId = &v + return s +} + +type DecreaseReplicaCountOutput struct { + _ struct{} `type:"structure"` + + // Contains all of the attributes of a specific Redis replication group. 
+ ReplicationGroup *ReplicationGroup `type:"structure"` +} + +// String returns the string representation +func (s DecreaseReplicaCountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DecreaseReplicaCountOutput) GoString() string { + return s.String() +} + +// SetReplicationGroup sets the ReplicationGroup field's value. +func (s *DecreaseReplicaCountOutput) SetReplicationGroup(v *ReplicationGroup) *DecreaseReplicaCountOutput { + s.ReplicationGroup = v + return s +} + // Represents the input of a DeleteCacheCluster operation. type DeleteCacheClusterInput struct { _ struct{} `type:"structure"` @@ -7833,7 +8326,7 @@ type DescribeCacheEngineVersionsInput struct { // The name of a specific cache parameter group family to return details for. // - // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 + // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 | redis4.0 // // Constraints: // @@ -8328,7 +8821,7 @@ type DescribeEngineDefaultParametersInput struct { // The name of the cache parameter group family. // - // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 + // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 | redis4.0 // // CacheParameterGroupFamily is a required field CacheParameterGroupFamily *string `type:"string" required:"true"` @@ -8423,7 +8916,7 @@ type DescribeEventsInput struct { // 8601 format. // // Example: 2017-03-30T07:03:49.555Z - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `type:"timestamp"` // An optional marker returned from a prior request. Use this marker for pagination // of results from this operation. If this parameter is specified, the response @@ -8451,7 +8944,7 @@ type DescribeEventsInput struct { // 8601 format. // // Example: 2017-03-30T07:03:49.555Z - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -8669,6 +9162,9 @@ type DescribeReservedCacheNodesInput struct { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // + // R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, + // cache.r4.8xlarge, cache.r4.16xlarge + // // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -8687,10 +9183,13 @@ type DescribeReservedCacheNodesInput struct { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // - // For a complete listing of node types and specifications, see Amazon ElastiCache - // Product Features and Details (http://aws.amazon.com/elasticache/details) - // and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) - // or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). 
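DecreaseReplicaCountInput and DecreaseReplicaCountOutput above are the counterpart to the increase operation; the input additionally accepts ReplicasToRemove for naming specific node ids. A sketch of a caller, assuming the corresponding DecreaseReplicaCount client method added alongside these types (not visible in this hunk) and a placeholder replication group id:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticache"
)

func main() {
	svc := elasticache.New(session.Must(session.NewSession()))

	// Drop to one replica per node group during the next maintenance window.
	// ReplicasToRemove could be set instead to remove specific node ids.
	out, err := svc.DecreaseReplicaCount(&elasticache.DecreaseReplicaCountInput{
		ReplicationGroupId: aws.String("example-repl-group"), // placeholder
		NewReplicaCount:    aws.Int64(1),
		ApplyImmediately:   aws.Bool(false),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.ReplicationGroup)
}
```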
+ // For a complete listing of node types and specifications, see: + // + // * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) + // + // * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) + // + // * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) CacheNodeType *string `type:"string"` // The duration filter value, specified in years or seconds. Use this parameter @@ -8831,6 +9330,9 @@ type DescribeReservedCacheNodesOfferingsInput struct { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // + // R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, + // cache.r4.8xlarge, cache.r4.16xlarge + // // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -8849,10 +9351,13 @@ type DescribeReservedCacheNodesOfferingsInput struct { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // - // For a complete listing of node types and specifications, see Amazon ElastiCache - // Product Features and Details (http://aws.amazon.com/elasticache/details) - // and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) - // or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). + // For a complete listing of node types and specifications, see: + // + // * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) + // + // * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) + // + // * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) CacheNodeType *string `type:"string"` // Duration filter value, specified in years or seconds. Use this parameter @@ -9228,7 +9733,7 @@ type EngineDefaults struct { // Specifies the name of the cache parameter group family to which the engine // default parameters apply. // - // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 + // Valid values are: memcached1.4 | redis2.6 | redis2.8 | redis3.2 | redis4.0 CacheParameterGroupFamily *string `type:"string"` // Provides an identifier to allow retrieval of paginated results. @@ -9279,7 +9784,7 @@ type Event struct { _ struct{} `type:"structure"` // The date and time when the event occurred. - Date *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Date *time.Time `type:"timestamp"` // The text of the event. Message *string `type:"string"` @@ -9327,6 +9832,116 @@ func (s *Event) SetSourceType(v string) *Event { return s } +type IncreaseReplicaCountInput struct { + _ struct{} `type:"structure"` + + // If True, the number of replica nodes is increased immediately. If False, + // the number of replica nodes is increased during the next maintenance window. 
+ // + // ApplyImmediately is a required field + ApplyImmediately *bool `type:"boolean" required:"true"` + + // The number of read replica nodes you want at the completion of this operation. + // For Redis (cluster mode disabled) replication groups, this is the number + // of replica nodes in the replication group. For Redis (cluster mode enabled) + // replication groups, this is the number of replica nodes in each of the replication + // group's node groups. + NewReplicaCount *int64 `type:"integer"` + + // A list of ConfigureShard objects that can be used to configure each shard + // in a Redis (cluster mode enabled) replication group. The ConfigureShard has + // three members: NewReplicaCount, NodeGroupId, and PreferredAvailabilityZones. + ReplicaConfiguration []*ConfigureShard `locationNameList:"ConfigureShard" type:"list"` + + // The id of the replication group to which you want to add replica nodes. + // + // ReplicationGroupId is a required field + ReplicationGroupId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s IncreaseReplicaCountInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s IncreaseReplicaCountInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *IncreaseReplicaCountInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "IncreaseReplicaCountInput"} + if s.ApplyImmediately == nil { + invalidParams.Add(request.NewErrParamRequired("ApplyImmediately")) + } + if s.ReplicationGroupId == nil { + invalidParams.Add(request.NewErrParamRequired("ReplicationGroupId")) + } + if s.ReplicaConfiguration != nil { + for i, v := range s.ReplicaConfiguration { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReplicaConfiguration", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplyImmediately sets the ApplyImmediately field's value. +func (s *IncreaseReplicaCountInput) SetApplyImmediately(v bool) *IncreaseReplicaCountInput { + s.ApplyImmediately = &v + return s +} + +// SetNewReplicaCount sets the NewReplicaCount field's value. +func (s *IncreaseReplicaCountInput) SetNewReplicaCount(v int64) *IncreaseReplicaCountInput { + s.NewReplicaCount = &v + return s +} + +// SetReplicaConfiguration sets the ReplicaConfiguration field's value. +func (s *IncreaseReplicaCountInput) SetReplicaConfiguration(v []*ConfigureShard) *IncreaseReplicaCountInput { + s.ReplicaConfiguration = v + return s +} + +// SetReplicationGroupId sets the ReplicationGroupId field's value. +func (s *IncreaseReplicaCountInput) SetReplicationGroupId(v string) *IncreaseReplicaCountInput { + s.ReplicationGroupId = &v + return s +} + +type IncreaseReplicaCountOutput struct { + _ struct{} `type:"structure"` + + // Contains all of the attributes of a specific Redis replication group. + ReplicationGroup *ReplicationGroup `type:"structure"` +} + +// String returns the string representation +func (s IncreaseReplicaCountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s IncreaseReplicaCountOutput) GoString() string { + return s.String() +} + +// SetReplicationGroup sets the ReplicationGroup field's value. 
+func (s *IncreaseReplicaCountOutput) SetReplicationGroup(v *ReplicationGroup) *IncreaseReplicaCountOutput { + s.ReplicationGroup = v + return s +} + // The input parameters for the ListAllowedNodeTypeModifications operation. type ListAllowedNodeTypeModificationsInput struct { _ struct{} `type:"structure"` @@ -9461,7 +10076,7 @@ type ModifyCacheClusterInput struct { // Only newly created nodes are located in different Availability Zones. For // instructions on how to move existing Memcached nodes to different Availability // Zones, see the Availability Zone Considerations section of Cache Node Considerations - // for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheNode.Memcached.html). + // for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/CacheNode.Memcached.html). AZMode *string `type:"string" enum:"AZMode"` // If true, this parameter causes the modifications in this request and any @@ -9520,7 +10135,7 @@ type ModifyCacheClusterInput struct { // The upgraded version of the cache engine to be run on the cache nodes. // // Important: You can upgrade to a newer engine version (see Selecting a Cache - // Engine and Version (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SelectEngine.html#VersionManagement)), + // Engine and Version (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/SelectEngine.html#VersionManagement)), // but you cannot downgrade to an earlier engine version. If you want to use // an earlier engine version, you must delete the existing cluster and create // it anew with the earlier engine version. @@ -9557,7 +10172,7 @@ type ModifyCacheClusterInput struct { // Availability Zone. Only newly created nodes can be located in different Availability // Zones. For guidance on how to move existing Memcached nodes to different // Availability Zones, see the Availability Zone Considerations section of Cache - // Node Considerations for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheNode.Memcached.html). + // Node Considerations for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/CacheNode.Memcached.html). // // Impact of new add/remove requests upon pending requests // @@ -10035,14 +10650,16 @@ type ModifyReplicationGroupInput struct { // replication group. // // Important: You can upgrade to a newer engine version (see Selecting a Cache - // Engine and Version (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SelectEngine.html#VersionManagement)), + // Engine and Version (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/SelectEngine.html#VersionManagement)), // but you cannot downgrade to an earlier engine version. If you want to use // an earlier engine version, you must delete the existing replication group // and create it anew with the earlier engine version. EngineVersion *string `type:"string"` - // The name of the Node Group (called shard in the console). - NodeGroupId *string `type:"string"` + // Deprecated. This parameter is not used. + // + // Deprecated: NodeGroupId has been deprecated + NodeGroupId *string `deprecated:"true" type:"string"` // The Amazon Resource Name (ARN) of the Amazon SNS topic to which notifications // are sent. 
@@ -10297,10 +10914,21 @@ type ModifyReplicationGroupShardConfigurationInput struct { NodeGroupCount *int64 `type:"integer" required:"true"` // If the value of NodeGroupCount is less than the current number of node groups - // (shards), NodeGroupsToRemove is a required list of node group ids to remove + // (shards), the NodeGroupsToRemove or NodeGroupsToRetain is a required list + // of node group ids to remove from or retain in the cluster. + // + // ElastiCache for Redis will attempt to remove all node groups listed by NodeGroupsToRemove // from the cluster. NodeGroupsToRemove []*string `locationNameList:"NodeGroupToRemove" type:"list"` + // If the value of NodeGroupCount is less than the current number of node groups + // (shards), the NodeGroupsToRemove or NodeGroupsToRetain is a required list + // of node group ids to remove from or retain in the cluster. + // + // ElastiCache for Redis will attempt to remove all node groups except those + // listed by NodeGroupsToRetain from the cluster. + NodeGroupsToRetain []*string `locationNameList:"NodeGroupToRetain" type:"list"` + // The name of the Redis (cluster mode enabled) cluster (replication group) // on which the shards are to be configured. // @@ -10340,6 +10968,16 @@ func (s *ModifyReplicationGroupShardConfigurationInput) Validate() error { if s.ReplicationGroupId == nil { invalidParams.Add(request.NewErrParamRequired("ReplicationGroupId")) } + if s.ReshardingConfiguration != nil { + for i, v := range s.ReshardingConfiguration { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReshardingConfiguration", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -10365,6 +11003,12 @@ func (s *ModifyReplicationGroupShardConfigurationInput) SetNodeGroupsToRemove(v return s } +// SetNodeGroupsToRetain sets the NodeGroupsToRetain field's value. +func (s *ModifyReplicationGroupShardConfigurationInput) SetNodeGroupsToRetain(v []*string) *ModifyReplicationGroupShardConfigurationInput { + s.NodeGroupsToRetain = v + return s +} + // SetReplicationGroupId sets the ReplicationGroupId field's value. func (s *ModifyReplicationGroupShardConfigurationInput) SetReplicationGroupId(v string) *ModifyReplicationGroupShardConfigurationInput { s.ReplicationGroupId = &v @@ -10472,6 +11116,9 @@ func (s *NodeGroup) SetStatus(v string) *NodeGroup { type NodeGroupConfiguration struct { _ struct{} `type:"structure"` + // The 4-digit id for the node group these configuration values apply to. + NodeGroupId *string `min:"1" type:"string"` + // The Availability Zone where the primary node of this node group (shard) is // launched. PrimaryAvailabilityZone *string `type:"string"` @@ -10501,6 +11148,25 @@ func (s NodeGroupConfiguration) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *NodeGroupConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NodeGroupConfiguration"} + if s.NodeGroupId != nil && len(*s.NodeGroupId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NodeGroupId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNodeGroupId sets the NodeGroupId field's value. +func (s *NodeGroupConfiguration) SetNodeGroupId(v string) *NodeGroupConfiguration { + s.NodeGroupId = &v + return s +} + // SetPrimaryAvailabilityZone sets the PrimaryAvailabilityZone field's value. 
func (s *NodeGroupConfiguration) SetPrimaryAvailabilityZone(v string) *NodeGroupConfiguration { s.PrimaryAvailabilityZone = &v @@ -10536,14 +11202,16 @@ type NodeGroupMember struct { // (0001, 0002, etc.). CacheNodeId *string `type:"string"` - // The role that is currently assigned to the node - primary or replica. + // The role that is currently assigned to the node - primary or replica. This + // member is only applicable for Redis (cluster mode disabled) replication groups. CurrentRole *string `type:"string"` // The name of the Availability Zone in which the node is located. PreferredAvailabilityZone *string `type:"string"` - // Represents the information required for client programs to connect to a cache - // node. + // The information required for client programs to connect to a node for read + // operations. The read endpoint is only applicable on Redis (cluster mode disabled) + // clusters. ReadEndpoint *Endpoint `type:"structure"` } @@ -10595,7 +11263,7 @@ type NodeSnapshot struct { CacheClusterId *string `type:"string"` // The date and time when the cache node was created in the source cluster. - CacheNodeCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CacheNodeCreateTime *time.Time `type:"timestamp"` // The cache node identifier for the node in the source cluster. CacheNodeId *string `type:"string"` @@ -10611,7 +11279,7 @@ type NodeSnapshot struct { // The date and time when the source node's metadata and cache data set was // obtained for the snapshot. - SnapshotCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + SnapshotCreateTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -10712,7 +11380,7 @@ type Parameter struct { // Indicates whether a change to the parameter is applied immediately or requires // a reboot for the change to be applied. You can force a reboot or wait until // the next maintenance window's reboot. For more information, see Rebooting - // a Cluster (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Clusters.Rebooting.html). + // a Cluster (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Clusters.Rebooting.html). ChangeType *string `type:"string" enum:"ChangeType"` // The valid data type for the parameter. @@ -10842,7 +11510,7 @@ type PendingModifiedValues struct { _ struct{} `type:"structure"` // A list of cache node IDs that are being removed (or will be removed) from - // the cluster. A node ID is a numeric identifier (0001, 0002, etc.). + // the cluster. A node ID is a 4-digit numeric identifier (0001, 0002, etc.). CacheNodeIdsToRemove []*string `locationNameList:"CacheNodeId" type:"list"` // The cache node type that this cluster or replication group is scaled to. @@ -11161,6 +11829,9 @@ type ReplicationGroup struct { // is created. To enable encryption at-rest on a cluster you must set AtRestEncryptionEnabled // to true when you create a cluster. // + // Required: Only available when creating a replication group in an Amazon VPC + // using redis version 3.2.6 or 4.x. + // // Default: false AtRestEncryptionEnabled *bool `type:"boolean"` @@ -11200,7 +11871,7 @@ type ReplicationGroup struct { // The user supplied description of the replication group. Description *string `type:"string"` - // The identifiers of all the nodes that are part of this replication group. + // The names of all the cache clusters that are part of this replication group. MemberClusters []*string `locationNameList:"ClusterId" type:"list"` // A list of node groups in this replication group. 
For Redis (cluster mode @@ -11249,6 +11920,9 @@ type ReplicationGroup struct { // is created. To enable in-transit encryption on a cluster you must set TransitEncryptionEnabled // to true when you create a cluster. // + // Required: Only available when creating a replication group in an Amazon VPC + // using redis version 3.2.6 or 4.x. + // // Default: false TransitEncryptionEnabled *bool `type:"boolean"` } @@ -11456,6 +12130,9 @@ type ReservedCacheNode struct { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // + // R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, + // cache.r4.8xlarge, cache.r4.16xlarge + // // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -11474,10 +12151,13 @@ type ReservedCacheNode struct { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // - // For a complete listing of node types and specifications, see Amazon ElastiCache - // Product Features and Details (http://aws.amazon.com/elasticache/details) - // and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) - // or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). + // For a complete listing of node types and specifications, see: + // + // * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) + // + // * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) + // + // * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) CacheNodeType *string `type:"string"` // The duration of the reservation in seconds. @@ -11495,6 +12175,11 @@ type ReservedCacheNode struct { // The recurring price charged to run this reserved cache node. RecurringCharges []*RecurringCharge `locationNameList:"RecurringCharge" type:"list"` + // The Amazon Resource Name (ARN) of the reserved cache node. + // + // Example: arn:aws:elasticache:us-east-1:123456789012:reserved-instance:ri-2017-03-27-08-33-25-582 + ReservationARN *string `type:"string"` + // The unique identifier for the reservation. ReservedCacheNodeId *string `type:"string"` @@ -11502,7 +12187,7 @@ type ReservedCacheNode struct { ReservedCacheNodesOfferingId *string `type:"string"` // The time the reservation started. - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` // The state of the reserved cache node. State *string `type:"string"` @@ -11563,6 +12248,12 @@ func (s *ReservedCacheNode) SetRecurringCharges(v []*RecurringCharge) *ReservedC return s } +// SetReservationARN sets the ReservationARN field's value. +func (s *ReservedCacheNode) SetReservationARN(v string) *ReservedCacheNode { + s.ReservationARN = &v + return s +} + // SetReservedCacheNodeId sets the ReservedCacheNodeId field's value. 
func (s *ReservedCacheNode) SetReservedCacheNodeId(v string) *ReservedCacheNode { s.ReservedCacheNodeId = &v @@ -11633,6 +12324,9 @@ type ReservedCacheNodesOffering struct { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // + // R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, + // cache.r4.8xlarge, cache.r4.16xlarge + // // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -11651,10 +12345,13 @@ type ReservedCacheNodesOffering struct { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // - // For a complete listing of node types and specifications, see Amazon ElastiCache - // Product Features and Details (http://aws.amazon.com/elasticache/details) - // and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) - // or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). + // For a complete listing of node types and specifications, see: + // + // * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) + // + // * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) + // + // * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) CacheNodeType *string `type:"string"` // The duration of the offering. in seconds. @@ -11805,6 +12502,9 @@ func (s *ResetCacheParameterGroupInput) SetResetAllParameters(v bool) *ResetCach type ReshardingConfiguration struct { _ struct{} `type:"structure"` + // The 4-digit id for the node group these configuration values apply to. + NodeGroupId *string `min:"1" type:"string"` + // A list of preferred availability zones for the nodes in this cluster. PreferredAvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` } @@ -11819,6 +12519,25 @@ func (s ReshardingConfiguration) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReshardingConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReshardingConfiguration"} + if s.NodeGroupId != nil && len(*s.NodeGroupId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NodeGroupId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNodeGroupId sets the NodeGroupId field's value. +func (s *ReshardingConfiguration) SetNodeGroupId(v string) *ReshardingConfiguration { + s.NodeGroupId = &v + return s +} + // SetPreferredAvailabilityZones sets the PreferredAvailabilityZones field's value. func (s *ReshardingConfiguration) SetPreferredAvailabilityZones(v []*string) *ReshardingConfiguration { s.PreferredAvailabilityZones = v @@ -12028,7 +12747,7 @@ type Snapshot struct { AutomaticFailover *string `type:"string" enum:"AutomaticFailoverStatus"` // The date and time when the source cluster was created. 
- CacheClusterCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CacheClusterCreateTime *time.Time `type:"timestamp"` // The user-supplied identifier of the source cluster. CacheClusterId *string `type:"string"` @@ -12069,6 +12788,9 @@ type Snapshot struct { // R3 node types:cache.r3.large, cache.r3.xlarge, cache.r3.2xlarge, cache.r3.4xlarge, // cache.r3.8xlarge // + // R4 node types;cache.r4.large, cache.r4.xlarge, cache.r4.2xlarge, cache.r4.4xlarge, + // cache.r4.8xlarge, cache.r4.16xlarge + // // Previous generation: (not recommended) // // M2 node types:cache.m2.xlarge, cache.m2.2xlarge, cache.m2.4xlarge @@ -12087,10 +12809,13 @@ type Snapshot struct { // * Redis Append-only files (AOF) functionality is not supported for T1 // or T2 instances. // - // For a complete listing of node types and specifications, see Amazon ElastiCache - // Product Features and Details (http://aws.amazon.com/elasticache/details) - // and either Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) - // or Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific). + // For a complete listing of node types and specifications, see: + // + // * Amazon ElastiCache Product Features and Details (http://aws.amazon.com/elasticache/details) + // + // * Cache Node Type-Specific Parameters for Memcached (http://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/ParameterGroups.Memcached.html#ParameterGroups.Memcached.NodeSpecific) + // + // * Cache Node Type-Specific Parameters for Redis (http://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ParameterGroups.Redis.html#ParameterGroups.Redis.NodeSpecific) CacheNodeType *string `type:"string"` // The cache parameter group that is associated with the source cluster. @@ -12448,7 +13173,7 @@ type TestFailoverInput struct { // failover on up to 5 node groups in any rolling 24-hour period. // // NodeGroupId is a required field - NodeGroupId *string `type:"string" required:"true"` + NodeGroupId *string `min:"1" type:"string" required:"true"` // The name of the replication group (console: cluster) whose automatic failover // is being tested by this operation. @@ -12473,6 +13198,9 @@ func (s *TestFailoverInput) Validate() error { if s.NodeGroupId == nil { invalidParams.Add(request.NewErrParamRequired("NodeGroupId")) } + if s.NodeGroupId != nil && len(*s.NodeGroupId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NodeGroupId", 1)) + } if s.ReplicationGroupId == nil { invalidParams.Add(request.NewErrParamRequired("ReplicationGroupId")) } diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticache/errors.go b/vendor/github.com/aws/aws-sdk-go/service/elasticache/errors.go index 272ee75b800..e35a9ece81a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elasticache/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elasticache/errors.go @@ -186,6 +186,12 @@ const ( // The VPC network is in an invalid state. ErrCodeInvalidVPCNetworkStateFault = "InvalidVPCNetworkStateFault" + // ErrCodeNoOperationFault for service response error code + // "NoOperationFault". + // + // The operation was not performed because no changes were required. 
+ ErrCodeNoOperationFault = "NoOperationFault" + // ErrCodeNodeGroupNotFoundFault for service response error code // "NodeGroupNotFoundFault". // @@ -253,6 +259,12 @@ const ( // The requested cache node offering does not exist. ErrCodeReservedCacheNodesOfferingNotFoundFault = "ReservedCacheNodesOfferingNotFound" + // ErrCodeServiceLinkedRoleNotFoundFault for service response error code + // "ServiceLinkedRoleNotFoundFault". + // + // The specified service linked role (SLR) was not found. + ErrCodeServiceLinkedRoleNotFoundFault = "ServiceLinkedRoleNotFoundFault" + // ErrCodeSnapshotAlreadyExistsFault for service response error code // "SnapshotAlreadyExistsFault". // @@ -308,5 +320,7 @@ const ( // ErrCodeTestFailoverNotAvailableFault for service response error code // "TestFailoverNotAvailableFault". + // + // The TestFailover action is not available. ErrCodeTestFailoverNotAvailableFault = "TestFailoverNotAvailableFault" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticache/service.go b/vendor/github.com/aws/aws-sdk-go/service/elasticache/service.go index 40bed298a61..fd5f8c51707 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elasticache/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elasticache/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "elasticache" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "elasticache" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "ElastiCache" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ElastiCache client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/api.go b/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/api.go index 14f972572a5..1f3a76b0426 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/api.go @@ -17,8 +17,8 @@ const opAbortEnvironmentUpdate = "AbortEnvironmentUpdate" // AbortEnvironmentUpdateRequest generates a "aws/request.Request" representing the // client's request for the AbortEnvironmentUpdate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -71,7 +71,7 @@ func (c *ElasticBeanstalk) AbortEnvironmentUpdateRequest(input *AbortEnvironment // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/AbortEnvironmentUpdate @@ -100,8 +100,8 @@ const opApplyEnvironmentManagedAction = "ApplyEnvironmentManagedAction" // ApplyEnvironmentManagedActionRequest generates a "aws/request.Request" representing the // client's request for the ApplyEnvironmentManagedAction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -152,7 +152,7 @@ func (c *ElasticBeanstalk) ApplyEnvironmentManagedActionRequest(input *ApplyEnvi // API operation ApplyEnvironmentManagedAction for usage and error information. // // Returned Error Codes: -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. // // * ErrCodeManagedActionInvalidStateException "ManagedActionInvalidStateException" @@ -184,8 +184,8 @@ const opCheckDNSAvailability = "CheckDNSAvailability" // CheckDNSAvailabilityRequest generates a "aws/request.Request" representing the // client's request for the CheckDNSAvailability operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -258,8 +258,8 @@ const opComposeEnvironments = "ComposeEnvironments" // ComposeEnvironmentsRequest generates a "aws/request.Request" representing the // client's request for the ComposeEnvironments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -318,7 +318,7 @@ func (c *ElasticBeanstalk) ComposeEnvironmentsRequest(input *ComposeEnvironments // The specified account has reached its limit of environments. // // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/ComposeEnvironments @@ -347,8 +347,8 @@ const opCreateApplication = "CreateApplication" // CreateApplicationRequest generates a "aws/request.Request" representing the // client's request for the CreateApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
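Several of the Elastic Beanstalk hunks in this file correct the documented string for `ErrCodeServiceException` to `"ElasticBeanstalkServiceException"` and reword the `ErrCodeInsufficientPrivilegesException` description. A minimal, hedged sketch of branching on those codes with `awserr` (the handler name and print statements are illustrative assumptions, not part of this changeset):

```go
// Sketch only: matches the Elastic Beanstalk error codes referenced in the hunks
// above. Any Elastic Beanstalk API call's returned error can be passed in.
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/elasticbeanstalk"
)

func handleBeanstalkError(err error) {
	if err == nil {
		return
	}
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case elasticbeanstalk.ErrCodeServiceException:
			// Documented above as "ElasticBeanstalkServiceException".
			fmt.Println("generic service exception:", aerr.Message())
		case elasticbeanstalk.ErrCodeInsufficientPrivilegesException:
			// "The specified account does not have sufficient privileges for one or more AWS services."
			fmt.Println("insufficient privileges:", aerr.Message())
		default:
			fmt.Println("other Elastic Beanstalk error:", aerr.Code(), aerr.Message())
		}
		return
	}
	fmt.Println("non-AWS error:", err)
}
```

The same switch pattern would apply to the ElastiCache codes introduced earlier in this diff (`ErrCodeNoOperationFault`, `ErrCodeServiceLinkedRoleNotFoundFault`).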
@@ -427,8 +427,8 @@ const opCreateApplicationVersion = "CreateApplicationVersion" // CreateApplicationVersionRequest generates a "aws/request.Request" representing the // client's request for the CreateApplicationVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -500,7 +500,7 @@ func (c *ElasticBeanstalk) CreateApplicationVersionRequest(input *CreateApplicat // The specified account has reached its limit of application versions. // // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // * ErrCodeS3LocationNotInServiceRegionException "S3LocationNotInServiceRegionException" @@ -542,8 +542,8 @@ const opCreateConfigurationTemplate = "CreateConfigurationTemplate" // CreateConfigurationTemplateRequest generates a "aws/request.Request" representing the // client's request for the CreateConfigurationTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -586,6 +586,9 @@ func (c *ElasticBeanstalk) CreateConfigurationTemplateRequest(input *CreateConfi // application and are used to deploy different versions of the application // with the same configuration settings. // +// Templates aren't associated with any environment. The EnvironmentName response +// element is always null. +// // Related Topics // // * DescribeConfigurationOptions @@ -603,7 +606,7 @@ func (c *ElasticBeanstalk) CreateConfigurationTemplateRequest(input *CreateConfi // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // * ErrCodeTooManyBucketsException "TooManyBucketsException" @@ -638,8 +641,8 @@ const opCreateEnvironment = "CreateEnvironment" // CreateEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the CreateEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -693,7 +696,7 @@ func (c *ElasticBeanstalk) CreateEnvironmentRequest(input *CreateEnvironmentInpu // The specified account has reached its limit of environments. 
// // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/CreateEnvironment @@ -722,8 +725,8 @@ const opCreatePlatformVersion = "CreatePlatformVersion" // CreatePlatformVersionRequest generates a "aws/request.Request" representing the // client's request for the CreatePlatformVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -773,10 +776,10 @@ func (c *ElasticBeanstalk) CreatePlatformVersionRequest(input *CreatePlatformVer // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. // // * ErrCodeTooManyPlatformsException "TooManyPlatformsException" @@ -809,8 +812,8 @@ const opCreateStorageLocation = "CreateStorageLocation" // CreateStorageLocationRequest generates a "aws/request.Request" representing the // client's request for the CreateStorageLocation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -870,7 +873,7 @@ func (c *ElasticBeanstalk) CreateStorageLocationRequest(input *CreateStorageLoca // The specified account does not have a subscription to Amazon S3. // // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/CreateStorageLocation @@ -899,8 +902,8 @@ const opDeleteApplication = "DeleteApplication" // DeleteApplicationRequest generates a "aws/request.Request" representing the // client's request for the DeleteApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -985,8 +988,8 @@ const opDeleteApplicationVersion = "DeleteApplicationVersion" // DeleteApplicationVersionRequest generates a "aws/request.Request" representing the // client's request for the DeleteApplicationVersion operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1045,7 +1048,7 @@ func (c *ElasticBeanstalk) DeleteApplicationVersionRequest(input *DeleteApplicat // version. The application version was deleted successfully. // // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // * ErrCodeOperationInProgressException "OperationInProgressFailure" @@ -1088,8 +1091,8 @@ const opDeleteConfigurationTemplate = "DeleteConfigurationTemplate" // DeleteConfigurationTemplateRequest generates a "aws/request.Request" representing the // client's request for the DeleteConfigurationTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1174,8 +1177,8 @@ const opDeleteEnvironmentConfiguration = "DeleteEnvironmentConfiguration" // DeleteEnvironmentConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteEnvironmentConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1257,8 +1260,8 @@ const opDeletePlatformVersion = "DeletePlatformVersion" // DeletePlatformVersionRequest generates a "aws/request.Request" representing the // client's request for the DeletePlatformVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1312,10 +1315,10 @@ func (c *ElasticBeanstalk) DeletePlatformVersionRequest(input *DeletePlatformVer // effects an element in this activity is already in progress. // // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. 
// // * ErrCodePlatformVersionStillReferencedException "PlatformVersionStillReferencedException" @@ -1344,12 +1347,95 @@ func (c *ElasticBeanstalk) DeletePlatformVersionWithContext(ctx aws.Context, inp return out, req.Send() } +const opDescribeAccountAttributes = "DescribeAccountAttributes" + +// DescribeAccountAttributesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAccountAttributes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAccountAttributes for more information on using the DescribeAccountAttributes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAccountAttributesRequest method. +// req, resp := client.DescribeAccountAttributesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/DescribeAccountAttributes +func (c *ElasticBeanstalk) DescribeAccountAttributesRequest(input *DescribeAccountAttributesInput) (req *request.Request, output *DescribeAccountAttributesOutput) { + op := &request.Operation{ + Name: opDescribeAccountAttributes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAccountAttributesInput{} + } + + output = &DescribeAccountAttributesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAccountAttributes API operation for AWS Elastic Beanstalk. +// +// Returns attributes related to AWS Elastic Beanstalk that are associated with +// the calling AWS account. +// +// The result currently has one set of attributes—resource quotas. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elastic Beanstalk's +// API operation DescribeAccountAttributes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" +// The specified account does not have sufficient privileges for one or more +// AWS services. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/DescribeAccountAttributes +func (c *ElasticBeanstalk) DescribeAccountAttributes(input *DescribeAccountAttributesInput) (*DescribeAccountAttributesOutput, error) { + req, out := c.DescribeAccountAttributesRequest(input) + return out, req.Send() +} + +// DescribeAccountAttributesWithContext is the same as DescribeAccountAttributes with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAccountAttributes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ElasticBeanstalk) DescribeAccountAttributesWithContext(ctx aws.Context, input *DescribeAccountAttributesInput, opts ...request.Option) (*DescribeAccountAttributesOutput, error) { + req, out := c.DescribeAccountAttributesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeApplicationVersions = "DescribeApplicationVersions" // DescribeApplicationVersionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeApplicationVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1422,8 +1508,8 @@ const opDescribeApplications = "DescribeApplications" // DescribeApplicationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeApplications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1496,8 +1582,8 @@ const opDescribeConfigurationOptions = "DescribeConfigurationOptions" // DescribeConfigurationOptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeConfigurationOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1579,8 +1665,8 @@ const opDescribeConfigurationSettings = "DescribeConfigurationSettings" // DescribeConfigurationSettingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeConfigurationSettings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1670,8 +1756,8 @@ const opDescribeEnvironmentHealth = "DescribeEnvironmentHealth" // DescribeEnvironmentHealthRequest generates a "aws/request.Request" representing the // client's request for the DescribeEnvironmentHealth operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
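The hunk above introduces the `DescribeAccountAttributes` operation for Elastic Beanstalk, which takes an empty input and, per its documentation, currently returns one set of attributes (resource quotas). A hedged sketch of invoking it (the region and session setup are illustrative assumptions):

```go
// Sketch only: calls the DescribeAccountAttributes operation added in the hunk
// above and prints the generated output type.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticbeanstalk"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := elasticbeanstalk.New(sess)

	out, err := svc.DescribeAccountAttributes(&elasticbeanstalk.DescribeAccountAttributesInput{})
	if err != nil {
		fmt.Println("DescribeAccountAttributes failed:", err)
		return
	}
	// The generated output's String() pretty-prints the account's resource quotas.
	fmt.Println(out)
}
```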
@@ -1726,7 +1812,7 @@ func (c *ElasticBeanstalk) DescribeEnvironmentHealthRequest(input *DescribeEnvir // One or more input parameters is not valid. Please correct the input parameters // and try the operation again. // -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/DescribeEnvironmentHealth @@ -1755,8 +1841,8 @@ const opDescribeEnvironmentManagedActionHistory = "DescribeEnvironmentManagedAct // DescribeEnvironmentManagedActionHistoryRequest generates a "aws/request.Request" representing the // client's request for the DescribeEnvironmentManagedActionHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1805,7 +1891,7 @@ func (c *ElasticBeanstalk) DescribeEnvironmentManagedActionHistoryRequest(input // API operation DescribeEnvironmentManagedActionHistory for usage and error information. // // Returned Error Codes: -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/DescribeEnvironmentManagedActionHistory @@ -1834,8 +1920,8 @@ const opDescribeEnvironmentManagedActions = "DescribeEnvironmentManagedActions" // DescribeEnvironmentManagedActionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEnvironmentManagedActions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1884,7 +1970,7 @@ func (c *ElasticBeanstalk) DescribeEnvironmentManagedActionsRequest(input *Descr // API operation DescribeEnvironmentManagedActions for usage and error information. // // Returned Error Codes: -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/DescribeEnvironmentManagedActions @@ -1913,8 +1999,8 @@ const opDescribeEnvironmentResources = "DescribeEnvironmentResources" // DescribeEnvironmentResourcesRequest generates a "aws/request.Request" representing the // client's request for the DescribeEnvironmentResources operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1964,7 +2050,7 @@ func (c *ElasticBeanstalk) DescribeEnvironmentResourcesRequest(input *DescribeEn // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/DescribeEnvironmentResources @@ -1993,8 +2079,8 @@ const opDescribeEnvironments = "DescribeEnvironments" // DescribeEnvironmentsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEnvironments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2067,8 +2153,8 @@ const opDescribeEvents = "DescribeEvents" // DescribeEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2199,8 +2285,8 @@ const opDescribeInstancesHealth = "DescribeInstancesHealth" // DescribeInstancesHealthRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstancesHealth operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2239,8 +2325,8 @@ func (c *ElasticBeanstalk) DescribeInstancesHealthRequest(input *DescribeInstanc // DescribeInstancesHealth API operation for AWS Elastic Beanstalk. // -// Retrives detailed information about the health of instances in your AWS Elastic -// Beanstalk. This operation requires enhanced health reporting (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced.html). +// Retrieves detailed information about the health of instances in your AWS +// Elastic Beanstalk. This operation requires enhanced health reporting (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2254,7 +2340,7 @@ func (c *ElasticBeanstalk) DescribeInstancesHealthRequest(input *DescribeInstanc // One or more input parameters is not valid. Please correct the input parameters // and try the operation again. // -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/DescribeInstancesHealth @@ -2283,8 +2369,8 @@ const opDescribePlatformVersion = "DescribePlatformVersion" // DescribePlatformVersionRequest generates a "aws/request.Request" representing the // client's request for the DescribePlatformVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2334,10 +2420,10 @@ func (c *ElasticBeanstalk) DescribePlatformVersionRequest(input *DescribePlatfor // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/DescribePlatformVersion @@ -2366,8 +2452,8 @@ const opListAvailableSolutionStacks = "ListAvailableSolutionStacks" // ListAvailableSolutionStacksRequest generates a "aws/request.Request" representing the // client's request for the ListAvailableSolutionStacks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2441,8 +2527,8 @@ const opListPlatformVersions = "ListPlatformVersions" // ListPlatformVersionsRequest generates a "aws/request.Request" representing the // client's request for the ListPlatformVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2492,10 +2578,10 @@ func (c *ElasticBeanstalk) ListPlatformVersionsRequest(input *ListPlatformVersio // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // -// * ErrCodeServiceException "ServiceException" +// * ErrCodeServiceException "ElasticBeanstalkServiceException" // A generic service exception has occurred. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/ListPlatformVersions @@ -2524,8 +2610,8 @@ const opListTagsForResource = "ListTagsForResource" // ListTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResource operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2580,7 +2666,7 @@ func (c *ElasticBeanstalk) ListTagsForResourceRequest(input *ListTagsForResource // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // * ErrCodeResourceNotFoundException "ResourceNotFoundException" @@ -2616,8 +2702,8 @@ const opRebuildEnvironment = "RebuildEnvironment" // RebuildEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the RebuildEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2670,7 +2756,7 @@ func (c *ElasticBeanstalk) RebuildEnvironmentRequest(input *RebuildEnvironmentIn // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/RebuildEnvironment @@ -2699,8 +2785,8 @@ const opRequestEnvironmentInfo = "RequestEnvironmentInfo" // RequestEnvironmentInfoRequest generates a "aws/request.Request" representing the // client's request for the RequestEnvironmentInfo operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2789,8 +2875,8 @@ const opRestartAppServer = "RestartAppServer" // RestartAppServerRequest generates a "aws/request.Request" representing the // client's request for the RestartAppServer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2866,8 +2952,8 @@ const opRetrieveEnvironmentInfo = "RetrieveEnvironmentInfo" // RetrieveEnvironmentInfoRequest generates a "aws/request.Request" representing the // client's request for the RetrieveEnvironmentInfo operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2944,8 +3030,8 @@ const opSwapEnvironmentCNAMEs = "SwapEnvironmentCNAMEs" // SwapEnvironmentCNAMEsRequest generates a "aws/request.Request" representing the // client's request for the SwapEnvironmentCNAMEs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3020,8 +3106,8 @@ const opTerminateEnvironment = "TerminateEnvironment" // TerminateEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the TerminateEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3071,7 +3157,7 @@ func (c *ElasticBeanstalk) TerminateEnvironmentRequest(input *TerminateEnvironme // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/TerminateEnvironment @@ -3100,8 +3186,8 @@ const opUpdateApplication = "UpdateApplication" // UpdateApplicationRequest generates a "aws/request.Request" representing the // client's request for the UpdateApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3177,8 +3263,8 @@ const opUpdateApplicationResourceLifecycle = "UpdateApplicationResourceLifecycle // UpdateApplicationResourceLifecycleRequest generates a "aws/request.Request" representing the // client's request for the UpdateApplicationResourceLifecycle operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3228,7 +3314,7 @@ func (c *ElasticBeanstalk) UpdateApplicationResourceLifecycleRequest(input *Upda // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticbeanstalk-2010-12-01/UpdateApplicationResourceLifecycle @@ -3257,8 +3343,8 @@ const opUpdateApplicationVersion = "UpdateApplicationVersion" // UpdateApplicationVersionRequest generates a "aws/request.Request" representing the // client's request for the UpdateApplicationVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3334,8 +3420,8 @@ const opUpdateConfigurationTemplate = "UpdateConfigurationTemplate" // UpdateConfigurationTemplateRequest generates a "aws/request.Request" representing the // client's request for the UpdateConfigurationTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3393,7 +3479,7 @@ func (c *ElasticBeanstalk) UpdateConfigurationTemplateRequest(input *UpdateConfi // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // * ErrCodeTooManyBucketsException "TooManyBucketsException" @@ -3425,8 +3511,8 @@ const opUpdateEnvironment = "UpdateEnvironment" // UpdateEnvironmentRequest generates a "aws/request.Request" representing the // client's request for the UpdateEnvironment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3486,7 +3572,7 @@ func (c *ElasticBeanstalk) UpdateEnvironmentRequest(input *UpdateEnvironmentInpu // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // * ErrCodeTooManyBucketsException "TooManyBucketsException" @@ -3518,8 +3604,8 @@ const opUpdateTagsForResource = "UpdateTagsForResource" // UpdateTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the UpdateTagsForResource operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3589,7 +3675,7 @@ func (c *ElasticBeanstalk) UpdateTagsForResourceRequest(input *UpdateTagsForReso // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // * ErrCodeOperationInProgressException "OperationInProgressFailure" @@ -3636,8 +3722,8 @@ const opValidateConfigurationSettings = "ValidateConfigurationSettings" // ValidateConfigurationSettingsRequest generates a "aws/request.Request" representing the // client's request for the ValidateConfigurationSettings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3691,7 +3777,7 @@ func (c *ElasticBeanstalk) ValidateConfigurationSettingsRequest(input *ValidateC // // Returned Error Codes: // * ErrCodeInsufficientPrivilegesException "InsufficientPrivilegesException" -// The specified account does not have sufficient privileges for one of more +// The specified account does not have sufficient privileges for one or more // AWS services. // // * ErrCodeTooManyBucketsException "TooManyBucketsException" @@ -3784,6 +3870,9 @@ func (s AbortEnvironmentUpdateOutput) GoString() string { type ApplicationDescription struct { _ struct{} `type:"structure"` + // The Amazon Resource Name (ARN) of the application. + ApplicationArn *string `type:"string"` + // The name of the application. ApplicationName *string `min:"1" type:"string"` @@ -3791,10 +3880,10 @@ type ApplicationDescription struct { ConfigurationTemplates []*string `type:"list"` // The date when the application was created. - DateCreated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateCreated *time.Time `type:"timestamp"` // The date when the application was last modified. - DateUpdated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateUpdated *time.Time `type:"timestamp"` // User-defined description of the application. Description *string `type:"string"` @@ -3816,6 +3905,12 @@ func (s ApplicationDescription) GoString() string { return s.String() } +// SetApplicationArn sets the ApplicationArn field's value. +func (s *ApplicationDescription) SetApplicationArn(v string) *ApplicationDescription { + s.ApplicationArn = &v + return s +} + // SetApplicationName sets the ApplicationName field's value. func (s *ApplicationDescription) SetApplicationName(v string) *ApplicationDescription { s.ApplicationName = &v @@ -3947,6 +4042,14 @@ type ApplicationResourceLifecycleConfig struct { _ struct{} `type:"structure"` // The ARN of an IAM service role that Elastic Beanstalk has permission to assume. 
+ // + // The ServiceRole property is required the first time that you provide a VersionLifecycleConfig + // for the application in one of the supporting calls (CreateApplication or + // UpdateApplicationResourceLifecycle). After you provide it once, in either + // one of the calls, Elastic Beanstalk persists the Service Role with the application, + // and you don't need to specify it again in subsequent UpdateApplicationResourceLifecycle + // calls. You can, however, specify it in subsequent calls to change the Service + // Role to another value. ServiceRole *string `type:"string"` // The application version lifecycle configuration. @@ -3997,14 +4100,17 @@ type ApplicationVersionDescription struct { // The name of the application to which the application version belongs. ApplicationName *string `min:"1" type:"string"` + // The Amazon Resource Name (ARN) of the application version. + ApplicationVersionArn *string `type:"string"` + // Reference to the artifact from the AWS CodeBuild build. BuildArn *string `type:"string"` // The creation date of the application version. - DateCreated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateCreated *time.Time `type:"timestamp"` // The last modified date of the application version. - DateUpdated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateUpdated *time.Time `type:"timestamp"` // The description of the application version. Description *string `type:"string"` @@ -4017,7 +4123,25 @@ type ApplicationVersionDescription struct { // S3. SourceBundle *S3Location `type:"structure"` - // The processing status of the application version. + // The processing status of the application version. Reflects the state of the + // application version during its creation. Many of the values are only applicable + // if you specified True for the Process parameter of the CreateApplicationVersion + // action. The following list describes the possible values. + // + // * Unprocessed – Application version wasn't pre-processed or validated. + // Elastic Beanstalk will validate configuration files during deployment + // of the application version to an environment. + // + // * Processing – Elastic Beanstalk is currently processing the application + // version. + // + // * Building – Application version is currently undergoing an AWS CodeBuild + // build. + // + // * Processed – Elastic Beanstalk was successfully pre-processed and validated. + // + // * Failed – Either the AWS CodeBuild build failed or configuration files + // didn't pass validation. This application version isn't usable. Status *string `type:"string" enum:"ApplicationVersionStatus"` // A unique identifier for the application version. @@ -4040,6 +4164,12 @@ func (s *ApplicationVersionDescription) SetApplicationName(v string) *Applicatio return s } +// SetApplicationVersionArn sets the ApplicationVersionArn field's value. +func (s *ApplicationVersionDescription) SetApplicationVersionArn(v string) *ApplicationVersionDescription { + s.ApplicationVersionArn = &v + return s +} + // SetBuildArn sets the BuildArn field's value. func (s *ApplicationVersionDescription) SetBuildArn(v string) *ApplicationVersionDescription { s.BuildArn = &v @@ -4426,10 +4556,14 @@ func (s *Builder) SetARN(v string) *Builder { type CPUUtilization struct { _ struct{} `type:"structure"` + // Available on Linux environments only. + // // Percentage of time that the CPU has spent in the I/O Wait state over the // last 10 seconds. IOWait *float64 `type:"double"` + // Available on Linux environments only. 
+ // // Percentage of time that the CPU has spent in the IRQ state over the last // 10 seconds. IRQ *float64 `type:"double"` @@ -4438,14 +4572,26 @@ type CPUUtilization struct { // 10 seconds. Idle *float64 `type:"double"` + // Available on Linux environments only. + // // Percentage of time that the CPU has spent in the Nice state over the last // 10 seconds. Nice *float64 `type:"double"` + // Available on Windows environments only. + // + // Percentage of time that the CPU has spent in the Privileged state over the + // last 10 seconds. + Privileged *float64 `type:"double"` + + // Available on Linux environments only. + // // Percentage of time that the CPU has spent in the SoftIRQ state over the last // 10 seconds. SoftIRQ *float64 `type:"double"` + // Available on Linux environments only. + // // Percentage of time that the CPU has spent in the System state over the last // 10 seconds. System *float64 `type:"double"` @@ -4489,6 +4635,12 @@ func (s *CPUUtilization) SetNice(v float64) *CPUUtilization { return s } +// SetPrivileged sets the Privileged field's value. +func (s *CPUUtilization) SetPrivileged(v float64) *CPUUtilization { + s.Privileged = &v + return s +} + // SetSoftIRQ sets the SoftIRQ field's value. func (s *CPUUtilization) SetSoftIRQ(v float64) *CPUUtilization { s.SoftIRQ = &v @@ -4881,10 +5033,10 @@ type ConfigurationSettingsDescription struct { ApplicationName *string `min:"1" type:"string"` // The date (in UTC time) when this configuration set was created. - DateCreated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateCreated *time.Time `type:"timestamp"` // The date (in UTC time) when this configuration set was last modified. - DateUpdated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateUpdated *time.Time `type:"timestamp"` // If this configuration set is associated with an environment, the DeploymentStatus // parameter indicates the deployment status of this configuration set: @@ -5079,11 +5231,15 @@ type CreateApplicationVersionInput struct { // Describes this version. Description *string `type:"string"` - // Preprocesses and validates the environment manifest (env.yaml) and configuration + // Pre-processes and validates the environment manifest (env.yaml) and configuration // files (*.config files in the .ebextensions folder) in the source bundle. // Validating configuration files can identify issues prior to deploying the // application version to an environment. // + // You must turn processing on for application versions that you create using + // AWS CodeBuild or AWS CodeCommit. For application versions built from a source + // bundle in Amazon S3, processing is optional. + // // The Process option validates Elastic Beanstalk configuration files. It doesn't // validate your application's configuration files, like proxy server or Docker // configuration. @@ -5417,6 +5573,9 @@ type CreateEnvironmentInput struct { // This is an alternative to specifying a template name. If specified, AWS Elastic // Beanstalk sets the configuration values to the default values associated // with the specified solution stack. + // + // For a list of current solution stacks, see Elastic Beanstalk Supported Platforms + // (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html). SolutionStackName *string `type:"string"` // This specifies the tags applied to resources in the environment. @@ -6143,7 +6302,7 @@ type Deployment struct { // For in-progress deployments, the time that the deployment started. 
// // For completed deployments, the time that the deployment ended. - DeploymentTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DeploymentTime *time.Time `type:"timestamp"` // The status of the deployment: // @@ -6192,6 +6351,43 @@ func (s *Deployment) SetVersionLabel(v string) *Deployment { return s } +type DescribeAccountAttributesInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DescribeAccountAttributesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountAttributesInput) GoString() string { + return s.String() +} + +type DescribeAccountAttributesOutput struct { + _ struct{} `type:"structure"` + + // The Elastic Beanstalk resource quotas associated with the calling AWS account. + ResourceQuotas *ResourceQuotas `type:"structure"` +} + +// String returns the string representation +func (s DescribeAccountAttributesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountAttributesOutput) GoString() string { + return s.String() +} + +// SetResourceQuotas sets the ResourceQuotas field's value. +func (s *DescribeAccountAttributesOutput) SetResourceQuotas(v *ResourceQuotas) *DescribeAccountAttributesOutput { + s.ResourceQuotas = v + return s +} + // Request to describe application versions. type DescribeApplicationVersionsInput struct { _ struct{} `type:"structure"` @@ -6680,7 +6876,7 @@ type DescribeEnvironmentHealthOutput struct { InstancesHealth *InstanceHealthSummary `type:"structure"` // The date and time that the health information was retrieved. - RefreshedAt *time.Time `type:"timestamp" timestampFormat:"iso8601"` + RefreshedAt *time.Time `type:"timestamp"` // The environment's operational status. Ready, Launching, Updating, Terminating, // or Terminated. @@ -7013,7 +7209,7 @@ type DescribeEnvironmentsInput struct { // If specified when IncludeDeleted is set to true, then environments deleted // after this date are displayed. - IncludedDeletedBackTo *time.Time `type:"timestamp" timestampFormat:"iso8601"` + IncludedDeletedBackTo *time.Time `type:"timestamp"` // For a paginated request. Specify a maximum number of environments to include // in each response. @@ -7121,7 +7317,7 @@ type DescribeEventsInput struct { // If specified, AWS Elastic Beanstalk restricts the returned descriptions to // those that occur up to, but not including, the EndTime. - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `type:"timestamp"` // If specified, AWS Elastic Beanstalk restricts the returned descriptions to // those associated with this environment. @@ -7151,7 +7347,7 @@ type DescribeEventsInput struct { // If specified, AWS Elastic Beanstalk restricts the returned descriptions to // those that occur on or after this time. - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` // If specified, AWS Elastic Beanstalk restricts the returned descriptions to // those that are associated with this environment configuration. @@ -7377,13 +7573,17 @@ type DescribeInstancesHealthOutput struct { _ struct{} `type:"structure"` // Detailed health information about each instance. + // + // The output differs slightly between Linux and Windows environments. There + // is a difference in the members that are supported under the + // type. 
InstanceHealthList []*SingleInstanceHealth `type:"list"` // Pagination token for the next page of results, if available. NextToken *string `min:"1" type:"string"` // The date and time that the health information was retrieved. - RefreshedAt *time.Time `type:"timestamp" timestampFormat:"iso8601"` + RefreshedAt *time.Time `type:"timestamp"` } // String returns the string representation @@ -7479,10 +7679,10 @@ type EnvironmentDescription struct { CNAME *string `min:"1" type:"string"` // The creation date for this environment. - DateCreated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateCreated *time.Time `type:"timestamp"` // The last modified date for this environment. - DateUpdated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateUpdated *time.Time `type:"timestamp"` // Describes this environment. Description *string `type:"string"` @@ -7492,7 +7692,7 @@ type EnvironmentDescription struct { EndpointURL *string `type:"string"` // The environment's Amazon Resource Name (ARN), which can be used in other - // API reuqests that require an ARN. + // API requests that require an ARN. EnvironmentArn *string `type:"string"` // The ID of this environment. @@ -7738,7 +7938,7 @@ type EnvironmentInfoDescription struct { Message *string `type:"string"` // The time stamp when this information was retrieved. - SampleTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + SampleTimestamp *time.Time `type:"timestamp"` } // String returns the string representation @@ -7925,7 +8125,11 @@ type EnvironmentTier struct { // The type of this environment tier. Type *string `type:"string"` - // The version of this environment tier. + // The version of this environment tier. When you don't set a value to it, Elastic + // Beanstalk uses the latest compatible worker tier version. + // + // This member is deprecated. Any specific version that you set may become out + // of date. We recommend leaving it unspecified. Version *string `type:"string"` } @@ -7968,7 +8172,7 @@ type EventDescription struct { EnvironmentName *string `min:"4" type:"string"` // The date when the event occurred. - EventDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EventDate *time.Time `type:"timestamp"` // The event message. Message *string `type:"string"` @@ -8617,7 +8821,7 @@ type ManagedAction struct { // The start time of the maintenance window in which the managed action will // execute. - WindowStartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + WindowStartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -8674,7 +8878,7 @@ type ManagedActionHistoryItem struct { ActionType *string `type:"string" enum:"ActionType"` // The date and time that the action started executing. - ExecutedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ExecutedTime *time.Time `type:"timestamp"` // If the action failed, a description of the failure. FailureDescription *string `type:"string"` @@ -8683,7 +8887,7 @@ type ManagedActionHistoryItem struct { FailureType *string `type:"string" enum:"FailureType"` // The date and time that the action finished executing. - FinishedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + FinishedTime *time.Time `type:"timestamp"` // The status of the action. Status *string `type:"string" enum:"ActionHistoryStatus"` @@ -8963,10 +9167,10 @@ type PlatformDescription struct { CustomAmiList []*CustomAmi `type:"list"` // The date when the platform was created. 
- DateCreated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateCreated *time.Time `type:"timestamp"` // The date when the platform was last updated. - DateUpdated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + DateUpdated *time.Time `type:"timestamp"` // The description of the platform. Description *string `type:"string"` @@ -9527,6 +9731,93 @@ func (s RequestEnvironmentInfoOutput) GoString() string { return s.String() } +// The AWS Elastic Beanstalk quota information for a single resource type in +// an AWS account. It reflects the resource's limits for this account. +type ResourceQuota struct { + _ struct{} `type:"structure"` + + // The maximum number of instances of this Elastic Beanstalk resource type that + // an AWS account can use. + Maximum *int64 `type:"integer"` +} + +// String returns the string representation +func (s ResourceQuota) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceQuota) GoString() string { + return s.String() +} + +// SetMaximum sets the Maximum field's value. +func (s *ResourceQuota) SetMaximum(v int64) *ResourceQuota { + s.Maximum = &v + return s +} + +// A set of per-resource AWS Elastic Beanstalk quotas associated with an AWS +// account. They reflect Elastic Beanstalk resource limits for this account. +type ResourceQuotas struct { + _ struct{} `type:"structure"` + + // The quota for applications in the AWS account. + ApplicationQuota *ResourceQuota `type:"structure"` + + // The quota for application versions in the AWS account. + ApplicationVersionQuota *ResourceQuota `type:"structure"` + + // The quota for configuration templates in the AWS account. + ConfigurationTemplateQuota *ResourceQuota `type:"structure"` + + // The quota for custom platforms in the AWS account. + CustomPlatformQuota *ResourceQuota `type:"structure"` + + // The quota for environments in the AWS account. + EnvironmentQuota *ResourceQuota `type:"structure"` +} + +// String returns the string representation +func (s ResourceQuotas) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceQuotas) GoString() string { + return s.String() +} + +// SetApplicationQuota sets the ApplicationQuota field's value. +func (s *ResourceQuotas) SetApplicationQuota(v *ResourceQuota) *ResourceQuotas { + s.ApplicationQuota = v + return s +} + +// SetApplicationVersionQuota sets the ApplicationVersionQuota field's value. +func (s *ResourceQuotas) SetApplicationVersionQuota(v *ResourceQuota) *ResourceQuotas { + s.ApplicationVersionQuota = v + return s +} + +// SetConfigurationTemplateQuota sets the ConfigurationTemplateQuota field's value. +func (s *ResourceQuotas) SetConfigurationTemplateQuota(v *ResourceQuota) *ResourceQuotas { + s.ConfigurationTemplateQuota = v + return s +} + +// SetCustomPlatformQuota sets the CustomPlatformQuota field's value. +func (s *ResourceQuotas) SetCustomPlatformQuota(v *ResourceQuota) *ResourceQuotas { + s.CustomPlatformQuota = v + return s +} + +// SetEnvironmentQuota sets the EnvironmentQuota field's value. +func (s *ResourceQuotas) SetEnvironmentQuota(v *ResourceQuota) *ResourceQuotas { + s.EnvironmentQuota = v + return s +} + type RestartAppServerInput struct { _ struct{} `type:"structure"` @@ -9757,7 +10048,7 @@ type SingleInstanceHealth struct { InstanceType *string `type:"string"` // The time at which the EC2 instance was launched. 
- LaunchedAt *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LaunchedAt *time.Time `type:"timestamp"` // Operating system metrics from the instance. System *SystemStatus `type:"structure"` @@ -11287,6 +11578,9 @@ const ( // EnvironmentHealthStatusSevere is a EnvironmentHealthStatus enum value EnvironmentHealthStatusSevere = "Severe" + + // EnvironmentHealthStatusSuspended is a EnvironmentHealthStatus enum value + EnvironmentHealthStatusSuspended = "Suspended" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/errors.go b/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/errors.go index 6373e870ae7..d13041287a9 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/errors.go @@ -13,7 +13,7 @@ const ( // ErrCodeInsufficientPrivilegesException for service response error code // "InsufficientPrivilegesException". // - // The specified account does not have sufficient privileges for one of more + // The specified account does not have sufficient privileges for one or more // AWS services. ErrCodeInsufficientPrivilegesException = "InsufficientPrivilegesException" @@ -77,10 +77,10 @@ const ( ErrCodeS3SubscriptionRequiredException = "S3SubscriptionRequiredException" // ErrCodeServiceException for service response error code - // "ServiceException". + // "ElasticBeanstalkServiceException". // // A generic service exception has occurred. - ErrCodeServiceException = "ServiceException" + ErrCodeServiceException = "ElasticBeanstalkServiceException" // ErrCodeSourceBundleDeletionException for service response error code // "SourceBundleDeletionFailure". diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/service.go b/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/service.go index e56509610da..12e8b1c819a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elasticbeanstalk/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "elasticbeanstalk" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "elasticbeanstalk" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Elastic Beanstalk" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ElasticBeanstalk client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticsearchservice/api.go b/vendor/github.com/aws/aws-sdk-go/service/elasticsearchservice/api.go index dc5672d021b..c4915fbd8be 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elasticsearchservice/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elasticsearchservice/api.go @@ -17,8 +17,8 @@ const opAddTags = "AddTags" // AddTagsRequest generates a "aws/request.Request" representing the // client's request for the AddTags operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -106,12 +106,103 @@ func (c *ElasticsearchService) AddTagsWithContext(ctx aws.Context, input *AddTag return out, req.Send() } +const opCancelElasticsearchServiceSoftwareUpdate = "CancelElasticsearchServiceSoftwareUpdate" + +// CancelElasticsearchServiceSoftwareUpdateRequest generates a "aws/request.Request" representing the +// client's request for the CancelElasticsearchServiceSoftwareUpdate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelElasticsearchServiceSoftwareUpdate for more information on using the CancelElasticsearchServiceSoftwareUpdate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelElasticsearchServiceSoftwareUpdateRequest method. +// req, resp := client.CancelElasticsearchServiceSoftwareUpdateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) CancelElasticsearchServiceSoftwareUpdateRequest(input *CancelElasticsearchServiceSoftwareUpdateInput) (req *request.Request, output *CancelElasticsearchServiceSoftwareUpdateOutput) { + op := &request.Operation{ + Name: opCancelElasticsearchServiceSoftwareUpdate, + HTTPMethod: "POST", + HTTPPath: "/2015-01-01/es/serviceSoftwareUpdate/cancel", + } + + if input == nil { + input = &CancelElasticsearchServiceSoftwareUpdateInput{} + } + + output = &CancelElasticsearchServiceSoftwareUpdateOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelElasticsearchServiceSoftwareUpdate API operation for Amazon Elasticsearch Service. +// +// Cancels a scheduled service software update for an Amazon ES domain. You +// can only perform this operation before the AutomatedUpdateDate and when the +// UpdateStatus is in the PENDING_UPDATE state. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation CancelElasticsearchServiceSoftwareUpdate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBaseException "BaseException" +// An error occurred while processing the request. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. 
+// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. +// +func (c *ElasticsearchService) CancelElasticsearchServiceSoftwareUpdate(input *CancelElasticsearchServiceSoftwareUpdateInput) (*CancelElasticsearchServiceSoftwareUpdateOutput, error) { + req, out := c.CancelElasticsearchServiceSoftwareUpdateRequest(input) + return out, req.Send() +} + +// CancelElasticsearchServiceSoftwareUpdateWithContext is the same as CancelElasticsearchServiceSoftwareUpdate with the addition of +// the ability to pass a context and additional request options. +// +// See CancelElasticsearchServiceSoftwareUpdate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) CancelElasticsearchServiceSoftwareUpdateWithContext(ctx aws.Context, input *CancelElasticsearchServiceSoftwareUpdateInput, opts ...request.Option) (*CancelElasticsearchServiceSoftwareUpdateOutput, error) { + req, out := c.CancelElasticsearchServiceSoftwareUpdateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateElasticsearchDomain = "CreateElasticsearchDomain" // CreateElasticsearchDomainRequest generates a "aws/request.Request" representing the // client's request for the CreateElasticsearchDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -213,8 +304,8 @@ const opDeleteElasticsearchDomain = "DeleteElasticsearchDomain" // DeleteElasticsearchDomainRequest generates a "aws/request.Request" representing the // client's request for the DeleteElasticsearchDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -303,8 +394,8 @@ const opDeleteElasticsearchServiceRole = "DeleteElasticsearchServiceRole" // DeleteElasticsearchServiceRoleRequest generates a "aws/request.Request" representing the // client's request for the DeleteElasticsearchServiceRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -394,8 +485,8 @@ const opDescribeElasticsearchDomain = "DescribeElasticsearchDomain" // DescribeElasticsearchDomainRequest generates a "aws/request.Request" representing the // client's request for the DescribeElasticsearchDomain operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -484,8 +575,8 @@ const opDescribeElasticsearchDomainConfig = "DescribeElasticsearchDomainConfig" // DescribeElasticsearchDomainConfigRequest generates a "aws/request.Request" representing the // client's request for the DescribeElasticsearchDomainConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -575,8 +666,8 @@ const opDescribeElasticsearchDomains = "DescribeElasticsearchDomains" // DescribeElasticsearchDomainsRequest generates a "aws/request.Request" representing the // client's request for the DescribeElasticsearchDomains operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -661,8 +752,8 @@ const opDescribeElasticsearchInstanceTypeLimits = "DescribeElasticsearchInstance // DescribeElasticsearchInstanceTypeLimitsRequest generates a "aws/request.Request" representing the // client's request for the DescribeElasticsearchInstanceTypeLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -756,116 +847,35 @@ func (c *ElasticsearchService) DescribeElasticsearchInstanceTypeLimitsWithContex return out, req.Send() } -const opListDomainNames = "ListDomainNames" - -// ListDomainNamesRequest generates a "aws/request.Request" representing the -// client's request for the ListDomainNames operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. -// -// Use "Send" method on the returned Request to send the API call to the service. -// the "output" return value is not valid until after Send returns without error. -// -// See ListDomainNames for more information on using the ListDomainNames -// API call, and error handling. -// -// This method is useful when you want to inject custom logic or configuration -// into the SDK's request lifecycle. Such as custom headers, or retry logic. -// -// -// // Example sending a request using the ListDomainNamesRequest method. 
-// req, resp := client.ListDomainNamesRequest(params) -// -// err := req.Send() -// if err == nil { // resp is now filled -// fmt.Println(resp) -// } -func (c *ElasticsearchService) ListDomainNamesRequest(input *ListDomainNamesInput) (req *request.Request, output *ListDomainNamesOutput) { - op := &request.Operation{ - Name: opListDomainNames, - HTTPMethod: "GET", - HTTPPath: "/2015-01-01/domain", - } - - if input == nil { - input = &ListDomainNamesInput{} - } - - output = &ListDomainNamesOutput{} - req = c.newRequest(op, input, output) - return -} - -// ListDomainNames API operation for Amazon Elasticsearch Service. -// -// Returns the name of all Elasticsearch domains owned by the current user's -// account. -// -// Returns awserr.Error for service API and SDK errors. Use runtime type assertions -// with awserr.Error's Code and Message methods to get detailed information about -// the error. -// -// See the AWS API reference guide for Amazon Elasticsearch Service's -// API operation ListDomainNames for usage and error information. -// -// Returned Error Codes: -// * ErrCodeBaseException "BaseException" -// An error occurred while processing the request. -// -// * ErrCodeValidationException "ValidationException" -// An exception for missing / invalid input fields. Gives http status code of -// 400. -// -func (c *ElasticsearchService) ListDomainNames(input *ListDomainNamesInput) (*ListDomainNamesOutput, error) { - req, out := c.ListDomainNamesRequest(input) - return out, req.Send() -} - -// ListDomainNamesWithContext is the same as ListDomainNames with the addition of -// the ability to pass a context and additional request options. -// -// See ListDomainNames for details on how to use this API operation. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *ElasticsearchService) ListDomainNamesWithContext(ctx aws.Context, input *ListDomainNamesInput, opts ...request.Option) (*ListDomainNamesOutput, error) { - req, out := c.ListDomainNamesRequest(input) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return out, req.Send() -} - -const opListElasticsearchInstanceTypes = "ListElasticsearchInstanceTypes" +const opDescribeReservedElasticsearchInstanceOfferings = "DescribeReservedElasticsearchInstanceOfferings" -// ListElasticsearchInstanceTypesRequest generates a "aws/request.Request" representing the -// client's request for the ListElasticsearchInstanceTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeReservedElasticsearchInstanceOfferingsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeReservedElasticsearchInstanceOfferings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListElasticsearchInstanceTypes for more information on using the ListElasticsearchInstanceTypes +// See DescribeReservedElasticsearchInstanceOfferings for more information on using the DescribeReservedElasticsearchInstanceOfferings // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListElasticsearchInstanceTypesRequest method. -// req, resp := client.ListElasticsearchInstanceTypesRequest(params) +// // Example sending a request using the DescribeReservedElasticsearchInstanceOfferingsRequest method. +// req, resp := client.DescribeReservedElasticsearchInstanceOfferingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *ElasticsearchService) ListElasticsearchInstanceTypesRequest(input *ListElasticsearchInstanceTypesInput) (req *request.Request, output *ListElasticsearchInstanceTypesOutput) { +func (c *ElasticsearchService) DescribeReservedElasticsearchInstanceOfferingsRequest(input *DescribeReservedElasticsearchInstanceOfferingsInput) (req *request.Request, output *DescribeReservedElasticsearchInstanceOfferingsOutput) { op := &request.Operation{ - Name: opListElasticsearchInstanceTypes, + Name: opDescribeReservedElasticsearchInstanceOfferings, HTTPMethod: "GET", - HTTPPath: "/2015-01-01/es/instanceTypes/{ElasticsearchVersion}", + HTTPPath: "/2015-01-01/es/reservedInstanceOfferings", Paginator: &request.Paginator{ InputTokens: []string{"NextToken"}, OutputTokens: []string{"NextToken"}, @@ -875,34 +885,26 @@ func (c *ElasticsearchService) ListElasticsearchInstanceTypesRequest(input *List } if input == nil { - input = &ListElasticsearchInstanceTypesInput{} + input = &DescribeReservedElasticsearchInstanceOfferingsInput{} } - output = &ListElasticsearchInstanceTypesOutput{} + output = &DescribeReservedElasticsearchInstanceOfferingsOutput{} req = c.newRequest(op, input, output) return } -// ListElasticsearchInstanceTypes API operation for Amazon Elasticsearch Service. +// DescribeReservedElasticsearchInstanceOfferings API operation for Amazon Elasticsearch Service. // -// List all Elasticsearch instance types that are supported for given ElasticsearchVersion +// Lists available reserved Elasticsearch instance offerings. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Elasticsearch Service's -// API operation ListElasticsearchInstanceTypes for usage and error information. +// API operation DescribeReservedElasticsearchInstanceOfferings for usage and error information. // // Returned Error Codes: -// * ErrCodeBaseException "BaseException" -// An error occurred while processing the request. -// -// * ErrCodeInternalException "InternalException" -// The request processing has failed because of an unknown error, exception -// or failure (the failure is internal to the service) . Gives http status code -// of 500. -// // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // An exception for accessing or deleting a resource that does not exist. Gives // http status code of 400. @@ -911,64 +913,73 @@ func (c *ElasticsearchService) ListElasticsearchInstanceTypesRequest(input *List // An exception for missing / invalid input fields. Gives http status code of // 400. 
// -func (c *ElasticsearchService) ListElasticsearchInstanceTypes(input *ListElasticsearchInstanceTypesInput) (*ListElasticsearchInstanceTypesOutput, error) { - req, out := c.ListElasticsearchInstanceTypesRequest(input) +// * ErrCodeDisabledOperationException "DisabledOperationException" +// An error occured because the client wanted to access a not supported operation. +// Gives http status code of 409. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +func (c *ElasticsearchService) DescribeReservedElasticsearchInstanceOfferings(input *DescribeReservedElasticsearchInstanceOfferingsInput) (*DescribeReservedElasticsearchInstanceOfferingsOutput, error) { + req, out := c.DescribeReservedElasticsearchInstanceOfferingsRequest(input) return out, req.Send() } -// ListElasticsearchInstanceTypesWithContext is the same as ListElasticsearchInstanceTypes with the addition of +// DescribeReservedElasticsearchInstanceOfferingsWithContext is the same as DescribeReservedElasticsearchInstanceOfferings with the addition of // the ability to pass a context and additional request options. // -// See ListElasticsearchInstanceTypes for details on how to use this API operation. +// See DescribeReservedElasticsearchInstanceOfferings for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ElasticsearchService) ListElasticsearchInstanceTypesWithContext(ctx aws.Context, input *ListElasticsearchInstanceTypesInput, opts ...request.Option) (*ListElasticsearchInstanceTypesOutput, error) { - req, out := c.ListElasticsearchInstanceTypesRequest(input) +func (c *ElasticsearchService) DescribeReservedElasticsearchInstanceOfferingsWithContext(ctx aws.Context, input *DescribeReservedElasticsearchInstanceOfferingsInput, opts ...request.Option) (*DescribeReservedElasticsearchInstanceOfferingsOutput, error) { + req, out := c.DescribeReservedElasticsearchInstanceOfferingsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListElasticsearchInstanceTypesPages iterates over the pages of a ListElasticsearchInstanceTypes operation, +// DescribeReservedElasticsearchInstanceOfferingsPages iterates over the pages of a DescribeReservedElasticsearchInstanceOfferings operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See ListElasticsearchInstanceTypes method for more information on how to use this operation. +// See DescribeReservedElasticsearchInstanceOfferings method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a ListElasticsearchInstanceTypes operation. +// // Example iterating over at most 3 pages of a DescribeReservedElasticsearchInstanceOfferings operation. 
// pageNum := 0 -// err := client.ListElasticsearchInstanceTypesPages(params, -// func(page *ListElasticsearchInstanceTypesOutput, lastPage bool) bool { +// err := client.DescribeReservedElasticsearchInstanceOfferingsPages(params, +// func(page *DescribeReservedElasticsearchInstanceOfferingsOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) // -func (c *ElasticsearchService) ListElasticsearchInstanceTypesPages(input *ListElasticsearchInstanceTypesInput, fn func(*ListElasticsearchInstanceTypesOutput, bool) bool) error { - return c.ListElasticsearchInstanceTypesPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *ElasticsearchService) DescribeReservedElasticsearchInstanceOfferingsPages(input *DescribeReservedElasticsearchInstanceOfferingsInput, fn func(*DescribeReservedElasticsearchInstanceOfferingsOutput, bool) bool) error { + return c.DescribeReservedElasticsearchInstanceOfferingsPagesWithContext(aws.BackgroundContext(), input, fn) } -// ListElasticsearchInstanceTypesPagesWithContext same as ListElasticsearchInstanceTypesPages except +// DescribeReservedElasticsearchInstanceOfferingsPagesWithContext same as DescribeReservedElasticsearchInstanceOfferingsPages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ElasticsearchService) ListElasticsearchInstanceTypesPagesWithContext(ctx aws.Context, input *ListElasticsearchInstanceTypesInput, fn func(*ListElasticsearchInstanceTypesOutput, bool) bool, opts ...request.Option) error { +func (c *ElasticsearchService) DescribeReservedElasticsearchInstanceOfferingsPagesWithContext(ctx aws.Context, input *DescribeReservedElasticsearchInstanceOfferingsInput, fn func(*DescribeReservedElasticsearchInstanceOfferingsOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ NewRequest: func() (*request.Request, error) { - var inCpy *ListElasticsearchInstanceTypesInput + var inCpy *DescribeReservedElasticsearchInstanceOfferingsInput if input != nil { tmp := *input inCpy = &tmp } - req, _ := c.ListElasticsearchInstanceTypesRequest(inCpy) + req, _ := c.DescribeReservedElasticsearchInstanceOfferingsRequest(inCpy) req.SetContext(ctx) req.ApplyOptions(opts...) return req, nil @@ -977,40 +988,40 @@ func (c *ElasticsearchService) ListElasticsearchInstanceTypesPagesWithContext(ct cont := true for p.Next() && cont { - cont = fn(p.Page().(*ListElasticsearchInstanceTypesOutput), !p.HasNextPage()) + cont = fn(p.Page().(*DescribeReservedElasticsearchInstanceOfferingsOutput), !p.HasNextPage()) } return p.Err() } -const opListElasticsearchVersions = "ListElasticsearchVersions" +const opDescribeReservedElasticsearchInstances = "DescribeReservedElasticsearchInstances" -// ListElasticsearchVersionsRequest generates a "aws/request.Request" representing the -// client's request for the ListElasticsearchVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeReservedElasticsearchInstancesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeReservedElasticsearchInstances operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListElasticsearchVersions for more information on using the ListElasticsearchVersions +// See DescribeReservedElasticsearchInstances for more information on using the DescribeReservedElasticsearchInstances // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListElasticsearchVersionsRequest method. -// req, resp := client.ListElasticsearchVersionsRequest(params) +// // Example sending a request using the DescribeReservedElasticsearchInstancesRequest method. +// req, resp := client.DescribeReservedElasticsearchInstancesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *ElasticsearchService) ListElasticsearchVersionsRequest(input *ListElasticsearchVersionsInput) (req *request.Request, output *ListElasticsearchVersionsOutput) { +func (c *ElasticsearchService) DescribeReservedElasticsearchInstancesRequest(input *DescribeReservedElasticsearchInstancesInput) (req *request.Request, output *DescribeReservedElasticsearchInstancesOutput) { op := &request.Operation{ - Name: opListElasticsearchVersions, + Name: opDescribeReservedElasticsearchInstances, HTTPMethod: "GET", - HTTPPath: "/2015-01-01/es/versions", + HTTPPath: "/2015-01-01/es/reservedInstances", Paginator: &request.Paginator{ InputTokens: []string{"NextToken"}, OutputTokens: []string{"NextToken"}, @@ -1020,100 +1031,101 @@ func (c *ElasticsearchService) ListElasticsearchVersionsRequest(input *ListElast } if input == nil { - input = &ListElasticsearchVersionsInput{} + input = &DescribeReservedElasticsearchInstancesInput{} } - output = &ListElasticsearchVersionsOutput{} + output = &DescribeReservedElasticsearchInstancesOutput{} req = c.newRequest(op, input, output) return } -// ListElasticsearchVersions API operation for Amazon Elasticsearch Service. +// DescribeReservedElasticsearchInstances API operation for Amazon Elasticsearch Service. // -// List all supported Elasticsearch versions +// Returns information about reserved Elasticsearch instances for this account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Elasticsearch Service's -// API operation ListElasticsearchVersions for usage and error information. +// API operation DescribeReservedElasticsearchInstances for usage and error information. // // Returned Error Codes: -// * ErrCodeBaseException "BaseException" -// An error occurred while processing the request. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. // // * ErrCodeInternalException "InternalException" // The request processing has failed because of an unknown error, exception // or failure (the failure is internal to the service) . Gives http status code // of 500. 
// -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// An exception for accessing or deleting a resource that does not exist. Gives -// http status code of 400. -// // * ErrCodeValidationException "ValidationException" // An exception for missing / invalid input fields. Gives http status code of // 400. // -func (c *ElasticsearchService) ListElasticsearchVersions(input *ListElasticsearchVersionsInput) (*ListElasticsearchVersionsOutput, error) { - req, out := c.ListElasticsearchVersionsRequest(input) +// * ErrCodeDisabledOperationException "DisabledOperationException" +// An error occured because the client wanted to access a not supported operation. +// Gives http status code of 409. +// +func (c *ElasticsearchService) DescribeReservedElasticsearchInstances(input *DescribeReservedElasticsearchInstancesInput) (*DescribeReservedElasticsearchInstancesOutput, error) { + req, out := c.DescribeReservedElasticsearchInstancesRequest(input) return out, req.Send() } -// ListElasticsearchVersionsWithContext is the same as ListElasticsearchVersions with the addition of +// DescribeReservedElasticsearchInstancesWithContext is the same as DescribeReservedElasticsearchInstances with the addition of // the ability to pass a context and additional request options. // -// See ListElasticsearchVersions for details on how to use this API operation. +// See DescribeReservedElasticsearchInstances for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ElasticsearchService) ListElasticsearchVersionsWithContext(ctx aws.Context, input *ListElasticsearchVersionsInput, opts ...request.Option) (*ListElasticsearchVersionsOutput, error) { - req, out := c.ListElasticsearchVersionsRequest(input) +func (c *ElasticsearchService) DescribeReservedElasticsearchInstancesWithContext(ctx aws.Context, input *DescribeReservedElasticsearchInstancesInput, opts ...request.Option) (*DescribeReservedElasticsearchInstancesOutput, error) { + req, out := c.DescribeReservedElasticsearchInstancesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListElasticsearchVersionsPages iterates over the pages of a ListElasticsearchVersions operation, +// DescribeReservedElasticsearchInstancesPages iterates over the pages of a DescribeReservedElasticsearchInstances operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See ListElasticsearchVersions method for more information on how to use this operation. +// See DescribeReservedElasticsearchInstances method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a ListElasticsearchVersions operation. +// // Example iterating over at most 3 pages of a DescribeReservedElasticsearchInstances operation. 
// pageNum := 0 -// err := client.ListElasticsearchVersionsPages(params, -// func(page *ListElasticsearchVersionsOutput, lastPage bool) bool { +// err := client.DescribeReservedElasticsearchInstancesPages(params, +// func(page *DescribeReservedElasticsearchInstancesOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) // -func (c *ElasticsearchService) ListElasticsearchVersionsPages(input *ListElasticsearchVersionsInput, fn func(*ListElasticsearchVersionsOutput, bool) bool) error { - return c.ListElasticsearchVersionsPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *ElasticsearchService) DescribeReservedElasticsearchInstancesPages(input *DescribeReservedElasticsearchInstancesInput, fn func(*DescribeReservedElasticsearchInstancesOutput, bool) bool) error { + return c.DescribeReservedElasticsearchInstancesPagesWithContext(aws.BackgroundContext(), input, fn) } -// ListElasticsearchVersionsPagesWithContext same as ListElasticsearchVersionsPages except +// DescribeReservedElasticsearchInstancesPagesWithContext same as DescribeReservedElasticsearchInstancesPages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ElasticsearchService) ListElasticsearchVersionsPagesWithContext(ctx aws.Context, input *ListElasticsearchVersionsInput, fn func(*ListElasticsearchVersionsOutput, bool) bool, opts ...request.Option) error { +func (c *ElasticsearchService) DescribeReservedElasticsearchInstancesPagesWithContext(ctx aws.Context, input *DescribeReservedElasticsearchInstancesInput, fn func(*DescribeReservedElasticsearchInstancesOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ NewRequest: func() (*request.Request, error) { - var inCpy *ListElasticsearchVersionsInput + var inCpy *DescribeReservedElasticsearchInstancesInput if input != nil { tmp := *input inCpy = &tmp } - req, _ := c.ListElasticsearchVersionsRequest(inCpy) + req, _ := c.DescribeReservedElasticsearchInstancesRequest(inCpy) req.SetContext(ctx) req.ApplyOptions(opts...) return req, nil @@ -1122,61 +1134,63 @@ func (c *ElasticsearchService) ListElasticsearchVersionsPagesWithContext(ctx aws cont := true for p.Next() && cont { - cont = fn(p.Page().(*ListElasticsearchVersionsOutput), !p.HasNextPage()) + cont = fn(p.Page().(*DescribeReservedElasticsearchInstancesOutput), !p.HasNextPage()) } return p.Err() } -const opListTags = "ListTags" +const opGetCompatibleElasticsearchVersions = "GetCompatibleElasticsearchVersions" -// ListTagsRequest generates a "aws/request.Request" representing the -// client's request for the ListTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetCompatibleElasticsearchVersionsRequest generates a "aws/request.Request" representing the +// client's request for the GetCompatibleElasticsearchVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See ListTags for more information on using the ListTags +// See GetCompatibleElasticsearchVersions for more information on using the GetCompatibleElasticsearchVersions // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListTagsRequest method. -// req, resp := client.ListTagsRequest(params) +// // Example sending a request using the GetCompatibleElasticsearchVersionsRequest method. +// req, resp := client.GetCompatibleElasticsearchVersionsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *ElasticsearchService) ListTagsRequest(input *ListTagsInput) (req *request.Request, output *ListTagsOutput) { +func (c *ElasticsearchService) GetCompatibleElasticsearchVersionsRequest(input *GetCompatibleElasticsearchVersionsInput) (req *request.Request, output *GetCompatibleElasticsearchVersionsOutput) { op := &request.Operation{ - Name: opListTags, + Name: opGetCompatibleElasticsearchVersions, HTTPMethod: "GET", - HTTPPath: "/2015-01-01/tags/", + HTTPPath: "/2015-01-01/es/compatibleVersions", } if input == nil { - input = &ListTagsInput{} + input = &GetCompatibleElasticsearchVersionsInput{} } - output = &ListTagsOutput{} + output = &GetCompatibleElasticsearchVersionsOutput{} req = c.newRequest(op, input, output) return } -// ListTags API operation for Amazon Elasticsearch Service. +// GetCompatibleElasticsearchVersions API operation for Amazon Elasticsearch Service. // -// Returns all tags for the given Elasticsearch domain. +// Returns a list of upgrade compatible Elastisearch versions. You can optionally +// pass a DomainName to get all upgrade compatible Elasticsearch versions for +// that specific domain. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Elasticsearch Service's -// API operation ListTags for usage and error information. +// API operation GetCompatibleElasticsearchVersions for usage and error information. // // Returned Error Codes: // * ErrCodeBaseException "BaseException" @@ -1186,8 +1200,12 @@ func (c *ElasticsearchService) ListTagsRequest(input *ListTagsInput) (req *reque // An exception for accessing or deleting a resource that does not exist. Gives // http status code of 400. // -// * ErrCodeValidationException "ValidationException" -// An exception for missing / invalid input fields. Gives http status code of +// * ErrCodeDisabledOperationException "DisabledOperationException" +// An error occured because the client wanted to access a not supported operation. +// Gives http status code of 409. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of // 400. // // * ErrCodeInternalException "InternalException" @@ -1195,84 +1213,97 @@ func (c *ElasticsearchService) ListTagsRequest(input *ListTagsInput) (req *reque // or failure (the failure is internal to the service) . Gives http status code // of 500. 
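// An illustrative sketch of calling GetCompatibleElasticsearchVersions, which
// the comment above describes as optionally scoped to a single domain. The
// session/New setup is assumed from the standard aws-sdk-go layout, and the
// DomainName field on the input is inferred from that description rather than
// shown in this hunk.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

func main() {
	client := elasticsearchservice.New(session.Must(session.NewSession()))

	// Hypothetical domain name; omitting DomainName would return compatible
	// upgrade paths for every supported version.
	input := &elasticsearchservice.GetCompatibleElasticsearchVersionsInput{
		DomainName: aws.String("example-domain"),
	}

	out, err := client.GetCompatibleElasticsearchVersions(input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}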
// -func (c *ElasticsearchService) ListTags(input *ListTagsInput) (*ListTagsOutput, error) { - req, out := c.ListTagsRequest(input) +func (c *ElasticsearchService) GetCompatibleElasticsearchVersions(input *GetCompatibleElasticsearchVersionsInput) (*GetCompatibleElasticsearchVersionsOutput, error) { + req, out := c.GetCompatibleElasticsearchVersionsRequest(input) return out, req.Send() } -// ListTagsWithContext is the same as ListTags with the addition of +// GetCompatibleElasticsearchVersionsWithContext is the same as GetCompatibleElasticsearchVersions with the addition of // the ability to pass a context and additional request options. // -// See ListTags for details on how to use this API operation. +// See GetCompatibleElasticsearchVersions for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ElasticsearchService) ListTagsWithContext(ctx aws.Context, input *ListTagsInput, opts ...request.Option) (*ListTagsOutput, error) { - req, out := c.ListTagsRequest(input) +func (c *ElasticsearchService) GetCompatibleElasticsearchVersionsWithContext(ctx aws.Context, input *GetCompatibleElasticsearchVersionsInput, opts ...request.Option) (*GetCompatibleElasticsearchVersionsOutput, error) { + req, out := c.GetCompatibleElasticsearchVersionsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRemoveTags = "RemoveTags" +const opGetUpgradeHistory = "GetUpgradeHistory" -// RemoveTagsRequest generates a "aws/request.Request" representing the -// client's request for the RemoveTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetUpgradeHistoryRequest generates a "aws/request.Request" representing the +// client's request for the GetUpgradeHistory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RemoveTags for more information on using the RemoveTags +// See GetUpgradeHistory for more information on using the GetUpgradeHistory // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RemoveTagsRequest method. -// req, resp := client.RemoveTagsRequest(params) +// // Example sending a request using the GetUpgradeHistoryRequest method. 
+// req, resp := client.GetUpgradeHistoryRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *ElasticsearchService) RemoveTagsRequest(input *RemoveTagsInput) (req *request.Request, output *RemoveTagsOutput) { +func (c *ElasticsearchService) GetUpgradeHistoryRequest(input *GetUpgradeHistoryInput) (req *request.Request, output *GetUpgradeHistoryOutput) { op := &request.Operation{ - Name: opRemoveTags, - HTTPMethod: "POST", - HTTPPath: "/2015-01-01/tags-removal", + Name: opGetUpgradeHistory, + HTTPMethod: "GET", + HTTPPath: "/2015-01-01/es/upgradeDomain/{DomainName}/history", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { - input = &RemoveTagsInput{} + input = &GetUpgradeHistoryInput{} } - output = &RemoveTagsOutput{} + output = &GetUpgradeHistoryOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// RemoveTags API operation for Amazon Elasticsearch Service. +// GetUpgradeHistory API operation for Amazon Elasticsearch Service. // -// Removes the specified set of tags from the specified Elasticsearch domain. +// Retrieves the complete history of the last 10 upgrades that were performed +// on the domain. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Elasticsearch Service's -// API operation RemoveTags for usage and error information. +// API operation GetUpgradeHistory for usage and error information. // // Returned Error Codes: // * ErrCodeBaseException "BaseException" // An error occurred while processing the request. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. +// +// * ErrCodeDisabledOperationException "DisabledOperationException" +// An error occured because the client wanted to access a not supported operation. +// Gives http status code of 409. +// // * ErrCodeValidationException "ValidationException" // An exception for missing / invalid input fields. Gives http status code of // 400. @@ -1282,211 +1313,1975 @@ func (c *ElasticsearchService) RemoveTagsRequest(input *RemoveTagsInput) (req *r // or failure (the failure is internal to the service) . Gives http status code // of 500. // -func (c *ElasticsearchService) RemoveTags(input *RemoveTagsInput) (*RemoveTagsOutput, error) { - req, out := c.RemoveTagsRequest(input) +func (c *ElasticsearchService) GetUpgradeHistory(input *GetUpgradeHistoryInput) (*GetUpgradeHistoryOutput, error) { + req, out := c.GetUpgradeHistoryRequest(input) return out, req.Send() } -// RemoveTagsWithContext is the same as RemoveTags with the addition of +// GetUpgradeHistoryWithContext is the same as GetUpgradeHistory with the addition of // the ability to pass a context and additional request options. // -// See RemoveTags for details on how to use this API operation. +// See GetUpgradeHistory for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ElasticsearchService) RemoveTagsWithContext(ctx aws.Context, input *RemoveTagsInput, opts ...request.Option) (*RemoveTagsOutput, error) { - req, out := c.RemoveTagsRequest(input) +func (c *ElasticsearchService) GetUpgradeHistoryWithContext(ctx aws.Context, input *GetUpgradeHistoryInput, opts ...request.Option) (*GetUpgradeHistoryOutput, error) { + req, out := c.GetUpgradeHistoryRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateElasticsearchDomainConfig = "UpdateElasticsearchDomainConfig" +// GetUpgradeHistoryPages iterates over the pages of a GetUpgradeHistory operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetUpgradeHistory method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetUpgradeHistory operation. +// pageNum := 0 +// err := client.GetUpgradeHistoryPages(params, +// func(page *GetUpgradeHistoryOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ElasticsearchService) GetUpgradeHistoryPages(input *GetUpgradeHistoryInput, fn func(*GetUpgradeHistoryOutput, bool) bool) error { + return c.GetUpgradeHistoryPagesWithContext(aws.BackgroundContext(), input, fn) +} -// UpdateElasticsearchDomainConfigRequest generates a "aws/request.Request" representing the -// client's request for the UpdateElasticsearchDomainConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetUpgradeHistoryPagesWithContext same as GetUpgradeHistoryPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) GetUpgradeHistoryPagesWithContext(ctx aws.Context, input *GetUpgradeHistoryInput, fn func(*GetUpgradeHistoryOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetUpgradeHistoryInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetUpgradeHistoryRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetUpgradeHistoryOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opGetUpgradeStatus = "GetUpgradeStatus" + +// GetUpgradeStatusRequest generates a "aws/request.Request" representing the +// client's request for the GetUpgradeStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
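// A sketch of the paginated GetUpgradeHistoryPages helper defined above. The
// session/New setup is assumed, and the DomainName field on the input is
// inferred from the {DomainName} path parameter in the operation definition
// rather than shown directly in this hunk.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

func main() {
	client := elasticsearchservice.New(session.Must(session.NewSession()))

	input := &elasticsearchservice.GetUpgradeHistoryInput{
		DomainName: aws.String("example-domain"), // hypothetical domain name
	}

	// The callback runs once per page; returning false stops iteration.
	err := client.GetUpgradeHistoryPages(input,
		func(page *elasticsearchservice.GetUpgradeHistoryOutput, lastPage bool) bool {
			fmt.Println(page)
			return !lastPage // continue until the service reports the last page
		})
	if err != nil {
		log.Fatal(err)
	}
}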
// -// See UpdateElasticsearchDomainConfig for more information on using the UpdateElasticsearchDomainConfig +// See GetUpgradeStatus for more information on using the GetUpgradeStatus // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateElasticsearchDomainConfigRequest method. -// req, resp := client.UpdateElasticsearchDomainConfigRequest(params) +// // Example sending a request using the GetUpgradeStatusRequest method. +// req, resp := client.GetUpgradeStatusRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *ElasticsearchService) UpdateElasticsearchDomainConfigRequest(input *UpdateElasticsearchDomainConfigInput) (req *request.Request, output *UpdateElasticsearchDomainConfigOutput) { +func (c *ElasticsearchService) GetUpgradeStatusRequest(input *GetUpgradeStatusInput) (req *request.Request, output *GetUpgradeStatusOutput) { op := &request.Operation{ - Name: opUpdateElasticsearchDomainConfig, - HTTPMethod: "POST", - HTTPPath: "/2015-01-01/es/domain/{DomainName}/config", + Name: opGetUpgradeStatus, + HTTPMethod: "GET", + HTTPPath: "/2015-01-01/es/upgradeDomain/{DomainName}/status", } if input == nil { - input = &UpdateElasticsearchDomainConfigInput{} + input = &GetUpgradeStatusInput{} } - output = &UpdateElasticsearchDomainConfigOutput{} + output = &GetUpgradeStatusOutput{} req = c.newRequest(op, input, output) return } -// UpdateElasticsearchDomainConfig API operation for Amazon Elasticsearch Service. +// GetUpgradeStatus API operation for Amazon Elasticsearch Service. // -// Modifies the cluster configuration of the specified Elasticsearch domain, -// setting as setting the instance type and the number of instances. +// Retrieves the latest status of the last upgrade or upgrade eligibility check +// that was performed on the domain. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Elasticsearch Service's -// API operation UpdateElasticsearchDomainConfig for usage and error information. +// API operation GetUpgradeStatus for usage and error information. // // Returned Error Codes: // * ErrCodeBaseException "BaseException" // An error occurred while processing the request. // -// * ErrCodeInternalException "InternalException" -// The request processing has failed because of an unknown error, exception -// or failure (the failure is internal to the service) . Gives http status code -// of 500. -// -// * ErrCodeInvalidTypeException "InvalidTypeException" -// An exception for trying to create or access sub-resource that is either invalid -// or not supported. Gives http status code of 409. -// -// * ErrCodeLimitExceededException "LimitExceededException" -// An exception for trying to create more than allowed resources or sub-resources. -// Gives http status code of 409. -// // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // An exception for accessing or deleting a resource that does not exist. Gives // http status code of 400. // +// * ErrCodeDisabledOperationException "DisabledOperationException" +// An error occured because the client wanted to access a not supported operation. +// Gives http status code of 409. 
+// // * ErrCodeValidationException "ValidationException" // An exception for missing / invalid input fields. Gives http status code of // 400. // -func (c *ElasticsearchService) UpdateElasticsearchDomainConfig(input *UpdateElasticsearchDomainConfigInput) (*UpdateElasticsearchDomainConfigOutput, error) { - req, out := c.UpdateElasticsearchDomainConfigRequest(input) +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +func (c *ElasticsearchService) GetUpgradeStatus(input *GetUpgradeStatusInput) (*GetUpgradeStatusOutput, error) { + req, out := c.GetUpgradeStatusRequest(input) return out, req.Send() } -// UpdateElasticsearchDomainConfigWithContext is the same as UpdateElasticsearchDomainConfig with the addition of +// GetUpgradeStatusWithContext is the same as GetUpgradeStatus with the addition of // the ability to pass a context and additional request options. // -// See UpdateElasticsearchDomainConfig for details on how to use this API operation. +// See GetUpgradeStatus for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ElasticsearchService) UpdateElasticsearchDomainConfigWithContext(ctx aws.Context, input *UpdateElasticsearchDomainConfigInput, opts ...request.Option) (*UpdateElasticsearchDomainConfigOutput, error) { - req, out := c.UpdateElasticsearchDomainConfigRequest(input) +func (c *ElasticsearchService) GetUpgradeStatusWithContext(ctx aws.Context, input *GetUpgradeStatusInput, opts ...request.Option) (*GetUpgradeStatusOutput, error) { + req, out := c.GetUpgradeStatusRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// The configured access rules for the domain's document and search endpoints, -// and the current status of those rules. -type AccessPoliciesStatus struct { - _ struct{} `type:"structure"` - - // The access policy configured for the Elasticsearch domain. Access policies - // may be resource-based, IP-based, or IAM-based. See Configuring Access Policies - // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies)for - // more information. - // - // Options is a required field - Options *string `type:"string" required:"true"` +const opListDomainNames = "ListDomainNames" - // The status of the access policy for the Elasticsearch domain. See OptionStatus - // for the status information that's included. - // - // Status is a required field - Status *OptionStatus `type:"structure" required:"true"` -} +// ListDomainNamesRequest generates a "aws/request.Request" representing the +// client's request for the ListDomainNames operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListDomainNames for more information on using the ListDomainNames +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListDomainNamesRequest method. +// req, resp := client.ListDomainNamesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) ListDomainNamesRequest(input *ListDomainNamesInput) (req *request.Request, output *ListDomainNamesOutput) { + op := &request.Operation{ + Name: opListDomainNames, + HTTPMethod: "GET", + HTTPPath: "/2015-01-01/domain", + } -// String returns the string representation -func (s AccessPoliciesStatus) String() string { - return awsutil.Prettify(s) -} + if input == nil { + input = &ListDomainNamesInput{} + } -// GoString returns the string representation -func (s AccessPoliciesStatus) GoString() string { - return s.String() + output = &ListDomainNamesOutput{} + req = c.newRequest(op, input, output) + return } -// SetOptions sets the Options field's value. -func (s *AccessPoliciesStatus) SetOptions(v string) *AccessPoliciesStatus { - s.Options = &v - return s +// ListDomainNames API operation for Amazon Elasticsearch Service. +// +// Returns the name of all Elasticsearch domains owned by the current user's +// account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation ListDomainNames for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBaseException "BaseException" +// An error occurred while processing the request. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. +// +func (c *ElasticsearchService) ListDomainNames(input *ListDomainNamesInput) (*ListDomainNamesOutput, error) { + req, out := c.ListDomainNamesRequest(input) + return out, req.Send() } -// SetStatus sets the Status field's value. -func (s *AccessPoliciesStatus) SetStatus(v *OptionStatus) *AccessPoliciesStatus { - s.Status = v - return s +// ListDomainNamesWithContext is the same as ListDomainNames with the addition of +// the ability to pass a context and additional request options. +// +// See ListDomainNames for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) ListDomainNamesWithContext(ctx aws.Context, input *ListDomainNamesInput, opts ...request.Option) (*ListDomainNamesOutput, error) { + req, out := c.ListDomainNamesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() } -// Container for the parameters to the AddTags operation. Specify the tags that -// you want to attach to the Elasticsearch domain. -type AddTagsInput struct { - _ struct{} `type:"structure"` - - // Specify the ARN for which you want to add the tags. - // - // ARN is a required field - ARN *string `type:"string" required:"true"` - - // List of Tag that need to be added for the Elasticsearch domain. 
- // - // TagList is a required field - TagList []*Tag `type:"list" required:"true"` -} +const opListElasticsearchInstanceTypes = "ListElasticsearchInstanceTypes" -// String returns the string representation -func (s AddTagsInput) String() string { - return awsutil.Prettify(s) -} +// ListElasticsearchInstanceTypesRequest generates a "aws/request.Request" representing the +// client's request for the ListElasticsearchInstanceTypes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListElasticsearchInstanceTypes for more information on using the ListElasticsearchInstanceTypes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListElasticsearchInstanceTypesRequest method. +// req, resp := client.ListElasticsearchInstanceTypesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) ListElasticsearchInstanceTypesRequest(input *ListElasticsearchInstanceTypesInput) (req *request.Request, output *ListElasticsearchInstanceTypesOutput) { + op := &request.Operation{ + Name: opListElasticsearchInstanceTypes, + HTTPMethod: "GET", + HTTPPath: "/2015-01-01/es/instanceTypes/{ElasticsearchVersion}", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListElasticsearchInstanceTypesInput{} + } + + output = &ListElasticsearchInstanceTypesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListElasticsearchInstanceTypes API operation for Amazon Elasticsearch Service. +// +// List all Elasticsearch instance types that are supported for given ElasticsearchVersion +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation ListElasticsearchInstanceTypes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBaseException "BaseException" +// An error occurred while processing the request. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. 
+// +func (c *ElasticsearchService) ListElasticsearchInstanceTypes(input *ListElasticsearchInstanceTypesInput) (*ListElasticsearchInstanceTypesOutput, error) { + req, out := c.ListElasticsearchInstanceTypesRequest(input) + return out, req.Send() +} + +// ListElasticsearchInstanceTypesWithContext is the same as ListElasticsearchInstanceTypes with the addition of +// the ability to pass a context and additional request options. +// +// See ListElasticsearchInstanceTypes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) ListElasticsearchInstanceTypesWithContext(ctx aws.Context, input *ListElasticsearchInstanceTypesInput, opts ...request.Option) (*ListElasticsearchInstanceTypesOutput, error) { + req, out := c.ListElasticsearchInstanceTypesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListElasticsearchInstanceTypesPages iterates over the pages of a ListElasticsearchInstanceTypes operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListElasticsearchInstanceTypes method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListElasticsearchInstanceTypes operation. +// pageNum := 0 +// err := client.ListElasticsearchInstanceTypesPages(params, +// func(page *ListElasticsearchInstanceTypesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ElasticsearchService) ListElasticsearchInstanceTypesPages(input *ListElasticsearchInstanceTypesInput, fn func(*ListElasticsearchInstanceTypesOutput, bool) bool) error { + return c.ListElasticsearchInstanceTypesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListElasticsearchInstanceTypesPagesWithContext same as ListElasticsearchInstanceTypesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) ListElasticsearchInstanceTypesPagesWithContext(ctx aws.Context, input *ListElasticsearchInstanceTypesInput, fn func(*ListElasticsearchInstanceTypesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListElasticsearchInstanceTypesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListElasticsearchInstanceTypesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListElasticsearchInstanceTypesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListElasticsearchVersions = "ListElasticsearchVersions" + +// ListElasticsearchVersionsRequest generates a "aws/request.Request" representing the +// client's request for the ListElasticsearchVersions operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListElasticsearchVersions for more information on using the ListElasticsearchVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListElasticsearchVersionsRequest method. +// req, resp := client.ListElasticsearchVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) ListElasticsearchVersionsRequest(input *ListElasticsearchVersionsInput) (req *request.Request, output *ListElasticsearchVersionsOutput) { + op := &request.Operation{ + Name: opListElasticsearchVersions, + HTTPMethod: "GET", + HTTPPath: "/2015-01-01/es/versions", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListElasticsearchVersionsInput{} + } + + output = &ListElasticsearchVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListElasticsearchVersions API operation for Amazon Elasticsearch Service. +// +// List all supported Elasticsearch versions +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation ListElasticsearchVersions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBaseException "BaseException" +// An error occurred while processing the request. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. +// +func (c *ElasticsearchService) ListElasticsearchVersions(input *ListElasticsearchVersionsInput) (*ListElasticsearchVersionsOutput, error) { + req, out := c.ListElasticsearchVersionsRequest(input) + return out, req.Send() +} + +// ListElasticsearchVersionsWithContext is the same as ListElasticsearchVersions with the addition of +// the ability to pass a context and additional request options. +// +// See ListElasticsearchVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
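// A sketch of the *WithContext variant documented above, using
// ListElasticsearchVersionsWithContext because its input needs no fields. A
// context.Context satisfies aws.Context, so the deadline set here cancels the
// in-flight request. The session/New setup is assumed.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

func main() {
	client := elasticsearchservice.New(session.Must(session.NewSession()))

	// Give the call at most 30 seconds before it is cancelled.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	out, err := client.ListElasticsearchVersionsWithContext(ctx,
		&elasticsearchservice.ListElasticsearchVersionsInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}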
+func (c *ElasticsearchService) ListElasticsearchVersionsWithContext(ctx aws.Context, input *ListElasticsearchVersionsInput, opts ...request.Option) (*ListElasticsearchVersionsOutput, error) { + req, out := c.ListElasticsearchVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListElasticsearchVersionsPages iterates over the pages of a ListElasticsearchVersions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListElasticsearchVersions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListElasticsearchVersions operation. +// pageNum := 0 +// err := client.ListElasticsearchVersionsPages(params, +// func(page *ListElasticsearchVersionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ElasticsearchService) ListElasticsearchVersionsPages(input *ListElasticsearchVersionsInput, fn func(*ListElasticsearchVersionsOutput, bool) bool) error { + return c.ListElasticsearchVersionsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListElasticsearchVersionsPagesWithContext same as ListElasticsearchVersionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) ListElasticsearchVersionsPagesWithContext(ctx aws.Context, input *ListElasticsearchVersionsInput, fn func(*ListElasticsearchVersionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListElasticsearchVersionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListElasticsearchVersionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListElasticsearchVersionsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListTags = "ListTags" + +// ListTagsRequest generates a "aws/request.Request" representing the +// client's request for the ListTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTags for more information on using the ListTags +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsRequest method. 
+// req, resp := client.ListTagsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) ListTagsRequest(input *ListTagsInput) (req *request.Request, output *ListTagsOutput) { + op := &request.Operation{ + Name: opListTags, + HTTPMethod: "GET", + HTTPPath: "/2015-01-01/tags/", + } + + if input == nil { + input = &ListTagsInput{} + } + + output = &ListTagsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTags API operation for Amazon Elasticsearch Service. +// +// Returns all tags for the given Elasticsearch domain. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation ListTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBaseException "BaseException" +// An error occurred while processing the request. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +func (c *ElasticsearchService) ListTags(input *ListTagsInput) (*ListTagsOutput, error) { + req, out := c.ListTagsRequest(input) + return out, req.Send() +} + +// ListTagsWithContext is the same as ListTags with the addition of +// the ability to pass a context and additional request options. +// +// See ListTags for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) ListTagsWithContext(ctx aws.Context, input *ListTagsInput, opts ...request.Option) (*ListTagsOutput, error) { + req, out := c.ListTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPurchaseReservedElasticsearchInstanceOffering = "PurchaseReservedElasticsearchInstanceOffering" + +// PurchaseReservedElasticsearchInstanceOfferingRequest generates a "aws/request.Request" representing the +// client's request for the PurchaseReservedElasticsearchInstanceOffering operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PurchaseReservedElasticsearchInstanceOffering for more information on using the PurchaseReservedElasticsearchInstanceOffering +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the PurchaseReservedElasticsearchInstanceOfferingRequest method. +// req, resp := client.PurchaseReservedElasticsearchInstanceOfferingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) PurchaseReservedElasticsearchInstanceOfferingRequest(input *PurchaseReservedElasticsearchInstanceOfferingInput) (req *request.Request, output *PurchaseReservedElasticsearchInstanceOfferingOutput) { + op := &request.Operation{ + Name: opPurchaseReservedElasticsearchInstanceOffering, + HTTPMethod: "POST", + HTTPPath: "/2015-01-01/es/purchaseReservedInstanceOffering", + } + + if input == nil { + input = &PurchaseReservedElasticsearchInstanceOfferingInput{} + } + + output = &PurchaseReservedElasticsearchInstanceOfferingOutput{} + req = c.newRequest(op, input, output) + return +} + +// PurchaseReservedElasticsearchInstanceOffering API operation for Amazon Elasticsearch Service. +// +// Allows you to purchase reserved Elasticsearch instances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation PurchaseReservedElasticsearchInstanceOffering for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// An exception for creating a resource that already exists. Gives http status +// code of 400. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// An exception for trying to create more than allowed resources or sub-resources. +// Gives http status code of 409. +// +// * ErrCodeDisabledOperationException "DisabledOperationException" +// An error occured because the client wanted to access a not supported operation. +// Gives http status code of 409. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +func (c *ElasticsearchService) PurchaseReservedElasticsearchInstanceOffering(input *PurchaseReservedElasticsearchInstanceOfferingInput) (*PurchaseReservedElasticsearchInstanceOfferingOutput, error) { + req, out := c.PurchaseReservedElasticsearchInstanceOfferingRequest(input) + return out, req.Send() +} + +// PurchaseReservedElasticsearchInstanceOfferingWithContext is the same as PurchaseReservedElasticsearchInstanceOffering with the addition of +// the ability to pass a context and additional request options. +// +// See PurchaseReservedElasticsearchInstanceOffering for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ElasticsearchService) PurchaseReservedElasticsearchInstanceOfferingWithContext(ctx aws.Context, input *PurchaseReservedElasticsearchInstanceOfferingInput, opts ...request.Option) (*PurchaseReservedElasticsearchInstanceOfferingOutput, error) { + req, out := c.PurchaseReservedElasticsearchInstanceOfferingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRemoveTags = "RemoveTags" + +// RemoveTagsRequest generates a "aws/request.Request" representing the +// client's request for the RemoveTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RemoveTags for more information on using the RemoveTags +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveTagsRequest method. +// req, resp := client.RemoveTagsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) RemoveTagsRequest(input *RemoveTagsInput) (req *request.Request, output *RemoveTagsOutput) { + op := &request.Operation{ + Name: opRemoveTags, + HTTPMethod: "POST", + HTTPPath: "/2015-01-01/tags-removal", + } + + if input == nil { + input = &RemoveTagsInput{} + } + + output = &RemoveTagsOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// RemoveTags API operation for Amazon Elasticsearch Service. +// +// Removes the specified set of tags from the specified Elasticsearch domain. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation RemoveTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBaseException "BaseException" +// An error occurred while processing the request. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +func (c *ElasticsearchService) RemoveTags(input *RemoveTagsInput) (*RemoveTagsOutput, error) { + req, out := c.RemoveTagsRequest(input) + return out, req.Send() +} + +// RemoveTagsWithContext is the same as RemoveTags with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveTags for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ElasticsearchService) RemoveTagsWithContext(ctx aws.Context, input *RemoveTagsInput, opts ...request.Option) (*RemoveTagsOutput, error) { + req, out := c.RemoveTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartElasticsearchServiceSoftwareUpdate = "StartElasticsearchServiceSoftwareUpdate" + +// StartElasticsearchServiceSoftwareUpdateRequest generates a "aws/request.Request" representing the +// client's request for the StartElasticsearchServiceSoftwareUpdate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartElasticsearchServiceSoftwareUpdate for more information on using the StartElasticsearchServiceSoftwareUpdate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartElasticsearchServiceSoftwareUpdateRequest method. +// req, resp := client.StartElasticsearchServiceSoftwareUpdateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) StartElasticsearchServiceSoftwareUpdateRequest(input *StartElasticsearchServiceSoftwareUpdateInput) (req *request.Request, output *StartElasticsearchServiceSoftwareUpdateOutput) { + op := &request.Operation{ + Name: opStartElasticsearchServiceSoftwareUpdate, + HTTPMethod: "POST", + HTTPPath: "/2015-01-01/es/serviceSoftwareUpdate/start", + } + + if input == nil { + input = &StartElasticsearchServiceSoftwareUpdateInput{} + } + + output = &StartElasticsearchServiceSoftwareUpdateOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartElasticsearchServiceSoftwareUpdate API operation for Amazon Elasticsearch Service. +// +// Schedules a service software update for an Amazon ES domain. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation StartElasticsearchServiceSoftwareUpdate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBaseException "BaseException" +// An error occurred while processing the request. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. 
+// +func (c *ElasticsearchService) StartElasticsearchServiceSoftwareUpdate(input *StartElasticsearchServiceSoftwareUpdateInput) (*StartElasticsearchServiceSoftwareUpdateOutput, error) { + req, out := c.StartElasticsearchServiceSoftwareUpdateRequest(input) + return out, req.Send() +} + +// StartElasticsearchServiceSoftwareUpdateWithContext is the same as StartElasticsearchServiceSoftwareUpdate with the addition of +// the ability to pass a context and additional request options. +// +// See StartElasticsearchServiceSoftwareUpdate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) StartElasticsearchServiceSoftwareUpdateWithContext(ctx aws.Context, input *StartElasticsearchServiceSoftwareUpdateInput, opts ...request.Option) (*StartElasticsearchServiceSoftwareUpdateOutput, error) { + req, out := c.StartElasticsearchServiceSoftwareUpdateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateElasticsearchDomainConfig = "UpdateElasticsearchDomainConfig" + +// UpdateElasticsearchDomainConfigRequest generates a "aws/request.Request" representing the +// client's request for the UpdateElasticsearchDomainConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateElasticsearchDomainConfig for more information on using the UpdateElasticsearchDomainConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateElasticsearchDomainConfigRequest method. +// req, resp := client.UpdateElasticsearchDomainConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ElasticsearchService) UpdateElasticsearchDomainConfigRequest(input *UpdateElasticsearchDomainConfigInput) (req *request.Request, output *UpdateElasticsearchDomainConfigOutput) { + op := &request.Operation{ + Name: opUpdateElasticsearchDomainConfig, + HTTPMethod: "POST", + HTTPPath: "/2015-01-01/es/domain/{DomainName}/config", + } + + if input == nil { + input = &UpdateElasticsearchDomainConfigInput{} + } + + output = &UpdateElasticsearchDomainConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateElasticsearchDomainConfig API operation for Amazon Elasticsearch Service. +// +// Modifies the cluster configuration of the specified Elasticsearch domain, +// setting as setting the instance type and the number of instances. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elasticsearch Service's +// API operation UpdateElasticsearchDomainConfig for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBaseException "BaseException" +// An error occurred while processing the request. +// +// * ErrCodeInternalException "InternalException" +// The request processing has failed because of an unknown error, exception +// or failure (the failure is internal to the service) . Gives http status code +// of 500. +// +// * ErrCodeInvalidTypeException "InvalidTypeException" +// An exception for trying to create or access sub-resource that is either invalid +// or not supported. Gives http status code of 409. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// An exception for trying to create more than allowed resources or sub-resources. +// Gives http status code of 409. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// An exception for accessing or deleting a resource that does not exist. Gives +// http status code of 400. +// +// * ErrCodeValidationException "ValidationException" +// An exception for missing / invalid input fields. Gives http status code of +// 400. +// +func (c *ElasticsearchService) UpdateElasticsearchDomainConfig(input *UpdateElasticsearchDomainConfigInput) (*UpdateElasticsearchDomainConfigOutput, error) { + req, out := c.UpdateElasticsearchDomainConfigRequest(input) + return out, req.Send() +} + +// UpdateElasticsearchDomainConfigWithContext is the same as UpdateElasticsearchDomainConfig with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateElasticsearchDomainConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ElasticsearchService) UpdateElasticsearchDomainConfigWithContext(ctx aws.Context, input *UpdateElasticsearchDomainConfigInput, opts ...request.Option) (*UpdateElasticsearchDomainConfigOutput, error) { + req, out := c.UpdateElasticsearchDomainConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpgradeElasticsearchDomain = "UpgradeElasticsearchDomain" + +// UpgradeElasticsearchDomainRequest generates a "aws/request.Request" representing the +// client's request for the UpgradeElasticsearchDomain operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpgradeElasticsearchDomain for more information on using the UpgradeElasticsearchDomain +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpgradeElasticsearchDomainRequest method. 
+//    req, resp := client.UpgradeElasticsearchDomainRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ElasticsearchService) UpgradeElasticsearchDomainRequest(input *UpgradeElasticsearchDomainInput) (req *request.Request, output *UpgradeElasticsearchDomainOutput) {
+	op := &request.Operation{
+		Name:       opUpgradeElasticsearchDomain,
+		HTTPMethod: "POST",
+		HTTPPath:   "/2015-01-01/es/upgradeDomain",
+	}
+
+	if input == nil {
+		input = &UpgradeElasticsearchDomainInput{}
+	}
+
+	output = &UpgradeElasticsearchDomainOutput{}
+	req = c.newRequest(op, input, output)
+	return
+}
+
+// UpgradeElasticsearchDomain API operation for Amazon Elasticsearch Service.
+//
+// Allows you to either upgrade your domain to a compatible Elasticsearch version
+// or perform an upgrade eligibility check.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon Elasticsearch Service's
+// API operation UpgradeElasticsearchDomain for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeBaseException "BaseException"
+// An error occurred while processing the request.
+//
+// * ErrCodeResourceNotFoundException "ResourceNotFoundException"
+// An exception for accessing or deleting a resource that does not exist. Gives
+// http status code of 400.
+//
+// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException"
+// An exception for creating a resource that already exists. Gives http status
+// code of 400.
+//
+// * ErrCodeDisabledOperationException "DisabledOperationException"
+// An error occurred because the client attempted to access an unsupported operation.
+// Gives http status code of 409.
+//
+// * ErrCodeValidationException "ValidationException"
+// An exception for missing / invalid input fields. Gives http status code of
+// 400.
+//
+// * ErrCodeInternalException "InternalException"
+// The request processing has failed because of an unknown error, exception
+// or failure (the failure is internal to the service) . Gives http status code
+// of 500.
+//
+func (c *ElasticsearchService) UpgradeElasticsearchDomain(input *UpgradeElasticsearchDomainInput) (*UpgradeElasticsearchDomainOutput, error) {
+	req, out := c.UpgradeElasticsearchDomainRequest(input)
+	return out, req.Send()
+}
+
+// UpgradeElasticsearchDomainWithContext is the same as UpgradeElasticsearchDomain with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpgradeElasticsearchDomain for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ElasticsearchService) UpgradeElasticsearchDomainWithContext(ctx aws.Context, input *UpgradeElasticsearchDomainInput, opts ...request.Option) (*UpgradeElasticsearchDomainOutput, error) {
+	req, out := c.UpgradeElasticsearchDomainRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+// The configured access rules for the domain's document and search endpoints,
+// and the current status of those rules.
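+//
+// Editor's sketch (an assumption, not part of the generated documentation):
+// given a value "aps" of this type returned by the service, the policy document
+// is available as a JSON string and the accompanying status describes its
+// deployment state.
+//
+//    policyJSON := aws.StringValue(aps.Options) // resource-, IP-, or IAM-based policy as JSON
+//    fmt.Println(policyJSON, aps.Status)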
+type AccessPoliciesStatus struct { + _ struct{} `type:"structure"` + + // The access policy configured for the Elasticsearch domain. Access policies + // may be resource-based, IP-based, or IAM-based. See Configuring Access Policies + // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies)for + // more information. + // + // Options is a required field + Options *string `type:"string" required:"true"` + + // The status of the access policy for the Elasticsearch domain. See OptionStatus + // for the status information that's included. + // + // Status is a required field + Status *OptionStatus `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AccessPoliciesStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccessPoliciesStatus) GoString() string { + return s.String() +} + +// SetOptions sets the Options field's value. +func (s *AccessPoliciesStatus) SetOptions(v string) *AccessPoliciesStatus { + s.Options = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *AccessPoliciesStatus) SetStatus(v *OptionStatus) *AccessPoliciesStatus { + s.Status = v + return s +} + +// Container for the parameters to the AddTags operation. Specify the tags that +// you want to attach to the Elasticsearch domain. +type AddTagsInput struct { + _ struct{} `type:"structure"` + + // Specify the ARN for which you want to add the tags. + // + // ARN is a required field + ARN *string `type:"string" required:"true"` + + // List of Tag that need to be added for the Elasticsearch domain. + // + // TagList is a required field + TagList []*Tag `type:"list" required:"true"` +} + +// String returns the string representation +func (s AddTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddTagsInput"} + if s.ARN == nil { + invalidParams.Add(request.NewErrParamRequired("ARN")) + } + if s.TagList == nil { + invalidParams.Add(request.NewErrParamRequired("TagList")) + } + if s.TagList != nil { + for i, v := range s.TagList { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TagList", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetARN sets the ARN field's value. +func (s *AddTagsInput) SetARN(v string) *AddTagsInput { + s.ARN = &v + return s +} + +// SetTagList sets the TagList field's value. +func (s *AddTagsInput) SetTagList(v []*Tag) *AddTagsInput { + s.TagList = v + return s +} + +type AddTagsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsOutput) GoString() string { + return s.String() +} + +// List of limits that are specific to a given InstanceType and for each of +// it's InstanceRole . +type AdditionalLimit struct { + _ struct{} `type:"structure"` + + // Name of Additional Limit is specific to a given InstanceType and for each + // of it's InstanceRole etc. 
Attributes and their details: MaximumNumberOfDataNodesSupported + // This attribute will be present in Master node only to specify how much data + // nodes upto which given ESPartitionInstanceTypecan support as master node. MaximumNumberOfDataNodesWithoutMasterNode + // This attribute will be present in Data node only to specify how much data + // nodes of given ESPartitionInstanceType + LimitName *string `type:"string"` + + // Value for given AdditionalLimit$LimitName . + LimitValues []*string `type:"list"` +} + +// String returns the string representation +func (s AdditionalLimit) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AdditionalLimit) GoString() string { + return s.String() +} + +// SetLimitName sets the LimitName field's value. +func (s *AdditionalLimit) SetLimitName(v string) *AdditionalLimit { + s.LimitName = &v + return s +} + +// SetLimitValues sets the LimitValues field's value. +func (s *AdditionalLimit) SetLimitValues(v []*string) *AdditionalLimit { + s.LimitValues = v + return s +} + +// Status of the advanced options for the specified Elasticsearch domain. Currently, +// the following advanced options are available: +// +// * Option to allow references to indices in an HTTP request body. Must +// be false when configuring access to individual sub-resources. By default, +// the value is true. See Configuration Advanced Options (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-advanced-options) +// for more information. +// * Option to specify the percentage of heap space that is allocated to +// field data. By default, this setting is unbounded. +// For more information, see Configuring Advanced Options (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-advanced-options). +type AdvancedOptionsStatus struct { + _ struct{} `type:"structure"` + + // Specifies the status of advanced options for the specified Elasticsearch + // domain. + // + // Options is a required field + Options map[string]*string `type:"map" required:"true"` + + // Specifies the status of OptionStatus for advanced options for the specified + // Elasticsearch domain. + // + // Status is a required field + Status *OptionStatus `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AdvancedOptionsStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AdvancedOptionsStatus) GoString() string { + return s.String() +} + +// SetOptions sets the Options field's value. +func (s *AdvancedOptionsStatus) SetOptions(v map[string]*string) *AdvancedOptionsStatus { + s.Options = v + return s +} + +// SetStatus sets the Status field's value. +func (s *AdvancedOptionsStatus) SetStatus(v *OptionStatus) *AdvancedOptionsStatus { + s.Status = v + return s +} + +// Container for the parameters to the CancelElasticsearchServiceSoftwareUpdate +// operation. Specifies the name of the Elasticsearch domain that you wish to +// cancel a service software update on. +type CancelElasticsearchServiceSoftwareUpdateInput struct { + _ struct{} `type:"structure"` + + // The name of the domain that you want to stop the latest service software + // update on. 
+ // + // DomainName is a required field + DomainName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelElasticsearchServiceSoftwareUpdateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelElasticsearchServiceSoftwareUpdateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelElasticsearchServiceSoftwareUpdateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelElasticsearchServiceSoftwareUpdateInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *CancelElasticsearchServiceSoftwareUpdateInput) SetDomainName(v string) *CancelElasticsearchServiceSoftwareUpdateInput { + s.DomainName = &v + return s +} + +// The result of a CancelElasticsearchServiceSoftwareUpdate operation. Contains +// the status of the update. +type CancelElasticsearchServiceSoftwareUpdateOutput struct { + _ struct{} `type:"structure"` + + // The current status of the Elasticsearch service software update. + ServiceSoftwareOptions *ServiceSoftwareOptions `type:"structure"` +} + +// String returns the string representation +func (s CancelElasticsearchServiceSoftwareUpdateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelElasticsearchServiceSoftwareUpdateOutput) GoString() string { + return s.String() +} + +// SetServiceSoftwareOptions sets the ServiceSoftwareOptions field's value. +func (s *CancelElasticsearchServiceSoftwareUpdateOutput) SetServiceSoftwareOptions(v *ServiceSoftwareOptions) *CancelElasticsearchServiceSoftwareUpdateOutput { + s.ServiceSoftwareOptions = v + return s +} + +// Options to specify the Cognito user and identity pools for Kibana authentication. +// For more information, see Amazon Cognito Authentication for Kibana (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html). +type CognitoOptions struct { + _ struct{} `type:"structure"` + + // Specifies the option to enable Cognito for Kibana authentication. + Enabled *bool `type:"boolean"` + + // Specifies the Cognito identity pool ID for Kibana authentication. + IdentityPoolId *string `min:"1" type:"string"` + + // Specifies the role ARN that provides Elasticsearch permissions for accessing + // Cognito resources. + RoleArn *string `min:"20" type:"string"` + + // Specifies the Cognito user pool ID for Kibana authentication. + UserPoolId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CognitoOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CognitoOptions) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CognitoOptions) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CognitoOptions"} + if s.IdentityPoolId != nil && len(*s.IdentityPoolId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IdentityPoolId", 1)) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.UserPoolId != nil && len(*s.UserPoolId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserPoolId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnabled sets the Enabled field's value. +func (s *CognitoOptions) SetEnabled(v bool) *CognitoOptions { + s.Enabled = &v + return s +} + +// SetIdentityPoolId sets the IdentityPoolId field's value. +func (s *CognitoOptions) SetIdentityPoolId(v string) *CognitoOptions { + s.IdentityPoolId = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CognitoOptions) SetRoleArn(v string) *CognitoOptions { + s.RoleArn = &v + return s +} + +// SetUserPoolId sets the UserPoolId field's value. +func (s *CognitoOptions) SetUserPoolId(v string) *CognitoOptions { + s.UserPoolId = &v + return s +} + +// Status of the Cognito options for the specified Elasticsearch domain. +type CognitoOptionsStatus struct { + _ struct{} `type:"structure"` + + // Specifies the Cognito options for the specified Elasticsearch domain. + // + // Options is a required field + Options *CognitoOptions `type:"structure" required:"true"` + + // Specifies the status of the Cognito options for the specified Elasticsearch + // domain. + // + // Status is a required field + Status *OptionStatus `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CognitoOptionsStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CognitoOptionsStatus) GoString() string { + return s.String() +} + +// SetOptions sets the Options field's value. +func (s *CognitoOptionsStatus) SetOptions(v *CognitoOptions) *CognitoOptionsStatus { + s.Options = v + return s +} + +// SetStatus sets the Status field's value. +func (s *CognitoOptionsStatus) SetStatus(v *OptionStatus) *CognitoOptionsStatus { + s.Status = v + return s +} + +// A map from an ElasticsearchVersion to a list of compatible ElasticsearchVersion +// s to which the domain can be upgraded. +type CompatibleVersionsMap struct { + _ struct{} `type:"structure"` + + // The current version of Elasticsearch on which a domain is. + SourceVersion *string `type:"string"` + + // List of supported elastic search versions. + TargetVersions []*string `type:"list"` +} + +// String returns the string representation +func (s CompatibleVersionsMap) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompatibleVersionsMap) GoString() string { + return s.String() +} + +// SetSourceVersion sets the SourceVersion field's value. +func (s *CompatibleVersionsMap) SetSourceVersion(v string) *CompatibleVersionsMap { + s.SourceVersion = &v + return s +} + +// SetTargetVersions sets the TargetVersions field's value. +func (s *CompatibleVersionsMap) SetTargetVersions(v []*string) *CompatibleVersionsMap { + s.TargetVersions = v + return s +} + +type CreateElasticsearchDomainInput struct { + _ struct{} `type:"structure"` + + // IAM access policy as a JSON-formatted string. + AccessPolicies *string `type:"string"` + + // Option to allow references to indices in an HTTP request body. 
Must be false + // when configuring access to individual sub-resources. By default, the value + // is true. See Configuration Advanced Options (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-advanced-options) + // for more information. + AdvancedOptions map[string]*string `type:"map"` + + // Options to specify the Cognito user and identity pools for Kibana authentication. + // For more information, see Amazon Cognito Authentication for Kibana (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html). + CognitoOptions *CognitoOptions `type:"structure"` + + // The name of the Elasticsearch domain that you are creating. Domain names + // are unique across the domains owned by an account within an AWS region. Domain + // names must start with a letter or number and can contain the following characters: + // a-z (lowercase), 0-9, and - (hyphen). + // + // DomainName is a required field + DomainName *string `min:"3" type:"string" required:"true"` + + // Options to enable, disable and specify the type and size of EBS storage volumes. + EBSOptions *EBSOptions `type:"structure"` + + // Configuration options for an Elasticsearch domain. Specifies the instance + // type and number of instances in the domain cluster. + ElasticsearchClusterConfig *ElasticsearchClusterConfig `type:"structure"` + + // String of format X.Y to specify version for the Elasticsearch domain eg. + // "1.5" or "2.3". For more information, see Creating Elasticsearch Domains + // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomains) + // in the Amazon Elasticsearch Service Developer Guide. + ElasticsearchVersion *string `type:"string"` + + // Specifies the Encryption At Rest Options. + EncryptionAtRestOptions *EncryptionAtRestOptions `type:"structure"` + + // Map of LogType and LogPublishingOption, each containing options to publish + // a given type of Elasticsearch log. + LogPublishingOptions map[string]*LogPublishingOption `type:"map"` + + // Specifies the NodeToNodeEncryptionOptions. + NodeToNodeEncryptionOptions *NodeToNodeEncryptionOptions `type:"structure"` + + // Option to set time, in UTC format, of the daily automated snapshot. Default + // value is 0 hours. + SnapshotOptions *SnapshotOptions `type:"structure"` + + // Options to specify the subnets and security groups for VPC endpoint. For + // more information, see Creating a VPC (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html#es-creating-vpc) + // in VPC Endpoints for Amazon Elasticsearch Service Domains + VPCOptions *VPCOptions `type:"structure"` +} + +// String returns the string representation +func (s CreateElasticsearchDomainInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateElasticsearchDomainInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
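+//
+// Editor's sketch (an assumption, not part of the generated documentation):
+// with DomainName left unset, Validate reports the missing required field
+// declared above without making any API call.
+//
+//    in := &CreateElasticsearchDomainInput{}
+//    if err := in.Validate(); err != nil {
+//        fmt.Println(err)
+//    }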
+func (s *CreateElasticsearchDomainInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateElasticsearchDomainInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + if s.CognitoOptions != nil { + if err := s.CognitoOptions.Validate(); err != nil { + invalidParams.AddNested("CognitoOptions", err.(request.ErrInvalidParams)) + } + } + if s.EncryptionAtRestOptions != nil { + if err := s.EncryptionAtRestOptions.Validate(); err != nil { + invalidParams.AddNested("EncryptionAtRestOptions", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccessPolicies sets the AccessPolicies field's value. +func (s *CreateElasticsearchDomainInput) SetAccessPolicies(v string) *CreateElasticsearchDomainInput { + s.AccessPolicies = &v + return s +} + +// SetAdvancedOptions sets the AdvancedOptions field's value. +func (s *CreateElasticsearchDomainInput) SetAdvancedOptions(v map[string]*string) *CreateElasticsearchDomainInput { + s.AdvancedOptions = v + return s +} + +// SetCognitoOptions sets the CognitoOptions field's value. +func (s *CreateElasticsearchDomainInput) SetCognitoOptions(v *CognitoOptions) *CreateElasticsearchDomainInput { + s.CognitoOptions = v + return s +} + +// SetDomainName sets the DomainName field's value. +func (s *CreateElasticsearchDomainInput) SetDomainName(v string) *CreateElasticsearchDomainInput { + s.DomainName = &v + return s +} + +// SetEBSOptions sets the EBSOptions field's value. +func (s *CreateElasticsearchDomainInput) SetEBSOptions(v *EBSOptions) *CreateElasticsearchDomainInput { + s.EBSOptions = v + return s +} + +// SetElasticsearchClusterConfig sets the ElasticsearchClusterConfig field's value. +func (s *CreateElasticsearchDomainInput) SetElasticsearchClusterConfig(v *ElasticsearchClusterConfig) *CreateElasticsearchDomainInput { + s.ElasticsearchClusterConfig = v + return s +} + +// SetElasticsearchVersion sets the ElasticsearchVersion field's value. +func (s *CreateElasticsearchDomainInput) SetElasticsearchVersion(v string) *CreateElasticsearchDomainInput { + s.ElasticsearchVersion = &v + return s +} + +// SetEncryptionAtRestOptions sets the EncryptionAtRestOptions field's value. +func (s *CreateElasticsearchDomainInput) SetEncryptionAtRestOptions(v *EncryptionAtRestOptions) *CreateElasticsearchDomainInput { + s.EncryptionAtRestOptions = v + return s +} + +// SetLogPublishingOptions sets the LogPublishingOptions field's value. +func (s *CreateElasticsearchDomainInput) SetLogPublishingOptions(v map[string]*LogPublishingOption) *CreateElasticsearchDomainInput { + s.LogPublishingOptions = v + return s +} + +// SetNodeToNodeEncryptionOptions sets the NodeToNodeEncryptionOptions field's value. +func (s *CreateElasticsearchDomainInput) SetNodeToNodeEncryptionOptions(v *NodeToNodeEncryptionOptions) *CreateElasticsearchDomainInput { + s.NodeToNodeEncryptionOptions = v + return s +} + +// SetSnapshotOptions sets the SnapshotOptions field's value. +func (s *CreateElasticsearchDomainInput) SetSnapshotOptions(v *SnapshotOptions) *CreateElasticsearchDomainInput { + s.SnapshotOptions = v + return s +} + +// SetVPCOptions sets the VPCOptions field's value. 
+func (s *CreateElasticsearchDomainInput) SetVPCOptions(v *VPCOptions) *CreateElasticsearchDomainInput { + s.VPCOptions = v + return s +} + +// The result of a CreateElasticsearchDomain operation. Contains the status +// of the newly created Elasticsearch domain. +type CreateElasticsearchDomainOutput struct { + _ struct{} `type:"structure"` + + // The status of the newly created Elasticsearch domain. + DomainStatus *ElasticsearchDomainStatus `type:"structure"` +} + +// String returns the string representation +func (s CreateElasticsearchDomainOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateElasticsearchDomainOutput) GoString() string { + return s.String() +} + +// SetDomainStatus sets the DomainStatus field's value. +func (s *CreateElasticsearchDomainOutput) SetDomainStatus(v *ElasticsearchDomainStatus) *CreateElasticsearchDomainOutput { + s.DomainStatus = v + return s +} + +// Container for the parameters to the DeleteElasticsearchDomain operation. +// Specifies the name of the Elasticsearch domain that you want to delete. +type DeleteElasticsearchDomainInput struct { + _ struct{} `type:"structure"` + + // The name of the Elasticsearch domain that you want to permanently delete. + // + // DomainName is a required field + DomainName *string `location:"uri" locationName:"DomainName" min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteElasticsearchDomainInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteElasticsearchDomainInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteElasticsearchDomainInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteElasticsearchDomainInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *DeleteElasticsearchDomainInput) SetDomainName(v string) *DeleteElasticsearchDomainInput { + s.DomainName = &v + return s +} + +// The result of a DeleteElasticsearchDomain request. Contains the status of +// the pending deletion, or no status if the domain and all of its resources +// have been deleted. +type DeleteElasticsearchDomainOutput struct { + _ struct{} `type:"structure"` + + // The status of the Elasticsearch domain being deleted. + DomainStatus *ElasticsearchDomainStatus `type:"structure"` +} + +// String returns the string representation +func (s DeleteElasticsearchDomainOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteElasticsearchDomainOutput) GoString() string { + return s.String() +} + +// SetDomainStatus sets the DomainStatus field's value. 
+func (s *DeleteElasticsearchDomainOutput) SetDomainStatus(v *ElasticsearchDomainStatus) *DeleteElasticsearchDomainOutput { + s.DomainStatus = v + return s +} + +type DeleteElasticsearchServiceRoleInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteElasticsearchServiceRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteElasticsearchServiceRoleInput) GoString() string { + return s.String() +} + +type DeleteElasticsearchServiceRoleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteElasticsearchServiceRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteElasticsearchServiceRoleOutput) GoString() string { + return s.String() +} + +// Container for the parameters to the DescribeElasticsearchDomainConfig operation. +// Specifies the domain name for which you want configuration information. +type DescribeElasticsearchDomainConfigInput struct { + _ struct{} `type:"structure"` + + // The Elasticsearch domain that you want to get information about. + // + // DomainName is a required field + DomainName *string `location:"uri" locationName:"DomainName" min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeElasticsearchDomainConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeElasticsearchDomainConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeElasticsearchDomainConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeElasticsearchDomainConfigInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *DescribeElasticsearchDomainConfigInput) SetDomainName(v string) *DescribeElasticsearchDomainConfigInput { + s.DomainName = &v + return s +} + +// The result of a DescribeElasticsearchDomainConfig request. Contains the configuration +// information of the requested domain. +type DescribeElasticsearchDomainConfigOutput struct { + _ struct{} `type:"structure"` + + // The configuration information of the domain requested in the DescribeElasticsearchDomainConfig + // request. + // + // DomainConfig is a required field + DomainConfig *ElasticsearchDomainConfig `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DescribeElasticsearchDomainConfigOutput) String() string { + return awsutil.Prettify(s) +} // GoString returns the string representation -func (s AddTagsInput) GoString() string { +func (s DescribeElasticsearchDomainConfigOutput) GoString() string { + return s.String() +} + +// SetDomainConfig sets the DomainConfig field's value. +func (s *DescribeElasticsearchDomainConfigOutput) SetDomainConfig(v *ElasticsearchDomainConfig) *DescribeElasticsearchDomainConfigOutput { + s.DomainConfig = v + return s +} + +// Container for the parameters to the DescribeElasticsearchDomain operation. 
+type DescribeElasticsearchDomainInput struct { + _ struct{} `type:"structure"` + + // The name of the Elasticsearch domain for which you want information. + // + // DomainName is a required field + DomainName *string `location:"uri" locationName:"DomainName" min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeElasticsearchDomainInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeElasticsearchDomainInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AddTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AddTagsInput"} - if s.ARN == nil { - invalidParams.Add(request.NewErrParamRequired("ARN")) +func (s *DescribeElasticsearchDomainInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeElasticsearchDomainInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) } - if s.TagList == nil { - invalidParams.Add(request.NewErrParamRequired("TagList")) + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) } - if s.TagList != nil { - for i, v := range s.TagList { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TagList", i), err.(request.ErrInvalidParams)) - } - } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *DescribeElasticsearchDomainInput) SetDomainName(v string) *DescribeElasticsearchDomainInput { + s.DomainName = &v + return s +} + +// The result of a DescribeElasticsearchDomain request. Contains the status +// of the domain specified in the request. +type DescribeElasticsearchDomainOutput struct { + _ struct{} `type:"structure"` + + // The current status of the Elasticsearch domain. + // + // DomainStatus is a required field + DomainStatus *ElasticsearchDomainStatus `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DescribeElasticsearchDomainOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeElasticsearchDomainOutput) GoString() string { + return s.String() +} + +// SetDomainStatus sets the DomainStatus field's value. +func (s *DescribeElasticsearchDomainOutput) SetDomainStatus(v *ElasticsearchDomainStatus) *DescribeElasticsearchDomainOutput { + s.DomainStatus = v + return s +} + +// Container for the parameters to the DescribeElasticsearchDomains operation. +// By default, the API returns the status of all Elasticsearch domains. +type DescribeElasticsearchDomainsInput struct { + _ struct{} `type:"structure"` + + // The Elasticsearch domains for which you want information. + // + // DomainNames is a required field + DomainNames []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeElasticsearchDomainsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeElasticsearchDomainsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeElasticsearchDomainsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeElasticsearchDomainsInput"} + if s.DomainNames == nil { + invalidParams.Add(request.NewErrParamRequired("DomainNames")) } if invalidParams.Len() > 0 { @@ -1495,1684 +3290,2019 @@ func (s *AddTagsInput) Validate() error { return nil } -// SetARN sets the ARN field's value. -func (s *AddTagsInput) SetARN(v string) *AddTagsInput { - s.ARN = &v +// SetDomainNames sets the DomainNames field's value. +func (s *DescribeElasticsearchDomainsInput) SetDomainNames(v []*string) *DescribeElasticsearchDomainsInput { + s.DomainNames = v return s } -// SetTagList sets the TagList field's value. -func (s *AddTagsInput) SetTagList(v []*Tag) *AddTagsInput { - s.TagList = v +// The result of a DescribeElasticsearchDomains request. Contains the status +// of the specified domains or all domains owned by the account. +type DescribeElasticsearchDomainsOutput struct { + _ struct{} `type:"structure"` + + // The status of the domains requested in the DescribeElasticsearchDomains request. + // + // DomainStatusList is a required field + DomainStatusList []*ElasticsearchDomainStatus `type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeElasticsearchDomainsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeElasticsearchDomainsOutput) GoString() string { + return s.String() +} + +// SetDomainStatusList sets the DomainStatusList field's value. +func (s *DescribeElasticsearchDomainsOutput) SetDomainStatusList(v []*ElasticsearchDomainStatus) *DescribeElasticsearchDomainsOutput { + s.DomainStatusList = v return s } -type AddTagsOutput struct { +// Container for the parameters to DescribeElasticsearchInstanceTypeLimits operation. +type DescribeElasticsearchInstanceTypeLimitsInput struct { + _ struct{} `type:"structure"` + + // DomainName represents the name of the Domain that we are trying to modify. + // This should be present only if we are querying for Elasticsearch Limits for + // existing domain. + DomainName *string `location:"querystring" locationName:"domainName" min:"3" type:"string"` + + // Version of Elasticsearch for which Limits are needed. + // + // ElasticsearchVersion is a required field + ElasticsearchVersion *string `location:"uri" locationName:"ElasticsearchVersion" type:"string" required:"true"` + + // The instance type for an Elasticsearch cluster for which Elasticsearch Limits + // are needed. + // + // InstanceType is a required field + InstanceType *string `location:"uri" locationName:"InstanceType" type:"string" required:"true" enum:"ESPartitionInstanceType"` +} + +// String returns the string representation +func (s DescribeElasticsearchInstanceTypeLimitsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeElasticsearchInstanceTypeLimitsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeElasticsearchInstanceTypeLimitsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeElasticsearchInstanceTypeLimitsInput"} + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + if s.ElasticsearchVersion == nil { + invalidParams.Add(request.NewErrParamRequired("ElasticsearchVersion")) + } + if s.InstanceType == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *DescribeElasticsearchInstanceTypeLimitsInput) SetDomainName(v string) *DescribeElasticsearchInstanceTypeLimitsInput { + s.DomainName = &v + return s +} + +// SetElasticsearchVersion sets the ElasticsearchVersion field's value. +func (s *DescribeElasticsearchInstanceTypeLimitsInput) SetElasticsearchVersion(v string) *DescribeElasticsearchInstanceTypeLimitsInput { + s.ElasticsearchVersion = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *DescribeElasticsearchInstanceTypeLimitsInput) SetInstanceType(v string) *DescribeElasticsearchInstanceTypeLimitsInput { + s.InstanceType = &v + return s +} + +// Container for the parameters received from DescribeElasticsearchInstanceTypeLimits +// operation. +type DescribeElasticsearchInstanceTypeLimitsOutput struct { _ struct{} `type:"structure"` + + // Map of Role of the Instance and Limits that are applicable. Role performed + // by given Instance in Elasticsearch can be one of the following: Data: If + // the given InstanceType is used as Data node + // Master: If the given InstanceType is used as Master node + LimitsByRole map[string]*Limits `type:"map"` } // String returns the string representation -func (s AddTagsOutput) String() string { +func (s DescribeElasticsearchInstanceTypeLimitsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AddTagsOutput) GoString() string { +func (s DescribeElasticsearchInstanceTypeLimitsOutput) GoString() string { return s.String() } -// List of limits that are specific to a given InstanceType and for each of -// it's InstanceRole . -type AdditionalLimit struct { +// SetLimitsByRole sets the LimitsByRole field's value. +func (s *DescribeElasticsearchInstanceTypeLimitsOutput) SetLimitsByRole(v map[string]*Limits) *DescribeElasticsearchInstanceTypeLimitsOutput { + s.LimitsByRole = v + return s +} + +// Container for parameters to DescribeReservedElasticsearchInstanceOfferings +type DescribeReservedElasticsearchInstanceOfferingsInput struct { _ struct{} `type:"structure"` - // Name of Additional Limit is specific to a given InstanceType and for each - // of it's InstanceRole etc. Attributes and their details: MaximumNumberOfDataNodesSupported - // This attribute will be present in Master node only to specify how much data - // nodes upto which given ESPartitionInstanceTypecan support as master node. MaximumNumberOfDataNodesWithoutMasterNode - // This attribute will be present in Data node only to specify how much data - // nodes of given ESPartitionInstanceType - LimitName *string `type:"string"` + // Set this value to limit the number of results returned. If not specified, + // defaults to 100. + MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` - // Value for given AdditionalLimit$LimitName . 
- LimitValues []*string `type:"list"` + // NextToken should be sent in case if earlier API call produced result containing + // NextToken. It is used for pagination. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + + // The offering identifier filter value. Use this parameter to show only the + // available offering that matches the specified reservation identifier. + ReservedElasticsearchInstanceOfferingId *string `location:"querystring" locationName:"offeringId" type:"string"` } // String returns the string representation -func (s AdditionalLimit) String() string { +func (s DescribeReservedElasticsearchInstanceOfferingsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AdditionalLimit) GoString() string { +func (s DescribeReservedElasticsearchInstanceOfferingsInput) GoString() string { return s.String() } -// SetLimitName sets the LimitName field's value. -func (s *AdditionalLimit) SetLimitName(v string) *AdditionalLimit { - s.LimitName = &v +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeReservedElasticsearchInstanceOfferingsInput) SetMaxResults(v int64) *DescribeReservedElasticsearchInstanceOfferingsInput { + s.MaxResults = &v return s } -// SetLimitValues sets the LimitValues field's value. -func (s *AdditionalLimit) SetLimitValues(v []*string) *AdditionalLimit { - s.LimitValues = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeReservedElasticsearchInstanceOfferingsInput) SetNextToken(v string) *DescribeReservedElasticsearchInstanceOfferingsInput { + s.NextToken = &v return s } -// Status of the advanced options for the specified Elasticsearch domain. Currently, -// the following advanced options are available: -// -// * Option to allow references to indices in an HTTP request body. Must -// be false when configuring access to individual sub-resources. By default, -// the value is true. See Configuration Advanced Options (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-advanced-options) -// for more information. -// * Option to specify the percentage of heap space that is allocated to -// field data. By default, this setting is unbounded. -// For more information, see Configuring Advanced Options (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-advanced-options). -type AdvancedOptionsStatus struct { +// SetReservedElasticsearchInstanceOfferingId sets the ReservedElasticsearchInstanceOfferingId field's value. +func (s *DescribeReservedElasticsearchInstanceOfferingsInput) SetReservedElasticsearchInstanceOfferingId(v string) *DescribeReservedElasticsearchInstanceOfferingsInput { + s.ReservedElasticsearchInstanceOfferingId = &v + return s +} + +// Container for results from DescribeReservedElasticsearchInstanceOfferings +type DescribeReservedElasticsearchInstanceOfferingsOutput struct { _ struct{} `type:"structure"` - // Specifies the status of advanced options for the specified Elasticsearch - // domain. - // - // Options is a required field - Options map[string]*string `type:"map" required:"true"` + // Provides an identifier to allow retrieval of paginated results. + NextToken *string `type:"string"` - // Specifies the status of OptionStatus for advanced options for the specified - // Elasticsearch domain. 
- // - // Status is a required field - Status *OptionStatus `type:"structure" required:"true"` + // List of reserved Elasticsearch instance offerings + ReservedElasticsearchInstanceOfferings []*ReservedElasticsearchInstanceOffering `type:"list"` } // String returns the string representation -func (s AdvancedOptionsStatus) String() string { +func (s DescribeReservedElasticsearchInstanceOfferingsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AdvancedOptionsStatus) GoString() string { +func (s DescribeReservedElasticsearchInstanceOfferingsOutput) GoString() string { return s.String() } -// SetOptions sets the Options field's value. -func (s *AdvancedOptionsStatus) SetOptions(v map[string]*string) *AdvancedOptionsStatus { - s.Options = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeReservedElasticsearchInstanceOfferingsOutput) SetNextToken(v string) *DescribeReservedElasticsearchInstanceOfferingsOutput { + s.NextToken = &v return s } -// SetStatus sets the Status field's value. -func (s *AdvancedOptionsStatus) SetStatus(v *OptionStatus) *AdvancedOptionsStatus { - s.Status = v +// SetReservedElasticsearchInstanceOfferings sets the ReservedElasticsearchInstanceOfferings field's value. +func (s *DescribeReservedElasticsearchInstanceOfferingsOutput) SetReservedElasticsearchInstanceOfferings(v []*ReservedElasticsearchInstanceOffering) *DescribeReservedElasticsearchInstanceOfferingsOutput { + s.ReservedElasticsearchInstanceOfferings = v return s } -type CreateElasticsearchDomainInput struct { +// Container for parameters to DescribeReservedElasticsearchInstances +type DescribeReservedElasticsearchInstancesInput struct { _ struct{} `type:"structure"` - // IAM access policy as a JSON-formatted string. - AccessPolicies *string `type:"string"` + // Set this value to limit the number of results returned. If not specified, + // defaults to 100. + MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` - // Option to allow references to indices in an HTTP request body. Must be false - // when configuring access to individual sub-resources. By default, the value - // is true. See Configuration Advanced Options (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-advanced-options) - // for more information. - AdvancedOptions map[string]*string `type:"map"` + // NextToken should be sent in case if earlier API call produced result containing + // NextToken. It is used for pagination. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The name of the Elasticsearch domain that you are creating. Domain names - // are unique across the domains owned by an account within an AWS region. Domain - // names must start with a letter or number and can contain the following characters: - // a-z (lowercase), 0-9, and - (hyphen). - // - // DomainName is a required field - DomainName *string `min:"3" type:"string" required:"true"` + // The reserved instance identifier filter value. Use this parameter to show + // only the reservation that matches the specified reserved Elasticsearch instance + // ID. + ReservedElasticsearchInstanceId *string `location:"querystring" locationName:"reservationId" type:"string"` +} - // Options to enable, disable and specify the type and size of EBS storage volumes. 
- EBSOptions *EBSOptions `type:"structure"` +// String returns the string representation +func (s DescribeReservedElasticsearchInstancesInput) String() string { + return awsutil.Prettify(s) +} - // Configuration options for an Elasticsearch domain. Specifies the instance - // type and number of instances in the domain cluster. - ElasticsearchClusterConfig *ElasticsearchClusterConfig `type:"structure"` +// GoString returns the string representation +func (s DescribeReservedElasticsearchInstancesInput) GoString() string { + return s.String() +} - // String of format X.Y to specify version for the Elasticsearch domain eg. - // "1.5" or "2.3". For more information, see Creating Elasticsearch Domains - // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomains) - // in the Amazon Elasticsearch Service Developer Guide. - ElasticsearchVersion *string `type:"string"` +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeReservedElasticsearchInstancesInput) SetMaxResults(v int64) *DescribeReservedElasticsearchInstancesInput { + s.MaxResults = &v + return s +} - // Specifies the Encryption At Rest Options. - EncryptionAtRestOptions *EncryptionAtRestOptions `type:"structure"` +// SetNextToken sets the NextToken field's value. +func (s *DescribeReservedElasticsearchInstancesInput) SetNextToken(v string) *DescribeReservedElasticsearchInstancesInput { + s.NextToken = &v + return s +} - // Map of LogType and LogPublishingOption, each containing options to publish - // a given type of Elasticsearch log. - LogPublishingOptions map[string]*LogPublishingOption `type:"map"` +// SetReservedElasticsearchInstanceId sets the ReservedElasticsearchInstanceId field's value. +func (s *DescribeReservedElasticsearchInstancesInput) SetReservedElasticsearchInstanceId(v string) *DescribeReservedElasticsearchInstancesInput { + s.ReservedElasticsearchInstanceId = &v + return s +} - // Option to set time, in UTC format, of the daily automated snapshot. Default - // value is 0 hours. - SnapshotOptions *SnapshotOptions `type:"structure"` +// Container for results from DescribeReservedElasticsearchInstances +type DescribeReservedElasticsearchInstancesOutput struct { + _ struct{} `type:"structure"` - // Options to specify the subnets and security groups for VPC endpoint. For - // more information, see Creating a VPC (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html#es-creating-vpc) - // in VPC Endpoints for Amazon Elasticsearch Service Domains - VPCOptions *VPCOptions `type:"structure"` + // Provides an identifier to allow retrieval of paginated results. + NextToken *string `type:"string"` + + // List of reserved Elasticsearch instances. + ReservedElasticsearchInstances []*ReservedElasticsearchInstance `type:"list"` } // String returns the string representation -func (s CreateElasticsearchDomainInput) String() string { +func (s DescribeReservedElasticsearchInstancesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateElasticsearchDomainInput) GoString() string { +func (s DescribeReservedElasticsearchInstancesOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateElasticsearchDomainInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateElasticsearchDomainInput"} - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) - } - if s.DomainName != nil && len(*s.DomainName) < 3 { - invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) - } - if s.EncryptionAtRestOptions != nil { - if err := s.EncryptionAtRestOptions.Validate(); err != nil { - invalidParams.AddNested("EncryptionAtRestOptions", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetNextToken sets the NextToken field's value. +func (s *DescribeReservedElasticsearchInstancesOutput) SetNextToken(v string) *DescribeReservedElasticsearchInstancesOutput { + s.NextToken = &v + return s } -// SetAccessPolicies sets the AccessPolicies field's value. -func (s *CreateElasticsearchDomainInput) SetAccessPolicies(v string) *CreateElasticsearchDomainInput { - s.AccessPolicies = &v +// SetReservedElasticsearchInstances sets the ReservedElasticsearchInstances field's value. +func (s *DescribeReservedElasticsearchInstancesOutput) SetReservedElasticsearchInstances(v []*ReservedElasticsearchInstance) *DescribeReservedElasticsearchInstancesOutput { + s.ReservedElasticsearchInstances = v return s } -// SetAdvancedOptions sets the AdvancedOptions field's value. -func (s *CreateElasticsearchDomainInput) SetAdvancedOptions(v map[string]*string) *CreateElasticsearchDomainInput { - s.AdvancedOptions = v - return s +type DomainInfo struct { + _ struct{} `type:"structure"` + + // Specifies the DomainName. + DomainName *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s DomainInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DomainInfo) GoString() string { + return s.String() } // SetDomainName sets the DomainName field's value. -func (s *CreateElasticsearchDomainInput) SetDomainName(v string) *CreateElasticsearchDomainInput { +func (s *DomainInfo) SetDomainName(v string) *DomainInfo { s.DomainName = &v return s } -// SetEBSOptions sets the EBSOptions field's value. -func (s *CreateElasticsearchDomainInput) SetEBSOptions(v *EBSOptions) *CreateElasticsearchDomainInput { - s.EBSOptions = v - return s +// Options to enable, disable, and specify the properties of EBS storage volumes. +// For more information, see Configuring EBS-based Storage (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-ebs). +type EBSOptions struct { + _ struct{} `type:"structure"` + + // Specifies whether EBS-based storage is enabled. + EBSEnabled *bool `type:"boolean"` + + // Specifies the IOPD for a Provisioned IOPS EBS volume (SSD). + Iops *int64 `type:"integer"` + + // Integer to specify the size of an EBS volume. + VolumeSize *int64 `type:"integer"` + + // Specifies the volume type for EBS-based storage. + VolumeType *string `type:"string" enum:"VolumeType"` } -// SetElasticsearchClusterConfig sets the ElasticsearchClusterConfig field's value. -func (s *CreateElasticsearchDomainInput) SetElasticsearchClusterConfig(v *ElasticsearchClusterConfig) *CreateElasticsearchDomainInput { - s.ElasticsearchClusterConfig = v - return s +// String returns the string representation +func (s EBSOptions) String() string { + return awsutil.Prettify(s) } -// SetElasticsearchVersion sets the ElasticsearchVersion field's value. 
-func (s *CreateElasticsearchDomainInput) SetElasticsearchVersion(v string) *CreateElasticsearchDomainInput { - s.ElasticsearchVersion = &v - return s +// GoString returns the string representation +func (s EBSOptions) GoString() string { + return s.String() } -// SetEncryptionAtRestOptions sets the EncryptionAtRestOptions field's value. -func (s *CreateElasticsearchDomainInput) SetEncryptionAtRestOptions(v *EncryptionAtRestOptions) *CreateElasticsearchDomainInput { - s.EncryptionAtRestOptions = v +// SetEBSEnabled sets the EBSEnabled field's value. +func (s *EBSOptions) SetEBSEnabled(v bool) *EBSOptions { + s.EBSEnabled = &v return s } -// SetLogPublishingOptions sets the LogPublishingOptions field's value. -func (s *CreateElasticsearchDomainInput) SetLogPublishingOptions(v map[string]*LogPublishingOption) *CreateElasticsearchDomainInput { - s.LogPublishingOptions = v +// SetIops sets the Iops field's value. +func (s *EBSOptions) SetIops(v int64) *EBSOptions { + s.Iops = &v return s } -// SetSnapshotOptions sets the SnapshotOptions field's value. -func (s *CreateElasticsearchDomainInput) SetSnapshotOptions(v *SnapshotOptions) *CreateElasticsearchDomainInput { - s.SnapshotOptions = v +// SetVolumeSize sets the VolumeSize field's value. +func (s *EBSOptions) SetVolumeSize(v int64) *EBSOptions { + s.VolumeSize = &v return s } -// SetVPCOptions sets the VPCOptions field's value. -func (s *CreateElasticsearchDomainInput) SetVPCOptions(v *VPCOptions) *CreateElasticsearchDomainInput { - s.VPCOptions = v +// SetVolumeType sets the VolumeType field's value. +func (s *EBSOptions) SetVolumeType(v string) *EBSOptions { + s.VolumeType = &v return s } -// The result of a CreateElasticsearchDomain operation. Contains the status -// of the newly created Elasticsearch domain. -type CreateElasticsearchDomainOutput struct { +// Status of the EBS options for the specified Elasticsearch domain. +type EBSOptionsStatus struct { _ struct{} `type:"structure"` - // The status of the newly created Elasticsearch domain. - DomainStatus *ElasticsearchDomainStatus `type:"structure"` + // Specifies the EBS options for the specified Elasticsearch domain. + // + // Options is a required field + Options *EBSOptions `type:"structure" required:"true"` + + // Specifies the status of the EBS options for the specified Elasticsearch domain. + // + // Status is a required field + Status *OptionStatus `type:"structure" required:"true"` } // String returns the string representation -func (s CreateElasticsearchDomainOutput) String() string { +func (s EBSOptionsStatus) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateElasticsearchDomainOutput) GoString() string { +func (s EBSOptionsStatus) GoString() string { return s.String() } -// SetDomainStatus sets the DomainStatus field's value. -func (s *CreateElasticsearchDomainOutput) SetDomainStatus(v *ElasticsearchDomainStatus) *CreateElasticsearchDomainOutput { - s.DomainStatus = v +// SetOptions sets the Options field's value. +func (s *EBSOptionsStatus) SetOptions(v *EBSOptions) *EBSOptionsStatus { + s.Options = v return s } -// Container for the parameters to the DeleteElasticsearchDomain operation. -// Specifies the name of the Elasticsearch domain that you want to delete. -type DeleteElasticsearchDomainInput struct { +// SetStatus sets the Status field's value. 
+func (s *EBSOptionsStatus) SetStatus(v *OptionStatus) *EBSOptionsStatus { + s.Status = v + return s +} + +// Specifies the configuration for the domain cluster, such as the type and +// number of instances. +type ElasticsearchClusterConfig struct { _ struct{} `type:"structure"` - // The name of the Elasticsearch domain that you want to permanently delete. - // - // DomainName is a required field - DomainName *string `location:"uri" locationName:"DomainName" min:"3" type:"string" required:"true"` + // Total number of dedicated master nodes, active and on standby, for the cluster. + DedicatedMasterCount *int64 `type:"integer"` + + // A boolean value to indicate whether a dedicated master node is enabled. See + // About Dedicated Master Nodes (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html#es-managedomains-dedicatedmasternodes) + // for more information. + DedicatedMasterEnabled *bool `type:"boolean"` + + // The instance type for a dedicated master node. + DedicatedMasterType *string `type:"string" enum:"ESPartitionInstanceType"` + + // The number of instances in the specified domain cluster. + InstanceCount *int64 `type:"integer"` + + // The instance type for an Elasticsearch cluster. + InstanceType *string `type:"string" enum:"ESPartitionInstanceType"` + + // A boolean value to indicate whether zone awareness is enabled. See About + // Zone Awareness (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html#es-managedomains-zoneawareness) + // for more information. + ZoneAwarenessEnabled *bool `type:"boolean"` } // String returns the string representation -func (s DeleteElasticsearchDomainInput) String() string { +func (s ElasticsearchClusterConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteElasticsearchDomainInput) GoString() string { +func (s ElasticsearchClusterConfig) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteElasticsearchDomainInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteElasticsearchDomainInput"} - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) - } - if s.DomainName != nil && len(*s.DomainName) < 3 { - invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetDedicatedMasterCount sets the DedicatedMasterCount field's value. +func (s *ElasticsearchClusterConfig) SetDedicatedMasterCount(v int64) *ElasticsearchClusterConfig { + s.DedicatedMasterCount = &v + return s } -// SetDomainName sets the DomainName field's value. -func (s *DeleteElasticsearchDomainInput) SetDomainName(v string) *DeleteElasticsearchDomainInput { - s.DomainName = &v +// SetDedicatedMasterEnabled sets the DedicatedMasterEnabled field's value. +func (s *ElasticsearchClusterConfig) SetDedicatedMasterEnabled(v bool) *ElasticsearchClusterConfig { + s.DedicatedMasterEnabled = &v return s } -// The result of a DeleteElasticsearchDomain request. Contains the status of -// the pending deletion, or no status if the domain and all of its resources -// have been deleted. -type DeleteElasticsearchDomainOutput struct { - _ struct{} `type:"structure"` - - // The status of the Elasticsearch domain being deleted. 
- DomainStatus *ElasticsearchDomainStatus `type:"structure"` +// SetDedicatedMasterType sets the DedicatedMasterType field's value. +func (s *ElasticsearchClusterConfig) SetDedicatedMasterType(v string) *ElasticsearchClusterConfig { + s.DedicatedMasterType = &v + return s } -// String returns the string representation -func (s DeleteElasticsearchDomainOutput) String() string { - return awsutil.Prettify(s) +// SetInstanceCount sets the InstanceCount field's value. +func (s *ElasticsearchClusterConfig) SetInstanceCount(v int64) *ElasticsearchClusterConfig { + s.InstanceCount = &v + return s } -// GoString returns the string representation -func (s DeleteElasticsearchDomainOutput) GoString() string { - return s.String() +// SetInstanceType sets the InstanceType field's value. +func (s *ElasticsearchClusterConfig) SetInstanceType(v string) *ElasticsearchClusterConfig { + s.InstanceType = &v + return s } -// SetDomainStatus sets the DomainStatus field's value. -func (s *DeleteElasticsearchDomainOutput) SetDomainStatus(v *ElasticsearchDomainStatus) *DeleteElasticsearchDomainOutput { - s.DomainStatus = v +// SetZoneAwarenessEnabled sets the ZoneAwarenessEnabled field's value. +func (s *ElasticsearchClusterConfig) SetZoneAwarenessEnabled(v bool) *ElasticsearchClusterConfig { + s.ZoneAwarenessEnabled = &v return s } -type DeleteElasticsearchServiceRoleInput struct { +// Specifies the configuration status for the specified Elasticsearch domain. +type ElasticsearchClusterConfigStatus struct { _ struct{} `type:"structure"` + + // Specifies the cluster configuration for the specified Elasticsearch domain. + // + // Options is a required field + Options *ElasticsearchClusterConfig `type:"structure" required:"true"` + + // Specifies the status of the configuration for the specified Elasticsearch + // domain. + // + // Status is a required field + Status *OptionStatus `type:"structure" required:"true"` } // String returns the string representation -func (s DeleteElasticsearchServiceRoleInput) String() string { +func (s ElasticsearchClusterConfigStatus) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteElasticsearchServiceRoleInput) GoString() string { +func (s ElasticsearchClusterConfigStatus) GoString() string { return s.String() } -type DeleteElasticsearchServiceRoleOutput struct { +// SetOptions sets the Options field's value. +func (s *ElasticsearchClusterConfigStatus) SetOptions(v *ElasticsearchClusterConfig) *ElasticsearchClusterConfigStatus { + s.Options = v + return s +} + +// SetStatus sets the Status field's value. +func (s *ElasticsearchClusterConfigStatus) SetStatus(v *OptionStatus) *ElasticsearchClusterConfigStatus { + s.Status = v + return s +} + +// The configuration of an Elasticsearch domain. +type ElasticsearchDomainConfig struct { _ struct{} `type:"structure"` + + // IAM access policy as a JSON-formatted string. + AccessPolicies *AccessPoliciesStatus `type:"structure"` + + // Specifies the AdvancedOptions for the domain. See Configuring Advanced Options + // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-advanced-options) + // for more information. + AdvancedOptions *AdvancedOptionsStatus `type:"structure"` + + // The CognitoOptions for the specified domain. For more information, see Amazon + // Cognito Authentication for Kibana (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html). 
+ CognitoOptions *CognitoOptionsStatus `type:"structure"` + + // Specifies the EBSOptions for the Elasticsearch domain. + EBSOptions *EBSOptionsStatus `type:"structure"` + + // Specifies the ElasticsearchClusterConfig for the Elasticsearch domain. + ElasticsearchClusterConfig *ElasticsearchClusterConfigStatus `type:"structure"` + + // String of format X.Y to specify version for the Elasticsearch domain. + ElasticsearchVersion *ElasticsearchVersionStatus `type:"structure"` + + // Specifies the EncryptionAtRestOptions for the Elasticsearch domain. + EncryptionAtRestOptions *EncryptionAtRestOptionsStatus `type:"structure"` + + // Log publishing options for the given domain. + LogPublishingOptions *LogPublishingOptionsStatus `type:"structure"` + + // Specifies the NodeToNodeEncryptionOptions for the Elasticsearch domain. + NodeToNodeEncryptionOptions *NodeToNodeEncryptionOptionsStatus `type:"structure"` + + // Specifies the SnapshotOptions for the Elasticsearch domain. + SnapshotOptions *SnapshotOptionsStatus `type:"structure"` + + // The VPCOptions for the specified domain. For more information, see VPC Endpoints + // for Amazon Elasticsearch Service Domains (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html). + VPCOptions *VPCDerivedInfoStatus `type:"structure"` } // String returns the string representation -func (s DeleteElasticsearchServiceRoleOutput) String() string { +func (s ElasticsearchDomainConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteElasticsearchServiceRoleOutput) GoString() string { +func (s ElasticsearchDomainConfig) GoString() string { return s.String() } -// Container for the parameters to the DescribeElasticsearchDomainConfig operation. -// Specifies the domain name for which you want configuration information. -type DescribeElasticsearchDomainConfigInput struct { - _ struct{} `type:"structure"` - - // The Elasticsearch domain that you want to get information about. - // - // DomainName is a required field - DomainName *string `location:"uri" locationName:"DomainName" min:"3" type:"string" required:"true"` +// SetAccessPolicies sets the AccessPolicies field's value. +func (s *ElasticsearchDomainConfig) SetAccessPolicies(v *AccessPoliciesStatus) *ElasticsearchDomainConfig { + s.AccessPolicies = v + return s } -// String returns the string representation -func (s DescribeElasticsearchDomainConfigInput) String() string { - return awsutil.Prettify(s) +// SetAdvancedOptions sets the AdvancedOptions field's value. +func (s *ElasticsearchDomainConfig) SetAdvancedOptions(v *AdvancedOptionsStatus) *ElasticsearchDomainConfig { + s.AdvancedOptions = v + return s } -// GoString returns the string representation -func (s DescribeElasticsearchDomainConfigInput) GoString() string { - return s.String() +// SetCognitoOptions sets the CognitoOptions field's value. +func (s *ElasticsearchDomainConfig) SetCognitoOptions(v *CognitoOptionsStatus) *ElasticsearchDomainConfig { + s.CognitoOptions = v + return s } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *DescribeElasticsearchDomainConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeElasticsearchDomainConfigInput"} - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) - } - if s.DomainName != nil && len(*s.DomainName) < 3 { - invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) - } +// SetEBSOptions sets the EBSOptions field's value. +func (s *ElasticsearchDomainConfig) SetEBSOptions(v *EBSOptionsStatus) *ElasticsearchDomainConfig { + s.EBSOptions = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetElasticsearchClusterConfig sets the ElasticsearchClusterConfig field's value. +func (s *ElasticsearchDomainConfig) SetElasticsearchClusterConfig(v *ElasticsearchClusterConfigStatus) *ElasticsearchDomainConfig { + s.ElasticsearchClusterConfig = v + return s } -// SetDomainName sets the DomainName field's value. -func (s *DescribeElasticsearchDomainConfigInput) SetDomainName(v string) *DescribeElasticsearchDomainConfigInput { - s.DomainName = &v +// SetElasticsearchVersion sets the ElasticsearchVersion field's value. +func (s *ElasticsearchDomainConfig) SetElasticsearchVersion(v *ElasticsearchVersionStatus) *ElasticsearchDomainConfig { + s.ElasticsearchVersion = v return s } -// The result of a DescribeElasticsearchDomainConfig request. Contains the configuration -// information of the requested domain. -type DescribeElasticsearchDomainConfigOutput struct { - _ struct{} `type:"structure"` +// SetEncryptionAtRestOptions sets the EncryptionAtRestOptions field's value. +func (s *ElasticsearchDomainConfig) SetEncryptionAtRestOptions(v *EncryptionAtRestOptionsStatus) *ElasticsearchDomainConfig { + s.EncryptionAtRestOptions = v + return s +} - // The configuration information of the domain requested in the DescribeElasticsearchDomainConfig - // request. - // - // DomainConfig is a required field - DomainConfig *ElasticsearchDomainConfig `type:"structure" required:"true"` +// SetLogPublishingOptions sets the LogPublishingOptions field's value. +func (s *ElasticsearchDomainConfig) SetLogPublishingOptions(v *LogPublishingOptionsStatus) *ElasticsearchDomainConfig { + s.LogPublishingOptions = v + return s } -// String returns the string representation -func (s DescribeElasticsearchDomainConfigOutput) String() string { - return awsutil.Prettify(s) +// SetNodeToNodeEncryptionOptions sets the NodeToNodeEncryptionOptions field's value. +func (s *ElasticsearchDomainConfig) SetNodeToNodeEncryptionOptions(v *NodeToNodeEncryptionOptionsStatus) *ElasticsearchDomainConfig { + s.NodeToNodeEncryptionOptions = v + return s } -// GoString returns the string representation -func (s DescribeElasticsearchDomainConfigOutput) GoString() string { - return s.String() +// SetSnapshotOptions sets the SnapshotOptions field's value. +func (s *ElasticsearchDomainConfig) SetSnapshotOptions(v *SnapshotOptionsStatus) *ElasticsearchDomainConfig { + s.SnapshotOptions = v + return s } -// SetDomainConfig sets the DomainConfig field's value. -func (s *DescribeElasticsearchDomainConfigOutput) SetDomainConfig(v *ElasticsearchDomainConfig) *DescribeElasticsearchDomainConfigOutput { - s.DomainConfig = v +// SetVPCOptions sets the VPCOptions field's value. +func (s *ElasticsearchDomainConfig) SetVPCOptions(v *VPCDerivedInfoStatus) *ElasticsearchDomainConfig { + s.VPCOptions = v return s } -// Container for the parameters to the DescribeElasticsearchDomain operation. 
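The `EBSOptions` and `ElasticsearchClusterConfig` structs moved into this section are plain option types with fluent setters, so a domain's storage and cluster shape can be assembled either as struct literals or by chaining the `Set*` helpers. A small illustrative sketch using only types and setters shown above; the instance type and sizes are placeholder values:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

func main() {
	// Per-data-node EBS storage: a 20 GiB gp2 volume.
	ebs := (&elasticsearchservice.EBSOptions{}).
		SetEBSEnabled(true).
		SetVolumeType("gp2").
		SetVolumeSize(20)

	// A two-data-node cluster with three dedicated masters and zone awareness.
	cluster := (&elasticsearchservice.ElasticsearchClusterConfig{}).
		SetInstanceType("m4.large.elasticsearch").
		SetInstanceCount(2).
		SetDedicatedMasterEnabled(true).
		SetDedicatedMasterType("m4.large.elasticsearch").
		SetDedicatedMasterCount(3).
		SetZoneAwarenessEnabled(true)

	fmt.Println(ebs)
	fmt.Println(cluster)
}
```

Both values are typically attached to a create or update request, as the `SetEBSOptions` and `SetElasticsearchClusterConfig` helpers in the surrounding hunks suggest.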
-type DescribeElasticsearchDomainInput struct { +// The current status of an Elasticsearch domain. +type ElasticsearchDomainStatus struct { _ struct{} `type:"structure"` - // The name of the Elasticsearch domain for which you want information. + // The Amazon resource name (ARN) of an Elasticsearch domain. See Identifiers + // for IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/index.html?Using_Identifiers.html) + // in Using AWS Identity and Access Management for more information. // - // DomainName is a required field - DomainName *string `location:"uri" locationName:"DomainName" min:"3" type:"string" required:"true"` -} + // ARN is a required field + ARN *string `type:"string" required:"true"` -// String returns the string representation -func (s DescribeElasticsearchDomainInput) String() string { - return awsutil.Prettify(s) -} + // IAM access policy as a JSON-formatted string. + AccessPolicies *string `type:"string"` -// GoString returns the string representation -func (s DescribeElasticsearchDomainInput) GoString() string { - return s.String() -} + // Specifies the status of the AdvancedOptions + AdvancedOptions map[string]*string `type:"map"` -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeElasticsearchDomainInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeElasticsearchDomainInput"} - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) - } - if s.DomainName != nil && len(*s.DomainName) < 3 { - invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) - } + // The CognitoOptions for the specified domain. For more information, see Amazon + // Cognito Authentication for Kibana (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html). + CognitoOptions *CognitoOptions `type:"structure"` - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} + // The domain creation status. True if the creation of an Elasticsearch domain + // is complete. False if domain creation is still in progress. + Created *bool `type:"boolean"` -// SetDomainName sets the DomainName field's value. -func (s *DescribeElasticsearchDomainInput) SetDomainName(v string) *DescribeElasticsearchDomainInput { - s.DomainName = &v - return s -} + // The domain deletion status. True if a delete request has been received for + // the domain but resource cleanup is still in progress. False if the domain + // has not been deleted. Once domain deletion is complete, the status of the + // domain is no longer returned. + Deleted *bool `type:"boolean"` -// The result of a DescribeElasticsearchDomain request. Contains the status -// of the domain specified in the request. -type DescribeElasticsearchDomainOutput struct { - _ struct{} `type:"structure"` + // The unique identifier for the specified Elasticsearch domain. + // + // DomainId is a required field + DomainId *string `min:"1" type:"string" required:"true"` - // The current status of the Elasticsearch domain. + // The name of an Elasticsearch domain. Domain names are unique across the domains + // owned by an account within an AWS region. Domain names start with a letter + // or number and can contain the following characters: a-z (lowercase), 0-9, + // and - (hyphen). 
// - // DomainStatus is a required field - DomainStatus *ElasticsearchDomainStatus `type:"structure" required:"true"` -} + // DomainName is a required field + DomainName *string `min:"3" type:"string" required:"true"` -// String returns the string representation -func (s DescribeElasticsearchDomainOutput) String() string { - return awsutil.Prettify(s) -} + // The EBSOptions for the specified domain. See Configuring EBS-based Storage + // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-ebs) + // for more information. + EBSOptions *EBSOptions `type:"structure"` -// GoString returns the string representation -func (s DescribeElasticsearchDomainOutput) GoString() string { - return s.String() -} + // The type and number of instances in the domain cluster. + // + // ElasticsearchClusterConfig is a required field + ElasticsearchClusterConfig *ElasticsearchClusterConfig `type:"structure" required:"true"` -// SetDomainStatus sets the DomainStatus field's value. -func (s *DescribeElasticsearchDomainOutput) SetDomainStatus(v *ElasticsearchDomainStatus) *DescribeElasticsearchDomainOutput { - s.DomainStatus = v - return s -} + ElasticsearchVersion *string `type:"string"` -// Container for the parameters to the DescribeElasticsearchDomains operation. -// By default, the API returns the status of all Elasticsearch domains. -type DescribeElasticsearchDomainsInput struct { - _ struct{} `type:"structure"` + // Specifies the status of the EncryptionAtRestOptions. + EncryptionAtRestOptions *EncryptionAtRestOptions `type:"structure"` - // The Elasticsearch domains for which you want information. - // - // DomainNames is a required field - DomainNames []*string `type:"list" required:"true"` + // The Elasticsearch domain endpoint that you use to submit index and search + // requests. + Endpoint *string `type:"string"` + + // Map containing the Elasticsearch domain endpoints used to submit index and + // search requests. Example key, value: 'vpc','vpc-endpoint-h2dsd34efgyghrtguk5gt6j2foh4.us-east-1.es.amazonaws.com'. + Endpoints map[string]*string `type:"map"` + + // Log publishing options for the given domain. + LogPublishingOptions map[string]*LogPublishingOption `type:"map"` + + // Specifies the status of the NodeToNodeEncryptionOptions. + NodeToNodeEncryptionOptions *NodeToNodeEncryptionOptions `type:"structure"` + + // The status of the Elasticsearch domain configuration. True if Amazon Elasticsearch + // Service is processing configuration changes. False if the configuration is + // active. + Processing *bool `type:"boolean"` + + // The current status of the Elasticsearch domain's service software. + ServiceSoftwareOptions *ServiceSoftwareOptions `type:"structure"` + + // Specifies the status of the SnapshotOptions + SnapshotOptions *SnapshotOptions `type:"structure"` + + // The status of an Elasticsearch domain version upgrade. True if Amazon Elasticsearch + // Service is undergoing a version upgrade. False if the configuration is active. + UpgradeProcessing *bool `type:"boolean"` + + // The VPCOptions for the specified domain. For more information, see VPC Endpoints + // for Amazon Elasticsearch Service Domains (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html). 
+ VPCOptions *VPCDerivedInfo `type:"structure"` } // String returns the string representation -func (s DescribeElasticsearchDomainsInput) String() string { +func (s ElasticsearchDomainStatus) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeElasticsearchDomainsInput) GoString() string { +func (s ElasticsearchDomainStatus) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeElasticsearchDomainsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeElasticsearchDomainsInput"} - if s.DomainNames == nil { - invalidParams.Add(request.NewErrParamRequired("DomainNames")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetARN sets the ARN field's value. +func (s *ElasticsearchDomainStatus) SetARN(v string) *ElasticsearchDomainStatus { + s.ARN = &v + return s } -// SetDomainNames sets the DomainNames field's value. -func (s *DescribeElasticsearchDomainsInput) SetDomainNames(v []*string) *DescribeElasticsearchDomainsInput { - s.DomainNames = v +// SetAccessPolicies sets the AccessPolicies field's value. +func (s *ElasticsearchDomainStatus) SetAccessPolicies(v string) *ElasticsearchDomainStatus { + s.AccessPolicies = &v return s } -// The result of a DescribeElasticsearchDomains request. Contains the status -// of the specified domains or all domains owned by the account. -type DescribeElasticsearchDomainsOutput struct { - _ struct{} `type:"structure"` - - // The status of the domains requested in the DescribeElasticsearchDomains request. - // - // DomainStatusList is a required field - DomainStatusList []*ElasticsearchDomainStatus `type:"list" required:"true"` +// SetAdvancedOptions sets the AdvancedOptions field's value. +func (s *ElasticsearchDomainStatus) SetAdvancedOptions(v map[string]*string) *ElasticsearchDomainStatus { + s.AdvancedOptions = v + return s } -// String returns the string representation -func (s DescribeElasticsearchDomainsOutput) String() string { - return awsutil.Prettify(s) +// SetCognitoOptions sets the CognitoOptions field's value. +func (s *ElasticsearchDomainStatus) SetCognitoOptions(v *CognitoOptions) *ElasticsearchDomainStatus { + s.CognitoOptions = v + return s } -// GoString returns the string representation -func (s DescribeElasticsearchDomainsOutput) GoString() string { - return s.String() +// SetCreated sets the Created field's value. +func (s *ElasticsearchDomainStatus) SetCreated(v bool) *ElasticsearchDomainStatus { + s.Created = &v + return s } -// SetDomainStatusList sets the DomainStatusList field's value. -func (s *DescribeElasticsearchDomainsOutput) SetDomainStatusList(v []*ElasticsearchDomainStatus) *DescribeElasticsearchDomainsOutput { - s.DomainStatusList = v +// SetDeleted sets the Deleted field's value. +func (s *ElasticsearchDomainStatus) SetDeleted(v bool) *ElasticsearchDomainStatus { + s.Deleted = &v return s } -// Container for the parameters to DescribeElasticsearchInstanceTypeLimits operation. -type DescribeElasticsearchInstanceTypeLimitsInput struct { - _ struct{} `type:"structure"` - - // DomainName represents the name of the Domain that we are trying to modify. - // This should be present only if we are querying for Elasticsearch Limits for - // existing domain. - DomainName *string `location:"querystring" locationName:"domainName" min:"3" type:"string"` +// SetDomainId sets the DomainId field's value. 
+func (s *ElasticsearchDomainStatus) SetDomainId(v string) *ElasticsearchDomainStatus { + s.DomainId = &v + return s +} - // Version of Elasticsearch for which Limits are needed. - // - // ElasticsearchVersion is a required field - ElasticsearchVersion *string `location:"uri" locationName:"ElasticsearchVersion" type:"string" required:"true"` +// SetDomainName sets the DomainName field's value. +func (s *ElasticsearchDomainStatus) SetDomainName(v string) *ElasticsearchDomainStatus { + s.DomainName = &v + return s +} - // The instance type for an Elasticsearch cluster for which Elasticsearch Limits - // are needed. - // - // InstanceType is a required field - InstanceType *string `location:"uri" locationName:"InstanceType" type:"string" required:"true" enum:"ESPartitionInstanceType"` +// SetEBSOptions sets the EBSOptions field's value. +func (s *ElasticsearchDomainStatus) SetEBSOptions(v *EBSOptions) *ElasticsearchDomainStatus { + s.EBSOptions = v + return s } -// String returns the string representation -func (s DescribeElasticsearchInstanceTypeLimitsInput) String() string { - return awsutil.Prettify(s) +// SetElasticsearchClusterConfig sets the ElasticsearchClusterConfig field's value. +func (s *ElasticsearchDomainStatus) SetElasticsearchClusterConfig(v *ElasticsearchClusterConfig) *ElasticsearchDomainStatus { + s.ElasticsearchClusterConfig = v + return s } -// GoString returns the string representation -func (s DescribeElasticsearchInstanceTypeLimitsInput) GoString() string { - return s.String() +// SetElasticsearchVersion sets the ElasticsearchVersion field's value. +func (s *ElasticsearchDomainStatus) SetElasticsearchVersion(v string) *ElasticsearchDomainStatus { + s.ElasticsearchVersion = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeElasticsearchInstanceTypeLimitsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeElasticsearchInstanceTypeLimitsInput"} - if s.DomainName != nil && len(*s.DomainName) < 3 { - invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) - } - if s.ElasticsearchVersion == nil { - invalidParams.Add(request.NewErrParamRequired("ElasticsearchVersion")) - } - if s.InstanceType == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceType")) - } +// SetEncryptionAtRestOptions sets the EncryptionAtRestOptions field's value. +func (s *ElasticsearchDomainStatus) SetEncryptionAtRestOptions(v *EncryptionAtRestOptions) *ElasticsearchDomainStatus { + s.EncryptionAtRestOptions = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetEndpoint sets the Endpoint field's value. +func (s *ElasticsearchDomainStatus) SetEndpoint(v string) *ElasticsearchDomainStatus { + s.Endpoint = &v + return s } -// SetDomainName sets the DomainName field's value. -func (s *DescribeElasticsearchInstanceTypeLimitsInput) SetDomainName(v string) *DescribeElasticsearchInstanceTypeLimitsInput { - s.DomainName = &v +// SetEndpoints sets the Endpoints field's value. +func (s *ElasticsearchDomainStatus) SetEndpoints(v map[string]*string) *ElasticsearchDomainStatus { + s.Endpoints = v return s } -// SetElasticsearchVersion sets the ElasticsearchVersion field's value. -func (s *DescribeElasticsearchInstanceTypeLimitsInput) SetElasticsearchVersion(v string) *DescribeElasticsearchInstanceTypeLimitsInput { - s.ElasticsearchVersion = &v +// SetLogPublishingOptions sets the LogPublishingOptions field's value. 
+func (s *ElasticsearchDomainStatus) SetLogPublishingOptions(v map[string]*LogPublishingOption) *ElasticsearchDomainStatus { + s.LogPublishingOptions = v return s } -// SetInstanceType sets the InstanceType field's value. -func (s *DescribeElasticsearchInstanceTypeLimitsInput) SetInstanceType(v string) *DescribeElasticsearchInstanceTypeLimitsInput { - s.InstanceType = &v +// SetNodeToNodeEncryptionOptions sets the NodeToNodeEncryptionOptions field's value. +func (s *ElasticsearchDomainStatus) SetNodeToNodeEncryptionOptions(v *NodeToNodeEncryptionOptions) *ElasticsearchDomainStatus { + s.NodeToNodeEncryptionOptions = v return s } -// Container for the parameters received from DescribeElasticsearchInstanceTypeLimits -// operation. -type DescribeElasticsearchInstanceTypeLimitsOutput struct { - _ struct{} `type:"structure"` +// SetProcessing sets the Processing field's value. +func (s *ElasticsearchDomainStatus) SetProcessing(v bool) *ElasticsearchDomainStatus { + s.Processing = &v + return s +} - // Map of Role of the Instance and Limits that are applicable. Role performed - // by given Instance in Elasticsearch can be one of the following: Data: If - // the given InstanceType is used as Data node - // Master: If the given InstanceType is used as Master node - LimitsByRole map[string]*Limits `type:"map"` +// SetServiceSoftwareOptions sets the ServiceSoftwareOptions field's value. +func (s *ElasticsearchDomainStatus) SetServiceSoftwareOptions(v *ServiceSoftwareOptions) *ElasticsearchDomainStatus { + s.ServiceSoftwareOptions = v + return s } -// String returns the string representation -func (s DescribeElasticsearchInstanceTypeLimitsOutput) String() string { - return awsutil.Prettify(s) +// SetSnapshotOptions sets the SnapshotOptions field's value. +func (s *ElasticsearchDomainStatus) SetSnapshotOptions(v *SnapshotOptions) *ElasticsearchDomainStatus { + s.SnapshotOptions = v + return s } -// GoString returns the string representation -func (s DescribeElasticsearchInstanceTypeLimitsOutput) GoString() string { - return s.String() +// SetUpgradeProcessing sets the UpgradeProcessing field's value. +func (s *ElasticsearchDomainStatus) SetUpgradeProcessing(v bool) *ElasticsearchDomainStatus { + s.UpgradeProcessing = &v + return s } -// SetLimitsByRole sets the LimitsByRole field's value. -func (s *DescribeElasticsearchInstanceTypeLimitsOutput) SetLimitsByRole(v map[string]*Limits) *DescribeElasticsearchInstanceTypeLimitsOutput { - s.LimitsByRole = v +// SetVPCOptions sets the VPCOptions field's value. +func (s *ElasticsearchDomainStatus) SetVPCOptions(v *VPCDerivedInfo) *ElasticsearchDomainStatus { + s.VPCOptions = v return s } -type DomainInfo struct { +// Status of the Elasticsearch version options for the specified Elasticsearch +// domain. +type ElasticsearchVersionStatus struct { _ struct{} `type:"structure"` - // Specifies the DomainName. - DomainName *string `min:"3" type:"string"` + // Specifies the Elasticsearch version for the specified Elasticsearch domain. + // + // Options is a required field + Options *string `type:"string" required:"true"` + + // Specifies the status of the Elasticsearch version options for the specified + // Elasticsearch domain. 
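`ElasticsearchDomainStatus` above is the shape a describe call returns; its `Created`, `Processing`, and `UpgradeProcessing` booleans and the `Endpoint`/`Endpoints` fields are what callers usually poll. A hedged sketch of reading them, assuming the usual `DescribeElasticsearchDomain` client method (not shown in these hunks) and a placeholder domain name:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

func main() {
	svc := elasticsearchservice.New(session.Must(session.NewSession()))

	out, err := svc.DescribeElasticsearchDomain(&elasticsearchservice.DescribeElasticsearchDomainInput{
		DomainName: aws.String("example-domain"), // placeholder domain name
	})
	if err != nil {
		log.Fatal(err)
	}

	status := out.DomainStatus
	fmt.Println("endpoint:", aws.StringValue(status.Endpoint))
	fmt.Println("vpc endpoint:", aws.StringValue(status.Endpoints["vpc"])) // only set for VPC domains
	fmt.Println("config change in flight:", aws.BoolValue(status.Processing))
	fmt.Println("version upgrade in flight:", aws.BoolValue(status.UpgradeProcessing))
}
```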
+ // + // Status is a required field + Status *OptionStatus `type:"structure" required:"true"` } // String returns the string representation -func (s DomainInfo) String() string { +func (s ElasticsearchVersionStatus) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DomainInfo) GoString() string { +func (s ElasticsearchVersionStatus) GoString() string { return s.String() } -// SetDomainName sets the DomainName field's value. -func (s *DomainInfo) SetDomainName(v string) *DomainInfo { - s.DomainName = &v +// SetOptions sets the Options field's value. +func (s *ElasticsearchVersionStatus) SetOptions(v string) *ElasticsearchVersionStatus { + s.Options = &v return s } -// Options to enable, disable, and specify the properties of EBS storage volumes. -// For more information, see Configuring EBS-based Storage (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-ebs). -type EBSOptions struct { - _ struct{} `type:"structure"` - - // Specifies whether EBS-based storage is enabled. - EBSEnabled *bool `type:"boolean"` +// SetStatus sets the Status field's value. +func (s *ElasticsearchVersionStatus) SetStatus(v *OptionStatus) *ElasticsearchVersionStatus { + s.Status = v + return s +} - // Specifies the IOPD for a Provisioned IOPS EBS volume (SSD). - Iops *int64 `type:"integer"` +// Specifies the Encryption At Rest Options. +type EncryptionAtRestOptions struct { + _ struct{} `type:"structure"` - // Integer to specify the size of an EBS volume. - VolumeSize *int64 `type:"integer"` + // Specifies the option to enable Encryption At Rest. + Enabled *bool `type:"boolean"` - // Specifies the volume type for EBS-based storage. - VolumeType *string `type:"string" enum:"VolumeType"` + // Specifies the KMS Key ID for Encryption At Rest options. + KmsKeyId *string `min:"1" type:"string"` } // String returns the string representation -func (s EBSOptions) String() string { +func (s EncryptionAtRestOptions) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EBSOptions) GoString() string { +func (s EncryptionAtRestOptions) GoString() string { return s.String() } -// SetEBSEnabled sets the EBSEnabled field's value. -func (s *EBSOptions) SetEBSEnabled(v bool) *EBSOptions { - s.EBSEnabled = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *EncryptionAtRestOptions) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EncryptionAtRestOptions"} + if s.KmsKeyId != nil && len(*s.KmsKeyId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("KmsKeyId", 1)) + } -// SetIops sets the Iops field's value. -func (s *EBSOptions) SetIops(v int64) *EBSOptions { - s.Iops = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetVolumeSize sets the VolumeSize field's value. -func (s *EBSOptions) SetVolumeSize(v int64) *EBSOptions { - s.VolumeSize = &v +// SetEnabled sets the Enabled field's value. +func (s *EncryptionAtRestOptions) SetEnabled(v bool) *EncryptionAtRestOptions { + s.Enabled = &v return s } -// SetVolumeType sets the VolumeType field's value. -func (s *EBSOptions) SetVolumeType(v string) *EBSOptions { - s.VolumeType = &v +// SetKmsKeyId sets the KmsKeyId field's value. 
+func (s *EncryptionAtRestOptions) SetKmsKeyId(v string) *EncryptionAtRestOptions { + s.KmsKeyId = &v return s } -// Status of the EBS options for the specified Elasticsearch domain. -type EBSOptionsStatus struct { +// Status of the Encryption At Rest options for the specified Elasticsearch +// domain. +type EncryptionAtRestOptionsStatus struct { _ struct{} `type:"structure"` - // Specifies the EBS options for the specified Elasticsearch domain. + // Specifies the Encryption At Rest options for the specified Elasticsearch + // domain. // // Options is a required field - Options *EBSOptions `type:"structure" required:"true"` + Options *EncryptionAtRestOptions `type:"structure" required:"true"` - // Specifies the status of the EBS options for the specified Elasticsearch domain. + // Specifies the status of the Encryption At Rest options for the specified + // Elasticsearch domain. // // Status is a required field Status *OptionStatus `type:"structure" required:"true"` } // String returns the string representation -func (s EBSOptionsStatus) String() string { +func (s EncryptionAtRestOptionsStatus) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EBSOptionsStatus) GoString() string { +func (s EncryptionAtRestOptionsStatus) GoString() string { return s.String() } // SetOptions sets the Options field's value. -func (s *EBSOptionsStatus) SetOptions(v *EBSOptions) *EBSOptionsStatus { +func (s *EncryptionAtRestOptionsStatus) SetOptions(v *EncryptionAtRestOptions) *EncryptionAtRestOptionsStatus { s.Options = v return s } // SetStatus sets the Status field's value. -func (s *EBSOptionsStatus) SetStatus(v *OptionStatus) *EBSOptionsStatus { +func (s *EncryptionAtRestOptionsStatus) SetStatus(v *OptionStatus) *EncryptionAtRestOptionsStatus { s.Status = v return s } -// Specifies the configuration for the domain cluster, such as the type and -// number of instances. -type ElasticsearchClusterConfig struct { +// Container for request parameters to GetCompatibleElasticsearchVersions operation. +type GetCompatibleElasticsearchVersionsInput struct { _ struct{} `type:"structure"` - // Total number of dedicated master nodes, active and on standby, for the cluster. - DedicatedMasterCount *int64 `type:"integer"` + // The name of an Elasticsearch domain. Domain names are unique across the domains + // owned by an account within an AWS region. Domain names start with a letter + // or number and can contain the following characters: a-z (lowercase), 0-9, + // and - (hyphen). + DomainName *string `location:"querystring" locationName:"domainName" min:"3" type:"string"` +} - // A boolean value to indicate whether a dedicated master node is enabled. See - // About Dedicated Master Nodes (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html#es-managedomains-dedicatedmasternodes) - // for more information. - DedicatedMasterEnabled *bool `type:"boolean"` +// String returns the string representation +func (s GetCompatibleElasticsearchVersionsInput) String() string { + return awsutil.Prettify(s) +} - // The instance type for a dedicated master node. - DedicatedMasterType *string `type:"string" enum:"ESPartitionInstanceType"` +// GoString returns the string representation +func (s GetCompatibleElasticsearchVersionsInput) GoString() string { + return s.String() +} - // The number of instances in the specified domain cluster. 
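`EncryptionAtRestOptions` is one of the few option structs in this section with its own `Validate` method: a `KmsKeyId` shorter than one character is rejected. A minimal sketch of that behaviour using only the type and methods added above; the key ARN is a placeholder:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

func main() {
	// Enabled encryption with a placeholder KMS key id passes validation.
	ok := (&elasticsearchservice.EncryptionAtRestOptions{}).
		SetEnabled(true).
		SetKmsKeyId("arn:aws:kms:us-east-1:111122223333:key/placeholder")
	fmt.Println("valid:", ok.Validate() == nil) // true

	// An empty KmsKeyId violates the min:"1" constraint.
	bad := &elasticsearchservice.EncryptionAtRestOptions{KmsKeyId: aws.String("")}
	fmt.Println("error:", bad.Validate()) // reports the KmsKeyId minimum-length violation
}
```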
- InstanceCount *int64 `type:"integer"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCompatibleElasticsearchVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCompatibleElasticsearchVersionsInput"} + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } - // The instance type for an Elasticsearch cluster. - InstanceType *string `type:"string" enum:"ESPartitionInstanceType"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // A boolean value to indicate whether zone awareness is enabled. See About - // Zone Awareness (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html#es-managedomains-zoneawareness) - // for more information. - ZoneAwarenessEnabled *bool `type:"boolean"` +// SetDomainName sets the DomainName field's value. +func (s *GetCompatibleElasticsearchVersionsInput) SetDomainName(v string) *GetCompatibleElasticsearchVersionsInput { + s.DomainName = &v + return s +} + +// Container for response returned by GetCompatibleElasticsearchVersions operation. +type GetCompatibleElasticsearchVersionsOutput struct { + _ struct{} `type:"structure"` + + // A map of compatible Elasticsearch versions returned as part of the GetCompatibleElasticsearchVersions + // operation. + CompatibleElasticsearchVersions []*CompatibleVersionsMap `type:"list"` } // String returns the string representation -func (s ElasticsearchClusterConfig) String() string { +func (s GetCompatibleElasticsearchVersionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchClusterConfig) GoString() string { +func (s GetCompatibleElasticsearchVersionsOutput) GoString() string { return s.String() } -// SetDedicatedMasterCount sets the DedicatedMasterCount field's value. -func (s *ElasticsearchClusterConfig) SetDedicatedMasterCount(v int64) *ElasticsearchClusterConfig { - s.DedicatedMasterCount = &v +// SetCompatibleElasticsearchVersions sets the CompatibleElasticsearchVersions field's value. +func (s *GetCompatibleElasticsearchVersionsOutput) SetCompatibleElasticsearchVersions(v []*CompatibleVersionsMap) *GetCompatibleElasticsearchVersionsOutput { + s.CompatibleElasticsearchVersions = v return s } -// SetDedicatedMasterEnabled sets the DedicatedMasterEnabled field's value. -func (s *ElasticsearchClusterConfig) SetDedicatedMasterEnabled(v bool) *ElasticsearchClusterConfig { - s.DedicatedMasterEnabled = &v - return s +// Container for request parameters to GetUpgradeHistory operation. +type GetUpgradeHistoryInput struct { + _ struct{} `type:"structure"` + + // The name of an Elasticsearch domain. Domain names are unique across the domains + // owned by an account within an AWS region. Domain names start with a letter + // or number and can contain the following characters: a-z (lowercase), 0-9, + // and - (hyphen). + // + // DomainName is a required field + DomainName *string `location:"uri" locationName:"DomainName" min:"3" type:"string" required:"true"` + + // Set this value to limit the number of results returned. + MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` + + // Paginated APIs accepts NextToken input to returns next page results and provides + // a NextToken output in the response which can be used by the client to retrieve + // more results. 
+ NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } -// SetDedicatedMasterType sets the DedicatedMasterType field's value. -func (s *ElasticsearchClusterConfig) SetDedicatedMasterType(v string) *ElasticsearchClusterConfig { - s.DedicatedMasterType = &v - return s +// String returns the string representation +func (s GetUpgradeHistoryInput) String() string { + return awsutil.Prettify(s) } -// SetInstanceCount sets the InstanceCount field's value. -func (s *ElasticsearchClusterConfig) SetInstanceCount(v int64) *ElasticsearchClusterConfig { - s.InstanceCount = &v +// GoString returns the string representation +func (s GetUpgradeHistoryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetUpgradeHistoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetUpgradeHistoryInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *GetUpgradeHistoryInput) SetDomainName(v string) *GetUpgradeHistoryInput { + s.DomainName = &v return s } -// SetInstanceType sets the InstanceType field's value. -func (s *ElasticsearchClusterConfig) SetInstanceType(v string) *ElasticsearchClusterConfig { - s.InstanceType = &v +// SetMaxResults sets the MaxResults field's value. +func (s *GetUpgradeHistoryInput) SetMaxResults(v int64) *GetUpgradeHistoryInput { + s.MaxResults = &v return s } -// SetZoneAwarenessEnabled sets the ZoneAwarenessEnabled field's value. -func (s *ElasticsearchClusterConfig) SetZoneAwarenessEnabled(v bool) *ElasticsearchClusterConfig { - s.ZoneAwarenessEnabled = &v +// SetNextToken sets the NextToken field's value. +func (s *GetUpgradeHistoryInput) SetNextToken(v string) *GetUpgradeHistoryInput { + s.NextToken = &v return s } -// Specifies the configuration status for the specified Elasticsearch domain. -type ElasticsearchClusterConfigStatus struct { +// Container for response returned by GetUpgradeHistory operation. +type GetUpgradeHistoryOutput struct { _ struct{} `type:"structure"` - // Specifies the cluster configuration for the specified Elasticsearch domain. - // - // Options is a required field - Options *ElasticsearchClusterConfig `type:"structure" required:"true"` + // Pagination token that needs to be supplied to the next call to get the next + // page of results + NextToken *string `type:"string"` - // Specifies the status of the configuration for the specified Elasticsearch - // domain. - // - // Status is a required field - Status *OptionStatus `type:"structure" required:"true"` + // A list of UpgradeHistory objects corresponding to each Upgrade or Upgrade + // Eligibility Check performed on a domain returned as part of GetUpgradeHistoryResponse + // object. + UpgradeHistories []*UpgradeHistory `type:"list"` } // String returns the string representation -func (s ElasticsearchClusterConfigStatus) String() string { +func (s GetUpgradeHistoryOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchClusterConfigStatus) GoString() string { +func (s GetUpgradeHistoryOutput) GoString() string { return s.String() } -// SetOptions sets the Options field's value. 
-func (s *ElasticsearchClusterConfigStatus) SetOptions(v *ElasticsearchClusterConfig) *ElasticsearchClusterConfigStatus { - s.Options = v +// SetNextToken sets the NextToken field's value. +func (s *GetUpgradeHistoryOutput) SetNextToken(v string) *GetUpgradeHistoryOutput { + s.NextToken = &v return s } -// SetStatus sets the Status field's value. -func (s *ElasticsearchClusterConfigStatus) SetStatus(v *OptionStatus) *ElasticsearchClusterConfigStatus { - s.Status = v +// SetUpgradeHistories sets the UpgradeHistories field's value. +func (s *GetUpgradeHistoryOutput) SetUpgradeHistories(v []*UpgradeHistory) *GetUpgradeHistoryOutput { + s.UpgradeHistories = v return s } -// The configuration of an Elasticsearch domain. -type ElasticsearchDomainConfig struct { +// Container for request parameters to GetUpgradeStatus operation. +type GetUpgradeStatusInput struct { _ struct{} `type:"structure"` - // IAM access policy as a JSON-formatted string. - AccessPolicies *AccessPoliciesStatus `type:"structure"` + // The name of an Elasticsearch domain. Domain names are unique across the domains + // owned by an account within an AWS region. Domain names start with a letter + // or number and can contain the following characters: a-z (lowercase), 0-9, + // and - (hyphen). + // + // DomainName is a required field + DomainName *string `location:"uri" locationName:"DomainName" min:"3" type:"string" required:"true"` +} - // Specifies the AdvancedOptions for the domain. See Configuring Advanced Options - // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-advanced-options) - // for more information. - AdvancedOptions *AdvancedOptionsStatus `type:"structure"` +// String returns the string representation +func (s GetUpgradeStatusInput) String() string { + return awsutil.Prettify(s) +} - // Specifies the EBSOptions for the Elasticsearch domain. - EBSOptions *EBSOptionsStatus `type:"structure"` +// GoString returns the string representation +func (s GetUpgradeStatusInput) GoString() string { + return s.String() +} - // Specifies the ElasticsearchClusterConfig for the Elasticsearch domain. - ElasticsearchClusterConfig *ElasticsearchClusterConfigStatus `type:"structure"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetUpgradeStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetUpgradeStatusInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } - // String of format X.Y to specify version for the Elasticsearch domain. - ElasticsearchVersion *ElasticsearchVersionStatus `type:"structure"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // Specifies the EncryptionAtRestOptions for the Elasticsearch domain. - EncryptionAtRestOptions *EncryptionAtRestOptionsStatus `type:"structure"` +// SetDomainName sets the DomainName field's value. +func (s *GetUpgradeStatusInput) SetDomainName(v string) *GetUpgradeStatusInput { + s.DomainName = &v + return s +} - // Log publishing options for the given domain. - LogPublishingOptions *LogPublishingOptionsStatus `type:"structure"` +// Container for response returned by GetUpgradeStatus operation. +type GetUpgradeStatusOutput struct { + _ struct{} `type:"structure"` - // Specifies the SnapshotOptions for the Elasticsearch domain. 
- SnapshotOptions *SnapshotOptionsStatus `type:"structure"` + // One of 4 statuses that a step can go through returned as part of the GetUpgradeStatusResponse + // object. The status can take one of the following values: In Progress + // Succeeded + // Succeeded with Issues + // Failed + StepStatus *string `type:"string" enum:"UpgradeStatus"` - // The VPCOptions for the specified domain. For more information, see VPC Endpoints - // for Amazon Elasticsearch Service Domains (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html). - VPCOptions *VPCDerivedInfoStatus `type:"structure"` + // A string that describes the update briefly + UpgradeName *string `type:"string"` + + // Represents one of 3 steps that an Upgrade or Upgrade Eligibility Check does + // through: PreUpgradeCheck + // Snapshot + // Upgrade + UpgradeStep *string `type:"string" enum:"UpgradeStep"` } // String returns the string representation -func (s ElasticsearchDomainConfig) String() string { +func (s GetUpgradeStatusOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchDomainConfig) GoString() string { +func (s GetUpgradeStatusOutput) GoString() string { return s.String() } -// SetAccessPolicies sets the AccessPolicies field's value. -func (s *ElasticsearchDomainConfig) SetAccessPolicies(v *AccessPoliciesStatus) *ElasticsearchDomainConfig { - s.AccessPolicies = v +// SetStepStatus sets the StepStatus field's value. +func (s *GetUpgradeStatusOutput) SetStepStatus(v string) *GetUpgradeStatusOutput { + s.StepStatus = &v return s } -// SetAdvancedOptions sets the AdvancedOptions field's value. -func (s *ElasticsearchDomainConfig) SetAdvancedOptions(v *AdvancedOptionsStatus) *ElasticsearchDomainConfig { - s.AdvancedOptions = v +// SetUpgradeName sets the UpgradeName field's value. +func (s *GetUpgradeStatusOutput) SetUpgradeName(v string) *GetUpgradeStatusOutput { + s.UpgradeName = &v return s } -// SetEBSOptions sets the EBSOptions field's value. -func (s *ElasticsearchDomainConfig) SetEBSOptions(v *EBSOptionsStatus) *ElasticsearchDomainConfig { - s.EBSOptions = v +// SetUpgradeStep sets the UpgradeStep field's value. +func (s *GetUpgradeStatusOutput) SetUpgradeStep(v string) *GetUpgradeStatusOutput { + s.UpgradeStep = &v return s } -// SetElasticsearchClusterConfig sets the ElasticsearchClusterConfig field's value. -func (s *ElasticsearchDomainConfig) SetElasticsearchClusterConfig(v *ElasticsearchClusterConfigStatus) *ElasticsearchDomainConfig { - s.ElasticsearchClusterConfig = v - return s -} +// InstanceCountLimits represents the limits on number of instances that be +// created in Amazon Elasticsearch for given InstanceType. +type InstanceCountLimits struct { + _ struct{} `type:"structure"` -// SetElasticsearchVersion sets the ElasticsearchVersion field's value. -func (s *ElasticsearchDomainConfig) SetElasticsearchVersion(v *ElasticsearchVersionStatus) *ElasticsearchDomainConfig { - s.ElasticsearchVersion = v - return s + // Maximum number of Instances that can be instantiated for given InstanceType. + MaximumInstanceCount *int64 `type:"integer"` + + // Minimum number of Instances that can be instantiated for given InstanceType. + MinimumInstanceCount *int64 `type:"integer"` } -// SetEncryptionAtRestOptions sets the EncryptionAtRestOptions field's value. 
-func (s *ElasticsearchDomainConfig) SetEncryptionAtRestOptions(v *EncryptionAtRestOptionsStatus) *ElasticsearchDomainConfig { - s.EncryptionAtRestOptions = v - return s +// String returns the string representation +func (s InstanceCountLimits) String() string { + return awsutil.Prettify(s) } -// SetLogPublishingOptions sets the LogPublishingOptions field's value. -func (s *ElasticsearchDomainConfig) SetLogPublishingOptions(v *LogPublishingOptionsStatus) *ElasticsearchDomainConfig { - s.LogPublishingOptions = v - return s +// GoString returns the string representation +func (s InstanceCountLimits) GoString() string { + return s.String() } -// SetSnapshotOptions sets the SnapshotOptions field's value. -func (s *ElasticsearchDomainConfig) SetSnapshotOptions(v *SnapshotOptionsStatus) *ElasticsearchDomainConfig { - s.SnapshotOptions = v +// SetMaximumInstanceCount sets the MaximumInstanceCount field's value. +func (s *InstanceCountLimits) SetMaximumInstanceCount(v int64) *InstanceCountLimits { + s.MaximumInstanceCount = &v return s } -// SetVPCOptions sets the VPCOptions field's value. -func (s *ElasticsearchDomainConfig) SetVPCOptions(v *VPCDerivedInfoStatus) *ElasticsearchDomainConfig { - s.VPCOptions = v +// SetMinimumInstanceCount sets the MinimumInstanceCount field's value. +func (s *InstanceCountLimits) SetMinimumInstanceCount(v int64) *InstanceCountLimits { + s.MinimumInstanceCount = &v return s } -// The current status of an Elasticsearch domain. -type ElasticsearchDomainStatus struct { +// InstanceLimits represents the list of instance related attributes that are +// available for given InstanceType. +type InstanceLimits struct { _ struct{} `type:"structure"` - // The Amazon resource name (ARN) of an Elasticsearch domain. See Identifiers - // for IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/index.html?Using_Identifiers.html) - // in Using AWS Identity and Access Management for more information. - // - // ARN is a required field - ARN *string `type:"string" required:"true"` - - // IAM access policy as a JSON-formatted string. - AccessPolicies *string `type:"string"` - - // Specifies the status of the AdvancedOptions - AdvancedOptions map[string]*string `type:"map"` - - // The domain creation status. True if the creation of an Elasticsearch domain - // is complete. False if domain creation is still in progress. - Created *bool `type:"boolean"` - - // The domain deletion status. True if a delete request has been received for - // the domain but resource cleanup is still in progress. False if the domain - // has not been deleted. Once domain deletion is complete, the status of the - // domain is no longer returned. - Deleted *bool `type:"boolean"` + // InstanceCountLimits represents the limits on number of instances that be + // created in Amazon Elasticsearch for given InstanceType. + InstanceCountLimits *InstanceCountLimits `type:"structure"` +} - // The unique identifier for the specified Elasticsearch domain. - // - // DomainId is a required field - DomainId *string `min:"1" type:"string" required:"true"` +// String returns the string representation +func (s InstanceLimits) String() string { + return awsutil.Prettify(s) +} - // The name of an Elasticsearch domain. Domain names are unique across the domains - // owned by an account within an AWS region. Domain names start with a letter - // or number and can contain the following characters: a-z (lowercase), 0-9, - // and - (hyphen). 
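The `GetUpgradeStatus` and `GetUpgradeHistory` shapes above expose the state of an in-place Elasticsearch version upgrade: the status output reports the current step (PreUpgradeCheck, Snapshot, or Upgrade) and its outcome, while the history output is paginated over past upgrades and eligibility checks. A sketch under the assumption that the package also exposes client methods of the same names for these input/output pairs (not visible here); the domain name is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

func main() {
	svc := elasticsearchservice.New(session.Must(session.NewSession()))
	domain := aws.String("example-domain") // placeholder domain name

	// Current upgrade step and its status.
	status, err := svc.GetUpgradeStatus(&elasticsearchservice.GetUpgradeStatusInput{DomainName: domain})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (%s)\n",
		aws.StringValue(status.UpgradeName),
		aws.StringValue(status.UpgradeStep),
		aws.StringValue(status.StepStatus))

	// Page through prior upgrades and eligibility checks.
	hist := &elasticsearchservice.GetUpgradeHistoryInput{DomainName: domain}
	for {
		out, err := svc.GetUpgradeHistory(hist)
		if err != nil {
			log.Fatal(err)
		}
		for _, h := range out.UpgradeHistories {
			fmt.Println(h)
		}
		if aws.StringValue(out.NextToken) == "" {
			break
		}
		hist.NextToken = out.NextToken
	}
}
```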
- // - // DomainName is a required field - DomainName *string `min:"3" type:"string" required:"true"` +// GoString returns the string representation +func (s InstanceLimits) GoString() string { + return s.String() +} - // The EBSOptions for the specified domain. See Configuring EBS-based Storage - // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-ebs) - // for more information. - EBSOptions *EBSOptions `type:"structure"` +// SetInstanceCountLimits sets the InstanceCountLimits field's value. +func (s *InstanceLimits) SetInstanceCountLimits(v *InstanceCountLimits) *InstanceLimits { + s.InstanceCountLimits = v + return s +} - // The type and number of instances in the domain cluster. - // - // ElasticsearchClusterConfig is a required field - ElasticsearchClusterConfig *ElasticsearchClusterConfig `type:"structure" required:"true"` +// Limits for given InstanceType and for each of it's role. Limits contains following StorageTypes, InstanceLimitsand AdditionalLimits +type Limits struct { + _ struct{} `type:"structure"` - ElasticsearchVersion *string `type:"string"` + // List of additional limits that are specific to a given InstanceType and for + // each of it's InstanceRole . + AdditionalLimits []*AdditionalLimit `type:"list"` - // Specifies the status of the EncryptionAtRestOptions. - EncryptionAtRestOptions *EncryptionAtRestOptions `type:"structure"` + // InstanceLimits represents the list of instance related attributes that are + // available for given InstanceType. + InstanceLimits *InstanceLimits `type:"structure"` - // The Elasticsearch domain endpoint that you use to submit index and search - // requests. - Endpoint *string `type:"string"` + // StorageType represents the list of storage related types and attributes that + // are available for given InstanceType. + StorageTypes []*StorageType `type:"list"` +} - // Map containing the Elasticsearch domain endpoints used to submit index and - // search requests. Example key, value: 'vpc','vpc-endpoint-h2dsd34efgyghrtguk5gt6j2foh4.us-east-1.es.amazonaws.com'. - Endpoints map[string]*string `type:"map"` +// String returns the string representation +func (s Limits) String() string { + return awsutil.Prettify(s) +} - // Log publishing options for the given domain. - LogPublishingOptions map[string]*LogPublishingOption `type:"map"` +// GoString returns the string representation +func (s Limits) GoString() string { + return s.String() +} - // The status of the Elasticsearch domain configuration. True if Amazon Elasticsearch - // Service is processing configuration changes. False if the configuration is - // active. - Processing *bool `type:"boolean"` +// SetAdditionalLimits sets the AdditionalLimits field's value. +func (s *Limits) SetAdditionalLimits(v []*AdditionalLimit) *Limits { + s.AdditionalLimits = v + return s +} - // Specifies the status of the SnapshotOptions - SnapshotOptions *SnapshotOptions `type:"structure"` +// SetInstanceLimits sets the InstanceLimits field's value. +func (s *Limits) SetInstanceLimits(v *InstanceLimits) *Limits { + s.InstanceLimits = v + return s +} - // The VPCOptions for the specified domain. For more information, see VPC Endpoints - // for Amazon Elasticsearch Service Domains (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html). - VPCOptions *VPCDerivedInfo `type:"structure"` +// SetStorageTypes sets the StorageTypes field's value. 
+func (s *Limits) SetStorageTypes(v []*StorageType) *Limits { + s.StorageTypes = v + return s +} + +type ListDomainNamesInput struct { + _ struct{} `type:"structure"` } // String returns the string representation -func (s ElasticsearchDomainStatus) String() string { +func (s ListDomainNamesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchDomainStatus) GoString() string { +func (s ListDomainNamesInput) GoString() string { return s.String() } -// SetARN sets the ARN field's value. -func (s *ElasticsearchDomainStatus) SetARN(v string) *ElasticsearchDomainStatus { - s.ARN = &v - return s +// The result of a ListDomainNames operation. Contains the names of all Elasticsearch +// domains owned by this account. +type ListDomainNamesOutput struct { + _ struct{} `type:"structure"` + + // List of Elasticsearch domain names. + DomainNames []*DomainInfo `type:"list"` } -// SetAccessPolicies sets the AccessPolicies field's value. -func (s *ElasticsearchDomainStatus) SetAccessPolicies(v string) *ElasticsearchDomainStatus { - s.AccessPolicies = &v - return s +// String returns the string representation +func (s ListDomainNamesOutput) String() string { + return awsutil.Prettify(s) } -// SetAdvancedOptions sets the AdvancedOptions field's value. -func (s *ElasticsearchDomainStatus) SetAdvancedOptions(v map[string]*string) *ElasticsearchDomainStatus { - s.AdvancedOptions = v - return s +// GoString returns the string representation +func (s ListDomainNamesOutput) GoString() string { + return s.String() } -// SetCreated sets the Created field's value. -func (s *ElasticsearchDomainStatus) SetCreated(v bool) *ElasticsearchDomainStatus { - s.Created = &v +// SetDomainNames sets the DomainNames field's value. +func (s *ListDomainNamesOutput) SetDomainNames(v []*DomainInfo) *ListDomainNamesOutput { + s.DomainNames = v return s } -// SetDeleted sets the Deleted field's value. -func (s *ElasticsearchDomainStatus) SetDeleted(v bool) *ElasticsearchDomainStatus { - s.Deleted = &v - return s +// Container for the parameters to the ListElasticsearchInstanceTypes operation. +type ListElasticsearchInstanceTypesInput struct { + _ struct{} `type:"structure"` + + // DomainName represents the name of the Domain that we are trying to modify. + // This should be present only if we are querying for list of available Elasticsearch + // instance types when modifying existing domain. + DomainName *string `location:"querystring" locationName:"domainName" min:"3" type:"string"` + + // Version of Elasticsearch for which list of supported elasticsearch instance + // types are needed. + // + // ElasticsearchVersion is a required field + ElasticsearchVersion *string `location:"uri" locationName:"ElasticsearchVersion" type:"string" required:"true"` + + // Set this value to limit the number of results returned. Value provided must + // be greater than 30 else it wont be honored. + MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` + + // NextToken should be sent in case if earlier API call produced result containing + // NextToken. It is used for pagination. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } -// SetDomainId sets the DomainId field's value. 
-func (s *ElasticsearchDomainStatus) SetDomainId(v string) *ElasticsearchDomainStatus { - s.DomainId = &v - return s +// String returns the string representation +func (s ListElasticsearchInstanceTypesInput) String() string { + return awsutil.Prettify(s) } -// SetDomainName sets the DomainName field's value. -func (s *ElasticsearchDomainStatus) SetDomainName(v string) *ElasticsearchDomainStatus { - s.DomainName = &v - return s +// GoString returns the string representation +func (s ListElasticsearchInstanceTypesInput) GoString() string { + return s.String() } -// SetEBSOptions sets the EBSOptions field's value. -func (s *ElasticsearchDomainStatus) SetEBSOptions(v *EBSOptions) *ElasticsearchDomainStatus { - s.EBSOptions = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListElasticsearchInstanceTypesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListElasticsearchInstanceTypesInput"} + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + if s.ElasticsearchVersion == nil { + invalidParams.Add(request.NewErrParamRequired("ElasticsearchVersion")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetElasticsearchClusterConfig sets the ElasticsearchClusterConfig field's value. -func (s *ElasticsearchDomainStatus) SetElasticsearchClusterConfig(v *ElasticsearchClusterConfig) *ElasticsearchDomainStatus { - s.ElasticsearchClusterConfig = v +// SetDomainName sets the DomainName field's value. +func (s *ListElasticsearchInstanceTypesInput) SetDomainName(v string) *ListElasticsearchInstanceTypesInput { + s.DomainName = &v return s } // SetElasticsearchVersion sets the ElasticsearchVersion field's value. -func (s *ElasticsearchDomainStatus) SetElasticsearchVersion(v string) *ElasticsearchDomainStatus { +func (s *ListElasticsearchInstanceTypesInput) SetElasticsearchVersion(v string) *ListElasticsearchInstanceTypesInput { s.ElasticsearchVersion = &v return s } -// SetEncryptionAtRestOptions sets the EncryptionAtRestOptions field's value. -func (s *ElasticsearchDomainStatus) SetEncryptionAtRestOptions(v *EncryptionAtRestOptions) *ElasticsearchDomainStatus { - s.EncryptionAtRestOptions = v +// SetMaxResults sets the MaxResults field's value. +func (s *ListElasticsearchInstanceTypesInput) SetMaxResults(v int64) *ListElasticsearchInstanceTypesInput { + s.MaxResults = &v return s } -// SetEndpoint sets the Endpoint field's value. -func (s *ElasticsearchDomainStatus) SetEndpoint(v string) *ElasticsearchDomainStatus { - s.Endpoint = &v +// SetNextToken sets the NextToken field's value. +func (s *ListElasticsearchInstanceTypesInput) SetNextToken(v string) *ListElasticsearchInstanceTypesInput { + s.NextToken = &v return s } -// SetEndpoints sets the Endpoints field's value. -func (s *ElasticsearchDomainStatus) SetEndpoints(v map[string]*string) *ElasticsearchDomainStatus { - s.Endpoints = v - return s +// Container for the parameters returned by ListElasticsearchInstanceTypes operation. +type ListElasticsearchInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + // List of instance types supported by Amazon Elasticsearch service for given + // ElasticsearchVersion + ElasticsearchInstanceTypes []*string `type:"list"` + + // In case if there are more results available NextToken would be present, make + // further request to the same API with received NextToken to paginate remaining + // results. 
+ NextToken *string `type:"string"` } -// SetLogPublishingOptions sets the LogPublishingOptions field's value. -func (s *ElasticsearchDomainStatus) SetLogPublishingOptions(v map[string]*LogPublishingOption) *ElasticsearchDomainStatus { - s.LogPublishingOptions = v - return s +// String returns the string representation +func (s ListElasticsearchInstanceTypesOutput) String() string { + return awsutil.Prettify(s) } -// SetProcessing sets the Processing field's value. -func (s *ElasticsearchDomainStatus) SetProcessing(v bool) *ElasticsearchDomainStatus { - s.Processing = &v - return s +// GoString returns the string representation +func (s ListElasticsearchInstanceTypesOutput) GoString() string { + return s.String() } -// SetSnapshotOptions sets the SnapshotOptions field's value. -func (s *ElasticsearchDomainStatus) SetSnapshotOptions(v *SnapshotOptions) *ElasticsearchDomainStatus { - s.SnapshotOptions = v +// SetElasticsearchInstanceTypes sets the ElasticsearchInstanceTypes field's value. +func (s *ListElasticsearchInstanceTypesOutput) SetElasticsearchInstanceTypes(v []*string) *ListElasticsearchInstanceTypesOutput { + s.ElasticsearchInstanceTypes = v return s } -// SetVPCOptions sets the VPCOptions field's value. -func (s *ElasticsearchDomainStatus) SetVPCOptions(v *VPCDerivedInfo) *ElasticsearchDomainStatus { - s.VPCOptions = v +// SetNextToken sets the NextToken field's value. +func (s *ListElasticsearchInstanceTypesOutput) SetNextToken(v string) *ListElasticsearchInstanceTypesOutput { + s.NextToken = &v return s } -// Status of the Elasticsearch version options for the specified Elasticsearch -// domain. -type ElasticsearchVersionStatus struct { +// Container for the parameters to the ListElasticsearchVersions operation. +// Use MaxResults to control the maximum number of results to retrieve in a +// single call. +// +// Use NextToken in response to retrieve more results. If the received response +// does not contain a NextToken, then there are no more results to retrieve. +type ListElasticsearchVersionsInput struct { _ struct{} `type:"structure"` - // Specifies the Elasticsearch version for the specified Elasticsearch domain. - // - // Options is a required field - Options *string `type:"string" required:"true"` + // Set this value to limit the number of results returned. Value provided must + // be greater than 10 else it wont be honored. + MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` - // Specifies the status of the Elasticsearch version options for the specified - // Elasticsearch domain. - // - // Status is a required field - Status *OptionStatus `type:"structure" required:"true"` + // Paginated APIs accepts NextToken input to returns next page results and provides + // a NextToken output in the response which can be used by the client to retrieve + // more results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s ElasticsearchVersionStatus) String() string { +func (s ListElasticsearchVersionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchVersionStatus) GoString() string { +func (s ListElasticsearchVersionsInput) GoString() string { return s.String() } -// SetOptions sets the Options field's value. -func (s *ElasticsearchVersionStatus) SetOptions(v string) *ElasticsearchVersionStatus { - s.Options = &v +// SetMaxResults sets the MaxResults field's value. 
+func (s *ListElasticsearchVersionsInput) SetMaxResults(v int64) *ListElasticsearchVersionsInput { + s.MaxResults = &v return s } -// SetStatus sets the Status field's value. -func (s *ElasticsearchVersionStatus) SetStatus(v *OptionStatus) *ElasticsearchVersionStatus { - s.Status = v +// SetNextToken sets the NextToken field's value. +func (s *ListElasticsearchVersionsInput) SetNextToken(v string) *ListElasticsearchVersionsInput { + s.NextToken = &v return s } -// Specifies the Encryption At Rest Options. -type EncryptionAtRestOptions struct { +// Container for the parameters for response received from ListElasticsearchVersions +// operation. +type ListElasticsearchVersionsOutput struct { _ struct{} `type:"structure"` - // Specifies the option to enable Encryption At Rest. - Enabled *bool `type:"boolean"` + // List of supported elastic search versions. + ElasticsearchVersions []*string `type:"list"` - // Specifies the KMS Key ID for Encryption At Rest options. - KmsKeyId *string `min:"1" type:"string"` + // Paginated APIs accepts NextToken input to returns next page results and provides + // a NextToken output in the response which can be used by the client to retrieve + // more results. + NextToken *string `type:"string"` } // String returns the string representation -func (s EncryptionAtRestOptions) String() string { +func (s ListElasticsearchVersionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EncryptionAtRestOptions) GoString() string { +func (s ListElasticsearchVersionsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *EncryptionAtRestOptions) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "EncryptionAtRestOptions"} - if s.KmsKeyId != nil && len(*s.KmsKeyId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("KmsKeyId", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetEnabled sets the Enabled field's value. -func (s *EncryptionAtRestOptions) SetEnabled(v bool) *EncryptionAtRestOptions { - s.Enabled = &v +// SetElasticsearchVersions sets the ElasticsearchVersions field's value. +func (s *ListElasticsearchVersionsOutput) SetElasticsearchVersions(v []*string) *ListElasticsearchVersionsOutput { + s.ElasticsearchVersions = v return s } -// SetKmsKeyId sets the KmsKeyId field's value. -func (s *EncryptionAtRestOptions) SetKmsKeyId(v string) *EncryptionAtRestOptions { - s.KmsKeyId = &v +// SetNextToken sets the NextToken field's value. +func (s *ListElasticsearchVersionsOutput) SetNextToken(v string) *ListElasticsearchVersionsOutput { + s.NextToken = &v return s } -// Status of the Encryption At Rest options for the specified Elasticsearch -// domain. -type EncryptionAtRestOptionsStatus struct { +// Container for the parameters to the ListTags operation. Specify the ARN for +// the Elasticsearch domain to which the tags are attached that you want to +// view are attached. +type ListTagsInput struct { _ struct{} `type:"structure"` - // Specifies the Encryption At Rest options for the specified Elasticsearch - // domain. - // - // Options is a required field - Options *EncryptionAtRestOptions `type:"structure" required:"true"` - - // Specifies the status of the Encryption At Rest options for the specified - // Elasticsearch domain. + // Specify the ARN for the Elasticsearch domain to which the tags are attached + // that you want to view. 
// - // Status is a required field - Status *OptionStatus `type:"structure" required:"true"` + // ARN is a required field + ARN *string `location:"querystring" locationName:"arn" type:"string" required:"true"` } // String returns the string representation -func (s EncryptionAtRestOptionsStatus) String() string { +func (s ListTagsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EncryptionAtRestOptionsStatus) GoString() string { +func (s ListTagsInput) GoString() string { return s.String() } -// SetOptions sets the Options field's value. -func (s *EncryptionAtRestOptionsStatus) SetOptions(v *EncryptionAtRestOptions) *EncryptionAtRestOptionsStatus { - s.Options = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsInput"} + if s.ARN == nil { + invalidParams.Add(request.NewErrParamRequired("ARN")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetStatus sets the Status field's value. -func (s *EncryptionAtRestOptionsStatus) SetStatus(v *OptionStatus) *EncryptionAtRestOptionsStatus { - s.Status = v +// SetARN sets the ARN field's value. +func (s *ListTagsInput) SetARN(v string) *ListTagsInput { + s.ARN = &v return s } -// InstanceCountLimits represents the limits on number of instances that be -// created in Amazon Elasticsearch for given InstanceType. -type InstanceCountLimits struct { +// The result of a ListTags operation. Contains tags for all requested Elasticsearch +// domains. +type ListTagsOutput struct { _ struct{} `type:"structure"` - // Maximum number of Instances that can be instantiated for given InstanceType. - MaximumInstanceCount *int64 `type:"integer"` - - // Minimum number of Instances that can be instantiated for given InstanceType. - MinimumInstanceCount *int64 `type:"integer"` + // List of Tag for the requested Elasticsearch domain. + TagList []*Tag `type:"list"` } // String returns the string representation -func (s InstanceCountLimits) String() string { +func (s ListTagsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InstanceCountLimits) GoString() string { +func (s ListTagsOutput) GoString() string { return s.String() } -// SetMaximumInstanceCount sets the MaximumInstanceCount field's value. -func (s *InstanceCountLimits) SetMaximumInstanceCount(v int64) *InstanceCountLimits { - s.MaximumInstanceCount = &v - return s -} - -// SetMinimumInstanceCount sets the MinimumInstanceCount field's value. -func (s *InstanceCountLimits) SetMinimumInstanceCount(v int64) *InstanceCountLimits { - s.MinimumInstanceCount = &v +// SetTagList sets the TagList field's value. +func (s *ListTagsOutput) SetTagList(v []*Tag) *ListTagsOutput { + s.TagList = v return s } -// InstanceLimits represents the list of instance related attributes that are -// available for given InstanceType. -type InstanceLimits struct { +// Log Publishing option that is set for given domain. Attributes and their details: CloudWatchLogsLogGroupArn: ARN of the Cloudwatch +// log group to which log needs to be published. +// Enabled: Whether the log publishing for given log type is enabled or not +type LogPublishingOption struct { _ struct{} `type:"structure"` - // InstanceCountLimits represents the limits on number of instances that be - // created in Amazon Elasticsearch for given InstanceType. 
- InstanceCountLimits *InstanceCountLimits `type:"structure"` + // ARN of the Cloudwatch log group to which log needs to be published. + CloudWatchLogsLogGroupArn *string `type:"string"` + + // Specifies whether given log publishing option is enabled or not. + Enabled *bool `type:"boolean"` } // String returns the string representation -func (s InstanceLimits) String() string { +func (s LogPublishingOption) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InstanceLimits) GoString() string { +func (s LogPublishingOption) GoString() string { return s.String() } -// SetInstanceCountLimits sets the InstanceCountLimits field's value. -func (s *InstanceLimits) SetInstanceCountLimits(v *InstanceCountLimits) *InstanceLimits { - s.InstanceCountLimits = v +// SetCloudWatchLogsLogGroupArn sets the CloudWatchLogsLogGroupArn field's value. +func (s *LogPublishingOption) SetCloudWatchLogsLogGroupArn(v string) *LogPublishingOption { + s.CloudWatchLogsLogGroupArn = &v return s } -// Limits for given InstanceType and for each of it's role. Limits contains following StorageTypes, InstanceLimitsand AdditionalLimits -type Limits struct { - _ struct{} `type:"structure"` +// SetEnabled sets the Enabled field's value. +func (s *LogPublishingOption) SetEnabled(v bool) *LogPublishingOption { + s.Enabled = &v + return s +} - // List of additional limits that are specific to a given InstanceType and for - // each of it's InstanceRole . - AdditionalLimits []*AdditionalLimit `type:"list"` +// The configured log publishing options for the domain and their current status. +type LogPublishingOptionsStatus struct { + _ struct{} `type:"structure"` - // InstanceLimits represents the list of instance related attributes that are - // available for given InstanceType. - InstanceLimits *InstanceLimits `type:"structure"` + // The log publishing options configured for the Elasticsearch domain. + Options map[string]*LogPublishingOption `type:"map"` - // StorageType represents the list of storage related types and attributes that - // are available for given InstanceType. - StorageTypes []*StorageType `type:"list"` + // The status of the log publishing options for the Elasticsearch domain. See + // OptionStatus for the status information that's included. + Status *OptionStatus `type:"structure"` } // String returns the string representation -func (s Limits) String() string { +func (s LogPublishingOptionsStatus) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Limits) GoString() string { +func (s LogPublishingOptionsStatus) GoString() string { return s.String() } -// SetAdditionalLimits sets the AdditionalLimits field's value. -func (s *Limits) SetAdditionalLimits(v []*AdditionalLimit) *Limits { - s.AdditionalLimits = v - return s -} - -// SetInstanceLimits sets the InstanceLimits field's value. -func (s *Limits) SetInstanceLimits(v *InstanceLimits) *Limits { - s.InstanceLimits = v +// SetOptions sets the Options field's value. +func (s *LogPublishingOptionsStatus) SetOptions(v map[string]*LogPublishingOption) *LogPublishingOptionsStatus { + s.Options = v return s } -// SetStorageTypes sets the StorageTypes field's value. -func (s *Limits) SetStorageTypes(v []*StorageType) *Limits { - s.StorageTypes = v +// SetStatus sets the Status field's value. 
+func (s *LogPublishingOptionsStatus) SetStatus(v *OptionStatus) *LogPublishingOptionsStatus { + s.Status = v return s } -type ListDomainNamesInput struct { +// Specifies the node-to-node encryption options. +type NodeToNodeEncryptionOptions struct { _ struct{} `type:"structure"` + + // Specify true to enable node-to-node encryption. + Enabled *bool `type:"boolean"` } // String returns the string representation -func (s ListDomainNamesInput) String() string { +func (s NodeToNodeEncryptionOptions) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListDomainNamesInput) GoString() string { +func (s NodeToNodeEncryptionOptions) GoString() string { return s.String() } -// The result of a ListDomainNames operation. Contains the names of all Elasticsearch -// domains owned by this account. -type ListDomainNamesOutput struct { +// SetEnabled sets the Enabled field's value. +func (s *NodeToNodeEncryptionOptions) SetEnabled(v bool) *NodeToNodeEncryptionOptions { + s.Enabled = &v + return s +} + +// Status of the node-to-node encryption options for the specified Elasticsearch +// domain. +type NodeToNodeEncryptionOptionsStatus struct { _ struct{} `type:"structure"` - // List of Elasticsearch domain names. - DomainNames []*DomainInfo `type:"list"` + // Specifies the node-to-node encryption options for the specified Elasticsearch + // domain. + // + // Options is a required field + Options *NodeToNodeEncryptionOptions `type:"structure" required:"true"` + + // Specifies the status of the node-to-node encryption options for the specified + // Elasticsearch domain. + // + // Status is a required field + Status *OptionStatus `type:"structure" required:"true"` } // String returns the string representation -func (s ListDomainNamesOutput) String() string { +func (s NodeToNodeEncryptionOptionsStatus) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListDomainNamesOutput) GoString() string { +func (s NodeToNodeEncryptionOptionsStatus) GoString() string { return s.String() } -// SetDomainNames sets the DomainNames field's value. -func (s *ListDomainNamesOutput) SetDomainNames(v []*DomainInfo) *ListDomainNamesOutput { - s.DomainNames = v +// SetOptions sets the Options field's value. +func (s *NodeToNodeEncryptionOptionsStatus) SetOptions(v *NodeToNodeEncryptionOptions) *NodeToNodeEncryptionOptionsStatus { + s.Options = v return s } -// Container for the parameters to the ListElasticsearchInstanceTypes operation. -type ListElasticsearchInstanceTypesInput struct { +// SetStatus sets the Status field's value. +func (s *NodeToNodeEncryptionOptionsStatus) SetStatus(v *OptionStatus) *NodeToNodeEncryptionOptionsStatus { + s.Status = v + return s +} + +// Provides the current status of the entity. +type OptionStatus struct { _ struct{} `type:"structure"` - // DomainName represents the name of the Domain that we are trying to modify. - // This should be present only if we are querying for list of available Elasticsearch - // instance types when modifying existing domain. - DomainName *string `location:"querystring" locationName:"domainName" min:"3" type:"string"` + // Timestamp which tells the creation date for the entity. + // + // CreationDate is a required field + CreationDate *time.Time `type:"timestamp" required:"true"` - // Version of Elasticsearch for which list of supported elasticsearch instance - // types are needed. + // Indicates whether the Elasticsearch domain is being deleted. 
+ PendingDeletion *bool `type:"boolean"` + + // Provides the OptionState for the Elasticsearch domain. // - // ElasticsearchVersion is a required field - ElasticsearchVersion *string `location:"uri" locationName:"ElasticsearchVersion" type:"string" required:"true"` + // State is a required field + State *string `type:"string" required:"true" enum:"OptionState"` - // Set this value to limit the number of results returned. Value provided must - // be greater than 30 else it wont be honored. - MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` + // Timestamp which tells the last updated time for the entity. + // + // UpdateDate is a required field + UpdateDate *time.Time `type:"timestamp" required:"true"` - // NextToken should be sent in case if earlier API call produced result containing - // NextToken. It is used for pagination. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // Specifies the latest version for the entity. + UpdateVersion *int64 `type:"integer"` } // String returns the string representation -func (s ListElasticsearchInstanceTypesInput) String() string { +func (s OptionStatus) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListElasticsearchInstanceTypesInput) GoString() string { +func (s OptionStatus) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ListElasticsearchInstanceTypesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListElasticsearchInstanceTypesInput"} - if s.DomainName != nil && len(*s.DomainName) < 3 { - invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) - } - if s.ElasticsearchVersion == nil { - invalidParams.Add(request.NewErrParamRequired("ElasticsearchVersion")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCreationDate sets the CreationDate field's value. +func (s *OptionStatus) SetCreationDate(v time.Time) *OptionStatus { + s.CreationDate = &v + return s } -// SetDomainName sets the DomainName field's value. -func (s *ListElasticsearchInstanceTypesInput) SetDomainName(v string) *ListElasticsearchInstanceTypesInput { - s.DomainName = &v +// SetPendingDeletion sets the PendingDeletion field's value. +func (s *OptionStatus) SetPendingDeletion(v bool) *OptionStatus { + s.PendingDeletion = &v return s } -// SetElasticsearchVersion sets the ElasticsearchVersion field's value. -func (s *ListElasticsearchInstanceTypesInput) SetElasticsearchVersion(v string) *ListElasticsearchInstanceTypesInput { - s.ElasticsearchVersion = &v +// SetState sets the State field's value. +func (s *OptionStatus) SetState(v string) *OptionStatus { + s.State = &v return s } -// SetMaxResults sets the MaxResults field's value. -func (s *ListElasticsearchInstanceTypesInput) SetMaxResults(v int64) *ListElasticsearchInstanceTypesInput { - s.MaxResults = &v +// SetUpdateDate sets the UpdateDate field's value. +func (s *OptionStatus) SetUpdateDate(v time.Time) *OptionStatus { + s.UpdateDate = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListElasticsearchInstanceTypesInput) SetNextToken(v string) *ListElasticsearchInstanceTypesInput { - s.NextToken = &v +// SetUpdateVersion sets the UpdateVersion field's value. 
+func (s *OptionStatus) SetUpdateVersion(v int64) *OptionStatus { + s.UpdateVersion = &v return s } -// Container for the parameters returned by ListElasticsearchInstanceTypes operation. -type ListElasticsearchInstanceTypesOutput struct { +// Container for parameters to PurchaseReservedElasticsearchInstanceOffering +type PurchaseReservedElasticsearchInstanceOfferingInput struct { _ struct{} `type:"structure"` - // List of instance types supported by Amazon Elasticsearch service for given - // ElasticsearchVersion - ElasticsearchInstanceTypes []*string `type:"list"` + // The number of Elasticsearch instances to reserve. + InstanceCount *int64 `min:"1" type:"integer"` - // In case if there are more results available NextToken would be present, make - // further request to the same API with received NextToken to paginate remaining - // results. - NextToken *string `type:"string"` + // A customer-specified identifier to track this reservation. + // + // ReservationName is a required field + ReservationName *string `min:"5" type:"string" required:"true"` + + // The ID of the reserved Elasticsearch instance offering to purchase. + // + // ReservedElasticsearchInstanceOfferingId is a required field + ReservedElasticsearchInstanceOfferingId *string `type:"string" required:"true"` } // String returns the string representation -func (s ListElasticsearchInstanceTypesOutput) String() string { +func (s PurchaseReservedElasticsearchInstanceOfferingInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListElasticsearchInstanceTypesOutput) GoString() string { +func (s PurchaseReservedElasticsearchInstanceOfferingInput) GoString() string { return s.String() } -// SetElasticsearchInstanceTypes sets the ElasticsearchInstanceTypes field's value. -func (s *ListElasticsearchInstanceTypesOutput) SetElasticsearchInstanceTypes(v []*string) *ListElasticsearchInstanceTypesOutput { - s.ElasticsearchInstanceTypes = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *PurchaseReservedElasticsearchInstanceOfferingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PurchaseReservedElasticsearchInstanceOfferingInput"} + if s.InstanceCount != nil && *s.InstanceCount < 1 { + invalidParams.Add(request.NewErrParamMinValue("InstanceCount", 1)) + } + if s.ReservationName == nil { + invalidParams.Add(request.NewErrParamRequired("ReservationName")) + } + if s.ReservationName != nil && len(*s.ReservationName) < 5 { + invalidParams.Add(request.NewErrParamMinLen("ReservationName", 5)) + } + if s.ReservedElasticsearchInstanceOfferingId == nil { + invalidParams.Add(request.NewErrParamRequired("ReservedElasticsearchInstanceOfferingId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceCount sets the InstanceCount field's value. +func (s *PurchaseReservedElasticsearchInstanceOfferingInput) SetInstanceCount(v int64) *PurchaseReservedElasticsearchInstanceOfferingInput { + s.InstanceCount = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListElasticsearchInstanceTypesOutput) SetNextToken(v string) *ListElasticsearchInstanceTypesOutput { - s.NextToken = &v +// SetReservationName sets the ReservationName field's value. 
+func (s *PurchaseReservedElasticsearchInstanceOfferingInput) SetReservationName(v string) *PurchaseReservedElasticsearchInstanceOfferingInput { + s.ReservationName = &v return s } -// Container for the parameters to the ListElasticsearchVersions operation. -// Use MaxResults to control the maximum number of results to retrieve in a -// single call. -// -// Use NextToken in response to retrieve more results. If the received response -// does not contain a NextToken, then there are no more results to retrieve. -type ListElasticsearchVersionsInput struct { +// SetReservedElasticsearchInstanceOfferingId sets the ReservedElasticsearchInstanceOfferingId field's value. +func (s *PurchaseReservedElasticsearchInstanceOfferingInput) SetReservedElasticsearchInstanceOfferingId(v string) *PurchaseReservedElasticsearchInstanceOfferingInput { + s.ReservedElasticsearchInstanceOfferingId = &v + return s +} + +// Represents the output of a PurchaseReservedElasticsearchInstanceOffering +// operation. +type PurchaseReservedElasticsearchInstanceOfferingOutput struct { _ struct{} `type:"structure"` - // Set this value to limit the number of results returned. Value provided must - // be greater than 10 else it wont be honored. - MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` + // The customer-specified identifier used to track this reservation. + ReservationName *string `min:"5" type:"string"` - // Paginated APIs accepts NextToken input to returns next page results and provides - // a NextToken output in the response which can be used by the client to retrieve - // more results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // Details of the reserved Elasticsearch instance which was purchased. + ReservedElasticsearchInstanceId *string `type:"string"` } // String returns the string representation -func (s ListElasticsearchVersionsInput) String() string { +func (s PurchaseReservedElasticsearchInstanceOfferingOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListElasticsearchVersionsInput) GoString() string { +func (s PurchaseReservedElasticsearchInstanceOfferingOutput) GoString() string { return s.String() } -// SetMaxResults sets the MaxResults field's value. -func (s *ListElasticsearchVersionsInput) SetMaxResults(v int64) *ListElasticsearchVersionsInput { - s.MaxResults = &v +// SetReservationName sets the ReservationName field's value. +func (s *PurchaseReservedElasticsearchInstanceOfferingOutput) SetReservationName(v string) *PurchaseReservedElasticsearchInstanceOfferingOutput { + s.ReservationName = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListElasticsearchVersionsInput) SetNextToken(v string) *ListElasticsearchVersionsInput { - s.NextToken = &v +// SetReservedElasticsearchInstanceId sets the ReservedElasticsearchInstanceId field's value. +func (s *PurchaseReservedElasticsearchInstanceOfferingOutput) SetReservedElasticsearchInstanceId(v string) *PurchaseReservedElasticsearchInstanceOfferingOutput { + s.ReservedElasticsearchInstanceId = &v return s } -// Container for the parameters for response received from ListElasticsearchVersions -// operation. -type ListElasticsearchVersionsOutput struct { +// Contains the specific price and frequency of a recurring charges for a reserved +// Elasticsearch instance, or for a reserved Elasticsearch instance offering. 
+type RecurringCharge struct { _ struct{} `type:"structure"` - // List of supported elastic search versions. - ElasticsearchVersions []*string `type:"list"` + // The monetary amount of the recurring charge. + RecurringChargeAmount *float64 `type:"double"` - // Paginated APIs accepts NextToken input to returns next page results and provides - // a NextToken output in the response which can be used by the client to retrieve - // more results. - NextToken *string `type:"string"` + // The frequency of the recurring charge. + RecurringChargeFrequency *string `type:"string"` } // String returns the string representation -func (s ListElasticsearchVersionsOutput) String() string { +func (s RecurringCharge) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListElasticsearchVersionsOutput) GoString() string { +func (s RecurringCharge) GoString() string { return s.String() } -// SetElasticsearchVersions sets the ElasticsearchVersions field's value. -func (s *ListElasticsearchVersionsOutput) SetElasticsearchVersions(v []*string) *ListElasticsearchVersionsOutput { - s.ElasticsearchVersions = v +// SetRecurringChargeAmount sets the RecurringChargeAmount field's value. +func (s *RecurringCharge) SetRecurringChargeAmount(v float64) *RecurringCharge { + s.RecurringChargeAmount = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListElasticsearchVersionsOutput) SetNextToken(v string) *ListElasticsearchVersionsOutput { - s.NextToken = &v +// SetRecurringChargeFrequency sets the RecurringChargeFrequency field's value. +func (s *RecurringCharge) SetRecurringChargeFrequency(v string) *RecurringCharge { + s.RecurringChargeFrequency = &v return s } -// Container for the parameters to the ListTags operation. Specify the ARN for -// the Elasticsearch domain to which the tags are attached that you want to -// view are attached. -type ListTagsInput struct { +// Container for the parameters to the RemoveTags operation. Specify the ARN +// for the Elasticsearch domain from which you want to remove the specified +// TagKey. +type RemoveTagsInput struct { _ struct{} `type:"structure"` - // Specify the ARN for the Elasticsearch domain to which the tags are attached - // that you want to view. + // Specifies the ARN for the Elasticsearch domain from which you want to delete + // the specified tags. // // ARN is a required field - ARN *string `location:"querystring" locationName:"arn" type:"string" required:"true"` + ARN *string `type:"string" required:"true"` + + // Specifies the TagKey list which you want to remove from the Elasticsearch + // domain. + // + // TagKeys is a required field + TagKeys []*string `type:"list" required:"true"` } // String returns the string representation -func (s ListTagsInput) String() string { +func (s RemoveTagsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTagsInput) GoString() string { +func (s RemoveTagsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListTagsInput"} +func (s *RemoveTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveTagsInput"} if s.ARN == nil { invalidParams.Add(request.NewErrParamRequired("ARN")) } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } if invalidParams.Len() > 0 { return invalidParams @@ -3181,240 +5311,337 @@ func (s *ListTagsInput) Validate() error { } // SetARN sets the ARN field's value. -func (s *ListTagsInput) SetARN(v string) *ListTagsInput { +func (s *RemoveTagsInput) SetARN(v string) *RemoveTagsInput { s.ARN = &v return s } -// The result of a ListTags operation. Contains tags for all requested Elasticsearch -// domains. -type ListTagsOutput struct { - _ struct{} `type:"structure"` +// SetTagKeys sets the TagKeys field's value. +func (s *RemoveTagsInput) SetTagKeys(v []*string) *RemoveTagsInput { + s.TagKeys = v + return s +} - // List of Tag for the requested Elasticsearch domain. - TagList []*Tag `type:"list"` +type RemoveTagsOutput struct { + _ struct{} `type:"structure"` } // String returns the string representation -func (s ListTagsOutput) String() string { +func (s RemoveTagsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTagsOutput) GoString() string { +func (s RemoveTagsOutput) GoString() string { return s.String() } -// SetTagList sets the TagList field's value. -func (s *ListTagsOutput) SetTagList(v []*Tag) *ListTagsOutput { - s.TagList = v - return s -} - -// Log Publishing option that is set for given domain. Attributes and their details: CloudWatchLogsLogGroupArn: ARN of the Cloudwatch -// log group to which log needs to be published. -// Enabled: Whether the log publishing for given log type is enabled or not -type LogPublishingOption struct { +// Details of a reserved Elasticsearch instance. +type ReservedElasticsearchInstance struct { _ struct{} `type:"structure"` - // ARN of the Cloudwatch log group to which log needs to be published. - CloudWatchLogsLogGroupArn *string `type:"string"` + // The currency code for the reserved Elasticsearch instance offering. + CurrencyCode *string `type:"string"` - // Specifies whether given log publishing option is enabled or not. - Enabled *bool `type:"boolean"` + // The duration, in seconds, for which the Elasticsearch instance is reserved. + Duration *int64 `type:"integer"` + + // The number of Elasticsearch instances that have been reserved. + ElasticsearchInstanceCount *int64 `type:"integer"` + + // The Elasticsearch instance type offered by the reserved instance offering. + ElasticsearchInstanceType *string `type:"string" enum:"ESPartitionInstanceType"` + + // The upfront fixed charge you will paid to purchase the specific reserved + // Elasticsearch instance offering. + FixedPrice *float64 `type:"double"` + + // The payment option as defined in the reserved Elasticsearch instance offering. + PaymentOption *string `type:"string" enum:"ReservedElasticsearchInstancePaymentOption"` + + // The charge to your account regardless of whether you are creating any domains + // using the instance offering. + RecurringCharges []*RecurringCharge `type:"list"` + + // The customer-specified identifier to track this reservation. + ReservationName *string `min:"5" type:"string"` + + // The unique identifier for the reservation. + ReservedElasticsearchInstanceId *string `type:"string"` + + // The offering identifier. 
+ ReservedElasticsearchInstanceOfferingId *string `type:"string"` + + // The time the reservation started. + StartTime *time.Time `type:"timestamp"` + + // The state of the reserved Elasticsearch instance. + State *string `type:"string"` + + // The rate you are charged for each hour for the domain that is using this + // reserved instance. + UsagePrice *float64 `type:"double"` } // String returns the string representation -func (s LogPublishingOption) String() string { +func (s ReservedElasticsearchInstance) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LogPublishingOption) GoString() string { +func (s ReservedElasticsearchInstance) GoString() string { return s.String() } -// SetCloudWatchLogsLogGroupArn sets the CloudWatchLogsLogGroupArn field's value. -func (s *LogPublishingOption) SetCloudWatchLogsLogGroupArn(v string) *LogPublishingOption { - s.CloudWatchLogsLogGroupArn = &v +// SetCurrencyCode sets the CurrencyCode field's value. +func (s *ReservedElasticsearchInstance) SetCurrencyCode(v string) *ReservedElasticsearchInstance { + s.CurrencyCode = &v return s } -// SetEnabled sets the Enabled field's value. -func (s *LogPublishingOption) SetEnabled(v bool) *LogPublishingOption { - s.Enabled = &v +// SetDuration sets the Duration field's value. +func (s *ReservedElasticsearchInstance) SetDuration(v int64) *ReservedElasticsearchInstance { + s.Duration = &v return s } -// The configured log publishing options for the domain and their current status. -type LogPublishingOptionsStatus struct { - _ struct{} `type:"structure"` +// SetElasticsearchInstanceCount sets the ElasticsearchInstanceCount field's value. +func (s *ReservedElasticsearchInstance) SetElasticsearchInstanceCount(v int64) *ReservedElasticsearchInstance { + s.ElasticsearchInstanceCount = &v + return s +} - // The log publishing options configured for the Elasticsearch domain. - Options map[string]*LogPublishingOption `type:"map"` +// SetElasticsearchInstanceType sets the ElasticsearchInstanceType field's value. +func (s *ReservedElasticsearchInstance) SetElasticsearchInstanceType(v string) *ReservedElasticsearchInstance { + s.ElasticsearchInstanceType = &v + return s +} - // The status of the log publishing options for the Elasticsearch domain. See - // OptionStatus for the status information that's included. - Status *OptionStatus `type:"structure"` +// SetFixedPrice sets the FixedPrice field's value. +func (s *ReservedElasticsearchInstance) SetFixedPrice(v float64) *ReservedElasticsearchInstance { + s.FixedPrice = &v + return s } -// String returns the string representation -func (s LogPublishingOptionsStatus) String() string { - return awsutil.Prettify(s) +// SetPaymentOption sets the PaymentOption field's value. +func (s *ReservedElasticsearchInstance) SetPaymentOption(v string) *ReservedElasticsearchInstance { + s.PaymentOption = &v + return s } -// GoString returns the string representation -func (s LogPublishingOptionsStatus) GoString() string { - return s.String() +// SetRecurringCharges sets the RecurringCharges field's value. +func (s *ReservedElasticsearchInstance) SetRecurringCharges(v []*RecurringCharge) *ReservedElasticsearchInstance { + s.RecurringCharges = v + return s } -// SetOptions sets the Options field's value. -func (s *LogPublishingOptionsStatus) SetOptions(v map[string]*LogPublishingOption) *LogPublishingOptionsStatus { - s.Options = v +// SetReservationName sets the ReservationName field's value. 
+func (s *ReservedElasticsearchInstance) SetReservationName(v string) *ReservedElasticsearchInstance { + s.ReservationName = &v return s } -// SetStatus sets the Status field's value. -func (s *LogPublishingOptionsStatus) SetStatus(v *OptionStatus) *LogPublishingOptionsStatus { - s.Status = v +// SetReservedElasticsearchInstanceId sets the ReservedElasticsearchInstanceId field's value. +func (s *ReservedElasticsearchInstance) SetReservedElasticsearchInstanceId(v string) *ReservedElasticsearchInstance { + s.ReservedElasticsearchInstanceId = &v return s } -// Provides the current status of the entity. -type OptionStatus struct { +// SetReservedElasticsearchInstanceOfferingId sets the ReservedElasticsearchInstanceOfferingId field's value. +func (s *ReservedElasticsearchInstance) SetReservedElasticsearchInstanceOfferingId(v string) *ReservedElasticsearchInstance { + s.ReservedElasticsearchInstanceOfferingId = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *ReservedElasticsearchInstance) SetStartTime(v time.Time) *ReservedElasticsearchInstance { + s.StartTime = &v + return s +} + +// SetState sets the State field's value. +func (s *ReservedElasticsearchInstance) SetState(v string) *ReservedElasticsearchInstance { + s.State = &v + return s +} + +// SetUsagePrice sets the UsagePrice field's value. +func (s *ReservedElasticsearchInstance) SetUsagePrice(v float64) *ReservedElasticsearchInstance { + s.UsagePrice = &v + return s +} + +// Details of a reserved Elasticsearch instance offering. +type ReservedElasticsearchInstanceOffering struct { _ struct{} `type:"structure"` - // Timestamp which tells the creation date for the entity. - // - // CreationDate is a required field - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + // The currency code for the reserved Elasticsearch instance offering. + CurrencyCode *string `type:"string"` - // Indicates whether the Elasticsearch domain is being deleted. - PendingDeletion *bool `type:"boolean"` + // The duration, in seconds, for which the offering will reserve the Elasticsearch + // instance. + Duration *int64 `type:"integer"` - // Provides the OptionState for the Elasticsearch domain. - // - // State is a required field - State *string `type:"string" required:"true" enum:"OptionState"` + // The Elasticsearch instance type offered by the reserved instance offering. + ElasticsearchInstanceType *string `type:"string" enum:"ESPartitionInstanceType"` - // Timestamp which tells the last updated time for the entity. - // - // UpdateDate is a required field - UpdateDate *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + // The upfront fixed charge you will pay to purchase the specific reserved Elasticsearch + // instance offering. + FixedPrice *float64 `type:"double"` - // Specifies the latest version for the entity. - UpdateVersion *int64 `type:"integer"` + // Payment option for the reserved Elasticsearch instance offering + PaymentOption *string `type:"string" enum:"ReservedElasticsearchInstancePaymentOption"` + + // The charge to your account regardless of whether you are creating any domains + // using the instance offering. + RecurringCharges []*RecurringCharge `type:"list"` + + // The Elasticsearch reserved instance offering identifier. + ReservedElasticsearchInstanceOfferingId *string `type:"string"` + + // The rate you are charged for each hour the domain that is using the offering + // is running. 
+ UsagePrice *float64 `type:"double"` } // String returns the string representation -func (s OptionStatus) String() string { +func (s ReservedElasticsearchInstanceOffering) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OptionStatus) GoString() string { +func (s ReservedElasticsearchInstanceOffering) GoString() string { return s.String() } -// SetCreationDate sets the CreationDate field's value. -func (s *OptionStatus) SetCreationDate(v time.Time) *OptionStatus { - s.CreationDate = &v +// SetCurrencyCode sets the CurrencyCode field's value. +func (s *ReservedElasticsearchInstanceOffering) SetCurrencyCode(v string) *ReservedElasticsearchInstanceOffering { + s.CurrencyCode = &v return s } -// SetPendingDeletion sets the PendingDeletion field's value. -func (s *OptionStatus) SetPendingDeletion(v bool) *OptionStatus { - s.PendingDeletion = &v +// SetDuration sets the Duration field's value. +func (s *ReservedElasticsearchInstanceOffering) SetDuration(v int64) *ReservedElasticsearchInstanceOffering { + s.Duration = &v return s } -// SetState sets the State field's value. -func (s *OptionStatus) SetState(v string) *OptionStatus { - s.State = &v +// SetElasticsearchInstanceType sets the ElasticsearchInstanceType field's value. +func (s *ReservedElasticsearchInstanceOffering) SetElasticsearchInstanceType(v string) *ReservedElasticsearchInstanceOffering { + s.ElasticsearchInstanceType = &v return s } -// SetUpdateDate sets the UpdateDate field's value. -func (s *OptionStatus) SetUpdateDate(v time.Time) *OptionStatus { - s.UpdateDate = &v +// SetFixedPrice sets the FixedPrice field's value. +func (s *ReservedElasticsearchInstanceOffering) SetFixedPrice(v float64) *ReservedElasticsearchInstanceOffering { + s.FixedPrice = &v return s } -// SetUpdateVersion sets the UpdateVersion field's value. -func (s *OptionStatus) SetUpdateVersion(v int64) *OptionStatus { - s.UpdateVersion = &v +// SetPaymentOption sets the PaymentOption field's value. +func (s *ReservedElasticsearchInstanceOffering) SetPaymentOption(v string) *ReservedElasticsearchInstanceOffering { + s.PaymentOption = &v return s } -// Container for the parameters to the RemoveTags operation. Specify the ARN -// for the Elasticsearch domain from which you want to remove the specified -// TagKey. -type RemoveTagsInput struct { +// SetRecurringCharges sets the RecurringCharges field's value. +func (s *ReservedElasticsearchInstanceOffering) SetRecurringCharges(v []*RecurringCharge) *ReservedElasticsearchInstanceOffering { + s.RecurringCharges = v + return s +} + +// SetReservedElasticsearchInstanceOfferingId sets the ReservedElasticsearchInstanceOfferingId field's value. +func (s *ReservedElasticsearchInstanceOffering) SetReservedElasticsearchInstanceOfferingId(v string) *ReservedElasticsearchInstanceOffering { + s.ReservedElasticsearchInstanceOfferingId = &v + return s +} + +// SetUsagePrice sets the UsagePrice field's value. +func (s *ReservedElasticsearchInstanceOffering) SetUsagePrice(v float64) *ReservedElasticsearchInstanceOffering { + s.UsagePrice = &v + return s +} + +// The current options of an Elasticsearch domain service software options. +type ServiceSoftwareOptions struct { _ struct{} `type:"structure"` - // Specifies the ARN for the Elasticsearch domain from which you want to delete - // the specified tags. 
- // - // ARN is a required field - ARN *string `type:"string" required:"true"` + // Timestamp, in Epoch time, until which you can manually request a service + // software update. After this date, we automatically update your service software. + AutomatedUpdateDate *time.Time `type:"timestamp"` + + // True if you are able to cancel your service software version update. False + // if you are not able to cancel your service software version. + Cancellable *bool `type:"boolean"` + + // The current service software version that is present on the domain. + CurrentVersion *string `type:"string"` + + // The description of the UpdateStatus. + Description *string `type:"string"` + + // The new service software version if one is available. + NewVersion *string `type:"string"` + + // True if you are able to update you service software version. False if you + // are not able to update your service software version. + UpdateAvailable *bool `type:"boolean"` - // Specifies the TagKey list which you want to remove from the Elasticsearch - // domain. - // - // TagKeys is a required field - TagKeys []*string `type:"list" required:"true"` + // The status of your service software update. This field can take the following + // values: ELIGIBLE, PENDING_UPDATE, IN_PROGRESS, COMPLETED, and NOT_ELIGIBLE. + UpdateStatus *string `type:"string" enum:"DeploymentStatus"` } // String returns the string representation -func (s RemoveTagsInput) String() string { +func (s ServiceSoftwareOptions) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RemoveTagsInput) GoString() string { +func (s ServiceSoftwareOptions) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *RemoveTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RemoveTagsInput"} - if s.ARN == nil { - invalidParams.Add(request.NewErrParamRequired("ARN")) - } - if s.TagKeys == nil { - invalidParams.Add(request.NewErrParamRequired("TagKeys")) - } +// SetAutomatedUpdateDate sets the AutomatedUpdateDate field's value. +func (s *ServiceSoftwareOptions) SetAutomatedUpdateDate(v time.Time) *ServiceSoftwareOptions { + s.AutomatedUpdateDate = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCancellable sets the Cancellable field's value. +func (s *ServiceSoftwareOptions) SetCancellable(v bool) *ServiceSoftwareOptions { + s.Cancellable = &v + return s } -// SetARN sets the ARN field's value. -func (s *RemoveTagsInput) SetARN(v string) *RemoveTagsInput { - s.ARN = &v +// SetCurrentVersion sets the CurrentVersion field's value. +func (s *ServiceSoftwareOptions) SetCurrentVersion(v string) *ServiceSoftwareOptions { + s.CurrentVersion = &v return s } -// SetTagKeys sets the TagKeys field's value. -func (s *RemoveTagsInput) SetTagKeys(v []*string) *RemoveTagsInput { - s.TagKeys = v +// SetDescription sets the Description field's value. +func (s *ServiceSoftwareOptions) SetDescription(v string) *ServiceSoftwareOptions { + s.Description = &v return s } -type RemoveTagsOutput struct { - _ struct{} `type:"structure"` +// SetNewVersion sets the NewVersion field's value. +func (s *ServiceSoftwareOptions) SetNewVersion(v string) *ServiceSoftwareOptions { + s.NewVersion = &v + return s } -// String returns the string representation -func (s RemoveTagsOutput) String() string { - return awsutil.Prettify(s) +// SetUpdateAvailable sets the UpdateAvailable field's value. 
+func (s *ServiceSoftwareOptions) SetUpdateAvailable(v bool) *ServiceSoftwareOptions { + s.UpdateAvailable = &v + return s } -// GoString returns the string representation -func (s RemoveTagsOutput) GoString() string { - return s.String() +// SetUpdateStatus sets the UpdateStatus field's value. +func (s *ServiceSoftwareOptions) SetUpdateStatus(v string) *ServiceSoftwareOptions { + s.UpdateStatus = &v + return s } // Specifies the time, in UTC format, when the service takes a daily automated @@ -3480,6 +5707,75 @@ func (s *SnapshotOptionsStatus) SetStatus(v *OptionStatus) *SnapshotOptionsStatu return s } +// Container for the parameters to the StartElasticsearchServiceSoftwareUpdate +// operation. Specifies the name of the Elasticsearch domain that you wish to +// schedule a service software update on. +type StartElasticsearchServiceSoftwareUpdateInput struct { + _ struct{} `type:"structure"` + + // The name of the domain that you want to update to the latest service software. + // + // DomainName is a required field + DomainName *string `min:"3" type:"string" required:"true"` +} + +// String returns the string representation +func (s StartElasticsearchServiceSoftwareUpdateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartElasticsearchServiceSoftwareUpdateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StartElasticsearchServiceSoftwareUpdateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartElasticsearchServiceSoftwareUpdateInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *StartElasticsearchServiceSoftwareUpdateInput) SetDomainName(v string) *StartElasticsearchServiceSoftwareUpdateInput { + s.DomainName = &v + return s +} + +// The result of a StartElasticsearchServiceSoftwareUpdate operation. Contains +// the status of the update. +type StartElasticsearchServiceSoftwareUpdateOutput struct { + _ struct{} `type:"structure"` + + // The current status of the Elasticsearch service software update. + ServiceSoftwareOptions *ServiceSoftwareOptions `type:"structure"` +} + +// String returns the string representation +func (s StartElasticsearchServiceSoftwareUpdateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartElasticsearchServiceSoftwareUpdateOutput) GoString() string { + return s.String() +} + +// SetServiceSoftwareOptions sets the ServiceSoftwareOptions field's value. +func (s *StartElasticsearchServiceSoftwareUpdateOutput) SetServiceSoftwareOptions(v *ServiceSoftwareOptions) *StartElasticsearchServiceSoftwareUpdateOutput { + s.ServiceSoftwareOptions = v + return s +} + // StorageTypes represents the list of storage related types and their attributes // that are available for given InstanceType. type StorageType struct { @@ -3646,6 +5942,10 @@ type UpdateElasticsearchDomainConfigInput struct { // for more information. AdvancedOptions map[string]*string `type:"map"` + // Options to specify the Cognito user and identity pools for Kibana authentication. 
+ // For more information, see Amazon Cognito Authentication for Kibana (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html). + CognitoOptions *CognitoOptions `type:"structure"` + // The name of the Elasticsearch domain that you are updating. // // DomainName is a required field @@ -3690,6 +5990,11 @@ func (s *UpdateElasticsearchDomainConfigInput) Validate() error { if s.DomainName != nil && len(*s.DomainName) < 3 { invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) } + if s.CognitoOptions != nil { + if err := s.CognitoOptions.Validate(); err != nil { + invalidParams.AddNested("CognitoOptions", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -3709,6 +6014,12 @@ func (s *UpdateElasticsearchDomainConfigInput) SetAdvancedOptions(v map[string]* return s } +// SetCognitoOptions sets the CognitoOptions field's value. +func (s *UpdateElasticsearchDomainConfigInput) SetCognitoOptions(v *CognitoOptions) *UpdateElasticsearchDomainConfigInput { + s.CognitoOptions = v + return s +} + // SetDomainName sets the DomainName field's value. func (s *UpdateElasticsearchDomainConfigInput) SetDomainName(v string) *UpdateElasticsearchDomainConfigInput { s.DomainName = &v @@ -3772,6 +6083,238 @@ func (s *UpdateElasticsearchDomainConfigOutput) SetDomainConfig(v *Elasticsearch return s } +// Container for request parameters to UpgradeElasticsearchDomain operation. +type UpgradeElasticsearchDomainInput struct { + _ struct{} `type:"structure"` + + // The name of an Elasticsearch domain. Domain names are unique across the domains + // owned by an account within an AWS region. Domain names start with a letter + // or number and can contain the following characters: a-z (lowercase), 0-9, + // and - (hyphen). + // + // DomainName is a required field + DomainName *string `min:"3" type:"string" required:"true"` + + // This flag, when set to True, indicates that an Upgrade Eligibility Check + // needs to be performed. This will not actually perform the Upgrade. + PerformCheckOnly *bool `type:"boolean"` + + // The version of Elasticsearch that you intend to upgrade the domain to. + // + // TargetVersion is a required field + TargetVersion *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s UpgradeElasticsearchDomainInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpgradeElasticsearchDomainInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpgradeElasticsearchDomainInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpgradeElasticsearchDomainInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.DomainName != nil && len(*s.DomainName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("DomainName", 3)) + } + if s.TargetVersion == nil { + invalidParams.Add(request.NewErrParamRequired("TargetVersion")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *UpgradeElasticsearchDomainInput) SetDomainName(v string) *UpgradeElasticsearchDomainInput { + s.DomainName = &v + return s +} + +// SetPerformCheckOnly sets the PerformCheckOnly field's value. 
+func (s *UpgradeElasticsearchDomainInput) SetPerformCheckOnly(v bool) *UpgradeElasticsearchDomainInput { + s.PerformCheckOnly = &v + return s +} + +// SetTargetVersion sets the TargetVersion field's value. +func (s *UpgradeElasticsearchDomainInput) SetTargetVersion(v string) *UpgradeElasticsearchDomainInput { + s.TargetVersion = &v + return s +} + +// Container for response returned by UpgradeElasticsearchDomain operation. +type UpgradeElasticsearchDomainOutput struct { + _ struct{} `type:"structure"` + + // The name of an Elasticsearch domain. Domain names are unique across the domains + // owned by an account within an AWS region. Domain names start with a letter + // or number and can contain the following characters: a-z (lowercase), 0-9, + // and - (hyphen). + DomainName *string `min:"3" type:"string"` + + // This flag, when set to True, indicates that an Upgrade Eligibility Check + // needs to be performed. This will not actually perform the Upgrade. + PerformCheckOnly *bool `type:"boolean"` + + // The version of Elasticsearch that you intend to upgrade the domain to. + TargetVersion *string `type:"string"` +} + +// String returns the string representation +func (s UpgradeElasticsearchDomainOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpgradeElasticsearchDomainOutput) GoString() string { + return s.String() +} + +// SetDomainName sets the DomainName field's value. +func (s *UpgradeElasticsearchDomainOutput) SetDomainName(v string) *UpgradeElasticsearchDomainOutput { + s.DomainName = &v + return s +} + +// SetPerformCheckOnly sets the PerformCheckOnly field's value. +func (s *UpgradeElasticsearchDomainOutput) SetPerformCheckOnly(v bool) *UpgradeElasticsearchDomainOutput { + s.PerformCheckOnly = &v + return s +} + +// SetTargetVersion sets the TargetVersion field's value. +func (s *UpgradeElasticsearchDomainOutput) SetTargetVersion(v string) *UpgradeElasticsearchDomainOutput { + s.TargetVersion = &v + return s +} + +// History of the last 10 Upgrades and Upgrade Eligibility Checks. +type UpgradeHistory struct { + _ struct{} `type:"structure"` + + // UTC Timestamp at which the Upgrade API call was made in "yyyy-MM-ddTHH:mm:ssZ" + // format. + StartTimestamp *time.Time `type:"timestamp"` + + // A list of UpgradeStepItem s representing information about each step performed + // as pard of a specific Upgrade or Upgrade Eligibility Check. + StepsList []*UpgradeStepItem `type:"list"` + + // A string that describes the update briefly + UpgradeName *string `type:"string"` + + // The overall status of the update. The status can take one of the following + // values: In Progress + // Succeeded + // Succeeded with Issues + // Failed + UpgradeStatus *string `type:"string" enum:"UpgradeStatus"` +} + +// String returns the string representation +func (s UpgradeHistory) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpgradeHistory) GoString() string { + return s.String() +} + +// SetStartTimestamp sets the StartTimestamp field's value. +func (s *UpgradeHistory) SetStartTimestamp(v time.Time) *UpgradeHistory { + s.StartTimestamp = &v + return s +} + +// SetStepsList sets the StepsList field's value. +func (s *UpgradeHistory) SetStepsList(v []*UpgradeStepItem) *UpgradeHistory { + s.StepsList = v + return s +} + +// SetUpgradeName sets the UpgradeName field's value. 
+func (s *UpgradeHistory) SetUpgradeName(v string) *UpgradeHistory { + s.UpgradeName = &v + return s +} + +// SetUpgradeStatus sets the UpgradeStatus field's value. +func (s *UpgradeHistory) SetUpgradeStatus(v string) *UpgradeHistory { + s.UpgradeStatus = &v + return s +} + +// Represents a single step of the Upgrade or Upgrade Eligibility Check workflow. +type UpgradeStepItem struct { + _ struct{} `type:"structure"` + + // A list of strings containing detailed information about the errors encountered + // in a particular step. + Issues []*string `type:"list"` + + // The Floating point value representing progress percentage of a particular + // step. + ProgressPercent *float64 `type:"double"` + + // Represents one of 3 steps that an Upgrade or Upgrade Eligibility Check does + // through: PreUpgradeCheck + // Snapshot + // Upgrade + UpgradeStep *string `type:"string" enum:"UpgradeStep"` + + // The status of a particular step during an upgrade. The status can take one + // of the following values: In Progress + // Succeeded + // Succeeded with Issues + // Failed + UpgradeStepStatus *string `type:"string" enum:"UpgradeStatus"` +} + +// String returns the string representation +func (s UpgradeStepItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpgradeStepItem) GoString() string { + return s.String() +} + +// SetIssues sets the Issues field's value. +func (s *UpgradeStepItem) SetIssues(v []*string) *UpgradeStepItem { + s.Issues = v + return s +} + +// SetProgressPercent sets the ProgressPercent field's value. +func (s *UpgradeStepItem) SetProgressPercent(v float64) *UpgradeStepItem { + s.ProgressPercent = &v + return s +} + +// SetUpgradeStep sets the UpgradeStep field's value. +func (s *UpgradeStepItem) SetUpgradeStep(v string) *UpgradeStepItem { + s.UpgradeStep = &v + return s +} + +// SetUpgradeStepStatus sets the UpgradeStepStatus field's value. +func (s *UpgradeStepItem) SetUpgradeStepStatus(v string) *UpgradeStepItem { + s.UpgradeStepStatus = &v + return s +} + // Options to specify the subnets and security groups for VPC endpoint. For // more information, see VPC Endpoints for Amazon Elasticsearch Service Domains // (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html). @@ -3899,6 +6442,23 @@ func (s *VPCOptions) SetSubnetIds(v []*string) *VPCOptions { return s } +const ( + // DeploymentStatusPendingUpdate is a DeploymentStatus enum value + DeploymentStatusPendingUpdate = "PENDING_UPDATE" + + // DeploymentStatusInProgress is a DeploymentStatus enum value + DeploymentStatusInProgress = "IN_PROGRESS" + + // DeploymentStatusCompleted is a DeploymentStatus enum value + DeploymentStatusCompleted = "COMPLETED" + + // DeploymentStatusNotEligible is a DeploymentStatus enum value + DeploymentStatusNotEligible = "NOT_ELIGIBLE" + + // DeploymentStatusEligible is a DeploymentStatus enum value + DeploymentStatusEligible = "ELIGIBLE" +) + const ( // ESPartitionInstanceTypeM3MediumElasticsearch is a ESPartitionInstanceType enum value ESPartitionInstanceTypeM3MediumElasticsearch = "m3.medium.elasticsearch" @@ -4022,16 +6582,22 @@ const ( ) // Type of Log File, it can be one of the following: INDEX_SLOW_LOGS: Index -// slow logs contains insert requests that took more time than configured index +// slow logs contain insert requests that took more time than configured index // query log threshold to execute. 
-// SEARCH_SLOW_LOGS: Search slow logs contains search queries that took more +// SEARCH_SLOW_LOGS: Search slow logs contain search queries that took more // time than configured search query log threshold to execute. +// ES_APPLICATION_LOGS: Elasticsearch application logs contain information about +// errors and warnings raised during the operation of the service and can be +// useful for troubleshooting. const ( // LogTypeIndexSlowLogs is a LogType enum value LogTypeIndexSlowLogs = "INDEX_SLOW_LOGS" // LogTypeSearchSlowLogs is a LogType enum value LogTypeSearchSlowLogs = "SEARCH_SLOW_LOGS" + + // LogTypeEsApplicationLogs is a LogType enum value + LogTypeEsApplicationLogs = "ES_APPLICATION_LOGS" ) // The state of a requested change. One of the following: @@ -4050,6 +6616,42 @@ const ( OptionStateActive = "Active" ) +const ( + // ReservedElasticsearchInstancePaymentOptionAllUpfront is a ReservedElasticsearchInstancePaymentOption enum value + ReservedElasticsearchInstancePaymentOptionAllUpfront = "ALL_UPFRONT" + + // ReservedElasticsearchInstancePaymentOptionPartialUpfront is a ReservedElasticsearchInstancePaymentOption enum value + ReservedElasticsearchInstancePaymentOptionPartialUpfront = "PARTIAL_UPFRONT" + + // ReservedElasticsearchInstancePaymentOptionNoUpfront is a ReservedElasticsearchInstancePaymentOption enum value + ReservedElasticsearchInstancePaymentOptionNoUpfront = "NO_UPFRONT" +) + +const ( + // UpgradeStatusInProgress is a UpgradeStatus enum value + UpgradeStatusInProgress = "IN_PROGRESS" + + // UpgradeStatusSucceeded is a UpgradeStatus enum value + UpgradeStatusSucceeded = "SUCCEEDED" + + // UpgradeStatusSucceededWithIssues is a UpgradeStatus enum value + UpgradeStatusSucceededWithIssues = "SUCCEEDED_WITH_ISSUES" + + // UpgradeStatusFailed is a UpgradeStatus enum value + UpgradeStatusFailed = "FAILED" +) + +const ( + // UpgradeStepPreUpgradeCheck is a UpgradeStep enum value + UpgradeStepPreUpgradeCheck = "PRE_UPGRADE_CHECK" + + // UpgradeStepSnapshot is a UpgradeStep enum value + UpgradeStepSnapshot = "SNAPSHOT" + + // UpgradeStepUpgrade is a UpgradeStep enum value + UpgradeStepUpgrade = "UPGRADE" +) + // The type of EBS volume, standard, gp2, or io1. See Configuring EBS-based // Storage (http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-ebs)for // more information. diff --git a/vendor/github.com/aws/aws-sdk-go/service/elasticsearchservice/service.go b/vendor/github.com/aws/aws-sdk-go/service/elasticsearchservice/service.go index c4388e79b02..d2f8f382733 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elasticsearchservice/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elasticsearchservice/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "es" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "es" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Elasticsearch Service" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ElasticsearchService client with a session. 
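For reviewers who want to sanity-check the new Elasticsearch Service shapes added above (`ServiceSoftwareOptions`, `StartElasticsearchServiceSoftwareUpdate*`, `UpgradeElasticsearchDomain*`, and the related enums), here is a minimal usage sketch. It assumes the corresponding client methods generated elsewhere in this vendored SDK update; the domain name and target version are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

func main() {
	// Shared session; credentials and region come from the environment.
	svc := elasticsearchservice.New(session.Must(session.NewSession()))

	// Kick off a service software update for a domain (placeholder name).
	updateOut, err := svc.StartElasticsearchServiceSoftwareUpdate(&elasticsearchservice.StartElasticsearchServiceSoftwareUpdateInput{
		DomainName: aws.String("my-domain"),
	})
	if err != nil {
		log.Fatal(err)
	}
	// ServiceSoftwareOptions carries UpdateStatus, one of the DeploymentStatus enum values.
	fmt.Println(updateOut)

	// Run an upgrade eligibility check only; PerformCheckOnly avoids starting the upgrade itself.
	checkOut, err := svc.UpgradeElasticsearchDomain(&elasticsearchservice.UpgradeElasticsearchDomainInput{
		DomainName:       aws.String("my-domain"),
		TargetVersion:    aws.String("6.3"),
		PerformCheckOnly: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(checkOut)
}
```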
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/elastictranscoder/api.go b/vendor/github.com/aws/aws-sdk-go/service/elastictranscoder/api.go index 96ca242ffb4..05c07b9f6df 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elastictranscoder/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elastictranscoder/api.go @@ -14,8 +14,8 @@ const opCancelJob = "CancelJob" // CancelJobRequest generates a "aws/request.Request" representing the // client's request for the CancelJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -112,8 +112,8 @@ const opCreateJob = "CreateJob" // CreateJobRequest generates a "aws/request.Request" representing the // client's request for the CreateJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -212,8 +212,8 @@ const opCreatePipeline = "CreatePipeline" // CreatePipelineRequest generates a "aws/request.Request" representing the // client's request for the CreatePipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -306,8 +306,8 @@ const opCreatePreset = "CreatePreset" // CreatePresetRequest generates a "aws/request.Request" representing the // client's request for the CreatePreset operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -409,8 +409,8 @@ const opDeletePipeline = "DeletePipeline" // DeletePipelineRequest generates a "aws/request.Request" representing the // client's request for the DeletePipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
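The comment fixes above all touch the generated `*Request` methods, which share one usage pattern: build the request, call `Send`, and only then read the output. A short sketch of that pattern against Elastic Transcoder follows, using `ListPipelines` because its input has no required fields; the pipelines returned are simply whatever exists in the account.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elastictranscoder"
)

func main() {
	svc := elastictranscoder.New(session.Must(session.NewSession()))

	// The "output" value is only valid after Send returns without error.
	req, out := svc.ListPipelinesRequest(&elastictranscoder.ListPipelinesInput{})
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Pipelines)
}
```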
@@ -507,8 +507,8 @@ const opDeletePreset = "DeletePreset" // DeletePresetRequest generates a "aws/request.Request" representing the // client's request for the DeletePreset operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -599,8 +599,8 @@ const opListJobsByPipeline = "ListJobsByPipeline" // ListJobsByPipelineRequest generates a "aws/request.Request" representing the // client's request for the ListJobsByPipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -749,8 +749,8 @@ const opListJobsByStatus = "ListJobsByStatus" // ListJobsByStatusRequest generates a "aws/request.Request" representing the // client's request for the ListJobsByStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -897,8 +897,8 @@ const opListPipelines = "ListPipelines" // ListPipelinesRequest generates a "aws/request.Request" representing the // client's request for the ListPipelines operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1039,8 +1039,8 @@ const opListPresets = "ListPresets" // ListPresetsRequest generates a "aws/request.Request" representing the // client's request for the ListPresets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1181,8 +1181,8 @@ const opReadJob = "ReadJob" // ReadJobRequest generates a "aws/request.Request" representing the // client's request for the ReadJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
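Each of these operations also has a generated `...WithContext` variant that accepts an `aws.Context` for cancellation and deadlines. A hedged sketch using `ReadJobWithContext` is below; the job ID is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elastictranscoder"
)

func main() {
	svc := elastictranscoder.New(session.Must(session.NewSession()))

	// Bound the API call with a deadline; a context.Context satisfies aws.Context.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Placeholder job ID.
	out, err := svc.ReadJobWithContext(ctx, &elastictranscoder.ReadJobInput{
		Id: aws.String("1111111111111-abcde1"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Job)
}
```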
@@ -1271,8 +1271,8 @@ const opReadPipeline = "ReadPipeline" // ReadPipelineRequest generates a "aws/request.Request" representing the // client's request for the ReadPipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1361,8 +1361,8 @@ const opReadPreset = "ReadPreset" // ReadPresetRequest generates a "aws/request.Request" representing the // client's request for the ReadPreset operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1451,8 +1451,8 @@ const opTestRole = "TestRole" // TestRoleRequest generates a "aws/request.Request" representing the // client's request for the TestRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1471,6 +1471,8 @@ const opTestRole = "TestRole" // if err == nil { // resp is now filled // fmt.Println(resp) // } +// +// Deprecated: TestRole has been deprecated func (c *ElasticTranscoder) TestRoleRequest(input *TestRoleInput) (req *request.Request, output *TestRoleOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, TestRole, has been deprecated") @@ -1525,6 +1527,8 @@ func (c *ElasticTranscoder) TestRoleRequest(input *TestRoleInput) (req *request. // Elastic Transcoder encountered an unexpected exception while trying to fulfill // the request. // +// +// Deprecated: TestRole has been deprecated func (c *ElasticTranscoder) TestRole(input *TestRoleInput) (*TestRoleOutput, error) { req, out := c.TestRoleRequest(input) return out, req.Send() @@ -1539,6 +1543,8 @@ func (c *ElasticTranscoder) TestRole(input *TestRoleInput) (*TestRoleOutput, err // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: TestRoleWithContext has been deprecated func (c *ElasticTranscoder) TestRoleWithContext(ctx aws.Context, input *TestRoleInput, opts ...request.Option) (*TestRoleOutput, error) { req, out := c.TestRoleRequest(input) req.SetContext(ctx) @@ -1550,8 +1556,8 @@ const opUpdatePipeline = "UpdatePipeline" // UpdatePipelineRequest generates a "aws/request.Request" representing the // client's request for the UpdatePipeline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1649,8 +1655,8 @@ const opUpdatePipelineNotifications = "UpdatePipelineNotifications" // UpdatePipelineNotificationsRequest generates a "aws/request.Request" representing the // client's request for the UpdatePipelineNotifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1747,8 +1753,8 @@ const opUpdatePipelineStatus = "UpdatePipelineStatus" // UpdatePipelineStatusRequest generates a "aws/request.Request" representing the // client's request for the UpdatePipelineStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2539,6 +2545,8 @@ type Captions struct { // Source files for the input sidecar captions used during the transcoding process. // To omit all sidecar captions, leave CaptionSources blank. + // + // Deprecated: CaptionSources has been deprecated CaptionSources []*CaptionSource `deprecated:"true" type:"list"` // A policy that determines how Elastic Transcoder handles the existence of @@ -2560,6 +2568,8 @@ type Captions struct { // you specify in CaptionSources. // // MergePolicy cannot be null. + // + // Deprecated: MergePolicy has been deprecated MergePolicy *string `deprecated:"true" type:"string"` } @@ -2613,6 +2623,8 @@ func (s *Captions) SetMergePolicy(v string) *Captions { // Settings for one clip in a composition. All jobs in a playlist must have // the same clip settings. +// +// Deprecated: Clip has been deprecated type Clip struct { _ struct{} `deprecated:"true" type:"structure"` @@ -2858,6 +2870,8 @@ type CreateJobOutput struct { // of the file. The Composition object contains settings for the clips that // make up an output file. For the current release, you can only specify settings // for a single clip per output file. The Composition object cannot be null. + // + // Deprecated: Composition has been deprecated Composition []*Clip `deprecated:"true" type:"list"` // You can specify encryption settings for any output files that you want to @@ -3211,11 +3225,11 @@ type CreatePipelineInput struct { // The AWS Key Management Service (AWS KMS) key that you want to use with this // pipeline. // - // If you use either S3 or S3-AWS-KMS as your Encryption:Mode, you don't need + // If you use either s3 or s3-aws-kms as your Encryption:Mode, you don't need // to provide a key with your job because a default key, known as an AWS-KMS // key, is created for you automatically. You need to provide an AWS-KMS key // only if you want to use a non-default AWS-KMS key, or if you are using an - // Encryption:Mode of AES-PKCS7, AES-CTR, or AES-GCM. + // Encryption:Mode of aes-cbc-pkcs7, aes-ctr, or aes-gcm. 
AwsKmsKeyArn *string `type:"string"` // The optional ContentConfig object specifies information about the Amazon @@ -3311,7 +3325,7 @@ type CreatePipelineInput struct { // SNS returned when you created the topic. For more information, see Create // a Topic in the Amazon Simple Notification Service Developer Guide. // - // * Completed: The topic ARN for the Amazon SNS topic that you want to notify + // * Complete: The topic ARN for the Amazon SNS topic that you want to notify // when Elastic Transcoder has finished processing a job in this pipeline. // This is the ARN that Amazon SNS returned when you created the topic. // @@ -3893,20 +3907,20 @@ type Encryption struct { // to use when decrypting your input files or encrypting your output files. // Elastic Transcoder supports the following options: // - // * S3: Amazon S3 creates and manages the keys used for encrypting your + // * s3: Amazon S3 creates and manages the keys used for encrypting your // files. // - // * S3-AWS-KMS: Amazon S3 calls the Amazon Key Management Service, which + // * s3-aws-kms: Amazon S3 calls the Amazon Key Management Service, which // creates and manages the keys that are used for encrypting your files. - // If you specify S3-AWS-KMS and you don't want to use the default key, you + // If you specify s3-aws-kms and you don't want to use the default key, you // must add the AWS-KMS key that you want to use to your pipeline. // - // * AES-CBC-PKCS7: A padded cipher-block mode of operation originally used + // * aes-cbc-pkcs7: A padded cipher-block mode of operation originally used // for HLS files. // - // * AES-CTR: AES Counter Mode. + // * aes-ctr: AES Counter Mode. // - // * AES-GCM: AES Galois Counter Mode, a mode of operation that is an authenticated + // * aes-gcm: AES Galois Counter Mode, a mode of operation that is an authenticated // encryption format, meaning that a file, key, or initialization vector // that has been tampered with fails the decryption process. // @@ -4631,6 +4645,8 @@ type JobOutput struct { // of the file. The Composition object contains settings for the clips that // make up an output file. For the current release, you can only specify settings // for a single clip per output file. The Composition object cannot be null. + // + // Deprecated: Composition has been deprecated Composition []*Clip `deprecated:"true" type:"list"` // Duration of the output file, in seconds. @@ -5471,11 +5487,11 @@ type Pipeline struct { // The AWS Key Management Service (AWS KMS) key that you want to use with this // pipeline. // - // If you use either S3 or S3-AWS-KMS as your Encryption:Mode, you don't need + // If you use either s3 or s3-aws-kms as your Encryption:Mode, you don't need // to provide a key with your job because a default key, known as an AWS-KMS // key, is created for you automatically. You need to provide an AWS-KMS key // only if you want to use a non-default AWS-KMS key, or if you are using an - // Encryption:Mode of AES-PKCS7, AES-CTR, or AES-GCM. + // Encryption:Mode of aes-cbc-pkcs7, aes-ctr, or aes-gcm. AwsKmsKeyArn *string `type:"string"` // Information about the Amazon S3 bucket in which you want Elastic Transcoder @@ -5547,7 +5563,7 @@ type Pipeline struct { // SNS) topic that you want to notify when Elastic Transcoder has started // to process the job. // - // * Completed (optional): The Amazon SNS topic that you want to notify when + // * Complete (optional): The Amazon SNS topic that you want to notify when // Elastic Transcoder has finished processing the job. 
// // * Warning (optional): The Amazon SNS topic that you want to notify when @@ -5793,7 +5809,7 @@ func (s *PipelineOutputConfig) SetStorageClass(v string) *PipelineOutputConfig { // The PlayReady DRM settings, if any, that you want Elastic Transcoder to apply // to the output files associated with this playlist. // -// PlayReady DRM encrypts your media files using AES-CTR encryption. +// PlayReady DRM encrypts your media files using aes-ctr encryption. // // If you use DRM for an HLSv3 playlist, your outputs must have a master playlist. type PlayReadyDrm struct { @@ -6589,6 +6605,8 @@ func (s *ReadPresetOutput) SetPreset(v *Preset) *ReadPresetOutput { } // The TestRoleRequest structure. +// +// Deprecated: TestRoleInput has been deprecated type TestRoleInput struct { _ struct{} `deprecated:"true" type:"structure"` @@ -6674,6 +6692,8 @@ func (s *TestRoleInput) SetTopics(v []*string) *TestRoleInput { } // The TestRoleResponse structure. +// +// Deprecated: TestRoleOutput has been deprecated type TestRoleOutput struct { _ struct{} `deprecated:"true" type:"structure"` @@ -6942,11 +6962,11 @@ type UpdatePipelineInput struct { // The AWS Key Management Service (AWS KMS) key that you want to use with this // pipeline. // - // If you use either S3 or S3-AWS-KMS as your Encryption:Mode, you don't need + // If you use either s3 or s3-aws-kms as your Encryption:Mode, you don't need // to provide a key with your job because a default key, known as an AWS-KMS // key, is created for you automatically. You need to provide an AWS-KMS key // only if you want to use a non-default AWS-KMS key, or if you are using an - // Encryption:Mode of AES-PKCS7, AES-CTR, or AES-GCM. + // Encryption:Mode of aes-cbc-pkcs7, aes-ctr, or aes-gcm. AwsKmsKeyArn *string `type:"string"` // The optional ContentConfig object specifies information about the Amazon @@ -7042,7 +7062,7 @@ type UpdatePipelineInput struct { // started to process jobs that are added to this pipeline. This is the ARN // that Amazon SNS returned when you created the topic. // - // * Completed: The topic ARN for the Amazon SNS topic that you want to notify + // * Complete: The topic ARN for the Amazon SNS topic that you want to notify // when Elastic Transcoder has finished processing a job. This is the ARN // that Amazon SNS returned when you created the topic. // @@ -7225,7 +7245,7 @@ type UpdatePipelineNotificationsInput struct { // started to process jobs that are added to this pipeline. This is the ARN // that Amazon SNS returned when you created the topic. // - // * Completed: The topic ARN for the Amazon SNS topic that you want to notify + // * Complete: The topic ARN for the Amazon SNS topic that you want to notify // when Elastic Transcoder has finished processing a job. This is the ARN // that Amazon SNS returned when you created the topic. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/elastictranscoder/service.go b/vendor/github.com/aws/aws-sdk-go/service/elastictranscoder/service.go index 29d15e2e7f4..30acb8d1bc0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elastictranscoder/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elastictranscoder/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "elastictranscoder" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "elastictranscoder" // Name of service. 
+ EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Elastic Transcoder" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ElasticTranscoder client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/elb/api.go b/vendor/github.com/aws/aws-sdk-go/service/elb/api.go index 66658a1a68d..ea0bc5937be 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elb/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elb/api.go @@ -15,8 +15,8 @@ const opAddTags = "AddTags" // AddTagsRequest generates a "aws/request.Request" representing the // client's request for the AddTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -62,7 +62,7 @@ func (c *ELB) AddTagsRequest(input *AddTagsInput) (req *request.Request, output // key is already associated with the load balancer, AddTags updates its value. // // For more information, see Tag Your Classic Load Balancer (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/add-remove-tags.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -108,8 +108,8 @@ const opApplySecurityGroupsToLoadBalancer = "ApplySecurityGroupsToLoadBalancer" // ApplySecurityGroupsToLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the ApplySecurityGroupsToLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -153,7 +153,7 @@ func (c *ELB) ApplySecurityGroupsToLoadBalancerRequest(input *ApplySecurityGroup // associated security groups. // // For more information, see Security Groups for Load Balancers in a VPC (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-groups.html#elb-vpc-security-groups) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -198,8 +198,8 @@ const opAttachLoadBalancerToSubnets = "AttachLoadBalancerToSubnets" // AttachLoadBalancerToSubnetsRequest generates a "aws/request.Request" representing the // client's request for the AttachLoadBalancerToSubnets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -244,7 +244,7 @@ func (c *ELB) AttachLoadBalancerToSubnetsRequest(input *AttachLoadBalancerToSubn // The load balancer evenly distributes requests across all registered subnets. // For more information, see Add or Remove Subnets for Your Load Balancer in // a VPC (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-manage-subnets.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -292,8 +292,8 @@ const opConfigureHealthCheck = "ConfigureHealthCheck" // ConfigureHealthCheckRequest generates a "aws/request.Request" representing the // client's request for the ConfigureHealthCheck operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -337,7 +337,7 @@ func (c *ELB) ConfigureHealthCheckRequest(input *ConfigureHealthCheckInput) (req // // For more information, see Configure Health Checks for Your Load Balancer // (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -376,8 +376,8 @@ const opCreateAppCookieStickinessPolicy = "CreateAppCookieStickinessPolicy" // CreateAppCookieStickinessPolicyRequest generates a "aws/request.Request" representing the // client's request for the CreateAppCookieStickinessPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -430,7 +430,7 @@ func (c *ELB) CreateAppCookieStickinessPolicyRequest(input *CreateAppCookieStick // being sticky until a new application cookie is issued. // // For more information, see Application-Controlled Session Stickiness (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html#enable-sticky-sessions-application) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -478,8 +478,8 @@ const opCreateLBCookieStickinessPolicy = "CreateLBCookieStickinessPolicy" // CreateLBCookieStickinessPolicyRequest generates a "aws/request.Request" representing the // client's request for the CreateLBCookieStickinessPolicy operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -534,7 +534,7 @@ func (c *ELB) CreateLBCookieStickinessPolicyRequest(input *CreateLBCookieStickin // cookie expiration time, which is specified in the policy configuration. // // For more information, see Duration-Based Session Stickiness (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html#enable-sticky-sessions-duration) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -582,8 +582,8 @@ const opCreateLoadBalancer = "CreateLoadBalancer" // CreateLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the CreateLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -634,7 +634,7 @@ func (c *ELB) CreateLoadBalancerRequest(input *CreateLoadBalancerInput) (req *re // You can create up to 20 load balancers per region per account. You can request // an increase for the number of load balancers for your account. For more information, // see Limits for Your Classic Load Balancer (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-limits.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -711,8 +711,8 @@ const opCreateLoadBalancerListeners = "CreateLoadBalancerListeners" // CreateLoadBalancerListenersRequest generates a "aws/request.Request" representing the // client's request for the CreateLoadBalancerListeners operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -757,7 +757,7 @@ func (c *ELB) CreateLoadBalancerListenersRequest(input *CreateLoadBalancerListen // listener. // // For more information, see Listeners for Your Classic Load Balancer (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -812,8 +812,8 @@ const opCreateLoadBalancerPolicy = "CreateLoadBalancerPolicy" // CreateLoadBalancerPolicyRequest generates a "aws/request.Request" representing the // client's request for the CreateLoadBalancerPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -907,8 +907,8 @@ const opDeleteLoadBalancer = "DeleteLoadBalancer" // DeleteLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the DeleteLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -990,8 +990,8 @@ const opDeleteLoadBalancerListeners = "DeleteLoadBalancerListeners" // DeleteLoadBalancerListenersRequest generates a "aws/request.Request" representing the // client's request for the DeleteLoadBalancerListeners operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1069,8 +1069,8 @@ const opDeleteLoadBalancerPolicy = "DeleteLoadBalancerPolicy" // DeleteLoadBalancerPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteLoadBalancerPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1152,8 +1152,8 @@ const opDeregisterInstancesFromLoadBalancer = "DeregisterInstancesFromLoadBalanc // DeregisterInstancesFromLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the DeregisterInstancesFromLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1200,7 +1200,7 @@ func (c *ELB) DeregisterInstancesFromLoadBalancerRequest(input *DeregisterInstan // from the load balancer. 
// // For more information, see Register or De-Register EC2 Instances (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-deregister-register-instances.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1242,8 +1242,8 @@ const opDescribeAccountLimits = "DescribeAccountLimits" // DescribeAccountLimitsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAccountLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1286,7 +1286,7 @@ func (c *ELB) DescribeAccountLimitsRequest(input *DescribeAccountLimitsInput) (r // account. // // For more information, see Limits for Your Classic Load Balancer (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-limits.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1320,8 +1320,8 @@ const opDescribeInstanceHealth = "DescribeInstanceHealth" // DescribeInstanceHealthRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstanceHealth operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1407,8 +1407,8 @@ const opDescribeLoadBalancerAttributes = "DescribeLoadBalancerAttributes" // DescribeLoadBalancerAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBalancerAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1489,8 +1489,8 @@ const opDescribeLoadBalancerPolicies = "DescribeLoadBalancerPolicies" // DescribeLoadBalancerPoliciesRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBalancerPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
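The Classic Load Balancer doc updates above, and the `DependencyThrottle` description added just below for `DescribeLoadBalancers`, apply to ordinary `elb` client calls. A minimal sketch of listing load balancers and inspecting SDK errors follows; handling beyond the throttle code is left generic.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	svc := elb.New(session.Must(session.NewSession()))

	out, err := svc.DescribeLoadBalancers(&elb.DescribeLoadBalancersInput{})
	if err != nil {
		// DependencyThrottle is returned when ELB's own calls to another service are rate limited.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == elb.ErrCodeDependencyThrottleException {
			log.Fatalf("throttled by a dependency: %v", aerr)
		}
		log.Fatal(err)
	}
	for _, lb := range out.LoadBalancerDescriptions {
		fmt.Println(aws.StringValue(lb.LoadBalancerName))
	}
}
```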
@@ -1578,8 +1578,8 @@ const opDescribeLoadBalancerPolicyTypes = "DescribeLoadBalancerPolicyTypes" // DescribeLoadBalancerPolicyTypesRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBalancerPolicyTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1668,8 +1668,8 @@ const opDescribeLoadBalancers = "DescribeLoadBalancers" // DescribeLoadBalancersRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBalancers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1729,6 +1729,8 @@ func (c *ELB) DescribeLoadBalancersRequest(input *DescribeLoadBalancersInput) (r // The specified load balancer does not exist. // // * ErrCodeDependencyThrottleException "DependencyThrottle" +// A request made by Elastic Load Balancing to another service exceeds the maximum +// request rate permitted for your account. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticloadbalancing-2012-06-01/DescribeLoadBalancers func (c *ELB) DescribeLoadBalancers(input *DescribeLoadBalancersInput) (*DescribeLoadBalancersOutput, error) { @@ -1806,8 +1808,8 @@ const opDescribeTags = "DescribeTags" // DescribeTagsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1885,8 +1887,8 @@ const opDetachLoadBalancerFromSubnets = "DetachLoadBalancerFromSubnets" // DetachLoadBalancerFromSubnetsRequest generates a "aws/request.Request" representing the // client's request for the DetachLoadBalancerFromSubnets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1972,8 +1974,8 @@ const opDisableAvailabilityZonesForLoadBalancer = "DisableAvailabilityZonesForLo // DisableAvailabilityZonesForLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the DisableAvailabilityZonesForLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2013,7 +2015,9 @@ func (c *ELB) DisableAvailabilityZonesForLoadBalancerRequest(input *DisableAvail // DisableAvailabilityZonesForLoadBalancer API operation for Elastic Load Balancing. // // Removes the specified Availability Zones from the set of Availability Zones -// for the specified load balancer. +// for the specified load balancer in EC2-Classic or a default VPC. +// +// For load balancers in a non-default VPC, use DetachLoadBalancerFromSubnets. // // There must be at least one Availability Zone registered with a load balancer // at all times. After an Availability Zone is removed, all instances registered @@ -2022,7 +2026,7 @@ func (c *ELB) DisableAvailabilityZonesForLoadBalancerRequest(input *DisableAvail // the traffic among its remaining Availability Zones. // // For more information, see Add or Remove Availability Zones (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-az.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2064,8 +2068,8 @@ const opEnableAvailabilityZonesForLoadBalancer = "EnableAvailabilityZonesForLoad // EnableAvailabilityZonesForLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the EnableAvailabilityZonesForLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2105,13 +2109,14 @@ func (c *ELB) EnableAvailabilityZonesForLoadBalancerRequest(input *EnableAvailab // EnableAvailabilityZonesForLoadBalancer API operation for Elastic Load Balancing. // // Adds the specified Availability Zones to the set of Availability Zones for -// the specified load balancer. +// the specified load balancer in EC2-Classic or a default VPC. // -// The load balancer evenly distributes requests across all its registered Availability -// Zones that contain instances. +// For load balancers in a non-default VPC, use AttachLoadBalancerToSubnets. // -// For more information, see Add or Remove Availability Zones (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-az.html) -// in the Classic Load Balancer Guide. +// The load balancer evenly distributes requests across all its registered Availability +// Zones that contain instances. For more information, see Add or Remove Availability +// Zones (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-az.html) +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2150,8 +2155,8 @@ const opModifyLoadBalancerAttributes = "ModifyLoadBalancerAttributes" // ModifyLoadBalancerAttributesRequest generates a "aws/request.Request" representing the // client's request for the ModifyLoadBalancerAttributes operation. 
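The reworded comments above distinguish load balancers in EC2-Classic or a default VPC (use `EnableAvailabilityZonesForLoadBalancer` / `DisableAvailabilityZonesForLoadBalancer`) from those in a non-default VPC (use `AttachLoadBalancerToSubnets` / `DetachLoadBalancerFromSubnets`). A sketch of the first case follows; the `LoadBalancerName` and `AvailabilityZones` field names are assumed from the SDK's usual generated shapes and are not shown in this hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	svc := elb.New(session.Must(session.NewSession()))

	// For a load balancer in EC2-Classic or a default VPC, add a zone directly.
	// (For a non-default VPC, AttachLoadBalancerToSubnets would be used instead.)
	out, err := svc.EnableAvailabilityZonesForLoadBalancer(&elb.EnableAvailabilityZonesForLoadBalancerInput{
		LoadBalancerName:  aws.String("my-classic-lb"),        // assumed field name
		AvailabilityZones: []*string{aws.String("us-west-2b")}, // assumed field name
	})
	if err != nil {
		log.Fatal(err)
	}
	// The AvailabilityZones output field is likewise assumed.
	fmt.Println("zones now enabled:", aws.StringValueSlice(out.AvailabilityZones))
}
```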
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2197,7 +2202,7 @@ func (c *ELB) ModifyLoadBalancerAttributesRequest(input *ModifyLoadBalancerAttri // can modify the load balancer attribute ConnectionSettings by specifying an // idle connection timeout value for your load balancer. // -// For more information, see the following in the Classic Load Balancer Guide: +// For more information, see the following in the Classic Load Balancers Guide: // // * Cross-Zone Load Balancing (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-lb.html) // @@ -2250,8 +2255,8 @@ const opRegisterInstancesWithLoadBalancer = "RegisterInstancesWithLoadBalancer" // RegisterInstancesWithLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the RegisterInstancesWithLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2312,7 +2317,7 @@ func (c *ELB) RegisterInstancesWithLoadBalancerRequest(input *RegisterInstancesW // To deregister instances from a load balancer, use DeregisterInstancesFromLoadBalancer. // // For more information, see Register or De-Register EC2 Instances (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-deregister-register-instances.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2354,8 +2359,8 @@ const opRemoveTags = "RemoveTags" // RemoveTagsRequest generates a "aws/request.Request" representing the // client's request for the RemoveTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2433,8 +2438,8 @@ const opSetLoadBalancerListenerSSLCertificate = "SetLoadBalancerListenerSSLCerti // SetLoadBalancerListenerSSLCertificateRequest generates a "aws/request.Request" representing the // client's request for the SetLoadBalancerListenerSSLCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
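The `RegisterInstancesWithLoadBalancer` comments touched above note that `DeregisterInstancesFromLoadBalancer` is its counterpart. A short sketch of registering an instance, under the assumption that the input carries `LoadBalancerName` and `Instances` fields with an `InstanceId` on each `Instance`; none of those names appear in the hunk itself.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	svc := elb.New(session.Must(session.NewSession()))

	// Register an EC2 instance with a Classic Load Balancer.
	// LoadBalancerName, Instances, and InstanceId are assumed field names.
	_, err := svc.RegisterInstancesWithLoadBalancer(&elb.RegisterInstancesWithLoadBalancerInput{
		LoadBalancerName: aws.String("my-classic-lb"),
		Instances:        []*elb.Instance{{InstanceId: aws.String("i-0123456789abcdef0")}},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("instance registered; use DeregisterInstancesFromLoadBalancer to remove it")
}
```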
@@ -2479,7 +2484,7 @@ func (c *ELB) SetLoadBalancerListenerSSLCertificateRequest(input *SetLoadBalance // // For more information about updating your SSL certificate, see Replace the // SSL Certificate for Your Load Balancer (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-update-ssl-cert.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2533,8 +2538,8 @@ const opSetLoadBalancerPoliciesForBackendServer = "SetLoadBalancerPoliciesForBac // SetLoadBalancerPoliciesForBackendServerRequest generates a "aws/request.Request" representing the // client's request for the SetLoadBalancerPoliciesForBackendServer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2586,9 +2591,9 @@ func (c *ELB) SetLoadBalancerPoliciesForBackendServerRequest(input *SetLoadBalan // // For more information about enabling back-end instance authentication, see // Configure Back-end Instance Authentication (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html#configure_backendauth_clt) -// in the Classic Load Balancer Guide. For more information about Proxy Protocol, +// in the Classic Load Balancers Guide. For more information about Proxy Protocol, // see Configure Proxy Protocol Support (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-proxy-protocol.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2633,8 +2638,8 @@ const opSetLoadBalancerPoliciesOfListener = "SetLoadBalancerPoliciesOfListener" // SetLoadBalancerPoliciesOfListenerRequest generates a "aws/request.Request" representing the // client's request for the SetLoadBalancerPoliciesOfListener operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2682,7 +2687,7 @@ func (c *ELB) SetLoadBalancerPoliciesOfListenerRequest(input *SetLoadBalancerPol // Configuration (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/ssl-config-update.html), // Duration-Based Session Stickiness (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html#enable-sticky-sessions-duration), // and Application-Controlled Session Stickiness (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html#enable-sticky-sessions-application) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3489,7 +3494,7 @@ type CreateLoadBalancerInput struct { // The listeners. // // For more information, see Listeners for Your Classic Load Balancer (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html) - // in the Classic Load Balancer Guide. + // in the Classic Load Balancers Guide. // // Listeners is a required field Listeners []*Listener `type:"list" required:"true"` @@ -3526,7 +3531,7 @@ type CreateLoadBalancerInput struct { // // For more information about tagging your load balancer, see Tag Your Classic // Load Balancer (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/add-remove-tags.html) - // in the Classic Load Balancer Guide. + // in the Classic Load Balancers Guide. Tags []*Tag `min:"1" type:"list"` } @@ -5115,6 +5120,8 @@ type Limit struct { // * classic-listeners // // * classic-load-balancers + // + // * classic-registered-instances Name *string `type:"string"` } @@ -5144,7 +5151,7 @@ func (s *Limit) SetName(v string) *Limit { // // For information about the protocols and the ports supported by Elastic Load // Balancing, see Listeners for Your Classic Load Balancer (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html) -// in the Classic Load Balancer Guide. +// in the Classic Load Balancers Guide. type Listener struct { _ struct{} `type:"structure"` @@ -5286,7 +5293,7 @@ type LoadBalancerAttributes struct { // and delivers the information to the Amazon S3 bucket that you specify. // // For more information, see Enable Access Logs (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html) - // in the Classic Load Balancer Guide. + // in the Classic Load Balancers Guide. AccessLog *AccessLog `type:"structure"` // This parameter is reserved. @@ -5296,7 +5303,7 @@ type LoadBalancerAttributes struct { // the load balancer shifts traffic away from a deregistered or unhealthy instance. // // For more information, see Configure Connection Draining (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-conn-drain.html) - // in the Classic Load Balancer Guide. + // in the Classic Load Balancers Guide. ConnectionDraining *ConnectionDraining `type:"structure"` // If enabled, the load balancer allows the connections to remain idle (no data @@ -5305,14 +5312,14 @@ type LoadBalancerAttributes struct { // By default, Elastic Load Balancing maintains a 60-second idle connection // timeout for both front-end and back-end connections of your load balancer. // For more information, see Configure Idle Connection Timeout (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html) - // in the Classic Load Balancer Guide. + // in the Classic Load Balancers Guide. ConnectionSettings *ConnectionSettings `type:"structure"` // If enabled, the load balancer routes the request traffic evenly across all // instances regardless of the Availability Zones. // // For more information, see Configure Cross-Zone Load Balancing (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-lb.html) - // in the Classic Load Balancer Guide. + // in the Classic Load Balancers Guide. CrossZoneLoadBalancing *CrossZoneLoadBalancing `type:"structure"` } @@ -5399,14 +5406,14 @@ type LoadBalancerDescription struct { // The DNS name of the load balancer. 
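The `LoadBalancerAttributes` struct whose comments are updated above groups access logs, connection draining, the idle-connection timeout, and cross-zone load balancing. A sketch of setting two of those attributes via `ModifyLoadBalancerAttributes`; the nested `IdleTimeout` and `Enabled` fields and the input's `LoadBalancerName`/`LoadBalancerAttributes` fields are assumed rather than shown in this patch.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	svc := elb.New(session.Must(session.NewSession()))

	attrs := &elb.LoadBalancerAttributes{
		// 120-second idle timeout instead of the 60-second default.
		ConnectionSettings: &elb.ConnectionSettings{IdleTimeout: aws.Int64(120)}, // IdleTimeout assumed
		// Route requests evenly across zones regardless of instance count.
		CrossZoneLoadBalancing: &elb.CrossZoneLoadBalancing{Enabled: aws.Bool(true)}, // Enabled assumed
	}

	_, err := svc.ModifyLoadBalancerAttributes(&elb.ModifyLoadBalancerAttributesInput{
		LoadBalancerName:       aws.String("my-classic-lb"), // assumed field name
		LoadBalancerAttributes: attrs,                       // assumed field name
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("attributes updated")
}
```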
// // For more information, see Configure a Custom Domain Name (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html) - // in the Classic Load Balancer Guide. + // in the Classic Load Balancers Guide. CanonicalHostedZoneName *string `type:"string"` // The ID of the Amazon Route 53 hosted zone for the load balancer. CanonicalHostedZoneNameID *string `type:"string"` // The date and time the load balancer was created. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreatedTime *time.Time `type:"timestamp"` // The DNS name of the load balancer. DNSName *string `type:"string"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/elb/errors.go b/vendor/github.com/aws/aws-sdk-go/service/elb/errors.go index fbf2140d87a..14f1e6717e4 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elb/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elb/errors.go @@ -21,6 +21,9 @@ const ( // ErrCodeDependencyThrottleException for service response error code // "DependencyThrottle". + // + // A request made by Elastic Load Balancing to another service exceeds the maximum + // request rate permitted for your account. ErrCodeDependencyThrottleException = "DependencyThrottle" // ErrCodeDuplicateAccessPointNameException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/elb/service.go b/vendor/github.com/aws/aws-sdk-go/service/elb/service.go index 057530f6cad..5dfdd322c9b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elb/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elb/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "elasticloadbalancing" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "elasticloadbalancing" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Elastic Load Balancing" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ELB client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/elbv2/api.go b/vendor/github.com/aws/aws-sdk-go/service/elbv2/api.go index f30c7cf516e..68b905d3fd0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elbv2/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elbv2/api.go @@ -15,8 +15,8 @@ const opAddListenerCertificates = "AddListenerCertificates" // AddListenerCertificatesRequest generates a "aws/request.Request" representing the // client's request for the AddListenerCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
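The `service.go` hunk above adds the `ServiceID` constant next to `ServiceName` and `EndpointsID` and threads it into the client metadata. A tiny sketch that prints the three constants and builds the client with `elb.New`; only the constants and the constructor come from the patch, the session setup is standard SDK usage.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	// Prints: elasticloadbalancing elasticloadbalancing Elastic Load Balancing
	fmt.Println(elb.ServiceName, elb.EndpointsID, elb.ServiceID)

	// New creates an ELB client from a session, as described in the generated docs.
	svc := elb.New(session.Must(session.NewSession()))
	_ = svc
}
```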
@@ -106,8 +106,8 @@ const opAddTags = "AddTags" // AddTagsRequest generates a "aws/request.Request" representing the // client's request for the AddTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -202,8 +202,8 @@ const opCreateListener = "CreateListener" // CreateListenerRequest generates a "aws/request.Request" representing the // client's request for the CreateListener operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -306,6 +306,12 @@ func (c *ELBV2) CreateListenerRequest(input *CreateListenerInput) (req *request. // * ErrCodeTooManyTargetsException "TooManyTargets" // You've reached the limit on the number of targets. // +// * ErrCodeTooManyActionsException "TooManyActions" +// You've reached the limit on the number of actions per rule. +// +// * ErrCodeInvalidLoadBalancerActionException "InvalidLoadBalancerAction" +// The requested action is not valid. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticloadbalancingv2-2015-12-01/CreateListener func (c *ELBV2) CreateListener(input *CreateListenerInput) (*CreateListenerOutput, error) { req, out := c.CreateListenerRequest(input) @@ -332,8 +338,8 @@ const opCreateLoadBalancer = "CreateLoadBalancer" // CreateLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the CreateLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -468,8 +474,8 @@ const opCreateRule = "CreateRule" // CreateRuleRequest generates a "aws/request.Request" representing the // client's request for the CreateRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -512,9 +518,9 @@ func (c *ELBV2) CreateRuleRequest(input *CreateRuleInput) (req *request.Request, // with an Application Load Balancer. // // Rules are evaluated in priority order, from the lowest value to the highest -// value. When the condition for a rule is met, the specified action is taken. -// If no conditions are met, the action for the default rule is taken. For more -// information, see Listener Rules (http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-rules) +// value. 
When the conditions for a rule are met, its actions are performed. +// If the conditions for no rules are met, the actions for the default rule +// are performed. For more information, see Listener Rules (http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-rules) // in the Application Load Balancers Guide. // // To view your current rules, use DescribeRules. To update a rule, use ModifyRule. @@ -560,6 +566,15 @@ func (c *ELBV2) CreateRuleRequest(input *CreateRuleInput) (req *request.Request, // * ErrCodeTooManyTargetsException "TooManyTargets" // You've reached the limit on the number of targets. // +// * ErrCodeUnsupportedProtocolException "UnsupportedProtocol" +// The specified protocol is not supported. +// +// * ErrCodeTooManyActionsException "TooManyActions" +// You've reached the limit on the number of actions per rule. +// +// * ErrCodeInvalidLoadBalancerActionException "InvalidLoadBalancerAction" +// The requested action is not valid. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticloadbalancingv2-2015-12-01/CreateRule func (c *ELBV2) CreateRule(input *CreateRuleInput) (*CreateRuleOutput, error) { req, out := c.CreateRuleRequest(input) @@ -586,8 +601,8 @@ const opCreateTargetGroup = "CreateTargetGroup" // CreateTargetGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateTargetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -690,8 +705,8 @@ const opDeleteListener = "DeleteListener" // DeleteListenerRequest generates a "aws/request.Request" representing the // client's request for the DeleteListener operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -733,7 +748,7 @@ func (c *ELBV2) DeleteListenerRequest(input *DeleteListenerInput) (req *request. // Deletes the specified listener. // // Alternatively, your listener is deleted when you delete the load balancer -// it is attached to using DeleteLoadBalancer. +// to which it is attached, using DeleteLoadBalancer. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -772,8 +787,8 @@ const opDeleteLoadBalancer = "DeleteLoadBalancer" // DeleteLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the DeleteLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
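The hunks above add the `TooManyActions`, `InvalidLoadBalancerAction`, and `UnsupportedProtocol` error codes to `CreateListener` and `CreateRule`. A sketch of mapping those codes to readable messages with the usual `awserr` assertion; the `classifyRuleError` helper is purely illustrative, while the constants are the ones named in the patch.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// classifyRuleError maps the newly documented error codes to short explanations,
// using the awserr type assertion the generated docs describe.
func classifyRuleError(err error) string {
	aerr, ok := err.(awserr.Error)
	if !ok {
		return err.Error()
	}
	switch aerr.Code() {
	case elbv2.ErrCodeTooManyActionsException:
		return "too many actions on the rule: " + aerr.Message()
	case elbv2.ErrCodeInvalidLoadBalancerActionException:
		return "the requested action is not valid: " + aerr.Message()
	case elbv2.ErrCodeUnsupportedProtocolException:
		return "the specified protocol is not supported: " + aerr.Message()
	default:
		return aerr.Code() + ": " + aerr.Message()
	}
}

func main() {
	// With a real *elbv2.ELBV2 client, the error returned by CreateRule,
	// CreateListener, or ModifyListener would be passed through here.
	err := awserr.New(elbv2.ErrCodeTooManyActionsException, "limit exceeded", nil)
	fmt.Println(classifyRuleError(err))
}
```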
@@ -866,8 +881,8 @@ const opDeleteRule = "DeleteRule" // DeleteRuleRequest generates a "aws/request.Request" representing the // client's request for the DeleteRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -948,8 +963,8 @@ const opDeleteTargetGroup = "DeleteTargetGroup" // DeleteTargetGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteTargetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1030,8 +1045,8 @@ const opDeregisterTargets = "DeregisterTargets" // DeregisterTargetsRequest generates a "aws/request.Request" representing the // client's request for the DeregisterTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1115,8 +1130,8 @@ const opDescribeAccountLimits = "DescribeAccountLimits" // DescribeAccountLimitsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAccountLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1195,8 +1210,8 @@ const opDescribeListenerCertificates = "DescribeListenerCertificates" // DescribeListenerCertificatesRequest generates a "aws/request.Request" representing the // client's request for the DescribeListenerCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1274,8 +1289,8 @@ const opDescribeListeners = "DescribeListeners" // DescribeListenersRequest generates a "aws/request.Request" representing the // client's request for the DescribeListeners operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1338,6 +1353,9 @@ func (c *ELBV2) DescribeListenersRequest(input *DescribeListenersInput) (req *re // * ErrCodeLoadBalancerNotFoundException "LoadBalancerNotFound" // The specified load balancer does not exist. // +// * ErrCodeUnsupportedProtocolException "UnsupportedProtocol" +// The specified protocol is not supported. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticloadbalancingv2-2015-12-01/DescribeListeners func (c *ELBV2) DescribeListeners(input *DescribeListenersInput) (*DescribeListenersOutput, error) { req, out := c.DescribeListenersRequest(input) @@ -1414,8 +1432,8 @@ const opDescribeLoadBalancerAttributes = "DescribeLoadBalancerAttributes" // DescribeLoadBalancerAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBalancerAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1457,6 +1475,10 @@ func (c *ELBV2) DescribeLoadBalancerAttributesRequest(input *DescribeLoadBalance // Describes the attributes for the specified Application Load Balancer or Network // Load Balancer. // +// For more information, see Load Balancer Attributes (http://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#load-balancer-attributes) +// in the Application Load Balancers Guide or Load Balancer Attributes (http://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#load-balancer-attributes) +// in the Network Load Balancers Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1494,8 +1516,8 @@ const opDescribeLoadBalancers = "DescribeLoadBalancers" // DescribeLoadBalancersRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBalancers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1632,8 +1654,8 @@ const opDescribeRules = "DescribeRules" // DescribeRulesRequest generates a "aws/request.Request" representing the // client's request for the DescribeRules operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1689,6 +1711,9 @@ func (c *ELBV2) DescribeRulesRequest(input *DescribeRulesInput) (req *request.Re // * ErrCodeRuleNotFoundException "RuleNotFound" // The specified rule does not exist. 
// +// * ErrCodeUnsupportedProtocolException "UnsupportedProtocol" +// The specified protocol is not supported. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticloadbalancingv2-2015-12-01/DescribeRules func (c *ELBV2) DescribeRules(input *DescribeRulesInput) (*DescribeRulesOutput, error) { req, out := c.DescribeRulesRequest(input) @@ -1715,8 +1740,8 @@ const opDescribeSSLPolicies = "DescribeSSLPolicies" // DescribeSSLPoliciesRequest generates a "aws/request.Request" representing the // client's request for the DescribeSSLPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1797,8 +1822,8 @@ const opDescribeTags = "DescribeTags" // DescribeTagsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1887,8 +1912,8 @@ const opDescribeTargetGroupAttributes = "DescribeTargetGroupAttributes" // DescribeTargetGroupAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeTargetGroupAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1929,6 +1954,10 @@ func (c *ELBV2) DescribeTargetGroupAttributesRequest(input *DescribeTargetGroupA // // Describes the attributes for the specified target group. // +// For more information, see Target Group Attributes (http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-group-attributes) +// in the Application Load Balancers Guide or Target Group Attributes (http://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#target-group-attributes) +// in the Network Load Balancers Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1966,8 +1995,8 @@ const opDescribeTargetGroups = "DescribeTargetGroups" // DescribeTargetGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTargetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
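The hunks above link `DescribeLoadBalancerAttributes` and `DescribeTargetGroupAttributes` to the attribute documentation for Application and Network Load Balancers. A sketch of the former, assuming the usual generated shapes: a `LoadBalancerArn` input field and an `Attributes` output slice of key/value pairs, none of which are shown in this excerpt.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func main() {
	svc := elbv2.New(session.Must(session.NewSession()))

	out, err := svc.DescribeLoadBalancerAttributes(&elbv2.DescribeLoadBalancerAttributesInput{
		// LoadBalancerArn is an assumed field name; the ARN is a placeholder.
		LoadBalancerArn: aws.String("arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-alb/0123456789abcdef"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, attr := range out.Attributes { // Attributes, Key, and Value are assumed
		fmt.Printf("%s = %s\n", aws.StringValue(attr.Key), aws.StringValue(attr.Value))
	}
}
```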
@@ -2110,8 +2139,8 @@ const opDescribeTargetHealth = "DescribeTargetHealth" // DescribeTargetHealthRequest generates a "aws/request.Request" representing the // client's request for the DescribeTargetHealth operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2197,8 +2226,8 @@ const opModifyListener = "ModifyListener" // ModifyListenerRequest generates a "aws/request.Request" representing the // client's request for the ModifyListener operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2292,6 +2321,12 @@ func (c *ELBV2) ModifyListenerRequest(input *ModifyListenerInput) (req *request. // * ErrCodeTooManyTargetsException "TooManyTargets" // You've reached the limit on the number of targets. // +// * ErrCodeTooManyActionsException "TooManyActions" +// You've reached the limit on the number of actions per rule. +// +// * ErrCodeInvalidLoadBalancerActionException "InvalidLoadBalancerAction" +// The requested action is not valid. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticloadbalancingv2-2015-12-01/ModifyListener func (c *ELBV2) ModifyListener(input *ModifyListenerInput) (*ModifyListenerOutput, error) { req, out := c.ModifyListenerRequest(input) @@ -2318,8 +2353,8 @@ const opModifyLoadBalancerAttributes = "ModifyLoadBalancerAttributes" // ModifyLoadBalancerAttributesRequest generates a "aws/request.Request" representing the // client's request for the ModifyLoadBalancerAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2405,8 +2440,8 @@ const opModifyRule = "ModifyRule" // ModifyRuleRequest generates a "aws/request.Request" representing the // client's request for the ModifyRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2449,7 +2484,7 @@ func (c *ELBV2) ModifyRuleRequest(input *ModifyRuleInput) (req *request.Request, // // Any existing properties that you do not modify retain their current values. // -// To modify the default action, use ModifyListener. +// To modify the actions for the default rule, use ModifyListener. // // Returns awserr.Error for service API and SDK errors. 
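The clarified `ModifyRule` comment above says the default rule's actions are changed through `ModifyListener`, while `ModifyRule` changes any other rule. A sketch contrasting the two calls; `ListenerArn`, `DefaultActions`, `RuleArn`, and `Actions` are assumed field names, the ARNs are placeholders, and `forwardTo` is a purely illustrative helper.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// forwardTo is a hypothetical helper that builds a single forward action.
func forwardTo(targetGroupARN string) []*elbv2.Action {
	return []*elbv2.Action{{
		Type:           aws.String("forward"),
		TargetGroupArn: aws.String(targetGroupARN),
	}}
}

func main() {
	svc := elbv2.New(session.Must(session.NewSession()))

	// Default rule: modify the listener itself.
	if _, err := svc.ModifyListener(&elbv2.ModifyListenerInput{
		ListenerArn:    aws.String("arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/my-alb/abc/def"), // assumed field name
		DefaultActions: forwardTo("arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/blue/111"),          // assumed field name
	}); err != nil {
		log.Fatal(err)
	}

	// Any non-default rule: modify the rule directly.
	if _, err := svc.ModifyRule(&elbv2.ModifyRuleInput{
		RuleArn: aws.String("arn:aws:elasticloadbalancing:us-west-2:123456789012:listener-rule/app/my-alb/abc/def/ghi"), // assumed field name
		Actions: forwardTo("arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/green/222"),                 // assumed field name
	}); err != nil {
		log.Fatal(err)
	}
}
```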
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2481,6 +2516,15 @@ func (c *ELBV2) ModifyRuleRequest(input *ModifyRuleInput) (req *request.Request, // * ErrCodeTargetGroupNotFoundException "TargetGroupNotFound" // The specified target group does not exist. // +// * ErrCodeUnsupportedProtocolException "UnsupportedProtocol" +// The specified protocol is not supported. +// +// * ErrCodeTooManyActionsException "TooManyActions" +// You've reached the limit on the number of actions per rule. +// +// * ErrCodeInvalidLoadBalancerActionException "InvalidLoadBalancerAction" +// The requested action is not valid. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticloadbalancingv2-2015-12-01/ModifyRule func (c *ELBV2) ModifyRule(input *ModifyRuleInput) (*ModifyRuleOutput, error) { req, out := c.ModifyRuleRequest(input) @@ -2507,8 +2551,8 @@ const opModifyTargetGroup = "ModifyTargetGroup" // ModifyTargetGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyTargetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2592,8 +2636,8 @@ const opModifyTargetGroupAttributes = "ModifyTargetGroupAttributes" // ModifyTargetGroupAttributesRequest generates a "aws/request.Request" representing the // client's request for the ModifyTargetGroupAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2674,8 +2718,8 @@ const opRegisterTargets = "RegisterTargets" // RegisterTargetsRequest generates a "aws/request.Request" representing the // client's request for the RegisterTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2779,8 +2823,8 @@ const opRemoveListenerCertificates = "RemoveListenerCertificates" // RemoveListenerCertificatesRequest generates a "aws/request.Request" representing the // client's request for the RemoveListenerCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2866,8 +2910,8 @@ const opRemoveTags = "RemoveTags" // RemoveTagsRequest generates a "aws/request.Request" representing the // client's request for the RemoveTags operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2959,8 +3003,8 @@ const opSetIpAddressType = "SetIpAddressType" // SetIpAddressTypeRequest generates a "aws/request.Request" representing the // client's request for the SetIpAddressType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3002,7 +3046,7 @@ func (c *ELBV2) SetIpAddressTypeRequest(input *SetIpAddressTypeInput) (req *requ // Sets the type of IP addresses used by the subnets of the specified Application // Load Balancer or Network Load Balancer. // -// Note that Network Load Balancers must use ipv4. +// Network Load Balancers must use ipv4. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3047,8 +3091,8 @@ const opSetRulePriorities = "SetRulePriorities" // SetRulePrioritiesRequest generates a "aws/request.Request" representing the // client's request for the SetRulePriorities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3136,8 +3180,8 @@ const opSetSecurityGroups = "SetSecurityGroups" // SetSecurityGroupsRequest generates a "aws/request.Request" representing the // client's request for the SetSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3180,7 +3224,7 @@ func (c *ELBV2) SetSecurityGroupsRequest(input *SetSecurityGroupsInput) (req *re // Balancer. The specified security groups override the previously associated // security groups. // -// Note that you can't specify a security group for a Network Load Balancer. +// You can't specify a security group for a Network Load Balancer. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3225,8 +3269,8 @@ const opSetSubnets = "SetSubnets" // SetSubnetsRequest generates a "aws/request.Request" representing the // client's request for the SetSubnets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3269,7 +3313,7 @@ func (c *ELBV2) SetSubnetsRequest(input *SetSubnetsInput) (req *request.Request, // Application Load Balancer. The specified subnets replace the previously enabled // subnets. // -// Note that you can't change the subnets for a Network Load Balancer. +// You can't change the subnets for a Network Load Balancer. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3323,12 +3367,33 @@ func (c *ELBV2) SetSubnetsWithContext(ctx aws.Context, input *SetSubnetsInput, o type Action struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the target group. - // - // TargetGroupArn is a required field - TargetGroupArn *string `type:"string" required:"true"` + // [HTTPS listener] Information for using Amazon Cognito to authenticate users. + // Specify only when Type is authenticate-cognito. + AuthenticateCognitoConfig *AuthenticateCognitoActionConfig `type:"structure"` + + // [HTTPS listener] Information about an identity provider that is compliant + // with OpenID Connect (OIDC). Specify only when Type is authenticate-oidc. + AuthenticateOidcConfig *AuthenticateOidcActionConfig `type:"structure"` + + // [Application Load Balancer] Information for creating an action that returns + // a custom HTTP response. Specify only when Type is fixed-response. + FixedResponseConfig *FixedResponseActionConfig `type:"structure"` - // The type of action. + // The order for the action. This value is required for rules with multiple + // actions. The action with the lowest value for order is performed first. The + // final action to be performed must be a forward or a fixed-response action. + Order *int64 `min:"1" type:"integer"` + + // [Application Load Balancer] Information for creating a redirect action. Specify + // only when Type is redirect. + RedirectConfig *RedirectActionConfig `type:"structure"` + + // The Amazon Resource Name (ARN) of the target group. Specify only when Type + // is forward. + TargetGroupArn *string `type:"string"` + + // The type of action. Each rule must include exactly one of the following types + // of actions: forward, fixed-response, or redirect. // // Type is a required field Type *string `type:"string" required:"true" enum:"ActionTypeEnum"` @@ -3347,12 +3412,32 @@ func (s Action) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
func (s *Action) Validate() error { invalidParams := request.ErrInvalidParams{Context: "Action"} - if s.TargetGroupArn == nil { - invalidParams.Add(request.NewErrParamRequired("TargetGroupArn")) + if s.Order != nil && *s.Order < 1 { + invalidParams.Add(request.NewErrParamMinValue("Order", 1)) } if s.Type == nil { invalidParams.Add(request.NewErrParamRequired("Type")) } + if s.AuthenticateCognitoConfig != nil { + if err := s.AuthenticateCognitoConfig.Validate(); err != nil { + invalidParams.AddNested("AuthenticateCognitoConfig", err.(request.ErrInvalidParams)) + } + } + if s.AuthenticateOidcConfig != nil { + if err := s.AuthenticateOidcConfig.Validate(); err != nil { + invalidParams.AddNested("AuthenticateOidcConfig", err.(request.ErrInvalidParams)) + } + } + if s.FixedResponseConfig != nil { + if err := s.FixedResponseConfig.Validate(); err != nil { + invalidParams.AddNested("FixedResponseConfig", err.(request.ErrInvalidParams)) + } + } + if s.RedirectConfig != nil { + if err := s.RedirectConfig.Validate(); err != nil { + invalidParams.AddNested("RedirectConfig", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -3360,6 +3445,36 @@ func (s *Action) Validate() error { return nil } +// SetAuthenticateCognitoConfig sets the AuthenticateCognitoConfig field's value. +func (s *Action) SetAuthenticateCognitoConfig(v *AuthenticateCognitoActionConfig) *Action { + s.AuthenticateCognitoConfig = v + return s +} + +// SetAuthenticateOidcConfig sets the AuthenticateOidcConfig field's value. +func (s *Action) SetAuthenticateOidcConfig(v *AuthenticateOidcActionConfig) *Action { + s.AuthenticateOidcConfig = v + return s +} + +// SetFixedResponseConfig sets the FixedResponseConfig field's value. +func (s *Action) SetFixedResponseConfig(v *FixedResponseActionConfig) *Action { + s.FixedResponseConfig = v + return s +} + +// SetOrder sets the Order field's value. +func (s *Action) SetOrder(v int64) *Action { + s.Order = &v + return s +} + +// SetRedirectConfig sets the RedirectConfig field's value. +func (s *Action) SetRedirectConfig(v *RedirectActionConfig) *Action { + s.RedirectConfig = v + return s +} + // SetTargetGroupArn sets the TargetGroupArn field's value. func (s *Action) SetTargetGroupArn(v string) *Action { s.TargetGroupArn = &v @@ -3526,6 +3641,305 @@ func (s AddTagsOutput) GoString() string { return s.String() } +// Request parameters to use when integrating with Amazon Cognito to authenticate +// users. +type AuthenticateCognitoActionConfig struct { + _ struct{} `type:"structure"` + + // The query parameters (up to 10) to include in the redirect request to the + // authorization endpoint. + AuthenticationRequestExtraParams map[string]*string `type:"map"` + + // The behavior if the user is not authenticated. The following are possible + // values: + // + // * deny - Return an HTTP 401 Unauthorized error. + // + // * allow - Allow the request to be forwarded to the target. + // + // authenticate + OnUnauthenticatedRequest *string `type:"string" enum:"AuthenticateCognitoActionConditionalBehaviorEnum"` + + // The set of user claims to be requested from the IdP. The default is openid. + // + // To verify which scope values your IdP supports and how to separate multiple + // values, see the documentation for your IdP. + Scope *string `type:"string"` + + // The name of the cookie used to maintain session information. The default + // is AWSELBAuthSessionCookie. 
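With the `Validate` changes above, `TargetGroupArn` is no longer required on `Action`; instead `Type` must be set, `Order` (when present) must be at least 1, and any nested action config is validated too. A tiny sketch exercising that logic directly, with a placeholder target group ARN:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func main() {
	// Missing Type and an Order below the minimum of 1: Validate reports both.
	bad := &elbv2.Action{Order: aws.Int64(0)}
	fmt.Println(bad.Validate()) // InvalidParameter errors for Order and Type

	// A forward action now only needs Type plus the target group to forward to.
	good := &elbv2.Action{
		Type:           aws.String("forward"),
		TargetGroupArn: aws.String("arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/web/0123456789abcdef"),
	}
	fmt.Println(good.Validate()) // <nil>
}
```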
+ SessionCookieName *string `type:"string"` + + // The maximum duration of the authentication session, in seconds. The default + // is 604800 seconds (7 days). + SessionTimeout *int64 `type:"long"` + + // The Amazon Resource Name (ARN) of the Amazon Cognito user pool. + // + // UserPoolArn is a required field + UserPoolArn *string `type:"string" required:"true"` + + // The ID of the Amazon Cognito user pool client. + // + // UserPoolClientId is a required field + UserPoolClientId *string `type:"string" required:"true"` + + // The domain prefix or fully-qualified domain name of the Amazon Cognito user + // pool. + // + // UserPoolDomain is a required field + UserPoolDomain *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s AuthenticateCognitoActionConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthenticateCognitoActionConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AuthenticateCognitoActionConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AuthenticateCognitoActionConfig"} + if s.UserPoolArn == nil { + invalidParams.Add(request.NewErrParamRequired("UserPoolArn")) + } + if s.UserPoolClientId == nil { + invalidParams.Add(request.NewErrParamRequired("UserPoolClientId")) + } + if s.UserPoolDomain == nil { + invalidParams.Add(request.NewErrParamRequired("UserPoolDomain")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthenticationRequestExtraParams sets the AuthenticationRequestExtraParams field's value. +func (s *AuthenticateCognitoActionConfig) SetAuthenticationRequestExtraParams(v map[string]*string) *AuthenticateCognitoActionConfig { + s.AuthenticationRequestExtraParams = v + return s +} + +// SetOnUnauthenticatedRequest sets the OnUnauthenticatedRequest field's value. +func (s *AuthenticateCognitoActionConfig) SetOnUnauthenticatedRequest(v string) *AuthenticateCognitoActionConfig { + s.OnUnauthenticatedRequest = &v + return s +} + +// SetScope sets the Scope field's value. +func (s *AuthenticateCognitoActionConfig) SetScope(v string) *AuthenticateCognitoActionConfig { + s.Scope = &v + return s +} + +// SetSessionCookieName sets the SessionCookieName field's value. +func (s *AuthenticateCognitoActionConfig) SetSessionCookieName(v string) *AuthenticateCognitoActionConfig { + s.SessionCookieName = &v + return s +} + +// SetSessionTimeout sets the SessionTimeout field's value. +func (s *AuthenticateCognitoActionConfig) SetSessionTimeout(v int64) *AuthenticateCognitoActionConfig { + s.SessionTimeout = &v + return s +} + +// SetUserPoolArn sets the UserPoolArn field's value. +func (s *AuthenticateCognitoActionConfig) SetUserPoolArn(v string) *AuthenticateCognitoActionConfig { + s.UserPoolArn = &v + return s +} + +// SetUserPoolClientId sets the UserPoolClientId field's value. +func (s *AuthenticateCognitoActionConfig) SetUserPoolClientId(v string) *AuthenticateCognitoActionConfig { + s.UserPoolClientId = &v + return s +} + +// SetUserPoolDomain sets the UserPoolDomain field's value. +func (s *AuthenticateCognitoActionConfig) SetUserPoolDomain(v string) *AuthenticateCognitoActionConfig { + s.UserPoolDomain = &v + return s +} + +// Request parameters when using an identity provider (IdP) that is compliant +// with OpenID Connect (OIDC) to authenticate users. 
+type AuthenticateOidcActionConfig struct { + _ struct{} `type:"structure"` + + // The query parameters (up to 10) to include in the redirect request to the + // authorization endpoint. + AuthenticationRequestExtraParams map[string]*string `type:"map"` + + // The authorization endpoint of the IdP. This must be a full URL, including + // the HTTPS protocol, the domain, and the path. + // + // AuthorizationEndpoint is a required field + AuthorizationEndpoint *string `type:"string" required:"true"` + + // The OAuth 2.0 client identifier. + // + // ClientId is a required field + ClientId *string `type:"string" required:"true"` + + // The OAuth 2.0 client secret. + // + // ClientSecret is a required field + ClientSecret *string `type:"string" required:"true"` + + // The OIDC issuer identifier of the IdP. This must be a full URL, including + // the HTTPS protocol, the domain, and the path. + // + // Issuer is a required field + Issuer *string `type:"string" required:"true"` + + // The behavior if the user is not authenticated. The following are possible + // values: + // + // * deny - Return an HTTP 401 Unauthorized error. + // + // * allow - Allow the request to be forwarded to the target. + // + // authenticate + OnUnauthenticatedRequest *string `type:"string" enum:"AuthenticateOidcActionConditionalBehaviorEnum"` + + // The set of user claims to be requested from the IdP. The default is openid. + // + // To verify which scope values your IdP supports and how to separate multiple + // values, see the documentation for your IdP. + Scope *string `type:"string"` + + // The name of the cookie used to maintain session information. The default + // is AWSELBAuthSessionCookie. + SessionCookieName *string `type:"string"` + + // The maximum duration of the authentication session, in seconds. The default + // is 604800 seconds (7 days). + SessionTimeout *int64 `type:"long"` + + // The token endpoint of the IdP. This must be a full URL, including the HTTPS + // protocol, the domain, and the path. + // + // TokenEndpoint is a required field + TokenEndpoint *string `type:"string" required:"true"` + + // The user info endpoint of the IdP. This must be a full URL, including the + // HTTPS protocol, the domain, and the path. + // + // UserInfoEndpoint is a required field + UserInfoEndpoint *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s AuthenticateOidcActionConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthenticateOidcActionConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AuthenticateOidcActionConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AuthenticateOidcActionConfig"} + if s.AuthorizationEndpoint == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizationEndpoint")) + } + if s.ClientId == nil { + invalidParams.Add(request.NewErrParamRequired("ClientId")) + } + if s.ClientSecret == nil { + invalidParams.Add(request.NewErrParamRequired("ClientSecret")) + } + if s.Issuer == nil { + invalidParams.Add(request.NewErrParamRequired("Issuer")) + } + if s.TokenEndpoint == nil { + invalidParams.Add(request.NewErrParamRequired("TokenEndpoint")) + } + if s.UserInfoEndpoint == nil { + invalidParams.Add(request.NewErrParamRequired("UserInfoEndpoint")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthenticationRequestExtraParams sets the AuthenticationRequestExtraParams field's value. +func (s *AuthenticateOidcActionConfig) SetAuthenticationRequestExtraParams(v map[string]*string) *AuthenticateOidcActionConfig { + s.AuthenticationRequestExtraParams = v + return s +} + +// SetAuthorizationEndpoint sets the AuthorizationEndpoint field's value. +func (s *AuthenticateOidcActionConfig) SetAuthorizationEndpoint(v string) *AuthenticateOidcActionConfig { + s.AuthorizationEndpoint = &v + return s +} + +// SetClientId sets the ClientId field's value. +func (s *AuthenticateOidcActionConfig) SetClientId(v string) *AuthenticateOidcActionConfig { + s.ClientId = &v + return s +} + +// SetClientSecret sets the ClientSecret field's value. +func (s *AuthenticateOidcActionConfig) SetClientSecret(v string) *AuthenticateOidcActionConfig { + s.ClientSecret = &v + return s +} + +// SetIssuer sets the Issuer field's value. +func (s *AuthenticateOidcActionConfig) SetIssuer(v string) *AuthenticateOidcActionConfig { + s.Issuer = &v + return s +} + +// SetOnUnauthenticatedRequest sets the OnUnauthenticatedRequest field's value. +func (s *AuthenticateOidcActionConfig) SetOnUnauthenticatedRequest(v string) *AuthenticateOidcActionConfig { + s.OnUnauthenticatedRequest = &v + return s +} + +// SetScope sets the Scope field's value. +func (s *AuthenticateOidcActionConfig) SetScope(v string) *AuthenticateOidcActionConfig { + s.Scope = &v + return s +} + +// SetSessionCookieName sets the SessionCookieName field's value. +func (s *AuthenticateOidcActionConfig) SetSessionCookieName(v string) *AuthenticateOidcActionConfig { + s.SessionCookieName = &v + return s +} + +// SetSessionTimeout sets the SessionTimeout field's value. +func (s *AuthenticateOidcActionConfig) SetSessionTimeout(v int64) *AuthenticateOidcActionConfig { + s.SessionTimeout = &v + return s +} + +// SetTokenEndpoint sets the TokenEndpoint field's value. +func (s *AuthenticateOidcActionConfig) SetTokenEndpoint(v string) *AuthenticateOidcActionConfig { + s.TokenEndpoint = &v + return s +} + +// SetUserInfoEndpoint sets the UserInfoEndpoint field's value. +func (s *AuthenticateOidcActionConfig) SetUserInfoEndpoint(v string) *AuthenticateOidcActionConfig { + s.UserInfoEndpoint = &v + return s +} + // Information about an Availability Zone. type AvailabilityZone struct { _ struct{} `type:"structure"` @@ -3637,13 +4051,29 @@ func (s *Cipher) SetPriority(v int64) *Cipher { type CreateListenerInput struct { _ struct{} `type:"structure"` - // [HTTPS listeners] The SSL server certificate. You must provide exactly one - // certificate. + // [HTTPS listeners] The default SSL server certificate. You must provide exactly + // one default certificate. 
To create a certificate list, use AddListenerCertificates. Certificates []*Certificate `type:"list"` - // The default action for the listener. For Application Load Balancers, the - // protocol of the specified target group must be HTTP or HTTPS. For Network - // Load Balancers, the protocol of the specified target group must be TCP. + // The actions for the default rule. The rule must include one forward action + // or one or more fixed-response actions. + // + // If the action type is forward, you can specify a single target group. The + // protocol of the target group must be HTTP or HTTPS for an Application Load + // Balancer or TCP for a Network Load Balancer. + // + // [HTTPS listener] If the action type is authenticate-oidc, you can use an + // identity provider that is OpenID Connect (OIDC) compliant to authenticate + // users as they access your application. + // + // [HTTPS listener] If the action type is authenticate-cognito, you can use + // Amazon Cognito to authenticate users as they access your application. + // + // [Application Load Balancer] If the action type is redirect, you can redirect + // HTTP and HTTPS requests. + // + // [Application Load Balancer] If the action type is fixed-response, you can + // return a custom HTTP response. // // DefaultActions is a required field DefaultActions []*Action `type:"list" required:"true"` @@ -3786,8 +4216,8 @@ type CreateLoadBalancerInput struct { // The name of the load balancer. // // This name must be unique per region per account, can have a maximum of 32 - // characters, must contain only alphanumeric characters or hyphens, and must - // not begin or end with a hyphen. + // characters, must contain only alphanumeric characters or hyphens, must not + // begin or end with a hyphen, and must not begin with "internal-". // // Name is a required field Name *string `type:"string" required:"true"` @@ -3795,7 +4225,7 @@ type CreateLoadBalancerInput struct { // The nodes of an Internet-facing load balancer have public IP addresses. The // DNS name of an Internet-facing load balancer is publicly resolvable to the // public IP addresses of the nodes. Therefore, Internet-facing load balancers - // can route requests from clients over the Internet. + // can route requests from clients over the internet. // // The nodes of an internal load balancer have only private IP addresses. The // DNS name of an internal load balancer is publicly resolvable to the private @@ -3946,7 +4376,23 @@ func (s *CreateLoadBalancerOutput) SetLoadBalancers(v []*LoadBalancer) *CreateLo type CreateRuleInput struct { _ struct{} `type:"structure"` - // An action. Each action has the type forward and specifies a target group. + // The actions. Each rule must include exactly one of the following types of + // actions: forward, fixed-response, or redirect. + // + // If the action type is forward, you can specify a single target group. + // + // [HTTPS listener] If the action type is authenticate-oidc, you can use an + // identity provider that is OpenID Connect (OIDC) compliant to authenticate + // users as they access your application. + // + // [HTTPS listener] If the action type is authenticate-cognito, you can use + // Amazon Cognito to authenticate users as they access your application. + // + // [Application Load Balancer] If the action type is redirect, you can redirect + // HTTP and HTTPS requests. + // + // [Application Load Balancer] If the action type is fixed-response, you can + // return a custom HTTP response. 
// // Actions is a required field Actions []*Action `type:"list" required:"true"` @@ -3955,8 +4401,8 @@ type CreateRuleInput struct { // // If the field name is host-header, you can specify a single host name (for // example, my.example.com). A host name is case insensitive, can be up to 128 - // characters in length, and can contain any of the following characters. Note - // that you can include up to three wildcard characters. + // characters in length, and can contain any of the following characters. You + // can include up to three wildcard characters. // // * A-Z, a-z, 0-9 // @@ -3967,9 +4413,9 @@ type CreateRuleInput struct { // * ? (matches exactly 1 character) // // If the field name is path-pattern, you can specify a single path pattern. - // A path pattern is case sensitive, can be up to 128 characters in length, - // and can contain any of the following characters. Note that you can include - // up to three wildcard characters. + // A path pattern is case-sensitive, can be up to 128 characters in length, + // and can contain any of the following characters. You can include up to three + // wildcard characters. // // * A-Z, a-z, 0-9 // @@ -3989,8 +4435,7 @@ type CreateRuleInput struct { // ListenerArn is a required field ListenerArn *string `type:"string" required:"true"` - // The priority for the rule. A listener can't have multiple rules with the - // same priority. + // The rule priority. A listener can't have multiple rules with the same priority. // // Priority is a required field Priority *int64 `min:"1" type:"integer" required:"true"` @@ -4092,9 +4537,9 @@ type CreateTargetGroupInput struct { _ struct{} `type:"structure"` // The approximate amount of time, in seconds, between health checks of an individual - // target. For Application Load Balancers, the range is 5 to 300 seconds. For - // Network Load Balancers, the supported values are 10 or 30 seconds. The default - // is 30 seconds. + // target. For Application Load Balancers, the range is 5–300 seconds. For Network + // Load Balancers, the supported values are 10 or 30 seconds. The default is + // 30 seconds. HealthCheckIntervalSeconds *int64 `min:"5" type:"integer"` // [HTTP/HTTPS health checks] The ping path that is the destination on the targets @@ -4113,9 +4558,9 @@ type CreateTargetGroupInput struct { HealthCheckProtocol *string `type:"string" enum:"ProtocolEnum"` // The amount of time, in seconds, during which no response from a target means - // a failed health check. For Application Load Balancers, the range is 2 to - // 60 seconds and the default is 5 seconds. For Network Load Balancers, this - // is 10 seconds for TCP and HTTPS health checks and 6 seconds for HTTP health + // a failed health check. For Application Load Balancers, the range is 2–60 + // seconds and the default is 5 seconds. For Network Load Balancers, this is + // 10 seconds for TCP and HTTPS health checks and 6 seconds for HTTP health // checks. HealthCheckTimeoutSeconds *int64 `min:"2" type:"integer"` @@ -4153,8 +4598,8 @@ type CreateTargetGroupInput struct { // The type of target that you must specify when registering targets with this // target group. The possible values are instance (targets are specified by // instance ID) or ip (targets are specified by IP address). The default is - // instance. Note that you can't specify targets for a target group using both - // instance IDs and IP addresses. + // instance. You can't specify targets for a target group using both instance + // IDs and IP addresses. 
// // If the target type is ip, specify IP addresses from the subnets of the virtual // private cloud (VPC) for the target group, the RFC 1918 range (10.0.0.0/8, @@ -5536,6 +5981,66 @@ func (s *DescribeTargetHealthOutput) SetTargetHealthDescriptions(v []*TargetHeal return s } +// Information about an action that returns a custom HTTP response. +type FixedResponseActionConfig struct { + _ struct{} `type:"structure"` + + // The content type. + // + // Valid Values: text/plain | text/css | text/html | application/javascript + // | application/json + ContentType *string `type:"string"` + + // The message. + MessageBody *string `type:"string"` + + // The HTTP response code (2XX, 4XX, or 5XX). + // + // StatusCode is a required field + StatusCode *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s FixedResponseActionConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FixedResponseActionConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *FixedResponseActionConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FixedResponseActionConfig"} + if s.StatusCode == nil { + invalidParams.Add(request.NewErrParamRequired("StatusCode")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContentType sets the ContentType field's value. +func (s *FixedResponseActionConfig) SetContentType(v string) *FixedResponseActionConfig { + s.ContentType = &v + return s +} + +// SetMessageBody sets the MessageBody field's value. +func (s *FixedResponseActionConfig) SetMessageBody(v string) *FixedResponseActionConfig { + s.MessageBody = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. +func (s *FixedResponseActionConfig) SetStatusCode(v string) *FixedResponseActionConfig { + s.StatusCode = &v + return s +} + // Information about an Elastic Load Balancing resource limit for your AWS account. type Limit struct { _ struct{} `type:"structure"` @@ -5678,7 +6183,7 @@ type LoadBalancer struct { CanonicalHostedZoneId *string `type:"string"` // The date and time the load balancer was created. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreatedTime *time.Time `type:"timestamp"` // The public DNS name of the load balancer. DNSName *string `type:"string"` @@ -5697,7 +6202,7 @@ type LoadBalancer struct { // The nodes of an Internet-facing load balancer have public IP addresses. The // DNS name of an Internet-facing load balancer is publicly resolvable to the // public IP addresses of the nodes. Therefore, Internet-facing load balancers - // can route requests from clients over the Internet. + // can route requests from clients over the internet. // // The nodes of an internal load balancer have only private IP addresses. The // DNS name of an internal load balancer is publicly resolvable to the private @@ -5839,32 +6344,35 @@ type LoadBalancerAttribute struct { // The name of the attribute. // - // * access_logs.s3.enabled - [Application Load Balancers] Indicates whether - // access logs stored in Amazon S3 are enabled. The value is true or false. + // The following attributes are supported by both Application Load Balancers + // and Network Load Balancers: + // + // * deletion_protection.enabled - Indicates whether deletion protection + // is enabled. The value is true or false. The default is false. 
// - // * access_logs.s3.bucket - [Application Load Balancers] The name of the - // S3 bucket for the access logs. This attribute is required if access logs - // in Amazon S3 are enabled. The bucket must exist in the same region as - // the load balancer and have a bucket policy that grants Elastic Load Balancing - // permission to write to the bucket. + // The following attributes are supported by only Application Load Balancers: // - // * access_logs.s3.prefix - [Application Load Balancers] The prefix for - // the location in the S3 bucket. If you don't specify a prefix, the access - // logs are stored in the root of the bucket. + // * access_logs.s3.enabled - Indicates whether access logs are enabled. + // The value is true or false. The default is false. // - // * deletion_protection.enabled - Indicates whether deletion protection - // is enabled. The value is true or false. + // * access_logs.s3.bucket - The name of the S3 bucket for the access logs. + // This attribute is required if access logs are enabled. The bucket must + // exist in the same region as the load balancer and have a bucket policy + // that grants Elastic Load Balancing permissions to write to the bucket. + // + // * access_logs.s3.prefix - The prefix for the location in the S3 bucket + // for the access logs. // - // * idle_timeout.timeout_seconds - [Application Load Balancers] The idle - // timeout value, in seconds. The valid range is 1-4000. The default is 60 - // seconds. + // * idle_timeout.timeout_seconds - The idle timeout value, in seconds. The + // valid range is 1-4000 seconds. The default is 60 seconds. // - // * load_balancing.cross_zone.enabled - [Network Load Balancers] Indicates - // whether cross-zone load balancing is enabled. The value is true or false. - // The default is false. + // * routing.http2.enabled - Indicates whether HTTP/2 is enabled. The value + // is true or false. The default is true. // - // * routing.http2.enabled - [Application Load Balancers] Indicates whether - // HTTP/2 is enabled. The value is true or false. The default is true. + // The following attributes are supported by only Network Load Balancers: + // + // * load_balancing.cross_zone.enabled - Indicates whether cross-zone load + // balancing is enabled. The value is true or false. The default is false. Key *string `type:"string"` // The value of the attribute. @@ -5938,7 +6446,7 @@ type Matcher struct { // and the default value is 200. You can specify multiple values (for example, // "200,202") or a range of values (for example, "200-299"). // - // For Network Load Balancers, this is 200 to 399. + // For Network Load Balancers, this is 200–399. // // HttpCode is a required field HttpCode *string `type:"string" required:"true"` @@ -5976,12 +6484,29 @@ func (s *Matcher) SetHttpCode(v string) *Matcher { type ModifyListenerInput struct { _ struct{} `type:"structure"` - // The default SSL server certificate. + // [HTTPS listeners] The default SSL server certificate. You must provide exactly + // one default certificate. To create a certificate list, use AddListenerCertificates. Certificates []*Certificate `type:"list"` - // The default action. For Application Load Balancers, the protocol of the specified - // target group must be HTTP or HTTPS. For Network Load Balancers, the protocol - // of the specified target group must be TCP. + // The actions for the default rule. The rule must include one forward action + // or one or more fixed-response actions. 
+ // + // If the action type is forward, you can specify a single target group. The + // protocol of the target group must be HTTP or HTTPS for an Application Load + // Balancer or TCP for a Network Load Balancer. + // + // [HTTPS listener] If the action type is authenticate-oidc, you can use an + // identity provider that is OpenID Connect (OIDC) compliant to authenticate + // users as they access your application. + // + // [HTTPS listener] If the action type is authenticate-cognito, you can use + // Amazon Cognito to authenticate users as they access your application. + // + // [Application Load Balancer] If the action type is redirect, you can redirect + // HTTP and HTTPS requests. + // + // [Application Load Balancer] If the action type is fixed-response, you can + // return a custom HTTP response. DefaultActions []*Action `type:"list"` // The Amazon Resource Name (ARN) of the listener. @@ -5997,8 +6522,8 @@ type ModifyListenerInput struct { // TCP. Protocol *string `type:"string" enum:"ProtocolEnum"` - // The security policy that defines which protocols and ciphers are supported. - // For more information, see Security Policies (http://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html#describe-ssl-policies) + // [HTTPS listeners] The security policy that defines which protocols and ciphers + // are supported. For more information, see Security Policies (http://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html#describe-ssl-policies) // in the Application Load Balancers Guide. SslPolicy *string `type:"string"` } @@ -6078,7 +6603,7 @@ func (s *ModifyListenerInput) SetSslPolicy(v string) *ModifyListenerInput { type ModifyListenerOutput struct { _ struct{} `type:"structure"` - // Information about the modified listeners. + // Information about the modified listener. Listeners []*Listener `type:"list"` } @@ -6176,10 +6701,47 @@ func (s *ModifyLoadBalancerAttributesOutput) SetAttributes(v []*LoadBalancerAttr type ModifyRuleInput struct { _ struct{} `type:"structure"` - // The actions. The target group must use the HTTP or HTTPS protocol. + // The actions. + // + // If the action type is forward, you can specify a single target group. + // + // If the action type is authenticate-oidc, you can use an identity provider + // that is OpenID Connect (OIDC) compliant to authenticate users as they access + // your application. + // + // If the action type is authenticate-cognito, you can use Amazon Cognito to + // authenticate users as they access your application. Actions []*Action `type:"list"` - // The conditions. + // The conditions. Each condition specifies a field name and a single value. + // + // If the field name is host-header, you can specify a single host name (for + // example, my.example.com). A host name is case insensitive, can be up to 128 + // characters in length, and can contain any of the following characters. You + // can include up to three wildcard characters. + // + // * A-Z, a-z, 0-9 + // + // * - . + // + // * * (matches 0 or more characters) + // + // * ? (matches exactly 1 character) + // + // If the field name is path-pattern, you can specify a single path pattern. + // A path pattern is case-sensitive, can be up to 128 characters in length, + // and can contain any of the following characters. You can include up to three + // wildcard characters. + // + // * A-Z, a-z, 0-9 + // + // * _ - . $ / ~ " ' @ : + + // + // * & (using &) + // + // * * (matches 0 or more characters) + // + // * ? 
(matches exactly 1 character) Conditions []*RuleCondition `type:"list"` // The Amazon Resource Name (ARN) of the rule. @@ -6242,7 +6804,7 @@ func (s *ModifyRuleInput) SetRuleArn(v string) *ModifyRuleInput { type ModifyRuleOutput struct { _ struct{} `type:"structure"` - // Information about the rule. + // Information about the modified rule. Rules []*Rule `type:"list"` } @@ -6341,8 +6903,8 @@ type ModifyTargetGroupInput struct { _ struct{} `type:"structure"` // The approximate amount of time, in seconds, between health checks of an individual - // target. For Application Load Balancers, the range is 5 to 300 seconds. For - // Network Load Balancers, the supported values are 10 or 30 seconds. + // target. For Application Load Balancers, the range is 5–300 seconds. For Network + // Load Balancers, the supported values are 10 or 30 seconds. HealthCheckIntervalSeconds *int64 `min:"5" type:"integer"` // [HTTP/HTTPS health checks] The ping path that is the destination for the @@ -6480,7 +7042,7 @@ func (s *ModifyTargetGroupInput) SetUnhealthyThresholdCount(v int64) *ModifyTarg type ModifyTargetGroupOutput struct { _ struct{} `type:"structure"` - // Information about the target group. + // Information about the modified target group. TargetGroups []*TargetGroup `type:"list"` } @@ -6500,6 +7062,123 @@ func (s *ModifyTargetGroupOutput) SetTargetGroups(v []*TargetGroup) *ModifyTarge return s } +// Information about a redirect action. +// +// A URI consists of the following components: protocol://hostname:port/path?query. +// You must modify at least one of the following components to avoid a redirect +// loop: protocol, hostname, port, or path. Any components that you do not modify +// retain their original values. +// +// You can reuse URI components using the following reserved keywords: +// +// * #{protocol} +// +// * #{host} +// +// * #{port} +// +// * #{path} (the leading "/" is removed) +// +// * #{query} +// +// For example, you can change the path to "/new/#{path}", the hostname to "example.#{host}", +// or the query to "#{query}&value=xyz". +type RedirectActionConfig struct { + _ struct{} `type:"structure"` + + // The hostname. This component is not percent-encoded. The hostname can contain + // #{host}. + Host *string `min:"1" type:"string"` + + // The absolute path, starting with the leading "/". This component is not percent-encoded. + // The path can contain #{host}, #{path}, and #{port}. + Path *string `min:"1" type:"string"` + + // The port. You can specify a value from 1 to 65535 or #{port}. + Port *string `type:"string"` + + // The protocol. You can specify HTTP, HTTPS, or #{protocol}. You can redirect + // HTTP to HTTP, HTTP to HTTPS, and HTTPS to HTTPS. You cannot redirect HTTPS + // to HTTP. + Protocol *string `type:"string"` + + // The query parameters, URL-encoded when necessary, but not percent-encoded. + // Do not include the leading "?", as it is automatically added. You can specify + // any of the reserved keywords. + Query *string `type:"string"` + + // The HTTP redirect code. The redirect is either permanent (HTTP 301) or temporary + // (HTTP 302). 
+ // + // StatusCode is a required field + StatusCode *string `type:"string" required:"true" enum:"RedirectActionStatusCodeEnum"` +} + +// String returns the string representation +func (s RedirectActionConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RedirectActionConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RedirectActionConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RedirectActionConfig"} + if s.Host != nil && len(*s.Host) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Host", 1)) + } + if s.Path != nil && len(*s.Path) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Path", 1)) + } + if s.StatusCode == nil { + invalidParams.Add(request.NewErrParamRequired("StatusCode")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetHost sets the Host field's value. +func (s *RedirectActionConfig) SetHost(v string) *RedirectActionConfig { + s.Host = &v + return s +} + +// SetPath sets the Path field's value. +func (s *RedirectActionConfig) SetPath(v string) *RedirectActionConfig { + s.Path = &v + return s +} + +// SetPort sets the Port field's value. +func (s *RedirectActionConfig) SetPort(v string) *RedirectActionConfig { + s.Port = &v + return s +} + +// SetProtocol sets the Protocol field's value. +func (s *RedirectActionConfig) SetProtocol(v string) *RedirectActionConfig { + s.Protocol = &v + return s +} + +// SetQuery sets the Query field's value. +func (s *RedirectActionConfig) SetQuery(v string) *RedirectActionConfig { + s.Query = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. +func (s *RedirectActionConfig) SetStatusCode(v string) *RedirectActionConfig { + s.StatusCode = &v + return s +} + type RegisterTargetsInput struct { _ struct{} `type:"structure"` @@ -6779,8 +7458,8 @@ type RuleCondition struct { // // If the field name is host-header, you can specify a single host name (for // example, my.example.com). A host name is case insensitive, can be up to 128 - // characters in length, and can contain any of the following characters. Note - // that you can include up to three wildcard characters. + // characters in length, and can contain any of the following characters. You + // can include up to three wildcard characters. // // * A-Z, a-z, 0-9 // @@ -6791,9 +7470,9 @@ type RuleCondition struct { // * ? (matches exactly 1 character) // // If the field name is path-pattern, you can specify a single path pattern - // (for example, /img/*). A path pattern is case sensitive, can be up to 128 - // characters in length, and can contain any of the following characters. Note - // that you can include up to three wildcard characters. + // (for example, /img/*). A path pattern is case-sensitive, can be up to 128 + // characters in length, and can contain any of the following characters. You + // can include up to three wildcard characters. // // * A-Z, a-z, 0-9 // @@ -7116,9 +7795,7 @@ type SetSubnetsInput struct { // The IDs of the public subnets. You must specify subnets from at least two // Availability Zones. You can specify only one subnet per Availability Zone. // You must specify either subnets or subnet mappings. 
- // - // Subnets is a required field - Subnets []*string `type:"list" required:"true"` + Subnets []*string `type:"list"` } // String returns the string representation @@ -7137,9 +7814,6 @@ func (s *SetSubnetsInput) Validate() error { if s.LoadBalancerArn == nil { invalidParams.Add(request.NewErrParamRequired("LoadBalancerArn")) } - if s.Subnets == nil { - invalidParams.Add(request.NewErrParamRequired("Subnets")) - } if invalidParams.Len() > 0 { return invalidParams @@ -7581,25 +8255,38 @@ type TargetGroupAttribute struct { // The name of the attribute. // - // * deregistration_delay.timeout_seconds - The amount time for Elastic Load - // Balancing to wait before changing the state of a deregistering target - // from draining to unused. The range is 0-3600 seconds. The default value - // is 300 seconds. + // The following attributes are supported by both Application Load Balancers + // and Network Load Balancers: + // + // * deregistration_delay.timeout_seconds - The amount of time, in seconds, + // for Elastic Load Balancing to wait before changing the state of a deregistering + // target from draining to unused. The range is 0-3600 seconds. The default + // value is 300 seconds. + // + // The following attributes are supported by only Application Load Balancers: // - // * proxy_protocol_v2.enabled - [Network Load Balancers] Indicates whether - // Proxy Protocol version 2 is enabled. + // * slow_start.duration_seconds - The time period, in seconds, during which + // a newly registered target receives a linearly increasing share of the + // traffic to the target group. After this time period ends, the target receives + // its full share of traffic. The range is 30-900 seconds (15 minutes). Slow + // start mode is disabled by default. // - // * stickiness.enabled - [Application Load Balancers] Indicates whether - // sticky sessions are enabled. The value is true or false. + // * stickiness.enabled - Indicates whether sticky sessions are enabled. + // The value is true or false. The default is false. // - // * stickiness.type - [Application Load Balancers] The type of sticky sessions. - // The possible value is lb_cookie. + // * stickiness.type - The type of sticky sessions. The possible value is + // lb_cookie. // - // * stickiness.lb_cookie.duration_seconds - [Application Load Balancers] - // The time period, in seconds, during which requests from a client should - // be routed to the same target. After this time period expires, the load - // balancer-generated cookie is considered stale. The range is 1 second to - // 1 week (604800 seconds). The default value is 1 day (86400 seconds). + // * stickiness.lb_cookie.duration_seconds - The time period, in seconds, + // during which requests from a client should be routed to the same target. + // After this time period expires, the load balancer-generated cookie is + // considered stale. The range is 1 second to 1 week (604800 seconds). The + // default value is 1 day (86400 seconds). + // + // The following attributes are supported by only Network Load Balancers: + // + // * proxy_protocol_v2.enabled - Indicates whether Proxy Protocol version + // 2 is enabled. The value is true or false. The default is false. Key *string `type:"string"` // The value of the attribute. 
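The hunks above introduce the new `redirect` and `fixed-response` action types and their config structs. As a rough illustrative sketch (not part of the vendored diff), they can be wired up along the following lines. The `Action` wrapper fields used at the end (`Type`, `RedirectConfig`) are assumed from the corresponding `Action` changes elsewhere in this diff, the import paths are the standard aws-sdk-go ones, and the ports, status codes, and message body are placeholders.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func main() {
	// Redirect HTTP to HTTPS while reusing the original host, path, and query
	// via the reserved keywords documented on RedirectActionConfig.
	redirect := &elbv2.RedirectActionConfig{
		Protocol:   aws.String("HTTPS"),
		Port:       aws.String("443"),
		Host:       aws.String("#{host}"),
		Path:       aws.String("/#{path}"),
		Query:      aws.String("#{query}"),
		StatusCode: aws.String(elbv2.RedirectActionStatusCodeEnumHttp301),
	}
	if err := redirect.Validate(); err != nil {
		fmt.Println("invalid redirect config:", err)
	}

	// Serve a static response instead of forwarding to a target group.
	fixed := &elbv2.FixedResponseActionConfig{
		ContentType: aws.String("text/plain"),
		MessageBody: aws.String("Service temporarily unavailable"),
		StatusCode:  aws.String("503"),
	}
	if err := fixed.Validate(); err != nil {
		fmt.Println("invalid fixed-response config:", err)
	}

	// Assumed Action shape: the per-type config hangs off the Action passed in
	// DefaultActions for CreateListener/ModifyListener or in Actions for rules.
	_ = &elbv2.Action{
		Type:           aws.String(elbv2.ActionTypeEnumRedirect),
		RedirectConfig: redirect,
	}
}
```

The same pattern would apply to the `authenticate-cognito` and `authenticate-oidc` configs added earlier in this file.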
@@ -7759,6 +8446,40 @@ func (s *TargetHealthDescription) SetTargetHealth(v *TargetHealth) *TargetHealth const ( // ActionTypeEnumForward is a ActionTypeEnum enum value ActionTypeEnumForward = "forward" + + // ActionTypeEnumAuthenticateOidc is a ActionTypeEnum enum value + ActionTypeEnumAuthenticateOidc = "authenticate-oidc" + + // ActionTypeEnumAuthenticateCognito is a ActionTypeEnum enum value + ActionTypeEnumAuthenticateCognito = "authenticate-cognito" + + // ActionTypeEnumRedirect is a ActionTypeEnum enum value + ActionTypeEnumRedirect = "redirect" + + // ActionTypeEnumFixedResponse is a ActionTypeEnum enum value + ActionTypeEnumFixedResponse = "fixed-response" +) + +const ( + // AuthenticateCognitoActionConditionalBehaviorEnumDeny is a AuthenticateCognitoActionConditionalBehaviorEnum enum value + AuthenticateCognitoActionConditionalBehaviorEnumDeny = "deny" + + // AuthenticateCognitoActionConditionalBehaviorEnumAllow is a AuthenticateCognitoActionConditionalBehaviorEnum enum value + AuthenticateCognitoActionConditionalBehaviorEnumAllow = "allow" + + // AuthenticateCognitoActionConditionalBehaviorEnumAuthenticate is a AuthenticateCognitoActionConditionalBehaviorEnum enum value + AuthenticateCognitoActionConditionalBehaviorEnumAuthenticate = "authenticate" +) + +const ( + // AuthenticateOidcActionConditionalBehaviorEnumDeny is a AuthenticateOidcActionConditionalBehaviorEnum enum value + AuthenticateOidcActionConditionalBehaviorEnumDeny = "deny" + + // AuthenticateOidcActionConditionalBehaviorEnumAllow is a AuthenticateOidcActionConditionalBehaviorEnum enum value + AuthenticateOidcActionConditionalBehaviorEnumAllow = "allow" + + // AuthenticateOidcActionConditionalBehaviorEnumAuthenticate is a AuthenticateOidcActionConditionalBehaviorEnum enum value + AuthenticateOidcActionConditionalBehaviorEnumAuthenticate = "authenticate" ) const ( @@ -7810,6 +8531,14 @@ const ( ProtocolEnumTcp = "TCP" ) +const ( + // RedirectActionStatusCodeEnumHttp301 is a RedirectActionStatusCodeEnum enum value + RedirectActionStatusCodeEnumHttp301 = "HTTP_301" + + // RedirectActionStatusCodeEnumHttp302 is a RedirectActionStatusCodeEnum enum value + RedirectActionStatusCodeEnumHttp302 = "HTTP_302" +) + const ( // TargetHealthReasonEnumElbRegistrationInProgress is a TargetHealthReasonEnum enum value TargetHealthReasonEnumElbRegistrationInProgress = "Elb.RegistrationInProgress" diff --git a/vendor/github.com/aws/aws-sdk-go/service/elbv2/errors.go b/vendor/github.com/aws/aws-sdk-go/service/elbv2/errors.go index 88edc02e9cb..b813ebeff7a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elbv2/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elbv2/errors.go @@ -65,6 +65,12 @@ const ( // The requested configuration is not valid. ErrCodeInvalidConfigurationRequestException = "InvalidConfigurationRequest" + // ErrCodeInvalidLoadBalancerActionException for service response error code + // "InvalidLoadBalancerAction". + // + // The requested action is not valid. + ErrCodeInvalidLoadBalancerActionException = "InvalidLoadBalancerAction" + // ErrCodeInvalidSchemeException for service response error code // "InvalidScheme". // @@ -150,6 +156,12 @@ const ( // The specified target group does not exist. ErrCodeTargetGroupNotFoundException = "TargetGroupNotFound" + // ErrCodeTooManyActionsException for service response error code + // "TooManyActions". + // + // You've reached the limit on the number of actions per rule. 
+ ErrCodeTooManyActionsException = "TooManyActions" + // ErrCodeTooManyCertificatesException for service response error code // "TooManyCertificates". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/elbv2/service.go b/vendor/github.com/aws/aws-sdk-go/service/elbv2/service.go index c3733846cac..ad97e8df885 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/elbv2/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/elbv2/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "elasticloadbalancing" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "elasticloadbalancing" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Elastic Load Balancing v2" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ELBV2 client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/emr/api.go b/vendor/github.com/aws/aws-sdk-go/service/emr/api.go index b9b47e6e6c4..66e46254eeb 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/emr/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/emr/api.go @@ -17,8 +17,8 @@ const opAddInstanceFleet = "AddInstanceFleet" // AddInstanceFleetRequest generates a "aws/request.Request" representing the // client's request for the AddInstanceFleet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -102,8 +102,8 @@ const opAddInstanceGroups = "AddInstanceGroups" // AddInstanceGroupsRequest generates a "aws/request.Request" representing the // client's request for the AddInstanceGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -182,8 +182,8 @@ const opAddJobFlowSteps = "AddJobFlowSteps" // AddJobFlowStepsRequest generates a "aws/request.Request" representing the // client's request for the AddJobFlowSteps operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -284,8 +284,8 @@ const opAddTags = "AddTags" // AddTagsRequest generates a "aws/request.Request" representing the // client's request for the AddTags operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -368,8 +368,8 @@ const opCancelSteps = "CancelSteps" // CancelStepsRequest generates a "aws/request.Request" representing the // client's request for the CancelSteps operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -455,8 +455,8 @@ const opCreateSecurityConfiguration = "CreateSecurityConfiguration" // CreateSecurityConfigurationRequest generates a "aws/request.Request" representing the // client's request for the CreateSecurityConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -538,8 +538,8 @@ const opDeleteSecurityConfiguration = "DeleteSecurityConfiguration" // DeleteSecurityConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteSecurityConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -620,8 +620,8 @@ const opDescribeCluster = "DescribeCluster" // DescribeClusterRequest generates a "aws/request.Request" representing the // client's request for the DescribeCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -661,7 +661,7 @@ func (c *EMR) DescribeClusterRequest(input *DescribeClusterInput) (req *request. // DescribeCluster API operation for Amazon Elastic MapReduce. // // Provides cluster-level details including status, hardware and software configuration, -// VPC settings, and so on. For information about the cluster steps, see ListSteps. +// VPC settings, and so on. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -703,8 +703,8 @@ const opDescribeJobFlows = "DescribeJobFlows" // DescribeJobFlowsRequest generates a "aws/request.Request" representing the // client's request for the DescribeJobFlows operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -725,6 +725,8 @@ const opDescribeJobFlows = "DescribeJobFlows" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/DescribeJobFlows +// +// Deprecated: DescribeJobFlows has been deprecated func (c *EMR) DescribeJobFlowsRequest(input *DescribeJobFlowsInput) (req *request.Request, output *DescribeJobFlowsOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, DescribeJobFlows, has been deprecated") @@ -780,6 +782,8 @@ func (c *EMR) DescribeJobFlowsRequest(input *DescribeJobFlowsInput) (req *reques // request was not completed. // // See also, https://docs.aws.amazon.com/goto/WebAPI/elasticmapreduce-2009-03-31/DescribeJobFlows +// +// Deprecated: DescribeJobFlows has been deprecated func (c *EMR) DescribeJobFlows(input *DescribeJobFlowsInput) (*DescribeJobFlowsOutput, error) { req, out := c.DescribeJobFlowsRequest(input) return out, req.Send() @@ -794,6 +798,8 @@ func (c *EMR) DescribeJobFlows(input *DescribeJobFlowsInput) (*DescribeJobFlowsO // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: DescribeJobFlowsWithContext has been deprecated func (c *EMR) DescribeJobFlowsWithContext(ctx aws.Context, input *DescribeJobFlowsInput, opts ...request.Option) (*DescribeJobFlowsOutput, error) { req, out := c.DescribeJobFlowsRequest(input) req.SetContext(ctx) @@ -805,8 +811,8 @@ const opDescribeSecurityConfiguration = "DescribeSecurityConfiguration" // DescribeSecurityConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DescribeSecurityConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -888,8 +894,8 @@ const opDescribeStep = "DescribeStep" // DescribeStepRequest generates a "aws/request.Request" representing the // client's request for the DescribeStep operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -970,8 +976,8 @@ const opListBootstrapActions = "ListBootstrapActions" // ListBootstrapActionsRequest generates a "aws/request.Request" representing the // client's request for the ListBootstrapActions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1108,8 +1114,8 @@ const opListClusters = "ListClusters" // ListClustersRequest generates a "aws/request.Request" representing the // client's request for the ListClusters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1250,8 +1256,8 @@ const opListInstanceFleets = "ListInstanceFleets" // ListInstanceFleetsRequest generates a "aws/request.Request" representing the // client's request for the ListInstanceFleets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1391,8 +1397,8 @@ const opListInstanceGroups = "ListInstanceGroups" // ListInstanceGroupsRequest generates a "aws/request.Request" representing the // client's request for the ListInstanceGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1529,8 +1535,8 @@ const opListInstances = "ListInstances" // ListInstancesRequest generates a "aws/request.Request" representing the // client's request for the ListInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1670,8 +1676,8 @@ const opListSecurityConfigurations = "ListSecurityConfigurations" // ListSecurityConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the ListSecurityConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1755,8 +1761,8 @@ const opListSteps = "ListSteps" // ListStepsRequest generates a "aws/request.Request" representing the // client's request for the ListSteps operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1894,8 +1900,8 @@ const opModifyInstanceFleet = "ModifyInstanceFleet" // ModifyInstanceFleetRequest generates a "aws/request.Request" representing the // client's request for the ModifyInstanceFleet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1983,8 +1989,8 @@ const opModifyInstanceGroups = "ModifyInstanceGroups" // ModifyInstanceGroupsRequest generates a "aws/request.Request" representing the // client's request for the ModifyInstanceGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2068,8 +2074,8 @@ const opPutAutoScalingPolicy = "PutAutoScalingPolicy" // PutAutoScalingPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutAutoScalingPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2145,8 +2151,8 @@ const opRemoveAutoScalingPolicy = "RemoveAutoScalingPolicy" // RemoveAutoScalingPolicyRequest generates a "aws/request.Request" representing the // client's request for the RemoveAutoScalingPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2220,8 +2226,8 @@ const opRemoveTags = "RemoveTags" // RemoveTagsRequest generates a "aws/request.Request" representing the // client's request for the RemoveTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2306,8 +2312,8 @@ const opRunJobFlow = "RunJobFlow" // RunJobFlowRequest generates a "aws/request.Request" representing the // client's request for the RunJobFlow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2412,8 +2418,8 @@ const opSetTerminationProtection = "SetTerminationProtection" // SetTerminationProtectionRequest generates a "aws/request.Request" representing the // client's request for the SetTerminationProtection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2510,8 +2516,8 @@ const opSetVisibleToAllUsers = "SetVisibleToAllUsers" // SetVisibleToAllUsersRequest generates a "aws/request.Request" representing the // client's request for the SetVisibleToAllUsers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2597,8 +2603,8 @@ const opTerminateJobFlows = "TerminateJobFlows" // TerminateJobFlowsRequest generates a "aws/request.Request" representing the // client's request for the TerminateJobFlows operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3675,7 +3681,14 @@ type Cluster struct { // the actual billing rate. NormalizedInstanceHours *int64 `type:"integer"` - // The release label for the Amazon EMR release. + // The Amazon EMR release label, which determines the version of open-source + // application packages installed on the cluster. Release labels are in the + // form emr-x.x.x, where x.x.x is an Amazon EMR release version, for example, + // emr-5.14.0. For more information about Amazon EMR release versions and included + // application versions and features, see http://docs.aws.amazon.com/emr/latest/ReleaseGuide/ + // (http://docs.aws.amazon.com/emr/latest/ReleaseGuide/). The release label + // applies only to Amazon EMR releases versions 4.x and later. Earlier versions + // use AmiVersion. ReleaseLabel *string `type:"string"` // Applies only when CustomAmiID is used. Specifies the type of updates that @@ -4027,13 +4040,13 @@ type ClusterTimeline struct { _ struct{} `type:"structure"` // The creation date and time of the cluster. - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The date and time when the cluster was terminated. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndDateTime *time.Time `type:"timestamp"` // The date and time when the cluster was ready to execute steps. 
- ReadyDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ReadyDateTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -4215,7 +4228,7 @@ type CreateSecurityConfigurationOutput struct { // The date and time the security configuration was created. // // CreationDateTime is a required field - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDateTime *time.Time `type:"timestamp" required:"true"` // The name of the security configuration. // @@ -4365,10 +4378,10 @@ type DescribeJobFlowsInput struct { _ struct{} `type:"structure"` // Return only job flows created after this date and time. - CreatedAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedAfter *time.Time `type:"timestamp"` // Return only job flows created before this date and time. - CreatedBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedBefore *time.Time `type:"timestamp"` // Return only job flows whose job flow ID is contained in this list. JobFlowIds []*string `type:"list"` @@ -4477,7 +4490,7 @@ type DescribeSecurityConfigurationOutput struct { _ struct{} `type:"structure"` // The date and time the security configuration was created - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The name of the security configuration. Name *string `type:"string"` @@ -5713,13 +5726,13 @@ type InstanceFleetTimeline struct { _ struct{} `type:"structure"` // The time and date the instance fleet was created. - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The time and date the instance fleet terminated. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndDateTime *time.Time `type:"timestamp"` // The time and date the instance fleet was ready to run jobs. - ReadyDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ReadyDateTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -5761,8 +5774,12 @@ type InstanceGroup struct { // of a CloudWatch metric. See PutAutoScalingPolicy. AutoScalingPolicy *AutoScalingPolicyDescription `type:"structure"` - // The bid price for each EC2 instance in the instance group when launching - // nodes as Spot Instances, expressed in USD. + // The maximum Spot price your are willing to pay for EC2 instances. + // + // An optional, nullable field that applies if the MarketType for the instance + // group is specified as SPOT. Specify the maximum spot price in USD. If the + // value is NULL and SPOT is specified, the maximum Spot price is set equal + // to the On-Demand price. BidPrice *string `type:"string"` // Amazon EMR releases 4.x or later. @@ -5913,8 +5930,12 @@ type InstanceGroupConfig struct { // of a CloudWatch metric. See PutAutoScalingPolicy. AutoScalingPolicy *AutoScalingPolicy `type:"structure"` - // Bid price for each EC2 instance in the instance group when launching nodes - // as Spot Instances, expressed in USD. + // The maximum Spot price your are willing to pay for EC2 instances. + // + // An optional, nullable field that applies if the MarketType for the instance + // group is specified as SPOT. Specify the maximum spot price in USD. If the + // value is NULL and SPOT is specified, the maximum Spot price is set equal + // to the On-Demand price. BidPrice *string `type:"string"` // Amazon EMR releases 4.x or later. 
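As a small orientation sketch (not part of the vendored diff) for the revised `BidPrice` wording above: `BidPrice` and `Market` come from the documentation in the hunks above, while the remaining `InstanceGroupConfig` fields, the `Validate` call, and the instance type, count, and price are assumed or illustrative.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/emr"
)

func main() {
	// Spot task group with an explicit maximum Spot price, in USD.
	capped := &emr.InstanceGroupConfig{
		InstanceRole:  aws.String("TASK"),
		InstanceType:  aws.String("m4.large"),
		InstanceCount: aws.Int64(2),
		Market:        aws.String("SPOT"),
		BidPrice:      aws.String("0.10"),
	}
	if err := capped.Validate(); err != nil {
		fmt.Println("invalid instance group config:", err)
	}

	// Per the updated docs, leaving BidPrice nil while Market is SPOT means
	// the maximum Spot price defaults to the On-Demand price.
	uncapped := &emr.InstanceGroupConfig{
		InstanceRole:  aws.String("TASK"),
		InstanceType:  aws.String("m4.large"),
		InstanceCount: aws.Int64(2),
		Market:        aws.String("SPOT"),
	}
	_ = uncapped
}
```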
@@ -6050,17 +6071,20 @@ func (s *InstanceGroupConfig) SetName(v string) *InstanceGroupConfig { type InstanceGroupDetail struct { _ struct{} `type:"structure"` - // Bid price for EC2 Instances when launching nodes as Spot Instances, expressed - // in USD. + // The maximum Spot price your are willing to pay for EC2 instances. + // + // An optional, nullable field that applies if the MarketType for the instance + // group is specified as SPOT. Specified in USD. If the value is NULL and SPOT + // is specified, the maximum Spot price is set equal to the On-Demand price. BidPrice *string `type:"string"` // The date/time the instance group was created. // // CreationDateTime is a required field - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDateTime *time.Time `type:"timestamp" required:"true"` // The date/time the instance group was terminated. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndDateTime *time.Time `type:"timestamp"` // Unique identifier for the instance group. InstanceGroupId *string `type:"string"` @@ -6097,10 +6121,10 @@ type InstanceGroupDetail struct { Name *string `type:"string"` // The date/time the instance group was available to the cluster. - ReadyDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ReadyDateTime *time.Time `type:"timestamp"` // The date/time the instance group was started. - StartDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartDateTime *time.Time `type:"timestamp"` // State of instance group. The following values are deprecated: STARTING, TERMINATED, // and FAILED. @@ -6350,13 +6374,13 @@ type InstanceGroupTimeline struct { _ struct{} `type:"structure"` // The creation date and time of the instance group. - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The date and time when the instance group terminated. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndDateTime *time.Time `type:"timestamp"` // The date and time when the instance group became ready to perform tasks. - ReadyDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ReadyDateTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -6511,13 +6535,13 @@ type InstanceTimeline struct { _ struct{} `type:"structure"` // The creation date and time of the instance. - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The date and time when the instance was terminated. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndDateTime *time.Time `type:"timestamp"` // The date and time when the instance was ready to perform tasks. - ReadyDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ReadyDateTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -6751,10 +6775,8 @@ func (s *InstanceTypeSpecification) SetWeightedCapacity(v int64) *InstanceTypeSp type JobFlowDetail struct { _ struct{} `type:"structure"` - // Used only for version 2.x and 3.x of Amazon EMR. The version of the AMI used - // to initialize Amazon EC2 instances in the job flow. For a list of AMI versions - // supported by Amazon EMR, see AMI Versions Supported in EMR (http://docs.aws.amazon.com/emr/latest/DeveloperGuide/emr-dg.pdf#nameddest=ami-versions-supported) - // in the Amazon EMR Developer Guide. + // Applies only to Amazon EMR AMI versions 3.x and 2.x. 
For Amazon EMR releases + // 4.0 and later, ReleaseLabel is used. To specify a custom AMI, use CustomAmiID. AmiVersion *string `type:"string"` // An IAM role for automatic scaling policies. The default role is EMR_AutoScaling_DefaultRole. @@ -6929,20 +6951,20 @@ type JobFlowExecutionStatusDetail struct { // The creation date and time of the job flow. // // CreationDateTime is a required field - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDateTime *time.Time `type:"timestamp" required:"true"` // The completion date and time of the job flow. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndDateTime *time.Time `type:"timestamp"` // Description of the job flow last changed state. LastStateChangeReason *string `type:"string"` // The date and time when the job flow was ready to start running bootstrap // actions. - ReadyDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ReadyDateTime *time.Time `type:"timestamp"` // The start date and time of the job flow. - StartDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartDateTime *time.Time `type:"timestamp"` // The state of the job flow. // @@ -7041,11 +7063,12 @@ type JobFlowInstancesConfig struct { // The identifier of the Amazon EC2 security group for the slave nodes. EmrManagedSlaveSecurityGroup *string `type:"string"` - // The Hadoop version for the cluster. Valid inputs are "0.18" (deprecated), - // "0.20" (deprecated), "0.20.205" (deprecated), "1.0.3", "2.2.0", or "2.4.0". - // If you do not set this value, the default of 0.18 is used, unless the AmiVersion - // parameter is set in the RunJobFlow call, in which case the default version - // of Hadoop for that AMI version is used. + // Applies only to Amazon EMR release versions earlier than 4.0. The Hadoop + // version for the cluster. Valid inputs are "0.18" (deprecated), "0.20" (deprecated), + // "0.20.205" (deprecated), "1.0.3", "2.2.0", or "2.4.0". If you do not set + // this value, the default of 0.18 is used, unless the AmiVersion parameter + // is set in the RunJobFlow call, in which case the default version of Hadoop + // for that AMI version is used. HadoopVersion *string `type:"string"` // The number of EC2 instances in the cluster. @@ -7596,10 +7619,10 @@ type ListClustersInput struct { ClusterStates []*string `type:"list"` // The creation date and time beginning value filter for listing clusters. - CreatedAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedAfter *time.Time `type:"timestamp"` // The creation date and time end value filter for listing clusters. - CreatedBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedBefore *time.Time `type:"timestamp"` // The pagination token that indicates the next set of results to retrieve. Marker *string `type:"string"` @@ -8591,22 +8614,8 @@ type RunJobFlowInput struct { // A JSON string for selecting additional features. AdditionalInfo *string `type:"string"` - // For Amazon EMR AMI versions 3.x and 2.x. For Amazon EMR releases 4.0 and - // later, the Linux AMI is determined by the ReleaseLabel specified or by CustomAmiID. - // The version of the Amazon Machine Image (AMI) to use when launching Amazon - // EC2 instances in the job flow. For details about the AMI versions currently - // supported in EMR version 3.x and 2.x, see AMI Versions Supported in EMR (emr/latest/DeveloperGuide/emr-dg.pdf#nameddest=ami-versions-supported) - // in the Amazon EMR Developer Guide. 
- // - // If the AMI supports multiple versions of Hadoop (for example, AMI 1.0 supports - // both Hadoop 0.18 and 0.20), you can use the JobFlowInstancesConfigHadoopVersion - // parameter to modify the version of Hadoop from the defaults shown above. - // - // Previously, the EMR AMI version API parameter options allowed you to use - // latest for the latest AMI version rather than specify a numerical value. - // Some regions no longer support this deprecated option as they only have a - // newer release label version of EMR, which requires you to specify an EMR - // release label release (EMR 4.x or later). + // Applies only to Amazon EMR AMI versions 3.x and 2.x. For Amazon EMR releases + // 4.0 and later, ReleaseLabel is used. To specify a custom AMI, use CustomAmiID. AmiVersion *string `type:"string"` // For Amazon EMR releases 4.0 and later. A list of applications for the cluster. @@ -8698,8 +8707,14 @@ type RunJobFlowInput struct { // * "ganglia" - launch the cluster with the Ganglia Monitoring System installed. NewSupportedProducts []*SupportedProductConfig `type:"list"` - // The release label for the Amazon EMR release. For Amazon EMR 3.x and 2.x - // AMIs, use AmiVersion instead. + // The Amazon EMR release label, which determines the version of open-source + // application packages installed on the cluster. Release labels are in the + // form emr-x.x.x, where x.x.x is an Amazon EMR release version, for example, + // emr-5.14.0. For more information about Amazon EMR release versions and included + // application versions and features, see http://docs.aws.amazon.com/emr/latest/ReleaseGuide/ + // (http://docs.aws.amazon.com/emr/latest/ReleaseGuide/). The release label + // applies only to Amazon EMR releases versions 4.x and later. Earlier versions + // use AmiVersion. ReleaseLabel *string `type:"string"` // Applies only when CustomAmiID is used. Specifies which updates from the Amazon @@ -9279,7 +9294,7 @@ type SecurityConfigurationSummary struct { _ struct{} `type:"structure"` // The date and time the security configuration was created. - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The name of the security configuration. Name *string `type:"string"` @@ -9814,16 +9829,16 @@ type StepExecutionStatusDetail struct { // The creation date and time of the step. // // CreationDateTime is a required field - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDateTime *time.Time `type:"timestamp" required:"true"` // The completion date and time of the step. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndDateTime *time.Time `type:"timestamp"` // A description of the step's current state. LastStateChangeReason *string `type:"string"` // The start date and time of the step. - StartDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartDateTime *time.Time `type:"timestamp"` // The state of the step. // @@ -10023,13 +10038,13 @@ type StepTimeline struct { _ struct{} `type:"structure"` // The date and time when the cluster step was created. - CreationDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDateTime *time.Time `type:"timestamp"` // The date and time when the cluster step execution completed or failed. - EndDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndDateTime *time.Time `type:"timestamp"` // The date and time when the cluster step execution started. 
- StartDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartDateTime *time.Time `type:"timestamp"` } // String returns the string representation diff --git a/vendor/github.com/aws/aws-sdk-go/service/emr/service.go b/vendor/github.com/aws/aws-sdk-go/service/emr/service.go index 61fc42581ef..92735a793d4 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/emr/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/emr/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "elasticmapreduce" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "elasticmapreduce" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "EMR" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the EMR client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/firehose/api.go b/vendor/github.com/aws/aws-sdk-go/service/firehose/api.go index c4f0a8add5f..b229b2e72a3 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/firehose/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/firehose/api.go @@ -15,8 +15,8 @@ const opCreateDeliveryStream = "CreateDeliveryStream" // CreateDeliveryStreamRequest generates a "aws/request.Request" representing the // client's request for the CreateDeliveryStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -55,9 +55,9 @@ func (c *Firehose) CreateDeliveryStreamRequest(input *CreateDeliveryStreamInput) // CreateDeliveryStream API operation for Amazon Kinesis Firehose. // -// Creates a delivery stream. +// Creates a Kinesis Data Firehose delivery stream. // -// By default, you can create up to 20 delivery streams per region. +// By default, you can create up to 50 delivery streams per AWS Region. // // This is an asynchronous operation that immediately returns. The initial status // of the delivery stream is CREATING. After the delivery stream is created, @@ -65,33 +65,34 @@ func (c *Firehose) CreateDeliveryStreamRequest(input *CreateDeliveryStreamInput) // delivery stream that is not in the ACTIVE state cause an exception. To check // the state of a delivery stream, use DescribeDeliveryStream. // -// A Kinesis Firehose delivery stream can be configured to receive records directly -// from providers using PutRecord or PutRecordBatch, or it can be configured -// to use an existing Kinesis stream as its source. To specify a Kinesis stream -// as input, set the DeliveryStreamType parameter to KinesisStreamAsSource, -// and provide the Kinesis stream ARN and role ARN in the KinesisStreamSourceConfiguration -// parameter. 
+// A Kinesis Data Firehose delivery stream can be configured to receive records +// directly from providers using PutRecord or PutRecordBatch, or it can be configured +// to use an existing Kinesis stream as its source. To specify a Kinesis data +// stream as input, set the DeliveryStreamType parameter to KinesisStreamAsSource, +// and provide the Kinesis stream Amazon Resource Name (ARN) and role ARN in +// the KinesisStreamSourceConfiguration parameter. // // A delivery stream is configured with a single destination: Amazon S3, Amazon -// ES, or Amazon Redshift. You must specify only one of the following destination -// configuration parameters: ExtendedS3DestinationConfiguration, S3DestinationConfiguration, -// ElasticsearchDestinationConfiguration, or RedshiftDestinationConfiguration. +// ES, Amazon Redshift, or Splunk. You must specify only one of the following +// destination configuration parameters: ExtendedS3DestinationConfiguration, +// S3DestinationConfiguration, ElasticsearchDestinationConfiguration, RedshiftDestinationConfiguration, +// or SplunkDestinationConfiguration. // // When you specify S3DestinationConfiguration, you can also provide the following // optional values: BufferingHints, EncryptionConfiguration, and CompressionFormat. -// By default, if no BufferingHints value is provided, Kinesis Firehose buffers -// data up to 5 MB or for 5 minutes, whichever condition is satisfied first. -// Note that BufferingHints is a hint, so there are some cases where the service -// cannot adhere to these conditions strictly; for example, record boundaries -// are such that the size is a little over or under the configured buffering +// By default, if no BufferingHints value is provided, Kinesis Data Firehose +// buffers data up to 5 MB or for 5 minutes, whichever condition is satisfied +// first. BufferingHints is a hint, so there are some cases where the service +// cannot adhere to these conditions strictly. For example, record boundaries +// might be such that the size is a little over or under the configured buffering // size. By default, no encryption is performed. We strongly recommend that // you enable encryption to ensure secure data storage in Amazon S3. // // A few notes about Amazon Redshift as a destination: // // * An Amazon Redshift destination requires an S3 bucket as intermediate -// location, as Kinesis Firehose first delivers data to S3 and then uses -// COPY syntax to load data into an Amazon Redshift table. This is specified +// location. Kinesis Data Firehose first delivers data to Amazon S3 and then +// uses COPY syntax to load data into an Amazon Redshift table. This is specified // in the RedshiftDestinationConfiguration.S3Configuration parameter. // // * The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationConfiguration.S3Configuration @@ -99,14 +100,15 @@ func (c *Firehose) CreateDeliveryStreamRequest(input *CreateDeliveryStreamInput) // doesn't support these compression formats. // // * We strongly recommend that you use the user name and password you provide -// exclusively with Kinesis Firehose, and that the permissions for the account -// are restricted for Amazon Redshift INSERT permissions. +// exclusively with Kinesis Data Firehose, and that the permissions for the +// account are restricted for Amazon Redshift INSERT permissions. // -// Kinesis Firehose assumes the IAM role that is configured as part of the destination. 
-// The role should allow the Kinesis Firehose principal to assume the role, -// and the role should have permissions that allow the service to deliver the -// data. For more information, see Amazon S3 Bucket Access (http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3) -// in the Amazon Kinesis Firehose Developer Guide. +// Kinesis Data Firehose assumes the IAM role that is configured as part of +// the destination. The role should allow the Kinesis Data Firehose principal +// to assume the role, and the role should have permissions that allow the service +// to deliver the data. For more information, see Grant Kinesis Data Firehose +// Access to an Amazon S3 Destination (http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3) +// in the Amazon Kinesis Data Firehose Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -151,8 +153,8 @@ const opDeleteDeliveryStream = "DeleteDeliveryStream" // DeleteDeliveryStreamRequest generates a "aws/request.Request" representing the // client's request for the DeleteDeliveryStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -199,10 +201,10 @@ func (c *Firehose) DeleteDeliveryStreamRequest(input *DeleteDeliveryStreamInput) // // To check the state of a delivery stream, use DescribeDeliveryStream. // -// While the delivery stream is DELETING state, the service may continue to -// accept the records, but the service doesn't make any guarantees with respect -// to delivering the data. Therefore, as a best practice, you should first stop -// any applications that are sending records before deleting a delivery stream. +// While the delivery stream is DELETING state, the service might continue to +// accept the records, but it doesn't make any guarantees with respect to delivering +// the data. Therefore, as a best practice, you should first stop any applications +// that are sending records before deleting a delivery stream. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -244,8 +246,8 @@ const opDescribeDeliveryStream = "DescribeDeliveryStream" // DescribeDeliveryStreamRequest generates a "aws/request.Request" representing the // client's request for the DescribeDeliveryStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -286,8 +288,8 @@ func (c *Firehose) DescribeDeliveryStreamRequest(input *DescribeDeliveryStreamIn // // Describes the specified delivery stream and gets the status. 
For example, // after your delivery stream is created, call DescribeDeliveryStream to see -// if the delivery stream is ACTIVE and therefore ready for data to be sent -// to it. +// whether the delivery stream is ACTIVE and therefore ready for data to be +// sent to it. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -326,8 +328,8 @@ const opListDeliveryStreams = "ListDeliveryStreams" // ListDeliveryStreamsRequest generates a "aws/request.Request" representing the // client's request for the ListDeliveryStreams operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -366,15 +368,15 @@ func (c *Firehose) ListDeliveryStreamsRequest(input *ListDeliveryStreamsInput) ( // ListDeliveryStreams API operation for Amazon Kinesis Firehose. // -// Lists your delivery streams. +// Lists your delivery streams in alphabetical order of their names. // // The number of delivery streams might be too large to return using a single // call to ListDeliveryStreams. You can limit the number of delivery streams // returned, using the Limit parameter. To determine whether there are more // delivery streams to list, check the value of HasMoreDeliveryStreams in the // output. If there are more delivery streams to list, you can request them -// by specifying the name of the last delivery stream returned in the call in -// the ExclusiveStartDeliveryStreamName parameter of a subsequent call. +// by calling this operation again and setting the ExclusiveStartDeliveryStreamName +// parameter to the name of the last delivery stream returned in the last call. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -404,12 +406,98 @@ func (c *Firehose) ListDeliveryStreamsWithContext(ctx aws.Context, input *ListDe return out, req.Send() } +const opListTagsForDeliveryStream = "ListTagsForDeliveryStream" + +// ListTagsForDeliveryStreamRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsForDeliveryStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTagsForDeliveryStream for more information on using the ListTagsForDeliveryStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsForDeliveryStreamRequest method. 
+// req, resp := client.ListTagsForDeliveryStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/ListTagsForDeliveryStream +func (c *Firehose) ListTagsForDeliveryStreamRequest(input *ListTagsForDeliveryStreamInput) (req *request.Request, output *ListTagsForDeliveryStreamOutput) { + op := &request.Operation{ + Name: opListTagsForDeliveryStream, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListTagsForDeliveryStreamInput{} + } + + output = &ListTagsForDeliveryStreamOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTagsForDeliveryStream API operation for Amazon Kinesis Firehose. +// +// Lists the tags for the specified delivery stream. This operation has a limit +// of five transactions per second per account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Firehose's +// API operation ListTagsForDeliveryStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// The specified input parameter has a value that is not valid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have already reached the limit for a requested resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/ListTagsForDeliveryStream +func (c *Firehose) ListTagsForDeliveryStream(input *ListTagsForDeliveryStreamInput) (*ListTagsForDeliveryStreamOutput, error) { + req, out := c.ListTagsForDeliveryStreamRequest(input) + return out, req.Send() +} + +// ListTagsForDeliveryStreamWithContext is the same as ListTagsForDeliveryStream with the addition of +// the ability to pass a context and additional request options. +// +// See ListTagsForDeliveryStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Firehose) ListTagsForDeliveryStreamWithContext(ctx aws.Context, input *ListTagsForDeliveryStreamInput, opts ...request.Option) (*ListTagsForDeliveryStreamOutput, error) { + req, out := c.ListTagsForDeliveryStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPutRecord = "PutRecord" // PutRecordRequest generates a "aws/request.Request" representing the // client's request for the PutRecord operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -448,22 +536,23 @@ func (c *Firehose) PutRecordRequest(input *PutRecordInput) (req *request.Request // PutRecord API operation for Amazon Kinesis Firehose. 
// -// Writes a single data record into an Amazon Kinesis Firehose delivery stream. -// To write multiple data records into a delivery stream, use PutRecordBatch. +// Writes a single data record into an Amazon Kinesis Data Firehose delivery +// stream. To write multiple data records into a delivery stream, use PutRecordBatch. // Applications using these operations are referred to as producers. // // By default, each delivery stream can take in up to 2,000 transactions per -// second, 5,000 records per second, or 5 MB per second. Note that if you use -// PutRecord and PutRecordBatch, the limits are an aggregate across these two -// operations for each delivery stream. For more information about limits and -// how to request an increase, see Amazon Kinesis Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). +// second, 5,000 records per second, or 5 MB per second. If you use PutRecord +// and PutRecordBatch, the limits are an aggregate across these two operations +// for each delivery stream. For more information about limits and how to request +// an increase, see Amazon Kinesis Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). // // You must specify the name of the delivery stream and the data record when // using PutRecord. The data record consists of a data blob that can be up to -// 1,000 KB in size, and any kind of data, for example, a segment from a log -// file, geographic location data, website clickstream data, and so on. +// 1,000 KB in size, and any kind of data. For example, it can be a segment +// from a log file, geographic location data, website clickstream data, and +// so on. // -// Kinesis Firehose buffers records before delivering them to the destination. +// Kinesis Data Firehose buffers records before delivering them to the destination. // To disambiguate the data blobs at the destination, a common solution is to // use delimiters in the data, such as a newline (\n) or some other character // unique within the data. This allows the consumer application to parse individual @@ -477,11 +566,14 @@ func (c *Firehose) PutRecordRequest(input *PutRecordInput) (req *request.Request // and retry. If the exception persists, it is possible that the throughput // limits have been exceeded for the delivery stream. // -// Data records sent to Kinesis Firehose are stored for 24 hours from the time -// they are added to a delivery stream as it attempts to send the records to -// the destination. If the destination is unreachable for more than 24 hours, +// Data records sent to Kinesis Data Firehose are stored for 24 hours from the +// time they are added to a delivery stream as it tries to send the records +// to the destination. If the destination is unreachable for more than 24 hours, // the data is no longer available. // +// Don't concatenate two or more base64 strings to form the data fields of your +// records. Instead, concatenate the raw data, then perform base64 encoding. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -497,10 +589,10 @@ func (c *Firehose) PutRecordRequest(input *PutRecordInput) (req *request.Request // The specified input parameter has a value that is not valid. // // * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is unavailable, back off and retry the operation. If you continue +// The service is unavailable. 
Back off and retry the operation. If you continue // to see the exception, throughput limits for the delivery stream may have // been exceeded. For more information about limits and how to request an increase, -// see Amazon Kinesis Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). +// see Amazon Kinesis Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). // // See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/PutRecord func (c *Firehose) PutRecord(input *PutRecordInput) (*PutRecordOutput, error) { @@ -528,8 +620,8 @@ const opPutRecordBatch = "PutRecordBatch" // PutRecordBatchRequest generates a "aws/request.Request" representing the // client's request for the PutRecordBatch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -577,7 +669,7 @@ func (c *Firehose) PutRecordBatchRequest(input *PutRecordBatchInput) (req *reque // second, 5,000 records per second, or 5 MB per second. If you use PutRecord // and PutRecordBatch, the limits are an aggregate across these two operations // for each delivery stream. For more information about limits, see Amazon Kinesis -// Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). +// Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). // // Each PutRecordBatch request supports up to 500 records. Each record in the // request can be as large as 1,000 KB (before 64-bit encoding), up to a limit @@ -586,29 +678,31 @@ func (c *Firehose) PutRecordBatchRequest(input *PutRecordBatchInput) (req *reque // You must specify the name of the delivery stream and the data record when // using PutRecord. The data record consists of a data blob that can be up to // 1,000 KB in size, and any kind of data. For example, it could be a segment -// from a log file, geographic location data, web site clickstream data, and +// from a log file, geographic location data, website clickstream data, and // so on. // -// Kinesis Firehose buffers records before delivering them to the destination. +// Kinesis Data Firehose buffers records before delivering them to the destination. // To disambiguate the data blobs at the destination, a common solution is to // use delimiters in the data, such as a newline (\n) or some other character // unique within the data. This allows the consumer application to parse individual // data items when reading the data from the destination. // // The PutRecordBatch response includes a count of failed records, FailedPutCount, -// and an array of responses, RequestResponses. Each entry in the RequestResponses -// array provides additional information about the processed record. It directly -// correlates with a record in the request array using the same ordering, from -// the top to the bottom. The response array always includes the same number -// of records as the request array. RequestResponses includes both successfully -// and unsuccessfully processed records. Kinesis Firehose attempts to process -// all records in each PutRecordBatch request. A single record failure does -// not stop the processing of subsequent records. +// and an array of responses, RequestResponses. 
Even if the PutRecordBatch call +// succeeds, the value of FailedPutCount may be greater than 0, indicating that +// there are records for which the operation didn't succeed. Each entry in the +// RequestResponses array provides additional information about the processed +// record. It directly correlates with a record in the request array using the +// same ordering, from the top to the bottom. The response array always includes +// the same number of records as the request array. RequestResponses includes +// both successfully and unsuccessfully processed records. Kinesis Data Firehose +// tries to process all records in each PutRecordBatch request. A single record +// failure does not stop the processing of subsequent records. // // A successfully processed record includes a RecordId value, which is unique // for the record. An unsuccessfully processed record includes ErrorCode and // ErrorMessage values. ErrorCode reflects the type of error, and is one of -// the following values: ServiceUnavailable or InternalFailure. ErrorMessage +// the following values: ServiceUnavailableException or InternalFailure. ErrorMessage // provides more detailed information about the error. // // If there is an internal server error or a timeout, the write might have completed @@ -622,11 +716,14 @@ func (c *Firehose) PutRecordBatchRequest(input *PutRecordBatchInput) (req *reque // If the exception persists, it is possible that the throughput limits have // been exceeded for the delivery stream. // -// Data records sent to Kinesis Firehose are stored for 24 hours from the time -// they are added to a delivery stream as it attempts to send the records to -// the destination. If the destination is unreachable for more than 24 hours, +// Data records sent to Kinesis Data Firehose are stored for 24 hours from the +// time they are added to a delivery stream as it attempts to send the records +// to the destination. If the destination is unreachable for more than 24 hours, // the data is no longer available. // +// Don't concatenate two or more base64 strings to form the data fields of your +// records. Instead, concatenate the raw data, then perform base64 encoding. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -642,10 +739,10 @@ func (c *Firehose) PutRecordBatchRequest(input *PutRecordBatchInput) (req *reque // The specified input parameter has a value that is not valid. // // * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is unavailable, back off and retry the operation. If you continue +// The service is unavailable. Back off and retry the operation. If you continue // to see the exception, throughput limits for the delivery stream may have // been exceeded. For more information about limits and how to request an increase, -// see Amazon Kinesis Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). +// see Amazon Kinesis Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/PutRecordBatch func (c *Firehose) PutRecordBatch(input *PutRecordBatchInput) (*PutRecordBatchOutput, error) { @@ -669,278 +766,1437 @@ func (c *Firehose) PutRecordBatchWithContext(ctx aws.Context, input *PutRecordBa return out, req.Send() } -const opUpdateDestination = "UpdateDestination" +const opStartDeliveryStreamEncryption = "StartDeliveryStreamEncryption" -// UpdateDestinationRequest generates a "aws/request.Request" representing the -// client's request for the UpdateDestination operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// StartDeliveryStreamEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the StartDeliveryStreamEncryption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateDestination for more information on using the UpdateDestination +// See StartDeliveryStreamEncryption for more information on using the StartDeliveryStreamEncryption // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateDestinationRequest method. -// req, resp := client.UpdateDestinationRequest(params) +// // Example sending a request using the StartDeliveryStreamEncryptionRequest method. +// req, resp := client.StartDeliveryStreamEncryptionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/UpdateDestination -func (c *Firehose) UpdateDestinationRequest(input *UpdateDestinationInput) (req *request.Request, output *UpdateDestinationOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/StartDeliveryStreamEncryption +func (c *Firehose) StartDeliveryStreamEncryptionRequest(input *StartDeliveryStreamEncryptionInput) (req *request.Request, output *StartDeliveryStreamEncryptionOutput) { op := &request.Operation{ - Name: opUpdateDestination, + Name: opStartDeliveryStreamEncryption, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateDestinationInput{} + input = &StartDeliveryStreamEncryptionInput{} } - output = &UpdateDestinationOutput{} + output = &StartDeliveryStreamEncryptionOutput{} req = c.newRequest(op, input, output) return } -// UpdateDestination API operation for Amazon Kinesis Firehose. -// -// Updates the specified destination of the specified delivery stream. -// -// You can use this operation to change the destination type (for example, to -// replace the Amazon S3 destination with Amazon Redshift) or change the parameters -// associated with a destination (for example, to change the bucket name of -// the Amazon S3 destination). The update might not occur immediately. The target -// delivery stream remains active while the configurations are updated, so data -// writes to the delivery stream can continue during this process. The updated -// configurations are usually effective within a few minutes. +// StartDeliveryStreamEncryption API operation for Amazon Kinesis Firehose. 
// -// Note that switching between Amazon ES and other services is not supported. -// For an Amazon ES destination, you can only update to another Amazon ES destination. +// Enables server-side encryption (SSE) for the delivery stream. This operation +// is asynchronous. It returns immediately. When you invoke it, Kinesis Firehose +// first sets the status of the stream to ENABLING then to ENABLED. You can +// continue to read and write data to your stream while its status is ENABLING +// but they won't get encrypted. It can take up to 5 seconds after the encryption +// status changes to ENABLED before all records written to the delivery stream +// are encrypted. // -// If the destination type is the same, Kinesis Firehose merges the configuration -// parameters specified with the destination configuration that already exists -// on the delivery stream. If any of the parameters are not specified in the -// call, the existing values are retained. For example, in the Amazon S3 destination, -// if EncryptionConfiguration is not specified, then the existing EncryptionConfiguration -// is maintained on the destination. +// To check the encryption state of a delivery stream, use DescribeDeliveryStream. // -// If the destination type is not the same, for example, changing the destination -// from Amazon S3 to Amazon Redshift, Kinesis Firehose does not merge any parameters. -// In this case, all parameters must be specified. +// You can only enable SSE for a delivery stream that uses DirectPut as its +// source. // -// Kinesis Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions -// and conflicting merges. This is a required field, and the service updates -// the configuration only if the existing configuration has a version ID that -// matches. After the update is applied successfully, the version ID is updated, -// and can be retrieved using DescribeDeliveryStream. Use the new version ID -// to set CurrentDeliveryStreamVersionId in the next call. +// The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations +// have a combined limit of 25 calls per delivery stream per 24 hours. For example, +// you reach the limit if you call StartDeliveryStreamEncryption thirteen times +// and StopDeliveryStreamEncryption twelve times for the same stream in a 24-hour +// period. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Kinesis Firehose's -// API operation UpdateDestination for usage and error information. +// API operation StartDeliveryStreamEncryption for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidArgumentException "InvalidArgumentException" -// The specified input parameter has a value that is not valid. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. // // * ErrCodeResourceInUseException "ResourceInUseException" // The resource is already in use and not available for this operation. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource could not be found. +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// The specified input parameter has a value that is not valid. // -// * ErrCodeConcurrentModificationException "ConcurrentModificationException" -// Another modification has already happened. 
Fetch VersionId again and use -// it to update the destination. +// * ErrCodeLimitExceededException "LimitExceededException" +// You have already reached the limit for a requested resource. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/UpdateDestination -func (c *Firehose) UpdateDestination(input *UpdateDestinationInput) (*UpdateDestinationOutput, error) { - req, out := c.UpdateDestinationRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/StartDeliveryStreamEncryption +func (c *Firehose) StartDeliveryStreamEncryption(input *StartDeliveryStreamEncryptionInput) (*StartDeliveryStreamEncryptionOutput, error) { + req, out := c.StartDeliveryStreamEncryptionRequest(input) return out, req.Send() } -// UpdateDestinationWithContext is the same as UpdateDestination with the addition of +// StartDeliveryStreamEncryptionWithContext is the same as StartDeliveryStreamEncryption with the addition of // the ability to pass a context and additional request options. // -// See UpdateDestination for details on how to use this API operation. +// See StartDeliveryStreamEncryption for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Firehose) UpdateDestinationWithContext(ctx aws.Context, input *UpdateDestinationInput, opts ...request.Option) (*UpdateDestinationOutput, error) { - req, out := c.UpdateDestinationRequest(input) +func (c *Firehose) StartDeliveryStreamEncryptionWithContext(ctx aws.Context, input *StartDeliveryStreamEncryptionInput, opts ...request.Option) (*StartDeliveryStreamEncryptionOutput, error) { + req, out := c.StartDeliveryStreamEncryptionRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// Describes hints for the buffering to perform before delivering data to the -// destination. Please note that these options are treated as hints, and therefore -// Kinesis Firehose may choose to use different values when it is optimal. -type BufferingHints struct { - _ struct{} `type:"structure"` - - // Buffer incoming data for the specified period of time, in seconds, before - // delivering it to the destination. The default value is 300. - IntervalInSeconds *int64 `min:"60" type:"integer"` - - // Buffer incoming data to the specified size, in MBs, before delivering it - // to the destination. The default value is 5. - // - // We recommend setting this parameter to a value greater than the amount of - // data you typically ingest into the delivery stream in 10 seconds. For example, - // if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher. - SizeInMBs *int64 `min:"1" type:"integer"` -} - -// String returns the string representation -func (s BufferingHints) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s BufferingHints) GoString() string { - return s.String() -} +const opStopDeliveryStreamEncryption = "StopDeliveryStreamEncryption" -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *BufferingHints) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "BufferingHints"} - if s.IntervalInSeconds != nil && *s.IntervalInSeconds < 60 { - invalidParams.Add(request.NewErrParamMinValue("IntervalInSeconds", 60)) - } - if s.SizeInMBs != nil && *s.SizeInMBs < 1 { - invalidParams.Add(request.NewErrParamMinValue("SizeInMBs", 1)) +// StopDeliveryStreamEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the StopDeliveryStreamEncryption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopDeliveryStreamEncryption for more information on using the StopDeliveryStreamEncryption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopDeliveryStreamEncryptionRequest method. +// req, resp := client.StopDeliveryStreamEncryptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/StopDeliveryStreamEncryption +func (c *Firehose) StopDeliveryStreamEncryptionRequest(input *StopDeliveryStreamEncryptionInput) (req *request.Request, output *StopDeliveryStreamEncryptionOutput) { + op := &request.Operation{ + Name: opStopDeliveryStreamEncryption, + HTTPMethod: "POST", + HTTPPath: "/", } - if invalidParams.Len() > 0 { - return invalidParams + if input == nil { + input = &StopDeliveryStreamEncryptionInput{} } - return nil + + output = &StopDeliveryStreamEncryptionOutput{} + req = c.newRequest(op, input, output) + return } -// SetIntervalInSeconds sets the IntervalInSeconds field's value. -func (s *BufferingHints) SetIntervalInSeconds(v int64) *BufferingHints { - s.IntervalInSeconds = &v - return s +// StopDeliveryStreamEncryption API operation for Amazon Kinesis Firehose. +// +// Disables server-side encryption (SSE) for the delivery stream. This operation +// is asynchronous. It returns immediately. When you invoke it, Kinesis Firehose +// first sets the status of the stream to DISABLING then to DISABLED. You can +// continue to read and write data to your stream while its status is DISABLING. +// It can take up to 5 seconds after the encryption status changes to DISABLED +// before all records written to the delivery stream are no longer subject to +// encryption. +// +// To check the encryption state of a delivery stream, use DescribeDeliveryStream. +// +// The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations +// have a combined limit of 25 calls per delivery stream per 24 hours. For example, +// you reach the limit if you call StartDeliveryStreamEncryption thirteen times +// and StopDeliveryStreamEncryption twelve times for the same stream in a 24-hour +// period. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Firehose's +// API operation StopDeliveryStreamEncryption for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The resource is already in use and not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// The specified input parameter has a value that is not valid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have already reached the limit for a requested resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/StopDeliveryStreamEncryption +func (c *Firehose) StopDeliveryStreamEncryption(input *StopDeliveryStreamEncryptionInput) (*StopDeliveryStreamEncryptionOutput, error) { + req, out := c.StopDeliveryStreamEncryptionRequest(input) + return out, req.Send() } -// SetSizeInMBs sets the SizeInMBs field's value. -func (s *BufferingHints) SetSizeInMBs(v int64) *BufferingHints { - s.SizeInMBs = &v - return s +// StopDeliveryStreamEncryptionWithContext is the same as StopDeliveryStreamEncryption with the addition of +// the ability to pass a context and additional request options. +// +// See StopDeliveryStreamEncryption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Firehose) StopDeliveryStreamEncryptionWithContext(ctx aws.Context, input *StopDeliveryStreamEncryptionInput, opts ...request.Option) (*StopDeliveryStreamEncryptionOutput, error) { + req, out := c.StopDeliveryStreamEncryptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() } -// Describes the Amazon CloudWatch logging options for your delivery stream. -type CloudWatchLoggingOptions struct { - _ struct{} `type:"structure"` +const opTagDeliveryStream = "TagDeliveryStream" - // Enables or disables CloudWatch logging. - Enabled *bool `type:"boolean"` +// TagDeliveryStreamRequest generates a "aws/request.Request" representing the +// client's request for the TagDeliveryStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TagDeliveryStream for more information on using the TagDeliveryStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagDeliveryStreamRequest method. +// req, resp := client.TagDeliveryStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/TagDeliveryStream +func (c *Firehose) TagDeliveryStreamRequest(input *TagDeliveryStreamInput) (req *request.Request, output *TagDeliveryStreamOutput) { + op := &request.Operation{ + Name: opTagDeliveryStream, + HTTPMethod: "POST", + HTTPPath: "/", + } - // The CloudWatch group name for logging. This value is required if CloudWatch - // logging is enabled. 
- LogGroupName *string `type:"string"` + if input == nil { + input = &TagDeliveryStreamInput{} + } - // The CloudWatch log stream name for logging. This value is required if CloudWatch - // logging is enabled. - LogStreamName *string `type:"string"` + output = &TagDeliveryStreamOutput{} + req = c.newRequest(op, input, output) + return +} + +// TagDeliveryStream API operation for Amazon Kinesis Firehose. +// +// Adds or updates tags for the specified delivery stream. A tag is a key-value +// pair (the value is optional) that you can define and assign to AWS resources. +// If you specify a tag that already exists, the tag value is replaced with +// the value that you specify in the request. Tags are metadata. For example, +// you can add friendly names and descriptions or other types of information +// that can help you distinguish the delivery stream. For more information about +// tags, see Using Cost Allocation Tags (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) +// in the AWS Billing and Cost Management User Guide. +// +// Each delivery stream can have up to 50 tags. +// +// This operation has a limit of five transactions per second per account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Firehose's +// API operation TagDeliveryStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The resource is already in use and not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// The specified input parameter has a value that is not valid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have already reached the limit for a requested resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/TagDeliveryStream +func (c *Firehose) TagDeliveryStream(input *TagDeliveryStreamInput) (*TagDeliveryStreamOutput, error) { + req, out := c.TagDeliveryStreamRequest(input) + return out, req.Send() +} + +// TagDeliveryStreamWithContext is the same as TagDeliveryStream with the addition of +// the ability to pass a context and additional request options. +// +// See TagDeliveryStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Firehose) TagDeliveryStreamWithContext(ctx aws.Context, input *TagDeliveryStreamInput, opts ...request.Option) (*TagDeliveryStreamOutput, error) { + req, out := c.TagDeliveryStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUntagDeliveryStream = "UntagDeliveryStream" + +// UntagDeliveryStreamRequest generates a "aws/request.Request" representing the +// client's request for the UntagDeliveryStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See UntagDeliveryStream for more information on using the UntagDeliveryStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UntagDeliveryStreamRequest method. +// req, resp := client.UntagDeliveryStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/UntagDeliveryStream +func (c *Firehose) UntagDeliveryStreamRequest(input *UntagDeliveryStreamInput) (req *request.Request, output *UntagDeliveryStreamOutput) { + op := &request.Operation{ + Name: opUntagDeliveryStream, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UntagDeliveryStreamInput{} + } + + output = &UntagDeliveryStreamOutput{} + req = c.newRequest(op, input, output) + return +} + +// UntagDeliveryStream API operation for Amazon Kinesis Firehose. +// +// Removes tags from the specified delivery stream. Removed tags are deleted, +// and you can't recover them after this operation successfully completes. +// +// If you specify a tag that doesn't exist, the operation ignores it. +// +// This operation has a limit of five transactions per second per account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Firehose's +// API operation UntagDeliveryStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The resource is already in use and not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// The specified input parameter has a value that is not valid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have already reached the limit for a requested resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/UntagDeliveryStream +func (c *Firehose) UntagDeliveryStream(input *UntagDeliveryStreamInput) (*UntagDeliveryStreamOutput, error) { + req, out := c.UntagDeliveryStreamRequest(input) + return out, req.Send() +} + +// UntagDeliveryStreamWithContext is the same as UntagDeliveryStream with the addition of +// the ability to pass a context and additional request options. +// +// See UntagDeliveryStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Firehose) UntagDeliveryStreamWithContext(ctx aws.Context, input *UntagDeliveryStreamInput, opts ...request.Option) (*UntagDeliveryStreamOutput, error) { + req, out := c.UntagDeliveryStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opUpdateDestination = "UpdateDestination" + +// UpdateDestinationRequest generates a "aws/request.Request" representing the +// client's request for the UpdateDestination operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateDestination for more information on using the UpdateDestination +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateDestinationRequest method. +// req, resp := client.UpdateDestinationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/UpdateDestination +func (c *Firehose) UpdateDestinationRequest(input *UpdateDestinationInput) (req *request.Request, output *UpdateDestinationOutput) { + op := &request.Operation{ + Name: opUpdateDestination, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateDestinationInput{} + } + + output = &UpdateDestinationOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateDestination API operation for Amazon Kinesis Firehose. +// +// Updates the specified destination of the specified delivery stream. +// +// Use this operation to change the destination type (for example, to replace +// the Amazon S3 destination with Amazon Redshift) or change the parameters +// associated with a destination (for example, to change the bucket name of +// the Amazon S3 destination). The update might not occur immediately. The target +// delivery stream remains active while the configurations are updated, so data +// writes to the delivery stream can continue during this process. The updated +// configurations are usually effective within a few minutes. +// +// Switching between Amazon ES and other services is not supported. For an Amazon +// ES destination, you can only update to another Amazon ES destination. +// +// If the destination type is the same, Kinesis Data Firehose merges the configuration +// parameters specified with the destination configuration that already exists +// on the delivery stream. If any of the parameters are not specified in the +// call, the existing values are retained. For example, in the Amazon S3 destination, +// if EncryptionConfiguration is not specified, then the existing EncryptionConfiguration +// is maintained on the destination. +// +// If the destination type is not the same, for example, changing the destination +// from Amazon S3 to Amazon Redshift, Kinesis Data Firehose does not merge any +// parameters. In this case, all parameters must be specified. +// +// Kinesis Data Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions +// and conflicting merges. This is a required field, and the service updates +// the configuration only if the existing configuration has a version ID that +// matches. After the update is applied successfully, the version ID is updated, +// and can be retrieved using DescribeDeliveryStream. Use the new version ID +// to set CurrentDeliveryStreamVersionId in the next call. 
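The `WithContext` variants accept any context for request cancellation. A sketch of removing tags with a deadline, assuming `UntagDeliveryStreamInput` carries `DeliveryStreamName` and a `TagKeys` list (both values here are placeholders):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	svc := firehose.New(session.Must(session.NewSession()))

	// Cancel the request if it has not completed within 30 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Removing a tag key that does not exist is ignored by the service.
	_, err := svc.UntagDeliveryStreamWithContext(ctx, &firehose.UntagDeliveryStreamInput{
		DeliveryStreamName: aws.String("example-stream"),
		TagKeys:            []*string{aws.String("Environment")},
	})
	if err != nil {
		log.Fatalf("UntagDeliveryStream failed: %v", err)
	}
}
```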
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Firehose's +// API operation UpdateDestination for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// The specified input parameter has a value that is not valid. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The resource is already in use and not available for this operation. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource could not be found. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Another modification has already happened. Fetch VersionId again and use +// it to update the destination. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04/UpdateDestination +func (c *Firehose) UpdateDestination(input *UpdateDestinationInput) (*UpdateDestinationOutput, error) { + req, out := c.UpdateDestinationRequest(input) + return out, req.Send() +} + +// UpdateDestinationWithContext is the same as UpdateDestination with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateDestination for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Firehose) UpdateDestinationWithContext(ctx aws.Context, input *UpdateDestinationInput, opts ...request.Option) (*UpdateDestinationOutput, error) { + req, out := c.UpdateDestinationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Describes hints for the buffering to perform before delivering data to the +// destination. These options are treated as hints, and therefore Kinesis Data +// Firehose might choose to use different values when it is optimal. +type BufferingHints struct { + _ struct{} `type:"structure"` + + // Buffer incoming data for the specified period of time, in seconds, before + // delivering it to the destination. The default value is 300. + IntervalInSeconds *int64 `min:"60" type:"integer"` + + // Buffer incoming data to the specified size, in MBs, before delivering it + // to the destination. The default value is 5. + // + // We recommend setting this parameter to a value greater than the amount of + // data you typically ingest into the delivery stream in 10 seconds. For example, + // if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher. + SizeInMBs *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s BufferingHints) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BufferingHints) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
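The version-id handshake described for `UpdateDestination` can be scripted directly: read `VersionId` and the destination ID from `DescribeDeliveryStream`, then pass the version back as `CurrentDeliveryStreamVersionId`. A hedged sketch, assuming `UpdateDestinationInput` exposes `DeliveryStreamName`, `CurrentDeliveryStreamVersionId`, `DestinationId`, and an `ExtendedS3DestinationUpdate` field (the buffering values are arbitrary):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	svc := firehose.New(session.Must(session.NewSession()))
	name := aws.String("example-stream")

	// Fetch the current version ID and destination ID first; the update is only
	// applied if the version ID still matches when the call is made.
	desc, err := svc.DescribeDeliveryStream(&firehose.DescribeDeliveryStreamInput{
		DeliveryStreamName: name,
	})
	if err != nil {
		log.Fatal(err)
	}
	stream := desc.DeliveryStreamDescription
	if len(stream.Destinations) == 0 {
		log.Fatal("delivery stream has no destinations")
	}

	_, err = svc.UpdateDestination(&firehose.UpdateDestinationInput{
		DeliveryStreamName:             name,
		CurrentDeliveryStreamVersionId: stream.VersionId,
		DestinationId:                  stream.Destinations[0].DestinationId,
		// Same destination type, so only the supplied fields are merged; the rest
		// of the existing extended S3 configuration is retained.
		ExtendedS3DestinationUpdate: &firehose.ExtendedS3DestinationUpdate{
			BufferingHints: &firehose.BufferingHints{
				IntervalInSeconds: aws.Int64(120),
				SizeInMBs:         aws.Int64(10),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```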
+func (s *BufferingHints) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BufferingHints"} + if s.IntervalInSeconds != nil && *s.IntervalInSeconds < 60 { + invalidParams.Add(request.NewErrParamMinValue("IntervalInSeconds", 60)) + } + if s.SizeInMBs != nil && *s.SizeInMBs < 1 { + invalidParams.Add(request.NewErrParamMinValue("SizeInMBs", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIntervalInSeconds sets the IntervalInSeconds field's value. +func (s *BufferingHints) SetIntervalInSeconds(v int64) *BufferingHints { + s.IntervalInSeconds = &v + return s +} + +// SetSizeInMBs sets the SizeInMBs field's value. +func (s *BufferingHints) SetSizeInMBs(v int64) *BufferingHints { + s.SizeInMBs = &v + return s +} + +// Describes the Amazon CloudWatch logging options for your delivery stream. +type CloudWatchLoggingOptions struct { + _ struct{} `type:"structure"` + + // Enables or disables CloudWatch logging. + Enabled *bool `type:"boolean"` + + // The CloudWatch group name for logging. This value is required if CloudWatch + // logging is enabled. + LogGroupName *string `type:"string"` + + // The CloudWatch log stream name for logging. This value is required if CloudWatch + // logging is enabled. + LogStreamName *string `type:"string"` +} + +// String returns the string representation +func (s CloudWatchLoggingOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudWatchLoggingOptions) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. +func (s *CloudWatchLoggingOptions) SetEnabled(v bool) *CloudWatchLoggingOptions { + s.Enabled = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *CloudWatchLoggingOptions) SetLogGroupName(v string) *CloudWatchLoggingOptions { + s.LogGroupName = &v + return s +} + +// SetLogStreamName sets the LogStreamName field's value. +func (s *CloudWatchLoggingOptions) SetLogStreamName(v string) *CloudWatchLoggingOptions { + s.LogStreamName = &v + return s +} + +// Describes a COPY command for Amazon Redshift. +type CopyCommand struct { + _ struct{} `type:"structure"` + + // Optional parameters to use with the Amazon Redshift COPY command. For more + // information, see the "Optional Parameters" section of Amazon Redshift COPY + // command (http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html). Some + // possible examples that would apply to Kinesis Data Firehose are as follows: + // + // delimiter '\t' lzop; - fields are delimited with "\t" (TAB character) and + // compressed using lzop. + // + // delimiter '|' - fields are delimited with "|" (this is the default delimiter). + // + // delimiter '|' escape - the delimiter should be escaped. + // + // fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6' + // - fields are fixed width in the source, with each width specified after every + // column in the table. + // + // JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the path + // specified is the format of the data. + // + // For more examples, see Amazon Redshift COPY command examples (http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html). + CopyOptions *string `type:"string"` + + // A comma-separated list of column names. + DataTableColumns *string `type:"string"` + + // The name of the target table. The table must already exist in the database. 
+ // + // DataTableName is a required field + DataTableName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CopyCommand) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyCommand) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CopyCommand) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CopyCommand"} + if s.DataTableName == nil { + invalidParams.Add(request.NewErrParamRequired("DataTableName")) + } + if s.DataTableName != nil && len(*s.DataTableName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DataTableName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCopyOptions sets the CopyOptions field's value. +func (s *CopyCommand) SetCopyOptions(v string) *CopyCommand { + s.CopyOptions = &v + return s +} + +// SetDataTableColumns sets the DataTableColumns field's value. +func (s *CopyCommand) SetDataTableColumns(v string) *CopyCommand { + s.DataTableColumns = &v + return s +} + +// SetDataTableName sets the DataTableName field's value. +func (s *CopyCommand) SetDataTableName(v string) *CopyCommand { + s.DataTableName = &v + return s +} + +type CreateDeliveryStreamInput struct { + _ struct{} `type:"structure"` + + // The name of the delivery stream. This name must be unique per AWS account + // in the same AWS Region. If the delivery streams are in different accounts + // or different Regions, you can have multiple delivery streams with the same + // name. + // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` + + // The delivery stream type. This parameter can be one of the following values: + // + // * DirectPut: Provider applications access the delivery stream directly. + // + // * KinesisStreamAsSource: The delivery stream uses a Kinesis data stream + // as a source. + DeliveryStreamType *string `type:"string" enum:"DeliveryStreamType"` + + // The destination in Amazon ES. You can specify only one destination. + ElasticsearchDestinationConfiguration *ElasticsearchDestinationConfiguration `type:"structure"` + + // The destination in Amazon S3. You can specify only one destination. + ExtendedS3DestinationConfiguration *ExtendedS3DestinationConfiguration `type:"structure"` + + // When a Kinesis data stream is used as the source for the delivery stream, + // a KinesisStreamSourceConfiguration containing the Kinesis data stream Amazon + // Resource Name (ARN) and the role ARN for the source stream. + KinesisStreamSourceConfiguration *KinesisStreamSourceConfiguration `type:"structure"` + + // The destination in Amazon Redshift. You can specify only one destination. + RedshiftDestinationConfiguration *RedshiftDestinationConfiguration `type:"structure"` + + // [Deprecated] The destination in Amazon S3. You can specify only one destination. + // + // Deprecated: S3DestinationConfiguration has been deprecated + S3DestinationConfiguration *S3DestinationConfiguration `deprecated:"true" type:"structure"` + + // The destination in Splunk. You can specify only one destination. + SplunkDestinationConfiguration *SplunkDestinationConfiguration `type:"structure"` + + // A set of tags to assign to the delivery stream. A tag is a key-value pair + // that you can define and assign to AWS resources. Tags are metadata. 
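`CopyCommand` mirrors the Redshift COPY options listed in its doc comment. A small sketch of a command for tab-delimited, lzop-compressed data (the table and column names are placeholders), including the `Validate` call that enforces the required `DataTableName`:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	// Tab-delimited, lzop-compressed input; the target table must already exist.
	cmd := &firehose.CopyCommand{
		DataTableName:    aws.String("firehose_test_table"),
		DataTableColumns: aws.String("ticker_symbol,sector,price"),
		CopyOptions:      aws.String("delimiter '\\t' lzop;"),
	}

	// Validate enforces the required DataTableName and its minimum length of 1.
	if err := cmd.Validate(); err != nil {
		log.Fatalf("invalid COPY command: %v", err)
	}
}
```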
For example, + // you can add friendly names and descriptions or other types of information + // that can help you distinguish the delivery stream. For more information about + // tags, see Using Cost Allocation Tags (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) + // in the AWS Billing and Cost Management User Guide. + // + // You can specify up to 50 tags when creating a delivery stream. + Tags []*Tag `min:"1" type:"list"` +} + +// String returns the string representation +func (s CreateDeliveryStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDeliveryStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDeliveryStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDeliveryStreamInput"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) + } + if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) + } + if s.Tags != nil && len(s.Tags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Tags", 1)) + } + if s.ElasticsearchDestinationConfiguration != nil { + if err := s.ElasticsearchDestinationConfiguration.Validate(); err != nil { + invalidParams.AddNested("ElasticsearchDestinationConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.ExtendedS3DestinationConfiguration != nil { + if err := s.ExtendedS3DestinationConfiguration.Validate(); err != nil { + invalidParams.AddNested("ExtendedS3DestinationConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.KinesisStreamSourceConfiguration != nil { + if err := s.KinesisStreamSourceConfiguration.Validate(); err != nil { + invalidParams.AddNested("KinesisStreamSourceConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.RedshiftDestinationConfiguration != nil { + if err := s.RedshiftDestinationConfiguration.Validate(); err != nil { + invalidParams.AddNested("RedshiftDestinationConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.S3DestinationConfiguration != nil { + if err := s.S3DestinationConfiguration.Validate(); err != nil { + invalidParams.AddNested("S3DestinationConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.SplunkDestinationConfiguration != nil { + if err := s.SplunkDestinationConfiguration.Validate(); err != nil { + invalidParams.AddNested("SplunkDestinationConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *CreateDeliveryStreamInput) SetDeliveryStreamName(v string) *CreateDeliveryStreamInput { + s.DeliveryStreamName = &v + return s +} + +// SetDeliveryStreamType sets the DeliveryStreamType field's value. +func (s *CreateDeliveryStreamInput) SetDeliveryStreamType(v string) *CreateDeliveryStreamInput { + s.DeliveryStreamType = &v + return s +} + +// SetElasticsearchDestinationConfiguration sets the ElasticsearchDestinationConfiguration field's value. 
+func (s *CreateDeliveryStreamInput) SetElasticsearchDestinationConfiguration(v *ElasticsearchDestinationConfiguration) *CreateDeliveryStreamInput { + s.ElasticsearchDestinationConfiguration = v + return s +} + +// SetExtendedS3DestinationConfiguration sets the ExtendedS3DestinationConfiguration field's value. +func (s *CreateDeliveryStreamInput) SetExtendedS3DestinationConfiguration(v *ExtendedS3DestinationConfiguration) *CreateDeliveryStreamInput { + s.ExtendedS3DestinationConfiguration = v + return s +} + +// SetKinesisStreamSourceConfiguration sets the KinesisStreamSourceConfiguration field's value. +func (s *CreateDeliveryStreamInput) SetKinesisStreamSourceConfiguration(v *KinesisStreamSourceConfiguration) *CreateDeliveryStreamInput { + s.KinesisStreamSourceConfiguration = v + return s +} + +// SetRedshiftDestinationConfiguration sets the RedshiftDestinationConfiguration field's value. +func (s *CreateDeliveryStreamInput) SetRedshiftDestinationConfiguration(v *RedshiftDestinationConfiguration) *CreateDeliveryStreamInput { + s.RedshiftDestinationConfiguration = v + return s +} + +// SetS3DestinationConfiguration sets the S3DestinationConfiguration field's value. +func (s *CreateDeliveryStreamInput) SetS3DestinationConfiguration(v *S3DestinationConfiguration) *CreateDeliveryStreamInput { + s.S3DestinationConfiguration = v + return s +} + +// SetSplunkDestinationConfiguration sets the SplunkDestinationConfiguration field's value. +func (s *CreateDeliveryStreamInput) SetSplunkDestinationConfiguration(v *SplunkDestinationConfiguration) *CreateDeliveryStreamInput { + s.SplunkDestinationConfiguration = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDeliveryStreamInput) SetTags(v []*Tag) *CreateDeliveryStreamInput { + s.Tags = v + return s +} + +type CreateDeliveryStreamOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the delivery stream. + DeliveryStreamARN *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateDeliveryStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDeliveryStreamOutput) GoString() string { + return s.String() +} + +// SetDeliveryStreamARN sets the DeliveryStreamARN field's value. +func (s *CreateDeliveryStreamOutput) SetDeliveryStreamARN(v string) *CreateDeliveryStreamOutput { + s.DeliveryStreamARN = &v + return s +} + +// Specifies that you want Kinesis Data Firehose to convert data from the JSON +// format to the Parquet or ORC format before writing it to Amazon S3. Kinesis +// Data Firehose uses the serializer and deserializer that you specify, in addition +// to the column information from the AWS Glue table, to deserialize your input +// data from JSON and then serialize it to the Parquet or ORC format. For more +// information, see Kinesis Data Firehose Record Format Conversion (https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html). +type DataFormatConversionConfiguration struct { + _ struct{} `type:"structure"` + + // Defaults to true. Set it to false if you want to disable format conversion + // while preserving the configuration details. + Enabled *bool `type:"boolean"` + + // Specifies the deserializer that you want Kinesis Data Firehose to use to + // convert the format of your data from JSON. 
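Putting the pieces above together, a hedged sketch of creating a `DirectPut` stream with an extended S3 destination, buffering hints, CloudWatch logging, and a tag. The ARNs, names, and log group are placeholders, and `ExtendedS3DestinationConfiguration`'s `BucketARN`/`RoleARN` fields are assumed from their definitions elsewhere in this file:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	svc := firehose.New(session.Must(session.NewSession()))

	out, err := svc.CreateDeliveryStream(&firehose.CreateDeliveryStreamInput{
		DeliveryStreamName: aws.String("example-stream"),
		DeliveryStreamType: aws.String(firehose.DeliveryStreamTypeDirectPut),
		ExtendedS3DestinationConfiguration: &firehose.ExtendedS3DestinationConfiguration{
			BucketARN: aws.String("arn:aws:s3:::example-firehose-bucket"),
			RoleARN:   aws.String("arn:aws:iam::123456789012:role/firehose-delivery-role"),
			// Flush either every 5 minutes or every 10 MB, whichever comes first.
			BufferingHints: &firehose.BufferingHints{
				IntervalInSeconds: aws.Int64(300),
				SizeInMBs:         aws.Int64(10),
			},
			CloudWatchLoggingOptions: &firehose.CloudWatchLoggingOptions{
				Enabled:       aws.Bool(true),
				LogGroupName:  aws.String("/aws/kinesisfirehose/example-stream"),
				LogStreamName: aws.String("S3Delivery"),
			},
		},
		Tags: []*firehose.Tag{
			{Key: aws.String("Environment"), Value: aws.String("test")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(out.DeliveryStreamARN))
}
```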
+ InputFormatConfiguration *InputFormatConfiguration `type:"structure"` + + // Specifies the serializer that you want Kinesis Data Firehose to use to convert + // the format of your data to the Parquet or ORC format. + OutputFormatConfiguration *OutputFormatConfiguration `type:"structure"` + + // Specifies the AWS Glue Data Catalog table that contains the column information. + SchemaConfiguration *SchemaConfiguration `type:"structure"` +} + +// String returns the string representation +func (s DataFormatConversionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DataFormatConversionConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DataFormatConversionConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DataFormatConversionConfiguration"} + if s.OutputFormatConfiguration != nil { + if err := s.OutputFormatConfiguration.Validate(); err != nil { + invalidParams.AddNested("OutputFormatConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnabled sets the Enabled field's value. +func (s *DataFormatConversionConfiguration) SetEnabled(v bool) *DataFormatConversionConfiguration { + s.Enabled = &v + return s +} + +// SetInputFormatConfiguration sets the InputFormatConfiguration field's value. +func (s *DataFormatConversionConfiguration) SetInputFormatConfiguration(v *InputFormatConfiguration) *DataFormatConversionConfiguration { + s.InputFormatConfiguration = v + return s +} + +// SetOutputFormatConfiguration sets the OutputFormatConfiguration field's value. +func (s *DataFormatConversionConfiguration) SetOutputFormatConfiguration(v *OutputFormatConfiguration) *DataFormatConversionConfiguration { + s.OutputFormatConfiguration = v + return s +} + +// SetSchemaConfiguration sets the SchemaConfiguration field's value. +func (s *DataFormatConversionConfiguration) SetSchemaConfiguration(v *SchemaConfiguration) *DataFormatConversionConfiguration { + s.SchemaConfiguration = v + return s +} + +type DeleteDeliveryStreamInput struct { + _ struct{} `type:"structure"` + + // The name of the delivery stream. + // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDeliveryStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDeliveryStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDeliveryStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDeliveryStreamInput"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) + } + if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeliveryStreamName sets the DeliveryStreamName field's value. 
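Record format conversion wires a deserializer, a serializer, and a Glue schema together. A sketch of converting JSON input to Parquet; the Glue database, table, role ARN, and region are placeholders, and the `InputFormatConfiguration`/`OutputFormatConfiguration`/`SchemaConfiguration` field names are assumed from their definitions elsewhere in this file:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/firehose"
)

// jsonToParquetConversion builds a conversion block that reads JSON with the
// OpenX SerDe and writes Parquet using the column layout of an AWS Glue table.
func jsonToParquetConversion() *firehose.DataFormatConversionConfiguration {
	return &firehose.DataFormatConversionConfiguration{
		Enabled: aws.Bool(true),
		InputFormatConfiguration: &firehose.InputFormatConfiguration{
			Deserializer: &firehose.Deserializer{
				OpenXJsonSerDe: &firehose.OpenXJsonSerDe{},
			},
		},
		OutputFormatConfiguration: &firehose.OutputFormatConfiguration{
			Serializer: &firehose.Serializer{
				ParquetSerDe: &firehose.ParquetSerDe{},
			},
		},
		SchemaConfiguration: &firehose.SchemaConfiguration{
			DatabaseName: aws.String("example_glue_db"),
			TableName:    aws.String("example_glue_table"),
			RoleARN:      aws.String("arn:aws:iam::123456789012:role/firehose-delivery-role"),
			Region:       aws.String("us-east-1"),
		},
	}
}

func main() {
	// The returned value would typically be assigned to
	// ExtendedS3DestinationConfiguration.DataFormatConversionConfiguration.
	_ = jsonToParquetConversion()
}
```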
+func (s *DeleteDeliveryStreamInput) SetDeliveryStreamName(v string) *DeleteDeliveryStreamInput { + s.DeliveryStreamName = &v + return s +} + +type DeleteDeliveryStreamOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDeliveryStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDeliveryStreamOutput) GoString() string { + return s.String() +} + +// Contains information about a delivery stream. +type DeliveryStreamDescription struct { + _ struct{} `type:"structure"` + + // The date and time that the delivery stream was created. + CreateTimestamp *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the delivery stream. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + // + // DeliveryStreamARN is a required field + DeliveryStreamARN *string `min:"1" type:"string" required:"true"` + + // Indicates the server-side encryption (SSE) status for the delivery stream. + DeliveryStreamEncryptionConfiguration *DeliveryStreamEncryptionConfiguration `type:"structure"` + + // The name of the delivery stream. + // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` + + // The status of the delivery stream. + // + // DeliveryStreamStatus is a required field + DeliveryStreamStatus *string `type:"string" required:"true" enum:"DeliveryStreamStatus"` + + // The delivery stream type. This can be one of the following values: + // + // * DirectPut: Provider applications access the delivery stream directly. + // + // * KinesisStreamAsSource: The delivery stream uses a Kinesis data stream + // as a source. + // + // DeliveryStreamType is a required field + DeliveryStreamType *string `type:"string" required:"true" enum:"DeliveryStreamType"` + + // The destinations. + // + // Destinations is a required field + Destinations []*DestinationDescription `type:"list" required:"true"` + + // Indicates whether there are more destinations available to list. + // + // HasMoreDestinations is a required field + HasMoreDestinations *bool `type:"boolean" required:"true"` + + // The date and time that the delivery stream was last updated. + LastUpdateTimestamp *time.Time `type:"timestamp"` + + // If the DeliveryStreamType parameter is KinesisStreamAsSource, a SourceDescription + // object describing the source Kinesis data stream. + Source *SourceDescription `type:"structure"` + + // Each time the destination is updated for a delivery stream, the version ID + // is changed, and the current version ID is required when updating the destination. + // This is so that the service knows it is applying the changes to the correct + // version of the delivery stream. + // + // VersionId is a required field + VersionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeliveryStreamDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeliveryStreamDescription) GoString() string { + return s.String() +} + +// SetCreateTimestamp sets the CreateTimestamp field's value. +func (s *DeliveryStreamDescription) SetCreateTimestamp(v time.Time) *DeliveryStreamDescription { + s.CreateTimestamp = &v + return s +} + +// SetDeliveryStreamARN sets the DeliveryStreamARN field's value. 
+func (s *DeliveryStreamDescription) SetDeliveryStreamARN(v string) *DeliveryStreamDescription { + s.DeliveryStreamARN = &v + return s +} + +// SetDeliveryStreamEncryptionConfiguration sets the DeliveryStreamEncryptionConfiguration field's value. +func (s *DeliveryStreamDescription) SetDeliveryStreamEncryptionConfiguration(v *DeliveryStreamEncryptionConfiguration) *DeliveryStreamDescription { + s.DeliveryStreamEncryptionConfiguration = v + return s +} + +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *DeliveryStreamDescription) SetDeliveryStreamName(v string) *DeliveryStreamDescription { + s.DeliveryStreamName = &v + return s +} + +// SetDeliveryStreamStatus sets the DeliveryStreamStatus field's value. +func (s *DeliveryStreamDescription) SetDeliveryStreamStatus(v string) *DeliveryStreamDescription { + s.DeliveryStreamStatus = &v + return s +} + +// SetDeliveryStreamType sets the DeliveryStreamType field's value. +func (s *DeliveryStreamDescription) SetDeliveryStreamType(v string) *DeliveryStreamDescription { + s.DeliveryStreamType = &v + return s +} + +// SetDestinations sets the Destinations field's value. +func (s *DeliveryStreamDescription) SetDestinations(v []*DestinationDescription) *DeliveryStreamDescription { + s.Destinations = v + return s +} + +// SetHasMoreDestinations sets the HasMoreDestinations field's value. +func (s *DeliveryStreamDescription) SetHasMoreDestinations(v bool) *DeliveryStreamDescription { + s.HasMoreDestinations = &v + return s +} + +// SetLastUpdateTimestamp sets the LastUpdateTimestamp field's value. +func (s *DeliveryStreamDescription) SetLastUpdateTimestamp(v time.Time) *DeliveryStreamDescription { + s.LastUpdateTimestamp = &v + return s +} + +// SetSource sets the Source field's value. +func (s *DeliveryStreamDescription) SetSource(v *SourceDescription) *DeliveryStreamDescription { + s.Source = v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *DeliveryStreamDescription) SetVersionId(v string) *DeliveryStreamDescription { + s.VersionId = &v + return s +} + +// Indicates the server-side encryption (SSE) status for the delivery stream. +type DeliveryStreamEncryptionConfiguration struct { + _ struct{} `type:"structure"` + + // For a full description of the different values of this status, see StartDeliveryStreamEncryption + // and StopDeliveryStreamEncryption. + Status *string `type:"string" enum:"DeliveryStreamEncryptionStatus"` } // String returns the string representation -func (s CloudWatchLoggingOptions) String() string { +func (s DeliveryStreamEncryptionConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CloudWatchLoggingOptions) GoString() string { +func (s DeliveryStreamEncryptionConfiguration) GoString() string { return s.String() } -// SetEnabled sets the Enabled field's value. -func (s *CloudWatchLoggingOptions) SetEnabled(v bool) *CloudWatchLoggingOptions { - s.Enabled = &v +// SetStatus sets the Status field's value. +func (s *DeliveryStreamEncryptionConfiguration) SetStatus(v string) *DeliveryStreamEncryptionConfiguration { + s.Status = &v return s } -// SetLogGroupName sets the LogGroupName field's value. -func (s *CloudWatchLoggingOptions) SetLogGroupName(v string) *CloudWatchLoggingOptions { - s.LogGroupName = &v +type DescribeDeliveryStreamInput struct { + _ struct{} `type:"structure"` + + // The name of the delivery stream. 
+ // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` + + // The ID of the destination to start returning the destination information. + // Kinesis Data Firehose supports one destination per delivery stream. + ExclusiveStartDestinationId *string `min:"1" type:"string"` + + // The limit on the number of destinations to return. You can have one destination + // per delivery stream. + Limit *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s DescribeDeliveryStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDeliveryStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDeliveryStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDeliveryStreamInput"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) + } + if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) + } + if s.ExclusiveStartDestinationId != nil && len(*s.ExclusiveStartDestinationId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartDestinationId", 1)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *DescribeDeliveryStreamInput) SetDeliveryStreamName(v string) *DescribeDeliveryStreamInput { + s.DeliveryStreamName = &v return s } -// SetLogStreamName sets the LogStreamName field's value. -func (s *CloudWatchLoggingOptions) SetLogStreamName(v string) *CloudWatchLoggingOptions { - s.LogStreamName = &v +// SetExclusiveStartDestinationId sets the ExclusiveStartDestinationId field's value. +func (s *DescribeDeliveryStreamInput) SetExclusiveStartDestinationId(v string) *DescribeDeliveryStreamInput { + s.ExclusiveStartDestinationId = &v return s } -// Describes a COPY command for Amazon Redshift. -type CopyCommand struct { +// SetLimit sets the Limit field's value. +func (s *DescribeDeliveryStreamInput) SetLimit(v int64) *DescribeDeliveryStreamInput { + s.Limit = &v + return s +} + +type DescribeDeliveryStreamOutput struct { _ struct{} `type:"structure"` - // Optional parameters to use with the Amazon Redshift COPY command. For more - // information, see the "Optional Parameters" section of Amazon Redshift COPY - // command (http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html). Some - // possible examples that would apply to Kinesis Firehose are as follows: - // - // delimiter '\t' lzop; - fields are delimited with "\t" (TAB character) and - // compressed using lzop. - // - // delimiter '|' - fields are delimited with "|" (this is the default delimiter). - // - // delimiter '|' escape - the delimiter should be escaped. - // - // fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6' - // - fields are fixed width in the source, with each width specified after every - // column in the table. + // Information about the delivery stream. // - // JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the path - // specified is the format of the data. 
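`DescribeDeliveryStream` is also the natural way to wait for a newly created stream to become usable. A sketch that polls `DeliveryStreamStatus` until it reports `ACTIVE` (the stream name and the 20-attempt budget are arbitrary):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	svc := firehose.New(session.Must(session.NewSession()))

	for attempt := 0; attempt < 20; attempt++ {
		out, err := svc.DescribeDeliveryStream(&firehose.DescribeDeliveryStreamInput{
			DeliveryStreamName: aws.String("example-stream"),
			// Firehose supports one destination per delivery stream, so a limit of 1 is enough.
			Limit: aws.Int64(1),
		})
		if err != nil {
			log.Fatal(err)
		}

		status := aws.StringValue(out.DeliveryStreamDescription.DeliveryStreamStatus)
		fmt.Println("status:", status)
		if status == firehose.DeliveryStreamStatusActive {
			return
		}
		time.Sleep(10 * time.Second)
	}
	log.Fatal("delivery stream did not become ACTIVE in time")
}
```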
+ // DeliveryStreamDescription is a required field + DeliveryStreamDescription *DeliveryStreamDescription `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DescribeDeliveryStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDeliveryStreamOutput) GoString() string { + return s.String() +} + +// SetDeliveryStreamDescription sets the DeliveryStreamDescription field's value. +func (s *DescribeDeliveryStreamOutput) SetDeliveryStreamDescription(v *DeliveryStreamDescription) *DescribeDeliveryStreamOutput { + s.DeliveryStreamDescription = v + return s +} + +// The deserializer you want Kinesis Data Firehose to use for converting the +// input data from JSON. Kinesis Data Firehose then serializes the data to its +// final format using the Serializer. Kinesis Data Firehose supports two types +// of deserializers: the Apache Hive JSON SerDe (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-JSON) +// and the OpenX JSON SerDe (https://github.com/rcongiu/Hive-JSON-Serde). +type Deserializer struct { + _ struct{} `type:"structure"` + + // The native Hive / HCatalog JsonSerDe. Used by Kinesis Data Firehose for deserializing + // data, which means converting it from the JSON format in preparation for serializing + // it to the Parquet or ORC format. This is one of two deserializers you can + // choose, depending on which one offers the functionality you need. The other + // option is the OpenX SerDe. + HiveJsonSerDe *HiveJsonSerDe `type:"structure"` + + // The OpenX SerDe. Used by Kinesis Data Firehose for deserializing data, which + // means converting it from the JSON format in preparation for serializing it + // to the Parquet or ORC format. This is one of two deserializers you can choose, + // depending on which one offers the functionality you need. The other option + // is the native Hive / HCatalog JsonSerDe. + OpenXJsonSerDe *OpenXJsonSerDe `type:"structure"` +} + +// String returns the string representation +func (s Deserializer) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Deserializer) GoString() string { + return s.String() +} + +// SetHiveJsonSerDe sets the HiveJsonSerDe field's value. +func (s *Deserializer) SetHiveJsonSerDe(v *HiveJsonSerDe) *Deserializer { + s.HiveJsonSerDe = v + return s +} + +// SetOpenXJsonSerDe sets the OpenXJsonSerDe field's value. +func (s *Deserializer) SetOpenXJsonSerDe(v *OpenXJsonSerDe) *Deserializer { + s.OpenXJsonSerDe = v + return s +} + +// Describes the destination for a delivery stream. +type DestinationDescription struct { + _ struct{} `type:"structure"` + + // The ID of the destination. // - // For more examples, see Amazon Redshift COPY command examples (http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html). - CopyOptions *string `type:"string"` + // DestinationId is a required field + DestinationId *string `min:"1" type:"string" required:"true"` + + // The destination in Amazon ES. + ElasticsearchDestinationDescription *ElasticsearchDestinationDescription `type:"structure"` + + // The destination in Amazon S3. + ExtendedS3DestinationDescription *ExtendedS3DestinationDescription `type:"structure"` + + // The destination in Amazon Redshift. + RedshiftDestinationDescription *RedshiftDestinationDescription `type:"structure"` + + // [Deprecated] The destination in Amazon S3. 
+ S3DestinationDescription *S3DestinationDescription `type:"structure"` + + // The destination in Splunk. + SplunkDestinationDescription *SplunkDestinationDescription `type:"structure"` +} + +// String returns the string representation +func (s DestinationDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DestinationDescription) GoString() string { + return s.String() +} + +// SetDestinationId sets the DestinationId field's value. +func (s *DestinationDescription) SetDestinationId(v string) *DestinationDescription { + s.DestinationId = &v + return s +} + +// SetElasticsearchDestinationDescription sets the ElasticsearchDestinationDescription field's value. +func (s *DestinationDescription) SetElasticsearchDestinationDescription(v *ElasticsearchDestinationDescription) *DestinationDescription { + s.ElasticsearchDestinationDescription = v + return s +} + +// SetExtendedS3DestinationDescription sets the ExtendedS3DestinationDescription field's value. +func (s *DestinationDescription) SetExtendedS3DestinationDescription(v *ExtendedS3DestinationDescription) *DestinationDescription { + s.ExtendedS3DestinationDescription = v + return s +} + +// SetRedshiftDestinationDescription sets the RedshiftDestinationDescription field's value. +func (s *DestinationDescription) SetRedshiftDestinationDescription(v *RedshiftDestinationDescription) *DestinationDescription { + s.RedshiftDestinationDescription = v + return s +} - // A comma-separated list of column names. - DataTableColumns *string `type:"string"` +// SetS3DestinationDescription sets the S3DestinationDescription field's value. +func (s *DestinationDescription) SetS3DestinationDescription(v *S3DestinationDescription) *DestinationDescription { + s.S3DestinationDescription = v + return s +} - // The name of the target table. The table must already exist in the database. +// SetSplunkDestinationDescription sets the SplunkDestinationDescription field's value. +func (s *DestinationDescription) SetSplunkDestinationDescription(v *SplunkDestinationDescription) *DestinationDescription { + s.SplunkDestinationDescription = v + return s +} + +// Describes the buffering to perform before delivering data to the Amazon ES +// destination. +type ElasticsearchBufferingHints struct { + _ struct{} `type:"structure"` + + // Buffer incoming data for the specified period of time, in seconds, before + // delivering it to the destination. The default value is 300 (5 minutes). + IntervalInSeconds *int64 `min:"60" type:"integer"` + + // Buffer incoming data to the specified size, in MBs, before delivering it + // to the destination. The default value is 5. // - // DataTableName is a required field - DataTableName *string `min:"1" type:"string" required:"true"` + // We recommend setting this parameter to a value greater than the amount of + // data you typically ingest into the delivery stream in 10 seconds. For example, + // if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher. + SizeInMBs *int64 `min:"1" type:"integer"` } // String returns the string representation -func (s CopyCommand) String() string { +func (s ElasticsearchBufferingHints) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CopyCommand) GoString() string { +func (s ElasticsearchBufferingHints) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CopyCommand) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CopyCommand"} - if s.DataTableName == nil { - invalidParams.Add(request.NewErrParamRequired("DataTableName")) +func (s *ElasticsearchBufferingHints) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ElasticsearchBufferingHints"} + if s.IntervalInSeconds != nil && *s.IntervalInSeconds < 60 { + invalidParams.Add(request.NewErrParamMinValue("IntervalInSeconds", 60)) } - if s.DataTableName != nil && len(*s.DataTableName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DataTableName", 1)) + if s.SizeInMBs != nil && *s.SizeInMBs < 1 { + invalidParams.Add(request.NewErrParamMinValue("SizeInMBs", 1)) } if invalidParams.Len() > 0 { @@ -949,109 +2205,141 @@ func (s *CopyCommand) Validate() error { return nil } -// SetCopyOptions sets the CopyOptions field's value. -func (s *CopyCommand) SetCopyOptions(v string) *CopyCommand { - s.CopyOptions = &v - return s -} - -// SetDataTableColumns sets the DataTableColumns field's value. -func (s *CopyCommand) SetDataTableColumns(v string) *CopyCommand { - s.DataTableColumns = &v +// SetIntervalInSeconds sets the IntervalInSeconds field's value. +func (s *ElasticsearchBufferingHints) SetIntervalInSeconds(v int64) *ElasticsearchBufferingHints { + s.IntervalInSeconds = &v return s } -// SetDataTableName sets the DataTableName field's value. -func (s *CopyCommand) SetDataTableName(v string) *CopyCommand { - s.DataTableName = &v +// SetSizeInMBs sets the SizeInMBs field's value. +func (s *ElasticsearchBufferingHints) SetSizeInMBs(v int64) *ElasticsearchBufferingHints { + s.SizeInMBs = &v return s } -type CreateDeliveryStreamInput struct { +// Describes the configuration of a destination in Amazon ES. +type ElasticsearchDestinationConfiguration struct { _ struct{} `type:"structure"` - // The name of the delivery stream. This name must be unique per AWS account - // in the same region. If the delivery streams are in different accounts or - // different regions, you can have multiple delivery streams with the same name. - // - // DeliveryStreamName is a required field - DeliveryStreamName *string `min:"1" type:"string" required:"true"` + // The buffering options. If no value is specified, the default values for ElasticsearchBufferingHints + // are used. + BufferingHints *ElasticsearchBufferingHints `type:"structure"` - // The delivery stream type. This parameter can be one of the following values: + // The Amazon CloudWatch logging options for your delivery stream. + CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` + + // The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, + // DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after + // assuming the role specified in RoleARN. For more information, see Amazon + // Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). // - // * DirectPut: Provider applications access the delivery stream directly. + // DomainARN is a required field + DomainARN *string `min:"1" type:"string" required:"true"` + + // The Elasticsearch index name. // - // * KinesisStreamAsSource: The delivery stream uses a Kinesis stream as - // a source. - DeliveryStreamType *string `type:"string" enum:"DeliveryStreamType"` + // IndexName is a required field + IndexName *string `min:"1" type:"string" required:"true"` - // The destination in Amazon ES. 
You can specify only one destination. - ElasticsearchDestinationConfiguration *ElasticsearchDestinationConfiguration `type:"structure"` + // The Elasticsearch index rotation period. Index rotation appends a time stamp + // to the IndexName to facilitate the expiration of old data. For more information, + // see Index Rotation for the Amazon ES Destination (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-index-rotation). + // The default value is OneDay. + IndexRotationPeriod *string `type:"string" enum:"ElasticsearchIndexRotationPeriod"` - // The destination in Amazon S3. You can specify only one destination. - ExtendedS3DestinationConfiguration *ExtendedS3DestinationConfiguration `type:"structure"` + // The data processing configuration. + ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration - // containing the Kinesis stream ARN and the role ARN for the source stream. - KinesisStreamSourceConfiguration *KinesisStreamSourceConfiguration `type:"structure"` + // The retry behavior in case Kinesis Data Firehose is unable to deliver documents + // to Amazon ES. The default value is 300 (5 minutes). + RetryOptions *ElasticsearchRetryOptions `type:"structure"` - // The destination in Amazon Redshift. You can specify only one destination. - RedshiftDestinationConfiguration *RedshiftDestinationConfiguration `type:"structure"` + // The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data + // Firehose for calling the Amazon ES Configuration API and for indexing documents. + // For more information, see Grant Kinesis Data Firehose Access to an Amazon + // S3 Destination (http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3) + // and Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` - // [Deprecated] The destination in Amazon S3. You can specify only one destination. - S3DestinationConfiguration *S3DestinationConfiguration `deprecated:"true" type:"structure"` + // Defines how documents should be delivered to Amazon S3. When it is set to + // FailedDocumentsOnly, Kinesis Data Firehose writes any documents that could + // not be indexed to the configured Amazon S3 destination, with elasticsearch-failed/ + // appended to the key prefix. When set to AllDocuments, Kinesis Data Firehose + // delivers all incoming records to Amazon S3, and also writes failed documents + // with elasticsearch-failed/ appended to the prefix. For more information, + // see Amazon S3 Backup for the Amazon ES Destination (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-s3-backup). + // Default value is FailedDocumentsOnly. + S3BackupMode *string `type:"string" enum:"ElasticsearchS3BackupMode"` - // The destination in Splunk. You can specify only one destination. - SplunkDestinationConfiguration *SplunkDestinationConfiguration `type:"structure"` + // The configuration for the backup Amazon S3 location. + // + // S3Configuration is a required field + S3Configuration *S3DestinationConfiguration `type:"structure" required:"true"` + + // The Elasticsearch type name. For Elasticsearch 6.x, there can be only one + // type per index. 
If you try to specify a new type for an existing index that + // already has another type, Kinesis Data Firehose returns an error during run + // time. + // + // TypeName is a required field + TypeName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CreateDeliveryStreamInput) String() string { +func (s ElasticsearchDestinationConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDeliveryStreamInput) GoString() string { +func (s ElasticsearchDestinationConfiguration) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateDeliveryStreamInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateDeliveryStreamInput"} - if s.DeliveryStreamName == nil { - invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) +func (s *ElasticsearchDestinationConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ElasticsearchDestinationConfiguration"} + if s.DomainARN == nil { + invalidParams.Add(request.NewErrParamRequired("DomainARN")) } - if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) + if s.DomainARN != nil && len(*s.DomainARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DomainARN", 1)) } - if s.ElasticsearchDestinationConfiguration != nil { - if err := s.ElasticsearchDestinationConfiguration.Validate(); err != nil { - invalidParams.AddNested("ElasticsearchDestinationConfiguration", err.(request.ErrInvalidParams)) - } + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) } - if s.ExtendedS3DestinationConfiguration != nil { - if err := s.ExtendedS3DestinationConfiguration.Validate(); err != nil { - invalidParams.AddNested("ExtendedS3DestinationConfiguration", err.(request.ErrInvalidParams)) - } + if s.IndexName != nil && len(*s.IndexName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) } - if s.KinesisStreamSourceConfiguration != nil { - if err := s.KinesisStreamSourceConfiguration.Validate(); err != nil { - invalidParams.AddNested("KinesisStreamSourceConfiguration", err.(request.ErrInvalidParams)) - } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) } - if s.RedshiftDestinationConfiguration != nil { - if err := s.RedshiftDestinationConfiguration.Validate(); err != nil { - invalidParams.AddNested("RedshiftDestinationConfiguration", err.(request.ErrInvalidParams)) + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + if s.S3Configuration == nil { + invalidParams.Add(request.NewErrParamRequired("S3Configuration")) + } + if s.TypeName == nil { + invalidParams.Add(request.NewErrParamRequired("TypeName")) + } + if s.TypeName != nil && len(*s.TypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) + } + if s.BufferingHints != nil { + if err := s.BufferingHints.Validate(); err != nil { + invalidParams.AddNested("BufferingHints", err.(request.ErrInvalidParams)) } } - if s.S3DestinationConfiguration != nil { - if err := s.S3DestinationConfiguration.Validate(); err != nil { - invalidParams.AddNested("S3DestinationConfiguration", err.(request.ErrInvalidParams)) + if s.ProcessingConfiguration != nil { + if err := s.ProcessingConfiguration.Validate(); err != nil { + 
invalidParams.AddNested("ProcessingConfiguration", err.(request.ErrInvalidParams)) } } - if s.SplunkDestinationConfiguration != nil { - if err := s.SplunkDestinationConfiguration.Validate(); err != nil { - invalidParams.AddNested("SplunkDestinationConfiguration", err.(request.ErrInvalidParams)) + if s.S3Configuration != nil { + if err := s.S3Configuration.Validate(); err != nil { + invalidParams.AddNested("S3Configuration", err.(request.ErrInvalidParams)) } } @@ -1061,301 +2349,276 @@ func (s *CreateDeliveryStreamInput) Validate() error { return nil } -// SetDeliveryStreamName sets the DeliveryStreamName field's value. -func (s *CreateDeliveryStreamInput) SetDeliveryStreamName(v string) *CreateDeliveryStreamInput { - s.DeliveryStreamName = &v - return s -} - -// SetDeliveryStreamType sets the DeliveryStreamType field's value. -func (s *CreateDeliveryStreamInput) SetDeliveryStreamType(v string) *CreateDeliveryStreamInput { - s.DeliveryStreamType = &v - return s -} - -// SetElasticsearchDestinationConfiguration sets the ElasticsearchDestinationConfiguration field's value. -func (s *CreateDeliveryStreamInput) SetElasticsearchDestinationConfiguration(v *ElasticsearchDestinationConfiguration) *CreateDeliveryStreamInput { - s.ElasticsearchDestinationConfiguration = v - return s -} - -// SetExtendedS3DestinationConfiguration sets the ExtendedS3DestinationConfiguration field's value. -func (s *CreateDeliveryStreamInput) SetExtendedS3DestinationConfiguration(v *ExtendedS3DestinationConfiguration) *CreateDeliveryStreamInput { - s.ExtendedS3DestinationConfiguration = v - return s -} - -// SetKinesisStreamSourceConfiguration sets the KinesisStreamSourceConfiguration field's value. -func (s *CreateDeliveryStreamInput) SetKinesisStreamSourceConfiguration(v *KinesisStreamSourceConfiguration) *CreateDeliveryStreamInput { - s.KinesisStreamSourceConfiguration = v - return s -} - -// SetRedshiftDestinationConfiguration sets the RedshiftDestinationConfiguration field's value. -func (s *CreateDeliveryStreamInput) SetRedshiftDestinationConfiguration(v *RedshiftDestinationConfiguration) *CreateDeliveryStreamInput { - s.RedshiftDestinationConfiguration = v - return s -} - -// SetS3DestinationConfiguration sets the S3DestinationConfiguration field's value. -func (s *CreateDeliveryStreamInput) SetS3DestinationConfiguration(v *S3DestinationConfiguration) *CreateDeliveryStreamInput { - s.S3DestinationConfiguration = v - return s -} - -// SetSplunkDestinationConfiguration sets the SplunkDestinationConfiguration field's value. -func (s *CreateDeliveryStreamInput) SetSplunkDestinationConfiguration(v *SplunkDestinationConfiguration) *CreateDeliveryStreamInput { - s.SplunkDestinationConfiguration = v - return s -} - -type CreateDeliveryStreamOutput struct { - _ struct{} `type:"structure"` - - // The ARN of the delivery stream. - DeliveryStreamARN *string `min:"1" type:"string"` -} - -// String returns the string representation -func (s CreateDeliveryStreamOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s CreateDeliveryStreamOutput) GoString() string { - return s.String() -} - -// SetDeliveryStreamARN sets the DeliveryStreamARN field's value. -func (s *CreateDeliveryStreamOutput) SetDeliveryStreamARN(v string) *CreateDeliveryStreamOutput { - s.DeliveryStreamARN = &v +// SetBufferingHints sets the BufferingHints field's value. 
+func (s *ElasticsearchDestinationConfiguration) SetBufferingHints(v *ElasticsearchBufferingHints) *ElasticsearchDestinationConfiguration { + s.BufferingHints = v return s } -type DeleteDeliveryStreamInput struct { - _ struct{} `type:"structure"` +// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. +func (s *ElasticsearchDestinationConfiguration) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ElasticsearchDestinationConfiguration { + s.CloudWatchLoggingOptions = v + return s +} - // The name of the delivery stream. - // - // DeliveryStreamName is a required field - DeliveryStreamName *string `min:"1" type:"string" required:"true"` +// SetDomainARN sets the DomainARN field's value. +func (s *ElasticsearchDestinationConfiguration) SetDomainARN(v string) *ElasticsearchDestinationConfiguration { + s.DomainARN = &v + return s } -// String returns the string representation -func (s DeleteDeliveryStreamInput) String() string { - return awsutil.Prettify(s) +// SetIndexName sets the IndexName field's value. +func (s *ElasticsearchDestinationConfiguration) SetIndexName(v string) *ElasticsearchDestinationConfiguration { + s.IndexName = &v + return s } -// GoString returns the string representation -func (s DeleteDeliveryStreamInput) GoString() string { - return s.String() +// SetIndexRotationPeriod sets the IndexRotationPeriod field's value. +func (s *ElasticsearchDestinationConfiguration) SetIndexRotationPeriod(v string) *ElasticsearchDestinationConfiguration { + s.IndexRotationPeriod = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDeliveryStreamInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDeliveryStreamInput"} - if s.DeliveryStreamName == nil { - invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) - } - if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) - } +// SetProcessingConfiguration sets the ProcessingConfiguration field's value. +func (s *ElasticsearchDestinationConfiguration) SetProcessingConfiguration(v *ProcessingConfiguration) *ElasticsearchDestinationConfiguration { + s.ProcessingConfiguration = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetRetryOptions sets the RetryOptions field's value. +func (s *ElasticsearchDestinationConfiguration) SetRetryOptions(v *ElasticsearchRetryOptions) *ElasticsearchDestinationConfiguration { + s.RetryOptions = v + return s } -// SetDeliveryStreamName sets the DeliveryStreamName field's value. -func (s *DeleteDeliveryStreamInput) SetDeliveryStreamName(v string) *DeleteDeliveryStreamInput { - s.DeliveryStreamName = &v +// SetRoleARN sets the RoleARN field's value. +func (s *ElasticsearchDestinationConfiguration) SetRoleARN(v string) *ElasticsearchDestinationConfiguration { + s.RoleARN = &v return s } -type DeleteDeliveryStreamOutput struct { - _ struct{} `type:"structure"` +// SetS3BackupMode sets the S3BackupMode field's value. +func (s *ElasticsearchDestinationConfiguration) SetS3BackupMode(v string) *ElasticsearchDestinationConfiguration { + s.S3BackupMode = &v + return s } -// String returns the string representation -func (s DeleteDeliveryStreamOutput) String() string { - return awsutil.Prettify(s) +// SetS3Configuration sets the S3Configuration field's value. 
+func (s *ElasticsearchDestinationConfiguration) SetS3Configuration(v *S3DestinationConfiguration) *ElasticsearchDestinationConfiguration { + s.S3Configuration = v + return s } -// GoString returns the string representation -func (s DeleteDeliveryStreamOutput) GoString() string { - return s.String() +// SetTypeName sets the TypeName field's value. +func (s *ElasticsearchDestinationConfiguration) SetTypeName(v string) *ElasticsearchDestinationConfiguration { + s.TypeName = &v + return s } -// Contains information about a delivery stream. -type DeliveryStreamDescription struct { +// The destination description in Amazon ES. +type ElasticsearchDestinationDescription struct { _ struct{} `type:"structure"` - // The date and time that the delivery stream was created. - CreateTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + // The buffering options. + BufferingHints *ElasticsearchBufferingHints `type:"structure"` - // The Amazon Resource Name (ARN) of the delivery stream. - // - // DeliveryStreamARN is a required field - DeliveryStreamARN *string `min:"1" type:"string" required:"true"` + // The Amazon CloudWatch logging options. + CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - // The name of the delivery stream. - // - // DeliveryStreamName is a required field - DeliveryStreamName *string `min:"1" type:"string" required:"true"` + // The ARN of the Amazon ES domain. For more information, see Amazon Resource + // Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + DomainARN *string `min:"1" type:"string"` - // The status of the delivery stream. - // - // DeliveryStreamStatus is a required field - DeliveryStreamStatus *string `type:"string" required:"true" enum:"DeliveryStreamStatus"` + // The Elasticsearch index name. + IndexName *string `min:"1" type:"string"` - // The delivery stream type. This can be one of the following values: - // - // * DirectPut: Provider applications access the delivery stream directly. - // - // * KinesisStreamAsSource: The delivery stream uses a Kinesis stream as - // a source. - // - // DeliveryStreamType is a required field - DeliveryStreamType *string `type:"string" required:"true" enum:"DeliveryStreamType"` + // The Elasticsearch index rotation period + IndexRotationPeriod *string `type:"string" enum:"ElasticsearchIndexRotationPeriod"` - // The destinations. - // - // Destinations is a required field - Destinations []*DestinationDescription `type:"list" required:"true"` + // The data processing configuration. + ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // Indicates whether there are more destinations available to list. - // - // HasMoreDestinations is a required field - HasMoreDestinations *bool `type:"boolean" required:"true"` + // The Amazon ES retry options. + RetryOptions *ElasticsearchRetryOptions `type:"structure"` - // The date and time that the delivery stream was last updated. - LastUpdateTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + RoleARN *string `min:"1" type:"string"` - // If the DeliveryStreamType parameter is KinesisStreamAsSource, a SourceDescription - // object describing the source Kinesis stream. - Source *SourceDescription `type:"structure"` + // The Amazon S3 backup mode. 
+ S3BackupMode *string `type:"string" enum:"ElasticsearchS3BackupMode"` - // Each time the destination is updated for a delivery stream, the version ID - // is changed, and the current version ID is required when updating the destination. - // This is so that the service knows it is applying the changes to the correct - // version of the delivery stream. - // - // VersionId is a required field - VersionId *string `min:"1" type:"string" required:"true"` + // The Amazon S3 destination. + S3DestinationDescription *S3DestinationDescription `type:"structure"` + + // The Elasticsearch type name. + TypeName *string `min:"1" type:"string"` } // String returns the string representation -func (s DeliveryStreamDescription) String() string { +func (s ElasticsearchDestinationDescription) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeliveryStreamDescription) GoString() string { +func (s ElasticsearchDestinationDescription) GoString() string { return s.String() } -// SetCreateTimestamp sets the CreateTimestamp field's value. -func (s *DeliveryStreamDescription) SetCreateTimestamp(v time.Time) *DeliveryStreamDescription { - s.CreateTimestamp = &v +// SetBufferingHints sets the BufferingHints field's value. +func (s *ElasticsearchDestinationDescription) SetBufferingHints(v *ElasticsearchBufferingHints) *ElasticsearchDestinationDescription { + s.BufferingHints = v return s } -// SetDeliveryStreamARN sets the DeliveryStreamARN field's value. -func (s *DeliveryStreamDescription) SetDeliveryStreamARN(v string) *DeliveryStreamDescription { - s.DeliveryStreamARN = &v +// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. +func (s *ElasticsearchDestinationDescription) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ElasticsearchDestinationDescription { + s.CloudWatchLoggingOptions = v return s } -// SetDeliveryStreamName sets the DeliveryStreamName field's value. -func (s *DeliveryStreamDescription) SetDeliveryStreamName(v string) *DeliveryStreamDescription { - s.DeliveryStreamName = &v +// SetDomainARN sets the DomainARN field's value. +func (s *ElasticsearchDestinationDescription) SetDomainARN(v string) *ElasticsearchDestinationDescription { + s.DomainARN = &v return s } -// SetDeliveryStreamStatus sets the DeliveryStreamStatus field's value. -func (s *DeliveryStreamDescription) SetDeliveryStreamStatus(v string) *DeliveryStreamDescription { - s.DeliveryStreamStatus = &v +// SetIndexName sets the IndexName field's value. +func (s *ElasticsearchDestinationDescription) SetIndexName(v string) *ElasticsearchDestinationDescription { + s.IndexName = &v return s } -// SetDeliveryStreamType sets the DeliveryStreamType field's value. -func (s *DeliveryStreamDescription) SetDeliveryStreamType(v string) *DeliveryStreamDescription { - s.DeliveryStreamType = &v +// SetIndexRotationPeriod sets the IndexRotationPeriod field's value. +func (s *ElasticsearchDestinationDescription) SetIndexRotationPeriod(v string) *ElasticsearchDestinationDescription { + s.IndexRotationPeriod = &v return s } -// SetDestinations sets the Destinations field's value. -func (s *DeliveryStreamDescription) SetDestinations(v []*DestinationDescription) *DeliveryStreamDescription { - s.Destinations = v +// SetProcessingConfiguration sets the ProcessingConfiguration field's value. 
+func (s *ElasticsearchDestinationDescription) SetProcessingConfiguration(v *ProcessingConfiguration) *ElasticsearchDestinationDescription { + s.ProcessingConfiguration = v return s } -// SetHasMoreDestinations sets the HasMoreDestinations field's value. -func (s *DeliveryStreamDescription) SetHasMoreDestinations(v bool) *DeliveryStreamDescription { - s.HasMoreDestinations = &v +// SetRetryOptions sets the RetryOptions field's value. +func (s *ElasticsearchDestinationDescription) SetRetryOptions(v *ElasticsearchRetryOptions) *ElasticsearchDestinationDescription { + s.RetryOptions = v return s } -// SetLastUpdateTimestamp sets the LastUpdateTimestamp field's value. -func (s *DeliveryStreamDescription) SetLastUpdateTimestamp(v time.Time) *DeliveryStreamDescription { - s.LastUpdateTimestamp = &v +// SetRoleARN sets the RoleARN field's value. +func (s *ElasticsearchDestinationDescription) SetRoleARN(v string) *ElasticsearchDestinationDescription { + s.RoleARN = &v return s } -// SetSource sets the Source field's value. -func (s *DeliveryStreamDescription) SetSource(v *SourceDescription) *DeliveryStreamDescription { - s.Source = v +// SetS3BackupMode sets the S3BackupMode field's value. +func (s *ElasticsearchDestinationDescription) SetS3BackupMode(v string) *ElasticsearchDestinationDescription { + s.S3BackupMode = &v return s } -// SetVersionId sets the VersionId field's value. -func (s *DeliveryStreamDescription) SetVersionId(v string) *DeliveryStreamDescription { - s.VersionId = &v +// SetS3DestinationDescription sets the S3DestinationDescription field's value. +func (s *ElasticsearchDestinationDescription) SetS3DestinationDescription(v *S3DestinationDescription) *ElasticsearchDestinationDescription { + s.S3DestinationDescription = v return s } -type DescribeDeliveryStreamInput struct { +// SetTypeName sets the TypeName field's value. +func (s *ElasticsearchDestinationDescription) SetTypeName(v string) *ElasticsearchDestinationDescription { + s.TypeName = &v + return s +} + +// Describes an update for a destination in Amazon ES. +type ElasticsearchDestinationUpdate struct { _ struct{} `type:"structure"` - // The name of the delivery stream. - // - // DeliveryStreamName is a required field - DeliveryStreamName *string `min:"1" type:"string" required:"true"` + // The buffering options. If no value is specified, ElasticsearchBufferingHints + // object default values are used. + BufferingHints *ElasticsearchBufferingHints `type:"structure"` - // The ID of the destination to start returning the destination information. - // Currently, Kinesis Firehose supports one destination per delivery stream. - ExclusiveStartDestinationId *string `min:"1" type:"string"` + // The CloudWatch logging options for your delivery stream. + CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - // The limit on the number of destinations to return. Currently, you can have - // one destination per delivery stream. - Limit *int64 `min:"1" type:"integer"` + // The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, + // DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after + // assuming the IAM role specified in RoleARN. For more information, see Amazon + // Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + DomainARN *string `min:"1" type:"string"` + + // The Elasticsearch index name. 
+ IndexName *string `min:"1" type:"string"` + + // The Elasticsearch index rotation period. Index rotation appends a time stamp + // to IndexName to facilitate the expiration of old data. For more information, + // see Index Rotation for the Amazon ES Destination (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-index-rotation). + // Default value is OneDay. + IndexRotationPeriod *string `type:"string" enum:"ElasticsearchIndexRotationPeriod"` + + // The data processing configuration. + ProcessingConfiguration *ProcessingConfiguration `type:"structure"` + + // The retry behavior in case Kinesis Data Firehose is unable to deliver documents + // to Amazon ES. The default value is 300 (5 minutes). + RetryOptions *ElasticsearchRetryOptions `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data + // Firehose for calling the Amazon ES Configuration API and for indexing documents. + // For more information, see Grant Kinesis Data Firehose Access to an Amazon + // S3 Destination (http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3) + // and Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + RoleARN *string `min:"1" type:"string"` + + // The Amazon S3 destination. + S3Update *S3DestinationUpdate `type:"structure"` + + // The Elasticsearch type name. For Elasticsearch 6.x, there can be only one + // type per index. If you try to specify a new type for an existing index that + // already has another type, Kinesis Data Firehose returns an error during runtime. + TypeName *string `min:"1" type:"string"` } // String returns the string representation -func (s DescribeDeliveryStreamInput) String() string { +func (s ElasticsearchDestinationUpdate) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeDeliveryStreamInput) GoString() string { +func (s ElasticsearchDestinationUpdate) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DescribeDeliveryStreamInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeDeliveryStreamInput"} - if s.DeliveryStreamName == nil { - invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) +func (s *ElasticsearchDestinationUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ElasticsearchDestinationUpdate"} + if s.DomainARN != nil && len(*s.DomainARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DomainARN", 1)) } - if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) + if s.IndexName != nil && len(*s.IndexName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) } - if s.ExclusiveStartDestinationId != nil && len(*s.ExclusiveStartDestinationId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartDestinationId", 1)) + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) } - if s.Limit != nil && *s.Limit < 1 { - invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + if s.TypeName != nil && len(*s.TypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) + } + if s.BufferingHints != nil { + if err := s.BufferingHints.Validate(); err != nil { + invalidParams.AddNested("BufferingHints", err.(request.ErrInvalidParams)) + } + } + if s.ProcessingConfiguration != nil { + if err := s.ProcessingConfiguration.Validate(); err != nil { + invalidParams.AddNested("ProcessingConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.S3Update != nil { + if err := s.S3Update.Validate(); err != nil { + invalidParams.AddNested("S3Update", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -1364,156 +2627,124 @@ func (s *DescribeDeliveryStreamInput) Validate() error { return nil } -// SetDeliveryStreamName sets the DeliveryStreamName field's value. -func (s *DescribeDeliveryStreamInput) SetDeliveryStreamName(v string) *DescribeDeliveryStreamInput { - s.DeliveryStreamName = &v +// SetBufferingHints sets the BufferingHints field's value. +func (s *ElasticsearchDestinationUpdate) SetBufferingHints(v *ElasticsearchBufferingHints) *ElasticsearchDestinationUpdate { + s.BufferingHints = v return s } -// SetExclusiveStartDestinationId sets the ExclusiveStartDestinationId field's value. -func (s *DescribeDeliveryStreamInput) SetExclusiveStartDestinationId(v string) *DescribeDeliveryStreamInput { - s.ExclusiveStartDestinationId = &v +// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. +func (s *ElasticsearchDestinationUpdate) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ElasticsearchDestinationUpdate { + s.CloudWatchLoggingOptions = v return s } -// SetLimit sets the Limit field's value. -func (s *DescribeDeliveryStreamInput) SetLimit(v int64) *DescribeDeliveryStreamInput { - s.Limit = &v +// SetDomainARN sets the DomainARN field's value. +func (s *ElasticsearchDestinationUpdate) SetDomainARN(v string) *ElasticsearchDestinationUpdate { + s.DomainARN = &v return s } -type DescribeDeliveryStreamOutput struct { - _ struct{} `type:"structure"` - - // Information about the delivery stream. 
- // - // DeliveryStreamDescription is a required field - DeliveryStreamDescription *DeliveryStreamDescription `type:"structure" required:"true"` -} - -// String returns the string representation -func (s DescribeDeliveryStreamOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DescribeDeliveryStreamOutput) GoString() string { - return s.String() +// SetIndexName sets the IndexName field's value. +func (s *ElasticsearchDestinationUpdate) SetIndexName(v string) *ElasticsearchDestinationUpdate { + s.IndexName = &v + return s } -// SetDeliveryStreamDescription sets the DeliveryStreamDescription field's value. -func (s *DescribeDeliveryStreamOutput) SetDeliveryStreamDescription(v *DeliveryStreamDescription) *DescribeDeliveryStreamOutput { - s.DeliveryStreamDescription = v +// SetIndexRotationPeriod sets the IndexRotationPeriod field's value. +func (s *ElasticsearchDestinationUpdate) SetIndexRotationPeriod(v string) *ElasticsearchDestinationUpdate { + s.IndexRotationPeriod = &v return s } -// Describes the destination for a delivery stream. -type DestinationDescription struct { - _ struct{} `type:"structure"` - - // The ID of the destination. - // - // DestinationId is a required field - DestinationId *string `min:"1" type:"string" required:"true"` - - // The destination in Amazon ES. - ElasticsearchDestinationDescription *ElasticsearchDestinationDescription `type:"structure"` - - // The destination in Amazon S3. - ExtendedS3DestinationDescription *ExtendedS3DestinationDescription `type:"structure"` - - // The destination in Amazon Redshift. - RedshiftDestinationDescription *RedshiftDestinationDescription `type:"structure"` - - // [Deprecated] The destination in Amazon S3. - S3DestinationDescription *S3DestinationDescription `type:"structure"` - - // The destination in Splunk. - SplunkDestinationDescription *SplunkDestinationDescription `type:"structure"` +// SetProcessingConfiguration sets the ProcessingConfiguration field's value. +func (s *ElasticsearchDestinationUpdate) SetProcessingConfiguration(v *ProcessingConfiguration) *ElasticsearchDestinationUpdate { + s.ProcessingConfiguration = v + return s } -// String returns the string representation -func (s DestinationDescription) String() string { - return awsutil.Prettify(s) +// SetRetryOptions sets the RetryOptions field's value. +func (s *ElasticsearchDestinationUpdate) SetRetryOptions(v *ElasticsearchRetryOptions) *ElasticsearchDestinationUpdate { + s.RetryOptions = v + return s } -// GoString returns the string representation -func (s DestinationDescription) GoString() string { - return s.String() +// SetRoleARN sets the RoleARN field's value. +func (s *ElasticsearchDestinationUpdate) SetRoleARN(v string) *ElasticsearchDestinationUpdate { + s.RoleARN = &v + return s } -// SetDestinationId sets the DestinationId field's value. -func (s *DestinationDescription) SetDestinationId(v string) *DestinationDescription { - s.DestinationId = &v +// SetS3Update sets the S3Update field's value. +func (s *ElasticsearchDestinationUpdate) SetS3Update(v *S3DestinationUpdate) *ElasticsearchDestinationUpdate { + s.S3Update = v return s } -// SetElasticsearchDestinationDescription sets the ElasticsearchDestinationDescription field's value. -func (s *DestinationDescription) SetElasticsearchDestinationDescription(v *ElasticsearchDestinationDescription) *DestinationDescription { - s.ElasticsearchDestinationDescription = v +// SetTypeName sets the TypeName field's value. 
+func (s *ElasticsearchDestinationUpdate) SetTypeName(v string) *ElasticsearchDestinationUpdate { + s.TypeName = &v return s } -// SetExtendedS3DestinationDescription sets the ExtendedS3DestinationDescription field's value. -func (s *DestinationDescription) SetExtendedS3DestinationDescription(v *ExtendedS3DestinationDescription) *DestinationDescription { - s.ExtendedS3DestinationDescription = v - return s +// Configures retry behavior in case Kinesis Data Firehose is unable to deliver +// documents to Amazon ES. +type ElasticsearchRetryOptions struct { + _ struct{} `type:"structure"` + + // After an initial failure to deliver to Amazon ES, the total amount of time + // during which Kinesis Data Firehose retries delivery (including the first + // attempt). After this time has elapsed, the failed documents are written to + // Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) + // results in no retries. + DurationInSeconds *int64 `type:"integer"` } -// SetRedshiftDestinationDescription sets the RedshiftDestinationDescription field's value. -func (s *DestinationDescription) SetRedshiftDestinationDescription(v *RedshiftDestinationDescription) *DestinationDescription { - s.RedshiftDestinationDescription = v - return s +// String returns the string representation +func (s ElasticsearchRetryOptions) String() string { + return awsutil.Prettify(s) } -// SetS3DestinationDescription sets the S3DestinationDescription field's value. -func (s *DestinationDescription) SetS3DestinationDescription(v *S3DestinationDescription) *DestinationDescription { - s.S3DestinationDescription = v - return s +// GoString returns the string representation +func (s ElasticsearchRetryOptions) GoString() string { + return s.String() } -// SetSplunkDestinationDescription sets the SplunkDestinationDescription field's value. -func (s *DestinationDescription) SetSplunkDestinationDescription(v *SplunkDestinationDescription) *DestinationDescription { - s.SplunkDestinationDescription = v +// SetDurationInSeconds sets the DurationInSeconds field's value. +func (s *ElasticsearchRetryOptions) SetDurationInSeconds(v int64) *ElasticsearchRetryOptions { + s.DurationInSeconds = &v return s } -// Describes the buffering to perform before delivering data to the Amazon ES -// destination. -type ElasticsearchBufferingHints struct { +// Describes the encryption for a destination in Amazon S3. +type EncryptionConfiguration struct { _ struct{} `type:"structure"` - // Buffer incoming data for the specified period of time, in seconds, before - // delivering it to the destination. The default value is 300 (5 minutes). - IntervalInSeconds *int64 `min:"60" type:"integer"` + // The encryption key. + KMSEncryptionConfig *KMSEncryptionConfig `type:"structure"` - // Buffer incoming data to the specified size, in MBs, before delivering it - // to the destination. The default value is 5. - // - // We recommend setting this parameter to a value greater than the amount of - // data you typically ingest into the delivery stream in 10 seconds. For example, - // if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher. - SizeInMBs *int64 `min:"1" type:"integer"` + // Specifically override existing encryption information to ensure that no encryption + // is used. 
+ NoEncryptionConfig *string `type:"string" enum:"NoEncryptionConfig"` } // String returns the string representation -func (s ElasticsearchBufferingHints) String() string { +func (s EncryptionConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchBufferingHints) GoString() string { +func (s EncryptionConfiguration) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ElasticsearchBufferingHints) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ElasticsearchBufferingHints"} - if s.IntervalInSeconds != nil && *s.IntervalInSeconds < 60 { - invalidParams.Add(request.NewErrParamMinValue("IntervalInSeconds", 60)) - } - if s.SizeInMBs != nil && *s.SizeInMBs < 1 { - invalidParams.Add(request.NewErrParamMinValue("SizeInMBs", 1)) +func (s *EncryptionConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EncryptionConfiguration"} + if s.KMSEncryptionConfig != nil { + if err := s.KMSEncryptionConfig.Validate(); err != nil { + invalidParams.AddNested("KMSEncryptionConfig", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -1522,106 +2753,87 @@ func (s *ElasticsearchBufferingHints) Validate() error { return nil } -// SetIntervalInSeconds sets the IntervalInSeconds field's value. -func (s *ElasticsearchBufferingHints) SetIntervalInSeconds(v int64) *ElasticsearchBufferingHints { - s.IntervalInSeconds = &v +// SetKMSEncryptionConfig sets the KMSEncryptionConfig field's value. +func (s *EncryptionConfiguration) SetKMSEncryptionConfig(v *KMSEncryptionConfig) *EncryptionConfiguration { + s.KMSEncryptionConfig = v return s } -// SetSizeInMBs sets the SizeInMBs field's value. -func (s *ElasticsearchBufferingHints) SetSizeInMBs(v int64) *ElasticsearchBufferingHints { - s.SizeInMBs = &v +// SetNoEncryptionConfig sets the NoEncryptionConfig field's value. +func (s *EncryptionConfiguration) SetNoEncryptionConfig(v string) *EncryptionConfiguration { + s.NoEncryptionConfig = &v return s } -// Describes the configuration of a destination in Amazon ES. -type ElasticsearchDestinationConfiguration struct { +// Describes the configuration of a destination in Amazon S3. +type ExtendedS3DestinationConfiguration struct { _ struct{} `type:"structure"` - // The buffering options. If no value is specified, the default values for ElasticsearchBufferingHints - // are used. - BufferingHints *ElasticsearchBufferingHints `type:"structure"` - - // The CloudWatch logging options for your delivery stream. - CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - - // The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, - // DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after - // assuming the role specified in RoleARN. + // The ARN of the S3 bucket. For more information, see Amazon Resource Names + // (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). // - // DomainARN is a required field - DomainARN *string `min:"1" type:"string" required:"true"` + // BucketARN is a required field + BucketARN *string `min:"1" type:"string" required:"true"` - // The Elasticsearch index name. - // - // IndexName is a required field - IndexName *string `min:"1" type:"string" required:"true"` + // The buffering option. 
+ BufferingHints *BufferingHints `type:"structure"` - // The Elasticsearch index rotation period. Index rotation appends a time stamp - // to the IndexName to facilitate the expiration of old data. For more information, - // see Index Rotation for Amazon Elasticsearch Service Destination (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-index-rotation). - // The default value is OneDay. - IndexRotationPeriod *string `type:"string" enum:"ElasticsearchIndexRotationPeriod"` + // The Amazon CloudWatch logging options for your delivery stream. + CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - // The data processing configuration. - ProcessingConfiguration *ProcessingConfiguration `type:"structure"` + // The compression format. If no value is specified, the default is UNCOMPRESSED. + CompressionFormat *string `type:"string" enum:"CompressionFormat"` - // The retry behavior in case Kinesis Firehose is unable to deliver documents - // to Amazon ES. The default value is 300 (5 minutes). - RetryOptions *ElasticsearchRetryOptions `type:"structure"` + // The serializer, deserializer, and schema for converting data from the JSON + // format to the Parquet or ORC format before writing it to Amazon S3. + DataFormatConversionConfiguration *DataFormatConversionConfiguration `type:"structure"` - // The ARN of the IAM role to be assumed by Kinesis Firehose for calling the - // Amazon ES Configuration API and for indexing documents. For more information, - // see Amazon S3 Bucket Access (http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3). - // - // RoleARN is a required field - RoleARN *string `min:"1" type:"string" required:"true"` + // The encryption configuration. If no value is specified, the default is no + // encryption. + EncryptionConfiguration *EncryptionConfiguration `type:"structure"` - // Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, - // Kinesis Firehose writes any documents that could not be indexed to the configured - // Amazon S3 destination, with elasticsearch-failed/ appended to the key prefix. - // When set to AllDocuments, Kinesis Firehose delivers all incoming records - // to Amazon S3, and also writes failed documents with elasticsearch-failed/ - // appended to the prefix. For more information, see Amazon S3 Backup for Amazon - // Elasticsearch Service Destination (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-s3-backup). - // Default value is FailedDocumentsOnly. - S3BackupMode *string `type:"string" enum:"ElasticsearchS3BackupMode"` + // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered + // Amazon S3 files. You can specify an extra prefix to be added in front of + // the time format prefix. If the prefix ends with a slash, it appears as a + // folder in the S3 bucket. For more information, see Amazon S3 Object Name + // Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name) + // in the Amazon Kinesis Data Firehose Developer Guide. + Prefix *string `type:"string"` - // The configuration for the backup Amazon S3 location. - // - // S3Configuration is a required field - S3Configuration *S3DestinationConfiguration `type:"structure" required:"true"` + // The data processing configuration. + ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // The Elasticsearch type name. + // The Amazon Resource Name (ARN) of the AWS credentials. 
For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). // - // TypeName is a required field - TypeName *string `min:"1" type:"string" required:"true"` + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` + + // The configuration for backup in Amazon S3. + S3BackupConfiguration *S3DestinationConfiguration `type:"structure"` + + // The Amazon S3 backup mode. + S3BackupMode *string `type:"string" enum:"S3BackupMode"` } // String returns the string representation -func (s ElasticsearchDestinationConfiguration) String() string { +func (s ExtendedS3DestinationConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchDestinationConfiguration) GoString() string { +func (s ExtendedS3DestinationConfiguration) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ElasticsearchDestinationConfiguration) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ElasticsearchDestinationConfiguration"} - if s.DomainARN == nil { - invalidParams.Add(request.NewErrParamRequired("DomainARN")) - } - if s.DomainARN != nil && len(*s.DomainARN) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DomainARN", 1)) - } - if s.IndexName == nil { - invalidParams.Add(request.NewErrParamRequired("IndexName")) +func (s *ExtendedS3DestinationConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ExtendedS3DestinationConfiguration"} + if s.BucketARN == nil { + invalidParams.Add(request.NewErrParamRequired("BucketARN")) } - if s.IndexName != nil && len(*s.IndexName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) + if s.BucketARN != nil && len(*s.BucketARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BucketARN", 1)) } if s.RoleARN == nil { invalidParams.Add(request.NewErrParamRequired("RoleARN")) @@ -1629,28 +2841,29 @@ func (s *ElasticsearchDestinationConfiguration) Validate() error { if s.RoleARN != nil && len(*s.RoleARN) < 1 { invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) } - if s.S3Configuration == nil { - invalidParams.Add(request.NewErrParamRequired("S3Configuration")) - } - if s.TypeName == nil { - invalidParams.Add(request.NewErrParamRequired("TypeName")) - } - if s.TypeName != nil && len(*s.TypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) - } if s.BufferingHints != nil { if err := s.BufferingHints.Validate(); err != nil { invalidParams.AddNested("BufferingHints", err.(request.ErrInvalidParams)) } } + if s.DataFormatConversionConfiguration != nil { + if err := s.DataFormatConversionConfiguration.Validate(); err != nil { + invalidParams.AddNested("DataFormatConversionConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.EncryptionConfiguration != nil { + if err := s.EncryptionConfiguration.Validate(); err != nil { + invalidParams.AddNested("EncryptionConfiguration", err.(request.ErrInvalidParams)) + } + } if s.ProcessingConfiguration != nil { if err := s.ProcessingConfiguration.Validate(); err != nil { invalidParams.AddNested("ProcessingConfiguration", err.(request.ErrInvalidParams)) } } - if s.S3Configuration != nil { - if err := s.S3Configuration.Validate(); err != nil { - invalidParams.AddNested("S3Configuration", err.(request.ErrInvalidParams)) + if s.S3BackupConfiguration != nil { + if err := 
s.S3BackupConfiguration.Validate(); err != nil { + invalidParams.AddNested("S3BackupConfiguration", err.(request.ErrInvalidParams)) } } @@ -1660,268 +2873,294 @@ func (s *ElasticsearchDestinationConfiguration) Validate() error { return nil } +// SetBucketARN sets the BucketARN field's value. +func (s *ExtendedS3DestinationConfiguration) SetBucketARN(v string) *ExtendedS3DestinationConfiguration { + s.BucketARN = &v + return s +} + // SetBufferingHints sets the BufferingHints field's value. -func (s *ElasticsearchDestinationConfiguration) SetBufferingHints(v *ElasticsearchBufferingHints) *ElasticsearchDestinationConfiguration { +func (s *ExtendedS3DestinationConfiguration) SetBufferingHints(v *BufferingHints) *ExtendedS3DestinationConfiguration { s.BufferingHints = v return s } // SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. -func (s *ElasticsearchDestinationConfiguration) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ElasticsearchDestinationConfiguration { +func (s *ExtendedS3DestinationConfiguration) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ExtendedS3DestinationConfiguration { s.CloudWatchLoggingOptions = v return s } -// SetDomainARN sets the DomainARN field's value. -func (s *ElasticsearchDestinationConfiguration) SetDomainARN(v string) *ElasticsearchDestinationConfiguration { - s.DomainARN = &v +// SetCompressionFormat sets the CompressionFormat field's value. +func (s *ExtendedS3DestinationConfiguration) SetCompressionFormat(v string) *ExtendedS3DestinationConfiguration { + s.CompressionFormat = &v return s } -// SetIndexName sets the IndexName field's value. -func (s *ElasticsearchDestinationConfiguration) SetIndexName(v string) *ElasticsearchDestinationConfiguration { - s.IndexName = &v +// SetDataFormatConversionConfiguration sets the DataFormatConversionConfiguration field's value. +func (s *ExtendedS3DestinationConfiguration) SetDataFormatConversionConfiguration(v *DataFormatConversionConfiguration) *ExtendedS3DestinationConfiguration { + s.DataFormatConversionConfiguration = v return s } -// SetIndexRotationPeriod sets the IndexRotationPeriod field's value. -func (s *ElasticsearchDestinationConfiguration) SetIndexRotationPeriod(v string) *ElasticsearchDestinationConfiguration { - s.IndexRotationPeriod = &v +// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. +func (s *ExtendedS3DestinationConfiguration) SetEncryptionConfiguration(v *EncryptionConfiguration) *ExtendedS3DestinationConfiguration { + s.EncryptionConfiguration = v return s } -// SetProcessingConfiguration sets the ProcessingConfiguration field's value. -func (s *ElasticsearchDestinationConfiguration) SetProcessingConfiguration(v *ProcessingConfiguration) *ElasticsearchDestinationConfiguration { - s.ProcessingConfiguration = v +// SetPrefix sets the Prefix field's value. +func (s *ExtendedS3DestinationConfiguration) SetPrefix(v string) *ExtendedS3DestinationConfiguration { + s.Prefix = &v return s } -// SetRetryOptions sets the RetryOptions field's value. -func (s *ElasticsearchDestinationConfiguration) SetRetryOptions(v *ElasticsearchRetryOptions) *ElasticsearchDestinationConfiguration { - s.RetryOptions = v +// SetProcessingConfiguration sets the ProcessingConfiguration field's value. +func (s *ExtendedS3DestinationConfiguration) SetProcessingConfiguration(v *ProcessingConfiguration) *ExtendedS3DestinationConfiguration { + s.ProcessingConfiguration = v return s } // SetRoleARN sets the RoleARN field's value. 
-func (s *ElasticsearchDestinationConfiguration) SetRoleARN(v string) *ElasticsearchDestinationConfiguration { +func (s *ExtendedS3DestinationConfiguration) SetRoleARN(v string) *ExtendedS3DestinationConfiguration { s.RoleARN = &v return s } -// SetS3BackupMode sets the S3BackupMode field's value. -func (s *ElasticsearchDestinationConfiguration) SetS3BackupMode(v string) *ElasticsearchDestinationConfiguration { - s.S3BackupMode = &v - return s -} - -// SetS3Configuration sets the S3Configuration field's value. -func (s *ElasticsearchDestinationConfiguration) SetS3Configuration(v *S3DestinationConfiguration) *ElasticsearchDestinationConfiguration { - s.S3Configuration = v +// SetS3BackupConfiguration sets the S3BackupConfiguration field's value. +func (s *ExtendedS3DestinationConfiguration) SetS3BackupConfiguration(v *S3DestinationConfiguration) *ExtendedS3DestinationConfiguration { + s.S3BackupConfiguration = v return s } -// SetTypeName sets the TypeName field's value. -func (s *ElasticsearchDestinationConfiguration) SetTypeName(v string) *ElasticsearchDestinationConfiguration { - s.TypeName = &v +// SetS3BackupMode sets the S3BackupMode field's value. +func (s *ExtendedS3DestinationConfiguration) SetS3BackupMode(v string) *ExtendedS3DestinationConfiguration { + s.S3BackupMode = &v return s } -// The destination description in Amazon ES. -type ElasticsearchDestinationDescription struct { +// Describes a destination in Amazon S3. +type ExtendedS3DestinationDescription struct { _ struct{} `type:"structure"` - // The buffering options. - BufferingHints *ElasticsearchBufferingHints `type:"structure"` + // The ARN of the S3 bucket. For more information, see Amazon Resource Names + // (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + // + // BucketARN is a required field + BucketARN *string `min:"1" type:"string" required:"true"` + + // The buffering option. + // + // BufferingHints is a required field + BufferingHints *BufferingHints `type:"structure" required:"true"` - // The CloudWatch logging options. + // The Amazon CloudWatch logging options for your delivery stream. CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - // The ARN of the Amazon ES domain. - DomainARN *string `min:"1" type:"string"` + // The compression format. If no value is specified, the default is UNCOMPRESSED. + // + // CompressionFormat is a required field + CompressionFormat *string `type:"string" required:"true" enum:"CompressionFormat"` - // The Elasticsearch index name. - IndexName *string `min:"1" type:"string"` + // The serializer, deserializer, and schema for converting data from the JSON + // format to the Parquet or ORC format before writing it to Amazon S3. + DataFormatConversionConfiguration *DataFormatConversionConfiguration `type:"structure"` - // The Elasticsearch index rotation period - IndexRotationPeriod *string `type:"string" enum:"ElasticsearchIndexRotationPeriod"` + // The encryption configuration. If no value is specified, the default is no + // encryption. + // + // EncryptionConfiguration is a required field + EncryptionConfiguration *EncryptionConfiguration `type:"structure" required:"true"` + + // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered + // Amazon S3 files. You can specify an extra prefix to be added in front of + // the time format prefix. If the prefix ends with a slash, it appears as a + // folder in the S3 bucket. 
For more information, see Amazon S3 Object Name + // Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name) + // in the Amazon Kinesis Data Firehose Developer Guide. + Prefix *string `type:"string"` // The data processing configuration. ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // The Amazon ES retry options. - RetryOptions *ElasticsearchRetryOptions `type:"structure"` + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` - // The ARN of the AWS credentials. - RoleARN *string `min:"1" type:"string"` + // The configuration for backup in Amazon S3. + S3BackupDescription *S3DestinationDescription `type:"structure"` // The Amazon S3 backup mode. - S3BackupMode *string `type:"string" enum:"ElasticsearchS3BackupMode"` - - // The Amazon S3 destination. - S3DestinationDescription *S3DestinationDescription `type:"structure"` - - // The Elasticsearch type name. - TypeName *string `min:"1" type:"string"` + S3BackupMode *string `type:"string" enum:"S3BackupMode"` } // String returns the string representation -func (s ElasticsearchDestinationDescription) String() string { +func (s ExtendedS3DestinationDescription) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchDestinationDescription) GoString() string { +func (s ExtendedS3DestinationDescription) GoString() string { return s.String() } +// SetBucketARN sets the BucketARN field's value. +func (s *ExtendedS3DestinationDescription) SetBucketARN(v string) *ExtendedS3DestinationDescription { + s.BucketARN = &v + return s +} + // SetBufferingHints sets the BufferingHints field's value. -func (s *ElasticsearchDestinationDescription) SetBufferingHints(v *ElasticsearchBufferingHints) *ElasticsearchDestinationDescription { +func (s *ExtendedS3DestinationDescription) SetBufferingHints(v *BufferingHints) *ExtendedS3DestinationDescription { s.BufferingHints = v return s } // SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. -func (s *ElasticsearchDestinationDescription) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ElasticsearchDestinationDescription { +func (s *ExtendedS3DestinationDescription) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ExtendedS3DestinationDescription { s.CloudWatchLoggingOptions = v return s } -// SetDomainARN sets the DomainARN field's value. -func (s *ElasticsearchDestinationDescription) SetDomainARN(v string) *ElasticsearchDestinationDescription { - s.DomainARN = &v +// SetCompressionFormat sets the CompressionFormat field's value. +func (s *ExtendedS3DestinationDescription) SetCompressionFormat(v string) *ExtendedS3DestinationDescription { + s.CompressionFormat = &v return s } -// SetIndexName sets the IndexName field's value. -func (s *ElasticsearchDestinationDescription) SetIndexName(v string) *ElasticsearchDestinationDescription { - s.IndexName = &v +// SetDataFormatConversionConfiguration sets the DataFormatConversionConfiguration field's value. 
+func (s *ExtendedS3DestinationDescription) SetDataFormatConversionConfiguration(v *DataFormatConversionConfiguration) *ExtendedS3DestinationDescription { + s.DataFormatConversionConfiguration = v return s } -// SetIndexRotationPeriod sets the IndexRotationPeriod field's value. -func (s *ElasticsearchDestinationDescription) SetIndexRotationPeriod(v string) *ElasticsearchDestinationDescription { - s.IndexRotationPeriod = &v +// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. +func (s *ExtendedS3DestinationDescription) SetEncryptionConfiguration(v *EncryptionConfiguration) *ExtendedS3DestinationDescription { + s.EncryptionConfiguration = v return s } -// SetProcessingConfiguration sets the ProcessingConfiguration field's value. -func (s *ElasticsearchDestinationDescription) SetProcessingConfiguration(v *ProcessingConfiguration) *ElasticsearchDestinationDescription { - s.ProcessingConfiguration = v +// SetPrefix sets the Prefix field's value. +func (s *ExtendedS3DestinationDescription) SetPrefix(v string) *ExtendedS3DestinationDescription { + s.Prefix = &v return s } -// SetRetryOptions sets the RetryOptions field's value. -func (s *ElasticsearchDestinationDescription) SetRetryOptions(v *ElasticsearchRetryOptions) *ElasticsearchDestinationDescription { - s.RetryOptions = v +// SetProcessingConfiguration sets the ProcessingConfiguration field's value. +func (s *ExtendedS3DestinationDescription) SetProcessingConfiguration(v *ProcessingConfiguration) *ExtendedS3DestinationDescription { + s.ProcessingConfiguration = v return s } // SetRoleARN sets the RoleARN field's value. -func (s *ElasticsearchDestinationDescription) SetRoleARN(v string) *ElasticsearchDestinationDescription { +func (s *ExtendedS3DestinationDescription) SetRoleARN(v string) *ExtendedS3DestinationDescription { s.RoleARN = &v return s } -// SetS3BackupMode sets the S3BackupMode field's value. -func (s *ElasticsearchDestinationDescription) SetS3BackupMode(v string) *ElasticsearchDestinationDescription { - s.S3BackupMode = &v - return s -} - -// SetS3DestinationDescription sets the S3DestinationDescription field's value. -func (s *ElasticsearchDestinationDescription) SetS3DestinationDescription(v *S3DestinationDescription) *ElasticsearchDestinationDescription { - s.S3DestinationDescription = v +// SetS3BackupDescription sets the S3BackupDescription field's value. +func (s *ExtendedS3DestinationDescription) SetS3BackupDescription(v *S3DestinationDescription) *ExtendedS3DestinationDescription { + s.S3BackupDescription = v return s } -// SetTypeName sets the TypeName field's value. -func (s *ElasticsearchDestinationDescription) SetTypeName(v string) *ElasticsearchDestinationDescription { - s.TypeName = &v +// SetS3BackupMode sets the S3BackupMode field's value. +func (s *ExtendedS3DestinationDescription) SetS3BackupMode(v string) *ExtendedS3DestinationDescription { + s.S3BackupMode = &v return s } -// Describes an update for a destination in Amazon ES. -type ElasticsearchDestinationUpdate struct { +// Describes an update for a destination in Amazon S3. +type ExtendedS3DestinationUpdate struct { _ struct{} `type:"structure"` - // The buffering options. If no value is specified, ElasticsearchBufferingHints - // object default values are used. - BufferingHints *ElasticsearchBufferingHints `type:"structure"` + // The ARN of the S3 bucket. For more information, see Amazon Resource Names + // (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). 
+ BucketARN *string `min:"1" type:"string"` - // The CloudWatch logging options for your delivery stream. + // The buffering option. + BufferingHints *BufferingHints `type:"structure"` + + // The Amazon CloudWatch logging options for your delivery stream. CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - // The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, - // DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after - // assuming the IAM role specified in RoleARN. - DomainARN *string `min:"1" type:"string"` + // The compression format. If no value is specified, the default is UNCOMPRESSED. + CompressionFormat *string `type:"string" enum:"CompressionFormat"` - // The Elasticsearch index name. - IndexName *string `min:"1" type:"string"` + // The serializer, deserializer, and schema for converting data from the JSON + // format to the Parquet or ORC format before writing it to Amazon S3. + DataFormatConversionConfiguration *DataFormatConversionConfiguration `type:"structure"` - // The Elasticsearch index rotation period. Index rotation appends a time stamp - // to IndexName to facilitate the expiration of old data. For more information, - // see Index Rotation for Amazon Elasticsearch Service Destination (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-index-rotation). - // Default value is OneDay. - IndexRotationPeriod *string `type:"string" enum:"ElasticsearchIndexRotationPeriod"` + // The encryption configuration. If no value is specified, the default is no + // encryption. + EncryptionConfiguration *EncryptionConfiguration `type:"structure"` + + // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered + // Amazon S3 files. You can specify an extra prefix to be added in front of + // the time format prefix. If the prefix ends with a slash, it appears as a + // folder in the S3 bucket. For more information, see Amazon S3 Object Name + // Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name) + // in the Amazon Kinesis Data Firehose Developer Guide. + Prefix *string `type:"string"` // The data processing configuration. ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // The retry behavior in case Kinesis Firehose is unable to deliver documents - // to Amazon ES. The default value is 300 (5 minutes). - RetryOptions *ElasticsearchRetryOptions `type:"structure"` - - // The ARN of the IAM role to be assumed by Kinesis Firehose for calling the - // Amazon ES Configuration API and for indexing documents. For more information, - // see Amazon S3 Bucket Access (http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3). + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). RoleARN *string `min:"1" type:"string"` - // The Amazon S3 destination. - S3Update *S3DestinationUpdate `type:"structure"` + // Enables or disables Amazon S3 backup mode. + S3BackupMode *string `type:"string" enum:"S3BackupMode"` - // The Elasticsearch type name. - TypeName *string `min:"1" type:"string"` + // The Amazon S3 destination for backup. 
+ S3BackupUpdate *S3DestinationUpdate `type:"structure"` } // String returns the string representation -func (s ElasticsearchDestinationUpdate) String() string { +func (s ExtendedS3DestinationUpdate) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchDestinationUpdate) GoString() string { +func (s ExtendedS3DestinationUpdate) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ElasticsearchDestinationUpdate) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ElasticsearchDestinationUpdate"} - if s.DomainARN != nil && len(*s.DomainARN) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DomainARN", 1)) - } - if s.IndexName != nil && len(*s.IndexName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) +func (s *ExtendedS3DestinationUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ExtendedS3DestinationUpdate"} + if s.BucketARN != nil && len(*s.BucketARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BucketARN", 1)) } if s.RoleARN != nil && len(*s.RoleARN) < 1 { invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) } - if s.TypeName != nil && len(*s.TypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) - } if s.BufferingHints != nil { if err := s.BufferingHints.Validate(); err != nil { invalidParams.AddNested("BufferingHints", err.(request.ErrInvalidParams)) } } + if s.DataFormatConversionConfiguration != nil { + if err := s.DataFormatConversionConfiguration.Validate(); err != nil { + invalidParams.AddNested("DataFormatConversionConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.EncryptionConfiguration != nil { + if err := s.EncryptionConfiguration.Validate(); err != nil { + invalidParams.AddNested("EncryptionConfiguration", err.(request.ErrInvalidParams)) + } + } if s.ProcessingConfiguration != nil { if err := s.ProcessingConfiguration.Validate(); err != nil { invalidParams.AddNested("ProcessingConfiguration", err.(request.ErrInvalidParams)) } } - if s.S3Update != nil { - if err := s.S3Update.Validate(); err != nil { - invalidParams.AddNested("S3Update", err.(request.ErrInvalidParams)) + if s.S3BackupUpdate != nil { + if err := s.S3BackupUpdate.Validate(); err != nil { + invalidParams.AddNested("S3BackupUpdate", err.(request.ErrInvalidParams)) } } @@ -1931,124 +3170,163 @@ func (s *ElasticsearchDestinationUpdate) Validate() error { return nil } +// SetBucketARN sets the BucketARN field's value. +func (s *ExtendedS3DestinationUpdate) SetBucketARN(v string) *ExtendedS3DestinationUpdate { + s.BucketARN = &v + return s +} + // SetBufferingHints sets the BufferingHints field's value. -func (s *ElasticsearchDestinationUpdate) SetBufferingHints(v *ElasticsearchBufferingHints) *ElasticsearchDestinationUpdate { +func (s *ExtendedS3DestinationUpdate) SetBufferingHints(v *BufferingHints) *ExtendedS3DestinationUpdate { s.BufferingHints = v return s } // SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. -func (s *ElasticsearchDestinationUpdate) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ElasticsearchDestinationUpdate { +func (s *ExtendedS3DestinationUpdate) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ExtendedS3DestinationUpdate { s.CloudWatchLoggingOptions = v return s } -// SetDomainARN sets the DomainARN field's value. 
-func (s *ElasticsearchDestinationUpdate) SetDomainARN(v string) *ElasticsearchDestinationUpdate { - s.DomainARN = &v +// SetCompressionFormat sets the CompressionFormat field's value. +func (s *ExtendedS3DestinationUpdate) SetCompressionFormat(v string) *ExtendedS3DestinationUpdate { + s.CompressionFormat = &v return s } -// SetIndexName sets the IndexName field's value. -func (s *ElasticsearchDestinationUpdate) SetIndexName(v string) *ElasticsearchDestinationUpdate { - s.IndexName = &v +// SetDataFormatConversionConfiguration sets the DataFormatConversionConfiguration field's value. +func (s *ExtendedS3DestinationUpdate) SetDataFormatConversionConfiguration(v *DataFormatConversionConfiguration) *ExtendedS3DestinationUpdate { + s.DataFormatConversionConfiguration = v return s } -// SetIndexRotationPeriod sets the IndexRotationPeriod field's value. -func (s *ElasticsearchDestinationUpdate) SetIndexRotationPeriod(v string) *ElasticsearchDestinationUpdate { - s.IndexRotationPeriod = &v +// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. +func (s *ExtendedS3DestinationUpdate) SetEncryptionConfiguration(v *EncryptionConfiguration) *ExtendedS3DestinationUpdate { + s.EncryptionConfiguration = v return s } -// SetProcessingConfiguration sets the ProcessingConfiguration field's value. -func (s *ElasticsearchDestinationUpdate) SetProcessingConfiguration(v *ProcessingConfiguration) *ElasticsearchDestinationUpdate { - s.ProcessingConfiguration = v +// SetPrefix sets the Prefix field's value. +func (s *ExtendedS3DestinationUpdate) SetPrefix(v string) *ExtendedS3DestinationUpdate { + s.Prefix = &v return s } -// SetRetryOptions sets the RetryOptions field's value. -func (s *ElasticsearchDestinationUpdate) SetRetryOptions(v *ElasticsearchRetryOptions) *ElasticsearchDestinationUpdate { - s.RetryOptions = v +// SetProcessingConfiguration sets the ProcessingConfiguration field's value. +func (s *ExtendedS3DestinationUpdate) SetProcessingConfiguration(v *ProcessingConfiguration) *ExtendedS3DestinationUpdate { + s.ProcessingConfiguration = v return s } // SetRoleARN sets the RoleARN field's value. -func (s *ElasticsearchDestinationUpdate) SetRoleARN(v string) *ElasticsearchDestinationUpdate { +func (s *ExtendedS3DestinationUpdate) SetRoleARN(v string) *ExtendedS3DestinationUpdate { s.RoleARN = &v return s } -// SetS3Update sets the S3Update field's value. -func (s *ElasticsearchDestinationUpdate) SetS3Update(v *S3DestinationUpdate) *ElasticsearchDestinationUpdate { - s.S3Update = v +// SetS3BackupMode sets the S3BackupMode field's value. +func (s *ExtendedS3DestinationUpdate) SetS3BackupMode(v string) *ExtendedS3DestinationUpdate { + s.S3BackupMode = &v return s } -// SetTypeName sets the TypeName field's value. -func (s *ElasticsearchDestinationUpdate) SetTypeName(v string) *ElasticsearchDestinationUpdate { - s.TypeName = &v +// SetS3BackupUpdate sets the S3BackupUpdate field's value. +func (s *ExtendedS3DestinationUpdate) SetS3BackupUpdate(v *S3DestinationUpdate) *ExtendedS3DestinationUpdate { + s.S3BackupUpdate = v return s } -// Configures retry behavior in case Kinesis Firehose is unable to deliver documents -// to Amazon ES. -type ElasticsearchRetryOptions struct { +// The native Hive / HCatalog JsonSerDe. Used by Kinesis Data Firehose for deserializing +// data, which means converting it from the JSON format in preparation for serializing +// it to the Parquet or ORC format. 
This is one of two deserializers you can +// choose, depending on which one offers the functionality you need. The other +// option is the OpenX SerDe. +type HiveJsonSerDe struct { _ struct{} `type:"structure"` - // After an initial failure to deliver to Amazon ES, the total amount of time - // during which Kinesis Firehose re-attempts delivery (including the first attempt). - // After this time has elapsed, the failed documents are written to Amazon S3. - // Default value is 300 seconds (5 minutes). A value of 0 (zero) results in - // no retries. - DurationInSeconds *int64 `type:"integer"` + // Indicates how you want Kinesis Data Firehose to parse the date and time stamps + // that may be present in your input data JSON. To specify these format strings, + // follow the pattern syntax of JodaTime's DateTimeFormat format strings. For + // more information, see Class DateTimeFormat (https://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html). + // You can also use the special value millis to parse time stamps in epoch milliseconds. + // If you don't specify a format, Kinesis Data Firehose uses java.sql.Timestamp::valueOf + // by default. + TimestampFormats []*string `type:"list"` } // String returns the string representation -func (s ElasticsearchRetryOptions) String() string { +func (s HiveJsonSerDe) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchRetryOptions) GoString() string { +func (s HiveJsonSerDe) GoString() string { return s.String() } -// SetDurationInSeconds sets the DurationInSeconds field's value. -func (s *ElasticsearchRetryOptions) SetDurationInSeconds(v int64) *ElasticsearchRetryOptions { - s.DurationInSeconds = &v +// SetTimestampFormats sets the TimestampFormats field's value. +func (s *HiveJsonSerDe) SetTimestampFormats(v []*string) *HiveJsonSerDe { + s.TimestampFormats = v return s } -// Describes the encryption for a destination in Amazon S3. -type EncryptionConfiguration struct { +// Specifies the deserializer you want to use to convert the format of the input +// data. +type InputFormatConfiguration struct { _ struct{} `type:"structure"` - // The encryption key. - KMSEncryptionConfig *KMSEncryptionConfig `type:"structure"` + // Specifies which deserializer to use. You can choose either the Apache Hive + // JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects + // the request. + Deserializer *Deserializer `type:"structure"` +} - // Specifically override existing encryption information to ensure that no encryption - // is used. - NoEncryptionConfig *string `type:"string" enum:"NoEncryptionConfig"` +// String returns the string representation +func (s InputFormatConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputFormatConfiguration) GoString() string { + return s.String() +} + +// SetDeserializer sets the Deserializer field's value. +func (s *InputFormatConfiguration) SetDeserializer(v *Deserializer) *InputFormatConfiguration { + s.Deserializer = v + return s +} + +// Describes an encryption key for a destination in Amazon S3. +type KMSEncryptionConfig struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the encryption key. Must belong to the + // same AWS Region as the destination Amazon S3 bucket. 
For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + // + // AWSKMSKeyARN is a required field + AWSKMSKeyARN *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s EncryptionConfiguration) String() string { +func (s KMSEncryptionConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EncryptionConfiguration) GoString() string { +func (s KMSEncryptionConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *EncryptionConfiguration) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "EncryptionConfiguration"} - if s.KMSEncryptionConfig != nil { - if err := s.KMSEncryptionConfig.Validate(); err != nil { - invalidParams.AddNested("KMSEncryptionConfig", err.(request.ErrInvalidParams)) - } +func (s *KMSEncryptionConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KMSEncryptionConfig"} + if s.AWSKMSKeyARN == nil { + invalidParams.Add(request.NewErrParamRequired("AWSKMSKeyARN")) + } + if s.AWSKMSKeyARN != nil && len(*s.AWSKMSKeyARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AWSKMSKeyARN", 1)) } if invalidParams.Len() > 0 { @@ -2057,80 +3335,49 @@ func (s *EncryptionConfiguration) Validate() error { return nil } -// SetKMSEncryptionConfig sets the KMSEncryptionConfig field's value. -func (s *EncryptionConfiguration) SetKMSEncryptionConfig(v *KMSEncryptionConfig) *EncryptionConfiguration { - s.KMSEncryptionConfig = v - return s -} - -// SetNoEncryptionConfig sets the NoEncryptionConfig field's value. -func (s *EncryptionConfiguration) SetNoEncryptionConfig(v string) *EncryptionConfiguration { - s.NoEncryptionConfig = &v +// SetAWSKMSKeyARN sets the AWSKMSKeyARN field's value. +func (s *KMSEncryptionConfig) SetAWSKMSKeyARN(v string) *KMSEncryptionConfig { + s.AWSKMSKeyARN = &v return s } -// Describes the configuration of a destination in Amazon S3. -type ExtendedS3DestinationConfiguration struct { +// The stream and role Amazon Resource Names (ARNs) for a Kinesis data stream +// used as the source for a delivery stream. +type KinesisStreamSourceConfiguration struct { _ struct{} `type:"structure"` - // The ARN of the S3 bucket. + // The ARN of the source Kinesis data stream. For more information, see Amazon + // Kinesis Data Streams ARN Format (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kinesis-streams). // - // BucketARN is a required field - BucketARN *string `min:"1" type:"string" required:"true"` - - // The buffering option. - BufferingHints *BufferingHints `type:"structure"` - - // The CloudWatch logging options for your delivery stream. - CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - - // The compression format. If no value is specified, the default is UNCOMPRESSED. - CompressionFormat *string `type:"string" enum:"CompressionFormat"` - - // The encryption configuration. If no value is specified, the default is no - // encryption. - EncryptionConfiguration *EncryptionConfiguration `type:"structure"` - - // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered - // S3 files. You can specify an extra prefix to be added in front of the time - // format prefix. If the prefix ends with a slash, it appears as a folder in - // the S3 bucket. 
For more information, see Amazon S3 Object Name Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html) - // in the Amazon Kinesis Firehose Developer Guide. - Prefix *string `type:"string"` - - // The data processing configuration. - ProcessingConfiguration *ProcessingConfiguration `type:"structure"` + // KinesisStreamARN is a required field + KinesisStreamARN *string `min:"1" type:"string" required:"true"` - // The ARN of the AWS credentials. + // The ARN of the role that provides access to the source Kinesis data stream. + // For more information, see AWS Identity and Access Management (IAM) ARN Format + // (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-iam). // // RoleARN is a required field RoleARN *string `min:"1" type:"string" required:"true"` - - // The configuration for backup in Amazon S3. - S3BackupConfiguration *S3DestinationConfiguration `type:"structure"` - - // The Amazon S3 backup mode. - S3BackupMode *string `type:"string" enum:"S3BackupMode"` } // String returns the string representation -func (s ExtendedS3DestinationConfiguration) String() string { +func (s KinesisStreamSourceConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ExtendedS3DestinationConfiguration) GoString() string { +func (s KinesisStreamSourceConfiguration) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ExtendedS3DestinationConfiguration) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ExtendedS3DestinationConfiguration"} - if s.BucketARN == nil { - invalidParams.Add(request.NewErrParamRequired("BucketARN")) +func (s *KinesisStreamSourceConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisStreamSourceConfiguration"} + if s.KinesisStreamARN == nil { + invalidParams.Add(request.NewErrParamRequired("KinesisStreamARN")) } - if s.BucketARN != nil && len(*s.BucketARN) < 1 { - invalidParams.Add(request.NewErrParamMinLen("BucketARN", 1)) + if s.KinesisStreamARN != nil && len(*s.KinesisStreamARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("KinesisStreamARN", 1)) } if s.RoleARN == nil { invalidParams.Add(request.NewErrParamRequired("RoleARN")) @@ -2138,26 +3385,6 @@ func (s *ExtendedS3DestinationConfiguration) Validate() error { if s.RoleARN != nil && len(*s.RoleARN) < 1 { invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) } - if s.BufferingHints != nil { - if err := s.BufferingHints.Validate(); err != nil { - invalidParams.AddNested("BufferingHints", err.(request.ErrInvalidParams)) - } - } - if s.EncryptionConfiguration != nil { - if err := s.EncryptionConfiguration.Validate(); err != nil { - invalidParams.AddNested("EncryptionConfiguration", err.(request.ErrInvalidParams)) - } - } - if s.ProcessingConfiguration != nil { - if err := s.ProcessingConfiguration.Validate(); err != nil { - invalidParams.AddNested("ProcessingConfiguration", err.(request.ErrInvalidParams)) - } - } - if s.S3BackupConfiguration != nil { - if err := s.S3BackupConfiguration.Validate(); err != nil { - invalidParams.AddNested("S3BackupConfiguration", err.(request.ErrInvalidParams)) - } - } if invalidParams.Len() > 0 { return invalidParams @@ -2165,264 +3392,210 @@ func (s *ExtendedS3DestinationConfiguration) Validate() error { return nil } -// SetBucketARN sets the BucketARN field's value. 
-func (s *ExtendedS3DestinationConfiguration) SetBucketARN(v string) *ExtendedS3DestinationConfiguration { - s.BucketARN = &v +// SetKinesisStreamARN sets the KinesisStreamARN field's value. +func (s *KinesisStreamSourceConfiguration) SetKinesisStreamARN(v string) *KinesisStreamSourceConfiguration { + s.KinesisStreamARN = &v return s } -// SetBufferingHints sets the BufferingHints field's value. -func (s *ExtendedS3DestinationConfiguration) SetBufferingHints(v *BufferingHints) *ExtendedS3DestinationConfiguration { - s.BufferingHints = v +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisStreamSourceConfiguration) SetRoleARN(v string) *KinesisStreamSourceConfiguration { + s.RoleARN = &v return s } -// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. -func (s *ExtendedS3DestinationConfiguration) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ExtendedS3DestinationConfiguration { - s.CloudWatchLoggingOptions = v - return s -} +// Details about a Kinesis data stream used as the source for a Kinesis Data +// Firehose delivery stream. +type KinesisStreamSourceDescription struct { + _ struct{} `type:"structure"` -// SetCompressionFormat sets the CompressionFormat field's value. -func (s *ExtendedS3DestinationConfiguration) SetCompressionFormat(v string) *ExtendedS3DestinationConfiguration { - s.CompressionFormat = &v - return s -} + // Kinesis Data Firehose starts retrieving records from the Kinesis data stream + // starting with this time stamp. + DeliveryStartTimestamp *time.Time `type:"timestamp"` -// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. -func (s *ExtendedS3DestinationConfiguration) SetEncryptionConfiguration(v *EncryptionConfiguration) *ExtendedS3DestinationConfiguration { - s.EncryptionConfiguration = v - return s + // The Amazon Resource Name (ARN) of the source Kinesis data stream. For more + // information, see Amazon Kinesis Data Streams ARN Format (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kinesis-streams). + KinesisStreamARN *string `min:"1" type:"string"` + + // The ARN of the role used by the source Kinesis data stream. For more information, + // see AWS Identity and Access Management (IAM) ARN Format (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-iam). + RoleARN *string `min:"1" type:"string"` } -// SetPrefix sets the Prefix field's value. -func (s *ExtendedS3DestinationConfiguration) SetPrefix(v string) *ExtendedS3DestinationConfiguration { - s.Prefix = &v - return s +// String returns the string representation +func (s KinesisStreamSourceDescription) String() string { + return awsutil.Prettify(s) } -// SetProcessingConfiguration sets the ProcessingConfiguration field's value. -func (s *ExtendedS3DestinationConfiguration) SetProcessingConfiguration(v *ProcessingConfiguration) *ExtendedS3DestinationConfiguration { - s.ProcessingConfiguration = v - return s +// GoString returns the string representation +func (s KinesisStreamSourceDescription) GoString() string { + return s.String() } -// SetRoleARN sets the RoleARN field's value. -func (s *ExtendedS3DestinationConfiguration) SetRoleARN(v string) *ExtendedS3DestinationConfiguration { - s.RoleARN = &v +// SetDeliveryStartTimestamp sets the DeliveryStartTimestamp field's value. 
+func (s *KinesisStreamSourceDescription) SetDeliveryStartTimestamp(v time.Time) *KinesisStreamSourceDescription { + s.DeliveryStartTimestamp = &v return s } -// SetS3BackupConfiguration sets the S3BackupConfiguration field's value. -func (s *ExtendedS3DestinationConfiguration) SetS3BackupConfiguration(v *S3DestinationConfiguration) *ExtendedS3DestinationConfiguration { - s.S3BackupConfiguration = v +// SetKinesisStreamARN sets the KinesisStreamARN field's value. +func (s *KinesisStreamSourceDescription) SetKinesisStreamARN(v string) *KinesisStreamSourceDescription { + s.KinesisStreamARN = &v return s } -// SetS3BackupMode sets the S3BackupMode field's value. -func (s *ExtendedS3DestinationConfiguration) SetS3BackupMode(v string) *ExtendedS3DestinationConfiguration { - s.S3BackupMode = &v +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisStreamSourceDescription) SetRoleARN(v string) *KinesisStreamSourceDescription { + s.RoleARN = &v return s } -// Describes a destination in Amazon S3. -type ExtendedS3DestinationDescription struct { +type ListDeliveryStreamsInput struct { _ struct{} `type:"structure"` - // The ARN of the S3 bucket. - // - // BucketARN is a required field - BucketARN *string `min:"1" type:"string" required:"true"` - - // The buffering option. - // - // BufferingHints is a required field - BufferingHints *BufferingHints `type:"structure" required:"true"` - - // The CloudWatch logging options for your delivery stream. - CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - - // The compression format. If no value is specified, the default is UNCOMPRESSED. + // The delivery stream type. This can be one of the following values: // - // CompressionFormat is a required field - CompressionFormat *string `type:"string" required:"true" enum:"CompressionFormat"` - - // The encryption configuration. If no value is specified, the default is no - // encryption. + // * DirectPut: Provider applications access the delivery stream directly. // - // EncryptionConfiguration is a required field - EncryptionConfiguration *EncryptionConfiguration `type:"structure" required:"true"` - - // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered - // S3 files. You can specify an extra prefix to be added in front of the time - // format prefix. If the prefix ends with a slash, it appears as a folder in - // the S3 bucket. For more information, see Amazon S3 Object Name Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html) - // in the Amazon Kinesis Firehose Developer Guide. - Prefix *string `type:"string"` - - // The data processing configuration. - ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - - // The ARN of the AWS credentials. + // * KinesisStreamAsSource: The delivery stream uses a Kinesis data stream + // as a source. // - // RoleARN is a required field - RoleARN *string `min:"1" type:"string" required:"true"` + // This parameter is optional. If this parameter is omitted, delivery streams + // of all types are returned. + DeliveryStreamType *string `type:"string" enum:"DeliveryStreamType"` - // The configuration for backup in Amazon S3. - S3BackupDescription *S3DestinationDescription `type:"structure"` + // The list of delivery streams returned by this call to ListDeliveryStreams + // will start with the delivery stream whose name comes alphabetically immediately + // after the name you specify in ExclusiveStartDeliveryStreamName. 
+ ExclusiveStartDeliveryStreamName *string `min:"1" type:"string"` - // The Amazon S3 backup mode. - S3BackupMode *string `type:"string" enum:"S3BackupMode"` + // The maximum number of delivery streams to list. The default value is 10. + Limit *int64 `min:"1" type:"integer"` } // String returns the string representation -func (s ExtendedS3DestinationDescription) String() string { +func (s ListDeliveryStreamsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ExtendedS3DestinationDescription) GoString() string { +func (s ListDeliveryStreamsInput) GoString() string { return s.String() } -// SetBucketARN sets the BucketARN field's value. -func (s *ExtendedS3DestinationDescription) SetBucketARN(v string) *ExtendedS3DestinationDescription { - s.BucketARN = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListDeliveryStreamsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListDeliveryStreamsInput"} + if s.ExclusiveStartDeliveryStreamName != nil && len(*s.ExclusiveStartDeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartDeliveryStreamName", 1)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } -// SetBufferingHints sets the BufferingHints field's value. -func (s *ExtendedS3DestinationDescription) SetBufferingHints(v *BufferingHints) *ExtendedS3DestinationDescription { - s.BufferingHints = v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. -func (s *ExtendedS3DestinationDescription) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ExtendedS3DestinationDescription { - s.CloudWatchLoggingOptions = v +// SetDeliveryStreamType sets the DeliveryStreamType field's value. +func (s *ListDeliveryStreamsInput) SetDeliveryStreamType(v string) *ListDeliveryStreamsInput { + s.DeliveryStreamType = &v return s } -// SetCompressionFormat sets the CompressionFormat field's value. -func (s *ExtendedS3DestinationDescription) SetCompressionFormat(v string) *ExtendedS3DestinationDescription { - s.CompressionFormat = &v +// SetExclusiveStartDeliveryStreamName sets the ExclusiveStartDeliveryStreamName field's value. +func (s *ListDeliveryStreamsInput) SetExclusiveStartDeliveryStreamName(v string) *ListDeliveryStreamsInput { + s.ExclusiveStartDeliveryStreamName = &v return s } -// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. -func (s *ExtendedS3DestinationDescription) SetEncryptionConfiguration(v *EncryptionConfiguration) *ExtendedS3DestinationDescription { - s.EncryptionConfiguration = v +// SetLimit sets the Limit field's value. +func (s *ListDeliveryStreamsInput) SetLimit(v int64) *ListDeliveryStreamsInput { + s.Limit = &v return s } -// SetPrefix sets the Prefix field's value. -func (s *ExtendedS3DestinationDescription) SetPrefix(v string) *ExtendedS3DestinationDescription { - s.Prefix = &v - return s +type ListDeliveryStreamsOutput struct { + _ struct{} `type:"structure"` + + // The names of the delivery streams. + // + // DeliveryStreamNames is a required field + DeliveryStreamNames []*string `type:"list" required:"true"` + + // Indicates whether there are more delivery streams available to list. 
+ // + // HasMoreDeliveryStreams is a required field + HasMoreDeliveryStreams *bool `type:"boolean" required:"true"` } -// SetProcessingConfiguration sets the ProcessingConfiguration field's value. -func (s *ExtendedS3DestinationDescription) SetProcessingConfiguration(v *ProcessingConfiguration) *ExtendedS3DestinationDescription { - s.ProcessingConfiguration = v - return s +// String returns the string representation +func (s ListDeliveryStreamsOutput) String() string { + return awsutil.Prettify(s) } -// SetRoleARN sets the RoleARN field's value. -func (s *ExtendedS3DestinationDescription) SetRoleARN(v string) *ExtendedS3DestinationDescription { - s.RoleARN = &v - return s +// GoString returns the string representation +func (s ListDeliveryStreamsOutput) GoString() string { + return s.String() } -// SetS3BackupDescription sets the S3BackupDescription field's value. -func (s *ExtendedS3DestinationDescription) SetS3BackupDescription(v *S3DestinationDescription) *ExtendedS3DestinationDescription { - s.S3BackupDescription = v +// SetDeliveryStreamNames sets the DeliveryStreamNames field's value. +func (s *ListDeliveryStreamsOutput) SetDeliveryStreamNames(v []*string) *ListDeliveryStreamsOutput { + s.DeliveryStreamNames = v return s } -// SetS3BackupMode sets the S3BackupMode field's value. -func (s *ExtendedS3DestinationDescription) SetS3BackupMode(v string) *ExtendedS3DestinationDescription { - s.S3BackupMode = &v +// SetHasMoreDeliveryStreams sets the HasMoreDeliveryStreams field's value. +func (s *ListDeliveryStreamsOutput) SetHasMoreDeliveryStreams(v bool) *ListDeliveryStreamsOutput { + s.HasMoreDeliveryStreams = &v return s } -// Describes an update for a destination in Amazon S3. -type ExtendedS3DestinationUpdate struct { +type ListTagsForDeliveryStreamInput struct { _ struct{} `type:"structure"` - // The ARN of the S3 bucket. - BucketARN *string `min:"1" type:"string"` - - // The buffering option. - BufferingHints *BufferingHints `type:"structure"` - - // The CloudWatch logging options for your delivery stream. - CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - - // The compression format. If no value is specified, the default is UNCOMPRESSED. - CompressionFormat *string `type:"string" enum:"CompressionFormat"` - - // The encryption configuration. If no value is specified, the default is no - // encryption. - EncryptionConfiguration *EncryptionConfiguration `type:"structure"` - - // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered - // S3 files. You can specify an extra prefix to be added in front of the time - // format prefix. If the prefix ends with a slash, it appears as a folder in - // the S3 bucket. For more information, see Amazon S3 Object Name Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html) - // in the Amazon Kinesis Firehose Developer Guide. - Prefix *string `type:"string"` - - // The data processing configuration. - ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - - // The ARN of the AWS credentials. - RoleARN *string `min:"1" type:"string"` + // The name of the delivery stream whose tags you want to list. + // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` - // Enables or disables Amazon S3 backup mode. - S3BackupMode *string `type:"string" enum:"S3BackupMode"` + // The key to use as the starting point for the list of tags. 
If you set this + // parameter, ListTagsForDeliveryStream gets all tags that occur after ExclusiveStartTagKey. + ExclusiveStartTagKey *string `min:"1" type:"string"` - // The Amazon S3 destination for backup. - S3BackupUpdate *S3DestinationUpdate `type:"structure"` + // The number of tags to return. If this number is less than the total number + // of tags associated with the delivery stream, HasMoreTags is set to true in + // the response. To list additional tags, set ExclusiveStartTagKey to the last + // key in the response. + Limit *int64 `min:"1" type:"integer"` } // String returns the string representation -func (s ExtendedS3DestinationUpdate) String() string { +func (s ListTagsForDeliveryStreamInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ExtendedS3DestinationUpdate) GoString() string { +func (s ListTagsForDeliveryStreamInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ExtendedS3DestinationUpdate) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ExtendedS3DestinationUpdate"} - if s.BucketARN != nil && len(*s.BucketARN) < 1 { - invalidParams.Add(request.NewErrParamMinLen("BucketARN", 1)) - } - if s.RoleARN != nil && len(*s.RoleARN) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) - } - if s.BufferingHints != nil { - if err := s.BufferingHints.Validate(); err != nil { - invalidParams.AddNested("BufferingHints", err.(request.ErrInvalidParams)) - } +func (s *ListTagsForDeliveryStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsForDeliveryStreamInput"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) } - if s.EncryptionConfiguration != nil { - if err := s.EncryptionConfiguration.Validate(); err != nil { - invalidParams.AddNested("EncryptionConfiguration", err.(request.ErrInvalidParams)) - } + if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) } - if s.ProcessingConfiguration != nil { - if err := s.ProcessingConfiguration.Validate(); err != nil { - invalidParams.AddNested("ProcessingConfiguration", err.(request.ErrInvalidParams)) - } + if s.ExclusiveStartTagKey != nil && len(*s.ExclusiveStartTagKey) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartTagKey", 1)) } - if s.S3BackupUpdate != nil { - if err := s.S3BackupUpdate.Validate(); err != nil { - invalidParams.AddNested("S3BackupUpdate", err.(request.ErrInvalidParams)) - } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) } if invalidParams.Len() > 0 { @@ -2431,149 +3604,202 @@ func (s *ExtendedS3DestinationUpdate) Validate() error { return nil } -// SetBucketARN sets the BucketARN field's value. -func (s *ExtendedS3DestinationUpdate) SetBucketARN(v string) *ExtendedS3DestinationUpdate { - s.BucketARN = &v - return s -} - -// SetBufferingHints sets the BufferingHints field's value. -func (s *ExtendedS3DestinationUpdate) SetBufferingHints(v *BufferingHints) *ExtendedS3DestinationUpdate { - s.BufferingHints = v +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *ListTagsForDeliveryStreamInput) SetDeliveryStreamName(v string) *ListTagsForDeliveryStreamInput { + s.DeliveryStreamName = &v return s } -// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. 
-func (s *ExtendedS3DestinationUpdate) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *ExtendedS3DestinationUpdate { - s.CloudWatchLoggingOptions = v +// SetExclusiveStartTagKey sets the ExclusiveStartTagKey field's value. +func (s *ListTagsForDeliveryStreamInput) SetExclusiveStartTagKey(v string) *ListTagsForDeliveryStreamInput { + s.ExclusiveStartTagKey = &v return s } -// SetCompressionFormat sets the CompressionFormat field's value. -func (s *ExtendedS3DestinationUpdate) SetCompressionFormat(v string) *ExtendedS3DestinationUpdate { - s.CompressionFormat = &v +// SetLimit sets the Limit field's value. +func (s *ListTagsForDeliveryStreamInput) SetLimit(v int64) *ListTagsForDeliveryStreamInput { + s.Limit = &v return s } -// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. -func (s *ExtendedS3DestinationUpdate) SetEncryptionConfiguration(v *EncryptionConfiguration) *ExtendedS3DestinationUpdate { - s.EncryptionConfiguration = v - return s -} +type ListTagsForDeliveryStreamOutput struct { + _ struct{} `type:"structure"` -// SetPrefix sets the Prefix field's value. -func (s *ExtendedS3DestinationUpdate) SetPrefix(v string) *ExtendedS3DestinationUpdate { - s.Prefix = &v - return s -} + // If this is true in the response, more tags are available. To list the remaining + // tags, set ExclusiveStartTagKey to the key of the last tag returned and call + // ListTagsForDeliveryStream again. + // + // HasMoreTags is a required field + HasMoreTags *bool `type:"boolean" required:"true"` -// SetProcessingConfiguration sets the ProcessingConfiguration field's value. -func (s *ExtendedS3DestinationUpdate) SetProcessingConfiguration(v *ProcessingConfiguration) *ExtendedS3DestinationUpdate { - s.ProcessingConfiguration = v - return s + // A list of tags associated with DeliveryStreamName, starting with the first + // tag after ExclusiveStartTagKey and up to the specified Limit. + // + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` } -// SetRoleARN sets the RoleARN field's value. -func (s *ExtendedS3DestinationUpdate) SetRoleARN(v string) *ExtendedS3DestinationUpdate { - s.RoleARN = &v - return s +// String returns the string representation +func (s ListTagsForDeliveryStreamOutput) String() string { + return awsutil.Prettify(s) } -// SetS3BackupMode sets the S3BackupMode field's value. -func (s *ExtendedS3DestinationUpdate) SetS3BackupMode(v string) *ExtendedS3DestinationUpdate { - s.S3BackupMode = &v +// GoString returns the string representation +func (s ListTagsForDeliveryStreamOutput) GoString() string { + return s.String() +} + +// SetHasMoreTags sets the HasMoreTags field's value. +func (s *ListTagsForDeliveryStreamOutput) SetHasMoreTags(v bool) *ListTagsForDeliveryStreamOutput { + s.HasMoreTags = &v return s } -// SetS3BackupUpdate sets the S3BackupUpdate field's value. -func (s *ExtendedS3DestinationUpdate) SetS3BackupUpdate(v *S3DestinationUpdate) *ExtendedS3DestinationUpdate { - s.S3BackupUpdate = v +// SetTags sets the Tags field's value. +func (s *ListTagsForDeliveryStreamOutput) SetTags(v []*Tag) *ListTagsForDeliveryStreamOutput { + s.Tags = v return s } -// Describes an encryption key for a destination in Amazon S3. -type KMSEncryptionConfig struct { +// The OpenX SerDe. Used by Kinesis Data Firehose for deserializing data, which +// means converting it from the JSON format in preparation for serializing it +// to the Parquet or ORC format. 
This is one of two deserializers you can choose, +// depending on which one offers the functionality you need. The other option +// is the native Hive / HCatalog JsonSerDe. +type OpenXJsonSerDe struct { _ struct{} `type:"structure"` - // The ARN of the encryption key. Must belong to the same region as the destination - // Amazon S3 bucket. + // When set to true, which is the default, Kinesis Data Firehose converts JSON + // keys to lowercase before deserializing them. + CaseInsensitive *bool `type:"boolean"` + + // Maps column names to JSON keys that aren't identical to the column names. + // This is useful when the JSON contains keys that are Hive keywords. For example, + // timestamp is a Hive keyword. If you have a JSON key named timestamp, set + // this parameter to {"ts": "timestamp"} to map this key to a column named ts. + ColumnToJsonKeyMappings map[string]*string `type:"map"` + + // When set to true, specifies that the names of the keys include dots and that + // you want Kinesis Data Firehose to replace them with underscores. This is + // useful because Apache Hive does not allow dots in column names. For example, + // if the JSON contains a key whose name is "a.b", you can define the column + // name to be "a_b" when using this option. // - // AWSKMSKeyARN is a required field - AWSKMSKeyARN *string `min:"1" type:"string" required:"true"` + // The default is false. + ConvertDotsInJsonKeysToUnderscores *bool `type:"boolean"` } // String returns the string representation -func (s KMSEncryptionConfig) String() string { +func (s OpenXJsonSerDe) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s KMSEncryptionConfig) GoString() string { +func (s OpenXJsonSerDe) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *KMSEncryptionConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "KMSEncryptionConfig"} - if s.AWSKMSKeyARN == nil { - invalidParams.Add(request.NewErrParamRequired("AWSKMSKeyARN")) - } - if s.AWSKMSKeyARN != nil && len(*s.AWSKMSKeyARN) < 1 { - invalidParams.Add(request.NewErrParamMinLen("AWSKMSKeyARN", 1)) - } +// SetCaseInsensitive sets the CaseInsensitive field's value. +func (s *OpenXJsonSerDe) SetCaseInsensitive(v bool) *OpenXJsonSerDe { + s.CaseInsensitive = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetColumnToJsonKeyMappings sets the ColumnToJsonKeyMappings field's value. +func (s *OpenXJsonSerDe) SetColumnToJsonKeyMappings(v map[string]*string) *OpenXJsonSerDe { + s.ColumnToJsonKeyMappings = v + return s } -// SetAWSKMSKeyARN sets the AWSKMSKeyARN field's value. -func (s *KMSEncryptionConfig) SetAWSKMSKeyARN(v string) *KMSEncryptionConfig { - s.AWSKMSKeyARN = &v +// SetConvertDotsInJsonKeysToUnderscores sets the ConvertDotsInJsonKeysToUnderscores field's value. +func (s *OpenXJsonSerDe) SetConvertDotsInJsonKeysToUnderscores(v bool) *OpenXJsonSerDe { + s.ConvertDotsInJsonKeysToUnderscores = &v return s } -// The stream and role ARNs for a Kinesis stream used as the source for a delivery -// stream. -type KinesisStreamSourceConfiguration struct { +// A serializer to use for converting data to the ORC format before storing +// it in Amazon S3. For more information, see Apache ORC (https://orc.apache.org/docs/). +type OrcSerDe struct { _ struct{} `type:"structure"` - // The ARN of the source Kinesis stream. + // The Hadoop Distributed File System (HDFS) block size. 
This is useful if you + // intend to copy the data from Amazon S3 to HDFS before querying. The default + // is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value + // for padding calculations. + BlockSizeBytes *int64 `min:"6.7108864e+07" type:"integer"` + + // The column names for which you want Kinesis Data Firehose to create bloom + // filters. The default is null. + BloomFilterColumns []*string `type:"list"` + + // The Bloom filter false positive probability (FPP). The lower the FPP, the + // bigger the Bloom filter. The default value is 0.05, the minimum is 0, and + // the maximum is 1. + BloomFilterFalsePositiveProbability *float64 `type:"double"` + + // The compression code to use over data blocks. The default is SNAPPY. + Compression *string `type:"string" enum:"OrcCompression"` + + // Represents the fraction of the total number of non-null rows. To turn off + // dictionary encoding, set this fraction to a number that is less than the + // number of distinct keys in a dictionary. To always use dictionary encoding, + // set this threshold to 1. + DictionaryKeyThreshold *float64 `type:"double"` + + // Set this to true to indicate that you want stripes to be padded to the HDFS + // block boundaries. This is useful if you intend to copy the data from Amazon + // S3 to HDFS before querying. The default is false. + EnablePadding *bool `type:"boolean"` + + // The version of the file to write. The possible values are V0_11 and V0_12. + // The default is V0_12. + FormatVersion *string `type:"string" enum:"OrcFormatVersion"` + + // A number between 0 and 1 that defines the tolerance for block padding as + // a decimal fraction of stripe size. The default value is 0.05, which means + // 5 percent of stripe size. // - // KinesisStreamARN is a required field - KinesisStreamARN *string `min:"1" type:"string" required:"true"` - - // The ARN of the role that provides access to the source Kinesis stream. + // For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the + // default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB + // for padding within the 256 MiB block. In such a case, if the available size + // within the block is more than 3.2 MiB, a new, smaller stripe is inserted + // to fit within that space. This ensures that no stripe crosses block boundaries + // and causes remote reads within a node-local task. // - // RoleARN is a required field - RoleARN *string `min:"1" type:"string" required:"true"` + // Kinesis Data Firehose ignores this parameter when OrcSerDe$EnablePadding + // is false. + PaddingTolerance *float64 `type:"double"` + + // The number of rows between index entries. The default is 10,000 and the minimum + // is 1,000. + RowIndexStride *int64 `min:"1000" type:"integer"` + + // The number of bytes in each stripe. The default is 64 MiB and the minimum + // is 8 MiB. + StripeSizeBytes *int64 `min:"8.388608e+06" type:"integer"` } // String returns the string representation -func (s KinesisStreamSourceConfiguration) String() string { +func (s OrcSerDe) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s KinesisStreamSourceConfiguration) GoString() string { +func (s OrcSerDe) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *KinesisStreamSourceConfiguration) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "KinesisStreamSourceConfiguration"} - if s.KinesisStreamARN == nil { - invalidParams.Add(request.NewErrParamRequired("KinesisStreamARN")) - } - if s.KinesisStreamARN != nil && len(*s.KinesisStreamARN) < 1 { - invalidParams.Add(request.NewErrParamMinLen("KinesisStreamARN", 1)) +func (s *OrcSerDe) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OrcSerDe"} + if s.BlockSizeBytes != nil && *s.BlockSizeBytes < 6.7108864e+07 { + invalidParams.Add(request.NewErrParamMinValue("BlockSizeBytes", 6.7108864e+07)) } - if s.RoleARN == nil { - invalidParams.Add(request.NewErrParamRequired("RoleARN")) + if s.RowIndexStride != nil && *s.RowIndexStride < 1000 { + invalidParams.Add(request.NewErrParamMinValue("RowIndexStride", 1000)) } - if s.RoleARN != nil && len(*s.RoleARN) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + if s.StripeSizeBytes != nil && *s.StripeSizeBytes < 8.388608e+06 { + invalidParams.Add(request.NewErrParamMinValue("StripeSizeBytes", 8.388608e+06)) } if invalidParams.Len() > 0 { @@ -2582,101 +3808,93 @@ func (s *KinesisStreamSourceConfiguration) Validate() error { return nil } -// SetKinesisStreamARN sets the KinesisStreamARN field's value. -func (s *KinesisStreamSourceConfiguration) SetKinesisStreamARN(v string) *KinesisStreamSourceConfiguration { - s.KinesisStreamARN = &v +// SetBlockSizeBytes sets the BlockSizeBytes field's value. +func (s *OrcSerDe) SetBlockSizeBytes(v int64) *OrcSerDe { + s.BlockSizeBytes = &v return s } -// SetRoleARN sets the RoleARN field's value. -func (s *KinesisStreamSourceConfiguration) SetRoleARN(v string) *KinesisStreamSourceConfiguration { - s.RoleARN = &v +// SetBloomFilterColumns sets the BloomFilterColumns field's value. +func (s *OrcSerDe) SetBloomFilterColumns(v []*string) *OrcSerDe { + s.BloomFilterColumns = v return s } -// Details about a Kinesis stream used as the source for a Kinesis Firehose -// delivery stream. -type KinesisStreamSourceDescription struct { - _ struct{} `type:"structure"` - - // Kinesis Firehose starts retrieving records from the Kinesis stream starting - // with this time stamp. - DeliveryStartTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` +// SetBloomFilterFalsePositiveProbability sets the BloomFilterFalsePositiveProbability field's value. +func (s *OrcSerDe) SetBloomFilterFalsePositiveProbability(v float64) *OrcSerDe { + s.BloomFilterFalsePositiveProbability = &v + return s +} - // The ARN of the source Kinesis stream. - KinesisStreamARN *string `min:"1" type:"string"` +// SetCompression sets the Compression field's value. +func (s *OrcSerDe) SetCompression(v string) *OrcSerDe { + s.Compression = &v + return s +} - // The ARN of the role used by the source Kinesis stream. - RoleARN *string `min:"1" type:"string"` +// SetDictionaryKeyThreshold sets the DictionaryKeyThreshold field's value. +func (s *OrcSerDe) SetDictionaryKeyThreshold(v float64) *OrcSerDe { + s.DictionaryKeyThreshold = &v + return s } -// String returns the string representation -func (s KinesisStreamSourceDescription) String() string { - return awsutil.Prettify(s) +// SetEnablePadding sets the EnablePadding field's value. 
+func (s *OrcSerDe) SetEnablePadding(v bool) *OrcSerDe { + s.EnablePadding = &v + return s } -// GoString returns the string representation -func (s KinesisStreamSourceDescription) GoString() string { - return s.String() +// SetFormatVersion sets the FormatVersion field's value. +func (s *OrcSerDe) SetFormatVersion(v string) *OrcSerDe { + s.FormatVersion = &v + return s } -// SetDeliveryStartTimestamp sets the DeliveryStartTimestamp field's value. -func (s *KinesisStreamSourceDescription) SetDeliveryStartTimestamp(v time.Time) *KinesisStreamSourceDescription { - s.DeliveryStartTimestamp = &v +// SetPaddingTolerance sets the PaddingTolerance field's value. +func (s *OrcSerDe) SetPaddingTolerance(v float64) *OrcSerDe { + s.PaddingTolerance = &v return s } -// SetKinesisStreamARN sets the KinesisStreamARN field's value. -func (s *KinesisStreamSourceDescription) SetKinesisStreamARN(v string) *KinesisStreamSourceDescription { - s.KinesisStreamARN = &v +// SetRowIndexStride sets the RowIndexStride field's value. +func (s *OrcSerDe) SetRowIndexStride(v int64) *OrcSerDe { + s.RowIndexStride = &v return s } -// SetRoleARN sets the RoleARN field's value. -func (s *KinesisStreamSourceDescription) SetRoleARN(v string) *KinesisStreamSourceDescription { - s.RoleARN = &v +// SetStripeSizeBytes sets the StripeSizeBytes field's value. +func (s *OrcSerDe) SetStripeSizeBytes(v int64) *OrcSerDe { + s.StripeSizeBytes = &v return s } -type ListDeliveryStreamsInput struct { +// Specifies the serializer that you want Kinesis Data Firehose to use to convert +// the format of your data before it writes it to Amazon S3. +type OutputFormatConfiguration struct { _ struct{} `type:"structure"` - // The delivery stream type. This can be one of the following values: - // - // * DirectPut: Provider applications access the delivery stream directly. - // - // * KinesisStreamAsSource: The delivery stream uses a Kinesis stream as - // a source. - // - // This parameter is optional. If this parameter is omitted, delivery streams - // of all types are returned. - DeliveryStreamType *string `type:"string" enum:"DeliveryStreamType"` - - // The name of the delivery stream to start the list with. - ExclusiveStartDeliveryStreamName *string `min:"1" type:"string"` - - // The maximum number of delivery streams to list. The default value is 10. - Limit *int64 `min:"1" type:"integer"` + // Specifies which serializer to use. You can choose either the ORC SerDe or + // the Parquet SerDe. If both are non-null, the server rejects the request. + Serializer *Serializer `type:"structure"` } // String returns the string representation -func (s ListDeliveryStreamsInput) String() string { +func (s OutputFormatConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListDeliveryStreamsInput) GoString() string { +func (s OutputFormatConfiguration) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListDeliveryStreamsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListDeliveryStreamsInput"} - if s.ExclusiveStartDeliveryStreamName != nil && len(*s.ExclusiveStartDeliveryStreamName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartDeliveryStreamName", 1)) - } - if s.Limit != nil && *s.Limit < 1 { - invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) +func (s *OutputFormatConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputFormatConfiguration"} + if s.Serializer != nil { + if err := s.Serializer.Validate(); err != nil { + invalidParams.AddNested("Serializer", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -2685,57 +3903,104 @@ func (s *ListDeliveryStreamsInput) Validate() error { return nil } -// SetDeliveryStreamType sets the DeliveryStreamType field's value. -func (s *ListDeliveryStreamsInput) SetDeliveryStreamType(v string) *ListDeliveryStreamsInput { - s.DeliveryStreamType = &v +// SetSerializer sets the Serializer field's value. +func (s *OutputFormatConfiguration) SetSerializer(v *Serializer) *OutputFormatConfiguration { + s.Serializer = v return s } -// SetExclusiveStartDeliveryStreamName sets the ExclusiveStartDeliveryStreamName field's value. -func (s *ListDeliveryStreamsInput) SetExclusiveStartDeliveryStreamName(v string) *ListDeliveryStreamsInput { - s.ExclusiveStartDeliveryStreamName = &v - return s -} +// A serializer to use for converting data to the Parquet format before storing +// it in Amazon S3. For more information, see Apache Parquet (https://parquet.apache.org/documentation/latest/). +type ParquetSerDe struct { + _ struct{} `type:"structure"` -// SetLimit sets the Limit field's value. -func (s *ListDeliveryStreamsInput) SetLimit(v int64) *ListDeliveryStreamsInput { - s.Limit = &v - return s -} + // The Hadoop Distributed File System (HDFS) block size. This is useful if you + // intend to copy the data from Amazon S3 to HDFS before querying. The default + // is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value + // for padding calculations. + BlockSizeBytes *int64 `min:"6.7108864e+07" type:"integer"` -type ListDeliveryStreamsOutput struct { - _ struct{} `type:"structure"` + // The compression code to use over data blocks. The possible values are UNCOMPRESSED, + // SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression + // speed. Use GZIP if the compression ration is more important than speed. + Compression *string `type:"string" enum:"ParquetCompression"` - // The names of the delivery streams. - // - // DeliveryStreamNames is a required field - DeliveryStreamNames []*string `type:"list" required:"true"` + // Indicates whether to enable dictionary compression. + EnableDictionaryCompression *bool `type:"boolean"` - // Indicates whether there are more delivery streams available to list. - // - // HasMoreDeliveryStreams is a required field - HasMoreDeliveryStreams *bool `type:"boolean" required:"true"` + // The maximum amount of padding to apply. This is useful if you intend to copy + // the data from Amazon S3 to HDFS before querying. The default is 0. + MaxPaddingBytes *int64 `type:"integer"` + + // The Parquet page size. Column chunks are divided into pages. A page is conceptually + // an indivisible unit (in terms of compression and encoding). The minimum value + // is 64 KiB and the default is 1 MiB. 
+ PageSizeBytes *int64 `min:"65536" type:"integer"` + + // Indicates the version of row format to output. The possible values are V1 + // and V2. The default is V1. + WriterVersion *string `type:"string" enum:"ParquetWriterVersion"` } // String returns the string representation -func (s ListDeliveryStreamsOutput) String() string { +func (s ParquetSerDe) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListDeliveryStreamsOutput) GoString() string { +func (s ParquetSerDe) GoString() string { return s.String() } -// SetDeliveryStreamNames sets the DeliveryStreamNames field's value. -func (s *ListDeliveryStreamsOutput) SetDeliveryStreamNames(v []*string) *ListDeliveryStreamsOutput { - s.DeliveryStreamNames = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParquetSerDe) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParquetSerDe"} + if s.BlockSizeBytes != nil && *s.BlockSizeBytes < 6.7108864e+07 { + invalidParams.Add(request.NewErrParamMinValue("BlockSizeBytes", 6.7108864e+07)) + } + if s.PageSizeBytes != nil && *s.PageSizeBytes < 65536 { + invalidParams.Add(request.NewErrParamMinValue("PageSizeBytes", 65536)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBlockSizeBytes sets the BlockSizeBytes field's value. +func (s *ParquetSerDe) SetBlockSizeBytes(v int64) *ParquetSerDe { + s.BlockSizeBytes = &v return s } -// SetHasMoreDeliveryStreams sets the HasMoreDeliveryStreams field's value. -func (s *ListDeliveryStreamsOutput) SetHasMoreDeliveryStreams(v bool) *ListDeliveryStreamsOutput { - s.HasMoreDeliveryStreams = &v +// SetCompression sets the Compression field's value. +func (s *ParquetSerDe) SetCompression(v string) *ParquetSerDe { + s.Compression = &v + return s +} + +// SetEnableDictionaryCompression sets the EnableDictionaryCompression field's value. +func (s *ParquetSerDe) SetEnableDictionaryCompression(v bool) *ParquetSerDe { + s.EnableDictionaryCompression = &v + return s +} + +// SetMaxPaddingBytes sets the MaxPaddingBytes field's value. +func (s *ParquetSerDe) SetMaxPaddingBytes(v int64) *ParquetSerDe { + s.MaxPaddingBytes = &v + return s +} + +// SetPageSizeBytes sets the PageSizeBytes field's value. +func (s *ParquetSerDe) SetPageSizeBytes(v int64) *ParquetSerDe { + s.PageSizeBytes = &v + return s +} + +// SetWriterVersion sets the WriterVersion field's value. +func (s *ParquetSerDe) SetWriterVersion(v string) *ParquetSerDe { + s.WriterVersion = &v return s } @@ -2977,7 +4242,12 @@ func (s *PutRecordBatchInput) SetRecords(v []*Record) *PutRecordBatchInput { type PutRecordBatchOutput struct { _ struct{} `type:"structure"` - // The number of records that might have failed processing. + // Indicates whether server-side encryption (SSE) was enabled during this operation. + Encrypted *bool `type:"boolean"` + + // The number of records that might have failed processing. This number might + // be greater than 0 even if the PutRecordBatch call succeeds. Check FailedPutCount + // to determine whether there are records that you need to resend. // // FailedPutCount is a required field FailedPutCount *int64 `type:"integer" required:"true"` @@ -2999,6 +4269,12 @@ func (s PutRecordBatchOutput) GoString() string { return s.String() } +// SetEncrypted sets the Encrypted field's value. 
+func (s *PutRecordBatchOutput) SetEncrypted(v bool) *PutRecordBatchOutput { + s.Encrypted = &v + return s +} + // SetFailedPutCount sets the FailedPutCount field's value. func (s *PutRecordBatchOutput) SetFailedPutCount(v int64) *PutRecordBatchOutput { s.FailedPutCount = &v @@ -3119,6 +4395,9 @@ func (s *PutRecordInput) SetRecord(v *Record) *PutRecordInput { type PutRecordOutput struct { _ struct{} `type:"structure"` + // Indicates whether server-side encryption (SSE) was enabled during this operation. + Encrypted *bool `type:"boolean"` + // The ID of the record. // // RecordId is a required field @@ -3135,6 +4414,12 @@ func (s PutRecordOutput) GoString() string { return s.String() } +// SetEncrypted sets the Encrypted field's value. +func (s *PutRecordOutput) SetEncrypted(v bool) *PutRecordOutput { + s.Encrypted = &v + return s +} + // SetRecordId sets the RecordId field's value. func (s *PutRecordOutput) SetRecordId(v string) *PutRecordOutput { s.RecordId = &v @@ -3146,7 +4431,7 @@ type Record struct { _ struct{} `type:"structure"` // The data blob, which is base64-encoded when the blob is serialized. The maximum - // size of the data blob, before base64-encoding, is 1,000 KB. + // size of the data blob, before base64-encoding, is 1,000 KiB. // // Data is automatically base64 encoded/decoded by the SDK. // @@ -3208,11 +4493,12 @@ type RedshiftDestinationConfiguration struct { // The data processing configuration. ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // The retry behavior in case Kinesis Firehose is unable to deliver documents + // The retry behavior in case Kinesis Data Firehose is unable to deliver documents // to Amazon Redshift. Default value is 3600 (60 minutes). RetryOptions *RedshiftRetryOptions `type:"structure"` - // The ARN of the AWS credentials. + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). // // RoleARN is a required field RoleARN *string `min:"1" type:"string" required:"true"` @@ -3379,7 +4665,7 @@ func (s *RedshiftDestinationConfiguration) SetUsername(v string) *RedshiftDestin type RedshiftDestinationDescription struct { _ struct{} `type:"structure"` - // The CloudWatch logging options for your delivery stream. + // The Amazon CloudWatch logging options for your delivery stream. CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` // The database connection string. @@ -3395,11 +4681,12 @@ type RedshiftDestinationDescription struct { // The data processing configuration. ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // The retry behavior in case Kinesis Firehose is unable to deliver documents + // The retry behavior in case Kinesis Data Firehose is unable to deliver documents // to Amazon Redshift. Default value is 3600 (60 minutes). RetryOptions *RedshiftRetryOptions `type:"structure"` - // The ARN of the AWS credentials. + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). 
// // RoleARN is a required field RoleARN *string `min:"1" type:"string" required:"true"` @@ -3495,7 +4782,7 @@ func (s *RedshiftDestinationDescription) SetUsername(v string) *RedshiftDestinat type RedshiftDestinationUpdate struct { _ struct{} `type:"structure"` - // The CloudWatch logging options for your delivery stream. + // The Amazon CloudWatch logging options for your delivery stream. CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` // The database connection string. @@ -3510,11 +4797,12 @@ type RedshiftDestinationUpdate struct { // The data processing configuration. ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // The retry behavior in case Kinesis Firehose is unable to deliver documents + // The retry behavior in case Kinesis Data Firehose is unable to deliver documents // to Amazon Redshift. Default value is 3600 (60 minutes). RetryOptions *RedshiftRetryOptions `type:"structure"` - // The ARN of the AWS credentials. + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). RoleARN *string `min:"1" type:"string"` // The Amazon S3 backup mode. @@ -3652,15 +4940,15 @@ func (s *RedshiftDestinationUpdate) SetUsername(v string) *RedshiftDestinationUp return s } -// Configures retry behavior in case Kinesis Firehose is unable to deliver documents -// to Amazon Redshift. +// Configures retry behavior in case Kinesis Data Firehose is unable to deliver +// documents to Amazon Redshift. type RedshiftRetryOptions struct { _ struct{} `type:"structure"` - // The length of time during which Kinesis Firehose retries delivery after a - // failure, starting from the initial request and including the first attempt. - // The default value is 3600 seconds (60 minutes). Kinesis Firehose does not - // retry if the value of DurationInSeconds is 0 (zero) or if the first delivery + // The length of time during which Kinesis Data Firehose retries delivery after + // a failure, starting from the initial request and including the first attempt. + // The default value is 3600 seconds (60 minutes). Kinesis Data Firehose does + // not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery // attempt takes longer than the current value. DurationInSeconds *int64 `type:"integer"` } @@ -3685,7 +4973,8 @@ func (s *RedshiftRetryOptions) SetDurationInSeconds(v int64) *RedshiftRetryOptio type S3DestinationConfiguration struct { _ struct{} `type:"structure"` - // The ARN of the S3 bucket. + // The ARN of the S3 bucket. For more information, see Amazon Resource Names + // (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). // // BucketARN is a required field BucketARN *string `min:"1" type:"string" required:"true"` @@ -3709,13 +4998,15 @@ type S3DestinationConfiguration struct { EncryptionConfiguration *EncryptionConfiguration `type:"structure"` // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered - // S3 files. You can specify an extra prefix to be added in front of the time - // format prefix. If the prefix ends with a slash, it appears as a folder in - // the S3 bucket. For more information, see Amazon S3 Object Name Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html) - // in the Amazon Kinesis Firehose Developer Guide. + // Amazon S3 files. 
You can specify an extra prefix to be added in front of + // the time format prefix. If the prefix ends with a slash, it appears as a + // folder in the S3 bucket. For more information, see Amazon S3 Object Name + // Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name) + // in the Amazon Kinesis Data Firehose Developer Guide. Prefix *string `type:"string"` - // The ARN of the AWS credentials. + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). // // RoleARN is a required field RoleARN *string `min:"1" type:"string" required:"true"` @@ -3809,7 +5100,8 @@ func (s *S3DestinationConfiguration) SetRoleARN(v string) *S3DestinationConfigur type S3DestinationDescription struct { _ struct{} `type:"structure"` - // The ARN of the S3 bucket. + // The ARN of the S3 bucket. For more information, see Amazon Resource Names + // (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). // // BucketARN is a required field BucketARN *string `min:"1" type:"string" required:"true"` @@ -3820,7 +5112,7 @@ type S3DestinationDescription struct { // BufferingHints is a required field BufferingHints *BufferingHints `type:"structure" required:"true"` - // The CloudWatch logging options for your delivery stream. + // The Amazon CloudWatch logging options for your delivery stream. CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` // The compression format. If no value is specified, the default is UNCOMPRESSED. @@ -3835,13 +5127,15 @@ type S3DestinationDescription struct { EncryptionConfiguration *EncryptionConfiguration `type:"structure" required:"true"` // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered - // S3 files. You can specify an extra prefix to be added in front of the time - // format prefix. If the prefix ends with a slash, it appears as a folder in - // the S3 bucket. For more information, see Amazon S3 Object Name Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html) - // in the Amazon Kinesis Firehose Developer Guide. + // Amazon S3 files. You can specify an extra prefix to be added in front of + // the time format prefix. If the prefix ends with a slash, it appears as a + // folder in the S3 bucket. For more information, see Amazon S3 Object Name + // Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name) + // in the Amazon Kinesis Data Firehose Developer Guide. Prefix *string `type:"string"` - // The ARN of the AWS credentials. + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). // // RoleARN is a required field RoleARN *string `min:"1" type:"string" required:"true"` @@ -3903,7 +5197,8 @@ func (s *S3DestinationDescription) SetRoleARN(v string) *S3DestinationDescriptio type S3DestinationUpdate struct { _ struct{} `type:"structure"` - // The ARN of the S3 bucket. + // The ARN of the S3 bucket. For more information, see Amazon Resource Names + // (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). BucketARN *string `min:"1" type:"string"` // The buffering option. 
If no value is specified, BufferingHints object default @@ -3925,13 +5220,15 @@ type S3DestinationUpdate struct { EncryptionConfiguration *EncryptionConfiguration `type:"structure"` // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered - // S3 files. You can specify an extra prefix to be added in front of the time - // format prefix. If the prefix ends with a slash, it appears as a folder in - // the S3 bucket. For more information, see Amazon S3 Object Name Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html) - // in the Amazon Kinesis Firehose Developer Guide. + // Amazon S3 files. You can specify an extra prefix to be added in front of + // the time format prefix. If the prefix ends with a slash, it appears as a + // folder in the S3 bucket. For more information, see Amazon S3 Object Name + // Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name) + // in the Amazon Kinesis Data Firehose Developer Guide. Prefix *string `type:"string"` - // The ARN of the AWS credentials. + // The Amazon Resource Name (ARN) of the AWS credentials. For more information, + // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). RoleARN *string `min:"1" type:"string"` } @@ -3995,30 +5292,166 @@ func (s *S3DestinationUpdate) SetCompressionFormat(v string) *S3DestinationUpdat return s } -// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. -func (s *S3DestinationUpdate) SetEncryptionConfiguration(v *EncryptionConfiguration) *S3DestinationUpdate { - s.EncryptionConfiguration = v - return s +// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. +func (s *S3DestinationUpdate) SetEncryptionConfiguration(v *EncryptionConfiguration) *S3DestinationUpdate { + s.EncryptionConfiguration = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *S3DestinationUpdate) SetPrefix(v string) *S3DestinationUpdate { + s.Prefix = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *S3DestinationUpdate) SetRoleARN(v string) *S3DestinationUpdate { + s.RoleARN = &v + return s +} + +// Specifies the schema to which you want Kinesis Data Firehose to configure +// your data before it writes it to Amazon S3. +type SchemaConfiguration struct { + _ struct{} `type:"structure"` + + // The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account + // ID is used by default. + CatalogId *string `type:"string"` + + // Specifies the name of the AWS Glue database that contains the schema for + // the output data. + DatabaseName *string `type:"string"` + + // If you don't specify an AWS Region, the default is the current Region. + Region *string `type:"string"` + + // The role that Kinesis Data Firehose can use to access AWS Glue. This role + // must be in the same account you use for Kinesis Data Firehose. Cross-account + // roles aren't allowed. + RoleARN *string `type:"string"` + + // Specifies the AWS Glue table that contains the column information that constitutes + // your data schema. + TableName *string `type:"string"` + + // Specifies the table version for the output data schema. If you don't specify + // this version ID, or if you set it to LATEST, Kinesis Data Firehose uses the + // most recent version. This means that any updates to the table are automatically + // picked up. 
+ VersionId *string `type:"string"` +} + +// String returns the string representation +func (s SchemaConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SchemaConfiguration) GoString() string { + return s.String() +} + +// SetCatalogId sets the CatalogId field's value. +func (s *SchemaConfiguration) SetCatalogId(v string) *SchemaConfiguration { + s.CatalogId = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *SchemaConfiguration) SetDatabaseName(v string) *SchemaConfiguration { + s.DatabaseName = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *SchemaConfiguration) SetRegion(v string) *SchemaConfiguration { + s.Region = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *SchemaConfiguration) SetRoleARN(v string) *SchemaConfiguration { + s.RoleARN = &v + return s +} + +// SetTableName sets the TableName field's value. +func (s *SchemaConfiguration) SetTableName(v string) *SchemaConfiguration { + s.TableName = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *SchemaConfiguration) SetVersionId(v string) *SchemaConfiguration { + s.VersionId = &v + return s +} + +// The serializer that you want Kinesis Data Firehose to use to convert data +// to the target format before writing it to Amazon S3. Kinesis Data Firehose +// supports two types of serializers: the ORC SerDe (https://hive.apache.org/javadocs/r1.2.2/api/org/apache/hadoop/hive/ql/io/orc/OrcSerde.html) +// and the Parquet SerDe (https://hive.apache.org/javadocs/r1.2.2/api/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.html). +type Serializer struct { + _ struct{} `type:"structure"` + + // A serializer to use for converting data to the ORC format before storing + // it in Amazon S3. For more information, see Apache ORC (https://orc.apache.org/docs/). + OrcSerDe *OrcSerDe `type:"structure"` + + // A serializer to use for converting data to the Parquet format before storing + // it in Amazon S3. For more information, see Apache Parquet (https://parquet.apache.org/documentation/latest/). + ParquetSerDe *ParquetSerDe `type:"structure"` +} + +// String returns the string representation +func (s Serializer) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Serializer) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Serializer) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Serializer"} + if s.OrcSerDe != nil { + if err := s.OrcSerDe.Validate(); err != nil { + invalidParams.AddNested("OrcSerDe", err.(request.ErrInvalidParams)) + } + } + if s.ParquetSerDe != nil { + if err := s.ParquetSerDe.Validate(); err != nil { + invalidParams.AddNested("ParquetSerDe", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetPrefix sets the Prefix field's value. -func (s *S3DestinationUpdate) SetPrefix(v string) *S3DestinationUpdate { - s.Prefix = &v +// SetOrcSerDe sets the OrcSerDe field's value. +func (s *Serializer) SetOrcSerDe(v *OrcSerDe) *Serializer { + s.OrcSerDe = v return s } -// SetRoleARN sets the RoleARN field's value. -func (s *S3DestinationUpdate) SetRoleARN(v string) *S3DestinationUpdate { - s.RoleARN = &v +// SetParquetSerDe sets the ParquetSerDe field's value. 
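The `ParquetSerDe`, `Serializer`, and `SchemaConfiguration` shapes added above are the building blocks for the new Firehose record-format conversion feature. Below is a minimal, illustrative sketch of how a caller might assemble and validate a Parquet serializer with the generated setters and enum constants from this package; the size values simply use the minimums enforced by `Validate`.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	// Configure Parquet output using the enum values defined in this package.
	parquet := &firehose.ParquetSerDe{}
	parquet.SetCompression(firehose.ParquetCompressionSnappy)
	parquet.SetWriterVersion(firehose.ParquetWriterVersionV1)
	parquet.SetBlockSizeBytes(67108864) // minimum block size accepted by Validate
	parquet.SetPageSizeBytes(65536)     // minimum page size accepted by Validate

	serializer := &firehose.Serializer{}
	serializer.SetParquetSerDe(parquet)

	// Validate performs the client-side minimum checks before a request is sent.
	if err := serializer.Validate(); err != nil {
		fmt.Println("invalid serializer:", err)
		return
	}
	fmt.Println("serializer OK")
}
```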
+func (s *Serializer) SetParquetSerDe(v *ParquetSerDe) *Serializer { + s.ParquetSerDe = v return s } -// Details about a Kinesis stream used as the source for a Kinesis Firehose -// delivery stream. +// Details about a Kinesis data stream used as the source for a Kinesis Data +// Firehose delivery stream. type SourceDescription struct { _ struct{} `type:"structure"` - // The KinesisStreamSourceDescription value for the source Kinesis stream. + // The KinesisStreamSourceDescription value for the source Kinesis data stream. KinesisStreamSourceDescription *KinesisStreamSourceDescription `type:"structure"` } @@ -4042,28 +5475,28 @@ func (s *SourceDescription) SetKinesisStreamSourceDescription(v *KinesisStreamSo type SplunkDestinationConfiguration struct { _ struct{} `type:"structure"` - // The CloudWatch logging options for your delivery stream. + // The Amazon CloudWatch logging options for your delivery stream. CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - // The amount of time that Kinesis Firehose waits to receive an acknowledgment - // from Splunk after it sends it data. At the end of the timeout period Kinesis - // Firehose either tries to send the data again or considers it an error, based - // on your retry settings. + // The amount of time that Kinesis Data Firehose waits to receive an acknowledgment + // from Splunk after it sends it data. At the end of the timeout period, Kinesis + // Data Firehose either tries to send the data again or considers it an error, + // based on your retry settings. HECAcknowledgmentTimeoutInSeconds *int64 `min:"180" type:"integer"` - // The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your - // data. + // The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends + // your data. // // HECEndpoint is a required field HECEndpoint *string `type:"string" required:"true"` - // This type can be either "Raw" or "Event". + // This type can be either "Raw" or "Event." // // HECEndpointType is a required field HECEndpointType *string `type:"string" required:"true" enum:"HECEndpointType"` - // This is a GUID you obtain from your Splunk cluster when you create a new - // HEC endpoint. + // This is a GUID that you obtain from your Splunk cluster when you create a + // new HEC endpoint. // // HECToken is a required field HECToken *string `type:"string" required:"true"` @@ -4071,13 +5504,13 @@ type SplunkDestinationConfiguration struct { // The data processing configuration. ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // The retry behavior in case Kinesis Firehose is unable to deliver data to - // Splunk or if it doesn't receive an acknowledgment of receipt from Splunk. + // The retry behavior in case Kinesis Data Firehose is unable to deliver data + // to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk. RetryOptions *SplunkRetryOptions `type:"structure"` // Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, - // Kinesis Firehose writes any data that could not be indexed to the configured - // Amazon S3 destination. When set to AllDocuments, Kinesis Firehose delivers + // Kinesis Data Firehose writes any data that could not be indexed to the configured + // Amazon S3 destination. When set to AllDocuments, Kinesis Data Firehose delivers // all incoming records to Amazon S3, and also writes failed documents to Amazon // S3. Default value is FailedDocumentsOnly. 
S3BackupMode *string `type:"string" enum:"SplunkS3BackupMode"` @@ -4191,174 +5624,478 @@ func (s *SplunkDestinationConfiguration) SetS3Configuration(v *S3DestinationConf type SplunkDestinationDescription struct { _ struct{} `type:"structure"` - // The CloudWatch logging options for your delivery stream. + // The Amazon CloudWatch logging options for your delivery stream. + CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` + + // The amount of time that Kinesis Data Firehose waits to receive an acknowledgment + // from Splunk after it sends it data. At the end of the timeout period, Kinesis + // Data Firehose either tries to send the data again or considers it an error, + // based on your retry settings. + HECAcknowledgmentTimeoutInSeconds *int64 `min:"180" type:"integer"` + + // The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends + // your data. + HECEndpoint *string `type:"string"` + + // This type can be either "Raw" or "Event." + HECEndpointType *string `type:"string" enum:"HECEndpointType"` + + // A GUID you obtain from your Splunk cluster when you create a new HEC endpoint. + HECToken *string `type:"string"` + + // The data processing configuration. + ProcessingConfiguration *ProcessingConfiguration `type:"structure"` + + // The retry behavior in case Kinesis Data Firehose is unable to deliver data + // to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk. + RetryOptions *SplunkRetryOptions `type:"structure"` + + // Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, + // Kinesis Data Firehose writes any data that could not be indexed to the configured + // Amazon S3 destination. When set to AllDocuments, Kinesis Data Firehose delivers + // all incoming records to Amazon S3, and also writes failed documents to Amazon + // S3. Default value is FailedDocumentsOnly. + S3BackupMode *string `type:"string" enum:"SplunkS3BackupMode"` + + // The Amazon S3 destination.> + S3DestinationDescription *S3DestinationDescription `type:"structure"` +} + +// String returns the string representation +func (s SplunkDestinationDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SplunkDestinationDescription) GoString() string { + return s.String() +} + +// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. +func (s *SplunkDestinationDescription) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *SplunkDestinationDescription { + s.CloudWatchLoggingOptions = v + return s +} + +// SetHECAcknowledgmentTimeoutInSeconds sets the HECAcknowledgmentTimeoutInSeconds field's value. +func (s *SplunkDestinationDescription) SetHECAcknowledgmentTimeoutInSeconds(v int64) *SplunkDestinationDescription { + s.HECAcknowledgmentTimeoutInSeconds = &v + return s +} + +// SetHECEndpoint sets the HECEndpoint field's value. +func (s *SplunkDestinationDescription) SetHECEndpoint(v string) *SplunkDestinationDescription { + s.HECEndpoint = &v + return s +} + +// SetHECEndpointType sets the HECEndpointType field's value. +func (s *SplunkDestinationDescription) SetHECEndpointType(v string) *SplunkDestinationDescription { + s.HECEndpointType = &v + return s +} + +// SetHECToken sets the HECToken field's value. +func (s *SplunkDestinationDescription) SetHECToken(v string) *SplunkDestinationDescription { + s.HECToken = &v + return s +} + +// SetProcessingConfiguration sets the ProcessingConfiguration field's value. 
+func (s *SplunkDestinationDescription) SetProcessingConfiguration(v *ProcessingConfiguration) *SplunkDestinationDescription { + s.ProcessingConfiguration = v + return s +} + +// SetRetryOptions sets the RetryOptions field's value. +func (s *SplunkDestinationDescription) SetRetryOptions(v *SplunkRetryOptions) *SplunkDestinationDescription { + s.RetryOptions = v + return s +} + +// SetS3BackupMode sets the S3BackupMode field's value. +func (s *SplunkDestinationDescription) SetS3BackupMode(v string) *SplunkDestinationDescription { + s.S3BackupMode = &v + return s +} + +// SetS3DestinationDescription sets the S3DestinationDescription field's value. +func (s *SplunkDestinationDescription) SetS3DestinationDescription(v *S3DestinationDescription) *SplunkDestinationDescription { + s.S3DestinationDescription = v + return s +} + +// Describes an update for a destination in Splunk. +type SplunkDestinationUpdate struct { + _ struct{} `type:"structure"` + + // The Amazon CloudWatch logging options for your delivery stream. CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` - // The amount of time that Kinesis Firehose waits to receive an acknowledgment - // from Splunk after it sends it data. At the end of the timeout period Kinesis - // Firehose either tries to send the data again or considers it an error, based - // on your retry settings. + // The amount of time that Kinesis Data Firehose waits to receive an acknowledgment + // from Splunk after it sends data. At the end of the timeout period, Kinesis + // Data Firehose either tries to send the data again or considers it an error, + // based on your retry settings. HECAcknowledgmentTimeoutInSeconds *int64 `min:"180" type:"integer"` - // The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your - // data. + // The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends + // your data. HECEndpoint *string `type:"string"` - // This type can be either "Raw" or "Event". + // This type can be either "Raw" or "Event." HECEndpointType *string `type:"string" enum:"HECEndpointType"` - // This is a GUID you obtain from your Splunk cluster when you create a new - // HEC endpoint. + // A GUID that you obtain from your Splunk cluster when you create a new HEC + // endpoint. HECToken *string `type:"string"` // The data processing configuration. ProcessingConfiguration *ProcessingConfiguration `type:"structure"` - // The retry behavior in case Kinesis Firehose is unable to deliver data to - // Splunk or if it doesn't receive an acknowledgment of receipt from Splunk. + // The retry behavior in case Kinesis Data Firehose is unable to deliver data + // to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk. RetryOptions *SplunkRetryOptions `type:"structure"` - // Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, - // Kinesis Firehose writes any data that could not be indexed to the configured - // Amazon S3 destination. When set to AllDocuments, Kinesis Firehose delivers - // all incoming records to Amazon S3, and also writes failed documents to Amazon - // S3. Default value is FailedDocumentsOnly. - S3BackupMode *string `type:"string" enum:"SplunkS3BackupMode"` + // Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, + // Kinesis Data Firehose writes any data that could not be indexed to the configured + // Amazon S3 destination. 
When set to AllDocuments, Kinesis Data Firehose delivers + // all incoming records to Amazon S3, and also writes failed documents to Amazon + // S3. Default value is FailedDocumentsOnly. + S3BackupMode *string `type:"string" enum:"SplunkS3BackupMode"` + + // Your update to the configuration of the backup Amazon S3 location. + S3Update *S3DestinationUpdate `type:"structure"` +} + +// String returns the string representation +func (s SplunkDestinationUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SplunkDestinationUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SplunkDestinationUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SplunkDestinationUpdate"} + if s.HECAcknowledgmentTimeoutInSeconds != nil && *s.HECAcknowledgmentTimeoutInSeconds < 180 { + invalidParams.Add(request.NewErrParamMinValue("HECAcknowledgmentTimeoutInSeconds", 180)) + } + if s.ProcessingConfiguration != nil { + if err := s.ProcessingConfiguration.Validate(); err != nil { + invalidParams.AddNested("ProcessingConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.S3Update != nil { + if err := s.S3Update.Validate(); err != nil { + invalidParams.AddNested("S3Update", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. +func (s *SplunkDestinationUpdate) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *SplunkDestinationUpdate { + s.CloudWatchLoggingOptions = v + return s +} + +// SetHECAcknowledgmentTimeoutInSeconds sets the HECAcknowledgmentTimeoutInSeconds field's value. +func (s *SplunkDestinationUpdate) SetHECAcknowledgmentTimeoutInSeconds(v int64) *SplunkDestinationUpdate { + s.HECAcknowledgmentTimeoutInSeconds = &v + return s +} + +// SetHECEndpoint sets the HECEndpoint field's value. +func (s *SplunkDestinationUpdate) SetHECEndpoint(v string) *SplunkDestinationUpdate { + s.HECEndpoint = &v + return s +} + +// SetHECEndpointType sets the HECEndpointType field's value. +func (s *SplunkDestinationUpdate) SetHECEndpointType(v string) *SplunkDestinationUpdate { + s.HECEndpointType = &v + return s +} + +// SetHECToken sets the HECToken field's value. +func (s *SplunkDestinationUpdate) SetHECToken(v string) *SplunkDestinationUpdate { + s.HECToken = &v + return s +} + +// SetProcessingConfiguration sets the ProcessingConfiguration field's value. +func (s *SplunkDestinationUpdate) SetProcessingConfiguration(v *ProcessingConfiguration) *SplunkDestinationUpdate { + s.ProcessingConfiguration = v + return s +} + +// SetRetryOptions sets the RetryOptions field's value. +func (s *SplunkDestinationUpdate) SetRetryOptions(v *SplunkRetryOptions) *SplunkDestinationUpdate { + s.RetryOptions = v + return s +} + +// SetS3BackupMode sets the S3BackupMode field's value. +func (s *SplunkDestinationUpdate) SetS3BackupMode(v string) *SplunkDestinationUpdate { + s.S3BackupMode = &v + return s +} + +// SetS3Update sets the S3Update field's value. +func (s *SplunkDestinationUpdate) SetS3Update(v *S3DestinationUpdate) *SplunkDestinationUpdate { + s.S3Update = v + return s +} + +// Configures retry behavior in case Kinesis Data Firehose is unable to deliver +// documents to Splunk, or if it doesn't receive an acknowledgment from Splunk. 
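The Splunk destination update path mirrors the description type above. The following is a hedged sketch of building a `SplunkDestinationUpdate` with retry options and running the generated client-side validation; the endpoint, token, and timeout values are placeholders, and the string enum values follow the "Raw"/"Event" values noted in the doc comments.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	retry := &firehose.SplunkRetryOptions{}
	retry.SetDurationInSeconds(300) // total retry window in seconds

	update := &firehose.SplunkDestinationUpdate{}
	update.SetHECEndpoint("https://splunk.example.com:8088")    // placeholder HEC endpoint
	update.SetHECEndpointType("Raw")                            // "Raw" or "Event"
	update.SetHECToken("00000000-0000-0000-0000-000000000000")  // placeholder GUID from Splunk
	update.SetHECAcknowledgmentTimeoutInSeconds(180)            // minimum enforced by Validate
	update.SetRetryOptions(retry)

	if err := update.Validate(); err != nil {
		fmt.Println("invalid Splunk destination update:", err)
		return
	}
	fmt.Println("Splunk destination update OK")
}
```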
+type SplunkRetryOptions struct { + _ struct{} `type:"structure"` + + // The total amount of time that Kinesis Data Firehose spends on retries. This + // duration starts after the initial attempt to send data to Splunk fails. It + // doesn't include the periods during which Kinesis Data Firehose waits for + // acknowledgment from Splunk after each attempt. + DurationInSeconds *int64 `type:"integer"` +} + +// String returns the string representation +func (s SplunkRetryOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SplunkRetryOptions) GoString() string { + return s.String() +} + +// SetDurationInSeconds sets the DurationInSeconds field's value. +func (s *SplunkRetryOptions) SetDurationInSeconds(v int64) *SplunkRetryOptions { + s.DurationInSeconds = &v + return s +} + +type StartDeliveryStreamEncryptionInput struct { + _ struct{} `type:"structure"` + + // The name of the delivery stream for which you want to enable server-side + // encryption (SSE). + // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s StartDeliveryStreamEncryptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartDeliveryStreamEncryptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StartDeliveryStreamEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartDeliveryStreamEncryptionInput"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) + } + if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *StartDeliveryStreamEncryptionInput) SetDeliveryStreamName(v string) *StartDeliveryStreamEncryptionInput { + s.DeliveryStreamName = &v + return s +} - // The Amazon S3 destination.> - S3DestinationDescription *S3DestinationDescription `type:"structure"` +type StartDeliveryStreamEncryptionOutput struct { + _ struct{} `type:"structure"` } // String returns the string representation -func (s SplunkDestinationDescription) String() string { +func (s StartDeliveryStreamEncryptionOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SplunkDestinationDescription) GoString() string { +func (s StartDeliveryStreamEncryptionOutput) GoString() string { return s.String() } -// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. -func (s *SplunkDestinationDescription) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *SplunkDestinationDescription { - s.CloudWatchLoggingOptions = v - return s -} +type StopDeliveryStreamEncryptionInput struct { + _ struct{} `type:"structure"` -// SetHECAcknowledgmentTimeoutInSeconds sets the HECAcknowledgmentTimeoutInSeconds field's value. -func (s *SplunkDestinationDescription) SetHECAcknowledgmentTimeoutInSeconds(v int64) *SplunkDestinationDescription { - s.HECAcknowledgmentTimeoutInSeconds = &v - return s + // The name of the delivery stream for which you want to disable server-side + // encryption (SSE). 
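The `StartDeliveryStreamEncryptionInput` and `StopDeliveryStreamEncryptionInput` shapes above carry only the delivery stream name. A minimal sketch of enabling server-side encryption follows; it assumes the corresponding `StartDeliveryStreamEncryption` operation method generated earlier in this file, and the session setup and stream name are placeholders.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	// Credentials and region come from the default chain; the stream name is a placeholder.
	sess := session.Must(session.NewSession())
	client := firehose.New(sess)

	input := &firehose.StartDeliveryStreamEncryptionInput{}
	input.SetDeliveryStreamName("example-delivery-stream")

	if _, err := client.StartDeliveryStreamEncryption(input); err != nil {
		fmt.Println("failed to enable SSE:", err)
		return
	}
	// The status moves through ENABLING to ENABLED; poll DescribeDeliveryStream to confirm.
	fmt.Println("SSE enablement requested")
}
```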
+ // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` } -// SetHECEndpoint sets the HECEndpoint field's value. -func (s *SplunkDestinationDescription) SetHECEndpoint(v string) *SplunkDestinationDescription { - s.HECEndpoint = &v - return s +// String returns the string representation +func (s StopDeliveryStreamEncryptionInput) String() string { + return awsutil.Prettify(s) } -// SetHECEndpointType sets the HECEndpointType field's value. -func (s *SplunkDestinationDescription) SetHECEndpointType(v string) *SplunkDestinationDescription { - s.HECEndpointType = &v - return s +// GoString returns the string representation +func (s StopDeliveryStreamEncryptionInput) GoString() string { + return s.String() } -// SetHECToken sets the HECToken field's value. -func (s *SplunkDestinationDescription) SetHECToken(v string) *SplunkDestinationDescription { - s.HECToken = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *StopDeliveryStreamEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopDeliveryStreamEncryptionInput"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) + } + if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetProcessingConfiguration sets the ProcessingConfiguration field's value. -func (s *SplunkDestinationDescription) SetProcessingConfiguration(v *ProcessingConfiguration) *SplunkDestinationDescription { - s.ProcessingConfiguration = v +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *StopDeliveryStreamEncryptionInput) SetDeliveryStreamName(v string) *StopDeliveryStreamEncryptionInput { + s.DeliveryStreamName = &v return s } -// SetRetryOptions sets the RetryOptions field's value. -func (s *SplunkDestinationDescription) SetRetryOptions(v *SplunkRetryOptions) *SplunkDestinationDescription { - s.RetryOptions = v - return s +type StopDeliveryStreamEncryptionOutput struct { + _ struct{} `type:"structure"` } -// SetS3BackupMode sets the S3BackupMode field's value. -func (s *SplunkDestinationDescription) SetS3BackupMode(v string) *SplunkDestinationDescription { - s.S3BackupMode = &v - return s +// String returns the string representation +func (s StopDeliveryStreamEncryptionOutput) String() string { + return awsutil.Prettify(s) } -// SetS3DestinationDescription sets the S3DestinationDescription field's value. -func (s *SplunkDestinationDescription) SetS3DestinationDescription(v *S3DestinationDescription) *SplunkDestinationDescription { - s.S3DestinationDescription = v - return s +// GoString returns the string representation +func (s StopDeliveryStreamEncryptionOutput) GoString() string { + return s.String() } -// Describes an update for a destination in Splunk. -type SplunkDestinationUpdate struct { +// Metadata that you can assign to a delivery stream, consisting of a key-value +// pair. +type Tag struct { _ struct{} `type:"structure"` - // The CloudWatch logging options for your delivery stream. - CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"` + // A unique identifier for the tag. Maximum length: 128 characters. Valid characters: + // Unicode letters, digits, white space, _ . 
/ = + - % @ + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` - // The amount of time that Kinesis Firehose waits to receive an acknowledgment - // from Splunk after it sends it data. At the end of the timeout period Kinesis - // Firehose either tries to send the data again or considers it an error, based - // on your retry settings. - HECAcknowledgmentTimeoutInSeconds *int64 `min:"180" type:"integer"` + // An optional string, which you can use to describe or define the tag. Maximum + // length: 256 characters. Valid characters: Unicode letters, digits, white + // space, _ . / = + - % @ + Value *string `type:"string"` +} - // The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your - // data. - HECEndpoint *string `type:"string"` +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} - // This type can be either "Raw" or "Event". - HECEndpointType *string `type:"string" enum:"HECEndpointType"` +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} - // This is a GUID you obtain from your Splunk cluster when you create a new - // HEC endpoint. - HECToken *string `type:"string"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } - // The data processing configuration. - ProcessingConfiguration *ProcessingConfiguration `type:"structure"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // The retry behavior in case Kinesis Firehose is unable to deliver data to - // Splunk or if it doesn't receive an acknowledgment of receipt from Splunk. - RetryOptions *SplunkRetryOptions `type:"structure"` +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} - // Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, - // Kinesis Firehose writes any data that could not be indexed to the configured - // Amazon S3 destination. When set to AllDocuments, Kinesis Firehose delivers - // all incoming records to Amazon S3, and also writes failed documents to Amazon - // S3. Default value is FailedDocumentsOnly. - S3BackupMode *string `type:"string" enum:"SplunkS3BackupMode"` +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} - // Your update to the configuration of the backup Amazon S3 location. - S3Update *S3DestinationUpdate `type:"structure"` +type TagDeliveryStreamInput struct { + _ struct{} `type:"structure"` + + // The name of the delivery stream to which you want to add the tags. + // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` + + // A set of key-value pairs to use to create the tags. 
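Tags are plain key-value pairs subject to the length and character limits noted above. The sketch below attaches a tag using the `TagDeliveryStreamInput` shape that follows; it assumes the `TagDeliveryStream` operation method generated earlier in this file, and the client construction and names are illustrative.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	sess := session.Must(session.NewSession())
	client := firehose.New(sess)

	tag := &firehose.Tag{}
	tag.SetKey("Environment") // required, 1-128 characters
	tag.SetValue("staging")   // optional, up to 256 characters

	input := &firehose.TagDeliveryStreamInput{}
	input.SetDeliveryStreamName("example-delivery-stream") // placeholder stream name
	input.SetTags([]*firehose.Tag{tag})

	if _, err := client.TagDeliveryStream(input); err != nil {
		fmt.Println("failed to tag delivery stream:", err)
	}
}
```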
+ // + // Tags is a required field + Tags []*Tag `min:"1" type:"list" required:"true"` } // String returns the string representation -func (s SplunkDestinationUpdate) String() string { +func (s TagDeliveryStreamInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SplunkDestinationUpdate) GoString() string { +func (s TagDeliveryStreamInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *SplunkDestinationUpdate) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SplunkDestinationUpdate"} - if s.HECAcknowledgmentTimeoutInSeconds != nil && *s.HECAcknowledgmentTimeoutInSeconds < 180 { - invalidParams.Add(request.NewErrParamMinValue("HECAcknowledgmentTimeoutInSeconds", 180)) +func (s *TagDeliveryStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagDeliveryStreamInput"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) } - if s.ProcessingConfiguration != nil { - if err := s.ProcessingConfiguration.Validate(); err != nil { - invalidParams.AddNested("ProcessingConfiguration", err.(request.ErrInvalidParams)) - } + if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) } - if s.S3Update != nil { - if err := s.S3Update.Validate(); err != nil { - invalidParams.AddNested("S3Update", err.(request.ErrInvalidParams)) + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil && len(s.Tags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Tags", 1)) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } } } @@ -4368,93 +6105,109 @@ func (s *SplunkDestinationUpdate) Validate() error { return nil } -// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. -func (s *SplunkDestinationUpdate) SetCloudWatchLoggingOptions(v *CloudWatchLoggingOptions) *SplunkDestinationUpdate { - s.CloudWatchLoggingOptions = v +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *TagDeliveryStreamInput) SetDeliveryStreamName(v string) *TagDeliveryStreamInput { + s.DeliveryStreamName = &v return s } -// SetHECAcknowledgmentTimeoutInSeconds sets the HECAcknowledgmentTimeoutInSeconds field's value. -func (s *SplunkDestinationUpdate) SetHECAcknowledgmentTimeoutInSeconds(v int64) *SplunkDestinationUpdate { - s.HECAcknowledgmentTimeoutInSeconds = &v +// SetTags sets the Tags field's value. +func (s *TagDeliveryStreamInput) SetTags(v []*Tag) *TagDeliveryStreamInput { + s.Tags = v return s } -// SetHECEndpoint sets the HECEndpoint field's value. -func (s *SplunkDestinationUpdate) SetHECEndpoint(v string) *SplunkDestinationUpdate { - s.HECEndpoint = &v - return s +type TagDeliveryStreamOutput struct { + _ struct{} `type:"structure"` } -// SetHECEndpointType sets the HECEndpointType field's value. -func (s *SplunkDestinationUpdate) SetHECEndpointType(v string) *SplunkDestinationUpdate { - s.HECEndpointType = &v - return s +// String returns the string representation +func (s TagDeliveryStreamOutput) String() string { + return awsutil.Prettify(s) } -// SetHECToken sets the HECToken field's value. 
-func (s *SplunkDestinationUpdate) SetHECToken(v string) *SplunkDestinationUpdate { - s.HECToken = &v - return s +// GoString returns the string representation +func (s TagDeliveryStreamOutput) GoString() string { + return s.String() } -// SetProcessingConfiguration sets the ProcessingConfiguration field's value. -func (s *SplunkDestinationUpdate) SetProcessingConfiguration(v *ProcessingConfiguration) *SplunkDestinationUpdate { - s.ProcessingConfiguration = v - return s +type UntagDeliveryStreamInput struct { + _ struct{} `type:"structure"` + + // The name of the delivery stream. + // + // DeliveryStreamName is a required field + DeliveryStreamName *string `min:"1" type:"string" required:"true"` + + // A list of tag keys. Each corresponding tag is removed from the delivery stream. + // + // TagKeys is a required field + TagKeys []*string `min:"1" type:"list" required:"true"` } -// SetRetryOptions sets the RetryOptions field's value. -func (s *SplunkDestinationUpdate) SetRetryOptions(v *SplunkRetryOptions) *SplunkDestinationUpdate { - s.RetryOptions = v - return s +// String returns the string representation +func (s UntagDeliveryStreamInput) String() string { + return awsutil.Prettify(s) } -// SetS3BackupMode sets the S3BackupMode field's value. -func (s *SplunkDestinationUpdate) SetS3BackupMode(v string) *SplunkDestinationUpdate { - s.S3BackupMode = &v +// GoString returns the string representation +func (s UntagDeliveryStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagDeliveryStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagDeliveryStreamInput"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) + } + if s.DeliveryStreamName != nil && len(*s.DeliveryStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DeliveryStreamName", 1)) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + if s.TagKeys != nil && len(s.TagKeys) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TagKeys", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *UntagDeliveryStreamInput) SetDeliveryStreamName(v string) *UntagDeliveryStreamInput { + s.DeliveryStreamName = &v return s } -// SetS3Update sets the S3Update field's value. -func (s *SplunkDestinationUpdate) SetS3Update(v *S3DestinationUpdate) *SplunkDestinationUpdate { - s.S3Update = v +// SetTagKeys sets the TagKeys field's value. +func (s *UntagDeliveryStreamInput) SetTagKeys(v []*string) *UntagDeliveryStreamInput { + s.TagKeys = v return s } -// Configures retry behavior in case Kinesis Firehose is unable to deliver documents -// to Splunk or if it doesn't receive an acknowledgment from Splunk. -type SplunkRetryOptions struct { +type UntagDeliveryStreamOutput struct { _ struct{} `type:"structure"` - - // The total amount of time that Kinesis Firehose spends on retries. This duration - // starts after the initial attempt to send data to Splunk fails and doesn't - // include the periods during which Kinesis Firehose waits for acknowledgment - // from Splunk after each attempt. 
- DurationInSeconds *int64 `type:"integer"` } // String returns the string representation -func (s SplunkRetryOptions) String() string { +func (s UntagDeliveryStreamOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SplunkRetryOptions) GoString() string { +func (s UntagDeliveryStreamOutput) GoString() string { return s.String() } -// SetDurationInSeconds sets the DurationInSeconds field's value. -func (s *SplunkRetryOptions) SetDurationInSeconds(v int64) *SplunkRetryOptions { - s.DurationInSeconds = &v - return s -} - type UpdateDestinationInput struct { _ struct{} `type:"structure"` // Obtain this value from the VersionId result of DeliveryStreamDescription. - // This value is required, and helps the service to perform conditional operations. + // This value is required, and helps the service perform conditional operations. // For example, if there is an interleaving update and this value is null, then // the update destination fails. After the update is successful, the VersionId // value is updated. The service then performs a merge of the old configuration @@ -4483,6 +6236,8 @@ type UpdateDestinationInput struct { RedshiftDestinationUpdate *RedshiftDestinationUpdate `type:"structure"` // [Deprecated] Describes an update for a destination in Amazon S3. + // + // Deprecated: S3DestinationUpdate has been deprecated S3DestinationUpdate *S3DestinationUpdate `deprecated:"true" type:"structure"` // Describes an update for a destination in Splunk. @@ -4628,6 +6383,20 @@ const ( CompressionFormatSnappy = "Snappy" ) +const ( + // DeliveryStreamEncryptionStatusEnabled is a DeliveryStreamEncryptionStatus enum value + DeliveryStreamEncryptionStatusEnabled = "ENABLED" + + // DeliveryStreamEncryptionStatusEnabling is a DeliveryStreamEncryptionStatus enum value + DeliveryStreamEncryptionStatusEnabling = "ENABLING" + + // DeliveryStreamEncryptionStatusDisabled is a DeliveryStreamEncryptionStatus enum value + DeliveryStreamEncryptionStatusDisabled = "DISABLED" + + // DeliveryStreamEncryptionStatusDisabling is a DeliveryStreamEncryptionStatus enum value + DeliveryStreamEncryptionStatusDisabling = "DISABLING" +) + const ( // DeliveryStreamStatusCreating is a DeliveryStreamStatus enum value DeliveryStreamStatusCreating = "CREATING" @@ -4685,6 +6454,44 @@ const ( NoEncryptionConfigNoEncryption = "NoEncryption" ) +const ( + // OrcCompressionNone is a OrcCompression enum value + OrcCompressionNone = "NONE" + + // OrcCompressionZlib is a OrcCompression enum value + OrcCompressionZlib = "ZLIB" + + // OrcCompressionSnappy is a OrcCompression enum value + OrcCompressionSnappy = "SNAPPY" +) + +const ( + // OrcFormatVersionV011 is a OrcFormatVersion enum value + OrcFormatVersionV011 = "V0_11" + + // OrcFormatVersionV012 is a OrcFormatVersion enum value + OrcFormatVersionV012 = "V0_12" +) + +const ( + // ParquetCompressionUncompressed is a ParquetCompression enum value + ParquetCompressionUncompressed = "UNCOMPRESSED" + + // ParquetCompressionGzip is a ParquetCompression enum value + ParquetCompressionGzip = "GZIP" + + // ParquetCompressionSnappy is a ParquetCompression enum value + ParquetCompressionSnappy = "SNAPPY" +) + +const ( + // ParquetWriterVersionV1 is a ParquetWriterVersion enum value + ParquetWriterVersionV1 = "V1" + + // ParquetWriterVersionV2 is a ParquetWriterVersion enum value + ParquetWriterVersionV2 = "V2" +) + const ( // ProcessorParameterNameLambdaArn is a ProcessorParameterName enum value ProcessorParameterNameLambdaArn = "LambdaArn" diff 
--git a/vendor/github.com/aws/aws-sdk-go/service/firehose/doc.go b/vendor/github.com/aws/aws-sdk-go/service/firehose/doc.go index a183c051c9d..32bec009c2f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/firehose/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/firehose/doc.go @@ -3,9 +3,9 @@ // Package firehose provides the client and types for making API // requests to Amazon Kinesis Firehose. // -// Amazon Kinesis Firehose is a fully managed service that delivers real-time +// Amazon Kinesis Data Firehose is a fully managed service that delivers real-time // streaming data to destinations such as Amazon Simple Storage Service (Amazon -// S3), Amazon Elasticsearch Service (Amazon ES), and Amazon Redshift. +// S3), Amazon Elasticsearch Service (Amazon ES), Amazon Redshift, and Splunk. // // See https://docs.aws.amazon.com/goto/WebAPI/firehose-2015-08-04 for more information on this service. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/firehose/errors.go b/vendor/github.com/aws/aws-sdk-go/service/firehose/errors.go index 7854fccd3ff..d70656e3e3e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/firehose/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/firehose/errors.go @@ -38,9 +38,9 @@ const ( // ErrCodeServiceUnavailableException for service response error code // "ServiceUnavailableException". // - // The service is unavailable, back off and retry the operation. If you continue + // The service is unavailable. Back off and retry the operation. If you continue // to see the exception, throughput limits for the delivery stream may have // been exceeded. For more information about limits and how to request an increase, - // see Amazon Kinesis Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). + // see Amazon Kinesis Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html). ErrCodeServiceUnavailableException = "ServiceUnavailableException" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/firehose/service.go b/vendor/github.com/aws/aws-sdk-go/service/firehose/service.go index 973386c05b1..bcdf23dffb9 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/firehose/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/firehose/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "firehose" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "firehose" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Firehose" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Firehose client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/fms/api.go b/vendor/github.com/aws/aws-sdk-go/service/fms/api.go new file mode 100644 index 00000000000..16eb63e6ffa --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/fms/api.go @@ -0,0 +1,2750 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package fms + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +const opAssociateAdminAccount = "AssociateAdminAccount" + +// AssociateAdminAccountRequest generates a "aws/request.Request" representing the +// client's request for the AssociateAdminAccount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AssociateAdminAccount for more information on using the AssociateAdminAccount +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssociateAdminAccountRequest method. +// req, resp := client.AssociateAdminAccountRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/AssociateAdminAccount +func (c *FMS) AssociateAdminAccountRequest(input *AssociateAdminAccountInput) (req *request.Request, output *AssociateAdminAccountOutput) { + op := &request.Operation{ + Name: opAssociateAdminAccount, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AssociateAdminAccountInput{} + } + + output = &AssociateAdminAccountOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AssociateAdminAccount API operation for Firewall Management Service. +// +// Sets the AWS Firewall Manager administrator account. AWS Firewall Manager +// must be associated with the master account your AWS organization or associated +// with a member account that has the appropriate permissions. If the account +// ID that you submit is not an AWS Organizations master account, AWS Firewall +// Manager will set the appropriate permissions for the given member account. +// +// The account that you associate with AWS Firewall Manager is called the AWS +// Firewall Manager administrator account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation AssociateAdminAccount for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The parameters of the request were invalid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. 
+// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/AssociateAdminAccount +func (c *FMS) AssociateAdminAccount(input *AssociateAdminAccountInput) (*AssociateAdminAccountOutput, error) { + req, out := c.AssociateAdminAccountRequest(input) + return out, req.Send() +} + +// AssociateAdminAccountWithContext is the same as AssociateAdminAccount with the addition of +// the ability to pass a context and additional request options. +// +// See AssociateAdminAccount for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) AssociateAdminAccountWithContext(ctx aws.Context, input *AssociateAdminAccountInput, opts ...request.Option) (*AssociateAdminAccountOutput, error) { + req, out := c.AssociateAdminAccountRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteNotificationChannel = "DeleteNotificationChannel" + +// DeleteNotificationChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteNotificationChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteNotificationChannel for more information on using the DeleteNotificationChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteNotificationChannelRequest method. +// req, resp := client.DeleteNotificationChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/DeleteNotificationChannel +func (c *FMS) DeleteNotificationChannelRequest(input *DeleteNotificationChannelInput) (req *request.Request, output *DeleteNotificationChannelOutput) { + op := &request.Operation{ + Name: opDeleteNotificationChannel, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteNotificationChannelInput{} + } + + output = &DeleteNotificationChannelOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteNotificationChannel API operation for Firewall Management Service. +// +// Deletes an AWS Firewall Manager association with the IAM role and the Amazon +// Simple Notification Service (SNS) topic that is used to record AWS Firewall +// Manager SNS logs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
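The new FMS client follows the same generated request pattern as Firehose. A hedged sketch of calling `AssociateAdminAccount` is shown below; the `AdminAccount` field and its setter are assumed from the full generated input shape (only the operation doc comments appear in this hunk), and the account ID is a placeholder.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/fms"
)

func main() {
	sess := session.Must(session.NewSession())
	client := fms.New(sess)

	// AdminAccount is assumed from the full generated shape; the ID is a placeholder.
	input := &fms.AssociateAdminAccountInput{}
	input.SetAdminAccount("111111111111")

	if _, err := client.AssociateAdminAccount(input); err != nil {
		fmt.Println("failed to set Firewall Manager administrator:", err)
		return
	}
	fmt.Println("administrator account associated")
}
```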
+// +// See the AWS API reference guide for Firewall Management Service's +// API operation DeleteNotificationChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/DeleteNotificationChannel +func (c *FMS) DeleteNotificationChannel(input *DeleteNotificationChannelInput) (*DeleteNotificationChannelOutput, error) { + req, out := c.DeleteNotificationChannelRequest(input) + return out, req.Send() +} + +// DeleteNotificationChannelWithContext is the same as DeleteNotificationChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteNotificationChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) DeleteNotificationChannelWithContext(ctx aws.Context, input *DeleteNotificationChannelInput, opts ...request.Option) (*DeleteNotificationChannelOutput, error) { + req, out := c.DeleteNotificationChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeletePolicy = "DeletePolicy" + +// DeletePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeletePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeletePolicy for more information on using the DeletePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeletePolicyRequest method. +// req, resp := client.DeletePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/DeletePolicy +func (c *FMS) DeletePolicyRequest(input *DeletePolicyInput) (req *request.Request, output *DeletePolicyOutput) { + op := &request.Operation{ + Name: opDeletePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeletePolicyInput{} + } + + output = &DeletePolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeletePolicy API operation for Firewall Management Service. 
+// +// Permanently deletes an AWS Firewall Manager policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation DeletePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/DeletePolicy +func (c *FMS) DeletePolicy(input *DeletePolicyInput) (*DeletePolicyOutput, error) { + req, out := c.DeletePolicyRequest(input) + return out, req.Send() +} + +// DeletePolicyWithContext is the same as DeletePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeletePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) DeletePolicyWithContext(ctx aws.Context, input *DeletePolicyInput, opts ...request.Option) (*DeletePolicyOutput, error) { + req, out := c.DeletePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDisassociateAdminAccount = "DisassociateAdminAccount" + +// DisassociateAdminAccountRequest generates a "aws/request.Request" representing the +// client's request for the DisassociateAdminAccount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DisassociateAdminAccount for more information on using the DisassociateAdminAccount +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisassociateAdminAccountRequest method. 
+// req, resp := client.DisassociateAdminAccountRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/DisassociateAdminAccount +func (c *FMS) DisassociateAdminAccountRequest(input *DisassociateAdminAccountInput) (req *request.Request, output *DisassociateAdminAccountOutput) { + op := &request.Operation{ + Name: opDisassociateAdminAccount, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DisassociateAdminAccountInput{} + } + + output = &DisassociateAdminAccountOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DisassociateAdminAccount API operation for Firewall Management Service. +// +// Disassociates the account that has been set as the AWS Firewall Manager administrator +// account. You will need to submit an AssociateAdminAccount request to set +// a new account as the AWS Firewall administrator. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation DisassociateAdminAccount for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/DisassociateAdminAccount +func (c *FMS) DisassociateAdminAccount(input *DisassociateAdminAccountInput) (*DisassociateAdminAccountOutput, error) { + req, out := c.DisassociateAdminAccountRequest(input) + return out, req.Send() +} + +// DisassociateAdminAccountWithContext is the same as DisassociateAdminAccount with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateAdminAccount for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) DisassociateAdminAccountWithContext(ctx aws.Context, input *DisassociateAdminAccountInput, opts ...request.Option) (*DisassociateAdminAccountOutput, error) { + req, out := c.DisassociateAdminAccountRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetAdminAccount = "GetAdminAccount" + +// GetAdminAccountRequest generates a "aws/request.Request" representing the +// client's request for the GetAdminAccount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAdminAccount for more information on using the GetAdminAccount +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAdminAccountRequest method. +// req, resp := client.GetAdminAccountRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/GetAdminAccount +func (c *FMS) GetAdminAccountRequest(input *GetAdminAccountInput) (req *request.Request, output *GetAdminAccountOutput) { + op := &request.Operation{ + Name: opGetAdminAccount, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetAdminAccountInput{} + } + + output = &GetAdminAccountOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAdminAccount API operation for Firewall Management Service. +// +// Returns the AWS Organizations master account that is associated with AWS +// Firewall Manager as the AWS Firewall Manager administrator. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation GetAdminAccount for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/GetAdminAccount +func (c *FMS) GetAdminAccount(input *GetAdminAccountInput) (*GetAdminAccountOutput, error) { + req, out := c.GetAdminAccountRequest(input) + return out, req.Send() +} + +// GetAdminAccountWithContext is the same as GetAdminAccount with the addition of +// the ability to pass a context and additional request options. +// +// See GetAdminAccount for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) GetAdminAccountWithContext(ctx aws.Context, input *GetAdminAccountInput, opts ...request.Option) (*GetAdminAccountOutput, error) { + req, out := c.GetAdminAccountRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetComplianceDetail = "GetComplianceDetail" + +// GetComplianceDetailRequest generates a "aws/request.Request" representing the +// client's request for the GetComplianceDetail operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetComplianceDetail for more information on using the GetComplianceDetail +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetComplianceDetailRequest method. +// req, resp := client.GetComplianceDetailRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/GetComplianceDetail +func (c *FMS) GetComplianceDetailRequest(input *GetComplianceDetailInput) (req *request.Request, output *GetComplianceDetailOutput) { + op := &request.Operation{ + Name: opGetComplianceDetail, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetComplianceDetailInput{} + } + + output = &GetComplianceDetailOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetComplianceDetail API operation for Firewall Management Service. +// +// Returns detailed compliance information about the specified member account. +// Details include resources that are in and out of compliance with the specified +// policy. Resources are considered non-compliant if the specified policy has +// not been applied to them. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation GetComplianceDetail for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/GetComplianceDetail +func (c *FMS) GetComplianceDetail(input *GetComplianceDetailInput) (*GetComplianceDetailOutput, error) { + req, out := c.GetComplianceDetailRequest(input) + return out, req.Send() +} + +// GetComplianceDetailWithContext is the same as GetComplianceDetail with the addition of +// the ability to pass a context and additional request options. +// +// See GetComplianceDetail for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) GetComplianceDetailWithContext(ctx aws.Context, input *GetComplianceDetailInput, opts ...request.Option) (*GetComplianceDetailOutput, error) { + req, out := c.GetComplianceDetailRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opGetNotificationChannel = "GetNotificationChannel" + +// GetNotificationChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetNotificationChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetNotificationChannel for more information on using the GetNotificationChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetNotificationChannelRequest method. +// req, resp := client.GetNotificationChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/GetNotificationChannel +func (c *FMS) GetNotificationChannelRequest(input *GetNotificationChannelInput) (req *request.Request, output *GetNotificationChannelOutput) { + op := &request.Operation{ + Name: opGetNotificationChannel, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetNotificationChannelInput{} + } + + output = &GetNotificationChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetNotificationChannel API operation for Firewall Management Service. +// +// Returns information about the Amazon Simple Notification Service (SNS) topic +// that is used to record AWS Firewall Manager SNS logs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation GetNotificationChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/GetNotificationChannel +func (c *FMS) GetNotificationChannel(input *GetNotificationChannelInput) (*GetNotificationChannelOutput, error) { + req, out := c.GetNotificationChannelRequest(input) + return out, req.Send() +} + +// GetNotificationChannelWithContext is the same as GetNotificationChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetNotificationChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) GetNotificationChannelWithContext(ctx aws.Context, input *GetNotificationChannelInput, opts ...request.Option) (*GetNotificationChannelOutput, error) { + req, out := c.GetNotificationChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetPolicy = "GetPolicy" + +// GetPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetPolicy for more information on using the GetPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetPolicyRequest method. +// req, resp := client.GetPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/GetPolicy +func (c *FMS) GetPolicyRequest(input *GetPolicyInput) (req *request.Request, output *GetPolicyOutput) { + op := &request.Operation{ + Name: opGetPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetPolicyInput{} + } + + output = &GetPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetPolicy API operation for Firewall Management Service. +// +// Returns information about the specified AWS Firewall Manager policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation GetPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeInvalidTypeException "InvalidTypeException" +// The value of the Type parameter is invalid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/GetPolicy +func (c *FMS) GetPolicy(input *GetPolicyInput) (*GetPolicyOutput, error) { + req, out := c.GetPolicyRequest(input) + return out, req.Send() +} + +// GetPolicyWithContext is the same as GetPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) GetPolicyWithContext(ctx aws.Context, input *GetPolicyInput, opts ...request.Option) (*GetPolicyOutput, error) { + req, out := c.GetPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListComplianceStatus = "ListComplianceStatus" + +// ListComplianceStatusRequest generates a "aws/request.Request" representing the +// client's request for the ListComplianceStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListComplianceStatus for more information on using the ListComplianceStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListComplianceStatusRequest method. +// req, resp := client.ListComplianceStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/ListComplianceStatus +func (c *FMS) ListComplianceStatusRequest(input *ListComplianceStatusInput) (req *request.Request, output *ListComplianceStatusOutput) { + op := &request.Operation{ + Name: opListComplianceStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListComplianceStatusInput{} + } + + output = &ListComplianceStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListComplianceStatus API operation for Firewall Management Service. +// +// Returns an array of PolicyComplianceStatus objects in the response. Use PolicyComplianceStatus +// to get a summary of which member accounts are protected by the specified +// policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation ListComplianceStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/ListComplianceStatus +func (c *FMS) ListComplianceStatus(input *ListComplianceStatusInput) (*ListComplianceStatusOutput, error) { + req, out := c.ListComplianceStatusRequest(input) + return out, req.Send() +} + +// ListComplianceStatusWithContext is the same as ListComplianceStatus with the addition of +// the ability to pass a context and additional request options. +// +// See ListComplianceStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *FMS) ListComplianceStatusWithContext(ctx aws.Context, input *ListComplianceStatusInput, opts ...request.Option) (*ListComplianceStatusOutput, error) { + req, out := c.ListComplianceStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListMemberAccounts = "ListMemberAccounts" + +// ListMemberAccountsRequest generates a "aws/request.Request" representing the +// client's request for the ListMemberAccounts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListMemberAccounts for more information on using the ListMemberAccounts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListMemberAccountsRequest method. +// req, resp := client.ListMemberAccountsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/ListMemberAccounts +func (c *FMS) ListMemberAccountsRequest(input *ListMemberAccountsInput) (req *request.Request, output *ListMemberAccountsOutput) { + op := &request.Operation{ + Name: opListMemberAccounts, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListMemberAccountsInput{} + } + + output = &ListMemberAccountsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListMemberAccounts API operation for Firewall Management Service. +// +// Returns a MemberAccounts object that lists the member accounts in the administrator's +// AWS organization. +// +// The ListMemberAccounts must be submitted by the account that is set as the +// AWS Firewall Manager administrator. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation ListMemberAccounts for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/ListMemberAccounts +func (c *FMS) ListMemberAccounts(input *ListMemberAccountsInput) (*ListMemberAccountsOutput, error) { + req, out := c.ListMemberAccountsRequest(input) + return out, req.Send() +} + +// ListMemberAccountsWithContext is the same as ListMemberAccounts with the addition of +// the ability to pass a context and additional request options. +// +// See ListMemberAccounts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *FMS) ListMemberAccountsWithContext(ctx aws.Context, input *ListMemberAccountsInput, opts ...request.Option) (*ListMemberAccountsOutput, error) { + req, out := c.ListMemberAccountsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListPolicies = "ListPolicies" + +// ListPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListPolicies for more information on using the ListPolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListPoliciesRequest method. +// req, resp := client.ListPoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/ListPolicies +func (c *FMS) ListPoliciesRequest(input *ListPoliciesInput) (req *request.Request, output *ListPoliciesOutput) { + op := &request.Operation{ + Name: opListPolicies, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListPoliciesInput{} + } + + output = &ListPoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListPolicies API operation for Firewall Management Service. +// +// Returns an array of PolicySummary objects in the response. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation ListPolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The operation exceeds a resource limit, for example, the maximum number of +// policy objects that you can create for an AWS account. For more information, +// see Firewall Manager Limits (http://docs.aws.amazon.com/waf/latest/developerguide/fms-limits.html) +// in the AWS WAF Developer Guide. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/ListPolicies +func (c *FMS) ListPolicies(input *ListPoliciesInput) (*ListPoliciesOutput, error) { + req, out := c.ListPoliciesRequest(input) + return out, req.Send() +} + +// ListPoliciesWithContext is the same as ListPolicies with the addition of +// the ability to pass a context and additional request options. 
+// +// See ListPolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) ListPoliciesWithContext(ctx aws.Context, input *ListPoliciesInput, opts ...request.Option) (*ListPoliciesOutput, error) { + req, out := c.ListPoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutNotificationChannel = "PutNotificationChannel" + +// PutNotificationChannelRequest generates a "aws/request.Request" representing the +// client's request for the PutNotificationChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutNotificationChannel for more information on using the PutNotificationChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutNotificationChannelRequest method. +// req, resp := client.PutNotificationChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/PutNotificationChannel +func (c *FMS) PutNotificationChannelRequest(input *PutNotificationChannelInput) (req *request.Request, output *PutNotificationChannelOutput) { + op := &request.Operation{ + Name: opPutNotificationChannel, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutNotificationChannelInput{} + } + + output = &PutNotificationChannelOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutNotificationChannel API operation for Firewall Management Service. +// +// Designates the IAM role and Amazon Simple Notification Service (SNS) topic +// that AWS Firewall Manager uses to record SNS logs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation PutNotificationChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/PutNotificationChannel +func (c *FMS) PutNotificationChannel(input *PutNotificationChannelInput) (*PutNotificationChannelOutput, error) { + req, out := c.PutNotificationChannelRequest(input) + return out, req.Send() +} + +// PutNotificationChannelWithContext is the same as PutNotificationChannel with the addition of +// the ability to pass a context and additional request options. +// +// See PutNotificationChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) PutNotificationChannelWithContext(ctx aws.Context, input *PutNotificationChannelInput, opts ...request.Option) (*PutNotificationChannelOutput, error) { + req, out := c.PutNotificationChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutPolicy = "PutPolicy" + +// PutPolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutPolicy for more information on using the PutPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutPolicyRequest method. +// req, resp := client.PutPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/PutPolicy +func (c *FMS) PutPolicyRequest(input *PutPolicyInput) (req *request.Request, output *PutPolicyOutput) { + op := &request.Operation{ + Name: opPutPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutPolicyInput{} + } + + output = &PutPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutPolicy API operation for Firewall Management Service. +// +// Creates an AWS Firewall Manager policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Firewall Management Service's +// API operation PutPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation failed because there was nothing to do. For example, you might +// have submitted an AssociateAdminAccount request, but the account ID that +// you submitted was already set as the AWS Firewall Manager administrator. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The parameters of the request were invalid. 
+// +// * ErrCodeLimitExceededException "LimitExceededException" +// The operation exceeds a resource limit, for example, the maximum number of +// policy objects that you can create for an AWS account. For more information, +// see Firewall Manager Limits (http://docs.aws.amazon.com/waf/latest/developerguide/fms-limits.html) +// in the AWS WAF Developer Guide. +// +// * ErrCodeInternalErrorException "InternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeInvalidTypeException "InvalidTypeException" +// The value of the Type parameter is invalid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01/PutPolicy +func (c *FMS) PutPolicy(input *PutPolicyInput) (*PutPolicyOutput, error) { + req, out := c.PutPolicyRequest(input) + return out, req.Send() +} + +// PutPolicyWithContext is the same as PutPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *FMS) PutPolicyWithContext(ctx aws.Context, input *PutPolicyInput, opts ...request.Option) (*PutPolicyOutput, error) { + req, out := c.PutPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AssociateAdminAccountInput struct { + _ struct{} `type:"structure"` + + // The AWS account ID to associate with AWS Firewall Manager as the AWS Firewall + // Manager administrator account. This can be an AWS Organizations master account + // or a member account. For more information about AWS Organizations and master + // accounts, see Managing the AWS Accounts in Your Organization (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts.html). + // + // AdminAccount is a required field + AdminAccount *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociateAdminAccountInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateAdminAccountInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateAdminAccountInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociateAdminAccountInput"} + if s.AdminAccount == nil { + invalidParams.Add(request.NewErrParamRequired("AdminAccount")) + } + if s.AdminAccount != nil && len(*s.AdminAccount) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AdminAccount", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAdminAccount sets the AdminAccount field's value. 
+func (s *AssociateAdminAccountInput) SetAdminAccount(v string) *AssociateAdminAccountInput { + s.AdminAccount = &v + return s +} + +type AssociateAdminAccountOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AssociateAdminAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateAdminAccountOutput) GoString() string { + return s.String() +} + +// Details of the resource that is not protected by the policy. +type ComplianceViolator struct { + _ struct{} `type:"structure"` + + // The resource ID. + ResourceId *string `min:"1" type:"string"` + + // The resource type. This is in the format shown in AWS Resource Types Reference + // (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html). + // Valid values are AWS::ElasticLoadBalancingV2::LoadBalancer or AWS::CloudFront::Distribution. + ResourceType *string `min:"1" type:"string"` + + // The reason that the resource is not protected by the policy. + ViolationReason *string `type:"string" enum:"ViolationReason"` +} + +// String returns the string representation +func (s ComplianceViolator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComplianceViolator) GoString() string { + return s.String() +} + +// SetResourceId sets the ResourceId field's value. +func (s *ComplianceViolator) SetResourceId(v string) *ComplianceViolator { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ComplianceViolator) SetResourceType(v string) *ComplianceViolator { + s.ResourceType = &v + return s +} + +// SetViolationReason sets the ViolationReason field's value. +func (s *ComplianceViolator) SetViolationReason(v string) *ComplianceViolator { + s.ViolationReason = &v + return s +} + +type DeleteNotificationChannelInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteNotificationChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteNotificationChannelInput) GoString() string { + return s.String() +} + +type DeleteNotificationChannelOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteNotificationChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteNotificationChannelOutput) GoString() string { + return s.String() +} + +type DeletePolicyInput struct { + _ struct{} `type:"structure"` + + // The ID of the policy that you want to delete. PolicyId is returned by PutPolicy + // and by ListPolicies. + // + // PolicyId is a required field + PolicyId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeletePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeletePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePolicyInput"} + if s.PolicyId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyId")) + } + if s.PolicyId != nil && len(*s.PolicyId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("PolicyId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyId sets the PolicyId field's value. +func (s *DeletePolicyInput) SetPolicyId(v string) *DeletePolicyInput { + s.PolicyId = &v + return s +} + +type DeletePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeletePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePolicyOutput) GoString() string { + return s.String() +} + +type DisassociateAdminAccountInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisassociateAdminAccountInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateAdminAccountInput) GoString() string { + return s.String() +} + +type DisassociateAdminAccountOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisassociateAdminAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateAdminAccountOutput) GoString() string { + return s.String() +} + +// Describes the compliance status for the account. An account is considered +// non-compliant if it includes resources that are not protected by the specified +// policy. +type EvaluationResult struct { + _ struct{} `type:"structure"` + + // Describes an AWS account's compliance with the AWS Firewall Manager policy. + ComplianceStatus *string `type:"string" enum:"PolicyComplianceStatusType"` + + // Indicates that over 100 resources are non-compliant with the AWS Firewall + // Manager policy. + EvaluationLimitExceeded *bool `type:"boolean"` + + // Number of resources that are non-compliant with the specified policy. A resource + // is considered non-compliant if it is not associated with the specified policy. + ViolatorCount *int64 `type:"long"` +} + +// String returns the string representation +func (s EvaluationResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EvaluationResult) GoString() string { + return s.String() +} + +// SetComplianceStatus sets the ComplianceStatus field's value. +func (s *EvaluationResult) SetComplianceStatus(v string) *EvaluationResult { + s.ComplianceStatus = &v + return s +} + +// SetEvaluationLimitExceeded sets the EvaluationLimitExceeded field's value. +func (s *EvaluationResult) SetEvaluationLimitExceeded(v bool) *EvaluationResult { + s.EvaluationLimitExceeded = &v + return s +} + +// SetViolatorCount sets the ViolatorCount field's value. 
+func (s *EvaluationResult) SetViolatorCount(v int64) *EvaluationResult { + s.ViolatorCount = &v + return s +} + +type GetAdminAccountInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s GetAdminAccountInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAdminAccountInput) GoString() string { + return s.String() +} + +type GetAdminAccountOutput struct { + _ struct{} `type:"structure"` + + // The AWS account that is set as the AWS Firewall Manager administrator. + AdminAccount *string `min:"1" type:"string"` + + // The status of the AWS account that you set as the AWS Firewall Manager administrator. + RoleStatus *string `type:"string" enum:"AccountRoleStatus"` +} + +// String returns the string representation +func (s GetAdminAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAdminAccountOutput) GoString() string { + return s.String() +} + +// SetAdminAccount sets the AdminAccount field's value. +func (s *GetAdminAccountOutput) SetAdminAccount(v string) *GetAdminAccountOutput { + s.AdminAccount = &v + return s +} + +// SetRoleStatus sets the RoleStatus field's value. +func (s *GetAdminAccountOutput) SetRoleStatus(v string) *GetAdminAccountOutput { + s.RoleStatus = &v + return s +} + +type GetComplianceDetailInput struct { + _ struct{} `type:"structure"` + + // The AWS account that owns the resources that you want to get the details + // for. + // + // MemberAccount is a required field + MemberAccount *string `min:"1" type:"string" required:"true"` + + // The ID of the policy that you want to get the details for. PolicyId is returned + // by PutPolicy and by ListPolicies. + // + // PolicyId is a required field + PolicyId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetComplianceDetailInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetComplianceDetailInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetComplianceDetailInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetComplianceDetailInput"} + if s.MemberAccount == nil { + invalidParams.Add(request.NewErrParamRequired("MemberAccount")) + } + if s.MemberAccount != nil && len(*s.MemberAccount) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MemberAccount", 1)) + } + if s.PolicyId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyId")) + } + if s.PolicyId != nil && len(*s.PolicyId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("PolicyId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMemberAccount sets the MemberAccount field's value. +func (s *GetComplianceDetailInput) SetMemberAccount(v string) *GetComplianceDetailInput { + s.MemberAccount = &v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *GetComplianceDetailInput) SetPolicyId(v string) *GetComplianceDetailInput { + s.PolicyId = &v + return s +} + +type GetComplianceDetailOutput struct { + _ struct{} `type:"structure"` + + // Information about the resources and the policy that you specified in the + // GetComplianceDetail request. 
+ PolicyComplianceDetail *PolicyComplianceDetail `type:"structure"` +} + +// String returns the string representation +func (s GetComplianceDetailOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetComplianceDetailOutput) GoString() string { + return s.String() +} + +// SetPolicyComplianceDetail sets the PolicyComplianceDetail field's value. +func (s *GetComplianceDetailOutput) SetPolicyComplianceDetail(v *PolicyComplianceDetail) *GetComplianceDetailOutput { + s.PolicyComplianceDetail = v + return s +} + +type GetNotificationChannelInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s GetNotificationChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetNotificationChannelInput) GoString() string { + return s.String() +} + +type GetNotificationChannelOutput struct { + _ struct{} `type:"structure"` + + // The IAM role that is used by AWS Firewall Manager to record activity to SNS. + SnsRoleName *string `min:"1" type:"string"` + + // The SNS topic that records AWS Firewall Manager activity. + SnsTopicArn *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetNotificationChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetNotificationChannelOutput) GoString() string { + return s.String() +} + +// SetSnsRoleName sets the SnsRoleName field's value. +func (s *GetNotificationChannelOutput) SetSnsRoleName(v string) *GetNotificationChannelOutput { + s.SnsRoleName = &v + return s +} + +// SetSnsTopicArn sets the SnsTopicArn field's value. +func (s *GetNotificationChannelOutput) SetSnsTopicArn(v string) *GetNotificationChannelOutput { + s.SnsTopicArn = &v + return s +} + +type GetPolicyInput struct { + _ struct{} `type:"structure"` + + // The ID of the AWS Firewall Manager policy that you want the details for. + // + // PolicyId is a required field + PolicyId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPolicyInput"} + if s.PolicyId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyId")) + } + if s.PolicyId != nil && len(*s.PolicyId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("PolicyId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyId sets the PolicyId field's value. +func (s *GetPolicyInput) SetPolicyId(v string) *GetPolicyInput { + s.PolicyId = &v + return s +} + +type GetPolicyOutput struct { + _ struct{} `type:"structure"` + + // Information about the specified AWS Firewall Manager policy. + Policy *Policy `type:"structure"` + + // The Amazon Resource Name (ARN) of the specified policy. 
+ PolicyArn *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPolicyOutput) GoString() string { + return s.String() +} + +// SetPolicy sets the Policy field's value. +func (s *GetPolicyOutput) SetPolicy(v *Policy) *GetPolicyOutput { + s.Policy = v + return s +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *GetPolicyOutput) SetPolicyArn(v string) *GetPolicyOutput { + s.PolicyArn = &v + return s +} + +type ListComplianceStatusInput struct { + _ struct{} `type:"structure"` + + // Specifies the number of PolicyComplianceStatus objects that you want AWS + // Firewall Manager to return for this request. If you have more PolicyComplianceStatus + // objects than the number that you specify for MaxResults, the response includes + // a NextToken value that you can use to get another batch of PolicyComplianceStatus + // objects. + MaxResults *int64 `min:"1" type:"integer"` + + // If you specify a value for MaxResults and you have more PolicyComplianceStatus + // objects than the number that you specify for MaxResults, AWS Firewall Manager + // returns a NextToken value in the response that allows you to list another + // group of PolicyComplianceStatus objects. For the second and subsequent ListComplianceStatus + // requests, specify the value of NextToken from the previous response to get + // information about another batch of PolicyComplianceStatus objects. + NextToken *string `min:"1" type:"string"` + + // The ID of the AWS Firewall Manager policy that you want the details for. + // + // PolicyId is a required field + PolicyId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListComplianceStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListComplianceStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListComplianceStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListComplianceStatusInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.PolicyId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyId")) + } + if s.PolicyId != nil && len(*s.PolicyId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("PolicyId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListComplianceStatusInput) SetMaxResults(v int64) *ListComplianceStatusInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListComplianceStatusInput) SetNextToken(v string) *ListComplianceStatusInput { + s.NextToken = &v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *ListComplianceStatusInput) SetPolicyId(v string) *ListComplianceStatusInput { + s.PolicyId = &v + return s +} + +type ListComplianceStatusOutput struct { + _ struct{} `type:"structure"` + + // If you have more PolicyComplianceStatus objects than the number that you + // specified for MaxResults in the request, the response includes a NextToken + // value. 
To list more PolicyComplianceStatus objects, submit another ListComplianceStatus + // request, and specify the NextToken value from the response in the NextToken + // value in the next request. + NextToken *string `min:"1" type:"string"` + + // An array of PolicyComplianceStatus objects. + PolicyComplianceStatusList []*PolicyComplianceStatus `type:"list"` +} + +// String returns the string representation +func (s ListComplianceStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListComplianceStatusOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListComplianceStatusOutput) SetNextToken(v string) *ListComplianceStatusOutput { + s.NextToken = &v + return s +} + +// SetPolicyComplianceStatusList sets the PolicyComplianceStatusList field's value. +func (s *ListComplianceStatusOutput) SetPolicyComplianceStatusList(v []*PolicyComplianceStatus) *ListComplianceStatusOutput { + s.PolicyComplianceStatusList = v + return s +} + +type ListMemberAccountsInput struct { + _ struct{} `type:"structure"` + + // Specifies the number of member account IDs that you want AWS Firewall Manager + // to return for this request. If you have more IDs than the number that you + // specify for MaxResults, the response includes a NextToken value that you + // can use to get another batch of member account IDs. The maximum value for + // MaxResults is 100. + MaxResults *int64 `min:"1" type:"integer"` + + // If you specify a value for MaxResults and you have more account IDs than + // the number that you specify for MaxResults, AWS Firewall Manager returns + // a NextToken value in the response that allows you to list another group of + // IDs. For the second and subsequent ListMemberAccountsRequest requests, specify + // the value of NextToken from the previous response to get information about + // another batch of member account IDs. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListMemberAccountsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListMemberAccountsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListMemberAccountsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListMemberAccountsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListMemberAccountsInput) SetMaxResults(v int64) *ListMemberAccountsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListMemberAccountsInput) SetNextToken(v string) *ListMemberAccountsInput { + s.NextToken = &v + return s +} + +type ListMemberAccountsOutput struct { + _ struct{} `type:"structure"` + + // An array of account IDs. + MemberAccounts []*string `type:"list"` + + // If you have more member account IDs than the number that you specified for + // MaxResults in the request, the response includes a NextToken value. 
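
The MaxResults/NextToken handshake described above is the same for every List operation in this file. A minimal pagination sketch for ListComplianceStatus, assuming the generated operation method on *fms.FMS and a placeholder policy ID:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/fms"
)

func main() {
	svc := fms.New(session.Must(session.NewSession()))

	input := &fms.ListComplianceStatusInput{
		PolicyId:   aws.String("00000000-0000-0000-0000-000000000000"), // placeholder
		MaxResults: aws.Int64(50),
	}
	for {
		out, err := svc.ListComplianceStatus(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, status := range out.PolicyComplianceStatusList {
			fmt.Printf("%s: %d evaluation result(s)\n",
				aws.StringValue(status.MemberAccount), len(status.EvaluationResults))
		}
		if out.NextToken == nil {
			break // no NextToken means this was the last batch
		}
		// Feed the returned token into the next request to get the next batch.
		input.NextToken = out.NextToken
	}
}
```
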
To list + // more IDs, submit another ListMemberAccounts request, and specify the NextToken + // value from the response in the NextToken value in the next request. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListMemberAccountsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListMemberAccountsOutput) GoString() string { + return s.String() +} + +// SetMemberAccounts sets the MemberAccounts field's value. +func (s *ListMemberAccountsOutput) SetMemberAccounts(v []*string) *ListMemberAccountsOutput { + s.MemberAccounts = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListMemberAccountsOutput) SetNextToken(v string) *ListMemberAccountsOutput { + s.NextToken = &v + return s +} + +type ListPoliciesInput struct { + _ struct{} `type:"structure"` + + // Specifies the number of PolicySummary objects that you want AWS Firewall + // Manager to return for this request. If you have more PolicySummary objects + // than the number that you specify for MaxResults, the response includes a + // NextToken value that you can use to get another batch of PolicySummary objects. + MaxResults *int64 `min:"1" type:"integer"` + + // If you specify a value for MaxResults and you have more PolicySummary objects + // than the number that you specify for MaxResults, AWS Firewall Manager returns + // a NextToken value in the response that allows you to list another group of + // PolicySummary objects. For the second and subsequent ListPolicies requests, + // specify the value of NextToken from the previous response to get information + // about another batch of PolicySummary objects. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListPoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPoliciesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListPoliciesInput) SetMaxResults(v int64) *ListPoliciesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListPoliciesInput) SetNextToken(v string) *ListPoliciesInput { + s.NextToken = &v + return s +} + +type ListPoliciesOutput struct { + _ struct{} `type:"structure"` + + // If you have more PolicySummary objects than the number that you specified + // for MaxResults in the request, the response includes a NextToken value. To + // list more PolicySummary objects, submit another ListPolicies request, and + // specify the NextToken value from the response in the NextToken value in the + // next request. + NextToken *string `min:"1" type:"string"` + + // An array of PolicySummary objects. 
+ PolicyList []*PolicySummary `type:"list"` +} + +// String returns the string representation +func (s ListPoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPoliciesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListPoliciesOutput) SetNextToken(v string) *ListPoliciesOutput { + s.NextToken = &v + return s +} + +// SetPolicyList sets the PolicyList field's value. +func (s *ListPoliciesOutput) SetPolicyList(v []*PolicySummary) *ListPoliciesOutput { + s.PolicyList = v + return s +} + +// An AWS Firewall Manager policy. +type Policy struct { + _ struct{} `type:"structure"` + + // Specifies the AWS account IDs to exclude from the policy. The IncludeMap + // values are evaluated first, with all of the appropriate account IDs added + // to the policy. Then the accounts listed in ExcludeMap are removed, resulting + // in the final list of accounts to add to the policy. + // + // The key to the map is ACCOUNT. For example, a valid ExcludeMap would be {“ACCOUNT” + // : [“accountID1”, “accountID2”]}. + ExcludeMap map[string][]*string `type:"map"` + + // If set to True, resources with the tags that are specified in the ResourceTag + // array are not protected by the policy. If set to False, and the ResourceTag + // array is not null, only resources with the specified tags are associated + // with the policy. + // + // ExcludeResourceTags is a required field + ExcludeResourceTags *bool `type:"boolean" required:"true"` + + // Specifies the AWS account IDs to include in the policy. If IncludeMap is + // null, all accounts in the AWS Organization are included in the policy. If + // IncludeMap is not null, only values listed in IncludeMap will be included + // in the policy. + // + // The key to the map is ACCOUNT. For example, a valid IncludeMap would be {“ACCOUNT” + // : [“accountID1”, “accountID2”]}. + IncludeMap map[string][]*string `type:"map"` + + // The ID of the AWS Firewall Manager policy. + PolicyId *string `min:"36" type:"string"` + + // The friendly name of the AWS Firewall Manager policy. + // + // PolicyName is a required field + PolicyName *string `min:"1" type:"string" required:"true"` + + // A unique identifier for each update to the policy. When issuing a PutPolicy + // request, the PolicyUpdateToken in the request must match the PolicyUpdateToken + // of the current policy version. To get the PolicyUpdateToken of the current + // policy version, use a GetPolicy request. + PolicyUpdateToken *string `min:"1" type:"string"` + + // Indicates if the policy should be automatically applied to new resources. + // + // RemediationEnabled is a required field + RemediationEnabled *bool `type:"boolean" required:"true"` + + // An array of ResourceTag objects. + ResourceTags []*ResourceTag `type:"list"` + + // The type of resource to protect with the policy, either an Application Load + // Balancer or a CloudFront distribution. This is in the format shown in AWS + // Resource Types Reference (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html). + // Valid values are AWS::ElasticLoadBalancingV2::LoadBalancer or AWS::CloudFront::Distribution. + // + // ResourceType is a required field + ResourceType *string `min:"1" type:"string" required:"true"` + + // Details about the security service that is being used to protect the resources. 
+ // + // SecurityServicePolicyData is a required field + SecurityServicePolicyData *SecurityServicePolicyData `type:"structure" required:"true"` +} + +// String returns the string representation +func (s Policy) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Policy) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Policy) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Policy"} + if s.ExcludeResourceTags == nil { + invalidParams.Add(request.NewErrParamRequired("ExcludeResourceTags")) + } + if s.PolicyId != nil && len(*s.PolicyId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("PolicyId", 36)) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.PolicyUpdateToken != nil && len(*s.PolicyUpdateToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyUpdateToken", 1)) + } + if s.RemediationEnabled == nil { + invalidParams.Add(request.NewErrParamRequired("RemediationEnabled")) + } + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) + } + if s.ResourceType != nil && len(*s.ResourceType) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceType", 1)) + } + if s.SecurityServicePolicyData == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityServicePolicyData")) + } + if s.ResourceTags != nil { + for i, v := range s.ResourceTags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ResourceTags", i), err.(request.ErrInvalidParams)) + } + } + } + if s.SecurityServicePolicyData != nil { + if err := s.SecurityServicePolicyData.Validate(); err != nil { + invalidParams.AddNested("SecurityServicePolicyData", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExcludeMap sets the ExcludeMap field's value. +func (s *Policy) SetExcludeMap(v map[string][]*string) *Policy { + s.ExcludeMap = v + return s +} + +// SetExcludeResourceTags sets the ExcludeResourceTags field's value. +func (s *Policy) SetExcludeResourceTags(v bool) *Policy { + s.ExcludeResourceTags = &v + return s +} + +// SetIncludeMap sets the IncludeMap field's value. +func (s *Policy) SetIncludeMap(v map[string][]*string) *Policy { + s.IncludeMap = v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *Policy) SetPolicyId(v string) *Policy { + s.PolicyId = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *Policy) SetPolicyName(v string) *Policy { + s.PolicyName = &v + return s +} + +// SetPolicyUpdateToken sets the PolicyUpdateToken field's value. +func (s *Policy) SetPolicyUpdateToken(v string) *Policy { + s.PolicyUpdateToken = &v + return s +} + +// SetRemediationEnabled sets the RemediationEnabled field's value. +func (s *Policy) SetRemediationEnabled(v bool) *Policy { + s.RemediationEnabled = &v + return s +} + +// SetResourceTags sets the ResourceTags field's value. +func (s *Policy) SetResourceTags(v []*ResourceTag) *Policy { + s.ResourceTags = v + return s +} + +// SetResourceType sets the ResourceType field's value. 
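
The Policy fields above describe an update protocol (PolicyUpdateToken) and a scope-map format (a single ACCOUNT key) that are easier to see in code. A sketch of fetching a policy, widening its IncludeMap, and writing it back, assuming the generated GetPolicy/PutPolicy operation methods and placeholder IDs:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/fms"
)

func main() {
	svc := fms.New(session.Must(session.NewSession()))

	// Fetch the current policy version; the returned Policy carries the
	// PolicyUpdateToken that the next PutPolicy call must present.
	get, err := svc.GetPolicy(&fms.GetPolicyInput{
		PolicyId: aws.String("00000000-0000-0000-0000-000000000000"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}

	policy := get.Policy
	// The only valid key in IncludeMap/ExcludeMap is "ACCOUNT"; the values
	// are lists of AWS account IDs (placeholders here).
	policy.IncludeMap = map[string][]*string{
		"ACCOUNT": {aws.String("111111111111"), aws.String("222222222222")},
	}

	put, err := svc.PutPolicy(&fms.PutPolicyInput{Policy: policy})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated policy:", aws.StringValue(put.PolicyArn))
}
```
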
+func (s *Policy) SetResourceType(v string) *Policy { + s.ResourceType = &v + return s +} + +// SetSecurityServicePolicyData sets the SecurityServicePolicyData field's value. +func (s *Policy) SetSecurityServicePolicyData(v *SecurityServicePolicyData) *Policy { + s.SecurityServicePolicyData = v + return s +} + +// Describes the non-compliant resources in a member account for a specific +// AWS Firewall Manager policy. A maximum of 100 entries are displayed. If more +// than 100 resources are non-compliant, EvaluationLimitExceeded is set to True. +type PolicyComplianceDetail struct { + _ struct{} `type:"structure"` + + // Indicates if over 100 resources are non-compliant with the AWS Firewall Manager + // policy. + EvaluationLimitExceeded *bool `type:"boolean"` + + // A time stamp that indicates when the returned information should be considered + // out-of-date. + ExpiredAt *time.Time `type:"timestamp"` + + // Details about problems with dependent services, such as AWS WAF or AWS Config, + // that are causing a resource to be non-compliant. The details include the + // name of the dependent service and the error message recieved indicating the + // problem with the service. + IssueInfoMap map[string]*string `type:"map"` + + // The AWS account ID. + MemberAccount *string `min:"1" type:"string"` + + // The ID of the AWS Firewall Manager policy. + PolicyId *string `min:"36" type:"string"` + + // The AWS account that created the AWS Firewall Manager policy. + PolicyOwner *string `min:"1" type:"string"` + + // An array of resources that are not protected by the policy. + Violators []*ComplianceViolator `type:"list"` +} + +// String returns the string representation +func (s PolicyComplianceDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyComplianceDetail) GoString() string { + return s.String() +} + +// SetEvaluationLimitExceeded sets the EvaluationLimitExceeded field's value. +func (s *PolicyComplianceDetail) SetEvaluationLimitExceeded(v bool) *PolicyComplianceDetail { + s.EvaluationLimitExceeded = &v + return s +} + +// SetExpiredAt sets the ExpiredAt field's value. +func (s *PolicyComplianceDetail) SetExpiredAt(v time.Time) *PolicyComplianceDetail { + s.ExpiredAt = &v + return s +} + +// SetIssueInfoMap sets the IssueInfoMap field's value. +func (s *PolicyComplianceDetail) SetIssueInfoMap(v map[string]*string) *PolicyComplianceDetail { + s.IssueInfoMap = v + return s +} + +// SetMemberAccount sets the MemberAccount field's value. +func (s *PolicyComplianceDetail) SetMemberAccount(v string) *PolicyComplianceDetail { + s.MemberAccount = &v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *PolicyComplianceDetail) SetPolicyId(v string) *PolicyComplianceDetail { + s.PolicyId = &v + return s +} + +// SetPolicyOwner sets the PolicyOwner field's value. +func (s *PolicyComplianceDetail) SetPolicyOwner(v string) *PolicyComplianceDetail { + s.PolicyOwner = &v + return s +} + +// SetViolators sets the Violators field's value. +func (s *PolicyComplianceDetail) SetViolators(v []*ComplianceViolator) *PolicyComplianceDetail { + s.Violators = v + return s +} + +// Indicates whether the account is compliant with the specified policy. An +// account is considered non-compliant if it includes resources that are not +// protected by the policy. +type PolicyComplianceStatus struct { + _ struct{} `type:"structure"` + + // An array of EvaluationResult objects. 
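
To make the PolicyComplianceDetail shape above concrete, here is a sketch that fetches compliance detail for one member account and prints the violators. The IDs are placeholders, and the ComplianceViolator fields referenced are the ones defined earlier in this file:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/fms"
)

func main() {
	svc := fms.New(session.Must(session.NewSession()))

	out, err := svc.GetComplianceDetail(&fms.GetComplianceDetailInput{
		MemberAccount: aws.String("111111111111"),                         // placeholder
		PolicyId:      aws.String("00000000-0000-0000-0000-000000000000"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}

	detail := out.PolicyComplianceDetail
	if detail == nil {
		return
	}
	if aws.BoolValue(detail.EvaluationLimitExceeded) {
		fmt.Println("more than 100 resources are non-compliant; the list is truncated")
	}
	for service, msg := range detail.IssueInfoMap {
		fmt.Printf("dependent service issue (%s): %s\n", service, aws.StringValue(msg))
	}
	for _, v := range detail.Violators {
		fmt.Printf("%s is non-compliant: %s\n",
			aws.StringValue(v.ResourceId), aws.StringValue(v.ViolationReason))
	}
}
```
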
+ EvaluationResults []*EvaluationResult `type:"list"` + + // Details about problems with dependent services, such as AWS WAF or AWS Config, + // that are causing a resource to be non-compliant. The details include the + // name of the dependent service and the error message recieved indicating the + // problem with the service. + IssueInfoMap map[string]*string `type:"map"` + + // Time stamp of the last update to the EvaluationResult objects. + LastUpdated *time.Time `type:"timestamp"` + + // The member account ID. + MemberAccount *string `min:"1" type:"string"` + + // The ID of the AWS Firewall Manager policy. + PolicyId *string `min:"36" type:"string"` + + // The friendly name of the AWS Firewall Manager policy. + PolicyName *string `min:"1" type:"string"` + + // The AWS account that created the AWS Firewall Manager policy. + PolicyOwner *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PolicyComplianceStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyComplianceStatus) GoString() string { + return s.String() +} + +// SetEvaluationResults sets the EvaluationResults field's value. +func (s *PolicyComplianceStatus) SetEvaluationResults(v []*EvaluationResult) *PolicyComplianceStatus { + s.EvaluationResults = v + return s +} + +// SetIssueInfoMap sets the IssueInfoMap field's value. +func (s *PolicyComplianceStatus) SetIssueInfoMap(v map[string]*string) *PolicyComplianceStatus { + s.IssueInfoMap = v + return s +} + +// SetLastUpdated sets the LastUpdated field's value. +func (s *PolicyComplianceStatus) SetLastUpdated(v time.Time) *PolicyComplianceStatus { + s.LastUpdated = &v + return s +} + +// SetMemberAccount sets the MemberAccount field's value. +func (s *PolicyComplianceStatus) SetMemberAccount(v string) *PolicyComplianceStatus { + s.MemberAccount = &v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *PolicyComplianceStatus) SetPolicyId(v string) *PolicyComplianceStatus { + s.PolicyId = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *PolicyComplianceStatus) SetPolicyName(v string) *PolicyComplianceStatus { + s.PolicyName = &v + return s +} + +// SetPolicyOwner sets the PolicyOwner field's value. +func (s *PolicyComplianceStatus) SetPolicyOwner(v string) *PolicyComplianceStatus { + s.PolicyOwner = &v + return s +} + +// Details of the AWS Firewall Manager policy. +type PolicySummary struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the specified policy. + PolicyArn *string `min:"1" type:"string"` + + // The ID of the specified policy. + PolicyId *string `min:"36" type:"string"` + + // The friendly name of the specified policy. + PolicyName *string `min:"1" type:"string"` + + // Indicates if the policy should be automatically applied to new resources. + RemediationEnabled *bool `type:"boolean"` + + // The type of resource to protect with the policy, either an Application Load + // Balancer or a CloudFront distribution. This is in the format shown in AWS + // Resource Types Reference (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html). + // Valid values are AWS::ElasticLoadBalancingV2::LoadBalancer or AWS::CloudFront::Distribution. + ResourceType *string `min:"1" type:"string"` + + // The service that the policy is using to protect the resources. This value + // is WAF. 
+ SecurityServiceType *string `type:"string" enum:"SecurityServiceType"` +} + +// String returns the string representation +func (s PolicySummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicySummary) GoString() string { + return s.String() +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *PolicySummary) SetPolicyArn(v string) *PolicySummary { + s.PolicyArn = &v + return s +} + +// SetPolicyId sets the PolicyId field's value. +func (s *PolicySummary) SetPolicyId(v string) *PolicySummary { + s.PolicyId = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *PolicySummary) SetPolicyName(v string) *PolicySummary { + s.PolicyName = &v + return s +} + +// SetRemediationEnabled sets the RemediationEnabled field's value. +func (s *PolicySummary) SetRemediationEnabled(v bool) *PolicySummary { + s.RemediationEnabled = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *PolicySummary) SetResourceType(v string) *PolicySummary { + s.ResourceType = &v + return s +} + +// SetSecurityServiceType sets the SecurityServiceType field's value. +func (s *PolicySummary) SetSecurityServiceType(v string) *PolicySummary { + s.SecurityServiceType = &v + return s +} + +type PutNotificationChannelInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM role that allows Amazon SNS to + // record AWS Firewall Manager activity. + // + // SnsRoleName is a required field + SnsRoleName *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the SNS topic that collects notifications + // from AWS Firewall Manager. + // + // SnsTopicArn is a required field + SnsTopicArn *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutNotificationChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutNotificationChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutNotificationChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutNotificationChannelInput"} + if s.SnsRoleName == nil { + invalidParams.Add(request.NewErrParamRequired("SnsRoleName")) + } + if s.SnsRoleName != nil && len(*s.SnsRoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SnsRoleName", 1)) + } + if s.SnsTopicArn == nil { + invalidParams.Add(request.NewErrParamRequired("SnsTopicArn")) + } + if s.SnsTopicArn != nil && len(*s.SnsTopicArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SnsTopicArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSnsRoleName sets the SnsRoleName field's value. +func (s *PutNotificationChannelInput) SetSnsRoleName(v string) *PutNotificationChannelInput { + s.SnsRoleName = &v + return s +} + +// SetSnsTopicArn sets the SnsTopicArn field's value. 
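
Both PutNotificationChannelInput fields above are ARNs, even though one is called SnsRoleName. A minimal sketch, assuming the generated PutNotificationChannel operation method and placeholder ARNs:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/fms"
)

func main() {
	svc := fms.New(session.Must(session.NewSession()))

	// Both values are placeholder ARNs; SnsRoleName takes the ARN of an IAM
	// role, despite the "Name" in the field name.
	_, err := svc.PutNotificationChannel(&fms.PutNotificationChannelInput{
		SnsTopicArn: aws.String("arn:aws:sns:us-east-1:111111111111:fms-activity"),
		SnsRoleName: aws.String("arn:aws:iam::111111111111:role/fms-sns-role"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("notification channel configured")
}
```
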
+func (s *PutNotificationChannelInput) SetSnsTopicArn(v string) *PutNotificationChannelInput { + s.SnsTopicArn = &v + return s +} + +type PutNotificationChannelOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutNotificationChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutNotificationChannelOutput) GoString() string { + return s.String() +} + +type PutPolicyInput struct { + _ struct{} `type:"structure"` + + // The details of the AWS Firewall Manager policy to be created. + // + // Policy is a required field + Policy *Policy `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PutPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutPolicyInput"} + if s.Policy == nil { + invalidParams.Add(request.NewErrParamRequired("Policy")) + } + if s.Policy != nil { + if err := s.Policy.Validate(); err != nil { + invalidParams.AddNested("Policy", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicy sets the Policy field's value. +func (s *PutPolicyInput) SetPolicy(v *Policy) *PutPolicyInput { + s.Policy = v + return s +} + +type PutPolicyOutput struct { + _ struct{} `type:"structure"` + + // The details of the AWS Firewall Manager policy that was created. + Policy *Policy `type:"structure"` + + // The Amazon Resource Name (ARN) of the policy that was created. + PolicyArn *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PutPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutPolicyOutput) GoString() string { + return s.String() +} + +// SetPolicy sets the Policy field's value. +func (s *PutPolicyOutput) SetPolicy(v *Policy) *PutPolicyOutput { + s.Policy = v + return s +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *PutPolicyOutput) SetPolicyArn(v string) *PutPolicyOutput { + s.PolicyArn = &v + return s +} + +// The resource tags that AWS Firewall Manager uses to determine if a particular +// resource should be included or excluded from protection by the AWS Firewall +// Manager policy. Tags enable you to categorize your AWS resources in different +// ways, for example, by purpose, owner, or environment. Each tag consists of +// a key and an optional value, both of which you define. Tags are combined +// with an "OR." That is, if you add more than one tag, if any of the tags matches, +// the resource is considered a match for the include or exclude. Working with +// Tag Editor (https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html). +type ResourceTag struct { + _ struct{} `type:"structure"` + + // The resource tag key. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // The resource tag value. 
+ Value *string `type:"string"` +} + +// String returns the string representation +func (s ResourceTag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceTag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResourceTag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceTag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *ResourceTag) SetKey(v string) *ResourceTag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *ResourceTag) SetValue(v string) *ResourceTag { + s.Value = &v + return s +} + +// Details about the security service that is being used to protect the resources. +type SecurityServicePolicyData struct { + _ struct{} `type:"structure"` + + // Details about the service. This contains WAF data in JSON format, as shown + // in the following example: + // + // ManagedServiceData": "{\"type\": \"WAF\", \"ruleGroups\": [{\"id\": \"12345678-1bcd-9012-efga-0987654321ab\", + // \"overrideAction\" : {\"type\": \"COUNT\"}}], \"defaultAction\": {\"type\": + // \"BLOCK\"}} + ManagedServiceData *string `min:"1" type:"string"` + + // The service that the policy is using to protect the resources. This value + // is WAF. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"SecurityServiceType"` +} + +// String returns the string representation +func (s SecurityServicePolicyData) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SecurityServicePolicyData) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SecurityServicePolicyData) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SecurityServicePolicyData"} + if s.ManagedServiceData != nil && len(*s.ManagedServiceData) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ManagedServiceData", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetManagedServiceData sets the ManagedServiceData field's value. +func (s *SecurityServicePolicyData) SetManagedServiceData(v string) *SecurityServicePolicyData { + s.ManagedServiceData = &v + return s +} + +// SetType sets the Type field's value. 
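
The ManagedServiceData example embedded in the generated comment above is not well-formed JSON as printed, so here is a sketch of a complete WAF policy built from scratch: a well-formed ManagedServiceData document, a ResourceTag scope filter, and the required top-level fields. The rule group ID (taken from the comment), tag, and names are placeholders, and the generated PutPolicy operation method is assumed:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/fms"
)

func main() {
	svc := fms.New(session.Must(session.NewSession()))

	// A well-formed ManagedServiceData document for a WAF policy: one rule
	// group in COUNT mode and a default BLOCK action. The rule group ID is a
	// placeholder.
	managedServiceData := `{"type":"WAF",` +
		`"ruleGroups":[{"id":"12345678-1bcd-9012-efga-0987654321ab","overrideAction":{"type":"COUNT"}}],` +
		`"defaultAction":{"type":"BLOCK"}}`

	policy := &fms.Policy{
		PolicyName:          aws.String("example-waf-policy"),
		ResourceType:        aws.String("AWS::ElasticLoadBalancingV2::LoadBalancer"),
		RemediationEnabled:  aws.Bool(true),
		ExcludeResourceTags: aws.Bool(false),
		// Because ExcludeResourceTags is false, only load balancers carrying
		// this tag are brought into scope.
		ResourceTags: []*fms.ResourceTag{
			{Key: aws.String("Environment"), Value: aws.String("production")},
		},
		SecurityServicePolicyData: &fms.SecurityServicePolicyData{
			Type:               aws.String(fms.SecurityServiceTypeWaf),
			ManagedServiceData: aws.String(managedServiceData),
		},
	}

	out, err := svc.PutPolicy(&fms.PutPolicyInput{Policy: policy})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created policy:", aws.StringValue(out.PolicyArn))
}
```
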
+func (s *SecurityServicePolicyData) SetType(v string) *SecurityServicePolicyData { + s.Type = &v + return s +} + +const ( + // AccountRoleStatusReady is a AccountRoleStatus enum value + AccountRoleStatusReady = "READY" + + // AccountRoleStatusCreating is a AccountRoleStatus enum value + AccountRoleStatusCreating = "CREATING" + + // AccountRoleStatusPendingDeletion is a AccountRoleStatus enum value + AccountRoleStatusPendingDeletion = "PENDING_DELETION" + + // AccountRoleStatusDeleting is a AccountRoleStatus enum value + AccountRoleStatusDeleting = "DELETING" + + // AccountRoleStatusDeleted is a AccountRoleStatus enum value + AccountRoleStatusDeleted = "DELETED" +) + +const ( + // CustomerPolicyScopeIdTypeAccount is a CustomerPolicyScopeIdType enum value + CustomerPolicyScopeIdTypeAccount = "ACCOUNT" +) + +const ( + // DependentServiceNameAwsconfig is a DependentServiceName enum value + DependentServiceNameAwsconfig = "AWSCONFIG" + + // DependentServiceNameAwswaf is a DependentServiceName enum value + DependentServiceNameAwswaf = "AWSWAF" +) + +const ( + // PolicyComplianceStatusTypeCompliant is a PolicyComplianceStatusType enum value + PolicyComplianceStatusTypeCompliant = "COMPLIANT" + + // PolicyComplianceStatusTypeNonCompliant is a PolicyComplianceStatusType enum value + PolicyComplianceStatusTypeNonCompliant = "NON_COMPLIANT" +) + +const ( + // SecurityServiceTypeWaf is a SecurityServiceType enum value + SecurityServiceTypeWaf = "WAF" +) + +const ( + // ViolationReasonWebAclMissingRuleGroup is a ViolationReason enum value + ViolationReasonWebAclMissingRuleGroup = "WEB_ACL_MISSING_RULE_GROUP" + + // ViolationReasonResourceMissingWebAcl is a ViolationReason enum value + ViolationReasonResourceMissingWebAcl = "RESOURCE_MISSING_WEB_ACL" + + // ViolationReasonResourceIncorrectWebAcl is a ViolationReason enum value + ViolationReasonResourceIncorrectWebAcl = "RESOURCE_INCORRECT_WEB_ACL" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/fms/doc.go b/vendor/github.com/aws/aws-sdk-go/service/fms/doc.go new file mode 100644 index 00000000000..baae6d87f70 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/fms/doc.go @@ -0,0 +1,31 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package fms provides the client and types for making API +// requests to Firewall Management Service. +// +// This is the AWS Firewall Manager API Reference. This guide is for developers +// who need detailed information about the AWS Firewall Manager API actions, +// data types, and errors. For detailed information about AWS Firewall Manager +// features, see the AWS Firewall Manager Developer Guide (http://docs.aws.amazon.com/waf/latest/developerguide/fms-chapter.html). +// +// See https://docs.aws.amazon.com/goto/WebAPI/fms-2018-01-01 for more information on this service. +// +// See fms package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/fms/ +// +// Using the Client +// +// To contact Firewall Management Service with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. 
+// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Firewall Management Service client FMS for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/fms/#New +package fms diff --git a/vendor/github.com/aws/aws-sdk-go/service/fms/errors.go b/vendor/github.com/aws/aws-sdk-go/service/fms/errors.go new file mode 100644 index 00000000000..c63fd3c5b80 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/fms/errors.go @@ -0,0 +1,48 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package fms + +const ( + + // ErrCodeInternalErrorException for service response error code + // "InternalErrorException". + // + // The operation failed because of a system problem, even though the request + // was valid. Retry your request. + ErrCodeInternalErrorException = "InternalErrorException" + + // ErrCodeInvalidInputException for service response error code + // "InvalidInputException". + // + // The parameters of the request were invalid. + ErrCodeInvalidInputException = "InvalidInputException" + + // ErrCodeInvalidOperationException for service response error code + // "InvalidOperationException". + // + // The operation failed because there was nothing to do. For example, you might + // have submitted an AssociateAdminAccount request, but the account ID that + // you submitted was already set as the AWS Firewall Manager administrator. + ErrCodeInvalidOperationException = "InvalidOperationException" + + // ErrCodeInvalidTypeException for service response error code + // "InvalidTypeException". + // + // The value of the Type parameter is invalid. + ErrCodeInvalidTypeException = "InvalidTypeException" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceededException". + // + // The operation exceeds a resource limit, for example, the maximum number of + // policy objects that you can create for an AWS account. For more information, + // see Firewall Manager Limits (http://docs.aws.amazon.com/waf/latest/developerguide/fms-limits.html) + // in the AWS WAF Developer Guide. + ErrCodeLimitExceededException = "LimitExceededException" + + // ErrCodeResourceNotFoundException for service response error code + // "ResourceNotFoundException". + // + // The specified resource was not found. + ErrCodeResourceNotFoundException = "ResourceNotFoundException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/fms/service.go b/vendor/github.com/aws/aws-sdk-go/service/fms/service.go new file mode 100644 index 00000000000..6103e57fd82 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/fms/service.go @@ -0,0 +1,97 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package fms + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// FMS provides the API operation methods for making requests to +// Firewall Management Service. See this package's package overview docs +// for details on the service. +// +// FMS methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. 
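
Pulling together the package documentation and the error codes above: a minimal sketch that constructs the FMS client and branches on service error codes through awserr.Error. The region and policy ID are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/fms"
)

func main() {
	// Placeholder region; configure whatever your account uses for FMS.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := fms.New(sess)

	_, err := svc.GetPolicy(&fms.GetPolicyInput{
		PolicyId: aws.String("00000000-0000-0000-0000-000000000000"), // placeholder
	})
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case fms.ErrCodeResourceNotFoundException:
			fmt.Println("no such policy:", aerr.Message())
		case fms.ErrCodeInternalErrorException:
			fmt.Println("transient service problem, retry:", aerr.Message())
		default:
			log.Fatal(aerr)
		}
	} else if err != nil {
		log.Fatal(err)
	}
}
```
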
+type FMS struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "fms" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "FMS" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the FMS client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a FMS client from just a session. +// svc := fms.New(mySession) +// +// // Create a FMS client with additional configuration +// svc := fms.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *FMS { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *FMS { + svc := &FMS{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2018-01-01", + JSONVersion: "1.1", + TargetPrefix: "AWSFMS_20180101", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a FMS operation and runs any +// custom request initialization. +func (c *FMS) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/gamelift/api.go b/vendor/github.com/aws/aws-sdk-go/service/gamelift/api.go index 0d0fb89433a..d77230940b3 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/gamelift/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/gamelift/api.go @@ -17,8 +17,8 @@ const opAcceptMatch = "AcceptMatch" // AcceptMatchRequest generates a "aws/request.Request" representing the // client's request for the AcceptMatch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -142,8 +142,8 @@ const opCreateAlias = "CreateAlias" // CreateAliasRequest generates a "aws/request.Request" representing the // client's request for the CreateAlias operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -271,8 +271,8 @@ const opCreateBuild = "CreateBuild" // CreateBuildRequest generates a "aws/request.Request" representing the // client's request for the CreateBuild operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -412,8 +412,8 @@ const opCreateFleet = "CreateFleet" // CreateFleetRequest generates a "aws/request.Request" representing the // client's request for the CreateFleet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -509,16 +509,22 @@ func (c *GameLift) CreateFleetRequest(input *CreateFleetInput) (req *request.Req // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -531,21 +537,11 @@ func (c *GameLift) CreateFleetRequest(input *CreateFleetInput) (req *request.Req // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) +// * Manage fleet actions: // -// DeleteScalingPolicy (automatic scaling) +// StartFleetActions // -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -606,8 +602,8 @@ const opCreateGameSession = "CreateGameSession" // CreateGameSessionRequest generates a "aws/request.Request" representing the // client's request for the CreateGameSession operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -778,8 +774,8 @@ const opCreateGameSessionQueue = "CreateGameSessionQueue" // CreateGameSessionQueueRequest generates a "aws/request.Request" representing the // client's request for the CreateGameSessionQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -909,8 +905,8 @@ const opCreateMatchmakingConfiguration = "CreateMatchmakingConfiguration" // CreateMatchmakingConfigurationRequest generates a "aws/request.Request" representing the // client's request for the CreateMatchmakingConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1048,8 +1044,8 @@ const opCreateMatchmakingRuleSet = "CreateMatchmakingRuleSet" // CreateMatchmakingRuleSetRequest generates a "aws/request.Request" representing the // client's request for the CreateMatchmakingRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1172,8 +1168,8 @@ const opCreatePlayerSession = "CreatePlayerSession" // CreatePlayerSessionRequest generates a "aws/request.Request" representing the // client's request for the CreatePlayerSession operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1305,8 +1301,8 @@ const opCreatePlayerSessions = "CreatePlayerSessions" // CreatePlayerSessionsRequest generates a "aws/request.Request" representing the // client's request for the CreatePlayerSessions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1439,8 +1435,8 @@ const opCreateVpcPeeringAuthorization = "CreateVpcPeeringAuthorization" // CreateVpcPeeringAuthorizationRequest generates a "aws/request.Request" representing the // client's request for the CreateVpcPeeringAuthorization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1572,8 +1568,8 @@ const opCreateVpcPeeringConnection = "CreateVpcPeeringConnection" // CreateVpcPeeringConnectionRequest generates a "aws/request.Request" representing the // client's request for the CreateVpcPeeringConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1700,8 +1696,8 @@ const opDeleteAlias = "DeleteAlias" // DeleteAliasRequest generates a "aws/request.Request" representing the // client's request for the DeleteAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1810,8 +1806,8 @@ const opDeleteBuild = "DeleteBuild" // DeleteBuildRequest generates a "aws/request.Request" representing the // client's request for the DeleteBuild operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1921,8 +1917,8 @@ const opDeleteFleet = "DeleteFleet" // DeleteFleetRequest generates a "aws/request.Request" representing the // client's request for the DeleteFleet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1975,16 +1971,22 @@ func (c *GameLift) DeleteFleetRequest(input *DeleteFleetInput) (req *request.Req // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -1997,21 +1999,11 @@ func (c *GameLift) DeleteFleetRequest(input *DeleteFleetInput) (req *request.Req // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeEC2InstanceLimits +// StartFleetActions // -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2068,8 +2060,8 @@ const opDeleteGameSessionQueue = "DeleteGameSessionQueue" // DeleteGameSessionQueueRequest generates a "aws/request.Request" representing the // client's request for the DeleteGameSessionQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2172,8 +2164,8 @@ const opDeleteMatchmakingConfiguration = "DeleteMatchmakingConfiguration" // DeleteMatchmakingConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteMatchmakingConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2282,8 +2274,8 @@ const opDeleteScalingPolicy = "DeleteScalingPolicy" // DeleteScalingPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteScalingPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2328,49 +2320,30 @@ func (c *GameLift) DeleteScalingPolicyRequest(input *DeleteScalingPolicyInput) ( // in force and removes all record of it. To delete a scaling policy, specify // both the scaling policy name and the fleet ID it is associated with. // -// Fleet-related operations include: +// To temporarily suspend scaling policies, call StopFleetActions. This operation +// suspends all policies for the fleet. // -// * CreateFleet +// Operations related to fleet capacity scaling include: // -// * ListFleets -// -// * Describe fleets: -// -// DescribeFleetAttributes -// -// DescribeFleetPortSettings +// * DescribeFleetCapacity // -// DescribeFleetUtilization +// * UpdateFleetCapacity // -// DescribeRuntimeConfiguration +// * DescribeEC2InstanceLimits // -// DescribeFleetEvents +// * Manage scaling policies: // -// * Update fleets: +// PutScalingPolicy (auto-scaling) // -// UpdateFleetAttributes +// DescribeScalingPolicies (auto-scaling) // -// UpdateFleetCapacity +// DeleteScalingPolicy (auto-scaling) // -// UpdateFleetPortSettings +// * Manage fleet actions: // -// UpdateRuntimeConfiguration +// StartFleetActions // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2422,8 +2395,8 @@ const opDeleteVpcPeeringAuthorization = "DeleteVpcPeeringAuthorization" // DeleteVpcPeeringAuthorizationRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpcPeeringAuthorization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2530,8 +2503,8 @@ const opDeleteVpcPeeringConnection = "DeleteVpcPeeringConnection" // DeleteVpcPeeringConnectionRequest generates a "aws/request.Request" representing the // client's request for the DeleteVpcPeeringConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2644,8 +2617,8 @@ const opDescribeAlias = "DescribeAlias" // DescribeAliasRequest generates a "aws/request.Request" representing the // client's request for the DescribeAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2754,8 +2727,8 @@ const opDescribeBuild = "DescribeBuild" // DescribeBuildRequest generates a "aws/request.Request" representing the // client's request for the DescribeBuild operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2859,8 +2832,8 @@ const opDescribeEC2InstanceLimits = "DescribeEC2InstanceLimits" // DescribeEC2InstanceLimitsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEC2InstanceLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
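The revised DeleteScalingPolicy documentation above reduces to a two-field request: the policy name plus the fleet it belongs to. A minimal Go sketch, assuming the input fields are `FleetId` and `Name` and using a placeholder fleet ID:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/gamelift"
)

func main() {
	svc := gamelift.New(session.Must(session.NewSession()))

	// Both the policy name and the owning fleet ID identify the policy to remove.
	// Field names and the fleet ID are assumptions for illustration.
	_, err := svc.DeleteScalingPolicy(&gamelift.DeleteScalingPolicyInput{
		FleetId: aws.String("fleet-1234"),
		Name:    aws.String("scale-on-idle"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("scaling policy deleted")
}
```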
@@ -2915,16 +2888,22 @@ func (c *GameLift) DescribeEC2InstanceLimitsRequest(input *DescribeEC2InstanceLi // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -2937,21 +2916,11 @@ func (c *GameLift) DescribeEC2InstanceLimitsRequest(input *DescribeEC2InstanceLi // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeEC2InstanceLimits +// StartFleetActions // -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2999,8 +2968,8 @@ const opDescribeFleetAttributes = "DescribeFleetAttributes" // DescribeFleetAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeFleetAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3057,16 +3026,22 @@ func (c *GameLift) DescribeFleetAttributesRequest(input *DescribeFleetAttributes // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -3079,21 +3054,11 @@ func (c *GameLift) DescribeFleetAttributesRequest(input *DescribeFleetAttributes // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: +// * Manage fleet actions: // -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) +// StartFleetActions // -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3145,8 +3110,8 @@ const opDescribeFleetCapacity = "DescribeFleetCapacity" // DescribeFleetCapacityRequest generates a "aws/request.Request" representing the // client's request for the DescribeFleetCapacity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3204,16 +3169,22 @@ func (c *GameLift) DescribeFleetCapacityRequest(input *DescribeFleetCapacityInpu // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -3226,21 +3197,11 @@ func (c *GameLift) DescribeFleetCapacityRequest(input *DescribeFleetCapacityInpu // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: +// * Manage fleet actions: // -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits +// StartFleetActions // -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3292,8 +3253,8 @@ const opDescribeFleetEvents = "DescribeFleetEvents" // DescribeFleetEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribeFleetEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3343,16 +3304,22 @@ func (c *GameLift) DescribeFleetEventsRequest(input *DescribeFleetEventsInput) ( // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -3365,21 +3332,11 @@ func (c *GameLift) DescribeFleetEventsRequest(input *DescribeFleetEventsInput) ( // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: +// * Manage fleet actions: // -// DescribeFleetCapacity +// StartFleetActions // -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3431,8 +3388,8 @@ const opDescribeFleetPortSettings = "DescribeFleetPortSettings" // DescribeFleetPortSettingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeFleetPortSettings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3484,16 +3441,22 @@ func (c *GameLift) DescribeFleetPortSettingsRequest(input *DescribeFleetPortSett // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -3506,21 +3469,11 @@ func (c *GameLift) DescribeFleetPortSettingsRequest(input *DescribeFleetPortSett // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: +// * Manage fleet actions: // -// DescribeFleetCapacity +// StartFleetActions // -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3572,8 +3525,8 @@ const opDescribeFleetUtilization = "DescribeFleetUtilization" // DescribeFleetUtilizationRequest generates a "aws/request.Request" representing the // client's request for the DescribeFleetUtilization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3629,16 +3582,22 @@ func (c *GameLift) DescribeFleetUtilizationRequest(input *DescribeFleetUtilizati // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -3651,21 +3610,11 @@ func (c *GameLift) DescribeFleetUtilizationRequest(input *DescribeFleetUtilizati // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: +// * Manage fleet actions: // -// DescribeFleetCapacity +// StartFleetActions // -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3717,8 +3666,8 @@ const opDescribeGameSessionDetails = "DescribeGameSessionDetails" // DescribeGameSessionDetailsRequest generates a "aws/request.Request" representing the // client's request for the DescribeGameSessionDetails operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3848,8 +3797,8 @@ const opDescribeGameSessionPlacement = "DescribeGameSessionPlacement" // DescribeGameSessionPlacementRequest generates a "aws/request.Request" representing the // client's request for the DescribeGameSessionPlacement operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3964,8 +3913,8 @@ const opDescribeGameSessionQueues = "DescribeGameSessionQueues" // DescribeGameSessionQueuesRequest generates a "aws/request.Request" representing the // client's request for the DescribeGameSessionQueues operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4070,8 +4019,8 @@ const opDescribeGameSessions = "DescribeGameSessions" // DescribeGameSessionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeGameSessions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4202,8 +4151,8 @@ const opDescribeInstances = "DescribeInstances" // DescribeInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4301,8 +4250,8 @@ const opDescribeMatchmaking = "DescribeMatchmaking" // DescribeMatchmakingRequest generates a "aws/request.Request" representing the // client's request for the DescribeMatchmaking operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4412,8 +4361,8 @@ const opDescribeMatchmakingConfigurations = "DescribeMatchmakingConfigurations" // DescribeMatchmakingConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeMatchmakingConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4523,8 +4472,8 @@ const opDescribeMatchmakingRuleSets = "DescribeMatchmakingRuleSets" // DescribeMatchmakingRuleSetsRequest generates a "aws/request.Request" representing the // client's request for the DescribeMatchmakingRuleSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4635,8 +4584,8 @@ const opDescribePlayerSessions = "DescribePlayerSessions" // DescribePlayerSessionsRequest generates a "aws/request.Request" representing the // client's request for the DescribePlayerSessions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4755,8 +4704,8 @@ const opDescribeRuntimeConfiguration = "DescribeRuntimeConfiguration" // DescribeRuntimeConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DescribeRuntimeConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4805,16 +4754,22 @@ func (c *GameLift) DescribeRuntimeConfigurationRequest(input *DescribeRuntimeCon // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -4827,21 +4782,11 @@ func (c *GameLift) DescribeRuntimeConfigurationRequest(input *DescribeRuntimeCon // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) +// * Manage fleet actions: // -// DeleteScalingPolicy (automatic scaling) +// StartFleetActions // -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4893,8 +4838,8 @@ const opDescribeScalingPolicies = "DescribeScalingPolicies" // DescribeScalingPoliciesRequest generates a "aws/request.Request" representing the // client's request for the DescribeScalingPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4940,49 +4885,32 @@ func (c *GameLift) DescribeScalingPoliciesRequest(input *DescribeScalingPolicies // Use the pagination parameters to retrieve results as a set of sequential // pages. If successful, set of ScalingPolicy objects is returned for the fleet. // -// Fleet-related operations include: -// -// * CreateFleet -// -// * ListFleets -// -// * Describe fleets: +// A fleet may have all of its scaling policies suspended (StopFleetActions). +// This action does not affect the status of the scaling policies, which remains +// ACTIVE. To see whether a fleet's scaling policies are in force or suspended, +// call DescribeFleetAttributes and check the stopped actions. // -// DescribeFleetAttributes +// Operations related to fleet capacity scaling include: // -// DescribeFleetPortSettings +// * DescribeFleetCapacity // -// DescribeFleetUtilization -// -// DescribeRuntimeConfiguration -// -// DescribeFleetEvents -// -// * Update fleets: -// -// UpdateFleetAttributes -// -// UpdateFleetCapacity -// -// UpdateFleetPortSettings -// -// UpdateRuntimeConfiguration +// * UpdateFleetCapacity // -// * Manage fleet capacity: +// * DescribeEC2InstanceLimits // -// DescribeFleetCapacity +// * Manage scaling policies: // -// UpdateFleetCapacity +// PutScalingPolicy (auto-scaling) // -// PutScalingPolicy (automatic scaling) +// DescribeScalingPolicies (auto-scaling) // -// DescribeScalingPolicies (automatic scaling) +// DeleteScalingPolicy (auto-scaling) // -// DeleteScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeEC2InstanceLimits +// StartFleetActions // -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5034,8 +4962,8 @@ const opDescribeVpcPeeringAuthorizations = "DescribeVpcPeeringAuthorizations" // DescribeVpcPeeringAuthorizationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcPeeringAuthorizations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5138,8 +5066,8 @@ const opDescribeVpcPeeringConnections = "DescribeVpcPeeringConnections" // DescribeVpcPeeringConnectionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeVpcPeeringConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
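The DescribeScalingPolicies documentation above mentions retrieving results as sequential pages but does not show the loop. A short sketch of paging through a fleet's policies, assuming the input exposes `FleetId`, `Limit` and `NextToken` and the output exposes `ScalingPolicies` and `NextToken`:

```go
package gameliftexamples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/gamelift"
)

// listScalingPolicies pages through all scaling policies attached to a fleet.
// Remember that returned policies stay ACTIVE even while suspended via
// StopFleetActions; check the fleet's stopped actions to know if they are in force.
func listScalingPolicies(svc *gamelift.GameLift, fleetID string) ([]*gamelift.ScalingPolicy, error) {
	var policies []*gamelift.ScalingPolicy
	var token *string
	for {
		out, err := svc.DescribeScalingPolicies(&gamelift.DescribeScalingPoliciesInput{
			FleetId:   aws.String(fleetID),
			Limit:     aws.Int64(20),
			NextToken: token,
		})
		if err != nil {
			return nil, err
		}
		policies = append(policies, out.ScalingPolicies...)
		// A missing NextToken means this was the last page.
		if out.NextToken == nil || *out.NextToken == "" {
			return policies, nil
		}
		token = out.NextToken
	}
}
```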
@@ -5251,8 +5179,8 @@ const opGetGameSessionLogUrl = "GetGameSessionLogUrl" // GetGameSessionLogUrlRequest generates a "aws/request.Request" representing the // client's request for the GetGameSessionLogUrl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5372,8 +5300,8 @@ const opGetInstanceAccess = "GetInstanceAccess" // GetInstanceAccessRequest generates a "aws/request.Request" representing the // client's request for the GetInstanceAccess operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5479,8 +5407,8 @@ const opListAliases = "ListAliases" // ListAliasesRequest generates a "aws/request.Request" representing the // client's request for the ListAliases operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5585,8 +5513,8 @@ const opListBuilds = "ListBuilds" // ListBuildsRequest generates a "aws/request.Request" representing the // client's request for the ListBuilds operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5690,8 +5618,8 @@ const opListFleets = "ListFleets" // ListFleetsRequest generates a "aws/request.Request" representing the // client's request for the ListFleets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5742,16 +5670,22 @@ func (c *GameLift) ListFleetsRequest(input *ListFleetsInput) (req *request.Reque // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -5764,21 +5698,11 @@ func (c *GameLift) ListFleetsRequest(input *ListFleetsInput) (req *request.Reque // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeScalingPolicies (automatic scaling) +// StartFleetActions // -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5830,8 +5754,8 @@ const opPutScalingPolicy = "PutScalingPolicy" // PutScalingPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutScalingPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5870,72 +5794,102 @@ func (c *GameLift) PutScalingPolicyRequest(input *PutScalingPolicyInput) (req *r // PutScalingPolicy API operation for Amazon GameLift. // -// Creates or updates a scaling policy for a fleet. An active scaling policy -// prompts Amazon GameLift to track a certain metric for a fleet and automatically -// change the fleet's capacity in specific circumstances. Each scaling policy -// contains one rule statement. Fleets can have multiple scaling policies in -// force simultaneously. -// -// A scaling policy rule statement has the following structure: +// Creates or updates a scaling policy for a fleet. Scaling policies are used +// to automatically scale a fleet's hosting capacity to meet player demand. +// An active scaling policy instructs Amazon GameLift to track a fleet metric +// and automatically change the fleet's capacity when a certain threshold is +// reached. There are two types of scaling policies: target-based and rule-based. +// Use a target-based policy to quickly and efficiently manage fleet scaling; +// this option is the most commonly used. Use rule-based policies when you need +// to exert fine-grained control over auto-scaling. +// +// Fleets can have multiple scaling policies of each type in force at the same +// time; you can have one target-based policy, one or multiple rule-based scaling +// policies, or both. We recommend caution, however, because multiple auto-scaling +// policies can have unintended consequences. +// +// You can temporarily suspend all scaling policies for a fleet by calling StopFleetActions +// with the fleet action AUTO_SCALING. To resume scaling policies, call StartFleetActions +// with the same fleet action. To stop just one scaling policy--or to permanently +// remove it, you must delete the policy with DeleteScalingPolicy. 
+// +// Learn more about how to work with auto-scaling in Set Up Fleet Automatic +// Scaling (http://docs.aws.amazon.com/gamelift/latest/developerguide/fleets-autoscaling.html). +// +// Target-based policy +// +// A target-based policy tracks a single metric: PercentAvailableGameSessions. +// This metric tells us how much of a fleet's hosting capacity is ready to host +// game sessions but is not currently in use. This is the fleet's buffer; it +// measures the additional player demand that the fleet could handle at current +// capacity. With a target-based policy, you set your ideal buffer size and +// leave it to Amazon GameLift to take whatever action is needed to maintain +// that target. +// +// For example, you might choose to maintain a 10% buffer for a fleet that has +// the capacity to host 100 simultaneous game sessions. This policy tells Amazon +// GameLift to take action whenever the fleet's available capacity falls below +// or rises above 10 game sessions. Amazon GameLift will start new instances +// or stop unused instances in order to return to the 10% buffer. +// +// To create or update a target-based policy, specify a fleet ID and name, and +// set the policy type to "TargetBased". Specify the metric to track (PercentAvailableGameSessions) +// and reference a TargetConfiguration object with your desired buffer value. +// Exclude all other parameters. On a successful request, the policy name is +// returned. The scaling policy is automatically in force as soon as it's successfully +// created. If the fleet's auto-scaling actions are temporarily suspended, the +// new policy will be in force once the fleet actions are restarted. +// +// Rule-based policy +// +// A rule-based policy tracks specified fleet metric, sets a threshold value, +// and specifies the type of action to initiate when triggered. With a rule-based +// policy, you can select from several available fleet metrics. Each policy +// specifies whether to scale up or scale down (and by how much), so you need +// one policy for each type of action. +// +// For example, a policy may make the following statement: "If the percentage +// of idle instances is greater than 20% for more than 15 minutes, then reduce +// the fleet capacity by 10%." +// +// A policy's rule statement has the following structure: // // If [MetricName] is [ComparisonOperator][Threshold] for [EvaluationPeriods] // minutes, then [ScalingAdjustmentType] to/by [ScalingAdjustment]. // -// For example, this policy: "If the number of idle instances exceeds 20 for -// more than 15 minutes, then reduce the fleet capacity by 10 instances" could -// be implemented as the following rule statement: +// To implement the example, the rule statement would look like this: // -// If [IdleInstances] is [GreaterThanOrEqualToThreshold] [20] for [15] minutes, -// then [ChangeInCapacity] by [-10]. +// If [PercentIdleInstances] is [GreaterThanThreshold][20] for [15] minutes, +// then [PercentChangeInCapacity] to/by [10]. // // To create or update a scaling policy, specify a unique combination of name -// and fleet ID, and set the rule values. All parameters for this action are -// required. If successful, the policy name is returned. Scaling policies cannot -// be suspended or made inactive. To stop enforcing a scaling policy, call DeleteScalingPolicy. 
-// -// Fleet-related operations include: -// -// * CreateFleet -// -// * ListFleets -// -// * Describe fleets: -// -// DescribeFleetAttributes -// -// DescribeFleetPortSettings -// -// DescribeFleetUtilization -// -// DescribeRuntimeConfiguration -// -// DescribeFleetEvents -// -// * Update fleets: +// and fleet ID, and set the policy type to "RuleBased". Specify the parameter +// values for a policy rule statement. On a successful request, the policy name +// is returned. Scaling policies are automatically in force as soon as they're +// successfully created. If the fleet's auto-scaling actions are temporarily +// suspended, the new policy will be in force once the fleet actions are restarted. // -// UpdateFleetAttributes +// Operations related to fleet capacity scaling include: // -// UpdateFleetCapacity +// * DescribeFleetCapacity // -// UpdateFleetPortSettings +// * UpdateFleetCapacity // -// UpdateRuntimeConfiguration +// * DescribeEC2InstanceLimits // -// * Manage fleet capacity: +// * Manage scaling policies: // -// DescribeFleetCapacity +// PutScalingPolicy (auto-scaling) // -// UpdateFleetCapacity +// DescribeScalingPolicies (auto-scaling) // -// PutScalingPolicy (automatic scaling) +// DeleteScalingPolicy (auto-scaling) // -// DescribeScalingPolicies (automatic scaling) +// * Manage fleet actions: // -// DeleteScalingPolicy (automatic scaling) +// StartFleetActions // -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5987,8 +5941,8 @@ const opRequestUploadCredentials = "RequestUploadCredentials" // RequestUploadCredentialsRequest generates a "aws/request.Request" representing the // client's request for the RequestUploadCredentials operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6085,8 +6039,8 @@ const opResolveAlias = "ResolveAlias" // ResolveAliasRequest generates a "aws/request.Request" representing the // client's request for the ResolveAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6198,8 +6152,8 @@ const opSearchGameSessions = "SearchGameSessions" // SearchGameSessionsRequest generates a "aws/request.Request" representing the // client's request for the SearchGameSessions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
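The new PutScalingPolicy documentation above spells out the target-based flow: policy type `"TargetBased"`, the `PercentAvailableGameSessions` metric, and a `TargetConfiguration` holding the desired buffer. A sketch of that request, assuming the remaining field names (`FleetId`, `Name`, `TargetValue`) and a placeholder fleet ID:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/gamelift"
)

func main() {
	svc := gamelift.New(session.Must(session.NewSession()))

	// Target-based policy: keep 10% of hosting capacity available as a buffer.
	// "TargetBased" and "PercentAvailableGameSessions" come from the doc text;
	// the fleet ID and policy name are placeholders.
	out, err := svc.PutScalingPolicy(&gamelift.PutScalingPolicyInput{
		FleetId:    aws.String("fleet-1234"),
		Name:       aws.String("maintain-10pct-buffer"),
		PolicyType: aws.String("TargetBased"),
		MetricName: aws.String("PercentAvailableGameSessions"),
		TargetConfiguration: &gamelift.TargetConfiguration{
			TargetValue: aws.Float64(10.0), // desired buffer, in percent
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// On success only the policy name is returned; the policy is in force
	// immediately unless the fleet's auto-scaling actions are suspended.
	fmt.Println("policy in force:", aws.StringValue(out.Name))
}
```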
@@ -6363,12 +6317,133 @@ func (c *GameLift) SearchGameSessionsWithContext(ctx aws.Context, input *SearchG return out, req.Send() } +const opStartFleetActions = "StartFleetActions" + +// StartFleetActionsRequest generates a "aws/request.Request" representing the +// client's request for the StartFleetActions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartFleetActions for more information on using the StartFleetActions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartFleetActionsRequest method. +// req, resp := client.StartFleetActionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01/StartFleetActions +func (c *GameLift) StartFleetActionsRequest(input *StartFleetActionsInput) (req *request.Request, output *StartFleetActionsOutput) { + op := &request.Operation{ + Name: opStartFleetActions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartFleetActionsInput{} + } + + output = &StartFleetActionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartFleetActions API operation for Amazon GameLift. +// +// Resumes activity on a fleet that was suspended with StopFleetActions. Currently, +// this operation is used to restart a fleet's auto-scaling activity. +// +// To start fleet actions, specify the fleet ID and the type of actions to restart. +// When auto-scaling fleet actions are restarted, Amazon GameLift once again +// initiates scaling events as triggered by the fleet's scaling policies. If +// actions on the fleet were never stopped, this operation will have no effect. +// You can view a fleet's stopped actions using DescribeFleetAttributes. +// +// Operations related to fleet capacity scaling include: +// +// * DescribeFleetCapacity +// +// * UpdateFleetCapacity +// +// * DescribeEC2InstanceLimits +// +// * Manage scaling policies: +// +// PutScalingPolicy (auto-scaling) +// +// DescribeScalingPolicies (auto-scaling) +// +// DeleteScalingPolicy (auto-scaling) +// +// * Manage fleet actions: +// +// StartFleetActions +// +// StopFleetActions +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon GameLift's +// API operation StartFleetActions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceException "InternalServiceException" +// The service encountered an unrecoverable internal failure while processing +// the request. Clients can retry such requests immediately or after a waiting +// period. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// One or more parameter values in the request are invalid. Correct the invalid +// parameter values before retrying. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// The client failed authentication. Clients should not retry such requests. 
+// +// * ErrCodeNotFoundException "NotFoundException" +// A service resource associated with the request could not be found. Clients +// should not retry such requests. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01/StartFleetActions +func (c *GameLift) StartFleetActions(input *StartFleetActionsInput) (*StartFleetActionsOutput, error) { + req, out := c.StartFleetActionsRequest(input) + return out, req.Send() +} + +// StartFleetActionsWithContext is the same as StartFleetActions with the addition of +// the ability to pass a context and additional request options. +// +// See StartFleetActions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *GameLift) StartFleetActionsWithContext(ctx aws.Context, input *StartFleetActionsInput, opts ...request.Option) (*StartFleetActionsOutput, error) { + req, out := c.StartFleetActionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opStartGameSessionPlacement = "StartGameSessionPlacement" // StartGameSessionPlacementRequest generates a "aws/request.Request" representing the // client's request for the StartGameSessionPlacement operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6521,8 +6596,8 @@ const opStartMatchBackfill = "StartMatchBackfill" // StartMatchBackfillRequest generates a "aws/request.Request" representing the // client's request for the StartMatchBackfill operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6649,8 +6724,8 @@ const opStartMatchmaking = "StartMatchmaking" // StartMatchmakingRequest generates a "aws/request.Request" representing the // client's request for the StartMatchmaking operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6809,12 +6884,113 @@ func (c *GameLift) StartMatchmakingWithContext(ctx aws.Context, input *StartMatc return out, req.Send() } +const opStopFleetActions = "StopFleetActions" + +// StopFleetActionsRequest generates a "aws/request.Request" representing the +// client's request for the StopFleetActions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See StopFleetActions for more information on using the StopFleetActions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopFleetActionsRequest method. +// req, resp := client.StopFleetActionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01/StopFleetActions +func (c *GameLift) StopFleetActionsRequest(input *StopFleetActionsInput) (req *request.Request, output *StopFleetActionsOutput) { + op := &request.Operation{ + Name: opStopFleetActions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopFleetActionsInput{} + } + + output = &StopFleetActionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopFleetActions API operation for Amazon GameLift. +// +// Suspends activity on a fleet. Currently, this operation is used to stop a +// fleet's auto-scaling activity. It is used to temporarily stop scaling events +// triggered by the fleet's scaling policies. The policies can be retained and +// auto-scaling activity can be restarted using StartFleetActions. You can view +// a fleet's stopped actions using DescribeFleetAttributes. +// +// To stop fleet actions, specify the fleet ID and the type of actions to suspend. +// When auto-scaling fleet actions are stopped, Amazon GameLift no longer initiates +// scaling events except to maintain the fleet's desired instances setting (FleetCapacity. +// Changes to the fleet's capacity must be done manually using UpdateFleetCapacity. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon GameLift's +// API operation StopFleetActions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceException "InternalServiceException" +// The service encountered an unrecoverable internal failure while processing +// the request. Clients can retry such requests immediately or after a waiting +// period. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// One or more parameter values in the request are invalid. Correct the invalid +// parameter values before retrying. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// The client failed authentication. Clients should not retry such requests. +// +// * ErrCodeNotFoundException "NotFoundException" +// A service resource associated with the request could not be found. Clients +// should not retry such requests. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01/StopFleetActions +func (c *GameLift) StopFleetActions(input *StopFleetActionsInput) (*StopFleetActionsOutput, error) { + req, out := c.StopFleetActionsRequest(input) + return out, req.Send() +} + +// StopFleetActionsWithContext is the same as StopFleetActions with the addition of +// the ability to pass a context and additional request options. +// +// See StopFleetActions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *GameLift) StopFleetActionsWithContext(ctx aws.Context, input *StopFleetActionsInput, opts ...request.Option) (*StopFleetActionsOutput, error) { + req, out := c.StopFleetActionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opStopGameSessionPlacement = "StopGameSessionPlacement" // StopGameSessionPlacementRequest generates a "aws/request.Request" representing the // client's request for the StopGameSessionPlacement operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6929,8 +7105,8 @@ const opStopMatchmaking = "StopMatchmaking" // StopMatchmakingRequest generates a "aws/request.Request" representing the // client's request for the StopMatchmaking operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7035,8 +7211,8 @@ const opUpdateAlias = "UpdateAlias" // UpdateAliasRequest generates a "aws/request.Request" representing the // client's request for the UpdateAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7144,8 +7320,8 @@ const opUpdateBuild = "UpdateBuild" // UpdateBuildRequest generates a "aws/request.Request" representing the // client's request for the UpdateBuild operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7251,8 +7427,8 @@ const opUpdateFleetAttributes = "UpdateFleetAttributes" // UpdateFleetAttributesRequest generates a "aws/request.Request" representing the // client's request for the UpdateFleetAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
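The StartFleetActions and StopFleetActions documentation above describes suspending and resuming auto-scaling with the `AUTO_SCALING` action and checking the result through DescribeFleetAttributes. A sketch under the assumption that the inputs take `FleetId`/`Actions` and that fleet attributes expose a `StoppedActions` field:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/gamelift"
)

func main() {
	svc := gamelift.New(session.Must(session.NewSession()))
	fleetID := aws.String("fleet-1234") // placeholder fleet ID

	// Suspend auto-scaling: policies stay ACTIVE but are not enforced.
	_, err := svc.StopFleetActions(&gamelift.StopFleetActionsInput{
		FleetId: fleetID,
		Actions: []*string{aws.String("AUTO_SCALING")},
	})
	if err != nil {
		log.Fatal(err)
	}

	// The suspended action types are reported on the fleet's attributes.
	attrs, err := svc.DescribeFleetAttributes(&gamelift.DescribeFleetAttributesInput{
		FleetIds: []*string{fleetID},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, fa := range attrs.FleetAttributes {
		fmt.Println("stopped actions:", aws.StringValueSlice(fa.StoppedActions))
	}

	// Resume the same action type when scaling should take effect again.
	if _, err := svc.StartFleetActions(&gamelift.StartFleetActionsInput{
		FleetId: fleetID,
		Actions: []*string{aws.String("AUTO_SCALING")},
	}); err != nil {
		log.Fatal(err)
	}
}
```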
@@ -7301,16 +7477,22 @@ func (c *GameLift) UpdateFleetAttributesRequest(input *UpdateFleetAttributesInpu // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -7323,21 +7505,11 @@ func (c *GameLift) UpdateFleetAttributesRequest(input *UpdateFleetAttributesInpu // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeScalingPolicies (automatic scaling) +// StartFleetActions // -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -7403,8 +7575,8 @@ const opUpdateFleetCapacity = "UpdateFleetCapacity" // UpdateFleetCapacityRequest generates a "aws/request.Request" representing the // client's request for the UpdateFleetCapacity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7448,9 +7620,10 @@ func (c *GameLift) UpdateFleetCapacityRequest(input *UpdateFleetCapacityInput) ( // this action, you may want to call DescribeEC2InstanceLimits to get the maximum // capacity based on the fleet's EC2 instance type. // -// If you're using autoscaling (see PutScalingPolicy), you may want to specify -// a minimum and/or maximum capacity. If you don't provide these, autoscaling -// can set capacity anywhere between zero and the service limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_gamelift). +// Specify minimum and maximum number of instances. Amazon GameLift will not +// change fleet capacity to values fall outside of this range. This is particularly +// important when using auto-scaling (see PutScalingPolicy) to allow capacity +// to adjust based on player demand while imposing limits on automatic adjustments. // // To update fleet capacity, specify the fleet ID and the number of instances // you want the fleet to host. 
If successful, Amazon GameLift starts or terminates @@ -7465,16 +7638,22 @@ func (c *GameLift) UpdateFleetCapacityRequest(input *UpdateFleetCapacityInput) ( // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -7487,21 +7666,11 @@ func (c *GameLift) UpdateFleetCapacityRequest(input *UpdateFleetCapacityInput) ( // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeScalingPolicies (automatic scaling) +// StartFleetActions // -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -7567,8 +7736,8 @@ const opUpdateFleetPortSettings = "UpdateFleetPortSettings" // UpdateFleetPortSettingsRequest generates a "aws/request.Request" representing the // client's request for the UpdateFleetPortSettings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7620,16 +7789,22 @@ func (c *GameLift) UpdateFleetPortSettingsRequest(input *UpdateFleetPortSettings // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -7642,21 +7817,11 @@ func (c *GameLift) UpdateFleetPortSettingsRequest(input *UpdateFleetPortSettings // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeScalingPolicies (automatic scaling) +// StartFleetActions // -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -7722,8 +7887,8 @@ const opUpdateGameSession = "UpdateGameSession" // UpdateGameSessionRequest generates a "aws/request.Request" representing the // client's request for the UpdateGameSession operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
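The reworded UpdateFleetCapacity documentation above asks callers to set an explicit minimum and maximum alongside the desired instance count, so that auto-scaling adjusts to player demand only within those bounds. A sketch assuming the input fields are `DesiredInstances`, `MinSize` and `MaxSize`:

```go
package gameliftexamples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/gamelift"
)

// setFleetCapacity pins the fleet's desired instance count while bounding
// what automatic scaling may do, per the revised UpdateFleetCapacity doc.
// Field names and values are illustrative assumptions.
func setFleetCapacity(svc *gamelift.GameLift, fleetID string) error {
	_, err := svc.UpdateFleetCapacity(&gamelift.UpdateFleetCapacityInput{
		FleetId:          aws.String(fleetID),
		DesiredInstances: aws.Int64(10), // instances GameLift should run now
		MinSize:          aws.Int64(5),  // floor for automatic scale-down
		MaxSize:          aws.Int64(20), // ceiling for automatic scale-up
	})
	return err
}
```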
@@ -7852,8 +8017,8 @@ const opUpdateGameSessionQueue = "UpdateGameSessionQueue" // UpdateGameSessionQueueRequest generates a "aws/request.Request" representing the // client's request for the UpdateGameSessionQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7957,8 +8122,8 @@ const opUpdateMatchmakingConfiguration = "UpdateMatchmakingConfiguration" // UpdateMatchmakingConfigurationRequest generates a "aws/request.Request" representing the // client's request for the UpdateMatchmakingConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8066,8 +8231,8 @@ const opUpdateRuntimeConfiguration = "UpdateRuntimeConfiguration" // UpdateRuntimeConfigurationRequest generates a "aws/request.Request" representing the // client's request for the UpdateRuntimeConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8129,16 +8294,22 @@ func (c *GameLift) UpdateRuntimeConfigurationRequest(input *UpdateRuntimeConfigu // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -8151,21 +8322,11 @@ func (c *GameLift) UpdateRuntimeConfigurationRequest(input *UpdateRuntimeConfigu // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) +// * Manage fleet actions: // -// DeleteScalingPolicy (automatic scaling) +// StartFleetActions // -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -8222,8 +8383,8 @@ const opValidateMatchmakingRuleSet = "ValidateMatchmakingRuleSet" // ValidateMatchmakingRuleSetRequest generates a "aws/request.Request" representing the // client's request for the ValidateMatchmakingRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -8436,14 +8597,14 @@ type Alias struct { // Time stamp indicating when this data object was created. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Human-readable description of an alias. Description *string `type:"string"` // Time stamp indicating when this data object was last modified. Format is // a number expressed in Unix time as milliseconds (for example "1469498468.057"). - LastUpdatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdatedTime *time.Time `type:"timestamp"` // Descriptive label that is associated with an alias. Alias names do not need // to be unique. @@ -8641,7 +8802,7 @@ type Build struct { // Time stamp indicating when this data object was created. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Descriptive label that is associated with a build. Build names do not need // to be unique. It can be set using CreateBuild or UpdateBuild. @@ -11115,7 +11276,7 @@ type DescribeFleetEventsInput struct { // Most recent date to retrieve event logs for. If no end time is specified, // this call returns entries from the specified start time up to the present. // Format is a number expressed in Unix time as milliseconds (ex: "1469498468.057"). - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // Unique identifier for a fleet to get event logs for. // @@ -11135,7 +11296,7 @@ type DescribeFleetEventsInput struct { // this call returns entries starting from when the fleet was created to the // specified end time. Format is a number expressed in Unix time as milliseconds // (ex: "1469498468.057"). - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -12702,16 +12863,22 @@ func (s *DesiredPlayerSession) SetPlayerId(v string) *DesiredPlayerSession { // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -12724,21 +12891,11 @@ func (s *DesiredPlayerSession) SetPlayerId(v string) *DesiredPlayerSession { // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeScalingPolicies (automatic scaling) +// StartFleetActions // -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions type EC2InstanceCounts struct { _ struct{} `type:"structure"` @@ -12974,7 +13131,7 @@ type Event struct { // Time stamp indicating when this event occurred. Format is a number expressed // in Unix time as milliseconds (for example "1469498468.057"). - EventTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EventTime *time.Time `type:"timestamp"` // Additional information related to the event. 
Message *string `min:"1" type:"string"` @@ -13042,16 +13199,22 @@ func (s *Event) SetResourceId(v string) *Event { // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -13064,21 +13227,11 @@ func (s *Event) SetResourceId(v string) *Event { // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeEC2InstanceLimits +// StartFleetActions // -// * DeleteFleet +// StopFleetActions type FleetAttributes struct { _ struct{} `type:"structure"` @@ -13087,7 +13240,7 @@ type FleetAttributes struct { // Time stamp indicating when this data object was created. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Human-readable description of the fleet. Description *string `min:"1" type:"string"` @@ -13178,9 +13331,13 @@ type FleetAttributes struct { // * TERMINATED -- The fleet no longer exists. Status *string `type:"string" enum:"FleetStatus"` + // List of fleet actions that have been suspended using StopFleetActions. This + // includes auto-scaling. + StoppedActions []*string `min:"1" type:"list"` + // Time stamp indicating when this data object was terminated. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - TerminationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + TerminationTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -13289,6 +13446,12 @@ func (s *FleetAttributes) SetStatus(v string) *FleetAttributes { return s } +// SetStoppedActions sets the StoppedActions field's value. +func (s *FleetAttributes) SetStoppedActions(v []*string) *FleetAttributes { + s.StoppedActions = v + return s +} + // SetTerminationTime sets the TerminationTime field's value. 
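A quick way to see the new `StoppedActions` field on `FleetAttributes` in practice is to describe the fleet and inspect it. A hypothetical sketch, not part of the patch: `DescribeFleetAttributes` is only referenced (not defined) in this hunk, its input/output shapes are assumed, and the fleet ID is a placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/gamelift"
)

func main() {
	svc := gamelift.New(session.Must(session.NewSession()))

	// "fleet-123" is a placeholder fleet ID.
	out, err := svc.DescribeFleetAttributes(&gamelift.DescribeFleetAttributesInput{
		FleetIds: []*string{aws.String("fleet-123")},
	})
	if err != nil {
		log.Fatalf("DescribeFleetAttributes: %v", err)
	}
	for _, fa := range out.FleetAttributes {
		// StoppedActions lists actions suspended via StopFleetActions
		// (e.g. AUTO_SCALING); an empty list means nothing is suspended.
		fmt.Printf("fleet %s suspended actions: %v\n",
			aws.StringValue(fa.FleetId), aws.StringValueSlice(fa.StoppedActions))
	}
}
```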
func (s *FleetAttributes) SetTerminationTime(v time.Time) *FleetAttributes { s.TerminationTime = &v @@ -13306,16 +13469,22 @@ func (s *FleetAttributes) SetTerminationTime(v time.Time) *FleetAttributes { // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -13328,21 +13497,11 @@ func (s *FleetAttributes) SetTerminationTime(v time.Time) *FleetAttributes { // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity +// * Manage fleet actions: // -// PutScalingPolicy (automatic scaling) +// StartFleetActions // -// DescribeScalingPolicies (automatic scaling) -// -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions type FleetCapacity struct { _ struct{} `type:"structure"` @@ -13397,16 +13556,22 @@ func (s *FleetCapacity) SetInstanceType(v string) *FleetCapacity { // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -13419,21 +13584,11 @@ func (s *FleetCapacity) SetInstanceType(v string) *FleetCapacity { // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) +// * Manage fleet actions: // -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits +// StartFleetActions // -// * DeleteFleet +// StopFleetActions type FleetUtilization struct { _ struct{} `type:"structure"` @@ -13591,7 +13746,7 @@ type GameSession struct { // Time stamp indicating when this data object was created. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Unique identifier for a player. This ID is used to enforce a resource protection // policy (if one exists), that limits the number of game sessions a player @@ -13625,7 +13780,7 @@ type GameSession struct { IpAddress *string `type:"string"` // Information about the matchmaking process that was used to create the game - // session. It is in JSON syntax, formated as a string. In addition the matchmaking + // session. It is in JSON syntax, formatted as a string. In addition the matchmaking // configuration used, it contains data on all players assigned to the match, // including player attributes and team assignments. For more details on matchmaker // data, see Match Data (http://docs.aws.amazon.com/gamelift/latest/developerguide/match-server.html#match-server-data). @@ -13659,7 +13814,7 @@ type GameSession struct { // Time stamp indicating when this data object was terminated. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). 
- TerminationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + TerminationTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -13883,7 +14038,7 @@ type GameSessionPlacement struct { // Time stamp indicating when this request was completed, canceled, or timed // out. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // Set of custom properties for a game session, formatted as key:value pairs. // These properties are passed to a game server process in the GameSession object @@ -13924,7 +14079,7 @@ type GameSessionPlacement struct { IpAddress *string `type:"string"` // Information on the matchmaking process for this game. Data is in JSON syntax, - // formated as a string. It identifies the matchmaking configuration used to + // formatted as a string. It identifies the matchmaking configuration used to // create the match, and contains data on all players assigned to the match, // including player attributes and team assignments. For more details on matchmaker // data, see Match Data (http://docs.aws.amazon.com/gamelift/latest/developerguide/match-server.html#match-server-data). @@ -13956,7 +14111,7 @@ type GameSessionPlacement struct { // Time stamp indicating when this request was placed in the queue. Format is // a number expressed in Unix time as milliseconds (for example "1469498468.057"). - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // Current status of the game session placement request. // @@ -14391,7 +14546,7 @@ type Instance struct { // Time stamp indicating when this data object was created. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Unique identifier for a fleet that the instance is in. FleetId *string `type:"string"` @@ -15058,7 +15213,7 @@ type MatchmakingConfiguration struct { // Time stamp indicating when this data object was created. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Information to attached to all events related to the matchmaking configuration. CustomEventData *string `type:"string"` @@ -15232,7 +15387,7 @@ type MatchmakingRuleSet struct { // Time stamp indicating when this data object was created. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Collection of matchmaking rules, formatted as a JSON string. (Note that comments14 // are not allowed in JSON, but most elements support a description field.) @@ -15287,7 +15442,7 @@ type MatchmakingTicket struct { // Time stamp indicating when this matchmaking request stopped being processed // due to success, failure, or cancellation. Format is a number expressed in // Unix time as milliseconds (for example "1469498468.057"). - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // Average amount of time (in seconds) that players are currently waiting for // a match. If there is not enough recent data, this property may be empty. 
@@ -15306,7 +15461,7 @@ type MatchmakingTicket struct { // Time stamp indicating when this matchmaking request was received. Format // is a number expressed in Unix time as milliseconds (for example "1469498468.057"). - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // Current status of the matchmaking request. // @@ -15710,7 +15865,7 @@ type PlayerSession struct { // Time stamp indicating when this data object was created. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Unique identifier for a fleet that the player's game session is running on. FleetId *string `type:"string"` @@ -15755,7 +15910,7 @@ type PlayerSession struct { // Time stamp indicating when this data object was terminated. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - TerminationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + TerminationTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -15834,41 +15989,54 @@ type PutScalingPolicyInput struct { // Comparison operator to use when measuring the metric against the threshold // value. - // - // ComparisonOperator is a required field - ComparisonOperator *string `type:"string" required:"true" enum:"ComparisonOperatorType"` + ComparisonOperator *string `type:"string" enum:"ComparisonOperatorType"` // Length of time (in minutes) the metric must be at or beyond the threshold // before a scaling event is triggered. - // - // EvaluationPeriods is a required field - EvaluationPeriods *int64 `min:"1" type:"integer" required:"true"` + EvaluationPeriods *int64 `min:"1" type:"integer"` - // Unique identifier for a fleet to apply this policy to. + // Unique identifier for a fleet to apply this policy to. The fleet cannot be + // in any of the following statuses: ERROR or DELETING. // // FleetId is a required field FleetId *string `type:"string" required:"true"` - // Name of the Amazon GameLift-defined metric that is used to trigger an adjustment. + // Name of the Amazon GameLift-defined metric that is used to trigger a scaling + // adjustment. For detailed descriptions of fleet metrics, see Monitor Amazon + // GameLift with Amazon CloudWatch (http://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html). + // + // * ActivatingGameSessions -- Game sessions in the process of being created. + // + // * ActiveGameSessions -- Game sessions that are currently running. + // + // * ActiveInstances -- Fleet instances that are currently running at least + // one game session. // - // * ActivatingGameSessions -- number of game sessions in the process of - // being created (game session status = ACTIVATING). + // * AvailableGameSessions -- Additional game sessions that fleet could host + // simultaneously, given current capacity. // - // * ActiveGameSessions -- number of game sessions currently running (game - // session status = ACTIVE). + // * AvailablePlayerSessions -- Empty player slots in currently active game + // sessions. This includes game sessions that are not currently accepting + // players. Reserved player slots are not included. // - // * CurrentPlayerSessions -- number of active or reserved player sessions - // (player session status = ACTIVE or RESERVED). 
+ // * CurrentPlayerSessions -- Player slots in active game sessions that are + // being used by a player or are reserved for a player. // - // * AvailablePlayerSessions -- number of player session slots currently - // available in active game sessions across the fleet, calculated by subtracting - // a game session's current player session count from its maximum player - // session count. This number includes game sessions that are not currently - // accepting players (game session PlayerSessionCreationPolicy = DENY_ALL). + // * IdleInstances -- Active instances that are currently hosting zero game + // sessions. // - // * ActiveInstances -- number of instances currently running a game session. + // * PercentAvailableGameSessions -- Unused percentage of the total number + // of game sessions that a fleet could host simultaneously, given current + // capacity. Use this metric for a target-based scaling policy. // - // * IdleInstances -- number of instances not currently running a game session. + // * PercentIdleInstances -- Percentage of the total number of active instances + // that are hosting zero game sessions. + // + // * QueueDepth -- Pending game session placement requests, in any queue, + // where the current fleet is the top-priority destination. + // + // * WaitTime -- Current wait time for pending game session placement requests, + // in any queue, where the current fleet is the top-priority destination. // // MetricName is a required field MetricName *string `type:"string" required:"true" enum:"MetricName"` @@ -15880,10 +16048,14 @@ type PutScalingPolicyInput struct { // Name is a required field Name *string `min:"1" type:"string" required:"true"` + // Type of scaling policy to create. For a target-based policy, set the parameter + // MetricName to 'PercentAvailableGameSessions' and specify a TargetConfiguration. + // For a rule-based policy set the following parameters: MetricName, ComparisonOperator, + // Threshold, EvaluationPeriods, ScalingAdjustmentType, and ScalingAdjustment. + PolicyType *string `type:"string" enum:"PolicyType"` + // Amount of adjustment to make, based on the scaling adjustment type. - // - // ScalingAdjustment is a required field - ScalingAdjustment *int64 `type:"integer" required:"true"` + ScalingAdjustment *int64 `type:"integer"` // Type of adjustment to make to a fleet's instance count (see FleetCapacity): // @@ -15897,14 +16069,13 @@ type PutScalingPolicyInput struct { // by the scaling adjustment, read as a percentage. Positive values scale // up while negative values scale down; for example, a value of "-10" scales // the fleet down by 10%. - // - // ScalingAdjustmentType is a required field - ScalingAdjustmentType *string `type:"string" required:"true" enum:"ScalingAdjustmentType"` + ScalingAdjustmentType *string `type:"string" enum:"ScalingAdjustmentType"` + + // Object that contains settings for a target-based scaling policy. + TargetConfiguration *TargetConfiguration `type:"structure"` // Metric value used to trigger a scaling event. - // - // Threshold is a required field - Threshold *float64 `type:"double" required:"true"` + Threshold *float64 `type:"double"` } // String returns the string representation @@ -15920,12 +16091,6 @@ func (s PutScalingPolicyInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
func (s *PutScalingPolicyInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "PutScalingPolicyInput"} - if s.ComparisonOperator == nil { - invalidParams.Add(request.NewErrParamRequired("ComparisonOperator")) - } - if s.EvaluationPeriods == nil { - invalidParams.Add(request.NewErrParamRequired("EvaluationPeriods")) - } if s.EvaluationPeriods != nil && *s.EvaluationPeriods < 1 { invalidParams.Add(request.NewErrParamMinValue("EvaluationPeriods", 1)) } @@ -15941,14 +16106,10 @@ func (s *PutScalingPolicyInput) Validate() error { if s.Name != nil && len(*s.Name) < 1 { invalidParams.Add(request.NewErrParamMinLen("Name", 1)) } - if s.ScalingAdjustment == nil { - invalidParams.Add(request.NewErrParamRequired("ScalingAdjustment")) - } - if s.ScalingAdjustmentType == nil { - invalidParams.Add(request.NewErrParamRequired("ScalingAdjustmentType")) - } - if s.Threshold == nil { - invalidParams.Add(request.NewErrParamRequired("Threshold")) + if s.TargetConfiguration != nil { + if err := s.TargetConfiguration.Validate(); err != nil { + invalidParams.AddNested("TargetConfiguration", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -15987,6 +16148,12 @@ func (s *PutScalingPolicyInput) SetName(v string) *PutScalingPolicyInput { return s } +// SetPolicyType sets the PolicyType field's value. +func (s *PutScalingPolicyInput) SetPolicyType(v string) *PutScalingPolicyInput { + s.PolicyType = &v + return s +} + // SetScalingAdjustment sets the ScalingAdjustment field's value. func (s *PutScalingPolicyInput) SetScalingAdjustment(v int64) *PutScalingPolicyInput { s.ScalingAdjustment = &v @@ -15999,6 +16166,12 @@ func (s *PutScalingPolicyInput) SetScalingAdjustmentType(v string) *PutScalingPo return s } +// SetTargetConfiguration sets the TargetConfiguration field's value. +func (s *PutScalingPolicyInput) SetTargetConfiguration(v *TargetConfiguration) *PutScalingPolicyInput { + s.TargetConfiguration = v + return s +} + // SetThreshold sets the Threshold field's value. 
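With `ComparisonOperator`, `EvaluationPeriods`, `ScalingAdjustment`, `ScalingAdjustmentType`, and `Threshold` no longer required, a target-based policy needs only the fields that the new `Validate` logic and struct definition above call for. A minimal sketch, not part of the patch: the fleet ID is a placeholder, and the metric name is passed as its documented string value rather than an SDK constant, which is not shown in this hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/gamelift"
)

func main() {
	svc := gamelift.New(session.Must(session.NewSession()))

	// Target-based policy: MetricName must be PercentAvailableGameSessions
	// and a TargetConfiguration supplies the target value; the rule-based
	// fields (ComparisonOperator, Threshold, ...) are simply omitted.
	_, err := svc.PutScalingPolicy(&gamelift.PutScalingPolicyInput{
		Name:       aws.String("keep-ten-percent-free"),
		FleetId:    aws.String("fleet-123"), // placeholder fleet ID
		MetricName: aws.String("PercentAvailableGameSessions"),
		PolicyType: aws.String(gamelift.PolicyTypeTargetBased),
		TargetConfiguration: &gamelift.TargetConfiguration{
			// Keep roughly 10% of game session capacity idle as a buffer.
			TargetValue: aws.Float64(10.0),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("scaling policy created or updated")
}
```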
func (s *PutScalingPolicyInput) SetThreshold(v float64) *PutScalingPolicyInput { s.Threshold = &v @@ -16219,16 +16392,22 @@ func (s *ResourceCreationLimitPolicy) SetPolicyPeriodInMinutes(v int64) *Resourc // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -16241,21 +16420,11 @@ func (s *ResourceCreationLimitPolicy) SetPolicyPeriodInMinutes(v int64) *Resourc // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeScalingPolicies (automatic scaling) +// StartFleetActions // -// DeleteScalingPolicy (automatic scaling) -// -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions type RoutingStrategy struct { _ struct{} `type:"structure"` @@ -16334,16 +16503,22 @@ func (s *RoutingStrategy) SetType(v string) *RoutingStrategy { // // * ListFleets // +// * DeleteFleet +// // * Describe fleets: // // DescribeFleetAttributes // +// DescribeFleetCapacity +// // DescribeFleetPortSettings // // DescribeFleetUtilization // // DescribeRuntimeConfiguration // +// DescribeEC2InstanceLimits +// // DescribeFleetEvents // // * Update fleets: @@ -16356,21 +16531,11 @@ func (s *RoutingStrategy) SetType(v string) *RoutingStrategy { // // UpdateRuntimeConfiguration // -// * Manage fleet capacity: -// -// DescribeFleetCapacity -// -// UpdateFleetCapacity -// -// PutScalingPolicy (automatic scaling) -// -// DescribeScalingPolicies (automatic scaling) +// * Manage fleet actions: // -// DeleteScalingPolicy (automatic scaling) +// StartFleetActions // -// DescribeEC2InstanceLimits -// -// * DeleteFleet +// StopFleetActions type RuntimeConfiguration struct { _ struct{} `type:"structure"` @@ -16514,49 +16679,27 @@ func (s *S3Location) SetRoleArn(v string) *S3Location { // Rule that controls how a fleet is scaled. Scaling policies are uniquely identified // by the combination of name and fleet ID. // -// Fleet-related operations include: -// -// * CreateFleet -// -// * ListFleets -// -// * Describe fleets: -// -// DescribeFleetAttributes -// -// DescribeFleetPortSettings -// -// DescribeFleetUtilization -// -// DescribeRuntimeConfiguration -// -// DescribeFleetEvents -// -// * Update fleets: -// -// UpdateFleetAttributes -// -// UpdateFleetCapacity +// Operations related to fleet capacity scaling include: // -// UpdateFleetPortSettings +// * DescribeFleetCapacity // -// UpdateRuntimeConfiguration +// * UpdateFleetCapacity // -// * Manage fleet capacity: +// * DescribeEC2InstanceLimits // -// DescribeFleetCapacity +// * Manage scaling policies: // -// UpdateFleetCapacity +// PutScalingPolicy (auto-scaling) // -// PutScalingPolicy (automatic scaling) +// DescribeScalingPolicies (auto-scaling) // -// DescribeScalingPolicies (automatic scaling) +// DeleteScalingPolicy (auto-scaling) // -// DeleteScalingPolicy (automatic scaling) +// * Manage fleet actions: // -// DescribeEC2InstanceLimits +// StartFleetActions // -// * DeleteFleet +// StopFleetActions type ScalingPolicy struct { _ struct{} `type:"structure"` @@ -16571,32 +16714,54 @@ type ScalingPolicy struct { // Unique identifier for a fleet that is associated with this scaling policy. 
FleetId *string `type:"string"` - // Name of the Amazon GameLift-defined metric that is used to trigger an adjustment. + // Name of the Amazon GameLift-defined metric that is used to trigger a scaling + // adjustment. For detailed descriptions of fleet metrics, see Monitor Amazon + // GameLift with Amazon CloudWatch (http://docs.aws.amazon.com/gamelift/latest/developerguide/monitoring-cloudwatch.html). + // + // * ActivatingGameSessions -- Game sessions in the process of being created. + // + // * ActiveGameSessions -- Game sessions that are currently running. + // + // * ActiveInstances -- Fleet instances that are currently running at least + // one game session. + // + // * AvailableGameSessions -- Additional game sessions that fleet could host + // simultaneously, given current capacity. // - // * ActivatingGameSessions -- number of game sessions in the process of - // being created (game session status = ACTIVATING). + // * AvailablePlayerSessions -- Empty player slots in currently active game + // sessions. This includes game sessions that are not currently accepting + // players. Reserved player slots are not included. // - // * ActiveGameSessions -- number of game sessions currently running (game - // session status = ACTIVE). + // * CurrentPlayerSessions -- Player slots in active game sessions that are + // being used by a player or are reserved for a player. // - // * CurrentPlayerSessions -- number of active or reserved player sessions - // (player session status = ACTIVE or RESERVED). + // * IdleInstances -- Active instances that are currently hosting zero game + // sessions. // - // * AvailablePlayerSessions -- number of player session slots currently - // available in active game sessions across the fleet, calculated by subtracting - // a game session's current player session count from its maximum player - // session count. This number does include game sessions that are not currently - // accepting players (game session PlayerSessionCreationPolicy = DENY_ALL). + // * PercentAvailableGameSessions -- Unused percentage of the total number + // of game sessions that a fleet could host simultaneously, given current + // capacity. Use this metric for a target-based scaling policy. // - // * ActiveInstances -- number of instances currently running a game session. + // * PercentIdleInstances -- Percentage of the total number of active instances + // that are hosting zero game sessions. // - // * IdleInstances -- number of instances not currently running a game session. + // * QueueDepth -- Pending game session placement requests, in any queue, + // where the current fleet is the top-priority destination. + // + // * WaitTime -- Current wait time for pending game session placement requests, + // in any queue, where the current fleet is the top-priority destination. MetricName *string `type:"string" enum:"MetricName"` // Descriptive label that is associated with a scaling policy. Policy names // do not need to be unique. Name *string `min:"1" type:"string"` + // Type of scaling policy to create. For a target-based policy, set the parameter + // MetricName to 'PercentAvailableGameSessions' and specify a TargetConfiguration. + // For a rule-based policy set the following parameters: MetricName, ComparisonOperator, + // Threshold, EvaluationPeriods, ScalingAdjustmentType, and ScalingAdjustment. + PolicyType *string `type:"string" enum:"PolicyType"` + // Amount of adjustment to make, based on the scaling adjustment type. 
ScalingAdjustment *int64 `type:"integer"` @@ -16613,10 +16778,12 @@ type ScalingPolicy struct { // up while negative values scale down. ScalingAdjustmentType *string `type:"string" enum:"ScalingAdjustmentType"` - // Current status of the scaling policy. The scaling policy is only in force - // when in an ACTIVE status. + // Current status of the scaling policy. The scaling policy can be in force + // only when in an ACTIVE status. Scaling policies can be suspended for individual + // fleets (see StopFleetActions; if suspended for a fleet, the policy status + // does not change. View a fleet's stopped actions by calling DescribeFleetCapacity. // - // * ACTIVE -- The scaling policy is currently in force. + // * ACTIVE -- The scaling policy can be used for auto-scaling a fleet. // // * UPDATE_REQUESTED -- A request to update the scaling policy has been // received. @@ -16634,6 +16801,9 @@ type ScalingPolicy struct { // and recreated. Status *string `type:"string" enum:"ScalingStatusType"` + // Object that contains settings for a target-based scaling policy. + TargetConfiguration *TargetConfiguration `type:"structure"` + // Metric value used to trigger a scaling event. Threshold *float64 `type:"double"` } @@ -16678,6 +16848,12 @@ func (s *ScalingPolicy) SetName(v string) *ScalingPolicy { return s } +// SetPolicyType sets the PolicyType field's value. +func (s *ScalingPolicy) SetPolicyType(v string) *ScalingPolicy { + s.PolicyType = &v + return s +} + // SetScalingAdjustment sets the ScalingAdjustment field's value. func (s *ScalingPolicy) SetScalingAdjustment(v int64) *ScalingPolicy { s.ScalingAdjustment = &v @@ -16696,6 +16872,12 @@ func (s *ScalingPolicy) SetStatus(v string) *ScalingPolicy { return s } +// SetTargetConfiguration sets the TargetConfiguration field's value. +func (s *ScalingPolicy) SetTargetConfiguration(v *TargetConfiguration) *ScalingPolicy { + s.TargetConfiguration = v + return s +} + // SetThreshold sets the Threshold field's value. func (s *ScalingPolicy) SetThreshold(v float64) *ScalingPolicy { s.Threshold = &v @@ -16967,6 +17149,75 @@ func (s *ServerProcess) SetParameters(v string) *ServerProcess { return s } +type StartFleetActionsInput struct { + _ struct{} `type:"structure"` + + // List of actions to restart on the fleet. + // + // Actions is a required field + Actions []*string `min:"1" type:"list" required:"true"` + + // Unique identifier for a fleet + // + // FleetId is a required field + FleetId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s StartFleetActionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartFleetActionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StartFleetActionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartFleetActionsInput"} + if s.Actions == nil { + invalidParams.Add(request.NewErrParamRequired("Actions")) + } + if s.Actions != nil && len(s.Actions) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Actions", 1)) + } + if s.FleetId == nil { + invalidParams.Add(request.NewErrParamRequired("FleetId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActions sets the Actions field's value. +func (s *StartFleetActionsInput) SetActions(v []*string) *StartFleetActionsInput { + s.Actions = v + return s +} + +// SetFleetId sets the FleetId field's value. 
+func (s *StartFleetActionsInput) SetFleetId(v string) *StartFleetActionsInput { + s.FleetId = &v + return s +} + +type StartFleetActionsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s StartFleetActionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartFleetActionsOutput) GoString() string { + return s.String() +} + // Represents the input for a request action. type StartGameSessionPlacementInput struct { _ struct{} `type:"structure"` @@ -17410,6 +17661,75 @@ func (s *StartMatchmakingOutput) SetMatchmakingTicket(v *MatchmakingTicket) *Sta return s } +type StopFleetActionsInput struct { + _ struct{} `type:"structure"` + + // List of actions to suspend on the fleet. + // + // Actions is a required field + Actions []*string `min:"1" type:"list" required:"true"` + + // Unique identifier for a fleet + // + // FleetId is a required field + FleetId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s StopFleetActionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopFleetActionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StopFleetActionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopFleetActionsInput"} + if s.Actions == nil { + invalidParams.Add(request.NewErrParamRequired("Actions")) + } + if s.Actions != nil && len(s.Actions) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Actions", 1)) + } + if s.FleetId == nil { + invalidParams.Add(request.NewErrParamRequired("FleetId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActions sets the Actions field's value. +func (s *StopFleetActionsInput) SetActions(v []*string) *StopFleetActionsInput { + s.Actions = v + return s +} + +// SetFleetId sets the FleetId field's value. +func (s *StopFleetActionsInput) SetFleetId(v string) *StopFleetActionsInput { + s.FleetId = &v + return s +} + +type StopFleetActionsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s StopFleetActionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopFleetActionsOutput) GoString() string { + return s.String() +} + // Represents the input for a request action. type StopGameSessionPlacementInput struct { _ struct{} `type:"structure"` @@ -17533,6 +17853,76 @@ func (s StopMatchmakingOutput) GoString() string { return s.String() } +// Settings for a target-based scaling policy (see ScalingPolicy. A target-based +// policy tracks a particular fleet metric specifies a target value for the +// metric. As player usage changes, the policy triggers Amazon GameLift to adjust +// capacity so that the metric returns to the target value. The target configuration +// specifies settings as needed for the target based policy, including the target +// value. 
+// +// Operations related to fleet capacity scaling include: +// +// * DescribeFleetCapacity +// +// * UpdateFleetCapacity +// +// * DescribeEC2InstanceLimits +// +// * Manage scaling policies: +// +// PutScalingPolicy (auto-scaling) +// +// DescribeScalingPolicies (auto-scaling) +// +// DeleteScalingPolicy (auto-scaling) +// +// * Manage fleet actions: +// +// StartFleetActions +// +// StopFleetActions +type TargetConfiguration struct { + _ struct{} `type:"structure"` + + // Desired value to use with a target-based scaling policy. The value must be + // relevant for whatever metric the scaling policy is using. For example, in + // a policy using the metric PercentAvailableGameSessions, the target value + // should be the preferred size of the fleet's buffer (the percent of capacity + // that should be idle and ready for new game sessions). + // + // TargetValue is a required field + TargetValue *float64 `type:"double" required:"true"` +} + +// String returns the string representation +func (s TargetConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TargetConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TargetConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TargetConfiguration"} + if s.TargetValue == nil { + invalidParams.Add(request.NewErrParamRequired("TargetValue")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTargetValue sets the TargetValue field's value. +func (s *TargetConfiguration) SetTargetValue(v float64) *TargetConfiguration { + s.TargetValue = &v + return s +} + // Represents the input for a request action. type UpdateAliasInput struct { _ struct{} `type:"structure"` @@ -18660,11 +19050,11 @@ type VpcPeeringAuthorization struct { // Time stamp indicating when this authorization was issued. Format is a number // expressed in Unix time as milliseconds (for example "1469498468.057"). - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Time stamp indicating when this authorization expires (24 hours after issuance). // Format is a number expressed in Unix time as milliseconds (for example "1469498468.057"). - ExpirationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ExpirationTime *time.Time `type:"timestamp"` // Unique identifier for the AWS account that you use to manage your Amazon // GameLift fleet. 
You can find your Account ID in the AWS Management Console @@ -19090,6 +19480,11 @@ const ( EventCodeInstanceInterrupted = "INSTANCE_INTERRUPTED" ) +const ( + // FleetActionAutoScaling is a FleetAction enum value + FleetActionAutoScaling = "AUTO_SCALING" +) + const ( // FleetStatusNew is a FleetStatus enum value FleetStatusNew = "NEW" @@ -19273,6 +19668,14 @@ const ( PlayerSessionStatusTimedout = "TIMEDOUT" ) +const ( + // PolicyTypeRuleBased is a PolicyType enum value + PolicyTypeRuleBased = "RuleBased" + + // PolicyTypeTargetBased is a PolicyType enum value + PolicyTypeTargetBased = "TargetBased" +) + const ( // ProtectionPolicyNoProtection is a ProtectionPolicy enum value ProtectionPolicyNoProtection = "NoProtection" diff --git a/vendor/github.com/aws/aws-sdk-go/service/gamelift/doc.go b/vendor/github.com/aws/aws-sdk-go/service/gamelift/doc.go index 0e2d3bdaafd..def8e05107d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/gamelift/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/gamelift/doc.go @@ -17,7 +17,7 @@ // a game session. // // * Configure and manage game server resources -- Manage builds, fleets, -// queues, and aliases; set autoscaling policies; retrieve logs and metrics. +// queues, and aliases; set auto-scaling policies; retrieve logs and metrics. // // This reference guide describes the low-level service API for Amazon GameLift. // You can use the API functionality with these tools: @@ -188,16 +188,20 @@ // and the current number of instances in a fleet; adjust fleet capacity // settings to scale up or down. // -// Autoscale -- Manage autoscaling rules and apply them to a fleet. +// Autoscale -- Manage auto-scaling rules and apply them to a fleet. // -// PutScalingPolicy -- Create a new autoscaling policy, or update an existing +// PutScalingPolicy -- Create a new auto-scaling policy, or update an existing // one. // -// DescribeScalingPolicies -- Retrieve an existing autoscaling policy. +// DescribeScalingPolicies -- Retrieve an existing auto-scaling policy. // -// DeleteScalingPolicy -- Delete an autoscaling policy and stop it from affecting +// DeleteScalingPolicy -- Delete an auto-scaling policy and stop it from affecting // a fleet's capacity. // +// StartFleetActions -- Restart a fleet's auto-scaling policies. +// +// StopFleetActions -- Suspend a fleet's auto-scaling policies. +// // * Manage VPC peering connections for fleets // // CreateVpcPeeringAuthorization -- Authorize a peering connection to one of diff --git a/vendor/github.com/aws/aws-sdk-go/service/gamelift/service.go b/vendor/github.com/aws/aws-sdk-go/service/gamelift/service.go index b79ac20478d..a2361e47690 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/gamelift/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/gamelift/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "gamelift" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "gamelift" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "GameLift" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the GameLift client with a session. 
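Putting the new `FleetActionAutoScaling` enum value and the StartFleetActions/StopFleetActions operations together, suspending and later resuming a fleet's auto-scaling could look like the sketch below. It assumes the generated client methods follow the usual SDK pattern and uses a placeholder fleet ID; it is not part of the patch.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/gamelift"
)

func main() {
	svc := gamelift.New(session.Must(session.NewSession()))
	fleetID := aws.String("fleet-123") // placeholder fleet ID

	// Suspend auto-scaling: existing scaling policies stay attached to the
	// fleet but stop driving capacity changes until StartFleetActions runs.
	if _, err := svc.StopFleetActions(&gamelift.StopFleetActionsInput{
		FleetId: fleetID,
		Actions: []*string{aws.String(gamelift.FleetActionAutoScaling)},
	}); err != nil {
		log.Fatalf("StopFleetActions: %v", err)
	}

	// ... perform maintenance or manual capacity changes here ...

	// Resume auto-scaling on the same fleet.
	if _, err := svc.StartFleetActions(&gamelift.StartFleetActionsInput{
		FleetId: fleetID,
		Actions: []*string{aws.String(gamelift.FleetActionAutoScaling)},
	}); err != nil {
		log.Fatalf("StartFleetActions: %v", err)
	}
}
```

This mirrors the doc.go summary in the same hunk: StopFleetActions suspends a fleet's auto-scaling policies and StartFleetActions restarts them, without deleting the policies themselves.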
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/glacier/api.go b/vendor/github.com/aws/aws-sdk-go/service/glacier/api.go index 8bed039fe95..09929920d12 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/glacier/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/glacier/api.go @@ -17,8 +17,8 @@ const opAbortMultipartUpload = "AbortMultipartUpload" // AbortMultipartUploadRequest generates a "aws/request.Request" representing the // client's request for the AbortMultipartUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -124,8 +124,8 @@ const opAbortVaultLock = "AbortVaultLock" // AbortVaultLockRequest generates a "aws/request.Request" representing the // client's request for the AbortVaultLock operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -227,8 +227,8 @@ const opAddTagsToVault = "AddTagsToVault" // AddTagsToVaultRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToVault operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -323,8 +323,8 @@ const opCompleteMultipartUpload = "CompleteMultipartUpload" // CompleteMultipartUploadRequest generates a "aws/request.Request" representing the // client's request for the CompleteMultipartUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -452,8 +452,8 @@ const opCompleteVaultLock = "CompleteVaultLock" // CompleteVaultLockRequest generates a "aws/request.Request" representing the // client's request for the CompleteVaultLock operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -554,8 +554,8 @@ const opCreateVault = "CreateVault" // CreateVaultRequest generates a "aws/request.Request" representing the // client's request for the CreateVault operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -662,8 +662,8 @@ const opDeleteArchive = "DeleteArchive" // DeleteArchiveRequest generates a "aws/request.Request" representing the // client's request for the DeleteArchive operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -774,8 +774,8 @@ const opDeleteVault = "DeleteVault" // DeleteVaultRequest generates a "aws/request.Request" representing the // client's request for the DeleteVault operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -884,8 +884,8 @@ const opDeleteVaultAccessPolicy = "DeleteVaultAccessPolicy" // DeleteVaultAccessPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteVaultAccessPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -981,8 +981,8 @@ const opDeleteVaultNotifications = "DeleteVaultNotifications" // DeleteVaultNotificationsRequest generates a "aws/request.Request" representing the // client's request for the DeleteVaultNotifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1030,7 +1030,7 @@ func (c *Glacier) DeleteVaultNotificationsRequest(input *DeleteVaultNotification // AWS Identity and Access Management (IAM) users don't have any permissions // by default. You must grant them explicit permission to perform specific actions. // For more information, see Access Control Using AWS Identity and Access Management -// (IAM) (http://docs.aws.amazon.com/latest/dev/using-iam-with-amazon-glacier.html). +// (IAM) (http://docs.aws.amazon.com/amazonglacier/latest/dev/using-iam-with-amazon-glacier.html). 
// // For conceptual information and underlying REST API, see Configuring Vault // Notifications in Amazon Glacier (http://docs.aws.amazon.com/amazonglacier/latest/dev/configuring-notifications.html) @@ -1083,8 +1083,8 @@ const opDescribeJob = "DescribeJob" // DescribeJobRequest generates a "aws/request.Request" representing the // client's request for the DescribeJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1190,8 +1190,8 @@ const opDescribeVault = "DescribeVault" // DescribeVaultRequest generates a "aws/request.Request" representing the // client's request for the DescribeVault operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1295,8 +1295,8 @@ const opGetDataRetrievalPolicy = "GetDataRetrievalPolicy" // GetDataRetrievalPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetDataRetrievalPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1379,8 +1379,8 @@ const opGetJobOutput = "GetJobOutput" // GetJobOutputRequest generates a "aws/request.Request" representing the // client's request for the GetJobOutput operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1508,8 +1508,8 @@ const opGetVaultAccessPolicy = "GetVaultAccessPolicy" // GetVaultAccessPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetVaultAccessPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1599,8 +1599,8 @@ const opGetVaultLock = "GetVaultLock" // GetVaultLockRequest generates a "aws/request.Request" representing the // client's request for the GetVaultLock operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1704,8 +1704,8 @@ const opGetVaultNotifications = "GetVaultNotifications" // GetVaultNotificationsRequest generates a "aws/request.Request" representing the // client's request for the GetVaultNotifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1808,8 +1808,8 @@ const opInitiateJob = "InitiateJob" // InitiateJobRequest generates a "aws/request.Request" representing the // client's request for the InitiateJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1906,8 +1906,8 @@ const opInitiateMultipartUpload = "InitiateMultipartUpload" // InitiateMultipartUploadRequest generates a "aws/request.Request" representing the // client's request for the InitiateMultipartUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2026,8 +2026,8 @@ const opInitiateVaultLock = "InitiateVaultLock" // InitiateVaultLockRequest generates a "aws/request.Request" representing the // client's request for the InitiateVaultLock operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2140,8 +2140,8 @@ const opListJobs = "ListJobs" // ListJobsRequest generates a "aws/request.Request" representing the // client's request for the ListJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2207,7 +2207,7 @@ func (c *Glacier) ListJobsRequest(input *ListJobsInput) (req *request.Request, o // List Jobs request. // // You can set a maximum limit for the number of jobs returned in the response -// by specifying the limit parameter in the request. The default limit is 1000. +// by specifying the limit parameter in the request. The default limit is 50. 
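The marker-based pagination described in the ListJobs documentation (now with a documented default page size of 50) is typically consumed with a loop like the one below. A sketch, not part of the patch, assuming the usual `ListJobsInput`/`ListJobsOutput` field names (`AccountId`, `VaultName`, `Marker`, `JobList`) and a placeholder vault name; `"-"` selects the account that owns the credentials.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glacier"
)

func main() {
	svc := glacier.New(session.Must(session.NewSession()))

	// "my-vault" is a placeholder vault name.
	in := &glacier.ListJobsInput{
		AccountId: aws.String("-"),
		VaultName: aws.String("my-vault"),
	}

	// Each response holds at most one page of jobs; keep requesting with the
	// returned marker until the marker comes back empty.
	for {
		out, err := svc.ListJobs(in)
		if err != nil {
			log.Fatalf("ListJobs: %v", err)
		}
		for _, job := range out.JobList {
			fmt.Println(aws.StringValue(job.JobId), aws.StringValue(job.StatusCode))
		}
		if aws.StringValue(out.Marker) == "" {
			break
		}
		in.Marker = out.Marker
	}
}
```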
// The number of jobs returned might be fewer than the limit, but the number // of returned jobs never exceeds the limit. // @@ -2317,8 +2317,8 @@ const opListMultipartUploads = "ListMultipartUploads" // ListMultipartUploadsRequest generates a "aws/request.Request" representing the // client's request for the ListMultipartUploads operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2368,7 +2368,7 @@ func (c *Glacier) ListMultipartUploadsRequest(input *ListMultipartUploadsInput) // order. // // The List Multipart Uploads operation supports pagination. By default, this -// operation returns up to 1,000 multipart uploads in the response. You should +// operation returns up to 50 multipart uploads in the response. You should // always check the response for a marker at which to continue the list; if // there are no more items the marker is null. To return a list of multipart // uploads that begins at a specific upload, set the marker request parameter @@ -2488,8 +2488,8 @@ const opListParts = "ListParts" // ListPartsRequest generates a "aws/request.Request" representing the // client's request for the ListParts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2539,7 +2539,7 @@ func (c *Glacier) ListPartsRequest(input *ListPartsInput) (req *request.Request, // List Parts response is sorted by part range. // // The List Parts operation supports pagination. By default, this operation -// returns up to 1,000 uploaded parts in the response. You should always check +// returns up to 50 uploaded parts in the response. You should always check // the response for a marker at which to continue the list; if there are no // more items the marker is null. To return a list of parts that begins at a // specific part, set the marker request parameter to the value you obtained @@ -2653,8 +2653,8 @@ const opListProvisionedCapacity = "ListProvisionedCapacity" // ListProvisionedCapacityRequest generates a "aws/request.Request" representing the // client's request for the ListProvisionedCapacity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2736,8 +2736,8 @@ const opListTagsForVault = "ListTagsForVault" // ListTagsForVaultRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForVault operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2824,8 +2824,8 @@ const opListVaults = "ListVaults" // ListVaultsRequest generates a "aws/request.Request" representing the // client's request for the ListVaults operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2871,7 +2871,7 @@ func (c *Glacier) ListVaultsRequest(input *ListVaultsInput) (req *request.Reques // This operation lists all vaults owned by the calling user's account. The // list returned in the response is ASCII-sorted by vault name. // -// By default, this operation returns up to 1,000 items. If there are more vaults +// By default, this operation returns up to 10 items. If there are more vaults // to list, the response marker field contains the vault Amazon Resource Name // (ARN) at which to continue the list with a new List Vaults request; otherwise, // the marker field is null. To return a list of vaults that begins at a specific @@ -2986,8 +2986,8 @@ const opPurchaseProvisionedCapacity = "PurchaseProvisionedCapacity" // PurchaseProvisionedCapacityRequest generates a "aws/request.Request" representing the // client's request for the PurchaseProvisionedCapacity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3071,8 +3071,8 @@ const opRemoveTagsFromVault = "RemoveTagsFromVault" // RemoveTagsFromVaultRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromVault operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3163,8 +3163,8 @@ const opSetDataRetrievalPolicy = "SetDataRetrievalPolicy" // SetDataRetrievalPolicyRequest generates a "aws/request.Request" representing the // client's request for the SetDataRetrievalPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3253,8 +3253,8 @@ const opSetVaultAccessPolicy = "SetVaultAccessPolicy" // SetVaultAccessPolicyRequest generates a "aws/request.Request" representing the // client's request for the SetVaultAccessPolicy operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3347,8 +3347,8 @@ const opSetVaultNotifications = "SetVaultNotifications" // SetVaultNotificationsRequest generates a "aws/request.Request" representing the // client's request for the SetVaultNotifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3466,8 +3466,8 @@ const opUploadArchive = "UploadArchive" // UploadArchiveRequest generates a "aws/request.Request" representing the // client's request for the UploadArchive operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3591,8 +3591,8 @@ const opUploadMultipartPart = "UploadMultipartPart" // UploadMultipartPartRequest generates a "aws/request.Request" representing the // client's request for the UploadMultipartPart operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6569,9 +6569,9 @@ type ListJobsInput struct { // The state of the jobs to return. You can specify true or false. Completed *string `location:"querystring" locationName:"completed" type:"string"` - // The maximum number of jobs to be returned. The default limit is 1000. The - // number of jobs returned might be fewer than the specified limit, but the - // number of returned jobs never exceeds the limit. + // The maximum number of jobs to be returned. The default limit is 50. The number + // of jobs returned might be fewer than the specified limit, but the number + // of returned jobs never exceeds the limit. Limit *string `location:"querystring" locationName:"limit" type:"string"` // An opaque string used for pagination. This value specifies the job at which @@ -6703,7 +6703,7 @@ type ListMultipartUploadsInput struct { AccountId *string `location:"uri" locationName:"accountId" type:"string" required:"true"` // Specifies the maximum number of uploads returned in the response body. If - // this value is not specified, the List Uploads operation returns up to 1,000 + // this value is not specified, the List Uploads operation returns up to 50 // uploads. 
Limit *string `location:"querystring" locationName:"limit" type:"string"` @@ -6818,7 +6818,7 @@ type ListPartsInput struct { // AccountId is a required field AccountId *string `location:"uri" locationName:"accountId" type:"string" required:"true"` - // The maximum number of parts to be returned. The default limit is 1000. The + // The maximum number of parts to be returned. The default limit is 50. The // number of parts returned might be fewer than the specified limit, but the // number of returned parts never exceeds the limit. Limit *string `location:"querystring" locationName:"limit" type:"string"` @@ -7145,7 +7145,7 @@ type ListVaultsInput struct { // AccountId is a required field AccountId *string `location:"uri" locationName:"accountId" type:"string" required:"true"` - // The maximum number of vaults to be returned. The default limit is 1000. The + // The maximum number of vaults to be returned. The default limit is 10. The // number of vaults returned might be fewer than the specified limit, but the // number of returned vaults never exceeds the limit. Limit *string `location:"querystring" locationName:"limit" type:"string"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/glacier/doc.go b/vendor/github.com/aws/aws-sdk-go/service/glacier/doc.go index 844805b4233..80c74d84891 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/glacier/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/glacier/doc.go @@ -13,11 +13,10 @@ // about capacity planning, hardware provisioning, data replication, hardware // failure and recovery, or time-consuming hardware migrations. // -// Amazon Glacier is a great storage choice when low storage cost is paramount, -// your data is rarely retrieved, and retrieval latency of several hours is -// acceptable. If your application requires fast or frequent access to your -// data, consider using Amazon S3. For more information, see Amazon Simple Storage -// Service (Amazon S3) (http://aws.amazon.com/s3/). +// Amazon Glacier is a great storage choice when low storage cost is paramount +// and your data is rarely retrieved. If your application requires fast or frequent +// access to your data, consider using Amazon S3. For more information, see +// Amazon Simple Storage Service (Amazon S3) (http://aws.amazon.com/s3/). // // You can store any kind of data in any format. There is no maximum limit on // the total amount of data you can store in Amazon Glacier. diff --git a/vendor/github.com/aws/aws-sdk-go/service/glacier/service.go b/vendor/github.com/aws/aws-sdk-go/service/glacier/service.go index b875f0faf2c..85e6e367b20 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/glacier/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/glacier/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "glacier" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "glacier" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Glacier" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Glacier client with a session. 
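Editorial aside, not part of the vendored diff: the regenerated doc comments in these hunks repeatedly describe the SDK's Request/Send calling pattern ("the output return value is not valid until after Send returns without error") and the *WithContext wrappers that attach a context before Send. A minimal sketch of that pattern against the Glacier client is shown below for illustration only; the region, the "-" account-ID shorthand, the limit value, and the canonical `github.com/aws/aws-sdk-go/...` import path are assumptions, not something this diff prescribes.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glacier"
)

func main() {
	// Build a session and a Glacier client (region is a placeholder).
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := glacier.New(sess)

	// Request/Send pattern from the generated doc comments: the "output"
	// value is only populated once Send returns without error.
	req, out := svc.ListVaultsRequest(&glacier.ListVaultsInput{
		AccountId: aws.String("-"),  // "-" means the account owning the credentials
		Limit:     aws.String("10"), // illustrative limit
	})
	if err := req.Send(); err != nil {
		fmt.Println("ListVaults failed:", err)
		return
	}
	fmt.Println("vaults returned:", len(out.VaultList))

	// The *WithContext variants set a context on the request before Send,
	// allowing cancellation or a deadline for the in-flight API call.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if _, err := svc.ListVaultsWithContext(ctx, &glacier.ListVaultsInput{
		AccountId: aws.String("-"),
	}); err != nil {
		fmt.Println("ListVaultsWithContext failed:", err)
	}
}
```

As the generated wrappers in this diff show, the *WithContext variants do nothing beyond calling SetContext and ApplyOptions on the same request before Send, so the two call styles are interchangeable.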
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/glue/api.go b/vendor/github.com/aws/aws-sdk-go/service/glue/api.go index 3e172586c27..4847a246d8f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/glue/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/glue/api.go @@ -15,8 +15,8 @@ const opBatchCreatePartition = "BatchCreatePartition" // BatchCreatePartitionRequest generates a "aws/request.Request" representing the // client's request for the BatchCreatePartition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -83,6 +83,9 @@ func (c *Glue) BatchCreatePartitionRequest(input *BatchCreatePartitionInput) (re // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/BatchCreatePartition func (c *Glue) BatchCreatePartition(input *BatchCreatePartitionInput) (*BatchCreatePartitionOutput, error) { req, out := c.BatchCreatePartitionRequest(input) @@ -109,8 +112,8 @@ const opBatchDeleteConnection = "BatchDeleteConnection" // BatchDeleteConnectionRequest generates a "aws/request.Request" representing the // client's request for the BatchDeleteConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -191,8 +194,8 @@ const opBatchDeletePartition = "BatchDeletePartition" // BatchDeletePartitionRequest generates a "aws/request.Request" representing the // client's request for the BatchDeletePartition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -279,8 +282,8 @@ const opBatchDeleteTable = "BatchDeleteTable" // BatchDeleteTableRequest generates a "aws/request.Request" representing the // client's request for the BatchDeleteTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -321,6 +324,15 @@ func (c *Glue) BatchDeleteTableRequest(input *BatchDeleteTableInput) (req *reque // // Deletes multiple tables at once. // +// After completing this operation, you will no longer have access to the table +// versions and partitions that belong to the deleted table. AWS Glue deletes +// these "orphaned" resources asynchronously in a timely manner, at the discretion +// of the service. +// +// To ensure immediate deletion of all related resources, before calling BatchDeleteTable, +// use DeleteTableVersion or BatchDeleteTableVersion, and DeletePartition or +// BatchDeletePartition, to delete any resources that belong to the table. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -367,8 +379,8 @@ const opBatchDeleteTableVersion = "BatchDeleteTableVersion" // BatchDeleteTableVersionRequest generates a "aws/request.Request" representing the // client's request for the BatchDeleteTableVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -455,8 +467,8 @@ const opBatchGetPartition = "BatchGetPartition" // BatchGetPartitionRequest generates a "aws/request.Request" representing the // client's request for the BatchGetPartition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -517,6 +529,9 @@ func (c *Glue) BatchGetPartitionRequest(input *BatchGetPartitionInput) (req *req // * ErrCodeInternalServiceException "InternalServiceException" // An internal service error occurred. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/BatchGetPartition func (c *Glue) BatchGetPartition(input *BatchGetPartitionInput) (*BatchGetPartitionOutput, error) { req, out := c.BatchGetPartitionRequest(input) @@ -543,8 +558,8 @@ const opBatchStopJobRun = "BatchStopJobRun" // BatchStopJobRunRequest generates a "aws/request.Request" representing the // client's request for the BatchStopJobRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -583,7 +598,7 @@ func (c *Glue) BatchStopJobRunRequest(input *BatchStopJobRunInput) (req *request // BatchStopJobRun API operation for AWS Glue. // -// Stops one or more job runs for a specified Job. +// Stops one or more job runs for a specified job definition. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -628,8 +643,8 @@ const opCreateClassifier = "CreateClassifier" // CreateClassifierRequest generates a "aws/request.Request" representing the // client's request for the CreateClassifier operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -715,8 +730,8 @@ const opCreateConnection = "CreateConnection" // CreateConnectionRequest generates a "aws/request.Request" representing the // client's request for the CreateConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -777,6 +792,9 @@ func (c *Glue) CreateConnectionRequest(input *CreateConnectionInput) (req *reque // * ErrCodeResourceNumberLimitExceededException "ResourceNumberLimitExceededException" // A resource numerical limit was exceeded. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateConnection func (c *Glue) CreateConnection(input *CreateConnectionInput) (*CreateConnectionOutput, error) { req, out := c.CreateConnectionRequest(input) @@ -803,8 +821,8 @@ const opCreateCrawler = "CreateCrawler" // CreateCrawlerRequest generates a "aws/request.Request" representing the // client's request for the CreateCrawler operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -844,8 +862,8 @@ func (c *Glue) CreateCrawlerRequest(input *CreateCrawlerInput) (req *request.Req // CreateCrawler API operation for AWS Glue. // // Creates a new crawler with specified targets, role, configuration, and optional -// schedule. At least one crawl target must be specified, in either the s3Targets -// or the jdbcTargets field. +// schedule. At least one crawl target must be specified, in the s3Targets field, +// the jdbcTargets field, or the DynamoDBTargets field. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -893,8 +911,8 @@ const opCreateDatabase = "CreateDatabase" // CreateDatabaseRequest generates a "aws/request.Request" representing the // client's request for the CreateDatabase operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -958,6 +976,9 @@ func (c *Glue) CreateDatabaseRequest(input *CreateDatabaseInput) (req *request.R // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateDatabase func (c *Glue) CreateDatabase(input *CreateDatabaseInput) (*CreateDatabaseOutput, error) { req, out := c.CreateDatabaseRequest(input) @@ -984,8 +1005,8 @@ const opCreateDevEndpoint = "CreateDevEndpoint" // CreateDevEndpointRequest generates a "aws/request.Request" representing the // client's request for the CreateDevEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1084,8 +1105,8 @@ const opCreateJob = "CreateJob" // CreateJobRequest generates a "aws/request.Request" representing the // client's request for the CreateJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1124,7 +1145,7 @@ func (c *Glue) CreateJobRequest(input *CreateJobInput) (req *request.Request, ou // CreateJob API operation for AWS Glue. // -// Creates a new job. +// Creates a new job definition. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1181,8 +1202,8 @@ const opCreatePartition = "CreatePartition" // CreatePartitionRequest generates a "aws/request.Request" representing the // client's request for the CreatePartition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1249,6 +1270,9 @@ func (c *Glue) CreatePartitionRequest(input *CreatePartitionInput) (req *request // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreatePartition func (c *Glue) CreatePartition(input *CreatePartitionInput) (*CreatePartitionOutput, error) { req, out := c.CreatePartitionRequest(input) @@ -1275,8 +1299,8 @@ const opCreateScript = "CreateScript" // CreateScriptRequest generates a "aws/request.Request" representing the // client's request for the CreateScript operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1356,12 +1380,103 @@ func (c *Glue) CreateScriptWithContext(ctx aws.Context, input *CreateScriptInput return out, req.Send() } +const opCreateSecurityConfiguration = "CreateSecurityConfiguration" + +// CreateSecurityConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the CreateSecurityConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSecurityConfiguration for more information on using the CreateSecurityConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSecurityConfigurationRequest method. +// req, resp := client.CreateSecurityConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateSecurityConfiguration +func (c *Glue) CreateSecurityConfigurationRequest(input *CreateSecurityConfigurationInput) (req *request.Request, output *CreateSecurityConfigurationOutput) { + op := &request.Operation{ + Name: opCreateSecurityConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateSecurityConfigurationInput{} + } + + output = &CreateSecurityConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSecurityConfiguration API operation for AWS Glue. +// +// Creates a new security configuration. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation CreateSecurityConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAlreadyExistsException "AlreadyExistsException" +// A resource to be created or added already exists. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. +// +// * ErrCodeResourceNumberLimitExceededException "ResourceNumberLimitExceededException" +// A resource numerical limit was exceeded. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateSecurityConfiguration +func (c *Glue) CreateSecurityConfiguration(input *CreateSecurityConfigurationInput) (*CreateSecurityConfigurationOutput, error) { + req, out := c.CreateSecurityConfigurationRequest(input) + return out, req.Send() +} + +// CreateSecurityConfigurationWithContext is the same as CreateSecurityConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See CreateSecurityConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) CreateSecurityConfigurationWithContext(ctx aws.Context, input *CreateSecurityConfigurationInput, opts ...request.Option) (*CreateSecurityConfigurationOutput, error) { + req, out := c.CreateSecurityConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateTable = "CreateTable" // CreateTableRequest generates a "aws/request.Request" representing the // client's request for the CreateTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1428,6 +1543,9 @@ func (c *Glue) CreateTableRequest(input *CreateTableInput) (req *request.Request // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateTable func (c *Glue) CreateTable(input *CreateTableInput) (*CreateTableOutput, error) { req, out := c.CreateTableRequest(input) @@ -1454,8 +1572,8 @@ const opCreateTrigger = "CreateTrigger" // CreateTriggerRequest generates a "aws/request.Request" representing the // client's request for the CreateTrigger operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1551,8 +1669,8 @@ const opCreateUserDefinedFunction = "CreateUserDefinedFunction" // CreateUserDefinedFunctionRequest generates a "aws/request.Request" representing the // client's request for the CreateUserDefinedFunction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1619,6 +1737,9 @@ func (c *Glue) CreateUserDefinedFunctionRequest(input *CreateUserDefinedFunction // * ErrCodeResourceNumberLimitExceededException "ResourceNumberLimitExceededException" // A resource numerical limit was exceeded. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/CreateUserDefinedFunction func (c *Glue) CreateUserDefinedFunction(input *CreateUserDefinedFunctionInput) (*CreateUserDefinedFunctionOutput, error) { req, out := c.CreateUserDefinedFunctionRequest(input) @@ -1645,8 +1766,8 @@ const opDeleteClassifier = "DeleteClassifier" // DeleteClassifierRequest generates a "aws/request.Request" representing the // client's request for the DeleteClassifier operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1727,8 +1848,8 @@ const opDeleteConnection = "DeleteConnection" // DeleteConnectionRequest generates a "aws/request.Request" representing the // client's request for the DeleteConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1809,8 +1930,8 @@ const opDeleteCrawler = "DeleteCrawler" // DeleteCrawlerRequest generates a "aws/request.Request" representing the // client's request for the DeleteCrawler operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1898,8 +2019,8 @@ const opDeleteDatabase = "DeleteDatabase" // DeleteDatabaseRequest generates a "aws/request.Request" representing the // client's request for the DeleteDatabase operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1940,6 +2061,17 @@ func (c *Glue) DeleteDatabaseRequest(input *DeleteDatabaseInput) (req *request.R // // Removes a specified Database from a Data Catalog. // +// After completing this operation, you will no longer have access to the tables +// (and all table versions and partitions that might belong to the tables) and +// the user-defined functions in the deleted database. AWS Glue deletes these +// "orphaned" resources asynchronously in a timely manner, at the discretion +// of the service. 
+// +// To ensure immediate deletion of all related resources, before calling DeleteDatabase, +// use DeleteTableVersion or BatchDeleteTableVersion, DeletePartition or BatchDeletePartition, +// DeleteUserDefinedFunction, and DeleteTable or BatchDeleteTable, to delete +// any resources that belong to the database. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1986,8 +2118,8 @@ const opDeleteDevEndpoint = "DeleteDevEndpoint" // DeleteDevEndpointRequest generates a "aws/request.Request" representing the // client's request for the DeleteDevEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2074,8 +2206,8 @@ const opDeleteJob = "DeleteJob" // DeleteJobRequest generates a "aws/request.Request" representing the // client's request for the DeleteJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2114,7 +2246,8 @@ func (c *Glue) DeleteJobRequest(input *DeleteJobInput) (req *request.Request, ou // DeleteJob API operation for AWS Glue. // -// Deletes a specified job. If the job is not found, no exception is thrown. +// Deletes a specified job definition. If the job definition is not found, no +// exception is thrown. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2159,8 +2292,8 @@ const opDeletePartition = "DeletePartition" // DeletePartitionRequest generates a "aws/request.Request" representing the // client's request for the DeletePartition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2243,12 +2376,191 @@ func (c *Glue) DeletePartitionWithContext(ctx aws.Context, input *DeletePartitio return out, req.Send() } +const opDeleteResourcePolicy = "DeleteResourcePolicy" + +// DeleteResourcePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteResourcePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteResourcePolicy for more information on using the DeleteResourcePolicy +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteResourcePolicyRequest method. +// req, resp := client.DeleteResourcePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/DeleteResourcePolicy +func (c *Glue) DeleteResourcePolicyRequest(input *DeleteResourcePolicyInput) (req *request.Request, output *DeleteResourcePolicyOutput) { + op := &request.Operation{ + Name: opDeleteResourcePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteResourcePolicyInput{} + } + + output = &DeleteResourcePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteResourcePolicy API operation for AWS Glue. +// +// Deletes a specified policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation DeleteResourcePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityNotFoundException "EntityNotFoundException" +// A specified entity does not exist +// +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeConditionCheckFailureException "ConditionCheckFailureException" +// A specified condition was not satisfied. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/DeleteResourcePolicy +func (c *Glue) DeleteResourcePolicy(input *DeleteResourcePolicyInput) (*DeleteResourcePolicyOutput, error) { + req, out := c.DeleteResourcePolicyRequest(input) + return out, req.Send() +} + +// DeleteResourcePolicyWithContext is the same as DeleteResourcePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteResourcePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) DeleteResourcePolicyWithContext(ctx aws.Context, input *DeleteResourcePolicyInput, opts ...request.Option) (*DeleteResourcePolicyOutput, error) { + req, out := c.DeleteResourcePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteSecurityConfiguration = "DeleteSecurityConfiguration" + +// DeleteSecurityConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSecurityConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See DeleteSecurityConfiguration for more information on using the DeleteSecurityConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteSecurityConfigurationRequest method. +// req, resp := client.DeleteSecurityConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/DeleteSecurityConfiguration +func (c *Glue) DeleteSecurityConfigurationRequest(input *DeleteSecurityConfigurationInput) (req *request.Request, output *DeleteSecurityConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteSecurityConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteSecurityConfigurationInput{} + } + + output = &DeleteSecurityConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteSecurityConfiguration API operation for AWS Glue. +// +// Deletes a specified security configuration. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation DeleteSecurityConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityNotFoundException "EntityNotFoundException" +// A specified entity does not exist +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/DeleteSecurityConfiguration +func (c *Glue) DeleteSecurityConfiguration(input *DeleteSecurityConfigurationInput) (*DeleteSecurityConfigurationOutput, error) { + req, out := c.DeleteSecurityConfigurationRequest(input) + return out, req.Send() +} + +// DeleteSecurityConfigurationWithContext is the same as DeleteSecurityConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSecurityConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) DeleteSecurityConfigurationWithContext(ctx aws.Context, input *DeleteSecurityConfigurationInput, opts ...request.Option) (*DeleteSecurityConfigurationOutput, error) { + req, out := c.DeleteSecurityConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteTable = "DeleteTable" // DeleteTableRequest generates a "aws/request.Request" representing the // client's request for the DeleteTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2289,6 +2601,15 @@ func (c *Glue) DeleteTableRequest(input *DeleteTableInput) (req *request.Request // // Removes a table definition from the Data Catalog. // +// After completing this operation, you will no longer have access to the table +// versions and partitions that belong to the deleted table. AWS Glue deletes +// these "orphaned" resources asynchronously in a timely manner, at the discretion +// of the service. +// +// To ensure immediate deletion of all related resources, before calling DeleteTable, +// use DeleteTableVersion or BatchDeleteTableVersion, and DeletePartition or +// BatchDeletePartition, to delete any resources that belong to the table. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -2335,8 +2656,8 @@ const opDeleteTableVersion = "DeleteTableVersion" // DeleteTableVersionRequest generates a "aws/request.Request" representing the // client's request for the DeleteTableVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2423,8 +2744,8 @@ const opDeleteTrigger = "DeleteTrigger" // DeleteTriggerRequest generates a "aws/request.Request" representing the // client's request for the DeleteTrigger operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2512,8 +2833,8 @@ const opDeleteUserDefinedFunction = "DeleteUserDefinedFunction" // DeleteUserDefinedFunctionRequest generates a "aws/request.Request" representing the // client's request for the DeleteUserDefinedFunction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2600,8 +2921,8 @@ const opGetCatalogImportStatus = "GetCatalogImportStatus" // GetCatalogImportStatusRequest generates a "aws/request.Request" representing the // client's request for the GetCatalogImportStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2682,8 +3003,8 @@ const opGetClassifier = "GetClassifier" // GetClassifierRequest generates a "aws/request.Request" representing the // client's request for the GetClassifier operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2764,8 +3085,8 @@ const opGetClassifiers = "GetClassifiers" // GetClassifiersRequest generates a "aws/request.Request" representing the // client's request for the GetClassifiers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2899,8 +3220,8 @@ const opGetConnection = "GetConnection" // GetConnectionRequest generates a "aws/request.Request" representing the // client's request for the GetConnection operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2955,6 +3276,12 @@ func (c *Glue) GetConnectionRequest(input *GetConnectionInput) (req *request.Req // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetConnection func (c *Glue) GetConnection(input *GetConnectionInput) (*GetConnectionOutput, error) { req, out := c.GetConnectionRequest(input) @@ -2981,8 +3308,8 @@ const opGetConnections = "GetConnections" // GetConnectionsRequest generates a "aws/request.Request" representing the // client's request for the GetConnections operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3043,6 +3370,12 @@ func (c *Glue) GetConnectionsRequest(input *GetConnectionsInput) (req *request.R // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetConnections func (c *Glue) GetConnections(input *GetConnectionsInput) (*GetConnectionsOutput, error) { req, out := c.GetConnectionsRequest(input) @@ -3119,8 +3452,8 @@ const opGetCrawler = "GetCrawler" // GetCrawlerRequest generates a "aws/request.Request" representing the // client's request for the GetCrawler operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3201,8 +3534,8 @@ const opGetCrawlerMetrics = "GetCrawlerMetrics" // GetCrawlerMetricsRequest generates a "aws/request.Request" representing the // client's request for the GetCrawlerMetrics operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3336,8 +3669,8 @@ const opGetCrawlers = "GetCrawlers" // GetCrawlersRequest generates a "aws/request.Request" representing the // client's request for the GetCrawlers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3467,100 +3800,188 @@ func (c *Glue) GetCrawlersPagesWithContext(ctx aws.Context, input *GetCrawlersIn return p.Err() } -const opGetDatabase = "GetDatabase" +const opGetDataCatalogEncryptionSettings = "GetDataCatalogEncryptionSettings" -// GetDatabaseRequest generates a "aws/request.Request" representing the -// client's request for the GetDatabase operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetDataCatalogEncryptionSettingsRequest generates a "aws/request.Request" representing the +// client's request for the GetDataCatalogEncryptionSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetDatabase for more information on using the GetDatabase +// See GetDataCatalogEncryptionSettings for more information on using the GetDataCatalogEncryptionSettings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetDatabaseRequest method. -// req, resp := client.GetDatabaseRequest(params) +// // Example sending a request using the GetDataCatalogEncryptionSettingsRequest method. 
+// req, resp := client.GetDataCatalogEncryptionSettingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetDatabase -func (c *Glue) GetDatabaseRequest(input *GetDatabaseInput) (req *request.Request, output *GetDatabaseOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetDataCatalogEncryptionSettings +func (c *Glue) GetDataCatalogEncryptionSettingsRequest(input *GetDataCatalogEncryptionSettingsInput) (req *request.Request, output *GetDataCatalogEncryptionSettingsOutput) { op := &request.Operation{ - Name: opGetDatabase, + Name: opGetDataCatalogEncryptionSettings, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetDatabaseInput{} + input = &GetDataCatalogEncryptionSettingsInput{} } - output = &GetDatabaseOutput{} + output = &GetDataCatalogEncryptionSettingsOutput{} req = c.newRequest(op, input, output) return } -// GetDatabase API operation for AWS Glue. +// GetDataCatalogEncryptionSettings API operation for AWS Glue. // -// Retrieves the definition of a specified database. +// Retrieves the security configuration for a specified catalog. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Glue's -// API operation GetDatabase for usage and error information. +// API operation GetDataCatalogEncryptionSettings for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidInputException "InvalidInputException" -// The input provided was not valid. -// -// * ErrCodeEntityNotFoundException "EntityNotFoundException" -// A specified entity does not exist -// // * ErrCodeInternalServiceException "InternalServiceException" // An internal service error occurred. // +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetDatabase -func (c *Glue) GetDatabase(input *GetDatabaseInput) (*GetDatabaseOutput, error) { - req, out := c.GetDatabaseRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetDataCatalogEncryptionSettings +func (c *Glue) GetDataCatalogEncryptionSettings(input *GetDataCatalogEncryptionSettingsInput) (*GetDataCatalogEncryptionSettingsOutput, error) { + req, out := c.GetDataCatalogEncryptionSettingsRequest(input) return out, req.Send() } -// GetDatabaseWithContext is the same as GetDatabase with the addition of +// GetDataCatalogEncryptionSettingsWithContext is the same as GetDataCatalogEncryptionSettings with the addition of // the ability to pass a context and additional request options. // -// See GetDatabase for details on how to use this API operation. +// See GetDataCatalogEncryptionSettings for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *Glue) GetDatabaseWithContext(ctx aws.Context, input *GetDatabaseInput, opts ...request.Option) (*GetDatabaseOutput, error) { - req, out := c.GetDatabaseRequest(input) +func (c *Glue) GetDataCatalogEncryptionSettingsWithContext(ctx aws.Context, input *GetDataCatalogEncryptionSettingsInput, opts ...request.Option) (*GetDataCatalogEncryptionSettingsOutput, error) { + req, out := c.GetDataCatalogEncryptionSettingsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetDatabases = "GetDatabases" +const opGetDatabase = "GetDatabase" -// GetDatabasesRequest generates a "aws/request.Request" representing the -// client's request for the GetDatabases operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetDatabaseRequest generates a "aws/request.Request" representing the +// client's request for the GetDatabase operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetDatabase for more information on using the GetDatabase +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetDatabaseRequest method. +// req, resp := client.GetDatabaseRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetDatabase +func (c *Glue) GetDatabaseRequest(input *GetDatabaseInput) (req *request.Request, output *GetDatabaseOutput) { + op := &request.Operation{ + Name: opGetDatabase, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetDatabaseInput{} + } + + output = &GetDatabaseOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetDatabase API operation for AWS Glue. +// +// Retrieves the definition of a specified database. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation GetDatabase for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeEntityNotFoundException "EntityNotFoundException" +// A specified entity does not exist +// +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. +// +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetDatabase +func (c *Glue) GetDatabase(input *GetDatabaseInput) (*GetDatabaseOutput, error) { + req, out := c.GetDatabaseRequest(input) + return out, req.Send() +} + +// GetDatabaseWithContext is the same as GetDatabase with the addition of +// the ability to pass a context and additional request options. 
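`GetDataCatalogEncryptionSettings`, wired up in the hunks above, reads the catalog-level security configuration. A hedged sketch of calling it, assuming an empty input is acceptable (the input struct is defined outside this excerpt):

```go
// Sketch: reading the catalog security configuration via the new
// GetDataCatalogEncryptionSettings operation. Input fields are assumptions;
// only the operation wiring appears in this diff.
package glueexamples

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/glue"
)

func printCatalogEncryptionSettings(svc *glue.Glue) error {
	out, err := svc.GetDataCatalogEncryptionSettings(&glue.GetDataCatalogEncryptionSettingsInput{})
	if err != nil {
		return err
	}
	fmt.Println(out)
	return nil
}
```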
+// +// See GetDatabase for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) GetDatabaseWithContext(ctx aws.Context, input *GetDatabaseInput, opts ...request.Option) (*GetDatabaseOutput, error) { + req, out := c.GetDatabaseRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetDatabases = "GetDatabases" + +// GetDatabasesRequest generates a "aws/request.Request" representing the +// client's request for the GetDatabases operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3624,6 +4045,9 @@ func (c *Glue) GetDatabasesRequest(input *GetDatabasesInput) (req *request.Reque // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetDatabases func (c *Glue) GetDatabases(input *GetDatabasesInput) (*GetDatabasesOutput, error) { req, out := c.GetDatabasesRequest(input) @@ -3700,8 +4124,8 @@ const opGetDataflowGraph = "GetDataflowGraph" // GetDataflowGraphRequest generates a "aws/request.Request" representing the // client's request for the GetDataflowGraph operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3785,8 +4209,8 @@ const opGetDevEndpoint = "GetDevEndpoint" // GetDevEndpointRequest generates a "aws/request.Request" representing the // client's request for the GetDevEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3827,6 +4251,11 @@ func (c *Glue) GetDevEndpointRequest(input *GetDevEndpointInput) (req *request.R // // Retrieves information about a specified DevEndpoint. // +// When you create a development endpoint in a virtual private cloud (VPC), +// AWS Glue returns only a private IP address, and the public IP address field +// is not populated. When you create a non-VPC development endpoint, AWS Glue +// returns only a public IP address. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
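The generated `*WithContext` variants shown above all accept an `aws.Context`, which a standard `context.Context` satisfies. A small sketch using a timeout, with the `Name` field on `GetDatabaseInput` assumed (it is not shown here):

```go
// Sketch of the *WithContext pattern documented above: the request is
// cancelled if it outlives the context deadline. The Name field on
// GetDatabaseInput is an assumption not shown in this diff.
package glueexamples

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/glue"
)

func getDatabaseWithTimeout(svc *glue.Glue, name string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	out, err := svc.GetDatabaseWithContext(ctx, &glue.GetDatabaseInput{
		Name: aws.String(name), // assumed field name
	})
	if err != nil {
		return err
	}
	fmt.Println(out)
	return nil
}
```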
@@ -3873,8 +4302,8 @@ const opGetDevEndpoints = "GetDevEndpoints" // GetDevEndpointsRequest generates a "aws/request.Request" representing the // client's request for the GetDevEndpoints operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3921,6 +4350,11 @@ func (c *Glue) GetDevEndpointsRequest(input *GetDevEndpointsInput) (req *request // // Retrieves all the DevEndpoints in this AWS account. // +// When you create a development endpoint in a virtual private cloud (VPC), +// AWS Glue returns only a private IP address and the public IP address field +// is not populated. When you create a non-VPC development endpoint, AWS Glue +// returns only a public IP address. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -4017,8 +4451,8 @@ const opGetJob = "GetJob" // GetJobRequest generates a "aws/request.Request" representing the // client's request for the GetJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4105,8 +4539,8 @@ const opGetJobRun = "GetJobRun" // GetJobRunRequest generates a "aws/request.Request" representing the // client's request for the GetJobRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4193,8 +4627,8 @@ const opGetJobRuns = "GetJobRuns" // GetJobRunsRequest generates a "aws/request.Request" representing the // client's request for the GetJobRuns operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4239,7 +4673,7 @@ func (c *Glue) GetJobRunsRequest(input *GetJobRunsInput) (req *request.Request, // GetJobRuns API operation for AWS Glue. // -// Retrieves metadata for all runs of a given job. +// Retrieves metadata for all runs of a given job definition. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4337,8 +4771,8 @@ const opGetJobs = "GetJobs" // GetJobsRequest generates a "aws/request.Request" representing the // client's request for the GetJobs operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4383,7 +4817,7 @@ func (c *Glue) GetJobsRequest(input *GetJobsInput) (req *request.Request, output // GetJobs API operation for AWS Glue. // -// Retrieves all current jobs. +// Retrieves all current job definitions. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4481,8 +4915,8 @@ const opGetMapping = "GetMapping" // GetMappingRequest generates a "aws/request.Request" representing the // client's request for the GetMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4569,8 +5003,8 @@ const opGetPartition = "GetPartition" // GetPartitionRequest generates a "aws/request.Request" representing the // client's request for the GetPartition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4631,6 +5065,9 @@ func (c *Glue) GetPartitionRequest(input *GetPartitionInput) (req *request.Reque // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetPartition func (c *Glue) GetPartition(input *GetPartitionInput) (*GetPartitionOutput, error) { req, out := c.GetPartitionRequest(input) @@ -4657,8 +5094,8 @@ const opGetPartitions = "GetPartitions" // GetPartitionsRequest generates a "aws/request.Request" representing the // client's request for the GetPartitions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4725,6 +5162,9 @@ func (c *Glue) GetPartitionsRequest(input *GetPartitionsInput) (req *request.Req // * ErrCodeInternalServiceException "InternalServiceException" // An internal service error occurred. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetPartitions func (c *Glue) GetPartitions(input *GetPartitionsInput) (*GetPartitionsOutput, error) { req, out := c.GetPartitionsRequest(input) @@ -4801,8 +5241,8 @@ const opGetPlan = "GetPlan" // GetPlanRequest generates a "aws/request.Request" representing the // client's request for the GetPlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4882,12 +5322,276 @@ func (c *Glue) GetPlanWithContext(ctx aws.Context, input *GetPlanInput, opts ... return out, req.Send() } +const opGetResourcePolicy = "GetResourcePolicy" + +// GetResourcePolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetResourcePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetResourcePolicy for more information on using the GetResourcePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetResourcePolicyRequest method. +// req, resp := client.GetResourcePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetResourcePolicy +func (c *Glue) GetResourcePolicyRequest(input *GetResourcePolicyInput) (req *request.Request, output *GetResourcePolicyOutput) { + op := &request.Operation{ + Name: opGetResourcePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetResourcePolicyInput{} + } + + output = &GetResourcePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetResourcePolicy API operation for AWS Glue. +// +// Retrieves a specified resource policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation GetResourcePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityNotFoundException "EntityNotFoundException" +// A specified entity does not exist +// +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetResourcePolicy +func (c *Glue) GetResourcePolicy(input *GetResourcePolicyInput) (*GetResourcePolicyOutput, error) { + req, out := c.GetResourcePolicyRequest(input) + return out, req.Send() +} + +// GetResourcePolicyWithContext is the same as GetResourcePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetResourcePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) GetResourcePolicyWithContext(ctx aws.Context, input *GetResourcePolicyInput, opts ...request.Option) (*GetResourcePolicyOutput, error) { + req, out := c.GetResourcePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSecurityConfiguration = "GetSecurityConfiguration" + +// GetSecurityConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetSecurityConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSecurityConfiguration for more information on using the GetSecurityConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSecurityConfigurationRequest method. +// req, resp := client.GetSecurityConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetSecurityConfiguration +func (c *Glue) GetSecurityConfigurationRequest(input *GetSecurityConfigurationInput) (req *request.Request, output *GetSecurityConfigurationOutput) { + op := &request.Operation{ + Name: opGetSecurityConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetSecurityConfigurationInput{} + } + + output = &GetSecurityConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSecurityConfiguration API operation for AWS Glue. +// +// Retrieves a specified security configuration. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation GetSecurityConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityNotFoundException "EntityNotFoundException" +// A specified entity does not exist +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. 
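`GetResourcePolicy`, added above, returns the Data Catalog resource policy. A sketch assuming the input has no required fields (its definition is outside this excerpt):

```go
// Sketch: fetching the Data Catalog resource policy through the new
// GetResourcePolicy operation. Assumes the input has no required fields;
// the input and output structs are not shown in this excerpt.
package glueexamples

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/glue"
)

func printResourcePolicy(svc *glue.Glue) error {
	out, err := svc.GetResourcePolicy(&glue.GetResourcePolicyInput{})
	if err != nil {
		return err
	}
	fmt.Println(out) // policy document and metadata, per the output struct
	return nil
}
```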
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetSecurityConfiguration +func (c *Glue) GetSecurityConfiguration(input *GetSecurityConfigurationInput) (*GetSecurityConfigurationOutput, error) { + req, out := c.GetSecurityConfigurationRequest(input) + return out, req.Send() +} + +// GetSecurityConfigurationWithContext is the same as GetSecurityConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetSecurityConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) GetSecurityConfigurationWithContext(ctx aws.Context, input *GetSecurityConfigurationInput, opts ...request.Option) (*GetSecurityConfigurationOutput, error) { + req, out := c.GetSecurityConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSecurityConfigurations = "GetSecurityConfigurations" + +// GetSecurityConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the GetSecurityConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSecurityConfigurations for more information on using the GetSecurityConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSecurityConfigurationsRequest method. +// req, resp := client.GetSecurityConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetSecurityConfigurations +func (c *Glue) GetSecurityConfigurationsRequest(input *GetSecurityConfigurationsInput) (req *request.Request, output *GetSecurityConfigurationsOutput) { + op := &request.Operation{ + Name: opGetSecurityConfigurations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetSecurityConfigurationsInput{} + } + + output = &GetSecurityConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSecurityConfigurations API operation for AWS Glue. +// +// Retrieves a list of all security configurations. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation GetSecurityConfigurations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeEntityNotFoundException "EntityNotFoundException" +// A specified entity does not exist +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. 
+// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetSecurityConfigurations +func (c *Glue) GetSecurityConfigurations(input *GetSecurityConfigurationsInput) (*GetSecurityConfigurationsOutput, error) { + req, out := c.GetSecurityConfigurationsRequest(input) + return out, req.Send() +} + +// GetSecurityConfigurationsWithContext is the same as GetSecurityConfigurations with the addition of +// the ability to pass a context and additional request options. +// +// See GetSecurityConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) GetSecurityConfigurationsWithContext(ctx aws.Context, input *GetSecurityConfigurationsInput, opts ...request.Option) (*GetSecurityConfigurationsOutput, error) { + req, out := c.GetSecurityConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetTable = "GetTable" // GetTableRequest generates a "aws/request.Request" representing the // client's request for the GetTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4948,6 +5652,9 @@ func (c *Glue) GetTableRequest(input *GetTableInput) (req *request.Request, outp // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetTable func (c *Glue) GetTable(input *GetTableInput) (*GetTableOutput, error) { req, out := c.GetTableRequest(input) @@ -4974,8 +5681,8 @@ const opGetTableVersion = "GetTableVersion" // GetTableVersionRequest generates a "aws/request.Request" representing the // client's request for the GetTableVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5036,6 +5743,9 @@ func (c *Glue) GetTableVersionRequest(input *GetTableVersionInput) (req *request // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. 
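`GetSecurityConfiguration`, defined in the preceding hunk, retrieves a single named security configuration and reports `EntityNotFoundException` when it is missing. A sketch of an existence check, with the `Name` field assumed (not shown here):

```go
// Sketch: looking up one security configuration by name and treating
// EntityNotFoundException as "absent". The Name field is an assumption;
// the input struct is defined outside this excerpt.
package glueexamples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/glue"
)

func securityConfigurationExists(svc *glue.Glue, name string) (bool, error) {
	_, err := svc.GetSecurityConfiguration(&glue.GetSecurityConfigurationInput{
		Name: aws.String(name), // assumed field name
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == glue.ErrCodeEntityNotFoundException {
			return false, nil
		}
		return false, err
	}
	return true, nil
}
```

This "absent vs. error" distinction is the pattern the provider typically needs when deciding whether a remote object still exists.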
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetTableVersion func (c *Glue) GetTableVersion(input *GetTableVersionInput) (*GetTableVersionOutput, error) { req, out := c.GetTableVersionRequest(input) @@ -5062,8 +5772,8 @@ const opGetTableVersions = "GetTableVersions" // GetTableVersionsRequest generates a "aws/request.Request" representing the // client's request for the GetTableVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5131,6 +5841,9 @@ func (c *Glue) GetTableVersionsRequest(input *GetTableVersionsInput) (req *reque // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetTableVersions func (c *Glue) GetTableVersions(input *GetTableVersionsInput) (*GetTableVersionsOutput, error) { req, out := c.GetTableVersionsRequest(input) @@ -5207,8 +5920,8 @@ const opGetTables = "GetTables" // GetTablesRequest generates a "aws/request.Request" representing the // client's request for the GetTables operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5275,6 +5988,9 @@ func (c *Glue) GetTablesRequest(input *GetTablesInput) (req *request.Request, ou // * ErrCodeInternalServiceException "InternalServiceException" // An internal service error occurred. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetTables func (c *Glue) GetTables(input *GetTablesInput) (*GetTablesOutput, error) { req, out := c.GetTablesRequest(input) @@ -5351,8 +6067,8 @@ const opGetTrigger = "GetTrigger" // GetTriggerRequest generates a "aws/request.Request" representing the // client's request for the GetTrigger operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5439,8 +6155,8 @@ const opGetTriggers = "GetTriggers" // GetTriggersRequest generates a "aws/request.Request" representing the // client's request for the GetTriggers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5583,8 +6299,8 @@ const opGetUserDefinedFunction = "GetUserDefinedFunction" // GetUserDefinedFunctionRequest generates a "aws/request.Request" representing the // client's request for the GetUserDefinedFunction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5645,6 +6361,9 @@ func (c *Glue) GetUserDefinedFunctionRequest(input *GetUserDefinedFunctionInput) // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetUserDefinedFunction func (c *Glue) GetUserDefinedFunction(input *GetUserDefinedFunctionInput) (*GetUserDefinedFunctionOutput, error) { req, out := c.GetUserDefinedFunctionRequest(input) @@ -5671,8 +6390,8 @@ const opGetUserDefinedFunctions = "GetUserDefinedFunctions" // GetUserDefinedFunctionsRequest generates a "aws/request.Request" representing the // client's request for the GetUserDefinedFunctions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5736,158 +6455,339 @@ func (c *Glue) GetUserDefinedFunctionsRequest(input *GetUserDefinedFunctionsInpu // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // -// * ErrCodeInternalServiceException "InternalServiceException" -// An internal service error occurred. -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetUserDefinedFunctions -func (c *Glue) GetUserDefinedFunctions(input *GetUserDefinedFunctionsInput) (*GetUserDefinedFunctionsOutput, error) { - req, out := c.GetUserDefinedFunctionsRequest(input) +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/GetUserDefinedFunctions +func (c *Glue) GetUserDefinedFunctions(input *GetUserDefinedFunctionsInput) (*GetUserDefinedFunctionsOutput, error) { + req, out := c.GetUserDefinedFunctionsRequest(input) + return out, req.Send() +} + +// GetUserDefinedFunctionsWithContext is the same as GetUserDefinedFunctions with the addition of +// the ability to pass a context and additional request options. +// +// See GetUserDefinedFunctions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *Glue) GetUserDefinedFunctionsWithContext(ctx aws.Context, input *GetUserDefinedFunctionsInput, opts ...request.Option) (*GetUserDefinedFunctionsOutput, error) { + req, out := c.GetUserDefinedFunctionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetUserDefinedFunctionsPages iterates over the pages of a GetUserDefinedFunctions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetUserDefinedFunctions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetUserDefinedFunctions operation. +// pageNum := 0 +// err := client.GetUserDefinedFunctionsPages(params, +// func(page *GetUserDefinedFunctionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Glue) GetUserDefinedFunctionsPages(input *GetUserDefinedFunctionsInput, fn func(*GetUserDefinedFunctionsOutput, bool) bool) error { + return c.GetUserDefinedFunctionsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetUserDefinedFunctionsPagesWithContext same as GetUserDefinedFunctionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) GetUserDefinedFunctionsPagesWithContext(ctx aws.Context, input *GetUserDefinedFunctionsInput, fn func(*GetUserDefinedFunctionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetUserDefinedFunctionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetUserDefinedFunctionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetUserDefinedFunctionsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opImportCatalogToGlue = "ImportCatalogToGlue" + +// ImportCatalogToGlueRequest generates a "aws/request.Request" representing the +// client's request for the ImportCatalogToGlue operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ImportCatalogToGlue for more information on using the ImportCatalogToGlue +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ImportCatalogToGlueRequest method. 
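The relocated `GetUserDefinedFunctionsPages` block above already includes a callback example in its doc comment; a slightly fuller sketch, with the `DatabaseName` and `Pattern` input fields assumed (they are not shown in this excerpt):

```go
// Sketch based on the GetUserDefinedFunctionsPages doc comment shown above.
// DatabaseName and Pattern on the input are assumptions.
package glueexamples

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/glue"
)

func printUserDefinedFunctions(svc *glue.Glue, database string) error {
	return svc.GetUserDefinedFunctionsPages(&glue.GetUserDefinedFunctionsInput{
		DatabaseName: aws.String(database), // assumed field name
		Pattern:      aws.String("*"),      // assumed field name
	}, func(page *glue.GetUserDefinedFunctionsOutput, lastPage bool) bool {
		fmt.Println(page)
		return true // request the next page until lastPage is reached
	})
}
```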
+// req, resp := client.ImportCatalogToGlueRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/ImportCatalogToGlue +func (c *Glue) ImportCatalogToGlueRequest(input *ImportCatalogToGlueInput) (req *request.Request, output *ImportCatalogToGlueOutput) { + op := &request.Operation{ + Name: opImportCatalogToGlue, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ImportCatalogToGlueInput{} + } + + output = &ImportCatalogToGlueOutput{} + req = c.newRequest(op, input, output) + return +} + +// ImportCatalogToGlue API operation for AWS Glue. +// +// Imports an existing Athena Data Catalog to AWS Glue +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation ImportCatalogToGlue for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/ImportCatalogToGlue +func (c *Glue) ImportCatalogToGlue(input *ImportCatalogToGlueInput) (*ImportCatalogToGlueOutput, error) { + req, out := c.ImportCatalogToGlueRequest(input) + return out, req.Send() +} + +// ImportCatalogToGlueWithContext is the same as ImportCatalogToGlue with the addition of +// the ability to pass a context and additional request options. +// +// See ImportCatalogToGlue for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Glue) ImportCatalogToGlueWithContext(ctx aws.Context, input *ImportCatalogToGlueInput, opts ...request.Option) (*ImportCatalogToGlueOutput, error) { + req, out := c.ImportCatalogToGlueRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutDataCatalogEncryptionSettings = "PutDataCatalogEncryptionSettings" + +// PutDataCatalogEncryptionSettingsRequest generates a "aws/request.Request" representing the +// client's request for the PutDataCatalogEncryptionSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutDataCatalogEncryptionSettings for more information on using the PutDataCatalogEncryptionSettings +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutDataCatalogEncryptionSettingsRequest method. 
+// req, resp := client.PutDataCatalogEncryptionSettingsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/PutDataCatalogEncryptionSettings +func (c *Glue) PutDataCatalogEncryptionSettingsRequest(input *PutDataCatalogEncryptionSettingsInput) (req *request.Request, output *PutDataCatalogEncryptionSettingsOutput) { + op := &request.Operation{ + Name: opPutDataCatalogEncryptionSettings, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutDataCatalogEncryptionSettingsInput{} + } + + output = &PutDataCatalogEncryptionSettingsOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutDataCatalogEncryptionSettings API operation for AWS Glue. +// +// Sets the security configuration for a specified catalog. Once the configuration +// has been set, the specified encryption is applied to every catalog write +// thereafter. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Glue's +// API operation PutDataCatalogEncryptionSettings for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServiceException "InternalServiceException" +// An internal service error occurred. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeOperationTimeoutException "OperationTimeoutException" +// The operation timed out. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/PutDataCatalogEncryptionSettings +func (c *Glue) PutDataCatalogEncryptionSettings(input *PutDataCatalogEncryptionSettingsInput) (*PutDataCatalogEncryptionSettingsOutput, error) { + req, out := c.PutDataCatalogEncryptionSettingsRequest(input) return out, req.Send() } -// GetUserDefinedFunctionsWithContext is the same as GetUserDefinedFunctions with the addition of +// PutDataCatalogEncryptionSettingsWithContext is the same as PutDataCatalogEncryptionSettings with the addition of // the ability to pass a context and additional request options. // -// See GetUserDefinedFunctions for details on how to use this API operation. +// See PutDataCatalogEncryptionSettings for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Glue) GetUserDefinedFunctionsWithContext(ctx aws.Context, input *GetUserDefinedFunctionsInput, opts ...request.Option) (*GetUserDefinedFunctionsOutput, error) { - req, out := c.GetUserDefinedFunctionsRequest(input) +func (c *Glue) PutDataCatalogEncryptionSettingsWithContext(ctx aws.Context, input *PutDataCatalogEncryptionSettingsInput, opts ...request.Option) (*PutDataCatalogEncryptionSettingsOutput, error) { + req, out := c.PutDataCatalogEncryptionSettingsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// GetUserDefinedFunctionsPages iterates over the pages of a GetUserDefinedFunctions operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. 
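`PutDataCatalogEncryptionSettings`, added above, applies encryption to every subsequent catalog write. A rough sketch of enabling it; the nested `DataCatalogEncryptionSettings` and `EncryptionAtRest` types and their fields are assumptions, since they are defined elsewhere in `api.go`:

```go
// Rough sketch of PutDataCatalogEncryptionSettings. The nested types and
// field names below are assumptions; they are not shown in this hunk.
package glueexamples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/glue"
)

func enableCatalogEncryption(svc *glue.Glue, kmsKeyID string) error {
	_, err := svc.PutDataCatalogEncryptionSettings(&glue.PutDataCatalogEncryptionSettingsInput{
		DataCatalogEncryptionSettings: &glue.DataCatalogEncryptionSettings{ // assumed type
			EncryptionAtRest: &glue.EncryptionAtRest{ // assumed type
				CatalogEncryptionMode: aws.String("SSE-KMS"), // assumed field and value
				SseAwsKmsKeyId:        aws.String(kmsKeyID),  // assumed field
			},
		},
	})
	return err
}
```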
-// -// See GetUserDefinedFunctions method for more information on how to use this operation. -// -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a GetUserDefinedFunctions operation. -// pageNum := 0 -// err := client.GetUserDefinedFunctionsPages(params, -// func(page *GetUserDefinedFunctionsOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *Glue) GetUserDefinedFunctionsPages(input *GetUserDefinedFunctionsInput, fn func(*GetUserDefinedFunctionsOutput, bool) bool) error { - return c.GetUserDefinedFunctionsPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// GetUserDefinedFunctionsPagesWithContext same as GetUserDefinedFunctionsPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *Glue) GetUserDefinedFunctionsPagesWithContext(ctx aws.Context, input *GetUserDefinedFunctionsInput, fn func(*GetUserDefinedFunctionsOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *GetUserDefinedFunctionsInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.GetUserDefinedFunctionsRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*GetUserDefinedFunctionsOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opImportCatalogToGlue = "ImportCatalogToGlue" +const opPutResourcePolicy = "PutResourcePolicy" -// ImportCatalogToGlueRequest generates a "aws/request.Request" representing the -// client's request for the ImportCatalogToGlue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// PutResourcePolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutResourcePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ImportCatalogToGlue for more information on using the ImportCatalogToGlue +// See PutResourcePolicy for more information on using the PutResourcePolicy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ImportCatalogToGlueRequest method. -// req, resp := client.ImportCatalogToGlueRequest(params) +// // Example sending a request using the PutResourcePolicyRequest method. 
+// req, resp := client.PutResourcePolicyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/ImportCatalogToGlue -func (c *Glue) ImportCatalogToGlueRequest(input *ImportCatalogToGlueInput) (req *request.Request, output *ImportCatalogToGlueOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/PutResourcePolicy +func (c *Glue) PutResourcePolicyRequest(input *PutResourcePolicyInput) (req *request.Request, output *PutResourcePolicyOutput) { op := &request.Operation{ - Name: opImportCatalogToGlue, + Name: opPutResourcePolicy, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ImportCatalogToGlueInput{} + input = &PutResourcePolicyInput{} } - output = &ImportCatalogToGlueOutput{} + output = &PutResourcePolicyOutput{} req = c.newRequest(op, input, output) return } -// ImportCatalogToGlue API operation for AWS Glue. +// PutResourcePolicy API operation for AWS Glue. // -// Imports an existing Athena Data Catalog to AWS Glue +// Sets the Data Catalog resource policy for access control. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Glue's -// API operation ImportCatalogToGlue for usage and error information. +// API operation PutResourcePolicy for usage and error information. // // Returned Error Codes: +// * ErrCodeEntityNotFoundException "EntityNotFoundException" +// A specified entity does not exist +// // * ErrCodeInternalServiceException "InternalServiceException" // An internal service error occurred. // // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/ImportCatalogToGlue -func (c *Glue) ImportCatalogToGlue(input *ImportCatalogToGlueInput) (*ImportCatalogToGlueOutput, error) { - req, out := c.ImportCatalogToGlueRequest(input) +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeConditionCheckFailureException "ConditionCheckFailureException" +// A specified condition was not satisfied. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/PutResourcePolicy +func (c *Glue) PutResourcePolicy(input *PutResourcePolicyInput) (*PutResourcePolicyOutput, error) { + req, out := c.PutResourcePolicyRequest(input) return out, req.Send() } -// ImportCatalogToGlueWithContext is the same as ImportCatalogToGlue with the addition of +// PutResourcePolicyWithContext is the same as PutResourcePolicy with the addition of // the ability to pass a context and additional request options. // -// See ImportCatalogToGlue for details on how to use this API operation. +// See PutResourcePolicy for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
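`PutResourcePolicy`, introduced in the preceding hunk, sets the Data Catalog resource policy for access control. A sketch assuming the policy document is passed through a `PolicyInJson` field (not shown in this excerpt):

```go
// Sketch: attaching a resource policy with the new PutResourcePolicy
// operation. The PolicyInJson field is an assumption; the input struct is
// defined outside this excerpt.
package glueexamples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/glue"
)

func putCatalogPolicy(svc *glue.Glue, policyJSON string) error {
	_, err := svc.PutResourcePolicy(&glue.PutResourcePolicyInput{
		PolicyInJson: aws.String(policyJSON), // assumed field name
	})
	return err
}
```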
-func (c *Glue) ImportCatalogToGlueWithContext(ctx aws.Context, input *ImportCatalogToGlueInput, opts ...request.Option) (*ImportCatalogToGlueOutput, error) { - req, out := c.ImportCatalogToGlueRequest(input) +func (c *Glue) PutResourcePolicyWithContext(ctx aws.Context, input *PutResourcePolicyInput, opts ...request.Option) (*PutResourcePolicyOutput, error) { + req, out := c.PutResourcePolicyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() @@ -5897,8 +6797,8 @@ const opResetJobBookmark = "ResetJobBookmark" // ResetJobBookmarkRequest generates a "aws/request.Request" representing the // client's request for the ResetJobBookmark operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5985,8 +6885,8 @@ const opStartCrawler = "StartCrawler" // StartCrawlerRequest generates a "aws/request.Request" representing the // client's request for the StartCrawler operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6026,7 +6926,7 @@ func (c *Glue) StartCrawlerRequest(input *StartCrawlerInput) (req *request.Reque // StartCrawler API operation for AWS Glue. // // Starts a crawl using the specified crawler, regardless of what is scheduled. -// If the crawler is already running, does nothing. +// If the crawler is already running, returns a CrawlerRunningException (https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-exceptions.html#aws-glue-api-exceptions-CrawlerRunningException). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6071,8 +6971,8 @@ const opStartCrawlerSchedule = "StartCrawlerSchedule" // StartCrawlerScheduleRequest generates a "aws/request.Request" representing the // client's request for the StartCrawlerSchedule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6163,8 +7063,8 @@ const opStartJobRun = "StartJobRun" // StartJobRunRequest generates a "aws/request.Request" representing the // client's request for the StartJobRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
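Earlier in this hunk, the `StartCrawler` documentation changes from silently doing nothing to returning `CrawlerRunningException` when the crawler is already running. A sketch of tolerating that case; the `ErrCodeCrawlerRunningException` constant and the `Name` field follow the SDK's usual naming and are assumptions here:

```go
// Sketch reflecting the clarified StartCrawler behavior: treat an
// already-running crawler as a non-error. The constant and field name used
// below are assumptions; neither appears in this excerpt.
package glueexamples

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/glue"
)

func startCrawlerIfIdle(svc *glue.Glue, name string) error {
	_, err := svc.StartCrawler(&glue.StartCrawlerInput{
		Name: aws.String(name), // assumed field name
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == glue.ErrCodeCrawlerRunningException {
		log.Printf("crawler %s is already running; nothing to do", name)
		return nil
	}
	return err
}
```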
@@ -6203,7 +7103,7 @@ func (c *Glue) StartJobRunRequest(input *StartJobRunInput) (req *request.Request // StartJobRun API operation for AWS Glue. // -// Runs a job. +// Starts a job run using a job definition. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6257,8 +7157,8 @@ const opStartTrigger = "StartTrigger" // StartTriggerRequest generates a "aws/request.Request" representing the // client's request for the StartTrigger operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6352,8 +7252,8 @@ const opStopCrawler = "StopCrawler" // StopCrawlerRequest generates a "aws/request.Request" representing the // client's request for the StopCrawler operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6440,8 +7340,8 @@ const opStopCrawlerSchedule = "StopCrawlerSchedule" // StopCrawlerScheduleRequest generates a "aws/request.Request" representing the // client's request for the StopCrawlerSchedule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6529,8 +7429,8 @@ const opStopTrigger = "StopTrigger" // StopTriggerRequest generates a "aws/request.Request" representing the // client's request for the StopTrigger operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6620,8 +7520,8 @@ const opUpdateClassifier = "UpdateClassifier" // UpdateClassifierRequest generates a "aws/request.Request" representing the // client's request for the UpdateClassifier operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6709,8 +7609,8 @@ const opUpdateConnection = "UpdateConnection" // UpdateConnectionRequest generates a "aws/request.Request" representing the // client's request for the UpdateConnection operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6768,6 +7668,12 @@ func (c *Glue) UpdateConnectionRequest(input *UpdateConnectionInput) (req *reque // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeInvalidInputException "InvalidInputException" +// The input provided was not valid. +// +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/UpdateConnection func (c *Glue) UpdateConnection(input *UpdateConnectionInput) (*UpdateConnectionOutput, error) { req, out := c.UpdateConnectionRequest(input) @@ -6794,8 +7700,8 @@ const opUpdateCrawler = "UpdateCrawler" // UpdateCrawlerRequest generates a "aws/request.Request" representing the // client's request for the UpdateCrawler operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6886,8 +7792,8 @@ const opUpdateCrawlerSchedule = "UpdateCrawlerSchedule" // UpdateCrawlerScheduleRequest generates a "aws/request.Request" representing the // client's request for the UpdateCrawlerSchedule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6977,8 +7883,8 @@ const opUpdateDatabase = "UpdateDatabase" // UpdateDatabaseRequest generates a "aws/request.Request" representing the // client's request for the UpdateDatabase operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7039,6 +7945,9 @@ func (c *Glue) UpdateDatabaseRequest(input *UpdateDatabaseInput) (req *request.R // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/UpdateDatabase func (c *Glue) UpdateDatabase(input *UpdateDatabaseInput) (*UpdateDatabaseOutput, error) { req, out := c.UpdateDatabaseRequest(input) @@ -7065,8 +7974,8 @@ const opUpdateDevEndpoint = "UpdateDevEndpoint" // UpdateDevEndpointRequest generates a "aws/request.Request" representing the // client's request for the UpdateDevEndpoint operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7156,8 +8065,8 @@ const opUpdateJob = "UpdateJob" // UpdateJobRequest generates a "aws/request.Request" representing the // client's request for the UpdateJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7247,8 +8156,8 @@ const opUpdatePartition = "UpdatePartition" // UpdatePartitionRequest generates a "aws/request.Request" representing the // client's request for the UpdatePartition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7309,6 +8218,9 @@ func (c *Glue) UpdatePartitionRequest(input *UpdatePartitionInput) (req *request // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/UpdatePartition func (c *Glue) UpdatePartition(input *UpdatePartitionInput) (*UpdatePartitionOutput, error) { req, out := c.UpdatePartitionRequest(input) @@ -7335,8 +8247,8 @@ const opUpdateTable = "UpdateTable" // UpdateTableRequest generates a "aws/request.Request" representing the // client's request for the UpdateTable operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7403,6 +8315,9 @@ func (c *Glue) UpdateTableRequest(input *UpdateTableInput) (req *request.Request // * ErrCodeResourceNumberLimitExceededException "ResourceNumberLimitExceededException" // A resource numerical limit was exceeded. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/UpdateTable func (c *Glue) UpdateTable(input *UpdateTableInput) (*UpdateTableOutput, error) { req, out := c.UpdateTableRequest(input) @@ -7429,8 +8344,8 @@ const opUpdateTrigger = "UpdateTrigger" // UpdateTriggerRequest generates a "aws/request.Request" representing the // client's request for the UpdateTrigger operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7520,8 +8435,8 @@ const opUpdateUserDefinedFunction = "UpdateUserDefinedFunction" // UpdateUserDefinedFunctionRequest generates a "aws/request.Request" representing the // client's request for the UpdateUserDefinedFunction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7582,6 +8497,9 @@ func (c *Glue) UpdateUserDefinedFunctionRequest(input *UpdateUserDefinedFunction // * ErrCodeOperationTimeoutException "OperationTimeoutException" // The operation timed out. // +// * ErrCodeEncryptionException "GlueEncryptionException" +// An encryption operation failed. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/UpdateUserDefinedFunction func (c *Glue) UpdateUserDefinedFunction(input *UpdateUserDefinedFunctionInput) (*UpdateUserDefinedFunctionOutput, error) { req, out := c.UpdateUserDefinedFunctionRequest(input) @@ -7608,7 +8526,7 @@ func (c *Glue) UpdateUserDefinedFunctionWithContext(ctx aws.Context, input *Upda type Action struct { _ struct{} `type:"structure"` - // Arguments to be passed to the job. + // Arguments to be passed to the job run. // // You can specify arguments here that your own job-execution script consumes, // as well as arguments that AWS Glue itself consumes. @@ -7618,12 +8536,24 @@ type Action struct { // topic in the developer guide. // // For information about the key-value pairs that AWS Glue consumes to set up - // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-glue-arguments.html) + // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) // topic in the developer guide. Arguments map[string]*string `type:"map"` // The name of a job to be executed. JobName *string `min:"1" type:"string"` + + // Specifies configuration properties of a job run notification. + NotificationProperty *NotificationProperty `type:"structure"` + + // The name of the SecurityConfiguration structure to be used with this action. + SecurityConfiguration *string `min:"1" type:"string"` + + // The JobRun timeout in minutes. This is the maximum time that a job run can + // consume resources before it is terminated and enters TIMEOUT status. The + // default is 2,880 minutes (48 hours). This overrides the timeout value set + // in the parent job. 
+ Timeout *int64 `min:"1" type:"integer"` } // String returns the string representation @@ -7642,6 +8572,17 @@ func (s *Action) Validate() error { if s.JobName != nil && len(*s.JobName) < 1 { invalidParams.Add(request.NewErrParamMinLen("JobName", 1)) } + if s.SecurityConfiguration != nil && len(*s.SecurityConfiguration) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityConfiguration", 1)) + } + if s.Timeout != nil && *s.Timeout < 1 { + invalidParams.Add(request.NewErrParamMinValue("Timeout", 1)) + } + if s.NotificationProperty != nil { + if err := s.NotificationProperty.Validate(); err != nil { + invalidParams.AddNested("NotificationProperty", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -7661,6 +8602,24 @@ func (s *Action) SetJobName(v string) *Action { return s } +// SetNotificationProperty sets the NotificationProperty field's value. +func (s *Action) SetNotificationProperty(v *NotificationProperty) *Action { + s.NotificationProperty = v + return s +} + +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *Action) SetSecurityConfiguration(v string) *Action { + s.SecurityConfiguration = &v + return s +} + +// SetTimeout sets the Timeout field's value. +func (s *Action) SetTimeout(v int64) *Action { + s.Timeout = &v + return s +} + type BatchCreatePartitionInput struct { _ struct{} `type:"structure"` @@ -8091,7 +9050,8 @@ type BatchDeleteTableVersionInput struct { // TableName is a required field TableName *string `min:"1" type:"string" required:"true"` - // A list of the IDs of versions to be deleted. + // A list of the IDs of versions to be deleted. A VersionId is a string representation + // of an integer. Each version is incremented by 1. // // VersionIds is a required field VersionIds []*string `type:"list" required:"true"` @@ -8310,17 +9270,17 @@ func (s *BatchGetPartitionOutput) SetUnprocessedKeys(v []*PartitionValueList) *B return s } -// Records an error that occurred when attempting to stop a specified JobRun. +// Records an error that occurred when attempting to stop a specified job run. type BatchStopJobRunError struct { _ struct{} `type:"structure"` // Specifies details about the error that was encountered. ErrorDetail *ErrorDetail `type:"structure"` - // The name of the Job in question. + // The name of the job definition used in the job run in question. JobName *string `min:"1" type:"string"` - // The JobRunId of the JobRun in question. + // The JobRunId of the job run in question. JobRunId *string `min:"1" type:"string"` } @@ -8355,12 +9315,12 @@ func (s *BatchStopJobRunError) SetJobRunId(v string) *BatchStopJobRunError { type BatchStopJobRunInput struct { _ struct{} `type:"structure"` - // The name of the Job in question. + // The name of the job definition for which to stop job runs. // // JobName is a required field JobName *string `min:"1" type:"string" required:"true"` - // A list of the JobRunIds that should be stopped for that Job. + // A list of the JobRunIds that should be stopped for that job definition. // // JobRunIds is a required field JobRunIds []*string `min:"1" type:"list" required:"true"` @@ -8447,10 +9407,10 @@ func (s *BatchStopJobRunOutput) SetSuccessfulSubmissions(v []*BatchStopJobRunSuc type BatchStopJobRunSuccessfulSubmission struct { _ struct{} `type:"structure"` - // The Name of the Job in question. + // The name of the job definition used in the job run that was stopped. JobName *string `min:"1" type:"string"` - // The JobRunId of the JobRun in question. 
+ // The JobRunId of the job run that was stopped. JobRunId *string `min:"1" type:"string"` } @@ -8543,7 +9503,7 @@ type CatalogImportStatus struct { ImportCompleted *bool `type:"boolean"` // The time that the migration was started. - ImportTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ImportTime *time.Time `type:"timestamp"` // The name of the person who initiated the migration. ImportedBy *string `min:"1" type:"string"` @@ -8577,14 +9537,15 @@ func (s *CatalogImportStatus) SetImportedBy(v string) *CatalogImportStatus { return s } -// Classifiers are written in Python and triggered during a crawl task. You -// can write your own classifiers to best categorize your data sources and specify -// the appropriate schemas to use for them. A classifier checks whether a given -// file is in a format it can handle, and if it is, the classifier creates a -// schema in the form of a StructType object that matches that data format. +// Classifiers are triggered during a crawl task. A classifier checks whether +// a given file is in a format it can handle, and if it is, the classifier creates +// a schema in the form of a StructType object that matches that data format. // -// A classifier can be a grok classifier, an XML classifier, or a JSON classifier, -// asspecified in one of the fields in the Classifier object. +// You can use the standard classifiers that AWS Glue supplies, or you can write +// your own classifiers to best categorize your data sources and specify the +// appropriate schemas to use for them. A classifier can be a grok classifier, +// an XML classifier, or a JSON classifier, as specified in one of the fields +// in the Classifier object. type Classifier struct { _ struct{} `type:"structure"` @@ -8626,6 +9587,39 @@ func (s *Classifier) SetXMLClassifier(v *XMLClassifier) *Classifier { return s } +// Specifies how CloudWatch data should be encrypted. +type CloudWatchEncryption struct { + _ struct{} `type:"structure"` + + // The encryption mode to use for CloudWatch data. + CloudWatchEncryptionMode *string `type:"string" enum:"CloudWatchEncryptionMode"` + + // The AWS ARN of the KMS key to be used to encrypt the data. + KmsKeyArn *string `type:"string"` +} + +// String returns the string representation +func (s CloudWatchEncryption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudWatchEncryption) GoString() string { + return s.String() +} + +// SetCloudWatchEncryptionMode sets the CloudWatchEncryptionMode field's value. +func (s *CloudWatchEncryption) SetCloudWatchEncryptionMode(v string) *CloudWatchEncryption { + s.CloudWatchEncryptionMode = &v + return s +} + +// SetKmsKeyArn sets the KmsKeyArn field's value. +func (s *CloudWatchEncryption) SetKmsKeyArn(v string) *CloudWatchEncryption { + s.KmsKeyArn = &v + return s +} + // Represents a directional edge in a directed acyclic graph (DAG). type CodeGenEdge struct { _ struct{} `type:"structure"` @@ -8916,8 +9910,8 @@ type Condition struct { // A logical operator. LogicalOperator *string `type:"string" enum:"LogicalOperator"` - // The condition state. Currently, the values supported are SUCCEEDED, STOPPED - // and FAILED. + // The condition state. Currently, the values supported are SUCCEEDED, STOPPED, + // TIMEOUT and FAILED. 
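The Action type above gains SecurityConfiguration and Timeout fields (plus a NotificationProperty), and the Condition state list now includes TIMEOUT. A small sketch that builds a trigger action with the chained setters added in these hunks; all values are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	// The chained setters come straight from the generated code above; each
	// returns *glue.Action, so the calls compose.
	action := (&glue.Action{}).
		SetJobName("example-etl-job").
		SetSecurityConfiguration("example-security-configuration").
		SetTimeout(120) // minutes; overrides the timeout set in the parent job

	// Validate applies the same min-length/min-value checks added above.
	if err := action.Validate(); err != nil {
		fmt.Println("invalid action:", err)
		return
	}
	fmt.Println(action) // String() pretty-prints the struct
}
```

Such an action would normally be supplied in a trigger's action list, for example through CreateTriggerInput, which appears further down in this diff.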
State *string `type:"string" enum:"JobRunState"` } @@ -8966,7 +9960,37 @@ func (s *Condition) SetState(v string) *Condition { type Connection struct { _ struct{} `type:"structure"` - // A list of key-value pairs used as parameters for this connection. + // These key-value pairs define parameters for the connection: + // + // * HOST - The host URI: either the fully qualified domain name (FQDN) or + // the IPv4 address of the database host. + // + // * PORT - The port number, between 1024 and 65535, of the port on which + // the database host is listening for database connections. + // + // * USER_NAME - The name under which to log in to the database. The value + // string for USER_NAME is "USERNAME". + // + // * PASSWORD - A password, if one is used, for the user name. + // + // * JDBC_DRIVER_JAR_URI - The S3 path of the a jar file that contains the + // JDBC driver to use. + // + // * JDBC_DRIVER_CLASS_NAME - The class name of the JDBC driver to use. + // + // * JDBC_ENGINE - The name of the JDBC engine to use. + // + // * JDBC_ENGINE_VERSION - The version of the JDBC engine to use. + // + // * CONFIG_FILES - (Reserved for future use). + // + // * INSTANCE_ID - The instance ID to use. + // + // * JDBC_CONNECTION_URL - The URL for the JDBC connection. + // + // * JDBC_ENFORCE_SSL - A Boolean string (true, false) specifying whether + // SSL with hostname matching will be enforced for the JDBC connection on + // the client. The default is false. ConnectionProperties map[string]*string `type:"map"` // The type of the connection. Currently, only JDBC is supported; SFTP is not @@ -8974,7 +9998,7 @@ type Connection struct { ConnectionType *string `type:"string" enum:"ConnectionType"` // The time this connection definition was created. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Description of the connection. Description *string `type:"string"` @@ -8983,7 +10007,7 @@ type Connection struct { LastUpdatedBy *string `min:"1" type:"string"` // The last time this connection definition was updated. - LastUpdatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdatedTime *time.Time `type:"timestamp"` // A list of criteria that can be used in selecting this connection. MatchCriteria []*string `type:"list"` @@ -9064,7 +10088,7 @@ func (s *Connection) SetPhysicalConnectionRequirements(v *PhysicalConnectionRequ type ConnectionInput struct { _ struct{} `type:"structure"` - // A list of key-value pairs used as parameters for this connection. + // These key-value pairs define parameters for the connection. // // ConnectionProperties is a required field ConnectionProperties map[string]*string `type:"map" required:"true"` @@ -9198,23 +10222,19 @@ type Crawler struct { Classifiers []*string `type:"list"` // Crawler configuration information. This versioned JSON string allows users - // to specify aspects of a Crawler's behavior. - // - // You can use this field to force partitions to inherit metadata such as classification, - // input format, output format, serde information, and schema from their parent - // table, rather than detect this information separately for each partition. - // Use the following JSON string to specify that behavior: - // - // Example: '{ "Version": 1.0, "CrawlerOutput": { "Partitions": { "AddOrUpdateBehavior": - // "InheritFromTable" } } }' + // to specify aspects of a crawler's behavior. 
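The expanded ConnectionProperties documentation above lists the JDBC connection keys. A sketch of creating a JDBC connection with those keys; CreateConnectionInput and the ConnectionInput Name field are assumed from the surrounding Glue API, and note that the USER_NAME property is passed under the string key "USERNAME", as the list above states:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	client := glue.New(session.Must(session.NewSession()))

	_, err := client.CreateConnection(&glue.CreateConnectionInput{
		ConnectionInput: &glue.ConnectionInput{
			Name:           aws.String("example-jdbc-connection"), // assumed field
			ConnectionType: aws.String("JDBC"),                    // only JDBC is supported, per the docs above
			ConnectionProperties: map[string]*string{
				"JDBC_CONNECTION_URL": aws.String("jdbc:mysql://example.internal:3306/exampledb"),
				"USERNAME":            aws.String("example_user"),
				"PASSWORD":            aws.String("example_password"),
				"JDBC_ENFORCE_SSL":    aws.String("false"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("connection created")
}
```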
For more information, see Configuring + // a Crawler (http://docs.aws.amazon.com/glue/latest/dg/crawler-configuration.html). Configuration *string `type:"string"` // If the crawler is running, contains the total time elapsed since the last // crawl began. CrawlElapsedTime *int64 `type:"long"` + // The name of the SecurityConfiguration structure to be used by this Crawler. + CrawlerSecurityConfiguration *string `type:"string"` + // The time when the crawler was created. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // The database where metadata is written by this crawler. DatabaseName *string `type:"string"` @@ -9227,7 +10247,7 @@ type Crawler struct { LastCrawl *LastCrawlInfo `type:"structure"` // The time the crawler was last updated. - LastUpdated *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdated *time.Time `type:"timestamp"` // The crawler name. Name *string `min:"1" type:"string"` @@ -9283,6 +10303,12 @@ func (s *Crawler) SetCrawlElapsedTime(v int64) *Crawler { return s } +// SetCrawlerSecurityConfiguration sets the CrawlerSecurityConfiguration field's value. +func (s *Crawler) SetCrawlerSecurityConfiguration(v string) *Crawler { + s.CrawlerSecurityConfiguration = &v + return s +} + // SetCreationTime sets the CreationTime field's value. func (s *Crawler) SetCreationTime(v time.Time) *Crawler { s.CreationTime = &v @@ -9453,6 +10479,9 @@ func (s *CrawlerMetrics) SetTimeLeftSeconds(v float64) *CrawlerMetrics { type CrawlerTargets struct { _ struct{} `type:"structure"` + // Specifies DynamoDB targets. + DynamoDBTargets []*DynamoDBTarget `type:"list"` + // Specifies JDBC targets. JdbcTargets []*JdbcTarget `type:"list"` @@ -9470,6 +10499,12 @@ func (s CrawlerTargets) GoString() string { return s.String() } +// SetDynamoDBTargets sets the DynamoDBTargets field's value. +func (s *CrawlerTargets) SetDynamoDBTargets(v []*DynamoDBTarget) *CrawlerTargets { + s.DynamoDBTargets = v + return s +} + // SetJdbcTargets sets the JdbcTargets field's value. func (s *CrawlerTargets) SetJdbcTargets(v []*JdbcTarget) *CrawlerTargets { s.JdbcTargets = v @@ -9636,22 +10671,18 @@ type CreateCrawlerInput struct { _ struct{} `type:"structure"` // A list of custom classifiers that the user has registered. By default, all - // AWS classifiers are included in a crawl, but these custom classifiers always - // override the default classifiers for a given classification. + // built-in classifiers are included in a crawl, but these custom classifiers + // always override the default classifiers for a given classification. Classifiers []*string `type:"list"` // Crawler configuration information. This versioned JSON string allows users - // to specify aspects of a Crawler's behavior. - // - // You can use this field to force partitions to inherit metadata such as classification, - // input format, output format, serde information, and schema from their parent - // table, rather than detect this information separately for each partition. - // Use the following JSON string to specify that behavior: - // - // Example: '{ "Version": 1.0, "CrawlerOutput": { "Partitions": { "AddOrUpdateBehavior": - // "InheritFromTable" } } }' + // to specify aspects of a crawler's behavior. For more information, see Configuring + // a Crawler (http://docs.aws.amazon.com/glue/latest/dg/crawler-configuration.html). Configuration *string `type:"string"` + // The name of the SecurityConfiguration structure to be used by this Crawler. 
+ CrawlerSecurityConfiguration *string `type:"string"` + // The AWS Glue database where results are written, such as: arn:aws:daylight:us-east-1::database/sometable/*. // // DatabaseName is a required field @@ -9736,6 +10767,12 @@ func (s *CreateCrawlerInput) SetConfiguration(v string) *CreateCrawlerInput { return s } +// SetCrawlerSecurityConfiguration sets the CrawlerSecurityConfiguration field's value. +func (s *CreateCrawlerInput) SetCrawlerSecurityConfiguration(v string) *CreateCrawlerInput { + s.CrawlerSecurityConfiguration = &v + return s +} + // SetDatabaseName sets the DatabaseName field's value. func (s *CreateCrawlerInput) SetDatabaseName(v string) *CreateCrawlerInput { s.DatabaseName = &v @@ -9892,16 +10929,29 @@ type CreateDevEndpointInput struct { // The number of AWS Glue Data Processing Units (DPUs) to allocate to this DevEndpoint. NumberOfNodes *int64 `type:"integer"` - // The public key to use for authentication. + // The public key to be used by this DevEndpoint for authentication. This attribute + // is provided for backward compatibility, as the recommended attribute to use + // is public keys. + PublicKey *string `type:"string"` + + // A list of public keys to be used by the DevEndpoints for authentication. + // The use of this attribute is preferred over a single public key because the + // public keys allow you to have a different private key per client. // - // PublicKey is a required field - PublicKey *string `type:"string" required:"true"` + // If you previously created an endpoint with a public key, you must remove + // that key to be able to set a list of public keys: call the UpdateDevEndpoint + // API with the public key content in the deletePublicKeys attribute, and the + // list of new keys in the addPublicKeys attribute. + PublicKeys []*string `type:"list"` // The IAM role for the DevEndpoint. // // RoleArn is a required field RoleArn *string `type:"string" required:"true"` + // The name of the SecurityConfiguration structure to be used with this DevEndpoint. + SecurityConfiguration *string `min:"1" type:"string"` + // Security group IDs for the security groups to be used by the new DevEndpoint. SecurityGroupIds []*string `type:"list"` @@ -9925,12 +10975,12 @@ func (s *CreateDevEndpointInput) Validate() error { if s.EndpointName == nil { invalidParams.Add(request.NewErrParamRequired("EndpointName")) } - if s.PublicKey == nil { - invalidParams.Add(request.NewErrParamRequired("PublicKey")) - } if s.RoleArn == nil { invalidParams.Add(request.NewErrParamRequired("RoleArn")) } + if s.SecurityConfiguration != nil && len(*s.SecurityConfiguration) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityConfiguration", 1)) + } if invalidParams.Len() > 0 { return invalidParams @@ -9968,12 +11018,24 @@ func (s *CreateDevEndpointInput) SetPublicKey(v string) *CreateDevEndpointInput return s } +// SetPublicKeys sets the PublicKeys field's value. +func (s *CreateDevEndpointInput) SetPublicKeys(v []*string) *CreateDevEndpointInput { + s.PublicKeys = v + return s +} + // SetRoleArn sets the RoleArn field's value. func (s *CreateDevEndpointInput) SetRoleArn(v string) *CreateDevEndpointInput { s.RoleArn = &v return s } +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *CreateDevEndpointInput) SetSecurityConfiguration(v string) *CreateDevEndpointInput { + s.SecurityConfiguration = &v + return s +} + // SetSecurityGroupIds sets the SecurityGroupIds field's value. 
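These hunks add DynamoDB crawl targets and a CrawlerSecurityConfiguration field. A sketch of creating such a crawler; the Name, Role, and Targets fields are assumed from the wider CreateCrawlerInput (only DatabaseName and the new fields are visible here), and DynamoDBTarget itself is defined later in this diff:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	client := glue.New(session.Must(session.NewSession()))

	_, err := client.CreateCrawler(&glue.CreateCrawlerInput{
		Name:         aws.String("example-dynamodb-crawler"), // assumed required field
		Role:         aws.String("arn:aws:iam::123456789012:role/example-glue-role"),
		DatabaseName: aws.String("example_database"),
		// New in this hunk: encrypt crawler logs via a named security configuration.
		CrawlerSecurityConfiguration: aws.String("example-security-configuration"),
		Targets: &glue.CrawlerTargets{
			// New DynamoDB target type; Path is the table name.
			DynamoDBTargets: []*glue.DynamoDBTarget{
				{Path: aws.String("example-dynamodb-table")},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("crawler created")
}
```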
func (s *CreateDevEndpointInput) SetSecurityGroupIds(v []*string) *CreateDevEndpointInput { s.SecurityGroupIds = v @@ -9993,7 +11055,7 @@ type CreateDevEndpointOutput struct { AvailabilityZone *string `type:"string"` // The point in time at which this DevEndpoint was created. - CreatedTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTimestamp *time.Time `type:"timestamp"` // The name assigned to the new DevEndpoint. EndpointName *string `type:"string"` @@ -10015,6 +11077,9 @@ type CreateDevEndpointOutput struct { // The AWS ARN of the role assigned to the new DevEndpoint. RoleArn *string `type:"string"` + // The name of the SecurityConfiguration structure being used with this DevEndpoint. + SecurityConfiguration *string `min:"1" type:"string"` + // The security groups assigned to the new DevEndpoint. SecurityGroupIds []*string `type:"list"` @@ -10092,6 +11157,12 @@ func (s *CreateDevEndpointOutput) SetRoleArn(v string) *CreateDevEndpointOutput return s } +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *CreateDevEndpointOutput) SetSecurityConfiguration(v string) *CreateDevEndpointOutput { + s.SecurityConfiguration = &v + return s +} + // SetSecurityGroupIds sets the SecurityGroupIds field's value. func (s *CreateDevEndpointOutput) SetSecurityGroupIds(v []*string) *CreateDevEndpointOutput { s.SecurityGroupIds = v @@ -10239,11 +11310,11 @@ type CreateJobInput struct { // topic in the developer guide. // // For information about the key-value pairs that AWS Glue consumes to set up - // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-glue-arguments.html) + // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) // topic in the developer guide. DefaultArguments map[string]*string `type:"map"` - // Description of the job. + // Description of the job being defined. Description *string `type:"string"` // An ExecutionProperty specifying the maximum number of concurrent runs allowed @@ -10256,15 +11327,26 @@ type CreateJobInput struct { // The maximum number of times to retry this job if it fails. MaxRetries *int64 `type:"integer"` - // The name you assign to this job. It must be unique in your account. + // The name you assign to this job definition. It must be unique in your account. // // Name is a required field Name *string `min:"1" type:"string" required:"true"` - // The name of the IAM role associated with this job. + // Specifies configuration properties of a job notification. + NotificationProperty *NotificationProperty `type:"structure"` + + // The name or ARN of the IAM role associated with this job. // // Role is a required field Role *string `type:"string" required:"true"` + + // The name of the SecurityConfiguration structure to be used with this job. + SecurityConfiguration *string `min:"1" type:"string"` + + // The job timeout in minutes. This is the maximum time that a job run can consume + // resources before it is terminated and enters TIMEOUT status. The default + // is 2,880 minutes (48 hours). 
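The DevEndpoint changes above replace the single required PublicKey with an optional PublicKeys list, one entry per client key pair, and add a SecurityConfiguration name. A sketch of creating an endpoint with the new fields, using placeholder keys; the migration note above still applies to endpoints created with the legacy single key:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	client := glue.New(session.Must(session.NewSession()))

	out, err := client.CreateDevEndpoint(&glue.CreateDevEndpointInput{
		EndpointName: aws.String("example-dev-endpoint"),
		RoleArn:      aws.String("arn:aws:iam::123456789012:role/example-glue-role"),
		// Preferred over the legacy PublicKey field: one entry per client key pair.
		PublicKeys: aws.StringSlice([]string{
			"ssh-rsa AAAA... user1",
			"ssh-rsa AAAA... user2",
		}),
		SecurityConfiguration: aws.String("example-security-configuration"),
		NumberOfNodes:         aws.Int64(2),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created endpoint:", aws.StringValue(out.EndpointName))
}
```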
+ Timeout *int64 `min:"1" type:"integer"` } // String returns the string representation @@ -10292,6 +11374,17 @@ func (s *CreateJobInput) Validate() error { if s.Role == nil { invalidParams.Add(request.NewErrParamRequired("Role")) } + if s.SecurityConfiguration != nil && len(*s.SecurityConfiguration) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityConfiguration", 1)) + } + if s.Timeout != nil && *s.Timeout < 1 { + invalidParams.Add(request.NewErrParamMinValue("Timeout", 1)) + } + if s.NotificationProperty != nil { + if err := s.NotificationProperty.Validate(); err != nil { + invalidParams.AddNested("NotificationProperty", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -10353,16 +11446,34 @@ func (s *CreateJobInput) SetName(v string) *CreateJobInput { return s } +// SetNotificationProperty sets the NotificationProperty field's value. +func (s *CreateJobInput) SetNotificationProperty(v *NotificationProperty) *CreateJobInput { + s.NotificationProperty = v + return s +} + // SetRole sets the Role field's value. func (s *CreateJobInput) SetRole(v string) *CreateJobInput { s.Role = &v return s } +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *CreateJobInput) SetSecurityConfiguration(v string) *CreateJobInput { + s.SecurityConfiguration = &v + return s +} + +// SetTimeout sets the Timeout field's value. +func (s *CreateJobInput) SetTimeout(v int64) *CreateJobInput { + s.Timeout = &v + return s +} + type CreateJobOutput struct { _ struct{} `type:"structure"` - // The unique name that was provided. + // The unique name that was provided for this job definition. Name *string `min:"1" type:"string"` } @@ -10647,6 +11758,93 @@ func (s *CreateScriptOutput) SetScalaCode(v string) *CreateScriptOutput { return s } +type CreateSecurityConfigurationInput struct { + _ struct{} `type:"structure"` + + // The encryption configuration for the new security configuration. + // + // EncryptionConfiguration is a required field + EncryptionConfiguration *EncryptionConfiguration `type:"structure" required:"true"` + + // The name for the new security configuration. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateSecurityConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSecurityConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateSecurityConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateSecurityConfigurationInput"} + if s.EncryptionConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("EncryptionConfiguration")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. +func (s *CreateSecurityConfigurationInput) SetEncryptionConfiguration(v *EncryptionConfiguration) *CreateSecurityConfigurationInput { + s.EncryptionConfiguration = v + return s +} + +// SetName sets the Name field's value. 
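CreateJobInput above picks up NotificationProperty, SecurityConfiguration, and Timeout. A sketch of a job definition that uses them; the required Command field (a JobCommand with Name and ScriptLocation) and NotificationProperty's NotifyDelayAfter field are assumed from the wider Glue API rather than shown in this excerpt:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	client := glue.New(session.Must(session.NewSession()))

	out, err := client.CreateJob(&glue.CreateJobInput{
		Name: aws.String("example-etl-job"),
		Role: aws.String("arn:aws:iam::123456789012:role/example-glue-role"),
		// Assumed required Command; not part of the hunks above.
		Command: &glue.JobCommand{
			Name:           aws.String("glueetl"),
			ScriptLocation: aws.String("s3://example-bucket/scripts/example.py"),
		},
		DefaultArguments: map[string]*string{
			"--job-language": aws.String("python"),
		},
		// New fields introduced in this hunk:
		SecurityConfiguration: aws.String("example-security-configuration"),
		Timeout:               aws.Int64(120), // minutes before the run enters TIMEOUT
		NotificationProperty: &glue.NotificationProperty{
			NotifyDelayAfter: aws.Int64(10), // assumed field: minutes before a delay notification
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created job definition:", aws.StringValue(out.Name))
}
```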
+func (s *CreateSecurityConfigurationInput) SetName(v string) *CreateSecurityConfigurationInput { + s.Name = &v + return s +} + +type CreateSecurityConfigurationOutput struct { + _ struct{} `type:"structure"` + + // The time at which the new security configuration was created. + CreatedTimestamp *time.Time `type:"timestamp"` + + // The name assigned to the new security configuration. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateSecurityConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSecurityConfigurationOutput) GoString() string { + return s.String() +} + +// SetCreatedTimestamp sets the CreatedTimestamp field's value. +func (s *CreateSecurityConfigurationOutput) SetCreatedTimestamp(v time.Time) *CreateSecurityConfigurationOutput { + s.CreatedTimestamp = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateSecurityConfigurationOutput) SetName(v string) *CreateSecurityConfigurationOutput { + s.Name = &v + return s +} + type CreateTableInput struct { _ struct{} `type:"structure"` @@ -10764,6 +11962,10 @@ type CreateTriggerInput struct { // This field is required when the trigger type is SCHEDULED. Schedule *string `type:"string"` + // Set to true to start SCHEDULED and CONDITIONAL triggers when created. True + // not supported for ON_DEMAND triggers. + StartOnCreation *bool `type:"boolean"` + // The type of the new trigger. // // Type is a required field @@ -10847,6 +12049,12 @@ func (s *CreateTriggerInput) SetSchedule(v string) *CreateTriggerInput { return s } +// SetStartOnCreation sets the StartOnCreation field's value. +func (s *CreateTriggerInput) SetStartOnCreation(v bool) *CreateTriggerInput { + s.StartOnCreation = &v + return s +} + // SetType sets the Type field's value. func (s *CreateTriggerInput) SetType(v string) *CreateTriggerInput { s.Type = &v @@ -11032,13 +12240,52 @@ func (s *CreateXMLClassifierRequest) SetRowTag(v string) *CreateXMLClassifierReq return s } +// Contains configuration information for maintaining Data Catalog security. +type DataCatalogEncryptionSettings struct { + _ struct{} `type:"structure"` + + // Specifies encryption-at-rest configuration for the Data Catalog. + EncryptionAtRest *EncryptionAtRest `type:"structure"` +} + +// String returns the string representation +func (s DataCatalogEncryptionSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DataCatalogEncryptionSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DataCatalogEncryptionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DataCatalogEncryptionSettings"} + if s.EncryptionAtRest != nil { + if err := s.EncryptionAtRest.Validate(); err != nil { + invalidParams.AddNested("EncryptionAtRest", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEncryptionAtRest sets the EncryptionAtRest field's value. +func (s *DataCatalogEncryptionSettings) SetEncryptionAtRest(v *EncryptionAtRest) *DataCatalogEncryptionSettings { + s.EncryptionAtRest = v + return s +} + // The Database object represents a logical grouping of tables that may reside // in a Hive metastore or an RDBMS. 
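Several of the new fields in this diff reference a SecurityConfiguration by name. Creating one ties CreateSecurityConfigurationInput to the EncryptionConfiguration and CloudWatchEncryption types added elsewhere in this diff; the "SSE-KMS" mode string is an assumed enum value, since the valid modes are not listed in this excerpt:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	client := glue.New(session.Must(session.NewSession()))

	out, err := client.CreateSecurityConfiguration(&glue.CreateSecurityConfigurationInput{
		Name: aws.String("example-security-configuration"),
		EncryptionConfiguration: &glue.EncryptionConfiguration{
			// Encrypt CloudWatch log data with a customer-managed KMS key.
			CloudWatchEncryption: &glue.CloudWatchEncryption{
				CloudWatchEncryptionMode: aws.String("SSE-KMS"), // assumed mode value
				KmsKeyArn:                aws.String("arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created:", aws.StringValue(out.Name), "at", out.CreatedTimestamp)
}
```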
type Database struct { _ struct{} `type:"structure"` // The time at which the metadata database was created in the catalog. - CreateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreateTime *time.Time `type:"timestamp"` // Description of the database. Description *string `type:"string"` @@ -11052,7 +12299,7 @@ type Database struct { // Name is a required field Name *string `min:"1" type:"string" required:"true"` - // A list of key-value pairs that define parameters and properties of the database. + // These key-value pairs define parameters and properties of the database. Parameters map[string]*string `type:"map"` } @@ -11112,7 +12359,7 @@ type DatabaseInput struct { // Name is a required field Name *string `min:"1" type:"string" required:"true"` - // A list of key-value pairs that define parameters and properties of the database. + // Thes key-value pairs define parameters and properties of the database. Parameters map[string]*string `type:"map"` } @@ -11471,7 +12718,7 @@ func (s DeleteDevEndpointOutput) GoString() string { type DeleteJobInput struct { _ struct{} `type:"structure"` - // The name of the job to delete. + // The name of the job definition to delete. // // JobName is a required field JobName *string `min:"1" type:"string" required:"true"` @@ -11512,7 +12759,7 @@ func (s *DeleteJobInput) SetJobName(v string) *DeleteJobInput { type DeleteJobOutput struct { _ struct{} `type:"structure"` - // The name of the job that was deleted. + // The name of the job definition that was deleted. JobName *string `min:"1" type:"string"` } @@ -11631,6 +12878,111 @@ func (s DeletePartitionOutput) GoString() string { return s.String() } +type DeleteResourcePolicyInput struct { + _ struct{} `type:"structure"` + + // The hash value returned when this policy was set. + PolicyHashCondition *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteResourcePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteResourcePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteResourcePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteResourcePolicyInput"} + if s.PolicyHashCondition != nil && len(*s.PolicyHashCondition) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyHashCondition", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyHashCondition sets the PolicyHashCondition field's value. +func (s *DeleteResourcePolicyInput) SetPolicyHashCondition(v string) *DeleteResourcePolicyInput { + s.PolicyHashCondition = &v + return s +} + +type DeleteResourcePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteResourcePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteResourcePolicyOutput) GoString() string { + return s.String() +} + +type DeleteSecurityConfigurationInput struct { + _ struct{} `type:"structure"` + + // The name of the security configuration to delete. 
+ // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteSecurityConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSecurityConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSecurityConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSecurityConfigurationInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *DeleteSecurityConfigurationInput) SetName(v string) *DeleteSecurityConfigurationInput { + s.Name = &v + return s +} + +type DeleteSecurityConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteSecurityConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSecurityConfigurationOutput) GoString() string { + return s.String() +} + type DeleteTableInput struct { _ struct{} `type:"structure"` @@ -11736,7 +13088,8 @@ type DeleteTableVersionInput struct { // TableName is a required field TableName *string `min:"1" type:"string" required:"true"` - // The ID of the table version to be deleted. + // The ID of the table version to be deleted. A VersionID is a string representation + // of an integer. Each version is incremented by 1. // // VersionId is a required field VersionId *string `min:"1" type:"string" required:"true"` @@ -11978,7 +13331,7 @@ type DevEndpoint struct { AvailabilityZone *string `type:"string"` // The point in time at which this DevEndpoint was created. - CreatedTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTimestamp *time.Time `type:"timestamp"` // The name of the DevEndpoint. EndpointName *string `type:"string"` @@ -12003,7 +13356,7 @@ type DevEndpoint struct { FailureReason *string `type:"string"` // The point in time at which this DevEndpoint was last modified. - LastModifiedTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedTimestamp *time.Time `type:"timestamp"` // The status of the last update. LastUpdateStatus *string `type:"string"` @@ -12011,15 +13364,36 @@ type DevEndpoint struct { // The number of AWS Glue Data Processing Units (DPUs) allocated to this DevEndpoint. NumberOfNodes *int64 `type:"integer"` - // The public address used by this DevEndpoint. + // A private IP address to access the DevEndpoint within a VPC, if the DevEndpoint + // is created within one. The PrivateAddress field is present only when you + // create the DevEndpoint within your virtual private cloud (VPC). + PrivateAddress *string `type:"string"` + + // The public IP address used by this DevEndpoint. The PublicAddress field is + // present only when you create a non-VPC (virtual private cloud) DevEndpoint. PublicAddress *string `type:"string"` - // The public key to be used by this DevEndpoint for authentication. + // The public key to be used by this DevEndpoint for authentication. This attribute + // is provided for backward compatibility, as the recommended attribute to use + // is public keys. 
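DeleteResourcePolicyInput above takes an optional PolicyHashCondition, so the Data Catalog policy can be deleted only if it still matches the hash that was last read. A sketch pairing it with GetResourcePolicy, whose output (later in this diff) carries the matching PolicyHash:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/glue"
)

func main() {
	client := glue.New(session.Must(session.NewSession()))

	// Read the current policy to capture its hash.
	current, err := client.GetResourcePolicy(&glue.GetResourcePolicyInput{})
	if err != nil {
		log.Fatal(err)
	}

	// Delete only if the policy still matches the hash read above; the service
	// is expected to reject the call with a condition failure if the policy
	// changed in the meantime, rather than silently clobbering it.
	_, err = client.DeleteResourcePolicy(&glue.DeleteResourcePolicyInput{
		PolicyHashCondition: current.PolicyHash,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("resource policy deleted")
}
```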
PublicKey *string `type:"string"` + // A list of public keys to be used by the DevEndpoints for authentication. + // The use of this attribute is preferred over a single public key because the + // public keys allow you to have a different private key per client. + // + // If you previously created an endpoint with a public key, you must remove + // that key to be able to set a list of public keys: call the UpdateDevEndpoint + // API with the public key content in the deletePublicKeys attribute, and the + // list of new keys in the addPublicKeys attribute. + PublicKeys []*string `type:"list"` + // The AWS ARN of the IAM role used in this DevEndpoint. RoleArn *string `type:"string"` + // The name of the SecurityConfiguration structure to be used with this DevEndpoint. + SecurityConfiguration *string `min:"1" type:"string"` + // A list of security group identifiers used in this DevEndpoint. SecurityGroupIds []*string `type:"list"` @@ -12103,6 +13477,12 @@ func (s *DevEndpoint) SetNumberOfNodes(v int64) *DevEndpoint { return s } +// SetPrivateAddress sets the PrivateAddress field's value. +func (s *DevEndpoint) SetPrivateAddress(v string) *DevEndpoint { + s.PrivateAddress = &v + return s +} + // SetPublicAddress sets the PublicAddress field's value. func (s *DevEndpoint) SetPublicAddress(v string) *DevEndpoint { s.PublicAddress = &v @@ -12115,12 +13495,24 @@ func (s *DevEndpoint) SetPublicKey(v string) *DevEndpoint { return s } +// SetPublicKeys sets the PublicKeys field's value. +func (s *DevEndpoint) SetPublicKeys(v []*string) *DevEndpoint { + s.PublicKeys = v + return s +} + // SetRoleArn sets the RoleArn field's value. func (s *DevEndpoint) SetRoleArn(v string) *DevEndpoint { s.RoleArn = &v return s } +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *DevEndpoint) SetSecurityConfiguration(v string) *DevEndpoint { + s.SecurityConfiguration = &v + return s +} + // SetSecurityGroupIds sets the SecurityGroupIds field's value. func (s *DevEndpoint) SetSecurityGroupIds(v []*string) *DevEndpoint { s.SecurityGroupIds = v @@ -12200,6 +13592,123 @@ func (s *DevEndpointCustomLibraries) SetExtraPythonLibsS3Path(v string) *DevEndp return s } +// Specifies a DynamoDB table to crawl. +type DynamoDBTarget struct { + _ struct{} `type:"structure"` + + // The name of the DynamoDB table to crawl. + Path *string `type:"string"` +} + +// String returns the string representation +func (s DynamoDBTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DynamoDBTarget) GoString() string { + return s.String() +} + +// SetPath sets the Path field's value. +func (s *DynamoDBTarget) SetPath(v string) *DynamoDBTarget { + s.Path = &v + return s +} + +// Specifies encryption-at-rest configuration for the Data Catalog. +type EncryptionAtRest struct { + _ struct{} `type:"structure"` + + // The encryption-at-rest mode for encrypting Data Catalog data. + // + // CatalogEncryptionMode is a required field + CatalogEncryptionMode *string `type:"string" required:"true" enum:"CatalogEncryptionMode"` + + // The ID of the AWS KMS key to use for encryption at rest. + SseAwsKmsKeyId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s EncryptionAtRest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EncryptionAtRest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *EncryptionAtRest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EncryptionAtRest"} + if s.CatalogEncryptionMode == nil { + invalidParams.Add(request.NewErrParamRequired("CatalogEncryptionMode")) + } + if s.SseAwsKmsKeyId != nil && len(*s.SseAwsKmsKeyId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SseAwsKmsKeyId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogEncryptionMode sets the CatalogEncryptionMode field's value. +func (s *EncryptionAtRest) SetCatalogEncryptionMode(v string) *EncryptionAtRest { + s.CatalogEncryptionMode = &v + return s +} + +// SetSseAwsKmsKeyId sets the SseAwsKmsKeyId field's value. +func (s *EncryptionAtRest) SetSseAwsKmsKeyId(v string) *EncryptionAtRest { + s.SseAwsKmsKeyId = &v + return s +} + +// Specifies an encryption configuration. +type EncryptionConfiguration struct { + _ struct{} `type:"structure"` + + // The encryption configuration for CloudWatch. + CloudWatchEncryption *CloudWatchEncryption `type:"structure"` + + // The encryption configuration for Job Bookmarks. + JobBookmarksEncryption *JobBookmarksEncryption `type:"structure"` + + // The encryption configuration for S3 data. + S3Encryption []*S3Encryption `type:"list"` +} + +// String returns the string representation +func (s EncryptionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EncryptionConfiguration) GoString() string { + return s.String() +} + +// SetCloudWatchEncryption sets the CloudWatchEncryption field's value. +func (s *EncryptionConfiguration) SetCloudWatchEncryption(v *CloudWatchEncryption) *EncryptionConfiguration { + s.CloudWatchEncryption = v + return s +} + +// SetJobBookmarksEncryption sets the JobBookmarksEncryption field's value. +func (s *EncryptionConfiguration) SetJobBookmarksEncryption(v *JobBookmarksEncryption) *EncryptionConfiguration { + s.JobBookmarksEncryption = v + return s +} + +// SetS3Encryption sets the S3Encryption field's value. +func (s *EncryptionConfiguration) SetS3Encryption(v []*S3Encryption) *EncryptionConfiguration { + s.S3Encryption = v + return s +} + // Contains details about an error. type ErrorDetail struct { _ struct{} `type:"structure"` @@ -12237,9 +13746,9 @@ func (s *ErrorDetail) SetErrorMessage(v string) *ErrorDetail { type ExecutionProperty struct { _ struct{} `type:"structure"` - // The maximum number of concurrent runs allowed for a job. The default is 1. - // An error is returned when this threshold is reached. The maximum value you - // can specify is controlled by a service limit. + // The maximum number of concurrent runs allowed for the job. The default is + // 1. An error is returned when this threshold is reached. The maximum value + // you can specify is controlled by a service limit. MaxConcurrentRuns *int64 `type:"integer"` } @@ -12868,36 +14377,96 @@ func (s *GetCrawlersInput) SetNextToken(v string) *GetCrawlersInput { return s } -type GetCrawlersOutput struct { +type GetCrawlersOutput struct { + _ struct{} `type:"structure"` + + // A list of crawler metadata. + Crawlers []*Crawler `type:"list"` + + // A continuation token, if the returned list has not reached the end of those + // defined in this customer account. 
+ NextToken *string `type:"string"` +} + +// String returns the string representation +func (s GetCrawlersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCrawlersOutput) GoString() string { + return s.String() +} + +// SetCrawlers sets the Crawlers field's value. +func (s *GetCrawlersOutput) SetCrawlers(v []*Crawler) *GetCrawlersOutput { + s.Crawlers = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetCrawlersOutput) SetNextToken(v string) *GetCrawlersOutput { + s.NextToken = &v + return s +} + +type GetDataCatalogEncryptionSettingsInput struct { + _ struct{} `type:"structure"` + + // The ID of the Data Catalog for which to retrieve the security configuration. + // If none is supplied, the AWS account ID is used by default. + CatalogId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetDataCatalogEncryptionSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDataCatalogEncryptionSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetDataCatalogEncryptionSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDataCatalogEncryptionSettingsInput"} + if s.CatalogId != nil && len(*s.CatalogId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CatalogId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogId sets the CatalogId field's value. +func (s *GetDataCatalogEncryptionSettingsInput) SetCatalogId(v string) *GetDataCatalogEncryptionSettingsInput { + s.CatalogId = &v + return s +} + +type GetDataCatalogEncryptionSettingsOutput struct { _ struct{} `type:"structure"` - // A list of crawler metadata. - Crawlers []*Crawler `type:"list"` - - // A continuation token, if the returned list has not reached the end of those - // defined in this customer account. - NextToken *string `type:"string"` + // The requested security configuration. + DataCatalogEncryptionSettings *DataCatalogEncryptionSettings `type:"structure"` } // String returns the string representation -func (s GetCrawlersOutput) String() string { +func (s GetDataCatalogEncryptionSettingsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetCrawlersOutput) GoString() string { +func (s GetDataCatalogEncryptionSettingsOutput) GoString() string { return s.String() } -// SetCrawlers sets the Crawlers field's value. -func (s *GetCrawlersOutput) SetCrawlers(v []*Crawler) *GetCrawlersOutput { - s.Crawlers = v - return s -} - -// SetNextToken sets the NextToken field's value. -func (s *GetCrawlersOutput) SetNextToken(v string) *GetCrawlersOutput { - s.NextToken = &v +// SetDataCatalogEncryptionSettings sets the DataCatalogEncryptionSettings field's value. +func (s *GetDataCatalogEncryptionSettingsOutput) SetDataCatalogEncryptionSettings(v *DataCatalogEncryptionSettings) *GetDataCatalogEncryptionSettingsOutput { + s.DataCatalogEncryptionSettings = v return s } @@ -13268,7 +14837,7 @@ func (s *GetDevEndpointsOutput) SetNextToken(v string) *GetDevEndpointsOutput { type GetJobInput struct { _ struct{} `type:"structure"` - // The name of the job to retrieve. + // The name of the job definition to retrieve. 
// // JobName is a required field JobName *string `min:"1" type:"string" required:"true"` @@ -13332,7 +14901,7 @@ func (s *GetJobOutput) SetJob(v *Job) *GetJobOutput { type GetJobRunInput struct { _ struct{} `type:"structure"` - // Name of the job being run. + // Name of the job definition being run. // // JobName is a required field JobName *string `min:"1" type:"string" required:"true"` @@ -13422,7 +14991,7 @@ func (s *GetJobRunOutput) SetJobRun(v *JobRun) *GetJobRunOutput { type GetJobRunsInput struct { _ struct{} `type:"structure"` - // The name of the job for which to retrieve all job runs. + // The name of the job definition for which to retrieve all job runs. // // JobName is a required field JobName *string `min:"1" type:"string" required:"true"` @@ -13561,10 +15130,10 @@ func (s *GetJobsInput) SetNextToken(v string) *GetJobsInput { type GetJobsOutput struct { _ struct{} `type:"structure"` - // A list of jobs. + // A list of job definitions. Jobs []*Job `type:"list"` - // A continuation token, if not all jobs have yet been returned. + // A continuation token, if not all job definitions have yet been returned. NextToken *string `type:"string"` } @@ -13812,6 +15381,76 @@ type GetPartitionsInput struct { DatabaseName *string `min:"1" type:"string" required:"true"` // An expression filtering the partitions to be returned. + // + // The expression uses SQL syntax similar to the SQL WHERE filter clause. The + // SQL statement parser JSQLParser (http://jsqlparser.sourceforge.net/home.php) + // parses the expression. + // + // Operators: The following are the operators that you can use in the Expression + // API call: + // + // =Checks if the values of the two operands are equal or not; if yes, then + // the condition becomes true. + // + // Example: Assume 'variable a' holds 10 and 'variable b' holds 20. + // + // (a = b) is not true. + // + // < >Checks if the values of two operands are equal or not; if the values are + // not equal, then the condition becomes true. + // + // Example: (a < > b) is true. + // + // >Checks if the value of the left operand is greater than the value of the + // right operand; if yes, then the condition becomes true. + // + // Example: (a > b) is not true. + // + // =Checks if the value of the left operand is greater than or equal to the + // value of the right operand; if yes, then the condition becomes true. + // + // Example: (a >= b) is not true. + // + // <=Checks if the value of the left operand is less than or equal to the value + // of the right operand; if yes, then the condition becomes true. + // + // Example: (a <= b) is true. + // + // AND, OR, IN, BETWEEN, LIKE, NOT, IS NULLLogical operators. + // + // Supported Partition Key Types: The following are the the supported partition + // keys. + // + // * string + // + // * date + // + // * timestamp + // + // * int + // + // * bigint + // + // * long + // + // * tinyint + // + // * smallint + // + // * decimal + // + // If an invalid type is encountered, an exception is thrown. + // + // The following list shows the valid operators on each type. When you define + // a crawler, the partitionKey type is created as a STRING, to be compatible + // with the catalog partitions. + // + // Sample API Call: Expression *string `type:"string"` // The maximum number of partitions to return in a single response. 
@@ -14078,6 +15717,211 @@ func (s *GetPlanOutput) SetScalaCode(v string) *GetPlanOutput { return s } +type GetResourcePolicyInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s GetResourcePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetResourcePolicyInput) GoString() string { + return s.String() +} + +type GetResourcePolicyOutput struct { + _ struct{} `type:"structure"` + + // The date and time at which the policy was created. + CreateTime *time.Time `type:"timestamp"` + + // Contains the hash value associated with this policy. + PolicyHash *string `min:"1" type:"string"` + + // Contains the requested policy document, in JSON format. + PolicyInJson *string `min:"2" type:"string"` + + // The date and time at which the policy was last updated. + UpdateTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s GetResourcePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetResourcePolicyOutput) GoString() string { + return s.String() +} + +// SetCreateTime sets the CreateTime field's value. +func (s *GetResourcePolicyOutput) SetCreateTime(v time.Time) *GetResourcePolicyOutput { + s.CreateTime = &v + return s +} + +// SetPolicyHash sets the PolicyHash field's value. +func (s *GetResourcePolicyOutput) SetPolicyHash(v string) *GetResourcePolicyOutput { + s.PolicyHash = &v + return s +} + +// SetPolicyInJson sets the PolicyInJson field's value. +func (s *GetResourcePolicyOutput) SetPolicyInJson(v string) *GetResourcePolicyOutput { + s.PolicyInJson = &v + return s +} + +// SetUpdateTime sets the UpdateTime field's value. +func (s *GetResourcePolicyOutput) SetUpdateTime(v time.Time) *GetResourcePolicyOutput { + s.UpdateTime = &v + return s +} + +type GetSecurityConfigurationInput struct { + _ struct{} `type:"structure"` + + // The name of the security configuration to retrieve. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetSecurityConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSecurityConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSecurityConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSecurityConfigurationInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. 
+func (s *GetSecurityConfigurationInput) SetName(v string) *GetSecurityConfigurationInput { + s.Name = &v + return s +} + +type GetSecurityConfigurationOutput struct { + _ struct{} `type:"structure"` + + // The requested security configuration + SecurityConfiguration *SecurityConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetSecurityConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSecurityConfigurationOutput) GoString() string { + return s.String() +} + +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *GetSecurityConfigurationOutput) SetSecurityConfiguration(v *SecurityConfiguration) *GetSecurityConfigurationOutput { + s.SecurityConfiguration = v + return s +} + +type GetSecurityConfigurationsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to return. + MaxResults *int64 `min:"1" type:"integer"` + + // A continuation token, if this is a continuation call. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s GetSecurityConfigurationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSecurityConfigurationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSecurityConfigurationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSecurityConfigurationsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetSecurityConfigurationsInput) SetMaxResults(v int64) *GetSecurityConfigurationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetSecurityConfigurationsInput) SetNextToken(v string) *GetSecurityConfigurationsInput { + s.NextToken = &v + return s +} + +type GetSecurityConfigurationsOutput struct { + _ struct{} `type:"structure"` + + // A continuation token, if there are more security configurations to return. + NextToken *string `type:"string"` + + // A list of security configurations. + SecurityConfigurations []*SecurityConfiguration `type:"list"` +} + +// String returns the string representation +func (s GetSecurityConfigurationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSecurityConfigurationsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *GetSecurityConfigurationsOutput) SetNextToken(v string) *GetSecurityConfigurationsOutput { + s.NextToken = &v + return s +} + +// SetSecurityConfigurations sets the SecurityConfigurations field's value. +func (s *GetSecurityConfigurationsOutput) SetSecurityConfigurations(v []*SecurityConfiguration) *GetSecurityConfigurationsOutput { + s.SecurityConfigurations = v + return s +} + type GetTableInput struct { _ struct{} `type:"structure"` @@ -14192,7 +16036,8 @@ type GetTableVersionInput struct { // TableName is a required field TableName *string `min:"1" type:"string" required:"true"` - // The ID value of the table version to be retrieved. + // The ID value of the table version to be retrieved. 
A VersionID is a string + // representation of an integer. Each version is incremented by 1. VersionId *string `min:"1" type:"string"` } @@ -14911,7 +16756,7 @@ type GrokClassifier struct { Classification *string `type:"string" required:"true"` // The time this classifier was registered. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // Optional custom grok patterns defined by this classifier. For more information, // see custom patterns in Writing Custom Classifers (http://docs.aws.amazon.com/glue/latest/dg/custom-classifier.html). @@ -14924,7 +16769,7 @@ type GrokClassifier struct { GrokPattern *string `min:"1" type:"string" required:"true"` // The time this classifier was last updated. - LastUpdated *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdated *time.Time `type:"timestamp"` // The name of the classifier. // @@ -15081,15 +16926,15 @@ func (s *JdbcTarget) SetPath(v string) *JdbcTarget { return s } -// Specifies a job. +// Specifies a job definition. type Job struct { _ struct{} `type:"structure"` - // The number of AWS Glue data processing units (DPUs) allocated to this Job. - // From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative - // measure of processing power that consists of 4 vCPUs of compute capacity - // and 16 GB of memory. For more information, see the AWS Glue pricing page - // (https://aws.amazon.com/glue/pricing/). + // The number of AWS Glue data processing units (DPUs) allocated to runs of + // this job. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is + // a relative measure of processing power that consists of 4 vCPUs of compute + // capacity and 16 GB of memory. For more information, see the AWS Glue pricing + // page (https://aws.amazon.com/glue/pricing/). AllocatedCapacity *int64 `type:"integer"` // The JobCommand that executes this job. @@ -15098,8 +16943,8 @@ type Job struct { // The connections used for this job. Connections *ConnectionsList `type:"structure"` - // The time and date that this job specification was created. - CreatedOn *time.Time `type:"timestamp" timestampFormat:"unix"` + // The time and date that this job definition was created. + CreatedOn *time.Time `type:"timestamp"` // The default arguments for this job, specified as name-value pairs. // @@ -15111,31 +16956,42 @@ type Job struct { // topic in the developer guide. // // For information about the key-value pairs that AWS Glue consumes to set up - // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-glue-arguments.html) + // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) // topic in the developer guide. DefaultArguments map[string]*string `type:"map"` - // Description of this job. + // Description of the job being defined. Description *string `type:"string"` // An ExecutionProperty specifying the maximum number of concurrent runs allowed // for this job. ExecutionProperty *ExecutionProperty `type:"structure"` - // The last point in time when this job specification was modified. - LastModifiedOn *time.Time `type:"timestamp" timestampFormat:"unix"` + // The last point in time when this job definition was modified. + LastModifiedOn *time.Time `type:"timestamp"` // This field is reserved for future use. LogUri *string `type:"string"` - // The maximum number of times to retry this job if it fails. 
+ // The maximum number of times to retry this job after a JobRun fails. MaxRetries *int64 `type:"integer"` - // The name you assign to this job. + // The name you assign to this job definition. Name *string `min:"1" type:"string"` - // The name of the IAM role associated with this job. + // Specifies configuration properties of a job notification. + NotificationProperty *NotificationProperty `type:"structure"` + + // The name or ARN of the IAM role associated with this job. Role *string `type:"string"` + + // The name of the SecurityConfiguration structure to be used with this job. + SecurityConfiguration *string `min:"1" type:"string"` + + // The job timeout in minutes. This is the maximum time that a job run can consume + // resources before it is terminated and enters TIMEOUT status. The default + // is 2,880 minutes (48 hours). + Timeout *int64 `min:"1" type:"integer"` } // String returns the string representation @@ -15214,12 +17070,30 @@ func (s *Job) SetName(v string) *Job { return s } +// SetNotificationProperty sets the NotificationProperty field's value. +func (s *Job) SetNotificationProperty(v *NotificationProperty) *Job { + s.NotificationProperty = v + return s +} + // SetRole sets the Role field's value. func (s *Job) SetRole(v string) *Job { s.Role = &v return s } +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *Job) SetSecurityConfiguration(v string) *Job { + s.SecurityConfiguration = &v + return s +} + +// SetTimeout sets the Timeout field's value. +func (s *Job) SetTimeout(v int64) *Job { + s.Timeout = &v + return s +} + // Defines a point which a job can resume processing. type JobBookmarkEntry struct { _ struct{} `type:"structure"` @@ -15280,7 +17154,40 @@ func (s *JobBookmarkEntry) SetVersion(v int64) *JobBookmarkEntry { return s } -// Specifies code that executes a job. +// Specifies how Job bookmark data should be encrypted. +type JobBookmarksEncryption struct { + _ struct{} `type:"structure"` + + // The encryption mode to use for Job bookmarks data. + JobBookmarksEncryptionMode *string `type:"string" enum:"JobBookmarksEncryptionMode"` + + // The AWS ARN of the KMS key to be used to encrypt the data. + KmsKeyArn *string `type:"string"` +} + +// String returns the string representation +func (s JobBookmarksEncryption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JobBookmarksEncryption) GoString() string { + return s.String() +} + +// SetJobBookmarksEncryptionMode sets the JobBookmarksEncryptionMode field's value. +func (s *JobBookmarksEncryption) SetJobBookmarksEncryptionMode(v string) *JobBookmarksEncryption { + s.JobBookmarksEncryptionMode = &v + return s +} + +// SetKmsKeyArn sets the KmsKeyArn field's value. +func (s *JobBookmarksEncryption) SetKmsKeyArn(v string) *JobBookmarksEncryption { + s.KmsKeyArn = &v + return s +} + +// Specifies code executed when a job is run. type JobCommand struct { _ struct{} `type:"structure"` @@ -15335,7 +17242,7 @@ type JobRun struct { // topic in the developer guide. // // For information about the key-value pairs that AWS Glue consumes to set up - // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-glue-arguments.html) + // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) // topic in the developer guide. 
Arguments map[string]*string `type:"map"` @@ -15343,22 +17250,35 @@ type JobRun struct { Attempt *int64 `type:"integer"` // The date and time this job run completed. - CompletedOn *time.Time `type:"timestamp" timestampFormat:"unix"` + CompletedOn *time.Time `type:"timestamp"` // An error message associated with this job run. ErrorMessage *string `type:"string"` + // The amount of time (in seconds) that the job run consumed resources. + ExecutionTime *int64 `type:"integer"` + // The ID of this job run. Id *string `min:"1" type:"string"` - // The name of the job being run. + // The name of the job definition being used in this run. JobName *string `min:"1" type:"string"` // The current state of the job run. JobRunState *string `type:"string" enum:"JobRunState"` // The last time this job run was modified. - LastModifiedOn *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedOn *time.Time `type:"timestamp"` + + // The name of the log group for secure logging, that can be server-side encrypted + // in CloudWatch using KMS. This name can be /aws-glue/jobs/, in which case + // the default encryption is NONE. If you add a role name and SecurityConfiguration + // name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/), + // then that security configuration will be used to encrypt the log group. + LogGroupName *string `type:"string"` + + // Specifies configuration properties of a job run notification. + NotificationProperty *NotificationProperty `type:"structure"` // A list of predecessors to this job run. PredecessorRuns []*Predecessor `type:"list"` @@ -15367,8 +17287,18 @@ type JobRun struct { // in the StartJobRun action. PreviousRunId *string `min:"1" type:"string"` + // The name of the SecurityConfiguration structure to be used with this job + // run. + SecurityConfiguration *string `min:"1" type:"string"` + // The date and time at which this job run was started. - StartedOn *time.Time `type:"timestamp" timestampFormat:"unix"` + StartedOn *time.Time `type:"timestamp"` + + // The JobRun timeout in minutes. This is the maximum time that a job run can + // consume resources before it is terminated and enters TIMEOUT status. The + // default is 2,880 minutes (48 hours). This overrides the timeout value set + // in the parent job. + Timeout *int64 `min:"1" type:"integer"` // The name of the trigger that started this job run. TriggerName *string `min:"1" type:"string"` @@ -15414,6 +17344,12 @@ func (s *JobRun) SetErrorMessage(v string) *JobRun { return s } +// SetExecutionTime sets the ExecutionTime field's value. +func (s *JobRun) SetExecutionTime(v int64) *JobRun { + s.ExecutionTime = &v + return s +} + // SetId sets the Id field's value. func (s *JobRun) SetId(v string) *JobRun { s.Id = &v @@ -15438,6 +17374,18 @@ func (s *JobRun) SetLastModifiedOn(v time.Time) *JobRun { return s } +// SetLogGroupName sets the LogGroupName field's value. +func (s *JobRun) SetLogGroupName(v string) *JobRun { + s.LogGroupName = &v + return s +} + +// SetNotificationProperty sets the NotificationProperty field's value. +func (s *JobRun) SetNotificationProperty(v *NotificationProperty) *JobRun { + s.NotificationProperty = v + return s +} + // SetPredecessorRuns sets the PredecessorRuns field's value. func (s *JobRun) SetPredecessorRuns(v []*Predecessor) *JobRun { s.PredecessorRuns = v @@ -15450,20 +17398,32 @@ func (s *JobRun) SetPreviousRunId(v string) *JobRun { return s } +// SetSecurityConfiguration sets the SecurityConfiguration field's value. 
+func (s *JobRun) SetSecurityConfiguration(v string) *JobRun { + s.SecurityConfiguration = &v + return s +} + // SetStartedOn sets the StartedOn field's value. func (s *JobRun) SetStartedOn(v time.Time) *JobRun { s.StartedOn = &v return s } +// SetTimeout sets the Timeout field's value. +func (s *JobRun) SetTimeout(v int64) *JobRun { + s.Timeout = &v + return s +} + // SetTriggerName sets the TriggerName field's value. func (s *JobRun) SetTriggerName(v string) *JobRun { s.TriggerName = &v return s } -// Specifies information used to update an existing job. Note that the previous -// job definition will be completely overwritten by this information. +// Specifies information used to update an existing job definition. Note that +// the previous job definition will be completely overwritten by this information. type JobUpdate struct { _ struct{} `type:"structure"` @@ -15490,11 +17450,11 @@ type JobUpdate struct { // topic in the developer guide. // // For information about the key-value pairs that AWS Glue consumes to set up - // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-glue-arguments.html) + // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) // topic in the developer guide. DefaultArguments map[string]*string `type:"map"` - // Description of the job. + // Description of the job being defined. Description *string `type:"string"` // An ExecutionProperty specifying the maximum number of concurrent runs allowed @@ -15507,8 +17467,19 @@ type JobUpdate struct { // The maximum number of times to retry this job if it fails. MaxRetries *int64 `type:"integer"` - // The name of the IAM role associated with this job (required). + // Specifies configuration properties of a job notification. + NotificationProperty *NotificationProperty `type:"structure"` + + // The name or ARN of the IAM role associated with this job (required). Role *string `type:"string"` + + // The name of the SecurityConfiguration structure to be used with this job. + SecurityConfiguration *string `min:"1" type:"string"` + + // The job timeout in minutes. This is the maximum time that a job run can consume + // resources before it is terminated and enters TIMEOUT status. The default + // is 2,880 minutes (48 hours). + Timeout *int64 `min:"1" type:"integer"` } // String returns the string representation @@ -15521,6 +17492,27 @@ func (s JobUpdate) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *JobUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "JobUpdate"} + if s.SecurityConfiguration != nil && len(*s.SecurityConfiguration) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityConfiguration", 1)) + } + if s.Timeout != nil && *s.Timeout < 1 { + invalidParams.Add(request.NewErrParamMinValue("Timeout", 1)) + } + if s.NotificationProperty != nil { + if err := s.NotificationProperty.Validate(); err != nil { + invalidParams.AddNested("NotificationProperty", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAllocatedCapacity sets the AllocatedCapacity field's value. 
func (s *JobUpdate) SetAllocatedCapacity(v int64) *JobUpdate { s.AllocatedCapacity = &v @@ -15569,18 +17561,36 @@ func (s *JobUpdate) SetMaxRetries(v int64) *JobUpdate { return s } +// SetNotificationProperty sets the NotificationProperty field's value. +func (s *JobUpdate) SetNotificationProperty(v *NotificationProperty) *JobUpdate { + s.NotificationProperty = v + return s +} + // SetRole sets the Role field's value. func (s *JobUpdate) SetRole(v string) *JobUpdate { s.Role = &v return s } +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *JobUpdate) SetSecurityConfiguration(v string) *JobUpdate { + s.SecurityConfiguration = &v + return s +} + +// SetTimeout sets the Timeout field's value. +func (s *JobUpdate) SetTimeout(v int64) *JobUpdate { + s.Timeout = &v + return s +} + // A classifier for JSON content. type JsonClassifier struct { _ struct{} `type:"structure"` // The time this classifier was registered. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // A JsonPath string defining the JSON data for the classifier to classify. // AWS Glue supports a subset of JsonPath, as described in Writing JsonPath @@ -15590,7 +17600,7 @@ type JsonClassifier struct { JsonPath *string `type:"string" required:"true"` // The time this classifier was last updated. - LastUpdated *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdated *time.Time `type:"timestamp"` // The name of the classifier. // @@ -15658,7 +17668,7 @@ type LastCrawlInfo struct { MessagePrefix *string `min:"1" type:"string"` // The time at which the crawl started. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // Status of the last crawl. Status *string `type:"string" enum:"LastCrawlStatus"` @@ -15714,6 +17724,9 @@ func (s *LastCrawlInfo) SetStatus(v string) *LastCrawlInfo { type Location struct { _ struct{} `type:"structure"` + // A DynamoDB Table location. + DynamoDB []*CodeGenNodeArg `type:"list"` + // A JDBC location. Jdbc []*CodeGenNodeArg `type:"list"` @@ -15734,6 +17747,16 @@ func (s Location) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *Location) Validate() error { invalidParams := request.ErrInvalidParams{Context: "Location"} + if s.DynamoDB != nil { + for i, v := range s.DynamoDB { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "DynamoDB", i), err.(request.ErrInvalidParams)) + } + } + } if s.Jdbc != nil { for i, v := range s.Jdbc { if v == nil { @@ -15761,6 +17784,12 @@ func (s *Location) Validate() error { return nil } +// SetDynamoDB sets the DynamoDB field's value. +func (s *Location) SetDynamoDB(v []*CodeGenNodeArg) *Location { + s.DynamoDB = v + return s +} + // SetJdbc sets the Jdbc field's value. func (s *Location) SetJdbc(v []*CodeGenNodeArg) *Location { s.Jdbc = v @@ -15842,6 +17871,44 @@ func (s *MappingEntry) SetTargetType(v string) *MappingEntry { return s } +// Specifies configuration properties of a notification. +type NotificationProperty struct { + _ struct{} `type:"structure"` + + // After a job run starts, the number of minutes to wait before sending a job + // run delay notification. 
+ NotifyDelayAfter *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s NotificationProperty) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NotificationProperty) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *NotificationProperty) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NotificationProperty"} + if s.NotifyDelayAfter != nil && *s.NotifyDelayAfter < 1 { + invalidParams.Add(request.NewErrParamMinValue("NotifyDelayAfter", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNotifyDelayAfter sets the NotifyDelayAfter field's value. +func (s *NotificationProperty) SetNotifyDelayAfter(v int64) *NotificationProperty { + s.NotifyDelayAfter = &v + return s +} + // Specifies the sort order of a sorted column. type Order struct { _ struct{} `type:"structure"` @@ -15904,18 +17971,18 @@ type Partition struct { _ struct{} `type:"structure"` // The time at which the partition was created. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // The name of the catalog database where the table in question is located. DatabaseName *string `min:"1" type:"string"` // The last time at which the partition was accessed. - LastAccessTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAccessTime *time.Time `type:"timestamp"` // The last time at which column statistics were computed for this partition. - LastAnalyzedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAnalyzedTime *time.Time `type:"timestamp"` - // Partition parameters, in the form of a list of key-value pairs. + // These key-value pairs define partition parameters. Parameters map[string]*string `type:"map"` // Provides information about the physical location where the partition is stored. @@ -16024,12 +18091,12 @@ type PartitionInput struct { _ struct{} `type:"structure"` // The last time at which the partition was accessed. - LastAccessTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAccessTime *time.Time `type:"timestamp"` // The last time at which column statistics were computed for this partition. - LastAnalyzedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAnalyzedTime *time.Time `type:"timestamp"` - // Partition parameters, in the form of a list of key-value pairs. + // These key-value pairs define partition parameters. Parameters map[string]*string `type:"map"` // Provides information about the physical location where the partition is stored. @@ -16137,7 +18204,9 @@ func (s *PartitionValueList) SetValues(v []*string) *PartitionValueList { type PhysicalConnectionRequirements struct { _ struct{} `type:"structure"` - // The connection's availability zone. This field is deprecated and has no effect. + // The connection's availability zone. This field is redundant, since the specified + // subnet implies the availability zone to be used. The field must be populated + // now, but will be deprecated in the future. AvailabilityZone *string `min:"1" type:"string"` // The security group ID list used by the connection. @@ -16196,7 +18265,7 @@ func (s *PhysicalConnectionRequirements) SetSubnetId(v string) *PhysicalConnecti type Predecessor struct { _ struct{} `type:"structure"` - // The name of the predecessor job. + // The name of the job definition used by the predecessor job run. 
JobName *string `min:"1" type:"string"` // The job-run ID of the predecessor job run. @@ -16232,7 +18301,8 @@ type Predicate struct { // A list of the conditions that determine when the trigger will fire. Conditions []*Condition `type:"list"` - // Currently "OR" is not supported. + // Optional field if only one condition is listed. If multiple conditions are + // listed, then this field is required. Logical *string `type:"string" enum:"Logical"` } @@ -16278,6 +18348,166 @@ func (s *Predicate) SetLogical(v string) *Predicate { return s } +type PutDataCatalogEncryptionSettingsInput struct { + _ struct{} `type:"structure"` + + // The ID of the Data Catalog for which to set the security configuration. If + // none is supplied, the AWS account ID is used by default. + CatalogId *string `min:"1" type:"string"` + + // The security configuration to set. + // + // DataCatalogEncryptionSettings is a required field + DataCatalogEncryptionSettings *DataCatalogEncryptionSettings `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PutDataCatalogEncryptionSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutDataCatalogEncryptionSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutDataCatalogEncryptionSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutDataCatalogEncryptionSettingsInput"} + if s.CatalogId != nil && len(*s.CatalogId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CatalogId", 1)) + } + if s.DataCatalogEncryptionSettings == nil { + invalidParams.Add(request.NewErrParamRequired("DataCatalogEncryptionSettings")) + } + if s.DataCatalogEncryptionSettings != nil { + if err := s.DataCatalogEncryptionSettings.Validate(); err != nil { + invalidParams.AddNested("DataCatalogEncryptionSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCatalogId sets the CatalogId field's value. +func (s *PutDataCatalogEncryptionSettingsInput) SetCatalogId(v string) *PutDataCatalogEncryptionSettingsInput { + s.CatalogId = &v + return s +} + +// SetDataCatalogEncryptionSettings sets the DataCatalogEncryptionSettings field's value. +func (s *PutDataCatalogEncryptionSettingsInput) SetDataCatalogEncryptionSettings(v *DataCatalogEncryptionSettings) *PutDataCatalogEncryptionSettingsInput { + s.DataCatalogEncryptionSettings = v + return s +} + +type PutDataCatalogEncryptionSettingsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutDataCatalogEncryptionSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutDataCatalogEncryptionSettingsOutput) GoString() string { + return s.String() +} + +type PutResourcePolicyInput struct { + _ struct{} `type:"structure"` + + // A value of MUST_EXIST is used to update a policy. A value of NOT_EXIST is + // used to create a new policy. If a value of NONE or a null value is used, + // the call will not depend on the existence of a policy. + PolicyExistsCondition *string `type:"string" enum:"ExistCondition"` + + // This is the hash value returned when the previous policy was set using PutResourcePolicy. + // Its purpose is to prevent concurrent modifications of a policy. 
Do not use + // this parameter if no previous policy has been set. + PolicyHashCondition *string `min:"1" type:"string"` + + // Contains the policy document to set, in JSON format. + // + // PolicyInJson is a required field + PolicyInJson *string `min:"2" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutResourcePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutResourcePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutResourcePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutResourcePolicyInput"} + if s.PolicyHashCondition != nil && len(*s.PolicyHashCondition) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyHashCondition", 1)) + } + if s.PolicyInJson == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyInJson")) + } + if s.PolicyInJson != nil && len(*s.PolicyInJson) < 2 { + invalidParams.Add(request.NewErrParamMinLen("PolicyInJson", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyExistsCondition sets the PolicyExistsCondition field's value. +func (s *PutResourcePolicyInput) SetPolicyExistsCondition(v string) *PutResourcePolicyInput { + s.PolicyExistsCondition = &v + return s +} + +// SetPolicyHashCondition sets the PolicyHashCondition field's value. +func (s *PutResourcePolicyInput) SetPolicyHashCondition(v string) *PutResourcePolicyInput { + s.PolicyHashCondition = &v + return s +} + +// SetPolicyInJson sets the PolicyInJson field's value. +func (s *PutResourcePolicyInput) SetPolicyInJson(v string) *PutResourcePolicyInput { + s.PolicyInJson = &v + return s +} + +type PutResourcePolicyOutput struct { + _ struct{} `type:"structure"` + + // A hash of the policy that has just been set. This must be included in a subsequent + // call that overwrites or updates this policy. + PolicyHash *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PutResourcePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutResourcePolicyOutput) GoString() string { + return s.String() +} + +// SetPolicyHash sets the PolicyHash field's value. +func (s *PutResourcePolicyOutput) SetPolicyHash(v string) *PutResourcePolicyOutput { + s.PolicyHash = &v + return s +} + type ResetJobBookmarkInput struct { _ struct{} `type:"structure"` @@ -16385,6 +18615,39 @@ func (s *ResourceUri) SetUri(v string) *ResourceUri { return s } +// Specifies how S3 data should be encrypted. +type S3Encryption struct { + _ struct{} `type:"structure"` + + // The AWS ARN of the KMS key to be used to encrypt the data. + KmsKeyArn *string `type:"string"` + + // The encryption mode to use for S3 data. + S3EncryptionMode *string `type:"string" enum:"S3EncryptionMode"` +} + +// String returns the string representation +func (s S3Encryption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3Encryption) GoString() string { + return s.String() +} + +// SetKmsKeyArn sets the KmsKeyArn field's value. +func (s *S3Encryption) SetKmsKeyArn(v string) *S3Encryption { + s.KmsKeyArn = &v + return s +} + +// SetS3EncryptionMode sets the S3EncryptionMode field's value. 
+func (s *S3Encryption) SetS3EncryptionMode(v string) *S3Encryption { + s.S3EncryptionMode = &v + return s +} + // Specifies a data store in Amazon S3. type S3Target struct { _ struct{} `type:"structure"` @@ -16488,6 +18751,48 @@ func (s *SchemaChangePolicy) SetUpdateBehavior(v string) *SchemaChangePolicy { return s } +// Specifies a security configuration. +type SecurityConfiguration struct { + _ struct{} `type:"structure"` + + // The time at which this security configuration was created. + CreatedTimeStamp *time.Time `type:"timestamp"` + + // The encryption configuration associated with this security configuration. + EncryptionConfiguration *EncryptionConfiguration `type:"structure"` + + // The name of the security configuration. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s SecurityConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SecurityConfiguration) GoString() string { + return s.String() +} + +// SetCreatedTimeStamp sets the CreatedTimeStamp field's value. +func (s *SecurityConfiguration) SetCreatedTimeStamp(v time.Time) *SecurityConfiguration { + s.CreatedTimeStamp = &v + return s +} + +// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. +func (s *SecurityConfiguration) SetEncryptionConfiguration(v *EncryptionConfiguration) *SecurityConfiguration { + s.EncryptionConfiguration = v + return s +} + +// SetName sets the Name field's value. +func (s *SecurityConfiguration) SetName(v string) *SecurityConfiguration { + s.Name = &v + return s +} + // Defines a non-overlapping region of a table's partitions, allowing multiple // requests to be executed in parallel. type Segment struct { @@ -16555,7 +18860,7 @@ type SerDeInfo struct { // Name of the SerDe. Name *string `min:"1" type:"string"` - // A list of initialization parameters for the SerDe, in key-value form. + // These key-value pairs define initialization parameters for the SerDe. Parameters map[string]*string `type:"map"` // Usually the class that implements the SerDe. An example is: org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe. @@ -16770,7 +19075,7 @@ type StartJobRunInput struct { AllocatedCapacity *int64 `type:"integer"` // The job arguments specifically for this run. They override the equivalent - // default arguments set for the job itself. + // default arguments set for in the job definition itself. // // You can specify arguments here that your own job-execution script consumes, // as well as arguments that AWS Glue itself consumes. @@ -16780,17 +19085,30 @@ type StartJobRunInput struct { // topic in the developer guide. // // For information about the key-value pairs that AWS Glue consumes to set up - // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-glue-arguments.html) + // your job, see the Special Parameters Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) // topic in the developer guide. Arguments map[string]*string `type:"map"` - // The name of the job to start. + // The name of the job definition to use. // // JobName is a required field JobName *string `min:"1" type:"string" required:"true"` // The ID of a previous JobRun to retry. JobRunId *string `min:"1" type:"string"` + + // Specifies configuration properties of a job run notification. 
+ NotificationProperty *NotificationProperty `type:"structure"` + + // The name of the SecurityConfiguration structure to be used with this job + // run. + SecurityConfiguration *string `min:"1" type:"string"` + + // The JobRun timeout in minutes. This is the maximum time that a job run can + // consume resources before it is terminated and enters TIMEOUT status. The + // default is 2,880 minutes (48 hours). This overrides the timeout value set + // in the parent job. + Timeout *int64 `min:"1" type:"integer"` } // String returns the string representation @@ -16815,6 +19133,17 @@ func (s *StartJobRunInput) Validate() error { if s.JobRunId != nil && len(*s.JobRunId) < 1 { invalidParams.Add(request.NewErrParamMinLen("JobRunId", 1)) } + if s.SecurityConfiguration != nil && len(*s.SecurityConfiguration) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityConfiguration", 1)) + } + if s.Timeout != nil && *s.Timeout < 1 { + invalidParams.Add(request.NewErrParamMinValue("Timeout", 1)) + } + if s.NotificationProperty != nil { + if err := s.NotificationProperty.Validate(); err != nil { + invalidParams.AddNested("NotificationProperty", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -16846,6 +19175,24 @@ func (s *StartJobRunInput) SetJobRunId(v string) *StartJobRunInput { return s } +// SetNotificationProperty sets the NotificationProperty field's value. +func (s *StartJobRunInput) SetNotificationProperty(v *NotificationProperty) *StartJobRunInput { + s.NotificationProperty = v + return s +} + +// SetSecurityConfiguration sets the SecurityConfiguration field's value. +func (s *StartJobRunInput) SetSecurityConfiguration(v string) *StartJobRunInput { + s.SecurityConfiguration = &v + return s +} + +// SetTimeout sets the Timeout field's value. +func (s *StartJobRunInput) SetTimeout(v int64) *StartJobRunInput { + s.Timeout = &v + return s +} + type StartJobRunOutput struct { _ struct{} `type:"structure"` @@ -17276,7 +19623,7 @@ type Table struct { _ struct{} `type:"structure"` // Time when the table definition was created in the Data Catalog. - CreateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreateTime *time.Time `type:"timestamp"` // Person or entity who created the table. CreatedBy *string `min:"1" type:"string"` @@ -17290,10 +19637,10 @@ type Table struct { // Last time the table was accessed. This is usually taken from HDFS, and may // not be reliable. - LastAccessTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAccessTime *time.Time `type:"timestamp"` // Last time column statistics were computed for this table. - LastAnalyzedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAnalyzedTime *time.Time `type:"timestamp"` // Name of the table. For Hive compatibility, this must be entirely lowercase. // @@ -17303,7 +19650,7 @@ type Table struct { // Owner of the table. Owner *string `min:"1" type:"string"` - // Properties associated with this table, as a list of key-value pairs. + // These key-value pairs define properties associated with the table. Parameters map[string]*string `type:"map"` // A list of columns by which the table is partitioned. Only primitive types @@ -17321,7 +19668,7 @@ type Table struct { TableType *string `type:"string"` // Last time the table was updated. - UpdateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + UpdateTime *time.Time `type:"timestamp"` // If the table is a view, the expanded text of the view; otherwise null. 
ViewExpandedText *string `type:"string"` @@ -17477,10 +19824,10 @@ type TableInput struct { Description *string `type:"string"` // Last time the table was accessed. - LastAccessTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAccessTime *time.Time `type:"timestamp"` // Last time column statistics were computed for this table. - LastAnalyzedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAnalyzedTime *time.Time `type:"timestamp"` // Name of the table. For Hive compatibility, this is folded to lowercase when // it is stored. @@ -17491,7 +19838,7 @@ type TableInput struct { // Owner of the table. Owner *string `min:"1" type:"string"` - // Properties associated with this table, as a list of key-value pairs. + // These key-value pairs define properties associated with the table. Parameters map[string]*string `type:"map"` // A list of columns by which the table is partitioned. Only primitive types @@ -17638,7 +19985,8 @@ type TableVersion struct { // The table in question Table *Table `type:"structure"` - // The ID value that identifies this table version. + // The ID value that identifies this table version. A VersionId is a string + // representation of an integer. Each version is incremented by 1. VersionId *string `min:"1" type:"string"` } @@ -17674,7 +20022,8 @@ type TableVersionError struct { // The name of the table in question. TableName *string `min:"1" type:"string"` - // The ID value of the version in question. + // The ID value of the version in question. A VersionID is a string representation + // of an integer. Each version is incremented by 1. VersionId *string `min:"1" type:"string"` } @@ -18059,22 +20408,18 @@ type UpdateCrawlerInput struct { _ struct{} `type:"structure"` // A list of custom classifiers that the user has registered. By default, all - // classifiers are included in a crawl, but these custom classifiers always - // override the default classifiers for a given classification. + // built-in classifiers are included in a crawl, but these custom classifiers + // always override the default classifiers for a given classification. Classifiers []*string `type:"list"` // Crawler configuration information. This versioned JSON string allows users - // to specify aspects of a Crawler's behavior. - // - // You can use this field to force partitions to inherit metadata such as classification, - // input format, output format, serde information, and schema from their parent - // table, rather than detect this information separately for each partition. - // Use the following JSON string to specify that behavior: - // - // Example: '{ "Version": 1.0, "CrawlerOutput": { "Partitions": { "AddOrUpdateBehavior": - // "InheritFromTable" } } }' + // to specify aspects of a crawler's behavior. For more information, see Configuring + // a Crawler (http://docs.aws.amazon.com/glue/latest/dg/crawler-configuration.html). Configuration *string `type:"string"` + // The name of the SecurityConfiguration structure to be used by this Crawler. + CrawlerSecurityConfiguration *string `type:"string"` + // The AWS Glue database where results are stored, such as: arn:aws:daylight:us-east-1::database/sometable/*. DatabaseName *string `type:"string"` @@ -18144,6 +20489,12 @@ func (s *UpdateCrawlerInput) SetConfiguration(v string) *UpdateCrawlerInput { return s } +// SetCrawlerSecurityConfiguration sets the CrawlerSecurityConfiguration field's value. 
+func (s *UpdateCrawlerInput) SetCrawlerSecurityConfiguration(v string) *UpdateCrawlerInput { + s.CrawlerSecurityConfiguration = &v + return s +} + // SetDatabaseName sets the DatabaseName field's value. func (s *UpdateCrawlerInput) SetDatabaseName(v string) *UpdateCrawlerInput { s.DatabaseName = &v @@ -18365,9 +20716,15 @@ func (s UpdateDatabaseOutput) GoString() string { type UpdateDevEndpointInput struct { _ struct{} `type:"structure"` + // The list of public keys for the DevEndpoint to use. + AddPublicKeys []*string `type:"list"` + // Custom Python or Java libraries to be loaded in the DevEndpoint. CustomLibraries *DevEndpointCustomLibraries `type:"structure"` + // The list of public keys to be deleted from the DevEndpoint. + DeletePublicKeys []*string `type:"list"` + // The name of the DevEndpoint to be updated. // // EndpointName is a required field @@ -18404,12 +20761,24 @@ func (s *UpdateDevEndpointInput) Validate() error { return nil } +// SetAddPublicKeys sets the AddPublicKeys field's value. +func (s *UpdateDevEndpointInput) SetAddPublicKeys(v []*string) *UpdateDevEndpointInput { + s.AddPublicKeys = v + return s +} + // SetCustomLibraries sets the CustomLibraries field's value. func (s *UpdateDevEndpointInput) SetCustomLibraries(v *DevEndpointCustomLibraries) *UpdateDevEndpointInput { s.CustomLibraries = v return s } +// SetDeletePublicKeys sets the DeletePublicKeys field's value. +func (s *UpdateDevEndpointInput) SetDeletePublicKeys(v []*string) *UpdateDevEndpointInput { + s.DeletePublicKeys = v + return s +} + // SetEndpointName sets the EndpointName field's value. func (s *UpdateDevEndpointInput) SetEndpointName(v string) *UpdateDevEndpointInput { s.EndpointName = &v @@ -18523,7 +20892,7 @@ type UpdateJobInput struct { // JobName is a required field JobName *string `min:"1" type:"string" required:"true"` - // Specifies the values with which to update the job. + // Specifies the values with which to update the job definition. // // JobUpdate is a required field JobUpdate *JobUpdate `type:"structure" required:"true"` @@ -18551,6 +20920,11 @@ func (s *UpdateJobInput) Validate() error { if s.JobUpdate == nil { invalidParams.Add(request.NewErrParamRequired("JobUpdate")) } + if s.JobUpdate != nil { + if err := s.JobUpdate.Validate(); err != nil { + invalidParams.AddNested("JobUpdate", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -18573,7 +20947,7 @@ func (s *UpdateJobInput) SetJobUpdate(v *JobUpdate) *UpdateJobInput { type UpdateJobOutput struct { _ struct{} `type:"structure"` - // Returns the name of the updated job. + // Returns the name of the updated job definition. JobName *string `min:"1" type:"string"` } @@ -19122,7 +21496,7 @@ type UserDefinedFunction struct { ClassName *string `min:"1" type:"string"` // The time at which the function was created. - CreateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreateTime *time.Time `type:"timestamp"` // The name of the function. FunctionName *string `min:"1" type:"string"` @@ -19282,10 +21656,10 @@ type XMLClassifier struct { Classification *string `type:"string" required:"true"` // The time this classifier was registered. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // The time this classifier was last updated. - LastUpdated *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdated *time.Time `type:"timestamp"` // The name of the classifier. 
// @@ -19349,6 +21723,22 @@ func (s *XMLClassifier) SetVersion(v int64) *XMLClassifier { return s } +const ( + // CatalogEncryptionModeDisabled is a CatalogEncryptionMode enum value + CatalogEncryptionModeDisabled = "DISABLED" + + // CatalogEncryptionModeSseKms is a CatalogEncryptionMode enum value + CatalogEncryptionModeSseKms = "SSE-KMS" +) + +const ( + // CloudWatchEncryptionModeDisabled is a CloudWatchEncryptionMode enum value + CloudWatchEncryptionModeDisabled = "DISABLED" + + // CloudWatchEncryptionModeSseKms is a CloudWatchEncryptionMode enum value + CloudWatchEncryptionModeSseKms = "SSE-KMS" +) + const ( // ConnectionPropertyKeyHost is a ConnectionPropertyKey enum value ConnectionPropertyKeyHost = "HOST" @@ -19382,6 +21772,9 @@ const ( // ConnectionPropertyKeyJdbcConnectionUrl is a ConnectionPropertyKey enum value ConnectionPropertyKeyJdbcConnectionUrl = "JDBC_CONNECTION_URL" + + // ConnectionPropertyKeyJdbcEnforceSsl is a ConnectionPropertyKey enum value + ConnectionPropertyKeyJdbcEnforceSsl = "JDBC_ENFORCE_SSL" ) const ( @@ -19414,6 +21807,25 @@ const ( DeleteBehaviorDeprecateInDatabase = "DEPRECATE_IN_DATABASE" ) +const ( + // ExistConditionMustExist is a ExistCondition enum value + ExistConditionMustExist = "MUST_EXIST" + + // ExistConditionNotExist is a ExistCondition enum value + ExistConditionNotExist = "NOT_EXIST" + + // ExistConditionNone is a ExistCondition enum value + ExistConditionNone = "NONE" +) + +const ( + // JobBookmarksEncryptionModeDisabled is a JobBookmarksEncryptionMode enum value + JobBookmarksEncryptionModeDisabled = "DISABLED" + + // JobBookmarksEncryptionModeCseKms is a JobBookmarksEncryptionMode enum value + JobBookmarksEncryptionModeCseKms = "CSE-KMS" +) + const ( // JobRunStateStarting is a JobRunState enum value JobRunStateStarting = "STARTING" @@ -19432,6 +21844,9 @@ const ( // JobRunStateFailed is a JobRunState enum value JobRunStateFailed = "FAILED" + + // JobRunStateTimeout is a JobRunState enum value + JobRunStateTimeout = "TIMEOUT" ) const ( @@ -19488,6 +21903,17 @@ const ( ResourceTypeArchive = "ARCHIVE" ) +const ( + // S3EncryptionModeDisabled is a S3EncryptionMode enum value + S3EncryptionModeDisabled = "DISABLED" + + // S3EncryptionModeSseKms is a S3EncryptionMode enum value + S3EncryptionModeSseKms = "SSE-KMS" + + // S3EncryptionModeSseS3 is a S3EncryptionMode enum value + S3EncryptionModeSseS3 = "SSE-S3" +) + const ( // ScheduleStateScheduled is a ScheduleState enum value ScheduleStateScheduled = "SCHEDULED" diff --git a/vendor/github.com/aws/aws-sdk-go/service/glue/errors.go b/vendor/github.com/aws/aws-sdk-go/service/glue/errors.go index c54357306fc..a1e9b4d4d6b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/glue/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/glue/errors.go @@ -28,6 +28,12 @@ const ( // Too many jobs are being run concurrently. ErrCodeConcurrentRunsExceededException = "ConcurrentRunsExceededException" + // ErrCodeConditionCheckFailureException for service response error code + // "ConditionCheckFailureException". + // + // A specified condition was not satisfied. + ErrCodeConditionCheckFailureException = "ConditionCheckFailureException" + // ErrCodeCrawlerNotRunningException for service response error code // "CrawlerNotRunningException". // @@ -46,6 +52,12 @@ const ( // The specified crawler is stopping. ErrCodeCrawlerStoppingException = "CrawlerStoppingException" + // ErrCodeEncryptionException for service response error code + // "GlueEncryptionException". 
+ // + // An encryption operation failed. + ErrCodeEncryptionException = "GlueEncryptionException" + // ErrCodeEntityNotFoundException for service response error code // "EntityNotFoundException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/glue/service.go b/vendor/github.com/aws/aws-sdk-go/service/glue/service.go index 5e017698a2c..075b0a1df6c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/glue/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/glue/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "glue" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "glue" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Glue" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Glue client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/guardduty/api.go b/vendor/github.com/aws/aws-sdk-go/service/guardduty/api.go index 5a2e9cfe600..88975389ba8 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/guardduty/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/guardduty/api.go @@ -3,6 +3,8 @@ package guardduty import ( + "fmt" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awsutil" "github.com/aws/aws-sdk-go/aws/request" @@ -12,8 +14,8 @@ const opAcceptInvitation = "AcceptInvitation" // AcceptInvitationRequest generates a "aws/request.Request" representing the // client's request for the AcceptInvitation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -94,8 +96,8 @@ const opArchiveFindings = "ArchiveFindings" // ArchiveFindingsRequest generates a "aws/request.Request" representing the // client's request for the ArchiveFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -176,8 +178,8 @@ const opCreateDetector = "CreateDetector" // CreateDetectorRequest generates a "aws/request.Request" representing the // client's request for the CreateDetector operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -256,12 +258,94 @@ func (c *GuardDuty) CreateDetectorWithContext(ctx aws.Context, input *CreateDete return out, req.Send() } +const opCreateFilter = "CreateFilter" + +// CreateFilterRequest generates a "aws/request.Request" representing the +// client's request for the CreateFilter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateFilter for more information on using the CreateFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateFilterRequest method. +// req, resp := client.CreateFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/CreateFilter +func (c *GuardDuty) CreateFilterRequest(input *CreateFilterInput) (req *request.Request, output *CreateFilterOutput) { + op := &request.Operation{ + Name: opCreateFilter, + HTTPMethod: "POST", + HTTPPath: "/detector/{detectorId}/filter", + } + + if input == nil { + input = &CreateFilterInput{} + } + + output = &CreateFilterOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateFilter API operation for Amazon GuardDuty. +// +// Creates a filter using the specified finding criteria. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon GuardDuty's +// API operation CreateFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Error response object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Error response object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/CreateFilter +func (c *GuardDuty) CreateFilter(input *CreateFilterInput) (*CreateFilterOutput, error) { + req, out := c.CreateFilterRequest(input) + return out, req.Send() +} + +// CreateFilterWithContext is the same as CreateFilter with the addition of +// the ability to pass a context and additional request options. +// +// See CreateFilter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *GuardDuty) CreateFilterWithContext(ctx aws.Context, input *CreateFilterInput, opts ...request.Option) (*CreateFilterOutput, error) { + req, out := c.CreateFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateIPSet = "CreateIPSet" // CreateIPSetRequest generates a "aws/request.Request" representing the // client's request for the CreateIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -343,8 +427,8 @@ const opCreateMembers = "CreateMembers" // CreateMembersRequest generates a "aws/request.Request" representing the // client's request for the CreateMembers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -427,8 +511,8 @@ const opCreateSampleFindings = "CreateSampleFindings" // CreateSampleFindingsRequest generates a "aws/request.Request" representing the // client's request for the CreateSampleFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -511,8 +595,8 @@ const opCreateThreatIntelSet = "CreateThreatIntelSet" // CreateThreatIntelSetRequest generates a "aws/request.Request" representing the // client's request for the CreateThreatIntelSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -594,8 +678,8 @@ const opDeclineInvitations = "DeclineInvitations" // DeclineInvitationsRequest generates a "aws/request.Request" representing the // client's request for the DeclineInvitations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -677,8 +761,8 @@ const opDeleteDetector = "DeleteDetector" // DeleteDetectorRequest generates a "aws/request.Request" representing the // client's request for the DeleteDetector operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -755,12 +839,94 @@ func (c *GuardDuty) DeleteDetectorWithContext(ctx aws.Context, input *DeleteDete return out, req.Send() } +const opDeleteFilter = "DeleteFilter" + +// DeleteFilterRequest generates a "aws/request.Request" representing the +// client's request for the DeleteFilter operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteFilter for more information on using the DeleteFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteFilterRequest method. +// req, resp := client.DeleteFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/DeleteFilter +func (c *GuardDuty) DeleteFilterRequest(input *DeleteFilterInput) (req *request.Request, output *DeleteFilterOutput) { + op := &request.Operation{ + Name: opDeleteFilter, + HTTPMethod: "DELETE", + HTTPPath: "/detector/{detectorId}/filter/{filterName}", + } + + if input == nil { + input = &DeleteFilterInput{} + } + + output = &DeleteFilterOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteFilter API operation for Amazon GuardDuty. +// +// Deletes the filter specified by the filter name. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon GuardDuty's +// API operation DeleteFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Error response object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Error response object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/DeleteFilter +func (c *GuardDuty) DeleteFilter(input *DeleteFilterInput) (*DeleteFilterOutput, error) { + req, out := c.DeleteFilterRequest(input) + return out, req.Send() +} + +// DeleteFilterWithContext is the same as DeleteFilter with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteFilter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *GuardDuty) DeleteFilterWithContext(ctx aws.Context, input *DeleteFilterInput, opts ...request.Option) (*DeleteFilterOutput, error) { + req, out := c.DeleteFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteIPSet = "DeleteIPSet" // DeleteIPSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -841,8 +1007,8 @@ const opDeleteInvitations = "DeleteInvitations" // DeleteInvitationsRequest generates a "aws/request.Request" representing the // client's request for the DeleteInvitations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -924,8 +1090,8 @@ const opDeleteMembers = "DeleteMembers" // DeleteMembersRequest generates a "aws/request.Request" representing the // client's request for the DeleteMembers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1007,8 +1173,8 @@ const opDeleteThreatIntelSet = "DeleteThreatIntelSet" // DeleteThreatIntelSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteThreatIntelSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1089,8 +1255,8 @@ const opDisassociateFromMasterAccount = "DisassociateFromMasterAccount" // DisassociateFromMasterAccountRequest generates a "aws/request.Request" representing the // client's request for the DisassociateFromMasterAccount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1171,8 +1337,8 @@ const opDisassociateMembers = "DisassociateMembers" // DisassociateMembersRequest generates a "aws/request.Request" representing the // client's request for the DisassociateMembers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1254,8 +1420,8 @@ const opGetDetector = "GetDetector" // GetDetectorRequest generates a "aws/request.Request" representing the // client's request for the GetDetector operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1332,12 +1498,94 @@ func (c *GuardDuty) GetDetectorWithContext(ctx aws.Context, input *GetDetectorIn return out, req.Send() } +const opGetFilter = "GetFilter" + +// GetFilterRequest generates a "aws/request.Request" representing the +// client's request for the GetFilter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetFilter for more information on using the GetFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetFilterRequest method. +// req, resp := client.GetFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/GetFilter +func (c *GuardDuty) GetFilterRequest(input *GetFilterInput) (req *request.Request, output *GetFilterOutput) { + op := &request.Operation{ + Name: opGetFilter, + HTTPMethod: "GET", + HTTPPath: "/detector/{detectorId}/filter/{filterName}", + } + + if input == nil { + input = &GetFilterInput{} + } + + output = &GetFilterOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetFilter API operation for Amazon GuardDuty. +// +// Returns the details of the filter specified by the filter name. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon GuardDuty's +// API operation GetFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Error response object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Error response object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/GetFilter +func (c *GuardDuty) GetFilter(input *GetFilterInput) (*GetFilterOutput, error) { + req, out := c.GetFilterRequest(input) + return out, req.Send() +} + +// GetFilterWithContext is the same as GetFilter with the addition of +// the ability to pass a context and additional request options. +// +// See GetFilter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *GuardDuty) GetFilterWithContext(ctx aws.Context, input *GetFilterInput, opts ...request.Option) (*GetFilterOutput, error) { + req, out := c.GetFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetFindings = "GetFindings" // GetFindingsRequest generates a "aws/request.Request" representing the // client's request for the GetFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1418,8 +1666,8 @@ const opGetFindingsStatistics = "GetFindingsStatistics" // GetFindingsStatisticsRequest generates a "aws/request.Request" representing the // client's request for the GetFindingsStatistics operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1500,8 +1748,8 @@ const opGetIPSet = "GetIPSet" // GetIPSetRequest generates a "aws/request.Request" representing the // client's request for the GetIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1582,8 +1830,8 @@ const opGetInvitationsCount = "GetInvitationsCount" // GetInvitationsCountRequest generates a "aws/request.Request" representing the // client's request for the GetInvitationsCount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1665,8 +1913,8 @@ const opGetMasterAccount = "GetMasterAccount" // GetMasterAccountRequest generates a "aws/request.Request" representing the // client's request for the GetMasterAccount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1748,8 +1996,8 @@ const opGetMembers = "GetMembers" // GetMembersRequest generates a "aws/request.Request" representing the // client's request for the GetMembers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1831,8 +2079,8 @@ const opGetThreatIntelSet = "GetThreatIntelSet" // GetThreatIntelSetRequest generates a "aws/request.Request" representing the // client's request for the GetThreatIntelSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1913,8 +2161,8 @@ const opInviteMembers = "InviteMembers" // InviteMembersRequest generates a "aws/request.Request" representing the // client's request for the InviteMembers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1998,8 +2246,8 @@ const opListDetectors = "ListDetectors" // ListDetectorsRequest generates a "aws/request.Request" representing the // client's request for the ListDetectors operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2132,37 +2380,37 @@ func (c *GuardDuty) ListDetectorsPagesWithContext(ctx aws.Context, input *ListDe return p.Err() } -const opListFindings = "ListFindings" +const opListFilters = "ListFilters" -// ListFindingsRequest generates a "aws/request.Request" representing the -// client's request for the ListFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListFiltersRequest generates a "aws/request.Request" representing the +// client's request for the ListFilters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListFindings for more information on using the ListFindings +// See ListFilters for more information on using the ListFilters // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListFindingsRequest method. -// req, resp := client.ListFindingsRequest(params) +// // Example sending a request using the ListFiltersRequest method. 
+// req, resp := client.ListFiltersRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListFindings -func (c *GuardDuty) ListFindingsRequest(input *ListFindingsInput) (req *request.Request, output *ListFindingsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListFilters +func (c *GuardDuty) ListFiltersRequest(input *ListFiltersInput) (req *request.Request, output *ListFiltersOutput) { op := &request.Operation{ - Name: opListFindings, - HTTPMethod: "POST", - HTTPPath: "/detector/{detectorId}/findings", + Name: opListFilters, + HTTPMethod: "GET", + HTTPPath: "/detector/{detectorId}/filter", Paginator: &request.Paginator{ InputTokens: []string{"NextToken"}, OutputTokens: []string{"NextToken"}, @@ -2172,24 +2420,24 @@ func (c *GuardDuty) ListFindingsRequest(input *ListFindingsInput) (req *request. } if input == nil { - input = &ListFindingsInput{} + input = &ListFiltersInput{} } - output = &ListFindingsOutput{} + output = &ListFiltersOutput{} req = c.newRequest(op, input, output) return } -// ListFindings API operation for Amazon GuardDuty. +// ListFilters API operation for Amazon GuardDuty. // -// Lists Amazon GuardDuty findings for the specified detector ID. +// Returns a paginated list of the current filters. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon GuardDuty's -// API operation ListFindings for usage and error information. +// API operation ListFilters for usage and error information. // // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -2198,65 +2446,65 @@ func (c *GuardDuty) ListFindingsRequest(input *ListFindingsInput) (req *request. // * ErrCodeInternalServerErrorException "InternalServerErrorException" // Error response object. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListFindings -func (c *GuardDuty) ListFindings(input *ListFindingsInput) (*ListFindingsOutput, error) { - req, out := c.ListFindingsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListFilters +func (c *GuardDuty) ListFilters(input *ListFiltersInput) (*ListFiltersOutput, error) { + req, out := c.ListFiltersRequest(input) return out, req.Send() } -// ListFindingsWithContext is the same as ListFindings with the addition of +// ListFiltersWithContext is the same as ListFilters with the addition of // the ability to pass a context and additional request options. // -// See ListFindings for details on how to use this API operation. +// See ListFilters for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *GuardDuty) ListFindingsWithContext(ctx aws.Context, input *ListFindingsInput, opts ...request.Option) (*ListFindingsOutput, error) { - req, out := c.ListFindingsRequest(input) +func (c *GuardDuty) ListFiltersWithContext(ctx aws.Context, input *ListFiltersInput, opts ...request.Option) (*ListFiltersOutput, error) { + req, out := c.ListFiltersRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListFindingsPages iterates over the pages of a ListFindings operation, +// ListFiltersPages iterates over the pages of a ListFilters operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See ListFindings method for more information on how to use this operation. +// See ListFilters method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a ListFindings operation. +// // Example iterating over at most 3 pages of a ListFilters operation. // pageNum := 0 -// err := client.ListFindingsPages(params, -// func(page *ListFindingsOutput, lastPage bool) bool { +// err := client.ListFiltersPages(params, +// func(page *ListFiltersOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) // -func (c *GuardDuty) ListFindingsPages(input *ListFindingsInput, fn func(*ListFindingsOutput, bool) bool) error { - return c.ListFindingsPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *GuardDuty) ListFiltersPages(input *ListFiltersInput, fn func(*ListFiltersOutput, bool) bool) error { + return c.ListFiltersPagesWithContext(aws.BackgroundContext(), input, fn) } -// ListFindingsPagesWithContext same as ListFindingsPages except +// ListFiltersPagesWithContext same as ListFiltersPages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *GuardDuty) ListFindingsPagesWithContext(ctx aws.Context, input *ListFindingsInput, fn func(*ListFindingsOutput, bool) bool, opts ...request.Option) error { +func (c *GuardDuty) ListFiltersPagesWithContext(ctx aws.Context, input *ListFiltersInput, fn func(*ListFiltersOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ NewRequest: func() (*request.Request, error) { - var inCpy *ListFindingsInput + var inCpy *ListFiltersInput if input != nil { tmp := *input inCpy = &tmp } - req, _ := c.ListFindingsRequest(inCpy) + req, _ := c.ListFiltersRequest(inCpy) req.SetContext(ctx) req.ApplyOptions(opts...) return req, nil @@ -2265,42 +2513,42 @@ func (c *GuardDuty) ListFindingsPagesWithContext(ctx aws.Context, input *ListFin cont := true for p.Next() && cont { - cont = fn(p.Page().(*ListFindingsOutput), !p.HasNextPage()) + cont = fn(p.Page().(*ListFiltersOutput), !p.HasNextPage()) } return p.Err() } -const opListIPSets = "ListIPSets" +const opListFindings = "ListFindings" -// ListIPSetsRequest generates a "aws/request.Request" representing the -// client's request for the ListIPSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// ListFindingsRequest generates a "aws/request.Request" representing the +// client's request for the ListFindings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListIPSets for more information on using the ListIPSets +// See ListFindings for more information on using the ListFindings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListIPSetsRequest method. -// req, resp := client.ListIPSetsRequest(params) +// // Example sending a request using the ListFindingsRequest method. +// req, resp := client.ListFindingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListIPSets -func (c *GuardDuty) ListIPSetsRequest(input *ListIPSetsInput) (req *request.Request, output *ListIPSetsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListFindings +func (c *GuardDuty) ListFindingsRequest(input *ListFindingsInput) (req *request.Request, output *ListFindingsOutput) { op := &request.Operation{ - Name: opListIPSets, - HTTPMethod: "GET", - HTTPPath: "/detector/{detectorId}/ipset", + Name: opListFindings, + HTTPMethod: "POST", + HTTPPath: "/detector/{detectorId}/findings", Paginator: &request.Paginator{ InputTokens: []string{"NextToken"}, OutputTokens: []string{"NextToken"}, @@ -2310,24 +2558,24 @@ func (c *GuardDuty) ListIPSetsRequest(input *ListIPSetsInput) (req *request.Requ } if input == nil { - input = &ListIPSetsInput{} + input = &ListFindingsInput{} } - output = &ListIPSetsOutput{} + output = &ListFindingsOutput{} req = c.newRequest(op, input, output) return } -// ListIPSets API operation for Amazon GuardDuty. +// ListFindings API operation for Amazon GuardDuty. // -// Lists the IPSets of the GuardDuty service specified by the detector ID. +// Lists Amazon GuardDuty findings for the specified detector ID. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon GuardDuty's -// API operation ListIPSets for usage and error information. +// API operation ListFindings for usage and error information. // // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -2336,24 +2584,162 @@ func (c *GuardDuty) ListIPSetsRequest(input *ListIPSetsInput) (req *request.Requ // * ErrCodeInternalServerErrorException "InternalServerErrorException" // Error response object. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListIPSets -func (c *GuardDuty) ListIPSets(input *ListIPSetsInput) (*ListIPSetsOutput, error) { - req, out := c.ListIPSetsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListFindings +func (c *GuardDuty) ListFindings(input *ListFindingsInput) (*ListFindingsOutput, error) { + req, out := c.ListFindingsRequest(input) return out, req.Send() } -// ListIPSetsWithContext is the same as ListIPSets with the addition of +// ListFindingsWithContext is the same as ListFindings with the addition of // the ability to pass a context and additional request options. // -// See ListIPSets for details on how to use this API operation. +// See ListFindings for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *GuardDuty) ListIPSetsWithContext(ctx aws.Context, input *ListIPSetsInput, opts ...request.Option) (*ListIPSetsOutput, error) { - req, out := c.ListIPSetsRequest(input) - req.SetContext(ctx) +func (c *GuardDuty) ListFindingsWithContext(ctx aws.Context, input *ListFindingsInput, opts ...request.Option) (*ListFindingsOutput, error) { + req, out := c.ListFindingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListFindingsPages iterates over the pages of a ListFindings operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListFindings method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListFindings operation. +// pageNum := 0 +// err := client.ListFindingsPages(params, +// func(page *ListFindingsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *GuardDuty) ListFindingsPages(input *ListFindingsInput, fn func(*ListFindingsOutput, bool) bool) error { + return c.ListFindingsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListFindingsPagesWithContext same as ListFindingsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *GuardDuty) ListFindingsPagesWithContext(ctx aws.Context, input *ListFindingsInput, fn func(*ListFindingsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListFindingsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListFindingsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListFindingsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListIPSets = "ListIPSets" + +// ListIPSetsRequest generates a "aws/request.Request" representing the +// client's request for the ListIPSets operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListIPSets for more information on using the ListIPSets +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListIPSetsRequest method. +// req, resp := client.ListIPSetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListIPSets +func (c *GuardDuty) ListIPSetsRequest(input *ListIPSetsInput) (req *request.Request, output *ListIPSetsOutput) { + op := &request.Operation{ + Name: opListIPSets, + HTTPMethod: "GET", + HTTPPath: "/detector/{detectorId}/ipset", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListIPSetsInput{} + } + + output = &ListIPSetsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListIPSets API operation for Amazon GuardDuty. +// +// Lists the IPSets of the GuardDuty service specified by the detector ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon GuardDuty's +// API operation ListIPSets for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Error response object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Error response object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/ListIPSets +func (c *GuardDuty) ListIPSets(input *ListIPSetsInput) (*ListIPSetsOutput, error) { + req, out := c.ListIPSetsRequest(input) + return out, req.Send() +} + +// ListIPSetsWithContext is the same as ListIPSets with the addition of +// the ability to pass a context and additional request options. +// +// See ListIPSets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *GuardDuty) ListIPSetsWithContext(ctx aws.Context, input *ListIPSetsInput, opts ...request.Option) (*ListIPSetsOutput, error) { + req, out := c.ListIPSetsRequest(input) + req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } @@ -2412,8 +2798,8 @@ const opListInvitations = "ListInvitations" // ListInvitationsRequest generates a "aws/request.Request" representing the // client's request for the ListInvitations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2551,8 +2937,8 @@ const opListMembers = "ListMembers" // ListMembersRequest generates a "aws/request.Request" representing the // client's request for the ListMembers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2690,8 +3076,8 @@ const opListThreatIntelSets = "ListThreatIntelSets" // ListThreatIntelSetsRequest generates a "aws/request.Request" representing the // client's request for the ListThreatIntelSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2829,8 +3215,8 @@ const opStartMonitoringMembers = "StartMonitoringMembers" // StartMonitoringMembersRequest generates a "aws/request.Request" representing the // client's request for the StartMonitoringMembers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2913,8 +3299,8 @@ const opStopMonitoringMembers = "StopMonitoringMembers" // StopMonitoringMembersRequest generates a "aws/request.Request" representing the // client's request for the StopMonitoringMembers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2955,7 +3341,7 @@ func (c *GuardDuty) StopMonitoringMembersRequest(input *StopMonitoringMembersInp // // Disables GuardDuty from monitoring findings of the member accounts specified // by the account IDs. After running this command, a master GuardDuty account -// can run StartMonitoringMembers to re-enable GuardDuty to monitor these members' +// can run StartMonitoringMembers to re-enable GuardDuty to monitor these members’ // findings. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2998,8 +3384,8 @@ const opUnarchiveFindings = "UnarchiveFindings" // UnarchiveFindingsRequest generates a "aws/request.Request" representing the // client's request for the UnarchiveFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -3080,8 +3466,8 @@ const opUpdateDetector = "UpdateDetector" // UpdateDetectorRequest generates a "aws/request.Request" representing the // client's request for the UpdateDetector operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3158,12 +3544,94 @@ func (c *GuardDuty) UpdateDetectorWithContext(ctx aws.Context, input *UpdateDete return out, req.Send() } +const opUpdateFilter = "UpdateFilter" + +// UpdateFilterRequest generates a "aws/request.Request" representing the +// client's request for the UpdateFilter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateFilter for more information on using the UpdateFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateFilterRequest method. +// req, resp := client.UpdateFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/UpdateFilter +func (c *GuardDuty) UpdateFilterRequest(input *UpdateFilterInput) (req *request.Request, output *UpdateFilterOutput) { + op := &request.Operation{ + Name: opUpdateFilter, + HTTPMethod: "POST", + HTTPPath: "/detector/{detectorId}/filter/{filterName}", + } + + if input == nil { + input = &UpdateFilterInput{} + } + + output = &UpdateFilterOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateFilter API operation for Amazon GuardDuty. +// +// Updates the filter specified by the filter name. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon GuardDuty's +// API operation UpdateFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Error response object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Error response object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/guardduty-2017-11-28/UpdateFilter +func (c *GuardDuty) UpdateFilter(input *UpdateFilterInput) (*UpdateFilterOutput, error) { + req, out := c.UpdateFilterRequest(input) + return out, req.Send() +} + +// UpdateFilterWithContext is the same as UpdateFilter with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateFilter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *GuardDuty) UpdateFilterWithContext(ctx aws.Context, input *UpdateFilterInput, opts ...request.Option) (*UpdateFilterOutput, error) { + req, out := c.UpdateFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateFindingsFeedback = "UpdateFindingsFeedback" // UpdateFindingsFeedbackRequest generates a "aws/request.Request" representing the // client's request for the UpdateFindingsFeedback operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3244,8 +3712,8 @@ const opUpdateIPSet = "UpdateIPSet" // UpdateIPSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3326,8 +3794,8 @@ const opUpdateThreatIntelSet = "UpdateThreatIntelSet" // UpdateThreatIntelSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateThreatIntelSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3412,10 +3880,14 @@ type AcceptInvitationInput struct { DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` // This value is used to validate the master account to the member account. - InvitationId *string `locationName:"invitationId" type:"string"` + // + // InvitationId is a required field + InvitationId *string `locationName:"invitationId" type:"string" required:"true"` // The account ID of the master GuardDuty account whose invitation you're accepting. - MasterId *string `locationName:"masterId" type:"string"` + // + // MasterId is a required field + MasterId *string `locationName:"masterId" type:"string" required:"true"` } // String returns the string representation @@ -3434,6 +3906,12 @@ func (s *AcceptInvitationInput) Validate() error { if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.InvitationId == nil { + invalidParams.Add(request.NewErrParamRequired("InvitationId")) + } + if s.MasterId == nil { + invalidParams.Add(request.NewErrParamRequired("MasterId")) + } if invalidParams.Len() > 0 { return invalidParams @@ -3530,10 +4008,14 @@ type AccountDetail struct { _ struct{} `type:"structure"` // Member account ID. - AccountId *string `locationName:"accountId" type:"string"` + // + // AccountId is a required field + AccountId *string `locationName:"accountId" type:"string" required:"true"` // Member account's email address. 
- Email *string `locationName:"email" type:"string"` + // + // Email is a required field + Email *string `locationName:"email" type:"string" required:"true"` } // String returns the string representation @@ -3546,6 +4028,22 @@ func (s AccountDetail) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *AccountDetail) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AccountDetail"} + if s.AccountId == nil { + invalidParams.Add(request.NewErrParamRequired("AccountId")) + } + if s.Email == nil { + invalidParams.Add(request.NewErrParamRequired("Email")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAccountId sets the AccountId field's value. func (s *AccountDetail) SetAccountId(v string) *AccountDetail { s.AccountId = &v @@ -3626,7 +4124,9 @@ type ArchiveFindingsInput struct { DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` // IDs of the findings that you want to archive. - FindingIds []*string `locationName:"findingIds" type:"list"` + // + // FindingIds is a required field + FindingIds []*string `locationName:"findingIds" type:"list" required:"true"` } // String returns the string representation @@ -3645,6 +4145,9 @@ func (s *ArchiveFindingsInput) Validate() error { if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.FindingIds == nil { + invalidParams.Add(request.NewErrParamRequired("FindingIds")) + } if invalidParams.Len() > 0 { return invalidParams @@ -3875,8 +4378,16 @@ func (s *Country) SetCountryName(v string) *Country { type CreateDetectorInput struct { _ struct{} `type:"structure"` + // The idempotency token for the create request. + ClientToken *string `locationName:"clientToken" type:"string" idempotencyToken:"true"` + // A boolean value that specifies whether the detector is to be enabled. - Enable *bool `locationName:"enable" type:"boolean"` + // + // Enable is a required field + Enable *bool `locationName:"enable" type:"boolean" required:"true"` + + // A enum value that specifies how frequently customer got Finding updates published. + FindingPublishingFrequency *string `locationName:"findingPublishingFrequency" type:"string" enum:"FindingPublishingFrequency"` } // String returns the string representation @@ -3889,12 +4400,37 @@ func (s CreateDetectorInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDetectorInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDetectorInput"} + if s.Enable == nil { + invalidParams.Add(request.NewErrParamRequired("Enable")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateDetectorInput) SetClientToken(v string) *CreateDetectorInput { + s.ClientToken = &v + return s +} + // SetEnable sets the Enable field's value. func (s *CreateDetectorInput) SetEnable(v bool) *CreateDetectorInput { s.Enable = &v return s } +// SetFindingPublishingFrequency sets the FindingPublishingFrequency field's value. +func (s *CreateDetectorInput) SetFindingPublishingFrequency(v string) *CreateDetectorInput { + s.FindingPublishingFrequency = &v + return s +} + // CreateDetector response object. 
type CreateDetectorOutput struct { _ struct{} `type:"structure"` @@ -3919,27 +4455,165 @@ func (s *CreateDetectorOutput) SetDetectorId(v string) *CreateDetectorOutput { return s } +// CreateFilter request object. +type CreateFilterInput struct { + _ struct{} `type:"structure"` + + // Specifies the action that is to be applied to the findings that match the + // filter. + Action *string `locationName:"action" type:"string" enum:"FilterAction"` + + // The idempotency token for the create request. + ClientToken *string `locationName:"clientToken" type:"string" idempotencyToken:"true"` + + // The description of the filter. + Description *string `locationName:"description" type:"string"` + + // DetectorId is a required field + DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` + + // Represents the criteria to be used in the filter for querying findings. + // + // FindingCriteria is a required field + FindingCriteria *FindingCriteria `locationName:"findingCriteria" type:"structure" required:"true"` + + // The name of the filter. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // Specifies the position of the filter in the list of current filters. Also + // specifies the order in which this filter is applied to the findings. + Rank *int64 `locationName:"rank" type:"integer"` +} + +// String returns the string representation +func (s CreateFilterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFilterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateFilterInput"} + if s.DetectorId == nil { + invalidParams.Add(request.NewErrParamRequired("DetectorId")) + } + if s.FindingCriteria == nil { + invalidParams.Add(request.NewErrParamRequired("FindingCriteria")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAction sets the Action field's value. +func (s *CreateFilterInput) SetAction(v string) *CreateFilterInput { + s.Action = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateFilterInput) SetClientToken(v string) *CreateFilterInput { + s.ClientToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateFilterInput) SetDescription(v string) *CreateFilterInput { + s.Description = &v + return s +} + +// SetDetectorId sets the DetectorId field's value. +func (s *CreateFilterInput) SetDetectorId(v string) *CreateFilterInput { + s.DetectorId = &v + return s +} + +// SetFindingCriteria sets the FindingCriteria field's value. +func (s *CreateFilterInput) SetFindingCriteria(v *FindingCriteria) *CreateFilterInput { + s.FindingCriteria = v + return s +} + +// SetName sets the Name field's value. +func (s *CreateFilterInput) SetName(v string) *CreateFilterInput { + s.Name = &v + return s +} + +// SetRank sets the Rank field's value. +func (s *CreateFilterInput) SetRank(v int64) *CreateFilterInput { + s.Rank = &v + return s +} + +// CreateFilter response object. +type CreateFilterOutput struct { + _ struct{} `type:"structure"` + + // The name of the successfully created filter. 
+ Name *string `locationName:"name" type:"string"` +} + +// String returns the string representation +func (s CreateFilterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateFilterOutput) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *CreateFilterOutput) SetName(v string) *CreateFilterOutput { + s.Name = &v + return s +} + // Create IP Set Request type CreateIPSetInput struct { _ struct{} `type:"structure"` // A boolean value that indicates whether GuardDuty is to start using the uploaded // IPSet. - Activate *bool `locationName:"activate" type:"boolean"` + // + // Activate is a required field + Activate *bool `locationName:"activate" type:"boolean" required:"true"` + + // The idempotency token for the create request. + ClientToken *string `locationName:"clientToken" type:"string" idempotencyToken:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` // The format of the file that contains the IPSet. - Format *string `locationName:"format" type:"string" enum:"IpSetFormat"` + // + // Format is a required field + Format *string `locationName:"format" type:"string" required:"true" enum:"IpSetFormat"` // The URI of the file that contains the IPSet. For example (https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key) - Location *string `locationName:"location" type:"string"` + // + // Location is a required field + Location *string `locationName:"location" type:"string" required:"true"` // The user friendly name to identify the IPSet. This name is displayed in all // findings that are triggered by activity that involves IP addresses included // in this IPSet. - Name *string `locationName:"name" type:"string"` + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` } // String returns the string representation @@ -3955,9 +4629,21 @@ func (s CreateIPSetInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *CreateIPSetInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreateIPSetInput"} + if s.Activate == nil { + invalidParams.Add(request.NewErrParamRequired("Activate")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.Format == nil { + invalidParams.Add(request.NewErrParamRequired("Format")) + } + if s.Location == nil { + invalidParams.Add(request.NewErrParamRequired("Location")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } if invalidParams.Len() > 0 { return invalidParams @@ -3971,6 +4657,12 @@ func (s *CreateIPSetInput) SetActivate(v bool) *CreateIPSetInput { return s } +// SetClientToken sets the ClientToken field's value. +func (s *CreateIPSetInput) SetClientToken(v string) *CreateIPSetInput { + s.ClientToken = &v + return s +} + // SetDetectorId sets the DetectorId field's value. func (s *CreateIPSetInput) SetDetectorId(v string) *CreateIPSetInput { s.DetectorId = &v @@ -4025,7 +4717,9 @@ type CreateMembersInput struct { // A list of account ID and email address pairs of the accounts that you want // to associate with the master GuardDuty account. 
- AccountDetails []*AccountDetail `locationName:"accountDetails" type:"list"` + // + // AccountDetails is a required field + AccountDetails []*AccountDetail `locationName:"accountDetails" type:"list" required:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` @@ -4044,9 +4738,22 @@ func (s CreateMembersInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *CreateMembersInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreateMembersInput"} + if s.AccountDetails == nil { + invalidParams.Add(request.NewErrParamRequired("AccountDetails")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.AccountDetails != nil { + for i, v := range s.AccountDetails { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AccountDetails", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -4157,20 +4864,31 @@ type CreateThreatIntelSetInput struct { // A boolean value that indicates whether GuardDuty is to start using the uploaded // ThreatIntelSet. - Activate *bool `locationName:"activate" type:"boolean"` + // + // Activate is a required field + Activate *bool `locationName:"activate" type:"boolean" required:"true"` + + // The idempotency token for the create request. + ClientToken *string `locationName:"clientToken" type:"string" idempotencyToken:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` // The format of the file that contains the ThreatIntelSet. - Format *string `locationName:"format" type:"string" enum:"ThreatIntelSetFormat"` + // + // Format is a required field + Format *string `locationName:"format" type:"string" required:"true" enum:"ThreatIntelSetFormat"` // The URI of the file that contains the ThreatIntelSet. For example (https://s3.us-west-2.amazonaws.com/my-bucket/my-object-key). - Location *string `locationName:"location" type:"string"` + // + // Location is a required field + Location *string `locationName:"location" type:"string" required:"true"` // A user-friendly ThreatIntelSet name that is displayed in all finding generated // by activity that involves IP addresses included in this ThreatIntelSet. - Name *string `locationName:"name" type:"string"` + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` } // String returns the string representation @@ -4186,9 +4904,21 @@ func (s CreateThreatIntelSetInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
func (s *CreateThreatIntelSetInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreateThreatIntelSetInput"} + if s.Activate == nil { + invalidParams.Add(request.NewErrParamRequired("Activate")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.Format == nil { + invalidParams.Add(request.NewErrParamRequired("Format")) + } + if s.Location == nil { + invalidParams.Add(request.NewErrParamRequired("Location")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } if invalidParams.Len() > 0 { return invalidParams @@ -4202,6 +4932,12 @@ func (s *CreateThreatIntelSetInput) SetActivate(v bool) *CreateThreatIntelSetInp return s } +// SetClientToken sets the ClientToken field's value. +func (s *CreateThreatIntelSetInput) SetClientToken(v string) *CreateThreatIntelSetInput { + s.ClientToken = &v + return s +} + // SetDetectorId sets the DetectorId field's value. func (s *CreateThreatIntelSetInput) SetDetectorId(v string) *CreateThreatIntelSetInput { s.DetectorId = &v @@ -4256,7 +4992,9 @@ type DeclineInvitationsInput struct { // A list of account IDs of the AWS accounts that sent invitations to the current // member account that you want to decline invitations from. - AccountIds []*string `locationName:"accountIds" type:"list"` + // + // AccountIds is a required field + AccountIds []*string `locationName:"accountIds" type:"list" required:"true"` } // String returns the string representation @@ -4269,6 +5007,19 @@ func (s DeclineInvitationsInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeclineInvitationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeclineInvitationsInput"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAccountIds sets the AccountIds field's value. func (s *DeclineInvitationsInput) SetAccountIds(v []*string) *DeclineInvitationsInput { s.AccountIds = v @@ -4350,6 +5101,68 @@ func (s DeleteDetectorOutput) GoString() string { return s.String() } +type DeleteFilterInput struct { + _ struct{} `type:"structure"` + + // DetectorId is a required field + DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` + + // FilterName is a required field + FilterName *string `location:"uri" locationName:"filterName" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteFilterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFilterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteFilterInput"} + if s.DetectorId == nil { + invalidParams.Add(request.NewErrParamRequired("DetectorId")) + } + if s.FilterName == nil { + invalidParams.Add(request.NewErrParamRequired("FilterName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDetectorId sets the DetectorId field's value. +func (s *DeleteFilterInput) SetDetectorId(v string) *DeleteFilterInput { + s.DetectorId = &v + return s +} + +// SetFilterName sets the FilterName field's value. 
+func (s *DeleteFilterInput) SetFilterName(v string) *DeleteFilterInput { + s.FilterName = &v + return s +} + +type DeleteFilterOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteFilterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFilterOutput) GoString() string { + return s.String() +} + type DeleteIPSetInput struct { _ struct{} `type:"structure"` @@ -4418,7 +5231,9 @@ type DeleteInvitationsInput struct { // A list of account IDs of the AWS accounts that sent invitations to the current // member account that you want to delete invitations from. - AccountIds []*string `locationName:"accountIds" type:"list"` + // + // AccountIds is a required field + AccountIds []*string `locationName:"accountIds" type:"list" required:"true"` } // String returns the string representation @@ -4431,6 +5246,19 @@ func (s DeleteInvitationsInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteInvitationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInvitationsInput"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAccountIds sets the AccountIds field's value. func (s *DeleteInvitationsInput) SetAccountIds(v []*string) *DeleteInvitationsInput { s.AccountIds = v @@ -4467,7 +5295,9 @@ type DeleteMembersInput struct { _ struct{} `type:"structure"` // A list of account IDs of the GuardDuty member accounts that you want to delete. - AccountIds []*string `locationName:"accountIds" type:"list"` + // + // AccountIds is a required field + AccountIds []*string `locationName:"accountIds" type:"list" required:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` @@ -4486,6 +5316,9 @@ func (s DeleteMembersInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *DeleteMembersInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "DeleteMembersInput"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } @@ -4651,7 +5484,9 @@ type DisassociateMembersInput struct { // A list of account IDs of the GuardDuty member accounts that you want to disassociate // from master. - AccountIds []*string `locationName:"accountIds" type:"list"` + // + // AccountIds is a required field + AccountIds []*string `locationName:"accountIds" type:"list" required:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` @@ -4670,6 +5505,9 @@ func (s DisassociateMembersInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
func (s *DisassociateMembersInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "DisassociateMembersInput"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } @@ -4762,51 +5600,71 @@ type Finding struct { // AWS account ID where the activity occurred that prompted GuardDuty to generate // a finding. - AccountId *string `locationName:"accountId" type:"string"` + // + // AccountId is a required field + AccountId *string `locationName:"accountId" type:"string" required:"true"` // The ARN of a finding described by the action. - Arn *string `locationName:"arn" type:"string"` + // + // Arn is a required field + Arn *string `locationName:"arn" type:"string" required:"true"` // The confidence level of a finding. Confidence *float64 `locationName:"confidence" type:"double"` // The time stamp at which a finding was generated. - CreatedAt *string `locationName:"createdAt" type:"string"` + // + // CreatedAt is a required field + CreatedAt *string `locationName:"createdAt" type:"string" required:"true"` // The description of a finding. Description *string `locationName:"description" type:"string"` // The identifier that corresponds to a finding described by the action. - Id *string `locationName:"id" type:"string"` + // + // Id is a required field + Id *string `locationName:"id" type:"string" required:"true"` // The AWS resource partition. Partition *string `locationName:"partition" type:"string"` // The AWS region where the activity occurred that prompted GuardDuty to generate // a finding. - Region *string `locationName:"region" type:"string"` + // + // Region is a required field + Region *string `locationName:"region" type:"string" required:"true"` // The AWS resource associated with the activity that prompted GuardDuty to // generate a finding. - Resource *Resource `locationName:"resource" type:"structure"` + // + // Resource is a required field + Resource *Resource `locationName:"resource" type:"structure" required:"true"` // Findings' schema version. - SchemaVersion *string `locationName:"schemaVersion" type:"string"` + // + // SchemaVersion is a required field + SchemaVersion *string `locationName:"schemaVersion" type:"string" required:"true"` // Additional information assigned to the generated finding by GuardDuty. Service *Service `locationName:"service" type:"structure"` // The severity of a finding. - Severity *float64 `locationName:"severity" type:"double"` + // + // Severity is a required field + Severity *float64 `locationName:"severity" type:"double" required:"true"` // The title of a finding. Title *string `locationName:"title" type:"string"` // The type of a finding described by the action. - Type *string `locationName:"type" type:"string"` + // + // Type is a required field + Type *string `locationName:"type" type:"string" required:"true"` // The time stamp at which a finding was last updated. - UpdatedAt *string `locationName:"updatedAt" type:"string"` + // + // UpdatedAt is a required field + UpdatedAt *string `locationName:"updatedAt" type:"string" required:"true"` } // String returns the string representation @@ -4979,41 +5837,143 @@ func (s GeoLocation) GoString() string { return s.String() } -// SetLat sets the Lat field's value. -func (s *GeoLocation) SetLat(v float64) *GeoLocation { - s.Lat = &v +// SetLat sets the Lat field's value. 
+func (s *GeoLocation) SetLat(v float64) *GeoLocation { + s.Lat = &v + return s +} + +// SetLon sets the Lon field's value. +func (s *GeoLocation) SetLon(v float64) *GeoLocation { + s.Lon = &v + return s +} + +type GetDetectorInput struct { + _ struct{} `type:"structure"` + + // DetectorId is a required field + DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetDetectorInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDetectorInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetDetectorInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDetectorInput"} + if s.DetectorId == nil { + invalidParams.Add(request.NewErrParamRequired("DetectorId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDetectorId sets the DetectorId field's value. +func (s *GetDetectorInput) SetDetectorId(v string) *GetDetectorInput { + s.DetectorId = &v + return s +} + +// GetDetector response object. +type GetDetectorOutput struct { + _ struct{} `type:"structure"` + + // The first time a resource was created. The format will be ISO-8601. + CreatedAt *string `locationName:"createdAt" type:"string"` + + // A enum value that specifies how frequently customer got Finding updates published. + FindingPublishingFrequency *string `locationName:"findingPublishingFrequency" type:"string" enum:"FindingPublishingFrequency"` + + // Customer serviceRole name or ARN for accessing customer resources + ServiceRole *string `locationName:"serviceRole" type:"string"` + + // The status of detector. + Status *string `locationName:"status" type:"string" enum:"DetectorStatus"` + + // The first time a resource was created. The format will be ISO-8601. + UpdatedAt *string `locationName:"updatedAt" type:"string"` +} + +// String returns the string representation +func (s GetDetectorOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDetectorOutput) GoString() string { + return s.String() +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *GetDetectorOutput) SetCreatedAt(v string) *GetDetectorOutput { + s.CreatedAt = &v + return s +} + +// SetFindingPublishingFrequency sets the FindingPublishingFrequency field's value. +func (s *GetDetectorOutput) SetFindingPublishingFrequency(v string) *GetDetectorOutput { + s.FindingPublishingFrequency = &v + return s +} + +// SetServiceRole sets the ServiceRole field's value. +func (s *GetDetectorOutput) SetServiceRole(v string) *GetDetectorOutput { + s.ServiceRole = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetDetectorOutput) SetStatus(v string) *GetDetectorOutput { + s.Status = &v return s } -// SetLon sets the Lon field's value. -func (s *GeoLocation) SetLon(v float64) *GeoLocation { - s.Lon = &v +// SetUpdatedAt sets the UpdatedAt field's value. 
+func (s *GetDetectorOutput) SetUpdatedAt(v string) *GetDetectorOutput { + s.UpdatedAt = &v return s } -type GetDetectorInput struct { +type GetFilterInput struct { _ struct{} `type:"structure"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` + + // FilterName is a required field + FilterName *string `location:"uri" locationName:"filterName" type:"string" required:"true"` } // String returns the string representation -func (s GetDetectorInput) String() string { +func (s GetFilterInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDetectorInput) GoString() string { +func (s GetFilterInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetDetectorInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetDetectorInput"} +func (s *GetFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetFilterInput"} if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.FilterName == nil { + invalidParams.Add(request.NewErrParamRequired("FilterName")) + } if invalidParams.Len() > 0 { return invalidParams @@ -5022,59 +5982,76 @@ func (s *GetDetectorInput) Validate() error { } // SetDetectorId sets the DetectorId field's value. -func (s *GetDetectorInput) SetDetectorId(v string) *GetDetectorInput { +func (s *GetFilterInput) SetDetectorId(v string) *GetFilterInput { s.DetectorId = &v return s } -// GetDetector response object. -type GetDetectorOutput struct { +// SetFilterName sets the FilterName field's value. +func (s *GetFilterInput) SetFilterName(v string) *GetFilterInput { + s.FilterName = &v + return s +} + +// GetFilter response object. +type GetFilterOutput struct { _ struct{} `type:"structure"` - // The first time a resource was created. The format will be ISO-8601. - CreatedAt *string `locationName:"createdAt" type:"string"` + // Specifies the action that is to be applied to the findings that match the + // filter. + Action *string `locationName:"action" type:"string" enum:"FilterAction"` - // Customer serviceRole name or ARN for accessing customer resources - ServiceRole *string `locationName:"serviceRole" type:"string"` + // The description of the filter. + Description *string `locationName:"description" type:"string"` - // The status of detector. - Status *string `locationName:"status" type:"string" enum:"DetectorStatus"` + // Represents the criteria to be used in the filter for querying findings. + FindingCriteria *FindingCriteria `locationName:"findingCriteria" type:"structure"` - // The first time a resource was created. The format will be ISO-8601. - UpdatedAt *string `locationName:"updatedAt" type:"string"` + // The name of the filter. + Name *string `locationName:"name" type:"string"` + + // Specifies the position of the filter in the list of current filters. Also + // specifies the order in which this filter is applied to the findings. + Rank *int64 `locationName:"rank" type:"integer"` } // String returns the string representation -func (s GetDetectorOutput) String() string { +func (s GetFilterOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDetectorOutput) GoString() string { +func (s GetFilterOutput) GoString() string { return s.String() } -// SetCreatedAt sets the CreatedAt field's value. 
-func (s *GetDetectorOutput) SetCreatedAt(v string) *GetDetectorOutput { - s.CreatedAt = &v +// SetAction sets the Action field's value. +func (s *GetFilterOutput) SetAction(v string) *GetFilterOutput { + s.Action = &v return s } -// SetServiceRole sets the ServiceRole field's value. -func (s *GetDetectorOutput) SetServiceRole(v string) *GetDetectorOutput { - s.ServiceRole = &v +// SetDescription sets the Description field's value. +func (s *GetFilterOutput) SetDescription(v string) *GetFilterOutput { + s.Description = &v return s } -// SetStatus sets the Status field's value. -func (s *GetDetectorOutput) SetStatus(v string) *GetDetectorOutput { - s.Status = &v +// SetFindingCriteria sets the FindingCriteria field's value. +func (s *GetFilterOutput) SetFindingCriteria(v *FindingCriteria) *GetFilterOutput { + s.FindingCriteria = v return s } -// SetUpdatedAt sets the UpdatedAt field's value. -func (s *GetDetectorOutput) SetUpdatedAt(v string) *GetDetectorOutput { - s.UpdatedAt = &v +// SetName sets the Name field's value. +func (s *GetFilterOutput) SetName(v string) *GetFilterOutput { + s.Name = &v + return s +} + +// SetRank sets the Rank field's value. +func (s *GetFilterOutput) SetRank(v int64) *GetFilterOutput { + s.Rank = &v return s } @@ -5086,7 +6063,9 @@ type GetFindingsInput struct { DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` // IDs of the findings that you want to retrieve. - FindingIds []*string `locationName:"findingIds" type:"list"` + // + // FindingIds is a required field + FindingIds []*string `locationName:"findingIds" type:"list" required:"true"` // Represents the criteria used for sorting findings. SortCriteria *SortCriteria `locationName:"sortCriteria" type:"structure"` @@ -5108,6 +6087,9 @@ func (s *GetFindingsInput) Validate() error { if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.FindingIds == nil { + invalidParams.Add(request.NewErrParamRequired("FindingIds")) + } if invalidParams.Len() > 0 { return invalidParams @@ -5168,7 +6150,9 @@ type GetFindingsStatisticsInput struct { FindingCriteria *FindingCriteria `locationName:"findingCriteria" type:"structure"` // Types of finding statistics to retrieve. - FindingStatisticTypes []*string `locationName:"findingStatisticTypes" type:"list"` + // + // FindingStatisticTypes is a required field + FindingStatisticTypes []*string `locationName:"findingStatisticTypes" type:"list" required:"true"` } // String returns the string representation @@ -5187,6 +6171,9 @@ func (s *GetFindingsStatisticsInput) Validate() error { if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.FindingStatisticTypes == nil { + invalidParams.Add(request.NewErrParamRequired("FindingStatisticTypes")) + } if invalidParams.Len() > 0 { return invalidParams @@ -5440,7 +6427,9 @@ type GetMembersInput struct { _ struct{} `type:"structure"` // A list of account IDs of the GuardDuty member accounts that you want to describe. - AccountIds []*string `locationName:"accountIds" type:"list"` + // + // AccountIds is a required field + AccountIds []*string `locationName:"accountIds" type:"list" required:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` @@ -5459,6 +6448,9 @@ func (s GetMembersInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
func (s *GetMembersInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "GetMembersInput"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } @@ -5659,6 +6651,9 @@ type InstanceDetails struct { // The profile information of the EC2 instance. IamInstanceProfile *IamInstanceProfile `locationName:"iamInstanceProfile" type:"structure"` + // The image description of the EC2 instance. + ImageDescription *string `locationName:"imageDescription" type:"string"` + // The image ID of the EC2 instance. ImageId *string `locationName:"imageId" type:"string"` @@ -5709,6 +6704,12 @@ func (s *InstanceDetails) SetIamInstanceProfile(v *IamInstanceProfile) *Instance return s } +// SetImageDescription sets the ImageDescription field's value. +func (s *InstanceDetails) SetImageDescription(v string) *InstanceDetails { + s.ImageDescription = &v + return s +} + // SetImageId sets the ImageId field's value. func (s *InstanceDetails) SetImageId(v string) *InstanceDetails { s.ImageId = &v @@ -5820,12 +6821,18 @@ type InviteMembersInput struct { // A list of account IDs of the accounts that you want to invite to GuardDuty // as members. - AccountIds []*string `locationName:"accountIds" type:"list"` + // + // AccountIds is a required field + AccountIds []*string `locationName:"accountIds" type:"list" required:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` - // The invitation message that you want to send to the accounts that you're + // A boolean value that specifies whether you want to disable email notification + // to the accounts that you’re inviting to GuardDuty as members. + DisableEmailNotification *bool `locationName:"disableEmailNotification" type:"boolean"` + + // The invitation message that you want to send to the accounts that you’re // inviting to GuardDuty as members. Message *string `locationName:"message" type:"string"` } @@ -5843,6 +6850,9 @@ func (s InviteMembersInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *InviteMembersInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "InviteMembersInput"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } @@ -5865,6 +6875,12 @@ func (s *InviteMembersInput) SetDetectorId(v string) *InviteMembersInput { return s } +// SetDisableEmailNotification sets the DisableEmailNotification field's value. +func (s *InviteMembersInput) SetDisableEmailNotification(v bool) *InviteMembersInput { + s.DisableEmailNotification = &v + return s +} + // SetMessage sets the Message field's value. func (s *InviteMembersInput) SetMessage(v string) *InviteMembersInput { s.Message = &v @@ -5977,6 +6993,99 @@ func (s *ListDetectorsOutput) SetNextToken(v string) *ListDetectorsOutput { return s } +type ListFiltersInput struct { + _ struct{} `type:"structure"` + + // DetectorId is a required field + DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` + + // You can use this parameter to indicate the maximum number of items that you + // want in the response. 
+ MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListFiltersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListFiltersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListFiltersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListFiltersInput"} + if s.DetectorId == nil { + invalidParams.Add(request.NewErrParamRequired("DetectorId")) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDetectorId sets the DetectorId field's value. +func (s *ListFiltersInput) SetDetectorId(v string) *ListFiltersInput { + s.DetectorId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListFiltersInput) SetMaxResults(v int64) *ListFiltersInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListFiltersInput) SetNextToken(v string) *ListFiltersInput { + s.NextToken = &v + return s +} + +// ListFilters response object. +type ListFiltersOutput struct { + _ struct{} `type:"structure"` + + // A list of filter names + FilterNames []*string `locationName:"filterNames" type:"list"` + + // You can use this parameter when paginating results. Set the value of this + // parameter to null on your first call to the list action. For subsequent calls + // to the action fill nextToken in the request with the value of NextToken from + // the previous response to continue listing data. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListFiltersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListFiltersOutput) GoString() string { + return s.String() +} + +// SetFilterNames sets the FilterNames field's value. +func (s *ListFiltersOutput) SetFilterNames(v []*string) *ListFiltersOutput { + s.FilterNames = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListFiltersOutput) SetNextToken(v string) *ListFiltersOutput { + s.NextToken = &v + return s +} + // List Findings Request type ListFindingsInput struct { _ struct{} `type:"structure"` @@ -6550,25 +7659,35 @@ type Member struct { _ struct{} `type:"structure"` // AWS account ID. - AccountId *string `locationName:"accountId" type:"string"` + // + // AccountId is a required field + AccountId *string `locationName:"accountId" type:"string" required:"true"` // The unique identifier for a detector. DetectorId *string `locationName:"detectorId" type:"string"` // Member account's email address. - Email *string `locationName:"email" type:"string"` + // + // Email is a required field + Email *string `locationName:"email" type:"string" required:"true"` // Timestamp at which the invitation was sent InvitedAt *string `locationName:"invitedAt" type:"string"` // The master account ID. - MasterId *string `locationName:"masterId" type:"string"` + // + // MasterId is a required field + MasterId *string `locationName:"masterId" type:"string" required:"true"` // The status of the relationship between the member and the master. 
- RelationshipStatus *string `locationName:"relationshipStatus" type:"string"` + // + // RelationshipStatus is a required field + RelationshipStatus *string `locationName:"relationshipStatus" type:"string" required:"true"` // The first time a resource was created. The format will be ISO-8601. - UpdatedAt *string `locationName:"updatedAt" type:"string"` + // + // UpdatedAt is a required field + UpdatedAt *string `locationName:"updatedAt" type:"string" required:"true"` } // String returns the string representation @@ -6699,6 +7818,9 @@ type NetworkInterface struct { // A list of EC2 instance IPv6 address information. Ipv6Addresses []*string `locationName:"ipv6Addresses" type:"list"` + // The ID of the network interface + NetworkInterfaceId *string `locationName:"networkInterfaceId" type:"string"` + // Private DNS name of the EC2 instance. PrivateDnsName *string `locationName:"privateDnsName" type:"string"` @@ -6740,6 +7862,12 @@ func (s *NetworkInterface) SetIpv6Addresses(v []*string) *NetworkInterface { return s } +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *NetworkInterface) SetNetworkInterfaceId(v string) *NetworkInterface { + s.NetworkInterfaceId = &v + return s +} + // SetPrivateDnsName sets the PrivateDnsName field's value. func (s *NetworkInterface) SetPrivateDnsName(v string) *NetworkInterface { s.PrivateDnsName = &v @@ -7280,7 +8408,9 @@ type StartMonitoringMembersInput struct { // A list of account IDs of the GuardDuty member accounts whose findings you // want the master account to monitor. - AccountIds []*string `locationName:"accountIds" type:"list"` + // + // AccountIds is a required field + AccountIds []*string `locationName:"accountIds" type:"list" required:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` @@ -7299,6 +8429,9 @@ func (s StartMonitoringMembersInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *StartMonitoringMembersInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "StartMonitoringMembersInput"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } @@ -7352,7 +8485,9 @@ type StopMonitoringMembersInput struct { // A list of account IDs of the GuardDuty member accounts whose findings you // want the master account to stop monitoring. - AccountIds []*string `locationName:"accountIds" type:"list"` + // + // AccountIds is a required field + AccountIds []*string `locationName:"accountIds" type:"list" required:"true"` // DetectorId is a required field DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` @@ -7371,6 +8506,9 @@ func (s StopMonitoringMembersInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *StopMonitoringMembersInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "StopMonitoringMembersInput"} + if s.AccountIds == nil { + invalidParams.Add(request.NewErrParamRequired("AccountIds")) + } if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } @@ -7459,7 +8597,9 @@ type UnarchiveFindingsInput struct { DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` // IDs of the findings that you want to unarchive. 
- FindingIds []*string `locationName:"findingIds" type:"list"` + // + // FindingIds is a required field + FindingIds []*string `locationName:"findingIds" type:"list" required:"true"` } // String returns the string representation @@ -7478,6 +8618,9 @@ func (s *UnarchiveFindingsInput) Validate() error { if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.FindingIds == nil { + invalidParams.Add(request.NewErrParamRequired("FindingIds")) + } if invalidParams.Len() > 0 { return invalidParams @@ -7517,10 +8660,14 @@ type UnprocessedAccount struct { _ struct{} `type:"structure"` // AWS Account ID. - AccountId *string `locationName:"accountId" type:"string"` + // + // AccountId is a required field + AccountId *string `locationName:"accountId" type:"string" required:"true"` // A reason why the account hasn't been processed. - Result *string `locationName:"result" type:"string"` + // + // Result is a required field + Result *string `locationName:"result" type:"string" required:"true"` } // String returns the string representation @@ -7555,6 +8702,9 @@ type UpdateDetectorInput struct { // Updated boolean value for the detector that specifies whether the detector // is enabled. Enable *bool `locationName:"enable" type:"boolean"` + + // A enum value that specifies how frequently customer got Finding updates published. + FindingPublishingFrequency *string `locationName:"findingPublishingFrequency" type:"string" enum:"FindingPublishingFrequency"` } // String returns the string representation @@ -7592,6 +8742,12 @@ func (s *UpdateDetectorInput) SetEnable(v bool) *UpdateDetectorInput { return s } +// SetFindingPublishingFrequency sets the FindingPublishingFrequency field's value. +func (s *UpdateDetectorInput) SetFindingPublishingFrequency(v string) *UpdateDetectorInput { + s.FindingPublishingFrequency = &v + return s +} + type UpdateDetectorOutput struct { _ struct{} `type:"structure"` } @@ -7606,6 +8762,117 @@ func (s UpdateDetectorOutput) GoString() string { return s.String() } +// UpdateFilter request object. +type UpdateFilterInput struct { + _ struct{} `type:"structure"` + + // Specifies the action that is to be applied to the findings that match the + // filter. + Action *string `locationName:"action" type:"string" enum:"FilterAction"` + + // The description of the filter. + Description *string `locationName:"description" type:"string"` + + // DetectorId is a required field + DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` + + // FilterName is a required field + FilterName *string `location:"uri" locationName:"filterName" type:"string" required:"true"` + + // Represents the criteria to be used in the filter for querying findings. + FindingCriteria *FindingCriteria `locationName:"findingCriteria" type:"structure"` + + // Specifies the position of the filter in the list of current filters. Also + // specifies the order in which this filter is applied to the findings. + Rank *int64 `locationName:"rank" type:"integer"` +} + +// String returns the string representation +func (s UpdateFilterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateFilterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateFilterInput"} + if s.DetectorId == nil { + invalidParams.Add(request.NewErrParamRequired("DetectorId")) + } + if s.FilterName == nil { + invalidParams.Add(request.NewErrParamRequired("FilterName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAction sets the Action field's value. +func (s *UpdateFilterInput) SetAction(v string) *UpdateFilterInput { + s.Action = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateFilterInput) SetDescription(v string) *UpdateFilterInput { + s.Description = &v + return s +} + +// SetDetectorId sets the DetectorId field's value. +func (s *UpdateFilterInput) SetDetectorId(v string) *UpdateFilterInput { + s.DetectorId = &v + return s +} + +// SetFilterName sets the FilterName field's value. +func (s *UpdateFilterInput) SetFilterName(v string) *UpdateFilterInput { + s.FilterName = &v + return s +} + +// SetFindingCriteria sets the FindingCriteria field's value. +func (s *UpdateFilterInput) SetFindingCriteria(v *FindingCriteria) *UpdateFilterInput { + s.FindingCriteria = v + return s +} + +// SetRank sets the Rank field's value. +func (s *UpdateFilterInput) SetRank(v int64) *UpdateFilterInput { + s.Rank = &v + return s +} + +// UpdateFilter response object. +type UpdateFilterOutput struct { + _ struct{} `type:"structure"` + + // The name of the filter. + Name *string `locationName:"name" type:"string"` +} + +// String returns the string representation +func (s UpdateFilterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateFilterOutput) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *UpdateFilterOutput) SetName(v string) *UpdateFilterOutput { + s.Name = &v + return s +} + // Update findings feedback body type UpdateFindingsFeedbackInput struct { _ struct{} `type:"structure"` @@ -7617,10 +8884,14 @@ type UpdateFindingsFeedbackInput struct { DetectorId *string `location:"uri" locationName:"detectorId" type:"string" required:"true"` // Valid values: USEFUL | NOT_USEFUL - Feedback *string `locationName:"feedback" type:"string" enum:"Feedback"` + // + // Feedback is a required field + Feedback *string `locationName:"feedback" type:"string" required:"true" enum:"Feedback"` // IDs of the findings that you want to mark as useful or not useful. - FindingIds []*string `locationName:"findingIds" type:"list"` + // + // FindingIds is a required field + FindingIds []*string `locationName:"findingIds" type:"list" required:"true"` } // String returns the string representation @@ -7639,6 +8910,12 @@ func (s *UpdateFindingsFeedbackInput) Validate() error { if s.DetectorId == nil { invalidParams.Add(request.NewErrParamRequired("DetectorId")) } + if s.Feedback == nil { + invalidParams.Add(request.NewErrParamRequired("Feedback")) + } + if s.FindingIds == nil { + invalidParams.Add(request.NewErrParamRequired("FindingIds")) + } if invalidParams.Len() > 0 { return invalidParams @@ -7884,6 +9161,27 @@ const ( FeedbackNotUseful = "NOT_USEFUL" ) +// The action associated with a filter. +const ( + // FilterActionNoop is a FilterAction enum value + FilterActionNoop = "NOOP" + + // FilterActionArchive is a FilterAction enum value + FilterActionArchive = "ARCHIVE" +) + +// A enum value that specifies how frequently customer got Finding updates published. 
+const ( + // FindingPublishingFrequencyFifteenMinutes is a FindingPublishingFrequency enum value + FindingPublishingFrequencyFifteenMinutes = "FIFTEEN_MINUTES" + + // FindingPublishingFrequencyOneHour is a FindingPublishingFrequency enum value + FindingPublishingFrequencyOneHour = "ONE_HOUR" + + // FindingPublishingFrequencySixHours is a FindingPublishingFrequency enum value + FindingPublishingFrequencySixHours = "SIX_HOURS" +) + // The types of finding statistics. const ( // FindingStatisticTypeCountBySeverity is a FindingStatisticType enum value diff --git a/vendor/github.com/aws/aws-sdk-go/service/guardduty/service.go b/vendor/github.com/aws/aws-sdk-go/service/guardduty/service.go index 995709381aa..0ca09946c29 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/guardduty/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/guardduty/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "guardduty" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "guardduty" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "GuardDuty" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the GuardDuty client with a session. @@ -45,19 +46,20 @@ const ( // svc := guardduty.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *GuardDuty { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "guardduty" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *GuardDuty { - if len(signingName) == 0 { - signingName = "guardduty" - } svc := &GuardDuty{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/api.go b/vendor/github.com/aws/aws-sdk-go/service/iam/api.go index f7878f23ed6..b0d0acfc601 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/iam/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/api.go @@ -17,8 +17,8 @@ const opAddClientIDToOpenIDConnectProvider = "AddClientIDToOpenIDConnectProvider // AddClientIDToOpenIDConnectProviderRequest generates a "aws/request.Request" representing the // client's request for the AddClientIDToOpenIDConnectProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -62,8 +62,8 @@ func (c *IAM) AddClientIDToOpenIDConnectProviderRequest(input *AddClientIDToOpen // Adds a new client ID (also known as audience) to the list of client IDs already // registered for the specified IAM OpenID Connect (OIDC) provider resource. 
// -// This action is idempotent; it does not fail or return an error if you add -// an existing client ID to the provider. +// This operation is idempotent; it does not fail or return an error if you +// add an existing client ID to the provider. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -115,8 +115,8 @@ const opAddRoleToInstanceProfile = "AddRoleToInstanceProfile" // AddRoleToInstanceProfileRequest generates a "aws/request.Request" representing the // client's request for the AddRoleToInstanceProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -158,7 +158,13 @@ func (c *IAM) AddRoleToInstanceProfileRequest(input *AddRoleToInstanceProfileInp // AddRoleToInstanceProfile API operation for AWS Identity and Access Management. // // Adds the specified IAM role to the specified instance profile. An instance -// profile can contain only one role, and this limit cannot be increased. +// profile can contain only one role, and this limit cannot be increased. You +// can remove the existing role and then add a different role to an instance +// profile. You must then wait for the change to appear across all of AWS because +// of eventual consistency (https://en.wikipedia.org/wiki/Eventual_consistency). +// To force the change, you must disassociate the instance profile (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DisassociateIamInstanceProfile.html) +// and then associate the instance profile (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssociateIamInstanceProfile.html), +// or you can stop your instance and then restart it. // // The caller of this API must be granted the PassRole permission on the IAM // role by a permission policy. @@ -223,8 +229,8 @@ const opAddUserToGroup = "AddUserToGroup" // AddUserToGroupRequest generates a "aws/request.Request" representing the // client's request for the AddUserToGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -313,8 +319,8 @@ const opAttachGroupPolicy = "AttachGroupPolicy" // AttachGroupPolicyRequest generates a "aws/request.Request" representing the // client's request for the AttachGroupPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -418,8 +424,8 @@ const opAttachRolePolicy = "AttachRolePolicy" // AttachRolePolicyRequest generates a "aws/request.Request" representing the // client's request for the AttachRolePolicy operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -533,8 +539,8 @@ const opAttachUserPolicy = "AttachUserPolicy" // AttachUserPolicyRequest generates a "aws/request.Request" representing the // client's request for the AttachUserPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -638,8 +644,8 @@ const opChangePassword = "ChangePassword" // ChangePasswordRequest generates a "aws/request.Request" representing the // client's request for the ChangePassword operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -680,8 +686,8 @@ func (c *IAM) ChangePasswordRequest(input *ChangePasswordInput) (req *request.Re // ChangePassword API operation for AWS Identity and Access Management. // -// Changes the password of the IAM user who is calling this action. The root -// account password is not affected by this action. +// Changes the password of the IAM user who is calling this operation. The AWS +// account root user password is not affected by this operation. // // To change the password for a different user, see UpdateLoginProfile. For // more information about modifying passwords, see Managing Passwords (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html) @@ -747,8 +753,8 @@ const opCreateAccessKey = "CreateAccessKey" // CreateAccessKeyRequest generates a "aws/request.Request" representing the // client's request for the CreateAccessKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -791,9 +797,10 @@ func (c *IAM) CreateAccessKeyRequest(input *CreateAccessKeyInput) (req *request. // the specified user. The default status for new keys is Active. // // If you do not specify a user name, IAM determines the user name implicitly -// based on the AWS access key ID signing the request. Because this action works -// for access keys under the AWS account, you can use this action to manage -// root credentials even if the AWS account has no associated users. +// based on the AWS access key ID signing the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials. 
This is true even if the AWS account +// has no associated users. // // For information about limits on the number of keys you can create, see Limitations // on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) @@ -851,8 +858,8 @@ const opCreateAccountAlias = "CreateAccountAlias" // CreateAccountAliasRequest generates a "aws/request.Request" representing the // client's request for the CreateAccountAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -943,8 +950,8 @@ const opCreateGroup = "CreateGroup" // CreateGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1039,8 +1046,8 @@ const opCreateInstanceProfile = "CreateInstanceProfile" // CreateInstanceProfileRequest generates a "aws/request.Request" representing the // client's request for the CreateInstanceProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1132,8 +1139,8 @@ const opCreateLoginProfile = "CreateLoginProfile" // CreateLoginProfileRequest generates a "aws/request.Request" representing the // client's request for the CreateLoginProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1231,8 +1238,8 @@ const opCreateOpenIDConnectProvider = "CreateOpenIDConnectProvider" // CreateOpenIDConnectProviderRequest generates a "aws/request.Request" representing the // client's request for the CreateOpenIDConnectProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1275,19 +1282,24 @@ func (c *IAM) CreateOpenIDConnectProviderRequest(input *CreateOpenIDConnectProvi // OpenID Connect (OIDC) (http://openid.net/connect/). 
// // The OIDC provider that you create with this operation can be used as a principal -// in a role's trust policy to establish a trust relationship between AWS and -// the OIDC provider. +// in a role's trust policy. Such a policy establishes a trust relationship +// between AWS and the OIDC provider. // -// When you create the IAM OIDC provider, you specify the URL of the OIDC identity -// provider (IdP) to trust, a list of client IDs (also known as audiences) that -// identify the application or applications that are allowed to authenticate -// using the OIDC provider, and a list of thumbprints of the server certificate(s) -// that the IdP uses. You get all of this information from the OIDC IdP that -// you want to use for access to AWS. +// When you create the IAM OIDC provider, you specify the following: // -// Because trust for the OIDC provider is ultimately derived from the IAM provider -// that this action creates, it is a best practice to limit access to the CreateOpenIDConnectProvider -// action to highly-privileged users. +// * The URL of the OIDC identity provider (IdP) to trust +// +// * A list of client IDs (also known as audiences) that identify the application +// or applications that are allowed to authenticate using the OIDC provider +// +// * A list of thumbprints of the server certificate(s) that the IdP uses. +// +// You get all of this information from the OIDC IdP that you want to use to +// access AWS. +// +// Because trust for the OIDC provider is derived from the IAM provider that +// this operation creates, it is best to limit access to the CreateOpenIDConnectProvider +// operation to highly privileged users. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1339,8 +1351,8 @@ const opCreatePolicy = "CreatePolicy" // CreatePolicyRequest generates a "aws/request.Request" representing the // client's request for the CreatePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1444,8 +1456,8 @@ const opCreatePolicyVersion = "CreatePolicyVersion" // CreatePolicyVersionRequest generates a "aws/request.Request" representing the // client's request for the CreatePolicyVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1551,8 +1563,8 @@ const opCreateRole = "CreateRole" // CreateRoleRequest generates a "aws/request.Request" representing the // client's request for the CreateRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
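The hunks above restructure the CreateOpenIDConnectProvider documentation into a list of the three inputs you supply (the IdP URL, the client IDs, and the server-certificate thumbprints). As a rough illustration only, a call through the Go SDK might look like the sketch below; the field names follow the SDK's `CreateOpenIDConnectProviderInput` struct (they are not shown in this diff) and the values are placeholders, not working settings.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Placeholder IdP settings -- replace with values from your own OIDC IdP.
	out, err := svc.CreateOpenIDConnectProvider(&iam.CreateOpenIDConnectProviderInput{
		Url:            aws.String("https://idp.example.com"),
		ClientIDList:   []*string{aws.String("my-application-id")},
		ThumbprintList: []*string{aws.String("c3768084dfb3d2b68b7897bf5f56da8eEXAMPLE")},
	})
	if err != nil {
		fmt.Println("CreateOpenIDConnectProvider failed:", err)
		return
	}
	fmt.Println("provider ARN:", aws.StringValue(out.OpenIDConnectProviderArn))
}
```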
// the "output" return value is not valid until after Send returns without error. @@ -1592,7 +1604,7 @@ func (c *IAM) CreateRoleRequest(input *CreateRoleInput) (req *request.Request, o // CreateRole API operation for AWS Identity and Access Management. // // Creates a new role for your AWS account. For more information about roles, -// go to Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). +// go to IAM Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). // For information about limitations on role names and the number of roles you // can create, go to Limitations on IAM Entities (http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html) // in the IAM User Guide. @@ -1621,6 +1633,11 @@ func (c *IAM) CreateRoleRequest(input *CreateRoleInput) (req *request.Request, o // The request was rejected because the policy document was malformed. The error // message describes the specific error. // +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. +// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. @@ -1651,8 +1668,8 @@ const opCreateSAMLProvider = "CreateSAMLProvider" // CreateSAMLProviderRequest generates a "aws/request.Request" representing the // client's request for the CreateSAMLProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1695,14 +1712,14 @@ func (c *IAM) CreateSAMLProviderRequest(input *CreateSAMLProviderInput) (req *re // SAML 2.0. // // The SAML provider resource that you create with this operation can be used -// as a principal in an IAM role's trust policy to enable federated users who -// sign-in using the SAML IdP to assume the role. You can create an IAM role -// that supports Web-based single sign-on (SSO) to the AWS Management Console -// or one that supports API access to AWS. -// -// When you create the SAML provider resource, you upload an a SAML metadata -// document that you get from your IdP and that includes the issuer's name, -// expiration information, and keys that can be used to validate the SAML authentication +// as a principal in an IAM role's trust policy. Such a policy can enable federated +// users who sign-in using the SAML IdP to assume the role. You can create an +// IAM role that supports Web-based single sign-on (SSO) to the AWS Management +// Console or one that supports API access to AWS. +// +// When you create the SAML provider resource, you upload a SAML metadata document +// that you get from your IdP. That document includes the issuer's name, expiration +// information, and keys that can be used to validate the SAML authentication // response (assertions) that the IdP sends. You must generate the metadata // document using the identity management software that is used as your organization's // IdP. 
@@ -1764,8 +1781,8 @@ const opCreateServiceLinkedRole = "CreateServiceLinkedRole" // CreateServiceLinkedRoleRequest generates a "aws/request.Request" representing the // client's request for the CreateServiceLinkedRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1809,12 +1826,9 @@ func (c *IAM) CreateServiceLinkedRoleRequest(input *CreateServiceLinkedRoleInput // ensure that the service is not broken by an unexpectedly changed or deleted // role, which could put your AWS resources into an unknown state. Allowing // the service to control the role helps improve service stability and proper -// cleanup when a service and its role are no longer needed. -// -// The name of the role is autogenerated by combining the string that you specify -// for the AWSServiceName parameter with the string that you specify for the -// CustomSuffix parameter. The resulting name must be unique in your account -// or the request fails. +// cleanup when a service and its role are no longer needed. For more information, +// see Using Service-Linked Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) +// in the IAM User Guide. // // To attach a policy to this service-linked role, you must make the request // using the AWS service that depends on this role. @@ -1869,8 +1883,8 @@ const opCreateServiceSpecificCredential = "CreateServiceSpecificCredential" // CreateServiceSpecificCredentialRequest generates a "aws/request.Request" representing the // client's request for the CreateServiceSpecificCredential operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1969,8 +1983,8 @@ const opCreateUser = "CreateUser" // CreateUserRequest generates a "aws/request.Request" representing the // client's request for the CreateUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2035,6 +2049,15 @@ func (c *IAM) CreateUserRequest(input *CreateUserInput) (req *request.Request, o // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. 
+// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. @@ -2065,8 +2088,8 @@ const opCreateVirtualMFADevice = "CreateVirtualMFADevice" // CreateVirtualMFADeviceRequest generates a "aws/request.Request" representing the // client's request for the CreateVirtualMFADevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2166,8 +2189,8 @@ const opDeactivateMFADevice = "DeactivateMFADevice" // DeactivateMFADeviceRequest generates a "aws/request.Request" representing the // client's request for the DeactivateMFADevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2267,8 +2290,8 @@ const opDeleteAccessKey = "DeleteAccessKey" // DeleteAccessKeyRequest generates a "aws/request.Request" representing the // client's request for the DeleteAccessKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2312,9 +2335,10 @@ func (c *IAM) DeleteAccessKeyRequest(input *DeleteAccessKeyInput) (req *request. // Deletes the access key pair associated with the specified IAM user. // // If you do not specify a user name, IAM determines the user name implicitly -// based on the AWS access key ID signing the request. Because this action works -// for access keys under the AWS account, you can use this action to manage -// root credentials even if the AWS account has no associated users. +// based on the AWS access key ID signing the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2362,8 +2386,8 @@ const opDeleteAccountAlias = "DeleteAccountAlias" // DeleteAccountAliasRequest generates a "aws/request.Request" representing the // client's request for the DeleteAccountAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2454,8 +2478,8 @@ const opDeleteAccountPasswordPolicy = "DeleteAccountPasswordPolicy" // DeleteAccountPasswordPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteAccountPasswordPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2544,8 +2568,8 @@ const opDeleteGroup = "DeleteGroup" // DeleteGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2639,8 +2663,8 @@ const opDeleteGroupPolicy = "DeleteGroupPolicy" // DeleteGroupPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteGroupPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2735,8 +2759,8 @@ const opDeleteInstanceProfile = "DeleteInstanceProfile" // DeleteInstanceProfileRequest generates a "aws/request.Request" representing the // client's request for the DeleteInstanceProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2780,9 +2804,9 @@ func (c *IAM) DeleteInstanceProfileRequest(input *DeleteInstanceProfileInput) (r // Deletes the specified instance profile. The instance profile must not have // an associated role. // -// Make sure you do not have any Amazon EC2 instances running with the instance -// profile you are about to delete. Deleting a role or instance profile that -// is associated with a running instance will break any applications running +// Make sure that you do not have any Amazon EC2 instances running with the +// instance profile you are about to delete. Deleting a role or instance profile +// that is associated with a running instance will break any applications running // on the instance. // // For more information about instance profiles, go to About Instance Profiles @@ -2838,8 +2862,8 @@ const opDeleteLoginProfile = "DeleteLoginProfile" // DeleteLoginProfileRequest generates a "aws/request.Request" representing the // client's request for the DeleteLoginProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2940,8 +2964,8 @@ const opDeleteOpenIDConnectProvider = "DeleteOpenIDConnectProvider" // DeleteOpenIDConnectProviderRequest generates a "aws/request.Request" representing the // client's request for the DeleteOpenIDConnectProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2988,8 +3012,8 @@ func (c *IAM) DeleteOpenIDConnectProviderRequest(input *DeleteOpenIDConnectProvi // the provider as a principal in their trust policies. Any attempt to assume // a role that references a deleted provider fails. // -// This action is idempotent; it does not fail or return an error if you call -// the action for a provider that does not exist. +// This operation is idempotent; it does not fail or return an error if you +// call the operation for a provider that does not exist. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3037,8 +3061,8 @@ const opDeletePolicy = "DeletePolicy" // DeletePolicyRequest generates a "aws/request.Request" representing the // client's request for the DeletePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3082,14 +3106,14 @@ func (c *IAM) DeletePolicyRequest(input *DeletePolicyInput) (req *request.Reques // Deletes the specified managed policy. // // Before you can delete a managed policy, you must first detach the policy -// from all users, groups, and roles that it is attached to, and you must delete -// all of the policy's versions. The following steps describe the process for -// deleting a managed policy: +// from all users, groups, and roles that it is attached to. In addition you +// must delete all the policy's versions. The following steps describe the process +// for deleting a managed policy: // // * Detach the policy from all users, groups, and roles that the policy // is attached to, using the DetachUserPolicy, DetachGroupPolicy, or DetachRolePolicy -// APIs. To list all the users, groups, and roles that a policy is attached -// to, use ListEntitiesForPolicy. +// API operations. To list all the users, groups, and roles that a policy +// is attached to, use ListEntitiesForPolicy. // // * Delete all versions of the policy using DeletePolicyVersion. To list // the policy's versions, use ListPolicyVersions. You cannot use DeletePolicyVersion @@ -3157,8 +3181,8 @@ const opDeletePolicyVersion = "DeletePolicyVersion" // DeletePolicyVersionRequest generates a "aws/request.Request" representing the // client's request for the DeletePolicyVersion operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3263,8 +3287,8 @@ const opDeleteRole = "DeleteRole" // DeleteRoleRequest generates a "aws/request.Request" representing the // client's request for the DeleteRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3308,9 +3332,10 @@ func (c *IAM) DeleteRoleRequest(input *DeleteRoleInput) (req *request.Request, o // Deletes the specified role. The role must not have any policies attached. // For more information about roles, go to Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). // -// Make sure you do not have any Amazon EC2 instances running with the role -// you are about to delete. Deleting a role or instance profile that is associated -// with a running instance will break any applications running on the instance. +// Make sure that you do not have any Amazon EC2 instances running with the +// role you are about to delete. Deleting a role or instance profile that is +// associated with a running instance will break any applications running on +// the instance. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3338,6 +3363,11 @@ func (c *IAM) DeleteRoleRequest(input *DeleteRoleInput) (req *request.Request, o // the name of the service that depends on this service-linked role. You must // request the change through that service. // +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. +// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. @@ -3364,12 +3394,108 @@ func (c *IAM) DeleteRoleWithContext(ctx aws.Context, input *DeleteRoleInput, opt return out, req.Send() } +const opDeleteRolePermissionsBoundary = "DeleteRolePermissionsBoundary" + +// DeleteRolePermissionsBoundaryRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRolePermissionsBoundary operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteRolePermissionsBoundary for more information on using the DeleteRolePermissionsBoundary +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the DeleteRolePermissionsBoundaryRequest method. +// req, resp := client.DeleteRolePermissionsBoundaryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteRolePermissionsBoundary +func (c *IAM) DeleteRolePermissionsBoundaryRequest(input *DeleteRolePermissionsBoundaryInput) (req *request.Request, output *DeleteRolePermissionsBoundaryOutput) { + op := &request.Operation{ + Name: opDeleteRolePermissionsBoundary, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteRolePermissionsBoundaryInput{} + } + + output = &DeleteRolePermissionsBoundaryOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteRolePermissionsBoundary API operation for AWS Identity and Access Management. +// +// Deletes the permissions boundary for the specified IAM role. +// +// Deleting the permissions boundary for a role might increase its permissions +// by allowing anyone who assumes the role to perform all the actions granted +// in its permissions policies. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteRolePermissionsBoundary for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteRolePermissionsBoundary +func (c *IAM) DeleteRolePermissionsBoundary(input *DeleteRolePermissionsBoundaryInput) (*DeleteRolePermissionsBoundaryOutput, error) { + req, out := c.DeleteRolePermissionsBoundaryRequest(input) + return out, req.Send() +} + +// DeleteRolePermissionsBoundaryWithContext is the same as DeleteRolePermissionsBoundary with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteRolePermissionsBoundary for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteRolePermissionsBoundaryWithContext(ctx aws.Context, input *DeleteRolePermissionsBoundaryInput, opts ...request.Option) (*DeleteRolePermissionsBoundaryOutput, error) { + req, out := c.DeleteRolePermissionsBoundaryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDeleteRolePolicy = "DeleteRolePolicy" // DeleteRolePolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteRolePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3470,8 +3596,8 @@ const opDeleteSAMLProvider = "DeleteSAMLProvider" // DeleteSAMLProviderRequest generates a "aws/request.Request" representing the // client's request for the DeleteSAMLProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3571,8 +3697,8 @@ const opDeleteSSHPublicKey = "DeleteSSHPublicKey" // DeleteSSHPublicKeyRequest generates a "aws/request.Request" representing the // client's request for the DeleteSSHPublicKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3615,7 +3741,7 @@ func (c *IAM) DeleteSSHPublicKeyRequest(input *DeleteSSHPublicKeyInput) (req *re // // Deletes the specified SSH public key. // -// The SSH public key deleted by this action is used only for authenticating +// The SSH public key deleted by this operation is used only for authenticating // the associated IAM user to an AWS CodeCommit repository. For more information // about using SSH keys to authenticate to an AWS CodeCommit repository, see // Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) @@ -3659,8 +3785,8 @@ const opDeleteServerCertificate = "DeleteServerCertificate" // DeleteServerCertificateRequest generates a "aws/request.Request" representing the // client's request for the DeleteServerCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3703,10 +3829,10 @@ func (c *IAM) DeleteServerCertificateRequest(input *DeleteServerCertificateInput // // Deletes the specified server certificate. // -// For more information about working with server certificates, including a -// list of AWS services that can use the server certificates that you manage -// with IAM, go to Working with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) -// in the IAM User Guide. 
+// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic also includes a list of AWS services that +// can use the server certificates that you manage with IAM. // // If you are using a server certificate with Elastic Load Balancing, deleting // the certificate could have implications for your application. If Elastic @@ -3768,8 +3894,8 @@ const opDeleteServiceLinkedRole = "DeleteServiceLinkedRole" // DeleteServiceLinkedRoleRequest generates a "aws/request.Request" representing the // client's request for the DeleteServiceLinkedRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3818,8 +3944,8 @@ func (c *IAM) DeleteServiceLinkedRoleRequest(input *DeleteServiceLinkedRoleInput // If you submit a deletion request for a service-linked role whose linked service // is still accessing a resource, then the deletion task fails. If it fails, // the GetServiceLinkedRoleDeletionStatus API operation returns the reason for -// the failure, including the resources that must be deleted. To delete the -// service-linked role, you must first remove those resources from the linked +// the failure, usually including the resources that must be deleted. To delete +// the service-linked role, you must first remove those resources from the linked // service and then submit the deletion request again. Resources are specific // to the service that is linked to the role. For more information about removing // resources from a service, see the AWS documentation (http://docs.aws.amazon.com/) @@ -3875,8 +4001,8 @@ const opDeleteServiceSpecificCredential = "DeleteServiceSpecificCredential" // DeleteServiceSpecificCredentialRequest generates a "aws/request.Request" representing the // client's request for the DeleteServiceSpecificCredential operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3957,8 +4083,8 @@ const opDeleteSigningCertificate = "DeleteSigningCertificate" // DeleteSigningCertificateRequest generates a "aws/request.Request" representing the // client's request for the DeleteSigningCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4002,9 +4128,10 @@ func (c *IAM) DeleteSigningCertificateRequest(input *DeleteSigningCertificateInp // Deletes a signing certificate associated with the specified IAM user. 
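The DeleteServiceLinkedRole text above describes submitting a deletion task and then checking it with GetServiceLinkedRoleDeletionStatus, which reports the failure reason when the linked service is still using resources. A rough polling sketch is below; the SUCCEEDED/FAILED status strings and the extra "time"/"fmt" imports are assumptions not shown in this diff.

```go
func deleteServiceLinkedRole(svc *iam.IAM, roleName string) error {
	del, err := svc.DeleteServiceLinkedRole(&iam.DeleteServiceLinkedRoleInput{
		RoleName: aws.String(roleName),
	})
	if err != nil {
		return err
	}

	for {
		st, err := svc.GetServiceLinkedRoleDeletionStatus(&iam.GetServiceLinkedRoleDeletionStatusInput{
			DeletionTaskId: del.DeletionTaskId,
		})
		if err != nil {
			return err
		}
		switch aws.StringValue(st.Status) {
		case "SUCCEEDED":
			return nil
		case "FAILED":
			// Reason may include the resources that still have to be removed.
			return fmt.Errorf("service-linked role deletion failed: %v", st.Reason)
		}
		time.Sleep(5 * time.Second) // assumed still in progress; keep polling
	}
}
```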
// // If you do not specify a user name, IAM determines the user name implicitly -// based on the AWS access key ID signing the request. Because this action works -// for access keys under the AWS account, you can use this action to manage -// root credentials even if the AWS account has no associated IAM users. +// based on the AWS access key ID signing the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// IAM users. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4052,8 +4179,8 @@ const opDeleteUser = "DeleteUser" // DeleteUserRequest generates a "aws/request.Request" representing the // client's request for the DeleteUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4117,6 +4244,11 @@ func (c *IAM) DeleteUserRequest(input *DeleteUserInput) (req *request.Request, o // The request was rejected because it attempted to delete a resource that has // attached subordinate entities. The error message describes these entities. // +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. +// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. @@ -4143,12 +4275,102 @@ func (c *IAM) DeleteUserWithContext(ctx aws.Context, input *DeleteUserInput, opt return out, req.Send() } +const opDeleteUserPermissionsBoundary = "DeleteUserPermissionsBoundary" + +// DeleteUserPermissionsBoundaryRequest generates a "aws/request.Request" representing the +// client's request for the DeleteUserPermissionsBoundary operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteUserPermissionsBoundary for more information on using the DeleteUserPermissionsBoundary +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteUserPermissionsBoundaryRequest method. 
+// req, resp := client.DeleteUserPermissionsBoundaryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteUserPermissionsBoundary +func (c *IAM) DeleteUserPermissionsBoundaryRequest(input *DeleteUserPermissionsBoundaryInput) (req *request.Request, output *DeleteUserPermissionsBoundaryOutput) { + op := &request.Operation{ + Name: opDeleteUserPermissionsBoundary, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteUserPermissionsBoundaryInput{} + } + + output = &DeleteUserPermissionsBoundaryOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteUserPermissionsBoundary API operation for AWS Identity and Access Management. +// +// Deletes the permissions boundary for the specified IAM user. +// +// Deleting the permissions boundary for a user might increase its permissions +// by allowing the user to perform all the actions granted in its permissions +// policies. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation DeleteUserPermissionsBoundary for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/DeleteUserPermissionsBoundary +func (c *IAM) DeleteUserPermissionsBoundary(input *DeleteUserPermissionsBoundaryInput) (*DeleteUserPermissionsBoundaryOutput, error) { + req, out := c.DeleteUserPermissionsBoundaryRequest(input) + return out, req.Send() +} + +// DeleteUserPermissionsBoundaryWithContext is the same as DeleteUserPermissionsBoundary with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteUserPermissionsBoundary for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) DeleteUserPermissionsBoundaryWithContext(ctx aws.Context, input *DeleteUserPermissionsBoundaryInput, opts ...request.Option) (*DeleteUserPermissionsBoundaryOutput, error) { + req, out := c.DeleteUserPermissionsBoundaryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteUserPolicy = "DeleteUserPolicy" // DeleteUserPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteUserPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
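The new DeleteRolePermissionsBoundary and DeleteUserPermissionsBoundary operations added above also ship *WithContext variants so the calls can be cancelled. A small sketch of using them follows; the UserName and RoleName input fields are assumed from the usual IAM input shapes and do not appear in this diff.

```go
func removePermissionsBoundaries(ctx aws.Context, svc *iam.IAM, userName, roleName string) error {
	if _, err := svc.DeleteUserPermissionsBoundaryWithContext(ctx, &iam.DeleteUserPermissionsBoundaryInput{
		UserName: aws.String(userName), // assumed field name
	}); err != nil {
		return err
	}
	_, err := svc.DeleteRolePermissionsBoundaryWithContext(ctx, &iam.DeleteRolePermissionsBoundaryInput{
		RoleName: aws.String(roleName), // assumed field name
	})
	return err
}
```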
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4243,8 +4465,8 @@ const opDeleteVirtualMFADevice = "DeleteVirtualMFADevice" // DeleteVirtualMFADeviceRequest generates a "aws/request.Request" representing the // client's request for the DeleteVirtualMFADevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4340,8 +4562,8 @@ const opDetachGroupPolicy = "DetachGroupPolicy" // DetachGroupPolicyRequest generates a "aws/request.Request" representing the // client's request for the DetachGroupPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4439,8 +4661,8 @@ const opDetachRolePolicy = "DetachRolePolicy" // DetachRolePolicyRequest generates a "aws/request.Request" representing the // client's request for the DetachRolePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4544,8 +4766,8 @@ const opDetachUserPolicy = "DetachUserPolicy" // DetachUserPolicyRequest generates a "aws/request.Request" representing the // client's request for the DetachUserPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4643,8 +4865,8 @@ const opEnableMFADevice = "EnableMFADevice" // EnableMFADeviceRequest generates a "aws/request.Request" representing the // client's request for the EnableMFADevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4749,8 +4971,8 @@ const opGenerateCredentialReport = "GenerateCredentialReport" // GenerateCredentialReportRequest generates a "aws/request.Request" representing the // client's request for the GenerateCredentialReport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4835,8 +5057,8 @@ const opGetAccessKeyLastUsed = "GetAccessKeyLastUsed" // GetAccessKeyLastUsedRequest generates a "aws/request.Request" representing the // client's request for the GetAccessKeyLastUsed operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4918,8 +5140,8 @@ const opGetAccountAuthorizationDetails = "GetAccountAuthorizationDetails" // GetAccountAuthorizationDetailsRequest generates a "aws/request.Request" representing the // client's request for the GetAccountAuthorizationDetails operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4969,6 +5191,12 @@ func (c *IAM) GetAccountAuthorizationDetailsRequest(input *GetAccountAuthorizati // API to obtain a snapshot of the configuration of IAM permissions (users, // groups, roles, and policies) in your account. // +// Policies returned by this API are URL-encoded compliant with RFC 3986 (https://tools.ietf.org/html/rfc3986). +// You can use a URL decoding method to convert the policy back to plain JSON +// text. For example, if you use Java, you can use the decode method of the +// java.net.URLDecoder utility class in the Java SDK. Other languages and SDKs +// provide similar functionality. +// // You can optionally filter the results using the Filter parameter. You can // paginate the results using the MaxItems and Marker parameters. // @@ -5060,8 +5288,8 @@ const opGetAccountPasswordPolicy = "GetAccountPasswordPolicy" // GetAccountPasswordPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetAccountPasswordPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5145,8 +5373,8 @@ const opGetAccountSummary = "GetAccountSummary" // GetAccountSummaryRequest generates a "aws/request.Request" representing the // client's request for the GetAccountSummary operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
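The GetAccountAuthorizationDetails note above points out that returned policy documents are URL-encoded per RFC 3986 and names java.net.URLDecoder for Java. In Go, the standard library's net/url package provides the equivalent decoding; a minimal sketch:

```go
// decodePolicyDocument reverses the RFC 3986 encoding applied to policy
// documents returned by operations such as GetAccountAuthorizationDetails.
// Uses the standard library net/url package; PathUnescape decodes %XX
// sequences without treating '+' as a space.
func decodePolicyDocument(encoded *string) (string, error) {
	return url.PathUnescape(aws.StringValue(encoded))
}
```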
@@ -5229,8 +5457,8 @@ const opGetContextKeysForCustomPolicy = "GetContextKeysForCustomPolicy" // GetContextKeysForCustomPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetContextKeysForCustomPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5274,10 +5502,10 @@ func (c *IAM) GetContextKeysForCustomPolicyRequest(input *GetContextKeysForCusto // keys from policies associated with an IAM user, group, or role, use GetContextKeysForPrincipalPolicy. // // Context keys are variables maintained by AWS and its services that provide -// details about the context of an API query request, and can be evaluated by -// testing against a value specified in an IAM policy. Use GetContextKeysForCustomPolicy +// details about the context of an API query request. Context keys can be evaluated +// by testing against a value specified in an IAM policy. Use GetContextKeysForCustomPolicy // to understand what key names and values you must supply when you call SimulateCustomPolicy. -// Note that all parameters are shown in unencoded form here for clarity, but +// Note that all parameters are shown in unencoded form here for clarity but // must be URL encoded to be included as a part of a real HTML request. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -5318,8 +5546,8 @@ const opGetContextKeysForPrincipalPolicy = "GetContextKeysForPrincipalPolicy" // GetContextKeysForPrincipalPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetContextKeysForPrincipalPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5358,10 +5586,10 @@ func (c *IAM) GetContextKeysForPrincipalPolicyRequest(input *GetContextKeysForPr // GetContextKeysForPrincipalPolicy API operation for AWS Identity and Access Management. // -// Gets a list of all of the context keys referenced in all of the IAM policies -// attached to the specified IAM entity. The entity can be an IAM user, group, -// or role. If you specify a user, then the request also includes all of the -// policies attached to groups that the user is a member of. +// Gets a list of all of the context keys referenced in all the IAM policies +// that are attached to the specified IAM entity. The entity can be an IAM user, +// group, or role. If you specify a user, then the request also includes all +// of the policies attached to groups that the user is a member of. // // You can optionally include a list of one or more additional policies, specified // as strings. If you want to include only a list of policies by string, use @@ -5372,8 +5600,8 @@ func (c *IAM) GetContextKeysForPrincipalPolicyRequest(input *GetContextKeysForPr // allowing them to use GetContextKeysForCustomPolicy instead. 
// // Context keys are variables maintained by AWS and its services that provide -// details about the context of an API query request, and can be evaluated by -// testing against a value in an IAM policy. Use GetContextKeysForPrincipalPolicy +// details about the context of an API query request. Context keys can be evaluated +// by testing against a value in an IAM policy. Use GetContextKeysForPrincipalPolicy // to understand what key names and values you must supply when you call SimulatePrincipalPolicy. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -5418,8 +5646,8 @@ const opGetCredentialReport = "GetCredentialReport" // GetCredentialReportRequest generates a "aws/request.Request" representing the // client's request for the GetCredentialReport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5514,8 +5742,8 @@ const opGetGroup = "GetGroup" // GetGroupRequest generates a "aws/request.Request" representing the // client's request for the GetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5655,8 +5883,8 @@ const opGetGroupPolicy = "GetGroupPolicy" // GetGroupPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetGroupPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5755,8 +5983,8 @@ const opGetInstanceProfile = "GetInstanceProfile" // GetInstanceProfileRequest generates a "aws/request.Request" representing the // client's request for the GetInstanceProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5842,8 +6070,8 @@ const opGetLoginProfile = "GetLoginProfile" // GetLoginProfileRequest generates a "aws/request.Request" representing the // client's request for the GetLoginProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5883,8 +6111,8 @@ func (c *IAM) GetLoginProfileRequest(input *GetLoginProfileInput) (req *request. // GetLoginProfile API operation for AWS Identity and Access Management. // // Retrieves the user name and password-creation date for the specified IAM -// user. If the user has not been assigned a password, the action returns a -// 404 (NoSuchEntity) error. +// user. If the user has not been assigned a password, the operation returns +// a 404 (NoSuchEntity) error. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5928,8 +6156,8 @@ const opGetOpenIDConnectProvider = "GetOpenIDConnectProvider" // GetOpenIDConnectProviderRequest generates a "aws/request.Request" representing the // client's request for the GetOpenIDConnectProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6017,8 +6245,8 @@ const opGetPolicy = "GetPolicy" // GetPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6118,8 +6346,8 @@ const opGetPolicyVersion = "GetPolicyVersion" // GetPolicyVersionRequest generates a "aws/request.Request" representing the // client's request for the GetPolicyVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6227,8 +6455,8 @@ const opGetRole = "GetRole" // GetRoleRequest generates a "aws/request.Request" representing the // client's request for the GetRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6319,8 +6547,8 @@ const opGetRolePolicy = "GetRolePolicy" // GetRolePolicyRequest generates a "aws/request.Request" representing the // client's request for the GetRolePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -6422,8 +6650,8 @@ const opGetSAMLProvider = "GetSAMLProvider" // GetSAMLProviderRequest generates a "aws/request.Request" representing the // client's request for the GetSAMLProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6513,8 +6741,8 @@ const opGetSSHPublicKey = "GetSSHPublicKey" // GetSSHPublicKeyRequest generates a "aws/request.Request" representing the // client's request for the GetSSHPublicKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6555,7 +6783,7 @@ func (c *IAM) GetSSHPublicKeyRequest(input *GetSSHPublicKeyInput) (req *request. // // Retrieves the specified SSH public key, including metadata about the key. // -// The SSH public key retrieved by this action is used only for authenticating +// The SSH public key retrieved by this operation is used only for authenticating // the associated IAM user to an AWS CodeCommit repository. For more information // about using SSH keys to authenticate to an AWS CodeCommit repository, see // Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) @@ -6603,8 +6831,8 @@ const opGetServerCertificate = "GetServerCertificate" // GetServerCertificateRequest generates a "aws/request.Request" representing the // client's request for the GetServerCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6645,10 +6873,10 @@ func (c *IAM) GetServerCertificateRequest(input *GetServerCertificateInput) (req // // Retrieves information about the specified server certificate stored in IAM. // -// For more information about working with server certificates, including a -// list of AWS services that can use the server certificates that you manage -// with IAM, go to Working with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) -// in the IAM User Guide. +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic includes a list of AWS services that can +// use the server certificates that you manage with IAM. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6692,8 +6920,8 @@ const opGetServiceLinkedRoleDeletionStatus = "GetServiceLinkedRoleDeletionStatus // GetServiceLinkedRoleDeletionStatusRequest generates a "aws/request.Request" representing the // client's request for the GetServiceLinkedRoleDeletionStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6736,7 +6964,8 @@ func (c *IAM) GetServiceLinkedRoleDeletionStatusRequest(input *GetServiceLinkedR // the DeleteServiceLinkedRole API operation to submit a service-linked role // for deletion, you can use the DeletionTaskId parameter in GetServiceLinkedRoleDeletionStatus // to check the status of the deletion. If the deletion fails, this operation -// returns the reason that it failed. +// returns the reason that it failed, if that information is returned by the +// service. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6784,8 +7013,8 @@ const opGetUser = "GetUser" // GetUserRequest generates a "aws/request.Request" representing the // client's request for the GetUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6872,8 +7101,8 @@ const opGetUserPolicy = "GetUserPolicy" // GetUserPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetUserPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6972,8 +7201,8 @@ const opListAccessKeys = "ListAccessKeys" // ListAccessKeysRequest generates a "aws/request.Request" representing the // client's request for the ListAccessKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7019,15 +7248,16 @@ func (c *IAM) ListAccessKeysRequest(input *ListAccessKeysInput) (req *request.Re // ListAccessKeys API operation for AWS Identity and Access Management. // // Returns information about the access key IDs associated with the specified -// IAM user. If there are none, the action returns an empty list. +// IAM user. If there are none, the operation returns an empty list. 
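The GetServiceLinkedRoleDeletionStatus description above refers to the DeletionTaskId returned by DeleteServiceLinkedRole. A rough sketch of that workflow follows; the role name is a placeholder and the status string comparison is an assumption about the API's reported values, not something shown in this hunk:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Submit the service-linked role for deletion; the call returns a task ID.
	del, err := svc.DeleteServiceLinkedRole(&iam.DeleteServiceLinkedRoleInput{
		RoleName: aws.String("AWSServiceRoleForExample"), // hypothetical
	})
	if err != nil {
		log.Fatal(err)
	}

	// Poll the deletion task until it reaches a terminal state.
	for {
		status, err := svc.GetServiceLinkedRoleDeletionStatus(&iam.GetServiceLinkedRoleDeletionStatusInput{
			DeletionTaskId: del.DeletionTaskId,
		})
		if err != nil {
			log.Fatal(err)
		}
		s := aws.StringValue(status.Status)
		fmt.Println("deletion status:", s)
		if s == "SUCCEEDED" { // assumed status value
			break
		}
		if s == "FAILED" { // assumed status value
			// If the service reported a reason for the failure, show it.
			fmt.Printf("failure reason: %v\n", status.Reason)
			break
		}
		time.Sleep(5 * time.Second)
	}
}
```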
// // Although each user is limited to a small number of keys, you can still paginate // the results using the MaxItems and Marker parameters. // -// If the UserName field is not specified, the UserName is determined implicitly -// based on the AWS access key ID used to sign the request. Because this action -// works for access keys under the AWS account, you can use this action to manage -// root credentials even if the AWS account has no associated users. +// If the UserName field is not specified, the user name is determined implicitly +// based on the AWS access key ID used to sign the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. // // To ensure the security of your AWS account, the secret access key is accessible // only during key and user creation. @@ -7124,8 +7354,8 @@ const opListAccountAliases = "ListAccountAliases" // ListAccountAliasesRequest generates a "aws/request.Request" representing the // client's request for the ListAccountAliases operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7263,8 +7493,8 @@ const opListAttachedGroupPolicies = "ListAttachedGroupPolicies" // ListAttachedGroupPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListAttachedGroupPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7319,7 +7549,7 @@ func (c *IAM) ListAttachedGroupPoliciesRequest(input *ListAttachedGroupPoliciesI // You can paginate the results using the MaxItems and Marker parameters. You // can use the PathPrefix parameter to limit the list of policies to only those // matching the specified path prefix. If there are no policies attached to -// the specified group (or none that match the specified path prefix), the action +// the specified group (or none that match the specified path prefix), the operation // returns an empty list. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -7418,8 +7648,8 @@ const opListAttachedRolePolicies = "ListAttachedRolePolicies" // ListAttachedRolePoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListAttachedRolePolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
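Several of the List* descriptions above mention paginating with MaxItems and Marker. The SDK also generates Pages helpers that drive that marker loop for you; an illustrative sketch for ListAccessKeys, with a placeholder user name:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	input := &iam.ListAccessKeysInput{
		UserName: aws.String("example-user"), // hypothetical; omit to use the calling identity
		MaxItems: aws.Int64(10),              // page size, not a total limit
	}

	// ListAccessKeysPages handles the Marker/IsTruncated loop and invokes the
	// callback once per page; returning false stops the iteration early.
	err := svc.ListAccessKeysPages(input, func(page *iam.ListAccessKeysOutput, lastPage bool) bool {
		for _, md := range page.AccessKeyMetadata {
			fmt.Printf("%s\t%s\n", aws.StringValue(md.AccessKeyId), aws.StringValue(md.Status))
		}
		return true // keep paginating
	})
	if err != nil {
		log.Fatal(err)
	}
}
```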
@@ -7474,7 +7704,7 @@ func (c *IAM) ListAttachedRolePoliciesRequest(input *ListAttachedRolePoliciesInp // You can paginate the results using the MaxItems and Marker parameters. You // can use the PathPrefix parameter to limit the list of policies to only those // matching the specified path prefix. If there are no policies attached to -// the specified role (or none that match the specified path prefix), the action +// the specified role (or none that match the specified path prefix), the operation // returns an empty list. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -7573,8 +7803,8 @@ const opListAttachedUserPolicies = "ListAttachedUserPolicies" // ListAttachedUserPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListAttachedUserPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7629,7 +7859,7 @@ func (c *IAM) ListAttachedUserPoliciesRequest(input *ListAttachedUserPoliciesInp // You can paginate the results using the MaxItems and Marker parameters. You // can use the PathPrefix parameter to limit the list of policies to only those // matching the specified path prefix. If there are no policies attached to -// the specified group (or none that match the specified path prefix), the action +// the specified group (or none that match the specified path prefix), the operation // returns an empty list. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -7728,8 +7958,8 @@ const opListEntitiesForPolicy = "ListEntitiesForPolicy" // ListEntitiesForPolicyRequest generates a "aws/request.Request" representing the // client's request for the ListEntitiesForPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7880,8 +8110,8 @@ const opListGroupPolicies = "ListGroupPolicies" // ListGroupPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListGroupPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7936,7 +8166,7 @@ func (c *IAM) ListGroupPoliciesRequest(input *ListGroupPoliciesInput) (req *requ // in the IAM User Guide. // // You can paginate the results using the MaxItems and Marker parameters. If -// there are no inline policies embedded with the specified group, the action +// there are no inline policies embedded with the specified group, the operation // returns an empty list. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions @@ -8031,8 +8261,8 @@ const opListGroups = "ListGroups" // ListGroupsRequest generates a "aws/request.Request" representing the // client's request for the ListGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8169,8 +8399,8 @@ const opListGroupsForUser = "ListGroupsForUser" // ListGroupsForUserRequest generates a "aws/request.Request" representing the // client's request for the ListGroupsForUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8311,8 +8541,8 @@ const opListInstanceProfiles = "ListInstanceProfiles" // ListInstanceProfilesRequest generates a "aws/request.Request" representing the // client's request for the ListInstanceProfiles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8358,8 +8588,8 @@ func (c *IAM) ListInstanceProfilesRequest(input *ListInstanceProfilesInput) (req // ListInstanceProfiles API operation for AWS Identity and Access Management. // // Lists the instance profiles that have the specified path prefix. If there -// are none, the action returns an empty list. For more information about instance -// profiles, go to About Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). +// are none, the operation returns an empty list. For more information about +// instance profiles, go to About Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). // // You can paginate the results using the MaxItems and Marker parameters. // @@ -8451,8 +8681,8 @@ const opListInstanceProfilesForRole = "ListInstanceProfilesForRole" // ListInstanceProfilesForRoleRequest generates a "aws/request.Request" representing the // client's request for the ListInstanceProfilesForRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8498,7 +8728,7 @@ func (c *IAM) ListInstanceProfilesForRoleRequest(input *ListInstanceProfilesForR // ListInstanceProfilesForRole API operation for AWS Identity and Access Management. // // Lists the instance profiles that have the specified associated IAM role. -// If there are none, the action returns an empty list. 
For more information +// If there are none, the operation returns an empty list. For more information // about instance profiles, go to About Instance Profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/AboutInstanceProfiles.html). // // You can paginate the results using the MaxItems and Marker parameters. @@ -8595,8 +8825,8 @@ const opListMFADevices = "ListMFADevices" // ListMFADevicesRequest generates a "aws/request.Request" representing the // client's request for the ListMFADevices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8642,7 +8872,7 @@ func (c *IAM) ListMFADevicesRequest(input *ListMFADevicesInput) (req *request.Re // ListMFADevices API operation for AWS Identity and Access Management. // // Lists the MFA devices for an IAM user. If the request includes a IAM user -// name, then this action lists all the MFA devices associated with the specified +// name, then this operation lists all the MFA devices associated with the specified // user. If you do not specify a user name, IAM determines the user name implicitly // based on the AWS access key ID signing the request for this API. // @@ -8740,8 +8970,8 @@ const opListOpenIDConnectProviders = "ListOpenIDConnectProviders" // ListOpenIDConnectProvidersRequest generates a "aws/request.Request" representing the // client's request for the ListOpenIDConnectProviders operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8821,8 +9051,8 @@ const opListPolicies = "ListPolicies" // ListPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8969,8 +9199,8 @@ const opListPolicyVersions = "ListPolicyVersions" // ListPolicyVersionsRequest generates a "aws/request.Request" representing the // client's request for the ListPolicyVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9118,8 +9348,8 @@ const opListRolePolicies = "ListRolePolicies" // ListRolePoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListRolePolicies operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9173,7 +9403,7 @@ func (c *IAM) ListRolePoliciesRequest(input *ListRolePoliciesInput) (req *reques // in the IAM User Guide. // // You can paginate the results using the MaxItems and Marker parameters. If -// there are no inline policies embedded with the specified role, the action +// there are no inline policies embedded with the specified role, the operation // returns an empty list. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -9264,12 +9494,99 @@ func (c *IAM) ListRolePoliciesPagesWithContext(ctx aws.Context, input *ListRoleP return p.Err() } +const opListRoleTags = "ListRoleTags" + +// ListRoleTagsRequest generates a "aws/request.Request" representing the +// client's request for the ListRoleTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListRoleTags for more information on using the ListRoleTags +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListRoleTagsRequest method. +// req, resp := client.ListRoleTagsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRoleTags +func (c *IAM) ListRoleTagsRequest(input *ListRoleTagsInput) (req *request.Request, output *ListRoleTagsOutput) { + op := &request.Operation{ + Name: opListRoleTags, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListRoleTagsInput{} + } + + output = &ListRoleTagsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListRoleTags API operation for AWS Identity and Access Management. +// +// Lists the tags that are attached to the specified role. The returned list +// of tags is sorted by tag key. For more information about tagging, see Tagging +// IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListRoleTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListRoleTags +func (c *IAM) ListRoleTags(input *ListRoleTagsInput) (*ListRoleTagsOutput, error) { + req, out := c.ListRoleTagsRequest(input) + return out, req.Send() +} + +// ListRoleTagsWithContext is the same as ListRoleTags with the addition of +// the ability to pass a context and additional request options. +// +// See ListRoleTags for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListRoleTagsWithContext(ctx aws.Context, input *ListRoleTagsInput, opts ...request.Option) (*ListRoleTagsOutput, error) { + req, out := c.ListRoleTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListRoles = "ListRoles" // ListRolesRequest generates a "aws/request.Request" representing the // client's request for the ListRoles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9315,8 +9632,8 @@ func (c *IAM) ListRolesRequest(input *ListRolesInput) (req *request.Request, out // ListRoles API operation for AWS Identity and Access Management. // // Lists the IAM roles that have the specified path prefix. If there are none, -// the action returns an empty list. For more information about roles, go to -// Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). +// the operation returns an empty list. For more information about roles, go +// to Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). // // You can paginate the results using the MaxItems and Marker parameters. // @@ -9408,8 +9725,8 @@ const opListSAMLProviders = "ListSAMLProviders" // ListSAMLProvidersRequest generates a "aws/request.Request" representing the // client's request for the ListSAMLProviders operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9490,8 +9807,8 @@ const opListSSHPublicKeys = "ListSSHPublicKeys" // ListSSHPublicKeysRequest generates a "aws/request.Request" representing the // client's request for the ListSSHPublicKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
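The newly added ListRoleTags operation returns a role's tags sorted by key. A brief sketch of calling it; the input and output field names used here (RoleName, Tags with Key/Value) are assumptions based on the rest of the IAM API surface, since the structs themselves are not shown in this hunk, and the role name is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// List the tags attached to a role.
	out, err := svc.ListRoleTags(&iam.ListRoleTagsInput{
		RoleName: aws.String("example-role"), // hypothetical
	})
	if err != nil {
		log.Fatal(err)
	}

	// The returned list is sorted by tag key.
	for _, tag := range out.Tags {
		fmt.Printf("%s=%s\n", aws.StringValue(tag.Key), aws.StringValue(tag.Value))
	}
}
```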
@@ -9537,9 +9854,9 @@ func (c *IAM) ListSSHPublicKeysRequest(input *ListSSHPublicKeysInput) (req *requ // ListSSHPublicKeys API operation for AWS Identity and Access Management. // // Returns information about the SSH public keys associated with the specified -// IAM user. If there are none, the action returns an empty list. +// IAM user. If there are none, the operation returns an empty list. // -// The SSH public keys returned by this action are used only for authenticating +// The SSH public keys returned by this operation are used only for authenticating // the IAM user to an AWS CodeCommit repository. For more information about // using SSH keys to authenticate to an AWS CodeCommit repository, see Set up // AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) @@ -9636,8 +9953,8 @@ const opListServerCertificates = "ListServerCertificates" // ListServerCertificatesRequest generates a "aws/request.Request" representing the // client's request for the ListServerCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9683,14 +10000,14 @@ func (c *IAM) ListServerCertificatesRequest(input *ListServerCertificatesInput) // ListServerCertificates API operation for AWS Identity and Access Management. // // Lists the server certificates stored in IAM that have the specified path -// prefix. If none exist, the action returns an empty list. +// prefix. If none exist, the operation returns an empty list. // // You can paginate the results using the MaxItems and Marker parameters. // -// For more information about working with server certificates, including a -// list of AWS services that can use the server certificates that you manage -// with IAM, go to Working with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) -// in the IAM User Guide. +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic also includes a list of AWS services that +// can use the server certificates that you manage with IAM. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9780,8 +10097,8 @@ const opListServiceSpecificCredentials = "ListServiceSpecificCredentials" // ListServiceSpecificCredentialsRequest generates a "aws/request.Request" representing the // client's request for the ListServiceSpecificCredentials operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
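Each operation also has a WithContext variant, and the generated comments note that the context must be non-nil and is used for request cancellation. A small sketch of bounding a call with a timeout via a standard-library context:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// The context must be non-nil; here it bounds the call to ten seconds
	// and cancels the in-flight request if the deadline passes.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	out, err := svc.ListServerCertificatesWithContext(ctx, &iam.ListServerCertificatesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, md := range out.ServerCertificateMetadataList {
		fmt.Println(aws.StringValue(md.ServerCertificateName))
	}
}
```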
@@ -9821,11 +10138,11 @@ func (c *IAM) ListServiceSpecificCredentialsRequest(input *ListServiceSpecificCr // ListServiceSpecificCredentials API operation for AWS Identity and Access Management. // // Returns information about the service-specific credentials associated with -// the specified IAM user. If there are none, the action returns an empty list. -// The service-specific credentials returned by this action are used only for -// authenticating the IAM user to a specific service. For more information about -// using service-specific credentials to authenticate to an AWS service, see -// Set Up service-specific credentials (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html) +// the specified IAM user. If there are none, the operation returns an empty +// list. The service-specific credentials returned by this operation are used +// only for authenticating the IAM user to a specific service. For more information +// about using service-specific credentials to authenticate to an AWS service, +// see Set Up service-specific credentials (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html) // in the AWS CodeCommit User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -9869,8 +10186,8 @@ const opListSigningCertificates = "ListSigningCertificates" // ListSigningCertificatesRequest generates a "aws/request.Request" representing the // client's request for the ListSigningCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9916,16 +10233,16 @@ func (c *IAM) ListSigningCertificatesRequest(input *ListSigningCertificatesInput // ListSigningCertificates API operation for AWS Identity and Access Management. // // Returns information about the signing certificates associated with the specified -// IAM user. If there are none, the action returns an empty list. +// IAM user. If there are none, the operation returns an empty list. // // Although each user is limited to a small number of signing certificates, // you can still paginate the results using the MaxItems and Marker parameters. // // If the UserName field is not specified, the user name is determined implicitly // based on the AWS access key ID used to sign the request for this API. Because -// this action works for access keys under the AWS account, you can use this -// action to manage root credentials even if the AWS account has no associated -// users. +// this operation works for access keys under the AWS account, you can use this +// operation to manage AWS account root user credentials even if the AWS account +// has no associated users. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -10019,8 +10336,8 @@ const opListUserPolicies = "ListUserPolicies" // ListUserPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListUserPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10073,7 +10390,7 @@ func (c *IAM) ListUserPoliciesRequest(input *ListUserPoliciesInput) (req *reques // in the IAM User Guide. // // You can paginate the results using the MaxItems and Marker parameters. If -// there are no inline policies embedded with the specified user, the action +// there are no inline policies embedded with the specified user, the operation // returns an empty list. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -10164,90 +10481,177 @@ func (c *IAM) ListUserPoliciesPagesWithContext(ctx aws.Context, input *ListUserP return p.Err() } -const opListUsers = "ListUsers" +const opListUserTags = "ListUserTags" -// ListUsersRequest generates a "aws/request.Request" representing the -// client's request for the ListUsers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListUserTagsRequest generates a "aws/request.Request" representing the +// client's request for the ListUserTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListUsers for more information on using the ListUsers +// See ListUserTags for more information on using the ListUserTags // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListUsersRequest method. -// req, resp := client.ListUsersRequest(params) +// // Example sending a request using the ListUserTagsRequest method. +// req, resp := client.ListUserTagsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers -func (c *IAM) ListUsersRequest(input *ListUsersInput) (req *request.Request, output *ListUsersOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUserTags +func (c *IAM) ListUserTagsRequest(input *ListUserTagsInput) (req *request.Request, output *ListUserTagsOutput) { op := &request.Operation{ - Name: opListUsers, + Name: opListUserTags, HTTPMethod: "POST", HTTPPath: "/", - Paginator: &request.Paginator{ - InputTokens: []string{"Marker"}, - OutputTokens: []string{"Marker"}, - LimitToken: "MaxItems", - TruncationToken: "IsTruncated", - }, } if input == nil { - input = &ListUsersInput{} + input = &ListUserTagsInput{} } - output = &ListUsersOutput{} + output = &ListUserTagsOutput{} req = c.newRequest(op, input, output) return } -// ListUsers API operation for AWS Identity and Access Management. -// -// Lists the IAM users that have the specified path prefix. If no path prefix -// is specified, the action returns all users in the AWS account. If there are -// none, the action returns an empty list. +// ListUserTags API operation for AWS Identity and Access Management. // -// You can paginate the results using the MaxItems and Marker parameters. 
+// Lists the tags that are attached to the specified user. The returned list +// of tags is sorted by tag key. For more information about tagging, see Tagging +// IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) +// in the IAM User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation ListUsers for usage and error information. +// API operation ListUserTags for usage and error information. // // Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers -func (c *IAM) ListUsers(input *ListUsersInput) (*ListUsersOutput, error) { - req, out := c.ListUsersRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUserTags +func (c *IAM) ListUserTags(input *ListUserTagsInput) (*ListUserTagsOutput, error) { + req, out := c.ListUserTagsRequest(input) return out, req.Send() } -// ListUsersWithContext is the same as ListUsers with the addition of +// ListUserTagsWithContext is the same as ListUserTags with the addition of // the ability to pass a context and additional request options. // -// See ListUsers for details on how to use this API operation. +// See ListUserTags for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) ListUsersWithContext(ctx aws.Context, input *ListUsersInput, opts ...request.Option) (*ListUsersOutput, error) { +func (c *IAM) ListUserTagsWithContext(ctx aws.Context, input *ListUserTagsInput, opts ...request.Option) (*ListUserTagsOutput, error) { + req, out := c.ListUserTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListUsers = "ListUsers" + +// ListUsersRequest generates a "aws/request.Request" representing the +// client's request for the ListUsers operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListUsers for more information on using the ListUsers +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListUsersRequest method. 
+// req, resp := client.ListUsersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers +func (c *IAM) ListUsersRequest(input *ListUsersInput) (req *request.Request, output *ListUsersOutput) { + op := &request.Operation{ + Name: opListUsers, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxItems", + TruncationToken: "IsTruncated", + }, + } + + if input == nil { + input = &ListUsersInput{} + } + + output = &ListUsersOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListUsers API operation for AWS Identity and Access Management. +// +// Lists the IAM users that have the specified path prefix. If no path prefix +// is specified, the operation returns all users in the AWS account. If there +// are none, the operation returns an empty list. +// +// You can paginate the results using the MaxItems and Marker parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation ListUsers for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/ListUsers +func (c *IAM) ListUsers(input *ListUsersInput) (*ListUsersOutput, error) { + req, out := c.ListUsersRequest(input) + return out, req.Send() +} + +// ListUsersWithContext is the same as ListUsers with the addition of +// the ability to pass a context and additional request options. +// +// See ListUsers for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) ListUsersWithContext(ctx aws.Context, input *ListUsersInput, opts ...request.Option) (*ListUsersOutput, error) { req, out := c.ListUsersRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) @@ -10308,8 +10712,8 @@ const opListVirtualMFADevices = "ListVirtualMFADevices" // ListVirtualMFADevicesRequest generates a "aws/request.Request" representing the // client's request for the ListVirtualMFADevices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10355,9 +10759,9 @@ func (c *IAM) ListVirtualMFADevicesRequest(input *ListVirtualMFADevicesInput) (r // ListVirtualMFADevices API operation for AWS Identity and Access Management. // // Lists the virtual MFA devices defined in the AWS account by assignment status. -// If you do not specify an assignment status, the action returns a list of -// all virtual MFA devices. 
Assignment status can be Assigned, Unassigned, or -// Any. +// If you do not specify an assignment status, the operation returns a list +// of all virtual MFA devices. Assignment status can be Assigned, Unassigned, +// or Any. // // You can paginate the results using the MaxItems and Marker parameters. // @@ -10443,8 +10847,8 @@ const opPutGroupPolicy = "PutGroupPolicy" // PutGroupPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutGroupPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10549,12 +10953,124 @@ func (c *IAM) PutGroupPolicyWithContext(ctx aws.Context, input *PutGroupPolicyIn return out, req.Send() } +const opPutRolePermissionsBoundary = "PutRolePermissionsBoundary" + +// PutRolePermissionsBoundaryRequest generates a "aws/request.Request" representing the +// client's request for the PutRolePermissionsBoundary operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutRolePermissionsBoundary for more information on using the PutRolePermissionsBoundary +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutRolePermissionsBoundaryRequest method. +// req, resp := client.PutRolePermissionsBoundaryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutRolePermissionsBoundary +func (c *IAM) PutRolePermissionsBoundaryRequest(input *PutRolePermissionsBoundaryInput) (req *request.Request, output *PutRolePermissionsBoundaryOutput) { + op := &request.Operation{ + Name: opPutRolePermissionsBoundary, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutRolePermissionsBoundaryInput{} + } + + output = &PutRolePermissionsBoundaryOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutRolePermissionsBoundary API operation for AWS Identity and Access Management. +// +// Adds or updates the policy that is specified as the IAM role's permissions +// boundary. You can use an AWS managed policy or a customer managed policy +// to set the boundary for a role. Use the boundary to control the maximum permissions +// that the role can have. Setting a permissions boundary is an advanced feature +// that can affect the permissions for the role. +// +// You cannot set the boundary for a service-linked role. +// +// Policies used as permissions boundaries do not provide permissions. You must +// also attach a permissions policy to the role. 
To learn how the effective +// permissions for a role are evaluated, see IAM JSON Policy Evaluation Logic +// (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation PutRolePermissionsBoundary for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// +// * ErrCodePolicyNotAttachableException "PolicyNotAttachable" +// The request failed because AWS service role policies can only be attached +// to the service-linked role for that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutRolePermissionsBoundary +func (c *IAM) PutRolePermissionsBoundary(input *PutRolePermissionsBoundaryInput) (*PutRolePermissionsBoundaryOutput, error) { + req, out := c.PutRolePermissionsBoundaryRequest(input) + return out, req.Send() +} + +// PutRolePermissionsBoundaryWithContext is the same as PutRolePermissionsBoundary with the addition of +// the ability to pass a context and additional request options. +// +// See PutRolePermissionsBoundary for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) PutRolePermissionsBoundaryWithContext(ctx aws.Context, input *PutRolePermissionsBoundaryInput, opts ...request.Option) (*PutRolePermissionsBoundaryOutput, error) { + req, out := c.PutRolePermissionsBoundaryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPutRolePolicy = "PutRolePolicy" // PutRolePolicyRequest generates a "aws/request.Request" representing the // client's request for the PutRolePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
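The new PutRolePermissionsBoundary operation sets a managed policy as the cap on a role's effective permissions. A hedged sketch of a call; the field names (RoleName, PermissionsBoundary) are assumptions since the input struct is not shown in this hunk, and the role name and policy ARN are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Set a managed policy as the role's permissions boundary. The boundary
	// only caps the role's effective permissions; it grants nothing on its
	// own, so the role still needs a permissions policy attached.
	_, err := svc.PutRolePermissionsBoundary(&iam.PutRolePermissionsBoundaryInput{
		RoleName:            aws.String("example-role"),                                      // hypothetical
		PermissionsBoundary: aws.String("arn:aws:iam::123456789012:policy/example-boundary"), // hypothetical ARN
	})
	if err != nil {
		log.Fatal(err)
	}
}
```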
@@ -10671,12 +11187,116 @@ func (c *IAM) PutRolePolicyWithContext(ctx aws.Context, input *PutRolePolicyInpu return out, req.Send() } +const opPutUserPermissionsBoundary = "PutUserPermissionsBoundary" + +// PutUserPermissionsBoundaryRequest generates a "aws/request.Request" representing the +// client's request for the PutUserPermissionsBoundary operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutUserPermissionsBoundary for more information on using the PutUserPermissionsBoundary +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutUserPermissionsBoundaryRequest method. +// req, resp := client.PutUserPermissionsBoundaryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutUserPermissionsBoundary +func (c *IAM) PutUserPermissionsBoundaryRequest(input *PutUserPermissionsBoundaryInput) (req *request.Request, output *PutUserPermissionsBoundaryOutput) { + op := &request.Operation{ + Name: opPutUserPermissionsBoundary, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutUserPermissionsBoundaryInput{} + } + + output = &PutUserPermissionsBoundaryOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutUserPermissionsBoundary API operation for AWS Identity and Access Management. +// +// Adds or updates the policy that is specified as the IAM user's permissions +// boundary. You can use an AWS managed policy or a customer managed policy +// to set the boundary for a user. Use the boundary to control the maximum permissions +// that the user can have. Setting a permissions boundary is an advanced feature +// that can affect the permissions for the user. +// +// Policies that are used as permissions boundaries do not provide permissions. +// You must also attach a permissions policy to the user. To learn how the effective +// permissions for a user are evaluated, see IAM JSON Policy Evaluation Logic +// (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation PutUserPermissionsBoundary for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. 
+// +// * ErrCodePolicyNotAttachableException "PolicyNotAttachable" +// The request failed because AWS service role policies can only be attached +// to the service-linked role for that service. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/PutUserPermissionsBoundary +func (c *IAM) PutUserPermissionsBoundary(input *PutUserPermissionsBoundaryInput) (*PutUserPermissionsBoundaryOutput, error) { + req, out := c.PutUserPermissionsBoundaryRequest(input) + return out, req.Send() +} + +// PutUserPermissionsBoundaryWithContext is the same as PutUserPermissionsBoundary with the addition of +// the ability to pass a context and additional request options. +// +// See PutUserPermissionsBoundary for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) PutUserPermissionsBoundaryWithContext(ctx aws.Context, input *PutUserPermissionsBoundaryInput, opts ...request.Option) (*PutUserPermissionsBoundaryOutput, error) { + req, out := c.PutUserPermissionsBoundaryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPutUserPolicy = "PutUserPolicy" // PutUserPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutUserPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10785,8 +11405,8 @@ const opRemoveClientIDFromOpenIDConnectProvider = "RemoveClientIDFromOpenIDConne // RemoveClientIDFromOpenIDConnectProviderRequest generates a "aws/request.Request" representing the // client's request for the RemoveClientIDFromOpenIDConnectProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10831,8 +11451,8 @@ func (c *IAM) RemoveClientIDFromOpenIDConnectProviderRequest(input *RemoveClient // client IDs registered for the specified IAM OpenID Connect (OIDC) provider // resource object. // -// This action is idempotent; it does not fail or return an error if you try -// to remove a client ID that does not exist. +// This operation is idempotent; it does not fail or return an error if you +// try to remove a client ID that does not exist. // // Returns awserr.Error for service API and SDK errors. 
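The PutUserPermissionsBoundary description above stresses that a boundary does not grant permissions and that a permissions policy must still be attached. A sketch pairing the two calls; the user name and boundary ARN are placeholders, and the PutUserPermissionsBoundaryInput field names are assumptions not shown in this hunk:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))
	user := aws.String("example-user") // hypothetical

	// The boundary only caps what the user can ever be allowed to do.
	if _, err := svc.PutUserPermissionsBoundary(&iam.PutUserPermissionsBoundaryInput{
		UserName:            user,
		PermissionsBoundary: aws.String("arn:aws:iam::123456789012:policy/example-boundary"), // hypothetical
	}); err != nil {
		log.Fatal(err)
	}

	// A permissions policy still has to be attached for the user to be
	// granted anything within that boundary.
	if _, err := svc.AttachUserPolicy(&iam.AttachUserPolicyInput{
		UserName:  user,
		PolicyArn: aws.String("arn:aws:iam::aws:policy/ReadOnlyAccess"),
	}); err != nil {
		log.Fatal(err)
	}
}
```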
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -10880,8 +11500,8 @@ const opRemoveRoleFromInstanceProfile = "RemoveRoleFromInstanceProfile" // RemoveRoleFromInstanceProfileRequest generates a "aws/request.Request" representing the // client's request for the RemoveRoleFromInstanceProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -10924,10 +11544,10 @@ func (c *IAM) RemoveRoleFromInstanceProfileRequest(input *RemoveRoleFromInstance // // Removes the specified IAM role from the specified EC2 instance profile. // -// Make sure you do not have any Amazon EC2 instances running with the role -// you are about to remove from the instance profile. Removing a role from an -// instance profile that is associated with a running instance might break any -// applications running on the instance. +// Make sure that you do not have any Amazon EC2 instances running with the +// role you are about to remove from the instance profile. Removing a role from +// an instance profile that is associated with a running instance might break +// any applications running on the instance. // // For more information about IAM roles, go to Working with Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html). // For more information about instance profiles, go to About Instance Profiles @@ -10985,8 +11605,8 @@ const opRemoveUserFromGroup = "RemoveUserFromGroup" // RemoveUserFromGroupRequest generates a "aws/request.Request" representing the // client's request for the RemoveUserFromGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11075,8 +11695,8 @@ const opResetServiceSpecificCredential = "ResetServiceSpecificCredential" // ResetServiceSpecificCredentialRequest generates a "aws/request.Request" representing the // client's request for the ResetServiceSpecificCredential operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11158,8 +11778,8 @@ const opResyncMFADevice = "ResyncMFADevice" // ResyncMFADeviceRequest generates a "aws/request.Request" representing the // client's request for the ResyncMFADevice operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -11257,8 +11877,8 @@ const opSetDefaultPolicyVersion = "SetDefaultPolicyVersion" // SetDefaultPolicyVersionRequest generates a "aws/request.Request" representing the // client's request for the SetDefaultPolicyVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11302,7 +11922,7 @@ func (c *IAM) SetDefaultPolicyVersionRequest(input *SetDefaultPolicyVersionInput // Sets the specified version of the specified policy as the policy's default // (operative) version. // -// This action affects all users, groups, and roles that the policy is attached +// This operation affects all users, groups, and roles that the policy is attached // to. To list the users, groups, and roles that the policy is attached to, // use the ListEntitiesForPolicy API. // @@ -11360,8 +11980,8 @@ const opSimulateCustomPolicy = "SimulateCustomPolicy" // SimulateCustomPolicyRequest generates a "aws/request.Request" representing the // client's request for the SimulateCustomPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11407,11 +12027,11 @@ func (c *IAM) SimulateCustomPolicyRequest(input *SimulateCustomPolicyInput) (req // SimulateCustomPolicy API operation for AWS Identity and Access Management. // // Simulate how a set of IAM policies and optionally a resource-based policy -// works with a list of API actions and AWS resources to determine the policies' +// works with a list of API operations and AWS resources to determine the policies' // effective permissions. The policies are provided as strings. // -// The simulation does not perform the API actions; it only checks the authorization -// to determine if the simulated policies allow or deny the actions. +// The simulation does not perform the API operations; it only checks the authorization +// to determine if the simulated policies allow or deny the operations. // // If you want to simulate existing policies attached to an IAM user, group, // or role, use SimulatePrincipalPolicy instead. @@ -11516,8 +12136,8 @@ const opSimulatePrincipalPolicy = "SimulatePrincipalPolicy" // SimulatePrincipalPolicyRequest generates a "aws/request.Request" representing the // client's request for the SimulatePrincipalPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -11563,10 +12183,10 @@ func (c *IAM) SimulatePrincipalPolicyRequest(input *SimulatePrincipalPolicyInput // SimulatePrincipalPolicy API operation for AWS Identity and Access Management. 
// // Simulate how a set of IAM policies attached to an IAM entity works with a -// list of API actions and AWS resources to determine the policies' effective +// list of API operations and AWS resources to determine the policies' effective // permissions. The entity can be an IAM user, group, or role. If you specify // a user, then the simulation also includes all of the policies that are attached -// to groups that the user belongs to . +// to groups that the user belongs to. // // You can optionally include a list of one or more additional policies specified // as strings to include in the simulation. If you want to simulate only policies @@ -11575,8 +12195,8 @@ func (c *IAM) SimulatePrincipalPolicyRequest(input *SimulatePrincipalPolicyInput // You can also optionally include one resource-based policy to be evaluated // with each of the resources included in the simulation. // -// The simulation does not perform the API actions, it only checks the authorization -// to determine if the simulated policies allow or deny the actions. +// The simulation does not perform the API operations, it only checks the authorization +// to determine if the simulated policies allow or deny the operations. // // Note: This API discloses information about the permissions granted to other // users. If you do not want users to see other user's permissions, then consider @@ -11682,70 +12302,93 @@ func (c *IAM) SimulatePrincipalPolicyPagesWithContext(ctx aws.Context, input *Si return p.Err() } -const opUpdateAccessKey = "UpdateAccessKey" +const opTagRole = "TagRole" -// UpdateAccessKeyRequest generates a "aws/request.Request" representing the -// client's request for the UpdateAccessKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// TagRoleRequest generates a "aws/request.Request" representing the +// client's request for the TagRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateAccessKey for more information on using the UpdateAccessKey +// See TagRole for more information on using the TagRole // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateAccessKeyRequest method. -// req, resp := client.UpdateAccessKeyRequest(params) +// // Example sending a request using the TagRoleRequest method. 
+// req, resp := client.TagRoleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey -func (c *IAM) UpdateAccessKeyRequest(input *UpdateAccessKeyInput) (req *request.Request, output *UpdateAccessKeyOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/TagRole +func (c *IAM) TagRoleRequest(input *TagRoleInput) (req *request.Request, output *TagRoleOutput) { op := &request.Operation{ - Name: opUpdateAccessKey, + Name: opTagRole, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateAccessKeyInput{} + input = &TagRoleInput{} } - output = &UpdateAccessKeyOutput{} + output = &TagRoleOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateAccessKey API operation for AWS Identity and Access Management. +// TagRole API operation for AWS Identity and Access Management. // -// Changes the status of the specified access key from Active to Inactive, or -// vice versa. This action can be used to disable a user's key as part of a -// key rotation work flow. +// Adds one or more tags to an IAM role. The role can be a regular role or a +// service-linked role. If a tag with the same key name already exists, then +// that tag is overwritten with the new value. // -// If the UserName field is not specified, the UserName is determined implicitly -// based on the AWS access key ID used to sign the request. Because this action -// works for access keys under the AWS account, you can use this action to manage -// root credentials even if the AWS account has no associated users. +// A tag consists of a key name and an associated value. By assigning tags to +// your resources, you can do the following: // -// For information about rotating keys, see Managing Keys and Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html) -// in the IAM User Guide. +// * Administrative grouping and discovery - Attach tags to resources to +// aid in organization and search. For example, you could search for all +// resources with the key name Project and the value MyImportantProject. +// Or search for all resources with the key name Cost Center and the value +// 41200. // -// Returns awserr.Error for service API and SDK errors. Use runtime type assertions -// with awserr.Error's Code and Message methods to get detailed information about -// the error. +// * Access control - Reference tags in IAM user-based and resource-based +// policies. You can use tags to restrict access to only an IAM user or role +// that has a specified tag attached. You can also restrict access to only +// those resources that have a certain tag attached. For examples of policies +// that show how to use tags to control access, see Control Access Using +// IAM Tags (http://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html) +// in the IAM User Guide. +// +// * Cost allocation - Use tags to help track which individuals and teams +// are using which AWS resources. +// +// Make sure that you have no invalid tags and that you do not exceed the allowed +// number of tags per role. In either case, the entire request fails and no +// tags are added to the role. +// +// AWS always interprets the tag Value as a single string. If you need to store +// an array, you can store comma-separated values in the string. 
However, you +// must interpret the value in your code. +// +// For more information about tagging, see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateAccessKey for usage and error information. +// API operation TagRole for usage and error information. // // Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" @@ -11756,87 +12399,119 @@ func (c *IAM) UpdateAccessKeyRequest(input *UpdateAccessKeyInput) (req *request. // The request was rejected because it attempted to create resources beyond // the current AWS account limits. The error message describes the limit exceeded. // +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. +// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey -func (c *IAM) UpdateAccessKey(input *UpdateAccessKeyInput) (*UpdateAccessKeyOutput, error) { - req, out := c.UpdateAccessKeyRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/TagRole +func (c *IAM) TagRole(input *TagRoleInput) (*TagRoleOutput, error) { + req, out := c.TagRoleRequest(input) return out, req.Send() } -// UpdateAccessKeyWithContext is the same as UpdateAccessKey with the addition of +// TagRoleWithContext is the same as TagRole with the addition of // the ability to pass a context and additional request options. // -// See UpdateAccessKey for details on how to use this API operation. +// See TagRole for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateAccessKeyWithContext(ctx aws.Context, input *UpdateAccessKeyInput, opts ...request.Option) (*UpdateAccessKeyOutput, error) { - req, out := c.UpdateAccessKeyRequest(input) +func (c *IAM) TagRoleWithContext(ctx aws.Context, input *TagRoleInput, opts ...request.Option) (*TagRoleOutput, error) { + req, out := c.TagRoleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateAccountPasswordPolicy = "UpdateAccountPasswordPolicy" +const opTagUser = "TagUser" -// UpdateAccountPasswordPolicyRequest generates a "aws/request.Request" representing the -// client's request for the UpdateAccountPasswordPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// TagUserRequest generates a "aws/request.Request" representing the +// client's request for the TagUser operation. 
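A minimal sketch of calling the new `TagRole` operation through the vendored client; the role name and tag values below are placeholders, and `TagUser` mirrors the same shape with a `UserName` field instead of `RoleName`:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Add (or overwrite) two tags on a role. The role name and
	// tag values are illustrative only.
	_, err := svc.TagRole(&iam.TagRoleInput{
		RoleName: aws.String("example-role"),
		Tags: []*iam.Tag{
			{Key: aws.String("Project"), Value: aws.String("MyImportantProject")},
			{Key: aws.String("Cost Center"), Value: aws.String("41200")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```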
The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateAccountPasswordPolicy for more information on using the UpdateAccountPasswordPolicy +// See TagUser for more information on using the TagUser // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateAccountPasswordPolicyRequest method. -// req, resp := client.UpdateAccountPasswordPolicyRequest(params) +// // Example sending a request using the TagUserRequest method. +// req, resp := client.TagUserRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccountPasswordPolicy -func (c *IAM) UpdateAccountPasswordPolicyRequest(input *UpdateAccountPasswordPolicyInput) (req *request.Request, output *UpdateAccountPasswordPolicyOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/TagUser +func (c *IAM) TagUserRequest(input *TagUserInput) (req *request.Request, output *TagUserOutput) { op := &request.Operation{ - Name: opUpdateAccountPasswordPolicy, + Name: opTagUser, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateAccountPasswordPolicyInput{} + input = &TagUserInput{} } - output = &UpdateAccountPasswordPolicyOutput{} + output = &TagUserOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateAccountPasswordPolicy API operation for AWS Identity and Access Management. +// TagUser API operation for AWS Identity and Access Management. // -// Updates the password policy settings for the AWS account. +// Adds one or more tags to an IAM user. If a tag with the same key name already +// exists, then that tag is overwritten with the new value. // -// This action does not support partial updates. No parameters are required, -// but if you do not specify a parameter, that parameter's value reverts to -// its default value. See the Request Parameters section for each parameter's -// default value. +// A tag consists of a key name and an associated value. By assigning tags to +// your resources, you can do the following: // -// For more information about using a password policy, see Managing an IAM Password -// Policy (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html) +// * Administrative grouping and discovery - Attach tags to resources to +// aid in organization and search. For example, you could search for all +// resources with the key name Project and the value MyImportantProject. +// Or search for all resources with the key name Cost Center and the value +// 41200. +// +// * Access control - Reference tags in IAM user-based and resource-based +// policies. You can use tags to restrict access to only an IAM requesting +// user or to a role that has a specified tag attached. You can also restrict +// access to only those resources that have a certain tag attached. 
For examples +// of policies that show how to use tags to control access, see Control Access +// Using IAM Tags (http://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html) +// in the IAM User Guide. +// +// * Cost allocation - Use tags to help track which individuals and teams +// are using which AWS resources. +// +// Make sure that you have no invalid tags and that you do not exceed the allowed +// number of tags per role. In either case, the entire request fails and no +// tags are added to the role. +// +// AWS always interprets the tag Value as a single string. If you need to store +// an array, you can store comma-separated values in the string. However, you +// must interpret the value in your code. +// +// For more information about tagging, see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) // in the IAM User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -11844,304 +12519,295 @@ func (c *IAM) UpdateAccountPasswordPolicyRequest(input *UpdateAccountPasswordPol // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateAccountPasswordPolicy for usage and error information. +// API operation TagUser for usage and error information. // // Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" -// The request was rejected because the policy document was malformed. The error -// message describes the specific error. -// // * ErrCodeLimitExceededException "LimitExceeded" // The request was rejected because it attempted to create resources beyond // the current AWS account limits. The error message describes the limit exceeded. // +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. +// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccountPasswordPolicy -func (c *IAM) UpdateAccountPasswordPolicy(input *UpdateAccountPasswordPolicyInput) (*UpdateAccountPasswordPolicyOutput, error) { - req, out := c.UpdateAccountPasswordPolicyRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/TagUser +func (c *IAM) TagUser(input *TagUserInput) (*TagUserOutput, error) { + req, out := c.TagUserRequest(input) return out, req.Send() } -// UpdateAccountPasswordPolicyWithContext is the same as UpdateAccountPasswordPolicy with the addition of +// TagUserWithContext is the same as TagUser with the addition of // the ability to pass a context and additional request options. // -// See UpdateAccountPasswordPolicy for details on how to use this API operation. +// See TagUser for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateAccountPasswordPolicyWithContext(ctx aws.Context, input *UpdateAccountPasswordPolicyInput, opts ...request.Option) (*UpdateAccountPasswordPolicyOutput, error) { - req, out := c.UpdateAccountPasswordPolicyRequest(input) +func (c *IAM) TagUserWithContext(ctx aws.Context, input *TagUserInput, opts ...request.Option) (*TagUserOutput, error) { + req, out := c.TagUserRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateAssumeRolePolicy = "UpdateAssumeRolePolicy" +const opUntagRole = "UntagRole" -// UpdateAssumeRolePolicyRequest generates a "aws/request.Request" representing the -// client's request for the UpdateAssumeRolePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UntagRoleRequest generates a "aws/request.Request" representing the +// client's request for the UntagRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateAssumeRolePolicy for more information on using the UpdateAssumeRolePolicy +// See UntagRole for more information on using the UntagRole // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateAssumeRolePolicyRequest method. -// req, resp := client.UpdateAssumeRolePolicyRequest(params) +// // Example sending a request using the UntagRoleRequest method. +// req, resp := client.UntagRoleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAssumeRolePolicy -func (c *IAM) UpdateAssumeRolePolicyRequest(input *UpdateAssumeRolePolicyInput) (req *request.Request, output *UpdateAssumeRolePolicyOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UntagRole +func (c *IAM) UntagRoleRequest(input *UntagRoleInput) (req *request.Request, output *UntagRoleOutput) { op := &request.Operation{ - Name: opUpdateAssumeRolePolicy, + Name: opUntagRole, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateAssumeRolePolicyInput{} + input = &UntagRoleInput{} } - output = &UpdateAssumeRolePolicyOutput{} + output = &UntagRoleOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateAssumeRolePolicy API operation for AWS Identity and Access Management. +// UntagRole API operation for AWS Identity and Access Management. // -// Updates the policy that grants an IAM entity permission to assume a role. -// This is typically referred to as the "role trust policy". For more information -// about roles, go to Using Roles to Delegate Permissions and Federate Identities -// (http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html). +// Removes the specified tags from the role. 
For more information about tagging, +// see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) +// in the IAM User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateAssumeRolePolicy for usage and error information. +// API operation UntagRole for usage and error information. // // Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" -// The request was rejected because the policy document was malformed. The error -// message describes the specific error. -// -// * ErrCodeLimitExceededException "LimitExceeded" -// The request was rejected because it attempted to create resources beyond -// the current AWS account limits. The error message describes the limit exceeded. -// -// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" -// The request was rejected because only the service that depends on the service-linked -// role can modify or delete the role on your behalf. The error message includes -// the name of the service that depends on this service-linked role. You must -// request the change through that service. +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. // // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAssumeRolePolicy -func (c *IAM) UpdateAssumeRolePolicy(input *UpdateAssumeRolePolicyInput) (*UpdateAssumeRolePolicyOutput, error) { - req, out := c.UpdateAssumeRolePolicyRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UntagRole +func (c *IAM) UntagRole(input *UntagRoleInput) (*UntagRoleOutput, error) { + req, out := c.UntagRoleRequest(input) return out, req.Send() } -// UpdateAssumeRolePolicyWithContext is the same as UpdateAssumeRolePolicy with the addition of +// UntagRoleWithContext is the same as UntagRole with the addition of // the ability to pass a context and additional request options. // -// See UpdateAssumeRolePolicy for details on how to use this API operation. +// See UntagRole for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateAssumeRolePolicyWithContext(ctx aws.Context, input *UpdateAssumeRolePolicyInput, opts ...request.Option) (*UpdateAssumeRolePolicyOutput, error) { - req, out := c.UpdateAssumeRolePolicyRequest(input) +func (c *IAM) UntagRoleWithContext(ctx aws.Context, input *UntagRoleInput, opts ...request.Option) (*UntagRoleOutput, error) { + req, out := c.UntagRoleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opUpdateGroup = "UpdateGroup" +const opUntagUser = "UntagUser" -// UpdateGroupRequest generates a "aws/request.Request" representing the -// client's request for the UpdateGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UntagUserRequest generates a "aws/request.Request" representing the +// client's request for the UntagUser operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateGroup for more information on using the UpdateGroup +// See UntagUser for more information on using the UntagUser // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateGroupRequest method. -// req, resp := client.UpdateGroupRequest(params) +// // Example sending a request using the UntagUserRequest method. +// req, resp := client.UntagUserRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateGroup -func (c *IAM) UpdateGroupRequest(input *UpdateGroupInput) (req *request.Request, output *UpdateGroupOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UntagUser +func (c *IAM) UntagUserRequest(input *UntagUserInput) (req *request.Request, output *UntagUserOutput) { op := &request.Operation{ - Name: opUpdateGroup, + Name: opUntagUser, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateGroupInput{} + input = &UntagUserInput{} } - output = &UpdateGroupOutput{} + output = &UntagUserOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateGroup API operation for AWS Identity and Access Management. -// -// Updates the name and/or the path of the specified IAM group. +// UntagUser API operation for AWS Identity and Access Management. // -// You should understand the implications of changing a group's path or name. -// For more information, see Renaming Users and Groups (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html) +// Removes the specified tags from the user. For more information about tagging, +// see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) // in the IAM User Guide. // -// To change an IAM group name the requester must have appropriate permissions -// on both the source object and the target object. For example, to change "Managers" -// to "MGRs", the entity making the request must have permission on both "Managers" -// and "MGRs", or must have permission on all (*). For more information about -// permissions, see Permissions and Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PermissionsAndPolicies.html). -// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
// // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateGroup for usage and error information. +// API operation UntagUser for usage and error information. // // Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" -// The request was rejected because it attempted to create a resource that already -// exists. -// -// * ErrCodeLimitExceededException "LimitExceeded" -// The request was rejected because it attempted to create resources beyond -// the current AWS account limits. The error message describes the limit exceeded. +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. // // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateGroup -func (c *IAM) UpdateGroup(input *UpdateGroupInput) (*UpdateGroupOutput, error) { - req, out := c.UpdateGroupRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UntagUser +func (c *IAM) UntagUser(input *UntagUserInput) (*UntagUserOutput, error) { + req, out := c.UntagUserRequest(input) return out, req.Send() } -// UpdateGroupWithContext is the same as UpdateGroup with the addition of +// UntagUserWithContext is the same as UntagUser with the addition of // the ability to pass a context and additional request options. // -// See UpdateGroup for details on how to use this API operation. +// See UntagUser for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateGroupWithContext(ctx aws.Context, input *UpdateGroupInput, opts ...request.Option) (*UpdateGroupOutput, error) { - req, out := c.UpdateGroupRequest(input) +func (c *IAM) UntagUserWithContext(ctx aws.Context, input *UntagUserInput, opts ...request.Option) (*UntagUserOutput, error) { + req, out := c.UntagUserRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateLoginProfile = "UpdateLoginProfile" +const opUpdateAccessKey = "UpdateAccessKey" -// UpdateLoginProfileRequest generates a "aws/request.Request" representing the -// client's request for the UpdateLoginProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateAccessKeyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAccessKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
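The untag operations are symmetric: `UntagRole` and `UntagUser` take the identity name plus a list of tag keys. A hypothetical sketch using placeholder names:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Remove tags from a role by key.
	_, err := svc.UntagRole(&iam.UntagRoleInput{
		RoleName: aws.String("example-role"),
		TagKeys:  []*string{aws.String("Project"), aws.String("Cost Center")},
	})
	if err != nil {
		log.Fatal(err)
	}

	// UntagUser has the same shape, keyed by user name instead of role name.
	_, err = svc.UntagUser(&iam.UntagUserInput{
		UserName: aws.String("example-user"),
		TagKeys:  []*string{aws.String("Project")},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```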
// -// See UpdateLoginProfile for more information on using the UpdateLoginProfile +// See UpdateAccessKey for more information on using the UpdateAccessKey // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateLoginProfileRequest method. -// req, resp := client.UpdateLoginProfileRequest(params) +// // Example sending a request using the UpdateAccessKeyRequest method. +// req, resp := client.UpdateAccessKeyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateLoginProfile -func (c *IAM) UpdateLoginProfileRequest(input *UpdateLoginProfileInput) (req *request.Request, output *UpdateLoginProfileOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey +func (c *IAM) UpdateAccessKeyRequest(input *UpdateAccessKeyInput) (req *request.Request, output *UpdateAccessKeyOutput) { op := &request.Operation{ - Name: opUpdateLoginProfile, + Name: opUpdateAccessKey, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateLoginProfileInput{} + input = &UpdateAccessKeyInput{} } - output = &UpdateLoginProfileOutput{} + output = &UpdateAccessKeyOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateLoginProfile API operation for AWS Identity and Access Management. +// UpdateAccessKey API operation for AWS Identity and Access Management. // -// Changes the password for the specified IAM user. +// Changes the status of the specified access key from Active to Inactive, or +// vice versa. This operation can be used to disable a user's key as part of +// a key rotation workflow. // -// IAM users can change their own passwords by calling ChangePassword. For more -// information about modifying passwords, see Managing Passwords (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html) +// If the UserName field is not specified, the user name is determined implicitly +// based on the AWS access key ID used to sign the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. +// +// For information about rotating keys, see Managing Keys and Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html) // in the IAM User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -12149,23 +12815,13 @@ func (c *IAM) UpdateLoginProfileRequest(input *UpdateLoginProfileInput) (req *re // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateLoginProfile for usage and error information. +// API operation UpdateAccessKey for usage and error information. // // Returned Error Codes: -// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" -// The request was rejected because it referenced an entity that is temporarily -// unmodifiable, such as a user name that was deleted and then recreated. The -// error indicates that the request is likely to succeed if you try again after -// waiting several minutes. 
The error message describes the entity. -// // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// * ErrCodePasswordPolicyViolationException "PasswordPolicyViolation" -// The request was rejected because the provided password did not meet the requirements -// imposed by the account password policy. -// // * ErrCodeLimitExceededException "LimitExceeded" // The request was rejected because it attempted to create resources beyond // the current AWS account limits. The error message describes the limit exceeded. @@ -12174,190 +12830,204 @@ func (c *IAM) UpdateLoginProfileRequest(input *UpdateLoginProfileInput) (req *re // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateLoginProfile -func (c *IAM) UpdateLoginProfile(input *UpdateLoginProfileInput) (*UpdateLoginProfileOutput, error) { - req, out := c.UpdateLoginProfileRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccessKey +func (c *IAM) UpdateAccessKey(input *UpdateAccessKeyInput) (*UpdateAccessKeyOutput, error) { + req, out := c.UpdateAccessKeyRequest(input) return out, req.Send() } -// UpdateLoginProfileWithContext is the same as UpdateLoginProfile with the addition of +// UpdateAccessKeyWithContext is the same as UpdateAccessKey with the addition of // the ability to pass a context and additional request options. // -// See UpdateLoginProfile for details on how to use this API operation. +// See UpdateAccessKey for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateLoginProfileWithContext(ctx aws.Context, input *UpdateLoginProfileInput, opts ...request.Option) (*UpdateLoginProfileOutput, error) { - req, out := c.UpdateLoginProfileRequest(input) +func (c *IAM) UpdateAccessKeyWithContext(ctx aws.Context, input *UpdateAccessKeyInput, opts ...request.Option) (*UpdateAccessKeyOutput, error) { + req, out := c.UpdateAccessKeyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateOpenIDConnectProviderThumbprint = "UpdateOpenIDConnectProviderThumbprint" +const opUpdateAccountPasswordPolicy = "UpdateAccountPasswordPolicy" -// UpdateOpenIDConnectProviderThumbprintRequest generates a "aws/request.Request" representing the -// client's request for the UpdateOpenIDConnectProviderThumbprint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateAccountPasswordPolicyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAccountPasswordPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
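For the key-rotation workflow described under `UpdateAccessKey`, deactivating an old key might look like this sketch; the user name and access key ID are placeholder example values:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Mark an old access key Inactive as part of a rotation workflow.
	// If UserName is omitted, IAM infers the user from the signing credentials.
	_, err := svc.UpdateAccessKey(&iam.UpdateAccessKeyInput{
		UserName:    aws.String("example-user"),
		AccessKeyId: aws.String("AKIAIOSFODNN7EXAMPLE"),
		Status:      aws.String(iam.StatusTypeInactive),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```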
// -// See UpdateOpenIDConnectProviderThumbprint for more information on using the UpdateOpenIDConnectProviderThumbprint +// See UpdateAccountPasswordPolicy for more information on using the UpdateAccountPasswordPolicy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateOpenIDConnectProviderThumbprintRequest method. -// req, resp := client.UpdateOpenIDConnectProviderThumbprintRequest(params) +// // Example sending a request using the UpdateAccountPasswordPolicyRequest method. +// req, resp := client.UpdateAccountPasswordPolicyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateOpenIDConnectProviderThumbprint -func (c *IAM) UpdateOpenIDConnectProviderThumbprintRequest(input *UpdateOpenIDConnectProviderThumbprintInput) (req *request.Request, output *UpdateOpenIDConnectProviderThumbprintOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccountPasswordPolicy +func (c *IAM) UpdateAccountPasswordPolicyRequest(input *UpdateAccountPasswordPolicyInput) (req *request.Request, output *UpdateAccountPasswordPolicyOutput) { op := &request.Operation{ - Name: opUpdateOpenIDConnectProviderThumbprint, + Name: opUpdateAccountPasswordPolicy, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateOpenIDConnectProviderThumbprintInput{} + input = &UpdateAccountPasswordPolicyInput{} } - output = &UpdateOpenIDConnectProviderThumbprintOutput{} + output = &UpdateAccountPasswordPolicyOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateOpenIDConnectProviderThumbprint API operation for AWS Identity and Access Management. -// -// Replaces the existing list of server certificate thumbprints associated with -// an OpenID Connect (OIDC) provider resource object with a new list of thumbprints. +// UpdateAccountPasswordPolicy API operation for AWS Identity and Access Management. // -// The list that you pass with this action completely replaces the existing -// list of thumbprints. (The lists are not merged.) +// Updates the password policy settings for the AWS account. // -// Typically, you need to update a thumbprint only when the identity provider's -// certificate changes, which occurs rarely. However, if the provider's certificate -// does change, any attempt to assume an IAM role that specifies the OIDC provider -// as a principal fails until the certificate thumbprint is updated. +// This operation does not support partial updates. No parameters are required, +// but if you do not specify a parameter, that parameter's value reverts to +// its default value. See the Request Parameters section for each parameter's +// default value. Also note that some parameters do not allow the default parameter +// to be explicitly set. Instead, to invoke the default value, do not include +// that parameter when you invoke the operation. // -// Because trust for the OIDC provider is ultimately derived from the provider's -// certificate and is validated by the thumbprint, it is a best practice to -// limit access to the UpdateOpenIDConnectProviderThumbprint action to highly-privileged -// users. 
+// For more information about using a password policy, see Managing an IAM Password +// Policy (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html) +// in the IAM User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateOpenIDConnectProviderThumbprint for usage and error information. +// API operation UpdateAccountPasswordPolicy for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidInputException "InvalidInput" -// The request was rejected because an invalid or out-of-range value was supplied -// for an input parameter. -// // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateOpenIDConnectProviderThumbprint -func (c *IAM) UpdateOpenIDConnectProviderThumbprint(input *UpdateOpenIDConnectProviderThumbprintInput) (*UpdateOpenIDConnectProviderThumbprintOutput, error) { - req, out := c.UpdateOpenIDConnectProviderThumbprintRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAccountPasswordPolicy +func (c *IAM) UpdateAccountPasswordPolicy(input *UpdateAccountPasswordPolicyInput) (*UpdateAccountPasswordPolicyOutput, error) { + req, out := c.UpdateAccountPasswordPolicyRequest(input) return out, req.Send() } -// UpdateOpenIDConnectProviderThumbprintWithContext is the same as UpdateOpenIDConnectProviderThumbprint with the addition of +// UpdateAccountPasswordPolicyWithContext is the same as UpdateAccountPasswordPolicy with the addition of // the ability to pass a context and additional request options. // -// See UpdateOpenIDConnectProviderThumbprint for details on how to use this API operation. +// See UpdateAccountPasswordPolicy for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateOpenIDConnectProviderThumbprintWithContext(ctx aws.Context, input *UpdateOpenIDConnectProviderThumbprintInput, opts ...request.Option) (*UpdateOpenIDConnectProviderThumbprintOutput, error) { - req, out := c.UpdateOpenIDConnectProviderThumbprintRequest(input) +func (c *IAM) UpdateAccountPasswordPolicyWithContext(ctx aws.Context, input *UpdateAccountPasswordPolicyInput, opts ...request.Option) (*UpdateAccountPasswordPolicyOutput, error) { + req, out := c.UpdateAccountPasswordPolicyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opUpdateRoleDescription = "UpdateRoleDescription" +const opUpdateAssumeRolePolicy = "UpdateAssumeRolePolicy" -// UpdateRoleDescriptionRequest generates a "aws/request.Request" representing the -// client's request for the UpdateRoleDescription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateAssumeRolePolicyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAssumeRolePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateRoleDescription for more information on using the UpdateRoleDescription +// See UpdateAssumeRolePolicy for more information on using the UpdateAssumeRolePolicy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateRoleDescriptionRequest method. -// req, resp := client.UpdateRoleDescriptionRequest(params) +// // Example sending a request using the UpdateAssumeRolePolicyRequest method. +// req, resp := client.UpdateAssumeRolePolicyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRoleDescription -func (c *IAM) UpdateRoleDescriptionRequest(input *UpdateRoleDescriptionInput) (req *request.Request, output *UpdateRoleDescriptionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAssumeRolePolicy +func (c *IAM) UpdateAssumeRolePolicyRequest(input *UpdateAssumeRolePolicyInput) (req *request.Request, output *UpdateAssumeRolePolicyOutput) { op := &request.Operation{ - Name: opUpdateRoleDescription, + Name: opUpdateAssumeRolePolicy, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateRoleDescriptionInput{} + input = &UpdateAssumeRolePolicyInput{} } - output = &UpdateRoleDescriptionOutput{} + output = &UpdateAssumeRolePolicyOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateRoleDescription API operation for AWS Identity and Access Management. +// UpdateAssumeRolePolicy API operation for AWS Identity and Access Management. // -// Modifies the description of a role. +// Updates the policy that grants an IAM entity permission to assume a role. +// This is typically referred to as the "role trust policy". For more information +// about roles, go to Using Roles to Delegate Permissions and Federate Identities +// (http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-toplevel.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateRoleDescription for usage and error information. +// API operation UpdateAssumeRolePolicy for usage and error information. 
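Because `UpdateAccountPasswordPolicy` does not support partial updates, a caller would normally supply every setting it cares about in a single request; a hypothetical sketch with illustrative values:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Unspecified parameters revert to their defaults, so set everything
	// that matters in one call. The values here are illustrative only.
	_, err := svc.UpdateAccountPasswordPolicy(&iam.UpdateAccountPasswordPolicyInput{
		MinimumPasswordLength:      aws.Int64(14),
		RequireSymbols:             aws.Bool(true),
		RequireNumbers:             aws.Bool(true),
		RequireUppercaseCharacters: aws.Bool(true),
		RequireLowercaseCharacters: aws.Bool(true),
		AllowUsersToChangePassword: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```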
// // Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocument" +// The request was rejected because the policy document was malformed. The error +// message describes the specific error. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// // * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" // The request was rejected because only the service that depends on the service-linked // role can modify or delete the role on your behalf. The error message includes @@ -12368,91 +13038,102 @@ func (c *IAM) UpdateRoleDescriptionRequest(input *UpdateRoleDescriptionInput) (r // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRoleDescription -func (c *IAM) UpdateRoleDescription(input *UpdateRoleDescriptionInput) (*UpdateRoleDescriptionOutput, error) { - req, out := c.UpdateRoleDescriptionRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateAssumeRolePolicy +func (c *IAM) UpdateAssumeRolePolicy(input *UpdateAssumeRolePolicyInput) (*UpdateAssumeRolePolicyOutput, error) { + req, out := c.UpdateAssumeRolePolicyRequest(input) return out, req.Send() } -// UpdateRoleDescriptionWithContext is the same as UpdateRoleDescription with the addition of +// UpdateAssumeRolePolicyWithContext is the same as UpdateAssumeRolePolicy with the addition of // the ability to pass a context and additional request options. // -// See UpdateRoleDescription for details on how to use this API operation. +// See UpdateAssumeRolePolicy for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateRoleDescriptionWithContext(ctx aws.Context, input *UpdateRoleDescriptionInput, opts ...request.Option) (*UpdateRoleDescriptionOutput, error) { - req, out := c.UpdateRoleDescriptionRequest(input) +func (c *IAM) UpdateAssumeRolePolicyWithContext(ctx aws.Context, input *UpdateAssumeRolePolicyInput, opts ...request.Option) (*UpdateAssumeRolePolicyOutput, error) { + req, out := c.UpdateAssumeRolePolicyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateSAMLProvider = "UpdateSAMLProvider" +const opUpdateGroup = "UpdateGroup" -// UpdateSAMLProviderRequest generates a "aws/request.Request" representing the -// client's request for the UpdateSAMLProvider operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateGroupRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
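A sketch of replacing a role's trust policy via `UpdateAssumeRolePolicy`; the role name and the EC2 trust document below are placeholder assumptions:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

// A placeholder trust policy allowing EC2 to assume the role.
const trustPolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}`

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Replace the role's trust policy with the document above.
	_, err := svc.UpdateAssumeRolePolicy(&iam.UpdateAssumeRolePolicyInput{
		RoleName:       aws.String("example-role"),
		PolicyDocument: aws.String(trustPolicy),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```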
// -// See UpdateSAMLProvider for more information on using the UpdateSAMLProvider +// See UpdateGroup for more information on using the UpdateGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateSAMLProviderRequest method. -// req, resp := client.UpdateSAMLProviderRequest(params) +// // Example sending a request using the UpdateGroupRequest method. +// req, resp := client.UpdateGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSAMLProvider -func (c *IAM) UpdateSAMLProviderRequest(input *UpdateSAMLProviderInput) (req *request.Request, output *UpdateSAMLProviderOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateGroup +func (c *IAM) UpdateGroupRequest(input *UpdateGroupInput) (req *request.Request, output *UpdateGroupOutput) { op := &request.Operation{ - Name: opUpdateSAMLProvider, + Name: opUpdateGroup, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateSAMLProviderInput{} + input = &UpdateGroupInput{} } - output = &UpdateSAMLProviderOutput{} + output = &UpdateGroupOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateSAMLProvider API operation for AWS Identity and Access Management. +// UpdateGroup API operation for AWS Identity and Access Management. // -// Updates the metadata document for an existing SAML provider resource object. +// Updates the name and/or the path of the specified IAM group. // -// This operation requires Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). +// You should understand the implications of changing a group's path or name. +// For more information, see Renaming Users and Groups (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html) +// in the IAM User Guide. +// +// The person making the request (the principal), must have permission to change +// the role group with the old name and the new name. For example, to change +// the group named Managers to MGRs, the principal must have a policy that allows +// them to update both groups. If the principal has permission to update the +// Managers group, but not the MGRs group, then the update fails. For more information +// about permissions, see Access Management (http://docs.aws.amazon.com/IAM/latest/UserGuide/access.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateSAMLProvider for usage and error information. +// API operation UpdateGroup for usage and error information. // // Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// * ErrCodeInvalidInputException "InvalidInput" -// The request was rejected because an invalid or out-of-range value was supplied -// for an input parameter. 
+// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. // // * ErrCodeLimitExceededException "LimitExceeded" // The request was rejected because it attempted to create resources beyond @@ -12462,571 +13143,564 @@ func (c *IAM) UpdateSAMLProviderRequest(input *UpdateSAMLProviderInput) (req *re // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSAMLProvider -func (c *IAM) UpdateSAMLProvider(input *UpdateSAMLProviderInput) (*UpdateSAMLProviderOutput, error) { - req, out := c.UpdateSAMLProviderRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateGroup +func (c *IAM) UpdateGroup(input *UpdateGroupInput) (*UpdateGroupOutput, error) { + req, out := c.UpdateGroupRequest(input) return out, req.Send() } -// UpdateSAMLProviderWithContext is the same as UpdateSAMLProvider with the addition of +// UpdateGroupWithContext is the same as UpdateGroup with the addition of // the ability to pass a context and additional request options. // -// See UpdateSAMLProvider for details on how to use this API operation. +// See UpdateGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateSAMLProviderWithContext(ctx aws.Context, input *UpdateSAMLProviderInput, opts ...request.Option) (*UpdateSAMLProviderOutput, error) { - req, out := c.UpdateSAMLProviderRequest(input) +func (c *IAM) UpdateGroupWithContext(ctx aws.Context, input *UpdateGroupInput, opts ...request.Option) (*UpdateGroupOutput, error) { + req, out := c.UpdateGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateSSHPublicKey = "UpdateSSHPublicKey" +const opUpdateLoginProfile = "UpdateLoginProfile" -// UpdateSSHPublicKeyRequest generates a "aws/request.Request" representing the -// client's request for the UpdateSSHPublicKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateLoginProfileRequest generates a "aws/request.Request" representing the +// client's request for the UpdateLoginProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateSSHPublicKey for more information on using the UpdateSSHPublicKey +// See UpdateLoginProfile for more information on using the UpdateLoginProfile // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateSSHPublicKeyRequest method. -// req, resp := client.UpdateSSHPublicKeyRequest(params) +// // Example sending a request using the UpdateLoginProfileRequest method. 
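Another hedged aside rather than part of the patch: a sketch of the `UpdateGroup` rename described above, reusing the Managers/MGRs names from the doc comment and assuming an `*iam.IAM` client (`svc`) built as in the earlier sketch.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
)

// renameGroup renames an IAM group. As the doc comment notes, the caller
// must have permission on both the old and the new group name.
func renameGroup(svc *iam.IAM) error {
	_, err := svc.UpdateGroup(&iam.UpdateGroupInput{
		GroupName:    aws.String("Managers"), // existing name (placeholder)
		NewGroupName: aws.String("MGRs"),     // new name (placeholder)
	})
	return err
}
```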
+// req, resp := client.UpdateLoginProfileRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSSHPublicKey -func (c *IAM) UpdateSSHPublicKeyRequest(input *UpdateSSHPublicKeyInput) (req *request.Request, output *UpdateSSHPublicKeyOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateLoginProfile +func (c *IAM) UpdateLoginProfileRequest(input *UpdateLoginProfileInput) (req *request.Request, output *UpdateLoginProfileOutput) { op := &request.Operation{ - Name: opUpdateSSHPublicKey, + Name: opUpdateLoginProfile, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateSSHPublicKeyInput{} + input = &UpdateLoginProfileInput{} } - output = &UpdateSSHPublicKeyOutput{} + output = &UpdateLoginProfileOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateSSHPublicKey API operation for AWS Identity and Access Management. +// UpdateLoginProfile API operation for AWS Identity and Access Management. // -// Sets the status of an IAM user's SSH public key to active or inactive. SSH -// public keys that are inactive cannot be used for authentication. This action -// can be used to disable a user's SSH public key as part of a key rotation -// work flow. +// Changes the password for the specified IAM user. // -// The SSH public key affected by this action is used only for authenticating -// the associated IAM user to an AWS CodeCommit repository. For more information -// about using SSH keys to authenticate to an AWS CodeCommit repository, see -// Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) -// in the AWS CodeCommit User Guide. +// IAM users can change their own passwords by calling ChangePassword. For more +// information about modifying passwords, see Managing Passwords (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingLogins.html) +// in the IAM User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateSSHPublicKey for usage and error information. +// API operation UpdateLoginProfile for usage and error information. // // Returned Error Codes: +// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" +// The request was rejected because it referenced an entity that is temporarily +// unmodifiable, such as a user name that was deleted and then recreated. The +// error indicates that the request is likely to succeed if you try again after +// waiting several minutes. The error message describes the entity. +// // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSSHPublicKey -func (c *IAM) UpdateSSHPublicKey(input *UpdateSSHPublicKeyInput) (*UpdateSSHPublicKeyOutput, error) { - req, out := c.UpdateSSHPublicKeyRequest(input) +// * ErrCodePasswordPolicyViolationException "PasswordPolicyViolation" +// The request was rejected because the provided password did not meet the requirements +// imposed by the account password policy. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateLoginProfile +func (c *IAM) UpdateLoginProfile(input *UpdateLoginProfileInput) (*UpdateLoginProfileOutput, error) { + req, out := c.UpdateLoginProfileRequest(input) return out, req.Send() } -// UpdateSSHPublicKeyWithContext is the same as UpdateSSHPublicKey with the addition of +// UpdateLoginProfileWithContext is the same as UpdateLoginProfile with the addition of // the ability to pass a context and additional request options. // -// See UpdateSSHPublicKey for details on how to use this API operation. +// See UpdateLoginProfile for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateSSHPublicKeyWithContext(ctx aws.Context, input *UpdateSSHPublicKeyInput, opts ...request.Option) (*UpdateSSHPublicKeyOutput, error) { - req, out := c.UpdateSSHPublicKeyRequest(input) +func (c *IAM) UpdateLoginProfileWithContext(ctx aws.Context, input *UpdateLoginProfileInput, opts ...request.Option) (*UpdateLoginProfileOutput, error) { + req, out := c.UpdateLoginProfileRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateServerCertificate = "UpdateServerCertificate" +const opUpdateOpenIDConnectProviderThumbprint = "UpdateOpenIDConnectProviderThumbprint" -// UpdateServerCertificateRequest generates a "aws/request.Request" representing the -// client's request for the UpdateServerCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateOpenIDConnectProviderThumbprintRequest generates a "aws/request.Request" representing the +// client's request for the UpdateOpenIDConnectProviderThumbprint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateServerCertificate for more information on using the UpdateServerCertificate +// See UpdateOpenIDConnectProviderThumbprint for more information on using the UpdateOpenIDConnectProviderThumbprint // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
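For completeness, a hypothetical sketch of `UpdateLoginProfile` (changing an IAM user's console password), again assuming a pre-built client; the user name and password are placeholders, and `PasswordResetRequired` is optional.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
)

// resetConsolePassword sets a new console password for an IAM user and
// forces a reset at next sign-in.
func resetConsolePassword(svc *iam.IAM) error {
	_, err := svc.UpdateLoginProfile(&iam.UpdateLoginProfileInput{
		UserName:              aws.String("example-user"),   // placeholder
		Password:              aws.String("CorrectHorse9!"), // placeholder; must satisfy the account password policy
		PasswordResetRequired: aws.Bool(true),
	})
	return err
}
```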
// // -// // Example sending a request using the UpdateServerCertificateRequest method. -// req, resp := client.UpdateServerCertificateRequest(params) +// // Example sending a request using the UpdateOpenIDConnectProviderThumbprintRequest method. +// req, resp := client.UpdateOpenIDConnectProviderThumbprintRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServerCertificate -func (c *IAM) UpdateServerCertificateRequest(input *UpdateServerCertificateInput) (req *request.Request, output *UpdateServerCertificateOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateOpenIDConnectProviderThumbprint +func (c *IAM) UpdateOpenIDConnectProviderThumbprintRequest(input *UpdateOpenIDConnectProviderThumbprintInput) (req *request.Request, output *UpdateOpenIDConnectProviderThumbprintOutput) { op := &request.Operation{ - Name: opUpdateServerCertificate, + Name: opUpdateOpenIDConnectProviderThumbprint, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateServerCertificateInput{} + input = &UpdateOpenIDConnectProviderThumbprintInput{} } - output = &UpdateServerCertificateOutput{} + output = &UpdateOpenIDConnectProviderThumbprintOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateServerCertificate API operation for AWS Identity and Access Management. +// UpdateOpenIDConnectProviderThumbprint API operation for AWS Identity and Access Management. // -// Updates the name and/or the path of the specified server certificate stored -// in IAM. +// Replaces the existing list of server certificate thumbprints associated with +// an OpenID Connect (OIDC) provider resource object with a new list of thumbprints. // -// For more information about working with server certificates, including a -// list of AWS services that can use the server certificates that you manage -// with IAM, go to Working with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) -// in the IAM User Guide. +// The list that you pass with this operation completely replaces the existing +// list of thumbprints. (The lists are not merged.) // -// You should understand the implications of changing a server certificate's -// path or name. For more information, see Renaming a Server Certificate (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs_manage.html#RenamingServerCerts) -// in the IAM User Guide. +// Typically, you need to update a thumbprint only when the identity provider's +// certificate changes, which occurs rarely. However, if the provider's certificate +// does change, any attempt to assume an IAM role that specifies the OIDC provider +// as a principal fails until the certificate thumbprint is updated. // -// To change a server certificate name the requester must have appropriate permissions -// on both the source object and the target object. For example, to change the -// name from "ProductionCert" to "ProdCert", the entity making the request must -// have permission on "ProductionCert" and "ProdCert", or must have permission -// on all (*). For more information about permissions, see Access Management -// (http://docs.aws.amazon.com/IAM/latest/UserGuide/access.html) in the IAM -// User Guide. 
+// Because trust for the OIDC provider is derived from the provider's certificate +// and is validated by the thumbprint, it is best to limit access to the UpdateOpenIDConnectProviderThumbprint +// operation to highly privileged users. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateServerCertificate for usage and error information. +// API operation UpdateOpenIDConnectProviderThumbprint for usage and error information. // // Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" -// The request was rejected because it attempted to create a resource that already -// exists. -// -// * ErrCodeLimitExceededException "LimitExceeded" -// The request was rejected because it attempted to create resources beyond -// the current AWS account limits. The error message describes the limit exceeded. -// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServerCertificate -func (c *IAM) UpdateServerCertificate(input *UpdateServerCertificateInput) (*UpdateServerCertificateOutput, error) { - req, out := c.UpdateServerCertificateRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateOpenIDConnectProviderThumbprint +func (c *IAM) UpdateOpenIDConnectProviderThumbprint(input *UpdateOpenIDConnectProviderThumbprintInput) (*UpdateOpenIDConnectProviderThumbprintOutput, error) { + req, out := c.UpdateOpenIDConnectProviderThumbprintRequest(input) return out, req.Send() } -// UpdateServerCertificateWithContext is the same as UpdateServerCertificate with the addition of +// UpdateOpenIDConnectProviderThumbprintWithContext is the same as UpdateOpenIDConnectProviderThumbprint with the addition of // the ability to pass a context and additional request options. // -// See UpdateServerCertificate for details on how to use this API operation. +// See UpdateOpenIDConnectProviderThumbprint for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateServerCertificateWithContext(ctx aws.Context, input *UpdateServerCertificateInput, opts ...request.Option) (*UpdateServerCertificateOutput, error) { - req, out := c.UpdateServerCertificateRequest(input) +func (c *IAM) UpdateOpenIDConnectProviderThumbprintWithContext(ctx aws.Context, input *UpdateOpenIDConnectProviderThumbprintInput, opts ...request.Option) (*UpdateOpenIDConnectProviderThumbprintOutput, error) { + req, out := c.UpdateOpenIDConnectProviderThumbprintRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opUpdateServiceSpecificCredential = "UpdateServiceSpecificCredential" +const opUpdateRole = "UpdateRole" -// UpdateServiceSpecificCredentialRequest generates a "aws/request.Request" representing the -// client's request for the UpdateServiceSpecificCredential operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateRoleRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRole operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateServiceSpecificCredential for more information on using the UpdateServiceSpecificCredential +// See UpdateRole for more information on using the UpdateRole // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateServiceSpecificCredentialRequest method. -// req, resp := client.UpdateServiceSpecificCredentialRequest(params) +// // Example sending a request using the UpdateRoleRequest method. +// req, resp := client.UpdateRoleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServiceSpecificCredential -func (c *IAM) UpdateServiceSpecificCredentialRequest(input *UpdateServiceSpecificCredentialInput) (req *request.Request, output *UpdateServiceSpecificCredentialOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRole +func (c *IAM) UpdateRoleRequest(input *UpdateRoleInput) (req *request.Request, output *UpdateRoleOutput) { op := &request.Operation{ - Name: opUpdateServiceSpecificCredential, + Name: opUpdateRole, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateServiceSpecificCredentialInput{} + input = &UpdateRoleInput{} } - output = &UpdateServiceSpecificCredentialOutput{} + output = &UpdateRoleOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateServiceSpecificCredential API operation for AWS Identity and Access Management. +// UpdateRole API operation for AWS Identity and Access Management. // -// Sets the status of a service-specific credential to Active or Inactive. Service-specific -// credentials that are inactive cannot be used for authentication to the service. -// This action can be used to disable a user’s service-specific credential as -// part of a credential rotation work flow. +// Updates the description or maximum session duration setting of a role. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateServiceSpecificCredential for usage and error information. +// API operation UpdateRole for usage and error information. 
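A sketch (not part of the patch) of the thumbprint replacement described for `UpdateOpenIDConnectProviderThumbprint`, which completes just above; the provider ARN and thumbprint value are placeholders, and the list passed here fully replaces the stored one.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
)

// replaceOIDCThumbprints overwrites the OIDC provider's thumbprint list;
// the lists are not merged.
func replaceOIDCThumbprints(svc *iam.IAM) error {
	_, err := svc.UpdateOpenIDConnectProviderThumbprint(&iam.UpdateOpenIDConnectProviderThumbprintInput{
		OpenIDConnectProviderArn: aws.String("arn:aws:iam::123456789012:oidc-provider/example.com"), // placeholder
		ThumbprintList: []*string{
			aws.String("9999999999999999999999999999999999999999"), // placeholder 40-hex-char thumbprint
		},
	})
	return err
}
```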
// // Returned Error Codes: +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. You must +// request the change through that service. +// // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServiceSpecificCredential -func (c *IAM) UpdateServiceSpecificCredential(input *UpdateServiceSpecificCredentialInput) (*UpdateServiceSpecificCredentialOutput, error) { - req, out := c.UpdateServiceSpecificCredentialRequest(input) +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRole +func (c *IAM) UpdateRole(input *UpdateRoleInput) (*UpdateRoleOutput, error) { + req, out := c.UpdateRoleRequest(input) return out, req.Send() } -// UpdateServiceSpecificCredentialWithContext is the same as UpdateServiceSpecificCredential with the addition of +// UpdateRoleWithContext is the same as UpdateRole with the addition of // the ability to pass a context and additional request options. // -// See UpdateServiceSpecificCredential for details on how to use this API operation. +// See UpdateRole for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateServiceSpecificCredentialWithContext(ctx aws.Context, input *UpdateServiceSpecificCredentialInput, opts ...request.Option) (*UpdateServiceSpecificCredentialOutput, error) { - req, out := c.UpdateServiceSpecificCredentialRequest(input) +func (c *IAM) UpdateRoleWithContext(ctx aws.Context, input *UpdateRoleInput, opts ...request.Option) (*UpdateRoleOutput, error) { + req, out := c.UpdateRoleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateSigningCertificate = "UpdateSigningCertificate" +const opUpdateRoleDescription = "UpdateRoleDescription" -// UpdateSigningCertificateRequest generates a "aws/request.Request" representing the -// client's request for the UpdateSigningCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateRoleDescriptionRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRoleDescription operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateSigningCertificate for more information on using the UpdateSigningCertificate +// See UpdateRoleDescription for more information on using the UpdateRoleDescription // API call, and error handling. 
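The newly added `UpdateRole` operation is the part of this SDK bump most relevant to the provider. As a minimal sketch (not part of the vendored patch), updating a role's description and maximum session duration with placeholder values might look like this.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
)

// tuneRole updates the description and maximum session duration of a role.
func tuneRole(svc *iam.IAM) error {
	_, err := svc.UpdateRole(&iam.UpdateRoleInput{
		RoleName:           aws.String("example-role"), // placeholder
		Description:        aws.String("Updated via UpdateRole"),
		MaxSessionDuration: aws.Int64(7200), // seconds
	})
	return err
}
```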
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateSigningCertificateRequest method. -// req, resp := client.UpdateSigningCertificateRequest(params) +// // Example sending a request using the UpdateRoleDescriptionRequest method. +// req, resp := client.UpdateRoleDescriptionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSigningCertificate -func (c *IAM) UpdateSigningCertificateRequest(input *UpdateSigningCertificateInput) (req *request.Request, output *UpdateSigningCertificateOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRoleDescription +func (c *IAM) UpdateRoleDescriptionRequest(input *UpdateRoleDescriptionInput) (req *request.Request, output *UpdateRoleDescriptionOutput) { op := &request.Operation{ - Name: opUpdateSigningCertificate, + Name: opUpdateRoleDescription, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateSigningCertificateInput{} + input = &UpdateRoleDescriptionInput{} } - output = &UpdateSigningCertificateOutput{} + output = &UpdateRoleDescriptionOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateSigningCertificate API operation for AWS Identity and Access Management. +// UpdateRoleDescription API operation for AWS Identity and Access Management. // -// Changes the status of the specified user signing certificate from active -// to disabled, or vice versa. This action can be used to disable an IAM user's -// signing certificate as part of a certificate rotation work flow. +// Use instead. // -// If the UserName field is not specified, the UserName is determined implicitly -// based on the AWS access key ID used to sign the request. Because this action -// works for access keys under the AWS account, you can use this action to manage -// root credentials even if the AWS account has no associated users. +// Modifies only the description of a role. This operation performs the same +// function as the Description parameter in the UpdateRole operation. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateSigningCertificate for usage and error information. +// API operation UpdateRoleDescription for usage and error information. // // Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// * ErrCodeLimitExceededException "LimitExceeded" -// The request was rejected because it attempted to create resources beyond -// the current AWS account limits. The error message describes the limit exceeded. +// * ErrCodeUnmodifiableEntityException "UnmodifiableEntity" +// The request was rejected because only the service that depends on the service-linked +// role can modify or delete the role on your behalf. The error message includes +// the name of the service that depends on this service-linked role. 
You must +// request the change through that service. // // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSigningCertificate -func (c *IAM) UpdateSigningCertificate(input *UpdateSigningCertificateInput) (*UpdateSigningCertificateOutput, error) { - req, out := c.UpdateSigningCertificateRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateRoleDescription +func (c *IAM) UpdateRoleDescription(input *UpdateRoleDescriptionInput) (*UpdateRoleDescriptionOutput, error) { + req, out := c.UpdateRoleDescriptionRequest(input) return out, req.Send() } -// UpdateSigningCertificateWithContext is the same as UpdateSigningCertificate with the addition of +// UpdateRoleDescriptionWithContext is the same as UpdateRoleDescription with the addition of // the ability to pass a context and additional request options. // -// See UpdateSigningCertificate for details on how to use this API operation. +// See UpdateRoleDescription for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateSigningCertificateWithContext(ctx aws.Context, input *UpdateSigningCertificateInput, opts ...request.Option) (*UpdateSigningCertificateOutput, error) { - req, out := c.UpdateSigningCertificateRequest(input) +func (c *IAM) UpdateRoleDescriptionWithContext(ctx aws.Context, input *UpdateRoleDescriptionInput, opts ...request.Option) (*UpdateRoleDescriptionOutput, error) { + req, out := c.UpdateRoleDescriptionRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateUser = "UpdateUser" +const opUpdateSAMLProvider = "UpdateSAMLProvider" -// UpdateUserRequest generates a "aws/request.Request" representing the -// client's request for the UpdateUser operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateSAMLProviderRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSAMLProvider operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateUser for more information on using the UpdateUser +// See UpdateSAMLProvider for more information on using the UpdateSAMLProvider // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateUserRequest method. -// req, resp := client.UpdateUserRequest(params) +// // Example sending a request using the UpdateSAMLProviderRequest method. 
+// req, resp := client.UpdateSAMLProviderRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateUser -func (c *IAM) UpdateUserRequest(input *UpdateUserInput) (req *request.Request, output *UpdateUserOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSAMLProvider +func (c *IAM) UpdateSAMLProviderRequest(input *UpdateSAMLProviderInput) (req *request.Request, output *UpdateSAMLProviderOutput) { op := &request.Operation{ - Name: opUpdateUser, + Name: opUpdateSAMLProvider, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateUserInput{} + input = &UpdateSAMLProviderInput{} } - output = &UpdateUserOutput{} + output = &UpdateSAMLProviderOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateUser API operation for AWS Identity and Access Management. -// -// Updates the name and/or the path of the specified IAM user. +// UpdateSAMLProvider API operation for AWS Identity and Access Management. // -// You should understand the implications of changing an IAM user's path or -// name. For more information, see Renaming an IAM User (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_renaming) -// and Renaming an IAM Group (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_rename.html) -// in the IAM User Guide. +// Updates the metadata document for an existing SAML provider resource object. // -// To change a user name the requester must have appropriate permissions on -// both the source object and the target object. For example, to change Bob -// to Robert, the entity making the request must have permission on Bob and -// Robert, or must have permission on all (*). For more information about permissions, -// see Permissions and Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PermissionsAndPolicies.html). +// This operation requires Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UpdateUser for usage and error information. +// API operation UpdateSAMLProvider for usage and error information. // // Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // +// * ErrCodeInvalidInputException "InvalidInput" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// // * ErrCodeLimitExceededException "LimitExceeded" // The request was rejected because it attempted to create resources beyond // the current AWS account limits. The error message describes the limit exceeded. // -// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" -// The request was rejected because it attempted to create a resource that already -// exists. 
-// -// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" -// The request was rejected because it referenced an entity that is temporarily -// unmodifiable, such as a user name that was deleted and then recreated. The -// error indicates that the request is likely to succeed if you try again after -// waiting several minutes. The error message describes the entity. -// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateUser -func (c *IAM) UpdateUser(input *UpdateUserInput) (*UpdateUserOutput, error) { - req, out := c.UpdateUserRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSAMLProvider +func (c *IAM) UpdateSAMLProvider(input *UpdateSAMLProviderInput) (*UpdateSAMLProviderOutput, error) { + req, out := c.UpdateSAMLProviderRequest(input) return out, req.Send() } -// UpdateUserWithContext is the same as UpdateUser with the addition of +// UpdateSAMLProviderWithContext is the same as UpdateSAMLProvider with the addition of // the ability to pass a context and additional request options. // -// See UpdateUser for details on how to use this API operation. +// See UpdateSAMLProvider for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UpdateUserWithContext(ctx aws.Context, input *UpdateUserInput, opts ...request.Option) (*UpdateUserOutput, error) { - req, out := c.UpdateUserRequest(input) +func (c *IAM) UpdateSAMLProviderWithContext(ctx aws.Context, input *UpdateSAMLProviderInput, opts ...request.Option) (*UpdateSAMLProviderOutput, error) { + req, out := c.UpdateSAMLProviderRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUploadSSHPublicKey = "UploadSSHPublicKey" +const opUpdateSSHPublicKey = "UpdateSSHPublicKey" -// UploadSSHPublicKeyRequest generates a "aws/request.Request" representing the -// client's request for the UploadSSHPublicKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateSSHPublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSSHPublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UploadSSHPublicKey for more information on using the UploadSSHPublicKey +// See UpdateSSHPublicKey for more information on using the UpdateSSHPublicKey // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UploadSSHPublicKeyRequest method. -// req, resp := client.UploadSSHPublicKeyRequest(params) +// // Example sending a request using the UpdateSSHPublicKeyRequest method. 
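A hypothetical sketch of `UpdateSAMLProvider`, which replaces the SAML metadata document for an existing provider; the ARN and file path are placeholders, and the metadata would normally come from the identity provider's exported XML.

```go
package example

import (
	"io/ioutil"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
)

// refreshSAMLMetadata uploads a new metadata document for a SAML provider.
func refreshSAMLMetadata(svc *iam.IAM) error {
	metadata, err := ioutil.ReadFile("saml-metadata.xml") // placeholder path
	if err != nil {
		return err
	}
	_, err = svc.UpdateSAMLProvider(&iam.UpdateSAMLProviderInput{
		SAMLProviderArn:      aws.String("arn:aws:iam::123456789012:saml-provider/example"), // placeholder
		SAMLMetadataDocument: aws.String(string(metadata)),
	})
	return err
}
```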
+// req, resp := client.UpdateSSHPublicKeyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSSHPublicKey -func (c *IAM) UploadSSHPublicKeyRequest(input *UploadSSHPublicKeyInput) (req *request.Request, output *UploadSSHPublicKeyOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSSHPublicKey +func (c *IAM) UpdateSSHPublicKeyRequest(input *UpdateSSHPublicKeyInput) (req *request.Request, output *UpdateSSHPublicKeyOutput) { op := &request.Operation{ - Name: opUploadSSHPublicKey, + Name: opUpdateSSHPublicKey, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UploadSSHPublicKeyInput{} + input = &UpdateSSHPublicKeyInput{} } - output = &UploadSSHPublicKeyOutput{} + output = &UpdateSSHPublicKeyOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UploadSSHPublicKey API operation for AWS Identity and Access Management. +// UpdateSSHPublicKey API operation for AWS Identity and Access Management. // -// Uploads an SSH public key and associates it with the specified IAM user. +// Sets the status of an IAM user's SSH public key to active or inactive. SSH +// public keys that are inactive cannot be used for authentication. This operation +// can be used to disable a user's SSH public key as part of a key rotation +// work flow. // -// The SSH public key uploaded by this action can be used only for authenticating +// The SSH public key affected by this operation is used only for authenticating // the associated IAM user to an AWS CodeCommit repository. For more information // about using SSH keys to authenticate to an AWS CodeCommit repository, see // Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) @@ -13037,120 +13711,100 @@ func (c *IAM) UploadSSHPublicKeyRequest(input *UploadSSHPublicKeyInput) (req *re // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UploadSSHPublicKey for usage and error information. +// API operation UpdateSSHPublicKey for usage and error information. // // Returned Error Codes: -// * ErrCodeLimitExceededException "LimitExceeded" -// The request was rejected because it attempted to create resources beyond -// the current AWS account limits. The error message describes the limit exceeded. -// // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // -// * ErrCodeInvalidPublicKeyException "InvalidPublicKey" -// The request was rejected because the public key is malformed or otherwise -// invalid. -// -// * ErrCodeDuplicateSSHPublicKeyException "DuplicateSSHPublicKey" -// The request was rejected because the SSH public key is already associated -// with the specified IAM user. -// -// * ErrCodeUnrecognizedPublicKeyEncodingException "UnrecognizedPublicKeyEncoding" -// The request was rejected because the public key encoding format is unsupported -// or unrecognized. 
-// -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSSHPublicKey -func (c *IAM) UploadSSHPublicKey(input *UploadSSHPublicKeyInput) (*UploadSSHPublicKeyOutput, error) { - req, out := c.UploadSSHPublicKeyRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSSHPublicKey +func (c *IAM) UpdateSSHPublicKey(input *UpdateSSHPublicKeyInput) (*UpdateSSHPublicKeyOutput, error) { + req, out := c.UpdateSSHPublicKeyRequest(input) return out, req.Send() } -// UploadSSHPublicKeyWithContext is the same as UploadSSHPublicKey with the addition of +// UpdateSSHPublicKeyWithContext is the same as UpdateSSHPublicKey with the addition of // the ability to pass a context and additional request options. // -// See UploadSSHPublicKey for details on how to use this API operation. +// See UpdateSSHPublicKey for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UploadSSHPublicKeyWithContext(ctx aws.Context, input *UploadSSHPublicKeyInput, opts ...request.Option) (*UploadSSHPublicKeyOutput, error) { - req, out := c.UploadSSHPublicKeyRequest(input) +func (c *IAM) UpdateSSHPublicKeyWithContext(ctx aws.Context, input *UpdateSSHPublicKeyInput, opts ...request.Option) (*UpdateSSHPublicKeyOutput, error) { + req, out := c.UpdateSSHPublicKeyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUploadServerCertificate = "UploadServerCertificate" +const opUpdateServerCertificate = "UpdateServerCertificate" -// UploadServerCertificateRequest generates a "aws/request.Request" representing the -// client's request for the UploadServerCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateServerCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UpdateServerCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UploadServerCertificate for more information on using the UploadServerCertificate +// See UpdateServerCertificate for more information on using the UpdateServerCertificate // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UploadServerCertificateRequest method. -// req, resp := client.UploadServerCertificateRequest(params) +// // Example sending a request using the UpdateServerCertificateRequest method. 
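A sketch of the key-rotation flow mentioned for `UpdateSSHPublicKey`: deactivating a user's CodeCommit SSH key without deleting it. The user name and key ID are placeholders; `iam.StatusTypeInactive` is the SDK constant for the Inactive status.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
)

// deactivateSSHKey marks an SSH public key inactive so it can no longer be
// used to authenticate to an AWS CodeCommit repository.
func deactivateSSHKey(svc *iam.IAM) error {
	_, err := svc.UpdateSSHPublicKey(&iam.UpdateSSHPublicKeyInput{
		UserName:       aws.String("example-user"),      // placeholder
		SSHPublicKeyId: aws.String("APKAEXAMPLEEXAMPLE"), // placeholder key ID
		Status:         aws.String(iam.StatusTypeInactive),
	})
	return err
}
```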
+// req, resp := client.UpdateServerCertificateRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadServerCertificate -func (c *IAM) UploadServerCertificateRequest(input *UploadServerCertificateInput) (req *request.Request, output *UploadServerCertificateOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServerCertificate +func (c *IAM) UpdateServerCertificateRequest(input *UpdateServerCertificateInput) (req *request.Request, output *UpdateServerCertificateOutput) { op := &request.Operation{ - Name: opUploadServerCertificate, + Name: opUpdateServerCertificate, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UploadServerCertificateInput{} + input = &UpdateServerCertificateInput{} } - output = &UploadServerCertificateOutput{} + output = &UpdateServerCertificateOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UploadServerCertificate API operation for AWS Identity and Access Management. -// -// Uploads a server certificate entity for the AWS account. The server certificate -// entity includes a public key certificate, a private key, and an optional -// certificate chain, which should all be PEM-encoded. +// UpdateServerCertificate API operation for AWS Identity and Access Management. // -// We recommend that you use AWS Certificate Manager (https://aws.amazon.com/certificate-manager/) -// to provision, manage, and deploy your server certificates. With ACM you can -// request a certificate, deploy it to AWS resources, and let ACM handle certificate -// renewals for you. Certificates provided by ACM are free. For more information -// about using ACM, see the AWS Certificate Manager User Guide (http://docs.aws.amazon.com/acm/latest/userguide/). +// Updates the name and/or the path of the specified server certificate stored +// in IAM. // -// For more information about working with server certificates, including a -// list of AWS services that can use the server certificates that you manage -// with IAM, go to Working with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) -// in the IAM User Guide. +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic also includes a list of AWS services that +// can use the server certificates that you manage with IAM. // -// For information about the number of server certificates you can upload, see -// Limitations on IAM Entities and Objects (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html) +// You should understand the implications of changing a server certificate's +// path or name. For more information, see Renaming a Server Certificate (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs_manage.html#RenamingServerCerts) // in the IAM User Guide. // -// Because the body of the public key certificate, private key, and the certificate -// chain can be large, you should use POST rather than GET when calling UploadServerCertificate. 
-// For information about setting up signatures and authorization through the -// API, go to Signing AWS API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) -// in the AWS General Reference. For general information about using the Query -// API with IAM, go to Calling the API by Making HTTP Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/programming.html) +// The person making the request (the principal), must have permission to change +// the server certificate with the old name and the new name. For example, to +// change the certificate named ProductionCert to ProdCert, the principal must +// have a policy that allows them to update both certificates. If the principal +// has permission to update the ProductionCert group, but not the ProdCert certificate, +// then the update fails. For more information about permissions, see Access +// Management (http://docs.aws.amazon.com/IAM/latest/UserGuide/access.html) // in the IAM User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -13158,219 +13812,738 @@ func (c *IAM) UploadServerCertificateRequest(input *UploadServerCertificateInput // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UploadServerCertificate for usage and error information. +// API operation UpdateServerCertificate for usage and error information. // // Returned Error Codes: -// * ErrCodeLimitExceededException "LimitExceeded" -// The request was rejected because it attempted to create resources beyond -// the current AWS account limits. The error message describes the limit exceeded. -// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// // * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" // The request was rejected because it attempted to create a resource that already // exists. // -// * ErrCodeMalformedCertificateException "MalformedCertificate" -// The request was rejected because the certificate was malformed or expired. -// The error message describes the specific error. -// -// * ErrCodeKeyPairMismatchException "KeyPairMismatch" -// The request was rejected because the public key certificate and the private -// key do not match. +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. // // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadServerCertificate -func (c *IAM) UploadServerCertificate(input *UploadServerCertificateInput) (*UploadServerCertificateOutput, error) { - req, out := c.UploadServerCertificateRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServerCertificate +func (c *IAM) UpdateServerCertificate(input *UpdateServerCertificateInput) (*UpdateServerCertificateOutput, error) { + req, out := c.UpdateServerCertificateRequest(input) return out, req.Send() } -// UploadServerCertificateWithContext is the same as UploadServerCertificate with the addition of +// UpdateServerCertificateWithContext is the same as UpdateServerCertificate with the addition of // the ability to pass a context and additional request options. // -// See UploadServerCertificate for details on how to use this API operation. +// See UpdateServerCertificate for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UploadServerCertificateWithContext(ctx aws.Context, input *UploadServerCertificateInput, opts ...request.Option) (*UploadServerCertificateOutput, error) { - req, out := c.UploadServerCertificateRequest(input) +func (c *IAM) UpdateServerCertificateWithContext(ctx aws.Context, input *UpdateServerCertificateInput, opts ...request.Option) (*UpdateServerCertificateOutput, error) { + req, out := c.UpdateServerCertificateRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUploadSigningCertificate = "UploadSigningCertificate" +const opUpdateServiceSpecificCredential = "UpdateServiceSpecificCredential" -// UploadSigningCertificateRequest generates a "aws/request.Request" representing the -// client's request for the UploadSigningCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateServiceSpecificCredentialRequest generates a "aws/request.Request" representing the +// client's request for the UpdateServiceSpecificCredential operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UploadSigningCertificate for more information on using the UploadSigningCertificate +// See UpdateServiceSpecificCredential for more information on using the UpdateServiceSpecificCredential // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UploadSigningCertificateRequest method. -// req, resp := client.UploadSigningCertificateRequest(params) +// // Example sending a request using the UpdateServiceSpecificCredentialRequest method. 
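A sketch of the ProductionCert to ProdCert rename used as the example in the `UpdateServerCertificate` doc comment above, again assuming a pre-built client and placeholder names.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iam"
)

// renameServerCertificate renames an IAM-managed server certificate; the
// caller needs permission on both the old and the new name.
func renameServerCertificate(svc *iam.IAM) error {
	_, err := svc.UpdateServerCertificate(&iam.UpdateServerCertificateInput{
		ServerCertificateName:    aws.String("ProductionCert"), // existing name from the doc comment
		NewServerCertificateName: aws.String("ProdCert"),
	})
	return err
}
```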
+// req, resp := client.UpdateServiceSpecificCredentialRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSigningCertificate -func (c *IAM) UploadSigningCertificateRequest(input *UploadSigningCertificateInput) (req *request.Request, output *UploadSigningCertificateOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServiceSpecificCredential +func (c *IAM) UpdateServiceSpecificCredentialRequest(input *UpdateServiceSpecificCredentialInput) (req *request.Request, output *UpdateServiceSpecificCredentialOutput) { op := &request.Operation{ - Name: opUploadSigningCertificate, + Name: opUpdateServiceSpecificCredential, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UploadSigningCertificateInput{} + input = &UpdateServiceSpecificCredentialInput{} } - output = &UploadSigningCertificateOutput{} + output = &UpdateServiceSpecificCredentialOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UploadSigningCertificate API operation for AWS Identity and Access Management. -// -// Uploads an X.509 signing certificate and associates it with the specified -// IAM user. Some AWS services use X.509 signing certificates to validate requests -// that are signed with a corresponding private key. When you upload the certificate, -// its default status is Active. -// -// If the UserName field is not specified, the IAM user name is determined implicitly -// based on the AWS access key ID used to sign the request. Because this action -// works for access keys under the AWS account, you can use this action to manage -// root credentials even if the AWS account has no associated users. +// UpdateServiceSpecificCredential API operation for AWS Identity and Access Management. // -// Because the body of a X.509 certificate can be large, you should use POST -// rather than GET when calling UploadSigningCertificate. For information about -// setting up signatures and authorization through the API, go to Signing AWS -// API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) -// in the AWS General Reference. For general information about using the Query -// API with IAM, go to Making Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) -// in the IAM User Guide. +// Sets the status of a service-specific credential to Active or Inactive. Service-specific +// credentials that are inactive cannot be used for authentication to the service. +// This operation can be used to disable a user's service-specific credential +// as part of a credential rotation work flow. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Identity and Access Management's -// API operation UploadSigningCertificate for usage and error information. +// API operation UpdateServiceSpecificCredential for usage and error information. // // Returned Error Codes: -// * ErrCodeLimitExceededException "LimitExceeded" -// The request was rejected because it attempted to create resources beyond -// the current AWS account limits. The error message describes the limit exceeded. 
+// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. // -// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" -// The request was rejected because it attempted to create a resource that already -// exists. +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateServiceSpecificCredential +func (c *IAM) UpdateServiceSpecificCredential(input *UpdateServiceSpecificCredentialInput) (*UpdateServiceSpecificCredentialOutput, error) { + req, out := c.UpdateServiceSpecificCredentialRequest(input) + return out, req.Send() +} + +// UpdateServiceSpecificCredentialWithContext is the same as UpdateServiceSpecificCredential with the addition of +// the ability to pass a context and additional request options. // -// * ErrCodeMalformedCertificateException "MalformedCertificate" -// The request was rejected because the certificate was malformed or expired. -// The error message describes the specific error. +// See UpdateServiceSpecificCredential for details on how to use this API operation. // -// * ErrCodeInvalidCertificateException "InvalidCertificate" -// The request was rejected because the certificate is invalid. +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateServiceSpecificCredentialWithContext(ctx aws.Context, input *UpdateServiceSpecificCredentialInput, opts ...request.Option) (*UpdateServiceSpecificCredentialOutput, error) { + req, out := c.UpdateServiceSpecificCredentialRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSigningCertificate = "UpdateSigningCertificate" + +// UpdateSigningCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSigningCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // -// * ErrCodeDuplicateCertificateException "DuplicateCertificate" -// The request was rejected because the same certificate is associated with -// an IAM user in the account. +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSigningCertificate for more information on using the UpdateSigningCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSigningCertificateRequest method. 
+// req, resp := client.UpdateSigningCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSigningCertificate +func (c *IAM) UpdateSigningCertificateRequest(input *UpdateSigningCertificateInput) (req *request.Request, output *UpdateSigningCertificateOutput) { + op := &request.Operation{ + Name: opUpdateSigningCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSigningCertificateInput{} + } + + output = &UpdateSigningCertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateSigningCertificate API operation for AWS Identity and Access Management. +// +// Changes the status of the specified user signing certificate from active +// to disabled, or vice versa. This operation can be used to disable an IAM +// user's signing certificate as part of a certificate rotation work flow. +// +// If the UserName field is not specified, the user name is determined implicitly +// based on the AWS access key ID used to sign the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateSigningCertificate for usage and error information. // +// Returned Error Codes: // * ErrCodeNoSuchEntityException "NoSuchEntity" // The request was rejected because it referenced an entity that does not exist. // The error message describes the entity. // +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// // * ErrCodeServiceFailureException "ServiceFailure" // The request processing has failed because of an unknown error, exception // or failure. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSigningCertificate -func (c *IAM) UploadSigningCertificate(input *UploadSigningCertificateInput) (*UploadSigningCertificateOutput, error) { - req, out := c.UploadSigningCertificateRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateSigningCertificate +func (c *IAM) UpdateSigningCertificate(input *UpdateSigningCertificateInput) (*UpdateSigningCertificateOutput, error) { + req, out := c.UpdateSigningCertificateRequest(input) return out, req.Send() } -// UploadSigningCertificateWithContext is the same as UploadSigningCertificate with the addition of +// UpdateSigningCertificateWithContext is the same as UpdateSigningCertificate with the addition of // the ability to pass a context and additional request options. // -// See UploadSigningCertificate for details on how to use this API operation. +// See UpdateSigningCertificate for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IAM) UploadSigningCertificateWithContext(ctx aws.Context, input *UploadSigningCertificateInput, opts ...request.Option) (*UploadSigningCertificateOutput, error) { - req, out := c.UploadSigningCertificateRequest(input) +func (c *IAM) UpdateSigningCertificateWithContext(ctx aws.Context, input *UpdateSigningCertificateInput, opts ...request.Option) (*UpdateSigningCertificateOutput, error) { + req, out := c.UpdateSigningCertificateRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// Contains information about an AWS access key. +const opUpdateUser = "UpdateUser" + +// UpdateUserRequest generates a "aws/request.Request" representing the +// client's request for the UpdateUser operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // -// This data type is used as a response element in the CreateAccessKey and ListAccessKeys -// actions. +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. // -// The SecretAccessKey value is returned only in response to CreateAccessKey. -// You can get a secret access key only when you first create an access key; -// you cannot recover the secret access key later. If you lose a secret access -// key, you must create a new access key. -type AccessKey struct { - _ struct{} `type:"structure"` - - // The ID for this access key. - // - // AccessKeyId is a required field - AccessKeyId *string `min:"16" type:"string" required:"true"` - - // The date when the access key was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` - - // The secret key used to sign requests. - // - // SecretAccessKey is a required field - SecretAccessKey *string `type:"string" required:"true"` - - // The status of the access key. Active means the key is valid for API calls, - // while Inactive means it is not. - // - // Status is a required field - Status *string `type:"string" required:"true" enum:"statusType"` - - // The name of the IAM user that the access key is associated with. - // - // UserName is a required field - UserName *string `min:"1" type:"string" required:"true"` -} +// See UpdateUser for more information on using the UpdateUser +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateUserRequest method. 
+// req, resp := client.UpdateUserRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateUser +func (c *IAM) UpdateUserRequest(input *UpdateUserInput) (req *request.Request, output *UpdateUserOutput) { + op := &request.Operation{ + Name: opUpdateUser, + HTTPMethod: "POST", + HTTPPath: "/", + } -// String returns the string representation -func (s AccessKey) String() string { - return awsutil.Prettify(s) -} + if input == nil { + input = &UpdateUserInput{} + } -// GoString returns the string representation -func (s AccessKey) GoString() string { - return s.String() + output = &UpdateUserOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return } -// SetAccessKeyId sets the AccessKeyId field's value. -func (s *AccessKey) SetAccessKeyId(v string) *AccessKey { +// UpdateUser API operation for AWS Identity and Access Management. +// +// Updates the name and/or the path of the specified IAM user. +// +// You should understand the implications of changing an IAM user's path or +// name. For more information, see Renaming an IAM User (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html#id_users_renaming) +// and Renaming an IAM Group (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_rename.html) +// in the IAM User Guide. +// +// To change a user name, the requester must have appropriate permissions on +// both the source object and the target object. For example, to change Bob +// to Robert, the entity making the request must have permission on Bob and +// Robert, or must have permission on all (*). For more information about permissions, +// see Permissions and Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PermissionsAndPolicies.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UpdateUser for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeEntityTemporarilyUnmodifiableException "EntityTemporarilyUnmodifiable" +// The request was rejected because it referenced an entity that is temporarily +// unmodifiable, such as a user name that was deleted and then recreated. The +// error indicates that the request is likely to succeed if you try again after +// waiting several minutes. The error message describes the entity. +// +// * ErrCodeConcurrentModificationException "ConcurrentModification" +// The request was rejected because multiple requests to change this object +// were submitted simultaneously. Wait a few minutes and submit your request +// again. 
+// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UpdateUser +func (c *IAM) UpdateUser(input *UpdateUserInput) (*UpdateUserOutput, error) { + req, out := c.UpdateUserRequest(input) + return out, req.Send() +} + +// UpdateUserWithContext is the same as UpdateUser with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateUser for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UpdateUserWithContext(ctx aws.Context, input *UpdateUserInput, opts ...request.Option) (*UpdateUserOutput, error) { + req, out := c.UpdateUserRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadSSHPublicKey = "UploadSSHPublicKey" + +// UploadSSHPublicKeyRequest generates a "aws/request.Request" representing the +// client's request for the UploadSSHPublicKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadSSHPublicKey for more information on using the UploadSSHPublicKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadSSHPublicKeyRequest method. +// req, resp := client.UploadSSHPublicKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSSHPublicKey +func (c *IAM) UploadSSHPublicKeyRequest(input *UploadSSHPublicKeyInput) (req *request.Request, output *UploadSSHPublicKeyOutput) { + op := &request.Operation{ + Name: opUploadSSHPublicKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UploadSSHPublicKeyInput{} + } + + output = &UploadSSHPublicKeyOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadSSHPublicKey API operation for AWS Identity and Access Management. +// +// Uploads an SSH public key and associates it with the specified IAM user. +// +// The SSH public key uploaded by this operation can be used only for authenticating +// the associated IAM user to an AWS CodeCommit repository. For more information +// about using SSH keys to authenticate to an AWS CodeCommit repository, see +// Set up AWS CodeCommit for SSH Connections (http://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-credentials-ssh.html) +// in the AWS CodeCommit User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UploadSSHPublicKey for usage and error information. 
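// A short usage sketch for the operation above, assuming the
// UploadSSHPublicKeyInput fields UserName and SSHPublicKeyBody (which mirror
// the IAM API parameter names); the user name and key material are placeholders:
//
//    svc := iam.New(session.Must(session.NewSession()))
//    out, err := svc.UploadSSHPublicKey(&iam.UploadSSHPublicKeyInput{
//        UserName:         aws.String("codecommit-user"),
//        SSHPublicKeyBody: aws.String("ssh-rsa AAAA... user@example.com"),
//    })
//    if err == nil { // out is now filled
//        fmt.Println(out)
//    }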
+// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeInvalidPublicKeyException "InvalidPublicKey" +// The request was rejected because the public key is malformed or otherwise +// invalid. +// +// * ErrCodeDuplicateSSHPublicKeyException "DuplicateSSHPublicKey" +// The request was rejected because the SSH public key is already associated +// with the specified IAM user. +// +// * ErrCodeUnrecognizedPublicKeyEncodingException "UnrecognizedPublicKeyEncoding" +// The request was rejected because the public key encoding format is unsupported +// or unrecognized. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSSHPublicKey +func (c *IAM) UploadSSHPublicKey(input *UploadSSHPublicKeyInput) (*UploadSSHPublicKeyOutput, error) { + req, out := c.UploadSSHPublicKeyRequest(input) + return out, req.Send() +} + +// UploadSSHPublicKeyWithContext is the same as UploadSSHPublicKey with the addition of +// the ability to pass a context and additional request options. +// +// See UploadSSHPublicKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UploadSSHPublicKeyWithContext(ctx aws.Context, input *UploadSSHPublicKeyInput, opts ...request.Option) (*UploadSSHPublicKeyOutput, error) { + req, out := c.UploadSSHPublicKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadServerCertificate = "UploadServerCertificate" + +// UploadServerCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UploadServerCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadServerCertificate for more information on using the UploadServerCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadServerCertificateRequest method. 
+// req, resp := client.UploadServerCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadServerCertificate +func (c *IAM) UploadServerCertificateRequest(input *UploadServerCertificateInput) (req *request.Request, output *UploadServerCertificateOutput) { + op := &request.Operation{ + Name: opUploadServerCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UploadServerCertificateInput{} + } + + output = &UploadServerCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadServerCertificate API operation for AWS Identity and Access Management. +// +// Uploads a server certificate entity for the AWS account. The server certificate +// entity includes a public key certificate, a private key, and an optional +// certificate chain, which should all be PEM-encoded. +// +// We recommend that you use AWS Certificate Manager (https://aws.amazon.com/certificate-manager/) +// to provision, manage, and deploy your server certificates. With ACM you can +// request a certificate, deploy it to AWS resources, and let ACM handle certificate +// renewals for you. Certificates provided by ACM are free. For more information +// about using ACM, see the AWS Certificate Manager User Guide (http://docs.aws.amazon.com/acm/latest/userguide/). +// +// For more information about working with server certificates, see Working +// with Server Certificates (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html) +// in the IAM User Guide. This topic includes a list of AWS services that can +// use the server certificates that you manage with IAM. +// +// For information about the number of server certificates you can upload, see +// Limitations on IAM Entities and Objects (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html) +// in the IAM User Guide. +// +// Because the body of the public key certificate, private key, and the certificate +// chain can be large, you should use POST rather than GET when calling UploadServerCertificate. +// For information about setting up signatures and authorization through the +// API, go to Signing AWS API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) +// in the AWS General Reference. For general information about using the Query +// API with IAM, go to Calling the API by Making HTTP Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/programming.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UploadServerCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeMalformedCertificateException "MalformedCertificate" +// The request was rejected because the certificate was malformed or expired. 
+// The error message describes the specific error. +// +// * ErrCodeKeyPairMismatchException "KeyPairMismatch" +// The request was rejected because the public key certificate and the private +// key do not match. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadServerCertificate +func (c *IAM) UploadServerCertificate(input *UploadServerCertificateInput) (*UploadServerCertificateOutput, error) { + req, out := c.UploadServerCertificateRequest(input) + return out, req.Send() +} + +// UploadServerCertificateWithContext is the same as UploadServerCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See UploadServerCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IAM) UploadServerCertificateWithContext(ctx aws.Context, input *UploadServerCertificateInput, opts ...request.Option) (*UploadServerCertificateOutput, error) { + req, out := c.UploadServerCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadSigningCertificate = "UploadSigningCertificate" + +// UploadSigningCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UploadSigningCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadSigningCertificate for more information on using the UploadSigningCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadSigningCertificateRequest method. +// req, resp := client.UploadSigningCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSigningCertificate +func (c *IAM) UploadSigningCertificateRequest(input *UploadSigningCertificateInput) (req *request.Request, output *UploadSigningCertificateOutput) { + op := &request.Operation{ + Name: opUploadSigningCertificate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UploadSigningCertificateInput{} + } + + output = &UploadSigningCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadSigningCertificate API operation for AWS Identity and Access Management. +// +// Uploads an X.509 signing certificate and associates it with the specified +// IAM user. Some AWS services use X.509 signing certificates to validate requests +// that are signed with a corresponding private key. When you upload the certificate, +// its default status is Active. 
+// +// If the UserName field is not specified, the IAM user name is determined implicitly +// based on the AWS access key ID used to sign the request. Because this operation +// works for access keys under the AWS account, you can use this operation to +// manage AWS account root user credentials even if the AWS account has no associated +// users. +// +// Because the body of an X.509 certificate can be large, you should use POST +// rather than GET when calling UploadSigningCertificate. For information about +// setting up signatures and authorization through the API, go to Signing AWS +// API Requests (http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) +// in the AWS General Reference. For general information about using the Query +// API with IAM, go to Making Query Requests (http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html) +// in the IAM User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Identity and Access Management's +// API operation UploadSigningCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceeded" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error message describes the limit exceeded. +// +// * ErrCodeEntityAlreadyExistsException "EntityAlreadyExists" +// The request was rejected because it attempted to create a resource that already +// exists. +// +// * ErrCodeMalformedCertificateException "MalformedCertificate" +// The request was rejected because the certificate was malformed or expired. +// The error message describes the specific error. +// +// * ErrCodeInvalidCertificateException "InvalidCertificate" +// The request was rejected because the certificate is invalid. +// +// * ErrCodeDuplicateCertificateException "DuplicateCertificate" +// The request was rejected because the same certificate is associated with +// an IAM user in the account. +// +// * ErrCodeNoSuchEntityException "NoSuchEntity" +// The request was rejected because it referenced an entity that does not exist. +// The error message describes the entity. +// +// * ErrCodeServiceFailureException "ServiceFailure" +// The request processing has failed because of an unknown error, exception +// or failure. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/iam-2010-05-08/UploadSigningCertificate +func (c *IAM) UploadSigningCertificate(input *UploadSigningCertificateInput) (*UploadSigningCertificateOutput, error) { + req, out := c.UploadSigningCertificateRequest(input) + return out, req.Send() +} + +// UploadSigningCertificateWithContext is the same as UploadSigningCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See UploadSigningCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
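// A sketch of the error-handling pattern referred to above ("use runtime type
// assertions with awserr.Error"), assuming the UploadSigningCertificateInput
// field CertificateBody (which mirrors the IAM API parameter name), a placeholder
// PEM body, and the aws, awserr, session, and iam packages from this SDK:
//
//    svc := iam.New(session.Must(session.NewSession()))
//    _, err := svc.UploadSigningCertificate(&iam.UploadSigningCertificateInput{
//        CertificateBody: aws.String("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"),
//    })
//    if aerr, ok := err.(awserr.Error); ok {
//        switch aerr.Code() {
//        case iam.ErrCodeDuplicateCertificateException:
//            // The same certificate is already associated with a user in this account.
//        default:
//            fmt.Println(aerr.Code(), aerr.Message())
//        }
//    }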
+func (c *IAM) UploadSigningCertificateWithContext(ctx aws.Context, input *UploadSigningCertificateInput, opts ...request.Option) (*UploadSigningCertificateOutput, error) { + req, out := c.UploadSigningCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Contains information about an AWS access key. +// +// This data type is used as a response element in the CreateAccessKey and ListAccessKeys +// operations. +// +// The SecretAccessKey value is returned only in response to CreateAccessKey. +// You can get a secret access key only when you first create an access key; +// you cannot recover the secret access key later. If you lose a secret access +// key, you must create a new access key. +type AccessKey struct { + _ struct{} `type:"structure"` + + // The ID for this access key. + // + // AccessKeyId is a required field + AccessKeyId *string `min:"16" type:"string" required:"true"` + + // The date when the access key was created. + CreateDate *time.Time `type:"timestamp"` + + // The secret key used to sign requests. + // + // SecretAccessKey is a required field + SecretAccessKey *string `type:"string" required:"true"` + + // The status of the access key. Active means that the key is valid for API + // calls, while Inactive means it is not. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The name of the IAM user that the access key is associated with. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AccessKey) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccessKey) GoString() string { + return s.String() +} + +// SetAccessKeyId sets the AccessKeyId field's value. +func (s *AccessKey) SetAccessKeyId(v string) *AccessKey { s.AccessKeyId = &v return s } @@ -13402,12 +14575,13 @@ func (s *AccessKey) SetUserName(v string) *AccessKey { // Contains information about the last time an AWS access key was used. // // This data type is used as a response element in the GetAccessKeyLastUsed -// action. +// operation. type AccessKeyLastUsed struct { _ struct{} `type:"structure"` // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), - // when the access key was most recently used. This field is null when: + // when the access key was most recently used. This field is null in the following + // situations: // // * The user does not have an access key. // @@ -13417,10 +14591,10 @@ type AccessKeyLastUsed struct { // * There is no sign-in data associated with the user // // LastUsedDate is a required field - LastUsedDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + LastUsedDate *time.Time `type:"timestamp" required:"true"` // The AWS region where this access key was most recently used. This field is - // displays "N/A" when: + // displays "N/A" in the following situations: // // * The user does not have an access key. // @@ -13436,7 +14610,7 @@ type AccessKeyLastUsed struct { Region *string `type:"string" required:"true"` // The name of the AWS service with which this access key was most recently - // used. This field displays "N/A" when: + // used. This field displays "N/A" in the following situations: // // * The user does not have an access key. 
// @@ -13479,7 +14653,7 @@ func (s *AccessKeyLastUsed) SetServiceName(v string) *AccessKeyLastUsed { // Contains information about an AWS access key, without its secret key. // -// This data type is used as a response element in the ListAccessKeys action. +// This data type is used as a response element in the ListAccessKeys operation. type AccessKeyMetadata struct { _ struct{} `type:"structure"` @@ -13487,7 +14661,7 @@ type AccessKeyMetadata struct { AccessKeyId *string `min:"16" type:"string"` // The date when the access key was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // The status of the access key. Active means the key is valid for API calls; // Inactive means it is not. @@ -13542,7 +14716,7 @@ type AddClientIDToOpenIDConnectProviderInput struct { // The Amazon Resource Name (ARN) of the IAM OpenID Connect (OIDC) provider // resource to add the client ID to. You can get a list of OIDC provider ARNs - // by using the ListOpenIDConnectProviders action. + // by using the ListOpenIDConnectProviders operation. // // OpenIDConnectProviderArn is a required field OpenIDConnectProviderArn *string `min:"20" type:"string" required:"true"` @@ -13613,7 +14787,7 @@ type AddRoleToInstanceProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // InstanceProfileName is a required field InstanceProfileName *string `min:"1" type:"string" required:"true"` @@ -13693,7 +14867,7 @@ type AddUserToGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -13702,7 +14876,7 @@ type AddUserToGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -13773,7 +14947,7 @@ type AttachGroupPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -13942,7 +15116,7 @@ type AttachUserPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. 
You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -14006,12 +15180,55 @@ func (s AttachUserPolicyOutput) GoString() string { return s.String() } +// Contains information about an attached permissions boundary. +// +// An attached permissions boundary is a managed policy that has been attached +// to a user or role to set the permissions boundary. +// +// For more information about permissions boundaries, see Permissions Boundaries +// for IAM Identities (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) +// in the IAM User Guide. +type AttachedPermissionsBoundary struct { + _ struct{} `type:"structure"` + + // The ARN of the policy used to set the permissions boundary for the user or + // role. + PermissionsBoundaryArn *string `min:"20" type:"string"` + + // The permissions boundary usage type that indicates what type of IAM resource + // is used as the permissions boundary for an entity. This data type can only + // have a value of Policy. + PermissionsBoundaryType *string `type:"string" enum:"PermissionsBoundaryAttachmentType"` +} + +// String returns the string representation +func (s AttachedPermissionsBoundary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachedPermissionsBoundary) GoString() string { + return s.String() +} + +// SetPermissionsBoundaryArn sets the PermissionsBoundaryArn field's value. +func (s *AttachedPermissionsBoundary) SetPermissionsBoundaryArn(v string) *AttachedPermissionsBoundary { + s.PermissionsBoundaryArn = &v + return s +} + +// SetPermissionsBoundaryType sets the PermissionsBoundaryType field's value. +func (s *AttachedPermissionsBoundary) SetPermissionsBoundaryType(v string) *AttachedPermissionsBoundary { + s.PermissionsBoundaryType = &v + return s +} + // Contains information about an attached policy. // // An attached policy is a managed policy that has been attached to a user, // group, or role. This data type is used as a response element in the ListAttachedGroupPolicies, // ListAttachedRolePolicies, ListAttachedUserPolicies, and GetAccountAuthorizationDetails -// actions. +// operations. // // For more information about managed policies, refer to Managed Policies and // Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) @@ -14058,14 +15275,14 @@ type ChangePasswordInput struct { // The new password. The new password must conform to the AWS account's password // policy, if one exists. // - // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of almost any printable ASCII - // character from the space (\u0020) through the end of the ASCII character - // range (\u00FF). You can also include the tab (\u0009), line feed (\u000A), - // and carriage return (\u000D) characters. Although any of these characters - // are valid in a password, note that many tools, such as the AWS Management - // Console, might restrict the ability to enter certain characters because they - // have special meaning within that tool. + // The regex pattern (http://wikipedia.org/wiki/regex) that is used to validate + // this parameter is a string of characters. That string can include almost + // any printable ASCII character from the space (\u0020) through the end of + // the ASCII character range (\u00FF). 
You can also include the tab (\u0009), + // line feed (\u000A), and carriage return (\u000D) characters. Any of these + // characters are valid in a password. However, many tools, such as the AWS + // Management Console, might restrict the ability to type certain characters + // because they have special meaning within that tool. // // NewPassword is a required field NewPassword *string `min:"1" type:"string" required:"true"` @@ -14153,8 +15370,8 @@ type ContextEntry struct { ContextKeyType *string `type:"string" enum:"ContextKeyTypeEnum"` // The value (or values, if the condition context key supports multiple values) - // to provide to the simulation for use when the key is referenced by a Condition - // element in an input policy. + // to provide to the simulation when the key is referenced by a Condition element + // in an input policy. ContextKeyValues []*string `type:"list"` } @@ -14206,7 +15423,7 @@ type CreateAccessKeyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -14332,7 +15549,7 @@ type CreateGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-. + // with no spaces. You can also include any of the following characters: _+=,.@-. // The group name must be unique within the account. Group names are not distinguished // by case. For example, you cannot create groups named both "ADMINS" and "admins". // @@ -14346,11 +15563,12 @@ type CreateGroupInput struct { // This parameter is optional. If it is not included, it defaults to a slash // (/). // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. Path *string `min:"1" type:"string"` } @@ -14428,7 +15646,7 @@ type CreateInstanceProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // InstanceProfileName is a required field InstanceProfileName *string `min:"1" type:"string" required:"true"` @@ -14440,11 +15658,12 @@ type CreateInstanceProfileInput struct { // This parameter is optional. If it is not included, it defaults to a slash // (/). 
// - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. Path *string `min:"1" type:"string"` } @@ -14520,14 +15739,14 @@ type CreateLoginProfileInput struct { // The new password for the user. // - // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of almost any printable ASCII - // character from the space (\u0020) through the end of the ASCII character - // range (\u00FF). You can also include the tab (\u0009), line feed (\u000A), - // and carriage return (\u000D) characters. Although any of these characters - // are valid in a password, note that many tools, such as the AWS Management - // Console, might restrict the ability to enter certain characters because they - // have special meaning within that tool. + // The regex pattern (http://wikipedia.org/wiki/regex) that is used to validate + // this parameter is a string of characters. That string can include almost + // any printable ASCII character from the space (\u0020) through the end of + // the ASCII character range (\u00FF). You can also include the tab (\u0009), + // line feed (\u000A), and carriage return (\u000D) characters. Any of these + // characters are valid in a password. However, many tools, such as the AWS + // Management Console, might restrict the ability to type certain characters + // because they have special meaning within that tool. // // Password is a required field Password *string `min:"1" type:"string" required:"true"` @@ -14540,7 +15759,7 @@ type CreateLoginProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -14635,11 +15854,11 @@ type CreateOpenIDConnectProviderInput struct { // cannot register more than 100 client IDs with a single IAM OIDC provider. // // There is no defined format for a client ID. The CreateOpenIDConnectProviderRequest - // action accepts client IDs up to 255 characters long. + // operation accepts client IDs up to 255 characters long. ClientIDList []*string `type:"list"` // A list of server certificate thumbprints for the OpenID Connect (OIDC) identity - // provider's server certificate(s). Typically this list includes only one entry. + // provider's server certificates. Typically this list includes only one entry. // However, IAM lets you have up to five thumbprints for an OIDC provider. This // lets you maintain multiple thumbprints if the identity provider is rotating // certificates. 
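// A sketch of creating an OIDC provider with the values described in this block,
// assuming the CreateOpenIDConnectProviderInput fields Url, ClientIDList, and
// ThumbprintList (which mirror the IAM API parameter names) and the aws, session,
// and iam packages from this SDK; the URL, client ID, and 40-character thumbprint
// are placeholders:
//
//    svc := iam.New(session.Must(session.NewSession()))
//    out, err := svc.CreateOpenIDConnectProvider(&iam.CreateOpenIDConnectProviderInput{
//        Url:            aws.String("https://server.example.org"),
//        ClientIDList:   []*string{aws.String("my-application-id")},
//        ThumbprintList: []*string{aws.String("0123456789abcdef0123456789abcdef01234567")},
//    })
//    if err == nil { // out is now filled
//        fmt.Println(out)
//    }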
@@ -14649,10 +15868,10 @@ type CreateOpenIDConnectProviderInput struct { // makes its keys available. It is always a 40-character string. // // You must provide at least one thumbprint when creating an IAM OIDC provider. - // For example, if the OIDC provider is server.example.com and the provider - // stores its keys at "https://keys.server.example.com/openid-connect", the - // thumbprint string would be the hex-encoded SHA-1 hash value of the certificate - // used by https://keys.server.example.com. + // For example, assume that the OIDC provider is server.example.com and the + // provider stores its keys at https://keys.server.example.com/openid-connect. + // In that case, the thumbprint string would be the hex-encoded SHA-1 hash value + // of the certificate used by https://keys.server.example.com. // // For more information about obtaining the OIDC provider's thumbprint, see // Obtaining the Thumbprint for an OpenID Connect Provider (http://docs.aws.amazon.com/IAM/latest/UserGuide/identity-providers-oidc-obtain-thumbprint.html) @@ -14661,11 +15880,11 @@ type CreateOpenIDConnectProviderInput struct { // ThumbprintList is a required field ThumbprintList []*string `type:"list" required:"true"` - // The URL of the identity provider. The URL must begin with "https://" and - // should correspond to the iss claim in the provider's OpenID Connect ID tokens. - // Per the OIDC standard, path components are allowed but query parameters are - // not. Typically the URL consists of only a host name, like "https://server.example.org" - // or "https://example.com". + // The URL of the identity provider. The URL must begin with https:// and should + // correspond to the iss claim in the provider's OpenID Connect ID tokens. Per + // the OIDC standard, path components are allowed but query parameters are not. + // Typically the URL consists of only a hostname, like https://server.example.org + // or https://example.com. // // You cannot register the same provider multiple times in a single AWS account. // If you try to submit a URL that has already been used for an OpenID Connect @@ -14767,22 +15986,28 @@ type CreatePolicyInput struct { // This parameter is optional. If it is not included, it defaults to a slash // (/). // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. Path *string `type:"string"` // The JSON policy document that you want to use as the content for the new // policy. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). 
It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // PolicyDocument is a required field PolicyDocument *string `min:"1" type:"string" required:"true"` @@ -14791,7 +16016,7 @@ type CreatePolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -14894,11 +16119,16 @@ type CreatePolicyVersionInput struct { // version of the policy. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // PolicyDocument is a required field PolicyDocument *string `min:"1" type:"string" required:"true"` @@ -14906,8 +16136,8 @@ type CreatePolicyVersionInput struct { // Specifies whether to set this version as the policy's default version. // // When this parameter is true, the new policy version becomes the operative - // version; that is, the version that is in effect for the IAM users, groups, - // and roles that the policy is attached to. + // version. That is, it becomes the version that is in effect for the IAM users, + // groups, and roles that the policy is attached to. // // For more information about managed policy versions, see Versioning for Managed // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) @@ -14996,18 +16226,39 @@ type CreateRoleInput struct { // assume the role. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). 
+ // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // AssumeRolePolicyDocument is a required field AssumeRolePolicyDocument *string `min:"1" type:"string" required:"true"` - // A customer-provided description of the role. + // A description of the role. Description *string `type:"string"` + // The maximum session duration (in seconds) that you want to set for the specified + // role. If you do not specify a value for this setting, the default maximum + // of one hour is applied. This setting can have a value from 1 hour to 12 hours. + // + // Anyone who assumes the role from the AWS CLI or API can use the DurationSeconds + // API parameter or the duration-seconds CLI parameter to request a longer session. + // The MaxSessionDuration setting determines the maximum duration that can be + // requested using the DurationSeconds parameter. If users don't specify a value + // for the DurationSeconds parameter, their security credentials are valid for + // one hour by default. This applies when you use the AssumeRole* API operations + // or the assume-role* CLI operations but does not apply when you use those + // operations to create a console URL. For more information, see Using IAM Roles + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) in the + // IAM User Guide. + MaxSessionDuration *int64 `min:"3600" type:"integer"` + // The path to the role. For more information about paths, see IAM Identifiers // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) // in the IAM User Guide. @@ -15015,13 +16266,18 @@ type CreateRoleInput struct { // This parameter is optional. If it is not included, it defaults to a slash // (/). // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. Path *string `min:"1" type:"string"` + // The ARN of the policy that is used to set the permissions boundary for the + // role. + PermissionsBoundary *string `min:"20" type:"string"` + // The name of the role to create. // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) @@ -15033,6 +16289,15 @@ type CreateRoleInput struct { // // RoleName is a required field RoleName *string `min:"1" type:"string" required:"true"` + + // A list of tags that you want to attach to the newly created role. Each tag + // consists of a key name and an associated value. 
For more information about + // tagging, see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) + // in the IAM User Guide. + // + // If any one of the tags is invalid or if you exceed the allowed number of + // tags per role, then the entire request fails and the role is not created. + Tags []*Tag `type:"list"` } // String returns the string representation @@ -15054,15 +16319,31 @@ func (s *CreateRoleInput) Validate() error { if s.AssumeRolePolicyDocument != nil && len(*s.AssumeRolePolicyDocument) < 1 { invalidParams.Add(request.NewErrParamMinLen("AssumeRolePolicyDocument", 1)) } + if s.MaxSessionDuration != nil && *s.MaxSessionDuration < 3600 { + invalidParams.Add(request.NewErrParamMinValue("MaxSessionDuration", 3600)) + } if s.Path != nil && len(*s.Path) < 1 { invalidParams.Add(request.NewErrParamMinLen("Path", 1)) } + if s.PermissionsBoundary != nil && len(*s.PermissionsBoundary) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PermissionsBoundary", 20)) + } if s.RoleName == nil { invalidParams.Add(request.NewErrParamRequired("RoleName")) } if s.RoleName != nil && len(*s.RoleName) < 1 { invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -15082,18 +16363,36 @@ func (s *CreateRoleInput) SetDescription(v string) *CreateRoleInput { return s } +// SetMaxSessionDuration sets the MaxSessionDuration field's value. +func (s *CreateRoleInput) SetMaxSessionDuration(v int64) *CreateRoleInput { + s.MaxSessionDuration = &v + return s +} + // SetPath sets the Path field's value. func (s *CreateRoleInput) SetPath(v string) *CreateRoleInput { s.Path = &v return s } +// SetPermissionsBoundary sets the PermissionsBoundary field's value. +func (s *CreateRoleInput) SetPermissionsBoundary(v string) *CreateRoleInput { + s.PermissionsBoundary = &v + return s +} + // SetRoleName sets the RoleName field's value. func (s *CreateRoleInput) SetRoleName(v string) *CreateRoleInput { s.RoleName = &v return s } +// SetTags sets the Tags field's value. +func (s *CreateRoleInput) SetTags(v []*Tag) *CreateRoleInput { + s.Tags = v + return s +} + // Contains the response to a successful CreateRole request. type CreateRoleOutput struct { _ struct{} `type:"structure"` @@ -15127,7 +16426,7 @@ type CreateSAMLProviderInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // Name is a required field Name *string `min:"1" type:"string" required:"true"` @@ -15216,17 +16515,26 @@ func (s *CreateSAMLProviderOutput) SetSAMLProviderArn(v string) *CreateSAMLProvi type CreateServiceLinkedRoleInput struct { _ struct{} `type:"structure"` - // The AWS service to which this role is attached. You use a string similar - // to a URL but without the http:// in front. For example: elasticbeanstalk.amazonaws.com + // The service principal for the AWS service to which this role is attached. + // You use a string similar to a URL but without the http:// in front. For example: + // elasticbeanstalk.amazonaws.com. 
+ // + // Service principals are unique and case-sensitive. To find the exact service + // principal for your service-linked role, see AWS Services That Work with IAM + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) + // in the IAM User Guide and look for the services that have Yes in the Service-Linked + // Role column. Choose the Yes link to view the service-linked role documentation + // for that service. // // AWSServiceName is a required field AWSServiceName *string `min:"1" type:"string" required:"true"` - // A string that you provide, which is combined with the service name to form - // the complete role name. If you make multiple requests for the same service, - // then you must supply a different CustomSuffix for each request. Otherwise - // the request fails with a duplicate role name error. For example, you could - // add -1 or -debug to the suffix. + // A string that you provide, which is combined with the service-provided prefix + // to form the complete role name. If you make multiple requests for the same + // service, then you must supply a different CustomSuffixfor each request. Otherwise the request fails with a duplicate role name + // error. For example, you could add -1or -debugto the suffix. + // + // Some services do not support the CustomSuffix CustomSuffix *string `min:"1" type:"string"` // The description of the role. @@ -15319,7 +16627,7 @@ type CreateServiceSpecificCredentialInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -15404,18 +16712,32 @@ type CreateUserInput struct { // This parameter is optional. If it is not included, it defaults to a slash // (/). // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. Path *string `min:"1" type:"string"` + // The ARN of the policy that is used to set the permissions boundary for the + // user. + PermissionsBoundary *string `min:"20" type:"string"` + + // A list of tags that you want to attach to the newly created user. Each tag + // consists of a key name and an associated value. For more information about + // tagging, see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) + // in the IAM User Guide. + // + // If any one of the tags is invalid or if you exceed the allowed number of + // tags per user, then the entire request fails and the user is not created. + Tags []*Tag `type:"list"` + // The name of the user to create. 
// // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-. + // with no spaces. You can also include any of the following characters: _+=,.@-. // User names are not distinguished by case. For example, you cannot create // users named both "TESTUSER" and "testuser". // @@ -15439,12 +16761,25 @@ func (s *CreateUserInput) Validate() error { if s.Path != nil && len(*s.Path) < 1 { invalidParams.Add(request.NewErrParamMinLen("Path", 1)) } + if s.PermissionsBoundary != nil && len(*s.PermissionsBoundary) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PermissionsBoundary", 20)) + } if s.UserName == nil { invalidParams.Add(request.NewErrParamRequired("UserName")) } if s.UserName != nil && len(*s.UserName) < 1 { invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -15458,6 +16793,18 @@ func (s *CreateUserInput) SetPath(v string) *CreateUserInput { return s } +// SetPermissionsBoundary sets the PermissionsBoundary field's value. +func (s *CreateUserInput) SetPermissionsBoundary(v string) *CreateUserInput { + s.PermissionsBoundary = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateUserInput) SetTags(v []*Tag) *CreateUserInput { + s.Tags = v + return s +} + // SetUserName sets the UserName field's value. func (s *CreateUserInput) SetUserName(v string) *CreateUserInput { s.UserName = &v @@ -15498,11 +16845,12 @@ type CreateVirtualMFADeviceInput struct { // This parameter is optional. If it is not included, it defaults to a slash // (/). // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. Path *string `min:"1" type:"string"` // The name of the virtual MFA device. Use with path to uniquely identify a @@ -15510,7 +16858,7 @@ type CreateVirtualMFADeviceInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. 
You can also include any of the following characters: _+=,.@- // // VirtualMFADeviceName is a required field VirtualMFADeviceName *string `min:"1" type:"string" required:"true"` @@ -15600,7 +16948,7 @@ type DeactivateMFADeviceInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -15681,7 +17029,7 @@ type DeleteAccessKeyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -15835,7 +17183,7 @@ type DeleteGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -15895,7 +17243,7 @@ type DeleteGroupPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -15904,7 +17252,7 @@ type DeleteGroupPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -15975,7 +17323,7 @@ type DeleteInstanceProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // InstanceProfileName is a required field InstanceProfileName *string `min:"1" type:"string" required:"true"` @@ -16034,7 +17382,7 @@ type DeleteLoginProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. 
You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -16091,7 +17439,7 @@ type DeleteOpenIDConnectProviderInput struct { // The Amazon Resource Name (ARN) of the IAM OpenID Connect provider resource // object to delete. You can get a list of OpenID Connect provider resource - // ARNs by using the ListOpenIDConnectProviders action. + // ARNs by using the ListOpenIDConnectProviders operation. // // OpenIDConnectProviderArn is a required field OpenIDConnectProviderArn *string `min:"20" type:"string" required:"true"` @@ -16309,8 +17657,64 @@ func (s DeleteRoleInput) GoString() string { } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteRoleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteRoleInput"} +func (s *DeleteRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRoleInput"} + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRoleName sets the RoleName field's value. +func (s *DeleteRoleInput) SetRoleName(v string) *DeleteRoleInput { + s.RoleName = &v + return s +} + +type DeleteRoleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRoleOutput) GoString() string { + return s.String() +} + +type DeleteRolePermissionsBoundaryInput struct { + _ struct{} `type:"structure"` + + // The name (friendly name, not ARN) of the IAM role from which you want to + // remove the permissions boundary. + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteRolePermissionsBoundaryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRolePermissionsBoundaryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteRolePermissionsBoundaryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRolePermissionsBoundaryInput"} if s.RoleName == nil { invalidParams.Add(request.NewErrParamRequired("RoleName")) } @@ -16325,22 +17729,22 @@ func (s *DeleteRoleInput) Validate() error { } // SetRoleName sets the RoleName field's value. 
-func (s *DeleteRoleInput) SetRoleName(v string) *DeleteRoleInput { +func (s *DeleteRolePermissionsBoundaryInput) SetRoleName(v string) *DeleteRolePermissionsBoundaryInput { s.RoleName = &v return s } -type DeleteRoleOutput struct { +type DeleteRolePermissionsBoundaryOutput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s DeleteRoleOutput) String() string { +func (s DeleteRolePermissionsBoundaryOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteRoleOutput) GoString() string { +func (s DeleteRolePermissionsBoundaryOutput) GoString() string { return s.String() } @@ -16351,7 +17755,7 @@ type DeleteRolePolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -16496,7 +17900,7 @@ type DeleteSSHPublicKeyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -16567,7 +17971,7 @@ type DeleteServerCertificateInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // ServerCertificateName is a required field ServerCertificateName *string `min:"1" type:"string" required:"true"` @@ -16705,7 +18109,7 @@ type DeleteServiceSpecificCredentialInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -16780,7 +18184,7 @@ type DeleteSigningCertificateInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -16846,7 +18250,7 @@ type DeleteUserInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. 
You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -16898,6 +18302,62 @@ func (s DeleteUserOutput) GoString() string { return s.String() } +type DeleteUserPermissionsBoundaryInput struct { + _ struct{} `type:"structure"` + + // The name (friendly name, not ARN) of the IAM user from which you want to + // remove the permissions boundary. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteUserPermissionsBoundaryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteUserPermissionsBoundaryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteUserPermissionsBoundaryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteUserPermissionsBoundaryInput"} + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetUserName sets the UserName field's value. +func (s *DeleteUserPermissionsBoundaryInput) SetUserName(v string) *DeleteUserPermissionsBoundaryInput { + s.UserName = &v + return s +} + +type DeleteUserPermissionsBoundaryOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteUserPermissionsBoundaryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteUserPermissionsBoundaryOutput) GoString() string { + return s.String() +} + type DeleteUserPolicyInput struct { _ struct{} `type:"structure"` @@ -16905,7 +18365,7 @@ type DeleteUserPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -16915,7 +18375,7 @@ type DeleteUserPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -17050,11 +18510,11 @@ type DeletionTaskFailureReasonType struct { Reason *string `type:"string"` // A list of objects that contains details about the service-linked role deletion - // failure. If the service-linked role has active sessions or if any resources - // that were used by the role have not been deleted from the linked service, - // the role can't be deleted. This parameter includes a list of the resources - // that are associated with the role and the region in which the resources are - // being used. + // failure, if that information is returned by the service. 
If the service-linked + // role has active sessions or if any resources that were used by the role have + // not been deleted from the linked service, the role can't be deleted. This + // parameter includes a list of the resources that are associated with the role + // and the region in which the resources are being used. RoleUsageList []*RoleUsageType `type:"list"` } @@ -17087,7 +18547,7 @@ type DetachGroupPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -17256,7 +18716,7 @@ type DetachUserPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -17325,7 +18785,7 @@ type EnableMFADeviceInput struct { // An authentication code emitted by the device. // - // The format for this parameter is a string of 6 digits. + // The format for this parameter is a string of six digits. // // Submit your request immediately after generating the authentication codes. // If you generate the codes and then wait too long to submit the request, the @@ -17339,7 +18799,7 @@ type EnableMFADeviceInput struct { // A subsequent authentication code emitted by the device. // - // The format for this parameter is a string of 6 digits. + // The format for this parameter is a string of six digits. // // Submit your request immediately after generating the authentication codes. // If you generate the codes and then wait too long to submit the request, the @@ -17365,7 +18825,7 @@ type EnableMFADeviceInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -17460,7 +18920,7 @@ func (s EnableMFADeviceOutput) GoString() string { type EvaluationResult struct { _ struct{} `type:"structure"` - // The name of the API action tested on the indicated resource. + // The name of the API operation tested on the indicated resource. // // EvalActionName is a required field EvalActionName *string `min:"3" type:"string" required:"true"` @@ -17478,12 +18938,12 @@ type EvaluationResult struct { // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html) EvalDecisionDetails map[string]*string `type:"map"` - // The ARN of the resource that the indicated API action was tested on. + // The ARN of the resource that the indicated API operation was tested on. EvalResourceName *string `min:"1" type:"string"` // A list of the statements in the input policies that determine the result - // for this scenario. 
Remember that even if multiple statements allow the action - // on the resource, if only one statement denies that action, then the explicit + // for this scenario. Remember that even if multiple statements allow the operation + // on the resource, if only one statement denies that operation, then the explicit // deny overrides any allow, and the deny statement is the only entry included // in the result. MatchedStatements []*Statement `type:"list"` @@ -17502,8 +18962,8 @@ type EvaluationResult struct { // account is part of an organization. OrganizationsDecisionDetail *OrganizationsDecisionDetail `type:"structure"` - // The individual results of the simulation of the API action specified in EvalActionName - // on each resource. + // The individual results of the simulation of the API operation specified in + // EvalActionName on each resource. ResourceSpecificResults []*ResourceSpecificResult `type:"list"` } @@ -17928,11 +19388,16 @@ type GetContextKeysForCustomPolicyInput struct { // complete, valid JSON text of an IAM policy. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // PolicyInputList is a required field PolicyInputList []*string `type:"list" required:"true"` @@ -17999,20 +19464,26 @@ type GetContextKeysForPrincipalPolicyInput struct { // keys that are referenced. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) PolicyInputList []*string `type:"list"` // The ARN of a user, group, or role whose policies contain the context keys // that you want listed. If you specify a user, the list includes context keys - // that are found in all policies attached to the user as well as to all groups - // that the user is a member of. If you pick a group or a role, then it includes - // only those context keys that are found in policies attached to that entity. 
- // Note that all parameters are shown in unencoded form here for clarity, but - // must be URL encoded to be included as a part of a real HTML request. + // that are found in all policies that are attached to the user. The list also + // includes all groups that the user is a member of. If you pick a group or + // a role, then it includes only those context keys that are found in policies + // attached to that entity. Note that all parameters are shown in unencoded + // form here for clarity, but must be URL encoded to be included as a part of + // a real HTML request. // // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) @@ -18085,7 +19556,7 @@ type GetCredentialReportOutput struct { // The date and time when the credential report was created, in ISO 8601 date-time // format (http://www.iso.org/iso/iso8601). - GeneratedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + GeneratedTime *time.Time `type:"timestamp"` // The format (MIME type) of the credential report. ReportFormat *string `type:"string" enum:"ReportFormatType"` @@ -18126,7 +19597,7 @@ type GetGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -18267,7 +19738,7 @@ type GetGroupPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -18276,7 +19747,7 @@ type GetGroupPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -18381,7 +19852,7 @@ type GetInstanceProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // InstanceProfileName is a required field InstanceProfileName *string `min:"1" type:"string" required:"true"` @@ -18452,7 +19923,7 @@ type GetLoginProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. 
You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -18521,7 +19992,7 @@ type GetOpenIDConnectProviderInput struct { // The Amazon Resource Name (ARN) of the OIDC provider resource object in IAM // to get information for. You can get a list of OIDC provider resource ARNs - // by using the ListOpenIDConnectProviders action. + // by using the ListOpenIDConnectProviders operation. // // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) @@ -18573,7 +20044,7 @@ type GetOpenIDConnectProviderOutput struct { // The date and time when the IAM OIDC provider resource object was created // in the AWS account. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // A list of certificate thumbprints that are associated with the specified // IAM OIDC provider resource object. For more information, see CreateOpenIDConnectProvider. @@ -18855,7 +20326,7 @@ type GetRolePolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -19013,13 +20484,13 @@ type GetSAMLProviderOutput struct { _ struct{} `type:"structure"` // The date and time when the SAML provider was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // The XML metadata document that includes information about an identity provider. SAMLMetadataDocument *string `min:"1000" type:"string"` // The expiration date and time for the SAML provider. - ValidUntil *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ValidUntil *time.Time `type:"timestamp"` } // String returns the string representation @@ -19073,7 +20544,7 @@ type GetSSHPublicKeyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -19163,7 +20634,7 @@ type GetServerCertificateInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // ServerCertificateName is a required field ServerCertificateName *string `min:"1" type:"string" required:"true"` @@ -19311,7 +20782,7 @@ type GetUserInput struct { // This parameter is optional. If it is not included, it defaults to the user // making the request. This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. 
You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -19350,6 +20821,23 @@ type GetUserOutput struct { // A structure containing details about the IAM user. // + // Due to a service issue, password last used data does not include password + // use from May 3rd 2018 22:50 PDT to May 23rd 2018 14:08 PDT. This affects + // last sign-in (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html) + // dates shown in the IAM console and password last used dates in the IAM credential + // report (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html), + // and returned by this GetUser API. If users signed in during the affected + // time, the password last used date that is returned is the date the user last + // signed in before May 3rd 2018. For users that signed in after May 23rd 2018 + // 14:08 PDT, the returned password last used date is accurate. + // + // If you use password last used information to identify unused credentials + // for deletion, such as deleting users who did not sign in to AWS in the last + // 90 days, we recommend that you adjust your evaluation window to include dates + // after May 23rd 2018. Alternatively, if your users use access keys to access + // AWS programmatically you can refer to access key last used information because + // it is accurate for all dates. + // // User is a required field User *User `type:"structure" required:"true"` } @@ -19377,7 +20865,7 @@ type GetUserPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -19386,7 +20874,7 @@ type GetUserPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -19486,7 +20974,7 @@ func (s *GetUserPolicyOutput) SetUserName(v string) *GetUserPolicyOutput { // Contains information about an IAM group entity. // -// This data type is used as a response element in the following actions: +// This data type is used as a response element in the following operations: // // * CreateGroup // @@ -19507,7 +20995,7 @@ type Group struct { // when the group was created. // // CreateDate is a required field - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreateDate *time.Time `type:"timestamp" required:"true"` // The stable and unique string identifying the group. For more information // about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) @@ -19572,7 +21060,7 @@ func (s *Group) SetPath(v string) *Group { // Contains information about an IAM group, including all of the group's policies. 
// // This data type is used as a response element in the GetAccountAuthorizationDetails -// action. +// operation. type GroupDetail struct { _ struct{} `type:"structure"` @@ -19588,7 +21076,7 @@ type GroupDetail struct { // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), // when the group was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // The stable and unique string identifying the group. For more information // about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) @@ -19661,7 +21149,7 @@ func (s *GroupDetail) SetPath(v string) *GroupDetail { // Contains information about an instance profile. // -// This data type is used as a response element in the following actions: +// This data type is used as a response element in the following operations: // // * CreateInstanceProfile // @@ -19684,7 +21172,7 @@ type InstanceProfile struct { // The date when the instance profile was created. // // CreateDate is a required field - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreateDate *time.Time `type:"timestamp" required:"true"` // The stable and unique string identifying the instance profile. For more information // about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) @@ -19781,7 +21269,7 @@ type ListAccessKeysInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -20000,7 +21488,7 @@ type ListAttachedGroupPoliciesInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -20025,11 +21513,12 @@ type ListAttachedGroupPoliciesInput struct { // The path prefix for filtering the results. This parameter is optional. If // it is not included, it defaults to a slash (/), listing all policies. // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. PathPrefix *string `type:"string"` } @@ -20160,11 +21649,12 @@ type ListAttachedRolePoliciesInput struct { // The path prefix for filtering the results. This parameter is optional. 
If // it is not included, it defaults to a slash (/), listing all policies. // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. PathPrefix *string `type:"string"` // The name (friendly name, not ARN) of the role to list attached policies for. @@ -20304,18 +21794,19 @@ type ListAttachedUserPoliciesInput struct { // The path prefix for filtering the results. This parameter is optional. If // it is not included, it defaults to a slash (/), listing all policies. // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. PathPrefix *string `type:"string"` // The name (friendly name, not ARN) of the user to list attached policies for. // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -20456,11 +21947,12 @@ type ListEntitiesForPolicyInput struct { // The path prefix for filtering the results. This parameter is optional. If // it is not included, it defaults to a slash (/), listing all entities. // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. PathPrefix *string `min:"1" type:"string"` // The Amazon Resource Name (ARN) of the IAM policy for which you want the versions. 
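
The hunks above document the new IAM inputs this vendor update brings in: `MaxSessionDuration`, `PermissionsBoundary`, and `Tags` on `CreateRoleInput`, plus `PathPrefix` filtering on the `ListAttached*Policies` inputs. The following is a minimal, illustrative sketch of calling those APIs directly through the aws-sdk-go `iam` client, not how this provider wires them up. The role name, account ID, permissions-boundary ARN, path prefix, and tag values are placeholders, and the canonical `github.com/aws/aws-sdk-go/...` import paths are assumed rather than the vendored paths shown in this diff.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	// Shared session using the default credential/region chain.
	sess := session.Must(session.NewSession())
	svc := iam.New(sess)

	// Trust policy for the example role; any valid assume-role document works here.
	assumeRolePolicy := `{
	  "Version": "2012-10-17",
	  "Statement": [{
	    "Effect": "Allow",
	    "Principal": {"Service": "ec2.amazonaws.com"},
	    "Action": "sts:AssumeRole"
	  }]
	}`

	// Exercise the fields added in this SDK update: MaxSessionDuration,
	// PermissionsBoundary, and Tags. Names and ARNs are placeholders.
	createOut, err := svc.CreateRole(&iam.CreateRoleInput{
		RoleName:                 aws.String("example-role"),
		AssumeRolePolicyDocument: aws.String(assumeRolePolicy),
		MaxSessionDuration:       aws.Int64(7200), // seconds; the struct tag above enforces a 3600 minimum
		PermissionsBoundary:      aws.String("arn:aws:iam::123456789012:policy/example-boundary"),
		Tags: []*iam.Tag{
			{Key: aws.String("Environment"), Value: aws.String("test")},
		},
	})
	if err != nil {
		log.Fatalf("CreateRole failed: %v", err)
	}
	fmt.Println("created role:", aws.StringValue(createOut.Role.Arn))

	// List managed policies attached to the role, filtered by path prefix,
	// as described by the ListAttachedRolePoliciesInput documentation above.
	listOut, err := svc.ListAttachedRolePolicies(&iam.ListAttachedRolePoliciesInput{
		RoleName:   aws.String("example-role"),
		PathPrefix: aws.String("/division_abc/"),
	})
	if err != nil {
		log.Fatalf("ListAttachedRolePolicies failed: %v", err)
	}
	for _, p := range listOut.AttachedPolicies {
		fmt.Println("attached:", aws.StringValue(p.PolicyName))
	}
}
```

The boundary set at creation time can later be removed with the new `DeleteRolePermissionsBoundary` / `DeleteUserPermissionsBoundary` operations introduced further down in this same diff; the sketch stops at creation and listing to keep the example focused on the new input fields.
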
@@ -20471,6 +21963,15 @@ type ListEntitiesForPolicyInput struct { // // PolicyArn is a required field PolicyArn *string `min:"20" type:"string" required:"true"` + + // The policy usage method to use for filtering the results. + // + // To list only permissions policies, set PolicyUsageFilter to PermissionsPolicy. + // To list only the policies used to set permissions boundaries, set the value + // to PermissionsBoundary. + // + // This parameter is optional. If it is not included, all policies are returned. + PolicyUsageFilter *string `type:"string" enum:"PolicyUsageType"` } // String returns the string representation @@ -20538,6 +22039,12 @@ func (s *ListEntitiesForPolicyInput) SetPolicyArn(v string) *ListEntitiesForPoli return s } +// SetPolicyUsageFilter sets the PolicyUsageFilter field's value. +func (s *ListEntitiesForPolicyInput) SetPolicyUsageFilter(v string) *ListEntitiesForPolicyInput { + s.PolicyUsageFilter = &v + return s +} + // Contains the response to a successful ListEntitiesForPolicy request. type ListEntitiesForPolicyOutput struct { _ struct{} `type:"structure"` @@ -20611,7 +22118,7 @@ type ListGroupPoliciesInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -20704,7 +22211,7 @@ type ListGroupPoliciesOutput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyNames is a required field PolicyNames []*string `type:"list" required:"true"` @@ -20762,7 +22269,7 @@ type ListGroupsForUserInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -20892,11 +22399,12 @@ type ListGroupsInput struct { // gets all groups whose path starts with /division_abc/subdivision_xyz/. // // This parameter is optional. If it is not included, it defaults to a slash - // (/), listing all groups. This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // (/), listing all groups. This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! 
(\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. PathPrefix *string `min:"1" type:"string"` } @@ -21151,12 +22659,12 @@ type ListInstanceProfilesInput struct { // gets all instance profiles whose path starts with /application_abc/component_xyz/. // // This parameter is optional. If it is not included, it defaults to a slash - // (/), listing all instance profiles. This paramater allows (per its regex + // (/), listing all instance profiles. This parameter allows (per its regex // pattern (http://wikipedia.org/wiki/regex)) a string of characters consisting // of either a forward slash (/) by itself or a string that must begin and end - // with forward slashes, containing any ASCII character from the ! (\u0021) - // thru the DEL character (\u007F), including most punctuation characters, digits, - // and upper and lowercased letters. + // with forward slashes. In addition, it can contain any ASCII character from + // the ! (\u0021) through the DEL character (\u007F), including most punctuation + // characters, digits, and upper and lowercased letters. PathPrefix *string `min:"1" type:"string"` } @@ -21281,7 +22789,7 @@ type ListMFADevicesInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -21449,13 +22957,23 @@ type ListPoliciesInput struct { // The path prefix for filtering the results. This parameter is optional. If // it is not included, it defaults to a slash (/), listing all policies. This - // paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. PathPrefix *string `type:"string"` + // The policy usage method to use for filtering the results. + // + // To list only permissions policies, set PolicyUsageFilter to PermissionsPolicy. + // To list only the policies used to set permissions boundaries, set the value + // to PermissionsBoundary. + // + // This parameter is optional. If it is not included, all policies are returned. + PolicyUsageFilter *string `type:"string" enum:"PolicyUsageType"` + // The scope to use for filtering the results. // // To list only AWS managed policies, set Scope to AWS. To list only the customer @@ -21516,6 +23034,12 @@ func (s *ListPoliciesInput) SetPathPrefix(v string) *ListPoliciesInput { return s } +// SetPolicyUsageFilter sets the PolicyUsageFilter field's value. +func (s *ListPoliciesInput) SetPolicyUsageFilter(v string) *ListPoliciesInput { + s.PolicyUsageFilter = &v + return s +} + // SetScope sets the Scope field's value. 
func (s *ListPoliciesInput) SetScope(v string) *ListPoliciesInput { s.Scope = &v @@ -21633,25 +23157,157 @@ func (s *ListPolicyVersionsInput) Validate() error { } // SetMarker sets the Marker field's value. -func (s *ListPolicyVersionsInput) SetMarker(v string) *ListPolicyVersionsInput { +func (s *ListPolicyVersionsInput) SetMarker(v string) *ListPolicyVersionsInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListPolicyVersionsInput) SetMaxItems(v int64) *ListPolicyVersionsInput { + s.MaxItems = &v + return s +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *ListPolicyVersionsInput) SetPolicyArn(v string) *ListPolicyVersionsInput { + s.PolicyArn = &v + return s +} + +// Contains the response to a successful ListPolicyVersions request. +type ListPolicyVersionsOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // A list of policy versions. + // + // For more information about managed policy versions, see Versioning for Managed + // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) + // in the IAM User Guide. + Versions []*PolicyVersion `type:"list"` +} + +// String returns the string representation +func (s ListPolicyVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListPolicyVersionsOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListPolicyVersionsOutput) SetIsTruncated(v bool) *ListPolicyVersionsOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListPolicyVersionsOutput) SetMarker(v string) *ListPolicyVersionsOutput { + s.Marker = &v + return s +} + +// SetVersions sets the Versions field's value. +func (s *ListPolicyVersionsOutput) SetVersions(v []*PolicyVersion) *ListPolicyVersionsOutput { + s.Versions = v + return s +} + +type ListRolePoliciesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. 
In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the role to list policies for. + // + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters consisting of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListRolePoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListRolePoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListRolePoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListRolePoliciesInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListRolePoliciesInput) SetMarker(v string) *ListRolePoliciesInput { s.Marker = &v return s } // SetMaxItems sets the MaxItems field's value. -func (s *ListPolicyVersionsInput) SetMaxItems(v int64) *ListPolicyVersionsInput { +func (s *ListRolePoliciesInput) SetMaxItems(v int64) *ListRolePoliciesInput { s.MaxItems = &v return s } -// SetPolicyArn sets the PolicyArn field's value. -func (s *ListPolicyVersionsInput) SetPolicyArn(v string) *ListPolicyVersionsInput { - s.PolicyArn = &v +// SetRoleName sets the RoleName field's value. +func (s *ListRolePoliciesInput) SetRoleName(v string) *ListRolePoliciesInput { + s.RoleName = &v return s } -// Contains the response to a successful ListPolicyVersions request. -type ListPolicyVersionsOutput struct { +// Contains the response to a successful ListRolePolicies request. +type ListRolePoliciesOutput struct { _ struct{} `type:"structure"` // A flag that indicates whether there are more items to return. If your results @@ -21666,66 +23322,65 @@ type ListPolicyVersionsOutput struct { // to use for the Marker parameter in a subsequent pagination request. Marker *string `min:"1" type:"string"` - // A list of policy versions. + // A list of policy names. // - // For more information about managed policy versions, see Versioning for Managed - // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) - // in the IAM User Guide. 
- Versions []*PolicyVersion `type:"list"` + // PolicyNames is a required field + PolicyNames []*string `type:"list" required:"true"` } // String returns the string representation -func (s ListPolicyVersionsOutput) String() string { +func (s ListRolePoliciesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPolicyVersionsOutput) GoString() string { +func (s ListRolePoliciesOutput) GoString() string { return s.String() } // SetIsTruncated sets the IsTruncated field's value. -func (s *ListPolicyVersionsOutput) SetIsTruncated(v bool) *ListPolicyVersionsOutput { +func (s *ListRolePoliciesOutput) SetIsTruncated(v bool) *ListRolePoliciesOutput { s.IsTruncated = &v return s } // SetMarker sets the Marker field's value. -func (s *ListPolicyVersionsOutput) SetMarker(v string) *ListPolicyVersionsOutput { +func (s *ListRolePoliciesOutput) SetMarker(v string) *ListRolePoliciesOutput { s.Marker = &v return s } -// SetVersions sets the Versions field's value. -func (s *ListPolicyVersionsOutput) SetVersions(v []*PolicyVersion) *ListPolicyVersionsOutput { - s.Versions = v +// SetPolicyNames sets the PolicyNames field's value. +func (s *ListRolePoliciesOutput) SetPolicyNames(v []*string) *ListRolePoliciesOutput { + s.PolicyNames = v return s } -type ListRolePoliciesInput struct { +type ListRoleTagsInput struct { _ struct{} `type:"structure"` // Use this parameter only when paginating results and only after you receive // a response indicating that the results are truncated. Set it to the value - // of the Marker element in the response that you received to indicate where - // the next call should start. + // of the Marker element in the response to indicate where the next call should + // start. Marker *string `min:"1" type:"string"` // (Optional) Use this only when paginating results to indicate the maximum - // number of items you want in the response. If additional items exist beyond - // the maximum you specify, the IsTruncated response element is true. + // number of items that you want in the response. If additional items exist + // beyond the maximum that you specify, the IsTruncated response element is + // true. // // If you do not include this parameter, it defaults to 100. Note that IAM might - // return fewer results, even when there are more results available. In that - // case, the IsTruncated response element returns true and Marker contains a - // value to include in the subsequent call that tells the service where to continue + // return fewer results, even when more results are available. In that case, + // the IsTruncated response element returns true, and Marker contains a value + // to include in the subsequent call that tells the service where to continue // from. MaxItems *int64 `min:"1" type:"integer"` - // The name of the role to list policies for. + // The name of the IAM role for which you want to see the list of tags. // - // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) - // a string of characters consisting of upper and lowercase alphanumeric characters + // This parameter accepts (through its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that consist of upper and lowercase alphanumeric characters // with no spaces. 
You can also include any of the following characters: _+=,.@- // // RoleName is a required field @@ -21733,18 +23388,18 @@ type ListRolePoliciesInput struct { } // String returns the string representation -func (s ListRolePoliciesInput) String() string { +func (s ListRoleTagsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListRolePoliciesInput) GoString() string { +func (s ListRoleTagsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListRolePoliciesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListRolePoliciesInput"} +func (s *ListRoleTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListRoleTagsInput"} if s.Marker != nil && len(*s.Marker) < 1 { invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) } @@ -21765,70 +23420,71 @@ func (s *ListRolePoliciesInput) Validate() error { } // SetMarker sets the Marker field's value. -func (s *ListRolePoliciesInput) SetMarker(v string) *ListRolePoliciesInput { +func (s *ListRoleTagsInput) SetMarker(v string) *ListRoleTagsInput { s.Marker = &v return s } // SetMaxItems sets the MaxItems field's value. -func (s *ListRolePoliciesInput) SetMaxItems(v int64) *ListRolePoliciesInput { +func (s *ListRoleTagsInput) SetMaxItems(v int64) *ListRoleTagsInput { s.MaxItems = &v return s } // SetRoleName sets the RoleName field's value. -func (s *ListRolePoliciesInput) SetRoleName(v string) *ListRolePoliciesInput { +func (s *ListRoleTagsInput) SetRoleName(v string) *ListRoleTagsInput { s.RoleName = &v return s } -// Contains the response to a successful ListRolePolicies request. -type ListRolePoliciesOutput struct { +type ListRoleTagsOutput struct { _ struct{} `type:"structure"` // A flag that indicates whether there are more items to return. If your results - // were truncated, you can make a subsequent pagination request using the Marker - // request parameter to retrieve more items. Note that IAM might return fewer - // than the MaxItems number of results even when there are more results available. - // We recommend that you check IsTruncated after every call to ensure that you - // receive all of your results. + // were truncated, you can use the Marker request parameter to make a subsequent + // pagination request that retrieves more items. Note that IAM might return + // fewer than the MaxItems number of results even when more results are available. + // Check IsTruncated after every call to ensure that you receive all of your + // results. IsTruncated *bool `type:"boolean"` // When IsTruncated is true, this element is present and contains the value // to use for the Marker parameter in a subsequent pagination request. Marker *string `min:"1" type:"string"` - // A list of policy names. + // The list of tags currently that is attached to the role. Each tag consists + // of a key name and an associated value. If no tags are attached to the specified + // role, the response contains an empty list. 
// - // PolicyNames is a required field - PolicyNames []*string `type:"list" required:"true"` + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` } // String returns the string representation -func (s ListRolePoliciesOutput) String() string { +func (s ListRoleTagsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListRolePoliciesOutput) GoString() string { +func (s ListRoleTagsOutput) GoString() string { return s.String() } // SetIsTruncated sets the IsTruncated field's value. -func (s *ListRolePoliciesOutput) SetIsTruncated(v bool) *ListRolePoliciesOutput { +func (s *ListRoleTagsOutput) SetIsTruncated(v bool) *ListRoleTagsOutput { s.IsTruncated = &v return s } // SetMarker sets the Marker field's value. -func (s *ListRolePoliciesOutput) SetMarker(v string) *ListRolePoliciesOutput { +func (s *ListRoleTagsOutput) SetMarker(v string) *ListRoleTagsOutput { s.Marker = &v return s } -// SetPolicyNames sets the PolicyNames field's value. -func (s *ListRolePoliciesOutput) SetPolicyNames(v []*string) *ListRolePoliciesOutput { - s.PolicyNames = v +// SetTags sets the Tags field's value. +func (s *ListRoleTagsOutput) SetTags(v []*Tag) *ListRoleTagsOutput { + s.Tags = v return s } @@ -21856,11 +23512,12 @@ type ListRolesInput struct { // gets all roles whose path starts with /application_abc/component_xyz/. // // This parameter is optional. If it is not included, it defaults to a slash - // (/), listing all roles. This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // (/), listing all roles. This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. PathPrefix *string `min:"1" type:"string"` } @@ -22025,7 +23682,7 @@ type ListSSHPublicKeysInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -22148,12 +23805,12 @@ type ListServerCertificatesInput struct { // would get all server certificates for which the path starts with /company/servercerts. // // This parameter is optional. If it is not included, it defaults to a slash - // (/), listing all server certificates. This paramater allows (per its regex + // (/), listing all server certificates. This parameter allows (per its regex // pattern (http://wikipedia.org/wiki/regex)) a string of characters consisting // of either a forward slash (/) by itself or a string that must begin and end - // with forward slashes, containing any ASCII character from the ! (\u0021) - // thru the DEL character (\u007F), including most punctuation characters, digits, - // and upper and lowercased letters. + // with forward slashes. 
In addition, it can contain any ASCII character from + // the ! (\u0021) through the DEL character (\u007F), including most punctuation + // characters, digits, and upper and lowercased letters. PathPrefix *string `min:"1" type:"string"` } @@ -22262,12 +23919,12 @@ type ListServiceSpecificCredentialsInput struct { ServiceName *string `type:"string"` // The name of the user whose service-specific credentials you want information - // about. If this value is not specified then the operation assumes the user + // about. If this value is not specified, then the operation assumes the user // whose credentials are used to call the operation. // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -22353,7 +24010,7 @@ type ListSigningCertificatesInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -22478,7 +24135,7 @@ type ListUserPoliciesInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -22584,6 +24241,138 @@ func (s *ListUserPoliciesOutput) SetPolicyNames(v []*string) *ListUserPoliciesOu return s } +type ListUserTagsInput struct { + _ struct{} `type:"structure"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response to indicate where the next call should + // start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items that you want in the response. If additional items exist + // beyond the maximum that you specify, the IsTruncated response element is + // true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when more results are available. In that case, + // the IsTruncated response element returns true, and Marker contains a value + // to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // The name of the IAM user whose tags you want to see. + // + // This parameter accepts (through its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that consist of upper and lowercase alphanumeric characters + // with no spaces. 
You can also include any of the following characters: =,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListUserTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListUserTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListUserTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListUserTagsInput"} + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListUserTagsInput) SetMarker(v string) *ListUserTagsInput { + s.Marker = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListUserTagsInput) SetMaxItems(v int64) *ListUserTagsInput { + s.MaxItems = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ListUserTagsInput) SetUserName(v string) *ListUserTagsInput { + s.UserName = &v + return s +} + +type ListUserTagsOutput struct { + _ struct{} `type:"structure"` + + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can use the Marker request parameter to make a subsequent + // pagination request that retrieves more items. Note that IAM might return + // fewer than the MaxItems number of results even when more results are available. + // Check IsTruncated after every call to ensure that you receive all of your + // results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` + + // The list of tags that are currently attached to the user. Each tag consists + // of a key name and an associated value. If no tags are attached to the specified + // user, the response contains an empty list. + // + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` +} + +// String returns the string representation +func (s ListUserTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListUserTagsOutput) GoString() string { + return s.String() +} + +// SetIsTruncated sets the IsTruncated field's value. +func (s *ListUserTagsOutput) SetIsTruncated(v bool) *ListUserTagsOutput { + s.IsTruncated = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListUserTagsOutput) SetMarker(v string) *ListUserTagsOutput { + s.Marker = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *ListUserTagsOutput) SetTags(v []*Tag) *ListUserTagsOutput { + s.Tags = v + return s +} + type ListUsersInput struct { _ struct{} `type:"structure"` @@ -22608,12 +24397,12 @@ type ListUsersInput struct { // which would get all user names whose path starts with /division_abc/subdivision_xyz/. // // This parameter is optional. 
If it is not included, it defaults to a slash - // (/), listing all user names. This paramater allows (per its regex pattern + // (/), listing all user names. This parameter allows (per its regex pattern // (http://wikipedia.org/wiki/regex)) a string of characters consisting of either // a forward slash (/) by itself or a string that must begin and end with forward - // slashes, containing any ASCII character from the ! (\u0021) thru the DEL - // character (\u007F), including most punctuation characters, digits, and upper - // and lowercased letters. + // slashes. In addition, it can contain any ASCII character from the ! (\u0021) + // through the DEL character (\u007F), including most punctuation characters, + // digits, and upper and lowercased letters. PathPrefix *string `min:"1" type:"string"` } @@ -22718,7 +24507,7 @@ type ListVirtualMFADevicesInput struct { _ struct{} `type:"structure"` // The status (Unassigned or Assigned) of the devices to list. If you do not - // specify an AssignmentStatus, the action defaults to Any which lists both + // specify an AssignmentStatus, the operation defaults to Any which lists both // assigned and unassigned virtual MFA devices. AssignmentStatus *string `type:"string" enum:"assignmentStatusType"` @@ -22838,14 +24627,14 @@ func (s *ListVirtualMFADevicesOutput) SetVirtualMFADevices(v []*VirtualMFADevice // Contains the user name and password create date for a user. // // This data type is used as a response element in the CreateLoginProfile and -// GetLoginProfile actions. +// GetLoginProfile operations. type LoginProfile struct { _ struct{} `type:"structure"` // The date when the password for the user was created. // // CreateDate is a required field - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreateDate *time.Time `type:"timestamp" required:"true"` // Specifies whether the user is required to set a new password on next sign-in. PasswordResetRequired *bool `type:"boolean"` @@ -22887,14 +24676,14 @@ func (s *LoginProfile) SetUserName(v string) *LoginProfile { // Contains information about an MFA device. // -// This data type is used as a response element in the ListMFADevices action. +// This data type is used as a response element in the ListMFADevices operation. type MFADevice struct { _ struct{} `type:"structure"` // The date when the MFA device was enabled for the user. // // EnableDate is a required field - EnableDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + EnableDate *time.Time `type:"timestamp" required:"true"` // The serial number that uniquely identifies the MFA device. For virtual MFA // devices, the serial number is the device ARN. @@ -22941,7 +24730,7 @@ func (s *MFADevice) SetUserName(v string) *MFADevice { // that the policy is attached to. // // This data type is used as a response element in the GetAccountAuthorizationDetails -// action. +// operation. // // For more information about managed policies, see Managed Policies and Inline // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) @@ -22962,7 +24751,7 @@ type ManagedPolicyDetail struct { // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), // when the policy was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // The identifier for the version of the policy that is set as the default (operative) // version. 
@@ -22984,6 +24773,14 @@ type ManagedPolicyDetail struct { // in the Using IAM guide. Path *string `type:"string"` + // The number of entities (users and roles) for which the policy is used as + // the permissions boundary. + // + // For more information about permissions boundaries, see Permissions Boundaries + // for IAM Identities (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) + // in the IAM User Guide. + PermissionsBoundaryUsageCount *int64 `type:"integer"` + // The stable and unique string identifying the policy. // // For more information about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) @@ -23003,7 +24800,7 @@ type ManagedPolicyDetail struct { // when the policy was created. When a policy has more than one version, this // field contains the date and time when the most recent policy version was // created. - UpdateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + UpdateDate *time.Time `type:"timestamp"` } // String returns the string representation @@ -23058,6 +24855,12 @@ func (s *ManagedPolicyDetail) SetPath(v string) *ManagedPolicyDetail { return s } +// SetPermissionsBoundaryUsageCount sets the PermissionsBoundaryUsageCount field's value. +func (s *ManagedPolicyDetail) SetPermissionsBoundaryUsageCount(v int64) *ManagedPolicyDetail { + s.PermissionsBoundaryUsageCount = &v + return s +} + // SetPolicyId sets the PolicyId field's value. func (s *ManagedPolicyDetail) SetPolicyId(v string) *ManagedPolicyDetail { s.PolicyId = &v @@ -23110,11 +24913,11 @@ func (s *OpenIDConnectProviderListEntry) SetArn(v string) *OpenIDConnectProvider return s } -// Contains information about AWS Organizations's affect on a policy simulation. +// Contains information about AWS Organizations's effect on a policy simulation. type OrganizationsDecisionDetail struct { _ struct{} `type:"structure"` - // Specifies whether the simulated action is allowed by the AWS Organizations + // Specifies whether the simulated operation is allowed by the AWS Organizations // service control policies that impact the simulated user's account. AllowedByOrganizations *bool `type:"boolean"` } @@ -23138,7 +24941,7 @@ func (s *OrganizationsDecisionDetail) SetAllowedByOrganizations(v bool) *Organiz // Contains information about the account password policy. // // This data type is used as a response element in the GetAccountPasswordPolicy -// action. +// operation. type PasswordPolicy struct { _ struct{} `type:"structure"` @@ -23146,8 +24949,8 @@ type PasswordPolicy struct { AllowUsersToChangePassword *bool `type:"boolean"` // Indicates whether passwords in the account expire. Returns true if MaxPasswordAge - // is contains a value greater than 0. Returns false if MaxPasswordAge is 0 - // or not present. + // contains a value greater than 0. Returns false if MaxPasswordAge is 0 or + // not present. ExpirePasswords *bool `type:"boolean"` // Specifies whether IAM users are prevented from setting a new password after @@ -23250,7 +25053,7 @@ func (s *PasswordPolicy) SetRequireUppercaseCharacters(v bool) *PasswordPolicy { // Contains information about a managed policy. // // This data type is used as a response element in the CreatePolicy, GetPolicy, -// and ListPolicies actions. +// and ListPolicies operations. 
// // For more information about managed policies, refer to Managed Policies and // Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) @@ -23271,7 +25074,7 @@ type Policy struct { // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), // when the policy was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // The identifier for the version of the policy that is set as the default version. DefaultVersionId *string `type:"string"` @@ -23291,6 +25094,14 @@ type Policy struct { // in the Using IAM guide. Path *string `type:"string"` + // The number of entities (users and roles) for which the policy is used to + // set the permissions boundary. + // + // For more information about permissions boundaries, see Permissions Boundaries + // for IAM Identities (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) + // in the IAM User Guide. + PermissionsBoundaryUsageCount *int64 `type:"integer"` + // The stable and unique string identifying the policy. // // For more information about IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) @@ -23307,7 +25118,7 @@ type Policy struct { // when the policy was created. When a policy has more than one version, this // field contains the date and time when the most recent policy version was // created. - UpdateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + UpdateDate *time.Time `type:"timestamp"` } // String returns the string representation @@ -23362,6 +25173,12 @@ func (s *Policy) SetPath(v string) *Policy { return s } +// SetPermissionsBoundaryUsageCount sets the PermissionsBoundaryUsageCount field's value. +func (s *Policy) SetPermissionsBoundaryUsageCount(v int64) *Policy { + s.PermissionsBoundaryUsageCount = &v + return s +} + // SetPolicyId sets the PolicyId field's value. func (s *Policy) SetPolicyId(v string) *Policy { s.PolicyId = &v @@ -23383,7 +25200,7 @@ func (s *Policy) SetUpdateDate(v time.Time) *Policy { // Contains information about an IAM policy, including the policy document. // // This data type is used as a response element in the GetAccountAuthorizationDetails -// action. +// operation. type PolicyDetail struct { _ struct{} `type:"structure"` @@ -23419,7 +25236,7 @@ func (s *PolicyDetail) SetPolicyName(v string) *PolicyDetail { // Contains information about a group that a managed policy is attached to. // // This data type is used as a response element in the ListEntitiesForPolicy -// action. +// operation. // // For more information about managed policies, refer to Managed Policies and // Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) @@ -23461,7 +25278,7 @@ func (s *PolicyGroup) SetGroupName(v string) *PolicyGroup { // Contains information about a role that a managed policy is attached to. // // This data type is used as a response element in the ListEntitiesForPolicy -// action. +// operation. // // For more information about managed policies, refer to Managed Policies and // Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) @@ -23503,7 +25320,7 @@ func (s *PolicyRole) SetRoleName(v string) *PolicyRole { // Contains information about a user that a managed policy is attached to. // // This data type is used as a response element in the ListEntitiesForPolicy -// action. +// operation. 
// // For more information about managed policies, refer to Managed Policies and // Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) @@ -23546,7 +25363,7 @@ func (s *PolicyUser) SetUserName(v string) *PolicyUser { // // This data type is used as a response element in the CreatePolicyVersion, // GetPolicyVersion, ListPolicyVersions, and GetAccountAuthorizationDetails -// actions. +// operations. // // For more information about managed policies, refer to Managed Policies and // Inline Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-vs-inline.html) @@ -23556,13 +25373,19 @@ type PolicyVersion struct { // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), // when the policy version was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // The policy document. // // The policy document is returned in the response to the GetPolicyVersion and // GetAccountAuthorizationDetails operations. It is not returned in the response // to the CreatePolicyVersion or ListPolicyVersions operations. + // + // The policy document returned in this structure is URL-encoded compliant with + // RFC 3986 (https://tools.ietf.org/html/rfc3986). You can use a URL decoding + // method to convert the policy back to plain JSON text. For example, if you + // use Java, you can use the decode method of the java.net.URLDecoder utility + // class in the Java SDK. Other languages and SDKs provide similar functionality. Document *string `min:"1" type:"string"` // Specifies whether the policy version is set as the policy's default version. @@ -23652,7 +25475,7 @@ type PutGroupPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -23660,11 +25483,16 @@ type PutGroupPolicyInput struct { // The policy document. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // PolicyDocument is a required field PolicyDocument *string `min:"1" type:"string" required:"true"` @@ -23673,7 +25501,7 @@ type PutGroupPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. 
You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -23740,12 +25568,86 @@ type PutGroupPolicyOutput struct { } // String returns the string representation -func (s PutGroupPolicyOutput) String() string { +func (s PutGroupPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutGroupPolicyOutput) GoString() string { + return s.String() +} + +type PutRolePermissionsBoundaryInput struct { + _ struct{} `type:"structure"` + + // The ARN of the policy that is used to set the permissions boundary for the + // role. + // + // PermissionsBoundary is a required field + PermissionsBoundary *string `min:"20" type:"string" required:"true"` + + // The name (friendly name, not ARN) of the IAM role for which you want to set + // the permissions boundary. + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutRolePermissionsBoundaryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutRolePermissionsBoundaryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutRolePermissionsBoundaryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutRolePermissionsBoundaryInput"} + if s.PermissionsBoundary == nil { + invalidParams.Add(request.NewErrParamRequired("PermissionsBoundary")) + } + if s.PermissionsBoundary != nil && len(*s.PermissionsBoundary) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PermissionsBoundary", 20)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPermissionsBoundary sets the PermissionsBoundary field's value. +func (s *PutRolePermissionsBoundaryInput) SetPermissionsBoundary(v string) *PutRolePermissionsBoundaryInput { + s.PermissionsBoundary = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *PutRolePermissionsBoundaryInput) SetRoleName(v string) *PutRolePermissionsBoundaryInput { + s.RoleName = &v + return s +} + +type PutRolePermissionsBoundaryOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutRolePermissionsBoundaryOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PutGroupPolicyOutput) GoString() string { +func (s PutRolePermissionsBoundaryOutput) GoString() string { return s.String() } @@ -23755,11 +25657,16 @@ type PutRolePolicyInput struct { // The policy document. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). 
+ // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // PolicyDocument is a required field PolicyDocument *string `min:"1" type:"string" required:"true"` @@ -23768,7 +25675,7 @@ type PutRolePolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -23853,17 +25760,96 @@ func (s PutRolePolicyOutput) GoString() string { return s.String() } +type PutUserPermissionsBoundaryInput struct { + _ struct{} `type:"structure"` + + // The ARN of the policy that is used to set the permissions boundary for the + // user. + // + // PermissionsBoundary is a required field + PermissionsBoundary *string `min:"20" type:"string" required:"true"` + + // The name (friendly name, not ARN) of the IAM user for which you want to set + // the permissions boundary. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutUserPermissionsBoundaryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutUserPermissionsBoundaryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutUserPermissionsBoundaryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutUserPermissionsBoundaryInput"} + if s.PermissionsBoundary == nil { + invalidParams.Add(request.NewErrParamRequired("PermissionsBoundary")) + } + if s.PermissionsBoundary != nil && len(*s.PermissionsBoundary) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PermissionsBoundary", 20)) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPermissionsBoundary sets the PermissionsBoundary field's value. +func (s *PutUserPermissionsBoundaryInput) SetPermissionsBoundary(v string) *PutUserPermissionsBoundaryInput { + s.PermissionsBoundary = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *PutUserPermissionsBoundaryInput) SetUserName(v string) *PutUserPermissionsBoundaryInput { + s.UserName = &v + return s +} + +type PutUserPermissionsBoundaryOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutUserPermissionsBoundaryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutUserPermissionsBoundaryOutput) GoString() string { + return s.String() +} + type PutUserPolicyInput struct { _ struct{} `type:"structure"` // The policy document. 
// // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // PolicyDocument is a required field PolicyDocument *string `min:"1" type:"string" required:"true"` @@ -23872,7 +25858,7 @@ type PutUserPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@-+ + // with no spaces. You can also include any of the following characters: _+=,.@- // // PolicyName is a required field PolicyName *string `min:"1" type:"string" required:"true"` @@ -23881,7 +25867,7 @@ type PutUserPolicyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -23968,7 +25954,7 @@ type RemoveClientIDFromOpenIDConnectProviderInput struct { // The Amazon Resource Name (ARN) of the IAM OIDC provider resource to remove // the client ID from. You can get a list of OIDC provider ARNs by using the - // ListOpenIDConnectProviders action. + // ListOpenIDConnectProviders operation. // // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) @@ -24043,7 +26029,7 @@ type RemoveRoleFromInstanceProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // InstanceProfileName is a required field InstanceProfileName *string `min:"1" type:"string" required:"true"` @@ -24123,7 +26109,7 @@ type RemoveUserFromGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. 
You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -24132,7 +26118,7 @@ type RemoveUserFromGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -24214,7 +26200,7 @@ type ResetServiceSpecificCredentialInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -24286,8 +26272,8 @@ func (s *ResetServiceSpecificCredentialOutput) SetServiceSpecificCredential(v *S return s } -// Contains the result of the simulation of a single API action call on a single -// resource. +// Contains the result of the simulation of a single API operation call on a +// single resource. // // This data type is used by a member of the EvaluationResult data type. type ResourceSpecificResult struct { @@ -24300,7 +26286,7 @@ type ResourceSpecificResult struct { // caller's IAM policy must grant access. EvalDecisionDetails map[string]*string `type:"map"` - // The result of the simulation of the simulated API action on the resource + // The result of the simulation of the simulated API operation on the resource // specified in EvalResourceName. // // EvalResourceDecision is a required field @@ -24313,9 +26299,9 @@ type ResourceSpecificResult struct { // A list of the statements in the input policies that determine the result // for this part of the simulation. Remember that even if multiple statements - // allow the action on the resource, if any statement denies that action, then - // the explicit deny overrides any allow, and the deny statement is the only - // entry included in the result. + // allow the operation on the resource, if any statement denies that operation, + // then the explicit deny overrides any allow, and the deny statement is the + // only entry included in the result. MatchedStatements []*Statement `type:"list"` // A list of context keys that are required by the included input policies but @@ -24390,7 +26376,7 @@ type ResyncMFADeviceInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // SerialNumber is a required field SerialNumber *string `min:"9" type:"string" required:"true"` @@ -24399,7 +26385,7 @@ type ResyncMFADeviceInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. 
You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -24488,7 +26474,7 @@ func (s ResyncMFADeviceOutput) GoString() string { } // Contains information about an IAM role. This structure is returned as a response -// element in several APIs that interact with roles. +// element in several API operations that interact with roles. type Role struct { _ struct{} `type:"structure"` @@ -24506,11 +26492,16 @@ type Role struct { // when the role was created. // // CreateDate is a required field - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreateDate *time.Time `type:"timestamp" required:"true"` // A description of the role that you provide. Description *string `type:"string"` + // The maximum session duration (in seconds) for the specified role. Anyone + // who uses the AWS CLI or API to assume the role can specify the duration using + // the optional DurationSeconds API parameter or duration-seconds CLI parameter. + MaxSessionDuration *int64 `min:"3600" type:"integer"` + // The path to the role. For more information about paths, see IAM Identifiers // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) // in the Using IAM guide. @@ -24518,6 +26509,13 @@ type Role struct { // Path is a required field Path *string `min:"1" type:"string" required:"true"` + // The ARN of the policy used to set the permissions boundary for the role. + // + // For more information about permissions boundaries, see Permissions Boundaries + // for IAM Identities (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) + // in the IAM User Guide. + PermissionsBoundary *AttachedPermissionsBoundary `type:"structure"` + // The stable and unique string identifying the role. For more information about // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) // in the Using IAM guide. @@ -24529,6 +26527,11 @@ type Role struct { // // RoleName is a required field RoleName *string `min:"1" type:"string" required:"true"` + + // A list of tags that are attached to the specified role. For more information + // about tagging, see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) + // in the IAM User Guide. + Tags []*Tag `type:"list"` } // String returns the string representation @@ -24565,12 +26568,24 @@ func (s *Role) SetDescription(v string) *Role { return s } +// SetMaxSessionDuration sets the MaxSessionDuration field's value. +func (s *Role) SetMaxSessionDuration(v int64) *Role { + s.MaxSessionDuration = &v + return s +} + // SetPath sets the Path field's value. func (s *Role) SetPath(v string) *Role { s.Path = &v return s } +// SetPermissionsBoundary sets the PermissionsBoundary field's value. +func (s *Role) SetPermissionsBoundary(v *AttachedPermissionsBoundary) *Role { + s.PermissionsBoundary = v + return s +} + // SetRoleId sets the RoleId field's value. func (s *Role) SetRoleId(v string) *Role { s.RoleId = &v @@ -24583,10 +26598,16 @@ func (s *Role) SetRoleName(v string) *Role { return s } +// SetTags sets the Tags field's value. +func (s *Role) SetTags(v []*Tag) *Role { + s.Tags = v + return s +} + // Contains information about an IAM role, including all of the role's policies. // // This data type is used as a response element in the GetAccountAuthorizationDetails -// action. +// operation. 
type RoleDetail struct { _ struct{} `type:"structure"` @@ -24606,7 +26627,7 @@ type RoleDetail struct { // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), // when the role was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // A list of instance profiles that contain this role. InstanceProfileList []*InstanceProfile `type:"list"` @@ -24616,6 +26637,13 @@ type RoleDetail struct { // in the Using IAM guide. Path *string `min:"1" type:"string"` + // The ARN of the policy used to set the permissions boundary for the role. + // + // For more information about permissions boundaries, see Permissions Boundaries + // for IAM Identities (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) + // in the IAM User Guide. + PermissionsBoundary *AttachedPermissionsBoundary `type:"structure"` + // The stable and unique string identifying the role. For more information about // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) // in the Using IAM guide. @@ -24627,6 +26655,11 @@ type RoleDetail struct { // A list of inline policies embedded in the role. These policies are the role's // access (permissions) policies. RolePolicyList []*PolicyDetail `type:"list"` + + // A list of tags that are attached to the specified role. For more information + // about tagging, see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) + // in the IAM User Guide. + Tags []*Tag `type:"list"` } // String returns the string representation @@ -24675,6 +26708,12 @@ func (s *RoleDetail) SetPath(v string) *RoleDetail { return s } +// SetPermissionsBoundary sets the PermissionsBoundary field's value. +func (s *RoleDetail) SetPermissionsBoundary(v *AttachedPermissionsBoundary) *RoleDetail { + s.PermissionsBoundary = v + return s +} + // SetRoleId sets the RoleId field's value. func (s *RoleDetail) SetRoleId(v string) *RoleDetail { s.RoleId = &v @@ -24693,7 +26732,14 @@ func (s *RoleDetail) SetRolePolicyList(v []*PolicyDetail) *RoleDetail { return s } -// An object that contains details about how a service-linked role is used. +// SetTags sets the Tags field's value. +func (s *RoleDetail) SetTags(v []*Tag) *RoleDetail { + s.Tags = v + return s +} + +// An object that contains details about how a service-linked role is used, +// if that information is returned by the service. // // This data type is used as a response element in the GetServiceLinkedRoleDeletionStatus // operation. @@ -24737,10 +26783,10 @@ type SAMLProviderListEntry struct { Arn *string `min:"20" type:"string"` // The date and time when the SAML provider was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // The expiration date and time for the SAML provider. - ValidUntil *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ValidUntil *time.Time `type:"timestamp"` } // String returns the string representation @@ -24774,7 +26820,7 @@ func (s *SAMLProviderListEntry) SetValidUntil(v time.Time) *SAMLProviderListEntr // Contains information about an SSH public key. // // This data type is used as a response element in the GetSSHPublicKey and UploadSSHPublicKey -// actions. +// operations. 
type SSHPublicKey struct { _ struct{} `type:"structure"` @@ -24793,15 +26839,16 @@ type SSHPublicKey struct { // SSHPublicKeyId is a required field SSHPublicKeyId *string `min:"20" type:"string" required:"true"` - // The status of the SSH public key. Active means the key can be used for authentication - // with an AWS CodeCommit repository. Inactive means the key cannot be used. + // The status of the SSH public key. Active means that the key can be used for + // authentication with an AWS CodeCommit repository. Inactive means that the + // key cannot be used. // // Status is a required field Status *string `type:"string" required:"true" enum:"statusType"` // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), // when the SSH public key was uploaded. - UploadDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + UploadDate *time.Time `type:"timestamp"` // The name of the IAM user associated with the SSH public key. // @@ -24857,7 +26904,7 @@ func (s *SSHPublicKey) SetUserName(v string) *SSHPublicKey { // Contains information about an SSH public key, without the key's body or fingerprint. // -// This data type is used as a response element in the ListSSHPublicKeys action. +// This data type is used as a response element in the ListSSHPublicKeys operation. type SSHPublicKeyMetadata struct { _ struct{} `type:"structure"` @@ -24866,8 +26913,9 @@ type SSHPublicKeyMetadata struct { // SSHPublicKeyId is a required field SSHPublicKeyId *string `min:"20" type:"string" required:"true"` - // The status of the SSH public key. Active means the key can be used for authentication - // with an AWS CodeCommit repository. Inactive means the key cannot be used. + // The status of the SSH public key. Active means that the key can be used for + // authentication with an AWS CodeCommit repository. Inactive means that the + // key cannot be used. // // Status is a required field Status *string `type:"string" required:"true" enum:"statusType"` @@ -24876,7 +26924,7 @@ type SSHPublicKeyMetadata struct { // when the SSH public key was uploaded. // // UploadDate is a required field - UploadDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + UploadDate *time.Time `type:"timestamp" required:"true"` // The name of the IAM user associated with the SSH public key. // @@ -24921,7 +26969,7 @@ func (s *SSHPublicKeyMetadata) SetUserName(v string) *SSHPublicKeyMetadata { // Contains information about a server certificate. // // This data type is used as a response element in the GetServerCertificate -// action. +// operation. type ServerCertificate struct { _ struct{} `type:"structure"` @@ -24972,7 +27020,7 @@ func (s *ServerCertificate) SetServerCertificateMetadata(v *ServerCertificateMet // certificate chain, and private key. // // This data type is used as a response element in the UploadServerCertificate -// and ListServerCertificates actions. +// and ListServerCertificates operations. type ServerCertificateMetadata struct { _ struct{} `type:"structure"` @@ -24985,7 +27033,7 @@ type ServerCertificateMetadata struct { Arn *string `min:"20" type:"string" required:"true"` // The date on which the certificate is set to expire. - Expiration *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Expiration *time.Time `type:"timestamp"` // The path to the server certificate. 
For more information about paths, see // IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) @@ -25007,7 +27055,7 @@ type ServerCertificateMetadata struct { ServerCertificateName *string `min:"1" type:"string" required:"true"` // The date when the server certificate was uploaded. - UploadDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + UploadDate *time.Time `type:"timestamp"` } // String returns the string representation @@ -25056,7 +27104,7 @@ func (s *ServerCertificateMetadata) SetUploadDate(v time.Time) *ServerCertificat return s } -// Contains the details of a service specific credential. +// Contains the details of a service-specific credential. type ServiceSpecificCredential struct { _ struct{} `type:"structure"` @@ -25064,7 +27112,7 @@ type ServiceSpecificCredential struct { // when the service-specific credential were created. // // CreateDate is a required field - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreateDate *time.Time `type:"timestamp" required:"true"` // The name of the service associated with the service-specific credential. // @@ -25089,8 +27137,8 @@ type ServiceSpecificCredential struct { // ServiceUserName is a required field ServiceUserName *string `min:"17" type:"string" required:"true"` - // The status of the service-specific credential. Active means the key is valid - // for API calls, while Inactive means it is not. + // The status of the service-specific credential. Active means that the key + // is valid for API calls, while Inactive means it is not. // // Status is a required field Status *string `type:"string" required:"true" enum:"statusType"` @@ -25161,7 +27209,7 @@ type ServiceSpecificCredentialMetadata struct { // when the service-specific credential were created. // // CreateDate is a required field - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreateDate *time.Time `type:"timestamp" required:"true"` // The name of the service associated with the service-specific credential. // @@ -25178,8 +27226,8 @@ type ServiceSpecificCredentialMetadata struct { // ServiceUserName is a required field ServiceUserName *string `min:"17" type:"string" required:"true"` - // The status of the service-specific credential. Active means the key is valid - // for API calls, while Inactive means it is not. + // The status of the service-specific credential. Active means that the key + // is valid for API calls, while Inactive means it is not. // // Status is a required field Status *string `type:"string" required:"true" enum:"statusType"` @@ -25200,86 +27248,396 @@ func (s ServiceSpecificCredentialMetadata) GoString() string { return s.String() } -// SetCreateDate sets the CreateDate field's value. -func (s *ServiceSpecificCredentialMetadata) SetCreateDate(v time.Time) *ServiceSpecificCredentialMetadata { - s.CreateDate = &v - return s +// SetCreateDate sets the CreateDate field's value. +func (s *ServiceSpecificCredentialMetadata) SetCreateDate(v time.Time) *ServiceSpecificCredentialMetadata { + s.CreateDate = &v + return s +} + +// SetServiceName sets the ServiceName field's value. +func (s *ServiceSpecificCredentialMetadata) SetServiceName(v string) *ServiceSpecificCredentialMetadata { + s.ServiceName = &v + return s +} + +// SetServiceSpecificCredentialId sets the ServiceSpecificCredentialId field's value. 
+func (s *ServiceSpecificCredentialMetadata) SetServiceSpecificCredentialId(v string) *ServiceSpecificCredentialMetadata { + s.ServiceSpecificCredentialId = &v + return s +} + +// SetServiceUserName sets the ServiceUserName field's value. +func (s *ServiceSpecificCredentialMetadata) SetServiceUserName(v string) *ServiceSpecificCredentialMetadata { + s.ServiceUserName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ServiceSpecificCredentialMetadata) SetStatus(v string) *ServiceSpecificCredentialMetadata { + s.Status = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *ServiceSpecificCredentialMetadata) SetUserName(v string) *ServiceSpecificCredentialMetadata { + s.UserName = &v + return s +} + +type SetDefaultPolicyVersionInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM policy whose default version you + // want to set. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicyArn is a required field + PolicyArn *string `min:"20" type:"string" required:"true"` + + // The version of the policy to set as the default (operative) version. + // + // For more information about managed policy versions, see Versioning for Managed + // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) + // in the IAM User Guide. + // + // VersionId is a required field + VersionId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s SetDefaultPolicyVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetDefaultPolicyVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SetDefaultPolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetDefaultPolicyVersionInput"} + if s.PolicyArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyArn")) + } + if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + } + if s.VersionId == nil { + invalidParams.Add(request.NewErrParamRequired("VersionId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *SetDefaultPolicyVersionInput) SetPolicyArn(v string) *SetDefaultPolicyVersionInput { + s.PolicyArn = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *SetDefaultPolicyVersionInput) SetVersionId(v string) *SetDefaultPolicyVersionInput { + s.VersionId = &v + return s +} + +type SetDefaultPolicyVersionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s SetDefaultPolicyVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetDefaultPolicyVersionOutput) GoString() string { + return s.String() +} + +// Contains information about an X.509 signing certificate. +// +// This data type is used as a response element in the UploadSigningCertificate +// and ListSigningCertificates operations. +type SigningCertificate struct { + _ struct{} `type:"structure"` + + // The contents of the signing certificate. 
+ // + // CertificateBody is a required field + CertificateBody *string `min:"1" type:"string" required:"true"` + + // The ID for the signing certificate. + // + // CertificateId is a required field + CertificateId *string `min:"24" type:"string" required:"true"` + + // The status of the signing certificate. Active means that the key is valid + // for API calls, while Inactive means it is not. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"statusType"` + + // The date when the signing certificate was uploaded. + UploadDate *time.Time `type:"timestamp"` + + // The name of the user the signing certificate is associated with. + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s SigningCertificate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SigningCertificate) GoString() string { + return s.String() } -// SetServiceName sets the ServiceName field's value. -func (s *ServiceSpecificCredentialMetadata) SetServiceName(v string) *ServiceSpecificCredentialMetadata { - s.ServiceName = &v +// SetCertificateBody sets the CertificateBody field's value. +func (s *SigningCertificate) SetCertificateBody(v string) *SigningCertificate { + s.CertificateBody = &v return s } -// SetServiceSpecificCredentialId sets the ServiceSpecificCredentialId field's value. -func (s *ServiceSpecificCredentialMetadata) SetServiceSpecificCredentialId(v string) *ServiceSpecificCredentialMetadata { - s.ServiceSpecificCredentialId = &v +// SetCertificateId sets the CertificateId field's value. +func (s *SigningCertificate) SetCertificateId(v string) *SigningCertificate { + s.CertificateId = &v return s } -// SetServiceUserName sets the ServiceUserName field's value. -func (s *ServiceSpecificCredentialMetadata) SetServiceUserName(v string) *ServiceSpecificCredentialMetadata { - s.ServiceUserName = &v +// SetStatus sets the Status field's value. +func (s *SigningCertificate) SetStatus(v string) *SigningCertificate { + s.Status = &v return s } -// SetStatus sets the Status field's value. -func (s *ServiceSpecificCredentialMetadata) SetStatus(v string) *ServiceSpecificCredentialMetadata { - s.Status = &v +// SetUploadDate sets the UploadDate field's value. +func (s *SigningCertificate) SetUploadDate(v time.Time) *SigningCertificate { + s.UploadDate = &v return s } // SetUserName sets the UserName field's value. -func (s *ServiceSpecificCredentialMetadata) SetUserName(v string) *ServiceSpecificCredentialMetadata { +func (s *SigningCertificate) SetUserName(v string) *SigningCertificate { s.UserName = &v return s } -type SetDefaultPolicyVersionInput struct { +type SimulateCustomPolicyInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the IAM policy whose default version you - // want to set. + // A list of names of API operations to evaluate in the simulation. Each operation + // is evaluated against each resource. Each operation must include the service + // identifier, such as iam:CreateUser. + // + // ActionNames is a required field + ActionNames []*string `type:"list" required:"true"` + + // The ARN of the IAM user that you want to use as the simulated caller of the + // API operations. CallerArn is required if you include a ResourcePolicy so + // that the policy's Principal element has a value to use in evaluating the + // policy. + // + // You can specify only the ARN of an IAM user. 
You cannot specify the ARN of + // an assumed role, federated user, or a service principal. + CallerArn *string `min:"1" type:"string"` + + // A list of context keys and corresponding values for the simulation to use. + // Whenever a context key is evaluated in one of the simulated IAM permission + // policies, the corresponding value is supplied. + ContextEntries []*ContextEntry `type:"list"` + + // Use this parameter only when paginating results and only after you receive + // a response indicating that the results are truncated. Set it to the value + // of the Marker element in the response that you received to indicate where + // the next call should start. + Marker *string `min:"1" type:"string"` + + // (Optional) Use this only when paginating results to indicate the maximum + // number of items you want in the response. If additional items exist beyond + // the maximum you specify, the IsTruncated response element is true. + // + // If you do not include this parameter, it defaults to 100. Note that IAM might + // return fewer results, even when there are more results available. In that + // case, the IsTruncated response element returns true and Marker contains a + // value to include in the subsequent call that tells the service where to continue + // from. + MaxItems *int64 `min:"1" type:"integer"` + + // A list of policy documents to include in the simulation. Each document is + // specified as a string containing the complete, valid JSON text of an IAM + // policy. Do not include any resource-based policies in this parameter. Any + // resource-based policy must be submitted with the ResourcePolicy parameter. + // The policies cannot be "scope-down" policies, such as you could include in + // a call to GetFederationToken (http://docs.aws.amazon.com/IAM/latest/APIReference/API_GetFederationToken.html) + // or one of the AssumeRole (http://docs.aws.amazon.com/IAM/latest/APIReference/API_AssumeRole.html) + // API operations. In other words, do not use policies designed to restrict + // what a user can do while using the temporary credentials. + // + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // PolicyInputList is a required field + PolicyInputList []*string `type:"list" required:"true"` + + // A list of ARNs of AWS resources to include in the simulation. If this parameter + // is not provided then the value defaults to * (all resources). Each API in + // the ActionNames parameter is evaluated for each resource in this list. The + // simulation determines the access result (allowed or denied) of each combination + // and reports it in the response. + // + // The simulation does not automatically retrieve policies for the specified + // resources. If you want to include a resource policy in the simulation, then + // you must include the policy as a string in the ResourcePolicy parameter. + // + // If you include a ResourcePolicy, then it must be applicable to all of the + // resources included in the simulation or you receive an invalid input error. 
// // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) // in the AWS General Reference. + ResourceArns []*string `type:"list"` + + // Specifies the type of simulation to run. Different API operations that support + // resource-based policies require different combinations of resources. By specifying + // the type of simulation to run, you enable the policy simulator to enforce + // the presence of the required resources to ensure reliable simulation results. + // If your simulation does not match one of the following scenarios, then you + // can omit this parameter. The following list shows each of the supported scenario + // values and the resources that you must define to run the simulation. // - // PolicyArn is a required field - PolicyArn *string `min:"20" type:"string" required:"true"` + // Each of the EC2 scenarios requires that you specify instance, image, and + // security-group resources. If your scenario includes an EBS volume, then you + // must specify that volume as a resource. If the EC2 scenario includes VPC, + // then you must supply the network-interface resource. If it includes an IP + // subnet, then you must specify the subnet resource. For more information on + // the EC2 scenario options, see Supported Platforms (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html) + // in the Amazon EC2 User Guide. + // + // * EC2-Classic-InstanceStore + // + // instance, image, security-group + // + // * EC2-Classic-EBS + // + // instance, image, security-group, volume + // + // * EC2-VPC-InstanceStore + // + // instance, image, security-group, network-interface + // + // * EC2-VPC-InstanceStore-Subnet + // + // instance, image, security-group, network-interface, subnet + // + // * EC2-VPC-EBS + // + // instance, image, security-group, network-interface, volume + // + // * EC2-VPC-EBS-Subnet + // + // instance, image, security-group, network-interface, subnet, volume + ResourceHandlingOption *string `min:"1" type:"string"` - // The version of the policy to set as the default (operative) version. + // An ARN representing the AWS account ID that specifies the owner of any simulated + // resource that does not identify its owner in the resource ARN, such as an + // S3 bucket or object. If ResourceOwner is specified, it is also used as the + // account owner of any ResourcePolicy included in the simulation. If the ResourceOwner + // parameter is not specified, then the owner of the resources and the resource + // policy defaults to the account of the identity provided in CallerArn. This + // parameter is required only if you specify a resource-based policy and account + // that owns the resource is different from the account that owns the simulated + // calling user CallerArn. + // + // The ARN for an account uses the following syntax: arn:aws:iam::AWS-account-ID:root. + // For example, to represent the account with the 112233445566 ID, use the following + // ARN: arn:aws:iam::112233445566-ID:root. + ResourceOwner *string `min:"1" type:"string"` + + // A resource-based policy to include in the simulation provided as a string. + // Each resource in the simulation is treated as if it had this policy attached. + // You can include only one resource-based policy in a simulation. 
// - // For more information about managed policy versions, see Versioning for Managed - // Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies-managed-versions.html) - // in the IAM User Guide. + // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this + // parameter is a string of characters consisting of the following: // - // VersionId is a required field - VersionId *string `type:"string" required:"true"` + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + ResourcePolicy *string `min:"1" type:"string"` } // String returns the string representation -func (s SetDefaultPolicyVersionInput) String() string { +func (s SimulateCustomPolicyInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SetDefaultPolicyVersionInput) GoString() string { +func (s SimulateCustomPolicyInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *SetDefaultPolicyVersionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SetDefaultPolicyVersionInput"} - if s.PolicyArn == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyArn")) +func (s *SimulateCustomPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SimulateCustomPolicyInput"} + if s.ActionNames == nil { + invalidParams.Add(request.NewErrParamRequired("ActionNames")) } - if s.PolicyArn != nil && len(*s.PolicyArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("PolicyArn", 20)) + if s.CallerArn != nil && len(*s.CallerArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CallerArn", 1)) } - if s.VersionId == nil { - invalidParams.Add(request.NewErrParamRequired("VersionId")) + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + if s.PolicyInputList == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyInputList")) + } + if s.ResourceHandlingOption != nil && len(*s.ResourceHandlingOption) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceHandlingOption", 1)) + } + if s.ResourceOwner != nil && len(*s.ResourceOwner) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceOwner", 1)) + } + if s.ResourcePolicy != nil && len(*s.ResourcePolicy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourcePolicy", 1)) + } + if s.ContextEntries != nil { + for i, v := range s.ContextEntries { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ContextEntries", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -25288,120 +27646,143 @@ func (s *SetDefaultPolicyVersionInput) Validate() error { return nil } -// SetPolicyArn sets the PolicyArn field's value. -func (s *SetDefaultPolicyVersionInput) SetPolicyArn(v string) *SetDefaultPolicyVersionInput { - s.PolicyArn = &v +// SetActionNames sets the ActionNames field's value. 
+func (s *SimulateCustomPolicyInput) SetActionNames(v []*string) *SimulateCustomPolicyInput { + s.ActionNames = v return s } -// SetVersionId sets the VersionId field's value. -func (s *SetDefaultPolicyVersionInput) SetVersionId(v string) *SetDefaultPolicyVersionInput { - s.VersionId = &v +// SetCallerArn sets the CallerArn field's value. +func (s *SimulateCustomPolicyInput) SetCallerArn(v string) *SimulateCustomPolicyInput { + s.CallerArn = &v return s } -type SetDefaultPolicyVersionOutput struct { - _ struct{} `type:"structure"` +// SetContextEntries sets the ContextEntries field's value. +func (s *SimulateCustomPolicyInput) SetContextEntries(v []*ContextEntry) *SimulateCustomPolicyInput { + s.ContextEntries = v + return s } -// String returns the string representation -func (s SetDefaultPolicyVersionOutput) String() string { - return awsutil.Prettify(s) +// SetMarker sets the Marker field's value. +func (s *SimulateCustomPolicyInput) SetMarker(v string) *SimulateCustomPolicyInput { + s.Marker = &v + return s } -// GoString returns the string representation -func (s SetDefaultPolicyVersionOutput) GoString() string { - return s.String() +// SetMaxItems sets the MaxItems field's value. +func (s *SimulateCustomPolicyInput) SetMaxItems(v int64) *SimulateCustomPolicyInput { + s.MaxItems = &v + return s } -// Contains information about an X.509 signing certificate. -// -// This data type is used as a response element in the UploadSigningCertificate -// and ListSigningCertificates actions. -type SigningCertificate struct { - _ struct{} `type:"structure"` +// SetPolicyInputList sets the PolicyInputList field's value. +func (s *SimulateCustomPolicyInput) SetPolicyInputList(v []*string) *SimulateCustomPolicyInput { + s.PolicyInputList = v + return s +} - // The contents of the signing certificate. - // - // CertificateBody is a required field - CertificateBody *string `min:"1" type:"string" required:"true"` +// SetResourceArns sets the ResourceArns field's value. +func (s *SimulateCustomPolicyInput) SetResourceArns(v []*string) *SimulateCustomPolicyInput { + s.ResourceArns = v + return s +} + +// SetResourceHandlingOption sets the ResourceHandlingOption field's value. +func (s *SimulateCustomPolicyInput) SetResourceHandlingOption(v string) *SimulateCustomPolicyInput { + s.ResourceHandlingOption = &v + return s +} + +// SetResourceOwner sets the ResourceOwner field's value. +func (s *SimulateCustomPolicyInput) SetResourceOwner(v string) *SimulateCustomPolicyInput { + s.ResourceOwner = &v + return s +} - // The ID for the signing certificate. - // - // CertificateId is a required field - CertificateId *string `min:"24" type:"string" required:"true"` +// SetResourcePolicy sets the ResourcePolicy field's value. +func (s *SimulateCustomPolicyInput) SetResourcePolicy(v string) *SimulateCustomPolicyInput { + s.ResourcePolicy = &v + return s +} - // The status of the signing certificate. Active means the key is valid for - // API calls, while Inactive means it is not. - // - // Status is a required field - Status *string `type:"string" required:"true" enum:"statusType"` +// Contains the response to a successful SimulatePrincipalPolicy or SimulateCustomPolicy +// request. +type SimulatePolicyResponse struct { + _ struct{} `type:"structure"` - // The date when the signing certificate was uploaded. - UploadDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + // The results of the simulation. 
+ EvaluationResults []*EvaluationResult `type:"list"` - // The name of the user the signing certificate is associated with. - // - // UserName is a required field - UserName *string `min:"1" type:"string" required:"true"` + // A flag that indicates whether there are more items to return. If your results + // were truncated, you can make a subsequent pagination request using the Marker + // request parameter to retrieve more items. Note that IAM might return fewer + // than the MaxItems number of results even when there are more results available. + // We recommend that you check IsTruncated after every call to ensure that you + // receive all of your results. + IsTruncated *bool `type:"boolean"` + + // When IsTruncated is true, this element is present and contains the value + // to use for the Marker parameter in a subsequent pagination request. + Marker *string `min:"1" type:"string"` } // String returns the string representation -func (s SigningCertificate) String() string { +func (s SimulatePolicyResponse) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SigningCertificate) GoString() string { +func (s SimulatePolicyResponse) GoString() string { return s.String() } -// SetCertificateBody sets the CertificateBody field's value. -func (s *SigningCertificate) SetCertificateBody(v string) *SigningCertificate { - s.CertificateBody = &v - return s -} - -// SetCertificateId sets the CertificateId field's value. -func (s *SigningCertificate) SetCertificateId(v string) *SigningCertificate { - s.CertificateId = &v - return s -} - -// SetStatus sets the Status field's value. -func (s *SigningCertificate) SetStatus(v string) *SigningCertificate { - s.Status = &v +// SetEvaluationResults sets the EvaluationResults field's value. +func (s *SimulatePolicyResponse) SetEvaluationResults(v []*EvaluationResult) *SimulatePolicyResponse { + s.EvaluationResults = v return s } -// SetUploadDate sets the UploadDate field's value. -func (s *SigningCertificate) SetUploadDate(v time.Time) *SigningCertificate { - s.UploadDate = &v +// SetIsTruncated sets the IsTruncated field's value. +func (s *SimulatePolicyResponse) SetIsTruncated(v bool) *SimulatePolicyResponse { + s.IsTruncated = &v return s } -// SetUserName sets the UserName field's value. -func (s *SigningCertificate) SetUserName(v string) *SigningCertificate { - s.UserName = &v +// SetMarker sets the Marker field's value. +func (s *SimulatePolicyResponse) SetMarker(v string) *SimulatePolicyResponse { + s.Marker = &v return s } -type SimulateCustomPolicyInput struct { +type SimulatePrincipalPolicyInput struct { _ struct{} `type:"structure"` - // A list of names of API actions to evaluate in the simulation. Each action - // is evaluated against each resource. Each action must include the service - // identifier, such as iam:CreateUser. + // A list of names of API operations to evaluate in the simulation. Each operation + // is evaluated for each resource. Each operation must include the service identifier, + // such as iam:CreateUser. // // ActionNames is a required field ActionNames []*string `type:"list" required:"true"` - // The ARN of the IAM user that you want to use as the simulated caller of the - // APIs. CallerArn is required if you include a ResourcePolicy so that the policy's - // Principal element has a value to use in evaluating the policy. + // The ARN of the IAM user that you want to specify as the simulated caller + // of the API operations. 
If you do not specify a CallerArn, it defaults to + // the ARN of the user that you specify in PolicySourceArn, if you specified + // a user. If you include both a PolicySourceArn (for example, arn:aws:iam::123456789012:user/David) + // and a CallerArn (for example, arn:aws:iam::123456789012:user/Bob), the result + // is that you simulate calling the API operations as Bob, as if Bob had David's + // policies. // // You can specify only the ARN of an IAM user. You cannot specify the ARN of // an assumed role, federated user, or a service principal. + // + // CallerArn is required if you include a ResourcePolicy and the PolicySourceArn + // is not the ARN for an IAM user. This is required so that the resource-based + // policy's Principal element has a value to use in evaluating the policy. + // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. CallerArn *string `min:"1" type:"string"` // A list of context keys and corresponding values for the simulation to use. @@ -25426,27 +27807,38 @@ type SimulateCustomPolicyInput struct { // from. MaxItems *int64 `min:"1" type:"integer"` - // A list of policy documents to include in the simulation. Each document is - // specified as a string containing the complete, valid JSON text of an IAM - // policy. Do not include any resource-based policies in this parameter. Any - // resource-based policy must be submitted with the ResourcePolicy parameter. - // The policies cannot be "scope-down" policies, such as you could include in - // a call to GetFederationToken (http://docs.aws.amazon.com/IAM/latest/APIReference/API_GetFederationToken.html) - // or one of the AssumeRole (http://docs.aws.amazon.com/IAM/latest/APIReference/API_AssumeRole.html) - // APIs to restrict what a user can do while using the temporary credentials. + // An optional list of additional policy documents to include in the simulation. + // Each document is specified as a string containing the complete, valid JSON + // text of an IAM policy. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: // - // PolicyInputList is a required field - PolicyInputList []*string `type:"list" required:"true"` + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + PolicyInputList []*string `type:"list"` + + // The Amazon Resource Name (ARN) of a user, group, or role whose policies you + // want to include in the simulation. If you specify a user, group, or role, + // the simulation includes all policies that are associated with that entity. + // If you specify a user, the simulation also includes all policies that are + // attached to any groups the user belongs to. 
+ // + // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) + // in the AWS General Reference. + // + // PolicySourceArn is a required field + PolicySourceArn *string `min:"20" type:"string" required:"true"` // A list of ARNs of AWS resources to include in the simulation. If this parameter - // is not provided then the value defaults to * (all resources). Each API in + // is not provided, then the value defaults to * (all resources). Each API in // the ActionNames parameter is evaluated for each resource in this list. The // simulation determines the access result (allowed or denied) of each combination // and reports it in the response. @@ -25455,21 +27847,18 @@ type SimulateCustomPolicyInput struct { // resources. If you want to include a resource policy in the simulation, then // you must include the policy as a string in the ResourcePolicy parameter. // - // If you include a ResourcePolicy, then it must be applicable to all of the - // resources included in the simulation or you receive an invalid input error. - // // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) // in the AWS General Reference. ResourceArns []*string `type:"list"` - // Specifies the type of simulation to run. Different APIs that support resource-based - // policies require different combinations of resources. By specifying the type - // of simulation to run, you enable the policy simulator to enforce the presence - // of the required resources to ensure reliable simulation results. If your - // simulation does not match one of the following scenarios, then you can omit - // this parameter. The following list shows each of the supported scenario values - // and the resources that you must define to run the simulation. + // Specifies the type of simulation to run. Different API operations that support + // resource-based policies require different combinations of resources. By specifying + // the type of simulation to run, you enable the policy simulator to enforce + // the presence of the required resources to ensure reliable simulation results. + // If your simulation does not match one of the following scenarios, then you + // can omit this parameter. The following list shows each of the supported scenario + // values and the resources that you must define to run the simulation. // // Each of the EC2 scenarios requires that you specify instance, image, and // security-group resources. If your scenario includes an EBS volume, then you @@ -25477,7 +27866,7 @@ type SimulateCustomPolicyInput struct { // then you must supply the network-interface resource. If it includes an IP // subnet, then you must specify the subnet resource. For more information on // the EC2 scenario options, see Supported Platforms (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html) - // in the AWS EC2 User Guide. + // in the Amazon EC2 User Guide. // // * EC2-Classic-InstanceStore // @@ -25520,27 +27909,32 @@ type SimulateCustomPolicyInput struct { // You can include only one resource-based policy in a simulation. 
// // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) ResourcePolicy *string `min:"1" type:"string"` } // String returns the string representation -func (s SimulateCustomPolicyInput) String() string { +func (s SimulatePrincipalPolicyInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SimulateCustomPolicyInput) GoString() string { +func (s SimulatePrincipalPolicyInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *SimulateCustomPolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SimulateCustomPolicyInput"} +func (s *SimulatePrincipalPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SimulatePrincipalPolicyInput"} if s.ActionNames == nil { invalidParams.Add(request.NewErrParamRequired("ActionNames")) } @@ -25553,8 +27947,11 @@ func (s *SimulateCustomPolicyInput) Validate() error { if s.MaxItems != nil && *s.MaxItems < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) } - if s.PolicyInputList == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyInputList")) + if s.PolicySourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("PolicySourceArn")) + } + if s.PolicySourceArn != nil && len(*s.PolicySourceArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("PolicySourceArn", 20)) } if s.ResourceHandlingOption != nil && len(*s.ResourceHandlingOption) < 1 { invalidParams.Add(request.NewErrParamMinLen("ResourceHandlingOption", 1)) @@ -25583,317 +27980,326 @@ func (s *SimulateCustomPolicyInput) Validate() error { } // SetActionNames sets the ActionNames field's value. -func (s *SimulateCustomPolicyInput) SetActionNames(v []*string) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetActionNames(v []*string) *SimulatePrincipalPolicyInput { s.ActionNames = v return s } // SetCallerArn sets the CallerArn field's value. -func (s *SimulateCustomPolicyInput) SetCallerArn(v string) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetCallerArn(v string) *SimulatePrincipalPolicyInput { s.CallerArn = &v return s } // SetContextEntries sets the ContextEntries field's value. -func (s *SimulateCustomPolicyInput) SetContextEntries(v []*ContextEntry) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetContextEntries(v []*ContextEntry) *SimulatePrincipalPolicyInput { s.ContextEntries = v return s } // SetMarker sets the Marker field's value. 
-func (s *SimulateCustomPolicyInput) SetMarker(v string) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetMarker(v string) *SimulatePrincipalPolicyInput { s.Marker = &v return s } // SetMaxItems sets the MaxItems field's value. -func (s *SimulateCustomPolicyInput) SetMaxItems(v int64) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetMaxItems(v int64) *SimulatePrincipalPolicyInput { s.MaxItems = &v return s } // SetPolicyInputList sets the PolicyInputList field's value. -func (s *SimulateCustomPolicyInput) SetPolicyInputList(v []*string) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetPolicyInputList(v []*string) *SimulatePrincipalPolicyInput { s.PolicyInputList = v return s } +// SetPolicySourceArn sets the PolicySourceArn field's value. +func (s *SimulatePrincipalPolicyInput) SetPolicySourceArn(v string) *SimulatePrincipalPolicyInput { + s.PolicySourceArn = &v + return s +} + // SetResourceArns sets the ResourceArns field's value. -func (s *SimulateCustomPolicyInput) SetResourceArns(v []*string) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetResourceArns(v []*string) *SimulatePrincipalPolicyInput { s.ResourceArns = v return s } // SetResourceHandlingOption sets the ResourceHandlingOption field's value. -func (s *SimulateCustomPolicyInput) SetResourceHandlingOption(v string) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetResourceHandlingOption(v string) *SimulatePrincipalPolicyInput { s.ResourceHandlingOption = &v return s } // SetResourceOwner sets the ResourceOwner field's value. -func (s *SimulateCustomPolicyInput) SetResourceOwner(v string) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetResourceOwner(v string) *SimulatePrincipalPolicyInput { s.ResourceOwner = &v return s } // SetResourcePolicy sets the ResourcePolicy field's value. -func (s *SimulateCustomPolicyInput) SetResourcePolicy(v string) *SimulateCustomPolicyInput { +func (s *SimulatePrincipalPolicyInput) SetResourcePolicy(v string) *SimulatePrincipalPolicyInput { s.ResourcePolicy = &v return s } -// Contains the response to a successful SimulatePrincipalPolicy or SimulateCustomPolicy -// request. -type SimulatePolicyResponse struct { +// Contains a reference to a Statement element in a policy document that determines +// the result of the simulation. +// +// This data type is used by the MatchedStatements member of the EvaluationResult +// type. +type Statement struct { _ struct{} `type:"structure"` - // The results of the simulation. - EvaluationResults []*EvaluationResult `type:"list"` + // The row and column of the end of a Statement in an IAM policy. + EndPosition *Position `type:"structure"` - // A flag that indicates whether there are more items to return. If your results - // were truncated, you can make a subsequent pagination request using the Marker - // request parameter to retrieve more items. Note that IAM might return fewer - // than the MaxItems number of results even when there are more results available. - // We recommend that you check IsTruncated after every call to ensure that you - // receive all of your results. - IsTruncated *bool `type:"boolean"` + // The identifier of the policy that was provided as an input. + SourcePolicyId *string `type:"string"` + + // The type of the policy. + SourcePolicyType *string `type:"string" enum:"PolicySourceType"` + + // The row and column of the beginning of the Statement in an IAM policy. 
+ StartPosition *Position `type:"structure"` +} + +// String returns the string representation +func (s Statement) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Statement) GoString() string { + return s.String() +} + +// SetEndPosition sets the EndPosition field's value. +func (s *Statement) SetEndPosition(v *Position) *Statement { + s.EndPosition = v + return s +} + +// SetSourcePolicyId sets the SourcePolicyId field's value. +func (s *Statement) SetSourcePolicyId(v string) *Statement { + s.SourcePolicyId = &v + return s +} + +// SetSourcePolicyType sets the SourcePolicyType field's value. +func (s *Statement) SetSourcePolicyType(v string) *Statement { + s.SourcePolicyType = &v + return s +} + +// SetStartPosition sets the StartPosition field's value. +func (s *Statement) SetStartPosition(v *Position) *Statement { + s.StartPosition = v + return s +} + +// A structure that represents user-provided metadata that can be associated +// with a resource such as an IAM user or role. For more information about tagging, +// see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) +// in the IAM User Guide. +type Tag struct { + _ struct{} `type:"structure"` + + // The key name that can be used to look up or retrieve the associated value. + // For example, Department or Cost Center are common choices. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // The value associated with this tag. For example, tags with a key name of + // Department could have values such as Human Resources, Accounting, and Support. + // Tags with a key name of Cost Center might have values that consist of the + // number associated with the different cost centers in your company. Typically, + // many resources have tags with the same key name but with different values. + // + // AWS always interprets the tag Value as a single string. If you need to store + // an array, you can store comma-separated values in the string. However, you + // must interpret the value in your code. + // + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} - // When IsTruncated is true, this element is present and contains the value - // to use for the Marker parameter in a subsequent pagination request. - Marker *string `min:"1" type:"string"` +type TagRoleInput struct { + _ struct{} `type:"structure"` + + // The name of the role that you want to add tags to. 
+ // + // This parameter accepts (through its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that consist of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` + + // The list of tags that you want to attach to the role. Each tag consists of + // a key name and an associated value. You can specify this with a JSON string. + // + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` } // String returns the string representation -func (s SimulatePolicyResponse) String() string { +func (s TagRoleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SimulatePolicyResponse) GoString() string { +func (s TagRoleInput) GoString() string { return s.String() } -// SetEvaluationResults sets the EvaluationResults field's value. -func (s *SimulatePolicyResponse) SetEvaluationResults(v []*EvaluationResult) *SimulatePolicyResponse { - s.EvaluationResults = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *TagRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagRoleInput"} + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetIsTruncated sets the IsTruncated field's value. -func (s *SimulatePolicyResponse) SetIsTruncated(v bool) *SimulatePolicyResponse { - s.IsTruncated = &v +// SetRoleName sets the RoleName field's value. +func (s *TagRoleInput) SetRoleName(v string) *TagRoleInput { + s.RoleName = &v return s } -// SetMarker sets the Marker field's value. -func (s *SimulatePolicyResponse) SetMarker(v string) *SimulatePolicyResponse { - s.Marker = &v +// SetTags sets the Tags field's value. +func (s *TagRoleInput) SetTags(v []*Tag) *TagRoleInput { + s.Tags = v return s } -type SimulatePrincipalPolicyInput struct { +type TagRoleOutput struct { _ struct{} `type:"structure"` +} - // A list of names of API actions to evaluate in the simulation. Each action - // is evaluated for each resource. Each action must include the service identifier, - // such as iam:CreateUser. - // - // ActionNames is a required field - ActionNames []*string `type:"list" required:"true"` - - // The ARN of the IAM user that you want to specify as the simulated caller - // of the APIs. If you do not specify a CallerArn, it defaults to the ARN of - // the user that you specify in PolicySourceArn, if you specified a user. If - // you include both a PolicySourceArn (for example, arn:aws:iam::123456789012:user/David) - // and a CallerArn (for example, arn:aws:iam::123456789012:user/Bob), the result - // is that you simulate calling the APIs as Bob, as if Bob had David's policies. - // - // You can specify only the ARN of an IAM user. You cannot specify the ARN of - // an assumed role, federated user, or a service principal. 
- // - // CallerArn is required if you include a ResourcePolicy and the PolicySourceArn - // is not the ARN for an IAM user. This is required so that the resource-based - // policy's Principal element has a value to use in evaluating the policy. - // - // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS - // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) - // in the AWS General Reference. - CallerArn *string `min:"1" type:"string"` - - // A list of context keys and corresponding values for the simulation to use. - // Whenever a context key is evaluated in one of the simulated IAM permission - // policies, the corresponding value is supplied. - ContextEntries []*ContextEntry `type:"list"` - - // Use this parameter only when paginating results and only after you receive - // a response indicating that the results are truncated. Set it to the value - // of the Marker element in the response that you received to indicate where - // the next call should start. - Marker *string `min:"1" type:"string"` - - // (Optional) Use this only when paginating results to indicate the maximum - // number of items you want in the response. If additional items exist beyond - // the maximum you specify, the IsTruncated response element is true. - // - // If you do not include this parameter, it defaults to 100. Note that IAM might - // return fewer results, even when there are more results available. In that - // case, the IsTruncated response element returns true and Marker contains a - // value to include in the subsequent call that tells the service where to continue - // from. - MaxItems *int64 `min:"1" type:"integer"` +// String returns the string representation +func (s TagRoleOutput) String() string { + return awsutil.Prettify(s) +} - // An optional list of additional policy documents to include in the simulation. - // Each document is specified as a string containing the complete, valid JSON - // text of an IAM policy. - // - // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). - PolicyInputList []*string `type:"list"` +// GoString returns the string representation +func (s TagRoleOutput) GoString() string { + return s.String() +} - // The Amazon Resource Name (ARN) of a user, group, or role whose policies you - // want to include in the simulation. If you specify a user, group, or role, - // the simulation includes all policies that are associated with that entity. - // If you specify a user, the simulation also includes all policies that are - // attached to any groups the user belongs to. - // - // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS - // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) - // in the AWS General Reference. - // - // PolicySourceArn is a required field - PolicySourceArn *string `min:"20" type:"string" required:"true"` +type TagUserInput struct { + _ struct{} `type:"structure"` - // A list of ARNs of AWS resources to include in the simulation. 
If this parameter - // is not provided then the value defaults to * (all resources). Each API in - // the ActionNames parameter is evaluated for each resource in this list. The - // simulation determines the access result (allowed or denied) of each combination - // and reports it in the response. + // The list of tags that you want to attach to the user. Each tag consists of + // a key name and an associated value. // - // The simulation does not automatically retrieve policies for the specified - // resources. If you want to include a resource policy in the simulation, then - // you must include the policy as a string in the ResourcePolicy parameter. - // - // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS - // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) - // in the AWS General Reference. - ResourceArns []*string `type:"list"` + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` - // Specifies the type of simulation to run. Different APIs that support resource-based - // policies require different combinations of resources. By specifying the type - // of simulation to run, you enable the policy simulator to enforce the presence - // of the required resources to ensure reliable simulation results. If your - // simulation does not match one of the following scenarios, then you can omit - // this parameter. The following list shows each of the supported scenario values - // and the resources that you must define to run the simulation. + // The name of the user that you want to add tags to. // - // Each of the EC2 scenarios requires that you specify instance, image, and - // security-group resources. If your scenario includes an EBS volume, then you - // must specify that volume as a resource. If the EC2 scenario includes VPC, - // then you must supply the network-interface resource. If it includes an IP - // subnet, then you must specify the subnet resource. For more information on - // the EC2 scenario options, see Supported Platforms (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html) - // in the AWS EC2 User Guide. - // - // * EC2-Classic-InstanceStore - // - // instance, image, security-group - // - // * EC2-Classic-EBS - // - // instance, image, security-group, volume - // - // * EC2-VPC-InstanceStore - // - // instance, image, security-group, network-interface - // - // * EC2-VPC-InstanceStore-Subnet - // - // instance, image, security-group, network-interface, subnet - // - // * EC2-VPC-EBS - // - // instance, image, security-group, network-interface, volume - // - // * EC2-VPC-EBS-Subnet - // - // instance, image, security-group, network-interface, subnet, volume - ResourceHandlingOption *string `min:"1" type:"string"` - - // An AWS account ID that specifies the owner of any simulated resource that - // does not identify its owner in the resource ARN, such as an S3 bucket or - // object. If ResourceOwner is specified, it is also used as the account owner - // of any ResourcePolicy included in the simulation. If the ResourceOwner parameter - // is not specified, then the owner of the resources and the resource policy - // defaults to the account of the identity provided in CallerArn. This parameter - // is required only if you specify a resource-based policy and account that - // owns the resource is different from the account that owns the simulated calling - // user CallerArn. 
- ResourceOwner *string `min:"1" type:"string"` - - // A resource-based policy to include in the simulation provided as a string. - // Each resource in the simulation is treated as if it had this policy attached. - // You can include only one resource-based policy in a simulation. + // This parameter accepts (through its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that consist of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: =,.@- // - // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). - ResourcePolicy *string `min:"1" type:"string"` + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s SimulatePrincipalPolicyInput) String() string { +func (s TagUserInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SimulatePrincipalPolicyInput) GoString() string { +func (s TagUserInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *SimulatePrincipalPolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SimulatePrincipalPolicyInput"} - if s.ActionNames == nil { - invalidParams.Add(request.NewErrParamRequired("ActionNames")) - } - if s.CallerArn != nil && len(*s.CallerArn) < 1 { - invalidParams.Add(request.NewErrParamMinLen("CallerArn", 1)) - } - if s.Marker != nil && len(*s.Marker) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) - } - if s.MaxItems != nil && *s.MaxItems < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) - } - if s.PolicySourceArn == nil { - invalidParams.Add(request.NewErrParamRequired("PolicySourceArn")) - } - if s.PolicySourceArn != nil && len(*s.PolicySourceArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("PolicySourceArn", 20)) +func (s *TagUserInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagUserInput"} + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) } - if s.ResourceHandlingOption != nil && len(*s.ResourceHandlingOption) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ResourceHandlingOption", 1)) - } - if s.ResourceOwner != nil && len(*s.ResourceOwner) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ResourceOwner", 1)) + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) } - if s.ResourcePolicy != nil && len(*s.ResourcePolicy) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ResourcePolicy", 1)) + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) } - if s.ContextEntries != nil { - for i, v := range s.ContextEntries { + if s.Tags != nil { + for i, v := range s.Tags { if v == nil { continue } if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ContextEntries", i), err.(request.ErrInvalidParams)) + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), 
err.(request.ErrInvalidParams)) } } } @@ -25904,125 +28310,178 @@ func (s *SimulatePrincipalPolicyInput) Validate() error { return nil } -// SetActionNames sets the ActionNames field's value. -func (s *SimulatePrincipalPolicyInput) SetActionNames(v []*string) *SimulatePrincipalPolicyInput { - s.ActionNames = v +// SetTags sets the Tags field's value. +func (s *TagUserInput) SetTags(v []*Tag) *TagUserInput { + s.Tags = v return s } -// SetCallerArn sets the CallerArn field's value. -func (s *SimulatePrincipalPolicyInput) SetCallerArn(v string) *SimulatePrincipalPolicyInput { - s.CallerArn = &v +// SetUserName sets the UserName field's value. +func (s *TagUserInput) SetUserName(v string) *TagUserInput { + s.UserName = &v return s } -// SetContextEntries sets the ContextEntries field's value. -func (s *SimulatePrincipalPolicyInput) SetContextEntries(v []*ContextEntry) *SimulatePrincipalPolicyInput { - s.ContextEntries = v - return s +type TagUserOutput struct { + _ struct{} `type:"structure"` } -// SetMarker sets the Marker field's value. -func (s *SimulatePrincipalPolicyInput) SetMarker(v string) *SimulatePrincipalPolicyInput { - s.Marker = &v - return s +// String returns the string representation +func (s TagUserOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagUserOutput) GoString() string { + return s.String() +} + +type UntagRoleInput struct { + _ struct{} `type:"structure"` + + // The name of the IAM role from which you want to remove tags. + // + // This parameter accepts (through its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that consist of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: _+=,.@- + // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` + + // A list of key names as a simple array of strings. The tags with matching + // keys are removed from the specified role. + // + // TagKeys is a required field + TagKeys []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s UntagRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagRoleInput) GoString() string { + return s.String() } -// SetMaxItems sets the MaxItems field's value. -func (s *SimulatePrincipalPolicyInput) SetMaxItems(v int64) *SimulatePrincipalPolicyInput { - s.MaxItems = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagRoleInput"} + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } -// SetPolicyInputList sets the PolicyInputList field's value. -func (s *SimulatePrincipalPolicyInput) SetPolicyInputList(v []*string) *SimulatePrincipalPolicyInput { - s.PolicyInputList = v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetPolicySourceArn sets the PolicySourceArn field's value. -func (s *SimulatePrincipalPolicyInput) SetPolicySourceArn(v string) *SimulatePrincipalPolicyInput { - s.PolicySourceArn = &v +// SetRoleName sets the RoleName field's value. 
+func (s *UntagRoleInput) SetRoleName(v string) *UntagRoleInput { + s.RoleName = &v return s } -// SetResourceArns sets the ResourceArns field's value. -func (s *SimulatePrincipalPolicyInput) SetResourceArns(v []*string) *SimulatePrincipalPolicyInput { - s.ResourceArns = v +// SetTagKeys sets the TagKeys field's value. +func (s *UntagRoleInput) SetTagKeys(v []*string) *UntagRoleInput { + s.TagKeys = v return s } -// SetResourceHandlingOption sets the ResourceHandlingOption field's value. -func (s *SimulatePrincipalPolicyInput) SetResourceHandlingOption(v string) *SimulatePrincipalPolicyInput { - s.ResourceHandlingOption = &v - return s +type UntagRoleOutput struct { + _ struct{} `type:"structure"` } -// SetResourceOwner sets the ResourceOwner field's value. -func (s *SimulatePrincipalPolicyInput) SetResourceOwner(v string) *SimulatePrincipalPolicyInput { - s.ResourceOwner = &v - return s +// String returns the string representation +func (s UntagRoleOutput) String() string { + return awsutil.Prettify(s) } -// SetResourcePolicy sets the ResourcePolicy field's value. -func (s *SimulatePrincipalPolicyInput) SetResourcePolicy(v string) *SimulatePrincipalPolicyInput { - s.ResourcePolicy = &v - return s +// GoString returns the string representation +func (s UntagRoleOutput) GoString() string { + return s.String() } -// Contains a reference to a Statement element in a policy document that determines -// the result of the simulation. -// -// This data type is used by the MatchedStatements member of the EvaluationResult -// type. -type Statement struct { +type UntagUserInput struct { _ struct{} `type:"structure"` - // The row and column of the end of a Statement in an IAM policy. - EndPosition *Position `type:"structure"` - - // The identifier of the policy that was provided as an input. - SourcePolicyId *string `type:"string"` - - // The type of the policy. - SourcePolicyType *string `type:"string" enum:"PolicySourceType"` + // A list of key names as a simple array of strings. The tags with matching + // keys are removed from the specified user. + // + // TagKeys is a required field + TagKeys []*string `type:"list" required:"true"` - // The row and column of the beginning of the Statement in an IAM policy. - StartPosition *Position `type:"structure"` + // The name of the IAM user from which you want to remove tags. + // + // This parameter accepts (through its regex pattern (http://wikipedia.org/wiki/regex)) + // a string of characters that consist of upper and lowercase alphanumeric characters + // with no spaces. You can also include any of the following characters: =,.@- + // + // UserName is a required field + UserName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s Statement) String() string { +func (s UntagUserInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Statement) GoString() string { +func (s UntagUserInput) GoString() string { return s.String() } -// SetEndPosition sets the EndPosition field's value. -func (s *Statement) SetEndPosition(v *Position) *Statement { - s.EndPosition = v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UntagUserInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagUserInput"} + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetSourcePolicyId sets the SourcePolicyId field's value. -func (s *Statement) SetSourcePolicyId(v string) *Statement { - s.SourcePolicyId = &v +// SetTagKeys sets the TagKeys field's value. +func (s *UntagUserInput) SetTagKeys(v []*string) *UntagUserInput { + s.TagKeys = v return s } -// SetSourcePolicyType sets the SourcePolicyType field's value. -func (s *Statement) SetSourcePolicyType(v string) *Statement { - s.SourcePolicyType = &v +// SetUserName sets the UserName field's value. +func (s *UntagUserInput) SetUserName(v string) *UntagUserInput { + s.UserName = &v return s } -// SetStartPosition sets the StartPosition field's value. -func (s *Statement) SetStartPosition(v *Position) *Statement { - s.StartPosition = v - return s +type UntagUserOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UntagUserOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagUserOutput) GoString() string { + return s.String() } type UpdateAccessKeyInput struct { @@ -26037,9 +28496,9 @@ type UpdateAccessKeyInput struct { // AccessKeyId is a required field AccessKeyId *string `min:"16" type:"string" required:"true"` - // The status you want to assign to the secret access key. Active means the - // key can be used for API calls to AWS, while Inactive means the key cannot - // be used. + // The status you want to assign to the secret access key. Active means that + // the key can be used for API calls to AWS, while Inactive means that the key + // cannot be used. // // Status is a required field Status *string `type:"string" required:"true" enum:"statusType"` @@ -26048,7 +28507,7 @@ type UpdateAccessKeyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -26124,42 +28583,53 @@ type UpdateAccountPasswordPolicyInput struct { // Their Own Passwords (http://docs.aws.amazon.com/IAM/latest/UserGuide/HowToPwdIAMUser.html) // in the IAM User Guide. // - // Default value: false + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that IAM users in the account do + // not automatically have permissions to change their own password. AllowUsersToChangePassword *bool `type:"boolean"` // Prevents IAM users from setting a new password after their password has expired. + // The IAM user cannot be accessed until an administrator resets the password. // - // Default value: false + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that IAM users can change their + // passwords after they expire and continue to sign in as the user. 
HardExpiry *bool `type:"boolean"` - // The number of days that an IAM user password is valid. The default value - // of 0 means IAM user passwords never expire. + // The number of days that an IAM user password is valid. // - // Default value: 0 + // If you do not specify a value for this parameter, then the operation uses + // the default value of 0. The result is that IAM user passwords never expire. MaxPasswordAge *int64 `min:"1" type:"integer"` // The minimum number of characters allowed in an IAM user password. // - // Default value: 6 + // If you do not specify a value for this parameter, then the operation uses + // the default value of 6. MinimumPasswordLength *int64 `min:"6" type:"integer"` // Specifies the number of previous passwords that IAM users are prevented from - // reusing. The default value of 0 means IAM users are not prevented from reusing - // previous passwords. + // reusing. // - // Default value: 0 + // If you do not specify a value for this parameter, then the operation uses + // the default value of 0. The result is that IAM users are not prevented from + // reusing previous passwords. PasswordReusePrevention *int64 `min:"1" type:"integer"` // Specifies whether IAM user passwords must contain at least one lowercase // character from the ISO basic Latin alphabet (a to z). // - // Default value: false + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that passwords do not require at + // least one lowercase character. RequireLowercaseCharacters *bool `type:"boolean"` // Specifies whether IAM user passwords must contain at least one numeric character // (0 to 9). // - // Default value: false + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that passwords do not require at + // least one numeric character. RequireNumbers *bool `type:"boolean"` // Specifies whether IAM user passwords must contain at least one of the following @@ -26167,13 +28637,17 @@ type UpdateAccountPasswordPolicyInput struct { // // ! @ # $ % ^ & * ( ) _ + - = [ ] { } | ' // - // Default value: false + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that passwords do not require at + // least one symbol character. RequireSymbols *bool `type:"boolean"` // Specifies whether IAM user passwords must contain at least one uppercase // character from the ISO basic Latin alphabet (A to Z). // - // Default value: false + // If you do not specify a value for this parameter, then the operation uses + // the default value of false. The result is that passwords do not require at + // least one uppercase character. RequireUppercaseCharacters *bool `type:"boolean"` } @@ -26280,11 +28754,16 @@ type UpdateAssumeRolePolicyInput struct { // The policy that grants an entity permission to assume the role. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). 
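The reworded documentation above spells out what happens when each password-policy parameter is omitted. A small sketch, with arbitrary placeholder values, of calling UpdateAccountPasswordPolicy and relying on those documented defaults for everything not set explicitly:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Only the parameters set here are changed; per the documentation above,
	// anything omitted (for example RequireSymbols or MaxPasswordAge) falls
	// back to its documented default.
	_, err := svc.UpdateAccountPasswordPolicy(&iam.UpdateAccountPasswordPolicyInput{
		MinimumPasswordLength:      aws.Int64(14),
		RequireNumbers:             aws.Bool(true),
		RequireUppercaseCharacters: aws.Bool(true),
		AllowUsersToChangePassword: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```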
+ // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // PolicyDocument is a required field PolicyDocument *string `min:"1" type:"string" required:"true"` @@ -26365,7 +28844,7 @@ type UpdateGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // GroupName is a required field GroupName *string `min:"1" type:"string" required:"true"` @@ -26374,16 +28853,17 @@ type UpdateGroupInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- NewGroupName *string `min:"1" type:"string"` // New path for the IAM group. Only include this if changing the group's path. // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. NewPath *string `min:"1" type:"string"` } @@ -26457,13 +28937,20 @@ type UpdateLoginProfileInput struct { // The new password for the specified IAM user. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). However, - // the format can be further restricted by the account administrator by setting - // a password policy on the AWS account. For more information, see UpdateAccountPasswordPolicy. 
+ // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) + // + // However, the format can be further restricted by the account administrator + // by setting a password policy on the AWS account. For more information, see + // UpdateAccountPasswordPolicy. Password *string `min:"1" type:"string"` // Allows this new password to be used only once by requiring the specified @@ -26474,7 +28961,7 @@ type UpdateLoginProfileInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -26546,7 +29033,7 @@ type UpdateOpenIDConnectProviderThumbprintInput struct { // The Amazon Resource Name (ARN) of the IAM OIDC provider resource object for // which you want to update the thumbprint. You can get a list of OIDC provider - // ARNs by using the ListOpenIDConnectProviders action. + // ARNs by using the ListOpenIDConnectProviders operation. // // For more information about ARNs, see Amazon Resource Names (ARNs) and AWS // Service Namespaces (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) @@ -26695,6 +29182,95 @@ func (s *UpdateRoleDescriptionOutput) SetRole(v *Role) *UpdateRoleDescriptionOut return s } +type UpdateRoleInput struct { + _ struct{} `type:"structure"` + + // The new description that you want to apply to the specified role. + Description *string `type:"string"` + + // The maximum session duration (in seconds) that you want to set for the specified + // role. If you do not specify a value for this setting, the default maximum + // of one hour is applied. This setting can have a value from 1 hour to 12 hours. + // + // Anyone who assumes the role from the AWS CLI or API can use the DurationSeconds + // API parameter or the duration-seconds CLI parameter to request a longer session. + // The MaxSessionDuration setting determines the maximum duration that can be + // requested using the DurationSeconds parameter. If users don't specify a value + // for the DurationSeconds parameter, their security credentials are valid for + // one hour by default. This applies when you use the AssumeRole* API operations + // or the assume-role* CLI operations but does not apply when you use those + // operations to create a console URL. For more information, see Using IAM Roles + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) in the + // IAM User Guide. + MaxSessionDuration *int64 `min:"3600" type:"integer"` + + // The name of the role that you want to modify. 
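A hedged sketch of the new UpdateRole operation whose input shape is defined in this change. The role name and values are placeholders, and the `UpdateRole` client method is assumed to be generated alongside these types:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	// Raise the maximum session duration to 4 hours; the minimum accepted
	// value is 3600 seconds, as enforced by UpdateRoleInput.Validate.
	_, err := svc.UpdateRole(&iam.UpdateRoleInput{
		RoleName:           aws.String("example-role"), // placeholder
		Description:        aws.String("updated description"),
		MaxSessionDuration: aws.Int64(14400),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```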
+ // + // RoleName is a required field + RoleName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateRoleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRoleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateRoleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateRoleInput"} + if s.MaxSessionDuration != nil && *s.MaxSessionDuration < 3600 { + invalidParams.Add(request.NewErrParamMinValue("MaxSessionDuration", 3600)) + } + if s.RoleName == nil { + invalidParams.Add(request.NewErrParamRequired("RoleName")) + } + if s.RoleName != nil && len(*s.RoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *UpdateRoleInput) SetDescription(v string) *UpdateRoleInput { + s.Description = &v + return s +} + +// SetMaxSessionDuration sets the MaxSessionDuration field's value. +func (s *UpdateRoleInput) SetMaxSessionDuration(v int64) *UpdateRoleInput { + s.MaxSessionDuration = &v + return s +} + +// SetRoleName sets the RoleName field's value. +func (s *UpdateRoleInput) SetRoleName(v string) *UpdateRoleInput { + s.RoleName = &v + return s +} + +type UpdateRoleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateRoleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRoleOutput) GoString() string { + return s.String() +} + type UpdateSAMLProviderInput struct { _ struct{} `type:"structure"` @@ -26797,9 +29373,9 @@ type UpdateSSHPublicKeyInput struct { // SSHPublicKeyId is a required field SSHPublicKeyId *string `min:"20" type:"string" required:"true"` - // The status to assign to the SSH public key. Active means the key can be used - // for authentication with an AWS CodeCommit repository. Inactive means the - // key cannot be used. + // The status to assign to the SSH public key. Active means that the key can + // be used for authentication with an AWS CodeCommit repository. Inactive means + // that the key cannot be used. // // Status is a required field Status *string `type:"string" required:"true" enum:"statusType"` @@ -26808,7 +29384,7 @@ type UpdateSSHPublicKeyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -26887,11 +29463,12 @@ type UpdateServerCertificateInput struct { // The new path for the server certificate. Include this only if you are updating // the server certificate's path. // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! 
(\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. NewPath *string `min:"1" type:"string"` // The new name for the server certificate. Include this only if you are updating @@ -26900,14 +29477,14 @@ type UpdateServerCertificateInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- NewServerCertificateName *string `min:"1" type:"string"` // The name of the server certificate that you want to update. // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // ServerCertificateName is a required field ServerCertificateName *string `min:"1" type:"string" required:"true"` @@ -27000,7 +29577,7 @@ type UpdateServiceSpecificCredentialInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -27080,8 +29657,8 @@ type UpdateSigningCertificateInput struct { // CertificateId is a required field CertificateId *string `min:"24" type:"string" required:"true"` - // The status you want to assign to the certificate. Active means the certificate - // can be used for API calls to AWS, while Inactive means the certificate cannot + // The status you want to assign to the certificate. Active means that the certificate + // can be used for API calls to AWS Inactive means that the certificate cannot // be used. // // Status is a required field @@ -27091,7 +29668,7 @@ type UpdateSigningCertificateInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -27165,11 +29742,12 @@ type UpdateUserInput struct { // New path for the IAM user. Include this parameter only if you're changing // the user's path. // - // This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! 
(\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. NewPath *string `min:"1" type:"string"` // New name for the user. Include this parameter only if you're changing the @@ -27177,7 +29755,7 @@ type UpdateUserInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- NewUserName *string `min:"1" type:"string"` // Name of the user to update. If you're changing the name of the user, this @@ -27185,7 +29763,7 @@ type UpdateUserInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -27259,14 +29837,21 @@ type UploadSSHPublicKeyInput struct { _ struct{} `type:"structure"` // The SSH public key. The public key must be encoded in ssh-rsa format or PEM - // format. + // format. The miminum bit-length of the public key is 2048 bits. For example, + // you can generate a 2048-bit key, and the resulting PEM file is 1679 bytes + // long. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // SSHPublicKeyBody is a required field SSHPublicKeyBody *string `min:"1" type:"string" required:"true"` @@ -27275,7 +29860,7 @@ type UploadSSHPublicKeyInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // UserName is a required field UserName *string `min:"1" type:"string" required:"true"` @@ -27355,11 +29940,16 @@ type UploadServerCertificateInput struct { // The contents of the public key certificate in PEM-encoded format. 
// // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // CertificateBody is a required field CertificateBody *string `min:"1" type:"string" required:"true"` @@ -27368,11 +29958,16 @@ type UploadServerCertificateInput struct { // of the PEM-encoded public key certificates of the chain. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) CertificateChain *string `min:"1" type:"string"` // The path for the server certificate. For more information about paths, see @@ -27380,14 +29975,15 @@ type UploadServerCertificateInput struct { // in the IAM User Guide. // // This parameter is optional. If it is not included, it defaults to a slash - // (/). This paramater allows (per its regex pattern (http://wikipedia.org/wiki/regex)) + // (/). This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of either a forward slash (/) by itself - // or a string that must begin and end with forward slashes, containing any - // ASCII character from the ! (\u0021) thru the DEL character (\u007F), including - // most punctuation characters, digits, and upper and lowercased letters. + // or a string that must begin and end with forward slashes. In addition, it + // can contain any ASCII character from the ! (\u0021) through the DEL character + // (\u007F), including most punctuation characters, digits, and upper and lowercased + // letters. // // If you are uploading a server certificate specifically for use with Amazon - // CloudFront distributions, you must specify a path using the --path option. + // CloudFront distributions, you must specify a path using the path parameter. // The path must begin with /cloudfront and must include a trailing slash (for // example, /cloudfront/test/). 
Path *string `min:"1" type:"string"` @@ -27395,11 +29991,16 @@ type UploadServerCertificateInput struct { // The contents of the private key in PEM-encoded format. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // PrivateKey is a required field PrivateKey *string `min:"1" type:"string" required:"true"` @@ -27409,7 +30010,7 @@ type UploadServerCertificateInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- // // ServerCertificateName is a required field ServerCertificateName *string `min:"1" type:"string" required:"true"` @@ -27520,11 +30121,16 @@ type UploadSigningCertificateInput struct { // The contents of the signing certificate. // // The regex pattern (http://wikipedia.org/wiki/regex) used to validate this - // parameter is a string of characters consisting of any printable ASCII character - // ranging from the space character (\u0020) through end of the ASCII character - // range as well as the printable characters in the Basic Latin and Latin-1 - // Supplement character set (through \u00FF). It also includes the special characters - // tab (\u0009), line feed (\u000A), and carriage return (\u000D). + // parameter is a string of characters consisting of the following: + // + // * Any printable ASCII character ranging from the space character (\u0020) + // through the end of the ASCII character range + // + // * The printable characters in the Basic Latin and Latin-1 Supplement character + // set (through \u00FF) + // + // * The special characters tab (\u0009), line feed (\u000A), and carriage + // return (\u000D) // // CertificateBody is a required field CertificateBody *string `min:"1" type:"string" required:"true"` @@ -27533,7 +30139,7 @@ type UploadSigningCertificateInput struct { // // This parameter allows (per its regex pattern (http://wikipedia.org/wiki/regex)) // a string of characters consisting of upper and lowercase alphanumeric characters - // with no spaces. You can also include any of the following characters: =,.@- + // with no spaces. You can also include any of the following characters: _+=,.@- UserName *string `min:"1" type:"string"` } @@ -27606,7 +30212,7 @@ func (s *UploadSigningCertificateOutput) SetCertificate(v *SigningCertificate) * // Contains information about an IAM user entity. 
// -// This data type is used as a response element in the following actions: +// This data type is used as a response element in the following operations: // // * CreateUser // @@ -27627,7 +30233,7 @@ type User struct { // when the user was created. // // CreateDate is a required field - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + CreateDate *time.Time `type:"timestamp" required:"true"` // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), // when the user's password was last used to sign in to an AWS website. For @@ -27647,8 +30253,8 @@ type User struct { // does not currently have a password, but had one in the past, then this field // contains the date and time the most recent password was used. // - // This value is returned only in the GetUser and ListUsers actions. - PasswordLastUsed *time.Time `type:"timestamp" timestampFormat:"iso8601"` + // This value is returned only in the GetUser and ListUsers operations. + PasswordLastUsed *time.Time `type:"timestamp"` // The path to the user. For more information about paths, see IAM Identifiers // (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) @@ -27657,6 +30263,18 @@ type User struct { // Path is a required field Path *string `min:"1" type:"string" required:"true"` + // The ARN of the policy used to set the permissions boundary for the user. + // + // For more information about permissions boundaries, see Permissions Boundaries + // for IAM Identities (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) + // in the IAM User Guide. + PermissionsBoundary *AttachedPermissionsBoundary `type:"structure"` + + // A list of tags that are associated with the specified user. For more information + // about tagging, see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) + // in the IAM User Guide. + Tags []*Tag `type:"list"` + // The stable and unique string identifying the user. For more information about // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) // in the Using IAM guide. @@ -27704,6 +30322,18 @@ func (s *User) SetPath(v string) *User { return s } +// SetPermissionsBoundary sets the PermissionsBoundary field's value. +func (s *User) SetPermissionsBoundary(v *AttachedPermissionsBoundary) *User { + s.PermissionsBoundary = v + return s +} + +// SetTags sets the Tags field's value. +func (s *User) SetTags(v []*Tag) *User { + s.Tags = v + return s +} + // SetUserId sets the UserId field's value. func (s *User) SetUserId(v string) *User { s.UserId = &v @@ -27720,7 +30350,7 @@ func (s *User) SetUserName(v string) *User { // and all the IAM groups the user is in. // // This data type is used as a response element in the GetAccountAuthorizationDetails -// action. +// operation. type UserDetail struct { _ struct{} `type:"structure"` @@ -27736,7 +30366,7 @@ type UserDetail struct { // The date and time, in ISO 8601 date-time format (http://www.iso.org/iso/iso8601), // when the user was created. - CreateDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreateDate *time.Time `type:"timestamp"` // A list of IAM groups that the user is in. GroupList []*string `type:"list"` @@ -27746,6 +30376,18 @@ type UserDetail struct { // in the Using IAM guide. Path *string `min:"1" type:"string"` + // The ARN of the policy used to set the permissions boundary for the user. 
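Since the User type now carries Tags and PermissionsBoundary, a brief sketch of reading them back via GetUser. The user name is a placeholder, and only fields shown in this change are accessed:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	svc := iam.New(session.Must(session.NewSession()))

	out, err := svc.GetUser(&iam.GetUserInput{
		UserName: aws.String("example-user"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}

	// The new fields on User: Tags and PermissionsBoundary.
	for _, tag := range out.User.Tags {
		fmt.Printf("%s=%s\n", aws.StringValue(tag.Key), aws.StringValue(tag.Value))
	}
	if out.User.PermissionsBoundary != nil {
		fmt.Println("permissions boundary:", out.User.PermissionsBoundary)
	}
}
```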
+ // + // For more information about permissions boundaries, see Permissions Boundaries + // for IAM Identities (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) + // in the IAM User Guide. + PermissionsBoundary *AttachedPermissionsBoundary `type:"structure"` + + // A list of tags that are associated with the specified user. For more information + // about tagging, see Tagging IAM Identities (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) + // in the IAM User Guide. + Tags []*Tag `type:"list"` + // The stable and unique string identifying the user. For more information about // IDs, see IAM Identifiers (http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) // in the Using IAM guide. @@ -27798,6 +30440,18 @@ func (s *UserDetail) SetPath(v string) *UserDetail { return s } +// SetPermissionsBoundary sets the PermissionsBoundary field's value. +func (s *UserDetail) SetPermissionsBoundary(v *AttachedPermissionsBoundary) *UserDetail { + s.PermissionsBoundary = v + return s +} + +// SetTags sets the Tags field's value. +func (s *UserDetail) SetTags(v []*Tag) *UserDetail { + s.Tags = v + return s +} + // SetUserId sets the UserId field's value. func (s *UserDetail) SetUserId(v string) *UserDetail { s.UserId = &v @@ -27827,7 +30481,7 @@ type VirtualMFADevice struct { Base32StringSeed []byte `type:"blob"` // The date and time on which the virtual MFA device was enabled. - EnableDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EnableDate *time.Time `type:"timestamp"` // A QR code PNG image that encodes otpauth://totp/$virtualMFADeviceName@$AccountName?secret=$Base32String // where $virtualMFADeviceName is one of the create call arguments, AccountName @@ -27955,6 +30609,11 @@ const ( EntityTypeAwsmanagedPolicy = "AWSManagedPolicy" ) +const ( + // PermissionsBoundaryAttachmentTypePermissionsBoundaryPolicy is a PermissionsBoundaryAttachmentType enum value + PermissionsBoundaryAttachmentTypePermissionsBoundaryPolicy = "PermissionsBoundaryPolicy" +) + const ( // PolicyEvaluationDecisionTypeAllowed is a PolicyEvaluationDecisionType enum value PolicyEvaluationDecisionTypeAllowed = "allowed" @@ -27989,6 +30648,20 @@ const ( PolicySourceTypeNone = "none" ) +// The policy usage type that indicates whether the policy is used as a permissions +// policy or as the permissions boundary for an entity. +// +// For more information about permissions boundaries, see Permissions Boundaries +// for IAM Identities (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) +// in the IAM User Guide. +const ( + // PolicyUsageTypePermissionsPolicy is a PolicyUsageType enum value + PolicyUsageTypePermissionsPolicy = "PermissionsPolicy" + + // PolicyUsageTypePermissionsBoundary is a PolicyUsageType enum value + PolicyUsageTypePermissionsBoundary = "PermissionsBoundary" +) + const ( // ReportFormatTypeTextCsv is a ReportFormatType enum value ReportFormatTypeTextCsv = "text/csv" diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/errors.go b/vendor/github.com/aws/aws-sdk-go/service/iam/errors.go index 470e19b37f4..b78d571061e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/iam/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/errors.go @@ -4,6 +4,14 @@ package iam const ( + // ErrCodeConcurrentModificationException for service response error code + // "ConcurrentModification". 
+ // + // The request was rejected because multiple requests to change this object + // were submitted simultaneously. Wait a few minutes and submit your request + // again. + ErrCodeConcurrentModificationException = "ConcurrentModification" + // ErrCodeCredentialReportExpiredException for service response error code // "ReportExpired". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/iam/service.go b/vendor/github.com/aws/aws-sdk-go/service/iam/service.go index 4f798c63d0f..940b4ce3283 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/iam/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/iam/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "iam" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "iam" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "IAM" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the IAM client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/inspector/api.go b/vendor/github.com/aws/aws-sdk-go/service/inspector/api.go index 0376ef0c264..5c91f542d3c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/inspector/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/inspector/api.go @@ -17,8 +17,8 @@ const opAddAttributesToFindings = "AddAttributesToFindings" // AddAttributesToFindingsRequest generates a "aws/request.Request" representing the // client's request for the AddAttributesToFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -82,6 +82,9 @@ func (c *Inspector) AddAttributesToFindingsRequest(input *AddAttributesToFinding // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/AddAttributesToFindings func (c *Inspector) AddAttributesToFindings(input *AddAttributesToFindingsInput) (*AddAttributesToFindingsOutput, error) { req, out := c.AddAttributesToFindingsRequest(input) @@ -108,8 +111,8 @@ const opCreateAssessmentTarget = "CreateAssessmentTarget" // CreateAssessmentTargetRequest generates a "aws/request.Request" representing the // client's request for the CreateAssessmentTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -149,9 +152,11 @@ func (c *Inspector) CreateAssessmentTargetRequest(input *CreateAssessmentTargetI // CreateAssessmentTarget API operation for Amazon Inspector. // // Creates a new assessment target using the ARN of the resource group that -// is generated by CreateResourceGroup. If the service-linked role (https://docs.aws.amazon.com/inspector/latest/userguide/inspector_slr.html) -// isn’t already registered, also creates and registers a service-linked role -// to grant Amazon Inspector access to AWS Services needed to perform security +// is generated by CreateResourceGroup. If resourceGroupArn is not specified, +// all EC2 instances in the current AWS account and region are included in the +// assessment target. If the service-linked role (https://docs.aws.amazon.com/inspector/latest/userguide/inspector_slr.html) +// isn’t already registered, this action also creates and registers a service-linked +// role to grant Amazon Inspector access to AWS Services needed to perform security // assessments. You can create up to 50 assessment targets per AWS account. // You can run up to 500 concurrent agents per AWS account. For more information, // see Amazon Inspector Assessment Targets (http://docs.aws.amazon.com/inspector/latest/userguide/inspector_applications.html). @@ -182,6 +187,13 @@ func (c *Inspector) CreateAssessmentTargetRequest(input *CreateAssessmentTargetI // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeInvalidCrossAccountRoleException "InvalidCrossAccountRoleException" +// Amazon Inspector cannot assume the cross-account role that it needs to list +// your EC2 instances during the assessment run. +// +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/CreateAssessmentTarget func (c *Inspector) CreateAssessmentTarget(input *CreateAssessmentTargetInput) (*CreateAssessmentTargetOutput, error) { req, out := c.CreateAssessmentTargetRequest(input) @@ -208,8 +220,8 @@ const opCreateAssessmentTemplate = "CreateAssessmentTemplate" // CreateAssessmentTemplateRequest generates a "aws/request.Request" representing the // client's request for the CreateAssessmentTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -250,8 +262,8 @@ func (c *Inspector) CreateAssessmentTemplateRequest(input *CreateAssessmentTempl // // Creates an assessment template for the assessment target that is specified // by the ARN of the assessment target. If the service-linked role (https://docs.aws.amazon.com/inspector/latest/userguide/inspector_slr.html) -// isn’t already registered, also creates and registers a service-linked role -// to grant Amazon Inspector access to AWS Services needed to perform security +// isn’t already registered, this action also creates and registers a service-linked +// role to grant Amazon Inspector access to AWS Services needed to perform security // assessments. 
// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -280,6 +292,9 @@ func (c *Inspector) CreateAssessmentTemplateRequest(input *CreateAssessmentTempl // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/CreateAssessmentTemplate func (c *Inspector) CreateAssessmentTemplate(input *CreateAssessmentTemplateInput) (*CreateAssessmentTemplateOutput, error) { req, out := c.CreateAssessmentTemplateRequest(input) @@ -302,12 +317,111 @@ func (c *Inspector) CreateAssessmentTemplateWithContext(ctx aws.Context, input * return out, req.Send() } +const opCreateExclusionsPreview = "CreateExclusionsPreview" + +// CreateExclusionsPreviewRequest generates a "aws/request.Request" representing the +// client's request for the CreateExclusionsPreview operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateExclusionsPreview for more information on using the CreateExclusionsPreview +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateExclusionsPreviewRequest method. +// req, resp := client.CreateExclusionsPreviewRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/CreateExclusionsPreview +func (c *Inspector) CreateExclusionsPreviewRequest(input *CreateExclusionsPreviewInput) (req *request.Request, output *CreateExclusionsPreviewOutput) { + op := &request.Operation{ + Name: opCreateExclusionsPreview, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateExclusionsPreviewInput{} + } + + output = &CreateExclusionsPreviewOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateExclusionsPreview API operation for Amazon Inspector. +// +// Starts the generation of an exclusions preview for the specified assessment +// template. The exclusions preview lists the potential exclusions (ExclusionPreview) +// that Inspector can detect before it runs the assessment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Inspector's +// API operation CreateExclusionsPreview for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodePreviewGenerationInProgressException "PreviewGenerationInProgressException" +// The request is rejected. The specified assessment template is currently generating +// an exclusions preview. +// +// * ErrCodeInternalException "InternalException" +// Internal server error. 
+// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// You do not have required permissions to access the requested resource. +// +// * ErrCodeNoSuchEntityException "NoSuchEntityException" +// The request was rejected because it referenced an entity that does not exist. +// The error code describes the entity. +// +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/CreateExclusionsPreview +func (c *Inspector) CreateExclusionsPreview(input *CreateExclusionsPreviewInput) (*CreateExclusionsPreviewOutput, error) { + req, out := c.CreateExclusionsPreviewRequest(input) + return out, req.Send() +} + +// CreateExclusionsPreviewWithContext is the same as CreateExclusionsPreview with the addition of +// the ability to pass a context and additional request options. +// +// See CreateExclusionsPreview for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Inspector) CreateExclusionsPreviewWithContext(ctx aws.Context, input *CreateExclusionsPreviewInput, opts ...request.Option) (*CreateExclusionsPreviewOutput, error) { + req, out := c.CreateExclusionsPreviewRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateResourceGroup = "CreateResourceGroup" // CreateResourceGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateResourceGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -373,6 +487,9 @@ func (c *Inspector) CreateResourceGroupRequest(input *CreateResourceGroupInput) // * ErrCodeAccessDeniedException "AccessDeniedException" // You do not have required permissions to access the requested resource. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/CreateResourceGroup func (c *Inspector) CreateResourceGroup(input *CreateResourceGroupInput) (*CreateResourceGroupOutput, error) { req, out := c.CreateResourceGroupRequest(input) @@ -399,8 +516,8 @@ const opDeleteAssessmentRun = "DeleteAssessmentRun" // DeleteAssessmentRunRequest generates a "aws/request.Request" representing the // client's request for the DeleteAssessmentRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
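A small sketch of driving the new CreateExclusionsPreview operation: it assumes an already constructed *inspector.Inspector client and a placeholder assessment template ARN, and simply hands back the preview token that GetExclusionsPreview expects later. The helper name is not part of the SDK.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/inspector"
)

// createExclusionsPreview starts generation of an exclusions preview for the
// given assessment template and returns the preview token. svc and templateArn
// are supplied by the caller; both are placeholders in this sketch.
func createExclusionsPreview(svc *inspector.Inspector, templateArn string) (string, error) {
	out, err := svc.CreateExclusionsPreview(&inspector.CreateExclusionsPreviewInput{
		AssessmentTemplateArn: aws.String(templateArn),
	})
	if err != nil {
		return "", err
	}
	// PreviewToken identifies this preview in later GetExclusionsPreview calls.
	return aws.StringValue(out.PreviewToken), nil
}
```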
@@ -470,6 +587,9 @@ func (c *Inspector) DeleteAssessmentRunRequest(input *DeleteAssessmentRunInput) // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/DeleteAssessmentRun func (c *Inspector) DeleteAssessmentRun(input *DeleteAssessmentRunInput) (*DeleteAssessmentRunOutput, error) { req, out := c.DeleteAssessmentRunRequest(input) @@ -496,8 +616,8 @@ const opDeleteAssessmentTarget = "DeleteAssessmentTarget" // DeleteAssessmentTargetRequest generates a "aws/request.Request" representing the // client's request for the DeleteAssessmentTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -567,6 +687,9 @@ func (c *Inspector) DeleteAssessmentTargetRequest(input *DeleteAssessmentTargetI // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/DeleteAssessmentTarget func (c *Inspector) DeleteAssessmentTarget(input *DeleteAssessmentTargetInput) (*DeleteAssessmentTargetOutput, error) { req, out := c.DeleteAssessmentTargetRequest(input) @@ -593,8 +716,8 @@ const opDeleteAssessmentTemplate = "DeleteAssessmentTemplate" // DeleteAssessmentTemplateRequest generates a "aws/request.Request" representing the // client's request for the DeleteAssessmentTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -664,6 +787,9 @@ func (c *Inspector) DeleteAssessmentTemplateRequest(input *DeleteAssessmentTempl // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/DeleteAssessmentTemplate func (c *Inspector) DeleteAssessmentTemplate(input *DeleteAssessmentTemplateInput) (*DeleteAssessmentTemplateOutput, error) { req, out := c.DeleteAssessmentTemplateRequest(input) @@ -690,8 +816,8 @@ const opDescribeAssessmentRuns = "DescribeAssessmentRuns" // DescribeAssessmentRunsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAssessmentRuns operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
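Since ServiceTemporarilyUnavailableException is now a documented return code for several of these operations, a caller may want to retry on that code specifically. The sketch below does so for DeleteAssessmentRun; the attempt count, backoff, and helper name are arbitrary illustrative choices, not SDK features.

```go
package example

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/inspector"
)

// deleteAssessmentRunWithRetry deletes an assessment run and retries only when
// Inspector reports ServiceTemporarilyUnavailableException. Three attempts and
// a linear backoff are arbitrary choices for this sketch.
func deleteAssessmentRunWithRetry(svc *inspector.Inspector, runArn string) error {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		_, lastErr = svc.DeleteAssessmentRun(&inspector.DeleteAssessmentRunInput{
			AssessmentRunArn: aws.String(runArn),
		})
		if lastErr == nil {
			return nil
		}
		aerr, ok := lastErr.(awserr.Error)
		if !ok || aerr.Code() != inspector.ErrCodeServiceTemporarilyUnavailableException {
			return lastErr // only the temporary-unavailability code is retried here
		}
		time.Sleep(time.Duration(attempt) * time.Second)
	}
	return lastErr
}
```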
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -774,8 +900,8 @@ const opDescribeAssessmentTargets = "DescribeAssessmentTargets" // DescribeAssessmentTargetsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAssessmentTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -858,8 +984,8 @@ const opDescribeAssessmentTemplates = "DescribeAssessmentTemplates" // DescribeAssessmentTemplatesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAssessmentTemplates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -942,8 +1068,8 @@ const opDescribeCrossAccountAccessRole = "DescribeCrossAccountAccessRole" // DescribeCrossAccountAccessRoleRequest generates a "aws/request.Request" representing the // client's request for the DescribeCrossAccountAccessRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1017,12 +1143,95 @@ func (c *Inspector) DescribeCrossAccountAccessRoleWithContext(ctx aws.Context, i return out, req.Send() } +const opDescribeExclusions = "DescribeExclusions" + +// DescribeExclusionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeExclusions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeExclusions for more information on using the DescribeExclusions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeExclusionsRequest method. 
+// req, resp := client.DescribeExclusionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/DescribeExclusions +func (c *Inspector) DescribeExclusionsRequest(input *DescribeExclusionsInput) (req *request.Request, output *DescribeExclusionsOutput) { + op := &request.Operation{ + Name: opDescribeExclusions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeExclusionsInput{} + } + + output = &DescribeExclusionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeExclusions API operation for Amazon Inspector. +// +// Describes the exclusions that are specified by the exclusions' ARNs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Inspector's +// API operation DescribeExclusions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/DescribeExclusions +func (c *Inspector) DescribeExclusions(input *DescribeExclusionsInput) (*DescribeExclusionsOutput, error) { + req, out := c.DescribeExclusionsRequest(input) + return out, req.Send() +} + +// DescribeExclusionsWithContext is the same as DescribeExclusions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeExclusions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Inspector) DescribeExclusionsWithContext(ctx aws.Context, input *DescribeExclusionsInput, opts ...request.Option) (*DescribeExclusionsOutput, error) { + req, out := c.DescribeExclusionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeFindings = "DescribeFindings" // DescribeFindingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1104,8 +1313,8 @@ const opDescribeResourceGroups = "DescribeResourceGroups" // DescribeResourceGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeResourceGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
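To show the DescribeExclusions shapes in use, here is a hedged sketch that looks up a batch of exclusion ARNs and prints each title with its recommendation. The helper name, the EN_US locale choice, and the print format are assumptions for illustration.

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/inspector"
)

// describeExclusions fetches details for a batch of exclusion ARNs and prints
// each title with its recommendation. Items that cannot be described come back
// in FailedItems with a failure code.
func describeExclusions(svc *inspector.Inspector, exclusionArns []string) error {
	out, err := svc.DescribeExclusions(&inspector.DescribeExclusionsInput{
		ExclusionArns: aws.StringSlice(exclusionArns),
		Locale:        aws.String(inspector.LocaleEnUs), // assumed locale choice
	})
	if err != nil {
		return err
	}
	for arn, ex := range out.Exclusions {
		fmt.Printf("%s: %s (%s)\n", arn, aws.StringValue(ex.Title), aws.StringValue(ex.Recommendation))
	}
	for arn, item := range out.FailedItems {
		fmt.Printf("could not describe %s: %s\n", arn, aws.StringValue(item.FailureCode))
	}
	return nil
}
```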
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1188,8 +1397,8 @@ const opDescribeRulesPackages = "DescribeRulesPackages" // DescribeRulesPackagesRequest generates a "aws/request.Request" representing the // client's request for the DescribeRulesPackages operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1272,8 +1481,8 @@ const opGetAssessmentReport = "GetAssessmentReport" // GetAssessmentReportRequest generates a "aws/request.Request" representing the // client's request for the GetAssessmentReport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1348,6 +1557,9 @@ func (c *Inspector) GetAssessmentReportRequest(input *GetAssessmentReportInput) // runs that took place or will take place after generating reports in Amazon // Inspector became available. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/GetAssessmentReport func (c *Inspector) GetAssessmentReport(input *GetAssessmentReportInput) (*GetAssessmentReportOutput, error) { req, out := c.GetAssessmentReportRequest(input) @@ -1370,12 +1582,160 @@ func (c *Inspector) GetAssessmentReportWithContext(ctx aws.Context, input *GetAs return out, req.Send() } +const opGetExclusionsPreview = "GetExclusionsPreview" + +// GetExclusionsPreviewRequest generates a "aws/request.Request" representing the +// client's request for the GetExclusionsPreview operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetExclusionsPreview for more information on using the GetExclusionsPreview +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetExclusionsPreviewRequest method. 
+// req, resp := client.GetExclusionsPreviewRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/GetExclusionsPreview +func (c *Inspector) GetExclusionsPreviewRequest(input *GetExclusionsPreviewInput) (req *request.Request, output *GetExclusionsPreviewOutput) { + op := &request.Operation{ + Name: opGetExclusionsPreview, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "maxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &GetExclusionsPreviewInput{} + } + + output = &GetExclusionsPreviewOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetExclusionsPreview API operation for Amazon Inspector. +// +// Retrieves the exclusions preview (a list of ExclusionPreview objects) specified +// by the preview token. You can obtain the preview token by running the CreateExclusionsPreview +// API. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Inspector's +// API operation GetExclusionsPreview for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// You do not have required permissions to access the requested resource. +// +// * ErrCodeNoSuchEntityException "NoSuchEntityException" +// The request was rejected because it referenced an entity that does not exist. +// The error code describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/GetExclusionsPreview +func (c *Inspector) GetExclusionsPreview(input *GetExclusionsPreviewInput) (*GetExclusionsPreviewOutput, error) { + req, out := c.GetExclusionsPreviewRequest(input) + return out, req.Send() +} + +// GetExclusionsPreviewWithContext is the same as GetExclusionsPreview with the addition of +// the ability to pass a context and additional request options. +// +// See GetExclusionsPreview for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Inspector) GetExclusionsPreviewWithContext(ctx aws.Context, input *GetExclusionsPreviewInput, opts ...request.Option) (*GetExclusionsPreviewOutput, error) { + req, out := c.GetExclusionsPreviewRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetExclusionsPreviewPages iterates over the pages of a GetExclusionsPreview operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetExclusionsPreview method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. 
+// +// // Example iterating over at most 3 pages of a GetExclusionsPreview operation. +// pageNum := 0 +// err := client.GetExclusionsPreviewPages(params, +// func(page *GetExclusionsPreviewOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Inspector) GetExclusionsPreviewPages(input *GetExclusionsPreviewInput, fn func(*GetExclusionsPreviewOutput, bool) bool) error { + return c.GetExclusionsPreviewPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetExclusionsPreviewPagesWithContext same as GetExclusionsPreviewPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Inspector) GetExclusionsPreviewPagesWithContext(ctx aws.Context, input *GetExclusionsPreviewInput, fn func(*GetExclusionsPreviewOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetExclusionsPreviewInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetExclusionsPreviewRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetExclusionsPreviewOutput), !p.HasNextPage()) + } + return p.Err() +} + const opGetTelemetryMetadata = "GetTelemetryMetadata" // GetTelemetryMetadataRequest generates a "aws/request.Request" representing the // client's request for the GetTelemetryMetadata operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1465,8 +1825,8 @@ const opListAssessmentRunAgents = "ListAssessmentRunAgents" // ListAssessmentRunAgentsRequest generates a "aws/request.Request" representing the // client's request for the ListAssessmentRunAgents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1612,8 +1972,8 @@ const opListAssessmentRuns = "ListAssessmentRuns" // ListAssessmentRunsRequest generates a "aws/request.Request" representing the // client's request for the ListAssessmentRuns operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
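The GetExclusionsPreviewPages helper above follows the SDK's usual pagination pattern; a minimal sketch of walking a preview with it follows. The template ARN, preview token, and page size are placeholders, the ExclusionPreviews field name on the output is assumed, and a real caller may also need to poll until the preview has finished generating.

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/inspector"
)

// printExclusionsPreview pages through an exclusions preview using the token
// obtained from CreateExclusionsPreview and prints each preview's title.
func printExclusionsPreview(svc *inspector.Inspector, templateArn, previewToken string) error {
	input := &inspector.GetExclusionsPreviewInput{
		AssessmentTemplateArn: aws.String(templateArn),
		PreviewToken:          aws.String(previewToken),
		MaxResults:            aws.Int64(100), // placeholder page size
	}
	return svc.GetExclusionsPreviewPages(input,
		func(page *inspector.GetExclusionsPreviewOutput, lastPage bool) bool {
			for _, preview := range page.ExclusionPreviews {
				fmt.Println(aws.StringValue(preview.Title))
			}
			return !lastPage // continue until the final page
		})
}
```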
@@ -1759,8 +2119,8 @@ const opListAssessmentTargets = "ListAssessmentTargets" // ListAssessmentTargetsRequest generates a "aws/request.Request" representing the // client's request for the ListAssessmentTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1903,8 +2263,8 @@ const opListAssessmentTemplates = "ListAssessmentTemplates" // ListAssessmentTemplatesRequest generates a "aws/request.Request" representing the // client's request for the ListAssessmentTemplates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2050,8 +2410,8 @@ const opListEventSubscriptions = "ListEventSubscriptions" // ListEventSubscriptionsRequest generates a "aws/request.Request" representing the // client's request for the ListEventSubscriptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2194,17 +2554,163 @@ func (c *Inspector) ListEventSubscriptionsPagesWithContext(ctx aws.Context, inpu return p.Err() } -const opListFindings = "ListFindings" +const opListExclusions = "ListExclusions" -// ListFindingsRequest generates a "aws/request.Request" representing the -// client's request for the ListFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListExclusionsRequest generates a "aws/request.Request" representing the +// client's request for the ListExclusions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListFindings for more information on using the ListFindings +// See ListExclusions for more information on using the ListExclusions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListExclusionsRequest method. 
+// req, resp := client.ListExclusionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/ListExclusions +func (c *Inspector) ListExclusionsRequest(input *ListExclusionsInput) (req *request.Request, output *ListExclusionsOutput) { + op := &request.Operation{ + Name: opListExclusions, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "maxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListExclusionsInput{} + } + + output = &ListExclusionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListExclusions API operation for Amazon Inspector. +// +// List exclusions that are generated by the assessment run. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Inspector's +// API operation ListExclusions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// You do not have required permissions to access the requested resource. +// +// * ErrCodeNoSuchEntityException "NoSuchEntityException" +// The request was rejected because it referenced an entity that does not exist. +// The error code describes the entity. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/ListExclusions +func (c *Inspector) ListExclusions(input *ListExclusionsInput) (*ListExclusionsOutput, error) { + req, out := c.ListExclusionsRequest(input) + return out, req.Send() +} + +// ListExclusionsWithContext is the same as ListExclusions with the addition of +// the ability to pass a context and additional request options. +// +// See ListExclusions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Inspector) ListExclusionsWithContext(ctx aws.Context, input *ListExclusionsInput, opts ...request.Option) (*ListExclusionsOutput, error) { + req, out := c.ListExclusionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListExclusionsPages iterates over the pages of a ListExclusions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListExclusions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListExclusions operation. 
+// pageNum := 0 +// err := client.ListExclusionsPages(params, +// func(page *ListExclusionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Inspector) ListExclusionsPages(input *ListExclusionsInput, fn func(*ListExclusionsOutput, bool) bool) error { + return c.ListExclusionsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListExclusionsPagesWithContext same as ListExclusionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Inspector) ListExclusionsPagesWithContext(ctx aws.Context, input *ListExclusionsInput, fn func(*ListExclusionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListExclusionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListExclusionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListExclusionsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListFindings = "ListFindings" + +// ListFindingsRequest generates a "aws/request.Request" representing the +// client's request for the ListFindings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListFindings for more information on using the ListFindings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration @@ -2345,8 +2851,8 @@ const opListRulesPackages = "ListRulesPackages" // ListRulesPackagesRequest generates a "aws/request.Request" representing the // client's request for the ListRulesPackages operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2487,8 +2993,8 @@ const opListTagsForResource = "ListTagsForResource" // ListTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2577,8 +3083,8 @@ const opPreviewAgents = "PreviewAgents" // PreviewAgentsRequest generates a "aws/request.Request" representing the // client's request for the PreviewAgents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
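Similarly, ListExclusionsPagesWithContext can be combined with a bounded context so the whole pagination walk is cancellable. In the sketch below the 30 second timeout is arbitrary, and the output field name (ExclusionArns) is assumed to follow the package's usual List* convention.

```go
package example

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/inspector"
)

// collectExclusionArns walks every page of exclusions generated by an
// assessment run, under a context with an arbitrary 30 second deadline.
func collectExclusionArns(svc *inspector.Inspector, assessmentRunArn string) ([]string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	var arns []string
	err := svc.ListExclusionsPagesWithContext(ctx,
		&inspector.ListExclusionsInput{
			AssessmentRunArn: aws.String(assessmentRunArn),
		},
		func(page *inspector.ListExclusionsOutput, lastPage bool) bool {
			// ExclusionArns is assumed to be the page's list field, matching
			// the naming used by the other List* outputs in this package.
			arns = append(arns, aws.StringValueSlice(page.ExclusionArns)...)
			return !lastPage
		})
	return arns, err
}
```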
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2728,8 +3234,8 @@ const opRegisterCrossAccountAccessRole = "RegisterCrossAccountAccessRole" // RegisterCrossAccountAccessRoleRequest generates a "aws/request.Request" representing the // client's request for the RegisterCrossAccountAccessRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2795,6 +3301,9 @@ func (c *Inspector) RegisterCrossAccountAccessRoleRequest(input *RegisterCrossAc // Amazon Inspector cannot assume the cross-account role that it needs to list // your EC2 instances during the assessment run. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/RegisterCrossAccountAccessRole func (c *Inspector) RegisterCrossAccountAccessRole(input *RegisterCrossAccountAccessRoleInput) (*RegisterCrossAccountAccessRoleOutput, error) { req, out := c.RegisterCrossAccountAccessRoleRequest(input) @@ -2821,8 +3330,8 @@ const opRemoveAttributesFromFindings = "RemoveAttributesFromFindings" // RemoveAttributesFromFindingsRequest generates a "aws/request.Request" representing the // client's request for the RemoveAttributesFromFindings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2887,6 +3396,9 @@ func (c *Inspector) RemoveAttributesFromFindingsRequest(input *RemoveAttributesF // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/RemoveAttributesFromFindings func (c *Inspector) RemoveAttributesFromFindings(input *RemoveAttributesFromFindingsInput) (*RemoveAttributesFromFindingsOutput, error) { req, out := c.RemoveAttributesFromFindingsRequest(input) @@ -2913,8 +3425,8 @@ const opSetTagsForResource = "SetTagsForResource" // SetTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the SetTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2980,6 +3492,9 @@ func (c *Inspector) SetTagsForResourceRequest(input *SetTagsForResourceInput) (r // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/SetTagsForResource func (c *Inspector) SetTagsForResource(input *SetTagsForResourceInput) (*SetTagsForResourceOutput, error) { req, out := c.SetTagsForResourceRequest(input) @@ -3006,8 +3521,8 @@ const opStartAssessmentRun = "StartAssessmentRun" // StartAssessmentRunRequest generates a "aws/request.Request" representing the // client's request for the StartAssessmentRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3084,6 +3599,9 @@ func (c *Inspector) StartAssessmentRunRequest(input *StartAssessmentRunInput) (r // You started an assessment run, but one of the instances is already participating // in another assessment run. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/StartAssessmentRun func (c *Inspector) StartAssessmentRun(input *StartAssessmentRunInput) (*StartAssessmentRunOutput, error) { req, out := c.StartAssessmentRunRequest(input) @@ -3110,8 +3628,8 @@ const opStopAssessmentRun = "StopAssessmentRun" // StopAssessmentRunRequest generates a "aws/request.Request" representing the // client's request for the StopAssessmentRun operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3176,6 +3694,9 @@ func (c *Inspector) StopAssessmentRunRequest(input *StopAssessmentRunInput) (req // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/StopAssessmentRun func (c *Inspector) StopAssessmentRun(input *StopAssessmentRunInput) (*StopAssessmentRunOutput, error) { req, out := c.StopAssessmentRunRequest(input) @@ -3202,8 +3723,8 @@ const opSubscribeToEvent = "SubscribeToEvent" // SubscribeToEventRequest generates a "aws/request.Request" representing the // client's request for the SubscribeToEvent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -3273,6 +3794,9 @@ func (c *Inspector) SubscribeToEventRequest(input *SubscribeToEventInput) (req * // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/SubscribeToEvent func (c *Inspector) SubscribeToEvent(input *SubscribeToEventInput) (*SubscribeToEventOutput, error) { req, out := c.SubscribeToEventRequest(input) @@ -3299,8 +3823,8 @@ const opUnsubscribeFromEvent = "UnsubscribeFromEvent" // UnsubscribeFromEventRequest generates a "aws/request.Request" representing the // client's request for the UnsubscribeFromEvent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3366,6 +3890,9 @@ func (c *Inspector) UnsubscribeFromEventRequest(input *UnsubscribeFromEventInput // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/UnsubscribeFromEvent func (c *Inspector) UnsubscribeFromEvent(input *UnsubscribeFromEventInput) (*UnsubscribeFromEventOutput, error) { req, out := c.UnsubscribeFromEventRequest(input) @@ -3392,8 +3919,8 @@ const opUpdateAssessmentTarget = "UpdateAssessmentTarget" // UpdateAssessmentTargetRequest generates a "aws/request.Request" representing the // client's request for the UpdateAssessmentTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3437,6 +3964,9 @@ func (c *Inspector) UpdateAssessmentTargetRequest(input *UpdateAssessmentTargetI // Updates the assessment target that is specified by the ARN of the assessment // target. // +// If resourceGroupArn is not specified, all EC2 instances in the current AWS +// account and region are included in the assessment target. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -3459,6 +3989,9 @@ func (c *Inspector) UpdateAssessmentTargetRequest(input *UpdateAssessmentTargetI // The request was rejected because it referenced an entity that does not exist. // The error code describes the entity. // +// * ErrCodeServiceTemporarilyUnavailableException "ServiceTemporarilyUnavailableException" +// The serice is temporary unavailable. 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/inspector-2016-02-16/UpdateAssessmentTarget func (c *Inspector) UpdateAssessmentTarget(input *UpdateAssessmentTargetInput) (*UpdateAssessmentTargetOutput, error) { req, out := c.UpdateAssessmentTargetRequest(input) @@ -3779,12 +4312,12 @@ type AssessmentRun struct { // The assessment run completion time that corresponds to the rules packages // evaluation completion time or failure. - CompletedAt *time.Time `locationName:"completedAt" type:"timestamp" timestampFormat:"unix"` + CompletedAt *time.Time `locationName:"completedAt" type:"timestamp"` // The time when StartAssessmentRun was called. // // CreatedAt is a required field - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix" required:"true"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" required:"true"` // A Boolean value (true or false) that specifies whether the process of collecting // data from the agents is completed. @@ -3819,7 +4352,7 @@ type AssessmentRun struct { RulesPackageArns []*string `locationName:"rulesPackageArns" min:"1" type:"list" required:"true"` // The time when StartAssessmentRun was called. - StartedAt *time.Time `locationName:"startedAt" type:"timestamp" timestampFormat:"unix"` + StartedAt *time.Time `locationName:"startedAt" type:"timestamp"` // The state of the assessment run. // @@ -3829,7 +4362,7 @@ type AssessmentRun struct { // The last time when the assessment run's state changed. // // StateChangedAt is a required field - StateChangedAt *time.Time `locationName:"stateChangedAt" type:"timestamp" timestampFormat:"unix" required:"true"` + StateChangedAt *time.Time `locationName:"stateChangedAt" type:"timestamp" required:"true"` // A list of the assessment run state changes. // @@ -4151,7 +4684,7 @@ type AssessmentRunNotification struct { // The date of the notification. // // Date is a required field - Date *time.Time `locationName:"date" type:"timestamp" timestampFormat:"unix" required:"true"` + Date *time.Time `locationName:"date" type:"timestamp" required:"true"` // The Boolean value that specifies whether the notification represents an error. // @@ -4231,7 +4764,7 @@ type AssessmentRunStateChange struct { // The last time the assessment run state changed. // // StateChangedAt is a required field - StateChangedAt *time.Time `locationName:"stateChangedAt" type:"timestamp" timestampFormat:"unix" required:"true"` + StateChangedAt *time.Time `locationName:"stateChangedAt" type:"timestamp" required:"true"` } // String returns the string representation @@ -4269,7 +4802,7 @@ type AssessmentTarget struct { // The time at which the assessment target is created. // // CreatedAt is a required field - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix" required:"true"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" required:"true"` // The name of the Amazon Inspector assessment target. // @@ -4278,14 +4811,12 @@ type AssessmentTarget struct { // The ARN that specifies the resource group that is associated with the assessment // target. - // - // ResourceGroupArn is a required field - ResourceGroupArn *string `locationName:"resourceGroupArn" min:"1" type:"string" required:"true"` + ResourceGroupArn *string `locationName:"resourceGroupArn" min:"1" type:"string"` // The time at which UpdateAssessmentTarget is called. 
// // UpdatedAt is a required field - UpdatedAt *time.Time `locationName:"updatedAt" type:"timestamp" timestampFormat:"unix" required:"true"` + UpdatedAt *time.Time `locationName:"updatedAt" type:"timestamp" required:"true"` } // String returns the string representation @@ -4392,9 +4923,9 @@ type AssessmentTemplate struct { // The time at which the assessment template is created. // // CreatedAt is a required field - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix" required:"true"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" required:"true"` - // The duration in seconds specified for this assessment tempate. The default + // The duration in seconds specified for this assessment template. The default // value is 3600 seconds (one hour). The maximum value is 86400 seconds (one // day). // @@ -4403,7 +4934,7 @@ type AssessmentTemplate struct { // The Amazon Resource Name (ARN) of the most recent assessment run associated // with this assessment template. This value exists only when the value of assessmentRunCount - // is greater than zero. + // is greaterpa than zero. LastAssessmentRunArn *string `locationName:"lastAssessmentRunArn" min:"1" type:"string"` // The name of the assessment template. @@ -4574,10 +5105,17 @@ type AssetAttributes struct { // The list of IP v4 addresses of the EC2 instance where the finding is generated. Ipv4Addresses []*string `locationName:"ipv4Addresses" type:"list"` + // An array of the network interfaces interacting with the EC2 instance where + // the finding is generated. + NetworkInterfaces []*NetworkInterface `locationName:"networkInterfaces" type:"list"` + // The schema version of this data type. // // SchemaVersion is a required field SchemaVersion *int64 `locationName:"schemaVersion" type:"integer" required:"true"` + + // The tags related to the EC2 instance where the finding is generated. + Tags []*Tag `locationName:"tags" type:"list"` } // String returns the string representation @@ -4620,12 +5158,24 @@ func (s *AssetAttributes) SetIpv4Addresses(v []*string) *AssetAttributes { return s } +// SetNetworkInterfaces sets the NetworkInterfaces field's value. +func (s *AssetAttributes) SetNetworkInterfaces(v []*NetworkInterface) *AssetAttributes { + s.NetworkInterfaces = v + return s +} + // SetSchemaVersion sets the SchemaVersion field's value. func (s *AssetAttributes) SetSchemaVersion(v int64) *AssetAttributes { s.SchemaVersion = &v return s } +// SetTags sets the Tags field's value. +func (s *AssetAttributes) SetTags(v []*Tag) *AssetAttributes { + s.Tags = v + return s +} + // This data type is used as a request parameter in the AddAttributesToFindings // and CreateAssessmentTemplate actions. type Attribute struct { @@ -4691,10 +5241,9 @@ type CreateAssessmentTargetInput struct { AssessmentTargetName *string `locationName:"assessmentTargetName" min:"1" type:"string" required:"true"` // The ARN that specifies the resource group that is used to create the assessment - // target. - // - // ResourceGroupArn is a required field - ResourceGroupArn *string `locationName:"resourceGroupArn" min:"1" type:"string" required:"true"` + // target. If resourceGroupArn is not specified, all EC2 instances in the current + // AWS account and region are included in the assessment target. 
+ ResourceGroupArn *string `locationName:"resourceGroupArn" min:"1" type:"string"` } // String returns the string representation @@ -4716,9 +5265,6 @@ func (s *CreateAssessmentTargetInput) Validate() error { if s.AssessmentTargetName != nil && len(*s.AssessmentTargetName) < 1 { invalidParams.Add(request.NewErrParamMinLen("AssessmentTargetName", 1)) } - if s.ResourceGroupArn == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceGroupArn")) - } if s.ResourceGroupArn != nil && len(*s.ResourceGroupArn) < 1 { invalidParams.Add(request.NewErrParamMinLen("ResourceGroupArn", 1)) } @@ -4783,8 +5329,7 @@ type CreateAssessmentTemplateInput struct { // AssessmentTemplateName is a required field AssessmentTemplateName *string `locationName:"assessmentTemplateName" min:"1" type:"string" required:"true"` - // The duration of the assessment run in seconds. The default value is 3600 - // seconds (one hour). + // The duration of the assessment run in seconds. // // DurationInSeconds is a required field DurationInSeconds *int64 `locationName:"durationInSeconds" min:"180" type:"integer" required:"true"` @@ -4908,6 +5453,75 @@ func (s *CreateAssessmentTemplateOutput) SetAssessmentTemplateArn(v string) *Cre return s } +type CreateExclusionsPreviewInput struct { + _ struct{} `type:"structure"` + + // The ARN that specifies the assessment template for which you want to create + // an exclusions preview. + // + // AssessmentTemplateArn is a required field + AssessmentTemplateArn *string `locationName:"assessmentTemplateArn" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateExclusionsPreviewInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateExclusionsPreviewInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateExclusionsPreviewInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateExclusionsPreviewInput"} + if s.AssessmentTemplateArn == nil { + invalidParams.Add(request.NewErrParamRequired("AssessmentTemplateArn")) + } + if s.AssessmentTemplateArn != nil && len(*s.AssessmentTemplateArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AssessmentTemplateArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssessmentTemplateArn sets the AssessmentTemplateArn field's value. +func (s *CreateExclusionsPreviewInput) SetAssessmentTemplateArn(v string) *CreateExclusionsPreviewInput { + s.AssessmentTemplateArn = &v + return s +} + +type CreateExclusionsPreviewOutput struct { + _ struct{} `type:"structure"` + + // Specifies the unique identifier of the requested exclusions preview. You + // can use the unique identifier to retrieve the exclusions preview when running + // the GetExclusionsPreview API. + // + // PreviewToken is a required field + PreviewToken *string `locationName:"previewToken" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateExclusionsPreviewOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateExclusionsPreviewOutput) GoString() string { + return s.String() +} + +// SetPreviewToken sets the PreviewToken field's value. 
+func (s *CreateExclusionsPreviewOutput) SetPreviewToken(v string) *CreateExclusionsPreviewOutput { + s.PreviewToken = &v + return s +} + type CreateResourceGroupInput struct { _ struct{} `type:"structure"` @@ -5403,7 +6017,7 @@ type DescribeCrossAccountAccessRoleOutput struct { // The date when the cross-account access role was registered. // // RegisteredAt is a required field - RegisteredAt *time.Time `locationName:"registeredAt" type:"timestamp" timestampFormat:"unix" required:"true"` + RegisteredAt *time.Time `locationName:"registeredAt" type:"timestamp" required:"true"` // The ARN that specifies the IAM role that Amazon Inspector uses to access // your AWS account. @@ -5446,6 +6060,94 @@ func (s *DescribeCrossAccountAccessRoleOutput) SetValid(v bool) *DescribeCrossAc return s } +type DescribeExclusionsInput struct { + _ struct{} `type:"structure"` + + // The list of ARNs that specify the exclusions that you want to describe. + // + // ExclusionArns is a required field + ExclusionArns []*string `locationName:"exclusionArns" min:"1" type:"list" required:"true"` + + // The locale into which you want to translate the exclusion's title, description, + // and recommendation. + Locale *string `locationName:"locale" type:"string" enum:"Locale"` +} + +// String returns the string representation +func (s DescribeExclusionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeExclusionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeExclusionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeExclusionsInput"} + if s.ExclusionArns == nil { + invalidParams.Add(request.NewErrParamRequired("ExclusionArns")) + } + if s.ExclusionArns != nil && len(s.ExclusionArns) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExclusionArns", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExclusionArns sets the ExclusionArns field's value. +func (s *DescribeExclusionsInput) SetExclusionArns(v []*string) *DescribeExclusionsInput { + s.ExclusionArns = v + return s +} + +// SetLocale sets the Locale field's value. +func (s *DescribeExclusionsInput) SetLocale(v string) *DescribeExclusionsInput { + s.Locale = &v + return s +} + +type DescribeExclusionsOutput struct { + _ struct{} `type:"structure"` + + // Information about the exclusions. + // + // Exclusions is a required field + Exclusions map[string]*Exclusion `locationName:"exclusions" min:"1" type:"map" required:"true"` + + // Exclusion details that cannot be described. An error code is provided for + // each failed item. + // + // FailedItems is a required field + FailedItems map[string]*FailedItemDetails `locationName:"failedItems" type:"map" required:"true"` +} + +// String returns the string representation +func (s DescribeExclusionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeExclusionsOutput) GoString() string { + return s.String() +} + +// SetExclusions sets the Exclusions field's value. +func (s *DescribeExclusionsOutput) SetExclusions(v map[string]*Exclusion) *DescribeExclusionsOutput { + s.Exclusions = v + return s +} + +// SetFailedItems sets the FailedItems field's value. 
+func (s *DescribeExclusionsOutput) SetFailedItems(v map[string]*FailedItemDetails) *DescribeExclusionsOutput { + s.FailedItems = v + return s +} + type DescribeFindingsInput struct { _ struct{} `type:"structure"` @@ -5762,7 +6464,7 @@ type EventSubscription struct { // The time at which SubscribeToEvent is called. // // SubscribedAt is a required field - SubscribedAt *time.Time `locationName:"subscribedAt" type:"timestamp" timestampFormat:"unix" required:"true"` + SubscribedAt *time.Time `locationName:"subscribedAt" type:"timestamp" required:"true"` } // String returns the string representation @@ -5787,6 +6489,154 @@ func (s *EventSubscription) SetSubscribedAt(v time.Time) *EventSubscription { return s } +// Contains information about what was excluded from an assessment run. +type Exclusion struct { + _ struct{} `type:"structure"` + + // The ARN that specifies the exclusion. + // + // Arn is a required field + Arn *string `locationName:"arn" min:"1" type:"string" required:"true"` + + // The system-defined attributes for the exclusion. + Attributes []*Attribute `locationName:"attributes" type:"list"` + + // The description of the exclusion. + // + // Description is a required field + Description *string `locationName:"description" type:"string" required:"true"` + + // The recommendation for the exclusion. + // + // Recommendation is a required field + Recommendation *string `locationName:"recommendation" type:"string" required:"true"` + + // The AWS resources for which the exclusion pertains. + // + // Scopes is a required field + Scopes []*Scope `locationName:"scopes" min:"1" type:"list" required:"true"` + + // The name of the exclusion. + // + // Title is a required field + Title *string `locationName:"title" type:"string" required:"true"` +} + +// String returns the string representation +func (s Exclusion) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Exclusion) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *Exclusion) SetArn(v string) *Exclusion { + s.Arn = &v + return s +} + +// SetAttributes sets the Attributes field's value. +func (s *Exclusion) SetAttributes(v []*Attribute) *Exclusion { + s.Attributes = v + return s +} + +// SetDescription sets the Description field's value. +func (s *Exclusion) SetDescription(v string) *Exclusion { + s.Description = &v + return s +} + +// SetRecommendation sets the Recommendation field's value. +func (s *Exclusion) SetRecommendation(v string) *Exclusion { + s.Recommendation = &v + return s +} + +// SetScopes sets the Scopes field's value. +func (s *Exclusion) SetScopes(v []*Scope) *Exclusion { + s.Scopes = v + return s +} + +// SetTitle sets the Title field's value. +func (s *Exclusion) SetTitle(v string) *Exclusion { + s.Title = &v + return s +} + +// Contains information about what is excluded from an assessment run given +// the current state of the assessment template. +type ExclusionPreview struct { + _ struct{} `type:"structure"` + + // The system-defined attributes for the exclusion preview. + Attributes []*Attribute `locationName:"attributes" type:"list"` + + // The description of the exclusion preview. + // + // Description is a required field + Description *string `locationName:"description" type:"string" required:"true"` + + // The recommendation for the exclusion preview. 
+ // + // Recommendation is a required field + Recommendation *string `locationName:"recommendation" type:"string" required:"true"` + + // The AWS resources for which the exclusion preview pertains. + // + // Scopes is a required field + Scopes []*Scope `locationName:"scopes" min:"1" type:"list" required:"true"` + + // The name of the exclusion preview. + // + // Title is a required field + Title *string `locationName:"title" type:"string" required:"true"` +} + +// String returns the string representation +func (s ExclusionPreview) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExclusionPreview) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *ExclusionPreview) SetAttributes(v []*Attribute) *ExclusionPreview { + s.Attributes = v + return s +} + +// SetDescription sets the Description field's value. +func (s *ExclusionPreview) SetDescription(v string) *ExclusionPreview { + s.Description = &v + return s +} + +// SetRecommendation sets the Recommendation field's value. +func (s *ExclusionPreview) SetRecommendation(v string) *ExclusionPreview { + s.Recommendation = &v + return s +} + +// SetScopes sets the Scopes field's value. +func (s *ExclusionPreview) SetScopes(v []*Scope) *ExclusionPreview { + s.Scopes = v + return s +} + +// SetTitle sets the Title field's value. +func (s *ExclusionPreview) SetTitle(v string) *ExclusionPreview { + s.Title = &v + return s +} + // Includes details about the failed items. type FailedItemDetails struct { _ struct{} `type:"structure"` @@ -5852,7 +6702,7 @@ type Finding struct { // The time when the finding was generated. // // CreatedAt is a required field - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix" required:"true"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" required:"true"` // The description of the finding. Description *string `locationName:"description" type:"string"` @@ -5887,7 +6737,7 @@ type Finding struct { // The time when AddAttributesToFindings is called. // // UpdatedAt is a required field - UpdatedAt *time.Time `locationName:"updatedAt" type:"timestamp" timestampFormat:"unix" required:"true"` + UpdatedAt *time.Time `locationName:"updatedAt" type:"timestamp" required:"true"` // The user-defined attributes that are assigned to the finding. // @@ -6199,56 +7049,193 @@ func (s *GetAssessmentReportInput) Validate() error { return nil } -// SetAssessmentRunArn sets the AssessmentRunArn field's value. -func (s *GetAssessmentReportInput) SetAssessmentRunArn(v string) *GetAssessmentReportInput { - s.AssessmentRunArn = &v +// SetAssessmentRunArn sets the AssessmentRunArn field's value. +func (s *GetAssessmentReportInput) SetAssessmentRunArn(v string) *GetAssessmentReportInput { + s.AssessmentRunArn = &v + return s +} + +// SetReportFileFormat sets the ReportFileFormat field's value. +func (s *GetAssessmentReportInput) SetReportFileFormat(v string) *GetAssessmentReportInput { + s.ReportFileFormat = &v + return s +} + +// SetReportType sets the ReportType field's value. +func (s *GetAssessmentReportInput) SetReportType(v string) *GetAssessmentReportInput { + s.ReportType = &v + return s +} + +type GetAssessmentReportOutput struct { + _ struct{} `type:"structure"` + + // Specifies the status of the request to generate an assessment report. 
+ // + // Status is a required field + Status *string `locationName:"status" type:"string" required:"true" enum:"ReportStatus"` + + // Specifies the URL where you can find the generated assessment report. This + // parameter is only returned if the report is successfully generated. + Url *string `locationName:"url" type:"string"` +} + +// String returns the string representation +func (s GetAssessmentReportOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAssessmentReportOutput) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *GetAssessmentReportOutput) SetStatus(v string) *GetAssessmentReportOutput { + s.Status = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *GetAssessmentReportOutput) SetUrl(v string) *GetAssessmentReportOutput { + s.Url = &v + return s +} + +type GetExclusionsPreviewInput struct { + _ struct{} `type:"structure"` + + // The ARN that specifies the assessment template for which the exclusions preview + // was requested. + // + // AssessmentTemplateArn is a required field + AssessmentTemplateArn *string `locationName:"assessmentTemplateArn" min:"1" type:"string" required:"true"` + + // The locale into which you want to translate the exclusion's title, description, + // and recommendation. + Locale *string `locationName:"locale" type:"string" enum:"Locale"` + + // You can use this parameter to indicate the maximum number of items you want + // in the response. The default value is 100. The maximum value is 500. + MaxResults *int64 `locationName:"maxResults" type:"integer"` + + // You can use this parameter when paginating results. Set the value of this + // parameter to null on your first call to the GetExclusionsPreviewRequest action. + // Subsequent calls to the action fill nextToken in the request with the value + // of nextToken from the previous response to continue listing data. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // The unique identifier associated of the exclusions preview. + // + // PreviewToken is a required field + PreviewToken *string `locationName:"previewToken" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetExclusionsPreviewInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetExclusionsPreviewInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetExclusionsPreviewInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetExclusionsPreviewInput"} + if s.AssessmentTemplateArn == nil { + invalidParams.Add(request.NewErrParamRequired("AssessmentTemplateArn")) + } + if s.AssessmentTemplateArn != nil && len(*s.AssessmentTemplateArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AssessmentTemplateArn", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.PreviewToken == nil { + invalidParams.Add(request.NewErrParamRequired("PreviewToken")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssessmentTemplateArn sets the AssessmentTemplateArn field's value. 
+func (s *GetExclusionsPreviewInput) SetAssessmentTemplateArn(v string) *GetExclusionsPreviewInput { + s.AssessmentTemplateArn = &v return s } -// SetReportFileFormat sets the ReportFileFormat field's value. -func (s *GetAssessmentReportInput) SetReportFileFormat(v string) *GetAssessmentReportInput { - s.ReportFileFormat = &v +// SetLocale sets the Locale field's value. +func (s *GetExclusionsPreviewInput) SetLocale(v string) *GetExclusionsPreviewInput { + s.Locale = &v return s } -// SetReportType sets the ReportType field's value. -func (s *GetAssessmentReportInput) SetReportType(v string) *GetAssessmentReportInput { - s.ReportType = &v +// SetMaxResults sets the MaxResults field's value. +func (s *GetExclusionsPreviewInput) SetMaxResults(v int64) *GetExclusionsPreviewInput { + s.MaxResults = &v return s } -type GetAssessmentReportOutput struct { +// SetNextToken sets the NextToken field's value. +func (s *GetExclusionsPreviewInput) SetNextToken(v string) *GetExclusionsPreviewInput { + s.NextToken = &v + return s +} + +// SetPreviewToken sets the PreviewToken field's value. +func (s *GetExclusionsPreviewInput) SetPreviewToken(v string) *GetExclusionsPreviewInput { + s.PreviewToken = &v + return s +} + +type GetExclusionsPreviewOutput struct { _ struct{} `type:"structure"` - // Specifies the status of the request to generate an assessment report. - // - // Status is a required field - Status *string `locationName:"status" type:"string" required:"true" enum:"ReportStatus"` + // Information about the exclusions included in the preview. + ExclusionPreviews []*ExclusionPreview `locationName:"exclusionPreviews" type:"list"` - // Specifies the URL where you can find the generated assessment report. This - // parameter is only returned if the report is successfully generated. - Url *string `locationName:"url" type:"string"` + // When a response is generated, if there is more data to be listed, this parameters + // is present in the response and contains the value to use for the nextToken + // parameter in a subsequent pagination request. If there is no more data to + // be listed, this parameter is set to null. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // Specifies the status of the request to generate an exclusions preview. + // + // PreviewStatus is a required field + PreviewStatus *string `locationName:"previewStatus" type:"string" required:"true" enum:"PreviewStatus"` } // String returns the string representation -func (s GetAssessmentReportOutput) String() string { +func (s GetExclusionsPreviewOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetAssessmentReportOutput) GoString() string { +func (s GetExclusionsPreviewOutput) GoString() string { return s.String() } -// SetStatus sets the Status field's value. -func (s *GetAssessmentReportOutput) SetStatus(v string) *GetAssessmentReportOutput { - s.Status = &v +// SetExclusionPreviews sets the ExclusionPreviews field's value. +func (s *GetExclusionsPreviewOutput) SetExclusionPreviews(v []*ExclusionPreview) *GetExclusionsPreviewOutput { + s.ExclusionPreviews = v return s } -// SetUrl sets the Url field's value. -func (s *GetAssessmentReportOutput) SetUrl(v string) *GetAssessmentReportOutput { - s.Url = &v +// SetNextToken sets the NextToken field's value. +func (s *GetExclusionsPreviewOutput) SetNextToken(v string) *GetExclusionsPreviewOutput { + s.NextToken = &v + return s +} + +// SetPreviewStatus sets the PreviewStatus field's value. 
+func (s *GetExclusionsPreviewOutput) SetPreviewStatus(v string) *GetExclusionsPreviewOutput { + s.PreviewStatus = &v return s } @@ -6877,6 +7864,110 @@ func (s *ListEventSubscriptionsOutput) SetSubscriptions(v []*Subscription) *List return s } +type ListExclusionsInput struct { + _ struct{} `type:"structure"` + + // The ARN of the assessment run that generated the exclusions that you want + // to list. + // + // AssessmentRunArn is a required field + AssessmentRunArn *string `locationName:"assessmentRunArn" min:"1" type:"string" required:"true"` + + // You can use this parameter to indicate the maximum number of items you want + // in the response. The default value is 100. The maximum value is 500. + MaxResults *int64 `locationName:"maxResults" type:"integer"` + + // You can use this parameter when paginating results. Set the value of this + // parameter to null on your first call to the ListExclusionsRequest action. + // Subsequent calls to the action fill nextToken in the request with the value + // of nextToken from the previous response to continue listing data. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s ListExclusionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListExclusionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListExclusionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListExclusionsInput"} + if s.AssessmentRunArn == nil { + invalidParams.Add(request.NewErrParamRequired("AssessmentRunArn")) + } + if s.AssessmentRunArn != nil && len(*s.AssessmentRunArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AssessmentRunArn", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssessmentRunArn sets the AssessmentRunArn field's value. +func (s *ListExclusionsInput) SetAssessmentRunArn(v string) *ListExclusionsInput { + s.AssessmentRunArn = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListExclusionsInput) SetMaxResults(v int64) *ListExclusionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListExclusionsInput) SetNextToken(v string) *ListExclusionsInput { + s.NextToken = &v + return s +} + +type ListExclusionsOutput struct { + _ struct{} `type:"structure"` + + // A list of exclusions' ARNs returned by the action. + // + // ExclusionArns is a required field + ExclusionArns []*string `locationName:"exclusionArns" type:"list" required:"true"` + + // When a response is generated, if there is more data to be listed, this parameters + // is present in the response and contains the value to use for the nextToken + // parameter in a subsequent pagination request. If there is no more data to + // be listed, this parameter is set to null. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s ListExclusionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListExclusionsOutput) GoString() string { + return s.String() +} + +// SetExclusionArns sets the ExclusionArns field's value. 
+func (s *ListExclusionsOutput) SetExclusionArns(v []*string) *ListExclusionsOutput { + s.ExclusionArns = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListExclusionsOutput) SetNextToken(v string) *ListExclusionsOutput { + s.NextToken = &v + return s +} + type ListFindingsInput struct { _ struct{} `type:"structure"` @@ -7144,6 +8235,115 @@ func (s *ListTagsForResourceOutput) SetTags(v []*Tag) *ListTagsForResourceOutput return s } +// Contains information about the network interfaces interacting with an EC2 +// instance. This data type is used as one of the elements of the AssetAttributes +// data type. +type NetworkInterface struct { + _ struct{} `type:"structure"` + + // The IP addresses associated with the network interface. + Ipv6Addresses []*string `locationName:"ipv6Addresses" type:"list"` + + // The ID of the network interface. + NetworkInterfaceId *string `locationName:"networkInterfaceId" type:"string"` + + // The name of a private DNS associated with the network interface. + PrivateDnsName *string `locationName:"privateDnsName" type:"string"` + + // The private IP address associated with the network interface. + PrivateIpAddress *string `locationName:"privateIpAddress" type:"string"` + + // A list of the private IP addresses associated with the network interface. + // Includes the privateDnsName and privateIpAddress. + PrivateIpAddresses []*PrivateIp `locationName:"privateIpAddresses" type:"list"` + + // The name of a public DNS associated with the network interface. + PublicDnsName *string `locationName:"publicDnsName" type:"string"` + + // The public IP address from which the network interface is reachable. + PublicIp *string `locationName:"publicIp" type:"string"` + + // A list of the security groups associated with the network interface. Includes + // the groupId and groupName. + SecurityGroups []*SecurityGroup `locationName:"securityGroups" type:"list"` + + // The ID of a subnet associated with the network interface. + SubnetId *string `locationName:"subnetId" type:"string"` + + // The ID of a VPC associated with the network interface. + VpcId *string `locationName:"vpcId" type:"string"` +} + +// String returns the string representation +func (s NetworkInterface) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NetworkInterface) GoString() string { + return s.String() +} + +// SetIpv6Addresses sets the Ipv6Addresses field's value. +func (s *NetworkInterface) SetIpv6Addresses(v []*string) *NetworkInterface { + s.Ipv6Addresses = v + return s +} + +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *NetworkInterface) SetNetworkInterfaceId(v string) *NetworkInterface { + s.NetworkInterfaceId = &v + return s +} + +// SetPrivateDnsName sets the PrivateDnsName field's value. +func (s *NetworkInterface) SetPrivateDnsName(v string) *NetworkInterface { + s.PrivateDnsName = &v + return s +} + +// SetPrivateIpAddress sets the PrivateIpAddress field's value. +func (s *NetworkInterface) SetPrivateIpAddress(v string) *NetworkInterface { + s.PrivateIpAddress = &v + return s +} + +// SetPrivateIpAddresses sets the PrivateIpAddresses field's value. +func (s *NetworkInterface) SetPrivateIpAddresses(v []*PrivateIp) *NetworkInterface { + s.PrivateIpAddresses = v + return s +} + +// SetPublicDnsName sets the PublicDnsName field's value. 
+func (s *NetworkInterface) SetPublicDnsName(v string) *NetworkInterface { + s.PublicDnsName = &v + return s +} + +// SetPublicIp sets the PublicIp field's value. +func (s *NetworkInterface) SetPublicIp(v string) *NetworkInterface { + s.PublicIp = &v + return s +} + +// SetSecurityGroups sets the SecurityGroups field's value. +func (s *NetworkInterface) SetSecurityGroups(v []*SecurityGroup) *NetworkInterface { + s.SecurityGroups = v + return s +} + +// SetSubnetId sets the SubnetId field's value. +func (s *NetworkInterface) SetSubnetId(v string) *NetworkInterface { + s.SubnetId = &v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *NetworkInterface) SetVpcId(v string) *NetworkInterface { + s.VpcId = &v + return s +} + type PreviewAgentsInput struct { _ struct{} `type:"structure"` @@ -7247,6 +8447,41 @@ func (s *PreviewAgentsOutput) SetNextToken(v string) *PreviewAgentsOutput { return s } +// Contains information about a private IP address associated with a network +// interface. This data type is used as a response element in the DescribeFindings +// action. +type PrivateIp struct { + _ struct{} `type:"structure"` + + // The DNS name of the private IP address. + PrivateDnsName *string `locationName:"privateDnsName" type:"string"` + + // The full IP address of the network inteface. + PrivateIpAddress *string `locationName:"privateIpAddress" type:"string"` +} + +// String returns the string representation +func (s PrivateIp) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PrivateIp) GoString() string { + return s.String() +} + +// SetPrivateDnsName sets the PrivateDnsName field's value. +func (s *PrivateIp) SetPrivateDnsName(v string) *PrivateIp { + s.PrivateDnsName = &v + return s +} + +// SetPrivateIpAddress sets the PrivateIpAddress field's value. +func (s *PrivateIp) SetPrivateIpAddress(v string) *PrivateIp { + s.PrivateIpAddress = &v + return s +} + type RegisterCrossAccountAccessRoleInput struct { _ struct{} `type:"structure"` @@ -7399,7 +8634,7 @@ type ResourceGroup struct { // The time at which resource group is created. // // CreatedAt is a required field - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix" required:"true"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" required:"true"` // The tags (key and value pairs) of the resource group. This data type property // is used in the CreateResourceGroup action. @@ -7559,6 +8794,74 @@ func (s *RulesPackage) SetVersion(v string) *RulesPackage { return s } +// This data type contains key-value pairs that identify various Amazon resources. +type Scope struct { + _ struct{} `type:"structure"` + + // The type of the scope. + Key *string `locationName:"key" type:"string" enum:"ScopeType"` + + // The resource identifier for the specified scope type. + Value *string `locationName:"value" type:"string"` +} + +// String returns the string representation +func (s Scope) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Scope) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *Scope) SetKey(v string) *Scope { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Scope) SetValue(v string) *Scope { + s.Value = &v + return s +} + +// Contains information about a security group associated with a network interface. 
+// This data type is used as one of the elements of the NetworkInterface data +// type. +type SecurityGroup struct { + _ struct{} `type:"structure"` + + // The ID of the security group. + GroupId *string `locationName:"groupId" type:"string"` + + // The name of the security group. + GroupName *string `locationName:"groupName" type:"string"` +} + +// String returns the string representation +func (s SecurityGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SecurityGroup) GoString() string { + return s.String() +} + +// SetGroupId sets the GroupId field's value. +func (s *SecurityGroup) SetGroupId(v string) *SecurityGroup { + s.GroupId = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *SecurityGroup) SetGroupName(v string) *SecurityGroup { + s.GroupName = &v + return s +} + // This data type is used in the Finding data type. type ServiceAttributes struct { _ struct{} `type:"structure"` @@ -8073,10 +9376,10 @@ type TimestampRange struct { _ struct{} `type:"structure"` // The minimum value of the timestamp range. - BeginDate *time.Time `locationName:"beginDate" type:"timestamp" timestampFormat:"unix"` + BeginDate *time.Time `locationName:"beginDate" type:"timestamp"` // The maximum value of the timestamp range. - EndDate *time.Time `locationName:"endDate" type:"timestamp" timestampFormat:"unix"` + EndDate *time.Time `locationName:"endDate" type:"timestamp"` } // String returns the string representation @@ -8203,9 +9506,7 @@ type UpdateAssessmentTargetInput struct { // The ARN of the resource group that is used to specify the new resource group // to associate with the assessment target. - // - // ResourceGroupArn is a required field - ResourceGroupArn *string `locationName:"resourceGroupArn" min:"1" type:"string" required:"true"` + ResourceGroupArn *string `locationName:"resourceGroupArn" min:"1" type:"string"` } // String returns the string representation @@ -8233,9 +9534,6 @@ func (s *UpdateAssessmentTargetInput) Validate() error { if s.AssessmentTargetName != nil && len(*s.AssessmentTargetName) < 1 { invalidParams.Add(request.NewErrParamMinLen("AssessmentTargetName", 1)) } - if s.ResourceGroupArn == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceGroupArn")) - } if s.ResourceGroupArn != nil && len(*s.ResourceGroupArn) < 1 { invalidParams.Add(request.NewErrParamMinLen("ResourceGroupArn", 1)) } @@ -8652,6 +9950,14 @@ const ( NoSuchEntityErrorCodeIamRoleDoesNotExist = "IAM_ROLE_DOES_NOT_EXIST" ) +const ( + // PreviewStatusWorkInProgress is a PreviewStatus enum value + PreviewStatusWorkInProgress = "WORK_IN_PROGRESS" + + // PreviewStatusCompleted is a PreviewStatus enum value + PreviewStatusCompleted = "COMPLETED" +) + const ( // ReportFileFormatHtml is a ReportFileFormat enum value ReportFileFormatHtml = "HTML" @@ -8679,6 +9985,14 @@ const ( ReportTypeFull = "FULL" ) +const ( + // ScopeTypeInstanceId is a ScopeType enum value + ScopeTypeInstanceId = "INSTANCE_ID" + + // ScopeTypeRulesPackageArn is a ScopeType enum value + ScopeTypeRulesPackageArn = "RULES_PACKAGE_ARN" +) + const ( // SeverityLow is a Severity enum value SeverityLow = "Low" diff --git a/vendor/github.com/aws/aws-sdk-go/service/inspector/errors.go b/vendor/github.com/aws/aws-sdk-go/service/inspector/errors.go index abdadcccbb6..9e106a58743 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/inspector/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/inspector/errors.go @@ -58,6 
+58,19 @@ const ( // The error code describes the entity. ErrCodeNoSuchEntityException = "NoSuchEntityException" + // ErrCodePreviewGenerationInProgressException for service response error code + // "PreviewGenerationInProgressException". + // + // The request is rejected. The specified assessment template is currently generating + // an exclusions preview. + ErrCodePreviewGenerationInProgressException = "PreviewGenerationInProgressException" + + // ErrCodeServiceTemporarilyUnavailableException for service response error code + // "ServiceTemporarilyUnavailableException". + // + // The serice is temporary unavailable. + ErrCodeServiceTemporarilyUnavailableException = "ServiceTemporarilyUnavailableException" + // ErrCodeUnsupportedFeatureException for service response error code // "UnsupportedFeatureException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/inspector/service.go b/vendor/github.com/aws/aws-sdk-go/service/inspector/service.go index 1d65f070781..2e68b4e4d23 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/inspector/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/inspector/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "inspector" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "inspector" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Inspector" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Inspector client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/iot/api.go b/vendor/github.com/aws/aws-sdk-go/service/iot/api.go index 073a4623566..8dfe8c4fa4b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/iot/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/iot/api.go @@ -17,8 +17,8 @@ const opAcceptCertificateTransfer = "AcceptCertificateTransfer" // AcceptCertificateTransferRequest generates a "aws/request.Request" representing the // client's request for the AcceptCertificateTransfer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -114,12 +114,97 @@ func (c *IoT) AcceptCertificateTransferWithContext(ctx aws.Context, input *Accep return out, req.Send() } +const opAddThingToBillingGroup = "AddThingToBillingGroup" + +// AddThingToBillingGroupRequest generates a "aws/request.Request" representing the +// client's request for the AddThingToBillingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See AddThingToBillingGroup for more information on using the AddThingToBillingGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddThingToBillingGroupRequest method. +// req, resp := client.AddThingToBillingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) AddThingToBillingGroupRequest(input *AddThingToBillingGroupInput) (req *request.Request, output *AddThingToBillingGroupOutput) { + op := &request.Operation{ + Name: opAddThingToBillingGroup, + HTTPMethod: "PUT", + HTTPPath: "/billing-groups/addThingToBillingGroup", + } + + if input == nil { + input = &AddThingToBillingGroupInput{} + } + + output = &AddThingToBillingGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddThingToBillingGroup API operation for AWS IoT. +// +// Adds a thing to a billing group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation AddThingToBillingGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) AddThingToBillingGroup(input *AddThingToBillingGroupInput) (*AddThingToBillingGroupOutput, error) { + req, out := c.AddThingToBillingGroupRequest(input) + return out, req.Send() +} + +// AddThingToBillingGroupWithContext is the same as AddThingToBillingGroup with the addition of +// the ability to pass a context and additional request options. +// +// See AddThingToBillingGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) AddThingToBillingGroupWithContext(ctx aws.Context, input *AddThingToBillingGroupInput, opts ...request.Option) (*AddThingToBillingGroupOutput, error) { + req, out := c.AddThingToBillingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opAddThingToThingGroup = "AddThingToThingGroup" // AddThingToThingGroupRequest generates a "aws/request.Request" representing the // client's request for the AddThingToThingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -203,8 +288,8 @@ const opAssociateTargetsWithJob = "AssociateTargetsWithJob" // AssociateTargetsWithJobRequest generates a "aws/request.Request" representing the // client's request for the AssociateTargetsWithJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -266,7 +351,7 @@ func (c *IoT) AssociateTargetsWithJobRequest(input *AssociateTargetsWithJobInput // The specified resource does not exist. // // * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. +// A limit has been exceeded. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. @@ -299,8 +384,8 @@ const opAttachPolicy = "AttachPolicy" // AttachPolicyRequest generates a "aws/request.Request" representing the // client's request for the AttachPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -368,7 +453,7 @@ func (c *IoT) AttachPolicyRequest(input *AttachPolicyInput) (req *request.Reques // An unexpected error has occurred. // // * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. +// A limit has been exceeded. // func (c *IoT) AttachPolicy(input *AttachPolicyInput) (*AttachPolicyOutput, error) { req, out := c.AttachPolicyRequest(input) @@ -395,8 +480,8 @@ const opAttachPrincipalPolicy = "AttachPrincipalPolicy" // AttachPrincipalPolicyRequest generates a "aws/request.Request" representing the // client's request for the AttachPrincipalPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -415,6 +500,8 @@ const opAttachPrincipalPolicy = "AttachPrincipalPolicy" // if err == nil { // resp is now filled // fmt.Println(resp) // } +// +// Deprecated: AttachPrincipalPolicy has been deprecated func (c *IoT) AttachPrincipalPolicyRequest(input *AttachPrincipalPolicyInput) (req *request.Request, output *AttachPrincipalPolicyOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, AttachPrincipalPolicy, has been deprecated") @@ -470,8 +557,10 @@ func (c *IoT) AttachPrincipalPolicyRequest(input *AttachPrincipalPolicyInput) (r // An unexpected error has occurred. // // * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. +// A limit has been exceeded. 
// +// +// Deprecated: AttachPrincipalPolicy has been deprecated func (c *IoT) AttachPrincipalPolicy(input *AttachPrincipalPolicyInput) (*AttachPrincipalPolicyOutput, error) { req, out := c.AttachPrincipalPolicyRequest(input) return out, req.Send() @@ -486,6 +575,8 @@ func (c *IoT) AttachPrincipalPolicy(input *AttachPrincipalPolicyInput) (*AttachP // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: AttachPrincipalPolicyWithContext has been deprecated func (c *IoT) AttachPrincipalPolicyWithContext(ctx aws.Context, input *AttachPrincipalPolicyInput, opts ...request.Option) (*AttachPrincipalPolicyOutput, error) { req, out := c.AttachPrincipalPolicyRequest(input) req.SetContext(ctx) @@ -493,12 +584,106 @@ func (c *IoT) AttachPrincipalPolicyWithContext(ctx aws.Context, input *AttachPri return out, req.Send() } +const opAttachSecurityProfile = "AttachSecurityProfile" + +// AttachSecurityProfileRequest generates a "aws/request.Request" representing the +// client's request for the AttachSecurityProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AttachSecurityProfile for more information on using the AttachSecurityProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AttachSecurityProfileRequest method. +// req, resp := client.AttachSecurityProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) AttachSecurityProfileRequest(input *AttachSecurityProfileInput) (req *request.Request, output *AttachSecurityProfileOutput) { + op := &request.Operation{ + Name: opAttachSecurityProfile, + HTTPMethod: "PUT", + HTTPPath: "/security-profiles/{securityProfileName}/targets", + } + + if input == nil { + input = &AttachSecurityProfileInput{} + } + + output = &AttachSecurityProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// AttachSecurityProfile API operation for AWS IoT. +// +// Associates a Device Defender security profile with a thing group or with +// this account. Each thing group or account can have up to five security profiles +// associated with it. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation AttachSecurityProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. 
+// +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) AttachSecurityProfile(input *AttachSecurityProfileInput) (*AttachSecurityProfileOutput, error) { + req, out := c.AttachSecurityProfileRequest(input) + return out, req.Send() +} + +// AttachSecurityProfileWithContext is the same as AttachSecurityProfile with the addition of +// the ability to pass a context and additional request options. +// +// See AttachSecurityProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) AttachSecurityProfileWithContext(ctx aws.Context, input *AttachSecurityProfileInput, opts ...request.Option) (*AttachSecurityProfileOutput, error) { + req, out := c.AttachSecurityProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opAttachThingPrincipal = "AttachThingPrincipal" // AttachThingPrincipalRequest generates a "aws/request.Request" representing the // client's request for the AttachThingPrincipal operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -584,12 +769,99 @@ func (c *IoT) AttachThingPrincipalWithContext(ctx aws.Context, input *AttachThin return out, req.Send() } +const opCancelAuditTask = "CancelAuditTask" + +// CancelAuditTaskRequest generates a "aws/request.Request" representing the +// client's request for the CancelAuditTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelAuditTask for more information on using the CancelAuditTask +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelAuditTaskRequest method. 
+// req, resp := client.CancelAuditTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) CancelAuditTaskRequest(input *CancelAuditTaskInput) (req *request.Request, output *CancelAuditTaskOutput) { + op := &request.Operation{ + Name: opCancelAuditTask, + HTTPMethod: "PUT", + HTTPPath: "/audit/tasks/{taskId}/cancel", + } + + if input == nil { + input = &CancelAuditTaskInput{} + } + + output = &CancelAuditTaskOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelAuditTask API operation for AWS IoT. +// +// Cancels an audit that is in progress. The audit can be either scheduled or +// on-demand. If the audit is not in progress, an "InvalidRequestException" +// occurs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation CancelAuditTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) CancelAuditTask(input *CancelAuditTaskInput) (*CancelAuditTaskOutput, error) { + req, out := c.CancelAuditTaskRequest(input) + return out, req.Send() +} + +// CancelAuditTaskWithContext is the same as CancelAuditTask with the addition of +// the ability to pass a context and additional request options. +// +// See CancelAuditTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) CancelAuditTaskWithContext(ctx aws.Context, input *CancelAuditTaskInput, opts ...request.Option) (*CancelAuditTaskOutput, error) { + req, out := c.CancelAuditTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCancelCertificateTransfer = "CancelCertificateTransfer" // CancelCertificateTransferRequest generates a "aws/request.Request" representing the // client's request for the CancelCertificateTransfer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -694,8 +966,8 @@ const opCancelJob = "CancelJob" // CancelJobRequest generates a "aws/request.Request" representing the // client's request for the CancelJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -775,133 +1047,229 @@ func (c *IoT) CancelJobWithContext(ctx aws.Context, input *CancelJobInput, opts return out, req.Send() } -const opClearDefaultAuthorizer = "ClearDefaultAuthorizer" +const opCancelJobExecution = "CancelJobExecution" -// ClearDefaultAuthorizerRequest generates a "aws/request.Request" representing the -// client's request for the ClearDefaultAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// CancelJobExecutionRequest generates a "aws/request.Request" representing the +// client's request for the CancelJobExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ClearDefaultAuthorizer for more information on using the ClearDefaultAuthorizer +// See CancelJobExecution for more information on using the CancelJobExecution // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ClearDefaultAuthorizerRequest method. -// req, resp := client.ClearDefaultAuthorizerRequest(params) +// // Example sending a request using the CancelJobExecutionRequest method. +// req, resp := client.CancelJobExecutionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ClearDefaultAuthorizerRequest(input *ClearDefaultAuthorizerInput) (req *request.Request, output *ClearDefaultAuthorizerOutput) { +func (c *IoT) CancelJobExecutionRequest(input *CancelJobExecutionInput) (req *request.Request, output *CancelJobExecutionOutput) { op := &request.Operation{ - Name: opClearDefaultAuthorizer, - HTTPMethod: "DELETE", - HTTPPath: "/default-authorizer", + Name: opCancelJobExecution, + HTTPMethod: "PUT", + HTTPPath: "/things/{thingName}/jobs/{jobId}/cancel", } if input == nil { - input = &ClearDefaultAuthorizerInput{} + input = &CancelJobExecutionInput{} } - output = &ClearDefaultAuthorizerOutput{} + output = &CancelJobExecutionOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// ClearDefaultAuthorizer API operation for AWS IoT. +// CancelJobExecution API operation for AWS IoT. // -// Clears the default authorizer. +// Cancels the execution of a job for a given thing. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ClearDefaultAuthorizer for usage and error information. +// API operation CancelJobExecution for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. 
// +// * ErrCodeInvalidStateTransitionException "InvalidStateTransitionException" +// An attempt was made to change to an invalid state, for example by deleting +// a job or a job execution which is "IN_PROGRESS" without setting the force +// parameter. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. // -func (c *IoT) ClearDefaultAuthorizer(input *ClearDefaultAuthorizerInput) (*ClearDefaultAuthorizerOutput, error) { - req, out := c.ClearDefaultAuthorizerRequest(input) +func (c *IoT) CancelJobExecution(input *CancelJobExecutionInput) (*CancelJobExecutionOutput, error) { + req, out := c.CancelJobExecutionRequest(input) return out, req.Send() } -// ClearDefaultAuthorizerWithContext is the same as ClearDefaultAuthorizer with the addition of +// CancelJobExecutionWithContext is the same as CancelJobExecution with the addition of // the ability to pass a context and additional request options. // -// See ClearDefaultAuthorizer for details on how to use this API operation. +// See CancelJobExecution for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ClearDefaultAuthorizerWithContext(ctx aws.Context, input *ClearDefaultAuthorizerInput, opts ...request.Option) (*ClearDefaultAuthorizerOutput, error) { - req, out := c.ClearDefaultAuthorizerRequest(input) +func (c *IoT) CancelJobExecutionWithContext(ctx aws.Context, input *CancelJobExecutionInput, opts ...request.Option) (*CancelJobExecutionOutput, error) { + req, out := c.CancelJobExecutionRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opCreateAuthorizer = "CreateAuthorizer" +const opClearDefaultAuthorizer = "ClearDefaultAuthorizer" -// CreateAuthorizerRequest generates a "aws/request.Request" representing the -// client's request for the CreateAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ClearDefaultAuthorizerRequest generates a "aws/request.Request" representing the +// client's request for the ClearDefaultAuthorizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See CreateAuthorizer for more information on using the CreateAuthorizer +// See ClearDefaultAuthorizer for more information on using the ClearDefaultAuthorizer // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the CreateAuthorizerRequest method. -// req, resp := client.CreateAuthorizerRequest(params) +// // Example sending a request using the ClearDefaultAuthorizerRequest method. +// req, resp := client.ClearDefaultAuthorizerRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) CreateAuthorizerRequest(input *CreateAuthorizerInput) (req *request.Request, output *CreateAuthorizerOutput) { +func (c *IoT) ClearDefaultAuthorizerRequest(input *ClearDefaultAuthorizerInput) (req *request.Request, output *ClearDefaultAuthorizerOutput) { op := &request.Operation{ - Name: opCreateAuthorizer, - HTTPMethod: "POST", - HTTPPath: "/authorizer/{authorizerName}", + Name: opClearDefaultAuthorizer, + HTTPMethod: "DELETE", + HTTPPath: "/default-authorizer", } if input == nil { - input = &CreateAuthorizerInput{} + input = &ClearDefaultAuthorizerInput{} } - output = &CreateAuthorizerOutput{} + output = &ClearDefaultAuthorizerOutput{} + req = c.newRequest(op, input, output) + return +} + +// ClearDefaultAuthorizer API operation for AWS IoT. +// +// Clears the default authorizer. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation ClearDefaultAuthorizer for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) ClearDefaultAuthorizer(input *ClearDefaultAuthorizerInput) (*ClearDefaultAuthorizerOutput, error) { + req, out := c.ClearDefaultAuthorizerRequest(input) + return out, req.Send() +} + +// ClearDefaultAuthorizerWithContext is the same as ClearDefaultAuthorizer with the addition of +// the ability to pass a context and additional request options. +// +// See ClearDefaultAuthorizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) ClearDefaultAuthorizerWithContext(ctx aws.Context, input *ClearDefaultAuthorizerInput, opts ...request.Option) (*ClearDefaultAuthorizerOutput, error) { + req, out := c.ClearDefaultAuthorizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateAuthorizer = "CreateAuthorizer" + +// CreateAuthorizerRequest generates a "aws/request.Request" representing the +// client's request for the CreateAuthorizer operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateAuthorizer for more information on using the CreateAuthorizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateAuthorizerRequest method. +// req, resp := client.CreateAuthorizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) CreateAuthorizerRequest(input *CreateAuthorizerInput) (req *request.Request, output *CreateAuthorizerOutput) { + op := &request.Operation{ + Name: opCreateAuthorizer, + HTTPMethod: "POST", + HTTPPath: "/authorizer/{authorizerName}", + } + + if input == nil { + input = &CreateAuthorizerInput{} + } + + output = &CreateAuthorizerOutput{} req = c.newRequest(op, input, output) return } @@ -925,7 +1293,7 @@ func (c *IoT) CreateAuthorizerRequest(input *CreateAuthorizerInput) (req *reques // The request is not valid. // // * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. +// A limit has been exceeded. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. @@ -960,12 +1328,97 @@ func (c *IoT) CreateAuthorizerWithContext(ctx aws.Context, input *CreateAuthoriz return out, req.Send() } +const opCreateBillingGroup = "CreateBillingGroup" + +// CreateBillingGroupRequest generates a "aws/request.Request" representing the +// client's request for the CreateBillingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateBillingGroup for more information on using the CreateBillingGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateBillingGroupRequest method. +// req, resp := client.CreateBillingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) CreateBillingGroupRequest(input *CreateBillingGroupInput) (req *request.Request, output *CreateBillingGroupOutput) { + op := &request.Operation{ + Name: opCreateBillingGroup, + HTTPMethod: "POST", + HTTPPath: "/billing-groups/{billingGroupName}", + } + + if input == nil { + input = &CreateBillingGroupInput{} + } + + output = &CreateBillingGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateBillingGroup API operation for AWS IoT. +// +// Creates a billing group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation CreateBillingGroup for usage and error information. 
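`CreateBillingGroup` is one of the operations introduced by this vendor update; its error-code list continues directly below. As a hedged sketch only (the helper name and group name are placeholders, and the field maps to the `{billingGroupName}` path parameter shown in the request definition above), a caller might wrap it like this:

```go
package iotexample

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iot"
)

// createBillingGroup is an illustrative helper, not part of the vendored SDK.
// It creates a billing group with the given name and returns the raw service
// response.
func createBillingGroup(svc *iot.IoT, name string) (*iot.CreateBillingGroupOutput, error) {
	return svc.CreateBillingGroup(&iot.CreateBillingGroupInput{
		// BillingGroupName corresponds to the {billingGroupName} path
		// parameter in the request definition above.
		BillingGroupName: aws.String(name),
	})
}
```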
+// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The resource already exists. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) CreateBillingGroup(input *CreateBillingGroupInput) (*CreateBillingGroupOutput, error) { + req, out := c.CreateBillingGroupRequest(input) + return out, req.Send() +} + +// CreateBillingGroupWithContext is the same as CreateBillingGroup with the addition of +// the ability to pass a context and additional request options. +// +// See CreateBillingGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) CreateBillingGroupWithContext(ctx aws.Context, input *CreateBillingGroupInput, opts ...request.Option) (*CreateBillingGroupOutput, error) { + req, out := c.CreateBillingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateCertificateFromCsr = "CreateCertificateFromCsr" // CreateCertificateFromCsrRequest generates a "aws/request.Request" representing the // client's request for the CreateCertificateFromCsr operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1089,12 +1542,106 @@ func (c *IoT) CreateCertificateFromCsrWithContext(ctx aws.Context, input *Create return out, req.Send() } +const opCreateDynamicThingGroup = "CreateDynamicThingGroup" + +// CreateDynamicThingGroupRequest generates a "aws/request.Request" representing the +// client's request for the CreateDynamicThingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateDynamicThingGroup for more information on using the CreateDynamicThingGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateDynamicThingGroupRequest method. 
+// req, resp := client.CreateDynamicThingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) CreateDynamicThingGroupRequest(input *CreateDynamicThingGroupInput) (req *request.Request, output *CreateDynamicThingGroupOutput) { + op := &request.Operation{ + Name: opCreateDynamicThingGroup, + HTTPMethod: "POST", + HTTPPath: "/dynamic-thing-groups/{thingGroupName}", + } + + if input == nil { + input = &CreateDynamicThingGroupInput{} + } + + output = &CreateDynamicThingGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateDynamicThingGroup API operation for AWS IoT. +// +// Creates a dynamic thing group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation CreateDynamicThingGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The resource already exists. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeInvalidQueryException "InvalidQueryException" +// The query is invalid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +func (c *IoT) CreateDynamicThingGroup(input *CreateDynamicThingGroupInput) (*CreateDynamicThingGroupOutput, error) { + req, out := c.CreateDynamicThingGroupRequest(input) + return out, req.Send() +} + +// CreateDynamicThingGroupWithContext is the same as CreateDynamicThingGroup with the addition of +// the ability to pass a context and additional request options. +// +// See CreateDynamicThingGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) CreateDynamicThingGroupWithContext(ctx aws.Context, input *CreateDynamicThingGroupInput, opts ...request.Option) (*CreateDynamicThingGroupOutput, error) { + req, out := c.CreateDynamicThingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateJob = "CreateJob" // CreateJobRequest generates a "aws/request.Request" representing the // client's request for the CreateJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1151,7 +1698,7 @@ func (c *IoT) CreateJobRequest(input *CreateJobInput) (req *request.Request, out // The resource already exists. 
// // * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. +// A limit has been exceeded. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. @@ -1184,8 +1731,8 @@ const opCreateKeysAndCertificate = "CreateKeysAndCertificate" // CreateKeysAndCertificateRequest generates a "aws/request.Request" representing the // client's request for the CreateKeysAndCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1276,8 +1823,8 @@ const opCreateOTAUpdate = "CreateOTAUpdate" // CreateOTAUpdateRequest generates a "aws/request.Request" representing the // client's request for the CreateOTAUpdate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1327,6 +1874,9 @@ func (c *IoT) CreateOTAUpdateRequest(input *CreateOTAUpdateInput) (req *request. // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // @@ -1370,8 +1920,8 @@ const opCreatePolicy = "CreatePolicy" // CreatePolicyRequest generates a "aws/request.Request" representing the // client's request for the CreatePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1468,8 +2018,8 @@ const opCreatePolicyVersion = "CreatePolicyVersion" // CreatePolicyVersionRequest generates a "aws/request.Request" representing the // client's request for the CreatePolicyVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1572,8 +2122,8 @@ const opCreateRoleAlias = "CreateRoleAlias" // CreateRoleAliasRequest generates a "aws/request.Request" representing the // client's request for the CreateRoleAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1627,7 +2177,7 @@ func (c *IoT) CreateRoleAliasRequest(input *CreateRoleAliasInput) (req *request. // The request is not valid. // // * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. +// A limit has been exceeded. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. @@ -1662,78 +2212,251 @@ func (c *IoT) CreateRoleAliasWithContext(ctx aws.Context, input *CreateRoleAlias return out, req.Send() } -const opCreateStream = "CreateStream" +const opCreateScheduledAudit = "CreateScheduledAudit" -// CreateStreamRequest generates a "aws/request.Request" representing the -// client's request for the CreateStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// CreateScheduledAuditRequest generates a "aws/request.Request" representing the +// client's request for the CreateScheduledAudit operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See CreateStream for more information on using the CreateStream +// See CreateScheduledAudit for more information on using the CreateScheduledAudit // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the CreateStreamRequest method. -// req, resp := client.CreateStreamRequest(params) +// // Example sending a request using the CreateScheduledAuditRequest method. +// req, resp := client.CreateScheduledAuditRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) CreateStreamRequest(input *CreateStreamInput) (req *request.Request, output *CreateStreamOutput) { +func (c *IoT) CreateScheduledAuditRequest(input *CreateScheduledAuditInput) (req *request.Request, output *CreateScheduledAuditOutput) { op := &request.Operation{ - Name: opCreateStream, + Name: opCreateScheduledAudit, HTTPMethod: "POST", - HTTPPath: "/streams/{streamId}", + HTTPPath: "/audit/scheduledaudits/{scheduledAuditName}", } if input == nil { - input = &CreateStreamInput{} + input = &CreateScheduledAuditInput{} } - output = &CreateStreamOutput{} + output = &CreateScheduledAuditOutput{} req = c.newRequest(op, input, output) return } -// CreateStream API operation for AWS IoT. +// CreateScheduledAudit API operation for AWS IoT. // -// Creates a stream for delivering one or more large files in chunks over MQTT. -// A stream transports data bytes in chunks or blocks packaged as MQTT messages -// from a source like S3. You can have one or more files associated with a stream. -// The total size of a file associated with the stream cannot exceed more than -// 2 MB. The stream will be created with version 0. If a stream is created with -// the same streamID as a stream that existed and was deleted within last 90 -// days, we will resurrect that old stream by incrementing the version by 1. +// Creates a scheduled audit that is run at a specified time interval. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation CreateStream for usage and error information. +// API operation CreateScheduledAudit for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" -// The resource already exists. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +func (c *IoT) CreateScheduledAudit(input *CreateScheduledAuditInput) (*CreateScheduledAuditOutput, error) { + req, out := c.CreateScheduledAuditRequest(input) + return out, req.Send() +} + +// CreateScheduledAuditWithContext is the same as CreateScheduledAudit with the addition of +// the ability to pass a context and additional request options. +// +// See CreateScheduledAudit for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) CreateScheduledAuditWithContext(ctx aws.Context, input *CreateScheduledAuditInput, opts ...request.Option) (*CreateScheduledAuditOutput, error) { + req, out := c.CreateScheduledAuditRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateSecurityProfile = "CreateSecurityProfile" + +// CreateSecurityProfileRequest generates a "aws/request.Request" representing the +// client's request for the CreateSecurityProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSecurityProfile for more information on using the CreateSecurityProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSecurityProfileRequest method. 
+// req, resp := client.CreateSecurityProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) CreateSecurityProfileRequest(input *CreateSecurityProfileInput) (req *request.Request, output *CreateSecurityProfileOutput) { + op := &request.Operation{ + Name: opCreateSecurityProfile, + HTTPMethod: "POST", + HTTPPath: "/security-profiles/{securityProfileName}", + } + + if input == nil { + input = &CreateSecurityProfileInput{} + } + + output = &CreateSecurityProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSecurityProfile API operation for AWS IoT. +// +// Creates a Device Defender security profile. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation CreateSecurityProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The resource already exists. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) CreateSecurityProfile(input *CreateSecurityProfileInput) (*CreateSecurityProfileOutput, error) { + req, out := c.CreateSecurityProfileRequest(input) + return out, req.Send() +} + +// CreateSecurityProfileWithContext is the same as CreateSecurityProfile with the addition of +// the ability to pass a context and additional request options. +// +// See CreateSecurityProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) CreateSecurityProfileWithContext(ctx aws.Context, input *CreateSecurityProfileInput, opts ...request.Option) (*CreateSecurityProfileOutput, error) { + req, out := c.CreateSecurityProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateStream = "CreateStream" + +// CreateStreamRequest generates a "aws/request.Request" representing the +// client's request for the CreateStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateStream for more information on using the CreateStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateStreamRequest method. 
+// req, resp := client.CreateStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) CreateStreamRequest(input *CreateStreamInput) (req *request.Request, output *CreateStreamOutput) { + op := &request.Operation{ + Name: opCreateStream, + HTTPMethod: "POST", + HTTPPath: "/streams/{streamId}", + } + + if input == nil { + input = &CreateStreamInput{} + } + + output = &CreateStreamOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateStream API operation for AWS IoT. +// +// Creates a stream for delivering one or more large files in chunks over MQTT. +// A stream transports data bytes in chunks or blocks packaged as MQTT messages +// from a source like S3. You can have one or more files associated with a stream. +// The total size of a file associated with the stream cannot exceed more than +// 2 MB. The stream will be created with version 0. If a stream is created with +// the same streamID as a stream that existed and was deleted within last 90 +// days, we will resurrect that old stream by incrementing the version by 1. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation CreateStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The resource already exists. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. // // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. @@ -1766,8 +2489,8 @@ const opCreateThing = "CreateThing" // CreateThingRequest generates a "aws/request.Request" representing the // client's request for the CreateThing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1804,7 +2527,10 @@ func (c *IoT) CreateThingRequest(input *CreateThingInput) (req *request.Request, // CreateThing API operation for AWS IoT. // -// Creates a thing record in the thing registry. +// Creates a thing record in the registry. +// +// This is a control plane operation. See Authorization (http://docs.aws.amazon.com/iot/latest/developerguide/authorization.html) +// for information about authorizing control plane actions. // // Returns awserr.Error for service API and SDK errors. 
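The `CreateThing` documentation in this hunk (continued just below) now flags the call as a control plane operation. Purely as an illustrative sketch under that reading (helper name, thing name, and attribute values are placeholders, not taken from the vendored code):

```go
package iotexample

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iot"
)

// registerThing is an illustrative helper, not part of the vendored SDK.
// It creates a thing record in the registry and attaches a searchable
// attribute via AttributePayload.
func registerThing(svc *iot.IoT, thingName string) (*iot.CreateThingOutput, error) {
	return svc.CreateThing(&iot.CreateThingInput{
		ThingName: aws.String(thingName),
		AttributePayload: &iot.AttributePayload{
			Attributes: map[string]*string{
				"environment": aws.String("test"), // placeholder attribute
			},
		},
	})
}
```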
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1860,8 +2586,8 @@ const opCreateThingGroup = "CreateThingGroup" // CreateThingGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateThingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1900,6 +2626,9 @@ func (c *IoT) CreateThingGroupRequest(input *CreateThingGroupInput) (req *reques // // Create a thing group. // +// This is a control plane operation. See Authorization (http://docs.aws.amazon.com/iot/latest/developerguide/authorization.html) +// for information about authorizing control plane actions. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1945,8 +2674,8 @@ const opCreateThingType = "CreateThingType" // CreateThingTypeRequest generates a "aws/request.Request" representing the // client's request for the CreateThingType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2036,8 +2765,8 @@ const opCreateTopicRule = "CreateTopicRule" // CreateTopicRuleRequest generates a "aws/request.Request" representing the // client's request for the CreateTopicRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2103,6 +2832,10 @@ func (c *IoT) CreateTopicRuleRequest(input *CreateTopicRuleInput) (req *request. // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // +// * ErrCodeConflictingResourceUpdateException "ConflictingResourceUpdateException" +// A conflicting resource update exception. This exception is thrown when two +// pending updates cause a conflict. +// func (c *IoT) CreateTopicRule(input *CreateTopicRuleInput) (*CreateTopicRuleOutput, error) { req, out := c.CreateTopicRuleRequest(input) return out, req.Send() @@ -2124,12 +2857,99 @@ func (c *IoT) CreateTopicRuleWithContext(ctx aws.Context, input *CreateTopicRule return out, req.Send() } +const opDeleteAccountAuditConfiguration = "DeleteAccountAuditConfiguration" + +// DeleteAccountAuditConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAccountAuditConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DeleteAccountAuditConfiguration for more information on using the DeleteAccountAuditConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAccountAuditConfigurationRequest method. +// req, resp := client.DeleteAccountAuditConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) DeleteAccountAuditConfigurationRequest(input *DeleteAccountAuditConfigurationInput) (req *request.Request, output *DeleteAccountAuditConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteAccountAuditConfiguration, + HTTPMethod: "DELETE", + HTTPPath: "/audit/configuration", + } + + if input == nil { + input = &DeleteAccountAuditConfigurationInput{} + } + + output = &DeleteAccountAuditConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteAccountAuditConfiguration API operation for AWS IoT. +// +// Restores the default settings for Device Defender audits for this account. +// Any configuration data you entered is deleted and all audit checks are reset +// to disabled. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation DeleteAccountAuditConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) DeleteAccountAuditConfiguration(input *DeleteAccountAuditConfigurationInput) (*DeleteAccountAuditConfigurationOutput, error) { + req, out := c.DeleteAccountAuditConfigurationRequest(input) + return out, req.Send() +} + +// DeleteAccountAuditConfigurationWithContext is the same as DeleteAccountAuditConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteAccountAuditConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) DeleteAccountAuditConfigurationWithContext(ctx aws.Context, input *DeleteAccountAuditConfigurationInput, opts ...request.Option) (*DeleteAccountAuditConfigurationOutput, error) { + req, out := c.DeleteAccountAuditConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteAuthorizer = "DeleteAuthorizer" // DeleteAuthorizerRequest generates a "aws/request.Request" representing the // client's request for the DeleteAuthorizer operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2218,106 +3038,192 @@ func (c *IoT) DeleteAuthorizerWithContext(ctx aws.Context, input *DeleteAuthoriz return out, req.Send() } -const opDeleteCACertificate = "DeleteCACertificate" +const opDeleteBillingGroup = "DeleteBillingGroup" -// DeleteCACertificateRequest generates a "aws/request.Request" representing the -// client's request for the DeleteCACertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteBillingGroupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBillingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteCACertificate for more information on using the DeleteCACertificate +// See DeleteBillingGroup for more information on using the DeleteBillingGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteCACertificateRequest method. -// req, resp := client.DeleteCACertificateRequest(params) +// // Example sending a request using the DeleteBillingGroupRequest method. +// req, resp := client.DeleteBillingGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteCACertificateRequest(input *DeleteCACertificateInput) (req *request.Request, output *DeleteCACertificateOutput) { +func (c *IoT) DeleteBillingGroupRequest(input *DeleteBillingGroupInput) (req *request.Request, output *DeleteBillingGroupOutput) { op := &request.Operation{ - Name: opDeleteCACertificate, + Name: opDeleteBillingGroup, HTTPMethod: "DELETE", - HTTPPath: "/cacertificate/{caCertificateId}", + HTTPPath: "/billing-groups/{billingGroupName}", } if input == nil { - input = &DeleteCACertificateInput{} + input = &DeleteBillingGroupInput{} } - output = &DeleteCACertificateOutput{} + output = &DeleteBillingGroupOutput{} req = c.newRequest(op, input, output) return } -// DeleteCACertificate API operation for AWS IoT. +// DeleteBillingGroup API operation for AWS IoT. // -// Deletes a registered CA certificate. +// Deletes the billing group. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteCACertificate for usage and error information. +// API operation DeleteBillingGroup for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeCertificateStateException "CertificateStateException" -// The certificate operation is not allowed. 
+// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) DeleteCACertificate(input *DeleteCACertificateInput) (*DeleteCACertificateOutput, error) { - req, out := c.DeleteCACertificateRequest(input) +func (c *IoT) DeleteBillingGroup(input *DeleteBillingGroupInput) (*DeleteBillingGroupOutput, error) { + req, out := c.DeleteBillingGroupRequest(input) return out, req.Send() } -// DeleteCACertificateWithContext is the same as DeleteCACertificate with the addition of +// DeleteBillingGroupWithContext is the same as DeleteBillingGroup with the addition of // the ability to pass a context and additional request options. // -// See DeleteCACertificate for details on how to use this API operation. +// See DeleteBillingGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeleteCACertificateWithContext(ctx aws.Context, input *DeleteCACertificateInput, opts ...request.Option) (*DeleteCACertificateOutput, error) { - req, out := c.DeleteCACertificateRequest(input) +func (c *IoT) DeleteBillingGroupWithContext(ctx aws.Context, input *DeleteBillingGroupInput, opts ...request.Option) (*DeleteBillingGroupOutput, error) { + req, out := c.DeleteBillingGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteCertificate = "DeleteCertificate" +const opDeleteCACertificate = "DeleteCACertificate" + +// DeleteCACertificateRequest generates a "aws/request.Request" representing the +// client's request for the DeleteCACertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteCACertificate for more information on using the DeleteCACertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteCACertificateRequest method. 
+// req, resp := client.DeleteCACertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) DeleteCACertificateRequest(input *DeleteCACertificateInput) (req *request.Request, output *DeleteCACertificateOutput) { + op := &request.Operation{ + Name: opDeleteCACertificate, + HTTPMethod: "DELETE", + HTTPPath: "/cacertificate/{caCertificateId}", + } + + if input == nil { + input = &DeleteCACertificateInput{} + } + + output = &DeleteCACertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteCACertificate API operation for AWS IoT. +// +// Deletes a registered CA certificate. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation DeleteCACertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeCertificateStateException "CertificateStateException" +// The certificate operation is not allowed. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) DeleteCACertificate(input *DeleteCACertificateInput) (*DeleteCACertificateOutput, error) { + req, out := c.DeleteCACertificateRequest(input) + return out, req.Send() +} + +// DeleteCACertificateWithContext is the same as DeleteCACertificate with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteCACertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) DeleteCACertificateWithContext(ctx aws.Context, input *DeleteCACertificateInput, opts ...request.Option) (*DeleteCACertificateOutput, error) { + req, out := c.DeleteCACertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteCertificate = "DeleteCertificate" // DeleteCertificateRequest generates a "aws/request.Request" representing the // client's request for the DeleteCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
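The `DeleteBillingGroup` operation added a few hunks above documents a `VersionConflictException` when the `expectedVersion` parameter is stale. A minimal, hedged sketch of handling that case (the helper, group name, and version value are placeholders):

```go
package iotexample

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/iot"
)

// deleteBillingGroup is an illustrative helper, not part of the vendored SDK.
// ExpectedVersion guards against deleting a group that changed since it was
// last read; a mismatch surfaces as ErrCodeVersionConflictException.
func deleteBillingGroup(svc *iot.IoT, name string, version int64) error {
	_, err := svc.DeleteBillingGroup(&iot.DeleteBillingGroupInput{
		BillingGroupName: aws.String(name),
		ExpectedVersion:  aws.Int64(version),
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == iot.ErrCodeVersionConflictException {
		log.Printf("billing group %q changed since it was read: %s", name, aerr.Message())
	}
	return err
}
```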
@@ -2416,445 +3322,453 @@ func (c *IoT) DeleteCertificateWithContext(ctx aws.Context, input *DeleteCertifi return out, req.Send() } -const opDeleteOTAUpdate = "DeleteOTAUpdate" +const opDeleteDynamicThingGroup = "DeleteDynamicThingGroup" -// DeleteOTAUpdateRequest generates a "aws/request.Request" representing the -// client's request for the DeleteOTAUpdate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteDynamicThingGroupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteDynamicThingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteOTAUpdate for more information on using the DeleteOTAUpdate +// See DeleteDynamicThingGroup for more information on using the DeleteDynamicThingGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteOTAUpdateRequest method. -// req, resp := client.DeleteOTAUpdateRequest(params) +// // Example sending a request using the DeleteDynamicThingGroupRequest method. +// req, resp := client.DeleteDynamicThingGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteOTAUpdateRequest(input *DeleteOTAUpdateInput) (req *request.Request, output *DeleteOTAUpdateOutput) { +func (c *IoT) DeleteDynamicThingGroupRequest(input *DeleteDynamicThingGroupInput) (req *request.Request, output *DeleteDynamicThingGroupOutput) { op := &request.Operation{ - Name: opDeleteOTAUpdate, + Name: opDeleteDynamicThingGroup, HTTPMethod: "DELETE", - HTTPPath: "/otaUpdates/{otaUpdateId}", + HTTPPath: "/dynamic-thing-groups/{thingGroupName}", } if input == nil { - input = &DeleteOTAUpdateInput{} + input = &DeleteDynamicThingGroupInput{} } - output = &DeleteOTAUpdateOutput{} + output = &DeleteDynamicThingGroupOutput{} req = c.newRequest(op, input, output) return } -// DeleteOTAUpdate API operation for AWS IoT. +// DeleteDynamicThingGroup API operation for AWS IoT. // -// Delete an OTA update. +// Deletes a dynamic thing group. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteOTAUpdate for usage and error information. +// API operation DeleteDynamicThingGroup for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. 
-// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// -func (c *IoT) DeleteOTAUpdate(input *DeleteOTAUpdateInput) (*DeleteOTAUpdateOutput, error) { - req, out := c.DeleteOTAUpdateRequest(input) +func (c *IoT) DeleteDynamicThingGroup(input *DeleteDynamicThingGroupInput) (*DeleteDynamicThingGroupOutput, error) { + req, out := c.DeleteDynamicThingGroupRequest(input) return out, req.Send() } -// DeleteOTAUpdateWithContext is the same as DeleteOTAUpdate with the addition of +// DeleteDynamicThingGroupWithContext is the same as DeleteDynamicThingGroup with the addition of // the ability to pass a context and additional request options. // -// See DeleteOTAUpdate for details on how to use this API operation. +// See DeleteDynamicThingGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeleteOTAUpdateWithContext(ctx aws.Context, input *DeleteOTAUpdateInput, opts ...request.Option) (*DeleteOTAUpdateOutput, error) { - req, out := c.DeleteOTAUpdateRequest(input) +func (c *IoT) DeleteDynamicThingGroupWithContext(ctx aws.Context, input *DeleteDynamicThingGroupInput, opts ...request.Option) (*DeleteDynamicThingGroupOutput, error) { + req, out := c.DeleteDynamicThingGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeletePolicy = "DeletePolicy" +const opDeleteJob = "DeleteJob" -// DeletePolicyRequest generates a "aws/request.Request" representing the -// client's request for the DeletePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteJobRequest generates a "aws/request.Request" representing the +// client's request for the DeleteJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeletePolicy for more information on using the DeletePolicy +// See DeleteJob for more information on using the DeleteJob // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeletePolicyRequest method. -// req, resp := client.DeletePolicyRequest(params) +// // Example sending a request using the DeleteJobRequest method. 
+// req, resp := client.DeleteJobRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeletePolicyRequest(input *DeletePolicyInput) (req *request.Request, output *DeletePolicyOutput) { +func (c *IoT) DeleteJobRequest(input *DeleteJobInput) (req *request.Request, output *DeleteJobOutput) { op := &request.Operation{ - Name: opDeletePolicy, + Name: opDeleteJob, HTTPMethod: "DELETE", - HTTPPath: "/policies/{policyName}", + HTTPPath: "/jobs/{jobId}", } if input == nil { - input = &DeletePolicyInput{} + input = &DeleteJobInput{} } - output = &DeletePolicyOutput{} + output = &DeleteJobOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeletePolicy API operation for AWS IoT. -// -// Deletes the specified policy. +// DeleteJob API operation for AWS IoT. // -// A policy cannot be deleted if it has non-default versions or it is attached -// to any certificate. +// Deletes a job and its related job executions. // -// To delete a policy, use the DeletePolicyVersion API to delete all non-default -// versions of the policy; use the DetachPrincipalPolicy API to detach the policy -// from any certificate; and then use the DeletePolicy API to delete the policy. +// Deleting a job may take time, depending on the number of job executions created +// for the job and various other factors. While the job is being deleted, the +// status of the job will be shown as "DELETION_IN_PROGRESS". Attempting to +// delete or cancel a job whose status is already "DELETION_IN_PROGRESS" will +// result in an error. // -// When a policy is deleted using DeletePolicy, its default version is deleted -// with it. +// Only 10 jobs may have status "DELETION_IN_PROGRESS" at the same time, or +// a LimitExceededException will occur. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeletePolicy for usage and error information. +// API operation DeleteJob for usage and error information. // // Returned Error Codes: -// * ErrCodeDeleteConflictException "DeleteConflictException" -// You can't delete the resource because it is attached to one or more resources. +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeInvalidStateTransitionException "InvalidStateTransitionException" +// An attempt was made to change to an invalid state, for example by deleting +// a job or a job execution which is "IN_PROGRESS" without setting the force +// parameter. // // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. 
-// -func (c *IoT) DeletePolicy(input *DeletePolicyInput) (*DeletePolicyOutput, error) { - req, out := c.DeletePolicyRequest(input) +func (c *IoT) DeleteJob(input *DeleteJobInput) (*DeleteJobOutput, error) { + req, out := c.DeleteJobRequest(input) return out, req.Send() } -// DeletePolicyWithContext is the same as DeletePolicy with the addition of +// DeleteJobWithContext is the same as DeleteJob with the addition of // the ability to pass a context and additional request options. // -// See DeletePolicy for details on how to use this API operation. +// See DeleteJob for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeletePolicyWithContext(ctx aws.Context, input *DeletePolicyInput, opts ...request.Option) (*DeletePolicyOutput, error) { - req, out := c.DeletePolicyRequest(input) +func (c *IoT) DeleteJobWithContext(ctx aws.Context, input *DeleteJobInput, opts ...request.Option) (*DeleteJobOutput, error) { + req, out := c.DeleteJobRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeletePolicyVersion = "DeletePolicyVersion" +const opDeleteJobExecution = "DeleteJobExecution" -// DeletePolicyVersionRequest generates a "aws/request.Request" representing the -// client's request for the DeletePolicyVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteJobExecutionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteJobExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeletePolicyVersion for more information on using the DeletePolicyVersion +// See DeleteJobExecution for more information on using the DeleteJobExecution // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeletePolicyVersionRequest method. -// req, resp := client.DeletePolicyVersionRequest(params) +// // Example sending a request using the DeleteJobExecutionRequest method. 
+// req, resp := client.DeleteJobExecutionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeletePolicyVersionRequest(input *DeletePolicyVersionInput) (req *request.Request, output *DeletePolicyVersionOutput) { +func (c *IoT) DeleteJobExecutionRequest(input *DeleteJobExecutionInput) (req *request.Request, output *DeleteJobExecutionOutput) { op := &request.Operation{ - Name: opDeletePolicyVersion, + Name: opDeleteJobExecution, HTTPMethod: "DELETE", - HTTPPath: "/policies/{policyName}/version/{policyVersionId}", + HTTPPath: "/things/{thingName}/jobs/{jobId}/executionNumber/{executionNumber}", } if input == nil { - input = &DeletePolicyVersionInput{} + input = &DeleteJobExecutionInput{} } - output = &DeletePolicyVersionOutput{} + output = &DeleteJobExecutionOutput{} req = c.newRequest(op, input, output) req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeletePolicyVersion API operation for AWS IoT. +// DeleteJobExecution API operation for AWS IoT. // -// Deletes the specified version of the specified policy. You cannot delete -// the default version of a policy using this API. To delete the default version -// of a policy, use DeletePolicy. To find out which version of a policy is marked -// as the default version, use ListPolicyVersions. +// Deletes a job execution. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeletePolicyVersion for usage and error information. +// API operation DeleteJobExecution for usage and error information. // // Returned Error Codes: -// * ErrCodeDeleteConflictException "DeleteConflictException" -// You can't delete the resource because it is attached to one or more resources. +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeInvalidStateTransitionException "InvalidStateTransitionException" +// An attempt was made to change to an invalid state, for example by deleting +// a job or a job execution which is "IN_PROGRESS" without setting the force +// parameter. // // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. 
-// -func (c *IoT) DeletePolicyVersion(input *DeletePolicyVersionInput) (*DeletePolicyVersionOutput, error) { - req, out := c.DeletePolicyVersionRequest(input) +func (c *IoT) DeleteJobExecution(input *DeleteJobExecutionInput) (*DeleteJobExecutionOutput, error) { + req, out := c.DeleteJobExecutionRequest(input) return out, req.Send() } -// DeletePolicyVersionWithContext is the same as DeletePolicyVersion with the addition of +// DeleteJobExecutionWithContext is the same as DeleteJobExecution with the addition of // the ability to pass a context and additional request options. // -// See DeletePolicyVersion for details on how to use this API operation. +// See DeleteJobExecution for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeletePolicyVersionWithContext(ctx aws.Context, input *DeletePolicyVersionInput, opts ...request.Option) (*DeletePolicyVersionOutput, error) { - req, out := c.DeletePolicyVersionRequest(input) +func (c *IoT) DeleteJobExecutionWithContext(ctx aws.Context, input *DeleteJobExecutionInput, opts ...request.Option) (*DeleteJobExecutionOutput, error) { + req, out := c.DeleteJobExecutionRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteRegistrationCode = "DeleteRegistrationCode" +const opDeleteOTAUpdate = "DeleteOTAUpdate" -// DeleteRegistrationCodeRequest generates a "aws/request.Request" representing the -// client's request for the DeleteRegistrationCode operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteOTAUpdateRequest generates a "aws/request.Request" representing the +// client's request for the DeleteOTAUpdate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteRegistrationCode for more information on using the DeleteRegistrationCode +// See DeleteOTAUpdate for more information on using the DeleteOTAUpdate // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteRegistrationCodeRequest method. -// req, resp := client.DeleteRegistrationCodeRequest(params) +// // Example sending a request using the DeleteOTAUpdateRequest method. 
+// req, resp := client.DeleteOTAUpdateRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteRegistrationCodeRequest(input *DeleteRegistrationCodeInput) (req *request.Request, output *DeleteRegistrationCodeOutput) { +func (c *IoT) DeleteOTAUpdateRequest(input *DeleteOTAUpdateInput) (req *request.Request, output *DeleteOTAUpdateOutput) { op := &request.Operation{ - Name: opDeleteRegistrationCode, + Name: opDeleteOTAUpdate, HTTPMethod: "DELETE", - HTTPPath: "/registrationcode", + HTTPPath: "/otaUpdates/{otaUpdateId}", } if input == nil { - input = &DeleteRegistrationCodeInput{} + input = &DeleteOTAUpdateInput{} } - output = &DeleteRegistrationCodeOutput{} + output = &DeleteOTAUpdateOutput{} req = c.newRequest(op, input, output) return } -// DeleteRegistrationCode API operation for AWS IoT. +// DeleteOTAUpdate API operation for AWS IoT. // -// Deletes a CA certificate registration code. +// Delete an OTA update. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteRegistrationCode for usage and error information. +// API operation DeleteOTAUpdate for usage and error information. // // Returned Error Codes: -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. // // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. // -func (c *IoT) DeleteRegistrationCode(input *DeleteRegistrationCodeInput) (*DeleteRegistrationCodeOutput, error) { - req, out := c.DeleteRegistrationCodeRequest(input) +func (c *IoT) DeleteOTAUpdate(input *DeleteOTAUpdateInput) (*DeleteOTAUpdateOutput, error) { + req, out := c.DeleteOTAUpdateRequest(input) return out, req.Send() } -// DeleteRegistrationCodeWithContext is the same as DeleteRegistrationCode with the addition of +// DeleteOTAUpdateWithContext is the same as DeleteOTAUpdate with the addition of // the ability to pass a context and additional request options. // -// See DeleteRegistrationCode for details on how to use this API operation. +// See DeleteOTAUpdate for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) DeleteRegistrationCodeWithContext(ctx aws.Context, input *DeleteRegistrationCodeInput, opts ...request.Option) (*DeleteRegistrationCodeOutput, error) { - req, out := c.DeleteRegistrationCodeRequest(input) +func (c *IoT) DeleteOTAUpdateWithContext(ctx aws.Context, input *DeleteOTAUpdateInput, opts ...request.Option) (*DeleteOTAUpdateOutput, error) { + req, out := c.DeleteOTAUpdateRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteRoleAlias = "DeleteRoleAlias" +const opDeletePolicy = "DeletePolicy" -// DeleteRoleAliasRequest generates a "aws/request.Request" representing the -// client's request for the DeleteRoleAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeletePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeletePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteRoleAlias for more information on using the DeleteRoleAlias +// See DeletePolicy for more information on using the DeletePolicy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteRoleAliasRequest method. -// req, resp := client.DeleteRoleAliasRequest(params) +// // Example sending a request using the DeletePolicyRequest method. +// req, resp := client.DeletePolicyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteRoleAliasRequest(input *DeleteRoleAliasInput) (req *request.Request, output *DeleteRoleAliasOutput) { +func (c *IoT) DeletePolicyRequest(input *DeletePolicyInput) (req *request.Request, output *DeletePolicyOutput) { op := &request.Operation{ - Name: opDeleteRoleAlias, + Name: opDeletePolicy, HTTPMethod: "DELETE", - HTTPPath: "/role-aliases/{roleAlias}", + HTTPPath: "/policies/{policyName}", } if input == nil { - input = &DeleteRoleAliasInput{} + input = &DeletePolicyInput{} } - output = &DeleteRoleAliasOutput{} + output = &DeletePolicyOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeleteRoleAlias API operation for AWS IoT. +// DeletePolicy API operation for AWS IoT. // -// Deletes a role alias +// Deletes the specified policy. +// +// A policy cannot be deleted if it has non-default versions or it is attached +// to any certificate. +// +// To delete a policy, use the DeletePolicyVersion API to delete all non-default +// versions of the policy; use the DetachPrincipalPolicy API to detach the policy +// from any certificate; and then use the DeletePolicy API to delete the policy. +// +// When a policy is deleted using DeletePolicy, its default version is deleted +// with it. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
// // See the AWS API reference guide for AWS IoT's -// API operation DeleteRoleAlias for usage and error information. +// API operation DeletePolicy for usage and error information. // // Returned Error Codes: // * ErrCodeDeleteConflictException "DeleteConflictException" // You can't delete the resource because it is attached to one or more resources. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -2870,88 +3784,90 @@ func (c *IoT) DeleteRoleAliasRequest(input *DeleteRoleAliasInput) (req *request. // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) DeleteRoleAlias(input *DeleteRoleAliasInput) (*DeleteRoleAliasOutput, error) { - req, out := c.DeleteRoleAliasRequest(input) +func (c *IoT) DeletePolicy(input *DeletePolicyInput) (*DeletePolicyOutput, error) { + req, out := c.DeletePolicyRequest(input) return out, req.Send() } -// DeleteRoleAliasWithContext is the same as DeleteRoleAlias with the addition of +// DeletePolicyWithContext is the same as DeletePolicy with the addition of // the ability to pass a context and additional request options. // -// See DeleteRoleAlias for details on how to use this API operation. +// See DeletePolicy for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeleteRoleAliasWithContext(ctx aws.Context, input *DeleteRoleAliasInput, opts ...request.Option) (*DeleteRoleAliasOutput, error) { - req, out := c.DeleteRoleAliasRequest(input) +func (c *IoT) DeletePolicyWithContext(ctx aws.Context, input *DeletePolicyInput, opts ...request.Option) (*DeletePolicyOutput, error) { + req, out := c.DeletePolicyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteStream = "DeleteStream" +const opDeletePolicyVersion = "DeletePolicyVersion" -// DeleteStreamRequest generates a "aws/request.Request" representing the -// client's request for the DeleteStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeletePolicyVersionRequest generates a "aws/request.Request" representing the +// client's request for the DeletePolicyVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteStream for more information on using the DeleteStream +// See DeletePolicyVersion for more information on using the DeletePolicyVersion // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteStreamRequest method. 
-// req, resp := client.DeleteStreamRequest(params) +// // Example sending a request using the DeletePolicyVersionRequest method. +// req, resp := client.DeletePolicyVersionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteStreamRequest(input *DeleteStreamInput) (req *request.Request, output *DeleteStreamOutput) { +func (c *IoT) DeletePolicyVersionRequest(input *DeletePolicyVersionInput) (req *request.Request, output *DeletePolicyVersionOutput) { op := &request.Operation{ - Name: opDeleteStream, + Name: opDeletePolicyVersion, HTTPMethod: "DELETE", - HTTPPath: "/streams/{streamId}", + HTTPPath: "/policies/{policyName}/version/{policyVersionId}", } if input == nil { - input = &DeleteStreamInput{} + input = &DeletePolicyVersionInput{} } - output = &DeleteStreamOutput{} + output = &DeletePolicyVersionOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeleteStream API operation for AWS IoT. +// DeletePolicyVersion API operation for AWS IoT. // -// Deletes a stream. +// Deletes the specified version of the specified policy. You cannot delete +// the default version of a policy using this API. To delete the default version +// of a policy, use DeletePolicy. To find out which version of a policy is marked +// as the default version, use ListPolicyVersions. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteStream for usage and error information. +// API operation DeletePolicyVersion for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeDeleteConflictException "DeleteConflictException" // You can't delete the resource because it is attached to one or more resources. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -2967,92 +3883,85 @@ func (c *IoT) DeleteStreamRequest(input *DeleteStreamInput) (req *request.Reques // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DeleteStream(input *DeleteStreamInput) (*DeleteStreamOutput, error) { - req, out := c.DeleteStreamRequest(input) +func (c *IoT) DeletePolicyVersion(input *DeletePolicyVersionInput) (*DeletePolicyVersionOutput, error) { + req, out := c.DeletePolicyVersionRequest(input) return out, req.Send() } -// DeleteStreamWithContext is the same as DeleteStream with the addition of +// DeletePolicyVersionWithContext is the same as DeletePolicyVersion with the addition of // the ability to pass a context and additional request options. // -// See DeleteStream for details on how to use this API operation. +// See DeletePolicyVersion for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) DeleteStreamWithContext(ctx aws.Context, input *DeleteStreamInput, opts ...request.Option) (*DeleteStreamOutput, error) { - req, out := c.DeleteStreamRequest(input) +func (c *IoT) DeletePolicyVersionWithContext(ctx aws.Context, input *DeletePolicyVersionInput, opts ...request.Option) (*DeletePolicyVersionOutput, error) { + req, out := c.DeletePolicyVersionRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteThing = "DeleteThing" +const opDeleteRegistrationCode = "DeleteRegistrationCode" -// DeleteThingRequest generates a "aws/request.Request" representing the -// client's request for the DeleteThing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteRegistrationCodeRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRegistrationCode operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteThing for more information on using the DeleteThing +// See DeleteRegistrationCode for more information on using the DeleteRegistrationCode // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteThingRequest method. -// req, resp := client.DeleteThingRequest(params) +// // Example sending a request using the DeleteRegistrationCodeRequest method. +// req, resp := client.DeleteRegistrationCodeRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteThingRequest(input *DeleteThingInput) (req *request.Request, output *DeleteThingOutput) { +func (c *IoT) DeleteRegistrationCodeRequest(input *DeleteRegistrationCodeInput) (req *request.Request, output *DeleteRegistrationCodeOutput) { op := &request.Operation{ - Name: opDeleteThing, + Name: opDeleteRegistrationCode, HTTPMethod: "DELETE", - HTTPPath: "/things/{thingName}", + HTTPPath: "/registrationcode", } if input == nil { - input = &DeleteThingInput{} + input = &DeleteRegistrationCodeInput{} } - output = &DeleteThingOutput{} + output = &DeleteRegistrationCodeOutput{} req = c.newRequest(op, input, output) return } -// DeleteThing API operation for AWS IoT. +// DeleteRegistrationCode API operation for AWS IoT. // -// Deletes the specified thing. +// Deletes a CA certificate registration code. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteThing for usage and error information. +// API operation DeleteRegistrationCode for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -// * ErrCodeVersionConflictException "VersionConflictException" -// An exception thrown when the version of a thing passed to a command is different -// than the version specified with the --version parameter. 
-// -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // @@ -3062,435 +3971,446 @@ func (c *IoT) DeleteThingRequest(input *DeleteThingInput) (req *request.Request, // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DeleteThing(input *DeleteThingInput) (*DeleteThingOutput, error) { - req, out := c.DeleteThingRequest(input) +func (c *IoT) DeleteRegistrationCode(input *DeleteRegistrationCodeInput) (*DeleteRegistrationCodeOutput, error) { + req, out := c.DeleteRegistrationCodeRequest(input) return out, req.Send() } -// DeleteThingWithContext is the same as DeleteThing with the addition of +// DeleteRegistrationCodeWithContext is the same as DeleteRegistrationCode with the addition of // the ability to pass a context and additional request options. // -// See DeleteThing for details on how to use this API operation. +// See DeleteRegistrationCode for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeleteThingWithContext(ctx aws.Context, input *DeleteThingInput, opts ...request.Option) (*DeleteThingOutput, error) { - req, out := c.DeleteThingRequest(input) +func (c *IoT) DeleteRegistrationCodeWithContext(ctx aws.Context, input *DeleteRegistrationCodeInput, opts ...request.Option) (*DeleteRegistrationCodeOutput, error) { + req, out := c.DeleteRegistrationCodeRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteThingGroup = "DeleteThingGroup" +const opDeleteRoleAlias = "DeleteRoleAlias" -// DeleteThingGroupRequest generates a "aws/request.Request" representing the -// client's request for the DeleteThingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteRoleAliasRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRoleAlias operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteThingGroup for more information on using the DeleteThingGroup +// See DeleteRoleAlias for more information on using the DeleteRoleAlias // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteThingGroupRequest method. -// req, resp := client.DeleteThingGroupRequest(params) +// // Example sending a request using the DeleteRoleAliasRequest method. 
+// req, resp := client.DeleteRoleAliasRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteThingGroupRequest(input *DeleteThingGroupInput) (req *request.Request, output *DeleteThingGroupOutput) { +func (c *IoT) DeleteRoleAliasRequest(input *DeleteRoleAliasInput) (req *request.Request, output *DeleteRoleAliasOutput) { op := &request.Operation{ - Name: opDeleteThingGroup, + Name: opDeleteRoleAlias, HTTPMethod: "DELETE", - HTTPPath: "/thing-groups/{thingGroupName}", + HTTPPath: "/role-aliases/{roleAlias}", } if input == nil { - input = &DeleteThingGroupInput{} + input = &DeleteRoleAliasInput{} } - output = &DeleteThingGroupOutput{} + output = &DeleteRoleAliasOutput{} req = c.newRequest(op, input, output) return } -// DeleteThingGroup API operation for AWS IoT. +// DeleteRoleAlias API operation for AWS IoT. // -// Deletes a thing group. +// Deletes a role alias // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteThingGroup for usage and error information. +// API operation DeleteRoleAlias for usage and error information. // // Returned Error Codes: +// * ErrCodeDeleteConflictException "DeleteConflictException" +// You can't delete the resource because it is attached to one or more resources. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeVersionConflictException "VersionConflictException" -// An exception thrown when the version of a thing passed to a command is different -// than the version specified with the --version parameter. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DeleteThingGroup(input *DeleteThingGroupInput) (*DeleteThingGroupOutput, error) { - req, out := c.DeleteThingGroupRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) DeleteRoleAlias(input *DeleteRoleAliasInput) (*DeleteRoleAliasOutput, error) { + req, out := c.DeleteRoleAliasRequest(input) return out, req.Send() } -// DeleteThingGroupWithContext is the same as DeleteThingGroup with the addition of +// DeleteRoleAliasWithContext is the same as DeleteRoleAlias with the addition of // the ability to pass a context and additional request options. // -// See DeleteThingGroup for details on how to use this API operation. +// See DeleteRoleAlias for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) DeleteThingGroupWithContext(ctx aws.Context, input *DeleteThingGroupInput, opts ...request.Option) (*DeleteThingGroupOutput, error) { - req, out := c.DeleteThingGroupRequest(input) +func (c *IoT) DeleteRoleAliasWithContext(ctx aws.Context, input *DeleteRoleAliasInput, opts ...request.Option) (*DeleteRoleAliasOutput, error) { + req, out := c.DeleteRoleAliasRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteThingType = "DeleteThingType" +const opDeleteScheduledAudit = "DeleteScheduledAudit" -// DeleteThingTypeRequest generates a "aws/request.Request" representing the -// client's request for the DeleteThingType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteScheduledAuditRequest generates a "aws/request.Request" representing the +// client's request for the DeleteScheduledAudit operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteThingType for more information on using the DeleteThingType +// See DeleteScheduledAudit for more information on using the DeleteScheduledAudit // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteThingTypeRequest method. -// req, resp := client.DeleteThingTypeRequest(params) +// // Example sending a request using the DeleteScheduledAuditRequest method. +// req, resp := client.DeleteScheduledAuditRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteThingTypeRequest(input *DeleteThingTypeInput) (req *request.Request, output *DeleteThingTypeOutput) { +func (c *IoT) DeleteScheduledAuditRequest(input *DeleteScheduledAuditInput) (req *request.Request, output *DeleteScheduledAuditOutput) { op := &request.Operation{ - Name: opDeleteThingType, + Name: opDeleteScheduledAudit, HTTPMethod: "DELETE", - HTTPPath: "/thing-types/{thingTypeName}", + HTTPPath: "/audit/scheduledaudits/{scheduledAuditName}", } if input == nil { - input = &DeleteThingTypeInput{} + input = &DeleteScheduledAuditInput{} } - output = &DeleteThingTypeOutput{} + output = &DeleteScheduledAuditOutput{} req = c.newRequest(op, input, output) return } -// DeleteThingType API operation for AWS IoT. +// DeleteScheduledAudit API operation for AWS IoT. // -// Deletes the specified thing type . You cannot delete a thing type if it has -// things associated with it. To delete a thing type, first mark it as deprecated -// by calling DeprecateThingType, then remove any associated things by calling -// UpdateThing to change the thing type on any associated thing, and finally -// use DeleteThingType to delete the thing type. +// Deletes a scheduled audit. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteThingType for usage and error information. +// API operation DeleteScheduledAudit for usage and error information. 
// // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DeleteThingType(input *DeleteThingTypeInput) (*DeleteThingTypeOutput, error) { - req, out := c.DeleteThingTypeRequest(input) +func (c *IoT) DeleteScheduledAudit(input *DeleteScheduledAuditInput) (*DeleteScheduledAuditOutput, error) { + req, out := c.DeleteScheduledAuditRequest(input) return out, req.Send() } -// DeleteThingTypeWithContext is the same as DeleteThingType with the addition of +// DeleteScheduledAuditWithContext is the same as DeleteScheduledAudit with the addition of // the ability to pass a context and additional request options. // -// See DeleteThingType for details on how to use this API operation. +// See DeleteScheduledAudit for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeleteThingTypeWithContext(ctx aws.Context, input *DeleteThingTypeInput, opts ...request.Option) (*DeleteThingTypeOutput, error) { - req, out := c.DeleteThingTypeRequest(input) +func (c *IoT) DeleteScheduledAuditWithContext(ctx aws.Context, input *DeleteScheduledAuditInput, opts ...request.Option) (*DeleteScheduledAuditOutput, error) { + req, out := c.DeleteScheduledAuditRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteTopicRule = "DeleteTopicRule" +const opDeleteSecurityProfile = "DeleteSecurityProfile" -// DeleteTopicRuleRequest generates a "aws/request.Request" representing the -// client's request for the DeleteTopicRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteSecurityProfileRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSecurityProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteTopicRule for more information on using the DeleteTopicRule +// See DeleteSecurityProfile for more information on using the DeleteSecurityProfile // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteTopicRuleRequest method. 
-// req, resp := client.DeleteTopicRuleRequest(params) +// // Example sending a request using the DeleteSecurityProfileRequest method. +// req, resp := client.DeleteSecurityProfileRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteTopicRuleRequest(input *DeleteTopicRuleInput) (req *request.Request, output *DeleteTopicRuleOutput) { +func (c *IoT) DeleteSecurityProfileRequest(input *DeleteSecurityProfileInput) (req *request.Request, output *DeleteSecurityProfileOutput) { op := &request.Operation{ - Name: opDeleteTopicRule, + Name: opDeleteSecurityProfile, HTTPMethod: "DELETE", - HTTPPath: "/rules/{ruleName}", + HTTPPath: "/security-profiles/{securityProfileName}", } if input == nil { - input = &DeleteTopicRuleInput{} + input = &DeleteSecurityProfileInput{} } - output = &DeleteTopicRuleOutput{} + output = &DeleteSecurityProfileOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeleteTopicRule API operation for AWS IoT. +// DeleteSecurityProfile API operation for AWS IoT. // -// Deletes the rule. +// Deletes a Device Defender security profile. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteTopicRule for usage and error information. +// API operation DeleteSecurityProfile for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. // -func (c *IoT) DeleteTopicRule(input *DeleteTopicRuleInput) (*DeleteTopicRuleOutput, error) { - req, out := c.DeleteTopicRuleRequest(input) +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// +func (c *IoT) DeleteSecurityProfile(input *DeleteSecurityProfileInput) (*DeleteSecurityProfileOutput, error) { + req, out := c.DeleteSecurityProfileRequest(input) return out, req.Send() } -// DeleteTopicRuleWithContext is the same as DeleteTopicRule with the addition of +// DeleteSecurityProfileWithContext is the same as DeleteSecurityProfile with the addition of // the ability to pass a context and additional request options. // -// See DeleteTopicRule for details on how to use this API operation. +// See DeleteSecurityProfile for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) DeleteTopicRuleWithContext(ctx aws.Context, input *DeleteTopicRuleInput, opts ...request.Option) (*DeleteTopicRuleOutput, error) { - req, out := c.DeleteTopicRuleRequest(input) +func (c *IoT) DeleteSecurityProfileWithContext(ctx aws.Context, input *DeleteSecurityProfileInput, opts ...request.Option) (*DeleteSecurityProfileOutput, error) { + req, out := c.DeleteSecurityProfileRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteV2LoggingLevel = "DeleteV2LoggingLevel" +const opDeleteStream = "DeleteStream" -// DeleteV2LoggingLevelRequest generates a "aws/request.Request" representing the -// client's request for the DeleteV2LoggingLevel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteStreamRequest generates a "aws/request.Request" representing the +// client's request for the DeleteStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteV2LoggingLevel for more information on using the DeleteV2LoggingLevel +// See DeleteStream for more information on using the DeleteStream // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteV2LoggingLevelRequest method. -// req, resp := client.DeleteV2LoggingLevelRequest(params) +// // Example sending a request using the DeleteStreamRequest method. +// req, resp := client.DeleteStreamRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeleteV2LoggingLevelRequest(input *DeleteV2LoggingLevelInput) (req *request.Request, output *DeleteV2LoggingLevelOutput) { +func (c *IoT) DeleteStreamRequest(input *DeleteStreamInput) (req *request.Request, output *DeleteStreamOutput) { op := &request.Operation{ - Name: opDeleteV2LoggingLevel, + Name: opDeleteStream, HTTPMethod: "DELETE", - HTTPPath: "/v2LoggingLevel", + HTTPPath: "/streams/{streamId}", } if input == nil { - input = &DeleteV2LoggingLevelInput{} + input = &DeleteStreamInput{} } - output = &DeleteV2LoggingLevelOutput{} + output = &DeleteStreamOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DeleteV2LoggingLevel API operation for AWS IoT. +// DeleteStream API operation for AWS IoT. // -// Deletes a logging level. +// Deletes a stream. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeleteV2LoggingLevel for usage and error information. +// API operation DeleteStream for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. 
+// +// * ErrCodeDeleteConflictException "DeleteConflictException" +// You can't delete the resource because it is attached to one or more resources. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -func (c *IoT) DeleteV2LoggingLevel(input *DeleteV2LoggingLevelInput) (*DeleteV2LoggingLevelOutput, error) { - req, out := c.DeleteV2LoggingLevelRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) DeleteStream(input *DeleteStreamInput) (*DeleteStreamOutput, error) { + req, out := c.DeleteStreamRequest(input) return out, req.Send() } -// DeleteV2LoggingLevelWithContext is the same as DeleteV2LoggingLevel with the addition of +// DeleteStreamWithContext is the same as DeleteStream with the addition of // the ability to pass a context and additional request options. // -// See DeleteV2LoggingLevel for details on how to use this API operation. +// See DeleteStream for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeleteV2LoggingLevelWithContext(ctx aws.Context, input *DeleteV2LoggingLevelInput, opts ...request.Option) (*DeleteV2LoggingLevelOutput, error) { - req, out := c.DeleteV2LoggingLevelRequest(input) +func (c *IoT) DeleteStreamWithContext(ctx aws.Context, input *DeleteStreamInput, opts ...request.Option) (*DeleteStreamOutput, error) { + req, out := c.DeleteStreamRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeprecateThingType = "DeprecateThingType" +const opDeleteThing = "DeleteThing" -// DeprecateThingTypeRequest generates a "aws/request.Request" representing the -// client's request for the DeprecateThingType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteThingRequest generates a "aws/request.Request" representing the +// client's request for the DeleteThing operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeprecateThingType for more information on using the DeprecateThingType +// See DeleteThing for more information on using the DeleteThing // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeprecateThingTypeRequest method. -// req, resp := client.DeprecateThingTypeRequest(params) +// // Example sending a request using the DeleteThingRequest method. 
+// req, resp := client.DeleteThingRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DeprecateThingTypeRequest(input *DeprecateThingTypeInput) (req *request.Request, output *DeprecateThingTypeOutput) { +func (c *IoT) DeleteThingRequest(input *DeleteThingInput) (req *request.Request, output *DeleteThingOutput) { op := &request.Operation{ - Name: opDeprecateThingType, - HTTPMethod: "POST", - HTTPPath: "/thing-types/{thingTypeName}/deprecate", + Name: opDeleteThing, + HTTPMethod: "DELETE", + HTTPPath: "/things/{thingName}", } if input == nil { - input = &DeprecateThingTypeInput{} + input = &DeleteThingInput{} } - output = &DeprecateThingTypeOutput{} + output = &DeleteThingOutput{} req = c.newRequest(op, input, output) return } -// DeprecateThingType API operation for AWS IoT. +// DeleteThing API operation for AWS IoT. // -// Deprecates a thing type. You can not associate new things with deprecated -// thing type. +// Deletes the specified thing. Returns successfully with no error if the deletion +// is successful or you specify a thing that doesn't exist. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DeprecateThingType for usage and error information. +// API operation DeleteThing for usage and error information. // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -3506,170 +4426,172 @@ func (c *IoT) DeprecateThingTypeRequest(input *DeprecateThingTypeInput) (req *re // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DeprecateThingType(input *DeprecateThingTypeInput) (*DeprecateThingTypeOutput, error) { - req, out := c.DeprecateThingTypeRequest(input) +func (c *IoT) DeleteThing(input *DeleteThingInput) (*DeleteThingOutput, error) { + req, out := c.DeleteThingRequest(input) return out, req.Send() } -// DeprecateThingTypeWithContext is the same as DeprecateThingType with the addition of +// DeleteThingWithContext is the same as DeleteThing with the addition of // the ability to pass a context and additional request options. // -// See DeprecateThingType for details on how to use this API operation. +// See DeleteThing for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DeprecateThingTypeWithContext(ctx aws.Context, input *DeprecateThingTypeInput, opts ...request.Option) (*DeprecateThingTypeOutput, error) { - req, out := c.DeprecateThingTypeRequest(input) +func (c *IoT) DeleteThingWithContext(ctx aws.Context, input *DeleteThingInput, opts ...request.Option) (*DeleteThingOutput, error) { + req, out := c.DeleteThingRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opDescribeAuthorizer = "DescribeAuthorizer" +const opDeleteThingGroup = "DeleteThingGroup" -// DescribeAuthorizerRequest generates a "aws/request.Request" representing the -// client's request for the DescribeAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteThingGroupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteThingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeAuthorizer for more information on using the DescribeAuthorizer +// See DeleteThingGroup for more information on using the DeleteThingGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeAuthorizerRequest method. -// req, resp := client.DescribeAuthorizerRequest(params) +// // Example sending a request using the DeleteThingGroupRequest method. +// req, resp := client.DeleteThingGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeAuthorizerRequest(input *DescribeAuthorizerInput) (req *request.Request, output *DescribeAuthorizerOutput) { +func (c *IoT) DeleteThingGroupRequest(input *DeleteThingGroupInput) (req *request.Request, output *DeleteThingGroupOutput) { op := &request.Operation{ - Name: opDescribeAuthorizer, - HTTPMethod: "GET", - HTTPPath: "/authorizer/{authorizerName}", + Name: opDeleteThingGroup, + HTTPMethod: "DELETE", + HTTPPath: "/thing-groups/{thingGroupName}", } if input == nil { - input = &DescribeAuthorizerInput{} + input = &DeleteThingGroupInput{} } - output = &DescribeAuthorizerOutput{} + output = &DeleteThingGroupOutput{} req = c.newRequest(op, input, output) return } -// DescribeAuthorizer API operation for AWS IoT. +// DeleteThingGroup API operation for AWS IoT. // -// Describes an authorizer. +// Deletes a thing group. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeAuthorizer for usage and error information. +// API operation DeleteThingGroup for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. 
-// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DescribeAuthorizer(input *DescribeAuthorizerInput) (*DescribeAuthorizerOutput, error) { - req, out := c.DescribeAuthorizerRequest(input) +func (c *IoT) DeleteThingGroup(input *DeleteThingGroupInput) (*DeleteThingGroupOutput, error) { + req, out := c.DeleteThingGroupRequest(input) return out, req.Send() } -// DescribeAuthorizerWithContext is the same as DescribeAuthorizer with the addition of +// DeleteThingGroupWithContext is the same as DeleteThingGroup with the addition of // the ability to pass a context and additional request options. // -// See DescribeAuthorizer for details on how to use this API operation. +// See DeleteThingGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeAuthorizerWithContext(ctx aws.Context, input *DescribeAuthorizerInput, opts ...request.Option) (*DescribeAuthorizerOutput, error) { - req, out := c.DescribeAuthorizerRequest(input) +func (c *IoT) DeleteThingGroupWithContext(ctx aws.Context, input *DeleteThingGroupInput, opts ...request.Option) (*DeleteThingGroupOutput, error) { + req, out := c.DeleteThingGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeCACertificate = "DescribeCACertificate" +const opDeleteThingType = "DeleteThingType" -// DescribeCACertificateRequest generates a "aws/request.Request" representing the -// client's request for the DescribeCACertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteThingTypeRequest generates a "aws/request.Request" representing the +// client's request for the DeleteThingType operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeCACertificate for more information on using the DescribeCACertificate +// See DeleteThingType for more information on using the DeleteThingType // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeCACertificateRequest method. -// req, resp := client.DescribeCACertificateRequest(params) +// // Example sending a request using the DeleteThingTypeRequest method. 
+// req, resp := client.DeleteThingTypeRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeCACertificateRequest(input *DescribeCACertificateInput) (req *request.Request, output *DescribeCACertificateOutput) { +func (c *IoT) DeleteThingTypeRequest(input *DeleteThingTypeInput) (req *request.Request, output *DeleteThingTypeOutput) { op := &request.Operation{ - Name: opDescribeCACertificate, - HTTPMethod: "GET", - HTTPPath: "/cacertificate/{caCertificateId}", + Name: opDeleteThingType, + HTTPMethod: "DELETE", + HTTPPath: "/thing-types/{thingTypeName}", } if input == nil { - input = &DescribeCACertificateInput{} + input = &DeleteThingTypeInput{} } - output = &DescribeCACertificateOutput{} + output = &DeleteThingTypeOutput{} req = c.newRequest(op, input, output) return } -// DescribeCACertificate API operation for AWS IoT. +// DeleteThingType API operation for AWS IoT. // -// Describes a registered CA certificate. +// Deletes the specified thing type. You cannot delete a thing type if it has +// things associated with it. To delete a thing type, first mark it as deprecated +// by calling DeprecateThingType, then remove any associated things by calling +// UpdateThing to change the thing type on any associated thing, and finally +// use DeleteThingType to delete the thing type. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeCACertificate for usage and error information. +// API operation DeleteThingType for usage and error information. // // Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -3685,687 +4607,686 @@ func (c *IoT) DescribeCACertificateRequest(input *DescribeCACertificateInput) (r // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) DescribeCACertificate(input *DescribeCACertificateInput) (*DescribeCACertificateOutput, error) { - req, out := c.DescribeCACertificateRequest(input) +func (c *IoT) DeleteThingType(input *DeleteThingTypeInput) (*DeleteThingTypeOutput, error) { + req, out := c.DeleteThingTypeRequest(input) return out, req.Send() } -// DescribeCACertificateWithContext is the same as DescribeCACertificate with the addition of +// DeleteThingTypeWithContext is the same as DeleteThingType with the addition of // the ability to pass a context and additional request options. // -// See DescribeCACertificate for details on how to use this API operation. +// See DeleteThingType for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) DescribeCACertificateWithContext(ctx aws.Context, input *DescribeCACertificateInput, opts ...request.Option) (*DescribeCACertificateOutput, error) { - req, out := c.DescribeCACertificateRequest(input) +func (c *IoT) DeleteThingTypeWithContext(ctx aws.Context, input *DeleteThingTypeInput, opts ...request.Option) (*DeleteThingTypeOutput, error) { + req, out := c.DeleteThingTypeRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeCertificate = "DescribeCertificate" +const opDeleteTopicRule = "DeleteTopicRule" -// DescribeCertificateRequest generates a "aws/request.Request" representing the -// client's request for the DescribeCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteTopicRuleRequest generates a "aws/request.Request" representing the +// client's request for the DeleteTopicRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeCertificate for more information on using the DescribeCertificate +// See DeleteTopicRule for more information on using the DeleteTopicRule // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeCertificateRequest method. -// req, resp := client.DescribeCertificateRequest(params) +// // Example sending a request using the DeleteTopicRuleRequest method. +// req, resp := client.DeleteTopicRuleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeCertificateRequest(input *DescribeCertificateInput) (req *request.Request, output *DescribeCertificateOutput) { +func (c *IoT) DeleteTopicRuleRequest(input *DeleteTopicRuleInput) (req *request.Request, output *DeleteTopicRuleOutput) { op := &request.Operation{ - Name: opDescribeCertificate, - HTTPMethod: "GET", - HTTPPath: "/certificates/{certificateId}", + Name: opDeleteTopicRule, + HTTPMethod: "DELETE", + HTTPPath: "/rules/{ruleName}", } if input == nil { - input = &DescribeCertificateInput{} + input = &DeleteTopicRuleInput{} } - output = &DescribeCertificateOutput{} + output = &DeleteTopicRuleOutput{} req = c.newRequest(op, input, output) - return + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return } -// DescribeCertificate API operation for AWS IoT. +// DeleteTopicRule API operation for AWS IoT. // -// Gets information about the specified certificate. +// Deletes the rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeCertificate for usage and error information. +// API operation DeleteTopicRule for usage and error information. // // Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. 
+// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. +// * ErrCodeConflictingResourceUpdateException "ConflictingResourceUpdateException" +// A conflicting resource update exception. This exception is thrown when two +// pending updates cause a conflict. // -func (c *IoT) DescribeCertificate(input *DescribeCertificateInput) (*DescribeCertificateOutput, error) { - req, out := c.DescribeCertificateRequest(input) +func (c *IoT) DeleteTopicRule(input *DeleteTopicRuleInput) (*DeleteTopicRuleOutput, error) { + req, out := c.DeleteTopicRuleRequest(input) return out, req.Send() } -// DescribeCertificateWithContext is the same as DescribeCertificate with the addition of +// DeleteTopicRuleWithContext is the same as DeleteTopicRule with the addition of // the ability to pass a context and additional request options. // -// See DescribeCertificate for details on how to use this API operation. +// See DeleteTopicRule for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeCertificateWithContext(ctx aws.Context, input *DescribeCertificateInput, opts ...request.Option) (*DescribeCertificateOutput, error) { - req, out := c.DescribeCertificateRequest(input) +func (c *IoT) DeleteTopicRuleWithContext(ctx aws.Context, input *DeleteTopicRuleInput, opts ...request.Option) (*DeleteTopicRuleOutput, error) { + req, out := c.DeleteTopicRuleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeDefaultAuthorizer = "DescribeDefaultAuthorizer" +const opDeleteV2LoggingLevel = "DeleteV2LoggingLevel" -// DescribeDefaultAuthorizerRequest generates a "aws/request.Request" representing the -// client's request for the DescribeDefaultAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteV2LoggingLevelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteV2LoggingLevel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeDefaultAuthorizer for more information on using the DescribeDefaultAuthorizer +// See DeleteV2LoggingLevel for more information on using the DeleteV2LoggingLevel // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeDefaultAuthorizerRequest method. -// req, resp := client.DescribeDefaultAuthorizerRequest(params) +// // Example sending a request using the DeleteV2LoggingLevelRequest method. +// req, resp := client.DeleteV2LoggingLevelRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeDefaultAuthorizerRequest(input *DescribeDefaultAuthorizerInput) (req *request.Request, output *DescribeDefaultAuthorizerOutput) { +func (c *IoT) DeleteV2LoggingLevelRequest(input *DeleteV2LoggingLevelInput) (req *request.Request, output *DeleteV2LoggingLevelOutput) { op := &request.Operation{ - Name: opDescribeDefaultAuthorizer, - HTTPMethod: "GET", - HTTPPath: "/default-authorizer", + Name: opDeleteV2LoggingLevel, + HTTPMethod: "DELETE", + HTTPPath: "/v2LoggingLevel", } if input == nil { - input = &DescribeDefaultAuthorizerInput{} + input = &DeleteV2LoggingLevelInput{} } - output = &DescribeDefaultAuthorizerOutput{} + output = &DeleteV2LoggingLevelOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DescribeDefaultAuthorizer API operation for AWS IoT. +// DeleteV2LoggingLevel API operation for AWS IoT. // -// Describes the default authorizer. +// Deletes a logging level. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeDefaultAuthorizer for usage and error information. +// API operation DeleteV2LoggingLevel for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -func (c *IoT) DescribeDefaultAuthorizer(input *DescribeDefaultAuthorizerInput) (*DescribeDefaultAuthorizerOutput, error) { - req, out := c.DescribeDefaultAuthorizerRequest(input) +func (c *IoT) DeleteV2LoggingLevel(input *DeleteV2LoggingLevelInput) (*DeleteV2LoggingLevelOutput, error) { + req, out := c.DeleteV2LoggingLevelRequest(input) return out, req.Send() } -// DescribeDefaultAuthorizerWithContext is the same as DescribeDefaultAuthorizer with the addition of +// DeleteV2LoggingLevelWithContext is the same as DeleteV2LoggingLevel with the addition of // the ability to pass a context and additional request options. // -// See DescribeDefaultAuthorizer for details on how to use this API operation. +// See DeleteV2LoggingLevel for details on how to use this API operation. 
// // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeDefaultAuthorizerWithContext(ctx aws.Context, input *DescribeDefaultAuthorizerInput, opts ...request.Option) (*DescribeDefaultAuthorizerOutput, error) { - req, out := c.DescribeDefaultAuthorizerRequest(input) +func (c *IoT) DeleteV2LoggingLevelWithContext(ctx aws.Context, input *DeleteV2LoggingLevelInput, opts ...request.Option) (*DeleteV2LoggingLevelOutput, error) { + req, out := c.DeleteV2LoggingLevelRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeEndpoint = "DescribeEndpoint" +const opDeprecateThingType = "DeprecateThingType" -// DescribeEndpointRequest generates a "aws/request.Request" representing the -// client's request for the DescribeEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeprecateThingTypeRequest generates a "aws/request.Request" representing the +// client's request for the DeprecateThingType operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeEndpoint for more information on using the DescribeEndpoint +// See DeprecateThingType for more information on using the DeprecateThingType // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeEndpointRequest method. -// req, resp := client.DescribeEndpointRequest(params) +// // Example sending a request using the DeprecateThingTypeRequest method. +// req, resp := client.DeprecateThingTypeRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeEndpointRequest(input *DescribeEndpointInput) (req *request.Request, output *DescribeEndpointOutput) { +func (c *IoT) DeprecateThingTypeRequest(input *DeprecateThingTypeInput) (req *request.Request, output *DeprecateThingTypeOutput) { op := &request.Operation{ - Name: opDescribeEndpoint, - HTTPMethod: "GET", - HTTPPath: "/endpoint", + Name: opDeprecateThingType, + HTTPMethod: "POST", + HTTPPath: "/thing-types/{thingTypeName}/deprecate", } if input == nil { - input = &DescribeEndpointInput{} + input = &DeprecateThingTypeInput{} } - output = &DescribeEndpointOutput{} + output = &DeprecateThingTypeOutput{} req = c.newRequest(op, input, output) return } -// DescribeEndpoint API operation for AWS IoT. +// DeprecateThingType API operation for AWS IoT. // -// Returns a unique endpoint specific to the AWS account making the call. +// Deprecates a thing type. You can not associate new things with deprecated +// thing type. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeEndpoint for usage and error information. 
+// API operation DeprecateThingType for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. // -func (c *IoT) DescribeEndpoint(input *DescribeEndpointInput) (*DescribeEndpointOutput, error) { - req, out := c.DescribeEndpointRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) DeprecateThingType(input *DeprecateThingTypeInput) (*DeprecateThingTypeOutput, error) { + req, out := c.DeprecateThingTypeRequest(input) return out, req.Send() } -// DescribeEndpointWithContext is the same as DescribeEndpoint with the addition of +// DeprecateThingTypeWithContext is the same as DeprecateThingType with the addition of // the ability to pass a context and additional request options. // -// See DescribeEndpoint for details on how to use this API operation. +// See DeprecateThingType for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeEndpointWithContext(ctx aws.Context, input *DescribeEndpointInput, opts ...request.Option) (*DescribeEndpointOutput, error) { - req, out := c.DescribeEndpointRequest(input) +func (c *IoT) DeprecateThingTypeWithContext(ctx aws.Context, input *DeprecateThingTypeInput, opts ...request.Option) (*DeprecateThingTypeOutput, error) { + req, out := c.DeprecateThingTypeRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeEventConfigurations = "DescribeEventConfigurations" +const opDescribeAccountAuditConfiguration = "DescribeAccountAuditConfiguration" -// DescribeEventConfigurationsRequest generates a "aws/request.Request" representing the -// client's request for the DescribeEventConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeAccountAuditConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAccountAuditConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeEventConfigurations for more information on using the DescribeEventConfigurations +// See DescribeAccountAuditConfiguration for more information on using the DescribeAccountAuditConfiguration // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeEventConfigurationsRequest method. -// req, resp := client.DescribeEventConfigurationsRequest(params) +// // Example sending a request using the DescribeAccountAuditConfigurationRequest method. +// req, resp := client.DescribeAccountAuditConfigurationRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeEventConfigurationsRequest(input *DescribeEventConfigurationsInput) (req *request.Request, output *DescribeEventConfigurationsOutput) { +func (c *IoT) DescribeAccountAuditConfigurationRequest(input *DescribeAccountAuditConfigurationInput) (req *request.Request, output *DescribeAccountAuditConfigurationOutput) { op := &request.Operation{ - Name: opDescribeEventConfigurations, + Name: opDescribeAccountAuditConfiguration, HTTPMethod: "GET", - HTTPPath: "/event-configurations", + HTTPPath: "/audit/configuration", } if input == nil { - input = &DescribeEventConfigurationsInput{} + input = &DescribeAccountAuditConfigurationInput{} } - output = &DescribeEventConfigurationsOutput{} + output = &DescribeAccountAuditConfigurationOutput{} req = c.newRequest(op, input, output) return } -// DescribeEventConfigurations API operation for AWS IoT. +// DescribeAccountAuditConfiguration API operation for AWS IoT. // -// Describes event configurations. +// Gets information about the Device Defender audit settings for this account. +// Settings include how audit notifications are sent and which audit checks +// are enabled or disabled. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeEventConfigurations for usage and error information. +// API operation DescribeAccountAuditConfiguration for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -func (c *IoT) DescribeEventConfigurations(input *DescribeEventConfigurationsInput) (*DescribeEventConfigurationsOutput, error) { - req, out := c.DescribeEventConfigurationsRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) DescribeAccountAuditConfiguration(input *DescribeAccountAuditConfigurationInput) (*DescribeAccountAuditConfigurationOutput, error) { + req, out := c.DescribeAccountAuditConfigurationRequest(input) return out, req.Send() } -// DescribeEventConfigurationsWithContext is the same as DescribeEventConfigurations with the addition of +// DescribeAccountAuditConfigurationWithContext is the same as DescribeAccountAuditConfiguration with the addition of // the ability to pass a context and additional request options. // -// See DescribeEventConfigurations for details on how to use this API operation. +// See DescribeAccountAuditConfiguration for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeEventConfigurationsWithContext(ctx aws.Context, input *DescribeEventConfigurationsInput, opts ...request.Option) (*DescribeEventConfigurationsOutput, error) { - req, out := c.DescribeEventConfigurationsRequest(input) +func (c *IoT) DescribeAccountAuditConfigurationWithContext(ctx aws.Context, input *DescribeAccountAuditConfigurationInput, opts ...request.Option) (*DescribeAccountAuditConfigurationOutput, error) { + req, out := c.DescribeAccountAuditConfigurationRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeIndex = "DescribeIndex" +const opDescribeAuditTask = "DescribeAuditTask" -// DescribeIndexRequest generates a "aws/request.Request" representing the -// client's request for the DescribeIndex operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeAuditTaskRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAuditTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeIndex for more information on using the DescribeIndex +// See DescribeAuditTask for more information on using the DescribeAuditTask // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeIndexRequest method. -// req, resp := client.DescribeIndexRequest(params) +// // Example sending a request using the DescribeAuditTaskRequest method. +// req, resp := client.DescribeAuditTaskRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeIndexRequest(input *DescribeIndexInput) (req *request.Request, output *DescribeIndexOutput) { +func (c *IoT) DescribeAuditTaskRequest(input *DescribeAuditTaskInput) (req *request.Request, output *DescribeAuditTaskOutput) { op := &request.Operation{ - Name: opDescribeIndex, + Name: opDescribeAuditTask, HTTPMethod: "GET", - HTTPPath: "/indices/{indexName}", + HTTPPath: "/audit/tasks/{taskId}", } if input == nil { - input = &DescribeIndexInput{} + input = &DescribeAuditTaskInput{} } - output = &DescribeIndexOutput{} + output = &DescribeAuditTaskOutput{} req = c.newRequest(op, input, output) return } -// DescribeIndex API operation for AWS IoT. +// DescribeAuditTask API operation for AWS IoT. // -// Describes a search index. +// Gets information about a Device Defender audit. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeIndex for usage and error information. +// API operation DescribeAuditTask for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. 
+// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) DescribeIndex(input *DescribeIndexInput) (*DescribeIndexOutput, error) { - req, out := c.DescribeIndexRequest(input) +func (c *IoT) DescribeAuditTask(input *DescribeAuditTaskInput) (*DescribeAuditTaskOutput, error) { + req, out := c.DescribeAuditTaskRequest(input) return out, req.Send() } -// DescribeIndexWithContext is the same as DescribeIndex with the addition of +// DescribeAuditTaskWithContext is the same as DescribeAuditTask with the addition of // the ability to pass a context and additional request options. // -// See DescribeIndex for details on how to use this API operation. +// See DescribeAuditTask for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeIndexWithContext(ctx aws.Context, input *DescribeIndexInput, opts ...request.Option) (*DescribeIndexOutput, error) { - req, out := c.DescribeIndexRequest(input) +func (c *IoT) DescribeAuditTaskWithContext(ctx aws.Context, input *DescribeAuditTaskInput, opts ...request.Option) (*DescribeAuditTaskOutput, error) { + req, out := c.DescribeAuditTaskRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeJob = "DescribeJob" +const opDescribeAuthorizer = "DescribeAuthorizer" -// DescribeJobRequest generates a "aws/request.Request" representing the -// client's request for the DescribeJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeAuthorizerRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAuthorizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeJob for more information on using the DescribeJob +// See DescribeAuthorizer for more information on using the DescribeAuthorizer // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeJobRequest method. -// req, resp := client.DescribeJobRequest(params) +// // Example sending a request using the DescribeAuthorizerRequest method. 
+// req, resp := client.DescribeAuthorizerRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeJobRequest(input *DescribeJobInput) (req *request.Request, output *DescribeJobOutput) { +func (c *IoT) DescribeAuthorizerRequest(input *DescribeAuthorizerInput) (req *request.Request, output *DescribeAuthorizerOutput) { op := &request.Operation{ - Name: opDescribeJob, + Name: opDescribeAuthorizer, HTTPMethod: "GET", - HTTPPath: "/jobs/{jobId}", + HTTPPath: "/authorizer/{authorizerName}", } if input == nil { - input = &DescribeJobInput{} + input = &DescribeAuthorizerInput{} } - output = &DescribeJobOutput{} + output = &DescribeAuthorizerOutput{} req = c.newRequest(op, input, output) return } -// DescribeJob API operation for AWS IoT. +// DescribeAuthorizer API operation for AWS IoT. // -// Describes a job. +// Describes an authorizer. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeJob for usage and error information. +// API operation DescribeAuthorizer for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. -// // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -func (c *IoT) DescribeJob(input *DescribeJobInput) (*DescribeJobOutput, error) { - req, out := c.DescribeJobRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) DescribeAuthorizer(input *DescribeAuthorizerInput) (*DescribeAuthorizerOutput, error) { + req, out := c.DescribeAuthorizerRequest(input) return out, req.Send() } -// DescribeJobWithContext is the same as DescribeJob with the addition of +// DescribeAuthorizerWithContext is the same as DescribeAuthorizer with the addition of // the ability to pass a context and additional request options. // -// See DescribeJob for details on how to use this API operation. +// See DescribeAuthorizer for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeJobWithContext(ctx aws.Context, input *DescribeJobInput, opts ...request.Option) (*DescribeJobOutput, error) { - req, out := c.DescribeJobRequest(input) +func (c *IoT) DescribeAuthorizerWithContext(ctx aws.Context, input *DescribeAuthorizerInput, opts ...request.Option) (*DescribeAuthorizerOutput, error) { + req, out := c.DescribeAuthorizerRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opDescribeJobExecution = "DescribeJobExecution" +const opDescribeBillingGroup = "DescribeBillingGroup" -// DescribeJobExecutionRequest generates a "aws/request.Request" representing the -// client's request for the DescribeJobExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeBillingGroupRequest generates a "aws/request.Request" representing the +// client's request for the DescribeBillingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeJobExecution for more information on using the DescribeJobExecution +// See DescribeBillingGroup for more information on using the DescribeBillingGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeJobExecutionRequest method. -// req, resp := client.DescribeJobExecutionRequest(params) +// // Example sending a request using the DescribeBillingGroupRequest method. +// req, resp := client.DescribeBillingGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeJobExecutionRequest(input *DescribeJobExecutionInput) (req *request.Request, output *DescribeJobExecutionOutput) { +func (c *IoT) DescribeBillingGroupRequest(input *DescribeBillingGroupInput) (req *request.Request, output *DescribeBillingGroupOutput) { op := &request.Operation{ - Name: opDescribeJobExecution, + Name: opDescribeBillingGroup, HTTPMethod: "GET", - HTTPPath: "/things/{thingName}/jobs/{jobId}", + HTTPPath: "/billing-groups/{billingGroupName}", } if input == nil { - input = &DescribeJobExecutionInput{} + input = &DescribeBillingGroupInput{} } - output = &DescribeJobExecutionOutput{} + output = &DescribeBillingGroupOutput{} req = c.newRequest(op, input, output) return } -// DescribeJobExecution API operation for AWS IoT. +// DescribeBillingGroup API operation for AWS IoT. // -// Describes a job execution. +// Returns information about a billing group. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeJobExecution for usage and error information. +// API operation DescribeBillingGroup for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. 
// -func (c *IoT) DescribeJobExecution(input *DescribeJobExecutionInput) (*DescribeJobExecutionOutput, error) { - req, out := c.DescribeJobExecutionRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) DescribeBillingGroup(input *DescribeBillingGroupInput) (*DescribeBillingGroupOutput, error) { + req, out := c.DescribeBillingGroupRequest(input) return out, req.Send() } -// DescribeJobExecutionWithContext is the same as DescribeJobExecution with the addition of +// DescribeBillingGroupWithContext is the same as DescribeBillingGroup with the addition of // the ability to pass a context and additional request options. // -// See DescribeJobExecution for details on how to use this API operation. +// See DescribeBillingGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeJobExecutionWithContext(ctx aws.Context, input *DescribeJobExecutionInput, opts ...request.Option) (*DescribeJobExecutionOutput, error) { - req, out := c.DescribeJobExecutionRequest(input) +func (c *IoT) DescribeBillingGroupWithContext(ctx aws.Context, input *DescribeBillingGroupInput, opts ...request.Option) (*DescribeBillingGroupOutput, error) { + req, out := c.DescribeBillingGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeRoleAlias = "DescribeRoleAlias" +const opDescribeCACertificate = "DescribeCACertificate" -// DescribeRoleAliasRequest generates a "aws/request.Request" representing the -// client's request for the DescribeRoleAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeCACertificateRequest generates a "aws/request.Request" representing the +// client's request for the DescribeCACertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeRoleAlias for more information on using the DescribeRoleAlias +// See DescribeCACertificate for more information on using the DescribeCACertificate // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeRoleAliasRequest method. -// req, resp := client.DescribeRoleAliasRequest(params) +// // Example sending a request using the DescribeCACertificateRequest method. 
+// req, resp := client.DescribeCACertificateRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeRoleAliasRequest(input *DescribeRoleAliasInput) (req *request.Request, output *DescribeRoleAliasOutput) { +func (c *IoT) DescribeCACertificateRequest(input *DescribeCACertificateInput) (req *request.Request, output *DescribeCACertificateOutput) { op := &request.Operation{ - Name: opDescribeRoleAlias, + Name: opDescribeCACertificate, HTTPMethod: "GET", - HTTPPath: "/role-aliases/{roleAlias}", + HTTPPath: "/cacertificate/{caCertificateId}", } if input == nil { - input = &DescribeRoleAliasInput{} + input = &DescribeCACertificateInput{} } - output = &DescribeRoleAliasOutput{} + output = &DescribeCACertificateOutput{} req = c.newRequest(op, input, output) return } -// DescribeRoleAlias API operation for AWS IoT. +// DescribeCACertificate API operation for AWS IoT. // -// Describes a role alias. +// Describes a registered CA certificate. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeRoleAlias for usage and error information. +// API operation DescribeCACertificate for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -4386,85 +5307,82 @@ func (c *IoT) DescribeRoleAliasRequest(input *DescribeRoleAliasInput) (req *requ // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // -func (c *IoT) DescribeRoleAlias(input *DescribeRoleAliasInput) (*DescribeRoleAliasOutput, error) { - req, out := c.DescribeRoleAliasRequest(input) +func (c *IoT) DescribeCACertificate(input *DescribeCACertificateInput) (*DescribeCACertificateOutput, error) { + req, out := c.DescribeCACertificateRequest(input) return out, req.Send() } -// DescribeRoleAliasWithContext is the same as DescribeRoleAlias with the addition of +// DescribeCACertificateWithContext is the same as DescribeCACertificate with the addition of // the ability to pass a context and additional request options. // -// See DescribeRoleAlias for details on how to use this API operation. +// See DescribeCACertificate for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeRoleAliasWithContext(ctx aws.Context, input *DescribeRoleAliasInput, opts ...request.Option) (*DescribeRoleAliasOutput, error) { - req, out := c.DescribeRoleAliasRequest(input) +func (c *IoT) DescribeCACertificateWithContext(ctx aws.Context, input *DescribeCACertificateInput, opts ...request.Option) (*DescribeCACertificateOutput, error) { + req, out := c.DescribeCACertificateRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeStream = "DescribeStream" +const opDescribeCertificate = "DescribeCertificate" -// DescribeStreamRequest generates a "aws/request.Request" representing the -// client's request for the DescribeStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// DescribeCertificateRequest generates a "aws/request.Request" representing the +// client's request for the DescribeCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeStream for more information on using the DescribeStream +// See DescribeCertificate for more information on using the DescribeCertificate // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeStreamRequest method. -// req, resp := client.DescribeStreamRequest(params) +// // Example sending a request using the DescribeCertificateRequest method. +// req, resp := client.DescribeCertificateRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeStreamRequest(input *DescribeStreamInput) (req *request.Request, output *DescribeStreamOutput) { +func (c *IoT) DescribeCertificateRequest(input *DescribeCertificateInput) (req *request.Request, output *DescribeCertificateOutput) { op := &request.Operation{ - Name: opDescribeStream, + Name: opDescribeCertificate, HTTPMethod: "GET", - HTTPPath: "/streams/{streamId}", + HTTPPath: "/certificates/{certificateId}", } if input == nil { - input = &DescribeStreamInput{} + input = &DescribeCertificateInput{} } - output = &DescribeStreamOutput{} + output = &DescribeCertificateOutput{} req = c.newRequest(op, input, output) return } -// DescribeStream API operation for AWS IoT. +// DescribeCertificate API operation for AWS IoT. // -// Gets information about a stream. +// Gets information about the specified certificate. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeStream for usage and error information. +// API operation DescribeCertificate for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // @@ -4477,77 +5395,80 @@ func (c *IoT) DescribeStreamRequest(input *DescribeStreamInput) (req *request.Re // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DescribeStream(input *DescribeStreamInput) (*DescribeStreamOutput, error) { - req, out := c.DescribeStreamRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. 
+// +func (c *IoT) DescribeCertificate(input *DescribeCertificateInput) (*DescribeCertificateOutput, error) { + req, out := c.DescribeCertificateRequest(input) return out, req.Send() } -// DescribeStreamWithContext is the same as DescribeStream with the addition of +// DescribeCertificateWithContext is the same as DescribeCertificate with the addition of // the ability to pass a context and additional request options. // -// See DescribeStream for details on how to use this API operation. +// See DescribeCertificate for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeStreamWithContext(ctx aws.Context, input *DescribeStreamInput, opts ...request.Option) (*DescribeStreamOutput, error) { - req, out := c.DescribeStreamRequest(input) +func (c *IoT) DescribeCertificateWithContext(ctx aws.Context, input *DescribeCertificateInput, opts ...request.Option) (*DescribeCertificateOutput, error) { + req, out := c.DescribeCertificateRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeThing = "DescribeThing" +const opDescribeDefaultAuthorizer = "DescribeDefaultAuthorizer" -// DescribeThingRequest generates a "aws/request.Request" representing the -// client's request for the DescribeThing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeDefaultAuthorizerRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDefaultAuthorizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeThing for more information on using the DescribeThing +// See DescribeDefaultAuthorizer for more information on using the DescribeDefaultAuthorizer // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeThingRequest method. -// req, resp := client.DescribeThingRequest(params) +// // Example sending a request using the DescribeDefaultAuthorizerRequest method. +// req, resp := client.DescribeDefaultAuthorizerRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeThingRequest(input *DescribeThingInput) (req *request.Request, output *DescribeThingOutput) { +func (c *IoT) DescribeDefaultAuthorizerRequest(input *DescribeDefaultAuthorizerInput) (req *request.Request, output *DescribeDefaultAuthorizerOutput) { op := &request.Operation{ - Name: opDescribeThing, + Name: opDescribeDefaultAuthorizer, HTTPMethod: "GET", - HTTPPath: "/things/{thingName}", + HTTPPath: "/default-authorizer", } if input == nil { - input = &DescribeThingInput{} + input = &DescribeDefaultAuthorizerInput{} } - output = &DescribeThingOutput{} + output = &DescribeDefaultAuthorizerOutput{} req = c.newRequest(op, input, output) return } -// DescribeThing API operation for AWS IoT. 
+// DescribeDefaultAuthorizer API operation for AWS IoT. // -// Gets information about the specified thing. +// Describes the default authorizer. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeThing for usage and error information. +// API operation DescribeDefaultAuthorizer for usage and error information. // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" @@ -4568,255 +5489,243 @@ func (c *IoT) DescribeThingRequest(input *DescribeThingInput) (req *request.Requ // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DescribeThing(input *DescribeThingInput) (*DescribeThingOutput, error) { - req, out := c.DescribeThingRequest(input) +func (c *IoT) DescribeDefaultAuthorizer(input *DescribeDefaultAuthorizerInput) (*DescribeDefaultAuthorizerOutput, error) { + req, out := c.DescribeDefaultAuthorizerRequest(input) return out, req.Send() } -// DescribeThingWithContext is the same as DescribeThing with the addition of +// DescribeDefaultAuthorizerWithContext is the same as DescribeDefaultAuthorizer with the addition of // the ability to pass a context and additional request options. // -// See DescribeThing for details on how to use this API operation. +// See DescribeDefaultAuthorizer for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeThingWithContext(ctx aws.Context, input *DescribeThingInput, opts ...request.Option) (*DescribeThingOutput, error) { - req, out := c.DescribeThingRequest(input) +func (c *IoT) DescribeDefaultAuthorizerWithContext(ctx aws.Context, input *DescribeDefaultAuthorizerInput, opts ...request.Option) (*DescribeDefaultAuthorizerOutput, error) { + req, out := c.DescribeDefaultAuthorizerRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeThingGroup = "DescribeThingGroup" +const opDescribeEndpoint = "DescribeEndpoint" -// DescribeThingGroupRequest generates a "aws/request.Request" representing the -// client's request for the DescribeThingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeEndpointRequest generates a "aws/request.Request" representing the +// client's request for the DescribeEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeThingGroup for more information on using the DescribeThingGroup +// See DescribeEndpoint for more information on using the DescribeEndpoint // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeThingGroupRequest method. 
-// req, resp := client.DescribeThingGroupRequest(params) +// // Example sending a request using the DescribeEndpointRequest method. +// req, resp := client.DescribeEndpointRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeThingGroupRequest(input *DescribeThingGroupInput) (req *request.Request, output *DescribeThingGroupOutput) { +func (c *IoT) DescribeEndpointRequest(input *DescribeEndpointInput) (req *request.Request, output *DescribeEndpointOutput) { op := &request.Operation{ - Name: opDescribeThingGroup, + Name: opDescribeEndpoint, HTTPMethod: "GET", - HTTPPath: "/thing-groups/{thingGroupName}", + HTTPPath: "/endpoint", } if input == nil { - input = &DescribeThingGroupInput{} + input = &DescribeEndpointInput{} } - output = &DescribeThingGroupOutput{} + output = &DescribeEndpointOutput{} req = c.newRequest(op, input, output) return } -// DescribeThingGroup API operation for AWS IoT. +// DescribeEndpoint API operation for AWS IoT. // -// Describe a thing group. +// Returns a unique endpoint specific to the AWS account making the call. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeThingGroup for usage and error information. +// API operation DescribeEndpoint for usage and error information. // // Returned Error Codes: +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) DescribeThingGroup(input *DescribeThingGroupInput) (*DescribeThingGroupOutput, error) { - req, out := c.DescribeThingGroupRequest(input) +func (c *IoT) DescribeEndpoint(input *DescribeEndpointInput) (*DescribeEndpointOutput, error) { + req, out := c.DescribeEndpointRequest(input) return out, req.Send() } -// DescribeThingGroupWithContext is the same as DescribeThingGroup with the addition of +// DescribeEndpointWithContext is the same as DescribeEndpoint with the addition of // the ability to pass a context and additional request options. // -// See DescribeThingGroup for details on how to use this API operation. +// See DescribeEndpoint for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) DescribeThingGroupWithContext(ctx aws.Context, input *DescribeThingGroupInput, opts ...request.Option) (*DescribeThingGroupOutput, error) { - req, out := c.DescribeThingGroupRequest(input) +func (c *IoT) DescribeEndpointWithContext(ctx aws.Context, input *DescribeEndpointInput, opts ...request.Option) (*DescribeEndpointOutput, error) { + req, out := c.DescribeEndpointRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeThingRegistrationTask = "DescribeThingRegistrationTask" +const opDescribeEventConfigurations = "DescribeEventConfigurations" -// DescribeThingRegistrationTaskRequest generates a "aws/request.Request" representing the -// client's request for the DescribeThingRegistrationTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeEventConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeEventConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeThingRegistrationTask for more information on using the DescribeThingRegistrationTask +// See DescribeEventConfigurations for more information on using the DescribeEventConfigurations // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeThingRegistrationTaskRequest method. -// req, resp := client.DescribeThingRegistrationTaskRequest(params) +// // Example sending a request using the DescribeEventConfigurationsRequest method. +// req, resp := client.DescribeEventConfigurationsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeThingRegistrationTaskRequest(input *DescribeThingRegistrationTaskInput) (req *request.Request, output *DescribeThingRegistrationTaskOutput) { +func (c *IoT) DescribeEventConfigurationsRequest(input *DescribeEventConfigurationsInput) (req *request.Request, output *DescribeEventConfigurationsOutput) { op := &request.Operation{ - Name: opDescribeThingRegistrationTask, + Name: opDescribeEventConfigurations, HTTPMethod: "GET", - HTTPPath: "/thing-registration-tasks/{taskId}", + HTTPPath: "/event-configurations", } if input == nil { - input = &DescribeThingRegistrationTaskInput{} + input = &DescribeEventConfigurationsInput{} } - output = &DescribeThingRegistrationTaskOutput{} + output = &DescribeEventConfigurationsOutput{} req = c.newRequest(op, input, output) return } -// DescribeThingRegistrationTask API operation for AWS IoT. +// DescribeEventConfigurations API operation for AWS IoT. // -// Describes a bulk thing provisioning task. +// Describes event configurations. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeThingRegistrationTask for usage and error information. +// API operation DescribeEventConfigurations for usage and error information. 
// // Returned Error Codes: -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. +func (c *IoT) DescribeEventConfigurations(input *DescribeEventConfigurationsInput) (*DescribeEventConfigurationsOutput, error) { + req, out := c.DescribeEventConfigurationsRequest(input) + return out, req.Send() +} + +// DescribeEventConfigurationsWithContext is the same as DescribeEventConfigurations with the addition of +// the ability to pass a context and additional request options. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) DescribeThingRegistrationTask(input *DescribeThingRegistrationTaskInput) (*DescribeThingRegistrationTaskOutput, error) { - req, out := c.DescribeThingRegistrationTaskRequest(input) - return out, req.Send() -} - -// DescribeThingRegistrationTaskWithContext is the same as DescribeThingRegistrationTask with the addition of -// the ability to pass a context and additional request options. -// -// See DescribeThingRegistrationTask for details on how to use this API operation. +// See DescribeEventConfigurations for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeThingRegistrationTaskWithContext(ctx aws.Context, input *DescribeThingRegistrationTaskInput, opts ...request.Option) (*DescribeThingRegistrationTaskOutput, error) { - req, out := c.DescribeThingRegistrationTaskRequest(input) +func (c *IoT) DescribeEventConfigurationsWithContext(ctx aws.Context, input *DescribeEventConfigurationsInput, opts ...request.Option) (*DescribeEventConfigurationsOutput, error) { + req, out := c.DescribeEventConfigurationsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeThingType = "DescribeThingType" +const opDescribeIndex = "DescribeIndex" -// DescribeThingTypeRequest generates a "aws/request.Request" representing the -// client's request for the DescribeThingType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeIndexRequest generates a "aws/request.Request" representing the +// client's request for the DescribeIndex operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeThingType for more information on using the DescribeThingType +// See DescribeIndex for more information on using the DescribeIndex // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
// // -// // Example sending a request using the DescribeThingTypeRequest method. -// req, resp := client.DescribeThingTypeRequest(params) +// // Example sending a request using the DescribeIndexRequest method. +// req, resp := client.DescribeIndexRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DescribeThingTypeRequest(input *DescribeThingTypeInput) (req *request.Request, output *DescribeThingTypeOutput) { +func (c *IoT) DescribeIndexRequest(input *DescribeIndexInput) (req *request.Request, output *DescribeIndexOutput) { op := &request.Operation{ - Name: opDescribeThingType, + Name: opDescribeIndex, HTTPMethod: "GET", - HTTPPath: "/thing-types/{thingTypeName}", + HTTPPath: "/indices/{indexName}", } if input == nil { - input = &DescribeThingTypeInput{} + input = &DescribeIndexInput{} } - output = &DescribeThingTypeOutput{} + output = &DescribeIndexOutput{} req = c.newRequest(op, input, output) return } -// DescribeThingType API operation for AWS IoT. +// DescribeIndex API operation for AWS IoT. // -// Gets information about the specified thing type. +// Describes a search index. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DescribeThingType for usage and error information. +// API operation DescribeIndex for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -4832,273 +5741,252 @@ func (c *IoT) DescribeThingTypeRequest(input *DescribeThingTypeInput) (req *requ // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DescribeThingType(input *DescribeThingTypeInput) (*DescribeThingTypeOutput, error) { - req, out := c.DescribeThingTypeRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) DescribeIndex(input *DescribeIndexInput) (*DescribeIndexOutput, error) { + req, out := c.DescribeIndexRequest(input) return out, req.Send() } -// DescribeThingTypeWithContext is the same as DescribeThingType with the addition of +// DescribeIndexWithContext is the same as DescribeIndex with the addition of // the ability to pass a context and additional request options. // -// See DescribeThingType for details on how to use this API operation. +// See DescribeIndex for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DescribeThingTypeWithContext(ctx aws.Context, input *DescribeThingTypeInput, opts ...request.Option) (*DescribeThingTypeOutput, error) { - req, out := c.DescribeThingTypeRequest(input) +func (c *IoT) DescribeIndexWithContext(ctx aws.Context, input *DescribeIndexInput, opts ...request.Option) (*DescribeIndexOutput, error) { + req, out := c.DescribeIndexRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opDetachPolicy = "DetachPolicy" +const opDescribeJob = "DescribeJob" -// DetachPolicyRequest generates a "aws/request.Request" representing the -// client's request for the DetachPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeJobRequest generates a "aws/request.Request" representing the +// client's request for the DescribeJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DetachPolicy for more information on using the DetachPolicy +// See DescribeJob for more information on using the DescribeJob // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DetachPolicyRequest method. -// req, resp := client.DetachPolicyRequest(params) +// // Example sending a request using the DescribeJobRequest method. +// req, resp := client.DescribeJobRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DetachPolicyRequest(input *DetachPolicyInput) (req *request.Request, output *DetachPolicyOutput) { +func (c *IoT) DescribeJobRequest(input *DescribeJobInput) (req *request.Request, output *DescribeJobOutput) { op := &request.Operation{ - Name: opDetachPolicy, - HTTPMethod: "POST", - HTTPPath: "/target-policies/{policyName}", + Name: opDescribeJob, + HTTPMethod: "GET", + HTTPPath: "/jobs/{jobId}", } if input == nil { - input = &DetachPolicyInput{} + input = &DescribeJobInput{} } - output = &DetachPolicyOutput{} + output = &DescribeJobOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DetachPolicy API operation for AWS IoT. +// DescribeJob API operation for AWS IoT. // -// Detaches a policy from the specified target. +// Describes a job. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DetachPolicy for usage and error information. +// API operation DescribeJob for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -// * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. 
-// -func (c *IoT) DetachPolicy(input *DetachPolicyInput) (*DetachPolicyOutput, error) { - req, out := c.DetachPolicyRequest(input) +func (c *IoT) DescribeJob(input *DescribeJobInput) (*DescribeJobOutput, error) { + req, out := c.DescribeJobRequest(input) return out, req.Send() } -// DetachPolicyWithContext is the same as DetachPolicy with the addition of +// DescribeJobWithContext is the same as DescribeJob with the addition of // the ability to pass a context and additional request options. // -// See DetachPolicy for details on how to use this API operation. +// See DescribeJob for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DetachPolicyWithContext(ctx aws.Context, input *DetachPolicyInput, opts ...request.Option) (*DetachPolicyOutput, error) { - req, out := c.DetachPolicyRequest(input) +func (c *IoT) DescribeJobWithContext(ctx aws.Context, input *DescribeJobInput, opts ...request.Option) (*DescribeJobOutput, error) { + req, out := c.DescribeJobRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDetachPrincipalPolicy = "DetachPrincipalPolicy" +const opDescribeJobExecution = "DescribeJobExecution" -// DetachPrincipalPolicyRequest generates a "aws/request.Request" representing the -// client's request for the DetachPrincipalPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeJobExecutionRequest generates a "aws/request.Request" representing the +// client's request for the DescribeJobExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DetachPrincipalPolicy for more information on using the DetachPrincipalPolicy +// See DescribeJobExecution for more information on using the DescribeJobExecution // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DetachPrincipalPolicyRequest method. -// req, resp := client.DetachPrincipalPolicyRequest(params) +// // Example sending a request using the DescribeJobExecutionRequest method. 
+// req, resp := client.DescribeJobExecutionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DetachPrincipalPolicyRequest(input *DetachPrincipalPolicyInput) (req *request.Request, output *DetachPrincipalPolicyOutput) { - if c.Client.Config.Logger != nil { - c.Client.Config.Logger.Log("This operation, DetachPrincipalPolicy, has been deprecated") - } +func (c *IoT) DescribeJobExecutionRequest(input *DescribeJobExecutionInput) (req *request.Request, output *DescribeJobExecutionOutput) { op := &request.Operation{ - Name: opDetachPrincipalPolicy, - HTTPMethod: "DELETE", - HTTPPath: "/principal-policies/{policyName}", + Name: opDescribeJobExecution, + HTTPMethod: "GET", + HTTPPath: "/things/{thingName}/jobs/{jobId}", } if input == nil { - input = &DetachPrincipalPolicyInput{} + input = &DescribeJobExecutionInput{} } - output = &DetachPrincipalPolicyOutput{} + output = &DescribeJobExecutionOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DetachPrincipalPolicy API operation for AWS IoT. -// -// Removes the specified policy from the specified certificate. +// DescribeJobExecution API operation for AWS IoT. // -// Note: This API is deprecated. Please use DetachPolicy instead. +// Describes a job execution. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DetachPrincipalPolicy for usage and error information. +// API operation DescribeJobExecution for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -func (c *IoT) DetachPrincipalPolicy(input *DetachPrincipalPolicyInput) (*DetachPrincipalPolicyOutput, error) { - req, out := c.DetachPrincipalPolicyRequest(input) +func (c *IoT) DescribeJobExecution(input *DescribeJobExecutionInput) (*DescribeJobExecutionOutput, error) { + req, out := c.DescribeJobExecutionRequest(input) return out, req.Send() } -// DetachPrincipalPolicyWithContext is the same as DetachPrincipalPolicy with the addition of +// DescribeJobExecutionWithContext is the same as DescribeJobExecution with the addition of // the ability to pass a context and additional request options. // -// See DetachPrincipalPolicy for details on how to use this API operation. +// See DescribeJobExecution for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DetachPrincipalPolicyWithContext(ctx aws.Context, input *DetachPrincipalPolicyInput, opts ...request.Option) (*DetachPrincipalPolicyOutput, error) { - req, out := c.DetachPrincipalPolicyRequest(input) +func (c *IoT) DescribeJobExecutionWithContext(ctx aws.Context, input *DescribeJobExecutionInput, opts ...request.Option) (*DescribeJobExecutionOutput, error) { + req, out := c.DescribeJobExecutionRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDetachThingPrincipal = "DetachThingPrincipal" +const opDescribeRoleAlias = "DescribeRoleAlias" -// DetachThingPrincipalRequest generates a "aws/request.Request" representing the -// client's request for the DetachThingPrincipal operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeRoleAliasRequest generates a "aws/request.Request" representing the +// client's request for the DescribeRoleAlias operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DetachThingPrincipal for more information on using the DetachThingPrincipal +// See DescribeRoleAlias for more information on using the DescribeRoleAlias // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DetachThingPrincipalRequest method. -// req, resp := client.DetachThingPrincipalRequest(params) +// // Example sending a request using the DescribeRoleAliasRequest method. +// req, resp := client.DescribeRoleAliasRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DetachThingPrincipalRequest(input *DetachThingPrincipalInput) (req *request.Request, output *DetachThingPrincipalOutput) { +func (c *IoT) DescribeRoleAliasRequest(input *DescribeRoleAliasInput) (req *request.Request, output *DescribeRoleAliasOutput) { op := &request.Operation{ - Name: opDetachThingPrincipal, - HTTPMethod: "DELETE", - HTTPPath: "/things/{thingName}/principals", + Name: opDescribeRoleAlias, + HTTPMethod: "GET", + HTTPPath: "/role-aliases/{roleAlias}", } if input == nil { - input = &DetachThingPrincipalInput{} + input = &DescribeRoleAliasInput{} } - output = &DetachThingPrincipalOutput{} + output = &DescribeRoleAliasOutput{} req = c.newRequest(op, input, output) return } -// DetachThingPrincipal API operation for AWS IoT. +// DescribeRoleAlias API operation for AWS IoT. // -// Detaches the specified principal from the specified thing. +// Describes a role alias. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DetachThingPrincipal for usage and error information. +// API operation DescribeRoleAlias for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. 
-// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -5114,259 +6002,258 @@ func (c *IoT) DetachThingPrincipalRequest(input *DetachThingPrincipalInput) (req // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) DetachThingPrincipal(input *DetachThingPrincipalInput) (*DetachThingPrincipalOutput, error) { - req, out := c.DetachThingPrincipalRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) DescribeRoleAlias(input *DescribeRoleAliasInput) (*DescribeRoleAliasOutput, error) { + req, out := c.DescribeRoleAliasRequest(input) return out, req.Send() } -// DetachThingPrincipalWithContext is the same as DetachThingPrincipal with the addition of +// DescribeRoleAliasWithContext is the same as DescribeRoleAlias with the addition of // the ability to pass a context and additional request options. // -// See DetachThingPrincipal for details on how to use this API operation. +// See DescribeRoleAlias for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) DetachThingPrincipalWithContext(ctx aws.Context, input *DetachThingPrincipalInput, opts ...request.Option) (*DetachThingPrincipalOutput, error) { - req, out := c.DetachThingPrincipalRequest(input) +func (c *IoT) DescribeRoleAliasWithContext(ctx aws.Context, input *DescribeRoleAliasInput, opts ...request.Option) (*DescribeRoleAliasOutput, error) { + req, out := c.DescribeRoleAliasRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDisableTopicRule = "DisableTopicRule" +const opDescribeScheduledAudit = "DescribeScheduledAudit" -// DisableTopicRuleRequest generates a "aws/request.Request" representing the -// client's request for the DisableTopicRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeScheduledAuditRequest generates a "aws/request.Request" representing the +// client's request for the DescribeScheduledAudit operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DisableTopicRule for more information on using the DisableTopicRule +// See DescribeScheduledAudit for more information on using the DescribeScheduledAudit // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DisableTopicRuleRequest method. -// req, resp := client.DisableTopicRuleRequest(params) +// // Example sending a request using the DescribeScheduledAuditRequest method. 
+// req, resp := client.DescribeScheduledAuditRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) DisableTopicRuleRequest(input *DisableTopicRuleInput) (req *request.Request, output *DisableTopicRuleOutput) { +func (c *IoT) DescribeScheduledAuditRequest(input *DescribeScheduledAuditInput) (req *request.Request, output *DescribeScheduledAuditOutput) { op := &request.Operation{ - Name: opDisableTopicRule, - HTTPMethod: "POST", - HTTPPath: "/rules/{ruleName}/disable", + Name: opDescribeScheduledAudit, + HTTPMethod: "GET", + HTTPPath: "/audit/scheduledaudits/{scheduledAuditName}", } if input == nil { - input = &DisableTopicRuleInput{} + input = &DescribeScheduledAuditInput{} } - output = &DisableTopicRuleOutput{} + output = &DescribeScheduledAuditOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// DisableTopicRule API operation for AWS IoT. +// DescribeScheduledAudit API operation for AWS IoT. // -// Disables the rule. +// Gets information about a scheduled audit. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation DisableTopicRule for usage and error information. +// API operation DescribeScheduledAudit for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. // -func (c *IoT) DisableTopicRule(input *DisableTopicRuleInput) (*DisableTopicRuleOutput, error) { - req, out := c.DisableTopicRuleRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) DescribeScheduledAudit(input *DescribeScheduledAuditInput) (*DescribeScheduledAuditOutput, error) { + req, out := c.DescribeScheduledAuditRequest(input) return out, req.Send() } -// DisableTopicRuleWithContext is the same as DisableTopicRule with the addition of +// DescribeScheduledAuditWithContext is the same as DescribeScheduledAudit with the addition of // the ability to pass a context and additional request options. // -// See DisableTopicRule for details on how to use this API operation. +// See DescribeScheduledAudit for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) DisableTopicRuleWithContext(ctx aws.Context, input *DisableTopicRuleInput, opts ...request.Option) (*DisableTopicRuleOutput, error) { - req, out := c.DisableTopicRuleRequest(input) +func (c *IoT) DescribeScheduledAuditWithContext(ctx aws.Context, input *DescribeScheduledAuditInput, opts ...request.Option) (*DescribeScheduledAuditOutput, error) { + req, out := c.DescribeScheduledAuditRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opEnableTopicRule = "EnableTopicRule" +const opDescribeSecurityProfile = "DescribeSecurityProfile" -// EnableTopicRuleRequest generates a "aws/request.Request" representing the -// client's request for the EnableTopicRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeSecurityProfileRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSecurityProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See EnableTopicRule for more information on using the EnableTopicRule +// See DescribeSecurityProfile for more information on using the DescribeSecurityProfile // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the EnableTopicRuleRequest method. -// req, resp := client.EnableTopicRuleRequest(params) +// // Example sending a request using the DescribeSecurityProfileRequest method. +// req, resp := client.DescribeSecurityProfileRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) EnableTopicRuleRequest(input *EnableTopicRuleInput) (req *request.Request, output *EnableTopicRuleOutput) { +func (c *IoT) DescribeSecurityProfileRequest(input *DescribeSecurityProfileInput) (req *request.Request, output *DescribeSecurityProfileOutput) { op := &request.Operation{ - Name: opEnableTopicRule, - HTTPMethod: "POST", - HTTPPath: "/rules/{ruleName}/enable", + Name: opDescribeSecurityProfile, + HTTPMethod: "GET", + HTTPPath: "/security-profiles/{securityProfileName}", } if input == nil { - input = &EnableTopicRuleInput{} + input = &DescribeSecurityProfileInput{} } - output = &EnableTopicRuleOutput{} + output = &DescribeSecurityProfileOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// EnableTopicRule API operation for AWS IoT. +// DescribeSecurityProfile API operation for AWS IoT. // -// Enables the rule. +// Gets information about a Device Defender security profile. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation EnableTopicRule for usage and error information. +// API operation DescribeSecurityProfile for usage and error information. 
// // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. // -func (c *IoT) EnableTopicRule(input *EnableTopicRuleInput) (*EnableTopicRuleOutput, error) { - req, out := c.EnableTopicRuleRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) DescribeSecurityProfile(input *DescribeSecurityProfileInput) (*DescribeSecurityProfileOutput, error) { + req, out := c.DescribeSecurityProfileRequest(input) return out, req.Send() } -// EnableTopicRuleWithContext is the same as EnableTopicRule with the addition of +// DescribeSecurityProfileWithContext is the same as DescribeSecurityProfile with the addition of // the ability to pass a context and additional request options. // -// See EnableTopicRule for details on how to use this API operation. +// See DescribeSecurityProfile for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) EnableTopicRuleWithContext(ctx aws.Context, input *EnableTopicRuleInput, opts ...request.Option) (*EnableTopicRuleOutput, error) { - req, out := c.EnableTopicRuleRequest(input) +func (c *IoT) DescribeSecurityProfileWithContext(ctx aws.Context, input *DescribeSecurityProfileInput, opts ...request.Option) (*DescribeSecurityProfileOutput, error) { + req, out := c.DescribeSecurityProfileRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetEffectivePolicies = "GetEffectivePolicies" +const opDescribeStream = "DescribeStream" -// GetEffectivePoliciesRequest generates a "aws/request.Request" representing the -// client's request for the GetEffectivePolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeStreamRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetEffectivePolicies for more information on using the GetEffectivePolicies +// See DescribeStream for more information on using the DescribeStream // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetEffectivePoliciesRequest method. 
-// req, resp := client.GetEffectivePoliciesRequest(params) +// // Example sending a request using the DescribeStreamRequest method. +// req, resp := client.DescribeStreamRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetEffectivePoliciesRequest(input *GetEffectivePoliciesInput) (req *request.Request, output *GetEffectivePoliciesOutput) { +func (c *IoT) DescribeStreamRequest(input *DescribeStreamInput) (req *request.Request, output *DescribeStreamOutput) { op := &request.Operation{ - Name: opGetEffectivePolicies, - HTTPMethod: "POST", - HTTPPath: "/effective-policies", + Name: opDescribeStream, + HTTPMethod: "GET", + HTTPPath: "/streams/{streamId}", } if input == nil { - input = &GetEffectivePoliciesInput{} + input = &DescribeStreamInput{} } - output = &GetEffectivePoliciesOutput{} + output = &DescribeStreamOutput{} req = c.newRequest(op, input, output) return } -// GetEffectivePolicies API operation for AWS IoT. +// DescribeStream API operation for AWS IoT. // -// Gets effective policies. +// Gets information about a stream. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetEffectivePolicies for usage and error information. +// API operation DescribeStream for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // @@ -5379,82 +6266,82 @@ func (c *IoT) GetEffectivePoliciesRequest(input *GetEffectivePoliciesInput) (req // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. -// -func (c *IoT) GetEffectivePolicies(input *GetEffectivePoliciesInput) (*GetEffectivePoliciesOutput, error) { - req, out := c.GetEffectivePoliciesRequest(input) +func (c *IoT) DescribeStream(input *DescribeStreamInput) (*DescribeStreamOutput, error) { + req, out := c.DescribeStreamRequest(input) return out, req.Send() } -// GetEffectivePoliciesWithContext is the same as GetEffectivePolicies with the addition of +// DescribeStreamWithContext is the same as DescribeStream with the addition of // the ability to pass a context and additional request options. // -// See GetEffectivePolicies for details on how to use this API operation. +// See DescribeStream for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) GetEffectivePoliciesWithContext(ctx aws.Context, input *GetEffectivePoliciesInput, opts ...request.Option) (*GetEffectivePoliciesOutput, error) { - req, out := c.GetEffectivePoliciesRequest(input) +func (c *IoT) DescribeStreamWithContext(ctx aws.Context, input *DescribeStreamInput, opts ...request.Option) (*DescribeStreamOutput, error) { + req, out := c.DescribeStreamRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetIndexingConfiguration = "GetIndexingConfiguration" +const opDescribeThing = "DescribeThing" -// GetIndexingConfigurationRequest generates a "aws/request.Request" representing the -// client's request for the GetIndexingConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeThingRequest generates a "aws/request.Request" representing the +// client's request for the DescribeThing operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetIndexingConfiguration for more information on using the GetIndexingConfiguration +// See DescribeThing for more information on using the DescribeThing // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetIndexingConfigurationRequest method. -// req, resp := client.GetIndexingConfigurationRequest(params) +// // Example sending a request using the DescribeThingRequest method. +// req, resp := client.DescribeThingRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetIndexingConfigurationRequest(input *GetIndexingConfigurationInput) (req *request.Request, output *GetIndexingConfigurationOutput) { +func (c *IoT) DescribeThingRequest(input *DescribeThingInput) (req *request.Request, output *DescribeThingOutput) { op := &request.Operation{ - Name: opGetIndexingConfiguration, + Name: opDescribeThing, HTTPMethod: "GET", - HTTPPath: "/indexing/config", + HTTPPath: "/things/{thingName}", } if input == nil { - input = &GetIndexingConfigurationInput{} + input = &DescribeThingInput{} } - output = &GetIndexingConfigurationOutput{} + output = &DescribeThingOutput{} req = c.newRequest(op, input, output) return } -// GetIndexingConfiguration API operation for AWS IoT. +// DescribeThing API operation for AWS IoT. // -// Gets the search configuration. +// Gets information about the specified thing. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetIndexingConfiguration for usage and error information. +// API operation DescribeThing for usage and error information. // // Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. 
// @@ -5470,246 +6357,255 @@ func (c *IoT) GetIndexingConfigurationRequest(input *GetIndexingConfigurationInp // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) GetIndexingConfiguration(input *GetIndexingConfigurationInput) (*GetIndexingConfigurationOutput, error) { - req, out := c.GetIndexingConfigurationRequest(input) +func (c *IoT) DescribeThing(input *DescribeThingInput) (*DescribeThingOutput, error) { + req, out := c.DescribeThingRequest(input) return out, req.Send() } -// GetIndexingConfigurationWithContext is the same as GetIndexingConfiguration with the addition of +// DescribeThingWithContext is the same as DescribeThing with the addition of // the ability to pass a context and additional request options. // -// See GetIndexingConfiguration for details on how to use this API operation. +// See DescribeThing for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) GetIndexingConfigurationWithContext(ctx aws.Context, input *GetIndexingConfigurationInput, opts ...request.Option) (*GetIndexingConfigurationOutput, error) { - req, out := c.GetIndexingConfigurationRequest(input) +func (c *IoT) DescribeThingWithContext(ctx aws.Context, input *DescribeThingInput, opts ...request.Option) (*DescribeThingOutput, error) { + req, out := c.DescribeThingRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetJobDocument = "GetJobDocument" +const opDescribeThingGroup = "DescribeThingGroup" -// GetJobDocumentRequest generates a "aws/request.Request" representing the -// client's request for the GetJobDocument operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeThingGroupRequest generates a "aws/request.Request" representing the +// client's request for the DescribeThingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetJobDocument for more information on using the GetJobDocument +// See DescribeThingGroup for more information on using the DescribeThingGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetJobDocumentRequest method. -// req, resp := client.GetJobDocumentRequest(params) +// // Example sending a request using the DescribeThingGroupRequest method. 
+// req, resp := client.DescribeThingGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetJobDocumentRequest(input *GetJobDocumentInput) (req *request.Request, output *GetJobDocumentOutput) { +func (c *IoT) DescribeThingGroupRequest(input *DescribeThingGroupInput) (req *request.Request, output *DescribeThingGroupOutput) { op := &request.Operation{ - Name: opGetJobDocument, + Name: opDescribeThingGroup, HTTPMethod: "GET", - HTTPPath: "/jobs/{jobId}/job-document", + HTTPPath: "/thing-groups/{thingGroupName}", } if input == nil { - input = &GetJobDocumentInput{} + input = &DescribeThingGroupInput{} } - output = &GetJobDocumentOutput{} + output = &DescribeThingGroupOutput{} req = c.newRequest(op, input, output) return } -// GetJobDocument API operation for AWS IoT. +// DescribeThingGroup API operation for AWS IoT. // -// Gets a job document. +// Describe a thing group. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetJobDocument for usage and error information. +// API operation DescribeThingGroup for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. // -func (c *IoT) GetJobDocument(input *GetJobDocumentInput) (*GetJobDocumentOutput, error) { - req, out := c.GetJobDocumentRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) DescribeThingGroup(input *DescribeThingGroupInput) (*DescribeThingGroupOutput, error) { + req, out := c.DescribeThingGroupRequest(input) return out, req.Send() } -// GetJobDocumentWithContext is the same as GetJobDocument with the addition of +// DescribeThingGroupWithContext is the same as DescribeThingGroup with the addition of // the ability to pass a context and additional request options. // -// See GetJobDocument for details on how to use this API operation. +// See DescribeThingGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) GetJobDocumentWithContext(ctx aws.Context, input *GetJobDocumentInput, opts ...request.Option) (*GetJobDocumentOutput, error) { - req, out := c.GetJobDocumentRequest(input) +func (c *IoT) DescribeThingGroupWithContext(ctx aws.Context, input *DescribeThingGroupInput, opts ...request.Option) (*DescribeThingGroupOutput, error) { + req, out := c.DescribeThingGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opGetLoggingOptions = "GetLoggingOptions" +const opDescribeThingRegistrationTask = "DescribeThingRegistrationTask" -// GetLoggingOptionsRequest generates a "aws/request.Request" representing the -// client's request for the GetLoggingOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeThingRegistrationTaskRequest generates a "aws/request.Request" representing the +// client's request for the DescribeThingRegistrationTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetLoggingOptions for more information on using the GetLoggingOptions +// See DescribeThingRegistrationTask for more information on using the DescribeThingRegistrationTask // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetLoggingOptionsRequest method. -// req, resp := client.GetLoggingOptionsRequest(params) +// // Example sending a request using the DescribeThingRegistrationTaskRequest method. +// req, resp := client.DescribeThingRegistrationTaskRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetLoggingOptionsRequest(input *GetLoggingOptionsInput) (req *request.Request, output *GetLoggingOptionsOutput) { +func (c *IoT) DescribeThingRegistrationTaskRequest(input *DescribeThingRegistrationTaskInput) (req *request.Request, output *DescribeThingRegistrationTaskOutput) { op := &request.Operation{ - Name: opGetLoggingOptions, + Name: opDescribeThingRegistrationTask, HTTPMethod: "GET", - HTTPPath: "/loggingOptions", + HTTPPath: "/thing-registration-tasks/{taskId}", } if input == nil { - input = &GetLoggingOptionsInput{} + input = &DescribeThingRegistrationTaskInput{} } - output = &GetLoggingOptionsOutput{} + output = &DescribeThingRegistrationTaskOutput{} req = c.newRequest(op, input, output) return } -// GetLoggingOptions API operation for AWS IoT. +// DescribeThingRegistrationTask API operation for AWS IoT. // -// Gets the logging options. +// Describes a bulk thing provisioning task. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetLoggingOptions for usage and error information. +// API operation DescribeThingRegistrationTask for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. 
// -func (c *IoT) GetLoggingOptions(input *GetLoggingOptionsInput) (*GetLoggingOptionsOutput, error) { - req, out := c.GetLoggingOptionsRequest(input) +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) DescribeThingRegistrationTask(input *DescribeThingRegistrationTaskInput) (*DescribeThingRegistrationTaskOutput, error) { + req, out := c.DescribeThingRegistrationTaskRequest(input) return out, req.Send() } -// GetLoggingOptionsWithContext is the same as GetLoggingOptions with the addition of +// DescribeThingRegistrationTaskWithContext is the same as DescribeThingRegistrationTask with the addition of // the ability to pass a context and additional request options. // -// See GetLoggingOptions for details on how to use this API operation. +// See DescribeThingRegistrationTask for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) GetLoggingOptionsWithContext(ctx aws.Context, input *GetLoggingOptionsInput, opts ...request.Option) (*GetLoggingOptionsOutput, error) { - req, out := c.GetLoggingOptionsRequest(input) +func (c *IoT) DescribeThingRegistrationTaskWithContext(ctx aws.Context, input *DescribeThingRegistrationTaskInput, opts ...request.Option) (*DescribeThingRegistrationTaskOutput, error) { + req, out := c.DescribeThingRegistrationTaskRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetOTAUpdate = "GetOTAUpdate" +const opDescribeThingType = "DescribeThingType" -// GetOTAUpdateRequest generates a "aws/request.Request" representing the -// client's request for the GetOTAUpdate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeThingTypeRequest generates a "aws/request.Request" representing the +// client's request for the DescribeThingType operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetOTAUpdate for more information on using the GetOTAUpdate +// See DescribeThingType for more information on using the DescribeThingType // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetOTAUpdateRequest method. -// req, resp := client.GetOTAUpdateRequest(params) +// // Example sending a request using the DescribeThingTypeRequest method. 
+// req, resp := client.DescribeThingTypeRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetOTAUpdateRequest(input *GetOTAUpdateInput) (req *request.Request, output *GetOTAUpdateOutput) { +func (c *IoT) DescribeThingTypeRequest(input *DescribeThingTypeInput) (req *request.Request, output *DescribeThingTypeOutput) { op := &request.Operation{ - Name: opGetOTAUpdate, + Name: opDescribeThingType, HTTPMethod: "GET", - HTTPPath: "/otaUpdates/{otaUpdateId}", + HTTPPath: "/thing-types/{thingTypeName}", } if input == nil { - input = &GetOTAUpdateInput{} + input = &DescribeThingTypeInput{} } - output = &GetOTAUpdateOutput{} + output = &DescribeThingTypeOutput{} req = c.newRequest(op, input, output) return } -// GetOTAUpdate API operation for AWS IoT. +// DescribeThingType API operation for AWS IoT. // -// Gets an OTA update. +// Gets information about the specified thing type. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetOTAUpdate for usage and error information. +// API operation DescribeThingType for usage and error information. // // Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -5719,92 +6615,87 @@ func (c *IoT) GetOTAUpdateRequest(input *GetOTAUpdateInput) (req *request.Reques // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. // -func (c *IoT) GetOTAUpdate(input *GetOTAUpdateInput) (*GetOTAUpdateOutput, error) { - req, out := c.GetOTAUpdateRequest(input) +func (c *IoT) DescribeThingType(input *DescribeThingTypeInput) (*DescribeThingTypeOutput, error) { + req, out := c.DescribeThingTypeRequest(input) return out, req.Send() } -// GetOTAUpdateWithContext is the same as GetOTAUpdate with the addition of +// DescribeThingTypeWithContext is the same as DescribeThingType with the addition of // the ability to pass a context and additional request options. // -// See GetOTAUpdate for details on how to use this API operation. +// See DescribeThingType for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) GetOTAUpdateWithContext(ctx aws.Context, input *GetOTAUpdateInput, opts ...request.Option) (*GetOTAUpdateOutput, error) { - req, out := c.GetOTAUpdateRequest(input) +func (c *IoT) DescribeThingTypeWithContext(ctx aws.Context, input *DescribeThingTypeInput, opts ...request.Option) (*DescribeThingTypeOutput, error) { + req, out := c.DescribeThingTypeRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetPolicy = "GetPolicy" +const opDetachPolicy = "DetachPolicy" -// GetPolicyRequest generates a "aws/request.Request" representing the -// client's request for the GetPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DetachPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DetachPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetPolicy for more information on using the GetPolicy +// See DetachPolicy for more information on using the DetachPolicy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetPolicyRequest method. -// req, resp := client.GetPolicyRequest(params) +// // Example sending a request using the DetachPolicyRequest method. +// req, resp := client.DetachPolicyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetPolicyRequest(input *GetPolicyInput) (req *request.Request, output *GetPolicyOutput) { +func (c *IoT) DetachPolicyRequest(input *DetachPolicyInput) (req *request.Request, output *DetachPolicyOutput) { op := &request.Operation{ - Name: opGetPolicy, - HTTPMethod: "GET", - HTTPPath: "/policies/{policyName}", + Name: opDetachPolicy, + HTTPMethod: "POST", + HTTPPath: "/target-policies/{policyName}", } if input == nil { - input = &GetPolicyInput{} + input = &DetachPolicyInput{} } - output = &GetPolicyOutput{} + output = &DetachPolicyOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// GetPolicy API operation for AWS IoT. +// DetachPolicy API operation for AWS IoT. // -// Gets information about the specified policy with the policy document of the -// default version. +// Detaches a policy from the specified target. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetPolicy for usage and error information. +// API operation DetachPolicy for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. 
// @@ -5820,77 +6711,89 @@ func (c *IoT) GetPolicyRequest(input *GetPolicyInput) (req *request.Request, out // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) GetPolicy(input *GetPolicyInput) (*GetPolicyOutput, error) { - req, out := c.GetPolicyRequest(input) +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +func (c *IoT) DetachPolicy(input *DetachPolicyInput) (*DetachPolicyOutput, error) { + req, out := c.DetachPolicyRequest(input) return out, req.Send() } -// GetPolicyWithContext is the same as GetPolicy with the addition of +// DetachPolicyWithContext is the same as DetachPolicy with the addition of // the ability to pass a context and additional request options. // -// See GetPolicy for details on how to use this API operation. +// See DetachPolicy for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) GetPolicyWithContext(ctx aws.Context, input *GetPolicyInput, opts ...request.Option) (*GetPolicyOutput, error) { - req, out := c.GetPolicyRequest(input) +func (c *IoT) DetachPolicyWithContext(ctx aws.Context, input *DetachPolicyInput, opts ...request.Option) (*DetachPolicyOutput, error) { + req, out := c.DetachPolicyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetPolicyVersion = "GetPolicyVersion" +const opDetachPrincipalPolicy = "DetachPrincipalPolicy" -// GetPolicyVersionRequest generates a "aws/request.Request" representing the -// client's request for the GetPolicyVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DetachPrincipalPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DetachPrincipalPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetPolicyVersion for more information on using the GetPolicyVersion +// See DetachPrincipalPolicy for more information on using the DetachPrincipalPolicy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetPolicyVersionRequest method. -// req, resp := client.GetPolicyVersionRequest(params) +// // Example sending a request using the DetachPrincipalPolicyRequest method. 
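// A minimal usage sketch of the DetachPolicy operation introduced above, which supersedes
// the deprecated DetachPrincipalPolicy (illustrative only, not generated SDK code). The
// policy name and target ARN are placeholders.
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iot"
)

func detachPolicy(client *iot.IoT) error {
	_, err := client.DetachPolicy(&iot.DetachPolicyInput{
		PolicyName: aws.String("example-policy"),                                  // placeholder
		Target:     aws.String("arn:aws:iot:us-east-1:123456789012:cert/example"), // placeholder target ARN
	})
	return err
}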
+// req, resp := client.DetachPrincipalPolicyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetPolicyVersionRequest(input *GetPolicyVersionInput) (req *request.Request, output *GetPolicyVersionOutput) { +// +// Deprecated: DetachPrincipalPolicy has been deprecated +func (c *IoT) DetachPrincipalPolicyRequest(input *DetachPrincipalPolicyInput) (req *request.Request, output *DetachPrincipalPolicyOutput) { + if c.Client.Config.Logger != nil { + c.Client.Config.Logger.Log("This operation, DetachPrincipalPolicy, has been deprecated") + } op := &request.Operation{ - Name: opGetPolicyVersion, - HTTPMethod: "GET", - HTTPPath: "/policies/{policyName}/version/{policyVersionId}", + Name: opDetachPrincipalPolicy, + HTTPMethod: "DELETE", + HTTPPath: "/principal-policies/{policyName}", } if input == nil { - input = &GetPolicyVersionInput{} + input = &DetachPrincipalPolicyInput{} } - output = &GetPolicyVersionOutput{} + output = &DetachPrincipalPolicyOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// GetPolicyVersion API operation for AWS IoT. +// DetachPrincipalPolicy API operation for AWS IoT. // -// Gets information about the specified policy version. +// Removes the specified policy from the specified certificate. +// +// Note: This API is deprecated. Please use DetachPolicy instead. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetPolicyVersion for usage and error information. +// API operation DetachPrincipalPolicy for usage and error information. // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" @@ -5911,250 +6814,263 @@ func (c *IoT) GetPolicyVersionRequest(input *GetPolicyVersionInput) (req *reques // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) GetPolicyVersion(input *GetPolicyVersionInput) (*GetPolicyVersionOutput, error) { - req, out := c.GetPolicyVersionRequest(input) +// +// Deprecated: DetachPrincipalPolicy has been deprecated +func (c *IoT) DetachPrincipalPolicy(input *DetachPrincipalPolicyInput) (*DetachPrincipalPolicyOutput, error) { + req, out := c.DetachPrincipalPolicyRequest(input) return out, req.Send() } -// GetPolicyVersionWithContext is the same as GetPolicyVersion with the addition of +// DetachPrincipalPolicyWithContext is the same as DetachPrincipalPolicy with the addition of // the ability to pass a context and additional request options. // -// See GetPolicyVersion for details on how to use this API operation. +// See DetachPrincipalPolicy for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) GetPolicyVersionWithContext(ctx aws.Context, input *GetPolicyVersionInput, opts ...request.Option) (*GetPolicyVersionOutput, error) { - req, out := c.GetPolicyVersionRequest(input) +// +// Deprecated: DetachPrincipalPolicyWithContext has been deprecated +func (c *IoT) DetachPrincipalPolicyWithContext(ctx aws.Context, input *DetachPrincipalPolicyInput, opts ...request.Option) (*DetachPrincipalPolicyOutput, error) { + req, out := c.DetachPrincipalPolicyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetRegistrationCode = "GetRegistrationCode" +const opDetachSecurityProfile = "DetachSecurityProfile" -// GetRegistrationCodeRequest generates a "aws/request.Request" representing the -// client's request for the GetRegistrationCode operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DetachSecurityProfileRequest generates a "aws/request.Request" representing the +// client's request for the DetachSecurityProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetRegistrationCode for more information on using the GetRegistrationCode +// See DetachSecurityProfile for more information on using the DetachSecurityProfile // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetRegistrationCodeRequest method. -// req, resp := client.GetRegistrationCodeRequest(params) +// // Example sending a request using the DetachSecurityProfileRequest method. +// req, resp := client.DetachSecurityProfileRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetRegistrationCodeRequest(input *GetRegistrationCodeInput) (req *request.Request, output *GetRegistrationCodeOutput) { +func (c *IoT) DetachSecurityProfileRequest(input *DetachSecurityProfileInput) (req *request.Request, output *DetachSecurityProfileOutput) { op := &request.Operation{ - Name: opGetRegistrationCode, - HTTPMethod: "GET", - HTTPPath: "/registrationcode", + Name: opDetachSecurityProfile, + HTTPMethod: "DELETE", + HTTPPath: "/security-profiles/{securityProfileName}/targets", } if input == nil { - input = &GetRegistrationCodeInput{} + input = &DetachSecurityProfileInput{} } - output = &GetRegistrationCodeOutput{} + output = &DetachSecurityProfileOutput{} req = c.newRequest(op, input, output) return } -// GetRegistrationCode API operation for AWS IoT. +// DetachSecurityProfile API operation for AWS IoT. // -// Gets a registration code used to register a CA certificate with AWS IoT. +// Disassociates a Device Defender security profile from a thing group or from +// this account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetRegistrationCode for usage and error information. +// API operation DetachSecurityProfile for usage and error information. 
// // Returned Error Codes: -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. // // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. -// -func (c *IoT) GetRegistrationCode(input *GetRegistrationCodeInput) (*GetRegistrationCodeOutput, error) { - req, out := c.GetRegistrationCodeRequest(input) +func (c *IoT) DetachSecurityProfile(input *DetachSecurityProfileInput) (*DetachSecurityProfileOutput, error) { + req, out := c.DetachSecurityProfileRequest(input) return out, req.Send() } -// GetRegistrationCodeWithContext is the same as GetRegistrationCode with the addition of +// DetachSecurityProfileWithContext is the same as DetachSecurityProfile with the addition of // the ability to pass a context and additional request options. // -// See GetRegistrationCode for details on how to use this API operation. +// See DetachSecurityProfile for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) GetRegistrationCodeWithContext(ctx aws.Context, input *GetRegistrationCodeInput, opts ...request.Option) (*GetRegistrationCodeOutput, error) { - req, out := c.GetRegistrationCodeRequest(input) +func (c *IoT) DetachSecurityProfileWithContext(ctx aws.Context, input *DetachSecurityProfileInput, opts ...request.Option) (*DetachSecurityProfileOutput, error) { + req, out := c.DetachSecurityProfileRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetTopicRule = "GetTopicRule" +const opDetachThingPrincipal = "DetachThingPrincipal" -// GetTopicRuleRequest generates a "aws/request.Request" representing the -// client's request for the GetTopicRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DetachThingPrincipalRequest generates a "aws/request.Request" representing the +// client's request for the DetachThingPrincipal operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetTopicRule for more information on using the GetTopicRule +// See DetachThingPrincipal for more information on using the DetachThingPrincipal // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetTopicRuleRequest method. 
-// req, resp := client.GetTopicRuleRequest(params) +// // Example sending a request using the DetachThingPrincipalRequest method. +// req, resp := client.DetachThingPrincipalRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetTopicRuleRequest(input *GetTopicRuleInput) (req *request.Request, output *GetTopicRuleOutput) { +func (c *IoT) DetachThingPrincipalRequest(input *DetachThingPrincipalInput) (req *request.Request, output *DetachThingPrincipalOutput) { op := &request.Operation{ - Name: opGetTopicRule, - HTTPMethod: "GET", - HTTPPath: "/rules/{ruleName}", + Name: opDetachThingPrincipal, + HTTPMethod: "DELETE", + HTTPPath: "/things/{thingName}/principals", } if input == nil { - input = &GetTopicRuleInput{} + input = &DetachThingPrincipalInput{} } - output = &GetTopicRuleOutput{} + output = &DetachThingPrincipalOutput{} req = c.newRequest(op, input, output) return } -// GetTopicRule API operation for AWS IoT. +// DetachThingPrincipal API operation for AWS IoT. // -// Gets information about the rule. +// Detaches the specified principal from the specified thing. +// +// This call is asynchronous. It might take several seconds for the detachment +// to propagate. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetTopicRule for usage and error information. +// API operation DetachThingPrincipal for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. // // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // -func (c *IoT) GetTopicRule(input *GetTopicRuleInput) (*GetTopicRuleOutput, error) { - req, out := c.GetTopicRuleRequest(input) +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) DetachThingPrincipal(input *DetachThingPrincipalInput) (*DetachThingPrincipalOutput, error) { + req, out := c.DetachThingPrincipalRequest(input) return out, req.Send() } -// GetTopicRuleWithContext is the same as GetTopicRule with the addition of +// DetachThingPrincipalWithContext is the same as DetachThingPrincipal with the addition of // the ability to pass a context and additional request options. // -// See GetTopicRule for details on how to use this API operation. +// See DetachThingPrincipal for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
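// A minimal usage sketch of DetachThingPrincipal as documented above (illustrative only,
// not generated SDK code). The call is asynchronous on the service side, so a successful
// return may precede the detachment taking effect. Thing name and principal are placeholders.
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iot"
)

func detachThingPrincipal(client *iot.IoT) error {
	_, err := client.DetachThingPrincipal(&iot.DetachThingPrincipalInput{
		ThingName: aws.String("example-thing"),                                   // placeholder
		Principal: aws.String("arn:aws:iot:us-east-1:123456789012:cert/example"), // placeholder certificate ARN
	})
	return err
}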
-func (c *IoT) GetTopicRuleWithContext(ctx aws.Context, input *GetTopicRuleInput, opts ...request.Option) (*GetTopicRuleOutput, error) { - req, out := c.GetTopicRuleRequest(input) +func (c *IoT) DetachThingPrincipalWithContext(ctx aws.Context, input *DetachThingPrincipalInput, opts ...request.Option) (*DetachThingPrincipalOutput, error) { + req, out := c.DetachThingPrincipalRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetV2LoggingOptions = "GetV2LoggingOptions" +const opDisableTopicRule = "DisableTopicRule" -// GetV2LoggingOptionsRequest generates a "aws/request.Request" representing the -// client's request for the GetV2LoggingOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DisableTopicRuleRequest generates a "aws/request.Request" representing the +// client's request for the DisableTopicRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetV2LoggingOptions for more information on using the GetV2LoggingOptions +// See DisableTopicRule for more information on using the DisableTopicRule // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetV2LoggingOptionsRequest method. -// req, resp := client.GetV2LoggingOptionsRequest(params) +// // Example sending a request using the DisableTopicRuleRequest method. +// req, resp := client.DisableTopicRuleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) GetV2LoggingOptionsRequest(input *GetV2LoggingOptionsInput) (req *request.Request, output *GetV2LoggingOptionsOutput) { +func (c *IoT) DisableTopicRuleRequest(input *DisableTopicRuleInput) (req *request.Request, output *DisableTopicRuleOutput) { op := &request.Operation{ - Name: opGetV2LoggingOptions, - HTTPMethod: "GET", - HTTPPath: "/v2LoggingOptions", - } + Name: opDisableTopicRule, + HTTPMethod: "POST", + HTTPPath: "/rules/{ruleName}/disable", + } if input == nil { - input = &GetV2LoggingOptionsInput{} + input = &DisableTopicRuleInput{} } - output = &GetV2LoggingOptionsOutput{} + output = &DisableTopicRuleOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// GetV2LoggingOptions API operation for AWS IoT. +// DisableTopicRule API operation for AWS IoT. // -// Gets the fine grained logging options. +// Disables the rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation GetV2LoggingOptions for usage and error information. +// API operation DisableTopicRule for usage and error information. 
// // Returned Error Codes: // * ErrCodeInternalException "InternalException" @@ -6166,173 +7082,181 @@ func (c *IoT) GetV2LoggingOptionsRequest(input *GetV2LoggingOptionsInput) (req * // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -func (c *IoT) GetV2LoggingOptions(input *GetV2LoggingOptionsInput) (*GetV2LoggingOptionsOutput, error) { - req, out := c.GetV2LoggingOptionsRequest(input) +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeConflictingResourceUpdateException "ConflictingResourceUpdateException" +// A conflicting resource update exception. This exception is thrown when two +// pending updates cause a conflict. +// +func (c *IoT) DisableTopicRule(input *DisableTopicRuleInput) (*DisableTopicRuleOutput, error) { + req, out := c.DisableTopicRuleRequest(input) return out, req.Send() } -// GetV2LoggingOptionsWithContext is the same as GetV2LoggingOptions with the addition of +// DisableTopicRuleWithContext is the same as DisableTopicRule with the addition of // the ability to pass a context and additional request options. // -// See GetV2LoggingOptions for details on how to use this API operation. +// See DisableTopicRule for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) GetV2LoggingOptionsWithContext(ctx aws.Context, input *GetV2LoggingOptionsInput, opts ...request.Option) (*GetV2LoggingOptionsOutput, error) { - req, out := c.GetV2LoggingOptionsRequest(input) +func (c *IoT) DisableTopicRuleWithContext(ctx aws.Context, input *DisableTopicRuleInput, opts ...request.Option) (*DisableTopicRuleOutput, error) { + req, out := c.DisableTopicRuleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListAttachedPolicies = "ListAttachedPolicies" +const opEnableTopicRule = "EnableTopicRule" -// ListAttachedPoliciesRequest generates a "aws/request.Request" representing the -// client's request for the ListAttachedPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// EnableTopicRuleRequest generates a "aws/request.Request" representing the +// client's request for the EnableTopicRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListAttachedPolicies for more information on using the ListAttachedPolicies +// See EnableTopicRule for more information on using the EnableTopicRule // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListAttachedPoliciesRequest method. -// req, resp := client.ListAttachedPoliciesRequest(params) +// // Example sending a request using the EnableTopicRuleRequest method. 
+// req, resp := client.EnableTopicRuleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListAttachedPoliciesRequest(input *ListAttachedPoliciesInput) (req *request.Request, output *ListAttachedPoliciesOutput) { +func (c *IoT) EnableTopicRuleRequest(input *EnableTopicRuleInput) (req *request.Request, output *EnableTopicRuleOutput) { op := &request.Operation{ - Name: opListAttachedPolicies, + Name: opEnableTopicRule, HTTPMethod: "POST", - HTTPPath: "/attached-policies/{target}", + HTTPPath: "/rules/{ruleName}/enable", } if input == nil { - input = &ListAttachedPoliciesInput{} + input = &EnableTopicRuleInput{} } - output = &ListAttachedPoliciesOutput{} + output = &EnableTopicRuleOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// ListAttachedPolicies API operation for AWS IoT. +// EnableTopicRule API operation for AWS IoT. // -// Lists the policies attached to the specified thing group. +// Enables the rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListAttachedPolicies for usage and error information. +// API operation EnableTopicRule for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. // -// * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. +// * ErrCodeConflictingResourceUpdateException "ConflictingResourceUpdateException" +// A conflicting resource update exception. This exception is thrown when two +// pending updates cause a conflict. // -func (c *IoT) ListAttachedPolicies(input *ListAttachedPoliciesInput) (*ListAttachedPoliciesOutput, error) { - req, out := c.ListAttachedPoliciesRequest(input) +func (c *IoT) EnableTopicRule(input *EnableTopicRuleInput) (*EnableTopicRuleOutput, error) { + req, out := c.EnableTopicRuleRequest(input) return out, req.Send() } -// ListAttachedPoliciesWithContext is the same as ListAttachedPolicies with the addition of +// EnableTopicRuleWithContext is the same as EnableTopicRule with the addition of // the ability to pass a context and additional request options. // -// See ListAttachedPolicies for details on how to use this API operation. +// See EnableTopicRule for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
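// A minimal usage sketch of toggling a topic rule with the DisableTopicRule and
// EnableTopicRule operations documented above (illustrative only, not generated SDK code);
// the rule name is a placeholder.
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iot"
)

func toggleTopicRule(client *iot.IoT, enable bool) error {
	ruleName := aws.String("example_rule") // placeholder
	if enable {
		_, err := client.EnableTopicRule(&iot.EnableTopicRuleInput{RuleName: ruleName})
		return err
	}
	_, err := client.DisableTopicRule(&iot.DisableTopicRuleInput{RuleName: ruleName})
	return err
}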
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListAttachedPoliciesWithContext(ctx aws.Context, input *ListAttachedPoliciesInput, opts ...request.Option) (*ListAttachedPoliciesOutput, error) { - req, out := c.ListAttachedPoliciesRequest(input) +func (c *IoT) EnableTopicRuleWithContext(ctx aws.Context, input *EnableTopicRuleInput, opts ...request.Option) (*EnableTopicRuleOutput, error) { + req, out := c.EnableTopicRuleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListAuthorizers = "ListAuthorizers" +const opGetEffectivePolicies = "GetEffectivePolicies" -// ListAuthorizersRequest generates a "aws/request.Request" representing the -// client's request for the ListAuthorizers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetEffectivePoliciesRequest generates a "aws/request.Request" representing the +// client's request for the GetEffectivePolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListAuthorizers for more information on using the ListAuthorizers +// See GetEffectivePolicies for more information on using the GetEffectivePolicies // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListAuthorizersRequest method. -// req, resp := client.ListAuthorizersRequest(params) +// // Example sending a request using the GetEffectivePoliciesRequest method. +// req, resp := client.GetEffectivePoliciesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListAuthorizersRequest(input *ListAuthorizersInput) (req *request.Request, output *ListAuthorizersOutput) { +func (c *IoT) GetEffectivePoliciesRequest(input *GetEffectivePoliciesInput) (req *request.Request, output *GetEffectivePoliciesOutput) { op := &request.Operation{ - Name: opListAuthorizers, - HTTPMethod: "GET", - HTTPPath: "/authorizers/", + Name: opGetEffectivePolicies, + HTTPMethod: "POST", + HTTPPath: "/effective-policies", } if input == nil { - input = &ListAuthorizersInput{} + input = &GetEffectivePoliciesInput{} } - output = &ListAuthorizersOutput{} + output = &GetEffectivePoliciesOutput{} req = c.newRequest(op, input, output) return } -// ListAuthorizers API operation for AWS IoT. +// GetEffectivePolicies API operation for AWS IoT. // -// Lists the authorizers registered in your account. +// Gets a list of the policies that have an effect on the authorization behavior +// of the specified device when it connects to the AWS IoT device gateway. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListAuthorizers for usage and error information. +// API operation GetEffectivePolicies for usage and error information. 
// // Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -6348,80 +7272,80 @@ func (c *IoT) ListAuthorizersRequest(input *ListAuthorizersInput) (req *request. // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) ListAuthorizers(input *ListAuthorizersInput) (*ListAuthorizersOutput, error) { - req, out := c.ListAuthorizersRequest(input) +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +func (c *IoT) GetEffectivePolicies(input *GetEffectivePoliciesInput) (*GetEffectivePoliciesOutput, error) { + req, out := c.GetEffectivePoliciesRequest(input) return out, req.Send() } -// ListAuthorizersWithContext is the same as ListAuthorizers with the addition of +// GetEffectivePoliciesWithContext is the same as GetEffectivePolicies with the addition of // the ability to pass a context and additional request options. // -// See ListAuthorizers for details on how to use this API operation. +// See GetEffectivePolicies for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListAuthorizersWithContext(ctx aws.Context, input *ListAuthorizersInput, opts ...request.Option) (*ListAuthorizersOutput, error) { - req, out := c.ListAuthorizersRequest(input) +func (c *IoT) GetEffectivePoliciesWithContext(ctx aws.Context, input *GetEffectivePoliciesInput, opts ...request.Option) (*GetEffectivePoliciesOutput, error) { + req, out := c.GetEffectivePoliciesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListCACertificates = "ListCACertificates" +const opGetIndexingConfiguration = "GetIndexingConfiguration" -// ListCACertificatesRequest generates a "aws/request.Request" representing the -// client's request for the ListCACertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetIndexingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetIndexingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListCACertificates for more information on using the ListCACertificates +// See GetIndexingConfiguration for more information on using the GetIndexingConfiguration // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListCACertificatesRequest method. -// req, resp := client.ListCACertificatesRequest(params) +// // Example sending a request using the GetIndexingConfigurationRequest method. 
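// A minimal usage sketch of GetEffectivePolicies as documented above (illustrative only,
// not generated SDK code): it asks which policies affect a given principal when it connects
// to the AWS IoT device gateway. The principal ARN and thing name are placeholders, and the
// output fields shown are assumed from the EffectivePolicy shape.
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iot"
)

func listEffectivePolicies(client *iot.IoT) error {
	out, err := client.GetEffectivePolicies(&iot.GetEffectivePoliciesInput{
		Principal: aws.String("arn:aws:iot:us-east-1:123456789012:cert/example"), // placeholder
		ThingName: aws.String("example-thing"),                                   // placeholder
	})
	if err != nil {
		return err
	}
	for _, p := range out.EffectivePolicies {
		fmt.Println(aws.StringValue(p.PolicyName))
	}
	return nil
}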
+// req, resp := client.GetIndexingConfigurationRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListCACertificatesRequest(input *ListCACertificatesInput) (req *request.Request, output *ListCACertificatesOutput) { +func (c *IoT) GetIndexingConfigurationRequest(input *GetIndexingConfigurationInput) (req *request.Request, output *GetIndexingConfigurationOutput) { op := &request.Operation{ - Name: opListCACertificates, + Name: opGetIndexingConfiguration, HTTPMethod: "GET", - HTTPPath: "/cacertificates", + HTTPPath: "/indexing/config", } if input == nil { - input = &ListCACertificatesInput{} + input = &GetIndexingConfigurationInput{} } - output = &ListCACertificatesOutput{} + output = &GetIndexingConfigurationOutput{} req = c.newRequest(op, input, output) return } -// ListCACertificates API operation for AWS IoT. -// -// Lists the CA certificates registered for your AWS account. +// GetIndexingConfiguration API operation for AWS IoT. // -// The results are paginated with a default page size of 25. You can use the -// returned marker to retrieve additional results. +// Gets the search configuration. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListCACertificates for usage and error information. +// API operation GetIndexingConfiguration for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -6439,256 +7363,246 @@ func (c *IoT) ListCACertificatesRequest(input *ListCACertificatesInput) (req *re // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) ListCACertificates(input *ListCACertificatesInput) (*ListCACertificatesOutput, error) { - req, out := c.ListCACertificatesRequest(input) +func (c *IoT) GetIndexingConfiguration(input *GetIndexingConfigurationInput) (*GetIndexingConfigurationOutput, error) { + req, out := c.GetIndexingConfigurationRequest(input) return out, req.Send() } -// ListCACertificatesWithContext is the same as ListCACertificates with the addition of +// GetIndexingConfigurationWithContext is the same as GetIndexingConfiguration with the addition of // the ability to pass a context and additional request options. // -// See ListCACertificates for details on how to use this API operation. +// See GetIndexingConfiguration for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListCACertificatesWithContext(ctx aws.Context, input *ListCACertificatesInput, opts ...request.Option) (*ListCACertificatesOutput, error) { - req, out := c.ListCACertificatesRequest(input) +func (c *IoT) GetIndexingConfigurationWithContext(ctx aws.Context, input *GetIndexingConfigurationInput, opts ...request.Option) (*GetIndexingConfigurationOutput, error) { + req, out := c.GetIndexingConfigurationRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opListCertificates = "ListCertificates" +const opGetJobDocument = "GetJobDocument" -// ListCertificatesRequest generates a "aws/request.Request" representing the -// client's request for the ListCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetJobDocumentRequest generates a "aws/request.Request" representing the +// client's request for the GetJobDocument operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListCertificates for more information on using the ListCertificates +// See GetJobDocument for more information on using the GetJobDocument // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListCertificatesRequest method. -// req, resp := client.ListCertificatesRequest(params) +// // Example sending a request using the GetJobDocumentRequest method. +// req, resp := client.GetJobDocumentRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListCertificatesRequest(input *ListCertificatesInput) (req *request.Request, output *ListCertificatesOutput) { +func (c *IoT) GetJobDocumentRequest(input *GetJobDocumentInput) (req *request.Request, output *GetJobDocumentOutput) { op := &request.Operation{ - Name: opListCertificates, + Name: opGetJobDocument, HTTPMethod: "GET", - HTTPPath: "/certificates", + HTTPPath: "/jobs/{jobId}/job-document", } if input == nil { - input = &ListCertificatesInput{} + input = &GetJobDocumentInput{} } - output = &ListCertificatesOutput{} + output = &GetJobDocumentOutput{} req = c.newRequest(op, input, output) return } -// ListCertificates API operation for AWS IoT. -// -// Lists the certificates registered in your AWS account. +// GetJobDocument API operation for AWS IoT. // -// The results are paginated with a default page size of 25. You can use the -// returned marker to retrieve additional results. +// Gets a job document. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListCertificates for usage and error information. +// API operation GetJobDocument for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. 
-// -func (c *IoT) ListCertificates(input *ListCertificatesInput) (*ListCertificatesOutput, error) { - req, out := c.ListCertificatesRequest(input) +func (c *IoT) GetJobDocument(input *GetJobDocumentInput) (*GetJobDocumentOutput, error) { + req, out := c.GetJobDocumentRequest(input) return out, req.Send() } -// ListCertificatesWithContext is the same as ListCertificates with the addition of +// GetJobDocumentWithContext is the same as GetJobDocument with the addition of // the ability to pass a context and additional request options. // -// See ListCertificates for details on how to use this API operation. +// See GetJobDocument for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListCertificatesWithContext(ctx aws.Context, input *ListCertificatesInput, opts ...request.Option) (*ListCertificatesOutput, error) { - req, out := c.ListCertificatesRequest(input) +func (c *IoT) GetJobDocumentWithContext(ctx aws.Context, input *GetJobDocumentInput, opts ...request.Option) (*GetJobDocumentOutput, error) { + req, out := c.GetJobDocumentRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListCertificatesByCA = "ListCertificatesByCA" +const opGetLoggingOptions = "GetLoggingOptions" -// ListCertificatesByCARequest generates a "aws/request.Request" representing the -// client's request for the ListCertificatesByCA operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetLoggingOptionsRequest generates a "aws/request.Request" representing the +// client's request for the GetLoggingOptions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListCertificatesByCA for more information on using the ListCertificatesByCA +// See GetLoggingOptions for more information on using the GetLoggingOptions // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListCertificatesByCARequest method. -// req, resp := client.ListCertificatesByCARequest(params) +// // Example sending a request using the GetLoggingOptionsRequest method. 
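// A minimal usage sketch of GetJobDocument as documented above (illustrative only, not
// generated SDK code); the job ID is supplied by the caller and the returned value is the
// JSON job document associated with that job.
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iot"
)

func printJobDocument(client *iot.IoT, jobID string) error {
	out, err := client.GetJobDocument(&iot.GetJobDocumentInput{
		JobId: aws.String(jobID),
	})
	if err != nil {
		return err
	}
	fmt.Println(aws.StringValue(out.Document))
	return nil
}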
+// req, resp := client.GetLoggingOptionsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListCertificatesByCARequest(input *ListCertificatesByCAInput) (req *request.Request, output *ListCertificatesByCAOutput) { +func (c *IoT) GetLoggingOptionsRequest(input *GetLoggingOptionsInput) (req *request.Request, output *GetLoggingOptionsOutput) { op := &request.Operation{ - Name: opListCertificatesByCA, + Name: opGetLoggingOptions, HTTPMethod: "GET", - HTTPPath: "/certificates-by-ca/{caCertificateId}", + HTTPPath: "/loggingOptions", } if input == nil { - input = &ListCertificatesByCAInput{} + input = &GetLoggingOptionsInput{} } - output = &ListCertificatesByCAOutput{} + output = &GetLoggingOptionsOutput{} req = c.newRequest(op, input, output) return } -// ListCertificatesByCA API operation for AWS IoT. +// GetLoggingOptions API operation for AWS IoT. // -// List the device certificates signed by the specified CA certificate. +// Gets the logging options. +// +// NOTE: use of this command is not recommended. Use GetV2LoggingOptions instead. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListCertificatesByCA for usage and error information. +// API operation GetLoggingOptions for usage and error information. // // Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -func (c *IoT) ListCertificatesByCA(input *ListCertificatesByCAInput) (*ListCertificatesByCAOutput, error) { - req, out := c.ListCertificatesByCARequest(input) +func (c *IoT) GetLoggingOptions(input *GetLoggingOptionsInput) (*GetLoggingOptionsOutput, error) { + req, out := c.GetLoggingOptionsRequest(input) return out, req.Send() } -// ListCertificatesByCAWithContext is the same as ListCertificatesByCA with the addition of +// GetLoggingOptionsWithContext is the same as GetLoggingOptions with the addition of // the ability to pass a context and additional request options. // -// See ListCertificatesByCA for details on how to use this API operation. +// See GetLoggingOptions for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) ListCertificatesByCAWithContext(ctx aws.Context, input *ListCertificatesByCAInput, opts ...request.Option) (*ListCertificatesByCAOutput, error) { - req, out := c.ListCertificatesByCARequest(input) +func (c *IoT) GetLoggingOptionsWithContext(ctx aws.Context, input *GetLoggingOptionsInput, opts ...request.Option) (*GetLoggingOptionsOutput, error) { + req, out := c.GetLoggingOptionsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListIndices = "ListIndices" +const opGetOTAUpdate = "GetOTAUpdate" -// ListIndicesRequest generates a "aws/request.Request" representing the -// client's request for the ListIndices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetOTAUpdateRequest generates a "aws/request.Request" representing the +// client's request for the GetOTAUpdate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListIndices for more information on using the ListIndices +// See GetOTAUpdate for more information on using the GetOTAUpdate // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListIndicesRequest method. -// req, resp := client.ListIndicesRequest(params) +// // Example sending a request using the GetOTAUpdateRequest method. +// req, resp := client.GetOTAUpdateRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListIndicesRequest(input *ListIndicesInput) (req *request.Request, output *ListIndicesOutput) { +func (c *IoT) GetOTAUpdateRequest(input *GetOTAUpdateInput) (req *request.Request, output *GetOTAUpdateOutput) { op := &request.Operation{ - Name: opListIndices, + Name: opGetOTAUpdate, HTTPMethod: "GET", - HTTPPath: "/indices", + HTTPPath: "/otaUpdates/{otaUpdateId}", } if input == nil { - input = &ListIndicesInput{} + input = &GetOTAUpdateInput{} } - output = &ListIndicesOutput{} + output = &GetOTAUpdateOutput{} req = c.newRequest(op, input, output) return } -// ListIndices API operation for AWS IoT. +// GetOTAUpdate API operation for AWS IoT. // -// Lists the search indices. +// Gets an OTA update. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListIndices for usage and error information. +// API operation GetOTAUpdate for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -6700,607 +7614,609 @@ func (c *IoT) ListIndicesRequest(input *ListIndicesInput) (req *request.Request, // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. 
// -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // -func (c *IoT) ListIndices(input *ListIndicesInput) (*ListIndicesOutput, error) { - req, out := c.ListIndicesRequest(input) +func (c *IoT) GetOTAUpdate(input *GetOTAUpdateInput) (*GetOTAUpdateOutput, error) { + req, out := c.GetOTAUpdateRequest(input) return out, req.Send() } -// ListIndicesWithContext is the same as ListIndices with the addition of +// GetOTAUpdateWithContext is the same as GetOTAUpdate with the addition of // the ability to pass a context and additional request options. // -// See ListIndices for details on how to use this API operation. +// See GetOTAUpdate for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListIndicesWithContext(ctx aws.Context, input *ListIndicesInput, opts ...request.Option) (*ListIndicesOutput, error) { - req, out := c.ListIndicesRequest(input) +func (c *IoT) GetOTAUpdateWithContext(ctx aws.Context, input *GetOTAUpdateInput, opts ...request.Option) (*GetOTAUpdateOutput, error) { + req, out := c.GetOTAUpdateRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListJobExecutionsForJob = "ListJobExecutionsForJob" +const opGetPolicy = "GetPolicy" -// ListJobExecutionsForJobRequest generates a "aws/request.Request" representing the -// client's request for the ListJobExecutionsForJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListJobExecutionsForJob for more information on using the ListJobExecutionsForJob +// See GetPolicy for more information on using the GetPolicy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListJobExecutionsForJobRequest method. -// req, resp := client.ListJobExecutionsForJobRequest(params) +// // Example sending a request using the GetPolicyRequest method. 
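// A minimal usage sketch of GetOTAUpdate using the WithContext variant documented above
// (illustrative only, not generated SDK code), so the request can be cancelled through a
// context; the OTA update ID is a placeholder.
package example

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/iot"
)

func getOTAUpdate(ctx context.Context, client *iot.IoT) error {
	out, err := client.GetOTAUpdateWithContext(ctx, &iot.GetOTAUpdateInput{
		OtaUpdateId: aws.String("example-ota-update"), // placeholder
	})
	if err != nil {
		return err
	}
	fmt.Println(out)
	return nil
}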
+// req, resp := client.GetPolicyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListJobExecutionsForJobRequest(input *ListJobExecutionsForJobInput) (req *request.Request, output *ListJobExecutionsForJobOutput) { +func (c *IoT) GetPolicyRequest(input *GetPolicyInput) (req *request.Request, output *GetPolicyOutput) { op := &request.Operation{ - Name: opListJobExecutionsForJob, + Name: opGetPolicy, HTTPMethod: "GET", - HTTPPath: "/jobs/{jobId}/things", + HTTPPath: "/policies/{policyName}", } if input == nil { - input = &ListJobExecutionsForJobInput{} + input = &GetPolicyInput{} } - output = &ListJobExecutionsForJobOutput{} + output = &GetPolicyOutput{} req = c.newRequest(op, input, output) return } -// ListJobExecutionsForJob API operation for AWS IoT. +// GetPolicy API operation for AWS IoT. // -// Lists the job executions for a job. +// Gets information about the specified policy with the policy document of the +// default version. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListJobExecutionsForJob for usage and error information. +// API operation GetPolicy for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. -// // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -func (c *IoT) ListJobExecutionsForJob(input *ListJobExecutionsForJobInput) (*ListJobExecutionsForJobOutput, error) { - req, out := c.ListJobExecutionsForJobRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) GetPolicy(input *GetPolicyInput) (*GetPolicyOutput, error) { + req, out := c.GetPolicyRequest(input) return out, req.Send() } -// ListJobExecutionsForJobWithContext is the same as ListJobExecutionsForJob with the addition of +// GetPolicyWithContext is the same as GetPolicy with the addition of // the ability to pass a context and additional request options. // -// See ListJobExecutionsForJob for details on how to use this API operation. +// See GetPolicy for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListJobExecutionsForJobWithContext(ctx aws.Context, input *ListJobExecutionsForJobInput, opts ...request.Option) (*ListJobExecutionsForJobOutput, error) { - req, out := c.ListJobExecutionsForJobRequest(input) +func (c *IoT) GetPolicyWithContext(ctx aws.Context, input *GetPolicyInput, opts ...request.Option) (*GetPolicyOutput, error) { + req, out := c.GetPolicyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opListJobExecutionsForThing = "ListJobExecutionsForThing" +const opGetPolicyVersion = "GetPolicyVersion" -// ListJobExecutionsForThingRequest generates a "aws/request.Request" representing the -// client's request for the ListJobExecutionsForThing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetPolicyVersionRequest generates a "aws/request.Request" representing the +// client's request for the GetPolicyVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListJobExecutionsForThing for more information on using the ListJobExecutionsForThing +// See GetPolicyVersion for more information on using the GetPolicyVersion // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListJobExecutionsForThingRequest method. -// req, resp := client.ListJobExecutionsForThingRequest(params) +// // Example sending a request using the GetPolicyVersionRequest method. +// req, resp := client.GetPolicyVersionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListJobExecutionsForThingRequest(input *ListJobExecutionsForThingInput) (req *request.Request, output *ListJobExecutionsForThingOutput) { +func (c *IoT) GetPolicyVersionRequest(input *GetPolicyVersionInput) (req *request.Request, output *GetPolicyVersionOutput) { op := &request.Operation{ - Name: opListJobExecutionsForThing, + Name: opGetPolicyVersion, HTTPMethod: "GET", - HTTPPath: "/things/{thingName}/jobs", + HTTPPath: "/policies/{policyName}/version/{policyVersionId}", } if input == nil { - input = &ListJobExecutionsForThingInput{} + input = &GetPolicyVersionInput{} } - output = &ListJobExecutionsForThingOutput{} + output = &GetPolicyVersionOutput{} req = c.newRequest(op, input, output) return } -// ListJobExecutionsForThing API operation for AWS IoT. +// GetPolicyVersion API operation for AWS IoT. // -// Lists the job executions for the specified thing. +// Gets information about the specified policy version. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListJobExecutionsForThing for usage and error information. +// API operation GetPolicyVersion for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. -// // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource does not exist. // +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. 
// -func (c *IoT) ListJobExecutionsForThing(input *ListJobExecutionsForThingInput) (*ListJobExecutionsForThingOutput, error) { - req, out := c.ListJobExecutionsForThingRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) GetPolicyVersion(input *GetPolicyVersionInput) (*GetPolicyVersionOutput, error) { + req, out := c.GetPolicyVersionRequest(input) return out, req.Send() } -// ListJobExecutionsForThingWithContext is the same as ListJobExecutionsForThing with the addition of +// GetPolicyVersionWithContext is the same as GetPolicyVersion with the addition of // the ability to pass a context and additional request options. // -// See ListJobExecutionsForThing for details on how to use this API operation. +// See GetPolicyVersion for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListJobExecutionsForThingWithContext(ctx aws.Context, input *ListJobExecutionsForThingInput, opts ...request.Option) (*ListJobExecutionsForThingOutput, error) { - req, out := c.ListJobExecutionsForThingRequest(input) +func (c *IoT) GetPolicyVersionWithContext(ctx aws.Context, input *GetPolicyVersionInput, opts ...request.Option) (*GetPolicyVersionOutput, error) { + req, out := c.GetPolicyVersionRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListJobs = "ListJobs" +const opGetRegistrationCode = "GetRegistrationCode" -// ListJobsRequest generates a "aws/request.Request" representing the -// client's request for the ListJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRegistrationCodeRequest generates a "aws/request.Request" representing the +// client's request for the GetRegistrationCode operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListJobs for more information on using the ListJobs +// See GetRegistrationCode for more information on using the GetRegistrationCode // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListJobsRequest method. -// req, resp := client.ListJobsRequest(params) +// // Example sending a request using the GetRegistrationCodeRequest method. 
+// req, resp := client.GetRegistrationCodeRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListJobsRequest(input *ListJobsInput) (req *request.Request, output *ListJobsOutput) { +func (c *IoT) GetRegistrationCodeRequest(input *GetRegistrationCodeInput) (req *request.Request, output *GetRegistrationCodeOutput) { op := &request.Operation{ - Name: opListJobs, + Name: opGetRegistrationCode, HTTPMethod: "GET", - HTTPPath: "/jobs", + HTTPPath: "/registrationcode", } if input == nil { - input = &ListJobsInput{} + input = &GetRegistrationCodeInput{} } - output = &ListJobsOutput{} + output = &GetRegistrationCodeOutput{} req = c.newRequest(op, input, output) return } -// ListJobs API operation for AWS IoT. +// GetRegistrationCode API operation for AWS IoT. // -// Lists jobs. +// Gets a registration code used to register a CA certificate with AWS IoT. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListJobs for usage and error information. +// API operation GetRegistrationCode for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. -// -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -func (c *IoT) ListJobs(input *ListJobsInput) (*ListJobsOutput, error) { - req, out := c.ListJobsRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +func (c *IoT) GetRegistrationCode(input *GetRegistrationCodeInput) (*GetRegistrationCodeOutput, error) { + req, out := c.GetRegistrationCodeRequest(input) return out, req.Send() } -// ListJobsWithContext is the same as ListJobs with the addition of +// GetRegistrationCodeWithContext is the same as GetRegistrationCode with the addition of // the ability to pass a context and additional request options. // -// See ListJobs for details on how to use this API operation. +// See GetRegistrationCode for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListJobsWithContext(ctx aws.Context, input *ListJobsInput, opts ...request.Option) (*ListJobsOutput, error) { - req, out := c.ListJobsRequest(input) +func (c *IoT) GetRegistrationCodeWithContext(ctx aws.Context, input *GetRegistrationCodeInput, opts ...request.Option) (*GetRegistrationCodeOutput, error) { + req, out := c.GetRegistrationCodeRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opListOTAUpdates = "ListOTAUpdates" +const opGetTopicRule = "GetTopicRule" -// ListOTAUpdatesRequest generates a "aws/request.Request" representing the -// client's request for the ListOTAUpdates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetTopicRuleRequest generates a "aws/request.Request" representing the +// client's request for the GetTopicRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListOTAUpdates for more information on using the ListOTAUpdates +// See GetTopicRule for more information on using the GetTopicRule // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListOTAUpdatesRequest method. -// req, resp := client.ListOTAUpdatesRequest(params) +// // Example sending a request using the GetTopicRuleRequest method. +// req, resp := client.GetTopicRuleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListOTAUpdatesRequest(input *ListOTAUpdatesInput) (req *request.Request, output *ListOTAUpdatesOutput) { +func (c *IoT) GetTopicRuleRequest(input *GetTopicRuleInput) (req *request.Request, output *GetTopicRuleOutput) { op := &request.Operation{ - Name: opListOTAUpdates, + Name: opGetTopicRule, HTTPMethod: "GET", - HTTPPath: "/otaUpdates", + HTTPPath: "/rules/{ruleName}", } if input == nil { - input = &ListOTAUpdatesInput{} + input = &GetTopicRuleInput{} } - output = &ListOTAUpdatesOutput{} + output = &GetTopicRuleOutput{} req = c.newRequest(op, input, output) return } -// ListOTAUpdates API operation for AWS IoT. +// GetTopicRule API operation for AWS IoT. // -// Lists OTA updates. +// Gets information about the rule. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListOTAUpdates for usage and error information. +// API operation GetTopicRule for usage and error information. // // Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. // // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. 
-// -func (c *IoT) ListOTAUpdates(input *ListOTAUpdatesInput) (*ListOTAUpdatesOutput, error) { - req, out := c.ListOTAUpdatesRequest(input) +func (c *IoT) GetTopicRule(input *GetTopicRuleInput) (*GetTopicRuleOutput, error) { + req, out := c.GetTopicRuleRequest(input) return out, req.Send() } -// ListOTAUpdatesWithContext is the same as ListOTAUpdates with the addition of +// GetTopicRuleWithContext is the same as GetTopicRule with the addition of // the ability to pass a context and additional request options. // -// See ListOTAUpdates for details on how to use this API operation. +// See GetTopicRule for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListOTAUpdatesWithContext(ctx aws.Context, input *ListOTAUpdatesInput, opts ...request.Option) (*ListOTAUpdatesOutput, error) { - req, out := c.ListOTAUpdatesRequest(input) +func (c *IoT) GetTopicRuleWithContext(ctx aws.Context, input *GetTopicRuleInput, opts ...request.Option) (*GetTopicRuleOutput, error) { + req, out := c.GetTopicRuleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListOutgoingCertificates = "ListOutgoingCertificates" +const opGetV2LoggingOptions = "GetV2LoggingOptions" -// ListOutgoingCertificatesRequest generates a "aws/request.Request" representing the -// client's request for the ListOutgoingCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetV2LoggingOptionsRequest generates a "aws/request.Request" representing the +// client's request for the GetV2LoggingOptions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListOutgoingCertificates for more information on using the ListOutgoingCertificates +// See GetV2LoggingOptions for more information on using the GetV2LoggingOptions // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListOutgoingCertificatesRequest method. -// req, resp := client.ListOutgoingCertificatesRequest(params) +// // Example sending a request using the GetV2LoggingOptionsRequest method. 
+// req, resp := client.GetV2LoggingOptionsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListOutgoingCertificatesRequest(input *ListOutgoingCertificatesInput) (req *request.Request, output *ListOutgoingCertificatesOutput) { +func (c *IoT) GetV2LoggingOptionsRequest(input *GetV2LoggingOptionsInput) (req *request.Request, output *GetV2LoggingOptionsOutput) { op := &request.Operation{ - Name: opListOutgoingCertificates, + Name: opGetV2LoggingOptions, HTTPMethod: "GET", - HTTPPath: "/certificates-out-going", + HTTPPath: "/v2LoggingOptions", } if input == nil { - input = &ListOutgoingCertificatesInput{} + input = &GetV2LoggingOptionsInput{} } - output = &ListOutgoingCertificatesOutput{} + output = &GetV2LoggingOptionsOutput{} req = c.newRequest(op, input, output) return } -// ListOutgoingCertificates API operation for AWS IoT. +// GetV2LoggingOptions API operation for AWS IoT. // -// Lists certificates that are being transferred but not yet accepted. +// Gets the fine grained logging options. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListOutgoingCertificates for usage and error information. +// API operation GetV2LoggingOptions for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. -// -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. +// * ErrCodeNotConfiguredException "NotConfiguredException" +// The resource is not configured. // // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -func (c *IoT) ListOutgoingCertificates(input *ListOutgoingCertificatesInput) (*ListOutgoingCertificatesOutput, error) { - req, out := c.ListOutgoingCertificatesRequest(input) +func (c *IoT) GetV2LoggingOptions(input *GetV2LoggingOptionsInput) (*GetV2LoggingOptionsOutput, error) { + req, out := c.GetV2LoggingOptionsRequest(input) return out, req.Send() } -// ListOutgoingCertificatesWithContext is the same as ListOutgoingCertificates with the addition of +// GetV2LoggingOptionsWithContext is the same as GetV2LoggingOptions with the addition of // the ability to pass a context and additional request options. // -// See ListOutgoingCertificates for details on how to use this API operation. +// See GetV2LoggingOptions for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
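Editorial aside (not part of the vendored file): the generated request pattern in the hunk above — build the operation's input struct, call the method, inspect the output — is the same for every AWS IoT operation. A minimal sketch of calling `GetV2LoggingOptions`, which takes an empty input as shown above; the region and the use of the default credential chain are assumptions, not something this diff specifies.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	// Assumed: region us-east-1 and credentials from the default provider chain.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	}))
	client := iot.New(sess)

	// GetV2LoggingOptions takes no required fields, per the generated code above.
	out, err := client.GetV2LoggingOptions(&iot.GetV2LoggingOptionsInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // same "print the response" pattern as the generated examples
}
```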
-func (c *IoT) ListOutgoingCertificatesWithContext(ctx aws.Context, input *ListOutgoingCertificatesInput, opts ...request.Option) (*ListOutgoingCertificatesOutput, error) { - req, out := c.ListOutgoingCertificatesRequest(input) +func (c *IoT) GetV2LoggingOptionsWithContext(ctx aws.Context, input *GetV2LoggingOptionsInput, opts ...request.Option) (*GetV2LoggingOptionsOutput, error) { + req, out := c.GetV2LoggingOptionsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListPolicies = "ListPolicies" +const opListActiveViolations = "ListActiveViolations" -// ListPoliciesRequest generates a "aws/request.Request" representing the -// client's request for the ListPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListActiveViolationsRequest generates a "aws/request.Request" representing the +// client's request for the ListActiveViolations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListPolicies for more information on using the ListPolicies +// See ListActiveViolations for more information on using the ListActiveViolations // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListPoliciesRequest method. -// req, resp := client.ListPoliciesRequest(params) +// // Example sending a request using the ListActiveViolationsRequest method. +// req, resp := client.ListActiveViolationsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListPoliciesRequest(input *ListPoliciesInput) (req *request.Request, output *ListPoliciesOutput) { +func (c *IoT) ListActiveViolationsRequest(input *ListActiveViolationsInput) (req *request.Request, output *ListActiveViolationsOutput) { op := &request.Operation{ - Name: opListPolicies, + Name: opListActiveViolations, HTTPMethod: "GET", - HTTPPath: "/policies", + HTTPPath: "/active-violations", } if input == nil { - input = &ListPoliciesInput{} + input = &ListActiveViolationsInput{} } - output = &ListPoliciesOutput{} + output = &ListActiveViolationsOutput{} req = c.newRequest(op, input, output) return } -// ListPolicies API operation for AWS IoT. +// ListActiveViolations API operation for AWS IoT. // -// Lists your policies. +// Lists the active violations for a given Device Defender security profile. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListPolicies for usage and error information. +// API operation ListActiveViolations for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. 
// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) ListPolicies(input *ListPoliciesInput) (*ListPoliciesOutput, error) { - req, out := c.ListPoliciesRequest(input) +func (c *IoT) ListActiveViolations(input *ListActiveViolationsInput) (*ListActiveViolationsOutput, error) { + req, out := c.ListActiveViolationsRequest(input) return out, req.Send() } -// ListPoliciesWithContext is the same as ListPolicies with the addition of +// ListActiveViolationsWithContext is the same as ListActiveViolations with the addition of // the ability to pass a context and additional request options. // -// See ListPolicies for details on how to use this API operation. +// See ListActiveViolations for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListPoliciesWithContext(ctx aws.Context, input *ListPoliciesInput, opts ...request.Option) (*ListPoliciesOutput, error) { - req, out := c.ListPoliciesRequest(input) +func (c *IoT) ListActiveViolationsWithContext(ctx aws.Context, input *ListActiveViolationsInput, opts ...request.Option) (*ListActiveViolationsOutput, error) { + req, out := c.ListActiveViolationsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListPolicyPrincipals = "ListPolicyPrincipals" +const opListAttachedPolicies = "ListAttachedPolicies" -// ListPolicyPrincipalsRequest generates a "aws/request.Request" representing the -// client's request for the ListPolicyPrincipals operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListAttachedPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListAttachedPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListPolicyPrincipals for more information on using the ListPolicyPrincipals +// See ListAttachedPolicies for more information on using the ListAttachedPolicies // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListPolicyPrincipalsRequest method. -// req, resp := client.ListPolicyPrincipalsRequest(params) +// // Example sending a request using the ListAttachedPoliciesRequest method. 
+// req, resp := client.ListAttachedPoliciesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListPolicyPrincipalsRequest(input *ListPolicyPrincipalsInput) (req *request.Request, output *ListPolicyPrincipalsOutput) { - if c.Client.Config.Logger != nil { - c.Client.Config.Logger.Log("This operation, ListPolicyPrincipals, has been deprecated") - } +func (c *IoT) ListAttachedPoliciesRequest(input *ListAttachedPoliciesInput) (req *request.Request, output *ListAttachedPoliciesOutput) { op := &request.Operation{ - Name: opListPolicyPrincipals, - HTTPMethod: "GET", - HTTPPath: "/policy-principals", + Name: opListAttachedPolicies, + HTTPMethod: "POST", + HTTPPath: "/attached-policies/{target}", } if input == nil { - input = &ListPolicyPrincipalsInput{} + input = &ListAttachedPoliciesInput{} } - output = &ListPolicyPrincipalsOutput{} + output = &ListAttachedPoliciesOutput{} req = c.newRequest(op, input, output) return } -// ListPolicyPrincipals API operation for AWS IoT. -// -// Lists the principals associated with the specified policy. +// ListAttachedPolicies API operation for AWS IoT. // -// Note: This API is deprecated. Please use ListTargetsForPolicy instead. +// Lists the policies attached to the specified thing group. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListPolicyPrincipals for usage and error information. +// API operation ListAttachedPolicies for usage and error information. // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" @@ -7321,265 +8237,247 @@ func (c *IoT) ListPolicyPrincipalsRequest(input *ListPolicyPrincipalsInput) (req // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) ListPolicyPrincipals(input *ListPolicyPrincipalsInput) (*ListPolicyPrincipalsOutput, error) { - req, out := c.ListPolicyPrincipalsRequest(input) +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +func (c *IoT) ListAttachedPolicies(input *ListAttachedPoliciesInput) (*ListAttachedPoliciesOutput, error) { + req, out := c.ListAttachedPoliciesRequest(input) return out, req.Send() } -// ListPolicyPrincipalsWithContext is the same as ListPolicyPrincipals with the addition of +// ListAttachedPoliciesWithContext is the same as ListAttachedPolicies with the addition of // the ability to pass a context and additional request options. // -// See ListPolicyPrincipals for details on how to use this API operation. +// See ListAttachedPolicies for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
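Editorial aside (not part of the vendored file): the `*WithContext` variants documented above accept an `aws.Context`, which a standard `context.Context` satisfies, so a timeout or cancellation can be attached to the request. A hedged sketch around `ListAttachedPolicies`; the target certificate ARN is hypothetical, and the `Target` and `Policies` field names are assumptions based on the operation's `/attached-policies/{target}` path rather than anything shown in this hunk.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	client := iot.New(sess)

	// The context must be non-nil; here it also cancels the call after 10 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Hypothetical certificate ARN used only for illustration.
	out, err := client.ListAttachedPoliciesWithContext(ctx, &iot.ListAttachedPoliciesInput{
		Target: aws.String("arn:aws:iot:us-east-1:123456789012:cert/example"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range out.Policies {
		fmt.Println(aws.StringValue(p.PolicyName))
	}
}
```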
-func (c *IoT) ListPolicyPrincipalsWithContext(ctx aws.Context, input *ListPolicyPrincipalsInput, opts ...request.Option) (*ListPolicyPrincipalsOutput, error) { - req, out := c.ListPolicyPrincipalsRequest(input) +func (c *IoT) ListAttachedPoliciesWithContext(ctx aws.Context, input *ListAttachedPoliciesInput, opts ...request.Option) (*ListAttachedPoliciesOutput, error) { + req, out := c.ListAttachedPoliciesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListPolicyVersions = "ListPolicyVersions" +const opListAuditFindings = "ListAuditFindings" -// ListPolicyVersionsRequest generates a "aws/request.Request" representing the -// client's request for the ListPolicyVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListAuditFindingsRequest generates a "aws/request.Request" representing the +// client's request for the ListAuditFindings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListPolicyVersions for more information on using the ListPolicyVersions +// See ListAuditFindings for more information on using the ListAuditFindings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListPolicyVersionsRequest method. -// req, resp := client.ListPolicyVersionsRequest(params) +// // Example sending a request using the ListAuditFindingsRequest method. +// req, resp := client.ListAuditFindingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListPolicyVersionsRequest(input *ListPolicyVersionsInput) (req *request.Request, output *ListPolicyVersionsOutput) { +func (c *IoT) ListAuditFindingsRequest(input *ListAuditFindingsInput) (req *request.Request, output *ListAuditFindingsOutput) { op := &request.Operation{ - Name: opListPolicyVersions, - HTTPMethod: "GET", - HTTPPath: "/policies/{policyName}/version", + Name: opListAuditFindings, + HTTPMethod: "POST", + HTTPPath: "/audit/findings", } if input == nil { - input = &ListPolicyVersionsInput{} + input = &ListAuditFindingsInput{} } - output = &ListPolicyVersionsOutput{} + output = &ListAuditFindingsOutput{} req = c.newRequest(op, input, output) return } -// ListPolicyVersions API operation for AWS IoT. +// ListAuditFindings API operation for AWS IoT. // -// Lists the versions of the specified policy and identifies the default version. +// Lists the findings (results) of a Device Defender audit or of the audits +// performed during a specified time period. (Findings are retained for 180 +// days.) // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListPolicyVersions for usage and error information. +// API operation ListAuditFindings for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. 
-// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) ListPolicyVersions(input *ListPolicyVersionsInput) (*ListPolicyVersionsOutput, error) { - req, out := c.ListPolicyVersionsRequest(input) +func (c *IoT) ListAuditFindings(input *ListAuditFindingsInput) (*ListAuditFindingsOutput, error) { + req, out := c.ListAuditFindingsRequest(input) return out, req.Send() } -// ListPolicyVersionsWithContext is the same as ListPolicyVersions with the addition of +// ListAuditFindingsWithContext is the same as ListAuditFindings with the addition of // the ability to pass a context and additional request options. // -// See ListPolicyVersions for details on how to use this API operation. +// See ListAuditFindings for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListPolicyVersionsWithContext(ctx aws.Context, input *ListPolicyVersionsInput, opts ...request.Option) (*ListPolicyVersionsOutput, error) { - req, out := c.ListPolicyVersionsRequest(input) +func (c *IoT) ListAuditFindingsWithContext(ctx aws.Context, input *ListAuditFindingsInput, opts ...request.Option) (*ListAuditFindingsOutput, error) { + req, out := c.ListAuditFindingsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListPrincipalPolicies = "ListPrincipalPolicies" +const opListAuditTasks = "ListAuditTasks" -// ListPrincipalPoliciesRequest generates a "aws/request.Request" representing the -// client's request for the ListPrincipalPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListAuditTasksRequest generates a "aws/request.Request" representing the +// client's request for the ListAuditTasks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListPrincipalPolicies for more information on using the ListPrincipalPolicies +// See ListAuditTasks for more information on using the ListAuditTasks // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListPrincipalPoliciesRequest method. -// req, resp := client.ListPrincipalPoliciesRequest(params) +// // Example sending a request using the ListAuditTasksRequest method. 
+// req, resp := client.ListAuditTasksRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListPrincipalPoliciesRequest(input *ListPrincipalPoliciesInput) (req *request.Request, output *ListPrincipalPoliciesOutput) { - if c.Client.Config.Logger != nil { - c.Client.Config.Logger.Log("This operation, ListPrincipalPolicies, has been deprecated") - } +func (c *IoT) ListAuditTasksRequest(input *ListAuditTasksInput) (req *request.Request, output *ListAuditTasksOutput) { op := &request.Operation{ - Name: opListPrincipalPolicies, + Name: opListAuditTasks, HTTPMethod: "GET", - HTTPPath: "/principal-policies", + HTTPPath: "/audit/tasks", } if input == nil { - input = &ListPrincipalPoliciesInput{} + input = &ListAuditTasksInput{} } - output = &ListPrincipalPoliciesOutput{} + output = &ListAuditTasksOutput{} req = c.newRequest(op, input, output) return } -// ListPrincipalPolicies API operation for AWS IoT. -// -// Lists the policies attached to the specified principal. If you use an Cognito -// identity, the ID must be in AmazonCognito Identity format (http://docs.aws.amazon.com/cognitoidentity/latest/APIReference/API_GetCredentialsForIdentity.html#API_GetCredentialsForIdentity_RequestSyntax). +// ListAuditTasks API operation for AWS IoT. // -// Note: This API is deprecated. Please use ListAttachedPolicies instead. +// Lists the Device Defender audits that have been performed during a given +// time period. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListPrincipalPolicies for usage and error information. +// API operation ListAuditTasks for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) ListPrincipalPolicies(input *ListPrincipalPoliciesInput) (*ListPrincipalPoliciesOutput, error) { - req, out := c.ListPrincipalPoliciesRequest(input) +func (c *IoT) ListAuditTasks(input *ListAuditTasksInput) (*ListAuditTasksOutput, error) { + req, out := c.ListAuditTasksRequest(input) return out, req.Send() } -// ListPrincipalPoliciesWithContext is the same as ListPrincipalPolicies with the addition of +// ListAuditTasksWithContext is the same as ListAuditTasks with the addition of // the ability to pass a context and additional request options. // -// See ListPrincipalPolicies for details on how to use this API operation. +// See ListAuditTasks for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) ListPrincipalPoliciesWithContext(ctx aws.Context, input *ListPrincipalPoliciesInput, opts ...request.Option) (*ListPrincipalPoliciesOutput, error) { - req, out := c.ListPrincipalPoliciesRequest(input) +func (c *IoT) ListAuditTasksWithContext(ctx aws.Context, input *ListAuditTasksInput, opts ...request.Option) (*ListAuditTasksOutput, error) { + req, out := c.ListAuditTasksRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListPrincipalThings = "ListPrincipalThings" +const opListAuthorizers = "ListAuthorizers" -// ListPrincipalThingsRequest generates a "aws/request.Request" representing the -// client's request for the ListPrincipalThings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListAuthorizersRequest generates a "aws/request.Request" representing the +// client's request for the ListAuthorizers operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListPrincipalThings for more information on using the ListPrincipalThings +// See ListAuthorizers for more information on using the ListAuthorizers // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListPrincipalThingsRequest method. -// req, resp := client.ListPrincipalThingsRequest(params) +// // Example sending a request using the ListAuthorizersRequest method. +// req, resp := client.ListAuthorizersRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListPrincipalThingsRequest(input *ListPrincipalThingsInput) (req *request.Request, output *ListPrincipalThingsOutput) { +func (c *IoT) ListAuthorizersRequest(input *ListAuthorizersInput) (req *request.Request, output *ListAuthorizersOutput) { op := &request.Operation{ - Name: opListPrincipalThings, + Name: opListAuthorizers, HTTPMethod: "GET", - HTTPPath: "/principals/things", + HTTPPath: "/authorizers/", } if input == nil { - input = &ListPrincipalThingsInput{} + input = &ListAuthorizersInput{} } - output = &ListPrincipalThingsOutput{} + output = &ListAuthorizersOutput{} req = c.newRequest(op, input, output) return } -// ListPrincipalThings API operation for AWS IoT. +// ListAuthorizers API operation for AWS IoT. // -// Lists the things associated with the specified principal. +// Lists the authorizers registered in your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListPrincipalThings for usage and error information. +// API operation ListAuthorizers for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -7597,168 +8495,165 @@ func (c *IoT) ListPrincipalThingsRequest(input *ListPrincipalThingsInput) (req * // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. 
// -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) ListPrincipalThings(input *ListPrincipalThingsInput) (*ListPrincipalThingsOutput, error) { - req, out := c.ListPrincipalThingsRequest(input) +func (c *IoT) ListAuthorizers(input *ListAuthorizersInput) (*ListAuthorizersOutput, error) { + req, out := c.ListAuthorizersRequest(input) return out, req.Send() } -// ListPrincipalThingsWithContext is the same as ListPrincipalThings with the addition of +// ListAuthorizersWithContext is the same as ListAuthorizers with the addition of // the ability to pass a context and additional request options. // -// See ListPrincipalThings for details on how to use this API operation. +// See ListAuthorizers for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListPrincipalThingsWithContext(ctx aws.Context, input *ListPrincipalThingsInput, opts ...request.Option) (*ListPrincipalThingsOutput, error) { - req, out := c.ListPrincipalThingsRequest(input) +func (c *IoT) ListAuthorizersWithContext(ctx aws.Context, input *ListAuthorizersInput, opts ...request.Option) (*ListAuthorizersOutput, error) { + req, out := c.ListAuthorizersRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListRoleAliases = "ListRoleAliases" +const opListBillingGroups = "ListBillingGroups" -// ListRoleAliasesRequest generates a "aws/request.Request" representing the -// client's request for the ListRoleAliases operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListBillingGroupsRequest generates a "aws/request.Request" representing the +// client's request for the ListBillingGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListRoleAliases for more information on using the ListRoleAliases +// See ListBillingGroups for more information on using the ListBillingGroups // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListRoleAliasesRequest method. -// req, resp := client.ListRoleAliasesRequest(params) +// // Example sending a request using the ListBillingGroupsRequest method. 
+// req, resp := client.ListBillingGroupsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListRoleAliasesRequest(input *ListRoleAliasesInput) (req *request.Request, output *ListRoleAliasesOutput) { +func (c *IoT) ListBillingGroupsRequest(input *ListBillingGroupsInput) (req *request.Request, output *ListBillingGroupsOutput) { op := &request.Operation{ - Name: opListRoleAliases, + Name: opListBillingGroups, HTTPMethod: "GET", - HTTPPath: "/role-aliases", + HTTPPath: "/billing-groups", } if input == nil { - input = &ListRoleAliasesInput{} + input = &ListBillingGroupsInput{} } - output = &ListRoleAliasesOutput{} + output = &ListBillingGroupsOutput{} req = c.newRequest(op, input, output) return } -// ListRoleAliases API operation for AWS IoT. +// ListBillingGroups API operation for AWS IoT. // -// Lists the role aliases registered in your account. +// Lists the billing groups you have created. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListRoleAliases for usage and error information. +// API operation ListBillingGroups for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) ListRoleAliases(input *ListRoleAliasesInput) (*ListRoleAliasesOutput, error) { - req, out := c.ListRoleAliasesRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +func (c *IoT) ListBillingGroups(input *ListBillingGroupsInput) (*ListBillingGroupsOutput, error) { + req, out := c.ListBillingGroupsRequest(input) return out, req.Send() } -// ListRoleAliasesWithContext is the same as ListRoleAliases with the addition of +// ListBillingGroupsWithContext is the same as ListBillingGroups with the addition of // the ability to pass a context and additional request options. // -// See ListRoleAliases for details on how to use this API operation. +// See ListBillingGroups for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListRoleAliasesWithContext(ctx aws.Context, input *ListRoleAliasesInput, opts ...request.Option) (*ListRoleAliasesOutput, error) { - req, out := c.ListRoleAliasesRequest(input) +func (c *IoT) ListBillingGroupsWithContext(ctx aws.Context, input *ListBillingGroupsInput, opts ...request.Option) (*ListBillingGroupsOutput, error) { + req, out := c.ListBillingGroupsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opListStreams = "ListStreams" +const opListCACertificates = "ListCACertificates" -// ListStreamsRequest generates a "aws/request.Request" representing the -// client's request for the ListStreams operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListCACertificatesRequest generates a "aws/request.Request" representing the +// client's request for the ListCACertificates operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListStreams for more information on using the ListStreams +// See ListCACertificates for more information on using the ListCACertificates // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListStreamsRequest method. -// req, resp := client.ListStreamsRequest(params) +// // Example sending a request using the ListCACertificatesRequest method. +// req, resp := client.ListCACertificatesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListStreamsRequest(input *ListStreamsInput) (req *request.Request, output *ListStreamsOutput) { +func (c *IoT) ListCACertificatesRequest(input *ListCACertificatesInput) (req *request.Request, output *ListCACertificatesOutput) { op := &request.Operation{ - Name: opListStreams, + Name: opListCACertificates, HTTPMethod: "GET", - HTTPPath: "/streams", + HTTPPath: "/cacertificates", } if input == nil { - input = &ListStreamsInput{} + input = &ListCACertificatesInput{} } - output = &ListStreamsOutput{} + output = &ListCACertificatesOutput{} req = c.newRequest(op, input, output) return } -// ListStreams API operation for AWS IoT. +// ListCACertificates API operation for AWS IoT. // -// Lists all of the streams in your AWS account. +// Lists the CA certificates registered for your AWS account. +// +// The results are paginated with a default page size of 25. You can use the +// returned marker to retrieve additional results. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListStreams for usage and error information. +// API operation ListCACertificates for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -7776,82 +8671,82 @@ func (c *IoT) ListStreamsRequest(input *ListStreamsInput) (req *request.Request, // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. 
// -func (c *IoT) ListStreams(input *ListStreamsInput) (*ListStreamsOutput, error) { - req, out := c.ListStreamsRequest(input) +func (c *IoT) ListCACertificates(input *ListCACertificatesInput) (*ListCACertificatesOutput, error) { + req, out := c.ListCACertificatesRequest(input) return out, req.Send() } -// ListStreamsWithContext is the same as ListStreams with the addition of +// ListCACertificatesWithContext is the same as ListCACertificates with the addition of // the ability to pass a context and additional request options. // -// See ListStreams for details on how to use this API operation. +// See ListCACertificates for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListStreamsWithContext(ctx aws.Context, input *ListStreamsInput, opts ...request.Option) (*ListStreamsOutput, error) { - req, out := c.ListStreamsRequest(input) +func (c *IoT) ListCACertificatesWithContext(ctx aws.Context, input *ListCACertificatesInput, opts ...request.Option) (*ListCACertificatesOutput, error) { + req, out := c.ListCACertificatesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListTargetsForPolicy = "ListTargetsForPolicy" +const opListCertificates = "ListCertificates" -// ListTargetsForPolicyRequest generates a "aws/request.Request" representing the -// client's request for the ListTargetsForPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListCertificatesRequest generates a "aws/request.Request" representing the +// client's request for the ListCertificates operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListTargetsForPolicy for more information on using the ListTargetsForPolicy +// See ListCertificates for more information on using the ListCertificates // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListTargetsForPolicyRequest method. -// req, resp := client.ListTargetsForPolicyRequest(params) +// // Example sending a request using the ListCertificatesRequest method. 
+// req, resp := client.ListCertificatesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListTargetsForPolicyRequest(input *ListTargetsForPolicyInput) (req *request.Request, output *ListTargetsForPolicyOutput) { +func (c *IoT) ListCertificatesRequest(input *ListCertificatesInput) (req *request.Request, output *ListCertificatesOutput) { op := &request.Operation{ - Name: opListTargetsForPolicy, - HTTPMethod: "POST", - HTTPPath: "/policy-targets/{policyName}", + Name: opListCertificates, + HTTPMethod: "GET", + HTTPPath: "/certificates", } if input == nil { - input = &ListTargetsForPolicyInput{} + input = &ListCertificatesInput{} } - output = &ListTargetsForPolicyOutput{} + output = &ListCertificatesOutput{} req = c.newRequest(op, input, output) return } -// ListTargetsForPolicy API operation for AWS IoT. +// ListCertificates API operation for AWS IoT. // -// List targets for the specified policy. +// Lists the certificates registered in your AWS account. +// +// The results are paginated with a default page size of 25. You can use the +// returned marker to retrieve additional results. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListTargetsForPolicy for usage and error information. +// API operation ListCertificates for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -7867,505 +8762,508 @@ func (c *IoT) ListTargetsForPolicyRequest(input *ListTargetsForPolicyInput) (req // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. -// -func (c *IoT) ListTargetsForPolicy(input *ListTargetsForPolicyInput) (*ListTargetsForPolicyOutput, error) { - req, out := c.ListTargetsForPolicyRequest(input) +func (c *IoT) ListCertificates(input *ListCertificatesInput) (*ListCertificatesOutput, error) { + req, out := c.ListCertificatesRequest(input) return out, req.Send() } -// ListTargetsForPolicyWithContext is the same as ListTargetsForPolicy with the addition of +// ListCertificatesWithContext is the same as ListCertificates with the addition of // the ability to pass a context and additional request options. // -// See ListTargetsForPolicy for details on how to use this API operation. +// See ListCertificates for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
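Editorial aside (not part of the vendored file): the `ListCertificates` documentation above notes a default page size of 25 and a returned marker for additional results. A sketch of walking every page under the same region/credential assumptions as the earlier sketches; the `PageSize`, `Marker`, `NextMarker`, and `Certificates` field names follow the SDK's usual marker-paginated shape and are not shown in this hunk, so treat them as assumptions.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	client := iot.New(sess)

	input := &iot.ListCertificatesInput{PageSize: aws.Int64(25)}
	for {
		out, err := client.ListCertificates(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, cert := range out.Certificates {
			fmt.Println(aws.StringValue(cert.CertificateId), aws.StringValue(cert.Status))
		}
		// An empty marker indicates the last page has been returned.
		if aws.StringValue(out.NextMarker) == "" {
			break
		}
		input.Marker = out.NextMarker
	}
}
```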
-func (c *IoT) ListTargetsForPolicyWithContext(ctx aws.Context, input *ListTargetsForPolicyInput, opts ...request.Option) (*ListTargetsForPolicyOutput, error) { - req, out := c.ListTargetsForPolicyRequest(input) +func (c *IoT) ListCertificatesWithContext(ctx aws.Context, input *ListCertificatesInput, opts ...request.Option) (*ListCertificatesOutput, error) { + req, out := c.ListCertificatesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListThingGroups = "ListThingGroups" +const opListCertificatesByCA = "ListCertificatesByCA" -// ListThingGroupsRequest generates a "aws/request.Request" representing the -// client's request for the ListThingGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListCertificatesByCARequest generates a "aws/request.Request" representing the +// client's request for the ListCertificatesByCA operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListThingGroups for more information on using the ListThingGroups +// See ListCertificatesByCA for more information on using the ListCertificatesByCA // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListThingGroupsRequest method. -// req, resp := client.ListThingGroupsRequest(params) +// // Example sending a request using the ListCertificatesByCARequest method. +// req, resp := client.ListCertificatesByCARequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListThingGroupsRequest(input *ListThingGroupsInput) (req *request.Request, output *ListThingGroupsOutput) { +func (c *IoT) ListCertificatesByCARequest(input *ListCertificatesByCAInput) (req *request.Request, output *ListCertificatesByCAOutput) { op := &request.Operation{ - Name: opListThingGroups, + Name: opListCertificatesByCA, HTTPMethod: "GET", - HTTPPath: "/thing-groups", + HTTPPath: "/certificates-by-ca/{caCertificateId}", } if input == nil { - input = &ListThingGroupsInput{} + input = &ListCertificatesByCAInput{} } - output = &ListThingGroupsOutput{} + output = &ListCertificatesByCAOutput{} req = c.newRequest(op, input, output) return } -// ListThingGroups API operation for AWS IoT. +// ListCertificatesByCA API operation for AWS IoT. // -// List the thing groups in your account. +// List the device certificates signed by the specified CA certificate. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListThingGroups for usage and error information. +// API operation ListCertificatesByCA for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. 
+// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) ListThingGroups(input *ListThingGroupsInput) (*ListThingGroupsOutput, error) { - req, out := c.ListThingGroupsRequest(input) +func (c *IoT) ListCertificatesByCA(input *ListCertificatesByCAInput) (*ListCertificatesByCAOutput, error) { + req, out := c.ListCertificatesByCARequest(input) return out, req.Send() } -// ListThingGroupsWithContext is the same as ListThingGroups with the addition of +// ListCertificatesByCAWithContext is the same as ListCertificatesByCA with the addition of // the ability to pass a context and additional request options. // -// See ListThingGroups for details on how to use this API operation. +// See ListCertificatesByCA for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListThingGroupsWithContext(ctx aws.Context, input *ListThingGroupsInput, opts ...request.Option) (*ListThingGroupsOutput, error) { - req, out := c.ListThingGroupsRequest(input) +func (c *IoT) ListCertificatesByCAWithContext(ctx aws.Context, input *ListCertificatesByCAInput, opts ...request.Option) (*ListCertificatesByCAOutput, error) { + req, out := c.ListCertificatesByCARequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListThingGroupsForThing = "ListThingGroupsForThing" +const opListIndices = "ListIndices" -// ListThingGroupsForThingRequest generates a "aws/request.Request" representing the -// client's request for the ListThingGroupsForThing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListIndicesRequest generates a "aws/request.Request" representing the +// client's request for the ListIndices operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListThingGroupsForThing for more information on using the ListThingGroupsForThing +// See ListIndices for more information on using the ListIndices // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListThingGroupsForThingRequest method. -// req, resp := client.ListThingGroupsForThingRequest(params) +// // Example sending a request using the ListIndicesRequest method. 
+// req, resp := client.ListIndicesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListThingGroupsForThingRequest(input *ListThingGroupsForThingInput) (req *request.Request, output *ListThingGroupsForThingOutput) { +func (c *IoT) ListIndicesRequest(input *ListIndicesInput) (req *request.Request, output *ListIndicesOutput) { op := &request.Operation{ - Name: opListThingGroupsForThing, + Name: opListIndices, HTTPMethod: "GET", - HTTPPath: "/things/{thingName}/thing-groups", + HTTPPath: "/indices", } if input == nil { - input = &ListThingGroupsForThingInput{} + input = &ListIndicesInput{} } - output = &ListThingGroupsForThingOutput{} + output = &ListIndicesOutput{} req = c.newRequest(op, input, output) return } -// ListThingGroupsForThing API operation for AWS IoT. +// ListIndices API operation for AWS IoT. // -// List the thing groups to which the specified thing belongs. +// Lists the search indices. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListThingGroupsForThing for usage and error information. +// API operation ListIndices for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) ListThingGroupsForThing(input *ListThingGroupsForThingInput) (*ListThingGroupsForThingOutput, error) { - req, out := c.ListThingGroupsForThingRequest(input) +func (c *IoT) ListIndices(input *ListIndicesInput) (*ListIndicesOutput, error) { + req, out := c.ListIndicesRequest(input) return out, req.Send() } -// ListThingGroupsForThingWithContext is the same as ListThingGroupsForThing with the addition of +// ListIndicesWithContext is the same as ListIndices with the addition of // the ability to pass a context and additional request options. // -// See ListThingGroupsForThing for details on how to use this API operation. +// See ListIndices for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListThingGroupsForThingWithContext(ctx aws.Context, input *ListThingGroupsForThingInput, opts ...request.Option) (*ListThingGroupsForThingOutput, error) { - req, out := c.ListThingGroupsForThingRequest(input) +func (c *IoT) ListIndicesWithContext(ctx aws.Context, input *ListIndicesInput, opts ...request.Option) (*ListIndicesOutput, error) { + req, out := c.ListIndicesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opListThingPrincipals = "ListThingPrincipals" +const opListJobExecutionsForJob = "ListJobExecutionsForJob" -// ListThingPrincipalsRequest generates a "aws/request.Request" representing the -// client's request for the ListThingPrincipals operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListJobExecutionsForJobRequest generates a "aws/request.Request" representing the +// client's request for the ListJobExecutionsForJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListThingPrincipals for more information on using the ListThingPrincipals +// See ListJobExecutionsForJob for more information on using the ListJobExecutionsForJob // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListThingPrincipalsRequest method. -// req, resp := client.ListThingPrincipalsRequest(params) +// // Example sending a request using the ListJobExecutionsForJobRequest method. +// req, resp := client.ListJobExecutionsForJobRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListThingPrincipalsRequest(input *ListThingPrincipalsInput) (req *request.Request, output *ListThingPrincipalsOutput) { +func (c *IoT) ListJobExecutionsForJobRequest(input *ListJobExecutionsForJobInput) (req *request.Request, output *ListJobExecutionsForJobOutput) { op := &request.Operation{ - Name: opListThingPrincipals, + Name: opListJobExecutionsForJob, HTTPMethod: "GET", - HTTPPath: "/things/{thingName}/principals", + HTTPPath: "/jobs/{jobId}/things", } if input == nil { - input = &ListThingPrincipalsInput{} + input = &ListJobExecutionsForJobInput{} } - output = &ListThingPrincipalsOutput{} + output = &ListJobExecutionsForJobOutput{} req = c.newRequest(op, input, output) return } -// ListThingPrincipals API operation for AWS IoT. +// ListJobExecutionsForJob API operation for AWS IoT. // -// Lists the principals associated with the specified thing. +// Lists the job executions for a job. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListThingPrincipals for usage and error information. +// API operation ListJobExecutionsForJob for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. 
-// -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) ListThingPrincipals(input *ListThingPrincipalsInput) (*ListThingPrincipalsOutput, error) { - req, out := c.ListThingPrincipalsRequest(input) +func (c *IoT) ListJobExecutionsForJob(input *ListJobExecutionsForJobInput) (*ListJobExecutionsForJobOutput, error) { + req, out := c.ListJobExecutionsForJobRequest(input) return out, req.Send() } -// ListThingPrincipalsWithContext is the same as ListThingPrincipals with the addition of +// ListJobExecutionsForJobWithContext is the same as ListJobExecutionsForJob with the addition of // the ability to pass a context and additional request options. // -// See ListThingPrincipals for details on how to use this API operation. +// See ListJobExecutionsForJob for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListThingPrincipalsWithContext(ctx aws.Context, input *ListThingPrincipalsInput, opts ...request.Option) (*ListThingPrincipalsOutput, error) { - req, out := c.ListThingPrincipalsRequest(input) +func (c *IoT) ListJobExecutionsForJobWithContext(ctx aws.Context, input *ListJobExecutionsForJobInput, opts ...request.Option) (*ListJobExecutionsForJobOutput, error) { + req, out := c.ListJobExecutionsForJobRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListThingRegistrationTaskReports = "ListThingRegistrationTaskReports" +const opListJobExecutionsForThing = "ListJobExecutionsForThing" -// ListThingRegistrationTaskReportsRequest generates a "aws/request.Request" representing the -// client's request for the ListThingRegistrationTaskReports operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListJobExecutionsForThingRequest generates a "aws/request.Request" representing the +// client's request for the ListJobExecutionsForThing operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListThingRegistrationTaskReports for more information on using the ListThingRegistrationTaskReports +// See ListJobExecutionsForThing for more information on using the ListJobExecutionsForThing // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListThingRegistrationTaskReportsRequest method. -// req, resp := client.ListThingRegistrationTaskReportsRequest(params) +// // Example sending a request using the ListJobExecutionsForThingRequest method. 
+// req, resp := client.ListJobExecutionsForThingRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListThingRegistrationTaskReportsRequest(input *ListThingRegistrationTaskReportsInput) (req *request.Request, output *ListThingRegistrationTaskReportsOutput) { +func (c *IoT) ListJobExecutionsForThingRequest(input *ListJobExecutionsForThingInput) (req *request.Request, output *ListJobExecutionsForThingOutput) { op := &request.Operation{ - Name: opListThingRegistrationTaskReports, + Name: opListJobExecutionsForThing, HTTPMethod: "GET", - HTTPPath: "/thing-registration-tasks/{taskId}/reports", + HTTPPath: "/things/{thingName}/jobs", } if input == nil { - input = &ListThingRegistrationTaskReportsInput{} + input = &ListJobExecutionsForThingInput{} } - output = &ListThingRegistrationTaskReportsOutput{} + output = &ListJobExecutionsForThingOutput{} req = c.newRequest(op, input, output) return } -// ListThingRegistrationTaskReports API operation for AWS IoT. +// ListJobExecutionsForThing API operation for AWS IoT. // -// Information about the thing registration tasks. +// Lists the job executions for the specified thing. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListThingRegistrationTaskReports for usage and error information. +// API operation ListJobExecutionsForThing for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. // -func (c *IoT) ListThingRegistrationTaskReports(input *ListThingRegistrationTaskReportsInput) (*ListThingRegistrationTaskReportsOutput, error) { - req, out := c.ListThingRegistrationTaskReportsRequest(input) +func (c *IoT) ListJobExecutionsForThing(input *ListJobExecutionsForThingInput) (*ListJobExecutionsForThingOutput, error) { + req, out := c.ListJobExecutionsForThingRequest(input) return out, req.Send() } -// ListThingRegistrationTaskReportsWithContext is the same as ListThingRegistrationTaskReports with the addition of +// ListJobExecutionsForThingWithContext is the same as ListJobExecutionsForThing with the addition of // the ability to pass a context and additional request options. // -// See ListThingRegistrationTaskReports for details on how to use this API operation. +// See ListJobExecutionsForThing for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) ListThingRegistrationTaskReportsWithContext(ctx aws.Context, input *ListThingRegistrationTaskReportsInput, opts ...request.Option) (*ListThingRegistrationTaskReportsOutput, error) { - req, out := c.ListThingRegistrationTaskReportsRequest(input) +func (c *IoT) ListJobExecutionsForThingWithContext(ctx aws.Context, input *ListJobExecutionsForThingInput, opts ...request.Option) (*ListJobExecutionsForThingOutput, error) { + req, out := c.ListJobExecutionsForThingRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListThingRegistrationTasks = "ListThingRegistrationTasks" +const opListJobs = "ListJobs" -// ListThingRegistrationTasksRequest generates a "aws/request.Request" representing the -// client's request for the ListThingRegistrationTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListJobsRequest generates a "aws/request.Request" representing the +// client's request for the ListJobs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListThingRegistrationTasks for more information on using the ListThingRegistrationTasks +// See ListJobs for more information on using the ListJobs // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListThingRegistrationTasksRequest method. -// req, resp := client.ListThingRegistrationTasksRequest(params) +// // Example sending a request using the ListJobsRequest method. +// req, resp := client.ListJobsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListThingRegistrationTasksRequest(input *ListThingRegistrationTasksInput) (req *request.Request, output *ListThingRegistrationTasksOutput) { +func (c *IoT) ListJobsRequest(input *ListJobsInput) (req *request.Request, output *ListJobsOutput) { op := &request.Operation{ - Name: opListThingRegistrationTasks, + Name: opListJobs, HTTPMethod: "GET", - HTTPPath: "/thing-registration-tasks", + HTTPPath: "/jobs", } if input == nil { - input = &ListThingRegistrationTasksInput{} + input = &ListJobsInput{} } - output = &ListThingRegistrationTasksOutput{} + output = &ListJobsOutput{} req = c.newRequest(op, input, output) return } -// ListThingRegistrationTasks API operation for AWS IoT. +// ListJobs API operation for AWS IoT. // -// List bulk thing provisioning tasks. +// Lists jobs. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListThingRegistrationTasks for usage and error information. +// API operation ListJobs for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. 
// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. // -func (c *IoT) ListThingRegistrationTasks(input *ListThingRegistrationTasksInput) (*ListThingRegistrationTasksOutput, error) { - req, out := c.ListThingRegistrationTasksRequest(input) +func (c *IoT) ListJobs(input *ListJobsInput) (*ListJobsOutput, error) { + req, out := c.ListJobsRequest(input) return out, req.Send() } -// ListThingRegistrationTasksWithContext is the same as ListThingRegistrationTasks with the addition of +// ListJobsWithContext is the same as ListJobs with the addition of // the ability to pass a context and additional request options. // -// See ListThingRegistrationTasks for details on how to use this API operation. +// See ListJobs for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListThingRegistrationTasksWithContext(ctx aws.Context, input *ListThingRegistrationTasksInput, opts ...request.Option) (*ListThingRegistrationTasksOutput, error) { - req, out := c.ListThingRegistrationTasksRequest(input) +func (c *IoT) ListJobsWithContext(ctx aws.Context, input *ListJobsInput, opts ...request.Option) (*ListJobsOutput, error) { + req, out := c.ListJobsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListThingTypes = "ListThingTypes" +const opListOTAUpdates = "ListOTAUpdates" -// ListThingTypesRequest generates a "aws/request.Request" representing the -// client's request for the ListThingTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListOTAUpdatesRequest generates a "aws/request.Request" representing the +// client's request for the ListOTAUpdates operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListThingTypes for more information on using the ListThingTypes +// See ListOTAUpdates for more information on using the ListOTAUpdates // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListThingTypesRequest method. -// req, resp := client.ListThingTypesRequest(params) +// // Example sending a request using the ListOTAUpdatesRequest method. 
+// req, resp := client.ListOTAUpdatesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListThingTypesRequest(input *ListThingTypesInput) (req *request.Request, output *ListThingTypesOutput) { +func (c *IoT) ListOTAUpdatesRequest(input *ListOTAUpdatesInput) (req *request.Request, output *ListOTAUpdatesOutput) { op := &request.Operation{ - Name: opListThingTypes, + Name: opListOTAUpdates, HTTPMethod: "GET", - HTTPPath: "/thing-types", + HTTPPath: "/otaUpdates", } if input == nil { - input = &ListThingTypesInput{} + input = &ListOTAUpdatesInput{} } - output = &ListThingTypesOutput{} + output = &ListOTAUpdatesOutput{} req = c.newRequest(op, input, output) return } -// ListThingTypes API operation for AWS IoT. +// ListOTAUpdates API operation for AWS IoT. // -// Lists the existing thing types. +// Lists OTA updates. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListThingTypes for usage and error information. +// API operation ListOTAUpdates for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -8377,86 +9275,83 @@ func (c *IoT) ListThingTypesRequest(input *ListThingTypesInput) (req *request.Re // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) ListThingTypes(input *ListThingTypesInput) (*ListThingTypesOutput, error) { - req, out := c.ListThingTypesRequest(input) +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +func (c *IoT) ListOTAUpdates(input *ListOTAUpdatesInput) (*ListOTAUpdatesOutput, error) { + req, out := c.ListOTAUpdatesRequest(input) return out, req.Send() } -// ListThingTypesWithContext is the same as ListThingTypes with the addition of +// ListOTAUpdatesWithContext is the same as ListOTAUpdates with the addition of // the ability to pass a context and additional request options. // -// See ListThingTypes for details on how to use this API operation. +// See ListOTAUpdates for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListThingTypesWithContext(ctx aws.Context, input *ListThingTypesInput, opts ...request.Option) (*ListThingTypesOutput, error) { - req, out := c.ListThingTypesRequest(input) +func (c *IoT) ListOTAUpdatesWithContext(ctx aws.Context, input *ListOTAUpdatesInput, opts ...request.Option) (*ListOTAUpdatesOutput, error) { + req, out := c.ListOTAUpdatesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListThings = "ListThings" +const opListOutgoingCertificates = "ListOutgoingCertificates" -// ListThingsRequest generates a "aws/request.Request" representing the -// client's request for the ListThings operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListOutgoingCertificatesRequest generates a "aws/request.Request" representing the +// client's request for the ListOutgoingCertificates operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListThings for more information on using the ListThings +// See ListOutgoingCertificates for more information on using the ListOutgoingCertificates // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListThingsRequest method. -// req, resp := client.ListThingsRequest(params) +// // Example sending a request using the ListOutgoingCertificatesRequest method. +// req, resp := client.ListOutgoingCertificatesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListThingsRequest(input *ListThingsInput) (req *request.Request, output *ListThingsOutput) { +func (c *IoT) ListOutgoingCertificatesRequest(input *ListOutgoingCertificatesInput) (req *request.Request, output *ListOutgoingCertificatesOutput) { op := &request.Operation{ - Name: opListThings, + Name: opListOutgoingCertificates, HTTPMethod: "GET", - HTTPPath: "/things", + HTTPPath: "/certificates-out-going", } if input == nil { - input = &ListThingsInput{} + input = &ListOutgoingCertificatesInput{} } - output = &ListThingsOutput{} + output = &ListOutgoingCertificatesOutput{} req = c.newRequest(op, input, output) return } -// ListThings API operation for AWS IoT. +// ListOutgoingCertificates API operation for AWS IoT. // -// Lists your things. Use the attributeName and attributeValue parameters to -// filter your things. For example, calling ListThings with attributeName=Color -// and attributeValue=Red retrieves all things in the registry that contain -// an attribute Color with the value Red. +// Lists certificates that are being transferred but not yet accepted. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListThings for usage and error information. +// API operation ListOutgoingCertificates for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -8474,352 +9369,377 @@ func (c *IoT) ListThingsRequest(input *ListThingsInput) (req *request.Request, o // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. 
// -func (c *IoT) ListThings(input *ListThingsInput) (*ListThingsOutput, error) { - req, out := c.ListThingsRequest(input) +func (c *IoT) ListOutgoingCertificates(input *ListOutgoingCertificatesInput) (*ListOutgoingCertificatesOutput, error) { + req, out := c.ListOutgoingCertificatesRequest(input) return out, req.Send() } -// ListThingsWithContext is the same as ListThings with the addition of +// ListOutgoingCertificatesWithContext is the same as ListOutgoingCertificates with the addition of // the ability to pass a context and additional request options. // -// See ListThings for details on how to use this API operation. +// See ListOutgoingCertificates for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListThingsWithContext(ctx aws.Context, input *ListThingsInput, opts ...request.Option) (*ListThingsOutput, error) { - req, out := c.ListThingsRequest(input) +func (c *IoT) ListOutgoingCertificatesWithContext(ctx aws.Context, input *ListOutgoingCertificatesInput, opts ...request.Option) (*ListOutgoingCertificatesOutput, error) { + req, out := c.ListOutgoingCertificatesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListThingsInThingGroup = "ListThingsInThingGroup" +const opListPolicies = "ListPolicies" -// ListThingsInThingGroupRequest generates a "aws/request.Request" representing the -// client's request for the ListThingsInThingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListThingsInThingGroup for more information on using the ListThingsInThingGroup +// See ListPolicies for more information on using the ListPolicies // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListThingsInThingGroupRequest method. -// req, resp := client.ListThingsInThingGroupRequest(params) +// // Example sending a request using the ListPoliciesRequest method. 
+// req, resp := client.ListPoliciesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListThingsInThingGroupRequest(input *ListThingsInThingGroupInput) (req *request.Request, output *ListThingsInThingGroupOutput) { +func (c *IoT) ListPoliciesRequest(input *ListPoliciesInput) (req *request.Request, output *ListPoliciesOutput) { op := &request.Operation{ - Name: opListThingsInThingGroup, + Name: opListPolicies, HTTPMethod: "GET", - HTTPPath: "/thing-groups/{thingGroupName}/things", + HTTPPath: "/policies", } if input == nil { - input = &ListThingsInThingGroupInput{} + input = &ListPoliciesInput{} } - output = &ListThingsInThingGroupOutput{} + output = &ListPoliciesOutput{} req = c.newRequest(op, input, output) return } -// ListThingsInThingGroup API operation for AWS IoT. +// ListPolicies API operation for AWS IoT. // -// Lists the things in the specified group. +// Lists your policies. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListThingsInThingGroup for usage and error information. +// API operation ListPolicies for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) ListThingsInThingGroup(input *ListThingsInThingGroupInput) (*ListThingsInThingGroupOutput, error) { - req, out := c.ListThingsInThingGroupRequest(input) +func (c *IoT) ListPolicies(input *ListPoliciesInput) (*ListPoliciesOutput, error) { + req, out := c.ListPoliciesRequest(input) return out, req.Send() } -// ListThingsInThingGroupWithContext is the same as ListThingsInThingGroup with the addition of +// ListPoliciesWithContext is the same as ListPolicies with the addition of // the ability to pass a context and additional request options. // -// See ListThingsInThingGroup for details on how to use this API operation. +// See ListPolicies for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListThingsInThingGroupWithContext(ctx aws.Context, input *ListThingsInThingGroupInput, opts ...request.Option) (*ListThingsInThingGroupOutput, error) { - req, out := c.ListThingsInThingGroupRequest(input) +func (c *IoT) ListPoliciesWithContext(ctx aws.Context, input *ListPoliciesInput, opts ...request.Option) (*ListPoliciesOutput, error) { + req, out := c.ListPoliciesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opListTopicRules = "ListTopicRules" +const opListPolicyPrincipals = "ListPolicyPrincipals" -// ListTopicRulesRequest generates a "aws/request.Request" representing the -// client's request for the ListTopicRules operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListPolicyPrincipalsRequest generates a "aws/request.Request" representing the +// client's request for the ListPolicyPrincipals operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListTopicRules for more information on using the ListTopicRules +// See ListPolicyPrincipals for more information on using the ListPolicyPrincipals // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListTopicRulesRequest method. -// req, resp := client.ListTopicRulesRequest(params) +// // Example sending a request using the ListPolicyPrincipalsRequest method. +// req, resp := client.ListPolicyPrincipalsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListTopicRulesRequest(input *ListTopicRulesInput) (req *request.Request, output *ListTopicRulesOutput) { +// +// Deprecated: ListPolicyPrincipals has been deprecated +func (c *IoT) ListPolicyPrincipalsRequest(input *ListPolicyPrincipalsInput) (req *request.Request, output *ListPolicyPrincipalsOutput) { + if c.Client.Config.Logger != nil { + c.Client.Config.Logger.Log("This operation, ListPolicyPrincipals, has been deprecated") + } op := &request.Operation{ - Name: opListTopicRules, + Name: opListPolicyPrincipals, HTTPMethod: "GET", - HTTPPath: "/rules", + HTTPPath: "/policy-principals", } if input == nil { - input = &ListTopicRulesInput{} + input = &ListPolicyPrincipalsInput{} } - output = &ListTopicRulesOutput{} + output = &ListPolicyPrincipalsOutput{} req = c.newRequest(op, input, output) return } -// ListTopicRules API operation for AWS IoT. +// ListPolicyPrincipals API operation for AWS IoT. // -// Lists the rules for the specific topic. +// Lists the principals associated with the specified policy. +// +// Note: This API is deprecated. Please use ListTargetsForPolicy instead. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListTopicRules for usage and error information. +// API operation ListPolicyPrincipals for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. 
+// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -func (c *IoT) ListTopicRules(input *ListTopicRulesInput) (*ListTopicRulesOutput, error) { - req, out := c.ListTopicRulesRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// +// Deprecated: ListPolicyPrincipals has been deprecated +func (c *IoT) ListPolicyPrincipals(input *ListPolicyPrincipalsInput) (*ListPolicyPrincipalsOutput, error) { + req, out := c.ListPolicyPrincipalsRequest(input) return out, req.Send() } -// ListTopicRulesWithContext is the same as ListTopicRules with the addition of +// ListPolicyPrincipalsWithContext is the same as ListPolicyPrincipals with the addition of // the ability to pass a context and additional request options. // -// See ListTopicRules for details on how to use this API operation. +// See ListPolicyPrincipals for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ListTopicRulesWithContext(ctx aws.Context, input *ListTopicRulesInput, opts ...request.Option) (*ListTopicRulesOutput, error) { - req, out := c.ListTopicRulesRequest(input) +// +// Deprecated: ListPolicyPrincipalsWithContext has been deprecated +func (c *IoT) ListPolicyPrincipalsWithContext(ctx aws.Context, input *ListPolicyPrincipalsInput, opts ...request.Option) (*ListPolicyPrincipalsOutput, error) { + req, out := c.ListPolicyPrincipalsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opListV2LoggingLevels = "ListV2LoggingLevels" +const opListPolicyVersions = "ListPolicyVersions" -// ListV2LoggingLevelsRequest generates a "aws/request.Request" representing the -// client's request for the ListV2LoggingLevels operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListPolicyVersionsRequest generates a "aws/request.Request" representing the +// client's request for the ListPolicyVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListV2LoggingLevels for more information on using the ListV2LoggingLevels +// See ListPolicyVersions for more information on using the ListPolicyVersions // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListV2LoggingLevelsRequest method. -// req, resp := client.ListV2LoggingLevelsRequest(params) +// // Example sending a request using the ListPolicyVersionsRequest method. 
+// req, resp := client.ListPolicyVersionsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ListV2LoggingLevelsRequest(input *ListV2LoggingLevelsInput) (req *request.Request, output *ListV2LoggingLevelsOutput) { +func (c *IoT) ListPolicyVersionsRequest(input *ListPolicyVersionsInput) (req *request.Request, output *ListPolicyVersionsOutput) { op := &request.Operation{ - Name: opListV2LoggingLevels, + Name: opListPolicyVersions, HTTPMethod: "GET", - HTTPPath: "/v2LoggingLevel", + HTTPPath: "/policies/{policyName}/version", } if input == nil { - input = &ListV2LoggingLevelsInput{} + input = &ListPolicyVersionsInput{} } - output = &ListV2LoggingLevelsOutput{} + output = &ListPolicyVersionsOutput{} req = c.newRequest(op, input, output) return } -// ListV2LoggingLevels API operation for AWS IoT. +// ListPolicyVersions API operation for AWS IoT. // -// Lists logging levels. +// Lists the versions of the specified policy and identifies the default version. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ListV2LoggingLevels for usage and error information. +// API operation ListPolicyVersions for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// -// * ErrCodeNotConfiguredException "NotConfiguredException" -// The resource is not configured. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -func (c *IoT) ListV2LoggingLevels(input *ListV2LoggingLevelsInput) (*ListV2LoggingLevelsOutput, error) { - req, out := c.ListV2LoggingLevelsRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) ListPolicyVersions(input *ListPolicyVersionsInput) (*ListPolicyVersionsOutput, error) { + req, out := c.ListPolicyVersionsRequest(input) return out, req.Send() } -// ListV2LoggingLevelsWithContext is the same as ListV2LoggingLevels with the addition of +// ListPolicyVersionsWithContext is the same as ListPolicyVersions with the addition of // the ability to pass a context and additional request options. // -// See ListV2LoggingLevels for details on how to use this API operation. +// See ListPolicyVersions for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) ListV2LoggingLevelsWithContext(ctx aws.Context, input *ListV2LoggingLevelsInput, opts ...request.Option) (*ListV2LoggingLevelsOutput, error) { - req, out := c.ListV2LoggingLevelsRequest(input) +func (c *IoT) ListPolicyVersionsWithContext(ctx aws.Context, input *ListPolicyVersionsInput, opts ...request.Option) (*ListPolicyVersionsOutput, error) { + req, out := c.ListPolicyVersionsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRegisterCACertificate = "RegisterCACertificate" +const opListPrincipalPolicies = "ListPrincipalPolicies" -// RegisterCACertificateRequest generates a "aws/request.Request" representing the -// client's request for the RegisterCACertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListPrincipalPoliciesRequest generates a "aws/request.Request" representing the +// client's request for the ListPrincipalPolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RegisterCACertificate for more information on using the RegisterCACertificate +// See ListPrincipalPolicies for more information on using the ListPrincipalPolicies // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RegisterCACertificateRequest method. -// req, resp := client.RegisterCACertificateRequest(params) +// // Example sending a request using the ListPrincipalPoliciesRequest method. +// req, resp := client.ListPrincipalPoliciesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) RegisterCACertificateRequest(input *RegisterCACertificateInput) (req *request.Request, output *RegisterCACertificateOutput) { +// +// Deprecated: ListPrincipalPolicies has been deprecated +func (c *IoT) ListPrincipalPoliciesRequest(input *ListPrincipalPoliciesInput) (req *request.Request, output *ListPrincipalPoliciesOutput) { + if c.Client.Config.Logger != nil { + c.Client.Config.Logger.Log("This operation, ListPrincipalPolicies, has been deprecated") + } op := &request.Operation{ - Name: opRegisterCACertificate, - HTTPMethod: "POST", - HTTPPath: "/cacertificate", + Name: opListPrincipalPolicies, + HTTPMethod: "GET", + HTTPPath: "/principal-policies", } if input == nil { - input = &RegisterCACertificateInput{} + input = &ListPrincipalPoliciesInput{} } - output = &RegisterCACertificateOutput{} + output = &ListPrincipalPoliciesOutput{} req = c.newRequest(op, input, output) return } -// RegisterCACertificate API operation for AWS IoT. +// ListPrincipalPolicies API operation for AWS IoT. // -// Registers a CA certificate with AWS IoT. This CA certificate can then be -// used to sign device certificates, which can be then registered with AWS IoT. -// You can register up to 10 CA certificates per AWS account that have the same -// subject field. This enables you to have up to 10 certificate authorities -// sign your device certificates. 
If you have more than one CA certificate registered, -// make sure you pass the CA certificate when you register your device certificates -// with the RegisterCertificate API. +// Lists the policies attached to the specified principal. If you use an Cognito +// identity, the ID must be in AmazonCognito Identity format (http://docs.aws.amazon.com/cognitoidentity/latest/APIReference/API_GetCredentialsForIdentity.html#API_GetCredentialsForIdentity_RequestSyntax). +// +// Note: This API is deprecated. Please use ListAttachedPolicies instead. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation RegisterCACertificate for usage and error information. +// API operation ListPrincipalPolicies for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" -// The resource already exists. -// -// * ErrCodeRegistrationCodeValidationException "RegistrationCodeValidationException" -// The registration code is invalid. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeCertificateValidationException "CertificateValidationException" -// The certificate is invalid. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. -// // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // @@ -8829,98 +9749,86 @@ func (c *IoT) RegisterCACertificateRequest(input *RegisterCACertificateInput) (r // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) RegisterCACertificate(input *RegisterCACertificateInput) (*RegisterCACertificateOutput, error) { - req, out := c.RegisterCACertificateRequest(input) +// +// Deprecated: ListPrincipalPolicies has been deprecated +func (c *IoT) ListPrincipalPolicies(input *ListPrincipalPoliciesInput) (*ListPrincipalPoliciesOutput, error) { + req, out := c.ListPrincipalPoliciesRequest(input) return out, req.Send() } -// RegisterCACertificateWithContext is the same as RegisterCACertificate with the addition of +// ListPrincipalPoliciesWithContext is the same as ListPrincipalPolicies with the addition of // the ability to pass a context and additional request options. // -// See RegisterCACertificate for details on how to use this API operation. +// See ListPrincipalPolicies for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) RegisterCACertificateWithContext(ctx aws.Context, input *RegisterCACertificateInput, opts ...request.Option) (*RegisterCACertificateOutput, error) { - req, out := c.RegisterCACertificateRequest(input) +// +// Deprecated: ListPrincipalPoliciesWithContext has been deprecated +func (c *IoT) ListPrincipalPoliciesWithContext(ctx aws.Context, input *ListPrincipalPoliciesInput, opts ...request.Option) (*ListPrincipalPoliciesOutput, error) { + req, out := c.ListPrincipalPoliciesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRegisterCertificate = "RegisterCertificate" +const opListPrincipalThings = "ListPrincipalThings" -// RegisterCertificateRequest generates a "aws/request.Request" representing the -// client's request for the RegisterCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListPrincipalThingsRequest generates a "aws/request.Request" representing the +// client's request for the ListPrincipalThings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RegisterCertificate for more information on using the RegisterCertificate +// See ListPrincipalThings for more information on using the ListPrincipalThings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RegisterCertificateRequest method. -// req, resp := client.RegisterCertificateRequest(params) +// // Example sending a request using the ListPrincipalThingsRequest method. +// req, resp := client.ListPrincipalThingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) RegisterCertificateRequest(input *RegisterCertificateInput) (req *request.Request, output *RegisterCertificateOutput) { +func (c *IoT) ListPrincipalThingsRequest(input *ListPrincipalThingsInput) (req *request.Request, output *ListPrincipalThingsOutput) { op := &request.Operation{ - Name: opRegisterCertificate, - HTTPMethod: "POST", - HTTPPath: "/certificate/register", + Name: opListPrincipalThings, + HTTPMethod: "GET", + HTTPPath: "/principals/things", } if input == nil { - input = &RegisterCertificateInput{} + input = &ListPrincipalThingsInput{} } - output = &RegisterCertificateOutput{} + output = &ListPrincipalThingsOutput{} req = c.newRequest(op, input, output) return } -// RegisterCertificate API operation for AWS IoT. +// ListPrincipalThings API operation for AWS IoT. // -// Registers a device certificate with AWS IoT. If you have more than one CA -// certificate that has the same subject field, you must specify the CA certificate -// that was used to sign the device certificate being registered. +// Lists the things associated with the specified principal. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation RegisterCertificate for usage and error information. 
+// API operation ListPrincipalThings for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" -// The resource already exists. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeCertificateValidationException "CertificateValidationException" -// The certificate is invalid. -// -// * ErrCodeCertificateStateException "CertificateStateException" -// The certificate operation is not allowed. -// -// * ErrCodeCertificateConflictException "CertificateConflictException" -// Unable to verify the CA certificate used to sign the device certificate you -// are attempting to register. This is happens when you have registered more -// than one CA certificate that has the same subject field and public key. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // @@ -8933,277 +9841,252 @@ func (c *IoT) RegisterCertificateRequest(input *RegisterCertificateInput) (req * // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) RegisterCertificate(input *RegisterCertificateInput) (*RegisterCertificateOutput, error) { - req, out := c.RegisterCertificateRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) ListPrincipalThings(input *ListPrincipalThingsInput) (*ListPrincipalThingsOutput, error) { + req, out := c.ListPrincipalThingsRequest(input) return out, req.Send() } -// RegisterCertificateWithContext is the same as RegisterCertificate with the addition of +// ListPrincipalThingsWithContext is the same as ListPrincipalThings with the addition of // the ability to pass a context and additional request options. // -// See RegisterCertificate for details on how to use this API operation. +// See ListPrincipalThings for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) RegisterCertificateWithContext(ctx aws.Context, input *RegisterCertificateInput, opts ...request.Option) (*RegisterCertificateOutput, error) { - req, out := c.RegisterCertificateRequest(input) +func (c *IoT) ListPrincipalThingsWithContext(ctx aws.Context, input *ListPrincipalThingsInput, opts ...request.Option) (*ListPrincipalThingsOutput, error) { + req, out := c.ListPrincipalThingsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRegisterThing = "RegisterThing" +const opListRoleAliases = "ListRoleAliases" -// RegisterThingRequest generates a "aws/request.Request" representing the -// client's request for the RegisterThing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListRoleAliasesRequest generates a "aws/request.Request" representing the +// client's request for the ListRoleAliases operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See RegisterThing for more information on using the RegisterThing +// See ListRoleAliases for more information on using the ListRoleAliases // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RegisterThingRequest method. -// req, resp := client.RegisterThingRequest(params) +// // Example sending a request using the ListRoleAliasesRequest method. +// req, resp := client.ListRoleAliasesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) RegisterThingRequest(input *RegisterThingInput) (req *request.Request, output *RegisterThingOutput) { +func (c *IoT) ListRoleAliasesRequest(input *ListRoleAliasesInput) (req *request.Request, output *ListRoleAliasesOutput) { op := &request.Operation{ - Name: opRegisterThing, - HTTPMethod: "POST", - HTTPPath: "/things", + Name: opListRoleAliases, + HTTPMethod: "GET", + HTTPPath: "/role-aliases", } if input == nil { - input = &RegisterThingInput{} + input = &ListRoleAliasesInput{} } - output = &RegisterThingOutput{} + output = &ListRoleAliasesOutput{} req = c.newRequest(op, input, output) return } -// RegisterThing API operation for AWS IoT. +// ListRoleAliases API operation for AWS IoT. // -// Provisions a thing. +// Lists the role aliases registered in your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation RegisterThing for usage and error information. +// API operation ListRoleAliases for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeConflictingResourceUpdateException "ConflictingResourceUpdateException" -// A conflicting resource update exception. This exception is thrown when two -// pending updates cause a conflict. +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. // -// * ErrCodeResourceRegistrationFailureException "ResourceRegistrationFailureException" -// The resource registration failed. +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. // -func (c *IoT) RegisterThing(input *RegisterThingInput) (*RegisterThingOutput, error) { - req, out := c.RegisterThingRequest(input) +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. 
+// +func (c *IoT) ListRoleAliases(input *ListRoleAliasesInput) (*ListRoleAliasesOutput, error) { + req, out := c.ListRoleAliasesRequest(input) return out, req.Send() } -// RegisterThingWithContext is the same as RegisterThing with the addition of +// ListRoleAliasesWithContext is the same as ListRoleAliases with the addition of // the ability to pass a context and additional request options. // -// See RegisterThing for details on how to use this API operation. +// See ListRoleAliases for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) RegisterThingWithContext(ctx aws.Context, input *RegisterThingInput, opts ...request.Option) (*RegisterThingOutput, error) { - req, out := c.RegisterThingRequest(input) +func (c *IoT) ListRoleAliasesWithContext(ctx aws.Context, input *ListRoleAliasesInput, opts ...request.Option) (*ListRoleAliasesOutput, error) { + req, out := c.ListRoleAliasesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRejectCertificateTransfer = "RejectCertificateTransfer" +const opListScheduledAudits = "ListScheduledAudits" -// RejectCertificateTransferRequest generates a "aws/request.Request" representing the -// client's request for the RejectCertificateTransfer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListScheduledAuditsRequest generates a "aws/request.Request" representing the +// client's request for the ListScheduledAudits operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RejectCertificateTransfer for more information on using the RejectCertificateTransfer +// See ListScheduledAudits for more information on using the ListScheduledAudits // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RejectCertificateTransferRequest method. -// req, resp := client.RejectCertificateTransferRequest(params) +// // Example sending a request using the ListScheduledAuditsRequest method. 
+// req, resp := client.ListScheduledAuditsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) RejectCertificateTransferRequest(input *RejectCertificateTransferInput) (req *request.Request, output *RejectCertificateTransferOutput) { +func (c *IoT) ListScheduledAuditsRequest(input *ListScheduledAuditsInput) (req *request.Request, output *ListScheduledAuditsOutput) { op := &request.Operation{ - Name: opRejectCertificateTransfer, - HTTPMethod: "PATCH", - HTTPPath: "/reject-certificate-transfer/{certificateId}", + Name: opListScheduledAudits, + HTTPMethod: "GET", + HTTPPath: "/audit/scheduledaudits", } if input == nil { - input = &RejectCertificateTransferInput{} + input = &ListScheduledAuditsInput{} } - output = &RejectCertificateTransferOutput{} + output = &ListScheduledAuditsOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// RejectCertificateTransfer API operation for AWS IoT. -// -// Rejects a pending certificate transfer. After AWS IoT rejects a certificate -// transfer, the certificate status changes from PENDING_TRANSFER to INACTIVE. +// ListScheduledAudits API operation for AWS IoT. // -// To check for pending certificate transfers, call ListCertificates to enumerate -// your certificates. -// -// This operation can only be called by the transfer destination. After it is -// called, the certificate will be returned to the source's account in the INACTIVE -// state. +// Lists all of your scheduled audits. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation RejectCertificateTransfer for usage and error information. +// API operation ListScheduledAudits for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -// * ErrCodeTransferAlreadyCompletedException "TransferAlreadyCompletedException" -// You can't revert the certificate transfer because the transfer is already -// complete. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) RejectCertificateTransfer(input *RejectCertificateTransferInput) (*RejectCertificateTransferOutput, error) { - req, out := c.RejectCertificateTransferRequest(input) +func (c *IoT) ListScheduledAudits(input *ListScheduledAuditsInput) (*ListScheduledAuditsOutput, error) { + req, out := c.ListScheduledAuditsRequest(input) return out, req.Send() } -// RejectCertificateTransferWithContext is the same as RejectCertificateTransfer with the addition of +// ListScheduledAuditsWithContext is the same as ListScheduledAudits with the addition of // the ability to pass a context and additional request options. 
// -// See RejectCertificateTransfer for details on how to use this API operation. +// See ListScheduledAudits for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) RejectCertificateTransferWithContext(ctx aws.Context, input *RejectCertificateTransferInput, opts ...request.Option) (*RejectCertificateTransferOutput, error) { - req, out := c.RejectCertificateTransferRequest(input) +func (c *IoT) ListScheduledAuditsWithContext(ctx aws.Context, input *ListScheduledAuditsInput, opts ...request.Option) (*ListScheduledAuditsOutput, error) { + req, out := c.ListScheduledAuditsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRemoveThingFromThingGroup = "RemoveThingFromThingGroup" +const opListSecurityProfiles = "ListSecurityProfiles" -// RemoveThingFromThingGroupRequest generates a "aws/request.Request" representing the -// client's request for the RemoveThingFromThingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListSecurityProfilesRequest generates a "aws/request.Request" representing the +// client's request for the ListSecurityProfiles operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RemoveThingFromThingGroup for more information on using the RemoveThingFromThingGroup +// See ListSecurityProfiles for more information on using the ListSecurityProfiles // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RemoveThingFromThingGroupRequest method. -// req, resp := client.RemoveThingFromThingGroupRequest(params) +// // Example sending a request using the ListSecurityProfilesRequest method. +// req, resp := client.ListSecurityProfilesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) RemoveThingFromThingGroupRequest(input *RemoveThingFromThingGroupInput) (req *request.Request, output *RemoveThingFromThingGroupOutput) { +func (c *IoT) ListSecurityProfilesRequest(input *ListSecurityProfilesInput) (req *request.Request, output *ListSecurityProfilesOutput) { op := &request.Operation{ - Name: opRemoveThingFromThingGroup, - HTTPMethod: "PUT", - HTTPPath: "/thing-groups/removeThingFromThingGroup", + Name: opListSecurityProfiles, + HTTPMethod: "GET", + HTTPPath: "/security-profiles", } if input == nil { - input = &RemoveThingFromThingGroupInput{} + input = &ListSecurityProfilesInput{} } - output = &RemoveThingFromThingGroupOutput{} + output = &ListSecurityProfilesOutput{} req = c.newRequest(op, input, output) return } -// RemoveThingFromThingGroup API operation for AWS IoT. +// ListSecurityProfiles API operation for AWS IoT. // -// Remove the specified thing from the specified group. +// Lists the Device Defender security profiles you have created. 
You can use +// filters to list only those security profiles associated with a thing group +// or only those associated with your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation RemoveThingFromThingGroup for usage and error information. +// API operation ListSecurityProfiles for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -9215,172 +10098,162 @@ func (c *IoT) RemoveThingFromThingGroupRequest(input *RemoveThingFromThingGroupI // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) RemoveThingFromThingGroup(input *RemoveThingFromThingGroupInput) (*RemoveThingFromThingGroupOutput, error) { - req, out := c.RemoveThingFromThingGroupRequest(input) +func (c *IoT) ListSecurityProfiles(input *ListSecurityProfilesInput) (*ListSecurityProfilesOutput, error) { + req, out := c.ListSecurityProfilesRequest(input) return out, req.Send() } -// RemoveThingFromThingGroupWithContext is the same as RemoveThingFromThingGroup with the addition of +// ListSecurityProfilesWithContext is the same as ListSecurityProfiles with the addition of // the ability to pass a context and additional request options. // -// See RemoveThingFromThingGroup for details on how to use this API operation. +// See ListSecurityProfiles for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) RemoveThingFromThingGroupWithContext(ctx aws.Context, input *RemoveThingFromThingGroupInput, opts ...request.Option) (*RemoveThingFromThingGroupOutput, error) { - req, out := c.RemoveThingFromThingGroupRequest(input) +func (c *IoT) ListSecurityProfilesWithContext(ctx aws.Context, input *ListSecurityProfilesInput, opts ...request.Option) (*ListSecurityProfilesOutput, error) { + req, out := c.ListSecurityProfilesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opReplaceTopicRule = "ReplaceTopicRule" +const opListSecurityProfilesForTarget = "ListSecurityProfilesForTarget" -// ReplaceTopicRuleRequest generates a "aws/request.Request" representing the -// client's request for the ReplaceTopicRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListSecurityProfilesForTargetRequest generates a "aws/request.Request" representing the +// client's request for the ListSecurityProfilesForTarget operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ReplaceTopicRule for more information on using the ReplaceTopicRule +// See ListSecurityProfilesForTarget for more information on using the ListSecurityProfilesForTarget // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ReplaceTopicRuleRequest method. -// req, resp := client.ReplaceTopicRuleRequest(params) +// // Example sending a request using the ListSecurityProfilesForTargetRequest method. +// req, resp := client.ListSecurityProfilesForTargetRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) ReplaceTopicRuleRequest(input *ReplaceTopicRuleInput) (req *request.Request, output *ReplaceTopicRuleOutput) { +func (c *IoT) ListSecurityProfilesForTargetRequest(input *ListSecurityProfilesForTargetInput) (req *request.Request, output *ListSecurityProfilesForTargetOutput) { op := &request.Operation{ - Name: opReplaceTopicRule, - HTTPMethod: "PATCH", - HTTPPath: "/rules/{ruleName}", + Name: opListSecurityProfilesForTarget, + HTTPMethod: "GET", + HTTPPath: "/security-profiles-for-target", } if input == nil { - input = &ReplaceTopicRuleInput{} + input = &ListSecurityProfilesForTargetInput{} } - output = &ReplaceTopicRuleOutput{} + output = &ListSecurityProfilesForTargetOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// ReplaceTopicRule API operation for AWS IoT. +// ListSecurityProfilesForTarget API operation for AWS IoT. // -// Replaces the rule. You must specify all parameters for the new rule. Creating -// rules is an administrator-level action. Any user who has permission to create -// rules will be able to access data processed by the rule. +// Lists the Device Defender security profiles attached to a target (thing group). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation ReplaceTopicRule for usage and error information. +// API operation ListSecurityProfilesForTarget for usage and error information. // // Returned Error Codes: -// * ErrCodeSqlParseException "SqlParseException" -// The Rule-SQL expression can't be parsed correctly. -// -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. // -func (c *IoT) ReplaceTopicRule(input *ReplaceTopicRuleInput) (*ReplaceTopicRuleOutput, error) { - req, out := c.ReplaceTopicRuleRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. 
+// +func (c *IoT) ListSecurityProfilesForTarget(input *ListSecurityProfilesForTargetInput) (*ListSecurityProfilesForTargetOutput, error) { + req, out := c.ListSecurityProfilesForTargetRequest(input) return out, req.Send() } -// ReplaceTopicRuleWithContext is the same as ReplaceTopicRule with the addition of +// ListSecurityProfilesForTargetWithContext is the same as ListSecurityProfilesForTarget with the addition of // the ability to pass a context and additional request options. // -// See ReplaceTopicRule for details on how to use this API operation. +// See ListSecurityProfilesForTarget for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) ReplaceTopicRuleWithContext(ctx aws.Context, input *ReplaceTopicRuleInput, opts ...request.Option) (*ReplaceTopicRuleOutput, error) { - req, out := c.ReplaceTopicRuleRequest(input) +func (c *IoT) ListSecurityProfilesForTargetWithContext(ctx aws.Context, input *ListSecurityProfilesForTargetInput, opts ...request.Option) (*ListSecurityProfilesForTargetOutput, error) { + req, out := c.ListSecurityProfilesForTargetRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opSearchIndex = "SearchIndex" +const opListStreams = "ListStreams" -// SearchIndexRequest generates a "aws/request.Request" representing the -// client's request for the SearchIndex operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListStreamsRequest generates a "aws/request.Request" representing the +// client's request for the ListStreams operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See SearchIndex for more information on using the SearchIndex +// See ListStreams for more information on using the ListStreams // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the SearchIndexRequest method. -// req, resp := client.SearchIndexRequest(params) +// // Example sending a request using the ListStreamsRequest method. +// req, resp := client.ListStreamsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) SearchIndexRequest(input *SearchIndexInput) (req *request.Request, output *SearchIndexOutput) { +func (c *IoT) ListStreamsRequest(input *ListStreamsInput) (req *request.Request, output *ListStreamsOutput) { op := &request.Operation{ - Name: opSearchIndex, - HTTPMethod: "POST", - HTTPPath: "/indices/search", + Name: opListStreams, + HTTPMethod: "GET", + HTTPPath: "/streams", } if input == nil { - input = &SearchIndexInput{} + input = &ListStreamsInput{} } - output = &SearchIndexOutput{} + output = &ListStreamsOutput{} req = c.newRequest(op, input, output) return } -// SearchIndex API operation for AWS IoT. +// ListStreams API operation for AWS IoT. // -// The query search index. 
+// Lists all of the streams in your AWS account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation SearchIndex for usage and error information. +// API operation ListStreams for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -9398,186 +10271,162 @@ func (c *IoT) SearchIndexRequest(input *SearchIndexInput) (req *request.Request, // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -// * ErrCodeInvalidQueryException "InvalidQueryException" -// The query is invalid. -// -// * ErrCodeIndexNotReadyException "IndexNotReadyException" -// The index is not ready. -// -func (c *IoT) SearchIndex(input *SearchIndexInput) (*SearchIndexOutput, error) { - req, out := c.SearchIndexRequest(input) +func (c *IoT) ListStreams(input *ListStreamsInput) (*ListStreamsOutput, error) { + req, out := c.ListStreamsRequest(input) return out, req.Send() } -// SearchIndexWithContext is the same as SearchIndex with the addition of +// ListStreamsWithContext is the same as ListStreams with the addition of // the ability to pass a context and additional request options. // -// See SearchIndex for details on how to use this API operation. +// See ListStreams for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) SearchIndexWithContext(ctx aws.Context, input *SearchIndexInput, opts ...request.Option) (*SearchIndexOutput, error) { - req, out := c.SearchIndexRequest(input) +func (c *IoT) ListStreamsWithContext(ctx aws.Context, input *ListStreamsInput, opts ...request.Option) (*ListStreamsOutput, error) { + req, out := c.ListStreamsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opSetDefaultAuthorizer = "SetDefaultAuthorizer" +const opListTagsForResource = "ListTagsForResource" -// SetDefaultAuthorizerRequest generates a "aws/request.Request" representing the -// client's request for the SetDefaultAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListTagsForResourceRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsForResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See SetDefaultAuthorizer for more information on using the SetDefaultAuthorizer +// See ListTagsForResource for more information on using the ListTagsForResource // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
// // -// // Example sending a request using the SetDefaultAuthorizerRequest method. -// req, resp := client.SetDefaultAuthorizerRequest(params) +// // Example sending a request using the ListTagsForResourceRequest method. +// req, resp := client.ListTagsForResourceRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) SetDefaultAuthorizerRequest(input *SetDefaultAuthorizerInput) (req *request.Request, output *SetDefaultAuthorizerOutput) { +func (c *IoT) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req *request.Request, output *ListTagsForResourceOutput) { op := &request.Operation{ - Name: opSetDefaultAuthorizer, - HTTPMethod: "POST", - HTTPPath: "/default-authorizer", + Name: opListTagsForResource, + HTTPMethod: "GET", + HTTPPath: "/tags", } if input == nil { - input = &SetDefaultAuthorizerInput{} + input = &ListTagsForResourceInput{} } - output = &SetDefaultAuthorizerOutput{} + output = &ListTagsForResourceOutput{} req = c.newRequest(op, input, output) return } -// SetDefaultAuthorizer API operation for AWS IoT. +// ListTagsForResource API operation for AWS IoT. // -// Sets the default authorizer. This will be used if a websocket connection -// is made without specifying an authorizer. +// Lists the tags (metadata) you have assigned to the resource. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation SetDefaultAuthorizer for usage and error information. +// API operation ListTagsForResource for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" -// The resource already exists. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // -func (c *IoT) SetDefaultAuthorizer(input *SetDefaultAuthorizerInput) (*SetDefaultAuthorizerOutput, error) { - req, out := c.SetDefaultAuthorizerRequest(input) +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +func (c *IoT) ListTagsForResource(input *ListTagsForResourceInput) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) return out, req.Send() } -// SetDefaultAuthorizerWithContext is the same as SetDefaultAuthorizer with the addition of +// ListTagsForResourceWithContext is the same as ListTagsForResource with the addition of // the ability to pass a context and additional request options. // -// See SetDefaultAuthorizer for details on how to use this API operation. +// See ListTagsForResource for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. 
If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) SetDefaultAuthorizerWithContext(ctx aws.Context, input *SetDefaultAuthorizerInput, opts ...request.Option) (*SetDefaultAuthorizerOutput, error) { - req, out := c.SetDefaultAuthorizerRequest(input) +func (c *IoT) ListTagsForResourceWithContext(ctx aws.Context, input *ListTagsForResourceInput, opts ...request.Option) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opSetDefaultPolicyVersion = "SetDefaultPolicyVersion" +const opListTargetsForPolicy = "ListTargetsForPolicy" -// SetDefaultPolicyVersionRequest generates a "aws/request.Request" representing the -// client's request for the SetDefaultPolicyVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListTargetsForPolicyRequest generates a "aws/request.Request" representing the +// client's request for the ListTargetsForPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See SetDefaultPolicyVersion for more information on using the SetDefaultPolicyVersion +// See ListTargetsForPolicy for more information on using the ListTargetsForPolicy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the SetDefaultPolicyVersionRequest method. -// req, resp := client.SetDefaultPolicyVersionRequest(params) +// // Example sending a request using the ListTargetsForPolicyRequest method. +// req, resp := client.ListTargetsForPolicyRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) SetDefaultPolicyVersionRequest(input *SetDefaultPolicyVersionInput) (req *request.Request, output *SetDefaultPolicyVersionOutput) { +func (c *IoT) ListTargetsForPolicyRequest(input *ListTargetsForPolicyInput) (req *request.Request, output *ListTargetsForPolicyOutput) { op := &request.Operation{ - Name: opSetDefaultPolicyVersion, - HTTPMethod: "PATCH", - HTTPPath: "/policies/{policyName}/version/{policyVersionId}", + Name: opListTargetsForPolicy, + HTTPMethod: "POST", + HTTPPath: "/policy-targets/{policyName}", } if input == nil { - input = &SetDefaultPolicyVersionInput{} + input = &ListTargetsForPolicyInput{} } - output = &SetDefaultPolicyVersionOutput{} + output = &ListTargetsForPolicyOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// SetDefaultPolicyVersion API operation for AWS IoT. +// ListTargetsForPolicy API operation for AWS IoT. // -// Sets the specified version of the specified policy as the policy's default -// (operative) version. This action affects all certificates to which the policy -// is attached. To list the principals the policy is attached to, use the ListPrincipalPolicy -// API. 
+// List targets for the specified policy. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation SetDefaultPolicyVersion for usage and error information. +// API operation ListTargetsForPolicy for usage and error information. // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" @@ -9598,332 +10447,330 @@ func (c *IoT) SetDefaultPolicyVersionRequest(input *SetDefaultPolicyVersionInput // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) SetDefaultPolicyVersion(input *SetDefaultPolicyVersionInput) (*SetDefaultPolicyVersionOutput, error) { - req, out := c.SetDefaultPolicyVersionRequest(input) +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +func (c *IoT) ListTargetsForPolicy(input *ListTargetsForPolicyInput) (*ListTargetsForPolicyOutput, error) { + req, out := c.ListTargetsForPolicyRequest(input) return out, req.Send() } -// SetDefaultPolicyVersionWithContext is the same as SetDefaultPolicyVersion with the addition of +// ListTargetsForPolicyWithContext is the same as ListTargetsForPolicy with the addition of // the ability to pass a context and additional request options. // -// See SetDefaultPolicyVersion for details on how to use this API operation. +// See ListTargetsForPolicy for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) SetDefaultPolicyVersionWithContext(ctx aws.Context, input *SetDefaultPolicyVersionInput, opts ...request.Option) (*SetDefaultPolicyVersionOutput, error) { - req, out := c.SetDefaultPolicyVersionRequest(input) +func (c *IoT) ListTargetsForPolicyWithContext(ctx aws.Context, input *ListTargetsForPolicyInput, opts ...request.Option) (*ListTargetsForPolicyOutput, error) { + req, out := c.ListTargetsForPolicyRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opSetLoggingOptions = "SetLoggingOptions" +const opListTargetsForSecurityProfile = "ListTargetsForSecurityProfile" -// SetLoggingOptionsRequest generates a "aws/request.Request" representing the -// client's request for the SetLoggingOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListTargetsForSecurityProfileRequest generates a "aws/request.Request" representing the +// client's request for the ListTargetsForSecurityProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See SetLoggingOptions for more information on using the SetLoggingOptions +// See ListTargetsForSecurityProfile for more information on using the ListTargetsForSecurityProfile // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. 
Such as custom headers, or retry logic. // // -// // Example sending a request using the SetLoggingOptionsRequest method. -// req, resp := client.SetLoggingOptionsRequest(params) +// // Example sending a request using the ListTargetsForSecurityProfileRequest method. +// req, resp := client.ListTargetsForSecurityProfileRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) SetLoggingOptionsRequest(input *SetLoggingOptionsInput) (req *request.Request, output *SetLoggingOptionsOutput) { +func (c *IoT) ListTargetsForSecurityProfileRequest(input *ListTargetsForSecurityProfileInput) (req *request.Request, output *ListTargetsForSecurityProfileOutput) { op := &request.Operation{ - Name: opSetLoggingOptions, - HTTPMethod: "POST", - HTTPPath: "/loggingOptions", + Name: opListTargetsForSecurityProfile, + HTTPMethod: "GET", + HTTPPath: "/security-profiles/{securityProfileName}/targets", } if input == nil { - input = &SetLoggingOptionsInput{} + input = &ListTargetsForSecurityProfileInput{} } - output = &SetLoggingOptionsOutput{} + output = &ListTargetsForSecurityProfileOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// SetLoggingOptions API operation for AWS IoT. +// ListTargetsForSecurityProfile API operation for AWS IoT. // -// Sets the logging options. +// Lists the targets (thing groups) associated with a given Device Defender +// security profile. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation SetLoggingOptions for usage and error information. +// API operation ListTargetsForSecurityProfile for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. // -func (c *IoT) SetLoggingOptions(input *SetLoggingOptionsInput) (*SetLoggingOptionsOutput, error) { - req, out := c.SetLoggingOptionsRequest(input) +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) ListTargetsForSecurityProfile(input *ListTargetsForSecurityProfileInput) (*ListTargetsForSecurityProfileOutput, error) { + req, out := c.ListTargetsForSecurityProfileRequest(input) return out, req.Send() } -// SetLoggingOptionsWithContext is the same as SetLoggingOptions with the addition of +// ListTargetsForSecurityProfileWithContext is the same as ListTargetsForSecurityProfile with the addition of // the ability to pass a context and additional request options. // -// See SetLoggingOptions for details on how to use this API operation. +// See ListTargetsForSecurityProfile for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) SetLoggingOptionsWithContext(ctx aws.Context, input *SetLoggingOptionsInput, opts ...request.Option) (*SetLoggingOptionsOutput, error) { - req, out := c.SetLoggingOptionsRequest(input) +func (c *IoT) ListTargetsForSecurityProfileWithContext(ctx aws.Context, input *ListTargetsForSecurityProfileInput, opts ...request.Option) (*ListTargetsForSecurityProfileOutput, error) { + req, out := c.ListTargetsForSecurityProfileRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opSetV2LoggingLevel = "SetV2LoggingLevel" +const opListThingGroups = "ListThingGroups" -// SetV2LoggingLevelRequest generates a "aws/request.Request" representing the -// client's request for the SetV2LoggingLevel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingGroupsRequest generates a "aws/request.Request" representing the +// client's request for the ListThingGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See SetV2LoggingLevel for more information on using the SetV2LoggingLevel +// See ListThingGroups for more information on using the ListThingGroups // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the SetV2LoggingLevelRequest method. -// req, resp := client.SetV2LoggingLevelRequest(params) +// // Example sending a request using the ListThingGroupsRequest method. +// req, resp := client.ListThingGroupsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) SetV2LoggingLevelRequest(input *SetV2LoggingLevelInput) (req *request.Request, output *SetV2LoggingLevelOutput) { +func (c *IoT) ListThingGroupsRequest(input *ListThingGroupsInput) (req *request.Request, output *ListThingGroupsOutput) { op := &request.Operation{ - Name: opSetV2LoggingLevel, - HTTPMethod: "POST", - HTTPPath: "/v2LoggingLevel", + Name: opListThingGroups, + HTTPMethod: "GET", + HTTPPath: "/thing-groups", } if input == nil { - input = &SetV2LoggingLevelInput{} + input = &ListThingGroupsInput{} } - output = &SetV2LoggingLevelOutput{} + output = &ListThingGroupsOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// SetV2LoggingLevel API operation for AWS IoT. +// ListThingGroups API operation for AWS IoT. // -// Sets the logging level. +// List the thing groups in your account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation SetV2LoggingLevel for usage and error information. +// API operation ListThingGroups for usage and error information. 
// // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// -// * ErrCodeNotConfiguredException "NotConfiguredException" -// The resource is not configured. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. // -func (c *IoT) SetV2LoggingLevel(input *SetV2LoggingLevelInput) (*SetV2LoggingLevelOutput, error) { - req, out := c.SetV2LoggingLevelRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) ListThingGroups(input *ListThingGroupsInput) (*ListThingGroupsOutput, error) { + req, out := c.ListThingGroupsRequest(input) return out, req.Send() } -// SetV2LoggingLevelWithContext is the same as SetV2LoggingLevel with the addition of +// ListThingGroupsWithContext is the same as ListThingGroups with the addition of // the ability to pass a context and additional request options. // -// See SetV2LoggingLevel for details on how to use this API operation. +// See ListThingGroups for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) SetV2LoggingLevelWithContext(ctx aws.Context, input *SetV2LoggingLevelInput, opts ...request.Option) (*SetV2LoggingLevelOutput, error) { - req, out := c.SetV2LoggingLevelRequest(input) +func (c *IoT) ListThingGroupsWithContext(ctx aws.Context, input *ListThingGroupsInput, opts ...request.Option) (*ListThingGroupsOutput, error) { + req, out := c.ListThingGroupsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opSetV2LoggingOptions = "SetV2LoggingOptions" +const opListThingGroupsForThing = "ListThingGroupsForThing" -// SetV2LoggingOptionsRequest generates a "aws/request.Request" representing the -// client's request for the SetV2LoggingOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingGroupsForThingRequest generates a "aws/request.Request" representing the +// client's request for the ListThingGroupsForThing operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See SetV2LoggingOptions for more information on using the SetV2LoggingOptions +// See ListThingGroupsForThing for more information on using the ListThingGroupsForThing // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the SetV2LoggingOptionsRequest method. -// req, resp := client.SetV2LoggingOptionsRequest(params) +// // Example sending a request using the ListThingGroupsForThingRequest method. 
+// req, resp := client.ListThingGroupsForThingRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) SetV2LoggingOptionsRequest(input *SetV2LoggingOptionsInput) (req *request.Request, output *SetV2LoggingOptionsOutput) { +func (c *IoT) ListThingGroupsForThingRequest(input *ListThingGroupsForThingInput) (req *request.Request, output *ListThingGroupsForThingOutput) { op := &request.Operation{ - Name: opSetV2LoggingOptions, - HTTPMethod: "POST", - HTTPPath: "/v2LoggingOptions", + Name: opListThingGroupsForThing, + HTTPMethod: "GET", + HTTPPath: "/things/{thingName}/thing-groups", } if input == nil { - input = &SetV2LoggingOptionsInput{} + input = &ListThingGroupsForThingInput{} } - output = &SetV2LoggingOptionsOutput{} + output = &ListThingGroupsForThingOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// SetV2LoggingOptions API operation for AWS IoT. +// ListThingGroupsForThing API operation for AWS IoT. // -// Sets the logging options for the V2 logging service. +// List the thing groups to which the specified thing belongs. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation SetV2LoggingOptions for usage and error information. +// API operation ListThingGroupsForThing for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" -// An unexpected error has occurred. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. // -func (c *IoT) SetV2LoggingOptions(input *SetV2LoggingOptionsInput) (*SetV2LoggingOptionsOutput, error) { - req, out := c.SetV2LoggingOptionsRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) ListThingGroupsForThing(input *ListThingGroupsForThingInput) (*ListThingGroupsForThingOutput, error) { + req, out := c.ListThingGroupsForThingRequest(input) return out, req.Send() } -// SetV2LoggingOptionsWithContext is the same as SetV2LoggingOptions with the addition of +// ListThingGroupsForThingWithContext is the same as ListThingGroupsForThing with the addition of // the ability to pass a context and additional request options. // -// See SetV2LoggingOptions for details on how to use this API operation. +// See ListThingGroupsForThing for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) SetV2LoggingOptionsWithContext(ctx aws.Context, input *SetV2LoggingOptionsInput, opts ...request.Option) (*SetV2LoggingOptionsOutput, error) { - req, out := c.SetV2LoggingOptionsRequest(input) +func (c *IoT) ListThingGroupsForThingWithContext(ctx aws.Context, input *ListThingGroupsForThingInput, opts ...request.Option) (*ListThingGroupsForThingOutput, error) { + req, out := c.ListThingGroupsForThingRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStartThingRegistrationTask = "StartThingRegistrationTask" +const opListThingPrincipals = "ListThingPrincipals" -// StartThingRegistrationTaskRequest generates a "aws/request.Request" representing the -// client's request for the StartThingRegistrationTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingPrincipalsRequest generates a "aws/request.Request" representing the +// client's request for the ListThingPrincipals operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StartThingRegistrationTask for more information on using the StartThingRegistrationTask +// See ListThingPrincipals for more information on using the ListThingPrincipals // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartThingRegistrationTaskRequest method. -// req, resp := client.StartThingRegistrationTaskRequest(params) +// // Example sending a request using the ListThingPrincipalsRequest method. +// req, resp := client.ListThingPrincipalsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) StartThingRegistrationTaskRequest(input *StartThingRegistrationTaskInput) (req *request.Request, output *StartThingRegistrationTaskOutput) { +func (c *IoT) ListThingPrincipalsRequest(input *ListThingPrincipalsInput) (req *request.Request, output *ListThingPrincipalsOutput) { op := &request.Operation{ - Name: opStartThingRegistrationTask, - HTTPMethod: "POST", - HTTPPath: "/thing-registration-tasks", + Name: opListThingPrincipals, + HTTPMethod: "GET", + HTTPPath: "/things/{thingName}/principals", } if input == nil { - input = &StartThingRegistrationTaskInput{} + input = &ListThingPrincipalsInput{} } - output = &StartThingRegistrationTaskOutput{} + output = &ListThingPrincipalsOutput{} req = c.newRequest(op, input, output) return } -// StartThingRegistrationTask API operation for AWS IoT. +// ListThingPrincipals API operation for AWS IoT. // -// Creates a bulk thing provisioning task. +// Lists the principals associated with the specified thing. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation StartThingRegistrationTask for usage and error information. +// API operation ListThingPrincipals for usage and error information. 
// // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -9935,80 +10782,86 @@ func (c *IoT) StartThingRegistrationTaskRequest(input *StartThingRegistrationTas // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) StartThingRegistrationTask(input *StartThingRegistrationTaskInput) (*StartThingRegistrationTaskOutput, error) { - req, out := c.StartThingRegistrationTaskRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) ListThingPrincipals(input *ListThingPrincipalsInput) (*ListThingPrincipalsOutput, error) { + req, out := c.ListThingPrincipalsRequest(input) return out, req.Send() } -// StartThingRegistrationTaskWithContext is the same as StartThingRegistrationTask with the addition of +// ListThingPrincipalsWithContext is the same as ListThingPrincipals with the addition of // the ability to pass a context and additional request options. // -// See StartThingRegistrationTask for details on how to use this API operation. +// See ListThingPrincipals for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) StartThingRegistrationTaskWithContext(ctx aws.Context, input *StartThingRegistrationTaskInput, opts ...request.Option) (*StartThingRegistrationTaskOutput, error) { - req, out := c.StartThingRegistrationTaskRequest(input) +func (c *IoT) ListThingPrincipalsWithContext(ctx aws.Context, input *ListThingPrincipalsInput, opts ...request.Option) (*ListThingPrincipalsOutput, error) { + req, out := c.ListThingPrincipalsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStopThingRegistrationTask = "StopThingRegistrationTask" +const opListThingRegistrationTaskReports = "ListThingRegistrationTaskReports" -// StopThingRegistrationTaskRequest generates a "aws/request.Request" representing the -// client's request for the StopThingRegistrationTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingRegistrationTaskReportsRequest generates a "aws/request.Request" representing the +// client's request for the ListThingRegistrationTaskReports operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StopThingRegistrationTask for more information on using the StopThingRegistrationTask +// See ListThingRegistrationTaskReports for more information on using the ListThingRegistrationTaskReports // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
// // -// // Example sending a request using the StopThingRegistrationTaskRequest method. -// req, resp := client.StopThingRegistrationTaskRequest(params) +// // Example sending a request using the ListThingRegistrationTaskReportsRequest method. +// req, resp := client.ListThingRegistrationTaskReportsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) StopThingRegistrationTaskRequest(input *StopThingRegistrationTaskInput) (req *request.Request, output *StopThingRegistrationTaskOutput) { +func (c *IoT) ListThingRegistrationTaskReportsRequest(input *ListThingRegistrationTaskReportsInput) (req *request.Request, output *ListThingRegistrationTaskReportsOutput) { op := &request.Operation{ - Name: opStopThingRegistrationTask, - HTTPMethod: "PUT", - HTTPPath: "/thing-registration-tasks/{taskId}/cancel", + Name: opListThingRegistrationTaskReports, + HTTPMethod: "GET", + HTTPPath: "/thing-registration-tasks/{taskId}/reports", } if input == nil { - input = &StopThingRegistrationTaskInput{} + input = &ListThingRegistrationTaskReportsInput{} } - output = &StopThingRegistrationTaskOutput{} + output = &ListThingRegistrationTaskReportsOutput{} req = c.newRequest(op, input, output) return } -// StopThingRegistrationTask API operation for AWS IoT. +// ListThingRegistrationTaskReports API operation for AWS IoT. // -// Cancels a bulk thing provisioning task. +// Information about the thing registration tasks. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation StopThingRegistrationTask for usage and error information. +// API operation ListThingRegistrationTaskReports for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -10023,85 +10876,79 @@ func (c *IoT) StopThingRegistrationTaskRequest(input *StopThingRegistrationTaskI // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) StopThingRegistrationTask(input *StopThingRegistrationTaskInput) (*StopThingRegistrationTaskOutput, error) { - req, out := c.StopThingRegistrationTaskRequest(input) +func (c *IoT) ListThingRegistrationTaskReports(input *ListThingRegistrationTaskReportsInput) (*ListThingRegistrationTaskReportsOutput, error) { + req, out := c.ListThingRegistrationTaskReportsRequest(input) return out, req.Send() } -// StopThingRegistrationTaskWithContext is the same as StopThingRegistrationTask with the addition of +// ListThingRegistrationTaskReportsWithContext is the same as ListThingRegistrationTaskReports with the addition of // the ability to pass a context and additional request options. // -// See StopThingRegistrationTask for details on how to use this API operation. +// See ListThingRegistrationTaskReports for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) StopThingRegistrationTaskWithContext(ctx aws.Context, input *StopThingRegistrationTaskInput, opts ...request.Option) (*StopThingRegistrationTaskOutput, error) { - req, out := c.StopThingRegistrationTaskRequest(input) +func (c *IoT) ListThingRegistrationTaskReportsWithContext(ctx aws.Context, input *ListThingRegistrationTaskReportsInput, opts ...request.Option) (*ListThingRegistrationTaskReportsOutput, error) { + req, out := c.ListThingRegistrationTaskReportsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opTestAuthorization = "TestAuthorization" +const opListThingRegistrationTasks = "ListThingRegistrationTasks" -// TestAuthorizationRequest generates a "aws/request.Request" representing the -// client's request for the TestAuthorization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingRegistrationTasksRequest generates a "aws/request.Request" representing the +// client's request for the ListThingRegistrationTasks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See TestAuthorization for more information on using the TestAuthorization +// See ListThingRegistrationTasks for more information on using the ListThingRegistrationTasks // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the TestAuthorizationRequest method. -// req, resp := client.TestAuthorizationRequest(params) +// // Example sending a request using the ListThingRegistrationTasksRequest method. +// req, resp := client.ListThingRegistrationTasksRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) TestAuthorizationRequest(input *TestAuthorizationInput) (req *request.Request, output *TestAuthorizationOutput) { +func (c *IoT) ListThingRegistrationTasksRequest(input *ListThingRegistrationTasksInput) (req *request.Request, output *ListThingRegistrationTasksOutput) { op := &request.Operation{ - Name: opTestAuthorization, - HTTPMethod: "POST", - HTTPPath: "/test-authorization", + Name: opListThingRegistrationTasks, + HTTPMethod: "GET", + HTTPPath: "/thing-registration-tasks", } if input == nil { - input = &TestAuthorizationInput{} + input = &ListThingRegistrationTasksInput{} } - output = &TestAuthorizationOutput{} + output = &ListThingRegistrationTasksOutput{} req = c.newRequest(op, input, output) return } -// TestAuthorization API operation for AWS IoT. +// ListThingRegistrationTasks API operation for AWS IoT. // -// Test custom authorization. +// List bulk thing provisioning tasks. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation TestAuthorization for usage and error information. +// API operation ListThingRegistrationTasks for usage and error information. 
// // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -10111,91 +10958,82 @@ func (c *IoT) TestAuthorizationRequest(input *TestAuthorizationInput) (req *requ // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. -// -func (c *IoT) TestAuthorization(input *TestAuthorizationInput) (*TestAuthorizationOutput, error) { - req, out := c.TestAuthorizationRequest(input) +func (c *IoT) ListThingRegistrationTasks(input *ListThingRegistrationTasksInput) (*ListThingRegistrationTasksOutput, error) { + req, out := c.ListThingRegistrationTasksRequest(input) return out, req.Send() } -// TestAuthorizationWithContext is the same as TestAuthorization with the addition of +// ListThingRegistrationTasksWithContext is the same as ListThingRegistrationTasks with the addition of // the ability to pass a context and additional request options. // -// See TestAuthorization for details on how to use this API operation. +// See ListThingRegistrationTasks for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) TestAuthorizationWithContext(ctx aws.Context, input *TestAuthorizationInput, opts ...request.Option) (*TestAuthorizationOutput, error) { - req, out := c.TestAuthorizationRequest(input) +func (c *IoT) ListThingRegistrationTasksWithContext(ctx aws.Context, input *ListThingRegistrationTasksInput, opts ...request.Option) (*ListThingRegistrationTasksOutput, error) { + req, out := c.ListThingRegistrationTasksRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opTestInvokeAuthorizer = "TestInvokeAuthorizer" +const opListThingTypes = "ListThingTypes" -// TestInvokeAuthorizerRequest generates a "aws/request.Request" representing the -// client's request for the TestInvokeAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingTypesRequest generates a "aws/request.Request" representing the +// client's request for the ListThingTypes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See TestInvokeAuthorizer for more information on using the TestInvokeAuthorizer +// See ListThingTypes for more information on using the ListThingTypes // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
// // -// // Example sending a request using the TestInvokeAuthorizerRequest method. -// req, resp := client.TestInvokeAuthorizerRequest(params) +// // Example sending a request using the ListThingTypesRequest method. +// req, resp := client.ListThingTypesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) TestInvokeAuthorizerRequest(input *TestInvokeAuthorizerInput) (req *request.Request, output *TestInvokeAuthorizerOutput) { +func (c *IoT) ListThingTypesRequest(input *ListThingTypesInput) (req *request.Request, output *ListThingTypesOutput) { op := &request.Operation{ - Name: opTestInvokeAuthorizer, - HTTPMethod: "POST", - HTTPPath: "/authorizer/{authorizerName}/test", + Name: opListThingTypes, + HTTPMethod: "GET", + HTTPPath: "/thing-types", } if input == nil { - input = &TestInvokeAuthorizerInput{} + input = &ListThingTypesInput{} } - output = &TestInvokeAuthorizerOutput{} + output = &ListThingTypesOutput{} req = c.newRequest(op, input, output) return } -// TestInvokeAuthorizer API operation for AWS IoT. +// ListThingTypes API operation for AWS IoT. // -// Invoke the specified custom authorizer for testing purposes. +// Lists the existing thing types. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation TestInvokeAuthorizer for usage and error information. +// API operation ListThingTypes for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // @@ -10211,106 +11049,85 @@ func (c *IoT) TestInvokeAuthorizerRequest(input *TestInvokeAuthorizerInput) (req // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeInvalidResponseException "InvalidResponseException" -// The response is invalid. -// -func (c *IoT) TestInvokeAuthorizer(input *TestInvokeAuthorizerInput) (*TestInvokeAuthorizerOutput, error) { - req, out := c.TestInvokeAuthorizerRequest(input) +func (c *IoT) ListThingTypes(input *ListThingTypesInput) (*ListThingTypesOutput, error) { + req, out := c.ListThingTypesRequest(input) return out, req.Send() } -// TestInvokeAuthorizerWithContext is the same as TestInvokeAuthorizer with the addition of +// ListThingTypesWithContext is the same as ListThingTypes with the addition of // the ability to pass a context and additional request options. // -// See TestInvokeAuthorizer for details on how to use this API operation. +// See ListThingTypes for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) TestInvokeAuthorizerWithContext(ctx aws.Context, input *TestInvokeAuthorizerInput, opts ...request.Option) (*TestInvokeAuthorizerOutput, error) { - req, out := c.TestInvokeAuthorizerRequest(input) +func (c *IoT) ListThingTypesWithContext(ctx aws.Context, input *ListThingTypesInput, opts ...request.Option) (*ListThingTypesOutput, error) { + req, out := c.ListThingTypesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opTransferCertificate = "TransferCertificate" +const opListThings = "ListThings" -// TransferCertificateRequest generates a "aws/request.Request" representing the -// client's request for the TransferCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingsRequest generates a "aws/request.Request" representing the +// client's request for the ListThings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See TransferCertificate for more information on using the TransferCertificate +// See ListThings for more information on using the ListThings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the TransferCertificateRequest method. -// req, resp := client.TransferCertificateRequest(params) +// // Example sending a request using the ListThingsRequest method. +// req, resp := client.ListThingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) TransferCertificateRequest(input *TransferCertificateInput) (req *request.Request, output *TransferCertificateOutput) { +func (c *IoT) ListThingsRequest(input *ListThingsInput) (req *request.Request, output *ListThingsOutput) { op := &request.Operation{ - Name: opTransferCertificate, - HTTPMethod: "PATCH", - HTTPPath: "/transfer-certificate/{certificateId}", + Name: opListThings, + HTTPMethod: "GET", + HTTPPath: "/things", } if input == nil { - input = &TransferCertificateInput{} + input = &ListThingsInput{} } - output = &TransferCertificateOutput{} + output = &ListThingsOutput{} req = c.newRequest(op, input, output) return } -// TransferCertificate API operation for AWS IoT. -// -// Transfers the specified certificate to the specified AWS account. -// -// You can cancel the transfer until it is acknowledged by the recipient. -// -// No notification is sent to the transfer destination's account. It is up to -// the caller to notify the transfer target. -// -// The certificate being transferred must not be in the ACTIVE state. You can -// use the UpdateCertificate API to deactivate it. +// ListThings API operation for AWS IoT. // -// The certificate must not have any policies attached to it. You can use the -// DetachPrincipalPolicy API to detach them. +// Lists your things. Use the attributeName and attributeValue parameters to +// filter your things. For example, calling ListThings with attributeName=Color +// and attributeValue=Red retrieves all things in the registry that contain +// an attribute Color with the value Red. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation TransferCertificate for usage and error information. +// API operation ListThings for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -// * ErrCodeCertificateStateException "CertificateStateException" -// The certificate operation is not allowed. -// -// * ErrCodeTransferConflictException "TransferConflictException" -// You can't transfer the certificate because authorization policies are still -// attached. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // @@ -10323,448 +11140,413 @@ func (c *IoT) TransferCertificateRequest(input *TransferCertificateInput) (req * // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) TransferCertificate(input *TransferCertificateInput) (*TransferCertificateOutput, error) { - req, out := c.TransferCertificateRequest(input) +func (c *IoT) ListThings(input *ListThingsInput) (*ListThingsOutput, error) { + req, out := c.ListThingsRequest(input) return out, req.Send() } -// TransferCertificateWithContext is the same as TransferCertificate with the addition of +// ListThingsWithContext is the same as ListThings with the addition of // the ability to pass a context and additional request options. // -// See TransferCertificate for details on how to use this API operation. +// See ListThings for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) TransferCertificateWithContext(ctx aws.Context, input *TransferCertificateInput, opts ...request.Option) (*TransferCertificateOutput, error) { - req, out := c.TransferCertificateRequest(input) +func (c *IoT) ListThingsWithContext(ctx aws.Context, input *ListThingsInput, opts ...request.Option) (*ListThingsOutput, error) { + req, out := c.ListThingsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateAuthorizer = "UpdateAuthorizer" +const opListThingsInBillingGroup = "ListThingsInBillingGroup" -// UpdateAuthorizerRequest generates a "aws/request.Request" representing the -// client's request for the UpdateAuthorizer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingsInBillingGroupRequest generates a "aws/request.Request" representing the +// client's request for the ListThingsInBillingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateAuthorizer for more information on using the UpdateAuthorizer +// See ListThingsInBillingGroup for more information on using the ListThingsInBillingGroup // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateAuthorizerRequest method. -// req, resp := client.UpdateAuthorizerRequest(params) +// // Example sending a request using the ListThingsInBillingGroupRequest method. +// req, resp := client.ListThingsInBillingGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateAuthorizerRequest(input *UpdateAuthorizerInput) (req *request.Request, output *UpdateAuthorizerOutput) { +func (c *IoT) ListThingsInBillingGroupRequest(input *ListThingsInBillingGroupInput) (req *request.Request, output *ListThingsInBillingGroupOutput) { op := &request.Operation{ - Name: opUpdateAuthorizer, - HTTPMethod: "PUT", - HTTPPath: "/authorizer/{authorizerName}", + Name: opListThingsInBillingGroup, + HTTPMethod: "GET", + HTTPPath: "/billing-groups/{billingGroupName}/things", } if input == nil { - input = &UpdateAuthorizerInput{} + input = &ListThingsInBillingGroupInput{} } - output = &UpdateAuthorizerOutput{} + output = &ListThingsInBillingGroupOutput{} req = c.newRequest(op, input, output) return } -// UpdateAuthorizer API operation for AWS IoT. +// ListThingsInBillingGroup API operation for AWS IoT. // -// Updates an authorizer. +// Lists the things you have added to the given billing group. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateAuthorizer for usage and error information. +// API operation ListThingsInBillingGroup for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeLimitExceededException "LimitExceededException" -// The number of attached entities exceeds the limit. -// -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) UpdateAuthorizer(input *UpdateAuthorizerInput) (*UpdateAuthorizerOutput, error) { - req, out := c.UpdateAuthorizerRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +func (c *IoT) ListThingsInBillingGroup(input *ListThingsInBillingGroupInput) (*ListThingsInBillingGroupOutput, error) { + req, out := c.ListThingsInBillingGroupRequest(input) return out, req.Send() } -// UpdateAuthorizerWithContext is the same as UpdateAuthorizer with the addition of +// ListThingsInBillingGroupWithContext is the same as ListThingsInBillingGroup with the addition of // the ability to pass a context and additional request options. // -// See UpdateAuthorizer for details on how to use this API operation. 
+// See ListThingsInBillingGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) UpdateAuthorizerWithContext(ctx aws.Context, input *UpdateAuthorizerInput, opts ...request.Option) (*UpdateAuthorizerOutput, error) { - req, out := c.UpdateAuthorizerRequest(input) +func (c *IoT) ListThingsInBillingGroupWithContext(ctx aws.Context, input *ListThingsInBillingGroupInput, opts ...request.Option) (*ListThingsInBillingGroupOutput, error) { + req, out := c.ListThingsInBillingGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateCACertificate = "UpdateCACertificate" +const opListThingsInThingGroup = "ListThingsInThingGroup" -// UpdateCACertificateRequest generates a "aws/request.Request" representing the -// client's request for the UpdateCACertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListThingsInThingGroupRequest generates a "aws/request.Request" representing the +// client's request for the ListThingsInThingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateCACertificate for more information on using the UpdateCACertificate +// See ListThingsInThingGroup for more information on using the ListThingsInThingGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateCACertificateRequest method. -// req, resp := client.UpdateCACertificateRequest(params) +// // Example sending a request using the ListThingsInThingGroupRequest method. +// req, resp := client.ListThingsInThingGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateCACertificateRequest(input *UpdateCACertificateInput) (req *request.Request, output *UpdateCACertificateOutput) { +func (c *IoT) ListThingsInThingGroupRequest(input *ListThingsInThingGroupInput) (req *request.Request, output *ListThingsInThingGroupOutput) { op := &request.Operation{ - Name: opUpdateCACertificate, - HTTPMethod: "PUT", - HTTPPath: "/cacertificate/{caCertificateId}", + Name: opListThingsInThingGroup, + HTTPMethod: "GET", + HTTPPath: "/thing-groups/{thingGroupName}/things", } if input == nil { - input = &UpdateCACertificateInput{} + input = &ListThingsInThingGroupInput{} } - output = &UpdateCACertificateOutput{} + output = &ListThingsInThingGroupOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateCACertificate API operation for AWS IoT. +// ListThingsInThingGroup API operation for AWS IoT. // -// Updates a registered CA certificate. +// Lists the things in the specified group. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateCACertificate for usage and error information. +// API operation ListThingsInThingGroup for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) UpdateCACertificate(input *UpdateCACertificateInput) (*UpdateCACertificateOutput, error) { - req, out := c.UpdateCACertificateRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) ListThingsInThingGroup(input *ListThingsInThingGroupInput) (*ListThingsInThingGroupOutput, error) { + req, out := c.ListThingsInThingGroupRequest(input) return out, req.Send() } -// UpdateCACertificateWithContext is the same as UpdateCACertificate with the addition of +// ListThingsInThingGroupWithContext is the same as ListThingsInThingGroup with the addition of // the ability to pass a context and additional request options. // -// See UpdateCACertificate for details on how to use this API operation. +// See ListThingsInThingGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) UpdateCACertificateWithContext(ctx aws.Context, input *UpdateCACertificateInput, opts ...request.Option) (*UpdateCACertificateOutput, error) { - req, out := c.UpdateCACertificateRequest(input) +func (c *IoT) ListThingsInThingGroupWithContext(ctx aws.Context, input *ListThingsInThingGroupInput, opts ...request.Option) (*ListThingsInThingGroupOutput, error) { + req, out := c.ListThingsInThingGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateCertificate = "UpdateCertificate" +const opListTopicRules = "ListTopicRules" -// UpdateCertificateRequest generates a "aws/request.Request" representing the -// client's request for the UpdateCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListTopicRulesRequest generates a "aws/request.Request" representing the +// client's request for the ListTopicRules operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See UpdateCertificate for more information on using the UpdateCertificate +// See ListTopicRules for more information on using the ListTopicRules // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateCertificateRequest method. -// req, resp := client.UpdateCertificateRequest(params) +// // Example sending a request using the ListTopicRulesRequest method. +// req, resp := client.ListTopicRulesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateCertificateRequest(input *UpdateCertificateInput) (req *request.Request, output *UpdateCertificateOutput) { +func (c *IoT) ListTopicRulesRequest(input *ListTopicRulesInput) (req *request.Request, output *ListTopicRulesOutput) { op := &request.Operation{ - Name: opUpdateCertificate, - HTTPMethod: "PUT", - HTTPPath: "/certificates/{certificateId}", + Name: opListTopicRules, + HTTPMethod: "GET", + HTTPPath: "/rules", } if input == nil { - input = &UpdateCertificateInput{} + input = &ListTopicRulesInput{} } - output = &UpdateCertificateOutput{} + output = &ListTopicRulesOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateCertificate API operation for AWS IoT. -// -// Updates the status of the specified certificate. This operation is idempotent. -// -// Moving a certificate from the ACTIVE state (including REVOKED) will not disconnect -// currently connected devices, but these devices will be unable to reconnect. +// ListTopicRules API operation for AWS IoT. // -// The ACTIVE state is required to authenticate devices connecting to AWS IoT -// using a certificate. +// Lists the rules for the specific topic. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateCertificate for usage and error information. +// API operation ListTopicRules for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -// * ErrCodeCertificateStateException "CertificateStateException" -// The certificate operation is not allowed. +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. -// -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// // * ErrCodeServiceUnavailableException "ServiceUnavailableException" // The service is temporarily unavailable. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. 
-// -func (c *IoT) UpdateCertificate(input *UpdateCertificateInput) (*UpdateCertificateOutput, error) { - req, out := c.UpdateCertificateRequest(input) +func (c *IoT) ListTopicRules(input *ListTopicRulesInput) (*ListTopicRulesOutput, error) { + req, out := c.ListTopicRulesRequest(input) return out, req.Send() } -// UpdateCertificateWithContext is the same as UpdateCertificate with the addition of +// ListTopicRulesWithContext is the same as ListTopicRules with the addition of // the ability to pass a context and additional request options. // -// See UpdateCertificate for details on how to use this API operation. +// See ListTopicRules for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) UpdateCertificateWithContext(ctx aws.Context, input *UpdateCertificateInput, opts ...request.Option) (*UpdateCertificateOutput, error) { - req, out := c.UpdateCertificateRequest(input) +func (c *IoT) ListTopicRulesWithContext(ctx aws.Context, input *ListTopicRulesInput, opts ...request.Option) (*ListTopicRulesOutput, error) { + req, out := c.ListTopicRulesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateEventConfigurations = "UpdateEventConfigurations" +const opListV2LoggingLevels = "ListV2LoggingLevels" -// UpdateEventConfigurationsRequest generates a "aws/request.Request" representing the -// client's request for the UpdateEventConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListV2LoggingLevelsRequest generates a "aws/request.Request" representing the +// client's request for the ListV2LoggingLevels operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateEventConfigurations for more information on using the UpdateEventConfigurations +// See ListV2LoggingLevels for more information on using the ListV2LoggingLevels // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateEventConfigurationsRequest method. -// req, resp := client.UpdateEventConfigurationsRequest(params) +// // Example sending a request using the ListV2LoggingLevelsRequest method. 
+// req, resp := client.ListV2LoggingLevelsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateEventConfigurationsRequest(input *UpdateEventConfigurationsInput) (req *request.Request, output *UpdateEventConfigurationsOutput) { +func (c *IoT) ListV2LoggingLevelsRequest(input *ListV2LoggingLevelsInput) (req *request.Request, output *ListV2LoggingLevelsOutput) { op := &request.Operation{ - Name: opUpdateEventConfigurations, - HTTPMethod: "PATCH", - HTTPPath: "/event-configurations", + Name: opListV2LoggingLevels, + HTTPMethod: "GET", + HTTPPath: "/v2LoggingLevel", } if input == nil { - input = &UpdateEventConfigurationsInput{} + input = &ListV2LoggingLevelsInput{} } - output = &UpdateEventConfigurationsOutput{} + output = &ListV2LoggingLevelsOutput{} req = c.newRequest(op, input, output) return } -// UpdateEventConfigurations API operation for AWS IoT. +// ListV2LoggingLevels API operation for AWS IoT. // -// Updates the event configurations. +// Lists logging levels. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateEventConfigurations for usage and error information. +// API operation ListV2LoggingLevels for usage and error information. // // Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. +// +// * ErrCodeNotConfiguredException "NotConfiguredException" +// The resource is not configured. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. -// -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. // -func (c *IoT) UpdateEventConfigurations(input *UpdateEventConfigurationsInput) (*UpdateEventConfigurationsOutput, error) { - req, out := c.UpdateEventConfigurationsRequest(input) +func (c *IoT) ListV2LoggingLevels(input *ListV2LoggingLevelsInput) (*ListV2LoggingLevelsOutput, error) { + req, out := c.ListV2LoggingLevelsRequest(input) return out, req.Send() } -// UpdateEventConfigurationsWithContext is the same as UpdateEventConfigurations with the addition of +// ListV2LoggingLevelsWithContext is the same as ListV2LoggingLevels with the addition of // the ability to pass a context and additional request options. // -// See UpdateEventConfigurations for details on how to use this API operation. +// See ListV2LoggingLevels for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *IoT) UpdateEventConfigurationsWithContext(ctx aws.Context, input *UpdateEventConfigurationsInput, opts ...request.Option) (*UpdateEventConfigurationsOutput, error) { - req, out := c.UpdateEventConfigurationsRequest(input) +func (c *IoT) ListV2LoggingLevelsWithContext(ctx aws.Context, input *ListV2LoggingLevelsInput, opts ...request.Option) (*ListV2LoggingLevelsOutput, error) { + req, out := c.ListV2LoggingLevelsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateIndexingConfiguration = "UpdateIndexingConfiguration" +const opListViolationEvents = "ListViolationEvents" -// UpdateIndexingConfigurationRequest generates a "aws/request.Request" representing the -// client's request for the UpdateIndexingConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListViolationEventsRequest generates a "aws/request.Request" representing the +// client's request for the ListViolationEvents operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateIndexingConfiguration for more information on using the UpdateIndexingConfiguration +// See ListViolationEvents for more information on using the ListViolationEvents // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateIndexingConfigurationRequest method. -// req, resp := client.UpdateIndexingConfigurationRequest(params) +// // Example sending a request using the ListViolationEventsRequest method. +// req, resp := client.ListViolationEventsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateIndexingConfigurationRequest(input *UpdateIndexingConfigurationInput) (req *request.Request, output *UpdateIndexingConfigurationOutput) { +func (c *IoT) ListViolationEventsRequest(input *ListViolationEventsInput) (req *request.Request, output *ListViolationEventsOutput) { op := &request.Operation{ - Name: opUpdateIndexingConfiguration, - HTTPMethod: "POST", - HTTPPath: "/indexing/config", + Name: opListViolationEvents, + HTTPMethod: "GET", + HTTPPath: "/violation-events", } if input == nil { - input = &UpdateIndexingConfigurationInput{} + input = &ListViolationEventsInput{} } - output = &UpdateIndexingConfigurationOutput{} + output = &ListViolationEventsOutput{} req = c.newRequest(op, input, output) return } -// UpdateIndexingConfiguration API operation for AWS IoT. +// ListViolationEvents API operation for AWS IoT. // -// Updates the search configuration. +// Lists the Device Defender security profile violations discovered during the +// given time period. You can use filters to limit the results to those alerts +// issued for a particular security profile, behavior or thing (device). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateIndexingConfiguration for usage and error information. 
+// API operation ListViolationEvents for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidRequestException "InvalidRequestException" @@ -10773,97 +11555,106 @@ func (c *IoT) UpdateIndexingConfigurationRequest(input *UpdateIndexingConfigurat // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // -// * ErrCodeUnauthorizedException "UnauthorizedException" -// You are not authorized to perform this operation. -// -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. -// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) UpdateIndexingConfiguration(input *UpdateIndexingConfigurationInput) (*UpdateIndexingConfigurationOutput, error) { - req, out := c.UpdateIndexingConfigurationRequest(input) +func (c *IoT) ListViolationEvents(input *ListViolationEventsInput) (*ListViolationEventsOutput, error) { + req, out := c.ListViolationEventsRequest(input) return out, req.Send() } -// UpdateIndexingConfigurationWithContext is the same as UpdateIndexingConfiguration with the addition of +// ListViolationEventsWithContext is the same as ListViolationEvents with the addition of // the ability to pass a context and additional request options. // -// See UpdateIndexingConfiguration for details on how to use this API operation. +// See ListViolationEvents for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) UpdateIndexingConfigurationWithContext(ctx aws.Context, input *UpdateIndexingConfigurationInput, opts ...request.Option) (*UpdateIndexingConfigurationOutput, error) { - req, out := c.UpdateIndexingConfigurationRequest(input) +func (c *IoT) ListViolationEventsWithContext(ctx aws.Context, input *ListViolationEventsInput, opts ...request.Option) (*ListViolationEventsOutput, error) { + req, out := c.ListViolationEventsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateRoleAlias = "UpdateRoleAlias" +const opRegisterCACertificate = "RegisterCACertificate" -// UpdateRoleAliasRequest generates a "aws/request.Request" representing the -// client's request for the UpdateRoleAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// RegisterCACertificateRequest generates a "aws/request.Request" representing the +// client's request for the RegisterCACertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateRoleAlias for more information on using the UpdateRoleAlias +// See RegisterCACertificate for more information on using the RegisterCACertificate // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateRoleAliasRequest method. 
-// req, resp := client.UpdateRoleAliasRequest(params) +// // Example sending a request using the RegisterCACertificateRequest method. +// req, resp := client.RegisterCACertificateRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateRoleAliasRequest(input *UpdateRoleAliasInput) (req *request.Request, output *UpdateRoleAliasOutput) { +func (c *IoT) RegisterCACertificateRequest(input *RegisterCACertificateInput) (req *request.Request, output *RegisterCACertificateOutput) { op := &request.Operation{ - Name: opUpdateRoleAlias, - HTTPMethod: "PUT", - HTTPPath: "/role-aliases/{roleAlias}", + Name: opRegisterCACertificate, + HTTPMethod: "POST", + HTTPPath: "/cacertificate", } if input == nil { - input = &UpdateRoleAliasInput{} + input = &RegisterCACertificateInput{} } - output = &UpdateRoleAliasOutput{} + output = &RegisterCACertificateOutput{} req = c.newRequest(op, input, output) return } -// UpdateRoleAlias API operation for AWS IoT. +// RegisterCACertificate API operation for AWS IoT. // -// Updates a role alias. +// Registers a CA certificate with AWS IoT. This CA certificate can then be +// used to sign device certificates, which can be then registered with AWS IoT. +// You can register up to 10 CA certificates per AWS account that have the same +// subject field. This enables you to have up to 10 certificate authorities +// sign your device certificates. If you have more than one CA certificate registered, +// make sure you pass the CA certificate when you register your device certificates +// with the RegisterCertificate API. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateRoleAlias for usage and error information. +// API operation RegisterCACertificate for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The resource already exists. +// +// * ErrCodeRegistrationCodeValidationException "RegistrationCodeValidationException" +// The registration code is invalid. // // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // +// * ErrCodeCertificateValidationException "CertificateValidationException" +// The certificate is invalid. +// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // @@ -10873,84 +11664,97 @@ func (c *IoT) UpdateRoleAliasRequest(input *UpdateRoleAliasInput) (req *request. // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. 
// -func (c *IoT) UpdateRoleAlias(input *UpdateRoleAliasInput) (*UpdateRoleAliasOutput, error) { - req, out := c.UpdateRoleAliasRequest(input) +func (c *IoT) RegisterCACertificate(input *RegisterCACertificateInput) (*RegisterCACertificateOutput, error) { + req, out := c.RegisterCACertificateRequest(input) return out, req.Send() } -// UpdateRoleAliasWithContext is the same as UpdateRoleAlias with the addition of +// RegisterCACertificateWithContext is the same as RegisterCACertificate with the addition of // the ability to pass a context and additional request options. // -// See UpdateRoleAlias for details on how to use this API operation. +// See RegisterCACertificate for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) UpdateRoleAliasWithContext(ctx aws.Context, input *UpdateRoleAliasInput, opts ...request.Option) (*UpdateRoleAliasOutput, error) { - req, out := c.UpdateRoleAliasRequest(input) +func (c *IoT) RegisterCACertificateWithContext(ctx aws.Context, input *RegisterCACertificateInput, opts ...request.Option) (*RegisterCACertificateOutput, error) { + req, out := c.RegisterCACertificateRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateStream = "UpdateStream" +const opRegisterCertificate = "RegisterCertificate" -// UpdateStreamRequest generates a "aws/request.Request" representing the -// client's request for the UpdateStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// RegisterCertificateRequest generates a "aws/request.Request" representing the +// client's request for the RegisterCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateStream for more information on using the UpdateStream +// See RegisterCertificate for more information on using the RegisterCertificate // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateStreamRequest method. -// req, resp := client.UpdateStreamRequest(params) +// // Example sending a request using the RegisterCertificateRequest method. 
+// req, resp := client.RegisterCertificateRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateStreamRequest(input *UpdateStreamInput) (req *request.Request, output *UpdateStreamOutput) { +func (c *IoT) RegisterCertificateRequest(input *RegisterCertificateInput) (req *request.Request, output *RegisterCertificateOutput) { op := &request.Operation{ - Name: opUpdateStream, - HTTPMethod: "PUT", - HTTPPath: "/streams/{streamId}", + Name: opRegisterCertificate, + HTTPMethod: "POST", + HTTPPath: "/certificate/register", } if input == nil { - input = &UpdateStreamInput{} + input = &RegisterCertificateInput{} } - output = &UpdateStreamOutput{} + output = &RegisterCertificateOutput{} req = c.newRequest(op, input, output) return } -// UpdateStream API operation for AWS IoT. +// RegisterCertificate API operation for AWS IoT. // -// Updates an existing stream. The stream version will be incremented by one. +// Registers a device certificate with AWS IoT. If you have more than one CA +// certificate that has the same subject field, you must specify the CA certificate +// that was used to sign the device certificate being registered. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateStream for usage and error information. +// API operation RegisterCertificate for usage and error information. // // Returned Error Codes: +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The resource already exists. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. +// * ErrCodeCertificateValidationException "CertificateValidationException" +// The certificate is invalid. +// +// * ErrCodeCertificateStateException "CertificateStateException" +// The certificate operation is not allowed. +// +// * ErrCodeCertificateConflictException "CertificateConflictException" +// Unable to verify the CA certificate used to sign the device certificate you +// are attempting to register. This is happens when you have registered more +// than one CA certificate that has the same subject field and public key. // // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. @@ -10964,261 +11768,277 @@ func (c *IoT) UpdateStreamRequest(input *UpdateStreamInput) (req *request.Reques // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -func (c *IoT) UpdateStream(input *UpdateStreamInput) (*UpdateStreamOutput, error) { - req, out := c.UpdateStreamRequest(input) +func (c *IoT) RegisterCertificate(input *RegisterCertificateInput) (*RegisterCertificateOutput, error) { + req, out := c.RegisterCertificateRequest(input) return out, req.Send() } -// UpdateStreamWithContext is the same as UpdateStream with the addition of +// RegisterCertificateWithContext is the same as RegisterCertificate with the addition of // the ability to pass a context and additional request options. // -// See UpdateStream for details on how to use this API operation. +// See RegisterCertificate for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. 
If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) UpdateStreamWithContext(ctx aws.Context, input *UpdateStreamInput, opts ...request.Option) (*UpdateStreamOutput, error) { - req, out := c.UpdateStreamRequest(input) +func (c *IoT) RegisterCertificateWithContext(ctx aws.Context, input *RegisterCertificateInput, opts ...request.Option) (*RegisterCertificateOutput, error) { + req, out := c.RegisterCertificateRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateThing = "UpdateThing" +const opRegisterThing = "RegisterThing" -// UpdateThingRequest generates a "aws/request.Request" representing the -// client's request for the UpdateThing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// RegisterThingRequest generates a "aws/request.Request" representing the +// client's request for the RegisterThing operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateThing for more information on using the UpdateThing +// See RegisterThing for more information on using the RegisterThing // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateThingRequest method. -// req, resp := client.UpdateThingRequest(params) +// // Example sending a request using the RegisterThingRequest method. +// req, resp := client.RegisterThingRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateThingRequest(input *UpdateThingInput) (req *request.Request, output *UpdateThingOutput) { +func (c *IoT) RegisterThingRequest(input *RegisterThingInput) (req *request.Request, output *RegisterThingOutput) { op := &request.Operation{ - Name: opUpdateThing, - HTTPMethod: "PATCH", - HTTPPath: "/things/{thingName}", + Name: opRegisterThing, + HTTPMethod: "POST", + HTTPPath: "/things", } if input == nil { - input = &UpdateThingInput{} + input = &RegisterThingInput{} } - output = &UpdateThingOutput{} + output = &RegisterThingOutput{} req = c.newRequest(op, input, output) return } -// UpdateThing API operation for AWS IoT. +// RegisterThing API operation for AWS IoT. // -// Updates the data for a thing. +// Provisions a thing. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateThing for usage and error information. +// API operation RegisterThing for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidRequestException "InvalidRequestException" -// The request is not valid. +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. 
// -// * ErrCodeVersionConflictException "VersionConflictException" -// An exception thrown when the version of a thing passed to a command is different -// than the version specified with the --version parameter. +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. // -// * ErrCodeThrottlingException "ThrottlingException" -// The rate exceeds the limit. +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. // // * ErrCodeUnauthorizedException "UnauthorizedException" // You are not authorized to perform this operation. // -// * ErrCodeServiceUnavailableException "ServiceUnavailableException" -// The service is temporarily unavailable. +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. // -// * ErrCodeInternalFailureException "InternalFailureException" -// An unexpected error has occurred. +// * ErrCodeConflictingResourceUpdateException "ConflictingResourceUpdateException" +// A conflicting resource update exception. This exception is thrown when two +// pending updates cause a conflict. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. +// * ErrCodeResourceRegistrationFailureException "ResourceRegistrationFailureException" +// The resource registration failed. // -func (c *IoT) UpdateThing(input *UpdateThingInput) (*UpdateThingOutput, error) { - req, out := c.UpdateThingRequest(input) +func (c *IoT) RegisterThing(input *RegisterThingInput) (*RegisterThingOutput, error) { + req, out := c.RegisterThingRequest(input) return out, req.Send() } -// UpdateThingWithContext is the same as UpdateThing with the addition of +// RegisterThingWithContext is the same as RegisterThing with the addition of // the ability to pass a context and additional request options. // -// See UpdateThing for details on how to use this API operation. +// See RegisterThing for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) UpdateThingWithContext(ctx aws.Context, input *UpdateThingInput, opts ...request.Option) (*UpdateThingOutput, error) { - req, out := c.UpdateThingRequest(input) +func (c *IoT) RegisterThingWithContext(ctx aws.Context, input *RegisterThingInput, opts ...request.Option) (*RegisterThingOutput, error) { + req, out := c.RegisterThingRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateThingGroup = "UpdateThingGroup" +const opRejectCertificateTransfer = "RejectCertificateTransfer" -// UpdateThingGroupRequest generates a "aws/request.Request" representing the -// client's request for the UpdateThingGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// RejectCertificateTransferRequest generates a "aws/request.Request" representing the +// client's request for the RejectCertificateTransfer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See UpdateThingGroup for more information on using the UpdateThingGroup +// See RejectCertificateTransfer for more information on using the RejectCertificateTransfer // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateThingGroupRequest method. -// req, resp := client.UpdateThingGroupRequest(params) +// // Example sending a request using the RejectCertificateTransferRequest method. +// req, resp := client.RejectCertificateTransferRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } -func (c *IoT) UpdateThingGroupRequest(input *UpdateThingGroupInput) (req *request.Request, output *UpdateThingGroupOutput) { +func (c *IoT) RejectCertificateTransferRequest(input *RejectCertificateTransferInput) (req *request.Request, output *RejectCertificateTransferOutput) { op := &request.Operation{ - Name: opUpdateThingGroup, + Name: opRejectCertificateTransfer, HTTPMethod: "PATCH", - HTTPPath: "/thing-groups/{thingGroupName}", + HTTPPath: "/reject-certificate-transfer/{certificateId}", } if input == nil { - input = &UpdateThingGroupInput{} + input = &RejectCertificateTransferInput{} } - output = &UpdateThingGroupOutput{} + output = &RejectCertificateTransferOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateThingGroup API operation for AWS IoT. +// RejectCertificateTransfer API operation for AWS IoT. // -// Update a thing group. +// Rejects a pending certificate transfer. After AWS IoT rejects a certificate +// transfer, the certificate status changes from PENDING_TRANSFER to INACTIVE. +// +// To check for pending certificate transfers, call ListCertificates to enumerate +// your certificates. +// +// This operation can only be called by the transfer destination. After it is +// called, the certificate will be returned to the source's account in the INACTIVE +// state. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS IoT's -// API operation UpdateThingGroup for usage and error information. +// API operation RejectCertificateTransfer for usage and error information. // // Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeTransferAlreadyCompletedException "TransferAlreadyCompletedException" +// You can't revert the certificate transfer because the transfer is already +// complete. +// // * ErrCodeInvalidRequestException "InvalidRequestException" // The request is not valid. // -// * ErrCodeVersionConflictException "VersionConflictException" -// An exception thrown when the version of a thing passed to a command is different -// than the version specified with the --version parameter. -// // * ErrCodeThrottlingException "ThrottlingException" // The rate exceeds the limit. // +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. 
+// // * ErrCodeInternalFailureException "InternalFailureException" // An unexpected error has occurred. // -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The specified resource does not exist. -// -func (c *IoT) UpdateThingGroup(input *UpdateThingGroupInput) (*UpdateThingGroupOutput, error) { - req, out := c.UpdateThingGroupRequest(input) +func (c *IoT) RejectCertificateTransfer(input *RejectCertificateTransferInput) (*RejectCertificateTransferOutput, error) { + req, out := c.RejectCertificateTransferRequest(input) return out, req.Send() } -// UpdateThingGroupWithContext is the same as UpdateThingGroup with the addition of +// RejectCertificateTransferWithContext is the same as RejectCertificateTransfer with the addition of // the ability to pass a context and additional request options. // -// See UpdateThingGroup for details on how to use this API operation. +// See RejectCertificateTransfer for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *IoT) UpdateThingGroupWithContext(ctx aws.Context, input *UpdateThingGroupInput, opts ...request.Option) (*UpdateThingGroupOutput, error) { - req, out := c.UpdateThingGroupRequest(input) +func (c *IoT) RejectCertificateTransferWithContext(ctx aws.Context, input *RejectCertificateTransferInput, opts ...request.Option) (*RejectCertificateTransferOutput, error) { + req, out := c.RejectCertificateTransferRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateThingGroupsForThing = "UpdateThingGroupsForThing" +const opRemoveThingFromBillingGroup = "RemoveThingFromBillingGroup" -// UpdateThingGroupsForThingRequest generates a "aws/request.Request" representing the -// client's request for the UpdateThingGroupsForThing operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// RemoveThingFromBillingGroupRequest generates a "aws/request.Request" representing the +// client's request for the RemoveThingFromBillingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateThingGroupsForThing for more information on using the UpdateThingGroupsForThing +// See RemoveThingFromBillingGroup for more information on using the RemoveThingFromBillingGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateThingGroupsForThingRequest method. -// req, resp := client.UpdateThingGroupsForThingRequest(params) +// // Example sending a request using the RemoveThingFromBillingGroupRequest method. 
+// req, resp := client.RemoveThingFromBillingGroupRequest(params)
//
// err := req.Send()
// if err == nil { // resp is now filled
// fmt.Println(resp)
// }
-func (c *IoT) UpdateThingGroupsForThingRequest(input *UpdateThingGroupsForThingInput) (req *request.Request, output *UpdateThingGroupsForThingOutput) {
+func (c *IoT) RemoveThingFromBillingGroupRequest(input *RemoveThingFromBillingGroupInput) (req *request.Request, output *RemoveThingFromBillingGroupOutput) {
op := &request.Operation{
- Name: opUpdateThingGroupsForThing,
+ Name: opRemoveThingFromBillingGroup,
HTTPMethod: "PUT",
- HTTPPath: "/thing-groups/updateThingGroupsForThing",
+ HTTPPath: "/billing-groups/removeThingFromBillingGroup",
}
if input == nil {
- input = &UpdateThingGroupsForThingInput{}
+ input = &RemoveThingFromBillingGroupInput{}
}
- output = &UpdateThingGroupsForThingOutput{}
+ output = &RemoveThingFromBillingGroupOutput{}
req = c.newRequest(op, input, output)
return
}
-// UpdateThingGroupsForThing API operation for AWS IoT.
+// RemoveThingFromBillingGroup API operation for AWS IoT.
//
-// Updates the groups to which the thing belongs.
+// Removes the given thing from the billing group.
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
// the error.
//
// See the AWS API reference guide for AWS IoT's
-// API operation UpdateThingGroupsForThing for usage and error information.
+// API operation RemoveThingFromBillingGroup for usage and error information.
//
// Returned Error Codes:
// * ErrCodeInvalidRequestException "InvalidRequestException"
@@ -11233,58 +12053,8805 @@ func (c *IoT) UpdateThingGroupsForThingRequest(input *UpdateThingGroupsForThingI
// * ErrCodeResourceNotFoundException "ResourceNotFoundException"
// The specified resource does not exist.
//
-func (c *IoT) UpdateThingGroupsForThing(input *UpdateThingGroupsForThingInput) (*UpdateThingGroupsForThingOutput, error) {
- req, out := c.UpdateThingGroupsForThingRequest(input)
+func (c *IoT) RemoveThingFromBillingGroup(input *RemoveThingFromBillingGroupInput) (*RemoveThingFromBillingGroupOutput, error) {
+ req, out := c.RemoveThingFromBillingGroupRequest(input)
return out, req.Send()
}
-// UpdateThingGroupsForThingWithContext is the same as UpdateThingGroupsForThing with the addition of
+// RemoveThingFromBillingGroupWithContext is the same as RemoveThingFromBillingGroup with the addition of
// the ability to pass a context and additional request options.
//
-// See UpdateThingGroupsForThing for details on how to use this API operation.
+// See RemoveThingFromBillingGroup for details on how to use this API operation.
//
// The context must be non-nil and will be used for request cancellation. If
// the context is nil a panic will occur. In the future the SDK may create
// sub-contexts for http.Requests. See https://golang.org/pkg/context/
// for more information on using Contexts.
-func (c *IoT) UpdateThingGroupsForThingWithContext(ctx aws.Context, input *UpdateThingGroupsForThingInput, opts ...request.Option) (*UpdateThingGroupsForThingOutput, error) {
- req, out := c.UpdateThingGroupsForThingRequest(input)
+func (c *IoT) RemoveThingFromBillingGroupWithContext(ctx aws.Context, input *RemoveThingFromBillingGroupInput, opts ...request.Option) (*RemoveThingFromBillingGroupOutput, error) {
+ req, out := c.RemoveThingFromBillingGroupRequest(input)
req.SetContext(ctx)
req.ApplyOptions(opts...)
return out, req.Send() } -// The input for the AcceptCertificateTransfer operation. -type AcceptCertificateTransferInput struct { - _ struct{} `type:"structure"` - - // The ID of the certificate. - // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` +const opRemoveThingFromThingGroup = "RemoveThingFromThingGroup" - // Specifies whether the certificate is active. +// RemoveThingFromThingGroupRequest generates a "aws/request.Request" representing the +// client's request for the RemoveThingFromThingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RemoveThingFromThingGroup for more information on using the RemoveThingFromThingGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveThingFromThingGroupRequest method. +// req, resp := client.RemoveThingFromThingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) RemoveThingFromThingGroupRequest(input *RemoveThingFromThingGroupInput) (req *request.Request, output *RemoveThingFromThingGroupOutput) { + op := &request.Operation{ + Name: opRemoveThingFromThingGroup, + HTTPMethod: "PUT", + HTTPPath: "/thing-groups/removeThingFromThingGroup", + } + + if input == nil { + input = &RemoveThingFromThingGroupInput{} + } + + output = &RemoveThingFromThingGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// RemoveThingFromThingGroup API operation for AWS IoT. +// +// Remove the specified thing from the specified group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation RemoveThingFromThingGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) RemoveThingFromThingGroup(input *RemoveThingFromThingGroupInput) (*RemoveThingFromThingGroupOutput, error) { + req, out := c.RemoveThingFromThingGroupRequest(input) + return out, req.Send() +} + +// RemoveThingFromThingGroupWithContext is the same as RemoveThingFromThingGroup with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveThingFromThingGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *IoT) RemoveThingFromThingGroupWithContext(ctx aws.Context, input *RemoveThingFromThingGroupInput, opts ...request.Option) (*RemoveThingFromThingGroupOutput, error) { + req, out := c.RemoveThingFromThingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opReplaceTopicRule = "ReplaceTopicRule" + +// ReplaceTopicRuleRequest generates a "aws/request.Request" representing the +// client's request for the ReplaceTopicRule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ReplaceTopicRule for more information on using the ReplaceTopicRule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ReplaceTopicRuleRequest method. +// req, resp := client.ReplaceTopicRuleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) ReplaceTopicRuleRequest(input *ReplaceTopicRuleInput) (req *request.Request, output *ReplaceTopicRuleOutput) { + op := &request.Operation{ + Name: opReplaceTopicRule, + HTTPMethod: "PATCH", + HTTPPath: "/rules/{ruleName}", + } + + if input == nil { + input = &ReplaceTopicRuleInput{} + } + + output = &ReplaceTopicRuleOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// ReplaceTopicRule API operation for AWS IoT. +// +// Replaces the rule. You must specify all parameters for the new rule. Creating +// rules is an administrator-level action. Any user who has permission to create +// rules will be able to access data processed by the rule. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation ReplaceTopicRule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeSqlParseException "SqlParseException" +// The Rule-SQL expression can't be parsed correctly. +// +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeConflictingResourceUpdateException "ConflictingResourceUpdateException" +// A conflicting resource update exception. This exception is thrown when two +// pending updates cause a conflict. +// +func (c *IoT) ReplaceTopicRule(input *ReplaceTopicRuleInput) (*ReplaceTopicRuleOutput, error) { + req, out := c.ReplaceTopicRuleRequest(input) + return out, req.Send() +} + +// ReplaceTopicRuleWithContext is the same as ReplaceTopicRule with the addition of +// the ability to pass a context and additional request options. 
+// +// See ReplaceTopicRule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) ReplaceTopicRuleWithContext(ctx aws.Context, input *ReplaceTopicRuleInput, opts ...request.Option) (*ReplaceTopicRuleOutput, error) { + req, out := c.ReplaceTopicRuleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSearchIndex = "SearchIndex" + +// SearchIndexRequest generates a "aws/request.Request" representing the +// client's request for the SearchIndex operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SearchIndex for more information on using the SearchIndex +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SearchIndexRequest method. +// req, resp := client.SearchIndexRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) SearchIndexRequest(input *SearchIndexInput) (req *request.Request, output *SearchIndexOutput) { + op := &request.Operation{ + Name: opSearchIndex, + HTTPMethod: "POST", + HTTPPath: "/indices/search", + } + + if input == nil { + input = &SearchIndexInput{} + } + + output = &SearchIndexOutput{} + req = c.newRequest(op, input, output) + return +} + +// SearchIndex API operation for AWS IoT. +// +// The query search index. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation SearchIndex for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidQueryException "InvalidQueryException" +// The query is invalid. +// +// * ErrCodeIndexNotReadyException "IndexNotReadyException" +// The index is not ready. +// +func (c *IoT) SearchIndex(input *SearchIndexInput) (*SearchIndexOutput, error) { + req, out := c.SearchIndexRequest(input) + return out, req.Send() +} + +// SearchIndexWithContext is the same as SearchIndex with the addition of +// the ability to pass a context and additional request options. +// +// See SearchIndex for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) SearchIndexWithContext(ctx aws.Context, input *SearchIndexInput, opts ...request.Option) (*SearchIndexOutput, error) { + req, out := c.SearchIndexRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetDefaultAuthorizer = "SetDefaultAuthorizer" + +// SetDefaultAuthorizerRequest generates a "aws/request.Request" representing the +// client's request for the SetDefaultAuthorizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetDefaultAuthorizer for more information on using the SetDefaultAuthorizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetDefaultAuthorizerRequest method. +// req, resp := client.SetDefaultAuthorizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) SetDefaultAuthorizerRequest(input *SetDefaultAuthorizerInput) (req *request.Request, output *SetDefaultAuthorizerOutput) { + op := &request.Operation{ + Name: opSetDefaultAuthorizer, + HTTPMethod: "POST", + HTTPPath: "/default-authorizer", + } + + if input == nil { + input = &SetDefaultAuthorizerInput{} + } + + output = &SetDefaultAuthorizerOutput{} + req = c.newRequest(op, input, output) + return +} + +// SetDefaultAuthorizer API operation for AWS IoT. +// +// Sets the default authorizer. This will be used if a websocket connection +// is made without specifying an authorizer. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation SetDefaultAuthorizer for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The resource already exists. 
+// +func (c *IoT) SetDefaultAuthorizer(input *SetDefaultAuthorizerInput) (*SetDefaultAuthorizerOutput, error) { + req, out := c.SetDefaultAuthorizerRequest(input) + return out, req.Send() +} + +// SetDefaultAuthorizerWithContext is the same as SetDefaultAuthorizer with the addition of +// the ability to pass a context and additional request options. +// +// See SetDefaultAuthorizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) SetDefaultAuthorizerWithContext(ctx aws.Context, input *SetDefaultAuthorizerInput, opts ...request.Option) (*SetDefaultAuthorizerOutput, error) { + req, out := c.SetDefaultAuthorizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetDefaultPolicyVersion = "SetDefaultPolicyVersion" + +// SetDefaultPolicyVersionRequest generates a "aws/request.Request" representing the +// client's request for the SetDefaultPolicyVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetDefaultPolicyVersion for more information on using the SetDefaultPolicyVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetDefaultPolicyVersionRequest method. +// req, resp := client.SetDefaultPolicyVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) SetDefaultPolicyVersionRequest(input *SetDefaultPolicyVersionInput) (req *request.Request, output *SetDefaultPolicyVersionOutput) { + op := &request.Operation{ + Name: opSetDefaultPolicyVersion, + HTTPMethod: "PATCH", + HTTPPath: "/policies/{policyName}/version/{policyVersionId}", + } + + if input == nil { + input = &SetDefaultPolicyVersionInput{} + } + + output = &SetDefaultPolicyVersionOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// SetDefaultPolicyVersion API operation for AWS IoT. +// +// Sets the specified version of the specified policy as the policy's default +// (operative) version. This action affects all certificates to which the policy +// is attached. To list the principals the policy is attached to, use the ListPrincipalPolicy +// API. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation SetDefaultPolicyVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. 
+// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) SetDefaultPolicyVersion(input *SetDefaultPolicyVersionInput) (*SetDefaultPolicyVersionOutput, error) { + req, out := c.SetDefaultPolicyVersionRequest(input) + return out, req.Send() +} + +// SetDefaultPolicyVersionWithContext is the same as SetDefaultPolicyVersion with the addition of +// the ability to pass a context and additional request options. +// +// See SetDefaultPolicyVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) SetDefaultPolicyVersionWithContext(ctx aws.Context, input *SetDefaultPolicyVersionInput, opts ...request.Option) (*SetDefaultPolicyVersionOutput, error) { + req, out := c.SetDefaultPolicyVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetLoggingOptions = "SetLoggingOptions" + +// SetLoggingOptionsRequest generates a "aws/request.Request" representing the +// client's request for the SetLoggingOptions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetLoggingOptions for more information on using the SetLoggingOptions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetLoggingOptionsRequest method. +// req, resp := client.SetLoggingOptionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) SetLoggingOptionsRequest(input *SetLoggingOptionsInput) (req *request.Request, output *SetLoggingOptionsOutput) { + op := &request.Operation{ + Name: opSetLoggingOptions, + HTTPMethod: "POST", + HTTPPath: "/loggingOptions", + } + + if input == nil { + input = &SetLoggingOptionsInput{} + } + + output = &SetLoggingOptionsOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// SetLoggingOptions API operation for AWS IoT. +// +// Sets the logging options. +// +// NOTE: use of this command is not recommended. Use SetV2LoggingOptions instead. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation SetLoggingOptions for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +func (c *IoT) SetLoggingOptions(input *SetLoggingOptionsInput) (*SetLoggingOptionsOutput, error) { + req, out := c.SetLoggingOptionsRequest(input) + return out, req.Send() +} + +// SetLoggingOptionsWithContext is the same as SetLoggingOptions with the addition of +// the ability to pass a context and additional request options. +// +// See SetLoggingOptions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) SetLoggingOptionsWithContext(ctx aws.Context, input *SetLoggingOptionsInput, opts ...request.Option) (*SetLoggingOptionsOutput, error) { + req, out := c.SetLoggingOptionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetV2LoggingLevel = "SetV2LoggingLevel" + +// SetV2LoggingLevelRequest generates a "aws/request.Request" representing the +// client's request for the SetV2LoggingLevel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetV2LoggingLevel for more information on using the SetV2LoggingLevel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetV2LoggingLevelRequest method. +// req, resp := client.SetV2LoggingLevelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) SetV2LoggingLevelRequest(input *SetV2LoggingLevelInput) (req *request.Request, output *SetV2LoggingLevelOutput) { + op := &request.Operation{ + Name: opSetV2LoggingLevel, + HTTPMethod: "POST", + HTTPPath: "/v2LoggingLevel", + } + + if input == nil { + input = &SetV2LoggingLevelInput{} + } + + output = &SetV2LoggingLevelOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// SetV2LoggingLevel API operation for AWS IoT. +// +// Sets the logging level. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation SetV2LoggingLevel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. +// +// * ErrCodeNotConfiguredException "NotConfiguredException" +// The resource is not configured. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. 
+// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +func (c *IoT) SetV2LoggingLevel(input *SetV2LoggingLevelInput) (*SetV2LoggingLevelOutput, error) { + req, out := c.SetV2LoggingLevelRequest(input) + return out, req.Send() +} + +// SetV2LoggingLevelWithContext is the same as SetV2LoggingLevel with the addition of +// the ability to pass a context and additional request options. +// +// See SetV2LoggingLevel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) SetV2LoggingLevelWithContext(ctx aws.Context, input *SetV2LoggingLevelInput, opts ...request.Option) (*SetV2LoggingLevelOutput, error) { + req, out := c.SetV2LoggingLevelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetV2LoggingOptions = "SetV2LoggingOptions" + +// SetV2LoggingOptionsRequest generates a "aws/request.Request" representing the +// client's request for the SetV2LoggingOptions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetV2LoggingOptions for more information on using the SetV2LoggingOptions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetV2LoggingOptionsRequest method. +// req, resp := client.SetV2LoggingOptionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) SetV2LoggingOptionsRequest(input *SetV2LoggingOptionsInput) (req *request.Request, output *SetV2LoggingOptionsOutput) { + op := &request.Operation{ + Name: opSetV2LoggingOptions, + HTTPMethod: "POST", + HTTPPath: "/v2LoggingOptions", + } + + if input == nil { + input = &SetV2LoggingOptionsInput{} + } + + output = &SetV2LoggingOptionsOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// SetV2LoggingOptions API operation for AWS IoT. +// +// Sets the logging options for the V2 logging service. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation SetV2LoggingOptions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// An unexpected error has occurred. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. 
+// +func (c *IoT) SetV2LoggingOptions(input *SetV2LoggingOptionsInput) (*SetV2LoggingOptionsOutput, error) { + req, out := c.SetV2LoggingOptionsRequest(input) + return out, req.Send() +} + +// SetV2LoggingOptionsWithContext is the same as SetV2LoggingOptions with the addition of +// the ability to pass a context and additional request options. +// +// See SetV2LoggingOptions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) SetV2LoggingOptionsWithContext(ctx aws.Context, input *SetV2LoggingOptionsInput, opts ...request.Option) (*SetV2LoggingOptionsOutput, error) { + req, out := c.SetV2LoggingOptionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartOnDemandAuditTask = "StartOnDemandAuditTask" + +// StartOnDemandAuditTaskRequest generates a "aws/request.Request" representing the +// client's request for the StartOnDemandAuditTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartOnDemandAuditTask for more information on using the StartOnDemandAuditTask +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartOnDemandAuditTaskRequest method. +// req, resp := client.StartOnDemandAuditTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) StartOnDemandAuditTaskRequest(input *StartOnDemandAuditTaskInput) (req *request.Request, output *StartOnDemandAuditTaskOutput) { + op := &request.Operation{ + Name: opStartOnDemandAuditTask, + HTTPMethod: "POST", + HTTPPath: "/audit/tasks", + } + + if input == nil { + input = &StartOnDemandAuditTaskInput{} + } + + output = &StartOnDemandAuditTaskOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartOnDemandAuditTask API operation for AWS IoT. +// +// Starts an on-demand Device Defender audit. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation StartOnDemandAuditTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. 
+// +func (c *IoT) StartOnDemandAuditTask(input *StartOnDemandAuditTaskInput) (*StartOnDemandAuditTaskOutput, error) { + req, out := c.StartOnDemandAuditTaskRequest(input) + return out, req.Send() +} + +// StartOnDemandAuditTaskWithContext is the same as StartOnDemandAuditTask with the addition of +// the ability to pass a context and additional request options. +// +// See StartOnDemandAuditTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) StartOnDemandAuditTaskWithContext(ctx aws.Context, input *StartOnDemandAuditTaskInput, opts ...request.Option) (*StartOnDemandAuditTaskOutput, error) { + req, out := c.StartOnDemandAuditTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartThingRegistrationTask = "StartThingRegistrationTask" + +// StartThingRegistrationTaskRequest generates a "aws/request.Request" representing the +// client's request for the StartThingRegistrationTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartThingRegistrationTask for more information on using the StartThingRegistrationTask +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartThingRegistrationTaskRequest method. +// req, resp := client.StartThingRegistrationTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) StartThingRegistrationTaskRequest(input *StartThingRegistrationTaskInput) (req *request.Request, output *StartThingRegistrationTaskOutput) { + op := &request.Operation{ + Name: opStartThingRegistrationTask, + HTTPMethod: "POST", + HTTPPath: "/thing-registration-tasks", + } + + if input == nil { + input = &StartThingRegistrationTaskInput{} + } + + output = &StartThingRegistrationTaskOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartThingRegistrationTask API operation for AWS IoT. +// +// Creates a bulk thing provisioning task. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation StartThingRegistrationTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. 
+// +func (c *IoT) StartThingRegistrationTask(input *StartThingRegistrationTaskInput) (*StartThingRegistrationTaskOutput, error) { + req, out := c.StartThingRegistrationTaskRequest(input) + return out, req.Send() +} + +// StartThingRegistrationTaskWithContext is the same as StartThingRegistrationTask with the addition of +// the ability to pass a context and additional request options. +// +// See StartThingRegistrationTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) StartThingRegistrationTaskWithContext(ctx aws.Context, input *StartThingRegistrationTaskInput, opts ...request.Option) (*StartThingRegistrationTaskOutput, error) { + req, out := c.StartThingRegistrationTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopThingRegistrationTask = "StopThingRegistrationTask" + +// StopThingRegistrationTaskRequest generates a "aws/request.Request" representing the +// client's request for the StopThingRegistrationTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopThingRegistrationTask for more information on using the StopThingRegistrationTask +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopThingRegistrationTaskRequest method. +// req, resp := client.StopThingRegistrationTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) StopThingRegistrationTaskRequest(input *StopThingRegistrationTaskInput) (req *request.Request, output *StopThingRegistrationTaskOutput) { + op := &request.Operation{ + Name: opStopThingRegistrationTask, + HTTPMethod: "PUT", + HTTPPath: "/thing-registration-tasks/{taskId}/cancel", + } + + if input == nil { + input = &StopThingRegistrationTaskInput{} + } + + output = &StopThingRegistrationTaskOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopThingRegistrationTask API operation for AWS IoT. +// +// Cancels a bulk thing provisioning task. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation StopThingRegistrationTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. 
+// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) StopThingRegistrationTask(input *StopThingRegistrationTaskInput) (*StopThingRegistrationTaskOutput, error) { + req, out := c.StopThingRegistrationTaskRequest(input) + return out, req.Send() +} + +// StopThingRegistrationTaskWithContext is the same as StopThingRegistrationTask with the addition of +// the ability to pass a context and additional request options. +// +// See StopThingRegistrationTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) StopThingRegistrationTaskWithContext(ctx aws.Context, input *StopThingRegistrationTaskInput, opts ...request.Option) (*StopThingRegistrationTaskOutput, error) { + req, out := c.StopThingRegistrationTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTagResource = "TagResource" + +// TagResourceRequest generates a "aws/request.Request" representing the +// client's request for the TagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TagResource for more information on using the TagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagResourceRequest method. +// req, resp := client.TagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) TagResourceRequest(input *TagResourceInput) (req *request.Request, output *TagResourceOutput) { + op := &request.Operation{ + Name: opTagResource, + HTTPMethod: "POST", + HTTPPath: "/tags", + } + + if input == nil { + input = &TagResourceInput{} + } + + output = &TagResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// TagResource API operation for AWS IoT. +// +// Adds to or modifies the tags of the given resource. Tags are metadata which +// can be used to manage a resource. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation TagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. 
+// +func (c *IoT) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + return out, req.Send() +} + +// TagResourceWithContext is the same as TagResource with the addition of +// the ability to pass a context and additional request options. +// +// See TagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) TagResourceWithContext(ctx aws.Context, input *TagResourceInput, opts ...request.Option) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTestAuthorization = "TestAuthorization" + +// TestAuthorizationRequest generates a "aws/request.Request" representing the +// client's request for the TestAuthorization operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TestAuthorization for more information on using the TestAuthorization +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TestAuthorizationRequest method. +// req, resp := client.TestAuthorizationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) TestAuthorizationRequest(input *TestAuthorizationInput) (req *request.Request, output *TestAuthorizationOutput) { + op := &request.Operation{ + Name: opTestAuthorization, + HTTPMethod: "POST", + HTTPPath: "/test-authorization", + } + + if input == nil { + input = &TestAuthorizationInput{} + } + + output = &TestAuthorizationOutput{} + req = c.newRequest(op, input, output) + return +} + +// TestAuthorization API operation for AWS IoT. +// +// Tests if a specified principal is authorized to perform an AWS IoT action +// on a specified resource. Use this to test and debug the authorization behavior +// of devices that connect to the AWS IoT device gateway. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation TestAuthorization for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. 
+// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +func (c *IoT) TestAuthorization(input *TestAuthorizationInput) (*TestAuthorizationOutput, error) { + req, out := c.TestAuthorizationRequest(input) + return out, req.Send() +} + +// TestAuthorizationWithContext is the same as TestAuthorization with the addition of +// the ability to pass a context and additional request options. +// +// See TestAuthorization for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) TestAuthorizationWithContext(ctx aws.Context, input *TestAuthorizationInput, opts ...request.Option) (*TestAuthorizationOutput, error) { + req, out := c.TestAuthorizationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTestInvokeAuthorizer = "TestInvokeAuthorizer" + +// TestInvokeAuthorizerRequest generates a "aws/request.Request" representing the +// client's request for the TestInvokeAuthorizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TestInvokeAuthorizer for more information on using the TestInvokeAuthorizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TestInvokeAuthorizerRequest method. +// req, resp := client.TestInvokeAuthorizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) TestInvokeAuthorizerRequest(input *TestInvokeAuthorizerInput) (req *request.Request, output *TestInvokeAuthorizerOutput) { + op := &request.Operation{ + Name: opTestInvokeAuthorizer, + HTTPMethod: "POST", + HTTPPath: "/authorizer/{authorizerName}/test", + } + + if input == nil { + input = &TestInvokeAuthorizerInput{} + } + + output = &TestInvokeAuthorizerOutput{} + req = c.newRequest(op, input, output) + return +} + +// TestInvokeAuthorizer API operation for AWS IoT. +// +// Tests a custom authorization behavior by invoking a specified custom authorizer. +// Use this to test and debug the custom authorization behavior of devices that +// connect to the AWS IoT device gateway. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation TestInvokeAuthorizer for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. 
+// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeInvalidResponseException "InvalidResponseException" +// The response is invalid. +// +func (c *IoT) TestInvokeAuthorizer(input *TestInvokeAuthorizerInput) (*TestInvokeAuthorizerOutput, error) { + req, out := c.TestInvokeAuthorizerRequest(input) + return out, req.Send() +} + +// TestInvokeAuthorizerWithContext is the same as TestInvokeAuthorizer with the addition of +// the ability to pass a context and additional request options. +// +// See TestInvokeAuthorizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) TestInvokeAuthorizerWithContext(ctx aws.Context, input *TestInvokeAuthorizerInput, opts ...request.Option) (*TestInvokeAuthorizerOutput, error) { + req, out := c.TestInvokeAuthorizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTransferCertificate = "TransferCertificate" + +// TransferCertificateRequest generates a "aws/request.Request" representing the +// client's request for the TransferCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TransferCertificate for more information on using the TransferCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TransferCertificateRequest method. +// req, resp := client.TransferCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) TransferCertificateRequest(input *TransferCertificateInput) (req *request.Request, output *TransferCertificateOutput) { + op := &request.Operation{ + Name: opTransferCertificate, + HTTPMethod: "PATCH", + HTTPPath: "/transfer-certificate/{certificateId}", + } + + if input == nil { + input = &TransferCertificateInput{} + } + + output = &TransferCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// TransferCertificate API operation for AWS IoT. +// +// Transfers the specified certificate to the specified AWS account. +// +// You can cancel the transfer until it is acknowledged by the recipient. +// +// No notification is sent to the transfer destination's account. It is up to +// the caller to notify the transfer target. +// +// The certificate being transferred must not be in the ACTIVE state. You can +// use the UpdateCertificate API to deactivate it. +// +// The certificate must not have any policies attached to it. You can use the +// DetachPrincipalPolicy API to detach them. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
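+//
+// Illustrative sketch (not generated SDK documentation): transferring an
+// inactive certificate to another account. Assumes svc is an *iot.IoT client;
+// the certificate ID and target account are hypothetical placeholders.
+//
+//    out, err := svc.TransferCertificate(&iot.TransferCertificateInput{
+//        CertificateId:    aws.String("certificateIdPlaceholder"),
+//        TargetAwsAccount: aws.String("111122223333"),
+//        TransferMessage:  aws.String("Handing over to the target account"),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(out.TransferredCertificateArn))
+//    }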
+// +// See the AWS API reference guide for AWS IoT's +// API operation TransferCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeCertificateStateException "CertificateStateException" +// The certificate operation is not allowed. +// +// * ErrCodeTransferConflictException "TransferConflictException" +// You can't transfer the certificate because authorization policies are still +// attached. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) TransferCertificate(input *TransferCertificateInput) (*TransferCertificateOutput, error) { + req, out := c.TransferCertificateRequest(input) + return out, req.Send() +} + +// TransferCertificateWithContext is the same as TransferCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See TransferCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) TransferCertificateWithContext(ctx aws.Context, input *TransferCertificateInput, opts ...request.Option) (*TransferCertificateOutput, error) { + req, out := c.TransferCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUntagResource = "UntagResource" + +// UntagResourceRequest generates a "aws/request.Request" representing the +// client's request for the UntagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UntagResource for more information on using the UntagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UntagResourceRequest method. +// req, resp := client.UntagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UntagResourceRequest(input *UntagResourceInput) (req *request.Request, output *UntagResourceOutput) { + op := &request.Operation{ + Name: opUntagResource, + HTTPMethod: "POST", + HTTPPath: "/untag", + } + + if input == nil { + input = &UntagResourceInput{} + } + + output = &UntagResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// UntagResource API operation for AWS IoT. +// +// Removes the given tags (metadata) from the resource. 
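+//
+// Illustrative sketch (not generated SDK documentation): removing a tag key
+// from a resource. Assumes svc is an *iot.IoT client; the ARN and tag key are
+// hypothetical.
+//
+//    _, err := svc.UntagResource(&iot.UntagResourceInput{
+//        ResourceArn: aws.String("arn:aws:iot:us-east-1:123456789012:billinggroup/example"),
+//        TagKeys:     []*string{aws.String("Environment")},
+//    })
+//    if err != nil {
+//        fmt.Println(err)
+//    }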
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UntagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +func (c *IoT) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + return out, req.Send() +} + +// UntagResourceWithContext is the same as UntagResource with the addition of +// the ability to pass a context and additional request options. +// +// See UntagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UntagResourceWithContext(ctx aws.Context, input *UntagResourceInput, opts ...request.Option) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateAccountAuditConfiguration = "UpdateAccountAuditConfiguration" + +// UpdateAccountAuditConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAccountAuditConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAccountAuditConfiguration for more information on using the UpdateAccountAuditConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateAccountAuditConfigurationRequest method. +// req, resp := client.UpdateAccountAuditConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateAccountAuditConfigurationRequest(input *UpdateAccountAuditConfigurationInput) (req *request.Request, output *UpdateAccountAuditConfigurationOutput) { + op := &request.Operation{ + Name: opUpdateAccountAuditConfiguration, + HTTPMethod: "PATCH", + HTTPPath: "/audit/configuration", + } + + if input == nil { + input = &UpdateAccountAuditConfigurationInput{} + } + + output = &UpdateAccountAuditConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateAccountAuditConfiguration API operation for AWS IoT. +// +// Configures or reconfigures the Device Defender audit settings for this account. +// Settings include how audit notifications are sent and which audit checks +// are enabled or disabled. 
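+//
+// Illustrative sketch (not generated SDK documentation): enabling a single
+// audit check. Assumes svc is an *iot.IoT client; the check name and role ARN
+// are hypothetical and should be replaced with values valid for the account.
+//
+//    _, err := svc.UpdateAccountAuditConfiguration(&iot.UpdateAccountAuditConfigurationInput{
+//        RoleArn: aws.String("arn:aws:iam::123456789012:role/AWSIoTAuditRole"),
+//        AuditCheckConfigurations: map[string]*iot.AuditCheckConfiguration{
+//            "LOGGING_DISABLED_CHECK": {Enabled: aws.Bool(true)},
+//        },
+//    })
+//    if err != nil {
+//        fmt.Println(err)
+//    }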
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateAccountAuditConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateAccountAuditConfiguration(input *UpdateAccountAuditConfigurationInput) (*UpdateAccountAuditConfigurationOutput, error) { + req, out := c.UpdateAccountAuditConfigurationRequest(input) + return out, req.Send() +} + +// UpdateAccountAuditConfigurationWithContext is the same as UpdateAccountAuditConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateAccountAuditConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateAccountAuditConfigurationWithContext(ctx aws.Context, input *UpdateAccountAuditConfigurationInput, opts ...request.Option) (*UpdateAccountAuditConfigurationOutput, error) { + req, out := c.UpdateAccountAuditConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateAuthorizer = "UpdateAuthorizer" + +// UpdateAuthorizerRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAuthorizer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAuthorizer for more information on using the UpdateAuthorizer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateAuthorizerRequest method. +// req, resp := client.UpdateAuthorizerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateAuthorizerRequest(input *UpdateAuthorizerInput) (req *request.Request, output *UpdateAuthorizerOutput) { + op := &request.Operation{ + Name: opUpdateAuthorizer, + HTTPMethod: "PUT", + HTTPPath: "/authorizer/{authorizerName}", + } + + if input == nil { + input = &UpdateAuthorizerInput{} + } + + output = &UpdateAuthorizerOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateAuthorizer API operation for AWS IoT. +// +// Updates an authorizer. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateAuthorizer for usage and error information. 
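+//
+// Illustrative sketch (not generated SDK documentation): pointing an existing
+// custom authorizer at a different Lambda function. Assumes svc is an *iot.IoT
+// client; the authorizer name and function ARN are hypothetical.
+//
+//    _, err := svc.UpdateAuthorizer(&iot.UpdateAuthorizerInput{
+//        AuthorizerName:        aws.String("example-authorizer"),
+//        AuthorizerFunctionArn: aws.String("arn:aws:lambda:us-east-1:123456789012:function:example-authorizer-fn"),
+//        Status:                aws.String(iot.AuthorizerStatusActive),
+//    })
+//    if err != nil {
+//        fmt.Println(err)
+//    }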
+// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// A limit has been exceeded. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateAuthorizer(input *UpdateAuthorizerInput) (*UpdateAuthorizerOutput, error) { + req, out := c.UpdateAuthorizerRequest(input) + return out, req.Send() +} + +// UpdateAuthorizerWithContext is the same as UpdateAuthorizer with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateAuthorizer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateAuthorizerWithContext(ctx aws.Context, input *UpdateAuthorizerInput, opts ...request.Option) (*UpdateAuthorizerOutput, error) { + req, out := c.UpdateAuthorizerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateBillingGroup = "UpdateBillingGroup" + +// UpdateBillingGroupRequest generates a "aws/request.Request" representing the +// client's request for the UpdateBillingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateBillingGroup for more information on using the UpdateBillingGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateBillingGroupRequest method. +// req, resp := client.UpdateBillingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateBillingGroupRequest(input *UpdateBillingGroupInput) (req *request.Request, output *UpdateBillingGroupOutput) { + op := &request.Operation{ + Name: opUpdateBillingGroup, + HTTPMethod: "PATCH", + HTTPPath: "/billing-groups/{billingGroupName}", + } + + if input == nil { + input = &UpdateBillingGroupInput{} + } + + output = &UpdateBillingGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateBillingGroup API operation for AWS IoT. +// +// Updates information about the billing group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
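+//
+// Illustrative sketch (not generated SDK documentation): updating a billing
+// group's description with optimistic locking via ExpectedVersion. Assumes svc
+// is an *iot.IoT client; the group name, description, and version are
+// hypothetical.
+//
+//    out, err := svc.UpdateBillingGroup(&iot.UpdateBillingGroupInput{
+//        BillingGroupName: aws.String("example-billing-group"),
+//        BillingGroupProperties: &iot.BillingGroupProperties{
+//            BillingGroupDescription: aws.String("Devices billed to the example team"),
+//        },
+//        ExpectedVersion: aws.Int64(1),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.Int64Value(out.Version))
+//    }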
+// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateBillingGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) UpdateBillingGroup(input *UpdateBillingGroupInput) (*UpdateBillingGroupOutput, error) { + req, out := c.UpdateBillingGroupRequest(input) + return out, req.Send() +} + +// UpdateBillingGroupWithContext is the same as UpdateBillingGroup with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateBillingGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateBillingGroupWithContext(ctx aws.Context, input *UpdateBillingGroupInput, opts ...request.Option) (*UpdateBillingGroupOutput, error) { + req, out := c.UpdateBillingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateCACertificate = "UpdateCACertificate" + +// UpdateCACertificateRequest generates a "aws/request.Request" representing the +// client's request for the UpdateCACertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateCACertificate for more information on using the UpdateCACertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateCACertificateRequest method. +// req, resp := client.UpdateCACertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateCACertificateRequest(input *UpdateCACertificateInput) (req *request.Request, output *UpdateCACertificateOutput) { + op := &request.Operation{ + Name: opUpdateCACertificate, + HTTPMethod: "PUT", + HTTPPath: "/cacertificate/{caCertificateId}", + } + + if input == nil { + input = &UpdateCACertificateInput{} + } + + output = &UpdateCACertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateCACertificate API operation for AWS IoT. +// +// Updates a registered CA certificate. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateCACertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateCACertificate(input *UpdateCACertificateInput) (*UpdateCACertificateOutput, error) { + req, out := c.UpdateCACertificateRequest(input) + return out, req.Send() +} + +// UpdateCACertificateWithContext is the same as UpdateCACertificate with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateCACertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateCACertificateWithContext(ctx aws.Context, input *UpdateCACertificateInput, opts ...request.Option) (*UpdateCACertificateOutput, error) { + req, out := c.UpdateCACertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateCertificate = "UpdateCertificate" + +// UpdateCertificateRequest generates a "aws/request.Request" representing the +// client's request for the UpdateCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateCertificate for more information on using the UpdateCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateCertificateRequest method. +// req, resp := client.UpdateCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateCertificateRequest(input *UpdateCertificateInput) (req *request.Request, output *UpdateCertificateOutput) { + op := &request.Operation{ + Name: opUpdateCertificate, + HTTPMethod: "PUT", + HTTPPath: "/certificates/{certificateId}", + } + + if input == nil { + input = &UpdateCertificateInput{} + } + + output = &UpdateCertificateOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateCertificate API operation for AWS IoT. 
+// +// Updates the status of the specified certificate. This operation is idempotent. +// +// Moving a certificate from the ACTIVE state (including REVOKED) will not disconnect +// currently connected devices, but these devices will be unable to reconnect. +// +// The ACTIVE state is required to authenticate devices connecting to AWS IoT +// using a certificate. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeCertificateStateException "CertificateStateException" +// The certificate operation is not allowed. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateCertificate(input *UpdateCertificateInput) (*UpdateCertificateOutput, error) { + req, out := c.UpdateCertificateRequest(input) + return out, req.Send() +} + +// UpdateCertificateWithContext is the same as UpdateCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateCertificateWithContext(ctx aws.Context, input *UpdateCertificateInput, opts ...request.Option) (*UpdateCertificateOutput, error) { + req, out := c.UpdateCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateDynamicThingGroup = "UpdateDynamicThingGroup" + +// UpdateDynamicThingGroupRequest generates a "aws/request.Request" representing the +// client's request for the UpdateDynamicThingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateDynamicThingGroup for more information on using the UpdateDynamicThingGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateDynamicThingGroupRequest method. 
+// req, resp := client.UpdateDynamicThingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateDynamicThingGroupRequest(input *UpdateDynamicThingGroupInput) (req *request.Request, output *UpdateDynamicThingGroupOutput) { + op := &request.Operation{ + Name: opUpdateDynamicThingGroup, + HTTPMethod: "PATCH", + HTTPPath: "/dynamic-thing-groups/{thingGroupName}", + } + + if input == nil { + input = &UpdateDynamicThingGroupInput{} + } + + output = &UpdateDynamicThingGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateDynamicThingGroup API operation for AWS IoT. +// +// Updates a dynamic thing group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateDynamicThingGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidQueryException "InvalidQueryException" +// The query is invalid. +// +func (c *IoT) UpdateDynamicThingGroup(input *UpdateDynamicThingGroupInput) (*UpdateDynamicThingGroupOutput, error) { + req, out := c.UpdateDynamicThingGroupRequest(input) + return out, req.Send() +} + +// UpdateDynamicThingGroupWithContext is the same as UpdateDynamicThingGroup with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateDynamicThingGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateDynamicThingGroupWithContext(ctx aws.Context, input *UpdateDynamicThingGroupInput, opts ...request.Option) (*UpdateDynamicThingGroupOutput, error) { + req, out := c.UpdateDynamicThingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateEventConfigurations = "UpdateEventConfigurations" + +// UpdateEventConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the UpdateEventConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateEventConfigurations for more information on using the UpdateEventConfigurations +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateEventConfigurationsRequest method. +// req, resp := client.UpdateEventConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateEventConfigurationsRequest(input *UpdateEventConfigurationsInput) (req *request.Request, output *UpdateEventConfigurationsOutput) { + op := &request.Operation{ + Name: opUpdateEventConfigurations, + HTTPMethod: "PATCH", + HTTPPath: "/event-configurations", + } + + if input == nil { + input = &UpdateEventConfigurationsInput{} + } + + output = &UpdateEventConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateEventConfigurations API operation for AWS IoT. +// +// Updates the event configurations. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateEventConfigurations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +func (c *IoT) UpdateEventConfigurations(input *UpdateEventConfigurationsInput) (*UpdateEventConfigurationsOutput, error) { + req, out := c.UpdateEventConfigurationsRequest(input) + return out, req.Send() +} + +// UpdateEventConfigurationsWithContext is the same as UpdateEventConfigurations with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateEventConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateEventConfigurationsWithContext(ctx aws.Context, input *UpdateEventConfigurationsInput, opts ...request.Option) (*UpdateEventConfigurationsOutput, error) { + req, out := c.UpdateEventConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateIndexingConfiguration = "UpdateIndexingConfiguration" + +// UpdateIndexingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the UpdateIndexingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateIndexingConfiguration for more information on using the UpdateIndexingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
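+//
+// Illustrative sketch (not generated SDK documentation) of injecting a custom
+// header into the request before sending; the header name and value are
+// hypothetical.
+//
+//    req, out := client.UpdateIndexingConfigurationRequest(params)
+//    req.HTTPRequest.Header.Set("X-Example-Trace-Id", "debug-1234")
+//    if err := req.Send(); err == nil {
+//        fmt.Println(out)
+//    }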
+// +// +// // Example sending a request using the UpdateIndexingConfigurationRequest method. +// req, resp := client.UpdateIndexingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateIndexingConfigurationRequest(input *UpdateIndexingConfigurationInput) (req *request.Request, output *UpdateIndexingConfigurationOutput) { + op := &request.Operation{ + Name: opUpdateIndexingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/indexing/config", + } + + if input == nil { + input = &UpdateIndexingConfigurationInput{} + } + + output = &UpdateIndexingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateIndexingConfiguration API operation for AWS IoT. +// +// Updates the search configuration. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateIndexingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateIndexingConfiguration(input *UpdateIndexingConfigurationInput) (*UpdateIndexingConfigurationOutput, error) { + req, out := c.UpdateIndexingConfigurationRequest(input) + return out, req.Send() +} + +// UpdateIndexingConfigurationWithContext is the same as UpdateIndexingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateIndexingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateIndexingConfigurationWithContext(ctx aws.Context, input *UpdateIndexingConfigurationInput, opts ...request.Option) (*UpdateIndexingConfigurationOutput, error) { + req, out := c.UpdateIndexingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateJob = "UpdateJob" + +// UpdateJobRequest generates a "aws/request.Request" representing the +// client's request for the UpdateJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateJob for more information on using the UpdateJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the UpdateJobRequest method. +// req, resp := client.UpdateJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateJobRequest(input *UpdateJobInput) (req *request.Request, output *UpdateJobOutput) { + op := &request.Operation{ + Name: opUpdateJob, + HTTPMethod: "PATCH", + HTTPPath: "/jobs/{jobId}", + } + + if input == nil { + input = &UpdateJobInput{} + } + + output = &UpdateJobOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UpdateJob API operation for AWS IoT. +// +// Updates supported fields of the specified job. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateJob for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +func (c *IoT) UpdateJob(input *UpdateJobInput) (*UpdateJobOutput, error) { + req, out := c.UpdateJobRequest(input) + return out, req.Send() +} + +// UpdateJobWithContext is the same as UpdateJob with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateJobWithContext(ctx aws.Context, input *UpdateJobInput, opts ...request.Option) (*UpdateJobOutput, error) { + req, out := c.UpdateJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateRoleAlias = "UpdateRoleAlias" + +// UpdateRoleAliasRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRoleAlias operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateRoleAlias for more information on using the UpdateRoleAlias +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateRoleAliasRequest method. 
+// req, resp := client.UpdateRoleAliasRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateRoleAliasRequest(input *UpdateRoleAliasInput) (req *request.Request, output *UpdateRoleAliasOutput) { + op := &request.Operation{ + Name: opUpdateRoleAlias, + HTTPMethod: "PUT", + HTTPPath: "/role-aliases/{roleAlias}", + } + + if input == nil { + input = &UpdateRoleAliasInput{} + } + + output = &UpdateRoleAliasOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateRoleAlias API operation for AWS IoT. +// +// Updates a role alias. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateRoleAlias for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateRoleAlias(input *UpdateRoleAliasInput) (*UpdateRoleAliasOutput, error) { + req, out := c.UpdateRoleAliasRequest(input) + return out, req.Send() +} + +// UpdateRoleAliasWithContext is the same as UpdateRoleAlias with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateRoleAlias for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateRoleAliasWithContext(ctx aws.Context, input *UpdateRoleAliasInput, opts ...request.Option) (*UpdateRoleAliasOutput, error) { + req, out := c.UpdateRoleAliasRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateScheduledAudit = "UpdateScheduledAudit" + +// UpdateScheduledAuditRequest generates a "aws/request.Request" representing the +// client's request for the UpdateScheduledAudit operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateScheduledAudit for more information on using the UpdateScheduledAudit +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateScheduledAuditRequest method. 
+// req, resp := client.UpdateScheduledAuditRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateScheduledAuditRequest(input *UpdateScheduledAuditInput) (req *request.Request, output *UpdateScheduledAuditOutput) { + op := &request.Operation{ + Name: opUpdateScheduledAudit, + HTTPMethod: "PATCH", + HTTPPath: "/audit/scheduledaudits/{scheduledAuditName}", + } + + if input == nil { + input = &UpdateScheduledAuditInput{} + } + + output = &UpdateScheduledAuditOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateScheduledAudit API operation for AWS IoT. +// +// Updates a scheduled audit, including what checks are performed and how often +// the audit takes place. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateScheduledAudit for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateScheduledAudit(input *UpdateScheduledAuditInput) (*UpdateScheduledAuditOutput, error) { + req, out := c.UpdateScheduledAuditRequest(input) + return out, req.Send() +} + +// UpdateScheduledAuditWithContext is the same as UpdateScheduledAudit with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateScheduledAudit for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateScheduledAuditWithContext(ctx aws.Context, input *UpdateScheduledAuditInput, opts ...request.Option) (*UpdateScheduledAuditOutput, error) { + req, out := c.UpdateScheduledAuditRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSecurityProfile = "UpdateSecurityProfile" + +// UpdateSecurityProfileRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSecurityProfile operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSecurityProfile for more information on using the UpdateSecurityProfile +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSecurityProfileRequest method. 
+// req, resp := client.UpdateSecurityProfileRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateSecurityProfileRequest(input *UpdateSecurityProfileInput) (req *request.Request, output *UpdateSecurityProfileOutput) { + op := &request.Operation{ + Name: opUpdateSecurityProfile, + HTTPMethod: "PATCH", + HTTPPath: "/security-profiles/{securityProfileName}", + } + + if input == nil { + input = &UpdateSecurityProfileInput{} + } + + output = &UpdateSecurityProfileOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSecurityProfile API operation for AWS IoT. +// +// Updates a Device Defender security profile. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateSecurityProfile for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateSecurityProfile(input *UpdateSecurityProfileInput) (*UpdateSecurityProfileOutput, error) { + req, out := c.UpdateSecurityProfileRequest(input) + return out, req.Send() +} + +// UpdateSecurityProfileWithContext is the same as UpdateSecurityProfile with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSecurityProfile for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateSecurityProfileWithContext(ctx aws.Context, input *UpdateSecurityProfileInput, opts ...request.Option) (*UpdateSecurityProfileOutput, error) { + req, out := c.UpdateSecurityProfileRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateStream = "UpdateStream" + +// UpdateStreamRequest generates a "aws/request.Request" representing the +// client's request for the UpdateStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateStream for more information on using the UpdateStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateStreamRequest method. 
+// req, resp := client.UpdateStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateStreamRequest(input *UpdateStreamInput) (req *request.Request, output *UpdateStreamOutput) { + op := &request.Operation{ + Name: opUpdateStream, + HTTPMethod: "PUT", + HTTPPath: "/streams/{streamId}", + } + + if input == nil { + input = &UpdateStreamInput{} + } + + output = &UpdateStreamOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateStream API operation for AWS IoT. +// +// Updates an existing stream. The stream version will be incremented by one. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) UpdateStream(input *UpdateStreamInput) (*UpdateStreamOutput, error) { + req, out := c.UpdateStreamRequest(input) + return out, req.Send() +} + +// UpdateStreamWithContext is the same as UpdateStream with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateStreamWithContext(ctx aws.Context, input *UpdateStreamInput, opts ...request.Option) (*UpdateStreamOutput, error) { + req, out := c.UpdateStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateThing = "UpdateThing" + +// UpdateThingRequest generates a "aws/request.Request" representing the +// client's request for the UpdateThing operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateThing for more information on using the UpdateThing +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateThingRequest method. 
+// req, resp := client.UpdateThingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateThingRequest(input *UpdateThingInput) (req *request.Request, output *UpdateThingOutput) { + op := &request.Operation{ + Name: opUpdateThing, + HTTPMethod: "PATCH", + HTTPPath: "/things/{thingName}", + } + + if input == nil { + input = &UpdateThingInput{} + } + + output = &UpdateThingOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateThing API operation for AWS IoT. +// +// Updates the data for a thing. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateThing for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeUnauthorizedException "UnauthorizedException" +// You are not authorized to perform this operation. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is temporarily unavailable. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) UpdateThing(input *UpdateThingInput) (*UpdateThingOutput, error) { + req, out := c.UpdateThingRequest(input) + return out, req.Send() +} + +// UpdateThingWithContext is the same as UpdateThing with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateThing for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateThingWithContext(ctx aws.Context, input *UpdateThingInput, opts ...request.Option) (*UpdateThingOutput, error) { + req, out := c.UpdateThingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateThingGroup = "UpdateThingGroup" + +// UpdateThingGroupRequest generates a "aws/request.Request" representing the +// client's request for the UpdateThingGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateThingGroup for more information on using the UpdateThingGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateThingGroupRequest method. 
+// req, resp := client.UpdateThingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateThingGroupRequest(input *UpdateThingGroupInput) (req *request.Request, output *UpdateThingGroupOutput) { + op := &request.Operation{ + Name: opUpdateThingGroup, + HTTPMethod: "PATCH", + HTTPPath: "/thing-groups/{thingGroupName}", + } + + if input == nil { + input = &UpdateThingGroupInput{} + } + + output = &UpdateThingGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateThingGroup API operation for AWS IoT. +// +// Update a thing group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateThingGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeVersionConflictException "VersionConflictException" +// An exception thrown when the version of an entity specified with the expectedVersion +// parameter does not match the latest version in the system. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) UpdateThingGroup(input *UpdateThingGroupInput) (*UpdateThingGroupOutput, error) { + req, out := c.UpdateThingGroupRequest(input) + return out, req.Send() +} + +// UpdateThingGroupWithContext is the same as UpdateThingGroup with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateThingGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateThingGroupWithContext(ctx aws.Context, input *UpdateThingGroupInput, opts ...request.Option) (*UpdateThingGroupOutput, error) { + req, out := c.UpdateThingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateThingGroupsForThing = "UpdateThingGroupsForThing" + +// UpdateThingGroupsForThingRequest generates a "aws/request.Request" representing the +// client's request for the UpdateThingGroupsForThing operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateThingGroupsForThing for more information on using the UpdateThingGroupsForThing +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateThingGroupsForThingRequest method. 
+// req, resp := client.UpdateThingGroupsForThingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) UpdateThingGroupsForThingRequest(input *UpdateThingGroupsForThingInput) (req *request.Request, output *UpdateThingGroupsForThingOutput) { + op := &request.Operation{ + Name: opUpdateThingGroupsForThing, + HTTPMethod: "PUT", + HTTPPath: "/thing-groups/updateThingGroupsForThing", + } + + if input == nil { + input = &UpdateThingGroupsForThingInput{} + } + + output = &UpdateThingGroupsForThingOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateThingGroupsForThing API operation for AWS IoT. +// +// Updates the groups to which the thing belongs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation UpdateThingGroupsForThing for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +func (c *IoT) UpdateThingGroupsForThing(input *UpdateThingGroupsForThingInput) (*UpdateThingGroupsForThingOutput, error) { + req, out := c.UpdateThingGroupsForThingRequest(input) + return out, req.Send() +} + +// UpdateThingGroupsForThingWithContext is the same as UpdateThingGroupsForThing with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateThingGroupsForThing for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) UpdateThingGroupsForThingWithContext(ctx aws.Context, input *UpdateThingGroupsForThingInput, opts ...request.Option) (*UpdateThingGroupsForThingOutput, error) { + req, out := c.UpdateThingGroupsForThingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opValidateSecurityProfileBehaviors = "ValidateSecurityProfileBehaviors" + +// ValidateSecurityProfileBehaviorsRequest generates a "aws/request.Request" representing the +// client's request for the ValidateSecurityProfileBehaviors operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ValidateSecurityProfileBehaviors for more information on using the ValidateSecurityProfileBehaviors +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ValidateSecurityProfileBehaviorsRequest method. 
+// req, resp := client.ValidateSecurityProfileBehaviorsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *IoT) ValidateSecurityProfileBehaviorsRequest(input *ValidateSecurityProfileBehaviorsInput) (req *request.Request, output *ValidateSecurityProfileBehaviorsOutput) { + op := &request.Operation{ + Name: opValidateSecurityProfileBehaviors, + HTTPMethod: "POST", + HTTPPath: "/security-profile-behaviors/validate", + } + + if input == nil { + input = &ValidateSecurityProfileBehaviorsInput{} + } + + output = &ValidateSecurityProfileBehaviorsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ValidateSecurityProfileBehaviors API operation for AWS IoT. +// +// Validates a Device Defender security profile behaviors specification. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS IoT's +// API operation ValidateSecurityProfileBehaviors for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRequestException "InvalidRequestException" +// The request is not valid. +// +// * ErrCodeThrottlingException "ThrottlingException" +// The rate exceeds the limit. +// +// * ErrCodeInternalFailureException "InternalFailureException" +// An unexpected error has occurred. +// +func (c *IoT) ValidateSecurityProfileBehaviors(input *ValidateSecurityProfileBehaviorsInput) (*ValidateSecurityProfileBehaviorsOutput, error) { + req, out := c.ValidateSecurityProfileBehaviorsRequest(input) + return out, req.Send() +} + +// ValidateSecurityProfileBehaviorsWithContext is the same as ValidateSecurityProfileBehaviors with the addition of +// the ability to pass a context and additional request options. +// +// See ValidateSecurityProfileBehaviors for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *IoT) ValidateSecurityProfileBehaviorsWithContext(ctx aws.Context, input *ValidateSecurityProfileBehaviorsInput, opts ...request.Option) (*ValidateSecurityProfileBehaviorsOutput, error) { + req, out := c.ValidateSecurityProfileBehaviorsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Details of abort criteria to abort the job. +type AbortConfig struct { + _ struct{} `type:"structure"` + + // The list of abort criteria to define rules to abort the job. + // + // CriteriaList is a required field + CriteriaList []*AbortCriteria `locationName:"criteriaList" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s AbortConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AbortConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
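+//
+// As an illustrative sketch only (the "CANCEL" action and "FAILED" failure
+// type string values are assumed here), an AbortConfig with a single
+// criterion can be built with the setters below and checked client-side
+// before it is sent:
+//
+//    cfg := (&iot.AbortConfig{}).SetCriteriaList([]*iot.AbortCriteria{
+//        (&iot.AbortCriteria{}).
+//            SetAction("CANCEL").
+//            SetFailureType("FAILED").
+//            SetMinNumberOfExecutedThings(10).
+//            SetThresholdPercentage(20.5),
+//    })
+//    if err := cfg.Validate(); err != nil {
+//        log.Println("invalid abort config:", err)
+//    }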
+func (s *AbortConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AbortConfig"} + if s.CriteriaList == nil { + invalidParams.Add(request.NewErrParamRequired("CriteriaList")) + } + if s.CriteriaList != nil && len(s.CriteriaList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CriteriaList", 1)) + } + if s.CriteriaList != nil { + for i, v := range s.CriteriaList { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CriteriaList", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCriteriaList sets the CriteriaList field's value. +func (s *AbortConfig) SetCriteriaList(v []*AbortCriteria) *AbortConfig { + s.CriteriaList = v + return s +} + +// Details of abort criteria to define rules to abort the job. +type AbortCriteria struct { + _ struct{} `type:"structure"` + + // The type of abort action to initiate a job abort. + // + // Action is a required field + Action *string `locationName:"action" type:"string" required:"true" enum:"AbortAction"` + + // The type of job execution failure to define a rule to initiate a job abort. + // + // FailureType is a required field + FailureType *string `locationName:"failureType" type:"string" required:"true" enum:"JobExecutionFailureType"` + + // Minimum number of executed things before evaluating an abort rule. + // + // MinNumberOfExecutedThings is a required field + MinNumberOfExecutedThings *int64 `locationName:"minNumberOfExecutedThings" min:"1" type:"integer" required:"true"` + + // The threshold as a percentage of the total number of executed things that + // will initiate a job abort. + // + // AWS IoT supports up to two digits after the decimal (for example, 10.9 and + // 10.99, but not 10.999). + // + // ThresholdPercentage is a required field + ThresholdPercentage *float64 `locationName:"thresholdPercentage" type:"double" required:"true"` +} + +// String returns the string representation +func (s AbortCriteria) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AbortCriteria) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AbortCriteria) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AbortCriteria"} + if s.Action == nil { + invalidParams.Add(request.NewErrParamRequired("Action")) + } + if s.FailureType == nil { + invalidParams.Add(request.NewErrParamRequired("FailureType")) + } + if s.MinNumberOfExecutedThings == nil { + invalidParams.Add(request.NewErrParamRequired("MinNumberOfExecutedThings")) + } + if s.MinNumberOfExecutedThings != nil && *s.MinNumberOfExecutedThings < 1 { + invalidParams.Add(request.NewErrParamMinValue("MinNumberOfExecutedThings", 1)) + } + if s.ThresholdPercentage == nil { + invalidParams.Add(request.NewErrParamRequired("ThresholdPercentage")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAction sets the Action field's value. +func (s *AbortCriteria) SetAction(v string) *AbortCriteria { + s.Action = &v + return s +} + +// SetFailureType sets the FailureType field's value. +func (s *AbortCriteria) SetFailureType(v string) *AbortCriteria { + s.FailureType = &v + return s +} + +// SetMinNumberOfExecutedThings sets the MinNumberOfExecutedThings field's value. 
+func (s *AbortCriteria) SetMinNumberOfExecutedThings(v int64) *AbortCriteria { + s.MinNumberOfExecutedThings = &v + return s +} + +// SetThresholdPercentage sets the ThresholdPercentage field's value. +func (s *AbortCriteria) SetThresholdPercentage(v float64) *AbortCriteria { + s.ThresholdPercentage = &v + return s +} + +// The input for the AcceptCertificateTransfer operation. +type AcceptCertificateTransferInput struct { + _ struct{} `type:"structure"` + + // The ID of the certificate. (The last part of the certificate ARN contains + // the certificate ID.) + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` + + // Specifies whether the certificate is active. + SetAsActive *bool `location:"querystring" locationName:"setAsActive" type:"boolean"` +} + +// String returns the string representation +func (s AcceptCertificateTransferInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AcceptCertificateTransferInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AcceptCertificateTransferInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AcceptCertificateTransferInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateId sets the CertificateId field's value. +func (s *AcceptCertificateTransferInput) SetCertificateId(v string) *AcceptCertificateTransferInput { + s.CertificateId = &v + return s +} + +// SetSetAsActive sets the SetAsActive field's value. +func (s *AcceptCertificateTransferInput) SetSetAsActive(v bool) *AcceptCertificateTransferInput { + s.SetAsActive = &v + return s +} + +type AcceptCertificateTransferOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AcceptCertificateTransferOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AcceptCertificateTransferOutput) GoString() string { + return s.String() +} + +// Describes the actions associated with a rule. +type Action struct { + _ struct{} `type:"structure"` + + // Change the state of a CloudWatch alarm. + CloudwatchAlarm *CloudwatchAlarmAction `locationName:"cloudwatchAlarm" type:"structure"` + + // Capture a CloudWatch metric. + CloudwatchMetric *CloudwatchMetricAction `locationName:"cloudwatchMetric" type:"structure"` + + // Write to a DynamoDB table. + DynamoDB *DynamoDBAction `locationName:"dynamoDB" type:"structure"` + + // Write to a DynamoDB table. This is a new version of the DynamoDB action. + // It allows you to write each attribute in an MQTT message payload into a separate + // DynamoDB column. + DynamoDBv2 *DynamoDBv2Action `locationName:"dynamoDBv2" type:"structure"` + + // Write data to an Amazon Elasticsearch Service domain. + Elasticsearch *ElasticsearchAction `locationName:"elasticsearch" type:"structure"` + + // Write to an Amazon Kinesis Firehose stream. + Firehose *FirehoseAction `locationName:"firehose" type:"structure"` + + // Sends message data to an AWS IoT Analytics channel. 
+ IotAnalytics *IotAnalyticsAction `locationName:"iotAnalytics" type:"structure"` + + // Write data to an Amazon Kinesis stream. + Kinesis *KinesisAction `locationName:"kinesis" type:"structure"` + + // Invoke a Lambda function. + Lambda *LambdaAction `locationName:"lambda" type:"structure"` + + // Publish to another MQTT topic. + Republish *RepublishAction `locationName:"republish" type:"structure"` + + // Write to an Amazon S3 bucket. + S3 *S3Action `locationName:"s3" type:"structure"` + + // Send a message to a Salesforce IoT Cloud Input Stream. + Salesforce *SalesforceAction `locationName:"salesforce" type:"structure"` + + // Publish to an Amazon SNS topic. + Sns *SnsAction `locationName:"sns" type:"structure"` + + // Publish to an Amazon SQS queue. + Sqs *SqsAction `locationName:"sqs" type:"structure"` + + // Starts execution of a Step Functions state machine. + StepFunctions *StepFunctionsAction `locationName:"stepFunctions" type:"structure"` +} + +// String returns the string representation +func (s Action) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Action) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Action) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Action"} + if s.CloudwatchAlarm != nil { + if err := s.CloudwatchAlarm.Validate(); err != nil { + invalidParams.AddNested("CloudwatchAlarm", err.(request.ErrInvalidParams)) + } + } + if s.CloudwatchMetric != nil { + if err := s.CloudwatchMetric.Validate(); err != nil { + invalidParams.AddNested("CloudwatchMetric", err.(request.ErrInvalidParams)) + } + } + if s.DynamoDB != nil { + if err := s.DynamoDB.Validate(); err != nil { + invalidParams.AddNested("DynamoDB", err.(request.ErrInvalidParams)) + } + } + if s.DynamoDBv2 != nil { + if err := s.DynamoDBv2.Validate(); err != nil { + invalidParams.AddNested("DynamoDBv2", err.(request.ErrInvalidParams)) + } + } + if s.Elasticsearch != nil { + if err := s.Elasticsearch.Validate(); err != nil { + invalidParams.AddNested("Elasticsearch", err.(request.ErrInvalidParams)) + } + } + if s.Firehose != nil { + if err := s.Firehose.Validate(); err != nil { + invalidParams.AddNested("Firehose", err.(request.ErrInvalidParams)) + } + } + if s.Kinesis != nil { + if err := s.Kinesis.Validate(); err != nil { + invalidParams.AddNested("Kinesis", err.(request.ErrInvalidParams)) + } + } + if s.Lambda != nil { + if err := s.Lambda.Validate(); err != nil { + invalidParams.AddNested("Lambda", err.(request.ErrInvalidParams)) + } + } + if s.Republish != nil { + if err := s.Republish.Validate(); err != nil { + invalidParams.AddNested("Republish", err.(request.ErrInvalidParams)) + } + } + if s.S3 != nil { + if err := s.S3.Validate(); err != nil { + invalidParams.AddNested("S3", err.(request.ErrInvalidParams)) + } + } + if s.Salesforce != nil { + if err := s.Salesforce.Validate(); err != nil { + invalidParams.AddNested("Salesforce", err.(request.ErrInvalidParams)) + } + } + if s.Sns != nil { + if err := s.Sns.Validate(); err != nil { + invalidParams.AddNested("Sns", err.(request.ErrInvalidParams)) + } + } + if s.Sqs != nil { + if err := s.Sqs.Validate(); err != nil { + invalidParams.AddNested("Sqs", err.(request.ErrInvalidParams)) + } + } + if s.StepFunctions != nil { + if err := s.StepFunctions.Validate(); err != nil { + invalidParams.AddNested("StepFunctions", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + 
return invalidParams + } + return nil +} + +// SetCloudwatchAlarm sets the CloudwatchAlarm field's value. +func (s *Action) SetCloudwatchAlarm(v *CloudwatchAlarmAction) *Action { + s.CloudwatchAlarm = v + return s +} + +// SetCloudwatchMetric sets the CloudwatchMetric field's value. +func (s *Action) SetCloudwatchMetric(v *CloudwatchMetricAction) *Action { + s.CloudwatchMetric = v + return s +} + +// SetDynamoDB sets the DynamoDB field's value. +func (s *Action) SetDynamoDB(v *DynamoDBAction) *Action { + s.DynamoDB = v + return s +} + +// SetDynamoDBv2 sets the DynamoDBv2 field's value. +func (s *Action) SetDynamoDBv2(v *DynamoDBv2Action) *Action { + s.DynamoDBv2 = v + return s +} + +// SetElasticsearch sets the Elasticsearch field's value. +func (s *Action) SetElasticsearch(v *ElasticsearchAction) *Action { + s.Elasticsearch = v + return s +} + +// SetFirehose sets the Firehose field's value. +func (s *Action) SetFirehose(v *FirehoseAction) *Action { + s.Firehose = v + return s +} + +// SetIotAnalytics sets the IotAnalytics field's value. +func (s *Action) SetIotAnalytics(v *IotAnalyticsAction) *Action { + s.IotAnalytics = v + return s +} + +// SetKinesis sets the Kinesis field's value. +func (s *Action) SetKinesis(v *KinesisAction) *Action { + s.Kinesis = v + return s +} + +// SetLambda sets the Lambda field's value. +func (s *Action) SetLambda(v *LambdaAction) *Action { + s.Lambda = v + return s +} + +// SetRepublish sets the Republish field's value. +func (s *Action) SetRepublish(v *RepublishAction) *Action { + s.Republish = v + return s +} + +// SetS3 sets the S3 field's value. +func (s *Action) SetS3(v *S3Action) *Action { + s.S3 = v + return s +} + +// SetSalesforce sets the Salesforce field's value. +func (s *Action) SetSalesforce(v *SalesforceAction) *Action { + s.Salesforce = v + return s +} + +// SetSns sets the Sns field's value. +func (s *Action) SetSns(v *SnsAction) *Action { + s.Sns = v + return s +} + +// SetSqs sets the Sqs field's value. +func (s *Action) SetSqs(v *SqsAction) *Action { + s.Sqs = v + return s +} + +// SetStepFunctions sets the StepFunctions field's value. +func (s *Action) SetStepFunctions(v *StepFunctionsAction) *Action { + s.StepFunctions = v + return s +} + +// Information about an active Device Defender security profile behavior violation. +type ActiveViolation struct { + _ struct{} `type:"structure"` + + // The behavior which is being violated. + Behavior *Behavior `locationName:"behavior" type:"structure"` + + // The time the most recent violation occurred. + LastViolationTime *time.Time `locationName:"lastViolationTime" type:"timestamp"` + + // The value of the metric (the measurement) which caused the most recent violation. + LastViolationValue *MetricValue `locationName:"lastViolationValue" type:"structure"` + + // The security profile whose behavior is in violation. + SecurityProfileName *string `locationName:"securityProfileName" min:"1" type:"string"` + + // The name of the thing responsible for the active violation. + ThingName *string `locationName:"thingName" min:"1" type:"string"` + + // The ID of the active violation. + ViolationId *string `locationName:"violationId" min:"1" type:"string"` + + // The time the violation started. 
+ ViolationStartTime *time.Time `locationName:"violationStartTime" type:"timestamp"` +} + +// String returns the string representation +func (s ActiveViolation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ActiveViolation) GoString() string { + return s.String() +} + +// SetBehavior sets the Behavior field's value. +func (s *ActiveViolation) SetBehavior(v *Behavior) *ActiveViolation { + s.Behavior = v + return s +} + +// SetLastViolationTime sets the LastViolationTime field's value. +func (s *ActiveViolation) SetLastViolationTime(v time.Time) *ActiveViolation { + s.LastViolationTime = &v + return s +} + +// SetLastViolationValue sets the LastViolationValue field's value. +func (s *ActiveViolation) SetLastViolationValue(v *MetricValue) *ActiveViolation { + s.LastViolationValue = v + return s +} + +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *ActiveViolation) SetSecurityProfileName(v string) *ActiveViolation { + s.SecurityProfileName = &v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *ActiveViolation) SetThingName(v string) *ActiveViolation { + s.ThingName = &v + return s +} + +// SetViolationId sets the ViolationId field's value. +func (s *ActiveViolation) SetViolationId(v string) *ActiveViolation { + s.ViolationId = &v + return s +} + +// SetViolationStartTime sets the ViolationStartTime field's value. +func (s *ActiveViolation) SetViolationStartTime(v time.Time) *ActiveViolation { + s.ViolationStartTime = &v + return s +} + +type AddThingToBillingGroupInput struct { + _ struct{} `type:"structure"` + + // The ARN of the billing group. + BillingGroupArn *string `locationName:"billingGroupArn" type:"string"` + + // The name of the billing group. + BillingGroupName *string `locationName:"billingGroupName" min:"1" type:"string"` + + // The ARN of the thing to be added to the billing group. + ThingArn *string `locationName:"thingArn" type:"string"` + + // The name of the thing to be added to the billing group. + ThingName *string `locationName:"thingName" min:"1" type:"string"` +} + +// String returns the string representation +func (s AddThingToBillingGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddThingToBillingGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddThingToBillingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddThingToBillingGroupInput"} + if s.BillingGroupName != nil && len(*s.BillingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BillingGroupName", 1)) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBillingGroupArn sets the BillingGroupArn field's value. +func (s *AddThingToBillingGroupInput) SetBillingGroupArn(v string) *AddThingToBillingGroupInput { + s.BillingGroupArn = &v + return s +} + +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *AddThingToBillingGroupInput) SetBillingGroupName(v string) *AddThingToBillingGroupInput { + s.BillingGroupName = &v + return s +} + +// SetThingArn sets the ThingArn field's value. 
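+//
+// As an illustrative sketch only (assuming an *iot.IoT client named svc and
+// existing "example-billing-group" and "example-thing" resources), the
+// setters on this input are typically chained and the result passed to the
+// client's AddThingToBillingGroup method:
+//
+//    _, err := svc.AddThingToBillingGroup((&iot.AddThingToBillingGroupInput{}).
+//        SetBillingGroupName("example-billing-group").
+//        SetThingName("example-thing"))
+//    if err != nil {
+//        log.Println("AddThingToBillingGroup failed:", err)
+//    }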
+func (s *AddThingToBillingGroupInput) SetThingArn(v string) *AddThingToBillingGroupInput { + s.ThingArn = &v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *AddThingToBillingGroupInput) SetThingName(v string) *AddThingToBillingGroupInput { + s.ThingName = &v + return s +} + +type AddThingToBillingGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddThingToBillingGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddThingToBillingGroupOutput) GoString() string { + return s.String() +} + +type AddThingToThingGroupInput struct { + _ struct{} `type:"structure"` + + // Override dynamic thing groups with static thing groups when 10-group limit + // is reached. If a thing belongs to 10 thing groups, and one or more of those + // groups are dynamic thing groups, adding a thing to a static group removes + // the thing from the last dynamic group. + OverrideDynamicGroups *bool `locationName:"overrideDynamicGroups" type:"boolean"` + + // The ARN of the thing to add to a group. + ThingArn *string `locationName:"thingArn" type:"string"` + + // The ARN of the group to which you are adding a thing. + ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` + + // The name of the group to which you are adding a thing. + ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` + + // The name of the thing to add to a group. + ThingName *string `locationName:"thingName" min:"1" type:"string"` +} + +// String returns the string representation +func (s AddThingToThingGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddThingToThingGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddThingToThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddThingToThingGroupInput"} + if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOverrideDynamicGroups sets the OverrideDynamicGroups field's value. +func (s *AddThingToThingGroupInput) SetOverrideDynamicGroups(v bool) *AddThingToThingGroupInput { + s.OverrideDynamicGroups = &v + return s +} + +// SetThingArn sets the ThingArn field's value. +func (s *AddThingToThingGroupInput) SetThingArn(v string) *AddThingToThingGroupInput { + s.ThingArn = &v + return s +} + +// SetThingGroupArn sets the ThingGroupArn field's value. +func (s *AddThingToThingGroupInput) SetThingGroupArn(v string) *AddThingToThingGroupInput { + s.ThingGroupArn = &v + return s +} + +// SetThingGroupName sets the ThingGroupName field's value. +func (s *AddThingToThingGroupInput) SetThingGroupName(v string) *AddThingToThingGroupInput { + s.ThingGroupName = &v + return s +} + +// SetThingName sets the ThingName field's value. 
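+//
+// For illustration (assuming an *iot.IoT client named svc and an existing
+// static group "example-group"), a thing can be added to a group with the
+// dynamic-group override enabled, using the WithContext variant so the call
+// can be cancelled or timed out:
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    _, err := svc.AddThingToThingGroupWithContext(ctx, (&iot.AddThingToThingGroupInput{}).
+//        SetThingGroupName("example-group").
+//        SetThingName("example-thing").
+//        SetOverrideDynamicGroups(true))
+//    if err != nil {
+//        log.Println("AddThingToThingGroup failed:", err)
+//    }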
+func (s *AddThingToThingGroupInput) SetThingName(v string) *AddThingToThingGroupInput { + s.ThingName = &v + return s +} + +type AddThingToThingGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddThingToThingGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddThingToThingGroupOutput) GoString() string { + return s.String() +} + +// A structure containing the alert target ARN and the role ARN. +type AlertTarget struct { + _ struct{} `type:"structure"` + + // The ARN of the notification target to which alerts are sent. + // + // AlertTargetArn is a required field + AlertTargetArn *string `locationName:"alertTargetArn" type:"string" required:"true"` + + // The ARN of the role that grants permission to send alerts to the notification + // target. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s AlertTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AlertTarget) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AlertTarget) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AlertTarget"} + if s.AlertTargetArn == nil { + invalidParams.Add(request.NewErrParamRequired("AlertTargetArn")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAlertTargetArn sets the AlertTargetArn field's value. +func (s *AlertTarget) SetAlertTargetArn(v string) *AlertTarget { + s.AlertTargetArn = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *AlertTarget) SetRoleArn(v string) *AlertTarget { + s.RoleArn = &v + return s +} + +// Contains information that allowed the authorization. +type Allowed struct { + _ struct{} `type:"structure"` + + // A list of policies that allowed the authentication. + Policies []*Policy `locationName:"policies" type:"list"` +} + +// String returns the string representation +func (s Allowed) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Allowed) GoString() string { + return s.String() +} + +// SetPolicies sets the Policies field's value. +func (s *Allowed) SetPolicies(v []*Policy) *Allowed { + s.Policies = v + return s +} + +type AssociateTargetsWithJobInput struct { + _ struct{} `type:"structure"` + + // An optional comment string describing why the job was associated with the + // targets. + Comment *string `locationName:"comment" type:"string"` + + // The unique identifier you assigned to this job when it was created. + // + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + + // A list of thing group ARNs that define the targets of the job. 
+ // + // Targets is a required field + Targets []*string `locationName:"targets" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s AssociateTargetsWithJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateTargetsWithJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateTargetsWithJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociateTargetsWithJobInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) + } + if s.Targets == nil { + invalidParams.Add(request.NewErrParamRequired("Targets")) + } + if s.Targets != nil && len(s.Targets) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Targets", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetComment sets the Comment field's value. +func (s *AssociateTargetsWithJobInput) SetComment(v string) *AssociateTargetsWithJobInput { + s.Comment = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *AssociateTargetsWithJobInput) SetJobId(v string) *AssociateTargetsWithJobInput { + s.JobId = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *AssociateTargetsWithJobInput) SetTargets(v []*string) *AssociateTargetsWithJobInput { + s.Targets = v + return s +} + +type AssociateTargetsWithJobOutput struct { + _ struct{} `type:"structure"` + + // A short text description of the job. + Description *string `locationName:"description" type:"string"` + + // An ARN identifying the job. + JobArn *string `locationName:"jobArn" type:"string"` + + // The unique identifier you assigned to this job when it was created. + JobId *string `locationName:"jobId" min:"1" type:"string"` +} + +// String returns the string representation +func (s AssociateTargetsWithJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateTargetsWithJobOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *AssociateTargetsWithJobOutput) SetDescription(v string) *AssociateTargetsWithJobOutput { + s.Description = &v + return s +} + +// SetJobArn sets the JobArn field's value. +func (s *AssociateTargetsWithJobOutput) SetJobArn(v string) *AssociateTargetsWithJobOutput { + s.JobArn = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *AssociateTargetsWithJobOutput) SetJobId(v string) *AssociateTargetsWithJobOutput { + s.JobId = &v + return s +} + +type AttachPolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the policy to attach. + // + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + + // The identity (https://docs.aws.amazon.com/iot/latest/developerguide/iot-security-identity.html) + // to which the policy is attached. 
+ // + // Target is a required field + Target *string `locationName:"target" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachPolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.Target == nil { + invalidParams.Add(request.NewErrParamRequired("Target")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyName sets the PolicyName field's value. +func (s *AttachPolicyInput) SetPolicyName(v string) *AttachPolicyInput { + s.PolicyName = &v + return s +} + +// SetTarget sets the Target field's value. +func (s *AttachPolicyInput) SetTarget(v string) *AttachPolicyInput { + s.Target = &v + return s +} + +type AttachPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AttachPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachPolicyOutput) GoString() string { + return s.String() +} + +// The input for the AttachPrincipalPolicy operation. +type AttachPrincipalPolicyInput struct { + _ struct{} `type:"structure"` + + // The policy name. + // + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + + // The principal, which can be a certificate ARN (as returned from the CreateCertificate + // operation) or an Amazon Cognito ID. + // + // Principal is a required field + Principal *string `location:"header" locationName:"x-amzn-iot-principal" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachPrincipalPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachPrincipalPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachPrincipalPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachPrincipalPolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.Principal == nil { + invalidParams.Add(request.NewErrParamRequired("Principal")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyName sets the PolicyName field's value. +func (s *AttachPrincipalPolicyInput) SetPolicyName(v string) *AttachPrincipalPolicyInput { + s.PolicyName = &v + return s +} + +// SetPrincipal sets the Principal field's value. 
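+//
+// For illustration (assuming an *iot.IoT client named svc, an existing policy
+// named "example-policy", and a certificate ARN in certArn), the request is
+// typically built with aws.String and sent directly; newer code generally
+// prefers the AttachPolicy operation, which accepts the same kind of target:
+//
+//    _, err := svc.AttachPrincipalPolicy(&iot.AttachPrincipalPolicyInput{
+//        PolicyName: aws.String("example-policy"),
+//        Principal:  aws.String(certArn),
+//    })
+//    if err != nil {
+//        log.Println("AttachPrincipalPolicy failed:", err)
+//    }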
+func (s *AttachPrincipalPolicyInput) SetPrincipal(v string) *AttachPrincipalPolicyInput { + s.Principal = &v + return s +} + +type AttachPrincipalPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AttachPrincipalPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachPrincipalPolicyOutput) GoString() string { + return s.String() +} + +type AttachSecurityProfileInput struct { + _ struct{} `type:"structure"` + + // The security profile that is attached. + // + // SecurityProfileName is a required field + SecurityProfileName *string `location:"uri" locationName:"securityProfileName" min:"1" type:"string" required:"true"` + + // The ARN of the target (thing group) to which the security profile is attached. + // + // SecurityProfileTargetArn is a required field + SecurityProfileTargetArn *string `location:"querystring" locationName:"securityProfileTargetArn" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachSecurityProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachSecurityProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachSecurityProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachSecurityProfileInput"} + if s.SecurityProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileName")) + } + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) + } + if s.SecurityProfileTargetArn == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileTargetArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *AttachSecurityProfileInput) SetSecurityProfileName(v string) *AttachSecurityProfileInput { + s.SecurityProfileName = &v + return s +} + +// SetSecurityProfileTargetArn sets the SecurityProfileTargetArn field's value. +func (s *AttachSecurityProfileInput) SetSecurityProfileTargetArn(v string) *AttachSecurityProfileInput { + s.SecurityProfileTargetArn = &v + return s +} + +type AttachSecurityProfileOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AttachSecurityProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachSecurityProfileOutput) GoString() string { + return s.String() +} + +// The input for the AttachThingPrincipal operation. +type AttachThingPrincipalInput struct { + _ struct{} `type:"structure"` + + // The principal, such as a certificate or other credential. + // + // Principal is a required field + Principal *string `location:"header" locationName:"x-amzn-principal" type:"string" required:"true"` + + // The name of the thing. 
+ // + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachThingPrincipalInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachThingPrincipalInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachThingPrincipalInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachThingPrincipalInput"} + if s.Principal == nil { + invalidParams.Add(request.NewErrParamRequired("Principal")) + } + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPrincipal sets the Principal field's value. +func (s *AttachThingPrincipalInput) SetPrincipal(v string) *AttachThingPrincipalInput { + s.Principal = &v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *AttachThingPrincipalInput) SetThingName(v string) *AttachThingPrincipalInput { + s.ThingName = &v + return s +} + +// The output from the AttachThingPrincipal operation. +type AttachThingPrincipalOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AttachThingPrincipalOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachThingPrincipalOutput) GoString() string { + return s.String() +} + +// The attribute payload. +type AttributePayload struct { + _ struct{} `type:"structure"` + + // A JSON string containing up to three key-value pair in JSON format. For example: + // + // {\"attributes\":{\"string1\":\"string2\"}} + Attributes map[string]*string `locationName:"attributes" type:"map"` + + // Specifies whether the list of attributes provided in the AttributePayload + // is merged with the attributes stored in the registry, instead of overwriting + // them. + // + // To remove an attribute, call UpdateThing with an empty attribute value. + // + // The merge attribute is only valid when calling UpdateThing. + Merge *bool `locationName:"merge" type:"boolean"` +} + +// String returns the string representation +func (s AttributePayload) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttributePayload) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *AttributePayload) SetAttributes(v map[string]*string) *AttributePayload { + s.Attributes = v + return s +} + +// SetMerge sets the Merge field's value. +func (s *AttributePayload) SetMerge(v bool) *AttributePayload { + s.Merge = &v + return s +} + +// Which audit checks are enabled and disabled for this account. +type AuditCheckConfiguration struct { + _ struct{} `type:"structure"` + + // True if this audit check is enabled for this account. + Enabled *bool `locationName:"enabled" type:"boolean"` +} + +// String returns the string representation +func (s AuditCheckConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuditCheckConfiguration) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. 
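+//
+// As an illustrative sketch only: these per-check flags are supplied to the
+// account audit configuration as a map keyed by check name. The
+// "LOGGING_DISABLED_CHECK" key, the svc client, the auditRoleArn value, and
+// the UpdateAccountAuditConfigurationInput setter names below are assumptions
+// following the generator's usual pattern:
+//
+//    checks := map[string]*iot.AuditCheckConfiguration{
+//        "LOGGING_DISABLED_CHECK": (&iot.AuditCheckConfiguration{}).SetEnabled(true),
+//    }
+//    _, err := svc.UpdateAccountAuditConfiguration((&iot.UpdateAccountAuditConfigurationInput{}).
+//        SetAuditCheckConfigurations(checks).
+//        SetRoleArn(auditRoleArn))
+//    if err != nil {
+//        log.Println("UpdateAccountAuditConfiguration failed:", err)
+//    }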
+func (s *AuditCheckConfiguration) SetEnabled(v bool) *AuditCheckConfiguration { + s.Enabled = &v + return s +} + +// Information about the audit check. +type AuditCheckDetails struct { + _ struct{} `type:"structure"` + + // True if the check completed and found all resources compliant. + CheckCompliant *bool `locationName:"checkCompliant" type:"boolean"` + + // The completion status of this check, one of "IN_PROGRESS", "WAITING_FOR_DATA_COLLECTION", + // "CANCELED", "COMPLETED_COMPLIANT", "COMPLETED_NON_COMPLIANT", or "FAILED". + CheckRunStatus *string `locationName:"checkRunStatus" type:"string" enum:"AuditCheckRunStatus"` + + // The code of any error encountered when performing this check during this + // audit. One of "INSUFFICIENT_PERMISSIONS", or "AUDIT_CHECK_DISABLED". + ErrorCode *string `locationName:"errorCode" type:"string"` + + // The message associated with any error encountered when performing this check + // during this audit. + Message *string `locationName:"message" type:"string"` + + // The number of resources that the check found non-compliant. + NonCompliantResourcesCount *int64 `locationName:"nonCompliantResourcesCount" type:"long"` + + // The number of resources on which the check was performed. + TotalResourcesCount *int64 `locationName:"totalResourcesCount" type:"long"` +} + +// String returns the string representation +func (s AuditCheckDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuditCheckDetails) GoString() string { + return s.String() +} + +// SetCheckCompliant sets the CheckCompliant field's value. +func (s *AuditCheckDetails) SetCheckCompliant(v bool) *AuditCheckDetails { + s.CheckCompliant = &v + return s +} + +// SetCheckRunStatus sets the CheckRunStatus field's value. +func (s *AuditCheckDetails) SetCheckRunStatus(v string) *AuditCheckDetails { + s.CheckRunStatus = &v + return s +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *AuditCheckDetails) SetErrorCode(v string) *AuditCheckDetails { + s.ErrorCode = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *AuditCheckDetails) SetMessage(v string) *AuditCheckDetails { + s.Message = &v + return s +} + +// SetNonCompliantResourcesCount sets the NonCompliantResourcesCount field's value. +func (s *AuditCheckDetails) SetNonCompliantResourcesCount(v int64) *AuditCheckDetails { + s.NonCompliantResourcesCount = &v + return s +} + +// SetTotalResourcesCount sets the TotalResourcesCount field's value. +func (s *AuditCheckDetails) SetTotalResourcesCount(v int64) *AuditCheckDetails { + s.TotalResourcesCount = &v + return s +} + +// The findings (results) of the audit. +type AuditFinding struct { + _ struct{} `type:"structure"` + + // The audit check that generated this result. + CheckName *string `locationName:"checkName" type:"string"` + + // The time the result (finding) was discovered. + FindingTime *time.Time `locationName:"findingTime" type:"timestamp"` + + // The resource that was found to be non-compliant with the audit check. + NonCompliantResource *NonCompliantResource `locationName:"nonCompliantResource" type:"structure"` + + // The reason the resource was non-compliant. + ReasonForNonCompliance *string `locationName:"reasonForNonCompliance" type:"string"` + + // A code which indicates the reason that the resource was non-compliant. + ReasonForNonComplianceCode *string `locationName:"reasonForNonComplianceCode" type:"string"` + + // The list of related resources. 
+ RelatedResources []*RelatedResource `locationName:"relatedResources" type:"list"` + + // The severity of the result (finding). + Severity *string `locationName:"severity" type:"string" enum:"AuditFindingSeverity"` + + // The ID of the audit that generated this result (finding) + TaskId *string `locationName:"taskId" min:"1" type:"string"` + + // The time the audit started. + TaskStartTime *time.Time `locationName:"taskStartTime" type:"timestamp"` +} + +// String returns the string representation +func (s AuditFinding) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuditFinding) GoString() string { + return s.String() +} + +// SetCheckName sets the CheckName field's value. +func (s *AuditFinding) SetCheckName(v string) *AuditFinding { + s.CheckName = &v + return s +} + +// SetFindingTime sets the FindingTime field's value. +func (s *AuditFinding) SetFindingTime(v time.Time) *AuditFinding { + s.FindingTime = &v + return s +} + +// SetNonCompliantResource sets the NonCompliantResource field's value. +func (s *AuditFinding) SetNonCompliantResource(v *NonCompliantResource) *AuditFinding { + s.NonCompliantResource = v + return s +} + +// SetReasonForNonCompliance sets the ReasonForNonCompliance field's value. +func (s *AuditFinding) SetReasonForNonCompliance(v string) *AuditFinding { + s.ReasonForNonCompliance = &v + return s +} + +// SetReasonForNonComplianceCode sets the ReasonForNonComplianceCode field's value. +func (s *AuditFinding) SetReasonForNonComplianceCode(v string) *AuditFinding { + s.ReasonForNonComplianceCode = &v + return s +} + +// SetRelatedResources sets the RelatedResources field's value. +func (s *AuditFinding) SetRelatedResources(v []*RelatedResource) *AuditFinding { + s.RelatedResources = v + return s +} + +// SetSeverity sets the Severity field's value. +func (s *AuditFinding) SetSeverity(v string) *AuditFinding { + s.Severity = &v + return s +} + +// SetTaskId sets the TaskId field's value. +func (s *AuditFinding) SetTaskId(v string) *AuditFinding { + s.TaskId = &v + return s +} + +// SetTaskStartTime sets the TaskStartTime field's value. +func (s *AuditFinding) SetTaskStartTime(v time.Time) *AuditFinding { + s.TaskStartTime = &v + return s +} + +// Information about the targets to which audit notifications are sent. +type AuditNotificationTarget struct { + _ struct{} `type:"structure"` + + // True if notifications to the target are enabled. + Enabled *bool `locationName:"enabled" type:"boolean"` + + // The ARN of the role that grants permission to send notifications to the target. + RoleArn *string `locationName:"roleArn" min:"20" type:"string"` + + // The ARN of the target (SNS topic) to which audit notifications are sent. + TargetArn *string `locationName:"targetArn" type:"string"` +} + +// String returns the string representation +func (s AuditNotificationTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuditNotificationTarget) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AuditNotificationTarget) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AuditNotificationTarget"} + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnabled sets the Enabled field's value. 
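+//
+// A short illustrative sketch (the SNS topic and IAM role ARNs below are
+// placeholders) of configuring a notification target and validating it before
+// wiring it into the account audit configuration:
+//
+//    target := (&iot.AuditNotificationTarget{}).
+//        SetEnabled(true).
+//        SetTargetArn("arn:aws:sns:us-east-1:123456789012:audit-alerts").
+//        SetRoleArn("arn:aws:iam::123456789012:role/audit-notification-role")
+//    if err := target.Validate(); err != nil {
+//        log.Println("invalid audit notification target:", err)
+//    }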
+func (s *AuditNotificationTarget) SetEnabled(v bool) *AuditNotificationTarget { + s.Enabled = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *AuditNotificationTarget) SetRoleArn(v string) *AuditNotificationTarget { + s.RoleArn = &v + return s +} + +// SetTargetArn sets the TargetArn field's value. +func (s *AuditNotificationTarget) SetTargetArn(v string) *AuditNotificationTarget { + s.TargetArn = &v + return s +} + +// The audits that were performed. +type AuditTaskMetadata struct { + _ struct{} `type:"structure"` + + // The ID of this audit. + TaskId *string `locationName:"taskId" min:"1" type:"string"` + + // The status of this audit: one of "IN_PROGRESS", "COMPLETED", "FAILED" or + // "CANCELED". + TaskStatus *string `locationName:"taskStatus" type:"string" enum:"AuditTaskStatus"` + + // The type of this audit: one of "ON_DEMAND_AUDIT_TASK" or "SCHEDULED_AUDIT_TASK". + TaskType *string `locationName:"taskType" type:"string" enum:"AuditTaskType"` +} + +// String returns the string representation +func (s AuditTaskMetadata) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuditTaskMetadata) GoString() string { + return s.String() +} + +// SetTaskId sets the TaskId field's value. +func (s *AuditTaskMetadata) SetTaskId(v string) *AuditTaskMetadata { + s.TaskId = &v + return s +} + +// SetTaskStatus sets the TaskStatus field's value. +func (s *AuditTaskMetadata) SetTaskStatus(v string) *AuditTaskMetadata { + s.TaskStatus = &v + return s +} + +// SetTaskType sets the TaskType field's value. +func (s *AuditTaskMetadata) SetTaskType(v string) *AuditTaskMetadata { + s.TaskType = &v + return s +} + +// A collection of authorization information. +type AuthInfo struct { + _ struct{} `type:"structure"` + + // The type of action for which the principal is being authorized. + ActionType *string `locationName:"actionType" type:"string" enum:"ActionType"` + + // The resources for which the principal is being authorized to perform the + // specified action. + Resources []*string `locationName:"resources" type:"list"` +} + +// String returns the string representation +func (s AuthInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthInfo) GoString() string { + return s.String() +} + +// SetActionType sets the ActionType field's value. +func (s *AuthInfo) SetActionType(v string) *AuthInfo { + s.ActionType = &v + return s +} + +// SetResources sets the Resources field's value. +func (s *AuthInfo) SetResources(v []*string) *AuthInfo { + s.Resources = v + return s +} + +// The authorizer result. +type AuthResult struct { + _ struct{} `type:"structure"` + + // The policies and statements that allowed the specified action. + Allowed *Allowed `locationName:"allowed" type:"structure"` + + // The final authorization decision of this scenario. Multiple statements are + // taken into account when determining the authorization decision. An explicit + // deny statement can override multiple allow statements. + AuthDecision *string `locationName:"authDecision" type:"string" enum:"AuthDecision"` + + // Authorization information. + AuthInfo *AuthInfo `locationName:"authInfo" type:"structure"` + + // The policies and statements that denied the specified action. + Denied *Denied `locationName:"denied" type:"structure"` + + // Contains any missing context values found while evaluating policy. 
+ MissingContextValues []*string `locationName:"missingContextValues" type:"list"` +} + +// String returns the string representation +func (s AuthResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthResult) GoString() string { + return s.String() +} + +// SetAllowed sets the Allowed field's value. +func (s *AuthResult) SetAllowed(v *Allowed) *AuthResult { + s.Allowed = v + return s +} + +// SetAuthDecision sets the AuthDecision field's value. +func (s *AuthResult) SetAuthDecision(v string) *AuthResult { + s.AuthDecision = &v + return s +} + +// SetAuthInfo sets the AuthInfo field's value. +func (s *AuthResult) SetAuthInfo(v *AuthInfo) *AuthResult { + s.AuthInfo = v + return s +} + +// SetDenied sets the Denied field's value. +func (s *AuthResult) SetDenied(v *Denied) *AuthResult { + s.Denied = v + return s +} + +// SetMissingContextValues sets the MissingContextValues field's value. +func (s *AuthResult) SetMissingContextValues(v []*string) *AuthResult { + s.MissingContextValues = v + return s +} + +// The authorizer description. +type AuthorizerDescription struct { + _ struct{} `type:"structure"` + + // The authorizer ARN. + AuthorizerArn *string `locationName:"authorizerArn" type:"string"` + + // The authorizer's Lambda function ARN. + AuthorizerFunctionArn *string `locationName:"authorizerFunctionArn" type:"string"` + + // The authorizer name. + AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` + + // The UNIX timestamp of when the authorizer was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The UNIX timestamp of when the authorizer was last updated. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The status of the authorizer. + Status *string `locationName:"status" type:"string" enum:"AuthorizerStatus"` + + // The key used to extract the token from the HTTP headers. + TokenKeyName *string `locationName:"tokenKeyName" min:"1" type:"string"` + + // The public keys used to validate the token signature returned by your custom + // authentication service. + TokenSigningPublicKeys map[string]*string `locationName:"tokenSigningPublicKeys" type:"map"` +} + +// String returns the string representation +func (s AuthorizerDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthorizerDescription) GoString() string { + return s.String() +} + +// SetAuthorizerArn sets the AuthorizerArn field's value. +func (s *AuthorizerDescription) SetAuthorizerArn(v string) *AuthorizerDescription { + s.AuthorizerArn = &v + return s +} + +// SetAuthorizerFunctionArn sets the AuthorizerFunctionArn field's value. +func (s *AuthorizerDescription) SetAuthorizerFunctionArn(v string) *AuthorizerDescription { + s.AuthorizerFunctionArn = &v + return s +} + +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *AuthorizerDescription) SetAuthorizerName(v string) *AuthorizerDescription { + s.AuthorizerName = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *AuthorizerDescription) SetCreationDate(v time.Time) *AuthorizerDescription { + s.CreationDate = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *AuthorizerDescription) SetLastModifiedDate(v time.Time) *AuthorizerDescription { + s.LastModifiedDate = &v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *AuthorizerDescription) SetStatus(v string) *AuthorizerDescription { + s.Status = &v + return s +} + +// SetTokenKeyName sets the TokenKeyName field's value. +func (s *AuthorizerDescription) SetTokenKeyName(v string) *AuthorizerDescription { + s.TokenKeyName = &v + return s +} + +// SetTokenSigningPublicKeys sets the TokenSigningPublicKeys field's value. +func (s *AuthorizerDescription) SetTokenSigningPublicKeys(v map[string]*string) *AuthorizerDescription { + s.TokenSigningPublicKeys = v + return s +} + +// The authorizer summary. +type AuthorizerSummary struct { + _ struct{} `type:"structure"` + + // The authorizer ARN. + AuthorizerArn *string `locationName:"authorizerArn" type:"string"` + + // The authorizer name. + AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` +} + +// String returns the string representation +func (s AuthorizerSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthorizerSummary) GoString() string { + return s.String() +} + +// SetAuthorizerArn sets the AuthorizerArn field's value. +func (s *AuthorizerSummary) SetAuthorizerArn(v string) *AuthorizerSummary { + s.AuthorizerArn = &v + return s +} + +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *AuthorizerSummary) SetAuthorizerName(v string) *AuthorizerSummary { + s.AuthorizerName = &v + return s +} + +// Configuration for the rollout of OTA updates. +type AwsJobExecutionsRolloutConfig struct { + _ struct{} `type:"structure"` + + // The maximum number of OTA update job executions started per minute. + MaximumPerMinute *int64 `locationName:"maximumPerMinute" min:"1" type:"integer"` +} + +// String returns the string representation +func (s AwsJobExecutionsRolloutConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AwsJobExecutionsRolloutConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AwsJobExecutionsRolloutConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AwsJobExecutionsRolloutConfig"} + if s.MaximumPerMinute != nil && *s.MaximumPerMinute < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaximumPerMinute", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaximumPerMinute sets the MaximumPerMinute field's value. +func (s *AwsJobExecutionsRolloutConfig) SetMaximumPerMinute(v int64) *AwsJobExecutionsRolloutConfig { + s.MaximumPerMinute = &v + return s +} + +// A Device Defender security profile behavior. +type Behavior struct { + _ struct{} `type:"structure"` + + // The criteria that determine if a device is behaving normally in regard to + // the metric. + Criteria *BehaviorCriteria `locationName:"criteria" type:"structure"` + + // What is measured by the behavior. + Metric *string `locationName:"metric" type:"string"` + + // The name you have given to the behavior. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s Behavior) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Behavior) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *Behavior) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Behavior"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCriteria sets the Criteria field's value. +func (s *Behavior) SetCriteria(v *BehaviorCriteria) *Behavior { + s.Criteria = v + return s +} + +// SetMetric sets the Metric field's value. +func (s *Behavior) SetMetric(v string) *Behavior { + s.Metric = &v + return s +} + +// SetName sets the Name field's value. +func (s *Behavior) SetName(v string) *Behavior { + s.Name = &v + return s +} + +// The criteria by which the behavior is determined to be normal. +type BehaviorCriteria struct { + _ struct{} `type:"structure"` + + // The operator that relates the thing measured (metric) to the criteria (value). + ComparisonOperator *string `locationName:"comparisonOperator" type:"string" enum:"ComparisonOperator"` + + // Use this to specify the period of time over which the behavior is evaluated, + // for those criteria which have a time dimension (for example, NUM_MESSAGES_SENT). + DurationSeconds *int64 `locationName:"durationSeconds" type:"integer"` + + // The value to be compared with the metric. + Value *MetricValue `locationName:"value" type:"structure"` +} + +// String returns the string representation +func (s BehaviorCriteria) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BehaviorCriteria) GoString() string { + return s.String() +} + +// SetComparisonOperator sets the ComparisonOperator field's value. +func (s *BehaviorCriteria) SetComparisonOperator(v string) *BehaviorCriteria { + s.ComparisonOperator = &v + return s +} + +// SetDurationSeconds sets the DurationSeconds field's value. +func (s *BehaviorCriteria) SetDurationSeconds(v int64) *BehaviorCriteria { + s.DurationSeconds = &v + return s +} + +// SetValue sets the Value field's value. +func (s *BehaviorCriteria) SetValue(v *MetricValue) *BehaviorCriteria { + s.Value = v + return s +} + +// Additional information about the billing group. +type BillingGroupMetadata struct { + _ struct{} `type:"structure"` + + // The date the billing group was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` +} + +// String returns the string representation +func (s BillingGroupMetadata) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BillingGroupMetadata) GoString() string { + return s.String() +} + +// SetCreationDate sets the CreationDate field's value. +func (s *BillingGroupMetadata) SetCreationDate(v time.Time) *BillingGroupMetadata { + s.CreationDate = &v + return s +} + +// The properties of a billing group. +type BillingGroupProperties struct { + _ struct{} `type:"structure"` + + // The description of the billing group. + BillingGroupDescription *string `locationName:"billingGroupDescription" type:"string"` +} + +// String returns the string representation +func (s BillingGroupProperties) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BillingGroupProperties) GoString() string { + return s.String() +} + +// SetBillingGroupDescription sets the BillingGroupDescription field's value. 
+func (s *BillingGroupProperties) SetBillingGroupDescription(v string) *BillingGroupProperties { + s.BillingGroupDescription = &v + return s +} + +// A CA certificate. +type CACertificate struct { + _ struct{} `type:"structure"` + + // The ARN of the CA certificate. + CertificateArn *string `locationName:"certificateArn" type:"string"` + + // The ID of the CA certificate. + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` + + // The date the CA certificate was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The status of the CA certificate. + // + // The status value REGISTER_INACTIVE is deprecated and should not be used. + Status *string `locationName:"status" type:"string" enum:"CACertificateStatus"` +} + +// String returns the string representation +func (s CACertificate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CACertificate) GoString() string { + return s.String() +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *CACertificate) SetCertificateArn(v string) *CACertificate { + s.CertificateArn = &v + return s +} + +// SetCertificateId sets the CertificateId field's value. +func (s *CACertificate) SetCertificateId(v string) *CACertificate { + s.CertificateId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *CACertificate) SetCreationDate(v time.Time) *CACertificate { + s.CreationDate = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *CACertificate) SetStatus(v string) *CACertificate { + s.Status = &v + return s +} + +// Describes a CA certificate. +type CACertificateDescription struct { + _ struct{} `type:"structure"` + + // Whether the CA certificate configured for auto registration of device certificates. + // Valid values are "ENABLE" and "DISABLE" + AutoRegistrationStatus *string `locationName:"autoRegistrationStatus" type:"string" enum:"AutoRegistrationStatus"` + + // The CA certificate ARN. + CertificateArn *string `locationName:"certificateArn" type:"string"` + + // The CA certificate ID. + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` + + // The CA certificate data, in PEM format. + CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` + + // The date the CA certificate was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The customer version of the CA certificate. + CustomerVersion *int64 `locationName:"customerVersion" min:"1" type:"integer"` + + // The generation ID of the CA certificate. + GenerationId *string `locationName:"generationId" type:"string"` + + // The date the CA certificate was last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The owner of the CA certificate. + OwnedBy *string `locationName:"ownedBy" min:"12" type:"string"` + + // The status of a CA certificate. + Status *string `locationName:"status" type:"string" enum:"CACertificateStatus"` + + // When the CA certificate is valid. + Validity *CertificateValidity `locationName:"validity" type:"structure"` +} + +// String returns the string representation +func (s CACertificateDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CACertificateDescription) GoString() string { + return s.String() +} + +// SetAutoRegistrationStatus sets the AutoRegistrationStatus field's value. 
+func (s *CACertificateDescription) SetAutoRegistrationStatus(v string) *CACertificateDescription { + s.AutoRegistrationStatus = &v + return s +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *CACertificateDescription) SetCertificateArn(v string) *CACertificateDescription { + s.CertificateArn = &v + return s +} + +// SetCertificateId sets the CertificateId field's value. +func (s *CACertificateDescription) SetCertificateId(v string) *CACertificateDescription { + s.CertificateId = &v + return s +} + +// SetCertificatePem sets the CertificatePem field's value. +func (s *CACertificateDescription) SetCertificatePem(v string) *CACertificateDescription { + s.CertificatePem = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *CACertificateDescription) SetCreationDate(v time.Time) *CACertificateDescription { + s.CreationDate = &v + return s +} + +// SetCustomerVersion sets the CustomerVersion field's value. +func (s *CACertificateDescription) SetCustomerVersion(v int64) *CACertificateDescription { + s.CustomerVersion = &v + return s +} + +// SetGenerationId sets the GenerationId field's value. +func (s *CACertificateDescription) SetGenerationId(v string) *CACertificateDescription { + s.GenerationId = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *CACertificateDescription) SetLastModifiedDate(v time.Time) *CACertificateDescription { + s.LastModifiedDate = &v + return s +} + +// SetOwnedBy sets the OwnedBy field's value. +func (s *CACertificateDescription) SetOwnedBy(v string) *CACertificateDescription { + s.OwnedBy = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *CACertificateDescription) SetStatus(v string) *CACertificateDescription { + s.Status = &v + return s +} + +// SetValidity sets the Validity field's value. +func (s *CACertificateDescription) SetValidity(v *CertificateValidity) *CACertificateDescription { + s.Validity = v + return s +} + +type CancelAuditTaskInput struct { + _ struct{} `type:"structure"` + + // The ID of the audit you want to cancel. You can only cancel an audit that + // is "IN_PROGRESS". + // + // TaskId is a required field + TaskId *string `location:"uri" locationName:"taskId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelAuditTaskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelAuditTaskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelAuditTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelAuditTaskInput"} + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) + } + if s.TaskId != nil && len(*s.TaskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTaskId sets the TaskId field's value. 
+func (s *CancelAuditTaskInput) SetTaskId(v string) *CancelAuditTaskInput { + s.TaskId = &v + return s +} + +type CancelAuditTaskOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CancelAuditTaskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelAuditTaskOutput) GoString() string { + return s.String() +} + +// The input for the CancelCertificateTransfer operation. +type CancelCertificateTransferInput struct { + _ struct{} `type:"structure"` + + // The ID of the certificate. (The last part of the certificate ARN contains + // the certificate ID.) + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelCertificateTransferInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelCertificateTransferInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelCertificateTransferInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelCertificateTransferInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateId sets the CertificateId field's value. +func (s *CancelCertificateTransferInput) SetCertificateId(v string) *CancelCertificateTransferInput { + s.CertificateId = &v + return s +} + +type CancelCertificateTransferOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CancelCertificateTransferOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelCertificateTransferOutput) GoString() string { + return s.String() +} + +type CancelJobExecutionInput struct { + _ struct{} `type:"structure"` + + // (Optional) The expected current version of the job execution. Each time you + // update the job execution, its version is incremented. If the version of the + // job execution stored in Jobs does not match, the update is rejected with + // a VersionMismatch error, and an ErrorResponse that contains the current job + // execution status data is returned. (This makes it unnecessary to perform + // a separate DescribeJobExecution request in order to obtain the job execution + // status data.) + ExpectedVersion *int64 `locationName:"expectedVersion" type:"long"` + + // (Optional) If true the job execution will be canceled if it has status IN_PROGRESS + // or QUEUED, otherwise the job execution will be canceled only if it has status + // QUEUED. If you attempt to cancel a job execution that is IN_PROGRESS, and + // you do not set force to true, then an InvalidStateTransitionException will + // be thrown. The default is false. + // + // Canceling a job execution which is "IN_PROGRESS", will cause the device to + // be unable to update the job execution status. Use caution and ensure that + // the device is able to recover to a valid state. + Force *bool `location:"querystring" locationName:"force" type:"boolean"` + + // The ID of the job to be canceled. 
+ // + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + + // A collection of name/value pairs that describe the status of the job execution. + // If not specified, the statusDetails are unchanged. You can specify at most + // 10 name/value pairs. + StatusDetails map[string]*string `locationName:"statusDetails" type:"map"` + + // The name of the thing whose execution of the job will be canceled. + // + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelJobExecutionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelJobExecutionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelJobExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelJobExecutionInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) + } + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *CancelJobExecutionInput) SetExpectedVersion(v int64) *CancelJobExecutionInput { + s.ExpectedVersion = &v + return s +} + +// SetForce sets the Force field's value. +func (s *CancelJobExecutionInput) SetForce(v bool) *CancelJobExecutionInput { + s.Force = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *CancelJobExecutionInput) SetJobId(v string) *CancelJobExecutionInput { + s.JobId = &v + return s +} + +// SetStatusDetails sets the StatusDetails field's value. +func (s *CancelJobExecutionInput) SetStatusDetails(v map[string]*string) *CancelJobExecutionInput { + s.StatusDetails = v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *CancelJobExecutionInput) SetThingName(v string) *CancelJobExecutionInput { + s.ThingName = &v + return s +} + +type CancelJobExecutionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CancelJobExecutionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelJobExecutionOutput) GoString() string { + return s.String() +} + +type CancelJobInput struct { + _ struct{} `type:"structure"` + + // An optional comment string describing why the job was canceled. + Comment *string `locationName:"comment" type:"string"` + + // (Optional) If true job executions with status "IN_PROGRESS" and "QUEUED" + // are canceled, otherwise only job executions with status "QUEUED" are canceled. + // The default is false. + // + // Canceling a job which is "IN_PROGRESS", will cause a device which is executing + // the job to be unable to update the job execution status. Use caution and + // ensure that each device executing a job which is canceled is able to recover + // to a valid state. 
+ Force *bool `location:"querystring" locationName:"force" type:"boolean"` + + // The unique identifier you assigned to this job when it was created. + // + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + + // (Optional)A reason code string that explains why the job was canceled. + ReasonCode *string `locationName:"reasonCode" type:"string"` +} + +// String returns the string representation +func (s CancelJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelJobInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetComment sets the Comment field's value. +func (s *CancelJobInput) SetComment(v string) *CancelJobInput { + s.Comment = &v + return s +} + +// SetForce sets the Force field's value. +func (s *CancelJobInput) SetForce(v bool) *CancelJobInput { + s.Force = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *CancelJobInput) SetJobId(v string) *CancelJobInput { + s.JobId = &v + return s +} + +// SetReasonCode sets the ReasonCode field's value. +func (s *CancelJobInput) SetReasonCode(v string) *CancelJobInput { + s.ReasonCode = &v + return s +} + +type CancelJobOutput struct { + _ struct{} `type:"structure"` + + // A short text description of the job. + Description *string `locationName:"description" type:"string"` + + // The job ARN. + JobArn *string `locationName:"jobArn" type:"string"` + + // The unique identifier you assigned to this job when it was created. + JobId *string `locationName:"jobId" min:"1" type:"string"` +} + +// String returns the string representation +func (s CancelJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelJobOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *CancelJobOutput) SetDescription(v string) *CancelJobOutput { + s.Description = &v + return s +} + +// SetJobArn sets the JobArn field's value. +func (s *CancelJobOutput) SetJobArn(v string) *CancelJobOutput { + s.JobArn = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *CancelJobOutput) SetJobId(v string) *CancelJobOutput { + s.JobId = &v + return s +} + +// Information about a certificate. +type Certificate struct { + _ struct{} `type:"structure"` + + // The ARN of the certificate. + CertificateArn *string `locationName:"certificateArn" type:"string"` + + // The ID of the certificate. (The last part of the certificate ARN contains + // the certificate ID.) + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` + + // The date and time the certificate was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The status of the certificate. + // + // The status value REGISTER_INACTIVE is deprecated and should not be used. 
+ Status *string `locationName:"status" type:"string" enum:"CertificateStatus"` +} + +// String returns the string representation +func (s Certificate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Certificate) GoString() string { + return s.String() +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *Certificate) SetCertificateArn(v string) *Certificate { + s.CertificateArn = &v + return s +} + +// SetCertificateId sets the CertificateId field's value. +func (s *Certificate) SetCertificateId(v string) *Certificate { + s.CertificateId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *Certificate) SetCreationDate(v time.Time) *Certificate { + s.CreationDate = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *Certificate) SetStatus(v string) *Certificate { + s.Status = &v + return s +} + +// Describes a certificate. +type CertificateDescription struct { + _ struct{} `type:"structure"` + + // The certificate ID of the CA certificate used to sign this certificate. + CaCertificateId *string `locationName:"caCertificateId" min:"64" type:"string"` + + // The ARN of the certificate. + CertificateArn *string `locationName:"certificateArn" type:"string"` + + // The ID of the certificate. + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` + + // The certificate data, in PEM format. + CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` + + // The date and time the certificate was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The customer version of the certificate. + CustomerVersion *int64 `locationName:"customerVersion" min:"1" type:"integer"` + + // The generation ID of the certificate. + GenerationId *string `locationName:"generationId" type:"string"` + + // The date and time the certificate was last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The ID of the AWS account that owns the certificate. + OwnedBy *string `locationName:"ownedBy" min:"12" type:"string"` + + // The ID of the AWS account of the previous owner of the certificate. + PreviousOwnedBy *string `locationName:"previousOwnedBy" min:"12" type:"string"` + + // The status of the certificate. + Status *string `locationName:"status" type:"string" enum:"CertificateStatus"` + + // The transfer data. + TransferData *TransferData `locationName:"transferData" type:"structure"` + + // When the certificate is valid. + Validity *CertificateValidity `locationName:"validity" type:"structure"` +} + +// String returns the string representation +func (s CertificateDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CertificateDescription) GoString() string { + return s.String() +} + +// SetCaCertificateId sets the CaCertificateId field's value. +func (s *CertificateDescription) SetCaCertificateId(v string) *CertificateDescription { + s.CaCertificateId = &v + return s +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *CertificateDescription) SetCertificateArn(v string) *CertificateDescription { + s.CertificateArn = &v + return s +} + +// SetCertificateId sets the CertificateId field's value. 
+func (s *CertificateDescription) SetCertificateId(v string) *CertificateDescription { + s.CertificateId = &v + return s +} + +// SetCertificatePem sets the CertificatePem field's value. +func (s *CertificateDescription) SetCertificatePem(v string) *CertificateDescription { + s.CertificatePem = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *CertificateDescription) SetCreationDate(v time.Time) *CertificateDescription { + s.CreationDate = &v + return s +} + +// SetCustomerVersion sets the CustomerVersion field's value. +func (s *CertificateDescription) SetCustomerVersion(v int64) *CertificateDescription { + s.CustomerVersion = &v + return s +} + +// SetGenerationId sets the GenerationId field's value. +func (s *CertificateDescription) SetGenerationId(v string) *CertificateDescription { + s.GenerationId = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *CertificateDescription) SetLastModifiedDate(v time.Time) *CertificateDescription { + s.LastModifiedDate = &v + return s +} + +// SetOwnedBy sets the OwnedBy field's value. +func (s *CertificateDescription) SetOwnedBy(v string) *CertificateDescription { + s.OwnedBy = &v + return s +} + +// SetPreviousOwnedBy sets the PreviousOwnedBy field's value. +func (s *CertificateDescription) SetPreviousOwnedBy(v string) *CertificateDescription { + s.PreviousOwnedBy = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *CertificateDescription) SetStatus(v string) *CertificateDescription { + s.Status = &v + return s +} + +// SetTransferData sets the TransferData field's value. +func (s *CertificateDescription) SetTransferData(v *TransferData) *CertificateDescription { + s.TransferData = v + return s +} + +// SetValidity sets the Validity field's value. +func (s *CertificateDescription) SetValidity(v *CertificateValidity) *CertificateDescription { + s.Validity = v + return s +} + +// When the certificate is valid. +type CertificateValidity struct { + _ struct{} `type:"structure"` + + // The certificate is not valid after this date. + NotAfter *time.Time `locationName:"notAfter" type:"timestamp"` + + // The certificate is not valid before this date. + NotBefore *time.Time `locationName:"notBefore" type:"timestamp"` +} + +// String returns the string representation +func (s CertificateValidity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CertificateValidity) GoString() string { + return s.String() +} + +// SetNotAfter sets the NotAfter field's value. +func (s *CertificateValidity) SetNotAfter(v time.Time) *CertificateValidity { + s.NotAfter = &v + return s +} + +// SetNotBefore sets the NotBefore field's value. 
+func (s *CertificateValidity) SetNotBefore(v time.Time) *CertificateValidity { + s.NotBefore = &v + return s +} + +type ClearDefaultAuthorizerInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ClearDefaultAuthorizerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClearDefaultAuthorizerInput) GoString() string { + return s.String() +} + +type ClearDefaultAuthorizerOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ClearDefaultAuthorizerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClearDefaultAuthorizerOutput) GoString() string { + return s.String() +} + +// Describes an action that updates a CloudWatch alarm. +type CloudwatchAlarmAction struct { + _ struct{} `type:"structure"` + + // The CloudWatch alarm name. + // + // AlarmName is a required field + AlarmName *string `locationName:"alarmName" type:"string" required:"true"` + + // The IAM role that allows access to the CloudWatch alarm. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // The reason for the alarm change. + // + // StateReason is a required field + StateReason *string `locationName:"stateReason" type:"string" required:"true"` + + // The value of the alarm state. Acceptable values are: OK, ALARM, INSUFFICIENT_DATA. + // + // StateValue is a required field + StateValue *string `locationName:"stateValue" type:"string" required:"true"` +} + +// String returns the string representation +func (s CloudwatchAlarmAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudwatchAlarmAction) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CloudwatchAlarmAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CloudwatchAlarmAction"} + if s.AlarmName == nil { + invalidParams.Add(request.NewErrParamRequired("AlarmName")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.StateReason == nil { + invalidParams.Add(request.NewErrParamRequired("StateReason")) + } + if s.StateValue == nil { + invalidParams.Add(request.NewErrParamRequired("StateValue")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAlarmName sets the AlarmName field's value. +func (s *CloudwatchAlarmAction) SetAlarmName(v string) *CloudwatchAlarmAction { + s.AlarmName = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CloudwatchAlarmAction) SetRoleArn(v string) *CloudwatchAlarmAction { + s.RoleArn = &v + return s +} + +// SetStateReason sets the StateReason field's value. +func (s *CloudwatchAlarmAction) SetStateReason(v string) *CloudwatchAlarmAction { + s.StateReason = &v + return s +} + +// SetStateValue sets the StateValue field's value. +func (s *CloudwatchAlarmAction) SetStateValue(v string) *CloudwatchAlarmAction { + s.StateValue = &v + return s +} + +// Describes an action that captures a CloudWatch metric. +type CloudwatchMetricAction struct { + _ struct{} `type:"structure"` + + // The CloudWatch metric name. + // + // MetricName is a required field + MetricName *string `locationName:"metricName" type:"string" required:"true"` + + // The CloudWatch metric namespace name. 
+ // + // MetricNamespace is a required field + MetricNamespace *string `locationName:"metricNamespace" type:"string" required:"true"` + + // An optional Unix timestamp (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#about_timestamp). + MetricTimestamp *string `locationName:"metricTimestamp" type:"string"` + + // The metric unit (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Unit) + // supported by CloudWatch. + // + // MetricUnit is a required field + MetricUnit *string `locationName:"metricUnit" type:"string" required:"true"` + + // The CloudWatch metric value. + // + // MetricValue is a required field + MetricValue *string `locationName:"metricValue" type:"string" required:"true"` + + // The IAM role that allows access to the CloudWatch metric. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` +} + +// String returns the string representation +func (s CloudwatchMetricAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudwatchMetricAction) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CloudwatchMetricAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CloudwatchMetricAction"} + if s.MetricName == nil { + invalidParams.Add(request.NewErrParamRequired("MetricName")) + } + if s.MetricNamespace == nil { + invalidParams.Add(request.NewErrParamRequired("MetricNamespace")) + } + if s.MetricUnit == nil { + invalidParams.Add(request.NewErrParamRequired("MetricUnit")) + } + if s.MetricValue == nil { + invalidParams.Add(request.NewErrParamRequired("MetricValue")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMetricName sets the MetricName field's value. +func (s *CloudwatchMetricAction) SetMetricName(v string) *CloudwatchMetricAction { + s.MetricName = &v + return s +} + +// SetMetricNamespace sets the MetricNamespace field's value. +func (s *CloudwatchMetricAction) SetMetricNamespace(v string) *CloudwatchMetricAction { + s.MetricNamespace = &v + return s +} + +// SetMetricTimestamp sets the MetricTimestamp field's value. +func (s *CloudwatchMetricAction) SetMetricTimestamp(v string) *CloudwatchMetricAction { + s.MetricTimestamp = &v + return s +} + +// SetMetricUnit sets the MetricUnit field's value. +func (s *CloudwatchMetricAction) SetMetricUnit(v string) *CloudwatchMetricAction { + s.MetricUnit = &v + return s +} + +// SetMetricValue sets the MetricValue field's value. +func (s *CloudwatchMetricAction) SetMetricValue(v string) *CloudwatchMetricAction { + s.MetricValue = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CloudwatchMetricAction) SetRoleArn(v string) *CloudwatchMetricAction { + s.RoleArn = &v + return s +} + +// Describes the method to use when code signing a file. +type CodeSigning struct { + _ struct{} `type:"structure"` + + // The ID of the AWSSignerJob which was created to sign the file. + AwsSignerJobId *string `locationName:"awsSignerJobId" type:"string"` + + // A custom method for code signing a file. + CustomCodeSigning *CustomCodeSigning `locationName:"customCodeSigning" type:"structure"` + + // Describes the code-signing job. 
+ StartSigningJobParameter *StartSigningJobParameter `locationName:"startSigningJobParameter" type:"structure"` +} + +// String returns the string representation +func (s CodeSigning) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CodeSigning) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CodeSigning) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CodeSigning"} + if s.StartSigningJobParameter != nil { + if err := s.StartSigningJobParameter.Validate(); err != nil { + invalidParams.AddNested("StartSigningJobParameter", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAwsSignerJobId sets the AwsSignerJobId field's value. +func (s *CodeSigning) SetAwsSignerJobId(v string) *CodeSigning { + s.AwsSignerJobId = &v + return s +} + +// SetCustomCodeSigning sets the CustomCodeSigning field's value. +func (s *CodeSigning) SetCustomCodeSigning(v *CustomCodeSigning) *CodeSigning { + s.CustomCodeSigning = v + return s +} + +// SetStartSigningJobParameter sets the StartSigningJobParameter field's value. +func (s *CodeSigning) SetStartSigningJobParameter(v *StartSigningJobParameter) *CodeSigning { + s.StartSigningJobParameter = v + return s +} + +// Describes the certificate chain being used when code signing a file. +type CodeSigningCertificateChain struct { + _ struct{} `type:"structure"` + + // The name of the certificate. + CertificateName *string `locationName:"certificateName" type:"string"` + + // A base64 encoded binary representation of the code signing certificate chain. + InlineDocument *string `locationName:"inlineDocument" type:"string"` +} + +// String returns the string representation +func (s CodeSigningCertificateChain) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CodeSigningCertificateChain) GoString() string { + return s.String() +} + +// SetCertificateName sets the CertificateName field's value. +func (s *CodeSigningCertificateChain) SetCertificateName(v string) *CodeSigningCertificateChain { + s.CertificateName = &v + return s +} + +// SetInlineDocument sets the InlineDocument field's value. +func (s *CodeSigningCertificateChain) SetInlineDocument(v string) *CodeSigningCertificateChain { + s.InlineDocument = &v + return s +} + +// Describes the signature for a file. +type CodeSigningSignature struct { + _ struct{} `type:"structure"` + + // A base64 encoded binary representation of the code signing signature. + // + // InlineDocument is automatically base64 encoded/decoded by the SDK. + InlineDocument []byte `locationName:"inlineDocument" type:"blob"` +} + +// String returns the string representation +func (s CodeSigningSignature) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CodeSigningSignature) GoString() string { + return s.String() +} + +// SetInlineDocument sets the InlineDocument field's value. +func (s *CodeSigningSignature) SetInlineDocument(v []byte) *CodeSigningSignature { + s.InlineDocument = v + return s +} + +// Configuration. +type Configuration struct { + _ struct{} `type:"structure"` + + // True to enable the configuration. 
+ Enabled *bool `type:"boolean"` +} + +// String returns the string representation +func (s Configuration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Configuration) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. +func (s *Configuration) SetEnabled(v bool) *Configuration { + s.Enabled = &v + return s +} + +type CreateAuthorizerInput struct { + _ struct{} `type:"structure"` + + // The ARN of the authorizer's Lambda function. + // + // AuthorizerFunctionArn is a required field + AuthorizerFunctionArn *string `locationName:"authorizerFunctionArn" type:"string" required:"true"` + + // The authorizer name. + // + // AuthorizerName is a required field + AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` + + // The status of the create authorizer request. + Status *string `locationName:"status" type:"string" enum:"AuthorizerStatus"` + + // The name of the token key used to extract the token from the HTTP headers. + // + // TokenKeyName is a required field + TokenKeyName *string `locationName:"tokenKeyName" min:"1" type:"string" required:"true"` + + // The public keys used to verify the digital signature returned by your custom + // authentication service. + // + // TokenSigningPublicKeys is a required field + TokenSigningPublicKeys map[string]*string `locationName:"tokenSigningPublicKeys" type:"map" required:"true"` +} + +// String returns the string representation +func (s CreateAuthorizerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAuthorizerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateAuthorizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateAuthorizerInput"} + if s.AuthorizerFunctionArn == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizerFunctionArn")) + } + if s.AuthorizerName == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) + } + if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) + } + if s.TokenKeyName == nil { + invalidParams.Add(request.NewErrParamRequired("TokenKeyName")) + } + if s.TokenKeyName != nil && len(*s.TokenKeyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TokenKeyName", 1)) + } + if s.TokenSigningPublicKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TokenSigningPublicKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthorizerFunctionArn sets the AuthorizerFunctionArn field's value. +func (s *CreateAuthorizerInput) SetAuthorizerFunctionArn(v string) *CreateAuthorizerInput { + s.AuthorizerFunctionArn = &v + return s +} + +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *CreateAuthorizerInput) SetAuthorizerName(v string) *CreateAuthorizerInput { + s.AuthorizerName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *CreateAuthorizerInput) SetStatus(v string) *CreateAuthorizerInput { + s.Status = &v + return s +} + +// SetTokenKeyName sets the TokenKeyName field's value. +func (s *CreateAuthorizerInput) SetTokenKeyName(v string) *CreateAuthorizerInput { + s.TokenKeyName = &v + return s +} + +// SetTokenSigningPublicKeys sets the TokenSigningPublicKeys field's value. 
+func (s *CreateAuthorizerInput) SetTokenSigningPublicKeys(v map[string]*string) *CreateAuthorizerInput { + s.TokenSigningPublicKeys = v + return s +} + +type CreateAuthorizerOutput struct { + _ struct{} `type:"structure"` + + // The authorizer ARN. + AuthorizerArn *string `locationName:"authorizerArn" type:"string"` + + // The authorizer's name. + AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateAuthorizerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAuthorizerOutput) GoString() string { + return s.String() +} + +// SetAuthorizerArn sets the AuthorizerArn field's value. +func (s *CreateAuthorizerOutput) SetAuthorizerArn(v string) *CreateAuthorizerOutput { + s.AuthorizerArn = &v + return s +} + +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *CreateAuthorizerOutput) SetAuthorizerName(v string) *CreateAuthorizerOutput { + s.AuthorizerName = &v + return s +} + +type CreateBillingGroupInput struct { + _ struct{} `type:"structure"` + + // The name you wish to give to the billing group. + // + // BillingGroupName is a required field + BillingGroupName *string `location:"uri" locationName:"billingGroupName" min:"1" type:"string" required:"true"` + + // The properties of the billing group. + BillingGroupProperties *BillingGroupProperties `locationName:"billingGroupProperties" type:"structure"` + + // Metadata which can be used to manage the billing group. + Tags []*Tag `locationName:"tags" type:"list"` +} + +// String returns the string representation +func (s CreateBillingGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateBillingGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateBillingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateBillingGroupInput"} + if s.BillingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("BillingGroupName")) + } + if s.BillingGroupName != nil && len(*s.BillingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BillingGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *CreateBillingGroupInput) SetBillingGroupName(v string) *CreateBillingGroupInput { + s.BillingGroupName = &v + return s +} + +// SetBillingGroupProperties sets the BillingGroupProperties field's value. +func (s *CreateBillingGroupInput) SetBillingGroupProperties(v *BillingGroupProperties) *CreateBillingGroupInput { + s.BillingGroupProperties = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateBillingGroupInput) SetTags(v []*Tag) *CreateBillingGroupInput { + s.Tags = v + return s +} + +type CreateBillingGroupOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the billing group. + BillingGroupArn *string `locationName:"billingGroupArn" type:"string"` + + // The ID of the billing group. + BillingGroupId *string `locationName:"billingGroupId" min:"1" type:"string"` + + // The name you gave to the billing group. 
+ BillingGroupName *string `locationName:"billingGroupName" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateBillingGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateBillingGroupOutput) GoString() string { + return s.String() +} + +// SetBillingGroupArn sets the BillingGroupArn field's value. +func (s *CreateBillingGroupOutput) SetBillingGroupArn(v string) *CreateBillingGroupOutput { + s.BillingGroupArn = &v + return s +} + +// SetBillingGroupId sets the BillingGroupId field's value. +func (s *CreateBillingGroupOutput) SetBillingGroupId(v string) *CreateBillingGroupOutput { + s.BillingGroupId = &v + return s +} + +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *CreateBillingGroupOutput) SetBillingGroupName(v string) *CreateBillingGroupOutput { + s.BillingGroupName = &v + return s +} + +// The input for the CreateCertificateFromCsr operation. +type CreateCertificateFromCsrInput struct { + _ struct{} `type:"structure"` + + // The certificate signing request (CSR). + // + // CertificateSigningRequest is a required field + CertificateSigningRequest *string `locationName:"certificateSigningRequest" min:"1" type:"string" required:"true"` + + // Specifies whether the certificate is active. + SetAsActive *bool `location:"querystring" locationName:"setAsActive" type:"boolean"` +} + +// String returns the string representation +func (s CreateCertificateFromCsrInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCertificateFromCsrInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateCertificateFromCsrInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCertificateFromCsrInput"} + if s.CertificateSigningRequest == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateSigningRequest")) + } + if s.CertificateSigningRequest != nil && len(*s.CertificateSigningRequest) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CertificateSigningRequest", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateSigningRequest sets the CertificateSigningRequest field's value. +func (s *CreateCertificateFromCsrInput) SetCertificateSigningRequest(v string) *CreateCertificateFromCsrInput { + s.CertificateSigningRequest = &v + return s +} + +// SetSetAsActive sets the SetAsActive field's value. +func (s *CreateCertificateFromCsrInput) SetSetAsActive(v bool) *CreateCertificateFromCsrInput { + s.SetAsActive = &v + return s +} + +// The output from the CreateCertificateFromCsr operation. +type CreateCertificateFromCsrOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the certificate. You can use the ARN as + // a principal for policy operations. + CertificateArn *string `locationName:"certificateArn" type:"string"` + + // The ID of the certificate. Certificate management operations only take a + // certificateId. + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` + + // The certificate data, in PEM format. 
+ CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateCertificateFromCsrOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCertificateFromCsrOutput) GoString() string { + return s.String() +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *CreateCertificateFromCsrOutput) SetCertificateArn(v string) *CreateCertificateFromCsrOutput { + s.CertificateArn = &v + return s +} + +// SetCertificateId sets the CertificateId field's value. +func (s *CreateCertificateFromCsrOutput) SetCertificateId(v string) *CreateCertificateFromCsrOutput { + s.CertificateId = &v + return s +} + +// SetCertificatePem sets the CertificatePem field's value. +func (s *CreateCertificateFromCsrOutput) SetCertificatePem(v string) *CreateCertificateFromCsrOutput { + s.CertificatePem = &v + return s +} + +type CreateDynamicThingGroupInput struct { + _ struct{} `type:"structure"` + + // The dynamic thing group index name. + // + // Currently one index is supported: "AWS_Things". + IndexName *string `locationName:"indexName" min:"1" type:"string"` + + // The dynamic thing group search query string. + // + // See Query Syntax (http://docs.aws.amazon.com/iot/latest/developerguide/query-syntax.html) + // for information about query string syntax. + // + // QueryString is a required field + QueryString *string `locationName:"queryString" min:"1" type:"string" required:"true"` + + // The dynamic thing group query version. + // + // Currently one query version is supported: "2017-09-30". If not specified, + // the query version defaults to this value. + QueryVersion *string `locationName:"queryVersion" type:"string"` + + // Metadata which can be used to manage the dynamic thing group. + Tags []*Tag `locationName:"tags" type:"list"` + + // The dynamic thing group name to create. + // + // ThingGroupName is a required field + ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` + + // The dynamic thing group properties. + ThingGroupProperties *ThingGroupProperties `locationName:"thingGroupProperties" type:"structure"` +} + +// String returns the string representation +func (s CreateDynamicThingGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDynamicThingGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDynamicThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDynamicThingGroupInput"} + if s.IndexName != nil && len(*s.IndexName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) + } + if s.QueryString == nil { + invalidParams.Add(request.NewErrParamRequired("QueryString")) + } + if s.QueryString != nil && len(*s.QueryString) < 1 { + invalidParams.Add(request.NewErrParamMinLen("QueryString", 1)) + } + if s.ThingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) + } + if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. 
+func (s *CreateDynamicThingGroupInput) SetIndexName(v string) *CreateDynamicThingGroupInput { + s.IndexName = &v + return s +} + +// SetQueryString sets the QueryString field's value. +func (s *CreateDynamicThingGroupInput) SetQueryString(v string) *CreateDynamicThingGroupInput { + s.QueryString = &v + return s +} + +// SetQueryVersion sets the QueryVersion field's value. +func (s *CreateDynamicThingGroupInput) SetQueryVersion(v string) *CreateDynamicThingGroupInput { + s.QueryVersion = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDynamicThingGroupInput) SetTags(v []*Tag) *CreateDynamicThingGroupInput { + s.Tags = v + return s +} + +// SetThingGroupName sets the ThingGroupName field's value. +func (s *CreateDynamicThingGroupInput) SetThingGroupName(v string) *CreateDynamicThingGroupInput { + s.ThingGroupName = &v + return s +} + +// SetThingGroupProperties sets the ThingGroupProperties field's value. +func (s *CreateDynamicThingGroupInput) SetThingGroupProperties(v *ThingGroupProperties) *CreateDynamicThingGroupInput { + s.ThingGroupProperties = v + return s +} + +type CreateDynamicThingGroupOutput struct { + _ struct{} `type:"structure"` + + // The dynamic thing group index name. + IndexName *string `locationName:"indexName" min:"1" type:"string"` + + // The dynamic thing group search query string. + QueryString *string `locationName:"queryString" min:"1" type:"string"` + + // The dynamic thing group query version. + QueryVersion *string `locationName:"queryVersion" type:"string"` + + // The dynamic thing group ARN. + ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` + + // The dynamic thing group ID. + ThingGroupId *string `locationName:"thingGroupId" min:"1" type:"string"` + + // The dynamic thing group name. + ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateDynamicThingGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDynamicThingGroupOutput) GoString() string { + return s.String() +} + +// SetIndexName sets the IndexName field's value. +func (s *CreateDynamicThingGroupOutput) SetIndexName(v string) *CreateDynamicThingGroupOutput { + s.IndexName = &v + return s +} + +// SetQueryString sets the QueryString field's value. +func (s *CreateDynamicThingGroupOutput) SetQueryString(v string) *CreateDynamicThingGroupOutput { + s.QueryString = &v + return s +} + +// SetQueryVersion sets the QueryVersion field's value. +func (s *CreateDynamicThingGroupOutput) SetQueryVersion(v string) *CreateDynamicThingGroupOutput { + s.QueryVersion = &v + return s +} + +// SetThingGroupArn sets the ThingGroupArn field's value. +func (s *CreateDynamicThingGroupOutput) SetThingGroupArn(v string) *CreateDynamicThingGroupOutput { + s.ThingGroupArn = &v + return s +} + +// SetThingGroupId sets the ThingGroupId field's value. +func (s *CreateDynamicThingGroupOutput) SetThingGroupId(v string) *CreateDynamicThingGroupOutput { + s.ThingGroupId = &v + return s +} + +// SetThingGroupName sets the ThingGroupName field's value. +func (s *CreateDynamicThingGroupOutput) SetThingGroupName(v string) *CreateDynamicThingGroupOutput { + s.ThingGroupName = &v + return s +} + +type CreateJobInput struct { + _ struct{} `type:"structure"` + + // Allows you to create criteria to abort a job. + AbortConfig *AbortConfig `locationName:"abortConfig" type:"structure"` + + // A short text description of the job. 
+ Description *string `locationName:"description" type:"string"` + + // The job document. + // + // If the job document resides in an S3 bucket, you must use a placeholder link + // when specifying the document. + // + // The placeholder link is of the following form: + // + // ${aws:iot:s3-presigned-url:https://s3.amazonaws.com/bucket/key} + // + // where bucket is your bucket name and key is the object in the bucket to which + // you are linking. + Document *string `locationName:"document" type:"string"` + + // An S3 link to the job document. + DocumentSource *string `locationName:"documentSource" min:"1" type:"string"` + + // Allows you to create a staged rollout of the job. + JobExecutionsRolloutConfig *JobExecutionsRolloutConfig `locationName:"jobExecutionsRolloutConfig" type:"structure"` + + // A job identifier which must be unique for your AWS account. We recommend + // using a UUID. Alpha-numeric characters, "-" and "_" are valid for use here. + // + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + + // Configuration information for pre-signed S3 URLs. + PresignedUrlConfig *PresignedUrlConfig `locationName:"presignedUrlConfig" type:"structure"` + + // Metadata which can be used to manage the job. + Tags []*Tag `locationName:"tags" type:"list"` + + // Specifies whether the job will continue to run (CONTINUOUS), or will be complete + // after all those things specified as targets have completed the job (SNAPSHOT). + // If continuous, the job may also be run on a thing when a change is detected + // in a target. For example, a job will run on a thing when the thing is added + // to a target group, even after the job was completed by all things originally + // in the group. + TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + + // A list of things and thing groups to which the job should be sent. + // + // Targets is a required field + Targets []*string `locationName:"targets" min:"1" type:"list" required:"true"` + + // Specifies the amount of time each device has to finish its execution of the + // job. The timer is started when the job execution status is set to IN_PROGRESS. + // If the job execution status is not set to another terminal state before the + // time expires, it will be automatically set to TIMED_OUT. + TimeoutConfig *TimeoutConfig `locationName:"timeoutConfig" type:"structure"` +} + +// String returns the string representation +func (s CreateJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateJobInput"} + if s.DocumentSource != nil && len(*s.DocumentSource) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DocumentSource", 1)) + } + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) + } + if s.Targets == nil { + invalidParams.Add(request.NewErrParamRequired("Targets")) + } + if s.Targets != nil && len(s.Targets) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Targets", 1)) + } + if s.AbortConfig != nil { + if err := s.AbortConfig.Validate(); err != nil { + invalidParams.AddNested("AbortConfig", err.(request.ErrInvalidParams)) + } + } + if s.JobExecutionsRolloutConfig != nil { + if err := s.JobExecutionsRolloutConfig.Validate(); err != nil { + invalidParams.AddNested("JobExecutionsRolloutConfig", err.(request.ErrInvalidParams)) + } + } + if s.PresignedUrlConfig != nil { + if err := s.PresignedUrlConfig.Validate(); err != nil { + invalidParams.AddNested("PresignedUrlConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAbortConfig sets the AbortConfig field's value. +func (s *CreateJobInput) SetAbortConfig(v *AbortConfig) *CreateJobInput { + s.AbortConfig = v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateJobInput) SetDescription(v string) *CreateJobInput { + s.Description = &v + return s +} + +// SetDocument sets the Document field's value. +func (s *CreateJobInput) SetDocument(v string) *CreateJobInput { + s.Document = &v + return s +} + +// SetDocumentSource sets the DocumentSource field's value. +func (s *CreateJobInput) SetDocumentSource(v string) *CreateJobInput { + s.DocumentSource = &v + return s +} + +// SetJobExecutionsRolloutConfig sets the JobExecutionsRolloutConfig field's value. +func (s *CreateJobInput) SetJobExecutionsRolloutConfig(v *JobExecutionsRolloutConfig) *CreateJobInput { + s.JobExecutionsRolloutConfig = v + return s +} + +// SetJobId sets the JobId field's value. +func (s *CreateJobInput) SetJobId(v string) *CreateJobInput { + s.JobId = &v + return s +} + +// SetPresignedUrlConfig sets the PresignedUrlConfig field's value. +func (s *CreateJobInput) SetPresignedUrlConfig(v *PresignedUrlConfig) *CreateJobInput { + s.PresignedUrlConfig = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateJobInput) SetTags(v []*Tag) *CreateJobInput { + s.Tags = v + return s +} + +// SetTargetSelection sets the TargetSelection field's value. +func (s *CreateJobInput) SetTargetSelection(v string) *CreateJobInput { + s.TargetSelection = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *CreateJobInput) SetTargets(v []*string) *CreateJobInput { + s.Targets = v + return s +} + +// SetTimeoutConfig sets the TimeoutConfig field's value. +func (s *CreateJobInput) SetTimeoutConfig(v *TimeoutConfig) *CreateJobInput { + s.TimeoutConfig = v + return s +} + +type CreateJobOutput struct { + _ struct{} `type:"structure"` + + // The job description. + Description *string `locationName:"description" type:"string"` + + // The job ARN. + JobArn *string `locationName:"jobArn" type:"string"` + + // The unique identifier you assigned to this job. 
+ JobId *string `locationName:"jobId" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateJobOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *CreateJobOutput) SetDescription(v string) *CreateJobOutput { + s.Description = &v + return s +} + +// SetJobArn sets the JobArn field's value. +func (s *CreateJobOutput) SetJobArn(v string) *CreateJobOutput { + s.JobArn = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *CreateJobOutput) SetJobId(v string) *CreateJobOutput { + s.JobId = &v + return s +} + +// The input for the CreateKeysAndCertificate operation. +type CreateKeysAndCertificateInput struct { + _ struct{} `type:"structure"` + + // Specifies whether the certificate is active. SetAsActive *bool `location:"querystring" locationName:"setAsActive" type:"boolean"` } // String returns the string representation -func (s AcceptCertificateTransferInput) String() string { +func (s CreateKeysAndCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateKeysAndCertificateInput) GoString() string { + return s.String() +} + +// SetSetAsActive sets the SetAsActive field's value. +func (s *CreateKeysAndCertificateInput) SetSetAsActive(v bool) *CreateKeysAndCertificateInput { + s.SetAsActive = &v + return s +} + +// The output of the CreateKeysAndCertificate operation. +type CreateKeysAndCertificateOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the certificate. + CertificateArn *string `locationName:"certificateArn" type:"string"` + + // The ID of the certificate. AWS IoT issues a default subject name for the + // certificate (for example, AWS IoT Certificate). + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` + + // The certificate data, in PEM format. + CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` + + // The generated key pair. + KeyPair *KeyPair `locationName:"keyPair" type:"structure"` +} + +// String returns the string representation +func (s CreateKeysAndCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateKeysAndCertificateOutput) GoString() string { + return s.String() +} + +// SetCertificateArn sets the CertificateArn field's value. +func (s *CreateKeysAndCertificateOutput) SetCertificateArn(v string) *CreateKeysAndCertificateOutput { + s.CertificateArn = &v + return s +} + +// SetCertificateId sets the CertificateId field's value. +func (s *CreateKeysAndCertificateOutput) SetCertificateId(v string) *CreateKeysAndCertificateOutput { + s.CertificateId = &v + return s +} + +// SetCertificatePem sets the CertificatePem field's value. +func (s *CreateKeysAndCertificateOutput) SetCertificatePem(v string) *CreateKeysAndCertificateOutput { + s.CertificatePem = &v + return s +} + +// SetKeyPair sets the KeyPair field's value. +func (s *CreateKeysAndCertificateOutput) SetKeyPair(v *KeyPair) *CreateKeysAndCertificateOutput { + s.KeyPair = v + return s +} + +type CreateOTAUpdateInput struct { + _ struct{} `type:"structure"` + + // A list of additional OTA update parameters which are name-value pairs. 
+ AdditionalParameters map[string]*string `locationName:"additionalParameters" type:"map"` + + // Configuration for the rollout of OTA updates. + AwsJobExecutionsRolloutConfig *AwsJobExecutionsRolloutConfig `locationName:"awsJobExecutionsRolloutConfig" type:"structure"` + + // The description of the OTA update. + Description *string `locationName:"description" type:"string"` + + // The files to be streamed by the OTA update. + // + // Files is a required field + Files []*OTAUpdateFile `locationName:"files" min:"1" type:"list" required:"true"` + + // The ID of the OTA update to be created. + // + // OtaUpdateId is a required field + OtaUpdateId *string `location:"uri" locationName:"otaUpdateId" min:"1" type:"string" required:"true"` + + // The IAM role that allows access to the AWS IoT Jobs service. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` + + // Specifies whether the update will continue to run (CONTINUOUS), or will be + // complete after all the things specified as targets have completed the update + // (SNAPSHOT). If continuous, the update may also be run on a thing when a change + // is detected in a target. For example, an update will run on a thing when + // the thing is added to a target group, even after the update was completed + // by all things originally in the group. Valid values: CONTINUOUS | SNAPSHOT. + TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + + // The targeted devices to receive OTA updates. + // + // Targets is a required field + Targets []*string `locationName:"targets" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s CreateOTAUpdateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateOTAUpdateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateOTAUpdateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateOTAUpdateInput"} + if s.Files == nil { + invalidParams.Add(request.NewErrParamRequired("Files")) + } + if s.Files != nil && len(s.Files) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Files", 1)) + } + if s.OtaUpdateId == nil { + invalidParams.Add(request.NewErrParamRequired("OtaUpdateId")) + } + if s.OtaUpdateId != nil && len(*s.OtaUpdateId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OtaUpdateId", 1)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.Targets == nil { + invalidParams.Add(request.NewErrParamRequired("Targets")) + } + if s.Targets != nil && len(s.Targets) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Targets", 1)) + } + if s.AwsJobExecutionsRolloutConfig != nil { + if err := s.AwsJobExecutionsRolloutConfig.Validate(); err != nil { + invalidParams.AddNested("AwsJobExecutionsRolloutConfig", err.(request.ErrInvalidParams)) + } + } + if s.Files != nil { + for i, v := range s.Files { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Files", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAdditionalParameters sets the AdditionalParameters field's value. 
+func (s *CreateOTAUpdateInput) SetAdditionalParameters(v map[string]*string) *CreateOTAUpdateInput { + s.AdditionalParameters = v + return s +} + +// SetAwsJobExecutionsRolloutConfig sets the AwsJobExecutionsRolloutConfig field's value. +func (s *CreateOTAUpdateInput) SetAwsJobExecutionsRolloutConfig(v *AwsJobExecutionsRolloutConfig) *CreateOTAUpdateInput { + s.AwsJobExecutionsRolloutConfig = v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateOTAUpdateInput) SetDescription(v string) *CreateOTAUpdateInput { + s.Description = &v + return s +} + +// SetFiles sets the Files field's value. +func (s *CreateOTAUpdateInput) SetFiles(v []*OTAUpdateFile) *CreateOTAUpdateInput { + s.Files = v + return s +} + +// SetOtaUpdateId sets the OtaUpdateId field's value. +func (s *CreateOTAUpdateInput) SetOtaUpdateId(v string) *CreateOTAUpdateInput { + s.OtaUpdateId = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CreateOTAUpdateInput) SetRoleArn(v string) *CreateOTAUpdateInput { + s.RoleArn = &v + return s +} + +// SetTargetSelection sets the TargetSelection field's value. +func (s *CreateOTAUpdateInput) SetTargetSelection(v string) *CreateOTAUpdateInput { + s.TargetSelection = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *CreateOTAUpdateInput) SetTargets(v []*string) *CreateOTAUpdateInput { + s.Targets = v + return s +} + +type CreateOTAUpdateOutput struct { + _ struct{} `type:"structure"` + + // The AWS IoT job ARN associated with the OTA update. + AwsIotJobArn *string `locationName:"awsIotJobArn" type:"string"` + + // The AWS IoT job ID associated with the OTA update. + AwsIotJobId *string `locationName:"awsIotJobId" type:"string"` + + // The OTA update ARN. + OtaUpdateArn *string `locationName:"otaUpdateArn" type:"string"` + + // The OTA update ID. + OtaUpdateId *string `locationName:"otaUpdateId" min:"1" type:"string"` + + // The OTA update status. + OtaUpdateStatus *string `locationName:"otaUpdateStatus" type:"string" enum:"OTAUpdateStatus"` +} + +// String returns the string representation +func (s CreateOTAUpdateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateOTAUpdateOutput) GoString() string { + return s.String() +} + +// SetAwsIotJobArn sets the AwsIotJobArn field's value. +func (s *CreateOTAUpdateOutput) SetAwsIotJobArn(v string) *CreateOTAUpdateOutput { + s.AwsIotJobArn = &v + return s +} + +// SetAwsIotJobId sets the AwsIotJobId field's value. +func (s *CreateOTAUpdateOutput) SetAwsIotJobId(v string) *CreateOTAUpdateOutput { + s.AwsIotJobId = &v + return s +} + +// SetOtaUpdateArn sets the OtaUpdateArn field's value. +func (s *CreateOTAUpdateOutput) SetOtaUpdateArn(v string) *CreateOTAUpdateOutput { + s.OtaUpdateArn = &v + return s +} + +// SetOtaUpdateId sets the OtaUpdateId field's value. +func (s *CreateOTAUpdateOutput) SetOtaUpdateId(v string) *CreateOTAUpdateOutput { + s.OtaUpdateId = &v + return s +} + +// SetOtaUpdateStatus sets the OtaUpdateStatus field's value. +func (s *CreateOTAUpdateOutput) SetOtaUpdateStatus(v string) *CreateOTAUpdateOutput { + s.OtaUpdateStatus = &v + return s +} + +// The input for the CreatePolicy operation. +type CreatePolicyInput struct { + _ struct{} `type:"structure"` + + // The JSON document that describes the policy. policyDocument must have a minimum + // length of 1, with a maximum length of 2048, excluding whitespace. 
+ // + // PolicyDocument is a required field + PolicyDocument *string `locationName:"policyDocument" type:"string" required:"true"` + + // The policy name. + // + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreatePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreatePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePolicyInput"} + if s.PolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *CreatePolicyInput) SetPolicyDocument(v string) *CreatePolicyInput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *CreatePolicyInput) SetPolicyName(v string) *CreatePolicyInput { + s.PolicyName = &v + return s +} + +// The output from the CreatePolicy operation. +type CreatePolicyOutput struct { + _ struct{} `type:"structure"` + + // The policy ARN. + PolicyArn *string `locationName:"policyArn" type:"string"` + + // The JSON document that describes the policy. + PolicyDocument *string `locationName:"policyDocument" type:"string"` + + // The policy name. + PolicyName *string `locationName:"policyName" min:"1" type:"string"` + + // The policy version ID. + PolicyVersionId *string `locationName:"policyVersionId" type:"string"` +} + +// String returns the string representation +func (s CreatePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePolicyOutput) GoString() string { + return s.String() +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *CreatePolicyOutput) SetPolicyArn(v string) *CreatePolicyOutput { + s.PolicyArn = &v + return s +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *CreatePolicyOutput) SetPolicyDocument(v string) *CreatePolicyOutput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *CreatePolicyOutput) SetPolicyName(v string) *CreatePolicyOutput { + s.PolicyName = &v + return s +} + +// SetPolicyVersionId sets the PolicyVersionId field's value. +func (s *CreatePolicyOutput) SetPolicyVersionId(v string) *CreatePolicyOutput { + s.PolicyVersionId = &v + return s +} + +// The input for the CreatePolicyVersion operation. +type CreatePolicyVersionInput struct { + _ struct{} `type:"structure"` + + // The JSON document that describes the policy. Minimum length of 1. Maximum + // length of 2048, excluding whitespace. + // + // PolicyDocument is a required field + PolicyDocument *string `locationName:"policyDocument" type:"string" required:"true"` + + // The policy name. 
+ // + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + + // Specifies whether the policy version is set as the default. When this parameter + // is true, the new policy version becomes the operative version (that is, the + // version that is in effect for the certificates to which the policy is attached). + SetAsDefault *bool `location:"querystring" locationName:"setAsDefault" type:"boolean"` +} + +// String returns the string representation +func (s CreatePolicyVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePolicyVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreatePolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePolicyVersionInput"} + if s.PolicyDocument == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *CreatePolicyVersionInput) SetPolicyDocument(v string) *CreatePolicyVersionInput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *CreatePolicyVersionInput) SetPolicyName(v string) *CreatePolicyVersionInput { + s.PolicyName = &v + return s +} + +// SetSetAsDefault sets the SetAsDefault field's value. +func (s *CreatePolicyVersionInput) SetSetAsDefault(v bool) *CreatePolicyVersionInput { + s.SetAsDefault = &v + return s +} + +// The output of the CreatePolicyVersion operation. +type CreatePolicyVersionOutput struct { + _ struct{} `type:"structure"` + + // Specifies whether the policy version is the default. + IsDefaultVersion *bool `locationName:"isDefaultVersion" type:"boolean"` + + // The policy ARN. + PolicyArn *string `locationName:"policyArn" type:"string"` + + // The JSON document that describes the policy. + PolicyDocument *string `locationName:"policyDocument" type:"string"` + + // The policy version ID. + PolicyVersionId *string `locationName:"policyVersionId" type:"string"` +} + +// String returns the string representation +func (s CreatePolicyVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePolicyVersionOutput) GoString() string { + return s.String() +} + +// SetIsDefaultVersion sets the IsDefaultVersion field's value. +func (s *CreatePolicyVersionOutput) SetIsDefaultVersion(v bool) *CreatePolicyVersionOutput { + s.IsDefaultVersion = &v + return s +} + +// SetPolicyArn sets the PolicyArn field's value. +func (s *CreatePolicyVersionOutput) SetPolicyArn(v string) *CreatePolicyVersionOutput { + s.PolicyArn = &v + return s +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *CreatePolicyVersionOutput) SetPolicyDocument(v string) *CreatePolicyVersionOutput { + s.PolicyDocument = &v + return s +} + +// SetPolicyVersionId sets the PolicyVersionId field's value. 
+func (s *CreatePolicyVersionOutput) SetPolicyVersionId(v string) *CreatePolicyVersionOutput { + s.PolicyVersionId = &v + return s +} + +type CreateRoleAliasInput struct { + _ struct{} `type:"structure"` + + // How long (in seconds) the credentials will be valid. + CredentialDurationSeconds *int64 `locationName:"credentialDurationSeconds" min:"900" type:"integer"` + + // The role alias that points to a role ARN. This allows you to change the role + // without having to update the device. + // + // RoleAlias is a required field + RoleAlias *string `location:"uri" locationName:"roleAlias" min:"1" type:"string" required:"true"` + + // The role ARN. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateRoleAliasInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateRoleAliasInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateRoleAliasInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateRoleAliasInput"} + if s.CredentialDurationSeconds != nil && *s.CredentialDurationSeconds < 900 { + invalidParams.Add(request.NewErrParamMinValue("CredentialDurationSeconds", 900)) + } + if s.RoleAlias == nil { + invalidParams.Add(request.NewErrParamRequired("RoleAlias")) + } + if s.RoleAlias != nil && len(*s.RoleAlias) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleAlias", 1)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCredentialDurationSeconds sets the CredentialDurationSeconds field's value. +func (s *CreateRoleAliasInput) SetCredentialDurationSeconds(v int64) *CreateRoleAliasInput { + s.CredentialDurationSeconds = &v + return s +} + +// SetRoleAlias sets the RoleAlias field's value. +func (s *CreateRoleAliasInput) SetRoleAlias(v string) *CreateRoleAliasInput { + s.RoleAlias = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CreateRoleAliasInput) SetRoleArn(v string) *CreateRoleAliasInput { + s.RoleArn = &v + return s +} + +type CreateRoleAliasOutput struct { + _ struct{} `type:"structure"` + + // The role alias. + RoleAlias *string `locationName:"roleAlias" min:"1" type:"string"` + + // The role alias ARN. + RoleAliasArn *string `locationName:"roleAliasArn" type:"string"` +} + +// String returns the string representation +func (s CreateRoleAliasOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateRoleAliasOutput) GoString() string { + return s.String() +} + +// SetRoleAlias sets the RoleAlias field's value. +func (s *CreateRoleAliasOutput) SetRoleAlias(v string) *CreateRoleAliasOutput { + s.RoleAlias = &v + return s +} + +// SetRoleAliasArn sets the RoleAliasArn field's value. +func (s *CreateRoleAliasOutput) SetRoleAliasArn(v string) *CreateRoleAliasOutput { + s.RoleAliasArn = &v + return s +} + +type CreateScheduledAuditInput struct { + _ struct{} `type:"structure"` + + // The day of the month on which the scheduled audit takes place. Can be "1" + // through "31" or "LAST". 
This field is required if the "frequency" parameter + // is set to "MONTHLY". If days 29-31 are specified, and the month does not + // have that many days, the audit takes place on the "LAST" day of the month. + DayOfMonth *string `locationName:"dayOfMonth" type:"string"` + + // The day of the week on which the scheduled audit takes place. Can be one + // of "SUN", "MON", "TUE", "WED", "THU", "FRI" or "SAT". This field is required + // if the "frequency" parameter is set to "WEEKLY" or "BIWEEKLY". + DayOfWeek *string `locationName:"dayOfWeek" type:"string" enum:"DayOfWeek"` + + // How often the scheduled audit takes place. Can be one of "DAILY", "WEEKLY", + // "BIWEEKLY" or "MONTHLY". The actual start time of each audit is determined + // by the system. + // + // Frequency is a required field + Frequency *string `locationName:"frequency" type:"string" required:"true" enum:"AuditFrequency"` + + // The name you want to give to the scheduled audit. (Max. 128 chars) + // + // ScheduledAuditName is a required field + ScheduledAuditName *string `location:"uri" locationName:"scheduledAuditName" min:"1" type:"string" required:"true"` + + // Which checks are performed during the scheduled audit. Checks must be enabled + // for your account. (Use DescribeAccountAuditConfiguration to see the list + // of all checks including those that are enabled or UpdateAccountAuditConfiguration + // to select which checks are enabled.) + // + // TargetCheckNames is a required field + TargetCheckNames []*string `locationName:"targetCheckNames" type:"list" required:"true"` +} + +// String returns the string representation +func (s CreateScheduledAuditInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateScheduledAuditInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateScheduledAuditInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateScheduledAuditInput"} + if s.Frequency == nil { + invalidParams.Add(request.NewErrParamRequired("Frequency")) + } + if s.ScheduledAuditName == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduledAuditName")) + } + if s.ScheduledAuditName != nil && len(*s.ScheduledAuditName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ScheduledAuditName", 1)) + } + if s.TargetCheckNames == nil { + invalidParams.Add(request.NewErrParamRequired("TargetCheckNames")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDayOfMonth sets the DayOfMonth field's value. +func (s *CreateScheduledAuditInput) SetDayOfMonth(v string) *CreateScheduledAuditInput { + s.DayOfMonth = &v + return s +} + +// SetDayOfWeek sets the DayOfWeek field's value. +func (s *CreateScheduledAuditInput) SetDayOfWeek(v string) *CreateScheduledAuditInput { + s.DayOfWeek = &v + return s +} + +// SetFrequency sets the Frequency field's value. +func (s *CreateScheduledAuditInput) SetFrequency(v string) *CreateScheduledAuditInput { + s.Frequency = &v + return s +} + +// SetScheduledAuditName sets the ScheduledAuditName field's value. +func (s *CreateScheduledAuditInput) SetScheduledAuditName(v string) *CreateScheduledAuditInput { + s.ScheduledAuditName = &v + return s +} + +// SetTargetCheckNames sets the TargetCheckNames field's value. 
+func (s *CreateScheduledAuditInput) SetTargetCheckNames(v []*string) *CreateScheduledAuditInput { + s.TargetCheckNames = v + return s +} + +type CreateScheduledAuditOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the scheduled audit. + ScheduledAuditArn *string `locationName:"scheduledAuditArn" type:"string"` +} + +// String returns the string representation +func (s CreateScheduledAuditOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateScheduledAuditOutput) GoString() string { + return s.String() +} + +// SetScheduledAuditArn sets the ScheduledAuditArn field's value. +func (s *CreateScheduledAuditOutput) SetScheduledAuditArn(v string) *CreateScheduledAuditOutput { + s.ScheduledAuditArn = &v + return s +} + +type CreateSecurityProfileInput struct { + _ struct{} `type:"structure"` + + // Specifies the destinations to which alerts are sent. (Alerts are always sent + // to the console.) Alerts are generated when a device (thing) violates a behavior. + AlertTargets map[string]*AlertTarget `locationName:"alertTargets" type:"map"` + + // Specifies the behaviors that, when violated by a device (thing), cause an + // alert. + // + // Behaviors is a required field + Behaviors []*Behavior `locationName:"behaviors" type:"list" required:"true"` + + // A description of the security profile. + SecurityProfileDescription *string `locationName:"securityProfileDescription" type:"string"` + + // The name you are giving to the security profile. + // + // SecurityProfileName is a required field + SecurityProfileName *string `location:"uri" locationName:"securityProfileName" min:"1" type:"string" required:"true"` + + // Metadata which can be used to manage the security profile. + Tags []*Tag `locationName:"tags" type:"list"` +} + +// String returns the string representation +func (s CreateSecurityProfileInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSecurityProfileInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateSecurityProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateSecurityProfileInput"} + if s.Behaviors == nil { + invalidParams.Add(request.NewErrParamRequired("Behaviors")) + } + if s.SecurityProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileName")) + } + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) + } + if s.AlertTargets != nil { + for i, v := range s.AlertTargets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AlertTargets", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Behaviors != nil { + for i, v := range s.Behaviors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Behaviors", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAlertTargets sets the AlertTargets field's value. +func (s *CreateSecurityProfileInput) SetAlertTargets(v map[string]*AlertTarget) *CreateSecurityProfileInput { + s.AlertTargets = v + return s +} + +// SetBehaviors sets the Behaviors field's value. 
+func (s *CreateSecurityProfileInput) SetBehaviors(v []*Behavior) *CreateSecurityProfileInput { + s.Behaviors = v + return s +} + +// SetSecurityProfileDescription sets the SecurityProfileDescription field's value. +func (s *CreateSecurityProfileInput) SetSecurityProfileDescription(v string) *CreateSecurityProfileInput { + s.SecurityProfileDescription = &v + return s +} + +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *CreateSecurityProfileInput) SetSecurityProfileName(v string) *CreateSecurityProfileInput { + s.SecurityProfileName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateSecurityProfileInput) SetTags(v []*Tag) *CreateSecurityProfileInput { + s.Tags = v + return s +} + +type CreateSecurityProfileOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the security profile. + SecurityProfileArn *string `locationName:"securityProfileArn" type:"string"` + + // The name you gave to the security profile. + SecurityProfileName *string `locationName:"securityProfileName" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateSecurityProfileOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSecurityProfileOutput) GoString() string { + return s.String() +} + +// SetSecurityProfileArn sets the SecurityProfileArn field's value. +func (s *CreateSecurityProfileOutput) SetSecurityProfileArn(v string) *CreateSecurityProfileOutput { + s.SecurityProfileArn = &v + return s +} + +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *CreateSecurityProfileOutput) SetSecurityProfileName(v string) *CreateSecurityProfileOutput { + s.SecurityProfileName = &v + return s +} + +type CreateStreamInput struct { + _ struct{} `type:"structure"` + + // A description of the stream. + Description *string `locationName:"description" type:"string"` + + // The files to stream. + // + // Files is a required field + Files []*StreamFile `locationName:"files" min:"1" type:"list" required:"true"` + + // An IAM role that allows the IoT service principal assumes to access your + // S3 files. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` + + // The stream ID. + // + // StreamId is a required field + StreamId *string `location:"uri" locationName:"streamId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateStreamInput"} + if s.Files == nil { + invalidParams.Add(request.NewErrParamRequired("Files")) + } + if s.Files != nil && len(s.Files) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Files", 1)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.StreamId == nil { + invalidParams.Add(request.NewErrParamRequired("StreamId")) + } + if s.StreamId != nil && len(*s.StreamId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamId", 1)) + } + if s.Files != nil { + for i, v := range s.Files { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Files", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *CreateStreamInput) SetDescription(v string) *CreateStreamInput { + s.Description = &v + return s +} + +// SetFiles sets the Files field's value. +func (s *CreateStreamInput) SetFiles(v []*StreamFile) *CreateStreamInput { + s.Files = v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CreateStreamInput) SetRoleArn(v string) *CreateStreamInput { + s.RoleArn = &v + return s +} + +// SetStreamId sets the StreamId field's value. +func (s *CreateStreamInput) SetStreamId(v string) *CreateStreamInput { + s.StreamId = &v + return s +} + +type CreateStreamOutput struct { + _ struct{} `type:"structure"` + + // A description of the stream. + Description *string `locationName:"description" type:"string"` + + // The stream ARN. + StreamArn *string `locationName:"streamArn" type:"string"` + + // The stream ID. + StreamId *string `locationName:"streamId" min:"1" type:"string"` + + // The version of the stream. + StreamVersion *int64 `locationName:"streamVersion" type:"integer"` +} + +// String returns the string representation +func (s CreateStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStreamOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *CreateStreamOutput) SetDescription(v string) *CreateStreamOutput { + s.Description = &v + return s +} + +// SetStreamArn sets the StreamArn field's value. +func (s *CreateStreamOutput) SetStreamArn(v string) *CreateStreamOutput { + s.StreamArn = &v + return s +} + +// SetStreamId sets the StreamId field's value. +func (s *CreateStreamOutput) SetStreamId(v string) *CreateStreamOutput { + s.StreamId = &v + return s +} + +// SetStreamVersion sets the StreamVersion field's value. +func (s *CreateStreamOutput) SetStreamVersion(v int64) *CreateStreamOutput { + s.StreamVersion = &v + return s +} + +type CreateThingGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the parent thing group. + ParentGroupName *string `locationName:"parentGroupName" min:"1" type:"string"` + + // Metadata which can be used to manage the thing group. + Tags []*Tag `locationName:"tags" type:"list"` + + // The thing group name to create. + // + // ThingGroupName is a required field + ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` + + // The thing group properties. 
+ ThingGroupProperties *ThingGroupProperties `locationName:"thingGroupProperties" type:"structure"` +} + +// String returns the string representation +func (s CreateThingGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateThingGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateThingGroupInput"} + if s.ParentGroupName != nil && len(*s.ParentGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ParentGroupName", 1)) + } + if s.ThingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) + } + if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetParentGroupName sets the ParentGroupName field's value. +func (s *CreateThingGroupInput) SetParentGroupName(v string) *CreateThingGroupInput { + s.ParentGroupName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateThingGroupInput) SetTags(v []*Tag) *CreateThingGroupInput { + s.Tags = v + return s +} + +// SetThingGroupName sets the ThingGroupName field's value. +func (s *CreateThingGroupInput) SetThingGroupName(v string) *CreateThingGroupInput { + s.ThingGroupName = &v + return s +} + +// SetThingGroupProperties sets the ThingGroupProperties field's value. +func (s *CreateThingGroupInput) SetThingGroupProperties(v *ThingGroupProperties) *CreateThingGroupInput { + s.ThingGroupProperties = v + return s +} + +type CreateThingGroupOutput struct { + _ struct{} `type:"structure"` + + // The thing group ARN. + ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` + + // The thing group ID. + ThingGroupId *string `locationName:"thingGroupId" min:"1" type:"string"` + + // The thing group name. + ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateThingGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateThingGroupOutput) GoString() string { + return s.String() +} + +// SetThingGroupArn sets the ThingGroupArn field's value. +func (s *CreateThingGroupOutput) SetThingGroupArn(v string) *CreateThingGroupOutput { + s.ThingGroupArn = &v + return s +} + +// SetThingGroupId sets the ThingGroupId field's value. +func (s *CreateThingGroupOutput) SetThingGroupId(v string) *CreateThingGroupOutput { + s.ThingGroupId = &v + return s +} + +// SetThingGroupName sets the ThingGroupName field's value. +func (s *CreateThingGroupOutput) SetThingGroupName(v string) *CreateThingGroupOutput { + s.ThingGroupName = &v + return s +} + +// The input for the CreateThing operation. +type CreateThingInput struct { + _ struct{} `type:"structure"` + + // The attribute payload, which consists of up to three name/value pairs in + // a JSON document. For example: + // + // {\"attributes\":{\"string1\":\"string2\"}} + AttributePayload *AttributePayload `locationName:"attributePayload" type:"structure"` + + // The name of the billing group the thing will be added to. + BillingGroupName *string `locationName:"billingGroupName" min:"1" type:"string"` + + // The name of the thing to create. 
+ // + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + + // The name of the thing type associated with the new thing. + ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateThingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateThingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateThingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateThingInput"} + if s.BillingGroupName != nil && len(*s.BillingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BillingGroupName", 1)) + } + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } + if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributePayload sets the AttributePayload field's value. +func (s *CreateThingInput) SetAttributePayload(v *AttributePayload) *CreateThingInput { + s.AttributePayload = v + return s +} + +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *CreateThingInput) SetBillingGroupName(v string) *CreateThingInput { + s.BillingGroupName = &v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *CreateThingInput) SetThingName(v string) *CreateThingInput { + s.ThingName = &v + return s +} + +// SetThingTypeName sets the ThingTypeName field's value. +func (s *CreateThingInput) SetThingTypeName(v string) *CreateThingInput { + s.ThingTypeName = &v + return s +} + +// The output of the CreateThing operation. +type CreateThingOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the new thing. + ThingArn *string `locationName:"thingArn" type:"string"` + + // The thing ID. + ThingId *string `locationName:"thingId" type:"string"` + + // The name of the new thing. + ThingName *string `locationName:"thingName" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateThingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateThingOutput) GoString() string { + return s.String() +} + +// SetThingArn sets the ThingArn field's value. +func (s *CreateThingOutput) SetThingArn(v string) *CreateThingOutput { + s.ThingArn = &v + return s +} + +// SetThingId sets the ThingId field's value. +func (s *CreateThingOutput) SetThingId(v string) *CreateThingOutput { + s.ThingId = &v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *CreateThingOutput) SetThingName(v string) *CreateThingOutput { + s.ThingName = &v + return s +} + +// The input for the CreateThingType operation. +type CreateThingTypeInput struct { + _ struct{} `type:"structure"` + + // Metadata which can be used to manage the thing type. + Tags []*Tag `locationName:"tags" type:"list"` + + // The name of the thing type. 
+ // + // ThingTypeName is a required field + ThingTypeName *string `location:"uri" locationName:"thingTypeName" min:"1" type:"string" required:"true"` + + // The ThingTypeProperties for the thing type to create. It contains information + // about the new thing type including a description, and a list of searchable + // thing attribute names. + ThingTypeProperties *ThingTypeProperties `locationName:"thingTypeProperties" type:"structure"` +} + +// String returns the string representation +func (s CreateThingTypeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateThingTypeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateThingTypeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateThingTypeInput"} + if s.ThingTypeName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingTypeName")) + } + if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTags sets the Tags field's value. +func (s *CreateThingTypeInput) SetTags(v []*Tag) *CreateThingTypeInput { + s.Tags = v + return s +} + +// SetThingTypeName sets the ThingTypeName field's value. +func (s *CreateThingTypeInput) SetThingTypeName(v string) *CreateThingTypeInput { + s.ThingTypeName = &v + return s +} + +// SetThingTypeProperties sets the ThingTypeProperties field's value. +func (s *CreateThingTypeInput) SetThingTypeProperties(v *ThingTypeProperties) *CreateThingTypeInput { + s.ThingTypeProperties = v + return s +} + +// The output of the CreateThingType operation. +type CreateThingTypeOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the thing type. + ThingTypeArn *string `locationName:"thingTypeArn" type:"string"` + + // The thing type ID. + ThingTypeId *string `locationName:"thingTypeId" type:"string"` + + // The name of the thing type. + ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateThingTypeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateThingTypeOutput) GoString() string { + return s.String() +} + +// SetThingTypeArn sets the ThingTypeArn field's value. +func (s *CreateThingTypeOutput) SetThingTypeArn(v string) *CreateThingTypeOutput { + s.ThingTypeArn = &v + return s +} + +// SetThingTypeId sets the ThingTypeId field's value. +func (s *CreateThingTypeOutput) SetThingTypeId(v string) *CreateThingTypeOutput { + s.ThingTypeId = &v + return s +} + +// SetThingTypeName sets the ThingTypeName field's value. +func (s *CreateThingTypeOutput) SetThingTypeName(v string) *CreateThingTypeOutput { + s.ThingTypeName = &v + return s +} + +// The input for the CreateTopicRule operation. +type CreateTopicRuleInput struct { + _ struct{} `type:"structure" payload:"TopicRulePayload"` + + // The name of the rule. + // + // RuleName is a required field + RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` + + // The rule payload. 
+ // + // TopicRulePayload is a required field + TopicRulePayload *TopicRulePayload `locationName:"topicRulePayload" type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateTopicRuleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTopicRuleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateTopicRuleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateTopicRuleInput"} + if s.RuleName == nil { + invalidParams.Add(request.NewErrParamRequired("RuleName")) + } + if s.RuleName != nil && len(*s.RuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) + } + if s.TopicRulePayload == nil { + invalidParams.Add(request.NewErrParamRequired("TopicRulePayload")) + } + if s.TopicRulePayload != nil { + if err := s.TopicRulePayload.Validate(); err != nil { + invalidParams.AddNested("TopicRulePayload", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRuleName sets the RuleName field's value. +func (s *CreateTopicRuleInput) SetRuleName(v string) *CreateTopicRuleInput { + s.RuleName = &v + return s +} + +// SetTopicRulePayload sets the TopicRulePayload field's value. +func (s *CreateTopicRuleInput) SetTopicRulePayload(v *TopicRulePayload) *CreateTopicRuleInput { + s.TopicRulePayload = v + return s +} + +type CreateTopicRuleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CreateTopicRuleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTopicRuleOutput) GoString() string { + return s.String() +} + +// Describes a custom method used to code sign a file. +type CustomCodeSigning struct { + _ struct{} `type:"structure"` + + // The certificate chain. + CertificateChain *CodeSigningCertificateChain `locationName:"certificateChain" type:"structure"` + + // The hash algorithm used to code sign the file. + HashAlgorithm *string `locationName:"hashAlgorithm" type:"string"` + + // The signature for the file. + Signature *CodeSigningSignature `locationName:"signature" type:"structure"` + + // The signature algorithm used to code sign the file. + SignatureAlgorithm *string `locationName:"signatureAlgorithm" type:"string"` +} + +// String returns the string representation +func (s CustomCodeSigning) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CustomCodeSigning) GoString() string { + return s.String() +} + +// SetCertificateChain sets the CertificateChain field's value. +func (s *CustomCodeSigning) SetCertificateChain(v *CodeSigningCertificateChain) *CustomCodeSigning { + s.CertificateChain = v + return s +} + +// SetHashAlgorithm sets the HashAlgorithm field's value. +func (s *CustomCodeSigning) SetHashAlgorithm(v string) *CustomCodeSigning { + s.HashAlgorithm = &v + return s +} + +// SetSignature sets the Signature field's value. +func (s *CustomCodeSigning) SetSignature(v *CodeSigningSignature) *CustomCodeSigning { + s.Signature = v + return s +} + +// SetSignatureAlgorithm sets the SignatureAlgorithm field's value. 
+func (s *CustomCodeSigning) SetSignatureAlgorithm(v string) *CustomCodeSigning { + s.SignatureAlgorithm = &v + return s +} + +type DeleteAccountAuditConfigurationInput struct { + _ struct{} `type:"structure"` + + // If true, all scheduled audits are deleted. + DeleteScheduledAudits *bool `location:"querystring" locationName:"deleteScheduledAudits" type:"boolean"` +} + +// String returns the string representation +func (s DeleteAccountAuditConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccountAuditConfigurationInput) GoString() string { + return s.String() +} + +// SetDeleteScheduledAudits sets the DeleteScheduledAudits field's value. +func (s *DeleteAccountAuditConfigurationInput) SetDeleteScheduledAudits(v bool) *DeleteAccountAuditConfigurationInput { + s.DeleteScheduledAudits = &v + return s +} + +type DeleteAccountAuditConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteAccountAuditConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAccountAuditConfigurationOutput) GoString() string { + return s.String() +} + +type DeleteAuthorizerInput struct { + _ struct{} `type:"structure"` + + // The name of the authorizer to delete. + // + // AuthorizerName is a required field + AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteAuthorizerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAuthorizerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteAuthorizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteAuthorizerInput"} + if s.AuthorizerName == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) + } + if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *DeleteAuthorizerInput) SetAuthorizerName(v string) *DeleteAuthorizerInput { + s.AuthorizerName = &v + return s +} + +type DeleteAuthorizerOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteAuthorizerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAuthorizerOutput) GoString() string { + return s.String() +} + +type DeleteBillingGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the billing group. + // + // BillingGroupName is a required field + BillingGroupName *string `location:"uri" locationName:"billingGroupName" min:"1" type:"string" required:"true"` + + // The expected version of the billing group. If the version of the billing + // group does not match the expected version specified in the request, the DeleteBillingGroup + // request is rejected with a VersionConflictException. 
+ ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` +} + +// String returns the string representation +func (s DeleteBillingGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBillingGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBillingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBillingGroupInput"} + if s.BillingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("BillingGroupName")) + } + if s.BillingGroupName != nil && len(*s.BillingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BillingGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *DeleteBillingGroupInput) SetBillingGroupName(v string) *DeleteBillingGroupInput { + s.BillingGroupName = &v + return s +} + +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *DeleteBillingGroupInput) SetExpectedVersion(v int64) *DeleteBillingGroupInput { + s.ExpectedVersion = &v + return s +} + +type DeleteBillingGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBillingGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBillingGroupOutput) GoString() string { + return s.String() +} + +// Input for the DeleteCACertificate operation. +type DeleteCACertificateInput struct { + _ struct{} `type:"structure"` + + // The ID of the certificate to delete. (The last part of the certificate ARN + // contains the certificate ID.) + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"caCertificateId" min:"64" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteCACertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCACertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteCACertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteCACertificateInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateId sets the CertificateId field's value. +func (s *DeleteCACertificateInput) SetCertificateId(v string) *DeleteCACertificateInput { + s.CertificateId = &v + return s +} + +// The output for the DeleteCACertificate operation. +type DeleteCACertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteCACertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCACertificateOutput) GoString() string { + return s.String() +} + +// The input for the DeleteCertificate operation. +type DeleteCertificateInput struct { + _ struct{} `type:"structure"` + + // The ID of the certificate. 
(The last part of the certificate ARN contains + // the certificate ID.) + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` + + // Forces a certificate request to be deleted. + ForceDelete *bool `location:"querystring" locationName:"forceDelete" type:"boolean"` +} + +// String returns the string representation +func (s DeleteCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteCertificateInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateId sets the CertificateId field's value. +func (s *DeleteCertificateInput) SetCertificateId(v string) *DeleteCertificateInput { + s.CertificateId = &v + return s +} + +// SetForceDelete sets the ForceDelete field's value. +func (s *DeleteCertificateInput) SetForceDelete(v bool) *DeleteCertificateInput { + s.ForceDelete = &v + return s +} + +type DeleteCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCertificateOutput) GoString() string { + return s.String() +} + +type DeleteDynamicThingGroupInput struct { + _ struct{} `type:"structure"` + + // The expected version of the dynamic thing group to delete. + ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` + + // The name of the dynamic thing group to delete. + // + // ThingGroupName is a required field + ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDynamicThingGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDynamicThingGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDynamicThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDynamicThingGroupInput"} + if s.ThingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) + } + if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *DeleteDynamicThingGroupInput) SetExpectedVersion(v int64) *DeleteDynamicThingGroupInput { + s.ExpectedVersion = &v + return s +} + +// SetThingGroupName sets the ThingGroupName field's value. 
+func (s *DeleteDynamicThingGroupInput) SetThingGroupName(v string) *DeleteDynamicThingGroupInput { + s.ThingGroupName = &v + return s +} + +type DeleteDynamicThingGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDynamicThingGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDynamicThingGroupOutput) GoString() string { + return s.String() +} + +type DeleteJobExecutionInput struct { + _ struct{} `type:"structure"` + + // The ID of the job execution to be deleted. The executionNumber refers to + // the execution of a particular job on a particular device. + // + // Note that once a job execution is deleted, the executionNumber may be reused + // by IoT, so be sure you get and use the correct value here. + // + // ExecutionNumber is a required field + ExecutionNumber *int64 `location:"uri" locationName:"executionNumber" type:"long" required:"true"` + + // (Optional) When true, you can delete a job execution which is "IN_PROGRESS". + // Otherwise, you can only delete a job execution which is in a terminal state + // ("SUCCEEDED", "FAILED", "REJECTED", "REMOVED" or "CANCELED") or an exception + // will occur. The default is false. + // + // Deleting a job execution which is "IN_PROGRESS", will cause the device to + // be unable to access job information or update the job execution status. Use + // caution and ensure that the device is able to recover to a valid state. + Force *bool `location:"querystring" locationName:"force" type:"boolean"` + + // The ID of the job whose execution on a particular device will be deleted. + // + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + + // The name of the thing whose job execution will be deleted. + // + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteJobExecutionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteJobExecutionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteJobExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteJobExecutionInput"} + if s.ExecutionNumber == nil { + invalidParams.Add(request.NewErrParamRequired("ExecutionNumber")) + } + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) + } + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExecutionNumber sets the ExecutionNumber field's value. +func (s *DeleteJobExecutionInput) SetExecutionNumber(v int64) *DeleteJobExecutionInput { + s.ExecutionNumber = &v + return s +} + +// SetForce sets the Force field's value. +func (s *DeleteJobExecutionInput) SetForce(v bool) *DeleteJobExecutionInput { + s.Force = &v + return s +} + +// SetJobId sets the JobId field's value. 
+func (s *DeleteJobExecutionInput) SetJobId(v string) *DeleteJobExecutionInput { + s.JobId = &v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *DeleteJobExecutionInput) SetThingName(v string) *DeleteJobExecutionInput { + s.ThingName = &v + return s +} + +type DeleteJobExecutionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteJobExecutionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteJobExecutionOutput) GoString() string { + return s.String() +} + +type DeleteJobInput struct { + _ struct{} `type:"structure"` + + // (Optional) When true, you can delete a job which is "IN_PROGRESS". Otherwise, + // you can only delete a job which is in a terminal state ("COMPLETED" or "CANCELED") + // or an exception will occur. The default is false. + // + // Deleting a job which is "IN_PROGRESS", will cause a device which is executing + // the job to be unable to access job information or update the job execution + // status. Use caution and ensure that each device executing a job which is + // deleted is able to recover to a valid state. + Force *bool `location:"querystring" locationName:"force" type:"boolean"` + + // The ID of the job to be deleted. + // + // After a job deletion is completed, you may reuse this jobId when you create + // a new job. However, this is not recommended, and you must ensure that your + // devices are not using the jobId to refer to the deleted job. + // + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteJobInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetForce sets the Force field's value. +func (s *DeleteJobInput) SetForce(v bool) *DeleteJobInput { + s.Force = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *DeleteJobInput) SetJobId(v string) *DeleteJobInput { + s.JobId = &v + return s +} + +type DeleteJobOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteJobOutput) GoString() string { + return s.String() +} + +type DeleteOTAUpdateInput struct { + _ struct{} `type:"structure"` + + // Specifies if the stream associated with an OTA update should be deleted when + // the OTA update is deleted. + DeleteStream *bool `location:"querystring" locationName:"deleteStream" type:"boolean"` + + // Specifies if the AWS Job associated with the OTA update should be deleted + // with the OTA update is deleted. + ForceDeleteAWSJob *bool `location:"querystring" locationName:"forceDeleteAWSJob" type:"boolean"` + + // The OTA update ID to delete. 
+ // + // OtaUpdateId is a required field + OtaUpdateId *string `location:"uri" locationName:"otaUpdateId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteOTAUpdateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteOTAUpdateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteOTAUpdateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteOTAUpdateInput"} + if s.OtaUpdateId == nil { + invalidParams.Add(request.NewErrParamRequired("OtaUpdateId")) + } + if s.OtaUpdateId != nil && len(*s.OtaUpdateId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OtaUpdateId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeleteStream sets the DeleteStream field's value. +func (s *DeleteOTAUpdateInput) SetDeleteStream(v bool) *DeleteOTAUpdateInput { + s.DeleteStream = &v + return s +} + +// SetForceDeleteAWSJob sets the ForceDeleteAWSJob field's value. +func (s *DeleteOTAUpdateInput) SetForceDeleteAWSJob(v bool) *DeleteOTAUpdateInput { + s.ForceDeleteAWSJob = &v + return s +} + +// SetOtaUpdateId sets the OtaUpdateId field's value. +func (s *DeleteOTAUpdateInput) SetOtaUpdateId(v string) *DeleteOTAUpdateInput { + s.OtaUpdateId = &v + return s +} + +type DeleteOTAUpdateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteOTAUpdateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteOTAUpdateOutput) GoString() string { + return s.String() +} + +// The input for the DeletePolicy operation. +type DeletePolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the policy to delete. + // + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeletePolicyInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AcceptCertificateTransferInput) GoString() string { +func (s DeletePolicyInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AcceptCertificateTransferInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AcceptCertificateTransferInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) +func (s *DeletePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) } if invalidParams.Len() > 0 { @@ -11293,155 +20860,208 @@ func (s *AcceptCertificateTransferInput) Validate() error { return nil } -// SetCertificateId sets the CertificateId field's value. -func (s *AcceptCertificateTransferInput) SetCertificateId(v string) *AcceptCertificateTransferInput { - s.CertificateId = &v +// SetPolicyName sets the PolicyName field's value. 
+func (s *DeletePolicyInput) SetPolicyName(v string) *DeletePolicyInput { + s.PolicyName = &v return s } -// SetSetAsActive sets the SetAsActive field's value. -func (s *AcceptCertificateTransferInput) SetSetAsActive(v bool) *AcceptCertificateTransferInput { - s.SetAsActive = &v - return s +type DeletePolicyOutput struct { + _ struct{} `type:"structure"` } -type AcceptCertificateTransferOutput struct { +// String returns the string representation +func (s DeletePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePolicyOutput) GoString() string { + return s.String() +} + +// The input for the DeletePolicyVersion operation. +type DeletePolicyVersionInput struct { _ struct{} `type:"structure"` + + // The name of the policy. + // + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + + // The policy version ID. + // + // PolicyVersionId is a required field + PolicyVersionId *string `location:"uri" locationName:"policyVersionId" type:"string" required:"true"` } // String returns the string representation -func (s AcceptCertificateTransferOutput) String() string { +func (s DeletePolicyVersionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AcceptCertificateTransferOutput) GoString() string { +func (s DeletePolicyVersionInput) GoString() string { return s.String() } -// Describes the actions associated with a rule. -type Action struct { - _ struct{} `type:"structure"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeletePolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePolicyVersionInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.PolicyVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyVersionId")) + } - // Change the state of a CloudWatch alarm. - CloudwatchAlarm *CloudwatchAlarmAction `locationName:"cloudwatchAlarm" type:"structure"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // Capture a CloudWatch metric. - CloudwatchMetric *CloudwatchMetricAction `locationName:"cloudwatchMetric" type:"structure"` +// SetPolicyName sets the PolicyName field's value. +func (s *DeletePolicyVersionInput) SetPolicyName(v string) *DeletePolicyVersionInput { + s.PolicyName = &v + return s +} - // Write to a DynamoDB table. - DynamoDB *DynamoDBAction `locationName:"dynamoDB" type:"structure"` +// SetPolicyVersionId sets the PolicyVersionId field's value. +func (s *DeletePolicyVersionInput) SetPolicyVersionId(v string) *DeletePolicyVersionInput { + s.PolicyVersionId = &v + return s +} - // Write to a DynamoDB table. This is a new version of the DynamoDB action. - // It allows you to write each attribute in an MQTT message payload into a separate - // DynamoDB column. - DynamoDBv2 *DynamoDBv2Action `locationName:"dynamoDBv2" type:"structure"` +type DeletePolicyVersionOutput struct { + _ struct{} `type:"structure"` +} - // Write data to an Amazon Elasticsearch Service domain. 
- Elasticsearch *ElasticsearchAction `locationName:"elasticsearch" type:"structure"` +// String returns the string representation +func (s DeletePolicyVersionOutput) String() string { + return awsutil.Prettify(s) +} - // Write to an Amazon Kinesis Firehose stream. - Firehose *FirehoseAction `locationName:"firehose" type:"structure"` +// GoString returns the string representation +func (s DeletePolicyVersionOutput) GoString() string { + return s.String() +} - // Write data to an Amazon Kinesis stream. - Kinesis *KinesisAction `locationName:"kinesis" type:"structure"` +// The input for the DeleteRegistrationCode operation. +type DeleteRegistrationCodeInput struct { + _ struct{} `type:"structure"` +} - // Invoke a Lambda function. - Lambda *LambdaAction `locationName:"lambda" type:"structure"` +// String returns the string representation +func (s DeleteRegistrationCodeInput) String() string { + return awsutil.Prettify(s) +} - // Publish to another MQTT topic. - Republish *RepublishAction `locationName:"republish" type:"structure"` +// GoString returns the string representation +func (s DeleteRegistrationCodeInput) GoString() string { + return s.String() +} - // Write to an Amazon S3 bucket. - S3 *S3Action `locationName:"s3" type:"structure"` +// The output for the DeleteRegistrationCode operation. +type DeleteRegistrationCodeOutput struct { + _ struct{} `type:"structure"` +} - // Send a message to a Salesforce IoT Cloud Input Stream. - Salesforce *SalesforceAction `locationName:"salesforce" type:"structure"` +// String returns the string representation +func (s DeleteRegistrationCodeOutput) String() string { + return awsutil.Prettify(s) +} - // Publish to an Amazon SNS topic. - Sns *SnsAction `locationName:"sns" type:"structure"` +// GoString returns the string representation +func (s DeleteRegistrationCodeOutput) GoString() string { + return s.String() +} - // Publish to an Amazon SQS queue. - Sqs *SqsAction `locationName:"sqs" type:"structure"` +type DeleteRoleAliasInput struct { + _ struct{} `type:"structure"` + + // The role alias to delete. + // + // RoleAlias is a required field + RoleAlias *string `location:"uri" locationName:"roleAlias" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s Action) String() string { +func (s DeleteRoleAliasInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Action) GoString() string { +func (s DeleteRoleAliasInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *Action) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Action"} - if s.CloudwatchAlarm != nil { - if err := s.CloudwatchAlarm.Validate(); err != nil { - invalidParams.AddNested("CloudwatchAlarm", err.(request.ErrInvalidParams)) - } - } - if s.CloudwatchMetric != nil { - if err := s.CloudwatchMetric.Validate(); err != nil { - invalidParams.AddNested("CloudwatchMetric", err.(request.ErrInvalidParams)) - } - } - if s.DynamoDB != nil { - if err := s.DynamoDB.Validate(); err != nil { - invalidParams.AddNested("DynamoDB", err.(request.ErrInvalidParams)) - } - } - if s.DynamoDBv2 != nil { - if err := s.DynamoDBv2.Validate(); err != nil { - invalidParams.AddNested("DynamoDBv2", err.(request.ErrInvalidParams)) - } - } - if s.Elasticsearch != nil { - if err := s.Elasticsearch.Validate(); err != nil { - invalidParams.AddNested("Elasticsearch", err.(request.ErrInvalidParams)) - } - } - if s.Firehose != nil { - if err := s.Firehose.Validate(); err != nil { - invalidParams.AddNested("Firehose", err.(request.ErrInvalidParams)) - } - } - if s.Kinesis != nil { - if err := s.Kinesis.Validate(); err != nil { - invalidParams.AddNested("Kinesis", err.(request.ErrInvalidParams)) - } - } - if s.Lambda != nil { - if err := s.Lambda.Validate(); err != nil { - invalidParams.AddNested("Lambda", err.(request.ErrInvalidParams)) - } - } - if s.Republish != nil { - if err := s.Republish.Validate(); err != nil { - invalidParams.AddNested("Republish", err.(request.ErrInvalidParams)) - } +func (s *DeleteRoleAliasInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRoleAliasInput"} + if s.RoleAlias == nil { + invalidParams.Add(request.NewErrParamRequired("RoleAlias")) } - if s.S3 != nil { - if err := s.S3.Validate(); err != nil { - invalidParams.AddNested("S3", err.(request.ErrInvalidParams)) - } + if s.RoleAlias != nil && len(*s.RoleAlias) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleAlias", 1)) } - if s.Salesforce != nil { - if err := s.Salesforce.Validate(); err != nil { - invalidParams.AddNested("Salesforce", err.(request.ErrInvalidParams)) - } + + if invalidParams.Len() > 0 { + return invalidParams } - if s.Sns != nil { - if err := s.Sns.Validate(); err != nil { - invalidParams.AddNested("Sns", err.(request.ErrInvalidParams)) - } + return nil +} + +// SetRoleAlias sets the RoleAlias field's value. +func (s *DeleteRoleAliasInput) SetRoleAlias(v string) *DeleteRoleAliasInput { + s.RoleAlias = &v + return s +} + +type DeleteRoleAliasOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteRoleAliasOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRoleAliasOutput) GoString() string { + return s.String() +} + +type DeleteScheduledAuditInput struct { + _ struct{} `type:"structure"` + + // The name of the scheduled audit you want to delete. + // + // ScheduledAuditName is a required field + ScheduledAuditName *string `location:"uri" locationName:"scheduledAuditName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteScheduledAuditInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteScheduledAuditInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteScheduledAuditInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteScheduledAuditInput"} + if s.ScheduledAuditName == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduledAuditName")) } - if s.Sqs != nil { - if err := s.Sqs.Validate(); err != nil { - invalidParams.AddNested("Sqs", err.(request.ErrInvalidParams)) - } + if s.ScheduledAuditName != nil && len(*s.ScheduledAuditName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ScheduledAuditName", 1)) } if invalidParams.Len() > 0 { @@ -11450,119 +21070,178 @@ func (s *Action) Validate() error { return nil } -// SetCloudwatchAlarm sets the CloudwatchAlarm field's value. -func (s *Action) SetCloudwatchAlarm(v *CloudwatchAlarmAction) *Action { - s.CloudwatchAlarm = v +// SetScheduledAuditName sets the ScheduledAuditName field's value. +func (s *DeleteScheduledAuditInput) SetScheduledAuditName(v string) *DeleteScheduledAuditInput { + s.ScheduledAuditName = &v return s } -// SetCloudwatchMetric sets the CloudwatchMetric field's value. -func (s *Action) SetCloudwatchMetric(v *CloudwatchMetricAction) *Action { - s.CloudwatchMetric = v - return s +type DeleteScheduledAuditOutput struct { + _ struct{} `type:"structure"` } -// SetDynamoDB sets the DynamoDB field's value. -func (s *Action) SetDynamoDB(v *DynamoDBAction) *Action { - s.DynamoDB = v - return s +// String returns the string representation +func (s DeleteScheduledAuditOutput) String() string { + return awsutil.Prettify(s) } -// SetDynamoDBv2 sets the DynamoDBv2 field's value. -func (s *Action) SetDynamoDBv2(v *DynamoDBv2Action) *Action { - s.DynamoDBv2 = v - return s +// GoString returns the string representation +func (s DeleteScheduledAuditOutput) GoString() string { + return s.String() } -// SetElasticsearch sets the Elasticsearch field's value. -func (s *Action) SetElasticsearch(v *ElasticsearchAction) *Action { - s.Elasticsearch = v - return s +type DeleteSecurityProfileInput struct { + _ struct{} `type:"structure"` + + // The expected version of the security profile. A new version is generated + // whenever the security profile is updated. If you specify a value that is + // different than the actual version, a VersionConflictException is thrown. + ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` + + // The name of the security profile to be deleted. + // + // SecurityProfileName is a required field + SecurityProfileName *string `location:"uri" locationName:"securityProfileName" min:"1" type:"string" required:"true"` } -// SetFirehose sets the Firehose field's value. -func (s *Action) SetFirehose(v *FirehoseAction) *Action { - s.Firehose = v - return s +// String returns the string representation +func (s DeleteSecurityProfileInput) String() string { + return awsutil.Prettify(s) } -// SetKinesis sets the Kinesis field's value. -func (s *Action) SetKinesis(v *KinesisAction) *Action { - s.Kinesis = v - return s +// GoString returns the string representation +func (s DeleteSecurityProfileInput) GoString() string { + return s.String() } -// SetLambda sets the Lambda field's value. -func (s *Action) SetLambda(v *LambdaAction) *Action { - s.Lambda = v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteSecurityProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSecurityProfileInput"} + if s.SecurityProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileName")) + } + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetRepublish sets the Republish field's value. -func (s *Action) SetRepublish(v *RepublishAction) *Action { - s.Republish = v +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *DeleteSecurityProfileInput) SetExpectedVersion(v int64) *DeleteSecurityProfileInput { + s.ExpectedVersion = &v return s } -// SetS3 sets the S3 field's value. -func (s *Action) SetS3(v *S3Action) *Action { - s.S3 = v +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *DeleteSecurityProfileInput) SetSecurityProfileName(v string) *DeleteSecurityProfileInput { + s.SecurityProfileName = &v return s } -// SetSalesforce sets the Salesforce field's value. -func (s *Action) SetSalesforce(v *SalesforceAction) *Action { - s.Salesforce = v - return s +type DeleteSecurityProfileOutput struct { + _ struct{} `type:"structure"` } -// SetSns sets the Sns field's value. -func (s *Action) SetSns(v *SnsAction) *Action { - s.Sns = v - return s +// String returns the string representation +func (s DeleteSecurityProfileOutput) String() string { + return awsutil.Prettify(s) } -// SetSqs sets the Sqs field's value. -func (s *Action) SetSqs(v *SqsAction) *Action { - s.Sqs = v +// GoString returns the string representation +func (s DeleteSecurityProfileOutput) GoString() string { + return s.String() +} + +type DeleteStreamInput struct { + _ struct{} `type:"structure"` + + // The stream ID. + // + // StreamId is a required field + StreamId *string `location:"uri" locationName:"streamId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteStreamInput"} + if s.StreamId == nil { + invalidParams.Add(request.NewErrParamRequired("StreamId")) + } + if s.StreamId != nil && len(*s.StreamId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStreamId sets the StreamId field's value. +func (s *DeleteStreamInput) SetStreamId(v string) *DeleteStreamInput { + s.StreamId = &v return s } -type AddThingToThingGroupInput struct { +type DeleteStreamOutput struct { _ struct{} `type:"structure"` +} - // The ARN of the thing to add to a group. - ThingArn *string `locationName:"thingArn" type:"string"` +// String returns the string representation +func (s DeleteStreamOutput) String() string { + return awsutil.Prettify(s) +} - // The ARN of the group to which you are adding a thing. - ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` +// GoString returns the string representation +func (s DeleteStreamOutput) GoString() string { + return s.String() +} - // The name of the group to which you are adding a thing. 
- ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` +type DeleteThingGroupInput struct { + _ struct{} `type:"structure"` - // The name of the thing to add to a group. - ThingName *string `locationName:"thingName" min:"1" type:"string"` + // The expected version of the thing group to delete. + ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` + + // The name of the thing group to delete. + // + // ThingGroupName is a required field + ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s AddThingToThingGroupInput) String() string { +func (s DeleteThingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AddThingToThingGroupInput) GoString() string { +func (s DeleteThingGroupInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AddThingToThingGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AddThingToThingGroupInput"} +func (s *DeleteThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteThingGroupInput"} + if s.ThingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) + } if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) - } if invalidParams.Len() > 0 { return invalidParams @@ -11570,110 +21249,128 @@ func (s *AddThingToThingGroupInput) Validate() error { return nil } -// SetThingArn sets the ThingArn field's value. -func (s *AddThingToThingGroupInput) SetThingArn(v string) *AddThingToThingGroupInput { - s.ThingArn = &v - return s -} - -// SetThingGroupArn sets the ThingGroupArn field's value. -func (s *AddThingToThingGroupInput) SetThingGroupArn(v string) *AddThingToThingGroupInput { - s.ThingGroupArn = &v +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *DeleteThingGroupInput) SetExpectedVersion(v int64) *DeleteThingGroupInput { + s.ExpectedVersion = &v return s } // SetThingGroupName sets the ThingGroupName field's value. -func (s *AddThingToThingGroupInput) SetThingGroupName(v string) *AddThingToThingGroupInput { +func (s *DeleteThingGroupInput) SetThingGroupName(v string) *DeleteThingGroupInput { s.ThingGroupName = &v return s } -// SetThingName sets the ThingName field's value. -func (s *AddThingToThingGroupInput) SetThingName(v string) *AddThingToThingGroupInput { - s.ThingName = &v - return s +type DeleteThingGroupOutput struct { + _ struct{} `type:"structure"` } -type AddThingToThingGroupOutput struct { +// String returns the string representation +func (s DeleteThingGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteThingGroupOutput) GoString() string { + return s.String() +} + +// The input for the DeleteThing operation. +type DeleteThingInput struct { _ struct{} `type:"structure"` + + // The expected version of the thing record in the registry. If the version + // of the record in the registry does not match the expected version specified + // in the request, the DeleteThing request is rejected with a VersionConflictException. 
+ ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` + + // The name of the thing to delete. + // + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteThingInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteThingInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteThingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteThingInput"} + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// String returns the string representation -func (s AddThingToThingGroupOutput) String() string { - return awsutil.Prettify(s) +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *DeleteThingInput) SetExpectedVersion(v int64) *DeleteThingInput { + s.ExpectedVersion = &v + return s } -// GoString returns the string representation -func (s AddThingToThingGroupOutput) GoString() string { - return s.String() +// SetThingName sets the ThingName field's value. +func (s *DeleteThingInput) SetThingName(v string) *DeleteThingInput { + s.ThingName = &v + return s } -// Contains information that allowed the authorization. -type Allowed struct { +// The output of the DeleteThing operation. +type DeleteThingOutput struct { _ struct{} `type:"structure"` - - // A list of policies that allowed the authentication. - Policies []*Policy `locationName:"policies" type:"list"` } // String returns the string representation -func (s Allowed) String() string { +func (s DeleteThingOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Allowed) GoString() string { +func (s DeleteThingOutput) GoString() string { return s.String() } -// SetPolicies sets the Policies field's value. -func (s *Allowed) SetPolicies(v []*Policy) *Allowed { - s.Policies = v - return s -} - -type AssociateTargetsWithJobInput struct { +// The input for the DeleteThingType operation. +type DeleteThingTypeInput struct { _ struct{} `type:"structure"` - // An optional comment string describing why the job was associated with the - // targets. - Comment *string `locationName:"comment" type:"string"` - - // The unique identifier you assigned to this job when it was created. - // - // JobId is a required field - JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` - - // A list of thing group ARNs that define the targets of the job. + // The name of the thing type. 
// - // Targets is a required field - Targets []*string `locationName:"targets" min:"1" type:"list" required:"true"` + // ThingTypeName is a required field + ThingTypeName *string `location:"uri" locationName:"thingTypeName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s AssociateTargetsWithJobInput) String() string { +func (s DeleteThingTypeInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AssociateTargetsWithJobInput) GoString() string { +func (s DeleteThingTypeInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AssociateTargetsWithJobInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AssociateTargetsWithJobInput"} - if s.JobId == nil { - invalidParams.Add(request.NewErrParamRequired("JobId")) - } - if s.JobId != nil && len(*s.JobId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) - } - if s.Targets == nil { - invalidParams.Add(request.NewErrParamRequired("Targets")) +func (s *DeleteThingTypeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteThingTypeInput"} + if s.ThingTypeName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingTypeName")) } - if s.Targets != nil && len(s.Targets) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Targets", 1)) + if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) } if invalidParams.Len() > 0 { @@ -11682,100 +21379,55 @@ func (s *AssociateTargetsWithJobInput) Validate() error { return nil } -// SetComment sets the Comment field's value. -func (s *AssociateTargetsWithJobInput) SetComment(v string) *AssociateTargetsWithJobInput { - s.Comment = &v - return s -} - -// SetJobId sets the JobId field's value. -func (s *AssociateTargetsWithJobInput) SetJobId(v string) *AssociateTargetsWithJobInput { - s.JobId = &v - return s -} - -// SetTargets sets the Targets field's value. -func (s *AssociateTargetsWithJobInput) SetTargets(v []*string) *AssociateTargetsWithJobInput { - s.Targets = v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *DeleteThingTypeInput) SetThingTypeName(v string) *DeleteThingTypeInput { + s.ThingTypeName = &v return s } -type AssociateTargetsWithJobOutput struct { +// The output for the DeleteThingType operation. +type DeleteThingTypeOutput struct { _ struct{} `type:"structure"` - - // A short text description of the job. - Description *string `locationName:"description" type:"string"` - - // An ARN identifying the job. - JobArn *string `locationName:"jobArn" type:"string"` - - // The unique identifier you assigned to this job when it was created. - JobId *string `locationName:"jobId" min:"1" type:"string"` } // String returns the string representation -func (s AssociateTargetsWithJobOutput) String() string { +func (s DeleteThingTypeOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AssociateTargetsWithJobOutput) GoString() string { +func (s DeleteThingTypeOutput) GoString() string { return s.String() } -// SetDescription sets the Description field's value. -func (s *AssociateTargetsWithJobOutput) SetDescription(v string) *AssociateTargetsWithJobOutput { - s.Description = &v - return s -} - -// SetJobArn sets the JobArn field's value. 
-func (s *AssociateTargetsWithJobOutput) SetJobArn(v string) *AssociateTargetsWithJobOutput { - s.JobArn = &v - return s -} - -// SetJobId sets the JobId field's value. -func (s *AssociateTargetsWithJobOutput) SetJobId(v string) *AssociateTargetsWithJobOutput { - s.JobId = &v - return s -} - -type AttachPolicyInput struct { +// The input for the DeleteTopicRule operation. +type DeleteTopicRuleInput struct { _ struct{} `type:"structure"` - // The name of the policy to attach. - // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` - - // The identity to which the policy is attached. + // The name of the rule. // - // Target is a required field - Target *string `locationName:"target" type:"string" required:"true"` + // RuleName is a required field + RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s AttachPolicyInput) String() string { +func (s DeleteTopicRuleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachPolicyInput) GoString() string { +func (s DeleteTopicRuleInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AttachPolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AttachPolicyInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) +func (s *DeleteTopicRuleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteTopicRuleInput"} + if s.RuleName == nil { + invalidParams.Add(request.NewErrParamRequired("RuleName")) } - if s.Target == nil { - invalidParams.Add(request.NewErrParamRequired("Target")) + if s.RuleName != nil && len(*s.RuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) } if invalidParams.Len() > 0 { @@ -11784,69 +21436,58 @@ func (s *AttachPolicyInput) Validate() error { return nil } -// SetPolicyName sets the PolicyName field's value. -func (s *AttachPolicyInput) SetPolicyName(v string) *AttachPolicyInput { - s.PolicyName = &v - return s -} - -// SetTarget sets the Target field's value. -func (s *AttachPolicyInput) SetTarget(v string) *AttachPolicyInput { - s.Target = &v +// SetRuleName sets the RuleName field's value. +func (s *DeleteTopicRuleInput) SetRuleName(v string) *DeleteTopicRuleInput { + s.RuleName = &v return s } -type AttachPolicyOutput struct { +type DeleteTopicRuleOutput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s AttachPolicyOutput) String() string { +func (s DeleteTopicRuleOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachPolicyOutput) GoString() string { +func (s DeleteTopicRuleOutput) GoString() string { return s.String() } -// The input for the AttachPrincipalPolicy operation. -type AttachPrincipalPolicyInput struct { +type DeleteV2LoggingLevelInput struct { _ struct{} `type:"structure"` - // The policy name. + // The name of the resource for which you are configuring logging. 
// - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // TargetName is a required field + TargetName *string `location:"querystring" locationName:"targetName" type:"string" required:"true"` - // The principal, which can be a certificate ARN (as returned from the CreateCertificate - // operation) or an Amazon Cognito ID. + // The type of resource for which you are configuring logging. Must be THING_Group. // - // Principal is a required field - Principal *string `location:"header" locationName:"x-amzn-iot-principal" type:"string" required:"true"` + // TargetType is a required field + TargetType *string `location:"querystring" locationName:"targetType" type:"string" required:"true" enum:"LogTargetType"` } // String returns the string representation -func (s AttachPrincipalPolicyInput) String() string { +func (s DeleteV2LoggingLevelInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachPrincipalPolicyInput) GoString() string { +func (s DeleteV2LoggingLevelInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AttachPrincipalPolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AttachPrincipalPolicyInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) +func (s *DeleteV2LoggingLevelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteV2LoggingLevelInput"} + if s.TargetName == nil { + invalidParams.Add(request.NewErrParamRequired("TargetName")) } - if s.Principal == nil { - invalidParams.Add(request.NewErrParamRequired("Principal")) + if s.TargetType == nil { + invalidParams.Add(request.NewErrParamRequired("TargetType")) } if invalidParams.Len() > 0 { @@ -11855,68 +21496,99 @@ func (s *AttachPrincipalPolicyInput) Validate() error { return nil } -// SetPolicyName sets the PolicyName field's value. -func (s *AttachPrincipalPolicyInput) SetPolicyName(v string) *AttachPrincipalPolicyInput { - s.PolicyName = &v +// SetTargetName sets the TargetName field's value. +func (s *DeleteV2LoggingLevelInput) SetTargetName(v string) *DeleteV2LoggingLevelInput { + s.TargetName = &v return s } -// SetPrincipal sets the Principal field's value. -func (s *AttachPrincipalPolicyInput) SetPrincipal(v string) *AttachPrincipalPolicyInput { - s.Principal = &v +// SetTargetType sets the TargetType field's value. +func (s *DeleteV2LoggingLevelInput) SetTargetType(v string) *DeleteV2LoggingLevelInput { + s.TargetType = &v return s } -type AttachPrincipalPolicyOutput struct { +type DeleteV2LoggingLevelOutput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s AttachPrincipalPolicyOutput) String() string { +func (s DeleteV2LoggingLevelOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachPrincipalPolicyOutput) GoString() string { +func (s DeleteV2LoggingLevelOutput) GoString() string { + return s.String() +} + +// Contains information that denied the authorization. +type Denied struct { + _ struct{} `type:"structure"` + + // Information that explicitly denies the authorization. 
+ ExplicitDeny *ExplicitDeny `locationName:"explicitDeny" type:"structure"` + + // Information that implicitly denies the authorization. When a policy doesn't + // explicitly deny or allow an action on a resource it is considered an implicit + // deny. + ImplicitDeny *ImplicitDeny `locationName:"implicitDeny" type:"structure"` +} + +// String returns the string representation +func (s Denied) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Denied) GoString() string { return s.String() } -// The input for the AttachThingPrincipal operation. -type AttachThingPrincipalInput struct { +// SetExplicitDeny sets the ExplicitDeny field's value. +func (s *Denied) SetExplicitDeny(v *ExplicitDeny) *Denied { + s.ExplicitDeny = v + return s +} + +// SetImplicitDeny sets the ImplicitDeny field's value. +func (s *Denied) SetImplicitDeny(v *ImplicitDeny) *Denied { + s.ImplicitDeny = v + return s +} + +// The input for the DeprecateThingType operation. +type DeprecateThingTypeInput struct { _ struct{} `type:"structure"` - // The principal, such as a certificate or other credential. + // The name of the thing type to deprecate. // - // Principal is a required field - Principal *string `location:"header" locationName:"x-amzn-principal" type:"string" required:"true"` + // ThingTypeName is a required field + ThingTypeName *string `location:"uri" locationName:"thingTypeName" min:"1" type:"string" required:"true"` - // The name of the thing. - // - // ThingName is a required field - ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + // Whether to undeprecate a deprecated thing type. If true, the thing type will + // not be deprecated anymore and you can associate it with things. + UndoDeprecate *bool `locationName:"undoDeprecate" type:"boolean"` } // String returns the string representation -func (s AttachThingPrincipalInput) String() string { +func (s DeprecateThingTypeInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachThingPrincipalInput) GoString() string { +func (s DeprecateThingTypeInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AttachThingPrincipalInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AttachThingPrincipalInput"} - if s.Principal == nil { - invalidParams.Add(request.NewErrParamRequired("Principal")) - } - if s.ThingName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingName")) +func (s *DeprecateThingTypeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeprecateThingTypeInput"} + if s.ThingTypeName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingTypeName")) } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) } if invalidParams.Len() > 0 { @@ -11925,470 +21597,475 @@ func (s *AttachThingPrincipalInput) Validate() error { return nil } -// SetPrincipal sets the Principal field's value. -func (s *AttachThingPrincipalInput) SetPrincipal(v string) *AttachThingPrincipalInput { - s.Principal = &v +// SetThingTypeName sets the ThingTypeName field's value. 
+func (s *DeprecateThingTypeInput) SetThingTypeName(v string) *DeprecateThingTypeInput { + s.ThingTypeName = &v return s } -// SetThingName sets the ThingName field's value. -func (s *AttachThingPrincipalInput) SetThingName(v string) *AttachThingPrincipalInput { - s.ThingName = &v +// SetUndoDeprecate sets the UndoDeprecate field's value. +func (s *DeprecateThingTypeInput) SetUndoDeprecate(v bool) *DeprecateThingTypeInput { + s.UndoDeprecate = &v return s } -// The output from the AttachThingPrincipal operation. -type AttachThingPrincipalOutput struct { +// The output for the DeprecateThingType operation. +type DeprecateThingTypeOutput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s AttachThingPrincipalOutput) String() string { +func (s DeprecateThingTypeOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachThingPrincipalOutput) GoString() string { +func (s DeprecateThingTypeOutput) GoString() string { return s.String() } -// The attribute payload. -type AttributePayload struct { +type DescribeAccountAuditConfigurationInput struct { _ struct{} `type:"structure"` +} - // A JSON string containing up to three key-value pair in JSON format. For example: - // - // {\"attributes\":{\"string1\":\"string2\"}} - Attributes map[string]*string `locationName:"attributes" type:"map"` +// String returns the string representation +func (s DescribeAccountAuditConfigurationInput) String() string { + return awsutil.Prettify(s) +} - // Specifies whether the list of attributes provided in the AttributePayload - // is merged with the attributes stored in the registry, instead of overwriting - // them. - // - // To remove an attribute, call UpdateThing with an empty attribute value. +// GoString returns the string representation +func (s DescribeAccountAuditConfigurationInput) GoString() string { + return s.String() +} + +type DescribeAccountAuditConfigurationOutput struct { + _ struct{} `type:"structure"` + + // Which audit checks are enabled and disabled for this account. + AuditCheckConfigurations map[string]*AuditCheckConfiguration `locationName:"auditCheckConfigurations" type:"map"` + + // Information about the targets to which audit notifications are sent for this + // account. + AuditNotificationTargetConfigurations map[string]*AuditNotificationTarget `locationName:"auditNotificationTargetConfigurations" type:"map"` + + // The ARN of the role that grants permission to AWS IoT to access information + // about your devices, policies, certificates and other items as necessary when + // performing an audit. // - // The merge attribute is only valid when calling UpdateThing. - Merge *bool `locationName:"merge" type:"boolean"` + // On the first call to UpdateAccountAuditConfiguration this parameter is required. + RoleArn *string `locationName:"roleArn" min:"20" type:"string"` } // String returns the string representation -func (s AttributePayload) String() string { +func (s DescribeAccountAuditConfigurationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttributePayload) GoString() string { +func (s DescribeAccountAuditConfigurationOutput) GoString() string { return s.String() } -// SetAttributes sets the Attributes field's value. -func (s *AttributePayload) SetAttributes(v map[string]*string) *AttributePayload { - s.Attributes = v +// SetAuditCheckConfigurations sets the AuditCheckConfigurations field's value. 
+func (s *DescribeAccountAuditConfigurationOutput) SetAuditCheckConfigurations(v map[string]*AuditCheckConfiguration) *DescribeAccountAuditConfigurationOutput { + s.AuditCheckConfigurations = v return s } -// SetMerge sets the Merge field's value. -func (s *AttributePayload) SetMerge(v bool) *AttributePayload { - s.Merge = &v +// SetAuditNotificationTargetConfigurations sets the AuditNotificationTargetConfigurations field's value. +func (s *DescribeAccountAuditConfigurationOutput) SetAuditNotificationTargetConfigurations(v map[string]*AuditNotificationTarget) *DescribeAccountAuditConfigurationOutput { + s.AuditNotificationTargetConfigurations = v return s } -// A collection of authorization information. -type AuthInfo struct { - _ struct{} `type:"structure"` +// SetRoleArn sets the RoleArn field's value. +func (s *DescribeAccountAuditConfigurationOutput) SetRoleArn(v string) *DescribeAccountAuditConfigurationOutput { + s.RoleArn = &v + return s +} - // The type of action for which the principal is being authorized. - ActionType *string `locationName:"actionType" type:"string" enum:"ActionType"` +type DescribeAuditTaskInput struct { + _ struct{} `type:"structure"` - // The resources for which the principal is being authorized to perform the - // specified action. - Resources []*string `locationName:"resources" type:"list"` + // The ID of the audit whose information you want to get. + // + // TaskId is a required field + TaskId *string `location:"uri" locationName:"taskId" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s AuthInfo) String() string { +func (s DescribeAuditTaskInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AuthInfo) GoString() string { +func (s DescribeAuditTaskInput) GoString() string { return s.String() } -// SetActionType sets the ActionType field's value. -func (s *AuthInfo) SetActionType(v string) *AuthInfo { - s.ActionType = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeAuditTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAuditTaskInput"} + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) + } + if s.TaskId != nil && len(*s.TaskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetResources sets the Resources field's value. -func (s *AuthInfo) SetResources(v []*string) *AuthInfo { - s.Resources = v +// SetTaskId sets the TaskId field's value. +func (s *DescribeAuditTaskInput) SetTaskId(v string) *DescribeAuditTaskInput { + s.TaskId = &v return s } -// The authorizer result. -type AuthResult struct { +type DescribeAuditTaskOutput struct { _ struct{} `type:"structure"` - // The policies and statements that allowed the specified action. - Allowed *Allowed `locationName:"allowed" type:"structure"` + // Detailed information about each check performed during this audit. + AuditDetails map[string]*AuditCheckDetails `locationName:"auditDetails" type:"map"` - // The final authorization decision of this scenario. Multiple statements are - // taken into account when determining the authorization decision. An explicit - // deny statement can override multiple allow statements. - AuthDecision *string `locationName:"authDecision" type:"string" enum:"AuthDecision"` + // The name of the scheduled audit (only if the audit was a scheduled audit). 
+ ScheduledAuditName *string `locationName:"scheduledAuditName" min:"1" type:"string"` - // Authorization information. - AuthInfo *AuthInfo `locationName:"authInfo" type:"structure"` + // The time the audit started. + TaskStartTime *time.Time `locationName:"taskStartTime" type:"timestamp"` - // The policies and statements that denied the specified action. - Denied *Denied `locationName:"denied" type:"structure"` + // Statistical information about the audit. + TaskStatistics *TaskStatistics `locationName:"taskStatistics" type:"structure"` - // Contains any missing context values found while evaluating policy. - MissingContextValues []*string `locationName:"missingContextValues" type:"list"` + // The status of the audit: one of "IN_PROGRESS", "COMPLETED", "FAILED", or + // "CANCELED". + TaskStatus *string `locationName:"taskStatus" type:"string" enum:"AuditTaskStatus"` + + // The type of audit: "ON_DEMAND_AUDIT_TASK" or "SCHEDULED_AUDIT_TASK". + TaskType *string `locationName:"taskType" type:"string" enum:"AuditTaskType"` } // String returns the string representation -func (s AuthResult) String() string { +func (s DescribeAuditTaskOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AuthResult) GoString() string { +func (s DescribeAuditTaskOutput) GoString() string { return s.String() } -// SetAllowed sets the Allowed field's value. -func (s *AuthResult) SetAllowed(v *Allowed) *AuthResult { - s.Allowed = v +// SetAuditDetails sets the AuditDetails field's value. +func (s *DescribeAuditTaskOutput) SetAuditDetails(v map[string]*AuditCheckDetails) *DescribeAuditTaskOutput { + s.AuditDetails = v return s } -// SetAuthDecision sets the AuthDecision field's value. -func (s *AuthResult) SetAuthDecision(v string) *AuthResult { - s.AuthDecision = &v +// SetScheduledAuditName sets the ScheduledAuditName field's value. +func (s *DescribeAuditTaskOutput) SetScheduledAuditName(v string) *DescribeAuditTaskOutput { + s.ScheduledAuditName = &v return s } -// SetAuthInfo sets the AuthInfo field's value. -func (s *AuthResult) SetAuthInfo(v *AuthInfo) *AuthResult { - s.AuthInfo = v +// SetTaskStartTime sets the TaskStartTime field's value. +func (s *DescribeAuditTaskOutput) SetTaskStartTime(v time.Time) *DescribeAuditTaskOutput { + s.TaskStartTime = &v return s } -// SetDenied sets the Denied field's value. -func (s *AuthResult) SetDenied(v *Denied) *AuthResult { - s.Denied = v +// SetTaskStatistics sets the TaskStatistics field's value. +func (s *DescribeAuditTaskOutput) SetTaskStatistics(v *TaskStatistics) *DescribeAuditTaskOutput { + s.TaskStatistics = v return s } -// SetMissingContextValues sets the MissingContextValues field's value. -func (s *AuthResult) SetMissingContextValues(v []*string) *AuthResult { - s.MissingContextValues = v +// SetTaskStatus sets the TaskStatus field's value. +func (s *DescribeAuditTaskOutput) SetTaskStatus(v string) *DescribeAuditTaskOutput { + s.TaskStatus = &v return s } -// The authorizer description. -type AuthorizerDescription struct { - _ struct{} `type:"structure"` - - // The authorizer ARN. - AuthorizerArn *string `locationName:"authorizerArn" type:"string"` - - // The authorizer's Lambda function ARN. - AuthorizerFunctionArn *string `locationName:"authorizerFunctionArn" type:"string"` - - // The authorizer name. - AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` - - // The UNIX timestamp of when the authorizer was created. 
- CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - // The UNIX timestamp of when the authorizer was last updated. - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` - - // The status of the authorizer. - Status *string `locationName:"status" type:"string" enum:"AuthorizerStatus"` +// SetTaskType sets the TaskType field's value. +func (s *DescribeAuditTaskOutput) SetTaskType(v string) *DescribeAuditTaskOutput { + s.TaskType = &v + return s +} - // The key used to extract the token from the HTTP headers. - TokenKeyName *string `locationName:"tokenKeyName" min:"1" type:"string"` +type DescribeAuthorizerInput struct { + _ struct{} `type:"structure"` - // The public keys used to validate the token signature returned by your custom - // authentication service. - TokenSigningPublicKeys map[string]*string `locationName:"tokenSigningPublicKeys" type:"map"` + // The name of the authorizer to describe. + // + // AuthorizerName is a required field + AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s AuthorizerDescription) String() string { +func (s DescribeAuthorizerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AuthorizerDescription) GoString() string { +func (s DescribeAuthorizerInput) GoString() string { return s.String() } -// SetAuthorizerArn sets the AuthorizerArn field's value. -func (s *AuthorizerDescription) SetAuthorizerArn(v string) *AuthorizerDescription { - s.AuthorizerArn = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeAuthorizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAuthorizerInput"} + if s.AuthorizerName == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) + } + if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) + } -// SetAuthorizerFunctionArn sets the AuthorizerFunctionArn field's value. -func (s *AuthorizerDescription) SetAuthorizerFunctionArn(v string) *AuthorizerDescription { - s.AuthorizerFunctionArn = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } // SetAuthorizerName sets the AuthorizerName field's value. -func (s *AuthorizerDescription) SetAuthorizerName(v string) *AuthorizerDescription { +func (s *DescribeAuthorizerInput) SetAuthorizerName(v string) *DescribeAuthorizerInput { s.AuthorizerName = &v return s } -// SetCreationDate sets the CreationDate field's value. -func (s *AuthorizerDescription) SetCreationDate(v time.Time) *AuthorizerDescription { - s.CreationDate = &v - return s -} +type DescribeAuthorizerOutput struct { + _ struct{} `type:"structure"` -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *AuthorizerDescription) SetLastModifiedDate(v time.Time) *AuthorizerDescription { - s.LastModifiedDate = &v - return s + // The authorizer description. + AuthorizerDescription *AuthorizerDescription `locationName:"authorizerDescription" type:"structure"` } -// SetStatus sets the Status field's value. 
-func (s *AuthorizerDescription) SetStatus(v string) *AuthorizerDescription { - s.Status = &v - return s +// String returns the string representation +func (s DescribeAuthorizerOutput) String() string { + return awsutil.Prettify(s) } -// SetTokenKeyName sets the TokenKeyName field's value. -func (s *AuthorizerDescription) SetTokenKeyName(v string) *AuthorizerDescription { - s.TokenKeyName = &v - return s +// GoString returns the string representation +func (s DescribeAuthorizerOutput) GoString() string { + return s.String() } -// SetTokenSigningPublicKeys sets the TokenSigningPublicKeys field's value. -func (s *AuthorizerDescription) SetTokenSigningPublicKeys(v map[string]*string) *AuthorizerDescription { - s.TokenSigningPublicKeys = v +// SetAuthorizerDescription sets the AuthorizerDescription field's value. +func (s *DescribeAuthorizerOutput) SetAuthorizerDescription(v *AuthorizerDescription) *DescribeAuthorizerOutput { + s.AuthorizerDescription = v return s } -// The authorizer summary. -type AuthorizerSummary struct { +type DescribeBillingGroupInput struct { _ struct{} `type:"structure"` - // The authorizer ARN. - AuthorizerArn *string `locationName:"authorizerArn" type:"string"` - - // The authorizer name. - AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` + // The name of the billing group. + // + // BillingGroupName is a required field + BillingGroupName *string `location:"uri" locationName:"billingGroupName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s AuthorizerSummary) String() string { +func (s DescribeBillingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AuthorizerSummary) GoString() string { +func (s DescribeBillingGroupInput) GoString() string { return s.String() } -// SetAuthorizerArn sets the AuthorizerArn field's value. -func (s *AuthorizerSummary) SetAuthorizerArn(v string) *AuthorizerSummary { - s.AuthorizerArn = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeBillingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeBillingGroupInput"} + if s.BillingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("BillingGroupName")) + } + if s.BillingGroupName != nil && len(*s.BillingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BillingGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *AuthorizerSummary) SetAuthorizerName(v string) *AuthorizerSummary { - s.AuthorizerName = &v +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *DescribeBillingGroupInput) SetBillingGroupName(v string) *DescribeBillingGroupInput { + s.BillingGroupName = &v return s } -// A CA certificate. -type CACertificate struct { +type DescribeBillingGroupOutput struct { _ struct{} `type:"structure"` - // The ARN of the CA certificate. - CertificateArn *string `locationName:"certificateArn" type:"string"` + // The ARN of the billing group. + BillingGroupArn *string `locationName:"billingGroupArn" type:"string"` - // The ID of the CA certificate. - CertificateId *string `locationName:"certificateId" min:"64" type:"string"` + // The ID of the billing group. + BillingGroupId *string `locationName:"billingGroupId" min:"1" type:"string"` - // The date the CA certificate was created. 
- CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` + // Additional information about the billing group. + BillingGroupMetadata *BillingGroupMetadata `locationName:"billingGroupMetadata" type:"structure"` - // The status of the CA certificate. - // - // The status value REGISTER_INACTIVE is deprecated and should not be used. - Status *string `locationName:"status" type:"string" enum:"CACertificateStatus"` + // The name of the billing group. + BillingGroupName *string `locationName:"billingGroupName" min:"1" type:"string"` + + // The properties of the billing group. + BillingGroupProperties *BillingGroupProperties `locationName:"billingGroupProperties" type:"structure"` + + // The version of the billing group. + Version *int64 `locationName:"version" type:"long"` } // String returns the string representation -func (s CACertificate) String() string { +func (s DescribeBillingGroupOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CACertificate) GoString() string { +func (s DescribeBillingGroupOutput) GoString() string { return s.String() } -// SetCertificateArn sets the CertificateArn field's value. -func (s *CACertificate) SetCertificateArn(v string) *CACertificate { - s.CertificateArn = &v +// SetBillingGroupArn sets the BillingGroupArn field's value. +func (s *DescribeBillingGroupOutput) SetBillingGroupArn(v string) *DescribeBillingGroupOutput { + s.BillingGroupArn = &v return s } -// SetCertificateId sets the CertificateId field's value. -func (s *CACertificate) SetCertificateId(v string) *CACertificate { - s.CertificateId = &v +// SetBillingGroupId sets the BillingGroupId field's value. +func (s *DescribeBillingGroupOutput) SetBillingGroupId(v string) *DescribeBillingGroupOutput { + s.BillingGroupId = &v return s } -// SetCreationDate sets the CreationDate field's value. -func (s *CACertificate) SetCreationDate(v time.Time) *CACertificate { - s.CreationDate = &v +// SetBillingGroupMetadata sets the BillingGroupMetadata field's value. +func (s *DescribeBillingGroupOutput) SetBillingGroupMetadata(v *BillingGroupMetadata) *DescribeBillingGroupOutput { + s.BillingGroupMetadata = v return s } -// SetStatus sets the Status field's value. -func (s *CACertificate) SetStatus(v string) *CACertificate { - s.Status = &v +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *DescribeBillingGroupOutput) SetBillingGroupName(v string) *DescribeBillingGroupOutput { + s.BillingGroupName = &v return s } -// Describes a CA certificate. -type CACertificateDescription struct { - _ struct{} `type:"structure"` - - // Whether the CA certificate configured for auto registration of device certificates. - // Valid values are "ENABLE" and "DISABLE" - AutoRegistrationStatus *string `locationName:"autoRegistrationStatus" type:"string" enum:"AutoRegistrationStatus"` - - // The CA certificate ARN. - CertificateArn *string `locationName:"certificateArn" type:"string"` - - // The CA certificate ID. - CertificateId *string `locationName:"certificateId" min:"64" type:"string"` - - // The CA certificate data, in PEM format. - CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` - - // The date the CA certificate was created. 
- CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - CustomerVersion *int64 `locationName:"customerVersion" min:"1" type:"integer"` - - GenerationId *string `locationName:"generationId" type:"string"` +// SetBillingGroupProperties sets the BillingGroupProperties field's value. +func (s *DescribeBillingGroupOutput) SetBillingGroupProperties(v *BillingGroupProperties) *DescribeBillingGroupOutput { + s.BillingGroupProperties = v + return s +} - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` +// SetVersion sets the Version field's value. +func (s *DescribeBillingGroupOutput) SetVersion(v int64) *DescribeBillingGroupOutput { + s.Version = &v + return s +} - // The owner of the CA certificate. - OwnedBy *string `locationName:"ownedBy" type:"string"` +// The input for the DescribeCACertificate operation. +type DescribeCACertificateInput struct { + _ struct{} `type:"structure"` - // The status of a CA certificate. - Status *string `locationName:"status" type:"string" enum:"CACertificateStatus"` + // The CA certificate identifier. + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"caCertificateId" min:"64" type:"string" required:"true"` } // String returns the string representation -func (s CACertificateDescription) String() string { +func (s DescribeCACertificateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CACertificateDescription) GoString() string { +func (s DescribeCACertificateInput) GoString() string { return s.String() } -// SetAutoRegistrationStatus sets the AutoRegistrationStatus field's value. -func (s *CACertificateDescription) SetAutoRegistrationStatus(v string) *CACertificateDescription { - s.AutoRegistrationStatus = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeCACertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeCACertificateInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + } -// SetCertificateArn sets the CertificateArn field's value. -func (s *CACertificateDescription) SetCertificateArn(v string) *CACertificateDescription { - s.CertificateArn = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } // SetCertificateId sets the CertificateId field's value. -func (s *CACertificateDescription) SetCertificateId(v string) *CACertificateDescription { +func (s *DescribeCACertificateInput) SetCertificateId(v string) *DescribeCACertificateInput { s.CertificateId = &v return s } -// SetCertificatePem sets the CertificatePem field's value. -func (s *CACertificateDescription) SetCertificatePem(v string) *CACertificateDescription { - s.CertificatePem = &v - return s -} +// The output from the DescribeCACertificate operation. +type DescribeCACertificateOutput struct { + _ struct{} `type:"structure"` -// SetCreationDate sets the CreationDate field's value. -func (s *CACertificateDescription) SetCreationDate(v time.Time) *CACertificateDescription { - s.CreationDate = &v - return s -} + // The CA certificate description. 
+ CertificateDescription *CACertificateDescription `locationName:"certificateDescription" type:"structure"` -// SetCustomerVersion sets the CustomerVersion field's value. -func (s *CACertificateDescription) SetCustomerVersion(v int64) *CACertificateDescription { - s.CustomerVersion = &v - return s + // Information about the registration configuration. + RegistrationConfig *RegistrationConfig `locationName:"registrationConfig" type:"structure"` } -// SetGenerationId sets the GenerationId field's value. -func (s *CACertificateDescription) SetGenerationId(v string) *CACertificateDescription { - s.GenerationId = &v - return s +// String returns the string representation +func (s DescribeCACertificateOutput) String() string { + return awsutil.Prettify(s) } -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *CACertificateDescription) SetLastModifiedDate(v time.Time) *CACertificateDescription { - s.LastModifiedDate = &v - return s +// GoString returns the string representation +func (s DescribeCACertificateOutput) GoString() string { + return s.String() } -// SetOwnedBy sets the OwnedBy field's value. -func (s *CACertificateDescription) SetOwnedBy(v string) *CACertificateDescription { - s.OwnedBy = &v +// SetCertificateDescription sets the CertificateDescription field's value. +func (s *DescribeCACertificateOutput) SetCertificateDescription(v *CACertificateDescription) *DescribeCACertificateOutput { + s.CertificateDescription = v return s } -// SetStatus sets the Status field's value. -func (s *CACertificateDescription) SetStatus(v string) *CACertificateDescription { - s.Status = &v +// SetRegistrationConfig sets the RegistrationConfig field's value. +func (s *DescribeCACertificateOutput) SetRegistrationConfig(v *RegistrationConfig) *DescribeCACertificateOutput { + s.RegistrationConfig = v return s } -// The input for the CancelCertificateTransfer operation. -type CancelCertificateTransferInput struct { +// The input for the DescribeCertificate operation. +type DescribeCertificateInput struct { _ struct{} `type:"structure"` - // The ID of the certificate. + // The ID of the certificate. (The last part of the certificate ARN contains + // the certificate ID.) // // CertificateId is a required field CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` } // String returns the string representation -func (s CancelCertificateTransferInput) String() string { +func (s DescribeCertificateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CancelCertificateTransferInput) GoString() string { +func (s DescribeCertificateInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CancelCertificateTransferInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CancelCertificateTransferInput"} +func (s *DescribeCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeCertificateInput"} if s.CertificateId == nil { invalidParams.Add(request.NewErrParamRequired("CertificateId")) } @@ -12403,367 +22080,320 @@ func (s *CancelCertificateTransferInput) Validate() error { } // SetCertificateId sets the CertificateId field's value. 
-func (s *CancelCertificateTransferInput) SetCertificateId(v string) *CancelCertificateTransferInput { +func (s *DescribeCertificateInput) SetCertificateId(v string) *DescribeCertificateInput { s.CertificateId = &v return s } -type CancelCertificateTransferOutput struct { +// The output of the DescribeCertificate operation. +type DescribeCertificateOutput struct { _ struct{} `type:"structure"` + + // The description of the certificate. + CertificateDescription *CertificateDescription `locationName:"certificateDescription" type:"structure"` } // String returns the string representation -func (s CancelCertificateTransferOutput) String() string { +func (s DescribeCertificateOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CancelCertificateTransferOutput) GoString() string { +func (s DescribeCertificateOutput) GoString() string { return s.String() } -type CancelJobInput struct { - _ struct{} `type:"structure"` - - // An optional comment string describing why the job was canceled. - Comment *string `locationName:"comment" type:"string"` +// SetCertificateDescription sets the CertificateDescription field's value. +func (s *DescribeCertificateOutput) SetCertificateDescription(v *CertificateDescription) *DescribeCertificateOutput { + s.CertificateDescription = v + return s +} - // The unique identifier you assigned to this job when it was created. - // - // JobId is a required field - JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` +type DescribeDefaultAuthorizerInput struct { + _ struct{} `type:"structure"` } // String returns the string representation -func (s CancelJobInput) String() string { +func (s DescribeDefaultAuthorizerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CancelJobInput) GoString() string { +func (s DescribeDefaultAuthorizerInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CancelJobInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CancelJobInput"} - if s.JobId == nil { - invalidParams.Add(request.NewErrParamRequired("JobId")) - } - if s.JobId != nil && len(*s.JobId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) - } +type DescribeDefaultAuthorizerOutput struct { + _ struct{} `type:"structure"` - if invalidParams.Len() > 0 { - return invalidParams - } - return nil + // The default authorizer's description. + AuthorizerDescription *AuthorizerDescription `locationName:"authorizerDescription" type:"structure"` } -// SetComment sets the Comment field's value. -func (s *CancelJobInput) SetComment(v string) *CancelJobInput { - s.Comment = &v - return s +// String returns the string representation +func (s DescribeDefaultAuthorizerOutput) String() string { + return awsutil.Prettify(s) } -// SetJobId sets the JobId field's value. -func (s *CancelJobInput) SetJobId(v string) *CancelJobInput { - s.JobId = &v - return s +// GoString returns the string representation +func (s DescribeDefaultAuthorizerOutput) GoString() string { + return s.String() } -type CancelJobOutput struct { - _ struct{} `type:"structure"` - - // A short text description of the job. - Description *string `locationName:"description" type:"string"` +// SetAuthorizerDescription sets the AuthorizerDescription field's value. 
+func (s *DescribeDefaultAuthorizerOutput) SetAuthorizerDescription(v *AuthorizerDescription) *DescribeDefaultAuthorizerOutput { + s.AuthorizerDescription = v + return s +} - // The job ARN. - JobArn *string `locationName:"jobArn" type:"string"` +// The input for the DescribeEndpoint operation. +type DescribeEndpointInput struct { + _ struct{} `type:"structure"` - // The unique identifier you assigned to this job when it was created. - JobId *string `locationName:"jobId" min:"1" type:"string"` + // The endpoint type. Valid endpoint types include: + // + // * iot:Data - Returns a VeriSign signed data endpoint. + // + // * iot:Data-ATS - Returns an ATS signed data endpoint. + // + // * iot:CredentialProvider - Returns an AWS IoT credentials provider API + // endpoint. + // + // * iot:Jobs - Returns an AWS IoT device management Jobs API endpoint. + EndpointType *string `location:"querystring" locationName:"endpointType" type:"string"` } // String returns the string representation -func (s CancelJobOutput) String() string { +func (s DescribeEndpointInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CancelJobOutput) GoString() string { +func (s DescribeEndpointInput) GoString() string { return s.String() } -// SetDescription sets the Description field's value. -func (s *CancelJobOutput) SetDescription(v string) *CancelJobOutput { - s.Description = &v - return s -} - -// SetJobArn sets the JobArn field's value. -func (s *CancelJobOutput) SetJobArn(v string) *CancelJobOutput { - s.JobArn = &v - return s -} - -// SetJobId sets the JobId field's value. -func (s *CancelJobOutput) SetJobId(v string) *CancelJobOutput { - s.JobId = &v +// SetEndpointType sets the EndpointType field's value. +func (s *DescribeEndpointInput) SetEndpointType(v string) *DescribeEndpointInput { + s.EndpointType = &v return s } -// Information about a certificate. -type Certificate struct { +// The output from the DescribeEndpoint operation. +type DescribeEndpointOutput struct { _ struct{} `type:"structure"` - // The ARN of the certificate. - CertificateArn *string `locationName:"certificateArn" type:"string"` - - // The ID of the certificate. - CertificateId *string `locationName:"certificateId" min:"64" type:"string"` - - // The date and time the certificate was created. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - // The status of the certificate. - // - // The status value REGISTER_INACTIVE is deprecated and should not be used. - Status *string `locationName:"status" type:"string" enum:"CertificateStatus"` + // The endpoint. The format of the endpoint is as follows: identifier.iot.region.amazonaws.com. + EndpointAddress *string `locationName:"endpointAddress" type:"string"` } // String returns the string representation -func (s Certificate) String() string { +func (s DescribeEndpointOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Certificate) GoString() string { +func (s DescribeEndpointOutput) GoString() string { return s.String() } -// SetCertificateArn sets the CertificateArn field's value. -func (s *Certificate) SetCertificateArn(v string) *Certificate { - s.CertificateArn = &v +// SetEndpointAddress sets the EndpointAddress field's value. +func (s *DescribeEndpointOutput) SetEndpointAddress(v string) *DescribeEndpointOutput { + s.EndpointAddress = &v return s } -// SetCertificateId sets the CertificateId field's value. 
-func (s *Certificate) SetCertificateId(v string) *Certificate { - s.CertificateId = &v - return s +type DescribeEventConfigurationsInput struct { + _ struct{} `type:"structure"` } -// SetCreationDate sets the CreationDate field's value. -func (s *Certificate) SetCreationDate(v time.Time) *Certificate { - s.CreationDate = &v - return s +// String returns the string representation +func (s DescribeEventConfigurationsInput) String() string { + return awsutil.Prettify(s) } -// SetStatus sets the Status field's value. -func (s *Certificate) SetStatus(v string) *Certificate { - s.Status = &v - return s +// GoString returns the string representation +func (s DescribeEventConfigurationsInput) GoString() string { + return s.String() } -// Describes a certificate. -type CertificateDescription struct { +type DescribeEventConfigurationsOutput struct { _ struct{} `type:"structure"` - // The certificate ID of the CA certificate used to sign this certificate. - CaCertificateId *string `locationName:"caCertificateId" min:"64" type:"string"` - - // The ARN of the certificate. - CertificateArn *string `locationName:"certificateArn" type:"string"` - - // The ID of the certificate. - CertificateId *string `locationName:"certificateId" min:"64" type:"string"` - - // The certificate data, in PEM format. - CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` - - // The date and time the certificate was created. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - CustomerVersion *int64 `locationName:"customerVersion" min:"1" type:"integer"` - - GenerationId *string `locationName:"generationId" type:"string"` - - // The date and time the certificate was last modified. - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` - - // The ID of the AWS account that owns the certificate. - OwnedBy *string `locationName:"ownedBy" type:"string"` - - // The ID of the AWS account of the previous owner of the certificate. - PreviousOwnedBy *string `locationName:"previousOwnedBy" type:"string"` + // The creation date of the event configuration. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` - // The status of the certificate. - Status *string `locationName:"status" type:"string" enum:"CertificateStatus"` + // The event configurations. + EventConfigurations map[string]*Configuration `locationName:"eventConfigurations" type:"map"` - // The transfer data. - TransferData *TransferData `locationName:"transferData" type:"structure"` + // The date the event configurations were last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` } // String returns the string representation -func (s CertificateDescription) String() string { +func (s DescribeEventConfigurationsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CertificateDescription) GoString() string { +func (s DescribeEventConfigurationsOutput) GoString() string { return s.String() } -// SetCaCertificateId sets the CaCertificateId field's value. -func (s *CertificateDescription) SetCaCertificateId(v string) *CertificateDescription { - s.CaCertificateId = &v +// SetCreationDate sets the CreationDate field's value. +func (s *DescribeEventConfigurationsOutput) SetCreationDate(v time.Time) *DescribeEventConfigurationsOutput { + s.CreationDate = &v return s } -// SetCertificateArn sets the CertificateArn field's value. 
-func (s *CertificateDescription) SetCertificateArn(v string) *CertificateDescription { - s.CertificateArn = &v +// SetEventConfigurations sets the EventConfigurations field's value. +func (s *DescribeEventConfigurationsOutput) SetEventConfigurations(v map[string]*Configuration) *DescribeEventConfigurationsOutput { + s.EventConfigurations = v return s } -// SetCertificateId sets the CertificateId field's value. -func (s *CertificateDescription) SetCertificateId(v string) *CertificateDescription { - s.CertificateId = &v +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *DescribeEventConfigurationsOutput) SetLastModifiedDate(v time.Time) *DescribeEventConfigurationsOutput { + s.LastModifiedDate = &v return s } -// SetCertificatePem sets the CertificatePem field's value. -func (s *CertificateDescription) SetCertificatePem(v string) *CertificateDescription { - s.CertificatePem = &v - return s -} +type DescribeIndexInput struct { + _ struct{} `type:"structure"` -// SetCreationDate sets the CreationDate field's value. -func (s *CertificateDescription) SetCreationDate(v time.Time) *CertificateDescription { - s.CreationDate = &v - return s + // The index name. + // + // IndexName is a required field + IndexName *string `location:"uri" locationName:"indexName" min:"1" type:"string" required:"true"` } -// SetCustomerVersion sets the CustomerVersion field's value. -func (s *CertificateDescription) SetCustomerVersion(v int64) *CertificateDescription { - s.CustomerVersion = &v - return s +// String returns the string representation +func (s DescribeIndexInput) String() string { + return awsutil.Prettify(s) } -// SetGenerationId sets the GenerationId field's value. -func (s *CertificateDescription) SetGenerationId(v string) *CertificateDescription { - s.GenerationId = &v - return s +// GoString returns the string representation +func (s DescribeIndexInput) GoString() string { + return s.String() } -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *CertificateDescription) SetLastModifiedDate(v time.Time) *CertificateDescription { - s.LastModifiedDate = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeIndexInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeIndexInput"} + if s.IndexName == nil { + invalidParams.Add(request.NewErrParamRequired("IndexName")) + } + if s.IndexName != nil && len(*s.IndexName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) + } -// SetOwnedBy sets the OwnedBy field's value. -func (s *CertificateDescription) SetOwnedBy(v string) *CertificateDescription { - s.OwnedBy = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetPreviousOwnedBy sets the PreviousOwnedBy field's value. -func (s *CertificateDescription) SetPreviousOwnedBy(v string) *CertificateDescription { - s.PreviousOwnedBy = &v +// SetIndexName sets the IndexName field's value. +func (s *DescribeIndexInput) SetIndexName(v string) *DescribeIndexInput { + s.IndexName = &v return s } -// SetStatus sets the Status field's value. -func (s *CertificateDescription) SetStatus(v string) *CertificateDescription { - s.Status = &v - return s -} +type DescribeIndexOutput struct { + _ struct{} `type:"structure"` -// SetTransferData sets the TransferData field's value. -func (s *CertificateDescription) SetTransferData(v *TransferData) *CertificateDescription { - s.TransferData = v - return s -} + // The index name. 
+ IndexName *string `locationName:"indexName" min:"1" type:"string"` -type ClearDefaultAuthorizerInput struct { - _ struct{} `type:"structure"` + // The index status. + IndexStatus *string `locationName:"indexStatus" type:"string" enum:"IndexStatus"` + + // Contains a value that specifies the type of indexing performed. Valid values + // are: + // + // * REGISTRY – Your thing index will contain only registry data. + // + // * REGISTRY_AND_SHADOW - Your thing index will contain registry data and + // shadow data. + // + // * REGISTRY_AND_CONNECTIVITY_STATUS - Your thing index will contain registry + // data and thing connectivity status data. + // + // * REGISTRY_AND_SHADOW_AND_CONNECTIVITY_STATUS - Your thing index will + // contain registry data, shadow data, and thing connectivity status data. + Schema *string `locationName:"schema" type:"string"` } // String returns the string representation -func (s ClearDefaultAuthorizerInput) String() string { +func (s DescribeIndexOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ClearDefaultAuthorizerInput) GoString() string { +func (s DescribeIndexOutput) GoString() string { return s.String() } -type ClearDefaultAuthorizerOutput struct { - _ struct{} `type:"structure"` +// SetIndexName sets the IndexName field's value. +func (s *DescribeIndexOutput) SetIndexName(v string) *DescribeIndexOutput { + s.IndexName = &v + return s } -// String returns the string representation -func (s ClearDefaultAuthorizerOutput) String() string { - return awsutil.Prettify(s) +// SetIndexStatus sets the IndexStatus field's value. +func (s *DescribeIndexOutput) SetIndexStatus(v string) *DescribeIndexOutput { + s.IndexStatus = &v + return s } -// GoString returns the string representation -func (s ClearDefaultAuthorizerOutput) GoString() string { - return s.String() +// SetSchema sets the Schema field's value. +func (s *DescribeIndexOutput) SetSchema(v string) *DescribeIndexOutput { + s.Schema = &v + return s } -// Describes an action that updates a CloudWatch alarm. -type CloudwatchAlarmAction struct { +type DescribeJobExecutionInput struct { _ struct{} `type:"structure"` - // The CloudWatch alarm name. - // - // AlarmName is a required field - AlarmName *string `locationName:"alarmName" type:"string" required:"true"` - - // The IAM role that allows access to the CloudWatch alarm. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // A string (consisting of the digits "0" through "9" which is used to specify + // a particular job execution on a particular device. + ExecutionNumber *int64 `location:"querystring" locationName:"executionNumber" type:"long"` - // The reason for the alarm change. + // The unique identifier you assigned to this job when it was created. // - // StateReason is a required field - StateReason *string `locationName:"stateReason" type:"string" required:"true"` + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` - // The value of the alarm state. Acceptable values are: OK, ALARM, INSUFFICIENT_DATA. + // The name of the thing on which the job execution is running. 
// - // StateValue is a required field - StateValue *string `locationName:"stateValue" type:"string" required:"true"` + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CloudwatchAlarmAction) String() string { +func (s DescribeJobExecutionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CloudwatchAlarmAction) GoString() string { +func (s DescribeJobExecutionInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CloudwatchAlarmAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CloudwatchAlarmAction"} - if s.AlarmName == nil { - invalidParams.Add(request.NewErrParamRequired("AlarmName")) +func (s *DescribeJobExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeJobExecutionInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) } - if s.StateReason == nil { - invalidParams.Add(request.NewErrParamRequired("StateReason")) + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) } - if s.StateValue == nil { - invalidParams.Add(request.NewErrParamRequired("StateValue")) + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) } if invalidParams.Len() > 0 { @@ -12772,91 +22402,74 @@ func (s *CloudwatchAlarmAction) Validate() error { return nil } -// SetAlarmName sets the AlarmName field's value. -func (s *CloudwatchAlarmAction) SetAlarmName(v string) *CloudwatchAlarmAction { - s.AlarmName = &v - return s -} - -// SetRoleArn sets the RoleArn field's value. -func (s *CloudwatchAlarmAction) SetRoleArn(v string) *CloudwatchAlarmAction { - s.RoleArn = &v +// SetExecutionNumber sets the ExecutionNumber field's value. +func (s *DescribeJobExecutionInput) SetExecutionNumber(v int64) *DescribeJobExecutionInput { + s.ExecutionNumber = &v return s } -// SetStateReason sets the StateReason field's value. -func (s *CloudwatchAlarmAction) SetStateReason(v string) *CloudwatchAlarmAction { - s.StateReason = &v +// SetJobId sets the JobId field's value. +func (s *DescribeJobExecutionInput) SetJobId(v string) *DescribeJobExecutionInput { + s.JobId = &v return s } -// SetStateValue sets the StateValue field's value. -func (s *CloudwatchAlarmAction) SetStateValue(v string) *CloudwatchAlarmAction { - s.StateValue = &v +// SetThingName sets the ThingName field's value. +func (s *DescribeJobExecutionInput) SetThingName(v string) *DescribeJobExecutionInput { + s.ThingName = &v return s } -// Describes an action that captures a CloudWatch metric. -type CloudwatchMetricAction struct { +type DescribeJobExecutionOutput struct { _ struct{} `type:"structure"` - // The CloudWatch metric name. - // - // MetricName is a required field - MetricName *string `locationName:"metricName" type:"string" required:"true"` + // Information about the job execution. + Execution *JobExecution `locationName:"execution" type:"structure"` +} - // The CloudWatch metric namespace name. 
- // - // MetricNamespace is a required field - MetricNamespace *string `locationName:"metricNamespace" type:"string" required:"true"` +// String returns the string representation +func (s DescribeJobExecutionOutput) String() string { + return awsutil.Prettify(s) +} - // An optional Unix timestamp (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#about_timestamp). - MetricTimestamp *string `locationName:"metricTimestamp" type:"string"` +// GoString returns the string representation +func (s DescribeJobExecutionOutput) GoString() string { + return s.String() +} - // The metric unit (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Unit) - // supported by CloudWatch. - // - // MetricUnit is a required field - MetricUnit *string `locationName:"metricUnit" type:"string" required:"true"` +// SetExecution sets the Execution field's value. +func (s *DescribeJobExecutionOutput) SetExecution(v *JobExecution) *DescribeJobExecutionOutput { + s.Execution = v + return s +} - // The CloudWatch metric value. - // - // MetricValue is a required field - MetricValue *string `locationName:"metricValue" type:"string" required:"true"` +type DescribeJobInput struct { + _ struct{} `type:"structure"` - // The IAM role that allows access to the CloudWatch metric. + // The unique identifier you assigned to this job when it was created. // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CloudwatchMetricAction) String() string { +func (s DescribeJobInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CloudwatchMetricAction) GoString() string { +func (s DescribeJobInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CloudwatchMetricAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CloudwatchMetricAction"} - if s.MetricName == nil { - invalidParams.Add(request.NewErrParamRequired("MetricName")) - } - if s.MetricNamespace == nil { - invalidParams.Add(request.NewErrParamRequired("MetricNamespace")) - } - if s.MetricUnit == nil { - invalidParams.Add(request.NewErrParamRequired("MetricUnit")) - } - if s.MetricValue == nil { - invalidParams.Add(request.NewErrParamRequired("MetricValue")) +func (s *DescribeJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeJobInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) } if invalidParams.Len() > 0 { @@ -12865,70 +22478,71 @@ func (s *CloudwatchMetricAction) Validate() error { return nil } -// SetMetricName sets the MetricName field's value. -func (s *CloudwatchMetricAction) SetMetricName(v string) *CloudwatchMetricAction { - s.MetricName = &v +// SetJobId sets the JobId field's value. +func (s *DescribeJobInput) SetJobId(v string) *DescribeJobInput { + s.JobId = &v return s } -// SetMetricNamespace sets the MetricNamespace field's value. 
-func (s *CloudwatchMetricAction) SetMetricNamespace(v string) *CloudwatchMetricAction { - s.MetricNamespace = &v - return s +type DescribeJobOutput struct { + _ struct{} `type:"structure"` + + // An S3 link to the job document. + DocumentSource *string `locationName:"documentSource" min:"1" type:"string"` + + // Information about the job. + Job *Job `locationName:"job" type:"structure"` } -// SetMetricTimestamp sets the MetricTimestamp field's value. -func (s *CloudwatchMetricAction) SetMetricTimestamp(v string) *CloudwatchMetricAction { - s.MetricTimestamp = &v - return s +// String returns the string representation +func (s DescribeJobOutput) String() string { + return awsutil.Prettify(s) } -// SetMetricUnit sets the MetricUnit field's value. -func (s *CloudwatchMetricAction) SetMetricUnit(v string) *CloudwatchMetricAction { - s.MetricUnit = &v - return s +// GoString returns the string representation +func (s DescribeJobOutput) GoString() string { + return s.String() } -// SetMetricValue sets the MetricValue field's value. -func (s *CloudwatchMetricAction) SetMetricValue(v string) *CloudwatchMetricAction { - s.MetricValue = &v +// SetDocumentSource sets the DocumentSource field's value. +func (s *DescribeJobOutput) SetDocumentSource(v string) *DescribeJobOutput { + s.DocumentSource = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *CloudwatchMetricAction) SetRoleArn(v string) *CloudwatchMetricAction { - s.RoleArn = &v +// SetJob sets the Job field's value. +func (s *DescribeJobOutput) SetJob(v *Job) *DescribeJobOutput { + s.Job = v return s } -// Describes the method to use when code signing a file. -type CodeSigning struct { +type DescribeRoleAliasInput struct { _ struct{} `type:"structure"` - // The ID of the AWSSignerJob which was created to sign the file. - AwsSignerJobId *string `locationName:"awsSignerJobId" type:"string"` - - // A custom method for code signing a file. - CustomCodeSigning *CustomCodeSigning `locationName:"customCodeSigning" type:"structure"` + // The role alias to describe. + // + // RoleAlias is a required field + RoleAlias *string `location:"uri" locationName:"roleAlias" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CodeSigning) String() string { +func (s DescribeRoleAliasInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CodeSigning) GoString() string { +func (s DescribeRoleAliasInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CodeSigning) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CodeSigning"} - if s.CustomCodeSigning != nil { - if err := s.CustomCodeSigning.Validate(); err != nil { - invalidParams.AddNested("CustomCodeSigning", err.(request.ErrInvalidParams)) - } +func (s *DescribeRoleAliasInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeRoleAliasInput"} + if s.RoleAlias == nil { + invalidParams.Add(request.NewErrParamRequired("RoleAlias")) + } + if s.RoleAlias != nil && len(*s.RoleAlias) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleAlias", 1)) } if invalidParams.Len() > 0 { @@ -12937,105 +22551,62 @@ func (s *CodeSigning) Validate() error { return nil } -// SetAwsSignerJobId sets the AwsSignerJobId field's value. 
-func (s *CodeSigning) SetAwsSignerJobId(v string) *CodeSigning { - s.AwsSignerJobId = &v - return s -} - -// SetCustomCodeSigning sets the CustomCodeSigning field's value. -func (s *CodeSigning) SetCustomCodeSigning(v *CustomCodeSigning) *CodeSigning { - s.CustomCodeSigning = v +// SetRoleAlias sets the RoleAlias field's value. +func (s *DescribeRoleAliasInput) SetRoleAlias(v string) *DescribeRoleAliasInput { + s.RoleAlias = &v return s } -// Describes the certificate chain being used when code signing a file. -type CodeSigningCertificateChain struct { +type DescribeRoleAliasOutput struct { _ struct{} `type:"structure"` - // The name of the certificate. - CertificateName *string `locationName:"certificateName" type:"string"` - - // A base64 encoded binary representation of the code signing certificate chain. - InlineDocument *string `locationName:"inlineDocument" type:"string"` - - // A stream of the certificate chain files. - Stream *Stream `locationName:"stream" type:"structure"` + // The role alias description. + RoleAliasDescription *RoleAliasDescription `locationName:"roleAliasDescription" type:"structure"` } // String returns the string representation -func (s CodeSigningCertificateChain) String() string { +func (s DescribeRoleAliasOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CodeSigningCertificateChain) GoString() string { - return s.String() -} - -// Validate inspects the fields of the type to determine if they are valid. -func (s *CodeSigningCertificateChain) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CodeSigningCertificateChain"} - if s.Stream != nil { - if err := s.Stream.Validate(); err != nil { - invalidParams.AddNested("Stream", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetCertificateName sets the CertificateName field's value. -func (s *CodeSigningCertificateChain) SetCertificateName(v string) *CodeSigningCertificateChain { - s.CertificateName = &v - return s -} - -// SetInlineDocument sets the InlineDocument field's value. -func (s *CodeSigningCertificateChain) SetInlineDocument(v string) *CodeSigningCertificateChain { - s.InlineDocument = &v - return s +func (s DescribeRoleAliasOutput) GoString() string { + return s.String() } -// SetStream sets the Stream field's value. -func (s *CodeSigningCertificateChain) SetStream(v *Stream) *CodeSigningCertificateChain { - s.Stream = v +// SetRoleAliasDescription sets the RoleAliasDescription field's value. +func (s *DescribeRoleAliasOutput) SetRoleAliasDescription(v *RoleAliasDescription) *DescribeRoleAliasOutput { + s.RoleAliasDescription = v return s } -// Describes the signature for a file. -type CodeSigningSignature struct { +type DescribeScheduledAuditInput struct { _ struct{} `type:"structure"` - // A base64 encoded binary representation of the code signing signature. + // The name of the scheduled audit whose information you want to get. // - // InlineDocument is automatically base64 encoded/decoded by the SDK. - InlineDocument []byte `locationName:"inlineDocument" type:"blob"` - - // A stream of the code signing signature. 
- Stream *Stream `locationName:"stream" type:"structure"` + // ScheduledAuditName is a required field + ScheduledAuditName *string `location:"uri" locationName:"scheduledAuditName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CodeSigningSignature) String() string { +func (s DescribeScheduledAuditInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CodeSigningSignature) GoString() string { +func (s DescribeScheduledAuditInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CodeSigningSignature) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CodeSigningSignature"} - if s.Stream != nil { - if err := s.Stream.Validate(); err != nil { - invalidParams.AddNested("Stream", err.(request.ErrInvalidParams)) - } +func (s *DescribeScheduledAuditInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeScheduledAuditInput"} + if s.ScheduledAuditName == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduledAuditName")) + } + if s.ScheduledAuditName != nil && len(*s.ScheduledAuditName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ScheduledAuditName", 1)) } if invalidParams.Len() > 0 { @@ -13044,100 +22615,114 @@ func (s *CodeSigningSignature) Validate() error { return nil } -// SetInlineDocument sets the InlineDocument field's value. -func (s *CodeSigningSignature) SetInlineDocument(v []byte) *CodeSigningSignature { - s.InlineDocument = v - return s -} - -// SetStream sets the Stream field's value. -func (s *CodeSigningSignature) SetStream(v *Stream) *CodeSigningSignature { - s.Stream = v +// SetScheduledAuditName sets the ScheduledAuditName field's value. +func (s *DescribeScheduledAuditInput) SetScheduledAuditName(v string) *DescribeScheduledAuditInput { + s.ScheduledAuditName = &v return s } -// Configuration. -type Configuration struct { +type DescribeScheduledAuditOutput struct { _ struct{} `type:"structure"` - // True to enable the configuration. - Enabled *bool `type:"boolean"` + // The day of the month on which the scheduled audit takes place. Will be "1" + // through "31" or "LAST". If days 29-31 are specified, and the month does not + // have that many days, the audit takes place on the "LAST" day of the month. + DayOfMonth *string `locationName:"dayOfMonth" type:"string"` + + // The day of the week on which the scheduled audit takes place. One of "SUN", + // "MON", "TUE", "WED", "THU", "FRI" or "SAT". + DayOfWeek *string `locationName:"dayOfWeek" type:"string" enum:"DayOfWeek"` + + // How often the scheduled audit takes place. One of "DAILY", "WEEKLY", "BIWEEKLY" + // or "MONTHLY". The actual start time of each audit is determined by the system. + Frequency *string `locationName:"frequency" type:"string" enum:"AuditFrequency"` + + // The ARN of the scheduled audit. + ScheduledAuditArn *string `locationName:"scheduledAuditArn" type:"string"` + + // The name of the scheduled audit. + ScheduledAuditName *string `locationName:"scheduledAuditName" min:"1" type:"string"` + + // Which checks are performed during the scheduled audit. (Note that checks + // must be enabled for your account. (Use DescribeAccountAuditConfiguration + // to see the list of all checks including those that are enabled or UpdateAccountAuditConfiguration + // to select which checks are enabled.) 
+ TargetCheckNames []*string `locationName:"targetCheckNames" type:"list"` } // String returns the string representation -func (s Configuration) String() string { +func (s DescribeScheduledAuditOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Configuration) GoString() string { +func (s DescribeScheduledAuditOutput) GoString() string { return s.String() } -// SetEnabled sets the Enabled field's value. -func (s *Configuration) SetEnabled(v bool) *Configuration { - s.Enabled = &v +// SetDayOfMonth sets the DayOfMonth field's value. +func (s *DescribeScheduledAuditOutput) SetDayOfMonth(v string) *DescribeScheduledAuditOutput { + s.DayOfMonth = &v return s } -type CreateAuthorizerInput struct { - _ struct{} `type:"structure"` +// SetDayOfWeek sets the DayOfWeek field's value. +func (s *DescribeScheduledAuditOutput) SetDayOfWeek(v string) *DescribeScheduledAuditOutput { + s.DayOfWeek = &v + return s +} - // The ARN of the authorizer's Lambda function. - // - // AuthorizerFunctionArn is a required field - AuthorizerFunctionArn *string `locationName:"authorizerFunctionArn" type:"string" required:"true"` +// SetFrequency sets the Frequency field's value. +func (s *DescribeScheduledAuditOutput) SetFrequency(v string) *DescribeScheduledAuditOutput { + s.Frequency = &v + return s +} - // The authorizer name. - // - // AuthorizerName is a required field - AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` +// SetScheduledAuditArn sets the ScheduledAuditArn field's value. +func (s *DescribeScheduledAuditOutput) SetScheduledAuditArn(v string) *DescribeScheduledAuditOutput { + s.ScheduledAuditArn = &v + return s +} - // The status of the create authorizer request. - Status *string `locationName:"status" type:"string" enum:"AuthorizerStatus"` +// SetScheduledAuditName sets the ScheduledAuditName field's value. +func (s *DescribeScheduledAuditOutput) SetScheduledAuditName(v string) *DescribeScheduledAuditOutput { + s.ScheduledAuditName = &v + return s +} - // The name of the token key used to extract the token from the HTTP headers. - // - // TokenKeyName is a required field - TokenKeyName *string `locationName:"tokenKeyName" min:"1" type:"string" required:"true"` +// SetTargetCheckNames sets the TargetCheckNames field's value. +func (s *DescribeScheduledAuditOutput) SetTargetCheckNames(v []*string) *DescribeScheduledAuditOutput { + s.TargetCheckNames = v + return s +} - // The public keys used to verify the digital signature returned by your custom - // authentication service. +type DescribeSecurityProfileInput struct { + _ struct{} `type:"structure"` + + // The name of the security profile whose information you want to get. // - // TokenSigningPublicKeys is a required field - TokenSigningPublicKeys map[string]*string `locationName:"tokenSigningPublicKeys" type:"map" required:"true"` + // SecurityProfileName is a required field + SecurityProfileName *string `location:"uri" locationName:"securityProfileName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CreateAuthorizerInput) String() string { +func (s DescribeSecurityProfileInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateAuthorizerInput) GoString() string { +func (s DescribeSecurityProfileInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateAuthorizerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateAuthorizerInput"} - if s.AuthorizerFunctionArn == nil { - invalidParams.Add(request.NewErrParamRequired("AuthorizerFunctionArn")) - } - if s.AuthorizerName == nil { - invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) - } - if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) - } - if s.TokenKeyName == nil { - invalidParams.Add(request.NewErrParamRequired("TokenKeyName")) - } - if s.TokenKeyName != nil && len(*s.TokenKeyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TokenKeyName", 1)) +func (s *DescribeSecurityProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeSecurityProfileInput"} + if s.SecurityProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileName")) } - if s.TokenSigningPublicKeys == nil { - invalidParams.Add(request.NewErrParamRequired("TokenSigningPublicKeys")) + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) } if invalidParams.Len() > 0 { @@ -13146,99 +22731,128 @@ func (s *CreateAuthorizerInput) Validate() error { return nil } -// SetAuthorizerFunctionArn sets the AuthorizerFunctionArn field's value. -func (s *CreateAuthorizerInput) SetAuthorizerFunctionArn(v string) *CreateAuthorizerInput { - s.AuthorizerFunctionArn = &v +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *DescribeSecurityProfileInput) SetSecurityProfileName(v string) *DescribeSecurityProfileInput { + s.SecurityProfileName = &v return s } -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *CreateAuthorizerInput) SetAuthorizerName(v string) *CreateAuthorizerInput { - s.AuthorizerName = &v - return s +type DescribeSecurityProfileOutput struct { + _ struct{} `type:"structure"` + + // Where the alerts are sent. (Alerts are always sent to the console.) + AlertTargets map[string]*AlertTarget `locationName:"alertTargets" type:"map"` + + // Specifies the behaviors that, when violated by a device (thing), cause an + // alert. + Behaviors []*Behavior `locationName:"behaviors" type:"list"` + + // The time the security profile was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The time the security profile was last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The ARN of the security profile. + SecurityProfileArn *string `locationName:"securityProfileArn" type:"string"` + + // A description of the security profile (associated with the security profile + // when it was created or updated). + SecurityProfileDescription *string `locationName:"securityProfileDescription" type:"string"` + + // The name of the security profile. + SecurityProfileName *string `locationName:"securityProfileName" min:"1" type:"string"` + + // The version of the security profile. A new version is generated whenever + // the security profile is updated. + Version *int64 `locationName:"version" type:"long"` } -// SetStatus sets the Status field's value. 
-func (s *CreateAuthorizerInput) SetStatus(v string) *CreateAuthorizerInput { - s.Status = &v - return s +// String returns the string representation +func (s DescribeSecurityProfileOutput) String() string { + return awsutil.Prettify(s) } -// SetTokenKeyName sets the TokenKeyName field's value. -func (s *CreateAuthorizerInput) SetTokenKeyName(v string) *CreateAuthorizerInput { - s.TokenKeyName = &v - return s +// GoString returns the string representation +func (s DescribeSecurityProfileOutput) GoString() string { + return s.String() } -// SetTokenSigningPublicKeys sets the TokenSigningPublicKeys field's value. -func (s *CreateAuthorizerInput) SetTokenSigningPublicKeys(v map[string]*string) *CreateAuthorizerInput { - s.TokenSigningPublicKeys = v +// SetAlertTargets sets the AlertTargets field's value. +func (s *DescribeSecurityProfileOutput) SetAlertTargets(v map[string]*AlertTarget) *DescribeSecurityProfileOutput { + s.AlertTargets = v return s } -type CreateAuthorizerOutput struct { - _ struct{} `type:"structure"` +// SetBehaviors sets the Behaviors field's value. +func (s *DescribeSecurityProfileOutput) SetBehaviors(v []*Behavior) *DescribeSecurityProfileOutput { + s.Behaviors = v + return s +} - // The authorizer ARN. - AuthorizerArn *string `locationName:"authorizerArn" type:"string"` +// SetCreationDate sets the CreationDate field's value. +func (s *DescribeSecurityProfileOutput) SetCreationDate(v time.Time) *DescribeSecurityProfileOutput { + s.CreationDate = &v + return s +} - // The authorizer's name. - AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *DescribeSecurityProfileOutput) SetLastModifiedDate(v time.Time) *DescribeSecurityProfileOutput { + s.LastModifiedDate = &v + return s } -// String returns the string representation -func (s CreateAuthorizerOutput) String() string { - return awsutil.Prettify(s) +// SetSecurityProfileArn sets the SecurityProfileArn field's value. +func (s *DescribeSecurityProfileOutput) SetSecurityProfileArn(v string) *DescribeSecurityProfileOutput { + s.SecurityProfileArn = &v + return s } -// GoString returns the string representation -func (s CreateAuthorizerOutput) GoString() string { - return s.String() +// SetSecurityProfileDescription sets the SecurityProfileDescription field's value. +func (s *DescribeSecurityProfileOutput) SetSecurityProfileDescription(v string) *DescribeSecurityProfileOutput { + s.SecurityProfileDescription = &v + return s } -// SetAuthorizerArn sets the AuthorizerArn field's value. -func (s *CreateAuthorizerOutput) SetAuthorizerArn(v string) *CreateAuthorizerOutput { - s.AuthorizerArn = &v +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *DescribeSecurityProfileOutput) SetSecurityProfileName(v string) *DescribeSecurityProfileOutput { + s.SecurityProfileName = &v return s } -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *CreateAuthorizerOutput) SetAuthorizerName(v string) *CreateAuthorizerOutput { - s.AuthorizerName = &v +// SetVersion sets the Version field's value. +func (s *DescribeSecurityProfileOutput) SetVersion(v int64) *DescribeSecurityProfileOutput { + s.Version = &v return s } -// The input for the CreateCertificateFromCsr operation. -type CreateCertificateFromCsrInput struct { +type DescribeStreamInput struct { _ struct{} `type:"structure"` - // The certificate signing request (CSR). + // The stream ID. 
// - // CertificateSigningRequest is a required field - CertificateSigningRequest *string `locationName:"certificateSigningRequest" min:"1" type:"string" required:"true"` - - // Specifies whether the certificate is active. - SetAsActive *bool `location:"querystring" locationName:"setAsActive" type:"boolean"` + // StreamId is a required field + StreamId *string `location:"uri" locationName:"streamId" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CreateCertificateFromCsrInput) String() string { +func (s DescribeStreamInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateCertificateFromCsrInput) GoString() string { +func (s DescribeStreamInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateCertificateFromCsrInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateCertificateFromCsrInput"} - if s.CertificateSigningRequest == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateSigningRequest")) +func (s *DescribeStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStreamInput"} + if s.StreamId == nil { + invalidParams.Add(request.NewErrParamRequired("StreamId")) } - if s.CertificateSigningRequest != nil && len(*s.CertificateSigningRequest) < 1 { - invalidParams.Add(request.NewErrParamMinLen("CertificateSigningRequest", 1)) + if s.StreamId != nil && len(*s.StreamId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamId", 1)) } if invalidParams.Len() > 0 { @@ -13247,403 +22861,337 @@ func (s *CreateCertificateFromCsrInput) Validate() error { return nil } -// SetCertificateSigningRequest sets the CertificateSigningRequest field's value. -func (s *CreateCertificateFromCsrInput) SetCertificateSigningRequest(v string) *CreateCertificateFromCsrInput { - s.CertificateSigningRequest = &v +// SetStreamId sets the StreamId field's value. +func (s *DescribeStreamInput) SetStreamId(v string) *DescribeStreamInput { + s.StreamId = &v return s } -// SetSetAsActive sets the SetAsActive field's value. -func (s *CreateCertificateFromCsrInput) SetSetAsActive(v bool) *CreateCertificateFromCsrInput { - s.SetAsActive = &v - return s +type DescribeStreamOutput struct { + _ struct{} `type:"structure"` + + // Information about the stream. + StreamInfo *StreamInfo `locationName:"streamInfo" type:"structure"` } -// The output from the CreateCertificateFromCsr operation. -type CreateCertificateFromCsrOutput struct { - _ struct{} `type:"structure"` +// String returns the string representation +func (s DescribeStreamOutput) String() string { + return awsutil.Prettify(s) +} - // The Amazon Resource Name (ARN) of the certificate. You can use the ARN as - // a principal for policy operations. - CertificateArn *string `locationName:"certificateArn" type:"string"` +// GoString returns the string representation +func (s DescribeStreamOutput) GoString() string { + return s.String() +} - // The ID of the certificate. Certificate management operations only take a - // certificateId. - CertificateId *string `locationName:"certificateId" min:"64" type:"string"` +// SetStreamInfo sets the StreamInfo field's value. +func (s *DescribeStreamOutput) SetStreamInfo(v *StreamInfo) *DescribeStreamOutput { + s.StreamInfo = v + return s +} - // The certificate data, in PEM format. 
- CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` +type DescribeThingGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the thing group. + // + // ThingGroupName is a required field + ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CreateCertificateFromCsrOutput) String() string { +func (s DescribeThingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateCertificateFromCsrOutput) GoString() string { +func (s DescribeThingGroupInput) GoString() string { return s.String() } -// SetCertificateArn sets the CertificateArn field's value. -func (s *CreateCertificateFromCsrOutput) SetCertificateArn(v string) *CreateCertificateFromCsrOutput { - s.CertificateArn = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeThingGroupInput"} + if s.ThingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) + } + if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + } -// SetCertificateId sets the CertificateId field's value. -func (s *CreateCertificateFromCsrOutput) SetCertificateId(v string) *CreateCertificateFromCsrOutput { - s.CertificateId = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetCertificatePem sets the CertificatePem field's value. -func (s *CreateCertificateFromCsrOutput) SetCertificatePem(v string) *CreateCertificateFromCsrOutput { - s.CertificatePem = &v +// SetThingGroupName sets the ThingGroupName field's value. +func (s *DescribeThingGroupInput) SetThingGroupName(v string) *DescribeThingGroupInput { + s.ThingGroupName = &v return s } -type CreateJobInput struct { +type DescribeThingGroupOutput struct { _ struct{} `type:"structure"` - // A short text description of the job. - Description *string `locationName:"description" type:"string"` + // The dynamic thing group index name. + IndexName *string `locationName:"indexName" min:"1" type:"string"` - // The job document. - Document *string `locationName:"document" type:"string"` + // The dynamic thing group search query string. + QueryString *string `locationName:"queryString" min:"1" type:"string"` - // Parameters for the job document. - DocumentParameters map[string]*string `locationName:"documentParameters" type:"map"` + // The dynamic thing group query version. + QueryVersion *string `locationName:"queryVersion" type:"string"` - // An S3 link to the job document. - DocumentSource *string `locationName:"documentSource" min:"1" type:"string"` + // The dynamic thing group status. + Status *string `locationName:"status" type:"string" enum:"DynamicGroupStatus"` - // Allows you to create a staged rollout of the job. - JobExecutionsRolloutConfig *JobExecutionsRolloutConfig `locationName:"jobExecutionsRolloutConfig" type:"structure"` + // The thing group ARN. + ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` - // A job identifier which must be unique for your AWS account. We recommend - // using a UUID. Alpha-numeric characters, "-" and "_" are valid for use here. 
- // - // JobId is a required field - JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + // The thing group ID. + ThingGroupId *string `locationName:"thingGroupId" min:"1" type:"string"` - // Configuration information for pre-signed S3 URLs. - PresignedUrlConfig *PresignedUrlConfig `locationName:"presignedUrlConfig" type:"structure"` + // Thing group metadata. + ThingGroupMetadata *ThingGroupMetadata `locationName:"thingGroupMetadata" type:"structure"` - // Specifies whether the job will continue to run (CONTINUOUS), or will be complete - // after all those things specified as targets have completed the job (SNAPSHOT). - // If continuous, the job may also be run on a thing when a change is detected - // in a target. For example, a job will run on a thing when the thing is added - // to a target group, even after the job was completed by all things originally - // in the group. - TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + // The name of the thing group. + ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` - // A list of things and thing groups to which the job should be sent. - // - // Targets is a required field - Targets []*string `locationName:"targets" min:"1" type:"list" required:"true"` + // The thing group properties. + ThingGroupProperties *ThingGroupProperties `locationName:"thingGroupProperties" type:"structure"` + + // The version of the thing group. + Version *int64 `locationName:"version" type:"long"` } // String returns the string representation -func (s CreateJobInput) String() string { +func (s DescribeThingGroupOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateJobInput) GoString() string { +func (s DescribeThingGroupOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateJobInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateJobInput"} - if s.DocumentSource != nil && len(*s.DocumentSource) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DocumentSource", 1)) - } - if s.JobId == nil { - invalidParams.Add(request.NewErrParamRequired("JobId")) - } - if s.JobId != nil && len(*s.JobId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) - } - if s.Targets == nil { - invalidParams.Add(request.NewErrParamRequired("Targets")) - } - if s.Targets != nil && len(s.Targets) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Targets", 1)) - } - if s.JobExecutionsRolloutConfig != nil { - if err := s.JobExecutionsRolloutConfig.Validate(); err != nil { - invalidParams.AddNested("JobExecutionsRolloutConfig", err.(request.ErrInvalidParams)) - } - } - if s.PresignedUrlConfig != nil { - if err := s.PresignedUrlConfig.Validate(); err != nil { - invalidParams.AddNested("PresignedUrlConfig", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetIndexName sets the IndexName field's value. +func (s *DescribeThingGroupOutput) SetIndexName(v string) *DescribeThingGroupOutput { + s.IndexName = &v + return s } -// SetDescription sets the Description field's value. -func (s *CreateJobInput) SetDescription(v string) *CreateJobInput { - s.Description = &v +// SetQueryString sets the QueryString field's value. 
+func (s *DescribeThingGroupOutput) SetQueryString(v string) *DescribeThingGroupOutput { + s.QueryString = &v return s } -// SetDocument sets the Document field's value. -func (s *CreateJobInput) SetDocument(v string) *CreateJobInput { - s.Document = &v +// SetQueryVersion sets the QueryVersion field's value. +func (s *DescribeThingGroupOutput) SetQueryVersion(v string) *DescribeThingGroupOutput { + s.QueryVersion = &v return s } -// SetDocumentParameters sets the DocumentParameters field's value. -func (s *CreateJobInput) SetDocumentParameters(v map[string]*string) *CreateJobInput { - s.DocumentParameters = v +// SetStatus sets the Status field's value. +func (s *DescribeThingGroupOutput) SetStatus(v string) *DescribeThingGroupOutput { + s.Status = &v return s } -// SetDocumentSource sets the DocumentSource field's value. -func (s *CreateJobInput) SetDocumentSource(v string) *CreateJobInput { - s.DocumentSource = &v +// SetThingGroupArn sets the ThingGroupArn field's value. +func (s *DescribeThingGroupOutput) SetThingGroupArn(v string) *DescribeThingGroupOutput { + s.ThingGroupArn = &v return s } -// SetJobExecutionsRolloutConfig sets the JobExecutionsRolloutConfig field's value. -func (s *CreateJobInput) SetJobExecutionsRolloutConfig(v *JobExecutionsRolloutConfig) *CreateJobInput { - s.JobExecutionsRolloutConfig = v +// SetThingGroupId sets the ThingGroupId field's value. +func (s *DescribeThingGroupOutput) SetThingGroupId(v string) *DescribeThingGroupOutput { + s.ThingGroupId = &v return s } -// SetJobId sets the JobId field's value. -func (s *CreateJobInput) SetJobId(v string) *CreateJobInput { - s.JobId = &v +// SetThingGroupMetadata sets the ThingGroupMetadata field's value. +func (s *DescribeThingGroupOutput) SetThingGroupMetadata(v *ThingGroupMetadata) *DescribeThingGroupOutput { + s.ThingGroupMetadata = v return s } -// SetPresignedUrlConfig sets the PresignedUrlConfig field's value. -func (s *CreateJobInput) SetPresignedUrlConfig(v *PresignedUrlConfig) *CreateJobInput { - s.PresignedUrlConfig = v +// SetThingGroupName sets the ThingGroupName field's value. +func (s *DescribeThingGroupOutput) SetThingGroupName(v string) *DescribeThingGroupOutput { + s.ThingGroupName = &v return s } -// SetTargetSelection sets the TargetSelection field's value. -func (s *CreateJobInput) SetTargetSelection(v string) *CreateJobInput { - s.TargetSelection = &v +// SetThingGroupProperties sets the ThingGroupProperties field's value. +func (s *DescribeThingGroupOutput) SetThingGroupProperties(v *ThingGroupProperties) *DescribeThingGroupOutput { + s.ThingGroupProperties = v return s } -// SetTargets sets the Targets field's value. -func (s *CreateJobInput) SetTargets(v []*string) *CreateJobInput { - s.Targets = v +// SetVersion sets the Version field's value. +func (s *DescribeThingGroupOutput) SetVersion(v int64) *DescribeThingGroupOutput { + s.Version = &v return s } -type CreateJobOutput struct { +// The input for the DescribeThing operation. +type DescribeThingInput struct { _ struct{} `type:"structure"` - // The job description. - Description *string `locationName:"description" type:"string"` - - // The job ARN. - JobArn *string `locationName:"jobArn" type:"string"` - - // The unique identifier you assigned to this job. - JobId *string `locationName:"jobId" min:"1" type:"string"` + // The name of the thing. 
+ // + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CreateJobOutput) String() string { +func (s DescribeThingInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateJobOutput) GoString() string { +func (s DescribeThingInput) GoString() string { return s.String() } -// SetDescription sets the Description field's value. -func (s *CreateJobOutput) SetDescription(v string) *CreateJobOutput { - s.Description = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeThingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeThingInput"} + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } -// SetJobArn sets the JobArn field's value. -func (s *CreateJobOutput) SetJobArn(v string) *CreateJobOutput { - s.JobArn = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetJobId sets the JobId field's value. -func (s *CreateJobOutput) SetJobId(v string) *CreateJobOutput { - s.JobId = &v +// SetThingName sets the ThingName field's value. +func (s *DescribeThingInput) SetThingName(v string) *DescribeThingInput { + s.ThingName = &v return s } -// The input for the CreateKeysAndCertificate operation. -type CreateKeysAndCertificateInput struct { +// The output from the DescribeThing operation. +type DescribeThingOutput struct { _ struct{} `type:"structure"` - // Specifies whether the certificate is active. - SetAsActive *bool `location:"querystring" locationName:"setAsActive" type:"boolean"` -} - -// String returns the string representation -func (s CreateKeysAndCertificateInput) String() string { - return awsutil.Prettify(s) -} + // The thing attributes. + Attributes map[string]*string `locationName:"attributes" type:"map"` -// GoString returns the string representation -func (s CreateKeysAndCertificateInput) GoString() string { - return s.String() -} + // The name of the billing group the thing belongs to. + BillingGroupName *string `locationName:"billingGroupName" min:"1" type:"string"` -// SetSetAsActive sets the SetAsActive field's value. -func (s *CreateKeysAndCertificateInput) SetSetAsActive(v bool) *CreateKeysAndCertificateInput { - s.SetAsActive = &v - return s -} + // The default client ID. + DefaultClientId *string `locationName:"defaultClientId" type:"string"` -// The output of the CreateKeysAndCertificate operation. -type CreateKeysAndCertificateOutput struct { - _ struct{} `type:"structure"` + // The ARN of the thing to describe. + ThingArn *string `locationName:"thingArn" type:"string"` - // The ARN of the certificate. - CertificateArn *string `locationName:"certificateArn" type:"string"` + // The ID of the thing to describe. + ThingId *string `locationName:"thingId" type:"string"` - // The ID of the certificate. AWS IoT issues a default subject name for the - // certificate (for example, AWS IoT Certificate). - CertificateId *string `locationName:"certificateId" min:"64" type:"string"` + // The name of the thing. + ThingName *string `locationName:"thingName" min:"1" type:"string"` - // The certificate data, in PEM format. 
- CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` + // The thing type name. + ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` - // The generated key pair. - KeyPair *KeyPair `locationName:"keyPair" type:"structure"` + // The current version of the thing record in the registry. + // + // To avoid unintentional changes to the information in the registry, you can + // pass the version information in the expectedVersion parameter of the UpdateThing + // and DeleteThing calls. + Version *int64 `locationName:"version" type:"long"` } // String returns the string representation -func (s CreateKeysAndCertificateOutput) String() string { +func (s DescribeThingOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateKeysAndCertificateOutput) GoString() string { +func (s DescribeThingOutput) GoString() string { return s.String() } -// SetCertificateArn sets the CertificateArn field's value. -func (s *CreateKeysAndCertificateOutput) SetCertificateArn(v string) *CreateKeysAndCertificateOutput { - s.CertificateArn = &v +// SetAttributes sets the Attributes field's value. +func (s *DescribeThingOutput) SetAttributes(v map[string]*string) *DescribeThingOutput { + s.Attributes = v + return s +} + +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *DescribeThingOutput) SetBillingGroupName(v string) *DescribeThingOutput { + s.BillingGroupName = &v + return s +} + +// SetDefaultClientId sets the DefaultClientId field's value. +func (s *DescribeThingOutput) SetDefaultClientId(v string) *DescribeThingOutput { + s.DefaultClientId = &v + return s +} + +// SetThingArn sets the ThingArn field's value. +func (s *DescribeThingOutput) SetThingArn(v string) *DescribeThingOutput { + s.ThingArn = &v return s } -// SetCertificateId sets the CertificateId field's value. -func (s *CreateKeysAndCertificateOutput) SetCertificateId(v string) *CreateKeysAndCertificateOutput { - s.CertificateId = &v +// SetThingId sets the ThingId field's value. +func (s *DescribeThingOutput) SetThingId(v string) *DescribeThingOutput { + s.ThingId = &v return s } -// SetCertificatePem sets the CertificatePem field's value. -func (s *CreateKeysAndCertificateOutput) SetCertificatePem(v string) *CreateKeysAndCertificateOutput { - s.CertificatePem = &v +// SetThingName sets the ThingName field's value. +func (s *DescribeThingOutput) SetThingName(v string) *DescribeThingOutput { + s.ThingName = &v return s } -// SetKeyPair sets the KeyPair field's value. -func (s *CreateKeysAndCertificateOutput) SetKeyPair(v *KeyPair) *CreateKeysAndCertificateOutput { - s.KeyPair = v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *DescribeThingOutput) SetThingTypeName(v string) *DescribeThingOutput { + s.ThingTypeName = &v return s } -type CreateOTAUpdateInput struct { - _ struct{} `type:"structure"` - - // A list of additional OTA update parameters which are name-value pairs. - AdditionalParameters map[string]*string `locationName:"additionalParameters" type:"map"` - - // The description of the OTA update. - Description *string `locationName:"description" type:"string"` - - // The files to be streamed by the OTA update. - // - // Files is a required field - Files []*OTAUpdateFile `locationName:"files" min:"1" type:"list" required:"true"` - - // The ID of the OTA update to be created. 
- // - // OtaUpdateId is a required field - OtaUpdateId *string `location:"uri" locationName:"otaUpdateId" min:"1" type:"string" required:"true"` - - // The IAM role that allows access to the AWS IoT Jobs service. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` +// SetVersion sets the Version field's value. +func (s *DescribeThingOutput) SetVersion(v int64) *DescribeThingOutput { + s.Version = &v + return s +} - // Specifies whether the update will continue to run (CONTINUOUS), or will be - // complete after all the things specified as targets have completed the update - // (SNAPSHOT). If continuous, the update may also be run on a thing when a change - // is detected in a target. For example, an update will run on a thing when - // the thing is added to a target group, even after the update was completed - // by all things originally in the group. Valid values: CONTINUOUS | SNAPSHOT. - TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` +type DescribeThingRegistrationTaskInput struct { + _ struct{} `type:"structure"` - // The targeted devices to receive OTA updates. + // The task ID. // - // Targets is a required field - Targets []*string `locationName:"targets" min:"1" type:"list" required:"true"` + // TaskId is a required field + TaskId *string `location:"uri" locationName:"taskId" type:"string" required:"true"` } // String returns the string representation -func (s CreateOTAUpdateInput) String() string { +func (s DescribeThingRegistrationTaskInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateOTAUpdateInput) GoString() string { +func (s DescribeThingRegistrationTaskInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateOTAUpdateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateOTAUpdateInput"} - if s.Files == nil { - invalidParams.Add(request.NewErrParamRequired("Files")) - } - if s.Files != nil && len(s.Files) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Files", 1)) - } - if s.OtaUpdateId == nil { - invalidParams.Add(request.NewErrParamRequired("OtaUpdateId")) - } - if s.OtaUpdateId != nil && len(*s.OtaUpdateId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("OtaUpdateId", 1)) - } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) - } - if s.RoleArn != nil && len(*s.RoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) - } - if s.Targets == nil { - invalidParams.Add(request.NewErrParamRequired("Targets")) - } - if s.Targets != nil && len(s.Targets) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Targets", 1)) - } - if s.Files != nil { - for i, v := range s.Files { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Files", i), err.(request.ErrInvalidParams)) - } - } +func (s *DescribeThingRegistrationTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeThingRegistrationTaskInput"} + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) } if invalidParams.Len() > 0 { @@ -13652,144 +23200,162 @@ func (s *CreateOTAUpdateInput) Validate() error { return nil } -// SetAdditionalParameters sets the AdditionalParameters field's value. 
-func (s *CreateOTAUpdateInput) SetAdditionalParameters(v map[string]*string) *CreateOTAUpdateInput { - s.AdditionalParameters = v +// SetTaskId sets the TaskId field's value. +func (s *DescribeThingRegistrationTaskInput) SetTaskId(v string) *DescribeThingRegistrationTaskInput { + s.TaskId = &v return s } -// SetDescription sets the Description field's value. -func (s *CreateOTAUpdateInput) SetDescription(v string) *CreateOTAUpdateInput { - s.Description = &v - return s +type DescribeThingRegistrationTaskOutput struct { + _ struct{} `type:"structure"` + + // The task creation date. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The number of things that failed to be provisioned. + FailureCount *int64 `locationName:"failureCount" type:"integer"` + + // The S3 bucket that contains the input file. + InputFileBucket *string `locationName:"inputFileBucket" min:"3" type:"string"` + + // The input file key. + InputFileKey *string `locationName:"inputFileKey" min:"1" type:"string"` + + // The date when the task was last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The message. + Message *string `locationName:"message" type:"string"` + + // The progress of the bulk provisioning task expressed as a percentage. + PercentageProgress *int64 `locationName:"percentageProgress" type:"integer"` + + // The role ARN that grants access to the input file bucket. + RoleArn *string `locationName:"roleArn" min:"20" type:"string"` + + // The status of the bulk thing provisioning task. + Status *string `locationName:"status" type:"string" enum:"Status"` + + // The number of things successfully provisioned. + SuccessCount *int64 `locationName:"successCount" type:"integer"` + + // The task ID. + TaskId *string `locationName:"taskId" type:"string"` + + // The task's template. + TemplateBody *string `locationName:"templateBody" type:"string"` } -// SetFiles sets the Files field's value. -func (s *CreateOTAUpdateInput) SetFiles(v []*OTAUpdateFile) *CreateOTAUpdateInput { - s.Files = v - return s +// String returns the string representation +func (s DescribeThingRegistrationTaskOutput) String() string { + return awsutil.Prettify(s) } -// SetOtaUpdateId sets the OtaUpdateId field's value. -func (s *CreateOTAUpdateInput) SetOtaUpdateId(v string) *CreateOTAUpdateInput { - s.OtaUpdateId = &v - return s +// GoString returns the string representation +func (s DescribeThingRegistrationTaskOutput) GoString() string { + return s.String() } -// SetRoleArn sets the RoleArn field's value. -func (s *CreateOTAUpdateInput) SetRoleArn(v string) *CreateOTAUpdateInput { - s.RoleArn = &v +// SetCreationDate sets the CreationDate field's value. +func (s *DescribeThingRegistrationTaskOutput) SetCreationDate(v time.Time) *DescribeThingRegistrationTaskOutput { + s.CreationDate = &v return s } -// SetTargetSelection sets the TargetSelection field's value. -func (s *CreateOTAUpdateInput) SetTargetSelection(v string) *CreateOTAUpdateInput { - s.TargetSelection = &v +// SetFailureCount sets the FailureCount field's value. +func (s *DescribeThingRegistrationTaskOutput) SetFailureCount(v int64) *DescribeThingRegistrationTaskOutput { + s.FailureCount = &v return s } -// SetTargets sets the Targets field's value. -func (s *CreateOTAUpdateInput) SetTargets(v []*string) *CreateOTAUpdateInput { - s.Targets = v +// SetInputFileBucket sets the InputFileBucket field's value. 
+func (s *DescribeThingRegistrationTaskOutput) SetInputFileBucket(v string) *DescribeThingRegistrationTaskOutput { + s.InputFileBucket = &v return s } -type CreateOTAUpdateOutput struct { - _ struct{} `type:"structure"` - - // The AWS IoT job ARN associated with the OTA update. - AwsIotJobArn *string `locationName:"awsIotJobArn" type:"string"` - - // The AWS IoT job ID associated with the OTA update. - AwsIotJobId *string `locationName:"awsIotJobId" type:"string"` - - // The OTA update ARN. - OtaUpdateArn *string `locationName:"otaUpdateArn" type:"string"` - - // The OTA update ID. - OtaUpdateId *string `locationName:"otaUpdateId" min:"1" type:"string"` +// SetInputFileKey sets the InputFileKey field's value. +func (s *DescribeThingRegistrationTaskOutput) SetInputFileKey(v string) *DescribeThingRegistrationTaskOutput { + s.InputFileKey = &v + return s +} - // The OTA update status. - OtaUpdateStatus *string `locationName:"otaUpdateStatus" type:"string" enum:"OTAUpdateStatus"` +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *DescribeThingRegistrationTaskOutput) SetLastModifiedDate(v time.Time) *DescribeThingRegistrationTaskOutput { + s.LastModifiedDate = &v + return s } -// String returns the string representation -func (s CreateOTAUpdateOutput) String() string { - return awsutil.Prettify(s) +// SetMessage sets the Message field's value. +func (s *DescribeThingRegistrationTaskOutput) SetMessage(v string) *DescribeThingRegistrationTaskOutput { + s.Message = &v + return s } -// GoString returns the string representation -func (s CreateOTAUpdateOutput) GoString() string { - return s.String() +// SetPercentageProgress sets the PercentageProgress field's value. +func (s *DescribeThingRegistrationTaskOutput) SetPercentageProgress(v int64) *DescribeThingRegistrationTaskOutput { + s.PercentageProgress = &v + return s } -// SetAwsIotJobArn sets the AwsIotJobArn field's value. -func (s *CreateOTAUpdateOutput) SetAwsIotJobArn(v string) *CreateOTAUpdateOutput { - s.AwsIotJobArn = &v +// SetRoleArn sets the RoleArn field's value. +func (s *DescribeThingRegistrationTaskOutput) SetRoleArn(v string) *DescribeThingRegistrationTaskOutput { + s.RoleArn = &v return s } -// SetAwsIotJobId sets the AwsIotJobId field's value. -func (s *CreateOTAUpdateOutput) SetAwsIotJobId(v string) *CreateOTAUpdateOutput { - s.AwsIotJobId = &v +// SetStatus sets the Status field's value. +func (s *DescribeThingRegistrationTaskOutput) SetStatus(v string) *DescribeThingRegistrationTaskOutput { + s.Status = &v return s } -// SetOtaUpdateArn sets the OtaUpdateArn field's value. -func (s *CreateOTAUpdateOutput) SetOtaUpdateArn(v string) *CreateOTAUpdateOutput { - s.OtaUpdateArn = &v +// SetSuccessCount sets the SuccessCount field's value. +func (s *DescribeThingRegistrationTaskOutput) SetSuccessCount(v int64) *DescribeThingRegistrationTaskOutput { + s.SuccessCount = &v return s } -// SetOtaUpdateId sets the OtaUpdateId field's value. -func (s *CreateOTAUpdateOutput) SetOtaUpdateId(v string) *CreateOTAUpdateOutput { - s.OtaUpdateId = &v +// SetTaskId sets the TaskId field's value. +func (s *DescribeThingRegistrationTaskOutput) SetTaskId(v string) *DescribeThingRegistrationTaskOutput { + s.TaskId = &v return s } -// SetOtaUpdateStatus sets the OtaUpdateStatus field's value. -func (s *CreateOTAUpdateOutput) SetOtaUpdateStatus(v string) *CreateOTAUpdateOutput { - s.OtaUpdateStatus = &v +// SetTemplateBody sets the TemplateBody field's value. 
+func (s *DescribeThingRegistrationTaskOutput) SetTemplateBody(v string) *DescribeThingRegistrationTaskOutput { + s.TemplateBody = &v return s } -// The input for the CreatePolicy operation. -type CreatePolicyInput struct { +// The input for the DescribeThingType operation. +type DescribeThingTypeInput struct { _ struct{} `type:"structure"` - // The JSON document that describes the policy. policyDocument must have a minimum - // length of 1, with a maximum length of 2048, excluding whitespace. - // - // PolicyDocument is a required field - PolicyDocument *string `locationName:"policyDocument" type:"string" required:"true"` - - // The policy name. + // The name of the thing type. // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // ThingTypeName is a required field + ThingTypeName *string `location:"uri" locationName:"thingTypeName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CreatePolicyInput) String() string { +func (s DescribeThingTypeInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreatePolicyInput) GoString() string { +func (s DescribeThingTypeInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreatePolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreatePolicyInput"} - if s.PolicyDocument == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) - } - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) +func (s *DescribeThingTypeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeThingTypeInput"} + if s.ThingTypeName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingTypeName")) } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) } if invalidParams.Len() > 0 { @@ -13798,111 +23364,100 @@ func (s *CreatePolicyInput) Validate() error { return nil } -// SetPolicyDocument sets the PolicyDocument field's value. -func (s *CreatePolicyInput) SetPolicyDocument(v string) *CreatePolicyInput { - s.PolicyDocument = &v - return s -} - -// SetPolicyName sets the PolicyName field's value. -func (s *CreatePolicyInput) SetPolicyName(v string) *CreatePolicyInput { - s.PolicyName = &v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *DescribeThingTypeInput) SetThingTypeName(v string) *DescribeThingTypeInput { + s.ThingTypeName = &v return s } -// The output from the CreatePolicy operation. -type CreatePolicyOutput struct { +// The output for the DescribeThingType operation. +type DescribeThingTypeOutput struct { _ struct{} `type:"structure"` - // The policy ARN. - PolicyArn *string `locationName:"policyArn" type:"string"` + // The thing type ARN. + ThingTypeArn *string `locationName:"thingTypeArn" type:"string"` - // The JSON document that describes the policy. - PolicyDocument *string `locationName:"policyDocument" type:"string"` + // The thing type ID. + ThingTypeId *string `locationName:"thingTypeId" type:"string"` - // The policy name. 
- PolicyName *string `locationName:"policyName" min:"1" type:"string"` + // The ThingTypeMetadata contains additional information about the thing type + // including: creation date and time, a value indicating whether the thing type + // is deprecated, and a date and time when it was deprecated. + ThingTypeMetadata *ThingTypeMetadata `locationName:"thingTypeMetadata" type:"structure"` - // The policy version ID. - PolicyVersionId *string `locationName:"policyVersionId" type:"string"` + // The name of the thing type. + ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` + + // The ThingTypeProperties contains information about the thing type including + // description, and a list of searchable thing attribute names. + ThingTypeProperties *ThingTypeProperties `locationName:"thingTypeProperties" type:"structure"` } // String returns the string representation -func (s CreatePolicyOutput) String() string { +func (s DescribeThingTypeOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreatePolicyOutput) GoString() string { +func (s DescribeThingTypeOutput) GoString() string { return s.String() } -// SetPolicyArn sets the PolicyArn field's value. -func (s *CreatePolicyOutput) SetPolicyArn(v string) *CreatePolicyOutput { - s.PolicyArn = &v +// SetThingTypeArn sets the ThingTypeArn field's value. +func (s *DescribeThingTypeOutput) SetThingTypeArn(v string) *DescribeThingTypeOutput { + s.ThingTypeArn = &v return s } -// SetPolicyDocument sets the PolicyDocument field's value. -func (s *CreatePolicyOutput) SetPolicyDocument(v string) *CreatePolicyOutput { - s.PolicyDocument = &v +// SetThingTypeId sets the ThingTypeId field's value. +func (s *DescribeThingTypeOutput) SetThingTypeId(v string) *DescribeThingTypeOutput { + s.ThingTypeId = &v return s } -// SetPolicyName sets the PolicyName field's value. -func (s *CreatePolicyOutput) SetPolicyName(v string) *CreatePolicyOutput { - s.PolicyName = &v +// SetThingTypeMetadata sets the ThingTypeMetadata field's value. +func (s *DescribeThingTypeOutput) SetThingTypeMetadata(v *ThingTypeMetadata) *DescribeThingTypeOutput { + s.ThingTypeMetadata = v return s } -// SetPolicyVersionId sets the PolicyVersionId field's value. -func (s *CreatePolicyOutput) SetPolicyVersionId(v string) *CreatePolicyOutput { - s.PolicyVersionId = &v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *DescribeThingTypeOutput) SetThingTypeName(v string) *DescribeThingTypeOutput { + s.ThingTypeName = &v return s } -// The input for the CreatePolicyVersion operation. -type CreatePolicyVersionInput struct { - _ struct{} `type:"structure"` - - // The JSON document that describes the policy. Minimum length of 1. Maximum - // length of 2048, excluding whitespace. - // - // PolicyDocument is a required field - PolicyDocument *string `locationName:"policyDocument" type:"string" required:"true"` +// SetThingTypeProperties sets the ThingTypeProperties field's value. +func (s *DescribeThingTypeOutput) SetThingTypeProperties(v *ThingTypeProperties) *DescribeThingTypeOutput { + s.ThingTypeProperties = v + return s +} - // The policy name. - // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` +// Describes the location of the updated firmware. +type Destination struct { + _ struct{} `type:"structure"` - // Specifies whether the policy version is set as the default. 
When this parameter - // is true, the new policy version becomes the operative version (that is, the - // version that is in effect for the certificates to which the policy is attached). - SetAsDefault *bool `location:"querystring" locationName:"setAsDefault" type:"boolean"` + // Describes the location in S3 of the updated firmware. + S3Destination *S3Destination `locationName:"s3Destination" type:"structure"` } // String returns the string representation -func (s CreatePolicyVersionInput) String() string { +func (s Destination) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreatePolicyVersionInput) GoString() string { +func (s Destination) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreatePolicyVersionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreatePolicyVersionInput"} - if s.PolicyDocument == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyDocument")) - } - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) +func (s *Destination) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Destination"} + if s.S3Destination != nil { + if err := s.S3Destination.Validate(); err != nil { + invalidParams.AddNested("S3Destination", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -13911,241 +23466,120 @@ func (s *CreatePolicyVersionInput) Validate() error { return nil } -// SetPolicyDocument sets the PolicyDocument field's value. -func (s *CreatePolicyVersionInput) SetPolicyDocument(v string) *CreatePolicyVersionInput { - s.PolicyDocument = &v - return s -} - -// SetPolicyName sets the PolicyName field's value. -func (s *CreatePolicyVersionInput) SetPolicyName(v string) *CreatePolicyVersionInput { - s.PolicyName = &v - return s -} - -// SetSetAsDefault sets the SetAsDefault field's value. -func (s *CreatePolicyVersionInput) SetSetAsDefault(v bool) *CreatePolicyVersionInput { - s.SetAsDefault = &v - return s -} - -// The output of the CreatePolicyVersion operation. -type CreatePolicyVersionOutput struct { - _ struct{} `type:"structure"` - - // Specifies whether the policy version is the default. - IsDefaultVersion *bool `locationName:"isDefaultVersion" type:"boolean"` - - // The policy ARN. - PolicyArn *string `locationName:"policyArn" type:"string"` - - // The JSON document that describes the policy. - PolicyDocument *string `locationName:"policyDocument" type:"string"` - - // The policy version ID. - PolicyVersionId *string `locationName:"policyVersionId" type:"string"` -} - -// String returns the string representation -func (s CreatePolicyVersionOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s CreatePolicyVersionOutput) GoString() string { - return s.String() -} - -// SetIsDefaultVersion sets the IsDefaultVersion field's value. -func (s *CreatePolicyVersionOutput) SetIsDefaultVersion(v bool) *CreatePolicyVersionOutput { - s.IsDefaultVersion = &v - return s -} - -// SetPolicyArn sets the PolicyArn field's value. -func (s *CreatePolicyVersionOutput) SetPolicyArn(v string) *CreatePolicyVersionOutput { - s.PolicyArn = &v - return s -} - -// SetPolicyDocument sets the PolicyDocument field's value. 
-func (s *CreatePolicyVersionOutput) SetPolicyDocument(v string) *CreatePolicyVersionOutput { - s.PolicyDocument = &v - return s -} - -// SetPolicyVersionId sets the PolicyVersionId field's value. -func (s *CreatePolicyVersionOutput) SetPolicyVersionId(v string) *CreatePolicyVersionOutput { - s.PolicyVersionId = &v +// SetS3Destination sets the S3Destination field's value. +func (s *Destination) SetS3Destination(v *S3Destination) *Destination { + s.S3Destination = v return s } -type CreateRoleAliasInput struct { +type DetachPolicyInput struct { _ struct{} `type:"structure"` - // How long (in seconds) the credentials will be valid. - CredentialDurationSeconds *int64 `locationName:"credentialDurationSeconds" min:"900" type:"integer"` - - // The role alias that points to a role ARN. This allows you to change the role - // without having to update the device. + // The policy to detach. // - // RoleAlias is a required field - RoleAlias *string `location:"uri" locationName:"roleAlias" min:"1" type:"string" required:"true"` + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` - // The role ARN. + // The target from which the policy will be detached. // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` + // Target is a required field + Target *string `locationName:"target" type:"string" required:"true"` } // String returns the string representation -func (s CreateRoleAliasInput) String() string { +func (s DetachPolicyInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateRoleAliasInput) GoString() string { +func (s DetachPolicyInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateRoleAliasInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateRoleAliasInput"} - if s.CredentialDurationSeconds != nil && *s.CredentialDurationSeconds < 900 { - invalidParams.Add(request.NewErrParamMinValue("CredentialDurationSeconds", 900)) - } - if s.RoleAlias == nil { - invalidParams.Add(request.NewErrParamRequired("RoleAlias")) - } - if s.RoleAlias != nil && len(*s.RoleAlias) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RoleAlias", 1)) +func (s *DetachPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachPolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) } - if s.RoleArn != nil && len(*s.RoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + if s.Target == nil { + invalidParams.Add(request.NewErrParamRequired("Target")) } if invalidParams.Len() > 0 { return invalidParams - } - return nil -} - -// SetCredentialDurationSeconds sets the CredentialDurationSeconds field's value. -func (s *CreateRoleAliasInput) SetCredentialDurationSeconds(v int64) *CreateRoleAliasInput { - s.CredentialDurationSeconds = &v - return s + } + return nil } -// SetRoleAlias sets the RoleAlias field's value. -func (s *CreateRoleAliasInput) SetRoleAlias(v string) *CreateRoleAliasInput { - s.RoleAlias = &v +// SetPolicyName sets the PolicyName field's value. 
+func (s *DetachPolicyInput) SetPolicyName(v string) *DetachPolicyInput { + s.PolicyName = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *CreateRoleAliasInput) SetRoleArn(v string) *CreateRoleAliasInput { - s.RoleArn = &v +// SetTarget sets the Target field's value. +func (s *DetachPolicyInput) SetTarget(v string) *DetachPolicyInput { + s.Target = &v return s } -type CreateRoleAliasOutput struct { +type DetachPolicyOutput struct { _ struct{} `type:"structure"` - - // The role alias. - RoleAlias *string `locationName:"roleAlias" min:"1" type:"string"` - - // The role alias ARN. - RoleAliasArn *string `locationName:"roleAliasArn" type:"string"` } // String returns the string representation -func (s CreateRoleAliasOutput) String() string { +func (s DetachPolicyOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateRoleAliasOutput) GoString() string { +func (s DetachPolicyOutput) GoString() string { return s.String() } -// SetRoleAlias sets the RoleAlias field's value. -func (s *CreateRoleAliasOutput) SetRoleAlias(v string) *CreateRoleAliasOutput { - s.RoleAlias = &v - return s -} - -// SetRoleAliasArn sets the RoleAliasArn field's value. -func (s *CreateRoleAliasOutput) SetRoleAliasArn(v string) *CreateRoleAliasOutput { - s.RoleAliasArn = &v - return s -} - -type CreateStreamInput struct { +// The input for the DetachPrincipalPolicy operation. +type DetachPrincipalPolicyInput struct { _ struct{} `type:"structure"` - // A description of the stream. - Description *string `locationName:"description" type:"string"` - - // The files to stream. + // The name of the policy to detach. // - // Files is a required field - Files []*StreamFile `locationName:"files" min:"1" type:"list" required:"true"` + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` - // An IAM role that allows the IoT service principal assumes to access your - // S3 files. + // The principal. // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` - - // The stream ID. + // If the principal is a certificate, specify the certificate ARN. If the principal + // is an Amazon Cognito identity, specify the identity ID. // - // StreamId is a required field - StreamId *string `location:"uri" locationName:"streamId" min:"1" type:"string" required:"true"` + // Principal is a required field + Principal *string `location:"header" locationName:"x-amzn-iot-principal" type:"string" required:"true"` } // String returns the string representation -func (s CreateStreamInput) String() string { +func (s DetachPrincipalPolicyInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateStreamInput) GoString() string { +func (s DetachPrincipalPolicyInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateStreamInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateStreamInput"} - if s.Files == nil { - invalidParams.Add(request.NewErrParamRequired("Files")) - } - if s.Files != nil && len(s.Files) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Files", 1)) - } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) - } - if s.RoleArn != nil && len(*s.RoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) - } - if s.StreamId == nil { - invalidParams.Add(request.NewErrParamRequired("StreamId")) +func (s *DetachPrincipalPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachPrincipalPolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) } - if s.StreamId != nil && len(*s.StreamId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("StreamId", 1)) + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) } - if s.Files != nil { - for i, v := range s.Files { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Files", i), err.(request.ErrInvalidParams)) - } - } + if s.Principal == nil { + invalidParams.Add(request.NewErrParamRequired("Principal")) } if invalidParams.Len() > 0 { @@ -14154,116 +23588,67 @@ func (s *CreateStreamInput) Validate() error { return nil } -// SetDescription sets the Description field's value. -func (s *CreateStreamInput) SetDescription(v string) *CreateStreamInput { - s.Description = &v - return s -} - -// SetFiles sets the Files field's value. -func (s *CreateStreamInput) SetFiles(v []*StreamFile) *CreateStreamInput { - s.Files = v - return s -} - -// SetRoleArn sets the RoleArn field's value. -func (s *CreateStreamInput) SetRoleArn(v string) *CreateStreamInput { - s.RoleArn = &v +// SetPolicyName sets the PolicyName field's value. +func (s *DetachPrincipalPolicyInput) SetPolicyName(v string) *DetachPrincipalPolicyInput { + s.PolicyName = &v return s } -// SetStreamId sets the StreamId field's value. -func (s *CreateStreamInput) SetStreamId(v string) *CreateStreamInput { - s.StreamId = &v +// SetPrincipal sets the Principal field's value. +func (s *DetachPrincipalPolicyInput) SetPrincipal(v string) *DetachPrincipalPolicyInput { + s.Principal = &v return s } -type CreateStreamOutput struct { +type DetachPrincipalPolicyOutput struct { _ struct{} `type:"structure"` - - // A description of the stream. - Description *string `locationName:"description" type:"string"` - - // The stream ARN. - StreamArn *string `locationName:"streamArn" type:"string"` - - // The stream ID. - StreamId *string `locationName:"streamId" min:"1" type:"string"` - - // The version of the stream. - StreamVersion *int64 `locationName:"streamVersion" type:"integer"` } // String returns the string representation -func (s CreateStreamOutput) String() string { +func (s DetachPrincipalPolicyOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateStreamOutput) GoString() string { +func (s DetachPrincipalPolicyOutput) GoString() string { return s.String() } -// SetDescription sets the Description field's value. -func (s *CreateStreamOutput) SetDescription(v string) *CreateStreamOutput { - s.Description = &v - return s -} - -// SetStreamArn sets the StreamArn field's value. 
-func (s *CreateStreamOutput) SetStreamArn(v string) *CreateStreamOutput { - s.StreamArn = &v - return s -} - -// SetStreamId sets the StreamId field's value. -func (s *CreateStreamOutput) SetStreamId(v string) *CreateStreamOutput { - s.StreamId = &v - return s -} - -// SetStreamVersion sets the StreamVersion field's value. -func (s *CreateStreamOutput) SetStreamVersion(v int64) *CreateStreamOutput { - s.StreamVersion = &v - return s -} - -type CreateThingGroupInput struct { +type DetachSecurityProfileInput struct { _ struct{} `type:"structure"` - // The name of the parent thing group. - ParentGroupName *string `locationName:"parentGroupName" min:"1" type:"string"` - - // The thing group name to create. + // The security profile that is detached. // - // ThingGroupName is a required field - ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` + // SecurityProfileName is a required field + SecurityProfileName *string `location:"uri" locationName:"securityProfileName" min:"1" type:"string" required:"true"` - // The thing group properties. - ThingGroupProperties *ThingGroupProperties `locationName:"thingGroupProperties" type:"structure"` + // The ARN of the thing group from which the security profile is detached. + // + // SecurityProfileTargetArn is a required field + SecurityProfileTargetArn *string `location:"querystring" locationName:"securityProfileTargetArn" type:"string" required:"true"` } // String returns the string representation -func (s CreateThingGroupInput) String() string { +func (s DetachSecurityProfileInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateThingGroupInput) GoString() string { +func (s DetachSecurityProfileInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateThingGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateThingGroupInput"} - if s.ParentGroupName != nil && len(*s.ParentGroupName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ParentGroupName", 1)) +func (s *DetachSecurityProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachSecurityProfileInput"} + if s.SecurityProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileName")) } - if s.ThingGroupName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) } - if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + if s.SecurityProfileTargetArn == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileTargetArn")) } if invalidParams.Len() > 0 { @@ -14272,106 +23657,71 @@ func (s *CreateThingGroupInput) Validate() error { return nil } -// SetParentGroupName sets the ParentGroupName field's value. -func (s *CreateThingGroupInput) SetParentGroupName(v string) *CreateThingGroupInput { - s.ParentGroupName = &v - return s -} - -// SetThingGroupName sets the ThingGroupName field's value. -func (s *CreateThingGroupInput) SetThingGroupName(v string) *CreateThingGroupInput { - s.ThingGroupName = &v +// SetSecurityProfileName sets the SecurityProfileName field's value. 
+func (s *DetachSecurityProfileInput) SetSecurityProfileName(v string) *DetachSecurityProfileInput { + s.SecurityProfileName = &v return s } -// SetThingGroupProperties sets the ThingGroupProperties field's value. -func (s *CreateThingGroupInput) SetThingGroupProperties(v *ThingGroupProperties) *CreateThingGroupInput { - s.ThingGroupProperties = v +// SetSecurityProfileTargetArn sets the SecurityProfileTargetArn field's value. +func (s *DetachSecurityProfileInput) SetSecurityProfileTargetArn(v string) *DetachSecurityProfileInput { + s.SecurityProfileTargetArn = &v return s } -type CreateThingGroupOutput struct { +type DetachSecurityProfileOutput struct { _ struct{} `type:"structure"` - - // The thing group ARN. - ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` - - // The thing group ID. - ThingGroupId *string `locationName:"thingGroupId" min:"1" type:"string"` - - // The thing group name. - ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` } // String returns the string representation -func (s CreateThingGroupOutput) String() string { +func (s DetachSecurityProfileOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateThingGroupOutput) GoString() string { +func (s DetachSecurityProfileOutput) GoString() string { return s.String() } -// SetThingGroupArn sets the ThingGroupArn field's value. -func (s *CreateThingGroupOutput) SetThingGroupArn(v string) *CreateThingGroupOutput { - s.ThingGroupArn = &v - return s -} - -// SetThingGroupId sets the ThingGroupId field's value. -func (s *CreateThingGroupOutput) SetThingGroupId(v string) *CreateThingGroupOutput { - s.ThingGroupId = &v - return s -} - -// SetThingGroupName sets the ThingGroupName field's value. -func (s *CreateThingGroupOutput) SetThingGroupName(v string) *CreateThingGroupOutput { - s.ThingGroupName = &v - return s -} - -// The input for the CreateThing operation. -type CreateThingInput struct { +// The input for the DetachThingPrincipal operation. +type DetachThingPrincipalInput struct { _ struct{} `type:"structure"` - // The attribute payload, which consists of up to three name/value pairs in - // a JSON document. For example: + // If the principal is a certificate, this value must be ARN of the certificate. + // If the principal is an Amazon Cognito identity, this value must be the ID + // of the Amazon Cognito identity. // - // {\"attributes\":{\"string1\":\"string2\"}} - AttributePayload *AttributePayload `locationName:"attributePayload" type:"structure"` + // Principal is a required field + Principal *string `location:"header" locationName:"x-amzn-principal" type:"string" required:"true"` - // The name of the thing to create. + // The name of the thing. // // ThingName is a required field ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` - - // The name of the thing type associated with the new thing. - ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` } // String returns the string representation -func (s CreateThingInput) String() string { +func (s DetachThingPrincipalInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateThingInput) GoString() string { +func (s DetachThingPrincipalInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateThingInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateThingInput"} +func (s *DetachThingPrincipalInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachThingPrincipalInput"} + if s.Principal == nil { + invalidParams.Add(request.NewErrParamRequired("Principal")) + } if s.ThingName == nil { invalidParams.Add(request.NewErrParamRequired("ThingName")) } if s.ThingName != nil && len(*s.ThingName) < 1 { invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) } - if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) - } if invalidParams.Len() > 0 { return invalidParams @@ -14379,99 +23729,174 @@ func (s *CreateThingInput) Validate() error { return nil } -// SetAttributePayload sets the AttributePayload field's value. -func (s *CreateThingInput) SetAttributePayload(v *AttributePayload) *CreateThingInput { - s.AttributePayload = v +// SetPrincipal sets the Principal field's value. +func (s *DetachThingPrincipalInput) SetPrincipal(v string) *DetachThingPrincipalInput { + s.Principal = &v return s } // SetThingName sets the ThingName field's value. -func (s *CreateThingInput) SetThingName(v string) *CreateThingInput { +func (s *DetachThingPrincipalInput) SetThingName(v string) *DetachThingPrincipalInput { s.ThingName = &v return s } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *CreateThingInput) SetThingTypeName(v string) *CreateThingInput { - s.ThingTypeName = &v - return s +// The output from the DetachThingPrincipal operation. +type DetachThingPrincipalOutput struct { + _ struct{} `type:"structure"` } -// The output of the CreateThing operation. -type CreateThingOutput struct { - _ struct{} `type:"structure"` +// String returns the string representation +func (s DetachThingPrincipalOutput) String() string { + return awsutil.Prettify(s) +} - // The ARN of the new thing. - ThingArn *string `locationName:"thingArn" type:"string"` +// GoString returns the string representation +func (s DetachThingPrincipalOutput) GoString() string { + return s.String() +} - // The thing ID. - ThingId *string `locationName:"thingId" type:"string"` +// The input for the DisableTopicRuleRequest operation. +type DisableTopicRuleInput struct { + _ struct{} `type:"structure"` - // The name of the new thing. - ThingName *string `locationName:"thingName" min:"1" type:"string"` + // The name of the rule to disable. + // + // RuleName is a required field + RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CreateThingOutput) String() string { +func (s DisableTopicRuleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateThingOutput) GoString() string { +func (s DisableTopicRuleInput) GoString() string { return s.String() } -// SetThingArn sets the ThingArn field's value. -func (s *CreateThingOutput) SetThingArn(v string) *CreateThingOutput { - s.ThingArn = &v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DisableTopicRuleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisableTopicRuleInput"} + if s.RuleName == nil { + invalidParams.Add(request.NewErrParamRequired("RuleName")) + } + if s.RuleName != nil && len(*s.RuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetThingId sets the ThingId field's value. -func (s *CreateThingOutput) SetThingId(v string) *CreateThingOutput { - s.ThingId = &v +// SetRuleName sets the RuleName field's value. +func (s *DisableTopicRuleInput) SetRuleName(v string) *DisableTopicRuleInput { + s.RuleName = &v return s } -// SetThingName sets the ThingName field's value. -func (s *CreateThingOutput) SetThingName(v string) *CreateThingOutput { - s.ThingName = &v - return s +type DisableTopicRuleOutput struct { + _ struct{} `type:"structure"` } -// The input for the CreateThingType operation. -type CreateThingTypeInput struct { +// String returns the string representation +func (s DisableTopicRuleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableTopicRuleOutput) GoString() string { + return s.String() +} + +// Describes an action to write to a DynamoDB table. +// +// The tableName, hashKeyField, and rangeKeyField values must match the values +// used when you created the table. +// +// The hashKeyValue and rangeKeyvalue fields use a substitution template syntax. +// These templates provide data at runtime. The syntax is as follows: ${sql-expression}. +// +// You can specify any valid expression in a WHERE or SELECT clause, including +// JSON properties, comparisons, calculations, and functions. For example, the +// following field uses the third level of the topic: +// +// "hashKeyValue": "${topic(3)}" +// +// The following field uses the timestamp: +// +// "rangeKeyValue": "${timestamp()}" +type DynamoDBAction struct { _ struct{} `type:"structure"` - // The name of the thing type. + // The hash key name. // - // ThingTypeName is a required field - ThingTypeName *string `location:"uri" locationName:"thingTypeName" min:"1" type:"string" required:"true"` + // HashKeyField is a required field + HashKeyField *string `locationName:"hashKeyField" type:"string" required:"true"` - // The ThingTypeProperties for the thing type to create. It contains information - // about the new thing type including a description, and a list of searchable - // thing attribute names. - ThingTypeProperties *ThingTypeProperties `locationName:"thingTypeProperties" type:"structure"` + // The hash key type. Valid values are "STRING" or "NUMBER" + HashKeyType *string `locationName:"hashKeyType" type:"string" enum:"DynamoKeyType"` + + // The hash key value. + // + // HashKeyValue is a required field + HashKeyValue *string `locationName:"hashKeyValue" type:"string" required:"true"` + + // The type of operation to be performed. This follows the substitution template, + // so it can be ${operation}, but the substitution must result in one of the + // following: INSERT, UPDATE, or DELETE. + Operation *string `locationName:"operation" type:"string"` + + // The action payload. This name can be customized. + PayloadField *string `locationName:"payloadField" type:"string"` + + // The range key name. + RangeKeyField *string `locationName:"rangeKeyField" type:"string"` + + // The range key type. 
Valid values are "STRING" or "NUMBER" + RangeKeyType *string `locationName:"rangeKeyType" type:"string" enum:"DynamoKeyType"` + + // The range key value. + RangeKeyValue *string `locationName:"rangeKeyValue" type:"string"` + + // The ARN of the IAM role that grants access to the DynamoDB table. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // The name of the DynamoDB table. + // + // TableName is a required field + TableName *string `locationName:"tableName" type:"string" required:"true"` } // String returns the string representation -func (s CreateThingTypeInput) String() string { +func (s DynamoDBAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateThingTypeInput) GoString() string { +func (s DynamoDBAction) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateThingTypeInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateThingTypeInput"} - if s.ThingTypeName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingTypeName")) +func (s *DynamoDBAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DynamoDBAction"} + if s.HashKeyField == nil { + invalidParams.Add(request.NewErrParamRequired("HashKeyField")) } - if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) + if s.HashKeyValue == nil { + invalidParams.Add(request.NewErrParamRequired("HashKeyValue")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) } if invalidParams.Len() > 0 { @@ -14480,100 +23905,103 @@ func (s *CreateThingTypeInput) Validate() error { return nil } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *CreateThingTypeInput) SetThingTypeName(v string) *CreateThingTypeInput { - s.ThingTypeName = &v +// SetHashKeyField sets the HashKeyField field's value. +func (s *DynamoDBAction) SetHashKeyField(v string) *DynamoDBAction { + s.HashKeyField = &v return s } -// SetThingTypeProperties sets the ThingTypeProperties field's value. -func (s *CreateThingTypeInput) SetThingTypeProperties(v *ThingTypeProperties) *CreateThingTypeInput { - s.ThingTypeProperties = v +// SetHashKeyType sets the HashKeyType field's value. +func (s *DynamoDBAction) SetHashKeyType(v string) *DynamoDBAction { + s.HashKeyType = &v return s } -// The output of the CreateThingType operation. -type CreateThingTypeOutput struct { - _ struct{} `type:"structure"` - - // The Amazon Resource Name (ARN) of the thing type. - ThingTypeArn *string `locationName:"thingTypeArn" type:"string"` +// SetHashKeyValue sets the HashKeyValue field's value. +func (s *DynamoDBAction) SetHashKeyValue(v string) *DynamoDBAction { + s.HashKeyValue = &v + return s +} - // The thing type ID. - ThingTypeId *string `locationName:"thingTypeId" type:"string"` +// SetOperation sets the Operation field's value. +func (s *DynamoDBAction) SetOperation(v string) *DynamoDBAction { + s.Operation = &v + return s +} - // The name of the thing type. - ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` +// SetPayloadField sets the PayloadField field's value. 
+func (s *DynamoDBAction) SetPayloadField(v string) *DynamoDBAction { + s.PayloadField = &v + return s } -// String returns the string representation -func (s CreateThingTypeOutput) String() string { - return awsutil.Prettify(s) +// SetRangeKeyField sets the RangeKeyField field's value. +func (s *DynamoDBAction) SetRangeKeyField(v string) *DynamoDBAction { + s.RangeKeyField = &v + return s } -// GoString returns the string representation -func (s CreateThingTypeOutput) GoString() string { - return s.String() +// SetRangeKeyType sets the RangeKeyType field's value. +func (s *DynamoDBAction) SetRangeKeyType(v string) *DynamoDBAction { + s.RangeKeyType = &v + return s } -// SetThingTypeArn sets the ThingTypeArn field's value. -func (s *CreateThingTypeOutput) SetThingTypeArn(v string) *CreateThingTypeOutput { - s.ThingTypeArn = &v +// SetRangeKeyValue sets the RangeKeyValue field's value. +func (s *DynamoDBAction) SetRangeKeyValue(v string) *DynamoDBAction { + s.RangeKeyValue = &v return s } -// SetThingTypeId sets the ThingTypeId field's value. -func (s *CreateThingTypeOutput) SetThingTypeId(v string) *CreateThingTypeOutput { - s.ThingTypeId = &v +// SetRoleArn sets the RoleArn field's value. +func (s *DynamoDBAction) SetRoleArn(v string) *DynamoDBAction { + s.RoleArn = &v return s } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *CreateThingTypeOutput) SetThingTypeName(v string) *CreateThingTypeOutput { - s.ThingTypeName = &v +// SetTableName sets the TableName field's value. +func (s *DynamoDBAction) SetTableName(v string) *DynamoDBAction { + s.TableName = &v return s } -// The input for the CreateTopicRule operation. -type CreateTopicRuleInput struct { - _ struct{} `type:"structure" payload:"TopicRulePayload"` +// Describes an action to write to a DynamoDB table. +// +// This DynamoDB action writes each attribute in the message payload into it's +// own column in the DynamoDB table. +type DynamoDBv2Action struct { + _ struct{} `type:"structure"` - // The name of the rule. + // Specifies the DynamoDB table to which the message data will be written. For + // example: + // + // { "dynamoDBv2": { "roleArn": "aws:iam:12341251:my-role" "putItem": { "tableName": + // "my-table" } } } // - // RuleName is a required field - RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` + // Each attribute in the message payload will be written to a separate column + // in the DynamoDB database. + PutItem *PutItemInput `locationName:"putItem" type:"structure"` - // The rule payload. - // - // TopicRulePayload is a required field - TopicRulePayload *TopicRulePayload `locationName:"topicRulePayload" type:"structure" required:"true"` + // The ARN of the IAM role that grants access to the DynamoDB table. + RoleArn *string `locationName:"roleArn" type:"string"` } // String returns the string representation -func (s CreateTopicRuleInput) String() string { +func (s DynamoDBv2Action) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateTopicRuleInput) GoString() string { +func (s DynamoDBv2Action) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateTopicRuleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateTopicRuleInput"} - if s.RuleName == nil { - invalidParams.Add(request.NewErrParamRequired("RuleName")) - } - if s.RuleName != nil && len(*s.RuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) - } - if s.TopicRulePayload == nil { - invalidParams.Add(request.NewErrParamRequired("TopicRulePayload")) - } - if s.TopicRulePayload != nil { - if err := s.TopicRulePayload.Validate(); err != nil { - invalidParams.AddNested("TopicRulePayload", err.(request.ErrInvalidParams)) +func (s *DynamoDBv2Action) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DynamoDBv2Action"} + if s.PutItem != nil { + if err := s.PutItem.Validate(); err != nil { + invalidParams.AddNested("PutItem", err.(request.ErrInvalidParams)) } } @@ -14583,71 +24011,117 @@ func (s *CreateTopicRuleInput) Validate() error { return nil } -// SetRuleName sets the RuleName field's value. -func (s *CreateTopicRuleInput) SetRuleName(v string) *CreateTopicRuleInput { - s.RuleName = &v +// SetPutItem sets the PutItem field's value. +func (s *DynamoDBv2Action) SetPutItem(v *PutItemInput) *DynamoDBv2Action { + s.PutItem = v return s } -// SetTopicRulePayload sets the TopicRulePayload field's value. -func (s *CreateTopicRuleInput) SetTopicRulePayload(v *TopicRulePayload) *CreateTopicRuleInput { - s.TopicRulePayload = v +// SetRoleArn sets the RoleArn field's value. +func (s *DynamoDBv2Action) SetRoleArn(v string) *DynamoDBv2Action { + s.RoleArn = &v return s } -type CreateTopicRuleOutput struct { +// The policy that has the effect on the authorization results. +type EffectivePolicy struct { _ struct{} `type:"structure"` + + // The policy ARN. + PolicyArn *string `locationName:"policyArn" type:"string"` + + // The IAM policy document. + PolicyDocument *string `locationName:"policyDocument" type:"string"` + + // The policy name. + PolicyName *string `locationName:"policyName" min:"1" type:"string"` } // String returns the string representation -func (s CreateTopicRuleOutput) String() string { +func (s EffectivePolicy) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateTopicRuleOutput) GoString() string { +func (s EffectivePolicy) GoString() string { return s.String() } -// Describes a custom method used to code sign a file. -type CustomCodeSigning struct { +// SetPolicyArn sets the PolicyArn field's value. +func (s *EffectivePolicy) SetPolicyArn(v string) *EffectivePolicy { + s.PolicyArn = &v + return s +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *EffectivePolicy) SetPolicyDocument(v string) *EffectivePolicy { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *EffectivePolicy) SetPolicyName(v string) *EffectivePolicy { + s.PolicyName = &v + return s +} + +// Describes an action that writes data to an Amazon Elasticsearch Service domain. +type ElasticsearchAction struct { _ struct{} `type:"structure"` - // The certificate chain. - CertificateChain *CodeSigningCertificateChain `locationName:"certificateChain" type:"structure"` + // The endpoint of your Elasticsearch domain. + // + // Endpoint is a required field + Endpoint *string `locationName:"endpoint" type:"string" required:"true"` - // The hash algorithm used to code sign the file. 
- HashAlgorithm *string `locationName:"hashAlgorithm" type:"string"` + // The unique identifier for the document you are storing. + // + // Id is a required field + Id *string `locationName:"id" type:"string" required:"true"` - // The signature for the file. - Signature *CodeSigningSignature `locationName:"signature" type:"structure"` + // The Elasticsearch index where you want to store your data. + // + // Index is a required field + Index *string `locationName:"index" type:"string" required:"true"` - // The signature algorithm used to code sign the file. - SignatureAlgorithm *string `locationName:"signatureAlgorithm" type:"string"` + // The IAM role ARN that has access to Elasticsearch. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // The type of document you are storing. + // + // Type is a required field + Type *string `locationName:"type" type:"string" required:"true"` } // String returns the string representation -func (s CustomCodeSigning) String() string { +func (s ElasticsearchAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CustomCodeSigning) GoString() string { +func (s ElasticsearchAction) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CustomCodeSigning) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CustomCodeSigning"} - if s.CertificateChain != nil { - if err := s.CertificateChain.Validate(); err != nil { - invalidParams.AddNested("CertificateChain", err.(request.ErrInvalidParams)) - } +func (s *ElasticsearchAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ElasticsearchAction"} + if s.Endpoint == nil { + invalidParams.Add(request.NewErrParamRequired("Endpoint")) } - if s.Signature != nil { - if err := s.Signature.Validate(); err != nil { - invalidParams.AddNested("Signature", err.(request.ErrInvalidParams)) - } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Index == nil { + invalidParams.Add(request.NewErrParamRequired("Index")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) } if invalidParams.Len() > 0 { @@ -14656,57 +24130,64 @@ func (s *CustomCodeSigning) Validate() error { return nil } -// SetCertificateChain sets the CertificateChain field's value. -func (s *CustomCodeSigning) SetCertificateChain(v *CodeSigningCertificateChain) *CustomCodeSigning { - s.CertificateChain = v +// SetEndpoint sets the Endpoint field's value. +func (s *ElasticsearchAction) SetEndpoint(v string) *ElasticsearchAction { + s.Endpoint = &v return s } -// SetHashAlgorithm sets the HashAlgorithm field's value. -func (s *CustomCodeSigning) SetHashAlgorithm(v string) *CustomCodeSigning { - s.HashAlgorithm = &v +// SetId sets the Id field's value. +func (s *ElasticsearchAction) SetId(v string) *ElasticsearchAction { + s.Id = &v return s } -// SetSignature sets the Signature field's value. -func (s *CustomCodeSigning) SetSignature(v *CodeSigningSignature) *CustomCodeSigning { - s.Signature = v +// SetIndex sets the Index field's value. +func (s *ElasticsearchAction) SetIndex(v string) *ElasticsearchAction { + s.Index = &v return s } -// SetSignatureAlgorithm sets the SignatureAlgorithm field's value. 
-func (s *CustomCodeSigning) SetSignatureAlgorithm(v string) *CustomCodeSigning { - s.SignatureAlgorithm = &v +// SetRoleArn sets the RoleArn field's value. +func (s *ElasticsearchAction) SetRoleArn(v string) *ElasticsearchAction { + s.RoleArn = &v return s } -type DeleteAuthorizerInput struct { +// SetType sets the Type field's value. +func (s *ElasticsearchAction) SetType(v string) *ElasticsearchAction { + s.Type = &v + return s +} + +// The input for the EnableTopicRuleRequest operation. +type EnableTopicRuleInput struct { _ struct{} `type:"structure"` - // The name of the authorizer to delete. + // The name of the topic rule to enable. // - // AuthorizerName is a required field - AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` + // RuleName is a required field + RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s DeleteAuthorizerInput) String() string { +func (s EnableTopicRuleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteAuthorizerInput) GoString() string { +func (s EnableTopicRuleInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteAuthorizerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteAuthorizerInput"} - if s.AuthorizerName == nil { - invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) +func (s *EnableTopicRuleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EnableTopicRuleInput"} + if s.RuleName == nil { + invalidParams.Add(request.NewErrParamRequired("RuleName")) } - if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) + if s.RuleName != nil && len(*s.RuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) } if invalidParams.Len() > 0 { @@ -14715,114 +24196,140 @@ func (s *DeleteAuthorizerInput) Validate() error { return nil } -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *DeleteAuthorizerInput) SetAuthorizerName(v string) *DeleteAuthorizerInput { - s.AuthorizerName = &v +// SetRuleName sets the RuleName field's value. +func (s *EnableTopicRuleInput) SetRuleName(v string) *EnableTopicRuleInput { + s.RuleName = &v return s } -type DeleteAuthorizerOutput struct { +type EnableTopicRuleOutput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s DeleteAuthorizerOutput) String() string { +func (s EnableTopicRuleOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteAuthorizerOutput) GoString() string { +func (s EnableTopicRuleOutput) GoString() string { return s.String() } -// Input for the DeleteCACertificate operation. -type DeleteCACertificateInput struct { +// Error information. +type ErrorInfo struct { _ struct{} `type:"structure"` - // The ID of the certificate to delete. - // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"caCertificateId" min:"64" type:"string" required:"true"` + // The error code. + Code *string `locationName:"code" type:"string"` + + // The error message. 
+ Message *string `locationName:"message" type:"string"` } // String returns the string representation -func (s DeleteCACertificateInput) String() string { +func (s ErrorInfo) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteCACertificateInput) GoString() string { +func (s ErrorInfo) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteCACertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteCACertificateInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) - } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCode sets the Code field's value. +func (s *ErrorInfo) SetCode(v string) *ErrorInfo { + s.Code = &v + return s } -// SetCertificateId sets the CertificateId field's value. -func (s *DeleteCACertificateInput) SetCertificateId(v string) *DeleteCACertificateInput { - s.CertificateId = &v +// SetMessage sets the Message field's value. +func (s *ErrorInfo) SetMessage(v string) *ErrorInfo { + s.Message = &v return s } -// The output for the DeleteCACertificate operation. -type DeleteCACertificateOutput struct { +// Information that explicitly denies authorization. +type ExplicitDeny struct { _ struct{} `type:"structure"` + + // The policies that denied the authorization. + Policies []*Policy `locationName:"policies" type:"list"` } // String returns the string representation -func (s DeleteCACertificateOutput) String() string { +func (s ExplicitDeny) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteCACertificateOutput) GoString() string { +func (s ExplicitDeny) GoString() string { return s.String() } -// The input for the DeleteCertificate operation. -type DeleteCertificateInput struct { +// SetPolicies sets the Policies field's value. +func (s *ExplicitDeny) SetPolicies(v []*Policy) *ExplicitDeny { + s.Policies = v + return s +} + +// Allows you to create an exponential rate of rollout for a job. +type ExponentialRolloutRate struct { _ struct{} `type:"structure"` - // The ID of the certificate. + // The minimum number of things that will be notified of a pending job, per + // minute at the start of job rollout. This parameter allows you to define the + // initial rate of rollout. // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` + // BaseRatePerMinute is a required field + BaseRatePerMinute *int64 `locationName:"baseRatePerMinute" min:"1" type:"integer" required:"true"` - // Forces a certificate request to be deleted. - ForceDelete *bool `location:"querystring" locationName:"forceDelete" type:"boolean"` + // The exponential factor to increase the rate of rollout for a job. + // + // IncrementFactor is a required field + IncrementFactor *float64 `locationName:"incrementFactor" min:"1" type:"double" required:"true"` + + // The criteria to initiate the increase in rate of rollout for a job. + // + // AWS IoT supports up to one digit after the decimal (for example, 1.5, but + // not 1.55). 
+ // + // RateIncreaseCriteria is a required field + RateIncreaseCriteria *RateIncreaseCriteria `locationName:"rateIncreaseCriteria" type:"structure" required:"true"` } // String returns the string representation -func (s DeleteCertificateInput) String() string { +func (s ExponentialRolloutRate) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteCertificateInput) GoString() string { +func (s ExponentialRolloutRate) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteCertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteCertificateInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) +func (s *ExponentialRolloutRate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ExponentialRolloutRate"} + if s.BaseRatePerMinute == nil { + invalidParams.Add(request.NewErrParamRequired("BaseRatePerMinute")) } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + if s.BaseRatePerMinute != nil && *s.BaseRatePerMinute < 1 { + invalidParams.Add(request.NewErrParamMinValue("BaseRatePerMinute", 1)) + } + if s.IncrementFactor == nil { + invalidParams.Add(request.NewErrParamRequired("IncrementFactor")) + } + if s.IncrementFactor != nil && *s.IncrementFactor < 1 { + invalidParams.Add(request.NewErrParamMinValue("IncrementFactor", 1)) + } + if s.RateIncreaseCriteria == nil { + invalidParams.Add(request.NewErrParamRequired("RateIncreaseCriteria")) + } + if s.RateIncreaseCriteria != nil { + if err := s.RateIncreaseCriteria.Validate(); err != nil { + invalidParams.AddNested("RateIncreaseCriteria", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -14831,59 +24338,57 @@ func (s *DeleteCertificateInput) Validate() error { return nil } -// SetCertificateId sets the CertificateId field's value. -func (s *DeleteCertificateInput) SetCertificateId(v string) *DeleteCertificateInput { - s.CertificateId = &v +// SetBaseRatePerMinute sets the BaseRatePerMinute field's value. +func (s *ExponentialRolloutRate) SetBaseRatePerMinute(v int64) *ExponentialRolloutRate { + s.BaseRatePerMinute = &v return s } -// SetForceDelete sets the ForceDelete field's value. -func (s *DeleteCertificateInput) SetForceDelete(v bool) *DeleteCertificateInput { - s.ForceDelete = &v +// SetIncrementFactor sets the IncrementFactor field's value. +func (s *ExponentialRolloutRate) SetIncrementFactor(v float64) *ExponentialRolloutRate { + s.IncrementFactor = &v return s } -type DeleteCertificateOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteCertificateOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DeleteCertificateOutput) GoString() string { - return s.String() +// SetRateIncreaseCriteria sets the RateIncreaseCriteria field's value. +func (s *ExponentialRolloutRate) SetRateIncreaseCriteria(v *RateIncreaseCriteria) *ExponentialRolloutRate { + s.RateIncreaseCriteria = v + return s } -type DeleteOTAUpdateInput struct { +// The location of the OTA update. +type FileLocation struct { _ struct{} `type:"structure"` - // The OTA update ID to delete. 
- // - // OtaUpdateId is a required field - OtaUpdateId *string `location:"uri" locationName:"otaUpdateId" min:"1" type:"string" required:"true"` + // The location of the updated firmware in S3. + S3Location *S3Location `locationName:"s3Location" type:"structure"` + + // The stream that contains the OTA update. + Stream *Stream `locationName:"stream" type:"structure"` } // String returns the string representation -func (s DeleteOTAUpdateInput) String() string { +func (s FileLocation) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteOTAUpdateInput) GoString() string { +func (s FileLocation) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteOTAUpdateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteOTAUpdateInput"} - if s.OtaUpdateId == nil { - invalidParams.Add(request.NewErrParamRequired("OtaUpdateId")) +func (s *FileLocation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FileLocation"} + if s.S3Location != nil { + if err := s.S3Location.Validate(); err != nil { + invalidParams.AddNested("S3Location", err.(request.ErrInvalidParams)) + } } - if s.OtaUpdateId != nil && len(*s.OtaUpdateId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("OtaUpdateId", 1)) + if s.Stream != nil { + if err := s.Stream.Validate(); err != nil { + invalidParams.AddNested("Stream", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -14892,54 +24397,56 @@ func (s *DeleteOTAUpdateInput) Validate() error { return nil } -// SetOtaUpdateId sets the OtaUpdateId field's value. -func (s *DeleteOTAUpdateInput) SetOtaUpdateId(v string) *DeleteOTAUpdateInput { - s.OtaUpdateId = &v +// SetS3Location sets the S3Location field's value. +func (s *FileLocation) SetS3Location(v *S3Location) *FileLocation { + s.S3Location = v return s } -type DeleteOTAUpdateOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteOTAUpdateOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DeleteOTAUpdateOutput) GoString() string { - return s.String() +// SetStream sets the Stream field's value. +func (s *FileLocation) SetStream(v *Stream) *FileLocation { + s.Stream = v + return s } -// The input for the DeletePolicy operation. -type DeletePolicyInput struct { +// Describes an action that writes data to an Amazon Kinesis Firehose stream. +type FirehoseAction struct { _ struct{} `type:"structure"` - // The name of the policy to delete. + // The delivery stream name. // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // DeliveryStreamName is a required field + DeliveryStreamName *string `locationName:"deliveryStreamName" type:"string" required:"true"` + + // The IAM role that grants access to the Amazon Kinesis Firehose stream. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // A character separator that will be used to separate records written to the + // Firehose stream. Valid values are: '\n' (newline), '\t' (tab), '\r\n' (Windows + // newline), ',' (comma). 
+ Separator *string `locationName:"separator" type:"string"` } // String returns the string representation -func (s DeletePolicyInput) String() string { +func (s FirehoseAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeletePolicyInput) GoString() string { +func (s FirehoseAction) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeletePolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeletePolicyInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) +func (s *FirehoseAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FirehoseAction"} + if s.DeliveryStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) } if invalidParams.Len() > 0 { @@ -14948,62 +24455,52 @@ func (s *DeletePolicyInput) Validate() error { return nil } -// SetPolicyName sets the PolicyName field's value. -func (s *DeletePolicyInput) SetPolicyName(v string) *DeletePolicyInput { - s.PolicyName = &v +// SetDeliveryStreamName sets the DeliveryStreamName field's value. +func (s *FirehoseAction) SetDeliveryStreamName(v string) *FirehoseAction { + s.DeliveryStreamName = &v return s } -type DeletePolicyOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeletePolicyOutput) String() string { - return awsutil.Prettify(s) +// SetRoleArn sets the RoleArn field's value. +func (s *FirehoseAction) SetRoleArn(v string) *FirehoseAction { + s.RoleArn = &v + return s } -// GoString returns the string representation -func (s DeletePolicyOutput) GoString() string { - return s.String() +// SetSeparator sets the Separator field's value. +func (s *FirehoseAction) SetSeparator(v string) *FirehoseAction { + s.Separator = &v + return s } -// The input for the DeletePolicyVersion operation. -type DeletePolicyVersionInput struct { +type GetEffectivePoliciesInput struct { _ struct{} `type:"structure"` - // The name of the policy. - // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // The Cognito identity pool ID. + CognitoIdentityPoolId *string `locationName:"cognitoIdentityPoolId" type:"string"` - // The policy version ID. - // - // PolicyVersionId is a required field - PolicyVersionId *string `location:"uri" locationName:"policyVersionId" type:"string" required:"true"` + // The principal. + Principal *string `locationName:"principal" type:"string"` + + // The thing name. + ThingName *string `location:"querystring" locationName:"thingName" min:"1" type:"string"` } // String returns the string representation -func (s DeletePolicyVersionInput) String() string { +func (s GetEffectivePoliciesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeletePolicyVersionInput) GoString() string { +func (s GetEffectivePoliciesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DeletePolicyVersionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeletePolicyVersionInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) - } - if s.PolicyVersionId == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyVersionId")) +func (s *GetEffectivePoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetEffectivePoliciesInput"} + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) } if invalidParams.Len() > 0 { @@ -15012,89 +24509,120 @@ func (s *DeletePolicyVersionInput) Validate() error { return nil } -// SetPolicyName sets the PolicyName field's value. -func (s *DeletePolicyVersionInput) SetPolicyName(v string) *DeletePolicyVersionInput { - s.PolicyName = &v +// SetCognitoIdentityPoolId sets the CognitoIdentityPoolId field's value. +func (s *GetEffectivePoliciesInput) SetCognitoIdentityPoolId(v string) *GetEffectivePoliciesInput { + s.CognitoIdentityPoolId = &v return s } -// SetPolicyVersionId sets the PolicyVersionId field's value. -func (s *DeletePolicyVersionInput) SetPolicyVersionId(v string) *DeletePolicyVersionInput { - s.PolicyVersionId = &v +// SetPrincipal sets the Principal field's value. +func (s *GetEffectivePoliciesInput) SetPrincipal(v string) *GetEffectivePoliciesInput { + s.Principal = &v return s } -type DeletePolicyVersionOutput struct { +// SetThingName sets the ThingName field's value. +func (s *GetEffectivePoliciesInput) SetThingName(v string) *GetEffectivePoliciesInput { + s.ThingName = &v + return s +} + +type GetEffectivePoliciesOutput struct { _ struct{} `type:"structure"` + + // The effective policies. + EffectivePolicies []*EffectivePolicy `locationName:"effectivePolicies" type:"list"` } // String returns the string representation -func (s DeletePolicyVersionOutput) String() string { +func (s GetEffectivePoliciesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeletePolicyVersionOutput) GoString() string { +func (s GetEffectivePoliciesOutput) GoString() string { return s.String() } -// The input for the DeleteRegistrationCode operation. -type DeleteRegistrationCodeInput struct { +// SetEffectivePolicies sets the EffectivePolicies field's value. +func (s *GetEffectivePoliciesOutput) SetEffectivePolicies(v []*EffectivePolicy) *GetEffectivePoliciesOutput { + s.EffectivePolicies = v + return s +} + +type GetIndexingConfigurationInput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s DeleteRegistrationCodeInput) String() string { +func (s GetIndexingConfigurationInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteRegistrationCodeInput) GoString() string { +func (s GetIndexingConfigurationInput) GoString() string { return s.String() } -// The output for the DeleteRegistrationCode operation. -type DeleteRegistrationCodeOutput struct { +type GetIndexingConfigurationOutput struct { _ struct{} `type:"structure"` + + // The index configuration. + ThingGroupIndexingConfiguration *ThingGroupIndexingConfiguration `locationName:"thingGroupIndexingConfiguration" type:"structure"` + + // Thing indexing configuration. 
+ ThingIndexingConfiguration *ThingIndexingConfiguration `locationName:"thingIndexingConfiguration" type:"structure"` } // String returns the string representation -func (s DeleteRegistrationCodeOutput) String() string { +func (s GetIndexingConfigurationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteRegistrationCodeOutput) GoString() string { +func (s GetIndexingConfigurationOutput) GoString() string { return s.String() } -type DeleteRoleAliasInput struct { +// SetThingGroupIndexingConfiguration sets the ThingGroupIndexingConfiguration field's value. +func (s *GetIndexingConfigurationOutput) SetThingGroupIndexingConfiguration(v *ThingGroupIndexingConfiguration) *GetIndexingConfigurationOutput { + s.ThingGroupIndexingConfiguration = v + return s +} + +// SetThingIndexingConfiguration sets the ThingIndexingConfiguration field's value. +func (s *GetIndexingConfigurationOutput) SetThingIndexingConfiguration(v *ThingIndexingConfiguration) *GetIndexingConfigurationOutput { + s.ThingIndexingConfiguration = v + return s +} + +type GetJobDocumentInput struct { _ struct{} `type:"structure"` - // The role alias to delete. + // The unique identifier you assigned to this job when it was created. // - // RoleAlias is a required field - RoleAlias *string `location:"uri" locationName:"roleAlias" min:"1" type:"string" required:"true"` + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s DeleteRoleAliasInput) String() string { +func (s GetJobDocumentInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteRoleAliasInput) GoString() string { +func (s GetJobDocumentInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteRoleAliasInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteRoleAliasInput"} - if s.RoleAlias == nil { - invalidParams.Add(request.NewErrParamRequired("RoleAlias")) +func (s *GetJobDocumentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetJobDocumentInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) } - if s.RoleAlias != nil && len(*s.RoleAlias) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RoleAlias", 1)) + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) } if invalidParams.Len() > 0 { @@ -15103,111 +24631,110 @@ func (s *DeleteRoleAliasInput) Validate() error { return nil } -// SetRoleAlias sets the RoleAlias field's value. -func (s *DeleteRoleAliasInput) SetRoleAlias(v string) *DeleteRoleAliasInput { - s.RoleAlias = &v +// SetJobId sets the JobId field's value. +func (s *GetJobDocumentInput) SetJobId(v string) *GetJobDocumentInput { + s.JobId = &v return s } -type DeleteRoleAliasOutput struct { +type GetJobDocumentOutput struct { _ struct{} `type:"structure"` + + // The job document content. 
+ Document *string `locationName:"document" type:"string"` } // String returns the string representation -func (s DeleteRoleAliasOutput) String() string { +func (s GetJobDocumentOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteRoleAliasOutput) GoString() string { +func (s GetJobDocumentOutput) GoString() string { return s.String() } -type DeleteStreamInput struct { - _ struct{} `type:"structure"` +// SetDocument sets the Document field's value. +func (s *GetJobDocumentOutput) SetDocument(v string) *GetJobDocumentOutput { + s.Document = &v + return s +} - // The stream ID. - // - // StreamId is a required field - StreamId *string `location:"uri" locationName:"streamId" min:"1" type:"string" required:"true"` +// The input for the GetLoggingOptions operation. +type GetLoggingOptionsInput struct { + _ struct{} `type:"structure"` } // String returns the string representation -func (s DeleteStreamInput) String() string { +func (s GetLoggingOptionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteStreamInput) GoString() string { +func (s GetLoggingOptionsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteStreamInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteStreamInput"} - if s.StreamId == nil { - invalidParams.Add(request.NewErrParamRequired("StreamId")) - } - if s.StreamId != nil && len(*s.StreamId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("StreamId", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} +// The output from the GetLoggingOptions operation. +type GetLoggingOptionsOutput struct { + _ struct{} `type:"structure"` -// SetStreamId sets the StreamId field's value. -func (s *DeleteStreamInput) SetStreamId(v string) *DeleteStreamInput { - s.StreamId = &v - return s -} + // The logging level. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` -type DeleteStreamOutput struct { - _ struct{} `type:"structure"` + // The ARN of the IAM role that grants access. + RoleArn *string `locationName:"roleArn" type:"string"` } // String returns the string representation -func (s DeleteStreamOutput) String() string { +func (s GetLoggingOptionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteStreamOutput) GoString() string { +func (s GetLoggingOptionsOutput) GoString() string { return s.String() } -type DeleteThingGroupInput struct { - _ struct{} `type:"structure"` +// SetLogLevel sets the LogLevel field's value. +func (s *GetLoggingOptionsOutput) SetLogLevel(v string) *GetLoggingOptionsOutput { + s.LogLevel = &v + return s +} - // The expected version of the thing group to delete. - ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` +// SetRoleArn sets the RoleArn field's value. +func (s *GetLoggingOptionsOutput) SetRoleArn(v string) *GetLoggingOptionsOutput { + s.RoleArn = &v + return s +} - // The name of the thing group to delete. +type GetOTAUpdateInput struct { + _ struct{} `type:"structure"` + + // The OTA update ID. 
// - // ThingGroupName is a required field - ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` + // OtaUpdateId is a required field + OtaUpdateId *string `location:"uri" locationName:"otaUpdateId" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s DeleteThingGroupInput) String() string { +func (s GetOTAUpdateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteThingGroupInput) GoString() string { +func (s GetOTAUpdateInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteThingGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteThingGroupInput"} - if s.ThingGroupName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) +func (s *GetOTAUpdateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetOTAUpdateInput"} + if s.OtaUpdateId == nil { + invalidParams.Add(request.NewErrParamRequired("OtaUpdateId")) } - if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + if s.OtaUpdateId != nil && len(*s.OtaUpdateId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OtaUpdateId", 1)) } if invalidParams.Len() > 0 { @@ -15216,65 +24743,63 @@ func (s *DeleteThingGroupInput) Validate() error { return nil } -// SetExpectedVersion sets the ExpectedVersion field's value. -func (s *DeleteThingGroupInput) SetExpectedVersion(v int64) *DeleteThingGroupInput { - s.ExpectedVersion = &v - return s -} - -// SetThingGroupName sets the ThingGroupName field's value. -func (s *DeleteThingGroupInput) SetThingGroupName(v string) *DeleteThingGroupInput { - s.ThingGroupName = &v +// SetOtaUpdateId sets the OtaUpdateId field's value. +func (s *GetOTAUpdateInput) SetOtaUpdateId(v string) *GetOTAUpdateInput { + s.OtaUpdateId = &v return s } -type DeleteThingGroupOutput struct { +type GetOTAUpdateOutput struct { _ struct{} `type:"structure"` + + // The OTA update info. + OtaUpdateInfo *OTAUpdateInfo `locationName:"otaUpdateInfo" type:"structure"` } // String returns the string representation -func (s DeleteThingGroupOutput) String() string { +func (s GetOTAUpdateOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteThingGroupOutput) GoString() string { +func (s GetOTAUpdateOutput) GoString() string { return s.String() } -// The input for the DeleteThing operation. -type DeleteThingInput struct { - _ struct{} `type:"structure"` +// SetOtaUpdateInfo sets the OtaUpdateInfo field's value. +func (s *GetOTAUpdateOutput) SetOtaUpdateInfo(v *OTAUpdateInfo) *GetOTAUpdateOutput { + s.OtaUpdateInfo = v + return s +} - // The expected version of the thing record in the registry. If the version - // of the record in the registry does not match the expected version specified - // in the request, the DeleteThing request is rejected with a VersionConflictException. - ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` +// The input for the GetPolicy operation. +type GetPolicyInput struct { + _ struct{} `type:"structure"` - // The name of the thing to delete. + // The name of the policy. 
// - // ThingName is a required field - ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s DeleteThingInput) String() string { +func (s GetPolicyInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteThingInput) GoString() string { +func (s GetPolicyInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteThingInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteThingInput"} - if s.ThingName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingName")) +func (s *GetPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPolicyInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) } if invalidParams.Len() > 0 { @@ -15283,118 +24808,126 @@ func (s *DeleteThingInput) Validate() error { return nil } -// SetExpectedVersion sets the ExpectedVersion field's value. -func (s *DeleteThingInput) SetExpectedVersion(v int64) *DeleteThingInput { - s.ExpectedVersion = &v - return s -} - -// SetThingName sets the ThingName field's value. -func (s *DeleteThingInput) SetThingName(v string) *DeleteThingInput { - s.ThingName = &v +// SetPolicyName sets the PolicyName field's value. +func (s *GetPolicyInput) SetPolicyName(v string) *GetPolicyInput { + s.PolicyName = &v return s } -// The output of the DeleteThing operation. -type DeleteThingOutput struct { +// The output from the GetPolicy operation. +type GetPolicyOutput struct { _ struct{} `type:"structure"` -} -// String returns the string representation -func (s DeleteThingOutput) String() string { - return awsutil.Prettify(s) -} + // The date the policy was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` -// GoString returns the string representation -func (s DeleteThingOutput) GoString() string { - return s.String() -} + // The default policy version ID. + DefaultVersionId *string `locationName:"defaultVersionId" type:"string"` -// The input for the DeleteThingType operation. -type DeleteThingTypeInput struct { - _ struct{} `type:"structure"` + // The generation ID of the policy. + GenerationId *string `locationName:"generationId" type:"string"` - // The name of the thing type. - // - // ThingTypeName is a required field - ThingTypeName *string `location:"uri" locationName:"thingTypeName" min:"1" type:"string" required:"true"` + // The date the policy was last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The policy ARN. + PolicyArn *string `locationName:"policyArn" type:"string"` + + // The JSON document that describes the policy. + PolicyDocument *string `locationName:"policyDocument" type:"string"` + + // The policy name. 
+ PolicyName *string `locationName:"policyName" min:"1" type:"string"` } // String returns the string representation -func (s DeleteThingTypeInput) String() string { +func (s GetPolicyOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteThingTypeInput) GoString() string { +func (s GetPolicyOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteThingTypeInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteThingTypeInput"} - if s.ThingTypeName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingTypeName")) - } - if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) - } +// SetCreationDate sets the CreationDate field's value. +func (s *GetPolicyOutput) SetCreationDate(v time.Time) *GetPolicyOutput { + s.CreationDate = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetDefaultVersionId sets the DefaultVersionId field's value. +func (s *GetPolicyOutput) SetDefaultVersionId(v string) *GetPolicyOutput { + s.DefaultVersionId = &v + return s } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *DeleteThingTypeInput) SetThingTypeName(v string) *DeleteThingTypeInput { - s.ThingTypeName = &v +// SetGenerationId sets the GenerationId field's value. +func (s *GetPolicyOutput) SetGenerationId(v string) *GetPolicyOutput { + s.GenerationId = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *GetPolicyOutput) SetLastModifiedDate(v time.Time) *GetPolicyOutput { + s.LastModifiedDate = &v return s } -// The output for the DeleteThingType operation. -type DeleteThingTypeOutput struct { - _ struct{} `type:"structure"` +// SetPolicyArn sets the PolicyArn field's value. +func (s *GetPolicyOutput) SetPolicyArn(v string) *GetPolicyOutput { + s.PolicyArn = &v + return s } -// String returns the string representation -func (s DeleteThingTypeOutput) String() string { - return awsutil.Prettify(s) +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *GetPolicyOutput) SetPolicyDocument(v string) *GetPolicyOutput { + s.PolicyDocument = &v + return s } -// GoString returns the string representation -func (s DeleteThingTypeOutput) GoString() string { - return s.String() +// SetPolicyName sets the PolicyName field's value. +func (s *GetPolicyOutput) SetPolicyName(v string) *GetPolicyOutput { + s.PolicyName = &v + return s } -// The input for the DeleteTopicRule operation. -type DeleteTopicRuleInput struct { +// The input for the GetPolicyVersion operation. +type GetPolicyVersionInput struct { _ struct{} `type:"structure"` - // The name of the rule. + // The name of the policy. // - // RuleName is a required field - RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + + // The policy version ID. 
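The GetPolicy input/output pair above is used the same way; a minimal sketch, assuming a placeholder policy name and default session setup:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	svc := iot.New(session.Must(session.NewSession()))

	// Fetch a policy by name; "example-policy" is a placeholder.
	out, err := svc.GetPolicy(&iot.GetPolicyInput{
		PolicyName: aws.String("example-policy"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Every output field is a pointer, so read them through the aws helpers.
	fmt.Println("default version:", aws.StringValue(out.DefaultVersionId))
	fmt.Println("document:", aws.StringValue(out.PolicyDocument))
}
```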
+ // + // PolicyVersionId is a required field + PolicyVersionId *string `location:"uri" locationName:"policyVersionId" type:"string" required:"true"` } // String returns the string representation -func (s DeleteTopicRuleInput) String() string { +func (s GetPolicyVersionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteTopicRuleInput) GoString() string { +func (s GetPolicyVersionInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteTopicRuleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteTopicRuleInput"} - if s.RuleName == nil { - invalidParams.Add(request.NewErrParamRequired("RuleName")) +func (s *GetPolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPolicyVersionInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) } - if s.RuleName != nil && len(*s.RuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.PolicyVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyVersionId")) } if invalidParams.Len() > 0 { @@ -15403,159 +24936,172 @@ func (s *DeleteTopicRuleInput) Validate() error { return nil } -// SetRuleName sets the RuleName field's value. -func (s *DeleteTopicRuleInput) SetRuleName(v string) *DeleteTopicRuleInput { - s.RuleName = &v +// SetPolicyName sets the PolicyName field's value. +func (s *GetPolicyVersionInput) SetPolicyName(v string) *GetPolicyVersionInput { + s.PolicyName = &v return s } -type DeleteTopicRuleOutput struct { +// SetPolicyVersionId sets the PolicyVersionId field's value. +func (s *GetPolicyVersionInput) SetPolicyVersionId(v string) *GetPolicyVersionInput { + s.PolicyVersionId = &v + return s +} + +// The output from the GetPolicyVersion operation. +type GetPolicyVersionOutput struct { _ struct{} `type:"structure"` + + // The date the policy version was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The generation ID of the policy version. + GenerationId *string `locationName:"generationId" type:"string"` + + // Specifies whether the policy version is the default. + IsDefaultVersion *bool `locationName:"isDefaultVersion" type:"boolean"` + + // The date the policy version was last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The policy ARN. + PolicyArn *string `locationName:"policyArn" type:"string"` + + // The JSON document that describes the policy. + PolicyDocument *string `locationName:"policyDocument" type:"string"` + + // The policy name. + PolicyName *string `locationName:"policyName" min:"1" type:"string"` + + // The policy version ID. + PolicyVersionId *string `locationName:"policyVersionId" type:"string"` } // String returns the string representation -func (s DeleteTopicRuleOutput) String() string { +func (s GetPolicyVersionOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteTopicRuleOutput) GoString() string { +func (s GetPolicyVersionOutput) GoString() string { return s.String() } -type DeleteV2LoggingLevelInput struct { - _ struct{} `type:"structure"` - - // The name of the resource for which you are configuring logging. 
- // - // TargetName is a required field - TargetName *string `location:"querystring" locationName:"targetName" type:"string" required:"true"` +// SetCreationDate sets the CreationDate field's value. +func (s *GetPolicyVersionOutput) SetCreationDate(v time.Time) *GetPolicyVersionOutput { + s.CreationDate = &v + return s +} - // The type of resource for which you are configuring logging. Must be THING_Group. - // - // TargetType is a required field - TargetType *string `location:"querystring" locationName:"targetType" type:"string" required:"true" enum:"LogTargetType"` +// SetGenerationId sets the GenerationId field's value. +func (s *GetPolicyVersionOutput) SetGenerationId(v string) *GetPolicyVersionOutput { + s.GenerationId = &v + return s } -// String returns the string representation -func (s DeleteV2LoggingLevelInput) String() string { - return awsutil.Prettify(s) +// SetIsDefaultVersion sets the IsDefaultVersion field's value. +func (s *GetPolicyVersionOutput) SetIsDefaultVersion(v bool) *GetPolicyVersionOutput { + s.IsDefaultVersion = &v + return s } -// GoString returns the string representation -func (s DeleteV2LoggingLevelInput) GoString() string { - return s.String() +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *GetPolicyVersionOutput) SetLastModifiedDate(v time.Time) *GetPolicyVersionOutput { + s.LastModifiedDate = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteV2LoggingLevelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteV2LoggingLevelInput"} - if s.TargetName == nil { - invalidParams.Add(request.NewErrParamRequired("TargetName")) - } - if s.TargetType == nil { - invalidParams.Add(request.NewErrParamRequired("TargetType")) - } +// SetPolicyArn sets the PolicyArn field's value. +func (s *GetPolicyVersionOutput) SetPolicyArn(v string) *GetPolicyVersionOutput { + s.PolicyArn = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *GetPolicyVersionOutput) SetPolicyDocument(v string) *GetPolicyVersionOutput { + s.PolicyDocument = &v + return s } -// SetTargetName sets the TargetName field's value. -func (s *DeleteV2LoggingLevelInput) SetTargetName(v string) *DeleteV2LoggingLevelInput { - s.TargetName = &v +// SetPolicyName sets the PolicyName field's value. +func (s *GetPolicyVersionOutput) SetPolicyName(v string) *GetPolicyVersionOutput { + s.PolicyName = &v return s } -// SetTargetType sets the TargetType field's value. -func (s *DeleteV2LoggingLevelInput) SetTargetType(v string) *DeleteV2LoggingLevelInput { - s.TargetType = &v +// SetPolicyVersionId sets the PolicyVersionId field's value. +func (s *GetPolicyVersionOutput) SetPolicyVersionId(v string) *GetPolicyVersionOutput { + s.PolicyVersionId = &v return s } -type DeleteV2LoggingLevelOutput struct { +// The input to the GetRegistrationCode operation. +type GetRegistrationCodeInput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s DeleteV2LoggingLevelOutput) String() string { +func (s GetRegistrationCodeInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteV2LoggingLevelOutput) GoString() string { +func (s GetRegistrationCodeInput) GoString() string { return s.String() } -// Contains information that denied the authorization. 
-type Denied struct { +// The output from the GetRegistrationCode operation. +type GetRegistrationCodeOutput struct { _ struct{} `type:"structure"` - // Information that explicitly denies the authorization. - ExplicitDeny *ExplicitDeny `locationName:"explicitDeny" type:"structure"` - - // Information that implicitly denies the authorization. When a policy doesn't - // explicitly deny or allow an action on a resource it is considered an implicit - // deny. - ImplicitDeny *ImplicitDeny `locationName:"implicitDeny" type:"structure"` + // The CA certificate registration code. + RegistrationCode *string `locationName:"registrationCode" min:"64" type:"string"` } // String returns the string representation -func (s Denied) String() string { +func (s GetRegistrationCodeOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Denied) GoString() string { +func (s GetRegistrationCodeOutput) GoString() string { return s.String() } -// SetExplicitDeny sets the ExplicitDeny field's value. -func (s *Denied) SetExplicitDeny(v *ExplicitDeny) *Denied { - s.ExplicitDeny = v - return s -} - -// SetImplicitDeny sets the ImplicitDeny field's value. -func (s *Denied) SetImplicitDeny(v *ImplicitDeny) *Denied { - s.ImplicitDeny = v +// SetRegistrationCode sets the RegistrationCode field's value. +func (s *GetRegistrationCodeOutput) SetRegistrationCode(v string) *GetRegistrationCodeOutput { + s.RegistrationCode = &v return s } -// The input for the DeprecateThingType operation. -type DeprecateThingTypeInput struct { +// The input for the GetTopicRule operation. +type GetTopicRuleInput struct { _ struct{} `type:"structure"` - // The name of the thing type to deprecate. + // The name of the rule. // - // ThingTypeName is a required field - ThingTypeName *string `location:"uri" locationName:"thingTypeName" min:"1" type:"string" required:"true"` - - // Whether to undeprecate a deprecated thing type. If true, the thing type will - // not be deprecated anymore and you can associate it with things. - UndoDeprecate *bool `locationName:"undoDeprecate" type:"boolean"` + // RuleName is a required field + RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s DeprecateThingTypeInput) String() string { +func (s GetTopicRuleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeprecateThingTypeInput) GoString() string { +func (s GetTopicRuleInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeprecateThingTypeInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeprecateThingTypeInput"} - if s.ThingTypeName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingTypeName")) +func (s *GetTopicRuleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetTopicRuleInput"} + if s.RuleName == nil { + invalidParams.Add(request.NewErrParamRequired("RuleName")) } - if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) + if s.RuleName != nil && len(*s.RuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) } if invalidParams.Len() > 0 { @@ -15564,583 +25110,692 @@ func (s *DeprecateThingTypeInput) Validate() error { return nil } -// SetThingTypeName sets the ThingTypeName field's value. 
-func (s *DeprecateThingTypeInput) SetThingTypeName(v string) *DeprecateThingTypeInput { - s.ThingTypeName = &v +// SetRuleName sets the RuleName field's value. +func (s *GetTopicRuleInput) SetRuleName(v string) *GetTopicRuleInput { + s.RuleName = &v + return s +} + +// The output from the GetTopicRule operation. +type GetTopicRuleOutput struct { + _ struct{} `type:"structure"` + + // The rule. + Rule *TopicRule `locationName:"rule" type:"structure"` + + // The rule ARN. + RuleArn *string `locationName:"ruleArn" type:"string"` +} + +// String returns the string representation +func (s GetTopicRuleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetTopicRuleOutput) GoString() string { + return s.String() +} + +// SetRule sets the Rule field's value. +func (s *GetTopicRuleOutput) SetRule(v *TopicRule) *GetTopicRuleOutput { + s.Rule = v return s } -// SetUndoDeprecate sets the UndoDeprecate field's value. -func (s *DeprecateThingTypeInput) SetUndoDeprecate(v bool) *DeprecateThingTypeInput { - s.UndoDeprecate = &v +// SetRuleArn sets the RuleArn field's value. +func (s *GetTopicRuleOutput) SetRuleArn(v string) *GetTopicRuleOutput { + s.RuleArn = &v return s } -// The output for the DeprecateThingType operation. -type DeprecateThingTypeOutput struct { +type GetV2LoggingOptionsInput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s DeprecateThingTypeOutput) String() string { +func (s GetV2LoggingOptionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeprecateThingTypeOutput) GoString() string { +func (s GetV2LoggingOptionsInput) GoString() string { return s.String() } -type DescribeAuthorizerInput struct { +type GetV2LoggingOptionsOutput struct { _ struct{} `type:"structure"` - // The name of the authorizer to describe. - // - // AuthorizerName is a required field - AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` + // The default log level. + DefaultLogLevel *string `locationName:"defaultLogLevel" type:"string" enum:"LogLevel"` + + // Disables all logs. + DisableAllLogs *bool `locationName:"disableAllLogs" type:"boolean"` + + // The IAM role ARN AWS IoT uses to write to your CloudWatch logs. + RoleArn *string `locationName:"roleArn" type:"string"` } // String returns the string representation -func (s DescribeAuthorizerInput) String() string { +func (s GetV2LoggingOptionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeAuthorizerInput) GoString() string { +func (s GetV2LoggingOptionsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeAuthorizerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeAuthorizerInput"} - if s.AuthorizerName == nil { - invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) - } - if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) - } +// SetDefaultLogLevel sets the DefaultLogLevel field's value. +func (s *GetV2LoggingOptionsOutput) SetDefaultLogLevel(v string) *GetV2LoggingOptionsOutput { + s.DefaultLogLevel = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetDisableAllLogs sets the DisableAllLogs field's value. 
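Reading a topic rule through the GetTopicRule shapes above looks much the same; a minimal sketch with a placeholder rule name:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	svc := iot.New(session.Must(session.NewSession()))

	// "example_rule" is a placeholder; rule names are validated for min length 1.
	out, err := svc.GetTopicRule(&iot.GetTopicRuleInput{
		RuleName: aws.String("example_rule"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// The output carries the rule ARN and the full TopicRule structure.
	fmt.Println("rule ARN:", aws.StringValue(out.RuleArn))
	fmt.Println(out.Rule)
}
```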
+func (s *GetV2LoggingOptionsOutput) SetDisableAllLogs(v bool) *GetV2LoggingOptionsOutput { + s.DisableAllLogs = &v + return s } -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *DescribeAuthorizerInput) SetAuthorizerName(v string) *DescribeAuthorizerInput { - s.AuthorizerName = &v +// SetRoleArn sets the RoleArn field's value. +func (s *GetV2LoggingOptionsOutput) SetRoleArn(v string) *GetV2LoggingOptionsOutput { + s.RoleArn = &v return s } -type DescribeAuthorizerOutput struct { +// The name and ARN of a group. +type GroupNameAndArn struct { _ struct{} `type:"structure"` - // The authorizer description. - AuthorizerDescription *AuthorizerDescription `locationName:"authorizerDescription" type:"structure"` + // The group ARN. + GroupArn *string `locationName:"groupArn" type:"string"` + + // The group name. + GroupName *string `locationName:"groupName" min:"1" type:"string"` } // String returns the string representation -func (s DescribeAuthorizerOutput) String() string { +func (s GroupNameAndArn) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeAuthorizerOutput) GoString() string { +func (s GroupNameAndArn) GoString() string { return s.String() } -// SetAuthorizerDescription sets the AuthorizerDescription field's value. -func (s *DescribeAuthorizerOutput) SetAuthorizerDescription(v *AuthorizerDescription) *DescribeAuthorizerOutput { - s.AuthorizerDescription = v +// SetGroupArn sets the GroupArn field's value. +func (s *GroupNameAndArn) SetGroupArn(v string) *GroupNameAndArn { + s.GroupArn = &v return s } -// The input for the DescribeCACertificate operation. -type DescribeCACertificateInput struct { +// SetGroupName sets the GroupName field's value. +func (s *GroupNameAndArn) SetGroupName(v string) *GroupNameAndArn { + s.GroupName = &v + return s +} + +// Information that implicitly denies authorization. When policy doesn't explicitly +// deny or allow an action on a resource it is considered an implicit deny. +type ImplicitDeny struct { _ struct{} `type:"structure"` - // The CA certificate identifier. - // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"caCertificateId" min:"64" type:"string" required:"true"` + // Policies that don't contain a matching allow or deny statement for the specified + // action on the specified resource. + Policies []*Policy `locationName:"policies" type:"list"` } // String returns the string representation -func (s DescribeCACertificateInput) String() string { +func (s ImplicitDeny) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeCACertificateInput) GoString() string { +func (s ImplicitDeny) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeCACertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeCACertificateInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) - } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetCertificateId sets the CertificateId field's value. -func (s *DescribeCACertificateInput) SetCertificateId(v string) *DescribeCACertificateInput { - s.CertificateId = &v +// SetPolicies sets the Policies field's value. 
+func (s *ImplicitDeny) SetPolicies(v []*Policy) *ImplicitDeny { + s.Policies = v return s } -// The output from the DescribeCACertificate operation. -type DescribeCACertificateOutput struct { +// Sends messge data to an AWS IoT Analytics channel. +type IotAnalyticsAction struct { _ struct{} `type:"structure"` - // The CA certificate description. - CertificateDescription *CACertificateDescription `locationName:"certificateDescription" type:"structure"` + // (deprecated) The ARN of the IoT Analytics channel to which message data will + // be sent. + ChannelArn *string `locationName:"channelArn" type:"string"` - // Information about the registration configuration. - RegistrationConfig *RegistrationConfig `locationName:"registrationConfig" type:"structure"` + // The name of the IoT Analytics channel to which message data will be sent. + ChannelName *string `locationName:"channelName" type:"string"` + + // The ARN of the role which has a policy that grants IoT Analytics permission + // to send message data via IoT Analytics (iotanalytics:BatchPutMessage). + RoleArn *string `locationName:"roleArn" type:"string"` } // String returns the string representation -func (s DescribeCACertificateOutput) String() string { +func (s IotAnalyticsAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeCACertificateOutput) GoString() string { +func (s IotAnalyticsAction) GoString() string { return s.String() } -// SetCertificateDescription sets the CertificateDescription field's value. -func (s *DescribeCACertificateOutput) SetCertificateDescription(v *CACertificateDescription) *DescribeCACertificateOutput { - s.CertificateDescription = v +// SetChannelArn sets the ChannelArn field's value. +func (s *IotAnalyticsAction) SetChannelArn(v string) *IotAnalyticsAction { + s.ChannelArn = &v return s } -// SetRegistrationConfig sets the RegistrationConfig field's value. -func (s *DescribeCACertificateOutput) SetRegistrationConfig(v *RegistrationConfig) *DescribeCACertificateOutput { - s.RegistrationConfig = v +// SetChannelName sets the ChannelName field's value. +func (s *IotAnalyticsAction) SetChannelName(v string) *IotAnalyticsAction { + s.ChannelName = &v return s } -// The input for the DescribeCertificate operation. -type DescribeCertificateInput struct { +// SetRoleArn sets the RoleArn field's value. +func (s *IotAnalyticsAction) SetRoleArn(v string) *IotAnalyticsAction { + s.RoleArn = &v + return s +} + +// The Job object contains details about a job. +type Job struct { _ struct{} `type:"structure"` - // The ID of the certificate. - // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` + // Configuration for criteria to abort the job. + AbortConfig *AbortConfig `locationName:"abortConfig" type:"structure"` + + // If the job was updated, describes the reason for the update. + Comment *string `locationName:"comment" type:"string"` + + // The time, in milliseconds since the epoch, when the job was completed. + CompletedAt *time.Time `locationName:"completedAt" type:"timestamp"` + + // The time, in milliseconds since the epoch, when the job was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // A short text description of the job. + Description *string `locationName:"description" type:"string"` + + // Will be true if the job was canceled with the optional force parameter set + // to true. 
+ ForceCanceled *bool `locationName:"forceCanceled" type:"boolean"` + + // An ARN identifying the job with format "arn:aws:iot:region:account:job/jobId". + JobArn *string `locationName:"jobArn" type:"string"` + + // Allows you to create a staged rollout of a job. + JobExecutionsRolloutConfig *JobExecutionsRolloutConfig `locationName:"jobExecutionsRolloutConfig" type:"structure"` + + // The unique identifier you assigned to this job when it was created. + JobId *string `locationName:"jobId" min:"1" type:"string"` + + // Details about the job process. + JobProcessDetails *JobProcessDetails `locationName:"jobProcessDetails" type:"structure"` + + // The time, in milliseconds since the epoch, when the job was last updated. + LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp"` + + // Configuration for pre-signed S3 URLs. + PresignedUrlConfig *PresignedUrlConfig `locationName:"presignedUrlConfig" type:"structure"` + + // If the job was updated, provides the reason code for the update. + ReasonCode *string `locationName:"reasonCode" type:"string"` + + // The status of the job, one of IN_PROGRESS, CANCELED, DELETION_IN_PROGRESS + // or COMPLETED. + Status *string `locationName:"status" type:"string" enum:"JobStatus"` + + // Specifies whether the job will continue to run (CONTINUOUS), or will be complete + // after all those things specified as targets have completed the job (SNAPSHOT). + // If continuous, the job may also be run on a thing when a change is detected + // in a target. For example, a job will run on a device when the thing representing + // the device is added to a target group, even after the job was completed by + // all things originally in the group. + TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + + // A list of IoT things and thing groups to which the job should be sent. + Targets []*string `locationName:"targets" min:"1" type:"list"` + + // Specifies the amount of time each device has to finish its execution of the + // job. A timer is started when the job execution status is set to IN_PROGRESS. + // If the job execution status is not set to another terminal state before the + // timer expires, it will be automatically set to TIMED_OUT. + TimeoutConfig *TimeoutConfig `locationName:"timeoutConfig" type:"structure"` } // String returns the string representation -func (s DescribeCertificateInput) String() string { +func (s Job) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeCertificateInput) GoString() string { +func (s Job) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeCertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeCertificateInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) - } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetCertificateId sets the CertificateId field's value. -func (s *DescribeCertificateInput) SetCertificateId(v string) *DescribeCertificateInput { - s.CertificateId = &v +// SetAbortConfig sets the AbortConfig field's value. +func (s *Job) SetAbortConfig(v *AbortConfig) *Job { + s.AbortConfig = v return s } -// The output of the DescribeCertificate operation. 
-type DescribeCertificateOutput struct { - _ struct{} `type:"structure"` - - // The description of the certificate. - CertificateDescription *CertificateDescription `locationName:"certificateDescription" type:"structure"` +// SetComment sets the Comment field's value. +func (s *Job) SetComment(v string) *Job { + s.Comment = &v + return s } -// String returns the string representation -func (s DescribeCertificateOutput) String() string { - return awsutil.Prettify(s) +// SetCompletedAt sets the CompletedAt field's value. +func (s *Job) SetCompletedAt(v time.Time) *Job { + s.CompletedAt = &v + return s } -// GoString returns the string representation -func (s DescribeCertificateOutput) GoString() string { - return s.String() +// SetCreatedAt sets the CreatedAt field's value. +func (s *Job) SetCreatedAt(v time.Time) *Job { + s.CreatedAt = &v + return s } -// SetCertificateDescription sets the CertificateDescription field's value. -func (s *DescribeCertificateOutput) SetCertificateDescription(v *CertificateDescription) *DescribeCertificateOutput { - s.CertificateDescription = v +// SetDescription sets the Description field's value. +func (s *Job) SetDescription(v string) *Job { + s.Description = &v return s } -type DescribeDefaultAuthorizerInput struct { - _ struct{} `type:"structure"` +// SetForceCanceled sets the ForceCanceled field's value. +func (s *Job) SetForceCanceled(v bool) *Job { + s.ForceCanceled = &v + return s } -// String returns the string representation -func (s DescribeDefaultAuthorizerInput) String() string { - return awsutil.Prettify(s) +// SetJobArn sets the JobArn field's value. +func (s *Job) SetJobArn(v string) *Job { + s.JobArn = &v + return s } -// GoString returns the string representation -func (s DescribeDefaultAuthorizerInput) GoString() string { - return s.String() +// SetJobExecutionsRolloutConfig sets the JobExecutionsRolloutConfig field's value. +func (s *Job) SetJobExecutionsRolloutConfig(v *JobExecutionsRolloutConfig) *Job { + s.JobExecutionsRolloutConfig = v + return s } -type DescribeDefaultAuthorizerOutput struct { - _ struct{} `type:"structure"` - - // The default authorizer's description. - AuthorizerDescription *AuthorizerDescription `locationName:"authorizerDescription" type:"structure"` +// SetJobId sets the JobId field's value. +func (s *Job) SetJobId(v string) *Job { + s.JobId = &v + return s } -// String returns the string representation -func (s DescribeDefaultAuthorizerOutput) String() string { - return awsutil.Prettify(s) +// SetJobProcessDetails sets the JobProcessDetails field's value. +func (s *Job) SetJobProcessDetails(v *JobProcessDetails) *Job { + s.JobProcessDetails = v + return s } -// GoString returns the string representation -func (s DescribeDefaultAuthorizerOutput) GoString() string { - return s.String() +// SetLastUpdatedAt sets the LastUpdatedAt field's value. +func (s *Job) SetLastUpdatedAt(v time.Time) *Job { + s.LastUpdatedAt = &v + return s } -// SetAuthorizerDescription sets the AuthorizerDescription field's value. -func (s *DescribeDefaultAuthorizerOutput) SetAuthorizerDescription(v *AuthorizerDescription) *DescribeDefaultAuthorizerOutput { - s.AuthorizerDescription = v +// SetPresignedUrlConfig sets the PresignedUrlConfig field's value. +func (s *Job) SetPresignedUrlConfig(v *PresignedUrlConfig) *Job { + s.PresignedUrlConfig = v return s } -// The input for the DescribeEndpoint operation. -type DescribeEndpointInput struct { - _ struct{} `type:"structure"` +// SetReasonCode sets the ReasonCode field's value. 
+func (s *Job) SetReasonCode(v string) *Job { + s.ReasonCode = &v + return s +} - // The endpoint type. - EndpointType *string `location:"querystring" locationName:"endpointType" type:"string"` +// SetStatus sets the Status field's value. +func (s *Job) SetStatus(v string) *Job { + s.Status = &v + return s } -// String returns the string representation -func (s DescribeEndpointInput) String() string { - return awsutil.Prettify(s) +// SetTargetSelection sets the TargetSelection field's value. +func (s *Job) SetTargetSelection(v string) *Job { + s.TargetSelection = &v + return s } -// GoString returns the string representation -func (s DescribeEndpointInput) GoString() string { - return s.String() +// SetTargets sets the Targets field's value. +func (s *Job) SetTargets(v []*string) *Job { + s.Targets = v + return s } -// SetEndpointType sets the EndpointType field's value. -func (s *DescribeEndpointInput) SetEndpointType(v string) *DescribeEndpointInput { - s.EndpointType = &v +// SetTimeoutConfig sets the TimeoutConfig field's value. +func (s *Job) SetTimeoutConfig(v *TimeoutConfig) *Job { + s.TimeoutConfig = v return s } -// The output from the DescribeEndpoint operation. -type DescribeEndpointOutput struct { +// The job execution object represents the execution of a job on a particular +// device. +type JobExecution struct { _ struct{} `type:"structure"` - // The endpoint. The format of the endpoint is as follows: identifier.iot.region.amazonaws.com. - EndpointAddress *string `locationName:"endpointAddress" type:"string"` + // The estimated number of seconds that remain before the job execution status + // will be changed to TIMED_OUT. The timeout interval can be anywhere between + // 1 minute and 7 days (1 to 10080 minutes). The actual job execution timeout + // can occur up to 60 seconds later than the estimated duration. This value + // will not be included if the job execution has reached a terminal status. + ApproximateSecondsBeforeTimedOut *int64 `locationName:"approximateSecondsBeforeTimedOut" type:"long"` + + // A string (consisting of the digits "0" through "9") which identifies this + // particular job execution on this particular device. It can be used in commands + // which return or update job execution information. + ExecutionNumber *int64 `locationName:"executionNumber" type:"long"` + + // Will be true if the job execution was canceled with the optional force parameter + // set to true. + ForceCanceled *bool `locationName:"forceCanceled" type:"boolean"` + + // The unique identifier you assigned to the job when it was created. + JobId *string `locationName:"jobId" min:"1" type:"string"` + + // The time, in milliseconds since the epoch, when the job execution was last + // updated. + LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp"` + + // The time, in milliseconds since the epoch, when the job execution was queued. + QueuedAt *time.Time `locationName:"queuedAt" type:"timestamp"` + + // The time, in milliseconds since the epoch, when the job execution started. + StartedAt *time.Time `locationName:"startedAt" type:"timestamp"` + + // The status of the job execution (IN_PROGRESS, QUEUED, FAILED, SUCCEEDED, + // TIMED_OUT, CANCELED, or REJECTED). + Status *string `locationName:"status" type:"string" enum:"JobExecutionStatus"` + + // A collection of name/value pairs that describe the status of the job execution. 
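The Job structure above is what job-describing calls return; a minimal sketch that fetches one job and reads a few of the documented fields (the job ID is a placeholder, and DescribeJob is the standard generated operation rather than anything specific to this change):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	svc := iot.New(session.Must(session.NewSession()))

	// "example-job" is a placeholder job ID.
	out, err := svc.DescribeJob(&iot.DescribeJobInput{
		JobId: aws.String("example-job"),
	})
	if err != nil {
		log.Fatal(err)
	}

	if job := out.Job; job != nil {
		fmt.Println("status:", aws.StringValue(job.Status))
		fmt.Println("target selection:", aws.StringValue(job.TargetSelection))
		if d := job.JobProcessDetails; d != nil {
			fmt.Println("in progress:", aws.Int64Value(d.NumberOfInProgressThings))
		}
	}
}
```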
+ StatusDetails *JobExecutionStatusDetails `locationName:"statusDetails" type:"structure"` + + // The ARN of the thing on which the job execution is running. + ThingArn *string `locationName:"thingArn" type:"string"` + + // The version of the job execution. Job execution versions are incremented + // each time they are updated by a device. + VersionNumber *int64 `locationName:"versionNumber" type:"long"` } // String returns the string representation -func (s DescribeEndpointOutput) String() string { +func (s JobExecution) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeEndpointOutput) GoString() string { +func (s JobExecution) GoString() string { return s.String() } -// SetEndpointAddress sets the EndpointAddress field's value. -func (s *DescribeEndpointOutput) SetEndpointAddress(v string) *DescribeEndpointOutput { - s.EndpointAddress = &v +// SetApproximateSecondsBeforeTimedOut sets the ApproximateSecondsBeforeTimedOut field's value. +func (s *JobExecution) SetApproximateSecondsBeforeTimedOut(v int64) *JobExecution { + s.ApproximateSecondsBeforeTimedOut = &v return s } -type DescribeEventConfigurationsInput struct { - _ struct{} `type:"structure"` +// SetExecutionNumber sets the ExecutionNumber field's value. +func (s *JobExecution) SetExecutionNumber(v int64) *JobExecution { + s.ExecutionNumber = &v + return s } -// String returns the string representation -func (s DescribeEventConfigurationsInput) String() string { - return awsutil.Prettify(s) +// SetForceCanceled sets the ForceCanceled field's value. +func (s *JobExecution) SetForceCanceled(v bool) *JobExecution { + s.ForceCanceled = &v + return s } -// GoString returns the string representation -func (s DescribeEventConfigurationsInput) GoString() string { - return s.String() +// SetJobId sets the JobId field's value. +func (s *JobExecution) SetJobId(v string) *JobExecution { + s.JobId = &v + return s } -type DescribeEventConfigurationsOutput struct { - _ struct{} `type:"structure"` - - // The creation date of the event configuration. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - // The event configurations. - EventConfigurations map[string]*Configuration `locationName:"eventConfigurations" type:"map"` +// SetLastUpdatedAt sets the LastUpdatedAt field's value. +func (s *JobExecution) SetLastUpdatedAt(v time.Time) *JobExecution { + s.LastUpdatedAt = &v + return s +} - // The date the event configurations were last modified. - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` +// SetQueuedAt sets the QueuedAt field's value. +func (s *JobExecution) SetQueuedAt(v time.Time) *JobExecution { + s.QueuedAt = &v + return s } -// String returns the string representation -func (s DescribeEventConfigurationsOutput) String() string { - return awsutil.Prettify(s) +// SetStartedAt sets the StartedAt field's value. +func (s *JobExecution) SetStartedAt(v time.Time) *JobExecution { + s.StartedAt = &v + return s } -// GoString returns the string representation -func (s DescribeEventConfigurationsOutput) GoString() string { - return s.String() +// SetStatus sets the Status field's value. +func (s *JobExecution) SetStatus(v string) *JobExecution { + s.Status = &v + return s } -// SetCreationDate sets the CreationDate field's value. 
-func (s *DescribeEventConfigurationsOutput) SetCreationDate(v time.Time) *DescribeEventConfigurationsOutput { - s.CreationDate = &v +// SetStatusDetails sets the StatusDetails field's value. +func (s *JobExecution) SetStatusDetails(v *JobExecutionStatusDetails) *JobExecution { + s.StatusDetails = v return s } -// SetEventConfigurations sets the EventConfigurations field's value. -func (s *DescribeEventConfigurationsOutput) SetEventConfigurations(v map[string]*Configuration) *DescribeEventConfigurationsOutput { - s.EventConfigurations = v +// SetThingArn sets the ThingArn field's value. +func (s *JobExecution) SetThingArn(v string) *JobExecution { + s.ThingArn = &v return s } -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *DescribeEventConfigurationsOutput) SetLastModifiedDate(v time.Time) *DescribeEventConfigurationsOutput { - s.LastModifiedDate = &v +// SetVersionNumber sets the VersionNumber field's value. +func (s *JobExecution) SetVersionNumber(v int64) *JobExecution { + s.VersionNumber = &v return s } -type DescribeIndexInput struct { +// Details of the job execution status. +type JobExecutionStatusDetails struct { _ struct{} `type:"structure"` - // The index name. - // - // IndexName is a required field - IndexName *string `location:"uri" locationName:"indexName" min:"1" type:"string" required:"true"` + // The job execution status. + DetailsMap map[string]*string `locationName:"detailsMap" type:"map"` } // String returns the string representation -func (s DescribeIndexInput) String() string { +func (s JobExecutionStatusDetails) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeIndexInput) GoString() string { +func (s JobExecutionStatusDetails) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeIndexInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeIndexInput"} - if s.IndexName == nil { - invalidParams.Add(request.NewErrParamRequired("IndexName")) - } - if s.IndexName != nil && len(*s.IndexName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetIndexName sets the IndexName field's value. -func (s *DescribeIndexInput) SetIndexName(v string) *DescribeIndexInput { - s.IndexName = &v +// SetDetailsMap sets the DetailsMap field's value. +func (s *JobExecutionStatusDetails) SetDetailsMap(v map[string]*string) *JobExecutionStatusDetails { + s.DetailsMap = v return s } -type DescribeIndexOutput struct { +// The job execution summary. +type JobExecutionSummary struct { _ struct{} `type:"structure"` - // The index name. - IndexName *string `locationName:"indexName" min:"1" type:"string"` + // A string (consisting of the digits "0" through "9") which identifies this + // particular job execution on this particular device. It can be used later + // in commands which return or update job execution information. + ExecutionNumber *int64 `locationName:"executionNumber" type:"long"` - // The index status. - IndexStatus *string `locationName:"indexStatus" type:"string" enum:"IndexStatus"` + // The time, in milliseconds since the epoch, when the job execution was last + // updated. + LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp"` - // Contains a value that specifies the type of indexing performed. 
Valid values - // are: - // - // REGISTRY – Your thing index will contain only registry data. - // - // REGISTRY_AND_SHADOW - Your thing index will contain registry and shadow data. - Schema *string `locationName:"schema" type:"string"` + // The time, in milliseconds since the epoch, when the job execution was queued. + QueuedAt *time.Time `locationName:"queuedAt" type:"timestamp"` + + // The time, in milliseconds since the epoch, when the job execution started. + StartedAt *time.Time `locationName:"startedAt" type:"timestamp"` + + // The status of the job execution. + Status *string `locationName:"status" type:"string" enum:"JobExecutionStatus"` } // String returns the string representation -func (s DescribeIndexOutput) String() string { +func (s JobExecutionSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeIndexOutput) GoString() string { +func (s JobExecutionSummary) GoString() string { return s.String() } -// SetIndexName sets the IndexName field's value. -func (s *DescribeIndexOutput) SetIndexName(v string) *DescribeIndexOutput { - s.IndexName = &v +// SetExecutionNumber sets the ExecutionNumber field's value. +func (s *JobExecutionSummary) SetExecutionNumber(v int64) *JobExecutionSummary { + s.ExecutionNumber = &v + return s +} + +// SetLastUpdatedAt sets the LastUpdatedAt field's value. +func (s *JobExecutionSummary) SetLastUpdatedAt(v time.Time) *JobExecutionSummary { + s.LastUpdatedAt = &v + return s +} + +// SetQueuedAt sets the QueuedAt field's value. +func (s *JobExecutionSummary) SetQueuedAt(v time.Time) *JobExecutionSummary { + s.QueuedAt = &v return s } -// SetIndexStatus sets the IndexStatus field's value. -func (s *DescribeIndexOutput) SetIndexStatus(v string) *DescribeIndexOutput { - s.IndexStatus = &v +// SetStartedAt sets the StartedAt field's value. +func (s *JobExecutionSummary) SetStartedAt(v time.Time) *JobExecutionSummary { + s.StartedAt = &v return s } -// SetSchema sets the Schema field's value. -func (s *DescribeIndexOutput) SetSchema(v string) *DescribeIndexOutput { - s.Schema = &v +// SetStatus sets the Status field's value. +func (s *JobExecutionSummary) SetStatus(v string) *JobExecutionSummary { + s.Status = &v return s } -type DescribeJobExecutionInput struct { +// Contains a summary of information about job executions for a specific job. +type JobExecutionSummaryForJob struct { _ struct{} `type:"structure"` - // A string (consisting of the digits "0" through "9" which is used to specify - // a particular job execution on a particular device. - ExecutionNumber *int64 `location:"querystring" locationName:"executionNumber" type:"long"` - - // The unique identifier you assigned to this job when it was created. - // - // JobId is a required field - JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + // Contains a subset of information about a job execution. + JobExecutionSummary *JobExecutionSummary `locationName:"jobExecutionSummary" type:"structure"` - // The name of the thing on which the job execution is running. - // - // ThingName is a required field - ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + // The ARN of the thing on which the job execution is running. 
+ ThingArn *string `locationName:"thingArn" type:"string"` } // String returns the string representation -func (s DescribeJobExecutionInput) String() string { +func (s JobExecutionSummaryForJob) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeJobExecutionInput) GoString() string { +func (s JobExecutionSummaryForJob) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeJobExecutionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeJobExecutionInput"} - if s.JobId == nil { - invalidParams.Add(request.NewErrParamRequired("JobId")) - } - if s.JobId != nil && len(*s.JobId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) - } - if s.ThingName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingName")) - } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetExecutionNumber sets the ExecutionNumber field's value. -func (s *DescribeJobExecutionInput) SetExecutionNumber(v int64) *DescribeJobExecutionInput { - s.ExecutionNumber = &v - return s -} - -// SetJobId sets the JobId field's value. -func (s *DescribeJobExecutionInput) SetJobId(v string) *DescribeJobExecutionInput { - s.JobId = &v +// SetJobExecutionSummary sets the JobExecutionSummary field's value. +func (s *JobExecutionSummaryForJob) SetJobExecutionSummary(v *JobExecutionSummary) *JobExecutionSummaryForJob { + s.JobExecutionSummary = v return s } -// SetThingName sets the ThingName field's value. -func (s *DescribeJobExecutionInput) SetThingName(v string) *DescribeJobExecutionInput { - s.ThingName = &v +// SetThingArn sets the ThingArn field's value. +func (s *JobExecutionSummaryForJob) SetThingArn(v string) *JobExecutionSummaryForJob { + s.ThingArn = &v return s } -type DescribeJobExecutionOutput struct { +// The job execution summary for a thing. +type JobExecutionSummaryForThing struct { _ struct{} `type:"structure"` - // Information about the job execution. - Execution *JobExecution `locationName:"execution" type:"structure"` + // Contains a subset of information about a job execution. + JobExecutionSummary *JobExecutionSummary `locationName:"jobExecutionSummary" type:"structure"` + + // The unique identifier you assigned to this job when it was created. + JobId *string `locationName:"jobId" min:"1" type:"string"` } // String returns the string representation -func (s DescribeJobExecutionOutput) String() string { +func (s JobExecutionSummaryForThing) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeJobExecutionOutput) GoString() string { +func (s JobExecutionSummaryForThing) GoString() string { return s.String() } -// SetExecution sets the Execution field's value. -func (s *DescribeJobExecutionOutput) SetExecution(v *JobExecution) *DescribeJobExecutionOutput { - s.Execution = v +// SetJobExecutionSummary sets the JobExecutionSummary field's value. +func (s *JobExecutionSummaryForThing) SetJobExecutionSummary(v *JobExecutionSummary) *JobExecutionSummaryForThing { + s.JobExecutionSummary = v return s } -type DescribeJobInput struct { +// SetJobId sets the JobId field's value. 
+func (s *JobExecutionSummaryForThing) SetJobId(v string) *JobExecutionSummaryForThing { + s.JobId = &v + return s +} + +// Allows you to create a staged rollout of a job. +type JobExecutionsRolloutConfig struct { _ struct{} `type:"structure"` - // The unique identifier you assigned to this job when it was created. - // - // JobId is a required field - JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + // The rate of increase for a job rollout. This parameter allows you to define + // an exponential rate for a job rollout. + ExponentialRate *ExponentialRolloutRate `locationName:"exponentialRate" type:"structure"` + + // The maximum number of things that will be notified of a pending job, per + // minute. This parameter allows you to create a staged rollout. + MaximumPerMinute *int64 `locationName:"maximumPerMinute" min:"1" type:"integer"` } // String returns the string representation -func (s DescribeJobInput) String() string { +func (s JobExecutionsRolloutConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeJobInput) GoString() string { +func (s JobExecutionsRolloutConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeJobInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeJobInput"} - if s.JobId == nil { - invalidParams.Add(request.NewErrParamRequired("JobId")) +func (s *JobExecutionsRolloutConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "JobExecutionsRolloutConfig"} + if s.MaximumPerMinute != nil && *s.MaximumPerMinute < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaximumPerMinute", 1)) } - if s.JobId != nil && len(*s.JobId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) + if s.ExponentialRate != nil { + if err := s.ExponentialRate.Validate(); err != nil { + invalidParams.AddNested("ExponentialRate", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -16149,199 +25804,279 @@ func (s *DescribeJobInput) Validate() error { return nil } -// SetJobId sets the JobId field's value. -func (s *DescribeJobInput) SetJobId(v string) *DescribeJobInput { - s.JobId = &v +// SetExponentialRate sets the ExponentialRate field's value. +func (s *JobExecutionsRolloutConfig) SetExponentialRate(v *ExponentialRolloutRate) *JobExecutionsRolloutConfig { + s.ExponentialRate = v return s } -type DescribeJobOutput struct { +// SetMaximumPerMinute sets the MaximumPerMinute field's value. +func (s *JobExecutionsRolloutConfig) SetMaximumPerMinute(v int64) *JobExecutionsRolloutConfig { + s.MaximumPerMinute = &v + return s +} + +// The job process details. +type JobProcessDetails struct { _ struct{} `type:"structure"` - // An S3 link to the job document. - DocumentSource *string `locationName:"documentSource" min:"1" type:"string"` + // The number of things that cancelled the job. + NumberOfCanceledThings *int64 `locationName:"numberOfCanceledThings" type:"integer"` - // Information about the job. - Job *Job `locationName:"job" type:"structure"` + // The number of things that failed executing the job. + NumberOfFailedThings *int64 `locationName:"numberOfFailedThings" type:"integer"` + + // The number of things currently executing the job. + NumberOfInProgressThings *int64 `locationName:"numberOfInProgressThings" type:"integer"` + + // The number of things that are awaiting execution of the job. 
+ NumberOfQueuedThings *int64 `locationName:"numberOfQueuedThings" type:"integer"` + + // The number of things that rejected the job. + NumberOfRejectedThings *int64 `locationName:"numberOfRejectedThings" type:"integer"` + + // The number of things that are no longer scheduled to execute the job because + // they have been deleted or have been removed from the group that was a target + // of the job. + NumberOfRemovedThings *int64 `locationName:"numberOfRemovedThings" type:"integer"` + + // The number of things which successfully completed the job. + NumberOfSucceededThings *int64 `locationName:"numberOfSucceededThings" type:"integer"` + + // The number of things whose job execution status is TIMED_OUT. + NumberOfTimedOutThings *int64 `locationName:"numberOfTimedOutThings" type:"integer"` + + // The target devices to which the job execution is being rolled out. This value + // will be null after the job execution has finished rolling out to all the + // target devices. + ProcessingTargets []*string `locationName:"processingTargets" type:"list"` } // String returns the string representation -func (s DescribeJobOutput) String() string { +func (s JobProcessDetails) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeJobOutput) GoString() string { +func (s JobProcessDetails) GoString() string { return s.String() } -// SetDocumentSource sets the DocumentSource field's value. -func (s *DescribeJobOutput) SetDocumentSource(v string) *DescribeJobOutput { - s.DocumentSource = &v +// SetNumberOfCanceledThings sets the NumberOfCanceledThings field's value. +func (s *JobProcessDetails) SetNumberOfCanceledThings(v int64) *JobProcessDetails { + s.NumberOfCanceledThings = &v return s } -// SetJob sets the Job field's value. -func (s *DescribeJobOutput) SetJob(v *Job) *DescribeJobOutput { - s.Job = v +// SetNumberOfFailedThings sets the NumberOfFailedThings field's value. +func (s *JobProcessDetails) SetNumberOfFailedThings(v int64) *JobProcessDetails { + s.NumberOfFailedThings = &v return s } -type DescribeRoleAliasInput struct { - _ struct{} `type:"structure"` +// SetNumberOfInProgressThings sets the NumberOfInProgressThings field's value. +func (s *JobProcessDetails) SetNumberOfInProgressThings(v int64) *JobProcessDetails { + s.NumberOfInProgressThings = &v + return s +} - // The role alias to describe. - // - // RoleAlias is a required field - RoleAlias *string `location:"uri" locationName:"roleAlias" min:"1" type:"string" required:"true"` +// SetNumberOfQueuedThings sets the NumberOfQueuedThings field's value. +func (s *JobProcessDetails) SetNumberOfQueuedThings(v int64) *JobProcessDetails { + s.NumberOfQueuedThings = &v + return s } -// String returns the string representation -func (s DescribeRoleAliasInput) String() string { - return awsutil.Prettify(s) +// SetNumberOfRejectedThings sets the NumberOfRejectedThings field's value. +func (s *JobProcessDetails) SetNumberOfRejectedThings(v int64) *JobProcessDetails { + s.NumberOfRejectedThings = &v + return s } -// GoString returns the string representation -func (s DescribeRoleAliasInput) GoString() string { - return s.String() +// SetNumberOfRemovedThings sets the NumberOfRemovedThings field's value. +func (s *JobProcessDetails) SetNumberOfRemovedThings(v int64) *JobProcessDetails { + s.NumberOfRemovedThings = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *DescribeRoleAliasInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeRoleAliasInput"} - if s.RoleAlias == nil { - invalidParams.Add(request.NewErrParamRequired("RoleAlias")) - } - if s.RoleAlias != nil && len(*s.RoleAlias) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RoleAlias", 1)) - } +// SetNumberOfSucceededThings sets the NumberOfSucceededThings field's value. +func (s *JobProcessDetails) SetNumberOfSucceededThings(v int64) *JobProcessDetails { + s.NumberOfSucceededThings = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetNumberOfTimedOutThings sets the NumberOfTimedOutThings field's value. +func (s *JobProcessDetails) SetNumberOfTimedOutThings(v int64) *JobProcessDetails { + s.NumberOfTimedOutThings = &v + return s } -// SetRoleAlias sets the RoleAlias field's value. -func (s *DescribeRoleAliasInput) SetRoleAlias(v string) *DescribeRoleAliasInput { - s.RoleAlias = &v +// SetProcessingTargets sets the ProcessingTargets field's value. +func (s *JobProcessDetails) SetProcessingTargets(v []*string) *JobProcessDetails { + s.ProcessingTargets = v return s } -type DescribeRoleAliasOutput struct { +// The job summary. +type JobSummary struct { _ struct{} `type:"structure"` - // The role alias description. - RoleAliasDescription *RoleAliasDescription `locationName:"roleAliasDescription" type:"structure"` + // The time, in milliseconds since the epoch, when the job completed. + CompletedAt *time.Time `locationName:"completedAt" type:"timestamp"` + + // The time, in milliseconds since the epoch, when the job was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // The job ARN. + JobArn *string `locationName:"jobArn" type:"string"` + + // The unique identifier you assigned to this job when it was created. + JobId *string `locationName:"jobId" min:"1" type:"string"` + + // The time, in milliseconds since the epoch, when the job was last updated. + LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp"` + + // The job summary status. + Status *string `locationName:"status" type:"string" enum:"JobStatus"` + + // Specifies whether the job will continue to run (CONTINUOUS), or will be complete + // after all those things specified as targets have completed the job (SNAPSHOT). + // If continuous, the job may also be run on a thing when a change is detected + // in a target. For example, a job will run on a thing when the thing is added + // to a target group, even after the job was completed by all things originally + // in the group. + TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + + // The ID of the thing group. + ThingGroupId *string `locationName:"thingGroupId" min:"1" type:"string"` } // String returns the string representation -func (s DescribeRoleAliasOutput) String() string { +func (s JobSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeRoleAliasOutput) GoString() string { +func (s JobSummary) GoString() string { return s.String() } -// SetRoleAliasDescription sets the RoleAliasDescription field's value. -func (s *DescribeRoleAliasOutput) SetRoleAliasDescription(v *RoleAliasDescription) *DescribeRoleAliasOutput { - s.RoleAliasDescription = v +// SetCompletedAt sets the CompletedAt field's value. 
+func (s *JobSummary) SetCompletedAt(v time.Time) *JobSummary { + s.CompletedAt = &v return s } -type DescribeStreamInput struct { - _ struct{} `type:"structure"` +// SetCreatedAt sets the CreatedAt field's value. +func (s *JobSummary) SetCreatedAt(v time.Time) *JobSummary { + s.CreatedAt = &v + return s +} - // The stream ID. - // - // StreamId is a required field - StreamId *string `location:"uri" locationName:"streamId" min:"1" type:"string" required:"true"` +// SetJobArn sets the JobArn field's value. +func (s *JobSummary) SetJobArn(v string) *JobSummary { + s.JobArn = &v + return s } -// String returns the string representation -func (s DescribeStreamInput) String() string { - return awsutil.Prettify(s) +// SetJobId sets the JobId field's value. +func (s *JobSummary) SetJobId(v string) *JobSummary { + s.JobId = &v + return s } -// GoString returns the string representation -func (s DescribeStreamInput) GoString() string { - return s.String() +// SetLastUpdatedAt sets the LastUpdatedAt field's value. +func (s *JobSummary) SetLastUpdatedAt(v time.Time) *JobSummary { + s.LastUpdatedAt = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeStreamInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeStreamInput"} - if s.StreamId == nil { - invalidParams.Add(request.NewErrParamRequired("StreamId")) - } - if s.StreamId != nil && len(*s.StreamId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("StreamId", 1)) - } +// SetStatus sets the Status field's value. +func (s *JobSummary) SetStatus(v string) *JobSummary { + s.Status = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetTargetSelection sets the TargetSelection field's value. +func (s *JobSummary) SetTargetSelection(v string) *JobSummary { + s.TargetSelection = &v + return s } -// SetStreamId sets the StreamId field's value. -func (s *DescribeStreamInput) SetStreamId(v string) *DescribeStreamInput { - s.StreamId = &v +// SetThingGroupId sets the ThingGroupId field's value. +func (s *JobSummary) SetThingGroupId(v string) *JobSummary { + s.ThingGroupId = &v return s } -type DescribeStreamOutput struct { +// Describes a key pair. +type KeyPair struct { _ struct{} `type:"structure"` - // Information about the stream. - StreamInfo *StreamInfo `locationName:"streamInfo" type:"structure"` + // The private key. + PrivateKey *string `min:"1" type:"string"` + + // The public key. + PublicKey *string `min:"1" type:"string"` } // String returns the string representation -func (s DescribeStreamOutput) String() string { +func (s KeyPair) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeStreamOutput) GoString() string { +func (s KeyPair) GoString() string { return s.String() } -// SetStreamInfo sets the StreamInfo field's value. -func (s *DescribeStreamOutput) SetStreamInfo(v *StreamInfo) *DescribeStreamOutput { - s.StreamInfo = v +// SetPrivateKey sets the PrivateKey field's value. +func (s *KeyPair) SetPrivateKey(v string) *KeyPair { + s.PrivateKey = &v return s } -type DescribeThingGroupInput struct { +// SetPublicKey sets the PublicKey field's value. +func (s *KeyPair) SetPublicKey(v string) *KeyPair { + s.PublicKey = &v + return s +} + +// Describes an action to write data to an Amazon Kinesis stream. +type KinesisAction struct { _ struct{} `type:"structure"` - // The name of the thing group. + // The partition key. 
+ PartitionKey *string `locationName:"partitionKey" type:"string"` + + // The ARN of the IAM role that grants access to the Amazon Kinesis stream. // - // ThingGroupName is a required field - ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // The name of the Amazon Kinesis stream. + // + // StreamName is a required field + StreamName *string `locationName:"streamName" type:"string" required:"true"` } // String returns the string representation -func (s DescribeThingGroupInput) String() string { +func (s KinesisAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeThingGroupInput) GoString() string { +func (s KinesisAction) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeThingGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeThingGroupInput"} - if s.ThingGroupName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) +func (s *KinesisAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisAction"} + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) } - if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + if s.StreamName == nil { + invalidParams.Add(request.NewErrParamRequired("StreamName")) } if invalidParams.Len() > 0 { @@ -16350,108 +26085,49 @@ func (s *DescribeThingGroupInput) Validate() error { return nil } -// SetThingGroupName sets the ThingGroupName field's value. -func (s *DescribeThingGroupInput) SetThingGroupName(v string) *DescribeThingGroupInput { - s.ThingGroupName = &v - return s -} - -type DescribeThingGroupOutput struct { - _ struct{} `type:"structure"` - - // The thing group ARN. - ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` - - // The thing group ID. - ThingGroupId *string `locationName:"thingGroupId" min:"1" type:"string"` - - // Thing group metadata. - ThingGroupMetadata *ThingGroupMetadata `locationName:"thingGroupMetadata" type:"structure"` - - // The name of the thing group. - ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` - - // The thing group properties. - ThingGroupProperties *ThingGroupProperties `locationName:"thingGroupProperties" type:"structure"` - - // The version of the thing group. - Version *int64 `locationName:"version" type:"long"` -} - -// String returns the string representation -func (s DescribeThingGroupOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DescribeThingGroupOutput) GoString() string { - return s.String() -} - -// SetThingGroupArn sets the ThingGroupArn field's value. -func (s *DescribeThingGroupOutput) SetThingGroupArn(v string) *DescribeThingGroupOutput { - s.ThingGroupArn = &v - return s -} - -// SetThingGroupId sets the ThingGroupId field's value. -func (s *DescribeThingGroupOutput) SetThingGroupId(v string) *DescribeThingGroupOutput { - s.ThingGroupId = &v - return s -} - -// SetThingGroupMetadata sets the ThingGroupMetadata field's value. 
-func (s *DescribeThingGroupOutput) SetThingGroupMetadata(v *ThingGroupMetadata) *DescribeThingGroupOutput { - s.ThingGroupMetadata = v - return s -} - -// SetThingGroupName sets the ThingGroupName field's value. -func (s *DescribeThingGroupOutput) SetThingGroupName(v string) *DescribeThingGroupOutput { - s.ThingGroupName = &v +// SetPartitionKey sets the PartitionKey field's value. +func (s *KinesisAction) SetPartitionKey(v string) *KinesisAction { + s.PartitionKey = &v return s } -// SetThingGroupProperties sets the ThingGroupProperties field's value. -func (s *DescribeThingGroupOutput) SetThingGroupProperties(v *ThingGroupProperties) *DescribeThingGroupOutput { - s.ThingGroupProperties = v +// SetRoleArn sets the RoleArn field's value. +func (s *KinesisAction) SetRoleArn(v string) *KinesisAction { + s.RoleArn = &v return s } -// SetVersion sets the Version field's value. -func (s *DescribeThingGroupOutput) SetVersion(v int64) *DescribeThingGroupOutput { - s.Version = &v +// SetStreamName sets the StreamName field's value. +func (s *KinesisAction) SetStreamName(v string) *KinesisAction { + s.StreamName = &v return s } -// The input for the DescribeThing operation. -type DescribeThingInput struct { +// Describes an action to invoke a Lambda function. +type LambdaAction struct { _ struct{} `type:"structure"` - // The name of the thing. + // The ARN of the Lambda function. // - // ThingName is a required field - ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + // FunctionArn is a required field + FunctionArn *string `locationName:"functionArn" type:"string" required:"true"` } // String returns the string representation -func (s DescribeThingInput) String() string { +func (s LambdaAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeThingInput) GoString() string { +func (s LambdaAction) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeThingInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeThingInput"} - if s.ThingName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingName")) - } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) +func (s *LambdaAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LambdaAction"} + if s.FunctionArn == nil { + invalidParams.Add(request.NewErrParamRequired("FunctionArn")) } if invalidParams.Len() > 0 { @@ -16460,282 +26136,267 @@ func (s *DescribeThingInput) Validate() error { return nil } -// SetThingName sets the ThingName field's value. -func (s *DescribeThingInput) SetThingName(v string) *DescribeThingInput { - s.ThingName = &v +// SetFunctionArn sets the FunctionArn field's value. +func (s *LambdaAction) SetFunctionArn(v string) *LambdaAction { + s.FunctionArn = &v return s } -// The output from the DescribeThing operation. -type DescribeThingOutput struct { +type ListActiveViolationsInput struct { _ struct{} `type:"structure"` - // The thing attributes. - Attributes map[string]*string `locationName:"attributes" type:"map"` - - // The default client ID. - DefaultClientId *string `locationName:"defaultClientId" type:"string"` - - // The ARN of the thing to describe. - ThingArn *string `locationName:"thingArn" type:"string"` - - // The ID of the thing to describe. 
- ThingId *string `locationName:"thingId" type:"string"` - - // The name of the thing. - ThingName *string `locationName:"thingName" min:"1" type:"string"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The thing type name. - ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` + // The token for the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The current version of the thing record in the registry. - // - // To avoid unintentional changes to the information in the registry, you can - // pass the version information in the expectedVersion parameter of the UpdateThing - // and DeleteThing calls. - Version *int64 `locationName:"version" type:"long"` + // The name of the Device Defender security profile for which violations are + // listed. + SecurityProfileName *string `location:"querystring" locationName:"securityProfileName" min:"1" type:"string"` + + // The name of the thing whose active violations are listed. + ThingName *string `location:"querystring" locationName:"thingName" min:"1" type:"string"` } // String returns the string representation -func (s DescribeThingOutput) String() string { +func (s ListActiveViolationsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeThingOutput) GoString() string { +func (s ListActiveViolationsInput) GoString() string { return s.String() } -// SetAttributes sets the Attributes field's value. -func (s *DescribeThingOutput) SetAttributes(v map[string]*string) *DescribeThingOutput { - s.Attributes = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListActiveViolationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListActiveViolationsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetDefaultClientId sets the DefaultClientId field's value. -func (s *DescribeThingOutput) SetDefaultClientId(v string) *DescribeThingOutput { - s.DefaultClientId = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListActiveViolationsInput) SetMaxResults(v int64) *ListActiveViolationsInput { + s.MaxResults = &v return s } -// SetThingArn sets the ThingArn field's value. -func (s *DescribeThingOutput) SetThingArn(v string) *DescribeThingOutput { - s.ThingArn = &v +// SetNextToken sets the NextToken field's value. +func (s *ListActiveViolationsInput) SetNextToken(v string) *ListActiveViolationsInput { + s.NextToken = &v return s } -// SetThingId sets the ThingId field's value. -func (s *DescribeThingOutput) SetThingId(v string) *DescribeThingOutput { - s.ThingId = &v +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *ListActiveViolationsInput) SetSecurityProfileName(v string) *ListActiveViolationsInput { + s.SecurityProfileName = &v return s } // SetThingName sets the ThingName field's value. 
-func (s *DescribeThingOutput) SetThingName(v string) *DescribeThingOutput { +func (s *ListActiveViolationsInput) SetThingName(v string) *ListActiveViolationsInput { s.ThingName = &v return s } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *DescribeThingOutput) SetThingTypeName(v string) *DescribeThingOutput { - s.ThingTypeName = &v - return s -} - -// SetVersion sets the Version field's value. -func (s *DescribeThingOutput) SetVersion(v int64) *DescribeThingOutput { - s.Version = &v - return s -} - -type DescribeThingRegistrationTaskInput struct { +type ListActiveViolationsOutput struct { _ struct{} `type:"structure"` - // The task ID. - // - // TaskId is a required field - TaskId *string `location:"uri" locationName:"taskId" type:"string" required:"true"` + // The list of active violations. + ActiveViolations []*ActiveViolation `locationName:"activeViolations" type:"list"` + + // A token that can be used to retrieve the next set of results, or null if + // there are no additional results. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s DescribeThingRegistrationTaskInput) String() string { +func (s ListActiveViolationsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeThingRegistrationTaskInput) GoString() string { +func (s ListActiveViolationsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeThingRegistrationTaskInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeThingRegistrationTaskInput"} - if s.TaskId == nil { - invalidParams.Add(request.NewErrParamRequired("TaskId")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetActiveViolations sets the ActiveViolations field's value. +func (s *ListActiveViolationsOutput) SetActiveViolations(v []*ActiveViolation) *ListActiveViolationsOutput { + s.ActiveViolations = v + return s } -// SetTaskId sets the TaskId field's value. -func (s *DescribeThingRegistrationTaskInput) SetTaskId(v string) *DescribeThingRegistrationTaskInput { - s.TaskId = &v +// SetNextToken sets the NextToken field's value. +func (s *ListActiveViolationsOutput) SetNextToken(v string) *ListActiveViolationsOutput { + s.NextToken = &v return s } -type DescribeThingRegistrationTaskOutput struct { +type ListAttachedPoliciesInput struct { _ struct{} `type:"structure"` - // The task creation date. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - // The number of things that failed to be provisioned. - FailureCount *int64 `locationName:"failureCount" type:"integer"` - - // The S3 bucket that contains the input file. - InputFileBucket *string `locationName:"inputFileBucket" min:"3" type:"string"` - - // The input file key. - InputFileKey *string `locationName:"inputFileKey" min:"1" type:"string"` - - // The date when the task was last modified. - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` - - // The message. - Message *string `locationName:"message" type:"string"` - - // The progress of the bulk provisioning task expressed as a percentage. - PercentageProgress *int64 `locationName:"percentageProgress" type:"integer"` - - // The role ARN that grants access to the input file bucket. 
- RoleArn *string `locationName:"roleArn" min:"20" type:"string"` - - // The status of the bulk thing provisioning task. - Status *string `locationName:"status" type:"string" enum:"Status"` + // The token to retrieve the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` - // The number of things successfully provisioned. - SuccessCount *int64 `locationName:"successCount" type:"integer"` + // The maximum number of results to be returned per request. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` - // The task ID. - TaskId *string `locationName:"taskId" type:"string"` + // When true, recursively list attached policies. + Recursive *bool `location:"querystring" locationName:"recursive" type:"boolean"` - // The task's template. - TemplateBody *string `locationName:"templateBody" type:"string"` + // The group for which the policies will be listed. + // + // Target is a required field + Target *string `location:"uri" locationName:"target" type:"string" required:"true"` } // String returns the string representation -func (s DescribeThingRegistrationTaskOutput) String() string { +func (s ListAttachedPoliciesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeThingRegistrationTaskOutput) GoString() string { +func (s ListAttachedPoliciesInput) GoString() string { return s.String() } -// SetCreationDate sets the CreationDate field's value. -func (s *DescribeThingRegistrationTaskOutput) SetCreationDate(v time.Time) *DescribeThingRegistrationTaskOutput { - s.CreationDate = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListAttachedPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAttachedPoliciesInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) + } + if s.Target == nil { + invalidParams.Add(request.NewErrParamRequired("Target")) + } -// SetFailureCount sets the FailureCount field's value. -func (s *DescribeThingRegistrationTaskOutput) SetFailureCount(v int64) *DescribeThingRegistrationTaskOutput { - s.FailureCount = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetInputFileBucket sets the InputFileBucket field's value. -func (s *DescribeThingRegistrationTaskOutput) SetInputFileBucket(v string) *DescribeThingRegistrationTaskOutput { - s.InputFileBucket = &v +// SetMarker sets the Marker field's value. +func (s *ListAttachedPoliciesInput) SetMarker(v string) *ListAttachedPoliciesInput { + s.Marker = &v return s } -// SetInputFileKey sets the InputFileKey field's value. -func (s *DescribeThingRegistrationTaskOutput) SetInputFileKey(v string) *DescribeThingRegistrationTaskOutput { - s.InputFileKey = &v +// SetPageSize sets the PageSize field's value. +func (s *ListAttachedPoliciesInput) SetPageSize(v int64) *ListAttachedPoliciesInput { + s.PageSize = &v return s } -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *DescribeThingRegistrationTaskOutput) SetLastModifiedDate(v time.Time) *DescribeThingRegistrationTaskOutput { - s.LastModifiedDate = &v +// SetRecursive sets the Recursive field's value. +func (s *ListAttachedPoliciesInput) SetRecursive(v bool) *ListAttachedPoliciesInput { + s.Recursive = &v return s } -// SetMessage sets the Message field's value. 
-func (s *DescribeThingRegistrationTaskOutput) SetMessage(v string) *DescribeThingRegistrationTaskOutput { - s.Message = &v +// SetTarget sets the Target field's value. +func (s *ListAttachedPoliciesInput) SetTarget(v string) *ListAttachedPoliciesInput { + s.Target = &v return s } -// SetPercentageProgress sets the PercentageProgress field's value. -func (s *DescribeThingRegistrationTaskOutput) SetPercentageProgress(v int64) *DescribeThingRegistrationTaskOutput { - s.PercentageProgress = &v - return s -} +type ListAttachedPoliciesOutput struct { + _ struct{} `type:"structure"` -// SetRoleArn sets the RoleArn field's value. -func (s *DescribeThingRegistrationTaskOutput) SetRoleArn(v string) *DescribeThingRegistrationTaskOutput { - s.RoleArn = &v - return s + // The token to retrieve the next set of results, or ``null`` if there are no + // more results. + NextMarker *string `locationName:"nextMarker" type:"string"` + + // The policies. + Policies []*Policy `locationName:"policies" type:"list"` } -// SetStatus sets the Status field's value. -func (s *DescribeThingRegistrationTaskOutput) SetStatus(v string) *DescribeThingRegistrationTaskOutput { - s.Status = &v - return s +// String returns the string representation +func (s ListAttachedPoliciesOutput) String() string { + return awsutil.Prettify(s) } -// SetSuccessCount sets the SuccessCount field's value. -func (s *DescribeThingRegistrationTaskOutput) SetSuccessCount(v int64) *DescribeThingRegistrationTaskOutput { - s.SuccessCount = &v - return s +// GoString returns the string representation +func (s ListAttachedPoliciesOutput) GoString() string { + return s.String() } -// SetTaskId sets the TaskId field's value. -func (s *DescribeThingRegistrationTaskOutput) SetTaskId(v string) *DescribeThingRegistrationTaskOutput { - s.TaskId = &v +// SetNextMarker sets the NextMarker field's value. +func (s *ListAttachedPoliciesOutput) SetNextMarker(v string) *ListAttachedPoliciesOutput { + s.NextMarker = &v return s } -// SetTemplateBody sets the TemplateBody field's value. -func (s *DescribeThingRegistrationTaskOutput) SetTemplateBody(v string) *DescribeThingRegistrationTaskOutput { - s.TemplateBody = &v +// SetPolicies sets the Policies field's value. +func (s *ListAttachedPoliciesOutput) SetPolicies(v []*Policy) *ListAttachedPoliciesOutput { + s.Policies = v return s } -// The input for the DescribeThingType operation. -type DescribeThingTypeInput struct { +type ListAuditFindingsInput struct { _ struct{} `type:"structure"` - // The name of the thing type. - // - // ThingTypeName is a required field - ThingTypeName *string `location:"uri" locationName:"thingTypeName" min:"1" type:"string" required:"true"` + // A filter to limit results to the findings for the specified audit check. + CheckName *string `locationName:"checkName" type:"string"` + + // A filter to limit results to those found before the specified time. You must + // specify either the startTime and endTime or the taskId, but not both. + EndTime *time.Time `locationName:"endTime" type:"timestamp"` + + // The maximum number of results to return at one time. The default is 25. + MaxResults *int64 `locationName:"maxResults" min:"1" type:"integer"` + + // The token for the next set of results. + NextToken *string `locationName:"nextToken" type:"string"` + + // Information identifying the non-compliant resource. + ResourceIdentifier *ResourceIdentifier `locationName:"resourceIdentifier" type:"structure"` + + // A filter to limit results to those found after the specified time. 
You must + // specify either the startTime and endTime or the taskId, but not both. + StartTime *time.Time `locationName:"startTime" type:"timestamp"` + + // A filter to limit results to the audit with the specified ID. You must specify + // either the taskId or the startTime and endTime, but not both. + TaskId *string `locationName:"taskId" min:"1" type:"string"` } // String returns the string representation -func (s DescribeThingTypeInput) String() string { +func (s ListAuditFindingsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeThingTypeInput) GoString() string { +func (s ListAuditFindingsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeThingTypeInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeThingTypeInput"} - if s.ThingTypeName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingTypeName")) +func (s *ListAuditFindingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAuditFindingsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) + if s.TaskId != nil && len(*s.TaskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 1)) + } + if s.ResourceIdentifier != nil { + if err := s.ResourceIdentifier.Validate(); err != nil { + invalidParams.AddNested("ResourceIdentifier", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -16744,110 +26405,132 @@ func (s *DescribeThingTypeInput) Validate() error { return nil } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *DescribeThingTypeInput) SetThingTypeName(v string) *DescribeThingTypeInput { - s.ThingTypeName = &v +// SetCheckName sets the CheckName field's value. +func (s *ListAuditFindingsInput) SetCheckName(v string) *ListAuditFindingsInput { + s.CheckName = &v + return s +} + +// SetEndTime sets the EndTime field's value. +func (s *ListAuditFindingsInput) SetEndTime(v time.Time) *ListAuditFindingsInput { + s.EndTime = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListAuditFindingsInput) SetMaxResults(v int64) *ListAuditFindingsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListAuditFindingsInput) SetNextToken(v string) *ListAuditFindingsInput { + s.NextToken = &v return s } -// The output for the DescribeThingType operation. -type DescribeThingTypeOutput struct { - _ struct{} `type:"structure"` +// SetResourceIdentifier sets the ResourceIdentifier field's value. +func (s *ListAuditFindingsInput) SetResourceIdentifier(v *ResourceIdentifier) *ListAuditFindingsInput { + s.ResourceIdentifier = v + return s +} - // The thing type ARN. - ThingTypeArn *string `locationName:"thingTypeArn" type:"string"` +// SetStartTime sets the StartTime field's value. +func (s *ListAuditFindingsInput) SetStartTime(v time.Time) *ListAuditFindingsInput { + s.StartTime = &v + return s +} - // The thing type ID. - ThingTypeId *string `locationName:"thingTypeId" type:"string"` +// SetTaskId sets the TaskId field's value. 
+func (s *ListAuditFindingsInput) SetTaskId(v string) *ListAuditFindingsInput { + s.TaskId = &v + return s +} - // The ThingTypeMetadata contains additional information about the thing type - // including: creation date and time, a value indicating whether the thing type - // is deprecated, and a date and time when it was deprecated. - ThingTypeMetadata *ThingTypeMetadata `locationName:"thingTypeMetadata" type:"structure"` +type ListAuditFindingsOutput struct { + _ struct{} `type:"structure"` - // The name of the thing type. - ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` + // The findings (results) of the audit. + Findings []*AuditFinding `locationName:"findings" type:"list"` - // The ThingTypeProperties contains information about the thing type including - // description, and a list of searchable thing attribute names. - ThingTypeProperties *ThingTypeProperties `locationName:"thingTypeProperties" type:"structure"` + // A token that can be used to retrieve the next set of results, or null if + // there are no additional results. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s DescribeThingTypeOutput) String() string { +func (s ListAuditFindingsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeThingTypeOutput) GoString() string { +func (s ListAuditFindingsOutput) GoString() string { return s.String() } -// SetThingTypeArn sets the ThingTypeArn field's value. -func (s *DescribeThingTypeOutput) SetThingTypeArn(v string) *DescribeThingTypeOutput { - s.ThingTypeArn = &v +// SetFindings sets the Findings field's value. +func (s *ListAuditFindingsOutput) SetFindings(v []*AuditFinding) *ListAuditFindingsOutput { + s.Findings = v return s } -// SetThingTypeId sets the ThingTypeId field's value. -func (s *DescribeThingTypeOutput) SetThingTypeId(v string) *DescribeThingTypeOutput { - s.ThingTypeId = &v +// SetNextToken sets the NextToken field's value. +func (s *ListAuditFindingsOutput) SetNextToken(v string) *ListAuditFindingsOutput { + s.NextToken = &v return s } -// SetThingTypeMetadata sets the ThingTypeMetadata field's value. -func (s *DescribeThingTypeOutput) SetThingTypeMetadata(v *ThingTypeMetadata) *DescribeThingTypeOutput { - s.ThingTypeMetadata = v - return s -} +type ListAuditTasksInput struct { + _ struct{} `type:"structure"` -// SetThingTypeName sets the ThingTypeName field's value. -func (s *DescribeThingTypeOutput) SetThingTypeName(v string) *DescribeThingTypeOutput { - s.ThingTypeName = &v - return s -} + // The end of the time period. + // + // EndTime is a required field + EndTime *time.Time `location:"querystring" locationName:"endTime" type:"timestamp" required:"true"` -// SetThingTypeProperties sets the ThingTypeProperties field's value. -func (s *DescribeThingTypeOutput) SetThingTypeProperties(v *ThingTypeProperties) *DescribeThingTypeOutput { - s.ThingTypeProperties = v - return s -} + // The maximum number of results to return at one time. The default is 25. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` -type DetachPolicyInput struct { - _ struct{} `type:"structure"` + // The token for the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The policy to detach. + // The beginning of the time period. Note that audit information is retained + // for a limited time (180 days). 
Requesting a start time prior to what is retained + // results in an "InvalidRequestException". // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // StartTime is a required field + StartTime *time.Time `location:"querystring" locationName:"startTime" type:"timestamp" required:"true"` - // The target from which the policy will be detached. - // - // Target is a required field - Target *string `locationName:"target" type:"string" required:"true"` + // A filter to limit the output to audits with the specified completion status: + // can be one of "IN_PROGRESS", "COMPLETED", "FAILED" or "CANCELED". + TaskStatus *string `location:"querystring" locationName:"taskStatus" type:"string" enum:"AuditTaskStatus"` + + // A filter to limit the output to the specified type of audit: can be one of + // "ON_DEMAND_AUDIT_TASK" or "SCHEDULED__AUDIT_TASK". + TaskType *string `location:"querystring" locationName:"taskType" type:"string" enum:"AuditTaskType"` } // String returns the string representation -func (s DetachPolicyInput) String() string { +func (s ListAuditTasksInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachPolicyInput) GoString() string { +func (s ListAuditTasksInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DetachPolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DetachPolicyInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) +func (s *ListAuditTasksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAuditTasksInput"} + if s.EndTime == nil { + invalidParams.Add(request.NewErrParamRequired("EndTime")) } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.Target == nil { - invalidParams.Add(request.NewErrParamRequired("Target")) + if s.StartTime == nil { + invalidParams.Add(request.NewErrParamRequired("StartTime")) } if invalidParams.Len() > 0 { @@ -16856,71 +26539,106 @@ func (s *DetachPolicyInput) Validate() error { return nil } -// SetPolicyName sets the PolicyName field's value. -func (s *DetachPolicyInput) SetPolicyName(v string) *DetachPolicyInput { - s.PolicyName = &v +// SetEndTime sets the EndTime field's value. +func (s *ListAuditTasksInput) SetEndTime(v time.Time) *ListAuditTasksInput { + s.EndTime = &v return s } -// SetTarget sets the Target field's value. -func (s *DetachPolicyInput) SetTarget(v string) *DetachPolicyInput { - s.Target = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListAuditTasksInput) SetMaxResults(v int64) *ListAuditTasksInput { + s.MaxResults = &v return s } -type DetachPolicyOutput struct { +// SetNextToken sets the NextToken field's value. +func (s *ListAuditTasksInput) SetNextToken(v string) *ListAuditTasksInput { + s.NextToken = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *ListAuditTasksInput) SetStartTime(v time.Time) *ListAuditTasksInput { + s.StartTime = &v + return s +} + +// SetTaskStatus sets the TaskStatus field's value. 
+func (s *ListAuditTasksInput) SetTaskStatus(v string) *ListAuditTasksInput { + s.TaskStatus = &v + return s +} + +// SetTaskType sets the TaskType field's value. +func (s *ListAuditTasksInput) SetTaskType(v string) *ListAuditTasksInput { + s.TaskType = &v + return s +} + +type ListAuditTasksOutput struct { _ struct{} `type:"structure"` + + // A token that can be used to retrieve the next set of results, or null if + // there are no additional results. + NextToken *string `locationName:"nextToken" type:"string"` + + // The audits that were performed during the specified time period. + Tasks []*AuditTaskMetadata `locationName:"tasks" type:"list"` } // String returns the string representation -func (s DetachPolicyOutput) String() string { +func (s ListAuditTasksOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachPolicyOutput) GoString() string { +func (s ListAuditTasksOutput) GoString() string { return s.String() } -// The input for the DetachPrincipalPolicy operation. -type DetachPrincipalPolicyInput struct { +// SetNextToken sets the NextToken field's value. +func (s *ListAuditTasksOutput) SetNextToken(v string) *ListAuditTasksOutput { + s.NextToken = &v + return s +} + +// SetTasks sets the Tasks field's value. +func (s *ListAuditTasksOutput) SetTasks(v []*AuditTaskMetadata) *ListAuditTasksOutput { + s.Tasks = v + return s +} + +type ListAuthorizersInput struct { _ struct{} `type:"structure"` - // The name of the policy to detach. - // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // Return the list of authorizers in ascending alphabetical order. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` - // The principal. - // - // If the principal is a certificate, specify the certificate ARN. If the principal - // is an Amazon Cognito identity, specify the identity ID. - // - // Principal is a required field - Principal *string `location:"header" locationName:"x-amzn-iot-principal" type:"string" required:"true"` + // A marker used to get the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // The maximum number of results to return at one time. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + + // The status of the list authorizers request. + Status *string `location:"querystring" locationName:"status" type:"string" enum:"AuthorizerStatus"` } // String returns the string representation -func (s DetachPrincipalPolicyInput) String() string { +func (s ListAuthorizersInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachPrincipalPolicyInput) GoString() string { +func (s ListAuthorizersInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DetachPrincipalPolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DetachPrincipalPolicyInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) - } - if s.Principal == nil { - invalidParams.Add(request.NewErrParamRequired("Principal")) +func (s *ListAuthorizersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAuthorizersInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) } if invalidParams.Len() > 0 { @@ -16929,70 +26647,93 @@ func (s *DetachPrincipalPolicyInput) Validate() error { return nil } -// SetPolicyName sets the PolicyName field's value. -func (s *DetachPrincipalPolicyInput) SetPolicyName(v string) *DetachPrincipalPolicyInput { - s.PolicyName = &v +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListAuthorizersInput) SetAscendingOrder(v bool) *ListAuthorizersInput { + s.AscendingOrder = &v return s } -// SetPrincipal sets the Principal field's value. -func (s *DetachPrincipalPolicyInput) SetPrincipal(v string) *DetachPrincipalPolicyInput { - s.Principal = &v +// SetMarker sets the Marker field's value. +func (s *ListAuthorizersInput) SetMarker(v string) *ListAuthorizersInput { + s.Marker = &v return s } -type DetachPrincipalPolicyOutput struct { +// SetPageSize sets the PageSize field's value. +func (s *ListAuthorizersInput) SetPageSize(v int64) *ListAuthorizersInput { + s.PageSize = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ListAuthorizersInput) SetStatus(v string) *ListAuthorizersInput { + s.Status = &v + return s +} + +type ListAuthorizersOutput struct { _ struct{} `type:"structure"` + + // The authorizers. + Authorizers []*AuthorizerSummary `locationName:"authorizers" type:"list"` + + // A marker used to get the next set of results. + NextMarker *string `locationName:"nextMarker" type:"string"` } // String returns the string representation -func (s DetachPrincipalPolicyOutput) String() string { +func (s ListAuthorizersOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachPrincipalPolicyOutput) GoString() string { +func (s ListAuthorizersOutput) GoString() string { return s.String() } -// The input for the DetachThingPrincipal operation. -type DetachThingPrincipalInput struct { +// SetAuthorizers sets the Authorizers field's value. +func (s *ListAuthorizersOutput) SetAuthorizers(v []*AuthorizerSummary) *ListAuthorizersOutput { + s.Authorizers = v + return s +} + +// SetNextMarker sets the NextMarker field's value. +func (s *ListAuthorizersOutput) SetNextMarker(v string) *ListAuthorizersOutput { + s.NextMarker = &v + return s +} + +type ListBillingGroupsInput struct { _ struct{} `type:"structure"` - // If the principal is a certificate, this value must be ARN of the certificate. - // If the principal is an Amazon Cognito identity, this value must be the ID - // of the Amazon Cognito identity. - // - // Principal is a required field - Principal *string `location:"header" locationName:"x-amzn-principal" type:"string" required:"true"` + // The maximum number of results to return per request. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The name of the thing. 
- // - // ThingName is a required field - ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + // Limit the results to billing groups whose names have the given prefix. + NamePrefixFilter *string `location:"querystring" locationName:"namePrefixFilter" min:"1" type:"string"` + + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s DetachThingPrincipalInput) String() string { +func (s ListBillingGroupsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachThingPrincipalInput) GoString() string { +func (s ListBillingGroupsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DetachThingPrincipalInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DetachThingPrincipalInput"} - if s.Principal == nil { - invalidParams.Add(request.NewErrParamRequired("Principal")) - } - if s.ThingName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingName")) +func (s *ListBillingGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListBillingGroupsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + if s.NamePrefixFilter != nil && len(*s.NamePrefixFilter) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NamePrefixFilter", 1)) } if invalidParams.Len() > 0 { @@ -17001,61 +26742,86 @@ func (s *DetachThingPrincipalInput) Validate() error { return nil } -// SetPrincipal sets the Principal field's value. -func (s *DetachThingPrincipalInput) SetPrincipal(v string) *DetachThingPrincipalInput { - s.Principal = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListBillingGroupsInput) SetMaxResults(v int64) *ListBillingGroupsInput { + s.MaxResults = &v return s } -// SetThingName sets the ThingName field's value. -func (s *DetachThingPrincipalInput) SetThingName(v string) *DetachThingPrincipalInput { - s.ThingName = &v +// SetNamePrefixFilter sets the NamePrefixFilter field's value. +func (s *ListBillingGroupsInput) SetNamePrefixFilter(v string) *ListBillingGroupsInput { + s.NamePrefixFilter = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListBillingGroupsInput) SetNextToken(v string) *ListBillingGroupsInput { + s.NextToken = &v return s } -// The output from the DetachThingPrincipal operation. -type DetachThingPrincipalOutput struct { +type ListBillingGroupsOutput struct { _ struct{} `type:"structure"` + + // The list of billing groups. + BillingGroups []*GroupNameAndArn `locationName:"billingGroups" type:"list"` + + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s DetachThingPrincipalOutput) String() string { +func (s ListBillingGroupsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachThingPrincipalOutput) GoString() string { +func (s ListBillingGroupsOutput) GoString() string { return s.String() } -// The input for the DisableTopicRuleRequest operation. 
-type DisableTopicRuleInput struct { +// SetBillingGroups sets the BillingGroups field's value. +func (s *ListBillingGroupsOutput) SetBillingGroups(v []*GroupNameAndArn) *ListBillingGroupsOutput { + s.BillingGroups = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListBillingGroupsOutput) SetNextToken(v string) *ListBillingGroupsOutput { + s.NextToken = &v + return s +} + +// Input for the ListCACertificates operation. +type ListCACertificatesInput struct { _ struct{} `type:"structure"` - // The name of the rule to disable. - // - // RuleName is a required field - RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` + // Determines the order of the results. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + + // The marker for the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // The result page size. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` } // String returns the string representation -func (s DisableTopicRuleInput) String() string { +func (s ListCACertificatesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DisableTopicRuleInput) GoString() string { +func (s ListCACertificatesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DisableTopicRuleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DisableTopicRuleInput"} - if s.RuleName == nil { - invalidParams.Add(request.NewErrParamRequired("RuleName")) - } - if s.RuleName != nil && len(*s.RuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) +func (s *ListCACertificatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListCACertificatesInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) } if invalidParams.Len() > 0 { @@ -17064,111 +26830,99 @@ func (s *DisableTopicRuleInput) Validate() error { return nil } -// SetRuleName sets the RuleName field's value. -func (s *DisableTopicRuleInput) SetRuleName(v string) *DisableTopicRuleInput { - s.RuleName = &v +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListCACertificatesInput) SetAscendingOrder(v bool) *ListCACertificatesInput { + s.AscendingOrder = &v return s } -type DisableTopicRuleOutput struct { +// SetMarker sets the Marker field's value. +func (s *ListCACertificatesInput) SetMarker(v string) *ListCACertificatesInput { + s.Marker = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *ListCACertificatesInput) SetPageSize(v int64) *ListCACertificatesInput { + s.PageSize = &v + return s +} + +// The output from the ListCACertificates operation. +type ListCACertificatesOutput struct { _ struct{} `type:"structure"` + + // The CA certificates registered in your AWS account. + Certificates []*CACertificate `locationName:"certificates" type:"list"` + + // The current position within the list of CA certificates. 
+ NextMarker *string `locationName:"nextMarker" type:"string"` } // String returns the string representation -func (s DisableTopicRuleOutput) String() string { +func (s ListCACertificatesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DisableTopicRuleOutput) GoString() string { +func (s ListCACertificatesOutput) GoString() string { return s.String() } -// Describes an action to write to a DynamoDB table. -// -// The tableName, hashKeyField, and rangeKeyField values must match the values -// used when you created the table. -// -// The hashKeyValue and rangeKeyvalue fields use a substitution template syntax. -// These templates provide data at runtime. The syntax is as follows: ${sql-expression}. -// -// You can specify any valid expression in a WHERE or SELECT clause, including -// JSON properties, comparisons, calculations, and functions. For example, the -// following field uses the third level of the topic: -// -// "hashKeyValue": "${topic(3)}" -// -// The following field uses the timestamp: -// -// "rangeKeyValue": "${timestamp()}" -type DynamoDBAction struct { - _ struct{} `type:"structure"` - - // The hash key name. - // - // HashKeyField is a required field - HashKeyField *string `locationName:"hashKeyField" type:"string" required:"true"` - - // The hash key type. Valid values are "STRING" or "NUMBER" - HashKeyType *string `locationName:"hashKeyType" type:"string" enum:"DynamoKeyType"` - - // The hash key value. - // - // HashKeyValue is a required field - HashKeyValue *string `locationName:"hashKeyValue" type:"string" required:"true"` - - // The type of operation to be performed. This follows the substitution template, - // so it can be ${operation}, but the substitution must result in one of the - // following: INSERT, UPDATE, or DELETE. - Operation *string `locationName:"operation" type:"string"` - - // The action payload. This name can be customized. - PayloadField *string `locationName:"payloadField" type:"string"` +// SetCertificates sets the Certificates field's value. +func (s *ListCACertificatesOutput) SetCertificates(v []*CACertificate) *ListCACertificatesOutput { + s.Certificates = v + return s +} - // The range key name. - RangeKeyField *string `locationName:"rangeKeyField" type:"string"` +// SetNextMarker sets the NextMarker field's value. +func (s *ListCACertificatesOutput) SetNextMarker(v string) *ListCACertificatesOutput { + s.NextMarker = &v + return s +} - // The range key type. Valid values are "STRING" or "NUMBER" - RangeKeyType *string `locationName:"rangeKeyType" type:"string" enum:"DynamoKeyType"` +// The input to the ListCertificatesByCA operation. +type ListCertificatesByCAInput struct { + _ struct{} `type:"structure"` - // The range key value. - RangeKeyValue *string `locationName:"rangeKeyValue" type:"string"` + // Specifies the order for results. If True, the results are returned in ascending + // order, based on the creation date. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` - // The ARN of the IAM role that grants access to the DynamoDB table. + // The ID of the CA certificate. This operation will list all registered device + // certificate that were signed by this CA certificate. 
// - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // CaCertificateId is a required field + CaCertificateId *string `location:"uri" locationName:"caCertificateId" min:"64" type:"string" required:"true"` - // The name of the DynamoDB table. - // - // TableName is a required field - TableName *string `locationName:"tableName" type:"string" required:"true"` + // The marker for the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // The result page size. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` } // String returns the string representation -func (s DynamoDBAction) String() string { +func (s ListCertificatesByCAInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DynamoDBAction) GoString() string { +func (s ListCertificatesByCAInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DynamoDBAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DynamoDBAction"} - if s.HashKeyField == nil { - invalidParams.Add(request.NewErrParamRequired("HashKeyField")) - } - if s.HashKeyValue == nil { - invalidParams.Add(request.NewErrParamRequired("HashKeyValue")) +func (s *ListCertificatesByCAInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListCertificatesByCAInput"} + if s.CaCertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CaCertificateId")) } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) + if s.CaCertificateId != nil && len(*s.CaCertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CaCertificateId", 64)) } - if s.TableName == nil { - invalidParams.Add(request.NewErrParamRequired("TableName")) + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) } if invalidParams.Len() > 0 { @@ -17177,104 +26931,94 @@ func (s *DynamoDBAction) Validate() error { return nil } -// SetHashKeyField sets the HashKeyField field's value. -func (s *DynamoDBAction) SetHashKeyField(v string) *DynamoDBAction { - s.HashKeyField = &v +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListCertificatesByCAInput) SetAscendingOrder(v bool) *ListCertificatesByCAInput { + s.AscendingOrder = &v return s } -// SetHashKeyType sets the HashKeyType field's value. -func (s *DynamoDBAction) SetHashKeyType(v string) *DynamoDBAction { - s.HashKeyType = &v +// SetCaCertificateId sets the CaCertificateId field's value. +func (s *ListCertificatesByCAInput) SetCaCertificateId(v string) *ListCertificatesByCAInput { + s.CaCertificateId = &v return s } -// SetHashKeyValue sets the HashKeyValue field's value. -func (s *DynamoDBAction) SetHashKeyValue(v string) *DynamoDBAction { - s.HashKeyValue = &v +// SetMarker sets the Marker field's value. +func (s *ListCertificatesByCAInput) SetMarker(v string) *ListCertificatesByCAInput { + s.Marker = &v return s } -// SetOperation sets the Operation field's value. -func (s *DynamoDBAction) SetOperation(v string) *DynamoDBAction { - s.Operation = &v +// SetPageSize sets the PageSize field's value. +func (s *ListCertificatesByCAInput) SetPageSize(v int64) *ListCertificatesByCAInput { + s.PageSize = &v return s } -// SetPayloadField sets the PayloadField field's value. 
-func (s *DynamoDBAction) SetPayloadField(v string) *DynamoDBAction { - s.PayloadField = &v - return s -} +// The output of the ListCertificatesByCA operation. +type ListCertificatesByCAOutput struct { + _ struct{} `type:"structure"` -// SetRangeKeyField sets the RangeKeyField field's value. -func (s *DynamoDBAction) SetRangeKeyField(v string) *DynamoDBAction { - s.RangeKeyField = &v - return s + // The device certificates signed by the specified CA certificate. + Certificates []*Certificate `locationName:"certificates" type:"list"` + + // The marker for the next set of results, or null if there are no additional + // results. + NextMarker *string `locationName:"nextMarker" type:"string"` } -// SetRangeKeyType sets the RangeKeyType field's value. -func (s *DynamoDBAction) SetRangeKeyType(v string) *DynamoDBAction { - s.RangeKeyType = &v - return s +// String returns the string representation +func (s ListCertificatesByCAOutput) String() string { + return awsutil.Prettify(s) } -// SetRangeKeyValue sets the RangeKeyValue field's value. -func (s *DynamoDBAction) SetRangeKeyValue(v string) *DynamoDBAction { - s.RangeKeyValue = &v - return s +// GoString returns the string representation +func (s ListCertificatesByCAOutput) GoString() string { + return s.String() } -// SetRoleArn sets the RoleArn field's value. -func (s *DynamoDBAction) SetRoleArn(v string) *DynamoDBAction { - s.RoleArn = &v +// SetCertificates sets the Certificates field's value. +func (s *ListCertificatesByCAOutput) SetCertificates(v []*Certificate) *ListCertificatesByCAOutput { + s.Certificates = v return s } -// SetTableName sets the TableName field's value. -func (s *DynamoDBAction) SetTableName(v string) *DynamoDBAction { - s.TableName = &v +// SetNextMarker sets the NextMarker field's value. +func (s *ListCertificatesByCAOutput) SetNextMarker(v string) *ListCertificatesByCAOutput { + s.NextMarker = &v return s } -// Describes an action to write to a DynamoDB table. -// -// This DynamoDB action writes each attribute in the message payload into it's -// own column in the DynamoDB table. -type DynamoDBv2Action struct { +// The input for the ListCertificates operation. +type ListCertificatesInput struct { _ struct{} `type:"structure"` - // Specifies the DynamoDB table to which the message data will be written. For - // example: - // - // { "dynamoDBv2": { "roleArn": "aws:iam:12341251:my-role" "putItem": { "tableName": - // "my-table" } } } - // - // Each attribute in the message payload will be written to a separate column - // in the DynamoDB database. - PutItem *PutItemInput `locationName:"putItem" type:"structure"` + // Specifies the order for results. If True, the results are returned in ascending + // order, based on the creation date. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` - // The ARN of the IAM role that grants access to the DynamoDB table. - RoleArn *string `locationName:"roleArn" type:"string"` + // The marker for the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // The result page size. 
+ PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` } // String returns the string representation -func (s DynamoDBv2Action) String() string { +func (s ListCertificatesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DynamoDBv2Action) GoString() string { +func (s ListCertificatesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DynamoDBv2Action) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DynamoDBv2Action"} - if s.PutItem != nil { - if err := s.PutItem.Validate(); err != nil { - invalidParams.AddNested("PutItem", err.(request.ErrInvalidParams)) - } +func (s *ListCertificatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListCertificatesInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) } if invalidParams.Len() > 0 { @@ -17283,117 +27027,84 @@ func (s *DynamoDBv2Action) Validate() error { return nil } -// SetPutItem sets the PutItem field's value. -func (s *DynamoDBv2Action) SetPutItem(v *PutItemInput) *DynamoDBv2Action { - s.PutItem = v +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListCertificatesInput) SetAscendingOrder(v bool) *ListCertificatesInput { + s.AscendingOrder = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *DynamoDBv2Action) SetRoleArn(v string) *DynamoDBv2Action { - s.RoleArn = &v +// SetMarker sets the Marker field's value. +func (s *ListCertificatesInput) SetMarker(v string) *ListCertificatesInput { + s.Marker = &v return s } -// The policy that has the effect on the authorization results. -type EffectivePolicy struct { - _ struct{} `type:"structure"` +// SetPageSize sets the PageSize field's value. +func (s *ListCertificatesInput) SetPageSize(v int64) *ListCertificatesInput { + s.PageSize = &v + return s +} - // The policy ARN. - PolicyArn *string `locationName:"policyArn" type:"string"` +// The output of the ListCertificates operation. +type ListCertificatesOutput struct { + _ struct{} `type:"structure"` - // The IAM policy document. - PolicyDocument *string `locationName:"policyDocument" type:"string"` + // The descriptions of the certificates. + Certificates []*Certificate `locationName:"certificates" type:"list"` - // The policy name. - PolicyName *string `locationName:"policyName" min:"1" type:"string"` + // The marker for the next set of results, or null if there are no additional + // results. + NextMarker *string `locationName:"nextMarker" type:"string"` } // String returns the string representation -func (s EffectivePolicy) String() string { +func (s ListCertificatesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EffectivePolicy) GoString() string { +func (s ListCertificatesOutput) GoString() string { return s.String() } -// SetPolicyArn sets the PolicyArn field's value. -func (s *EffectivePolicy) SetPolicyArn(v string) *EffectivePolicy { - s.PolicyArn = &v - return s -} - -// SetPolicyDocument sets the PolicyDocument field's value. -func (s *EffectivePolicy) SetPolicyDocument(v string) *EffectivePolicy { - s.PolicyDocument = &v +// SetCertificates sets the Certificates field's value. 
+func (s *ListCertificatesOutput) SetCertificates(v []*Certificate) *ListCertificatesOutput { + s.Certificates = v return s } -// SetPolicyName sets the PolicyName field's value. -func (s *EffectivePolicy) SetPolicyName(v string) *EffectivePolicy { - s.PolicyName = &v +// SetNextMarker sets the NextMarker field's value. +func (s *ListCertificatesOutput) SetNextMarker(v string) *ListCertificatesOutput { + s.NextMarker = &v return s } -// Describes an action that writes data to an Amazon Elasticsearch Service domain. -type ElasticsearchAction struct { +type ListIndicesInput struct { _ struct{} `type:"structure"` - // The endpoint of your Elasticsearch domain. - // - // Endpoint is a required field - Endpoint *string `locationName:"endpoint" type:"string" required:"true"` - - // The unique identifier for the document you are storing. - // - // Id is a required field - Id *string `locationName:"id" type:"string" required:"true"` - - // The Elasticsearch index where you want to store your data. - // - // Index is a required field - Index *string `locationName:"index" type:"string" required:"true"` - - // The IAM role ARN that has access to Elasticsearch. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The type of document you are storing. - // - // Type is a required field - Type *string `locationName:"type" type:"string" required:"true"` + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s ElasticsearchAction) String() string { +func (s ListIndicesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ElasticsearchAction) GoString() string { +func (s ListIndicesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ElasticsearchAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ElasticsearchAction"} - if s.Endpoint == nil { - invalidParams.Add(request.NewErrParamRequired("Endpoint")) - } - if s.Id == nil { - invalidParams.Add(request.NewErrParamRequired("Id")) - } - if s.Index == nil { - invalidParams.Add(request.NewErrParamRequired("Index")) - } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) - } - if s.Type == nil { - invalidParams.Add(request.NewErrParamRequired("Type")) +func (s *ListIndicesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListIndicesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -17402,187 +27113,195 @@ func (s *ElasticsearchAction) Validate() error { return nil } -// SetEndpoint sets the Endpoint field's value. -func (s *ElasticsearchAction) SetEndpoint(v string) *ElasticsearchAction { - s.Endpoint = &v - return s -} - -// SetId sets the Id field's value. -func (s *ElasticsearchAction) SetId(v string) *ElasticsearchAction { - s.Id = &v - return s -} - -// SetIndex sets the Index field's value. -func (s *ElasticsearchAction) SetIndex(v string) *ElasticsearchAction { - s.Index = &v - return s -} - -// SetRoleArn sets the RoleArn field's value. 
-func (s *ElasticsearchAction) SetRoleArn(v string) *ElasticsearchAction { - s.RoleArn = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListIndicesInput) SetMaxResults(v int64) *ListIndicesInput { + s.MaxResults = &v return s } -// SetType sets the Type field's value. -func (s *ElasticsearchAction) SetType(v string) *ElasticsearchAction { - s.Type = &v +// SetNextToken sets the NextToken field's value. +func (s *ListIndicesInput) SetNextToken(v string) *ListIndicesInput { + s.NextToken = &v return s } -// The input for the EnableTopicRuleRequest operation. -type EnableTopicRuleInput struct { +type ListIndicesOutput struct { _ struct{} `type:"structure"` - // The name of the topic rule to enable. - // - // RuleName is a required field - RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` + // The index names. + IndexNames []*string `locationName:"indexNames" type:"list"` + + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s EnableTopicRuleInput) String() string { +func (s ListIndicesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EnableTopicRuleInput) GoString() string { +func (s ListIndicesOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *EnableTopicRuleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "EnableTopicRuleInput"} - if s.RuleName == nil { - invalidParams.Add(request.NewErrParamRequired("RuleName")) - } - if s.RuleName != nil && len(*s.RuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetIndexNames sets the IndexNames field's value. +func (s *ListIndicesOutput) SetIndexNames(v []*string) *ListIndicesOutput { + s.IndexNames = v + return s } -// SetRuleName sets the RuleName field's value. -func (s *EnableTopicRuleInput) SetRuleName(v string) *EnableTopicRuleInput { - s.RuleName = &v +// SetNextToken sets the NextToken field's value. +func (s *ListIndicesOutput) SetNextToken(v string) *ListIndicesOutput { + s.NextToken = &v return s } -type EnableTopicRuleOutput struct { +type ListJobExecutionsForJobInput struct { _ struct{} `type:"structure"` + + // The unique identifier you assigned to this job when it was created. + // + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + + // The maximum number of results to be returned per request. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + + // The status of the job. + Status *string `location:"querystring" locationName:"status" type:"string" enum:"JobExecutionStatus"` } // String returns the string representation -func (s EnableTopicRuleOutput) String() string { +func (s ListJobExecutionsForJobInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EnableTopicRuleOutput) GoString() string { +func (s ListJobExecutionsForJobInput) GoString() string { return s.String() } -// Error information. 
-type ErrorInfo struct { - _ struct{} `type:"structure"` - - // The error code. - Code *string `locationName:"code" type:"string"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListJobExecutionsForJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListJobExecutionsForJobInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } - // The error message. - Message *string `locationName:"message" type:"string"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// String returns the string representation -func (s ErrorInfo) String() string { - return awsutil.Prettify(s) +// SetJobId sets the JobId field's value. +func (s *ListJobExecutionsForJobInput) SetJobId(v string) *ListJobExecutionsForJobInput { + s.JobId = &v + return s } -// GoString returns the string representation -func (s ErrorInfo) GoString() string { - return s.String() +// SetMaxResults sets the MaxResults field's value. +func (s *ListJobExecutionsForJobInput) SetMaxResults(v int64) *ListJobExecutionsForJobInput { + s.MaxResults = &v + return s } -// SetCode sets the Code field's value. -func (s *ErrorInfo) SetCode(v string) *ErrorInfo { - s.Code = &v +// SetNextToken sets the NextToken field's value. +func (s *ListJobExecutionsForJobInput) SetNextToken(v string) *ListJobExecutionsForJobInput { + s.NextToken = &v return s } -// SetMessage sets the Message field's value. -func (s *ErrorInfo) SetMessage(v string) *ErrorInfo { - s.Message = &v +// SetStatus sets the Status field's value. +func (s *ListJobExecutionsForJobInput) SetStatus(v string) *ListJobExecutionsForJobInput { + s.Status = &v return s } -// Information that explicitly denies authorization. -type ExplicitDeny struct { +type ListJobExecutionsForJobOutput struct { _ struct{} `type:"structure"` - // The policies that denied the authorization. - Policies []*Policy `locationName:"policies" type:"list"` + // A list of job execution summaries. + ExecutionSummaries []*JobExecutionSummaryForJob `locationName:"executionSummaries" type:"list"` + + // The token for the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s ExplicitDeny) String() string { +func (s ListJobExecutionsForJobOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ExplicitDeny) GoString() string { +func (s ListJobExecutionsForJobOutput) GoString() string { return s.String() } -// SetPolicies sets the Policies field's value. -func (s *ExplicitDeny) SetPolicies(v []*Policy) *ExplicitDeny { - s.Policies = v +// SetExecutionSummaries sets the ExecutionSummaries field's value. +func (s *ListJobExecutionsForJobOutput) SetExecutionSummaries(v []*JobExecutionSummaryForJob) *ListJobExecutionsForJobOutput { + s.ExecutionSummaries = v return s } -// Describes an action that writes data to an Amazon Kinesis Firehose stream. -type FirehoseAction struct { +// SetNextToken sets the NextToken field's value. 
+func (s *ListJobExecutionsForJobOutput) SetNextToken(v string) *ListJobExecutionsForJobOutput { + s.NextToken = &v + return s +} + +type ListJobExecutionsForThingInput struct { _ struct{} `type:"structure"` - // The delivery stream name. - // - // DeliveryStreamName is a required field - DeliveryStreamName *string `locationName:"deliveryStreamName" type:"string" required:"true"` + // The maximum number of results to be returned per request. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The IAM role that grants access to the Amazon Kinesis Firehose stream. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // A character separator that will be used to separate records written to the - // Firehose stream. Valid values are: '\n' (newline), '\t' (tab), '\r\n' (Windows - // newline), ',' (comma). - Separator *string `locationName:"separator" type:"string"` + // An optional filter that lets you search for jobs that have the specified + // status. + Status *string `location:"querystring" locationName:"status" type:"string" enum:"JobExecutionStatus"` + + // The thing name. + // + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s FirehoseAction) String() string { +func (s ListJobExecutionsForThingInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s FirehoseAction) GoString() string { +func (s ListJobExecutionsForThingInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *FirehoseAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "FirehoseAction"} - if s.DeliveryStreamName == nil { - invalidParams.Add(request.NewErrParamRequired("DeliveryStreamName")) +func (s *ListJobExecutionsForThingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListJobExecutionsForThingInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) } if invalidParams.Len() > 0 { @@ -17591,165 +27310,217 @@ func (s *FirehoseAction) Validate() error { return nil } -// SetDeliveryStreamName sets the DeliveryStreamName field's value. -func (s *FirehoseAction) SetDeliveryStreamName(v string) *FirehoseAction { - s.DeliveryStreamName = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListJobExecutionsForThingInput) SetMaxResults(v int64) *ListJobExecutionsForThingInput { + s.MaxResults = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *FirehoseAction) SetRoleArn(v string) *FirehoseAction { - s.RoleArn = &v +// SetNextToken sets the NextToken field's value. +func (s *ListJobExecutionsForThingInput) SetNextToken(v string) *ListJobExecutionsForThingInput { + s.NextToken = &v return s } -// SetSeparator sets the Separator field's value. 
-func (s *FirehoseAction) SetSeparator(v string) *FirehoseAction { - s.Separator = &v +// SetStatus sets the Status field's value. +func (s *ListJobExecutionsForThingInput) SetStatus(v string) *ListJobExecutionsForThingInput { + s.Status = &v return s } -type GetEffectivePoliciesInput struct { - _ struct{} `type:"structure"` +// SetThingName sets the ThingName field's value. +func (s *ListJobExecutionsForThingInput) SetThingName(v string) *ListJobExecutionsForThingInput { + s.ThingName = &v + return s +} - // The Cognito identity pool ID. - CognitoIdentityPoolId *string `locationName:"cognitoIdentityPoolId" type:"string"` +type ListJobExecutionsForThingOutput struct { + _ struct{} `type:"structure"` - // The principal. - Principal *string `locationName:"principal" type:"string"` + // A list of job execution summaries. + ExecutionSummaries []*JobExecutionSummaryForThing `locationName:"executionSummaries" type:"list"` - // The thing name. - ThingName *string `location:"querystring" locationName:"thingName" min:"1" type:"string"` + // The token for the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s GetEffectivePoliciesInput) String() string { +func (s ListJobExecutionsForThingOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetEffectivePoliciesInput) GoString() string { +func (s ListJobExecutionsForThingOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetEffectivePoliciesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetEffectivePoliciesInput"} - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetCognitoIdentityPoolId sets the CognitoIdentityPoolId field's value. -func (s *GetEffectivePoliciesInput) SetCognitoIdentityPoolId(v string) *GetEffectivePoliciesInput { - s.CognitoIdentityPoolId = &v - return s -} - -// SetPrincipal sets the Principal field's value. -func (s *GetEffectivePoliciesInput) SetPrincipal(v string) *GetEffectivePoliciesInput { - s.Principal = &v +// SetExecutionSummaries sets the ExecutionSummaries field's value. +func (s *ListJobExecutionsForThingOutput) SetExecutionSummaries(v []*JobExecutionSummaryForThing) *ListJobExecutionsForThingOutput { + s.ExecutionSummaries = v return s } -// SetThingName sets the ThingName field's value. -func (s *GetEffectivePoliciesInput) SetThingName(v string) *GetEffectivePoliciesInput { - s.ThingName = &v +// SetNextToken sets the NextToken field's value. +func (s *ListJobExecutionsForThingOutput) SetNextToken(v string) *ListJobExecutionsForThingOutput { + s.NextToken = &v return s } -type GetEffectivePoliciesOutput struct { +type ListJobsInput struct { _ struct{} `type:"structure"` - // The effective policies. - EffectivePolicies []*EffectivePolicy `locationName:"effectivePolicies" type:"list"` + // The maximum number of results to return per request. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + + // An optional filter that lets you search for jobs that have the specified + // status. 
+ Status *string `location:"querystring" locationName:"status" type:"string" enum:"JobStatus"` + + // Specifies whether the job will continue to run (CONTINUOUS), or will be complete + // after all those things specified as targets have completed the job (SNAPSHOT). + // If continuous, the job may also be run on a thing when a change is detected + // in a target. For example, a job will run on a thing when the thing is added + // to a target group, even after the job was completed by all things originally + // in the group. + TargetSelection *string `location:"querystring" locationName:"targetSelection" type:"string" enum:"TargetSelection"` + + // A filter that limits the returned jobs to those for the specified group. + ThingGroupId *string `location:"querystring" locationName:"thingGroupId" min:"1" type:"string"` + + // A filter that limits the returned jobs to those for the specified group. + ThingGroupName *string `location:"querystring" locationName:"thingGroupName" min:"1" type:"string"` } // String returns the string representation -func (s GetEffectivePoliciesOutput) String() string { +func (s ListJobsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetEffectivePoliciesOutput) GoString() string { +func (s ListJobsInput) GoString() string { return s.String() } -// SetEffectivePolicies sets the EffectivePolicies field's value. -func (s *GetEffectivePoliciesOutput) SetEffectivePolicies(v []*EffectivePolicy) *GetEffectivePoliciesOutput { - s.EffectivePolicies = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListJobsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.ThingGroupId != nil && len(*s.ThingGroupId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupId", 1)) + } + if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListJobsInput) SetMaxResults(v int64) *ListJobsInput { + s.MaxResults = &v return s } -type GetIndexingConfigurationInput struct { - _ struct{} `type:"structure"` +// SetNextToken sets the NextToken field's value. +func (s *ListJobsInput) SetNextToken(v string) *ListJobsInput { + s.NextToken = &v + return s } -// String returns the string representation -func (s GetIndexingConfigurationInput) String() string { - return awsutil.Prettify(s) +// SetStatus sets the Status field's value. +func (s *ListJobsInput) SetStatus(v string) *ListJobsInput { + s.Status = &v + return s +} + +// SetTargetSelection sets the TargetSelection field's value. +func (s *ListJobsInput) SetTargetSelection(v string) *ListJobsInput { + s.TargetSelection = &v + return s +} + +// SetThingGroupId sets the ThingGroupId field's value. +func (s *ListJobsInput) SetThingGroupId(v string) *ListJobsInput { + s.ThingGroupId = &v + return s } -// GoString returns the string representation -func (s GetIndexingConfigurationInput) GoString() string { - return s.String() +// SetThingGroupName sets the ThingGroupName field's value. 
+func (s *ListJobsInput) SetThingGroupName(v string) *ListJobsInput { + s.ThingGroupName = &v + return s } -type GetIndexingConfigurationOutput struct { +type ListJobsOutput struct { _ struct{} `type:"structure"` - // Thing indexing configuration. - ThingIndexingConfiguration *ThingIndexingConfiguration `locationName:"thingIndexingConfiguration" type:"structure"` + // A list of jobs. + Jobs []*JobSummary `locationName:"jobs" type:"list"` + + // The token for the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s GetIndexingConfigurationOutput) String() string { +func (s ListJobsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetIndexingConfigurationOutput) GoString() string { +func (s ListJobsOutput) GoString() string { return s.String() } -// SetThingIndexingConfiguration sets the ThingIndexingConfiguration field's value. -func (s *GetIndexingConfigurationOutput) SetThingIndexingConfiguration(v *ThingIndexingConfiguration) *GetIndexingConfigurationOutput { - s.ThingIndexingConfiguration = v +// SetJobs sets the Jobs field's value. +func (s *ListJobsOutput) SetJobs(v []*JobSummary) *ListJobsOutput { + s.Jobs = v return s } -type GetJobDocumentInput struct { +// SetNextToken sets the NextToken field's value. +func (s *ListJobsOutput) SetNextToken(v string) *ListJobsOutput { + s.NextToken = &v + return s +} + +type ListOTAUpdatesInput struct { _ struct{} `type:"structure"` - // The unique identifier you assigned to this job when it was created. - // - // JobId is a required field - JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // A token used to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + + // The OTA update job status. + OtaUpdateStatus *string `location:"querystring" locationName:"otaUpdateStatus" type:"string" enum:"OTAUpdateStatus"` } // String returns the string representation -func (s GetJobDocumentInput) String() string { +func (s ListOTAUpdatesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetJobDocumentInput) GoString() string { +func (s ListOTAUpdatesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetJobDocumentInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetJobDocumentInput"} - if s.JobId == nil { - invalidParams.Add(request.NewErrParamRequired("JobId")) - } - if s.JobId != nil && len(*s.JobId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) +func (s *ListOTAUpdatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListOTAUpdatesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -17758,110 +27529,86 @@ func (s *GetJobDocumentInput) Validate() error { return nil } -// SetJobId sets the JobId field's value. -func (s *GetJobDocumentInput) SetJobId(v string) *GetJobDocumentInput { - s.JobId = &v +// SetMaxResults sets the MaxResults field's value. 
+func (s *ListOTAUpdatesInput) SetMaxResults(v int64) *ListOTAUpdatesInput { + s.MaxResults = &v return s } -type GetJobDocumentOutput struct { - _ struct{} `type:"structure"` - - // The job document content. - Document *string `locationName:"document" type:"string"` -} - -// String returns the string representation -func (s GetJobDocumentOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s GetJobDocumentOutput) GoString() string { - return s.String() -} - -// SetDocument sets the Document field's value. -func (s *GetJobDocumentOutput) SetDocument(v string) *GetJobDocumentOutput { - s.Document = &v +// SetNextToken sets the NextToken field's value. +func (s *ListOTAUpdatesInput) SetNextToken(v string) *ListOTAUpdatesInput { + s.NextToken = &v return s } -// The input for the GetLoggingOptions operation. -type GetLoggingOptionsInput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s GetLoggingOptionsInput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s GetLoggingOptionsInput) GoString() string { - return s.String() +// SetOtaUpdateStatus sets the OtaUpdateStatus field's value. +func (s *ListOTAUpdatesInput) SetOtaUpdateStatus(v string) *ListOTAUpdatesInput { + s.OtaUpdateStatus = &v + return s } -// The output from the GetLoggingOptions operation. -type GetLoggingOptionsOutput struct { +type ListOTAUpdatesOutput struct { _ struct{} `type:"structure"` - // The logging level. - LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + // A token to use to get the next set of results. + NextToken *string `locationName:"nextToken" type:"string"` - // The ARN of the IAM role that grants access. - RoleArn *string `locationName:"roleArn" type:"string"` + // A list of OTA update jobs. + OtaUpdates []*OTAUpdateSummary `locationName:"otaUpdates" type:"list"` } // String returns the string representation -func (s GetLoggingOptionsOutput) String() string { +func (s ListOTAUpdatesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoggingOptionsOutput) GoString() string { +func (s ListOTAUpdatesOutput) GoString() string { return s.String() } -// SetLogLevel sets the LogLevel field's value. -func (s *GetLoggingOptionsOutput) SetLogLevel(v string) *GetLoggingOptionsOutput { - s.LogLevel = &v +// SetNextToken sets the NextToken field's value. +func (s *ListOTAUpdatesOutput) SetNextToken(v string) *ListOTAUpdatesOutput { + s.NextToken = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *GetLoggingOptionsOutput) SetRoleArn(v string) *GetLoggingOptionsOutput { - s.RoleArn = &v +// SetOtaUpdates sets the OtaUpdates field's value. +func (s *ListOTAUpdatesOutput) SetOtaUpdates(v []*OTAUpdateSummary) *ListOTAUpdatesOutput { + s.OtaUpdates = v return s } -type GetOTAUpdateInput struct { +// The input to the ListOutgoingCertificates operation. +type ListOutgoingCertificatesInput struct { _ struct{} `type:"structure"` - // The OTA update ID. - // - // OtaUpdateId is a required field - OtaUpdateId *string `location:"uri" locationName:"otaUpdateId" min:"1" type:"string" required:"true"` + // Specifies the order for results. If True, the results are returned in ascending + // order, based on the creation date. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + + // The marker for the next set of results. 
+ Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // The result page size. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` } // String returns the string representation -func (s GetOTAUpdateInput) String() string { +func (s ListOutgoingCertificatesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetOTAUpdateInput) GoString() string { +func (s ListOutgoingCertificatesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetOTAUpdateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetOTAUpdateInput"} - if s.OtaUpdateId == nil { - invalidParams.Add(request.NewErrParamRequired("OtaUpdateId")) - } - if s.OtaUpdateId != nil && len(*s.OtaUpdateId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("OtaUpdateId", 1)) +func (s *ListOutgoingCertificatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListOutgoingCertificatesInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) } if invalidParams.Len() > 0 { @@ -17870,63 +27617,87 @@ func (s *GetOTAUpdateInput) Validate() error { return nil } -// SetOtaUpdateId sets the OtaUpdateId field's value. -func (s *GetOTAUpdateInput) SetOtaUpdateId(v string) *GetOTAUpdateInput { - s.OtaUpdateId = &v +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListOutgoingCertificatesInput) SetAscendingOrder(v bool) *ListOutgoingCertificatesInput { + s.AscendingOrder = &v return s } -type GetOTAUpdateOutput struct { +// SetMarker sets the Marker field's value. +func (s *ListOutgoingCertificatesInput) SetMarker(v string) *ListOutgoingCertificatesInput { + s.Marker = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *ListOutgoingCertificatesInput) SetPageSize(v int64) *ListOutgoingCertificatesInput { + s.PageSize = &v + return s +} + +// The output from the ListOutgoingCertificates operation. +type ListOutgoingCertificatesOutput struct { _ struct{} `type:"structure"` - // The OTA update info. - OtaUpdateInfo *OTAUpdateInfo `locationName:"otaUpdateInfo" type:"structure"` + // The marker for the next set of results. + NextMarker *string `locationName:"nextMarker" type:"string"` + + // The certificates that are being transferred but not yet accepted. + OutgoingCertificates []*OutgoingCertificate `locationName:"outgoingCertificates" type:"list"` } // String returns the string representation -func (s GetOTAUpdateOutput) String() string { +func (s ListOutgoingCertificatesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetOTAUpdateOutput) GoString() string { +func (s ListOutgoingCertificatesOutput) GoString() string { return s.String() } -// SetOtaUpdateInfo sets the OtaUpdateInfo field's value. -func (s *GetOTAUpdateOutput) SetOtaUpdateInfo(v *OTAUpdateInfo) *GetOTAUpdateOutput { - s.OtaUpdateInfo = v +// SetNextMarker sets the NextMarker field's value. +func (s *ListOutgoingCertificatesOutput) SetNextMarker(v string) *ListOutgoingCertificatesOutput { + s.NextMarker = &v return s } -// The input for the GetPolicy operation. -type GetPolicyInput struct { +// SetOutgoingCertificates sets the OutgoingCertificates field's value. 
+func (s *ListOutgoingCertificatesOutput) SetOutgoingCertificates(v []*OutgoingCertificate) *ListOutgoingCertificatesOutput { + s.OutgoingCertificates = v + return s +} + +// The input for the ListPolicies operation. +type ListPoliciesInput struct { _ struct{} `type:"structure"` - // The name of the policy. - // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // Specifies the order for results. If true, the results are returned in ascending + // creation order. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + + // The marker for the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // The result page size. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` } // String returns the string representation -func (s GetPolicyInput) String() string { +func (s ListPoliciesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetPolicyInput) GoString() string { +func (s ListPoliciesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetPolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetPolicyInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) +func (s *ListPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPoliciesInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) } if invalidParams.Len() > 0 { @@ -17935,124 +27706,100 @@ func (s *GetPolicyInput) Validate() error { return nil } -// SetPolicyName sets the PolicyName field's value. -func (s *GetPolicyInput) SetPolicyName(v string) *GetPolicyInput { - s.PolicyName = &v +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListPoliciesInput) SetAscendingOrder(v bool) *ListPoliciesInput { + s.AscendingOrder = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListPoliciesInput) SetMarker(v string) *ListPoliciesInput { + s.Marker = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *ListPoliciesInput) SetPageSize(v int64) *ListPoliciesInput { + s.PageSize = &v return s } -// The output from the GetPolicy operation. -type GetPolicyOutput struct { +// The output from the ListPolicies operation. +type ListPoliciesOutput struct { _ struct{} `type:"structure"` - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - // The default policy version ID. - DefaultVersionId *string `locationName:"defaultVersionId" type:"string"` - - GenerationId *string `locationName:"generationId" type:"string"` - - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` - - // The policy ARN. - PolicyArn *string `locationName:"policyArn" type:"string"` - - // The JSON document that describes the policy. - PolicyDocument *string `locationName:"policyDocument" type:"string"` + // The marker for the next set of results, or null if there are no additional + // results. + NextMarker *string `locationName:"nextMarker" type:"string"` - // The policy name. 
- PolicyName *string `locationName:"policyName" min:"1" type:"string"` + // The descriptions of the policies. + Policies []*Policy `locationName:"policies" type:"list"` } // String returns the string representation -func (s GetPolicyOutput) String() string { +func (s ListPoliciesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetPolicyOutput) GoString() string { +func (s ListPoliciesOutput) GoString() string { return s.String() } -// SetCreationDate sets the CreationDate field's value. -func (s *GetPolicyOutput) SetCreationDate(v time.Time) *GetPolicyOutput { - s.CreationDate = &v - return s -} - -// SetDefaultVersionId sets the DefaultVersionId field's value. -func (s *GetPolicyOutput) SetDefaultVersionId(v string) *GetPolicyOutput { - s.DefaultVersionId = &v - return s -} - -// SetGenerationId sets the GenerationId field's value. -func (s *GetPolicyOutput) SetGenerationId(v string) *GetPolicyOutput { - s.GenerationId = &v +// SetNextMarker sets the NextMarker field's value. +func (s *ListPoliciesOutput) SetNextMarker(v string) *ListPoliciesOutput { + s.NextMarker = &v return s } -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *GetPolicyOutput) SetLastModifiedDate(v time.Time) *GetPolicyOutput { - s.LastModifiedDate = &v +// SetPolicies sets the Policies field's value. +func (s *ListPoliciesOutput) SetPolicies(v []*Policy) *ListPoliciesOutput { + s.Policies = v return s } -// SetPolicyArn sets the PolicyArn field's value. -func (s *GetPolicyOutput) SetPolicyArn(v string) *GetPolicyOutput { - s.PolicyArn = &v - return s -} +// The input for the ListPolicyPrincipals operation. +type ListPolicyPrincipalsInput struct { + _ struct{} `type:"structure"` -// SetPolicyDocument sets the PolicyDocument field's value. -func (s *GetPolicyOutput) SetPolicyDocument(v string) *GetPolicyOutput { - s.PolicyDocument = &v - return s -} + // Specifies the order for results. If true, the results are returned in ascending + // creation order. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` -// SetPolicyName sets the PolicyName field's value. -func (s *GetPolicyOutput) SetPolicyName(v string) *GetPolicyOutput { - s.PolicyName = &v - return s -} + // The marker for the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` -// The input for the GetPolicyVersion operation. -type GetPolicyVersionInput struct { - _ struct{} `type:"structure"` + // The result page size. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` - // The name of the policy. + // The policy name. // // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` - - // The policy version ID. 
- // - // PolicyVersionId is a required field - PolicyVersionId *string `location:"uri" locationName:"policyVersionId" type:"string" required:"true"` + PolicyName *string `location:"header" locationName:"x-amzn-iot-policy" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s GetPolicyVersionInput) String() string { +func (s ListPolicyPrincipalsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetPolicyVersionInput) GoString() string { +func (s ListPolicyPrincipalsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetPolicyVersionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetPolicyVersionInput"} +func (s *ListPolicyPrincipalsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPolicyPrincipalsInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) + } if s.PolicyName == nil { invalidParams.Add(request.NewErrParamRequired("PolicyName")) } if s.PolicyName != nil && len(*s.PolicyName) < 1 { invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) } - if s.PolicyVersionId == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyVersionId")) - } if invalidParams.Len() > 0 { return invalidParams @@ -18060,169 +27807,168 @@ func (s *GetPolicyVersionInput) Validate() error { return nil } -// SetPolicyName sets the PolicyName field's value. -func (s *GetPolicyVersionInput) SetPolicyName(v string) *GetPolicyVersionInput { - s.PolicyName = &v +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListPolicyPrincipalsInput) SetAscendingOrder(v bool) *ListPolicyPrincipalsInput { + s.AscendingOrder = &v return s } -// SetPolicyVersionId sets the PolicyVersionId field's value. -func (s *GetPolicyVersionInput) SetPolicyVersionId(v string) *GetPolicyVersionInput { - s.PolicyVersionId = &v +// SetMarker sets the Marker field's value. +func (s *ListPolicyPrincipalsInput) SetMarker(v string) *ListPolicyPrincipalsInput { + s.Marker = &v return s } -// The output from the GetPolicyVersion operation. -type GetPolicyVersionOutput struct { - _ struct{} `type:"structure"` - - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - GenerationId *string `locationName:"generationId" type:"string"` - - // Specifies whether the policy version is the default. - IsDefaultVersion *bool `locationName:"isDefaultVersion" type:"boolean"` - - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` +// SetPageSize sets the PageSize field's value. +func (s *ListPolicyPrincipalsInput) SetPageSize(v int64) *ListPolicyPrincipalsInput { + s.PageSize = &v + return s +} - // The policy ARN. - PolicyArn *string `locationName:"policyArn" type:"string"` +// SetPolicyName sets the PolicyName field's value. +func (s *ListPolicyPrincipalsInput) SetPolicyName(v string) *ListPolicyPrincipalsInput { + s.PolicyName = &v + return s +} - // The JSON document that describes the policy. - PolicyDocument *string `locationName:"policyDocument" type:"string"` +// The output from the ListPolicyPrincipals operation. +type ListPolicyPrincipalsOutput struct { + _ struct{} `type:"structure"` - // The policy name. 
- PolicyName *string `locationName:"policyName" min:"1" type:"string"` + // The marker for the next set of results, or null if there are no additional + // results. + NextMarker *string `locationName:"nextMarker" type:"string"` - // The policy version ID. - PolicyVersionId *string `locationName:"policyVersionId" type:"string"` + // The descriptions of the principals. + Principals []*string `locationName:"principals" type:"list"` } // String returns the string representation -func (s GetPolicyVersionOutput) String() string { +func (s ListPolicyPrincipalsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetPolicyVersionOutput) GoString() string { +func (s ListPolicyPrincipalsOutput) GoString() string { return s.String() } -// SetCreationDate sets the CreationDate field's value. -func (s *GetPolicyVersionOutput) SetCreationDate(v time.Time) *GetPolicyVersionOutput { - s.CreationDate = &v +// SetNextMarker sets the NextMarker field's value. +func (s *ListPolicyPrincipalsOutput) SetNextMarker(v string) *ListPolicyPrincipalsOutput { + s.NextMarker = &v return s } -// SetGenerationId sets the GenerationId field's value. -func (s *GetPolicyVersionOutput) SetGenerationId(v string) *GetPolicyVersionOutput { - s.GenerationId = &v +// SetPrincipals sets the Principals field's value. +func (s *ListPolicyPrincipalsOutput) SetPrincipals(v []*string) *ListPolicyPrincipalsOutput { + s.Principals = v return s } -// SetIsDefaultVersion sets the IsDefaultVersion field's value. -func (s *GetPolicyVersionOutput) SetIsDefaultVersion(v bool) *GetPolicyVersionOutput { - s.IsDefaultVersion = &v - return s +// The input for the ListPolicyVersions operation. +type ListPolicyVersionsInput struct { + _ struct{} `type:"structure"` + + // The policy name. + // + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` } -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *GetPolicyVersionOutput) SetLastModifiedDate(v time.Time) *GetPolicyVersionOutput { - s.LastModifiedDate = &v - return s +// String returns the string representation +func (s ListPolicyVersionsInput) String() string { + return awsutil.Prettify(s) } -// SetPolicyArn sets the PolicyArn field's value. -func (s *GetPolicyVersionOutput) SetPolicyArn(v string) *GetPolicyVersionOutput { - s.PolicyArn = &v - return s +// GoString returns the string representation +func (s ListPolicyVersionsInput) GoString() string { + return s.String() } -// SetPolicyDocument sets the PolicyDocument field's value. -func (s *GetPolicyVersionOutput) SetPolicyDocument(v string) *GetPolicyVersionOutput { - s.PolicyDocument = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListPolicyVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPolicyVersionsInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } // SetPolicyName sets the PolicyName field's value. 
-func (s *GetPolicyVersionOutput) SetPolicyName(v string) *GetPolicyVersionOutput { +func (s *ListPolicyVersionsInput) SetPolicyName(v string) *ListPolicyVersionsInput { s.PolicyName = &v return s } -// SetPolicyVersionId sets the PolicyVersionId field's value. -func (s *GetPolicyVersionOutput) SetPolicyVersionId(v string) *GetPolicyVersionOutput { - s.PolicyVersionId = &v - return s -} - -// The input to the GetRegistrationCode operation. -type GetRegistrationCodeInput struct { +// The output from the ListPolicyVersions operation. +type ListPolicyVersionsOutput struct { _ struct{} `type:"structure"` + + // The policy versions. + PolicyVersions []*PolicyVersion `locationName:"policyVersions" type:"list"` } // String returns the string representation -func (s GetRegistrationCodeInput) String() string { +func (s ListPolicyVersionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetRegistrationCodeInput) GoString() string { +func (s ListPolicyVersionsOutput) GoString() string { return s.String() } -// The output from the GetRegistrationCode operation. -type GetRegistrationCodeOutput struct { - _ struct{} `type:"structure"` - - // The CA certificate registration code. - RegistrationCode *string `locationName:"registrationCode" min:"64" type:"string"` +// SetPolicyVersions sets the PolicyVersions field's value. +func (s *ListPolicyVersionsOutput) SetPolicyVersions(v []*PolicyVersion) *ListPolicyVersionsOutput { + s.PolicyVersions = v + return s } -// String returns the string representation -func (s GetRegistrationCodeOutput) String() string { - return awsutil.Prettify(s) -} +// The input for the ListPrincipalPolicies operation. +type ListPrincipalPoliciesInput struct { + _ struct{} `type:"structure"` -// GoString returns the string representation -func (s GetRegistrationCodeOutput) GoString() string { - return s.String() -} + // Specifies the order for results. If true, results are returned in ascending + // creation order. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` -// SetRegistrationCode sets the RegistrationCode field's value. -func (s *GetRegistrationCodeOutput) SetRegistrationCode(v string) *GetRegistrationCodeOutput { - s.RegistrationCode = &v - return s -} + // The marker for the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` -// The input for the GetTopicRule operation. -type GetTopicRuleInput struct { - _ struct{} `type:"structure"` + // The result page size. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` - // The name of the rule. + // The principal. // - // RuleName is a required field - RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` + // Principal is a required field + Principal *string `location:"header" locationName:"x-amzn-iot-principal" type:"string" required:"true"` } // String returns the string representation -func (s GetTopicRuleInput) String() string { +func (s ListPrincipalPoliciesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetTopicRuleInput) GoString() string { +func (s ListPrincipalPoliciesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *GetTopicRuleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetTopicRuleInput"} - if s.RuleName == nil { - invalidParams.Add(request.NewErrParamRequired("RuleName")) +func (s *ListPrincipalPoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPrincipalPoliciesInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) } - if s.RuleName != nil && len(*s.RuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) + if s.Principal == nil { + invalidParams.Add(request.NewErrParamRequired("Principal")) } if invalidParams.Len() > 0 { @@ -18231,574 +27977,615 @@ func (s *GetTopicRuleInput) Validate() error { return nil } -// SetRuleName sets the RuleName field's value. -func (s *GetTopicRuleInput) SetRuleName(v string) *GetTopicRuleInput { - s.RuleName = &v +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListPrincipalPoliciesInput) SetAscendingOrder(v bool) *ListPrincipalPoliciesInput { + s.AscendingOrder = &v return s } -// The output from the GetTopicRule operation. -type GetTopicRuleOutput struct { - _ struct{} `type:"structure"` - - // The rule. - Rule *TopicRule `locationName:"rule" type:"structure"` - - // The rule ARN. - RuleArn *string `locationName:"ruleArn" type:"string"` -} - -// String returns the string representation -func (s GetTopicRuleOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s GetTopicRuleOutput) GoString() string { - return s.String() +// SetMarker sets the Marker field's value. +func (s *ListPrincipalPoliciesInput) SetMarker(v string) *ListPrincipalPoliciesInput { + s.Marker = &v + return s } -// SetRule sets the Rule field's value. -func (s *GetTopicRuleOutput) SetRule(v *TopicRule) *GetTopicRuleOutput { - s.Rule = v +// SetPageSize sets the PageSize field's value. +func (s *ListPrincipalPoliciesInput) SetPageSize(v int64) *ListPrincipalPoliciesInput { + s.PageSize = &v return s } -// SetRuleArn sets the RuleArn field's value. -func (s *GetTopicRuleOutput) SetRuleArn(v string) *GetTopicRuleOutput { - s.RuleArn = &v +// SetPrincipal sets the Principal field's value. +func (s *ListPrincipalPoliciesInput) SetPrincipal(v string) *ListPrincipalPoliciesInput { + s.Principal = &v return s } -type GetV2LoggingOptionsInput struct { +// The output from the ListPrincipalPolicies operation. +type ListPrincipalPoliciesOutput struct { _ struct{} `type:"structure"` + + // The marker for the next set of results, or null if there are no additional + // results. + NextMarker *string `locationName:"nextMarker" type:"string"` + + // The policies. + Policies []*Policy `locationName:"policies" type:"list"` } // String returns the string representation -func (s GetV2LoggingOptionsInput) String() string { +func (s ListPrincipalPoliciesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetV2LoggingOptionsInput) GoString() string { +func (s ListPrincipalPoliciesOutput) GoString() string { return s.String() } -type GetV2LoggingOptionsOutput struct { +// SetNextMarker sets the NextMarker field's value. +func (s *ListPrincipalPoliciesOutput) SetNextMarker(v string) *ListPrincipalPoliciesOutput { + s.NextMarker = &v + return s +} + +// SetPolicies sets the Policies field's value. 
+func (s *ListPrincipalPoliciesOutput) SetPolicies(v []*Policy) *ListPrincipalPoliciesOutput { + s.Policies = v + return s +} + +// The input for the ListPrincipalThings operation. +type ListPrincipalThingsInput struct { _ struct{} `type:"structure"` - // The default log level. - DefaultLogLevel *string `locationName:"defaultLogLevel" type:"string" enum:"LogLevel"` + // The maximum number of results to return in this operation. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // Disables all logs. - DisableAllLogs *bool `locationName:"disableAllLogs" type:"boolean"` + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The IAM role ARN AWS IoT uses to write to your CloudWatch logs. - RoleArn *string `locationName:"roleArn" type:"string"` + // The principal. + // + // Principal is a required field + Principal *string `location:"header" locationName:"x-amzn-principal" type:"string" required:"true"` } // String returns the string representation -func (s GetV2LoggingOptionsOutput) String() string { +func (s ListPrincipalThingsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetV2LoggingOptionsOutput) GoString() string { +func (s ListPrincipalThingsInput) GoString() string { return s.String() } -// SetDefaultLogLevel sets the DefaultLogLevel field's value. -func (s *GetV2LoggingOptionsOutput) SetDefaultLogLevel(v string) *GetV2LoggingOptionsOutput { - s.DefaultLogLevel = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListPrincipalThingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPrincipalThingsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Principal == nil { + invalidParams.Add(request.NewErrParamRequired("Principal")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListPrincipalThingsInput) SetMaxResults(v int64) *ListPrincipalThingsInput { + s.MaxResults = &v return s } -// SetDisableAllLogs sets the DisableAllLogs field's value. -func (s *GetV2LoggingOptionsOutput) SetDisableAllLogs(v bool) *GetV2LoggingOptionsOutput { - s.DisableAllLogs = &v +// SetNextToken sets the NextToken field's value. +func (s *ListPrincipalThingsInput) SetNextToken(v string) *ListPrincipalThingsInput { + s.NextToken = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *GetV2LoggingOptionsOutput) SetRoleArn(v string) *GetV2LoggingOptionsOutput { - s.RoleArn = &v +// SetPrincipal sets the Principal field's value. +func (s *ListPrincipalThingsInput) SetPrincipal(v string) *ListPrincipalThingsInput { + s.Principal = &v return s } -// The name and ARN of a group. -type GroupNameAndArn struct { +// The output from the ListPrincipalThings operation. +type ListPrincipalThingsOutput struct { _ struct{} `type:"structure"` - // The group ARN. - GroupArn *string `locationName:"groupArn" type:"string"` + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` - // The group name. - GroupName *string `locationName:"groupName" min:"1" type:"string"` + // The things. 
+ Things []*string `locationName:"things" type:"list"` } // String returns the string representation -func (s GroupNameAndArn) String() string { +func (s ListPrincipalThingsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GroupNameAndArn) GoString() string { +func (s ListPrincipalThingsOutput) GoString() string { return s.String() } -// SetGroupArn sets the GroupArn field's value. -func (s *GroupNameAndArn) SetGroupArn(v string) *GroupNameAndArn { - s.GroupArn = &v +// SetNextToken sets the NextToken field's value. +func (s *ListPrincipalThingsOutput) SetNextToken(v string) *ListPrincipalThingsOutput { + s.NextToken = &v return s } -// SetGroupName sets the GroupName field's value. -func (s *GroupNameAndArn) SetGroupName(v string) *GroupNameAndArn { - s.GroupName = &v +// SetThings sets the Things field's value. +func (s *ListPrincipalThingsOutput) SetThings(v []*string) *ListPrincipalThingsOutput { + s.Things = v return s } -// Information that implicitly denies authorization. When policy doesn't explicitly -// deny or allow an action on a resource it is considered an implicit deny. -type ImplicitDeny struct { +type ListRoleAliasesInput struct { _ struct{} `type:"structure"` - // Policies that don't contain a matching allow or deny statement for the specified - // action on the specified resource. - Policies []*Policy `locationName:"policies" type:"list"` + // Return the list of role aliases in ascending alphabetical order. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + + // A marker used to get the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // The maximum number of results to return at one time. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` } // String returns the string representation -func (s ImplicitDeny) String() string { +func (s ListRoleAliasesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ImplicitDeny) GoString() string { +func (s ListRoleAliasesInput) GoString() string { return s.String() } -// SetPolicies sets the Policies field's value. -func (s *ImplicitDeny) SetPolicies(v []*Policy) *ImplicitDeny { - s.Policies = v - return s -} - -// The Job object contains details about a job. -type Job struct { - _ struct{} `type:"structure"` - - // If the job was updated, describes the reason for the update. - Comment *string `locationName:"comment" type:"string"` - - // The time, in milliseconds since the epoch, when the job was completed. - CompletedAt *time.Time `locationName:"completedAt" type:"timestamp" timestampFormat:"unix"` - - // The time, in milliseconds since the epoch, when the job was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` - - // A short text description of the job. - Description *string `locationName:"description" type:"string"` - - // The parameters specified for the job document. - DocumentParameters map[string]*string `locationName:"documentParameters" type:"map"` - - // An ARN identifying the job with format "arn:aws:iot:region:account:job/jobId". - JobArn *string `locationName:"jobArn" type:"string"` - - // Allows you to create a staged rollout of a job. 
- JobExecutionsRolloutConfig *JobExecutionsRolloutConfig `locationName:"jobExecutionsRolloutConfig" type:"structure"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListRoleAliasesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListRoleAliasesInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) + } - // The unique identifier you assigned to this job when it was created. - JobId *string `locationName:"jobId" min:"1" type:"string"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // Details about the job process. - JobProcessDetails *JobProcessDetails `locationName:"jobProcessDetails" type:"structure"` +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListRoleAliasesInput) SetAscendingOrder(v bool) *ListRoleAliasesInput { + s.AscendingOrder = &v + return s +} - // The time, in milliseconds since the epoch, when the job was last updated. - LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp" timestampFormat:"unix"` +// SetMarker sets the Marker field's value. +func (s *ListRoleAliasesInput) SetMarker(v string) *ListRoleAliasesInput { + s.Marker = &v + return s +} - // Configuration for pre-signed S3 URLs. - PresignedUrlConfig *PresignedUrlConfig `locationName:"presignedUrlConfig" type:"structure"` +// SetPageSize sets the PageSize field's value. +func (s *ListRoleAliasesInput) SetPageSize(v int64) *ListRoleAliasesInput { + s.PageSize = &v + return s +} - // The status of the job, one of IN_PROGRESS, CANCELED, or COMPLETED. - Status *string `locationName:"status" type:"string" enum:"JobStatus"` +type ListRoleAliasesOutput struct { + _ struct{} `type:"structure"` - // Specifies whether the job will continue to run (CONTINUOUS), or will be complete - // after all those things specified as targets have completed the job (SNAPSHOT). - // If continuous, the job may also be run on a thing when a change is detected - // in a target. For example, a job will run on a device when the thing representing - // the device is added to a target group, even after the job was completed by - // all things originally in the group. - TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + // A marker used to get the next set of results. + NextMarker *string `locationName:"nextMarker" type:"string"` - // A list of IoT things and thing groups to which the job should be sent. - Targets []*string `locationName:"targets" min:"1" type:"list"` + // The role aliases. + RoleAliases []*string `locationName:"roleAliases" type:"list"` } // String returns the string representation -func (s Job) String() string { +func (s ListRoleAliasesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Job) GoString() string { +func (s ListRoleAliasesOutput) GoString() string { return s.String() } -// SetComment sets the Comment field's value. -func (s *Job) SetComment(v string) *Job { - s.Comment = &v +// SetNextMarker sets the NextMarker field's value. +func (s *ListRoleAliasesOutput) SetNextMarker(v string) *ListRoleAliasesOutput { + s.NextMarker = &v return s } -// SetCompletedAt sets the CompletedAt field's value. -func (s *Job) SetCompletedAt(v time.Time) *Job { - s.CompletedAt = &v +// SetRoleAliases sets the RoleAliases field's value. 
+func (s *ListRoleAliasesOutput) SetRoleAliases(v []*string) *ListRoleAliasesOutput { + s.RoleAliases = v return s } -// SetCreatedAt sets the CreatedAt field's value. -func (s *Job) SetCreatedAt(v time.Time) *Job { - s.CreatedAt = &v - return s -} +type ListScheduledAuditsInput struct { + _ struct{} `type:"structure"` -// SetDescription sets the Description field's value. -func (s *Job) SetDescription(v string) *Job { - s.Description = &v - return s + // The maximum number of results to return at one time. The default is 25. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The token for the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } -// SetDocumentParameters sets the DocumentParameters field's value. -func (s *Job) SetDocumentParameters(v map[string]*string) *Job { - s.DocumentParameters = v - return s +// String returns the string representation +func (s ListScheduledAuditsInput) String() string { + return awsutil.Prettify(s) } -// SetJobArn sets the JobArn field's value. -func (s *Job) SetJobArn(v string) *Job { - s.JobArn = &v - return s +// GoString returns the string representation +func (s ListScheduledAuditsInput) GoString() string { + return s.String() } -// SetJobExecutionsRolloutConfig sets the JobExecutionsRolloutConfig field's value. -func (s *Job) SetJobExecutionsRolloutConfig(v *JobExecutionsRolloutConfig) *Job { - s.JobExecutionsRolloutConfig = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListScheduledAuditsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListScheduledAuditsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetJobId sets the JobId field's value. -func (s *Job) SetJobId(v string) *Job { - s.JobId = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListScheduledAuditsInput) SetMaxResults(v int64) *ListScheduledAuditsInput { + s.MaxResults = &v return s } -// SetJobProcessDetails sets the JobProcessDetails field's value. -func (s *Job) SetJobProcessDetails(v *JobProcessDetails) *Job { - s.JobProcessDetails = v +// SetNextToken sets the NextToken field's value. +func (s *ListScheduledAuditsInput) SetNextToken(v string) *ListScheduledAuditsInput { + s.NextToken = &v return s } -// SetLastUpdatedAt sets the LastUpdatedAt field's value. -func (s *Job) SetLastUpdatedAt(v time.Time) *Job { - s.LastUpdatedAt = &v - return s +type ListScheduledAuditsOutput struct { + _ struct{} `type:"structure"` + + // A token that can be used to retrieve the next set of results, or null if + // there are no additional results. + NextToken *string `locationName:"nextToken" type:"string"` + + // The list of scheduled audits. + ScheduledAudits []*ScheduledAuditMetadata `locationName:"scheduledAudits" type:"list"` } -// SetPresignedUrlConfig sets the PresignedUrlConfig field's value. -func (s *Job) SetPresignedUrlConfig(v *PresignedUrlConfig) *Job { - s.PresignedUrlConfig = v - return s +// String returns the string representation +func (s ListScheduledAuditsOutput) String() string { + return awsutil.Prettify(s) } -// SetStatus sets the Status field's value. 
-func (s *Job) SetStatus(v string) *Job { - s.Status = &v - return s +// GoString returns the string representation +func (s ListScheduledAuditsOutput) GoString() string { + return s.String() } -// SetTargetSelection sets the TargetSelection field's value. -func (s *Job) SetTargetSelection(v string) *Job { - s.TargetSelection = &v +// SetNextToken sets the NextToken field's value. +func (s *ListScheduledAuditsOutput) SetNextToken(v string) *ListScheduledAuditsOutput { + s.NextToken = &v return s } -// SetTargets sets the Targets field's value. -func (s *Job) SetTargets(v []*string) *Job { - s.Targets = v +// SetScheduledAudits sets the ScheduledAudits field's value. +func (s *ListScheduledAuditsOutput) SetScheduledAudits(v []*ScheduledAuditMetadata) *ListScheduledAuditsOutput { + s.ScheduledAudits = v return s } -// The job execution object represents the execution of a job on a particular -// device. -type JobExecution struct { +type ListSecurityProfilesForTargetInput struct { _ struct{} `type:"structure"` - // A string (consisting of the digits "0" through "9") which identifies this - // particular job execution on this particular device. It can be used in commands - // which return or update job execution information. - ExecutionNumber *int64 `locationName:"executionNumber" type:"long"` - - // The unique identifier you assigned to the job when it was created. - JobId *string `locationName:"jobId" min:"1" type:"string"` - - // The time, in milliseconds since the epoch, when the job execution was last - // updated. - LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp" timestampFormat:"unix"` - - // The time, in milliseconds since the epoch, when the job execution was queued. - QueuedAt *time.Time `locationName:"queuedAt" type:"timestamp" timestampFormat:"unix"` - - // The time, in milliseconds since the epoch, when the job execution started. - StartedAt *time.Time `locationName:"startedAt" type:"timestamp" timestampFormat:"unix"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The status of the job execution (IN_PROGRESS, QUEUED, FAILED, SUCCESS, CANCELED, - // or REJECTED). - Status *string `locationName:"status" type:"string" enum:"JobExecutionStatus"` + // The token for the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // A collection of name/value pairs that describe the status of the job execution. - StatusDetails *JobExecutionStatusDetails `locationName:"statusDetails" type:"structure"` + // If true, return child groups as well. + Recursive *bool `location:"querystring" locationName:"recursive" type:"boolean"` - // The ARN of the thing on which the job execution is running. - ThingArn *string `locationName:"thingArn" type:"string"` + // The ARN of the target (thing group) whose attached security profiles you + // want to get. 
+ // + // SecurityProfileTargetArn is a required field + SecurityProfileTargetArn *string `location:"querystring" locationName:"securityProfileTargetArn" type:"string" required:"true"` } // String returns the string representation -func (s JobExecution) String() string { +func (s ListSecurityProfilesForTargetInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s JobExecution) GoString() string { +func (s ListSecurityProfilesForTargetInput) GoString() string { return s.String() } -// SetExecutionNumber sets the ExecutionNumber field's value. -func (s *JobExecution) SetExecutionNumber(v int64) *JobExecution { - s.ExecutionNumber = &v - return s -} - -// SetJobId sets the JobId field's value. -func (s *JobExecution) SetJobId(v string) *JobExecution { - s.JobId = &v - return s -} - -// SetLastUpdatedAt sets the LastUpdatedAt field's value. -func (s *JobExecution) SetLastUpdatedAt(v time.Time) *JobExecution { - s.LastUpdatedAt = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListSecurityProfilesForTargetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListSecurityProfilesForTargetInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.SecurityProfileTargetArn == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileTargetArn")) + } -// SetQueuedAt sets the QueuedAt field's value. -func (s *JobExecution) SetQueuedAt(v time.Time) *JobExecution { - s.QueuedAt = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetStartedAt sets the StartedAt field's value. -func (s *JobExecution) SetStartedAt(v time.Time) *JobExecution { - s.StartedAt = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListSecurityProfilesForTargetInput) SetMaxResults(v int64) *ListSecurityProfilesForTargetInput { + s.MaxResults = &v return s } -// SetStatus sets the Status field's value. -func (s *JobExecution) SetStatus(v string) *JobExecution { - s.Status = &v +// SetNextToken sets the NextToken field's value. +func (s *ListSecurityProfilesForTargetInput) SetNextToken(v string) *ListSecurityProfilesForTargetInput { + s.NextToken = &v return s } -// SetStatusDetails sets the StatusDetails field's value. -func (s *JobExecution) SetStatusDetails(v *JobExecutionStatusDetails) *JobExecution { - s.StatusDetails = v +// SetRecursive sets the Recursive field's value. +func (s *ListSecurityProfilesForTargetInput) SetRecursive(v bool) *ListSecurityProfilesForTargetInput { + s.Recursive = &v return s } -// SetThingArn sets the ThingArn field's value. -func (s *JobExecution) SetThingArn(v string) *JobExecution { - s.ThingArn = &v +// SetSecurityProfileTargetArn sets the SecurityProfileTargetArn field's value. +func (s *ListSecurityProfilesForTargetInput) SetSecurityProfileTargetArn(v string) *ListSecurityProfilesForTargetInput { + s.SecurityProfileTargetArn = &v return s } -// Details of the job execution status. -type JobExecutionStatusDetails struct { +type ListSecurityProfilesForTargetOutput struct { _ struct{} `type:"structure"` - // The job execution status. - DetailsMap map[string]*string `locationName:"detailsMap" type:"map"` + // A token that can be used to retrieve the next set of results, or null if + // there are no additional results. 
+ NextToken *string `locationName:"nextToken" type:"string"` + + // A list of security profiles and their associated targets. + SecurityProfileTargetMappings []*SecurityProfileTargetMapping `locationName:"securityProfileTargetMappings" type:"list"` } // String returns the string representation -func (s JobExecutionStatusDetails) String() string { +func (s ListSecurityProfilesForTargetOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s JobExecutionStatusDetails) GoString() string { +func (s ListSecurityProfilesForTargetOutput) GoString() string { return s.String() } -// SetDetailsMap sets the DetailsMap field's value. -func (s *JobExecutionStatusDetails) SetDetailsMap(v map[string]*string) *JobExecutionStatusDetails { - s.DetailsMap = v +// SetNextToken sets the NextToken field's value. +func (s *ListSecurityProfilesForTargetOutput) SetNextToken(v string) *ListSecurityProfilesForTargetOutput { + s.NextToken = &v return s } -// The job execution summary. -type JobExecutionSummary struct { - _ struct{} `type:"structure"` - - // A string (consisting of the digits "0" through "9") which identifies this - // particular job execution on this particular device. It can be used later - // in commands which return or update job execution information. - ExecutionNumber *int64 `locationName:"executionNumber" type:"long"` - - // The time, in milliseconds since the epoch, when the job execution was last - // updated. - LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp" timestampFormat:"unix"` +// SetSecurityProfileTargetMappings sets the SecurityProfileTargetMappings field's value. +func (s *ListSecurityProfilesForTargetOutput) SetSecurityProfileTargetMappings(v []*SecurityProfileTargetMapping) *ListSecurityProfilesForTargetOutput { + s.SecurityProfileTargetMappings = v + return s +} - // The time, in milliseconds since the epoch, when the job execution was queued. - QueuedAt *time.Time `locationName:"queuedAt" type:"timestamp" timestampFormat:"unix"` +type ListSecurityProfilesInput struct { + _ struct{} `type:"structure"` - // The time, in milliseconds since the epoch, when the job execution started. - StartedAt *time.Time `locationName:"startedAt" type:"timestamp" timestampFormat:"unix"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The status of the job execution. - Status *string `locationName:"status" type:"string" enum:"JobExecutionStatus"` + // The token for the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s JobExecutionSummary) String() string { +func (s ListSecurityProfilesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s JobExecutionSummary) GoString() string { +func (s ListSecurityProfilesInput) GoString() string { return s.String() } -// SetExecutionNumber sets the ExecutionNumber field's value. -func (s *JobExecutionSummary) SetExecutionNumber(v int64) *JobExecutionSummary { - s.ExecutionNumber = &v +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListSecurityProfilesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListSecurityProfilesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListSecurityProfilesInput) SetMaxResults(v int64) *ListSecurityProfilesInput { + s.MaxResults = &v return s } -// SetLastUpdatedAt sets the LastUpdatedAt field's value. -func (s *JobExecutionSummary) SetLastUpdatedAt(v time.Time) *JobExecutionSummary { - s.LastUpdatedAt = &v +// SetNextToken sets the NextToken field's value. +func (s *ListSecurityProfilesInput) SetNextToken(v string) *ListSecurityProfilesInput { + s.NextToken = &v return s } -// SetQueuedAt sets the QueuedAt field's value. -func (s *JobExecutionSummary) SetQueuedAt(v time.Time) *JobExecutionSummary { - s.QueuedAt = &v - return s +type ListSecurityProfilesOutput struct { + _ struct{} `type:"structure"` + + // A token that can be used to retrieve the next set of results, or null if + // there are no additional results. + NextToken *string `locationName:"nextToken" type:"string"` + + // A list of security profile identifiers (names and ARNs). + SecurityProfileIdentifiers []*SecurityProfileIdentifier `locationName:"securityProfileIdentifiers" type:"list"` +} + +// String returns the string representation +func (s ListSecurityProfilesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSecurityProfilesOutput) GoString() string { + return s.String() } -// SetStartedAt sets the StartedAt field's value. -func (s *JobExecutionSummary) SetStartedAt(v time.Time) *JobExecutionSummary { - s.StartedAt = &v +// SetNextToken sets the NextToken field's value. +func (s *ListSecurityProfilesOutput) SetNextToken(v string) *ListSecurityProfilesOutput { + s.NextToken = &v return s } -// SetStatus sets the Status field's value. -func (s *JobExecutionSummary) SetStatus(v string) *JobExecutionSummary { - s.Status = &v +// SetSecurityProfileIdentifiers sets the SecurityProfileIdentifiers field's value. +func (s *ListSecurityProfilesOutput) SetSecurityProfileIdentifiers(v []*SecurityProfileIdentifier) *ListSecurityProfilesOutput { + s.SecurityProfileIdentifiers = v return s } -// Contains a summary of information about job executions for a specific job. -type JobExecutionSummaryForJob struct { +type ListStreamsInput struct { _ struct{} `type:"structure"` - // Contains a subset of information about a job execution. - JobExecutionSummary *JobExecutionSummary `locationName:"jobExecutionSummary" type:"structure"` + // Set to true to return the list of streams in ascending order. + AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` - // The ARN of the thing on which the job execution is running. - ThingArn *string `locationName:"thingArn" type:"string"` + // The maximum number of results to return at a time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // A token used to get the next set of results. 
+ NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s JobExecutionSummaryForJob) String() string { +func (s ListStreamsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s JobExecutionSummaryForJob) GoString() string { +func (s ListStreamsInput) GoString() string { return s.String() } -// SetJobExecutionSummary sets the JobExecutionSummary field's value. -func (s *JobExecutionSummaryForJob) SetJobExecutionSummary(v *JobExecutionSummary) *JobExecutionSummaryForJob { - s.JobExecutionSummary = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListStreamsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListStreamsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAscendingOrder sets the AscendingOrder field's value. +func (s *ListStreamsInput) SetAscendingOrder(v bool) *ListStreamsInput { + s.AscendingOrder = &v return s } -// SetThingArn sets the ThingArn field's value. -func (s *JobExecutionSummaryForJob) SetThingArn(v string) *JobExecutionSummaryForJob { - s.ThingArn = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListStreamsInput) SetMaxResults(v int64) *ListStreamsInput { + s.MaxResults = &v return s } -// The job execution summary for a thing. -type JobExecutionSummaryForThing struct { +// SetNextToken sets the NextToken field's value. +func (s *ListStreamsInput) SetNextToken(v string) *ListStreamsInput { + s.NextToken = &v + return s +} + +type ListStreamsOutput struct { _ struct{} `type:"structure"` - // Contains a subset of information about a job execution. - JobExecutionSummary *JobExecutionSummary `locationName:"jobExecutionSummary" type:"structure"` + // A token used to get the next set of results. + NextToken *string `locationName:"nextToken" type:"string"` - // The unique identifier you assigned to this job when it was created. - JobId *string `locationName:"jobId" min:"1" type:"string"` + // A list of streams. + Streams []*StreamSummary `locationName:"streams" type:"list"` } // String returns the string representation -func (s JobExecutionSummaryForThing) String() string { +func (s ListStreamsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s JobExecutionSummaryForThing) GoString() string { +func (s ListStreamsOutput) GoString() string { return s.String() } -// SetJobExecutionSummary sets the JobExecutionSummary field's value. -func (s *JobExecutionSummaryForThing) SetJobExecutionSummary(v *JobExecutionSummary) *JobExecutionSummaryForThing { - s.JobExecutionSummary = v +// SetNextToken sets the NextToken field's value. +func (s *ListStreamsOutput) SetNextToken(v string) *ListStreamsOutput { + s.NextToken = &v return s } -// SetJobId sets the JobId field's value. -func (s *JobExecutionSummaryForThing) SetJobId(v string) *JobExecutionSummaryForThing { - s.JobId = &v +// SetStreams sets the Streams field's value. +func (s *ListStreamsOutput) SetStreams(v []*StreamSummary) *ListStreamsOutput { + s.Streams = v return s } -// Allows you to create a staged rollout of a job. 
-type JobExecutionsRolloutConfig struct { +type ListTagsForResourceInput struct { _ struct{} `type:"structure"` - // The maximum number of things that will be notified of a pending job, per - // minute. This parameter allows you to create a staged rollout. - MaximumPerMinute *int64 `locationName:"maximumPerMinute" min:"1" type:"integer"` + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + + // The ARN of the resource. + // + // ResourceArn is a required field + ResourceArn *string `location:"querystring" locationName:"resourceArn" type:"string" required:"true"` } // String returns the string representation -func (s JobExecutionsRolloutConfig) String() string { +func (s ListTagsForResourceInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s JobExecutionsRolloutConfig) GoString() string { +func (s ListTagsForResourceInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *JobExecutionsRolloutConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "JobExecutionsRolloutConfig"} - if s.MaximumPerMinute != nil && *s.MaximumPerMinute < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaximumPerMinute", 1)) +func (s *ListTagsForResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) } if invalidParams.Len() > 0 { @@ -18807,262 +28594,276 @@ func (s *JobExecutionsRolloutConfig) Validate() error { return nil } -// SetMaximumPerMinute sets the MaximumPerMinute field's value. -func (s *JobExecutionsRolloutConfig) SetMaximumPerMinute(v int64) *JobExecutionsRolloutConfig { - s.MaximumPerMinute = &v +// SetNextToken sets the NextToken field's value. +func (s *ListTagsForResourceInput) SetNextToken(v string) *ListTagsForResourceInput { + s.NextToken = &v return s } -// The job process details. -type JobProcessDetails struct { - _ struct{} `type:"structure"` - - // The number of things that cancelled the job. - NumberOfCanceledThings *int64 `locationName:"numberOfCanceledThings" type:"integer"` - - // The number of things that failed executing the job. - NumberOfFailedThings *int64 `locationName:"numberOfFailedThings" type:"integer"` - - // The number of things currently executing the job. - NumberOfInProgressThings *int64 `locationName:"numberOfInProgressThings" type:"integer"` - - // The number of things that are awaiting execution of the job. - NumberOfQueuedThings *int64 `locationName:"numberOfQueuedThings" type:"integer"` - - // The number of things that rejected the job. - NumberOfRejectedThings *int64 `locationName:"numberOfRejectedThings" type:"integer"` +// SetResourceArn sets the ResourceArn field's value. +func (s *ListTagsForResourceInput) SetResourceArn(v string) *ListTagsForResourceInput { + s.ResourceArn = &v + return s +} - // The number of things that are no longer scheduled to execute the job because - // they have been deleted or have been removed from the group that was a target - // of the job. - NumberOfRemovedThings *int64 `locationName:"numberOfRemovedThings" type:"integer"` +type ListTagsForResourceOutput struct { + _ struct{} `type:"structure"` - // The number of things which successfully completed the job. 
- NumberOfSucceededThings *int64 `locationName:"numberOfSucceededThings" type:"integer"` + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` - // The devices on which the job is executing. - ProcessingTargets []*string `locationName:"processingTargets" type:"list"` + // The list of tags assigned to the resource. + Tags []*Tag `locationName:"tags" type:"list"` } // String returns the string representation -func (s JobProcessDetails) String() string { +func (s ListTagsForResourceOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s JobProcessDetails) GoString() string { +func (s ListTagsForResourceOutput) GoString() string { return s.String() } -// SetNumberOfCanceledThings sets the NumberOfCanceledThings field's value. -func (s *JobProcessDetails) SetNumberOfCanceledThings(v int64) *JobProcessDetails { - s.NumberOfCanceledThings = &v +// SetNextToken sets the NextToken field's value. +func (s *ListTagsForResourceOutput) SetNextToken(v string) *ListTagsForResourceOutput { + s.NextToken = &v return s } -// SetNumberOfFailedThings sets the NumberOfFailedThings field's value. -func (s *JobProcessDetails) SetNumberOfFailedThings(v int64) *JobProcessDetails { - s.NumberOfFailedThings = &v +// SetTags sets the Tags field's value. +func (s *ListTagsForResourceOutput) SetTags(v []*Tag) *ListTagsForResourceOutput { + s.Tags = v return s } -// SetNumberOfInProgressThings sets the NumberOfInProgressThings field's value. -func (s *JobProcessDetails) SetNumberOfInProgressThings(v int64) *JobProcessDetails { - s.NumberOfInProgressThings = &v - return s +type ListTargetsForPolicyInput struct { + _ struct{} `type:"structure"` + + // A marker used to get the next set of results. + Marker *string `location:"querystring" locationName:"marker" type:"string"` + + // The maximum number of results to return at one time. + PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + + // The policy name. + // + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` } -// SetNumberOfQueuedThings sets the NumberOfQueuedThings field's value. -func (s *JobProcessDetails) SetNumberOfQueuedThings(v int64) *JobProcessDetails { - s.NumberOfQueuedThings = &v - return s +// String returns the string representation +func (s ListTargetsForPolicyInput) String() string { + return awsutil.Prettify(s) } -// SetNumberOfRejectedThings sets the NumberOfRejectedThings field's value. -func (s *JobProcessDetails) SetNumberOfRejectedThings(v int64) *JobProcessDetails { - s.NumberOfRejectedThings = &v - return s +// GoString returns the string representation +func (s ListTargetsForPolicyInput) GoString() string { + return s.String() } -// SetNumberOfRemovedThings sets the NumberOfRemovedThings field's value. -func (s *JobProcessDetails) SetNumberOfRemovedThings(v int64) *JobProcessDetails { - s.NumberOfRemovedThings = &v +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListTargetsForPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTargetsForPolicyInput"} + if s.PageSize != nil && *s.PageSize < 1 { + invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) + } + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *ListTargetsForPolicyInput) SetMarker(v string) *ListTargetsForPolicyInput { + s.Marker = &v return s } -// SetNumberOfSucceededThings sets the NumberOfSucceededThings field's value. -func (s *JobProcessDetails) SetNumberOfSucceededThings(v int64) *JobProcessDetails { - s.NumberOfSucceededThings = &v +// SetPageSize sets the PageSize field's value. +func (s *ListTargetsForPolicyInput) SetPageSize(v int64) *ListTargetsForPolicyInput { + s.PageSize = &v return s } -// SetProcessingTargets sets the ProcessingTargets field's value. -func (s *JobProcessDetails) SetProcessingTargets(v []*string) *JobProcessDetails { - s.ProcessingTargets = v +// SetPolicyName sets the PolicyName field's value. +func (s *ListTargetsForPolicyInput) SetPolicyName(v string) *ListTargetsForPolicyInput { + s.PolicyName = &v return s } -// The job summary. -type JobSummary struct { +type ListTargetsForPolicyOutput struct { _ struct{} `type:"structure"` - // The time, in milliseconds since the epoch, when the job completed. - CompletedAt *time.Time `locationName:"completedAt" type:"timestamp" timestampFormat:"unix"` - - // The time, in milliseconds since the epoch, when the job was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` - - // The job ARN. - JobArn *string `locationName:"jobArn" type:"string"` - - // The unique identifier you assigned to this job when it was created. - JobId *string `locationName:"jobId" min:"1" type:"string"` - - // The time, in milliseconds since the epoch, when the job was last updated. - LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp" timestampFormat:"unix"` - - // The job summary status. - Status *string `locationName:"status" type:"string" enum:"JobStatus"` - - // Specifies whether the job will continue to run (CONTINUOUS), or will be complete - // after all those things specified as targets have completed the job (SNAPSHOT). - // If continuous, the job may also be run on a thing when a change is detected - // in a target. For example, a job will run on a thing when the thing is added - // to a target group, even after the job was completed by all things originally - // in the group. - TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + // A marker used to get the next set of results. + NextMarker *string `locationName:"nextMarker" type:"string"` - // The ID of the thing group. - ThingGroupId *string `locationName:"thingGroupId" min:"1" type:"string"` + // The policy targets. 
+ Targets []*string `locationName:"targets" type:"list"` } // String returns the string representation -func (s JobSummary) String() string { +func (s ListTargetsForPolicyOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s JobSummary) GoString() string { +func (s ListTargetsForPolicyOutput) GoString() string { return s.String() } -// SetCompletedAt sets the CompletedAt field's value. -func (s *JobSummary) SetCompletedAt(v time.Time) *JobSummary { - s.CompletedAt = &v +// SetNextMarker sets the NextMarker field's value. +func (s *ListTargetsForPolicyOutput) SetNextMarker(v string) *ListTargetsForPolicyOutput { + s.NextMarker = &v return s } -// SetCreatedAt sets the CreatedAt field's value. -func (s *JobSummary) SetCreatedAt(v time.Time) *JobSummary { - s.CreatedAt = &v +// SetTargets sets the Targets field's value. +func (s *ListTargetsForPolicyOutput) SetTargets(v []*string) *ListTargetsForPolicyOutput { + s.Targets = v return s } -// SetJobArn sets the JobArn field's value. -func (s *JobSummary) SetJobArn(v string) *JobSummary { - s.JobArn = &v - return s +type ListTargetsForSecurityProfileInput struct { + _ struct{} `type:"structure"` + + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The token for the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + + // The security profile. + // + // SecurityProfileName is a required field + SecurityProfileName *string `location:"uri" locationName:"securityProfileName" min:"1" type:"string" required:"true"` } -// SetJobId sets the JobId field's value. -func (s *JobSummary) SetJobId(v string) *JobSummary { - s.JobId = &v - return s +// String returns the string representation +func (s ListTargetsForSecurityProfileInput) String() string { + return awsutil.Prettify(s) } -// SetLastUpdatedAt sets the LastUpdatedAt field's value. -func (s *JobSummary) SetLastUpdatedAt(v time.Time) *JobSummary { - s.LastUpdatedAt = &v - return s +// GoString returns the string representation +func (s ListTargetsForSecurityProfileInput) GoString() string { + return s.String() } -// SetStatus sets the Status field's value. -func (s *JobSummary) SetStatus(v string) *JobSummary { - s.Status = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTargetsForSecurityProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTargetsForSecurityProfileInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.SecurityProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileName")) + } + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListTargetsForSecurityProfileInput) SetMaxResults(v int64) *ListTargetsForSecurityProfileInput { + s.MaxResults = &v return s } -// SetTargetSelection sets the TargetSelection field's value. -func (s *JobSummary) SetTargetSelection(v string) *JobSummary { - s.TargetSelection = &v +// SetNextToken sets the NextToken field's value. 
+func (s *ListTargetsForSecurityProfileInput) SetNextToken(v string) *ListTargetsForSecurityProfileInput { + s.NextToken = &v return s } -// SetThingGroupId sets the ThingGroupId field's value. -func (s *JobSummary) SetThingGroupId(v string) *JobSummary { - s.ThingGroupId = &v +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *ListTargetsForSecurityProfileInput) SetSecurityProfileName(v string) *ListTargetsForSecurityProfileInput { + s.SecurityProfileName = &v return s } -// Describes a key pair. -type KeyPair struct { +type ListTargetsForSecurityProfileOutput struct { _ struct{} `type:"structure"` - // The private key. - PrivateKey *string `min:"1" type:"string"` + // A token that can be used to retrieve the next set of results, or null if + // there are no additional results. + NextToken *string `locationName:"nextToken" type:"string"` - // The public key. - PublicKey *string `min:"1" type:"string"` + // The thing groups to which the security profile is attached. + SecurityProfileTargets []*SecurityProfileTarget `locationName:"securityProfileTargets" type:"list"` } // String returns the string representation -func (s KeyPair) String() string { +func (s ListTargetsForSecurityProfileOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s KeyPair) GoString() string { +func (s ListTargetsForSecurityProfileOutput) GoString() string { return s.String() } -// SetPrivateKey sets the PrivateKey field's value. -func (s *KeyPair) SetPrivateKey(v string) *KeyPair { - s.PrivateKey = &v +// SetNextToken sets the NextToken field's value. +func (s *ListTargetsForSecurityProfileOutput) SetNextToken(v string) *ListTargetsForSecurityProfileOutput { + s.NextToken = &v return s } -// SetPublicKey sets the PublicKey field's value. -func (s *KeyPair) SetPublicKey(v string) *KeyPair { - s.PublicKey = &v +// SetSecurityProfileTargets sets the SecurityProfileTargets field's value. +func (s *ListTargetsForSecurityProfileOutput) SetSecurityProfileTargets(v []*SecurityProfileTarget) *ListTargetsForSecurityProfileOutput { + s.SecurityProfileTargets = v return s } -// Describes an action to write data to an Amazon Kinesis stream. -type KinesisAction struct { +type ListThingGroupsForThingInput struct { _ struct{} `type:"structure"` - // The partition key. - PartitionKey *string `locationName:"partitionKey" type:"string"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The ARN of the IAM role that grants access to the Amazon Kinesis stream. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The name of the Amazon Kinesis stream. + // The thing name. 
// - // StreamName is a required field - StreamName *string `locationName:"streamName" type:"string" required:"true"` + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s KinesisAction) String() string { +func (s ListThingGroupsForThingInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s KinesisAction) GoString() string { +func (s ListThingGroupsForThingInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *KinesisAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "KinesisAction"} - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) +func (s *ListThingGroupsForThingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingGroupsForThingInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.StreamName == nil { - invalidParams.Add(request.NewErrParamRequired("StreamName")) + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) } if invalidParams.Len() > 0 { @@ -19071,99 +28872,97 @@ func (s *KinesisAction) Validate() error { return nil } -// SetPartitionKey sets the PartitionKey field's value. -func (s *KinesisAction) SetPartitionKey(v string) *KinesisAction { - s.PartitionKey = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListThingGroupsForThingInput) SetMaxResults(v int64) *ListThingGroupsForThingInput { + s.MaxResults = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *KinesisAction) SetRoleArn(v string) *KinesisAction { - s.RoleArn = &v +// SetNextToken sets the NextToken field's value. +func (s *ListThingGroupsForThingInput) SetNextToken(v string) *ListThingGroupsForThingInput { + s.NextToken = &v return s } -// SetStreamName sets the StreamName field's value. -func (s *KinesisAction) SetStreamName(v string) *KinesisAction { - s.StreamName = &v +// SetThingName sets the ThingName field's value. +func (s *ListThingGroupsForThingInput) SetThingName(v string) *ListThingGroupsForThingInput { + s.ThingName = &v return s } -// Describes an action to invoke a Lambda function. -type LambdaAction struct { +type ListThingGroupsForThingOutput struct { _ struct{} `type:"structure"` - // The ARN of the Lambda function. - // - // FunctionArn is a required field - FunctionArn *string `locationName:"functionArn" type:"string" required:"true"` + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` + + // The thing groups. + ThingGroups []*GroupNameAndArn `locationName:"thingGroups" type:"list"` } // String returns the string representation -func (s LambdaAction) String() string { +func (s ListThingGroupsForThingOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LambdaAction) GoString() string { +func (s ListThingGroupsForThingOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *LambdaAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "LambdaAction"} - if s.FunctionArn == nil { - invalidParams.Add(request.NewErrParamRequired("FunctionArn")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetNextToken sets the NextToken field's value. +func (s *ListThingGroupsForThingOutput) SetNextToken(v string) *ListThingGroupsForThingOutput { + s.NextToken = &v + return s } -// SetFunctionArn sets the FunctionArn field's value. -func (s *LambdaAction) SetFunctionArn(v string) *LambdaAction { - s.FunctionArn = &v +// SetThingGroups sets the ThingGroups field's value. +func (s *ListThingGroupsForThingOutput) SetThingGroups(v []*GroupNameAndArn) *ListThingGroupsForThingOutput { + s.ThingGroups = v return s } -type ListAttachedPoliciesInput struct { +type ListThingGroupsInput struct { _ struct{} `type:"structure"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // A filter that limits the results to those with the specified name prefix. + NamePrefixFilter *string `location:"querystring" locationName:"namePrefixFilter" min:"1" type:"string"` + // The token to retrieve the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The maximum number of results to be returned per request. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // A filter that limits the results to those with the specified parent group. + ParentGroup *string `location:"querystring" locationName:"parentGroup" min:"1" type:"string"` - // When true, recursively list attached policies. + // If true, return child groups as well. Recursive *bool `location:"querystring" locationName:"recursive" type:"boolean"` - - // The group for which the policies will be listed. - // - // Target is a required field - Target *string `location:"uri" locationName:"target" type:"string" required:"true"` } // String returns the string representation -func (s ListAttachedPoliciesInput) String() string { +func (s ListThingGroupsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListAttachedPoliciesInput) GoString() string { +func (s ListThingGroupsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListAttachedPoliciesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListAttachedPoliciesInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) +func (s *ListThingGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingGroupsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.Target == nil { - invalidParams.Add(request.NewErrParamRequired("Target")) + if s.NamePrefixFilter != nil && len(*s.NamePrefixFilter) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NamePrefixFilter", 1)) + } + if s.ParentGroup != nil && len(*s.ParentGroup) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ParentGroup", 1)) } if invalidParams.Len() > 0 { @@ -19172,94 +28971,97 @@ func (s *ListAttachedPoliciesInput) Validate() error { return nil } -// SetMarker sets the Marker field's value. -func (s *ListAttachedPoliciesInput) SetMarker(v string) *ListAttachedPoliciesInput { - s.Marker = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListThingGroupsInput) SetMaxResults(v int64) *ListThingGroupsInput { + s.MaxResults = &v return s } -// SetPageSize sets the PageSize field's value. -func (s *ListAttachedPoliciesInput) SetPageSize(v int64) *ListAttachedPoliciesInput { - s.PageSize = &v +// SetNamePrefixFilter sets the NamePrefixFilter field's value. +func (s *ListThingGroupsInput) SetNamePrefixFilter(v string) *ListThingGroupsInput { + s.NamePrefixFilter = &v return s } -// SetRecursive sets the Recursive field's value. -func (s *ListAttachedPoliciesInput) SetRecursive(v bool) *ListAttachedPoliciesInput { - s.Recursive = &v +// SetNextToken sets the NextToken field's value. +func (s *ListThingGroupsInput) SetNextToken(v string) *ListThingGroupsInput { + s.NextToken = &v return s } -// SetTarget sets the Target field's value. -func (s *ListAttachedPoliciesInput) SetTarget(v string) *ListAttachedPoliciesInput { - s.Target = &v +// SetParentGroup sets the ParentGroup field's value. +func (s *ListThingGroupsInput) SetParentGroup(v string) *ListThingGroupsInput { + s.ParentGroup = &v return s } -type ListAttachedPoliciesOutput struct { +// SetRecursive sets the Recursive field's value. +func (s *ListThingGroupsInput) SetRecursive(v bool) *ListThingGroupsInput { + s.Recursive = &v + return s +} + +type ListThingGroupsOutput struct { _ struct{} `type:"structure"` - // The token to retrieve the next set of results, or ``null`` if there are no - // more results. - NextMarker *string `locationName:"nextMarker" type:"string"` + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` - // The policies. - Policies []*Policy `locationName:"policies" type:"list"` + // The thing groups. + ThingGroups []*GroupNameAndArn `locationName:"thingGroups" type:"list"` } // String returns the string representation -func (s ListAttachedPoliciesOutput) String() string { +func (s ListThingGroupsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListAttachedPoliciesOutput) GoString() string { +func (s ListThingGroupsOutput) GoString() string { return s.String() } -// SetNextMarker sets the NextMarker field's value. 
-func (s *ListAttachedPoliciesOutput) SetNextMarker(v string) *ListAttachedPoliciesOutput { - s.NextMarker = &v +// SetNextToken sets the NextToken field's value. +func (s *ListThingGroupsOutput) SetNextToken(v string) *ListThingGroupsOutput { + s.NextToken = &v return s } -// SetPolicies sets the Policies field's value. -func (s *ListAttachedPoliciesOutput) SetPolicies(v []*Policy) *ListAttachedPoliciesOutput { - s.Policies = v +// SetThingGroups sets the ThingGroups field's value. +func (s *ListThingGroupsOutput) SetThingGroups(v []*GroupNameAndArn) *ListThingGroupsOutput { + s.ThingGroups = v return s } -type ListAuthorizersInput struct { +// The input for the ListThingPrincipal operation. +type ListThingPrincipalsInput struct { _ struct{} `type:"structure"` - // Return the list of authorizers in ascending alphabetical order. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` - - // A marker used to get the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` - - // The maximum number of results to return at one time. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` - - // The status of the list authorizers request. - Status *string `location:"querystring" locationName:"status" type:"string" enum:"AuthorizerStatus"` + // The name of the thing. + // + // ThingName is a required field + ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ListAuthorizersInput) String() string { +func (s ListThingPrincipalsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListAuthorizersInput) GoString() string { +func (s ListThingPrincipalsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListAuthorizersInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListAuthorizersInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) +func (s *ListThingPrincipalsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingPrincipalsInput"} + if s.ThingName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingName")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) } if invalidParams.Len() > 0 { @@ -19268,91 +29070,77 @@ func (s *ListAuthorizersInput) Validate() error { return nil } -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListAuthorizersInput) SetAscendingOrder(v bool) *ListAuthorizersInput { - s.AscendingOrder = &v - return s -} - -// SetMarker sets the Marker field's value. -func (s *ListAuthorizersInput) SetMarker(v string) *ListAuthorizersInput { - s.Marker = &v - return s -} - -// SetPageSize sets the PageSize field's value. -func (s *ListAuthorizersInput) SetPageSize(v int64) *ListAuthorizersInput { - s.PageSize = &v - return s -} - -// SetStatus sets the Status field's value. -func (s *ListAuthorizersInput) SetStatus(v string) *ListAuthorizersInput { - s.Status = &v +// SetThingName sets the ThingName field's value. +func (s *ListThingPrincipalsInput) SetThingName(v string) *ListThingPrincipalsInput { + s.ThingName = &v return s } -type ListAuthorizersOutput struct { +// The output from the ListThingPrincipals operation. 
+type ListThingPrincipalsOutput struct { _ struct{} `type:"structure"` - // The authorizers. - Authorizers []*AuthorizerSummary `locationName:"authorizers" type:"list"` - - // A marker used to get the next set of results. - NextMarker *string `locationName:"nextMarker" type:"string"` + // The principals associated with the thing. + Principals []*string `locationName:"principals" type:"list"` } // String returns the string representation -func (s ListAuthorizersOutput) String() string { +func (s ListThingPrincipalsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListAuthorizersOutput) GoString() string { +func (s ListThingPrincipalsOutput) GoString() string { return s.String() } -// SetAuthorizers sets the Authorizers field's value. -func (s *ListAuthorizersOutput) SetAuthorizers(v []*AuthorizerSummary) *ListAuthorizersOutput { - s.Authorizers = v - return s -} - -// SetNextMarker sets the NextMarker field's value. -func (s *ListAuthorizersOutput) SetNextMarker(v string) *ListAuthorizersOutput { - s.NextMarker = &v +// SetPrincipals sets the Principals field's value. +func (s *ListThingPrincipalsOutput) SetPrincipals(v []*string) *ListThingPrincipalsOutput { + s.Principals = v return s } -// Input for the ListCACertificates operation. -type ListCACertificatesInput struct { +type ListThingRegistrationTaskReportsInput struct { _ struct{} `type:"structure"` - // Determines the order of the results. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + // The maximum number of results to return per request. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The marker for the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The result page size. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The type of task report. + // + // ReportType is a required field + ReportType *string `location:"querystring" locationName:"reportType" type:"string" required:"true" enum:"ReportType"` + + // The id of the task. + // + // TaskId is a required field + TaskId *string `location:"uri" locationName:"taskId" type:"string" required:"true"` } // String returns the string representation -func (s ListCACertificatesInput) String() string { +func (s ListThingRegistrationTaskReportsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListCACertificatesInput) GoString() string { +func (s ListThingRegistrationTaskReportsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListCACertificatesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListCACertificatesInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) +func (s *ListThingRegistrationTaskReportsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingRegistrationTaskReportsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.ReportType == nil { + invalidParams.Add(request.NewErrParamRequired("ReportType")) + } + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) } if invalidParams.Len() > 0 { @@ -19361,99 +29149,100 @@ func (s *ListCACertificatesInput) Validate() error { return nil } -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListCACertificatesInput) SetAscendingOrder(v bool) *ListCACertificatesInput { - s.AscendingOrder = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListThingRegistrationTaskReportsInput) SetMaxResults(v int64) *ListThingRegistrationTaskReportsInput { + s.MaxResults = &v return s } -// SetMarker sets the Marker field's value. -func (s *ListCACertificatesInput) SetMarker(v string) *ListCACertificatesInput { - s.Marker = &v +// SetNextToken sets the NextToken field's value. +func (s *ListThingRegistrationTaskReportsInput) SetNextToken(v string) *ListThingRegistrationTaskReportsInput { + s.NextToken = &v return s } -// SetPageSize sets the PageSize field's value. -func (s *ListCACertificatesInput) SetPageSize(v int64) *ListCACertificatesInput { - s.PageSize = &v +// SetReportType sets the ReportType field's value. +func (s *ListThingRegistrationTaskReportsInput) SetReportType(v string) *ListThingRegistrationTaskReportsInput { + s.ReportType = &v return s } -// The output from the ListCACertificates operation. -type ListCACertificatesOutput struct { +// SetTaskId sets the TaskId field's value. +func (s *ListThingRegistrationTaskReportsInput) SetTaskId(v string) *ListThingRegistrationTaskReportsInput { + s.TaskId = &v + return s +} + +type ListThingRegistrationTaskReportsOutput struct { _ struct{} `type:"structure"` - // The CA certificates registered in your AWS account. - Certificates []*CACertificate `locationName:"certificates" type:"list"` + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` - // The current position within the list of CA certificates. - NextMarker *string `locationName:"nextMarker" type:"string"` + // The type of task report. + ReportType *string `locationName:"reportType" type:"string" enum:"ReportType"` + + // Links to the task resources. + ResourceLinks []*string `locationName:"resourceLinks" type:"list"` } // String returns the string representation -func (s ListCACertificatesOutput) String() string { +func (s ListThingRegistrationTaskReportsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListCACertificatesOutput) GoString() string { +func (s ListThingRegistrationTaskReportsOutput) GoString() string { return s.String() } -// SetCertificates sets the Certificates field's value. -func (s *ListCACertificatesOutput) SetCertificates(v []*CACertificate) *ListCACertificatesOutput { - s.Certificates = v +// SetNextToken sets the NextToken field's value. 
+func (s *ListThingRegistrationTaskReportsOutput) SetNextToken(v string) *ListThingRegistrationTaskReportsOutput { + s.NextToken = &v return s } -// SetNextMarker sets the NextMarker field's value. -func (s *ListCACertificatesOutput) SetNextMarker(v string) *ListCACertificatesOutput { - s.NextMarker = &v +// SetReportType sets the ReportType field's value. +func (s *ListThingRegistrationTaskReportsOutput) SetReportType(v string) *ListThingRegistrationTaskReportsOutput { + s.ReportType = &v return s } -// The input to the ListCertificatesByCA operation. -type ListCertificatesByCAInput struct { - _ struct{} `type:"structure"` +// SetResourceLinks sets the ResourceLinks field's value. +func (s *ListThingRegistrationTaskReportsOutput) SetResourceLinks(v []*string) *ListThingRegistrationTaskReportsOutput { + s.ResourceLinks = v + return s +} - // Specifies the order for results. If True, the results are returned in ascending - // order, based on the creation date. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` +type ListThingRegistrationTasksInput struct { + _ struct{} `type:"structure"` - // The ID of the CA certificate. This operation will list all registered device - // certificate that were signed by this CA certificate. - // - // CaCertificateId is a required field - CaCertificateId *string `location:"uri" locationName:"caCertificateId" min:"64" type:"string" required:"true"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The marker for the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The result page size. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The status of the bulk thing provisioning task. + Status *string `location:"querystring" locationName:"status" type:"string" enum:"Status"` } // String returns the string representation -func (s ListCertificatesByCAInput) String() string { +func (s ListThingRegistrationTasksInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListCertificatesByCAInput) GoString() string { +func (s ListThingRegistrationTasksInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListCertificatesByCAInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListCertificatesByCAInput"} - if s.CaCertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CaCertificateId")) - } - if s.CaCertificateId != nil && len(*s.CaCertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CaCertificateId", 64)) - } - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) +func (s *ListThingRegistrationTasksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingRegistrationTasksInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -19462,94 +29251,89 @@ func (s *ListCertificatesByCAInput) Validate() error { return nil } -// SetAscendingOrder sets the AscendingOrder field's value. 
-func (s *ListCertificatesByCAInput) SetAscendingOrder(v bool) *ListCertificatesByCAInput { - s.AscendingOrder = &v - return s -} - -// SetCaCertificateId sets the CaCertificateId field's value. -func (s *ListCertificatesByCAInput) SetCaCertificateId(v string) *ListCertificatesByCAInput { - s.CaCertificateId = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListThingRegistrationTasksInput) SetMaxResults(v int64) *ListThingRegistrationTasksInput { + s.MaxResults = &v return s } -// SetMarker sets the Marker field's value. -func (s *ListCertificatesByCAInput) SetMarker(v string) *ListCertificatesByCAInput { - s.Marker = &v +// SetNextToken sets the NextToken field's value. +func (s *ListThingRegistrationTasksInput) SetNextToken(v string) *ListThingRegistrationTasksInput { + s.NextToken = &v return s } -// SetPageSize sets the PageSize field's value. -func (s *ListCertificatesByCAInput) SetPageSize(v int64) *ListCertificatesByCAInput { - s.PageSize = &v +// SetStatus sets the Status field's value. +func (s *ListThingRegistrationTasksInput) SetStatus(v string) *ListThingRegistrationTasksInput { + s.Status = &v return s } -// The output of the ListCertificatesByCA operation. -type ListCertificatesByCAOutput struct { +type ListThingRegistrationTasksOutput struct { _ struct{} `type:"structure"` - // The device certificates signed by the specified CA certificate. - Certificates []*Certificate `locationName:"certificates" type:"list"` - - // The marker for the next set of results, or null if there are no additional + // The token used to get the next set of results, or null if there are no additional // results. - NextMarker *string `locationName:"nextMarker" type:"string"` + NextToken *string `locationName:"nextToken" type:"string"` + + // A list of bulk thing provisioning task IDs. + TaskIds []*string `locationName:"taskIds" type:"list"` } // String returns the string representation -func (s ListCertificatesByCAOutput) String() string { +func (s ListThingRegistrationTasksOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListCertificatesByCAOutput) GoString() string { +func (s ListThingRegistrationTasksOutput) GoString() string { return s.String() } -// SetCertificates sets the Certificates field's value. -func (s *ListCertificatesByCAOutput) SetCertificates(v []*Certificate) *ListCertificatesByCAOutput { - s.Certificates = v +// SetNextToken sets the NextToken field's value. +func (s *ListThingRegistrationTasksOutput) SetNextToken(v string) *ListThingRegistrationTasksOutput { + s.NextToken = &v return s } -// SetNextMarker sets the NextMarker field's value. -func (s *ListCertificatesByCAOutput) SetNextMarker(v string) *ListCertificatesByCAOutput { - s.NextMarker = &v +// SetTaskIds sets the TaskIds field's value. +func (s *ListThingRegistrationTasksOutput) SetTaskIds(v []*string) *ListThingRegistrationTasksOutput { + s.TaskIds = v return s } -// The input for the ListCertificates operation. -type ListCertificatesInput struct { +// The input for the ListThingTypes operation. +type ListThingTypesInput struct { _ struct{} `type:"structure"` - // Specifies the order for results. If True, the results are returned in ascending - // order, based on the creation date. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + // The maximum number of results to return in this operation. 
+ MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The marker for the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The result page size. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The name of the thing type. + ThingTypeName *string `location:"querystring" locationName:"thingTypeName" min:"1" type:"string"` } // String returns the string representation -func (s ListCertificatesInput) String() string { +func (s ListThingTypesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListCertificatesInput) GoString() string { +func (s ListThingTypesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListCertificatesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListCertificatesInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) +func (s *ListThingTypesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingTypesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) } if invalidParams.Len() > 0 { @@ -19558,82 +29342,92 @@ func (s *ListCertificatesInput) Validate() error { return nil } -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListCertificatesInput) SetAscendingOrder(v bool) *ListCertificatesInput { - s.AscendingOrder = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListThingTypesInput) SetMaxResults(v int64) *ListThingTypesInput { + s.MaxResults = &v return s } -// SetMarker sets the Marker field's value. -func (s *ListCertificatesInput) SetMarker(v string) *ListCertificatesInput { - s.Marker = &v +// SetNextToken sets the NextToken field's value. +func (s *ListThingTypesInput) SetNextToken(v string) *ListThingTypesInput { + s.NextToken = &v return s } -// SetPageSize sets the PageSize field's value. -func (s *ListCertificatesInput) SetPageSize(v int64) *ListCertificatesInput { - s.PageSize = &v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *ListThingTypesInput) SetThingTypeName(v string) *ListThingTypesInput { + s.ThingTypeName = &v return s } -// The output of the ListCertificates operation. -type ListCertificatesOutput struct { +// The output for the ListThingTypes operation. +type ListThingTypesOutput struct { _ struct{} `type:"structure"` - // The descriptions of the certificates. - Certificates []*Certificate `locationName:"certificates" type:"list"` - - // The marker for the next set of results, or null if there are no additional + // The token for the next set of results, or null if there are no additional // results. - NextMarker *string `locationName:"nextMarker" type:"string"` + NextToken *string `locationName:"nextToken" type:"string"` + + // The thing types. 
+ ThingTypes []*ThingTypeDefinition `locationName:"thingTypes" type:"list"` } // String returns the string representation -func (s ListCertificatesOutput) String() string { +func (s ListThingTypesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListCertificatesOutput) GoString() string { +func (s ListThingTypesOutput) GoString() string { return s.String() } -// SetCertificates sets the Certificates field's value. -func (s *ListCertificatesOutput) SetCertificates(v []*Certificate) *ListCertificatesOutput { - s.Certificates = v +// SetNextToken sets the NextToken field's value. +func (s *ListThingTypesOutput) SetNextToken(v string) *ListThingTypesOutput { + s.NextToken = &v return s } -// SetNextMarker sets the NextMarker field's value. -func (s *ListCertificatesOutput) SetNextMarker(v string) *ListCertificatesOutput { - s.NextMarker = &v +// SetThingTypes sets the ThingTypes field's value. +func (s *ListThingTypesOutput) SetThingTypes(v []*ThingTypeDefinition) *ListThingTypesOutput { + s.ThingTypes = v return s } -type ListIndicesInput struct { +type ListThingsInBillingGroupInput struct { _ struct{} `type:"structure"` - // The maximum number of results to return at one time. + // The name of the billing group. + // + // BillingGroupName is a required field + BillingGroupName *string `location:"uri" locationName:"billingGroupName" min:"1" type:"string" required:"true"` + + // The maximum number of results to return per request. MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The token used to get the next set of results, or null if there are no additional - // results. + // The token to retrieve the next set of results. NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s ListIndicesInput) String() string { +func (s ListThingsInBillingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListIndicesInput) GoString() string { +func (s ListThingsInBillingGroupInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListIndicesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListIndicesInput"} +func (s *ListThingsInBillingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingsInBillingGroupInput"} + if s.BillingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("BillingGroupName")) + } + if s.BillingGroupName != nil && len(*s.BillingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BillingGroupName", 1)) + } if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } @@ -19644,91 +29438,97 @@ func (s *ListIndicesInput) Validate() error { return nil } +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *ListThingsInBillingGroupInput) SetBillingGroupName(v string) *ListThingsInBillingGroupInput { + s.BillingGroupName = &v + return s +} + // SetMaxResults sets the MaxResults field's value. -func (s *ListIndicesInput) SetMaxResults(v int64) *ListIndicesInput { +func (s *ListThingsInBillingGroupInput) SetMaxResults(v int64) *ListThingsInBillingGroupInput { s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. 
-func (s *ListIndicesInput) SetNextToken(v string) *ListIndicesInput { +func (s *ListThingsInBillingGroupInput) SetNextToken(v string) *ListThingsInBillingGroupInput { s.NextToken = &v return s } -type ListIndicesOutput struct { +type ListThingsInBillingGroupOutput struct { _ struct{} `type:"structure"` - // The index names. - IndexNames []*string `locationName:"indexNames" type:"list"` - // The token used to get the next set of results, or null if there are no additional // results. NextToken *string `locationName:"nextToken" type:"string"` + + // A list of things in the billing group. + Things []*string `locationName:"things" type:"list"` } // String returns the string representation -func (s ListIndicesOutput) String() string { +func (s ListThingsInBillingGroupOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListIndicesOutput) GoString() string { +func (s ListThingsInBillingGroupOutput) GoString() string { return s.String() } -// SetIndexNames sets the IndexNames field's value. -func (s *ListIndicesOutput) SetIndexNames(v []*string) *ListIndicesOutput { - s.IndexNames = v +// SetNextToken sets the NextToken field's value. +func (s *ListThingsInBillingGroupOutput) SetNextToken(v string) *ListThingsInBillingGroupOutput { + s.NextToken = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListIndicesOutput) SetNextToken(v string) *ListIndicesOutput { - s.NextToken = &v +// SetThings sets the Things field's value. +func (s *ListThingsInBillingGroupOutput) SetThings(v []*string) *ListThingsInBillingGroupOutput { + s.Things = v return s } -type ListJobExecutionsForJobInput struct { +type ListThingsInThingGroupInput struct { _ struct{} `type:"structure"` - // The unique identifier you assigned to this job when it was created. - // - // JobId is a required field - JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` - - // The maximum number of results to be returned per request. + // The maximum number of results to return at one time. MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` // The token to retrieve the next set of results. NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The status of the job. - Status *string `location:"querystring" locationName:"status" type:"string" enum:"JobExecutionStatus"` + // When true, list things in this thing group and in all child groups as well. + Recursive *bool `location:"querystring" locationName:"recursive" type:"boolean"` + + // The thing group name. + // + // ThingGroupName is a required field + ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ListJobExecutionsForJobInput) String() string { +func (s ListThingsInThingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListJobExecutionsForJobInput) GoString() string { +func (s ListThingsInThingGroupInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListJobExecutionsForJobInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListJobExecutionsForJobInput"} - if s.JobId == nil { - invalidParams.Add(request.NewErrParamRequired("JobId")) - } - if s.JobId != nil && len(*s.JobId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) - } +func (s *ListThingsInThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingsInThingGroupInput"} if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } + if s.ThingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) + } + if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + } if invalidParams.Len() > 0 { return invalidParams @@ -19736,103 +29536,101 @@ func (s *ListJobExecutionsForJobInput) Validate() error { return nil } -// SetJobId sets the JobId field's value. -func (s *ListJobExecutionsForJobInput) SetJobId(v string) *ListJobExecutionsForJobInput { - s.JobId = &v - return s -} - // SetMaxResults sets the MaxResults field's value. -func (s *ListJobExecutionsForJobInput) SetMaxResults(v int64) *ListJobExecutionsForJobInput { +func (s *ListThingsInThingGroupInput) SetMaxResults(v int64) *ListThingsInThingGroupInput { s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. -func (s *ListJobExecutionsForJobInput) SetNextToken(v string) *ListJobExecutionsForJobInput { +func (s *ListThingsInThingGroupInput) SetNextToken(v string) *ListThingsInThingGroupInput { s.NextToken = &v return s } -// SetStatus sets the Status field's value. -func (s *ListJobExecutionsForJobInput) SetStatus(v string) *ListJobExecutionsForJobInput { - s.Status = &v +// SetRecursive sets the Recursive field's value. +func (s *ListThingsInThingGroupInput) SetRecursive(v bool) *ListThingsInThingGroupInput { + s.Recursive = &v return s } -type ListJobExecutionsForJobOutput struct { - _ struct{} `type:"structure"` +// SetThingGroupName sets the ThingGroupName field's value. +func (s *ListThingsInThingGroupInput) SetThingGroupName(v string) *ListThingsInThingGroupInput { + s.ThingGroupName = &v + return s +} - // A list of job execution summaries. - ExecutionSummaries []*JobExecutionSummaryForJob `locationName:"executionSummaries" type:"list"` +type ListThingsInThingGroupOutput struct { + _ struct{} `type:"structure"` - // The token for the next set of results, or null if there are no additional + // The token used to get the next set of results, or null if there are no additional // results. NextToken *string `locationName:"nextToken" type:"string"` + + // The things in the specified thing group. + Things []*string `locationName:"things" type:"list"` } // String returns the string representation -func (s ListJobExecutionsForJobOutput) String() string { +func (s ListThingsInThingGroupOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListJobExecutionsForJobOutput) GoString() string { +func (s ListThingsInThingGroupOutput) GoString() string { return s.String() } -// SetExecutionSummaries sets the ExecutionSummaries field's value. -func (s *ListJobExecutionsForJobOutput) SetExecutionSummaries(v []*JobExecutionSummaryForJob) *ListJobExecutionsForJobOutput { - s.ExecutionSummaries = v +// SetNextToken sets the NextToken field's value. 
+func (s *ListThingsInThingGroupOutput) SetNextToken(v string) *ListThingsInThingGroupOutput { + s.NextToken = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListJobExecutionsForJobOutput) SetNextToken(v string) *ListJobExecutionsForJobOutput { - s.NextToken = &v +// SetThings sets the Things field's value. +func (s *ListThingsInThingGroupOutput) SetThings(v []*string) *ListThingsInThingGroupOutput { + s.Things = v return s } -type ListJobExecutionsForThingInput struct { +// The input for the ListThings operation. +type ListThingsInput struct { _ struct{} `type:"structure"` - // The maximum number of results to be returned per request. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // The attribute name used to search for things. + AttributeName *string `location:"querystring" locationName:"attributeName" type:"string"` - // The token to retrieve the next set of results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // The attribute value used to search for things. + AttributeValue *string `location:"querystring" locationName:"attributeValue" type:"string"` - // An optional filter that lets you search for jobs that have the specified - // status. - Status *string `location:"querystring" locationName:"status" type:"string" enum:"JobExecutionStatus"` + // The maximum number of results to return in this operation. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The token to retrieve the next set of results. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The thing name. - // - // ThingName is a required field - ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + // The name of the thing type used to search for things. + ThingTypeName *string `location:"querystring" locationName:"thingTypeName" min:"1" type:"string"` } // String returns the string representation -func (s ListJobExecutionsForThingInput) String() string { +func (s ListThingsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListJobExecutionsForThingInput) GoString() string { +func (s ListThingsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListJobExecutionsForThingInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListJobExecutionsForThingInput"} +func (s *ListThingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListThingsInput"} if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.ThingName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingName")) - } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) } if invalidParams.Len() > 0 { @@ -19841,113 +29639,103 @@ func (s *ListJobExecutionsForThingInput) Validate() error { return nil } +// SetAttributeName sets the AttributeName field's value. +func (s *ListThingsInput) SetAttributeName(v string) *ListThingsInput { + s.AttributeName = &v + return s +} + +// SetAttributeValue sets the AttributeValue field's value. 
+func (s *ListThingsInput) SetAttributeValue(v string) *ListThingsInput { + s.AttributeValue = &v + return s +} + // SetMaxResults sets the MaxResults field's value. -func (s *ListJobExecutionsForThingInput) SetMaxResults(v int64) *ListJobExecutionsForThingInput { +func (s *ListThingsInput) SetMaxResults(v int64) *ListThingsInput { s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. -func (s *ListJobExecutionsForThingInput) SetNextToken(v string) *ListJobExecutionsForThingInput { +func (s *ListThingsInput) SetNextToken(v string) *ListThingsInput { s.NextToken = &v return s } -// SetStatus sets the Status field's value. -func (s *ListJobExecutionsForThingInput) SetStatus(v string) *ListJobExecutionsForThingInput { - s.Status = &v - return s -} - -// SetThingName sets the ThingName field's value. -func (s *ListJobExecutionsForThingInput) SetThingName(v string) *ListJobExecutionsForThingInput { - s.ThingName = &v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *ListThingsInput) SetThingTypeName(v string) *ListThingsInput { + s.ThingTypeName = &v return s } -type ListJobExecutionsForThingOutput struct { +// The output from the ListThings operation. +type ListThingsOutput struct { _ struct{} `type:"structure"` - // A list of job execution summaries. - ExecutionSummaries []*JobExecutionSummaryForThing `locationName:"executionSummaries" type:"list"` - - // The token for the next set of results, or null if there are no additional + // The token used to get the next set of results, or null if there are no additional // results. NextToken *string `locationName:"nextToken" type:"string"` + + // The things. + Things []*ThingAttribute `locationName:"things" type:"list"` } // String returns the string representation -func (s ListJobExecutionsForThingOutput) String() string { +func (s ListThingsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListJobExecutionsForThingOutput) GoString() string { +func (s ListThingsOutput) GoString() string { return s.String() } -// SetExecutionSummaries sets the ExecutionSummaries field's value. -func (s *ListJobExecutionsForThingOutput) SetExecutionSummaries(v []*JobExecutionSummaryForThing) *ListJobExecutionsForThingOutput { - s.ExecutionSummaries = v +// SetNextToken sets the NextToken field's value. +func (s *ListThingsOutput) SetNextToken(v string) *ListThingsOutput { + s.NextToken = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListJobExecutionsForThingOutput) SetNextToken(v string) *ListJobExecutionsForThingOutput { - s.NextToken = &v +// SetThings sets the Things field's value. +func (s *ListThingsOutput) SetThings(v []*ThingAttribute) *ListThingsOutput { + s.Things = v return s } -type ListJobsInput struct { +// The input for the ListTopicRules operation. +type ListTopicRulesInput struct { _ struct{} `type:"structure"` - // The maximum number of results to return per request. + // The maximum number of results to return. MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The token to retrieve the next set of results. + // A token used to retrieve the next value. NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // An optional filter that lets you search for jobs that have the specified - // status. 
- Status *string `location:"querystring" locationName:"status" type:"string" enum:"JobStatus"` - - // Specifies whether the job will continue to run (CONTINUOUS), or will be complete - // after all those things specified as targets have completed the job (SNAPSHOT). - // If continuous, the job may also be run on a thing when a change is detected - // in a target. For example, a job will run on a thing when the thing is added - // to a target group, even after the job was completed by all things originally - // in the group. - TargetSelection *string `location:"querystring" locationName:"targetSelection" type:"string" enum:"TargetSelection"` - - // A filter that limits the returned jobs to those for the specified group. - ThingGroupId *string `location:"querystring" locationName:"thingGroupId" min:"1" type:"string"` + // Specifies whether the rule is disabled. + RuleDisabled *bool `location:"querystring" locationName:"ruleDisabled" type:"boolean"` - // A filter that limits the returned jobs to those for the specified group. - ThingGroupName *string `location:"querystring" locationName:"thingGroupName" min:"1" type:"string"` + // The topic. + Topic *string `location:"querystring" locationName:"topic" type:"string"` } // String returns the string representation -func (s ListJobsInput) String() string { +func (s ListTopicRulesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListJobsInput) GoString() string { +func (s ListTopicRulesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListJobsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListJobsInput"} +func (s *ListTopicRulesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTopicRulesInput"} if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.ThingGroupId != nil && len(*s.ThingGroupId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingGroupId", 1)) - } - if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) - } if invalidParams.Len() > 0 { return invalidParams @@ -19956,100 +29744,89 @@ func (s *ListJobsInput) Validate() error { } // SetMaxResults sets the MaxResults field's value. -func (s *ListJobsInput) SetMaxResults(v int64) *ListJobsInput { +func (s *ListTopicRulesInput) SetMaxResults(v int64) *ListTopicRulesInput { s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. -func (s *ListJobsInput) SetNextToken(v string) *ListJobsInput { +func (s *ListTopicRulesInput) SetNextToken(v string) *ListTopicRulesInput { s.NextToken = &v return s } -// SetStatus sets the Status field's value. -func (s *ListJobsInput) SetStatus(v string) *ListJobsInput { - s.Status = &v - return s -} - -// SetTargetSelection sets the TargetSelection field's value. -func (s *ListJobsInput) SetTargetSelection(v string) *ListJobsInput { - s.TargetSelection = &v - return s -} - -// SetThingGroupId sets the ThingGroupId field's value. -func (s *ListJobsInput) SetThingGroupId(v string) *ListJobsInput { - s.ThingGroupId = &v +// SetRuleDisabled sets the RuleDisabled field's value. +func (s *ListTopicRulesInput) SetRuleDisabled(v bool) *ListTopicRulesInput { + s.RuleDisabled = &v return s } -// SetThingGroupName sets the ThingGroupName field's value. 
-func (s *ListJobsInput) SetThingGroupName(v string) *ListJobsInput { - s.ThingGroupName = &v +// SetTopic sets the Topic field's value. +func (s *ListTopicRulesInput) SetTopic(v string) *ListTopicRulesInput { + s.Topic = &v return s } -type ListJobsOutput struct { +// The output from the ListTopicRules operation. +type ListTopicRulesOutput struct { _ struct{} `type:"structure"` - // A list of jobs. - Jobs []*JobSummary `locationName:"jobs" type:"list"` - - // The token for the next set of results, or null if there are no additional - // results. + // A token used to retrieve the next value. NextToken *string `locationName:"nextToken" type:"string"` + + // The rules. + Rules []*TopicRuleListItem `locationName:"rules" type:"list"` } // String returns the string representation -func (s ListJobsOutput) String() string { +func (s ListTopicRulesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListJobsOutput) GoString() string { +func (s ListTopicRulesOutput) GoString() string { return s.String() } -// SetJobs sets the Jobs field's value. -func (s *ListJobsOutput) SetJobs(v []*JobSummary) *ListJobsOutput { - s.Jobs = v +// SetNextToken sets the NextToken field's value. +func (s *ListTopicRulesOutput) SetNextToken(v string) *ListTopicRulesOutput { + s.NextToken = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListJobsOutput) SetNextToken(v string) *ListJobsOutput { - s.NextToken = &v +// SetRules sets the Rules field's value. +func (s *ListTopicRulesOutput) SetRules(v []*TopicRuleListItem) *ListTopicRulesOutput { + s.Rules = v return s } -type ListOTAUpdatesInput struct { +type ListV2LoggingLevelsInput struct { _ struct{} `type:"structure"` // The maximum number of results to return at one time. MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // A token used to retreive the next set of results. + // The token used to get the next set of results, or null if there are no additional + // results. NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // The OTA update job status. - OtaUpdateStatus *string `location:"querystring" locationName:"otaUpdateStatus" type:"string" enum:"OTAUpdateStatus"` + // The type of resource for which you are configuring logging. Must be THING_Group. + TargetType *string `location:"querystring" locationName:"targetType" type:"string" enum:"LogTargetType"` } // String returns the string representation -func (s ListOTAUpdatesInput) String() string { +func (s ListV2LoggingLevelsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListOTAUpdatesInput) GoString() string { +func (s ListV2LoggingLevelsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListOTAUpdatesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListOTAUpdatesInput"} +func (s *ListV2LoggingLevelsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListV2LoggingLevelsInput"} if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } @@ -20061,85 +29838,110 @@ func (s *ListOTAUpdatesInput) Validate() error { } // SetMaxResults sets the MaxResults field's value. 
-func (s *ListOTAUpdatesInput) SetMaxResults(v int64) *ListOTAUpdatesInput { +func (s *ListV2LoggingLevelsInput) SetMaxResults(v int64) *ListV2LoggingLevelsInput { s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. -func (s *ListOTAUpdatesInput) SetNextToken(v string) *ListOTAUpdatesInput { +func (s *ListV2LoggingLevelsInput) SetNextToken(v string) *ListV2LoggingLevelsInput { s.NextToken = &v return s } -// SetOtaUpdateStatus sets the OtaUpdateStatus field's value. -func (s *ListOTAUpdatesInput) SetOtaUpdateStatus(v string) *ListOTAUpdatesInput { - s.OtaUpdateStatus = &v +// SetTargetType sets the TargetType field's value. +func (s *ListV2LoggingLevelsInput) SetTargetType(v string) *ListV2LoggingLevelsInput { + s.TargetType = &v return s } -type ListOTAUpdatesOutput struct { +type ListV2LoggingLevelsOutput struct { _ struct{} `type:"structure"` - // A token to use to get the next set of results. - NextToken *string `locationName:"nextToken" type:"string"` + // The logging configuration for a target. + LogTargetConfigurations []*LogTargetConfiguration `locationName:"logTargetConfigurations" type:"list"` - // A list of OTA update jobs. - OtaUpdates []*OTAUpdateSummary `locationName:"otaUpdates" type:"list"` + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s ListOTAUpdatesOutput) String() string { +func (s ListV2LoggingLevelsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListOTAUpdatesOutput) GoString() string { +func (s ListV2LoggingLevelsOutput) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListOTAUpdatesOutput) SetNextToken(v string) *ListOTAUpdatesOutput { - s.NextToken = &v +// SetLogTargetConfigurations sets the LogTargetConfigurations field's value. +func (s *ListV2LoggingLevelsOutput) SetLogTargetConfigurations(v []*LogTargetConfiguration) *ListV2LoggingLevelsOutput { + s.LogTargetConfigurations = v return s } -// SetOtaUpdates sets the OtaUpdates field's value. -func (s *ListOTAUpdatesOutput) SetOtaUpdates(v []*OTAUpdateSummary) *ListOTAUpdatesOutput { - s.OtaUpdates = v +// SetNextToken sets the NextToken field's value. +func (s *ListV2LoggingLevelsOutput) SetNextToken(v string) *ListV2LoggingLevelsOutput { + s.NextToken = &v return s } -// The input to the ListOutgoingCertificates operation. -type ListOutgoingCertificatesInput struct { +type ListViolationEventsInput struct { _ struct{} `type:"structure"` - // Specifies the order for results. If True, the results are returned in ascending - // order, based on the creation date. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + // The end time for the alerts to be listed. + // + // EndTime is a required field + EndTime *time.Time `location:"querystring" locationName:"endTime" type:"timestamp" required:"true"` - // The marker for the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` + // The maximum number of results to return at one time. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The result page size. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The token for the next set of results. 
+ NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + + // A filter to limit results to those alerts generated by the specified security + // profile. + SecurityProfileName *string `location:"querystring" locationName:"securityProfileName" min:"1" type:"string"` + + // The start time for the alerts to be listed. + // + // StartTime is a required field + StartTime *time.Time `location:"querystring" locationName:"startTime" type:"timestamp" required:"true"` + + // A filter to limit results to those alerts caused by the specified thing. + ThingName *string `location:"querystring" locationName:"thingName" min:"1" type:"string"` } // String returns the string representation -func (s ListOutgoingCertificatesInput) String() string { +func (s ListViolationEventsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListOutgoingCertificatesInput) GoString() string { +func (s ListViolationEventsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListOutgoingCertificatesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListOutgoingCertificatesInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) +func (s *ListViolationEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListViolationEventsInput"} + if s.EndTime == nil { + invalidParams.Add(request.NewErrParamRequired("EndTime")) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) + } + if s.StartTime == nil { + invalidParams.Add(request.NewErrParamRequired("StartTime")) + } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) } if invalidParams.Len() > 0 { @@ -20148,87 +29950,105 @@ func (s *ListOutgoingCertificatesInput) Validate() error { return nil } -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListOutgoingCertificatesInput) SetAscendingOrder(v bool) *ListOutgoingCertificatesInput { - s.AscendingOrder = &v +// SetEndTime sets the EndTime field's value. +func (s *ListViolationEventsInput) SetEndTime(v time.Time) *ListViolationEventsInput { + s.EndTime = &v return s } -// SetMarker sets the Marker field's value. -func (s *ListOutgoingCertificatesInput) SetMarker(v string) *ListOutgoingCertificatesInput { - s.Marker = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListViolationEventsInput) SetMaxResults(v int64) *ListViolationEventsInput { + s.MaxResults = &v return s } -// SetPageSize sets the PageSize field's value. -func (s *ListOutgoingCertificatesInput) SetPageSize(v int64) *ListOutgoingCertificatesInput { - s.PageSize = &v +// SetNextToken sets the NextToken field's value. +func (s *ListViolationEventsInput) SetNextToken(v string) *ListViolationEventsInput { + s.NextToken = &v return s } -// The output from the ListOutgoingCertificates operation. -type ListOutgoingCertificatesOutput struct { +// SetSecurityProfileName sets the SecurityProfileName field's value. 
+func (s *ListViolationEventsInput) SetSecurityProfileName(v string) *ListViolationEventsInput { + s.SecurityProfileName = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *ListViolationEventsInput) SetStartTime(v time.Time) *ListViolationEventsInput { + s.StartTime = &v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *ListViolationEventsInput) SetThingName(v string) *ListViolationEventsInput { + s.ThingName = &v + return s +} + +type ListViolationEventsOutput struct { _ struct{} `type:"structure"` - // The marker for the next set of results. - NextMarker *string `locationName:"nextMarker" type:"string"` + // A token that can be used to retrieve the next set of results, or null if + // there are no additional results. + NextToken *string `locationName:"nextToken" type:"string"` - // The certificates that are being transferred but not yet accepted. - OutgoingCertificates []*OutgoingCertificate `locationName:"outgoingCertificates" type:"list"` + // The security profile violation alerts issued for this account during the + // given time frame, potentially filtered by security profile, behavior violated, + // or thing (device) violating. + ViolationEvents []*ViolationEvent `locationName:"violationEvents" type:"list"` } // String returns the string representation -func (s ListOutgoingCertificatesOutput) String() string { +func (s ListViolationEventsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListOutgoingCertificatesOutput) GoString() string { +func (s ListViolationEventsOutput) GoString() string { return s.String() } -// SetNextMarker sets the NextMarker field's value. -func (s *ListOutgoingCertificatesOutput) SetNextMarker(v string) *ListOutgoingCertificatesOutput { - s.NextMarker = &v +// SetNextToken sets the NextToken field's value. +func (s *ListViolationEventsOutput) SetNextToken(v string) *ListViolationEventsOutput { + s.NextToken = &v return s } -// SetOutgoingCertificates sets the OutgoingCertificates field's value. -func (s *ListOutgoingCertificatesOutput) SetOutgoingCertificates(v []*OutgoingCertificate) *ListOutgoingCertificatesOutput { - s.OutgoingCertificates = v +// SetViolationEvents sets the ViolationEvents field's value. +func (s *ListViolationEventsOutput) SetViolationEvents(v []*ViolationEvent) *ListViolationEventsOutput { + s.ViolationEvents = v return s } -// The input for the ListPolicies operation. -type ListPoliciesInput struct { +// A log target. +type LogTarget struct { _ struct{} `type:"structure"` - // Specifies the order for results. If true, the results are returned in ascending - // creation order. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` - - // The marker for the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` + // The target name. + TargetName *string `locationName:"targetName" type:"string"` - // The result page size. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The target type. 
+ // + // TargetType is a required field + TargetType *string `locationName:"targetType" type:"string" required:"true" enum:"LogTargetType"` } // String returns the string representation -func (s ListPoliciesInput) String() string { +func (s LogTarget) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPoliciesInput) GoString() string { +func (s LogTarget) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListPoliciesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListPoliciesInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) +func (s *LogTarget) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LogTarget"} + if s.TargetType == nil { + invalidParams.Add(request.NewErrParamRequired("TargetType")) } if invalidParams.Len() > 0 { @@ -20237,99 +30057,79 @@ func (s *ListPoliciesInput) Validate() error { return nil } -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListPoliciesInput) SetAscendingOrder(v bool) *ListPoliciesInput { - s.AscendingOrder = &v - return s -} - -// SetMarker sets the Marker field's value. -func (s *ListPoliciesInput) SetMarker(v string) *ListPoliciesInput { - s.Marker = &v +// SetTargetName sets the TargetName field's value. +func (s *LogTarget) SetTargetName(v string) *LogTarget { + s.TargetName = &v return s } -// SetPageSize sets the PageSize field's value. -func (s *ListPoliciesInput) SetPageSize(v int64) *ListPoliciesInput { - s.PageSize = &v +// SetTargetType sets the TargetType field's value. +func (s *LogTarget) SetTargetType(v string) *LogTarget { + s.TargetType = &v return s } -// The output from the ListPolicies operation. -type ListPoliciesOutput struct { +// The target configuration. +type LogTargetConfiguration struct { _ struct{} `type:"structure"` - // The marker for the next set of results, or null if there are no additional - // results. - NextMarker *string `locationName:"nextMarker" type:"string"` + // The logging level. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` - // The descriptions of the policies. - Policies []*Policy `locationName:"policies" type:"list"` + // A log target + LogTarget *LogTarget `locationName:"logTarget" type:"structure"` } // String returns the string representation -func (s ListPoliciesOutput) String() string { +func (s LogTargetConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPoliciesOutput) GoString() string { +func (s LogTargetConfiguration) GoString() string { return s.String() } -// SetNextMarker sets the NextMarker field's value. -func (s *ListPoliciesOutput) SetNextMarker(v string) *ListPoliciesOutput { - s.NextMarker = &v +// SetLogLevel sets the LogLevel field's value. +func (s *LogTargetConfiguration) SetLogLevel(v string) *LogTargetConfiguration { + s.LogLevel = &v return s } -// SetPolicies sets the Policies field's value. -func (s *ListPoliciesOutput) SetPolicies(v []*Policy) *ListPoliciesOutput { - s.Policies = v +// SetLogTarget sets the LogTarget field's value. +func (s *LogTargetConfiguration) SetLogTarget(v *LogTarget) *LogTargetConfiguration { + s.LogTarget = v return s } -// The input for the ListPolicyPrincipals operation. -type ListPolicyPrincipalsInput struct { +// Describes the logging options payload. 
+type LoggingOptionsPayload struct { _ struct{} `type:"structure"` - // Specifies the order for results. If true, the results are returned in ascending - // creation order. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` - - // The marker for the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` - - // The result page size. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The log level. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` - // The policy name. + // The ARN of the IAM role that grants access. // - // PolicyName is a required field - PolicyName *string `location:"header" locationName:"x-amzn-iot-policy" min:"1" type:"string" required:"true"` + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` } // String returns the string representation -func (s ListPolicyPrincipalsInput) String() string { +func (s LoggingOptionsPayload) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPolicyPrincipalsInput) GoString() string { +func (s LoggingOptionsPayload) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListPolicyPrincipalsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListPolicyPrincipalsInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) - } - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) +func (s *LoggingOptionsPayload) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LoggingOptionsPayload"} + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) } if invalidParams.Len() > 0 { @@ -20338,168 +30138,147 @@ func (s *ListPolicyPrincipalsInput) Validate() error { return nil } -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListPolicyPrincipalsInput) SetAscendingOrder(v bool) *ListPolicyPrincipalsInput { - s.AscendingOrder = &v - return s -} - -// SetMarker sets the Marker field's value. -func (s *ListPolicyPrincipalsInput) SetMarker(v string) *ListPolicyPrincipalsInput { - s.Marker = &v - return s -} - -// SetPageSize sets the PageSize field's value. -func (s *ListPolicyPrincipalsInput) SetPageSize(v int64) *ListPolicyPrincipalsInput { - s.PageSize = &v +// SetLogLevel sets the LogLevel field's value. +func (s *LoggingOptionsPayload) SetLogLevel(v string) *LoggingOptionsPayload { + s.LogLevel = &v return s } -// SetPolicyName sets the PolicyName field's value. -func (s *ListPolicyPrincipalsInput) SetPolicyName(v string) *ListPolicyPrincipalsInput { - s.PolicyName = &v +// SetRoleArn sets the RoleArn field's value. +func (s *LoggingOptionsPayload) SetRoleArn(v string) *LoggingOptionsPayload { + s.RoleArn = &v return s } -// The output from the ListPolicyPrincipals operation. -type ListPolicyPrincipalsOutput struct { +// The value to be compared with the metric. +type MetricValue struct { _ struct{} `type:"structure"` - // The marker for the next set of results, or null if there are no additional - // results. 
- NextMarker *string `locationName:"nextMarker" type:"string"` + // If the comparisonOperator calls for a set of CIDRs, use this to specify that + // set to be compared with the metric. + Cidrs []*string `locationName:"cidrs" type:"list"` - // The descriptions of the principals. - Principals []*string `locationName:"principals" type:"list"` + // If the comparisonOperator calls for a numeric value, use this to specify + // that numeric value to be compared with the metric. + Count *int64 `locationName:"count" type:"long"` + + // If the comparisonOperator calls for a set of ports, use this to specify that + // set to be compared with the metric. + Ports []*int64 `locationName:"ports" type:"list"` } // String returns the string representation -func (s ListPolicyPrincipalsOutput) String() string { +func (s MetricValue) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPolicyPrincipalsOutput) GoString() string { +func (s MetricValue) GoString() string { return s.String() } -// SetNextMarker sets the NextMarker field's value. -func (s *ListPolicyPrincipalsOutput) SetNextMarker(v string) *ListPolicyPrincipalsOutput { - s.NextMarker = &v +// SetCidrs sets the Cidrs field's value. +func (s *MetricValue) SetCidrs(v []*string) *MetricValue { + s.Cidrs = v return s } -// SetPrincipals sets the Principals field's value. -func (s *ListPolicyPrincipalsOutput) SetPrincipals(v []*string) *ListPolicyPrincipalsOutput { - s.Principals = v +// SetCount sets the Count field's value. +func (s *MetricValue) SetCount(v int64) *MetricValue { + s.Count = &v return s } -// The input for the ListPolicyVersions operation. -type ListPolicyVersionsInput struct { +// SetPorts sets the Ports field's value. +func (s *MetricValue) SetPorts(v []*int64) *MetricValue { + s.Ports = v + return s +} + +// Information about the resource that was non-compliant with the audit check. +type NonCompliantResource struct { _ struct{} `type:"structure"` - // The policy name. - // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // Additional information about the non-compliant resource. + AdditionalInfo map[string]*string `locationName:"additionalInfo" type:"map"` + + // Information identifying the non-compliant resource. + ResourceIdentifier *ResourceIdentifier `locationName:"resourceIdentifier" type:"structure"` + + // The type of the non-compliant resource. + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` } // String returns the string representation -func (s ListPolicyVersionsInput) String() string { +func (s NonCompliantResource) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPolicyVersionsInput) GoString() string { +func (s NonCompliantResource) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ListPolicyVersionsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListPolicyVersionsInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetPolicyName sets the PolicyName field's value. 
-func (s *ListPolicyVersionsInput) SetPolicyName(v string) *ListPolicyVersionsInput { - s.PolicyName = &v +// SetAdditionalInfo sets the AdditionalInfo field's value. +func (s *NonCompliantResource) SetAdditionalInfo(v map[string]*string) *NonCompliantResource { + s.AdditionalInfo = v return s } -// The output from the ListPolicyVersions operation. -type ListPolicyVersionsOutput struct { - _ struct{} `type:"structure"` - - // The policy versions. - PolicyVersions []*PolicyVersion `locationName:"policyVersions" type:"list"` -} - -// String returns the string representation -func (s ListPolicyVersionsOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s ListPolicyVersionsOutput) GoString() string { - return s.String() +// SetResourceIdentifier sets the ResourceIdentifier field's value. +func (s *NonCompliantResource) SetResourceIdentifier(v *ResourceIdentifier) *NonCompliantResource { + s.ResourceIdentifier = v + return s } -// SetPolicyVersions sets the PolicyVersions field's value. -func (s *ListPolicyVersionsOutput) SetPolicyVersions(v []*PolicyVersion) *ListPolicyVersionsOutput { - s.PolicyVersions = v +// SetResourceType sets the ResourceType field's value. +func (s *NonCompliantResource) SetResourceType(v string) *NonCompliantResource { + s.ResourceType = &v return s } - -// The input for the ListPrincipalPolicies operation. -type ListPrincipalPoliciesInput struct { + +// Describes a file to be associated with an OTA update. +type OTAUpdateFile struct { _ struct{} `type:"structure"` - // Specifies the order for results. If true, results are returned in ascending - // creation order. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + // A list of name/attribute pairs. + Attributes map[string]*string `locationName:"attributes" type:"map"` - // The marker for the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` + // The code signing method of the file. + CodeSigning *CodeSigning `locationName:"codeSigning" type:"structure"` - // The result page size. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The location of the updated firmware. + FileLocation *FileLocation `locationName:"fileLocation" type:"structure"` - // The principal. - // - // Principal is a required field - Principal *string `location:"header" locationName:"x-amzn-iot-principal" type:"string" required:"true"` + // The name of the file. + FileName *string `locationName:"fileName" type:"string"` + + // The file version. + FileVersion *string `locationName:"fileVersion" type:"string"` } // String returns the string representation -func (s ListPrincipalPoliciesInput) String() string { +func (s OTAUpdateFile) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPrincipalPoliciesInput) GoString() string { +func (s OTAUpdateFile) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListPrincipalPoliciesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListPrincipalPoliciesInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) +func (s *OTAUpdateFile) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OTAUpdateFile"} + if s.CodeSigning != nil { + if err := s.CodeSigning.Validate(); err != nil { + invalidParams.AddNested("CodeSigning", err.(request.ErrInvalidParams)) + } } - if s.Principal == nil { - invalidParams.Add(request.NewErrParamRequired("Principal")) + if s.FileLocation != nil { + if err := s.FileLocation.Validate(); err != nil { + invalidParams.AddNested("FileLocation", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -20508,365 +30287,392 @@ func (s *ListPrincipalPoliciesInput) Validate() error { return nil } -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListPrincipalPoliciesInput) SetAscendingOrder(v bool) *ListPrincipalPoliciesInput { - s.AscendingOrder = &v +// SetAttributes sets the Attributes field's value. +func (s *OTAUpdateFile) SetAttributes(v map[string]*string) *OTAUpdateFile { + s.Attributes = v return s } -// SetMarker sets the Marker field's value. -func (s *ListPrincipalPoliciesInput) SetMarker(v string) *ListPrincipalPoliciesInput { - s.Marker = &v +// SetCodeSigning sets the CodeSigning field's value. +func (s *OTAUpdateFile) SetCodeSigning(v *CodeSigning) *OTAUpdateFile { + s.CodeSigning = v return s } -// SetPageSize sets the PageSize field's value. -func (s *ListPrincipalPoliciesInput) SetPageSize(v int64) *ListPrincipalPoliciesInput { - s.PageSize = &v +// SetFileLocation sets the FileLocation field's value. +func (s *OTAUpdateFile) SetFileLocation(v *FileLocation) *OTAUpdateFile { + s.FileLocation = v return s } -// SetPrincipal sets the Principal field's value. -func (s *ListPrincipalPoliciesInput) SetPrincipal(v string) *ListPrincipalPoliciesInput { - s.Principal = &v +// SetFileName sets the FileName field's value. +func (s *OTAUpdateFile) SetFileName(v string) *OTAUpdateFile { + s.FileName = &v return s } -// The output from the ListPrincipalPolicies operation. -type ListPrincipalPoliciesOutput struct { +// SetFileVersion sets the FileVersion field's value. +func (s *OTAUpdateFile) SetFileVersion(v string) *OTAUpdateFile { + s.FileVersion = &v + return s +} + +// Information about an OTA update. +type OTAUpdateInfo struct { _ struct{} `type:"structure"` - // The marker for the next set of results, or null if there are no additional - // results. - NextMarker *string `locationName:"nextMarker" type:"string"` + // A collection of name/value pairs + AdditionalParameters map[string]*string `locationName:"additionalParameters" type:"map"` - // The policies. - Policies []*Policy `locationName:"policies" type:"list"` + // The AWS IoT job ARN associated with the OTA update. + AwsIotJobArn *string `locationName:"awsIotJobArn" type:"string"` + + // The AWS IoT job ID associated with the OTA update. + AwsIotJobId *string `locationName:"awsIotJobId" type:"string"` + + // Configuration for the rollout of OTA updates. + AwsJobExecutionsRolloutConfig *AwsJobExecutionsRolloutConfig `locationName:"awsJobExecutionsRolloutConfig" type:"structure"` + + // The date when the OTA update was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // A description of the OTA update. 
+ Description *string `locationName:"description" type:"string"` + + // Error information associated with the OTA update. + ErrorInfo *ErrorInfo `locationName:"errorInfo" type:"structure"` + + // The date when the OTA update was last updated. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The OTA update ARN. + OtaUpdateArn *string `locationName:"otaUpdateArn" type:"string"` + + // A list of files associated with the OTA update. + OtaUpdateFiles []*OTAUpdateFile `locationName:"otaUpdateFiles" min:"1" type:"list"` + + // The OTA update ID. + OtaUpdateId *string `locationName:"otaUpdateId" min:"1" type:"string"` + + // The status of the OTA update. + OtaUpdateStatus *string `locationName:"otaUpdateStatus" type:"string" enum:"OTAUpdateStatus"` + + // Specifies whether the OTA update will continue to run (CONTINUOUS), or will + // be complete after all those things specified as targets have completed the + // OTA update (SNAPSHOT). If continuous, the OTA update may also be run on a + // thing when a change is detected in a target. For example, an OTA update will + // run on a thing when the thing is added to a target group, even after the + // OTA update was completed by all things originally in the group. + TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + + // The targets of the OTA update. + Targets []*string `locationName:"targets" min:"1" type:"list"` } // String returns the string representation -func (s ListPrincipalPoliciesOutput) String() string { +func (s OTAUpdateInfo) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPrincipalPoliciesOutput) GoString() string { +func (s OTAUpdateInfo) GoString() string { return s.String() } -// SetNextMarker sets the NextMarker field's value. -func (s *ListPrincipalPoliciesOutput) SetNextMarker(v string) *ListPrincipalPoliciesOutput { - s.NextMarker = &v +// SetAdditionalParameters sets the AdditionalParameters field's value. +func (s *OTAUpdateInfo) SetAdditionalParameters(v map[string]*string) *OTAUpdateInfo { + s.AdditionalParameters = v return s } -// SetPolicies sets the Policies field's value. -func (s *ListPrincipalPoliciesOutput) SetPolicies(v []*Policy) *ListPrincipalPoliciesOutput { - s.Policies = v +// SetAwsIotJobArn sets the AwsIotJobArn field's value. +func (s *OTAUpdateInfo) SetAwsIotJobArn(v string) *OTAUpdateInfo { + s.AwsIotJobArn = &v return s } -// The input for the ListPrincipalThings operation. -type ListPrincipalThingsInput struct { - _ struct{} `type:"structure"` +// SetAwsIotJobId sets the AwsIotJobId field's value. +func (s *OTAUpdateInfo) SetAwsIotJobId(v string) *OTAUpdateInfo { + s.AwsIotJobId = &v + return s +} - // The maximum number of results to return in this operation. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` +// SetAwsJobExecutionsRolloutConfig sets the AwsJobExecutionsRolloutConfig field's value. +func (s *OTAUpdateInfo) SetAwsJobExecutionsRolloutConfig(v *AwsJobExecutionsRolloutConfig) *OTAUpdateInfo { + s.AwsJobExecutionsRolloutConfig = v + return s +} - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +// SetCreationDate sets the CreationDate field's value. +func (s *OTAUpdateInfo) SetCreationDate(v time.Time) *OTAUpdateInfo { + s.CreationDate = &v + return s +} - // The principal. 
- // - // Principal is a required field - Principal *string `location:"header" locationName:"x-amzn-principal" type:"string" required:"true"` +// SetDescription sets the Description field's value. +func (s *OTAUpdateInfo) SetDescription(v string) *OTAUpdateInfo { + s.Description = &v + return s } -// String returns the string representation -func (s ListPrincipalThingsInput) String() string { - return awsutil.Prettify(s) +// SetErrorInfo sets the ErrorInfo field's value. +func (s *OTAUpdateInfo) SetErrorInfo(v *ErrorInfo) *OTAUpdateInfo { + s.ErrorInfo = v + return s } -// GoString returns the string representation -func (s ListPrincipalThingsInput) GoString() string { - return s.String() +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *OTAUpdateInfo) SetLastModifiedDate(v time.Time) *OTAUpdateInfo { + s.LastModifiedDate = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ListPrincipalThingsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListPrincipalThingsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } - if s.Principal == nil { - invalidParams.Add(request.NewErrParamRequired("Principal")) - } +// SetOtaUpdateArn sets the OtaUpdateArn field's value. +func (s *OTAUpdateInfo) SetOtaUpdateArn(v string) *OTAUpdateInfo { + s.OtaUpdateArn = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetOtaUpdateFiles sets the OtaUpdateFiles field's value. +func (s *OTAUpdateInfo) SetOtaUpdateFiles(v []*OTAUpdateFile) *OTAUpdateInfo { + s.OtaUpdateFiles = v + return s } -// SetMaxResults sets the MaxResults field's value. -func (s *ListPrincipalThingsInput) SetMaxResults(v int64) *ListPrincipalThingsInput { - s.MaxResults = &v +// SetOtaUpdateId sets the OtaUpdateId field's value. +func (s *OTAUpdateInfo) SetOtaUpdateId(v string) *OTAUpdateInfo { + s.OtaUpdateId = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListPrincipalThingsInput) SetNextToken(v string) *ListPrincipalThingsInput { - s.NextToken = &v +// SetOtaUpdateStatus sets the OtaUpdateStatus field's value. +func (s *OTAUpdateInfo) SetOtaUpdateStatus(v string) *OTAUpdateInfo { + s.OtaUpdateStatus = &v return s } -// SetPrincipal sets the Principal field's value. -func (s *ListPrincipalThingsInput) SetPrincipal(v string) *ListPrincipalThingsInput { - s.Principal = &v +// SetTargetSelection sets the TargetSelection field's value. +func (s *OTAUpdateInfo) SetTargetSelection(v string) *OTAUpdateInfo { + s.TargetSelection = &v return s } -// The output from the ListPrincipalThings operation. -type ListPrincipalThingsOutput struct { +// SetTargets sets the Targets field's value. +func (s *OTAUpdateInfo) SetTargets(v []*string) *OTAUpdateInfo { + s.Targets = v + return s +} + +// An OTA update summary. +type OTAUpdateSummary struct { _ struct{} `type:"structure"` - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `locationName:"nextToken" type:"string"` + // The date when the OTA update was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` - // The things. - Things []*string `locationName:"things" type:"list"` + // The OTA update ARN. + OtaUpdateArn *string `locationName:"otaUpdateArn" type:"string"` + + // The OTA update ID. 
+ OtaUpdateId *string `locationName:"otaUpdateId" min:"1" type:"string"` } // String returns the string representation -func (s ListPrincipalThingsOutput) String() string { +func (s OTAUpdateSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListPrincipalThingsOutput) GoString() string { +func (s OTAUpdateSummary) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListPrincipalThingsOutput) SetNextToken(v string) *ListPrincipalThingsOutput { - s.NextToken = &v +// SetCreationDate sets the CreationDate field's value. +func (s *OTAUpdateSummary) SetCreationDate(v time.Time) *OTAUpdateSummary { + s.CreationDate = &v return s } -// SetThings sets the Things field's value. -func (s *ListPrincipalThingsOutput) SetThings(v []*string) *ListPrincipalThingsOutput { - s.Things = v +// SetOtaUpdateArn sets the OtaUpdateArn field's value. +func (s *OTAUpdateSummary) SetOtaUpdateArn(v string) *OTAUpdateSummary { + s.OtaUpdateArn = &v + return s +} + +// SetOtaUpdateId sets the OtaUpdateId field's value. +func (s *OTAUpdateSummary) SetOtaUpdateId(v string) *OTAUpdateSummary { + s.OtaUpdateId = &v return s } -type ListRoleAliasesInput struct { +// A certificate that has been transferred but not yet accepted. +type OutgoingCertificate struct { _ struct{} `type:"structure"` - // Return the list of role aliases in ascending alphabetical order. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` + // The certificate ARN. + CertificateArn *string `locationName:"certificateArn" type:"string"` - // A marker used to get the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` + // The certificate ID. + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` - // The maximum number of results to return at one time. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The certificate creation date. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The date the transfer was initiated. + TransferDate *time.Time `locationName:"transferDate" type:"timestamp"` + + // The transfer message. + TransferMessage *string `locationName:"transferMessage" type:"string"` + + // The AWS account to which the transfer was made. + TransferredTo *string `locationName:"transferredTo" min:"12" type:"string"` } // String returns the string representation -func (s ListRoleAliasesInput) String() string { +func (s OutgoingCertificate) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListRoleAliasesInput) GoString() string { +func (s OutgoingCertificate) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ListRoleAliasesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListRoleAliasesInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListRoleAliasesInput) SetAscendingOrder(v bool) *ListRoleAliasesInput { - s.AscendingOrder = &v +// SetCertificateArn sets the CertificateArn field's value. 
+func (s *OutgoingCertificate) SetCertificateArn(v string) *OutgoingCertificate { + s.CertificateArn = &v return s } -// SetMarker sets the Marker field's value. -func (s *ListRoleAliasesInput) SetMarker(v string) *ListRoleAliasesInput { - s.Marker = &v +// SetCertificateId sets the CertificateId field's value. +func (s *OutgoingCertificate) SetCertificateId(v string) *OutgoingCertificate { + s.CertificateId = &v return s } -// SetPageSize sets the PageSize field's value. -func (s *ListRoleAliasesInput) SetPageSize(v int64) *ListRoleAliasesInput { - s.PageSize = &v +// SetCreationDate sets the CreationDate field's value. +func (s *OutgoingCertificate) SetCreationDate(v time.Time) *OutgoingCertificate { + s.CreationDate = &v return s } -type ListRoleAliasesOutput struct { - _ struct{} `type:"structure"` - - // A marker used to get the next set of results. - NextMarker *string `locationName:"nextMarker" type:"string"` - - // The role aliases. - RoleAliases []*string `locationName:"roleAliases" type:"list"` -} - -// String returns the string representation -func (s ListRoleAliasesOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s ListRoleAliasesOutput) GoString() string { - return s.String() +// SetTransferDate sets the TransferDate field's value. +func (s *OutgoingCertificate) SetTransferDate(v time.Time) *OutgoingCertificate { + s.TransferDate = &v + return s } -// SetNextMarker sets the NextMarker field's value. -func (s *ListRoleAliasesOutput) SetNextMarker(v string) *ListRoleAliasesOutput { - s.NextMarker = &v +// SetTransferMessage sets the TransferMessage field's value. +func (s *OutgoingCertificate) SetTransferMessage(v string) *OutgoingCertificate { + s.TransferMessage = &v return s } -// SetRoleAliases sets the RoleAliases field's value. -func (s *ListRoleAliasesOutput) SetRoleAliases(v []*string) *ListRoleAliasesOutput { - s.RoleAliases = v +// SetTransferredTo sets the TransferredTo field's value. +func (s *OutgoingCertificate) SetTransferredTo(v string) *OutgoingCertificate { + s.TransferredTo = &v return s } -type ListStreamsInput struct { +// Describes an AWS IoT policy. +type Policy struct { _ struct{} `type:"structure"` - // Set to true to return the list of streams in ascending order. - AscendingOrder *bool `location:"querystring" locationName:"isAscendingOrder" type:"boolean"` - - // The maximum number of results to return at a time. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // The policy ARN. + PolicyArn *string `locationName:"policyArn" type:"string"` - // A token used to get the next set of results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // The policy name. + PolicyName *string `locationName:"policyName" min:"1" type:"string"` } // String returns the string representation -func (s ListStreamsInput) String() string { +func (s Policy) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListStreamsInput) GoString() string { +func (s Policy) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListStreamsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListStreamsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetAscendingOrder sets the AscendingOrder field's value. -func (s *ListStreamsInput) SetAscendingOrder(v bool) *ListStreamsInput { - s.AscendingOrder = &v - return s -} - -// SetMaxResults sets the MaxResults field's value. -func (s *ListStreamsInput) SetMaxResults(v int64) *ListStreamsInput { - s.MaxResults = &v +// SetPolicyArn sets the PolicyArn field's value. +func (s *Policy) SetPolicyArn(v string) *Policy { + s.PolicyArn = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListStreamsInput) SetNextToken(v string) *ListStreamsInput { - s.NextToken = &v +// SetPolicyName sets the PolicyName field's value. +func (s *Policy) SetPolicyName(v string) *Policy { + s.PolicyName = &v return s } -type ListStreamsOutput struct { +// Describes a policy version. +type PolicyVersion struct { _ struct{} `type:"structure"` - // A token used to get the next set of results. - NextToken *string `locationName:"nextToken" type:"string"` + // The date and time the policy was created. + CreateDate *time.Time `locationName:"createDate" type:"timestamp"` - // A list of streams. - Streams []*StreamSummary `locationName:"streams" type:"list"` + // Specifies whether the policy version is the default. + IsDefaultVersion *bool `locationName:"isDefaultVersion" type:"boolean"` + + // The policy version ID. + VersionId *string `locationName:"versionId" type:"string"` } // String returns the string representation -func (s ListStreamsOutput) String() string { +func (s PolicyVersion) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListStreamsOutput) GoString() string { +func (s PolicyVersion) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListStreamsOutput) SetNextToken(v string) *ListStreamsOutput { - s.NextToken = &v +// SetCreateDate sets the CreateDate field's value. +func (s *PolicyVersion) SetCreateDate(v time.Time) *PolicyVersion { + s.CreateDate = &v return s } -// SetStreams sets the Streams field's value. -func (s *ListStreamsOutput) SetStreams(v []*StreamSummary) *ListStreamsOutput { - s.Streams = v +// SetIsDefaultVersion sets the IsDefaultVersion field's value. +func (s *PolicyVersion) SetIsDefaultVersion(v bool) *PolicyVersion { + s.IsDefaultVersion = &v return s } -type ListTargetsForPolicyInput struct { - _ struct{} `type:"structure"` +// SetVersionId sets the VersionId field's value. +func (s *PolicyVersion) SetVersionId(v string) *PolicyVersion { + s.VersionId = &v + return s +} - // A marker used to get the next set of results. - Marker *string `location:"querystring" locationName:"marker" type:"string"` +// Information about the version of the policy associated with the resource. +type PolicyVersionIdentifier struct { + _ struct{} `type:"structure"` - // The maximum number of results to return at one time. - PageSize *int64 `location:"querystring" locationName:"pageSize" min:"1" type:"integer"` + // The name of the policy. + PolicyName *string `locationName:"policyName" min:"1" type:"string"` - // The policy name. 
- // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // The ID of the version of the policy associated with the resource. + PolicyVersionId *string `locationName:"policyVersionId" type:"string"` } // String returns the string representation -func (s ListTargetsForPolicyInput) String() string { +func (s PolicyVersionIdentifier) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTargetsForPolicyInput) GoString() string { +func (s PolicyVersionIdentifier) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListTargetsForPolicyInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListTargetsForPolicyInput"} - if s.PageSize != nil && *s.PageSize < 1 { - invalidParams.Add(request.NewErrParamMinValue("PageSize", 1)) - } - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } +func (s *PolicyVersionIdentifier) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PolicyVersionIdentifier"} if s.PolicyName != nil && len(*s.PolicyName) < 1 { invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) } @@ -20877,93 +30683,97 @@ func (s *ListTargetsForPolicyInput) Validate() error { return nil } -// SetMarker sets the Marker field's value. -func (s *ListTargetsForPolicyInput) SetMarker(v string) *ListTargetsForPolicyInput { - s.Marker = &v - return s -} - -// SetPageSize sets the PageSize field's value. -func (s *ListTargetsForPolicyInput) SetPageSize(v int64) *ListTargetsForPolicyInput { - s.PageSize = &v +// SetPolicyName sets the PolicyName field's value. +func (s *PolicyVersionIdentifier) SetPolicyName(v string) *PolicyVersionIdentifier { + s.PolicyName = &v return s } -// SetPolicyName sets the PolicyName field's value. -func (s *ListTargetsForPolicyInput) SetPolicyName(v string) *ListTargetsForPolicyInput { - s.PolicyName = &v +// SetPolicyVersionId sets the PolicyVersionId field's value. +func (s *PolicyVersionIdentifier) SetPolicyVersionId(v string) *PolicyVersionIdentifier { + s.PolicyVersionId = &v return s } -type ListTargetsForPolicyOutput struct { +// Configuration for pre-signed S3 URLs. +type PresignedUrlConfig struct { _ struct{} `type:"structure"` - // A marker used to get the next set of results. - NextMarker *string `locationName:"nextMarker" type:"string"` + // How long (in seconds) pre-signed URLs are valid. Valid values are 60 - 3600, + // the default value is 3600 seconds. Pre-signed URLs are generated when Jobs + // receives an MQTT request for the job document. + ExpiresInSec *int64 `locationName:"expiresInSec" min:"60" type:"long"` - // The policy targets. - Targets []*string `locationName:"targets" type:"list"` + // The ARN of an IAM role that grants grants permission to download files from + // the S3 bucket where the job data/updates are stored. The role must also grant + // permission for IoT to download the files. 
+ RoleArn *string `locationName:"roleArn" min:"20" type:"string"` } // String returns the string representation -func (s ListTargetsForPolicyOutput) String() string { +func (s PresignedUrlConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTargetsForPolicyOutput) GoString() string { +func (s PresignedUrlConfig) GoString() string { return s.String() } -// SetNextMarker sets the NextMarker field's value. -func (s *ListTargetsForPolicyOutput) SetNextMarker(v string) *ListTargetsForPolicyOutput { - s.NextMarker = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *PresignedUrlConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PresignedUrlConfig"} + if s.ExpiresInSec != nil && *s.ExpiresInSec < 60 { + invalidParams.Add(request.NewErrParamMinValue("ExpiresInSec", 60)) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExpiresInSec sets the ExpiresInSec field's value. +func (s *PresignedUrlConfig) SetExpiresInSec(v int64) *PresignedUrlConfig { + s.ExpiresInSec = &v return s } -// SetTargets sets the Targets field's value. -func (s *ListTargetsForPolicyOutput) SetTargets(v []*string) *ListTargetsForPolicyOutput { - s.Targets = v +// SetRoleArn sets the RoleArn field's value. +func (s *PresignedUrlConfig) SetRoleArn(v string) *PresignedUrlConfig { + s.RoleArn = &v return s } -type ListThingGroupsForThingInput struct { +// The input for the DynamoActionVS action that specifies the DynamoDB table +// to which the message data will be written. +type PutItemInput struct { _ struct{} `type:"structure"` - // The maximum number of results to return at one time. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - - // The thing name. + // The table where the message data will be written // - // ThingName is a required field - ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + // TableName is a required field + TableName *string `locationName:"tableName" type:"string" required:"true"` } // String returns the string representation -func (s ListThingGroupsForThingInput) String() string { +func (s PutItemInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingGroupsForThingInput) GoString() string { +func (s PutItemInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListThingGroupsForThingInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListThingGroupsForThingInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } - if s.ThingName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingName")) - } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) +func (s *PutItemInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutItemInput"} + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) } if invalidParams.Len() > 0 { @@ -20972,98 +30782,117 @@ func (s *ListThingGroupsForThingInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListThingGroupsForThingInput) SetMaxResults(v int64) *ListThingGroupsForThingInput { - s.MaxResults = &v - return s -} - -// SetNextToken sets the NextToken field's value. -func (s *ListThingGroupsForThingInput) SetNextToken(v string) *ListThingGroupsForThingInput { - s.NextToken = &v - return s -} - -// SetThingName sets the ThingName field's value. -func (s *ListThingGroupsForThingInput) SetThingName(v string) *ListThingGroupsForThingInput { - s.ThingName = &v +// SetTableName sets the TableName field's value. +func (s *PutItemInput) SetTableName(v string) *PutItemInput { + s.TableName = &v return s } -type ListThingGroupsForThingOutput struct { +// Allows you to define a criteria to initiate the increase in rate of rollout +// for a job. +type RateIncreaseCriteria struct { _ struct{} `type:"structure"` - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `locationName:"nextToken" type:"string"` + // The threshold for number of notified things that will initiate the increase + // in rate of rollout. + NumberOfNotifiedThings *int64 `locationName:"numberOfNotifiedThings" min:"1" type:"integer"` - // The thing groups. - ThingGroups []*GroupNameAndArn `locationName:"thingGroups" type:"list"` + // The threshold for number of succeeded things that will initiate the increase + // in rate of rollout. + NumberOfSucceededThings *int64 `locationName:"numberOfSucceededThings" min:"1" type:"integer"` } // String returns the string representation -func (s ListThingGroupsForThingOutput) String() string { +func (s RateIncreaseCriteria) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingGroupsForThingOutput) GoString() string { +func (s RateIncreaseCriteria) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListThingGroupsForThingOutput) SetNextToken(v string) *ListThingGroupsForThingOutput { - s.NextToken = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *RateIncreaseCriteria) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RateIncreaseCriteria"} + if s.NumberOfNotifiedThings != nil && *s.NumberOfNotifiedThings < 1 { + invalidParams.Add(request.NewErrParamMinValue("NumberOfNotifiedThings", 1)) + } + if s.NumberOfSucceededThings != nil && *s.NumberOfSucceededThings < 1 { + invalidParams.Add(request.NewErrParamMinValue("NumberOfSucceededThings", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNumberOfNotifiedThings sets the NumberOfNotifiedThings field's value. 
+func (s *RateIncreaseCriteria) SetNumberOfNotifiedThings(v int64) *RateIncreaseCriteria { + s.NumberOfNotifiedThings = &v return s } -// SetThingGroups sets the ThingGroups field's value. -func (s *ListThingGroupsForThingOutput) SetThingGroups(v []*GroupNameAndArn) *ListThingGroupsForThingOutput { - s.ThingGroups = v +// SetNumberOfSucceededThings sets the NumberOfSucceededThings field's value. +func (s *RateIncreaseCriteria) SetNumberOfSucceededThings(v int64) *RateIncreaseCriteria { + s.NumberOfSucceededThings = &v return s } -type ListThingGroupsInput struct { +// The input to the RegisterCACertificate operation. +type RegisterCACertificateInput struct { _ struct{} `type:"structure"` - // The maximum number of results to return at one time. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // Allows this CA certificate to be used for auto registration of device certificates. + AllowAutoRegistration *bool `location:"querystring" locationName:"allowAutoRegistration" type:"boolean"` - // A filter that limits the results to those with the specified name prefix. - NamePrefixFilter *string `location:"querystring" locationName:"namePrefixFilter" min:"1" type:"string"` + // The CA certificate. + // + // CaCertificate is a required field + CaCertificate *string `locationName:"caCertificate" min:"1" type:"string" required:"true"` - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // Information about the registration configuration. + RegistrationConfig *RegistrationConfig `locationName:"registrationConfig" type:"structure"` - // A filter that limits the results to those with the specified parent group. - ParentGroup *string `location:"querystring" locationName:"parentGroup" min:"1" type:"string"` + // A boolean value that specifies if the CA certificate is set to active. + SetAsActive *bool `location:"querystring" locationName:"setAsActive" type:"boolean"` - // If true, return child groups as well. - Recursive *bool `location:"querystring" locationName:"recursive" type:"boolean"` + // The private key verification certificate. + // + // VerificationCertificate is a required field + VerificationCertificate *string `locationName:"verificationCertificate" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ListThingGroupsInput) String() string { +func (s RegisterCACertificateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingGroupsInput) GoString() string { +func (s RegisterCACertificateInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListThingGroupsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListThingGroupsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *RegisterCACertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterCACertificateInput"} + if s.CaCertificate == nil { + invalidParams.Add(request.NewErrParamRequired("CaCertificate")) } - if s.NamePrefixFilter != nil && len(*s.NamePrefixFilter) < 1 { - invalidParams.Add(request.NewErrParamMinLen("NamePrefixFilter", 1)) + if s.CaCertificate != nil && len(*s.CaCertificate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CaCertificate", 1)) } - if s.ParentGroup != nil && len(*s.ParentGroup) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ParentGroup", 1)) + if s.VerificationCertificate == nil { + invalidParams.Add(request.NewErrParamRequired("VerificationCertificate")) + } + if s.VerificationCertificate != nil && len(*s.VerificationCertificate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("VerificationCertificate", 1)) + } + if s.RegistrationConfig != nil { + if err := s.RegistrationConfig.Validate(); err != nil { + invalidParams.AddNested("RegistrationConfig", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -21072,97 +30901,111 @@ func (s *ListThingGroupsInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListThingGroupsInput) SetMaxResults(v int64) *ListThingGroupsInput { - s.MaxResults = &v +// SetAllowAutoRegistration sets the AllowAutoRegistration field's value. +func (s *RegisterCACertificateInput) SetAllowAutoRegistration(v bool) *RegisterCACertificateInput { + s.AllowAutoRegistration = &v return s } -// SetNamePrefixFilter sets the NamePrefixFilter field's value. -func (s *ListThingGroupsInput) SetNamePrefixFilter(v string) *ListThingGroupsInput { - s.NamePrefixFilter = &v +// SetCaCertificate sets the CaCertificate field's value. +func (s *RegisterCACertificateInput) SetCaCertificate(v string) *RegisterCACertificateInput { + s.CaCertificate = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListThingGroupsInput) SetNextToken(v string) *ListThingGroupsInput { - s.NextToken = &v +// SetRegistrationConfig sets the RegistrationConfig field's value. +func (s *RegisterCACertificateInput) SetRegistrationConfig(v *RegistrationConfig) *RegisterCACertificateInput { + s.RegistrationConfig = v return s } -// SetParentGroup sets the ParentGroup field's value. -func (s *ListThingGroupsInput) SetParentGroup(v string) *ListThingGroupsInput { - s.ParentGroup = &v +// SetSetAsActive sets the SetAsActive field's value. +func (s *RegisterCACertificateInput) SetSetAsActive(v bool) *RegisterCACertificateInput { + s.SetAsActive = &v return s } -// SetRecursive sets the Recursive field's value. -func (s *ListThingGroupsInput) SetRecursive(v bool) *ListThingGroupsInput { - s.Recursive = &v +// SetVerificationCertificate sets the VerificationCertificate field's value. +func (s *RegisterCACertificateInput) SetVerificationCertificate(v string) *RegisterCACertificateInput { + s.VerificationCertificate = &v return s } -type ListThingGroupsOutput struct { +// The output from the RegisterCACertificateResponse operation. +type RegisterCACertificateOutput struct { _ struct{} `type:"structure"` - // The token used to get the next set of results, or null if there are no additional - // results. 
- NextToken *string `locationName:"nextToken" type:"string"` + // The CA certificate ARN. + CertificateArn *string `locationName:"certificateArn" type:"string"` - // The thing groups. - ThingGroups []*GroupNameAndArn `locationName:"thingGroups" type:"list"` + // The CA certificate identifier. + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` } // String returns the string representation -func (s ListThingGroupsOutput) String() string { +func (s RegisterCACertificateOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingGroupsOutput) GoString() string { +func (s RegisterCACertificateOutput) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListThingGroupsOutput) SetNextToken(v string) *ListThingGroupsOutput { - s.NextToken = &v +// SetCertificateArn sets the CertificateArn field's value. +func (s *RegisterCACertificateOutput) SetCertificateArn(v string) *RegisterCACertificateOutput { + s.CertificateArn = &v return s } -// SetThingGroups sets the ThingGroups field's value. -func (s *ListThingGroupsOutput) SetThingGroups(v []*GroupNameAndArn) *ListThingGroupsOutput { - s.ThingGroups = v +// SetCertificateId sets the CertificateId field's value. +func (s *RegisterCACertificateOutput) SetCertificateId(v string) *RegisterCACertificateOutput { + s.CertificateId = &v return s } -// The input for the ListThingPrincipal operation. -type ListThingPrincipalsInput struct { +// The input to the RegisterCertificate operation. +type RegisterCertificateInput struct { _ struct{} `type:"structure"` - // The name of the thing. + // The CA certificate used to sign the device certificate being registered. + CaCertificatePem *string `locationName:"caCertificatePem" min:"1" type:"string"` + + // The certificate data, in PEM format. // - // ThingName is a required field - ThingName *string `location:"uri" locationName:"thingName" min:"1" type:"string" required:"true"` + // CertificatePem is a required field + CertificatePem *string `locationName:"certificatePem" min:"1" type:"string" required:"true"` + + // A boolean value that specifies if the CA certificate is set to active. + // + // Deprecated: SetAsActive has been deprecated + SetAsActive *bool `location:"querystring" locationName:"setAsActive" deprecated:"true" type:"boolean"` + + // The status of the register certificate request. + Status *string `locationName:"status" type:"string" enum:"CertificateStatus"` } // String returns the string representation -func (s ListThingPrincipalsInput) String() string { +func (s RegisterCertificateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingPrincipalsInput) GoString() string { +func (s RegisterCertificateInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListThingPrincipalsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListThingPrincipalsInput"} - if s.ThingName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingName")) +func (s *RegisterCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterCertificateInput"} + if s.CaCertificatePem != nil && len(*s.CaCertificatePem) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CaCertificatePem", 1)) } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + if s.CertificatePem == nil { + invalidParams.Add(request.NewErrParamRequired("CertificatePem")) + } + if s.CertificatePem != nil && len(*s.CertificatePem) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CertificatePem", 1)) } if invalidParams.Len() > 0 { @@ -21171,77 +31014,92 @@ func (s *ListThingPrincipalsInput) Validate() error { return nil } -// SetThingName sets the ThingName field's value. -func (s *ListThingPrincipalsInput) SetThingName(v string) *ListThingPrincipalsInput { - s.ThingName = &v +// SetCaCertificatePem sets the CaCertificatePem field's value. +func (s *RegisterCertificateInput) SetCaCertificatePem(v string) *RegisterCertificateInput { + s.CaCertificatePem = &v + return s +} + +// SetCertificatePem sets the CertificatePem field's value. +func (s *RegisterCertificateInput) SetCertificatePem(v string) *RegisterCertificateInput { + s.CertificatePem = &v + return s +} + +// SetSetAsActive sets the SetAsActive field's value. +func (s *RegisterCertificateInput) SetSetAsActive(v bool) *RegisterCertificateInput { + s.SetAsActive = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *RegisterCertificateInput) SetStatus(v string) *RegisterCertificateInput { + s.Status = &v return s } -// The output from the ListThingPrincipals operation. -type ListThingPrincipalsOutput struct { +// The output from the RegisterCertificate operation. +type RegisterCertificateOutput struct { _ struct{} `type:"structure"` - // The principals associated with the thing. - Principals []*string `locationName:"principals" type:"list"` + // The certificate ARN. + CertificateArn *string `locationName:"certificateArn" type:"string"` + + // The certificate identifier. + CertificateId *string `locationName:"certificateId" min:"64" type:"string"` } // String returns the string representation -func (s ListThingPrincipalsOutput) String() string { +func (s RegisterCertificateOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingPrincipalsOutput) GoString() string { +func (s RegisterCertificateOutput) GoString() string { return s.String() } -// SetPrincipals sets the Principals field's value. -func (s *ListThingPrincipalsOutput) SetPrincipals(v []*string) *ListThingPrincipalsOutput { - s.Principals = v +// SetCertificateArn sets the CertificateArn field's value. +func (s *RegisterCertificateOutput) SetCertificateArn(v string) *RegisterCertificateOutput { + s.CertificateArn = &v return s } -type ListThingRegistrationTaskReportsInput struct { - _ struct{} `type:"structure"` - - // The maximum number of results to return per request. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` +// SetCertificateId sets the CertificateId field's value. 
+func (s *RegisterCertificateOutput) SetCertificateId(v string) *RegisterCertificateOutput { + s.CertificateId = &v + return s +} - // The token to retrieve the next set of results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +type RegisterThingInput struct { + _ struct{} `type:"structure"` - // The type of task report. - // - // ReportType is a required field - ReportType *string `location:"querystring" locationName:"reportType" type:"string" required:"true" enum:"ReportType"` + // The parameters for provisioning a thing. See Programmatic Provisioning (http://docs.aws.amazon.com/iot/latest/developerguide/programmatic-provisioning.html) + // for more information. + Parameters map[string]*string `locationName:"parameters" type:"map"` - // The id of the task. + // The provisioning template. See Programmatic Provisioning (http://docs.aws.amazon.com/iot/latest/developerguide/programmatic-provisioning.html) + // for more information. // - // TaskId is a required field - TaskId *string `location:"uri" locationName:"taskId" type:"string" required:"true"` + // TemplateBody is a required field + TemplateBody *string `locationName:"templateBody" type:"string" required:"true"` } // String returns the string representation -func (s ListThingRegistrationTaskReportsInput) String() string { +func (s RegisterThingInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingRegistrationTaskReportsInput) GoString() string { +func (s RegisterThingInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListThingRegistrationTaskReportsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListThingRegistrationTaskReportsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } - if s.ReportType == nil { - invalidParams.Add(request.NewErrParamRequired("ReportType")) - } - if s.TaskId == nil { - invalidParams.Add(request.NewErrParamRequired("TaskId")) +func (s *RegisterThingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterThingInput"} + if s.TemplateBody == nil { + invalidParams.Add(request.NewErrParamRequired("TemplateBody")) } if invalidParams.Len() > 0 { @@ -21250,100 +31108,128 @@ func (s *ListThingRegistrationTaskReportsInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListThingRegistrationTaskReportsInput) SetMaxResults(v int64) *ListThingRegistrationTaskReportsInput { - s.MaxResults = &v +// SetParameters sets the Parameters field's value. +func (s *RegisterThingInput) SetParameters(v map[string]*string) *RegisterThingInput { + s.Parameters = v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListThingRegistrationTaskReportsInput) SetNextToken(v string) *ListThingRegistrationTaskReportsInput { - s.NextToken = &v +// SetTemplateBody sets the TemplateBody field's value. +func (s *RegisterThingInput) SetTemplateBody(v string) *RegisterThingInput { + s.TemplateBody = &v return s } -// SetReportType sets the ReportType field's value. -func (s *ListThingRegistrationTaskReportsInput) SetReportType(v string) *ListThingRegistrationTaskReportsInput { - s.ReportType = &v +type RegisterThingOutput struct { + _ struct{} `type:"structure"` + + // . 
+ CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` + + // ARNs for the generated resources. + ResourceArns map[string]*string `locationName:"resourceArns" type:"map"` +} + +// String returns the string representation +func (s RegisterThingOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterThingOutput) GoString() string { + return s.String() +} + +// SetCertificatePem sets the CertificatePem field's value. +func (s *RegisterThingOutput) SetCertificatePem(v string) *RegisterThingOutput { + s.CertificatePem = &v return s } -// SetTaskId sets the TaskId field's value. -func (s *ListThingRegistrationTaskReportsInput) SetTaskId(v string) *ListThingRegistrationTaskReportsInput { - s.TaskId = &v +// SetResourceArns sets the ResourceArns field's value. +func (s *RegisterThingOutput) SetResourceArns(v map[string]*string) *RegisterThingOutput { + s.ResourceArns = v return s } -type ListThingRegistrationTaskReportsOutput struct { +// The registration configuration. +type RegistrationConfig struct { _ struct{} `type:"structure"` - // The token to retrieve the next set of results. - NextToken *string `locationName:"nextToken" type:"string"` - - // The type of task report. - ReportType *string `locationName:"reportType" type:"string" enum:"ReportType"` + // The ARN of the role. + RoleArn *string `locationName:"roleArn" min:"20" type:"string"` - // Links to the task resources. - ResourceLinks []*string `locationName:"resourceLinks" type:"list"` + // The template body. + TemplateBody *string `locationName:"templateBody" type:"string"` } // String returns the string representation -func (s ListThingRegistrationTaskReportsOutput) String() string { +func (s RegistrationConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingRegistrationTaskReportsOutput) GoString() string { +func (s RegistrationConfig) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListThingRegistrationTaskReportsOutput) SetNextToken(v string) *ListThingRegistrationTaskReportsOutput { - s.NextToken = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *RegistrationConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegistrationConfig"} + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetReportType sets the ReportType field's value. -func (s *ListThingRegistrationTaskReportsOutput) SetReportType(v string) *ListThingRegistrationTaskReportsOutput { - s.ReportType = &v +// SetRoleArn sets the RoleArn field's value. +func (s *RegistrationConfig) SetRoleArn(v string) *RegistrationConfig { + s.RoleArn = &v return s } -// SetResourceLinks sets the ResourceLinks field's value. -func (s *ListThingRegistrationTaskReportsOutput) SetResourceLinks(v []*string) *ListThingRegistrationTaskReportsOutput { - s.ResourceLinks = v +// SetTemplateBody sets the TemplateBody field's value. +func (s *RegistrationConfig) SetTemplateBody(v string) *RegistrationConfig { + s.TemplateBody = &v return s } -type ListThingRegistrationTasksInput struct { +// The input for the RejectCertificateTransfer operation. +type RejectCertificateTransferInput struct { _ struct{} `type:"structure"` - // The maximum number of results to return at one time. 
- MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // The ID of the certificate. (The last part of the certificate ARN contains + // the certificate ID.) + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` - // The status of the bulk thing provisioning task. - Status *string `location:"querystring" locationName:"status" type:"string" enum:"Status"` + // The reason the certificate transfer was rejected. + RejectReason *string `locationName:"rejectReason" type:"string"` } // String returns the string representation -func (s ListThingRegistrationTasksInput) String() string { +func (s RejectCertificateTransferInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingRegistrationTasksInput) GoString() string { +func (s RejectCertificateTransferInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListThingRegistrationTasksInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListThingRegistrationTasksInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *RejectCertificateTransferInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RejectCertificateTransferInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) } if invalidParams.Len() > 0 { @@ -21352,90 +31238,108 @@ func (s *ListThingRegistrationTasksInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListThingRegistrationTasksInput) SetMaxResults(v int64) *ListThingRegistrationTasksInput { - s.MaxResults = &v +// SetCertificateId sets the CertificateId field's value. +func (s *RejectCertificateTransferInput) SetCertificateId(v string) *RejectCertificateTransferInput { + s.CertificateId = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListThingRegistrationTasksInput) SetNextToken(v string) *ListThingRegistrationTasksInput { - s.NextToken = &v +// SetRejectReason sets the RejectReason field's value. +func (s *RejectCertificateTransferInput) SetRejectReason(v string) *RejectCertificateTransferInput { + s.RejectReason = &v return s } -// SetStatus sets the Status field's value. -func (s *ListThingRegistrationTasksInput) SetStatus(v string) *ListThingRegistrationTasksInput { - s.Status = &v - return s +type RejectCertificateTransferOutput struct { + _ struct{} `type:"structure"` } -type ListThingRegistrationTasksOutput struct { +// String returns the string representation +func (s RejectCertificateTransferOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RejectCertificateTransferOutput) GoString() string { + return s.String() +} + +// Information about a related resource. +type RelatedResource struct { _ struct{} `type:"structure"` - // The token used to get the next set of results, or null if there are no additional - // results. 
- NextToken *string `locationName:"nextToken" type:"string"` + // Additional information about the resource. + AdditionalInfo map[string]*string `locationName:"additionalInfo" type:"map"` - // A list of bulk thing provisioning task IDs. - TaskIds []*string `locationName:"taskIds" type:"list"` + // Information identifying the resource. + ResourceIdentifier *ResourceIdentifier `locationName:"resourceIdentifier" type:"structure"` + + // The type of resource. + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` } // String returns the string representation -func (s ListThingRegistrationTasksOutput) String() string { +func (s RelatedResource) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingRegistrationTasksOutput) GoString() string { +func (s RelatedResource) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListThingRegistrationTasksOutput) SetNextToken(v string) *ListThingRegistrationTasksOutput { - s.NextToken = &v +// SetAdditionalInfo sets the AdditionalInfo field's value. +func (s *RelatedResource) SetAdditionalInfo(v map[string]*string) *RelatedResource { + s.AdditionalInfo = v return s } -// SetTaskIds sets the TaskIds field's value. -func (s *ListThingRegistrationTasksOutput) SetTaskIds(v []*string) *ListThingRegistrationTasksOutput { - s.TaskIds = v +// SetResourceIdentifier sets the ResourceIdentifier field's value. +func (s *RelatedResource) SetResourceIdentifier(v *ResourceIdentifier) *RelatedResource { + s.ResourceIdentifier = v return s } -// The input for the ListThingTypes operation. -type ListThingTypesInput struct { +// SetResourceType sets the ResourceType field's value. +func (s *RelatedResource) SetResourceType(v string) *RelatedResource { + s.ResourceType = &v + return s +} + +type RemoveThingFromBillingGroupInput struct { _ struct{} `type:"structure"` - // The maximum number of results to return in this operation. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // The ARN of the billing group. + BillingGroupArn *string `locationName:"billingGroupArn" type:"string"` - // The token for the next set of results, or null if there are no additional - // results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // The name of the billing group. + BillingGroupName *string `locationName:"billingGroupName" min:"1" type:"string"` - // The name of the thing type. - ThingTypeName *string `location:"querystring" locationName:"thingTypeName" min:"1" type:"string"` + // The ARN of the thing to be removed from the billing group. + ThingArn *string `locationName:"thingArn" type:"string"` + + // The name of the thing to be removed from the billing group. + ThingName *string `locationName:"thingName" min:"1" type:"string"` } // String returns the string representation -func (s ListThingTypesInput) String() string { +func (s RemoveThingFromBillingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingTypesInput) GoString() string { +func (s RemoveThingFromBillingGroupInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListThingTypesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListThingTypesInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *RemoveThingFromBillingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveThingFromBillingGroupInput"} + if s.BillingGroupName != nil && len(*s.BillingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BillingGroupName", 1)) } - if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) } if invalidParams.Len() > 0 { @@ -21444,99 +31348,79 @@ func (s *ListThingTypesInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListThingTypesInput) SetMaxResults(v int64) *ListThingTypesInput { - s.MaxResults = &v +// SetBillingGroupArn sets the BillingGroupArn field's value. +func (s *RemoveThingFromBillingGroupInput) SetBillingGroupArn(v string) *RemoveThingFromBillingGroupInput { + s.BillingGroupArn = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListThingTypesInput) SetNextToken(v string) *ListThingTypesInput { - s.NextToken = &v +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *RemoveThingFromBillingGroupInput) SetBillingGroupName(v string) *RemoveThingFromBillingGroupInput { + s.BillingGroupName = &v return s } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *ListThingTypesInput) SetThingTypeName(v string) *ListThingTypesInput { - s.ThingTypeName = &v +// SetThingArn sets the ThingArn field's value. +func (s *RemoveThingFromBillingGroupInput) SetThingArn(v string) *RemoveThingFromBillingGroupInput { + s.ThingArn = &v return s } -// The output for the ListThingTypes operation. -type ListThingTypesOutput struct { - _ struct{} `type:"structure"` - - // The token for the next set of results, or null if there are no additional - // results. - NextToken *string `locationName:"nextToken" type:"string"` +// SetThingName sets the ThingName field's value. +func (s *RemoveThingFromBillingGroupInput) SetThingName(v string) *RemoveThingFromBillingGroupInput { + s.ThingName = &v + return s +} - // The thing types. - ThingTypes []*ThingTypeDefinition `locationName:"thingTypes" type:"list"` +type RemoveThingFromBillingGroupOutput struct { + _ struct{} `type:"structure"` } // String returns the string representation -func (s ListThingTypesOutput) String() string { +func (s RemoveThingFromBillingGroupOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingTypesOutput) GoString() string { +func (s RemoveThingFromBillingGroupOutput) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListThingTypesOutput) SetNextToken(v string) *ListThingTypesOutput { - s.NextToken = &v - return s -} - -// SetThingTypes sets the ThingTypes field's value. -func (s *ListThingTypesOutput) SetThingTypes(v []*ThingTypeDefinition) *ListThingTypesOutput { - s.ThingTypes = v - return s -} - -type ListThingsInThingGroupInput struct { +type RemoveThingFromThingGroupInput struct { _ struct{} `type:"structure"` - // The maximum number of results to return at one time. 
- MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // The ARN of the thing to remove from the group. + ThingArn *string `locationName:"thingArn" type:"string"` - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // The group ARN. + ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` - // When true, list things in this thing group and in all child groups as well. - Recursive *bool `location:"querystring" locationName:"recursive" type:"boolean"` + // The group name. + ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` - // The thing group name. - // - // ThingGroupName is a required field - ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` + // The name of the thing to remove from the group. + ThingName *string `locationName:"thingName" min:"1" type:"string"` } // String returns the string representation -func (s ListThingsInThingGroupInput) String() string { +func (s RemoveThingFromThingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingsInThingGroupInput) GoString() string { +func (s RemoveThingFromThingGroupInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListThingsInThingGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListThingsInThingGroupInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } - if s.ThingGroupName == nil { - invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) - } +func (s *RemoveThingFromThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveThingFromThingGroupInput"} if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) } + if s.ThingName != nil && len(*s.ThingName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) + } if invalidParams.Len() > 0 { return invalidParams @@ -21544,102 +31428,85 @@ func (s *ListThingsInThingGroupInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListThingsInThingGroupInput) SetMaxResults(v int64) *ListThingsInThingGroupInput { - s.MaxResults = &v +// SetThingArn sets the ThingArn field's value. +func (s *RemoveThingFromThingGroupInput) SetThingArn(v string) *RemoveThingFromThingGroupInput { + s.ThingArn = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListThingsInThingGroupInput) SetNextToken(v string) *ListThingsInThingGroupInput { - s.NextToken = &v +// SetThingGroupArn sets the ThingGroupArn field's value. +func (s *RemoveThingFromThingGroupInput) SetThingGroupArn(v string) *RemoveThingFromThingGroupInput { + s.ThingGroupArn = &v return s } -// SetRecursive sets the Recursive field's value. -func (s *ListThingsInThingGroupInput) SetRecursive(v bool) *ListThingsInThingGroupInput { - s.Recursive = &v +// SetThingGroupName sets the ThingGroupName field's value. +func (s *RemoveThingFromThingGroupInput) SetThingGroupName(v string) *RemoveThingFromThingGroupInput { + s.ThingGroupName = &v return s } -// SetThingGroupName sets the ThingGroupName field's value. 
-func (s *ListThingsInThingGroupInput) SetThingGroupName(v string) *ListThingsInThingGroupInput { - s.ThingGroupName = &v +// SetThingName sets the ThingName field's value. +func (s *RemoveThingFromThingGroupInput) SetThingName(v string) *RemoveThingFromThingGroupInput { + s.ThingName = &v return s } -type ListThingsInThingGroupOutput struct { +type RemoveThingFromThingGroupOutput struct { _ struct{} `type:"structure"` - - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `locationName:"nextToken" type:"string"` - - // The things in the specified thing group. - Things []*string `locationName:"things" type:"list"` } // String returns the string representation -func (s ListThingsInThingGroupOutput) String() string { +func (s RemoveThingFromThingGroupOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingsInThingGroupOutput) GoString() string { +func (s RemoveThingFromThingGroupOutput) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListThingsInThingGroupOutput) SetNextToken(v string) *ListThingsInThingGroupOutput { - s.NextToken = &v - return s -} - -// SetThings sets the Things field's value. -func (s *ListThingsInThingGroupOutput) SetThings(v []*string) *ListThingsInThingGroupOutput { - s.Things = v - return s -} - -// The input for the ListThings operation. -type ListThingsInput struct { - _ struct{} `type:"structure"` - - // The attribute name used to search for things. - AttributeName *string `location:"querystring" locationName:"attributeName" type:"string"` - - // The attribute value used to search for things. - AttributeValue *string `location:"querystring" locationName:"attributeValue" type:"string"` - - // The maximum number of results to return in this operation. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` +// The input for the ReplaceTopicRule operation. +type ReplaceTopicRuleInput struct { + _ struct{} `type:"structure" payload:"TopicRulePayload"` - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // The name of the rule. + // + // RuleName is a required field + RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` - // The name of the thing type used to search for things. - ThingTypeName *string `location:"querystring" locationName:"thingTypeName" min:"1" type:"string"` + // The rule payload. + // + // TopicRulePayload is a required field + TopicRulePayload *TopicRulePayload `locationName:"topicRulePayload" type:"structure" required:"true"` } // String returns the string representation -func (s ListThingsInput) String() string { +func (s ReplaceTopicRuleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingsInput) GoString() string { +func (s ReplaceTopicRuleInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListThingsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListThingsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *ReplaceTopicRuleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplaceTopicRuleInput"} + if s.RuleName == nil { + invalidParams.Add(request.NewErrParamRequired("RuleName")) } - if s.ThingTypeName != nil && len(*s.ThingTypeName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingTypeName", 1)) + if s.RuleName != nil && len(*s.RuleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) + } + if s.TopicRulePayload == nil { + invalidParams.Add(request.NewErrParamRequired("TopicRulePayload")) + } + if s.TopicRulePayload != nil { + if err := s.TopicRulePayload.Validate(); err != nil { + invalidParams.AddNested("TopicRulePayload", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -21648,102 +31515,134 @@ func (s *ListThingsInput) Validate() error { return nil } -// SetAttributeName sets the AttributeName field's value. -func (s *ListThingsInput) SetAttributeName(v string) *ListThingsInput { - s.AttributeName = &v +// SetRuleName sets the RuleName field's value. +func (s *ReplaceTopicRuleInput) SetRuleName(v string) *ReplaceTopicRuleInput { + s.RuleName = &v return s } -// SetAttributeValue sets the AttributeValue field's value. -func (s *ListThingsInput) SetAttributeValue(v string) *ListThingsInput { - s.AttributeValue = &v +// SetTopicRulePayload sets the TopicRulePayload field's value. +func (s *ReplaceTopicRuleInput) SetTopicRulePayload(v *TopicRulePayload) *ReplaceTopicRuleInput { + s.TopicRulePayload = v return s } -// SetMaxResults sets the MaxResults field's value. -func (s *ListThingsInput) SetMaxResults(v int64) *ListThingsInput { - s.MaxResults = &v - return s +type ReplaceTopicRuleOutput struct { + _ struct{} `type:"structure"` } -// SetNextToken sets the NextToken field's value. -func (s *ListThingsInput) SetNextToken(v string) *ListThingsInput { - s.NextToken = &v - return s +// String returns the string representation +func (s ReplaceTopicRuleOutput) String() string { + return awsutil.Prettify(s) } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *ListThingsInput) SetThingTypeName(v string) *ListThingsInput { - s.ThingTypeName = &v - return s +// GoString returns the string representation +func (s ReplaceTopicRuleOutput) GoString() string { + return s.String() } -// The output from the ListThings operation. -type ListThingsOutput struct { +// Describes an action to republish to another topic. +type RepublishAction struct { _ struct{} `type:"structure"` - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `locationName:"nextToken" type:"string"` + // The ARN of the IAM role that grants access. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` - // The things. - Things []*ThingAttribute `locationName:"things" type:"list"` + // The name of the MQTT topic. 
+ // + // Topic is a required field + Topic *string `locationName:"topic" type:"string" required:"true"` } // String returns the string representation -func (s ListThingsOutput) String() string { +func (s RepublishAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListThingsOutput) GoString() string { +func (s RepublishAction) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListThingsOutput) SetNextToken(v string) *ListThingsOutput { - s.NextToken = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *RepublishAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RepublishAction"} + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.Topic == nil { + invalidParams.Add(request.NewErrParamRequired("Topic")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRoleArn sets the RoleArn field's value. +func (s *RepublishAction) SetRoleArn(v string) *RepublishAction { + s.RoleArn = &v return s } -// SetThings sets the Things field's value. -func (s *ListThingsOutput) SetThings(v []*ThingAttribute) *ListThingsOutput { - s.Things = v +// SetTopic sets the Topic field's value. +func (s *RepublishAction) SetTopic(v string) *RepublishAction { + s.Topic = &v return s } -// The input for the ListTopicRules operation. -type ListTopicRulesInput struct { +// Information identifying the non-compliant resource. +type ResourceIdentifier struct { _ struct{} `type:"structure"` - // The maximum number of results to return. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // The account with which the resource is associated. + Account *string `locationName:"account" min:"12" type:"string"` - // A token used to retrieve the next value. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // The ID of the CA certificate used to authorize the certificate. + CaCertificateId *string `locationName:"caCertificateId" min:"64" type:"string"` - // Specifies whether the rule is disabled. - RuleDisabled *bool `location:"querystring" locationName:"ruleDisabled" type:"boolean"` + // The client ID. + ClientId *string `locationName:"clientId" type:"string"` - // The topic. - Topic *string `location:"querystring" locationName:"topic" type:"string"` + // The ID of the Cognito Identity Pool. + CognitoIdentityPoolId *string `locationName:"cognitoIdentityPoolId" type:"string"` + + // The ID of the certificate attached to the resource. + DeviceCertificateId *string `locationName:"deviceCertificateId" min:"64" type:"string"` + + // The version of the policy associated with the resource. + PolicyVersionIdentifier *PolicyVersionIdentifier `locationName:"policyVersionIdentifier" type:"structure"` } // String returns the string representation -func (s ListTopicRulesInput) String() string { +func (s ResourceIdentifier) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTopicRulesInput) GoString() string { +func (s ResourceIdentifier) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListTopicRulesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListTopicRulesInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *ResourceIdentifier) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceIdentifier"} + if s.Account != nil && len(*s.Account) < 12 { + invalidParams.Add(request.NewErrParamMinLen("Account", 12)) + } + if s.CaCertificateId != nil && len(*s.CaCertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CaCertificateId", 64)) + } + if s.DeviceCertificateId != nil && len(*s.DeviceCertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("DeviceCertificateId", 64)) + } + if s.PolicyVersionIdentifier != nil { + if err := s.PolicyVersionIdentifier.Validate(); err != nil { + invalidParams.AddNested("PolicyVersionIdentifier", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -21752,179 +31651,165 @@ func (s *ListTopicRulesInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListTopicRulesInput) SetMaxResults(v int64) *ListTopicRulesInput { - s.MaxResults = &v +// SetAccount sets the Account field's value. +func (s *ResourceIdentifier) SetAccount(v string) *ResourceIdentifier { + s.Account = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListTopicRulesInput) SetNextToken(v string) *ListTopicRulesInput { - s.NextToken = &v +// SetCaCertificateId sets the CaCertificateId field's value. +func (s *ResourceIdentifier) SetCaCertificateId(v string) *ResourceIdentifier { + s.CaCertificateId = &v return s } -// SetRuleDisabled sets the RuleDisabled field's value. -func (s *ListTopicRulesInput) SetRuleDisabled(v bool) *ListTopicRulesInput { - s.RuleDisabled = &v +// SetClientId sets the ClientId field's value. +func (s *ResourceIdentifier) SetClientId(v string) *ResourceIdentifier { + s.ClientId = &v return s } -// SetTopic sets the Topic field's value. -func (s *ListTopicRulesInput) SetTopic(v string) *ListTopicRulesInput { - s.Topic = &v +// SetCognitoIdentityPoolId sets the CognitoIdentityPoolId field's value. +func (s *ResourceIdentifier) SetCognitoIdentityPoolId(v string) *ResourceIdentifier { + s.CognitoIdentityPoolId = &v return s } -// The output from the ListTopicRules operation. -type ListTopicRulesOutput struct { - _ struct{} `type:"structure"` - - // A token used to retrieve the next value. - NextToken *string `locationName:"nextToken" type:"string"` - - // The rules. - Rules []*TopicRuleListItem `locationName:"rules" type:"list"` -} - -// String returns the string representation -func (s ListTopicRulesOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s ListTopicRulesOutput) GoString() string { - return s.String() -} - -// SetNextToken sets the NextToken field's value. -func (s *ListTopicRulesOutput) SetNextToken(v string) *ListTopicRulesOutput { - s.NextToken = &v +// SetDeviceCertificateId sets the DeviceCertificateId field's value. +func (s *ResourceIdentifier) SetDeviceCertificateId(v string) *ResourceIdentifier { + s.DeviceCertificateId = &v return s } -// SetRules sets the Rules field's value. -func (s *ListTopicRulesOutput) SetRules(v []*TopicRuleListItem) *ListTopicRulesOutput { - s.Rules = v +// SetPolicyVersionIdentifier sets the PolicyVersionIdentifier field's value. 
+func (s *ResourceIdentifier) SetPolicyVersionIdentifier(v *PolicyVersionIdentifier) *ResourceIdentifier { + s.PolicyVersionIdentifier = v return s } -type ListV2LoggingLevelsInput struct { +// Role alias description. +type RoleAliasDescription struct { _ struct{} `type:"structure"` - // The maximum number of results to return at one time. - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // The UNIX timestamp of when the role alias was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // The number of seconds for which the credential is valid. + CredentialDurationSeconds *int64 `locationName:"credentialDurationSeconds" min:"900" type:"integer"` - // The type of resource for which you are configuring logging. Must be THING_Group. - TargetType *string `location:"querystring" locationName:"targetType" type:"string" enum:"LogTargetType"` + // The UNIX timestamp of when the role alias was last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The role alias owner. + Owner *string `locationName:"owner" min:"12" type:"string"` + + // The role alias. + RoleAlias *string `locationName:"roleAlias" min:"1" type:"string"` + + // The ARN of the role alias. + RoleAliasArn *string `locationName:"roleAliasArn" type:"string"` + + // The role ARN. + RoleArn *string `locationName:"roleArn" min:"20" type:"string"` } // String returns the string representation -func (s ListV2LoggingLevelsInput) String() string { +func (s RoleAliasDescription) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListV2LoggingLevelsInput) GoString() string { +func (s RoleAliasDescription) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ListV2LoggingLevelsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListV2LoggingLevelsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetMaxResults sets the MaxResults field's value. -func (s *ListV2LoggingLevelsInput) SetMaxResults(v int64) *ListV2LoggingLevelsInput { - s.MaxResults = &v +// SetCreationDate sets the CreationDate field's value. +func (s *RoleAliasDescription) SetCreationDate(v time.Time) *RoleAliasDescription { + s.CreationDate = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListV2LoggingLevelsInput) SetNextToken(v string) *ListV2LoggingLevelsInput { - s.NextToken = &v +// SetCredentialDurationSeconds sets the CredentialDurationSeconds field's value. +func (s *RoleAliasDescription) SetCredentialDurationSeconds(v int64) *RoleAliasDescription { + s.CredentialDurationSeconds = &v return s } -// SetTargetType sets the TargetType field's value. -func (s *ListV2LoggingLevelsInput) SetTargetType(v string) *ListV2LoggingLevelsInput { - s.TargetType = &v +// SetLastModifiedDate sets the LastModifiedDate field's value. 
+func (s *RoleAliasDescription) SetLastModifiedDate(v time.Time) *RoleAliasDescription { + s.LastModifiedDate = &v return s } -type ListV2LoggingLevelsOutput struct { - _ struct{} `type:"structure"` - - // The logging configuration for a target. - LogTargetConfigurations []*LogTargetConfiguration `locationName:"logTargetConfigurations" type:"list"` - - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `locationName:"nextToken" type:"string"` -} - -// String returns the string representation -func (s ListV2LoggingLevelsOutput) String() string { - return awsutil.Prettify(s) +// SetOwner sets the Owner field's value. +func (s *RoleAliasDescription) SetOwner(v string) *RoleAliasDescription { + s.Owner = &v + return s } -// GoString returns the string representation -func (s ListV2LoggingLevelsOutput) GoString() string { - return s.String() +// SetRoleAlias sets the RoleAlias field's value. +func (s *RoleAliasDescription) SetRoleAlias(v string) *RoleAliasDescription { + s.RoleAlias = &v + return s } -// SetLogTargetConfigurations sets the LogTargetConfigurations field's value. -func (s *ListV2LoggingLevelsOutput) SetLogTargetConfigurations(v []*LogTargetConfiguration) *ListV2LoggingLevelsOutput { - s.LogTargetConfigurations = v +// SetRoleAliasArn sets the RoleAliasArn field's value. +func (s *RoleAliasDescription) SetRoleAliasArn(v string) *RoleAliasDescription { + s.RoleAliasArn = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListV2LoggingLevelsOutput) SetNextToken(v string) *ListV2LoggingLevelsOutput { - s.NextToken = &v +// SetRoleArn sets the RoleArn field's value. +func (s *RoleAliasDescription) SetRoleArn(v string) *RoleAliasDescription { + s.RoleArn = &v return s } -// A log target. -type LogTarget struct { +// Describes an action to write data to an Amazon S3 bucket. +type S3Action struct { _ struct{} `type:"structure"` - // The target name. - TargetName *string `locationName:"targetName" type:"string"` + // The Amazon S3 bucket. + // + // BucketName is a required field + BucketName *string `locationName:"bucketName" type:"string" required:"true"` - // The target type. + // The Amazon S3 canned ACL that controls access to the object identified by + // the object key. For more information, see S3 canned ACLs (http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl). + CannedAcl *string `locationName:"cannedAcl" type:"string" enum:"CannedAccessControlList"` + + // The object key. // - // TargetType is a required field - TargetType *string `locationName:"targetType" type:"string" required:"true" enum:"LogTargetType"` + // Key is a required field + Key *string `locationName:"key" type:"string" required:"true"` + + // The ARN of the IAM role that grants access. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` } // String returns the string representation -func (s LogTarget) String() string { +func (s S3Action) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LogTarget) GoString() string { +func (s S3Action) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *LogTarget) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "LogTarget"} - if s.TargetType == nil { - invalidParams.Add(request.NewErrParamRequired("TargetType")) +func (s *S3Action) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3Action"} + if s.BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("BucketName")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) } if invalidParams.Len() > 0 { @@ -21933,79 +31818,56 @@ func (s *LogTarget) Validate() error { return nil } -// SetTargetName sets the TargetName field's value. -func (s *LogTarget) SetTargetName(v string) *LogTarget { - s.TargetName = &v +// SetBucketName sets the BucketName field's value. +func (s *S3Action) SetBucketName(v string) *S3Action { + s.BucketName = &v return s } -// SetTargetType sets the TargetType field's value. -func (s *LogTarget) SetTargetType(v string) *LogTarget { - s.TargetType = &v +// SetCannedAcl sets the CannedAcl field's value. +func (s *S3Action) SetCannedAcl(v string) *S3Action { + s.CannedAcl = &v return s } -// The target configuration. -type LogTargetConfiguration struct { - _ struct{} `type:"structure"` - - // The logging level. - LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` - - // A log target - LogTarget *LogTarget `locationName:"logTarget" type:"structure"` -} - -// String returns the string representation -func (s LogTargetConfiguration) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s LogTargetConfiguration) GoString() string { - return s.String() -} - -// SetLogLevel sets the LogLevel field's value. -func (s *LogTargetConfiguration) SetLogLevel(v string) *LogTargetConfiguration { - s.LogLevel = &v +// SetKey sets the Key field's value. +func (s *S3Action) SetKey(v string) *S3Action { + s.Key = &v return s } - -// SetLogTarget sets the LogTarget field's value. -func (s *LogTargetConfiguration) SetLogTarget(v *LogTarget) *LogTargetConfiguration { - s.LogTarget = v + +// SetRoleArn sets the RoleArn field's value. +func (s *S3Action) SetRoleArn(v string) *S3Action { + s.RoleArn = &v return s } -// Describes the logging options payload. -type LoggingOptionsPayload struct { +// Describes the location of updated firmware in S3. +type S3Destination struct { _ struct{} `type:"structure"` - // The log level. - LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + // The S3 bucket that contains the updated firmware. + Bucket *string `locationName:"bucket" min:"1" type:"string"` - // The ARN of the IAM role that grants access. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // The S3 prefix. + Prefix *string `locationName:"prefix" type:"string"` } // String returns the string representation -func (s LoggingOptionsPayload) String() string { +func (s S3Destination) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoggingOptionsPayload) GoString() string { +func (s S3Destination) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *LoggingOptionsPayload) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "LoggingOptionsPayload"} - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) +func (s *S3Destination) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3Destination"} + if s.Bucket != nil && len(*s.Bucket) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Bucket", 1)) } if invalidParams.Len() > 0 { @@ -22014,60 +31876,50 @@ func (s *LoggingOptionsPayload) Validate() error { return nil } -// SetLogLevel sets the LogLevel field's value. -func (s *LoggingOptionsPayload) SetLogLevel(v string) *LoggingOptionsPayload { - s.LogLevel = &v +// SetBucket sets the Bucket field's value. +func (s *S3Destination) SetBucket(v string) *S3Destination { + s.Bucket = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *LoggingOptionsPayload) SetRoleArn(v string) *LoggingOptionsPayload { - s.RoleArn = &v +// SetPrefix sets the Prefix field's value. +func (s *S3Destination) SetPrefix(v string) *S3Destination { + s.Prefix = &v return s } -// Describes a file to be associated with an OTA update. -type OTAUpdateFile struct { +// The S3 location. +type S3Location struct { _ struct{} `type:"structure"` - // A list of name/attribute pairs. - Attributes map[string]*string `locationName:"attributes" type:"map"` - - // The code signing method of the file. - CodeSigning *CodeSigning `locationName:"codeSigning" type:"structure"` - - // The name of the file. - FileName *string `locationName:"fileName" type:"string"` + // The S3 bucket. + Bucket *string `locationName:"bucket" min:"1" type:"string"` - // The source of the file. - FileSource *Stream `locationName:"fileSource" type:"structure"` + // The S3 key. + Key *string `locationName:"key" min:"1" type:"string"` - // The file version. - FileVersion *string `locationName:"fileVersion" type:"string"` + // The S3 bucket version. + Version *string `locationName:"version" type:"string"` } // String returns the string representation -func (s OTAUpdateFile) String() string { +func (s S3Location) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OTAUpdateFile) GoString() string { +func (s S3Location) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *OTAUpdateFile) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OTAUpdateFile"} - if s.CodeSigning != nil { - if err := s.CodeSigning.Validate(); err != nil { - invalidParams.AddNested("CodeSigning", err.(request.ErrInvalidParams)) - } +func (s *S3Location) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3Location"} + if s.Bucket != nil && len(*s.Bucket) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Bucket", 1)) } - if s.FileSource != nil { - if err := s.FileSource.Validate(); err != nil { - invalidParams.AddNested("FileSource", err.(request.ErrInvalidParams)) - } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) } if invalidParams.Len() > 0 { @@ -22076,438 +31928,477 @@ func (s *OTAUpdateFile) Validate() error { return nil } -// SetAttributes sets the Attributes field's value. -func (s *OTAUpdateFile) SetAttributes(v map[string]*string) *OTAUpdateFile { - s.Attributes = v - return s -} - -// SetCodeSigning sets the CodeSigning field's value. 
-func (s *OTAUpdateFile) SetCodeSigning(v *CodeSigning) *OTAUpdateFile { - s.CodeSigning = v - return s -} - -// SetFileName sets the FileName field's value. -func (s *OTAUpdateFile) SetFileName(v string) *OTAUpdateFile { - s.FileName = &v +// SetBucket sets the Bucket field's value. +func (s *S3Location) SetBucket(v string) *S3Location { + s.Bucket = &v return s } -// SetFileSource sets the FileSource field's value. -func (s *OTAUpdateFile) SetFileSource(v *Stream) *OTAUpdateFile { - s.FileSource = v +// SetKey sets the Key field's value. +func (s *S3Location) SetKey(v string) *S3Location { + s.Key = &v return s } -// SetFileVersion sets the FileVersion field's value. -func (s *OTAUpdateFile) SetFileVersion(v string) *OTAUpdateFile { - s.FileVersion = &v +// SetVersion sets the Version field's value. +func (s *S3Location) SetVersion(v string) *S3Location { + s.Version = &v return s } -// Information about an OTA update. -type OTAUpdateInfo struct { +// Describes an action to write a message to a Salesforce IoT Cloud Input Stream. +type SalesforceAction struct { _ struct{} `type:"structure"` - // A collection of name/value pairs - AdditionalParameters map[string]*string `locationName:"additionalParameters" type:"map"` + // The token used to authenticate access to the Salesforce IoT Cloud Input Stream. + // The token is available from the Salesforce IoT Cloud platform after creation + // of the Input Stream. + // + // Token is a required field + Token *string `locationName:"token" min:"40" type:"string" required:"true"` - // The AWS IoT job ARN associated with the OTA update. - AwsIotJobArn *string `locationName:"awsIotJobArn" type:"string"` + // The URL exposed by the Salesforce IoT Cloud Input Stream. The URL is available + // from the Salesforce IoT Cloud platform after creation of the Input Stream. + // + // Url is a required field + Url *string `locationName:"url" type:"string" required:"true"` +} - // The AWS IoT job ID associated with the OTA update. - AwsIotJobId *string `locationName:"awsIotJobId" type:"string"` +// String returns the string representation +func (s SalesforceAction) String() string { + return awsutil.Prettify(s) +} - // The date when the OTA update was created. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` +// GoString returns the string representation +func (s SalesforceAction) GoString() string { + return s.String() +} - // A description of the OTA update. - Description *string `locationName:"description" type:"string"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *SalesforceAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SalesforceAction"} + if s.Token == nil { + invalidParams.Add(request.NewErrParamRequired("Token")) + } + if s.Token != nil && len(*s.Token) < 40 { + invalidParams.Add(request.NewErrParamMinLen("Token", 40)) + } + if s.Url == nil { + invalidParams.Add(request.NewErrParamRequired("Url")) + } - // Error information associated with the OTA update. - ErrorInfo *ErrorInfo `locationName:"errorInfo" type:"structure"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // The date when the OTA update was last updated. - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` +// SetToken sets the Token field's value. +func (s *SalesforceAction) SetToken(v string) *SalesforceAction { + s.Token = &v + return s +} - // The OTA update ARN. 
- OtaUpdateArn *string `locationName:"otaUpdateArn" type:"string"` +// SetUrl sets the Url field's value. +func (s *SalesforceAction) SetUrl(v string) *SalesforceAction { + s.Url = &v + return s +} - // A list of files associated with the OTA update. - OtaUpdateFiles []*OTAUpdateFile `locationName:"otaUpdateFiles" min:"1" type:"list"` +// Information about the scheduled audit. +type ScheduledAuditMetadata struct { + _ struct{} `type:"structure"` - // The OTA update ID. - OtaUpdateId *string `locationName:"otaUpdateId" min:"1" type:"string"` + // The day of the month on which the scheduled audit is run (if the frequency + // is "MONTHLY"). If days 29-31 are specified, and the month does not have that + // many days, the audit takes place on the "LAST" day of the month. + DayOfMonth *string `locationName:"dayOfMonth" type:"string"` - // The status of the OTA update. - OtaUpdateStatus *string `locationName:"otaUpdateStatus" type:"string" enum:"OTAUpdateStatus"` + // The day of the week on which the scheduled audit is run (if the frequency + // is "WEEKLY" or "BIWEEKLY"). + DayOfWeek *string `locationName:"dayOfWeek" type:"string" enum:"DayOfWeek"` - // Specifies whether the OTA update will continue to run (CONTINUOUS), or will - // be complete after all those things specified as targets have completed the - // OTA update (SNAPSHOT). If continuous, the OTA update may also be run on a - // thing when a change is detected in a target. For example, an OTA update will - // run on a thing when the thing is added to a target group, even after the - // OTA update was completed by all things originally in the group. - TargetSelection *string `locationName:"targetSelection" type:"string" enum:"TargetSelection"` + // How often the scheduled audit takes place. + Frequency *string `locationName:"frequency" type:"string" enum:"AuditFrequency"` - // The targets of the OTA update. - Targets []*string `locationName:"targets" min:"1" type:"list"` + // The ARN of the scheduled audit. + ScheduledAuditArn *string `locationName:"scheduledAuditArn" type:"string"` + + // The name of the scheduled audit. + ScheduledAuditName *string `locationName:"scheduledAuditName" min:"1" type:"string"` } // String returns the string representation -func (s OTAUpdateInfo) String() string { +func (s ScheduledAuditMetadata) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OTAUpdateInfo) GoString() string { +func (s ScheduledAuditMetadata) GoString() string { return s.String() } -// SetAdditionalParameters sets the AdditionalParameters field's value. -func (s *OTAUpdateInfo) SetAdditionalParameters(v map[string]*string) *OTAUpdateInfo { - s.AdditionalParameters = v +// SetDayOfMonth sets the DayOfMonth field's value. +func (s *ScheduledAuditMetadata) SetDayOfMonth(v string) *ScheduledAuditMetadata { + s.DayOfMonth = &v return s } -// SetAwsIotJobArn sets the AwsIotJobArn field's value. -func (s *OTAUpdateInfo) SetAwsIotJobArn(v string) *OTAUpdateInfo { - s.AwsIotJobArn = &v +// SetDayOfWeek sets the DayOfWeek field's value. +func (s *ScheduledAuditMetadata) SetDayOfWeek(v string) *ScheduledAuditMetadata { + s.DayOfWeek = &v return s } -// SetAwsIotJobId sets the AwsIotJobId field's value. -func (s *OTAUpdateInfo) SetAwsIotJobId(v string) *OTAUpdateInfo { - s.AwsIotJobId = &v +// SetFrequency sets the Frequency field's value. 
+func (s *ScheduledAuditMetadata) SetFrequency(v string) *ScheduledAuditMetadata { + s.Frequency = &v return s } -// SetCreationDate sets the CreationDate field's value. -func (s *OTAUpdateInfo) SetCreationDate(v time.Time) *OTAUpdateInfo { - s.CreationDate = &v +// SetScheduledAuditArn sets the ScheduledAuditArn field's value. +func (s *ScheduledAuditMetadata) SetScheduledAuditArn(v string) *ScheduledAuditMetadata { + s.ScheduledAuditArn = &v return s } -// SetDescription sets the Description field's value. -func (s *OTAUpdateInfo) SetDescription(v string) *OTAUpdateInfo { - s.Description = &v +// SetScheduledAuditName sets the ScheduledAuditName field's value. +func (s *ScheduledAuditMetadata) SetScheduledAuditName(v string) *ScheduledAuditMetadata { + s.ScheduledAuditName = &v return s } -// SetErrorInfo sets the ErrorInfo field's value. -func (s *OTAUpdateInfo) SetErrorInfo(v *ErrorInfo) *OTAUpdateInfo { - s.ErrorInfo = v - return s +type SearchIndexInput struct { + _ struct{} `type:"structure"` + + // The search index name. + IndexName *string `locationName:"indexName" min:"1" type:"string"` + + // The maximum number of results to return at one time. + MaxResults *int64 `locationName:"maxResults" min:"1" type:"integer"` + + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` + + // The search query string. + // + // QueryString is a required field + QueryString *string `locationName:"queryString" min:"1" type:"string" required:"true"` + + // The query version. + QueryVersion *string `locationName:"queryVersion" type:"string"` } -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *OTAUpdateInfo) SetLastModifiedDate(v time.Time) *OTAUpdateInfo { - s.LastModifiedDate = &v - return s +// String returns the string representation +func (s SearchIndexInput) String() string { + return awsutil.Prettify(s) } -// SetOtaUpdateArn sets the OtaUpdateArn field's value. -func (s *OTAUpdateInfo) SetOtaUpdateArn(v string) *OTAUpdateInfo { - s.OtaUpdateArn = &v - return s +// GoString returns the string representation +func (s SearchIndexInput) GoString() string { + return s.String() } -// SetOtaUpdateFiles sets the OtaUpdateFiles field's value. -func (s *OTAUpdateInfo) SetOtaUpdateFiles(v []*OTAUpdateFile) *OTAUpdateInfo { - s.OtaUpdateFiles = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *SearchIndexInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SearchIndexInput"} + if s.IndexName != nil && len(*s.IndexName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.QueryString == nil { + invalidParams.Add(request.NewErrParamRequired("QueryString")) + } + if s.QueryString != nil && len(*s.QueryString) < 1 { + invalidParams.Add(request.NewErrParamMinLen("QueryString", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIndexName sets the IndexName field's value. +func (s *SearchIndexInput) SetIndexName(v string) *SearchIndexInput { + s.IndexName = &v return s } -// SetOtaUpdateId sets the OtaUpdateId field's value. -func (s *OTAUpdateInfo) SetOtaUpdateId(v string) *OTAUpdateInfo { - s.OtaUpdateId = &v +// SetMaxResults sets the MaxResults field's value. 
+func (s *SearchIndexInput) SetMaxResults(v int64) *SearchIndexInput { + s.MaxResults = &v return s } -// SetOtaUpdateStatus sets the OtaUpdateStatus field's value. -func (s *OTAUpdateInfo) SetOtaUpdateStatus(v string) *OTAUpdateInfo { - s.OtaUpdateStatus = &v +// SetNextToken sets the NextToken field's value. +func (s *SearchIndexInput) SetNextToken(v string) *SearchIndexInput { + s.NextToken = &v return s } -// SetTargetSelection sets the TargetSelection field's value. -func (s *OTAUpdateInfo) SetTargetSelection(v string) *OTAUpdateInfo { - s.TargetSelection = &v +// SetQueryString sets the QueryString field's value. +func (s *SearchIndexInput) SetQueryString(v string) *SearchIndexInput { + s.QueryString = &v return s } -// SetTargets sets the Targets field's value. -func (s *OTAUpdateInfo) SetTargets(v []*string) *OTAUpdateInfo { - s.Targets = v +// SetQueryVersion sets the QueryVersion field's value. +func (s *SearchIndexInput) SetQueryVersion(v string) *SearchIndexInput { + s.QueryVersion = &v return s } -// An OTA update summary. -type OTAUpdateSummary struct { +type SearchIndexOutput struct { _ struct{} `type:"structure"` - // The date when the OTA update was created. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` + // The token used to get the next set of results, or null if there are no additional + // results. + NextToken *string `locationName:"nextToken" type:"string"` - // The OTA update ARN. - OtaUpdateArn *string `locationName:"otaUpdateArn" type:"string"` + // The thing groups that match the search query. + ThingGroups []*ThingGroupDocument `locationName:"thingGroups" type:"list"` - // The OTA update ID. - OtaUpdateId *string `locationName:"otaUpdateId" min:"1" type:"string"` + // The things that match the search query. + Things []*ThingDocument `locationName:"things" type:"list"` } // String returns the string representation -func (s OTAUpdateSummary) String() string { +func (s SearchIndexOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OTAUpdateSummary) GoString() string { +func (s SearchIndexOutput) GoString() string { return s.String() } -// SetCreationDate sets the CreationDate field's value. -func (s *OTAUpdateSummary) SetCreationDate(v time.Time) *OTAUpdateSummary { - s.CreationDate = &v +// SetNextToken sets the NextToken field's value. +func (s *SearchIndexOutput) SetNextToken(v string) *SearchIndexOutput { + s.NextToken = &v return s } -// SetOtaUpdateArn sets the OtaUpdateArn field's value. -func (s *OTAUpdateSummary) SetOtaUpdateArn(v string) *OTAUpdateSummary { - s.OtaUpdateArn = &v +// SetThingGroups sets the ThingGroups field's value. +func (s *SearchIndexOutput) SetThingGroups(v []*ThingGroupDocument) *SearchIndexOutput { + s.ThingGroups = v return s } -// SetOtaUpdateId sets the OtaUpdateId field's value. -func (s *OTAUpdateSummary) SetOtaUpdateId(v string) *OTAUpdateSummary { - s.OtaUpdateId = &v +// SetThings sets the Things field's value. +func (s *SearchIndexOutput) SetThings(v []*ThingDocument) *SearchIndexOutput { + s.Things = v return s } -// A certificate that has been transferred but not yet accepted. -type OutgoingCertificate struct { +// Identifying information for a Device Defender security profile. +type SecurityProfileIdentifier struct { _ struct{} `type:"structure"` - // The certificate ARN. - CertificateArn *string `locationName:"certificateArn" type:"string"` - - // The certificate ID. 
- CertificateId *string `locationName:"certificateId" min:"64" type:"string"` - - // The certificate creation date. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - // The date the transfer was initiated. - TransferDate *time.Time `locationName:"transferDate" type:"timestamp" timestampFormat:"unix"` - - // The transfer message. - TransferMessage *string `locationName:"transferMessage" type:"string"` + // The ARN of the security profile. + // + // Arn is a required field + Arn *string `locationName:"arn" type:"string" required:"true"` - // The AWS account to which the transfer was made. - TransferredTo *string `locationName:"transferredTo" type:"string"` + // The name you have given to the security profile. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s OutgoingCertificate) String() string { +func (s SecurityProfileIdentifier) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OutgoingCertificate) GoString() string { +func (s SecurityProfileIdentifier) GoString() string { return s.String() } -// SetCertificateArn sets the CertificateArn field's value. -func (s *OutgoingCertificate) SetCertificateArn(v string) *OutgoingCertificate { - s.CertificateArn = &v +// SetArn sets the Arn field's value. +func (s *SecurityProfileIdentifier) SetArn(v string) *SecurityProfileIdentifier { + s.Arn = &v return s } -// SetCertificateId sets the CertificateId field's value. -func (s *OutgoingCertificate) SetCertificateId(v string) *OutgoingCertificate { - s.CertificateId = &v +// SetName sets the Name field's value. +func (s *SecurityProfileIdentifier) SetName(v string) *SecurityProfileIdentifier { + s.Name = &v return s } -// SetCreationDate sets the CreationDate field's value. -func (s *OutgoingCertificate) SetCreationDate(v time.Time) *OutgoingCertificate { - s.CreationDate = &v - return s +// A target to which an alert is sent when a security profile behavior is violated. +type SecurityProfileTarget struct { + _ struct{} `type:"structure"` + + // The ARN of the security profile. + // + // Arn is a required field + Arn *string `locationName:"arn" type:"string" required:"true"` } -// SetTransferDate sets the TransferDate field's value. -func (s *OutgoingCertificate) SetTransferDate(v time.Time) *OutgoingCertificate { - s.TransferDate = &v - return s +// String returns the string representation +func (s SecurityProfileTarget) String() string { + return awsutil.Prettify(s) } -// SetTransferMessage sets the TransferMessage field's value. -func (s *OutgoingCertificate) SetTransferMessage(v string) *OutgoingCertificate { - s.TransferMessage = &v - return s +// GoString returns the string representation +func (s SecurityProfileTarget) GoString() string { + return s.String() } -// SetTransferredTo sets the TransferredTo field's value. -func (s *OutgoingCertificate) SetTransferredTo(v string) *OutgoingCertificate { - s.TransferredTo = &v +// SetArn sets the Arn field's value. +func (s *SecurityProfileTarget) SetArn(v string) *SecurityProfileTarget { + s.Arn = &v return s } -// Describes an AWS IoT policy. -type Policy struct { +// Information about a security profile and the target associated with it. +type SecurityProfileTargetMapping struct { _ struct{} `type:"structure"` - // The policy ARN. 
- PolicyArn *string `locationName:"policyArn" type:"string"` + // Information that identifies the security profile. + SecurityProfileIdentifier *SecurityProfileIdentifier `locationName:"securityProfileIdentifier" type:"structure"` - // The policy name. - PolicyName *string `locationName:"policyName" min:"1" type:"string"` + // Information about the target (thing group) associated with the security profile. + Target *SecurityProfileTarget `locationName:"target" type:"structure"` } // String returns the string representation -func (s Policy) String() string { +func (s SecurityProfileTargetMapping) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Policy) GoString() string { +func (s SecurityProfileTargetMapping) GoString() string { return s.String() } -// SetPolicyArn sets the PolicyArn field's value. -func (s *Policy) SetPolicyArn(v string) *Policy { - s.PolicyArn = &v +// SetSecurityProfileIdentifier sets the SecurityProfileIdentifier field's value. +func (s *SecurityProfileTargetMapping) SetSecurityProfileIdentifier(v *SecurityProfileIdentifier) *SecurityProfileTargetMapping { + s.SecurityProfileIdentifier = v return s } -// SetPolicyName sets the PolicyName field's value. -func (s *Policy) SetPolicyName(v string) *Policy { - s.PolicyName = &v +// SetTarget sets the Target field's value. +func (s *SecurityProfileTargetMapping) SetTarget(v *SecurityProfileTarget) *SecurityProfileTargetMapping { + s.Target = v return s } -// Describes a policy version. -type PolicyVersion struct { +type SetDefaultAuthorizerInput struct { _ struct{} `type:"structure"` - // The date and time the policy was created. - CreateDate *time.Time `locationName:"createDate" type:"timestamp" timestampFormat:"unix"` - - // Specifies whether the policy version is the default. - IsDefaultVersion *bool `locationName:"isDefaultVersion" type:"boolean"` - - // The policy version ID. - VersionId *string `locationName:"versionId" type:"string"` + // The authorizer name. + // + // AuthorizerName is a required field + AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s PolicyVersion) String() string { +func (s SetDefaultAuthorizerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PolicyVersion) GoString() string { +func (s SetDefaultAuthorizerInput) GoString() string { return s.String() } -// SetCreateDate sets the CreateDate field's value. -func (s *PolicyVersion) SetCreateDate(v time.Time) *PolicyVersion { - s.CreateDate = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *SetDefaultAuthorizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetDefaultAuthorizerInput"} + if s.AuthorizerName == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) + } + if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) + } -// SetIsDefaultVersion sets the IsDefaultVersion field's value. -func (s *PolicyVersion) SetIsDefaultVersion(v bool) *PolicyVersion { - s.IsDefaultVersion = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetVersionId sets the VersionId field's value. -func (s *PolicyVersion) SetVersionId(v string) *PolicyVersion { - s.VersionId = &v +// SetAuthorizerName sets the AuthorizerName field's value. 
+func (s *SetDefaultAuthorizerInput) SetAuthorizerName(v string) *SetDefaultAuthorizerInput { + s.AuthorizerName = &v return s } -// Configuration for pre-signed S3 URLs. -type PresignedUrlConfig struct { +type SetDefaultAuthorizerOutput struct { _ struct{} `type:"structure"` - // How long (in seconds) pre-signed URLs are valid. Valid values are 60 - 3600, - // the default value is 3600 seconds. Pre-signed URLs are generated when Jobs - // receives an MQTT request for the job document. - ExpiresInSec *int64 `locationName:"expiresInSec" min:"60" type:"long"` + // The authorizer ARN. + AuthorizerArn *string `locationName:"authorizerArn" type:"string"` - // The ARN of an IAM role that grants grants permission to download files from - // the S3 bucket where the job data/updates are stored. The role must also grant - // permission for IoT to download the files. - RoleArn *string `locationName:"roleArn" min:"20" type:"string"` + // The authorizer name. + AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` } // String returns the string representation -func (s PresignedUrlConfig) String() string { +func (s SetDefaultAuthorizerOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PresignedUrlConfig) GoString() string { +func (s SetDefaultAuthorizerOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *PresignedUrlConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "PresignedUrlConfig"} - if s.ExpiresInSec != nil && *s.ExpiresInSec < 60 { - invalidParams.Add(request.NewErrParamMinValue("ExpiresInSec", 60)) - } - if s.RoleArn != nil && len(*s.RoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetExpiresInSec sets the ExpiresInSec field's value. -func (s *PresignedUrlConfig) SetExpiresInSec(v int64) *PresignedUrlConfig { - s.ExpiresInSec = &v +// SetAuthorizerArn sets the AuthorizerArn field's value. +func (s *SetDefaultAuthorizerOutput) SetAuthorizerArn(v string) *SetDefaultAuthorizerOutput { + s.AuthorizerArn = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *PresignedUrlConfig) SetRoleArn(v string) *PresignedUrlConfig { - s.RoleArn = &v +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *SetDefaultAuthorizerOutput) SetAuthorizerName(v string) *SetDefaultAuthorizerOutput { + s.AuthorizerName = &v return s } -// The input for the DynamoActionVS action that specifies the DynamoDB table -// to which the message data will be written. -type PutItemInput struct { +// The input for the SetDefaultPolicyVersion operation. +type SetDefaultPolicyVersionInput struct { _ struct{} `type:"structure"` - // The table where the message data will be written + // The policy name. // - // TableName is a required field - TableName *string `locationName:"tableName" type:"string" required:"true"` + // PolicyName is a required field + PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + + // The policy version ID. 
+ // + // PolicyVersionId is a required field + PolicyVersionId *string `location:"uri" locationName:"policyVersionId" type:"string" required:"true"` } // String returns the string representation -func (s PutItemInput) String() string { +func (s SetDefaultPolicyVersionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PutItemInput) GoString() string { +func (s SetDefaultPolicyVersionInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *PutItemInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "PutItemInput"} - if s.TableName == nil { - invalidParams.Add(request.NewErrParamRequired("TableName")) +func (s *SetDefaultPolicyVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetDefaultPolicyVersionInput"} + if s.PolicyName == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyName")) + } + if s.PolicyName != nil && len(*s.PolicyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) + } + if s.PolicyVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("PolicyVersionId")) } if invalidParams.Len() > 0 { @@ -22516,64 +32407,61 @@ func (s *PutItemInput) Validate() error { return nil } -// SetTableName sets the TableName field's value. -func (s *PutItemInput) SetTableName(v string) *PutItemInput { - s.TableName = &v +// SetPolicyName sets the PolicyName field's value. +func (s *SetDefaultPolicyVersionInput) SetPolicyName(v string) *SetDefaultPolicyVersionInput { + s.PolicyName = &v return s } -// The input to the RegisterCACertificate operation. -type RegisterCACertificateInput struct { - _ struct{} `type:"structure"` +// SetPolicyVersionId sets the PolicyVersionId field's value. +func (s *SetDefaultPolicyVersionInput) SetPolicyVersionId(v string) *SetDefaultPolicyVersionInput { + s.PolicyVersionId = &v + return s +} - // Allows this CA certificate to be used for auto registration of device certificates. - AllowAutoRegistration *bool `location:"querystring" locationName:"allowAutoRegistration" type:"boolean"` +type SetDefaultPolicyVersionOutput struct { + _ struct{} `type:"structure"` +} - // The CA certificate. - // - // CaCertificate is a required field - CaCertificate *string `locationName:"caCertificate" min:"1" type:"string" required:"true"` +// String returns the string representation +func (s SetDefaultPolicyVersionOutput) String() string { + return awsutil.Prettify(s) +} - // Information about the registration configuration. - RegistrationConfig *RegistrationConfig `locationName:"registrationConfig" type:"structure"` +// GoString returns the string representation +func (s SetDefaultPolicyVersionOutput) GoString() string { + return s.String() +} - // A boolean value that specifies if the CA certificate is set to active. - SetAsActive *bool `location:"querystring" locationName:"setAsActive" type:"boolean"` +// The input for the SetLoggingOptions operation. +type SetLoggingOptionsInput struct { + _ struct{} `type:"structure" payload:"LoggingOptionsPayload"` - // The private key verification certificate. + // The logging options payload. 
// - // VerificationCertificate is a required field - VerificationCertificate *string `locationName:"verificationCertificate" min:"1" type:"string" required:"true"` + // LoggingOptionsPayload is a required field + LoggingOptionsPayload *LoggingOptionsPayload `locationName:"loggingOptionsPayload" type:"structure" required:"true"` } // String returns the string representation -func (s RegisterCACertificateInput) String() string { +func (s SetLoggingOptionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RegisterCACertificateInput) GoString() string { +func (s SetLoggingOptionsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *RegisterCACertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RegisterCACertificateInput"} - if s.CaCertificate == nil { - invalidParams.Add(request.NewErrParamRequired("CaCertificate")) - } - if s.CaCertificate != nil && len(*s.CaCertificate) < 1 { - invalidParams.Add(request.NewErrParamMinLen("CaCertificate", 1)) - } - if s.VerificationCertificate == nil { - invalidParams.Add(request.NewErrParamRequired("VerificationCertificate")) - } - if s.VerificationCertificate != nil && len(*s.VerificationCertificate) < 1 { - invalidParams.Add(request.NewErrParamMinLen("VerificationCertificate", 1)) +func (s *SetLoggingOptionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetLoggingOptionsInput"} + if s.LoggingOptionsPayload == nil { + invalidParams.Add(request.NewErrParamRequired("LoggingOptionsPayload")) } - if s.RegistrationConfig != nil { - if err := s.RegistrationConfig.Validate(); err != nil { - invalidParams.AddNested("RegistrationConfig", err.(request.ErrInvalidParams)) + if s.LoggingOptionsPayload != nil { + if err := s.LoggingOptionsPayload.Validate(); err != nil { + invalidParams.AddNested("LoggingOptionsPayload", err.(request.ErrInvalidParams)) } } @@ -22583,109 +32471,63 @@ func (s *RegisterCACertificateInput) Validate() error { return nil } -// SetAllowAutoRegistration sets the AllowAutoRegistration field's value. -func (s *RegisterCACertificateInput) SetAllowAutoRegistration(v bool) *RegisterCACertificateInput { - s.AllowAutoRegistration = &v - return s -} - -// SetCaCertificate sets the CaCertificate field's value. -func (s *RegisterCACertificateInput) SetCaCertificate(v string) *RegisterCACertificateInput { - s.CaCertificate = &v - return s -} - -// SetRegistrationConfig sets the RegistrationConfig field's value. -func (s *RegisterCACertificateInput) SetRegistrationConfig(v *RegistrationConfig) *RegisterCACertificateInput { - s.RegistrationConfig = v - return s -} - -// SetSetAsActive sets the SetAsActive field's value. -func (s *RegisterCACertificateInput) SetSetAsActive(v bool) *RegisterCACertificateInput { - s.SetAsActive = &v - return s -} - -// SetVerificationCertificate sets the VerificationCertificate field's value. -func (s *RegisterCACertificateInput) SetVerificationCertificate(v string) *RegisterCACertificateInput { - s.VerificationCertificate = &v +// SetLoggingOptionsPayload sets the LoggingOptionsPayload field's value. +func (s *SetLoggingOptionsInput) SetLoggingOptionsPayload(v *LoggingOptionsPayload) *SetLoggingOptionsInput { + s.LoggingOptionsPayload = v return s } -// The output from the RegisterCACertificateResponse operation. 
-type RegisterCACertificateOutput struct { +type SetLoggingOptionsOutput struct { _ struct{} `type:"structure"` - - // The CA certificate ARN. - CertificateArn *string `locationName:"certificateArn" type:"string"` - - // The CA certificate identifier. - CertificateId *string `locationName:"certificateId" min:"64" type:"string"` } // String returns the string representation -func (s RegisterCACertificateOutput) String() string { +func (s SetLoggingOptionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RegisterCACertificateOutput) GoString() string { +func (s SetLoggingOptionsOutput) GoString() string { return s.String() } -// SetCertificateArn sets the CertificateArn field's value. -func (s *RegisterCACertificateOutput) SetCertificateArn(v string) *RegisterCACertificateOutput { - s.CertificateArn = &v - return s -} - -// SetCertificateId sets the CertificateId field's value. -func (s *RegisterCACertificateOutput) SetCertificateId(v string) *RegisterCACertificateOutput { - s.CertificateId = &v - return s -} - -// The input to the RegisterCertificate operation. -type RegisterCertificateInput struct { +type SetV2LoggingLevelInput struct { _ struct{} `type:"structure"` - // The CA certificate used to sign the device certificate being registered. - CaCertificatePem *string `locationName:"caCertificatePem" min:"1" type:"string"` - - // The certificate data, in PEM format. + // The log level. // - // CertificatePem is a required field - CertificatePem *string `locationName:"certificatePem" min:"1" type:"string" required:"true"` - - // A boolean value that specifies if the CA certificate is set to active. - SetAsActive *bool `location:"querystring" locationName:"setAsActive" deprecated:"true" type:"boolean"` + // LogLevel is a required field + LogLevel *string `locationName:"logLevel" type:"string" required:"true" enum:"LogLevel"` - // The status of the register certificate request. - Status *string `locationName:"status" type:"string" enum:"CertificateStatus"` + // The log target. + // + // LogTarget is a required field + LogTarget *LogTarget `locationName:"logTarget" type:"structure" required:"true"` } // String returns the string representation -func (s RegisterCertificateInput) String() string { +func (s SetV2LoggingLevelInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RegisterCertificateInput) GoString() string { +func (s SetV2LoggingLevelInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *RegisterCertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RegisterCertificateInput"} - if s.CaCertificatePem != nil && len(*s.CaCertificatePem) < 1 { - invalidParams.Add(request.NewErrParamMinLen("CaCertificatePem", 1)) +func (s *SetV2LoggingLevelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetV2LoggingLevelInput"} + if s.LogLevel == nil { + invalidParams.Add(request.NewErrParamRequired("LogLevel")) } - if s.CertificatePem == nil { - invalidParams.Add(request.NewErrParamRequired("CertificatePem")) + if s.LogTarget == nil { + invalidParams.Add(request.NewErrParamRequired("LogTarget")) } - if s.CertificatePem != nil && len(*s.CertificatePem) < 1 { - invalidParams.Add(request.NewErrParamMinLen("CertificatePem", 1)) + if s.LogTarget != nil { + if err := s.LogTarget.Validate(); err != nil { + invalidParams.AddNested("LogTarget", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -22694,168 +32536,170 @@ func (s *RegisterCertificateInput) Validate() error { return nil } -// SetCaCertificatePem sets the CaCertificatePem field's value. -func (s *RegisterCertificateInput) SetCaCertificatePem(v string) *RegisterCertificateInput { - s.CaCertificatePem = &v - return s -} - -// SetCertificatePem sets the CertificatePem field's value. -func (s *RegisterCertificateInput) SetCertificatePem(v string) *RegisterCertificateInput { - s.CertificatePem = &v - return s -} - -// SetSetAsActive sets the SetAsActive field's value. -func (s *RegisterCertificateInput) SetSetAsActive(v bool) *RegisterCertificateInput { - s.SetAsActive = &v +// SetLogLevel sets the LogLevel field's value. +func (s *SetV2LoggingLevelInput) SetLogLevel(v string) *SetV2LoggingLevelInput { + s.LogLevel = &v return s } -// SetStatus sets the Status field's value. -func (s *RegisterCertificateInput) SetStatus(v string) *RegisterCertificateInput { - s.Status = &v +// SetLogTarget sets the LogTarget field's value. +func (s *SetV2LoggingLevelInput) SetLogTarget(v *LogTarget) *SetV2LoggingLevelInput { + s.LogTarget = v return s } -// The output from the RegisterCertificate operation. -type RegisterCertificateOutput struct { +type SetV2LoggingLevelOutput struct { _ struct{} `type:"structure"` - - // The certificate ARN. - CertificateArn *string `locationName:"certificateArn" type:"string"` - - // The certificate identifier. - CertificateId *string `locationName:"certificateId" min:"64" type:"string"` } // String returns the string representation -func (s RegisterCertificateOutput) String() string { +func (s SetV2LoggingLevelOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RegisterCertificateOutput) GoString() string { +func (s SetV2LoggingLevelOutput) GoString() string { return s.String() } -// SetCertificateArn sets the CertificateArn field's value. -func (s *RegisterCertificateOutput) SetCertificateArn(v string) *RegisterCertificateOutput { - s.CertificateArn = &v - return s -} - -// SetCertificateId sets the CertificateId field's value. -func (s *RegisterCertificateOutput) SetCertificateId(v string) *RegisterCertificateOutput { - s.CertificateId = &v - return s -} - -type RegisterThingInput struct { +type SetV2LoggingOptionsInput struct { _ struct{} `type:"structure"` - // The parameters for provisioning a thing. - Parameters map[string]*string `locationName:"parameters" type:"map"` + // The default logging level. 
+ DefaultLogLevel *string `locationName:"defaultLogLevel" type:"string" enum:"LogLevel"` - // The provisioning template. - // - // TemplateBody is a required field - TemplateBody *string `locationName:"templateBody" type:"string" required:"true"` + // If true all logs are disabled. The default is false. + DisableAllLogs *bool `locationName:"disableAllLogs" type:"boolean"` + + // The ARN of the role that allows IoT to write to Cloudwatch logs. + RoleArn *string `locationName:"roleArn" type:"string"` } // String returns the string representation -func (s RegisterThingInput) String() string { +func (s SetV2LoggingOptionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RegisterThingInput) GoString() string { +func (s SetV2LoggingOptionsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *RegisterThingInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RegisterThingInput"} - if s.TemplateBody == nil { - invalidParams.Add(request.NewErrParamRequired("TemplateBody")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetDefaultLogLevel sets the DefaultLogLevel field's value. +func (s *SetV2LoggingOptionsInput) SetDefaultLogLevel(v string) *SetV2LoggingOptionsInput { + s.DefaultLogLevel = &v + return s } -// SetParameters sets the Parameters field's value. -func (s *RegisterThingInput) SetParameters(v map[string]*string) *RegisterThingInput { - s.Parameters = v +// SetDisableAllLogs sets the DisableAllLogs field's value. +func (s *SetV2LoggingOptionsInput) SetDisableAllLogs(v bool) *SetV2LoggingOptionsInput { + s.DisableAllLogs = &v return s } -// SetTemplateBody sets the TemplateBody field's value. -func (s *RegisterThingInput) SetTemplateBody(v string) *RegisterThingInput { - s.TemplateBody = &v +// SetRoleArn sets the RoleArn field's value. +func (s *SetV2LoggingOptionsInput) SetRoleArn(v string) *SetV2LoggingOptionsInput { + s.RoleArn = &v return s } -type RegisterThingOutput struct { +type SetV2LoggingOptionsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s SetV2LoggingOptionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetV2LoggingOptionsOutput) GoString() string { + return s.String() +} + +// Describes the code-signing profile. +type SigningProfileParameter struct { _ struct{} `type:"structure"` - // The PEM of a certificate. - CertificatePem *string `locationName:"certificatePem" min:"1" type:"string"` + // Certificate ARN. + CertificateArn *string `locationName:"certificateArn" type:"string"` - // ARNs for the generated resources. - ResourceArns map[string]*string `locationName:"resourceArns" type:"map"` + // The location of the code-signing certificate on your device. + CertificatePathOnDevice *string `locationName:"certificatePathOnDevice" type:"string"` + + // The hardware platform of your device. + Platform *string `locationName:"platform" type:"string"` } // String returns the string representation -func (s RegisterThingOutput) String() string { +func (s SigningProfileParameter) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RegisterThingOutput) GoString() string { +func (s SigningProfileParameter) GoString() string { return s.String() } -// SetCertificatePem sets the CertificatePem field's value. 
-func (s *RegisterThingOutput) SetCertificatePem(v string) *RegisterThingOutput { - s.CertificatePem = &v +// SetCertificateArn sets the CertificateArn field's value. +func (s *SigningProfileParameter) SetCertificateArn(v string) *SigningProfileParameter { + s.CertificateArn = &v return s } -// SetResourceArns sets the ResourceArns field's value. -func (s *RegisterThingOutput) SetResourceArns(v map[string]*string) *RegisterThingOutput { - s.ResourceArns = v +// SetCertificatePathOnDevice sets the CertificatePathOnDevice field's value. +func (s *SigningProfileParameter) SetCertificatePathOnDevice(v string) *SigningProfileParameter { + s.CertificatePathOnDevice = &v return s } -// The registration configuration. -type RegistrationConfig struct { +// SetPlatform sets the Platform field's value. +func (s *SigningProfileParameter) SetPlatform(v string) *SigningProfileParameter { + s.Platform = &v + return s +} + +// Describes an action to publish to an Amazon SNS topic. +type SnsAction struct { _ struct{} `type:"structure"` - // The ARN of the role. - RoleArn *string `locationName:"roleArn" min:"20" type:"string"` + // (Optional) The message format of the message to publish. Accepted values + // are "JSON" and "RAW". The default value of the attribute is "RAW". SNS uses + // this setting to determine if the payload should be parsed and relevant platform-specific + // bits of the payload should be extracted. To read more about SNS message formats, + // see http://docs.aws.amazon.com/sns/latest/dg/json-formats.html (http://docs.aws.amazon.com/sns/latest/dg/json-formats.html) + // refer to their official documentation. + MessageFormat *string `locationName:"messageFormat" type:"string" enum:"MessageFormat"` - // The template body. - TemplateBody *string `locationName:"templateBody" type:"string"` + // The ARN of the IAM role that grants access. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // The ARN of the SNS topic. + // + // TargetArn is a required field + TargetArn *string `locationName:"targetArn" type:"string" required:"true"` } // String returns the string representation -func (s RegistrationConfig) String() string { +func (s SnsAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RegistrationConfig) GoString() string { +func (s SnsAction) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *RegistrationConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RegistrationConfig"} - if s.RoleArn != nil && len(*s.RoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) +func (s *SnsAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SnsAction"} + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.TargetArn == nil { + invalidParams.Add(request.NewErrParamRequired("TargetArn")) } if invalidParams.Len() > 0 { @@ -22864,49 +32708,60 @@ func (s *RegistrationConfig) Validate() error { return nil } +// SetMessageFormat sets the MessageFormat field's value. +func (s *SnsAction) SetMessageFormat(v string) *SnsAction { + s.MessageFormat = &v + return s +} + // SetRoleArn sets the RoleArn field's value. 
-func (s *RegistrationConfig) SetRoleArn(v string) *RegistrationConfig { +func (s *SnsAction) SetRoleArn(v string) *SnsAction { s.RoleArn = &v return s } -// SetTemplateBody sets the TemplateBody field's value. -func (s *RegistrationConfig) SetTemplateBody(v string) *RegistrationConfig { - s.TemplateBody = &v +// SetTargetArn sets the TargetArn field's value. +func (s *SnsAction) SetTargetArn(v string) *SnsAction { + s.TargetArn = &v return s } -// The input for the RejectCertificateTransfer operation. -type RejectCertificateTransferInput struct { +// Describes an action to publish data to an Amazon SQS queue. +type SqsAction struct { _ struct{} `type:"structure"` - // The ID of the certificate. + // The URL of the Amazon SQS queue. // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` + // QueueUrl is a required field + QueueUrl *string `locationName:"queueUrl" type:"string" required:"true"` - // The reason the certificate transfer was rejected. - RejectReason *string `locationName:"rejectReason" type:"string"` + // The ARN of the IAM role that grants access. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // Specifies whether to use Base64 encoding. + UseBase64 *bool `locationName:"useBase64" type:"boolean"` } // String returns the string representation -func (s RejectCertificateTransferInput) String() string { +func (s SqsAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RejectCertificateTransferInput) GoString() string { +func (s SqsAction) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *RejectCertificateTransferInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RejectCertificateTransferInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) +func (s *SqsAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SqsAction"} + if s.QueueUrl == nil { + invalidParams.Add(request.NewErrParamRequired("QueueUrl")) } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) } if invalidParams.Len() > 0 { @@ -22915,66 +32770,51 @@ func (s *RejectCertificateTransferInput) Validate() error { return nil } -// SetCertificateId sets the CertificateId field's value. -func (s *RejectCertificateTransferInput) SetCertificateId(v string) *RejectCertificateTransferInput { - s.CertificateId = &v +// SetQueueUrl sets the QueueUrl field's value. +func (s *SqsAction) SetQueueUrl(v string) *SqsAction { + s.QueueUrl = &v return s } -// SetRejectReason sets the RejectReason field's value. -func (s *RejectCertificateTransferInput) SetRejectReason(v string) *RejectCertificateTransferInput { - s.RejectReason = &v +// SetRoleArn sets the RoleArn field's value. 
+func (s *SqsAction) SetRoleArn(v string) *SqsAction { + s.RoleArn = &v return s } -type RejectCertificateTransferOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s RejectCertificateTransferOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s RejectCertificateTransferOutput) GoString() string { - return s.String() +// SetUseBase64 sets the UseBase64 field's value. +func (s *SqsAction) SetUseBase64(v bool) *SqsAction { + s.UseBase64 = &v + return s } -type RemoveThingFromThingGroupInput struct { +type StartOnDemandAuditTaskInput struct { _ struct{} `type:"structure"` - // The ARN of the thing to remove from the group. - ThingArn *string `locationName:"thingArn" type:"string"` - - // The group ARN. - ThingGroupArn *string `locationName:"thingGroupArn" type:"string"` - - // The group name. - ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` - - // The name of the thing to remove from the group. - ThingName *string `locationName:"thingName" min:"1" type:"string"` + // Which checks are performed during the audit. The checks you specify must + // be enabled for your account or an exception occurs. Use DescribeAccountAuditConfiguration + // to see the list of all checks including those that are enabled or UpdateAccountAuditConfiguration + // to select which checks are enabled. + // + // TargetCheckNames is a required field + TargetCheckNames []*string `locationName:"targetCheckNames" type:"list" required:"true"` } // String returns the string representation -func (s RemoveThingFromThingGroupInput) String() string { +func (s StartOnDemandAuditTaskInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RemoveThingFromThingGroupInput) GoString() string { +func (s StartOnDemandAuditTaskInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *RemoveThingFromThingGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RemoveThingFromThingGroupInput"} - if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) - } - if s.ThingName != nil && len(*s.ThingName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ThingName", 1)) +func (s *StartOnDemandAuditTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartOnDemandAuditTaskInput"} + if s.TargetCheckNames == nil { + invalidParams.Add(request.NewErrParamRequired("TargetCheckNames")) } if invalidParams.Len() > 0 { @@ -22983,84 +32823,65 @@ func (s *RemoveThingFromThingGroupInput) Validate() error { return nil } -// SetThingArn sets the ThingArn field's value. -func (s *RemoveThingFromThingGroupInput) SetThingArn(v string) *RemoveThingFromThingGroupInput { - s.ThingArn = &v - return s -} - -// SetThingGroupArn sets the ThingGroupArn field's value. -func (s *RemoveThingFromThingGroupInput) SetThingGroupArn(v string) *RemoveThingFromThingGroupInput { - s.ThingGroupArn = &v - return s -} - -// SetThingGroupName sets the ThingGroupName field's value. -func (s *RemoveThingFromThingGroupInput) SetThingGroupName(v string) *RemoveThingFromThingGroupInput { - s.ThingGroupName = &v - return s -} - -// SetThingName sets the ThingName field's value. 
-func (s *RemoveThingFromThingGroupInput) SetThingName(v string) *RemoveThingFromThingGroupInput { - s.ThingName = &v +// SetTargetCheckNames sets the TargetCheckNames field's value. +func (s *StartOnDemandAuditTaskInput) SetTargetCheckNames(v []*string) *StartOnDemandAuditTaskInput { + s.TargetCheckNames = v return s } -type RemoveThingFromThingGroupOutput struct { +type StartOnDemandAuditTaskOutput struct { _ struct{} `type:"structure"` + + // The ID of the on-demand audit you started. + TaskId *string `locationName:"taskId" min:"1" type:"string"` } // String returns the string representation -func (s RemoveThingFromThingGroupOutput) String() string { +func (s StartOnDemandAuditTaskOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RemoveThingFromThingGroupOutput) GoString() string { +func (s StartOnDemandAuditTaskOutput) GoString() string { return s.String() } -// The input for the ReplaceTopicRule operation. -type ReplaceTopicRuleInput struct { - _ struct{} `type:"structure" payload:"TopicRulePayload"` +// SetTaskId sets the TaskId field's value. +func (s *StartOnDemandAuditTaskOutput) SetTaskId(v string) *StartOnDemandAuditTaskOutput { + s.TaskId = &v + return s +} - // The name of the rule. - // - // RuleName is a required field - RuleName *string `location:"uri" locationName:"ruleName" min:"1" type:"string" required:"true"` +// Information required to start a signing job. +type StartSigningJobParameter struct { + _ struct{} `type:"structure"` - // The rule payload. - // - // TopicRulePayload is a required field - TopicRulePayload *TopicRulePayload `locationName:"topicRulePayload" type:"structure" required:"true"` + // The location to write the code-signed file. + Destination *Destination `locationName:"destination" type:"structure"` + + // The code-signing profile name. + SigningProfileName *string `locationName:"signingProfileName" type:"string"` + + // Describes the code-signing profile. + SigningProfileParameter *SigningProfileParameter `locationName:"signingProfileParameter" type:"structure"` } // String returns the string representation -func (s ReplaceTopicRuleInput) String() string { +func (s StartSigningJobParameter) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ReplaceTopicRuleInput) GoString() string { +func (s StartSigningJobParameter) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ReplaceTopicRuleInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ReplaceTopicRuleInput"} - if s.RuleName == nil { - invalidParams.Add(request.NewErrParamRequired("RuleName")) - } - if s.RuleName != nil && len(*s.RuleName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RuleName", 1)) - } - if s.TopicRulePayload == nil { - invalidParams.Add(request.NewErrParamRequired("TopicRulePayload")) - } - if s.TopicRulePayload != nil { - if err := s.TopicRulePayload.Validate(); err != nil { - invalidParams.AddNested("TopicRulePayload", err.(request.ErrInvalidParams)) +func (s *StartSigningJobParameter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartSigningJobParameter"} + if s.Destination != nil { + if err := s.Destination.Validate(); err != nil { + invalidParams.AddNested("Destination", err.(request.ErrInvalidParams)) } } @@ -23070,65 +32891,83 @@ func (s *ReplaceTopicRuleInput) Validate() error { return nil } -// SetRuleName sets the RuleName field's value. -func (s *ReplaceTopicRuleInput) SetRuleName(v string) *ReplaceTopicRuleInput { - s.RuleName = &v +// SetDestination sets the Destination field's value. +func (s *StartSigningJobParameter) SetDestination(v *Destination) *StartSigningJobParameter { + s.Destination = v return s } -// SetTopicRulePayload sets the TopicRulePayload field's value. -func (s *ReplaceTopicRuleInput) SetTopicRulePayload(v *TopicRulePayload) *ReplaceTopicRuleInput { - s.TopicRulePayload = v +// SetSigningProfileName sets the SigningProfileName field's value. +func (s *StartSigningJobParameter) SetSigningProfileName(v string) *StartSigningJobParameter { + s.SigningProfileName = &v return s } -type ReplaceTopicRuleOutput struct { - _ struct{} `type:"structure"` +// SetSigningProfileParameter sets the SigningProfileParameter field's value. +func (s *StartSigningJobParameter) SetSigningProfileParameter(v *SigningProfileParameter) *StartSigningJobParameter { + s.SigningProfileParameter = v + return s } -// String returns the string representation -func (s ReplaceTopicRuleOutput) String() string { - return awsutil.Prettify(s) -} +type StartThingRegistrationTaskInput struct { + _ struct{} `type:"structure"` -// GoString returns the string representation -func (s ReplaceTopicRuleOutput) GoString() string { - return s.String() -} + // The S3 bucket that contains the input file. + // + // InputFileBucket is a required field + InputFileBucket *string `locationName:"inputFileBucket" min:"3" type:"string" required:"true"` -// Describes an action to republish to another topic. -type RepublishAction struct { - _ struct{} `type:"structure"` + // The name of input file within the S3 bucket. This file contains a newline + // delimited JSON file. Each line contains the parameter values to provision + // one device (thing). + // + // InputFileKey is a required field + InputFileKey *string `locationName:"inputFileKey" min:"1" type:"string" required:"true"` - // The ARN of the IAM role that grants access. + // The IAM role ARN that grants permission the input file. // // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` - // The name of the MQTT topic. + // The provisioning template. 
// - // Topic is a required field - Topic *string `locationName:"topic" type:"string" required:"true"` + // TemplateBody is a required field + TemplateBody *string `locationName:"templateBody" type:"string" required:"true"` } // String returns the string representation -func (s RepublishAction) String() string { +func (s StartThingRegistrationTaskInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RepublishAction) GoString() string { +func (s StartThingRegistrationTaskInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *RepublishAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RepublishAction"} +func (s *StartThingRegistrationTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartThingRegistrationTaskInput"} + if s.InputFileBucket == nil { + invalidParams.Add(request.NewErrParamRequired("InputFileBucket")) + } + if s.InputFileBucket != nil && len(*s.InputFileBucket) < 3 { + invalidParams.Add(request.NewErrParamMinLen("InputFileBucket", 3)) + } + if s.InputFileKey == nil { + invalidParams.Add(request.NewErrParamRequired("InputFileKey")) + } + if s.InputFileKey != nil && len(*s.InputFileKey) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InputFileKey", 1)) + } if s.RoleArn == nil { invalidParams.Add(request.NewErrParamRequired("RoleArn")) } - if s.Topic == nil { - invalidParams.Add(request.NewErrParamRequired("Topic")) + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.TemplateBody == nil { + invalidParams.Add(request.NewErrParamRequired("TemplateBody")) } if invalidParams.Len() > 0 { @@ -23137,140 +32976,142 @@ func (s *RepublishAction) Validate() error { return nil } +// SetInputFileBucket sets the InputFileBucket field's value. +func (s *StartThingRegistrationTaskInput) SetInputFileBucket(v string) *StartThingRegistrationTaskInput { + s.InputFileBucket = &v + return s +} + +// SetInputFileKey sets the InputFileKey field's value. +func (s *StartThingRegistrationTaskInput) SetInputFileKey(v string) *StartThingRegistrationTaskInput { + s.InputFileKey = &v + return s +} + // SetRoleArn sets the RoleArn field's value. -func (s *RepublishAction) SetRoleArn(v string) *RepublishAction { +func (s *StartThingRegistrationTaskInput) SetRoleArn(v string) *StartThingRegistrationTaskInput { s.RoleArn = &v return s } -// SetTopic sets the Topic field's value. -func (s *RepublishAction) SetTopic(v string) *RepublishAction { - s.Topic = &v +// SetTemplateBody sets the TemplateBody field's value. +func (s *StartThingRegistrationTaskInput) SetTemplateBody(v string) *StartThingRegistrationTaskInput { + s.TemplateBody = &v return s } -// Role alias description. -type RoleAliasDescription struct { +type StartThingRegistrationTaskOutput struct { _ struct{} `type:"structure"` - // The UNIX timestamp of when the role alias was created. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` - - // The number of seconds for which the credential is valid. - CredentialDurationSeconds *int64 `locationName:"credentialDurationSeconds" min:"900" type:"integer"` - - // The UNIX timestamp of when the role alias was last modified. - LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp" timestampFormat:"unix"` - - // The role alias owner. 
- Owner *string `locationName:"owner" type:"string"` - - // The role alias. - RoleAlias *string `locationName:"roleAlias" min:"1" type:"string"` - - RoleAliasArn *string `locationName:"roleAliasArn" type:"string"` - - // The role ARN. - RoleArn *string `locationName:"roleArn" min:"20" type:"string"` + // The bulk thing provisioning task ID. + TaskId *string `locationName:"taskId" type:"string"` } // String returns the string representation -func (s RoleAliasDescription) String() string { +func (s StartThingRegistrationTaskOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RoleAliasDescription) GoString() string { +func (s StartThingRegistrationTaskOutput) GoString() string { return s.String() } -// SetCreationDate sets the CreationDate field's value. -func (s *RoleAliasDescription) SetCreationDate(v time.Time) *RoleAliasDescription { - s.CreationDate = &v +// SetTaskId sets the TaskId field's value. +func (s *StartThingRegistrationTaskOutput) SetTaskId(v string) *StartThingRegistrationTaskOutput { + s.TaskId = &v return s } -// SetCredentialDurationSeconds sets the CredentialDurationSeconds field's value. -func (s *RoleAliasDescription) SetCredentialDurationSeconds(v int64) *RoleAliasDescription { - s.CredentialDurationSeconds = &v - return s +// Starts execution of a Step Functions state machine. +type StepFunctionsAction struct { + _ struct{} `type:"structure"` + + // (Optional) A name will be given to the state machine execution consisting + // of this prefix followed by a UUID. Step Functions automatically creates a + // unique name for each state machine execution if one is not provided. + ExecutionNamePrefix *string `locationName:"executionNamePrefix" type:"string"` + + // The ARN of the role that grants IoT permission to start execution of a state + // machine ("Action":"states:StartExecution"). + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + + // The name of the Step Functions state machine whose execution will be started. + // + // StateMachineName is a required field + StateMachineName *string `locationName:"stateMachineName" type:"string" required:"true"` } -// SetLastModifiedDate sets the LastModifiedDate field's value. -func (s *RoleAliasDescription) SetLastModifiedDate(v time.Time) *RoleAliasDescription { - s.LastModifiedDate = &v - return s +// String returns the string representation +func (s StepFunctionsAction) String() string { + return awsutil.Prettify(s) } -// SetOwner sets the Owner field's value. -func (s *RoleAliasDescription) SetOwner(v string) *RoleAliasDescription { - s.Owner = &v - return s +// GoString returns the string representation +func (s StepFunctionsAction) GoString() string { + return s.String() } -// SetRoleAlias sets the RoleAlias field's value. -func (s *RoleAliasDescription) SetRoleAlias(v string) *RoleAliasDescription { - s.RoleAlias = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *StepFunctionsAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StepFunctionsAction"} + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.StateMachineName == nil { + invalidParams.Add(request.NewErrParamRequired("StateMachineName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetRoleAliasArn sets the RoleAliasArn field's value. 
-func (s *RoleAliasDescription) SetRoleAliasArn(v string) *RoleAliasDescription { - s.RoleAliasArn = &v +// SetExecutionNamePrefix sets the ExecutionNamePrefix field's value. +func (s *StepFunctionsAction) SetExecutionNamePrefix(v string) *StepFunctionsAction { + s.ExecutionNamePrefix = &v return s } // SetRoleArn sets the RoleArn field's value. -func (s *RoleAliasDescription) SetRoleArn(v string) *RoleAliasDescription { +func (s *StepFunctionsAction) SetRoleArn(v string) *StepFunctionsAction { s.RoleArn = &v return s } -// Describes an action to write data to an Amazon S3 bucket. -type S3Action struct { - _ struct{} `type:"structure"` - - // The Amazon S3 bucket. - // - // BucketName is a required field - BucketName *string `locationName:"bucketName" type:"string" required:"true"` - - // The Amazon S3 canned ACL that controls access to the object identified by - // the object key. For more information, see S3 canned ACLs (http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl). - CannedAcl *string `locationName:"cannedAcl" type:"string" enum:"CannedAccessControlList"` +// SetStateMachineName sets the StateMachineName field's value. +func (s *StepFunctionsAction) SetStateMachineName(v string) *StepFunctionsAction { + s.StateMachineName = &v + return s +} - // The object key. - // - // Key is a required field - Key *string `locationName:"key" type:"string" required:"true"` +type StopThingRegistrationTaskInput struct { + _ struct{} `type:"structure"` - // The ARN of the IAM role that grants access. + // The bulk thing provisioning task ID. // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // TaskId is a required field + TaskId *string `location:"uri" locationName:"taskId" type:"string" required:"true"` } // String returns the string representation -func (s S3Action) String() string { +func (s StopThingRegistrationTaskInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s S3Action) GoString() string { +func (s StopThingRegistrationTaskInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *S3Action) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "S3Action"} - if s.BucketName == nil { - invalidParams.Add(request.NewErrParamRequired("BucketName")) - } - if s.Key == nil { - invalidParams.Add(request.NewErrParamRequired("Key")) - } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) +func (s *StopThingRegistrationTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopThingRegistrationTaskInput"} + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) } if invalidParams.Len() > 0 { @@ -23279,72 +33120,52 @@ func (s *S3Action) Validate() error { return nil } -// SetBucketName sets the BucketName field's value. -func (s *S3Action) SetBucketName(v string) *S3Action { - s.BucketName = &v +// SetTaskId sets the TaskId field's value. +func (s *StopThingRegistrationTaskInput) SetTaskId(v string) *StopThingRegistrationTaskInput { + s.TaskId = &v return s } -// SetCannedAcl sets the CannedAcl field's value. -func (s *S3Action) SetCannedAcl(v string) *S3Action { - s.CannedAcl = &v - return s +type StopThingRegistrationTaskOutput struct { + _ struct{} `type:"structure"` } -// SetKey sets the Key field's value. 
-func (s *S3Action) SetKey(v string) *S3Action { - s.Key = &v - return s +// String returns the string representation +func (s StopThingRegistrationTaskOutput) String() string { + return awsutil.Prettify(s) } -// SetRoleArn sets the RoleArn field's value. -func (s *S3Action) SetRoleArn(v string) *S3Action { - s.RoleArn = &v - return s +// GoString returns the string representation +func (s StopThingRegistrationTaskOutput) GoString() string { + return s.String() } -// The location in S3 the contains the files to stream. -type S3Location struct { +// Describes a group of files that can be streamed. +type Stream struct { _ struct{} `type:"structure"` - // The S3 bucket that contains the file to stream. - // - // Bucket is a required field - Bucket *string `locationName:"bucket" min:"1" type:"string" required:"true"` - - // The name of the file within the S3 bucket to stream. - // - // Key is a required field - Key *string `locationName:"key" min:"1" type:"string" required:"true"` + // The ID of a file associated with a stream. + FileId *int64 `locationName:"fileId" type:"integer"` - // The file version. - Version *string `locationName:"version" type:"string"` + // The stream ID. + StreamId *string `locationName:"streamId" min:"1" type:"string"` } // String returns the string representation -func (s S3Location) String() string { +func (s Stream) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s S3Location) GoString() string { +func (s Stream) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *S3Location) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "S3Location"} - if s.Bucket == nil { - invalidParams.Add(request.NewErrParamRequired("Bucket")) - } - if s.Bucket != nil && len(*s.Bucket) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Bucket", 1)) - } - if s.Key == nil { - invalidParams.Add(request.NewErrParamRequired("Key")) - } - if s.Key != nil && len(*s.Key) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Key", 1)) +func (s *Stream) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Stream"} + if s.StreamId != nil && len(*s.StreamId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamId", 1)) } if invalidParams.Len() > 0 { @@ -23353,63 +33174,46 @@ func (s *S3Location) Validate() error { return nil } -// SetBucket sets the Bucket field's value. -func (s *S3Location) SetBucket(v string) *S3Location { - s.Bucket = &v - return s -} - -// SetKey sets the Key field's value. -func (s *S3Location) SetKey(v string) *S3Location { - s.Key = &v +// SetFileId sets the FileId field's value. +func (s *Stream) SetFileId(v int64) *Stream { + s.FileId = &v return s } -// SetVersion sets the Version field's value. -func (s *S3Location) SetVersion(v string) *S3Location { - s.Version = &v +// SetStreamId sets the StreamId field's value. +func (s *Stream) SetStreamId(v string) *Stream { + s.StreamId = &v return s } -// Describes an action to write a message to a Salesforce IoT Cloud Input Stream. -type SalesforceAction struct { +// Represents a file to stream. +type StreamFile struct { _ struct{} `type:"structure"` - // The token used to authenticate access to the Salesforce IoT Cloud Input Stream. - // The token is available from the Salesforce IoT Cloud platform after creation - // of the Input Stream. 
- // - // Token is a required field - Token *string `locationName:"token" min:"40" type:"string" required:"true"` + // The file ID. + FileId *int64 `locationName:"fileId" type:"integer"` - // The URL exposed by the Salesforce IoT Cloud Input Stream. The URL is available - // from the Salesforce IoT Cloud platform after creation of the Input Stream. - // - // Url is a required field - Url *string `locationName:"url" type:"string" required:"true"` + // The location of the file in S3. + S3Location *S3Location `locationName:"s3Location" type:"structure"` } // String returns the string representation -func (s SalesforceAction) String() string { +func (s StreamFile) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SalesforceAction) GoString() string { +func (s StreamFile) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *SalesforceAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SalesforceAction"} - if s.Token == nil { - invalidParams.Add(request.NewErrParamRequired("Token")) - } - if s.Token != nil && len(*s.Token) < 40 { - invalidParams.Add(request.NewErrParamMinLen("Token", 40)) - } - if s.Url == nil { - invalidParams.Add(request.NewErrParamRequired("Url")) +func (s *StreamFile) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StreamFile"} + if s.S3Location != nil { + if err := s.S3Location.Validate(); err != nil { + invalidParams.AddNested("S3Location", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -23418,162 +33222,221 @@ func (s *SalesforceAction) Validate() error { return nil } -// SetToken sets the Token field's value. -func (s *SalesforceAction) SetToken(v string) *SalesforceAction { - s.Token = &v +// SetFileId sets the FileId field's value. +func (s *StreamFile) SetFileId(v int64) *StreamFile { + s.FileId = &v return s } -// SetUrl sets the Url field's value. -func (s *SalesforceAction) SetUrl(v string) *SalesforceAction { - s.Url = &v +// SetS3Location sets the S3Location field's value. +func (s *StreamFile) SetS3Location(v *S3Location) *StreamFile { + s.S3Location = v return s } -type SearchIndexInput struct { +// Information about a stream. +type StreamInfo struct { _ struct{} `type:"structure"` - // The search index name. - IndexName *string `locationName:"indexName" min:"1" type:"string"` + // The date when the stream was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The maximum number of results to return at one time. - MaxResults *int64 `locationName:"maxResults" min:"1" type:"integer"` + // The description of the stream. + Description *string `locationName:"description" type:"string"` - // The token used to get the next set of results, or null if there are no additional - // results. - NextToken *string `locationName:"nextToken" type:"string"` + // The files to stream. + Files []*StreamFile `locationName:"files" min:"1" type:"list"` - // The search query string. - // - // QueryString is a required field - QueryString *string `locationName:"queryString" min:"1" type:"string" required:"true"` + // The date when the stream was last updated. + LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp"` - // The query version. - QueryVersion *string `locationName:"queryVersion" type:"string"` + // An IAM role AWS IoT assumes to access your S3 files. 
+ RoleArn *string `locationName:"roleArn" min:"20" type:"string"` + + // The stream ARN. + StreamArn *string `locationName:"streamArn" type:"string"` + + // The stream ID. + StreamId *string `locationName:"streamId" min:"1" type:"string"` + + // The stream version. + StreamVersion *int64 `locationName:"streamVersion" type:"integer"` } // String returns the string representation -func (s SearchIndexInput) String() string { +func (s StreamInfo) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SearchIndexInput) GoString() string { +func (s StreamInfo) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *SearchIndexInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SearchIndexInput"} - if s.IndexName != nil && len(*s.IndexName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) - } - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } - if s.QueryString == nil { - invalidParams.Add(request.NewErrParamRequired("QueryString")) - } - if s.QueryString != nil && len(*s.QueryString) < 1 { - invalidParams.Add(request.NewErrParamMinLen("QueryString", 1)) - } +// SetCreatedAt sets the CreatedAt field's value. +func (s *StreamInfo) SetCreatedAt(v time.Time) *StreamInfo { + s.CreatedAt = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetDescription sets the Description field's value. +func (s *StreamInfo) SetDescription(v string) *StreamInfo { + s.Description = &v + return s } -// SetIndexName sets the IndexName field's value. -func (s *SearchIndexInput) SetIndexName(v string) *SearchIndexInput { - s.IndexName = &v +// SetFiles sets the Files field's value. +func (s *StreamInfo) SetFiles(v []*StreamFile) *StreamInfo { + s.Files = v return s } -// SetMaxResults sets the MaxResults field's value. -func (s *SearchIndexInput) SetMaxResults(v int64) *SearchIndexInput { - s.MaxResults = &v +// SetLastUpdatedAt sets the LastUpdatedAt field's value. +func (s *StreamInfo) SetLastUpdatedAt(v time.Time) *StreamInfo { + s.LastUpdatedAt = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *SearchIndexInput) SetNextToken(v string) *SearchIndexInput { - s.NextToken = &v +// SetRoleArn sets the RoleArn field's value. +func (s *StreamInfo) SetRoleArn(v string) *StreamInfo { + s.RoleArn = &v return s } -// SetQueryString sets the QueryString field's value. -func (s *SearchIndexInput) SetQueryString(v string) *SearchIndexInput { - s.QueryString = &v +// SetStreamArn sets the StreamArn field's value. +func (s *StreamInfo) SetStreamArn(v string) *StreamInfo { + s.StreamArn = &v return s } -// SetQueryVersion sets the QueryVersion field's value. -func (s *SearchIndexInput) SetQueryVersion(v string) *SearchIndexInput { - s.QueryVersion = &v +// SetStreamId sets the StreamId field's value. +func (s *StreamInfo) SetStreamId(v string) *StreamInfo { + s.StreamId = &v return s } -type SearchIndexOutput struct { +// SetStreamVersion sets the StreamVersion field's value. +func (s *StreamInfo) SetStreamVersion(v int64) *StreamInfo { + s.StreamVersion = &v + return s +} + +// A summary of a stream. +type StreamSummary struct { _ struct{} `type:"structure"` - // The token used to get the next set of results, or null if there are no additional - // results. 
- NextToken *string `locationName:"nextToken" type:"string"` + // A description of the stream. + Description *string `locationName:"description" type:"string"` - // The things that match the search query. - Things []*ThingDocument `locationName:"things" type:"list"` + // The stream ARN. + StreamArn *string `locationName:"streamArn" type:"string"` + + // The stream ID. + StreamId *string `locationName:"streamId" min:"1" type:"string"` + + // The stream version. + StreamVersion *int64 `locationName:"streamVersion" type:"integer"` } // String returns the string representation -func (s SearchIndexOutput) String() string { +func (s StreamSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SearchIndexOutput) GoString() string { +func (s StreamSummary) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *SearchIndexOutput) SetNextToken(v string) *SearchIndexOutput { - s.NextToken = &v +// SetDescription sets the Description field's value. +func (s *StreamSummary) SetDescription(v string) *StreamSummary { + s.Description = &v return s } -// SetThings sets the Things field's value. -func (s *SearchIndexOutput) SetThings(v []*ThingDocument) *SearchIndexOutput { - s.Things = v +// SetStreamArn sets the StreamArn field's value. +func (s *StreamSummary) SetStreamArn(v string) *StreamSummary { + s.StreamArn = &v return s } -type SetDefaultAuthorizerInput struct { +// SetStreamId sets the StreamId field's value. +func (s *StreamSummary) SetStreamId(v string) *StreamSummary { + s.StreamId = &v + return s +} + +// SetStreamVersion sets the StreamVersion field's value. +func (s *StreamSummary) SetStreamVersion(v int64) *StreamSummary { + s.StreamVersion = &v + return s +} + +// A set of key/value pairs that are used to manage the resource. +type Tag struct { _ struct{} `type:"structure"` - // The authorizer name. + // The tag's key. + Key *string `type:"string"` + + // The tag's value. + Value *string `type:"string"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type TagResourceInput struct { + _ struct{} `type:"structure"` + + // The ARN of the resource. // - // AuthorizerName is a required field - AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string" required:"true"` + // ResourceArn is a required field + ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` + + // The new or modified tags for the resource. + // + // Tags is a required field + Tags []*Tag `locationName:"tags" type:"list" required:"true"` } // String returns the string representation -func (s SetDefaultAuthorizerInput) String() string { +func (s TagResourceInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SetDefaultAuthorizerInput) GoString() string { +func (s TagResourceInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *SetDefaultAuthorizerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SetDefaultAuthorizerInput"} - if s.AuthorizerName == nil { - invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) +func (s *TagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) } - if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) } if invalidParams.Len() > 0 { @@ -23582,144 +33445,155 @@ func (s *SetDefaultAuthorizerInput) Validate() error { return nil } -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *SetDefaultAuthorizerInput) SetAuthorizerName(v string) *SetDefaultAuthorizerInput { - s.AuthorizerName = &v +// SetResourceArn sets the ResourceArn field's value. +func (s *TagResourceInput) SetResourceArn(v string) *TagResourceInput { + s.ResourceArn = &v return s } -type SetDefaultAuthorizerOutput struct { - _ struct{} `type:"structure"` - - // The authorizer ARN. - AuthorizerArn *string `locationName:"authorizerArn" type:"string"` +// SetTags sets the Tags field's value. +func (s *TagResourceInput) SetTags(v []*Tag) *TagResourceInput { + s.Tags = v + return s +} - // The authorizer name. - AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` +type TagResourceOutput struct { + _ struct{} `type:"structure"` } // String returns the string representation -func (s SetDefaultAuthorizerOutput) String() string { +func (s TagResourceOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SetDefaultAuthorizerOutput) GoString() string { +func (s TagResourceOutput) GoString() string { return s.String() } -// SetAuthorizerArn sets the AuthorizerArn field's value. -func (s *SetDefaultAuthorizerOutput) SetAuthorizerArn(v string) *SetDefaultAuthorizerOutput { - s.AuthorizerArn = &v - return s -} +// Statistics for the checks performed during the audit. +type TaskStatistics struct { + _ struct{} `type:"structure"` -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *SetDefaultAuthorizerOutput) SetAuthorizerName(v string) *SetDefaultAuthorizerOutput { - s.AuthorizerName = &v - return s -} + // The number of checks that did not run because the audit was canceled. + CanceledChecks *int64 `locationName:"canceledChecks" type:"integer"` -// The input for the SetDefaultPolicyVersion operation. -type SetDefaultPolicyVersionInput struct { - _ struct{} `type:"structure"` + // The number of checks that found compliant resources. + CompliantChecks *int64 `locationName:"compliantChecks" type:"integer"` - // The policy name. - // - // PolicyName is a required field - PolicyName *string `location:"uri" locationName:"policyName" min:"1" type:"string" required:"true"` + // The number of checks + FailedChecks *int64 `locationName:"failedChecks" type:"integer"` + + // The number of checks in progress. + InProgressChecks *int64 `locationName:"inProgressChecks" type:"integer"` + + // The number of checks that found non-compliant resources. + NonCompliantChecks *int64 `locationName:"nonCompliantChecks" type:"integer"` - // The policy version ID. 
- // - // PolicyVersionId is a required field - PolicyVersionId *string `location:"uri" locationName:"policyVersionId" type:"string" required:"true"` + // The number of checks in this audit. + TotalChecks *int64 `locationName:"totalChecks" type:"integer"` + + // The number of checks waiting for data collection. + WaitingForDataCollectionChecks *int64 `locationName:"waitingForDataCollectionChecks" type:"integer"` } // String returns the string representation -func (s SetDefaultPolicyVersionInput) String() string { +func (s TaskStatistics) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SetDefaultPolicyVersionInput) GoString() string { +func (s TaskStatistics) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *SetDefaultPolicyVersionInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SetDefaultPolicyVersionInput"} - if s.PolicyName == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyName")) - } - if s.PolicyName != nil && len(*s.PolicyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PolicyName", 1)) - } - if s.PolicyVersionId == nil { - invalidParams.Add(request.NewErrParamRequired("PolicyVersionId")) - } +// SetCanceledChecks sets the CanceledChecks field's value. +func (s *TaskStatistics) SetCanceledChecks(v int64) *TaskStatistics { + s.CanceledChecks = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCompliantChecks sets the CompliantChecks field's value. +func (s *TaskStatistics) SetCompliantChecks(v int64) *TaskStatistics { + s.CompliantChecks = &v + return s } -// SetPolicyName sets the PolicyName field's value. -func (s *SetDefaultPolicyVersionInput) SetPolicyName(v string) *SetDefaultPolicyVersionInput { - s.PolicyName = &v +// SetFailedChecks sets the FailedChecks field's value. +func (s *TaskStatistics) SetFailedChecks(v int64) *TaskStatistics { + s.FailedChecks = &v return s } -// SetPolicyVersionId sets the PolicyVersionId field's value. -func (s *SetDefaultPolicyVersionInput) SetPolicyVersionId(v string) *SetDefaultPolicyVersionInput { - s.PolicyVersionId = &v +// SetInProgressChecks sets the InProgressChecks field's value. +func (s *TaskStatistics) SetInProgressChecks(v int64) *TaskStatistics { + s.InProgressChecks = &v return s } -type SetDefaultPolicyVersionOutput struct { - _ struct{} `type:"structure"` +// SetNonCompliantChecks sets the NonCompliantChecks field's value. +func (s *TaskStatistics) SetNonCompliantChecks(v int64) *TaskStatistics { + s.NonCompliantChecks = &v + return s } -// String returns the string representation -func (s SetDefaultPolicyVersionOutput) String() string { - return awsutil.Prettify(s) +// SetTotalChecks sets the TotalChecks field's value. +func (s *TaskStatistics) SetTotalChecks(v int64) *TaskStatistics { + s.TotalChecks = &v + return s } -// GoString returns the string representation -func (s SetDefaultPolicyVersionOutput) GoString() string { - return s.String() +// SetWaitingForDataCollectionChecks sets the WaitingForDataCollectionChecks field's value. +func (s *TaskStatistics) SetWaitingForDataCollectionChecks(v int64) *TaskStatistics { + s.WaitingForDataCollectionChecks = &v + return s } -// The input for the SetLoggingOptions operation. 
-type SetLoggingOptionsInput struct { - _ struct{} `type:"structure" payload:"LoggingOptionsPayload"` +type TestAuthorizationInput struct { + _ struct{} `type:"structure"` - // The logging options payload. + // A list of authorization info objects. Simulating authorization will create + // a response for each authInfo object in the list. // - // LoggingOptionsPayload is a required field - LoggingOptionsPayload *LoggingOptionsPayload `locationName:"loggingOptionsPayload" type:"structure" required:"true"` + // AuthInfos is a required field + AuthInfos []*AuthInfo `locationName:"authInfos" min:"1" type:"list" required:"true"` + + // The MQTT client ID. + ClientId *string `location:"querystring" locationName:"clientId" type:"string"` + + // The Cognito identity pool ID. + CognitoIdentityPoolId *string `locationName:"cognitoIdentityPoolId" type:"string"` + + // When testing custom authorization, the policies specified here are treated + // as if they are attached to the principal being authorized. + PolicyNamesToAdd []*string `locationName:"policyNamesToAdd" type:"list"` + + // When testing custom authorization, the policies specified here are treated + // as if they are not attached to the principal being authorized. + PolicyNamesToSkip []*string `locationName:"policyNamesToSkip" type:"list"` + + // The principal. + Principal *string `locationName:"principal" type:"string"` } // String returns the string representation -func (s SetLoggingOptionsInput) String() string { +func (s TestAuthorizationInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SetLoggingOptionsInput) GoString() string { +func (s TestAuthorizationInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *SetLoggingOptionsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SetLoggingOptionsInput"} - if s.LoggingOptionsPayload == nil { - invalidParams.Add(request.NewErrParamRequired("LoggingOptionsPayload")) +func (s *TestAuthorizationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TestAuthorizationInput"} + if s.AuthInfos == nil { + invalidParams.Add(request.NewErrParamRequired("AuthInfos")) } - if s.LoggingOptionsPayload != nil { - if err := s.LoggingOptionsPayload.Validate(); err != nil { - invalidParams.AddNested("LoggingOptionsPayload", err.(request.ErrInvalidParams)) - } + if s.AuthInfos != nil && len(s.AuthInfos) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthInfos", 1)) } if invalidParams.Len() > 0 { @@ -23728,63 +33602,115 @@ func (s *SetLoggingOptionsInput) Validate() error { return nil } -// SetLoggingOptionsPayload sets the LoggingOptionsPayload field's value. -func (s *SetLoggingOptionsInput) SetLoggingOptionsPayload(v *LoggingOptionsPayload) *SetLoggingOptionsInput { - s.LoggingOptionsPayload = v +// SetAuthInfos sets the AuthInfos field's value. +func (s *TestAuthorizationInput) SetAuthInfos(v []*AuthInfo) *TestAuthorizationInput { + s.AuthInfos = v return s } -type SetLoggingOptionsOutput struct { +// SetClientId sets the ClientId field's value. +func (s *TestAuthorizationInput) SetClientId(v string) *TestAuthorizationInput { + s.ClientId = &v + return s +} + +// SetCognitoIdentityPoolId sets the CognitoIdentityPoolId field's value. 
+func (s *TestAuthorizationInput) SetCognitoIdentityPoolId(v string) *TestAuthorizationInput { + s.CognitoIdentityPoolId = &v + return s +} + +// SetPolicyNamesToAdd sets the PolicyNamesToAdd field's value. +func (s *TestAuthorizationInput) SetPolicyNamesToAdd(v []*string) *TestAuthorizationInput { + s.PolicyNamesToAdd = v + return s +} + +// SetPolicyNamesToSkip sets the PolicyNamesToSkip field's value. +func (s *TestAuthorizationInput) SetPolicyNamesToSkip(v []*string) *TestAuthorizationInput { + s.PolicyNamesToSkip = v + return s +} + +// SetPrincipal sets the Principal field's value. +func (s *TestAuthorizationInput) SetPrincipal(v string) *TestAuthorizationInput { + s.Principal = &v + return s +} + +type TestAuthorizationOutput struct { _ struct{} `type:"structure"` + + // The authentication results. + AuthResults []*AuthResult `locationName:"authResults" type:"list"` } // String returns the string representation -func (s SetLoggingOptionsOutput) String() string { +func (s TestAuthorizationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SetLoggingOptionsOutput) GoString() string { +func (s TestAuthorizationOutput) GoString() string { return s.String() } -type SetV2LoggingLevelInput struct { +// SetAuthResults sets the AuthResults field's value. +func (s *TestAuthorizationOutput) SetAuthResults(v []*AuthResult) *TestAuthorizationOutput { + s.AuthResults = v + return s +} + +type TestInvokeAuthorizerInput struct { _ struct{} `type:"structure"` - // The log level. + // The custom authorizer name. // - // LogLevel is a required field - LogLevel *string `locationName:"logLevel" type:"string" required:"true" enum:"LogLevel"` + // AuthorizerName is a required field + AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` - // The log target. + // The token returned by your custom authentication service. // - // LogTarget is a required field - LogTarget *LogTarget `locationName:"logTarget" type:"structure" required:"true"` + // Token is a required field + Token *string `locationName:"token" min:"1" type:"string" required:"true"` + + // The signature made with the token and your custom authentication service's + // private key. + // + // TokenSignature is a required field + TokenSignature *string `locationName:"tokenSignature" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s SetV2LoggingLevelInput) String() string { +func (s TestInvokeAuthorizerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SetV2LoggingLevelInput) GoString() string { +func (s TestInvokeAuthorizerInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *SetV2LoggingLevelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SetV2LoggingLevelInput"} - if s.LogLevel == nil { - invalidParams.Add(request.NewErrParamRequired("LogLevel")) +func (s *TestInvokeAuthorizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TestInvokeAuthorizerInput"} + if s.AuthorizerName == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) } - if s.LogTarget == nil { - invalidParams.Add(request.NewErrParamRequired("LogTarget")) + if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) } - if s.LogTarget != nil { - if err := s.LogTarget.Validate(); err != nil { - invalidParams.AddNested("LogTarget", err.(request.ErrInvalidParams)) - } + if s.Token == nil { + invalidParams.Add(request.NewErrParamRequired("Token")) + } + if s.Token != nil && len(*s.Token) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Token", 1)) + } + if s.TokenSignature == nil { + invalidParams.Add(request.NewErrParamRequired("TokenSignature")) + } + if s.TokenSignature != nil && len(*s.TokenSignature) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TokenSignature", 1)) } if invalidParams.Len() > 0 { @@ -23793,408 +33719,343 @@ func (s *SetV2LoggingLevelInput) Validate() error { return nil } -// SetLogLevel sets the LogLevel field's value. -func (s *SetV2LoggingLevelInput) SetLogLevel(v string) *SetV2LoggingLevelInput { - s.LogLevel = &v +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *TestInvokeAuthorizerInput) SetAuthorizerName(v string) *TestInvokeAuthorizerInput { + s.AuthorizerName = &v return s } -// SetLogTarget sets the LogTarget field's value. -func (s *SetV2LoggingLevelInput) SetLogTarget(v *LogTarget) *SetV2LoggingLevelInput { - s.LogTarget = v +// SetToken sets the Token field's value. +func (s *TestInvokeAuthorizerInput) SetToken(v string) *TestInvokeAuthorizerInput { + s.Token = &v return s } -type SetV2LoggingLevelOutput struct { - _ struct{} `type:"structure"` +// SetTokenSignature sets the TokenSignature field's value. +func (s *TestInvokeAuthorizerInput) SetTokenSignature(v string) *TestInvokeAuthorizerInput { + s.TokenSignature = &v + return s } -// String returns the string representation -func (s SetV2LoggingLevelOutput) String() string { - return awsutil.Prettify(s) -} +type TestInvokeAuthorizerOutput struct { + _ struct{} `type:"structure"` -// GoString returns the string representation -func (s SetV2LoggingLevelOutput) GoString() string { - return s.String() -} + // The number of seconds after which the connection is terminated. + DisconnectAfterInSeconds *int64 `locationName:"disconnectAfterInSeconds" type:"integer"` -type SetV2LoggingOptionsInput struct { - _ struct{} `type:"structure"` + // True if the token is authenticated, otherwise false. + IsAuthenticated *bool `locationName:"isAuthenticated" type:"boolean"` - // The default logging level. - DefaultLogLevel *string `locationName:"defaultLogLevel" type:"string" enum:"LogLevel"` + // IAM policy documents. + PolicyDocuments []*string `locationName:"policyDocuments" type:"list"` - // Set to true to disable all logs, otherwise set to false. - DisableAllLogs *bool `locationName:"disableAllLogs" type:"boolean"` + // The principal ID. + PrincipalId *string `locationName:"principalId" min:"1" type:"string"` - // The role ARN that allows IoT to write to Cloudwatch logs. 
- RoleArn *string `locationName:"roleArn" type:"string"` + // The number of seconds after which the temporary credentials are refreshed. + RefreshAfterInSeconds *int64 `locationName:"refreshAfterInSeconds" type:"integer"` } // String returns the string representation -func (s SetV2LoggingOptionsInput) String() string { +func (s TestInvokeAuthorizerOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SetV2LoggingOptionsInput) GoString() string { +func (s TestInvokeAuthorizerOutput) GoString() string { return s.String() } -// SetDefaultLogLevel sets the DefaultLogLevel field's value. -func (s *SetV2LoggingOptionsInput) SetDefaultLogLevel(v string) *SetV2LoggingOptionsInput { - s.DefaultLogLevel = &v +// SetDisconnectAfterInSeconds sets the DisconnectAfterInSeconds field's value. +func (s *TestInvokeAuthorizerOutput) SetDisconnectAfterInSeconds(v int64) *TestInvokeAuthorizerOutput { + s.DisconnectAfterInSeconds = &v + return s +} + +// SetIsAuthenticated sets the IsAuthenticated field's value. +func (s *TestInvokeAuthorizerOutput) SetIsAuthenticated(v bool) *TestInvokeAuthorizerOutput { + s.IsAuthenticated = &v + return s +} + +// SetPolicyDocuments sets the PolicyDocuments field's value. +func (s *TestInvokeAuthorizerOutput) SetPolicyDocuments(v []*string) *TestInvokeAuthorizerOutput { + s.PolicyDocuments = v return s } -// SetDisableAllLogs sets the DisableAllLogs field's value. -func (s *SetV2LoggingOptionsInput) SetDisableAllLogs(v bool) *SetV2LoggingOptionsInput { - s.DisableAllLogs = &v +// SetPrincipalId sets the PrincipalId field's value. +func (s *TestInvokeAuthorizerOutput) SetPrincipalId(v string) *TestInvokeAuthorizerOutput { + s.PrincipalId = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *SetV2LoggingOptionsInput) SetRoleArn(v string) *SetV2LoggingOptionsInput { - s.RoleArn = &v +// SetRefreshAfterInSeconds sets the RefreshAfterInSeconds field's value. +func (s *TestInvokeAuthorizerOutput) SetRefreshAfterInSeconds(v int64) *TestInvokeAuthorizerOutput { + s.RefreshAfterInSeconds = &v return s } -type SetV2LoggingOptionsOutput struct { +// The properties of the thing, including thing name, thing type name, and a +// list of thing attributes. +type ThingAttribute struct { _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s SetV2LoggingOptionsOutput) String() string { - return awsutil.Prettify(s) -} -// GoString returns the string representation -func (s SetV2LoggingOptionsOutput) GoString() string { - return s.String() -} + // A list of thing attributes which are name-value pairs. + Attributes map[string]*string `locationName:"attributes" type:"map"` -// Describes an action to publish to an Amazon SNS topic. -type SnsAction struct { - _ struct{} `type:"structure"` + // The thing ARN. + ThingArn *string `locationName:"thingArn" type:"string"` - // The message format of the message to publish. Optional. Accepted values are - // "JSON" and "RAW". The default value of the attribute is "RAW". SNS uses this - // setting to determine if the payload should be parsed and relevant platform-specific - // bits of the payload should be extracted. To read more about SNS message formats, - // see http://docs.aws.amazon.com/sns/latest/dg/json-formats.html (http://docs.aws.amazon.com/sns/latest/dg/json-formats.html) - // refer to their official documentation. - MessageFormat *string `locationName:"messageFormat" type:"string" enum:"MessageFormat"` + // The name of the thing. 
+ ThingName *string `locationName:"thingName" min:"1" type:"string"` - // The ARN of the IAM role that grants access. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // The name of the thing type, if the thing has been associated with a type. + ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` - // The ARN of the SNS topic. - // - // TargetArn is a required field - TargetArn *string `locationName:"targetArn" type:"string" required:"true"` + // The version of the thing record in the registry. + Version *int64 `locationName:"version" type:"long"` } // String returns the string representation -func (s SnsAction) String() string { +func (s ThingAttribute) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SnsAction) GoString() string { +func (s ThingAttribute) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *SnsAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SnsAction"} - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) - } - if s.TargetArn == nil { - invalidParams.Add(request.NewErrParamRequired("TargetArn")) - } +// SetAttributes sets the Attributes field's value. +func (s *ThingAttribute) SetAttributes(v map[string]*string) *ThingAttribute { + s.Attributes = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetThingArn sets the ThingArn field's value. +func (s *ThingAttribute) SetThingArn(v string) *ThingAttribute { + s.ThingArn = &v + return s } -// SetMessageFormat sets the MessageFormat field's value. -func (s *SnsAction) SetMessageFormat(v string) *SnsAction { - s.MessageFormat = &v +// SetThingName sets the ThingName field's value. +func (s *ThingAttribute) SetThingName(v string) *ThingAttribute { + s.ThingName = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *SnsAction) SetRoleArn(v string) *SnsAction { - s.RoleArn = &v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *ThingAttribute) SetThingTypeName(v string) *ThingAttribute { + s.ThingTypeName = &v return s } -// SetTargetArn sets the TargetArn field's value. -func (s *SnsAction) SetTargetArn(v string) *SnsAction { - s.TargetArn = &v +// SetVersion sets the Version field's value. +func (s *ThingAttribute) SetVersion(v int64) *ThingAttribute { + s.Version = &v return s } -// Describes an action to publish data to an Amazon SQS queue. -type SqsAction struct { +// The connectivity status of the thing. +type ThingConnectivity struct { _ struct{} `type:"structure"` - // The URL of the Amazon SQS queue. - // - // QueueUrl is a required field - QueueUrl *string `locationName:"queueUrl" type:"string" required:"true"` - - // The ARN of the IAM role that grants access. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" type:"string" required:"true"` + // True if the thing is connected to the AWS IoT service, false if it is not + // connected. + Connected *bool `locationName:"connected" type:"boolean"` - // Specifies whether to use Base64 encoding. - UseBase64 *bool `locationName:"useBase64" type:"boolean"` + // The epoch time (in milliseconds) when the thing last connected or disconnected. + // Note that if the thing has been disconnected for more than a few weeks, the + // time value can be missing. 
+ Timestamp *int64 `locationName:"timestamp" type:"long"` } // String returns the string representation -func (s SqsAction) String() string { +func (s ThingConnectivity) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SqsAction) GoString() string { +func (s ThingConnectivity) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *SqsAction) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "SqsAction"} - if s.QueueUrl == nil { - invalidParams.Add(request.NewErrParamRequired("QueueUrl")) - } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetQueueUrl sets the QueueUrl field's value. -func (s *SqsAction) SetQueueUrl(v string) *SqsAction { - s.QueueUrl = &v - return s -} - -// SetRoleArn sets the RoleArn field's value. -func (s *SqsAction) SetRoleArn(v string) *SqsAction { - s.RoleArn = &v +// SetConnected sets the Connected field's value. +func (s *ThingConnectivity) SetConnected(v bool) *ThingConnectivity { + s.Connected = &v return s } -// SetUseBase64 sets the UseBase64 field's value. -func (s *SqsAction) SetUseBase64(v bool) *SqsAction { - s.UseBase64 = &v +// SetTimestamp sets the Timestamp field's value. +func (s *ThingConnectivity) SetTimestamp(v int64) *ThingConnectivity { + s.Timestamp = &v return s } -type StartThingRegistrationTaskInput struct { +// The thing search index document. +type ThingDocument struct { _ struct{} `type:"structure"` - // The S3 bucket that contains the input file. - // - // InputFileBucket is a required field - InputFileBucket *string `locationName:"inputFileBucket" min:"3" type:"string" required:"true"` + // The attributes. + Attributes map[string]*string `locationName:"attributes" type:"map"` - // The name of input file within the S3 bucket. This file contains a newline - // delimited JSON file. Each line contains the parameter values to provision - // one device (thing). - // - // InputFileKey is a required field - InputFileKey *string `locationName:"inputFileKey" min:"1" type:"string" required:"true"` + // Indicates whether or not the thing is connected to the AWS IoT service. + Connectivity *ThingConnectivity `locationName:"connectivity" type:"structure"` - // The IAM role ARN that grants permission the input file. - // - // RoleArn is a required field - RoleArn *string `locationName:"roleArn" min:"20" type:"string" required:"true"` + // The shadow. + Shadow *string `locationName:"shadow" type:"string"` - // The provisioning template. - // - // TemplateBody is a required field - TemplateBody *string `locationName:"templateBody" type:"string" required:"true"` + // Thing group names. + ThingGroupNames []*string `locationName:"thingGroupNames" type:"list"` + + // The thing ID. + ThingId *string `locationName:"thingId" type:"string"` + + // The thing name. + ThingName *string `locationName:"thingName" min:"1" type:"string"` + + // The thing type name. 
+ ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` } // String returns the string representation -func (s StartThingRegistrationTaskInput) String() string { +func (s ThingDocument) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StartThingRegistrationTaskInput) GoString() string { +func (s ThingDocument) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *StartThingRegistrationTaskInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "StartThingRegistrationTaskInput"} - if s.InputFileBucket == nil { - invalidParams.Add(request.NewErrParamRequired("InputFileBucket")) - } - if s.InputFileBucket != nil && len(*s.InputFileBucket) < 3 { - invalidParams.Add(request.NewErrParamMinLen("InputFileBucket", 3)) - } - if s.InputFileKey == nil { - invalidParams.Add(request.NewErrParamRequired("InputFileKey")) - } - if s.InputFileKey != nil && len(*s.InputFileKey) < 1 { - invalidParams.Add(request.NewErrParamMinLen("InputFileKey", 1)) - } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) - } - if s.RoleArn != nil && len(*s.RoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) - } - if s.TemplateBody == nil { - invalidParams.Add(request.NewErrParamRequired("TemplateBody")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetInputFileBucket sets the InputFileBucket field's value. -func (s *StartThingRegistrationTaskInput) SetInputFileBucket(v string) *StartThingRegistrationTaskInput { - s.InputFileBucket = &v +// SetAttributes sets the Attributes field's value. +func (s *ThingDocument) SetAttributes(v map[string]*string) *ThingDocument { + s.Attributes = v return s } -// SetInputFileKey sets the InputFileKey field's value. -func (s *StartThingRegistrationTaskInput) SetInputFileKey(v string) *StartThingRegistrationTaskInput { - s.InputFileKey = &v +// SetConnectivity sets the Connectivity field's value. +func (s *ThingDocument) SetConnectivity(v *ThingConnectivity) *ThingDocument { + s.Connectivity = v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *StartThingRegistrationTaskInput) SetRoleArn(v string) *StartThingRegistrationTaskInput { - s.RoleArn = &v +// SetShadow sets the Shadow field's value. +func (s *ThingDocument) SetShadow(v string) *ThingDocument { + s.Shadow = &v return s } -// SetTemplateBody sets the TemplateBody field's value. -func (s *StartThingRegistrationTaskInput) SetTemplateBody(v string) *StartThingRegistrationTaskInput { - s.TemplateBody = &v +// SetThingGroupNames sets the ThingGroupNames field's value. +func (s *ThingDocument) SetThingGroupNames(v []*string) *ThingDocument { + s.ThingGroupNames = v return s } -type StartThingRegistrationTaskOutput struct { - _ struct{} `type:"structure"` - - // The bulk thing provisioning task ID. - TaskId *string `locationName:"taskId" type:"string"` -} - -// String returns the string representation -func (s StartThingRegistrationTaskOutput) String() string { - return awsutil.Prettify(s) +// SetThingId sets the ThingId field's value. +func (s *ThingDocument) SetThingId(v string) *ThingDocument { + s.ThingId = &v + return s } -// GoString returns the string representation -func (s StartThingRegistrationTaskOutput) GoString() string { - return s.String() +// SetThingName sets the ThingName field's value. 
+func (s *ThingDocument) SetThingName(v string) *ThingDocument { + s.ThingName = &v + return s } -// SetTaskId sets the TaskId field's value. -func (s *StartThingRegistrationTaskOutput) SetTaskId(v string) *StartThingRegistrationTaskOutput { - s.TaskId = &v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *ThingDocument) SetThingTypeName(v string) *ThingDocument { + s.ThingTypeName = &v return s } -type StopThingRegistrationTaskInput struct { +// The thing group search index document. +type ThingGroupDocument struct { _ struct{} `type:"structure"` - // The bulk thing provisioning task ID. - // - // TaskId is a required field - TaskId *string `location:"uri" locationName:"taskId" type:"string" required:"true"` + // The thing group attributes. + Attributes map[string]*string `locationName:"attributes" type:"map"` + + // Parent group names. + ParentGroupNames []*string `locationName:"parentGroupNames" type:"list"` + + // The thing group description. + ThingGroupDescription *string `locationName:"thingGroupDescription" type:"string"` + + // The thing group ID. + ThingGroupId *string `locationName:"thingGroupId" min:"1" type:"string"` + + // The thing group name. + ThingGroupName *string `locationName:"thingGroupName" min:"1" type:"string"` } // String returns the string representation -func (s StopThingRegistrationTaskInput) String() string { +func (s ThingGroupDocument) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StopThingRegistrationTaskInput) GoString() string { +func (s ThingGroupDocument) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *StopThingRegistrationTaskInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "StopThingRegistrationTaskInput"} - if s.TaskId == nil { - invalidParams.Add(request.NewErrParamRequired("TaskId")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetAttributes sets the Attributes field's value. +func (s *ThingGroupDocument) SetAttributes(v map[string]*string) *ThingGroupDocument { + s.Attributes = v + return s } -// SetTaskId sets the TaskId field's value. -func (s *StopThingRegistrationTaskInput) SetTaskId(v string) *StopThingRegistrationTaskInput { - s.TaskId = &v +// SetParentGroupNames sets the ParentGroupNames field's value. +func (s *ThingGroupDocument) SetParentGroupNames(v []*string) *ThingGroupDocument { + s.ParentGroupNames = v return s } -type StopThingRegistrationTaskOutput struct { - _ struct{} `type:"structure"` +// SetThingGroupDescription sets the ThingGroupDescription field's value. +func (s *ThingGroupDocument) SetThingGroupDescription(v string) *ThingGroupDocument { + s.ThingGroupDescription = &v + return s } -// String returns the string representation -func (s StopThingRegistrationTaskOutput) String() string { - return awsutil.Prettify(s) +// SetThingGroupId sets the ThingGroupId field's value. +func (s *ThingGroupDocument) SetThingGroupId(v string) *ThingGroupDocument { + s.ThingGroupId = &v + return s } -// GoString returns the string representation -func (s StopThingRegistrationTaskOutput) GoString() string { - return s.String() +// SetThingGroupName sets the ThingGroupName field's value. +func (s *ThingGroupDocument) SetThingGroupName(v string) *ThingGroupDocument { + s.ThingGroupName = &v + return s } -// Describes a group of files that can be streamed. -type Stream struct { +// Thing group indexing configuration. 
+type ThingGroupIndexingConfiguration struct { _ struct{} `type:"structure"` - // The ID of a file associated with a stream. - FileId *int64 `locationName:"fileId" type:"integer"` - - // The stream ID. - StreamId *string `locationName:"streamId" min:"1" type:"string"` + // Thing group indexing mode. + // + // ThingGroupIndexingMode is a required field + ThingGroupIndexingMode *string `locationName:"thingGroupIndexingMode" type:"string" required:"true" enum:"ThingGroupIndexingMode"` } // String returns the string representation -func (s Stream) String() string { +func (s ThingGroupIndexingConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Stream) GoString() string { +func (s ThingGroupIndexingConfiguration) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *Stream) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Stream"} - if s.StreamId != nil && len(*s.StreamId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("StreamId", 1)) +func (s *ThingGroupIndexingConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ThingGroupIndexingConfiguration"} + if s.ThingGroupIndexingMode == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupIndexingMode")) } if invalidParams.Len() > 0 { @@ -24203,1029 +34064,1225 @@ func (s *Stream) Validate() error { return nil } -// SetFileId sets the FileId field's value. -func (s *Stream) SetFileId(v int64) *Stream { - s.FileId = &v - return s -} - -// SetStreamId sets the StreamId field's value. -func (s *Stream) SetStreamId(v string) *Stream { - s.StreamId = &v +// SetThingGroupIndexingMode sets the ThingGroupIndexingMode field's value. +func (s *ThingGroupIndexingConfiguration) SetThingGroupIndexingMode(v string) *ThingGroupIndexingConfiguration { + s.ThingGroupIndexingMode = &v return s } -// Represents a file to stream. -type StreamFile struct { +// Thing group metadata. +type ThingGroupMetadata struct { _ struct{} `type:"structure"` - // The file ID. - FileId *int64 `locationName:"fileId" type:"integer"` + // The UNIX timestamp of when the thing group was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` - // The location of the file in S3. - S3Location *S3Location `locationName:"s3Location" type:"structure"` + // The parent thing group name. + ParentGroupName *string `locationName:"parentGroupName" min:"1" type:"string"` + + // The root parent thing group. + RootToParentThingGroups []*GroupNameAndArn `locationName:"rootToParentThingGroups" type:"list"` } // String returns the string representation -func (s StreamFile) String() string { +func (s ThingGroupMetadata) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StreamFile) GoString() string { +func (s ThingGroupMetadata) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *StreamFile) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "StreamFile"} - if s.S3Location != nil { - if err := s.S3Location.Validate(); err != nil { - invalidParams.AddNested("S3Location", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCreationDate sets the CreationDate field's value. 
+func (s *ThingGroupMetadata) SetCreationDate(v time.Time) *ThingGroupMetadata { + s.CreationDate = &v + return s } -// SetFileId sets the FileId field's value. -func (s *StreamFile) SetFileId(v int64) *StreamFile { - s.FileId = &v +// SetParentGroupName sets the ParentGroupName field's value. +func (s *ThingGroupMetadata) SetParentGroupName(v string) *ThingGroupMetadata { + s.ParentGroupName = &v return s } -// SetS3Location sets the S3Location field's value. -func (s *StreamFile) SetS3Location(v *S3Location) *StreamFile { - s.S3Location = v +// SetRootToParentThingGroups sets the RootToParentThingGroups field's value. +func (s *ThingGroupMetadata) SetRootToParentThingGroups(v []*GroupNameAndArn) *ThingGroupMetadata { + s.RootToParentThingGroups = v return s } -// Information about a stream. -type StreamInfo struct { +// Thing group properties. +type ThingGroupProperties struct { _ struct{} `type:"structure"` - // The date when the stream was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` - - // The description of the stream. - Description *string `locationName:"description" type:"string"` - - // The files to stream. - Files []*StreamFile `locationName:"files" min:"1" type:"list"` - - // The date when the stream was last updated. - LastUpdatedAt *time.Time `locationName:"lastUpdatedAt" type:"timestamp" timestampFormat:"unix"` - - // An IAM role AWS IoT assumes to access your S3 files. - RoleArn *string `locationName:"roleArn" min:"20" type:"string"` - - // The stream ARN. - StreamArn *string `locationName:"streamArn" type:"string"` - - // The stream ID. - StreamId *string `locationName:"streamId" min:"1" type:"string"` + // The thing group attributes in JSON format. + AttributePayload *AttributePayload `locationName:"attributePayload" type:"structure"` - // The stream version. - StreamVersion *int64 `locationName:"streamVersion" type:"integer"` + // The thing group description. + ThingGroupDescription *string `locationName:"thingGroupDescription" type:"string"` } // String returns the string representation -func (s StreamInfo) String() string { +func (s ThingGroupProperties) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StreamInfo) GoString() string { +func (s ThingGroupProperties) GoString() string { return s.String() } -// SetCreatedAt sets the CreatedAt field's value. -func (s *StreamInfo) SetCreatedAt(v time.Time) *StreamInfo { - s.CreatedAt = &v +// SetAttributePayload sets the AttributePayload field's value. +func (s *ThingGroupProperties) SetAttributePayload(v *AttributePayload) *ThingGroupProperties { + s.AttributePayload = v return s } -// SetDescription sets the Description field's value. -func (s *StreamInfo) SetDescription(v string) *StreamInfo { - s.Description = &v +// SetThingGroupDescription sets the ThingGroupDescription field's value. +func (s *ThingGroupProperties) SetThingGroupDescription(v string) *ThingGroupProperties { + s.ThingGroupDescription = &v return s } -// SetFiles sets the Files field's value. -func (s *StreamInfo) SetFiles(v []*StreamFile) *StreamInfo { - s.Files = v - return s +// The thing indexing configuration. For more information, see Managing Thing +// Indexing (https://docs.aws.amazon.com/iot/latest/developerguide/managing-index.html). +type ThingIndexingConfiguration struct { + _ struct{} `type:"structure"` + + // Thing connectivity indexing mode. Valid values are: + // + // * STATUS – Your thing index will contain connectivity status. 
In order + // to enable thing connectivity indexing, thingIndexMode must not be set + // to OFF. + // + // * OFF - Thing connectivity status indexing is disabled. + ThingConnectivityIndexingMode *string `locationName:"thingConnectivityIndexingMode" type:"string" enum:"ThingConnectivityIndexingMode"` + + // Thing indexing mode. Valid values are: + // + // * REGISTRY – Your thing index will contain only registry data. + // + // * REGISTRY_AND_SHADOW - Your thing index will contain registry and shadow + // data. + // + // * OFF - Thing indexing is disabled. + // + // ThingIndexingMode is a required field + ThingIndexingMode *string `locationName:"thingIndexingMode" type:"string" required:"true" enum:"ThingIndexingMode"` } -// SetLastUpdatedAt sets the LastUpdatedAt field's value. -func (s *StreamInfo) SetLastUpdatedAt(v time.Time) *StreamInfo { - s.LastUpdatedAt = &v - return s +// String returns the string representation +func (s ThingIndexingConfiguration) String() string { + return awsutil.Prettify(s) } -// SetRoleArn sets the RoleArn field's value. -func (s *StreamInfo) SetRoleArn(v string) *StreamInfo { - s.RoleArn = &v - return s +// GoString returns the string representation +func (s ThingIndexingConfiguration) GoString() string { + return s.String() } -// SetStreamArn sets the StreamArn field's value. -func (s *StreamInfo) SetStreamArn(v string) *StreamInfo { - s.StreamArn = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ThingIndexingConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ThingIndexingConfiguration"} + if s.ThingIndexingMode == nil { + invalidParams.Add(request.NewErrParamRequired("ThingIndexingMode")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetStreamId sets the StreamId field's value. -func (s *StreamInfo) SetStreamId(v string) *StreamInfo { - s.StreamId = &v +// SetThingConnectivityIndexingMode sets the ThingConnectivityIndexingMode field's value. +func (s *ThingIndexingConfiguration) SetThingConnectivityIndexingMode(v string) *ThingIndexingConfiguration { + s.ThingConnectivityIndexingMode = &v return s } -// SetStreamVersion sets the StreamVersion field's value. -func (s *StreamInfo) SetStreamVersion(v int64) *StreamInfo { - s.StreamVersion = &v +// SetThingIndexingMode sets the ThingIndexingMode field's value. +func (s *ThingIndexingConfiguration) SetThingIndexingMode(v string) *ThingIndexingConfiguration { + s.ThingIndexingMode = &v return s } -// A summary of a stream. -type StreamSummary struct { +// The definition of the thing type, including thing type name and description. +type ThingTypeDefinition struct { _ struct{} `type:"structure"` - // A description of the stream. - Description *string `locationName:"description" type:"string"` + // The thing type ARN. + ThingTypeArn *string `locationName:"thingTypeArn" type:"string"` - // The stream ARN. - StreamArn *string `locationName:"streamArn" type:"string"` + // The ThingTypeMetadata contains additional information about the thing type + // including: creation date and time, a value indicating whether the thing type + // is deprecated, and a date and time when it was deprecated. + ThingTypeMetadata *ThingTypeMetadata `locationName:"thingTypeMetadata" type:"structure"` - // The stream ID. - StreamId *string `locationName:"streamId" min:"1" type:"string"` + // The name of the thing type. 
+ ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` - // The stream version. - StreamVersion *int64 `locationName:"streamVersion" type:"integer"` + // The ThingTypeProperties for the thing type. + ThingTypeProperties *ThingTypeProperties `locationName:"thingTypeProperties" type:"structure"` } // String returns the string representation -func (s StreamSummary) String() string { +func (s ThingTypeDefinition) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StreamSummary) GoString() string { +func (s ThingTypeDefinition) GoString() string { return s.String() } -// SetDescription sets the Description field's value. -func (s *StreamSummary) SetDescription(v string) *StreamSummary { - s.Description = &v +// SetThingTypeArn sets the ThingTypeArn field's value. +func (s *ThingTypeDefinition) SetThingTypeArn(v string) *ThingTypeDefinition { + s.ThingTypeArn = &v return s } -// SetStreamArn sets the StreamArn field's value. -func (s *StreamSummary) SetStreamArn(v string) *StreamSummary { - s.StreamArn = &v +// SetThingTypeMetadata sets the ThingTypeMetadata field's value. +func (s *ThingTypeDefinition) SetThingTypeMetadata(v *ThingTypeMetadata) *ThingTypeDefinition { + s.ThingTypeMetadata = v return s } -// SetStreamId sets the StreamId field's value. -func (s *StreamSummary) SetStreamId(v string) *StreamSummary { - s.StreamId = &v +// SetThingTypeName sets the ThingTypeName field's value. +func (s *ThingTypeDefinition) SetThingTypeName(v string) *ThingTypeDefinition { + s.ThingTypeName = &v return s } -// SetStreamVersion sets the StreamVersion field's value. -func (s *StreamSummary) SetStreamVersion(v int64) *StreamSummary { - s.StreamVersion = &v +// SetThingTypeProperties sets the ThingTypeProperties field's value. +func (s *ThingTypeDefinition) SetThingTypeProperties(v *ThingTypeProperties) *ThingTypeDefinition { + s.ThingTypeProperties = v return s } -type TestAuthorizationInput struct { +// The ThingTypeMetadata contains additional information about the thing type +// including: creation date and time, a value indicating whether the thing type +// is deprecated, and a date and time when time was deprecated. +type ThingTypeMetadata struct { _ struct{} `type:"structure"` - // A list of authorization info objects. Simulating authorization will create - // a response for each authInfo object in the list. - // - // AuthInfos is a required field - AuthInfos []*AuthInfo `locationName:"authInfos" min:"1" type:"list" required:"true"` - - // The MQTT client ID. - ClientId *string `location:"querystring" locationName:"clientId" type:"string"` - - // The Cognito identity pool ID. - CognitoIdentityPoolId *string `locationName:"cognitoIdentityPoolId" type:"string"` - - // When testing custom authorization, the policies specified here are treated - // as if they are attached to the principal being authorized. - PolicyNamesToAdd []*string `locationName:"policyNamesToAdd" type:"list"` + // The date and time when the thing type was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` - // When testing custom authorization, the policies specified here are treated - // as if they are not attached to the principal being authorized. - PolicyNamesToSkip []*string `locationName:"policyNamesToSkip" type:"list"` + // Whether the thing type is deprecated. If true, no new things could be associated + // with this type. + Deprecated *bool `locationName:"deprecated" type:"boolean"` - // The principal. 
- Principal *string `locationName:"principal" type:"string"` + // The date and time when the thing type was deprecated. + DeprecationDate *time.Time `locationName:"deprecationDate" type:"timestamp"` } // String returns the string representation -func (s TestAuthorizationInput) String() string { +func (s ThingTypeMetadata) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TestAuthorizationInput) GoString() string { +func (s ThingTypeMetadata) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *TestAuthorizationInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "TestAuthorizationInput"} - if s.AuthInfos == nil { - invalidParams.Add(request.NewErrParamRequired("AuthInfos")) - } - if s.AuthInfos != nil && len(s.AuthInfos) < 1 { - invalidParams.Add(request.NewErrParamMinLen("AuthInfos", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCreationDate sets the CreationDate field's value. +func (s *ThingTypeMetadata) SetCreationDate(v time.Time) *ThingTypeMetadata { + s.CreationDate = &v + return s } -// SetAuthInfos sets the AuthInfos field's value. -func (s *TestAuthorizationInput) SetAuthInfos(v []*AuthInfo) *TestAuthorizationInput { - s.AuthInfos = v +// SetDeprecated sets the Deprecated field's value. +func (s *ThingTypeMetadata) SetDeprecated(v bool) *ThingTypeMetadata { + s.Deprecated = &v return s } -// SetClientId sets the ClientId field's value. -func (s *TestAuthorizationInput) SetClientId(v string) *TestAuthorizationInput { - s.ClientId = &v +// SetDeprecationDate sets the DeprecationDate field's value. +func (s *ThingTypeMetadata) SetDeprecationDate(v time.Time) *ThingTypeMetadata { + s.DeprecationDate = &v return s } -// SetCognitoIdentityPoolId sets the CognitoIdentityPoolId field's value. -func (s *TestAuthorizationInput) SetCognitoIdentityPoolId(v string) *TestAuthorizationInput { - s.CognitoIdentityPoolId = &v - return s +// The ThingTypeProperties contains information about the thing type including: +// a thing type description, and a list of searchable thing attribute names. +type ThingTypeProperties struct { + _ struct{} `type:"structure"` + + // A list of searchable thing attribute names. + SearchableAttributes []*string `locationName:"searchableAttributes" type:"list"` + + // The description of the thing type. + ThingTypeDescription *string `locationName:"thingTypeDescription" type:"string"` } -// SetPolicyNamesToAdd sets the PolicyNamesToAdd field's value. -func (s *TestAuthorizationInput) SetPolicyNamesToAdd(v []*string) *TestAuthorizationInput { - s.PolicyNamesToAdd = v - return s +// String returns the string representation +func (s ThingTypeProperties) String() string { + return awsutil.Prettify(s) } -// SetPolicyNamesToSkip sets the PolicyNamesToSkip field's value. -func (s *TestAuthorizationInput) SetPolicyNamesToSkip(v []*string) *TestAuthorizationInput { - s.PolicyNamesToSkip = v +// GoString returns the string representation +func (s ThingTypeProperties) GoString() string { + return s.String() +} + +// SetSearchableAttributes sets the SearchableAttributes field's value. +func (s *ThingTypeProperties) SetSearchableAttributes(v []*string) *ThingTypeProperties { + s.SearchableAttributes = v return s } -// SetPrincipal sets the Principal field's value. 
-func (s *TestAuthorizationInput) SetPrincipal(v string) *TestAuthorizationInput { - s.Principal = &v +// SetThingTypeDescription sets the ThingTypeDescription field's value. +func (s *ThingTypeProperties) SetThingTypeDescription(v string) *ThingTypeProperties { + s.ThingTypeDescription = &v return s } -type TestAuthorizationOutput struct { +// Specifies the amount of time each device has to finish its execution of the +// job. A timer is started when the job execution status is set to IN_PROGRESS. +// If the job execution status is not set to another terminal state before the +// timer expires, it will be automatically set to TIMED_OUT. +type TimeoutConfig struct { _ struct{} `type:"structure"` - // The authentication results. - AuthResults []*AuthResult `locationName:"authResults" type:"list"` + // Specifies the amount of time, in minutes, this device has to finish execution + // of this job. The timeout interval can be anywhere between 1 minute and 7 + // days (1 to 10080 minutes). The in progress timer can't be updated and will + // apply to all job executions for the job. Whenever a job execution remains + // in the IN_PROGRESS status for longer than this interval, the job execution + // will fail and switch to the terminal TIMED_OUT status. + InProgressTimeoutInMinutes *int64 `locationName:"inProgressTimeoutInMinutes" type:"long"` } // String returns the string representation -func (s TestAuthorizationOutput) String() string { +func (s TimeoutConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TestAuthorizationOutput) GoString() string { +func (s TimeoutConfig) GoString() string { return s.String() } -// SetAuthResults sets the AuthResults field's value. -func (s *TestAuthorizationOutput) SetAuthResults(v []*AuthResult) *TestAuthorizationOutput { - s.AuthResults = v +// SetInProgressTimeoutInMinutes sets the InProgressTimeoutInMinutes field's value. +func (s *TimeoutConfig) SetInProgressTimeoutInMinutes(v int64) *TimeoutConfig { + s.InProgressTimeoutInMinutes = &v return s } -type TestInvokeAuthorizerInput struct { +// Describes a rule. +type TopicRule struct { _ struct{} `type:"structure"` - // The custom authorizer name. - // - // AuthorizerName is a required field - AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` + // The actions associated with the rule. + Actions []*Action `locationName:"actions" type:"list"` - // The token returned by your custom authentication service. - // - // Token is a required field - Token *string `locationName:"token" min:"1" type:"string" required:"true"` + // The version of the SQL rules engine to use when evaluating the rule. + AwsIotSqlVersion *string `locationName:"awsIotSqlVersion" type:"string"` - // The signature made with the token and your custom authentication service's - // private key. - // - // TokenSignature is a required field - TokenSignature *string `locationName:"tokenSignature" min:"1" type:"string" required:"true"` + // The date and time the rule was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // The description of the rule. + Description *string `locationName:"description" type:"string"` + + // The action to perform when an error occurs. + ErrorAction *Action `locationName:"errorAction" type:"structure"` + + // Specifies whether the rule is disabled. + RuleDisabled *bool `locationName:"ruleDisabled" type:"boolean"` + + // The name of the rule. 
+ RuleName *string `locationName:"ruleName" min:"1" type:"string"` + + // The SQL statement used to query the topic. When using a SQL query with multiple + // lines, be sure to escape the newline characters. + Sql *string `locationName:"sql" type:"string"` } // String returns the string representation -func (s TestInvokeAuthorizerInput) String() string { +func (s TopicRule) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TestInvokeAuthorizerInput) GoString() string { +func (s TopicRule) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *TestInvokeAuthorizerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "TestInvokeAuthorizerInput"} - if s.AuthorizerName == nil { - invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) - } - if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) - } - if s.Token == nil { - invalidParams.Add(request.NewErrParamRequired("Token")) - } - if s.Token != nil && len(*s.Token) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Token", 1)) - } - if s.TokenSignature == nil { - invalidParams.Add(request.NewErrParamRequired("TokenSignature")) - } - if s.TokenSignature != nil && len(*s.TokenSignature) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TokenSignature", 1)) - } +// SetActions sets the Actions field's value. +func (s *TopicRule) SetActions(v []*Action) *TopicRule { + s.Actions = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetAwsIotSqlVersion sets the AwsIotSqlVersion field's value. +func (s *TopicRule) SetAwsIotSqlVersion(v string) *TopicRule { + s.AwsIotSqlVersion = &v + return s } -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *TestInvokeAuthorizerInput) SetAuthorizerName(v string) *TestInvokeAuthorizerInput { - s.AuthorizerName = &v +// SetCreatedAt sets the CreatedAt field's value. +func (s *TopicRule) SetCreatedAt(v time.Time) *TopicRule { + s.CreatedAt = &v return s } -// SetToken sets the Token field's value. -func (s *TestInvokeAuthorizerInput) SetToken(v string) *TestInvokeAuthorizerInput { - s.Token = &v +// SetDescription sets the Description field's value. +func (s *TopicRule) SetDescription(v string) *TopicRule { + s.Description = &v return s } -// SetTokenSignature sets the TokenSignature field's value. -func (s *TestInvokeAuthorizerInput) SetTokenSignature(v string) *TestInvokeAuthorizerInput { - s.TokenSignature = &v +// SetErrorAction sets the ErrorAction field's value. +func (s *TopicRule) SetErrorAction(v *Action) *TopicRule { + s.ErrorAction = v return s } -type TestInvokeAuthorizerOutput struct { +// SetRuleDisabled sets the RuleDisabled field's value. +func (s *TopicRule) SetRuleDisabled(v bool) *TopicRule { + s.RuleDisabled = &v + return s +} + +// SetRuleName sets the RuleName field's value. +func (s *TopicRule) SetRuleName(v string) *TopicRule { + s.RuleName = &v + return s +} + +// SetSql sets the Sql field's value. +func (s *TopicRule) SetSql(v string) *TopicRule { + s.Sql = &v + return s +} + +// Describes a rule. +type TopicRuleListItem struct { _ struct{} `type:"structure"` - // The number of seconds after which the connection is terminated. - DisconnectAfterInSeconds *int64 `locationName:"disconnectAfterInSeconds" type:"integer"` + // The date and time the rule was created. 
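// ---------------------------------------------------------------------------
// Illustrative usage sketch (editor's addition, not part of the generated SDK
// file): building the TimeoutConfig described above. The 1 to 10080 minute
// range comes from its doc comment; the job-creation input that would normally
// carry this value is not shown in this hunk, so only the struct itself is used.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	// SetInProgressTimeoutInMinutes returns the receiver, so calls can be chained.
	timeout := (&iot.TimeoutConfig{}).SetInProgressTimeoutInMinutes(60)
	fmt.Println(timeout)
}
// ---------------------------------------------------------------------------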
+ CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // True if the token is authenticated, otherwise false. - IsAuthenticated *bool `locationName:"isAuthenticated" type:"boolean"` + // The rule ARN. + RuleArn *string `locationName:"ruleArn" type:"string"` - // IAM policy documents. - PolicyDocuments []*string `locationName:"policyDocuments" type:"list"` + // Specifies whether the rule is disabled. + RuleDisabled *bool `locationName:"ruleDisabled" type:"boolean"` - // The principal ID. - PrincipalId *string `locationName:"principalId" min:"1" type:"string"` + // The name of the rule. + RuleName *string `locationName:"ruleName" min:"1" type:"string"` - // The number of seconds after which the temporary credentials are refreshed. - RefreshAfterInSeconds *int64 `locationName:"refreshAfterInSeconds" type:"integer"` + // The pattern for the topic names that apply. + TopicPattern *string `locationName:"topicPattern" type:"string"` } // String returns the string representation -func (s TestInvokeAuthorizerOutput) String() string { +func (s TopicRuleListItem) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TestInvokeAuthorizerOutput) GoString() string { +func (s TopicRuleListItem) GoString() string { return s.String() } -// SetDisconnectAfterInSeconds sets the DisconnectAfterInSeconds field's value. -func (s *TestInvokeAuthorizerOutput) SetDisconnectAfterInSeconds(v int64) *TestInvokeAuthorizerOutput { - s.DisconnectAfterInSeconds = &v +// SetCreatedAt sets the CreatedAt field's value. +func (s *TopicRuleListItem) SetCreatedAt(v time.Time) *TopicRuleListItem { + s.CreatedAt = &v return s } -// SetIsAuthenticated sets the IsAuthenticated field's value. -func (s *TestInvokeAuthorizerOutput) SetIsAuthenticated(v bool) *TestInvokeAuthorizerOutput { - s.IsAuthenticated = &v +// SetRuleArn sets the RuleArn field's value. +func (s *TopicRuleListItem) SetRuleArn(v string) *TopicRuleListItem { + s.RuleArn = &v return s } -// SetPolicyDocuments sets the PolicyDocuments field's value. -func (s *TestInvokeAuthorizerOutput) SetPolicyDocuments(v []*string) *TestInvokeAuthorizerOutput { - s.PolicyDocuments = v +// SetRuleDisabled sets the RuleDisabled field's value. +func (s *TopicRuleListItem) SetRuleDisabled(v bool) *TopicRuleListItem { + s.RuleDisabled = &v return s } -// SetPrincipalId sets the PrincipalId field's value. -func (s *TestInvokeAuthorizerOutput) SetPrincipalId(v string) *TestInvokeAuthorizerOutput { - s.PrincipalId = &v +// SetRuleName sets the RuleName field's value. +func (s *TopicRuleListItem) SetRuleName(v string) *TopicRuleListItem { + s.RuleName = &v return s } -// SetRefreshAfterInSeconds sets the RefreshAfterInSeconds field's value. -func (s *TestInvokeAuthorizerOutput) SetRefreshAfterInSeconds(v int64) *TestInvokeAuthorizerOutput { - s.RefreshAfterInSeconds = &v +// SetTopicPattern sets the TopicPattern field's value. +func (s *TopicRuleListItem) SetTopicPattern(v string) *TopicRuleListItem { + s.TopicPattern = &v return s } -// The properties of the thing, including thing name, thing type name, and a -// list of thing attributes. -type ThingAttribute struct { +// Describes a rule. +type TopicRulePayload struct { _ struct{} `type:"structure"` - // A list of thing attributes which are name-value pairs. - Attributes map[string]*string `locationName:"attributes" type:"map"` + // The actions associated with the rule. 
+ // + // Actions is a required field + Actions []*Action `locationName:"actions" type:"list" required:"true"` - // The thing ARN. - ThingArn *string `locationName:"thingArn" type:"string"` + // The version of the SQL rules engine to use when evaluating the rule. + AwsIotSqlVersion *string `locationName:"awsIotSqlVersion" type:"string"` - // The name of the thing. - ThingName *string `locationName:"thingName" min:"1" type:"string"` + // The description of the rule. + Description *string `locationName:"description" type:"string"` - // The name of the thing type, if the thing has been associated with a type. - ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` + // The action to take when an error occurs. + ErrorAction *Action `locationName:"errorAction" type:"structure"` - // The version of the thing record in the registry. - Version *int64 `locationName:"version" type:"long"` + // Specifies whether the rule is disabled. + RuleDisabled *bool `locationName:"ruleDisabled" type:"boolean"` + + // The SQL statement used to query the topic. For more information, see AWS + // IoT SQL Reference (http://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html#aws-iot-sql-reference) + // in the AWS IoT Developer Guide. + // + // Sql is a required field + Sql *string `locationName:"sql" type:"string" required:"true"` } // String returns the string representation -func (s ThingAttribute) String() string { +func (s TopicRulePayload) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ThingAttribute) GoString() string { +func (s TopicRulePayload) GoString() string { return s.String() } -// SetAttributes sets the Attributes field's value. -func (s *ThingAttribute) SetAttributes(v map[string]*string) *ThingAttribute { - s.Attributes = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *TopicRulePayload) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TopicRulePayload"} + if s.Actions == nil { + invalidParams.Add(request.NewErrParamRequired("Actions")) + } + if s.Sql == nil { + invalidParams.Add(request.NewErrParamRequired("Sql")) + } + if s.Actions != nil { + for i, v := range s.Actions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Actions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ErrorAction != nil { + if err := s.ErrorAction.Validate(); err != nil { + invalidParams.AddNested("ErrorAction", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetThingArn sets the ThingArn field's value. -func (s *ThingAttribute) SetThingArn(v string) *ThingAttribute { - s.ThingArn = &v +// SetActions sets the Actions field's value. +func (s *TopicRulePayload) SetActions(v []*Action) *TopicRulePayload { + s.Actions = v return s } -// SetThingName sets the ThingName field's value. -func (s *ThingAttribute) SetThingName(v string) *ThingAttribute { - s.ThingName = &v +// SetAwsIotSqlVersion sets the AwsIotSqlVersion field's value. +func (s *TopicRulePayload) SetAwsIotSqlVersion(v string) *TopicRulePayload { + s.AwsIotSqlVersion = &v return s } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *ThingAttribute) SetThingTypeName(v string) *ThingAttribute { - s.ThingTypeName = &v +// SetDescription sets the Description field's value. 
+func (s *TopicRulePayload) SetDescription(v string) *TopicRulePayload { + s.Description = &v return s } -// SetVersion sets the Version field's value. -func (s *ThingAttribute) SetVersion(v int64) *ThingAttribute { - s.Version = &v +// SetErrorAction sets the ErrorAction field's value. +func (s *TopicRulePayload) SetErrorAction(v *Action) *TopicRulePayload { + s.ErrorAction = v return s } -// The thing search index document. -type ThingDocument struct { - _ struct{} `type:"structure"` - - // The attributes. - Attributes map[string]*string `locationName:"attributes" type:"map"` +// SetRuleDisabled sets the RuleDisabled field's value. +func (s *TopicRulePayload) SetRuleDisabled(v bool) *TopicRulePayload { + s.RuleDisabled = &v + return s +} - // The thing shadow. - Shadow *string `locationName:"shadow" type:"string"` +// SetSql sets the Sql field's value. +func (s *TopicRulePayload) SetSql(v string) *TopicRulePayload { + s.Sql = &v + return s +} - // Thing group names. - ThingGroupNames []*string `locationName:"thingGroupNames" type:"list"` +// The input for the TransferCertificate operation. +type TransferCertificateInput struct { + _ struct{} `type:"structure"` - // The thing ID. - ThingId *string `locationName:"thingId" type:"string"` + // The ID of the certificate. (The last part of the certificate ARN contains + // the certificate ID.) + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` - // The thing name. - ThingName *string `locationName:"thingName" min:"1" type:"string"` + // The AWS account. + // + // TargetAwsAccount is a required field + TargetAwsAccount *string `location:"querystring" locationName:"targetAwsAccount" min:"12" type:"string" required:"true"` - // The thing type name. - ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` + // The transfer message. + TransferMessage *string `locationName:"transferMessage" type:"string"` } // String returns the string representation -func (s ThingDocument) String() string { +func (s TransferCertificateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ThingDocument) GoString() string { +func (s TransferCertificateInput) GoString() string { return s.String() } -// SetAttributes sets the Attributes field's value. -func (s *ThingDocument) SetAttributes(v map[string]*string) *ThingDocument { - s.Attributes = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *TransferCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TransferCertificateInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + } + if s.TargetAwsAccount == nil { + invalidParams.Add(request.NewErrParamRequired("TargetAwsAccount")) + } + if s.TargetAwsAccount != nil && len(*s.TargetAwsAccount) < 12 { + invalidParams.Add(request.NewErrParamMinLen("TargetAwsAccount", 12)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetShadow sets the Shadow field's value. -func (s *ThingDocument) SetShadow(v string) *ThingDocument { - s.Shadow = &v +// SetCertificateId sets the CertificateId field's value. 
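// ---------------------------------------------------------------------------
// Illustrative usage sketch (editor's addition, not part of the generated SDK
// file): exercising TopicRulePayload.Validate as defined above. Actions is
// deliberately left nil so the required-field check is visible; the SQL
// statement and SQL version are arbitrary example values, and only setters
// from this hunk are used.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	payload := &iot.TopicRulePayload{}
	payload.SetSql("SELECT temperature FROM 'sensors/+/telemetry'")
	payload.SetAwsIotSqlVersion("2016-03-23")
	payload.SetRuleDisabled(false)

	// Actions is required, so Validate reports an ErrInvalidParams error here.
	if err := payload.Validate(); err != nil {
		fmt.Println(err)
	}
}
// ---------------------------------------------------------------------------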
+func (s *TransferCertificateInput) SetCertificateId(v string) *TransferCertificateInput { + s.CertificateId = &v return s } -// SetThingGroupNames sets the ThingGroupNames field's value. -func (s *ThingDocument) SetThingGroupNames(v []*string) *ThingDocument { - s.ThingGroupNames = v +// SetTargetAwsAccount sets the TargetAwsAccount field's value. +func (s *TransferCertificateInput) SetTargetAwsAccount(v string) *TransferCertificateInput { + s.TargetAwsAccount = &v return s } -// SetThingId sets the ThingId field's value. -func (s *ThingDocument) SetThingId(v string) *ThingDocument { - s.ThingId = &v +// SetTransferMessage sets the TransferMessage field's value. +func (s *TransferCertificateInput) SetTransferMessage(v string) *TransferCertificateInput { + s.TransferMessage = &v return s } -// SetThingName sets the ThingName field's value. -func (s *ThingDocument) SetThingName(v string) *ThingDocument { - s.ThingName = &v - return s +// The output from the TransferCertificate operation. +type TransferCertificateOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the certificate. + TransferredCertificateArn *string `locationName:"transferredCertificateArn" type:"string"` } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *ThingDocument) SetThingTypeName(v string) *ThingDocument { - s.ThingTypeName = &v +// String returns the string representation +func (s TransferCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TransferCertificateOutput) GoString() string { + return s.String() +} + +// SetTransferredCertificateArn sets the TransferredCertificateArn field's value. +func (s *TransferCertificateOutput) SetTransferredCertificateArn(v string) *TransferCertificateOutput { + s.TransferredCertificateArn = &v return s } -// Thing group metadata. -type ThingGroupMetadata struct { +// Data used to transfer a certificate to an AWS account. +type TransferData struct { _ struct{} `type:"structure"` - // The UNIX timestamp of when the thing group was created. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` + // The date the transfer was accepted. + AcceptDate *time.Time `locationName:"acceptDate" type:"timestamp"` - // The parent thing group name. - ParentGroupName *string `locationName:"parentGroupName" min:"1" type:"string"` + // The date the transfer was rejected. + RejectDate *time.Time `locationName:"rejectDate" type:"timestamp"` - // The root parent thing group. - RootToParentThingGroups []*GroupNameAndArn `locationName:"rootToParentThingGroups" type:"list"` + // The reason why the transfer was rejected. + RejectReason *string `locationName:"rejectReason" type:"string"` + + // The date the transfer took place. + TransferDate *time.Time `locationName:"transferDate" type:"timestamp"` + + // The transfer message. + TransferMessage *string `locationName:"transferMessage" type:"string"` } // String returns the string representation -func (s ThingGroupMetadata) String() string { +func (s TransferData) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ThingGroupMetadata) GoString() string { +func (s TransferData) GoString() string { return s.String() } -// SetCreationDate sets the CreationDate field's value. -func (s *ThingGroupMetadata) SetCreationDate(v time.Time) *ThingGroupMetadata { - s.CreationDate = &v +// SetAcceptDate sets the AcceptDate field's value. 
+func (s *TransferData) SetAcceptDate(v time.Time) *TransferData { + s.AcceptDate = &v return s } -// SetParentGroupName sets the ParentGroupName field's value. -func (s *ThingGroupMetadata) SetParentGroupName(v string) *ThingGroupMetadata { - s.ParentGroupName = &v +// SetRejectDate sets the RejectDate field's value. +func (s *TransferData) SetRejectDate(v time.Time) *TransferData { + s.RejectDate = &v return s } -// SetRootToParentThingGroups sets the RootToParentThingGroups field's value. -func (s *ThingGroupMetadata) SetRootToParentThingGroups(v []*GroupNameAndArn) *ThingGroupMetadata { - s.RootToParentThingGroups = v +// SetRejectReason sets the RejectReason field's value. +func (s *TransferData) SetRejectReason(v string) *TransferData { + s.RejectReason = &v return s } -// Thing group properties. -type ThingGroupProperties struct { +// SetTransferDate sets the TransferDate field's value. +func (s *TransferData) SetTransferDate(v time.Time) *TransferData { + s.TransferDate = &v + return s +} + +// SetTransferMessage sets the TransferMessage field's value. +func (s *TransferData) SetTransferMessage(v string) *TransferData { + s.TransferMessage = &v + return s +} + +type UntagResourceInput struct { _ struct{} `type:"structure"` - // The thing group attributes in JSON format. - AttributePayload *AttributePayload `locationName:"attributePayload" type:"structure"` + // The ARN of the resource. + // + // ResourceArn is a required field + ResourceArn *string `locationName:"resourceArn" type:"string" required:"true"` - // The thing group description. - ThingGroupDescription *string `locationName:"thingGroupDescription" type:"string"` + // A list of the keys of the tags to be removed from the resource. + // + // TagKeys is a required field + TagKeys []*string `locationName:"tagKeys" type:"list" required:"true"` } // String returns the string representation -func (s ThingGroupProperties) String() string { +func (s UntagResourceInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ThingGroupProperties) GoString() string { +func (s UntagResourceInput) GoString() string { return s.String() } -// SetAttributePayload sets the AttributePayload field's value. -func (s *ThingGroupProperties) SetAttributePayload(v *AttributePayload) *ThingGroupProperties { - s.AttributePayload = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagResourceInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *UntagResourceInput) SetResourceArn(v string) *UntagResourceInput { + s.ResourceArn = &v return s } -// SetThingGroupDescription sets the ThingGroupDescription field's value. -func (s *ThingGroupProperties) SetThingGroupDescription(v string) *ThingGroupProperties { - s.ThingGroupDescription = &v +// SetTagKeys sets the TagKeys field's value. +func (s *UntagResourceInput) SetTagKeys(v []*string) *UntagResourceInput { + s.TagKeys = v return s } -// Thing indexing configuration. -type ThingIndexingConfiguration struct { +type UntagResourceOutput struct { _ struct{} `type:"structure"` - - // Thing indexing mode. 
Valid values are: - // - // * REGISTRY – Your thing index will contain only registry data. - // - // * REGISTRY_AND_SHADOW - Your thing index will contain registry and shadow - // data. - // - // * OFF - Thing indexing is disabled. - ThingIndexingMode *string `locationName:"thingIndexingMode" type:"string" enum:"ThingIndexingMode"` } // String returns the string representation -func (s ThingIndexingConfiguration) String() string { +func (s UntagResourceOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ThingIndexingConfiguration) GoString() string { +func (s UntagResourceOutput) GoString() string { return s.String() } -// SetThingIndexingMode sets the ThingIndexingMode field's value. -func (s *ThingIndexingConfiguration) SetThingIndexingMode(v string) *ThingIndexingConfiguration { - s.ThingIndexingMode = &v - return s -} - -// The definition of the thing type, including thing type name and description. -type ThingTypeDefinition struct { +type UpdateAccountAuditConfigurationInput struct { _ struct{} `type:"structure"` - // The thing type ARN. - ThingTypeArn *string `locationName:"thingTypeArn" type:"string"` - - // The ThingTypeMetadata contains additional information about the thing type - // including: creation date and time, a value indicating whether the thing type - // is deprecated, and a date and time when it was deprecated. - ThingTypeMetadata *ThingTypeMetadata `locationName:"thingTypeMetadata" type:"structure"` + // Specifies which audit checks are enabled and disabled for this account. Use + // DescribeAccountAuditConfiguration to see the list of all checks including + // those that are currently enabled. + // + // Note that some data collection may begin immediately when certain checks + // are enabled. When a check is disabled, any data collected so far in relation + // to the check is deleted. + // + // You cannot disable a check if it is used by any scheduled audit. You must + // first delete the check from the scheduled audit or delete the scheduled audit + // itself. + // + // On the first call to UpdateAccountAuditConfiguration this parameter is required + // and must specify at least one enabled check. + AuditCheckConfigurations map[string]*AuditCheckConfiguration `locationName:"auditCheckConfigurations" type:"map"` - // The name of the thing type. - ThingTypeName *string `locationName:"thingTypeName" min:"1" type:"string"` + // Information about the targets to which audit notifications are sent. + AuditNotificationTargetConfigurations map[string]*AuditNotificationTarget `locationName:"auditNotificationTargetConfigurations" type:"map"` - // The ThingTypeProperties for the thing type. - ThingTypeProperties *ThingTypeProperties `locationName:"thingTypeProperties" type:"structure"` + // The ARN of the role that grants permission to AWS IoT to access information + // about your devices, policies, certificates and other items as necessary when + // performing an audit. + RoleArn *string `locationName:"roleArn" min:"20" type:"string"` } // String returns the string representation -func (s ThingTypeDefinition) String() string { +func (s UpdateAccountAuditConfigurationInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ThingTypeDefinition) GoString() string { +func (s UpdateAccountAuditConfigurationInput) GoString() string { return s.String() } -// SetThingTypeArn sets the ThingTypeArn field's value. 
-func (s *ThingTypeDefinition) SetThingTypeArn(v string) *ThingTypeDefinition { - s.ThingTypeArn = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateAccountAuditConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateAccountAuditConfigurationInput"} + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.AuditNotificationTargetConfigurations != nil { + for i, v := range s.AuditNotificationTargetConfigurations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AuditNotificationTargetConfigurations", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetThingTypeMetadata sets the ThingTypeMetadata field's value. -func (s *ThingTypeDefinition) SetThingTypeMetadata(v *ThingTypeMetadata) *ThingTypeDefinition { - s.ThingTypeMetadata = v +// SetAuditCheckConfigurations sets the AuditCheckConfigurations field's value. +func (s *UpdateAccountAuditConfigurationInput) SetAuditCheckConfigurations(v map[string]*AuditCheckConfiguration) *UpdateAccountAuditConfigurationInput { + s.AuditCheckConfigurations = v return s } -// SetThingTypeName sets the ThingTypeName field's value. -func (s *ThingTypeDefinition) SetThingTypeName(v string) *ThingTypeDefinition { - s.ThingTypeName = &v +// SetAuditNotificationTargetConfigurations sets the AuditNotificationTargetConfigurations field's value. +func (s *UpdateAccountAuditConfigurationInput) SetAuditNotificationTargetConfigurations(v map[string]*AuditNotificationTarget) *UpdateAccountAuditConfigurationInput { + s.AuditNotificationTargetConfigurations = v return s } -// SetThingTypeProperties sets the ThingTypeProperties field's value. -func (s *ThingTypeDefinition) SetThingTypeProperties(v *ThingTypeProperties) *ThingTypeDefinition { - s.ThingTypeProperties = v +// SetRoleArn sets the RoleArn field's value. +func (s *UpdateAccountAuditConfigurationInput) SetRoleArn(v string) *UpdateAccountAuditConfigurationInput { + s.RoleArn = &v return s } -// The ThingTypeMetadata contains additional information about the thing type -// including: creation date and time, a value indicating whether the thing type -// is deprecated, and a date and time when time was deprecated. -type ThingTypeMetadata struct { +type UpdateAccountAuditConfigurationOutput struct { _ struct{} `type:"structure"` +} - // The date and time when the thing type was created. - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix"` +// String returns the string representation +func (s UpdateAccountAuditConfigurationOutput) String() string { + return awsutil.Prettify(s) +} - // Whether the thing type is deprecated. If true, no new things could be associated - // with this type. - Deprecated *bool `locationName:"deprecated" type:"boolean"` +// GoString returns the string representation +func (s UpdateAccountAuditConfigurationOutput) GoString() string { + return s.String() +} - // The date and time when the thing type was deprecated. - DeprecationDate *time.Time `locationName:"deprecationDate" type:"timestamp" timestampFormat:"unix"` +type UpdateAuthorizerInput struct { + _ struct{} `type:"structure"` + + // The ARN of the authorizer's Lambda function. + AuthorizerFunctionArn *string `locationName:"authorizerFunctionArn" type:"string"` + + // The authorizer name. 
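// ---------------------------------------------------------------------------
// Illustrative usage sketch (editor's addition, not part of the generated SDK
// file): a first call to UpdateAccountAuditConfiguration, which per the doc
// comment above must enable at least one check and supply a role ARN. The
// check name and ARNs are placeholders, and the Enabled field of
// AuditCheckConfiguration is defined elsewhere in this file rather than in
// this hunk, so treat that field as an assumption.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	client := iot.New(session.Must(session.NewSession()))

	input := &iot.UpdateAccountAuditConfigurationInput{
		RoleArn: aws.String("arn:aws:iam::123456789012:role/example-audit-role"),
		AuditCheckConfigurations: map[string]*iot.AuditCheckConfiguration{
			"LOGGING_DISABLED_CHECK": {Enabled: aws.Bool(true)},
		},
	}

	if _, err := client.UpdateAccountAuditConfiguration(input); err != nil {
		fmt.Println("update failed:", err)
	}
}
// ---------------------------------------------------------------------------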
+ // + // AuthorizerName is a required field + AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` + + // The status of the update authorizer request. + Status *string `locationName:"status" type:"string" enum:"AuthorizerStatus"` + + // The key used to extract the token from the HTTP headers. + TokenKeyName *string `locationName:"tokenKeyName" min:"1" type:"string"` + + // The public keys used to verify the token signature. + TokenSigningPublicKeys map[string]*string `locationName:"tokenSigningPublicKeys" type:"map"` } // String returns the string representation -func (s ThingTypeMetadata) String() string { +func (s UpdateAuthorizerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ThingTypeMetadata) GoString() string { +func (s UpdateAuthorizerInput) GoString() string { return s.String() } -// SetCreationDate sets the CreationDate field's value. -func (s *ThingTypeMetadata) SetCreationDate(v time.Time) *ThingTypeMetadata { - s.CreationDate = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateAuthorizerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateAuthorizerInput"} + if s.AuthorizerName == nil { + invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) + } + if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) + } + if s.TokenKeyName != nil && len(*s.TokenKeyName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TokenKeyName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthorizerFunctionArn sets the AuthorizerFunctionArn field's value. +func (s *UpdateAuthorizerInput) SetAuthorizerFunctionArn(v string) *UpdateAuthorizerInput { + s.AuthorizerFunctionArn = &v return s } -// SetDeprecated sets the Deprecated field's value. -func (s *ThingTypeMetadata) SetDeprecated(v bool) *ThingTypeMetadata { - s.Deprecated = &v +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *UpdateAuthorizerInput) SetAuthorizerName(v string) *UpdateAuthorizerInput { + s.AuthorizerName = &v return s } -// SetDeprecationDate sets the DeprecationDate field's value. -func (s *ThingTypeMetadata) SetDeprecationDate(v time.Time) *ThingTypeMetadata { - s.DeprecationDate = &v +// SetStatus sets the Status field's value. +func (s *UpdateAuthorizerInput) SetStatus(v string) *UpdateAuthorizerInput { + s.Status = &v return s } -// The ThingTypeProperties contains information about the thing type including: -// a thing type description, and a list of searchable thing attribute names. -type ThingTypeProperties struct { +// SetTokenKeyName sets the TokenKeyName field's value. +func (s *UpdateAuthorizerInput) SetTokenKeyName(v string) *UpdateAuthorizerInput { + s.TokenKeyName = &v + return s +} + +// SetTokenSigningPublicKeys sets the TokenSigningPublicKeys field's value. +func (s *UpdateAuthorizerInput) SetTokenSigningPublicKeys(v map[string]*string) *UpdateAuthorizerInput { + s.TokenSigningPublicKeys = v + return s +} + +type UpdateAuthorizerOutput struct { _ struct{} `type:"structure"` - // A list of searchable thing attribute names. - SearchableAttributes []*string `locationName:"searchableAttributes" type:"list"` + // The authorizer ARN. + AuthorizerArn *string `locationName:"authorizerArn" type:"string"` - // The description of the thing type. 
- ThingTypeDescription *string `locationName:"thingTypeDescription" type:"string"` + // The authorizer name. + AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` } // String returns the string representation -func (s ThingTypeProperties) String() string { +func (s UpdateAuthorizerOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ThingTypeProperties) GoString() string { +func (s UpdateAuthorizerOutput) GoString() string { return s.String() } -// SetSearchableAttributes sets the SearchableAttributes field's value. -func (s *ThingTypeProperties) SetSearchableAttributes(v []*string) *ThingTypeProperties { - s.SearchableAttributes = v - return s -} - -// SetThingTypeDescription sets the ThingTypeDescription field's value. -func (s *ThingTypeProperties) SetThingTypeDescription(v string) *ThingTypeProperties { - s.ThingTypeDescription = &v +// SetAuthorizerArn sets the AuthorizerArn field's value. +func (s *UpdateAuthorizerOutput) SetAuthorizerArn(v string) *UpdateAuthorizerOutput { + s.AuthorizerArn = &v return s -} - -// Describes a rule. -type TopicRule struct { - _ struct{} `type:"structure"` - - // The actions associated with the rule. - Actions []*Action `locationName:"actions" type:"list"` - - // The version of the SQL rules engine to use when evaluating the rule. - AwsIotSqlVersion *string `locationName:"awsIotSqlVersion" type:"string"` - - // The date and time the rule was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` +} - // The description of the rule. - Description *string `locationName:"description" type:"string"` +// SetAuthorizerName sets the AuthorizerName field's value. +func (s *UpdateAuthorizerOutput) SetAuthorizerName(v string) *UpdateAuthorizerOutput { + s.AuthorizerName = &v + return s +} - // The action to perform when an error occurs. - ErrorAction *Action `locationName:"errorAction" type:"structure"` +type UpdateBillingGroupInput struct { + _ struct{} `type:"structure"` - // Specifies whether the rule is disabled. - RuleDisabled *bool `locationName:"ruleDisabled" type:"boolean"` + // The name of the billing group. + // + // BillingGroupName is a required field + BillingGroupName *string `location:"uri" locationName:"billingGroupName" min:"1" type:"string" required:"true"` - // The name of the rule. - RuleName *string `locationName:"ruleName" min:"1" type:"string"` + // The properties of the billing group. + // + // BillingGroupProperties is a required field + BillingGroupProperties *BillingGroupProperties `locationName:"billingGroupProperties" type:"structure" required:"true"` - // The SQL statement used to query the topic. When using a SQL query with multiple - // lines, be sure to escape the newline characters. - Sql *string `locationName:"sql" type:"string"` + // The expected version of the billing group. If the version of the billing + // group does not match the expected version specified in the request, the UpdateBillingGroup + // request is rejected with a VersionConflictException. + ExpectedVersion *int64 `locationName:"expectedVersion" type:"long"` } // String returns the string representation -func (s TopicRule) String() string { +func (s UpdateBillingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TopicRule) GoString() string { +func (s UpdateBillingGroupInput) GoString() string { return s.String() } -// SetActions sets the Actions field's value. 
-func (s *TopicRule) SetActions(v []*Action) *TopicRule { - s.Actions = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateBillingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateBillingGroupInput"} + if s.BillingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("BillingGroupName")) + } + if s.BillingGroupName != nil && len(*s.BillingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BillingGroupName", 1)) + } + if s.BillingGroupProperties == nil { + invalidParams.Add(request.NewErrParamRequired("BillingGroupProperties")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetAwsIotSqlVersion sets the AwsIotSqlVersion field's value. -func (s *TopicRule) SetAwsIotSqlVersion(v string) *TopicRule { - s.AwsIotSqlVersion = &v +// SetBillingGroupName sets the BillingGroupName field's value. +func (s *UpdateBillingGroupInput) SetBillingGroupName(v string) *UpdateBillingGroupInput { + s.BillingGroupName = &v return s } -// SetCreatedAt sets the CreatedAt field's value. -func (s *TopicRule) SetCreatedAt(v time.Time) *TopicRule { - s.CreatedAt = &v +// SetBillingGroupProperties sets the BillingGroupProperties field's value. +func (s *UpdateBillingGroupInput) SetBillingGroupProperties(v *BillingGroupProperties) *UpdateBillingGroupInput { + s.BillingGroupProperties = v return s } -// SetDescription sets the Description field's value. -func (s *TopicRule) SetDescription(v string) *TopicRule { - s.Description = &v +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *UpdateBillingGroupInput) SetExpectedVersion(v int64) *UpdateBillingGroupInput { + s.ExpectedVersion = &v return s } -// SetErrorAction sets the ErrorAction field's value. -func (s *TopicRule) SetErrorAction(v *Action) *TopicRule { - s.ErrorAction = v - return s +type UpdateBillingGroupOutput struct { + _ struct{} `type:"structure"` + + // The latest version of the billing group. + Version *int64 `locationName:"version" type:"long"` } -// SetRuleDisabled sets the RuleDisabled field's value. -func (s *TopicRule) SetRuleDisabled(v bool) *TopicRule { - s.RuleDisabled = &v - return s +// String returns the string representation +func (s UpdateBillingGroupOutput) String() string { + return awsutil.Prettify(s) } -// SetRuleName sets the RuleName field's value. -func (s *TopicRule) SetRuleName(v string) *TopicRule { - s.RuleName = &v - return s +// GoString returns the string representation +func (s UpdateBillingGroupOutput) GoString() string { + return s.String() } -// SetSql sets the Sql field's value. -func (s *TopicRule) SetSql(v string) *TopicRule { - s.Sql = &v +// SetVersion sets the Version field's value. +func (s *UpdateBillingGroupOutput) SetVersion(v int64) *UpdateBillingGroupOutput { + s.Version = &v return s } -// Describes a rule. -type TopicRuleListItem struct { +// The input to the UpdateCACertificate operation. +type UpdateCACertificateInput struct { _ struct{} `type:"structure"` - // The date and time the rule was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // The CA certificate identifier. + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"caCertificateId" min:"64" type:"string" required:"true"` - // The rule ARN. - RuleArn *string `locationName:"ruleArn" type:"string"` + // The new value for the auto registration status. 
Valid values are: "ENABLE" + // or "DISABLE". + NewAutoRegistrationStatus *string `location:"querystring" locationName:"newAutoRegistrationStatus" type:"string" enum:"AutoRegistrationStatus"` - // Specifies whether the rule is disabled. - RuleDisabled *bool `locationName:"ruleDisabled" type:"boolean"` + // The updated status of the CA certificate. + // + // Note: The status value REGISTER_INACTIVE is deprecated and should not be + // used. + NewStatus *string `location:"querystring" locationName:"newStatus" type:"string" enum:"CACertificateStatus"` - // The name of the rule. - RuleName *string `locationName:"ruleName" min:"1" type:"string"` + // Information about the registration configuration. + RegistrationConfig *RegistrationConfig `locationName:"registrationConfig" type:"structure"` - // The pattern for the topic names that apply. - TopicPattern *string `locationName:"topicPattern" type:"string"` + // If true, remove auto registration. + RemoveAutoRegistration *bool `locationName:"removeAutoRegistration" type:"boolean"` } // String returns the string representation -func (s TopicRuleListItem) String() string { +func (s UpdateCACertificateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TopicRuleListItem) GoString() string { +func (s UpdateCACertificateInput) GoString() string { return s.String() } -// SetCreatedAt sets the CreatedAt field's value. -func (s *TopicRuleListItem) SetCreatedAt(v time.Time) *TopicRuleListItem { - s.CreatedAt = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateCACertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateCACertificateInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) + } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + } + if s.RegistrationConfig != nil { + if err := s.RegistrationConfig.Validate(); err != nil { + invalidParams.AddNested("RegistrationConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateId sets the CertificateId field's value. +func (s *UpdateCACertificateInput) SetCertificateId(v string) *UpdateCACertificateInput { + s.CertificateId = &v return s } -// SetRuleArn sets the RuleArn field's value. -func (s *TopicRuleListItem) SetRuleArn(v string) *TopicRuleListItem { - s.RuleArn = &v +// SetNewAutoRegistrationStatus sets the NewAutoRegistrationStatus field's value. +func (s *UpdateCACertificateInput) SetNewAutoRegistrationStatus(v string) *UpdateCACertificateInput { + s.NewAutoRegistrationStatus = &v return s } -// SetRuleDisabled sets the RuleDisabled field's value. -func (s *TopicRuleListItem) SetRuleDisabled(v bool) *TopicRuleListItem { - s.RuleDisabled = &v +// SetNewStatus sets the NewStatus field's value. +func (s *UpdateCACertificateInput) SetNewStatus(v string) *UpdateCACertificateInput { + s.NewStatus = &v return s } -// SetRuleName sets the RuleName field's value. -func (s *TopicRuleListItem) SetRuleName(v string) *TopicRuleListItem { - s.RuleName = &v +// SetRegistrationConfig sets the RegistrationConfig field's value. +func (s *UpdateCACertificateInput) SetRegistrationConfig(v *RegistrationConfig) *UpdateCACertificateInput { + s.RegistrationConfig = v return s } -// SetTopicPattern sets the TopicPattern field's value. 
-func (s *TopicRuleListItem) SetTopicPattern(v string) *TopicRuleListItem { - s.TopicPattern = &v +// SetRemoveAutoRegistration sets the RemoveAutoRegistration field's value. +func (s *UpdateCACertificateInput) SetRemoveAutoRegistration(v bool) *UpdateCACertificateInput { + s.RemoveAutoRegistration = &v return s } -// Describes a rule. -type TopicRulePayload struct { +type UpdateCACertificateOutput struct { _ struct{} `type:"structure"` +} - // The actions associated with the rule. - // - // Actions is a required field - Actions []*Action `locationName:"actions" type:"list" required:"true"` - - // The version of the SQL rules engine to use when evaluating the rule. - AwsIotSqlVersion *string `locationName:"awsIotSqlVersion" type:"string"` +// String returns the string representation +func (s UpdateCACertificateOutput) String() string { + return awsutil.Prettify(s) +} - // The description of the rule. - Description *string `locationName:"description" type:"string"` +// GoString returns the string representation +func (s UpdateCACertificateOutput) GoString() string { + return s.String() +} - // The action to take when an error occurs. - ErrorAction *Action `locationName:"errorAction" type:"structure"` +// The input for the UpdateCertificate operation. +type UpdateCertificateInput struct { + _ struct{} `type:"structure"` - // Specifies whether the rule is disabled. - RuleDisabled *bool `locationName:"ruleDisabled" type:"boolean"` + // The ID of the certificate. (The last part of the certificate ARN contains + // the certificate ID.) + // + // CertificateId is a required field + CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` - // The SQL statement used to query the topic. For more information, see AWS - // IoT SQL Reference (http://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html#aws-iot-sql-reference) - // in the AWS IoT Developer Guide. + // The new status. // - // Sql is a required field - Sql *string `locationName:"sql" type:"string" required:"true"` + // Note: Setting the status to PENDING_TRANSFER will result in an exception + // being thrown. PENDING_TRANSFER is a status used internally by AWS IoT. It + // is not intended for developer use. + // + // Note: The status value REGISTER_INACTIVE is deprecated and should not be + // used. + // + // NewStatus is a required field + NewStatus *string `location:"querystring" locationName:"newStatus" type:"string" required:"true" enum:"CertificateStatus"` } // String returns the string representation -func (s TopicRulePayload) String() string { +func (s UpdateCertificateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TopicRulePayload) GoString() string { +func (s UpdateCertificateInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *TopicRulePayload) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "TopicRulePayload"} - if s.Actions == nil { - invalidParams.Add(request.NewErrParamRequired("Actions")) - } - if s.Sql == nil { - invalidParams.Add(request.NewErrParamRequired("Sql")) +func (s *UpdateCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateCertificateInput"} + if s.CertificateId == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateId")) } - if s.Actions != nil { - for i, v := range s.Actions { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Actions", i), err.(request.ErrInvalidParams)) - } - } + if s.CertificateId != nil && len(*s.CertificateId) < 64 { + invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) } - if s.ErrorAction != nil { - if err := s.ErrorAction.Validate(); err != nil { - invalidParams.AddNested("ErrorAction", err.(request.ErrInvalidParams)) - } + if s.NewStatus == nil { + invalidParams.Add(request.NewErrParamRequired("NewStatus")) } if invalidParams.Len() > 0 { @@ -25234,81 +35291,90 @@ func (s *TopicRulePayload) Validate() error { return nil } -// SetActions sets the Actions field's value. -func (s *TopicRulePayload) SetActions(v []*Action) *TopicRulePayload { - s.Actions = v - return s -} - -// SetAwsIotSqlVersion sets the AwsIotSqlVersion field's value. -func (s *TopicRulePayload) SetAwsIotSqlVersion(v string) *TopicRulePayload { - s.AwsIotSqlVersion = &v +// SetCertificateId sets the CertificateId field's value. +func (s *UpdateCertificateInput) SetCertificateId(v string) *UpdateCertificateInput { + s.CertificateId = &v return s } -// SetDescription sets the Description field's value. -func (s *TopicRulePayload) SetDescription(v string) *TopicRulePayload { - s.Description = &v +// SetNewStatus sets the NewStatus field's value. +func (s *UpdateCertificateInput) SetNewStatus(v string) *UpdateCertificateInput { + s.NewStatus = &v return s } -// SetErrorAction sets the ErrorAction field's value. -func (s *TopicRulePayload) SetErrorAction(v *Action) *TopicRulePayload { - s.ErrorAction = v - return s +type UpdateCertificateOutput struct { + _ struct{} `type:"structure"` } -// SetRuleDisabled sets the RuleDisabled field's value. -func (s *TopicRulePayload) SetRuleDisabled(v bool) *TopicRulePayload { - s.RuleDisabled = &v - return s +// String returns the string representation +func (s UpdateCertificateOutput) String() string { + return awsutil.Prettify(s) } -// SetSql sets the Sql field's value. -func (s *TopicRulePayload) SetSql(v string) *TopicRulePayload { - s.Sql = &v - return s +// GoString returns the string representation +func (s UpdateCertificateOutput) GoString() string { + return s.String() } -// The input for the TransferCertificate operation. -type TransferCertificateInput struct { +type UpdateDynamicThingGroupInput struct { _ struct{} `type:"structure"` - // The ID of the certificate. + // The expected version of the dynamic thing group to update. + ExpectedVersion *int64 `locationName:"expectedVersion" type:"long"` + + // The dynamic thing group index to update. // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` + // Currently one index is supported: 'AWS_Things'. + IndexName *string `locationName:"indexName" min:"1" type:"string"` - // The AWS account. 
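// ---------------------------------------------------------------------------
// Illustrative usage sketch (editor's addition, not part of the generated SDK
// file): deactivating a device certificate with UpdateCertificate. The
// 64-character certificate ID is a placeholder sized to satisfy the min:"64"
// constraint above; "INACTIVE" is a regular CertificateStatus value, while
// PENDING_TRANSFER and REGISTER_INACTIVE are called out above as not for
// normal use.
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iot"
)

func main() {
	client := iot.New(session.Must(session.NewSession()))

	input := &iot.UpdateCertificateInput{
		CertificateId: aws.String(strings.Repeat("0", 64)), // placeholder ID
		NewStatus:     aws.String("INACTIVE"),
	}

	if _, err := client.UpdateCertificate(input); err != nil {
		fmt.Println("update failed:", err)
	}
}
// ---------------------------------------------------------------------------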
+ // The dynamic thing group search query string to update. + QueryString *string `locationName:"queryString" min:"1" type:"string"` + + // The dynamic thing group query version to update. // - // TargetAwsAccount is a required field - TargetAwsAccount *string `location:"querystring" locationName:"targetAwsAccount" type:"string" required:"true"` + // Currently one query version is supported: "2017-09-30". If not specified, + // the query version defaults to this value. + QueryVersion *string `locationName:"queryVersion" type:"string"` - // The transfer message. - TransferMessage *string `locationName:"transferMessage" type:"string"` + // The name of the dynamic thing group to update. + // + // ThingGroupName is a required field + ThingGroupName *string `location:"uri" locationName:"thingGroupName" min:"1" type:"string" required:"true"` + + // The dynamic thing group properties to update. + // + // ThingGroupProperties is a required field + ThingGroupProperties *ThingGroupProperties `locationName:"thingGroupProperties" type:"structure" required:"true"` } // String returns the string representation -func (s TransferCertificateInput) String() string { +func (s UpdateDynamicThingGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TransferCertificateInput) GoString() string { +func (s UpdateDynamicThingGroupInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *TransferCertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "TransferCertificateInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) +func (s *UpdateDynamicThingGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateDynamicThingGroupInput"} + if s.IndexName != nil && len(*s.IndexName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexName", 1)) } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + if s.QueryString != nil && len(*s.QueryString) < 1 { + invalidParams.Add(request.NewErrParamMinLen("QueryString", 1)) } - if s.TargetAwsAccount == nil { - invalidParams.Add(request.NewErrParamRequired("TargetAwsAccount")) + if s.ThingGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupName")) + } + if s.ThingGroupName != nil && len(*s.ThingGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ThingGroupName", 1)) + } + if s.ThingGroupProperties == nil { + invalidParams.Add(request.NewErrParamRequired("ThingGroupProperties")) } if invalidParams.Len() > 0 { @@ -25317,150 +35383,134 @@ func (s *TransferCertificateInput) Validate() error { return nil } -// SetCertificateId sets the CertificateId field's value. -func (s *TransferCertificateInput) SetCertificateId(v string) *TransferCertificateInput { - s.CertificateId = &v +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *UpdateDynamicThingGroupInput) SetExpectedVersion(v int64) *UpdateDynamicThingGroupInput { + s.ExpectedVersion = &v return s } -// SetTargetAwsAccount sets the TargetAwsAccount field's value. -func (s *TransferCertificateInput) SetTargetAwsAccount(v string) *TransferCertificateInput { - s.TargetAwsAccount = &v +// SetIndexName sets the IndexName field's value. 
+func (s *UpdateDynamicThingGroupInput) SetIndexName(v string) *UpdateDynamicThingGroupInput { + s.IndexName = &v return s } -// SetTransferMessage sets the TransferMessage field's value. -func (s *TransferCertificateInput) SetTransferMessage(v string) *TransferCertificateInput { - s.TransferMessage = &v +// SetQueryString sets the QueryString field's value. +func (s *UpdateDynamicThingGroupInput) SetQueryString(v string) *UpdateDynamicThingGroupInput { + s.QueryString = &v return s } -// The output from the TransferCertificate operation. -type TransferCertificateOutput struct { +// SetQueryVersion sets the QueryVersion field's value. +func (s *UpdateDynamicThingGroupInput) SetQueryVersion(v string) *UpdateDynamicThingGroupInput { + s.QueryVersion = &v + return s +} + +// SetThingGroupName sets the ThingGroupName field's value. +func (s *UpdateDynamicThingGroupInput) SetThingGroupName(v string) *UpdateDynamicThingGroupInput { + s.ThingGroupName = &v + return s +} + +// SetThingGroupProperties sets the ThingGroupProperties field's value. +func (s *UpdateDynamicThingGroupInput) SetThingGroupProperties(v *ThingGroupProperties) *UpdateDynamicThingGroupInput { + s.ThingGroupProperties = v + return s +} + +type UpdateDynamicThingGroupOutput struct { _ struct{} `type:"structure"` - // The ARN of the certificate. - TransferredCertificateArn *string `locationName:"transferredCertificateArn" type:"string"` + // The dynamic thing group version. + Version *int64 `locationName:"version" type:"long"` } // String returns the string representation -func (s TransferCertificateOutput) String() string { +func (s UpdateDynamicThingGroupOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TransferCertificateOutput) GoString() string { +func (s UpdateDynamicThingGroupOutput) GoString() string { return s.String() } -// SetTransferredCertificateArn sets the TransferredCertificateArn field's value. -func (s *TransferCertificateOutput) SetTransferredCertificateArn(v string) *TransferCertificateOutput { - s.TransferredCertificateArn = &v +// SetVersion sets the Version field's value. +func (s *UpdateDynamicThingGroupOutput) SetVersion(v int64) *UpdateDynamicThingGroupOutput { + s.Version = &v return s } -// Data used to transfer a certificate to an AWS account. -type TransferData struct { +type UpdateEventConfigurationsInput struct { _ struct{} `type:"structure"` - // The date the transfer was accepted. - AcceptDate *time.Time `locationName:"acceptDate" type:"timestamp" timestampFormat:"unix"` - - // The date the transfer was rejected. - RejectDate *time.Time `locationName:"rejectDate" type:"timestamp" timestampFormat:"unix"` - - // The reason why the transfer was rejected. - RejectReason *string `locationName:"rejectReason" type:"string"` - - // The date the transfer took place. - TransferDate *time.Time `locationName:"transferDate" type:"timestamp" timestampFormat:"unix"` - - // The transfer message. - TransferMessage *string `locationName:"transferMessage" type:"string"` + // The new event configuration values. 
+ EventConfigurations map[string]*Configuration `locationName:"eventConfigurations" type:"map"` } // String returns the string representation -func (s TransferData) String() string { +func (s UpdateEventConfigurationsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TransferData) GoString() string { +func (s UpdateEventConfigurationsInput) GoString() string { return s.String() } -// SetAcceptDate sets the AcceptDate field's value. -func (s *TransferData) SetAcceptDate(v time.Time) *TransferData { - s.AcceptDate = &v - return s -} - -// SetRejectDate sets the RejectDate field's value. -func (s *TransferData) SetRejectDate(v time.Time) *TransferData { - s.RejectDate = &v +// SetEventConfigurations sets the EventConfigurations field's value. +func (s *UpdateEventConfigurationsInput) SetEventConfigurations(v map[string]*Configuration) *UpdateEventConfigurationsInput { + s.EventConfigurations = v return s } -// SetRejectReason sets the RejectReason field's value. -func (s *TransferData) SetRejectReason(v string) *TransferData { - s.RejectReason = &v - return s +type UpdateEventConfigurationsOutput struct { + _ struct{} `type:"structure"` } -// SetTransferDate sets the TransferDate field's value. -func (s *TransferData) SetTransferDate(v time.Time) *TransferData { - s.TransferDate = &v - return s +// String returns the string representation +func (s UpdateEventConfigurationsOutput) String() string { + return awsutil.Prettify(s) } -// SetTransferMessage sets the TransferMessage field's value. -func (s *TransferData) SetTransferMessage(v string) *TransferData { - s.TransferMessage = &v - return s +// GoString returns the string representation +func (s UpdateEventConfigurationsOutput) GoString() string { + return s.String() } -type UpdateAuthorizerInput struct { +type UpdateIndexingConfigurationInput struct { _ struct{} `type:"structure"` - // The ARN of the authorizer's Lambda function. - AuthorizerFunctionArn *string `locationName:"authorizerFunctionArn" type:"string"` - - // The authorizer name. - // - // AuthorizerName is a required field - AuthorizerName *string `location:"uri" locationName:"authorizerName" min:"1" type:"string" required:"true"` - - // The status of the update authorizer request. - Status *string `locationName:"status" type:"string" enum:"AuthorizerStatus"` - - // The key used to extract the token from the HTTP headers. - TokenKeyName *string `locationName:"tokenKeyName" min:"1" type:"string"` + // Thing group indexing configuration. + ThingGroupIndexingConfiguration *ThingGroupIndexingConfiguration `locationName:"thingGroupIndexingConfiguration" type:"structure"` - // The public keys used to verify the token signature. - TokenSigningPublicKeys map[string]*string `locationName:"tokenSigningPublicKeys" type:"map"` + // Thing indexing configuration. + ThingIndexingConfiguration *ThingIndexingConfiguration `locationName:"thingIndexingConfiguration" type:"structure"` } // String returns the string representation -func (s UpdateAuthorizerInput) String() string { +func (s UpdateIndexingConfigurationInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateAuthorizerInput) GoString() string { +func (s UpdateIndexingConfigurationInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *UpdateAuthorizerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "UpdateAuthorizerInput"} - if s.AuthorizerName == nil { - invalidParams.Add(request.NewErrParamRequired("AuthorizerName")) - } - if s.AuthorizerName != nil && len(*s.AuthorizerName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("AuthorizerName", 1)) +func (s *UpdateIndexingConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateIndexingConfigurationInput"} + if s.ThingGroupIndexingConfiguration != nil { + if err := s.ThingGroupIndexingConfiguration.Validate(); err != nil { + invalidParams.AddNested("ThingGroupIndexingConfiguration", err.(request.ErrInvalidParams)) + } } - if s.TokenKeyName != nil && len(*s.TokenKeyName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TokenKeyName", 1)) + if s.ThingIndexingConfiguration != nil { + if err := s.ThingIndexingConfiguration.Validate(); err != nil { + invalidParams.AddNested("ThingIndexingConfiguration", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -25469,116 +35519,91 @@ func (s *UpdateAuthorizerInput) Validate() error { return nil } -// SetAuthorizerFunctionArn sets the AuthorizerFunctionArn field's value. -func (s *UpdateAuthorizerInput) SetAuthorizerFunctionArn(v string) *UpdateAuthorizerInput { - s.AuthorizerFunctionArn = &v - return s -} - -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *UpdateAuthorizerInput) SetAuthorizerName(v string) *UpdateAuthorizerInput { - s.AuthorizerName = &v - return s -} - -// SetStatus sets the Status field's value. -func (s *UpdateAuthorizerInput) SetStatus(v string) *UpdateAuthorizerInput { - s.Status = &v - return s -} - -// SetTokenKeyName sets the TokenKeyName field's value. -func (s *UpdateAuthorizerInput) SetTokenKeyName(v string) *UpdateAuthorizerInput { - s.TokenKeyName = &v +// SetThingGroupIndexingConfiguration sets the ThingGroupIndexingConfiguration field's value. +func (s *UpdateIndexingConfigurationInput) SetThingGroupIndexingConfiguration(v *ThingGroupIndexingConfiguration) *UpdateIndexingConfigurationInput { + s.ThingGroupIndexingConfiguration = v return s } -// SetTokenSigningPublicKeys sets the TokenSigningPublicKeys field's value. -func (s *UpdateAuthorizerInput) SetTokenSigningPublicKeys(v map[string]*string) *UpdateAuthorizerInput { - s.TokenSigningPublicKeys = v +// SetThingIndexingConfiguration sets the ThingIndexingConfiguration field's value. +func (s *UpdateIndexingConfigurationInput) SetThingIndexingConfiguration(v *ThingIndexingConfiguration) *UpdateIndexingConfigurationInput { + s.ThingIndexingConfiguration = v return s } -type UpdateAuthorizerOutput struct { +type UpdateIndexingConfigurationOutput struct { _ struct{} `type:"structure"` - - // The authorizer ARN. - AuthorizerArn *string `locationName:"authorizerArn" type:"string"` - - // The authorizer name. - AuthorizerName *string `locationName:"authorizerName" min:"1" type:"string"` } // String returns the string representation -func (s UpdateAuthorizerOutput) String() string { +func (s UpdateIndexingConfigurationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateAuthorizerOutput) GoString() string { +func (s UpdateIndexingConfigurationOutput) GoString() string { return s.String() } -// SetAuthorizerArn sets the AuthorizerArn field's value. 
-func (s *UpdateAuthorizerOutput) SetAuthorizerArn(v string) *UpdateAuthorizerOutput { - s.AuthorizerArn = &v - return s -} - -// SetAuthorizerName sets the AuthorizerName field's value. -func (s *UpdateAuthorizerOutput) SetAuthorizerName(v string) *UpdateAuthorizerOutput { - s.AuthorizerName = &v - return s -} - -// The input to the UpdateCACertificate operation. -type UpdateCACertificateInput struct { +type UpdateJobInput struct { _ struct{} `type:"structure"` - // The CA certificate identifier. - // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"caCertificateId" min:"64" type:"string" required:"true"` + // Allows you to create criteria to abort a job. + AbortConfig *AbortConfig `locationName:"abortConfig" type:"structure"` - // The new value for the auto registration status. Valid values are: "ENABLE" - // or "DISABLE". - NewAutoRegistrationStatus *string `location:"querystring" locationName:"newAutoRegistrationStatus" type:"string" enum:"AutoRegistrationStatus"` + // A short text description of the job. + Description *string `locationName:"description" type:"string"` - // The updated status of the CA certificate. - // - // Note: The status value REGISTER_INACTIVE is deprecated and should not be - // used. - NewStatus *string `location:"querystring" locationName:"newStatus" type:"string" enum:"CACertificateStatus"` + // Allows you to create a staged rollout of the job. + JobExecutionsRolloutConfig *JobExecutionsRolloutConfig `locationName:"jobExecutionsRolloutConfig" type:"structure"` + + // The ID of the job to be updated. + // + // JobId is a required field + JobId *string `location:"uri" locationName:"jobId" min:"1" type:"string" required:"true"` - // Information about the registration configuration. - RegistrationConfig *RegistrationConfig `locationName:"registrationConfig" type:"structure"` + // Configuration information for pre-signed S3 URLs. + PresignedUrlConfig *PresignedUrlConfig `locationName:"presignedUrlConfig" type:"structure"` - // If true, remove auto registration. - RemoveAutoRegistration *bool `locationName:"removeAutoRegistration" type:"boolean"` + // Specifies the amount of time each device has to finish its execution of the + // job. The timer is started when the job execution status is set to IN_PROGRESS. + // If the job execution status is not set to another terminal state before the + // time expires, it will be automatically set to TIMED_OUT. + TimeoutConfig *TimeoutConfig `locationName:"timeoutConfig" type:"structure"` } // String returns the string representation -func (s UpdateCACertificateInput) String() string { +func (s UpdateJobInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateCACertificateInput) GoString() string { +func (s UpdateJobInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *UpdateCACertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "UpdateCACertificateInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) +func (s *UpdateJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateJobInput"} + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + if s.JobId != nil && len(*s.JobId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("JobId", 1)) } - if s.RegistrationConfig != nil { - if err := s.RegistrationConfig.Validate(); err != nil { - invalidParams.AddNested("RegistrationConfig", err.(request.ErrInvalidParams)) + if s.AbortConfig != nil { + if err := s.AbortConfig.Validate(); err != nil { + invalidParams.AddNested("AbortConfig", err.(request.ErrInvalidParams)) + } + } + if s.JobExecutionsRolloutConfig != nil { + if err := s.JobExecutionsRolloutConfig.Validate(); err != nil { + invalidParams.AddNested("JobExecutionsRolloutConfig", err.(request.ErrInvalidParams)) + } + } + if s.PresignedUrlConfig != nil { + if err := s.PresignedUrlConfig.Validate(); err != nil { + invalidParams.AddNested("PresignedUrlConfig", err.(request.ErrInvalidParams)) } } @@ -25588,93 +35613,95 @@ func (s *UpdateCACertificateInput) Validate() error { return nil } -// SetCertificateId sets the CertificateId field's value. -func (s *UpdateCACertificateInput) SetCertificateId(v string) *UpdateCACertificateInput { - s.CertificateId = &v +// SetAbortConfig sets the AbortConfig field's value. +func (s *UpdateJobInput) SetAbortConfig(v *AbortConfig) *UpdateJobInput { + s.AbortConfig = v return s } -// SetNewAutoRegistrationStatus sets the NewAutoRegistrationStatus field's value. -func (s *UpdateCACertificateInput) SetNewAutoRegistrationStatus(v string) *UpdateCACertificateInput { - s.NewAutoRegistrationStatus = &v +// SetDescription sets the Description field's value. +func (s *UpdateJobInput) SetDescription(v string) *UpdateJobInput { + s.Description = &v return s } -// SetNewStatus sets the NewStatus field's value. -func (s *UpdateCACertificateInput) SetNewStatus(v string) *UpdateCACertificateInput { - s.NewStatus = &v +// SetJobExecutionsRolloutConfig sets the JobExecutionsRolloutConfig field's value. +func (s *UpdateJobInput) SetJobExecutionsRolloutConfig(v *JobExecutionsRolloutConfig) *UpdateJobInput { + s.JobExecutionsRolloutConfig = v return s } -// SetRegistrationConfig sets the RegistrationConfig field's value. -func (s *UpdateCACertificateInput) SetRegistrationConfig(v *RegistrationConfig) *UpdateCACertificateInput { - s.RegistrationConfig = v +// SetJobId sets the JobId field's value. +func (s *UpdateJobInput) SetJobId(v string) *UpdateJobInput { + s.JobId = &v return s } -// SetRemoveAutoRegistration sets the RemoveAutoRegistration field's value. -func (s *UpdateCACertificateInput) SetRemoveAutoRegistration(v bool) *UpdateCACertificateInput { - s.RemoveAutoRegistration = &v +// SetPresignedUrlConfig sets the PresignedUrlConfig field's value. +func (s *UpdateJobInput) SetPresignedUrlConfig(v *PresignedUrlConfig) *UpdateJobInput { + s.PresignedUrlConfig = v return s } -type UpdateCACertificateOutput struct { +// SetTimeoutConfig sets the TimeoutConfig field's value. 
+func (s *UpdateJobInput) SetTimeoutConfig(v *TimeoutConfig) *UpdateJobInput { + s.TimeoutConfig = v + return s +} + +type UpdateJobOutput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s UpdateCACertificateOutput) String() string { +func (s UpdateJobOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateCACertificateOutput) GoString() string { +func (s UpdateJobOutput) GoString() string { return s.String() } -// The input for the UpdateCertificate operation. -type UpdateCertificateInput struct { +type UpdateRoleAliasInput struct { _ struct{} `type:"structure"` - // The ID of the certificate. - // - // CertificateId is a required field - CertificateId *string `location:"uri" locationName:"certificateId" min:"64" type:"string" required:"true"` + // The number of seconds the credential will be valid. + CredentialDurationSeconds *int64 `locationName:"credentialDurationSeconds" min:"900" type:"integer"` - // The new status. - // - // Note: Setting the status to PENDING_TRANSFER will result in an exception - // being thrown. PENDING_TRANSFER is a status used internally by AWS IoT. It - // is not intended for developer use. - // - // Note: The status value REGISTER_INACTIVE is deprecated and should not be - // used. + // The role alias to update. // - // NewStatus is a required field - NewStatus *string `location:"querystring" locationName:"newStatus" type:"string" required:"true" enum:"CertificateStatus"` + // RoleAlias is a required field + RoleAlias *string `location:"uri" locationName:"roleAlias" min:"1" type:"string" required:"true"` + + // The role ARN. + RoleArn *string `locationName:"roleArn" min:"20" type:"string"` } // String returns the string representation -func (s UpdateCertificateInput) String() string { +func (s UpdateRoleAliasInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateCertificateInput) GoString() string { +func (s UpdateRoleAliasInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *UpdateCertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "UpdateCertificateInput"} - if s.CertificateId == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateId")) +func (s *UpdateRoleAliasInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateRoleAliasInput"} + if s.CredentialDurationSeconds != nil && *s.CredentialDurationSeconds < 900 { + invalidParams.Add(request.NewErrParamMinValue("CredentialDurationSeconds", 900)) } - if s.CertificateId != nil && len(*s.CertificateId) < 64 { - invalidParams.Add(request.NewErrParamMinLen("CertificateId", 64)) + if s.RoleAlias == nil { + invalidParams.Add(request.NewErrParamRequired("RoleAlias")) } - if s.NewStatus == nil { - invalidParams.Add(request.NewErrParamRequired("NewStatus")) + if s.RoleAlias != nil && len(*s.RoleAlias) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleAlias", 1)) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) } if invalidParams.Len() > 0 { @@ -25683,145 +35710,228 @@ func (s *UpdateCertificateInput) Validate() error { return nil } -// SetCertificateId sets the CertificateId field's value. 
-func (s *UpdateCertificateInput) SetCertificateId(v string) *UpdateCertificateInput { - s.CertificateId = &v +// SetCredentialDurationSeconds sets the CredentialDurationSeconds field's value. +func (s *UpdateRoleAliasInput) SetCredentialDurationSeconds(v int64) *UpdateRoleAliasInput { + s.CredentialDurationSeconds = &v return s } -// SetNewStatus sets the NewStatus field's value. -func (s *UpdateCertificateInput) SetNewStatus(v string) *UpdateCertificateInput { - s.NewStatus = &v +// SetRoleAlias sets the RoleAlias field's value. +func (s *UpdateRoleAliasInput) SetRoleAlias(v string) *UpdateRoleAliasInput { + s.RoleAlias = &v return s } -type UpdateCertificateOutput struct { +// SetRoleArn sets the RoleArn field's value. +func (s *UpdateRoleAliasInput) SetRoleArn(v string) *UpdateRoleAliasInput { + s.RoleArn = &v + return s +} + +type UpdateRoleAliasOutput struct { _ struct{} `type:"structure"` + + // The role alias. + RoleAlias *string `locationName:"roleAlias" min:"1" type:"string"` + + // The role alias ARN. + RoleAliasArn *string `locationName:"roleAliasArn" type:"string"` } // String returns the string representation -func (s UpdateCertificateOutput) String() string { +func (s UpdateRoleAliasOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateCertificateOutput) GoString() string { +func (s UpdateRoleAliasOutput) GoString() string { return s.String() } -type UpdateEventConfigurationsInput struct { +// SetRoleAlias sets the RoleAlias field's value. +func (s *UpdateRoleAliasOutput) SetRoleAlias(v string) *UpdateRoleAliasOutput { + s.RoleAlias = &v + return s +} + +// SetRoleAliasArn sets the RoleAliasArn field's value. +func (s *UpdateRoleAliasOutput) SetRoleAliasArn(v string) *UpdateRoleAliasOutput { + s.RoleAliasArn = &v + return s +} + +type UpdateScheduledAuditInput struct { _ struct{} `type:"structure"` - // The new event configuration values. - EventConfigurations map[string]*Configuration `locationName:"eventConfigurations" type:"map"` + // The day of the month on which the scheduled audit takes place. Can be "1" + // through "31" or "LAST". This field is required if the "frequency" parameter + // is set to "MONTHLY". If days 29-31 are specified, and the month does not + // have that many days, the audit takes place on the "LAST" day of the month. + DayOfMonth *string `locationName:"dayOfMonth" type:"string"` + + // The day of the week on which the scheduled audit takes place. Can be one + // of "SUN", "MON", "TUE", "WED", "THU", "FRI" or "SAT". This field is required + // if the "frequency" parameter is set to "WEEKLY" or "BIWEEKLY". + DayOfWeek *string `locationName:"dayOfWeek" type:"string" enum:"DayOfWeek"` + + // How often the scheduled audit takes place. Can be one of "DAILY", "WEEKLY", + // "BIWEEKLY" or "MONTHLY". The actual start time of each audit is determined + // by the system. + Frequency *string `locationName:"frequency" type:"string" enum:"AuditFrequency"` + + // The name of the scheduled audit. (Max. 128 chars) + // + // ScheduledAuditName is a required field + ScheduledAuditName *string `location:"uri" locationName:"scheduledAuditName" min:"1" type:"string" required:"true"` + + // Which checks are performed during the scheduled audit. Checks must be enabled + // for your account. (Use DescribeAccountAuditConfiguration to see the list + // of all checks including those that are enabled or UpdateAccountAuditConfiguration + // to select which checks are enabled.) 
+ TargetCheckNames []*string `locationName:"targetCheckNames" type:"list"` } // String returns the string representation -func (s UpdateEventConfigurationsInput) String() string { +func (s UpdateScheduledAuditInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateEventConfigurationsInput) GoString() string { +func (s UpdateScheduledAuditInput) GoString() string { return s.String() } -// SetEventConfigurations sets the EventConfigurations field's value. -func (s *UpdateEventConfigurationsInput) SetEventConfigurations(v map[string]*Configuration) *UpdateEventConfigurationsInput { - s.EventConfigurations = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateScheduledAuditInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateScheduledAuditInput"} + if s.ScheduledAuditName == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduledAuditName")) + } + if s.ScheduledAuditName != nil && len(*s.ScheduledAuditName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ScheduledAuditName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDayOfMonth sets the DayOfMonth field's value. +func (s *UpdateScheduledAuditInput) SetDayOfMonth(v string) *UpdateScheduledAuditInput { + s.DayOfMonth = &v return s } -type UpdateEventConfigurationsOutput struct { - _ struct{} `type:"structure"` +// SetDayOfWeek sets the DayOfWeek field's value. +func (s *UpdateScheduledAuditInput) SetDayOfWeek(v string) *UpdateScheduledAuditInput { + s.DayOfWeek = &v + return s } -// String returns the string representation -func (s UpdateEventConfigurationsOutput) String() string { - return awsutil.Prettify(s) +// SetFrequency sets the Frequency field's value. +func (s *UpdateScheduledAuditInput) SetFrequency(v string) *UpdateScheduledAuditInput { + s.Frequency = &v + return s } -// GoString returns the string representation -func (s UpdateEventConfigurationsOutput) GoString() string { - return s.String() +// SetScheduledAuditName sets the ScheduledAuditName field's value. +func (s *UpdateScheduledAuditInput) SetScheduledAuditName(v string) *UpdateScheduledAuditInput { + s.ScheduledAuditName = &v + return s } -type UpdateIndexingConfigurationInput struct { +// SetTargetCheckNames sets the TargetCheckNames field's value. +func (s *UpdateScheduledAuditInput) SetTargetCheckNames(v []*string) *UpdateScheduledAuditInput { + s.TargetCheckNames = v + return s +} + +type UpdateScheduledAuditOutput struct { _ struct{} `type:"structure"` - // Thing indexing configuration. - ThingIndexingConfiguration *ThingIndexingConfiguration `locationName:"thingIndexingConfiguration" type:"structure"` + // The ARN of the scheduled audit. + ScheduledAuditArn *string `locationName:"scheduledAuditArn" type:"string"` } // String returns the string representation -func (s UpdateIndexingConfigurationInput) String() string { +func (s UpdateScheduledAuditOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateIndexingConfigurationInput) GoString() string { +func (s UpdateScheduledAuditOutput) GoString() string { return s.String() } -// SetThingIndexingConfiguration sets the ThingIndexingConfiguration field's value. 
-func (s *UpdateIndexingConfigurationInput) SetThingIndexingConfiguration(v *ThingIndexingConfiguration) *UpdateIndexingConfigurationInput { - s.ThingIndexingConfiguration = v +// SetScheduledAuditArn sets the ScheduledAuditArn field's value. +func (s *UpdateScheduledAuditOutput) SetScheduledAuditArn(v string) *UpdateScheduledAuditOutput { + s.ScheduledAuditArn = &v return s } -type UpdateIndexingConfigurationOutput struct { +type UpdateSecurityProfileInput struct { _ struct{} `type:"structure"` -} -// String returns the string representation -func (s UpdateIndexingConfigurationOutput) String() string { - return awsutil.Prettify(s) -} + // Where the alerts are sent. (Alerts are always sent to the console.) + AlertTargets map[string]*AlertTarget `locationName:"alertTargets" type:"map"` -// GoString returns the string representation -func (s UpdateIndexingConfigurationOutput) GoString() string { - return s.String() -} + // Specifies the behaviors that, when violated by a device (thing), cause an + // alert. + Behaviors []*Behavior `locationName:"behaviors" type:"list"` -type UpdateRoleAliasInput struct { - _ struct{} `type:"structure"` + // The expected version of the security profile. A new version is generated + // whenever the security profile is updated. If you specify a value that is + // different than the actual version, a VersionConflictException is thrown. + ExpectedVersion *int64 `location:"querystring" locationName:"expectedVersion" type:"long"` - // The number of seconds the credential will be valid. - CredentialDurationSeconds *int64 `locationName:"credentialDurationSeconds" min:"900" type:"integer"` + // A description of the security profile. + SecurityProfileDescription *string `locationName:"securityProfileDescription" type:"string"` - // The role alias to update. + // The name of the security profile you want to update. // - // RoleAlias is a required field - RoleAlias *string `location:"uri" locationName:"roleAlias" min:"1" type:"string" required:"true"` - - // The role ARN. - RoleArn *string `locationName:"roleArn" min:"20" type:"string"` + // SecurityProfileName is a required field + SecurityProfileName *string `location:"uri" locationName:"securityProfileName" min:"1" type:"string" required:"true"` } // String returns the string representation -func (s UpdateRoleAliasInput) String() string { +func (s UpdateSecurityProfileInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateRoleAliasInput) GoString() string { +func (s UpdateSecurityProfileInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *UpdateRoleAliasInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "UpdateRoleAliasInput"} - if s.CredentialDurationSeconds != nil && *s.CredentialDurationSeconds < 900 { - invalidParams.Add(request.NewErrParamMinValue("CredentialDurationSeconds", 900)) +func (s *UpdateSecurityProfileInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSecurityProfileInput"} + if s.SecurityProfileName == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityProfileName")) } - if s.RoleAlias == nil { - invalidParams.Add(request.NewErrParamRequired("RoleAlias")) + if s.SecurityProfileName != nil && len(*s.SecurityProfileName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityProfileName", 1)) } - if s.RoleAlias != nil && len(*s.RoleAlias) < 1 { - invalidParams.Add(request.NewErrParamMinLen("RoleAlias", 1)) + if s.AlertTargets != nil { + for i, v := range s.AlertTargets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AlertTargets", i), err.(request.ErrInvalidParams)) + } + } } - if s.RoleArn != nil && len(*s.RoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + if s.Behaviors != nil { + for i, v := range s.Behaviors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Behaviors", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -25830,53 +35940,120 @@ func (s *UpdateRoleAliasInput) Validate() error { return nil } -// SetCredentialDurationSeconds sets the CredentialDurationSeconds field's value. -func (s *UpdateRoleAliasInput) SetCredentialDurationSeconds(v int64) *UpdateRoleAliasInput { - s.CredentialDurationSeconds = &v +// SetAlertTargets sets the AlertTargets field's value. +func (s *UpdateSecurityProfileInput) SetAlertTargets(v map[string]*AlertTarget) *UpdateSecurityProfileInput { + s.AlertTargets = v return s } -// SetRoleAlias sets the RoleAlias field's value. -func (s *UpdateRoleAliasInput) SetRoleAlias(v string) *UpdateRoleAliasInput { - s.RoleAlias = &v +// SetBehaviors sets the Behaviors field's value. +func (s *UpdateSecurityProfileInput) SetBehaviors(v []*Behavior) *UpdateSecurityProfileInput { + s.Behaviors = v + return s +} + +// SetExpectedVersion sets the ExpectedVersion field's value. +func (s *UpdateSecurityProfileInput) SetExpectedVersion(v int64) *UpdateSecurityProfileInput { + s.ExpectedVersion = &v + return s +} + +// SetSecurityProfileDescription sets the SecurityProfileDescription field's value. +func (s *UpdateSecurityProfileInput) SetSecurityProfileDescription(v string) *UpdateSecurityProfileInput { + s.SecurityProfileDescription = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *UpdateRoleAliasInput) SetRoleArn(v string) *UpdateRoleAliasInput { - s.RoleArn = &v +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *UpdateSecurityProfileInput) SetSecurityProfileName(v string) *UpdateSecurityProfileInput { + s.SecurityProfileName = &v return s } -type UpdateRoleAliasOutput struct { +type UpdateSecurityProfileOutput struct { _ struct{} `type:"structure"` - // The role alias. - RoleAlias *string `locationName:"roleAlias" min:"1" type:"string"` + // Where the alerts are sent. (Alerts are always sent to the console.) + AlertTargets map[string]*AlertTarget `locationName:"alertTargets" type:"map"` - // The role alias ARN. 
- RoleAliasArn *string `locationName:"roleAliasArn" type:"string"` + // Specifies the behaviors that, when violated by a device (thing), cause an + // alert. + Behaviors []*Behavior `locationName:"behaviors" type:"list"` + + // The time the security profile was created. + CreationDate *time.Time `locationName:"creationDate" type:"timestamp"` + + // The time the security profile was last modified. + LastModifiedDate *time.Time `locationName:"lastModifiedDate" type:"timestamp"` + + // The ARN of the security profile that was updated. + SecurityProfileArn *string `locationName:"securityProfileArn" type:"string"` + + // The description of the security profile. + SecurityProfileDescription *string `locationName:"securityProfileDescription" type:"string"` + + // The name of the security profile that was updated. + SecurityProfileName *string `locationName:"securityProfileName" min:"1" type:"string"` + + // The updated version of the security profile. + Version *int64 `locationName:"version" type:"long"` } // String returns the string representation -func (s UpdateRoleAliasOutput) String() string { +func (s UpdateSecurityProfileOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s UpdateRoleAliasOutput) GoString() string { +func (s UpdateSecurityProfileOutput) GoString() string { return s.String() } -// SetRoleAlias sets the RoleAlias field's value. -func (s *UpdateRoleAliasOutput) SetRoleAlias(v string) *UpdateRoleAliasOutput { - s.RoleAlias = &v +// SetAlertTargets sets the AlertTargets field's value. +func (s *UpdateSecurityProfileOutput) SetAlertTargets(v map[string]*AlertTarget) *UpdateSecurityProfileOutput { + s.AlertTargets = v return s } -// SetRoleAliasArn sets the RoleAliasArn field's value. -func (s *UpdateRoleAliasOutput) SetRoleAliasArn(v string) *UpdateRoleAliasOutput { - s.RoleAliasArn = &v +// SetBehaviors sets the Behaviors field's value. +func (s *UpdateSecurityProfileOutput) SetBehaviors(v []*Behavior) *UpdateSecurityProfileOutput { + s.Behaviors = v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *UpdateSecurityProfileOutput) SetCreationDate(v time.Time) *UpdateSecurityProfileOutput { + s.CreationDate = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *UpdateSecurityProfileOutput) SetLastModifiedDate(v time.Time) *UpdateSecurityProfileOutput { + s.LastModifiedDate = &v + return s +} + +// SetSecurityProfileArn sets the SecurityProfileArn field's value. +func (s *UpdateSecurityProfileOutput) SetSecurityProfileArn(v string) *UpdateSecurityProfileOutput { + s.SecurityProfileArn = &v + return s +} + +// SetSecurityProfileDescription sets the SecurityProfileDescription field's value. +func (s *UpdateSecurityProfileOutput) SetSecurityProfileDescription(v string) *UpdateSecurityProfileOutput { + s.SecurityProfileDescription = &v + return s +} + +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *UpdateSecurityProfileOutput) SetSecurityProfileName(v string) *UpdateSecurityProfileOutput { + s.SecurityProfileName = &v + return s +} + +// SetVersion sets the Version field's value. 
+func (s *UpdateSecurityProfileOutput) SetVersion(v int64) *UpdateSecurityProfileOutput { + s.Version = &v return s } @@ -26106,6 +36283,12 @@ func (s *UpdateThingGroupOutput) SetVersion(v int64) *UpdateThingGroupOutput { type UpdateThingGroupsForThingInput struct { _ struct{} `type:"structure"` + // Override dynamic thing groups with static thing groups when 10-group limit + // is reached. If a thing belongs to 10 thing groups, and one or more of those + // groups are dynamic thing groups, adding a thing to a static group removes + // the thing from the last dynamic group. + OverrideDynamicGroups *bool `locationName:"overrideDynamicGroups" type:"boolean"` + // The groups to which the thing will be added. ThingGroupsToAdd []*string `locationName:"thingGroupsToAdd" type:"list"` @@ -26139,6 +36322,12 @@ func (s *UpdateThingGroupsForThingInput) Validate() error { return nil } +// SetOverrideDynamicGroups sets the OverrideDynamicGroups field's value. +func (s *UpdateThingGroupsForThingInput) SetOverrideDynamicGroups(v bool) *UpdateThingGroupsForThingInput { + s.OverrideDynamicGroups = &v + return s +} + // SetThingGroupsToAdd sets the ThingGroupsToAdd field's value. func (s *UpdateThingGroupsForThingInput) SetThingGroupsToAdd(v []*string) *UpdateThingGroupsForThingInput { s.ThingGroupsToAdd = v @@ -26274,6 +36463,194 @@ func (s UpdateThingOutput) GoString() string { return s.String() } +type ValidateSecurityProfileBehaviorsInput struct { + _ struct{} `type:"structure"` + + // Specifies the behaviors that, when violated by a device (thing), cause an + // alert. + // + // Behaviors is a required field + Behaviors []*Behavior `locationName:"behaviors" type:"list" required:"true"` +} + +// String returns the string representation +func (s ValidateSecurityProfileBehaviorsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidateSecurityProfileBehaviorsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ValidateSecurityProfileBehaviorsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ValidateSecurityProfileBehaviorsInput"} + if s.Behaviors == nil { + invalidParams.Add(request.NewErrParamRequired("Behaviors")) + } + if s.Behaviors != nil { + for i, v := range s.Behaviors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Behaviors", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBehaviors sets the Behaviors field's value. +func (s *ValidateSecurityProfileBehaviorsInput) SetBehaviors(v []*Behavior) *ValidateSecurityProfileBehaviorsInput { + s.Behaviors = v + return s +} + +type ValidateSecurityProfileBehaviorsOutput struct { + _ struct{} `type:"structure"` + + // True if the behaviors were valid. + Valid *bool `locationName:"valid" type:"boolean"` + + // The list of any errors found in the behaviors. + ValidationErrors []*ValidationError `locationName:"validationErrors" type:"list"` +} + +// String returns the string representation +func (s ValidateSecurityProfileBehaviorsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidateSecurityProfileBehaviorsOutput) GoString() string { + return s.String() +} + +// SetValid sets the Valid field's value. 
+func (s *ValidateSecurityProfileBehaviorsOutput) SetValid(v bool) *ValidateSecurityProfileBehaviorsOutput { + s.Valid = &v + return s +} + +// SetValidationErrors sets the ValidationErrors field's value. +func (s *ValidateSecurityProfileBehaviorsOutput) SetValidationErrors(v []*ValidationError) *ValidateSecurityProfileBehaviorsOutput { + s.ValidationErrors = v + return s +} + +// Information about an error found in a behavior specification. +type ValidationError struct { + _ struct{} `type:"structure"` + + // The description of an error found in the behaviors. + ErrorMessage *string `locationName:"errorMessage" type:"string"` +} + +// String returns the string representation +func (s ValidationError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidationError) GoString() string { + return s.String() +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *ValidationError) SetErrorMessage(v string) *ValidationError { + s.ErrorMessage = &v + return s +} + +// Information about a Device Defender security profile behavior violation. +type ViolationEvent struct { + _ struct{} `type:"structure"` + + // The behavior which was violated. + Behavior *Behavior `locationName:"behavior" type:"structure"` + + // The value of the metric (the measurement). + MetricValue *MetricValue `locationName:"metricValue" type:"structure"` + + // The name of the security profile whose behavior was violated. + SecurityProfileName *string `locationName:"securityProfileName" min:"1" type:"string"` + + // The name of the thing responsible for the violation event. + ThingName *string `locationName:"thingName" min:"1" type:"string"` + + // The time the violation event occurred. + ViolationEventTime *time.Time `locationName:"violationEventTime" type:"timestamp"` + + // The type of violation event. + ViolationEventType *string `locationName:"violationEventType" type:"string" enum:"ViolationEventType"` + + // The ID of the violation event. + ViolationId *string `locationName:"violationId" min:"1" type:"string"` +} + +// String returns the string representation +func (s ViolationEvent) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ViolationEvent) GoString() string { + return s.String() +} + +// SetBehavior sets the Behavior field's value. +func (s *ViolationEvent) SetBehavior(v *Behavior) *ViolationEvent { + s.Behavior = v + return s +} + +// SetMetricValue sets the MetricValue field's value. +func (s *ViolationEvent) SetMetricValue(v *MetricValue) *ViolationEvent { + s.MetricValue = v + return s +} + +// SetSecurityProfileName sets the SecurityProfileName field's value. +func (s *ViolationEvent) SetSecurityProfileName(v string) *ViolationEvent { + s.SecurityProfileName = &v + return s +} + +// SetThingName sets the ThingName field's value. +func (s *ViolationEvent) SetThingName(v string) *ViolationEvent { + s.ThingName = &v + return s +} + +// SetViolationEventTime sets the ViolationEventTime field's value. +func (s *ViolationEvent) SetViolationEventTime(v time.Time) *ViolationEvent { + s.ViolationEventTime = &v + return s +} + +// SetViolationEventType sets the ViolationEventType field's value. +func (s *ViolationEvent) SetViolationEventType(v string) *ViolationEvent { + s.ViolationEventType = &v + return s +} + +// SetViolationId sets the ViolationId field's value. 
+func (s *ViolationEvent) SetViolationId(v string) *ViolationEvent { + s.ViolationId = &v + return s +} + +const ( + // AbortActionCancel is a AbortAction enum value + AbortActionCancel = "CANCEL" +) + const ( // ActionTypePublish is a ActionType enum value ActionTypePublish = "PUBLISH" @@ -26288,6 +36665,87 @@ const ( ActionTypeConnect = "CONNECT" ) +// The type of alert target: one of "SNS". +const ( + // AlertTargetTypeSns is a AlertTargetType enum value + AlertTargetTypeSns = "SNS" +) + +const ( + // AuditCheckRunStatusInProgress is a AuditCheckRunStatus enum value + AuditCheckRunStatusInProgress = "IN_PROGRESS" + + // AuditCheckRunStatusWaitingForDataCollection is a AuditCheckRunStatus enum value + AuditCheckRunStatusWaitingForDataCollection = "WAITING_FOR_DATA_COLLECTION" + + // AuditCheckRunStatusCanceled is a AuditCheckRunStatus enum value + AuditCheckRunStatusCanceled = "CANCELED" + + // AuditCheckRunStatusCompletedCompliant is a AuditCheckRunStatus enum value + AuditCheckRunStatusCompletedCompliant = "COMPLETED_COMPLIANT" + + // AuditCheckRunStatusCompletedNonCompliant is a AuditCheckRunStatus enum value + AuditCheckRunStatusCompletedNonCompliant = "COMPLETED_NON_COMPLIANT" + + // AuditCheckRunStatusFailed is a AuditCheckRunStatus enum value + AuditCheckRunStatusFailed = "FAILED" +) + +const ( + // AuditFindingSeverityCritical is a AuditFindingSeverity enum value + AuditFindingSeverityCritical = "CRITICAL" + + // AuditFindingSeverityHigh is a AuditFindingSeverity enum value + AuditFindingSeverityHigh = "HIGH" + + // AuditFindingSeverityMedium is a AuditFindingSeverity enum value + AuditFindingSeverityMedium = "MEDIUM" + + // AuditFindingSeverityLow is a AuditFindingSeverity enum value + AuditFindingSeverityLow = "LOW" +) + +const ( + // AuditFrequencyDaily is a AuditFrequency enum value + AuditFrequencyDaily = "DAILY" + + // AuditFrequencyWeekly is a AuditFrequency enum value + AuditFrequencyWeekly = "WEEKLY" + + // AuditFrequencyBiweekly is a AuditFrequency enum value + AuditFrequencyBiweekly = "BIWEEKLY" + + // AuditFrequencyMonthly is a AuditFrequency enum value + AuditFrequencyMonthly = "MONTHLY" +) + +const ( + // AuditNotificationTypeSns is a AuditNotificationType enum value + AuditNotificationTypeSns = "SNS" +) + +const ( + // AuditTaskStatusInProgress is a AuditTaskStatus enum value + AuditTaskStatusInProgress = "IN_PROGRESS" + + // AuditTaskStatusCompleted is a AuditTaskStatus enum value + AuditTaskStatusCompleted = "COMPLETED" + + // AuditTaskStatusFailed is a AuditTaskStatus enum value + AuditTaskStatusFailed = "FAILED" + + // AuditTaskStatusCanceled is a AuditTaskStatus enum value + AuditTaskStatusCanceled = "CANCELED" +) + +const ( + // AuditTaskTypeOnDemandAuditTask is a AuditTaskType enum value + AuditTaskTypeOnDemandAuditTask = "ON_DEMAND_AUDIT_TASK" + + // AuditTaskTypeScheduledAuditTask is a AuditTaskType enum value + AuditTaskTypeScheduledAuditTask = "SCHEDULED_AUDIT_TASK" +) + const ( // AuthDecisionAllowed is a AuthDecision enum value AuthDecisionAllowed = "ALLOWED" @@ -26369,6 +36827,66 @@ const ( CertificateStatusPendingActivation = "PENDING_ACTIVATION" ) +const ( + // ComparisonOperatorLessThan is a ComparisonOperator enum value + ComparisonOperatorLessThan = "less-than" + + // ComparisonOperatorLessThanEquals is a ComparisonOperator enum value + ComparisonOperatorLessThanEquals = "less-than-equals" + + // ComparisonOperatorGreaterThan is a ComparisonOperator enum value + ComparisonOperatorGreaterThan = "greater-than" + + // 
ComparisonOperatorGreaterThanEquals is a ComparisonOperator enum value + ComparisonOperatorGreaterThanEquals = "greater-than-equals" + + // ComparisonOperatorInCidrSet is a ComparisonOperator enum value + ComparisonOperatorInCidrSet = "in-cidr-set" + + // ComparisonOperatorNotInCidrSet is a ComparisonOperator enum value + ComparisonOperatorNotInCidrSet = "not-in-cidr-set" + + // ComparisonOperatorInPortSet is a ComparisonOperator enum value + ComparisonOperatorInPortSet = "in-port-set" + + // ComparisonOperatorNotInPortSet is a ComparisonOperator enum value + ComparisonOperatorNotInPortSet = "not-in-port-set" +) + +const ( + // DayOfWeekSun is a DayOfWeek enum value + DayOfWeekSun = "SUN" + + // DayOfWeekMon is a DayOfWeek enum value + DayOfWeekMon = "MON" + + // DayOfWeekTue is a DayOfWeek enum value + DayOfWeekTue = "TUE" + + // DayOfWeekWed is a DayOfWeek enum value + DayOfWeekWed = "WED" + + // DayOfWeekThu is a DayOfWeek enum value + DayOfWeekThu = "THU" + + // DayOfWeekFri is a DayOfWeek enum value + DayOfWeekFri = "FRI" + + // DayOfWeekSat is a DayOfWeek enum value + DayOfWeekSat = "SAT" +) + +const ( + // DynamicGroupStatusActive is a DynamicGroupStatus enum value + DynamicGroupStatusActive = "ACTIVE" + + // DynamicGroupStatusBuilding is a DynamicGroupStatus enum value + DynamicGroupStatusBuilding = "BUILDING" + + // DynamicGroupStatusRebuilding is a DynamicGroupStatus enum value + DynamicGroupStatusRebuilding = "REBUILDING" +) + const ( // DynamoKeyTypeString is a DynamoKeyType enum value DynamoKeyTypeString = "STRING" @@ -26401,6 +36919,15 @@ const ( // EventTypeJobExecution is a EventType enum value EventTypeJobExecution = "JOB_EXECUTION" + + // EventTypePolicy is a EventType enum value + EventTypePolicy = "POLICY" + + // EventTypeCertificate is a EventType enum value + EventTypeCertificate = "CERTIFICATE" + + // EventTypeCaCertificate is a EventType enum value + EventTypeCaCertificate = "CA_CERTIFICATE" ) const ( @@ -26414,6 +36941,20 @@ const ( IndexStatusRebuilding = "REBUILDING" ) +const ( + // JobExecutionFailureTypeFailed is a JobExecutionFailureType enum value + JobExecutionFailureTypeFailed = "FAILED" + + // JobExecutionFailureTypeRejected is a JobExecutionFailureType enum value + JobExecutionFailureTypeRejected = "REJECTED" + + // JobExecutionFailureTypeTimedOut is a JobExecutionFailureType enum value + JobExecutionFailureTypeTimedOut = "TIMED_OUT" + + // JobExecutionFailureTypeAll is a JobExecutionFailureType enum value + JobExecutionFailureTypeAll = "ALL" +) + const ( // JobExecutionStatusQueued is a JobExecutionStatus enum value JobExecutionStatusQueued = "QUEUED" @@ -26427,6 +36968,9 @@ const ( // JobExecutionStatusFailed is a JobExecutionStatus enum value JobExecutionStatusFailed = "FAILED" + // JobExecutionStatusTimedOut is a JobExecutionStatus enum value + JobExecutionStatusTimedOut = "TIMED_OUT" + // JobExecutionStatusRejected is a JobExecutionStatus enum value JobExecutionStatusRejected = "REJECTED" @@ -26446,6 +36990,9 @@ const ( // JobStatusCompleted is a JobStatus enum value JobStatusCompleted = "COMPLETED" + + // JobStatusDeletionInProgress is a JobStatus enum value + JobStatusDeletionInProgress = "DELETION_IN_PROGRESS" ) const ( @@ -26503,6 +37050,26 @@ const ( ReportTypeResults = "RESULTS" ) +const ( + // ResourceTypeDeviceCertificate is a ResourceType enum value + ResourceTypeDeviceCertificate = "DEVICE_CERTIFICATE" + + // ResourceTypeCaCertificate is a ResourceType enum value + ResourceTypeCaCertificate = "CA_CERTIFICATE" + + // ResourceTypeIotPolicy is 
a ResourceType enum value + ResourceTypeIotPolicy = "IOT_POLICY" + + // ResourceTypeCognitoIdentityPool is a ResourceType enum value + ResourceTypeCognitoIdentityPool = "COGNITO_IDENTITY_POOL" + + // ResourceTypeClientId is a ResourceType enum value + ResourceTypeClientId = "CLIENT_ID" + + // ResourceTypeAccountSettings is a ResourceType enum value + ResourceTypeAccountSettings = "ACCOUNT_SETTINGS" +) + const ( // StatusInProgress is a Status enum value StatusInProgress = "InProgress" @@ -26528,6 +37095,22 @@ const ( TargetSelectionSnapshot = "SNAPSHOT" ) +const ( + // ThingConnectivityIndexingModeOff is a ThingConnectivityIndexingMode enum value + ThingConnectivityIndexingModeOff = "OFF" + + // ThingConnectivityIndexingModeStatus is a ThingConnectivityIndexingMode enum value + ThingConnectivityIndexingModeStatus = "STATUS" +) + +const ( + // ThingGroupIndexingModeOff is a ThingGroupIndexingMode enum value + ThingGroupIndexingModeOff = "OFF" + + // ThingGroupIndexingModeOn is a ThingGroupIndexingMode enum value + ThingGroupIndexingModeOn = "ON" +) + const ( // ThingIndexingModeOff is a ThingIndexingMode enum value ThingIndexingModeOff = "OFF" @@ -26538,3 +37121,14 @@ const ( // ThingIndexingModeRegistryAndShadow is a ThingIndexingMode enum value ThingIndexingModeRegistryAndShadow = "REGISTRY_AND_SHADOW" ) + +const ( + // ViolationEventTypeInAlarm is a ViolationEventType enum value + ViolationEventTypeInAlarm = "in-alarm" + + // ViolationEventTypeAlarmCleared is a ViolationEventType enum value + ViolationEventTypeAlarmCleared = "alarm-cleared" + + // ViolationEventTypeAlarmInvalidated is a ViolationEventType enum value + ViolationEventTypeAlarmInvalidated = "alarm-invalidated" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/iot/doc.go b/vendor/github.com/aws/aws-sdk-go/service/iot/doc.go index 92337e2e6ef..ce15b4e96a9 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/iot/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/iot/doc.go @@ -4,14 +4,17 @@ // requests to AWS IoT. // // AWS IoT provides secure, bi-directional communication between Internet-connected -// things (such as sensors, actuators, embedded devices, or smart appliances) +// devices (such as sensors, actuators, embedded devices, or smart appliances) // and the AWS cloud. You can discover your custom IoT-Data endpoint to communicate // with, configure rules for data processing and integration with other services, -// organize resources associated with each thing (Thing Registry), configure -// logging, and create and manage policies and credentials to authenticate things. +// organize resources associated with each device (Registry), configure logging, +// and create and manage policies and credentials to authenticate devices. // // For more information about how AWS IoT works, see the Developer Guide (http://docs.aws.amazon.com/iot/latest/developerguide/aws-iot-how-it-works.html). // +// For information about how to use the credentials provider for AWS IoT, see +// Authorizing Direct Calls to AWS Services (http://docs.aws.amazon.com/iot/latest/developerguide/authorizing-direct-aws.html). +// // See iot package documentation for more information. 
// https://docs.aws.amazon.com/sdk-for-go/api/service/iot/ // diff --git a/vendor/github.com/aws/aws-sdk-go/service/iot/errors.go b/vendor/github.com/aws/aws-sdk-go/service/iot/errors.go index 6edb1070613..8320d782bda 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/iot/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/iot/errors.go @@ -73,10 +73,18 @@ const ( // The response is invalid. ErrCodeInvalidResponseException = "InvalidResponseException" + // ErrCodeInvalidStateTransitionException for service response error code + // "InvalidStateTransitionException". + // + // An attempt was made to change to an invalid state, for example by deleting + // a job or a job execution which is "IN_PROGRESS" without setting the force + // parameter. + ErrCodeInvalidStateTransitionException = "InvalidStateTransitionException" + // ErrCodeLimitExceededException for service response error code // "LimitExceededException". // - // The number of attached entities exceeds the limit. + // A limit has been exceeded. ErrCodeLimitExceededException = "LimitExceededException" // ErrCodeMalformedPolicyException for service response error code @@ -156,8 +164,8 @@ const ( // ErrCodeVersionConflictException for service response error code // "VersionConflictException". // - // An exception thrown when the version of a thing passed to a command is different - // than the version specified with the --version parameter. + // An exception thrown when the version of an entity specified with the expectedVersion + // parameter does not match the latest version in the system. ErrCodeVersionConflictException = "VersionConflictException" // ErrCodeVersionsLimitExceededException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/iot/service.go b/vendor/github.com/aws/aws-sdk-go/service/iot/service.go index 530757a190c..10a95d5607c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/iot/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/iot/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "iot" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "iot" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "IoT" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the IoT client with a session. @@ -45,19 +46,20 @@ const ( // svc := iot.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *IoT { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "execute-api" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. 
func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *IoT { - if len(signingName) == 0 { - signingName = "execute-api" - } svc := &IoT{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/kinesis/api.go b/vendor/github.com/aws/aws-sdk-go/service/kinesis/api.go index 3d6e3cd1275..7b3cdcf2639 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/kinesis/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/kinesis/api.go @@ -17,8 +17,8 @@ const opAddTagsToStream = "AddTagsToStream" // AddTagsToStreamRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -59,8 +59,10 @@ func (c *Kinesis) AddTagsToStreamRequest(input *AddTagsToStreamInput) (req *requ // AddTagsToStream API operation for Amazon Kinesis. // -// Adds or updates tags for the specified Kinesis data stream. Each stream can -// have up to 10 tags. +// Adds or updates tags for the specified Kinesis data stream. Each time you +// invoke this operation, you can specify up to 10 tags. If you want to add +// more than 10 tags to your stream, you can invoke this operation multiple +// times. In total, each stream can have up to 50 tags. // // If tags have already been assigned to the stream, AddTagsToStream overwrites // any existing tags that correspond to the specified tag keys. @@ -117,8 +119,8 @@ const opCreateStream = "CreateStream" // CreateStreamRequest generates a "aws/request.Request" representing the // client's request for the CreateStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -245,8 +247,8 @@ const opDecreaseStreamRetentionPeriod = "DecreaseStreamRetentionPeriod" // DecreaseStreamRetentionPeriodRequest generates a "aws/request.Request" representing the // client's request for the DecreaseStreamRetentionPeriod operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -345,8 +347,8 @@ const opDeleteStream = "DeleteStream" // DeleteStreamRequest generates a "aws/request.Request" representing the // client's request for the DeleteStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -424,6 +426,10 @@ func (c *Kinesis) DeleteStreamRequest(input *DeleteStreamInput) (req *request.Re // The requested resource exceeds the maximum number allowed, or the number // of concurrent stream requests exceeds the maximum number allowed. // +// * ErrCodeResourceInUseException "ResourceInUseException" +// The resource is not available for this operation. For successful operation, +// the resource must be in the ACTIVE state. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/DeleteStream func (c *Kinesis) DeleteStream(input *DeleteStreamInput) (*DeleteStreamOutput, error) { req, out := c.DeleteStreamRequest(input) @@ -446,12 +452,111 @@ func (c *Kinesis) DeleteStreamWithContext(ctx aws.Context, input *DeleteStreamIn return out, req.Send() } +const opDeregisterStreamConsumer = "DeregisterStreamConsumer" + +// DeregisterStreamConsumerRequest generates a "aws/request.Request" representing the +// client's request for the DeregisterStreamConsumer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeregisterStreamConsumer for more information on using the DeregisterStreamConsumer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeregisterStreamConsumerRequest method. +// req, resp := client.DeregisterStreamConsumerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/DeregisterStreamConsumer +func (c *Kinesis) DeregisterStreamConsumerRequest(input *DeregisterStreamConsumerInput) (req *request.Request, output *DeregisterStreamConsumerOutput) { + op := &request.Operation{ + Name: opDeregisterStreamConsumer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeregisterStreamConsumerInput{} + } + + output = &DeregisterStreamConsumerOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeregisterStreamConsumer API operation for Amazon Kinesis. +// +// To deregister a consumer, provide its ARN. Alternatively, you can provide +// the ARN of the data stream and the name you gave the consumer when you registered +// it. You may also provide all three parameters, as long as they don't conflict +// with each other. If you don't know the name or ARN of the consumer that you +// want to deregister, you can use the ListStreamConsumers operation to get +// a list of the descriptions of all the consumers that are currently registered +// with a given data stream. The description of a consumer contains its name +// and ARN. +// +// This operation has a limit of five transactions per second per account. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis's +// API operation DeregisterStreamConsumer for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceededException" +// The requested resource exceeds the maximum number allowed, or the number +// of concurrent stream requests exceeds the maximum number allowed. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The requested resource could not be found. The stream might not be specified +// correctly. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// A specified parameter exceeds its restrictions, is not supported, or can't +// be used. For more information, see the returned message. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/DeregisterStreamConsumer +func (c *Kinesis) DeregisterStreamConsumer(input *DeregisterStreamConsumerInput) (*DeregisterStreamConsumerOutput, error) { + req, out := c.DeregisterStreamConsumerRequest(input) + return out, req.Send() +} + +// DeregisterStreamConsumerWithContext is the same as DeregisterStreamConsumer with the addition of +// the ability to pass a context and additional request options. +// +// See DeregisterStreamConsumer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Kinesis) DeregisterStreamConsumerWithContext(ctx aws.Context, input *DeregisterStreamConsumerInput, opts ...request.Option) (*DeregisterStreamConsumerOutput, error) { + req, out := c.DeregisterStreamConsumerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeLimits = "DescribeLimits" // DescribeLimitsRequest generates a "aws/request.Request" representing the // client's request for the DescribeLimits operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -535,8 +640,8 @@ const opDescribeStream = "DescribeStream" // DescribeStreamRequest generates a "aws/request.Request" representing the // client's request for the DescribeStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -689,12 +794,108 @@ func (c *Kinesis) DescribeStreamPagesWithContext(ctx aws.Context, input *Describ return p.Err() } +const opDescribeStreamConsumer = "DescribeStreamConsumer" + +// DescribeStreamConsumerRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStreamConsumer operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStreamConsumer for more information on using the DescribeStreamConsumer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStreamConsumerRequest method. +// req, resp := client.DescribeStreamConsumerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/DescribeStreamConsumer +func (c *Kinesis) DescribeStreamConsumerRequest(input *DescribeStreamConsumerInput) (req *request.Request, output *DescribeStreamConsumerOutput) { + op := &request.Operation{ + Name: opDescribeStreamConsumer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStreamConsumerInput{} + } + + output = &DescribeStreamConsumerOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStreamConsumer API operation for Amazon Kinesis. +// +// To get the description of a registered consumer, provide the ARN of the consumer. +// Alternatively, you can provide the ARN of the data stream and the name you +// gave the consumer when you registered it. You may also provide all three +// parameters, as long as they don't conflict with each other. If you don't +// know the name or ARN of the consumer that you want to describe, you can use +// the ListStreamConsumers operation to get a list of the descriptions of all +// the consumers that are currently registered with a given data stream. +// +// This operation has a limit of 20 transactions per second per account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis's +// API operation DescribeStreamConsumer for usage and error information. +// +// Returned Error Codes: +// * ErrCodeLimitExceededException "LimitExceededException" +// The requested resource exceeds the maximum number allowed, or the number +// of concurrent stream requests exceeds the maximum number allowed. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The requested resource could not be found. The stream might not be specified +// correctly. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// A specified parameter exceeds its restrictions, is not supported, or can't +// be used. For more information, see the returned message. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/DescribeStreamConsumer +func (c *Kinesis) DescribeStreamConsumer(input *DescribeStreamConsumerInput) (*DescribeStreamConsumerOutput, error) { + req, out := c.DescribeStreamConsumerRequest(input) + return out, req.Send() +} + +// DescribeStreamConsumerWithContext is the same as DescribeStreamConsumer with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStreamConsumer for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Kinesis) DescribeStreamConsumerWithContext(ctx aws.Context, input *DescribeStreamConsumerInput, opts ...request.Option) (*DescribeStreamConsumerOutput, error) { + req, out := c.DescribeStreamConsumerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeStreamSummary = "DescribeStreamSummary" // DescribeStreamSummaryRequest generates a "aws/request.Request" representing the // client's request for the DescribeStreamSummary operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -782,8 +983,8 @@ const opDisableEnhancedMonitoring = "DisableEnhancedMonitoring" // DisableEnhancedMonitoringRequest generates a "aws/request.Request" representing the // client's request for the DisableEnhancedMonitoring operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -874,8 +1075,8 @@ const opEnableEnhancedMonitoring = "EnableEnhancedMonitoring" // EnableEnhancedMonitoringRequest generates a "aws/request.Request" representing the // client's request for the EnableEnhancedMonitoring operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -966,8 +1167,8 @@ const opGetRecords = "GetRecords" // GetRecordsRequest generates a "aws/request.Request" representing the // client's request for the GetRecords operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1029,20 +1230,21 @@ func (c *Kinesis) GetRecordsRequest(input *GetRecordsInput) (req *request.Reques // is closed, or when the shard iterator reaches the record with the sequence // number or other attribute that marks it as the last record to process. // -// Each data record can be up to 1 MB in size, and each shard can read up to -// 2 MB per second. You can ensure that your calls don't exceed the maximum +// Each data record can be up to 1 MiB in size, and each shard can read up to +// 2 MiB per second. 
You can ensure that your calls don't exceed the maximum // supported size or throughput by using the Limit parameter to specify the // maximum number of records that GetRecords can return. Consider your average -// record size when determining this limit. +// record size when determining this limit. The maximum number of records that +// can be returned per call is 10,000. // // The size of the data returned by GetRecords varies depending on the utilization -// of the shard. The maximum size of data that GetRecords can return is 10 MB. +// of the shard. The maximum size of data that GetRecords can return is 10 MiB. // If a call returns this amount of data, subsequent calls made within the next -// five seconds throw ProvisionedThroughputExceededException. If there is insufficient +// 5 seconds throw ProvisionedThroughputExceededException. If there is insufficient // provisioned throughput on the stream, subsequent calls made within the next -// one second throw ProvisionedThroughputExceededException. GetRecords won't +// 1 second throw ProvisionedThroughputExceededException. GetRecords doesn't // return any data when it throws an exception. For this reason, we recommend -// that you wait one second between calls to GetRecords; however, it's possible +// that you wait 1 second between calls to GetRecords. However, it's possible // that the application will get exceptions for longer than 1 second. // // To detect whether the application is falling behind in processing, you can @@ -1060,6 +1262,8 @@ func (c *Kinesis) GetRecordsRequest(input *GetRecordsInput) (req *request.Reques // always increasing. For example, records in a shard or across a stream might // have time stamps that are out of order. // +// This operation has a limit of five transactions per second per account. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1139,8 +1343,8 @@ const opGetShardIterator = "GetShardIterator" // GetShardIteratorRequest generates a "aws/request.Request" representing the // client's request for the GetShardIterator operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1179,7 +1383,7 @@ func (c *Kinesis) GetShardIteratorRequest(input *GetShardIteratorInput) (req *re // GetShardIterator API operation for Amazon Kinesis. // -// Gets an Amazon Kinesis shard iterator. A shard iterator expires five minutes +// Gets an Amazon Kinesis shard iterator. A shard iterator expires 5 minutes // after it is returned to the requester. // // A shard iterator specifies the shard position from which to start reading @@ -1269,8 +1473,8 @@ const opIncreaseStreamRetentionPeriod = "IncreaseStreamRetentionPeriod" // IncreaseStreamRetentionPeriodRequest generates a "aws/request.Request" representing the // client's request for the IncreaseStreamRetentionPeriod operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1373,8 +1577,8 @@ const opListShards = "ListShards" // ListShardsRequest generates a "aws/request.Request" representing the // client's request for the ListShards operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1413,7 +1617,8 @@ func (c *Kinesis) ListShardsRequest(input *ListShardsInput) (req *request.Reques // ListShards API operation for Amazon Kinesis. // -// Lists the shards in a stream and provides information about each shard. +// Lists the shards in a stream and provides information about each shard. This +// operation has a limit of 100 transactions per second per data stream. // // This API is a new operation that is used by the Amazon Kinesis Client Library // (KCL). If you have a fine-grained IAM policy that only allows specific operations, @@ -1442,8 +1647,7 @@ func (c *Kinesis) ListShardsRequest(input *ListShardsInput) (req *request.Reques // of concurrent stream requests exceeds the maximum number allowed. // // * ErrCodeExpiredNextTokenException "ExpiredNextTokenException" -// The pagination token passed to the ListShards operation is expired. For more -// information, see ListShardsInput$NextToken. +// The pagination token passed to the operation is expired. // // * ErrCodeResourceInUseException "ResourceInUseException" // The resource is not available for this operation. For successful operation, @@ -1471,12 +1675,166 @@ func (c *Kinesis) ListShardsWithContext(ctx aws.Context, input *ListShardsInput, return out, req.Send() } +const opListStreamConsumers = "ListStreamConsumers" + +// ListStreamConsumersRequest generates a "aws/request.Request" representing the +// client's request for the ListStreamConsumers operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListStreamConsumers for more information on using the ListStreamConsumers +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListStreamConsumersRequest method. 
+// req, resp := client.ListStreamConsumersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/ListStreamConsumers +func (c *Kinesis) ListStreamConsumersRequest(input *ListStreamConsumersInput) (req *request.Request, output *ListStreamConsumersOutput) { + op := &request.Operation{ + Name: opListStreamConsumers, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListStreamConsumersInput{} + } + + output = &ListStreamConsumersOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListStreamConsumers API operation for Amazon Kinesis. +// +// Lists the consumers registered to receive data from a stream using enhanced +// fan-out, and provides information about each consumer. +// +// This operation has a limit of 10 transactions per second per account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis's +// API operation ListStreamConsumers for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The requested resource could not be found. The stream might not be specified +// correctly. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// A specified parameter exceeds its restrictions, is not supported, or can't +// be used. For more information, see the returned message. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The requested resource exceeds the maximum number allowed, or the number +// of concurrent stream requests exceeds the maximum number allowed. +// +// * ErrCodeExpiredNextTokenException "ExpiredNextTokenException" +// The pagination token passed to the operation is expired. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The resource is not available for this operation. For successful operation, +// the resource must be in the ACTIVE state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/ListStreamConsumers +func (c *Kinesis) ListStreamConsumers(input *ListStreamConsumersInput) (*ListStreamConsumersOutput, error) { + req, out := c.ListStreamConsumersRequest(input) + return out, req.Send() +} + +// ListStreamConsumersWithContext is the same as ListStreamConsumers with the addition of +// the ability to pass a context and additional request options. +// +// See ListStreamConsumers for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Kinesis) ListStreamConsumersWithContext(ctx aws.Context, input *ListStreamConsumersInput, opts ...request.Option) (*ListStreamConsumersOutput, error) { + req, out := c.ListStreamConsumersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +// ListStreamConsumersPages iterates over the pages of a ListStreamConsumers operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListStreamConsumers method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListStreamConsumers operation. +// pageNum := 0 +// err := client.ListStreamConsumersPages(params, +// func(page *ListStreamConsumersOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Kinesis) ListStreamConsumersPages(input *ListStreamConsumersInput, fn func(*ListStreamConsumersOutput, bool) bool) error { + return c.ListStreamConsumersPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListStreamConsumersPagesWithContext same as ListStreamConsumersPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Kinesis) ListStreamConsumersPagesWithContext(ctx aws.Context, input *ListStreamConsumersInput, fn func(*ListStreamConsumersOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListStreamConsumersInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListStreamConsumersRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListStreamConsumersOutput), !p.HasNextPage()) + } + return p.Err() +} + const opListStreams = "ListStreams" // ListStreamsRequest generates a "aws/request.Request" representing the // client's request for the ListStreams operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1626,8 +1984,8 @@ const opListTagsForStream = "ListTagsForStream" // ListTagsForStreamRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1715,8 +2073,8 @@ const opMergeShards = "MergeShards" // MergeShardsRequest generates a "aws/request.Request" representing the // client's request for the MergeShards operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1844,8 +2202,8 @@ const opPutRecord = "PutRecord" // PutRecordRequest generates a "aws/request.Request" representing the // client's request for the PutRecord operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2000,8 +2358,8 @@ const opPutRecords = "PutRecords" // PutRecordsRequest generates a "aws/request.Request" representing the // client's request for the PutRecords operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2172,12 +2530,112 @@ func (c *Kinesis) PutRecordsWithContext(ctx aws.Context, input *PutRecordsInput, return out, req.Send() } +const opRegisterStreamConsumer = "RegisterStreamConsumer" + +// RegisterStreamConsumerRequest generates a "aws/request.Request" representing the +// client's request for the RegisterStreamConsumer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RegisterStreamConsumer for more information on using the RegisterStreamConsumer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RegisterStreamConsumerRequest method. +// req, resp := client.RegisterStreamConsumerRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/RegisterStreamConsumer +func (c *Kinesis) RegisterStreamConsumerRequest(input *RegisterStreamConsumerInput) (req *request.Request, output *RegisterStreamConsumerOutput) { + op := &request.Operation{ + Name: opRegisterStreamConsumer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RegisterStreamConsumerInput{} + } + + output = &RegisterStreamConsumerOutput{} + req = c.newRequest(op, input, output) + return +} + +// RegisterStreamConsumer API operation for Amazon Kinesis. +// +// Registers a consumer with a Kinesis data stream. When you use this operation, +// the consumer you register can read data from the stream at a rate of up to +// 2 MiB per second. This rate is unaffected by the total number of consumers +// that read from the same stream. +// +// You can register up to 5 consumers per stream. A given consumer can only +// be registered with one stream. +// +// This operation has a limit of five transactions per second per account. 
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis's +// API operation RegisterStreamConsumer for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// A specified parameter exceeds its restrictions, is not supported, or can't +// be used. For more information, see the returned message. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The requested resource exceeds the maximum number allowed, or the number +// of concurrent stream requests exceeds the maximum number allowed. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The resource is not available for this operation. For successful operation, +// the resource must be in the ACTIVE state. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The requested resource could not be found. The stream might not be specified +// correctly. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesis-2013-12-02/RegisterStreamConsumer +func (c *Kinesis) RegisterStreamConsumer(input *RegisterStreamConsumerInput) (*RegisterStreamConsumerOutput, error) { + req, out := c.RegisterStreamConsumerRequest(input) + return out, req.Send() +} + +// RegisterStreamConsumerWithContext is the same as RegisterStreamConsumer with the addition of +// the ability to pass a context and additional request options. +// +// See RegisterStreamConsumer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Kinesis) RegisterStreamConsumerWithContext(ctx aws.Context, input *RegisterStreamConsumerInput, opts ...request.Option) (*RegisterStreamConsumerOutput, error) { + req, out := c.RegisterStreamConsumerRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opRemoveTagsFromStream = "RemoveTagsFromStream" // RemoveTagsFromStreamRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromStream operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2275,8 +2733,8 @@ const opSplitShard = "SplitShard" // SplitShardRequest generates a "aws/request.Request" representing the // client's request for the SplitShard operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2354,7 +2812,8 @@ func (c *Kinesis) SplitShardRequest(input *SplitShardInput) (req *request.Reques // If you try to create more shards than are authorized for your account, you // receive a LimitExceededException. // -// For the default shard limit for an AWS account, see Streams Limits (http://docs.aws.amazon.com/kinesis/latest/dev/service-sizes-and-limits.html) +// For the default shard limit for an AWS account, see Kinesis Data Streams +// Limits (http://docs.aws.amazon.com/kinesis/latest/dev/service-sizes-and-limits.html) // in the Amazon Kinesis Data Streams Developer Guide. To increase this limit, // contact AWS Support (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html). // @@ -2413,8 +2872,8 @@ const opStartStreamEncryption = "StartStreamEncryption" // StartStreamEncryptionRequest generates a "aws/request.Request" representing the // client's request for the StartStreamEncryption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2470,7 +2929,7 @@ func (c *Kinesis) StartStreamEncryptionRequest(input *StartStreamEncryptionInput // API Limits: You can successfully apply a new AWS KMS key for server-side // encryption 25 times in a rolling 24-hour period. // -// Note: It can take up to five seconds after the stream is in an ACTIVE status +// Note: It can take up to 5 seconds after the stream is in an ACTIVE status // before all records written to the stream are encrypted. After you enable // encryption, you can verify that encryption is applied by inspecting the API // response from PutRecord or PutRecords. @@ -2551,8 +3010,8 @@ const opStopStreamEncryption = "StopStreamEncryption" // StopStreamEncryptionRequest generates a "aws/request.Request" representing the // client's request for the StopStreamEncryption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2607,7 +3066,7 @@ func (c *Kinesis) StopStreamEncryptionRequest(input *StopStreamEncryptionInput) // API Limits: You can successfully disable server-side encryption 25 times // in a rolling 24-hour period. // -// Note: It can take up to five seconds after the stream is in an ACTIVE status +// Note: It can take up to 5 seconds after the stream is in an ACTIVE status // before all records written to the stream are no longer subject to encryption. // After you disabled encryption, you can verify that encryption is not applied // by inspecting the API response from PutRecord or PutRecords. @@ -2662,8 +3121,8 @@ const opUpdateShardCount = "UpdateShardCount" // UpdateShardCountRequest generates a "aws/request.Request" representing the // client's request for the UpdateShardCount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2717,7 +3176,8 @@ func (c *Kinesis) UpdateShardCountRequest(input *UpdateShardCountInput) (req *re // addition to the final shards. We recommend that you double or halve the shard // count, as this results in the fewest number of splits or merges. // -// This operation has the following limits. You cannot do the following: +// This operation has the following default limits. By default, you cannot do +// the following: // // * Scale more than twice per rolling 24-hour period per stream // @@ -2792,7 +3252,7 @@ type AddTagsToStreamInput struct { // StreamName is a required field StreamName *string `min:"1" type:"string" required:"true"` - // The set of key-value pairs to use to create the tags. + // A set of up to 10 key-value pairs to use to create the tags. // // Tags is a required field Tags map[string]*string `min:"1" type:"map" required:"true"` @@ -2856,6 +3316,143 @@ func (s AddTagsToStreamOutput) GoString() string { return s.String() } +// An object that represents the details of the consumer you registered. +type Consumer struct { + _ struct{} `type:"structure"` + + // When you register a consumer, Kinesis Data Streams generates an ARN for it. + // You need this ARN to be able to call SubscribeToShard. + // + // If you delete a consumer and then create a new one with the same name, it + // won't have the same ARN. That's because consumer ARNs contain the creation + // timestamp. This is important to keep in mind if you have IAM policies that + // reference consumer ARNs. + // + // ConsumerARN is a required field + ConsumerARN *string `min:"1" type:"string" required:"true"` + + // ConsumerCreationTimestamp is a required field + ConsumerCreationTimestamp *time.Time `type:"timestamp" required:"true"` + + // The name of the consumer is something you choose when you register the consumer. + // + // ConsumerName is a required field + ConsumerName *string `min:"1" type:"string" required:"true"` + + // A consumer can't read data while in the CREATING or DELETING states. + // + // ConsumerStatus is a required field + ConsumerStatus *string `type:"string" required:"true" enum:"ConsumerStatus"` +} + +// String returns the string representation +func (s Consumer) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Consumer) GoString() string { + return s.String() +} + +// SetConsumerARN sets the ConsumerARN field's value. +func (s *Consumer) SetConsumerARN(v string) *Consumer { + s.ConsumerARN = &v + return s +} + +// SetConsumerCreationTimestamp sets the ConsumerCreationTimestamp field's value. +func (s *Consumer) SetConsumerCreationTimestamp(v time.Time) *Consumer { + s.ConsumerCreationTimestamp = &v + return s +} + +// SetConsumerName sets the ConsumerName field's value. +func (s *Consumer) SetConsumerName(v string) *Consumer { + s.ConsumerName = &v + return s +} + +// SetConsumerStatus sets the ConsumerStatus field's value. +func (s *Consumer) SetConsumerStatus(v string) *Consumer { + s.ConsumerStatus = &v + return s +} + +// An object that represents the details of a registered consumer. +type ConsumerDescription struct { + _ struct{} `type:"structure"` + + // When you register a consumer, Kinesis Data Streams generates an ARN for it. + // You need this ARN to be able to call SubscribeToShard. 
+ // + // If you delete a consumer and then create a new one with the same name, it + // won't have the same ARN. That's because consumer ARNs contain the creation + // timestamp. This is important to keep in mind if you have IAM policies that + // reference consumer ARNs. + // + // ConsumerARN is a required field + ConsumerARN *string `min:"1" type:"string" required:"true"` + + // ConsumerCreationTimestamp is a required field + ConsumerCreationTimestamp *time.Time `type:"timestamp" required:"true"` + + // The name of the consumer is something you choose when you register the consumer. + // + // ConsumerName is a required field + ConsumerName *string `min:"1" type:"string" required:"true"` + + // A consumer can't read data while in the CREATING or DELETING states. + // + // ConsumerStatus is a required field + ConsumerStatus *string `type:"string" required:"true" enum:"ConsumerStatus"` + + // The ARN of the stream with which you registered the consumer. + // + // StreamARN is a required field + StreamARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ConsumerDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConsumerDescription) GoString() string { + return s.String() +} + +// SetConsumerARN sets the ConsumerARN field's value. +func (s *ConsumerDescription) SetConsumerARN(v string) *ConsumerDescription { + s.ConsumerARN = &v + return s +} + +// SetConsumerCreationTimestamp sets the ConsumerCreationTimestamp field's value. +func (s *ConsumerDescription) SetConsumerCreationTimestamp(v time.Time) *ConsumerDescription { + s.ConsumerCreationTimestamp = &v + return s +} + +// SetConsumerName sets the ConsumerName field's value. +func (s *ConsumerDescription) SetConsumerName(v string) *ConsumerDescription { + s.ConsumerName = &v + return s +} + +// SetConsumerStatus sets the ConsumerStatus field's value. +func (s *ConsumerDescription) SetConsumerStatus(v string) *ConsumerDescription { + s.ConsumerStatus = &v + return s +} + +// SetStreamARN sets the StreamARN field's value. +func (s *ConsumerDescription) SetStreamARN(v string) *ConsumerDescription { + s.StreamARN = &v + return s +} + // Represents the input for CreateStream. type CreateStreamInput struct { _ struct{} `type:"structure"` @@ -2985,60 +3582,138 @@ func (s *DecreaseStreamRetentionPeriodInput) Validate() error { return nil } -// SetRetentionPeriodHours sets the RetentionPeriodHours field's value. -func (s *DecreaseStreamRetentionPeriodInput) SetRetentionPeriodHours(v int64) *DecreaseStreamRetentionPeriodInput { - s.RetentionPeriodHours = &v +// SetRetentionPeriodHours sets the RetentionPeriodHours field's value. +func (s *DecreaseStreamRetentionPeriodInput) SetRetentionPeriodHours(v int64) *DecreaseStreamRetentionPeriodInput { + s.RetentionPeriodHours = &v + return s +} + +// SetStreamName sets the StreamName field's value. +func (s *DecreaseStreamRetentionPeriodInput) SetStreamName(v string) *DecreaseStreamRetentionPeriodInput { + s.StreamName = &v + return s +} + +type DecreaseStreamRetentionPeriodOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DecreaseStreamRetentionPeriodOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DecreaseStreamRetentionPeriodOutput) GoString() string { + return s.String() +} + +// Represents the input for DeleteStream. 
+type DeleteStreamInput struct { + _ struct{} `type:"structure"` + + // If this parameter is unset (null) or if you set it to false, and the stream + // has registered consumers, the call to DeleteStream fails with a ResourceInUseException. + EnforceConsumerDeletion *bool `type:"boolean"` + + // The name of the stream to delete. + // + // StreamName is a required field + StreamName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteStreamInput"} + if s.StreamName == nil { + invalidParams.Add(request.NewErrParamRequired("StreamName")) + } + if s.StreamName != nil && len(*s.StreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnforceConsumerDeletion sets the EnforceConsumerDeletion field's value. +func (s *DeleteStreamInput) SetEnforceConsumerDeletion(v bool) *DeleteStreamInput { + s.EnforceConsumerDeletion = &v return s } // SetStreamName sets the StreamName field's value. -func (s *DecreaseStreamRetentionPeriodInput) SetStreamName(v string) *DecreaseStreamRetentionPeriodInput { +func (s *DeleteStreamInput) SetStreamName(v string) *DeleteStreamInput { s.StreamName = &v return s } -type DecreaseStreamRetentionPeriodOutput struct { +type DeleteStreamOutput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s DecreaseStreamRetentionPeriodOutput) String() string { +func (s DeleteStreamOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DecreaseStreamRetentionPeriodOutput) GoString() string { +func (s DeleteStreamOutput) GoString() string { return s.String() } -// Represents the input for DeleteStream. -type DeleteStreamInput struct { +type DeregisterStreamConsumerInput struct { _ struct{} `type:"structure"` - // The name of the stream to delete. - // - // StreamName is a required field - StreamName *string `min:"1" type:"string" required:"true"` + // The ARN returned by Kinesis Data Streams when you registered the consumer. + // If you don't know the ARN of the consumer that you want to deregister, you + // can use the ListStreamConsumers operation to get a list of the descriptions + // of all the consumers that are currently registered with a given data stream. + // The description of a consumer contains its ARN. + ConsumerARN *string `min:"1" type:"string"` + + // The name that you gave to the consumer. + ConsumerName *string `min:"1" type:"string"` + + // The ARN of the Kinesis data stream that the consumer is registered with. + // For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces + // (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kinesis-streams). 
+ StreamARN *string `min:"1" type:"string"` } // String returns the string representation -func (s DeleteStreamInput) String() string { +func (s DeregisterStreamConsumerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteStreamInput) GoString() string { +func (s DeregisterStreamConsumerInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteStreamInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteStreamInput"} - if s.StreamName == nil { - invalidParams.Add(request.NewErrParamRequired("StreamName")) +func (s *DeregisterStreamConsumerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeregisterStreamConsumerInput"} + if s.ConsumerARN != nil && len(*s.ConsumerARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConsumerARN", 1)) } - if s.StreamName != nil && len(*s.StreamName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("StreamName", 1)) + if s.ConsumerName != nil && len(*s.ConsumerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConsumerName", 1)) + } + if s.StreamARN != nil && len(*s.StreamARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamARN", 1)) } if invalidParams.Len() > 0 { @@ -3047,23 +3722,35 @@ func (s *DeleteStreamInput) Validate() error { return nil } -// SetStreamName sets the StreamName field's value. -func (s *DeleteStreamInput) SetStreamName(v string) *DeleteStreamInput { - s.StreamName = &v +// SetConsumerARN sets the ConsumerARN field's value. +func (s *DeregisterStreamConsumerInput) SetConsumerARN(v string) *DeregisterStreamConsumerInput { + s.ConsumerARN = &v return s } -type DeleteStreamOutput struct { +// SetConsumerName sets the ConsumerName field's value. +func (s *DeregisterStreamConsumerInput) SetConsumerName(v string) *DeregisterStreamConsumerInput { + s.ConsumerName = &v + return s +} + +// SetStreamARN sets the StreamARN field's value. +func (s *DeregisterStreamConsumerInput) SetStreamARN(v string) *DeregisterStreamConsumerInput { + s.StreamARN = &v + return s +} + +type DeregisterStreamConsumerOutput struct { _ struct{} `type:"structure"` } // String returns the string representation -func (s DeleteStreamOutput) String() string { +func (s DeregisterStreamConsumerOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteStreamOutput) GoString() string { +func (s DeregisterStreamConsumerOutput) GoString() string { return s.String() } @@ -3117,6 +3804,93 @@ func (s *DescribeLimitsOutput) SetShardLimit(v int64) *DescribeLimitsOutput { return s } +type DescribeStreamConsumerInput struct { + _ struct{} `type:"structure"` + + // The ARN returned by Kinesis Data Streams when you registered the consumer. + ConsumerARN *string `min:"1" type:"string"` + + // The name that you gave to the consumer. + ConsumerName *string `min:"1" type:"string"` + + // The ARN of the Kinesis data stream that the consumer is registered with. + // For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces + // (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kinesis-streams). 
+ StreamARN *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeStreamConsumerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStreamConsumerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeStreamConsumerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStreamConsumerInput"} + if s.ConsumerARN != nil && len(*s.ConsumerARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConsumerARN", 1)) + } + if s.ConsumerName != nil && len(*s.ConsumerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConsumerName", 1)) + } + if s.StreamARN != nil && len(*s.StreamARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConsumerARN sets the ConsumerARN field's value. +func (s *DescribeStreamConsumerInput) SetConsumerARN(v string) *DescribeStreamConsumerInput { + s.ConsumerARN = &v + return s +} + +// SetConsumerName sets the ConsumerName field's value. +func (s *DescribeStreamConsumerInput) SetConsumerName(v string) *DescribeStreamConsumerInput { + s.ConsumerName = &v + return s +} + +// SetStreamARN sets the StreamARN field's value. +func (s *DescribeStreamConsumerInput) SetStreamARN(v string) *DescribeStreamConsumerInput { + s.StreamARN = &v + return s +} + +type DescribeStreamConsumerOutput struct { + _ struct{} `type:"structure"` + + // An object that represents the details of the consumer. + // + // ConsumerDescription is a required field + ConsumerDescription *ConsumerDescription `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DescribeStreamConsumerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStreamConsumerOutput) GoString() string { + return s.String() +} + +// SetConsumerDescription sets the ConsumerDescription field's value. +func (s *DescribeStreamConsumerOutput) SetConsumerDescription(v *ConsumerDescription) *DescribeStreamConsumerOutput { + s.ConsumerDescription = v + return s +} + // Represents the input for DescribeStream. type DescribeStreamInput struct { _ struct{} `type:"structure"` @@ -3687,7 +4461,7 @@ type GetShardIteratorInput struct { // iterator returned is for the next (later) record. If the time stamp is older // than the current trim horizon, the iterator returned is for the oldest untrimmed // data record (TRIM_HORIZON). - Timestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + Timestamp *time.Time `type:"timestamp"` } // String returns the string representation @@ -3896,7 +4670,8 @@ func (s IncreaseStreamRetentionPeriodOutput) GoString() string { type ListShardsInput struct { _ struct{} `type:"structure"` - // The ID of the shard to start the list with. + // Specify this parameter to indicate that you want to list the shards starting + // with the shard whose ID immediately follows ExclusiveStartShardId. // // If you don't specify this parameter, the default behavior is for ListShards // to list the shards starting with the first one in the stream. @@ -3941,7 +4716,7 @@ type ListShardsInput struct { // for. // // You cannot specify this parameter if you specify the NextToken parameter. 
- StreamCreationTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + StreamCreationTimestamp *time.Time `type:"timestamp"` // The name of the data stream whose shards you want to list. // @@ -4056,6 +4831,153 @@ func (s *ListShardsOutput) SetShards(v []*Shard) *ListShardsOutput { return s } +type ListStreamConsumersInput struct { + _ struct{} `type:"structure"` + + // The maximum number of consumers that you want a single call of ListStreamConsumers + // to return. + MaxResults *int64 `min:"1" type:"integer"` + + // When the number of consumers that are registered with the data stream is + // greater than the default value for the MaxResults parameter, or if you explicitly + // specify a value for MaxResults that is less than the number of consumers + // that are registered with the data stream, the response includes a pagination + // token named NextToken. You can specify this NextToken value in a subsequent + // call to ListStreamConsumers to list the next set of registered consumers. + // + // Don't specify StreamName or StreamCreationTimestamp if you specify NextToken + // because the latter unambiguously identifies the stream. + // + // You can optionally specify a value for the MaxResults parameter when you + // specify NextToken. If you specify a MaxResults value that is less than the + // number of consumers that the operation returns if you don't specify MaxResults, + // the response will contain a new NextToken value. You can use the new NextToken + // value in a subsequent call to the ListStreamConsumers operation to list the + // next set of consumers. + // + // Tokens expire after 300 seconds. When you obtain a value for NextToken in + // the response to a call to ListStreamConsumers, you have 300 seconds to use + // that value. If you specify an expired token in a call to ListStreamConsumers, + // you get ExpiredNextTokenException. + NextToken *string `min:"1" type:"string"` + + // The ARN of the Kinesis data stream for which you want to list the registered + // consumers. For more information, see Amazon Resource Names (ARNs) and AWS + // Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kinesis-streams). + // + // StreamARN is a required field + StreamARN *string `min:"1" type:"string" required:"true"` + + // Specify this input parameter to distinguish data streams that have the same + // name. For example, if you create a data stream and then delete it, and you + // later create another data stream with the same name, you can use this input + // parameter to specify which of the two streams you want to list the consumers + // for. + // + // You can't specify this parameter if you specify the NextToken parameter. + StreamCreationTimestamp *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s ListStreamConsumersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStreamConsumersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListStreamConsumersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListStreamConsumersInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.StreamARN == nil { + invalidParams.Add(request.NewErrParamRequired("StreamARN")) + } + if s.StreamARN != nil && len(*s.StreamARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListStreamConsumersInput) SetMaxResults(v int64) *ListStreamConsumersInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStreamConsumersInput) SetNextToken(v string) *ListStreamConsumersInput { + s.NextToken = &v + return s +} + +// SetStreamARN sets the StreamARN field's value. +func (s *ListStreamConsumersInput) SetStreamARN(v string) *ListStreamConsumersInput { + s.StreamARN = &v + return s +} + +// SetStreamCreationTimestamp sets the StreamCreationTimestamp field's value. +func (s *ListStreamConsumersInput) SetStreamCreationTimestamp(v time.Time) *ListStreamConsumersInput { + s.StreamCreationTimestamp = &v + return s +} + +type ListStreamConsumersOutput struct { + _ struct{} `type:"structure"` + + // An array of JSON objects. Each object represents one registered consumer. + Consumers []*Consumer `type:"list"` + + // When the number of consumers that are registered with the data stream is + // greater than the default value for the MaxResults parameter, or if you explicitly + // specify a value for MaxResults that is less than the number of registered + // consumers, the response includes a pagination token named NextToken. You + // can specify this NextToken value in a subsequent call to ListStreamConsumers + // to list the next set of registered consumers. For more information about + // the use of this pagination token when calling the ListStreamConsumers operation, + // see ListStreamConsumersInput$NextToken. + // + // Tokens expire after 300 seconds. When you obtain a value for NextToken in + // the response to a call to ListStreamConsumers, you have 300 seconds to use + // that value. If you specify an expired token in a call to ListStreamConsumers, + // you get ExpiredNextTokenException. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListStreamConsumersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListStreamConsumersOutput) GoString() string { + return s.String() +} + +// SetConsumers sets the Consumers field's value. +func (s *ListStreamConsumersOutput) SetConsumers(v []*Consumer) *ListStreamConsumersOutput { + s.Consumers = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListStreamConsumersOutput) SetNextToken(v string) *ListStreamConsumersOutput { + s.NextToken = &v + return s +} + // Represents the input for ListStreams. type ListStreamsInput struct { _ struct{} `type:"structure"` @@ -4769,7 +5691,7 @@ type Record struct { _ struct{} `type:"structure"` // The approximate time that the record was inserted into the stream. 
- ApproximateArrivalTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + ApproximateArrivalTimestamp *time.Time `type:"timestamp"` // The data blob. The data in the blob is both opaque and immutable to Kinesis // Data Streams, which does not inspect, interpret, or change the data in the @@ -4842,6 +5764,94 @@ func (s *Record) SetSequenceNumber(v string) *Record { return s } +type RegisterStreamConsumerInput struct { + _ struct{} `type:"structure"` + + // For a given Kinesis data stream, each consumer must have a unique name. However, + // consumer names don't have to be unique across data streams. + // + // ConsumerName is a required field + ConsumerName *string `min:"1" type:"string" required:"true"` + + // The ARN of the Kinesis data stream that you want to register the consumer + // with. For more info, see Amazon Resource Names (ARNs) and AWS Service Namespaces + // (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kinesis-streams). + // + // StreamARN is a required field + StreamARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s RegisterStreamConsumerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterStreamConsumerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RegisterStreamConsumerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RegisterStreamConsumerInput"} + if s.ConsumerName == nil { + invalidParams.Add(request.NewErrParamRequired("ConsumerName")) + } + if s.ConsumerName != nil && len(*s.ConsumerName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ConsumerName", 1)) + } + if s.StreamARN == nil { + invalidParams.Add(request.NewErrParamRequired("StreamARN")) + } + if s.StreamARN != nil && len(*s.StreamARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("StreamARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConsumerName sets the ConsumerName field's value. +func (s *RegisterStreamConsumerInput) SetConsumerName(v string) *RegisterStreamConsumerInput { + s.ConsumerName = &v + return s +} + +// SetStreamARN sets the StreamARN field's value. +func (s *RegisterStreamConsumerInput) SetStreamARN(v string) *RegisterStreamConsumerInput { + s.StreamARN = &v + return s +} + +type RegisterStreamConsumerOutput struct { + _ struct{} `type:"structure"` + + // An object that represents the details of the consumer you registered. When + // you register a consumer, it gets an ARN that is generated by Kinesis Data + // Streams. + // + // Consumer is a required field + Consumer *Consumer `type:"structure" required:"true"` +} + +// String returns the string representation +func (s RegisterStreamConsumerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RegisterStreamConsumerOutput) GoString() string { + return s.String() +} + +// SetConsumer sets the Consumer field's value. +func (s *RegisterStreamConsumerOutput) SetConsumer(v *Consumer) *RegisterStreamConsumerOutput { + s.Consumer = v + return s +} + // Represents the input for RemoveTagsFromStream. type RemoveTagsFromStreamInput struct { _ struct{} `type:"structure"` @@ -5363,12 +6373,12 @@ type StreamDescription struct { // The Amazon Resource Name (ARN) for the stream being described. 
// // StreamARN is a required field - StreamARN *string `type:"string" required:"true"` + StreamARN *string `min:"1" type:"string" required:"true"` // The approximate time that the stream was created. // // StreamCreationTimestamp is a required field - StreamCreationTimestamp *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + StreamCreationTimestamp *time.Time `type:"timestamp" required:"true"` // The name of the stream being described. // @@ -5470,6 +6480,9 @@ func (s *StreamDescription) SetStreamStatus(v string) *StreamDescription { type StreamDescriptionSummary struct { _ struct{} `type:"structure"` + // The number of enhanced fan-out consumers registered with the stream. + ConsumerCount *int64 `type:"integer"` + // The encryption type used. This value is one of the following: // // * KMS @@ -5511,12 +6524,12 @@ type StreamDescriptionSummary struct { // The Amazon Resource Name (ARN) for the stream being described. // // StreamARN is a required field - StreamARN *string `type:"string" required:"true"` + StreamARN *string `min:"1" type:"string" required:"true"` // The approximate time that the stream was created. // // StreamCreationTimestamp is a required field - StreamCreationTimestamp *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + StreamCreationTimestamp *time.Time `type:"timestamp" required:"true"` // The name of the stream being described. // @@ -5554,6 +6567,12 @@ func (s StreamDescriptionSummary) GoString() string { return s.String() } +// SetConsumerCount sets the ConsumerCount field's value. +func (s *StreamDescriptionSummary) SetConsumerCount(v int64) *StreamDescriptionSummary { + s.ConsumerCount = &v + return s +} + // SetEncryptionType sets the EncryptionType field's value. func (s *StreamDescriptionSummary) SetEncryptionType(v string) *StreamDescriptionSummary { s.EncryptionType = &v @@ -5759,6 +6778,17 @@ func (s *UpdateShardCountOutput) SetTargetShardCount(v int64) *UpdateShardCountO return s } +const ( + // ConsumerStatusCreating is a ConsumerStatus enum value + ConsumerStatusCreating = "CREATING" + + // ConsumerStatusDeleting is a ConsumerStatus enum value + ConsumerStatusDeleting = "DELETING" + + // ConsumerStatusActive is a ConsumerStatus enum value + ConsumerStatusActive = "ACTIVE" +) + const ( // EncryptionTypeNone is a EncryptionType enum value EncryptionTypeNone = "NONE" diff --git a/vendor/github.com/aws/aws-sdk-go/service/kinesis/errors.go b/vendor/github.com/aws/aws-sdk-go/service/kinesis/errors.go index a567a3d45b8..8c06c6f8ca5 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/kinesis/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/kinesis/errors.go @@ -13,8 +13,7 @@ const ( // ErrCodeExpiredNextTokenException for service response error code // "ExpiredNextTokenException". // - // The pagination token passed to the ListShards operation is expired. For more - // information, see ListShardsInput$NextToken. + // The pagination token passed to the operation is expired. 
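+	// Pagination tokens are short-lived; ListStreamConsumers, for example,
+	// documents that a NextToken must be used within 300 seconds of being issued.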
ErrCodeExpiredNextTokenException = "ExpiredNextTokenException" // ErrCodeInvalidArgumentException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/kinesis/service.go b/vendor/github.com/aws/aws-sdk-go/service/kinesis/service.go index 17a59119a94..2d55ac9579e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/kinesis/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/kinesis/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "kinesis" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "kinesis" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Kinesis" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Kinesis client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/api.go b/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/api.go new file mode 100644 index 00000000000..49bab98efb9 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/api.go @@ -0,0 +1,6680 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package kinesisanalytics + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opAddApplicationCloudWatchLoggingOption = "AddApplicationCloudWatchLoggingOption" + +// AddApplicationCloudWatchLoggingOptionRequest generates a "aws/request.Request" representing the +// client's request for the AddApplicationCloudWatchLoggingOption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddApplicationCloudWatchLoggingOption for more information on using the AddApplicationCloudWatchLoggingOption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddApplicationCloudWatchLoggingOptionRequest method. 
+// req, resp := client.AddApplicationCloudWatchLoggingOptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationCloudWatchLoggingOption +func (c *KinesisAnalytics) AddApplicationCloudWatchLoggingOptionRequest(input *AddApplicationCloudWatchLoggingOptionInput) (req *request.Request, output *AddApplicationCloudWatchLoggingOptionOutput) { + op := &request.Operation{ + Name: opAddApplicationCloudWatchLoggingOption, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddApplicationCloudWatchLoggingOptionInput{} + } + + output = &AddApplicationCloudWatchLoggingOptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddApplicationCloudWatchLoggingOption API operation for Amazon Kinesis Analytics. +// +// Adds a CloudWatch log stream to monitor application configuration errors. +// For more information about using CloudWatch log streams with Amazon Kinesis +// Analytics applications, see Working with Amazon CloudWatch Logs (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/cloudwatch-logs.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation AddApplicationCloudWatchLoggingOption for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationCloudWatchLoggingOption +func (c *KinesisAnalytics) AddApplicationCloudWatchLoggingOption(input *AddApplicationCloudWatchLoggingOptionInput) (*AddApplicationCloudWatchLoggingOptionOutput, error) { + req, out := c.AddApplicationCloudWatchLoggingOptionRequest(input) + return out, req.Send() +} + +// AddApplicationCloudWatchLoggingOptionWithContext is the same as AddApplicationCloudWatchLoggingOption with the addition of +// the ability to pass a context and additional request options. +// +// See AddApplicationCloudWatchLoggingOption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) AddApplicationCloudWatchLoggingOptionWithContext(ctx aws.Context, input *AddApplicationCloudWatchLoggingOptionInput, opts ...request.Option) (*AddApplicationCloudWatchLoggingOptionOutput, error) { + req, out := c.AddApplicationCloudWatchLoggingOptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
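+	// With the context and any per-request options applied, Send performs the
+	// call; canceling ctx aborts the in-flight request.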
+ return out, req.Send() +} + +const opAddApplicationInput = "AddApplicationInput" + +// AddApplicationInputRequest generates a "aws/request.Request" representing the +// client's request for the AddApplicationInput operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddApplicationInput for more information on using the AddApplicationInput +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddApplicationInputRequest method. +// req, resp := client.AddApplicationInputRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationInput +func (c *KinesisAnalytics) AddApplicationInputRequest(input *AddApplicationInputInput) (req *request.Request, output *AddApplicationInputOutput) { + op := &request.Operation{ + Name: opAddApplicationInput, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddApplicationInputInput{} + } + + output = &AddApplicationInputOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddApplicationInput API operation for Amazon Kinesis Analytics. +// +// Adds a streaming source to your Amazon Kinesis application. For conceptual +// information, see Configuring Application Input (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). +// +// You can add a streaming source either when you create an application or you +// can use this operation to add a streaming source after you create an application. +// For more information, see CreateApplication. +// +// Any configuration update, including adding a streaming source using this +// operation, results in a new version of the application. You can use the DescribeApplication +// operation to find the current application version. +// +// This operation requires permissions to perform the kinesisanalytics:AddApplicationInput +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation AddApplicationInput for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// * ErrCodeCodeValidationException "CodeValidationException" +// User-provided application code (query) is invalid. This can be a simple syntax +// error. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationInput +func (c *KinesisAnalytics) AddApplicationInput(input *AddApplicationInputInput) (*AddApplicationInputOutput, error) { + req, out := c.AddApplicationInputRequest(input) + return out, req.Send() +} + +// AddApplicationInputWithContext is the same as AddApplicationInput with the addition of +// the ability to pass a context and additional request options. +// +// See AddApplicationInput for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) AddApplicationInputWithContext(ctx aws.Context, input *AddApplicationInputInput, opts ...request.Option) (*AddApplicationInputOutput, error) { + req, out := c.AddApplicationInputRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAddApplicationInputProcessingConfiguration = "AddApplicationInputProcessingConfiguration" + +// AddApplicationInputProcessingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the AddApplicationInputProcessingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddApplicationInputProcessingConfiguration for more information on using the AddApplicationInputProcessingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddApplicationInputProcessingConfigurationRequest method. +// req, resp := client.AddApplicationInputProcessingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationInputProcessingConfiguration +func (c *KinesisAnalytics) AddApplicationInputProcessingConfigurationRequest(input *AddApplicationInputProcessingConfigurationInput) (req *request.Request, output *AddApplicationInputProcessingConfigurationOutput) { + op := &request.Operation{ + Name: opAddApplicationInputProcessingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddApplicationInputProcessingConfigurationInput{} + } + + output = &AddApplicationInputProcessingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddApplicationInputProcessingConfiguration API operation for Amazon Kinesis Analytics. +// +// Adds an InputProcessingConfiguration to an application. An input processor +// preprocesses records on the input stream before the application's SQL code +// executes. Currently, the only input processor available is AWS Lambda (https://aws.amazon.com/documentation/lambda/). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
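+//
+// A sketch of that pattern (the variable names are illustrative only):
+//
+//    if err != nil {
+//        if aerr, ok := err.(awserr.Error); ok {
+//            fmt.Println(aerr.Code(), aerr.Message())
+//        }
+//    }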
+// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation AddApplicationInputProcessingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationInputProcessingConfiguration +func (c *KinesisAnalytics) AddApplicationInputProcessingConfiguration(input *AddApplicationInputProcessingConfigurationInput) (*AddApplicationInputProcessingConfigurationOutput, error) { + req, out := c.AddApplicationInputProcessingConfigurationRequest(input) + return out, req.Send() +} + +// AddApplicationInputProcessingConfigurationWithContext is the same as AddApplicationInputProcessingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See AddApplicationInputProcessingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) AddApplicationInputProcessingConfigurationWithContext(ctx aws.Context, input *AddApplicationInputProcessingConfigurationInput, opts ...request.Option) (*AddApplicationInputProcessingConfigurationOutput, error) { + req, out := c.AddApplicationInputProcessingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAddApplicationOutput = "AddApplicationOutput" + +// AddApplicationOutputRequest generates a "aws/request.Request" representing the +// client's request for the AddApplicationOutput operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddApplicationOutput for more information on using the AddApplicationOutput +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddApplicationOutputRequest method. 
+// req, resp := client.AddApplicationOutputRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationOutput +func (c *KinesisAnalytics) AddApplicationOutputRequest(input *AddApplicationOutputInput) (req *request.Request, output *AddApplicationOutputOutput) { + op := &request.Operation{ + Name: opAddApplicationOutput, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddApplicationOutputInput{} + } + + output = &AddApplicationOutputOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddApplicationOutput API operation for Amazon Kinesis Analytics. +// +// Adds an external destination to your Amazon Kinesis Analytics application. +// +// If you want Amazon Kinesis Analytics to deliver data from an in-application +// stream within your application to an external destination (such as an Amazon +// Kinesis stream, an Amazon Kinesis Firehose delivery stream, or an Amazon +// Lambda function), you add the relevant configuration to your application +// using this operation. You can configure one or more outputs for your application. +// Each output configuration maps an in-application stream and an external destination. +// +// You can use one of the output configurations to deliver data from your in-application +// error stream to an external destination so that you can analyze the errors. +// For conceptual information, see Understanding Application Output (Destination) +// (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-output.html). +// +// Note that any configuration update, including adding a streaming source using +// this operation, results in a new version of the application. You can use +// the DescribeApplication operation to find the current application version. +// +// For the limits on the number of application inputs and outputs you can configure, +// see Limits (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/limits.html). +// +// This operation requires permissions to perform the kinesisanalytics:AddApplicationOutput +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation AddApplicationOutput for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationOutput +func (c *KinesisAnalytics) AddApplicationOutput(input *AddApplicationOutputInput) (*AddApplicationOutputOutput, error) { + req, out := c.AddApplicationOutputRequest(input) + return out, req.Send() +} + +// AddApplicationOutputWithContext is the same as AddApplicationOutput with the addition of +// the ability to pass a context and additional request options. +// +// See AddApplicationOutput for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) AddApplicationOutputWithContext(ctx aws.Context, input *AddApplicationOutputInput, opts ...request.Option) (*AddApplicationOutputOutput, error) { + req, out := c.AddApplicationOutputRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAddApplicationReferenceDataSource = "AddApplicationReferenceDataSource" + +// AddApplicationReferenceDataSourceRequest generates a "aws/request.Request" representing the +// client's request for the AddApplicationReferenceDataSource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddApplicationReferenceDataSource for more information on using the AddApplicationReferenceDataSource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddApplicationReferenceDataSourceRequest method. +// req, resp := client.AddApplicationReferenceDataSourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationReferenceDataSource +func (c *KinesisAnalytics) AddApplicationReferenceDataSourceRequest(input *AddApplicationReferenceDataSourceInput) (req *request.Request, output *AddApplicationReferenceDataSourceOutput) { + op := &request.Operation{ + Name: opAddApplicationReferenceDataSource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddApplicationReferenceDataSourceInput{} + } + + output = &AddApplicationReferenceDataSourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddApplicationReferenceDataSource API operation for Amazon Kinesis Analytics. +// +// Adds a reference data source to an existing application. +// +// Amazon Kinesis Analytics reads reference data (that is, an Amazon S3 object) +// and creates an in-application table within your application. In the request, +// you provide the source (S3 bucket name and object key name), name of the +// in-application table to create, and the necessary mapping information that +// describes how data in Amazon S3 object maps to columns in the resulting in-application +// table. 
+// +// For conceptual information, see Configuring Application Input (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). +// For the limits on data sources you can add to your application, see Limits +// (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/limits.html). +// +// This operation requires permissions to perform the kinesisanalytics:AddApplicationOutput +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation AddApplicationReferenceDataSource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/AddApplicationReferenceDataSource +func (c *KinesisAnalytics) AddApplicationReferenceDataSource(input *AddApplicationReferenceDataSourceInput) (*AddApplicationReferenceDataSourceOutput, error) { + req, out := c.AddApplicationReferenceDataSourceRequest(input) + return out, req.Send() +} + +// AddApplicationReferenceDataSourceWithContext is the same as AddApplicationReferenceDataSource with the addition of +// the ability to pass a context and additional request options. +// +// See AddApplicationReferenceDataSource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) AddApplicationReferenceDataSourceWithContext(ctx aws.Context, input *AddApplicationReferenceDataSourceInput, opts ...request.Option) (*AddApplicationReferenceDataSourceOutput, error) { + req, out := c.AddApplicationReferenceDataSourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateApplication = "CreateApplication" + +// CreateApplicationRequest generates a "aws/request.Request" representing the +// client's request for the CreateApplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateApplication for more information on using the CreateApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateApplicationRequest method. 
+// req, resp := client.CreateApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/CreateApplication +func (c *KinesisAnalytics) CreateApplicationRequest(input *CreateApplicationInput) (req *request.Request, output *CreateApplicationOutput) { + op := &request.Operation{ + Name: opCreateApplication, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateApplicationInput{} + } + + output = &CreateApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateApplication API operation for Amazon Kinesis Analytics. +// +// Creates an Amazon Kinesis Analytics application. You can configure each application +// with one streaming source as input, application code to process the input, +// and up to three destinations where you want Amazon Kinesis Analytics to write +// the output data from your application. For an overview, see How it Works +// (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works.html). +// +// In the input configuration, you map the streaming source to an in-application +// stream, which you can think of as a constantly updating table. In the mapping, +// you must provide a schema for the in-application stream and map each data +// column in the in-application stream to a data element in the streaming source. +// +// Your application code is one or more SQL statements that read input data, +// transform it, and generate output. Your application code can create one or +// more SQL artifacts like SQL streams or pumps. +// +// In the output configuration, you can configure the application to write data +// from in-application streams created in your applications to up to three destinations. +// +// To read data from your source stream or write data to destination streams, +// Amazon Kinesis Analytics needs your permissions. You grant these permissions +// by creating IAM roles. This operation requires permissions to perform the +// kinesisanalytics:CreateApplication action. +// +// For introductory exercises to create an Amazon Kinesis Analytics application, +// see Getting Started (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/getting-started.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation CreateApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCodeValidationException "CodeValidationException" +// User-provided application code (query) is invalid. This can be a simple syntax +// error. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// Exceeded the number of applications allowed. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/CreateApplication +func (c *KinesisAnalytics) CreateApplication(input *CreateApplicationInput) (*CreateApplicationOutput, error) { + req, out := c.CreateApplicationRequest(input) + return out, req.Send() +} + +// CreateApplicationWithContext is the same as CreateApplication with the addition of +// the ability to pass a context and additional request options. +// +// See CreateApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) CreateApplicationWithContext(ctx aws.Context, input *CreateApplicationInput, opts ...request.Option) (*CreateApplicationOutput, error) { + req, out := c.CreateApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApplication = "DeleteApplication" + +// DeleteApplicationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApplication for more information on using the DeleteApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteApplicationRequest method. +// req, resp := client.DeleteApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplication +func (c *KinesisAnalytics) DeleteApplicationRequest(input *DeleteApplicationInput) (req *request.Request, output *DeleteApplicationOutput) { + op := &request.Operation{ + Name: opDeleteApplication, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteApplicationInput{} + } + + output = &DeleteApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApplication API operation for Amazon Kinesis Analytics. +// +// Deletes the specified application. Amazon Kinesis Analytics halts application +// execution and deletes the application, including any application artifacts +// (such as in-application streams, reference table, and application code). +// +// This operation requires permissions to perform the kinesisanalytics:DeleteApplication +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation DeleteApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. 
+// For example, two individuals attempting to edit the same application at the +// same time. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplication +func (c *KinesisAnalytics) DeleteApplication(input *DeleteApplicationInput) (*DeleteApplicationOutput, error) { + req, out := c.DeleteApplicationRequest(input) + return out, req.Send() +} + +// DeleteApplicationWithContext is the same as DeleteApplication with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) DeleteApplicationWithContext(ctx aws.Context, input *DeleteApplicationInput, opts ...request.Option) (*DeleteApplicationOutput, error) { + req, out := c.DeleteApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApplicationCloudWatchLoggingOption = "DeleteApplicationCloudWatchLoggingOption" + +// DeleteApplicationCloudWatchLoggingOptionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApplicationCloudWatchLoggingOption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApplicationCloudWatchLoggingOption for more information on using the DeleteApplicationCloudWatchLoggingOption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteApplicationCloudWatchLoggingOptionRequest method. +// req, resp := client.DeleteApplicationCloudWatchLoggingOptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplicationCloudWatchLoggingOption +func (c *KinesisAnalytics) DeleteApplicationCloudWatchLoggingOptionRequest(input *DeleteApplicationCloudWatchLoggingOptionInput) (req *request.Request, output *DeleteApplicationCloudWatchLoggingOptionOutput) { + op := &request.Operation{ + Name: opDeleteApplicationCloudWatchLoggingOption, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteApplicationCloudWatchLoggingOptionInput{} + } + + output = &DeleteApplicationCloudWatchLoggingOptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApplicationCloudWatchLoggingOption API operation for Amazon Kinesis Analytics. +// +// Deletes a CloudWatch log stream from an application. 
For more information +// about using CloudWatch log streams with Amazon Kinesis Analytics applications, +// see Working with Amazon CloudWatch Logs (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/cloudwatch-logs.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation DeleteApplicationCloudWatchLoggingOption for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplicationCloudWatchLoggingOption +func (c *KinesisAnalytics) DeleteApplicationCloudWatchLoggingOption(input *DeleteApplicationCloudWatchLoggingOptionInput) (*DeleteApplicationCloudWatchLoggingOptionOutput, error) { + req, out := c.DeleteApplicationCloudWatchLoggingOptionRequest(input) + return out, req.Send() +} + +// DeleteApplicationCloudWatchLoggingOptionWithContext is the same as DeleteApplicationCloudWatchLoggingOption with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApplicationCloudWatchLoggingOption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) DeleteApplicationCloudWatchLoggingOptionWithContext(ctx aws.Context, input *DeleteApplicationCloudWatchLoggingOptionInput, opts ...request.Option) (*DeleteApplicationCloudWatchLoggingOptionOutput, error) { + req, out := c.DeleteApplicationCloudWatchLoggingOptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApplicationInputProcessingConfiguration = "DeleteApplicationInputProcessingConfiguration" + +// DeleteApplicationInputProcessingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApplicationInputProcessingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApplicationInputProcessingConfiguration for more information on using the DeleteApplicationInputProcessingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
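+//
+// For instance, a custom header could be attached before the request is sent
+// (the header name and value below are placeholders, not part of this API):
+//
+//    req, resp := client.DeleteApplicationInputProcessingConfigurationRequest(params)
+//    req.HTTPRequest.Header.Set("X-Example-Header", "value")
+//    if err := req.Send(); err == nil {
+//        fmt.Println(resp)
+//    }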
+// +// +// // Example sending a request using the DeleteApplicationInputProcessingConfigurationRequest method. +// req, resp := client.DeleteApplicationInputProcessingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplicationInputProcessingConfiguration +func (c *KinesisAnalytics) DeleteApplicationInputProcessingConfigurationRequest(input *DeleteApplicationInputProcessingConfigurationInput) (req *request.Request, output *DeleteApplicationInputProcessingConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteApplicationInputProcessingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteApplicationInputProcessingConfigurationInput{} + } + + output = &DeleteApplicationInputProcessingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApplicationInputProcessingConfiguration API operation for Amazon Kinesis Analytics. +// +// Deletes an InputProcessingConfiguration from an input. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation DeleteApplicationInputProcessingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplicationInputProcessingConfiguration +func (c *KinesisAnalytics) DeleteApplicationInputProcessingConfiguration(input *DeleteApplicationInputProcessingConfigurationInput) (*DeleteApplicationInputProcessingConfigurationOutput, error) { + req, out := c.DeleteApplicationInputProcessingConfigurationRequest(input) + return out, req.Send() +} + +// DeleteApplicationInputProcessingConfigurationWithContext is the same as DeleteApplicationInputProcessingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApplicationInputProcessingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) DeleteApplicationInputProcessingConfigurationWithContext(ctx aws.Context, input *DeleteApplicationInputProcessingConfigurationInput, opts ...request.Option) (*DeleteApplicationInputProcessingConfigurationOutput, error) { + req, out := c.DeleteApplicationInputProcessingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteApplicationOutput = "DeleteApplicationOutput" + +// DeleteApplicationOutputRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApplicationOutput operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApplicationOutput for more information on using the DeleteApplicationOutput +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteApplicationOutputRequest method. +// req, resp := client.DeleteApplicationOutputRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplicationOutput +func (c *KinesisAnalytics) DeleteApplicationOutputRequest(input *DeleteApplicationOutputInput) (req *request.Request, output *DeleteApplicationOutputOutput) { + op := &request.Operation{ + Name: opDeleteApplicationOutput, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteApplicationOutputInput{} + } + + output = &DeleteApplicationOutputOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApplicationOutput API operation for Amazon Kinesis Analytics. +// +// Deletes output destination configuration from your application configuration. +// Amazon Kinesis Analytics will no longer write data from the corresponding +// in-application stream to the external output destination. +// +// This operation requires permissions to perform the kinesisanalytics:DeleteApplicationOutput +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation DeleteApplicationOutput for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplicationOutput +func (c *KinesisAnalytics) DeleteApplicationOutput(input *DeleteApplicationOutputInput) (*DeleteApplicationOutputOutput, error) { + req, out := c.DeleteApplicationOutputRequest(input) + return out, req.Send() +} + +// DeleteApplicationOutputWithContext is the same as DeleteApplicationOutput with the addition of +// the ability to pass a context and additional request options. 
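+//
+// A sketch of bounding the call with a deadline (the timeout value and the
+// client/params variables are placeholders):
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    out, err := client.DeleteApplicationOutputWithContext(ctx, params)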
+//
+// See DeleteApplicationOutput for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *KinesisAnalytics) DeleteApplicationOutputWithContext(ctx aws.Context, input *DeleteApplicationOutputInput, opts ...request.Option) (*DeleteApplicationOutputOutput, error) {
+ req, out := c.DeleteApplicationOutputRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDeleteApplicationReferenceDataSource = "DeleteApplicationReferenceDataSource"
+
+// DeleteApplicationReferenceDataSourceRequest generates a "aws/request.Request" representing the
+// client's request for the DeleteApplicationReferenceDataSource operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See DeleteApplicationReferenceDataSource for more information on using the DeleteApplicationReferenceDataSource
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the DeleteApplicationReferenceDataSourceRequest method.
+// req, resp := client.DeleteApplicationReferenceDataSourceRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplicationReferenceDataSource
+func (c *KinesisAnalytics) DeleteApplicationReferenceDataSourceRequest(input *DeleteApplicationReferenceDataSourceInput) (req *request.Request, output *DeleteApplicationReferenceDataSourceOutput) {
+ op := &request.Operation{
+ Name: opDeleteApplicationReferenceDataSource,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteApplicationReferenceDataSourceInput{}
+ }
+
+ output = &DeleteApplicationReferenceDataSourceOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// DeleteApplicationReferenceDataSource API operation for Amazon Kinesis Analytics.
+//
+// Deletes a reference data source configuration from the specified application
+// configuration.
+//
+// If the application is running, Amazon Kinesis Analytics immediately removes
+// the in-application table that you created using the AddApplicationReferenceDataSource
+// operation.
+//
+// This operation requires permissions to perform the kinesisanalytics:DeleteApplicationReferenceDataSource
+// action.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon Kinesis Analytics's
+// API operation DeleteApplicationReferenceDataSource for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeResourceNotFoundException "ResourceNotFoundException"
+// Specified application can't be found.
+// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DeleteApplicationReferenceDataSource +func (c *KinesisAnalytics) DeleteApplicationReferenceDataSource(input *DeleteApplicationReferenceDataSourceInput) (*DeleteApplicationReferenceDataSourceOutput, error) { + req, out := c.DeleteApplicationReferenceDataSourceRequest(input) + return out, req.Send() +} + +// DeleteApplicationReferenceDataSourceWithContext is the same as DeleteApplicationReferenceDataSource with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApplicationReferenceDataSource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) DeleteApplicationReferenceDataSourceWithContext(ctx aws.Context, input *DeleteApplicationReferenceDataSourceInput, opts ...request.Option) (*DeleteApplicationReferenceDataSourceOutput, error) { + req, out := c.DeleteApplicationReferenceDataSourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeApplication = "DescribeApplication" + +// DescribeApplicationRequest generates a "aws/request.Request" representing the +// client's request for the DescribeApplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeApplication for more information on using the DescribeApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeApplicationRequest method. +// req, resp := client.DescribeApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DescribeApplication +func (c *KinesisAnalytics) DescribeApplicationRequest(input *DescribeApplicationInput) (req *request.Request, output *DescribeApplicationOutput) { + op := &request.Operation{ + Name: opDescribeApplication, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeApplicationInput{} + } + + output = &DescribeApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeApplication API operation for Amazon Kinesis Analytics. +// +// Returns information about a specific Amazon Kinesis Analytics application. 
+// +// If you want to retrieve a list of all applications in your account, use the +// ListApplications operation. +// +// This operation requires permissions to perform the kinesisanalytics:DescribeApplication +// action. You can use DescribeApplication to get the current application versionId, +// which you need to call other operations such as Update. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation DescribeApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DescribeApplication +func (c *KinesisAnalytics) DescribeApplication(input *DescribeApplicationInput) (*DescribeApplicationOutput, error) { + req, out := c.DescribeApplicationRequest(input) + return out, req.Send() +} + +// DescribeApplicationWithContext is the same as DescribeApplication with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) DescribeApplicationWithContext(ctx aws.Context, input *DescribeApplicationInput, opts ...request.Option) (*DescribeApplicationOutput, error) { + req, out := c.DescribeApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDiscoverInputSchema = "DiscoverInputSchema" + +// DiscoverInputSchemaRequest generates a "aws/request.Request" representing the +// client's request for the DiscoverInputSchema operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DiscoverInputSchema for more information on using the DiscoverInputSchema +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DiscoverInputSchemaRequest method. +// req, resp := client.DiscoverInputSchemaRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DiscoverInputSchema +func (c *KinesisAnalytics) DiscoverInputSchemaRequest(input *DiscoverInputSchemaInput) (req *request.Request, output *DiscoverInputSchemaOutput) { + op := &request.Operation{ + Name: opDiscoverInputSchema, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DiscoverInputSchemaInput{} + } + + output = &DiscoverInputSchemaOutput{} + req = c.newRequest(op, input, output) + return +} + +// DiscoverInputSchema API operation for Amazon Kinesis Analytics. 
+// +// Infers a schema by evaluating sample records on the specified streaming source +// (Amazon Kinesis stream or Amazon Kinesis Firehose delivery stream) or S3 +// object. In the response, the operation returns the inferred schema and also +// the sample records that the operation used to infer the schema. +// +// You can use the inferred schema when configuring a streaming source for your +// application. For conceptual information, see Configuring Application Input +// (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). +// Note that when you create an application using the Amazon Kinesis Analytics +// console, the console uses this operation to infer a schema and show it in +// the console user interface. +// +// This operation requires permissions to perform the kinesisanalytics:DiscoverInputSchema +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation DiscoverInputSchema for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeUnableToDetectSchemaException "UnableToDetectSchemaException" +// Data format is not valid, Amazon Kinesis Analytics is not able to detect +// schema for the given streaming source. +// +// * ErrCodeResourceProvisionedThroughputExceededException "ResourceProvisionedThroughputExceededException" +// Discovery failed to get a record from the streaming source because of the +// Amazon Kinesis Streams ProvisionedThroughputExceededException. For more information, +// see GetRecords (http://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetRecords.html) +// in the Amazon Kinesis Streams API Reference. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service is unavailable, back off and retry the operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/DiscoverInputSchema +func (c *KinesisAnalytics) DiscoverInputSchema(input *DiscoverInputSchemaInput) (*DiscoverInputSchemaOutput, error) { + req, out := c.DiscoverInputSchemaRequest(input) + return out, req.Send() +} + +// DiscoverInputSchemaWithContext is the same as DiscoverInputSchema with the addition of +// the ability to pass a context and additional request options. +// +// See DiscoverInputSchema for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) DiscoverInputSchemaWithContext(ctx aws.Context, input *DiscoverInputSchemaInput, opts ...request.Option) (*DiscoverInputSchemaOutput, error) { + req, out := c.DiscoverInputSchemaRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListApplications = "ListApplications" + +// ListApplicationsRequest generates a "aws/request.Request" representing the +// client's request for the ListApplications operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
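+//
+// As an illustrative sketch only (not part of the upstream generated documentation),
+// the manual pagination flow described below for the ListApplications operation
+// might look like the following. It assumes a configured *KinesisAnalytics client
+// named svc; the ApplicationSummaries field name is assumed from the service API
+// shape and is not shown in this excerpt:
+//
+// input := &ListApplicationsInput{}
+// for {
+// page, err := svc.ListApplications(input)
+// if err != nil {
+// break // handle the error appropriately
+// }
+// for _, summary := range page.ApplicationSummaries {
+// fmt.Println(aws.StringValue(summary.ApplicationName))
+// }
+// if !aws.BoolValue(page.HasMoreApplications) || len(page.ApplicationSummaries) == 0 {
+// break
+// }
+// last := page.ApplicationSummaries[len(page.ApplicationSummaries)-1]
+// input.ExclusiveStartApplicationName = last.ApplicationName
+// }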
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListApplications for more information on using the ListApplications +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListApplicationsRequest method. +// req, resp := client.ListApplicationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/ListApplications +func (c *KinesisAnalytics) ListApplicationsRequest(input *ListApplicationsInput) (req *request.Request, output *ListApplicationsOutput) { + op := &request.Operation{ + Name: opListApplications, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListApplicationsInput{} + } + + output = &ListApplicationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListApplications API operation for Amazon Kinesis Analytics. +// +// Returns a list of Amazon Kinesis Analytics applications in your account. +// For each application, the response includes the application name, Amazon +// Resource Name (ARN), and status. If the response returns the HasMoreApplications +// value as true, you can send another request by adding the ExclusiveStartApplicationName +// in the request body, and set the value of this to the last application name +// from the previous response. +// +// If you want detailed information about a specific application, use DescribeApplication. +// +// This operation requires permissions to perform the kinesisanalytics:ListApplications +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation ListApplications for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/ListApplications +func (c *KinesisAnalytics) ListApplications(input *ListApplicationsInput) (*ListApplicationsOutput, error) { + req, out := c.ListApplicationsRequest(input) + return out, req.Send() +} + +// ListApplicationsWithContext is the same as ListApplications with the addition of +// the ability to pass a context and additional request options. +// +// See ListApplications for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) ListApplicationsWithContext(ctx aws.Context, input *ListApplicationsInput, opts ...request.Option) (*ListApplicationsOutput, error) { + req, out := c.ListApplicationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartApplication = "StartApplication" + +// StartApplicationRequest generates a "aws/request.Request" representing the +// client's request for the StartApplication operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartApplication for more information on using the StartApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartApplicationRequest method. +// req, resp := client.StartApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/StartApplication +func (c *KinesisAnalytics) StartApplicationRequest(input *StartApplicationInput) (req *request.Request, output *StartApplicationOutput) { + op := &request.Operation{ + Name: opStartApplication, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartApplicationInput{} + } + + output = &StartApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartApplication API operation for Amazon Kinesis Analytics. +// +// Starts the specified Amazon Kinesis Analytics application. After creating +// an application, you must exclusively call this operation to start your application. +// +// After the application starts, it begins consuming the input data, processes +// it, and writes the output to the configured destination. +// +// The application status must be READY for you to start an application. You +// can get the application status in the console or using the DescribeApplication +// operation. +// +// After you start the application, you can stop the application from processing +// the input by calling the StopApplication operation. +// +// This operation requires permissions to perform the kinesisanalytics:StartApplication +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation StartApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeInvalidApplicationConfigurationException "InvalidApplicationConfigurationException" +// User-provided application configuration is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/StartApplication +func (c *KinesisAnalytics) StartApplication(input *StartApplicationInput) (*StartApplicationOutput, error) { + req, out := c.StartApplicationRequest(input) + return out, req.Send() +} + +// StartApplicationWithContext is the same as StartApplication with the addition of +// the ability to pass a context and additional request options. +// +// See StartApplication for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) StartApplicationWithContext(ctx aws.Context, input *StartApplicationInput, opts ...request.Option) (*StartApplicationOutput, error) { + req, out := c.StartApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopApplication = "StopApplication" + +// StopApplicationRequest generates a "aws/request.Request" representing the +// client's request for the StopApplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopApplication for more information on using the StopApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopApplicationRequest method. +// req, resp := client.StopApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/StopApplication +func (c *KinesisAnalytics) StopApplicationRequest(input *StopApplicationInput) (req *request.Request, output *StopApplicationOutput) { + op := &request.Operation{ + Name: opStopApplication, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopApplicationInput{} + } + + output = &StopApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopApplication API operation for Amazon Kinesis Analytics. +// +// Stops the application from processing input data. You can stop an application +// only if it is in the running state. You can use the DescribeApplication operation +// to find the application state. After the application is stopped, Amazon Kinesis +// Analytics stops reading data from the input, the application stops processing +// data, and there is no output written to the destination. +// +// This operation requires permissions to perform the kinesisanalytics:StopApplication +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation StopApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/StopApplication +func (c *KinesisAnalytics) StopApplication(input *StopApplicationInput) (*StopApplicationOutput, error) { + req, out := c.StopApplicationRequest(input) + return out, req.Send() +} + +// StopApplicationWithContext is the same as StopApplication with the addition of +// the ability to pass a context and additional request options. +// +// See StopApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) StopApplicationWithContext(ctx aws.Context, input *StopApplicationInput, opts ...request.Option) (*StopApplicationOutput, error) { + req, out := c.StopApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateApplication = "UpdateApplication" + +// UpdateApplicationRequest generates a "aws/request.Request" representing the +// client's request for the UpdateApplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateApplication for more information on using the UpdateApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateApplicationRequest method. +// req, resp := client.UpdateApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/UpdateApplication +func (c *KinesisAnalytics) UpdateApplicationRequest(input *UpdateApplicationInput) (req *request.Request, output *UpdateApplicationOutput) { + op := &request.Operation{ + Name: opUpdateApplication, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateApplicationInput{} + } + + output = &UpdateApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateApplication API operation for Amazon Kinesis Analytics. +// +// Updates an existing Amazon Kinesis Analytics application. Using this API, +// you can update application code, input configuration, and output configuration. +// +// Note that Amazon Kinesis Analytics updates the CurrentApplicationVersionId +// each time you update your application. +// +// This operation requires permission for the kinesisanalytics:UpdateApplication +// action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Kinesis Analytics's +// API operation UpdateApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeCodeValidationException "CodeValidationException" +// User-provided application code (query) is invalid. This can be a simple syntax +// error. 
+// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// Specified application can't be found. +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// Application is not available for this operation. +// +// * ErrCodeInvalidArgumentException "InvalidArgumentException" +// Specified input parameter value is invalid. +// +// * ErrCodeConcurrentModificationException "ConcurrentModificationException" +// Exception thrown as a result of concurrent modification to an application. +// For example, two individuals attempting to edit the same application at the +// same time. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14/UpdateApplication +func (c *KinesisAnalytics) UpdateApplication(input *UpdateApplicationInput) (*UpdateApplicationOutput, error) { + req, out := c.UpdateApplicationRequest(input) + return out, req.Send() +} + +// UpdateApplicationWithContext is the same as UpdateApplication with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *KinesisAnalytics) UpdateApplicationWithContext(ctx aws.Context, input *UpdateApplicationInput, opts ...request.Option) (*UpdateApplicationOutput, error) { + req, out := c.UpdateApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AddApplicationCloudWatchLoggingOptionInput struct { + _ struct{} `type:"structure"` + + // The Kinesis Analytics application name. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Provides the CloudWatch log stream Amazon Resource Name (ARN) and the IAM + // role ARN. Note: To write application messages to CloudWatch, the IAM role + // that is used must have the PutLogEvents policy action enabled. + // + // CloudWatchLoggingOption is a required field + CloudWatchLoggingOption *CloudWatchLoggingOption `type:"structure" required:"true"` + + // The version ID of the Kinesis Analytics application. + // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` +} + +// String returns the string representation +func (s AddApplicationCloudWatchLoggingOptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddApplicationCloudWatchLoggingOptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AddApplicationCloudWatchLoggingOptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddApplicationCloudWatchLoggingOptionInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CloudWatchLoggingOption == nil { + invalidParams.Add(request.NewErrParamRequired("CloudWatchLoggingOption")) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + if s.CloudWatchLoggingOption != nil { + if err := s.CloudWatchLoggingOption.Validate(); err != nil { + invalidParams.AddNested("CloudWatchLoggingOption", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *AddApplicationCloudWatchLoggingOptionInput) SetApplicationName(v string) *AddApplicationCloudWatchLoggingOptionInput { + s.ApplicationName = &v + return s +} + +// SetCloudWatchLoggingOption sets the CloudWatchLoggingOption field's value. +func (s *AddApplicationCloudWatchLoggingOptionInput) SetCloudWatchLoggingOption(v *CloudWatchLoggingOption) *AddApplicationCloudWatchLoggingOptionInput { + s.CloudWatchLoggingOption = v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *AddApplicationCloudWatchLoggingOptionInput) SetCurrentApplicationVersionId(v int64) *AddApplicationCloudWatchLoggingOptionInput { + s.CurrentApplicationVersionId = &v + return s +} + +type AddApplicationCloudWatchLoggingOptionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddApplicationCloudWatchLoggingOptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddApplicationCloudWatchLoggingOptionOutput) GoString() string { + return s.String() +} + +type AddApplicationInputInput struct { + _ struct{} `type:"structure"` + + // Name of your existing Amazon Kinesis Analytics application to which you want + // to add the streaming source. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Current version of your Amazon Kinesis Analytics application. You can use + // the DescribeApplication operation to find the current application version. + // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` + + // The Input to add. + // + // Input is a required field + Input *Input `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AddApplicationInputInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddApplicationInputInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AddApplicationInputInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddApplicationInputInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + if s.Input == nil { + invalidParams.Add(request.NewErrParamRequired("Input")) + } + if s.Input != nil { + if err := s.Input.Validate(); err != nil { + invalidParams.AddNested("Input", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *AddApplicationInputInput) SetApplicationName(v string) *AddApplicationInputInput { + s.ApplicationName = &v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *AddApplicationInputInput) SetCurrentApplicationVersionId(v int64) *AddApplicationInputInput { + s.CurrentApplicationVersionId = &v + return s +} + +// SetInput sets the Input field's value. +func (s *AddApplicationInputInput) SetInput(v *Input) *AddApplicationInputInput { + s.Input = v + return s +} + +type AddApplicationInputOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddApplicationInputOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddApplicationInputOutput) GoString() string { + return s.String() +} + +type AddApplicationInputProcessingConfigurationInput struct { + _ struct{} `type:"structure"` + + // Name of the application to which you want to add the input processing configuration. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Version of the application to which you want to add the input processing + // configuration. You can use the DescribeApplication operation to get the current + // application version. If the version specified is not the current version, + // the ConcurrentModificationException is returned. + // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` + + // The ID of the input configuration to add the input processing configuration + // to. You can get a list of the input IDs for an application using the DescribeApplication + // operation. + // + // InputId is a required field + InputId *string `min:"1" type:"string" required:"true"` + + // The InputProcessingConfiguration to add to the application. + // + // InputProcessingConfiguration is a required field + InputProcessingConfiguration *InputProcessingConfiguration `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AddApplicationInputProcessingConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddApplicationInputProcessingConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AddApplicationInputProcessingConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddApplicationInputProcessingConfigurationInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + if s.InputId == nil { + invalidParams.Add(request.NewErrParamRequired("InputId")) + } + if s.InputId != nil && len(*s.InputId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InputId", 1)) + } + if s.InputProcessingConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("InputProcessingConfiguration")) + } + if s.InputProcessingConfiguration != nil { + if err := s.InputProcessingConfiguration.Validate(); err != nil { + invalidParams.AddNested("InputProcessingConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *AddApplicationInputProcessingConfigurationInput) SetApplicationName(v string) *AddApplicationInputProcessingConfigurationInput { + s.ApplicationName = &v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *AddApplicationInputProcessingConfigurationInput) SetCurrentApplicationVersionId(v int64) *AddApplicationInputProcessingConfigurationInput { + s.CurrentApplicationVersionId = &v + return s +} + +// SetInputId sets the InputId field's value. +func (s *AddApplicationInputProcessingConfigurationInput) SetInputId(v string) *AddApplicationInputProcessingConfigurationInput { + s.InputId = &v + return s +} + +// SetInputProcessingConfiguration sets the InputProcessingConfiguration field's value. +func (s *AddApplicationInputProcessingConfigurationInput) SetInputProcessingConfiguration(v *InputProcessingConfiguration) *AddApplicationInputProcessingConfigurationInput { + s.InputProcessingConfiguration = v + return s +} + +type AddApplicationInputProcessingConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddApplicationInputProcessingConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddApplicationInputProcessingConfigurationOutput) GoString() string { + return s.String() +} + +type AddApplicationOutputInput struct { + _ struct{} `type:"structure"` + + // Name of the application to which you want to add the output configuration. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Version of the application to which you want to add the output configuration. + // You can use the DescribeApplication operation to get the current application + // version. If the version specified is not the current version, the ConcurrentModificationException + // is returned. 
+ //
+ // CurrentApplicationVersionId is a required field
+ CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"`
+
+ // An array of objects, each describing one output configuration. In the output
+ // configuration, you specify the name of an in-application stream, a destination
+ // (that is, an Amazon Kinesis stream, an Amazon Kinesis Firehose delivery stream,
+ // or an AWS Lambda function), and the record format to use when writing
+ // to the destination.
+ //
+ // Output is a required field
+ Output *Output `type:"structure" required:"true"`
+}
+
+// String returns the string representation
+func (s AddApplicationOutputInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AddApplicationOutputInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *AddApplicationOutputInput) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "AddApplicationOutputInput"}
+ if s.ApplicationName == nil {
+ invalidParams.Add(request.NewErrParamRequired("ApplicationName"))
+ }
+ if s.ApplicationName != nil && len(*s.ApplicationName) < 1 {
+ invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1))
+ }
+ if s.CurrentApplicationVersionId == nil {
+ invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId"))
+ }
+ if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 {
+ invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1))
+ }
+ if s.Output == nil {
+ invalidParams.Add(request.NewErrParamRequired("Output"))
+ }
+ if s.Output != nil {
+ if err := s.Output.Validate(); err != nil {
+ invalidParams.AddNested("Output", err.(request.ErrInvalidParams))
+ }
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetApplicationName sets the ApplicationName field's value.
+func (s *AddApplicationOutputInput) SetApplicationName(v string) *AddApplicationOutputInput {
+ s.ApplicationName = &v
+ return s
+}
+
+// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value.
+func (s *AddApplicationOutputInput) SetCurrentApplicationVersionId(v int64) *AddApplicationOutputInput {
+ s.CurrentApplicationVersionId = &v
+ return s
+}
+
+// SetOutput sets the Output field's value.
+func (s *AddApplicationOutputInput) SetOutput(v *Output) *AddApplicationOutputInput {
+ s.Output = v
+ return s
+}
+
+type AddApplicationOutputOutput struct {
+ _ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s AddApplicationOutputOutput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AddApplicationOutputOutput) GoString() string {
+ return s.String()
+}
+
+type AddApplicationReferenceDataSourceInput struct {
+ _ struct{} `type:"structure"`
+
+ // Name of an existing application.
+ //
+ // ApplicationName is a required field
+ ApplicationName *string `min:"1" type:"string" required:"true"`
+
+ // Version of the application for which you are adding the reference data source.
+ // You can use the DescribeApplication operation to get the current application
+ // version. If the version specified is not the current version, the ConcurrentModificationException
+ // is returned.
+ // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` + + // The reference data source can be an object in your Amazon S3 bucket. Amazon + // Kinesis Analytics reads the object and copies the data into the in-application + // table that is created. You provide an S3 bucket, object key name, and the + // resulting in-application table that is created. You must also provide an + // IAM role with the necessary permissions that Amazon Kinesis Analytics can + // assume to read the object from your S3 bucket on your behalf. + // + // ReferenceDataSource is a required field + ReferenceDataSource *ReferenceDataSource `type:"structure" required:"true"` +} + +// String returns the string representation +func (s AddApplicationReferenceDataSourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddApplicationReferenceDataSourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddApplicationReferenceDataSourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddApplicationReferenceDataSourceInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + if s.ReferenceDataSource == nil { + invalidParams.Add(request.NewErrParamRequired("ReferenceDataSource")) + } + if s.ReferenceDataSource != nil { + if err := s.ReferenceDataSource.Validate(); err != nil { + invalidParams.AddNested("ReferenceDataSource", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *AddApplicationReferenceDataSourceInput) SetApplicationName(v string) *AddApplicationReferenceDataSourceInput { + s.ApplicationName = &v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *AddApplicationReferenceDataSourceInput) SetCurrentApplicationVersionId(v int64) *AddApplicationReferenceDataSourceInput { + s.CurrentApplicationVersionId = &v + return s +} + +// SetReferenceDataSource sets the ReferenceDataSource field's value. +func (s *AddApplicationReferenceDataSourceInput) SetReferenceDataSource(v *ReferenceDataSource) *AddApplicationReferenceDataSourceInput { + s.ReferenceDataSource = v + return s +} + +type AddApplicationReferenceDataSourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddApplicationReferenceDataSourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddApplicationReferenceDataSourceOutput) GoString() string { + return s.String() +} + +// Provides a description of the application, including the application Amazon +// Resource Name (ARN), status, latest version, and input and output configuration. 
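+//
+// As an illustrative sketch only (not part of the upstream generated documentation),
+// such a detail is typically obtained from a DescribeApplication call; the
+// ApplicationDetail field name on the Describe output and the Describe input shape
+// are assumed here and are not shown in this excerpt:
+//
+// out, err := svc.DescribeApplication(&DescribeApplicationInput{
+// ApplicationName: aws.String("example-application"),
+// })
+// if err == nil {
+// detail := out.ApplicationDetail
+// fmt.Println(aws.StringValue(detail.ApplicationStatus))
+// fmt.Println(aws.Int64Value(detail.ApplicationVersionId)) // version needed by update operations
+// }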
+type ApplicationDetail struct { + _ struct{} `type:"structure"` + + // ARN of the application. + // + // ApplicationARN is a required field + ApplicationARN *string `min:"1" type:"string" required:"true"` + + // Returns the application code that you provided to perform data analysis on + // any of the in-application streams in your application. + ApplicationCode *string `type:"string"` + + // Description of the application. + ApplicationDescription *string `type:"string"` + + // Name of the application. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Status of the application. + // + // ApplicationStatus is a required field + ApplicationStatus *string `type:"string" required:"true" enum:"ApplicationStatus"` + + // Provides the current application version. + // + // ApplicationVersionId is a required field + ApplicationVersionId *int64 `min:"1" type:"long" required:"true"` + + // Describes the CloudWatch log streams that are configured to receive application + // messages. For more information about using CloudWatch log streams with Amazon + // Kinesis Analytics applications, see Working with Amazon CloudWatch Logs (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/cloudwatch-logs.html). + CloudWatchLoggingOptionDescriptions []*CloudWatchLoggingOptionDescription `type:"list"` + + // Time stamp when the application version was created. + CreateTimestamp *time.Time `type:"timestamp"` + + // Describes the application input configuration. For more information, see + // Configuring Application Input (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). + InputDescriptions []*InputDescription `type:"list"` + + // Time stamp when the application was last updated. + LastUpdateTimestamp *time.Time `type:"timestamp"` + + // Describes the application output configuration. For more information, see + // Configuring Application Output (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-output.html). + OutputDescriptions []*OutputDescription `type:"list"` + + // Describes reference data sources configured for the application. For more + // information, see Configuring Application Input (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). + ReferenceDataSourceDescriptions []*ReferenceDataSourceDescription `type:"list"` +} + +// String returns the string representation +func (s ApplicationDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ApplicationDetail) GoString() string { + return s.String() +} + +// SetApplicationARN sets the ApplicationARN field's value. +func (s *ApplicationDetail) SetApplicationARN(v string) *ApplicationDetail { + s.ApplicationARN = &v + return s +} + +// SetApplicationCode sets the ApplicationCode field's value. +func (s *ApplicationDetail) SetApplicationCode(v string) *ApplicationDetail { + s.ApplicationCode = &v + return s +} + +// SetApplicationDescription sets the ApplicationDescription field's value. +func (s *ApplicationDetail) SetApplicationDescription(v string) *ApplicationDetail { + s.ApplicationDescription = &v + return s +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *ApplicationDetail) SetApplicationName(v string) *ApplicationDetail { + s.ApplicationName = &v + return s +} + +// SetApplicationStatus sets the ApplicationStatus field's value. 
+func (s *ApplicationDetail) SetApplicationStatus(v string) *ApplicationDetail { + s.ApplicationStatus = &v + return s +} + +// SetApplicationVersionId sets the ApplicationVersionId field's value. +func (s *ApplicationDetail) SetApplicationVersionId(v int64) *ApplicationDetail { + s.ApplicationVersionId = &v + return s +} + +// SetCloudWatchLoggingOptionDescriptions sets the CloudWatchLoggingOptionDescriptions field's value. +func (s *ApplicationDetail) SetCloudWatchLoggingOptionDescriptions(v []*CloudWatchLoggingOptionDescription) *ApplicationDetail { + s.CloudWatchLoggingOptionDescriptions = v + return s +} + +// SetCreateTimestamp sets the CreateTimestamp field's value. +func (s *ApplicationDetail) SetCreateTimestamp(v time.Time) *ApplicationDetail { + s.CreateTimestamp = &v + return s +} + +// SetInputDescriptions sets the InputDescriptions field's value. +func (s *ApplicationDetail) SetInputDescriptions(v []*InputDescription) *ApplicationDetail { + s.InputDescriptions = v + return s +} + +// SetLastUpdateTimestamp sets the LastUpdateTimestamp field's value. +func (s *ApplicationDetail) SetLastUpdateTimestamp(v time.Time) *ApplicationDetail { + s.LastUpdateTimestamp = &v + return s +} + +// SetOutputDescriptions sets the OutputDescriptions field's value. +func (s *ApplicationDetail) SetOutputDescriptions(v []*OutputDescription) *ApplicationDetail { + s.OutputDescriptions = v + return s +} + +// SetReferenceDataSourceDescriptions sets the ReferenceDataSourceDescriptions field's value. +func (s *ApplicationDetail) SetReferenceDataSourceDescriptions(v []*ReferenceDataSourceDescription) *ApplicationDetail { + s.ReferenceDataSourceDescriptions = v + return s +} + +// Provides application summary information, including the application Amazon +// Resource Name (ARN), name, and status. +type ApplicationSummary struct { + _ struct{} `type:"structure"` + + // ARN of the application. + // + // ApplicationARN is a required field + ApplicationARN *string `min:"1" type:"string" required:"true"` + + // Name of the application. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Status of the application. + // + // ApplicationStatus is a required field + ApplicationStatus *string `type:"string" required:"true" enum:"ApplicationStatus"` +} + +// String returns the string representation +func (s ApplicationSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ApplicationSummary) GoString() string { + return s.String() +} + +// SetApplicationARN sets the ApplicationARN field's value. +func (s *ApplicationSummary) SetApplicationARN(v string) *ApplicationSummary { + s.ApplicationARN = &v + return s +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *ApplicationSummary) SetApplicationName(v string) *ApplicationSummary { + s.ApplicationName = &v + return s +} + +// SetApplicationStatus sets the ApplicationStatus field's value. +func (s *ApplicationSummary) SetApplicationStatus(v string) *ApplicationSummary { + s.ApplicationStatus = &v + return s +} + +// Describes updates to apply to an existing Amazon Kinesis Analytics application. +type ApplicationUpdate struct { + _ struct{} `type:"structure"` + + // Describes application code updates. + ApplicationCodeUpdate *string `type:"string"` + + // Describes application CloudWatch logging option updates. 
+ CloudWatchLoggingOptionUpdates []*CloudWatchLoggingOptionUpdate `type:"list"`
+
+ // Describes application input configuration updates.
+ InputUpdates []*InputUpdate `type:"list"`
+
+ // Describes application output configuration updates.
+ OutputUpdates []*OutputUpdate `type:"list"`
+
+ // Describes application reference data source updates.
+ ReferenceDataSourceUpdates []*ReferenceDataSourceUpdate `type:"list"`
+}
+
+// String returns the string representation
+func (s ApplicationUpdate) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ApplicationUpdate) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *ApplicationUpdate) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "ApplicationUpdate"}
+ if s.CloudWatchLoggingOptionUpdates != nil {
+ for i, v := range s.CloudWatchLoggingOptionUpdates {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CloudWatchLoggingOptionUpdates", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+ if s.InputUpdates != nil {
+ for i, v := range s.InputUpdates {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InputUpdates", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+ if s.OutputUpdates != nil {
+ for i, v := range s.OutputUpdates {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OutputUpdates", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+ if s.ReferenceDataSourceUpdates != nil {
+ for i, v := range s.ReferenceDataSourceUpdates {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ReferenceDataSourceUpdates", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetApplicationCodeUpdate sets the ApplicationCodeUpdate field's value.
+func (s *ApplicationUpdate) SetApplicationCodeUpdate(v string) *ApplicationUpdate {
+ s.ApplicationCodeUpdate = &v
+ return s
+}
+
+// SetCloudWatchLoggingOptionUpdates sets the CloudWatchLoggingOptionUpdates field's value.
+func (s *ApplicationUpdate) SetCloudWatchLoggingOptionUpdates(v []*CloudWatchLoggingOptionUpdate) *ApplicationUpdate {
+ s.CloudWatchLoggingOptionUpdates = v
+ return s
+}
+
+// SetInputUpdates sets the InputUpdates field's value.
+func (s *ApplicationUpdate) SetInputUpdates(v []*InputUpdate) *ApplicationUpdate {
+ s.InputUpdates = v
+ return s
+}
+
+// SetOutputUpdates sets the OutputUpdates field's value.
+func (s *ApplicationUpdate) SetOutputUpdates(v []*OutputUpdate) *ApplicationUpdate {
+ s.OutputUpdates = v
+ return s
+}
+
+// SetReferenceDataSourceUpdates sets the ReferenceDataSourceUpdates field's value.
+func (s *ApplicationUpdate) SetReferenceDataSourceUpdates(v []*ReferenceDataSourceUpdate) *ApplicationUpdate {
+ s.ReferenceDataSourceUpdates = v
+ return s
+}
+
+// Provides additional mapping information when the record format uses delimiters,
+// such as CSV. For example, the following sample records use CSV format, where
+// the records use '\n' as the row delimiter and a comma (",") as the column
+// delimiter:
+//
+// "name1", "address1"
+//
+// "name2", "address2"
+type CSVMappingParameters struct {
+ _ struct{} `type:"structure"`
+
+ // Column delimiter.
For example, in a CSV format, a comma (",") is the typical + // column delimiter. + // + // RecordColumnDelimiter is a required field + RecordColumnDelimiter *string `min:"1" type:"string" required:"true"` + + // Row delimiter. For example, in a CSV format, '\n' is the typical row delimiter. + // + // RecordRowDelimiter is a required field + RecordRowDelimiter *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CSVMappingParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CSVMappingParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CSVMappingParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CSVMappingParameters"} + if s.RecordColumnDelimiter == nil { + invalidParams.Add(request.NewErrParamRequired("RecordColumnDelimiter")) + } + if s.RecordColumnDelimiter != nil && len(*s.RecordColumnDelimiter) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RecordColumnDelimiter", 1)) + } + if s.RecordRowDelimiter == nil { + invalidParams.Add(request.NewErrParamRequired("RecordRowDelimiter")) + } + if s.RecordRowDelimiter != nil && len(*s.RecordRowDelimiter) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RecordRowDelimiter", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRecordColumnDelimiter sets the RecordColumnDelimiter field's value. +func (s *CSVMappingParameters) SetRecordColumnDelimiter(v string) *CSVMappingParameters { + s.RecordColumnDelimiter = &v + return s +} + +// SetRecordRowDelimiter sets the RecordRowDelimiter field's value. +func (s *CSVMappingParameters) SetRecordRowDelimiter(v string) *CSVMappingParameters { + s.RecordRowDelimiter = &v + return s +} + +// Provides a description of CloudWatch logging options, including the log stream +// Amazon Resource Name (ARN) and the role ARN. +type CloudWatchLoggingOption struct { + _ struct{} `type:"structure"` + + // ARN of the CloudWatch log to receive application messages. + // + // LogStreamARN is a required field + LogStreamARN *string `min:"1" type:"string" required:"true"` + + // IAM ARN of the role to use to send application messages. Note: To write application + // messages to CloudWatch, the IAM role that is used must have the PutLogEvents + // policy action enabled. + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CloudWatchLoggingOption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudWatchLoggingOption) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
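// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the generated SDK code above): building the
// CSV mapping that the CSVMappingParameters comments describe, using the
// generated Set* helpers and Validate. The helper name and the delimiter
// values are assumptions chosen to match the sample records ('\n' rows, ","
// columns).
func exampleCSVMappingParameters() (*CSVMappingParameters, error) {
	// Comma-separated columns, newline-separated rows, as in the sample records.
	params := (&CSVMappingParameters{}).
		SetRecordColumnDelimiter(",").
		SetRecordRowDelimiter("\n")

	// Validate enforces the required fields and their minimum lengths before
	// the structure is used as part of a record format.
	if err := params.Validate(); err != nil {
		return nil, err
	}
	return params, nil
}
// ---------------------------------------------------------------------------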
+func (s *CloudWatchLoggingOption) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CloudWatchLoggingOption"} + if s.LogStreamARN == nil { + invalidParams.Add(request.NewErrParamRequired("LogStreamARN")) + } + if s.LogStreamARN != nil && len(*s.LogStreamARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamARN", 1)) + } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogStreamARN sets the LogStreamARN field's value. +func (s *CloudWatchLoggingOption) SetLogStreamARN(v string) *CloudWatchLoggingOption { + s.LogStreamARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *CloudWatchLoggingOption) SetRoleARN(v string) *CloudWatchLoggingOption { + s.RoleARN = &v + return s +} + +// Description of the CloudWatch logging option. +type CloudWatchLoggingOptionDescription struct { + _ struct{} `type:"structure"` + + // ID of the CloudWatch logging option description. + CloudWatchLoggingOptionId *string `min:"1" type:"string"` + + // ARN of the CloudWatch log to receive application messages. + // + // LogStreamARN is a required field + LogStreamARN *string `min:"1" type:"string" required:"true"` + + // IAM ARN of the role to use to send application messages. Note: To write application + // messages to CloudWatch, the IAM role used must have the PutLogEvents policy + // action enabled. + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CloudWatchLoggingOptionDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudWatchLoggingOptionDescription) GoString() string { + return s.String() +} + +// SetCloudWatchLoggingOptionId sets the CloudWatchLoggingOptionId field's value. +func (s *CloudWatchLoggingOptionDescription) SetCloudWatchLoggingOptionId(v string) *CloudWatchLoggingOptionDescription { + s.CloudWatchLoggingOptionId = &v + return s +} + +// SetLogStreamARN sets the LogStreamARN field's value. +func (s *CloudWatchLoggingOptionDescription) SetLogStreamARN(v string) *CloudWatchLoggingOptionDescription { + s.LogStreamARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *CloudWatchLoggingOptionDescription) SetRoleARN(v string) *CloudWatchLoggingOptionDescription { + s.RoleARN = &v + return s +} + +// Describes CloudWatch logging option updates. +type CloudWatchLoggingOptionUpdate struct { + _ struct{} `type:"structure"` + + // ID of the CloudWatch logging option to update + // + // CloudWatchLoggingOptionId is a required field + CloudWatchLoggingOptionId *string `min:"1" type:"string" required:"true"` + + // ARN of the CloudWatch log to receive application messages. + LogStreamARNUpdate *string `min:"1" type:"string"` + + // IAM ARN of the role to use to send application messages. Note: To write application + // messages to CloudWatch, the IAM role used must have the PutLogEvents policy + // action enabled. 
+ RoleARNUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s CloudWatchLoggingOptionUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudWatchLoggingOptionUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CloudWatchLoggingOptionUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CloudWatchLoggingOptionUpdate"} + if s.CloudWatchLoggingOptionId == nil { + invalidParams.Add(request.NewErrParamRequired("CloudWatchLoggingOptionId")) + } + if s.CloudWatchLoggingOptionId != nil && len(*s.CloudWatchLoggingOptionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CloudWatchLoggingOptionId", 1)) + } + if s.LogStreamARNUpdate != nil && len(*s.LogStreamARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamARNUpdate", 1)) + } + if s.RoleARNUpdate != nil && len(*s.RoleARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARNUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCloudWatchLoggingOptionId sets the CloudWatchLoggingOptionId field's value. +func (s *CloudWatchLoggingOptionUpdate) SetCloudWatchLoggingOptionId(v string) *CloudWatchLoggingOptionUpdate { + s.CloudWatchLoggingOptionId = &v + return s +} + +// SetLogStreamARNUpdate sets the LogStreamARNUpdate field's value. +func (s *CloudWatchLoggingOptionUpdate) SetLogStreamARNUpdate(v string) *CloudWatchLoggingOptionUpdate { + s.LogStreamARNUpdate = &v + return s +} + +// SetRoleARNUpdate sets the RoleARNUpdate field's value. +func (s *CloudWatchLoggingOptionUpdate) SetRoleARNUpdate(v string) *CloudWatchLoggingOptionUpdate { + s.RoleARNUpdate = &v + return s +} + +// TBD +type CreateApplicationInput struct { + _ struct{} `type:"structure"` + + // One or more SQL statements that read input data, transform it, and generate + // output. For example, you can write a SQL statement that reads data from one + // in-application stream, generates a running average of the number of advertisement + // clicks by vendor, and insert resulting rows in another in-application stream + // using pumps. For more information about the typical pattern, see Application + // Code (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-app-code.html). + // + // You can provide such series of SQL statements, where output of one statement + // can be used as the input for the next statement. You store intermediate results + // by creating in-application streams and pumps. + // + // Note that the application code must create the streams with names specified + // in the Outputs. For example, if your Outputs defines output streams named + // ExampleOutputStream1 and ExampleOutputStream2, then your application code + // must create these streams. + ApplicationCode *string `type:"string"` + + // Summary description of the application. + ApplicationDescription *string `type:"string"` + + // Name of your Amazon Kinesis Analytics application (for example, sample-app). + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Use this parameter to configure a CloudWatch log stream to monitor application + // configuration errors. For more information, see Working with Amazon CloudWatch + // Logs (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/cloudwatch-logs.html). 
+ CloudWatchLoggingOptions []*CloudWatchLoggingOption `type:"list"` + + // Use this parameter to configure the application input. + // + // You can configure your application to receive input from a single streaming + // source. In this configuration, you map this streaming source to an in-application + // stream that is created. Your application code can then query the in-application + // stream like a table (you can think of it as a constantly updating table). + // + // For the streaming source, you provide its Amazon Resource Name (ARN) and + // format of data on the stream (for example, JSON, CSV, etc.). You also must + // provide an IAM role that Amazon Kinesis Analytics can assume to read this + // stream on your behalf. + // + // To create the in-application stream, you need to specify a schema to transform + // your data into a schematized version used in SQL. In the schema, you provide + // the necessary mapping of the data elements in the streaming source to record + // columns in the in-app stream. + Inputs []*Input `type:"list"` + + // You can configure application output to write data from any of the in-application + // streams to up to three destinations. + // + // These destinations can be Amazon Kinesis streams, Amazon Kinesis Firehose + // delivery streams, Amazon Lambda destinations, or any combination of the three. + // + // In the configuration, you specify the in-application stream name, the destination + // stream or Lambda function Amazon Resource Name (ARN), and the format to use + // when writing data. You must also provide an IAM role that Amazon Kinesis + // Analytics can assume to write to the destination stream or Lambda function + // on your behalf. + // + // In the output configuration, you also provide the output stream or Lambda + // function ARN. For stream destinations, you provide the format of data in + // the stream (for example, JSON, CSV). You also must provide an IAM role that + // Amazon Kinesis Analytics can assume to write to the stream or Lambda function + // on your behalf. + Outputs []*Output `type:"list"` +} + +// String returns the string representation +func (s CreateApplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateApplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
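// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the generated SDK code): assembling a
// minimal CreateApplicationInput as described in the field comments above. The
// helper name, application name, SQL snippet, and ARNs are placeholder
// assumptions; the resulting value would normally be passed to the client's
// CreateApplication operation.
func exampleCreateApplicationInput() (*CreateApplicationInput, error) {
	// Optional CloudWatch logging option for surfacing configuration errors.
	logging := (&CloudWatchLoggingOption{}).
		SetLogStreamARN("arn:aws:logs:us-east-1:123456789012:log-group:example:log-stream:example"). // placeholder
		SetRoleARN("arn:aws:iam::123456789012:role/example")                                          // placeholder

	input := (&CreateApplicationInput{}).
		SetApplicationName("sample-app").                       // placeholder name
		SetApplicationDescription("Example application").       // placeholder description
		SetApplicationCode("CREATE OR REPLACE STREAM ...").     // placeholder SQL
		SetCloudWatchLoggingOptions([]*CloudWatchLoggingOption{logging})

	// Validate checks the required ApplicationName and recurses into the
	// nested logging options before the request is sent.
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}
// ---------------------------------------------------------------------------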
+func (s *CreateApplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateApplicationInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CloudWatchLoggingOptions != nil { + for i, v := range s.CloudWatchLoggingOptions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CloudWatchLoggingOptions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Inputs != nil { + for i, v := range s.Inputs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Inputs", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Outputs != nil { + for i, v := range s.Outputs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Outputs", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationCode sets the ApplicationCode field's value. +func (s *CreateApplicationInput) SetApplicationCode(v string) *CreateApplicationInput { + s.ApplicationCode = &v + return s +} + +// SetApplicationDescription sets the ApplicationDescription field's value. +func (s *CreateApplicationInput) SetApplicationDescription(v string) *CreateApplicationInput { + s.ApplicationDescription = &v + return s +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *CreateApplicationInput) SetApplicationName(v string) *CreateApplicationInput { + s.ApplicationName = &v + return s +} + +// SetCloudWatchLoggingOptions sets the CloudWatchLoggingOptions field's value. +func (s *CreateApplicationInput) SetCloudWatchLoggingOptions(v []*CloudWatchLoggingOption) *CreateApplicationInput { + s.CloudWatchLoggingOptions = v + return s +} + +// SetInputs sets the Inputs field's value. +func (s *CreateApplicationInput) SetInputs(v []*Input) *CreateApplicationInput { + s.Inputs = v + return s +} + +// SetOutputs sets the Outputs field's value. +func (s *CreateApplicationInput) SetOutputs(v []*Output) *CreateApplicationInput { + s.Outputs = v + return s +} + +// TBD +type CreateApplicationOutput struct { + _ struct{} `type:"structure"` + + // In response to your CreateApplication request, Amazon Kinesis Analytics returns + // a response with a summary of the application it created, including the application + // Amazon Resource Name (ARN), name, and status. + // + // ApplicationSummary is a required field + ApplicationSummary *ApplicationSummary `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateApplicationOutput) GoString() string { + return s.String() +} + +// SetApplicationSummary sets the ApplicationSummary field's value. +func (s *CreateApplicationOutput) SetApplicationSummary(v *ApplicationSummary) *CreateApplicationOutput { + s.ApplicationSummary = v + return s +} + +type DeleteApplicationCloudWatchLoggingOptionInput struct { + _ struct{} `type:"structure"` + + // The Kinesis Analytics application name. 
+ // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // The CloudWatchLoggingOptionId of the CloudWatch logging option to delete. + // You can get the CloudWatchLoggingOptionId by using the DescribeApplication + // operation. + // + // CloudWatchLoggingOptionId is a required field + CloudWatchLoggingOptionId *string `min:"1" type:"string" required:"true"` + + // The version ID of the Kinesis Analytics application. + // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` +} + +// String returns the string representation +func (s DeleteApplicationCloudWatchLoggingOptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationCloudWatchLoggingOptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteApplicationCloudWatchLoggingOptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApplicationCloudWatchLoggingOptionInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CloudWatchLoggingOptionId == nil { + invalidParams.Add(request.NewErrParamRequired("CloudWatchLoggingOptionId")) + } + if s.CloudWatchLoggingOptionId != nil && len(*s.CloudWatchLoggingOptionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CloudWatchLoggingOptionId", 1)) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *DeleteApplicationCloudWatchLoggingOptionInput) SetApplicationName(v string) *DeleteApplicationCloudWatchLoggingOptionInput { + s.ApplicationName = &v + return s +} + +// SetCloudWatchLoggingOptionId sets the CloudWatchLoggingOptionId field's value. +func (s *DeleteApplicationCloudWatchLoggingOptionInput) SetCloudWatchLoggingOptionId(v string) *DeleteApplicationCloudWatchLoggingOptionInput { + s.CloudWatchLoggingOptionId = &v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *DeleteApplicationCloudWatchLoggingOptionInput) SetCurrentApplicationVersionId(v int64) *DeleteApplicationCloudWatchLoggingOptionInput { + s.CurrentApplicationVersionId = &v + return s +} + +type DeleteApplicationCloudWatchLoggingOptionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteApplicationCloudWatchLoggingOptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationCloudWatchLoggingOptionOutput) GoString() string { + return s.String() +} + +type DeleteApplicationInput struct { + _ struct{} `type:"structure"` + + // Name of the Amazon Kinesis Analytics application to delete. 
+ // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // You can use the DescribeApplication operation to get this value. + // + // CreateTimestamp is a required field + CreateTimestamp *time.Time `type:"timestamp" required:"true"` +} + +// String returns the string representation +func (s DeleteApplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteApplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApplicationInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CreateTimestamp == nil { + invalidParams.Add(request.NewErrParamRequired("CreateTimestamp")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *DeleteApplicationInput) SetApplicationName(v string) *DeleteApplicationInput { + s.ApplicationName = &v + return s +} + +// SetCreateTimestamp sets the CreateTimestamp field's value. +func (s *DeleteApplicationInput) SetCreateTimestamp(v time.Time) *DeleteApplicationInput { + s.CreateTimestamp = &v + return s +} + +type DeleteApplicationInputProcessingConfigurationInput struct { + _ struct{} `type:"structure"` + + // The Kinesis Analytics application name. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // The version ID of the Kinesis Analytics application. + // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` + + // The ID of the input configuration from which to delete the input processing + // configuration. You can get a list of the input IDs for an application by + // using the DescribeApplication operation. + // + // InputId is a required field + InputId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteApplicationInputProcessingConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationInputProcessingConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
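// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the generated SDK code): a
// DeleteApplicationInput needs both the application name and the
// CreateTimestamp that DescribeApplication returns, as the field comments
// above note. The helper name and its parameters are assumptions supplied by
// the caller.
func exampleDeleteApplicationInput(name string, createTimestamp time.Time) (*DeleteApplicationInput, error) {
	input := (&DeleteApplicationInput{}).
		SetApplicationName(name).
		SetCreateTimestamp(createTimestamp) // obtained from DescribeApplication

	// Validate only checks presence and minimum length here; the timestamp
	// itself must come from the application's DescribeApplication response.
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}
// ---------------------------------------------------------------------------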
+func (s *DeleteApplicationInputProcessingConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApplicationInputProcessingConfigurationInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + if s.InputId == nil { + invalidParams.Add(request.NewErrParamRequired("InputId")) + } + if s.InputId != nil && len(*s.InputId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InputId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *DeleteApplicationInputProcessingConfigurationInput) SetApplicationName(v string) *DeleteApplicationInputProcessingConfigurationInput { + s.ApplicationName = &v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *DeleteApplicationInputProcessingConfigurationInput) SetCurrentApplicationVersionId(v int64) *DeleteApplicationInputProcessingConfigurationInput { + s.CurrentApplicationVersionId = &v + return s +} + +// SetInputId sets the InputId field's value. +func (s *DeleteApplicationInputProcessingConfigurationInput) SetInputId(v string) *DeleteApplicationInputProcessingConfigurationInput { + s.InputId = &v + return s +} + +type DeleteApplicationInputProcessingConfigurationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteApplicationInputProcessingConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationInputProcessingConfigurationOutput) GoString() string { + return s.String() +} + +type DeleteApplicationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationOutput) GoString() string { + return s.String() +} + +type DeleteApplicationOutputInput struct { + _ struct{} `type:"structure"` + + // Amazon Kinesis Analytics application name. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Amazon Kinesis Analytics application version. You can use the DescribeApplication + // operation to get the current application version. If the version specified + // is not the current version, the ConcurrentModificationException is returned. + // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` + + // The ID of the configuration to delete. Each output configuration that is + // added to the application, either when the application is created or later + // using the AddApplicationOutput operation, has a unique ID. You need to provide + // the ID to uniquely identify the output configuration that you want to delete + // from the application configuration. 
You can use the DescribeApplication operation + // to get the specific OutputId. + // + // OutputId is a required field + OutputId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteApplicationOutputInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationOutputInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteApplicationOutputInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApplicationOutputInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + if s.OutputId == nil { + invalidParams.Add(request.NewErrParamRequired("OutputId")) + } + if s.OutputId != nil && len(*s.OutputId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OutputId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *DeleteApplicationOutputInput) SetApplicationName(v string) *DeleteApplicationOutputInput { + s.ApplicationName = &v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *DeleteApplicationOutputInput) SetCurrentApplicationVersionId(v int64) *DeleteApplicationOutputInput { + s.CurrentApplicationVersionId = &v + return s +} + +// SetOutputId sets the OutputId field's value. +func (s *DeleteApplicationOutputInput) SetOutputId(v string) *DeleteApplicationOutputInput { + s.OutputId = &v + return s +} + +type DeleteApplicationOutputOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteApplicationOutputOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationOutputOutput) GoString() string { + return s.String() +} + +type DeleteApplicationReferenceDataSourceInput struct { + _ struct{} `type:"structure"` + + // Name of an existing application. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Version of the application. You can use the DescribeApplication operation + // to get the current application version. If the version specified is not the + // current version, the ConcurrentModificationException is returned. + // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` + + // ID of the reference data source. When you add a reference data source to + // your application using the AddApplicationReferenceDataSource, Amazon Kinesis + // Analytics assigns an ID. You can use the DescribeApplication operation to + // get the reference ID. 
+ // + // ReferenceId is a required field + ReferenceId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteApplicationReferenceDataSourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationReferenceDataSourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteApplicationReferenceDataSourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApplicationReferenceDataSourceInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + if s.ReferenceId == nil { + invalidParams.Add(request.NewErrParamRequired("ReferenceId")) + } + if s.ReferenceId != nil && len(*s.ReferenceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ReferenceId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *DeleteApplicationReferenceDataSourceInput) SetApplicationName(v string) *DeleteApplicationReferenceDataSourceInput { + s.ApplicationName = &v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *DeleteApplicationReferenceDataSourceInput) SetCurrentApplicationVersionId(v int64) *DeleteApplicationReferenceDataSourceInput { + s.CurrentApplicationVersionId = &v + return s +} + +// SetReferenceId sets the ReferenceId field's value. +func (s *DeleteApplicationReferenceDataSourceInput) SetReferenceId(v string) *DeleteApplicationReferenceDataSourceInput { + s.ReferenceId = &v + return s +} + +type DeleteApplicationReferenceDataSourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteApplicationReferenceDataSourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationReferenceDataSourceOutput) GoString() string { + return s.String() +} + +type DescribeApplicationInput struct { + _ struct{} `type:"structure"` + + // Name of the application. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeApplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeApplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
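// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the generated SDK code): the delete-style
// requests above all need the current application version, which
// DescribeApplication returns inside ApplicationDetail. The helper name is an
// assumption, and the caller is assumed to pass in the detail it already
// fetched plus the OutputId it wants to remove.
func exampleDeleteApplicationOutputInput(name string, detail *ApplicationDetail, outputId string) (*DeleteApplicationOutputInput, error) {
	input := (&DeleteApplicationOutputInput{}).
		SetApplicationName(name).
		SetCurrentApplicationVersionId(*detail.ApplicationVersionId). // version from DescribeApplication
		SetOutputId(outputId)                                         // ID of the output configuration to remove

	// If the version is stale, the service returns ConcurrentModificationException,
	// as the field documentation above describes.
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}
// ---------------------------------------------------------------------------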
+func (s *DescribeApplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeApplicationInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *DescribeApplicationInput) SetApplicationName(v string) *DescribeApplicationInput { + s.ApplicationName = &v + return s +} + +type DescribeApplicationOutput struct { + _ struct{} `type:"structure"` + + // Provides a description of the application, such as the application Amazon + // Resource Name (ARN), status, latest version, and input and output configuration + // details. + // + // ApplicationDetail is a required field + ApplicationDetail *ApplicationDetail `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DescribeApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeApplicationOutput) GoString() string { + return s.String() +} + +// SetApplicationDetail sets the ApplicationDetail field's value. +func (s *DescribeApplicationOutput) SetApplicationDetail(v *ApplicationDetail) *DescribeApplicationOutput { + s.ApplicationDetail = v + return s +} + +// Describes the data format when records are written to the destination. For +// more information, see Configuring Application Output (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-output.html). +type DestinationSchema struct { + _ struct{} `type:"structure"` + + // Specifies the format of the records on the output stream. + RecordFormatType *string `type:"string" enum:"RecordFormatType"` +} + +// String returns the string representation +func (s DestinationSchema) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DestinationSchema) GoString() string { + return s.String() +} + +// SetRecordFormatType sets the RecordFormatType field's value. +func (s *DestinationSchema) SetRecordFormatType(v string) *DestinationSchema { + s.RecordFormatType = &v + return s +} + +type DiscoverInputSchemaInput struct { + _ struct{} `type:"structure"` + + // The InputProcessingConfiguration to use to preprocess the records before + // discovering the schema of the records. + InputProcessingConfiguration *InputProcessingConfiguration `type:"structure"` + + // Point at which you want Amazon Kinesis Analytics to start reading records + // from the specified streaming source discovery purposes. + InputStartingPositionConfiguration *InputStartingPositionConfiguration `type:"structure"` + + // Amazon Resource Name (ARN) of the streaming source. + ResourceARN *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream on your behalf. + RoleARN *string `min:"1" type:"string"` + + // Specify this parameter to discover a schema from data in an S3 object. 
+ S3Configuration *S3Configuration `type:"structure"` +} + +// String returns the string representation +func (s DiscoverInputSchemaInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DiscoverInputSchemaInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DiscoverInputSchemaInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DiscoverInputSchemaInput"} + if s.ResourceARN != nil && len(*s.ResourceARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 1)) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + if s.InputProcessingConfiguration != nil { + if err := s.InputProcessingConfiguration.Validate(); err != nil { + invalidParams.AddNested("InputProcessingConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.S3Configuration != nil { + if err := s.S3Configuration.Validate(); err != nil { + invalidParams.AddNested("S3Configuration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputProcessingConfiguration sets the InputProcessingConfiguration field's value. +func (s *DiscoverInputSchemaInput) SetInputProcessingConfiguration(v *InputProcessingConfiguration) *DiscoverInputSchemaInput { + s.InputProcessingConfiguration = v + return s +} + +// SetInputStartingPositionConfiguration sets the InputStartingPositionConfiguration field's value. +func (s *DiscoverInputSchemaInput) SetInputStartingPositionConfiguration(v *InputStartingPositionConfiguration) *DiscoverInputSchemaInput { + s.InputStartingPositionConfiguration = v + return s +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *DiscoverInputSchemaInput) SetResourceARN(v string) *DiscoverInputSchemaInput { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *DiscoverInputSchemaInput) SetRoleARN(v string) *DiscoverInputSchemaInput { + s.RoleARN = &v + return s +} + +// SetS3Configuration sets the S3Configuration field's value. +func (s *DiscoverInputSchemaInput) SetS3Configuration(v *S3Configuration) *DiscoverInputSchemaInput { + s.S3Configuration = v + return s +} + +type DiscoverInputSchemaOutput struct { + _ struct{} `type:"structure"` + + // Schema inferred from the streaming source. It identifies the format of the + // data in the streaming source and how each data element maps to corresponding + // columns in the in-application stream that you can create. + InputSchema *SourceSchema `type:"structure"` + + // An array of elements, where each element corresponds to a row in a stream + // record (a stream record can have more than one row). + ParsedInputRecords [][]*string `type:"list"` + + // Stream data that was modified by the processor specified in the InputProcessingConfiguration + // parameter. + ProcessedInputRecords []*string `type:"list"` + + // Raw stream data that was sampled to infer the schema. + RawInputRecords []*string `type:"list"` +} + +// String returns the string representation +func (s DiscoverInputSchemaOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DiscoverInputSchemaOutput) GoString() string { + return s.String() +} + +// SetInputSchema sets the InputSchema field's value. 
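// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the generated SDK code): building a
// DiscoverInputSchemaInput for sampling a streaming source, per the field
// comments above. The helper name and parameters are assumptions; the starting
// position configuration is taken as an argument because its construction
// lies outside this excerpt.
func exampleDiscoverInputSchemaInput(streamARN, roleARN string, start *InputStartingPositionConfiguration) (*DiscoverInputSchemaInput, error) {
	input := (&DiscoverInputSchemaInput{}).
		SetResourceARN(streamARN). // ARN of the stream to sample
		SetRoleARN(roleARN).       // role Kinesis Analytics assumes to read it
		SetInputStartingPositionConfiguration(start)

	// Validate checks minimum lengths and recurses into any nested
	// preprocessing or S3 configuration that was supplied.
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}
// ---------------------------------------------------------------------------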
+func (s *DiscoverInputSchemaOutput) SetInputSchema(v *SourceSchema) *DiscoverInputSchemaOutput { + s.InputSchema = v + return s +} + +// SetParsedInputRecords sets the ParsedInputRecords field's value. +func (s *DiscoverInputSchemaOutput) SetParsedInputRecords(v [][]*string) *DiscoverInputSchemaOutput { + s.ParsedInputRecords = v + return s +} + +// SetProcessedInputRecords sets the ProcessedInputRecords field's value. +func (s *DiscoverInputSchemaOutput) SetProcessedInputRecords(v []*string) *DiscoverInputSchemaOutput { + s.ProcessedInputRecords = v + return s +} + +// SetRawInputRecords sets the RawInputRecords field's value. +func (s *DiscoverInputSchemaOutput) SetRawInputRecords(v []*string) *DiscoverInputSchemaOutput { + s.RawInputRecords = v + return s +} + +// When you configure the application input, you specify the streaming source, +// the in-application stream name that is created, and the mapping between the +// two. For more information, see Configuring Application Input (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). +type Input struct { + _ struct{} `type:"structure"` + + // Describes the number of in-application streams to create. + // + // Data from your source is routed to these in-application input streams. + // + // (see Configuring Application Input (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). + InputParallelism *InputParallelism `type:"structure"` + + // The InputProcessingConfiguration for the input. An input processor transforms + // records as they are received from the stream, before the application's SQL + // code executes. Currently, the only input processing configuration available + // is InputLambdaProcessor. + InputProcessingConfiguration *InputProcessingConfiguration `type:"structure"` + + // Describes the format of the data in the streaming source, and how each data + // element maps to corresponding columns in the in-application stream that is + // being created. + // + // Also used to describe the format of the reference data source. + // + // InputSchema is a required field + InputSchema *SourceSchema `type:"structure" required:"true"` + + // If the streaming source is an Amazon Kinesis Firehose delivery stream, identifies + // the delivery stream's ARN and an IAM role that enables Amazon Kinesis Analytics + // to access the stream on your behalf. + // + // Note: Either KinesisStreamsInput or KinesisFirehoseInput is required. + KinesisFirehoseInput *KinesisFirehoseInput `type:"structure"` + + // If the streaming source is an Amazon Kinesis stream, identifies the stream's + // Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics + // to access the stream on your behalf. + // + // Note: Either KinesisStreamsInput or KinesisFirehoseInput is required. + KinesisStreamsInput *KinesisStreamsInput `type:"structure"` + + // Name prefix to use when creating an in-application stream. Suppose that you + // specify a prefix "MyInApplicationStream." Amazon Kinesis Analytics then creates + // one or more (as per the InputParallelism count you specified) in-application + // streams with names "MyInApplicationStream_001," "MyInApplicationStream_002," + // and so on. 
+ // + // NamePrefix is a required field + NamePrefix *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s Input) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Input) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Input) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Input"} + if s.InputSchema == nil { + invalidParams.Add(request.NewErrParamRequired("InputSchema")) + } + if s.NamePrefix == nil { + invalidParams.Add(request.NewErrParamRequired("NamePrefix")) + } + if s.NamePrefix != nil && len(*s.NamePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NamePrefix", 1)) + } + if s.InputParallelism != nil { + if err := s.InputParallelism.Validate(); err != nil { + invalidParams.AddNested("InputParallelism", err.(request.ErrInvalidParams)) + } + } + if s.InputProcessingConfiguration != nil { + if err := s.InputProcessingConfiguration.Validate(); err != nil { + invalidParams.AddNested("InputProcessingConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.InputSchema != nil { + if err := s.InputSchema.Validate(); err != nil { + invalidParams.AddNested("InputSchema", err.(request.ErrInvalidParams)) + } + } + if s.KinesisFirehoseInput != nil { + if err := s.KinesisFirehoseInput.Validate(); err != nil { + invalidParams.AddNested("KinesisFirehoseInput", err.(request.ErrInvalidParams)) + } + } + if s.KinesisStreamsInput != nil { + if err := s.KinesisStreamsInput.Validate(); err != nil { + invalidParams.AddNested("KinesisStreamsInput", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputParallelism sets the InputParallelism field's value. +func (s *Input) SetInputParallelism(v *InputParallelism) *Input { + s.InputParallelism = v + return s +} + +// SetInputProcessingConfiguration sets the InputProcessingConfiguration field's value. +func (s *Input) SetInputProcessingConfiguration(v *InputProcessingConfiguration) *Input { + s.InputProcessingConfiguration = v + return s +} + +// SetInputSchema sets the InputSchema field's value. +func (s *Input) SetInputSchema(v *SourceSchema) *Input { + s.InputSchema = v + return s +} + +// SetKinesisFirehoseInput sets the KinesisFirehoseInput field's value. +func (s *Input) SetKinesisFirehoseInput(v *KinesisFirehoseInput) *Input { + s.KinesisFirehoseInput = v + return s +} + +// SetKinesisStreamsInput sets the KinesisStreamsInput field's value. +func (s *Input) SetKinesisStreamsInput(v *KinesisStreamsInput) *Input { + s.KinesisStreamsInput = v + return s +} + +// SetNamePrefix sets the NamePrefix field's value. +func (s *Input) SetNamePrefix(v string) *Input { + s.NamePrefix = &v + return s +} + +// When you start your application, you provide this configuration, which identifies +// the input source and the point in the input source at which you want the +// application to start processing records. +type InputConfiguration struct { + _ struct{} `type:"structure"` + + // Input source ID. You can get this ID by calling the DescribeApplication operation. + // + // Id is a required field + Id *string `min:"1" type:"string" required:"true"` + + // Point at which you want the application to start processing records from + // the streaming source. 
+ // + // InputStartingPositionConfiguration is a required field + InputStartingPositionConfiguration *InputStartingPositionConfiguration `type:"structure" required:"true"` +} + +// String returns the string representation +func (s InputConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputConfiguration"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.InputStartingPositionConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("InputStartingPositionConfiguration")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *InputConfiguration) SetId(v string) *InputConfiguration { + s.Id = &v + return s +} + +// SetInputStartingPositionConfiguration sets the InputStartingPositionConfiguration field's value. +func (s *InputConfiguration) SetInputStartingPositionConfiguration(v *InputStartingPositionConfiguration) *InputConfiguration { + s.InputStartingPositionConfiguration = v + return s +} + +// Describes the application input configuration. For more information, see +// Configuring Application Input (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). +type InputDescription struct { + _ struct{} `type:"structure"` + + // Returns the in-application stream names that are mapped to the stream source. + InAppStreamNames []*string `type:"list"` + + // Input ID associated with the application input. This is the ID that Amazon + // Kinesis Analytics assigns to each input configuration you add to your application. + InputId *string `min:"1" type:"string"` + + // Describes the configured parallelism (number of in-application streams mapped + // to the streaming source). + InputParallelism *InputParallelism `type:"structure"` + + // The description of the preprocessor that executes on records in this input + // before the application's code is run. + InputProcessingConfigurationDescription *InputProcessingConfigurationDescription `type:"structure"` + + // Describes the format of the data in the streaming source, and how each data + // element maps to corresponding columns in the in-application stream that is + // being created. + InputSchema *SourceSchema `type:"structure"` + + // Point at which the application is configured to read from the input stream. + InputStartingPositionConfiguration *InputStartingPositionConfiguration `type:"structure"` + + // If an Amazon Kinesis Firehose delivery stream is configured as a streaming + // source, provides the delivery stream's ARN and an IAM role that enables Amazon + // Kinesis Analytics to access the stream on your behalf. + KinesisFirehoseInputDescription *KinesisFirehoseInputDescription `type:"structure"` + + // If an Amazon Kinesis stream is configured as streaming source, provides Amazon + // Kinesis stream's Amazon Resource Name (ARN) and an IAM role that enables + // Amazon Kinesis Analytics to access the stream on your behalf. + KinesisStreamsInputDescription *KinesisStreamsInputDescription `type:"structure"` + + // In-application name prefix. 
+ NamePrefix *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s InputDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputDescription) GoString() string { + return s.String() +} + +// SetInAppStreamNames sets the InAppStreamNames field's value. +func (s *InputDescription) SetInAppStreamNames(v []*string) *InputDescription { + s.InAppStreamNames = v + return s +} + +// SetInputId sets the InputId field's value. +func (s *InputDescription) SetInputId(v string) *InputDescription { + s.InputId = &v + return s +} + +// SetInputParallelism sets the InputParallelism field's value. +func (s *InputDescription) SetInputParallelism(v *InputParallelism) *InputDescription { + s.InputParallelism = v + return s +} + +// SetInputProcessingConfigurationDescription sets the InputProcessingConfigurationDescription field's value. +func (s *InputDescription) SetInputProcessingConfigurationDescription(v *InputProcessingConfigurationDescription) *InputDescription { + s.InputProcessingConfigurationDescription = v + return s +} + +// SetInputSchema sets the InputSchema field's value. +func (s *InputDescription) SetInputSchema(v *SourceSchema) *InputDescription { + s.InputSchema = v + return s +} + +// SetInputStartingPositionConfiguration sets the InputStartingPositionConfiguration field's value. +func (s *InputDescription) SetInputStartingPositionConfiguration(v *InputStartingPositionConfiguration) *InputDescription { + s.InputStartingPositionConfiguration = v + return s +} + +// SetKinesisFirehoseInputDescription sets the KinesisFirehoseInputDescription field's value. +func (s *InputDescription) SetKinesisFirehoseInputDescription(v *KinesisFirehoseInputDescription) *InputDescription { + s.KinesisFirehoseInputDescription = v + return s +} + +// SetKinesisStreamsInputDescription sets the KinesisStreamsInputDescription field's value. +func (s *InputDescription) SetKinesisStreamsInputDescription(v *KinesisStreamsInputDescription) *InputDescription { + s.KinesisStreamsInputDescription = v + return s +} + +// SetNamePrefix sets the NamePrefix field's value. +func (s *InputDescription) SetNamePrefix(v string) *InputDescription { + s.NamePrefix = &v + return s +} + +// An object that contains the Amazon Resource Name (ARN) of the AWS Lambda +// (https://aws.amazon.com/documentation/lambda/) function that is used to preprocess +// records in the stream, and the ARN of the IAM role that is used to access +// the AWS Lambda function. +type InputLambdaProcessor struct { + _ struct{} `type:"structure"` + + // The ARN of the AWS Lambda (https://aws.amazon.com/documentation/lambda/) + // function that operates on records in the stream. + // + // ResourceARN is a required field + ResourceARN *string `min:"1" type:"string" required:"true"` + + // The ARN of the IAM role that is used to access the AWS Lambda function. + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s InputLambdaProcessor) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputLambdaProcessor) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
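// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the generated SDK code): wiring an
// application Input as the Input comments above describe. The helper name is
// an assumption, and the Kinesis stream source and SourceSchema are assumed to
// be built elsewhere and passed in, since their construction is outside this
// excerpt.
func exampleInput(prefix string, stream *KinesisStreamsInput, schema *SourceSchema) (*Input, error) {
	in := (&Input{}).
		SetNamePrefix(prefix). // e.g. "MyInApplicationStream"
		SetKinesisStreamsInput(stream).
		SetInputSchema(schema)

	// Validate checks the required NamePrefix and InputSchema and recurses
	// into the nested source and schema structures.
	if err := in.Validate(); err != nil {
		return nil, err
	}
	return in, nil
}
// ---------------------------------------------------------------------------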
+func (s *InputLambdaProcessor) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputLambdaProcessor"} + if s.ResourceARN == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceARN")) + } + if s.ResourceARN != nil && len(*s.ResourceARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 1)) + } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *InputLambdaProcessor) SetResourceARN(v string) *InputLambdaProcessor { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *InputLambdaProcessor) SetRoleARN(v string) *InputLambdaProcessor { + s.RoleARN = &v + return s +} + +// An object that contains the Amazon Resource Name (ARN) of the AWS Lambda +// (https://aws.amazon.com/documentation/lambda/) function that is used to preprocess +// records in the stream, and the ARN of the IAM role that is used to access +// the AWS Lambda expression. +type InputLambdaProcessorDescription struct { + _ struct{} `type:"structure"` + + // The ARN of the AWS Lambda (https://aws.amazon.com/documentation/lambda/) + // function that is used to preprocess the records in the stream. + ResourceARN *string `min:"1" type:"string"` + + // The ARN of the IAM role that is used to access the AWS Lambda function. + RoleARN *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s InputLambdaProcessorDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputLambdaProcessorDescription) GoString() string { + return s.String() +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *InputLambdaProcessorDescription) SetResourceARN(v string) *InputLambdaProcessorDescription { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *InputLambdaProcessorDescription) SetRoleARN(v string) *InputLambdaProcessorDescription { + s.RoleARN = &v + return s +} + +// Represents an update to the InputLambdaProcessor that is used to preprocess +// the records in the stream. +type InputLambdaProcessorUpdate struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the new AWS Lambda (https://aws.amazon.com/documentation/lambda/) + // function that is used to preprocess the records in the stream. + ResourceARNUpdate *string `min:"1" type:"string"` + + // The ARN of the new IAM role that is used to access the AWS Lambda function. + RoleARNUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s InputLambdaProcessorUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputLambdaProcessorUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *InputLambdaProcessorUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputLambdaProcessorUpdate"} + if s.ResourceARNUpdate != nil && len(*s.ResourceARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARNUpdate", 1)) + } + if s.RoleARNUpdate != nil && len(*s.RoleARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARNUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARNUpdate sets the ResourceARNUpdate field's value. +func (s *InputLambdaProcessorUpdate) SetResourceARNUpdate(v string) *InputLambdaProcessorUpdate { + s.ResourceARNUpdate = &v + return s +} + +// SetRoleARNUpdate sets the RoleARNUpdate field's value. +func (s *InputLambdaProcessorUpdate) SetRoleARNUpdate(v string) *InputLambdaProcessorUpdate { + s.RoleARNUpdate = &v + return s +} + +// Describes the number of in-application streams to create for a given streaming +// source. For information about parallelism, see Configuring Application Input +// (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-input.html). +type InputParallelism struct { + _ struct{} `type:"structure"` + + // Number of in-application streams to create. For more information, see Limits + // (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/limits.html). + Count *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s InputParallelism) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputParallelism) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputParallelism) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputParallelism"} + if s.Count != nil && *s.Count < 1 { + invalidParams.Add(request.NewErrParamMinValue("Count", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCount sets the Count field's value. +func (s *InputParallelism) SetCount(v int64) *InputParallelism { + s.Count = &v + return s +} + +// Provides updates to the parallelism count. +type InputParallelismUpdate struct { + _ struct{} `type:"structure"` + + // Number of in-application streams to create for the specified streaming source. + CountUpdate *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s InputParallelismUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputParallelismUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputParallelismUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputParallelismUpdate"} + if s.CountUpdate != nil && *s.CountUpdate < 1 { + invalidParams.Add(request.NewErrParamMinValue("CountUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCountUpdate sets the CountUpdate field's value. +func (s *InputParallelismUpdate) SetCountUpdate(v int64) *InputParallelismUpdate { + s.CountUpdate = &v + return s +} + +// Provides a description of a processor that is used to preprocess the records +// in the stream before being processed by your application code. Currently, +// the only input processor available is AWS Lambda (https://aws.amazon.com/documentation/lambda/). 
+type InputProcessingConfiguration struct { + _ struct{} `type:"structure"` + + // The InputLambdaProcessor that is used to preprocess the records in the stream + // before being processed by your application code. + // + // InputLambdaProcessor is a required field + InputLambdaProcessor *InputLambdaProcessor `type:"structure" required:"true"` +} + +// String returns the string representation +func (s InputProcessingConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputProcessingConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputProcessingConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputProcessingConfiguration"} + if s.InputLambdaProcessor == nil { + invalidParams.Add(request.NewErrParamRequired("InputLambdaProcessor")) + } + if s.InputLambdaProcessor != nil { + if err := s.InputLambdaProcessor.Validate(); err != nil { + invalidParams.AddNested("InputLambdaProcessor", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputLambdaProcessor sets the InputLambdaProcessor field's value. +func (s *InputProcessingConfiguration) SetInputLambdaProcessor(v *InputLambdaProcessor) *InputProcessingConfiguration { + s.InputLambdaProcessor = v + return s +} + +// Provides configuration information about an input processor. Currently, the +// only input processor available is AWS Lambda (https://aws.amazon.com/documentation/lambda/). +type InputProcessingConfigurationDescription struct { + _ struct{} `type:"structure"` + + // Provides configuration information about the associated InputLambdaProcessorDescription. + InputLambdaProcessorDescription *InputLambdaProcessorDescription `type:"structure"` +} + +// String returns the string representation +func (s InputProcessingConfigurationDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputProcessingConfigurationDescription) GoString() string { + return s.String() +} + +// SetInputLambdaProcessorDescription sets the InputLambdaProcessorDescription field's value. +func (s *InputProcessingConfigurationDescription) SetInputLambdaProcessorDescription(v *InputLambdaProcessorDescription) *InputProcessingConfigurationDescription { + s.InputLambdaProcessorDescription = v + return s +} + +// Describes updates to an InputProcessingConfiguration. +type InputProcessingConfigurationUpdate struct { + _ struct{} `type:"structure"` + + // Provides update information for an InputLambdaProcessor. + // + // InputLambdaProcessorUpdate is a required field + InputLambdaProcessorUpdate *InputLambdaProcessorUpdate `type:"structure" required:"true"` +} + +// String returns the string representation +func (s InputProcessingConfigurationUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputProcessingConfigurationUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
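+//
+// Illustrative sketch, not part of the generated SDK: one way a caller might
+// assemble this update with the fluent setters above and run the client-side
+// checks. The ARN values are placeholders.
+//
+//    u := (&InputProcessingConfigurationUpdate{}).
+//        SetInputLambdaProcessorUpdate((&InputLambdaProcessorUpdate{}).
+//            SetResourceARNUpdate("arn:aws:lambda:us-east-1:123456789012:function:example").
+//            SetRoleARNUpdate("arn:aws:iam::123456789012:role/example"))
+//    if err := u.Validate(); err != nil {
+//        // handle the aggregated request.ErrInvalidParams
+//    }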
+func (s *InputProcessingConfigurationUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputProcessingConfigurationUpdate"} + if s.InputLambdaProcessorUpdate == nil { + invalidParams.Add(request.NewErrParamRequired("InputLambdaProcessorUpdate")) + } + if s.InputLambdaProcessorUpdate != nil { + if err := s.InputLambdaProcessorUpdate.Validate(); err != nil { + invalidParams.AddNested("InputLambdaProcessorUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputLambdaProcessorUpdate sets the InputLambdaProcessorUpdate field's value. +func (s *InputProcessingConfigurationUpdate) SetInputLambdaProcessorUpdate(v *InputLambdaProcessorUpdate) *InputProcessingConfigurationUpdate { + s.InputLambdaProcessorUpdate = v + return s +} + +// Describes updates for the application's input schema. +type InputSchemaUpdate struct { + _ struct{} `type:"structure"` + + // A list of RecordColumn objects. Each object describes the mapping of the + // streaming source element to the corresponding column in the in-application + // stream. + RecordColumnUpdates []*RecordColumn `min:"1" type:"list"` + + // Specifies the encoding of the records in the streaming source. For example, + // UTF-8. + RecordEncodingUpdate *string `type:"string"` + + // Specifies the format of the records on the streaming source. + RecordFormatUpdate *RecordFormat `type:"structure"` +} + +// String returns the string representation +func (s InputSchemaUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputSchemaUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputSchemaUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputSchemaUpdate"} + if s.RecordColumnUpdates != nil && len(s.RecordColumnUpdates) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RecordColumnUpdates", 1)) + } + if s.RecordColumnUpdates != nil { + for i, v := range s.RecordColumnUpdates { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RecordColumnUpdates", i), err.(request.ErrInvalidParams)) + } + } + } + if s.RecordFormatUpdate != nil { + if err := s.RecordFormatUpdate.Validate(); err != nil { + invalidParams.AddNested("RecordFormatUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRecordColumnUpdates sets the RecordColumnUpdates field's value. +func (s *InputSchemaUpdate) SetRecordColumnUpdates(v []*RecordColumn) *InputSchemaUpdate { + s.RecordColumnUpdates = v + return s +} + +// SetRecordEncodingUpdate sets the RecordEncodingUpdate field's value. +func (s *InputSchemaUpdate) SetRecordEncodingUpdate(v string) *InputSchemaUpdate { + s.RecordEncodingUpdate = &v + return s +} + +// SetRecordFormatUpdate sets the RecordFormatUpdate field's value. +func (s *InputSchemaUpdate) SetRecordFormatUpdate(v *RecordFormat) *InputSchemaUpdate { + s.RecordFormatUpdate = v + return s +} + +// Describes the point at which the application reads from the streaming source. +type InputStartingPositionConfiguration struct { + _ struct{} `type:"structure"` + + // The starting position on the stream. + // + // * NOW - Start reading just after the most recent record in the stream, + // start at the request time stamp that the customer issued. 
+ // + // * TRIM_HORIZON - Start reading at the last untrimmed record in the stream, + // which is the oldest record available in the stream. This option is not + // available for an Amazon Kinesis Firehose delivery stream. + // + // * LAST_STOPPED_POINT - Resume reading from where the application last + // stopped reading. + InputStartingPosition *string `type:"string" enum:"InputStartingPosition"` +} + +// String returns the string representation +func (s InputStartingPositionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputStartingPositionConfiguration) GoString() string { + return s.String() +} + +// SetInputStartingPosition sets the InputStartingPosition field's value. +func (s *InputStartingPositionConfiguration) SetInputStartingPosition(v string) *InputStartingPositionConfiguration { + s.InputStartingPosition = &v + return s +} + +// Describes updates to a specific input configuration (identified by the InputId +// of an application). +type InputUpdate struct { + _ struct{} `type:"structure"` + + // Input ID of the application input to be updated. + // + // InputId is a required field + InputId *string `min:"1" type:"string" required:"true"` + + // Describes the parallelism updates (the number in-application streams Amazon + // Kinesis Analytics creates for the specific streaming source). + InputParallelismUpdate *InputParallelismUpdate `type:"structure"` + + // Describes updates for an input processing configuration. + InputProcessingConfigurationUpdate *InputProcessingConfigurationUpdate `type:"structure"` + + // Describes the data format on the streaming source, and how record elements + // on the streaming source map to columns of the in-application stream that + // is created. + InputSchemaUpdate *InputSchemaUpdate `type:"structure"` + + // If an Amazon Kinesis Firehose delivery stream is the streaming source to + // be updated, provides an updated stream ARN and IAM role ARN. + KinesisFirehoseInputUpdate *KinesisFirehoseInputUpdate `type:"structure"` + + // If an Amazon Kinesis stream is the streaming source to be updated, provides + // an updated stream Amazon Resource Name (ARN) and IAM role ARN. + KinesisStreamsInputUpdate *KinesisStreamsInputUpdate `type:"structure"` + + // Name prefix for in-application streams that Amazon Kinesis Analytics creates + // for the specific streaming source. + NamePrefixUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s InputUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
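+//
+// Illustrative sketch, not part of the generated SDK: an update that renames
+// the in-application stream prefix and raises the parallelism count for an
+// existing input. The input ID and values are placeholders.
+//
+//    u := (&InputUpdate{}).
+//        SetInputId("1.1").
+//        SetNamePrefixUpdate("example_prefix").
+//        SetInputParallelismUpdate((&InputParallelismUpdate{}).SetCountUpdate(2))
+//    if err := u.Validate(); err != nil {
+//        // handle the aggregated request.ErrInvalidParams
+//    }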
+func (s *InputUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputUpdate"} + if s.InputId == nil { + invalidParams.Add(request.NewErrParamRequired("InputId")) + } + if s.InputId != nil && len(*s.InputId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InputId", 1)) + } + if s.NamePrefixUpdate != nil && len(*s.NamePrefixUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NamePrefixUpdate", 1)) + } + if s.InputParallelismUpdate != nil { + if err := s.InputParallelismUpdate.Validate(); err != nil { + invalidParams.AddNested("InputParallelismUpdate", err.(request.ErrInvalidParams)) + } + } + if s.InputProcessingConfigurationUpdate != nil { + if err := s.InputProcessingConfigurationUpdate.Validate(); err != nil { + invalidParams.AddNested("InputProcessingConfigurationUpdate", err.(request.ErrInvalidParams)) + } + } + if s.InputSchemaUpdate != nil { + if err := s.InputSchemaUpdate.Validate(); err != nil { + invalidParams.AddNested("InputSchemaUpdate", err.(request.ErrInvalidParams)) + } + } + if s.KinesisFirehoseInputUpdate != nil { + if err := s.KinesisFirehoseInputUpdate.Validate(); err != nil { + invalidParams.AddNested("KinesisFirehoseInputUpdate", err.(request.ErrInvalidParams)) + } + } + if s.KinesisStreamsInputUpdate != nil { + if err := s.KinesisStreamsInputUpdate.Validate(); err != nil { + invalidParams.AddNested("KinesisStreamsInputUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputId sets the InputId field's value. +func (s *InputUpdate) SetInputId(v string) *InputUpdate { + s.InputId = &v + return s +} + +// SetInputParallelismUpdate sets the InputParallelismUpdate field's value. +func (s *InputUpdate) SetInputParallelismUpdate(v *InputParallelismUpdate) *InputUpdate { + s.InputParallelismUpdate = v + return s +} + +// SetInputProcessingConfigurationUpdate sets the InputProcessingConfigurationUpdate field's value. +func (s *InputUpdate) SetInputProcessingConfigurationUpdate(v *InputProcessingConfigurationUpdate) *InputUpdate { + s.InputProcessingConfigurationUpdate = v + return s +} + +// SetInputSchemaUpdate sets the InputSchemaUpdate field's value. +func (s *InputUpdate) SetInputSchemaUpdate(v *InputSchemaUpdate) *InputUpdate { + s.InputSchemaUpdate = v + return s +} + +// SetKinesisFirehoseInputUpdate sets the KinesisFirehoseInputUpdate field's value. +func (s *InputUpdate) SetKinesisFirehoseInputUpdate(v *KinesisFirehoseInputUpdate) *InputUpdate { + s.KinesisFirehoseInputUpdate = v + return s +} + +// SetKinesisStreamsInputUpdate sets the KinesisStreamsInputUpdate field's value. +func (s *InputUpdate) SetKinesisStreamsInputUpdate(v *KinesisStreamsInputUpdate) *InputUpdate { + s.KinesisStreamsInputUpdate = v + return s +} + +// SetNamePrefixUpdate sets the NamePrefixUpdate field's value. +func (s *InputUpdate) SetNamePrefixUpdate(v string) *InputUpdate { + s.NamePrefixUpdate = &v + return s +} + +// Provides additional mapping information when JSON is the record format on +// the streaming source. +type JSONMappingParameters struct { + _ struct{} `type:"structure"` + + // Path to the top-level parent that contains the records. 
+ // + // RecordRowPath is a required field + RecordRowPath *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s JSONMappingParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JSONMappingParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *JSONMappingParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "JSONMappingParameters"} + if s.RecordRowPath == nil { + invalidParams.Add(request.NewErrParamRequired("RecordRowPath")) + } + if s.RecordRowPath != nil && len(*s.RecordRowPath) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RecordRowPath", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRecordRowPath sets the RecordRowPath field's value. +func (s *JSONMappingParameters) SetRecordRowPath(v string) *JSONMappingParameters { + s.RecordRowPath = &v + return s +} + +// Identifies an Amazon Kinesis Firehose delivery stream as the streaming source. +// You provide the delivery stream's Amazon Resource Name (ARN) and an IAM role +// ARN that enables Amazon Kinesis Analytics to access the stream on your behalf. +type KinesisFirehoseInput struct { + _ struct{} `type:"structure"` + + // ARN of the input delivery stream. + // + // ResourceARN is a required field + ResourceARN *string `min:"1" type:"string" required:"true"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream on your behalf. You need to make sure the role has necessary permissions + // to access the stream. + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s KinesisFirehoseInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisFirehoseInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *KinesisFirehoseInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisFirehoseInput"} + if s.ResourceARN == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceARN")) + } + if s.ResourceARN != nil && len(*s.ResourceARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 1)) + } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *KinesisFirehoseInput) SetResourceARN(v string) *KinesisFirehoseInput { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisFirehoseInput) SetRoleARN(v string) *KinesisFirehoseInput { + s.RoleARN = &v + return s +} + +// Describes the Amazon Kinesis Firehose delivery stream that is configured +// as the streaming source in the application input configuration. +type KinesisFirehoseInputDescription struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the Amazon Kinesis Firehose delivery stream. + ResourceARN *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics assumes to access the stream. 
+ RoleARN *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s KinesisFirehoseInputDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisFirehoseInputDescription) GoString() string { + return s.String() +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *KinesisFirehoseInputDescription) SetResourceARN(v string) *KinesisFirehoseInputDescription { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisFirehoseInputDescription) SetRoleARN(v string) *KinesisFirehoseInputDescription { + s.RoleARN = &v + return s +} + +// When updating application input configuration, provides information about +// an Amazon Kinesis Firehose delivery stream as the streaming source. +type KinesisFirehoseInputUpdate struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the input Amazon Kinesis Firehose delivery + // stream to read. + ResourceARNUpdate *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream on your behalf. You need to grant necessary permissions to this role. + RoleARNUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s KinesisFirehoseInputUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisFirehoseInputUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *KinesisFirehoseInputUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisFirehoseInputUpdate"} + if s.ResourceARNUpdate != nil && len(*s.ResourceARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARNUpdate", 1)) + } + if s.RoleARNUpdate != nil && len(*s.RoleARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARNUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARNUpdate sets the ResourceARNUpdate field's value. +func (s *KinesisFirehoseInputUpdate) SetResourceARNUpdate(v string) *KinesisFirehoseInputUpdate { + s.ResourceARNUpdate = &v + return s +} + +// SetRoleARNUpdate sets the RoleARNUpdate field's value. +func (s *KinesisFirehoseInputUpdate) SetRoleARNUpdate(v string) *KinesisFirehoseInputUpdate { + s.RoleARNUpdate = &v + return s +} + +// When configuring application output, identifies an Amazon Kinesis Firehose +// delivery stream as the destination. You provide the stream Amazon Resource +// Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to write +// to the stream on your behalf. +type KinesisFirehoseOutput struct { + _ struct{} `type:"structure"` + + // ARN of the destination Amazon Kinesis Firehose delivery stream to write to. + // + // ResourceARN is a required field + ResourceARN *string `min:"1" type:"string" required:"true"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to write to + // the destination stream on your behalf. You need to grant the necessary permissions + // to this role. 
+ // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s KinesisFirehoseOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisFirehoseOutput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *KinesisFirehoseOutput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisFirehoseOutput"} + if s.ResourceARN == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceARN")) + } + if s.ResourceARN != nil && len(*s.ResourceARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 1)) + } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *KinesisFirehoseOutput) SetResourceARN(v string) *KinesisFirehoseOutput { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisFirehoseOutput) SetRoleARN(v string) *KinesisFirehoseOutput { + s.RoleARN = &v + return s +} + +// For an application output, describes the Amazon Kinesis Firehose delivery +// stream configured as its destination. +type KinesisFirehoseOutputDescription struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the Amazon Kinesis Firehose delivery stream. + ResourceARN *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream. + RoleARN *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s KinesisFirehoseOutputDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisFirehoseOutputDescription) GoString() string { + return s.String() +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *KinesisFirehoseOutputDescription) SetResourceARN(v string) *KinesisFirehoseOutputDescription { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisFirehoseOutputDescription) SetRoleARN(v string) *KinesisFirehoseOutputDescription { + s.RoleARN = &v + return s +} + +// When updating an output configuration using the UpdateApplication operation, +// provides information about an Amazon Kinesis Firehose delivery stream configured +// as the destination. +type KinesisFirehoseOutputUpdate struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the Amazon Kinesis Firehose delivery stream + // to write to. + ResourceARNUpdate *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream on your behalf. You need to grant necessary permissions to this role. + RoleARNUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s KinesisFirehoseOutputUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisFirehoseOutputUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *KinesisFirehoseOutputUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisFirehoseOutputUpdate"} + if s.ResourceARNUpdate != nil && len(*s.ResourceARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARNUpdate", 1)) + } + if s.RoleARNUpdate != nil && len(*s.RoleARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARNUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARNUpdate sets the ResourceARNUpdate field's value. +func (s *KinesisFirehoseOutputUpdate) SetResourceARNUpdate(v string) *KinesisFirehoseOutputUpdate { + s.ResourceARNUpdate = &v + return s +} + +// SetRoleARNUpdate sets the RoleARNUpdate field's value. +func (s *KinesisFirehoseOutputUpdate) SetRoleARNUpdate(v string) *KinesisFirehoseOutputUpdate { + s.RoleARNUpdate = &v + return s +} + +// Identifies an Amazon Kinesis stream as the streaming source. You provide +// the stream's Amazon Resource Name (ARN) and an IAM role ARN that enables +// Amazon Kinesis Analytics to access the stream on your behalf. +type KinesisStreamsInput struct { + _ struct{} `type:"structure"` + + // ARN of the input Amazon Kinesis stream to read. + // + // ResourceARN is a required field + ResourceARN *string `min:"1" type:"string" required:"true"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream on your behalf. You need to grant the necessary permissions to this + // role. + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s KinesisStreamsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisStreamsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *KinesisStreamsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisStreamsInput"} + if s.ResourceARN == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceARN")) + } + if s.ResourceARN != nil && len(*s.ResourceARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 1)) + } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *KinesisStreamsInput) SetResourceARN(v string) *KinesisStreamsInput { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisStreamsInput) SetRoleARN(v string) *KinesisStreamsInput { + s.RoleARN = &v + return s +} + +// Describes the Amazon Kinesis stream that is configured as the streaming source +// in the application input configuration. +type KinesisStreamsInputDescription struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the Amazon Kinesis stream. + ResourceARN *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream. 
+ RoleARN *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s KinesisStreamsInputDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisStreamsInputDescription) GoString() string { + return s.String() +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *KinesisStreamsInputDescription) SetResourceARN(v string) *KinesisStreamsInputDescription { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisStreamsInputDescription) SetRoleARN(v string) *KinesisStreamsInputDescription { + s.RoleARN = &v + return s +} + +// When updating application input configuration, provides information about +// an Amazon Kinesis stream as the streaming source. +type KinesisStreamsInputUpdate struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the input Amazon Kinesis stream to read. + ResourceARNUpdate *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream on your behalf. You need to grant the necessary permissions to this + // role. + RoleARNUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s KinesisStreamsInputUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisStreamsInputUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *KinesisStreamsInputUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisStreamsInputUpdate"} + if s.ResourceARNUpdate != nil && len(*s.ResourceARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARNUpdate", 1)) + } + if s.RoleARNUpdate != nil && len(*s.RoleARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARNUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARNUpdate sets the ResourceARNUpdate field's value. +func (s *KinesisStreamsInputUpdate) SetResourceARNUpdate(v string) *KinesisStreamsInputUpdate { + s.ResourceARNUpdate = &v + return s +} + +// SetRoleARNUpdate sets the RoleARNUpdate field's value. +func (s *KinesisStreamsInputUpdate) SetRoleARNUpdate(v string) *KinesisStreamsInputUpdate { + s.RoleARNUpdate = &v + return s +} + +// When configuring application output, identifies an Amazon Kinesis stream +// as the destination. You provide the stream Amazon Resource Name (ARN) and +// also an IAM role ARN that Amazon Kinesis Analytics can use to write to the +// stream on your behalf. +type KinesisStreamsOutput struct { + _ struct{} `type:"structure"` + + // ARN of the destination Amazon Kinesis stream to write to. + // + // ResourceARN is a required field + ResourceARN *string `min:"1" type:"string" required:"true"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to write to + // the destination stream on your behalf. You need to grant the necessary permissions + // to this role. 
+ // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s KinesisStreamsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisStreamsOutput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *KinesisStreamsOutput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisStreamsOutput"} + if s.ResourceARN == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceARN")) + } + if s.ResourceARN != nil && len(*s.ResourceARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 1)) + } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *KinesisStreamsOutput) SetResourceARN(v string) *KinesisStreamsOutput { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisStreamsOutput) SetRoleARN(v string) *KinesisStreamsOutput { + s.RoleARN = &v + return s +} + +// For an application output, describes the Amazon Kinesis stream configured +// as its destination. +type KinesisStreamsOutputDescription struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the Amazon Kinesis stream. + ResourceARN *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream. + RoleARN *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s KinesisStreamsOutputDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisStreamsOutputDescription) GoString() string { + return s.String() +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *KinesisStreamsOutputDescription) SetResourceARN(v string) *KinesisStreamsOutputDescription { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *KinesisStreamsOutputDescription) SetRoleARN(v string) *KinesisStreamsOutputDescription { + s.RoleARN = &v + return s +} + +// When updating an output configuration using the UpdateApplication operation, +// provides information about an Amazon Kinesis stream configured as the destination. +type KinesisStreamsOutputUpdate struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the Amazon Kinesis stream where you want to + // write the output. + ResourceARNUpdate *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to access the + // stream on your behalf. You need to grant the necessary permissions to this + // role. + RoleARNUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s KinesisStreamsOutputUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s KinesisStreamsOutputUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
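+//
+// Illustrative sketch, not part of the generated SDK: only the fields being
+// changed are set; both ARNs below are placeholders.
+//
+//    u := (&KinesisStreamsOutputUpdate{}).
+//        SetResourceARNUpdate("arn:aws:kinesis:us-east-1:123456789012:stream/example").
+//        SetRoleARNUpdate("arn:aws:iam::123456789012:role/example")
+//    if err := u.Validate(); err != nil {
+//        // handle the aggregated request.ErrInvalidParams
+//    }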
+func (s *KinesisStreamsOutputUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KinesisStreamsOutputUpdate"} + if s.ResourceARNUpdate != nil && len(*s.ResourceARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARNUpdate", 1)) + } + if s.RoleARNUpdate != nil && len(*s.RoleARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARNUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARNUpdate sets the ResourceARNUpdate field's value. +func (s *KinesisStreamsOutputUpdate) SetResourceARNUpdate(v string) *KinesisStreamsOutputUpdate { + s.ResourceARNUpdate = &v + return s +} + +// SetRoleARNUpdate sets the RoleARNUpdate field's value. +func (s *KinesisStreamsOutputUpdate) SetRoleARNUpdate(v string) *KinesisStreamsOutputUpdate { + s.RoleARNUpdate = &v + return s +} + +// When configuring application output, identifies an AWS Lambda function as +// the destination. You provide the function Amazon Resource Name (ARN) and +// also an IAM role ARN that Amazon Kinesis Analytics can use to write to the +// function on your behalf. +type LambdaOutput struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the destination Lambda function to write to. + // + // ResourceARN is a required field + ResourceARN *string `min:"1" type:"string" required:"true"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to write to + // the destination function on your behalf. You need to grant the necessary + // permissions to this role. + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s LambdaOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LambdaOutput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LambdaOutput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LambdaOutput"} + if s.ResourceARN == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceARN")) + } + if s.ResourceARN != nil && len(*s.ResourceARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 1)) + } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *LambdaOutput) SetResourceARN(v string) *LambdaOutput { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *LambdaOutput) SetRoleARN(v string) *LambdaOutput { + s.RoleARN = &v + return s +} + +// For an application output, describes the AWS Lambda function configured as +// its destination. +type LambdaOutputDescription struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the destination Lambda function. + ResourceARN *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to write to + // the destination function. 
+ RoleARN *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s LambdaOutputDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LambdaOutputDescription) GoString() string { + return s.String() +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *LambdaOutputDescription) SetResourceARN(v string) *LambdaOutputDescription { + s.ResourceARN = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *LambdaOutputDescription) SetRoleARN(v string) *LambdaOutputDescription { + s.RoleARN = &v + return s +} + +// When updating an output configuration using the UpdateApplication operation, +// provides information about an AWS Lambda function configured as the destination. +type LambdaOutputUpdate struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the destination Lambda function. + ResourceARNUpdate *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to write to + // the destination function on your behalf. You need to grant the necessary + // permissions to this role. + RoleARNUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s LambdaOutputUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LambdaOutputUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LambdaOutputUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LambdaOutputUpdate"} + if s.ResourceARNUpdate != nil && len(*s.ResourceARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARNUpdate", 1)) + } + if s.RoleARNUpdate != nil && len(*s.RoleARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARNUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARNUpdate sets the ResourceARNUpdate field's value. +func (s *LambdaOutputUpdate) SetResourceARNUpdate(v string) *LambdaOutputUpdate { + s.ResourceARNUpdate = &v + return s +} + +// SetRoleARNUpdate sets the RoleARNUpdate field's value. +func (s *LambdaOutputUpdate) SetRoleARNUpdate(v string) *LambdaOutputUpdate { + s.RoleARNUpdate = &v + return s +} + +type ListApplicationsInput struct { + _ struct{} `type:"structure"` + + // Name of the application to start the list with. When using pagination to + // retrieve the list, you don't need to specify this parameter in the first + // request. However, in subsequent requests, you add the last application name + // from the previous response to get the next page of applications. + ExclusiveStartApplicationName *string `min:"1" type:"string"` + + // Maximum number of applications to list. + Limit *int64 `min:"1" type:"integer"` +} + +// String returns the string representation +func (s ListApplicationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListApplicationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
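+//
+// Illustrative sketch, not part of the generated SDK: paging through
+// applications ten at a time, starting after a placeholder application name
+// taken from the previous page.
+//
+//    in := (&ListApplicationsInput{}).
+//        SetLimit(10).
+//        SetExclusiveStartApplicationName("example-app")
+//    if err := in.Validate(); err != nil {
+//        // handle the aggregated request.ErrInvalidParams
+//    }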
+func (s *ListApplicationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListApplicationsInput"} + if s.ExclusiveStartApplicationName != nil && len(*s.ExclusiveStartApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExclusiveStartApplicationName", 1)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExclusiveStartApplicationName sets the ExclusiveStartApplicationName field's value. +func (s *ListApplicationsInput) SetExclusiveStartApplicationName(v string) *ListApplicationsInput { + s.ExclusiveStartApplicationName = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *ListApplicationsInput) SetLimit(v int64) *ListApplicationsInput { + s.Limit = &v + return s +} + +type ListApplicationsOutput struct { + _ struct{} `type:"structure"` + + // List of ApplicationSummary objects. + // + // ApplicationSummaries is a required field + ApplicationSummaries []*ApplicationSummary `type:"list" required:"true"` + + // Returns true if there are more applications to retrieve. + // + // HasMoreApplications is a required field + HasMoreApplications *bool `type:"boolean" required:"true"` +} + +// String returns the string representation +func (s ListApplicationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListApplicationsOutput) GoString() string { + return s.String() +} + +// SetApplicationSummaries sets the ApplicationSummaries field's value. +func (s *ListApplicationsOutput) SetApplicationSummaries(v []*ApplicationSummary) *ListApplicationsOutput { + s.ApplicationSummaries = v + return s +} + +// SetHasMoreApplications sets the HasMoreApplications field's value. +func (s *ListApplicationsOutput) SetHasMoreApplications(v bool) *ListApplicationsOutput { + s.HasMoreApplications = &v + return s +} + +// When configuring application input at the time of creating or updating an +// application, provides additional mapping information specific to the record +// format (such as JSON, CSV, or record fields delimited by some delimiter) +// on the streaming source. +type MappingParameters struct { + _ struct{} `type:"structure"` + + // Provides additional mapping information when the record format uses delimiters + // (for example, CSV). + CSVMappingParameters *CSVMappingParameters `type:"structure"` + + // Provides additional mapping information when JSON is the record format on + // the streaming source. + JSONMappingParameters *JSONMappingParameters `type:"structure"` +} + +// String returns the string representation +func (s MappingParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MappingParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
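+//
+// Illustrative sketch, not part of the generated SDK: typically only the
+// mapping type that matches the record format is populated; "$" is a
+// placeholder record row path.
+//
+//    mp := (&MappingParameters{}).
+//        SetJSONMappingParameters((&JSONMappingParameters{}).SetRecordRowPath("$"))
+//    if err := mp.Validate(); err != nil {
+//        // handle the aggregated request.ErrInvalidParams
+//    }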
+func (s *MappingParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MappingParameters"} + if s.CSVMappingParameters != nil { + if err := s.CSVMappingParameters.Validate(); err != nil { + invalidParams.AddNested("CSVMappingParameters", err.(request.ErrInvalidParams)) + } + } + if s.JSONMappingParameters != nil { + if err := s.JSONMappingParameters.Validate(); err != nil { + invalidParams.AddNested("JSONMappingParameters", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCSVMappingParameters sets the CSVMappingParameters field's value. +func (s *MappingParameters) SetCSVMappingParameters(v *CSVMappingParameters) *MappingParameters { + s.CSVMappingParameters = v + return s +} + +// SetJSONMappingParameters sets the JSONMappingParameters field's value. +func (s *MappingParameters) SetJSONMappingParameters(v *JSONMappingParameters) *MappingParameters { + s.JSONMappingParameters = v + return s +} + +// Describes application output configuration in which you identify an in-application +// stream and a destination where you want the in-application stream data to +// be written. The destination can be an Amazon Kinesis stream or an Amazon +// Kinesis Firehose delivery stream. +// +// For limits on how many destinations an application can write and other limitations, +// see Limits (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/limits.html) +type Output struct { + _ struct{} `type:"structure"` + + // Describes the data format when records are written to the destination. For + // more information, see Configuring Application Output (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-output.html). + // + // DestinationSchema is a required field + DestinationSchema *DestinationSchema `type:"structure" required:"true"` + + // Identifies an Amazon Kinesis Firehose delivery stream as the destination. + KinesisFirehoseOutput *KinesisFirehoseOutput `type:"structure"` + + // Identifies an Amazon Kinesis stream as the destination. + KinesisStreamsOutput *KinesisStreamsOutput `type:"structure"` + + // Identifies an AWS Lambda function as the destination. + LambdaOutput *LambdaOutput `type:"structure"` + + // Name of the in-application stream. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s Output) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Output) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
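+//
+// Illustrative sketch, not part of the generated SDK: an output that writes a
+// placeholder in-application stream to a placeholder Kinesis stream. Because
+// DestinationSchema is required, the Validate call below still reports a
+// missing parameter until it is set as well.
+//
+//    out := (&Output{}).
+//        SetName("EXAMPLE_STREAM").
+//        SetKinesisStreamsOutput((&KinesisStreamsOutput{}).
+//            SetResourceARN("arn:aws:kinesis:us-east-1:123456789012:stream/example").
+//            SetRoleARN("arn:aws:iam::123456789012:role/example"))
+//    if err := out.Validate(); err != nil {
+//        // DestinationSchema has not been set, so err is non-nil here
+//    }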
+func (s *Output) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Output"} + if s.DestinationSchema == nil { + invalidParams.Add(request.NewErrParamRequired("DestinationSchema")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.KinesisFirehoseOutput != nil { + if err := s.KinesisFirehoseOutput.Validate(); err != nil { + invalidParams.AddNested("KinesisFirehoseOutput", err.(request.ErrInvalidParams)) + } + } + if s.KinesisStreamsOutput != nil { + if err := s.KinesisStreamsOutput.Validate(); err != nil { + invalidParams.AddNested("KinesisStreamsOutput", err.(request.ErrInvalidParams)) + } + } + if s.LambdaOutput != nil { + if err := s.LambdaOutput.Validate(); err != nil { + invalidParams.AddNested("LambdaOutput", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestinationSchema sets the DestinationSchema field's value. +func (s *Output) SetDestinationSchema(v *DestinationSchema) *Output { + s.DestinationSchema = v + return s +} + +// SetKinesisFirehoseOutput sets the KinesisFirehoseOutput field's value. +func (s *Output) SetKinesisFirehoseOutput(v *KinesisFirehoseOutput) *Output { + s.KinesisFirehoseOutput = v + return s +} + +// SetKinesisStreamsOutput sets the KinesisStreamsOutput field's value. +func (s *Output) SetKinesisStreamsOutput(v *KinesisStreamsOutput) *Output { + s.KinesisStreamsOutput = v + return s +} + +// SetLambdaOutput sets the LambdaOutput field's value. +func (s *Output) SetLambdaOutput(v *LambdaOutput) *Output { + s.LambdaOutput = v + return s +} + +// SetName sets the Name field's value. +func (s *Output) SetName(v string) *Output { + s.Name = &v + return s +} + +// Describes the application output configuration, which includes the in-application +// stream name and the destination where the stream data is written. The destination +// can be an Amazon Kinesis stream or an Amazon Kinesis Firehose delivery stream. +type OutputDescription struct { + _ struct{} `type:"structure"` + + // Data format used for writing data to the destination. + DestinationSchema *DestinationSchema `type:"structure"` + + // Describes the Amazon Kinesis Firehose delivery stream configured as the destination + // where output is written. + KinesisFirehoseOutputDescription *KinesisFirehoseOutputDescription `type:"structure"` + + // Describes Amazon Kinesis stream configured as the destination where output + // is written. + KinesisStreamsOutputDescription *KinesisStreamsOutputDescription `type:"structure"` + + // Describes the AWS Lambda function configured as the destination where output + // is written. + LambdaOutputDescription *LambdaOutputDescription `type:"structure"` + + // Name of the in-application stream configured as output. + Name *string `min:"1" type:"string"` + + // A unique identifier for the output configuration. + OutputId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s OutputDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputDescription) GoString() string { + return s.String() +} + +// SetDestinationSchema sets the DestinationSchema field's value. 
+func (s *OutputDescription) SetDestinationSchema(v *DestinationSchema) *OutputDescription { + s.DestinationSchema = v + return s +} + +// SetKinesisFirehoseOutputDescription sets the KinesisFirehoseOutputDescription field's value. +func (s *OutputDescription) SetKinesisFirehoseOutputDescription(v *KinesisFirehoseOutputDescription) *OutputDescription { + s.KinesisFirehoseOutputDescription = v + return s +} + +// SetKinesisStreamsOutputDescription sets the KinesisStreamsOutputDescription field's value. +func (s *OutputDescription) SetKinesisStreamsOutputDescription(v *KinesisStreamsOutputDescription) *OutputDescription { + s.KinesisStreamsOutputDescription = v + return s +} + +// SetLambdaOutputDescription sets the LambdaOutputDescription field's value. +func (s *OutputDescription) SetLambdaOutputDescription(v *LambdaOutputDescription) *OutputDescription { + s.LambdaOutputDescription = v + return s +} + +// SetName sets the Name field's value. +func (s *OutputDescription) SetName(v string) *OutputDescription { + s.Name = &v + return s +} + +// SetOutputId sets the OutputId field's value. +func (s *OutputDescription) SetOutputId(v string) *OutputDescription { + s.OutputId = &v + return s +} + +// Describes updates to the output configuration identified by the OutputId. +type OutputUpdate struct { + _ struct{} `type:"structure"` + + // Describes the data format when records are written to the destination. For + // more information, see Configuring Application Output (http://docs.aws.amazon.com/kinesisanalytics/latest/dev/how-it-works-output.html). + DestinationSchemaUpdate *DestinationSchema `type:"structure"` + + // Describes an Amazon Kinesis Firehose delivery stream as the destination for + // the output. + KinesisFirehoseOutputUpdate *KinesisFirehoseOutputUpdate `type:"structure"` + + // Describes an Amazon Kinesis stream as the destination for the output. + KinesisStreamsOutputUpdate *KinesisStreamsOutputUpdate `type:"structure"` + + // Describes an AWS Lambda function as the destination for the output. + LambdaOutputUpdate *LambdaOutputUpdate `type:"structure"` + + // If you want to specify a different in-application stream for this output + // configuration, use this field to specify the new in-application stream name. + NameUpdate *string `min:"1" type:"string"` + + // Identifies the specific output configuration that you want to update. + // + // OutputId is a required field + OutputId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s OutputUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *OutputUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputUpdate"} + if s.NameUpdate != nil && len(*s.NameUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NameUpdate", 1)) + } + if s.OutputId == nil { + invalidParams.Add(request.NewErrParamRequired("OutputId")) + } + if s.OutputId != nil && len(*s.OutputId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OutputId", 1)) + } + if s.KinesisFirehoseOutputUpdate != nil { + if err := s.KinesisFirehoseOutputUpdate.Validate(); err != nil { + invalidParams.AddNested("KinesisFirehoseOutputUpdate", err.(request.ErrInvalidParams)) + } + } + if s.KinesisStreamsOutputUpdate != nil { + if err := s.KinesisStreamsOutputUpdate.Validate(); err != nil { + invalidParams.AddNested("KinesisStreamsOutputUpdate", err.(request.ErrInvalidParams)) + } + } + if s.LambdaOutputUpdate != nil { + if err := s.LambdaOutputUpdate.Validate(); err != nil { + invalidParams.AddNested("LambdaOutputUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestinationSchemaUpdate sets the DestinationSchemaUpdate field's value. +func (s *OutputUpdate) SetDestinationSchemaUpdate(v *DestinationSchema) *OutputUpdate { + s.DestinationSchemaUpdate = v + return s +} + +// SetKinesisFirehoseOutputUpdate sets the KinesisFirehoseOutputUpdate field's value. +func (s *OutputUpdate) SetKinesisFirehoseOutputUpdate(v *KinesisFirehoseOutputUpdate) *OutputUpdate { + s.KinesisFirehoseOutputUpdate = v + return s +} + +// SetKinesisStreamsOutputUpdate sets the KinesisStreamsOutputUpdate field's value. +func (s *OutputUpdate) SetKinesisStreamsOutputUpdate(v *KinesisStreamsOutputUpdate) *OutputUpdate { + s.KinesisStreamsOutputUpdate = v + return s +} + +// SetLambdaOutputUpdate sets the LambdaOutputUpdate field's value. +func (s *OutputUpdate) SetLambdaOutputUpdate(v *LambdaOutputUpdate) *OutputUpdate { + s.LambdaOutputUpdate = v + return s +} + +// SetNameUpdate sets the NameUpdate field's value. +func (s *OutputUpdate) SetNameUpdate(v string) *OutputUpdate { + s.NameUpdate = &v + return s +} + +// SetOutputId sets the OutputId field's value. +func (s *OutputUpdate) SetOutputId(v string) *OutputUpdate { + s.OutputId = &v + return s +} + +// Describes the mapping of each data element in the streaming source to the +// corresponding column in the in-application stream. +// +// Also used to describe the format of the reference data source. +type RecordColumn struct { + _ struct{} `type:"structure"` + + // Reference to the data element in the streaming input of the reference data + // source. + Mapping *string `type:"string"` + + // Name of the column created in the in-application input stream or reference + // table. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // Type of column created in the in-application input stream or reference table. + // + // SqlType is a required field + SqlType *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s RecordColumn) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RecordColumn) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
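+//
+// Illustrative sketch, not part of the generated SDK: a column definition with
+// placeholder name, SQL type, and JSON mapping path.
+//
+//    col := (&RecordColumn{}).
+//        SetName("ticker_symbol").
+//        SetSqlType("VARCHAR(4)").
+//        SetMapping("$.TICKER_SYMBOL")
+//    if err := col.Validate(); err != nil {
+//        // handle the aggregated request.ErrInvalidParams
+//    }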
+func (s *RecordColumn) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RecordColumn"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.SqlType == nil { + invalidParams.Add(request.NewErrParamRequired("SqlType")) + } + if s.SqlType != nil && len(*s.SqlType) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SqlType", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMapping sets the Mapping field's value. +func (s *RecordColumn) SetMapping(v string) *RecordColumn { + s.Mapping = &v + return s +} + +// SetName sets the Name field's value. +func (s *RecordColumn) SetName(v string) *RecordColumn { + s.Name = &v + return s +} + +// SetSqlType sets the SqlType field's value. +func (s *RecordColumn) SetSqlType(v string) *RecordColumn { + s.SqlType = &v + return s +} + +// Describes the record format and relevant mapping information that should +// be applied to schematize the records on the stream. +type RecordFormat struct { + _ struct{} `type:"structure"` + + // When configuring application input at the time of creating or updating an + // application, provides additional mapping information specific to the record + // format (such as JSON, CSV, or record fields delimited by some delimiter) + // on the streaming source. + MappingParameters *MappingParameters `type:"structure"` + + // The type of record format. + // + // RecordFormatType is a required field + RecordFormatType *string `type:"string" required:"true" enum:"RecordFormatType"` +} + +// String returns the string representation +func (s RecordFormat) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RecordFormat) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RecordFormat) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RecordFormat"} + if s.RecordFormatType == nil { + invalidParams.Add(request.NewErrParamRequired("RecordFormatType")) + } + if s.MappingParameters != nil { + if err := s.MappingParameters.Validate(); err != nil { + invalidParams.AddNested("MappingParameters", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMappingParameters sets the MappingParameters field's value. +func (s *RecordFormat) SetMappingParameters(v *MappingParameters) *RecordFormat { + s.MappingParameters = v + return s +} + +// SetRecordFormatType sets the RecordFormatType field's value. +func (s *RecordFormat) SetRecordFormatType(v string) *RecordFormat { + s.RecordFormatType = &v + return s +} + +// Describes the reference data source by providing the source information (S3 +// bucket name and object key name), the resulting in-application table name +// that is created, and the necessary schema to map the data elements in the +// Amazon S3 object to the in-application table. +type ReferenceDataSource struct { + _ struct{} `type:"structure"` + + // Describes the format of the data in the streaming source, and how each data + // element maps to corresponding columns created in the in-application stream. + // + // ReferenceSchema is a required field + ReferenceSchema *SourceSchema `type:"structure" required:"true"` + + // Identifies the S3 bucket and object that contains the reference data. Also + // identifies the IAM role Amazon Kinesis Analytics can assume to read this + // object on your behalf. 
An Amazon Kinesis Analytics application loads reference + // data only once. If the data changes, you call the UpdateApplication operation + // to trigger reloading of data into your application. + S3ReferenceDataSource *S3ReferenceDataSource `type:"structure"` + + // Name of the in-application table to create. + // + // TableName is a required field + TableName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ReferenceDataSource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReferenceDataSource) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReferenceDataSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReferenceDataSource"} + if s.ReferenceSchema == nil { + invalidParams.Add(request.NewErrParamRequired("ReferenceSchema")) + } + if s.TableName == nil { + invalidParams.Add(request.NewErrParamRequired("TableName")) + } + if s.TableName != nil && len(*s.TableName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TableName", 1)) + } + if s.ReferenceSchema != nil { + if err := s.ReferenceSchema.Validate(); err != nil { + invalidParams.AddNested("ReferenceSchema", err.(request.ErrInvalidParams)) + } + } + if s.S3ReferenceDataSource != nil { + if err := s.S3ReferenceDataSource.Validate(); err != nil { + invalidParams.AddNested("S3ReferenceDataSource", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetReferenceSchema sets the ReferenceSchema field's value. +func (s *ReferenceDataSource) SetReferenceSchema(v *SourceSchema) *ReferenceDataSource { + s.ReferenceSchema = v + return s +} + +// SetS3ReferenceDataSource sets the S3ReferenceDataSource field's value. +func (s *ReferenceDataSource) SetS3ReferenceDataSource(v *S3ReferenceDataSource) *ReferenceDataSource { + s.S3ReferenceDataSource = v + return s +} + +// SetTableName sets the TableName field's value. +func (s *ReferenceDataSource) SetTableName(v string) *ReferenceDataSource { + s.TableName = &v + return s +} + +// Describes the reference data source configured for an application. +type ReferenceDataSourceDescription struct { + _ struct{} `type:"structure"` + + // ID of the reference data source. This is the ID that Amazon Kinesis Analytics + // assigns when you add the reference data source to your application using + // the AddApplicationReferenceDataSource operation. + // + // ReferenceId is a required field + ReferenceId *string `min:"1" type:"string" required:"true"` + + // Describes the format of the data in the streaming source, and how each data + // element maps to corresponding columns created in the in-application stream. + ReferenceSchema *SourceSchema `type:"structure"` + + // Provides the S3 bucket name, the object key name that contains the reference + // data. It also provides the Amazon Resource Name (ARN) of the IAM role that + // Amazon Kinesis Analytics can assume to read the Amazon S3 object and populate + // the in-application reference table. + // + // S3ReferenceDataSourceDescription is a required field + S3ReferenceDataSourceDescription *S3ReferenceDataSourceDescription `type:"structure" required:"true"` + + // The in-application table name created by the specific reference data source + // configuration. 
+ // + // TableName is a required field + TableName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ReferenceDataSourceDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReferenceDataSourceDescription) GoString() string { + return s.String() +} + +// SetReferenceId sets the ReferenceId field's value. +func (s *ReferenceDataSourceDescription) SetReferenceId(v string) *ReferenceDataSourceDescription { + s.ReferenceId = &v + return s +} + +// SetReferenceSchema sets the ReferenceSchema field's value. +func (s *ReferenceDataSourceDescription) SetReferenceSchema(v *SourceSchema) *ReferenceDataSourceDescription { + s.ReferenceSchema = v + return s +} + +// SetS3ReferenceDataSourceDescription sets the S3ReferenceDataSourceDescription field's value. +func (s *ReferenceDataSourceDescription) SetS3ReferenceDataSourceDescription(v *S3ReferenceDataSourceDescription) *ReferenceDataSourceDescription { + s.S3ReferenceDataSourceDescription = v + return s +} + +// SetTableName sets the TableName field's value. +func (s *ReferenceDataSourceDescription) SetTableName(v string) *ReferenceDataSourceDescription { + s.TableName = &v + return s +} + +// When you update a reference data source configuration for an application, +// this object provides all the updated values (such as the source bucket name +// and object key name), the in-application table name that is created, and +// updated mapping information that maps the data in the Amazon S3 object to +// the in-application reference table that is created. +type ReferenceDataSourceUpdate struct { + _ struct{} `type:"structure"` + + // ID of the reference data source being updated. You can use the DescribeApplication + // operation to get this value. + // + // ReferenceId is a required field + ReferenceId *string `min:"1" type:"string" required:"true"` + + // Describes the format of the data in the streaming source, and how each data + // element maps to corresponding columns created in the in-application stream. + ReferenceSchemaUpdate *SourceSchema `type:"structure"` + + // Describes the S3 bucket name, object key name, and IAM role that Amazon Kinesis + // Analytics can assume to read the Amazon S3 object on your behalf and populate + // the in-application reference table. + S3ReferenceDataSourceUpdate *S3ReferenceDataSourceUpdate `type:"structure"` + + // In-application table name that is created by this update. + TableNameUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ReferenceDataSourceUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReferenceDataSourceUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ReferenceDataSourceUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReferenceDataSourceUpdate"} + if s.ReferenceId == nil { + invalidParams.Add(request.NewErrParamRequired("ReferenceId")) + } + if s.ReferenceId != nil && len(*s.ReferenceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ReferenceId", 1)) + } + if s.TableNameUpdate != nil && len(*s.TableNameUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TableNameUpdate", 1)) + } + if s.ReferenceSchemaUpdate != nil { + if err := s.ReferenceSchemaUpdate.Validate(); err != nil { + invalidParams.AddNested("ReferenceSchemaUpdate", err.(request.ErrInvalidParams)) + } + } + if s.S3ReferenceDataSourceUpdate != nil { + if err := s.S3ReferenceDataSourceUpdate.Validate(); err != nil { + invalidParams.AddNested("S3ReferenceDataSourceUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetReferenceId sets the ReferenceId field's value. +func (s *ReferenceDataSourceUpdate) SetReferenceId(v string) *ReferenceDataSourceUpdate { + s.ReferenceId = &v + return s +} + +// SetReferenceSchemaUpdate sets the ReferenceSchemaUpdate field's value. +func (s *ReferenceDataSourceUpdate) SetReferenceSchemaUpdate(v *SourceSchema) *ReferenceDataSourceUpdate { + s.ReferenceSchemaUpdate = v + return s +} + +// SetS3ReferenceDataSourceUpdate sets the S3ReferenceDataSourceUpdate field's value. +func (s *ReferenceDataSourceUpdate) SetS3ReferenceDataSourceUpdate(v *S3ReferenceDataSourceUpdate) *ReferenceDataSourceUpdate { + s.S3ReferenceDataSourceUpdate = v + return s +} + +// SetTableNameUpdate sets the TableNameUpdate field's value. +func (s *ReferenceDataSourceUpdate) SetTableNameUpdate(v string) *ReferenceDataSourceUpdate { + s.TableNameUpdate = &v + return s +} + +// Provides a description of an Amazon S3 data source, including the Amazon +// Resource Name (ARN) of the S3 bucket, the ARN of the IAM role that is used +// to access the bucket, and the name of the S3 object that contains the data. +type S3Configuration struct { + _ struct{} `type:"structure"` + + // ARN of the S3 bucket that contains the data. + // + // BucketARN is a required field + BucketARN *string `min:"1" type:"string" required:"true"` + + // The name of the object that contains the data. + // + // FileKey is a required field + FileKey *string `min:"1" type:"string" required:"true"` + + // IAM ARN of the role used to access the data. + // + // RoleARN is a required field + RoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s S3Configuration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3Configuration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *S3Configuration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3Configuration"} + if s.BucketARN == nil { + invalidParams.Add(request.NewErrParamRequired("BucketARN")) + } + if s.BucketARN != nil && len(*s.BucketARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BucketARN", 1)) + } + if s.FileKey == nil { + invalidParams.Add(request.NewErrParamRequired("FileKey")) + } + if s.FileKey != nil && len(*s.FileKey) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FileKey", 1)) + } + if s.RoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("RoleARN")) + } + if s.RoleARN != nil && len(*s.RoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucketARN sets the BucketARN field's value. +func (s *S3Configuration) SetBucketARN(v string) *S3Configuration { + s.BucketARN = &v + return s +} + +// SetFileKey sets the FileKey field's value. +func (s *S3Configuration) SetFileKey(v string) *S3Configuration { + s.FileKey = &v + return s +} + +// SetRoleARN sets the RoleARN field's value. +func (s *S3Configuration) SetRoleARN(v string) *S3Configuration { + s.RoleARN = &v + return s +} + +// Identifies the S3 bucket and object that contains the reference data. Also +// identifies the IAM role Amazon Kinesis Analytics can assume to read this +// object on your behalf. +// +// An Amazon Kinesis Analytics application loads reference data only once. If +// the data changes, you call the UpdateApplication operation to trigger reloading +// of data into your application. +type S3ReferenceDataSource struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the S3 bucket. + // + // BucketARN is a required field + BucketARN *string `min:"1" type:"string" required:"true"` + + // Object key name containing reference data. + // + // FileKey is a required field + FileKey *string `min:"1" type:"string" required:"true"` + + // ARN of the IAM role that the service can assume to read data on your behalf. + // This role must have permission for the s3:GetObject action on the object + // and trust policy that allows Amazon Kinesis Analytics service principal to + // assume this role. + // + // ReferenceRoleARN is a required field + ReferenceRoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s S3ReferenceDataSource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3ReferenceDataSource) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *S3ReferenceDataSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3ReferenceDataSource"} + if s.BucketARN == nil { + invalidParams.Add(request.NewErrParamRequired("BucketARN")) + } + if s.BucketARN != nil && len(*s.BucketARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BucketARN", 1)) + } + if s.FileKey == nil { + invalidParams.Add(request.NewErrParamRequired("FileKey")) + } + if s.FileKey != nil && len(*s.FileKey) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FileKey", 1)) + } + if s.ReferenceRoleARN == nil { + invalidParams.Add(request.NewErrParamRequired("ReferenceRoleARN")) + } + if s.ReferenceRoleARN != nil && len(*s.ReferenceRoleARN) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ReferenceRoleARN", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucketARN sets the BucketARN field's value. +func (s *S3ReferenceDataSource) SetBucketARN(v string) *S3ReferenceDataSource { + s.BucketARN = &v + return s +} + +// SetFileKey sets the FileKey field's value. +func (s *S3ReferenceDataSource) SetFileKey(v string) *S3ReferenceDataSource { + s.FileKey = &v + return s +} + +// SetReferenceRoleARN sets the ReferenceRoleARN field's value. +func (s *S3ReferenceDataSource) SetReferenceRoleARN(v string) *S3ReferenceDataSource { + s.ReferenceRoleARN = &v + return s +} + +// Provides the bucket name and object key name that stores the reference data. +type S3ReferenceDataSourceDescription struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the S3 bucket. + // + // BucketARN is a required field + BucketARN *string `min:"1" type:"string" required:"true"` + + // Amazon S3 object key name. + // + // FileKey is a required field + FileKey *string `min:"1" type:"string" required:"true"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to read the + // Amazon S3 object on your behalf to populate the in-application reference + // table. + // + // ReferenceRoleARN is a required field + ReferenceRoleARN *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s S3ReferenceDataSourceDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3ReferenceDataSourceDescription) GoString() string { + return s.String() +} + +// SetBucketARN sets the BucketARN field's value. +func (s *S3ReferenceDataSourceDescription) SetBucketARN(v string) *S3ReferenceDataSourceDescription { + s.BucketARN = &v + return s +} + +// SetFileKey sets the FileKey field's value. +func (s *S3ReferenceDataSourceDescription) SetFileKey(v string) *S3ReferenceDataSourceDescription { + s.FileKey = &v + return s +} + +// SetReferenceRoleARN sets the ReferenceRoleARN field's value. +func (s *S3ReferenceDataSourceDescription) SetReferenceRoleARN(v string) *S3ReferenceDataSourceDescription { + s.ReferenceRoleARN = &v + return s +} + +// Describes the S3 bucket name, object key name, and IAM role that Amazon Kinesis +// Analytics can assume to read the Amazon S3 object on your behalf and populate +// the in-application reference table. +type S3ReferenceDataSourceUpdate struct { + _ struct{} `type:"structure"` + + // Amazon Resource Name (ARN) of the S3 bucket. + BucketARNUpdate *string `min:"1" type:"string"` + + // Object key name. 
+ FileKeyUpdate *string `min:"1" type:"string"` + + // ARN of the IAM role that Amazon Kinesis Analytics can assume to read the + // Amazon S3 object and populate the in-application. + ReferenceRoleARNUpdate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s S3ReferenceDataSourceUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3ReferenceDataSourceUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *S3ReferenceDataSourceUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3ReferenceDataSourceUpdate"} + if s.BucketARNUpdate != nil && len(*s.BucketARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BucketARNUpdate", 1)) + } + if s.FileKeyUpdate != nil && len(*s.FileKeyUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FileKeyUpdate", 1)) + } + if s.ReferenceRoleARNUpdate != nil && len(*s.ReferenceRoleARNUpdate) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ReferenceRoleARNUpdate", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucketARNUpdate sets the BucketARNUpdate field's value. +func (s *S3ReferenceDataSourceUpdate) SetBucketARNUpdate(v string) *S3ReferenceDataSourceUpdate { + s.BucketARNUpdate = &v + return s +} + +// SetFileKeyUpdate sets the FileKeyUpdate field's value. +func (s *S3ReferenceDataSourceUpdate) SetFileKeyUpdate(v string) *S3ReferenceDataSourceUpdate { + s.FileKeyUpdate = &v + return s +} + +// SetReferenceRoleARNUpdate sets the ReferenceRoleARNUpdate field's value. +func (s *S3ReferenceDataSourceUpdate) SetReferenceRoleARNUpdate(v string) *S3ReferenceDataSourceUpdate { + s.ReferenceRoleARNUpdate = &v + return s +} + +// Describes the format of the data in the streaming source, and how each data +// element maps to corresponding columns created in the in-application stream. +type SourceSchema struct { + _ struct{} `type:"structure"` + + // A list of RecordColumn objects. + // + // RecordColumns is a required field + RecordColumns []*RecordColumn `min:"1" type:"list" required:"true"` + + // Specifies the encoding of the records in the streaming source. For example, + // UTF-8. + RecordEncoding *string `type:"string"` + + // Specifies the format of the records on the streaming source. + // + // RecordFormat is a required field + RecordFormat *RecordFormat `type:"structure" required:"true"` +} + +// String returns the string representation +func (s SourceSchema) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SourceSchema) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *SourceSchema) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SourceSchema"} + if s.RecordColumns == nil { + invalidParams.Add(request.NewErrParamRequired("RecordColumns")) + } + if s.RecordColumns != nil && len(s.RecordColumns) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RecordColumns", 1)) + } + if s.RecordFormat == nil { + invalidParams.Add(request.NewErrParamRequired("RecordFormat")) + } + if s.RecordColumns != nil { + for i, v := range s.RecordColumns { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RecordColumns", i), err.(request.ErrInvalidParams)) + } + } + } + if s.RecordFormat != nil { + if err := s.RecordFormat.Validate(); err != nil { + invalidParams.AddNested("RecordFormat", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRecordColumns sets the RecordColumns field's value. +func (s *SourceSchema) SetRecordColumns(v []*RecordColumn) *SourceSchema { + s.RecordColumns = v + return s +} + +// SetRecordEncoding sets the RecordEncoding field's value. +func (s *SourceSchema) SetRecordEncoding(v string) *SourceSchema { + s.RecordEncoding = &v + return s +} + +// SetRecordFormat sets the RecordFormat field's value. +func (s *SourceSchema) SetRecordFormat(v *RecordFormat) *SourceSchema { + s.RecordFormat = v + return s +} + +type StartApplicationInput struct { + _ struct{} `type:"structure"` + + // Name of the application. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Identifies the specific input, by ID, that the application starts consuming. + // Amazon Kinesis Analytics starts reading the streaming source associated with + // the input. You can also specify where in the streaming source you want Amazon + // Kinesis Analytics to start reading. + // + // InputConfigurations is a required field + InputConfigurations []*InputConfiguration `type:"list" required:"true"` +} + +// String returns the string representation +func (s StartApplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartApplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StartApplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartApplicationInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.InputConfigurations == nil { + invalidParams.Add(request.NewErrParamRequired("InputConfigurations")) + } + if s.InputConfigurations != nil { + for i, v := range s.InputConfigurations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InputConfigurations", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *StartApplicationInput) SetApplicationName(v string) *StartApplicationInput { + s.ApplicationName = &v + return s +} + +// SetInputConfigurations sets the InputConfigurations field's value. 
+func (s *StartApplicationInput) SetInputConfigurations(v []*InputConfiguration) *StartApplicationInput { + s.InputConfigurations = v + return s +} + +type StartApplicationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s StartApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartApplicationOutput) GoString() string { + return s.String() +} + +type StopApplicationInput struct { + _ struct{} `type:"structure"` + + // Name of the running application to stop. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s StopApplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopApplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StopApplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopApplicationInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *StopApplicationInput) SetApplicationName(v string) *StopApplicationInput { + s.ApplicationName = &v + return s +} + +type StopApplicationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s StopApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopApplicationOutput) GoString() string { + return s.String() +} + +type UpdateApplicationInput struct { + _ struct{} `type:"structure"` + + // Name of the Amazon Kinesis Analytics application to update. + // + // ApplicationName is a required field + ApplicationName *string `min:"1" type:"string" required:"true"` + + // Describes application updates. + // + // ApplicationUpdate is a required field + ApplicationUpdate *ApplicationUpdate `type:"structure" required:"true"` + + // The current application version ID. You can use the DescribeApplication operation + // to get this value. + // + // CurrentApplicationVersionId is a required field + CurrentApplicationVersionId *int64 `min:"1" type:"long" required:"true"` +} + +// String returns the string representation +func (s UpdateApplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateApplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateApplicationInput"} + if s.ApplicationName == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationName")) + } + if s.ApplicationName != nil && len(*s.ApplicationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ApplicationName", 1)) + } + if s.ApplicationUpdate == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationUpdate")) + } + if s.CurrentApplicationVersionId == nil { + invalidParams.Add(request.NewErrParamRequired("CurrentApplicationVersionId")) + } + if s.CurrentApplicationVersionId != nil && *s.CurrentApplicationVersionId < 1 { + invalidParams.Add(request.NewErrParamMinValue("CurrentApplicationVersionId", 1)) + } + if s.ApplicationUpdate != nil { + if err := s.ApplicationUpdate.Validate(); err != nil { + invalidParams.AddNested("ApplicationUpdate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationName sets the ApplicationName field's value. +func (s *UpdateApplicationInput) SetApplicationName(v string) *UpdateApplicationInput { + s.ApplicationName = &v + return s +} + +// SetApplicationUpdate sets the ApplicationUpdate field's value. +func (s *UpdateApplicationInput) SetApplicationUpdate(v *ApplicationUpdate) *UpdateApplicationInput { + s.ApplicationUpdate = v + return s +} + +// SetCurrentApplicationVersionId sets the CurrentApplicationVersionId field's value. +func (s *UpdateApplicationInput) SetCurrentApplicationVersionId(v int64) *UpdateApplicationInput { + s.CurrentApplicationVersionId = &v + return s +} + +type UpdateApplicationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApplicationOutput) GoString() string { + return s.String() +} + +const ( + // ApplicationStatusDeleting is a ApplicationStatus enum value + ApplicationStatusDeleting = "DELETING" + + // ApplicationStatusStarting is a ApplicationStatus enum value + ApplicationStatusStarting = "STARTING" + + // ApplicationStatusStopping is a ApplicationStatus enum value + ApplicationStatusStopping = "STOPPING" + + // ApplicationStatusReady is a ApplicationStatus enum value + ApplicationStatusReady = "READY" + + // ApplicationStatusRunning is a ApplicationStatus enum value + ApplicationStatusRunning = "RUNNING" + + // ApplicationStatusUpdating is a ApplicationStatus enum value + ApplicationStatusUpdating = "UPDATING" +) + +const ( + // InputStartingPositionNow is a InputStartingPosition enum value + InputStartingPositionNow = "NOW" + + // InputStartingPositionTrimHorizon is a InputStartingPosition enum value + InputStartingPositionTrimHorizon = "TRIM_HORIZON" + + // InputStartingPositionLastStoppedPoint is a InputStartingPosition enum value + InputStartingPositionLastStoppedPoint = "LAST_STOPPED_POINT" +) + +const ( + // RecordFormatTypeJson is a RecordFormatType enum value + RecordFormatTypeJson = "JSON" + + // RecordFormatTypeCsv is a RecordFormatType enum value + RecordFormatTypeCsv = "CSV" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/doc.go b/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/doc.go new file mode 100644 index 00000000000..8e2c7811337 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/doc.go @@ 
-0,0 +1,26 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package kinesisanalytics provides the client and types for making API +// requests to Amazon Kinesis Analytics. +// +// See https://docs.aws.amazon.com/goto/WebAPI/kinesisanalytics-2015-08-14 for more information on this service. +// +// See kinesisanalytics package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/kinesisanalytics/ +// +// Using the Client +// +// To contact Amazon Kinesis Analytics with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon Kinesis Analytics client KinesisAnalytics for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/kinesisanalytics/#New +package kinesisanalytics diff --git a/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/errors.go b/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/errors.go new file mode 100644 index 00000000000..75b5581b5a8 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/errors.go @@ -0,0 +1,73 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package kinesisanalytics + +const ( + + // ErrCodeCodeValidationException for service response error code + // "CodeValidationException". + // + // User-provided application code (query) is invalid. This can be a simple syntax + // error. + ErrCodeCodeValidationException = "CodeValidationException" + + // ErrCodeConcurrentModificationException for service response error code + // "ConcurrentModificationException". + // + // Exception thrown as a result of concurrent modification to an application. + // For example, two individuals attempting to edit the same application at the + // same time. + ErrCodeConcurrentModificationException = "ConcurrentModificationException" + + // ErrCodeInvalidApplicationConfigurationException for service response error code + // "InvalidApplicationConfigurationException". + // + // User-provided application configuration is not valid. + ErrCodeInvalidApplicationConfigurationException = "InvalidApplicationConfigurationException" + + // ErrCodeInvalidArgumentException for service response error code + // "InvalidArgumentException". + // + // Specified input parameter value is invalid. + ErrCodeInvalidArgumentException = "InvalidArgumentException" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceededException". + // + // Exceeded the number of applications allowed. + ErrCodeLimitExceededException = "LimitExceededException" + + // ErrCodeResourceInUseException for service response error code + // "ResourceInUseException". + // + // Application is not available for this operation. + ErrCodeResourceInUseException = "ResourceInUseException" + + // ErrCodeResourceNotFoundException for service response error code + // "ResourceNotFoundException". + // + // Specified application can't be found. 
+ ErrCodeResourceNotFoundException = "ResourceNotFoundException" + + // ErrCodeResourceProvisionedThroughputExceededException for service response error code + // "ResourceProvisionedThroughputExceededException". + // + // Discovery failed to get a record from the streaming source because of the + // Amazon Kinesis Streams ProvisionedThroughputExceededException. For more information, + // see GetRecords (http://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetRecords.html) + // in the Amazon Kinesis Streams API Reference. + ErrCodeResourceProvisionedThroughputExceededException = "ResourceProvisionedThroughputExceededException" + + // ErrCodeServiceUnavailableException for service response error code + // "ServiceUnavailableException". + // + // The service is unavailable, back off and retry the operation. + ErrCodeServiceUnavailableException = "ServiceUnavailableException" + + // ErrCodeUnableToDetectSchemaException for service response error code + // "UnableToDetectSchemaException". + // + // Data format is not valid, Amazon Kinesis Analytics is not able to detect + // schema for the given streaming source. + ErrCodeUnableToDetectSchemaException = "UnableToDetectSchemaException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/service.go b/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/service.go new file mode 100644 index 00000000000..153daad6eba --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/kinesisanalytics/service.go @@ -0,0 +1,97 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package kinesisanalytics + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// KinesisAnalytics provides the API operation methods for making requests to +// Amazon Kinesis Analytics. See this package's package overview docs +// for details on the service. +// +// KinesisAnalytics methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type KinesisAnalytics struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "kinesisanalytics" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Kinesis Analytics" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the KinesisAnalytics client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a KinesisAnalytics client from just a session. +// svc := kinesisanalytics.New(mySession) +// +// // Create a KinesisAnalytics client with additional configuration +// svc := kinesisanalytics.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *KinesisAnalytics { + c := p.ClientConfig(EndpointsID, cfgs...) 
+ return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *KinesisAnalytics { + svc := &KinesisAnalytics{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2015-08-14", + JSONVersion: "1.1", + TargetPrefix: "KinesisAnalytics_20150814", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a KinesisAnalytics operation and runs any +// custom request initialization. +func (c *KinesisAnalytics) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/kms/api.go b/vendor/github.com/aws/aws-sdk-go/service/kms/api.go index e67cf240d47..540480c7c9f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/kms/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/kms/api.go @@ -17,8 +17,8 @@ const opCancelKeyDeletion = "CancelKeyDeletion" // CancelKeyDeletionRequest generates a "aws/request.Request" representing the // client's request for the CancelKeyDeletion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -66,6 +66,10 @@ func (c *KMS) CancelKeyDeletionRequest(input *CancelKeyDeletionInput) (req *requ // Deleting Customer Master Keys (http://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html) // in the AWS Key Management Service Developer Guide. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -85,11 +89,11 @@ func (c *KMS) CancelKeyDeletionRequest(input *CancelKeyDeletionInput) (req *requ // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. 
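
The `kinesisanalytics` files vendored above are generated plumbing: every request/response structure exposes chained `Set*` helpers plus a client-side `Validate()` that mirrors the API's required-field and minimum-length tags, and `service.go` builds a client from a session via `New`. A rough sketch of how those pieces are typically combined — the region, application name, table name, ARNs and object key are placeholders, and the `AddApplicationReferenceDataSource` call is the operation the generated comments refer to, not something introduced by this change:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kinesisanalytics"
)

func main() {
	// Build a client from a shared session, as described in doc.go/service.go.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := kinesisanalytics.New(sess)

	// Compose a reference data source with the generated chained setters.
	refSource := (&kinesisanalytics.ReferenceDataSource{}).
		SetTableName("exchange_rates").
		SetReferenceSchema((&kinesisanalytics.SourceSchema{}).
			SetRecordFormat((&kinesisanalytics.RecordFormat{}).
				SetRecordFormatType(kinesisanalytics.RecordFormatTypeCsv)).
			SetRecordColumns([]*kinesisanalytics.RecordColumn{
				(&kinesisanalytics.RecordColumn{}).SetName("currency").SetSqlType("VARCHAR(8)"),
				(&kinesisanalytics.RecordColumn{}).SetName("rate").SetSqlType("DOUBLE"),
			})).
		SetS3ReferenceDataSource((&kinesisanalytics.S3ReferenceDataSource{}).
			SetBucketARN("arn:aws:s3:::my-reference-bucket").
			SetFileKey("rates/latest.csv").
			SetReferenceRoleARN("arn:aws:iam::123456789012:role/kinesis-analytics-read"))

	// Validate() enforces the same required-field and min-length constraints
	// declared in the struct tags, before any request is sent.
	if err := refSource.Validate(); err != nil {
		log.Fatalf("invalid reference data source: %v", err)
	}

	// Attach the reference data to an existing application (name and version
	// are placeholders); the SDK validates the input again inside the call.
	out, err := svc.AddApplicationReferenceDataSource(&kinesisanalytics.AddApplicationReferenceDataSourceInput{
		ApplicationName:             aws.String("my-analytics-app"),
		CurrentApplicationVersionId: aws.Int64(1),
		ReferenceDataSource:         refSource,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("reference data source added:", out)
}
```

Nothing here is specific to this pull request; it only illustrates the setter/`Validate` pattern that the generated types above follow.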
// -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -123,8 +127,8 @@ const opCreateAlias = "CreateAlias" // CreateAliasRequest generates a "aws/request.Request" representing the // client's request for the CreateAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -165,8 +169,9 @@ func (c *KMS) CreateAliasRequest(input *CreateAliasInput) (req *request.Request, // CreateAlias API operation for AWS Key Management Service. // -// Creates a display name for a customer master key (CMK). You can use an alias -// to identify a CMK in selected operations, such as Encrypt and GenerateDataKey. +// Creates a display name for a customer-managed customer master key (CMK). +// You can use an alias to identify a CMK in selected operations, such as Encrypt +// and GenerateDataKey. // // Each CMK can have multiple aliases, but each alias points to only one CMK. // The alias name must be unique in the AWS account and region. To simplify @@ -178,10 +183,9 @@ func (c *KMS) CreateAliasRequest(input *CreateAliasInput) (req *request.Request, // the response from the DescribeKey operation. To get the aliases of all CMKs, // use the ListAliases operation. // -// An alias must start with the word alias followed by a forward slash (alias/). // The alias name can contain only alphanumeric characters, forward slashes -// (/), underscores (_), and dashes (-). Alias names cannot begin with aws; -// that alias name prefix is reserved by Amazon Web Services (AWS). +// (/), underscores (_), and dashes (-). Alias names cannot begin with aws/. +// That alias name prefix is reserved for AWS managed CMKs. // // The alias and the CMK it is mapped to must be in the same AWS account and // the same region. You cannot perform this operation on an alias in a different @@ -189,6 +193,10 @@ func (c *KMS) CreateAliasRequest(input *CreateAliasInput) (req *request.Request, // // To map an existing alias to a different CMK, call UpdateAlias. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -212,7 +220,7 @@ func (c *KMS) CreateAliasRequest(input *CreateAliasInput) (req *request.Request, // * ErrCodeInvalidAliasNameException "InvalidAliasNameException" // The request was rejected because the specified alias name is not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. 
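
The CreateAlias doc update above tightens the naming guidance: only alphanumerics, `/`, `_` and `-` are allowed, and the `aws/` prefix is reserved for AWS managed CMKs. A small hedged sketch of creating a compliant alias for an existing CMK (the key ID and alias name are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := kms.New(sess)

	// Alias names are passed with the "alias/" prefix, may not begin with
	// "alias/aws/", and must live in the same account and region as the CMK.
	_, err := svc.CreateAlias(&kms.CreateAliasInput{
		AliasName:   aws.String("alias/my-app-signing-key"),
		TargetKeyId: aws.String("1234abcd-12ab-34cd-56ef-1234567890ab"), // placeholder key ID
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("alias created; repoint it later with UpdateAlias if needed")
}
```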
// @@ -221,7 +229,7 @@ func (c *KMS) CreateAliasRequest(input *CreateAliasInput) (req *request.Request, // see Limits (http://docs.aws.amazon.com/kms/latest/developerguide/limits.html) // in the AWS Key Management Service Developer Guide. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -255,8 +263,8 @@ const opCreateGrant = "CreateGrant" // CreateGrantRequest generates a "aws/request.Request" representing the // client's request for the CreateGrant operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -304,6 +312,10 @@ func (c *KMS) CreateGrantRequest(input *CreateGrantInput) (req *request.Request, // see Grants (http://docs.aws.amazon.com/kms/latest/developerguide/grants.html) // in the AWS Key Management Service Developer Guide. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -326,7 +338,7 @@ func (c *KMS) CreateGrantRequest(input *CreateGrantInput) (req *request.Request, // * ErrCodeInvalidArnException "InvalidArnException" // The request was rejected because a specified ARN was not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -338,7 +350,7 @@ func (c *KMS) CreateGrantRequest(input *CreateGrantInput) (req *request.Request, // see Limits (http://docs.aws.amazon.com/kms/latest/developerguide/limits.html) // in the AWS Key Management Service Developer Guide. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -372,8 +384,8 @@ const opCreateKey = "CreateKey" // CreateKeyRequest generates a "aws/request.Request" representing the // client's request for the CreateKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -414,8 +426,8 @@ func (c *KMS) CreateKeyRequest(input *CreateKeyInput) (req *request.Request, out // // Creates a customer master key (CMK) in the caller's AWS account. 
// -// You can use a CMK to encrypt small amounts of data (4 KiB or less) directly, -// but CMKs are more commonly used to encrypt data encryption keys (DEKs), which +// You can use a CMK to encrypt small amounts of data (4 KiB or less) directly. +// But CMKs are more commonly used to encrypt data encryption keys (DEKs), which // are used to encrypt raw data. For more information about DEKs and the difference // between CMKs and DEKs, see the following: // @@ -449,7 +461,7 @@ func (c *KMS) CreateKeyRequest(input *CreateKeyInput) (req *request.Request, out // The request was rejected because a specified parameter is not supported or // a specified resource is not valid for this operation. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -487,8 +499,8 @@ const opDecrypt = "Decrypt" // DecryptRequest generates a "aws/request.Request" representing the // client's request for the Decrypt operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -536,14 +548,17 @@ func (c *KMS) DecryptRequest(input *DecryptInput) (req *request.Request, output // // * Encrypt // -// Note that if a caller has been granted access permissions to all keys (through, -// for example, IAM user policies that grant Decrypt permission on all resources), -// then ciphertext encrypted by using keys in other accounts where the key grants -// access to the caller can be decrypted. To remedy this, we recommend that -// you do not grant Decrypt access in an IAM user policy. Instead grant Decrypt -// access only in key policies. If you must grant Decrypt access in an IAM user -// policy, you should scope the resource to specific keys or to specific trusted -// accounts. +// Whenever possible, use key policies to give users permission to call the +// Decrypt operation on the CMK, instead of IAM policies. Otherwise, you might +// create an IAM user policy that gives the user Decrypt permission on all CMKs. +// This user could decrypt ciphertext that was encrypted by CMKs in other accounts +// if the key policy for the cross-account CMK permits it. If you must use an +// IAM policy for Decrypt permissions, limit the user to particular CMKs or +// particular trusted accounts. +// +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -576,11 +591,11 @@ func (c *KMS) DecryptRequest(input *DecryptInput) (req *request.Request, output // * ErrCodeInvalidGrantTokenException "InvalidGrantTokenException" // The request was rejected because the specified grant token is not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. 
The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -614,8 +629,8 @@ const opDeleteAlias = "DeleteAlias" // DeleteAliasRequest generates a "aws/request.Request" representing the // client's request for the DeleteAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -684,11 +699,11 @@ func (c *KMS) DeleteAliasRequest(input *DeleteAliasInput) (req *request.Request, // The request was rejected because the specified entity or resource could not // be found. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -722,8 +737,8 @@ const opDeleteImportedKeyMaterial = "DeleteImportedKeyMaterial" // DeleteImportedKeyMaterialRequest generates a "aws/request.Request" representing the // client's request for the DeleteImportedKeyMaterial operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -776,6 +791,10 @@ func (c *KMS) DeleteImportedKeyMaterialRequest(input *DeleteImportedKeyMaterialI // After you delete key material, you can use ImportKeyMaterial to reimport // the same key material into the CMK. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -799,11 +818,11 @@ func (c *KMS) DeleteImportedKeyMaterialRequest(input *DeleteImportedKeyMaterialI // The request was rejected because the specified entity or resource could not // be found. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. 
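
Throughout these regenerated docs, service and SDK failures are surfaced as `awserr.Error` values, and the renamed codes above (for example `KMSInternalException`, `KMSInvalidStateException`) are still matched through the package's `ErrCode*` constants. A minimal sketch of the type-assertion pattern those comments describe — region, credentials and the key alias are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := kms.New(sess)

	// Describe a key that may not exist; the alias below is a placeholder.
	_, err := svc.DescribeKey(&kms.DescribeKeyInput{KeyId: aws.String("alias/does-not-exist")})
	if err != nil {
		// Switch on Code() to separate retryable conditions from permanent ones.
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case kms.ErrCodeNotFoundException:
				fmt.Println("no such CMK:", aerr.Message())
			case kms.ErrCodeInternalException, kms.ErrCodeDependencyTimeoutException:
				fmt.Println("transient failure, safe to retry:", aerr.Message())
			default:
				fmt.Println("unhandled KMS error:", aerr.Code(), aerr.Message())
			}
			return
		}
		log.Fatal(err)
	}
	fmt.Println("key exists")
}
```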
// @@ -837,8 +856,8 @@ const opDescribeKey = "DescribeKey" // DescribeKeyRequest generates a "aws/request.Request" representing the // client's request for the DescribeKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -879,6 +898,11 @@ func (c *KMS) DescribeKeyRequest(input *DescribeKeyInput) (req *request.Request, // // Provides detailed information about the specified customer master key (CMK). // +// You can use DescribeKey on a predefined AWS alias, that is, an AWS alias +// with no key ID. When you do, AWS KMS associates the alias with an AWS managed +// CMK (http://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys) +// and returns its KeyId and Arn in the response. +// // To perform this operation on a CMK in a different AWS account, specify the // key ARN or alias ARN in the value of the KeyId parameter. // @@ -901,7 +925,7 @@ func (c *KMS) DescribeKeyRequest(input *DescribeKeyInput) (req *request.Request, // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -931,8 +955,8 @@ const opDisableKey = "DisableKey" // DisableKeyRequest generates a "aws/request.Request" representing the // client's request for the DisableKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -981,6 +1005,10 @@ func (c *KMS) DisableKeyRequest(input *DisableKeyInput) (req *request.Request, o // Key State Affects the Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) // in the AWS Key Management Service Developer Guide. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1000,11 +1028,11 @@ func (c *KMS) DisableKeyRequest(input *DisableKeyInput) (req *request.Request, o // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. 
// @@ -1038,8 +1066,8 @@ const opDisableKeyRotation = "DisableKeyRotation" // DisableKeyRotationRequest generates a "aws/request.Request" representing the // client's request for the DisableKeyRotation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1080,9 +1108,13 @@ func (c *KMS) DisableKeyRotationRequest(input *DisableKeyRotationInput) (req *re // DisableKeyRotation API operation for AWS Key Management Service. // -// Disables automatic rotation of the key material for the specified customer -// master key (CMK). You cannot perform this operation on a CMK in a different -// AWS account. +// Disables automatic rotation of the key material (http://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html) +// for the specified customer master key (CMK). You cannot perform this operation +// on a CMK in a different AWS account. +// +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1106,11 +1138,11 @@ func (c *KMS) DisableKeyRotationRequest(input *DisableKeyRotationInput) (req *re // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -1148,8 +1180,8 @@ const opEnableKey = "EnableKey" // EnableKeyRequest generates a "aws/request.Request" representing the // client's request for the EnableKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1194,6 +1226,10 @@ func (c *KMS) EnableKeyRequest(input *EnableKeyInput) (req *request.Request, out // its use for cryptographic operations. You cannot perform this operation on // a CMK in a different AWS account. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
@@ -1213,7 +1249,7 @@ func (c *KMS) EnableKeyRequest(input *EnableKeyInput) (req *request.Request, out // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -1222,7 +1258,7 @@ func (c *KMS) EnableKeyRequest(input *EnableKeyInput) (req *request.Request, out // see Limits (http://docs.aws.amazon.com/kms/latest/developerguide/limits.html) // in the AWS Key Management Service Developer Guide. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -1256,8 +1292,8 @@ const opEnableKeyRotation = "EnableKeyRotation" // EnableKeyRotationRequest generates a "aws/request.Request" representing the // client's request for the EnableKeyRotation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1298,9 +1334,13 @@ func (c *KMS) EnableKeyRotationRequest(input *EnableKeyRotationInput) (req *requ // EnableKeyRotation API operation for AWS Key Management Service. // -// Enables automatic rotation of the key material for the specified customer -// master key (CMK). You cannot perform this operation on a CMK in a different -// AWS account. +// Enables automatic rotation of the key material (http://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html) +// for the specified customer master key (CMK). You cannot perform this operation +// on a CMK in a different AWS account. +// +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1324,11 +1364,11 @@ func (c *KMS) EnableKeyRotationRequest(input *EnableKeyRotationInput) (req *requ // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -1366,8 +1406,8 @@ const opEncrypt = "Encrypt" // EncryptRequest generates a "aws/request.Request" representing the // client's request for the Encrypt operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1412,23 +1452,27 @@ func (c *KMS) EncryptRequest(input *EncryptInput) (req *request.Request, output // * You can encrypt up to 4 kilobytes (4096 bytes) of arbitrary data such // as an RSA key, a database password, or other sensitive information. // -// * To move encrypted data from one AWS region to another, you can use this -// operation to encrypt in the new region the plaintext data key that was -// used to encrypt the data in the original region. This provides you with -// an encrypted copy of the data key that can be decrypted in the new region -// and used there to decrypt the encrypted data. +// * You can use the Encrypt operation to move encrypted data from one AWS +// region to another. In the first region, generate a data key and use the +// plaintext key to encrypt the data. Then, in the new region, call the Encrypt +// method on same plaintext data key. Now, you can safely move the encrypted +// data and encrypted data key to the new region, and decrypt in the new +// region when necessary. // -// To perform this operation on a CMK in a different AWS account, specify the -// key ARN or alias ARN in the value of the KeyId parameter. +// You don't need use this operation to encrypt a data key within a region. +// The GenerateDataKey and GenerateDataKeyWithoutPlaintext operations return +// an encrypted data key. // -// Unless you are moving encrypted data from one region to another, you don't -// use this operation to encrypt a generated data key within a region. To get -// data keys that are already encrypted, call the GenerateDataKey or GenerateDataKeyWithoutPlaintext -// operation. Data keys don't need to be encrypted again by calling Encrypt. +// Also, you don't need to use this operation to encrypt data in your application. +// You can use the plaintext and encrypted data keys that the GenerateDataKey +// operation returns. // -// To encrypt data locally in your application, use the GenerateDataKey operation -// to return a plaintext data encryption key and a copy of the key encrypted -// under the CMK of your choosing. +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// +// To perform this operation on a CMK in a different AWS account, specify the +// key ARN or alias ARN in the value of the KeyId parameter. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1459,11 +1503,11 @@ func (c *KMS) EncryptRequest(input *EncryptInput) (req *request.Request, output // * ErrCodeInvalidGrantTokenException "InvalidGrantTokenException" // The request was rejected because the specified grant token is not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. 
// @@ -1497,8 +1541,8 @@ const opGenerateDataKey = "GenerateDataKey" // GenerateDataKeyRequest generates a "aws/request.Request" representing the // client's request for the GenerateDataKey operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1584,6 +1628,10 @@ func (c *KMS) GenerateDataKeyRequest(input *GenerateDataKeyInput) (req *request. // Context (http://docs.aws.amazon.com/kms/latest/developerguide/encryption-context.html) // in the AWS Key Management Service Developer Guide. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1613,11 +1661,11 @@ func (c *KMS) GenerateDataKeyRequest(input *GenerateDataKeyInput) (req *request. // * ErrCodeInvalidGrantTokenException "InvalidGrantTokenException" // The request was rejected because the specified grant token is not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -1651,8 +1699,8 @@ const opGenerateDataKeyWithoutPlaintext = "GenerateDataKeyWithoutPlaintext" // GenerateDataKeyWithoutPlaintextRequest generates a "aws/request.Request" representing the // client's request for the GenerateDataKeyWithoutPlaintext operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1706,9 +1754,14 @@ func (c *KMS) GenerateDataKeyWithoutPlaintextRequest(input *GenerateDataKeyWitho // (GenerateDataKeyWithoutPlaintext) to get an encrypted data key and then stores // it in the container. Later, a different component of the system, called the // data plane, puts encrypted data into the containers. To do this, it passes -// the encrypted data key to the Decrypt operation, then uses the returned plaintext -// data key to encrypt data, and finally stores the encrypted data in the container. -// In this system, the control plane never sees the plaintext data key. +// the encrypted data key to the Decrypt operation. It then uses the returned +// plaintext data key to encrypt data and finally stores the encrypted data +// in the container. In this system, the control plane never sees the plaintext +// data key. +// +// The result of this operation varies with the key state of the CMK. 
For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1739,11 +1792,11 @@ func (c *KMS) GenerateDataKeyWithoutPlaintextRequest(input *GenerateDataKeyWitho // * ErrCodeInvalidGrantTokenException "InvalidGrantTokenException" // The request was rejected because the specified grant token is not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -1777,8 +1830,8 @@ const opGenerateRandom = "GenerateRandom" // GenerateRandomRequest generates a "aws/request.Request" representing the // client's request for the GenerateRandom operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1835,7 +1888,7 @@ func (c *KMS) GenerateRandomRequest(input *GenerateRandomInput) (req *request.Re // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -1865,8 +1918,8 @@ const opGetKeyPolicy = "GetKeyPolicy" // GetKeyPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetKeyPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1927,11 +1980,11 @@ func (c *KMS) GetKeyPolicyRequest(input *GetKeyPolicyInput) (req *request.Reques // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -1965,8 +2018,8 @@ const opGetKeyRotationStatus = "GetKeyRotationStatus" // GetKeyRotationStatusRequest generates a "aws/request.Request" representing the // client's request for the GetKeyRotationStatus operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2006,7 +2059,20 @@ func (c *KMS) GetKeyRotationStatusRequest(input *GetKeyRotationStatusInput) (req // GetKeyRotationStatus API operation for AWS Key Management Service. // // Gets a Boolean value that indicates whether automatic rotation of the key -// material is enabled for the specified customer master key (CMK). +// material (http://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html) +// is enabled for the specified customer master key (CMK). +// +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// +// * Disabled: The key rotation status does not change when you disable a +// CMK. However, while the CMK is disabled, AWS KMS does not rotate the backing +// key. +// +// * Pending deletion: While a CMK is pending deletion, its key rotation +// status is false and AWS KMS does not rotate the backing key. If you cancel +// the deletion, the original key rotation status is restored. // // To perform this operation on a CMK in a different AWS account, specify the // key ARN in the value of the KeyId parameter. @@ -2030,11 +2096,11 @@ func (c *KMS) GetKeyRotationStatusRequest(input *GetKeyRotationStatusInput) (req // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -2072,8 +2138,8 @@ const opGetParametersForImport = "GetParametersForImport" // GetParametersForImportRequest generates a "aws/request.Request" representing the // client's request for the GetParametersForImport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2130,6 +2196,10 @@ func (c *KMS) GetParametersForImportRequest(input *GetParametersForImportInput) // they expire, they cannot be used for a subsequent ImportKeyMaterial request. // To get new ones, send another GetParametersForImport request. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -2153,11 +2223,11 @@ func (c *KMS) GetParametersForImportRequest(input *GetParametersForImportInput) // The request was rejected because the specified entity or resource could not // be found. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -2191,8 +2261,8 @@ const opImportKeyMaterial = "ImportKeyMaterial" // ImportKeyMaterialRequest generates a "aws/request.Request" representing the // client's request for the ImportKeyMaterial operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2268,6 +2338,10 @@ func (c *KMS) ImportKeyMaterialRequest(input *ImportKeyMaterialInput) (req *requ // into a CMK, you can reimport the same key material into that CMK, but you // cannot import different key material. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -2291,11 +2365,11 @@ func (c *KMS) ImportKeyMaterialRequest(input *ImportKeyMaterialInput) (req *requ // The request was rejected because the specified entity or resource could not // be found. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -2348,8 +2422,8 @@ const opListAliases = "ListAliases" // ListAliasesRequest generates a "aws/request.Request" representing the // client's request for the ListAliases operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2394,14 +2468,22 @@ func (c *KMS) ListAliasesRequest(input *ListAliasesInput) (req *request.Request, // ListAliases API operation for AWS Key Management Service. // -// Gets a list of all aliases in the caller's AWS account and region. You cannot +// Gets a list of aliases in the caller's AWS account and region. 
You cannot // list aliases in other accounts. For more information about aliases, see CreateAlias. // -// The response might include several aliases that do not have a TargetKeyId -// field because they are not associated with a CMK. These are predefined aliases -// that are reserved for CMKs managed by AWS services. If an alias is not associated -// with a CMK, the alias does not count against the alias limit (http://docs.aws.amazon.com/kms/latest/developerguide/limits.html#aliases-limit) -// for your account. +// By default, the ListAliases command returns all aliases in the account and +// region. To get only the aliases that point to a particular customer master +// key (CMK), use the KeyId parameter. +// +// The ListAliases response can include aliases that you created and associated +// with your customer managed CMKs, and aliases that AWS created and associated +// with AWS managed CMKs in your account. You can recognize AWS aliases because +// their names have the format aws/, such as aws/dynamodb. +// +// The response might also include aliases that have no TargetKeyId field. These +// are predefined aliases that AWS has created but has not yet associated with +// a CMK. Aliases that AWS creates in your account, including predefined aliases, +// do not count against your AWS KMS aliases limit (http://docs.aws.amazon.com/kms/latest/developerguide/limits.html#aliases-limit). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2419,7 +2501,7 @@ func (c *KMS) ListAliasesRequest(input *ListAliasesInput) (req *request.Request, // The request was rejected because the marker that specifies where pagination // should next begin is not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -2499,8 +2581,8 @@ const opListGrants = "ListGrants" // ListGrantsRequest generates a "aws/request.Request" representing the // client's request for the ListGrants operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2573,11 +2655,11 @@ func (c *KMS) ListGrantsRequest(input *ListGrantsInput) (req *request.Request, o // * ErrCodeInvalidArnException "InvalidArnException" // The request was rejected because a specified ARN was not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -2661,8 +2743,8 @@ const opListKeyPolicies = "ListKeyPolicies" // ListKeyPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListKeyPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2731,11 +2813,11 @@ func (c *KMS) ListKeyPoliciesRequest(input *ListKeyPoliciesInput) (req *request. // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -2819,8 +2901,8 @@ const opListKeys = "ListKeys" // ListKeysRequest generates a "aws/request.Request" representing the // client's request for the ListKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2880,7 +2962,7 @@ func (c *KMS) ListKeysRequest(input *ListKeysInput) (req *request.Request, outpu // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -2964,8 +3046,8 @@ const opListResourceTags = "ListResourceTags" // ListResourceTagsRequest generates a "aws/request.Request" representing the // client's request for the ListResourceTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3016,7 +3098,7 @@ func (c *KMS) ListResourceTagsRequest(input *ListResourceTagsInput) (req *reques // API operation ListResourceTags for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -3057,8 +3139,8 @@ const opListRetirableGrants = "ListRetirableGrants" // ListRetirableGrantsRequest generates a "aws/request.Request" representing the // client's request for the ListRetirableGrants operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
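As a usage note for the `ListAliases` hunks a few sections above, this SDK version adds an optional `KeyId` field to `ListAliasesInput` so callers can list only the aliases attached to one CMK. The sketch below shows the paginated form; the key ID is hypothetical and the snippet is illustrative, not part of the vendored diff.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

func main() {
	svc := kms.New(session.Must(session.NewSession()))

	// With the new KeyId parameter, ListAliases returns only aliases that point
	// at the given CMK (key ID or key ARN; alias names are not accepted here).
	input := &kms.ListAliasesInput{
		KeyId: aws.String("1234abcd-12ab-34cd-56ef-1234567890ab"), // hypothetical key ID
	}
	err := svc.ListAliasesPages(input, func(page *kms.ListAliasesOutput, lastPage bool) bool {
		for _, a := range page.Aliases {
			fmt.Printf("%s -> %s\n", aws.StringValue(a.AliasName), aws.StringValue(a.TargetKeyId))
		}
		return true // keep paging
	})
	if err != nil {
		log.Fatal(err)
	}
}
```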
@@ -3126,7 +3208,7 @@ func (c *KMS) ListRetirableGrantsRequest(input *ListRetirableGrantsInput) (req * // The request was rejected because the specified entity or resource could not // be found. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -3156,8 +3238,8 @@ const opPutKeyPolicy = "PutKeyPolicy" // PutKeyPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutKeyPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3231,7 +3313,7 @@ func (c *KMS) PutKeyPolicyRequest(input *PutKeyPolicyInput) (req *request.Reques // The request was rejected because a specified parameter is not supported or // a specified resource is not valid for this operation. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -3240,7 +3322,7 @@ func (c *KMS) PutKeyPolicyRequest(input *PutKeyPolicyInput) (req *request.Reques // see Limits (http://docs.aws.amazon.com/kms/latest/developerguide/limits.html) // in the AWS Key Management Service Developer Guide. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -3274,8 +3356,8 @@ const opReEncrypt = "ReEncrypt" // ReEncryptRequest generates a "aws/request.Request" representing the // client's request for the ReEncrypt operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3325,10 +3407,14 @@ func (c *KMS) ReEncryptRequest(input *ReEncryptInput) (req *request.Request, out // on the source CMK and once as ReEncryptTo on the destination CMK. We recommend // that you include the "kms:ReEncrypt*" permission in your key policies (http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) // to permit reencryption from or to the CMK. This permission is automatically -// included in the key policy when you create a CMK through the console, but +// included in the key policy when you create a CMK through the console. But // you must include it manually when you create a CMK programmatically or when // you set a key policy with the PutKeyPolicy operation. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -3363,11 +3449,11 @@ func (c *KMS) ReEncryptRequest(input *ReEncryptInput) (req *request.Request, out // * ErrCodeInvalidGrantTokenException "InvalidGrantTokenException" // The request was rejected because the specified grant token is not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -3401,8 +3487,8 @@ const opRetireGrant = "RetireGrant" // RetireGrantRequest generates a "aws/request.Request" representing the // client's request for the RetireGrant operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3485,11 +3571,11 @@ func (c *KMS) RetireGrantRequest(input *RetireGrantInput) (req *request.Request, // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -3523,8 +3609,8 @@ const opRevokeGrant = "RevokeGrant" // RevokeGrantRequest generates a "aws/request.Request" representing the // client's request for the RevokeGrant operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3593,11 +3679,11 @@ func (c *KMS) RevokeGrantRequest(input *RevokeGrantInput) (req *request.Request, // * ErrCodeInvalidGrantIdException "InvalidGrantIdException" // The request was rejected because the specified GrantId is not valid. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -3631,8 +3717,8 @@ const opScheduleKeyDeletion = "ScheduleKeyDeletion" // ScheduleKeyDeletionRequest generates a "aws/request.Request" representing the // client's request for the ScheduleKeyDeletion operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3690,6 +3776,10 @@ func (c *KMS) ScheduleKeyDeletionRequest(input *ScheduleKeyDeletionInput) (req * // Master Keys (http://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html) // in the AWS Key Management Service Developer Guide. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -3709,11 +3799,11 @@ func (c *KMS) ScheduleKeyDeletionRequest(input *ScheduleKeyDeletionInput) (req * // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -3747,8 +3837,8 @@ const opTagResource = "TagResource" // TagResourceRequest generates a "aws/request.Request" representing the // client's request for the TagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3789,22 +3879,23 @@ func (c *KMS) TagResourceRequest(input *TagResourceInput) (req *request.Request, // TagResource API operation for AWS Key Management Service. // -// Adds or overwrites one or more tags for the specified customer master key -// (CMK). You cannot perform this operation on a CMK in a different AWS account. +// Adds or edits tags for a customer master key (CMK). You cannot perform this +// operation on a CMK in a different AWS account. // // Each tag consists of a tag key and a tag value. Tag keys and tag values are // both required, but tag values can be empty (null) strings. // -// You cannot use the same tag key more than once per CMK. For example, consider -// a CMK with one tag whose tag key is Purpose and tag value is Test. If you -// send a TagResource request for this CMK with a tag key of Purpose and a tag -// value of Prod, it does not create a second tag. Instead, the original tag -// is overwritten with the new tag value. +// You can only use a tag key once for each CMK. If you use the tag key again, +// AWS KMS replaces the current tag value with the specified value. 
// // For information about the rules that apply to tag keys and tag values, see // User-Defined Tag Restrictions (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation-tag-restrictions.html) // in the AWS Billing and Cost Management User Guide. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -3813,7 +3904,7 @@ func (c *KMS) TagResourceRequest(input *TagResourceInput) (req *request.Request, // API operation TagResource for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -3824,7 +3915,7 @@ func (c *KMS) TagResourceRequest(input *TagResourceInput) (req *request.Request, // * ErrCodeInvalidArnException "InvalidArnException" // The request was rejected because a specified ARN was not valid. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -3866,8 +3957,8 @@ const opUntagResource = "UntagResource" // UntagResourceRequest generates a "aws/request.Request" representing the // client's request for the UntagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3908,12 +3999,15 @@ func (c *KMS) UntagResourceRequest(input *UntagResourceInput) (req *request.Requ // UntagResource API operation for AWS Key Management Service. // -// Removes the specified tag or tags from the specified customer master key -// (CMK). You cannot perform this operation on a CMK in a different AWS account. +// Removes the specified tags from the specified customer master key (CMK). +// You cannot perform this operation on a CMK in a different AWS account. +// +// To remove a tag, specify the tag key. To change the tag value of an existing +// tag key, use TagResource. // -// To remove a tag, you specify the tag key for each tag to remove. You do not -// specify the tag value. To overwrite the tag value for an existing tag, use -// TagResource. +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3923,7 +4017,7 @@ func (c *KMS) UntagResourceRequest(input *UntagResourceInput) (req *request.Requ // API operation UntagResource for usage and error information. 
// // Returned Error Codes: -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // @@ -3934,7 +4028,7 @@ func (c *KMS) UntagResourceRequest(input *UntagResourceInput) (req *request.Requ // * ErrCodeInvalidArnException "InvalidArnException" // The request was rejected because a specified ARN was not valid. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -3971,8 +4065,8 @@ const opUpdateAlias = "UpdateAlias" // UpdateAliasRequest generates a "aws/request.Request" representing the // client's request for the UpdateAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4034,6 +4128,10 @@ func (c *KMS) UpdateAliasRequest(input *UpdateAliasInput) (req *request.Request, // cannot begin with aws; that alias name prefix is reserved by Amazon Web Services // (AWS). // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -4050,11 +4148,11 @@ func (c *KMS) UpdateAliasRequest(input *UpdateAliasInput) (req *request.Request, // The request was rejected because the specified entity or resource could not // be found. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -4088,8 +4186,8 @@ const opUpdateKeyDescription = "UpdateKeyDescription" // UpdateKeyDescriptionRequest generates a "aws/request.Request" representing the // client's request for the UpdateKeyDescription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4130,11 +4228,15 @@ func (c *KMS) UpdateKeyDescriptionRequest(input *UpdateKeyDescriptionInput) (req // UpdateKeyDescription API operation for AWS Key Management Service. // -// Updates the description of a customer master key (CMK). To see the decription +// Updates the description of a customer master key (CMK). To see the description // of a CMK, use DescribeKey. 
// // You cannot perform this operation on a CMK in a different AWS account. // +// The result of this operation varies with the key state of the CMK. For details, +// see How Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -4154,11 +4256,11 @@ func (c *KMS) UpdateKeyDescriptionRequest(input *UpdateKeyDescriptionInput) (req // The system timed out while trying to fulfill the request. The request can // be retried. // -// * ErrCodeInternalException "InternalException" +// * ErrCodeInternalException "KMSInternalException" // The request was rejected because an internal exception occurred. The request // can be retried. // -// * ErrCodeInvalidStateException "InvalidStateException" +// * ErrCodeInvalidStateException "KMSInvalidStateException" // The request was rejected because the state of the specified resource is not // valid for this request. // @@ -4308,9 +4410,9 @@ func (s *CancelKeyDeletionOutput) SetKeyId(v string) *CancelKeyDeletionOutput { type CreateAliasInput struct { _ struct{} `type:"structure"` - // String that contains the display name. The name must start with the word - // "alias" followed by a forward slash (alias/). Aliases that begin with "alias/AWS" - // are reserved. + // Specifies the alias name. This value must begin with alias/ followed by the + // alias name, such as alias/ExampleAlias. The alias name cannot begin with + // aws/. The alias/aws/ prefix is reserved for AWS managed CMKs. // // AliasName is a required field AliasName *string `min:"1" type:"string" required:"true"` @@ -4435,8 +4537,8 @@ type CreateGrantInput struct { // KeyId is a required field KeyId *string `min:"1" type:"string" required:"true"` - // A friendly name for identifying the grant. Use this value to prevent unintended - // creation of duplicate grants when retrying this request. + // A friendly name for identifying the grant. Use this value to prevent the + // unintended creation of duplicate grants when retrying this request. // // When this value is absent, all CreateGrant requests result in a new grant // with a unique GrantId even if all the supplied parameters are identical. @@ -4642,9 +4744,9 @@ type CreateKeyInput struct { // The principals in the key policy must exist and be visible to AWS KMS. // When you create a new AWS principal (for example, an IAM user or role), // you might need to enforce a delay before including the new principal in - // a key policy because the new principal might not be immediately visible - // to AWS KMS. For more information, see Changes that I make are not always - // immediately visible (http://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency) + // a key policy. The reason for this is that the new principal might not + // be immediately visible to AWS KMS. For more information, see Changes that + // I make are not always immediately visible (http://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency) // in the AWS Identity and Access Management User Guide. 
// // If you do not provide a key policy, AWS KMS attaches a default key policy @@ -4987,7 +5089,11 @@ type DescribeKeyInput struct { // in the AWS Key Management Service Developer Guide. GrantTokens []*string `type:"list"` - // A unique identifier for the customer master key (CMK). + // Describes the specified customer master key (CMK). + // + // If you specify a predefined AWS alias (an AWS alias with no key ID), KMS + // associates the alias with an AWS managed CMK (http://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys) + // and returns its KeyId and Arn in the response. // // To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, // or alias ARN. When using an alias name, prefix it with "alias/". To specify @@ -6016,9 +6122,8 @@ type GetParametersForImportInput struct { // KeyId is a required field KeyId *string `min:"1" type:"string" required:"true"` - // The algorithm you will use to encrypt the key material before importing it - // with ImportKeyMaterial. For more information, see Encrypt the Key Material - // (http://docs.aws.amazon.com/kms/latest/developerguide/importing-keys-encrypt-key-material.html) + // The algorithm you use to encrypt the key material before importing it with + // ImportKeyMaterial. For more information, see Encrypt the Key Material (http://docs.aws.amazon.com/kms/latest/developerguide/importing-keys-encrypt-key-material.html) // in the AWS Key Management Service Developer Guide. // // WrappingAlgorithm is a required field @@ -6096,7 +6201,7 @@ type GetParametersForImportOutput struct { // The time at which the import token and public key are no longer valid. After // this time, you cannot use them to make an ImportKeyMaterial request and you // must send another GetParametersForImport request to get new ones. - ParametersValidTo *time.Time `type:"timestamp" timestampFormat:"unix"` + ParametersValidTo *time.Time `type:"timestamp"` // The public key to use to encrypt the key material before importing it with // ImportKeyMaterial. @@ -6140,14 +6245,14 @@ func (s *GetParametersForImportOutput) SetPublicKey(v []byte) *GetParametersForI } // A structure that you can use to allow certain operations in the grant only -// when the desired encryption context is present. For more information about +// when the preferred encryption context is present. For more information about // encryption context, see Encryption Context (http://docs.aws.amazon.com/kms/latest/developerguide/encryption-context.html) // in the AWS Key Management Service Developer Guide. // // Grant constraints apply only to operations that accept encryption context // as input. For example, the DescribeKey operation does not accept encryption // context as input. A grant that allows the DescribeKey operation does so regardless -// of the grant constraints. In constrast, the Encrypt operation accepts encryption +// of the grant constraints. In contrast, the Encrypt operation accepts encryption // context as input. A grant that allows the Encrypt operation does so only // when the encryption context of the Encrypt operation satisfies the grant // constraints. @@ -6200,7 +6305,7 @@ type GrantListEntry struct { Constraints *GrantConstraints `type:"structure"` // The date and time when the grant was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // The unique identifier for the grant. 
GrantId *string `min:"1" type:"string"` @@ -6336,7 +6441,7 @@ type ImportKeyMaterialInput struct { // expires, AWS KMS deletes the key material and the CMK becomes unusable. You // must omit this parameter when the ExpirationModel parameter is set to KEY_MATERIAL_DOES_NOT_EXPIRE. // Otherwise it is required. - ValidTo *time.Time `type:"timestamp" timestampFormat:"unix"` + ValidTo *time.Time `type:"timestamp"` } // String returns the string representation @@ -6470,11 +6575,11 @@ type KeyMetadata struct { Arn *string `min:"20" type:"string"` // The date and time when the CMK was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationDate *time.Time `type:"timestamp"` // The date and time after which AWS KMS deletes the CMK. This value is present // only when KeyState is PendingDeletion, otherwise this value is omitted. - DeletionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + DeletionDate *time.Time `type:"timestamp"` // The description of the CMK. Description *string `type:"string"` @@ -6492,7 +6597,7 @@ type KeyMetadata struct { // KeyId is a required field KeyId *string `min:"1" type:"string" required:"true"` - // The CMK's manager. CMKs are either customer-managed or AWS-managed. For more + // The CMK's manager. CMKs are either customer managed or AWS managed. For more // information about the difference, see Customer Master Keys (http://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys) // in the AWS Key Management Service Developer Guide. KeyManager *string `type:"string" enum:"KeyManagerType"` @@ -6519,7 +6624,7 @@ type KeyMetadata struct { // expires, AWS KMS deletes the key material and the CMK becomes unusable. This // value is present only for CMKs whose Origin is EXTERNAL and whose ExpirationModel // is KEY_MATERIAL_EXPIRES, otherwise this value is omitted. - ValidTo *time.Time `type:"timestamp" timestampFormat:"unix"` + ValidTo *time.Time `type:"timestamp"` } // String returns the string representation @@ -6613,6 +6718,14 @@ func (s *KeyMetadata) SetValidTo(v time.Time) *KeyMetadata { type ListAliasesInput struct { _ struct{} `type:"structure"` + // Lists only aliases that refer to the specified CMK. The value of this parameter + // can be the ID or Amazon Resource Name (ARN) of a CMK in the caller's account + // and region. You cannot use an alias name or alias ARN in this value. + // + // This parameter is optional. If you omit it, ListAliases returns all aliases + // in the account and region. + KeyId *string `min:"1" type:"string"` + // Use this parameter to specify the maximum number of items to return. When // this value is present, AWS KMS does not return more than the specified number // of items, but it might return fewer. @@ -6640,6 +6753,9 @@ func (s ListAliasesInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *ListAliasesInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "ListAliasesInput"} + if s.KeyId != nil && len(*s.KeyId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("KeyId", 1)) + } if s.Limit != nil && *s.Limit < 1 { invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) } @@ -6653,6 +6769,12 @@ func (s *ListAliasesInput) Validate() error { return nil } +// SetKeyId sets the KeyId field's value. +func (s *ListAliasesInput) SetKeyId(v string) *ListAliasesInput { + s.KeyId = &v + return s +} + // SetLimit sets the Limit field's value. 
func (s *ListAliasesInput) SetLimit(v int64) *ListAliasesInput { s.Limit = &v @@ -7326,9 +7448,9 @@ type PutKeyPolicyInput struct { // The principals in the key policy must exist and be visible to AWS KMS. // When you create a new AWS principal (for example, an IAM user or role), // you might need to enforce a delay before including the new principal in - // a key policy because the new principal might not be immediately visible - // to AWS KMS. For more information, see Changes that I make are not always - // immediately visible (http://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency) + // a key policy. The reason for this is that the new principal might not + // be immediately visible to AWS KMS. For more information, see Changes that + // I make are not always immediately visible (http://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency) // in the AWS Identity and Access Management User Guide. // // The key policy size limit is 32 kilobytes (32768 bytes). @@ -7803,7 +7925,7 @@ type ScheduleKeyDeletionOutput struct { _ struct{} `type:"structure"` // The date and time after which AWS KMS deletes the customer master key (CMK). - DeletionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + DeletionDate *time.Time `type:"timestamp"` // The unique identifier of the customer master key (CMK) for which deletion // is scheduled. diff --git a/vendor/github.com/aws/aws-sdk-go/service/kms/doc.go b/vendor/github.com/aws/aws-sdk-go/service/kms/doc.go index 3bab059f303..cc3620ac7dd 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/kms/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/kms/doc.go @@ -9,7 +9,7 @@ // Management Service Developer Guide (http://docs.aws.amazon.com/kms/latest/developerguide/). // // AWS provides SDKs that consist of libraries and sample code for various programming -// languages and platforms (Java, Ruby, .Net, iOS, Android, etc.). The SDKs +// languages and platforms (Java, Ruby, .Net, macOS, Android, etc.). The SDKs // provide a convenient way to create programmatic access to AWS KMS and other // AWS services. For example, the SDKs take care of tasks such as signing requests // (see below), managing errors, and retrying requests automatically. For more @@ -30,7 +30,7 @@ // Requests must be signed by using an access key ID and a secret access key. // We strongly recommend that you do not use your AWS account (root) access // key ID and secret key for everyday work with AWS KMS. Instead, use the access -// key ID and secret access key for an IAM user, or you can use the AWS Security +// key ID and secret access key for an IAM user. You can also use the AWS Security // Token Service to generate temporary security credentials that you can use // to sign requests. // @@ -61,11 +61,11 @@ // - This set of topics walks you through the process of signing a request // using an access key ID and a secret access key. // -// Commonly Used APIs +// Commonly Used API Operations // -// Of the APIs discussed in this guide, the following will prove the most useful -// for most applications. You will likely perform actions other than these, -// such as creating keys and assigning policies, by using the console. +// Of the API operations discussed in this guide, the following will prove the +// most useful for most applications. 
You will likely perform operations other +// than these, such as creating keys and assigning policies, by using the console. // // * Encrypt // diff --git a/vendor/github.com/aws/aws-sdk-go/service/kms/errors.go b/vendor/github.com/aws/aws-sdk-go/service/kms/errors.go index d79e4321bd9..2a6511da97e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/kms/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/kms/errors.go @@ -41,11 +41,11 @@ const ( ErrCodeIncorrectKeyMaterialException = "IncorrectKeyMaterialException" // ErrCodeInternalException for service response error code - // "InternalException". + // "KMSInternalException". // // The request was rejected because an internal exception occurred. The request // can be retried. - ErrCodeInternalException = "InternalException" + ErrCodeInternalException = "KMSInternalException" // ErrCodeInvalidAliasNameException for service response error code // "InvalidAliasNameException". @@ -100,7 +100,7 @@ const ( ErrCodeInvalidMarkerException = "InvalidMarkerException" // ErrCodeInvalidStateException for service response error code - // "InvalidStateException". + // "KMSInvalidStateException". // // The request was rejected because the state of the specified resource is not // valid for this request. @@ -108,7 +108,7 @@ const ( // For more information about how key state affects the use of a CMK, see How // Key State Affects Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) // in the AWS Key Management Service Developer Guide. - ErrCodeInvalidStateException = "InvalidStateException" + ErrCodeInvalidStateException = "KMSInvalidStateException" // ErrCodeKeyUnavailableException for service response error code // "KeyUnavailableException". diff --git a/vendor/github.com/aws/aws-sdk-go/service/kms/service.go b/vendor/github.com/aws/aws-sdk-go/service/kms/service.go index 3ff65de5e5b..6d062f32fc8 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/kms/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/kms/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "kms" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "kms" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "KMS" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the KMS client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/lambda/api.go b/vendor/github.com/aws/aws-sdk-go/service/lambda/api.go index d4e8db908c8..9b8428ca8ad 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/lambda/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/lambda/api.go @@ -17,8 +17,8 @@ const opAddPermission = "AddPermission" // AddPermissionRequest generates a "aws/request.Request" representing the // client's request for the AddPermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -59,17 +59,16 @@ func (c *Lambda) AddPermissionRequest(input *AddPermissionInput) (req *request.R // // Adds a permission to the resource policy associated with the specified AWS // Lambda function. You use resource policies to grant permissions to event -// sources that use push model. In a push model, event sources (such as Amazon -// S3 and custom applications) invoke your Lambda function. Each permission -// you add to the resource policy allows an event source, permission to invoke +// sources that use the push model. In a push model, event sources (such as +// Amazon S3 and custom applications) invoke your Lambda function. Each permission +// you add to the resource policy allows an event source permission to invoke // the Lambda function. // -// For information about the push model, see AWS Lambda: How it Works (http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction.html). -// -// If you are using versioning, the permissions you add are specific to the -// Lambda function version or alias you specify in the AddPermission request -// via the Qualifier parameter. For more information about versioning, see AWS -// Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). +// Permissions apply to the Amazon Resource Name (ARN) used to invoke the function, +// which can be unqualified (the unpublished version of the function), or include +// a version or alias. If a client uses a version or alias to invoke a function, +// use the Qualifier parameter to apply permissions to that ARN. For more information +// about versioning, see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). // // This operation requires permission for the lambda:AddPermission action. // @@ -100,6 +99,7 @@ func (c *Lambda) AddPermissionRequest(input *AddPermissionInput) (req *request.R // Lambda function access policy is limited to 20 KB. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodePreconditionFailedException "PreconditionFailedException" // The RevisionId provided does not match the latest RevisionId for the Lambda @@ -132,8 +132,8 @@ const opCreateAlias = "CreateAlias" // CreateAliasRequest generates a "aws/request.Request" representing the // client's request for the CreateAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -202,6 +202,7 @@ func (c *Lambda) CreateAliasRequest(input *CreateAliasInput) (req *request.Reque // API, that AWS Lambda is unable to assume you will get this exception. 
// // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/CreateAlias func (c *Lambda) CreateAlias(input *CreateAliasInput) (*AliasConfiguration, error) { @@ -229,8 +230,8 @@ const opCreateEventSourceMapping = "CreateEventSourceMapping" // CreateEventSourceMappingRequest generates a "aws/request.Request" representing the // client's request for the CreateEventSourceMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -269,31 +270,16 @@ func (c *Lambda) CreateEventSourceMappingRequest(input *CreateEventSourceMapping // CreateEventSourceMapping API operation for AWS Lambda. // -// Identifies a stream as an event source for a Lambda function. It can be either -// an Amazon Kinesis stream or an Amazon DynamoDB stream. AWS Lambda invokes -// the specified function when records are posted to the stream. +// Creates a mapping between an event source and an AWS Lambda function. Lambda +// reads items from the event source and triggers the function. // -// This association between a stream source and a Lambda function is called -// the event source mapping. +// For details about each event source type, see the following topics. // -// This event source mapping is relevant only in the AWS Lambda pull model, -// where AWS Lambda invokes the function. For more information, see AWS Lambda: -// How it Works (http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction.html) -// in the AWS Lambda Developer Guide. +// * Using AWS Lambda with Amazon Kinesis (http://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html) // -// You provide mapping information (for example, which stream to read from and -// which Lambda function to invoke) in the request body. -// -// Each event source, such as an Amazon Kinesis or a DynamoDB stream, can be -// associated with multiple AWS Lambda function. A given Lambda function can -// be associated with multiple AWS event sources. -// -// If you are using versioning, you can specify a specific function version -// or an alias via the function name parameter. For more information about versioning, -// see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). +// * Using AWS Lambda with Amazon SQS (http://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html) // -// This operation requires permission for the lambda:CreateEventSourceMapping -// action. +// * Using AWS Lambda with Amazon DynamoDB (http://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html) // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -315,6 +301,7 @@ func (c *Lambda) CreateEventSourceMappingRequest(input *CreateEventSourceMapping // The resource already exists. 
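As a rough illustration of the event source mappings described above, the sketch below subscribes a function to a Kinesis stream; the ARNs, names, and batch size are placeholders, and StartingPosition would be left out for an SQS queue:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/lambda"
)

// mapStreamToFunction subscribes "my-function" to a Kinesis stream; the ARN and
// function name are placeholders. Lambda polls the stream and invokes the
// function with batches of up to BatchSize records.
func mapStreamToFunction(svc *lambda.Lambda) (*lambda.EventSourceMappingConfiguration, error) {
	return svc.CreateEventSourceMapping(&lambda.CreateEventSourceMappingInput{
		EventSourceArn:   aws.String("arn:aws:kinesis:us-west-2:123456789012:stream/example"),
		FunctionName:     aws.String("my-function"),
		BatchSize:        aws.Int64(100),
		Enabled:          aws.Bool(true),
		StartingPosition: aws.String("LATEST"),
	})
}
```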
// // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The resource (for example, a Lambda function or access policy statement) @@ -346,8 +333,8 @@ const opCreateFunction = "CreateFunction" // CreateFunctionRequest generates a "aws/request.Request" representing the // client's request for the CreateFunction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -386,14 +373,9 @@ func (c *Lambda) CreateFunctionRequest(input *CreateFunctionInput) (req *request // CreateFunction API operation for AWS Lambda. // -// Creates a new Lambda function. The function metadata is created from the -// request parameters, and the code for the function is provided by a .zip file -// in the request body. If the function name already exists, the operation will -// fail. Note that the function name is case-sensitive. -// -// If you are using versioning, you can also publish a version of the Lambda -// function you are creating using the Publish parameter. For more information -// about versioning, see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). +// Creates a new Lambda function. The function configuration is created from +// the request parameters, and the code for the function is provided by a .zip +// file. The function name is case-sensitive. // // This operation requires permission for the lambda:CreateFunction action. // @@ -421,6 +403,7 @@ func (c *Lambda) CreateFunctionRequest(input *CreateFunctionInput) (req *request // The resource already exists. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeCodeStorageExceededException "CodeStorageExceededException" // You have exceeded your maximum total code size per account. Limits (http://docs.aws.amazon.com/lambda/latest/dg/limits.html) @@ -451,8 +434,8 @@ const opDeleteAlias = "DeleteAlias" // DeleteAliasRequest generates a "aws/request.Request" representing the // client's request for the DeleteAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -515,6 +498,7 @@ func (c *Lambda) DeleteAliasRequest(input *DeleteAliasInput) (req *request.Reque // API, that AWS Lambda is unable to assume you will get this exception. 
// // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/DeleteAlias func (c *Lambda) DeleteAlias(input *DeleteAliasInput) (*DeleteAliasOutput, error) { @@ -542,8 +526,8 @@ const opDeleteEventSourceMapping = "DeleteEventSourceMapping" // DeleteEventSourceMappingRequest generates a "aws/request.Request" representing the // client's request for the DeleteEventSourceMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -582,11 +566,7 @@ func (c *Lambda) DeleteEventSourceMappingRequest(input *DeleteEventSourceMapping // DeleteEventSourceMapping API operation for AWS Lambda. // -// Removes an event source mapping. This means AWS Lambda will no longer invoke -// the function for events in the associated source. -// -// This operation requires permission for the lambda:DeleteEventSourceMapping -// action. +// Deletes an event source mapping. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -609,6 +589,12 @@ func (c *Lambda) DeleteEventSourceMappingRequest(input *DeleteEventSourceMapping // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded +// +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to update an EventSoure Mapping in CREATING, or tried to delete +// a EventSoure mapping currently in the UPDATING state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/DeleteEventSourceMapping func (c *Lambda) DeleteEventSourceMapping(input *DeleteEventSourceMappingInput) (*EventSourceMappingConfiguration, error) { @@ -636,8 +622,8 @@ const opDeleteFunction = "DeleteFunction" // DeleteFunctionRequest generates a "aws/request.Request" representing the // client's request for the DeleteFunction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -678,17 +664,9 @@ func (c *Lambda) DeleteFunctionRequest(input *DeleteFunctionInput) (req *request // DeleteFunction API operation for AWS Lambda. // -// Deletes the specified Lambda function code and configuration. -// -// If you are using the versioning feature and you don't specify a function -// version in your DeleteFunction request, AWS Lambda will delete the function, -// including all its versions, and any aliases pointing to the function versions. -// To delete a specific function version, you must provide the function version -// via the Qualifier parameter. 
For information about function versioning, see -// AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). -// -// When you delete a function the associated resource policy is also deleted. -// You will need to delete the event source mappings explicitly. +// Deletes a Lambda function. To delete a specific function version, use the +// Qualifier parameter. Otherwise, all versions and aliases are deleted. Event +// source mappings are not deleted. // // This operation requires permission for the lambda:DeleteFunction action. // @@ -708,6 +686,7 @@ func (c *Lambda) DeleteFunctionRequest(input *DeleteFunctionInput) (req *request // specified in the request does not exist. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One of the parameters in the request is invalid. For example, if you provided @@ -743,8 +722,8 @@ const opDeleteFunctionConcurrency = "DeleteFunctionConcurrency" // DeleteFunctionConcurrencyRequest generates a "aws/request.Request" representing the // client's request for the DeleteFunctionConcurrency operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -786,7 +765,7 @@ func (c *Lambda) DeleteFunctionConcurrencyRequest(input *DeleteFunctionConcurren // DeleteFunctionConcurrency API operation for AWS Lambda. // // Removes concurrent execution limits from this function. For more information, -// see concurrent-executions. +// see Managing Concurrency (http://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -804,6 +783,7 @@ func (c *Lambda) DeleteFunctionConcurrencyRequest(input *DeleteFunctionConcurren // specified in the request does not exist. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One of the parameters in the request is invalid. For example, if you provided @@ -836,8 +816,8 @@ const opGetAccountSettings = "GetAccountSettings" // GetAccountSettingsRequest generates a "aws/request.Request" representing the // client's request for the GetAccountSettings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -876,13 +856,8 @@ func (c *Lambda) GetAccountSettingsRequest(input *GetAccountSettingsInput) (req // GetAccountSettings API operation for AWS Lambda. // -// Returns a customer's account settings. -// -// You can use this operation to retrieve Lambda limits information, such as -// code size and concurrency limits. 
For more information about limits, see -// AWS Lambda Limits (http://docs.aws.amazon.com/lambda/latest/dg/limits.html). -// You can also retrieve resource usage statistics, such as code storage usage -// and function count. +// Retrieves details about your account's limits (http://docs.aws.amazon.com/lambda/latest/dg/limits.html) +// and usage in a region. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -893,6 +868,7 @@ func (c *Lambda) GetAccountSettingsRequest(input *GetAccountSettingsInput) (req // // Returned Error Codes: // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeServiceException "ServiceException" // The AWS Lambda service encountered an internal error. @@ -923,8 +899,8 @@ const opGetAlias = "GetAlias" // GetAliasRequest generates a "aws/request.Request" representing the // client's request for the GetAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -990,6 +966,7 @@ func (c *Lambda) GetAliasRequest(input *GetAliasInput) (req *request.Request, ou // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/GetAlias func (c *Lambda) GetAlias(input *GetAliasInput) (*AliasConfiguration, error) { @@ -1017,8 +994,8 @@ const opGetEventSourceMapping = "GetEventSourceMapping" // GetEventSourceMappingRequest generates a "aws/request.Request" representing the // client's request for the GetEventSourceMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1057,10 +1034,7 @@ func (c *Lambda) GetEventSourceMappingRequest(input *GetEventSourceMappingInput) // GetEventSourceMapping API operation for AWS Lambda. // -// Returns configuration information for the specified event source mapping -// (see CreateEventSourceMapping). -// -// This operation requires permission for the lambda:GetEventSourceMapping action. +// Returns details about an event source mapping. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1083,6 +1057,7 @@ func (c *Lambda) GetEventSourceMappingRequest(input *GetEventSourceMappingInput) // API, that AWS Lambda is unable to assume you will get this exception. 
// // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/GetEventSourceMapping func (c *Lambda) GetEventSourceMapping(input *GetEventSourceMappingInput) (*EventSourceMappingConfiguration, error) { @@ -1110,8 +1085,8 @@ const opGetFunction = "GetFunction" // GetFunctionRequest generates a "aws/request.Request" representing the // client's request for the GetFunction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1156,11 +1131,9 @@ func (c *Lambda) GetFunctionRequest(input *GetFunctionInput) (req *request.Reque // information is the same information you provided as parameters when uploading // the function. // -// Using the optional Qualifier parameter, you can specify a specific function -// version for which you want this information. If you don't specify this parameter, -// the API uses unqualified function ARN which return information about the -// $LATEST version of the Lambda function. For more information, see AWS Lambda -// Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). +// Use the Qualifier parameter to retrieve a published version of the function. +// Otherwise, returns the unpublished version ($LATEST). For more information, +// see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). // // This operation requires permission for the lambda:GetFunction action. // @@ -1180,6 +1153,7 @@ func (c *Lambda) GetFunctionRequest(input *GetFunctionInput) (req *request.Reque // specified in the request does not exist. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One of the parameters in the request is invalid. For example, if you provided @@ -1212,8 +1186,8 @@ const opGetFunctionConfiguration = "GetFunctionConfiguration" // GetFunctionConfigurationRequest generates a "aws/request.Request" representing the // client's request for the GetFunctionConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1282,6 +1256,7 @@ func (c *Lambda) GetFunctionConfigurationRequest(input *GetFunctionConfiguration // specified in the request does not exist. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One of the parameters in the request is invalid. For example, if you provided @@ -1314,8 +1289,8 @@ const opGetPolicy = "GetPolicy" // GetPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetPolicy operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1356,12 +1331,7 @@ func (c *Lambda) GetPolicyRequest(input *GetPolicyInput) (req *request.Request, // // Returns the resource policy associated with the specified Lambda function. // -// If you are using the versioning feature, you can get the resource policy -// associated with the specific Lambda function version or alias by specifying -// the version or alias name using the Qualifier parameter. For more information -// about versioning, see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). -// -// You need permission for the lambda:GetPolicy action. +// This action requires permission for the lambda:GetPolicy action. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1379,6 +1349,7 @@ func (c *Lambda) GetPolicyRequest(input *GetPolicyInput) (req *request.Request, // specified in the request does not exist. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One of the parameters in the request is invalid. For example, if you provided @@ -1411,8 +1382,8 @@ const opInvoke = "Invoke" // InvokeRequest generates a "aws/request.Request" representing the // client's request for the Invoke operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1451,16 +1422,22 @@ func (c *Lambda) InvokeRequest(input *InvokeInput) (req *request.Request, output // Invoke API operation for AWS Lambda. // -// Invokes a specific Lambda function. For an example, see Create the Lambda -// Function and Test It Manually (http://docs.aws.amazon.com/lambda/latest/dg/with-dynamodb-create-function.html#with-dbb-invoke-manually). +// Invokes a Lambda function. For an example, see Create the Lambda Function +// and Test It Manually (http://docs.aws.amazon.com/lambda/latest/dg/with-dynamodb-create-function.html#with-dbb-invoke-manually). // -// If you are using the versioning feature, you can invoke the specific function -// version by providing function version or alias name that is pointing to the -// function version using the Qualifier parameter in the request. If you don't -// provide the Qualifier parameter, the $LATEST version of the Lambda function -// is invoked. Invocations occur at least once in response to an event and functions -// must be idempotent to handle this. For information about the versioning feature, -// see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). +// Specify just a function name to invoke the latest version of the function. 
+// To invoke a published version, use the Qualifier parameter to specify a version +// or alias (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). +// +// If you use the RequestResponse (synchronous) invocation option, the function +// will be invoked only once. If you use the Event (asynchronous) invocation +// option, the function will be invoked at least once in response to an event +// and the function must be idempotent to handle this. +// +// For functions with a long timeout, your client may be disconnected during +// synchronous invocation while it waits for a response. Configure your HTTP +// client, SDK, firewall, proxy, or operating system to allow for long connections +// with timeout or keep-alive settings. // // This operation requires permission for the lambda:InvokeFunction action. // @@ -1497,6 +1474,7 @@ func (c *Lambda) InvokeRequest(input *InvokeInput) (req *request.Request, output // The content type of the Invoke request body is not JSON. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One of the parameters in the request is invalid. For example, if you provided @@ -1521,6 +1499,7 @@ func (c *Lambda) InvokeRequest(input *InvokeInput) (req *request.Request, output // using the execution role provided for the Lambda function. // // * ErrCodeEC2AccessDeniedException "EC2AccessDeniedException" +// Need additional permissions to configure VPC settings. // // * ErrCodeInvalidSubnetIDException "InvalidSubnetIDException" // The Subnet ID provided in the Lambda function VPC configuration is invalid. @@ -1530,7 +1509,7 @@ func (c *Lambda) InvokeRequest(input *InvokeInput) (req *request.Request, output // invalid. // // * ErrCodeInvalidZipFileException "InvalidZipFileException" -// AWS Lambda could not unzip the function zip file. +// AWS Lambda could not unzip the deployment package. // // * ErrCodeKMSDisabledException "KMSDisabledException" // Lambda was unable to decrypt the environment variables because the KMS key @@ -1577,8 +1556,8 @@ const opInvokeAsync = "InvokeAsync" // InvokeAsyncRequest generates a "aws/request.Request" representing the // client's request for the InvokeAsync operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1599,6 +1578,8 @@ const opInvokeAsync = "InvokeAsync" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/InvokeAsync +// +// Deprecated: InvokeAsync has been deprecated func (c *Lambda) InvokeAsyncRequest(input *InvokeAsyncInput) (req *request.Request, output *InvokeAsyncOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, InvokeAsync, has been deprecated") @@ -1620,7 +1601,7 @@ func (c *Lambda) InvokeAsyncRequest(input *InvokeAsyncInput) (req *request.Reque // InvokeAsync API operation for AWS Lambda. // -// This API is deprecated. We recommend you use Invoke API (see Invoke). +// For asynchronous function invocation, use Invoke. // // Submits an invocation request to AWS Lambda. Upon receiving the request, // Lambda executes the specified function asynchronously. 
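A small sketch of the invocation options just described, calling a published alias synchronously; the function name, alias, and payload handling are placeholders:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/lambda"
)

// invokeSync calls the PROD alias of "my-function" (both placeholders) with the
// RequestResponse invocation type, waiting for the result; switching
// InvocationType to "Event" would queue the invocation and return immediately.
func invokeSync(svc *lambda.Lambda, payload []byte) ([]byte, error) {
	out, err := svc.Invoke(&lambda.InvokeInput{
		FunctionName:   aws.String("my-function"),
		Qualifier:      aws.String("PROD"),
		InvocationType: aws.String("RequestResponse"),
		Payload:        payload,
	})
	if err != nil {
		return nil, err
	}
	return out.Payload, nil
}
```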
To see the logs generated @@ -1650,6 +1631,8 @@ func (c *Lambda) InvokeAsyncRequest(input *InvokeAsyncInput) (req *request.Reque // The runtime or runtime version specified is not supported. // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/InvokeAsync +// +// Deprecated: InvokeAsync has been deprecated func (c *Lambda) InvokeAsync(input *InvokeAsyncInput) (*InvokeAsyncOutput, error) { req, out := c.InvokeAsyncRequest(input) return out, req.Send() @@ -1664,6 +1647,8 @@ func (c *Lambda) InvokeAsync(input *InvokeAsyncInput) (*InvokeAsyncOutput, error // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: InvokeAsyncWithContext has been deprecated func (c *Lambda) InvokeAsyncWithContext(ctx aws.Context, input *InvokeAsyncInput, opts ...request.Option) (*InvokeAsyncOutput, error) { req, out := c.InvokeAsyncRequest(input) req.SetContext(ctx) @@ -1675,8 +1660,8 @@ const opListAliases = "ListAliases" // ListAliasesRequest generates a "aws/request.Request" representing the // client's request for the ListAliases operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1743,6 +1728,7 @@ func (c *Lambda) ListAliasesRequest(input *ListAliasesInput) (req *request.Reque // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/ListAliases func (c *Lambda) ListAliases(input *ListAliasesInput) (*ListAliasesOutput, error) { @@ -1770,8 +1756,8 @@ const opListEventSourceMappings = "ListEventSourceMappings" // ListEventSourceMappingsRequest generates a "aws/request.Request" representing the // client's request for the ListEventSourceMappings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1816,19 +1802,8 @@ func (c *Lambda) ListEventSourceMappingsRequest(input *ListEventSourceMappingsIn // ListEventSourceMappings API operation for AWS Lambda. // -// Returns a list of event source mappings you created using the CreateEventSourceMapping -// (see CreateEventSourceMapping). -// -// For each mapping, the API returns configuration information. You can optionally -// specify filters to retrieve specific event source mappings. -// -// If you are using the versioning feature, you can get list of event source -// mappings for a specific Lambda function version or an alias as described -// in the FunctionName parameter. For information about the versioning feature, -// see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). 
-// -// This operation requires permission for the lambda:ListEventSourceMappings -// action. +// Lists event source mappings. Specify an EventSourceArn to only show event +// source mappings for a single event source. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1851,6 +1826,7 @@ func (c *Lambda) ListEventSourceMappingsRequest(input *ListEventSourceMappingsIn // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/ListEventSourceMappings func (c *Lambda) ListEventSourceMappings(input *ListEventSourceMappingsInput) (*ListEventSourceMappingsOutput, error) { @@ -1928,8 +1904,8 @@ const opListFunctions = "ListFunctions" // ListFunctionsRequest generates a "aws/request.Request" representing the // client's request for the ListFunctions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1996,6 +1972,7 @@ func (c *Lambda) ListFunctionsRequest(input *ListFunctionsInput) (req *request.R // The AWS Lambda service encountered an internal error. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeInvalidParameterValueException "InvalidParameterValueException" // One of the parameters in the request is invalid. For example, if you provided @@ -2078,8 +2055,8 @@ const opListTags = "ListTags" // ListTagsRequest generates a "aws/request.Request" representing the // client's request for the ListTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2119,7 +2096,9 @@ func (c *Lambda) ListTagsRequest(input *ListTagsInput) (req *request.Request, ou // ListTags API operation for AWS Lambda. // // Returns a list of tags assigned to a function when supplied the function -// ARN (Amazon Resource Name). +// ARN (Amazon Resource Name). For more information on Tagging, see Tagging +// Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) +// in the AWS Lambda Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2142,6 +2121,7 @@ func (c *Lambda) ListTagsRequest(input *ListTagsInput) (req *request.Request, ou // API, that AWS Lambda is unable to assume you will get this exception. 
// // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/ListTags func (c *Lambda) ListTags(input *ListTagsInput) (*ListTagsOutput, error) { @@ -2169,8 +2149,8 @@ const opListVersionsByFunction = "ListVersionsByFunction" // ListVersionsByFunctionRequest generates a "aws/request.Request" representing the // client's request for the ListVersionsByFunction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2209,8 +2189,8 @@ func (c *Lambda) ListVersionsByFunctionRequest(input *ListVersionsByFunctionInpu // ListVersionsByFunction API operation for AWS Lambda. // -// List all versions of a function. For information about the versioning feature, -// see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). +// Lists all versions of a function. For information about versioning, see AWS +// Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2233,6 +2213,7 @@ func (c *Lambda) ListVersionsByFunctionRequest(input *ListVersionsByFunctionInpu // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/ListVersionsByFunction func (c *Lambda) ListVersionsByFunction(input *ListVersionsByFunctionInput) (*ListVersionsByFunctionOutput, error) { @@ -2260,8 +2241,8 @@ const opPublishVersion = "PublishVersion" // PublishVersionRequest generates a "aws/request.Request" representing the // client's request for the PublishVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2327,6 +2308,7 @@ func (c *Lambda) PublishVersionRequest(input *PublishVersionInput) (req *request // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeCodeStorageExceededException "CodeStorageExceededException" // You have exceeded your maximum total code size per account. Limits (http://docs.aws.amazon.com/lambda/latest/dg/limits.html) @@ -2362,8 +2344,8 @@ const opPutFunctionConcurrency = "PutFunctionConcurrency" // PutFunctionConcurrencyRequest generates a "aws/request.Request" representing the // client's request for the PutFunctionConcurrency operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2407,7 +2389,7 @@ func (c *Lambda) PutFunctionConcurrencyRequest(input *PutFunctionConcurrencyInpu // Note that Lambda automatically reserves a buffer of 100 concurrent executions // for functions without any reserved concurrency limit. This means if your // account limit is 1000, you have a total of 900 available to allocate to individual -// functions. For more information, see concurrent-executions. +// functions. For more information, see Managing Concurrency (http://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2430,6 +2412,7 @@ func (c *Lambda) PutFunctionConcurrencyRequest(input *PutFunctionConcurrencyInpu // specified in the request does not exist. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/PutFunctionConcurrency func (c *Lambda) PutFunctionConcurrency(input *PutFunctionConcurrencyInput) (*PutFunctionConcurrencyOutput, error) { @@ -2457,8 +2440,8 @@ const opRemovePermission = "RemovePermission" // RemovePermissionRequest generates a "aws/request.Request" representing the // client's request for the RemovePermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2499,17 +2482,17 @@ func (c *Lambda) RemovePermissionRequest(input *RemovePermissionInput) (req *req // RemovePermission API operation for AWS Lambda. // -// You can remove individual permissions from an resource policy associated -// with a Lambda function by providing a statement ID that you provided when -// you added the permission. +// Removes permissions from a function. You can remove individual permissions +// from an resource policy associated with a Lambda function by providing a +// statement ID that you provided when you added the permission. When you remove +// permissions, disable the event source mapping or trigger configuration first +// to avoid errors. // -// If you are using versioning, the permissions you remove are specific to the -// Lambda function version or alias you specify in the AddPermission request -// via the Qualifier parameter. For more information about versioning, see AWS -// Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). -// -// Note that removal of a permission will cause an active event source to lose -// permission to the function. +// Permissions apply to the Amazon Resource Name (ARN) used to invoke the function, +// which can be unqualified (the unpublished version of the function), or include +// a version or alias. If a client uses a version or alias to invoke a function, +// use the Qualifier parameter to apply permissions to that ARN. 
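Tying into the concurrency notes above, a sketch of reserving part of the account's unreserved pool for a single function; the function name and the limit of 50 are placeholders:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/lambda"
)

// reserveConcurrency caps "my-function" (a placeholder name) at 50 concurrent
// executions, drawing the reservation from the account's unreserved pool.
func reserveConcurrency(svc *lambda.Lambda) error {
	_, err := svc.PutFunctionConcurrency(&lambda.PutFunctionConcurrencyInput{
		FunctionName:                 aws.String("my-function"),
		ReservedConcurrentExecutions: aws.Int64(50),
	})
	return err
}
```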
For more information +// about versioning, see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). // // You need permission for the lambda:RemovePermission action. // @@ -2534,6 +2517,7 @@ func (c *Lambda) RemovePermissionRequest(input *RemovePermissionInput) (req *req // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodePreconditionFailedException "PreconditionFailedException" // The RevisionId provided does not match the latest RevisionId for the Lambda @@ -2566,8 +2550,8 @@ const opTagResource = "TagResource" // TagResourceRequest generates a "aws/request.Request" representing the // client's request for the TagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2611,6 +2595,8 @@ func (c *Lambda) TagResourceRequest(input *TagResourceInput) (req *request.Reque // Creates a list of tags (key-value pairs) on the Lambda function. Requires // the Lambda function ARN (Amazon Resource Name). If a key is specified without // a value, Lambda creates a tag with the specified key and a value of null. +// For more information, see Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) +// in the AWS Lambda Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2633,6 +2619,7 @@ func (c *Lambda) TagResourceRequest(input *TagResourceInput) (req *request.Reque // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/TagResource func (c *Lambda) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { @@ -2660,8 +2647,8 @@ const opUntagResource = "UntagResource" // UntagResourceRequest generates a "aws/request.Request" representing the // client's request for the UntagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2703,7 +2690,8 @@ func (c *Lambda) UntagResourceRequest(input *UntagResourceInput) (req *request.R // UntagResource API operation for AWS Lambda. // // Removes tags from a Lambda function. Requires the function ARN (Amazon Resource -// Name). +// Name). For more information, see Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) +// in the AWS Lambda Developer Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2726,6 +2714,7 @@ func (c *Lambda) UntagResourceRequest(input *UntagResourceInput) (req *request.R // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/UntagResource func (c *Lambda) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { @@ -2753,8 +2742,8 @@ const opUpdateAlias = "UpdateAlias" // UpdateAliasRequest generates a "aws/request.Request" representing the // client's request for the UpdateAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2820,6 +2809,7 @@ func (c *Lambda) UpdateAliasRequest(input *UpdateAliasInput) (req *request.Reque // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodePreconditionFailedException "PreconditionFailedException" // The RevisionId provided does not match the latest RevisionId for the Lambda @@ -2852,8 +2842,8 @@ const opUpdateEventSourceMapping = "UpdateEventSourceMapping" // UpdateEventSourceMappingRequest generates a "aws/request.Request" representing the // client's request for the UpdateEventSourceMapping operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2892,23 +2882,8 @@ func (c *Lambda) UpdateEventSourceMappingRequest(input *UpdateEventSourceMapping // UpdateEventSourceMapping API operation for AWS Lambda. // -// You can update an event source mapping. This is useful if you want to change -// the parameters of the existing mapping without losing your position in the -// stream. You can change which function will receive the stream records, but -// to change the stream itself, you must create a new mapping. -// -// If you are using the versioning feature, you can update the event source -// mapping to map to a specific Lambda function version or alias as described -// in the FunctionName parameter. For information about the versioning feature, -// see AWS Lambda Function Versioning and Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html). -// -// If you disable the event source mapping, AWS Lambda stops polling. If you -// enable again, it will resume polling from the time it had stopped polling, -// so you don't lose processing of any records. However, if you delete event -// source mapping and create it again, it will reset. -// -// This operation requires permission for the lambda:UpdateEventSourceMapping -// action. +// Updates an event source mapping. 
You can change the function that AWS Lambda +// invokes, or pause invocation and resume later from the same location. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2931,10 +2906,16 @@ func (c *Lambda) UpdateEventSourceMappingRequest(input *UpdateEventSourceMapping // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeResourceConflictException "ResourceConflictException" // The resource already exists. // +// * ErrCodeResourceInUseException "ResourceInUseException" +// The operation conflicts with the resource's availability. For example, you +// attempted to update an EventSoure Mapping in CREATING, or tried to delete +// a EventSoure mapping currently in the UPDATING state. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/lambda-2015-03-31/UpdateEventSourceMapping func (c *Lambda) UpdateEventSourceMapping(input *UpdateEventSourceMappingInput) (*EventSourceMappingConfiguration, error) { req, out := c.UpdateEventSourceMappingRequest(input) @@ -2961,8 +2942,8 @@ const opUpdateFunctionCode = "UpdateFunctionCode" // UpdateFunctionCodeRequest generates a "aws/request.Request" representing the // client's request for the UpdateFunctionCode operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3032,6 +3013,7 @@ func (c *Lambda) UpdateFunctionCodeRequest(input *UpdateFunctionCodeInput) (req // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeCodeStorageExceededException "CodeStorageExceededException" // You have exceeded your maximum total code size per account. Limits (http://docs.aws.amazon.com/lambda/latest/dg/limits.html) @@ -3067,8 +3049,8 @@ const opUpdateFunctionConfiguration = "UpdateFunctionConfiguration" // UpdateFunctionConfigurationRequest generates a "aws/request.Request" representing the // client's request for the UpdateFunctionConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3140,6 +3122,7 @@ func (c *Lambda) UpdateFunctionConfigurationRequest(input *UpdateFunctionConfigu // API, that AWS Lambda is unable to assume you will get this exception. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Request throughput limit exceeded // // * ErrCodeResourceConflictException "ResourceConflictException" // The resource already exists. @@ -3172,7 +3155,8 @@ func (c *Lambda) UpdateFunctionConfigurationWithContext(ctx aws.Context, input * } // Provides limits of code size and concurrency associated with the current -// account and region. +// account and region. 
For more information or to request a limit increase for +// concurrent executions, see Lambda Limits (http://docs.aws.amazon.com/lambda/latest/dg/limits.html). type AccountLimit struct { _ struct{} `type:"structure"` @@ -3185,10 +3169,8 @@ type AccountLimit struct { // larger files. Default limit is 50 MB. CodeSizeZipped *int64 `type:"long"` - // Number of simultaneous executions of your function per region. For more information - // or to request a limit increase for concurrent executions, see Lambda Function - // Concurrent Executions (http://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html). - // The default limit is 1000. + // Number of simultaneous executions of your function per region. The default + // limit is 1000. ConcurrentExecutions *int64 `type:"integer"` // Maximum size, in bytes, of a code package you can upload per region. The @@ -3196,7 +3178,7 @@ type AccountLimit struct { TotalCodeSize *int64 `type:"long"` // The number of concurrent executions available to functions that do not have - // concurrency limits set. For more information, see concurrent-executions. + // concurrency limits set. For more information, see Managing Concurrency (http://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html). UnreservedConcurrentExecutions *int64 `type:"integer"` } @@ -3289,52 +3271,40 @@ type AddPermissionInput struct { // This is currently only used for Alexa Smart Home functions. EventSourceToken *string `type:"string"` - // Name of the Lambda function whose resource policy you are updating by adding - // a new permission. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. // - // You can specify a function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` - // The principal who is getting this permission. It can be Amazon S3 service - // Principal (s3.amazonaws.com) if you want Amazon S3 to invoke the function, - // an AWS account ID if you are granting cross-account permission, or any valid - // AWS service principal such as sns.amazonaws.com. For example, you might want - // to allow a custom application in another AWS account to push events to AWS - // Lambda by invoking your function. + // The principal who is getting this permission. The principal can be an AWS + // service (e.g. s3.amazonaws.com or sns.amazonaws.com) for service triggers, + // or an account ID for cross-account access. If you specify a service as a + // principal, use the SourceArn parameter to limit who can invoke the function + // through that service. 
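// A brief sketch, with an assumed region, of reading the AccountLimit values
// described above via the GetAccountSettings operation.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	out, err := svc.GetAccountSettings(&lambda.GetAccountSettingsInput{})
	if err != nil {
		fmt.Println("GetAccountSettings failed:", err)
		return
	}
	limits := out.AccountLimit
	fmt.Println("concurrent executions:", aws.Int64Value(limits.ConcurrentExecutions))
	fmt.Println("unreserved concurrency:", aws.Int64Value(limits.UnreservedConcurrentExecutions))
	fmt.Println("total code size (bytes):", aws.Int64Value(limits.TotalCodeSize))
}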
// // Principal is a required field Principal *string `type:"string" required:"true"` - // You can use this optional query parameter to describe a qualified ARN using - // a function version or an alias name. The permission will then apply to the - // specific qualified ARN. For example, if you specify function version 2 as - // the qualifier, then permission applies only when request is made using qualified - // function ARN: - // - // arn:aws:lambda:aws-region:acct-id:function:function-name:2 - // - // If you specify an alias name, for example PROD, then the permission is valid - // only for requests made using the alias ARN: - // - // arn:aws:lambda:aws-region:acct-id:function:function-name:PROD - // - // If the qualifier is not specified, the permission is valid only when requests - // is made using unqualified function ARN. - // - // arn:aws:lambda:aws-region:acct-id:function:function-name + // Specify a version or alias to add permissions to a published version of the + // function. Qualifier *string `location:"querystring" locationName:"Qualifier" min:"1" type:"string"` // An optional value you can use to ensure you are updating the latest update // of the function version or alias. If the RevisionID you pass doesn't match // the latest RevisionId of the function or alias, it will fail with an error // message, advising you to retrieve the latest function version or alias RevisionID - // using either or . + // using either GetFunction or GetAlias RevisionId *string `type:"string"` // This parameter is used for S3 and SES. The AWS account ID (without a hyphen) @@ -3346,14 +3316,11 @@ type AddPermissionInput struct { // you don't specify the SourceArn) owned by a specific account. SourceAccount *string `type:"string"` - // This is optional; however, when granting permission to invoke your function, - // you should specify this field with the Amazon Resource Name (ARN) as its - // value. This ensures that only events generated from the specified source - // can invoke the function. + // The Amazon Resource Name of the invoker. // - // If you add a permission without providing the source ARN, any AWS account - // that creates a mapping to your function ARN can send events to invoke your - // Lambda function. + // If you add a permission to a service principal without providing the source + // ARN, any AWS account that creates a mapping to your function ARN can invoke + // your Lambda function. SourceArn *string `type:"string"` // A unique statement identifier. @@ -3504,8 +3471,7 @@ type AliasConfiguration struct { RevisionId *string `type:"string"` // Specifies an additional function versions the alias points to, allowing you - // to dictate what percentage of traffic will invoke each version. For more - // information, see lambda-traffic-shifting-using-aliases. + // to dictate what percentage of traffic will invoke each version. RoutingConfig *AliasRoutingConfiguration `type:"structure"` } @@ -3555,14 +3521,13 @@ func (s *AliasConfiguration) SetRoutingConfig(v *AliasRoutingConfiguration) *Ali return s } -// The parent object that implements what percentage of traffic will invoke -// each function version. For more information, see lambda-traffic-shifting-using-aliases. +// The alias's traffic shifting (http://docs.aws.amazon.com/lambda/latest/dg/lambda-traffic-shifting-using-aliases.html) +// configuration. type AliasRoutingConfiguration struct { _ struct{} `type:"structure"` - // Set this value to dictate what percentage of traffic will invoke the updated - // function version. 
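// A hedged sketch of AddPermission as documented above: granting the S3 service
// principal permission to invoke a function, scoped with SourceArn. The bucket,
// statement ID, and the Action value ("lambda:InvokeFunction") are illustrative
// assumptions not taken from this hunk.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	out, err := svc.AddPermission(&lambda.AddPermissionInput{
		FunctionName: aws.String("MyFunction"),
		StatementId:  aws.String("s3-invoke"),                      // a unique statement identifier
		Action:       aws.String("lambda:InvokeFunction"),          // assumed action name
		Principal:    aws.String("s3.amazonaws.com"),               // service principal
		SourceArn:    aws.String("arn:aws:s3:::my-example-bucket"), // limit who can invoke through S3
	})
	if err != nil {
		fmt.Println("AddPermission failed:", err)
		return
	}
	fmt.Println("policy statement:", aws.StringValue(out.Statement))
}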
If set to an empty string, 100 percent of traffic will - // invoke function-version. For more information, see lambda-traffic-shifting-using-aliases. + // The name of the second alias, and the percentage of traffic that is routed + // to it. AdditionalVersionWeights map[string]*float64 `type:"map"` } @@ -3588,9 +3553,18 @@ type CreateAliasInput struct { // Description of the alias. Description *string `type:"string"` - // Name of the Lambda function for which you want to create an alias. Note that - // the length constraint applies only to the ARN. If you specify only the function - // name, it is limited to 64 characters in length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -3607,7 +3581,7 @@ type CreateAliasInput struct { // Specifies an additional version your alias can point to, allowing you to // dictate what percentage of traffic will invoke each version. For more information, - // see lambda-traffic-shifting-using-aliases. + // see Traffic Shifting Using Aliases (http://docs.aws.amazon.com/lambda/latest/dg/lambda-traffic-shifting-using-aliases.html). RoutingConfig *AliasRoutingConfiguration `type:"structure"` } @@ -3682,57 +3656,55 @@ func (s *CreateAliasInput) SetRoutingConfig(v *AliasRoutingConfiguration) *Creat type CreateEventSourceMappingInput struct { _ struct{} `type:"structure"` - // The largest number of records that AWS Lambda will retrieve from your event - // source at the time of invoking your function. Your function receives an event - // with all the retrieved records. The default is 100 records. + // The maximum number of items to retrieve in a single batch. + // + // * Amazon Kinesis - Default 100. Max 10,000. + // + // * Amazon DynamoDB Streams - Default 100. Max 1,000. + // + // * Amazon Simple Queue Service - Default 10. Max 10. BatchSize *int64 `min:"1" type:"integer"` - // Indicates whether AWS Lambda should begin polling the event source. By default, - // Enabled is true. + // Disables the event source mapping to pause polling and invocation. Enabled *bool `type:"boolean"` - // The Amazon Resource Name (ARN) of the Amazon Kinesis or the Amazon DynamoDB - // stream that is the event source. Any record added to this stream could cause - // AWS Lambda to invoke your Lambda function, it depends on the BatchSize. AWS - // Lambda POSTs the Amazon Kinesis event, containing records, to your Lambda - // function as JSON. + // The Amazon Resource Name (ARN) of the event source. + // + // * Amazon Kinesis - The ARN of the data stream or a stream consumer. + // + // * Amazon DynamoDB Streams - The ARN of the stream. + // + // * Amazon Simple Queue Service - The ARN of the queue. // // EventSourceArn is a required field EventSourceArn *string `type:"string" required:"true"` - // The Lambda function to invoke when AWS Lambda detects an event on the stream. + // The name of the Lambda function. + // + // Name formats + // + // * Function name - MyFunction. 
// - // You can specify the function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. // - // If you are using versioning, you can also provide a qualified function ARN - // (ARN that is qualified with function version or alias name as suffix). For - // more information about versioning, see AWS Lambda Function Versioning and - // Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html) + // * Version or Alias ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD. // - // AWS Lambda also allows you to specify only the function name with the account - // ID qualifier (for example, account-id:Thumbnail). + // * Partial ARN - 123456789012:function:MyFunction. // - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // The length constraint applies only to the full ARN. If you specify only the + // function name, it's limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `min:"1" type:"string" required:"true"` - // The position in the stream where AWS Lambda should start reading. Valid only - // for Kinesis streams. For more information, see ShardIteratorType (http://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetShardIterator.html#Kinesis-GetShardIterator-request-ShardIteratorType) - // in the Amazon Kinesis API Reference. - // - // StartingPosition is a required field - StartingPosition *string `type:"string" required:"true" enum:"EventSourcePosition"` + // The position in a stream from which to start reading. Required for Amazon + // Kinesis and Amazon DynamoDB Streams sources. AT_TIMESTAMP is only supported + // for Amazon Kinesis streams. + StartingPosition *string `type:"string" enum:"EventSourcePosition"` - // The timestamp of the data record from which to start reading. Used with shard - // iterator type (http://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetShardIterator.html#Kinesis-GetShardIterator-request-ShardIteratorType) - // AT_TIMESTAMP. If a record with this exact timestamp does not exist, the iterator - // returned is for the next (later) record. If the timestamp is older than the - // current trim horizon, the iterator returned is for the oldest untrimmed data - // record (TRIM_HORIZON). Valid only for Kinesis streams. - StartingPositionTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + // With StartingPosition set to AT_TIMESTAMP, the Unix time in seconds from + // which to start reading. + StartingPositionTimestamp *time.Time `type:"timestamp"` } // String returns the string representation @@ -3760,9 +3732,6 @@ func (s *CreateEventSourceMappingInput) Validate() error { if s.FunctionName != nil && len(*s.FunctionName) < 1 { invalidParams.Add(request.NewErrParamMinLen("FunctionName", 1)) } - if s.StartingPosition == nil { - invalidParams.Add(request.NewErrParamRequired("StartingPosition")) - } if invalidParams.Len() > 0 { return invalidParams @@ -3809,88 +3778,77 @@ func (s *CreateEventSourceMappingInput) SetStartingPositionTimestamp(v time.Time type CreateFunctionInput struct { _ struct{} `type:"structure"` - // The code for the Lambda function. + // The code for the function. 
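// A sketch, with a placeholder queue ARN, of CreateEventSourceMapping for an
// Amazon SQS source. StartingPosition is omitted because, per the documentation
// above, it is only required for Kinesis and DynamoDB Streams sources.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	out, err := svc.CreateEventSourceMapping(&lambda.CreateEventSourceMappingInput{
		EventSourceArn: aws.String("arn:aws:sqs:us-west-2:123456789012:my-queue"), // placeholder queue
		FunctionName:   aws.String("MyFunction"),
		BatchSize:      aws.Int64(10), // SQS default and maximum, per the docs above
		Enabled:        aws.Bool(true),
	})
	if err != nil {
		fmt.Println("CreateEventSourceMapping failed:", err)
		return
	}
	fmt.Println("mapping UUID:", aws.StringValue(out.UUID), "state:", aws.StringValue(out.State))
}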
// // Code is a required field Code *FunctionCode `type:"structure" required:"true"` - // The parent object that contains the target ARN (Amazon Resource Name) of - // an Amazon SQS queue or Amazon SNS topic. + // A dead letter queue configuration that specifies the queue or topic where + // Lambda sends asynchronous events when they fail processing. For more information, + // see Dead Letter Queues (http://docs.aws.amazon.com/lambda/latest/dg/dlq.html). DeadLetterConfig *DeadLetterConfig `type:"structure"` - // A short, user-defined function description. Lambda does not use this value. - // Assign a meaningful description as you see fit. + // A description of the function. Description *string `type:"string"` - // The parent object that contains your environment's configuration settings. + // Environment variables that are accessible from function code during execution. Environment *Environment `type:"structure"` - // The name you want to assign to the function you are uploading. The function - // names appear in the console and are returned in the ListFunctions API. Function - // names are used to specify functions to other AWS Lambda API operations, such - // as Invoke. Note that the length constraint applies only to the ARN. If you - // specify only the function name, it is limited to 64 characters in length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `min:"1" type:"string" required:"true"` - // The function within your code that Lambda calls to begin execution. For Node.js, - // it is the module-name.export value in your function. For Java, it can be - // package.class-name::handler or package.class-name. For more information, - // see Lambda Function Handler (Java) (http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model-handler-types.html). + // The name of the method within your code that Lambda calls to execute your + // function. For more information, see Programming Model (http://docs.aws.amazon.com/lambda/latest/dg/programming-model-v2.html). // // Handler is a required field Handler *string `type:"string" required:"true"` - // The Amazon Resource Name (ARN) of the KMS key used to encrypt your function's - // environment variables. If not provided, AWS Lambda will use a default service - // key. + // The ARN of the KMS key used to encrypt your function's environment variables. + // If not provided, AWS Lambda will use a default service key. KMSKeyArn *string `type:"string"` - // The amount of memory, in MB, your Lambda function is given. Lambda uses this - // memory size to infer the amount of CPU and memory allocated to your function. - // Your function use-case determines your CPU and memory requirements. For example, - // a database operation might need less memory compared to an image processing - // function. The default value is 128 MB. The value must be a multiple of 64 - // MB. + // The amount of memory that your function has access to. Increasing the function's + // memory also increases it's CPU allocation. The default value is 128 MB. The + // value must be a multiple of 64 MB. 
MemorySize *int64 `min:"128" type:"integer"` - // This boolean parameter can be used to request AWS Lambda to create the Lambda - // function and publish a version as an atomic operation. + // Set to true to publish the first version of the function during creation. Publish *bool `type:"boolean"` - // The Amazon Resource Name (ARN) of the IAM role that Lambda assumes when it - // executes your function to access any other Amazon Web Services (AWS) resources. - // For more information, see AWS Lambda: How it Works (http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction.html). + // The Amazon Resource Name (ARN) of the function's execution role (http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html#lambda-intro-execution-role). // // Role is a required field Role *string `type:"string" required:"true"` - // The runtime environment for the Lambda function you are uploading. - // - // To use the Python runtime v3.6, set the value to "python3.6". To use the - // Python runtime v2.7, set the value to "python2.7". To use the Node.js runtime - // v6.10, set the value to "nodejs6.10". To use the Node.js runtime v4.3, set - // the value to "nodejs4.3". - // - // Node v0.10.42 is currently marked as deprecated. You must migrate existing - // functions to the newer Node.js runtime versions available on AWS Lambda (nodejs4.3 - // or nodejs6.10) as soon as possible. Failure to do so will result in an invalid - // parameter error being returned. Note that you will have to follow this procedure - // for each region that contains functions written in the Node v0.10.42 runtime. + // The runtime version for the function. // // Runtime is a required field Runtime *string `type:"string" required:"true" enum:"Runtime"` - // The list of tags (key-value pairs) assigned to the new function. + // The list of tags (key-value pairs) assigned to the new function. For more + // information, see Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) + // in the AWS Lambda Developer Guide. Tags map[string]*string `type:"map"` - // The function execution time at which Lambda should terminate the function. - // Because the execution time has cost implications, we recommend you set this - // value based on your expected execution time. The default is 3 seconds. + // The amount of time that Lambda allows a function to run before terminating + // it. The default is 3 seconds. The maximum allowed value is 900 seconds. Timeout *int64 `min:"1" type:"integer"` - // The parent object that contains your function's tracing settings. + // Set Mode to Active to sample and trace a subset of incoming requests with + // AWS X-Ray. TracingConfig *TracingConfig `type:"structure"` // If your Lambda function accesses resources in a VPC, you provide this parameter @@ -4039,13 +3997,12 @@ func (s *CreateFunctionInput) SetVpcConfig(v *VpcConfig) *CreateFunctionInput { return s } -// The parent object that contains the target ARN (Amazon Resource Name) of -// an Amazon SQS queue or Amazon SNS topic. +// The dead letter queue (http://docs.aws.amazon.com/lambda/latest/dg/dlq.html) +// for failed asynchronous invocations. type DeadLetterConfig struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of an Amazon SQS queue or Amazon SNS topic - // you specify as your Dead Letter Queue (DLQ). + // The Amazon Resource Name (ARN) of an Amazon SQS queue or Amazon SNS topic. 
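// A hedged sketch of CreateFunction using fields documented in this hunk: code
// from S3, an execution role, a dead letter queue, and memory/timeout settings.
// The bucket, key, role ARN, runtime, and handler values are illustrative
// assumptions.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	out, err := svc.CreateFunction(&lambda.CreateFunctionInput{
		FunctionName: aws.String("MyFunction"),
		Role:         aws.String("arn:aws:iam::123456789012:role/lambda-exec"), // execution role (assumed)
		Runtime:      aws.String("go1.x"),                                      // assumed runtime value
		Handler:      aws.String("main"),
		Code: &lambda.FunctionCode{
			S3Bucket: aws.String("my-deploy-bucket"),
			S3Key:    aws.String("my-function.zip"),
		},
		MemorySize: aws.Int64(128), // must be a multiple of 64 MB
		Timeout:    aws.Int64(30),  // seconds
		DeadLetterConfig: &lambda.DeadLetterConfig{
			TargetArn: aws.String("arn:aws:sqs:us-west-2:123456789012:my-dlq"),
		},
		Publish: aws.Bool(true), // publish version 1 at creation
	})
	if err != nil {
		fmt.Println("CreateFunction failed:", err)
		return
	}
	fmt.Println("created:", aws.StringValue(out.FunctionArn))
}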
TargetArn *string `type:"string"` } @@ -4068,10 +4025,18 @@ func (s *DeadLetterConfig) SetTargetArn(v string) *DeadLetterConfig { type DeleteAliasInput struct { _ struct{} `type:"structure"` - // The Lambda function name for which the alias is created. Deleting an alias - // does not delete the function version to which it is pointing. Note that the - // length constraint applies only to the ARN. If you specify only the function - // name, it is limited to 64 characters in length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -4143,7 +4108,7 @@ func (s DeleteAliasOutput) GoString() string { type DeleteEventSourceMappingInput struct { _ struct{} `type:"structure"` - // The event source mapping ID. + // The identifier of the event source mapping. // // UUID is a required field UUID *string `location:"uri" locationName:"UUID" type:"string" required:"true"` @@ -4181,8 +4146,18 @@ func (s *DeleteEventSourceMappingInput) SetUUID(v string) *DeleteEventSourceMapp type DeleteFunctionConcurrencyInput struct { _ struct{} `type:"structure"` - // The name of the function you are removing concurrent execution limits from. - // For more information, see concurrent-executions. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -4237,33 +4212,24 @@ func (s DeleteFunctionConcurrencyOutput) GoString() string { type DeleteFunctionInput struct { _ struct{} `type:"structure"` - // The Lambda function to delete. + // The name of the lambda function. + // + // Name formats // - // You can specify the function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // If you are using versioning, you can also provide a qualified function ARN - // (ARN that is qualified with function version or alias name as suffix). AWS - // Lambda also allows you to specify only the function name with the account - // ID qualifier (for example, account-id:Thumbnail). Note that the length constraint - // applies only to the ARN. If you specify only the function name, it is limited - // to 64 characters in length. + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. 
// // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` - // Using this optional parameter you can specify a function version (but not - // the $LATEST version) to direct AWS Lambda to delete a specific function version. - // If the function version has one or more aliases pointing to it, you will - // get an error because you cannot have aliases pointing to it. You can delete - // any function version but not the $LATEST, that is, you cannot specify $LATEST - // as the value of this parameter. The $LATEST version can be deleted only when - // you want to delete all the function versions and aliases. - // - // You can only specify a function version, not an alias name, using this parameter. - // You cannot delete a function version using its alias. - // - // If you don't specify this parameter, AWS Lambda will delete the function, - // including all of its versions and aliases. + // Specify a version to delete. You cannot delete a version that is referenced + // by an alias. Qualifier *string `location:"querystring" locationName:"Qualifier" min:"1" type:"string"` } @@ -4322,11 +4288,11 @@ func (s DeleteFunctionOutput) GoString() string { return s.String() } -// The parent object that contains your environment's configuration settings. +// A function's environment variable settings. type Environment struct { _ struct{} `type:"structure"` - // The key-value pairs that represent your environment's configuration settings. + // Environment variable key-value pairs. Variables map[string]*string `type:"map"` } @@ -4346,15 +4312,14 @@ func (s *Environment) SetVariables(v map[string]*string) *Environment { return s } -// The parent object that contains error information associated with your configuration -// settings. +// Error messages for environment variables that could not be applied. type EnvironmentError struct { _ struct{} `type:"structure"` - // The error code returned by the environment error object. + // The error code. ErrorCode *string `type:"string"` - // The message returned by the environment error object. + // The error message. Message *string `type:"string"` } @@ -4380,17 +4345,14 @@ func (s *EnvironmentError) SetMessage(v string) *EnvironmentError { return s } -// The parent object returned that contains your environment's configuration -// settings or any error information associated with your configuration settings. +// The results of a configuration update that applied environment variables. type EnvironmentResponse struct { _ struct{} `type:"structure"` - // The parent object that contains error information associated with your configuration - // settings. + // Error messages for environment variables that could not be applied. Error *EnvironmentError `type:"structure"` - // The key-value pairs returned that represent your environment's configuration - // settings or error information. + // Environment variable key-value pairs. Variables map[string]*string `type:"map"` } @@ -4416,37 +4378,34 @@ func (s *EnvironmentResponse) SetVariables(v map[string]*string) *EnvironmentRes return s } -// Describes mapping between an Amazon Kinesis stream and a Lambda function. +// A mapping between an AWS resource and an AWS Lambda function. See CreateEventSourceMapping +// for details. type EventSourceMappingConfiguration struct { _ struct{} `type:"structure"` - // The largest number of records that AWS Lambda will retrieve from your event - // source at the time of invoking your function. 
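// A short sketch of DeleteFunction as documented above, deleting a single
// published version via Qualifier (a version referenced by an alias cannot be
// deleted). The function name and version number are placeholders.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	// Omitting Qualifier deletes the function along with all of its versions and aliases.
	if _, err := svc.DeleteFunction(&lambda.DeleteFunctionInput{
		FunctionName: aws.String("MyFunction"),
		Qualifier:    aws.String("2"), // delete only version 2
	}); err != nil {
		fmt.Println("DeleteFunction failed:", err)
		return
	}
	fmt.Println("version deleted")
}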
Your function receives an event - // with all the retrieved records. + // The maximum number of items to retrieve in a single batch. BatchSize *int64 `min:"1" type:"integer"` - // The Amazon Resource Name (ARN) of the Amazon Kinesis stream that is the source - // of events. + // The Amazon Resource Name (ARN) of the event source. EventSourceArn *string `type:"string"` - // The Lambda function to invoke when AWS Lambda detects an event on the stream. + // The ARN of the Lambda function. FunctionArn *string `type:"string"` - // The UTC time string indicating the last time the event mapping was updated. - LastModified *time.Time `type:"timestamp" timestampFormat:"unix"` + // The date that the event source mapping was last updated, in Unix time seconds. + LastModified *time.Time `type:"timestamp"` // The result of the last AWS Lambda invocation of your Lambda function. LastProcessingResult *string `type:"string"` - // The state of the event source mapping. It can be Creating, Enabled, Disabled, - // Enabling, Disabling, Updating, or Deleting. + // The state of the event source mapping. It can be one of the following: Creating, + // Enabling, Enabled, Disabling, Disabled, Updating, or Deleting. State *string `type:"string"` - // The reason the event source mapping is in its current state. It is either - // user-requested or an AWS Lambda-initiated state transition. + // The cause of the last state change, either User initiated or Lambda initiated. StateTransitionReason *string `type:"string"` - // The AWS Lambda assigned opaque identifier for the mapping. + // The identifier of the event source mapping. UUID *string `type:"string"` } @@ -4508,27 +4467,22 @@ func (s *EventSourceMappingConfiguration) SetUUID(v string) *EventSourceMappingC return s } -// The code for the Lambda function. +// The code for the Lambda function. You can specify either an S3 location, +// or upload a deployment package directly. type FunctionCode struct { _ struct{} `type:"structure"` - // Amazon S3 bucket name where the .zip file containing your deployment package - // is stored. This bucket must reside in the same AWS region where you are creating - // the Lambda function. + // An Amazon S3 bucket in the same region as your function. S3Bucket *string `min:"3" type:"string"` - // The Amazon S3 object (the deployment package) key name you want to upload. + // The Amazon S3 key of the deployment package. S3Key *string `min:"1" type:"string"` - // The Amazon S3 object (the deployment package) version you want to upload. + // For versioned objects, the version of the deployment package object to use. S3ObjectVersion *string `min:"1" type:"string"` - // The contents of your zip file containing your deployment package. If you - // are using the web API directly, the contents of the zip file must be base64-encoded. - // If you are using the AWS SDKs or the AWS CLI, the SDKs or CLI will do the - // encoding for you. For more information about creating a .zip file, see Execution - // Permissions (http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html#lambda-intro-execution-role.html) - // in the AWS Lambda Developer Guide. + // The base64-encoded contents of your zip file containing your deployment package. + // AWS SDK and AWS CLI clients handle the encoding for you. // // ZipFile is automatically base64 encoded/decoded by the SDK. 
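// A sketch of uploading a new deployment package with UpdateFunctionCode. As
// noted above for FunctionCode, the SDK base64-encodes the ZipFile contents for
// you; the file path and function name are assumptions.
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	pkg, err := ioutil.ReadFile("function.zip") // local deployment package (assumed path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}

	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	out, err := svc.UpdateFunctionCode(&lambda.UpdateFunctionCodeInput{
		FunctionName: aws.String("MyFunction"),
		ZipFile:      pkg,            // raw bytes; the SDK handles base64 encoding
		Publish:      aws.Bool(true), // publish a new version with the new code
	})
	if err != nil {
		fmt.Println("UpdateFunctionCode failed:", err)
		return
	}
	fmt.Println("new code SHA256:", aws.StringValue(out.CodeSha256))
}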
ZipFile []byte `type:"blob"` @@ -4621,77 +4575,68 @@ func (s *FunctionCodeLocation) SetRepositoryType(v string) *FunctionCodeLocation return s } -// A complex type that describes function metadata. +// A Lambda function's configuration settings. type FunctionConfiguration struct { _ struct{} `type:"structure"` - // It is the SHA256 hash of your function deployment package. + // The SHA256 hash of the function's deployment package. CodeSha256 *string `type:"string"` - // The size, in bytes, of the function .zip file you uploaded. + // The size of the function's deployment package in bytes. CodeSize *int64 `type:"long"` - // The parent object that contains the target ARN (Amazon Resource Name) of - // an Amazon SQS queue or Amazon SNS topic. + // The function's dead letter queue. DeadLetterConfig *DeadLetterConfig `type:"structure"` - // The user-provided description. + // The function's description. Description *string `type:"string"` - // The parent object that contains your environment's configuration settings. + // The function's environment variables. Environment *EnvironmentResponse `type:"structure"` - // The Amazon Resource Name (ARN) assigned to the function. + // The function's Amazon Resource Name. FunctionArn *string `type:"string"` - // The name of the function. Note that the length constraint applies only to - // the ARN. If you specify only the function name, it is limited to 64 characters - // in length. + // The name of the function. FunctionName *string `min:"1" type:"string"` // The function Lambda calls to begin executing your function. Handler *string `type:"string"` - // The Amazon Resource Name (ARN) of the KMS key used to encrypt your function's - // environment variables. If empty, it means you are using the AWS Lambda default - // service key. + // The KMS key used to encrypt the function's environment variables. Only returned + // if you've configured a customer managed CMK. KMSKeyArn *string `type:"string"` - // The time stamp of the last time you updated the function. The time stamp - // is conveyed as a string complying with ISO-8601 in this way YYYY-MM-DDThh:mm:ssTZD - // (e.g., 1997-07-16T19:20:30+01:00). For more information, see Date and Time - // Formats (https://www.w3.org/TR/NOTE-datetime). + // The date and time that the function was last updated, in ISO-8601 format + // (https://www.w3.org/TR/NOTE-datetime) (YYYY-MM-DDThh:mm:ssTZD). LastModified *string `type:"string"` - // Returns the ARN (Amazon Resource Name) of the master function. + // The ARN of the master function. MasterArn *string `type:"string"` - // The memory size, in MB, you configured for the function. Must be a multiple - // of 64 MB. + // The memory allocated to the function MemorySize *int64 `min:"128" type:"integer"` // Represents the latest updated revision of the function or alias. RevisionId *string `type:"string"` - // The Amazon Resource Name (ARN) of the IAM role that Lambda assumes when it - // executes your function to access any other Amazon Web Services (AWS) resources. + // The function's execution role. Role *string `type:"string"` // The runtime environment for the Lambda function. Runtime *string `type:"string" enum:"Runtime"` - // The function execution time at which Lambda should terminate the function. - // Because the execution time has cost implications, we recommend you set this - // value based on your expected execution time. The default is 3 seconds. + // The amount of time that Lambda allows a function to run before terminating + // it. 
Timeout *int64 `min:"1" type:"integer"` - // The parent object that contains your function's tracing settings. + // The function's AWS X-Ray tracing configuration. TracingConfig *TracingConfigResponse `type:"structure"` // The version of the Lambda function. Version *string `min:"1" type:"string"` - // VPC configuration associated with your Lambda function. + // The function's networking configuration. VpcConfig *VpcConfigResponse `type:"structure"` } @@ -4836,12 +4781,10 @@ func (s GetAccountSettingsInput) GoString() string { type GetAccountSettingsOutput struct { _ struct{} `type:"structure"` - // Provides limits of code size and concurrency associated with the current - // account and region. + // Limits related to concurrency and code storage. AccountLimit *AccountLimit `type:"structure"` - // Provides code size usage and function count associated with the current account - // and region. + // The number of functions and amount of storage in use. AccountUsage *AccountUsage `type:"structure"` } @@ -4870,11 +4813,18 @@ func (s *GetAccountSettingsOutput) SetAccountUsage(v *AccountUsage) *GetAccountS type GetAliasInput struct { _ struct{} `type:"structure"` - // Function name for which the alias is created. An alias is a subresource that - // exists only in the context of an existing Lambda function so you must specify - // the function name. Note that the length constraint applies only to the ARN. - // If you specify only the function name, it is limited to 64 characters in - // length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -4932,7 +4882,7 @@ func (s *GetAliasInput) SetName(v string) *GetAliasInput { type GetEventSourceMappingInput struct { _ struct{} `type:"structure"` - // The AWS Lambda assigned ID of the event source mapping. + // The identifier of the event source mapping. // // UUID is a required field UUID *string `location:"uri" locationName:"UUID" type:"string" required:"true"` @@ -4970,26 +4920,24 @@ func (s *GetEventSourceMappingInput) SetUUID(v string) *GetEventSourceMappingInp type GetFunctionConfigurationInput struct { _ struct{} `type:"structure"` - // The name of the Lambda function for which you want to retrieve the configuration - // information. + // The name of the lambda function. // - // You can specify a function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. 
// // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` - // Using this optional parameter you can specify a function version or an alias - // name. If you specify function version, the API uses qualified function ARN - // and returns information about the specific function version. If you specify - // an alias name, the API uses the alias ARN and returns information about the - // function version to which the alias points. - // - // If you don't specify this parameter, the API uses unqualified function ARN, - // and returns information about the $LATEST function version. + // Specify a version or alias to get details about a published version of the + // function. Qualifier *string `location:"querystring" locationName:"Qualifier" min:"1" type:"string"` } @@ -5037,24 +4985,24 @@ func (s *GetFunctionConfigurationInput) SetQualifier(v string) *GetFunctionConfi type GetFunctionInput struct { _ struct{} `type:"structure"` - // The Lambda function name. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. // - // You can specify a function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` - // Use this optional parameter to specify a function version or an alias name. - // If you specify function version, the API uses qualified function ARN for - // the request and returns information about the specific Lambda function version. - // If you specify an alias name, the API uses the alias ARN and returns information - // about the function version to which the alias points. If you don't provide - // this parameter, the API uses unqualified function ARN and returns information - // about the $LATEST version of the Lambda function. + // Specify a version or alias to get details about a published version of the + // function. Qualifier *string `location:"querystring" locationName:"Qualifier" min:"1" type:"string"` } @@ -5103,17 +5051,19 @@ func (s *GetFunctionInput) SetQualifier(v string) *GetFunctionInput { type GetFunctionOutput struct { _ struct{} `type:"structure"` - // The object for the Lambda function location. + // The function's code. Code *FunctionCodeLocation `type:"structure"` // The concurrent execution limit set for this function. For more information, - // see concurrent-executions. + // see Managing Concurrency (http://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html). Concurrency *PutFunctionConcurrencyOutput `type:"structure"` - // A complex type that describes function metadata. + // The function's configuration. Configuration *FunctionConfiguration `type:"structure"` - // Returns the list of tags associated with the function. 
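// A sketch of GetFunction with a Qualifier, reading back the configuration and
// tags described in GetFunctionOutput above. The alias name is a placeholder.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	out, err := svc.GetFunction(&lambda.GetFunctionInput{
		FunctionName: aws.String("MyFunction"),
		Qualifier:    aws.String("PROD"), // version or alias (placeholder)
	})
	if err != nil {
		fmt.Println("GetFunction failed:", err)
		return
	}
	cfg := out.Configuration
	fmt.Println("ARN:", aws.StringValue(cfg.FunctionArn))
	fmt.Println("runtime:", aws.StringValue(cfg.Runtime), "memory:", aws.Int64Value(cfg.MemorySize))
	for k, v := range out.Tags {
		fmt.Println("tag:", k, "=", aws.StringValue(v))
	}
}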
+ // Returns the list of tags associated with the function. For more information, + // see Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) + // in the AWS Lambda Developer Guide. Tags map[string]*string `type:"map"` } @@ -5154,16 +5104,18 @@ func (s *GetFunctionOutput) SetTags(v map[string]*string) *GetFunctionOutput { type GetPolicyInput struct { _ struct{} `type:"structure"` - // Function name whose resource policy you want to retrieve. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. // - // You can specify the function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // If you are using versioning, you can also provide a qualified function ARN - // (ARN that is qualified with function version or alias name as suffix). AWS - // Lambda also allows you to specify only the function name with the account - // ID qualifier (for example, account-id:Thumbnail). Note that the length constraint - // applies only to the ARN. If you specify only the function name, it is limited - // to 64 characters in length. + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -5250,12 +5202,22 @@ func (s *GetPolicyOutput) SetRevisionId(v string) *GetPolicyOutput { return s } +// Deprecated: InvokeAsyncInput has been deprecated type InvokeAsyncInput struct { _ struct{} `deprecated:"true" type:"structure" payload:"InvokeArgs"` - // The Lambda function name. Note that the length constraint applies only to - // the ARN. If you specify only the function name, it is limited to 64 characters - // in length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -5308,6 +5270,8 @@ func (s *InvokeAsyncInput) SetInvokeArgs(v io.ReadSeeker) *InvokeAsyncInput { } // Upon success, it returns empty response. Otherwise, throws an exception. +// +// Deprecated: InvokeAsyncOutput has been deprecated type InvokeAsyncOutput struct { _ struct{} `deprecated:"true" type:"structure"` @@ -5342,26 +5306,37 @@ type InvokeInput struct { // // The ClientContext JSON must be base64-encoded and has a maximum size of 3583 // bytes. + // + // ClientContext information is returned only if you use the synchronous (RequestResponse) + // invocation type. ClientContext *string `location:"header" locationName:"X-Amz-Client-Context" type:"string"` - // The Lambda function name. + // The name of the lambda function. 
// - // You can specify a function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` - // By default, the Invoke API assumes RequestResponse invocation type. You can - // optionally request asynchronous execution by specifying Event as the InvocationType. - // You can also use this parameter to request AWS Lambda to not execute the - // function but do some verification, such as if the caller is authorized to - // invoke the function and if the inputs are valid. You request this by specifying - // DryRun as the InvocationType. This is useful in a cross-account scenario - // when you want to verify access to a function without running it. + // Choose from the following options. + // + // * RequestResponse (default) - Invoke the function synchronously. Keep + // the connection open until the function returns a response or times out. + // + // * Event - Invoke the function asynchronously. Send events that fail multiple + // times to the function's dead-letter queue (if configured). + // + // * DryRun - Validate parameter values and verify that the user or role + // has permission to invoke the function. InvocationType *string `location:"header" locationName:"X-Amz-Invocation-Type" type:"string" enum:"InvocationType"` // You can set this optional parameter to Tail in the request only if you specify @@ -5373,14 +5348,7 @@ type InvokeInput struct { // JSON that you want to provide to your Lambda function as input. Payload []byte `type:"blob"` - // You can use this optional parameter to specify a Lambda function version - // or alias name. If you specify a function version, the API uses the qualified - // function ARN to invoke a specific Lambda function. If you specify an alias - // name, the API uses the alias ARN to invoke the Lambda function version to - // which the alias points. - // - // If you don't provide this parameter, then the API uses unqualified function - // ARN which results in invocation of the $LATEST version. + // Specify a version or alias to invoke a published version of the function. Qualifier *string `location:"querystring" locationName:"Qualifier" min:"1" type:"string"` } @@ -5454,7 +5422,8 @@ type InvokeOutput struct { _ struct{} `type:"structure" payload:"Payload"` // The function version that has been executed. This value is returned only - // if the invocation type is RequestResponse. For more information, see lambda-traffic-shifting-using-aliases. + // if the invocation type is RequestResponse. For more information, see Traffic + // Shifting Using Aliases (http://docs.aws.amazon.com/lambda/latest/dg/lambda-traffic-shifting-using-aliases.html). 
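// A sketch of a synchronous Invoke call using the InvocationType and Qualifier
// semantics documented above. The payload and alias are placeholder assumptions.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	out, err := svc.Invoke(&lambda.InvokeInput{
		FunctionName:   aws.String("MyFunction"),
		Qualifier:      aws.String("PROD"),            // invoke the version the alias points to
		InvocationType: aws.String("RequestResponse"), // wait for the function's response
		Payload:        []byte(`{"name":"example"}`),  // JSON input (placeholder)
	})
	if err != nil {
		fmt.Println("Invoke failed:", err)
		return
	}
	if out.FunctionError != nil {
		// An error occurred inside the function itself; the payload carries the error details.
		fmt.Println("function error:", aws.StringValue(out.FunctionError), string(out.Payload))
		return
	}
	fmt.Println("status:", aws.Int64Value(out.StatusCode))
	fmt.Println("executed version:", aws.StringValue(out.ExecutedVersion))
	fmt.Println("response:", string(out.Payload))
}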
ExecutedVersion *string `location:"header" locationName:"X-Amz-Executed-Version" min:"1" type:"string"` // Indicates whether an error occurred while executing the Lambda function. @@ -5528,9 +5497,18 @@ func (s *InvokeOutput) SetStatusCode(v int64) *InvokeOutput { type ListAliasesInput struct { _ struct{} `type:"structure"` - // Lambda function name for which the alias is created. Note that the length - // constraint applies only to the ARN. If you specify only the function name, - // it is limited to 64 characters in length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -5640,29 +5618,35 @@ func (s *ListAliasesOutput) SetNextMarker(v string) *ListAliasesOutput { type ListEventSourceMappingsInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the Amazon Kinesis stream. (This parameter - // is optional.) + // The Amazon Resource Name (ARN) of the event source. + // + // * Amazon Kinesis - The ARN of the data stream or a stream consumer. + // + // * Amazon DynamoDB Streams - The ARN of the stream. + // + // * Amazon Simple Queue Service - The ARN of the queue. EventSourceArn *string `location:"querystring" locationName:"EventSourceArn" type:"string"` // The name of the Lambda function. // - // You can specify the function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // If you are using versioning, you can also provide a qualified function ARN - // (ARN that is qualified with function version or alias name as suffix). AWS - // Lambda also allows you to specify only the function name with the account - // ID qualifier (for example, account-id:Thumbnail). Note that the length constraint - // applies only to the ARN. If you specify only the function name, it is limited - // to 64 characters in length. + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Version or Alias ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it's limited to 64 characters in length. FunctionName *string `location:"querystring" locationName:"FunctionName" min:"1" type:"string"` - // Optional string. An opaque pagination token returned from a previous ListEventSourceMappings - // operation. If present, specifies to continue the list from where the returning - // call left off. + // A pagination token returned by a previous call. Marker *string `location:"querystring" locationName:"Marker" type:"string"` - // Optional integer. Specifies the maximum number of event sources to return - // in response. This value must be greater than 0. + // The maximum number of event source mappings to return. 
MaxItems *int64 `location:"querystring" locationName:"MaxItems" min:"1" type:"integer"` } @@ -5716,14 +5700,14 @@ func (s *ListEventSourceMappingsInput) SetMaxItems(v int64) *ListEventSourceMapp return s } -// Contains a list of event sources (see EventSourceMappingConfiguration) type ListEventSourceMappingsOutput struct { _ struct{} `type:"structure"` - // An array of EventSourceMappingConfiguration objects. + // A list of event source mappings. EventSourceMappings []*EventSourceMappingConfiguration `type:"list"` - // A string, present if there are more event source mappings. + // A pagination token that's returned when the response doesn't contain all + // event source mappings. NextMarker *string `type:"string"` } @@ -5752,33 +5736,22 @@ func (s *ListEventSourceMappingsOutput) SetNextMarker(v string) *ListEventSource type ListFunctionsInput struct { _ struct{} `type:"structure"` - // Optional string. If not specified, only the unqualified functions ARNs (Amazon - // Resource Names) will be returned. - // - // Valid value: - // - // ALL: Will return all versions, including $LATEST which will have fully qualified - // ARNs (Amazon Resource Names). + // Set to ALL to list all published versions. If not specified, only the latest + // unpublished version ARN is returned. FunctionVersion *string `location:"querystring" locationName:"FunctionVersion" type:"string" enum:"FunctionVersion"` // Optional string. An opaque pagination token returned from a previous ListFunctions // operation. If present, indicates where to continue the listing. Marker *string `location:"querystring" locationName:"Marker" type:"string"` - // Optional string. If not specified, will return only regular function versions - // (i.e., non-replicated versions). - // - // Valid values are: - // - // The region from which the functions are replicated. For example, if you specify - // us-east-1, only functions replicated from that region will be returned. - // - // ALL: Will return all functions from any region. If specified, you also must - // specify a valid FunctionVersion parameter. + // Specify a region (e.g. us-east-2) to only list functions that were created + // in that region, or ALL to include functions replicated from any region. If + // specified, you also must specify the FunctionVersion. MasterRegion *string `location:"querystring" locationName:"MasterRegion" type:"string"` // Optional integer. Specifies the maximum number of AWS Lambda functions to - // return in response. This parameter value must be greater than 0. + // return in response. This parameter value must be greater than 0. The absolute + // maximum of AWS Lambda functions that can be returned is 50. MaxItems *int64 `location:"querystring" locationName:"MaxItems" min:"1" type:"integer"` } @@ -5829,7 +5802,7 @@ func (s *ListFunctionsInput) SetMaxItems(v int64) *ListFunctionsInput { return s } -// Contains a list of AWS Lambda function configurations (see FunctionConfiguration. +// A list of Lambda functions. type ListFunctionsOutput struct { _ struct{} `type:"structure"` @@ -5865,7 +5838,9 @@ func (s *ListFunctionsOutput) SetNextMarker(v string) *ListFunctionsOutput { type ListTagsInput struct { _ struct{} `type:"structure"` - // The ARN (Amazon Resource Name) of the function. + // The ARN (Amazon Resource Name) of the function. For more information, see + // Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) + // in the AWS Lambda Developer Guide. 
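// A sketch of paging through ListFunctions with the Marker/NextMarker fields
// documented above, requesting up to 50 functions per page.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	svc := lambda.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	input := &lambda.ListFunctionsInput{MaxItems: aws.Int64(50)}
	for {
		out, err := svc.ListFunctions(input)
		if err != nil {
			fmt.Println("ListFunctions failed:", err)
			return
		}
		for _, fn := range out.Functions {
			fmt.Println(aws.StringValue(fn.FunctionName), aws.StringValue(fn.Runtime))
		}
		if out.NextMarker == nil {
			break // no more pages
		}
		input.Marker = out.NextMarker // continue from where the previous call left off
	}
}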
// // Resource is a required field Resource *string `location:"uri" locationName:"ARN" type:"string" required:"true"` @@ -5903,7 +5878,9 @@ func (s *ListTagsInput) SetResource(v string) *ListTagsInput { type ListTagsOutput struct { _ struct{} `type:"structure"` - // The list of tags assigned to the function. + // The list of tags assigned to the function. For more information, see Tagging + // Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) + // in the AWS Lambda Developer Guide. Tags map[string]*string `type:"map"` } @@ -5926,12 +5903,18 @@ func (s *ListTagsOutput) SetTags(v map[string]*string) *ListTagsOutput { type ListVersionsByFunctionInput struct { _ struct{} `type:"structure"` - // Function name whose versions to list. You can specify a function name (for - // example, Thumbnail) or you can specify Amazon Resource Name (ARN) of the - // function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -6038,12 +6021,18 @@ type PublishVersionInput struct { // Lambda copies the description from the $LATEST version. Description *string `type:"string"` - // The Lambda function name. You can specify a function name (for example, Thumbnail) - // or you can specify Amazon Resource Name (ARN) of the function (for example, - // arn:aws:lambda:us-west-2:account-id:function:ThumbNail). AWS Lambda also - // allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -6051,8 +6040,8 @@ type PublishVersionInput struct { // An optional value you can use to ensure you are updating the latest update // of the function version or alias. If the RevisionID you pass doesn't match // the latest RevisionId of the function or alias, it will fail with an error - // message, advising you to retrieve the latest function version or alias RevisionID - // using either or . + // message, advising you retrieve the latest function version or alias RevisionID + // using either GetFunction or GetAlias. 
RevisionId *string `type:"string"` } @@ -6109,14 +6098,23 @@ func (s *PublishVersionInput) SetRevisionId(v string) *PublishVersionInput { type PutFunctionConcurrencyInput struct { _ struct{} `type:"structure"` - // The name of the function you are setting concurrent execution limits on. - // For more information, see concurrent-executions. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` - // The concurrent execution limit reserved for this function. For more information, - // see concurrent-executions. + // The concurrent execution limit reserved for this function. // // ReservedConcurrentExecutions is a required field ReservedConcurrentExecutions *int64 `type:"integer" required:"true"` @@ -6167,7 +6165,7 @@ type PutFunctionConcurrencyOutput struct { _ struct{} `type:"structure"` // The number of concurrent executions reserved for this function. For more - // information, see concurrent-executions. + // information, see Managing Concurrency (http://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html). ReservedConcurrentExecutions *int64 `type:"integer"` } @@ -6190,28 +6188,31 @@ func (s *PutFunctionConcurrencyOutput) SetReservedConcurrentExecutions(v int64) type RemovePermissionInput struct { _ struct{} `type:"structure"` - // Lambda function whose resource policy you want to remove a permission from. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. // - // You can specify a function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` - // You can specify this optional parameter to remove permission associated with - // a specific function version or function alias. If you don't specify this - // parameter, the API removes permission associated with the unqualified function - // ARN. + // Specify a version or alias to remove permissions from a published version + // of the function. Qualifier *string `location:"querystring" locationName:"Qualifier" min:"1" type:"string"` // An optional value you can use to ensure you are updating the latest update // of the function version or alias. 
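`PutFunctionConcurrency`, documented above, takes only the function name (in any of the listed formats) and the reserved limit. A minimal sketch under the same client assumptions; the name and the limit of 25 are placeholders:

```go
// Reserve 25 concurrent executions for this function; the remainder of
// the account-level pool stays available to other functions.
out, err := svc.PutFunctionConcurrency(&lambda.PutFunctionConcurrencyInput{
	FunctionName:                 aws.String("my-function"),
	ReservedConcurrentExecutions: aws.Int64(25),
})
if err != nil {
	log.Fatal(err)
}
fmt.Println("reserved:", aws.Int64Value(out.ReservedConcurrentExecutions))
```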
If the RevisionID you pass doesn't match // the latest RevisionId of the function or alias, it will fail with an error // message, advising you to retrieve the latest function version or alias RevisionID - // using either or . + // using either GetFunction or GetAlias. RevisionId *string `location:"querystring" locationName:"RevisionId" type:"string"` // Statement ID of the permission to remove. @@ -6296,12 +6297,16 @@ func (s RemovePermissionOutput) GoString() string { type TagResourceInput struct { _ struct{} `type:"structure"` - // The ARN (Amazon Resource Name) of the Lambda function. + // The ARN (Amazon Resource Name) of the Lambda function. For more information, + // see Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) + // in the AWS Lambda Developer Guide. // // Resource is a required field Resource *string `location:"uri" locationName:"ARN" type:"string" required:"true"` // The list of tags (key-value pairs) you are assigning to the Lambda function. + // For more information, see Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) + // in the AWS Lambda Developer Guide. // // Tags is a required field Tags map[string]*string `type:"map" required:"true"` @@ -6359,15 +6364,11 @@ func (s TagResourceOutput) GoString() string { return s.String() } -// The parent object that contains your function's tracing settings. +// The function's AWS X-Ray tracing configuration. type TracingConfig struct { _ struct{} `type:"structure"` - // Can be either PassThrough or Active. If PassThrough, Lambda will only trace - // the request from an upstream service if it contains a tracing header with - // "sampled=1". If Active, Lambda will respect any tracing header it receives - // from an upstream service. If no tracing header is received, Lambda will call - // X-Ray for a tracing decision. + // The tracing mode. Mode *string `type:"string" enum:"TracingMode"` } @@ -6387,11 +6388,11 @@ func (s *TracingConfig) SetMode(v string) *TracingConfig { return s } -// Parent object of the tracing information associated with your Lambda function. +// The function's AWS X-Ray tracing configuration. type TracingConfigResponse struct { _ struct{} `type:"structure"` - // The tracing mode associated with your Lambda function. + // The tracing mode. Mode *string `type:"string" enum:"TracingMode"` } @@ -6414,12 +6415,16 @@ func (s *TracingConfigResponse) SetMode(v string) *TracingConfigResponse { type UntagResourceInput struct { _ struct{} `type:"structure"` - // The ARN (Amazon Resource Name) of the function. + // The ARN (Amazon Resource Name) of the function. For more information, see + // Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) + // in the AWS Lambda Developer Guide. // // Resource is a required field Resource *string `location:"uri" locationName:"ARN" type:"string" required:"true"` - // The list of tag keys to be deleted from the function. + // The list of tag keys to be deleted from the function. For more information, + // see Tagging Lambda Functions (http://docs.aws.amazon.com/lambda/latest/dg/tagging.html) + // in the AWS Lambda Developer Guide. // // TagKeys is a required field TagKeys []*string `location:"querystring" locationName:"tagKeys" type:"list" required:"true"` @@ -6483,9 +6488,18 @@ type UpdateAliasInput struct { // You can change the description of the alias using this parameter. Description *string `type:"string"` - // The function name for which the alias is created. 
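`TagResource` and `UntagResource` both key off the full function ARN rather than a bare name. A short sketch under the same client assumptions; the ARN, account ID, and tag keys/values are placeholders:

```go
arn := "arn:aws:lambda:us-west-2:123456789012:function:my-function"

// Attach (or overwrite) tags on the function.
_, err := svc.TagResource(&lambda.TagResourceInput{
	Resource: aws.String(arn),
	Tags: map[string]*string{
		"Environment": aws.String("staging"),
		"Team":        aws.String("platform"),
	},
})
if err != nil {
	log.Fatal(err)
}

// Remove a tag by key only.
_, err = svc.UntagResource(&lambda.UntagResourceInput{
	Resource: aws.String(arn),
	TagKeys:  []*string{aws.String("Team")},
})
if err != nil {
	log.Fatal(err)
}
```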
Note that the length constraint - // applies only to the ARN. If you specify only the function name, it is limited - // to 64 characters in length. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -6502,13 +6516,13 @@ type UpdateAliasInput struct { // An optional value you can use to ensure you are updating the latest update // of the function version or alias. If the RevisionID you pass doesn't match // the latest RevisionId of the function or alias, it will fail with an error - // message, advising you to retrieve the latest function version or alias RevisionID - // using either or . + // message, advising you retrieve the latest function version or alias RevisionID + // using either GetFunction or GetAlias. RevisionId *string `type:"string"` // Specifies an additional version your alias can point to, allowing you to // dictate what percentage of traffic will invoke each version. For more information, - // see lambda-traffic-shifting-using-aliases. + // see Traffic Shifting Using Aliases (http://docs.aws.amazon.com/lambda/latest/dg/lambda-traffic-shifting-using-aliases.html). RoutingConfig *AliasRoutingConfiguration `type:"structure"` } @@ -6586,32 +6600,35 @@ func (s *UpdateAliasInput) SetRoutingConfig(v *AliasRoutingConfiguration) *Updat type UpdateEventSourceMappingInput struct { _ struct{} `type:"structure"` - // The maximum number of stream records that can be sent to your Lambda function - // for a single invocation. + // The maximum number of items to retrieve in a single batch. + // + // * Amazon Kinesis - Default 100. Max 10,000. + // + // * Amazon DynamoDB Streams - Default 100. Max 1,000. + // + // * Amazon Simple Queue Service - Default 10. Max 10. BatchSize *int64 `min:"1" type:"integer"` - // Specifies whether AWS Lambda should actively poll the stream or not. If disabled, - // AWS Lambda will not poll the stream. + // Disables the event source mapping to pause polling and invocation. Enabled *bool `type:"boolean"` - // The Lambda function to which you want the stream records sent. + // The name of the Lambda function. + // + // Name formats // - // You can specify a function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // * Function name - MyFunction. // - // If you are using versioning, you can also provide a qualified function ARN - // (ARN that is qualified with function version or alias name as suffix). For - // more information about versioning, see AWS Lambda Function Versioning and - // Aliases (http://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html) + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. // - // Note that the length constraint applies only to the ARN. 
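The `RoutingConfig` parameter above is what enables weighted (canary-style) traffic shifting between two published versions of the same alias. A hedged sketch, with the alias name, version numbers, and weight as placeholders:

```go
// Point the "live" alias at version 5, but send 10% of invocations to
// version 6 while it is being validated.
_, err := svc.UpdateAlias(&lambda.UpdateAliasInput{
	FunctionName:    aws.String("my-function"),
	Name:            aws.String("live"),
	FunctionVersion: aws.String("5"),
	RoutingConfig: &lambda.AliasRoutingConfiguration{
		AdditionalVersionWeights: map[string]*float64{
			"6": aws.Float64(0.10),
		},
	},
})
if err != nil {
	log.Fatal(err)
}
```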
If you specify only - // the function name, it is limited to 64 character in length. + // * Version or Alias ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it's limited to 64 characters in length. FunctionName *string `min:"1" type:"string"` - // The event source mapping identifier. + // The identifier of the event source mapping. // // UUID is a required field UUID *string `location:"uri" locationName:"UUID" type:"string" required:"true"` @@ -6681,13 +6698,18 @@ type UpdateFunctionCodeInput struct { // returned in the response. DryRun *bool `type:"boolean"` - // The existing Lambda function name whose code you want to replace. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. // - // You can specify a function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 characters in length. + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. + // + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -6700,7 +6722,7 @@ type UpdateFunctionCodeInput struct { // of the function version or alias. If the RevisionID you pass doesn't match // the latest RevisionId of the function or alias, it will fail with an error // message, advising you to retrieve the latest function version or alias RevisionID - // using either or . + // using either using using either GetFunction or GetAlias. RevisionId *string `type:"string"` // Amazon S3 bucket name where the .zip file containing your deployment package @@ -6718,8 +6740,7 @@ type UpdateFunctionCodeInput struct { // are using the web API directly, the contents of the zip file must be base64-encoded. // If you are using the AWS SDKs or the AWS CLI, the SDKs or CLI will do the // encoding for you. For more information about creating a .zip file, see Execution - // Permissions (http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html#lambda-intro-execution-role.html) - // in the AWS Lambda Developer Guide. + // Permissions (http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html#lambda-intro-execution-role.html). // // ZipFile is automatically base64 encoded/decoded by the SDK. ZipFile []byte `type:"blob"` @@ -6811,8 +6832,9 @@ func (s *UpdateFunctionCodeInput) SetZipFile(v []byte) *UpdateFunctionCodeInput type UpdateFunctionConfigurationInput struct { _ struct{} `type:"structure"` - // The parent object that contains the target ARN (Amazon Resource Name) of - // an Amazon SQS queue or Amazon SNS topic. + // A dead letter queue configuration that specifies the queue or topic where + // Lambda sends asynchronous events when they fail processing. For more information, + // see Dead Letter Queues (http://docs.aws.amazon.com/lambda/latest/dg/dlq.html). 
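`UpdateFunctionCode` accepts either an S3 location or an inline `ZipFile`; as the comment above notes, the Go SDK base64-encodes `ZipFile` for you. A minimal sketch, reusing the earlier client plus an `io/ioutil` import; the file path and function name are placeholders:

```go
zip, err := ioutil.ReadFile("deployment.zip")
if err != nil {
	log.Fatal(err)
}

// Upload the new package and publish it as a new version in one call.
out, err := svc.UpdateFunctionCode(&lambda.UpdateFunctionCodeInput{
	FunctionName: aws.String("my-function"),
	ZipFile:      zip,
	Publish:      aws.Bool(true),
})
if err != nil {
	log.Fatal(err)
}
fmt.Println("new code sha256:", aws.StringValue(out.CodeSha256))
```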
DeadLetterConfig *DeadLetterConfig `type:"structure"` // A short user-defined function description. AWS Lambda does not use this value. @@ -6822,13 +6844,18 @@ type UpdateFunctionConfigurationInput struct { // The parent object that contains your environment's configuration settings. Environment *Environment `type:"structure"` - // The name of the Lambda function. + // The name of the lambda function. + // + // Name formats + // + // * Function name - MyFunction. + // + // * Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction. // - // You can specify a function name (for example, Thumbnail) or you can specify - // Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). - // AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). - // Note that the length constraint applies only to the ARN. If you specify only - // the function name, it is limited to 64 character in length. + // * Partial ARN - 123456789012:function:MyFunction. + // + // The length constraint applies only to the full ARN. If you specify only the + // function name, it is limited to 64 characters in length. // // FunctionName is a required field FunctionName *string `location:"uri" locationName:"FunctionName" min:"1" type:"string" required:"true"` @@ -6854,40 +6881,26 @@ type UpdateFunctionConfigurationInput struct { // of the function version or alias. If the RevisionID you pass doesn't match // the latest RevisionId of the function or alias, it will fail with an error // message, advising you to retrieve the latest function version or alias RevisionID - // using either or . + // using either GetFunction or GetAlias. RevisionId *string `type:"string"` // The Amazon Resource Name (ARN) of the IAM role that Lambda will assume when // it executes your function. Role *string `type:"string"` - // The runtime environment for the Lambda function. - // - // To use the Python runtime v3.6, set the value to "python3.6". To use the - // Python runtime v2.7, set the value to "python2.7". To use the Node.js runtime - // v6.10, set the value to "nodejs6.10". To use the Node.js runtime v4.3, set - // the value to "nodejs4.3". To use the Python runtime v3.6, set the value to - // "python3.6". - // - // Node v0.10.42 is currently marked as deprecated. You must migrate existing - // functions to the newer Node.js runtime versions available on AWS Lambda (nodejs4.3 - // or nodejs6.10) as soon as possible. Failure to do so will result in an invalid - // parameter error being returned. Note that you will have to follow this procedure - // for each region that contains functions written in the Node v0.10.42 runtime. + // The runtime version for the function. Runtime *string `type:"string" enum:"Runtime"` - // The function execution time at which AWS Lambda should terminate the function. - // Because the execution time has cost implications, we recommend you set this - // value based on your expected execution time. The default is 3 seconds. + // The amount of time that Lambda allows a function to run before terminating + // it. The default is 3 seconds. The maximum allowed value is 900 seconds. Timeout *int64 `min:"1" type:"integer"` - // The parent object that contains your function's tracing settings. + // Set Mode to Active to sample and trace a subset of incoming requests with + // AWS X-Ray. 
TracingConfig *TracingConfig `type:"structure"` - // If your Lambda function accesses resources in a VPC, you provide this parameter - // identifying the list of security group IDs and subnet IDs. These must belong - // to the same VPC. You must provide at least one security group and one subnet - // ID. + // Specify security groups and subnets in a VPC to which your Lambda function + // needs access. VpcConfig *VpcConfig `type:"structure"` } @@ -7001,17 +7014,14 @@ func (s *UpdateFunctionConfigurationInput) SetVpcConfig(v *VpcConfig) *UpdateFun return s } -// If your Lambda function accesses resources in a VPC, you provide this parameter -// identifying the list of security group IDs and subnet IDs. These must belong -// to the same VPC. You must provide at least one security group and one subnet -// ID. +// The VPC security groups and subnets attached to a Lambda function. type VpcConfig struct { _ struct{} `type:"structure"` - // A list of one or more security groups IDs in your VPC. + // A list of VPC security groups IDs. SecurityGroupIds []*string `type:"list"` - // A list of one or more subnet IDs in your VPC. + // A list of VPC subnet IDs. SubnetIds []*string `type:"list"` } @@ -7037,17 +7047,17 @@ func (s *VpcConfig) SetSubnetIds(v []*string) *VpcConfig { return s } -// VPC configuration associated with your Lambda function. +// The VPC security groups and subnets attached to a Lambda function. type VpcConfigResponse struct { _ struct{} `type:"structure"` - // A list of security group IDs associated with the Lambda function. + // A list of VPC security groups IDs. SecurityGroupIds []*string `type:"list"` - // A list of subnet IDs associated with the Lambda function. + // A list of VPC subnet IDs. SubnetIds []*string `type:"list"` - // The VPC ID associated with you Lambda function. + // The ID of the VPC. VpcId *string `type:"string"` } @@ -7124,6 +7134,9 @@ const ( // RuntimeNodejs610 is a Runtime enum value RuntimeNodejs610 = "nodejs6.10" + // RuntimeNodejs810 is a Runtime enum value + RuntimeNodejs810 = "nodejs8.10" + // RuntimeJava8 is a Runtime enum value RuntimeJava8 = "java8" @@ -7133,12 +7146,18 @@ const ( // RuntimePython36 is a Runtime enum value RuntimePython36 = "python3.6" + // RuntimePython37 is a Runtime enum value + RuntimePython37 = "python3.7" + // RuntimeDotnetcore10 is a Runtime enum value RuntimeDotnetcore10 = "dotnetcore1.0" // RuntimeDotnetcore20 is a Runtime enum value RuntimeDotnetcore20 = "dotnetcore2.0" + // RuntimeDotnetcore21 is a Runtime enum value + RuntimeDotnetcore21 = "dotnetcore2.1" + // RuntimeNodejs43Edge is a Runtime enum value RuntimeNodejs43Edge = "nodejs4.3-edge" diff --git a/vendor/github.com/aws/aws-sdk-go/service/lambda/errors.go b/vendor/github.com/aws/aws-sdk-go/service/lambda/errors.go index 57daa1c34e6..4e12ca08f76 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/lambda/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/lambda/errors.go @@ -12,6 +12,8 @@ const ( // ErrCodeEC2AccessDeniedException for service response error code // "EC2AccessDeniedException". + // + // Need additional permissions to configure VPC settings. ErrCodeEC2AccessDeniedException = "EC2AccessDeniedException" // ErrCodeEC2ThrottledException for service response error code @@ -72,7 +74,7 @@ const ( // ErrCodeInvalidZipFileException for service response error code // "InvalidZipFileException". // - // AWS Lambda could not unzip the function zip file. + // AWS Lambda could not unzip the deployment package. 
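`UpdateFunctionConfiguration` ties together several of the structures documented above: the runtime enum, the timeout ceiling, `TracingConfig`, and `VpcConfig`. A sketch under the same client assumptions; the runtime, timeout, security group, and subnet IDs are placeholders:

```go
_, err := svc.UpdateFunctionConfiguration(&lambda.UpdateFunctionConfigurationInput{
	FunctionName: aws.String("my-function"),
	Runtime:      aws.String(lambda.RuntimeNodejs810),
	Timeout:      aws.Int64(60), // seconds; must not exceed the 900-second maximum
	TracingConfig: &lambda.TracingConfig{
		// Active mode samples and traces a subset of requests with X-Ray.
		Mode: aws.String(lambda.TracingModeActive),
	},
	VpcConfig: &lambda.VpcConfig{
		SecurityGroupIds: []*string{aws.String("sg-0123456789abcdef0")},
		SubnetIds:        []*string{aws.String("subnet-0123456789abcdef0")},
	},
})
if err != nil {
	log.Fatal(err)
}
```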
ErrCodeInvalidZipFileException = "InvalidZipFileException" // ErrCodeKMSAccessDeniedException for service response error code @@ -130,6 +132,14 @@ const ( // The resource already exists. ErrCodeResourceConflictException = "ResourceConflictException" + // ErrCodeResourceInUseException for service response error code + // "ResourceInUseException". + // + // The operation conflicts with the resource's availability. For example, you + // attempted to update an EventSoure Mapping in CREATING, or tried to delete + // a EventSoure mapping currently in the UPDATING state. + ErrCodeResourceInUseException = "ResourceInUseException" + // ErrCodeResourceNotFoundException for service response error code // "ResourceNotFoundException". // @@ -152,6 +162,8 @@ const ( // ErrCodeTooManyRequestsException for service response error code // "TooManyRequestsException". + // + // Request throughput limit exceeded ErrCodeTooManyRequestsException = "TooManyRequestsException" // ErrCodeUnsupportedMediaTypeException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/lambda/service.go b/vendor/github.com/aws/aws-sdk-go/service/lambda/service.go index 83c5e3094e3..1cccdda0842 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/lambda/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/lambda/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "lambda" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "lambda" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Lambda" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Lambda client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/lexmodelbuildingservice/api.go b/vendor/github.com/aws/aws-sdk-go/service/lexmodelbuildingservice/api.go index 1693e732082..fd12663b8e0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/lexmodelbuildingservice/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/lexmodelbuildingservice/api.go @@ -17,8 +17,8 @@ const opCreateBotVersion = "CreateBotVersion" // CreateBotVersionRequest generates a "aws/request.Request" representing the // client's request for the CreateBotVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -124,8 +124,8 @@ const opCreateIntentVersion = "CreateIntentVersion" // CreateIntentVersionRequest generates a "aws/request.Request" representing the // client's request for the CreateIntentVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
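The error codes above, including the newly added `ResourceInUseException`, surface through the SDK's `awserr.Error` interface, so callers can switch on the code string. A hedged sketch of handling a conflicting event source mapping update, assuming an extra `github.com/aws/aws-sdk-go/aws/awserr` import; the UUID and batch size are placeholders:

```go
_, err := svc.UpdateEventSourceMapping(&lambda.UpdateEventSourceMappingInput{
	UUID:      aws.String("12345678-1234-1234-1234-123456789012"),
	BatchSize: aws.Int64(100),
})
if err != nil {
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case lambda.ErrCodeResourceInUseException:
			// The mapping is still CREATING or UPDATING; retry after a delay.
		case lambda.ErrCodeTooManyRequestsException:
			// Request throughput limit exceeded; back off and retry.
		default:
			log.Fatal(aerr)
		}
	} else {
		log.Fatal(err)
	}
}
```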
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -233,8 +233,8 @@ const opCreateSlotTypeVersion = "CreateSlotTypeVersion" // CreateSlotTypeVersionRequest generates a "aws/request.Request" representing the // client's request for the CreateSlotTypeVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -341,8 +341,8 @@ const opDeleteBot = "DeleteBot" // DeleteBotRequest generates a "aws/request.Request" representing the // client's request for the DeleteBot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -458,8 +458,8 @@ const opDeleteBotAlias = "DeleteBotAlias" // DeleteBotAliasRequest generates a "aws/request.Request" representing the // client's request for the DeleteBotAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -574,8 +574,8 @@ const opDeleteBotChannelAssociation = "DeleteBotChannelAssociation" // DeleteBotChannelAssociationRequest generates a "aws/request.Request" representing the // client's request for the DeleteBotChannelAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -672,8 +672,8 @@ const opDeleteBotVersion = "DeleteBotVersion" // DeleteBotVersionRequest generates a "aws/request.Request" representing the // client's request for the DeleteBotVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -783,8 +783,8 @@ const opDeleteIntent = "DeleteIntent" // DeleteIntentRequest generates a "aws/request.Request" representing the // client's request for the DeleteIntent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -904,8 +904,8 @@ const opDeleteIntentVersion = "DeleteIntentVersion" // DeleteIntentVersionRequest generates a "aws/request.Request" representing the // client's request for the DeleteIntentVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1015,8 +1015,8 @@ const opDeleteSlotType = "DeleteSlotType" // DeleteSlotTypeRequest generates a "aws/request.Request" representing the // client's request for the DeleteSlotType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1138,8 +1138,8 @@ const opDeleteSlotTypeVersion = "DeleteSlotTypeVersion" // DeleteSlotTypeVersionRequest generates a "aws/request.Request" representing the // client's request for the DeleteSlotTypeVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1249,8 +1249,8 @@ const opDeleteUtterances = "DeleteUtterances" // DeleteUtterancesRequest generates a "aws/request.Request" representing the // client's request for the DeleteUtterances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1351,8 +1351,8 @@ const opGetBot = "GetBot" // GetBotRequest generates a "aws/request.Request" representing the // client's request for the GetBot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1444,8 +1444,8 @@ const opGetBotAlias = "GetBotAlias" // GetBotAliasRequest generates a "aws/request.Request" representing the // client's request for the GetBotAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1537,8 +1537,8 @@ const opGetBotAliases = "GetBotAliases" // GetBotAliasesRequest generates a "aws/request.Request" representing the // client's request for the GetBotAliases operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1681,8 +1681,8 @@ const opGetBotChannelAssociation = "GetBotChannelAssociation" // GetBotChannelAssociationRequest generates a "aws/request.Request" representing the // client's request for the GetBotChannelAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1775,8 +1775,8 @@ const opGetBotChannelAssociations = "GetBotChannelAssociations" // GetBotChannelAssociationsRequest generates a "aws/request.Request" representing the // client's request for the GetBotChannelAssociations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1920,8 +1920,8 @@ const opGetBotVersions = "GetBotVersions" // GetBotVersionsRequest generates a "aws/request.Request" representing the // client's request for the GetBotVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2076,8 +2076,8 @@ const opGetBots = "GetBots" // GetBotsRequest generates a "aws/request.Request" representing the // client's request for the GetBots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2231,8 +2231,8 @@ const opGetBuiltinIntent = "GetBuiltinIntent" // GetBuiltinIntentRequest generates a "aws/request.Request" representing the // client's request for the GetBuiltinIntent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2323,8 +2323,8 @@ const opGetBuiltinIntents = "GetBuiltinIntents" // GetBuiltinIntentsRequest generates a "aws/request.Request" representing the // client's request for the GetBuiltinIntents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2467,8 +2467,8 @@ const opGetBuiltinSlotTypes = "GetBuiltinSlotTypes" // GetBuiltinSlotTypesRequest generates a "aws/request.Request" representing the // client's request for the GetBuiltinSlotTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2614,8 +2614,8 @@ const opGetExport = "GetExport" // GetExportRequest generates a "aws/request.Request" representing the // client's request for the GetExport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2704,8 +2704,8 @@ const opGetImport = "GetImport" // GetImportRequest generates a "aws/request.Request" representing the // client's request for the GetImport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2794,8 +2794,8 @@ const opGetIntent = "GetIntent" // GetIntentRequest generates a "aws/request.Request" representing the // client's request for the GetIntent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2887,8 +2887,8 @@ const opGetIntentVersions = "GetIntentVersions" // GetIntentVersionsRequest generates a "aws/request.Request" representing the // client's request for the GetIntentVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -3043,8 +3043,8 @@ const opGetIntents = "GetIntents" // GetIntentsRequest generates a "aws/request.Request" representing the // client's request for the GetIntents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3197,8 +3197,8 @@ const opGetSlotType = "GetSlotType" // GetSlotTypeRequest generates a "aws/request.Request" representing the // client's request for the GetSlotType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3290,8 +3290,8 @@ const opGetSlotTypeVersions = "GetSlotTypeVersions" // GetSlotTypeVersionsRequest generates a "aws/request.Request" representing the // client's request for the GetSlotTypeVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3446,8 +3446,8 @@ const opGetSlotTypes = "GetSlotTypes" // GetSlotTypesRequest generates a "aws/request.Request" representing the // client's request for the GetSlotTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3600,8 +3600,8 @@ const opGetUtterancesView = "GetUtterancesView" // GetUtterancesViewRequest generates a "aws/request.Request" representing the // client's request for the GetUtterancesView operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3706,8 +3706,8 @@ const opPutBot = "PutBot" // PutBotRequest generates a "aws/request.Request" representing the // client's request for the PutBot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3812,8 +3812,8 @@ const opPutBotAlias = "PutBotAlias" // PutBotAliasRequest generates a "aws/request.Request" representing the // client's request for the PutBotAlias operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3909,8 +3909,8 @@ const opPutIntent = "PutIntent" // PutIntentRequest generates a "aws/request.Request" representing the // client's request for the PutIntent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4047,8 +4047,8 @@ const opPutSlotType = "PutSlotType" // PutSlotTypeRequest generates a "aws/request.Request" representing the // client's request for the PutSlotType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4153,8 +4153,8 @@ const opStartImport = "StartImport" // StartImportRequest generates a "aws/request.Request" representing the // client's request for the StartImport operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4249,14 +4249,14 @@ type BotAliasMetadata struct { Checksum *string `locationName:"checksum" type:"string"` // The date that the bot alias was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the bot alias. Description *string `locationName:"description" type:"string"` // The date that the bot alias was updated. When you create a resource, the // creation date and last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the bot alias. Name *string `locationName:"name" min:"1" type:"string"` @@ -4334,7 +4334,7 @@ type BotChannelAssociation struct { // The date that the association between the Amazon Lex bot and the channel // was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A text description of the association you are creating. 
Description *string `locationName:"description" type:"string"` @@ -4430,14 +4430,14 @@ type BotMetadata struct { _ struct{} `type:"structure"` // The date that the bot was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the bot. Description *string `locationName:"description" type:"string"` // The date that the bot was updated. When you create a bot, the creation date // and last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the bot. Name *string `locationName:"name" min:"2" type:"string"` @@ -4745,7 +4745,7 @@ type CreateBotVersionOutput struct { ClarificationPrompt *Prompt `locationName:"clarificationPrompt" type:"structure"` // The date when the bot version was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the bot. Description *string `locationName:"description" type:"string"` @@ -4762,7 +4762,7 @@ type CreateBotVersionOutput struct { Intents []*Intent `locationName:"intents" type:"list"` // The date when the $LATEST version of this bot was updated. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // Specifies the target locale for the bot. Locale *string `locationName:"locale" type:"string" enum:"Locale"` @@ -4954,7 +4954,7 @@ type CreateIntentVersionOutput struct { ConfirmationPrompt *Prompt `locationName:"confirmationPrompt" type:"structure"` // The date that the intent was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the intent. Description *string `locationName:"description" type:"string"` @@ -4970,7 +4970,7 @@ type CreateIntentVersionOutput struct { FulfillmentActivity *FulfillmentActivity `locationName:"fulfillmentActivity" type:"structure"` // The date that the intent was updated. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the intent. Name *string `locationName:"name" min:"1" type:"string"` @@ -5155,7 +5155,7 @@ type CreateSlotTypeVersionOutput struct { Checksum *string `locationName:"checksum" type:"string"` // The date that the slot type was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the slot type. Description *string `locationName:"description" type:"string"` @@ -5166,7 +5166,7 @@ type CreateSlotTypeVersionOutput struct { // The date that the slot type was updated. When you create a resource, the // creation date and last update date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the slot type. 
Name *string `locationName:"name" min:"1" type:"string"` @@ -6128,14 +6128,14 @@ type GetBotAliasOutput struct { Checksum *string `locationName:"checksum" type:"string"` // The date that the bot alias was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the bot alias. Description *string `locationName:"description" type:"string"` // The date that the bot alias was updated. When you create a resource, the // creation date and the last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the bot alias. Name *string `locationName:"name" min:"1" type:"string"` @@ -6400,7 +6400,7 @@ type GetBotChannelAssociationOutput struct { BotName *string `locationName:"botName" min:"2" type:"string"` // The date that the association between the bot and the channel was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the association between the bot and the channel. Description *string `locationName:"description" type:"string"` @@ -6721,7 +6721,7 @@ type GetBotOutput struct { ClarificationPrompt *Prompt `locationName:"clarificationPrompt" type:"structure"` // The date that the bot was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the bot. Description *string `locationName:"description" type:"string"` @@ -6738,7 +6738,7 @@ type GetBotOutput struct { // The date that the bot was updated. When you create a resource, the creation // date and last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The target locale for the bot. Locale *string `locationName:"locale" type:"string" enum:"Locale"` @@ -7562,7 +7562,7 @@ type GetImportOutput struct { _ struct{} `type:"structure"` // A timestamp for the date and time that the import job was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A string that describes why an import job failed to complete. FailureReason []*string `locationName:"failureReason" type:"list"` @@ -7710,7 +7710,7 @@ type GetIntentOutput struct { ConfirmationPrompt *Prompt `locationName:"confirmationPrompt" type:"structure"` // The date that the intent was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the intent. Description *string `locationName:"description" type:"string"` @@ -7728,7 +7728,7 @@ type GetIntentOutput struct { // The date that the intent was updated. When you create a resource, the creation // date and the last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the intent. 
Name *string `locationName:"name" min:"1" type:"string"` @@ -8112,7 +8112,7 @@ type GetSlotTypeOutput struct { Checksum *string `locationName:"checksum" type:"string"` // The date that the slot type was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the slot type. Description *string `locationName:"description" type:"string"` @@ -8123,7 +8123,7 @@ type GetSlotTypeOutput struct { // The date that the slot type was updated. When you create a resource, the // creation date and last update date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the slot type. Name *string `locationName:"name" min:"1" type:"string"` @@ -8565,14 +8565,14 @@ type IntentMetadata struct { _ struct{} `type:"structure"` // The date that the intent was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the intent. Description *string `locationName:"description" type:"string"` // The date that the intent was updated. When you create an intent, the creation // date and last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the intent. Name *string `locationName:"name" min:"1" type:"string"` @@ -8893,14 +8893,14 @@ type PutBotAliasOutput struct { Checksum *string `locationName:"checksum" type:"string"` // The date that the bot alias was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the alias. Description *string `locationName:"description" type:"string"` // The date that the bot alias was updated. When you create a resource, the // creation date and the last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the alias. Name *string `locationName:"name" min:"1" type:"string"` @@ -9251,7 +9251,7 @@ type PutBotOutput struct { CreateVersion *bool `locationName:"createVersion" type:"boolean"` // The date that the bot was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the bot. Description *string `locationName:"description" type:"string"` @@ -9269,7 +9269,7 @@ type PutBotOutput struct { // The date that the bot was updated. When you create a resource, the creation // date and last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The target locale for the bot. Locale *string `locationName:"locale" type:"string" enum:"Locale"` @@ -9683,7 +9683,7 @@ type PutIntentOutput struct { CreateVersion *bool `locationName:"createVersion" type:"boolean"` // The date that the intent was created. 
- CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the intent. Description *string `locationName:"description" type:"string"` @@ -9703,7 +9703,7 @@ type PutIntentOutput struct { // The date that the intent was updated. When you create a resource, the creation // date and last update dates are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the intent. Name *string `locationName:"name" min:"1" type:"string"` @@ -9973,7 +9973,7 @@ type PutSlotTypeOutput struct { CreateVersion *bool `locationName:"createVersion" type:"boolean"` // The date that the slot type was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the slot type. Description *string `locationName:"description" type:"string"` @@ -9984,7 +9984,7 @@ type PutSlotTypeOutput struct { // The date that the slot type was updated. When you create a slot type, the // creation date and last update date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the slot type. Name *string `locationName:"name" min:"1" type:"string"` @@ -10248,14 +10248,14 @@ type SlotTypeMetadata struct { _ struct{} `type:"structure"` // The date that the slot type was created. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // A description of the slot type. Description *string `locationName:"description" type:"string"` // The date that the slot type was updated. When you create a resource, the // creation date and last updated date are the same. - LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp" timestampFormat:"unix"` + LastUpdatedDate *time.Time `locationName:"lastUpdatedDate" type:"timestamp"` // The name of the slot type. Name *string `locationName:"name" min:"1" type:"string"` @@ -10393,7 +10393,7 @@ type StartImportOutput struct { _ struct{} `type:"structure"` // A timestamp for the date and time that the import job was requested. - CreatedDate *time.Time `locationName:"createdDate" type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `locationName:"createdDate" type:"timestamp"` // The identifier for the specific import job. ImportId *string `locationName:"importId" type:"string"` @@ -10537,10 +10537,10 @@ type UtteranceData struct { DistinctUsers *int64 `locationName:"distinctUsers" type:"integer"` // The date that the utterance was first recorded. - FirstUtteredDate *time.Time `locationName:"firstUtteredDate" type:"timestamp" timestampFormat:"unix"` + FirstUtteredDate *time.Time `locationName:"firstUtteredDate" type:"timestamp"` // The date that the utterance was last recorded. - LastUtteredDate *time.Time `locationName:"lastUtteredDate" type:"timestamp" timestampFormat:"unix"` + LastUtteredDate *time.Time `locationName:"lastUtteredDate" type:"timestamp"` // The text that was entered by the user or the text representation of an audio // clip. 
@@ -10771,6 +10771,9 @@ const ( // StatusReady is a Status enum value StatusReady = "READY" + // StatusReadyBasicTesting is a Status enum value + StatusReadyBasicTesting = "READY_BASIC_TESTING" + // StatusFailed is a Status enum value StatusFailed = "FAILED" diff --git a/vendor/github.com/aws/aws-sdk-go/service/lexmodelbuildingservice/service.go b/vendor/github.com/aws/aws-sdk-go/service/lexmodelbuildingservice/service.go index 8948604660b..86d1a44d24b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/lexmodelbuildingservice/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/lexmodelbuildingservice/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "models.lex" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "models.lex" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Lex Model Building Service" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the LexModelBuildingService client with a session. @@ -45,19 +46,20 @@ const ( // svc := lexmodelbuildingservice.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *LexModelBuildingService { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "lex" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *LexModelBuildingService { - if len(signingName) == 0 { - signingName = "lex" - } svc := &LexModelBuildingService{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/lightsail/api.go b/vendor/github.com/aws/aws-sdk-go/service/lightsail/api.go index 3d8e0c950e6..483c44f8938 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/lightsail/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/lightsail/api.go @@ -14,8 +14,8 @@ const opAllocateStaticIp = "AllocateStaticIp" // AllocateStaticIpRequest generates a "aws/request.Request" representing the // client's request for the AllocateStaticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -118,8 +118,8 @@ const opAttachDisk = "AttachDisk" // AttachDiskRequest generates a "aws/request.Request" representing the // client's request for the AttachDisk operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -223,8 +223,8 @@ const opAttachInstancesToLoadBalancer = "AttachInstancesToLoadBalancer" // AttachInstancesToLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the AttachInstancesToLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -330,8 +330,8 @@ const opAttachLoadBalancerTlsCertificate = "AttachLoadBalancerTlsCertificate" // AttachLoadBalancerTlsCertificateRequest generates a "aws/request.Request" representing the // client's request for the AttachLoadBalancerTlsCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -441,8 +441,8 @@ const opAttachStaticIp = "AttachStaticIp" // AttachStaticIpRequest generates a "aws/request.Request" representing the // client's request for the AttachStaticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -545,8 +545,8 @@ const opCloseInstancePublicPorts = "CloseInstancePublicPorts" // CloseInstancePublicPortsRequest generates a "aws/request.Request" representing the // client's request for the CloseInstancePublicPorts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -649,8 +649,8 @@ const opCreateDisk = "CreateDisk" // CreateDiskRequest generates a "aws/request.Request" representing the // client's request for the CreateDisk operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -756,8 +756,8 @@ const opCreateDiskFromSnapshot = "CreateDiskFromSnapshot" // CreateDiskFromSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateDiskFromSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -863,8 +863,8 @@ const opCreateDiskSnapshot = "CreateDiskSnapshot" // CreateDiskSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateDiskSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -980,8 +980,8 @@ const opCreateDomain = "CreateDomain" // CreateDomainRequest generates a "aws/request.Request" representing the // client's request for the CreateDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1084,8 +1084,8 @@ const opCreateDomainEntry = "CreateDomainEntry" // CreateDomainEntryRequest generates a "aws/request.Request" representing the // client's request for the CreateDomainEntry operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1189,8 +1189,8 @@ const opCreateInstanceSnapshot = "CreateInstanceSnapshot" // CreateInstanceSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateInstanceSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1294,8 +1294,8 @@ const opCreateInstances = "CreateInstances" // CreateInstancesRequest generates a "aws/request.Request" representing the // client's request for the CreateInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1335,6 +1335,11 @@ func (c *Lightsail) CreateInstancesRequest(input *CreateInstancesInput) (req *re // CreateInstances API operation for Amazon Lightsail. // // Creates one or more Amazon Lightsail virtual private servers, or instances. +// Create instances using active blueprints. 
Inactive blueprints are listed +// to support customers with existing instances but are not necessarily available +// for launch of new instances. Blueprints are marked inactive when they become +// outdated due to operating system updates or new application releases. Use +// the get blueprints operation to return a list of available blueprints. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1398,8 +1403,8 @@ const opCreateInstancesFromSnapshot = "CreateInstancesFromSnapshot" // CreateInstancesFromSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateInstancesFromSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1503,8 +1508,8 @@ const opCreateKeyPair = "CreateKeyPair" // CreateKeyPairRequest generates a "aws/request.Request" representing the // client's request for the CreateKeyPair operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1543,7 +1548,7 @@ func (c *Lightsail) CreateKeyPairRequest(input *CreateKeyPairInput) (req *reques // CreateKeyPair API operation for Amazon Lightsail. // -// Creates sn SSH key pair. +// Creates an SSH key pair. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1607,8 +1612,8 @@ const opCreateLoadBalancer = "CreateLoadBalancer" // CreateLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the CreateLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1718,8 +1723,8 @@ const opCreateLoadBalancerTlsCertificate = "CreateLoadBalancerTlsCertificate" // CreateLoadBalancerTlsCertificateRequest generates a "aws/request.Request" representing the // client's request for the CreateLoadBalancerTlsCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
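The doc comments being corrected in these hunks all describe the same two-step Request/Send pattern. A usage sketch only, using one of the operations above (CreateKeyPair); the region, key pair name, and error handling are placeholder choices, and the import path shown is the upstream github.com/aws/aws-sdk-go module rather than this repository's vendored copy:

```go
// Sketch of the Request/Send pattern described in the generated doc comments.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	client := lightsail.New(sess)

	// Build the request first; resp is not valid until Send returns without error.
	req, resp := client.CreateKeyPairRequest(&lightsail.CreateKeyPairInput{
		KeyPairName: aws.String("example-key-pair"), // placeholder name
	})
	if err := req.Send(); err != nil {
		fmt.Println("CreateKeyPair failed:", err)
		return
	}
	if resp.KeyPair != nil {
		fmt.Println("created key pair:", aws.StringValue(resp.KeyPair.Name))
	}
}
```

Calling `client.CreateKeyPair(input)` directly, or the `WithContext` variant, wraps this same request construction and `Send` call, as the generated method bodies in the hunks below show.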
@@ -1820,12 +1825,330 @@ func (c *Lightsail) CreateLoadBalancerTlsCertificateWithContext(ctx aws.Context, return out, req.Send() } +const opCreateRelationalDatabase = "CreateRelationalDatabase" + +// CreateRelationalDatabaseRequest generates a "aws/request.Request" representing the +// client's request for the CreateRelationalDatabase operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateRelationalDatabase for more information on using the CreateRelationalDatabase +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateRelationalDatabaseRequest method. +// req, resp := client.CreateRelationalDatabaseRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/CreateRelationalDatabase +func (c *Lightsail) CreateRelationalDatabaseRequest(input *CreateRelationalDatabaseInput) (req *request.Request, output *CreateRelationalDatabaseOutput) { + op := &request.Operation{ + Name: opCreateRelationalDatabase, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateRelationalDatabaseInput{} + } + + output = &CreateRelationalDatabaseOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateRelationalDatabase API operation for Amazon Lightsail. +// +// Creates a new database in Amazon Lightsail. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation CreateRelationalDatabase for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/CreateRelationalDatabase +func (c *Lightsail) CreateRelationalDatabase(input *CreateRelationalDatabaseInput) (*CreateRelationalDatabaseOutput, error) { + req, out := c.CreateRelationalDatabaseRequest(input) + return out, req.Send() +} + +// CreateRelationalDatabaseWithContext is the same as CreateRelationalDatabase with the addition of +// the ability to pass a context and additional request options. +// +// See CreateRelationalDatabase for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) CreateRelationalDatabaseWithContext(ctx aws.Context, input *CreateRelationalDatabaseInput, opts ...request.Option) (*CreateRelationalDatabaseOutput, error) { + req, out := c.CreateRelationalDatabaseRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateRelationalDatabaseFromSnapshot = "CreateRelationalDatabaseFromSnapshot" + +// CreateRelationalDatabaseFromSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the CreateRelationalDatabaseFromSnapshot operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateRelationalDatabaseFromSnapshot for more information on using the CreateRelationalDatabaseFromSnapshot +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateRelationalDatabaseFromSnapshotRequest method. +// req, resp := client.CreateRelationalDatabaseFromSnapshotRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/CreateRelationalDatabaseFromSnapshot +func (c *Lightsail) CreateRelationalDatabaseFromSnapshotRequest(input *CreateRelationalDatabaseFromSnapshotInput) (req *request.Request, output *CreateRelationalDatabaseFromSnapshotOutput) { + op := &request.Operation{ + Name: opCreateRelationalDatabaseFromSnapshot, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateRelationalDatabaseFromSnapshotInput{} + } + + output = &CreateRelationalDatabaseFromSnapshotOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateRelationalDatabaseFromSnapshot API operation for Amazon Lightsail. +// +// Creates a new database from an existing database snapshot in Amazon Lightsail. +// +// You can create a new database from a snapshot in if something goes wrong +// with your original database, or to change it to a different plan, such as +// a high availability or standard plan. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon Lightsail's +// API operation CreateRelationalDatabaseFromSnapshot for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/CreateRelationalDatabaseFromSnapshot +func (c *Lightsail) CreateRelationalDatabaseFromSnapshot(input *CreateRelationalDatabaseFromSnapshotInput) (*CreateRelationalDatabaseFromSnapshotOutput, error) { + req, out := c.CreateRelationalDatabaseFromSnapshotRequest(input) + return out, req.Send() +} + +// CreateRelationalDatabaseFromSnapshotWithContext is the same as CreateRelationalDatabaseFromSnapshot with the addition of +// the ability to pass a context and additional request options. +// +// See CreateRelationalDatabaseFromSnapshot for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) CreateRelationalDatabaseFromSnapshotWithContext(ctx aws.Context, input *CreateRelationalDatabaseFromSnapshotInput, opts ...request.Option) (*CreateRelationalDatabaseFromSnapshotOutput, error) { + req, out := c.CreateRelationalDatabaseFromSnapshotRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateRelationalDatabaseSnapshot = "CreateRelationalDatabaseSnapshot" + +// CreateRelationalDatabaseSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the CreateRelationalDatabaseSnapshot operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateRelationalDatabaseSnapshot for more information on using the CreateRelationalDatabaseSnapshot +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. 
Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateRelationalDatabaseSnapshotRequest method. +// req, resp := client.CreateRelationalDatabaseSnapshotRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/CreateRelationalDatabaseSnapshot +func (c *Lightsail) CreateRelationalDatabaseSnapshotRequest(input *CreateRelationalDatabaseSnapshotInput) (req *request.Request, output *CreateRelationalDatabaseSnapshotOutput) { + op := &request.Operation{ + Name: opCreateRelationalDatabaseSnapshot, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateRelationalDatabaseSnapshotInput{} + } + + output = &CreateRelationalDatabaseSnapshotOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateRelationalDatabaseSnapshot API operation for Amazon Lightsail. +// +// Creates a snapshot of your database in Amazon Lightsail. You can use snapshots +// for backups, to make copies of a database, and to save data before deleting +// a database. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation CreateRelationalDatabaseSnapshot for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/CreateRelationalDatabaseSnapshot +func (c *Lightsail) CreateRelationalDatabaseSnapshot(input *CreateRelationalDatabaseSnapshotInput) (*CreateRelationalDatabaseSnapshotOutput, error) { + req, out := c.CreateRelationalDatabaseSnapshotRequest(input) + return out, req.Send() +} + +// CreateRelationalDatabaseSnapshotWithContext is the same as CreateRelationalDatabaseSnapshot with the addition of +// the ability to pass a context and additional request options. +// +// See CreateRelationalDatabaseSnapshot for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) CreateRelationalDatabaseSnapshotWithContext(ctx aws.Context, input *CreateRelationalDatabaseSnapshotInput, opts ...request.Option) (*CreateRelationalDatabaseSnapshotOutput, error) { + req, out := c.CreateRelationalDatabaseSnapshotRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteDisk = "DeleteDisk" // DeleteDiskRequest generates a "aws/request.Request" representing the // client's request for the DeleteDisk operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1931,8 +2254,8 @@ const opDeleteDiskSnapshot = "DeleteDiskSnapshot" // DeleteDiskSnapshotRequest generates a "aws/request.Request" representing the // client's request for the DeleteDiskSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2042,8 +2365,8 @@ const opDeleteDomain = "DeleteDomain" // DeleteDomainRequest generates a "aws/request.Request" representing the // client's request for the DeleteDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2146,8 +2469,8 @@ const opDeleteDomainEntry = "DeleteDomainEntry" // DeleteDomainEntryRequest generates a "aws/request.Request" representing the // client's request for the DeleteDomainEntry operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2250,8 +2573,8 @@ const opDeleteInstance = "DeleteInstance" // DeleteInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeleteInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2354,8 +2677,8 @@ const opDeleteInstanceSnapshot = "DeleteInstanceSnapshot" // DeleteInstanceSnapshotRequest generates a "aws/request.Request" representing the // client's request for the DeleteInstanceSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2458,8 +2781,8 @@ const opDeleteKeyPair = "DeleteKeyPair" // DeleteKeyPairRequest generates a "aws/request.Request" representing the // client's request for the DeleteKeyPair operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2562,8 +2885,8 @@ const opDeleteLoadBalancer = "DeleteLoadBalancer" // DeleteLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the DeleteLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2668,8 +2991,8 @@ const opDeleteLoadBalancerTlsCertificate = "DeleteLoadBalancerTlsCertificate" // DeleteLoadBalancerTlsCertificateRequest generates a "aws/request.Request" representing the // client's request for the DeleteLoadBalancerTlsCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2768,49 +3091,257 @@ func (c *Lightsail) DeleteLoadBalancerTlsCertificateWithContext(ctx aws.Context, return out, req.Send() } -const opDetachDisk = "DetachDisk" +const opDeleteRelationalDatabase = "DeleteRelationalDatabase" -// DetachDiskRequest generates a "aws/request.Request" representing the -// client's request for the DetachDisk operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteRelationalDatabaseRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRelationalDatabase operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DetachDisk for more information on using the DetachDisk +// See DeleteRelationalDatabase for more information on using the DeleteRelationalDatabase // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DetachDiskRequest method. -// req, resp := client.DetachDiskRequest(params) +// // Example sending a request using the DeleteRelationalDatabaseRequest method. +// req, resp := client.DeleteRelationalDatabaseRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/DetachDisk -func (c *Lightsail) DetachDiskRequest(input *DetachDiskInput) (req *request.Request, output *DetachDiskOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/DeleteRelationalDatabase +func (c *Lightsail) DeleteRelationalDatabaseRequest(input *DeleteRelationalDatabaseInput) (req *request.Request, output *DeleteRelationalDatabaseOutput) { op := &request.Operation{ - Name: opDetachDisk, + Name: opDeleteRelationalDatabase, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DetachDiskInput{} + input = &DeleteRelationalDatabaseInput{} } - output = &DetachDiskOutput{} + output = &DeleteRelationalDatabaseOutput{} req = c.newRequest(op, input, output) return } -// DetachDisk API operation for Amazon Lightsail. +// DeleteRelationalDatabase API operation for Amazon Lightsail. +// +// Deletes a database in Amazon Lightsail. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation DeleteRelationalDatabase for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/DeleteRelationalDatabase +func (c *Lightsail) DeleteRelationalDatabase(input *DeleteRelationalDatabaseInput) (*DeleteRelationalDatabaseOutput, error) { + req, out := c.DeleteRelationalDatabaseRequest(input) + return out, req.Send() +} + +// DeleteRelationalDatabaseWithContext is the same as DeleteRelationalDatabase with the addition of +// the ability to pass a context and additional request options. 
+// +// See DeleteRelationalDatabase for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) DeleteRelationalDatabaseWithContext(ctx aws.Context, input *DeleteRelationalDatabaseInput, opts ...request.Option) (*DeleteRelationalDatabaseOutput, error) { + req, out := c.DeleteRelationalDatabaseRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteRelationalDatabaseSnapshot = "DeleteRelationalDatabaseSnapshot" + +// DeleteRelationalDatabaseSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRelationalDatabaseSnapshot operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteRelationalDatabaseSnapshot for more information on using the DeleteRelationalDatabaseSnapshot +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteRelationalDatabaseSnapshotRequest method. +// req, resp := client.DeleteRelationalDatabaseSnapshotRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/DeleteRelationalDatabaseSnapshot +func (c *Lightsail) DeleteRelationalDatabaseSnapshotRequest(input *DeleteRelationalDatabaseSnapshotInput) (req *request.Request, output *DeleteRelationalDatabaseSnapshotOutput) { + op := &request.Operation{ + Name: opDeleteRelationalDatabaseSnapshot, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteRelationalDatabaseSnapshotInput{} + } + + output = &DeleteRelationalDatabaseSnapshotOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteRelationalDatabaseSnapshot API operation for Amazon Lightsail. +// +// Deletes a database snapshot in Amazon Lightsail. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation DeleteRelationalDatabaseSnapshot for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. 
+// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/DeleteRelationalDatabaseSnapshot +func (c *Lightsail) DeleteRelationalDatabaseSnapshot(input *DeleteRelationalDatabaseSnapshotInput) (*DeleteRelationalDatabaseSnapshotOutput, error) { + req, out := c.DeleteRelationalDatabaseSnapshotRequest(input) + return out, req.Send() +} + +// DeleteRelationalDatabaseSnapshotWithContext is the same as DeleteRelationalDatabaseSnapshot with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteRelationalDatabaseSnapshot for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) DeleteRelationalDatabaseSnapshotWithContext(ctx aws.Context, input *DeleteRelationalDatabaseSnapshotInput, opts ...request.Option) (*DeleteRelationalDatabaseSnapshotOutput, error) { + req, out := c.DeleteRelationalDatabaseSnapshotRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDetachDisk = "DetachDisk" + +// DetachDiskRequest generates a "aws/request.Request" representing the +// client's request for the DetachDisk operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DetachDisk for more information on using the DetachDisk +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DetachDiskRequest method. +// req, resp := client.DetachDiskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/DetachDisk +func (c *Lightsail) DetachDiskRequest(input *DetachDiskInput) (req *request.Request, output *DetachDiskOutput) { + op := &request.Operation{ + Name: opDetachDisk, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DetachDiskInput{} + } + + output = &DetachDiskOutput{} + req = c.newRequest(op, input, output) + return +} + +// DetachDisk API operation for Amazon Lightsail. // // Detaches a stopped block storage disk from a Lightsail instance. 
Make sure // to unmount any file systems on the device within your operating system before @@ -2878,8 +3409,8 @@ const opDetachInstancesFromLoadBalancer = "DetachInstancesFromLoadBalancer" // DetachInstancesFromLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the DetachInstancesFromLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2985,8 +3516,8 @@ const opDetachStaticIp = "DetachStaticIp" // DetachStaticIpRequest generates a "aws/request.Request" representing the // client's request for the DetachStaticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3089,8 +3620,8 @@ const opDownloadDefaultKeyPair = "DownloadDefaultKeyPair" // DownloadDefaultKeyPairRequest generates a "aws/request.Request" representing the // client's request for the DownloadDefaultKeyPair operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3193,8 +3724,8 @@ const opGetActiveNames = "GetActiveNames" // GetActiveNamesRequest generates a "aws/request.Request" representing the // client's request for the GetActiveNames operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3297,8 +3828,8 @@ const opGetBlueprints = "GetBlueprints" // GetBlueprintsRequest generates a "aws/request.Request" representing the // client's request for the GetBlueprints operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3404,8 +3935,8 @@ const opGetBundles = "GetBundles" // GetBundlesRequest generates a "aws/request.Request" representing the // client's request for the GetBundles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3509,8 +4040,8 @@ const opGetDisk = "GetDisk" // GetDiskRequest generates a "aws/request.Request" representing the // client's request for the GetDisk operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3613,8 +4144,8 @@ const opGetDiskSnapshot = "GetDiskSnapshot" // GetDiskSnapshotRequest generates a "aws/request.Request" representing the // client's request for the GetDiskSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3717,8 +4248,8 @@ const opGetDiskSnapshots = "GetDiskSnapshots" // GetDiskSnapshotsRequest generates a "aws/request.Request" representing the // client's request for the GetDiskSnapshots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3826,8 +4357,8 @@ const opGetDisks = "GetDisks" // GetDisksRequest generates a "aws/request.Request" representing the // client's request for the GetDisks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3935,8 +4466,8 @@ const opGetDomain = "GetDomain" // GetDomainRequest generates a "aws/request.Request" representing the // client's request for the GetDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4039,8 +4570,8 @@ const opGetDomains = "GetDomains" // GetDomainsRequest generates a "aws/request.Request" representing the // client's request for the GetDomains operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -4143,8 +4674,8 @@ const opGetInstance = "GetInstance" // GetInstanceRequest generates a "aws/request.Request" representing the // client's request for the GetInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4248,8 +4779,8 @@ const opGetInstanceAccessDetails = "GetInstanceAccessDetails" // GetInstanceAccessDetailsRequest generates a "aws/request.Request" representing the // client's request for the GetInstanceAccessDetails operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4353,8 +4884,8 @@ const opGetInstanceMetricData = "GetInstanceMetricData" // GetInstanceMetricDataRequest generates a "aws/request.Request" representing the // client's request for the GetInstanceMetricData operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4458,8 +4989,8 @@ const opGetInstancePortStates = "GetInstancePortStates" // GetInstancePortStatesRequest generates a "aws/request.Request" representing the // client's request for the GetInstancePortStates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4562,8 +5093,8 @@ const opGetInstanceSnapshot = "GetInstanceSnapshot" // GetInstanceSnapshotRequest generates a "aws/request.Request" representing the // client's request for the GetInstanceSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4666,8 +5197,8 @@ const opGetInstanceSnapshots = "GetInstanceSnapshots" // GetInstanceSnapshotsRequest generates a "aws/request.Request" representing the // client's request for the GetInstanceSnapshots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4770,8 +5301,8 @@ const opGetInstanceState = "GetInstanceState" // GetInstanceStateRequest generates a "aws/request.Request" representing the // client's request for the GetInstanceState operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4874,8 +5405,8 @@ const opGetInstances = "GetInstances" // GetInstancesRequest generates a "aws/request.Request" representing the // client's request for the GetInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4979,8 +5510,8 @@ const opGetKeyPair = "GetKeyPair" // GetKeyPairRequest generates a "aws/request.Request" representing the // client's request for the GetKeyPair operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5083,8 +5614,8 @@ const opGetKeyPairs = "GetKeyPairs" // GetKeyPairsRequest generates a "aws/request.Request" representing the // client's request for the GetKeyPairs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5187,8 +5718,8 @@ const opGetLoadBalancer = "GetLoadBalancer" // GetLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the GetLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5291,8 +5822,8 @@ const opGetLoadBalancerMetricData = "GetLoadBalancerMetricData" // GetLoadBalancerMetricDataRequest generates a "aws/request.Request" representing the // client's request for the GetLoadBalancerMetricData operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5395,8 +5926,8 @@ const opGetLoadBalancerTlsCertificates = "GetLoadBalancerTlsCertificates" // GetLoadBalancerTlsCertificatesRequest generates a "aws/request.Request" representing the // client's request for the GetLoadBalancerTlsCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5505,8 +6036,8 @@ const opGetLoadBalancers = "GetLoadBalancers" // GetLoadBalancersRequest generates a "aws/request.Request" representing the // client's request for the GetLoadBalancers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5613,8 +6144,8 @@ const opGetOperation = "GetOperation" // GetOperationRequest generates a "aws/request.Request" representing the // client's request for the GetOperation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5719,8 +6250,8 @@ const opGetOperations = "GetOperations" // GetOperationsRequest generates a "aws/request.Request" representing the // client's request for the GetOperations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5827,8 +6358,8 @@ const opGetOperationsForResource = "GetOperationsForResource" // GetOperationsForResourceRequest generates a "aws/request.Request" representing the // client's request for the GetOperationsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5931,8 +6462,8 @@ const opGetRegions = "GetRegions" // GetRegionsRequest generates a "aws/request.Request" representing the // client's request for the GetRegions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5972,7 +6503,7 @@ func (c *Lightsail) GetRegionsRequest(input *GetRegionsInput) (req *request.Requ // GetRegions API operation for Amazon Lightsail. // // Returns a list of all valid regions for Amazon Lightsail. Use the include -// availability zones parameter to also return the availability zones in a region. +// availability zones parameter to also return the Availability Zones in a region. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6032,58 +6563,58 @@ func (c *Lightsail) GetRegionsWithContext(ctx aws.Context, input *GetRegionsInpu return out, req.Send() } -const opGetStaticIp = "GetStaticIp" +const opGetRelationalDatabase = "GetRelationalDatabase" -// GetStaticIpRequest generates a "aws/request.Request" representing the -// client's request for the GetStaticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabase operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetStaticIp for more information on using the GetStaticIp +// See GetRelationalDatabase for more information on using the GetRelationalDatabase // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetStaticIpRequest method. -// req, resp := client.GetStaticIpRequest(params) +// // Example sending a request using the GetRelationalDatabaseRequest method. +// req, resp := client.GetRelationalDatabaseRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetStaticIp -func (c *Lightsail) GetStaticIpRequest(input *GetStaticIpInput) (req *request.Request, output *GetStaticIpOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabase +func (c *Lightsail) GetRelationalDatabaseRequest(input *GetRelationalDatabaseInput) (req *request.Request, output *GetRelationalDatabaseOutput) { op := &request.Operation{ - Name: opGetStaticIp, + Name: opGetRelationalDatabase, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetStaticIpInput{} + input = &GetRelationalDatabaseInput{} } - output = &GetStaticIpOutput{} + output = &GetRelationalDatabaseOutput{} req = c.newRequest(op, input, output) return } -// GetStaticIp API operation for Amazon Lightsail. +// GetRelationalDatabase API operation for Amazon Lightsail. // -// Returns information about a specific static IP. +// Returns information about a specific database in Amazon Lightsail. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation GetStaticIp for usage and error information. +// API operation GetRelationalDatabase for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6114,80 +6645,84 @@ func (c *Lightsail) GetStaticIpRequest(input *GetStaticIpInput) (req *request.Re // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetStaticIp -func (c *Lightsail) GetStaticIp(input *GetStaticIpInput) (*GetStaticIpOutput, error) { - req, out := c.GetStaticIpRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabase +func (c *Lightsail) GetRelationalDatabase(input *GetRelationalDatabaseInput) (*GetRelationalDatabaseOutput, error) { + req, out := c.GetRelationalDatabaseRequest(input) return out, req.Send() } -// GetStaticIpWithContext is the same as GetStaticIp with the addition of +// GetRelationalDatabaseWithContext is the same as GetRelationalDatabase with the addition of // the ability to pass a context and additional request options. // -// See GetStaticIp for details on how to use this API operation. +// See GetRelationalDatabase for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) GetStaticIpWithContext(ctx aws.Context, input *GetStaticIpInput, opts ...request.Option) (*GetStaticIpOutput, error) { - req, out := c.GetStaticIpRequest(input) +func (c *Lightsail) GetRelationalDatabaseWithContext(ctx aws.Context, input *GetRelationalDatabaseInput, opts ...request.Option) (*GetRelationalDatabaseOutput, error) { + req, out := c.GetRelationalDatabaseRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opGetStaticIps = "GetStaticIps" +const opGetRelationalDatabaseBlueprints = "GetRelationalDatabaseBlueprints" -// GetStaticIpsRequest generates a "aws/request.Request" representing the -// client's request for the GetStaticIps operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseBlueprintsRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseBlueprints operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetStaticIps for more information on using the GetStaticIps +// See GetRelationalDatabaseBlueprints for more information on using the GetRelationalDatabaseBlueprints // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetStaticIpsRequest method. 
-// req, resp := client.GetStaticIpsRequest(params) +// // Example sending a request using the GetRelationalDatabaseBlueprintsRequest method. +// req, resp := client.GetRelationalDatabaseBlueprintsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetStaticIps -func (c *Lightsail) GetStaticIpsRequest(input *GetStaticIpsInput) (req *request.Request, output *GetStaticIpsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseBlueprints +func (c *Lightsail) GetRelationalDatabaseBlueprintsRequest(input *GetRelationalDatabaseBlueprintsInput) (req *request.Request, output *GetRelationalDatabaseBlueprintsOutput) { op := &request.Operation{ - Name: opGetStaticIps, + Name: opGetRelationalDatabaseBlueprints, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetStaticIpsInput{} + input = &GetRelationalDatabaseBlueprintsInput{} } - output = &GetStaticIpsOutput{} + output = &GetRelationalDatabaseBlueprintsOutput{} req = c.newRequest(op, input, output) return } -// GetStaticIps API operation for Amazon Lightsail. +// GetRelationalDatabaseBlueprints API operation for Amazon Lightsail. // -// Returns information about all static IPs in the user's account. +// Returns a list of available database blueprints in Amazon Lightsail. A blueprint +// describes the major engine version of a database. +// +// You can use a blueprint ID to create a new database that runs a specific +// database engine. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation GetStaticIps for usage and error information. +// API operation GetRelationalDatabaseBlueprints for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6218,80 +6753,84 @@ func (c *Lightsail) GetStaticIpsRequest(input *GetStaticIpsInput) (req *request. // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetStaticIps -func (c *Lightsail) GetStaticIps(input *GetStaticIpsInput) (*GetStaticIpsOutput, error) { - req, out := c.GetStaticIpsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseBlueprints +func (c *Lightsail) GetRelationalDatabaseBlueprints(input *GetRelationalDatabaseBlueprintsInput) (*GetRelationalDatabaseBlueprintsOutput, error) { + req, out := c.GetRelationalDatabaseBlueprintsRequest(input) return out, req.Send() } -// GetStaticIpsWithContext is the same as GetStaticIps with the addition of +// GetRelationalDatabaseBlueprintsWithContext is the same as GetRelationalDatabaseBlueprints with the addition of // the ability to pass a context and additional request options. // -// See GetStaticIps for details on how to use this API operation. +// See GetRelationalDatabaseBlueprints for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) GetStaticIpsWithContext(ctx aws.Context, input *GetStaticIpsInput, opts ...request.Option) (*GetStaticIpsOutput, error) { - req, out := c.GetStaticIpsRequest(input) +func (c *Lightsail) GetRelationalDatabaseBlueprintsWithContext(ctx aws.Context, input *GetRelationalDatabaseBlueprintsInput, opts ...request.Option) (*GetRelationalDatabaseBlueprintsOutput, error) { + req, out := c.GetRelationalDatabaseBlueprintsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opImportKeyPair = "ImportKeyPair" +const opGetRelationalDatabaseBundles = "GetRelationalDatabaseBundles" -// ImportKeyPairRequest generates a "aws/request.Request" representing the -// client's request for the ImportKeyPair operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseBundlesRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseBundles operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ImportKeyPair for more information on using the ImportKeyPair +// See GetRelationalDatabaseBundles for more information on using the GetRelationalDatabaseBundles // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ImportKeyPairRequest method. -// req, resp := client.ImportKeyPairRequest(params) +// // Example sending a request using the GetRelationalDatabaseBundlesRequest method. +// req, resp := client.GetRelationalDatabaseBundlesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/ImportKeyPair -func (c *Lightsail) ImportKeyPairRequest(input *ImportKeyPairInput) (req *request.Request, output *ImportKeyPairOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseBundles +func (c *Lightsail) GetRelationalDatabaseBundlesRequest(input *GetRelationalDatabaseBundlesInput) (req *request.Request, output *GetRelationalDatabaseBundlesOutput) { op := &request.Operation{ - Name: opImportKeyPair, + Name: opGetRelationalDatabaseBundles, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ImportKeyPairInput{} + input = &GetRelationalDatabaseBundlesInput{} } - output = &ImportKeyPairOutput{} + output = &GetRelationalDatabaseBundlesOutput{} req = c.newRequest(op, input, output) return } -// ImportKeyPair API operation for Amazon Lightsail. +// GetRelationalDatabaseBundles API operation for Amazon Lightsail. // -// Imports a public SSH key from a specific key pair. +// Returns the list of bundles that are available in Amazon Lightsail. A bundle +// describes the performance specifications for a database. +// +// You can use a bundle ID to create a new database with explicit performance +// specifications. // // Returns awserr.Error for service API and SDK errors. 
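As the descriptions for the new blueprint and bundle operations note, a blueprint ID selects the database engine and major version, while a bundle ID selects the performance specification. A minimal sketch that lists both, using only operations introduced in these hunks; the session setup is an illustrative assumption:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	svc := lightsail.New(session.Must(session.NewSession()))

	// Blueprints identify the engine and major version; bundles identify the
	// performance specification. Both lists can be fetched with empty inputs.
	blueprints, err := svc.GetRelationalDatabaseBlueprints(&lightsail.GetRelationalDatabaseBlueprintsInput{})
	if err != nil {
		log.Fatal(err)
	}
	bundles, err := svc.GetRelationalDatabaseBundles(&lightsail.GetRelationalDatabaseBundlesInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(blueprints, bundles)
}
```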
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation ImportKeyPair for usage and error information. +// API operation GetRelationalDatabaseBundles for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6322,80 +6861,80 @@ func (c *Lightsail) ImportKeyPairRequest(input *ImportKeyPairInput) (req *reques // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/ImportKeyPair -func (c *Lightsail) ImportKeyPair(input *ImportKeyPairInput) (*ImportKeyPairOutput, error) { - req, out := c.ImportKeyPairRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseBundles +func (c *Lightsail) GetRelationalDatabaseBundles(input *GetRelationalDatabaseBundlesInput) (*GetRelationalDatabaseBundlesOutput, error) { + req, out := c.GetRelationalDatabaseBundlesRequest(input) return out, req.Send() } -// ImportKeyPairWithContext is the same as ImportKeyPair with the addition of +// GetRelationalDatabaseBundlesWithContext is the same as GetRelationalDatabaseBundles with the addition of // the ability to pass a context and additional request options. // -// See ImportKeyPair for details on how to use this API operation. +// See GetRelationalDatabaseBundles for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) ImportKeyPairWithContext(ctx aws.Context, input *ImportKeyPairInput, opts ...request.Option) (*ImportKeyPairOutput, error) { - req, out := c.ImportKeyPairRequest(input) +func (c *Lightsail) GetRelationalDatabaseBundlesWithContext(ctx aws.Context, input *GetRelationalDatabaseBundlesInput, opts ...request.Option) (*GetRelationalDatabaseBundlesOutput, error) { + req, out := c.GetRelationalDatabaseBundlesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opIsVpcPeered = "IsVpcPeered" +const opGetRelationalDatabaseEvents = "GetRelationalDatabaseEvents" -// IsVpcPeeredRequest generates a "aws/request.Request" representing the -// client's request for the IsVpcPeered operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseEventsRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseEvents operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See IsVpcPeered for more information on using the IsVpcPeered +// See GetRelationalDatabaseEvents for more information on using the GetRelationalDatabaseEvents // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. 
// // -// // Example sending a request using the IsVpcPeeredRequest method. -// req, resp := client.IsVpcPeeredRequest(params) +// // Example sending a request using the GetRelationalDatabaseEventsRequest method. +// req, resp := client.GetRelationalDatabaseEventsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/IsVpcPeered -func (c *Lightsail) IsVpcPeeredRequest(input *IsVpcPeeredInput) (req *request.Request, output *IsVpcPeeredOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseEvents +func (c *Lightsail) GetRelationalDatabaseEventsRequest(input *GetRelationalDatabaseEventsInput) (req *request.Request, output *GetRelationalDatabaseEventsOutput) { op := &request.Operation{ - Name: opIsVpcPeered, + Name: opGetRelationalDatabaseEvents, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &IsVpcPeeredInput{} + input = &GetRelationalDatabaseEventsInput{} } - output = &IsVpcPeeredOutput{} + output = &GetRelationalDatabaseEventsOutput{} req = c.newRequest(op, input, output) return } -// IsVpcPeered API operation for Amazon Lightsail. +// GetRelationalDatabaseEvents API operation for Amazon Lightsail. // -// Returns a Boolean value indicating whether your Lightsail VPC is peered. +// Returns a list of events for a specific database in Amazon Lightsail. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation IsVpcPeered for usage and error information. +// API operation GetRelationalDatabaseEvents for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6426,80 +6965,80 @@ func (c *Lightsail) IsVpcPeeredRequest(input *IsVpcPeeredInput) (req *request.Re // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/IsVpcPeered -func (c *Lightsail) IsVpcPeered(input *IsVpcPeeredInput) (*IsVpcPeeredOutput, error) { - req, out := c.IsVpcPeeredRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseEvents +func (c *Lightsail) GetRelationalDatabaseEvents(input *GetRelationalDatabaseEventsInput) (*GetRelationalDatabaseEventsOutput, error) { + req, out := c.GetRelationalDatabaseEventsRequest(input) return out, req.Send() } -// IsVpcPeeredWithContext is the same as IsVpcPeered with the addition of +// GetRelationalDatabaseEventsWithContext is the same as GetRelationalDatabaseEvents with the addition of // the ability to pass a context and additional request options. // -// See IsVpcPeered for details on how to use this API operation. +// See GetRelationalDatabaseEvents for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *Lightsail) IsVpcPeeredWithContext(ctx aws.Context, input *IsVpcPeeredInput, opts ...request.Option) (*IsVpcPeeredOutput, error) { - req, out := c.IsVpcPeeredRequest(input) +func (c *Lightsail) GetRelationalDatabaseEventsWithContext(ctx aws.Context, input *GetRelationalDatabaseEventsInput, opts ...request.Option) (*GetRelationalDatabaseEventsOutput, error) { + req, out := c.GetRelationalDatabaseEventsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opOpenInstancePublicPorts = "OpenInstancePublicPorts" +const opGetRelationalDatabaseLogEvents = "GetRelationalDatabaseLogEvents" -// OpenInstancePublicPortsRequest generates a "aws/request.Request" representing the -// client's request for the OpenInstancePublicPorts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseLogEventsRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseLogEvents operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See OpenInstancePublicPorts for more information on using the OpenInstancePublicPorts +// See GetRelationalDatabaseLogEvents for more information on using the GetRelationalDatabaseLogEvents // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the OpenInstancePublicPortsRequest method. -// req, resp := client.OpenInstancePublicPortsRequest(params) +// // Example sending a request using the GetRelationalDatabaseLogEventsRequest method. +// req, resp := client.GetRelationalDatabaseLogEventsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/OpenInstancePublicPorts -func (c *Lightsail) OpenInstancePublicPortsRequest(input *OpenInstancePublicPortsInput) (req *request.Request, output *OpenInstancePublicPortsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseLogEvents +func (c *Lightsail) GetRelationalDatabaseLogEventsRequest(input *GetRelationalDatabaseLogEventsInput) (req *request.Request, output *GetRelationalDatabaseLogEventsOutput) { op := &request.Operation{ - Name: opOpenInstancePublicPorts, + Name: opGetRelationalDatabaseLogEvents, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &OpenInstancePublicPortsInput{} + input = &GetRelationalDatabaseLogEventsInput{} } - output = &OpenInstancePublicPortsOutput{} + output = &GetRelationalDatabaseLogEventsOutput{} req = c.newRequest(op, input, output) return } -// OpenInstancePublicPorts API operation for Amazon Lightsail. +// GetRelationalDatabaseLogEvents API operation for Amazon Lightsail. // -// Adds public ports to an Amazon Lightsail instance. +// Returns a list of log events for a database in Amazon Lightsail. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
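The "runtime type assertion" mentioned just above is the usual `awserr` idiom. A hedged sketch of what that might look like for the new `GetRelationalDatabaseLogEvents` call; the client setup, the database and log stream names, and the input field names are assumptions for illustration and are not shown in this excerpt:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	svc := lightsail.New(session.Must(session.NewSession()))

	out, err := svc.GetRelationalDatabaseLogEvents(&lightsail.GetRelationalDatabaseLogEventsInput{
		RelationalDatabaseName: aws.String("example-db"), // placeholder database name
		LogStreamName:          aws.String("error"),      // placeholder log stream name
	})
	if err != nil {
		// The runtime type assertion described above: awserr.Error exposes
		// Code and Message for service-level failures.
		if aerr, ok := err.(awserr.Error); ok {
			log.Fatalf("%s: %s", aerr.Code(), aerr.Message())
		}
		log.Fatal(err)
	}
	fmt.Println(out)
}
```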
// // See the AWS API reference guide for Amazon Lightsail's -// API operation OpenInstancePublicPorts for usage and error information. +// API operation GetRelationalDatabaseLogEvents for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6530,80 +7069,81 @@ func (c *Lightsail) OpenInstancePublicPortsRequest(input *OpenInstancePublicPort // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/OpenInstancePublicPorts -func (c *Lightsail) OpenInstancePublicPorts(input *OpenInstancePublicPortsInput) (*OpenInstancePublicPortsOutput, error) { - req, out := c.OpenInstancePublicPortsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseLogEvents +func (c *Lightsail) GetRelationalDatabaseLogEvents(input *GetRelationalDatabaseLogEventsInput) (*GetRelationalDatabaseLogEventsOutput, error) { + req, out := c.GetRelationalDatabaseLogEventsRequest(input) return out, req.Send() } -// OpenInstancePublicPortsWithContext is the same as OpenInstancePublicPorts with the addition of +// GetRelationalDatabaseLogEventsWithContext is the same as GetRelationalDatabaseLogEvents with the addition of // the ability to pass a context and additional request options. // -// See OpenInstancePublicPorts for details on how to use this API operation. +// See GetRelationalDatabaseLogEvents for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) OpenInstancePublicPortsWithContext(ctx aws.Context, input *OpenInstancePublicPortsInput, opts ...request.Option) (*OpenInstancePublicPortsOutput, error) { - req, out := c.OpenInstancePublicPortsRequest(input) +func (c *Lightsail) GetRelationalDatabaseLogEventsWithContext(ctx aws.Context, input *GetRelationalDatabaseLogEventsInput, opts ...request.Option) (*GetRelationalDatabaseLogEventsOutput, error) { + req, out := c.GetRelationalDatabaseLogEventsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opPeerVpc = "PeerVpc" +const opGetRelationalDatabaseLogStreams = "GetRelationalDatabaseLogStreams" -// PeerVpcRequest generates a "aws/request.Request" representing the -// client's request for the PeerVpc operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseLogStreamsRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseLogStreams operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See PeerVpc for more information on using the PeerVpc +// See GetRelationalDatabaseLogStreams for more information on using the GetRelationalDatabaseLogStreams // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. 
Such as custom headers, or retry logic. // // -// // Example sending a request using the PeerVpcRequest method. -// req, resp := client.PeerVpcRequest(params) +// // Example sending a request using the GetRelationalDatabaseLogStreamsRequest method. +// req, resp := client.GetRelationalDatabaseLogStreamsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/PeerVpc -func (c *Lightsail) PeerVpcRequest(input *PeerVpcInput) (req *request.Request, output *PeerVpcOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseLogStreams +func (c *Lightsail) GetRelationalDatabaseLogStreamsRequest(input *GetRelationalDatabaseLogStreamsInput) (req *request.Request, output *GetRelationalDatabaseLogStreamsOutput) { op := &request.Operation{ - Name: opPeerVpc, + Name: opGetRelationalDatabaseLogStreams, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &PeerVpcInput{} + input = &GetRelationalDatabaseLogStreamsInput{} } - output = &PeerVpcOutput{} + output = &GetRelationalDatabaseLogStreamsOutput{} req = c.newRequest(op, input, output) return } -// PeerVpc API operation for Amazon Lightsail. +// GetRelationalDatabaseLogStreams API operation for Amazon Lightsail. // -// Tries to peer the Lightsail VPC with the user's default VPC. +// Returns a list of available log streams for a specific database in Amazon +// Lightsail. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation PeerVpc for usage and error information. +// API operation GetRelationalDatabaseLogStreams for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6634,81 +7174,81 @@ func (c *Lightsail) PeerVpcRequest(input *PeerVpcInput) (req *request.Request, o // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/PeerVpc -func (c *Lightsail) PeerVpc(input *PeerVpcInput) (*PeerVpcOutput, error) { - req, out := c.PeerVpcRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseLogStreams +func (c *Lightsail) GetRelationalDatabaseLogStreams(input *GetRelationalDatabaseLogStreamsInput) (*GetRelationalDatabaseLogStreamsOutput, error) { + req, out := c.GetRelationalDatabaseLogStreamsRequest(input) return out, req.Send() } -// PeerVpcWithContext is the same as PeerVpc with the addition of +// GetRelationalDatabaseLogStreamsWithContext is the same as GetRelationalDatabaseLogStreams with the addition of // the ability to pass a context and additional request options. // -// See PeerVpc for details on how to use this API operation. +// See GetRelationalDatabaseLogStreams for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *Lightsail) PeerVpcWithContext(ctx aws.Context, input *PeerVpcInput, opts ...request.Option) (*PeerVpcOutput, error) { - req, out := c.PeerVpcRequest(input) +func (c *Lightsail) GetRelationalDatabaseLogStreamsWithContext(ctx aws.Context, input *GetRelationalDatabaseLogStreamsInput, opts ...request.Option) (*GetRelationalDatabaseLogStreamsOutput, error) { + req, out := c.GetRelationalDatabaseLogStreamsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opPutInstancePublicPorts = "PutInstancePublicPorts" +const opGetRelationalDatabaseMasterUserPassword = "GetRelationalDatabaseMasterUserPassword" -// PutInstancePublicPortsRequest generates a "aws/request.Request" representing the -// client's request for the PutInstancePublicPorts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseMasterUserPasswordRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseMasterUserPassword operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See PutInstancePublicPorts for more information on using the PutInstancePublicPorts +// See GetRelationalDatabaseMasterUserPassword for more information on using the GetRelationalDatabaseMasterUserPassword // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the PutInstancePublicPortsRequest method. -// req, resp := client.PutInstancePublicPortsRequest(params) +// // Example sending a request using the GetRelationalDatabaseMasterUserPasswordRequest method. +// req, resp := client.GetRelationalDatabaseMasterUserPasswordRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/PutInstancePublicPorts -func (c *Lightsail) PutInstancePublicPortsRequest(input *PutInstancePublicPortsInput) (req *request.Request, output *PutInstancePublicPortsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseMasterUserPassword +func (c *Lightsail) GetRelationalDatabaseMasterUserPasswordRequest(input *GetRelationalDatabaseMasterUserPasswordInput) (req *request.Request, output *GetRelationalDatabaseMasterUserPasswordOutput) { op := &request.Operation{ - Name: opPutInstancePublicPorts, + Name: opGetRelationalDatabaseMasterUserPassword, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &PutInstancePublicPortsInput{} + input = &GetRelationalDatabaseMasterUserPasswordInput{} } - output = &PutInstancePublicPortsOutput{} + output = &GetRelationalDatabaseMasterUserPasswordOutput{} req = c.newRequest(op, input, output) return } -// PutInstancePublicPorts API operation for Amazon Lightsail. +// GetRelationalDatabaseMasterUserPassword API operation for Amazon Lightsail. // -// Sets the specified open ports for an Amazon Lightsail instance, and closes -// all ports for every protocol not included in the current request. 
+// Returns the current, previous, or pending versions of the master user password +// for a Lightsail database. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation PutInstancePublicPorts for usage and error information. +// API operation GetRelationalDatabaseMasterUserPassword for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6739,83 +7279,81 @@ func (c *Lightsail) PutInstancePublicPortsRequest(input *PutInstancePublicPortsI // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/PutInstancePublicPorts -func (c *Lightsail) PutInstancePublicPorts(input *PutInstancePublicPortsInput) (*PutInstancePublicPortsOutput, error) { - req, out := c.PutInstancePublicPortsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseMasterUserPassword +func (c *Lightsail) GetRelationalDatabaseMasterUserPassword(input *GetRelationalDatabaseMasterUserPasswordInput) (*GetRelationalDatabaseMasterUserPasswordOutput, error) { + req, out := c.GetRelationalDatabaseMasterUserPasswordRequest(input) return out, req.Send() } -// PutInstancePublicPortsWithContext is the same as PutInstancePublicPorts with the addition of +// GetRelationalDatabaseMasterUserPasswordWithContext is the same as GetRelationalDatabaseMasterUserPassword with the addition of // the ability to pass a context and additional request options. // -// See PutInstancePublicPorts for details on how to use this API operation. +// See GetRelationalDatabaseMasterUserPassword for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) PutInstancePublicPortsWithContext(ctx aws.Context, input *PutInstancePublicPortsInput, opts ...request.Option) (*PutInstancePublicPortsOutput, error) { - req, out := c.PutInstancePublicPortsRequest(input) +func (c *Lightsail) GetRelationalDatabaseMasterUserPasswordWithContext(ctx aws.Context, input *GetRelationalDatabaseMasterUserPasswordInput, opts ...request.Option) (*GetRelationalDatabaseMasterUserPasswordOutput, error) { + req, out := c.GetRelationalDatabaseMasterUserPasswordRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRebootInstance = "RebootInstance" +const opGetRelationalDatabaseMetricData = "GetRelationalDatabaseMetricData" -// RebootInstanceRequest generates a "aws/request.Request" representing the -// client's request for the RebootInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseMetricDataRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseMetricData operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RebootInstance for more information on using the RebootInstance +// See GetRelationalDatabaseMetricData for more information on using the GetRelationalDatabaseMetricData // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RebootInstanceRequest method. -// req, resp := client.RebootInstanceRequest(params) +// // Example sending a request using the GetRelationalDatabaseMetricDataRequest method. +// req, resp := client.GetRelationalDatabaseMetricDataRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/RebootInstance -func (c *Lightsail) RebootInstanceRequest(input *RebootInstanceInput) (req *request.Request, output *RebootInstanceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseMetricData +func (c *Lightsail) GetRelationalDatabaseMetricDataRequest(input *GetRelationalDatabaseMetricDataInput) (req *request.Request, output *GetRelationalDatabaseMetricDataOutput) { op := &request.Operation{ - Name: opRebootInstance, + Name: opGetRelationalDatabaseMetricData, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &RebootInstanceInput{} + input = &GetRelationalDatabaseMetricDataInput{} } - output = &RebootInstanceOutput{} + output = &GetRelationalDatabaseMetricDataOutput{} req = c.newRequest(op, input, output) return } -// RebootInstance API operation for Amazon Lightsail. +// GetRelationalDatabaseMetricData API operation for Amazon Lightsail. // -// Restarts a specific instance. When your Amazon Lightsail instance is finished -// rebooting, Lightsail assigns a new public IP address. To use the same IP -// address after restarting, create a static IP address and attach it to the -// instance. +// Returns the data points of the specified metric for a database in Amazon +// Lightsail. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation RebootInstance for usage and error information. +// API operation GetRelationalDatabaseMetricData for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6846,80 +7384,86 @@ func (c *Lightsail) RebootInstanceRequest(input *RebootInstanceInput) (req *requ // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/RebootInstance -func (c *Lightsail) RebootInstance(input *RebootInstanceInput) (*RebootInstanceOutput, error) { - req, out := c.RebootInstanceRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseMetricData +func (c *Lightsail) GetRelationalDatabaseMetricData(input *GetRelationalDatabaseMetricDataInput) (*GetRelationalDatabaseMetricDataOutput, error) { + req, out := c.GetRelationalDatabaseMetricDataRequest(input) return out, req.Send() } -// RebootInstanceWithContext is the same as RebootInstance with the addition of +// GetRelationalDatabaseMetricDataWithContext is the same as GetRelationalDatabaseMetricData with the addition of // the ability to pass a context and additional request options. // -// See RebootInstance for details on how to use this API operation. +// See GetRelationalDatabaseMetricData for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) RebootInstanceWithContext(ctx aws.Context, input *RebootInstanceInput, opts ...request.Option) (*RebootInstanceOutput, error) { - req, out := c.RebootInstanceRequest(input) +func (c *Lightsail) GetRelationalDatabaseMetricDataWithContext(ctx aws.Context, input *GetRelationalDatabaseMetricDataInput, opts ...request.Option) (*GetRelationalDatabaseMetricDataOutput, error) { + req, out := c.GetRelationalDatabaseMetricDataRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opReleaseStaticIp = "ReleaseStaticIp" +const opGetRelationalDatabaseParameters = "GetRelationalDatabaseParameters" -// ReleaseStaticIpRequest generates a "aws/request.Request" representing the -// client's request for the ReleaseStaticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseParametersRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseParameters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ReleaseStaticIp for more information on using the ReleaseStaticIp +// See GetRelationalDatabaseParameters for more information on using the GetRelationalDatabaseParameters // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ReleaseStaticIpRequest method. -// req, resp := client.ReleaseStaticIpRequest(params) +// // Example sending a request using the GetRelationalDatabaseParametersRequest method. 
+// req, resp := client.GetRelationalDatabaseParametersRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/ReleaseStaticIp -func (c *Lightsail) ReleaseStaticIpRequest(input *ReleaseStaticIpInput) (req *request.Request, output *ReleaseStaticIpOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseParameters +func (c *Lightsail) GetRelationalDatabaseParametersRequest(input *GetRelationalDatabaseParametersInput) (req *request.Request, output *GetRelationalDatabaseParametersOutput) { op := &request.Operation{ - Name: opReleaseStaticIp, + Name: opGetRelationalDatabaseParameters, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ReleaseStaticIpInput{} + input = &GetRelationalDatabaseParametersInput{} } - output = &ReleaseStaticIpOutput{} + output = &GetRelationalDatabaseParametersOutput{} req = c.newRequest(op, input, output) return } -// ReleaseStaticIp API operation for Amazon Lightsail. +// GetRelationalDatabaseParameters API operation for Amazon Lightsail. // -// Deletes a specific static IP from your account. +// Returns all of the runtime parameters offered by the underlying database +// software, or engine, for a specific database in Amazon Lightsail. +// +// In addition to the parameter names and values, this operation returns other +// information about each parameter. This information includes whether changes +// require a reboot, whether the parameter is modifiable, the allowed values, +// and the data types. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation ReleaseStaticIp for usage and error information. +// API operation GetRelationalDatabaseParameters for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -6950,81 +7494,80 @@ func (c *Lightsail) ReleaseStaticIpRequest(input *ReleaseStaticIpInput) (req *re // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/ReleaseStaticIp -func (c *Lightsail) ReleaseStaticIp(input *ReleaseStaticIpInput) (*ReleaseStaticIpOutput, error) { - req, out := c.ReleaseStaticIpRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseParameters +func (c *Lightsail) GetRelationalDatabaseParameters(input *GetRelationalDatabaseParametersInput) (*GetRelationalDatabaseParametersOutput, error) { + req, out := c.GetRelationalDatabaseParametersRequest(input) return out, req.Send() } -// ReleaseStaticIpWithContext is the same as ReleaseStaticIp with the addition of +// GetRelationalDatabaseParametersWithContext is the same as GetRelationalDatabaseParameters with the addition of // the ability to pass a context and additional request options. // -// See ReleaseStaticIp for details on how to use this API operation. +// See GetRelationalDatabaseParameters for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) ReleaseStaticIpWithContext(ctx aws.Context, input *ReleaseStaticIpInput, opts ...request.Option) (*ReleaseStaticIpOutput, error) { - req, out := c.ReleaseStaticIpRequest(input) +func (c *Lightsail) GetRelationalDatabaseParametersWithContext(ctx aws.Context, input *GetRelationalDatabaseParametersInput, opts ...request.Option) (*GetRelationalDatabaseParametersOutput, error) { + req, out := c.GetRelationalDatabaseParametersRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStartInstance = "StartInstance" +const opGetRelationalDatabaseSnapshot = "GetRelationalDatabaseSnapshot" -// StartInstanceRequest generates a "aws/request.Request" representing the -// client's request for the StartInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseSnapshot operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StartInstance for more information on using the StartInstance +// See GetRelationalDatabaseSnapshot for more information on using the GetRelationalDatabaseSnapshot // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartInstanceRequest method. -// req, resp := client.StartInstanceRequest(params) +// // Example sending a request using the GetRelationalDatabaseSnapshotRequest method. +// req, resp := client.GetRelationalDatabaseSnapshotRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StartInstance -func (c *Lightsail) StartInstanceRequest(input *StartInstanceInput) (req *request.Request, output *StartInstanceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseSnapshot +func (c *Lightsail) GetRelationalDatabaseSnapshotRequest(input *GetRelationalDatabaseSnapshotInput) (req *request.Request, output *GetRelationalDatabaseSnapshotOutput) { op := &request.Operation{ - Name: opStartInstance, + Name: opGetRelationalDatabaseSnapshot, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StartInstanceInput{} + input = &GetRelationalDatabaseSnapshotInput{} } - output = &StartInstanceOutput{} + output = &GetRelationalDatabaseSnapshotOutput{} req = c.newRequest(op, input, output) return } -// StartInstance API operation for Amazon Lightsail. +// GetRelationalDatabaseSnapshot API operation for Amazon Lightsail. // -// Starts a specific Amazon Lightsail instance from a stopped state. To restart -// an instance, use the reboot instance operation. +// Returns information about a specific database snapshot in Amazon Lightsail. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
// // See the AWS API reference guide for Amazon Lightsail's -// API operation StartInstance for usage and error information. +// API operation GetRelationalDatabaseSnapshot for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -7055,80 +7598,80 @@ func (c *Lightsail) StartInstanceRequest(input *StartInstanceInput) (req *reques // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StartInstance -func (c *Lightsail) StartInstance(input *StartInstanceInput) (*StartInstanceOutput, error) { - req, out := c.StartInstanceRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseSnapshot +func (c *Lightsail) GetRelationalDatabaseSnapshot(input *GetRelationalDatabaseSnapshotInput) (*GetRelationalDatabaseSnapshotOutput, error) { + req, out := c.GetRelationalDatabaseSnapshotRequest(input) return out, req.Send() } -// StartInstanceWithContext is the same as StartInstance with the addition of +// GetRelationalDatabaseSnapshotWithContext is the same as GetRelationalDatabaseSnapshot with the addition of // the ability to pass a context and additional request options. // -// See StartInstance for details on how to use this API operation. +// See GetRelationalDatabaseSnapshot for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) StartInstanceWithContext(ctx aws.Context, input *StartInstanceInput, opts ...request.Option) (*StartInstanceOutput, error) { - req, out := c.StartInstanceRequest(input) +func (c *Lightsail) GetRelationalDatabaseSnapshotWithContext(ctx aws.Context, input *GetRelationalDatabaseSnapshotInput, opts ...request.Option) (*GetRelationalDatabaseSnapshotOutput, error) { + req, out := c.GetRelationalDatabaseSnapshotRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStopInstance = "StopInstance" +const opGetRelationalDatabaseSnapshots = "GetRelationalDatabaseSnapshots" -// StopInstanceRequest generates a "aws/request.Request" representing the -// client's request for the StopInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabaseSnapshotsRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabaseSnapshots operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StopInstance for more information on using the StopInstance +// See GetRelationalDatabaseSnapshots for more information on using the GetRelationalDatabaseSnapshots // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StopInstanceRequest method. 
-// req, resp := client.StopInstanceRequest(params) +// // Example sending a request using the GetRelationalDatabaseSnapshotsRequest method. +// req, resp := client.GetRelationalDatabaseSnapshotsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StopInstance -func (c *Lightsail) StopInstanceRequest(input *StopInstanceInput) (req *request.Request, output *StopInstanceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseSnapshots +func (c *Lightsail) GetRelationalDatabaseSnapshotsRequest(input *GetRelationalDatabaseSnapshotsInput) (req *request.Request, output *GetRelationalDatabaseSnapshotsOutput) { op := &request.Operation{ - Name: opStopInstance, + Name: opGetRelationalDatabaseSnapshots, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StopInstanceInput{} + input = &GetRelationalDatabaseSnapshotsInput{} } - output = &StopInstanceOutput{} + output = &GetRelationalDatabaseSnapshotsOutput{} req = c.newRequest(op, input, output) return } -// StopInstance API operation for Amazon Lightsail. +// GetRelationalDatabaseSnapshots API operation for Amazon Lightsail. // -// Stops a specific Amazon Lightsail instance that is currently running. +// Returns information about all of your database snapshots in Amazon Lightsail. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation StopInstance for usage and error information. +// API operation GetRelationalDatabaseSnapshots for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -7159,80 +7702,80 @@ func (c *Lightsail) StopInstanceRequest(input *StopInstanceInput) (req *request. // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StopInstance -func (c *Lightsail) StopInstance(input *StopInstanceInput) (*StopInstanceOutput, error) { - req, out := c.StopInstanceRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabaseSnapshots +func (c *Lightsail) GetRelationalDatabaseSnapshots(input *GetRelationalDatabaseSnapshotsInput) (*GetRelationalDatabaseSnapshotsOutput, error) { + req, out := c.GetRelationalDatabaseSnapshotsRequest(input) return out, req.Send() } -// StopInstanceWithContext is the same as StopInstance with the addition of +// GetRelationalDatabaseSnapshotsWithContext is the same as GetRelationalDatabaseSnapshots with the addition of // the ability to pass a context and additional request options. // -// See StopInstance for details on how to use this API operation. +// See GetRelationalDatabaseSnapshots for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *Lightsail) StopInstanceWithContext(ctx aws.Context, input *StopInstanceInput, opts ...request.Option) (*StopInstanceOutput, error) { - req, out := c.StopInstanceRequest(input) +func (c *Lightsail) GetRelationalDatabaseSnapshotsWithContext(ctx aws.Context, input *GetRelationalDatabaseSnapshotsInput, opts ...request.Option) (*GetRelationalDatabaseSnapshotsOutput, error) { + req, out := c.GetRelationalDatabaseSnapshotsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUnpeerVpc = "UnpeerVpc" +const opGetRelationalDatabases = "GetRelationalDatabases" -// UnpeerVpcRequest generates a "aws/request.Request" representing the -// client's request for the UnpeerVpc operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetRelationalDatabasesRequest generates a "aws/request.Request" representing the +// client's request for the GetRelationalDatabases operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UnpeerVpc for more information on using the UnpeerVpc +// See GetRelationalDatabases for more information on using the GetRelationalDatabases // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UnpeerVpcRequest method. -// req, resp := client.UnpeerVpcRequest(params) +// // Example sending a request using the GetRelationalDatabasesRequest method. +// req, resp := client.GetRelationalDatabasesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UnpeerVpc -func (c *Lightsail) UnpeerVpcRequest(input *UnpeerVpcInput) (req *request.Request, output *UnpeerVpcOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabases +func (c *Lightsail) GetRelationalDatabasesRequest(input *GetRelationalDatabasesInput) (req *request.Request, output *GetRelationalDatabasesOutput) { op := &request.Operation{ - Name: opUnpeerVpc, + Name: opGetRelationalDatabases, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UnpeerVpcInput{} + input = &GetRelationalDatabasesInput{} } - output = &UnpeerVpcOutput{} + output = &GetRelationalDatabasesOutput{} req = c.newRequest(op, input, output) return } -// UnpeerVpc API operation for Amazon Lightsail. +// GetRelationalDatabases API operation for Amazon Lightsail. // -// Attempts to unpeer the Lightsail VPC from the user's default VPC. +// Returns information about all of your databases in Amazon Lightsail. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation UnpeerVpc for usage and error information. +// API operation GetRelationalDatabases for usage and error information. 
// // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -7263,80 +7806,80 @@ func (c *Lightsail) UnpeerVpcRequest(input *UnpeerVpcInput) (req *request.Reques // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UnpeerVpc -func (c *Lightsail) UnpeerVpc(input *UnpeerVpcInput) (*UnpeerVpcOutput, error) { - req, out := c.UnpeerVpcRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetRelationalDatabases +func (c *Lightsail) GetRelationalDatabases(input *GetRelationalDatabasesInput) (*GetRelationalDatabasesOutput, error) { + req, out := c.GetRelationalDatabasesRequest(input) return out, req.Send() } -// UnpeerVpcWithContext is the same as UnpeerVpc with the addition of +// GetRelationalDatabasesWithContext is the same as GetRelationalDatabases with the addition of // the ability to pass a context and additional request options. // -// See UnpeerVpc for details on how to use this API operation. +// See GetRelationalDatabases for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) UnpeerVpcWithContext(ctx aws.Context, input *UnpeerVpcInput, opts ...request.Option) (*UnpeerVpcOutput, error) { - req, out := c.UnpeerVpcRequest(input) +func (c *Lightsail) GetRelationalDatabasesWithContext(ctx aws.Context, input *GetRelationalDatabasesInput, opts ...request.Option) (*GetRelationalDatabasesOutput, error) { + req, out := c.GetRelationalDatabasesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateDomainEntry = "UpdateDomainEntry" +const opGetStaticIp = "GetStaticIp" -// UpdateDomainEntryRequest generates a "aws/request.Request" representing the -// client's request for the UpdateDomainEntry operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetStaticIpRequest generates a "aws/request.Request" representing the +// client's request for the GetStaticIp operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateDomainEntry for more information on using the UpdateDomainEntry +// See GetStaticIp for more information on using the GetStaticIp // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateDomainEntryRequest method. -// req, resp := client.UpdateDomainEntryRequest(params) +// // Example sending a request using the GetStaticIpRequest method. 
+// req, resp := client.GetStaticIpRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateDomainEntry -func (c *Lightsail) UpdateDomainEntryRequest(input *UpdateDomainEntryInput) (req *request.Request, output *UpdateDomainEntryOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetStaticIp +func (c *Lightsail) GetStaticIpRequest(input *GetStaticIpInput) (req *request.Request, output *GetStaticIpOutput) { op := &request.Operation{ - Name: opUpdateDomainEntry, + Name: opGetStaticIp, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateDomainEntryInput{} + input = &GetStaticIpInput{} } - output = &UpdateDomainEntryOutput{} + output = &GetStaticIpOutput{} req = c.newRequest(op, input, output) return } -// UpdateDomainEntry API operation for Amazon Lightsail. +// GetStaticIp API operation for Amazon Lightsail. // -// Updates a domain recordset after it is created. +// Returns information about a specific static IP. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation UpdateDomainEntry for usage and error information. +// API operation GetStaticIp for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -7367,81 +7910,80 @@ func (c *Lightsail) UpdateDomainEntryRequest(input *UpdateDomainEntryInput) (req // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateDomainEntry -func (c *Lightsail) UpdateDomainEntry(input *UpdateDomainEntryInput) (*UpdateDomainEntryOutput, error) { - req, out := c.UpdateDomainEntryRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetStaticIp +func (c *Lightsail) GetStaticIp(input *GetStaticIpInput) (*GetStaticIpOutput, error) { + req, out := c.GetStaticIpRequest(input) return out, req.Send() } -// UpdateDomainEntryWithContext is the same as UpdateDomainEntry with the addition of +// GetStaticIpWithContext is the same as GetStaticIp with the addition of // the ability to pass a context and additional request options. // -// See UpdateDomainEntry for details on how to use this API operation. +// See GetStaticIp for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) UpdateDomainEntryWithContext(ctx aws.Context, input *UpdateDomainEntryInput, opts ...request.Option) (*UpdateDomainEntryOutput, error) { - req, out := c.UpdateDomainEntryRequest(input) +func (c *Lightsail) GetStaticIpWithContext(ctx aws.Context, input *GetStaticIpInput, opts ...request.Option) (*GetStaticIpOutput, error) { + req, out := c.GetStaticIpRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) 
return out, req.Send() } -const opUpdateLoadBalancerAttribute = "UpdateLoadBalancerAttribute" +const opGetStaticIps = "GetStaticIps" -// UpdateLoadBalancerAttributeRequest generates a "aws/request.Request" representing the -// client's request for the UpdateLoadBalancerAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetStaticIpsRequest generates a "aws/request.Request" representing the +// client's request for the GetStaticIps operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateLoadBalancerAttribute for more information on using the UpdateLoadBalancerAttribute +// See GetStaticIps for more information on using the GetStaticIps // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateLoadBalancerAttributeRequest method. -// req, resp := client.UpdateLoadBalancerAttributeRequest(params) +// // Example sending a request using the GetStaticIpsRequest method. +// req, resp := client.GetStaticIpsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateLoadBalancerAttribute -func (c *Lightsail) UpdateLoadBalancerAttributeRequest(input *UpdateLoadBalancerAttributeInput) (req *request.Request, output *UpdateLoadBalancerAttributeOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetStaticIps +func (c *Lightsail) GetStaticIpsRequest(input *GetStaticIpsInput) (req *request.Request, output *GetStaticIpsOutput) { op := &request.Operation{ - Name: opUpdateLoadBalancerAttribute, + Name: opGetStaticIps, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateLoadBalancerAttributeInput{} + input = &GetStaticIpsInput{} } - output = &UpdateLoadBalancerAttributeOutput{} + output = &GetStaticIpsOutput{} req = c.newRequest(op, input, output) return } -// UpdateLoadBalancerAttribute API operation for Amazon Lightsail. +// GetStaticIps API operation for Amazon Lightsail. // -// Updates the specified attribute for a load balancer. You can only update -// one attribute at a time. +// Returns information about all static IPs in the user's account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Lightsail's -// API operation UpdateLoadBalancerAttribute for usage and error information. +// API operation GetStaticIps for usage and error information. // // Returned Error Codes: // * ErrCodeServiceException "ServiceException" @@ -7472,52 +8014,3759 @@ func (c *Lightsail) UpdateLoadBalancerAttributeRequest(input *UpdateLoadBalancer // * ErrCodeUnauthenticatedException "UnauthenticatedException" // Lightsail throws this exception when the user has not been authenticated. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateLoadBalancerAttribute -func (c *Lightsail) UpdateLoadBalancerAttribute(input *UpdateLoadBalancerAttributeInput) (*UpdateLoadBalancerAttributeOutput, error) { - req, out := c.UpdateLoadBalancerAttributeRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/GetStaticIps +func (c *Lightsail) GetStaticIps(input *GetStaticIpsInput) (*GetStaticIpsOutput, error) { + req, out := c.GetStaticIpsRequest(input) return out, req.Send() } -// UpdateLoadBalancerAttributeWithContext is the same as UpdateLoadBalancerAttribute with the addition of +// GetStaticIpsWithContext is the same as GetStaticIps with the addition of // the ability to pass a context and additional request options. // -// See UpdateLoadBalancerAttribute for details on how to use this API operation. +// See GetStaticIps for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *Lightsail) UpdateLoadBalancerAttributeWithContext(ctx aws.Context, input *UpdateLoadBalancerAttributeInput, opts ...request.Option) (*UpdateLoadBalancerAttributeOutput, error) { - req, out := c.UpdateLoadBalancerAttributeRequest(input) +func (c *Lightsail) GetStaticIpsWithContext(ctx aws.Context, input *GetStaticIpsInput, opts ...request.Option) (*GetStaticIpsOutput, error) { + req, out := c.GetStaticIpsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opImportKeyPair = "ImportKeyPair" + +// ImportKeyPairRequest generates a "aws/request.Request" representing the +// client's request for the ImportKeyPair operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ImportKeyPair for more information on using the ImportKeyPair +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ImportKeyPairRequest method. +// req, resp := client.ImportKeyPairRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/ImportKeyPair +func (c *Lightsail) ImportKeyPairRequest(input *ImportKeyPairInput) (req *request.Request, output *ImportKeyPairOutput) { + op := &request.Operation{ + Name: opImportKeyPair, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ImportKeyPairInput{} + } + + output = &ImportKeyPairOutput{} + req = c.newRequest(op, input, output) + return +} + +// ImportKeyPair API operation for Amazon Lightsail. +// +// Imports a public SSH key from a specific key pair. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon Lightsail's +// API operation ImportKeyPair for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/ImportKeyPair +func (c *Lightsail) ImportKeyPair(input *ImportKeyPairInput) (*ImportKeyPairOutput, error) { + req, out := c.ImportKeyPairRequest(input) + return out, req.Send() +} + +// ImportKeyPairWithContext is the same as ImportKeyPair with the addition of +// the ability to pass a context and additional request options. +// +// See ImportKeyPair for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) ImportKeyPairWithContext(ctx aws.Context, input *ImportKeyPairInput, opts ...request.Option) (*ImportKeyPairOutput, error) { + req, out := c.ImportKeyPairRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opIsVpcPeered = "IsVpcPeered" + +// IsVpcPeeredRequest generates a "aws/request.Request" representing the +// client's request for the IsVpcPeered operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See IsVpcPeered for more information on using the IsVpcPeered +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the IsVpcPeeredRequest method. 
+// req, resp := client.IsVpcPeeredRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/IsVpcPeered +func (c *Lightsail) IsVpcPeeredRequest(input *IsVpcPeeredInput) (req *request.Request, output *IsVpcPeeredOutput) { + op := &request.Operation{ + Name: opIsVpcPeered, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &IsVpcPeeredInput{} + } + + output = &IsVpcPeeredOutput{} + req = c.newRequest(op, input, output) + return +} + +// IsVpcPeered API operation for Amazon Lightsail. +// +// Returns a Boolean value indicating whether your Lightsail VPC is peered. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation IsVpcPeered for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/IsVpcPeered +func (c *Lightsail) IsVpcPeered(input *IsVpcPeeredInput) (*IsVpcPeeredOutput, error) { + req, out := c.IsVpcPeeredRequest(input) + return out, req.Send() +} + +// IsVpcPeeredWithContext is the same as IsVpcPeered with the addition of +// the ability to pass a context and additional request options. +// +// See IsVpcPeered for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) IsVpcPeeredWithContext(ctx aws.Context, input *IsVpcPeeredInput, opts ...request.Option) (*IsVpcPeeredOutput, error) { + req, out := c.IsVpcPeeredRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -type AllocateStaticIpInput struct { +const opOpenInstancePublicPorts = "OpenInstancePublicPorts" + +// OpenInstancePublicPortsRequest generates a "aws/request.Request" representing the +// client's request for the OpenInstancePublicPorts operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See OpenInstancePublicPorts for more information on using the OpenInstancePublicPorts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the OpenInstancePublicPortsRequest method. +// req, resp := client.OpenInstancePublicPortsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/OpenInstancePublicPorts +func (c *Lightsail) OpenInstancePublicPortsRequest(input *OpenInstancePublicPortsInput) (req *request.Request, output *OpenInstancePublicPortsOutput) { + op := &request.Operation{ + Name: opOpenInstancePublicPorts, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &OpenInstancePublicPortsInput{} + } + + output = &OpenInstancePublicPortsOutput{} + req = c.newRequest(op, input, output) + return +} + +// OpenInstancePublicPorts API operation for Amazon Lightsail. +// +// Adds public ports to an Amazon Lightsail instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation OpenInstancePublicPorts for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/OpenInstancePublicPorts +func (c *Lightsail) OpenInstancePublicPorts(input *OpenInstancePublicPortsInput) (*OpenInstancePublicPortsOutput, error) { + req, out := c.OpenInstancePublicPortsRequest(input) + return out, req.Send() +} + +// OpenInstancePublicPortsWithContext is the same as OpenInstancePublicPorts with the addition of +// the ability to pass a context and additional request options. 
+// +// See OpenInstancePublicPorts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) OpenInstancePublicPortsWithContext(ctx aws.Context, input *OpenInstancePublicPortsInput, opts ...request.Option) (*OpenInstancePublicPortsOutput, error) { + req, out := c.OpenInstancePublicPortsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPeerVpc = "PeerVpc" + +// PeerVpcRequest generates a "aws/request.Request" representing the +// client's request for the PeerVpc operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PeerVpc for more information on using the PeerVpc +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PeerVpcRequest method. +// req, resp := client.PeerVpcRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/PeerVpc +func (c *Lightsail) PeerVpcRequest(input *PeerVpcInput) (req *request.Request, output *PeerVpcOutput) { + op := &request.Operation{ + Name: opPeerVpc, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PeerVpcInput{} + } + + output = &PeerVpcOutput{} + req = c.newRequest(op, input, output) + return +} + +// PeerVpc API operation for Amazon Lightsail. +// +// Tries to peer the Lightsail VPC with the user's default VPC. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation PeerVpc for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. 
+// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/PeerVpc +func (c *Lightsail) PeerVpc(input *PeerVpcInput) (*PeerVpcOutput, error) { + req, out := c.PeerVpcRequest(input) + return out, req.Send() +} + +// PeerVpcWithContext is the same as PeerVpc with the addition of +// the ability to pass a context and additional request options. +// +// See PeerVpc for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) PeerVpcWithContext(ctx aws.Context, input *PeerVpcInput, opts ...request.Option) (*PeerVpcOutput, error) { + req, out := c.PeerVpcRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutInstancePublicPorts = "PutInstancePublicPorts" + +// PutInstancePublicPortsRequest generates a "aws/request.Request" representing the +// client's request for the PutInstancePublicPorts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutInstancePublicPorts for more information on using the PutInstancePublicPorts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutInstancePublicPortsRequest method. +// req, resp := client.PutInstancePublicPortsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/PutInstancePublicPorts +func (c *Lightsail) PutInstancePublicPortsRequest(input *PutInstancePublicPortsInput) (req *request.Request, output *PutInstancePublicPortsOutput) { + op := &request.Operation{ + Name: opPutInstancePublicPorts, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutInstancePublicPortsInput{} + } + + output = &PutInstancePublicPortsOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutInstancePublicPorts API operation for Amazon Lightsail. +// +// Sets the specified open ports for an Amazon Lightsail instance, and closes +// all ports for every protocol not included in the current request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation PutInstancePublicPorts for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. 
Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/PutInstancePublicPorts +func (c *Lightsail) PutInstancePublicPorts(input *PutInstancePublicPortsInput) (*PutInstancePublicPortsOutput, error) { + req, out := c.PutInstancePublicPortsRequest(input) + return out, req.Send() +} + +// PutInstancePublicPortsWithContext is the same as PutInstancePublicPorts with the addition of +// the ability to pass a context and additional request options. +// +// See PutInstancePublicPorts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) PutInstancePublicPortsWithContext(ctx aws.Context, input *PutInstancePublicPortsInput, opts ...request.Option) (*PutInstancePublicPortsOutput, error) { + req, out := c.PutInstancePublicPortsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRebootInstance = "RebootInstance" + +// RebootInstanceRequest generates a "aws/request.Request" representing the +// client's request for the RebootInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RebootInstance for more information on using the RebootInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RebootInstanceRequest method. +// req, resp := client.RebootInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/RebootInstance +func (c *Lightsail) RebootInstanceRequest(input *RebootInstanceInput) (req *request.Request, output *RebootInstanceOutput) { + op := &request.Operation{ + Name: opRebootInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RebootInstanceInput{} + } + + output = &RebootInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// RebootInstance API operation for Amazon Lightsail. 
+// +// Restarts a specific instance. When your Amazon Lightsail instance is finished +// rebooting, Lightsail assigns a new public IP address. To use the same IP +// address after restarting, create a static IP address and attach it to the +// instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation RebootInstance for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/RebootInstance +func (c *Lightsail) RebootInstance(input *RebootInstanceInput) (*RebootInstanceOutput, error) { + req, out := c.RebootInstanceRequest(input) + return out, req.Send() +} + +// RebootInstanceWithContext is the same as RebootInstance with the addition of +// the ability to pass a context and additional request options. +// +// See RebootInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) RebootInstanceWithContext(ctx aws.Context, input *RebootInstanceInput, opts ...request.Option) (*RebootInstanceOutput, error) { + req, out := c.RebootInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRebootRelationalDatabase = "RebootRelationalDatabase" + +// RebootRelationalDatabaseRequest generates a "aws/request.Request" representing the +// client's request for the RebootRelationalDatabase operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RebootRelationalDatabase for more information on using the RebootRelationalDatabase +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RebootRelationalDatabaseRequest method. +// req, resp := client.RebootRelationalDatabaseRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/RebootRelationalDatabase +func (c *Lightsail) RebootRelationalDatabaseRequest(input *RebootRelationalDatabaseInput) (req *request.Request, output *RebootRelationalDatabaseOutput) { + op := &request.Operation{ + Name: opRebootRelationalDatabase, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RebootRelationalDatabaseInput{} + } + + output = &RebootRelationalDatabaseOutput{} + req = c.newRequest(op, input, output) + return +} + +// RebootRelationalDatabase API operation for Amazon Lightsail. +// +// Restarts a specific database in Amazon Lightsail. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation RebootRelationalDatabase for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/RebootRelationalDatabase +func (c *Lightsail) RebootRelationalDatabase(input *RebootRelationalDatabaseInput) (*RebootRelationalDatabaseOutput, error) { + req, out := c.RebootRelationalDatabaseRequest(input) + return out, req.Send() +} + +// RebootRelationalDatabaseWithContext is the same as RebootRelationalDatabase with the addition of +// the ability to pass a context and additional request options. +// +// See RebootRelationalDatabase for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *Lightsail) RebootRelationalDatabaseWithContext(ctx aws.Context, input *RebootRelationalDatabaseInput, opts ...request.Option) (*RebootRelationalDatabaseOutput, error) { + req, out := c.RebootRelationalDatabaseRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opReleaseStaticIp = "ReleaseStaticIp" + +// ReleaseStaticIpRequest generates a "aws/request.Request" representing the +// client's request for the ReleaseStaticIp operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ReleaseStaticIp for more information on using the ReleaseStaticIp +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ReleaseStaticIpRequest method. +// req, resp := client.ReleaseStaticIpRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/ReleaseStaticIp +func (c *Lightsail) ReleaseStaticIpRequest(input *ReleaseStaticIpInput) (req *request.Request, output *ReleaseStaticIpOutput) { + op := &request.Operation{ + Name: opReleaseStaticIp, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ReleaseStaticIpInput{} + } + + output = &ReleaseStaticIpOutput{} + req = c.newRequest(op, input, output) + return +} + +// ReleaseStaticIp API operation for Amazon Lightsail. +// +// Deletes a specific static IP from your account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation ReleaseStaticIp for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/ReleaseStaticIp +func (c *Lightsail) ReleaseStaticIp(input *ReleaseStaticIpInput) (*ReleaseStaticIpOutput, error) { + req, out := c.ReleaseStaticIpRequest(input) + return out, req.Send() +} + +// ReleaseStaticIpWithContext is the same as ReleaseStaticIp with the addition of +// the ability to pass a context and additional request options. +// +// See ReleaseStaticIp for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) ReleaseStaticIpWithContext(ctx aws.Context, input *ReleaseStaticIpInput, opts ...request.Option) (*ReleaseStaticIpOutput, error) { + req, out := c.ReleaseStaticIpRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartInstance = "StartInstance" + +// StartInstanceRequest generates a "aws/request.Request" representing the +// client's request for the StartInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartInstance for more information on using the StartInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartInstanceRequest method. +// req, resp := client.StartInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StartInstance +func (c *Lightsail) StartInstanceRequest(input *StartInstanceInput) (req *request.Request, output *StartInstanceOutput) { + op := &request.Operation{ + Name: opStartInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartInstanceInput{} + } + + output = &StartInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartInstance API operation for Amazon Lightsail. +// +// Starts a specific Amazon Lightsail instance from a stopped state. To restart +// an instance, use the reboot instance operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation StartInstance for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. 
+// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StartInstance +func (c *Lightsail) StartInstance(input *StartInstanceInput) (*StartInstanceOutput, error) { + req, out := c.StartInstanceRequest(input) + return out, req.Send() +} + +// StartInstanceWithContext is the same as StartInstance with the addition of +// the ability to pass a context and additional request options. +// +// See StartInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) StartInstanceWithContext(ctx aws.Context, input *StartInstanceInput, opts ...request.Option) (*StartInstanceOutput, error) { + req, out := c.StartInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartRelationalDatabase = "StartRelationalDatabase" + +// StartRelationalDatabaseRequest generates a "aws/request.Request" representing the +// client's request for the StartRelationalDatabase operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartRelationalDatabase for more information on using the StartRelationalDatabase +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartRelationalDatabaseRequest method. +// req, resp := client.StartRelationalDatabaseRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StartRelationalDatabase +func (c *Lightsail) StartRelationalDatabaseRequest(input *StartRelationalDatabaseInput) (req *request.Request, output *StartRelationalDatabaseOutput) { + op := &request.Operation{ + Name: opStartRelationalDatabase, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartRelationalDatabaseInput{} + } + + output = &StartRelationalDatabaseOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartRelationalDatabase API operation for Amazon Lightsail. +// +// Starts a specific database from a stopped state in Amazon Lightsail. 
To restart +// a database, use the reboot relational database operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation StartRelationalDatabase for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StartRelationalDatabase +func (c *Lightsail) StartRelationalDatabase(input *StartRelationalDatabaseInput) (*StartRelationalDatabaseOutput, error) { + req, out := c.StartRelationalDatabaseRequest(input) + return out, req.Send() +} + +// StartRelationalDatabaseWithContext is the same as StartRelationalDatabase with the addition of +// the ability to pass a context and additional request options. +// +// See StartRelationalDatabase for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) StartRelationalDatabaseWithContext(ctx aws.Context, input *StartRelationalDatabaseInput, opts ...request.Option) (*StartRelationalDatabaseOutput, error) { + req, out := c.StartRelationalDatabaseRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopInstance = "StopInstance" + +// StopInstanceRequest generates a "aws/request.Request" representing the +// client's request for the StopInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopInstance for more information on using the StopInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the StopInstanceRequest method. +// req, resp := client.StopInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StopInstance +func (c *Lightsail) StopInstanceRequest(input *StopInstanceInput) (req *request.Request, output *StopInstanceOutput) { + op := &request.Operation{ + Name: opStopInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopInstanceInput{} + } + + output = &StopInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopInstance API operation for Amazon Lightsail. +// +// Stops a specific Amazon Lightsail instance that is currently running. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation StopInstance for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StopInstance +func (c *Lightsail) StopInstance(input *StopInstanceInput) (*StopInstanceOutput, error) { + req, out := c.StopInstanceRequest(input) + return out, req.Send() +} + +// StopInstanceWithContext is the same as StopInstance with the addition of +// the ability to pass a context and additional request options. +// +// See StopInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) StopInstanceWithContext(ctx aws.Context, input *StopInstanceInput, opts ...request.Option) (*StopInstanceOutput, error) { + req, out := c.StopInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
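+ // At this point req carries the caller's context and any per-request
+ // options; Send executes the call and returns once the response, or an
+ // error, is available.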
+ return out, req.Send() +} + +const opStopRelationalDatabase = "StopRelationalDatabase" + +// StopRelationalDatabaseRequest generates a "aws/request.Request" representing the +// client's request for the StopRelationalDatabase operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopRelationalDatabase for more information on using the StopRelationalDatabase +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopRelationalDatabaseRequest method. +// req, resp := client.StopRelationalDatabaseRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StopRelationalDatabase +func (c *Lightsail) StopRelationalDatabaseRequest(input *StopRelationalDatabaseInput) (req *request.Request, output *StopRelationalDatabaseOutput) { + op := &request.Operation{ + Name: opStopRelationalDatabase, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopRelationalDatabaseInput{} + } + + output = &StopRelationalDatabaseOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopRelationalDatabase API operation for Amazon Lightsail. +// +// Stops a specific database that is currently running in Amazon Lightsail. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation StopRelationalDatabase for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. 
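+//
+// As a caller-side illustration of the type assertion described above (the
+// input field and values shown are assumed examples):
+//
+//    out, err := client.StopRelationalDatabase(&lightsail.StopRelationalDatabaseInput{
+//        RelationalDatabaseName: aws.String("my-database"),
+//    })
+//    if err != nil {
+//        if aerr, ok := err.(awserr.Error); ok {
+//            switch aerr.Code() {
+//            case lightsail.ErrCodeNotFoundException:
+//                // The named database could not be found.
+//            default:
+//                fmt.Println(aerr.Code(), aerr.Message())
+//            }
+//        }
+//    }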
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/StopRelationalDatabase +func (c *Lightsail) StopRelationalDatabase(input *StopRelationalDatabaseInput) (*StopRelationalDatabaseOutput, error) { + req, out := c.StopRelationalDatabaseRequest(input) + return out, req.Send() +} + +// StopRelationalDatabaseWithContext is the same as StopRelationalDatabase with the addition of +// the ability to pass a context and additional request options. +// +// See StopRelationalDatabase for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) StopRelationalDatabaseWithContext(ctx aws.Context, input *StopRelationalDatabaseInput, opts ...request.Option) (*StopRelationalDatabaseOutput, error) { + req, out := c.StopRelationalDatabaseRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUnpeerVpc = "UnpeerVpc" + +// UnpeerVpcRequest generates a "aws/request.Request" representing the +// client's request for the UnpeerVpc operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UnpeerVpc for more information on using the UnpeerVpc +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UnpeerVpcRequest method. +// req, resp := client.UnpeerVpcRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UnpeerVpc +func (c *Lightsail) UnpeerVpcRequest(input *UnpeerVpcInput) (req *request.Request, output *UnpeerVpcOutput) { + op := &request.Operation{ + Name: opUnpeerVpc, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UnpeerVpcInput{} + } + + output = &UnpeerVpcOutput{} + req = c.newRequest(op, input, output) + return +} + +// UnpeerVpc API operation for Amazon Lightsail. +// +// Attempts to unpeer the Lightsail VPC from the user's default VPC. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation UnpeerVpc for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. 
+// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UnpeerVpc +func (c *Lightsail) UnpeerVpc(input *UnpeerVpcInput) (*UnpeerVpcOutput, error) { + req, out := c.UnpeerVpcRequest(input) + return out, req.Send() +} + +// UnpeerVpcWithContext is the same as UnpeerVpc with the addition of +// the ability to pass a context and additional request options. +// +// See UnpeerVpc for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) UnpeerVpcWithContext(ctx aws.Context, input *UnpeerVpcInput, opts ...request.Option) (*UnpeerVpcOutput, error) { + req, out := c.UnpeerVpcRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateDomainEntry = "UpdateDomainEntry" + +// UpdateDomainEntryRequest generates a "aws/request.Request" representing the +// client's request for the UpdateDomainEntry operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateDomainEntry for more information on using the UpdateDomainEntry +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateDomainEntryRequest method. +// req, resp := client.UpdateDomainEntryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateDomainEntry +func (c *Lightsail) UpdateDomainEntryRequest(input *UpdateDomainEntryInput) (req *request.Request, output *UpdateDomainEntryOutput) { + op := &request.Operation{ + Name: opUpdateDomainEntry, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateDomainEntryInput{} + } + + output = &UpdateDomainEntryOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateDomainEntry API operation for Amazon Lightsail. +// +// Updates a domain recordset after it is created. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation UpdateDomainEntry for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateDomainEntry +func (c *Lightsail) UpdateDomainEntry(input *UpdateDomainEntryInput) (*UpdateDomainEntryOutput, error) { + req, out := c.UpdateDomainEntryRequest(input) + return out, req.Send() +} + +// UpdateDomainEntryWithContext is the same as UpdateDomainEntry with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateDomainEntry for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) UpdateDomainEntryWithContext(ctx aws.Context, input *UpdateDomainEntryInput, opts ...request.Option) (*UpdateDomainEntryOutput, error) { + req, out := c.UpdateDomainEntryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateLoadBalancerAttribute = "UpdateLoadBalancerAttribute" + +// UpdateLoadBalancerAttributeRequest generates a "aws/request.Request" representing the +// client's request for the UpdateLoadBalancerAttribute operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateLoadBalancerAttribute for more information on using the UpdateLoadBalancerAttribute +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateLoadBalancerAttributeRequest method. 
+// req, resp := client.UpdateLoadBalancerAttributeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateLoadBalancerAttribute +func (c *Lightsail) UpdateLoadBalancerAttributeRequest(input *UpdateLoadBalancerAttributeInput) (req *request.Request, output *UpdateLoadBalancerAttributeOutput) { + op := &request.Operation{ + Name: opUpdateLoadBalancerAttribute, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateLoadBalancerAttributeInput{} + } + + output = &UpdateLoadBalancerAttributeOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateLoadBalancerAttribute API operation for Amazon Lightsail. +// +// Updates the specified attribute for a load balancer. You can only update +// one attribute at a time. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation UpdateLoadBalancerAttribute for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. +// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateLoadBalancerAttribute +func (c *Lightsail) UpdateLoadBalancerAttribute(input *UpdateLoadBalancerAttributeInput) (*UpdateLoadBalancerAttributeOutput, error) { + req, out := c.UpdateLoadBalancerAttributeRequest(input) + return out, req.Send() +} + +// UpdateLoadBalancerAttributeWithContext is the same as UpdateLoadBalancerAttribute with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateLoadBalancerAttribute for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
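+//
+// A minimal sketch of supplying a deadline (the field names and values below
+// are assumed examples):
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//
+//    out, err := client.UpdateLoadBalancerAttributeWithContext(ctx, &lightsail.UpdateLoadBalancerAttributeInput{
+//        LoadBalancerName: aws.String("my-load-balancer"),
+//        AttributeName:    aws.String("HealthCheckPath"),
+//        AttributeValue:   aws.String("/"),
+//    })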
+func (c *Lightsail) UpdateLoadBalancerAttributeWithContext(ctx aws.Context, input *UpdateLoadBalancerAttributeInput, opts ...request.Option) (*UpdateLoadBalancerAttributeOutput, error) { + req, out := c.UpdateLoadBalancerAttributeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateRelationalDatabase = "UpdateRelationalDatabase" + +// UpdateRelationalDatabaseRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRelationalDatabase operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateRelationalDatabase for more information on using the UpdateRelationalDatabase +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateRelationalDatabaseRequest method. +// req, resp := client.UpdateRelationalDatabaseRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateRelationalDatabase +func (c *Lightsail) UpdateRelationalDatabaseRequest(input *UpdateRelationalDatabaseInput) (req *request.Request, output *UpdateRelationalDatabaseOutput) { + op := &request.Operation{ + Name: opUpdateRelationalDatabase, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateRelationalDatabaseInput{} + } + + output = &UpdateRelationalDatabaseOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateRelationalDatabase API operation for Amazon Lightsail. +// +// Allows the update of one or more attributes of a database in Amazon Lightsail. +// +// Updates are applied immediately, or in cases where the updates could result +// in an outage, are applied during the database's predefined maintenance window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Lightsail's +// API operation UpdateRelationalDatabase for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServiceException "ServiceException" +// A general service exception. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// Lightsail throws this exception when user input does not conform to the validation +// rules of an input field. +// +// Domain-related APIs are only available in the N. Virginia (us-east-1) Region. +// Please set your AWS Region configuration to us-east-1 to create, view, or +// edit these resources. +// +// * ErrCodeNotFoundException "NotFoundException" +// Lightsail throws this exception when it cannot find a resource. +// +// * ErrCodeOperationFailureException "OperationFailureException" +// Lightsail throws this exception when an operation fails to execute. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// Lightsail throws this exception when the user cannot be authenticated or +// uses invalid credentials to access a resource. 
+// +// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException" +// Lightsail throws this exception when an account is still in the setup in +// progress state. +// +// * ErrCodeUnauthenticatedException "UnauthenticatedException" +// Lightsail throws this exception when the user has not been authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateRelationalDatabase +func (c *Lightsail) UpdateRelationalDatabase(input *UpdateRelationalDatabaseInput) (*UpdateRelationalDatabaseOutput, error) { + req, out := c.UpdateRelationalDatabaseRequest(input) + return out, req.Send() +} + +// UpdateRelationalDatabaseWithContext is the same as UpdateRelationalDatabase with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateRelationalDatabase for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Lightsail) UpdateRelationalDatabaseWithContext(ctx aws.Context, input *UpdateRelationalDatabaseInput, opts ...request.Option) (*UpdateRelationalDatabaseOutput, error) { + req, out := c.UpdateRelationalDatabaseRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateRelationalDatabaseParameters = "UpdateRelationalDatabaseParameters" + +// UpdateRelationalDatabaseParametersRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRelationalDatabaseParameters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateRelationalDatabaseParameters for more information on using the UpdateRelationalDatabaseParameters +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateRelationalDatabaseParametersRequest method. +// req, resp := client.UpdateRelationalDatabaseParametersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateRelationalDatabaseParameters +func (c *Lightsail) UpdateRelationalDatabaseParametersRequest(input *UpdateRelationalDatabaseParametersInput) (req *request.Request, output *UpdateRelationalDatabaseParametersOutput) { + op := &request.Operation{ + Name: opUpdateRelationalDatabaseParameters, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateRelationalDatabaseParametersInput{} + } + + output = &UpdateRelationalDatabaseParametersOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateRelationalDatabaseParameters API operation for Amazon Lightsail. +// +// Allows the update of one or more parameters of a database in Amazon Lightsail. +// +// Parameter updates don't cause outages; therefore, their application is not +// subject to the preferred maintenance window. 
However, there are two ways
+// in which parameter updates are applied: dynamic or pending-reboot. Parameters
+// marked with a dynamic apply type are applied immediately. Parameters marked
+// with a pending-reboot apply type are applied only after the database is rebooted
+// using the reboot relational database operation.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon Lightsail's
+// API operation UpdateRelationalDatabaseParameters for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeServiceException "ServiceException"
+// A general service exception.
+//
+// * ErrCodeInvalidInputException "InvalidInputException"
+// Lightsail throws this exception when user input does not conform to the validation
+// rules of an input field.
+//
+// Domain-related APIs are only available in the N. Virginia (us-east-1) Region.
+// Please set your AWS Region configuration to us-east-1 to create, view, or
+// edit these resources.
+//
+// * ErrCodeNotFoundException "NotFoundException"
+// Lightsail throws this exception when it cannot find a resource.
+//
+// * ErrCodeOperationFailureException "OperationFailureException"
+// Lightsail throws this exception when an operation fails to execute.
+//
+// * ErrCodeAccessDeniedException "AccessDeniedException"
+// Lightsail throws this exception when the user cannot be authenticated or
+// uses invalid credentials to access a resource.
+//
+// * ErrCodeAccountSetupInProgressException "AccountSetupInProgressException"
+// Lightsail throws this exception when an account is still in the setup in
+// progress state.
+//
+// * ErrCodeUnauthenticatedException "UnauthenticatedException"
+// Lightsail throws this exception when the user has not been authenticated.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/lightsail-2016-11-28/UpdateRelationalDatabaseParameters
+func (c *Lightsail) UpdateRelationalDatabaseParameters(input *UpdateRelationalDatabaseParametersInput) (*UpdateRelationalDatabaseParametersOutput, error) {
+ req, out := c.UpdateRelationalDatabaseParametersRequest(input)
+ return out, req.Send()
+}
+
+// UpdateRelationalDatabaseParametersWithContext is the same as UpdateRelationalDatabaseParameters with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateRelationalDatabaseParameters for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *Lightsail) UpdateRelationalDatabaseParametersWithContext(ctx aws.Context, input *UpdateRelationalDatabaseParametersInput, opts ...request.Option) (*UpdateRelationalDatabaseParametersOutput, error) {
+ req, out := c.UpdateRelationalDatabaseParametersRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type AllocateStaticIpInput struct {
+ _ struct{} `type:"structure"`
+
+ // The name of the static IP address.
+ // + // StaticIpName is a required field + StaticIpName *string `locationName:"staticIpName" type:"string" required:"true"` +} + +// String returns the string representation +func (s AllocateStaticIpInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AllocateStaticIpInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AllocateStaticIpInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AllocateStaticIpInput"} + if s.StaticIpName == nil { + invalidParams.Add(request.NewErrParamRequired("StaticIpName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStaticIpName sets the StaticIpName field's value. +func (s *AllocateStaticIpInput) SetStaticIpName(v string) *AllocateStaticIpInput { + s.StaticIpName = &v + return s +} + +type AllocateStaticIpOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the static IP address + // you allocated. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s AllocateStaticIpOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AllocateStaticIpOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *AllocateStaticIpOutput) SetOperations(v []*Operation) *AllocateStaticIpOutput { + s.Operations = v + return s +} + +type AttachDiskInput struct { + _ struct{} `type:"structure"` + + // The unique Lightsail disk name (e.g., my-disk). + // + // DiskName is a required field + DiskName *string `locationName:"diskName" type:"string" required:"true"` + + // The disk path to expose to the instance (e.g., /dev/xvdf). + // + // DiskPath is a required field + DiskPath *string `locationName:"diskPath" type:"string" required:"true"` + + // The name of the Lightsail instance where you want to utilize the storage + // disk. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachDiskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachDiskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachDiskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachDiskInput"} + if s.DiskName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskName")) + } + if s.DiskPath == nil { + invalidParams.Add(request.NewErrParamRequired("DiskPath")) + } + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDiskName sets the DiskName field's value. +func (s *AttachDiskInput) SetDiskName(v string) *AttachDiskInput { + s.DiskName = &v + return s +} + +// SetDiskPath sets the DiskPath field's value. +func (s *AttachDiskInput) SetDiskPath(v string) *AttachDiskInput { + s.DiskPath = &v + return s +} + +// SetInstanceName sets the InstanceName field's value. 
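+//
+// Because each setter returns its receiver, the AttachDiskInput setters can
+// be chained when building an input; the values below are examples:
+//
+//    input := new(lightsail.AttachDiskInput).
+//        SetDiskName("my-disk").
+//        SetDiskPath("/dev/xvdf").
+//        SetInstanceName("my-instance")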
+func (s *AttachDiskInput) SetInstanceName(v string) *AttachDiskInput { + s.InstanceName = &v + return s +} + +type AttachDiskOutput struct { + _ struct{} `type:"structure"` + + // An object describing the API operations. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s AttachDiskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachDiskOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *AttachDiskOutput) SetOperations(v []*Operation) *AttachDiskOutput { + s.Operations = v + return s +} + +type AttachInstancesToLoadBalancerInput struct { + _ struct{} `type:"structure"` + + // An array of strings representing the instance name(s) you want to attach + // to your load balancer. + // + // An instance must be running before you can attach it to your load balancer. + // + // There are no additional limits on the number of instances you can attach + // to your load balancer, aside from the limit of Lightsail instances you can + // create in your account (20). + // + // InstanceNames is a required field + InstanceNames []*string `locationName:"instanceNames" type:"list" required:"true"` + + // The name of the load balancer. + // + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachInstancesToLoadBalancerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachInstancesToLoadBalancerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachInstancesToLoadBalancerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachInstancesToLoadBalancerInput"} + if s.InstanceNames == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceNames")) + } + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceNames sets the InstanceNames field's value. +func (s *AttachInstancesToLoadBalancerInput) SetInstanceNames(v []*string) *AttachInstancesToLoadBalancerInput { + s.InstanceNames = v + return s +} + +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *AttachInstancesToLoadBalancerInput) SetLoadBalancerName(v string) *AttachInstancesToLoadBalancerInput { + s.LoadBalancerName = &v + return s +} + +type AttachInstancesToLoadBalancerOutput struct { + _ struct{} `type:"structure"` + + // An object representing the API operations. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s AttachInstancesToLoadBalancerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachInstancesToLoadBalancerOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *AttachInstancesToLoadBalancerOutput) SetOperations(v []*Operation) *AttachInstancesToLoadBalancerOutput { + s.Operations = v + return s +} + +type AttachLoadBalancerTlsCertificateInput struct { + _ struct{} `type:"structure"` + + // The name of your SSL/TLS certificate. 
+ // + // CertificateName is a required field + CertificateName *string `locationName:"certificateName" type:"string" required:"true"` + + // The name of the load balancer to which you want to associate the SSL/TLS + // certificate. + // + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachLoadBalancerTlsCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachLoadBalancerTlsCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AttachLoadBalancerTlsCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachLoadBalancerTlsCertificateInput"} + if s.CertificateName == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateName")) + } + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateName sets the CertificateName field's value. +func (s *AttachLoadBalancerTlsCertificateInput) SetCertificateName(v string) *AttachLoadBalancerTlsCertificateInput { + s.CertificateName = &v + return s +} + +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *AttachLoadBalancerTlsCertificateInput) SetLoadBalancerName(v string) *AttachLoadBalancerTlsCertificateInput { + s.LoadBalancerName = &v + return s +} + +type AttachLoadBalancerTlsCertificateOutput struct { + _ struct{} `type:"structure"` + + // An object representing the API operations. + // + // These SSL/TLS certificates are only usable by Lightsail load balancers. You + // can't get the certificate and use it for another purpose. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s AttachLoadBalancerTlsCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachLoadBalancerTlsCertificateOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *AttachLoadBalancerTlsCertificateOutput) SetOperations(v []*Operation) *AttachLoadBalancerTlsCertificateOutput { + s.Operations = v + return s +} + +type AttachStaticIpInput struct { + _ struct{} `type:"structure"` + + // The instance name to which you want to attach the static IP address. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + + // The name of the static IP. + // + // StaticIpName is a required field + StaticIpName *string `locationName:"staticIpName" type:"string" required:"true"` +} + +// String returns the string representation +func (s AttachStaticIpInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachStaticIpInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
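+//
+// The SDK validates parameters automatically when a request is sent, but
+// Validate can also be called directly to surface missing required fields
+// early, for example:
+//
+//    input := &lightsail.AttachStaticIpInput{
+//        StaticIpName: aws.String("my-static-ip"),
+//    }
+//    if err := input.Validate(); err != nil {
+//        fmt.Println(err) // reports that InstanceName has not been set
+//    }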
+func (s *AttachStaticIpInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AttachStaticIpInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + if s.StaticIpName == nil { + invalidParams.Add(request.NewErrParamRequired("StaticIpName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceName sets the InstanceName field's value. +func (s *AttachStaticIpInput) SetInstanceName(v string) *AttachStaticIpInput { + s.InstanceName = &v + return s +} + +// SetStaticIpName sets the StaticIpName field's value. +func (s *AttachStaticIpInput) SetStaticIpName(v string) *AttachStaticIpInput { + s.StaticIpName = &v + return s +} + +type AttachStaticIpOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about your API operations. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s AttachStaticIpOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachStaticIpOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *AttachStaticIpOutput) SetOperations(v []*Operation) *AttachStaticIpOutput { + s.Operations = v + return s +} + +// Describes an Availability Zone. +type AvailabilityZone struct { + _ struct{} `type:"structure"` + + // The state of the Availability Zone. + State *string `locationName:"state" type:"string"` + + // The name of the Availability Zone. The format is us-east-2a (case-sensitive). + ZoneName *string `locationName:"zoneName" type:"string"` +} + +// String returns the string representation +func (s AvailabilityZone) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AvailabilityZone) GoString() string { + return s.String() +} + +// SetState sets the State field's value. +func (s *AvailabilityZone) SetState(v string) *AvailabilityZone { + s.State = &v + return s +} + +// SetZoneName sets the ZoneName field's value. +func (s *AvailabilityZone) SetZoneName(v string) *AvailabilityZone { + s.ZoneName = &v + return s +} + +// Describes a blueprint (a virtual private server image). +type Blueprint struct { + _ struct{} `type:"structure"` + + // The ID for the virtual private server image (e.g., app_wordpress_4_4 or app_lamp_7_0). + BlueprintId *string `locationName:"blueprintId" type:"string"` + + // The description of the blueprint. + Description *string `locationName:"description" type:"string"` + + // The group name of the blueprint (e.g., amazon-linux). + Group *string `locationName:"group" type:"string"` + + // A Boolean value indicating whether the blueprint is active. Inactive blueprints + // are listed to support customers with existing instances but are not necessarily + // available for launch of new instances. Blueprints are marked inactive when + // they become outdated due to operating system updates or new application releases. + IsActive *bool `locationName:"isActive" type:"boolean"` + + // The end-user license agreement URL for the image or blueprint. + LicenseUrl *string `locationName:"licenseUrl" type:"string"` + + // The minimum bundle power required to run this blueprint. For example, you + // need a bundle with a power value of 500 or more to create an instance that + // uses a blueprint with a minimum power value of 500. 
0 indicates that the + // blueprint runs on all instance sizes. + MinPower *int64 `locationName:"minPower" type:"integer"` + + // The friendly name of the blueprint (e.g., Amazon Linux). + Name *string `locationName:"name" type:"string"` + + // The operating system platform (either Linux/Unix-based or Windows Server-based) + // of the blueprint. + Platform *string `locationName:"platform" type:"string" enum:"InstancePlatform"` + + // The product URL to learn more about the image or blueprint. + ProductUrl *string `locationName:"productUrl" type:"string"` + + // The type of the blueprint (e.g., os or app). + Type *string `locationName:"type" type:"string" enum:"BlueprintType"` + + // The version number of the operating system, application, or stack (e.g., + // 2016.03.0). + Version *string `locationName:"version" type:"string"` + + // The version code. + VersionCode *string `locationName:"versionCode" type:"string"` +} + +// String returns the string representation +func (s Blueprint) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Blueprint) GoString() string { + return s.String() +} + +// SetBlueprintId sets the BlueprintId field's value. +func (s *Blueprint) SetBlueprintId(v string) *Blueprint { + s.BlueprintId = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *Blueprint) SetDescription(v string) *Blueprint { + s.Description = &v + return s +} + +// SetGroup sets the Group field's value. +func (s *Blueprint) SetGroup(v string) *Blueprint { + s.Group = &v + return s +} + +// SetIsActive sets the IsActive field's value. +func (s *Blueprint) SetIsActive(v bool) *Blueprint { + s.IsActive = &v + return s +} + +// SetLicenseUrl sets the LicenseUrl field's value. +func (s *Blueprint) SetLicenseUrl(v string) *Blueprint { + s.LicenseUrl = &v + return s +} + +// SetMinPower sets the MinPower field's value. +func (s *Blueprint) SetMinPower(v int64) *Blueprint { + s.MinPower = &v + return s +} + +// SetName sets the Name field's value. +func (s *Blueprint) SetName(v string) *Blueprint { + s.Name = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *Blueprint) SetPlatform(v string) *Blueprint { + s.Platform = &v + return s +} + +// SetProductUrl sets the ProductUrl field's value. +func (s *Blueprint) SetProductUrl(v string) *Blueprint { + s.ProductUrl = &v + return s +} + +// SetType sets the Type field's value. +func (s *Blueprint) SetType(v string) *Blueprint { + s.Type = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *Blueprint) SetVersion(v string) *Blueprint { + s.Version = &v + return s +} + +// SetVersionCode sets the VersionCode field's value. +func (s *Blueprint) SetVersionCode(v string) *Blueprint { + s.VersionCode = &v + return s +} + +// Describes a bundle, which is a set of specs describing your virtual private +// server (or instance). +type Bundle struct { + _ struct{} `type:"structure"` + + // The bundle ID (e.g., micro_1_0). + BundleId *string `locationName:"bundleId" type:"string"` + + // The number of vCPUs included in the bundle (e.g., 2). + CpuCount *int64 `locationName:"cpuCount" type:"integer"` + + // The size of the SSD (e.g., 30). + DiskSizeInGb *int64 `locationName:"diskSizeInGb" type:"integer"` + + // The Amazon EC2 instance type (e.g., t2.micro). + InstanceType *string `locationName:"instanceType" type:"string"` + + // A Boolean value indicating whether the bundle is active. 
+ IsActive *bool `locationName:"isActive" type:"boolean"` + + // A friendly name for the bundle (e.g., Micro). + Name *string `locationName:"name" type:"string"` + + // A numeric value that represents the power of the bundle (e.g., 500). You + // can use the bundle's power value in conjunction with a blueprint's minimum + // power value to determine whether the blueprint will run on the bundle. For + // example, you need a bundle with a power value of 500 or more to create an + // instance that uses a blueprint with a minimum power value of 500. + Power *int64 `locationName:"power" type:"integer"` + + // The price in US dollars (e.g., 5.0). + Price *float64 `locationName:"price" type:"float"` + + // The amount of RAM in GB (e.g., 2.0). + RamSizeInGb *float64 `locationName:"ramSizeInGb" type:"float"` + + // The operating system platform (Linux/Unix-based or Windows Server-based) + // that the bundle supports. You can only launch a WINDOWS bundle on a blueprint + // that supports the WINDOWS platform. LINUX_UNIX blueprints require a LINUX_UNIX + // bundle. + SupportedPlatforms []*string `locationName:"supportedPlatforms" type:"list"` + + // The data transfer rate per month in GB (e.g., 2000). + TransferPerMonthInGb *int64 `locationName:"transferPerMonthInGb" type:"integer"` +} + +// String returns the string representation +func (s Bundle) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Bundle) GoString() string { + return s.String() +} + +// SetBundleId sets the BundleId field's value. +func (s *Bundle) SetBundleId(v string) *Bundle { + s.BundleId = &v + return s +} + +// SetCpuCount sets the CpuCount field's value. +func (s *Bundle) SetCpuCount(v int64) *Bundle { + s.CpuCount = &v + return s +} + +// SetDiskSizeInGb sets the DiskSizeInGb field's value. +func (s *Bundle) SetDiskSizeInGb(v int64) *Bundle { + s.DiskSizeInGb = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *Bundle) SetInstanceType(v string) *Bundle { + s.InstanceType = &v + return s +} + +// SetIsActive sets the IsActive field's value. +func (s *Bundle) SetIsActive(v bool) *Bundle { + s.IsActive = &v + return s +} + +// SetName sets the Name field's value. +func (s *Bundle) SetName(v string) *Bundle { + s.Name = &v + return s +} + +// SetPower sets the Power field's value. +func (s *Bundle) SetPower(v int64) *Bundle { + s.Power = &v + return s +} + +// SetPrice sets the Price field's value. +func (s *Bundle) SetPrice(v float64) *Bundle { + s.Price = &v + return s +} + +// SetRamSizeInGb sets the RamSizeInGb field's value. +func (s *Bundle) SetRamSizeInGb(v float64) *Bundle { + s.RamSizeInGb = &v + return s +} + +// SetSupportedPlatforms sets the SupportedPlatforms field's value. +func (s *Bundle) SetSupportedPlatforms(v []*string) *Bundle { + s.SupportedPlatforms = v + return s +} + +// SetTransferPerMonthInGb sets the TransferPerMonthInGb field's value. +func (s *Bundle) SetTransferPerMonthInGb(v int64) *Bundle { + s.TransferPerMonthInGb = &v + return s +} + +type CloseInstancePublicPortsInput struct { + _ struct{} `type:"structure"` + + // The name of the instance on which you're attempting to close the public ports. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + + // Information about the public port you are trying to close. 
+ // + // PortInfo is a required field + PortInfo *PortInfo `locationName:"portInfo" type:"structure" required:"true"` +} + +// String returns the string representation +func (s CloseInstancePublicPortsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloseInstancePublicPortsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CloseInstancePublicPortsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CloseInstancePublicPortsInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + if s.PortInfo == nil { + invalidParams.Add(request.NewErrParamRequired("PortInfo")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceName sets the InstanceName field's value. +func (s *CloseInstancePublicPortsInput) SetInstanceName(v string) *CloseInstancePublicPortsInput { + s.InstanceName = &v + return s +} + +// SetPortInfo sets the PortInfo field's value. +func (s *CloseInstancePublicPortsInput) SetPortInfo(v *PortInfo) *CloseInstancePublicPortsInput { + s.PortInfo = v + return s +} + +type CloseInstancePublicPortsOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs that contains information about the operation. + Operation *Operation `locationName:"operation" type:"structure"` +} + +// String returns the string representation +func (s CloseInstancePublicPortsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloseInstancePublicPortsOutput) GoString() string { + return s.String() +} + +// SetOperation sets the Operation field's value. +func (s *CloseInstancePublicPortsOutput) SetOperation(v *Operation) *CloseInstancePublicPortsOutput { + s.Operation = v + return s +} + +type CreateDiskFromSnapshotInput struct { + _ struct{} `type:"structure"` + + // The Availability Zone where you want to create the disk (e.g., us-east-2a). + // Choose the same Availability Zone as the Lightsail instance where you want + // to create the disk. + // + // Use the GetRegions operation to list the Availability Zones where Lightsail + // is currently available. + // + // AvailabilityZone is a required field + AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` + + // The unique Lightsail disk name (e.g., my-disk). + // + // DiskName is a required field + DiskName *string `locationName:"diskName" type:"string" required:"true"` + + // The name of the disk snapshot (e.g., my-snapshot) from which to create the + // new storage disk. + // + // DiskSnapshotName is a required field + DiskSnapshotName *string `locationName:"diskSnapshotName" type:"string" required:"true"` + + // The size of the disk in GB (e.g., 32). + // + // SizeInGb is a required field + SizeInGb *int64 `locationName:"sizeInGb" type:"integer" required:"true"` +} + +// String returns the string representation +func (s CreateDiskFromSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDiskFromSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateDiskFromSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDiskFromSnapshotInput"} + if s.AvailabilityZone == nil { + invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) + } + if s.DiskName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskName")) + } + if s.DiskSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskSnapshotName")) + } + if s.SizeInGb == nil { + invalidParams.Add(request.NewErrParamRequired("SizeInGb")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateDiskFromSnapshotInput) SetAvailabilityZone(v string) *CreateDiskFromSnapshotInput { + s.AvailabilityZone = &v + return s +} + +// SetDiskName sets the DiskName field's value. +func (s *CreateDiskFromSnapshotInput) SetDiskName(v string) *CreateDiskFromSnapshotInput { + s.DiskName = &v + return s +} + +// SetDiskSnapshotName sets the DiskSnapshotName field's value. +func (s *CreateDiskFromSnapshotInput) SetDiskSnapshotName(v string) *CreateDiskFromSnapshotInput { + s.DiskSnapshotName = &v + return s +} + +// SetSizeInGb sets the SizeInGb field's value. +func (s *CreateDiskFromSnapshotInput) SetSizeInGb(v int64) *CreateDiskFromSnapshotInput { + s.SizeInGb = &v + return s +} + +type CreateDiskFromSnapshotOutput struct { + _ struct{} `type:"structure"` + + // An object describing the API operations. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s CreateDiskFromSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDiskFromSnapshotOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *CreateDiskFromSnapshotOutput) SetOperations(v []*Operation) *CreateDiskFromSnapshotOutput { + s.Operations = v + return s +} + +type CreateDiskInput struct { + _ struct{} `type:"structure"` + + // The Availability Zone where you want to create the disk (e.g., us-east-2a). + // Choose the same Availability Zone as the Lightsail instance where you want + // to create the disk. + // + // Use the GetRegions operation to list the Availability Zones where Lightsail + // is currently available. + // + // AvailabilityZone is a required field + AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` + + // The unique Lightsail disk name (e.g., my-disk). + // + // DiskName is a required field + DiskName *string `locationName:"diskName" type:"string" required:"true"` + + // The size of the disk in GB (e.g., 32). + // + // SizeInGb is a required field + SizeInGb *int64 `locationName:"sizeInGb" type:"integer" required:"true"` +} + +// String returns the string representation +func (s CreateDiskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDiskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateDiskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDiskInput"} + if s.AvailabilityZone == nil { + invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) + } + if s.DiskName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskName")) + } + if s.SizeInGb == nil { + invalidParams.Add(request.NewErrParamRequired("SizeInGb")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateDiskInput) SetAvailabilityZone(v string) *CreateDiskInput { + s.AvailabilityZone = &v + return s +} + +// SetDiskName sets the DiskName field's value. +func (s *CreateDiskInput) SetDiskName(v string) *CreateDiskInput { + s.DiskName = &v + return s +} + +// SetSizeInGb sets the SizeInGb field's value. +func (s *CreateDiskInput) SetSizeInGb(v int64) *CreateDiskInput { + s.SizeInGb = &v + return s +} + +type CreateDiskOutput struct { + _ struct{} `type:"structure"` + + // An object describing the API operations. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s CreateDiskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDiskOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *CreateDiskOutput) SetOperations(v []*Operation) *CreateDiskOutput { + s.Operations = v + return s +} + +type CreateDiskSnapshotInput struct { + _ struct{} `type:"structure"` + + // The unique name of the source disk (e.g., my-source-disk). + // + // DiskName is a required field + DiskName *string `locationName:"diskName" type:"string" required:"true"` + + // The name of the destination disk snapshot (e.g., my-disk-snapshot) based + // on the source disk. + // + // DiskSnapshotName is a required field + DiskSnapshotName *string `locationName:"diskSnapshotName" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateDiskSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDiskSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDiskSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDiskSnapshotInput"} + if s.DiskName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskName")) + } + if s.DiskSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskSnapshotName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDiskName sets the DiskName field's value. +func (s *CreateDiskSnapshotInput) SetDiskName(v string) *CreateDiskSnapshotInput { + s.DiskName = &v + return s +} + +// SetDiskSnapshotName sets the DiskSnapshotName field's value. +func (s *CreateDiskSnapshotInput) SetDiskSnapshotName(v string) *CreateDiskSnapshotInput { + s.DiskSnapshotName = &v + return s +} + +type CreateDiskSnapshotOutput struct { + _ struct{} `type:"structure"` + + // An object describing the API operations. 
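The disk and disk snapshot inputs follow the same shape; a short illustrative helper (not from this change) that wraps CreateDiskSnapshot, assuming a client constructed as in the previous sketch and placeholder names:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

// snapshotDisk requests a snapshot of the named disk and returns the pending
// operations reported by the API.
func snapshotDisk(svc *lightsail.Lightsail, disk, snapshot string) ([]*lightsail.Operation, error) {
	out, err := svc.CreateDiskSnapshot(&lightsail.CreateDiskSnapshotInput{
		DiskName:         aws.String(disk),     // e.g. "my-source-disk"
		DiskSnapshotName: aws.String(snapshot), // e.g. "my-disk-snapshot"
	})
	if err != nil {
		return nil, err
	}
	return out.Operations, nil
}
```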
+ Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s CreateDiskSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDiskSnapshotOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *CreateDiskSnapshotOutput) SetOperations(v []*Operation) *CreateDiskSnapshotOutput { + s.Operations = v + return s +} + +type CreateDomainEntryInput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the domain entry + // request. + // + // DomainEntry is a required field + DomainEntry *DomainEntry `locationName:"domainEntry" type:"structure" required:"true"` + + // The domain name (e.g., example.com) for which you want to create the domain + // entry. + // + // DomainName is a required field + DomainName *string `locationName:"domainName" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateDomainEntryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDomainEntryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDomainEntryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDomainEntryInput"} + if s.DomainEntry == nil { + invalidParams.Add(request.NewErrParamRequired("DomainEntry")) + } + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainEntry sets the DomainEntry field's value. +func (s *CreateDomainEntryInput) SetDomainEntry(v *DomainEntry) *CreateDomainEntryInput { + s.DomainEntry = v + return s +} + +// SetDomainName sets the DomainName field's value. +func (s *CreateDomainEntryInput) SetDomainName(v string) *CreateDomainEntryInput { + s.DomainName = &v + return s +} + +type CreateDomainEntryOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the operation. + Operation *Operation `locationName:"operation" type:"structure"` +} + +// String returns the string representation +func (s CreateDomainEntryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDomainEntryOutput) GoString() string { + return s.String() +} + +// SetOperation sets the Operation field's value. +func (s *CreateDomainEntryOutput) SetOperation(v *Operation) *CreateDomainEntryOutput { + s.Operation = v + return s +} + +type CreateDomainInput struct { + _ struct{} `type:"structure"` + + // The domain name to manage (e.g., example.com). + // + // You cannot register a new domain name using Lightsail. You must register + // a domain name using Amazon Route 53 or another domain name registrar. If + // you have already registered your domain, you can enter its name in this parameter + // to manage the DNS records for that domain. 
+ // + // DomainName is a required field + DomainName *string `locationName:"domainName" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateDomainInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDomainInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDomainInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDomainInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *CreateDomainInput) SetDomainName(v string) *CreateDomainInput { + s.DomainName = &v + return s +} + +type CreateDomainOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the domain resource + // you created. + Operation *Operation `locationName:"operation" type:"structure"` +} + +// String returns the string representation +func (s CreateDomainOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDomainOutput) GoString() string { + return s.String() +} + +// SetOperation sets the Operation field's value. +func (s *CreateDomainOutput) SetOperation(v *Operation) *CreateDomainOutput { + s.Operation = v + return s +} + +type CreateInstanceSnapshotInput struct { + _ struct{} `type:"structure"` + + // The Lightsail instance on which to base your snapshot. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + + // The name for your new snapshot. + // + // InstanceSnapshotName is a required field + InstanceSnapshotName *string `locationName:"instanceSnapshotName" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateInstanceSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstanceSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateInstanceSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateInstanceSnapshotInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + if s.InstanceSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceSnapshotName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceName sets the InstanceName field's value. +func (s *CreateInstanceSnapshotInput) SetInstanceName(v string) *CreateInstanceSnapshotInput { + s.InstanceName = &v + return s +} + +// SetInstanceSnapshotName sets the InstanceSnapshotName field's value. +func (s *CreateInstanceSnapshotInput) SetInstanceSnapshotName(v string) *CreateInstanceSnapshotInput { + s.InstanceSnapshotName = &v + return s +} + +type CreateInstanceSnapshotOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the results of your + // create instances snapshot request. 
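Because DomainEntry is a nested structure rather than a flat string field, the domain inputs defined above are easiest to read with an example. This is an illustrative sketch only; the record name, type, and target are placeholders:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

// addARecord creates an A record under a domain already managed via CreateDomain.
// The domain itself must have been registered with Route 53 or another registrar.
func addARecord(svc *lightsail.Lightsail, domain, name, target string) (*lightsail.Operation, error) {
	out, err := svc.CreateDomainEntry(&lightsail.CreateDomainEntryInput{
		DomainName: aws.String(domain), // e.g. "example.com"
		DomainEntry: &lightsail.DomainEntry{
			Name:   aws.String(name),   // e.g. "www.example.com"
			Type:   aws.String("A"),
			Target: aws.String(target), // the IP address the record points at
		},
	})
	if err != nil {
		return nil, err
	}
	return out.Operation, nil
}
```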
+ Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s CreateInstanceSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstanceSnapshotOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *CreateInstanceSnapshotOutput) SetOperations(v []*Operation) *CreateInstanceSnapshotOutput { + s.Operations = v + return s +} + +type CreateInstancesFromSnapshotInput struct { + _ struct{} `type:"structure"` + + // An object containing information about one or more disk mappings. + AttachedDiskMapping map[string][]*DiskMap `locationName:"attachedDiskMapping" type:"map"` + + // The Availability Zone where you want to create your instances. Use the following + // formatting: us-east-2a (case sensitive). You can get a list of Availability + // Zones by using the get regions (http://docs.aws.amazon.com/lightsail/2016-11-28/api-reference/API_GetRegions.html) + // operation. Be sure to add the include Availability Zones parameter to your + // request. + // + // AvailabilityZone is a required field + AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` + + // The bundle of specification information for your virtual private server (or + // instance), including the pricing plan (e.g., micro_1_0). + // + // BundleId is a required field + BundleId *string `locationName:"bundleId" type:"string" required:"true"` + + // The names for your new instances. + // + // InstanceNames is a required field + InstanceNames []*string `locationName:"instanceNames" type:"list" required:"true"` + + // The name of the instance snapshot on which you are basing your new instances. + // Use the get instance snapshots operation to return information about your + // existing snapshots. + // + // InstanceSnapshotName is a required field + InstanceSnapshotName *string `locationName:"instanceSnapshotName" type:"string" required:"true"` + + // The name for your key pair. + KeyPairName *string `locationName:"keyPairName" type:"string"` + + // You can create a launch script that configures a server with additional user + // data. For example, apt-get -y update. + // + // Depending on the machine image you choose, the command to get software on + // your instance varies. Amazon Linux and CentOS use yum, Debian and Ubuntu + // use apt-get, and FreeBSD uses pkg. For a complete list, see the Dev Guide + // (https://lightsail.aws.amazon.com/ls/docs/getting-started/article/compare-options-choose-lightsail-instance-image). + UserData *string `locationName:"userData" type:"string"` +} + +// String returns the string representation +func (s CreateInstancesFromSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstancesFromSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
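A sketch of restoring instances from an existing snapshot with the input above; only the four required fields are set, the optional key pair, user data, and disk mapping are omitted, and the zone and bundle values are placeholders:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

// restoreInstances launches new instances from an existing instance snapshot.
func restoreInstances(svc *lightsail.Lightsail, snapshot string, names []string) ([]*lightsail.Operation, error) {
	out, err := svc.CreateInstancesFromSnapshot(&lightsail.CreateInstancesFromSnapshotInput{
		AvailabilityZone:     aws.String("us-east-2a"), // case sensitive, per the field docs
		BundleId:             aws.String("micro_1_0"),  // pricing/size plan
		InstanceNames:        aws.StringSlice(names),
		InstanceSnapshotName: aws.String(snapshot),
	})
	if err != nil {
		return nil, err
	}
	return out.Operations, nil
}
```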
+func (s *CreateInstancesFromSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateInstancesFromSnapshotInput"} + if s.AvailabilityZone == nil { + invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) + } + if s.BundleId == nil { + invalidParams.Add(request.NewErrParamRequired("BundleId")) + } + if s.InstanceNames == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceNames")) + } + if s.InstanceSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceSnapshotName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttachedDiskMapping sets the AttachedDiskMapping field's value. +func (s *CreateInstancesFromSnapshotInput) SetAttachedDiskMapping(v map[string][]*DiskMap) *CreateInstancesFromSnapshotInput { + s.AttachedDiskMapping = v + return s +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateInstancesFromSnapshotInput) SetAvailabilityZone(v string) *CreateInstancesFromSnapshotInput { + s.AvailabilityZone = &v + return s +} + +// SetBundleId sets the BundleId field's value. +func (s *CreateInstancesFromSnapshotInput) SetBundleId(v string) *CreateInstancesFromSnapshotInput { + s.BundleId = &v + return s +} + +// SetInstanceNames sets the InstanceNames field's value. +func (s *CreateInstancesFromSnapshotInput) SetInstanceNames(v []*string) *CreateInstancesFromSnapshotInput { + s.InstanceNames = v + return s +} + +// SetInstanceSnapshotName sets the InstanceSnapshotName field's value. +func (s *CreateInstancesFromSnapshotInput) SetInstanceSnapshotName(v string) *CreateInstancesFromSnapshotInput { + s.InstanceSnapshotName = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *CreateInstancesFromSnapshotInput) SetKeyPairName(v string) *CreateInstancesFromSnapshotInput { + s.KeyPairName = &v + return s +} + +// SetUserData sets the UserData field's value. +func (s *CreateInstancesFromSnapshotInput) SetUserData(v string) *CreateInstancesFromSnapshotInput { + s.UserData = &v + return s +} + +type CreateInstancesFromSnapshotOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the results of your + // create instances from snapshot request. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s CreateInstancesFromSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstancesFromSnapshotOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *CreateInstancesFromSnapshotOutput) SetOperations(v []*Operation) *CreateInstancesFromSnapshotOutput { + s.Operations = v + return s +} + +type CreateInstancesInput struct { + _ struct{} `type:"structure"` + + // The Availability Zone in which to create your instance. Use the following + // format: us-east-2a (case sensitive). You can get a list of Availability Zones + // by using the get regions (http://docs.aws.amazon.com/lightsail/2016-11-28/api-reference/API_GetRegions.html) + // operation. Be sure to add the include Availability Zones parameter to your + // request. + // + // AvailabilityZone is a required field + AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` + + // The ID for a virtual private server image (e.g., app_wordpress_4_4 or app_lamp_7_0). 
+ // Use the get blueprints operation to return a list of available images (or + // blueprints). + // + // BlueprintId is a required field + BlueprintId *string `locationName:"blueprintId" type:"string" required:"true"` + + // The bundle of specification information for your virtual private server (or + // instance), including the pricing plan (e.g., micro_1_0). + // + // BundleId is a required field + BundleId *string `locationName:"bundleId" type:"string" required:"true"` + + // (Deprecated) The name for your custom image. + // + // In releases prior to June 12, 2017, this parameter was ignored by the API. + // It is now deprecated. + // + // Deprecated: CustomImageName has been deprecated + CustomImageName *string `locationName:"customImageName" deprecated:"true" type:"string"` + + // The names to use for your new Lightsail instances. Separate multiple values + // using quotation marks and commas, for example: ["MyFirstInstance","MySecondInstance"] + // + // InstanceNames is a required field + InstanceNames []*string `locationName:"instanceNames" type:"list" required:"true"` + + // The name of your key pair. + KeyPairName *string `locationName:"keyPairName" type:"string"` + + // A launch script you can create that configures a server with additional user + // data. For example, you might want to run apt-get -y update. + // + // Depending on the machine image you choose, the command to get software on + // your instance varies. Amazon Linux and CentOS use yum, Debian and Ubuntu + // use apt-get, and FreeBSD uses pkg. For a complete list, see the Dev Guide + // (https://lightsail.aws.amazon.com/ls/docs/getting-started/article/compare-options-choose-lightsail-instance-image). + UserData *string `locationName:"userData" type:"string"` +} + +// String returns the string representation +func (s CreateInstancesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstancesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateInstancesInput"} + if s.AvailabilityZone == nil { + invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) + } + if s.BlueprintId == nil { + invalidParams.Add(request.NewErrParamRequired("BlueprintId")) + } + if s.BundleId == nil { + invalidParams.Add(request.NewErrParamRequired("BundleId")) + } + if s.InstanceNames == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceNames")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateInstancesInput) SetAvailabilityZone(v string) *CreateInstancesInput { + s.AvailabilityZone = &v + return s +} + +// SetBlueprintId sets the BlueprintId field's value. +func (s *CreateInstancesInput) SetBlueprintId(v string) *CreateInstancesInput { + s.BlueprintId = &v + return s +} + +// SetBundleId sets the BundleId field's value. +func (s *CreateInstancesInput) SetBundleId(v string) *CreateInstancesInput { + s.BundleId = &v + return s +} + +// SetCustomImageName sets the CustomImageName field's value. +func (s *CreateInstancesInput) SetCustomImageName(v string) *CreateInstancesInput { + s.CustomImageName = &v + return s +} + +// SetInstanceNames sets the InstanceNames field's value. 
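Pulling the required fields and the optional launch script together, a minimal end-to-end sketch of calling CreateInstances could look like the following; the blueprint, bundle, and instance names are the illustrative values from the field docs, not values mandated by this change:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	svc := lightsail.New(session.Must(session.NewSession()))

	out, err := svc.CreateInstances(&lightsail.CreateInstancesInput{
		AvailabilityZone: aws.String("us-east-2a"),
		BlueprintId:      aws.String("app_wordpress_4_4"), // machine image
		BundleId:         aws.String("micro_1_0"),         // pricing plan
		InstanceNames:    aws.StringSlice([]string{"MyFirstInstance", "MySecondInstance"}),
		// Optional launch script; the package manager depends on the chosen image.
		UserData: aws.String("apt-get -y update"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, op := range out.Operations {
		fmt.Println(aws.StringValue(op.ResourceName), aws.StringValue(op.Status))
	}
}
```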
+func (s *CreateInstancesInput) SetInstanceNames(v []*string) *CreateInstancesInput { + s.InstanceNames = v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *CreateInstancesInput) SetKeyPairName(v string) *CreateInstancesInput { + s.KeyPairName = &v + return s +} + +// SetUserData sets the UserData field's value. +func (s *CreateInstancesInput) SetUserData(v string) *CreateInstancesInput { + s.UserData = &v + return s +} + +type CreateInstancesOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the results of your + // create instances request. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s CreateInstancesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInstancesOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *CreateInstancesOutput) SetOperations(v []*Operation) *CreateInstancesOutput { + s.Operations = v + return s +} + +type CreateKeyPairInput struct { + _ struct{} `type:"structure"` + + // The name for your new key pair. + // + // KeyPairName is a required field + KeyPairName *string `locationName:"keyPairName" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateKeyPairInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateKeyPairInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateKeyPairInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateKeyPairInput"} + if s.KeyPairName == nil { + invalidParams.Add(request.NewErrParamRequired("KeyPairName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *CreateKeyPairInput) SetKeyPairName(v string) *CreateKeyPairInput { + s.KeyPairName = &v + return s +} + +type CreateKeyPairOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the new key pair + // you just created. + KeyPair *KeyPair `locationName:"keyPair" type:"structure"` + + // An array of key-value pairs containing information about the results of your + // create key pair request. + Operation *Operation `locationName:"operation" type:"structure"` + + // A base64-encoded RSA private key. + PrivateKeyBase64 *string `locationName:"privateKeyBase64" type:"string"` + + // A base64-encoded public key of the ssh-rsa type. + PublicKeyBase64 *string `locationName:"publicKeyBase64" type:"string"` +} + +// String returns the string representation +func (s CreateKeyPairOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateKeyPairOutput) GoString() string { + return s.String() +} + +// SetKeyPair sets the KeyPair field's value. +func (s *CreateKeyPairOutput) SetKeyPair(v *KeyPair) *CreateKeyPairOutput { + s.KeyPair = v + return s +} + +// SetOperation sets the Operation field's value. +func (s *CreateKeyPairOutput) SetOperation(v *Operation) *CreateKeyPairOutput { + s.Operation = v + return s +} + +// SetPrivateKeyBase64 sets the PrivateKeyBase64 field's value. 
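The key pair output is the one response in this group that carries key material rather than just operations. An illustrative sketch (not part of the SDK) of creating a pair and persisting what comes back; the key name and file path are arbitrary:

```go
package main

import (
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	svc := lightsail.New(session.Must(session.NewSession()))

	out, err := svc.CreateKeyPair(&lightsail.CreateKeyPairInput{
		KeyPairName: aws.String("my-lightsail-key"), // placeholder name
	})
	if err != nil {
		log.Fatal(err)
	}

	// The private key material is only returned by this call, so persist it
	// right away; the public half is also available via out.PublicKeyBase64.
	if err := ioutil.WriteFile("my-lightsail-key",
		[]byte(aws.StringValue(out.PrivateKeyBase64)), 0600); err != nil {
		log.Fatal(err)
	}
}
```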
+func (s *CreateKeyPairOutput) SetPrivateKeyBase64(v string) *CreateKeyPairOutput { + s.PrivateKeyBase64 = &v + return s +} + +// SetPublicKeyBase64 sets the PublicKeyBase64 field's value. +func (s *CreateKeyPairOutput) SetPublicKeyBase64(v string) *CreateKeyPairOutput { + s.PublicKeyBase64 = &v + return s +} + +type CreateLoadBalancerInput struct { + _ struct{} `type:"structure"` + + // The optional alternative domains and subdomains to use with your SSL/TLS + // certificate (e.g., www.example.com, example.com, m.example.com, blog.example.com). + CertificateAlternativeNames []*string `locationName:"certificateAlternativeNames" type:"list"` + + // The domain name with which your certificate is associated (e.g., example.com). + // + // If you specify certificateDomainName, then certificateName is required (and + // vice-versa). + CertificateDomainName *string `locationName:"certificateDomainName" type:"string"` + + // The name of the SSL/TLS certificate. + // + // If you specify certificateName, then certificateDomainName is required (and + // vice-versa). + CertificateName *string `locationName:"certificateName" type:"string"` + + // The path you provided to perform the load balancer health check. If you didn't + // specify a health check path, Lightsail uses the root path of your website + // (e.g., "/"). + // + // You may want to specify a custom health check path other than the root of + // your application if your home page loads slowly or has a lot of media or + // scripting on it. + HealthCheckPath *string `locationName:"healthCheckPath" type:"string"` + + // The instance port where you're creating your load balancer. + // + // InstancePort is a required field + InstancePort *int64 `locationName:"instancePort" type:"integer" required:"true"` + + // The name of your load balancer. + // + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateLoadBalancerInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLoadBalancerInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateLoadBalancerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLoadBalancerInput"} + if s.InstancePort == nil { + invalidParams.Add(request.NewErrParamRequired("InstancePort")) + } + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAlternativeNames sets the CertificateAlternativeNames field's value. +func (s *CreateLoadBalancerInput) SetCertificateAlternativeNames(v []*string) *CreateLoadBalancerInput { + s.CertificateAlternativeNames = v + return s +} + +// SetCertificateDomainName sets the CertificateDomainName field's value. +func (s *CreateLoadBalancerInput) SetCertificateDomainName(v string) *CreateLoadBalancerInput { + s.CertificateDomainName = &v + return s +} + +// SetCertificateName sets the CertificateName field's value. +func (s *CreateLoadBalancerInput) SetCertificateName(v string) *CreateLoadBalancerInput { + s.CertificateName = &v + return s +} + +// SetHealthCheckPath sets the HealthCheckPath field's value. 
+func (s *CreateLoadBalancerInput) SetHealthCheckPath(v string) *CreateLoadBalancerInput { + s.HealthCheckPath = &v + return s +} + +// SetInstancePort sets the InstancePort field's value. +func (s *CreateLoadBalancerInput) SetInstancePort(v int64) *CreateLoadBalancerInput { + s.InstancePort = &v + return s +} + +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *CreateLoadBalancerInput) SetLoadBalancerName(v string) *CreateLoadBalancerInput { + s.LoadBalancerName = &v + return s +} + +type CreateLoadBalancerOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about the API operations. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s CreateLoadBalancerOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLoadBalancerOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *CreateLoadBalancerOutput) SetOperations(v []*Operation) *CreateLoadBalancerOutput { + s.Operations = v + return s +} + +type CreateLoadBalancerTlsCertificateInput struct { + _ struct{} `type:"structure"` + + // An array of strings listing alternative domains and subdomains for your SSL/TLS + // certificate. Lightsail will de-dupe the names for you. You can have a maximum + // of 9 alternative names (in addition to the 1 primary domain). We do not support + // wildcards (e.g., *.example.com). + CertificateAlternativeNames []*string `locationName:"certificateAlternativeNames" type:"list"` + + // The domain name (e.g., example.com) for your SSL/TLS certificate. + // + // CertificateDomainName is a required field + CertificateDomainName *string `locationName:"certificateDomainName" type:"string" required:"true"` + + // The SSL/TLS certificate name. + // + // You can have up to 10 certificates in your account at one time. Each Lightsail + // load balancer can have up to 2 certificates associated with it at one time. + // There is also an overall limit to the number of certificates that can be + // issue in a 365-day period. For more information, see Limits (http://docs.aws.amazon.com/acm/latest/userguide/acm-limits.html). + // + // CertificateName is a required field + CertificateName *string `locationName:"certificateName" type:"string" required:"true"` + + // The load balancer name where you want to create the SSL/TLS certificate. + // + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateLoadBalancerTlsCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLoadBalancerTlsCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
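A sketch of requesting an SSL/TLS certificate for an existing load balancer with the input defined above; all names are placeholders, and the per-load-balancer and issuance limits described in the field docs still apply:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

// requestCertificate asks Lightsail to issue an SSL/TLS certificate and
// associates the request with the named load balancer.
func requestCertificate(svc *lightsail.Lightsail, lbName string) ([]*lightsail.Operation, error) {
	out, err := svc.CreateLoadBalancerTlsCertificate(&lightsail.CreateLoadBalancerTlsCertificateInput{
		LoadBalancerName:      aws.String(lbName),
		CertificateName:       aws.String("example-com-cert"),
		CertificateDomainName: aws.String("example.com"),
		// Up to 9 alternative names; wildcards are not supported.
		CertificateAlternativeNames: aws.StringSlice([]string{"www.example.com"}),
	})
	if err != nil {
		return nil, err
	}
	return out.Operations, nil
}
```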
+func (s *CreateLoadBalancerTlsCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLoadBalancerTlsCertificateInput"} + if s.CertificateDomainName == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateDomainName")) + } + if s.CertificateName == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateName")) + } + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateAlternativeNames sets the CertificateAlternativeNames field's value. +func (s *CreateLoadBalancerTlsCertificateInput) SetCertificateAlternativeNames(v []*string) *CreateLoadBalancerTlsCertificateInput { + s.CertificateAlternativeNames = v + return s +} + +// SetCertificateDomainName sets the CertificateDomainName field's value. +func (s *CreateLoadBalancerTlsCertificateInput) SetCertificateDomainName(v string) *CreateLoadBalancerTlsCertificateInput { + s.CertificateDomainName = &v + return s +} + +// SetCertificateName sets the CertificateName field's value. +func (s *CreateLoadBalancerTlsCertificateInput) SetCertificateName(v string) *CreateLoadBalancerTlsCertificateInput { + s.CertificateName = &v + return s +} + +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *CreateLoadBalancerTlsCertificateInput) SetLoadBalancerName(v string) *CreateLoadBalancerTlsCertificateInput { + s.LoadBalancerName = &v + return s +} + +type CreateLoadBalancerTlsCertificateOutput struct { _ struct{} `type:"structure"` - // The name of the static IP address. + // An object containing information about the API operations. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s CreateLoadBalancerTlsCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLoadBalancerTlsCertificateOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *CreateLoadBalancerTlsCertificateOutput) SetOperations(v []*Operation) *CreateLoadBalancerTlsCertificateOutput { + s.Operations = v + return s +} + +type CreateRelationalDatabaseFromSnapshotInput struct { + _ struct{} `type:"structure"` + + // The Availability Zone in which to create your new database. Use the us-east-2a + // case-sensitive format. // - // StaticIpName is a required field - StaticIpName *string `locationName:"staticIpName" type:"string" required:"true"` + // You can get a list of Availability Zones by using the get regions operation. + // Be sure to add the include relational database Availability Zones parameter + // to your request. + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` + + // Specifies the accessibility options for your new database. A value of true + // specifies a database that is available to resources outside of your Lightsail + // account. A value of false specifies a database that is available only to + // your Lightsail resources in the same region as your database. + PubliclyAccessible *bool `locationName:"publiclyAccessible" type:"boolean"` + + // The bundle ID for your new database. A bundle describes the performance specifications + // for your database. + // + // You can get a list of database bundle IDs by using the get relational database + // bundles operation. 
+ // + // When creating a new database from a snapshot, you cannot choose a bundle + // that is smaller than the bundle of the source database. + RelationalDatabaseBundleId *string `locationName:"relationalDatabaseBundleId" type:"string"` + + // The name to use for your new database. + // + // Constraints: + // + // * Must contain from 2 to 255 alphanumeric characters, or hyphens. + // + // * The first and last character must be a letter or number. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` + + // The name of the database snapshot from which to create your new database. + RelationalDatabaseSnapshotName *string `locationName:"relationalDatabaseSnapshotName" type:"string"` + + // The date and time to restore your database from. + // + // Constraints: + // + // * Must be before the latest restorable time for the database. + // + // * Cannot be specified if the use latest restorable time parameter is true. + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Specified in the Unix time format. + // + // For example, if you wish to use a restore time of October 1, 2018, at 8 PM + // UTC, then you input 1538424000 as the restore time. + RestoreTime *time.Time `locationName:"restoreTime" type:"timestamp"` + + // The name of the source database. + SourceRelationalDatabaseName *string `locationName:"sourceRelationalDatabaseName" type:"string"` + + // Specifies whether your database is restored from the latest backup time. + // A value of true restores from the latest backup time. + // + // Default: false + // + // Constraints: Cannot be specified if the restore time parameter is provided. + UseLatestRestorableTime *bool `locationName:"useLatestRestorableTime" type:"boolean"` } // String returns the string representation -func (s AllocateStaticIpInput) String() string { +func (s CreateRelationalDatabaseFromSnapshotInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AllocateStaticIpInput) GoString() string { +func (s CreateRelationalDatabaseFromSnapshotInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AllocateStaticIpInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AllocateStaticIpInput"} - if s.StaticIpName == nil { - invalidParams.Add(request.NewErrParamRequired("StaticIpName")) +func (s *CreateRelationalDatabaseFromSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateRelationalDatabaseFromSnapshotInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -7526,77 +11775,229 @@ func (s *AllocateStaticIpInput) Validate() error { return nil } -// SetStaticIpName sets the StaticIpName field's value. -func (s *AllocateStaticIpInput) SetStaticIpName(v string) *AllocateStaticIpInput { - s.StaticIpName = &v +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateRelationalDatabaseFromSnapshotInput) SetAvailabilityZone(v string) *CreateRelationalDatabaseFromSnapshotInput { + s.AvailabilityZone = &v return s } -type AllocateStaticIpOutput struct { +// SetPubliclyAccessible sets the PubliclyAccessible field's value. 
+func (s *CreateRelationalDatabaseFromSnapshotInput) SetPubliclyAccessible(v bool) *CreateRelationalDatabaseFromSnapshotInput { + s.PubliclyAccessible = &v + return s +} + +// SetRelationalDatabaseBundleId sets the RelationalDatabaseBundleId field's value. +func (s *CreateRelationalDatabaseFromSnapshotInput) SetRelationalDatabaseBundleId(v string) *CreateRelationalDatabaseFromSnapshotInput { + s.RelationalDatabaseBundleId = &v + return s +} + +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *CreateRelationalDatabaseFromSnapshotInput) SetRelationalDatabaseName(v string) *CreateRelationalDatabaseFromSnapshotInput { + s.RelationalDatabaseName = &v + return s +} + +// SetRelationalDatabaseSnapshotName sets the RelationalDatabaseSnapshotName field's value. +func (s *CreateRelationalDatabaseFromSnapshotInput) SetRelationalDatabaseSnapshotName(v string) *CreateRelationalDatabaseFromSnapshotInput { + s.RelationalDatabaseSnapshotName = &v + return s +} + +// SetRestoreTime sets the RestoreTime field's value. +func (s *CreateRelationalDatabaseFromSnapshotInput) SetRestoreTime(v time.Time) *CreateRelationalDatabaseFromSnapshotInput { + s.RestoreTime = &v + return s +} + +// SetSourceRelationalDatabaseName sets the SourceRelationalDatabaseName field's value. +func (s *CreateRelationalDatabaseFromSnapshotInput) SetSourceRelationalDatabaseName(v string) *CreateRelationalDatabaseFromSnapshotInput { + s.SourceRelationalDatabaseName = &v + return s +} + +// SetUseLatestRestorableTime sets the UseLatestRestorableTime field's value. +func (s *CreateRelationalDatabaseFromSnapshotInput) SetUseLatestRestorableTime(v bool) *CreateRelationalDatabaseFromSnapshotInput { + s.UseLatestRestorableTime = &v + return s +} + +type CreateRelationalDatabaseFromSnapshotOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the static IP address - // you allocated. + // An object describing the result of your create relational database from snapshot + // request. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s AllocateStaticIpOutput) String() string { +func (s CreateRelationalDatabaseFromSnapshotOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AllocateStaticIpOutput) GoString() string { +func (s CreateRelationalDatabaseFromSnapshotOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. -func (s *AllocateStaticIpOutput) SetOperations(v []*Operation) *AllocateStaticIpOutput { +func (s *CreateRelationalDatabaseFromSnapshotOutput) SetOperations(v []*Operation) *CreateRelationalDatabaseFromSnapshotOutput { s.Operations = v return s } -type AttachDiskInput struct { +type CreateRelationalDatabaseInput struct { _ struct{} `type:"structure"` - // The unique Lightsail disk name (e.g., my-disk). + // The Availability Zone in which to create your new database. Use the us-east-2a + // case-sensitive format. // - // DiskName is a required field - DiskName *string `locationName:"diskName" type:"string" required:"true"` + // You can get a list of Availability Zones by using the get regions operation. + // Be sure to add the include relational database Availability Zones parameter + // to your request. + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` - // The disk path to expose to the instance (e.g., /dev/xvdf). 
+ // The name of the master database created when the Lightsail database resource + // is created. // - // DiskPath is a required field - DiskPath *string `locationName:"diskPath" type:"string" required:"true"` + // Constraints: + // + // * Must contain from 1 to 64 alphanumeric characters. + // + // * Cannot be a word reserved by the specified database engine + // + // MasterDatabaseName is a required field + MasterDatabaseName *string `locationName:"masterDatabaseName" type:"string" required:"true"` - // The name of the Lightsail instance where you want to utilize the storage - // disk. + // The password for the master user of your new database. The password can include + // any printable ASCII character except "/", """, or "@". // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // Constraints: Must contain 8 to 41 characters. + MasterUserPassword *string `locationName:"masterUserPassword" type:"string"` + + // The master user name for your new database. + // + // Constraints: + // + // * Master user name is required. + // + // * Must contain from 1 to 16 alphanumeric characters. + // + // * The first character must be a letter. + // + // * Cannot be a reserved word for the database engine you choose. + // + // For more information about reserved words in MySQL 5.6 or 5.7, see the Keywords + // and Reserved Words articles for MySQL 5.6 (https://dev.mysql.com/doc/refman/5.6/en/keywords.html) + // or MySQL 5.7 (https://dev.mysql.com/doc/refman/5.7/en/keywords.html) respectively. + // + // MasterUsername is a required field + MasterUsername *string `locationName:"masterUsername" type:"string" required:"true"` + + // The daily time range during which automated backups are created for your + // new database if automated backups are enabled. + // + // The default is a 30-minute window selected at random from an 8-hour block + // of time for each AWS Region. For more information about the preferred backup + // window time blocks for each region, see the Working With Backups (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow) + // guide in the Amazon Relational Database Service (Amazon RDS) documentation. + // + // Constraints: + // + // * Must be in the hh24:mi-hh24:mi format. + // + // Example: 16:00-16:30 + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Must not conflict with the preferred maintenance window. + // + // * Must be at least 30 minutes. + PreferredBackupWindow *string `locationName:"preferredBackupWindow" type:"string"` + + // The weekly time range during which system maintenance can occur on your new + // database. + // + // The default is a 30-minute window selected at random from an 8-hour block + // of time for each AWS Region, occurring on a random day of the week. + // + // Constraints: + // + // * Must be in the ddd:hh24:mi-ddd:hh24:mi format. + // + // * Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. + // + // * Must be at least 30 minutes. + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Example: Tue:17:00-Tue:17:30 + PreferredMaintenanceWindow *string `locationName:"preferredMaintenanceWindow" type:"string"` + + // Specifies the accessibility options for your new database. A value of true + // specifies a database that is available to resources outside of your Lightsail + // account. 
A value of false specifies a database that is available only to + // your Lightsail resources in the same region as your database. + PubliclyAccessible *bool `locationName:"publiclyAccessible" type:"boolean"` + + // The blueprint ID for your new database. A blueprint describes the major engine + // version of a database. + // + // You can get a list of database blueprints IDs by using the get relational + // database blueprints operation. + // + // RelationalDatabaseBlueprintId is a required field + RelationalDatabaseBlueprintId *string `locationName:"relationalDatabaseBlueprintId" type:"string" required:"true"` + + // The bundle ID for your new database. A bundle describes the performance specifications + // for your database. + // + // You can get a list of database bundle IDs by using the get relational database + // bundles operation. + // + // RelationalDatabaseBundleId is a required field + RelationalDatabaseBundleId *string `locationName:"relationalDatabaseBundleId" type:"string" required:"true"` + + // The name to use for your new database. + // + // Constraints: + // + // * Must contain from 2 to 255 alphanumeric characters, or hyphens. + // + // * The first and last character must be a letter or number. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` } // String returns the string representation -func (s AttachDiskInput) String() string { +func (s CreateRelationalDatabaseInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachDiskInput) GoString() string { +func (s CreateRelationalDatabaseInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AttachDiskInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AttachDiskInput"} - if s.DiskName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskName")) +func (s *CreateRelationalDatabaseInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateRelationalDatabaseInput"} + if s.MasterDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("MasterDatabaseName")) } - if s.DiskPath == nil { - invalidParams.Add(request.NewErrParamRequired("DiskPath")) + if s.MasterUsername == nil { + invalidParams.Add(request.NewErrParamRequired("MasterUsername")) } - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) + if s.RelationalDatabaseBlueprintId == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseBlueprintId")) + } + if s.RelationalDatabaseBundleId == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseBundleId")) + } + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -7605,86 +12006,127 @@ func (s *AttachDiskInput) Validate() error { return nil } -// SetDiskName sets the DiskName field's value. -func (s *AttachDiskInput) SetDiskName(v string) *AttachDiskInput { - s.DiskName = &v +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateRelationalDatabaseInput) SetAvailabilityZone(v string) *CreateRelationalDatabaseInput { + s.AvailabilityZone = &v return s } -// SetDiskPath sets the DiskPath field's value. 
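Since several of the database fields above are free-form strings with formatting constraints (the backup and maintenance windows in particular), a sketch of a well-formed CreateRelationalDatabase call may help; the blueprint and bundle IDs are illustrative and would normally come from the get relational database blueprints and bundles operations:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

// createDatabase provisions a managed database with an explicit backup and
// maintenance schedule, kept private to the Lightsail account's resources.
func createDatabase(svc *lightsail.Lightsail, name string) ([]*lightsail.Operation, error) {
	out, err := svc.CreateRelationalDatabase(&lightsail.CreateRelationalDatabaseInput{
		RelationalDatabaseName:        aws.String(name),
		RelationalDatabaseBlueprintId: aws.String("mysql_5_7"), // major engine version
		RelationalDatabaseBundleId:    aws.String("micro_1_0"), // performance tier
		MasterDatabaseName:            aws.String("app"),
		MasterUsername:                aws.String("admin"),
		// MasterUserPassword is optional at the SDK level and omitted here.
		PreferredBackupWindow:      aws.String("16:00-16:30"),         // hh24:mi-hh24:mi, UTC
		PreferredMaintenanceWindow: aws.String("Tue:17:00-Tue:17:30"), // ddd:hh24:mi-ddd:hh24:mi
		PubliclyAccessible:         aws.Bool(false),
	})
	if err != nil {
		return nil, err
	}
	return out.Operations, nil
}
```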
-func (s *AttachDiskInput) SetDiskPath(v string) *AttachDiskInput { - s.DiskPath = &v +// SetMasterDatabaseName sets the MasterDatabaseName field's value. +func (s *CreateRelationalDatabaseInput) SetMasterDatabaseName(v string) *CreateRelationalDatabaseInput { + s.MasterDatabaseName = &v return s } -// SetInstanceName sets the InstanceName field's value. -func (s *AttachDiskInput) SetInstanceName(v string) *AttachDiskInput { - s.InstanceName = &v +// SetMasterUserPassword sets the MasterUserPassword field's value. +func (s *CreateRelationalDatabaseInput) SetMasterUserPassword(v string) *CreateRelationalDatabaseInput { + s.MasterUserPassword = &v return s } -type AttachDiskOutput struct { +// SetMasterUsername sets the MasterUsername field's value. +func (s *CreateRelationalDatabaseInput) SetMasterUsername(v string) *CreateRelationalDatabaseInput { + s.MasterUsername = &v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *CreateRelationalDatabaseInput) SetPreferredBackupWindow(v string) *CreateRelationalDatabaseInput { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *CreateRelationalDatabaseInput) SetPreferredMaintenanceWindow(v string) *CreateRelationalDatabaseInput { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetPubliclyAccessible sets the PubliclyAccessible field's value. +func (s *CreateRelationalDatabaseInput) SetPubliclyAccessible(v bool) *CreateRelationalDatabaseInput { + s.PubliclyAccessible = &v + return s +} + +// SetRelationalDatabaseBlueprintId sets the RelationalDatabaseBlueprintId field's value. +func (s *CreateRelationalDatabaseInput) SetRelationalDatabaseBlueprintId(v string) *CreateRelationalDatabaseInput { + s.RelationalDatabaseBlueprintId = &v + return s +} + +// SetRelationalDatabaseBundleId sets the RelationalDatabaseBundleId field's value. +func (s *CreateRelationalDatabaseInput) SetRelationalDatabaseBundleId(v string) *CreateRelationalDatabaseInput { + s.RelationalDatabaseBundleId = &v + return s +} + +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *CreateRelationalDatabaseInput) SetRelationalDatabaseName(v string) *CreateRelationalDatabaseInput { + s.RelationalDatabaseName = &v + return s +} + +type CreateRelationalDatabaseOutput struct { _ struct{} `type:"structure"` - // An object describing the API operations. + // An object describing the result of your create relational database request. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s AttachDiskOutput) String() string { +func (s CreateRelationalDatabaseOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachDiskOutput) GoString() string { +func (s CreateRelationalDatabaseOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. -func (s *AttachDiskOutput) SetOperations(v []*Operation) *AttachDiskOutput { +func (s *CreateRelationalDatabaseOutput) SetOperations(v []*Operation) *CreateRelationalDatabaseOutput { s.Operations = v return s } -type AttachInstancesToLoadBalancerInput struct { +type CreateRelationalDatabaseSnapshotInput struct { _ struct{} `type:"structure"` - // An array of strings representing the instance name(s) you want to attach - // to your load balancer. + // The name of the database on which to base your new snapshot. 
// - // An instance must be running before you can attach it to your load balancer. + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` + + // The name for your new database snapshot. // - // There are no additional limits on the number of instances you can attach - // to your load balancer, aside from the limit of Lightsail instances you can - // create in your account (20). + // Constraints: // - // InstanceNames is a required field - InstanceNames []*string `locationName:"instanceNames" type:"list" required:"true"` - - // The name of the load balancer. + // * Must contain from 2 to 255 alphanumeric characters, or hyphens. // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // * The first and last character must be a letter or number. + // + // RelationalDatabaseSnapshotName is a required field + RelationalDatabaseSnapshotName *string `locationName:"relationalDatabaseSnapshotName" type:"string" required:"true"` } // String returns the string representation -func (s AttachInstancesToLoadBalancerInput) String() string { +func (s CreateRelationalDatabaseSnapshotInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachInstancesToLoadBalancerInput) GoString() string { +func (s CreateRelationalDatabaseSnapshotInput) GoString() string { return s.String() } - -// Validate inspects the fields of the type to determine if they are valid. -func (s *AttachInstancesToLoadBalancerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AttachInstancesToLoadBalancerInput"} - if s.InstanceNames == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceNames")) + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateRelationalDatabaseSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateRelationalDatabaseSnapshotInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) + if s.RelationalDatabaseSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseSnapshotName")) } if invalidParams.Len() > 0 { @@ -7693,74 +12135,66 @@ func (s *AttachInstancesToLoadBalancerInput) Validate() error { return nil } -// SetInstanceNames sets the InstanceNames field's value. -func (s *AttachInstancesToLoadBalancerInput) SetInstanceNames(v []*string) *AttachInstancesToLoadBalancerInput { - s.InstanceNames = v +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *CreateRelationalDatabaseSnapshotInput) SetRelationalDatabaseName(v string) *CreateRelationalDatabaseSnapshotInput { + s.RelationalDatabaseName = &v return s } -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *AttachInstancesToLoadBalancerInput) SetLoadBalancerName(v string) *AttachInstancesToLoadBalancerInput { - s.LoadBalancerName = &v +// SetRelationalDatabaseSnapshotName sets the RelationalDatabaseSnapshotName field's value. 
+func (s *CreateRelationalDatabaseSnapshotInput) SetRelationalDatabaseSnapshotName(v string) *CreateRelationalDatabaseSnapshotInput { + s.RelationalDatabaseSnapshotName = &v return s } -type AttachInstancesToLoadBalancerOutput struct { +type CreateRelationalDatabaseSnapshotOutput struct { _ struct{} `type:"structure"` - // An object representing the API operations. + // An object describing the result of your create relational database snapshot + // request. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s AttachInstancesToLoadBalancerOutput) String() string { +func (s CreateRelationalDatabaseSnapshotOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachInstancesToLoadBalancerOutput) GoString() string { +func (s CreateRelationalDatabaseSnapshotOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. -func (s *AttachInstancesToLoadBalancerOutput) SetOperations(v []*Operation) *AttachInstancesToLoadBalancerOutput { +func (s *CreateRelationalDatabaseSnapshotOutput) SetOperations(v []*Operation) *CreateRelationalDatabaseSnapshotOutput { s.Operations = v return s } -type AttachLoadBalancerTlsCertificateInput struct { +type DeleteDiskInput struct { _ struct{} `type:"structure"` - // The name of your SSL/TLS certificate. - // - // CertificateName is a required field - CertificateName *string `locationName:"certificateName" type:"string" required:"true"` - - // The name of the load balancer to which you want to associate the SSL/TLS - // certificate. + // The unique name of the disk you want to delete (e.g., my-disk). // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // DiskName is a required field + DiskName *string `locationName:"diskName" type:"string" required:"true"` } // String returns the string representation -func (s AttachLoadBalancerTlsCertificateInput) String() string { +func (s DeleteDiskInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachLoadBalancerTlsCertificateInput) GoString() string { +func (s DeleteDiskInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AttachLoadBalancerTlsCertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AttachLoadBalancerTlsCertificateInput"} - if s.CertificateName == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateName")) - } - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) +func (s *DeleteDiskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDiskInput"} + if s.DiskName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskName")) } if invalidParams.Len() > 0 { @@ -7769,76 +12203,59 @@ func (s *AttachLoadBalancerTlsCertificateInput) Validate() error { return nil } -// SetCertificateName sets the CertificateName field's value. -func (s *AttachLoadBalancerTlsCertificateInput) SetCertificateName(v string) *AttachLoadBalancerTlsCertificateInput { - s.CertificateName = &v - return s -} - -// SetLoadBalancerName sets the LoadBalancerName field's value. 
-func (s *AttachLoadBalancerTlsCertificateInput) SetLoadBalancerName(v string) *AttachLoadBalancerTlsCertificateInput { - s.LoadBalancerName = &v +// SetDiskName sets the DiskName field's value. +func (s *DeleteDiskInput) SetDiskName(v string) *DeleteDiskInput { + s.DiskName = &v return s } -type AttachLoadBalancerTlsCertificateOutput struct { +type DeleteDiskOutput struct { _ struct{} `type:"structure"` - // An object representing the API operations. - // - // These SSL/TLS certificates are only usable by Lightsail load balancers. You - // can't get the certificate and use it for another purpose. + // An object describing the API operations. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s AttachLoadBalancerTlsCertificateOutput) String() string { +func (s DeleteDiskOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachLoadBalancerTlsCertificateOutput) GoString() string { +func (s DeleteDiskOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. -func (s *AttachLoadBalancerTlsCertificateOutput) SetOperations(v []*Operation) *AttachLoadBalancerTlsCertificateOutput { +func (s *DeleteDiskOutput) SetOperations(v []*Operation) *DeleteDiskOutput { s.Operations = v return s } -type AttachStaticIpInput struct { +type DeleteDiskSnapshotInput struct { _ struct{} `type:"structure"` - // The instance name to which you want to attach the static IP address. - // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` - - // The name of the static IP. + // The name of the disk snapshot you want to delete (e.g., my-disk-snapshot). // - // StaticIpName is a required field - StaticIpName *string `locationName:"staticIpName" type:"string" required:"true"` + // DiskSnapshotName is a required field + DiskSnapshotName *string `locationName:"diskSnapshotName" type:"string" required:"true"` } // String returns the string representation -func (s AttachStaticIpInput) String() string { +func (s DeleteDiskSnapshotInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachStaticIpInput) GoString() string { +func (s DeleteDiskSnapshotInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AttachStaticIpInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AttachStaticIpInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - if s.StaticIpName == nil { - invalidParams.Add(request.NewErrParamRequired("StaticIpName")) +func (s *DeleteDiskSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDiskSnapshotInput"} + if s.DiskSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskSnapshotName")) } if invalidParams.Len() > 0 { @@ -7847,358 +12264,321 @@ func (s *AttachStaticIpInput) Validate() error { return nil } -// SetInstanceName sets the InstanceName field's value. -func (s *AttachStaticIpInput) SetInstanceName(v string) *AttachStaticIpInput { - s.InstanceName = &v - return s -} - -// SetStaticIpName sets the StaticIpName field's value. -func (s *AttachStaticIpInput) SetStaticIpName(v string) *AttachStaticIpInput { - s.StaticIpName = &v +// SetDiskSnapshotName sets the DiskSnapshotName field's value. 
+func (s *DeleteDiskSnapshotInput) SetDiskSnapshotName(v string) *DeleteDiskSnapshotInput { + s.DiskSnapshotName = &v return s } -type AttachStaticIpOutput struct { +type DeleteDiskSnapshotOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about your API operations. + // An object describing the API operations. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s AttachStaticIpOutput) String() string { +func (s DeleteDiskSnapshotOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AttachStaticIpOutput) GoString() string { +func (s DeleteDiskSnapshotOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. -func (s *AttachStaticIpOutput) SetOperations(v []*Operation) *AttachStaticIpOutput { +func (s *DeleteDiskSnapshotOutput) SetOperations(v []*Operation) *DeleteDiskSnapshotOutput { s.Operations = v return s } -// Describes an Availability Zone. -type AvailabilityZone struct { +type DeleteDomainEntryInput struct { _ struct{} `type:"structure"` - // The state of the Availability Zone. - State *string `locationName:"state" type:"string"` + // An array of key-value pairs containing information about your domain entries. + // + // DomainEntry is a required field + DomainEntry *DomainEntry `locationName:"domainEntry" type:"structure" required:"true"` - // The name of the Availability Zone. The format is us-east-2a (case-sensitive). - ZoneName *string `locationName:"zoneName" type:"string"` + // The name of the domain entry to delete. + // + // DomainName is a required field + DomainName *string `locationName:"domainName" type:"string" required:"true"` } // String returns the string representation -func (s AvailabilityZone) String() string { +func (s DeleteDomainEntryInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AvailabilityZone) GoString() string { +func (s DeleteDomainEntryInput) GoString() string { return s.String() } -// SetState sets the State field's value. -func (s *AvailabilityZone) SetState(v string) *AvailabilityZone { - s.State = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDomainEntryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDomainEntryInput"} + if s.DomainEntry == nil { + invalidParams.Add(request.NewErrParamRequired("DomainEntry")) + } + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainEntry sets the DomainEntry field's value. +func (s *DeleteDomainEntryInput) SetDomainEntry(v *DomainEntry) *DeleteDomainEntryInput { + s.DomainEntry = v return s } -// SetZoneName sets the ZoneName field's value. -func (s *AvailabilityZone) SetZoneName(v string) *AvailabilityZone { - s.ZoneName = &v +// SetDomainName sets the DomainName field's value. +func (s *DeleteDomainEntryInput) SetDomainName(v string) *DeleteDomainEntryInput { + s.DomainName = &v return s } -// Describes a blueprint (a virtual private server image). -type Blueprint struct { +type DeleteDomainEntryOutput struct { _ struct{} `type:"structure"` - // The ID for the virtual private server image (e.g., app_wordpress_4_4 or app_lamp_7_0). 
- BlueprintId *string `locationName:"blueprintId" type:"string"` - - // The description of the blueprint. - Description *string `locationName:"description" type:"string"` - - // The group name of the blueprint (e.g., amazon-linux). - Group *string `locationName:"group" type:"string"` - - // A Boolean value indicating whether the blueprint is active. When you update - // your blueprints, you will inactivate old blueprints and keep the most recent - // versions active. - IsActive *bool `locationName:"isActive" type:"boolean"` - - // The end-user license agreement URL for the image or blueprint. - LicenseUrl *string `locationName:"licenseUrl" type:"string"` - - // The minimum bundle power required to run this blueprint. For example, you - // need a bundle with a power value of 500 or more to create an instance that - // uses a blueprint with a minimum power value of 500. 0 indicates that the - // blueprint runs on all instance sizes. - MinPower *int64 `locationName:"minPower" type:"integer"` - - // The friendly name of the blueprint (e.g., Amazon Linux). - Name *string `locationName:"name" type:"string"` - - // The operating system platform (either Linux/Unix-based or Windows Server-based) - // of the blueprint. - Platform *string `locationName:"platform" type:"string" enum:"InstancePlatform"` - - // The product URL to learn more about the image or blueprint. - ProductUrl *string `locationName:"productUrl" type:"string"` - - // The type of the blueprint (e.g., os or app). - Type *string `locationName:"type" type:"string" enum:"BlueprintType"` - - // The version number of the operating system, application, or stack (e.g., - // 2016.03.0). - Version *string `locationName:"version" type:"string"` - - // The version code. - VersionCode *string `locationName:"versionCode" type:"string"` + // An array of key-value pairs containing information about the results of your + // delete domain entry request. + Operation *Operation `locationName:"operation" type:"structure"` } // String returns the string representation -func (s Blueprint) String() string { +func (s DeleteDomainEntryOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Blueprint) GoString() string { +func (s DeleteDomainEntryOutput) GoString() string { return s.String() } -// SetBlueprintId sets the BlueprintId field's value. -func (s *Blueprint) SetBlueprintId(v string) *Blueprint { - s.BlueprintId = &v - return s -} - -// SetDescription sets the Description field's value. -func (s *Blueprint) SetDescription(v string) *Blueprint { - s.Description = &v +// SetOperation sets the Operation field's value. +func (s *DeleteDomainEntryOutput) SetOperation(v *Operation) *DeleteDomainEntryOutput { + s.Operation = v return s } -// SetGroup sets the Group field's value. -func (s *Blueprint) SetGroup(v string) *Blueprint { - s.Group = &v - return s -} +type DeleteDomainInput struct { + _ struct{} `type:"structure"` -// SetIsActive sets the IsActive field's value. -func (s *Blueprint) SetIsActive(v bool) *Blueprint { - s.IsActive = &v - return s + // The specific domain name to delete. + // + // DomainName is a required field + DomainName *string `locationName:"domainName" type:"string" required:"true"` } -// SetLicenseUrl sets the LicenseUrl field's value. 
-func (s *Blueprint) SetLicenseUrl(v string) *Blueprint { - s.LicenseUrl = &v - return s +// String returns the string representation +func (s DeleteDomainInput) String() string { + return awsutil.Prettify(s) } -// SetMinPower sets the MinPower field's value. -func (s *Blueprint) SetMinPower(v int64) *Blueprint { - s.MinPower = &v - return s +// GoString returns the string representation +func (s DeleteDomainInput) GoString() string { + return s.String() } -// SetName sets the Name field's value. -func (s *Blueprint) SetName(v string) *Blueprint { - s.Name = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDomainInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDomainInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetPlatform sets the Platform field's value. -func (s *Blueprint) SetPlatform(v string) *Blueprint { - s.Platform = &v +// SetDomainName sets the DomainName field's value. +func (s *DeleteDomainInput) SetDomainName(v string) *DeleteDomainInput { + s.DomainName = &v return s } -// SetProductUrl sets the ProductUrl field's value. -func (s *Blueprint) SetProductUrl(v string) *Blueprint { - s.ProductUrl = &v - return s +type DeleteDomainOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the results of your + // delete domain request. + Operation *Operation `locationName:"operation" type:"structure"` } -// SetType sets the Type field's value. -func (s *Blueprint) SetType(v string) *Blueprint { - s.Type = &v - return s +// String returns the string representation +func (s DeleteDomainOutput) String() string { + return awsutil.Prettify(s) } -// SetVersion sets the Version field's value. -func (s *Blueprint) SetVersion(v string) *Blueprint { - s.Version = &v - return s +// GoString returns the string representation +func (s DeleteDomainOutput) GoString() string { + return s.String() } -// SetVersionCode sets the VersionCode field's value. -func (s *Blueprint) SetVersionCode(v string) *Blueprint { - s.VersionCode = &v +// SetOperation sets the Operation field's value. +func (s *DeleteDomainOutput) SetOperation(v *Operation) *DeleteDomainOutput { + s.Operation = v return s } -// Describes a bundle, which is a set of specs describing your virtual private -// server (or instance). -type Bundle struct { +type DeleteInstanceInput struct { _ struct{} `type:"structure"` - // The bundle ID (e.g., micro_1_0). - BundleId *string `locationName:"bundleId" type:"string"` - - // The number of vCPUs included in the bundle (e.g., 2). - CpuCount *int64 `locationName:"cpuCount" type:"integer"` - - // The size of the SSD (e.g., 30). - DiskSizeInGb *int64 `locationName:"diskSizeInGb" type:"integer"` - - // The Amazon EC2 instance type (e.g., t2.micro). - InstanceType *string `locationName:"instanceType" type:"string"` + // The name of the instance to delete. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` +} - // A Boolean value indicating whether the bundle is active. - IsActive *bool `locationName:"isActive" type:"boolean"` +// String returns the string representation +func (s DeleteInstanceInput) String() string { + return awsutil.Prettify(s) +} - // A friendly name for the bundle (e.g., Micro). 
- Name *string `locationName:"name" type:"string"` +// GoString returns the string representation +func (s DeleteInstanceInput) GoString() string { + return s.String() +} - // A numeric value that represents the power of the bundle (e.g., 500). You - // can use the bundle's power value in conjunction with a blueprint's minimum - // power value to determine whether the blueprint will run on the bundle. For - // example, you need a bundle with a power value of 500 or more to create an - // instance that uses a blueprint with a minimum power value of 500. - Power *int64 `locationName:"power" type:"integer"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInstanceInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } - // The price in US dollars (e.g., 5.0). - Price *float64 `locationName:"price" type:"float"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // The amount of RAM in GB (e.g., 2.0). - RamSizeInGb *float64 `locationName:"ramSizeInGb" type:"float"` +// SetInstanceName sets the InstanceName field's value. +func (s *DeleteInstanceInput) SetInstanceName(v string) *DeleteInstanceInput { + s.InstanceName = &v + return s +} - // The operating system platform (Linux/Unix-based or Windows Server-based) - // that the bundle supports. You can only launch a WINDOWS bundle on a blueprint - // that supports the WINDOWS platform. LINUX_UNIX blueprints require a LINUX_UNIX - // bundle. - SupportedPlatforms []*string `locationName:"supportedPlatforms" type:"list"` +type DeleteInstanceOutput struct { + _ struct{} `type:"structure"` - // The data transfer rate per month in GB (e.g., 2000). - TransferPerMonthInGb *int64 `locationName:"transferPerMonthInGb" type:"integer"` + // An array of key-value pairs containing information about the results of your + // delete instance request. + Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s Bundle) String() string { +func (s DeleteInstanceOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Bundle) GoString() string { +func (s DeleteInstanceOutput) GoString() string { return s.String() } -// SetBundleId sets the BundleId field's value. -func (s *Bundle) SetBundleId(v string) *Bundle { - s.BundleId = &v +// SetOperations sets the Operations field's value. +func (s *DeleteInstanceOutput) SetOperations(v []*Operation) *DeleteInstanceOutput { + s.Operations = v return s } -// SetCpuCount sets the CpuCount field's value. -func (s *Bundle) SetCpuCount(v int64) *Bundle { - s.CpuCount = &v - return s -} +type DeleteInstanceSnapshotInput struct { + _ struct{} `type:"structure"` -// SetDiskSizeInGb sets the DiskSizeInGb field's value. -func (s *Bundle) SetDiskSizeInGb(v int64) *Bundle { - s.DiskSizeInGb = &v - return s + // The name of the snapshot to delete. + // + // InstanceSnapshotName is a required field + InstanceSnapshotName *string `locationName:"instanceSnapshotName" type:"string" required:"true"` } -// SetInstanceType sets the InstanceType field's value. -func (s *Bundle) SetInstanceType(v string) *Bundle { - s.InstanceType = &v - return s +// String returns the string representation +func (s DeleteInstanceSnapshotInput) String() string { + return awsutil.Prettify(s) } -// SetIsActive sets the IsActive field's value. 
-func (s *Bundle) SetIsActive(v bool) *Bundle { - s.IsActive = &v - return s +// GoString returns the string representation +func (s DeleteInstanceSnapshotInput) GoString() string { + return s.String() } -// SetName sets the Name field's value. -func (s *Bundle) SetName(v string) *Bundle { - s.Name = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteInstanceSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInstanceSnapshotInput"} + if s.InstanceSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceSnapshotName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetPower sets the Power field's value. -func (s *Bundle) SetPower(v int64) *Bundle { - s.Power = &v +// SetInstanceSnapshotName sets the InstanceSnapshotName field's value. +func (s *DeleteInstanceSnapshotInput) SetInstanceSnapshotName(v string) *DeleteInstanceSnapshotInput { + s.InstanceSnapshotName = &v return s } -// SetPrice sets the Price field's value. -func (s *Bundle) SetPrice(v float64) *Bundle { - s.Price = &v - return s +type DeleteInstanceSnapshotOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the results of your + // delete instance snapshot request. + Operations []*Operation `locationName:"operations" type:"list"` } -// SetRamSizeInGb sets the RamSizeInGb field's value. -func (s *Bundle) SetRamSizeInGb(v float64) *Bundle { - s.RamSizeInGb = &v - return s +// String returns the string representation +func (s DeleteInstanceSnapshotOutput) String() string { + return awsutil.Prettify(s) } -// SetSupportedPlatforms sets the SupportedPlatforms field's value. -func (s *Bundle) SetSupportedPlatforms(v []*string) *Bundle { - s.SupportedPlatforms = v - return s +// GoString returns the string representation +func (s DeleteInstanceSnapshotOutput) GoString() string { + return s.String() } -// SetTransferPerMonthInGb sets the TransferPerMonthInGb field's value. -func (s *Bundle) SetTransferPerMonthInGb(v int64) *Bundle { - s.TransferPerMonthInGb = &v +// SetOperations sets the Operations field's value. +func (s *DeleteInstanceSnapshotOutput) SetOperations(v []*Operation) *DeleteInstanceSnapshotOutput { + s.Operations = v return s } -type CloseInstancePublicPortsInput struct { +type DeleteKeyPairInput struct { _ struct{} `type:"structure"` - // The name of the instance on which you're attempting to close the public ports. - // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` - - // Information about the public port you are trying to close. + // The name of the key pair to delete. // - // PortInfo is a required field - PortInfo *PortInfo `locationName:"portInfo" type:"structure" required:"true"` + // KeyPairName is a required field + KeyPairName *string `locationName:"keyPairName" type:"string" required:"true"` } // String returns the string representation -func (s CloseInstancePublicPortsInput) String() string { +func (s DeleteKeyPairInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CloseInstancePublicPortsInput) GoString() string { +func (s DeleteKeyPairInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CloseInstancePublicPortsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CloseInstancePublicPortsInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - if s.PortInfo == nil { - invalidParams.Add(request.NewErrParamRequired("PortInfo")) +func (s *DeleteKeyPairInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteKeyPairInput"} + if s.KeyPairName == nil { + invalidParams.Add(request.NewErrParamRequired("KeyPairName")) } if invalidParams.Len() > 0 { @@ -8207,95 +12587,60 @@ func (s *CloseInstancePublicPortsInput) Validate() error { return nil } -// SetInstanceName sets the InstanceName field's value. -func (s *CloseInstancePublicPortsInput) SetInstanceName(v string) *CloseInstancePublicPortsInput { - s.InstanceName = &v - return s -} - -// SetPortInfo sets the PortInfo field's value. -func (s *CloseInstancePublicPortsInput) SetPortInfo(v *PortInfo) *CloseInstancePublicPortsInput { - s.PortInfo = v +// SetKeyPairName sets the KeyPairName field's value. +func (s *DeleteKeyPairInput) SetKeyPairName(v string) *DeleteKeyPairInput { + s.KeyPairName = &v return s } -type CloseInstancePublicPortsOutput struct { +type DeleteKeyPairOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs that contains information about the operation. + // An array of key-value pairs containing information about the results of your + // delete key pair request. Operation *Operation `locationName:"operation" type:"structure"` } // String returns the string representation -func (s CloseInstancePublicPortsOutput) String() string { +func (s DeleteKeyPairOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CloseInstancePublicPortsOutput) GoString() string { +func (s DeleteKeyPairOutput) GoString() string { return s.String() } // SetOperation sets the Operation field's value. -func (s *CloseInstancePublicPortsOutput) SetOperation(v *Operation) *CloseInstancePublicPortsOutput { +func (s *DeleteKeyPairOutput) SetOperation(v *Operation) *DeleteKeyPairOutput { s.Operation = v return s } -type CreateDiskFromSnapshotInput struct { +type DeleteLoadBalancerInput struct { _ struct{} `type:"structure"` - // The Availability Zone where you want to create the disk (e.g., us-east-2a). - // Choose the same Availability Zone as the Lightsail instance where you want - // to create the disk. - // - // Use the GetRegions operation to list the Availability Zones where Lightsail - // is currently available. - // - // AvailabilityZone is a required field - AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` - - // The unique Lightsail disk name (e.g., my-disk). - // - // DiskName is a required field - DiskName *string `locationName:"diskName" type:"string" required:"true"` - - // The name of the disk snapshot (e.g., my-snapshot) from which to create the - // new storage disk. - // - // DiskSnapshotName is a required field - DiskSnapshotName *string `locationName:"diskSnapshotName" type:"string" required:"true"` - - // The size of the disk in GB (e.g., 32). + // The name of the load balancer you want to delete. 
// - // SizeInGb is a required field - SizeInGb *int64 `locationName:"sizeInGb" type:"integer" required:"true"` + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` } // String returns the string representation -func (s CreateDiskFromSnapshotInput) String() string { +func (s DeleteLoadBalancerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDiskFromSnapshotInput) GoString() string { +func (s DeleteLoadBalancerInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateDiskFromSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateDiskFromSnapshotInput"} - if s.AvailabilityZone == nil { - invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) - } - if s.DiskName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskName")) - } - if s.DiskSnapshotName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskSnapshotName")) - } - if s.SizeInGb == nil { - invalidParams.Add(request.NewErrParamRequired("SizeInGb")) +func (s *DeleteLoadBalancerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLoadBalancerInput"} + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) } if invalidParams.Len() > 0 { @@ -8304,31 +12649,13 @@ func (s *CreateDiskFromSnapshotInput) Validate() error { return nil } -// SetAvailabilityZone sets the AvailabilityZone field's value. -func (s *CreateDiskFromSnapshotInput) SetAvailabilityZone(v string) *CreateDiskFromSnapshotInput { - s.AvailabilityZone = &v - return s -} - -// SetDiskName sets the DiskName field's value. -func (s *CreateDiskFromSnapshotInput) SetDiskName(v string) *CreateDiskFromSnapshotInput { - s.DiskName = &v - return s -} - -// SetDiskSnapshotName sets the DiskSnapshotName field's value. -func (s *CreateDiskFromSnapshotInput) SetDiskSnapshotName(v string) *CreateDiskFromSnapshotInput { - s.DiskSnapshotName = &v - return s -} - -// SetSizeInGb sets the SizeInGb field's value. -func (s *CreateDiskFromSnapshotInput) SetSizeInGb(v int64) *CreateDiskFromSnapshotInput { - s.SizeInGb = &v +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *DeleteLoadBalancerInput) SetLoadBalancerName(v string) *DeleteLoadBalancerInput { + s.LoadBalancerName = &v return s } -type CreateDiskFromSnapshotOutput struct { +type DeleteLoadBalancerOutput struct { _ struct{} `type:"structure"` // An object describing the API operations. @@ -8336,66 +12663,60 @@ type CreateDiskFromSnapshotOutput struct { } // String returns the string representation -func (s CreateDiskFromSnapshotOutput) String() string { +func (s DeleteLoadBalancerOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDiskFromSnapshotOutput) GoString() string { +func (s DeleteLoadBalancerOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. 
-func (s *CreateDiskFromSnapshotOutput) SetOperations(v []*Operation) *CreateDiskFromSnapshotOutput { +func (s *DeleteLoadBalancerOutput) SetOperations(v []*Operation) *DeleteLoadBalancerOutput { s.Operations = v return s } -type CreateDiskInput struct { +type DeleteLoadBalancerTlsCertificateInput struct { _ struct{} `type:"structure"` - // The Availability Zone where you want to create the disk (e.g., us-east-2a). - // Choose the same Availability Zone as the Lightsail instance where you want - // to create the disk. - // - // Use the GetRegions operation to list the Availability Zones where Lightsail - // is currently available. + // The SSL/TLS certificate name. // - // AvailabilityZone is a required field - AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` + // CertificateName is a required field + CertificateName *string `locationName:"certificateName" type:"string" required:"true"` - // The unique Lightsail disk name (e.g., my-disk). + // When true, forces the deletion of an SSL/TLS certificate. // - // DiskName is a required field - DiskName *string `locationName:"diskName" type:"string" required:"true"` + // There can be two certificates associated with a Lightsail load balancer: + // the primary and the backup. The force parameter is required when the primary + // SSL/TLS certificate is in use by an instance attached to the load balancer. + Force *bool `locationName:"force" type:"boolean"` - // The size of the disk in GB (e.g., 32). + // The load balancer name. // - // SizeInGb is a required field - SizeInGb *int64 `locationName:"sizeInGb" type:"integer" required:"true"` + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` } // String returns the string representation -func (s CreateDiskInput) String() string { +func (s DeleteLoadBalancerTlsCertificateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDiskInput) GoString() string { +func (s DeleteLoadBalancerTlsCertificateInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateDiskInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateDiskInput"} - if s.AvailabilityZone == nil { - invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) - } - if s.DiskName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskName")) +func (s *DeleteLoadBalancerTlsCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLoadBalancerTlsCertificateInput"} + if s.CertificateName == nil { + invalidParams.Add(request.NewErrParamRequired("CertificateName")) } - if s.SizeInGb == nil { - invalidParams.Add(request.NewErrParamRequired("SizeInGb")) + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) } if invalidParams.Len() > 0 { @@ -8404,25 +12725,25 @@ func (s *CreateDiskInput) Validate() error { return nil } -// SetAvailabilityZone sets the AvailabilityZone field's value. -func (s *CreateDiskInput) SetAvailabilityZone(v string) *CreateDiskInput { - s.AvailabilityZone = &v +// SetCertificateName sets the CertificateName field's value. +func (s *DeleteLoadBalancerTlsCertificateInput) SetCertificateName(v string) *DeleteLoadBalancerTlsCertificateInput { + s.CertificateName = &v return s } -// SetDiskName sets the DiskName field's value. 
-func (s *CreateDiskInput) SetDiskName(v string) *CreateDiskInput { - s.DiskName = &v +// SetForce sets the Force field's value. +func (s *DeleteLoadBalancerTlsCertificateInput) SetForce(v bool) *DeleteLoadBalancerTlsCertificateInput { + s.Force = &v return s } -// SetSizeInGb sets the SizeInGb field's value. -func (s *CreateDiskInput) SetSizeInGb(v int64) *CreateDiskInput { - s.SizeInGb = &v +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *DeleteLoadBalancerTlsCertificateInput) SetLoadBalancerName(v string) *DeleteLoadBalancerTlsCertificateInput { + s.LoadBalancerName = &v return s } -type CreateDiskOutput struct { +type DeleteLoadBalancerTlsCertificateOutput struct { _ struct{} `type:"structure"` // An object describing the API operations. @@ -8430,54 +12751,68 @@ type CreateDiskOutput struct { } // String returns the string representation -func (s CreateDiskOutput) String() string { +func (s DeleteLoadBalancerTlsCertificateOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDiskOutput) GoString() string { +func (s DeleteLoadBalancerTlsCertificateOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. -func (s *CreateDiskOutput) SetOperations(v []*Operation) *CreateDiskOutput { +func (s *DeleteLoadBalancerTlsCertificateOutput) SetOperations(v []*Operation) *DeleteLoadBalancerTlsCertificateOutput { s.Operations = v return s } -type CreateDiskSnapshotInput struct { +type DeleteRelationalDatabaseInput struct { _ struct{} `type:"structure"` - // The unique name of the source disk (e.g., my-source-disk). + // The name of the database snapshot created if skip final snapshot is false, + // which is the default value for that parameter. // - // DiskName is a required field - DiskName *string `locationName:"diskName" type:"string" required:"true"` + // Specifying this parameter and also specifying the skip final snapshot parameter + // to true results in an error. + // + // Constraints: + // + // * Must contain from 2 to 255 alphanumeric characters, or hyphens. + // + // * The first and last character must be a letter or number. + FinalRelationalDatabaseSnapshotName *string `locationName:"finalRelationalDatabaseSnapshotName" type:"string"` - // The name of the destination disk snapshot (e.g., my-disk-snapshot) based - // on the source disk. + // The name of the database that you are deleting. // - // DiskSnapshotName is a required field - DiskSnapshotName *string `locationName:"diskSnapshotName" type:"string" required:"true"` + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` + + // Determines whether a final database snapshot is created before your database + // is deleted. If true is specified, no database snapshot is created. If false + // is specified, a database snapshot is created before your database is deleted. + // + // You must specify the final relational database snapshot name parameter if + // the skip final snapshot parameter is false. 
+ // + // Default: false + SkipFinalSnapshot *bool `locationName:"skipFinalSnapshot" type:"boolean"` } // String returns the string representation -func (s CreateDiskSnapshotInput) String() string { +func (s DeleteRelationalDatabaseInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDiskSnapshotInput) GoString() string { +func (s DeleteRelationalDatabaseInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateDiskSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateDiskSnapshotInput"} - if s.DiskName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskName")) - } - if s.DiskSnapshotName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskSnapshotName")) +func (s *DeleteRelationalDatabaseInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRelationalDatabaseInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -8486,75 +12821,71 @@ func (s *CreateDiskSnapshotInput) Validate() error { return nil } -// SetDiskName sets the DiskName field's value. -func (s *CreateDiskSnapshotInput) SetDiskName(v string) *CreateDiskSnapshotInput { - s.DiskName = &v +// SetFinalRelationalDatabaseSnapshotName sets the FinalRelationalDatabaseSnapshotName field's value. +func (s *DeleteRelationalDatabaseInput) SetFinalRelationalDatabaseSnapshotName(v string) *DeleteRelationalDatabaseInput { + s.FinalRelationalDatabaseSnapshotName = &v return s } -// SetDiskSnapshotName sets the DiskSnapshotName field's value. -func (s *CreateDiskSnapshotInput) SetDiskSnapshotName(v string) *CreateDiskSnapshotInput { - s.DiskSnapshotName = &v +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *DeleteRelationalDatabaseInput) SetRelationalDatabaseName(v string) *DeleteRelationalDatabaseInput { + s.RelationalDatabaseName = &v return s } -type CreateDiskSnapshotOutput struct { +// SetSkipFinalSnapshot sets the SkipFinalSnapshot field's value. +func (s *DeleteRelationalDatabaseInput) SetSkipFinalSnapshot(v bool) *DeleteRelationalDatabaseInput { + s.SkipFinalSnapshot = &v + return s +} + +type DeleteRelationalDatabaseOutput struct { _ struct{} `type:"structure"` - // An object describing the API operations. + // An object describing the result of your delete relational database request. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s CreateDiskSnapshotOutput) String() string { +func (s DeleteRelationalDatabaseOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDiskSnapshotOutput) GoString() string { +func (s DeleteRelationalDatabaseOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. -func (s *CreateDiskSnapshotOutput) SetOperations(v []*Operation) *CreateDiskSnapshotOutput { +func (s *DeleteRelationalDatabaseOutput) SetOperations(v []*Operation) *DeleteRelationalDatabaseOutput { s.Operations = v return s } -type CreateDomainEntryInput struct { +type DeleteRelationalDatabaseSnapshotInput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the domain entry - // request. 
- // - // DomainEntry is a required field - DomainEntry *DomainEntry `locationName:"domainEntry" type:"structure" required:"true"` - - // The domain name (e.g., example.com) for which you want to create the domain - // entry. + // The name of the database snapshot that you are deleting. // - // DomainName is a required field - DomainName *string `locationName:"domainName" type:"string" required:"true"` + // RelationalDatabaseSnapshotName is a required field + RelationalDatabaseSnapshotName *string `locationName:"relationalDatabaseSnapshotName" type:"string" required:"true"` } // String returns the string representation -func (s CreateDomainEntryInput) String() string { +func (s DeleteRelationalDatabaseSnapshotInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDomainEntryInput) GoString() string { +func (s DeleteRelationalDatabaseSnapshotInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateDomainEntryInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateDomainEntryInput"} - if s.DomainEntry == nil { - invalidParams.Add(request.NewErrParamRequired("DomainEntry")) - } - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) +func (s *DeleteRelationalDatabaseSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRelationalDatabaseSnapshotInput"} + if s.RelationalDatabaseSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseSnapshotName")) } if invalidParams.Len() > 0 { @@ -8563,70 +12894,61 @@ func (s *CreateDomainEntryInput) Validate() error { return nil } -// SetDomainEntry sets the DomainEntry field's value. -func (s *CreateDomainEntryInput) SetDomainEntry(v *DomainEntry) *CreateDomainEntryInput { - s.DomainEntry = v - return s -} - -// SetDomainName sets the DomainName field's value. -func (s *CreateDomainEntryInput) SetDomainName(v string) *CreateDomainEntryInput { - s.DomainName = &v +// SetRelationalDatabaseSnapshotName sets the RelationalDatabaseSnapshotName field's value. +func (s *DeleteRelationalDatabaseSnapshotInput) SetRelationalDatabaseSnapshotName(v string) *DeleteRelationalDatabaseSnapshotInput { + s.RelationalDatabaseSnapshotName = &v return s } -type CreateDomainEntryOutput struct { +type DeleteRelationalDatabaseSnapshotOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the operation. - Operation *Operation `locationName:"operation" type:"structure"` + // An object describing the result of your delete relational database snapshot + // request. + Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s CreateDomainEntryOutput) String() string { +func (s DeleteRelationalDatabaseSnapshotOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDomainEntryOutput) GoString() string { +func (s DeleteRelationalDatabaseSnapshotOutput) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *CreateDomainEntryOutput) SetOperation(v *Operation) *CreateDomainEntryOutput { - s.Operation = v +// SetOperations sets the Operations field's value. 
+func (s *DeleteRelationalDatabaseSnapshotOutput) SetOperations(v []*Operation) *DeleteRelationalDatabaseSnapshotOutput { + s.Operations = v return s } -type CreateDomainInput struct { +type DetachDiskInput struct { _ struct{} `type:"structure"` - // The domain name to manage (e.g., example.com). - // - // You cannot register a new domain name using Lightsail. You must register - // a domain name using Amazon Route 53 or another domain name registrar. If - // you have already registered your domain, you can enter its name in this parameter - // to manage the DNS records for that domain. + // The unique name of the disk you want to detach from your instance (e.g., + // my-disk). // - // DomainName is a required field - DomainName *string `locationName:"domainName" type:"string" required:"true"` + // DiskName is a required field + DiskName *string `locationName:"diskName" type:"string" required:"true"` } // String returns the string representation -func (s CreateDomainInput) String() string { +func (s DetachDiskInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDomainInput) GoString() string { +func (s DetachDiskInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateDomainInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateDomainInput"} - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) +func (s *DetachDiskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachDiskInput"} + if s.DiskName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskName")) } if invalidParams.Len() > 0 { @@ -8635,68 +12957,68 @@ func (s *CreateDomainInput) Validate() error { return nil } -// SetDomainName sets the DomainName field's value. -func (s *CreateDomainInput) SetDomainName(v string) *CreateDomainInput { - s.DomainName = &v +// SetDiskName sets the DiskName field's value. +func (s *DetachDiskInput) SetDiskName(v string) *DetachDiskInput { + s.DiskName = &v return s } -type CreateDomainOutput struct { +type DetachDiskOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the domain resource - // you created. - Operation *Operation `locationName:"operation" type:"structure"` + // An object describing the API operations. + Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s CreateDomainOutput) String() string { +func (s DetachDiskOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateDomainOutput) GoString() string { +func (s DetachDiskOutput) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *CreateDomainOutput) SetOperation(v *Operation) *CreateDomainOutput { - s.Operation = v +// SetOperations sets the Operations field's value. +func (s *DetachDiskOutput) SetOperations(v []*Operation) *DetachDiskOutput { + s.Operations = v return s } -type CreateInstanceSnapshotInput struct { +type DetachInstancesFromLoadBalancerInput struct { _ struct{} `type:"structure"` - // The Lightsail instance on which to base your snapshot. + // An array of strings containing the names of the instances you want to detach + // from the load balancer. 
// - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // InstanceNames is a required field + InstanceNames []*string `locationName:"instanceNames" type:"list" required:"true"` - // The name for your new snapshot. + // The name of the Lightsail load balancer. // - // InstanceSnapshotName is a required field - InstanceSnapshotName *string `locationName:"instanceSnapshotName" type:"string" required:"true"` + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` } // String returns the string representation -func (s CreateInstanceSnapshotInput) String() string { +func (s DetachInstancesFromLoadBalancerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateInstanceSnapshotInput) GoString() string { +func (s DetachInstancesFromLoadBalancerInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateInstanceSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateInstanceSnapshotInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) +func (s *DetachInstancesFromLoadBalancerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachInstancesFromLoadBalancerInput"} + if s.InstanceNames == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceNames")) } - if s.InstanceSnapshotName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceSnapshotName")) + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) } if invalidParams.Len() > 0 { @@ -8705,112 +13027,65 @@ func (s *CreateInstanceSnapshotInput) Validate() error { return nil } -// SetInstanceName sets the InstanceName field's value. -func (s *CreateInstanceSnapshotInput) SetInstanceName(v string) *CreateInstanceSnapshotInput { - s.InstanceName = &v +// SetInstanceNames sets the InstanceNames field's value. +func (s *DetachInstancesFromLoadBalancerInput) SetInstanceNames(v []*string) *DetachInstancesFromLoadBalancerInput { + s.InstanceNames = v return s } -// SetInstanceSnapshotName sets the InstanceSnapshotName field's value. -func (s *CreateInstanceSnapshotInput) SetInstanceSnapshotName(v string) *CreateInstanceSnapshotInput { - s.InstanceSnapshotName = &v +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *DetachInstancesFromLoadBalancerInput) SetLoadBalancerName(v string) *DetachInstancesFromLoadBalancerInput { + s.LoadBalancerName = &v return s } -type CreateInstanceSnapshotOutput struct { +type DetachInstancesFromLoadBalancerOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the results of your - // create instances snapshot request. + // An object describing the API operations. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s CreateInstanceSnapshotOutput) String() string { +func (s DetachInstancesFromLoadBalancerOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateInstanceSnapshotOutput) GoString() string { +func (s DetachInstancesFromLoadBalancerOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. 
-func (s *CreateInstanceSnapshotOutput) SetOperations(v []*Operation) *CreateInstanceSnapshotOutput { +func (s *DetachInstancesFromLoadBalancerOutput) SetOperations(v []*Operation) *DetachInstancesFromLoadBalancerOutput { s.Operations = v return s } -type CreateInstancesFromSnapshotInput struct { +type DetachStaticIpInput struct { _ struct{} `type:"structure"` - // An object containing information about one or more disk mappings. - AttachedDiskMapping map[string][]*DiskMap `locationName:"attachedDiskMapping" type:"map"` - - // The Availability Zone where you want to create your instances. Use the following - // formatting: us-east-2a (case sensitive). You can get a list of availability - // zones by using the get regions (http://docs.aws.amazon.com/lightsail/2016-11-28/api-reference/API_GetRegions.html) - // operation. Be sure to add the include availability zones parameter to your - // request. - // - // AvailabilityZone is a required field - AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` - - // The bundle of specification information for your virtual private server (or - // instance), including the pricing plan (e.g., micro_1_0). - // - // BundleId is a required field - BundleId *string `locationName:"bundleId" type:"string" required:"true"` - - // The names for your new instances. - // - // InstanceNames is a required field - InstanceNames []*string `locationName:"instanceNames" type:"list" required:"true"` - - // The name of the instance snapshot on which you are basing your new instances. - // Use the get instance snapshots operation to return information about your - // existing snapshots. - // - // InstanceSnapshotName is a required field - InstanceSnapshotName *string `locationName:"instanceSnapshotName" type:"string" required:"true"` - - // The name for your key pair. - KeyPairName *string `locationName:"keyPairName" type:"string"` - - // You can create a launch script that configures a server with additional user - // data. For example, apt-get -y update. + // The name of the static IP to detach from the instance. // - // Depending on the machine image you choose, the command to get software on - // your instance varies. Amazon Linux and CentOS use yum, Debian and Ubuntu - // use apt-get, and FreeBSD uses pkg. For a complete list, see the Dev Guide - // (http://lightsail.aws.amazon.com/ls/docs/getting-started/articles/pre-installed-apps). - UserData *string `locationName:"userData" type:"string"` + // StaticIpName is a required field + StaticIpName *string `locationName:"staticIpName" type:"string" required:"true"` } // String returns the string representation -func (s CreateInstancesFromSnapshotInput) String() string { +func (s DetachStaticIpInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateInstancesFromSnapshotInput) GoString() string { +func (s DetachStaticIpInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateInstancesFromSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateInstancesFromSnapshotInput"} - if s.AvailabilityZone == nil { - invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) - } - if s.BundleId == nil { - invalidParams.Add(request.NewErrParamRequired("BundleId")) - } - if s.InstanceNames == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceNames")) - } - if s.InstanceSnapshotName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceSnapshotName")) +func (s *DetachStaticIpInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DetachStaticIpInput"} + if s.StaticIpName == nil { + invalidParams.Add(request.NewErrParamRequired("StaticIpName")) } if invalidParams.Len() > 0 { @@ -8819,886 +13094,772 @@ func (s *CreateInstancesFromSnapshotInput) Validate() error { return nil } -// SetAttachedDiskMapping sets the AttachedDiskMapping field's value. -func (s *CreateInstancesFromSnapshotInput) SetAttachedDiskMapping(v map[string][]*DiskMap) *CreateInstancesFromSnapshotInput { - s.AttachedDiskMapping = v - return s -} - -// SetAvailabilityZone sets the AvailabilityZone field's value. -func (s *CreateInstancesFromSnapshotInput) SetAvailabilityZone(v string) *CreateInstancesFromSnapshotInput { - s.AvailabilityZone = &v - return s -} - -// SetBundleId sets the BundleId field's value. -func (s *CreateInstancesFromSnapshotInput) SetBundleId(v string) *CreateInstancesFromSnapshotInput { - s.BundleId = &v - return s -} - -// SetInstanceNames sets the InstanceNames field's value. -func (s *CreateInstancesFromSnapshotInput) SetInstanceNames(v []*string) *CreateInstancesFromSnapshotInput { - s.InstanceNames = v - return s -} - -// SetInstanceSnapshotName sets the InstanceSnapshotName field's value. -func (s *CreateInstancesFromSnapshotInput) SetInstanceSnapshotName(v string) *CreateInstancesFromSnapshotInput { - s.InstanceSnapshotName = &v - return s -} - -// SetKeyPairName sets the KeyPairName field's value. -func (s *CreateInstancesFromSnapshotInput) SetKeyPairName(v string) *CreateInstancesFromSnapshotInput { - s.KeyPairName = &v - return s -} - -// SetUserData sets the UserData field's value. -func (s *CreateInstancesFromSnapshotInput) SetUserData(v string) *CreateInstancesFromSnapshotInput { - s.UserData = &v +// SetStaticIpName sets the StaticIpName field's value. +func (s *DetachStaticIpInput) SetStaticIpName(v string) *DetachStaticIpInput { + s.StaticIpName = &v return s } -type CreateInstancesFromSnapshotOutput struct { +type DetachStaticIpOutput struct { _ struct{} `type:"structure"` // An array of key-value pairs containing information about the results of your - // create instances from snapshot request. + // detach static IP request. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s CreateInstancesFromSnapshotOutput) String() string { +func (s DetachStaticIpOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateInstancesFromSnapshotOutput) GoString() string { +func (s DetachStaticIpOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. 
-func (s *CreateInstancesFromSnapshotOutput) SetOperations(v []*Operation) *CreateInstancesFromSnapshotOutput { +func (s *DetachStaticIpOutput) SetOperations(v []*Operation) *DetachStaticIpOutput { s.Operations = v return s } -type CreateInstancesInput struct { +// Describes a system disk or an block storage disk. +type Disk struct { _ struct{} `type:"structure"` - // The Availability Zone in which to create your instance. Use the following - // format: us-east-2a (case sensitive). You can get a list of availability zones - // by using the get regions (http://docs.aws.amazon.com/lightsail/2016-11-28/api-reference/API_GetRegions.html) - // operation. Be sure to add the include availability zones parameter to your - // request. - // - // AvailabilityZone is a required field - AvailabilityZone *string `locationName:"availabilityZone" type:"string" required:"true"` + // The Amazon Resource Name (ARN) of the disk. + Arn *string `locationName:"arn" type:"string"` - // The ID for a virtual private server image (e.g., app_wordpress_4_4 or app_lamp_7_0). - // Use the get blueprints operation to return a list of available images (or - // blueprints). - // - // BlueprintId is a required field - BlueprintId *string `locationName:"blueprintId" type:"string" required:"true"` + // The resources to which the disk is attached. + AttachedTo *string `locationName:"attachedTo" type:"string"` - // The bundle of specification information for your virtual private server (or - // instance), including the pricing plan (e.g., micro_1_0). + // (Deprecated) The attachment state of the disk. // - // BundleId is a required field - BundleId *string `locationName:"bundleId" type:"string" required:"true"` + // In releases prior to November 14, 2017, this parameter returned attached + // for system disks in the API response. It is now deprecated, but still included + // in the response. Use isAttached instead. + // + // Deprecated: AttachmentState has been deprecated + AttachmentState *string `locationName:"attachmentState" deprecated:"true" type:"string"` - // (Deprecated) The name for your custom image. + // The date when the disk was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // (Deprecated) The number of GB in use by the disk. // - // In releases prior to June 12, 2017, this parameter was ignored by the API. - // It is now deprecated. - CustomImageName *string `locationName:"customImageName" deprecated:"true" type:"string"` + // In releases prior to November 14, 2017, this parameter was not included in + // the API response. It is now deprecated. + // + // Deprecated: GbInUse has been deprecated + GbInUse *int64 `locationName:"gbInUse" deprecated:"true" type:"integer"` + + // The input/output operations per second (IOPS) of the disk. + Iops *int64 `locationName:"iops" type:"integer"` + + // A Boolean value indicating whether the disk is attached. + IsAttached *bool `locationName:"isAttached" type:"boolean"` + + // A Boolean value indicating whether this disk is a system disk (has an operating + // system loaded on it). + IsSystemDisk *bool `locationName:"isSystemDisk" type:"boolean"` + + // The AWS Region and Availability Zone where the disk is located. + Location *ResourceLocation `locationName:"location" type:"structure"` + + // The unique name of the disk. + Name *string `locationName:"name" type:"string"` - // The names to use for your new Lightsail instances. 
Separate multiple values - // using quotation marks and commas, for example: ["MyFirstInstance","MySecondInstance"] - // - // InstanceNames is a required field - InstanceNames []*string `locationName:"instanceNames" type:"list" required:"true"` + // The disk path. + Path *string `locationName:"path" type:"string"` - // The name of your key pair. - KeyPairName *string `locationName:"keyPairName" type:"string"` + // The Lightsail resource type (e.g., Disk). + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` - // A launch script you can create that configures a server with additional user - // data. For example, you might want to run apt-get -y update. - // - // Depending on the machine image you choose, the command to get software on - // your instance varies. Amazon Linux and CentOS use yum, Debian and Ubuntu - // use apt-get, and FreeBSD uses pkg. For a complete list, see the Dev Guide - // (https://lightsail.aws.amazon.com/ls/docs/getting-started/article/compare-options-choose-lightsail-instance-image). - UserData *string `locationName:"userData" type:"string"` + // The size of the disk in GB. + SizeInGb *int64 `locationName:"sizeInGb" type:"integer"` + + // Describes the status of the disk. + State *string `locationName:"state" type:"string" enum:"DiskState"` + + // The support code. Include this code in your email to support when you have + // questions about an instance or another resource in Lightsail. This code enables + // our support team to look up your Lightsail information more easily. + SupportCode *string `locationName:"supportCode" type:"string"` } // String returns the string representation -func (s CreateInstancesInput) String() string { +func (s Disk) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateInstancesInput) GoString() string { +func (s Disk) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateInstancesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateInstancesInput"} - if s.AvailabilityZone == nil { - invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) - } - if s.BlueprintId == nil { - invalidParams.Add(request.NewErrParamRequired("BlueprintId")) - } - if s.BundleId == nil { - invalidParams.Add(request.NewErrParamRequired("BundleId")) - } - if s.InstanceNames == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceNames")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetAvailabilityZone sets the AvailabilityZone field's value. -func (s *CreateInstancesInput) SetAvailabilityZone(v string) *CreateInstancesInput { - s.AvailabilityZone = &v +// SetArn sets the Arn field's value. +func (s *Disk) SetArn(v string) *Disk { + s.Arn = &v return s } -// SetBlueprintId sets the BlueprintId field's value. -func (s *CreateInstancesInput) SetBlueprintId(v string) *CreateInstancesInput { - s.BlueprintId = &v +// SetAttachedTo sets the AttachedTo field's value. +func (s *Disk) SetAttachedTo(v string) *Disk { + s.AttachedTo = &v return s } -// SetBundleId sets the BundleId field's value. -func (s *CreateInstancesInput) SetBundleId(v string) *CreateInstancesInput { - s.BundleId = &v +// SetAttachmentState sets the AttachmentState field's value. +func (s *Disk) SetAttachmentState(v string) *Disk { + s.AttachmentState = &v return s } -// SetCustomImageName sets the CustomImageName field's value. 
-func (s *CreateInstancesInput) SetCustomImageName(v string) *CreateInstancesInput { - s.CustomImageName = &v +// SetCreatedAt sets the CreatedAt field's value. +func (s *Disk) SetCreatedAt(v time.Time) *Disk { + s.CreatedAt = &v return s } -// SetInstanceNames sets the InstanceNames field's value. -func (s *CreateInstancesInput) SetInstanceNames(v []*string) *CreateInstancesInput { - s.InstanceNames = v +// SetGbInUse sets the GbInUse field's value. +func (s *Disk) SetGbInUse(v int64) *Disk { + s.GbInUse = &v return s } -// SetKeyPairName sets the KeyPairName field's value. -func (s *CreateInstancesInput) SetKeyPairName(v string) *CreateInstancesInput { - s.KeyPairName = &v +// SetIops sets the Iops field's value. +func (s *Disk) SetIops(v int64) *Disk { + s.Iops = &v return s } -// SetUserData sets the UserData field's value. -func (s *CreateInstancesInput) SetUserData(v string) *CreateInstancesInput { - s.UserData = &v +// SetIsAttached sets the IsAttached field's value. +func (s *Disk) SetIsAttached(v bool) *Disk { + s.IsAttached = &v return s } -type CreateInstancesOutput struct { - _ struct{} `type:"structure"` - - // An array of key-value pairs containing information about the results of your - // create instances request. - Operations []*Operation `locationName:"operations" type:"list"` -} - -// String returns the string representation -func (s CreateInstancesOutput) String() string { - return awsutil.Prettify(s) +// SetIsSystemDisk sets the IsSystemDisk field's value. +func (s *Disk) SetIsSystemDisk(v bool) *Disk { + s.IsSystemDisk = &v + return s } -// GoString returns the string representation -func (s CreateInstancesOutput) GoString() string { - return s.String() +// SetLocation sets the Location field's value. +func (s *Disk) SetLocation(v *ResourceLocation) *Disk { + s.Location = v + return s } -// SetOperations sets the Operations field's value. -func (s *CreateInstancesOutput) SetOperations(v []*Operation) *CreateInstancesOutput { - s.Operations = v +// SetName sets the Name field's value. +func (s *Disk) SetName(v string) *Disk { + s.Name = &v return s } -type CreateKeyPairInput struct { - _ struct{} `type:"structure"` - - // The name for your new key pair. - // - // KeyPairName is a required field - KeyPairName *string `locationName:"keyPairName" type:"string" required:"true"` +// SetPath sets the Path field's value. +func (s *Disk) SetPath(v string) *Disk { + s.Path = &v + return s } -// String returns the string representation -func (s CreateKeyPairInput) String() string { - return awsutil.Prettify(s) +// SetResourceType sets the ResourceType field's value. +func (s *Disk) SetResourceType(v string) *Disk { + s.ResourceType = &v + return s } -// GoString returns the string representation -func (s CreateKeyPairInput) GoString() string { - return s.String() +// SetSizeInGb sets the SizeInGb field's value. +func (s *Disk) SetSizeInGb(v int64) *Disk { + s.SizeInGb = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateKeyPairInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateKeyPairInput"} - if s.KeyPairName == nil { - invalidParams.Add(request.NewErrParamRequired("KeyPairName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetState sets the State field's value. +func (s *Disk) SetState(v string) *Disk { + s.State = &v + return s } -// SetKeyPairName sets the KeyPairName field's value. 
-func (s *CreateKeyPairInput) SetKeyPairName(v string) *CreateKeyPairInput { - s.KeyPairName = &v +// SetSupportCode sets the SupportCode field's value. +func (s *Disk) SetSupportCode(v string) *Disk { + s.SupportCode = &v return s } -type CreateKeyPairOutput struct { +// Describes a block storage disk mapping. +type DiskMap struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the new key pair - // you just created. - KeyPair *KeyPair `locationName:"keyPair" type:"structure"` - - // An array of key-value pairs containing information about the results of your - // create key pair request. - Operation *Operation `locationName:"operation" type:"structure"` - - // A base64-encoded RSA private key. - PrivateKeyBase64 *string `locationName:"privateKeyBase64" type:"string"` + // The new disk name (e.g., my-new-disk). + NewDiskName *string `locationName:"newDiskName" type:"string"` - // A base64-encoded public key of the ssh-rsa type. - PublicKeyBase64 *string `locationName:"publicKeyBase64" type:"string"` + // The original disk path exposed to the instance (for example, /dev/sdh). + OriginalDiskPath *string `locationName:"originalDiskPath" type:"string"` } // String returns the string representation -func (s CreateKeyPairOutput) String() string { +func (s DiskMap) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateKeyPairOutput) GoString() string { +func (s DiskMap) GoString() string { return s.String() } -// SetKeyPair sets the KeyPair field's value. -func (s *CreateKeyPairOutput) SetKeyPair(v *KeyPair) *CreateKeyPairOutput { - s.KeyPair = v +// SetNewDiskName sets the NewDiskName field's value. +func (s *DiskMap) SetNewDiskName(v string) *DiskMap { + s.NewDiskName = &v return s } -// SetOperation sets the Operation field's value. -func (s *CreateKeyPairOutput) SetOperation(v *Operation) *CreateKeyPairOutput { - s.Operation = v +// SetOriginalDiskPath sets the OriginalDiskPath field's value. +func (s *DiskMap) SetOriginalDiskPath(v string) *DiskMap { + s.OriginalDiskPath = &v return s } -// SetPrivateKeyBase64 sets the PrivateKeyBase64 field's value. -func (s *CreateKeyPairOutput) SetPrivateKeyBase64(v string) *CreateKeyPairOutput { - s.PrivateKeyBase64 = &v - return s -} +// Describes a block storage disk snapshot. +type DiskSnapshot struct { + _ struct{} `type:"structure"` -// SetPublicKeyBase64 sets the PublicKeyBase64 field's value. -func (s *CreateKeyPairOutput) SetPublicKeyBase64(v string) *CreateKeyPairOutput { - s.PublicKeyBase64 = &v - return s -} + // The Amazon Resource Name (ARN) of the disk snapshot. + Arn *string `locationName:"arn" type:"string"` -type CreateLoadBalancerInput struct { - _ struct{} `type:"structure"` + // The date when the disk snapshot was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The optional alternative domains and subdomains to use with your SSL/TLS - // certificate (e.g., www.example.com, example.com, m.example.com, blog.example.com). - CertificateAlternativeNames []*string `locationName:"certificateAlternativeNames" type:"list"` + // The Amazon Resource Name (ARN) of the source disk from which you are creating + // the disk snapshot. + FromDiskArn *string `locationName:"fromDiskArn" type:"string"` - // The domain name with which your certificate is associated (e.g., example.com). - // - // If you specify certificateDomainName, then certificateName is required (and - // vice-versa). 
- CertificateDomainName *string `locationName:"certificateDomainName" type:"string"` + // The unique name of the source disk from which you are creating the disk snapshot. + FromDiskName *string `locationName:"fromDiskName" type:"string"` - // The name of the SSL/TLS certificate. - // - // If you specify certificateName, then certificateDomainName is required (and - // vice-versa). - CertificateName *string `locationName:"certificateName" type:"string"` + // The AWS Region and Availability Zone where the disk snapshot was created. + Location *ResourceLocation `locationName:"location" type:"structure"` - // The path you provided to perform the load balancer health check. If you didn't - // specify a health check path, Lightsail uses the root path of your website - // (e.g., "/"). - // - // You may want to specify a custom health check path other than the root of - // your application if your home page loads slowly or has a lot of media or - // scripting on it. - HealthCheckPath *string `locationName:"healthCheckPath" type:"string"` + // The name of the disk snapshot (e.g., my-disk-snapshot). + Name *string `locationName:"name" type:"string"` + + // The progress of the disk snapshot operation. + Progress *string `locationName:"progress" type:"string"` - // The instance port where you're creating your load balancer. - // - // InstancePort is a required field - InstancePort *int64 `locationName:"instancePort" type:"integer" required:"true"` + // The Lightsail resource type (e.g., DiskSnapshot). + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` - // The name of your load balancer. - // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // The size of the disk in GB. + SizeInGb *int64 `locationName:"sizeInGb" type:"integer"` + + // The status of the disk snapshot operation. + State *string `locationName:"state" type:"string" enum:"DiskSnapshotState"` + + // The support code. Include this code in your email to support when you have + // questions about an instance or another resource in Lightsail. This code enables + // our support team to look up your Lightsail information more easily. + SupportCode *string `locationName:"supportCode" type:"string"` } // String returns the string representation -func (s CreateLoadBalancerInput) String() string { +func (s DiskSnapshot) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateLoadBalancerInput) GoString() string { +func (s DiskSnapshot) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateLoadBalancerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateLoadBalancerInput"} - if s.InstancePort == nil { - invalidParams.Add(request.NewErrParamRequired("InstancePort")) - } - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetArn sets the Arn field's value. +func (s *DiskSnapshot) SetArn(v string) *DiskSnapshot { + s.Arn = &v + return s } -// SetCertificateAlternativeNames sets the CertificateAlternativeNames field's value. -func (s *CreateLoadBalancerInput) SetCertificateAlternativeNames(v []*string) *CreateLoadBalancerInput { - s.CertificateAlternativeNames = v +// SetCreatedAt sets the CreatedAt field's value. 
+func (s *DiskSnapshot) SetCreatedAt(v time.Time) *DiskSnapshot { + s.CreatedAt = &v return s } -// SetCertificateDomainName sets the CertificateDomainName field's value. -func (s *CreateLoadBalancerInput) SetCertificateDomainName(v string) *CreateLoadBalancerInput { - s.CertificateDomainName = &v +// SetFromDiskArn sets the FromDiskArn field's value. +func (s *DiskSnapshot) SetFromDiskArn(v string) *DiskSnapshot { + s.FromDiskArn = &v return s } -// SetCertificateName sets the CertificateName field's value. -func (s *CreateLoadBalancerInput) SetCertificateName(v string) *CreateLoadBalancerInput { - s.CertificateName = &v +// SetFromDiskName sets the FromDiskName field's value. +func (s *DiskSnapshot) SetFromDiskName(v string) *DiskSnapshot { + s.FromDiskName = &v return s } -// SetHealthCheckPath sets the HealthCheckPath field's value. -func (s *CreateLoadBalancerInput) SetHealthCheckPath(v string) *CreateLoadBalancerInput { - s.HealthCheckPath = &v +// SetLocation sets the Location field's value. +func (s *DiskSnapshot) SetLocation(v *ResourceLocation) *DiskSnapshot { + s.Location = v return s } -// SetInstancePort sets the InstancePort field's value. -func (s *CreateLoadBalancerInput) SetInstancePort(v int64) *CreateLoadBalancerInput { - s.InstancePort = &v +// SetName sets the Name field's value. +func (s *DiskSnapshot) SetName(v string) *DiskSnapshot { + s.Name = &v return s } -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *CreateLoadBalancerInput) SetLoadBalancerName(v string) *CreateLoadBalancerInput { - s.LoadBalancerName = &v +// SetProgress sets the Progress field's value. +func (s *DiskSnapshot) SetProgress(v string) *DiskSnapshot { + s.Progress = &v return s } -type CreateLoadBalancerOutput struct { - _ struct{} `type:"structure"` - - // An object containing information about the API operations. - Operations []*Operation `locationName:"operations" type:"list"` +// SetResourceType sets the ResourceType field's value. +func (s *DiskSnapshot) SetResourceType(v string) *DiskSnapshot { + s.ResourceType = &v + return s } -// String returns the string representation -func (s CreateLoadBalancerOutput) String() string { - return awsutil.Prettify(s) +// SetSizeInGb sets the SizeInGb field's value. +func (s *DiskSnapshot) SetSizeInGb(v int64) *DiskSnapshot { + s.SizeInGb = &v + return s } -// GoString returns the string representation -func (s CreateLoadBalancerOutput) GoString() string { - return s.String() +// SetState sets the State field's value. +func (s *DiskSnapshot) SetState(v string) *DiskSnapshot { + s.State = &v + return s } -// SetOperations sets the Operations field's value. -func (s *CreateLoadBalancerOutput) SetOperations(v []*Operation) *CreateLoadBalancerOutput { - s.Operations = v +// SetSupportCode sets the SupportCode field's value. +func (s *DiskSnapshot) SetSupportCode(v string) *DiskSnapshot { + s.SupportCode = &v return s } -type CreateLoadBalancerTlsCertificateInput struct { +// Describes a domain where you are storing recordsets in Lightsail. +type Domain struct { _ struct{} `type:"structure"` - // An array of strings listing alternative domains and subdomains for your SSL/TLS - // certificate. Lightsail will de-dupe the names for you. You can have a maximum - // of 9 alternative names (in addition to the 1 primary domain). We do not support - // wildcards (e.g., *.example.com). 
- CertificateAlternativeNames []*string `locationName:"certificateAlternativeNames" type:"list"` + // The Amazon Resource Name (ARN) of the domain recordset (e.g., arn:aws:lightsail:global:123456789101:Domain/824cede0-abc7-4f84-8dbc-12345EXAMPLE). + Arn *string `locationName:"arn" type:"string"` - // The domain name (e.g., example.com) for your SSL/TLS certificate. - // - // CertificateDomainName is a required field - CertificateDomainName *string `locationName:"certificateDomainName" type:"string" required:"true"` + // The date when the domain recordset was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The SSL/TLS certificate name. - // - // You can have up to 10 certificates in your account at one time. Each Lightsail - // load balancer can have up to 2 certificates associated with it at one time. - // There is also an overall limit to the number of certificates that can be - // issue in a 365-day period. For more information, see Limits (http://docs.aws.amazon.com/acm/latest/userguide/acm-limits.html). - // - // CertificateName is a required field - CertificateName *string `locationName:"certificateName" type:"string" required:"true"` + // An array of key-value pairs containing information about the domain entries. + DomainEntries []*DomainEntry `locationName:"domainEntries" type:"list"` - // The load balancer name where you want to create the SSL/TLS certificate. - // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // The AWS Region and Availability Zones where the domain recordset was created. + Location *ResourceLocation `locationName:"location" type:"structure"` + + // The name of the domain. + Name *string `locationName:"name" type:"string"` + + // The resource type. + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // The support code. Include this code in your email to support when you have + // questions about an instance or another resource in Lightsail. This code enables + // our support team to look up your Lightsail information more easily. + SupportCode *string `locationName:"supportCode" type:"string"` } // String returns the string representation -func (s CreateLoadBalancerTlsCertificateInput) String() string { +func (s Domain) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateLoadBalancerTlsCertificateInput) GoString() string { +func (s Domain) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateLoadBalancerTlsCertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateLoadBalancerTlsCertificateInput"} - if s.CertificateDomainName == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateDomainName")) - } - if s.CertificateName == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateName")) - } - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetCertificateAlternativeNames sets the CertificateAlternativeNames field's value. -func (s *CreateLoadBalancerTlsCertificateInput) SetCertificateAlternativeNames(v []*string) *CreateLoadBalancerTlsCertificateInput { - s.CertificateAlternativeNames = v +// SetArn sets the Arn field's value. 
+func (s *Domain) SetArn(v string) *Domain { + s.Arn = &v return s } -// SetCertificateDomainName sets the CertificateDomainName field's value. -func (s *CreateLoadBalancerTlsCertificateInput) SetCertificateDomainName(v string) *CreateLoadBalancerTlsCertificateInput { - s.CertificateDomainName = &v +// SetCreatedAt sets the CreatedAt field's value. +func (s *Domain) SetCreatedAt(v time.Time) *Domain { + s.CreatedAt = &v return s } -// SetCertificateName sets the CertificateName field's value. -func (s *CreateLoadBalancerTlsCertificateInput) SetCertificateName(v string) *CreateLoadBalancerTlsCertificateInput { - s.CertificateName = &v +// SetDomainEntries sets the DomainEntries field's value. +func (s *Domain) SetDomainEntries(v []*DomainEntry) *Domain { + s.DomainEntries = v return s } -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *CreateLoadBalancerTlsCertificateInput) SetLoadBalancerName(v string) *CreateLoadBalancerTlsCertificateInput { - s.LoadBalancerName = &v +// SetLocation sets the Location field's value. +func (s *Domain) SetLocation(v *ResourceLocation) *Domain { + s.Location = v return s } -type CreateLoadBalancerTlsCertificateOutput struct { - _ struct{} `type:"structure"` - - // An object containing information about the API operations. - Operations []*Operation `locationName:"operations" type:"list"` -} - -// String returns the string representation -func (s CreateLoadBalancerTlsCertificateOutput) String() string { - return awsutil.Prettify(s) +// SetName sets the Name field's value. +func (s *Domain) SetName(v string) *Domain { + s.Name = &v + return s } -// GoString returns the string representation -func (s CreateLoadBalancerTlsCertificateOutput) GoString() string { - return s.String() +// SetResourceType sets the ResourceType field's value. +func (s *Domain) SetResourceType(v string) *Domain { + s.ResourceType = &v + return s } -// SetOperations sets the Operations field's value. -func (s *CreateLoadBalancerTlsCertificateOutput) SetOperations(v []*Operation) *CreateLoadBalancerTlsCertificateOutput { - s.Operations = v +// SetSupportCode sets the SupportCode field's value. +func (s *Domain) SetSupportCode(v string) *Domain { + s.SupportCode = &v return s } -type DeleteDiskInput struct { +// Describes a domain recordset entry. +type DomainEntry struct { _ struct{} `type:"structure"` - // The unique name of the disk you want to delete (e.g., my-disk). + // The ID of the domain recordset entry. + Id *string `locationName:"id" type:"string"` + + // When true, specifies whether the domain entry is an alias used by the Lightsail + // load balancer. You can include an alias (A type) record in your request, + // which points to a load balancer DNS name and routes traffic to your load + // balancer + IsAlias *bool `locationName:"isAlias" type:"boolean"` + + // The name of the domain. + Name *string `locationName:"name" type:"string"` + + // (Deprecated) The options for the domain entry. // - // DiskName is a required field - DiskName *string `locationName:"diskName" type:"string" required:"true"` + // In releases prior to November 29, 2017, this parameter was not included in + // the API response. It is now deprecated. + // + // Deprecated: Options has been deprecated + Options map[string]*string `locationName:"options" deprecated:"true" type:"map"` + + // The target AWS name server (e.g., ns-111.awsdns-22.com.). + // + // For Lightsail load balancers, the value looks like ab1234c56789c6b86aba6fb203d443bc-123456789.us-east-2.elb.amazonaws.com. 
+ // Be sure to also set isAlias to true when setting up an A record for a load + // balancer. + Target *string `locationName:"target" type:"string"` + + // The type of domain entry (e.g., SOA or NS). + Type *string `locationName:"type" type:"string"` } // String returns the string representation -func (s DeleteDiskInput) String() string { +func (s DomainEntry) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDiskInput) GoString() string { +func (s DomainEntry) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDiskInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDiskInput"} - if s.DiskName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetId sets the Id field's value. +func (s *DomainEntry) SetId(v string) *DomainEntry { + s.Id = &v + return s } -// SetDiskName sets the DiskName field's value. -func (s *DeleteDiskInput) SetDiskName(v string) *DeleteDiskInput { - s.DiskName = &v +// SetIsAlias sets the IsAlias field's value. +func (s *DomainEntry) SetIsAlias(v bool) *DomainEntry { + s.IsAlias = &v return s } -type DeleteDiskOutput struct { - _ struct{} `type:"structure"` - - // An object describing the API operations. - Operations []*Operation `locationName:"operations" type:"list"` +// SetName sets the Name field's value. +func (s *DomainEntry) SetName(v string) *DomainEntry { + s.Name = &v + return s } -// String returns the string representation -func (s DeleteDiskOutput) String() string { - return awsutil.Prettify(s) +// SetOptions sets the Options field's value. +func (s *DomainEntry) SetOptions(v map[string]*string) *DomainEntry { + s.Options = v + return s } -// GoString returns the string representation -func (s DeleteDiskOutput) GoString() string { - return s.String() +// SetTarget sets the Target field's value. +func (s *DomainEntry) SetTarget(v string) *DomainEntry { + s.Target = &v + return s } -// SetOperations sets the Operations field's value. -func (s *DeleteDiskOutput) SetOperations(v []*Operation) *DeleteDiskOutput { - s.Operations = v +// SetType sets the Type field's value. +func (s *DomainEntry) SetType(v string) *DomainEntry { + s.Type = &v return s } -type DeleteDiskSnapshotInput struct { +type DownloadDefaultKeyPairInput struct { _ struct{} `type:"structure"` - - // The name of the disk snapshot you want to delete (e.g., my-disk-snapshot). - // - // DiskSnapshotName is a required field - DiskSnapshotName *string `locationName:"diskSnapshotName" type:"string" required:"true"` } // String returns the string representation -func (s DeleteDiskSnapshotInput) String() string { +func (s DownloadDefaultKeyPairInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDiskSnapshotInput) GoString() string { +func (s DownloadDefaultKeyPairInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDiskSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDiskSnapshotInput"} - if s.DiskSnapshotName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskSnapshotName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetDiskSnapshotName sets the DiskSnapshotName field's value. 
-func (s *DeleteDiskSnapshotInput) SetDiskSnapshotName(v string) *DeleteDiskSnapshotInput { - s.DiskSnapshotName = &v - return s -} - -type DeleteDiskSnapshotOutput struct { +type DownloadDefaultKeyPairOutput struct { _ struct{} `type:"structure"` - // An object describing the API operations. - Operations []*Operation `locationName:"operations" type:"list"` + // A base64-encoded RSA private key. + PrivateKeyBase64 *string `locationName:"privateKeyBase64" type:"string"` + + // A base64-encoded public key of the ssh-rsa type. + PublicKeyBase64 *string `locationName:"publicKeyBase64" type:"string"` } // String returns the string representation -func (s DeleteDiskSnapshotOutput) String() string { +func (s DownloadDefaultKeyPairOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDiskSnapshotOutput) GoString() string { +func (s DownloadDefaultKeyPairOutput) GoString() string { return s.String() } -// SetOperations sets the Operations field's value. -func (s *DeleteDiskSnapshotOutput) SetOperations(v []*Operation) *DeleteDiskSnapshotOutput { - s.Operations = v +// SetPrivateKeyBase64 sets the PrivateKeyBase64 field's value. +func (s *DownloadDefaultKeyPairOutput) SetPrivateKeyBase64(v string) *DownloadDefaultKeyPairOutput { + s.PrivateKeyBase64 = &v return s } -type DeleteDomainEntryInput struct { - _ struct{} `type:"structure"` +// SetPublicKeyBase64 sets the PublicKeyBase64 field's value. +func (s *DownloadDefaultKeyPairOutput) SetPublicKeyBase64(v string) *DownloadDefaultKeyPairOutput { + s.PublicKeyBase64 = &v + return s +} - // An array of key-value pairs containing information about your domain entries. - // - // DomainEntry is a required field - DomainEntry *DomainEntry `locationName:"domainEntry" type:"structure" required:"true"` +type GetActiveNamesInput struct { + _ struct{} `type:"structure"` - // The name of the domain entry to delete. - // - // DomainName is a required field - DomainName *string `locationName:"domainName" type:"string" required:"true"` + // A token used for paginating results from your get active names request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s DeleteDomainEntryInput) String() string { +func (s GetActiveNamesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDomainEntryInput) GoString() string { +func (s GetActiveNamesInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDomainEntryInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDomainEntryInput"} - if s.DomainEntry == nil { - invalidParams.Add(request.NewErrParamRequired("DomainEntry")) - } - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetDomainEntry sets the DomainEntry field's value. -func (s *DeleteDomainEntryInput) SetDomainEntry(v *DomainEntry) *DeleteDomainEntryInput { - s.DomainEntry = v - return s -} - -// SetDomainName sets the DomainName field's value. -func (s *DeleteDomainEntryInput) SetDomainName(v string) *DeleteDomainEntryInput { - s.DomainName = &v +// SetPageToken sets the PageToken field's value. 
+func (s *GetActiveNamesInput) SetPageToken(v string) *GetActiveNamesInput { + s.PageToken = &v return s } -type DeleteDomainEntryOutput struct { +type GetActiveNamesOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the results of your - // delete domain entry request. - Operation *Operation `locationName:"operation" type:"structure"` + // The list of active names returned by the get active names request. + ActiveNames []*string `locationName:"activeNames" type:"list"` + + // A token used for advancing to the next page of results from your get active + // names request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s DeleteDomainEntryOutput) String() string { +func (s GetActiveNamesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDomainEntryOutput) GoString() string { +func (s GetActiveNamesOutput) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *DeleteDomainEntryOutput) SetOperation(v *Operation) *DeleteDomainEntryOutput { - s.Operation = v +// SetActiveNames sets the ActiveNames field's value. +func (s *GetActiveNamesOutput) SetActiveNames(v []*string) *GetActiveNamesOutput { + s.ActiveNames = v return s } -type DeleteDomainInput struct { +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetActiveNamesOutput) SetNextPageToken(v string) *GetActiveNamesOutput { + s.NextPageToken = &v + return s +} + +type GetBlueprintsInput struct { _ struct{} `type:"structure"` - // The specific domain name to delete. - // - // DomainName is a required field - DomainName *string `locationName:"domainName" type:"string" required:"true"` + // A Boolean value indicating whether to include inactive results in your request. + IncludeInactive *bool `locationName:"includeInactive" type:"boolean"` + + // A token used for advancing to the next page of results from your get blueprints + // request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s DeleteDomainInput) String() string { +func (s GetBlueprintsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDomainInput) GoString() string { +func (s GetBlueprintsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDomainInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDomainInput"} - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetIncludeInactive sets the IncludeInactive field's value. +func (s *GetBlueprintsInput) SetIncludeInactive(v bool) *GetBlueprintsInput { + s.IncludeInactive = &v + return s } -// SetDomainName sets the DomainName field's value. -func (s *DeleteDomainInput) SetDomainName(v string) *DeleteDomainInput { - s.DomainName = &v +// SetPageToken sets the PageToken field's value. +func (s *GetBlueprintsInput) SetPageToken(v string) *GetBlueprintsInput { + s.PageToken = &v return s } -type DeleteDomainOutput struct { +type GetBlueprintsOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the results of your - // delete domain request. 
- Operation *Operation `locationName:"operation" type:"structure"` + // An array of key-value pairs that contains information about the available + // blueprints. + Blueprints []*Blueprint `locationName:"blueprints" type:"list"` + + // A token used for advancing to the next page of results from your get blueprints + // request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s DeleteDomainOutput) String() string { +func (s GetBlueprintsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDomainOutput) GoString() string { +func (s GetBlueprintsOutput) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *DeleteDomainOutput) SetOperation(v *Operation) *DeleteDomainOutput { - s.Operation = v +// SetBlueprints sets the Blueprints field's value. +func (s *GetBlueprintsOutput) SetBlueprints(v []*Blueprint) *GetBlueprintsOutput { + s.Blueprints = v return s } -type DeleteInstanceInput struct { +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetBlueprintsOutput) SetNextPageToken(v string) *GetBlueprintsOutput { + s.NextPageToken = &v + return s +} + +type GetBundlesInput struct { _ struct{} `type:"structure"` - // The name of the instance to delete. - // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // A Boolean value that indicates whether to include inactive bundle results + // in your request. + IncludeInactive *bool `locationName:"includeInactive" type:"boolean"` + + // A token used for advancing to the next page of results from your get bundles + // request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s DeleteInstanceInput) String() string { +func (s GetBundlesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteInstanceInput) GoString() string { +func (s GetBundlesInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteInstanceInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetIncludeInactive sets the IncludeInactive field's value. +func (s *GetBundlesInput) SetIncludeInactive(v bool) *GetBundlesInput { + s.IncludeInactive = &v + return s } -// SetInstanceName sets the InstanceName field's value. -func (s *DeleteInstanceInput) SetInstanceName(v string) *DeleteInstanceInput { - s.InstanceName = &v +// SetPageToken sets the PageToken field's value. +func (s *GetBundlesInput) SetPageToken(v string) *GetBundlesInput { + s.PageToken = &v return s } -type DeleteInstanceOutput struct { +type GetBundlesOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the results of your - // delete instance request. - Operations []*Operation `locationName:"operations" type:"list"` + // An array of key-value pairs that contains information about the available + // bundles. + Bundles []*Bundle `locationName:"bundles" type:"list"` + + // A token used for advancing to the next page of results from your get active + // names request. 
+ NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s DeleteInstanceOutput) String() string { +func (s GetBundlesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteInstanceOutput) GoString() string { +func (s GetBundlesOutput) GoString() string { return s.String() } -// SetOperations sets the Operations field's value. -func (s *DeleteInstanceOutput) SetOperations(v []*Operation) *DeleteInstanceOutput { - s.Operations = v +// SetBundles sets the Bundles field's value. +func (s *GetBundlesOutput) SetBundles(v []*Bundle) *GetBundlesOutput { + s.Bundles = v return s } -type DeleteInstanceSnapshotInput struct { +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetBundlesOutput) SetNextPageToken(v string) *GetBundlesOutput { + s.NextPageToken = &v + return s +} + +type GetDiskInput struct { _ struct{} `type:"structure"` - // The name of the snapshot to delete. + // The name of the disk (e.g., my-disk). // - // InstanceSnapshotName is a required field - InstanceSnapshotName *string `locationName:"instanceSnapshotName" type:"string" required:"true"` + // DiskName is a required field + DiskName *string `locationName:"diskName" type:"string" required:"true"` } // String returns the string representation -func (s DeleteInstanceSnapshotInput) String() string { +func (s GetDiskInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteInstanceSnapshotInput) GoString() string { +func (s GetDiskInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteInstanceSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteInstanceSnapshotInput"} - if s.InstanceSnapshotName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceSnapshotName")) +func (s *GetDiskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDiskInput"} + if s.DiskName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskName")) } if invalidParams.Len() > 0 { @@ -9707,60 +13868,59 @@ func (s *DeleteInstanceSnapshotInput) Validate() error { return nil } -// SetInstanceSnapshotName sets the InstanceSnapshotName field's value. -func (s *DeleteInstanceSnapshotInput) SetInstanceSnapshotName(v string) *DeleteInstanceSnapshotInput { - s.InstanceSnapshotName = &v +// SetDiskName sets the DiskName field's value. +func (s *GetDiskInput) SetDiskName(v string) *GetDiskInput { + s.DiskName = &v return s } -type DeleteInstanceSnapshotOutput struct { +type GetDiskOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the results of your - // delete instance snapshot request. - Operations []*Operation `locationName:"operations" type:"list"` + // An object containing information about the disk. + Disk *Disk `locationName:"disk" type:"structure"` } // String returns the string representation -func (s DeleteInstanceSnapshotOutput) String() string { +func (s GetDiskOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteInstanceSnapshotOutput) GoString() string { +func (s GetDiskOutput) GoString() string { return s.String() } -// SetOperations sets the Operations field's value. 
-func (s *DeleteInstanceSnapshotOutput) SetOperations(v []*Operation) *DeleteInstanceSnapshotOutput { - s.Operations = v +// SetDisk sets the Disk field's value. +func (s *GetDiskOutput) SetDisk(v *Disk) *GetDiskOutput { + s.Disk = v return s } -type DeleteKeyPairInput struct { +type GetDiskSnapshotInput struct { _ struct{} `type:"structure"` - // The name of the key pair to delete. + // The name of the disk snapshot (e.g., my-disk-snapshot). // - // KeyPairName is a required field - KeyPairName *string `locationName:"keyPairName" type:"string" required:"true"` + // DiskSnapshotName is a required field + DiskSnapshotName *string `locationName:"diskSnapshotName" type:"string" required:"true"` } // String returns the string representation -func (s DeleteKeyPairInput) String() string { +func (s GetDiskSnapshotInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteKeyPairInput) GoString() string { +func (s GetDiskSnapshotInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteKeyPairInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteKeyPairInput"} - if s.KeyPairName == nil { - invalidParams.Add(request.NewErrParamRequired("KeyPairName")) +func (s *GetDiskSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDiskSnapshotInput"} + if s.DiskSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("DiskSnapshotName")) } if invalidParams.Len() > 0 { @@ -9769,210 +13929,173 @@ func (s *DeleteKeyPairInput) Validate() error { return nil } -// SetKeyPairName sets the KeyPairName field's value. -func (s *DeleteKeyPairInput) SetKeyPairName(v string) *DeleteKeyPairInput { - s.KeyPairName = &v +// SetDiskSnapshotName sets the DiskSnapshotName field's value. +func (s *GetDiskSnapshotInput) SetDiskSnapshotName(v string) *GetDiskSnapshotInput { + s.DiskSnapshotName = &v return s } -type DeleteKeyPairOutput struct { +type GetDiskSnapshotOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the results of your - // delete key pair request. - Operation *Operation `locationName:"operation" type:"structure"` + // An object containing information about the disk snapshot. + DiskSnapshot *DiskSnapshot `locationName:"diskSnapshot" type:"structure"` } // String returns the string representation -func (s DeleteKeyPairOutput) String() string { +func (s GetDiskSnapshotOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteKeyPairOutput) GoString() string { +func (s GetDiskSnapshotOutput) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *DeleteKeyPairOutput) SetOperation(v *Operation) *DeleteKeyPairOutput { - s.Operation = v +// SetDiskSnapshot sets the DiskSnapshot field's value. +func (s *GetDiskSnapshotOutput) SetDiskSnapshot(v *DiskSnapshot) *GetDiskSnapshotOutput { + s.DiskSnapshot = v return s } -type DeleteLoadBalancerInput struct { +type GetDiskSnapshotsInput struct { _ struct{} `type:"structure"` - // The name of the load balancer you want to delete. - // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // A token used for advancing to the next page of results from your GetDiskSnapshots + // request. 
+ PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s DeleteLoadBalancerInput) String() string { +func (s GetDiskSnapshotsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteLoadBalancerInput) GoString() string { +func (s GetDiskSnapshotsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteLoadBalancerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteLoadBalancerInput"} - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *DeleteLoadBalancerInput) SetLoadBalancerName(v string) *DeleteLoadBalancerInput { - s.LoadBalancerName = &v +// SetPageToken sets the PageToken field's value. +func (s *GetDiskSnapshotsInput) SetPageToken(v string) *GetDiskSnapshotsInput { + s.PageToken = &v return s } -type DeleteLoadBalancerOutput struct { +type GetDiskSnapshotsOutput struct { _ struct{} `type:"structure"` - // An object describing the API operations. - Operations []*Operation `locationName:"operations" type:"list"` + // An array of objects containing information about all block storage disk snapshots. + DiskSnapshots []*DiskSnapshot `locationName:"diskSnapshots" type:"list"` + + // A token used for advancing to the next page of results from your GetDiskSnapshots + // request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s DeleteLoadBalancerOutput) String() string { +func (s GetDiskSnapshotsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteLoadBalancerOutput) GoString() string { +func (s GetDiskSnapshotsOutput) GoString() string { return s.String() } -// SetOperations sets the Operations field's value. -func (s *DeleteLoadBalancerOutput) SetOperations(v []*Operation) *DeleteLoadBalancerOutput { - s.Operations = v +// SetDiskSnapshots sets the DiskSnapshots field's value. +func (s *GetDiskSnapshotsOutput) SetDiskSnapshots(v []*DiskSnapshot) *GetDiskSnapshotsOutput { + s.DiskSnapshots = v return s } -type DeleteLoadBalancerTlsCertificateInput struct { - _ struct{} `type:"structure"` - - // The SSL/TLS certificate name. - // - // CertificateName is a required field - CertificateName *string `locationName:"certificateName" type:"string" required:"true"` +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetDiskSnapshotsOutput) SetNextPageToken(v string) *GetDiskSnapshotsOutput { + s.NextPageToken = &v + return s +} - // When true, forces the deletion of an SSL/TLS certificate. - // - // There can be two certificates associated with a Lightsail load balancer: - // the primary and the backup. The force parameter is required when the primary - // SSL/TLS certificate is in use by an instance attached to the load balancer. - Force *bool `locationName:"force" type:"boolean"` +type GetDisksInput struct { + _ struct{} `type:"structure"` - // The load balancer name. - // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // A token used for advancing to the next page of results from your GetDisks + // request. 
+ PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s DeleteLoadBalancerTlsCertificateInput) String() string { +func (s GetDisksInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteLoadBalancerTlsCertificateInput) GoString() string { - return s.String() -} - -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteLoadBalancerTlsCertificateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteLoadBalancerTlsCertificateInput"} - if s.CertificateName == nil { - invalidParams.Add(request.NewErrParamRequired("CertificateName")) - } - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetCertificateName sets the CertificateName field's value. -func (s *DeleteLoadBalancerTlsCertificateInput) SetCertificateName(v string) *DeleteLoadBalancerTlsCertificateInput { - s.CertificateName = &v - return s -} - -// SetForce sets the Force field's value. -func (s *DeleteLoadBalancerTlsCertificateInput) SetForce(v bool) *DeleteLoadBalancerTlsCertificateInput { - s.Force = &v - return s +func (s GetDisksInput) GoString() string { + return s.String() } -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *DeleteLoadBalancerTlsCertificateInput) SetLoadBalancerName(v string) *DeleteLoadBalancerTlsCertificateInput { - s.LoadBalancerName = &v +// SetPageToken sets the PageToken field's value. +func (s *GetDisksInput) SetPageToken(v string) *GetDisksInput { + s.PageToken = &v return s } -type DeleteLoadBalancerTlsCertificateOutput struct { +type GetDisksOutput struct { _ struct{} `type:"structure"` - // An object describing the API operations. - Operations []*Operation `locationName:"operations" type:"list"` + // An array of objects containing information about all block storage disks. + Disks []*Disk `locationName:"disks" type:"list"` + + // A token used for advancing to the next page of results from your GetDisks + // request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s DeleteLoadBalancerTlsCertificateOutput) String() string { +func (s GetDisksOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteLoadBalancerTlsCertificateOutput) GoString() string { +func (s GetDisksOutput) GoString() string { return s.String() } -// SetOperations sets the Operations field's value. -func (s *DeleteLoadBalancerTlsCertificateOutput) SetOperations(v []*Operation) *DeleteLoadBalancerTlsCertificateOutput { - s.Operations = v +// SetDisks sets the Disks field's value. +func (s *GetDisksOutput) SetDisks(v []*Disk) *GetDisksOutput { + s.Disks = v return s } -type DetachDiskInput struct { +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetDisksOutput) SetNextPageToken(v string) *GetDisksOutput { + s.NextPageToken = &v + return s +} + +type GetDomainInput struct { _ struct{} `type:"structure"` - // The unique name of the disk you want to detach from your instance (e.g., - // my-disk). + // The domain name for which your want to return information about. 
// - // DiskName is a required field - DiskName *string `locationName:"diskName" type:"string" required:"true"` + // DomainName is a required field + DomainName *string `locationName:"domainName" type:"string" required:"true"` } // String returns the string representation -func (s DetachDiskInput) String() string { +func (s GetDomainInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachDiskInput) GoString() string { +func (s GetDomainInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DetachDiskInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DetachDiskInput"} - if s.DiskName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskName")) +func (s *GetDomainInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDomainInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) } if invalidParams.Len() > 0 { @@ -9981,135 +14104,121 @@ func (s *DetachDiskInput) Validate() error { return nil } -// SetDiskName sets the DiskName field's value. -func (s *DetachDiskInput) SetDiskName(v string) *DetachDiskInput { - s.DiskName = &v +// SetDomainName sets the DomainName field's value. +func (s *GetDomainInput) SetDomainName(v string) *GetDomainInput { + s.DomainName = &v return s } -type DetachDiskOutput struct { +type GetDomainOutput struct { _ struct{} `type:"structure"` - // An object describing the API operations. - Operations []*Operation `locationName:"operations" type:"list"` + // An array of key-value pairs containing information about your get domain + // request. + Domain *Domain `locationName:"domain" type:"structure"` } // String returns the string representation -func (s DetachDiskOutput) String() string { +func (s GetDomainOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachDiskOutput) GoString() string { +func (s GetDomainOutput) GoString() string { return s.String() } -// SetOperations sets the Operations field's value. -func (s *DetachDiskOutput) SetOperations(v []*Operation) *DetachDiskOutput { - s.Operations = v +// SetDomain sets the Domain field's value. +func (s *GetDomainOutput) SetDomain(v *Domain) *GetDomainOutput { + s.Domain = v return s } -type DetachInstancesFromLoadBalancerInput struct { +type GetDomainsInput struct { _ struct{} `type:"structure"` - // An array of strings containing the names of the instances you want to detach - // from the load balancer. - // - // InstanceNames is a required field - InstanceNames []*string `locationName:"instanceNames" type:"list" required:"true"` - - // The name of the Lightsail load balancer. - // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // A token used for advancing to the next page of results from your get domains + // request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s DetachInstancesFromLoadBalancerInput) String() string { +func (s GetDomainsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachInstancesFromLoadBalancerInput) GoString() string { +func (s GetDomainsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *DetachInstancesFromLoadBalancerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DetachInstancesFromLoadBalancerInput"} - if s.InstanceNames == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceNames")) - } - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetInstanceNames sets the InstanceNames field's value. -func (s *DetachInstancesFromLoadBalancerInput) SetInstanceNames(v []*string) *DetachInstancesFromLoadBalancerInput { - s.InstanceNames = v - return s -} - -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *DetachInstancesFromLoadBalancerInput) SetLoadBalancerName(v string) *DetachInstancesFromLoadBalancerInput { - s.LoadBalancerName = &v +// SetPageToken sets the PageToken field's value. +func (s *GetDomainsInput) SetPageToken(v string) *GetDomainsInput { + s.PageToken = &v return s } -type DetachInstancesFromLoadBalancerOutput struct { +type GetDomainsOutput struct { _ struct{} `type:"structure"` - // An object describing the API operations. - Operations []*Operation `locationName:"operations" type:"list"` + // An array of key-value pairs containing information about each of the domain + // entries in the user's account. + Domains []*Domain `locationName:"domains" type:"list"` + + // A token used for advancing to the next page of results from your get active + // names request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s DetachInstancesFromLoadBalancerOutput) String() string { +func (s GetDomainsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachInstancesFromLoadBalancerOutput) GoString() string { +func (s GetDomainsOutput) GoString() string { return s.String() } -// SetOperations sets the Operations field's value. -func (s *DetachInstancesFromLoadBalancerOutput) SetOperations(v []*Operation) *DetachInstancesFromLoadBalancerOutput { - s.Operations = v +// SetDomains sets the Domains field's value. +func (s *GetDomainsOutput) SetDomains(v []*Domain) *GetDomainsOutput { + s.Domains = v return s } -type DetachStaticIpInput struct { +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetDomainsOutput) SetNextPageToken(v string) *GetDomainsOutput { + s.NextPageToken = &v + return s +} + +type GetInstanceAccessDetailsInput struct { _ struct{} `type:"structure"` - // The name of the static IP to detach from the instance. + // The name of the instance to access. // - // StaticIpName is a required field - StaticIpName *string `locationName:"staticIpName" type:"string" required:"true"` + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + + // The protocol to use to connect to your instance. Defaults to ssh. + Protocol *string `locationName:"protocol" type:"string" enum:"InstanceAccessProtocol"` } // String returns the string representation -func (s DetachStaticIpInput) String() string { +func (s GetInstanceAccessDetailsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DetachStaticIpInput) GoString() string { +func (s GetInstanceAccessDetailsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DetachStaticIpInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DetachStaticIpInput"} - if s.StaticIpName == nil { - invalidParams.Add(request.NewErrParamRequired("StaticIpName")) +func (s *GetInstanceAccessDetailsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInstanceAccessDetailsInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) } if invalidParams.Len() > 0 { @@ -10118,766 +14227,890 @@ func (s *DetachStaticIpInput) Validate() error { return nil } -// SetStaticIpName sets the StaticIpName field's value. -func (s *DetachStaticIpInput) SetStaticIpName(v string) *DetachStaticIpInput { - s.StaticIpName = &v +// SetInstanceName sets the InstanceName field's value. +func (s *GetInstanceAccessDetailsInput) SetInstanceName(v string) *GetInstanceAccessDetailsInput { + s.InstanceName = &v return s } -type DetachStaticIpOutput struct { - _ struct{} `type:"structure"` - - // An array of key-value pairs containing information about the results of your - // detach static IP request. - Operations []*Operation `locationName:"operations" type:"list"` -} - -// String returns the string representation -func (s DetachStaticIpOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DetachStaticIpOutput) GoString() string { - return s.String() -} - -// SetOperations sets the Operations field's value. -func (s *DetachStaticIpOutput) SetOperations(v []*Operation) *DetachStaticIpOutput { - s.Operations = v +// SetProtocol sets the Protocol field's value. +func (s *GetInstanceAccessDetailsInput) SetProtocol(v string) *GetInstanceAccessDetailsInput { + s.Protocol = &v return s } -// Describes a system disk or an block storage disk. -type Disk struct { +type GetInstanceAccessDetailsOutput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the disk. - Arn *string `locationName:"arn" type:"string"` - - // The resources to which the disk is attached. - AttachedTo *string `locationName:"attachedTo" type:"string"` - - // (Deprecated) The attachment state of the disk. - // - // In releases prior to November 14, 2017, this parameter returned attached - // for system disks in the API response. It is now deprecated, but still included - // in the response. Use isAttached instead. - AttachmentState *string `locationName:"attachmentState" deprecated:"true" type:"string"` - - // The date when the disk was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` - - // (Deprecated) The number of GB in use by the disk. - // - // In releases prior to November 14, 2017, this parameter was not included in - // the API response. It is now deprecated. - GbInUse *int64 `locationName:"gbInUse" deprecated:"true" type:"integer"` - - // The input/output operations per second (IOPS) of the disk. - Iops *int64 `locationName:"iops" type:"integer"` - - // A Boolean value indicating whether the disk is attached. - IsAttached *bool `locationName:"isAttached" type:"boolean"` - - // A Boolean value indicating whether this disk is a system disk (has an operating - // system loaded on it). - IsSystemDisk *bool `locationName:"isSystemDisk" type:"boolean"` - - // The AWS Region and Availability Zone where the disk is located. - Location *ResourceLocation `locationName:"location" type:"structure"` - - // The unique name of the disk. 
- Name *string `locationName:"name" type:"string"` - - // The disk path. - Path *string `locationName:"path" type:"string"` - - // The Lightsail resource type (e.g., Disk). - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` - - // The size of the disk in GB. - SizeInGb *int64 `locationName:"sizeInGb" type:"integer"` - - // Describes the status of the disk. - State *string `locationName:"state" type:"string" enum:"DiskState"` - - // The support code. Include this code in your email to support when you have - // questions about an instance or another resource in Lightsail. This code enables - // our support team to look up your Lightsail information more easily. - SupportCode *string `locationName:"supportCode" type:"string"` + // An array of key-value pairs containing information about a get instance access + // request. + AccessDetails *InstanceAccessDetails `locationName:"accessDetails" type:"structure"` } // String returns the string representation -func (s Disk) String() string { +func (s GetInstanceAccessDetailsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Disk) GoString() string { +func (s GetInstanceAccessDetailsOutput) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *Disk) SetArn(v string) *Disk { - s.Arn = &v +// SetAccessDetails sets the AccessDetails field's value. +func (s *GetInstanceAccessDetailsOutput) SetAccessDetails(v *InstanceAccessDetails) *GetInstanceAccessDetailsOutput { + s.AccessDetails = v return s } -// SetAttachedTo sets the AttachedTo field's value. -func (s *Disk) SetAttachedTo(v string) *Disk { - s.AttachedTo = &v - return s +type GetInstanceInput struct { + _ struct{} `type:"structure"` + + // The name of the instance. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` } -// SetAttachmentState sets the AttachmentState field's value. -func (s *Disk) SetAttachmentState(v string) *Disk { - s.AttachmentState = &v - return s +// String returns the string representation +func (s GetInstanceInput) String() string { + return awsutil.Prettify(s) } -// SetCreatedAt sets the CreatedAt field's value. -func (s *Disk) SetCreatedAt(v time.Time) *Disk { - s.CreatedAt = &v - return s +// GoString returns the string representation +func (s GetInstanceInput) GoString() string { + return s.String() } -// SetGbInUse sets the GbInUse field's value. -func (s *Disk) SetGbInUse(v int64) *Disk { - s.GbInUse = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInstanceInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetIops sets the Iops field's value. -func (s *Disk) SetIops(v int64) *Disk { - s.Iops = &v +// SetInstanceName sets the InstanceName field's value. +func (s *GetInstanceInput) SetInstanceName(v string) *GetInstanceInput { + s.InstanceName = &v return s } -// SetIsAttached sets the IsAttached field's value. -func (s *Disk) SetIsAttached(v bool) *Disk { - s.IsAttached = &v - return s +type GetInstanceMetricDataInput struct { + _ struct{} `type:"structure"` + + // The end time of the time period. 
+ // + // EndTime is a required field + EndTime *time.Time `locationName:"endTime" type:"timestamp" required:"true"` + + // The name of the instance for which you want to get metrics data. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + + // The metric name to get data about. + // + // MetricName is a required field + MetricName *string `locationName:"metricName" type:"string" required:"true" enum:"InstanceMetricName"` + + // The granularity, in seconds, of the returned data points. + // + // Period is a required field + Period *int64 `locationName:"period" min:"60" type:"integer" required:"true"` + + // The start time of the time period. + // + // StartTime is a required field + StartTime *time.Time `locationName:"startTime" type:"timestamp" required:"true"` + + // The instance statistics. + // + // Statistics is a required field + Statistics []*string `locationName:"statistics" type:"list" required:"true"` + + // The unit. The list of valid values is below. + // + // Unit is a required field + Unit *string `locationName:"unit" type:"string" required:"true" enum:"MetricUnit"` } -// SetIsSystemDisk sets the IsSystemDisk field's value. -func (s *Disk) SetIsSystemDisk(v bool) *Disk { - s.IsSystemDisk = &v - return s +// String returns the string representation +func (s GetInstanceMetricDataInput) String() string { + return awsutil.Prettify(s) } -// SetLocation sets the Location field's value. -func (s *Disk) SetLocation(v *ResourceLocation) *Disk { - s.Location = v +// GoString returns the string representation +func (s GetInstanceMetricDataInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetInstanceMetricDataInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInstanceMetricDataInput"} + if s.EndTime == nil { + invalidParams.Add(request.NewErrParamRequired("EndTime")) + } + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + if s.MetricName == nil { + invalidParams.Add(request.NewErrParamRequired("MetricName")) + } + if s.Period == nil { + invalidParams.Add(request.NewErrParamRequired("Period")) + } + if s.Period != nil && *s.Period < 60 { + invalidParams.Add(request.NewErrParamMinValue("Period", 60)) + } + if s.StartTime == nil { + invalidParams.Add(request.NewErrParamRequired("StartTime")) + } + if s.Statistics == nil { + invalidParams.Add(request.NewErrParamRequired("Statistics")) + } + if s.Unit == nil { + invalidParams.Add(request.NewErrParamRequired("Unit")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndTime sets the EndTime field's value. +func (s *GetInstanceMetricDataInput) SetEndTime(v time.Time) *GetInstanceMetricDataInput { + s.EndTime = &v return s } -// SetName sets the Name field's value. -func (s *Disk) SetName(v string) *Disk { - s.Name = &v +// SetInstanceName sets the InstanceName field's value. +func (s *GetInstanceMetricDataInput) SetInstanceName(v string) *GetInstanceMetricDataInput { + s.InstanceName = &v return s } -// SetPath sets the Path field's value. -func (s *Disk) SetPath(v string) *Disk { - s.Path = &v +// SetMetricName sets the MetricName field's value. +func (s *GetInstanceMetricDataInput) SetMetricName(v string) *GetInstanceMetricDataInput { + s.MetricName = &v return s } -// SetResourceType sets the ResourceType field's value. 
-func (s *Disk) SetResourceType(v string) *Disk { - s.ResourceType = &v +// SetPeriod sets the Period field's value. +func (s *GetInstanceMetricDataInput) SetPeriod(v int64) *GetInstanceMetricDataInput { + s.Period = &v return s } -// SetSizeInGb sets the SizeInGb field's value. -func (s *Disk) SetSizeInGb(v int64) *Disk { - s.SizeInGb = &v +// SetStartTime sets the StartTime field's value. +func (s *GetInstanceMetricDataInput) SetStartTime(v time.Time) *GetInstanceMetricDataInput { + s.StartTime = &v return s } -// SetState sets the State field's value. -func (s *Disk) SetState(v string) *Disk { - s.State = &v +// SetStatistics sets the Statistics field's value. +func (s *GetInstanceMetricDataInput) SetStatistics(v []*string) *GetInstanceMetricDataInput { + s.Statistics = v return s } -// SetSupportCode sets the SupportCode field's value. -func (s *Disk) SetSupportCode(v string) *Disk { - s.SupportCode = &v +// SetUnit sets the Unit field's value. +func (s *GetInstanceMetricDataInput) SetUnit(v string) *GetInstanceMetricDataInput { + s.Unit = &v return s } -// Describes a block storage disk mapping. -type DiskMap struct { +type GetInstanceMetricDataOutput struct { _ struct{} `type:"structure"` - // The new disk name (e.g., my-new-disk). - NewDiskName *string `locationName:"newDiskName" type:"string"` + // An array of key-value pairs containing information about the results of your + // get instance metric data request. + MetricData []*MetricDatapoint `locationName:"metricData" type:"list"` - // The original disk path exposed to the instance (for example, /dev/sdh). - OriginalDiskPath *string `locationName:"originalDiskPath" type:"string"` + // The metric name to return data for. + MetricName *string `locationName:"metricName" type:"string" enum:"InstanceMetricName"` } // String returns the string representation -func (s DiskMap) String() string { +func (s GetInstanceMetricDataOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DiskMap) GoString() string { +func (s GetInstanceMetricDataOutput) GoString() string { return s.String() } -// SetNewDiskName sets the NewDiskName field's value. -func (s *DiskMap) SetNewDiskName(v string) *DiskMap { - s.NewDiskName = &v +// SetMetricData sets the MetricData field's value. +func (s *GetInstanceMetricDataOutput) SetMetricData(v []*MetricDatapoint) *GetInstanceMetricDataOutput { + s.MetricData = v return s } -// SetOriginalDiskPath sets the OriginalDiskPath field's value. -func (s *DiskMap) SetOriginalDiskPath(v string) *DiskMap { - s.OriginalDiskPath = &v +// SetMetricName sets the MetricName field's value. +func (s *GetInstanceMetricDataOutput) SetMetricName(v string) *GetInstanceMetricDataOutput { + s.MetricName = &v return s } -// Describes a block storage disk snapshot. -type DiskSnapshot struct { +type GetInstanceOutput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the disk snapshot. - Arn *string `locationName:"arn" type:"string"` - - // The date when the disk snapshot was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` - - // The Amazon Resource Name (ARN) of the source disk from which you are creating - // the disk snapshot. - FromDiskArn *string `locationName:"fromDiskArn" type:"string"` - - // The unique name of the source disk from which you are creating the disk snapshot. 
- FromDiskName *string `locationName:"fromDiskName" type:"string"` - - // The AWS Region and Availability Zone where the disk snapshot was created. - Location *ResourceLocation `locationName:"location" type:"structure"` - - // The name of the disk snapshot (e.g., my-disk-snapshot). - Name *string `locationName:"name" type:"string"` + // An array of key-value pairs containing information about the specified instance. + Instance *Instance `locationName:"instance" type:"structure"` +} - // The progress of the disk snapshot operation. - Progress *string `locationName:"progress" type:"string"` +// String returns the string representation +func (s GetInstanceOutput) String() string { + return awsutil.Prettify(s) +} - // The Lightsail resource type (e.g., DiskSnapshot). - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` +// GoString returns the string representation +func (s GetInstanceOutput) GoString() string { + return s.String() +} - // The size of the disk in GB. - SizeInGb *int64 `locationName:"sizeInGb" type:"integer"` +// SetInstance sets the Instance field's value. +func (s *GetInstanceOutput) SetInstance(v *Instance) *GetInstanceOutput { + s.Instance = v + return s +} - // The status of the disk snapshot operation. - State *string `locationName:"state" type:"string" enum:"DiskSnapshotState"` +type GetInstancePortStatesInput struct { + _ struct{} `type:"structure"` - // The support code. Include this code in your email to support when you have - // questions about an instance or another resource in Lightsail. This code enables - // our support team to look up your Lightsail information more easily. - SupportCode *string `locationName:"supportCode" type:"string"` + // The name of the instance. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` } // String returns the string representation -func (s DiskSnapshot) String() string { +func (s GetInstancePortStatesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DiskSnapshot) GoString() string { +func (s GetInstancePortStatesInput) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *DiskSnapshot) SetArn(v string) *DiskSnapshot { - s.Arn = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetInstancePortStatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInstancePortStatesInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetCreatedAt sets the CreatedAt field's value. -func (s *DiskSnapshot) SetCreatedAt(v time.Time) *DiskSnapshot { - s.CreatedAt = &v +// SetInstanceName sets the InstanceName field's value. +func (s *GetInstancePortStatesInput) SetInstanceName(v string) *GetInstancePortStatesInput { + s.InstanceName = &v return s } -// SetFromDiskArn sets the FromDiskArn field's value. -func (s *DiskSnapshot) SetFromDiskArn(v string) *DiskSnapshot { - s.FromDiskArn = &v - return s +type GetInstancePortStatesOutput struct { + _ struct{} `type:"structure"` + + // Information about the port states resulting from your request. + PortStates []*InstancePortState `locationName:"portStates" type:"list"` } -// SetFromDiskName sets the FromDiskName field's value. 
-func (s *DiskSnapshot) SetFromDiskName(v string) *DiskSnapshot { - s.FromDiskName = &v - return s +// String returns the string representation +func (s GetInstancePortStatesOutput) String() string { + return awsutil.Prettify(s) } -// SetLocation sets the Location field's value. -func (s *DiskSnapshot) SetLocation(v *ResourceLocation) *DiskSnapshot { - s.Location = v - return s +// GoString returns the string representation +func (s GetInstancePortStatesOutput) GoString() string { + return s.String() } -// SetName sets the Name field's value. -func (s *DiskSnapshot) SetName(v string) *DiskSnapshot { - s.Name = &v +// SetPortStates sets the PortStates field's value. +func (s *GetInstancePortStatesOutput) SetPortStates(v []*InstancePortState) *GetInstancePortStatesOutput { + s.PortStates = v return s } -// SetProgress sets the Progress field's value. -func (s *DiskSnapshot) SetProgress(v string) *DiskSnapshot { - s.Progress = &v - return s +type GetInstanceSnapshotInput struct { + _ struct{} `type:"structure"` + + // The name of the snapshot for which you are requesting information. + // + // InstanceSnapshotName is a required field + InstanceSnapshotName *string `locationName:"instanceSnapshotName" type:"string" required:"true"` } -// SetResourceType sets the ResourceType field's value. -func (s *DiskSnapshot) SetResourceType(v string) *DiskSnapshot { - s.ResourceType = &v - return s +// String returns the string representation +func (s GetInstanceSnapshotInput) String() string { + return awsutil.Prettify(s) } -// SetSizeInGb sets the SizeInGb field's value. -func (s *DiskSnapshot) SetSizeInGb(v int64) *DiskSnapshot { - s.SizeInGb = &v - return s +// GoString returns the string representation +func (s GetInstanceSnapshotInput) GoString() string { + return s.String() } -// SetState sets the State field's value. -func (s *DiskSnapshot) SetState(v string) *DiskSnapshot { - s.State = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetInstanceSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInstanceSnapshotInput"} + if s.InstanceSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceSnapshotName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetSupportCode sets the SupportCode field's value. -func (s *DiskSnapshot) SetSupportCode(v string) *DiskSnapshot { - s.SupportCode = &v +// SetInstanceSnapshotName sets the InstanceSnapshotName field's value. +func (s *GetInstanceSnapshotInput) SetInstanceSnapshotName(v string) *GetInstanceSnapshotInput { + s.InstanceSnapshotName = &v return s } -// Describes a domain where you are storing recordsets in Lightsail. -type Domain struct { +type GetInstanceSnapshotOutput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the domain recordset (e.g., arn:aws:lightsail:global:123456789101:Domain/824cede0-abc7-4f84-8dbc-12345EXAMPLE). - Arn *string `locationName:"arn" type:"string"` - - // The date when the domain recordset was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // An array of key-value pairs containing information about the results of your + // get instance snapshot request. + InstanceSnapshot *InstanceSnapshot `locationName:"instanceSnapshot" type:"structure"` +} - // An array of key-value pairs containing information about the domain entries. 
- DomainEntries []*DomainEntry `locationName:"domainEntries" type:"list"` +// String returns the string representation +func (s GetInstanceSnapshotOutput) String() string { + return awsutil.Prettify(s) +} - // The AWS Region and Availability Zones where the domain recordset was created. - Location *ResourceLocation `locationName:"location" type:"structure"` +// GoString returns the string representation +func (s GetInstanceSnapshotOutput) GoString() string { + return s.String() +} - // The name of the domain. - Name *string `locationName:"name" type:"string"` +// SetInstanceSnapshot sets the InstanceSnapshot field's value. +func (s *GetInstanceSnapshotOutput) SetInstanceSnapshot(v *InstanceSnapshot) *GetInstanceSnapshotOutput { + s.InstanceSnapshot = v + return s +} - // The resource type. - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` +type GetInstanceSnapshotsInput struct { + _ struct{} `type:"structure"` - // The support code. Include this code in your email to support when you have - // questions about an instance or another resource in Lightsail. This code enables - // our support team to look up your Lightsail information more easily. - SupportCode *string `locationName:"supportCode" type:"string"` + // A token used for advancing to the next page of results from your get instance + // snapshots request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s Domain) String() string { +func (s GetInstanceSnapshotsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Domain) GoString() string { +func (s GetInstanceSnapshotsInput) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *Domain) SetArn(v string) *Domain { - s.Arn = &v +// SetPageToken sets the PageToken field's value. +func (s *GetInstanceSnapshotsInput) SetPageToken(v string) *GetInstanceSnapshotsInput { + s.PageToken = &v return s } -// SetCreatedAt sets the CreatedAt field's value. -func (s *Domain) SetCreatedAt(v time.Time) *Domain { - s.CreatedAt = &v - return s -} +type GetInstanceSnapshotsOutput struct { + _ struct{} `type:"structure"` -// SetDomainEntries sets the DomainEntries field's value. -func (s *Domain) SetDomainEntries(v []*DomainEntry) *Domain { - s.DomainEntries = v - return s + // An array of key-value pairs containing information about the results of your + // get instance snapshots request. + InstanceSnapshots []*InstanceSnapshot `locationName:"instanceSnapshots" type:"list"` + + // A token used for advancing to the next page of results from your get instance + // snapshots request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } -// SetLocation sets the Location field's value. -func (s *Domain) SetLocation(v *ResourceLocation) *Domain { - s.Location = v - return s +// String returns the string representation +func (s GetInstanceSnapshotsOutput) String() string { + return awsutil.Prettify(s) } -// SetName sets the Name field's value. -func (s *Domain) SetName(v string) *Domain { - s.Name = &v - return s +// GoString returns the string representation +func (s GetInstanceSnapshotsOutput) GoString() string { + return s.String() } -// SetResourceType sets the ResourceType field's value. -func (s *Domain) SetResourceType(v string) *Domain { - s.ResourceType = &v +// SetInstanceSnapshots sets the InstanceSnapshots field's value. 
+func (s *GetInstanceSnapshotsOutput) SetInstanceSnapshots(v []*InstanceSnapshot) *GetInstanceSnapshotsOutput { + s.InstanceSnapshots = v return s } -// SetSupportCode sets the SupportCode field's value. -func (s *Domain) SetSupportCode(v string) *Domain { - s.SupportCode = &v +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetInstanceSnapshotsOutput) SetNextPageToken(v string) *GetInstanceSnapshotsOutput { + s.NextPageToken = &v return s } -// Describes a domain recordset entry. -type DomainEntry struct { +type GetInstanceStateInput struct { _ struct{} `type:"structure"` - // The ID of the domain recordset entry. - Id *string `locationName:"id" type:"string"` - - // When true, specifies whether the domain entry is an alias used by the Lightsail - // load balancer. You can include an alias (A type) record in your request, - // which points to a load balancer DNS name and routes traffic to your load - // balancer - IsAlias *bool `locationName:"isAlias" type:"boolean"` - - // The name of the domain. - Name *string `locationName:"name" type:"string"` - - // (Deprecated) The options for the domain entry. - // - // In releases prior to November 29, 2017, this parameter was not included in - // the API response. It is now deprecated. - Options map[string]*string `locationName:"options" deprecated:"true" type:"map"` - - // The target AWS name server (e.g., ns-111.awsdns-22.com.). + // The name of the instance to get state information about. // - // For Lightsail load balancers, the value looks like ab1234c56789c6b86aba6fb203d443bc-123456789.us-east-2.elb.amazonaws.com. - // Be sure to also set isAlias to true when setting up an A record for a load - // balancer. - Target *string `locationName:"target" type:"string"` - - // The type of domain entry (e.g., SOA or NS). - Type *string `locationName:"type" type:"string"` + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` } // String returns the string representation -func (s DomainEntry) String() string { +func (s GetInstanceStateInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DomainEntry) GoString() string { +func (s GetInstanceStateInput) GoString() string { return s.String() } -// SetId sets the Id field's value. -func (s *DomainEntry) SetId(v string) *DomainEntry { - s.Id = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetInstanceStateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetInstanceStateInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceName sets the InstanceName field's value. +func (s *GetInstanceStateInput) SetInstanceName(v string) *GetInstanceStateInput { + s.InstanceName = &v return s } -// SetIsAlias sets the IsAlias field's value. -func (s *DomainEntry) SetIsAlias(v bool) *DomainEntry { - s.IsAlias = &v - return s -} +type GetInstanceStateOutput struct { + _ struct{} `type:"structure"` -// SetName sets the Name field's value. -func (s *DomainEntry) SetName(v string) *DomainEntry { - s.Name = &v - return s + // The state of the instance. + State *InstanceState `locationName:"state" type:"structure"` } -// SetOptions sets the Options field's value. 
-func (s *DomainEntry) SetOptions(v map[string]*string) *DomainEntry { - s.Options = v - return s +// String returns the string representation +func (s GetInstanceStateOutput) String() string { + return awsutil.Prettify(s) } -// SetTarget sets the Target field's value. -func (s *DomainEntry) SetTarget(v string) *DomainEntry { - s.Target = &v - return s +// GoString returns the string representation +func (s GetInstanceStateOutput) GoString() string { + return s.String() } -// SetType sets the Type field's value. -func (s *DomainEntry) SetType(v string) *DomainEntry { - s.Type = &v +// SetState sets the State field's value. +func (s *GetInstanceStateOutput) SetState(v *InstanceState) *GetInstanceStateOutput { + s.State = v return s } -type DownloadDefaultKeyPairInput struct { +type GetInstancesInput struct { _ struct{} `type:"structure"` + + // A token used for advancing to the next page of results from your get instances + // request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s DownloadDefaultKeyPairInput) String() string { +func (s GetInstancesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DownloadDefaultKeyPairInput) GoString() string { +func (s GetInstancesInput) GoString() string { return s.String() } -type DownloadDefaultKeyPairOutput struct { +// SetPageToken sets the PageToken field's value. +func (s *GetInstancesInput) SetPageToken(v string) *GetInstancesInput { + s.PageToken = &v + return s +} + +type GetInstancesOutput struct { _ struct{} `type:"structure"` - // A base64-encoded RSA private key. - PrivateKeyBase64 *string `locationName:"privateKeyBase64" type:"string"` + // An array of key-value pairs containing information about your instances. + Instances []*Instance `locationName:"instances" type:"list"` - // A base64-encoded public key of the ssh-rsa type. - PublicKeyBase64 *string `locationName:"publicKeyBase64" type:"string"` + // A token used for advancing to the next page of results from your get instances + // request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s DownloadDefaultKeyPairOutput) String() string { +func (s GetInstancesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DownloadDefaultKeyPairOutput) GoString() string { +func (s GetInstancesOutput) GoString() string { return s.String() } -// SetPrivateKeyBase64 sets the PrivateKeyBase64 field's value. -func (s *DownloadDefaultKeyPairOutput) SetPrivateKeyBase64(v string) *DownloadDefaultKeyPairOutput { - s.PrivateKeyBase64 = &v +// SetInstances sets the Instances field's value. +func (s *GetInstancesOutput) SetInstances(v []*Instance) *GetInstancesOutput { + s.Instances = v return s } -// SetPublicKeyBase64 sets the PublicKeyBase64 field's value. -func (s *DownloadDefaultKeyPairOutput) SetPublicKeyBase64(v string) *DownloadDefaultKeyPairOutput { - s.PublicKeyBase64 = &v +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetInstancesOutput) SetNextPageToken(v string) *GetInstancesOutput { + s.NextPageToken = &v return s } -type GetActiveNamesInput struct { +type GetKeyPairInput struct { _ struct{} `type:"structure"` - // A token used for paginating results from your get active names request. - PageToken *string `locationName:"pageToken" type:"string"` + // The name of the key pair for which you are requesting information. 
+ // + // KeyPairName is a required field + KeyPairName *string `locationName:"keyPairName" type:"string" required:"true"` } // String returns the string representation -func (s GetActiveNamesInput) String() string { +func (s GetKeyPairInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetActiveNamesInput) GoString() string { +func (s GetKeyPairInput) GoString() string { return s.String() } -// SetPageToken sets the PageToken field's value. -func (s *GetActiveNamesInput) SetPageToken(v string) *GetActiveNamesInput { - s.PageToken = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetKeyPairInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetKeyPairInput"} + if s.KeyPairName == nil { + invalidParams.Add(request.NewErrParamRequired("KeyPairName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *GetKeyPairInput) SetKeyPairName(v string) *GetKeyPairInput { + s.KeyPairName = &v return s } -type GetActiveNamesOutput struct { +type GetKeyPairOutput struct { _ struct{} `type:"structure"` - // The list of active names returned by the get active names request. - ActiveNames []*string `locationName:"activeNames" type:"list"` - - // A token used for advancing to the next page of results from your get active - // names request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` + // An array of key-value pairs containing information about the key pair. + KeyPair *KeyPair `locationName:"keyPair" type:"structure"` } // String returns the string representation -func (s GetActiveNamesOutput) String() string { +func (s GetKeyPairOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetActiveNamesOutput) GoString() string { +func (s GetKeyPairOutput) GoString() string { return s.String() } -// SetActiveNames sets the ActiveNames field's value. -func (s *GetActiveNamesOutput) SetActiveNames(v []*string) *GetActiveNamesOutput { - s.ActiveNames = v - return s -} - -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetActiveNamesOutput) SetNextPageToken(v string) *GetActiveNamesOutput { - s.NextPageToken = &v +// SetKeyPair sets the KeyPair field's value. +func (s *GetKeyPairOutput) SetKeyPair(v *KeyPair) *GetKeyPairOutput { + s.KeyPair = v return s } -type GetBlueprintsInput struct { +type GetKeyPairsInput struct { _ struct{} `type:"structure"` - // A Boolean value indicating whether to include inactive results in your request. - IncludeInactive *bool `locationName:"includeInactive" type:"boolean"` - - // A token used for advancing to the next page of results from your get blueprints - // request. + // A token used for advancing to the next page of results from your get key + // pairs request. PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s GetBlueprintsInput) String() string { +func (s GetKeyPairsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetBlueprintsInput) GoString() string { +func (s GetKeyPairsInput) GoString() string { return s.String() } -// SetIncludeInactive sets the IncludeInactive field's value. 
-func (s *GetBlueprintsInput) SetIncludeInactive(v bool) *GetBlueprintsInput { - s.IncludeInactive = &v - return s -} - // SetPageToken sets the PageToken field's value. -func (s *GetBlueprintsInput) SetPageToken(v string) *GetBlueprintsInput { +func (s *GetKeyPairsInput) SetPageToken(v string) *GetKeyPairsInput { s.PageToken = &v return s } -type GetBlueprintsOutput struct { +type GetKeyPairsOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs that contains information about the available - // blueprints. - Blueprints []*Blueprint `locationName:"blueprints" type:"list"` + // An array of key-value pairs containing information about the key pairs. + KeyPairs []*KeyPair `locationName:"keyPairs" type:"list"` - // A token used for advancing to the next page of results from your get blueprints - // request. + // A token used for advancing to the next page of results from your get key + // pairs request. NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s GetBlueprintsOutput) String() string { +func (s GetKeyPairsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetBlueprintsOutput) GoString() string { +func (s GetKeyPairsOutput) GoString() string { return s.String() } -// SetBlueprints sets the Blueprints field's value. -func (s *GetBlueprintsOutput) SetBlueprints(v []*Blueprint) *GetBlueprintsOutput { - s.Blueprints = v +// SetKeyPairs sets the KeyPairs field's value. +func (s *GetKeyPairsOutput) SetKeyPairs(v []*KeyPair) *GetKeyPairsOutput { + s.KeyPairs = v return s } // SetNextPageToken sets the NextPageToken field's value. -func (s *GetBlueprintsOutput) SetNextPageToken(v string) *GetBlueprintsOutput { +func (s *GetKeyPairsOutput) SetNextPageToken(v string) *GetKeyPairsOutput { s.NextPageToken = &v return s } -type GetBundlesInput struct { +type GetLoadBalancerInput struct { _ struct{} `type:"structure"` - // A Boolean value that indicates whether to include inactive bundle results - // in your request. - IncludeInactive *bool `locationName:"includeInactive" type:"boolean"` - - // A token used for advancing to the next page of results from your get bundles - // request. - PageToken *string `locationName:"pageToken" type:"string"` + // The name of the load balancer. + // + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` } // String returns the string representation -func (s GetBundlesInput) String() string { +func (s GetLoadBalancerInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetBundlesInput) GoString() string { +func (s GetLoadBalancerInput) GoString() string { return s.String() } -// SetIncludeInactive sets the IncludeInactive field's value. -func (s *GetBundlesInput) SetIncludeInactive(v bool) *GetBundlesInput { - s.IncludeInactive = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLoadBalancerInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLoadBalancerInput"} + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetPageToken sets the PageToken field's value. 
-func (s *GetBundlesInput) SetPageToken(v string) *GetBundlesInput { - s.PageToken = &v +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *GetLoadBalancerInput) SetLoadBalancerName(v string) *GetLoadBalancerInput { + s.LoadBalancerName = &v return s } -type GetBundlesOutput struct { +type GetLoadBalancerMetricDataInput struct { _ struct{} `type:"structure"` - // An array of key-value pairs that contains information about the available - // bundles. - Bundles []*Bundle `locationName:"bundles" type:"list"` - - // A token used for advancing to the next page of results from your get active - // names request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` -} + // The end time of the period. + // + // EndTime is a required field + EndTime *time.Time `locationName:"endTime" type:"timestamp" required:"true"` -// String returns the string representation -func (s GetBundlesOutput) String() string { - return awsutil.Prettify(s) -} + // The name of the load balancer. + // + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` -// GoString returns the string representation -func (s GetBundlesOutput) GoString() string { - return s.String() -} + // The metric about which you want to return information. Valid values are listed + // below, along with the most useful statistics to include in your request. + // + // * ClientTLSNegotiationErrorCount - The number of TLS connections initiated + // by the client that did not establish a session with the load balancer. + // Possible causes include a mismatch of ciphers or protocols. + // + // Statistics: The most useful statistic is Sum. + // + // * HealthyHostCount - The number of target instances that are considered + // healthy. + // + // Statistics: The most useful statistic are Average, Minimum, and Maximum. + // + // * UnhealthyHostCount - The number of target instances that are considered + // unhealthy. + // + // Statistics: The most useful statistic are Average, Minimum, and Maximum. + // + // * HTTPCode_LB_4XX_Count - The number of HTTP 4XX client error codes that + // originate from the load balancer. Client errors are generated when requests + // are malformed or incomplete. These requests have not been received by + // the target instance. This count does not include any response codes generated + // by the target instances. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + // + // * HTTPCode_LB_5XX_Count - The number of HTTP 5XX server error codes that + // originate from the load balancer. This count does not include any response + // codes generated by the target instances. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. Note that Minimum, Maximum, and Average all + // return 1. + // + // * HTTPCode_Instance_2XX_Count - The number of HTTP response codes generated + // by the target instances. This does not include any response codes generated + // by the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + // + // * HTTPCode_Instance_3XX_Count - The number of HTTP response codes generated + // by the target instances. This does not include any response codes generated + // by the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. 
+ // + // * HTTPCode_Instance_4XX_Count - The number of HTTP response codes generated + // by the target instances. This does not include any response codes generated + // by the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + // + // * HTTPCode_Instance_5XX_Count - The number of HTTP response codes generated + // by the target instances. This does not include any response codes generated + // by the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + // + // * InstanceResponseTime - The time elapsed, in seconds, after the request + // leaves the load balancer until a response from the target instance is + // received. + // + // Statistics: The most useful statistic is Average. + // + // * RejectedConnectionCount - The number of connections that were rejected + // because the load balancer had reached its maximum number of connections. + // + // Statistics: The most useful statistic is Sum. + // + // * RequestCount - The number of requests processed over IPv4. This count + // includes only the requests with a response generated by a target instance + // of the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + // + // MetricName is a required field + MetricName *string `locationName:"metricName" type:"string" required:"true" enum:"LoadBalancerMetricName"` -// SetBundles sets the Bundles field's value. -func (s *GetBundlesOutput) SetBundles(v []*Bundle) *GetBundlesOutput { - s.Bundles = v - return s -} + // The granularity, in seconds, of the returned data points. + // + // Period is a required field + Period *int64 `locationName:"period" min:"60" type:"integer" required:"true"` -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetBundlesOutput) SetNextPageToken(v string) *GetBundlesOutput { - s.NextPageToken = &v - return s -} + // The start time of the period. + // + // StartTime is a required field + StartTime *time.Time `locationName:"startTime" type:"timestamp" required:"true"` -type GetDiskInput struct { - _ struct{} `type:"structure"` + // An array of statistics that you want to request metrics for. Valid values + // are listed below. + // + // * SampleCount - The count (number) of data points used for the statistical + // calculation. + // + // * Average - The value of Sum / SampleCount during the specified period. + // By comparing this statistic with the Minimum and Maximum, you can determine + // the full scope of a metric and how close the average use is to the Minimum + // and Maximum. This comparison helps you to know when to increase or decrease + // your resources as needed. + // + // * Sum - All values submitted for the matching metric added together. This + // statistic can be useful for determining the total volume of a metric. + // + // * Minimum - The lowest value observed during the specified period. You + // can use this value to determine low volumes of activity for your application. + // + // * Maximum - The highest value observed during the specified period. You + // can use this value to determine high volumes of activity for your application. + // + // Statistics is a required field + Statistics []*string `locationName:"statistics" type:"list" required:"true"` - // The name of the disk (e.g., my-disk). + // The unit for the time period request. Valid values are listed below. 
// - // DiskName is a required field - DiskName *string `locationName:"diskName" type:"string" required:"true"` + // Unit is a required field + Unit *string `locationName:"unit" type:"string" required:"true" enum:"MetricUnit"` } // String returns the string representation -func (s GetDiskInput) String() string { +func (s GetLoadBalancerMetricDataInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDiskInput) GoString() string { +func (s GetLoadBalancerMetricDataInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetDiskInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetDiskInput"} - if s.DiskName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskName")) +func (s *GetLoadBalancerMetricDataInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLoadBalancerMetricDataInput"} + if s.EndTime == nil { + invalidParams.Add(request.NewErrParamRequired("EndTime")) + } + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) + } + if s.MetricName == nil { + invalidParams.Add(request.NewErrParamRequired("MetricName")) + } + if s.Period == nil { + invalidParams.Add(request.NewErrParamRequired("Period")) + } + if s.Period != nil && *s.Period < 60 { + invalidParams.Add(request.NewErrParamMinValue("Period", 60)) + } + if s.StartTime == nil { + invalidParams.Add(request.NewErrParamRequired("StartTime")) + } + if s.Statistics == nil { + invalidParams.Add(request.NewErrParamRequired("Statistics")) + } + if s.Unit == nil { + invalidParams.Add(request.NewErrParamRequired("Unit")) } if invalidParams.Len() > 0 { @@ -10886,234 +15119,324 @@ func (s *GetDiskInput) Validate() error { return nil } -// SetDiskName sets the DiskName field's value. -func (s *GetDiskInput) SetDiskName(v string) *GetDiskInput { - s.DiskName = &v +// SetEndTime sets the EndTime field's value. +func (s *GetLoadBalancerMetricDataInput) SetEndTime(v time.Time) *GetLoadBalancerMetricDataInput { + s.EndTime = &v return s } -type GetDiskOutput struct { - _ struct{} `type:"structure"` +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *GetLoadBalancerMetricDataInput) SetLoadBalancerName(v string) *GetLoadBalancerMetricDataInput { + s.LoadBalancerName = &v + return s +} - // An object containing information about the disk. - Disk *Disk `locationName:"disk" type:"structure"` +// SetMetricName sets the MetricName field's value. +func (s *GetLoadBalancerMetricDataInput) SetMetricName(v string) *GetLoadBalancerMetricDataInput { + s.MetricName = &v + return s } -// String returns the string representation -func (s GetDiskOutput) String() string { - return awsutil.Prettify(s) +// SetPeriod sets the Period field's value. +func (s *GetLoadBalancerMetricDataInput) SetPeriod(v int64) *GetLoadBalancerMetricDataInput { + s.Period = &v + return s } -// GoString returns the string representation -func (s GetDiskOutput) GoString() string { - return s.String() +// SetStartTime sets the StartTime field's value. +func (s *GetLoadBalancerMetricDataInput) SetStartTime(v time.Time) *GetLoadBalancerMetricDataInput { + s.StartTime = &v + return s } -// SetDisk sets the Disk field's value. -func (s *GetDiskOutput) SetDisk(v *Disk) *GetDiskOutput { - s.Disk = v +// SetStatistics sets the Statistics field's value. 
+func (s *GetLoadBalancerMetricDataInput) SetStatistics(v []*string) *GetLoadBalancerMetricDataInput { + s.Statistics = v return s } -type GetDiskSnapshotInput struct { +// SetUnit sets the Unit field's value. +func (s *GetLoadBalancerMetricDataInput) SetUnit(v string) *GetLoadBalancerMetricDataInput { + s.Unit = &v + return s +} + +type GetLoadBalancerMetricDataOutput struct { _ struct{} `type:"structure"` - // The name of the disk snapshot (e.g., my-disk-snapshot). + // An array of metric datapoint objects. + MetricData []*MetricDatapoint `locationName:"metricData" type:"list"` + + // The metric about which you are receiving information. Valid values are listed + // below, along with the most useful statistics to include in your request. + // + // * ClientTLSNegotiationErrorCount - The number of TLS connections initiated + // by the client that did not establish a session with the load balancer. + // Possible causes include a mismatch of ciphers or protocols. + // + // Statistics: The most useful statistic is Sum. + // + // * HealthyHostCount - The number of target instances that are considered + // healthy. + // + // Statistics: The most useful statistic are Average, Minimum, and Maximum. + // + // * UnhealthyHostCount - The number of target instances that are considered + // unhealthy. + // + // Statistics: The most useful statistic are Average, Minimum, and Maximum. + // + // * HTTPCode_LB_4XX_Count - The number of HTTP 4XX client error codes that + // originate from the load balancer. Client errors are generated when requests + // are malformed or incomplete. These requests have not been received by + // the target instance. This count does not include any response codes generated + // by the target instances. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + // + // * HTTPCode_LB_5XX_Count - The number of HTTP 5XX server error codes that + // originate from the load balancer. This count does not include any response + // codes generated by the target instances. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. Note that Minimum, Maximum, and Average all + // return 1. + // + // * HTTPCode_Instance_2XX_Count - The number of HTTP response codes generated + // by the target instances. This does not include any response codes generated + // by the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + // + // * HTTPCode_Instance_3XX_Count - The number of HTTP response codes generated + // by the target instances. This does not include any response codes generated + // by the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. // - // DiskSnapshotName is a required field - DiskSnapshotName *string `locationName:"diskSnapshotName" type:"string" required:"true"` + // * HTTPCode_Instance_4XX_Count - The number of HTTP response codes generated + // by the target instances. This does not include any response codes generated + // by the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + // + // * HTTPCode_Instance_5XX_Count - The number of HTTP response codes generated + // by the target instances. This does not include any response codes generated + // by the load balancer. + // + // Statistics: The most useful statistic is Sum. 
Note that Minimum, Maximum, + // and Average all return 1. + // + // * InstanceResponseTime - The time elapsed, in seconds, after the request + // leaves the load balancer until a response from the target instance is + // received. + // + // Statistics: The most useful statistic is Average. + // + // * RejectedConnectionCount - The number of connections that were rejected + // because the load balancer had reached its maximum number of connections. + // + // Statistics: The most useful statistic is Sum. + // + // * RequestCount - The number of requests processed over IPv4. This count + // includes only the requests with a response generated by a target instance + // of the load balancer. + // + // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, + // and Average all return 1. + MetricName *string `locationName:"metricName" type:"string" enum:"LoadBalancerMetricName"` } // String returns the string representation -func (s GetDiskSnapshotInput) String() string { +func (s GetLoadBalancerMetricDataOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDiskSnapshotInput) GoString() string { +func (s GetLoadBalancerMetricDataOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetDiskSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetDiskSnapshotInput"} - if s.DiskSnapshotName == nil { - invalidParams.Add(request.NewErrParamRequired("DiskSnapshotName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetMetricData sets the MetricData field's value. +func (s *GetLoadBalancerMetricDataOutput) SetMetricData(v []*MetricDatapoint) *GetLoadBalancerMetricDataOutput { + s.MetricData = v + return s } -// SetDiskSnapshotName sets the DiskSnapshotName field's value. -func (s *GetDiskSnapshotInput) SetDiskSnapshotName(v string) *GetDiskSnapshotInput { - s.DiskSnapshotName = &v +// SetMetricName sets the MetricName field's value. +func (s *GetLoadBalancerMetricDataOutput) SetMetricName(v string) *GetLoadBalancerMetricDataOutput { + s.MetricName = &v return s } -type GetDiskSnapshotOutput struct { +type GetLoadBalancerOutput struct { _ struct{} `type:"structure"` - // An object containing information about the disk snapshot. - DiskSnapshot *DiskSnapshot `locationName:"diskSnapshot" type:"structure"` + // An object containing information about your load balancer. + LoadBalancer *LoadBalancer `locationName:"loadBalancer" type:"structure"` } // String returns the string representation -func (s GetDiskSnapshotOutput) String() string { +func (s GetLoadBalancerOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDiskSnapshotOutput) GoString() string { +func (s GetLoadBalancerOutput) GoString() string { return s.String() } -// SetDiskSnapshot sets the DiskSnapshot field's value. -func (s *GetDiskSnapshotOutput) SetDiskSnapshot(v *DiskSnapshot) *GetDiskSnapshotOutput { - s.DiskSnapshot = v +// SetLoadBalancer sets the LoadBalancer field's value. +func (s *GetLoadBalancerOutput) SetLoadBalancer(v *LoadBalancer) *GetLoadBalancerOutput { + s.LoadBalancer = v return s } -type GetDiskSnapshotsInput struct { +type GetLoadBalancerTlsCertificatesInput struct { _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your GetDiskSnapshots - // request. 
- PageToken *string `locationName:"pageToken" type:"string"` + // The name of the load balancer you associated with your SSL/TLS certificate. + // + // LoadBalancerName is a required field + LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` } // String returns the string representation -func (s GetDiskSnapshotsInput) String() string { +func (s GetLoadBalancerTlsCertificatesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDiskSnapshotsInput) GoString() string { +func (s GetLoadBalancerTlsCertificatesInput) GoString() string { return s.String() } -// SetPageToken sets the PageToken field's value. -func (s *GetDiskSnapshotsInput) SetPageToken(v string) *GetDiskSnapshotsInput { - s.PageToken = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLoadBalancerTlsCertificatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLoadBalancerTlsCertificatesInput"} + if s.LoadBalancerName == nil { + invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *GetLoadBalancerTlsCertificatesInput) SetLoadBalancerName(v string) *GetLoadBalancerTlsCertificatesInput { + s.LoadBalancerName = &v return s } -type GetDiskSnapshotsOutput struct { +type GetLoadBalancerTlsCertificatesOutput struct { _ struct{} `type:"structure"` - // An array of objects containing information about all block storage disk snapshots. - DiskSnapshots []*DiskSnapshot `locationName:"diskSnapshots" type:"list"` - - // A token used for advancing to the next page of results from your GetDiskSnapshots - // request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` + // An array of LoadBalancerTlsCertificate objects describing your SSL/TLS certificates. + TlsCertificates []*LoadBalancerTlsCertificate `locationName:"tlsCertificates" type:"list"` } // String returns the string representation -func (s GetDiskSnapshotsOutput) String() string { +func (s GetLoadBalancerTlsCertificatesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDiskSnapshotsOutput) GoString() string { +func (s GetLoadBalancerTlsCertificatesOutput) GoString() string { return s.String() } -// SetDiskSnapshots sets the DiskSnapshots field's value. -func (s *GetDiskSnapshotsOutput) SetDiskSnapshots(v []*DiskSnapshot) *GetDiskSnapshotsOutput { - s.DiskSnapshots = v - return s -} - -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetDiskSnapshotsOutput) SetNextPageToken(v string) *GetDiskSnapshotsOutput { - s.NextPageToken = &v +// SetTlsCertificates sets the TlsCertificates field's value. +func (s *GetLoadBalancerTlsCertificatesOutput) SetTlsCertificates(v []*LoadBalancerTlsCertificate) *GetLoadBalancerTlsCertificatesOutput { + s.TlsCertificates = v return s } -type GetDisksInput struct { +type GetLoadBalancersInput struct { _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your GetDisks - // request. + // A token used for paginating the results from your GetLoadBalancers request. 
PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s GetDisksInput) String() string { +func (s GetLoadBalancersInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDisksInput) GoString() string { +func (s GetLoadBalancersInput) GoString() string { return s.String() } // SetPageToken sets the PageToken field's value. -func (s *GetDisksInput) SetPageToken(v string) *GetDisksInput { +func (s *GetLoadBalancersInput) SetPageToken(v string) *GetLoadBalancersInput { s.PageToken = &v return s } -type GetDisksOutput struct { +type GetLoadBalancersOutput struct { _ struct{} `type:"structure"` - // An array of objects containing information about all block storage disks. - Disks []*Disk `locationName:"disks" type:"list"` + // An array of LoadBalancer objects describing your load balancers. + LoadBalancers []*LoadBalancer `locationName:"loadBalancers" type:"list"` - // A token used for advancing to the next page of results from your GetDisks + // A token used for advancing to the next page of results from your GetLoadBalancers // request. NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s GetDisksOutput) String() string { +func (s GetLoadBalancersOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDisksOutput) GoString() string { +func (s GetLoadBalancersOutput) GoString() string { return s.String() } -// SetDisks sets the Disks field's value. -func (s *GetDisksOutput) SetDisks(v []*Disk) *GetDisksOutput { - s.Disks = v +// SetLoadBalancers sets the LoadBalancers field's value. +func (s *GetLoadBalancersOutput) SetLoadBalancers(v []*LoadBalancer) *GetLoadBalancersOutput { + s.LoadBalancers = v return s } // SetNextPageToken sets the NextPageToken field's value. -func (s *GetDisksOutput) SetNextPageToken(v string) *GetDisksOutput { +func (s *GetLoadBalancersOutput) SetNextPageToken(v string) *GetLoadBalancersOutput { s.NextPageToken = &v return s } -type GetDomainInput struct { +type GetOperationInput struct { _ struct{} `type:"structure"` - // The domain name for which your want to return information about. + // A GUID used to identify the operation. // - // DomainName is a required field - DomainName *string `locationName:"domainName" type:"string" required:"true"` + // OperationId is a required field + OperationId *string `locationName:"operationId" type:"string" required:"true"` } // String returns the string representation -func (s GetDomainInput) String() string { +func (s GetOperationInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDomainInput) GoString() string { +func (s GetOperationInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetDomainInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetDomainInput"} - if s.DomainName == nil { - invalidParams.Add(request.NewErrParamRequired("DomainName")) +func (s *GetOperationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetOperationInput"} + if s.OperationId == nil { + invalidParams.Add(request.NewErrParamRequired("OperationId")) } if invalidParams.Len() > 0 { @@ -11122,469 +15445,401 @@ func (s *GetDomainInput) Validate() error { return nil } -// SetDomainName sets the DomainName field's value. 
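The `pageToken` / `nextPageToken` pair shown for `GetLoadBalancers` follows the same pagination pattern as the other list operations in this file. A minimal sketch of walking all pages; the region is a placeholder, and the `Name` field on the returned `LoadBalancer` objects is assumed rather than taken from the lines above:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := lightsail.New(sess)

	input := &lightsail.GetLoadBalancersInput{}
	for {
		out, err := svc.GetLoadBalancers(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, lb := range out.LoadBalancers {
			fmt.Println(aws.StringValue(lb.Name))
		}
		// An empty nextPageToken means there are no further pages.
		if aws.StringValue(out.NextPageToken) == "" {
			break
		}
		input.SetPageToken(aws.StringValue(out.NextPageToken))
	}
}
```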
-func (s *GetDomainInput) SetDomainName(v string) *GetDomainInput { - s.DomainName = &v +// SetOperationId sets the OperationId field's value. +func (s *GetOperationInput) SetOperationId(v string) *GetOperationInput { + s.OperationId = &v return s } -type GetDomainOutput struct { +type GetOperationOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about your get domain - // request. - Domain *Domain `locationName:"domain" type:"structure"` + // An array of key-value pairs containing information about the results of your + // get operation request. + Operation *Operation `locationName:"operation" type:"structure"` } // String returns the string representation -func (s GetDomainOutput) String() string { +func (s GetOperationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDomainOutput) GoString() string { +func (s GetOperationOutput) GoString() string { return s.String() } -// SetDomain sets the Domain field's value. -func (s *GetDomainOutput) SetDomain(v *Domain) *GetDomainOutput { - s.Domain = v +// SetOperation sets the Operation field's value. +func (s *GetOperationOutput) SetOperation(v *Operation) *GetOperationOutput { + s.Operation = v return s } -type GetDomainsInput struct { +type GetOperationsForResourceInput struct { _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your get domains - // request. + // A token used for advancing to the next page of results from your get operations + // for resource request. PageToken *string `locationName:"pageToken" type:"string"` + + // The name of the resource for which you are requesting information. + // + // ResourceName is a required field + ResourceName *string `locationName:"resourceName" type:"string" required:"true"` } // String returns the string representation -func (s GetDomainsInput) String() string { +func (s GetOperationsForResourceInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetDomainsInput) GoString() string { +func (s GetOperationsForResourceInput) GoString() string { return s.String() } -// SetPageToken sets the PageToken field's value. -func (s *GetDomainsInput) SetPageToken(v string) *GetDomainsInput { - s.PageToken = &v - return s -} - -type GetDomainsOutput struct { - _ struct{} `type:"structure"` - - // An array of key-value pairs containing information about each of the domain - // entries in the user's account. - Domains []*Domain `locationName:"domains" type:"list"` - - // A token used for advancing to the next page of results from your get active - // names request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` -} - -// String returns the string representation -func (s GetDomainsOutput) String() string { - return awsutil.Prettify(s) -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetOperationsForResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetOperationsForResourceInput"} + if s.ResourceName == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceName")) + } -// GoString returns the string representation -func (s GetDomainsOutput) GoString() string { - return s.String() + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetDomains sets the Domains field's value. 
-func (s *GetDomainsOutput) SetDomains(v []*Domain) *GetDomainsOutput { - s.Domains = v +// SetPageToken sets the PageToken field's value. +func (s *GetOperationsForResourceInput) SetPageToken(v string) *GetOperationsForResourceInput { + s.PageToken = &v return s } -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetDomainsOutput) SetNextPageToken(v string) *GetDomainsOutput { - s.NextPageToken = &v +// SetResourceName sets the ResourceName field's value. +func (s *GetOperationsForResourceInput) SetResourceName(v string) *GetOperationsForResourceInput { + s.ResourceName = &v return s } -type GetInstanceAccessDetailsInput struct { +type GetOperationsForResourceOutput struct { _ struct{} `type:"structure"` - // The name of the instance to access. + // (Deprecated) Returns the number of pages of results that remain. // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // In releases prior to June 12, 2017, this parameter returned null by the API. + // It is now deprecated, and the API returns the next page token parameter instead. + // + // Deprecated: NextPageCount has been deprecated + NextPageCount *string `locationName:"nextPageCount" deprecated:"true" type:"string"` - // The protocol to use to connect to your instance. Defaults to ssh. - Protocol *string `locationName:"protocol" type:"string" enum:"InstanceAccessProtocol"` + // An identifier that was returned from the previous call to this operation, + // which can be used to return the next set of items in the list. + NextPageToken *string `locationName:"nextPageToken" type:"string"` + + // An array of key-value pairs containing information about the results of your + // get operations for resource request. + Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s GetInstanceAccessDetailsInput) String() string { +func (s GetOperationsForResourceOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceAccessDetailsInput) GoString() string { +func (s GetOperationsForResourceOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetInstanceAccessDetailsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetInstanceAccessDetailsInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetNextPageCount sets the NextPageCount field's value. +func (s *GetOperationsForResourceOutput) SetNextPageCount(v string) *GetOperationsForResourceOutput { + s.NextPageCount = &v + return s } -// SetInstanceName sets the InstanceName field's value. -func (s *GetInstanceAccessDetailsInput) SetInstanceName(v string) *GetInstanceAccessDetailsInput { - s.InstanceName = &v +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetOperationsForResourceOutput) SetNextPageToken(v string) *GetOperationsForResourceOutput { + s.NextPageToken = &v return s } -// SetProtocol sets the Protocol field's value. -func (s *GetInstanceAccessDetailsInput) SetProtocol(v string) *GetInstanceAccessDetailsInput { - s.Protocol = &v +// SetOperations sets the Operations field's value. 
+func (s *GetOperationsForResourceOutput) SetOperations(v []*Operation) *GetOperationsForResourceOutput { + s.Operations = v return s } -type GetInstanceAccessDetailsOutput struct { +type GetOperationsInput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about a get instance access + // A token used for advancing to the next page of results from your get operations // request. - AccessDetails *InstanceAccessDetails `locationName:"accessDetails" type:"structure"` + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s GetInstanceAccessDetailsOutput) String() string { +func (s GetOperationsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceAccessDetailsOutput) GoString() string { +func (s GetOperationsInput) GoString() string { return s.String() } -// SetAccessDetails sets the AccessDetails field's value. -func (s *GetInstanceAccessDetailsOutput) SetAccessDetails(v *InstanceAccessDetails) *GetInstanceAccessDetailsOutput { - s.AccessDetails = v +// SetPageToken sets the PageToken field's value. +func (s *GetOperationsInput) SetPageToken(v string) *GetOperationsInput { + s.PageToken = &v return s } -type GetInstanceInput struct { +type GetOperationsOutput struct { _ struct{} `type:"structure"` - // The name of the instance. - // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // A token used for advancing to the next page of results from your get operations + // request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` + + // An array of key-value pairs containing information about the results of your + // get operations request. + Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s GetInstanceInput) String() string { +func (s GetOperationsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceInput) GoString() string { +func (s GetOperationsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetInstanceInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetOperationsOutput) SetNextPageToken(v string) *GetOperationsOutput { + s.NextPageToken = &v + return s } -// SetInstanceName sets the InstanceName field's value. -func (s *GetInstanceInput) SetInstanceName(v string) *GetInstanceInput { - s.InstanceName = &v +// SetOperations sets the Operations field's value. +func (s *GetOperationsOutput) SetOperations(v []*Operation) *GetOperationsOutput { + s.Operations = v return s } -type GetInstanceMetricDataInput struct { +type GetRegionsInput struct { _ struct{} `type:"structure"` - // The end time of the time period. - // - // EndTime is a required field - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"unix" required:"true"` - - // The name of the instance for which you want to get metrics data. 
- // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` - - // The metric name to get data about. - // - // MetricName is a required field - MetricName *string `locationName:"metricName" type:"string" required:"true" enum:"InstanceMetricName"` - - // The time period for which you are requesting data. - // - // Period is a required field - Period *int64 `locationName:"period" min:"60" type:"integer" required:"true"` - - // The start time of the time period. - // - // StartTime is a required field - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unix" required:"true"` - - // The instance statistics. - // - // Statistics is a required field - Statistics []*string `locationName:"statistics" type:"list" required:"true"` + // A Boolean value indicating whether to also include Availability Zones in + // your get regions request. Availability Zones are indicated with a letter: + // e.g., us-east-2a. + IncludeAvailabilityZones *bool `locationName:"includeAvailabilityZones" type:"boolean"` - // The unit. The list of valid values is below. - // - // Unit is a required field - Unit *string `locationName:"unit" type:"string" required:"true" enum:"MetricUnit"` + // >A Boolean value indicating whether to also include Availability Zones for + // databases in your get regions request. Availability Zones are indicated with + // a letter (e.g., us-east-2a). + IncludeRelationalDatabaseAvailabilityZones *bool `locationName:"includeRelationalDatabaseAvailabilityZones" type:"boolean"` } // String returns the string representation -func (s GetInstanceMetricDataInput) String() string { +func (s GetRegionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceMetricDataInput) GoString() string { +func (s GetRegionsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetInstanceMetricDataInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetInstanceMetricDataInput"} - if s.EndTime == nil { - invalidParams.Add(request.NewErrParamRequired("EndTime")) - } - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - if s.MetricName == nil { - invalidParams.Add(request.NewErrParamRequired("MetricName")) - } - if s.Period == nil { - invalidParams.Add(request.NewErrParamRequired("Period")) - } - if s.Period != nil && *s.Period < 60 { - invalidParams.Add(request.NewErrParamMinValue("Period", 60)) - } - if s.StartTime == nil { - invalidParams.Add(request.NewErrParamRequired("StartTime")) - } - if s.Statistics == nil { - invalidParams.Add(request.NewErrParamRequired("Statistics")) - } - if s.Unit == nil { - invalidParams.Add(request.NewErrParamRequired("Unit")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetEndTime sets the EndTime field's value. -func (s *GetInstanceMetricDataInput) SetEndTime(v time.Time) *GetInstanceMetricDataInput { - s.EndTime = &v +// SetIncludeAvailabilityZones sets the IncludeAvailabilityZones field's value. +func (s *GetRegionsInput) SetIncludeAvailabilityZones(v bool) *GetRegionsInput { + s.IncludeAvailabilityZones = &v return s } -// SetInstanceName sets the InstanceName field's value. 
-func (s *GetInstanceMetricDataInput) SetInstanceName(v string) *GetInstanceMetricDataInput { - s.InstanceName = &v +// SetIncludeRelationalDatabaseAvailabilityZones sets the IncludeRelationalDatabaseAvailabilityZones field's value. +func (s *GetRegionsInput) SetIncludeRelationalDatabaseAvailabilityZones(v bool) *GetRegionsInput { + s.IncludeRelationalDatabaseAvailabilityZones = &v return s } -// SetMetricName sets the MetricName field's value. -func (s *GetInstanceMetricDataInput) SetMetricName(v string) *GetInstanceMetricDataInput { - s.MetricName = &v - return s -} +type GetRegionsOutput struct { + _ struct{} `type:"structure"` -// SetPeriod sets the Period field's value. -func (s *GetInstanceMetricDataInput) SetPeriod(v int64) *GetInstanceMetricDataInput { - s.Period = &v - return s + // An array of key-value pairs containing information about your get regions + // request. + Regions []*Region `locationName:"regions" type:"list"` } -// SetStartTime sets the StartTime field's value. -func (s *GetInstanceMetricDataInput) SetStartTime(v time.Time) *GetInstanceMetricDataInput { - s.StartTime = &v - return s +// String returns the string representation +func (s GetRegionsOutput) String() string { + return awsutil.Prettify(s) } -// SetStatistics sets the Statistics field's value. -func (s *GetInstanceMetricDataInput) SetStatistics(v []*string) *GetInstanceMetricDataInput { - s.Statistics = v - return s +// GoString returns the string representation +func (s GetRegionsOutput) GoString() string { + return s.String() } -// SetUnit sets the Unit field's value. -func (s *GetInstanceMetricDataInput) SetUnit(v string) *GetInstanceMetricDataInput { - s.Unit = &v +// SetRegions sets the Regions field's value. +func (s *GetRegionsOutput) SetRegions(v []*Region) *GetRegionsOutput { + s.Regions = v return s } -type GetInstanceMetricDataOutput struct { +type GetRelationalDatabaseBlueprintsInput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the results of your - // get instance metric data request. - MetricData []*MetricDatapoint `locationName:"metricData" type:"list"` - - // The metric name to return data for. - MetricName *string `locationName:"metricName" type:"string" enum:"InstanceMetricName"` + // A token used for advancing to a specific page of results for your get relational + // database blueprints request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s GetInstanceMetricDataOutput) String() string { +func (s GetRelationalDatabaseBlueprintsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceMetricDataOutput) GoString() string { +func (s GetRelationalDatabaseBlueprintsInput) GoString() string { return s.String() } -// SetMetricData sets the MetricData field's value. -func (s *GetInstanceMetricDataOutput) SetMetricData(v []*MetricDatapoint) *GetInstanceMetricDataOutput { - s.MetricData = v - return s -} - -// SetMetricName sets the MetricName field's value. -func (s *GetInstanceMetricDataOutput) SetMetricName(v string) *GetInstanceMetricDataOutput { - s.MetricName = &v +// SetPageToken sets the PageToken field's value. 
+func (s *GetRelationalDatabaseBlueprintsInput) SetPageToken(v string) *GetRelationalDatabaseBlueprintsInput { + s.PageToken = &v return s } -type GetInstanceOutput struct { +type GetRelationalDatabaseBlueprintsOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the specified instance. - Instance *Instance `locationName:"instance" type:"structure"` + // An object describing the result of your get relational database blueprints + // request. + Blueprints []*RelationalDatabaseBlueprint `locationName:"blueprints" type:"list"` + + // A token used for advancing to the next page of results of your get relational + // database blueprints request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s GetInstanceOutput) String() string { +func (s GetRelationalDatabaseBlueprintsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceOutput) GoString() string { +func (s GetRelationalDatabaseBlueprintsOutput) GoString() string { return s.String() } -// SetInstance sets the Instance field's value. -func (s *GetInstanceOutput) SetInstance(v *Instance) *GetInstanceOutput { - s.Instance = v +// SetBlueprints sets the Blueprints field's value. +func (s *GetRelationalDatabaseBlueprintsOutput) SetBlueprints(v []*RelationalDatabaseBlueprint) *GetRelationalDatabaseBlueprintsOutput { + s.Blueprints = v return s } -type GetInstancePortStatesInput struct { +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetRelationalDatabaseBlueprintsOutput) SetNextPageToken(v string) *GetRelationalDatabaseBlueprintsOutput { + s.NextPageToken = &v + return s +} + +type GetRelationalDatabaseBundlesInput struct { _ struct{} `type:"structure"` - // The name of the instance. - // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // A token used for advancing to a specific page of results for your get relational + // database bundles request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s GetInstancePortStatesInput) String() string { +func (s GetRelationalDatabaseBundlesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstancePortStatesInput) GoString() string { +func (s GetRelationalDatabaseBundlesInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetInstancePortStatesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetInstancePortStatesInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetInstanceName sets the InstanceName field's value. -func (s *GetInstancePortStatesInput) SetInstanceName(v string) *GetInstancePortStatesInput { - s.InstanceName = &v +// SetPageToken sets the PageToken field's value. +func (s *GetRelationalDatabaseBundlesInput) SetPageToken(v string) *GetRelationalDatabaseBundlesInput { + s.PageToken = &v return s } -type GetInstancePortStatesOutput struct { +type GetRelationalDatabaseBundlesOutput struct { _ struct{} `type:"structure"` - // Information about the port states resulting from your request. 
- PortStates []*InstancePortState `locationName:"portStates" type:"list"` + // An object describing the result of your get relational database bundles request. + Bundles []*RelationalDatabaseBundle `locationName:"bundles" type:"list"` + + // A token used for advancing to the next page of results of your get relational + // database bundles request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` } // String returns the string representation -func (s GetInstancePortStatesOutput) String() string { +func (s GetRelationalDatabaseBundlesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstancePortStatesOutput) GoString() string { +func (s GetRelationalDatabaseBundlesOutput) GoString() string { return s.String() } -// SetPortStates sets the PortStates field's value. -func (s *GetInstancePortStatesOutput) SetPortStates(v []*InstancePortState) *GetInstancePortStatesOutput { - s.PortStates = v +// SetBundles sets the Bundles field's value. +func (s *GetRelationalDatabaseBundlesOutput) SetBundles(v []*RelationalDatabaseBundle) *GetRelationalDatabaseBundlesOutput { + s.Bundles = v return s } -type GetInstanceSnapshotInput struct { +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetRelationalDatabaseBundlesOutput) SetNextPageToken(v string) *GetRelationalDatabaseBundlesOutput { + s.NextPageToken = &v + return s +} + +type GetRelationalDatabaseEventsInput struct { _ struct{} `type:"structure"` - // The name of the snapshot for which you are requesting information. + // The number of minutes in the past from which to retrieve events. For example, + // to get all events from the past 2 hours, enter 120. // - // InstanceSnapshotName is a required field - InstanceSnapshotName *string `locationName:"instanceSnapshotName" type:"string" required:"true"` + // Default: 60 + // + // The minimum is 1 and the maximum is 14 days (20160 minutes). + DurationInMinutes *int64 `locationName:"durationInMinutes" type:"integer"` + + // A token used for advancing to a specific page of results from for get relational + // database events request. + PageToken *string `locationName:"pageToken" type:"string"` + + // The name of the database from which to get events. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` } // String returns the string representation -func (s GetInstanceSnapshotInput) String() string { +func (s GetRelationalDatabaseEventsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceSnapshotInput) GoString() string { +func (s GetRelationalDatabaseEventsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *GetInstanceSnapshotInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetInstanceSnapshotInput"} - if s.InstanceSnapshotName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceSnapshotName")) +func (s *GetRelationalDatabaseEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRelationalDatabaseEventsInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -11593,118 +15848,165 @@ func (s *GetInstanceSnapshotInput) Validate() error { return nil } -// SetInstanceSnapshotName sets the InstanceSnapshotName field's value. -func (s *GetInstanceSnapshotInput) SetInstanceSnapshotName(v string) *GetInstanceSnapshotInput { - s.InstanceSnapshotName = &v +// SetDurationInMinutes sets the DurationInMinutes field's value. +func (s *GetRelationalDatabaseEventsInput) SetDurationInMinutes(v int64) *GetRelationalDatabaseEventsInput { + s.DurationInMinutes = &v return s } -type GetInstanceSnapshotOutput struct { - _ struct{} `type:"structure"` - - // An array of key-value pairs containing information about the results of your - // get instance snapshot request. - InstanceSnapshot *InstanceSnapshot `locationName:"instanceSnapshot" type:"structure"` -} - -// String returns the string representation -func (s GetInstanceSnapshotOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s GetInstanceSnapshotOutput) GoString() string { - return s.String() +// SetPageToken sets the PageToken field's value. +func (s *GetRelationalDatabaseEventsInput) SetPageToken(v string) *GetRelationalDatabaseEventsInput { + s.PageToken = &v + return s } -// SetInstanceSnapshot sets the InstanceSnapshot field's value. -func (s *GetInstanceSnapshotOutput) SetInstanceSnapshot(v *InstanceSnapshot) *GetInstanceSnapshotOutput { - s.InstanceSnapshot = v +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *GetRelationalDatabaseEventsInput) SetRelationalDatabaseName(v string) *GetRelationalDatabaseEventsInput { + s.RelationalDatabaseName = &v return s } -type GetInstanceSnapshotsInput struct { +type GetRelationalDatabaseEventsOutput struct { _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your get instance - // snapshots request. - PageToken *string `locationName:"pageToken" type:"string"` + // A token used for advancing to the next page of results from your get relational + // database events request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` + + // An object describing the result of your get relational database events request. + RelationalDatabaseEvents []*RelationalDatabaseEvent `locationName:"relationalDatabaseEvents" type:"list"` } // String returns the string representation -func (s GetInstanceSnapshotsInput) String() string { +func (s GetRelationalDatabaseEventsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceSnapshotsInput) GoString() string { +func (s GetRelationalDatabaseEventsOutput) GoString() string { return s.String() } -// SetPageToken sets the PageToken field's value. -func (s *GetInstanceSnapshotsInput) SetPageToken(v string) *GetInstanceSnapshotsInput { - s.PageToken = &v +// SetNextPageToken sets the NextPageToken field's value. 
+func (s *GetRelationalDatabaseEventsOutput) SetNextPageToken(v string) *GetRelationalDatabaseEventsOutput { + s.NextPageToken = &v return s } -type GetInstanceSnapshotsOutput struct { - _ struct{} `type:"structure"` +// SetRelationalDatabaseEvents sets the RelationalDatabaseEvents field's value. +func (s *GetRelationalDatabaseEventsOutput) SetRelationalDatabaseEvents(v []*RelationalDatabaseEvent) *GetRelationalDatabaseEventsOutput { + s.RelationalDatabaseEvents = v + return s +} - // An array of key-value pairs containing information about the results of your - // get instance snapshots request. - InstanceSnapshots []*InstanceSnapshot `locationName:"instanceSnapshots" type:"list"` +type GetRelationalDatabaseInput struct { + _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your get instance - // snapshots request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` + // The name of the database that you are looking up. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` } // String returns the string representation -func (s GetInstanceSnapshotsOutput) String() string { +func (s GetRelationalDatabaseInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceSnapshotsOutput) GoString() string { +func (s GetRelationalDatabaseInput) GoString() string { return s.String() } -// SetInstanceSnapshots sets the InstanceSnapshots field's value. -func (s *GetInstanceSnapshotsOutput) SetInstanceSnapshots(v []*InstanceSnapshot) *GetInstanceSnapshotsOutput { - s.InstanceSnapshots = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetRelationalDatabaseInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRelationalDatabaseInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetInstanceSnapshotsOutput) SetNextPageToken(v string) *GetInstanceSnapshotsOutput { - s.NextPageToken = &v +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *GetRelationalDatabaseInput) SetRelationalDatabaseName(v string) *GetRelationalDatabaseInput { + s.RelationalDatabaseName = &v return s } -type GetInstanceStateInput struct { +type GetRelationalDatabaseLogEventsInput struct { _ struct{} `type:"structure"` - // The name of the instance to get state information about. + // The end of the time interval from which to get log events. // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // Constraints: + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Specified in the Unix time format. + // + // For example, if you wish to use an end time of October 1, 2018, at 8 PM UTC, + // then you input 1538424000 as the end time. + EndTime *time.Time `locationName:"endTime" type:"timestamp"` + + // The name of the log stream. + // + // Use the get relational database log streams operation to get a list of available + // log streams. 
+ // + // LogStreamName is a required field + LogStreamName *string `locationName:"logStreamName" type:"string" required:"true"` + + // A token used for advancing to a specific page of results for your get relational + // database log events request. + PageToken *string `locationName:"pageToken" type:"string"` + + // The name of your database for which to get log events. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` + + // Parameter to specify if the log should start from head or tail. If true is + // specified, the log event starts from the head of the log. If false is specified, + // the log event starts from the tail of the log. + // + // Default: false + StartFromHead *bool `locationName:"startFromHead" type:"boolean"` + + // The start of the time interval from which to get log events. + // + // Constraints: + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Specified in the Unix time format. + // + // For example, if you wish to use a start time of October 1, 2018, at 8 PM + // UTC, then you input 1538424000 as the start time. + StartTime *time.Time `locationName:"startTime" type:"timestamp"` } // String returns the string representation -func (s GetInstanceStateInput) String() string { +func (s GetRelationalDatabaseLogEventsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceStateInput) GoString() string { +func (s GetRelationalDatabaseLogEventsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetInstanceStateInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetInstanceStateInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) +func (s *GetRelationalDatabaseLogEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRelationalDatabaseLogEventsInput"} + if s.LogStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("LogStreamName")) + } + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -11713,116 +16015,182 @@ func (s *GetInstanceStateInput) Validate() error { return nil } -// SetInstanceName sets the InstanceName field's value. -func (s *GetInstanceStateInput) SetInstanceName(v string) *GetInstanceStateInput { - s.InstanceName = &v +// SetEndTime sets the EndTime field's value. +func (s *GetRelationalDatabaseLogEventsInput) SetEndTime(v time.Time) *GetRelationalDatabaseLogEventsInput { + s.EndTime = &v return s } -type GetInstanceStateOutput struct { +// SetLogStreamName sets the LogStreamName field's value. +func (s *GetRelationalDatabaseLogEventsInput) SetLogStreamName(v string) *GetRelationalDatabaseLogEventsInput { + s.LogStreamName = &v + return s +} + +// SetPageToken sets the PageToken field's value. +func (s *GetRelationalDatabaseLogEventsInput) SetPageToken(v string) *GetRelationalDatabaseLogEventsInput { + s.PageToken = &v + return s +} + +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *GetRelationalDatabaseLogEventsInput) SetRelationalDatabaseName(v string) *GetRelationalDatabaseLogEventsInput { + s.RelationalDatabaseName = &v + return s +} + +// SetStartFromHead sets the StartFromHead field's value. 
+func (s *GetRelationalDatabaseLogEventsInput) SetStartFromHead(v bool) *GetRelationalDatabaseLogEventsInput { + s.StartFromHead = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *GetRelationalDatabaseLogEventsInput) SetStartTime(v time.Time) *GetRelationalDatabaseLogEventsInput { + s.StartTime = &v + return s +} + +type GetRelationalDatabaseLogEventsOutput struct { _ struct{} `type:"structure"` - // The state of the instance. - State *InstanceState `locationName:"state" type:"structure"` + // A token used for advancing to the previous page of results from your get + // relational database log events request. + NextBackwardToken *string `locationName:"nextBackwardToken" type:"string"` + + // A token used for advancing to the next page of results from your get relational + // database log events request. + NextForwardToken *string `locationName:"nextForwardToken" type:"string"` + + // An object describing the result of your get relational database log events + // request. + ResourceLogEvents []*LogEvent `locationName:"resourceLogEvents" type:"list"` } // String returns the string representation -func (s GetInstanceStateOutput) String() string { +func (s GetRelationalDatabaseLogEventsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstanceStateOutput) GoString() string { +func (s GetRelationalDatabaseLogEventsOutput) GoString() string { return s.String() } -// SetState sets the State field's value. -func (s *GetInstanceStateOutput) SetState(v *InstanceState) *GetInstanceStateOutput { - s.State = v +// SetNextBackwardToken sets the NextBackwardToken field's value. +func (s *GetRelationalDatabaseLogEventsOutput) SetNextBackwardToken(v string) *GetRelationalDatabaseLogEventsOutput { + s.NextBackwardToken = &v return s } -type GetInstancesInput struct { +// SetNextForwardToken sets the NextForwardToken field's value. +func (s *GetRelationalDatabaseLogEventsOutput) SetNextForwardToken(v string) *GetRelationalDatabaseLogEventsOutput { + s.NextForwardToken = &v + return s +} + +// SetResourceLogEvents sets the ResourceLogEvents field's value. +func (s *GetRelationalDatabaseLogEventsOutput) SetResourceLogEvents(v []*LogEvent) *GetRelationalDatabaseLogEventsOutput { + s.ResourceLogEvents = v + return s +} + +type GetRelationalDatabaseLogStreamsInput struct { _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your get instances - // request. - PageToken *string `locationName:"pageToken" type:"string"` + // The name of your database for which to get log streams. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` } // String returns the string representation -func (s GetInstancesInput) String() string { +func (s GetRelationalDatabaseLogStreamsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstancesInput) GoString() string { +func (s GetRelationalDatabaseLogStreamsInput) GoString() string { return s.String() } -// SetPageToken sets the PageToken field's value. -func (s *GetInstancesInput) SetPageToken(v string) *GetInstancesInput { - s.PageToken = &v +// Validate inspects the fields of the type to determine if they are valid. 
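A minimal sketch tying the log-event fields above together: it looks up the available log streams (the operation the `logStreamName` documentation points to), then reads the last hour of events from the first stream with `startFromHead` set so the oldest events come first. The database name and region are placeholders, and the `CreatedAt` and `Message` fields on `LogEvent` are assumed rather than shown in this hunk:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := lightsail.New(sess)

	// Discover the log streams exposed by the database engine.
	streams, err := svc.GetRelationalDatabaseLogStreams(&lightsail.GetRelationalDatabaseLogStreamsInput{
		RelationalDatabaseName: aws.String("example-db"), // placeholder name
	})
	if err != nil || len(streams.LogStreams) == 0 {
		log.Fatalf("no log streams available: %v", err)
	}

	events, err := svc.GetRelationalDatabaseLogEvents(&lightsail.GetRelationalDatabaseLogEventsInput{
		RelationalDatabaseName: aws.String("example-db"),
		LogStreamName:          streams.LogStreams[0],
		StartTime:              aws.Time(time.Now().Add(-1 * time.Hour)),
		StartFromHead:          aws.Bool(true), // oldest events first
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range events.ResourceLogEvents {
		fmt.Println(aws.TimeValue(e.CreatedAt), aws.StringValue(e.Message))
	}
}
```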
+func (s *GetRelationalDatabaseLogStreamsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRelationalDatabaseLogStreamsInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *GetRelationalDatabaseLogStreamsInput) SetRelationalDatabaseName(v string) *GetRelationalDatabaseLogStreamsInput { + s.RelationalDatabaseName = &v return s } -type GetInstancesOutput struct { +type GetRelationalDatabaseLogStreamsOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about your instances. - Instances []*Instance `locationName:"instances" type:"list"` - - // A token used for advancing to the next page of results from your get instances + // An object describing the result of your get relational database log streams // request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` + LogStreams []*string `locationName:"logStreams" type:"list"` } // String returns the string representation -func (s GetInstancesOutput) String() string { +func (s GetRelationalDatabaseLogStreamsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetInstancesOutput) GoString() string { +func (s GetRelationalDatabaseLogStreamsOutput) GoString() string { return s.String() } -// SetInstances sets the Instances field's value. -func (s *GetInstancesOutput) SetInstances(v []*Instance) *GetInstancesOutput { - s.Instances = v - return s -} - -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetInstancesOutput) SetNextPageToken(v string) *GetInstancesOutput { - s.NextPageToken = &v +// SetLogStreams sets the LogStreams field's value. +func (s *GetRelationalDatabaseLogStreamsOutput) SetLogStreams(v []*string) *GetRelationalDatabaseLogStreamsOutput { + s.LogStreams = v return s } -type GetKeyPairInput struct { +type GetRelationalDatabaseMasterUserPasswordInput struct { _ struct{} `type:"structure"` - // The name of the key pair for which you are requesting information. + // The password version to return. // - // KeyPairName is a required field - KeyPairName *string `locationName:"keyPairName" type:"string" required:"true"` + // Specifying CURRENT or PREVIOUS returns the current or previous passwords + // respectively. Specifying PENDING returns the newest version of the password + // that will rotate to CURRENT. After the PENDING password rotates to CURRENT, + // the PENDING password is no longer available. + // + // Default: CURRENT + PasswordVersion *string `locationName:"passwordVersion" type:"string" enum:"RelationalDatabasePasswordVersion"` + + // The name of your database for which to get the master user password. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` } // String returns the string representation -func (s GetKeyPairInput) String() string { +func (s GetRelationalDatabaseMasterUserPasswordInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetKeyPairInput) GoString() string { +func (s GetRelationalDatabaseMasterUserPasswordInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
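The `passwordVersion` semantics above (CURRENT, PREVIOUS, PENDING) can be exercised with a small loop. A minimal sketch that fetches the current and pending versions and prints only the creation timestamp from the output struct defined just below, keeping the password itself out of the logs; the database name and region are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := lightsail.New(sess)

	for _, version := range []string{"CURRENT", "PENDING"} {
		out, err := svc.GetRelationalDatabaseMasterUserPassword(&lightsail.GetRelationalDatabaseMasterUserPasswordInput{
			RelationalDatabaseName: aws.String("example-db"), // placeholder name
			PasswordVersion:        aws.String(version),
		})
		if err != nil {
			// A PENDING version only exists while a password rotation is in flight.
			log.Printf("%s: %v", version, err)
			continue
		}
		fmt.Printf("%s password created at %s\n", version, aws.TimeValue(out.CreatedAt))
	}
}
```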
-func (s *GetKeyPairInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetKeyPairInput"} - if s.KeyPairName == nil { - invalidParams.Add(request.NewErrParamRequired("KeyPairName")) +func (s *GetRelationalDatabaseMasterUserPasswordInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRelationalDatabaseMasterUserPasswordInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -11831,304 +16199,278 @@ func (s *GetKeyPairInput) Validate() error { return nil } -// SetKeyPairName sets the KeyPairName field's value. -func (s *GetKeyPairInput) SetKeyPairName(v string) *GetKeyPairInput { - s.KeyPairName = &v +// SetPasswordVersion sets the PasswordVersion field's value. +func (s *GetRelationalDatabaseMasterUserPasswordInput) SetPasswordVersion(v string) *GetRelationalDatabaseMasterUserPasswordInput { + s.PasswordVersion = &v return s } -type GetKeyPairOutput struct { +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *GetRelationalDatabaseMasterUserPasswordInput) SetRelationalDatabaseName(v string) *GetRelationalDatabaseMasterUserPasswordInput { + s.RelationalDatabaseName = &v + return s +} + +type GetRelationalDatabaseMasterUserPasswordOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the key pair. - KeyPair *KeyPair `locationName:"keyPair" type:"structure"` + // The timestamp when the specified version of the master user password was + // created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // The master user password for the password version specified. + MasterUserPassword *string `locationName:"masterUserPassword" type:"string"` } // String returns the string representation -func (s GetKeyPairOutput) String() string { +func (s GetRelationalDatabaseMasterUserPasswordOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetKeyPairOutput) GoString() string { +func (s GetRelationalDatabaseMasterUserPasswordOutput) GoString() string { return s.String() } -// SetKeyPair sets the KeyPair field's value. -func (s *GetKeyPairOutput) SetKeyPair(v *KeyPair) *GetKeyPairOutput { - s.KeyPair = v +// SetCreatedAt sets the CreatedAt field's value. +func (s *GetRelationalDatabaseMasterUserPasswordOutput) SetCreatedAt(v time.Time) *GetRelationalDatabaseMasterUserPasswordOutput { + s.CreatedAt = &v return s } -type GetKeyPairsInput struct { +// SetMasterUserPassword sets the MasterUserPassword field's value. +func (s *GetRelationalDatabaseMasterUserPasswordOutput) SetMasterUserPassword(v string) *GetRelationalDatabaseMasterUserPasswordOutput { + s.MasterUserPassword = &v + return s +} + +type GetRelationalDatabaseMetricDataInput struct { _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your get key - // pairs request. - PageToken *string `locationName:"pageToken" type:"string"` + // The end of the time interval from which to get metric data. + // + // Constraints: + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Specified in the Unix time format. + // + // For example, if you wish to use an end time of October 1, 2018, at 8 PM UTC, + // then you input 1538424000 as the end time. 
+ // + // EndTime is a required field + EndTime *time.Time `locationName:"endTime" type:"timestamp" required:"true"` + + // The name of the metric data to return. + // + // MetricName is a required field + MetricName *string `locationName:"metricName" type:"string" required:"true" enum:"RelationalDatabaseMetricName"` + + // The granularity, in seconds, of the returned data points. + // + // Period is a required field + Period *int64 `locationName:"period" min:"60" type:"integer" required:"true"` + + // The name of your database from which to get metric data. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` + + // The start of the time interval from which to get metric data. + // + // Constraints: + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Specified in the Unix time format. + // + // For example, if you wish to use a start time of October 1, 2018, at 8 PM + // UTC, then you input 1538424000 as the start time. + // + // StartTime is a required field + StartTime *time.Time `locationName:"startTime" type:"timestamp" required:"true"` + + // The array of statistics for your metric data request. + // + // Statistics is a required field + Statistics []*string `locationName:"statistics" type:"list" required:"true"` + + // The unit for the metric data request. + // + // Unit is a required field + Unit *string `locationName:"unit" type:"string" required:"true" enum:"MetricUnit"` } // String returns the string representation -func (s GetKeyPairsInput) String() string { +func (s GetRelationalDatabaseMetricDataInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetKeyPairsInput) GoString() string { +func (s GetRelationalDatabaseMetricDataInput) GoString() string { return s.String() } -// SetPageToken sets the PageToken field's value. -func (s *GetKeyPairsInput) SetPageToken(v string) *GetKeyPairsInput { - s.PageToken = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetRelationalDatabaseMetricDataInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRelationalDatabaseMetricDataInput"} + if s.EndTime == nil { + invalidParams.Add(request.NewErrParamRequired("EndTime")) + } + if s.MetricName == nil { + invalidParams.Add(request.NewErrParamRequired("MetricName")) + } + if s.Period == nil { + invalidParams.Add(request.NewErrParamRequired("Period")) + } + if s.Period != nil && *s.Period < 60 { + invalidParams.Add(request.NewErrParamMinValue("Period", 60)) + } + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) + } + if s.StartTime == nil { + invalidParams.Add(request.NewErrParamRequired("StartTime")) + } + if s.Statistics == nil { + invalidParams.Add(request.NewErrParamRequired("Statistics")) + } + if s.Unit == nil { + invalidParams.Add(request.NewErrParamRequired("Unit")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -type GetKeyPairsOutput struct { - _ struct{} `type:"structure"` +// SetEndTime sets the EndTime field's value. +func (s *GetRelationalDatabaseMetricDataInput) SetEndTime(v time.Time) *GetRelationalDatabaseMetricDataInput { + s.EndTime = &v + return s +} - // An array of key-value pairs containing information about the key pairs. 
- KeyPairs []*KeyPair `locationName:"keyPairs" type:"list"` +// SetMetricName sets the MetricName field's value. +func (s *GetRelationalDatabaseMetricDataInput) SetMetricName(v string) *GetRelationalDatabaseMetricDataInput { + s.MetricName = &v + return s +} - // A token used for advancing to the next page of results from your get key - // pairs request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` +// SetPeriod sets the Period field's value. +func (s *GetRelationalDatabaseMetricDataInput) SetPeriod(v int64) *GetRelationalDatabaseMetricDataInput { + s.Period = &v + return s } -// String returns the string representation -func (s GetKeyPairsOutput) String() string { - return awsutil.Prettify(s) +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *GetRelationalDatabaseMetricDataInput) SetRelationalDatabaseName(v string) *GetRelationalDatabaseMetricDataInput { + s.RelationalDatabaseName = &v + return s } -// GoString returns the string representation -func (s GetKeyPairsOutput) GoString() string { - return s.String() +// SetStartTime sets the StartTime field's value. +func (s *GetRelationalDatabaseMetricDataInput) SetStartTime(v time.Time) *GetRelationalDatabaseMetricDataInput { + s.StartTime = &v + return s } -// SetKeyPairs sets the KeyPairs field's value. -func (s *GetKeyPairsOutput) SetKeyPairs(v []*KeyPair) *GetKeyPairsOutput { - s.KeyPairs = v +// SetStatistics sets the Statistics field's value. +func (s *GetRelationalDatabaseMetricDataInput) SetStatistics(v []*string) *GetRelationalDatabaseMetricDataInput { + s.Statistics = v return s } -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetKeyPairsOutput) SetNextPageToken(v string) *GetKeyPairsOutput { - s.NextPageToken = &v +// SetUnit sets the Unit field's value. +func (s *GetRelationalDatabaseMetricDataInput) SetUnit(v string) *GetRelationalDatabaseMetricDataInput { + s.Unit = &v return s } -type GetLoadBalancerInput struct { +type GetRelationalDatabaseMetricDataOutput struct { _ struct{} `type:"structure"` - // The name of the load balancer. - // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // An object describing the result of your get relational database metric data + // request. + MetricData []*MetricDatapoint `locationName:"metricData" type:"list"` + + // The name of the metric. + MetricName *string `locationName:"metricName" type:"string" enum:"RelationalDatabaseMetricName"` } // String returns the string representation -func (s GetLoadBalancerInput) String() string { +func (s GetRelationalDatabaseMetricDataOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoadBalancerInput) GoString() string { +func (s GetRelationalDatabaseMetricDataOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetLoadBalancerInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetLoadBalancerInput"} - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetMetricData sets the MetricData field's value. 
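The timestamp convention spelled out above (UTC, Unix time, e.g. 1538424000 for October 1, 2018, 8 PM UTC) is easiest to see in a concrete `GetRelationalDatabaseMetricData` request. A minimal sketch; the database name and region are placeholders, the `CPUUtilization` metric name and `Percent` unit are assumed enum values, and the `Timestamp` and `Average` fields on `MetricDatapoint` are defined elsewhere in this file:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lightsail"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := lightsail.New(sess)

	// time.Unix(1538424000, 0) is October 1, 2018, 8 PM UTC, matching the
	// example used in the field documentation above.
	end := time.Unix(1538424000, 0)
	start := end.Add(-2 * time.Hour)

	out, err := svc.GetRelationalDatabaseMetricData(&lightsail.GetRelationalDatabaseMetricDataInput{
		RelationalDatabaseName: aws.String("example-db"), // placeholder name
		MetricName:             aws.String("CPUUtilization"),
		Unit:                   aws.String("Percent"),
		Period:                 aws.Int64(300),
		Statistics:             aws.StringSlice([]string{"Average"}),
		StartTime:              aws.Time(start),
		EndTime:                aws.Time(end),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range out.MetricData {
		fmt.Println(aws.TimeValue(p.Timestamp), aws.Float64Value(p.Average))
	}
}
```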
+func (s *GetRelationalDatabaseMetricDataOutput) SetMetricData(v []*MetricDatapoint) *GetRelationalDatabaseMetricDataOutput { + s.MetricData = v + return s } -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *GetLoadBalancerInput) SetLoadBalancerName(v string) *GetLoadBalancerInput { - s.LoadBalancerName = &v +// SetMetricName sets the MetricName field's value. +func (s *GetRelationalDatabaseMetricDataOutput) SetMetricName(v string) *GetRelationalDatabaseMetricDataOutput { + s.MetricName = &v return s } -type GetLoadBalancerMetricDataInput struct { +type GetRelationalDatabaseOutput struct { _ struct{} `type:"structure"` - // The end time of the period. - // - // EndTime is a required field - EndTime *time.Time `locationName:"endTime" type:"timestamp" timestampFormat:"unix" required:"true"` - - // The name of the load balancer. - // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // An object describing the specified database. + RelationalDatabase *RelationalDatabase `locationName:"relationalDatabase" type:"structure"` +} - // The metric about which you want to return information. Valid values are listed - // below, along with the most useful statistics to include in your request. - // - // * ClientTLSNegotiationErrorCount - The number of TLS connections initiated - // by the client that did not establish a session with the load balancer. - // Possible causes include a mismatch of ciphers or protocols. - // - // Statistics: The most useful statistic is Sum. - // - // * HealthyHostCount - The number of target instances that are considered - // healthy. - // - // Statistics: The most useful statistic are Average, Minimum, and Maximum. - // - // * UnhealthyHostCount - The number of target instances that are considered - // unhealthy. - // - // Statistics: The most useful statistic are Average, Minimum, and Maximum. - // - // * HTTPCode_LB_4XX_Count - The number of HTTP 4XX client error codes that - // originate from the load balancer. Client errors are generated when requests - // are malformed or incomplete. These requests have not been received by - // the target instance. This count does not include any response codes generated - // by the target instances. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * HTTPCode_LB_5XX_Count - The number of HTTP 5XX server error codes that - // originate from the load balancer. This count does not include any response - // codes generated by the target instances. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. Note that Minimum, Maximum, and Average all - // return 1. - // - // * HTTPCode_Instance_2XX_Count - The number of HTTP response codes generated - // by the target instances. This does not include any response codes generated - // by the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * HTTPCode_Instance_3XX_Count - The number of HTTP response codes generated - // by the target instances. This does not include any response codes generated - // by the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * HTTPCode_Instance_4XX_Count - The number of HTTP response codes generated - // by the target instances. 
This does not include any response codes generated - // by the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * HTTPCode_Instance_5XX_Count - The number of HTTP response codes generated - // by the target instances. This does not include any response codes generated - // by the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * InstanceResponseTime - The time elapsed, in seconds, after the request - // leaves the load balancer until a response from the target instance is - // received. - // - // Statistics: The most useful statistic is Average. - // - // * RejectedConnectionCount - The number of connections that were rejected - // because the load balancer had reached its maximum number of connections. - // - // Statistics: The most useful statistic is Sum. - // - // * RequestCount - The number of requests processed over IPv4. This count - // includes only the requests with a response generated by a target instance - // of the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // MetricName is a required field - MetricName *string `locationName:"metricName" type:"string" required:"true" enum:"LoadBalancerMetricName"` +// String returns the string representation +func (s GetRelationalDatabaseOutput) String() string { + return awsutil.Prettify(s) +} - // The time period duration for your health data request. - // - // Period is a required field - Period *int64 `locationName:"period" min:"60" type:"integer" required:"true"` +// GoString returns the string representation +func (s GetRelationalDatabaseOutput) GoString() string { + return s.String() +} - // The start time of the period. - // - // StartTime is a required field - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unix" required:"true"` +// SetRelationalDatabase sets the RelationalDatabase field's value. +func (s *GetRelationalDatabaseOutput) SetRelationalDatabase(v *RelationalDatabase) *GetRelationalDatabaseOutput { + s.RelationalDatabase = v + return s +} - // An array of statistics that you want to request metrics for. Valid values - // are listed below. - // - // * SampleCount - The count (number) of data points used for the statistical - // calculation. - // - // * Average - The value of Sum / SampleCount during the specified period. - // By comparing this statistic with the Minimum and Maximum, you can determine - // the full scope of a metric and how close the average use is to the Minimum - // and Maximum. This comparison helps you to know when to increase or decrease - // your resources as needed. - // - // * Sum - All values submitted for the matching metric added together. This - // statistic can be useful for determining the total volume of a metric. - // - // * Minimum - The lowest value observed during the specified period. You - // can use this value to determine low volumes of activity for your application. - // - // * Maximum - The highest value observed during the specified period. You - // can use this value to determine high volumes of activity for your application. - // - // Statistics is a required field - Statistics []*string `locationName:"statistics" type:"list" required:"true"` +type GetRelationalDatabaseParametersInput struct { + _ struct{} `type:"structure"` - // The unit for the time period request. 
Valid values are listed below. + // A token used for advancing to a specific page of results for your get relational + // database parameters request. + PageToken *string `locationName:"pageToken" type:"string"` + + // The name of your database for which to get parameters. // - // Unit is a required field - Unit *string `locationName:"unit" type:"string" required:"true" enum:"MetricUnit"` + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` } // String returns the string representation -func (s GetLoadBalancerMetricDataInput) String() string { +func (s GetRelationalDatabaseParametersInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoadBalancerMetricDataInput) GoString() string { +func (s GetRelationalDatabaseParametersInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetLoadBalancerMetricDataInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetLoadBalancerMetricDataInput"} - if s.EndTime == nil { - invalidParams.Add(request.NewErrParamRequired("EndTime")) - } - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) - } - if s.MetricName == nil { - invalidParams.Add(request.NewErrParamRequired("MetricName")) - } - if s.Period == nil { - invalidParams.Add(request.NewErrParamRequired("Period")) - } - if s.Period != nil && *s.Period < 60 { - invalidParams.Add(request.NewErrParamMinValue("Period", 60)) - } - if s.StartTime == nil { - invalidParams.Add(request.NewErrParamRequired("StartTime")) - } - if s.Statistics == nil { - invalidParams.Add(request.NewErrParamRequired("Statistics")) - } - if s.Unit == nil { - invalidParams.Add(request.NewErrParamRequired("Unit")) +func (s *GetRelationalDatabaseParametersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRelationalDatabaseParametersInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -12137,207 +16479,252 @@ func (s *GetLoadBalancerMetricDataInput) Validate() error { return nil } -// SetEndTime sets the EndTime field's value. -func (s *GetLoadBalancerMetricDataInput) SetEndTime(v time.Time) *GetLoadBalancerMetricDataInput { - s.EndTime = &v +// SetPageToken sets the PageToken field's value. +func (s *GetRelationalDatabaseParametersInput) SetPageToken(v string) *GetRelationalDatabaseParametersInput { + s.PageToken = &v return s } -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *GetLoadBalancerMetricDataInput) SetLoadBalancerName(v string) *GetLoadBalancerMetricDataInput { - s.LoadBalancerName = &v +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *GetRelationalDatabaseParametersInput) SetRelationalDatabaseName(v string) *GetRelationalDatabaseParametersInput { + s.RelationalDatabaseName = &v return s } -// SetMetricName sets the MetricName field's value. -func (s *GetLoadBalancerMetricDataInput) SetMetricName(v string) *GetLoadBalancerMetricDataInput { - s.MetricName = &v +type GetRelationalDatabaseParametersOutput struct { + _ struct{} `type:"structure"` + + // A token used for advancing to the next page of results from your get static + // IPs request. 
+ NextPageToken *string `locationName:"nextPageToken" type:"string"` + + // An object describing the result of your get relational database parameters + // request. + Parameters []*RelationalDatabaseParameter `locationName:"parameters" type:"list"` +} + +// String returns the string representation +func (s GetRelationalDatabaseParametersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRelationalDatabaseParametersOutput) GoString() string { + return s.String() +} + +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetRelationalDatabaseParametersOutput) SetNextPageToken(v string) *GetRelationalDatabaseParametersOutput { + s.NextPageToken = &v return s } -// SetPeriod sets the Period field's value. -func (s *GetLoadBalancerMetricDataInput) SetPeriod(v int64) *GetLoadBalancerMetricDataInput { - s.Period = &v +// SetParameters sets the Parameters field's value. +func (s *GetRelationalDatabaseParametersOutput) SetParameters(v []*RelationalDatabaseParameter) *GetRelationalDatabaseParametersOutput { + s.Parameters = v return s } -// SetStartTime sets the StartTime field's value. -func (s *GetLoadBalancerMetricDataInput) SetStartTime(v time.Time) *GetLoadBalancerMetricDataInput { - s.StartTime = &v +type GetRelationalDatabaseSnapshotInput struct { + _ struct{} `type:"structure"` + + // The name of the database snapshot for which to get information. + // + // RelationalDatabaseSnapshotName is a required field + RelationalDatabaseSnapshotName *string `locationName:"relationalDatabaseSnapshotName" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetRelationalDatabaseSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRelationalDatabaseSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetRelationalDatabaseSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRelationalDatabaseSnapshotInput"} + if s.RelationalDatabaseSnapshotName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseSnapshotName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRelationalDatabaseSnapshotName sets the RelationalDatabaseSnapshotName field's value. +func (s *GetRelationalDatabaseSnapshotInput) SetRelationalDatabaseSnapshotName(v string) *GetRelationalDatabaseSnapshotInput { + s.RelationalDatabaseSnapshotName = &v return s } -// SetStatistics sets the Statistics field's value. -func (s *GetLoadBalancerMetricDataInput) SetStatistics(v []*string) *GetLoadBalancerMetricDataInput { - s.Statistics = v +type GetRelationalDatabaseSnapshotOutput struct { + _ struct{} `type:"structure"` + + // An object describing the specified database snapshot. + RelationalDatabaseSnapshot *RelationalDatabaseSnapshot `locationName:"relationalDatabaseSnapshot" type:"structure"` +} + +// String returns the string representation +func (s GetRelationalDatabaseSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRelationalDatabaseSnapshotOutput) GoString() string { + return s.String() +} + +// SetRelationalDatabaseSnapshot sets the RelationalDatabaseSnapshot field's value. 
+func (s *GetRelationalDatabaseSnapshotOutput) SetRelationalDatabaseSnapshot(v *RelationalDatabaseSnapshot) *GetRelationalDatabaseSnapshotOutput { + s.RelationalDatabaseSnapshot = v return s } -// SetUnit sets the Unit field's value. -func (s *GetLoadBalancerMetricDataInput) SetUnit(v string) *GetLoadBalancerMetricDataInput { - s.Unit = &v +type GetRelationalDatabaseSnapshotsInput struct { + _ struct{} `type:"structure"` + + // A token used for advancing to a specific page of results for your get relational + // database snapshots request. + PageToken *string `locationName:"pageToken" type:"string"` +} + +// String returns the string representation +func (s GetRelationalDatabaseSnapshotsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRelationalDatabaseSnapshotsInput) GoString() string { + return s.String() +} + +// SetPageToken sets the PageToken field's value. +func (s *GetRelationalDatabaseSnapshotsInput) SetPageToken(v string) *GetRelationalDatabaseSnapshotsInput { + s.PageToken = &v return s } -type GetLoadBalancerMetricDataOutput struct { +type GetRelationalDatabaseSnapshotsOutput struct { _ struct{} `type:"structure"` - // An array of metric datapoint objects. - MetricData []*MetricDatapoint `locationName:"metricData" type:"list"` + // A token used for advancing to the next page of results from your get relational + // database snapshots request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` - // The metric about which you are receiving information. Valid values are listed - // below, along with the most useful statistics to include in your request. - // - // * ClientTLSNegotiationErrorCount - The number of TLS connections initiated - // by the client that did not establish a session with the load balancer. - // Possible causes include a mismatch of ciphers or protocols. - // - // Statistics: The most useful statistic is Sum. - // - // * HealthyHostCount - The number of target instances that are considered - // healthy. - // - // Statistics: The most useful statistic are Average, Minimum, and Maximum. - // - // * UnhealthyHostCount - The number of target instances that are considered - // unhealthy. - // - // Statistics: The most useful statistic are Average, Minimum, and Maximum. - // - // * HTTPCode_LB_4XX_Count - The number of HTTP 4XX client error codes that - // originate from the load balancer. Client errors are generated when requests - // are malformed or incomplete. These requests have not been received by - // the target instance. This count does not include any response codes generated - // by the target instances. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * HTTPCode_LB_5XX_Count - The number of HTTP 5XX server error codes that - // originate from the load balancer. This count does not include any response - // codes generated by the target instances. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. Note that Minimum, Maximum, and Average all - // return 1. - // - // * HTTPCode_Instance_2XX_Count - The number of HTTP response codes generated - // by the target instances. This does not include any response codes generated - // by the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. 
- // - // * HTTPCode_Instance_3XX_Count - The number of HTTP response codes generated - // by the target instances. This does not include any response codes generated - // by the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * HTTPCode_Instance_4XX_Count - The number of HTTP response codes generated - // by the target instances. This does not include any response codes generated - // by the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * HTTPCode_Instance_5XX_Count - The number of HTTP response codes generated - // by the target instances. This does not include any response codes generated - // by the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - // - // * InstanceResponseTime - The time elapsed, in seconds, after the request - // leaves the load balancer until a response from the target instance is - // received. - // - // Statistics: The most useful statistic is Average. - // - // * RejectedConnectionCount - The number of connections that were rejected - // because the load balancer had reached its maximum number of connections. - // - // Statistics: The most useful statistic is Sum. - // - // * RequestCount - The number of requests processed over IPv4. This count - // includes only the requests with a response generated by a target instance - // of the load balancer. - // - // Statistics: The most useful statistic is Sum. Note that Minimum, Maximum, - // and Average all return 1. - MetricName *string `locationName:"metricName" type:"string" enum:"LoadBalancerMetricName"` + // An object describing the result of your get relational database snapshots + // request. + RelationalDatabaseSnapshots []*RelationalDatabaseSnapshot `locationName:"relationalDatabaseSnapshots" type:"list"` +} + +// String returns the string representation +func (s GetRelationalDatabaseSnapshotsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRelationalDatabaseSnapshotsOutput) GoString() string { + return s.String() +} + +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetRelationalDatabaseSnapshotsOutput) SetNextPageToken(v string) *GetRelationalDatabaseSnapshotsOutput { + s.NextPageToken = &v + return s +} + +// SetRelationalDatabaseSnapshots sets the RelationalDatabaseSnapshots field's value. +func (s *GetRelationalDatabaseSnapshotsOutput) SetRelationalDatabaseSnapshots(v []*RelationalDatabaseSnapshot) *GetRelationalDatabaseSnapshotsOutput { + s.RelationalDatabaseSnapshots = v + return s +} + +type GetRelationalDatabasesInput struct { + _ struct{} `type:"structure"` + + // A token used for advancing to a specific page of results for your get relational + // database request. + PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s GetLoadBalancerMetricDataOutput) String() string { +func (s GetRelationalDatabasesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoadBalancerMetricDataOutput) GoString() string { +func (s GetRelationalDatabasesInput) GoString() string { return s.String() } -// SetMetricData sets the MetricData field's value. 
-func (s *GetLoadBalancerMetricDataOutput) SetMetricData(v []*MetricDatapoint) *GetLoadBalancerMetricDataOutput { - s.MetricData = v - return s -} - -// SetMetricName sets the MetricName field's value. -func (s *GetLoadBalancerMetricDataOutput) SetMetricName(v string) *GetLoadBalancerMetricDataOutput { - s.MetricName = &v +// SetPageToken sets the PageToken field's value. +func (s *GetRelationalDatabasesInput) SetPageToken(v string) *GetRelationalDatabasesInput { + s.PageToken = &v return s } -type GetLoadBalancerOutput struct { +type GetRelationalDatabasesOutput struct { _ struct{} `type:"structure"` - // An object containing information about your load balancer. - LoadBalancer *LoadBalancer `locationName:"loadBalancer" type:"structure"` + // A token used for advancing to the next page of results from your get relational + // databases request. + NextPageToken *string `locationName:"nextPageToken" type:"string"` + + // An object describing the result of your get relational databases request. + RelationalDatabases []*RelationalDatabase `locationName:"relationalDatabases" type:"list"` } // String returns the string representation -func (s GetLoadBalancerOutput) String() string { +func (s GetRelationalDatabasesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoadBalancerOutput) GoString() string { +func (s GetRelationalDatabasesOutput) GoString() string { return s.String() } -// SetLoadBalancer sets the LoadBalancer field's value. -func (s *GetLoadBalancerOutput) SetLoadBalancer(v *LoadBalancer) *GetLoadBalancerOutput { - s.LoadBalancer = v +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetRelationalDatabasesOutput) SetNextPageToken(v string) *GetRelationalDatabasesOutput { + s.NextPageToken = &v return s } -type GetLoadBalancerTlsCertificatesInput struct { +// SetRelationalDatabases sets the RelationalDatabases field's value. +func (s *GetRelationalDatabasesOutput) SetRelationalDatabases(v []*RelationalDatabase) *GetRelationalDatabasesOutput { + s.RelationalDatabases = v + return s +} + +type GetStaticIpInput struct { _ struct{} `type:"structure"` - // The name of the load balancer you associated with your SSL/TLS certificate. + // The name of the static IP in Lightsail. // - // LoadBalancerName is a required field - LoadBalancerName *string `locationName:"loadBalancerName" type:"string" required:"true"` + // StaticIpName is a required field + StaticIpName *string `locationName:"staticIpName" type:"string" required:"true"` } // String returns the string representation -func (s GetLoadBalancerTlsCertificatesInput) String() string { +func (s GetStaticIpInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoadBalancerTlsCertificatesInput) GoString() string { +func (s GetStaticIpInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *GetLoadBalancerTlsCertificatesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetLoadBalancerTlsCertificatesInput"} - if s.LoadBalancerName == nil { - invalidParams.Add(request.NewErrParamRequired("LoadBalancerName")) +func (s *GetStaticIpInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetStaticIpInput"} + if s.StaticIpName == nil { + invalidParams.Add(request.NewErrParamRequired("StaticIpName")) } if invalidParams.Len() > 0 { @@ -12346,115 +16733,126 @@ func (s *GetLoadBalancerTlsCertificatesInput) Validate() error { return nil } -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *GetLoadBalancerTlsCertificatesInput) SetLoadBalancerName(v string) *GetLoadBalancerTlsCertificatesInput { - s.LoadBalancerName = &v +// SetStaticIpName sets the StaticIpName field's value. +func (s *GetStaticIpInput) SetStaticIpName(v string) *GetStaticIpInput { + s.StaticIpName = &v return s } -type GetLoadBalancerTlsCertificatesOutput struct { +type GetStaticIpOutput struct { _ struct{} `type:"structure"` - // An array of LoadBalancerTlsCertificate objects describing your SSL/TLS certificates. - TlsCertificates []*LoadBalancerTlsCertificate `locationName:"tlsCertificates" type:"list"` + // An array of key-value pairs containing information about the requested static + // IP. + StaticIp *StaticIp `locationName:"staticIp" type:"structure"` } // String returns the string representation -func (s GetLoadBalancerTlsCertificatesOutput) String() string { +func (s GetStaticIpOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoadBalancerTlsCertificatesOutput) GoString() string { +func (s GetStaticIpOutput) GoString() string { return s.String() } -// SetTlsCertificates sets the TlsCertificates field's value. -func (s *GetLoadBalancerTlsCertificatesOutput) SetTlsCertificates(v []*LoadBalancerTlsCertificate) *GetLoadBalancerTlsCertificatesOutput { - s.TlsCertificates = v +// SetStaticIp sets the StaticIp field's value. +func (s *GetStaticIpOutput) SetStaticIp(v *StaticIp) *GetStaticIpOutput { + s.StaticIp = v return s } -type GetLoadBalancersInput struct { +type GetStaticIpsInput struct { _ struct{} `type:"structure"` - // A token used for paginating the results from your GetLoadBalancers request. + // A token used for advancing to the next page of results from your get static + // IPs request. PageToken *string `locationName:"pageToken" type:"string"` } // String returns the string representation -func (s GetLoadBalancersInput) String() string { +func (s GetStaticIpsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoadBalancersInput) GoString() string { +func (s GetStaticIpsInput) GoString() string { return s.String() } // SetPageToken sets the PageToken field's value. -func (s *GetLoadBalancersInput) SetPageToken(v string) *GetLoadBalancersInput { +func (s *GetStaticIpsInput) SetPageToken(v string) *GetStaticIpsInput { s.PageToken = &v return s } -type GetLoadBalancersOutput struct { +type GetStaticIpsOutput struct { _ struct{} `type:"structure"` - // An array of LoadBalancer objects describing your load balancers. - LoadBalancers []*LoadBalancer `locationName:"loadBalancers" type:"list"` - - // A token used for advancing to the next page of results from your GetLoadBalancers - // request. + // A token used for advancing to the next page of results from your get static + // IPs request. 
NextPageToken *string `locationName:"nextPageToken" type:"string"` + + // An array of key-value pairs containing information about your get static + // IPs request. + StaticIps []*StaticIp `locationName:"staticIps" type:"list"` } // String returns the string representation -func (s GetLoadBalancersOutput) String() string { +func (s GetStaticIpsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetLoadBalancersOutput) GoString() string { +func (s GetStaticIpsOutput) GoString() string { return s.String() } -// SetLoadBalancers sets the LoadBalancers field's value. -func (s *GetLoadBalancersOutput) SetLoadBalancers(v []*LoadBalancer) *GetLoadBalancersOutput { - s.LoadBalancers = v +// SetNextPageToken sets the NextPageToken field's value. +func (s *GetStaticIpsOutput) SetNextPageToken(v string) *GetStaticIpsOutput { + s.NextPageToken = &v return s } -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetLoadBalancersOutput) SetNextPageToken(v string) *GetLoadBalancersOutput { - s.NextPageToken = &v +// SetStaticIps sets the StaticIps field's value. +func (s *GetStaticIpsOutput) SetStaticIps(v []*StaticIp) *GetStaticIpsOutput { + s.StaticIps = v return s } -type GetOperationInput struct { +type ImportKeyPairInput struct { _ struct{} `type:"structure"` - // A GUID used to identify the operation. + // The name of the key pair for which you want to import the public key. // - // OperationId is a required field - OperationId *string `locationName:"operationId" type:"string" required:"true"` + // KeyPairName is a required field + KeyPairName *string `locationName:"keyPairName" type:"string" required:"true"` + + // A base64-encoded public key of the ssh-rsa type. + // + // PublicKeyBase64 is a required field + PublicKeyBase64 *string `locationName:"publicKeyBase64" type:"string" required:"true"` } // String returns the string representation -func (s GetOperationInput) String() string { +func (s ImportKeyPairInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetOperationInput) GoString() string { +func (s ImportKeyPairInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GetOperationInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetOperationInput"} - if s.OperationId == nil { - invalidParams.Add(request.NewErrParamRequired("OperationId")) +func (s *ImportKeyPairInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ImportKeyPairInput"} + if s.KeyPairName == nil { + invalidParams.Add(request.NewErrParamRequired("KeyPairName")) + } + if s.PublicKeyBase64 == nil { + invalidParams.Add(request.NewErrParamRequired("PublicKeyBase64")) } if invalidParams.Len() > 0 { @@ -12463,2076 +16861,2260 @@ func (s *GetOperationInput) Validate() error { return nil } -// SetOperationId sets the OperationId field's value. -func (s *GetOperationInput) SetOperationId(v string) *GetOperationInput { - s.OperationId = &v +// SetKeyPairName sets the KeyPairName field's value. +func (s *ImportKeyPairInput) SetKeyPairName(v string) *ImportKeyPairInput { + s.KeyPairName = &v return s } -type GetOperationOutput struct { +// SetPublicKeyBase64 sets the PublicKeyBase64 field's value. 
+func (s *ImportKeyPairInput) SetPublicKeyBase64(v string) *ImportKeyPairInput { + s.PublicKeyBase64 = &v + return s +} + +type ImportKeyPairOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the results of your - // get operation request. + // An array of key-value pairs containing information about the request operation. Operation *Operation `locationName:"operation" type:"structure"` } // String returns the string representation -func (s GetOperationOutput) String() string { +func (s ImportKeyPairOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetOperationOutput) GoString() string { +func (s ImportKeyPairOutput) GoString() string { return s.String() } // SetOperation sets the Operation field's value. -func (s *GetOperationOutput) SetOperation(v *Operation) *GetOperationOutput { +func (s *ImportKeyPairOutput) SetOperation(v *Operation) *ImportKeyPairOutput { s.Operation = v return s } -type GetOperationsForResourceInput struct { +// Describes an instance (a virtual private server). +type Instance struct { _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your get operations - // for resource request. - PageToken *string `locationName:"pageToken" type:"string"` + // The Amazon Resource Name (ARN) of the instance (e.g., arn:aws:lightsail:us-east-2:123456789101:Instance/244ad76f-8aad-4741-809f-12345EXAMPLE). + Arn *string `locationName:"arn" type:"string"` - // The name of the resource for which you are requesting information. - // - // ResourceName is a required field - ResourceName *string `locationName:"resourceName" type:"string" required:"true"` -} + // The blueprint ID (e.g., os_amlinux_2016_03). + BlueprintId *string `locationName:"blueprintId" type:"string"` -// String returns the string representation -func (s GetOperationsForResourceInput) String() string { - return awsutil.Prettify(s) -} + // The friendly name of the blueprint (e.g., Amazon Linux). + BlueprintName *string `locationName:"blueprintName" type:"string"` -// GoString returns the string representation -func (s GetOperationsForResourceInput) GoString() string { - return s.String() -} + // The bundle for the instance (e.g., micro_1_0). + BundleId *string `locationName:"bundleId" type:"string"` -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetOperationsForResourceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetOperationsForResourceInput"} - if s.ResourceName == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceName")) - } + // The timestamp when the instance was created (e.g., 1479734909.17). + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} + // The size of the vCPU and the amount of RAM for the instance. + Hardware *InstanceHardware `locationName:"hardware" type:"structure"` -// SetPageToken sets the PageToken field's value. -func (s *GetOperationsForResourceInput) SetPageToken(v string) *GetOperationsForResourceInput { - s.PageToken = &v - return s -} + // The IPv6 address of the instance. + Ipv6Address *string `locationName:"ipv6Address" type:"string"` -// SetResourceName sets the ResourceName field's value. 
-func (s *GetOperationsForResourceInput) SetResourceName(v string) *GetOperationsForResourceInput { - s.ResourceName = &v - return s -} + // A Boolean value indicating whether this instance has a static IP assigned + // to it. + IsStaticIp *bool `locationName:"isStaticIp" type:"boolean"` -type GetOperationsForResourceOutput struct { - _ struct{} `type:"structure"` + // The region name and Availability Zone where the instance is located. + Location *ResourceLocation `locationName:"location" type:"structure"` - // (Deprecated) Returns the number of pages of results that remain. - // - // In releases prior to June 12, 2017, this parameter returned null by the API. - // It is now deprecated, and the API returns the nextPageToken parameter instead. - NextPageCount *string `locationName:"nextPageCount" deprecated:"true" type:"string"` + // The name the user gave the instance (e.g., Amazon_Linux-1GB-Ohio-1). + Name *string `locationName:"name" type:"string"` + + // Information about the public ports and monthly data transfer rates for the + // instance. + Networking *InstanceNetworking `locationName:"networking" type:"structure"` + + // The private IP address of the instance. + PrivateIpAddress *string `locationName:"privateIpAddress" type:"string"` + + // The public IP address of the instance. + PublicIpAddress *string `locationName:"publicIpAddress" type:"string"` + + // The type of resource (usually Instance). + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // The name of the SSH key being used to connect to the instance (e.g., LightsailDefaultKeyPair). + SshKeyName *string `locationName:"sshKeyName" type:"string"` + + // The status code and the state (e.g., running) for the instance. + State *InstanceState `locationName:"state" type:"structure"` - // An identifier that was returned from the previous call to this operation, - // which can be used to return the next set of items in the list. - NextPageToken *string `locationName:"nextPageToken" type:"string"` + // The support code. Include this code in your email to support when you have + // questions about an instance or another resource in Lightsail. This code enables + // our support team to look up your Lightsail information more easily. + SupportCode *string `locationName:"supportCode" type:"string"` - // An array of key-value pairs containing information about the results of your - // get operations for resource request. - Operations []*Operation `locationName:"operations" type:"list"` + // The user name for connecting to the instance (e.g., ec2-user). + Username *string `locationName:"username" type:"string"` } // String returns the string representation -func (s GetOperationsForResourceOutput) String() string { +func (s Instance) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetOperationsForResourceOutput) GoString() string { +func (s Instance) GoString() string { return s.String() } -// SetNextPageCount sets the NextPageCount field's value. -func (s *GetOperationsForResourceOutput) SetNextPageCount(v string) *GetOperationsForResourceOutput { - s.NextPageCount = &v +// SetArn sets the Arn field's value. +func (s *Instance) SetArn(v string) *Instance { + s.Arn = &v return s } -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetOperationsForResourceOutput) SetNextPageToken(v string) *GetOperationsForResourceOutput { - s.NextPageToken = &v +// SetBlueprintId sets the BlueprintId field's value. 
+func (s *Instance) SetBlueprintId(v string) *Instance { + s.BlueprintId = &v return s } -// SetOperations sets the Operations field's value. -func (s *GetOperationsForResourceOutput) SetOperations(v []*Operation) *GetOperationsForResourceOutput { - s.Operations = v +// SetBlueprintName sets the BlueprintName field's value. +func (s *Instance) SetBlueprintName(v string) *Instance { + s.BlueprintName = &v return s } -type GetOperationsInput struct { - _ struct{} `type:"structure"` - - // A token used for advancing to the next page of results from your get operations - // request. - PageToken *string `locationName:"pageToken" type:"string"` +// SetBundleId sets the BundleId field's value. +func (s *Instance) SetBundleId(v string) *Instance { + s.BundleId = &v + return s } -// String returns the string representation -func (s GetOperationsInput) String() string { - return awsutil.Prettify(s) +// SetCreatedAt sets the CreatedAt field's value. +func (s *Instance) SetCreatedAt(v time.Time) *Instance { + s.CreatedAt = &v + return s } -// GoString returns the string representation -func (s GetOperationsInput) GoString() string { - return s.String() +// SetHardware sets the Hardware field's value. +func (s *Instance) SetHardware(v *InstanceHardware) *Instance { + s.Hardware = v + return s } -// SetPageToken sets the PageToken field's value. -func (s *GetOperationsInput) SetPageToken(v string) *GetOperationsInput { - s.PageToken = &v +// SetIpv6Address sets the Ipv6Address field's value. +func (s *Instance) SetIpv6Address(v string) *Instance { + s.Ipv6Address = &v return s } -type GetOperationsOutput struct { - _ struct{} `type:"structure"` - - // A token used for advancing to the next page of results from your get operations - // request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` +// SetIsStaticIp sets the IsStaticIp field's value. +func (s *Instance) SetIsStaticIp(v bool) *Instance { + s.IsStaticIp = &v + return s +} - // An array of key-value pairs containing information about the results of your - // get operations request. - Operations []*Operation `locationName:"operations" type:"list"` +// SetLocation sets the Location field's value. +func (s *Instance) SetLocation(v *ResourceLocation) *Instance { + s.Location = v + return s } -// String returns the string representation -func (s GetOperationsOutput) String() string { - return awsutil.Prettify(s) +// SetName sets the Name field's value. +func (s *Instance) SetName(v string) *Instance { + s.Name = &v + return s } -// GoString returns the string representation -func (s GetOperationsOutput) GoString() string { - return s.String() +// SetNetworking sets the Networking field's value. +func (s *Instance) SetNetworking(v *InstanceNetworking) *Instance { + s.Networking = v + return s } -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetOperationsOutput) SetNextPageToken(v string) *GetOperationsOutput { - s.NextPageToken = &v +// SetPrivateIpAddress sets the PrivateIpAddress field's value. +func (s *Instance) SetPrivateIpAddress(v string) *Instance { + s.PrivateIpAddress = &v return s } -// SetOperations sets the Operations field's value. -func (s *GetOperationsOutput) SetOperations(v []*Operation) *GetOperationsOutput { - s.Operations = v +// SetPublicIpAddress sets the PublicIpAddress field's value. 
+func (s *Instance) SetPublicIpAddress(v string) *Instance { + s.PublicIpAddress = &v return s } -type GetRegionsInput struct { - _ struct{} `type:"structure"` +// SetResourceType sets the ResourceType field's value. +func (s *Instance) SetResourceType(v string) *Instance { + s.ResourceType = &v + return s +} - // A Boolean value indicating whether to also include Availability Zones in - // your get regions request. Availability Zones are indicated with a letter: - // e.g., us-east-2a. - IncludeAvailabilityZones *bool `locationName:"includeAvailabilityZones" type:"boolean"` +// SetSshKeyName sets the SshKeyName field's value. +func (s *Instance) SetSshKeyName(v string) *Instance { + s.SshKeyName = &v + return s } -// String returns the string representation -func (s GetRegionsInput) String() string { - return awsutil.Prettify(s) +// SetState sets the State field's value. +func (s *Instance) SetState(v *InstanceState) *Instance { + s.State = v + return s } -// GoString returns the string representation -func (s GetRegionsInput) GoString() string { - return s.String() +// SetSupportCode sets the SupportCode field's value. +func (s *Instance) SetSupportCode(v string) *Instance { + s.SupportCode = &v + return s } -// SetIncludeAvailabilityZones sets the IncludeAvailabilityZones field's value. -func (s *GetRegionsInput) SetIncludeAvailabilityZones(v bool) *GetRegionsInput { - s.IncludeAvailabilityZones = &v +// SetUsername sets the Username field's value. +func (s *Instance) SetUsername(v string) *Instance { + s.Username = &v return s } -type GetRegionsOutput struct { +// The parameters for gaining temporary access to one of your Amazon Lightsail +// instances. +type InstanceAccessDetails struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about your get regions - // request. - Regions []*Region `locationName:"regions" type:"list"` -} - -// String returns the string representation -func (s GetRegionsOutput) String() string { - return awsutil.Prettify(s) -} + // For SSH access, the public key to use when accessing your instance For OpenSSH + // clients (e.g., command line SSH), you should save this value to tempkey-cert.pub. + CertKey *string `locationName:"certKey" type:"string"` -// GoString returns the string representation -func (s GetRegionsOutput) GoString() string { - return s.String() -} + // For SSH access, the date on which the temporary keys expire. + ExpiresAt *time.Time `locationName:"expiresAt" type:"timestamp"` -// SetRegions sets the Regions field's value. -func (s *GetRegionsOutput) SetRegions(v []*Region) *GetRegionsOutput { - s.Regions = v - return s -} + // The name of this Amazon Lightsail instance. + InstanceName *string `locationName:"instanceName" type:"string"` -type GetStaticIpInput struct { - _ struct{} `type:"structure"` + // The public IP address of the Amazon Lightsail instance. + IpAddress *string `locationName:"ipAddress" type:"string"` - // The name of the static IP in Lightsail. + // For RDP access, the password for your Amazon Lightsail instance. Password + // will be an empty string if the password for your new instance is not ready + // yet. When you create an instance, it can take up to 15 minutes for the instance + // to be ready. // - // StaticIpName is a required field - StaticIpName *string `locationName:"staticIpName" type:"string" required:"true"` + // If you create an instance using any key pair other than the default (LightsailDefaultKeyPair), + // password will always be an empty string. 
+ // + // If you change the Administrator password on the instance, Lightsail will + // continue to return the original password value. When accessing the instance + // using RDP, you need to manually enter the Administrator password after changing + // it from the default. + Password *string `locationName:"password" type:"string"` + + // For a Windows Server-based instance, an object with the data you can use + // to retrieve your password. This is only needed if password is empty and the + // instance is not new (and therefore the password is not ready yet). When you + // create an instance, it can take up to 15 minutes for the instance to be ready. + PasswordData *PasswordData `locationName:"passwordData" type:"structure"` + + // For SSH access, the temporary private key. For OpenSSH clients (e.g., command + // line SSH), you should save this value to tempkey). + PrivateKey *string `locationName:"privateKey" type:"string"` + + // The protocol for these Amazon Lightsail instance access details. + Protocol *string `locationName:"protocol" type:"string" enum:"InstanceAccessProtocol"` + + // The user name to use when logging in to the Amazon Lightsail instance. + Username *string `locationName:"username" type:"string"` } // String returns the string representation -func (s GetStaticIpInput) String() string { +func (s InstanceAccessDetails) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetStaticIpInput) GoString() string { +func (s InstanceAccessDetails) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *GetStaticIpInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GetStaticIpInput"} - if s.StaticIpName == nil { - invalidParams.Add(request.NewErrParamRequired("StaticIpName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetStaticIpName sets the StaticIpName field's value. -func (s *GetStaticIpInput) SetStaticIpName(v string) *GetStaticIpInput { - s.StaticIpName = &v +// SetCertKey sets the CertKey field's value. +func (s *InstanceAccessDetails) SetCertKey(v string) *InstanceAccessDetails { + s.CertKey = &v return s } -type GetStaticIpOutput struct { - _ struct{} `type:"structure"` - - // An array of key-value pairs containing information about the requested static - // IP. - StaticIp *StaticIp `locationName:"staticIp" type:"structure"` +// SetExpiresAt sets the ExpiresAt field's value. +func (s *InstanceAccessDetails) SetExpiresAt(v time.Time) *InstanceAccessDetails { + s.ExpiresAt = &v + return s } -// String returns the string representation -func (s GetStaticIpOutput) String() string { - return awsutil.Prettify(s) +// SetInstanceName sets the InstanceName field's value. +func (s *InstanceAccessDetails) SetInstanceName(v string) *InstanceAccessDetails { + s.InstanceName = &v + return s } -// GoString returns the string representation -func (s GetStaticIpOutput) GoString() string { - return s.String() +// SetIpAddress sets the IpAddress field's value. +func (s *InstanceAccessDetails) SetIpAddress(v string) *InstanceAccessDetails { + s.IpAddress = &v + return s } -// SetStaticIp sets the StaticIp field's value. -func (s *GetStaticIpOutput) SetStaticIp(v *StaticIp) *GetStaticIpOutput { - s.StaticIp = v +// SetPassword sets the Password field's value. 
+func (s *InstanceAccessDetails) SetPassword(v string) *InstanceAccessDetails { + s.Password = &v return s } -type GetStaticIpsInput struct { - _ struct{} `type:"structure"` - - // A token used for advancing to the next page of results from your get static - // IPs request. - PageToken *string `locationName:"pageToken" type:"string"` +// SetPasswordData sets the PasswordData field's value. +func (s *InstanceAccessDetails) SetPasswordData(v *PasswordData) *InstanceAccessDetails { + s.PasswordData = v + return s } -// String returns the string representation -func (s GetStaticIpsInput) String() string { - return awsutil.Prettify(s) +// SetPrivateKey sets the PrivateKey field's value. +func (s *InstanceAccessDetails) SetPrivateKey(v string) *InstanceAccessDetails { + s.PrivateKey = &v + return s } -// GoString returns the string representation -func (s GetStaticIpsInput) GoString() string { - return s.String() +// SetProtocol sets the Protocol field's value. +func (s *InstanceAccessDetails) SetProtocol(v string) *InstanceAccessDetails { + s.Protocol = &v + return s } -// SetPageToken sets the PageToken field's value. -func (s *GetStaticIpsInput) SetPageToken(v string) *GetStaticIpsInput { - s.PageToken = &v +// SetUsername sets the Username field's value. +func (s *InstanceAccessDetails) SetUsername(v string) *InstanceAccessDetails { + s.Username = &v return s } -type GetStaticIpsOutput struct { +// Describes the hardware for the instance. +type InstanceHardware struct { _ struct{} `type:"structure"` - // A token used for advancing to the next page of results from your get static - // IPs request. - NextPageToken *string `locationName:"nextPageToken" type:"string"` + // The number of vCPUs the instance has. + CpuCount *int64 `locationName:"cpuCount" type:"integer"` - // An array of key-value pairs containing information about your get static - // IPs request. - StaticIps []*StaticIp `locationName:"staticIps" type:"list"` + // The disks attached to the instance. + Disks []*Disk `locationName:"disks" type:"list"` + + // The amount of RAM in GB on the instance (e.g., 1.0). + RamSizeInGb *float64 `locationName:"ramSizeInGb" type:"float"` } // String returns the string representation -func (s GetStaticIpsOutput) String() string { +func (s InstanceHardware) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GetStaticIpsOutput) GoString() string { +func (s InstanceHardware) GoString() string { return s.String() } -// SetNextPageToken sets the NextPageToken field's value. -func (s *GetStaticIpsOutput) SetNextPageToken(v string) *GetStaticIpsOutput { - s.NextPageToken = &v +// SetCpuCount sets the CpuCount field's value. +func (s *InstanceHardware) SetCpuCount(v int64) *InstanceHardware { + s.CpuCount = &v return s } -// SetStaticIps sets the StaticIps field's value. -func (s *GetStaticIpsOutput) SetStaticIps(v []*StaticIp) *GetStaticIpsOutput { - s.StaticIps = v +// SetDisks sets the Disks field's value. +func (s *InstanceHardware) SetDisks(v []*Disk) *InstanceHardware { + s.Disks = v return s } -type ImportKeyPairInput struct { +// SetRamSizeInGb sets the RamSizeInGb field's value. +func (s *InstanceHardware) SetRamSizeInGb(v float64) *InstanceHardware { + s.RamSizeInGb = &v + return s +} + +// Describes information about the health of the instance. +type InstanceHealthSummary struct { _ struct{} `type:"structure"` - // The name of the key pair for which you want to import the public key. 
- // - // KeyPairName is a required field - KeyPairName *string `locationName:"keyPairName" type:"string" required:"true"` + // Describes the overall instance health. Valid values are below. + InstanceHealth *string `locationName:"instanceHealth" type:"string" enum:"InstanceHealthState"` - // A base64-encoded public key of the ssh-rsa type. + // More information about the instance health. If the instanceHealth is healthy, + // then an instanceHealthReason value is not provided. // - // PublicKeyBase64 is a required field - PublicKeyBase64 *string `locationName:"publicKeyBase64" type:"string" required:"true"` + // If instanceHealth is initial, the instanceHealthReason value can be one of + // the following: + // + // * Lb.RegistrationInProgress - The target instance is in the process of + // being registered with the load balancer. + // + // * Lb.InitialHealthChecking - The Lightsail load balancer is still sending + // the target instance the minimum number of health checks required to determine + // its health status. + // + // If instanceHealth is unhealthy, the instanceHealthReason value can be one + // of the following: + // + // * Instance.ResponseCodeMismatch - The health checks did not return an + // expected HTTP code. + // + // * Instance.Timeout - The health check requests timed out. + // + // * Instance.FailedHealthChecks - The health checks failed because the connection + // to the target instance timed out, the target instance response was malformed, + // or the target instance failed the health check for an unknown reason. + // + // * Lb.InternalError - The health checks failed due to an internal error. + // + // If instanceHealth is unused, the instanceHealthReason value can be one of + // the following: + // + // * Instance.NotRegistered - The target instance is not registered with + // the target group. + // + // * Instance.NotInUse - The target group is not used by any load balancer, + // or the target instance is in an Availability Zone that is not enabled + // for its load balancer. + // + // * Instance.IpUnusable - The target IP address is reserved for use by a + // Lightsail load balancer. + // + // * Instance.InvalidState - The target is in the stopped or terminated state. + // + // If instanceHealth is draining, the instanceHealthReason value can be one + // of the following: + // + // * Instance.DeregistrationInProgress - The target instance is in the process + // of being deregistered and the deregistration delay period has not expired. + InstanceHealthReason *string `locationName:"instanceHealthReason" type:"string" enum:"InstanceHealthReason"` + + // The name of the Lightsail instance for which you are requesting health check + // data. + InstanceName *string `locationName:"instanceName" type:"string"` } // String returns the string representation -func (s ImportKeyPairInput) String() string { +func (s InstanceHealthSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ImportKeyPairInput) GoString() string { +func (s InstanceHealthSummary) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *ImportKeyPairInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ImportKeyPairInput"} - if s.KeyPairName == nil { - invalidParams.Add(request.NewErrParamRequired("KeyPairName")) - } - if s.PublicKeyBase64 == nil { - invalidParams.Add(request.NewErrParamRequired("PublicKeyBase64")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetInstanceHealth sets the InstanceHealth field's value. +func (s *InstanceHealthSummary) SetInstanceHealth(v string) *InstanceHealthSummary { + s.InstanceHealth = &v + return s } -// SetKeyPairName sets the KeyPairName field's value. -func (s *ImportKeyPairInput) SetKeyPairName(v string) *ImportKeyPairInput { - s.KeyPairName = &v +// SetInstanceHealthReason sets the InstanceHealthReason field's value. +func (s *InstanceHealthSummary) SetInstanceHealthReason(v string) *InstanceHealthSummary { + s.InstanceHealthReason = &v return s } -// SetPublicKeyBase64 sets the PublicKeyBase64 field's value. -func (s *ImportKeyPairInput) SetPublicKeyBase64(v string) *ImportKeyPairInput { - s.PublicKeyBase64 = &v +// SetInstanceName sets the InstanceName field's value. +func (s *InstanceHealthSummary) SetInstanceName(v string) *InstanceHealthSummary { + s.InstanceName = &v return s } -type ImportKeyPairOutput struct { +// Describes monthly data transfer rates and port information for an instance. +type InstanceNetworking struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the request operation. - Operation *Operation `locationName:"operation" type:"structure"` + // The amount of data in GB allocated for monthly data transfers. + MonthlyTransfer *MonthlyTransfer `locationName:"monthlyTransfer" type:"structure"` + + // An array of key-value pairs containing information about the ports on the + // instance. + Ports []*InstancePortInfo `locationName:"ports" type:"list"` } // String returns the string representation -func (s ImportKeyPairOutput) String() string { +func (s InstanceNetworking) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ImportKeyPairOutput) GoString() string { +func (s InstanceNetworking) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *ImportKeyPairOutput) SetOperation(v *Operation) *ImportKeyPairOutput { - s.Operation = v +// SetMonthlyTransfer sets the MonthlyTransfer field's value. +func (s *InstanceNetworking) SetMonthlyTransfer(v *MonthlyTransfer) *InstanceNetworking { + s.MonthlyTransfer = v return s } -// Describes an instance (a virtual private server). -type Instance struct { - _ struct{} `type:"structure"` - - // The Amazon Resource Name (ARN) of the instance (e.g., arn:aws:lightsail:us-east-2:123456789101:Instance/244ad76f-8aad-4741-809f-12345EXAMPLE). - Arn *string `locationName:"arn" type:"string"` - - // The blueprint ID (e.g., os_amlinux_2016_03). - BlueprintId *string `locationName:"blueprintId" type:"string"` - - // The friendly name of the blueprint (e.g., Amazon Linux). - BlueprintName *string `locationName:"blueprintName" type:"string"` - - // The bundle for the instance (e.g., micro_1_0). - BundleId *string `locationName:"bundleId" type:"string"` - - // The timestamp when the instance was created (e.g., 1479734909.17). - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` - - // The size of the vCPU and the amount of RAM for the instance. 
- Hardware *InstanceHardware `locationName:"hardware" type:"structure"` - - // The IPv6 address of the instance. - Ipv6Address *string `locationName:"ipv6Address" type:"string"` - - // A Boolean value indicating whether this instance has a static IP assigned - // to it. - IsStaticIp *bool `locationName:"isStaticIp" type:"boolean"` - - // The region name and availability zone where the instance is located. - Location *ResourceLocation `locationName:"location" type:"structure"` - - // The name the user gave the instance (e.g., Amazon_Linux-1GB-Ohio-1). - Name *string `locationName:"name" type:"string"` +// SetPorts sets the Ports field's value. +func (s *InstanceNetworking) SetPorts(v []*InstancePortInfo) *InstanceNetworking { + s.Ports = v + return s +} - // Information about the public ports and monthly data transfer rates for the - // instance. - Networking *InstanceNetworking `locationName:"networking" type:"structure"` +// Describes information about the instance ports. +type InstancePortInfo struct { + _ struct{} `type:"structure"` - // The private IP address of the instance. - PrivateIpAddress *string `locationName:"privateIpAddress" type:"string"` + // The access direction (inbound or outbound). + AccessDirection *string `locationName:"accessDirection" type:"string" enum:"AccessDirection"` - // The public IP address of the instance. - PublicIpAddress *string `locationName:"publicIpAddress" type:"string"` + // The location from which access is allowed (e.g., Anywhere (0.0.0.0/0)). + AccessFrom *string `locationName:"accessFrom" type:"string"` - // The type of resource (usually Instance). - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + // The type of access (Public or Private). + AccessType *string `locationName:"accessType" type:"string" enum:"PortAccessType"` - // The name of the SSH key being used to connect to the instance (e.g., LightsailDefaultKeyPair). - SshKeyName *string `locationName:"sshKeyName" type:"string"` + // The common name. + CommonName *string `locationName:"commonName" type:"string"` - // The status code and the state (e.g., running) for the instance. - State *InstanceState `locationName:"state" type:"structure"` + // The first port in the range. + FromPort *int64 `locationName:"fromPort" type:"integer"` - // The support code. Include this code in your email to support when you have - // questions about an instance or another resource in Lightsail. This code enables - // our support team to look up your Lightsail information more easily. - SupportCode *string `locationName:"supportCode" type:"string"` + // The protocol being used. Can be one of the following. + // + // * tcp - Transmission Control Protocol (TCP) provides reliable, ordered, + // and error-checked delivery of streamed data between applications running + // on hosts communicating by an IP network. If you have an application that + // doesn't require reliable data stream service, use UDP instead. + // + // * all - All transport layer protocol types. For more general information, + // see Transport layer (https://en.wikipedia.org/wiki/Transport_layer) on + // Wikipedia. + // + // * udp - With User Datagram Protocol (UDP), computer applications can send + // messages (or datagrams) to other hosts on an Internet Protocol (IP) network. + // Prior communications are not required to set up transmission channels + // or data paths. 
Applications that don't require reliable data stream service + // can use UDP, which provides a connectionless datagram service that emphasizes + // reduced latency over reliability. If you do require reliable data stream + // service, use TCP instead. + Protocol *string `locationName:"protocol" type:"string" enum:"NetworkProtocol"` - // The user name for connecting to the instance (e.g., ec2-user). - Username *string `locationName:"username" type:"string"` + // The last port in the range. + ToPort *int64 `locationName:"toPort" type:"integer"` } // String returns the string representation -func (s Instance) String() string { +func (s InstancePortInfo) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Instance) GoString() string { +func (s InstancePortInfo) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *Instance) SetArn(v string) *Instance { - s.Arn = &v +// SetAccessDirection sets the AccessDirection field's value. +func (s *InstancePortInfo) SetAccessDirection(v string) *InstancePortInfo { + s.AccessDirection = &v return s } -// SetBlueprintId sets the BlueprintId field's value. -func (s *Instance) SetBlueprintId(v string) *Instance { - s.BlueprintId = &v +// SetAccessFrom sets the AccessFrom field's value. +func (s *InstancePortInfo) SetAccessFrom(v string) *InstancePortInfo { + s.AccessFrom = &v return s } -// SetBlueprintName sets the BlueprintName field's value. -func (s *Instance) SetBlueprintName(v string) *Instance { - s.BlueprintName = &v +// SetAccessType sets the AccessType field's value. +func (s *InstancePortInfo) SetAccessType(v string) *InstancePortInfo { + s.AccessType = &v return s } -// SetBundleId sets the BundleId field's value. -func (s *Instance) SetBundleId(v string) *Instance { - s.BundleId = &v +// SetCommonName sets the CommonName field's value. +func (s *InstancePortInfo) SetCommonName(v string) *InstancePortInfo { + s.CommonName = &v return s } -// SetCreatedAt sets the CreatedAt field's value. -func (s *Instance) SetCreatedAt(v time.Time) *Instance { - s.CreatedAt = &v +// SetFromPort sets the FromPort field's value. +func (s *InstancePortInfo) SetFromPort(v int64) *InstancePortInfo { + s.FromPort = &v return s } -// SetHardware sets the Hardware field's value. -func (s *Instance) SetHardware(v *InstanceHardware) *Instance { - s.Hardware = v +// SetProtocol sets the Protocol field's value. +func (s *InstancePortInfo) SetProtocol(v string) *InstancePortInfo { + s.Protocol = &v return s } -// SetIpv6Address sets the Ipv6Address field's value. -func (s *Instance) SetIpv6Address(v string) *Instance { - s.Ipv6Address = &v +// SetToPort sets the ToPort field's value. +func (s *InstancePortInfo) SetToPort(v int64) *InstancePortInfo { + s.ToPort = &v return s } -// SetIsStaticIp sets the IsStaticIp field's value. -func (s *Instance) SetIsStaticIp(v bool) *Instance { - s.IsStaticIp = &v - return s -} +// Describes the port state. +type InstancePortState struct { + _ struct{} `type:"structure"` -// SetLocation sets the Location field's value. -func (s *Instance) SetLocation(v *ResourceLocation) *Instance { - s.Location = v - return s -} + // The first port in the range. + FromPort *int64 `locationName:"fromPort" type:"integer"` -// SetName sets the Name field's value. -func (s *Instance) SetName(v string) *Instance { - s.Name = &v - return s -} + // The protocol being used. Can be one of the following. 
+ // + // * tcp - Transmission Control Protocol (TCP) provides reliable, ordered, + // and error-checked delivery of streamed data between applications running + // on hosts communicating by an IP network. If you have an application that + // doesn't require reliable data stream service, use UDP instead. + // + // * all - All transport layer protocol types. For more general information, + // see Transport layer (https://en.wikipedia.org/wiki/Transport_layer) on + // Wikipedia. + // + // * udp - With User Datagram Protocol (UDP), computer applications can send + // messages (or datagrams) to other hosts on an Internet Protocol (IP) network. + // Prior communications are not required to set up transmission channels + // or data paths. Applications that don't require reliable data stream service + // can use UDP, which provides a connectionless datagram service that emphasizes + // reduced latency over reliability. If you do require reliable data stream + // service, use TCP instead. + Protocol *string `locationName:"protocol" type:"string" enum:"NetworkProtocol"` -// SetNetworking sets the Networking field's value. -func (s *Instance) SetNetworking(v *InstanceNetworking) *Instance { - s.Networking = v - return s -} + // Specifies whether the instance port is open or closed. + State *string `locationName:"state" type:"string" enum:"PortState"` -// SetPrivateIpAddress sets the PrivateIpAddress field's value. -func (s *Instance) SetPrivateIpAddress(v string) *Instance { - s.PrivateIpAddress = &v - return s + // The last port in the range. + ToPort *int64 `locationName:"toPort" type:"integer"` } -// SetPublicIpAddress sets the PublicIpAddress field's value. -func (s *Instance) SetPublicIpAddress(v string) *Instance { - s.PublicIpAddress = &v - return s +// String returns the string representation +func (s InstancePortState) String() string { + return awsutil.Prettify(s) } -// SetResourceType sets the ResourceType field's value. -func (s *Instance) SetResourceType(v string) *Instance { - s.ResourceType = &v - return s +// GoString returns the string representation +func (s InstancePortState) GoString() string { + return s.String() } -// SetSshKeyName sets the SshKeyName field's value. -func (s *Instance) SetSshKeyName(v string) *Instance { - s.SshKeyName = &v +// SetFromPort sets the FromPort field's value. +func (s *InstancePortState) SetFromPort(v int64) *InstancePortState { + s.FromPort = &v return s } -// SetState sets the State field's value. -func (s *Instance) SetState(v *InstanceState) *Instance { - s.State = v +// SetProtocol sets the Protocol field's value. +func (s *InstancePortState) SetProtocol(v string) *InstancePortState { + s.Protocol = &v return s } -// SetSupportCode sets the SupportCode field's value. -func (s *Instance) SetSupportCode(v string) *Instance { - s.SupportCode = &v +// SetState sets the State field's value. +func (s *InstancePortState) SetState(v string) *InstancePortState { + s.State = &v return s } -// SetUsername sets the Username field's value. -func (s *Instance) SetUsername(v string) *Instance { - s.Username = &v +// SetToPort sets the ToPort field's value. +func (s *InstancePortState) SetToPort(v int64) *InstancePortState { + s.ToPort = &v return s } -// The parameters for gaining temporary access to one of your Amazon Lightsail -// instances. -type InstanceAccessDetails struct { +// Describes the snapshot of the virtual private server, or instance. 
+type InstanceSnapshot struct { _ struct{} `type:"structure"` - // For SSH access, the public key to use when accessing your instance For OpenSSH - // clients (e.g., command line SSH), you should save this value to tempkey-cert.pub. - CertKey *string `locationName:"certKey" type:"string"` + // The Amazon Resource Name (ARN) of the snapshot (e.g., arn:aws:lightsail:us-east-2:123456789101:InstanceSnapshot/d23b5706-3322-4d83-81e5-12345EXAMPLE). + Arn *string `locationName:"arn" type:"string"` - // For SSH access, the date on which the temporary keys expire. - ExpiresAt *time.Time `locationName:"expiresAt" type:"timestamp" timestampFormat:"unix"` + // The timestamp when the snapshot was created (e.g., 1479907467.024). + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The name of this Amazon Lightsail instance. - InstanceName *string `locationName:"instanceName" type:"string"` + // An array of disk objects containing information about all block storage disks. + FromAttachedDisks []*Disk `locationName:"fromAttachedDisks" type:"list"` - // The public IP address of the Amazon Lightsail instance. - IpAddress *string `locationName:"ipAddress" type:"string"` + // The blueprint ID from which you created the snapshot (e.g., os_debian_8_3). + // A blueprint is a virtual private server (or instance) image used to create + // instances quickly. + FromBlueprintId *string `locationName:"fromBlueprintId" type:"string"` - // For RDP access, the password for your Amazon Lightsail instance. Password - // will be an empty string if the password for your new instance is not ready - // yet. When you create an instance, it can take up to 15 minutes for the instance - // to be ready. - // - // If you create an instance using any key pair other than the default (LightsailDefaultKeyPair), - // password will always be an empty string. - // - // If you change the Administrator password on the instance, Lightsail will - // continue to return the original password value. When accessing the instance - // using RDP, you need to manually enter the Administrator password after changing - // it from the default. - Password *string `locationName:"password" type:"string"` + // The bundle ID from which you created the snapshot (e.g., micro_1_0). + FromBundleId *string `locationName:"fromBundleId" type:"string"` - // For a Windows Server-based instance, an object with the data you can use - // to retrieve your password. This is only needed if password is empty and the - // instance is not new (and therefore the password is not ready yet). When you - // create an instance, it can take up to 15 minutes for the instance to be ready. - PasswordData *PasswordData `locationName:"passwordData" type:"structure"` + // The Amazon Resource Name (ARN) of the instance from which the snapshot was + // created (e.g., arn:aws:lightsail:us-east-2:123456789101:Instance/64b8404c-ccb1-430b-8daf-12345EXAMPLE). + FromInstanceArn *string `locationName:"fromInstanceArn" type:"string"` - // For SSH access, the temporary private key. For OpenSSH clients (e.g., command - // line SSH), you should save this value to tempkey). - PrivateKey *string `locationName:"privateKey" type:"string"` + // The instance from which the snapshot was created. + FromInstanceName *string `locationName:"fromInstanceName" type:"string"` - // The protocol for these Amazon Lightsail instance access details. 
- Protocol *string `locationName:"protocol" type:"string" enum:"InstanceAccessProtocol"` + // The region name and Availability Zone where you created the snapshot. + Location *ResourceLocation `locationName:"location" type:"structure"` - // The user name to use when logging in to the Amazon Lightsail instance. - Username *string `locationName:"username" type:"string"` + // The name of the snapshot. + Name *string `locationName:"name" type:"string"` + + // The progress of the snapshot. + Progress *string `locationName:"progress" type:"string"` + + // The type of resource (usually InstanceSnapshot). + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // The size in GB of the SSD. + SizeInGb *int64 `locationName:"sizeInGb" type:"integer"` + + // The state the snapshot is in. + State *string `locationName:"state" type:"string" enum:"InstanceSnapshotState"` + + // The support code. Include this code in your email to support when you have + // questions about an instance or another resource in Lightsail. This code enables + // our support team to look up your Lightsail information more easily. + SupportCode *string `locationName:"supportCode" type:"string"` +} + +// String returns the string representation +func (s InstanceSnapshot) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceSnapshot) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *InstanceSnapshot) SetArn(v string) *InstanceSnapshot { + s.Arn = &v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *InstanceSnapshot) SetCreatedAt(v time.Time) *InstanceSnapshot { + s.CreatedAt = &v + return s +} + +// SetFromAttachedDisks sets the FromAttachedDisks field's value. +func (s *InstanceSnapshot) SetFromAttachedDisks(v []*Disk) *InstanceSnapshot { + s.FromAttachedDisks = v + return s } -// String returns the string representation -func (s InstanceAccessDetails) String() string { - return awsutil.Prettify(s) +// SetFromBlueprintId sets the FromBlueprintId field's value. +func (s *InstanceSnapshot) SetFromBlueprintId(v string) *InstanceSnapshot { + s.FromBlueprintId = &v + return s } -// GoString returns the string representation -func (s InstanceAccessDetails) GoString() string { - return s.String() +// SetFromBundleId sets the FromBundleId field's value. +func (s *InstanceSnapshot) SetFromBundleId(v string) *InstanceSnapshot { + s.FromBundleId = &v + return s } -// SetCertKey sets the CertKey field's value. -func (s *InstanceAccessDetails) SetCertKey(v string) *InstanceAccessDetails { - s.CertKey = &v +// SetFromInstanceArn sets the FromInstanceArn field's value. +func (s *InstanceSnapshot) SetFromInstanceArn(v string) *InstanceSnapshot { + s.FromInstanceArn = &v return s } -// SetExpiresAt sets the ExpiresAt field's value. -func (s *InstanceAccessDetails) SetExpiresAt(v time.Time) *InstanceAccessDetails { - s.ExpiresAt = &v +// SetFromInstanceName sets the FromInstanceName field's value. +func (s *InstanceSnapshot) SetFromInstanceName(v string) *InstanceSnapshot { + s.FromInstanceName = &v return s } -// SetInstanceName sets the InstanceName field's value. -func (s *InstanceAccessDetails) SetInstanceName(v string) *InstanceAccessDetails { - s.InstanceName = &v +// SetLocation sets the Location field's value. +func (s *InstanceSnapshot) SetLocation(v *ResourceLocation) *InstanceSnapshot { + s.Location = v return s } -// SetIpAddress sets the IpAddress field's value. 
-func (s *InstanceAccessDetails) SetIpAddress(v string) *InstanceAccessDetails { - s.IpAddress = &v +// SetName sets the Name field's value. +func (s *InstanceSnapshot) SetName(v string) *InstanceSnapshot { + s.Name = &v return s } -// SetPassword sets the Password field's value. -func (s *InstanceAccessDetails) SetPassword(v string) *InstanceAccessDetails { - s.Password = &v +// SetProgress sets the Progress field's value. +func (s *InstanceSnapshot) SetProgress(v string) *InstanceSnapshot { + s.Progress = &v return s } -// SetPasswordData sets the PasswordData field's value. -func (s *InstanceAccessDetails) SetPasswordData(v *PasswordData) *InstanceAccessDetails { - s.PasswordData = v +// SetResourceType sets the ResourceType field's value. +func (s *InstanceSnapshot) SetResourceType(v string) *InstanceSnapshot { + s.ResourceType = &v return s } -// SetPrivateKey sets the PrivateKey field's value. -func (s *InstanceAccessDetails) SetPrivateKey(v string) *InstanceAccessDetails { - s.PrivateKey = &v +// SetSizeInGb sets the SizeInGb field's value. +func (s *InstanceSnapshot) SetSizeInGb(v int64) *InstanceSnapshot { + s.SizeInGb = &v return s } -// SetProtocol sets the Protocol field's value. -func (s *InstanceAccessDetails) SetProtocol(v string) *InstanceAccessDetails { - s.Protocol = &v +// SetState sets the State field's value. +func (s *InstanceSnapshot) SetState(v string) *InstanceSnapshot { + s.State = &v return s } -// SetUsername sets the Username field's value. -func (s *InstanceAccessDetails) SetUsername(v string) *InstanceAccessDetails { - s.Username = &v +// SetSupportCode sets the SupportCode field's value. +func (s *InstanceSnapshot) SetSupportCode(v string) *InstanceSnapshot { + s.SupportCode = &v return s } -// Describes the hardware for the instance. -type InstanceHardware struct { +// Describes the virtual private server (or instance) status. +type InstanceState struct { _ struct{} `type:"structure"` - // The number of vCPUs the instance has. - CpuCount *int64 `locationName:"cpuCount" type:"integer"` - - // The disks attached to the instance. - Disks []*Disk `locationName:"disks" type:"list"` + // The status code for the instance. + Code *int64 `locationName:"code" type:"integer"` - // The amount of RAM in GB on the instance (e.g., 1.0). - RamSizeInGb *float64 `locationName:"ramSizeInGb" type:"float"` + // The state of the instance (e.g., running or pending). + Name *string `locationName:"name" type:"string"` } // String returns the string representation -func (s InstanceHardware) String() string { +func (s InstanceState) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InstanceHardware) GoString() string { +func (s InstanceState) GoString() string { return s.String() } -// SetCpuCount sets the CpuCount field's value. -func (s *InstanceHardware) SetCpuCount(v int64) *InstanceHardware { - s.CpuCount = &v +// SetCode sets the Code field's value. +func (s *InstanceState) SetCode(v int64) *InstanceState { + s.Code = &v return s } -// SetDisks sets the Disks field's value. -func (s *InstanceHardware) SetDisks(v []*Disk) *InstanceHardware { - s.Disks = v +// SetName sets the Name field's value. +func (s *InstanceState) SetName(v string) *InstanceState { + s.Name = &v return s } -// SetRamSizeInGb sets the RamSizeInGb field's value. 
-func (s *InstanceHardware) SetRamSizeInGb(v float64) *InstanceHardware { - s.RamSizeInGb = &v - return s +type IsVpcPeeredInput struct { + _ struct{} `type:"structure"` } -// Describes information about the health of the instance. -type InstanceHealthSummary struct { - _ struct{} `type:"structure"` +// String returns the string representation +func (s IsVpcPeeredInput) String() string { + return awsutil.Prettify(s) +} - // Describes the overall instance health. Valid values are below. - InstanceHealth *string `locationName:"instanceHealth" type:"string" enum:"InstanceHealthState"` +// GoString returns the string representation +func (s IsVpcPeeredInput) GoString() string { + return s.String() +} - // More information about the instance health. If the instanceHealth is healthy, - // then an instanceHealthReason value is not provided. - // - // If instanceHealth is initial, the instanceHealthReason value can be one of - // the following: - // - // * Lb.RegistrationInProgress - The target instance is in the process of - // being registered with the load balancer. - // - // * Lb.InitialHealthChecking - The Lightsail load balancer is still sending - // the target instance the minimum number of health checks required to determine - // its health status. - // - // If instanceHealth is unhealthy, the instanceHealthReason value can be one - // of the following: - // - // * Instance.ResponseCodeMismatch - The health checks did not return an - // expected HTTP code. - // - // * Instance.Timeout - The health check requests timed out. - // - // * Instance.FailedHealthChecks - The health checks failed because the connection - // to the target instance timed out, the target instance response was malformed, - // or the target instance failed the health check for an unknown reason. - // - // * Lb.InternalError - The health checks failed due to an internal error. - // - // If instanceHealth is unused, the instanceHealthReason value can be one of - // the following: - // - // * Instance.NotRegistered - The target instance is not registered with - // the target group. - // - // * Instance.NotInUse - The target group is not used by any load balancer, - // or the target instance is in an Availability Zone that is not enabled - // for its load balancer. - // - // * Instance.IpUnusable - The target IP address is reserved for use by a - // Lightsail load balancer. - // - // * Instance.InvalidState - The target is in the stopped or terminated state. - // - // If instanceHealth is draining, the instanceHealthReason value can be one - // of the following: - // - // * Instance.DeregistrationInProgress - The target instance is in the process - // of being deregistered and the deregistration delay period has not expired. - InstanceHealthReason *string `locationName:"instanceHealthReason" type:"string" enum:"InstanceHealthReason"` +type IsVpcPeeredOutput struct { + _ struct{} `type:"structure"` - // The name of the Lightsail instance for which you are requesting health check - // data. - InstanceName *string `locationName:"instanceName" type:"string"` + // Returns true if the Lightsail VPC is peered; otherwise, false. 
+ IsPeered *bool `locationName:"isPeered" type:"boolean"` } // String returns the string representation -func (s InstanceHealthSummary) String() string { +func (s IsVpcPeeredOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InstanceHealthSummary) GoString() string { +func (s IsVpcPeeredOutput) GoString() string { return s.String() } -// SetInstanceHealth sets the InstanceHealth field's value. -func (s *InstanceHealthSummary) SetInstanceHealth(v string) *InstanceHealthSummary { - s.InstanceHealth = &v +// SetIsPeered sets the IsPeered field's value. +func (s *IsVpcPeeredOutput) SetIsPeered(v bool) *IsVpcPeeredOutput { + s.IsPeered = &v return s } -// SetInstanceHealthReason sets the InstanceHealthReason field's value. -func (s *InstanceHealthSummary) SetInstanceHealthReason(v string) *InstanceHealthSummary { - s.InstanceHealthReason = &v - return s -} +// Describes the SSH key pair. +type KeyPair struct { + _ struct{} `type:"structure"` -// SetInstanceName sets the InstanceName field's value. -func (s *InstanceHealthSummary) SetInstanceName(v string) *InstanceHealthSummary { - s.InstanceName = &v - return s -} + // The Amazon Resource Name (ARN) of the key pair (e.g., arn:aws:lightsail:us-east-2:123456789101:KeyPair/05859e3d-331d-48ba-9034-12345EXAMPLE). + Arn *string `locationName:"arn" type:"string"` -// Describes monthly data transfer rates and port information for an instance. -type InstanceNetworking struct { - _ struct{} `type:"structure"` + // The timestamp when the key pair was created (e.g., 1479816991.349). + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The amount of data in GB allocated for monthly data transfers. - MonthlyTransfer *MonthlyTransfer `locationName:"monthlyTransfer" type:"structure"` + // The RSA fingerprint of the key pair. + Fingerprint *string `locationName:"fingerprint" type:"string"` - // An array of key-value pairs containing information about the ports on the - // instance. - Ports []*InstancePortInfo `locationName:"ports" type:"list"` + // The region name and Availability Zone where the key pair was created. + Location *ResourceLocation `locationName:"location" type:"structure"` + + // The friendly name of the SSH key pair. + Name *string `locationName:"name" type:"string"` + + // The resource type (usually KeyPair). + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // The support code. Include this code in your email to support when you have + // questions about an instance or another resource in Lightsail. This code enables + // our support team to look up your Lightsail information more easily. + SupportCode *string `locationName:"supportCode" type:"string"` } // String returns the string representation -func (s InstanceNetworking) String() string { +func (s KeyPair) String() string { return awsutil.Prettify(s) } -// GoString returns the string representation -func (s InstanceNetworking) GoString() string { - return s.String() +// GoString returns the string representation +func (s KeyPair) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *KeyPair) SetArn(v string) *KeyPair { + s.Arn = &v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *KeyPair) SetCreatedAt(v time.Time) *KeyPair { + s.CreatedAt = &v + return s +} + +// SetFingerprint sets the Fingerprint field's value. 
+func (s *KeyPair) SetFingerprint(v string) *KeyPair { + s.Fingerprint = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *KeyPair) SetLocation(v *ResourceLocation) *KeyPair { + s.Location = v + return s +} + +// SetName sets the Name field's value. +func (s *KeyPair) SetName(v string) *KeyPair { + s.Name = &v + return s } -// SetMonthlyTransfer sets the MonthlyTransfer field's value. -func (s *InstanceNetworking) SetMonthlyTransfer(v *MonthlyTransfer) *InstanceNetworking { - s.MonthlyTransfer = v +// SetResourceType sets the ResourceType field's value. +func (s *KeyPair) SetResourceType(v string) *KeyPair { + s.ResourceType = &v return s } -// SetPorts sets the Ports field's value. -func (s *InstanceNetworking) SetPorts(v []*InstancePortInfo) *InstanceNetworking { - s.Ports = v +// SetSupportCode sets the SupportCode field's value. +func (s *KeyPair) SetSupportCode(v string) *KeyPair { + s.SupportCode = &v return s } -// Describes information about the instance ports. -type InstancePortInfo struct { +// Describes the Lightsail load balancer. +type LoadBalancer struct { _ struct{} `type:"structure"` - // The access direction (inbound or outbound). - AccessDirection *string `locationName:"accessDirection" type:"string" enum:"AccessDirection"` + // The Amazon Resource Name (ARN) of the load balancer. + Arn *string `locationName:"arn" type:"string"` - // The location from which access is allowed (e.g., Anywhere (0.0.0.0/0)). - AccessFrom *string `locationName:"accessFrom" type:"string"` + // A string to string map of the configuration options for your load balancer. + // Valid values are listed below. + ConfigurationOptions map[string]*string `locationName:"configurationOptions" type:"map"` - // The type of access (Public or Private). - AccessType *string `locationName:"accessType" type:"string" enum:"PortAccessType"` + // The date when your load balancer was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The common name. - CommonName *string `locationName:"commonName" type:"string"` + // The DNS name of your Lightsail load balancer. + DnsName *string `locationName:"dnsName" type:"string"` - // The first port in the range. - FromPort *int64 `locationName:"fromPort" type:"integer"` + // The path you specified to perform your health checks. If no path is specified, + // the load balancer tries to make a request to the default (root) page. + HealthCheckPath *string `locationName:"healthCheckPath" type:"string"` - // The protocol being used. Can be one of the following. - // - // * tcp - Transmission Control Protocol (TCP) provides reliable, ordered, - // and error-checked delivery of streamed data between applications running - // on hosts communicating by an IP network. If you have an application that - // doesn't require reliable data stream service, use UDP instead. - // - // * all - All transport layer protocol types. For more general information, - // see Transport layer (https://en.wikipedia.org/wiki/Transport_layer) on - // Wikipedia. + // An array of InstanceHealthSummary objects describing the health of the load + // balancer. + InstanceHealthSummary []*InstanceHealthSummary `locationName:"instanceHealthSummary" type:"list"` + + // The port where the load balancer will direct traffic to your Lightsail instances. + // For HTTP traffic, it's port 80. For HTTPS traffic, it's port 443. + InstancePort *int64 `locationName:"instancePort" type:"integer"` + + // The AWS Region where your load balancer was created (e.g., us-east-2a). 
Lightsail + // automatically creates your load balancer across Availability Zones. + Location *ResourceLocation `locationName:"location" type:"structure"` + + // The name of the load balancer (e.g., my-load-balancer). + Name *string `locationName:"name" type:"string"` + + // The protocol you have enabled for your load balancer. Valid values are below. // - // * udp - With User Datagram Protocol (UDP), computer applications can send - // messages (or datagrams) to other hosts on an Internet Protocol (IP) network. - // Prior communications are not required to set up transmission channels - // or data paths. Applications that don't require reliable data stream service - // can use UDP, which provides a connectionless datagram service that emphasizes - // reduced latency over reliability. If you do require reliable data stream - // service, use TCP instead. - Protocol *string `locationName:"protocol" type:"string" enum:"NetworkProtocol"` + // You can't just have HTTP_HTTPS, but you can have just HTTP. + Protocol *string `locationName:"protocol" type:"string" enum:"LoadBalancerProtocol"` - // The last port in the range. - ToPort *int64 `locationName:"toPort" type:"integer"` + // An array of public port settings for your load balancer. For HTTP, use port + // 80. For HTTPS, use port 443. + PublicPorts []*int64 `locationName:"publicPorts" type:"list"` + + // The resource type (e.g., LoadBalancer. + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // The status of your load balancer. Valid values are below. + State *string `locationName:"state" type:"string" enum:"LoadBalancerState"` + + // The support code. Include this code in your email to support when you have + // questions about your Lightsail load balancer. This code enables our support + // team to look up your Lightsail information more easily. + SupportCode *string `locationName:"supportCode" type:"string"` + + // An array of LoadBalancerTlsCertificateSummary objects that provide additional + // information about the SSL/TLS certificates. For example, if true, the certificate + // is attached to the load balancer. + TlsCertificateSummaries []*LoadBalancerTlsCertificateSummary `locationName:"tlsCertificateSummaries" type:"list"` } // String returns the string representation -func (s InstancePortInfo) String() string { +func (s LoadBalancer) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InstancePortInfo) GoString() string { +func (s LoadBalancer) GoString() string { return s.String() } -// SetAccessDirection sets the AccessDirection field's value. -func (s *InstancePortInfo) SetAccessDirection(v string) *InstancePortInfo { - s.AccessDirection = &v +// SetArn sets the Arn field's value. +func (s *LoadBalancer) SetArn(v string) *LoadBalancer { + s.Arn = &v return s } -// SetAccessFrom sets the AccessFrom field's value. -func (s *InstancePortInfo) SetAccessFrom(v string) *InstancePortInfo { - s.AccessFrom = &v +// SetConfigurationOptions sets the ConfigurationOptions field's value. +func (s *LoadBalancer) SetConfigurationOptions(v map[string]*string) *LoadBalancer { + s.ConfigurationOptions = v return s } -// SetAccessType sets the AccessType field's value. -func (s *InstancePortInfo) SetAccessType(v string) *InstancePortInfo { - s.AccessType = &v +// SetCreatedAt sets the CreatedAt field's value. +func (s *LoadBalancer) SetCreatedAt(v time.Time) *LoadBalancer { + s.CreatedAt = &v return s } -// SetCommonName sets the CommonName field's value. 
-func (s *InstancePortInfo) SetCommonName(v string) *InstancePortInfo { - s.CommonName = &v +// SetDnsName sets the DnsName field's value. +func (s *LoadBalancer) SetDnsName(v string) *LoadBalancer { + s.DnsName = &v return s } -// SetFromPort sets the FromPort field's value. -func (s *InstancePortInfo) SetFromPort(v int64) *InstancePortInfo { - s.FromPort = &v +// SetHealthCheckPath sets the HealthCheckPath field's value. +func (s *LoadBalancer) SetHealthCheckPath(v string) *LoadBalancer { + s.HealthCheckPath = &v return s } -// SetProtocol sets the Protocol field's value. -func (s *InstancePortInfo) SetProtocol(v string) *InstancePortInfo { - s.Protocol = &v +// SetInstanceHealthSummary sets the InstanceHealthSummary field's value. +func (s *LoadBalancer) SetInstanceHealthSummary(v []*InstanceHealthSummary) *LoadBalancer { + s.InstanceHealthSummary = v return s } -// SetToPort sets the ToPort field's value. -func (s *InstancePortInfo) SetToPort(v int64) *InstancePortInfo { - s.ToPort = &v +// SetInstancePort sets the InstancePort field's value. +func (s *LoadBalancer) SetInstancePort(v int64) *LoadBalancer { + s.InstancePort = &v return s } -// Describes the port state. -type InstancePortState struct { - _ struct{} `type:"structure"` - - // The first port in the range. - FromPort *int64 `locationName:"fromPort" type:"integer"` - - // The protocol being used. Can be one of the following. - // - // * tcp - Transmission Control Protocol (TCP) provides reliable, ordered, - // and error-checked delivery of streamed data between applications running - // on hosts communicating by an IP network. If you have an application that - // doesn't require reliable data stream service, use UDP instead. - // - // * all - All transport layer protocol types. For more general information, - // see Transport layer (https://en.wikipedia.org/wiki/Transport_layer) on - // Wikipedia. - // - // * udp - With User Datagram Protocol (UDP), computer applications can send - // messages (or datagrams) to other hosts on an Internet Protocol (IP) network. - // Prior communications are not required to set up transmission channels - // or data paths. Applications that don't require reliable data stream service - // can use UDP, which provides a connectionless datagram service that emphasizes - // reduced latency over reliability. If you do require reliable data stream - // service, use TCP instead. - Protocol *string `locationName:"protocol" type:"string" enum:"NetworkProtocol"` - - // Specifies whether the instance port is open or closed. - State *string `locationName:"state" type:"string" enum:"PortState"` - - // The last port in the range. - ToPort *int64 `locationName:"toPort" type:"integer"` +// SetLocation sets the Location field's value. +func (s *LoadBalancer) SetLocation(v *ResourceLocation) *LoadBalancer { + s.Location = v + return s } -// String returns the string representation -func (s InstancePortState) String() string { - return awsutil.Prettify(s) +// SetName sets the Name field's value. +func (s *LoadBalancer) SetName(v string) *LoadBalancer { + s.Name = &v + return s } -// GoString returns the string representation -func (s InstancePortState) GoString() string { - return s.String() +// SetProtocol sets the Protocol field's value. +func (s *LoadBalancer) SetProtocol(v string) *LoadBalancer { + s.Protocol = &v + return s } -// SetFromPort sets the FromPort field's value. 
-func (s *InstancePortState) SetFromPort(v int64) *InstancePortState { - s.FromPort = &v +// SetPublicPorts sets the PublicPorts field's value. +func (s *LoadBalancer) SetPublicPorts(v []*int64) *LoadBalancer { + s.PublicPorts = v return s } -// SetProtocol sets the Protocol field's value. -func (s *InstancePortState) SetProtocol(v string) *InstancePortState { - s.Protocol = &v +// SetResourceType sets the ResourceType field's value. +func (s *LoadBalancer) SetResourceType(v string) *LoadBalancer { + s.ResourceType = &v return s } // SetState sets the State field's value. -func (s *InstancePortState) SetState(v string) *InstancePortState { +func (s *LoadBalancer) SetState(v string) *LoadBalancer { s.State = &v return s } -// SetToPort sets the ToPort field's value. -func (s *InstancePortState) SetToPort(v int64) *InstancePortState { - s.ToPort = &v +// SetSupportCode sets the SupportCode field's value. +func (s *LoadBalancer) SetSupportCode(v string) *LoadBalancer { + s.SupportCode = &v + return s +} + +// SetTlsCertificateSummaries sets the TlsCertificateSummaries field's value. +func (s *LoadBalancer) SetTlsCertificateSummaries(v []*LoadBalancerTlsCertificateSummary) *LoadBalancer { + s.TlsCertificateSummaries = v return s } -// Describes the snapshot of the virtual private server, or instance. -type InstanceSnapshot struct { - _ struct{} `type:"structure"` +// Describes a load balancer SSL/TLS certificate. +// +// TLS is just an updated, more secure version of Secure Socket Layer (SSL). +type LoadBalancerTlsCertificate struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the SSL/TLS certificate. + Arn *string `locationName:"arn" type:"string"` + + // The time when you created your SSL/TLS certificate. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // The domain name for your SSL/TLS certificate. + DomainName *string `locationName:"domainName" type:"string"` + + // An array of LoadBalancerTlsCertificateDomainValidationRecord objects describing + // the records. + DomainValidationRecords []*LoadBalancerTlsCertificateDomainValidationRecord `locationName:"domainValidationRecords" type:"list"` + + // The reason for the SSL/TLS certificate validation failure. + FailureReason *string `locationName:"failureReason" type:"string" enum:"LoadBalancerTlsCertificateFailureReason"` + + // When true, the SSL/TLS certificate is attached to the Lightsail load balancer. + IsAttached *bool `locationName:"isAttached" type:"boolean"` + + // The time when the SSL/TLS certificate was issued. + IssuedAt *time.Time `locationName:"issuedAt" type:"timestamp"` + + // The issuer of the certificate. + Issuer *string `locationName:"issuer" type:"string"` + + // The algorithm that was used to generate the key pair (the public and private + // key). + KeyAlgorithm *string `locationName:"keyAlgorithm" type:"string"` + + // The load balancer name where your SSL/TLS certificate is attached. + LoadBalancerName *string `locationName:"loadBalancerName" type:"string"` - // The Amazon Resource Name (ARN) of the snapshot (e.g., arn:aws:lightsail:us-east-2:123456789101:InstanceSnapshot/d23b5706-3322-4d83-81e5-12345EXAMPLE). - Arn *string `locationName:"arn" type:"string"` + // The AWS Region and Availability Zone where you created your certificate. + Location *ResourceLocation `locationName:"location" type:"structure"` - // The timestamp when the snapshot was created (e.g., 1479907467.024). 
- CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // The name of the SSL/TLS certificate (e.g., my-certificate). + Name *string `locationName:"name" type:"string"` - // An array of disk objects containing information about all block storage disks. - FromAttachedDisks []*Disk `locationName:"fromAttachedDisks" type:"list"` + // The timestamp when the SSL/TLS certificate expires. + NotAfter *time.Time `locationName:"notAfter" type:"timestamp"` - // The blueprint ID from which you created the snapshot (e.g., os_debian_8_3). - // A blueprint is a virtual private server (or instance) image used to create - // instances quickly. - FromBlueprintId *string `locationName:"fromBlueprintId" type:"string"` + // The timestamp when the SSL/TLS certificate is first valid. + NotBefore *time.Time `locationName:"notBefore" type:"timestamp"` - // The bundle ID from which you created the snapshot (e.g., micro_1_0). - FromBundleId *string `locationName:"fromBundleId" type:"string"` + // An object containing information about the status of Lightsail's managed + // renewal for the certificate. + RenewalSummary *LoadBalancerTlsCertificateRenewalSummary `locationName:"renewalSummary" type:"structure"` - // The Amazon Resource Name (ARN) of the instance from which the snapshot was - // created (e.g., arn:aws:lightsail:us-east-2:123456789101:Instance/64b8404c-ccb1-430b-8daf-12345EXAMPLE). - FromInstanceArn *string `locationName:"fromInstanceArn" type:"string"` + // The resource type (e.g., LoadBalancerTlsCertificate). + // + // * Instance - A Lightsail instance (a virtual private server) + // + // * StaticIp - A static IP address + // + // * KeyPair - The key pair used to connect to a Lightsail instance + // + // * InstanceSnapshot - A Lightsail instance snapshot + // + // * Domain - A DNS zone + // + // * PeeredVpc - A peered VPC + // + // * LoadBalancer - A Lightsail load balancer + // + // * LoadBalancerTlsCertificate - An SSL/TLS certificate associated with + // a Lightsail load balancer + // + // * Disk - A Lightsail block storage disk + // + // * DiskSnapshot - A block storage disk snapshot + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` - // The instance from which the snapshot was created. - FromInstanceName *string `locationName:"fromInstanceName" type:"string"` + // The reason the certificate was revoked. Valid values are below. + RevocationReason *string `locationName:"revocationReason" type:"string" enum:"LoadBalancerTlsCertificateRevocationReason"` - // The region name and availability zone where you created the snapshot. - Location *ResourceLocation `locationName:"location" type:"structure"` + // The timestamp when the SSL/TLS certificate was revoked. + RevokedAt *time.Time `locationName:"revokedAt" type:"timestamp"` - // The name of the snapshot. - Name *string `locationName:"name" type:"string"` + // The serial number of the certificate. + Serial *string `locationName:"serial" type:"string"` - // The progress of the snapshot. - Progress *string `locationName:"progress" type:"string"` + // The algorithm that was used to sign the certificate. + SignatureAlgorithm *string `locationName:"signatureAlgorithm" type:"string"` - // The type of resource (usually InstanceSnapshot). - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + // The status of the SSL/TLS certificate. Valid values are below. 
+ Status *string `locationName:"status" type:"string" enum:"LoadBalancerTlsCertificateStatus"` - // The size in GB of the SSD. - SizeInGb *int64 `locationName:"sizeInGb" type:"integer"` + // The name of the entity that is associated with the public key contained in + // the certificate. + Subject *string `locationName:"subject" type:"string"` - // The state the snapshot is in. - State *string `locationName:"state" type:"string" enum:"InstanceSnapshotState"` + // One or more domains or subdomains included in the certificate. This list + // contains the domain names that are bound to the public key that is contained + // in the certificate. The subject alternative names include the canonical domain + // name (CNAME) of the certificate and additional domain names that can be used + // to connect to the website, such as example.com, www.example.com, or m.example.com. + SubjectAlternativeNames []*string `locationName:"subjectAlternativeNames" type:"list"` // The support code. Include this code in your email to support when you have - // questions about an instance or another resource in Lightsail. This code enables - // our support team to look up your Lightsail information more easily. + // questions about your Lightsail load balancer or SSL/TLS certificate. This + // code enables our support team to look up your Lightsail information more + // easily. SupportCode *string `locationName:"supportCode" type:"string"` } // String returns the string representation -func (s InstanceSnapshot) String() string { +func (s LoadBalancerTlsCertificate) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InstanceSnapshot) GoString() string { +func (s LoadBalancerTlsCertificate) GoString() string { return s.String() } // SetArn sets the Arn field's value. -func (s *InstanceSnapshot) SetArn(v string) *InstanceSnapshot { +func (s *LoadBalancerTlsCertificate) SetArn(v string) *LoadBalancerTlsCertificate { s.Arn = &v return s } // SetCreatedAt sets the CreatedAt field's value. -func (s *InstanceSnapshot) SetCreatedAt(v time.Time) *InstanceSnapshot { +func (s *LoadBalancerTlsCertificate) SetCreatedAt(v time.Time) *LoadBalancerTlsCertificate { s.CreatedAt = &v return s } -// SetFromAttachedDisks sets the FromAttachedDisks field's value. -func (s *InstanceSnapshot) SetFromAttachedDisks(v []*Disk) *InstanceSnapshot { - s.FromAttachedDisks = v +// SetDomainName sets the DomainName field's value. +func (s *LoadBalancerTlsCertificate) SetDomainName(v string) *LoadBalancerTlsCertificate { + s.DomainName = &v return s } -// SetFromBlueprintId sets the FromBlueprintId field's value. -func (s *InstanceSnapshot) SetFromBlueprintId(v string) *InstanceSnapshot { - s.FromBlueprintId = &v +// SetDomainValidationRecords sets the DomainValidationRecords field's value. +func (s *LoadBalancerTlsCertificate) SetDomainValidationRecords(v []*LoadBalancerTlsCertificateDomainValidationRecord) *LoadBalancerTlsCertificate { + s.DomainValidationRecords = v return s } -// SetFromBundleId sets the FromBundleId field's value. -func (s *InstanceSnapshot) SetFromBundleId(v string) *InstanceSnapshot { - s.FromBundleId = &v +// SetFailureReason sets the FailureReason field's value. +func (s *LoadBalancerTlsCertificate) SetFailureReason(v string) *LoadBalancerTlsCertificate { + s.FailureReason = &v return s } -// SetFromInstanceArn sets the FromInstanceArn field's value. 
-func (s *InstanceSnapshot) SetFromInstanceArn(v string) *InstanceSnapshot { - s.FromInstanceArn = &v +// SetIsAttached sets the IsAttached field's value. +func (s *LoadBalancerTlsCertificate) SetIsAttached(v bool) *LoadBalancerTlsCertificate { + s.IsAttached = &v return s } -// SetFromInstanceName sets the FromInstanceName field's value. -func (s *InstanceSnapshot) SetFromInstanceName(v string) *InstanceSnapshot { - s.FromInstanceName = &v +// SetIssuedAt sets the IssuedAt field's value. +func (s *LoadBalancerTlsCertificate) SetIssuedAt(v time.Time) *LoadBalancerTlsCertificate { + s.IssuedAt = &v + return s +} + +// SetIssuer sets the Issuer field's value. +func (s *LoadBalancerTlsCertificate) SetIssuer(v string) *LoadBalancerTlsCertificate { + s.Issuer = &v + return s +} + +// SetKeyAlgorithm sets the KeyAlgorithm field's value. +func (s *LoadBalancerTlsCertificate) SetKeyAlgorithm(v string) *LoadBalancerTlsCertificate { + s.KeyAlgorithm = &v + return s +} + +// SetLoadBalancerName sets the LoadBalancerName field's value. +func (s *LoadBalancerTlsCertificate) SetLoadBalancerName(v string) *LoadBalancerTlsCertificate { + s.LoadBalancerName = &v return s } // SetLocation sets the Location field's value. -func (s *InstanceSnapshot) SetLocation(v *ResourceLocation) *InstanceSnapshot { +func (s *LoadBalancerTlsCertificate) SetLocation(v *ResourceLocation) *LoadBalancerTlsCertificate { s.Location = v return s } // SetName sets the Name field's value. -func (s *InstanceSnapshot) SetName(v string) *InstanceSnapshot { +func (s *LoadBalancerTlsCertificate) SetName(v string) *LoadBalancerTlsCertificate { s.Name = &v return s } -// SetProgress sets the Progress field's value. -func (s *InstanceSnapshot) SetProgress(v string) *InstanceSnapshot { - s.Progress = &v +// SetNotAfter sets the NotAfter field's value. +func (s *LoadBalancerTlsCertificate) SetNotAfter(v time.Time) *LoadBalancerTlsCertificate { + s.NotAfter = &v + return s +} + +// SetNotBefore sets the NotBefore field's value. +func (s *LoadBalancerTlsCertificate) SetNotBefore(v time.Time) *LoadBalancerTlsCertificate { + s.NotBefore = &v + return s +} + +// SetRenewalSummary sets the RenewalSummary field's value. +func (s *LoadBalancerTlsCertificate) SetRenewalSummary(v *LoadBalancerTlsCertificateRenewalSummary) *LoadBalancerTlsCertificate { + s.RenewalSummary = v return s } // SetResourceType sets the ResourceType field's value. -func (s *InstanceSnapshot) SetResourceType(v string) *InstanceSnapshot { +func (s *LoadBalancerTlsCertificate) SetResourceType(v string) *LoadBalancerTlsCertificate { s.ResourceType = &v return s } -// SetSizeInGb sets the SizeInGb field's value. -func (s *InstanceSnapshot) SetSizeInGb(v int64) *InstanceSnapshot { - s.SizeInGb = &v +// SetRevocationReason sets the RevocationReason field's value. +func (s *LoadBalancerTlsCertificate) SetRevocationReason(v string) *LoadBalancerTlsCertificate { + s.RevocationReason = &v return s } -// SetState sets the State field's value. -func (s *InstanceSnapshot) SetState(v string) *InstanceSnapshot { - s.State = &v +// SetRevokedAt sets the RevokedAt field's value. +func (s *LoadBalancerTlsCertificate) SetRevokedAt(v time.Time) *LoadBalancerTlsCertificate { + s.RevokedAt = &v + return s +} + +// SetSerial sets the Serial field's value. +func (s *LoadBalancerTlsCertificate) SetSerial(v string) *LoadBalancerTlsCertificate { + s.Serial = &v + return s +} + +// SetSignatureAlgorithm sets the SignatureAlgorithm field's value. 
+func (s *LoadBalancerTlsCertificate) SetSignatureAlgorithm(v string) *LoadBalancerTlsCertificate { + s.SignatureAlgorithm = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *LoadBalancerTlsCertificate) SetStatus(v string) *LoadBalancerTlsCertificate { + s.Status = &v + return s +} + +// SetSubject sets the Subject field's value. +func (s *LoadBalancerTlsCertificate) SetSubject(v string) *LoadBalancerTlsCertificate { + s.Subject = &v + return s +} + +// SetSubjectAlternativeNames sets the SubjectAlternativeNames field's value. +func (s *LoadBalancerTlsCertificate) SetSubjectAlternativeNames(v []*string) *LoadBalancerTlsCertificate { + s.SubjectAlternativeNames = v return s } // SetSupportCode sets the SupportCode field's value. -func (s *InstanceSnapshot) SetSupportCode(v string) *InstanceSnapshot { +func (s *LoadBalancerTlsCertificate) SetSupportCode(v string) *LoadBalancerTlsCertificate { s.SupportCode = &v return s } -// Describes the virtual private server (or instance) status. -type InstanceState struct { - _ struct{} `type:"structure"` +// Contains information about the domain names on an SSL/TLS certificate that +// you will use to validate domain ownership. +type LoadBalancerTlsCertificateDomainValidationOption struct { + _ struct{} `type:"structure"` + + // The fully qualified domain name in the certificate request. + DomainName *string `locationName:"domainName" type:"string"` + + // The status of the domain validation. Valid values are listed below. + ValidationStatus *string `locationName:"validationStatus" type:"string" enum:"LoadBalancerTlsCertificateDomainStatus"` +} + +// String returns the string representation +func (s LoadBalancerTlsCertificateDomainValidationOption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoadBalancerTlsCertificateDomainValidationOption) GoString() string { + return s.String() +} + +// SetDomainName sets the DomainName field's value. +func (s *LoadBalancerTlsCertificateDomainValidationOption) SetDomainName(v string) *LoadBalancerTlsCertificateDomainValidationOption { + s.DomainName = &v + return s +} + +// SetValidationStatus sets the ValidationStatus field's value. +func (s *LoadBalancerTlsCertificateDomainValidationOption) SetValidationStatus(v string) *LoadBalancerTlsCertificateDomainValidationOption { + s.ValidationStatus = &v + return s +} + +// Describes the validation record of each domain name in the SSL/TLS certificate. +type LoadBalancerTlsCertificateDomainValidationRecord struct { + _ struct{} `type:"structure"` + + // The domain name against which your SSL/TLS certificate was validated. + DomainName *string `locationName:"domainName" type:"string"` + + // A fully qualified domain name in the certificate. For example, example.com. + Name *string `locationName:"name" type:"string"` + + // The type of validation record. For example, CNAME for domain validation. + Type *string `locationName:"type" type:"string"` - // The status code for the instance. - Code *int64 `locationName:"code" type:"integer"` + // The validation status. Valid values are listed below. + ValidationStatus *string `locationName:"validationStatus" type:"string" enum:"LoadBalancerTlsCertificateDomainStatus"` - // The state of the instance (e.g., running or pending). - Name *string `locationName:"name" type:"string"` + // The value for that type. 
+ Value *string `locationName:"value" type:"string"` } // String returns the string representation -func (s InstanceState) String() string { +func (s LoadBalancerTlsCertificateDomainValidationRecord) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InstanceState) GoString() string { +func (s LoadBalancerTlsCertificateDomainValidationRecord) GoString() string { return s.String() } -// SetCode sets the Code field's value. -func (s *InstanceState) SetCode(v int64) *InstanceState { - s.Code = &v +// SetDomainName sets the DomainName field's value. +func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetDomainName(v string) *LoadBalancerTlsCertificateDomainValidationRecord { + s.DomainName = &v return s } // SetName sets the Name field's value. -func (s *InstanceState) SetName(v string) *InstanceState { +func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetName(v string) *LoadBalancerTlsCertificateDomainValidationRecord { s.Name = &v return s } -type IsVpcPeeredInput struct { - _ struct{} `type:"structure"` +// SetType sets the Type field's value. +func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetType(v string) *LoadBalancerTlsCertificateDomainValidationRecord { + s.Type = &v + return s } -// String returns the string representation -func (s IsVpcPeeredInput) String() string { - return awsutil.Prettify(s) +// SetValidationStatus sets the ValidationStatus field's value. +func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetValidationStatus(v string) *LoadBalancerTlsCertificateDomainValidationRecord { + s.ValidationStatus = &v + return s } -// GoString returns the string representation -func (s IsVpcPeeredInput) GoString() string { - return s.String() +// SetValue sets the Value field's value. +func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetValue(v string) *LoadBalancerTlsCertificateDomainValidationRecord { + s.Value = &v + return s } -type IsVpcPeeredOutput struct { +// Contains information about the status of Lightsail's managed renewal for +// the certificate. +type LoadBalancerTlsCertificateRenewalSummary struct { _ struct{} `type:"structure"` - // Returns true if the Lightsail VPC is peered; otherwise, false. - IsPeered *bool `locationName:"isPeered" type:"boolean"` + // Contains information about the validation of each domain name in the certificate, + // as it pertains to Lightsail's managed renewal. This is different from the + // initial validation that occurs as a result of the RequestCertificate request. + DomainValidationOptions []*LoadBalancerTlsCertificateDomainValidationOption `locationName:"domainValidationOptions" type:"list"` + + // The status of Lightsail's managed renewal of the certificate. Valid values + // are listed below. + RenewalStatus *string `locationName:"renewalStatus" type:"string" enum:"LoadBalancerTlsCertificateRenewalStatus"` } // String returns the string representation -func (s IsVpcPeeredOutput) String() string { +func (s LoadBalancerTlsCertificateRenewalSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s IsVpcPeeredOutput) GoString() string { +func (s LoadBalancerTlsCertificateRenewalSummary) GoString() string { return s.String() } -// SetIsPeered sets the IsPeered field's value. -func (s *IsVpcPeeredOutput) SetIsPeered(v bool) *IsVpcPeeredOutput { - s.IsPeered = &v +// SetDomainValidationOptions sets the DomainValidationOptions field's value. 
+func (s *LoadBalancerTlsCertificateRenewalSummary) SetDomainValidationOptions(v []*LoadBalancerTlsCertificateDomainValidationOption) *LoadBalancerTlsCertificateRenewalSummary { + s.DomainValidationOptions = v return s } -// Describes the SSH key pair. -type KeyPair struct { - _ struct{} `type:"structure"` - - // The Amazon Resource Name (ARN) of the key pair (e.g., arn:aws:lightsail:us-east-2:123456789101:KeyPair/05859e3d-331d-48ba-9034-12345EXAMPLE). - Arn *string `locationName:"arn" type:"string"` - - // The timestamp when the key pair was created (e.g., 1479816991.349). - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` +// SetRenewalStatus sets the RenewalStatus field's value. +func (s *LoadBalancerTlsCertificateRenewalSummary) SetRenewalStatus(v string) *LoadBalancerTlsCertificateRenewalSummary { + s.RenewalStatus = &v + return s +} - // The RSA fingerprint of the key pair. - Fingerprint *string `locationName:"fingerprint" type:"string"` +// Provides a summary of SSL/TLS certificate metadata. +type LoadBalancerTlsCertificateSummary struct { + _ struct{} `type:"structure"` - // The region name and Availability Zone where the key pair was created. - Location *ResourceLocation `locationName:"location" type:"structure"` + // When true, the SSL/TLS certificate is attached to the Lightsail load balancer. + IsAttached *bool `locationName:"isAttached" type:"boolean"` - // The friendly name of the SSH key pair. + // The name of the SSL/TLS certificate. Name *string `locationName:"name" type:"string"` - - // The resource type (usually KeyPair). - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` - - // The support code. Include this code in your email to support when you have - // questions about an instance or another resource in Lightsail. This code enables - // our support team to look up your Lightsail information more easily. - SupportCode *string `locationName:"supportCode" type:"string"` } // String returns the string representation -func (s KeyPair) String() string { +func (s LoadBalancerTlsCertificateSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s KeyPair) GoString() string { +func (s LoadBalancerTlsCertificateSummary) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *KeyPair) SetArn(v string) *KeyPair { - s.Arn = &v +// SetIsAttached sets the IsAttached field's value. +func (s *LoadBalancerTlsCertificateSummary) SetIsAttached(v bool) *LoadBalancerTlsCertificateSummary { + s.IsAttached = &v return s } -// SetCreatedAt sets the CreatedAt field's value. -func (s *KeyPair) SetCreatedAt(v time.Time) *KeyPair { - s.CreatedAt = &v +// SetName sets the Name field's value. +func (s *LoadBalancerTlsCertificateSummary) SetName(v string) *LoadBalancerTlsCertificateSummary { + s.Name = &v return s } -// SetFingerprint sets the Fingerprint field's value. -func (s *KeyPair) SetFingerprint(v string) *KeyPair { - s.Fingerprint = &v - return s +// Describes a database log event. +type LogEvent struct { + _ struct{} `type:"structure"` + + // The timestamp when the database log event was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` + + // The message of the database log event. + Message *string `locationName:"message" type:"string"` } -// SetLocation sets the Location field's value. 
-func (s *KeyPair) SetLocation(v *ResourceLocation) *KeyPair { - s.Location = v - return s +// String returns the string representation +func (s LogEvent) String() string { + return awsutil.Prettify(s) } -// SetName sets the Name field's value. -func (s *KeyPair) SetName(v string) *KeyPair { - s.Name = &v - return s +// GoString returns the string representation +func (s LogEvent) GoString() string { + return s.String() } -// SetResourceType sets the ResourceType field's value. -func (s *KeyPair) SetResourceType(v string) *KeyPair { - s.ResourceType = &v +// SetCreatedAt sets the CreatedAt field's value. +func (s *LogEvent) SetCreatedAt(v time.Time) *LogEvent { + s.CreatedAt = &v return s } -// SetSupportCode sets the SupportCode field's value. -func (s *KeyPair) SetSupportCode(v string) *KeyPair { - s.SupportCode = &v +// SetMessage sets the Message field's value. +func (s *LogEvent) SetMessage(v string) *LogEvent { + s.Message = &v return s } -// Describes the Lightsail load balancer. -type LoadBalancer struct { +// Describes the metric data point. +type MetricDatapoint struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the load balancer. - Arn *string `locationName:"arn" type:"string"` - - // A string to string map of the configuration options for your load balancer. - // Valid values are listed below. - ConfigurationOptions map[string]*string `locationName:"configurationOptions" type:"map"` - - // The date when your load balancer was created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` - - // The DNS name of your Lightsail load balancer. - DnsName *string `locationName:"dnsName" type:"string"` - - // The path you specified to perform your health checks. If no path is specified, - // the load balancer tries to make a request to the default (root) page. - HealthCheckPath *string `locationName:"healthCheckPath" type:"string"` - - // An array of InstanceHealthSummary objects describing the health of the load - // balancer. - InstanceHealthSummary []*InstanceHealthSummary `locationName:"instanceHealthSummary" type:"list"` - - // The port where the load balancer will direct traffic to your Lightsail instances. - // For HTTP traffic, it's port 80. For HTTPS traffic, it's port 443. - InstancePort *int64 `locationName:"instancePort" type:"integer"` - - // The AWS Region where your load balancer was created (e.g., us-east-2a). Lightsail - // automatically creates your load balancer across Availability Zones. - Location *ResourceLocation `locationName:"location" type:"structure"` - - // The name of the load balancer (e.g., my-load-balancer). - Name *string `locationName:"name" type:"string"` + // The average. + Average *float64 `locationName:"average" type:"double"` - // The protocol you have enabled for your load balancer. Valid values are below. - // - // You can't just have HTTP_HTTPS, but you can have just HTTP. - Protocol *string `locationName:"protocol" type:"string" enum:"LoadBalancerProtocol"` + // The maximum. + Maximum *float64 `locationName:"maximum" type:"double"` - // An array of public port settings for your load balancer. For HTTP, use port - // 80. For HTTPS, use port 443. - PublicPorts []*int64 `locationName:"publicPorts" type:"list"` + // The minimum. + Minimum *float64 `locationName:"minimum" type:"double"` - // The resource type (e.g., LoadBalancer. - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + // The sample count. 
+ SampleCount *float64 `locationName:"sampleCount" type:"double"` - // The status of your load balancer. Valid values are below. - State *string `locationName:"state" type:"string" enum:"LoadBalancerState"` + // The sum. + Sum *float64 `locationName:"sum" type:"double"` - // The support code. Include this code in your email to support when you have - // questions about your Lightsail load balancer. This code enables our support - // team to look up your Lightsail information more easily. - SupportCode *string `locationName:"supportCode" type:"string"` + // The timestamp (e.g., 1479816991.349). + Timestamp *time.Time `locationName:"timestamp" type:"timestamp"` - // An array of LoadBalancerTlsCertificateSummary objects that provide additional - // information about the SSL/TLS certificates. For example, if true, the certificate - // is attached to the load balancer. - TlsCertificateSummaries []*LoadBalancerTlsCertificateSummary `locationName:"tlsCertificateSummaries" type:"list"` + // The unit. + Unit *string `locationName:"unit" type:"string" enum:"MetricUnit"` } // String returns the string representation -func (s LoadBalancer) String() string { +func (s MetricDatapoint) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoadBalancer) GoString() string { +func (s MetricDatapoint) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *LoadBalancer) SetArn(v string) *LoadBalancer { - s.Arn = &v - return s -} - -// SetConfigurationOptions sets the ConfigurationOptions field's value. -func (s *LoadBalancer) SetConfigurationOptions(v map[string]*string) *LoadBalancer { - s.ConfigurationOptions = v - return s -} - -// SetCreatedAt sets the CreatedAt field's value. -func (s *LoadBalancer) SetCreatedAt(v time.Time) *LoadBalancer { - s.CreatedAt = &v - return s -} - -// SetDnsName sets the DnsName field's value. -func (s *LoadBalancer) SetDnsName(v string) *LoadBalancer { - s.DnsName = &v - return s -} - -// SetHealthCheckPath sets the HealthCheckPath field's value. -func (s *LoadBalancer) SetHealthCheckPath(v string) *LoadBalancer { - s.HealthCheckPath = &v +// SetAverage sets the Average field's value. +func (s *MetricDatapoint) SetAverage(v float64) *MetricDatapoint { + s.Average = &v return s } -// SetInstanceHealthSummary sets the InstanceHealthSummary field's value. -func (s *LoadBalancer) SetInstanceHealthSummary(v []*InstanceHealthSummary) *LoadBalancer { - s.InstanceHealthSummary = v +// SetMaximum sets the Maximum field's value. +func (s *MetricDatapoint) SetMaximum(v float64) *MetricDatapoint { + s.Maximum = &v return s } -// SetInstancePort sets the InstancePort field's value. -func (s *LoadBalancer) SetInstancePort(v int64) *LoadBalancer { - s.InstancePort = &v +// SetMinimum sets the Minimum field's value. +func (s *MetricDatapoint) SetMinimum(v float64) *MetricDatapoint { + s.Minimum = &v return s } -// SetLocation sets the Location field's value. -func (s *LoadBalancer) SetLocation(v *ResourceLocation) *LoadBalancer { - s.Location = v +// SetSampleCount sets the SampleCount field's value. +func (s *MetricDatapoint) SetSampleCount(v float64) *MetricDatapoint { + s.SampleCount = &v return s } -// SetName sets the Name field's value. -func (s *LoadBalancer) SetName(v string) *LoadBalancer { - s.Name = &v +// SetSum sets the Sum field's value. +func (s *MetricDatapoint) SetSum(v float64) *MetricDatapoint { + s.Sum = &v return s } -// SetProtocol sets the Protocol field's value. 
-func (s *LoadBalancer) SetProtocol(v string) *LoadBalancer { - s.Protocol = &v +// SetTimestamp sets the Timestamp field's value. +func (s *MetricDatapoint) SetTimestamp(v time.Time) *MetricDatapoint { + s.Timestamp = &v return s } -// SetPublicPorts sets the PublicPorts field's value. -func (s *LoadBalancer) SetPublicPorts(v []*int64) *LoadBalancer { - s.PublicPorts = v +// SetUnit sets the Unit field's value. +func (s *MetricDatapoint) SetUnit(v string) *MetricDatapoint { + s.Unit = &v return s } -// SetResourceType sets the ResourceType field's value. -func (s *LoadBalancer) SetResourceType(v string) *LoadBalancer { - s.ResourceType = &v - return s +// Describes the monthly data transfer in and out of your virtual private server +// (or instance). +type MonthlyTransfer struct { + _ struct{} `type:"structure"` + + // The amount allocated per month (in GB). + GbPerMonthAllocated *int64 `locationName:"gbPerMonthAllocated" type:"integer"` } -// SetState sets the State field's value. -func (s *LoadBalancer) SetState(v string) *LoadBalancer { - s.State = &v - return s +// String returns the string representation +func (s MonthlyTransfer) String() string { + return awsutil.Prettify(s) } -// SetSupportCode sets the SupportCode field's value. -func (s *LoadBalancer) SetSupportCode(v string) *LoadBalancer { - s.SupportCode = &v - return s +// GoString returns the string representation +func (s MonthlyTransfer) GoString() string { + return s.String() } -// SetTlsCertificateSummaries sets the TlsCertificateSummaries field's value. -func (s *LoadBalancer) SetTlsCertificateSummaries(v []*LoadBalancerTlsCertificateSummary) *LoadBalancer { - s.TlsCertificateSummaries = v +// SetGbPerMonthAllocated sets the GbPerMonthAllocated field's value. +func (s *MonthlyTransfer) SetGbPerMonthAllocated(v int64) *MonthlyTransfer { + s.GbPerMonthAllocated = &v return s } -// Describes a load balancer SSL/TLS certificate. -// -// TLS is just an updated, more secure version of Secure Socket Layer (SSL). -type LoadBalancerTlsCertificate struct { +type OpenInstancePublicPortsInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the SSL/TLS certificate. - Arn *string `locationName:"arn" type:"string"` + // The name of the instance for which you want to open the public ports. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` - // The time when you created your SSL/TLS certificate. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // An array of key-value pairs containing information about the port mappings. + // + // PortInfo is a required field + PortInfo *PortInfo `locationName:"portInfo" type:"structure" required:"true"` +} - // The domain name for your SSL/TLS certificate. - DomainName *string `locationName:"domainName" type:"string"` +// String returns the string representation +func (s OpenInstancePublicPortsInput) String() string { + return awsutil.Prettify(s) +} - // An array of LoadBalancerTlsCertificateDomainValidationRecord objects describing - // the records. - DomainValidationRecords []*LoadBalancerTlsCertificateDomainValidationRecord `locationName:"domainValidationRecords" type:"list"` +// GoString returns the string representation +func (s OpenInstancePublicPortsInput) GoString() string { + return s.String() +} - // The reason for the SSL/TLS certificate validation failure. 
- FailureReason *string `locationName:"failureReason" type:"string" enum:"LoadBalancerTlsCertificateFailureReason"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *OpenInstancePublicPortsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OpenInstancePublicPortsInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + if s.PortInfo == nil { + invalidParams.Add(request.NewErrParamRequired("PortInfo")) + } - // When true, the SSL/TLS certificate is attached to the Lightsail load balancer. - IsAttached *bool `locationName:"isAttached" type:"boolean"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // The time when the SSL/TLS certificate was issued. - IssuedAt *time.Time `locationName:"issuedAt" type:"timestamp" timestampFormat:"unix"` +// SetInstanceName sets the InstanceName field's value. +func (s *OpenInstancePublicPortsInput) SetInstanceName(v string) *OpenInstancePublicPortsInput { + s.InstanceName = &v + return s +} - // The issuer of the certificate. - Issuer *string `locationName:"issuer" type:"string"` +// SetPortInfo sets the PortInfo field's value. +func (s *OpenInstancePublicPortsInput) SetPortInfo(v *PortInfo) *OpenInstancePublicPortsInput { + s.PortInfo = v + return s +} - // The algorithm that was used to generate the key pair (the public and private - // key). - KeyAlgorithm *string `locationName:"keyAlgorithm" type:"string"` +type OpenInstancePublicPortsOutput struct { + _ struct{} `type:"structure"` - // The load balancer name where your SSL/TLS certificate is attached. - LoadBalancerName *string `locationName:"loadBalancerName" type:"string"` + // An array of key-value pairs containing information about the request operation. + Operation *Operation `locationName:"operation" type:"structure"` +} - // The AWS Region and Availability Zone where you created your certificate. - Location *ResourceLocation `locationName:"location" type:"structure"` +// String returns the string representation +func (s OpenInstancePublicPortsOutput) String() string { + return awsutil.Prettify(s) +} - // The name of the SSL/TLS certificate (e.g., my-certificate). - Name *string `locationName:"name" type:"string"` +// GoString returns the string representation +func (s OpenInstancePublicPortsOutput) GoString() string { + return s.String() +} - // The timestamp when the SSL/TLS certificate expires. - NotAfter *time.Time `locationName:"notAfter" type:"timestamp" timestampFormat:"unix"` +// SetOperation sets the Operation field's value. +func (s *OpenInstancePublicPortsOutput) SetOperation(v *Operation) *OpenInstancePublicPortsOutput { + s.Operation = v + return s +} - // The timestamp when the SSL/TLS certificate is first valid. - NotBefore *time.Time `locationName:"notBefore" type:"timestamp" timestampFormat:"unix"` +// Describes the API operation. +type Operation struct { + _ struct{} `type:"structure"` - // An object containing information about the status of Lightsail's managed - // renewal for the certificate. - RenewalSummary *LoadBalancerTlsCertificateRenewalSummary `locationName:"renewalSummary" type:"structure"` + // The timestamp when the operation was initialized (e.g., 1479816991.349). + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The resource type (e.g., LoadBalancerTlsCertificate). 
- // - // * Instance - A Lightsail instance (a virtual private server) - // - // * StaticIp - A static IP address - // - // * KeyPair - The key pair used to connect to a Lightsail instance - // - // * InstanceSnapshot - A Lightsail instance snapshot - // - // * Domain - A DNS zone - // - // * PeeredVpc - A peered VPC - // - // * LoadBalancer - A Lightsail load balancer - // - // * LoadBalancerTlsCertificate - An SSL/TLS certificate associated with - // a Lightsail load balancer - // - // * Disk - A Lightsail block storage disk - // - // * DiskSnapshot - A block storage disk snapshot - ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + // The error code. + ErrorCode *string `locationName:"errorCode" type:"string"` - // The reason the certificate was revoked. Valid values are below. - RevocationReason *string `locationName:"revocationReason" type:"string" enum:"LoadBalancerTlsCertificateRevocationReason"` + // The error details. + ErrorDetails *string `locationName:"errorDetails" type:"string"` - // The timestamp when the SSL/TLS certificate was revoked. - RevokedAt *time.Time `locationName:"revokedAt" type:"timestamp" timestampFormat:"unix"` + // The ID of the operation. + Id *string `locationName:"id" type:"string"` - // The serial number of the certificate. - Serial *string `locationName:"serial" type:"string"` + // A Boolean value indicating whether the operation is terminal. + IsTerminal *bool `locationName:"isTerminal" type:"boolean"` - // The algorithm that was used to sign the certificate. - SignatureAlgorithm *string `locationName:"signatureAlgorithm" type:"string"` + // The region and Availability Zone. + Location *ResourceLocation `locationName:"location" type:"structure"` - // The status of the SSL/TLS certificate. Valid values are below. - Status *string `locationName:"status" type:"string" enum:"LoadBalancerTlsCertificateStatus"` + // Details about the operation (e.g., Debian-1GB-Ohio-1). + OperationDetails *string `locationName:"operationDetails" type:"string"` - // The name of the entity that is associated with the public key contained in - // the certificate. - Subject *string `locationName:"subject" type:"string"` + // The type of operation. + OperationType *string `locationName:"operationType" type:"string" enum:"OperationType"` - // One or more domains or subdomains included in the certificate. This list - // contains the domain names that are bound to the public key that is contained - // in the certificate. The subject alternative names include the canonical domain - // name (CNAME) of the certificate and additional domain names that can be used - // to connect to the website, such as example.com, www.example.com, or m.example.com. - SubjectAlternativeNames []*string `locationName:"subjectAlternativeNames" type:"list"` + // The resource name. + ResourceName *string `locationName:"resourceName" type:"string"` - // The support code. Include this code in your email to support when you have - // questions about your Lightsail load balancer or SSL/TLS certificate. This - // code enables our support team to look up your Lightsail information more - // easily. - SupportCode *string `locationName:"supportCode" type:"string"` + // The resource type. + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // The status of the operation. + Status *string `locationName:"status" type:"string" enum:"OperationStatus"` + + // The timestamp when the status was changed (e.g., 1479816991.349). 
+ StatusChangedAt *time.Time `locationName:"statusChangedAt" type:"timestamp"` } // String returns the string representation -func (s LoadBalancerTlsCertificate) String() string { +func (s Operation) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoadBalancerTlsCertificate) GoString() string { +func (s Operation) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *LoadBalancerTlsCertificate) SetArn(v string) *LoadBalancerTlsCertificate { - s.Arn = &v - return s -} - -// SetCreatedAt sets the CreatedAt field's value. -func (s *LoadBalancerTlsCertificate) SetCreatedAt(v time.Time) *LoadBalancerTlsCertificate { - s.CreatedAt = &v - return s -} - -// SetDomainName sets the DomainName field's value. -func (s *LoadBalancerTlsCertificate) SetDomainName(v string) *LoadBalancerTlsCertificate { - s.DomainName = &v - return s -} - -// SetDomainValidationRecords sets the DomainValidationRecords field's value. -func (s *LoadBalancerTlsCertificate) SetDomainValidationRecords(v []*LoadBalancerTlsCertificateDomainValidationRecord) *LoadBalancerTlsCertificate { - s.DomainValidationRecords = v - return s -} - -// SetFailureReason sets the FailureReason field's value. -func (s *LoadBalancerTlsCertificate) SetFailureReason(v string) *LoadBalancerTlsCertificate { - s.FailureReason = &v - return s -} - -// SetIsAttached sets the IsAttached field's value. -func (s *LoadBalancerTlsCertificate) SetIsAttached(v bool) *LoadBalancerTlsCertificate { - s.IsAttached = &v +// SetCreatedAt sets the CreatedAt field's value. +func (s *Operation) SetCreatedAt(v time.Time) *Operation { + s.CreatedAt = &v return s } -// SetIssuedAt sets the IssuedAt field's value. -func (s *LoadBalancerTlsCertificate) SetIssuedAt(v time.Time) *LoadBalancerTlsCertificate { - s.IssuedAt = &v +// SetErrorCode sets the ErrorCode field's value. +func (s *Operation) SetErrorCode(v string) *Operation { + s.ErrorCode = &v return s } -// SetIssuer sets the Issuer field's value. -func (s *LoadBalancerTlsCertificate) SetIssuer(v string) *LoadBalancerTlsCertificate { - s.Issuer = &v +// SetErrorDetails sets the ErrorDetails field's value. +func (s *Operation) SetErrorDetails(v string) *Operation { + s.ErrorDetails = &v return s } -// SetKeyAlgorithm sets the KeyAlgorithm field's value. -func (s *LoadBalancerTlsCertificate) SetKeyAlgorithm(v string) *LoadBalancerTlsCertificate { - s.KeyAlgorithm = &v +// SetId sets the Id field's value. +func (s *Operation) SetId(v string) *Operation { + s.Id = &v return s } -// SetLoadBalancerName sets the LoadBalancerName field's value. -func (s *LoadBalancerTlsCertificate) SetLoadBalancerName(v string) *LoadBalancerTlsCertificate { - s.LoadBalancerName = &v +// SetIsTerminal sets the IsTerminal field's value. +func (s *Operation) SetIsTerminal(v bool) *Operation { + s.IsTerminal = &v return s } // SetLocation sets the Location field's value. -func (s *LoadBalancerTlsCertificate) SetLocation(v *ResourceLocation) *LoadBalancerTlsCertificate { +func (s *Operation) SetLocation(v *ResourceLocation) *Operation { s.Location = v return s } -// SetName sets the Name field's value. -func (s *LoadBalancerTlsCertificate) SetName(v string) *LoadBalancerTlsCertificate { - s.Name = &v - return s -} - -// SetNotAfter sets the NotAfter field's value. -func (s *LoadBalancerTlsCertificate) SetNotAfter(v time.Time) *LoadBalancerTlsCertificate { - s.NotAfter = &v +// SetOperationDetails sets the OperationDetails field's value. 
+func (s *Operation) SetOperationDetails(v string) *Operation { + s.OperationDetails = &v return s } -// SetNotBefore sets the NotBefore field's value. -func (s *LoadBalancerTlsCertificate) SetNotBefore(v time.Time) *LoadBalancerTlsCertificate { - s.NotBefore = &v +// SetOperationType sets the OperationType field's value. +func (s *Operation) SetOperationType(v string) *Operation { + s.OperationType = &v return s } -// SetRenewalSummary sets the RenewalSummary field's value. -func (s *LoadBalancerTlsCertificate) SetRenewalSummary(v *LoadBalancerTlsCertificateRenewalSummary) *LoadBalancerTlsCertificate { - s.RenewalSummary = v +// SetResourceName sets the ResourceName field's value. +func (s *Operation) SetResourceName(v string) *Operation { + s.ResourceName = &v return s } // SetResourceType sets the ResourceType field's value. -func (s *LoadBalancerTlsCertificate) SetResourceType(v string) *LoadBalancerTlsCertificate { +func (s *Operation) SetResourceType(v string) *Operation { s.ResourceType = &v return s } -// SetRevocationReason sets the RevocationReason field's value. -func (s *LoadBalancerTlsCertificate) SetRevocationReason(v string) *LoadBalancerTlsCertificate { - s.RevocationReason = &v +// SetStatus sets the Status field's value. +func (s *Operation) SetStatus(v string) *Operation { + s.Status = &v return s } -// SetRevokedAt sets the RevokedAt field's value. -func (s *LoadBalancerTlsCertificate) SetRevokedAt(v time.Time) *LoadBalancerTlsCertificate { - s.RevokedAt = &v +// SetStatusChangedAt sets the StatusChangedAt field's value. +func (s *Operation) SetStatusChangedAt(v time.Time) *Operation { + s.StatusChangedAt = &v return s } -// SetSerial sets the Serial field's value. -func (s *LoadBalancerTlsCertificate) SetSerial(v string) *LoadBalancerTlsCertificate { - s.Serial = &v - return s -} +// The password data for the Windows Server-based instance, including the ciphertext +// and the key pair name. +type PasswordData struct { + _ struct{} `type:"structure"` -// SetSignatureAlgorithm sets the SignatureAlgorithm field's value. -func (s *LoadBalancerTlsCertificate) SetSignatureAlgorithm(v string) *LoadBalancerTlsCertificate { - s.SignatureAlgorithm = &v - return s + // The encrypted password. Ciphertext will be an empty string if access to your + // new instance is not ready yet. When you create an instance, it can take up + // to 15 minutes for the instance to be ready. + // + // If you use the default key pair (LightsailDefaultKeyPair), the decrypted + // password will be available in the password field. + // + // If you are using a custom key pair, you need to use your own means of decryption. + // + // If you change the Administrator password on the instance, Lightsail will + // continue to return the original ciphertext value. When accessing the instance + // using RDP, you need to manually enter the Administrator password after changing + // it from the default. + Ciphertext *string `locationName:"ciphertext" type:"string"` + + // The name of the key pair that you used when creating your instance. If no + // key pair name was specified when creating the instance, Lightsail uses the + // default key pair (LightsailDefaultKeyPair). + // + // If you are using a custom key pair, you need to use your own means of decrypting + // your password using the ciphertext. Lightsail creates the ciphertext by encrypting + // your password with the public key part of this key pair. 
+ KeyPairName *string `locationName:"keyPairName" type:"string"` } -// SetStatus sets the Status field's value. -func (s *LoadBalancerTlsCertificate) SetStatus(v string) *LoadBalancerTlsCertificate { - s.Status = &v - return s +// String returns the string representation +func (s PasswordData) String() string { + return awsutil.Prettify(s) } -// SetSubject sets the Subject field's value. -func (s *LoadBalancerTlsCertificate) SetSubject(v string) *LoadBalancerTlsCertificate { - s.Subject = &v - return s +// GoString returns the string representation +func (s PasswordData) GoString() string { + return s.String() } -// SetSubjectAlternativeNames sets the SubjectAlternativeNames field's value. -func (s *LoadBalancerTlsCertificate) SetSubjectAlternativeNames(v []*string) *LoadBalancerTlsCertificate { - s.SubjectAlternativeNames = v +// SetCiphertext sets the Ciphertext field's value. +func (s *PasswordData) SetCiphertext(v string) *PasswordData { + s.Ciphertext = &v return s } -// SetSupportCode sets the SupportCode field's value. -func (s *LoadBalancerTlsCertificate) SetSupportCode(v string) *LoadBalancerTlsCertificate { - s.SupportCode = &v +// SetKeyPairName sets the KeyPairName field's value. +func (s *PasswordData) SetKeyPairName(v string) *PasswordData { + s.KeyPairName = &v return s } -// Contains information about the domain names on an SSL/TLS certificate that -// you will use to validate domain ownership. -type LoadBalancerTlsCertificateDomainValidationOption struct { +type PeerVpcInput struct { _ struct{} `type:"structure"` - - // The fully qualified domain name in the certificate request. - DomainName *string `locationName:"domainName" type:"string"` - - // The status of the domain validation. Valid values are listed below. - ValidationStatus *string `locationName:"validationStatus" type:"string" enum:"LoadBalancerTlsCertificateDomainStatus"` } // String returns the string representation -func (s LoadBalancerTlsCertificateDomainValidationOption) String() string { +func (s PeerVpcInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoadBalancerTlsCertificateDomainValidationOption) GoString() string { +func (s PeerVpcInput) GoString() string { return s.String() } -// SetDomainName sets the DomainName field's value. -func (s *LoadBalancerTlsCertificateDomainValidationOption) SetDomainName(v string) *LoadBalancerTlsCertificateDomainValidationOption { - s.DomainName = &v - return s +type PeerVpcOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the request operation. + Operation *Operation `locationName:"operation" type:"structure"` } -// SetValidationStatus sets the ValidationStatus field's value. -func (s *LoadBalancerTlsCertificateDomainValidationOption) SetValidationStatus(v string) *LoadBalancerTlsCertificateDomainValidationOption { - s.ValidationStatus = &v - return s +// String returns the string representation +func (s PeerVpcOutput) String() string { + return awsutil.Prettify(s) } -// Describes the validation record of each domain name in the SSL/TLS certificate. -type LoadBalancerTlsCertificateDomainValidationRecord struct { - _ struct{} `type:"structure"` +// GoString returns the string representation +func (s PeerVpcOutput) GoString() string { + return s.String() +} - // The domain name against which your SSL/TLS certificate was validated. - DomainName *string `locationName:"domainName" type:"string"` +// SetOperation sets the Operation field's value. 
+func (s *PeerVpcOutput) SetOperation(v *Operation) *PeerVpcOutput { + s.Operation = v + return s +} - // A fully qualified domain name in the certificate. For example, example.com. - Name *string `locationName:"name" type:"string"` +// Describes a pending database maintenance action. +type PendingMaintenanceAction struct { + _ struct{} `type:"structure"` - // The type of validation record. For example, CNAME for domain validation. - Type *string `locationName:"type" type:"string"` + // The type of pending database maintenance action. + Action *string `locationName:"action" type:"string"` - // The validation status. Valid values are listed below. - ValidationStatus *string `locationName:"validationStatus" type:"string" enum:"LoadBalancerTlsCertificateDomainStatus"` + // The effective date of the pending database maintenance action. + CurrentApplyDate *time.Time `locationName:"currentApplyDate" type:"timestamp"` - // The value for that type. - Value *string `locationName:"value" type:"string"` + // Additional detail about the pending database maintenance action. + Description *string `locationName:"description" type:"string"` } // String returns the string representation -func (s LoadBalancerTlsCertificateDomainValidationRecord) String() string { +func (s PendingMaintenanceAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoadBalancerTlsCertificateDomainValidationRecord) GoString() string { +func (s PendingMaintenanceAction) GoString() string { return s.String() } -// SetDomainName sets the DomainName field's value. -func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetDomainName(v string) *LoadBalancerTlsCertificateDomainValidationRecord { - s.DomainName = &v +// SetAction sets the Action field's value. +func (s *PendingMaintenanceAction) SetAction(v string) *PendingMaintenanceAction { + s.Action = &v return s } -// SetName sets the Name field's value. -func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetName(v string) *LoadBalancerTlsCertificateDomainValidationRecord { - s.Name = &v +// SetCurrentApplyDate sets the CurrentApplyDate field's value. +func (s *PendingMaintenanceAction) SetCurrentApplyDate(v time.Time) *PendingMaintenanceAction { + s.CurrentApplyDate = &v return s } -// SetType sets the Type field's value. -func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetType(v string) *LoadBalancerTlsCertificateDomainValidationRecord { - s.Type = &v +// SetDescription sets the Description field's value. +func (s *PendingMaintenanceAction) SetDescription(v string) *PendingMaintenanceAction { + s.Description = &v return s } -// SetValidationStatus sets the ValidationStatus field's value. -func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetValidationStatus(v string) *LoadBalancerTlsCertificateDomainValidationRecord { - s.ValidationStatus = &v +// Describes a pending database value modification. +type PendingModifiedRelationalDatabaseValues struct { + _ struct{} `type:"structure"` + + // A Boolean value indicating whether automated backup retention is enabled. + BackupRetentionEnabled *bool `locationName:"backupRetentionEnabled" type:"boolean"` + + // The database engine version. + EngineVersion *string `locationName:"engineVersion" type:"string"` + + // The password for the master user of the database. 
+ MasterUserPassword *string `locationName:"masterUserPassword" type:"string"` +} + +// String returns the string representation +func (s PendingModifiedRelationalDatabaseValues) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PendingModifiedRelationalDatabaseValues) GoString() string { + return s.String() +} + +// SetBackupRetentionEnabled sets the BackupRetentionEnabled field's value. +func (s *PendingModifiedRelationalDatabaseValues) SetBackupRetentionEnabled(v bool) *PendingModifiedRelationalDatabaseValues { + s.BackupRetentionEnabled = &v return s } -// SetValue sets the Value field's value. -func (s *LoadBalancerTlsCertificateDomainValidationRecord) SetValue(v string) *LoadBalancerTlsCertificateDomainValidationRecord { - s.Value = &v +// SetEngineVersion sets the EngineVersion field's value. +func (s *PendingModifiedRelationalDatabaseValues) SetEngineVersion(v string) *PendingModifiedRelationalDatabaseValues { + s.EngineVersion = &v return s } -// Contains information about the status of Lightsail's managed renewal for -// the certificate. -type LoadBalancerTlsCertificateRenewalSummary struct { +// SetMasterUserPassword sets the MasterUserPassword field's value. +func (s *PendingModifiedRelationalDatabaseValues) SetMasterUserPassword(v string) *PendingModifiedRelationalDatabaseValues { + s.MasterUserPassword = &v + return s +} + +// Describes information about the ports on your virtual private server (or +// instance). +type PortInfo struct { _ struct{} `type:"structure"` - // Contains information about the validation of each domain name in the certificate, - // as it pertains to Lightsail's managed renewal. This is different from the - // initial validation that occurs as a result of the RequestCertificate request. - DomainValidationOptions []*LoadBalancerTlsCertificateDomainValidationOption `locationName:"domainValidationOptions" type:"list"` + // The first port in the range. + FromPort *int64 `locationName:"fromPort" type:"integer"` - // The status of Lightsail's managed renewal of the certificate. Valid values - // are listed below. - RenewalStatus *string `locationName:"renewalStatus" type:"string" enum:"LoadBalancerTlsCertificateRenewalStatus"` + // The protocol. + Protocol *string `locationName:"protocol" type:"string" enum:"NetworkProtocol"` + + // The last port in the range. + ToPort *int64 `locationName:"toPort" type:"integer"` } // String returns the string representation -func (s LoadBalancerTlsCertificateRenewalSummary) String() string { +func (s PortInfo) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoadBalancerTlsCertificateRenewalSummary) GoString() string { +func (s PortInfo) GoString() string { return s.String() } -// SetDomainValidationOptions sets the DomainValidationOptions field's value. -func (s *LoadBalancerTlsCertificateRenewalSummary) SetDomainValidationOptions(v []*LoadBalancerTlsCertificateDomainValidationOption) *LoadBalancerTlsCertificateRenewalSummary { - s.DomainValidationOptions = v +// SetFromPort sets the FromPort field's value. +func (s *PortInfo) SetFromPort(v int64) *PortInfo { + s.FromPort = &v + return s +} + +// SetProtocol sets the Protocol field's value. +func (s *PortInfo) SetProtocol(v string) *PortInfo { + s.Protocol = &v return s } -// SetRenewalStatus sets the RenewalStatus field's value. 
-func (s *LoadBalancerTlsCertificateRenewalSummary) SetRenewalStatus(v string) *LoadBalancerTlsCertificateRenewalSummary { - s.RenewalStatus = &v +// SetToPort sets the ToPort field's value. +func (s *PortInfo) SetToPort(v int64) *PortInfo { + s.ToPort = &v return s } -// Provides a summary of SSL/TLS certificate metadata. -type LoadBalancerTlsCertificateSummary struct { +type PutInstancePublicPortsInput struct { _ struct{} `type:"structure"` - // When true, the SSL/TLS certificate is attached to the Lightsail load balancer. - IsAttached *bool `locationName:"isAttached" type:"boolean"` + // The Lightsail instance name of the public port(s) you are setting. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` - // The name of the SSL/TLS certificate. - Name *string `locationName:"name" type:"string"` + // Specifies information about the public port(s). + // + // PortInfos is a required field + PortInfos []*PortInfo `locationName:"portInfos" type:"list" required:"true"` } // String returns the string representation -func (s LoadBalancerTlsCertificateSummary) String() string { +func (s PutInstancePublicPortsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoadBalancerTlsCertificateSummary) GoString() string { +func (s PutInstancePublicPortsInput) GoString() string { return s.String() } -// SetIsAttached sets the IsAttached field's value. -func (s *LoadBalancerTlsCertificateSummary) SetIsAttached(v bool) *LoadBalancerTlsCertificateSummary { - s.IsAttached = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutInstancePublicPortsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutInstancePublicPortsInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + if s.PortInfos == nil { + invalidParams.Add(request.NewErrParamRequired("PortInfos")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceName sets the InstanceName field's value. +func (s *PutInstancePublicPortsInput) SetInstanceName(v string) *PutInstancePublicPortsInput { + s.InstanceName = &v return s } -// SetName sets the Name field's value. -func (s *LoadBalancerTlsCertificateSummary) SetName(v string) *LoadBalancerTlsCertificateSummary { - s.Name = &v +// SetPortInfos sets the PortInfos field's value. +func (s *PutInstancePublicPortsInput) SetPortInfos(v []*PortInfo) *PutInstancePublicPortsInput { + s.PortInfos = v return s } -// Describes the metric data point. -type MetricDatapoint struct { +type PutInstancePublicPortsOutput struct { _ struct{} `type:"structure"` - // The average. - Average *float64 `locationName:"average" type:"double"` - - // The maximum. - Maximum *float64 `locationName:"maximum" type:"double"` - - // The minimum. - Minimum *float64 `locationName:"minimum" type:"double"` - - // The sample count. - SampleCount *float64 `locationName:"sampleCount" type:"double"` - - // The sum. - Sum *float64 `locationName:"sum" type:"double"` - - // The timestamp (e.g., 1479816991.349). - Timestamp *time.Time `locationName:"timestamp" type:"timestamp" timestampFormat:"unix"` - - // The unit. - Unit *string `locationName:"unit" type:"string" enum:"MetricUnit"` + // Describes metadata about the operation you just executed. 
+ Operation *Operation `locationName:"operation" type:"structure"` } // String returns the string representation -func (s MetricDatapoint) String() string { +func (s PutInstancePublicPortsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s MetricDatapoint) GoString() string { +func (s PutInstancePublicPortsOutput) GoString() string { return s.String() } -// SetAverage sets the Average field's value. -func (s *MetricDatapoint) SetAverage(v float64) *MetricDatapoint { - s.Average = &v +// SetOperation sets the Operation field's value. +func (s *PutInstancePublicPortsOutput) SetOperation(v *Operation) *PutInstancePublicPortsOutput { + s.Operation = v return s } -// SetMaximum sets the Maximum field's value. -func (s *MetricDatapoint) SetMaximum(v float64) *MetricDatapoint { - s.Maximum = &v - return s -} +type RebootInstanceInput struct { + _ struct{} `type:"structure"` -// SetMinimum sets the Minimum field's value. -func (s *MetricDatapoint) SetMinimum(v float64) *MetricDatapoint { - s.Minimum = &v - return s + // The name of the instance to reboot. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` } -// SetSampleCount sets the SampleCount field's value. -func (s *MetricDatapoint) SetSampleCount(v float64) *MetricDatapoint { - s.SampleCount = &v - return s +// String returns the string representation +func (s RebootInstanceInput) String() string { + return awsutil.Prettify(s) } -// SetSum sets the Sum field's value. -func (s *MetricDatapoint) SetSum(v float64) *MetricDatapoint { - s.Sum = &v - return s +// GoString returns the string representation +func (s RebootInstanceInput) GoString() string { + return s.String() } -// SetTimestamp sets the Timestamp field's value. -func (s *MetricDatapoint) SetTimestamp(v time.Time) *MetricDatapoint { - s.Timestamp = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *RebootInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RebootInstanceInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetUnit sets the Unit field's value. -func (s *MetricDatapoint) SetUnit(v string) *MetricDatapoint { - s.Unit = &v +// SetInstanceName sets the InstanceName field's value. +func (s *RebootInstanceInput) SetInstanceName(v string) *RebootInstanceInput { + s.InstanceName = &v return s } -// Describes the monthly data transfer in and out of your virtual private server -// (or instance). -type MonthlyTransfer struct { +type RebootInstanceOutput struct { _ struct{} `type:"structure"` - // The amount allocated per month (in GB). - GbPerMonthAllocated *int64 `locationName:"gbPerMonthAllocated" type:"integer"` + // An array of key-value pairs containing information about the request operations. + Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s MonthlyTransfer) String() string { +func (s RebootInstanceOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s MonthlyTransfer) GoString() string { +func (s RebootInstanceOutput) GoString() string { return s.String() } -// SetGbPerMonthAllocated sets the GbPerMonthAllocated field's value. 
-func (s *MonthlyTransfer) SetGbPerMonthAllocated(v int64) *MonthlyTransfer { - s.GbPerMonthAllocated = &v +// SetOperations sets the Operations field's value. +func (s *RebootInstanceOutput) SetOperations(v []*Operation) *RebootInstanceOutput { + s.Operations = v return s } -type OpenInstancePublicPortsInput struct { +type RebootRelationalDatabaseInput struct { _ struct{} `type:"structure"` - // The name of the instance for which you want to open the public ports. - // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` - - // An array of key-value pairs containing information about the port mappings. + // The name of your database to reboot. // - // PortInfo is a required field - PortInfo *PortInfo `locationName:"portInfo" type:"structure" required:"true"` + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` } // String returns the string representation -func (s OpenInstancePublicPortsInput) String() string { +func (s RebootRelationalDatabaseInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OpenInstancePublicPortsInput) GoString() string { +func (s RebootRelationalDatabaseInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *OpenInstancePublicPortsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OpenInstancePublicPortsInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - if s.PortInfo == nil { - invalidParams.Add(request.NewErrParamRequired("PortInfo")) +func (s *RebootRelationalDatabaseInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RebootRelationalDatabaseInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -14541,493 +19123,878 @@ func (s *OpenInstancePublicPortsInput) Validate() error { return nil } -// SetInstanceName sets the InstanceName field's value. -func (s *OpenInstancePublicPortsInput) SetInstanceName(v string) *OpenInstancePublicPortsInput { - s.InstanceName = &v +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *RebootRelationalDatabaseInput) SetRelationalDatabaseName(v string) *RebootRelationalDatabaseInput { + s.RelationalDatabaseName = &v return s } -// SetPortInfo sets the PortInfo field's value. -func (s *OpenInstancePublicPortsInput) SetPortInfo(v *PortInfo) *OpenInstancePublicPortsInput { - s.PortInfo = v +type RebootRelationalDatabaseOutput struct { + _ struct{} `type:"structure"` + + // An object describing the result of your reboot relational database request. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s RebootRelationalDatabaseOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RebootRelationalDatabaseOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *RebootRelationalDatabaseOutput) SetOperations(v []*Operation) *RebootRelationalDatabaseOutput { + s.Operations = v return s } -type OpenInstancePublicPortsOutput struct { +// Describes the AWS Region. 
+type Region struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the request operation. - Operation *Operation `locationName:"operation" type:"structure"` + // The Availability Zones. Follows the format us-east-2a (case-sensitive). + AvailabilityZones []*AvailabilityZone `locationName:"availabilityZones" type:"list"` + + // The continent code (e.g., NA, meaning North America). + ContinentCode *string `locationName:"continentCode" type:"string"` + + // The description of the AWS Region (e.g., This region is recommended to serve + // users in the eastern United States and eastern Canada). + Description *string `locationName:"description" type:"string"` + + // The display name (e.g., Ohio). + DisplayName *string `locationName:"displayName" type:"string"` + + // The region name (e.g., us-east-2). + Name *string `locationName:"name" type:"string" enum:"RegionName"` + + // The Availability Zones for databases. Follows the format us-east-2a (case-sensitive). + RelationalDatabaseAvailabilityZones []*AvailabilityZone `locationName:"relationalDatabaseAvailabilityZones" type:"list"` } // String returns the string representation -func (s OpenInstancePublicPortsOutput) String() string { +func (s Region) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OpenInstancePublicPortsOutput) GoString() string { +func (s Region) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *OpenInstancePublicPortsOutput) SetOperation(v *Operation) *OpenInstancePublicPortsOutput { - s.Operation = v +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *Region) SetAvailabilityZones(v []*AvailabilityZone) *Region { + s.AvailabilityZones = v return s } -// Describes the API operation. -type Operation struct { +// SetContinentCode sets the ContinentCode field's value. +func (s *Region) SetContinentCode(v string) *Region { + s.ContinentCode = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *Region) SetDescription(v string) *Region { + s.Description = &v + return s +} + +// SetDisplayName sets the DisplayName field's value. +func (s *Region) SetDisplayName(v string) *Region { + s.DisplayName = &v + return s +} + +// SetName sets the Name field's value. +func (s *Region) SetName(v string) *Region { + s.Name = &v + return s +} + +// SetRelationalDatabaseAvailabilityZones sets the RelationalDatabaseAvailabilityZones field's value. +func (s *Region) SetRelationalDatabaseAvailabilityZones(v []*AvailabilityZone) *Region { + s.RelationalDatabaseAvailabilityZones = v + return s +} + +// Describes a database. +type RelationalDatabase struct { _ struct{} `type:"structure"` - // The timestamp when the operation was initialized (e.g., 1479816991.349). - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // The Amazon Resource Name (ARN) of the database. + Arn *string `locationName:"arn" type:"string"` - // The error code. - ErrorCode *string `locationName:"errorCode" type:"string"` + // A Boolean value indicating whether automated backup retention is enabled + // for the database. + BackupRetentionEnabled *bool `locationName:"backupRetentionEnabled" type:"boolean"` - // The error details. - ErrorDetails *string `locationName:"errorDetails" type:"string"` + // The timestamp when the database was created. Formatted in Unix time. 
+ CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The ID of the operation. - Id *string `locationName:"id" type:"string"` + // The database software (for example, MySQL). + Engine *string `locationName:"engine" type:"string"` - // A Boolean value indicating whether the operation is terminal. - IsTerminal *bool `locationName:"isTerminal" type:"boolean"` + // The database engine version (for example, 5.7.23). + EngineVersion *string `locationName:"engineVersion" type:"string"` - // The region and Availability Zone. + // Describes the hardware of the database. + Hardware *RelationalDatabaseHardware `locationName:"hardware" type:"structure"` + + // The latest point in time to which the database can be restored. Formatted + // in Unix time. + LatestRestorableTime *time.Time `locationName:"latestRestorableTime" type:"timestamp"` + + // The Region name and Availability Zone where the database is located. Location *ResourceLocation `locationName:"location" type:"structure"` - // Details about the operation (e.g., Debian-1GB-Ohio-1). - OperationDetails *string `locationName:"operationDetails" type:"string"` + // The name of the master database created when the Lightsail database resource + // is created. + MasterDatabaseName *string `locationName:"masterDatabaseName" type:"string"` - // The type of operation. - OperationType *string `locationName:"operationType" type:"string" enum:"OperationType"` + // The master endpoint for the database. + MasterEndpoint *RelationalDatabaseEndpoint `locationName:"masterEndpoint" type:"structure"` - // The resource name. - ResourceName *string `locationName:"resourceName" type:"string"` + // The master user name of the database. + MasterUsername *string `locationName:"masterUsername" type:"string"` - // The resource type. + // The unique name of the database resource in Lightsail. + Name *string `locationName:"name" type:"string"` + + // The status of parameter updates for the database. + ParameterApplyStatus *string `locationName:"parameterApplyStatus" type:"string"` + + // Describes the pending maintenance actions for the database. + PendingMaintenanceActions []*PendingMaintenanceAction `locationName:"pendingMaintenanceActions" type:"list"` + + // Describes pending database value modifications. + PendingModifiedValues *PendingModifiedRelationalDatabaseValues `locationName:"pendingModifiedValues" type:"structure"` + + // The daily time range during which automated backups are created for the database + // (for example, 16:00-16:30). + PreferredBackupWindow *string `locationName:"preferredBackupWindow" type:"string"` + + // The weekly time range during which system maintenance can occur on the database. + // + // In the format ddd:hh24:mi-ddd:hh24:mi. For example, Tue:17:00-Tue:17:30. + PreferredMaintenanceWindow *string `locationName:"preferredMaintenanceWindow" type:"string"` + + // A Boolean value indicating whether the database is publicly accessible. + PubliclyAccessible *bool `locationName:"publiclyAccessible" type:"boolean"` + + // The blueprint ID for the database. A blueprint describes the major engine + // version of a database. + RelationalDatabaseBlueprintId *string `locationName:"relationalDatabaseBlueprintId" type:"string"` + + // The bundle ID for the database. A bundle describes the performance specifications + // for your database. + RelationalDatabaseBundleId *string `locationName:"relationalDatabaseBundleId" type:"string"` + + // The Lightsail resource type for the database (for example, RelationalDatabase). 
ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` - // The status of the operation. - Status *string `locationName:"status" type:"string" enum:"OperationStatus"` + // Describes the secondary Availability Zone of a high availability database. + // + // The secondary database is used for failover support of a high availability + // database. + SecondaryAvailabilityZone *string `locationName:"secondaryAvailabilityZone" type:"string"` - // The timestamp when the status was changed (e.g., 1479816991.349). - StatusChangedAt *time.Time `locationName:"statusChangedAt" type:"timestamp" timestampFormat:"unix"` + // Describes the current state of the database. + State *string `locationName:"state" type:"string"` + + // The support code for the database. Include this code in your email to support + // when you have questions about a database in Lightsail. This code enables + // our support team to look up your Lightsail information more easily. + SupportCode *string `locationName:"supportCode" type:"string"` } // String returns the string representation -func (s Operation) String() string { +func (s RelationalDatabase) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Operation) GoString() string { +func (s RelationalDatabase) GoString() string { return s.String() } +// SetArn sets the Arn field's value. +func (s *RelationalDatabase) SetArn(v string) *RelationalDatabase { + s.Arn = &v + return s +} + +// SetBackupRetentionEnabled sets the BackupRetentionEnabled field's value. +func (s *RelationalDatabase) SetBackupRetentionEnabled(v bool) *RelationalDatabase { + s.BackupRetentionEnabled = &v + return s +} + // SetCreatedAt sets the CreatedAt field's value. -func (s *Operation) SetCreatedAt(v time.Time) *Operation { +func (s *RelationalDatabase) SetCreatedAt(v time.Time) *RelationalDatabase { s.CreatedAt = &v return s } -// SetErrorCode sets the ErrorCode field's value. -func (s *Operation) SetErrorCode(v string) *Operation { - s.ErrorCode = &v +// SetEngine sets the Engine field's value. +func (s *RelationalDatabase) SetEngine(v string) *RelationalDatabase { + s.Engine = &v return s } -// SetErrorDetails sets the ErrorDetails field's value. -func (s *Operation) SetErrorDetails(v string) *Operation { - s.ErrorDetails = &v +// SetEngineVersion sets the EngineVersion field's value. +func (s *RelationalDatabase) SetEngineVersion(v string) *RelationalDatabase { + s.EngineVersion = &v return s } -// SetId sets the Id field's value. -func (s *Operation) SetId(v string) *Operation { - s.Id = &v +// SetHardware sets the Hardware field's value. +func (s *RelationalDatabase) SetHardware(v *RelationalDatabaseHardware) *RelationalDatabase { + s.Hardware = v return s } -// SetIsTerminal sets the IsTerminal field's value. -func (s *Operation) SetIsTerminal(v bool) *Operation { - s.IsTerminal = &v +// SetLatestRestorableTime sets the LatestRestorableTime field's value. +func (s *RelationalDatabase) SetLatestRestorableTime(v time.Time) *RelationalDatabase { + s.LatestRestorableTime = &v return s } // SetLocation sets the Location field's value. -func (s *Operation) SetLocation(v *ResourceLocation) *Operation { +func (s *RelationalDatabase) SetLocation(v *ResourceLocation) *RelationalDatabase { s.Location = v return s } -// SetOperationDetails sets the OperationDetails field's value. 
-func (s *Operation) SetOperationDetails(v string) *Operation { - s.OperationDetails = &v +// SetMasterDatabaseName sets the MasterDatabaseName field's value. +func (s *RelationalDatabase) SetMasterDatabaseName(v string) *RelationalDatabase { + s.MasterDatabaseName = &v return s } -// SetOperationType sets the OperationType field's value. -func (s *Operation) SetOperationType(v string) *Operation { - s.OperationType = &v +// SetMasterEndpoint sets the MasterEndpoint field's value. +func (s *RelationalDatabase) SetMasterEndpoint(v *RelationalDatabaseEndpoint) *RelationalDatabase { + s.MasterEndpoint = v return s } -// SetResourceName sets the ResourceName field's value. -func (s *Operation) SetResourceName(v string) *Operation { - s.ResourceName = &v +// SetMasterUsername sets the MasterUsername field's value. +func (s *RelationalDatabase) SetMasterUsername(v string) *RelationalDatabase { + s.MasterUsername = &v + return s +} + +// SetName sets the Name field's value. +func (s *RelationalDatabase) SetName(v string) *RelationalDatabase { + s.Name = &v + return s +} + +// SetParameterApplyStatus sets the ParameterApplyStatus field's value. +func (s *RelationalDatabase) SetParameterApplyStatus(v string) *RelationalDatabase { + s.ParameterApplyStatus = &v + return s +} + +// SetPendingMaintenanceActions sets the PendingMaintenanceActions field's value. +func (s *RelationalDatabase) SetPendingMaintenanceActions(v []*PendingMaintenanceAction) *RelationalDatabase { + s.PendingMaintenanceActions = v + return s +} + +// SetPendingModifiedValues sets the PendingModifiedValues field's value. +func (s *RelationalDatabase) SetPendingModifiedValues(v *PendingModifiedRelationalDatabaseValues) *RelationalDatabase { + s.PendingModifiedValues = v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *RelationalDatabase) SetPreferredBackupWindow(v string) *RelationalDatabase { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *RelationalDatabase) SetPreferredMaintenanceWindow(v string) *RelationalDatabase { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetPubliclyAccessible sets the PubliclyAccessible field's value. +func (s *RelationalDatabase) SetPubliclyAccessible(v bool) *RelationalDatabase { + s.PubliclyAccessible = &v + return s +} + +// SetRelationalDatabaseBlueprintId sets the RelationalDatabaseBlueprintId field's value. +func (s *RelationalDatabase) SetRelationalDatabaseBlueprintId(v string) *RelationalDatabase { + s.RelationalDatabaseBlueprintId = &v + return s +} + +// SetRelationalDatabaseBundleId sets the RelationalDatabaseBundleId field's value. +func (s *RelationalDatabase) SetRelationalDatabaseBundleId(v string) *RelationalDatabase { + s.RelationalDatabaseBundleId = &v return s } // SetResourceType sets the ResourceType field's value. -func (s *Operation) SetResourceType(v string) *Operation { +func (s *RelationalDatabase) SetResourceType(v string) *RelationalDatabase { s.ResourceType = &v return s } -// SetStatus sets the Status field's value. -func (s *Operation) SetStatus(v string) *Operation { - s.Status = &v +// SetSecondaryAvailabilityZone sets the SecondaryAvailabilityZone field's value. +func (s *RelationalDatabase) SetSecondaryAvailabilityZone(v string) *RelationalDatabase { + s.SecondaryAvailabilityZone = &v return s } -// SetStatusChangedAt sets the StatusChangedAt field's value. 
-func (s *Operation) SetStatusChangedAt(v time.Time) *Operation { - s.StatusChangedAt = &v +// SetState sets the State field's value. +func (s *RelationalDatabase) SetState(v string) *RelationalDatabase { + s.State = &v return s } -// The password data for the Windows Server-based instance, including the ciphertext -// and the key pair name. -type PasswordData struct { +// SetSupportCode sets the SupportCode field's value. +func (s *RelationalDatabase) SetSupportCode(v string) *RelationalDatabase { + s.SupportCode = &v + return s +} + +// Describes a database image, or blueprint. A blueprint describes the major +// engine version of a database. +type RelationalDatabaseBlueprint struct { _ struct{} `type:"structure"` - // The encrypted password. Ciphertext will be an empty string if access to your - // new instance is not ready yet. When you create an instance, it can take up - // to 15 minutes for the instance to be ready. - // - // If you use the default key pair (LightsailDefaultKeyPair), the decrypted - // password will be available in the password field. - // - // If you are using a custom key pair, you need to use your own means of decryption. - // - // If you change the Administrator password on the instance, Lightsail will - // continue to return the original ciphertext value. When accessing the instance - // using RDP, you need to manually enter the Administrator password after changing - // it from the default. - Ciphertext *string `locationName:"ciphertext" type:"string"` + // The ID for the database blueprint. + BlueprintId *string `locationName:"blueprintId" type:"string"` - // The name of the key pair that you used when creating your instance. If no - // key pair name was specified when creating the instance, Lightsail uses the - // default key pair (LightsailDefaultKeyPair). - // - // If you are using a custom key pair, you need to use your own means of decrypting - // your password using the ciphertext. Lightsail creates the ciphertext by encrypting - // your password with the public key part of this key pair. - KeyPairName *string `locationName:"keyPairName" type:"string"` + // The database software of the database blueprint (for example, MySQL). + Engine *string `locationName:"engine" type:"string" enum:"RelationalDatabaseEngine"` + + // The description of the database engine for the database blueprint. + EngineDescription *string `locationName:"engineDescription" type:"string"` + + // The database engine version for the database blueprint (for example, 5.7.23). + EngineVersion *string `locationName:"engineVersion" type:"string"` + + // The description of the database engine version for the database blueprint. + EngineVersionDescription *string `locationName:"engineVersionDescription" type:"string"` + + // A Boolean value indicating whether the engine version is the default for + // the database blueprint. + IsEngineDefault *bool `locationName:"isEngineDefault" type:"boolean"` } // String returns the string representation -func (s PasswordData) String() string { +func (s RelationalDatabaseBlueprint) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PasswordData) GoString() string { +func (s RelationalDatabaseBlueprint) GoString() string { return s.String() } -// SetCiphertext sets the Ciphertext field's value. -func (s *PasswordData) SetCiphertext(v string) *PasswordData { - s.Ciphertext = &v +// SetBlueprintId sets the BlueprintId field's value. 
+func (s *RelationalDatabaseBlueprint) SetBlueprintId(v string) *RelationalDatabaseBlueprint { + s.BlueprintId = &v return s } -// SetKeyPairName sets the KeyPairName field's value. -func (s *PasswordData) SetKeyPairName(v string) *PasswordData { - s.KeyPairName = &v +// SetEngine sets the Engine field's value. +func (s *RelationalDatabaseBlueprint) SetEngine(v string) *RelationalDatabaseBlueprint { + s.Engine = &v return s } -type PeerVpcInput struct { - _ struct{} `type:"structure"` +// SetEngineDescription sets the EngineDescription field's value. +func (s *RelationalDatabaseBlueprint) SetEngineDescription(v string) *RelationalDatabaseBlueprint { + s.EngineDescription = &v + return s } -// String returns the string representation -func (s PeerVpcInput) String() string { - return awsutil.Prettify(s) +// SetEngineVersion sets the EngineVersion field's value. +func (s *RelationalDatabaseBlueprint) SetEngineVersion(v string) *RelationalDatabaseBlueprint { + s.EngineVersion = &v + return s } -// GoString returns the string representation -func (s PeerVpcInput) GoString() string { - return s.String() +// SetEngineVersionDescription sets the EngineVersionDescription field's value. +func (s *RelationalDatabaseBlueprint) SetEngineVersionDescription(v string) *RelationalDatabaseBlueprint { + s.EngineVersionDescription = &v + return s } -type PeerVpcOutput struct { +// SetIsEngineDefault sets the IsEngineDefault field's value. +func (s *RelationalDatabaseBlueprint) SetIsEngineDefault(v bool) *RelationalDatabaseBlueprint { + s.IsEngineDefault = &v + return s +} + +// Describes a database bundle. A bundle describes the performance specifications +// of the database. +type RelationalDatabaseBundle struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the request operation. - Operation *Operation `locationName:"operation" type:"structure"` + // The ID for the database bundle. + BundleId *string `locationName:"bundleId" type:"string"` + + // The number of virtual CPUs (vCPUs) for the database bundle. + CpuCount *int64 `locationName:"cpuCount" type:"integer"` + + // The size of the disk for the database bundle. + DiskSizeInGb *int64 `locationName:"diskSizeInGb" type:"integer"` + + // A Boolean value indicating whether the database bundle is active. + IsActive *bool `locationName:"isActive" type:"boolean"` + + // A Boolean value indicating whether the database bundle is encrypted. + IsEncrypted *bool `locationName:"isEncrypted" type:"boolean"` + + // The name for the database bundle. + Name *string `locationName:"name" type:"string"` + + // The cost of the database bundle in US currency. + Price *float64 `locationName:"price" type:"float"` + + // The amount of RAM in GB (for example, 2.0) for the database bundle. + RamSizeInGb *float64 `locationName:"ramSizeInGb" type:"float"` + + // The data transfer rate per month in GB for the database bundle. + TransferPerMonthInGb *int64 `locationName:"transferPerMonthInGb" type:"integer"` } // String returns the string representation -func (s PeerVpcOutput) String() string { +func (s RelationalDatabaseBundle) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PeerVpcOutput) GoString() string { +func (s RelationalDatabaseBundle) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *PeerVpcOutput) SetOperation(v *Operation) *PeerVpcOutput { - s.Operation = v +// SetBundleId sets the BundleId field's value. 
+func (s *RelationalDatabaseBundle) SetBundleId(v string) *RelationalDatabaseBundle { + s.BundleId = &v return s } -// Describes information about the ports on your virtual private server (or -// instance). -type PortInfo struct { - _ struct{} `type:"structure"` +// SetCpuCount sets the CpuCount field's value. +func (s *RelationalDatabaseBundle) SetCpuCount(v int64) *RelationalDatabaseBundle { + s.CpuCount = &v + return s +} - // The first port in the range. - FromPort *int64 `locationName:"fromPort" type:"integer"` +// SetDiskSizeInGb sets the DiskSizeInGb field's value. +func (s *RelationalDatabaseBundle) SetDiskSizeInGb(v int64) *RelationalDatabaseBundle { + s.DiskSizeInGb = &v + return s +} - // The protocol. - Protocol *string `locationName:"protocol" type:"string" enum:"NetworkProtocol"` +// SetIsActive sets the IsActive field's value. +func (s *RelationalDatabaseBundle) SetIsActive(v bool) *RelationalDatabaseBundle { + s.IsActive = &v + return s +} - // The last port in the range. - ToPort *int64 `locationName:"toPort" type:"integer"` +// SetIsEncrypted sets the IsEncrypted field's value. +func (s *RelationalDatabaseBundle) SetIsEncrypted(v bool) *RelationalDatabaseBundle { + s.IsEncrypted = &v + return s +} + +// SetName sets the Name field's value. +func (s *RelationalDatabaseBundle) SetName(v string) *RelationalDatabaseBundle { + s.Name = &v + return s +} + +// SetPrice sets the Price field's value. +func (s *RelationalDatabaseBundle) SetPrice(v float64) *RelationalDatabaseBundle { + s.Price = &v + return s +} + +// SetRamSizeInGb sets the RamSizeInGb field's value. +func (s *RelationalDatabaseBundle) SetRamSizeInGb(v float64) *RelationalDatabaseBundle { + s.RamSizeInGb = &v + return s +} + +// SetTransferPerMonthInGb sets the TransferPerMonthInGb field's value. +func (s *RelationalDatabaseBundle) SetTransferPerMonthInGb(v int64) *RelationalDatabaseBundle { + s.TransferPerMonthInGb = &v + return s +} + +// Describes an endpoint for a database. +type RelationalDatabaseEndpoint struct { + _ struct{} `type:"structure"` + + // Specifies the DNS address of the database. + Address *string `locationName:"address" type:"string"` + + // Specifies the port that the database is listening on. + Port *int64 `locationName:"port" type:"integer"` } // String returns the string representation -func (s PortInfo) String() string { +func (s RelationalDatabaseEndpoint) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PortInfo) GoString() string { +func (s RelationalDatabaseEndpoint) GoString() string { return s.String() } -// SetFromPort sets the FromPort field's value. -func (s *PortInfo) SetFromPort(v int64) *PortInfo { - s.FromPort = &v - return s -} - -// SetProtocol sets the Protocol field's value. -func (s *PortInfo) SetProtocol(v string) *PortInfo { - s.Protocol = &v +// SetAddress sets the Address field's value. +func (s *RelationalDatabaseEndpoint) SetAddress(v string) *RelationalDatabaseEndpoint { + s.Address = &v return s } -// SetToPort sets the ToPort field's value. -func (s *PortInfo) SetToPort(v int64) *PortInfo { - s.ToPort = &v +// SetPort sets the Port field's value. +func (s *RelationalDatabaseEndpoint) SetPort(v int64) *RelationalDatabaseEndpoint { + s.Port = &v return s } -type PutInstancePublicPortsInput struct { +// Describes an event for a database. +type RelationalDatabaseEvent struct { _ struct{} `type:"structure"` - // The Lightsail instance name of the public port(s) you are setting. 
- // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // The timestamp when the database event was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // Specifies information about the public port(s). - // - // PortInfos is a required field - PortInfos []*PortInfo `locationName:"portInfos" type:"list" required:"true"` + // The category that the database event belongs to. + EventCategories []*string `locationName:"eventCategories" type:"list"` + + // The message of the database event. + Message *string `locationName:"message" type:"string"` + + // The database that the database event relates to. + Resource *string `locationName:"resource" type:"string"` } // String returns the string representation -func (s PutInstancePublicPortsInput) String() string { +func (s RelationalDatabaseEvent) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PutInstancePublicPortsInput) GoString() string { +func (s RelationalDatabaseEvent) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *PutInstancePublicPortsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "PutInstancePublicPortsInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } - if s.PortInfos == nil { - invalidParams.Add(request.NewErrParamRequired("PortInfos")) - } +// SetCreatedAt sets the CreatedAt field's value. +func (s *RelationalDatabaseEvent) SetCreatedAt(v time.Time) *RelationalDatabaseEvent { + s.CreatedAt = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetEventCategories sets the EventCategories field's value. +func (s *RelationalDatabaseEvent) SetEventCategories(v []*string) *RelationalDatabaseEvent { + s.EventCategories = v + return s } -// SetInstanceName sets the InstanceName field's value. -func (s *PutInstancePublicPortsInput) SetInstanceName(v string) *PutInstancePublicPortsInput { - s.InstanceName = &v +// SetMessage sets the Message field's value. +func (s *RelationalDatabaseEvent) SetMessage(v string) *RelationalDatabaseEvent { + s.Message = &v return s } -// SetPortInfos sets the PortInfos field's value. -func (s *PutInstancePublicPortsInput) SetPortInfos(v []*PortInfo) *PutInstancePublicPortsInput { - s.PortInfos = v +// SetResource sets the Resource field's value. +func (s *RelationalDatabaseEvent) SetResource(v string) *RelationalDatabaseEvent { + s.Resource = &v return s } -type PutInstancePublicPortsOutput struct { +// Describes the hardware of a database. +type RelationalDatabaseHardware struct { _ struct{} `type:"structure"` - // Describes metadata about the operation you just executed. - Operation *Operation `locationName:"operation" type:"structure"` + // The number of vCPUs for the database. + CpuCount *int64 `locationName:"cpuCount" type:"integer"` + + // The size of the disk for the database. + DiskSizeInGb *int64 `locationName:"diskSizeInGb" type:"integer"` + + // The amount of RAM in GB for the database. 
+ RamSizeInGb *float64 `locationName:"ramSizeInGb" type:"float"` } // String returns the string representation -func (s PutInstancePublicPortsOutput) String() string { +func (s RelationalDatabaseHardware) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PutInstancePublicPortsOutput) GoString() string { +func (s RelationalDatabaseHardware) GoString() string { return s.String() } -// SetOperation sets the Operation field's value. -func (s *PutInstancePublicPortsOutput) SetOperation(v *Operation) *PutInstancePublicPortsOutput { - s.Operation = v +// SetCpuCount sets the CpuCount field's value. +func (s *RelationalDatabaseHardware) SetCpuCount(v int64) *RelationalDatabaseHardware { + s.CpuCount = &v return s } -type RebootInstanceInput struct { +// SetDiskSizeInGb sets the DiskSizeInGb field's value. +func (s *RelationalDatabaseHardware) SetDiskSizeInGb(v int64) *RelationalDatabaseHardware { + s.DiskSizeInGb = &v + return s +} + +// SetRamSizeInGb sets the RamSizeInGb field's value. +func (s *RelationalDatabaseHardware) SetRamSizeInGb(v float64) *RelationalDatabaseHardware { + s.RamSizeInGb = &v + return s +} + +// Describes the parameters of a database. +type RelationalDatabaseParameter struct { _ struct{} `type:"structure"` - // The name of the instance to reboot. + // Specifies the valid range of values for the parameter. + AllowedValues *string `locationName:"allowedValues" type:"string"` + + // Indicates when parameter updates are applied. // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // Can be immediate or pending-reboot. + ApplyMethod *string `locationName:"applyMethod" type:"string"` + + // Specifies the engine-specific parameter type. + ApplyType *string `locationName:"applyType" type:"string"` + + // Specifies the valid data type for the parameter. + DataType *string `locationName:"dataType" type:"string"` + + // Provides a description of the parameter. + Description *string `locationName:"description" type:"string"` + + // A Boolean value indicating whether the parameter can be modified. + IsModifiable *bool `locationName:"isModifiable" type:"boolean"` + + // Specifies the name of the parameter. + ParameterName *string `locationName:"parameterName" type:"string"` + + // Specifies the value of the parameter. + ParameterValue *string `locationName:"parameterValue" type:"string"` } // String returns the string representation -func (s RebootInstanceInput) String() string { +func (s RelationalDatabaseParameter) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RebootInstanceInput) GoString() string { +func (s RelationalDatabaseParameter) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *RebootInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RebootInstanceInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) - } +// SetAllowedValues sets the AllowedValues field's value. +func (s *RelationalDatabaseParameter) SetAllowedValues(v string) *RelationalDatabaseParameter { + s.AllowedValues = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetApplyMethod sets the ApplyMethod field's value. 
+func (s *RelationalDatabaseParameter) SetApplyMethod(v string) *RelationalDatabaseParameter { + s.ApplyMethod = &v + return s } -// SetInstanceName sets the InstanceName field's value. -func (s *RebootInstanceInput) SetInstanceName(v string) *RebootInstanceInput { - s.InstanceName = &v +// SetApplyType sets the ApplyType field's value. +func (s *RelationalDatabaseParameter) SetApplyType(v string) *RelationalDatabaseParameter { + s.ApplyType = &v return s } -type RebootInstanceOutput struct { - _ struct{} `type:"structure"` +// SetDataType sets the DataType field's value. +func (s *RelationalDatabaseParameter) SetDataType(v string) *RelationalDatabaseParameter { + s.DataType = &v + return s +} - // An array of key-value pairs containing information about the request operations. - Operations []*Operation `locationName:"operations" type:"list"` +// SetDescription sets the Description field's value. +func (s *RelationalDatabaseParameter) SetDescription(v string) *RelationalDatabaseParameter { + s.Description = &v + return s } -// String returns the string representation -func (s RebootInstanceOutput) String() string { - return awsutil.Prettify(s) +// SetIsModifiable sets the IsModifiable field's value. +func (s *RelationalDatabaseParameter) SetIsModifiable(v bool) *RelationalDatabaseParameter { + s.IsModifiable = &v + return s } -// GoString returns the string representation -func (s RebootInstanceOutput) GoString() string { - return s.String() +// SetParameterName sets the ParameterName field's value. +func (s *RelationalDatabaseParameter) SetParameterName(v string) *RelationalDatabaseParameter { + s.ParameterName = &v + return s } -// SetOperations sets the Operations field's value. -func (s *RebootInstanceOutput) SetOperations(v []*Operation) *RebootInstanceOutput { - s.Operations = v +// SetParameterValue sets the ParameterValue field's value. +func (s *RelationalDatabaseParameter) SetParameterValue(v string) *RelationalDatabaseParameter { + s.ParameterValue = &v return s } -// Describes the AWS Region. -type Region struct { +// Describes a database snapshot. +type RelationalDatabaseSnapshot struct { _ struct{} `type:"structure"` - // The Availability Zones. Follows the format us-east-2a (case-sensitive). - AvailabilityZones []*AvailabilityZone `locationName:"availabilityZones" type:"list"` + // The Amazon Resource Name (ARN) of the database snapshot. + Arn *string `locationName:"arn" type:"string"` - // The continent code (e.g., NA, meaning North America). - ContinentCode *string `locationName:"continentCode" type:"string"` + // The timestamp when the database snapshot was created. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` - // The description of the AWS Region (e.g., This region is recommended to serve - // users in the eastern United States and eastern Canada). - Description *string `locationName:"description" type:"string"` + // The software of the database snapshot (for example, MySQL) + Engine *string `locationName:"engine" type:"string"` - // The display name (e.g., Ohio). - DisplayName *string `locationName:"displayName" type:"string"` + // The database engine version for the database snapshot (for example, 5.7.23). + EngineVersion *string `locationName:"engineVersion" type:"string"` - // The region name (e.g., us-east-2). - Name *string `locationName:"name" type:"string" enum:"RegionName"` + // The Amazon Resource Name (ARN) of the database from which the database snapshot + // was created. 
+ FromRelationalDatabaseArn *string `locationName:"fromRelationalDatabaseArn" type:"string"` + + // The blueprint ID of the database from which the database snapshot was created. + // A blueprint describes the major engine version of a database. + FromRelationalDatabaseBlueprintId *string `locationName:"fromRelationalDatabaseBlueprintId" type:"string"` + + // The bundle ID of the database from which the database snapshot was created. + FromRelationalDatabaseBundleId *string `locationName:"fromRelationalDatabaseBundleId" type:"string"` + + // The name of the source database from which the database snapshot was created. + FromRelationalDatabaseName *string `locationName:"fromRelationalDatabaseName" type:"string"` + + // The Region name and Availability Zone where the database snapshot is located. + Location *ResourceLocation `locationName:"location" type:"structure"` + + // The name of the database snapshot. + Name *string `locationName:"name" type:"string"` + + // The Lightsail resource type. + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // The size of the disk in GB (for example, 32) for the database snapshot. + SizeInGb *int64 `locationName:"sizeInGb" type:"integer"` + + // The state of the database snapshot. + State *string `locationName:"state" type:"string"` + + // The support code for the database snapshot. Include this code in your email + // to support when you have questions about a database snapshot in Lightsail. + // This code enables our support team to look up your Lightsail information + // more easily. + SupportCode *string `locationName:"supportCode" type:"string"` } // String returns the string representation -func (s Region) String() string { +func (s RelationalDatabaseSnapshot) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Region) GoString() string { +func (s RelationalDatabaseSnapshot) GoString() string { return s.String() } -// SetAvailabilityZones sets the AvailabilityZones field's value. -func (s *Region) SetAvailabilityZones(v []*AvailabilityZone) *Region { - s.AvailabilityZones = v +// SetArn sets the Arn field's value. +func (s *RelationalDatabaseSnapshot) SetArn(v string) *RelationalDatabaseSnapshot { + s.Arn = &v return s } -// SetContinentCode sets the ContinentCode field's value. -func (s *Region) SetContinentCode(v string) *Region { - s.ContinentCode = &v +// SetCreatedAt sets the CreatedAt field's value. +func (s *RelationalDatabaseSnapshot) SetCreatedAt(v time.Time) *RelationalDatabaseSnapshot { + s.CreatedAt = &v return s } -// SetDescription sets the Description field's value. -func (s *Region) SetDescription(v string) *Region { - s.Description = &v +// SetEngine sets the Engine field's value. +func (s *RelationalDatabaseSnapshot) SetEngine(v string) *RelationalDatabaseSnapshot { + s.Engine = &v return s } -// SetDisplayName sets the DisplayName field's value. -func (s *Region) SetDisplayName(v string) *Region { - s.DisplayName = &v +// SetEngineVersion sets the EngineVersion field's value. +func (s *RelationalDatabaseSnapshot) SetEngineVersion(v string) *RelationalDatabaseSnapshot { + s.EngineVersion = &v + return s +} + +// SetFromRelationalDatabaseArn sets the FromRelationalDatabaseArn field's value. +func (s *RelationalDatabaseSnapshot) SetFromRelationalDatabaseArn(v string) *RelationalDatabaseSnapshot { + s.FromRelationalDatabaseArn = &v + return s +} + +// SetFromRelationalDatabaseBlueprintId sets the FromRelationalDatabaseBlueprintId field's value. 
+func (s *RelationalDatabaseSnapshot) SetFromRelationalDatabaseBlueprintId(v string) *RelationalDatabaseSnapshot { + s.FromRelationalDatabaseBlueprintId = &v + return s +} + +// SetFromRelationalDatabaseBundleId sets the FromRelationalDatabaseBundleId field's value. +func (s *RelationalDatabaseSnapshot) SetFromRelationalDatabaseBundleId(v string) *RelationalDatabaseSnapshot { + s.FromRelationalDatabaseBundleId = &v + return s +} + +// SetFromRelationalDatabaseName sets the FromRelationalDatabaseName field's value. +func (s *RelationalDatabaseSnapshot) SetFromRelationalDatabaseName(v string) *RelationalDatabaseSnapshot { + s.FromRelationalDatabaseName = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *RelationalDatabaseSnapshot) SetLocation(v *ResourceLocation) *RelationalDatabaseSnapshot { + s.Location = v return s } // SetName sets the Name field's value. -func (s *Region) SetName(v string) *Region { +func (s *RelationalDatabaseSnapshot) SetName(v string) *RelationalDatabaseSnapshot { s.Name = &v return s } +// SetResourceType sets the ResourceType field's value. +func (s *RelationalDatabaseSnapshot) SetResourceType(v string) *RelationalDatabaseSnapshot { + s.ResourceType = &v + return s +} + +// SetSizeInGb sets the SizeInGb field's value. +func (s *RelationalDatabaseSnapshot) SetSizeInGb(v int64) *RelationalDatabaseSnapshot { + s.SizeInGb = &v + return s +} + +// SetState sets the State field's value. +func (s *RelationalDatabaseSnapshot) SetState(v string) *RelationalDatabaseSnapshot { + s.State = &v + return s +} + +// SetSupportCode sets the SupportCode field's value. +func (s *RelationalDatabaseSnapshot) SetSupportCode(v string) *RelationalDatabaseSnapshot { + s.SupportCode = &v + return s +} + type ReleaseStaticIpInput struct { _ struct{} `type:"structure"` @@ -15116,36 +20083,97 @@ func (s *ResourceLocation) SetAvailabilityZone(v string) *ResourceLocation { return s } -// SetRegionName sets the RegionName field's value. -func (s *ResourceLocation) SetRegionName(v string) *ResourceLocation { - s.RegionName = &v +// SetRegionName sets the RegionName field's value. +func (s *ResourceLocation) SetRegionName(v string) *ResourceLocation { + s.RegionName = &v + return s +} + +type StartInstanceInput struct { + _ struct{} `type:"structure"` + + // The name of the instance (a virtual private server) to start. + // + // InstanceName is a required field + InstanceName *string `locationName:"instanceName" type:"string" required:"true"` +} + +// String returns the string representation +func (s StartInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StartInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartInstanceInput"} + if s.InstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInstanceName sets the InstanceName field's value. +func (s *StartInstanceInput) SetInstanceName(v string) *StartInstanceInput { + s.InstanceName = &v + return s +} + +type StartInstanceOutput struct { + _ struct{} `type:"structure"` + + // An array of key-value pairs containing information about the request operation. 
+ Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s StartInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartInstanceOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *StartInstanceOutput) SetOperations(v []*Operation) *StartInstanceOutput { + s.Operations = v return s } -type StartInstanceInput struct { +type StartRelationalDatabaseInput struct { _ struct{} `type:"structure"` - // The name of the instance (a virtual private server) to start. + // The name of your database to start. // - // InstanceName is a required field - InstanceName *string `locationName:"instanceName" type:"string" required:"true"` + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` } // String returns the string representation -func (s StartInstanceInput) String() string { +func (s StartRelationalDatabaseInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StartInstanceInput) GoString() string { +func (s StartRelationalDatabaseInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *StartInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "StartInstanceInput"} - if s.InstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceName")) +func (s *StartRelationalDatabaseInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartRelationalDatabaseInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) } if invalidParams.Len() > 0 { @@ -15154,31 +20182,31 @@ func (s *StartInstanceInput) Validate() error { return nil } -// SetInstanceName sets the InstanceName field's value. -func (s *StartInstanceInput) SetInstanceName(v string) *StartInstanceInput { - s.InstanceName = &v +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *StartRelationalDatabaseInput) SetRelationalDatabaseName(v string) *StartRelationalDatabaseInput { + s.RelationalDatabaseName = &v return s } -type StartInstanceOutput struct { +type StartRelationalDatabaseOutput struct { _ struct{} `type:"structure"` - // An array of key-value pairs containing information about the request operation. + // An object describing the result of your start relational database request. Operations []*Operation `locationName:"operations" type:"list"` } // String returns the string representation -func (s StartInstanceOutput) String() string { +func (s StartRelationalDatabaseOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StartInstanceOutput) GoString() string { +func (s StartRelationalDatabaseOutput) GoString() string { return s.String() } // SetOperations sets the Operations field's value. -func (s *StartInstanceOutput) SetOperations(v []*Operation) *StartInstanceOutput { +func (s *StartRelationalDatabaseOutput) SetOperations(v []*Operation) *StartRelationalDatabaseOutput { s.Operations = v return s } @@ -15194,7 +20222,7 @@ type StaticIp struct { AttachedTo *string `locationName:"attachedTo" type:"string"` // The timestamp when the static IP was created (e.g., 1479735304.222). 
- CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp"` // The static IP address. IpAddress *string `locationName:"ipAddress" type:"string"` @@ -15356,6 +20384,77 @@ func (s *StopInstanceOutput) SetOperations(v []*Operation) *StopInstanceOutput { return s } +type StopRelationalDatabaseInput struct { + _ struct{} `type:"structure"` + + // The name of your database to stop. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` + + // The name of your new database snapshot to be created before stopping your + // database. + RelationalDatabaseSnapshotName *string `locationName:"relationalDatabaseSnapshotName" type:"string"` +} + +// String returns the string representation +func (s StopRelationalDatabaseInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopRelationalDatabaseInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StopRelationalDatabaseInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopRelationalDatabaseInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *StopRelationalDatabaseInput) SetRelationalDatabaseName(v string) *StopRelationalDatabaseInput { + s.RelationalDatabaseName = &v + return s +} + +// SetRelationalDatabaseSnapshotName sets the RelationalDatabaseSnapshotName field's value. +func (s *StopRelationalDatabaseInput) SetRelationalDatabaseSnapshotName(v string) *StopRelationalDatabaseInput { + s.RelationalDatabaseSnapshotName = &v + return s +} + +type StopRelationalDatabaseOutput struct { + _ struct{} `type:"structure"` + + // An object describing the result of your stop relational database request. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s StopRelationalDatabaseOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopRelationalDatabaseOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *StopRelationalDatabaseOutput) SetOperations(v []*Operation) *StopRelationalDatabaseOutput { + s.Operations = v + return s +} + type UnpeerVpcInput struct { _ struct{} `type:"structure"` } @@ -15560,6 +20659,266 @@ func (s *UpdateLoadBalancerAttributeOutput) SetOperations(v []*Operation) *Updat return s } +type UpdateRelationalDatabaseInput struct { + _ struct{} `type:"structure"` + + // When true, applies changes immediately. When false, applies changes during + // the preferred maintenance window. Some changes may cause an outage. + // + // Default: false + ApplyImmediately *bool `locationName:"applyImmediately" type:"boolean"` + + // When true, disables automated backup retention for your database. + // + // Disabling backup retention deletes all automated database backups. Before + // disabling this, you may want to create a snapshot of your database using + // the create relational database snapshot operation. 
+ // + // Updates are applied during the next maintenance window because this can result + // in an outage. + DisableBackupRetention *bool `locationName:"disableBackupRetention" type:"boolean"` + + // When true, enables automated backup retention for your database. + // + // Updates are applied during the next maintenance window because this can result + // in an outage. + EnableBackupRetention *bool `locationName:"enableBackupRetention" type:"boolean"` + + // The password for the master user of your database. The password can include + // any printable ASCII character except "/", """, or "@". + // + // Constraints: Must contain 8 to 41 characters. + MasterUserPassword *string `locationName:"masterUserPassword" type:"string"` + + // The daily time range during which automated backups are created for your + // database if automated backups are enabled. + // + // Constraints: + // + // * Must be in the hh24:mi-hh24:mi format. + // + // Example: 16:00-16:30 + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Must not conflict with the preferred maintenance window. + // + // * Must be at least 30 minutes. + PreferredBackupWindow *string `locationName:"preferredBackupWindow" type:"string"` + + // The weekly time range during which system maintenance can occur on your database. + // + // The default is a 30-minute window selected at random from an 8-hour block + // of time for each AWS Region, occurring on a random day of the week. + // + // Constraints: + // + // * Must be in the ddd:hh24:mi-ddd:hh24:mi format. + // + // * Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. + // + // * Must be at least 30 minutes. + // + // * Specified in Universal Coordinated Time (UTC). + // + // * Example: Tue:17:00-Tue:17:30 + PreferredMaintenanceWindow *string `locationName:"preferredMaintenanceWindow" type:"string"` + + // Specifies the accessibility options for your database. A value of true specifies + // a database that is available to resources outside of your Lightsail account. + // A value of false specifies a database that is available only to your Lightsail + // resources in the same region as your database. + PubliclyAccessible *bool `locationName:"publiclyAccessible" type:"boolean"` + + // The name of your database to update. + // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` + + // When true, the master user password is changed to a new strong password generated + // by Lightsail. + // + // Use the get relational database master user password operation to get the + // new password. + RotateMasterUserPassword *bool `locationName:"rotateMasterUserPassword" type:"boolean"` +} + +// String returns the string representation +func (s UpdateRelationalDatabaseInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRelationalDatabaseInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateRelationalDatabaseInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateRelationalDatabaseInput"} + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplyImmediately sets the ApplyImmediately field's value. 
+func (s *UpdateRelationalDatabaseInput) SetApplyImmediately(v bool) *UpdateRelationalDatabaseInput { + s.ApplyImmediately = &v + return s +} + +// SetDisableBackupRetention sets the DisableBackupRetention field's value. +func (s *UpdateRelationalDatabaseInput) SetDisableBackupRetention(v bool) *UpdateRelationalDatabaseInput { + s.DisableBackupRetention = &v + return s +} + +// SetEnableBackupRetention sets the EnableBackupRetention field's value. +func (s *UpdateRelationalDatabaseInput) SetEnableBackupRetention(v bool) *UpdateRelationalDatabaseInput { + s.EnableBackupRetention = &v + return s +} + +// SetMasterUserPassword sets the MasterUserPassword field's value. +func (s *UpdateRelationalDatabaseInput) SetMasterUserPassword(v string) *UpdateRelationalDatabaseInput { + s.MasterUserPassword = &v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *UpdateRelationalDatabaseInput) SetPreferredBackupWindow(v string) *UpdateRelationalDatabaseInput { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *UpdateRelationalDatabaseInput) SetPreferredMaintenanceWindow(v string) *UpdateRelationalDatabaseInput { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetPubliclyAccessible sets the PubliclyAccessible field's value. +func (s *UpdateRelationalDatabaseInput) SetPubliclyAccessible(v bool) *UpdateRelationalDatabaseInput { + s.PubliclyAccessible = &v + return s +} + +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *UpdateRelationalDatabaseInput) SetRelationalDatabaseName(v string) *UpdateRelationalDatabaseInput { + s.RelationalDatabaseName = &v + return s +} + +// SetRotateMasterUserPassword sets the RotateMasterUserPassword field's value. +func (s *UpdateRelationalDatabaseInput) SetRotateMasterUserPassword(v bool) *UpdateRelationalDatabaseInput { + s.RotateMasterUserPassword = &v + return s +} + +type UpdateRelationalDatabaseOutput struct { + _ struct{} `type:"structure"` + + // An object describing the result of your update relational database request. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s UpdateRelationalDatabaseOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRelationalDatabaseOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. +func (s *UpdateRelationalDatabaseOutput) SetOperations(v []*Operation) *UpdateRelationalDatabaseOutput { + s.Operations = v + return s +} + +type UpdateRelationalDatabaseParametersInput struct { + _ struct{} `type:"structure"` + + // The database parameters to update. + // + // Parameters is a required field + Parameters []*RelationalDatabaseParameter `locationName:"parameters" type:"list" required:"true"` + + // The name of your database for which to update parameters. 
+ // + // RelationalDatabaseName is a required field + RelationalDatabaseName *string `locationName:"relationalDatabaseName" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateRelationalDatabaseParametersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRelationalDatabaseParametersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateRelationalDatabaseParametersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateRelationalDatabaseParametersInput"} + if s.Parameters == nil { + invalidParams.Add(request.NewErrParamRequired("Parameters")) + } + if s.RelationalDatabaseName == nil { + invalidParams.Add(request.NewErrParamRequired("RelationalDatabaseName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetParameters sets the Parameters field's value. +func (s *UpdateRelationalDatabaseParametersInput) SetParameters(v []*RelationalDatabaseParameter) *UpdateRelationalDatabaseParametersInput { + s.Parameters = v + return s +} + +// SetRelationalDatabaseName sets the RelationalDatabaseName field's value. +func (s *UpdateRelationalDatabaseParametersInput) SetRelationalDatabaseName(v string) *UpdateRelationalDatabaseParametersInput { + s.RelationalDatabaseName = &v + return s +} + +type UpdateRelationalDatabaseParametersOutput struct { + _ struct{} `type:"structure"` + + // An object describing the result of your update relational database parameters + // request. + Operations []*Operation `locationName:"operations" type:"list"` +} + +// String returns the string representation +func (s UpdateRelationalDatabaseParametersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRelationalDatabaseParametersOutput) GoString() string { + return s.String() +} + +// SetOperations sets the Operations field's value. 
+func (s *UpdateRelationalDatabaseParametersOutput) SetOperations(v []*Operation) *UpdateRelationalDatabaseParametersOutput { + s.Operations = v + return s +} + const ( // AccessDirectionInbound is a AccessDirection enum value AccessDirectionInbound = "inbound" @@ -16113,6 +21472,36 @@ const ( // OperationTypeCreateDiskFromSnapshot is a OperationType enum value OperationTypeCreateDiskFromSnapshot = "CreateDiskFromSnapshot" + + // OperationTypeCreateRelationalDatabase is a OperationType enum value + OperationTypeCreateRelationalDatabase = "CreateRelationalDatabase" + + // OperationTypeUpdateRelationalDatabase is a OperationType enum value + OperationTypeUpdateRelationalDatabase = "UpdateRelationalDatabase" + + // OperationTypeDeleteRelationalDatabase is a OperationType enum value + OperationTypeDeleteRelationalDatabase = "DeleteRelationalDatabase" + + // OperationTypeCreateRelationalDatabaseFromSnapshot is a OperationType enum value + OperationTypeCreateRelationalDatabaseFromSnapshot = "CreateRelationalDatabaseFromSnapshot" + + // OperationTypeCreateRelationalDatabaseSnapshot is a OperationType enum value + OperationTypeCreateRelationalDatabaseSnapshot = "CreateRelationalDatabaseSnapshot" + + // OperationTypeDeleteRelationalDatabaseSnapshot is a OperationType enum value + OperationTypeDeleteRelationalDatabaseSnapshot = "DeleteRelationalDatabaseSnapshot" + + // OperationTypeUpdateRelationalDatabaseParameters is a OperationType enum value + OperationTypeUpdateRelationalDatabaseParameters = "UpdateRelationalDatabaseParameters" + + // OperationTypeStartRelationalDatabase is a OperationType enum value + OperationTypeStartRelationalDatabase = "StartRelationalDatabase" + + // OperationTypeRebootRelationalDatabase is a OperationType enum value + OperationTypeRebootRelationalDatabase = "RebootRelationalDatabase" + + // OperationTypeStopRelationalDatabase is a OperationType enum value + OperationTypeStopRelationalDatabase = "StopRelationalDatabase" ) const ( @@ -16144,15 +21533,21 @@ const ( // RegionNameUsWest2 is a RegionName enum value RegionNameUsWest2 = "us-west-2" - // RegionNameEuCentral1 is a RegionName enum value - RegionNameEuCentral1 = "eu-central-1" - // RegionNameEuWest1 is a RegionName enum value RegionNameEuWest1 = "eu-west-1" // RegionNameEuWest2 is a RegionName enum value RegionNameEuWest2 = "eu-west-2" + // RegionNameEuWest3 is a RegionName enum value + RegionNameEuWest3 = "eu-west-3" + + // RegionNameEuCentral1 is a RegionName enum value + RegionNameEuCentral1 = "eu-central-1" + + // RegionNameCaCentral1 is a RegionName enum value + RegionNameCaCentral1 = "ca-central-1" + // RegionNameApSouth1 is a RegionName enum value RegionNameApSouth1 = "ap-south-1" @@ -16169,6 +21564,42 @@ const ( RegionNameApNortheast2 = "ap-northeast-2" ) +const ( + // RelationalDatabaseEngineMysql is a RelationalDatabaseEngine enum value + RelationalDatabaseEngineMysql = "mysql" +) + +const ( + // RelationalDatabaseMetricNameCpuutilization is a RelationalDatabaseMetricName enum value + RelationalDatabaseMetricNameCpuutilization = "CPUUtilization" + + // RelationalDatabaseMetricNameDatabaseConnections is a RelationalDatabaseMetricName enum value + RelationalDatabaseMetricNameDatabaseConnections = "DatabaseConnections" + + // RelationalDatabaseMetricNameDiskQueueDepth is a RelationalDatabaseMetricName enum value + RelationalDatabaseMetricNameDiskQueueDepth = "DiskQueueDepth" + + // RelationalDatabaseMetricNameFreeStorageSpace is a RelationalDatabaseMetricName enum value + 
RelationalDatabaseMetricNameFreeStorageSpace = "FreeStorageSpace" + + // RelationalDatabaseMetricNameNetworkReceiveThroughput is a RelationalDatabaseMetricName enum value + RelationalDatabaseMetricNameNetworkReceiveThroughput = "NetworkReceiveThroughput" + + // RelationalDatabaseMetricNameNetworkTransmitThroughput is a RelationalDatabaseMetricName enum value + RelationalDatabaseMetricNameNetworkTransmitThroughput = "NetworkTransmitThroughput" +) + +const ( + // RelationalDatabasePasswordVersionCurrent is a RelationalDatabasePasswordVersion enum value + RelationalDatabasePasswordVersionCurrent = "CURRENT" + + // RelationalDatabasePasswordVersionPrevious is a RelationalDatabasePasswordVersion enum value + RelationalDatabasePasswordVersionPrevious = "PREVIOUS" + + // RelationalDatabasePasswordVersionPending is a RelationalDatabasePasswordVersion enum value + RelationalDatabasePasswordVersionPending = "PENDING" +) + const ( // ResourceTypeInstance is a ResourceType enum value ResourceTypeInstance = "Instance" @@ -16199,4 +21630,10 @@ const ( // ResourceTypeDiskSnapshot is a ResourceType enum value ResourceTypeDiskSnapshot = "DiskSnapshot" + + // ResourceTypeRelationalDatabase is a ResourceType enum value + ResourceTypeRelationalDatabase = "RelationalDatabase" + + // ResourceTypeRelationalDatabaseSnapshot is a ResourceType enum value + ResourceTypeRelationalDatabaseSnapshot = "RelationalDatabaseSnapshot" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/lightsail/service.go b/vendor/github.com/aws/aws-sdk-go/service/lightsail/service.go index a76cf79e038..b9f97faa8a6 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/lightsail/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/lightsail/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "lightsail" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "lightsail" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Lightsail" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Lightsail client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/macie/api.go b/vendor/github.com/aws/aws-sdk-go/service/macie/api.go new file mode 100644 index 00000000000..f17f888c084 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/macie/api.go @@ -0,0 +1,1611 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package macie + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +const opAssociateMemberAccount = "AssociateMemberAccount" + +// AssociateMemberAccountRequest generates a "aws/request.Request" representing the +// client's request for the AssociateMemberAccount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AssociateMemberAccount for more information on using the AssociateMemberAccount +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssociateMemberAccountRequest method. +// req, resp := client.AssociateMemberAccountRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/AssociateMemberAccount +func (c *Macie) AssociateMemberAccountRequest(input *AssociateMemberAccountInput) (req *request.Request, output *AssociateMemberAccountOutput) { + op := &request.Operation{ + Name: opAssociateMemberAccount, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AssociateMemberAccountInput{} + } + + output = &AssociateMemberAccountOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AssociateMemberAccount API operation for Amazon Macie. +// +// Associates a specified AWS account with Amazon Macie as a member account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Macie's +// API operation AssociateMemberAccount for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error code describes the limit exceeded. +// +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/AssociateMemberAccount +func (c *Macie) AssociateMemberAccount(input *AssociateMemberAccountInput) (*AssociateMemberAccountOutput, error) { + req, out := c.AssociateMemberAccountRequest(input) + return out, req.Send() +} + +// AssociateMemberAccountWithContext is the same as AssociateMemberAccount with the addition of +// the ability to pass a context and additional request options. +// +// See AssociateMemberAccount for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) AssociateMemberAccountWithContext(ctx aws.Context, input *AssociateMemberAccountInput, opts ...request.Option) (*AssociateMemberAccountOutput, error) { + req, out := c.AssociateMemberAccountRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opAssociateS3Resources = "AssociateS3Resources" + +// AssociateS3ResourcesRequest generates a "aws/request.Request" representing the +// client's request for the AssociateS3Resources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AssociateS3Resources for more information on using the AssociateS3Resources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssociateS3ResourcesRequest method. +// req, resp := client.AssociateS3ResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/AssociateS3Resources +func (c *Macie) AssociateS3ResourcesRequest(input *AssociateS3ResourcesInput) (req *request.Request, output *AssociateS3ResourcesOutput) { + op := &request.Operation{ + Name: opAssociateS3Resources, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AssociateS3ResourcesInput{} + } + + output = &AssociateS3ResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// AssociateS3Resources API operation for Amazon Macie. +// +// Associates specified S3 resources with Amazon Macie for monitoring and data +// classification. If memberAccountId isn't specified, the action associates +// specified S3 resources with Macie for the current master account. If memberAccountId +// is specified, the action associates specified S3 resources with Macie for +// the specified member account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Macie's +// API operation AssociateS3Resources for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// You do not have required permissions to access the requested resource. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request was rejected because it attempted to create resources beyond +// the current AWS account limits. The error code describes the limit exceeded. +// +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/AssociateS3Resources +func (c *Macie) AssociateS3Resources(input *AssociateS3ResourcesInput) (*AssociateS3ResourcesOutput, error) { + req, out := c.AssociateS3ResourcesRequest(input) + return out, req.Send() +} + +// AssociateS3ResourcesWithContext is the same as AssociateS3Resources with the addition of +// the ability to pass a context and additional request options. +// +// See AssociateS3Resources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) AssociateS3ResourcesWithContext(ctx aws.Context, input *AssociateS3ResourcesInput, opts ...request.Option) (*AssociateS3ResourcesOutput, error) { + req, out := c.AssociateS3ResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDisassociateMemberAccount = "DisassociateMemberAccount" + +// DisassociateMemberAccountRequest generates a "aws/request.Request" representing the +// client's request for the DisassociateMemberAccount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DisassociateMemberAccount for more information on using the DisassociateMemberAccount +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisassociateMemberAccountRequest method. +// req, resp := client.DisassociateMemberAccountRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/DisassociateMemberAccount +func (c *Macie) DisassociateMemberAccountRequest(input *DisassociateMemberAccountInput) (req *request.Request, output *DisassociateMemberAccountOutput) { + op := &request.Operation{ + Name: opDisassociateMemberAccount, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DisassociateMemberAccountInput{} + } + + output = &DisassociateMemberAccountOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DisassociateMemberAccount API operation for Amazon Macie. +// +// Removes the specified member account from Amazon Macie. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Macie's +// API operation DisassociateMemberAccount for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/DisassociateMemberAccount +func (c *Macie) DisassociateMemberAccount(input *DisassociateMemberAccountInput) (*DisassociateMemberAccountOutput, error) { + req, out := c.DisassociateMemberAccountRequest(input) + return out, req.Send() +} + +// DisassociateMemberAccountWithContext is the same as DisassociateMemberAccount with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateMemberAccount for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) DisassociateMemberAccountWithContext(ctx aws.Context, input *DisassociateMemberAccountInput, opts ...request.Option) (*DisassociateMemberAccountOutput, error) { + req, out := c.DisassociateMemberAccountRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDisassociateS3Resources = "DisassociateS3Resources" + +// DisassociateS3ResourcesRequest generates a "aws/request.Request" representing the +// client's request for the DisassociateS3Resources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DisassociateS3Resources for more information on using the DisassociateS3Resources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisassociateS3ResourcesRequest method. +// req, resp := client.DisassociateS3ResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/DisassociateS3Resources +func (c *Macie) DisassociateS3ResourcesRequest(input *DisassociateS3ResourcesInput) (req *request.Request, output *DisassociateS3ResourcesOutput) { + op := &request.Operation{ + Name: opDisassociateS3Resources, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DisassociateS3ResourcesInput{} + } + + output = &DisassociateS3ResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DisassociateS3Resources API operation for Amazon Macie. +// +// Removes specified S3 resources from being monitored by Amazon Macie. If memberAccountId +// isn't specified, the action removes specified S3 resources from Macie for +// the current master account. If memberAccountId is specified, the action removes +// specified S3 resources from Macie for the specified member account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Macie's +// API operation DisassociateS3Resources for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// You do not have required permissions to access the requested resource. +// +// * ErrCodeInternalException "InternalException" +// Internal server error. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/DisassociateS3Resources +func (c *Macie) DisassociateS3Resources(input *DisassociateS3ResourcesInput) (*DisassociateS3ResourcesOutput, error) { + req, out := c.DisassociateS3ResourcesRequest(input) + return out, req.Send() +} + +// DisassociateS3ResourcesWithContext is the same as DisassociateS3Resources with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateS3Resources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) DisassociateS3ResourcesWithContext(ctx aws.Context, input *DisassociateS3ResourcesInput, opts ...request.Option) (*DisassociateS3ResourcesOutput, error) { + req, out := c.DisassociateS3ResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListMemberAccounts = "ListMemberAccounts" + +// ListMemberAccountsRequest generates a "aws/request.Request" representing the +// client's request for the ListMemberAccounts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListMemberAccounts for more information on using the ListMemberAccounts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListMemberAccountsRequest method. +// req, resp := client.ListMemberAccountsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/ListMemberAccounts +func (c *Macie) ListMemberAccountsRequest(input *ListMemberAccountsInput) (req *request.Request, output *ListMemberAccountsOutput) { + op := &request.Operation{ + Name: opListMemberAccounts, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "maxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListMemberAccountsInput{} + } + + output = &ListMemberAccountsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListMemberAccounts API operation for Amazon Macie. +// +// Lists all Amazon Macie member accounts for the current Amazon Macie master +// account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Macie's +// API operation ListMemberAccounts for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/ListMemberAccounts +func (c *Macie) ListMemberAccounts(input *ListMemberAccountsInput) (*ListMemberAccountsOutput, error) { + req, out := c.ListMemberAccountsRequest(input) + return out, req.Send() +} + +// ListMemberAccountsWithContext is the same as ListMemberAccounts with the addition of +// the ability to pass a context and additional request options. +// +// See ListMemberAccounts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) ListMemberAccountsWithContext(ctx aws.Context, input *ListMemberAccountsInput, opts ...request.Option) (*ListMemberAccountsOutput, error) { + req, out := c.ListMemberAccountsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListMemberAccountsPages iterates over the pages of a ListMemberAccounts operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListMemberAccounts method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListMemberAccounts operation. +// pageNum := 0 +// err := client.ListMemberAccountsPages(params, +// func(page *ListMemberAccountsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Macie) ListMemberAccountsPages(input *ListMemberAccountsInput, fn func(*ListMemberAccountsOutput, bool) bool) error { + return c.ListMemberAccountsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListMemberAccountsPagesWithContext same as ListMemberAccountsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) ListMemberAccountsPagesWithContext(ctx aws.Context, input *ListMemberAccountsInput, fn func(*ListMemberAccountsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListMemberAccountsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListMemberAccountsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListMemberAccountsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListS3Resources = "ListS3Resources" + +// ListS3ResourcesRequest generates a "aws/request.Request" representing the +// client's request for the ListS3Resources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See ListS3Resources for more information on using the ListS3Resources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListS3ResourcesRequest method. +// req, resp := client.ListS3ResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/ListS3Resources +func (c *Macie) ListS3ResourcesRequest(input *ListS3ResourcesInput) (req *request.Request, output *ListS3ResourcesOutput) { + op := &request.Operation{ + Name: opListS3Resources, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "maxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListS3ResourcesInput{} + } + + output = &ListS3ResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListS3Resources API operation for Amazon Macie. +// +// Lists all the S3 resources associated with Amazon Macie. If memberAccountId +// isn't specified, the action lists the S3 resources associated with Amazon +// Macie for the current master account. If memberAccountId is specified, the +// action lists the S3 resources associated with Amazon Macie for the specified +// member account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Macie's +// API operation ListS3Resources for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// You do not have required permissions to access the requested resource. +// +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/ListS3Resources +func (c *Macie) ListS3Resources(input *ListS3ResourcesInput) (*ListS3ResourcesOutput, error) { + req, out := c.ListS3ResourcesRequest(input) + return out, req.Send() +} + +// ListS3ResourcesWithContext is the same as ListS3Resources with the addition of +// the ability to pass a context and additional request options. +// +// See ListS3Resources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) ListS3ResourcesWithContext(ctx aws.Context, input *ListS3ResourcesInput, opts ...request.Option) (*ListS3ResourcesOutput, error) { + req, out := c.ListS3ResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListS3ResourcesPages iterates over the pages of a ListS3Resources operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. 
+// +// See ListS3Resources method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListS3Resources operation. +// pageNum := 0 +// err := client.ListS3ResourcesPages(params, +// func(page *ListS3ResourcesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Macie) ListS3ResourcesPages(input *ListS3ResourcesInput, fn func(*ListS3ResourcesOutput, bool) bool) error { + return c.ListS3ResourcesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListS3ResourcesPagesWithContext same as ListS3ResourcesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) ListS3ResourcesPagesWithContext(ctx aws.Context, input *ListS3ResourcesInput, fn func(*ListS3ResourcesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListS3ResourcesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListS3ResourcesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListS3ResourcesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opUpdateS3Resources = "UpdateS3Resources" + +// UpdateS3ResourcesRequest generates a "aws/request.Request" representing the +// client's request for the UpdateS3Resources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateS3Resources for more information on using the UpdateS3Resources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateS3ResourcesRequest method. +// req, resp := client.UpdateS3ResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/UpdateS3Resources +func (c *Macie) UpdateS3ResourcesRequest(input *UpdateS3ResourcesInput) (req *request.Request, output *UpdateS3ResourcesOutput) { + op := &request.Operation{ + Name: opUpdateS3Resources, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateS3ResourcesInput{} + } + + output = &UpdateS3ResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateS3Resources API operation for Amazon Macie. +// +// Updates the classification types for the specified S3 resources. If memberAccountId +// isn't specified, the action updates the classification types of the S3 resources +// associated with Amazon Macie for the current master account. 
If memberAccountId +// is specified, the action updates the classification types of the S3 resources +// associated with Amazon Macie for the specified member account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Macie's +// API operation UpdateS3Resources for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidInputException "InvalidInputException" +// The request was rejected because an invalid or out-of-range value was supplied +// for an input parameter. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// You do not have required permissions to access the requested resource. +// +// * ErrCodeInternalException "InternalException" +// Internal server error. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19/UpdateS3Resources +func (c *Macie) UpdateS3Resources(input *UpdateS3ResourcesInput) (*UpdateS3ResourcesOutput, error) { + req, out := c.UpdateS3ResourcesRequest(input) + return out, req.Send() +} + +// UpdateS3ResourcesWithContext is the same as UpdateS3Resources with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateS3Resources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Macie) UpdateS3ResourcesWithContext(ctx aws.Context, input *UpdateS3ResourcesInput, opts ...request.Option) (*UpdateS3ResourcesOutput, error) { + req, out := c.UpdateS3ResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AssociateMemberAccountInput struct { + _ struct{} `type:"structure"` + + // The ID of the AWS account that you want to associate with Amazon Macie as + // a member account. + // + // MemberAccountId is a required field + MemberAccountId *string `locationName:"memberAccountId" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociateMemberAccountInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateMemberAccountInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateMemberAccountInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociateMemberAccountInput"} + if s.MemberAccountId == nil { + invalidParams.Add(request.NewErrParamRequired("MemberAccountId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMemberAccountId sets the MemberAccountId field's value. 
+func (s *AssociateMemberAccountInput) SetMemberAccountId(v string) *AssociateMemberAccountInput { + s.MemberAccountId = &v + return s +} + +type AssociateMemberAccountOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AssociateMemberAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateMemberAccountOutput) GoString() string { + return s.String() +} + +type AssociateS3ResourcesInput struct { + _ struct{} `type:"structure"` + + // The ID of the Amazon Macie member account whose resources you want to associate + // with Macie. + MemberAccountId *string `locationName:"memberAccountId" type:"string"` + + // The S3 resources that you want to associate with Amazon Macie for monitoring + // and data classification. + // + // S3Resources is a required field + S3Resources []*S3ResourceClassification `locationName:"s3Resources" type:"list" required:"true"` +} + +// String returns the string representation +func (s AssociateS3ResourcesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateS3ResourcesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateS3ResourcesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociateS3ResourcesInput"} + if s.S3Resources == nil { + invalidParams.Add(request.NewErrParamRequired("S3Resources")) + } + if s.S3Resources != nil { + for i, v := range s.S3Resources { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "S3Resources", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMemberAccountId sets the MemberAccountId field's value. +func (s *AssociateS3ResourcesInput) SetMemberAccountId(v string) *AssociateS3ResourcesInput { + s.MemberAccountId = &v + return s +} + +// SetS3Resources sets the S3Resources field's value. +func (s *AssociateS3ResourcesInput) SetS3Resources(v []*S3ResourceClassification) *AssociateS3ResourcesInput { + s.S3Resources = v + return s +} + +type AssociateS3ResourcesOutput struct { + _ struct{} `type:"structure"` + + // S3 resources that couldn't be associated with Amazon Macie. An error code + // and an error message are provided for each failed item. + FailedS3Resources []*FailedS3Resource `locationName:"failedS3Resources" type:"list"` +} + +// String returns the string representation +func (s AssociateS3ResourcesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateS3ResourcesOutput) GoString() string { + return s.String() +} + +// SetFailedS3Resources sets the FailedS3Resources field's value. +func (s *AssociateS3ResourcesOutput) SetFailedS3Resources(v []*FailedS3Resource) *AssociateS3ResourcesOutput { + s.FailedS3Resources = v + return s +} + +// The classification type that Amazon Macie applies to the associated S3 resources. +type ClassificationType struct { + _ struct{} `type:"structure"` + + // A continuous classification of the objects that are added to a specified + // S3 bucket. Amazon Macie begins performing continuous classification after + // a bucket is successfully associated with Amazon Macie. 
+ // + // Continuous is a required field + Continuous *string `locationName:"continuous" type:"string" required:"true" enum:"S3ContinuousClassificationType"` + + // A one-time classification of all of the existing objects in a specified S3 + // bucket. + // + // OneTime is a required field + OneTime *string `locationName:"oneTime" type:"string" required:"true" enum:"S3OneTimeClassificationType"` +} + +// String returns the string representation +func (s ClassificationType) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClassificationType) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ClassificationType) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ClassificationType"} + if s.Continuous == nil { + invalidParams.Add(request.NewErrParamRequired("Continuous")) + } + if s.OneTime == nil { + invalidParams.Add(request.NewErrParamRequired("OneTime")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContinuous sets the Continuous field's value. +func (s *ClassificationType) SetContinuous(v string) *ClassificationType { + s.Continuous = &v + return s +} + +// SetOneTime sets the OneTime field's value. +func (s *ClassificationType) SetOneTime(v string) *ClassificationType { + s.OneTime = &v + return s +} + +// The classification type that Amazon Macie applies to the associated S3 resources. +// At least one of the classification types (oneTime or continuous) must be +// specified. +type ClassificationTypeUpdate struct { + _ struct{} `type:"structure"` + + // A continuous classification of the objects that are added to a specified + // S3 bucket. Amazon Macie begins performing continuous classification after + // a bucket is successfully associated with Amazon Macie. + Continuous *string `locationName:"continuous" type:"string" enum:"S3ContinuousClassificationType"` + + // A one-time classification of all of the existing objects in a specified S3 + // bucket. + OneTime *string `locationName:"oneTime" type:"string" enum:"S3OneTimeClassificationType"` +} + +// String returns the string representation +func (s ClassificationTypeUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClassificationTypeUpdate) GoString() string { + return s.String() +} + +// SetContinuous sets the Continuous field's value. +func (s *ClassificationTypeUpdate) SetContinuous(v string) *ClassificationTypeUpdate { + s.Continuous = &v + return s +} + +// SetOneTime sets the OneTime field's value. +func (s *ClassificationTypeUpdate) SetOneTime(v string) *ClassificationTypeUpdate { + s.OneTime = &v + return s +} + +type DisassociateMemberAccountInput struct { + _ struct{} `type:"structure"` + + // The ID of the member account that you want to remove from Amazon Macie. + // + // MemberAccountId is a required field + MemberAccountId *string `locationName:"memberAccountId" type:"string" required:"true"` +} + +// String returns the string representation +func (s DisassociateMemberAccountInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateMemberAccountInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DisassociateMemberAccountInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisassociateMemberAccountInput"} + if s.MemberAccountId == nil { + invalidParams.Add(request.NewErrParamRequired("MemberAccountId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMemberAccountId sets the MemberAccountId field's value. +func (s *DisassociateMemberAccountInput) SetMemberAccountId(v string) *DisassociateMemberAccountInput { + s.MemberAccountId = &v + return s +} + +type DisassociateMemberAccountOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisassociateMemberAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateMemberAccountOutput) GoString() string { + return s.String() +} + +type DisassociateS3ResourcesInput struct { + _ struct{} `type:"structure"` + + // The S3 resources (buckets or prefixes) that you want to remove from being + // monitored and classified by Amazon Macie. + // + // AssociatedS3Resources is a required field + AssociatedS3Resources []*S3Resource `locationName:"associatedS3Resources" type:"list" required:"true"` + + // The ID of the Amazon Macie member account whose resources you want to remove + // from being monitored by Amazon Macie. + MemberAccountId *string `locationName:"memberAccountId" type:"string"` +} + +// String returns the string representation +func (s DisassociateS3ResourcesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateS3ResourcesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DisassociateS3ResourcesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisassociateS3ResourcesInput"} + if s.AssociatedS3Resources == nil { + invalidParams.Add(request.NewErrParamRequired("AssociatedS3Resources")) + } + if s.AssociatedS3Resources != nil { + for i, v := range s.AssociatedS3Resources { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AssociatedS3Resources", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociatedS3Resources sets the AssociatedS3Resources field's value. +func (s *DisassociateS3ResourcesInput) SetAssociatedS3Resources(v []*S3Resource) *DisassociateS3ResourcesInput { + s.AssociatedS3Resources = v + return s +} + +// SetMemberAccountId sets the MemberAccountId field's value. +func (s *DisassociateS3ResourcesInput) SetMemberAccountId(v string) *DisassociateS3ResourcesInput { + s.MemberAccountId = &v + return s +} + +type DisassociateS3ResourcesOutput struct { + _ struct{} `type:"structure"` + + // S3 resources that couldn't be removed from being monitored and classified + // by Amazon Macie. An error code and an error message are provided for each + // failed item. + FailedS3Resources []*FailedS3Resource `locationName:"failedS3Resources" type:"list"` +} + +// String returns the string representation +func (s DisassociateS3ResourcesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateS3ResourcesOutput) GoString() string { + return s.String() +} + +// SetFailedS3Resources sets the FailedS3Resources field's value. 
+func (s *DisassociateS3ResourcesOutput) SetFailedS3Resources(v []*FailedS3Resource) *DisassociateS3ResourcesOutput { + s.FailedS3Resources = v + return s +} + +// Includes details about the failed S3 resources. +type FailedS3Resource struct { + _ struct{} `type:"structure"` + + // The status code of a failed item. + ErrorCode *string `locationName:"errorCode" type:"string"` + + // The error message of a failed item. + ErrorMessage *string `locationName:"errorMessage" type:"string"` + + // The failed S3 resources. + FailedItem *S3Resource `locationName:"failedItem" type:"structure"` +} + +// String returns the string representation +func (s FailedS3Resource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FailedS3Resource) GoString() string { + return s.String() +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *FailedS3Resource) SetErrorCode(v string) *FailedS3Resource { + s.ErrorCode = &v + return s +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *FailedS3Resource) SetErrorMessage(v string) *FailedS3Resource { + s.ErrorMessage = &v + return s +} + +// SetFailedItem sets the FailedItem field's value. +func (s *FailedS3Resource) SetFailedItem(v *S3Resource) *FailedS3Resource { + s.FailedItem = v + return s +} + +type ListMemberAccountsInput struct { + _ struct{} `type:"structure"` + + // Use this parameter to indicate the maximum number of items that you want + // in the response. The default value is 250. + MaxResults *int64 `locationName:"maxResults" type:"integer"` + + // Use this parameter when paginating results. Set the value of this parameter + // to null on your first call to the ListMemberAccounts action. Subsequent calls + // to the action fill nextToken in the request with the value of nextToken from + // the previous response to continue listing data. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListMemberAccountsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListMemberAccountsInput) GoString() string { + return s.String() +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListMemberAccountsInput) SetMaxResults(v int64) *ListMemberAccountsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListMemberAccountsInput) SetNextToken(v string) *ListMemberAccountsInput { + s.NextToken = &v + return s +} + +type ListMemberAccountsOutput struct { + _ struct{} `type:"structure"` + + // A list of the Amazon Macie member accounts returned by the action. The current + // master account is also included in this list. + MemberAccounts []*MemberAccount `locationName:"memberAccounts" type:"list"` + + // When a response is generated, if there is more data to be listed, this parameter + // is present in the response and contains the value to use for the nextToken + // parameter in a subsequent pagination request. If there is no more data to + // be listed, this parameter is set to null. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListMemberAccountsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListMemberAccountsOutput) GoString() string { + return s.String() +} + +// SetMemberAccounts sets the MemberAccounts field's value. 
+func (s *ListMemberAccountsOutput) SetMemberAccounts(v []*MemberAccount) *ListMemberAccountsOutput { + s.MemberAccounts = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListMemberAccountsOutput) SetNextToken(v string) *ListMemberAccountsOutput { + s.NextToken = &v + return s +} + +type ListS3ResourcesInput struct { + _ struct{} `type:"structure"` + + // Use this parameter to indicate the maximum number of items that you want + // in the response. The default value is 250. + MaxResults *int64 `locationName:"maxResults" type:"integer"` + + // The Amazon Macie member account ID whose associated S3 resources you want + // to list. + MemberAccountId *string `locationName:"memberAccountId" type:"string"` + + // Use this parameter when paginating results. Set its value to null on your + // first call to the ListS3Resources action. Subsequent calls to the action + // fill nextToken in the request with the value of nextToken from the previous + // response to continue listing data. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListS3ResourcesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListS3ResourcesInput) GoString() string { + return s.String() +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListS3ResourcesInput) SetMaxResults(v int64) *ListS3ResourcesInput { + s.MaxResults = &v + return s +} + +// SetMemberAccountId sets the MemberAccountId field's value. +func (s *ListS3ResourcesInput) SetMemberAccountId(v string) *ListS3ResourcesInput { + s.MemberAccountId = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListS3ResourcesInput) SetNextToken(v string) *ListS3ResourcesInput { + s.NextToken = &v + return s +} + +type ListS3ResourcesOutput struct { + _ struct{} `type:"structure"` + + // When a response is generated, if there is more data to be listed, this parameter + // is present in the response and contains the value to use for the nextToken + // parameter in a subsequent pagination request. If there is no more data to + // be listed, this parameter is set to null. + NextToken *string `locationName:"nextToken" type:"string"` + + // A list of the associated S3 resources returned by the action. + S3Resources []*S3ResourceClassification `locationName:"s3Resources" type:"list"` +} + +// String returns the string representation +func (s ListS3ResourcesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListS3ResourcesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListS3ResourcesOutput) SetNextToken(v string) *ListS3ResourcesOutput { + s.NextToken = &v + return s +} + +// SetS3Resources sets the S3Resources field's value. +func (s *ListS3ResourcesOutput) SetS3Resources(v []*S3ResourceClassification) *ListS3ResourcesOutput { + s.S3Resources = v + return s +} + +// Contains information about the Amazon Macie member account. +type MemberAccount struct { + _ struct{} `type:"structure"` + + // The AWS account ID of the Amazon Macie member account. 
+ AccountId *string `locationName:"accountId" type:"string"` +} + +// String returns the string representation +func (s MemberAccount) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MemberAccount) GoString() string { + return s.String() +} + +// SetAccountId sets the AccountId field's value. +func (s *MemberAccount) SetAccountId(v string) *MemberAccount { + s.AccountId = &v + return s +} + +// Contains information about the S3 resource. This data type is used as a request +// parameter in the DisassociateS3Resources action and can be used as a response +// parameter in the AssociateS3Resources and UpdateS3Resources actions. +type S3Resource struct { + _ struct{} `type:"structure"` + + // The name of the S3 bucket. + // + // BucketName is a required field + BucketName *string `locationName:"bucketName" type:"string" required:"true"` + + // The prefix of the S3 bucket. + Prefix *string `locationName:"prefix" type:"string"` +} + +// String returns the string representation +func (s S3Resource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3Resource) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *S3Resource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3Resource"} + if s.BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("BucketName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucketName sets the BucketName field's value. +func (s *S3Resource) SetBucketName(v string) *S3Resource { + s.BucketName = &v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *S3Resource) SetPrefix(v string) *S3Resource { + s.Prefix = &v + return s +} + +// The S3 resources that you want to associate with Amazon Macie for monitoring +// and data classification. This data type is used as a request parameter in +// the AssociateS3Resources action and a response parameter in the ListS3Resources +// action. +type S3ResourceClassification struct { + _ struct{} `type:"structure"` + + // The name of the S3 bucket that you want to associate with Amazon Macie. + // + // BucketName is a required field + BucketName *string `locationName:"bucketName" type:"string" required:"true"` + + // The classification type that you want to specify for the resource associated + // with Amazon Macie. + // + // ClassificationType is a required field + ClassificationType *ClassificationType `locationName:"classificationType" type:"structure" required:"true"` + + // The prefix of the S3 bucket that you want to associate with Amazon Macie. + Prefix *string `locationName:"prefix" type:"string"` +} + +// String returns the string representation +func (s S3ResourceClassification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3ResourceClassification) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *S3ResourceClassification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3ResourceClassification"} + if s.BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("BucketName")) + } + if s.ClassificationType == nil { + invalidParams.Add(request.NewErrParamRequired("ClassificationType")) + } + if s.ClassificationType != nil { + if err := s.ClassificationType.Validate(); err != nil { + invalidParams.AddNested("ClassificationType", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucketName sets the BucketName field's value. +func (s *S3ResourceClassification) SetBucketName(v string) *S3ResourceClassification { + s.BucketName = &v + return s +} + +// SetClassificationType sets the ClassificationType field's value. +func (s *S3ResourceClassification) SetClassificationType(v *ClassificationType) *S3ResourceClassification { + s.ClassificationType = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *S3ResourceClassification) SetPrefix(v string) *S3ResourceClassification { + s.Prefix = &v + return s +} + +// The S3 resources whose classification types you want to update. This data +// type is used as a request parameter in the UpdateS3Resources action. +type S3ResourceClassificationUpdate struct { + _ struct{} `type:"structure"` + + // The name of the S3 bucket whose classification types you want to update. + // + // BucketName is a required field + BucketName *string `locationName:"bucketName" type:"string" required:"true"` + + // The classification type that you want to update for the resource associated + // with Amazon Macie. + // + // ClassificationTypeUpdate is a required field + ClassificationTypeUpdate *ClassificationTypeUpdate `locationName:"classificationTypeUpdate" type:"structure" required:"true"` + + // The prefix of the S3 bucket whose classification types you want to update. + Prefix *string `locationName:"prefix" type:"string"` +} + +// String returns the string representation +func (s S3ResourceClassificationUpdate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s S3ResourceClassificationUpdate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *S3ResourceClassificationUpdate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3ResourceClassificationUpdate"} + if s.BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("BucketName")) + } + if s.ClassificationTypeUpdate == nil { + invalidParams.Add(request.NewErrParamRequired("ClassificationTypeUpdate")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucketName sets the BucketName field's value. +func (s *S3ResourceClassificationUpdate) SetBucketName(v string) *S3ResourceClassificationUpdate { + s.BucketName = &v + return s +} + +// SetClassificationTypeUpdate sets the ClassificationTypeUpdate field's value. +func (s *S3ResourceClassificationUpdate) SetClassificationTypeUpdate(v *ClassificationTypeUpdate) *S3ResourceClassificationUpdate { + s.ClassificationTypeUpdate = v + return s +} + +// SetPrefix sets the Prefix field's value. 
+func (s *S3ResourceClassificationUpdate) SetPrefix(v string) *S3ResourceClassificationUpdate { + s.Prefix = &v + return s +} + +type UpdateS3ResourcesInput struct { + _ struct{} `type:"structure"` + + // The AWS ID of the Amazon Macie member account whose S3 resources' classification + // types you want to update. + MemberAccountId *string `locationName:"memberAccountId" type:"string"` + + // The S3 resources whose classification types you want to update. + // + // S3ResourcesUpdate is a required field + S3ResourcesUpdate []*S3ResourceClassificationUpdate `locationName:"s3ResourcesUpdate" type:"list" required:"true"` +} + +// String returns the string representation +func (s UpdateS3ResourcesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateS3ResourcesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateS3ResourcesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateS3ResourcesInput"} + if s.S3ResourcesUpdate == nil { + invalidParams.Add(request.NewErrParamRequired("S3ResourcesUpdate")) + } + if s.S3ResourcesUpdate != nil { + for i, v := range s.S3ResourcesUpdate { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "S3ResourcesUpdate", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMemberAccountId sets the MemberAccountId field's value. +func (s *UpdateS3ResourcesInput) SetMemberAccountId(v string) *UpdateS3ResourcesInput { + s.MemberAccountId = &v + return s +} + +// SetS3ResourcesUpdate sets the S3ResourcesUpdate field's value. +func (s *UpdateS3ResourcesInput) SetS3ResourcesUpdate(v []*S3ResourceClassificationUpdate) *UpdateS3ResourcesInput { + s.S3ResourcesUpdate = v + return s +} + +type UpdateS3ResourcesOutput struct { + _ struct{} `type:"structure"` + + // The S3 resources whose classification types can't be updated. An error code + // and an error message are provided for each failed item. + FailedS3Resources []*FailedS3Resource `locationName:"failedS3Resources" type:"list"` +} + +// String returns the string representation +func (s UpdateS3ResourcesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateS3ResourcesOutput) GoString() string { + return s.String() +} + +// SetFailedS3Resources sets the FailedS3Resources field's value. +func (s *UpdateS3ResourcesOutput) SetFailedS3Resources(v []*FailedS3Resource) *UpdateS3ResourcesOutput { + s.FailedS3Resources = v + return s +} + +const ( + // S3ContinuousClassificationTypeFull is a S3ContinuousClassificationType enum value + S3ContinuousClassificationTypeFull = "FULL" +) + +const ( + // S3OneTimeClassificationTypeFull is a S3OneTimeClassificationType enum value + S3OneTimeClassificationTypeFull = "FULL" + + // S3OneTimeClassificationTypeNone is a S3OneTimeClassificationType enum value + S3OneTimeClassificationTypeNone = "NONE" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/macie/doc.go b/vendor/github.com/aws/aws-sdk-go/service/macie/doc.go new file mode 100644 index 00000000000..1b8f9632f37 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/macie/doc.go @@ -0,0 +1,33 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +// Package macie provides the client and types for making API +// requests to Amazon Macie. +// +// Amazon Macie is a security service that uses machine learning to automatically +// discover, classify, and protect sensitive data in AWS. Macie recognizes sensitive +// data such as personally identifiable information (PII) or intellectual property, +// and provides you with dashboards and alerts that give visibility into how +// this data is being accessed or moved. For more information, see the Macie +// User Guide (https://docs.aws.amazon.com/macie/latest/userguide/what-is-macie.html). +// +// See https://docs.aws.amazon.com/goto/WebAPI/macie-2017-12-19 for more information on this service. +// +// See macie package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/macie/ +// +// Using the Client +// +// To contact Amazon Macie with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon Macie client Macie for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/macie/#New +package macie diff --git a/vendor/github.com/aws/aws-sdk-go/service/macie/errors.go b/vendor/github.com/aws/aws-sdk-go/service/macie/errors.go new file mode 100644 index 00000000000..77768d52eb5 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/macie/errors.go @@ -0,0 +1,32 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package macie + +const ( + + // ErrCodeAccessDeniedException for service response error code + // "AccessDeniedException". + // + // You do not have required permissions to access the requested resource. + ErrCodeAccessDeniedException = "AccessDeniedException" + + // ErrCodeInternalException for service response error code + // "InternalException". + // + // Internal server error. + ErrCodeInternalException = "InternalException" + + // ErrCodeInvalidInputException for service response error code + // "InvalidInputException". + // + // The request was rejected because an invalid or out-of-range value was supplied + // for an input parameter. + ErrCodeInvalidInputException = "InvalidInputException" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceededException". + // + // The request was rejected because it attempted to create resources beyond + // the current AWS account limits. The error code describes the limit exceeded. + ErrCodeLimitExceededException = "LimitExceededException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/macie/service.go b/vendor/github.com/aws/aws-sdk-go/service/macie/service.go new file mode 100644 index 00000000000..0b38598f0fe --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/macie/service.go @@ -0,0 +1,97 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package macie + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// Macie provides the API operation methods for making requests to +// Amazon Macie. See this package's package overview docs +// for details on the service. +// +// Macie methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type Macie struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "Macie" // Name of service. + EndpointsID = "macie" // ID to lookup a service endpoint with. + ServiceID = "Macie" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the Macie client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a Macie client from just a session. +// svc := macie.New(mySession) +// +// // Create a Macie client with additional configuration +// svc := macie.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *Macie { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *Macie { + svc := &Macie{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2017-12-19", + JSONVersion: "1.1", + TargetPrefix: "MacieService", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a Macie operation and runs any +// custom request initialization. 
+func (c *Macie) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/mediaconvert/api.go b/vendor/github.com/aws/aws-sdk-go/service/mediaconvert/api.go index 5a0eb2a0089..8aae536de4b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/mediaconvert/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/mediaconvert/api.go @@ -3,6 +3,7 @@ package mediaconvert import ( + "fmt" "time" "github.com/aws/aws-sdk-go/aws" @@ -10,12 +11,101 @@ import ( "github.com/aws/aws-sdk-go/aws/request" ) +const opAssociateCertificate = "AssociateCertificate" + +// AssociateCertificateRequest generates a "aws/request.Request" representing the +// client's request for the AssociateCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AssociateCertificate for more information on using the AssociateCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssociateCertificateRequest method. +// req, resp := client.AssociateCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/AssociateCertificate +func (c *MediaConvert) AssociateCertificateRequest(input *AssociateCertificateInput) (req *request.Request, output *AssociateCertificateOutput) { + op := &request.Operation{ + Name: opAssociateCertificate, + HTTPMethod: "POST", + HTTPPath: "/2017-08-29/certificates", + } + + if input == nil { + input = &AssociateCertificateInput{} + } + + output = &AssociateCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// AssociateCertificate API operation for AWS Elemental MediaConvert. +// +// Associates an AWS Certificate Manager (ACM) Amazon Resource Name (ARN) with +// AWS Elemental MediaConvert. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaConvert's +// API operation AssociateCertificate for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/AssociateCertificate +func (c *MediaConvert) AssociateCertificate(input *AssociateCertificateInput) (*AssociateCertificateOutput, error) { + req, out := c.AssociateCertificateRequest(input) + return out, req.Send() +} + +// AssociateCertificateWithContext is the same as AssociateCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See AssociateCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) AssociateCertificateWithContext(ctx aws.Context, input *AssociateCertificateInput, opts ...request.Option) (*AssociateCertificateOutput, error) { + req, out := c.AssociateCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCancelJob = "CancelJob" // CancelJobRequest generates a "aws/request.Request" representing the // client's request for the CancelJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -103,8 +193,8 @@ const opCreateJob = "CreateJob" // CreateJobRequest generates a "aws/request.Request" representing the // client's request for the CreateJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -192,8 +282,8 @@ const opCreateJobTemplate = "CreateJobTemplate" // CreateJobTemplateRequest generates a "aws/request.Request" representing the // client's request for the CreateJobTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -281,8 +371,8 @@ const opCreatePreset = "CreatePreset" // CreatePresetRequest generates a "aws/request.Request" representing the // client's request for the CreatePreset operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
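The AssociateCertificate operation added above takes only the ARN of an ACM certificate. A minimal sketch of a call, assuming placeholder values for the region and certificate ARN (in practice most MediaConvert calls also need the account-specific endpoint resolved via DescribeEndpoints, which is omitted here to keep the sketch short):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := mediaconvert.New(sess, aws.NewConfig().WithRegion("us-west-2"))

	// The certificate ARN is a placeholder.
	input := &mediaconvert.AssociateCertificateInput{
		Arn: aws.String("arn:aws:acm:us-west-2:111122223333:certificate/example"),
	}

	if _, err := svc.AssociateCertificate(input); err != nil {
		// ConflictException is one of the error codes documented for this operation.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == mediaconvert.ErrCodeConflictException {
			fmt.Println("conflict:", aerr.Message())
		} else {
			fmt.Println("associate failed:", err)
		}
	}
}
```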
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -370,8 +460,8 @@ const opCreateQueue = "CreateQueue" // CreateQueueRequest generates a "aws/request.Request" representing the // client's request for the CreateQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -459,8 +549,8 @@ const opDeleteJobTemplate = "DeleteJobTemplate" // DeleteJobTemplateRequest generates a "aws/request.Request" representing the // client's request for the DeleteJobTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -547,8 +637,8 @@ const opDeletePreset = "DeletePreset" // DeletePresetRequest generates a "aws/request.Request" representing the // client's request for the DeletePreset operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -635,8 +725,8 @@ const opDeleteQueue = "DeleteQueue" // DeleteQueueRequest generates a "aws/request.Request" representing the // client's request for the DeleteQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -723,8 +813,8 @@ const opDescribeEndpoints = "DescribeEndpoints" // DescribeEndpointsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEndpoints operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -750,6 +840,12 @@ func (c *MediaConvert) DescribeEndpointsRequest(input *DescribeEndpointsInput) ( Name: opDescribeEndpoints, HTTPMethod: "POST", HTTPPath: "/2017-08-29/endpoints", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { @@ -808,12 +904,151 @@ func (c *MediaConvert) DescribeEndpointsWithContext(ctx aws.Context, input *Desc return out, req.Send() } +// DescribeEndpointsPages iterates over the pages of a DescribeEndpoints operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeEndpoints method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeEndpoints operation. +// pageNum := 0 +// err := client.DescribeEndpointsPages(params, +// func(page *DescribeEndpointsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaConvert) DescribeEndpointsPages(input *DescribeEndpointsInput, fn func(*DescribeEndpointsOutput, bool) bool) error { + return c.DescribeEndpointsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeEndpointsPagesWithContext same as DescribeEndpointsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) DescribeEndpointsPagesWithContext(ctx aws.Context, input *DescribeEndpointsInput, fn func(*DescribeEndpointsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeEndpointsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeEndpointsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeEndpointsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDisassociateCertificate = "DisassociateCertificate" + +// DisassociateCertificateRequest generates a "aws/request.Request" representing the +// client's request for the DisassociateCertificate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DisassociateCertificate for more information on using the DisassociateCertificate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisassociateCertificateRequest method. 
+// req, resp := client.DisassociateCertificateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/DisassociateCertificate +func (c *MediaConvert) DisassociateCertificateRequest(input *DisassociateCertificateInput) (req *request.Request, output *DisassociateCertificateOutput) { + op := &request.Operation{ + Name: opDisassociateCertificate, + HTTPMethod: "DELETE", + HTTPPath: "/2017-08-29/certificates/{arn}", + } + + if input == nil { + input = &DisassociateCertificateInput{} + } + + output = &DisassociateCertificateOutput{} + req = c.newRequest(op, input, output) + return +} + +// DisassociateCertificate API operation for AWS Elemental MediaConvert. +// +// Removes an association between the Amazon Resource Name (ARN) of an AWS Certificate +// Manager (ACM) certificate and an AWS Elemental MediaConvert resource. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaConvert's +// API operation DisassociateCertificate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/DisassociateCertificate +func (c *MediaConvert) DisassociateCertificate(input *DisassociateCertificateInput) (*DisassociateCertificateOutput, error) { + req, out := c.DisassociateCertificateRequest(input) + return out, req.Send() +} + +// DisassociateCertificateWithContext is the same as DisassociateCertificate with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateCertificate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) DisassociateCertificateWithContext(ctx aws.Context, input *DisassociateCertificateInput, opts ...request.Option) (*DisassociateCertificateOutput, error) { + req, out := c.DisassociateCertificateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetJob = "GetJob" // GetJobRequest generates a "aws/request.Request" representing the // client's request for the GetJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -900,8 +1135,8 @@ const opGetJobTemplate = "GetJobTemplate" // GetJobTemplateRequest generates a "aws/request.Request" representing the // client's request for the GetJobTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -988,8 +1223,8 @@ const opGetPreset = "GetPreset" // GetPresetRequest generates a "aws/request.Request" representing the // client's request for the GetPreset operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1076,8 +1311,8 @@ const opGetQueue = "GetQueue" // GetQueueRequest generates a "aws/request.Request" representing the // client's request for the GetQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1164,8 +1399,8 @@ const opListJobTemplates = "ListJobTemplates" // ListJobTemplatesRequest generates a "aws/request.Request" representing the // client's request for the ListJobTemplates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1191,6 +1426,12 @@ func (c *MediaConvert) ListJobTemplatesRequest(input *ListJobTemplatesInput) (re Name: opListJobTemplates, HTTPMethod: "GET", HTTPPath: "/2017-08-29/jobTemplates", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { @@ -1250,12 +1491,62 @@ func (c *MediaConvert) ListJobTemplatesWithContext(ctx aws.Context, input *ListJ return out, req.Send() } +// ListJobTemplatesPages iterates over the pages of a ListJobTemplates operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListJobTemplates method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListJobTemplates operation. 
+// pageNum := 0 +// err := client.ListJobTemplatesPages(params, +// func(page *ListJobTemplatesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaConvert) ListJobTemplatesPages(input *ListJobTemplatesInput, fn func(*ListJobTemplatesOutput, bool) bool) error { + return c.ListJobTemplatesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListJobTemplatesPagesWithContext same as ListJobTemplatesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) ListJobTemplatesPagesWithContext(ctx aws.Context, input *ListJobTemplatesInput, fn func(*ListJobTemplatesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListJobTemplatesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListJobTemplatesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListJobTemplatesOutput), !p.HasNextPage()) + } + return p.Err() +} + const opListJobs = "ListJobs" // ListJobsRequest generates a "aws/request.Request" representing the // client's request for the ListJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1281,6 +1572,12 @@ func (c *MediaConvert) ListJobsRequest(input *ListJobsInput) (req *request.Reque Name: opListJobs, HTTPMethod: "GET", HTTPPath: "/2017-08-29/jobs", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { @@ -1341,12 +1638,62 @@ func (c *MediaConvert) ListJobsWithContext(ctx aws.Context, input *ListJobsInput return out, req.Send() } +// ListJobsPages iterates over the pages of a ListJobs operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListJobs method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListJobs operation. +// pageNum := 0 +// err := client.ListJobsPages(params, +// func(page *ListJobsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaConvert) ListJobsPages(input *ListJobsInput, fn func(*ListJobsOutput, bool) bool) error { + return c.ListJobsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListJobsPagesWithContext same as ListJobsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) ListJobsPagesWithContext(ctx aws.Context, input *ListJobsInput, fn func(*ListJobsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListJobsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListJobsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListJobsOutput), !p.HasNextPage()) + } + return p.Err() +} + const opListPresets = "ListPresets" // ListPresetsRequest generates a "aws/request.Request" representing the // client's request for the ListPresets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1372,6 +1719,12 @@ func (c *MediaConvert) ListPresetsRequest(input *ListPresetsInput) (req *request Name: opListPresets, HTTPMethod: "GET", HTTPPath: "/2017-08-29/presets", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { @@ -1431,12 +1784,62 @@ func (c *MediaConvert) ListPresetsWithContext(ctx aws.Context, input *ListPreset return out, req.Send() } +// ListPresetsPages iterates over the pages of a ListPresets operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListPresets method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListPresets operation. +// pageNum := 0 +// err := client.ListPresetsPages(params, +// func(page *ListPresetsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaConvert) ListPresetsPages(input *ListPresetsInput, fn func(*ListPresetsOutput, bool) bool) error { + return c.ListPresetsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListPresetsPagesWithContext same as ListPresetsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) ListPresetsPagesWithContext(ctx aws.Context, input *ListPresetsInput, fn func(*ListPresetsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListPresetsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListPresetsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListPresetsOutput), !p.HasNextPage()) + } + return p.Err() +} + const opListQueues = "ListQueues" // ListQueuesRequest generates a "aws/request.Request" representing the // client's request for the ListQueues operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1462,6 +1865,12 @@ func (c *MediaConvert) ListQueuesRequest(input *ListQueuesInput) (req *request.R Name: opListQueues, HTTPMethod: "GET", HTTPPath: "/2017-08-29/queues", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { @@ -1521,58 +1930,108 @@ func (c *MediaConvert) ListQueuesWithContext(ctx aws.Context, input *ListQueuesI return out, req.Send() } -const opUpdateJobTemplate = "UpdateJobTemplate" +// ListQueuesPages iterates over the pages of a ListQueues operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListQueues method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListQueues operation. +// pageNum := 0 +// err := client.ListQueuesPages(params, +// func(page *ListQueuesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaConvert) ListQueuesPages(input *ListQueuesInput, fn func(*ListQueuesOutput, bool) bool) error { + return c.ListQueuesPagesWithContext(aws.BackgroundContext(), input, fn) +} -// UpdateJobTemplateRequest generates a "aws/request.Request" representing the -// client's request for the UpdateJobTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListQueuesPagesWithContext same as ListQueuesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) ListQueuesPagesWithContext(ctx aws.Context, input *ListQueuesInput, fn func(*ListQueuesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListQueuesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListQueuesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListQueuesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListTagsForResource = "ListTagsForResource" + +// ListTagsForResourceRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsForResource operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateJobTemplate for more information on using the UpdateJobTemplate +// See ListTagsForResource for more information on using the ListTagsForResource // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateJobTemplateRequest method. -// req, resp := client.UpdateJobTemplateRequest(params) +// // Example sending a request using the ListTagsForResourceRequest method. +// req, resp := client.ListTagsForResourceRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UpdateJobTemplate -func (c *MediaConvert) UpdateJobTemplateRequest(input *UpdateJobTemplateInput) (req *request.Request, output *UpdateJobTemplateOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/ListTagsForResource +func (c *MediaConvert) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req *request.Request, output *ListTagsForResourceOutput) { op := &request.Operation{ - Name: opUpdateJobTemplate, - HTTPMethod: "PUT", - HTTPPath: "/2017-08-29/jobTemplates/{name}", + Name: opListTagsForResource, + HTTPMethod: "GET", + HTTPPath: "/2017-08-29/tags/{arn}", } if input == nil { - input = &UpdateJobTemplateInput{} + input = &ListTagsForResourceInput{} } - output = &UpdateJobTemplateOutput{} + output = &ListTagsForResourceOutput{} req = c.newRequest(op, input, output) return } -// UpdateJobTemplate API operation for AWS Elemental MediaConvert. +// ListTagsForResource API operation for AWS Elemental MediaConvert. // -// Modify one of your existing job templates. +// Retrieve the tags for a MediaConvert resource. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Elemental MediaConvert's -// API operation UpdateJobTemplate for usage and error information. +// API operation ListTagsForResource for usage and error information. 
// // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -1587,80 +2046,81 @@ func (c *MediaConvert) UpdateJobTemplateRequest(input *UpdateJobTemplateInput) ( // // * ErrCodeConflictException "ConflictException" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UpdateJobTemplate -func (c *MediaConvert) UpdateJobTemplate(input *UpdateJobTemplateInput) (*UpdateJobTemplateOutput, error) { - req, out := c.UpdateJobTemplateRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/ListTagsForResource +func (c *MediaConvert) ListTagsForResource(input *ListTagsForResourceInput) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) return out, req.Send() } -// UpdateJobTemplateWithContext is the same as UpdateJobTemplate with the addition of +// ListTagsForResourceWithContext is the same as ListTagsForResource with the addition of // the ability to pass a context and additional request options. // -// See UpdateJobTemplate for details on how to use this API operation. +// See ListTagsForResource for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *MediaConvert) UpdateJobTemplateWithContext(ctx aws.Context, input *UpdateJobTemplateInput, opts ...request.Option) (*UpdateJobTemplateOutput, error) { - req, out := c.UpdateJobTemplateRequest(input) +func (c *MediaConvert) ListTagsForResourceWithContext(ctx aws.Context, input *ListTagsForResourceInput, opts ...request.Option) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdatePreset = "UpdatePreset" +const opTagResource = "TagResource" -// UpdatePresetRequest generates a "aws/request.Request" representing the -// client's request for the UpdatePreset operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// TagResourceRequest generates a "aws/request.Request" representing the +// client's request for the TagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdatePreset for more information on using the UpdatePreset +// See TagResource for more information on using the TagResource // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdatePresetRequest method. -// req, resp := client.UpdatePresetRequest(params) +// // Example sending a request using the TagResourceRequest method. 
+// req, resp := client.TagResourceRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UpdatePreset -func (c *MediaConvert) UpdatePresetRequest(input *UpdatePresetInput) (req *request.Request, output *UpdatePresetOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/TagResource +func (c *MediaConvert) TagResourceRequest(input *TagResourceInput) (req *request.Request, output *TagResourceOutput) { op := &request.Operation{ - Name: opUpdatePreset, - HTTPMethod: "PUT", - HTTPPath: "/2017-08-29/presets/{name}", + Name: opTagResource, + HTTPMethod: "POST", + HTTPPath: "/2017-08-29/tags", } if input == nil { - input = &UpdatePresetInput{} + input = &TagResourceInput{} } - output = &UpdatePresetOutput{} + output = &TagResourceOutput{} req = c.newRequest(op, input, output) return } -// UpdatePreset API operation for AWS Elemental MediaConvert. +// TagResource API operation for AWS Elemental MediaConvert. // -// Modify one of your existing presets. +// Add tags to a MediaConvert queue, preset, or job template. For information +// about tagging, see the User Guide at https://docs.aws.amazon.com/mediaconvert/latest/ug/tagging-resources.html // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Elemental MediaConvert's -// API operation UpdatePreset for usage and error information. +// API operation TagResource for usage and error information. // // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -1675,23 +2135,288 @@ func (c *MediaConvert) UpdatePresetRequest(input *UpdatePresetInput) (req *reque // // * ErrCodeConflictException "ConflictException" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UpdatePreset -func (c *MediaConvert) UpdatePreset(input *UpdatePresetInput) (*UpdatePresetOutput, error) { - req, out := c.UpdatePresetRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/TagResource +func (c *MediaConvert) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) return out, req.Send() } -// UpdatePresetWithContext is the same as UpdatePreset with the addition of +// TagResourceWithContext is the same as TagResource with the addition of // the ability to pass a context and additional request options. // -// See UpdatePreset for details on how to use this API operation. +// See TagResource for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *MediaConvert) UpdatePresetWithContext(ctx aws.Context, input *UpdatePresetInput, opts ...request.Option) (*UpdatePresetOutput, error) { - req, out := c.UpdatePresetRequest(input) +func (c *MediaConvert) TagResourceWithContext(ctx aws.Context, input *TagResourceInput, opts ...request.Option) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opUntagResource = "UntagResource" + +// UntagResourceRequest generates a "aws/request.Request" representing the +// client's request for the UntagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UntagResource for more information on using the UntagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UntagResourceRequest method. +// req, resp := client.UntagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UntagResource +func (c *MediaConvert) UntagResourceRequest(input *UntagResourceInput) (req *request.Request, output *UntagResourceOutput) { + op := &request.Operation{ + Name: opUntagResource, + HTTPMethod: "PUT", + HTTPPath: "/2017-08-29/tags/{arn}", + } + + if input == nil { + input = &UntagResourceInput{} + } + + output = &UntagResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// UntagResource API operation for AWS Elemental MediaConvert. +// +// Remove tags from a MediaConvert queue, preset, or job template. For information +// about tagging, see the User Guide at https://docs.aws.amazon.com/mediaconvert/latest/ug/tagging-resources.html +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaConvert's +// API operation UntagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UntagResource +func (c *MediaConvert) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + return out, req.Send() +} + +// UntagResourceWithContext is the same as UntagResource with the addition of +// the ability to pass a context and additional request options. +// +// See UntagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) UntagResourceWithContext(ctx aws.Context, input *UntagResourceInput, opts ...request.Option) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
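The ListTagsForResource, TagResource, and UntagResource operations above give MediaConvert queues, presets, and job templates a tagging lifecycle. The sketch below strings the three calls together; the queue ARN is a placeholder, and the Tags/TagKeys field names are assumptions based on the operation doc comments and HTTP paths shown here, since the full input shapes appear later in this file.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := mediaconvert.New(sess)

	// Placeholder ARN for a MediaConvert queue.
	arn := aws.String("arn:aws:mediaconvert:us-west-2:111122223333:queues/Default")

	// Add a tag (Tags field name assumed from the tagging API).
	if _, err := svc.TagResource(&mediaconvert.TagResourceInput{
		Arn:  arn,
		Tags: map[string]*string{"Environment": aws.String("test")},
	}); err != nil {
		fmt.Println("tag failed:", err)
	}

	// Read the tags back.
	if out, err := svc.ListTagsForResource(&mediaconvert.ListTagsForResourceInput{Arn: arn}); err == nil {
		fmt.Println(out)
	}

	// Remove the tag again (TagKeys field name assumed).
	if _, err := svc.UntagResource(&mediaconvert.UntagResourceInput{
		Arn:     arn,
		TagKeys: []*string{aws.String("Environment")},
	}); err != nil {
		fmt.Println("untag failed:", err)
	}
}
```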
+ return out, req.Send() +} + +const opUpdateJobTemplate = "UpdateJobTemplate" + +// UpdateJobTemplateRequest generates a "aws/request.Request" representing the +// client's request for the UpdateJobTemplate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateJobTemplate for more information on using the UpdateJobTemplate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateJobTemplateRequest method. +// req, resp := client.UpdateJobTemplateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UpdateJobTemplate +func (c *MediaConvert) UpdateJobTemplateRequest(input *UpdateJobTemplateInput) (req *request.Request, output *UpdateJobTemplateOutput) { + op := &request.Operation{ + Name: opUpdateJobTemplate, + HTTPMethod: "PUT", + HTTPPath: "/2017-08-29/jobTemplates/{name}", + } + + if input == nil { + input = &UpdateJobTemplateInput{} + } + + output = &UpdateJobTemplateOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateJobTemplate API operation for AWS Elemental MediaConvert. +// +// Modify one of your existing job templates. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaConvert's +// API operation UpdateJobTemplate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UpdateJobTemplate +func (c *MediaConvert) UpdateJobTemplate(input *UpdateJobTemplateInput) (*UpdateJobTemplateOutput, error) { + req, out := c.UpdateJobTemplateRequest(input) + return out, req.Send() +} + +// UpdateJobTemplateWithContext is the same as UpdateJobTemplate with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateJobTemplate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) UpdateJobTemplateWithContext(ctx aws.Context, input *UpdateJobTemplateInput, opts ...request.Option) (*UpdateJobTemplateOutput, error) { + req, out := c.UpdateJobTemplateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opUpdatePreset = "UpdatePreset" + +// UpdatePresetRequest generates a "aws/request.Request" representing the +// client's request for the UpdatePreset operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdatePreset for more information on using the UpdatePreset +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdatePresetRequest method. +// req, resp := client.UpdatePresetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UpdatePreset +func (c *MediaConvert) UpdatePresetRequest(input *UpdatePresetInput) (req *request.Request, output *UpdatePresetOutput) { + op := &request.Operation{ + Name: opUpdatePreset, + HTTPMethod: "PUT", + HTTPPath: "/2017-08-29/presets/{name}", + } + + if input == nil { + input = &UpdatePresetInput{} + } + + output = &UpdatePresetOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdatePreset API operation for AWS Elemental MediaConvert. +// +// Modify one of your existing presets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaConvert's +// API operation UpdatePreset for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/UpdatePreset +func (c *MediaConvert) UpdatePreset(input *UpdatePresetInput) (*UpdatePresetOutput, error) { + req, out := c.UpdatePresetRequest(input) + return out, req.Send() +} + +// UpdatePresetWithContext is the same as UpdatePreset with the addition of +// the ability to pass a context and additional request options. +// +// See UpdatePreset for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaConvert) UpdatePresetWithContext(ctx aws.Context, input *UpdatePresetInput, opts ...request.Option) (*UpdatePresetOutput, error) { + req, out := c.UpdatePresetRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() @@ -1701,8 +2426,8 @@ const opUpdateQueue = "UpdateQueue" // UpdateQueueRequest generates a "aws/request.Request" representing the // client's request for the UpdateQueue operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1786,7 +2511,12 @@ func (c *MediaConvert) UpdateQueueWithContext(ctx aws.Context, input *UpdateQueu } // Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to -// the value AAC. +// the value AAC. The service accepts one of two mutually exclusive groups of +// AAC settings--VBR and CBR. To select one of these modes, set the value of +// Bitrate control mode (rateControlMode) to "VBR" or "CBR". In VBR mode, you +// control the audio quality with the setting VBR quality (vbrQuality). In CBR +// mode, you use the setting Bitrate (bitrate). Defaults and valid values depend +// on the rate control mode. type AacSettings struct { _ struct{} `type:"structure"` @@ -1801,9 +2531,9 @@ type AacSettings struct { // and FollowInputAudioType. AudioDescriptionBroadcasterMix *string `locationName:"audioDescriptionBroadcasterMix" type:"string" enum:"AacAudioDescriptionBroadcasterMix"` - // Average bitrate in bits/second. Valid values depend on rate control mode - // and profile. - Bitrate *int64 `locationName:"bitrate" type:"integer"` + // Average bitrate in bits/second. Defaults and valid values depend on rate + // control mode and profile. + Bitrate *int64 `locationName:"bitrate" min:"6000" type:"integer"` // AAC Profile. CodecProfile *string `locationName:"codecProfile" type:"string" enum:"AacCodecProfile"` @@ -1823,7 +2553,7 @@ type AacSettings struct { RawFormat *string `locationName:"rawFormat" type:"string" enum:"AacRawFormat"` // Sample rate in Hz. Valid values depend on rate control mode and profile. - SampleRate *int64 `locationName:"sampleRate" type:"integer"` + SampleRate *int64 `locationName:"sampleRate" min:"8000" type:"integer"` // Use MPEG-2 AAC instead of MPEG-4 AAC audio for raw or MPEG-2 Transport Stream // containers. @@ -1843,6 +2573,22 @@ func (s AacSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *AacSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AacSettings"} + if s.Bitrate != nil && *s.Bitrate < 6000 { + invalidParams.Add(request.NewErrParamMinValue("Bitrate", 6000)) + } + if s.SampleRate != nil && *s.SampleRate < 8000 { + invalidParams.Add(request.NewErrParamMinValue("SampleRate", 8000)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioDescriptionBroadcasterMix sets the AudioDescriptionBroadcasterMix field's value. func (s *AacSettings) SetAudioDescriptionBroadcasterMix(v string) *AacSettings { s.AudioDescriptionBroadcasterMix = &v @@ -1903,7 +2649,7 @@ type Ac3Settings struct { _ struct{} `type:"structure"` // Average bitrate in bits/second. Valid bitrates depend on the coding mode. - Bitrate *int64 `locationName:"bitrate" type:"integer"` + Bitrate *int64 `locationName:"bitrate" min:"64000" type:"integer"` // Specifies the "Bitstream Mode" (bsmod) for the emitted AC-3 stream. See ATSC // A/52-2012 for background on these values. @@ -1914,7 +2660,7 @@ type Ac3Settings struct { // Sets the dialnorm for the output. If blank and input audio is Dolby Digital, // dialnorm will be passed through. 
- Dialnorm *int64 `locationName:"dialnorm" type:"integer"` + Dialnorm *int64 `locationName:"dialnorm" min:"1" type:"integer"` // If set to FILM_STANDARD, adds dynamic range compression signaling to the // output bitstream as defined in the Dolby Digital specification. @@ -1930,7 +2676,7 @@ type Ac3Settings struct { MetadataControl *string `locationName:"metadataControl" type:"string" enum:"Ac3MetadataControl"` // Sample rate in hz. Sample rate is always 48000. - SampleRate *int64 `locationName:"sampleRate" type:"integer"` + SampleRate *int64 `locationName:"sampleRate" min:"48000" type:"integer"` } // String returns the string representation @@ -1943,14 +2689,33 @@ func (s Ac3Settings) GoString() string { return s.String() } -// SetBitrate sets the Bitrate field's value. -func (s *Ac3Settings) SetBitrate(v int64) *Ac3Settings { - s.Bitrate = &v - return s -} - -// SetBitstreamMode sets the BitstreamMode field's value. -func (s *Ac3Settings) SetBitstreamMode(v string) *Ac3Settings { +// Validate inspects the fields of the type to determine if they are valid. +func (s *Ac3Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Ac3Settings"} + if s.Bitrate != nil && *s.Bitrate < 64000 { + invalidParams.Add(request.NewErrParamMinValue("Bitrate", 64000)) + } + if s.Dialnorm != nil && *s.Dialnorm < 1 { + invalidParams.Add(request.NewErrParamMinValue("Dialnorm", 1)) + } + if s.SampleRate != nil && *s.SampleRate < 48000 { + invalidParams.Add(request.NewErrParamMinValue("SampleRate", 48000)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBitrate sets the Bitrate field's value. +func (s *Ac3Settings) SetBitrate(v int64) *Ac3Settings { + s.Bitrate = &v + return s +} + +// SetBitstreamMode sets the BitstreamMode field's value. +func (s *Ac3Settings) SetBitstreamMode(v string) *Ac3Settings { s.BitstreamMode = &v return s } @@ -1998,15 +2763,15 @@ type AiffSettings struct { // Specify Bit depth (BitDepth), in bits per sample, to choose the encoding // quality for this audio track. - BitDepth *int64 `locationName:"bitDepth" type:"integer"` + BitDepth *int64 `locationName:"bitDepth" min:"16" type:"integer"` // Set Channels to specify the number of channels in this output audio track. // Choosing Mono in the console will give you 1 output channel; choosing Stereo // will give you 2. In the API, valid values are 1 and 2. - Channels *int64 `locationName:"channels" type:"integer"` + Channels *int64 `locationName:"channels" min:"1" type:"integer"` // Sample rate in hz. - SampleRate *int64 `locationName:"sampleRate" type:"integer"` + SampleRate *int64 `locationName:"sampleRate" min:"8000" type:"integer"` } // String returns the string representation @@ -2019,6 +2784,25 @@ func (s AiffSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *AiffSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AiffSettings"} + if s.BitDepth != nil && *s.BitDepth < 16 { + invalidParams.Add(request.NewErrParamMinValue("BitDepth", 16)) + } + if s.Channels != nil && *s.Channels < 1 { + invalidParams.Add(request.NewErrParamMinValue("Channels", 1)) + } + if s.SampleRate != nil && *s.SampleRate < 8000 { + invalidParams.Add(request.NewErrParamMinValue("SampleRate", 8000)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetBitDepth sets the BitDepth field's value. 
func (s *AiffSettings) SetBitDepth(v int64) *AiffSettings { s.BitDepth = &v @@ -2043,7 +2827,7 @@ type AncillarySourceSettings struct { // Specifies the 608 channel number in the ancillary data track from which to // extract captions. Unused for passthrough. - SourceAncillaryChannelNumber *int64 `locationName:"sourceAncillaryChannelNumber" type:"integer"` + SourceAncillaryChannelNumber *int64 `locationName:"sourceAncillaryChannelNumber" min:"1" type:"integer"` } // String returns the string representation @@ -2056,12 +2840,82 @@ func (s AncillarySourceSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *AncillarySourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AncillarySourceSettings"} + if s.SourceAncillaryChannelNumber != nil && *s.SourceAncillaryChannelNumber < 1 { + invalidParams.Add(request.NewErrParamMinValue("SourceAncillaryChannelNumber", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetSourceAncillaryChannelNumber sets the SourceAncillaryChannelNumber field's value. func (s *AncillarySourceSettings) SetSourceAncillaryChannelNumber(v int64) *AncillarySourceSettings { s.SourceAncillaryChannelNumber = &v return s } +// Associates the Amazon Resource Name (ARN) of an AWS Certificate Manager (ACM) +// certificate with an AWS Elemental MediaConvert resource. +type AssociateCertificateInput struct { + _ struct{} `type:"structure"` + + // The ARN of the ACM certificate that you want to associate with your MediaConvert + // resource. + // + // Arn is a required field + Arn *string `locationName:"arn" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociateCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociateCertificateInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *AssociateCertificateInput) SetArn(v string) *AssociateCertificateInput { + s.Arn = &v + return s +} + +// Successful association of Certificate Manager Amazon Resource Name (ARN) +// with Mediaconvert returns an OK message. +type AssociateCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AssociateCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateCertificateOutput) GoString() string { + return s.String() +} + // Audio codec settings (CodecSettings) under (AudioDescriptions) contains the // group of settings related to audio encoding. The settings in this group vary // depending on the value you choose for Audio codec (Codec). For each codec @@ -2072,7 +2926,12 @@ type AudioCodecSettings struct { _ struct{} `type:"structure"` // Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to - // the value AAC. + // the value AAC. The service accepts one of two mutually exclusive groups of + // AAC settings--VBR and CBR. 
To select one of these modes, set the value of + // Bitrate control mode (rateControlMode) to "VBR" or "CBR". In VBR mode, you + // control the audio quality with the setting VBR quality (vbrQuality). In CBR + // mode, you use the setting Bitrate (bitrate). Defaults and valid values depend + // on the rate control mode. AacSettings *AacSettings `locationName:"aacSettings" type:"structure"` // Required when you set (Codec) under (AudioDescriptions)>(CodecSettings) to @@ -2109,6 +2968,46 @@ func (s AudioCodecSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioCodecSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioCodecSettings"} + if s.AacSettings != nil { + if err := s.AacSettings.Validate(); err != nil { + invalidParams.AddNested("AacSettings", err.(request.ErrInvalidParams)) + } + } + if s.Ac3Settings != nil { + if err := s.Ac3Settings.Validate(); err != nil { + invalidParams.AddNested("Ac3Settings", err.(request.ErrInvalidParams)) + } + } + if s.AiffSettings != nil { + if err := s.AiffSettings.Validate(); err != nil { + invalidParams.AddNested("AiffSettings", err.(request.ErrInvalidParams)) + } + } + if s.Eac3Settings != nil { + if err := s.Eac3Settings.Validate(); err != nil { + invalidParams.AddNested("Eac3Settings", err.(request.ErrInvalidParams)) + } + } + if s.Mp2Settings != nil { + if err := s.Mp2Settings.Validate(); err != nil { + invalidParams.AddNested("Mp2Settings", err.(request.ErrInvalidParams)) + } + } + if s.WavSettings != nil { + if err := s.WavSettings.Validate(); err != nil { + invalidParams.AddNested("WavSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAacSettings sets the AacSettings field's value. func (s *AudioCodecSettings) SetAacSettings(v *AacSettings) *AudioCodecSettings { s.AacSettings = v @@ -2191,6 +3090,13 @@ type AudioDescription struct { // * WAV, WavSettings * AIFF, AiffSettings * AC3, Ac3Settings * EAC3, Eac3Settings CodecSettings *AudioCodecSettings `locationName:"codecSettings" type:"structure"` + // Specify the language for this audio output track, using the ISO 639-2 or + // ISO 639-3 three-letter language code. The language specified will be used + // when 'Follow Input Language Code' is not selected or when 'Follow Input Language + // Code' is selected but there is no ISO 639 language code specified by the + // input. + CustomLanguageCode *string `locationName:"customLanguageCode" min:"3" type:"string"` + // Indicates the language of the audio output track. The ISO 639 language specified // in the 'Language Code' drop down will be used when 'Follow Input Language // Code' is not selected or when 'Follow Input Language Code' is selected but @@ -2222,6 +3128,34 @@ func (s AudioDescription) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
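// Illustrative sketch (not generated SDK code): codec-level errors bubble up through the
// nested Validate calls added above. Same assumed imports as the earlier sketch; the
// Codec field and its "AAC" value follow the field documentation rather than anything
// defined in this hunk.
func exampleNestedCodecValidation() {
	codec := &mediaconvert.AudioCodecSettings{
		Codec: aws.String("AAC"),
		AacSettings: &mediaconvert.AacSettings{
			Bitrate: aws.Int64(1000), // too low; surfaces as a nested "AacSettings" error
		},
	}
	if err := codec.Validate(); err != nil {
		fmt.Println(err)
	}
}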
+func (s *AudioDescription) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioDescription"} + if s.CustomLanguageCode != nil && len(*s.CustomLanguageCode) < 3 { + invalidParams.Add(request.NewErrParamMinLen("CustomLanguageCode", 3)) + } + if s.AudioNormalizationSettings != nil { + if err := s.AudioNormalizationSettings.Validate(); err != nil { + invalidParams.AddNested("AudioNormalizationSettings", err.(request.ErrInvalidParams)) + } + } + if s.CodecSettings != nil { + if err := s.CodecSettings.Validate(); err != nil { + invalidParams.AddNested("CodecSettings", err.(request.ErrInvalidParams)) + } + } + if s.RemixSettings != nil { + if err := s.RemixSettings.Validate(); err != nil { + invalidParams.AddNested("RemixSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioNormalizationSettings sets the AudioNormalizationSettings field's value. func (s *AudioDescription) SetAudioNormalizationSettings(v *AudioNormalizationSettings) *AudioDescription { s.AudioNormalizationSettings = v @@ -2252,6 +3186,12 @@ func (s *AudioDescription) SetCodecSettings(v *AudioCodecSettings) *AudioDescrip return s } +// SetCustomLanguageCode sets the CustomLanguageCode field's value. +func (s *AudioDescription) SetCustomLanguageCode(v string) *AudioDescription { + s.CustomLanguageCode = &v + return s +} + // SetLanguageCode sets the LanguageCode field's value. func (s *AudioDescription) SetLanguageCode(v string) *AudioDescription { s.LanguageCode = &v @@ -2317,6 +3257,19 @@ func (s AudioNormalizationSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioNormalizationSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioNormalizationSettings"} + if s.CorrectionGateLevel != nil && *s.CorrectionGateLevel < -70 { + invalidParams.Add(request.NewErrParamMinValue("CorrectionGateLevel", -70)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAlgorithm sets the Algorithm field's value. func (s *AudioNormalizationSettings) SetAlgorithm(v string) *AudioNormalizationSettings { s.Algorithm = &v @@ -2357,10 +3310,13 @@ func (s *AudioNormalizationSettings) SetTargetLkfs(v float64) *AudioNormalizatio type AudioSelector struct { _ struct{} `type:"structure"` - // When an "Audio Description":#audio_description specifies an AudioSelector - // or AudioSelectorGroup for which no matching source is found in the input, - // then the audio selector marked as DEFAULT will be used. If none are marked - // as default, silence will be inserted for the duration of the input. + // Selects a specific language code from within an audio source, using the ISO + // 639-2 or ISO 639-3 three-letter language code + CustomLanguageCode *string `locationName:"customLanguageCode" min:"3" type:"string"` + + // Enable this setting on one audio selector to set it as the default for the + // job. The service uses this default for outputs where it can't find the specified + // input audio. If you don't set a default, those outputs have no audio. DefaultSelection *string `locationName:"defaultSelection" type:"string" enum:"AudioDefaultSelection"` // Specifies audio data from an external file source. @@ -2377,23 +3333,31 @@ type AudioSelector struct { // 0x101). Pids []*int64 `locationName:"pids" type:"list"` - // Applies only when input streams contain Dolby E. 
Enter the program ID (according - // to the metadata in the audio) of the Dolby E program to extract from the - // specified track. One program extracted per audio selector. To select multiple - // programs, create multiple selectors with the same Track and different Program - // numbers. "All channels" means to ignore the program IDs and include all the - // channels in this selector; useful if metadata is known to be incorrect. + // Use this setting for input streams that contain Dolby E, to have the service + // extract specific program data from the track. To select multiple programs, + // create multiple selectors with the same Track and different Program numbers. + // In the console, this setting is visible when you set Selector type to Track. + // Choose the program number from the dropdown list. If you are sending a JSON + // file, provide the program ID, which is part of the audio metadata. If your + // input file has incorrect metadata, you can choose All channels instead of + // a program number to have the service ignore the program IDs and include all + // the programs in the track. ProgramSelection *int64 `locationName:"programSelection" type:"integer"` - // Advanced audio remixing settings. + // Use these settings to reorder the audio channels of one input to match those + // of another input. This allows you to combine the two files into a single + // output, one after the other. RemixSettings *RemixSettings `locationName:"remixSettings" type:"structure"` // Specifies the type of the audio selector. SelectorType *string `locationName:"selectorType" type:"string" enum:"AudioSelectorType"` - // Identify the channel to include in this selector by entering the 1-based - // track index. To combine several tracks, enter a comma-separated list, e.g. - // "1,2,3" for tracks 1-3. + // Identify a track from the input audio to include in this selector by entering + // the track index number. To include several tracks in a single audio selector, + // specify multiple tracks as follows. Using the console, enter a comma-separated + // list. For example, type "1,2,3" to include tracks 1 through 3. Specifying + // directly in your JSON job file, provide the track numbers in an array. For + // example, "tracks": [1,2,3]. Tracks []*int64 `locationName:"tracks" type:"list"` } @@ -2407,6 +3371,33 @@ func (s AudioSelector) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioSelector) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioSelector"} + if s.CustomLanguageCode != nil && len(*s.CustomLanguageCode) < 3 { + invalidParams.Add(request.NewErrParamMinLen("CustomLanguageCode", 3)) + } + if s.Offset != nil && *s.Offset < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("Offset", -2.147483648e+09)) + } + if s.RemixSettings != nil { + if err := s.RemixSettings.Validate(); err != nil { + invalidParams.AddNested("RemixSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCustomLanguageCode sets the CustomLanguageCode field's value. +func (s *AudioSelector) SetCustomLanguageCode(v string) *AudioSelector { + s.CustomLanguageCode = &v + return s +} + // SetDefaultSelection sets the DefaultSelection field's value.
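// Illustrative sketch (not generated SDK code): selecting tracks 1 through 3 from the
// input audio and tagging the selector with an ISO 639-2 code, per the field
// documentation above. The "eng" code and the "DEFAULT" selection value are example
// values, and the same imports as the earlier sketches are assumed.
func exampleAudioSelector() *mediaconvert.AudioSelector {
	return &mediaconvert.AudioSelector{
		CustomLanguageCode: aws.String("eng"), // must be at least 3 characters
		DefaultSelection:   aws.String("DEFAULT"),
		Tracks:             []*int64{aws.Int64(1), aws.Int64(2), aws.Int64(3)}, // "tracks": [1,2,3] in JSON
	}
}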
func (s *AudioSelector) SetDefaultSelection(v string) *AudioSelector { s.DefaultSelection = &v @@ -2465,10 +3456,10 @@ func (s *AudioSelector) SetTracks(v []*int64) *AudioSelector { type AudioSelectorGroup struct { _ struct{} `type:"structure"` - // Name of an "Audio Selector":#inputs-audio_selector within the same input - // to include in the group. Audio selector names are standardized, based on - // their order within the input (e.g. "Audio Selector 1"). The audio_selector_name - // parameter can be repeated to add any number of audio selectors to the group. + // Name of an Audio Selector within the same input to include in the group. + // Audio selector names are standardized, based on their order within the input + // (e.g., "Audio Selector 1"). The audio selector name parameter can be repeated + // to add any number of audio selectors to the group. AudioSelectorNames []*string `locationName:"audioSelectorNames" type:"list"` } @@ -2494,7 +3485,7 @@ type AvailBlanking struct { // Blanking image to be used. Leave empty for solid black. Only bmp and png // images are supported. - AvailBlankingImage *string `locationName:"availBlankingImage" type:"string"` + AvailBlankingImage *string `locationName:"availBlankingImage" min:"14" type:"string"` } // String returns the string representation @@ -2507,6 +3498,19 @@ func (s AvailBlanking) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *AvailBlanking) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AvailBlanking"} + if s.AvailBlankingImage != nil && len(*s.AvailBlankingImage) < 14 { + invalidParams.Add(request.NewErrParamMinLen("AvailBlankingImage", 14)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAvailBlankingImage sets the AvailBlankingImage field's value. func (s *AvailBlanking) SetAvailBlankingImage(v string) *AvailBlanking { s.AvailBlankingImage = &v @@ -2548,7 +3552,7 @@ type BurninDestinationSettings struct { // Font resolution in DPI (dots per inch); default is 96 dpi.All burn-in and // DVB-Sub font settings must match. - FontResolution *int64 `locationName:"fontResolution" type:"integer"` + FontResolution *int64 `locationName:"fontResolution" min:"96" type:"integer"` // A positive integer indicates the exact font size in points. Set to 0 for // automatic font size selection. All burn-in and DVB-Sub font settings must @@ -2586,9 +3590,11 @@ type BurninDestinationSettings struct { // burn-in and DVB-Sub font settings must match. ShadowYOffset *int64 `locationName:"shadowYOffset" type:"integer"` - // Controls whether a fixed grid size or proportional font spacing will be used - // to generate the output subtitles bitmap. Only applicable for Teletext inputs - // and DVB-Sub/Burn-in outputs. + // Only applies to jobs with input captions in Teletext or STL formats. Specify + // whether the spacing between letters in your captions is set by the captions + // grid or varies depending on letter width. Choose fixed grid to conform to + // the spacing specified in the captions file more accurately. Choose proportional + // to make the text easier to read if the captions are closed caption. 
TeletextSpacing *string `locationName:"teletextSpacing" type:"string" enum:"BurninSubtitleTeletextSpacing"` // Specifies the horizontal position of the caption relative to the left side @@ -2620,6 +3626,25 @@ func (s BurninDestinationSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *BurninDestinationSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BurninDestinationSettings"} + if s.FontResolution != nil && *s.FontResolution < 96 { + invalidParams.Add(request.NewErrParamMinValue("FontResolution", 96)) + } + if s.ShadowXOffset != nil && *s.ShadowXOffset < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("ShadowXOffset", -2.147483648e+09)) + } + if s.ShadowYOffset != nil && *s.ShadowYOffset < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("ShadowYOffset", -2.147483648e+09)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAlignment sets the Alignment field's value. func (s *BurninDestinationSettings) SetAlignment(v string) *BurninDestinationSettings { s.Alignment = &v @@ -2778,7 +3803,11 @@ type CaptionDescription struct { // input when generating captions. The name should be of the format "Caption // Selector ", which denotes that the Nth Caption Selector will be used from // each input. - CaptionSelectorName *string `locationName:"captionSelectorName" type:"string"` + CaptionSelectorName *string `locationName:"captionSelectorName" min:"1" type:"string"` + + // Indicates the language of the caption output track, using the ISO 639-2 or + // ISO 639-3 three-letter language code + CustomLanguageCode *string `locationName:"customLanguageCode" min:"3" type:"string"` // Specific settings required by destination type. Note that burnin_destination_settings // are not available if the source of the caption data is Embedded or Teletext. @@ -2803,12 +3832,39 @@ func (s CaptionDescription) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *CaptionDescription) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionDescription"} + if s.CaptionSelectorName != nil && len(*s.CaptionSelectorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CaptionSelectorName", 1)) + } + if s.CustomLanguageCode != nil && len(*s.CustomLanguageCode) < 3 { + invalidParams.Add(request.NewErrParamMinLen("CustomLanguageCode", 3)) + } + if s.DestinationSettings != nil { + if err := s.DestinationSettings.Validate(); err != nil { + invalidParams.AddNested("DestinationSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCaptionSelectorName sets the CaptionSelectorName field's value. func (s *CaptionDescription) SetCaptionSelectorName(v string) *CaptionDescription { s.CaptionSelectorName = &v return s } +// SetCustomLanguageCode sets the CustomLanguageCode field's value. +func (s *CaptionDescription) SetCustomLanguageCode(v string) *CaptionDescription { + s.CustomLanguageCode = &v + return s +} + // SetDestinationSettings sets the DestinationSettings field's value. 
func (s *CaptionDescription) SetDestinationSettings(v *CaptionDestinationSettings) *CaptionDescription { s.DestinationSettings = v @@ -2831,6 +3887,10 @@ func (s *CaptionDescription) SetLanguageDescription(v string) *CaptionDescriptio type CaptionDescriptionPreset struct { _ struct{} `type:"structure"` + // Indicates the language of the caption output track, using the ISO 639-2 or + // ISO 639-3 three-letter language code + CustomLanguageCode *string `locationName:"customLanguageCode" min:"3" type:"string"` + // Specific settings required by destination type. Note that burnin_destination_settings // are not available if the source of the caption data is Embedded or Teletext. DestinationSettings *CaptionDestinationSettings `locationName:"destinationSettings" type:"structure"` @@ -2854,6 +3914,30 @@ func (s CaptionDescriptionPreset) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *CaptionDescriptionPreset) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionDescriptionPreset"} + if s.CustomLanguageCode != nil && len(*s.CustomLanguageCode) < 3 { + invalidParams.Add(request.NewErrParamMinLen("CustomLanguageCode", 3)) + } + if s.DestinationSettings != nil { + if err := s.DestinationSettings.Validate(); err != nil { + invalidParams.AddNested("DestinationSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCustomLanguageCode sets the CustomLanguageCode field's value. +func (s *CaptionDescriptionPreset) SetCustomLanguageCode(v string) *CaptionDescriptionPreset { + s.CustomLanguageCode = &v + return s +} + // SetDestinationSettings sets the DestinationSettings field's value. func (s *CaptionDescriptionPreset) SetDestinationSettings(v *CaptionDestinationSettings) *CaptionDescriptionPreset { s.DestinationSettings = v @@ -2880,8 +3964,8 @@ type CaptionDestinationSettings struct { // Burn-In Destination Settings. BurninDestinationSettings *BurninDestinationSettings `locationName:"burninDestinationSettings" type:"structure"` - // Type of Caption output, including Burn-In, Embedded, SCC, SRT, TTML, WebVTT, - // DVB-Sub, Teletext. + // Type of Caption output, including Burn-In, Embedded (with or without SCTE20), + // SCC, SMI, SRT, TTML, WebVTT, DVB-Sub, Teletext. DestinationType *string `locationName:"destinationType" type:"string" enum:"CaptionDestinationType"` // DVB-Sub Destination Settings @@ -2908,6 +3992,31 @@ func (s CaptionDestinationSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
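// Illustrative sketch (not generated SDK code): the three-letter language code minimum
// added above is enforced client side. Same assumed imports as the earlier sketches;
// the selector name is a placeholder following the documented "Caption Selector <N>" pattern.
func exampleCaptionLanguageValidation() {
	desc := &mediaconvert.CaptionDescription{
		CaptionSelectorName: aws.String("Caption Selector 1"),
		CustomLanguageCode:  aws.String("en"), // two letters, so Validate reports the MinLen of 3
	}
	if err := desc.Validate(); err != nil {
		fmt.Println(err)
	}
}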
+func (s *CaptionDestinationSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionDestinationSettings"} + if s.BurninDestinationSettings != nil { + if err := s.BurninDestinationSettings.Validate(); err != nil { + invalidParams.AddNested("BurninDestinationSettings", err.(request.ErrInvalidParams)) + } + } + if s.DvbSubDestinationSettings != nil { + if err := s.DvbSubDestinationSettings.Validate(); err != nil { + invalidParams.AddNested("DvbSubDestinationSettings", err.(request.ErrInvalidParams)) + } + } + if s.TeletextDestinationSettings != nil { + if err := s.TeletextDestinationSettings.Validate(); err != nil { + invalidParams.AddNested("TeletextDestinationSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetBurninDestinationSettings sets the BurninDestinationSettings field's value. func (s *CaptionDestinationSettings) SetBurninDestinationSettings(v *BurninDestinationSettings) *CaptionDestinationSettings { s.BurninDestinationSettings = v @@ -2926,158 +4035,512 @@ func (s *CaptionDestinationSettings) SetDvbSubDestinationSettings(v *DvbSubDesti return s } -// SetSccDestinationSettings sets the SccDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetSccDestinationSettings(v *SccDestinationSettings) *CaptionDestinationSettings { - s.SccDestinationSettings = v - return s -} +// SetSccDestinationSettings sets the SccDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetSccDestinationSettings(v *SccDestinationSettings) *CaptionDestinationSettings { + s.SccDestinationSettings = v + return s +} + +// SetTeletextDestinationSettings sets the TeletextDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetTeletextDestinationSettings(v *TeletextDestinationSettings) *CaptionDestinationSettings { + s.TeletextDestinationSettings = v + return s +} + +// SetTtmlDestinationSettings sets the TtmlDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetTtmlDestinationSettings(v *TtmlDestinationSettings) *CaptionDestinationSettings { + s.TtmlDestinationSettings = v + return s +} + +// Set up captions in your outputs by first selecting them from your input here. +type CaptionSelector struct { + _ struct{} `type:"structure"` + + // The specific language to extract from source, using the ISO 639-2 or ISO + // 639-3 three-letter language code. If input is SCTE-27, complete this field + // and/or PID to select the caption language to extract. If input is DVB-Sub + // and output is Burn-in or SMPTE-TT, complete this field and/or PID to select + // the caption language to extract. If input is DVB-Sub that is being passed + // through, omit this field (and PID field); there is no way to extract a specific + // language with pass-through captions. + CustomLanguageCode *string `locationName:"customLanguageCode" min:"3" type:"string"` + + // The specific language to extract from source. If input is SCTE-27, complete + // this field and/or PID to select the caption language to extract. If input + // is DVB-Sub and output is Burn-in or SMPTE-TT, complete this field and/or + // PID to select the caption language to extract. If input is DVB-Sub that is + // being passed through, omit this field (and PID field); there is no way to + // extract a specific language with pass-through captions. 
+ LanguageCode *string `locationName:"languageCode" type:"string" enum:"LanguageCode"` + + // Source settings (SourceSettings) contains the group of settings for captions + // in the input. + SourceSettings *CaptionSourceSettings `locationName:"sourceSettings" type:"structure"` +} + +// String returns the string representation +func (s CaptionSelector) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CaptionSelector) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CaptionSelector) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionSelector"} + if s.CustomLanguageCode != nil && len(*s.CustomLanguageCode) < 3 { + invalidParams.Add(request.NewErrParamMinLen("CustomLanguageCode", 3)) + } + if s.SourceSettings != nil { + if err := s.SourceSettings.Validate(); err != nil { + invalidParams.AddNested("SourceSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCustomLanguageCode sets the CustomLanguageCode field's value. +func (s *CaptionSelector) SetCustomLanguageCode(v string) *CaptionSelector { + s.CustomLanguageCode = &v + return s +} + +// SetLanguageCode sets the LanguageCode field's value. +func (s *CaptionSelector) SetLanguageCode(v string) *CaptionSelector { + s.LanguageCode = &v + return s +} + +// SetSourceSettings sets the SourceSettings field's value. +func (s *CaptionSelector) SetSourceSettings(v *CaptionSourceSettings) *CaptionSelector { + s.SourceSettings = v + return s +} + +// Source settings (SourceSettings) contains the group of settings for captions +// in the input. +type CaptionSourceSettings struct { + _ struct{} `type:"structure"` + + // Settings for ancillary captions source. + AncillarySourceSettings *AncillarySourceSettings `locationName:"ancillarySourceSettings" type:"structure"` + + // DVB Sub Source Settings + DvbSubSourceSettings *DvbSubSourceSettings `locationName:"dvbSubSourceSettings" type:"structure"` + + // Settings for embedded captions Source + EmbeddedSourceSettings *EmbeddedSourceSettings `locationName:"embeddedSourceSettings" type:"structure"` + + // Settings for File-based Captions in Source + FileSourceSettings *FileSourceSettings `locationName:"fileSourceSettings" type:"structure"` + + // Use Source (SourceType) to identify the format of your input captions. The + // service cannot auto-detect caption format. + SourceType *string `locationName:"sourceType" type:"string" enum:"CaptionSourceType"` + + // Settings specific to Teletext caption sources, including Page number. + TeletextSourceSettings *TeletextSourceSettings `locationName:"teletextSourceSettings" type:"structure"` +} + +// String returns the string representation +func (s CaptionSourceSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CaptionSourceSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CaptionSourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionSourceSettings"} + if s.AncillarySourceSettings != nil { + if err := s.AncillarySourceSettings.Validate(); err != nil { + invalidParams.AddNested("AncillarySourceSettings", err.(request.ErrInvalidParams)) + } + } + if s.DvbSubSourceSettings != nil { + if err := s.DvbSubSourceSettings.Validate(); err != nil { + invalidParams.AddNested("DvbSubSourceSettings", err.(request.ErrInvalidParams)) + } + } + if s.EmbeddedSourceSettings != nil { + if err := s.EmbeddedSourceSettings.Validate(); err != nil { + invalidParams.AddNested("EmbeddedSourceSettings", err.(request.ErrInvalidParams)) + } + } + if s.FileSourceSettings != nil { + if err := s.FileSourceSettings.Validate(); err != nil { + invalidParams.AddNested("FileSourceSettings", err.(request.ErrInvalidParams)) + } + } + if s.TeletextSourceSettings != nil { + if err := s.TeletextSourceSettings.Validate(); err != nil { + invalidParams.AddNested("TeletextSourceSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAncillarySourceSettings sets the AncillarySourceSettings field's value. +func (s *CaptionSourceSettings) SetAncillarySourceSettings(v *AncillarySourceSettings) *CaptionSourceSettings { + s.AncillarySourceSettings = v + return s +} + +// SetDvbSubSourceSettings sets the DvbSubSourceSettings field's value. +func (s *CaptionSourceSettings) SetDvbSubSourceSettings(v *DvbSubSourceSettings) *CaptionSourceSettings { + s.DvbSubSourceSettings = v + return s +} + +// SetEmbeddedSourceSettings sets the EmbeddedSourceSettings field's value. +func (s *CaptionSourceSettings) SetEmbeddedSourceSettings(v *EmbeddedSourceSettings) *CaptionSourceSettings { + s.EmbeddedSourceSettings = v + return s +} + +// SetFileSourceSettings sets the FileSourceSettings field's value. +func (s *CaptionSourceSettings) SetFileSourceSettings(v *FileSourceSettings) *CaptionSourceSettings { + s.FileSourceSettings = v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *CaptionSourceSettings) SetSourceType(v string) *CaptionSourceSettings { + s.SourceType = &v + return s +} + +// SetTeletextSourceSettings sets the TeletextSourceSettings field's value. +func (s *CaptionSourceSettings) SetTeletextSourceSettings(v *TeletextSourceSettings) *CaptionSourceSettings { + s.TeletextSourceSettings = v + return s +} + +// Channel mapping (ChannelMapping) contains the group of fields that hold the +// remixing value for each channel. Units are in dB. Acceptable values are within +// the range from -60 (mute) through 6. A setting of 0 passes the input channel +// unchanged to the output channel (no attenuation or amplification). +type ChannelMapping struct { + _ struct{} `type:"structure"` + + // List of output channels + OutputChannels []*OutputChannelMapping `locationName:"outputChannels" type:"list"` +} + +// String returns the string representation +func (s ChannelMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ChannelMapping) GoString() string { + return s.String() +} + +// SetOutputChannels sets the OutputChannels field's value. 
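// Illustrative sketch (not generated SDK code): wiring an input caption selector to
// embedded captions via the source settings shown above. The "EMBEDDED" source type
// string and the "eng" code are assumed example values, not values defined in this hunk.
func exampleCaptionSelector() *mediaconvert.CaptionSelector {
	return &mediaconvert.CaptionSelector{
		CustomLanguageCode: aws.String("eng"),
		SourceSettings: &mediaconvert.CaptionSourceSettings{
			SourceType:             aws.String("EMBEDDED"),
			EmbeddedSourceSettings: &mediaconvert.EmbeddedSourceSettings{},
		},
	}
}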
+func (s *ChannelMapping) SetOutputChannels(v []*OutputChannelMapping) *ChannelMapping { + s.OutputChannels = v + return s +} + +// Settings for CMAF encryption +type CmafEncryptionSettings struct { + _ struct{} `type:"structure"` + + // This is a 128-bit, 16-byte hex value represented by a 32-character text string. + // If this parameter is not set then the Initialization Vector will follow the + // segment number by default. + ConstantInitializationVector *string `locationName:"constantInitializationVector" min:"32" type:"string"` + + // Encrypts the segments with the given encryption scheme. Leave blank to disable. + // Selecting 'Disabled' in the web interface also disables encryption. + EncryptionMethod *string `locationName:"encryptionMethod" type:"string" enum:"CmafEncryptionType"` + + // The Initialization Vector is a 128-bit number used in conjunction with the + // key for encrypting blocks. If set to INCLUDE, Initialization Vector is listed + // in the manifest. Otherwise Initialization Vector is not in the manifest. + InitializationVectorInManifest *string `locationName:"initializationVectorInManifest" type:"string" enum:"CmafInitializationVectorInManifest"` + + // Use these settings to set up encryption with a static key provider. + StaticKeyProvider *StaticKeyProvider `locationName:"staticKeyProvider" type:"structure"` + + // Indicates which type of key provider is used for encryption. + Type *string `locationName:"type" type:"string" enum:"CmafKeyProviderType"` +} + +// String returns the string representation +func (s CmafEncryptionSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CmafEncryptionSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CmafEncryptionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CmafEncryptionSettings"} + if s.ConstantInitializationVector != nil && len(*s.ConstantInitializationVector) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ConstantInitializationVector", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConstantInitializationVector sets the ConstantInitializationVector field's value. +func (s *CmafEncryptionSettings) SetConstantInitializationVector(v string) *CmafEncryptionSettings { + s.ConstantInitializationVector = &v + return s +} + +// SetEncryptionMethod sets the EncryptionMethod field's value. +func (s *CmafEncryptionSettings) SetEncryptionMethod(v string) *CmafEncryptionSettings { + s.EncryptionMethod = &v + return s +} + +// SetInitializationVectorInManifest sets the InitializationVectorInManifest field's value. +func (s *CmafEncryptionSettings) SetInitializationVectorInManifest(v string) *CmafEncryptionSettings { + s.InitializationVectorInManifest = &v + return s +} + +// SetStaticKeyProvider sets the StaticKeyProvider field's value. +func (s *CmafEncryptionSettings) SetStaticKeyProvider(v *StaticKeyProvider) *CmafEncryptionSettings { + s.StaticKeyProvider = v + return s +} + +// SetType sets the Type field's value. +func (s *CmafEncryptionSettings) SetType(v string) *CmafEncryptionSettings { + s.Type = &v + return s +} + +// Required when you set (Type) under (OutputGroups)>(OutputGroupSettings) to +// CMAF_GROUP_SETTINGS. Each output in a CMAF Output Group may only contain +// a single video, audio, or caption output. 
+type CmafGroupSettings struct { + _ struct{} `type:"structure"` + + // A partial URI prefix that will be put in the manifest file at the top level + // BaseURL element. Can be used if streams are delivered from a different URL + // than the manifest file. + BaseUrl *string `locationName:"baseUrl" type:"string"` + + // When set to ENABLED, sets #EXT-X-ALLOW-CACHE:no tag, which prevents client + // from saving media segments for later replay. + ClientCache *string `locationName:"clientCache" type:"string" enum:"CmafClientCache"` + + // Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist + // generation. + CodecSpecification *string `locationName:"codecSpecification" type:"string" enum:"CmafCodecSpecification"` + + // Use Destination (Destination) to specify the S3 output location and the output + // filename base. Destination accepts format identifiers. If you do not specify + // the base filename in the URI, the service will use the filename of the input + // file. If your job has multiple inputs, the service uses the filename of the + // first input file. + Destination *string `locationName:"destination" type:"string"` + + // DRM settings. + Encryption *CmafEncryptionSettings `locationName:"encryption" type:"structure"` + + // Length of fragments to generate (in seconds). Fragment length must be compatible + // with GOP size and Framerate. Note that fragments will end on the next keyframe + // after this number of seconds, so actual fragment length may be longer. When + // Emit Single File is checked, the fragmentation is internal to a single output + // file and it does not cause the creation of many output files as in other + // output types. + FragmentLength *int64 `locationName:"fragmentLength" min:"1" type:"integer"` + + // When set to GZIP, compresses HLS playlist. + ManifestCompression *string `locationName:"manifestCompression" type:"string" enum:"CmafManifestCompression"` -// SetTeletextDestinationSettings sets the TeletextDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetTeletextDestinationSettings(v *TeletextDestinationSettings) *CaptionDestinationSettings { - s.TeletextDestinationSettings = v - return s -} + // Indicates whether the output manifest should use floating point values for + // segment duration. + ManifestDurationFormat *string `locationName:"manifestDurationFormat" type:"string" enum:"CmafManifestDurationFormat"` -// SetTtmlDestinationSettings sets the TtmlDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetTtmlDestinationSettings(v *TtmlDestinationSettings) *CaptionDestinationSettings { - s.TtmlDestinationSettings = v - return s -} + // Minimum time of initially buffered media that is needed to ensure smooth + // playout. + MinBufferTime *int64 `locationName:"minBufferTime" type:"integer"` -// Caption inputs to be mapped to caption outputs. -type CaptionSelector struct { - _ struct{} `type:"structure"` + // Keep this setting at the default value of 0, unless you are troubleshooting + // a problem with how devices play back the end of your video asset. If you + // know that player devices are hanging on the final segment of your video because + // the length of your final segment is too short, use this setting to specify + // a minimum final segment length, in seconds. Choose a value that is greater + // than or equal to 1 and less than your segment length. 
When you specify a + // value for this setting, the encoder will combine any final segment that is + // shorter than the length that you specify with the previous segment. For example, + // your segment length is 3 seconds and your final segment is .5 seconds without + // a minimum final segment length; when you set the minimum final segment length + // to 1, your final segment is 3.5 seconds. + MinFinalSegmentLength *float64 `locationName:"minFinalSegmentLength" type:"double"` - // The specific language to extract from source. If input is SCTE-27, complete - // this field and/or PID to select the caption language to extract. If input - // is DVB-Sub and output is Burn-in or SMPTE-TT, complete this field and/or - // PID to select the caption language to extract. If input is DVB-Sub that is - // being passed through, omit this field (and PID field); there is no way to - // extract a specific language with pass-through captions. - LanguageCode *string `locationName:"languageCode" type:"string" enum:"LanguageCode"` + // When set to SINGLE_FILE, a single output file is generated, which is internally + // segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, + // separate segment files will be created. + SegmentControl *string `locationName:"segmentControl" type:"string" enum:"CmafSegmentControl"` + + // Use this setting to specify the length, in seconds, of each individual CMAF + // segment. This value applies to the whole package; that is, to every output + // in the output group. Note that segments end on the first keyframe after this + // number of seconds, so the actual segment length might be slightly longer. + // If you set Segment control (CmafSegmentControl) to single file, the service + // puts the content of each output in a single file that has metadata that marks + // these segments. If you set it to segmented files, the service creates multiple + // files for each output, each with the content of one segment. + SegmentLength *int64 `locationName:"segmentLength" min:"1" type:"integer"` - // Source settings (SourceSettings) contains the group of settings for captions - // in the input. - SourceSettings *CaptionSourceSettings `locationName:"sourceSettings" type:"structure"` + // Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag + // of variant manifest. + StreamInfResolution *string `locationName:"streamInfResolution" type:"string" enum:"CmafStreamInfResolution"` + + // When set to ENABLED, a DASH MPD manifest will be generated for this output. + WriteDashManifest *string `locationName:"writeDashManifest" type:"string" enum:"CmafWriteDASHManifest"` + + // When set to ENABLED, an Apple HLS manifest will be generated for this output. + WriteHlsManifest *string `locationName:"writeHlsManifest" type:"string" enum:"CmafWriteHLSManifest"` } // String returns the string representation -func (s CaptionSelector) String() string { +func (s CmafGroupSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CaptionSelector) GoString() string { +func (s CmafGroupSettings) GoString() string { return s.String() } -// SetLanguageCode sets the LanguageCode field's value. -func (s *CaptionSelector) SetLanguageCode(v string) *CaptionSelector { - s.LanguageCode = &v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CmafGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CmafGroupSettings"} + if s.FragmentLength != nil && *s.FragmentLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("FragmentLength", 1)) + } + if s.SegmentLength != nil && *s.SegmentLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("SegmentLength", 1)) + } + if s.Encryption != nil { + if err := s.Encryption.Validate(); err != nil { + invalidParams.AddNested("Encryption", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetSourceSettings sets the SourceSettings field's value. -func (s *CaptionSelector) SetSourceSettings(v *CaptionSourceSettings) *CaptionSelector { - s.SourceSettings = v +// SetBaseUrl sets the BaseUrl field's value. +func (s *CmafGroupSettings) SetBaseUrl(v string) *CmafGroupSettings { + s.BaseUrl = &v return s } -// Source settings (SourceSettings) contains the group of settings for captions -// in the input. -type CaptionSourceSettings struct { - _ struct{} `type:"structure"` - - // Settings for ancillary captions source. - AncillarySourceSettings *AncillarySourceSettings `locationName:"ancillarySourceSettings" type:"structure"` - - // DVB Sub Source Settings - DvbSubSourceSettings *DvbSubSourceSettings `locationName:"dvbSubSourceSettings" type:"structure"` - - // Settings for embedded captions Source - EmbeddedSourceSettings *EmbeddedSourceSettings `locationName:"embeddedSourceSettings" type:"structure"` - - // Settings for File-based Captions in Source - FileSourceSettings *FileSourceSettings `locationName:"fileSourceSettings" type:"structure"` - - // Use Source (SourceType) to identify the format of your input captions. The - // service cannot auto-detect caption format. - SourceType *string `locationName:"sourceType" type:"string" enum:"CaptionSourceType"` - - // Settings specific to Teletext caption sources, including Page number. - TeletextSourceSettings *TeletextSourceSettings `locationName:"teletextSourceSettings" type:"structure"` +// SetClientCache sets the ClientCache field's value. +func (s *CmafGroupSettings) SetClientCache(v string) *CmafGroupSettings { + s.ClientCache = &v + return s } -// String returns the string representation -func (s CaptionSourceSettings) String() string { - return awsutil.Prettify(s) +// SetCodecSpecification sets the CodecSpecification field's value. +func (s *CmafGroupSettings) SetCodecSpecification(v string) *CmafGroupSettings { + s.CodecSpecification = &v + return s } -// GoString returns the string representation -func (s CaptionSourceSettings) GoString() string { - return s.String() +// SetDestination sets the Destination field's value. +func (s *CmafGroupSettings) SetDestination(v string) *CmafGroupSettings { + s.Destination = &v + return s } -// SetAncillarySourceSettings sets the AncillarySourceSettings field's value. -func (s *CaptionSourceSettings) SetAncillarySourceSettings(v *AncillarySourceSettings) *CaptionSourceSettings { - s.AncillarySourceSettings = v +// SetEncryption sets the Encryption field's value. +func (s *CmafGroupSettings) SetEncryption(v *CmafEncryptionSettings) *CmafGroupSettings { + s.Encryption = v return s } -// SetDvbSubSourceSettings sets the DvbSubSourceSettings field's value. -func (s *CaptionSourceSettings) SetDvbSubSourceSettings(v *DvbSubSourceSettings) *CaptionSourceSettings { - s.DvbSubSourceSettings = v +// SetFragmentLength sets the FragmentLength field's value. 
+func (s *CmafGroupSettings) SetFragmentLength(v int64) *CmafGroupSettings { + s.FragmentLength = &v return s } -// SetEmbeddedSourceSettings sets the EmbeddedSourceSettings field's value. -func (s *CaptionSourceSettings) SetEmbeddedSourceSettings(v *EmbeddedSourceSettings) *CaptionSourceSettings { - s.EmbeddedSourceSettings = v +// SetManifestCompression sets the ManifestCompression field's value. +func (s *CmafGroupSettings) SetManifestCompression(v string) *CmafGroupSettings { + s.ManifestCompression = &v return s } -// SetFileSourceSettings sets the FileSourceSettings field's value. -func (s *CaptionSourceSettings) SetFileSourceSettings(v *FileSourceSettings) *CaptionSourceSettings { - s.FileSourceSettings = v +// SetManifestDurationFormat sets the ManifestDurationFormat field's value. +func (s *CmafGroupSettings) SetManifestDurationFormat(v string) *CmafGroupSettings { + s.ManifestDurationFormat = &v return s } -// SetSourceType sets the SourceType field's value. -func (s *CaptionSourceSettings) SetSourceType(v string) *CaptionSourceSettings { - s.SourceType = &v +// SetMinBufferTime sets the MinBufferTime field's value. +func (s *CmafGroupSettings) SetMinBufferTime(v int64) *CmafGroupSettings { + s.MinBufferTime = &v return s } -// SetTeletextSourceSettings sets the TeletextSourceSettings field's value. -func (s *CaptionSourceSettings) SetTeletextSourceSettings(v *TeletextSourceSettings) *CaptionSourceSettings { - s.TeletextSourceSettings = v +// SetMinFinalSegmentLength sets the MinFinalSegmentLength field's value. +func (s *CmafGroupSettings) SetMinFinalSegmentLength(v float64) *CmafGroupSettings { + s.MinFinalSegmentLength = &v return s } -// Channel mapping (ChannelMapping) contains the group of fields that hold the -// remixing value for each channel. Units are in dB. Acceptable values are within -// the range from -60 (mute) through 6. A setting of 0 passes the input channel -// unchanged to the output channel (no attenuation or amplification). -type ChannelMapping struct { - _ struct{} `type:"structure"` +// SetSegmentControl sets the SegmentControl field's value. +func (s *CmafGroupSettings) SetSegmentControl(v string) *CmafGroupSettings { + s.SegmentControl = &v + return s +} - // List of output channels - OutputChannels []*OutputChannelMapping `locationName:"outputChannels" type:"list"` +// SetSegmentLength sets the SegmentLength field's value. +func (s *CmafGroupSettings) SetSegmentLength(v int64) *CmafGroupSettings { + s.SegmentLength = &v + return s } -// String returns the string representation -func (s ChannelMapping) String() string { - return awsutil.Prettify(s) +// SetStreamInfResolution sets the StreamInfResolution field's value. +func (s *CmafGroupSettings) SetStreamInfResolution(v string) *CmafGroupSettings { + s.StreamInfResolution = &v + return s } -// GoString returns the string representation -func (s ChannelMapping) GoString() string { - return s.String() +// SetWriteDashManifest sets the WriteDashManifest field's value. +func (s *CmafGroupSettings) SetWriteDashManifest(v string) *CmafGroupSettings { + s.WriteDashManifest = &v + return s } -// SetOutputChannels sets the OutputChannels field's value. -func (s *ChannelMapping) SetOutputChannels(v []*OutputChannelMapping) *ChannelMapping { - s.OutputChannels = v +// SetWriteHlsManifest sets the WriteHlsManifest field's value. 
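// Illustrative sketch (not generated SDK code): building CMAF group settings with the
// chainable setters above and checking the new one-second minimums for fragment and
// segment length. The "SINGLE_FILE" and "ENABLED" strings come from the field
// documentation in this hunk; the S3 destination is a placeholder.
func exampleCmafGroup() error {
	cmaf := &mediaconvert.CmafGroupSettings{}
	cmaf.SetDestination("s3://example-bucket/cmaf/").
		SetSegmentControl("SINGLE_FILE").
		SetFragmentLength(2).
		SetSegmentLength(10).
		SetWriteHlsManifest("ENABLED").
		SetWriteDashManifest("ENABLED")
	return cmaf.Validate() // fails if FragmentLength or SegmentLength were below 1
}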
+func (s *CmafGroupSettings) SetWriteHlsManifest(v string) *CmafGroupSettings { + s.WriteHlsManifest = &v return s } @@ -3086,7 +4549,7 @@ type ColorCorrector struct { _ struct{} `type:"structure"` // Brightness level. - Brightness *int64 `locationName:"brightness" type:"integer"` + Brightness *int64 `locationName:"brightness" min:"1" type:"integer"` // Determines if colorspace conversion will be performed. If set to _None_, // no conversion will be performed. If _Force 601_ or _Force 709_ are selected, @@ -3096,19 +4559,22 @@ type ColorCorrector struct { ColorSpaceConversion *string `locationName:"colorSpaceConversion" type:"string" enum:"ColorSpaceConversion"` // Contrast level. - Contrast *int64 `locationName:"contrast" type:"integer"` - - // Use the HDR master display (Hdr10Metadata) settings to provide values for - // HDR color. These values vary depending on the input video and must be provided - // by a color grader. Range is 0 to 50,000, each increment represents 0.00002 - // in CIE1931 color coordinate. + Contrast *int64 `locationName:"contrast" min:"1" type:"integer"` + + // Use the HDR master display (Hdr10Metadata) settings to correct HDR metadata + // or to provide missing metadata. These values vary depending on the input + // video and must be provided by a color grader. Range is 0 to 50,000, each + // increment represents 0.00002 in CIE1931 color coordinate. Note that these + // settings are not color correction. Note that if you are creating HDR outputs + // inside of an HLS CMAF package, to comply with the Apple specification, you + // must use the HVC1 for H.265 setting. Hdr10Metadata *Hdr10Metadata `locationName:"hdr10Metadata" type:"structure"` // Hue in degrees. Hue *int64 `locationName:"hue" type:"integer"` // Saturation level. - Saturation *int64 `locationName:"saturation" type:"integer"` + Saturation *int64 `locationName:"saturation" min:"1" type:"integer"` } // String returns the string representation @@ -3121,6 +4587,28 @@ func (s ColorCorrector) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ColorCorrector) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ColorCorrector"} + if s.Brightness != nil && *s.Brightness < 1 { + invalidParams.Add(request.NewErrParamMinValue("Brightness", 1)) + } + if s.Contrast != nil && *s.Contrast < 1 { + invalidParams.Add(request.NewErrParamMinValue("Contrast", 1)) + } + if s.Hue != nil && *s.Hue < -180 { + invalidParams.Add(request.NewErrParamMinValue("Hue", -180)) + } + if s.Saturation != nil && *s.Saturation < 1 { + invalidParams.Add(request.NewErrParamMinValue("Saturation", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetBrightness sets the Brightness field's value. func (s *ColorCorrector) SetBrightness(v int64) *ColorCorrector { s.Brightness = &v @@ -3191,6 +4679,26 @@ func (s ContainerSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ContainerSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ContainerSettings"} + if s.M2tsSettings != nil { + if err := s.M2tsSettings.Validate(); err != nil { + invalidParams.AddNested("M2tsSettings", err.(request.ErrInvalidParams)) + } + } + if s.M3u8Settings != nil { + if err := s.M3u8Settings.Validate(); err != nil { + invalidParams.AddNested("M3u8Settings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetContainer sets the Container field's value. func (s *ContainerSettings) SetContainer(v string) *ContainerSettings { s.Container = &v @@ -3232,6 +4740,13 @@ func (s *ContainerSettings) SetMp4Settings(v *Mp4Settings) *ContainerSettings { type CreateJobInput struct { _ struct{} `type:"structure"` + // Optional. Choose a tag type that AWS Billing and Cost Management will use + // to sort your AWS Elemental MediaConvert costs on any billing report that + // you set up. Any transcoding outputs that don't have an associated tag will + // appear in your billing report unsorted. If you don't choose a valid value + // for this field, your job outputs will appear on the billing report unsorted. + BillingTagsSource *string `locationName:"billingTagsSource" type:"string" enum:"BillingTagsSource"` + // Idempotency token for CreateJob operation. ClientRequestToken *string `locationName:"clientRequestToken" type:"string" idempotencyToken:"true"` @@ -3246,10 +4761,14 @@ type CreateJobInput struct { // Required. The IAM role you use for creating this job. For details about permissions, // see the User Guide topic at the User Guide at http://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html. - Role *string `locationName:"role" type:"string"` + // + // Role is a required field + Role *string `locationName:"role" type:"string" required:"true"` // JobSettings contains all the transcode settings for a job. - Settings *JobSettings `locationName:"settings" type:"structure"` + // + // Settings is a required field + Settings *JobSettings `locationName:"settings" type:"structure" required:"true"` // User-defined metadata that you want to associate with an MediaConvert job. // You specify metadata in key/value pairs. @@ -3266,6 +4785,33 @@ func (s CreateJobInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateJobInput"} + if s.Role == nil { + invalidParams.Add(request.NewErrParamRequired("Role")) + } + if s.Settings == nil { + invalidParams.Add(request.NewErrParamRequired("Settings")) + } + if s.Settings != nil { + if err := s.Settings.Validate(); err != nil { + invalidParams.AddNested("Settings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBillingTagsSource sets the BillingTagsSource field's value. +func (s *CreateJobInput) SetBillingTagsSource(v string) *CreateJobInput { + s.BillingTagsSource = &v + return s +} + // SetClientRequestToken sets the ClientRequestToken field's value. func (s *CreateJobInput) SetClientRequestToken(v string) *CreateJobInput { s.ClientRequestToken = &v @@ -3340,7 +4886,9 @@ type CreateJobTemplateInput struct { Description *string `locationName:"description" type:"string"` // The name of the job template you are creating. 
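// Illustrative sketch (not generated SDK code): with Role and Settings now marked
// required, an incomplete CreateJobInput is rejected locally by Validate before any
// request is sent. The "QUEUE" billing tags source is an assumed example value; same
// imports as the earlier sketches.
func exampleCreateJobValidation() {
	in := &mediaconvert.CreateJobInput{
		BillingTagsSource: aws.String("QUEUE"), // optional; used to sort costs on billing reports
	}
	if err := in.Validate(); err != nil {
		fmt.Println(err) // reports the missing required Role and Settings fields
	}
}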
- Name *string `locationName:"name" type:"string"` + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` // Optional. The queue that jobs created from this template are assigned to. // If you don't specify this, jobs will go to the default queue. @@ -3348,7 +4896,13 @@ type CreateJobTemplateInput struct { // JobTemplateSettings contains all the transcode settings saved in the template // that will be applied to jobs created from it. - Settings *JobTemplateSettings `locationName:"settings" type:"structure"` + // + // Settings is a required field + Settings *JobTemplateSettings `locationName:"settings" type:"structure" required:"true"` + + // The tags that you want to add to the resource. You can tag resources with + // a key-value pair or with only a key. + Tags map[string]*string `locationName:"tags" type:"map"` } // String returns the string representation @@ -3361,6 +4915,27 @@ func (s CreateJobTemplateInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateJobTemplateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateJobTemplateInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Settings == nil { + invalidParams.Add(request.NewErrParamRequired("Settings")) + } + if s.Settings != nil { + if err := s.Settings.Validate(); err != nil { + invalidParams.AddNested("Settings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCategory sets the Category field's value. func (s *CreateJobTemplateInput) SetCategory(v string) *CreateJobTemplateInput { s.Category = &v @@ -3391,6 +4966,12 @@ func (s *CreateJobTemplateInput) SetSettings(v *JobTemplateSettings) *CreateJobT return s } +// SetTags sets the Tags field's value. +func (s *CreateJobTemplateInput) SetTags(v map[string]*string) *CreateJobTemplateInput { + s.Tags = v + return s +} + // Successful create job template requests will return the template JSON. type CreateJobTemplateOutput struct { _ struct{} `type:"structure"` @@ -3428,10 +5009,18 @@ type CreatePresetInput struct { Description *string `locationName:"description" type:"string"` // The name of the preset you are creating. - Name *string `locationName:"name" type:"string"` + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` // Settings for preset - Settings *PresetSettings `locationName:"settings" type:"structure"` + // + // Settings is a required field + Settings *PresetSettings `locationName:"settings" type:"structure" required:"true"` + + // The tags that you want to add to the resource. You can tag resources with + // a key-value pair or with only a key. + Tags map[string]*string `locationName:"tags" type:"map"` } // String returns the string representation @@ -3444,6 +5033,27 @@ func (s CreatePresetInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreatePresetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePresetInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Settings == nil { + invalidParams.Add(request.NewErrParamRequired("Settings")) + } + if s.Settings != nil { + if err := s.Settings.Validate(); err != nil { + invalidParams.AddNested("Settings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCategory sets the Category field's value. func (s *CreatePresetInput) SetCategory(v string) *CreatePresetInput { s.Category = &v @@ -3468,6 +5078,12 @@ func (s *CreatePresetInput) SetSettings(v *PresetSettings) *CreatePresetInput { return s } +// SetTags sets the Tags field's value. +func (s *CreatePresetInput) SetTags(v map[string]*string) *CreatePresetInput { + s.Tags = v + return s +} + // Successful create preset requests will return the preset JSON. type CreatePresetOutput struct { _ struct{} `type:"structure"` @@ -3497,11 +5113,29 @@ func (s *CreatePresetOutput) SetPreset(v *Preset) *CreatePresetOutput { type CreateQueueInput struct { _ struct{} `type:"structure"` - // Optional. A description of the queue you are creating. + // Optional. A description of the queue that you are creating. Description *string `locationName:"description" type:"string"` - // The name of the queue you are creating. - Name *string `locationName:"name" type:"string"` + // The name of the queue that you are creating. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // Optional; default is on-demand. Specifies whether the pricing plan for the + // queue is on-demand or reserved. The pricing plan for the queue determines + // whether you pay on-demand or reserved pricing for the transcoding jobs you + // run through the queue. For reserved queue pricing, you must set up a contract. + // You can create a reserved queue contract through the AWS Elemental MediaConvert + // console. + PricingPlan *string `locationName:"pricingPlan" type:"string" enum:"PricingPlan"` + + // Details about the pricing plan for your reserved queue. Required for reserved + // queues and not applicable to on-demand queues. + ReservationPlanSettings *ReservationPlanSettings `locationName:"reservationPlanSettings" type:"structure"` + + // The tags that you want to add to the resource. You can tag resources with + // a key-value pair or with only a key. + Tags map[string]*string `locationName:"tags" type:"map"` } // String returns the string representation @@ -3514,6 +5148,24 @@ func (s CreateQueueInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateQueueInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateQueueInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.ReservationPlanSettings != nil { + if err := s.ReservationPlanSettings.Validate(); err != nil { + invalidParams.AddNested("ReservationPlanSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetDescription sets the Description field's value. 
func (s *CreateQueueInput) SetDescription(v string) *CreateQueueInput { s.Description = &v @@ -3526,14 +5178,33 @@ func (s *CreateQueueInput) SetName(v string) *CreateQueueInput { return s } -// Successful create queue requests will return the name of the queue you just +// SetPricingPlan sets the PricingPlan field's value. +func (s *CreateQueueInput) SetPricingPlan(v string) *CreateQueueInput { + s.PricingPlan = &v + return s +} + +// SetReservationPlanSettings sets the ReservationPlanSettings field's value. +func (s *CreateQueueInput) SetReservationPlanSettings(v *ReservationPlanSettings) *CreateQueueInput { + s.ReservationPlanSettings = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateQueueInput) SetTags(v map[string]*string) *CreateQueueInput { + s.Tags = v + return s +} + +// Successful create queue requests return the name of the queue that you just // created and information about it. type CreateQueueOutput struct { _ struct{} `type:"structure"` - // MediaConvert jobs are submitted to a queue. Unless specified otherwise jobs - // are submitted to a built-in default queue. User can create additional queues - // to separate the jobs of different categories or priority. + // You can use queues to manage the resources that are available to your AWS + // account for running multiple transcoding jobs at the same time. If you don't + // specify a queue, the service sends all jobs through the default queue. For + // more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/about-resource-allocation-and-job-prioritization.html. Queue *Queue `locationName:"queue" type:"structure"` } @@ -3603,7 +5274,7 @@ type DashIsoGroupSettings struct { // Emit Single File is checked, the fragmentation is internal to a single output // file and it does not cause the creation of many output files as in other // output types. - FragmentLength *int64 `locationName:"fragmentLength" type:"integer"` + FragmentLength *int64 `locationName:"fragmentLength" min:"1" type:"integer"` // Supports HbbTV specification as indicated HbbtvCompliance *string `locationName:"hbbtvCompliance" type:"string" enum:"DashIsoHbbtvCompliance"` @@ -3622,7 +5293,15 @@ type DashIsoGroupSettings struct { // may be longer. When Emit Single File is checked, the segmentation is internal // to a single output file and it does not cause the creation of many output // files as in other output types. - SegmentLength *int64 `locationName:"segmentLength" type:"integer"` + SegmentLength *int64 `locationName:"segmentLength" min:"1" type:"integer"` + + // When you enable Precise segment duration in manifests (writeSegmentTimelineInRepresentation), + // your DASH manifest shows precise segment durations. The segment duration + // information appears inside the SegmentTimeline element, inside SegmentTemplate + // at the Representation level. When this feature isn't enabled, the segment + // durations in your DASH manifest are approximate. The segment duration information + // appears in the duration attribute of the SegmentTemplate element. + WriteSegmentTimelineInRepresentation *string `locationName:"writeSegmentTimelineInRepresentation" type:"string" enum:"DashIsoWriteSegmentTimelineInRepresentation"` } // String returns the string representation @@ -3635,6 +5314,22 @@ func (s DashIsoGroupSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
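The client-side `Validate()` methods added above make it possible to catch missing required fields before a request is sent. A minimal sketch of exercising `CreateQueueInput.Validate()` (import paths, the queue name, and the tag values are illustrative assumptions, not part of this change):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	// Name is a required field, so an empty input fails client-side validation.
	input := &mediaconvert.CreateQueueInput{}
	if err := input.Validate(); err != nil {
		fmt.Println("validation error:", err) // reports the missing Name parameter
	}

	// Setting the required Name (and optional tags) satisfies Validate.
	input.SetName("example-transcode-queue") // hypothetical queue name
	input.SetTags(map[string]*string{"Environment": aws.String("test")})
	if err := input.Validate(); err == nil {
		fmt.Println("input is valid")
	}
}
```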
+func (s *DashIsoGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DashIsoGroupSettings"} + if s.FragmentLength != nil && *s.FragmentLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("FragmentLength", 1)) + } + if s.SegmentLength != nil && *s.SegmentLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("SegmentLength", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetBaseUrl sets the BaseUrl field's value. func (s *DashIsoGroupSettings) SetBaseUrl(v string) *DashIsoGroupSettings { s.BaseUrl = &v @@ -3683,6 +5378,12 @@ func (s *DashIsoGroupSettings) SetSegmentLength(v int64) *DashIsoGroupSettings { return s } +// SetWriteSegmentTimelineInRepresentation sets the WriteSegmentTimelineInRepresentation field's value. +func (s *DashIsoGroupSettings) SetWriteSegmentTimelineInRepresentation(v string) *DashIsoGroupSettings { + s.WriteSegmentTimelineInRepresentation = &v + return s +} + // Settings for deinterlacer type Deinterlacer struct { _ struct{} `type:"structure"` @@ -3849,7 +5550,7 @@ func (s DeletePresetOutput) GoString() string { return s.String() } -// Delete a queue by sending a request with the queue name +// Delete a queue by sending a request with the queue name. type DeleteQueueInput struct { _ struct{} `type:"structure"` @@ -3888,8 +5589,8 @@ func (s *DeleteQueueInput) SetName(v string) *DeleteQueueInput { return s } -// Delete queue requests will return an OK message or error message with an -// empty body. +// Delete queue requests return an OK message or error message with an empty +// body. type DeleteQueueOutput struct { _ struct{} `type:"structure"` } @@ -3913,6 +5614,12 @@ type DescribeEndpointsInput struct { // one time. MaxResults *int64 `locationName:"maxResults" type:"integer"` + // Optional field, defaults to DEFAULT. Specify DEFAULT for this operation to + // return your endpoints if any exist, or to create an endpoint for you and + // return it if one doesn't already exist. Specify GET_ONLY to return your endpoints + // if any exist, or an empty list if none exist. + Mode *string `locationName:"mode" type:"string" enum:"DescribeEndpointsMode"` + // Use this string, provided with the response to a previous request, to request // the next batch of endpoints. NextToken *string `locationName:"nextToken" type:"string"` @@ -3934,6 +5641,12 @@ func (s *DescribeEndpointsInput) SetMaxResults(v int64) *DescribeEndpointsInput return s } +// SetMode sets the Mode field's value. +func (s *DescribeEndpointsInput) SetMode(v string) *DescribeEndpointsInput { + s.Mode = &v + return s +} + // SetNextToken sets the NextToken field's value. func (s *DescribeEndpointsInput) SetNextToken(v string) *DescribeEndpointsInput { s.NextToken = &v @@ -3973,6 +5686,63 @@ func (s *DescribeEndpointsOutput) SetNextToken(v string) *DescribeEndpointsOutpu return s } +// Removes an association between the Amazon Resource Name (ARN) of an AWS Certificate +// Manager (ACM) certificate and an AWS Elemental MediaConvert resource. +type DisassociateCertificateInput struct { + _ struct{} `type:"structure"` + + // The ARN of the ACM certificate that you want to disassociate from your MediaConvert + // resource. 
+ // + // Arn is a required field + Arn *string `locationName:"arn" type:"string" required:"true"` +} + +// String returns the string representation +func (s DisassociateCertificateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateCertificateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DisassociateCertificateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisassociateCertificateInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *DisassociateCertificateInput) SetArn(v string) *DisassociateCertificateInput { + s.Arn = &v + return s +} + +// Successful disassociation of Certificate Manager Amazon Resource Name (ARN) +// with Mediaconvert returns an OK message. +type DisassociateCertificateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisassociateCertificateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateCertificateOutput) GoString() string { + return s.String() +} + // Inserts DVB Network Information Table (NIT) at the specified table repetition // interval. type DvbNitSettings struct { @@ -3983,11 +5753,11 @@ type DvbNitSettings struct { // The network name text placed in the network_name_descriptor inside the Network // Information Table. Maximum length is 256 characters. - NetworkName *string `locationName:"networkName" type:"string"` + NetworkName *string `locationName:"networkName" min:"1" type:"string"` // The number of milliseconds between instances of this table in the output // transport stream. - NitInterval *int64 `locationName:"nitInterval" type:"integer"` + NitInterval *int64 `locationName:"nitInterval" min:"25" type:"integer"` } // String returns the string representation @@ -4000,6 +5770,22 @@ func (s DvbNitSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *DvbNitSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbNitSettings"} + if s.NetworkName != nil && len(*s.NetworkName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NetworkName", 1)) + } + if s.NitInterval != nil && *s.NitInterval < 25 { + invalidParams.Add(request.NewErrParamMinValue("NitInterval", 25)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetNetworkId sets the NetworkId field's value. func (s *DvbNitSettings) SetNetworkId(v int64) *DvbNitSettings { s.NetworkId = &v @@ -4033,15 +5819,15 @@ type DvbSdtSettings struct { // The number of milliseconds between instances of this table in the output // transport stream. - SdtInterval *int64 `locationName:"sdtInterval" type:"integer"` + SdtInterval *int64 `locationName:"sdtInterval" min:"25" type:"integer"` // The service name placed in the service_descriptor in the Service Description // Table. Maximum length is 256 characters. - ServiceName *string `locationName:"serviceName" type:"string"` + ServiceName *string `locationName:"serviceName" min:"1" type:"string"` // The service provider name placed in the service_descriptor in the Service // Description Table. Maximum length is 256 characters. 
- ServiceProviderName *string `locationName:"serviceProviderName" type:"string"` + ServiceProviderName *string `locationName:"serviceProviderName" min:"1" type:"string"` } // String returns the string representation @@ -4054,6 +5840,25 @@ func (s DvbSdtSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *DvbSdtSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbSdtSettings"} + if s.SdtInterval != nil && *s.SdtInterval < 25 { + invalidParams.Add(request.NewErrParamMinValue("SdtInterval", 25)) + } + if s.ServiceName != nil && len(*s.ServiceName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceName", 1)) + } + if s.ServiceProviderName != nil && len(*s.ServiceProviderName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceProviderName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetOutputSdt sets the OutputSdt field's value. func (s *DvbSdtSettings) SetOutputSdt(v string) *DvbSdtSettings { s.OutputSdt = &v @@ -4113,7 +5918,7 @@ type DvbSubDestinationSettings struct { // Font resolution in DPI (dots per inch); default is 96 dpi.All burn-in and // DVB-Sub font settings must match. - FontResolution *int64 `locationName:"fontResolution" type:"integer"` + FontResolution *int64 `locationName:"fontResolution" min:"96" type:"integer"` // A positive integer indicates the exact font size in points. Set to 0 for // automatic font size selection. All burn-in and DVB-Sub font settings must @@ -4151,9 +5956,11 @@ type DvbSubDestinationSettings struct { // burn-in and DVB-Sub font settings must match. ShadowYOffset *int64 `locationName:"shadowYOffset" type:"integer"` - // Controls whether a fixed grid size or proportional font spacing will be used - // to generate the output subtitles bitmap. Only applicable for Teletext inputs - // and DVB-Sub/Burn-in outputs. + // Only applies to jobs with input captions in Teletext or STL formats. Specify + // whether the spacing between letters in your captions is set by the captions + // grid or varies depending on letter width. Choose fixed grid to conform to + // the spacing specified in the captions file more accurately. Choose proportional + // to make the text easier to read if the captions are closed caption. TeletextSpacing *string `locationName:"teletextSpacing" type:"string" enum:"DvbSubtitleTeletextSpacing"` // Specifies the horizontal position of the caption relative to the left side @@ -4185,6 +5992,25 @@ func (s DvbSubDestinationSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *DvbSubDestinationSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbSubDestinationSettings"} + if s.FontResolution != nil && *s.FontResolution < 96 { + invalidParams.Add(request.NewErrParamMinValue("FontResolution", 96)) + } + if s.ShadowXOffset != nil && *s.ShadowXOffset < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("ShadowXOffset", -2.147483648e+09)) + } + if s.ShadowYOffset != nil && *s.ShadowYOffset < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("ShadowYOffset", -2.147483648e+09)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAlignment sets the Alignment field's value. 
func (s *DvbSubDestinationSettings) SetAlignment(v string) *DvbSubDestinationSettings { s.Alignment = &v @@ -4288,7 +6114,7 @@ type DvbSubSourceSettings struct { // When using DVB-Sub with Burn-In or SMPTE-TT, use this PID for the source // content. Unused for DVB-Sub passthrough. All DVB-Sub content is passed through, // regardless of selectors. - Pid *int64 `locationName:"pid" type:"integer"` + Pid *int64 `locationName:"pid" min:"1" type:"integer"` } // String returns the string representation @@ -4301,6 +6127,19 @@ func (s DvbSubSourceSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *DvbSubSourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbSubSourceSettings"} + if s.Pid != nil && *s.Pid < 1 { + invalidParams.Add(request.NewErrParamMinValue("Pid", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetPid sets the Pid field's value. func (s *DvbSubSourceSettings) SetPid(v int64) *DvbSubSourceSettings { s.Pid = &v @@ -4313,7 +6152,7 @@ type DvbTdtSettings struct { // The number of milliseconds between instances of this table in the output // transport stream. - TdtInterval *int64 `locationName:"tdtInterval" type:"integer"` + TdtInterval *int64 `locationName:"tdtInterval" min:"1000" type:"integer"` } // String returns the string representation @@ -4326,6 +6165,19 @@ func (s DvbTdtSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *DvbTdtSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbTdtSettings"} + if s.TdtInterval != nil && *s.TdtInterval < 1000 { + invalidParams.Add(request.NewErrParamMinValue("TdtInterval", 1000)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetTdtInterval sets the TdtInterval field's value. func (s *DvbTdtSettings) SetTdtInterval(v int64) *DvbTdtSettings { s.TdtInterval = &v @@ -4342,7 +6194,7 @@ type Eac3Settings struct { AttenuationControl *string `locationName:"attenuationControl" type:"string" enum:"Eac3AttenuationControl"` // Average bitrate in bits/second. Valid bitrates depend on the coding mode. - Bitrate *int64 `locationName:"bitrate" type:"integer"` + Bitrate *int64 `locationName:"bitrate" min:"64000" type:"integer"` // Specifies the "Bitstream Mode" (bsmod) for the emitted E-AC-3 stream. See // ATSC A/52-2012 (Annex E) for background on these values. @@ -4356,7 +6208,7 @@ type Eac3Settings struct { // Sets the dialnorm for the output. If blank and input audio is Dolby Digital // Plus, dialnorm will be passed through. - Dialnorm *int64 `locationName:"dialnorm" type:"integer"` + Dialnorm *int64 `locationName:"dialnorm" min:"1" type:"integer"` // Enables Dynamic Range Compression that restricts the absolute peak level // for a signal. @@ -4405,7 +6257,7 @@ type Eac3Settings struct { PhaseControl *string `locationName:"phaseControl" type:"string" enum:"Eac3PhaseControl"` // Sample rate in hz. Sample rate is always 48000. - SampleRate *int64 `locationName:"sampleRate" type:"integer"` + SampleRate *int64 `locationName:"sampleRate" min:"48000" type:"integer"` // Stereo downmix preference. Only used for 3/2 coding mode. 
StereoDownmix *string `locationName:"stereoDownmix" type:"string" enum:"Eac3StereoDownmix"` @@ -4429,6 +6281,25 @@ func (s Eac3Settings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *Eac3Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Eac3Settings"} + if s.Bitrate != nil && *s.Bitrate < 64000 { + invalidParams.Add(request.NewErrParamMinValue("Bitrate", 64000)) + } + if s.Dialnorm != nil && *s.Dialnorm < 1 { + invalidParams.Add(request.NewErrParamMinValue("Dialnorm", 1)) + } + if s.SampleRate != nil && *s.SampleRate < 48000 { + invalidParams.Add(request.NewErrParamMinValue("SampleRate", 48000)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAttenuationControl sets the AttenuationControl field's value. func (s *Eac3Settings) SetAttenuationControl(v string) *Eac3Settings { s.AttenuationControl = &v @@ -4566,11 +6437,11 @@ type EmbeddedSourceSettings struct { // Specifies the 608/708 channel number within the video track from which to // extract captions. Unused for passthrough. - Source608ChannelNumber *int64 `locationName:"source608ChannelNumber" type:"integer"` + Source608ChannelNumber *int64 `locationName:"source608ChannelNumber" min:"1" type:"integer"` // Specifies the video track index used for extracting captions. The system // only supports one input video track, so this should always be set to '1'. - Source608TrackNumber *int64 `locationName:"source608TrackNumber" type:"integer"` + Source608TrackNumber *int64 `locationName:"source608TrackNumber" min:"1" type:"integer"` } // String returns the string representation @@ -4583,6 +6454,22 @@ func (s EmbeddedSourceSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *EmbeddedSourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EmbeddedSourceSettings"} + if s.Source608ChannelNumber != nil && *s.Source608ChannelNumber < 1 { + invalidParams.Add(request.NewErrParamMinValue("Source608ChannelNumber", 1)) + } + if s.Source608TrackNumber != nil && *s.Source608TrackNumber < 1 { + invalidParams.Add(request.NewErrParamMinValue("Source608TrackNumber", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetConvert608To708 sets the Convert608To708 field's value. func (s *EmbeddedSourceSettings) SetConvert608To708(v string) *EmbeddedSourceSettings { s.Convert608To708 = &v @@ -4601,7 +6488,7 @@ func (s *EmbeddedSourceSettings) SetSource608TrackNumber(v int64) *EmbeddedSourc return s } -// Describes account specific API endpoint +// Describes an account-specific API endpoint. type Endpoint struct { _ struct{} `type:"structure"` @@ -4691,7 +6578,7 @@ type FileSourceSettings struct { // External caption file used for loading captions. Accepted file extensions // are 'scc', 'ttml', 'dfxp', 'stl', 'srt', and 'smi'. - SourceFile *string `locationName:"sourceFile" type:"string"` + SourceFile *string `locationName:"sourceFile" min:"14" type:"string"` // Specifies a time delta in seconds to offset the captions from the source // file. @@ -4708,6 +6595,22 @@ func (s FileSourceSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *FileSourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FileSourceSettings"} + if s.SourceFile != nil && len(*s.SourceFile) < 14 { + invalidParams.Add(request.NewErrParamMinLen("SourceFile", 14)) + } + if s.TimeDelta != nil && *s.TimeDelta < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("TimeDelta", -2.147483648e+09)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetConvert608To708 sets the Convert608To708 field's value. func (s *FileSourceSettings) SetConvert608To708(v string) *FileSourceSettings { s.Convert608To708 = &v @@ -4737,7 +6640,7 @@ type FrameCaptureSettings struct { // 1/3 frame per second) will capture the first frame, then 1 frame every 3s. // Files will be named as filename.n.jpg where n is the 0-based sequence number // of each Capture. - FramerateDenominator *int64 `locationName:"framerateDenominator" type:"integer"` + FramerateDenominator *int64 `locationName:"framerateDenominator" min:"1" type:"integer"` // Frame capture will encode the first frame of the output stream, then one // frame every framerateDenominator/framerateNumerator seconds. For example, @@ -4745,13 +6648,13 @@ type FrameCaptureSettings struct { // 1/3 frame per second) will capture the first frame, then 1 frame every 3s. // Files will be named as filename.NNNNNNN.jpg where N is the 0-based frame // sequence number zero padded to 7 decimal places. - FramerateNumerator *int64 `locationName:"framerateNumerator" type:"integer"` + FramerateNumerator *int64 `locationName:"framerateNumerator" min:"1" type:"integer"` // Maximum number of captures (encoded jpg output files). - MaxCaptures *int64 `locationName:"maxCaptures" type:"integer"` + MaxCaptures *int64 `locationName:"maxCaptures" min:"1" type:"integer"` // JPEG Quality - a higher value equals higher quality. - Quality *int64 `locationName:"quality" type:"integer"` + Quality *int64 `locationName:"quality" min:"1" type:"integer"` } // String returns the string representation @@ -4764,6 +6667,28 @@ func (s FrameCaptureSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *FrameCaptureSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FrameCaptureSettings"} + if s.FramerateDenominator != nil && *s.FramerateDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateDenominator", 1)) + } + if s.FramerateNumerator != nil && *s.FramerateNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateNumerator", 1)) + } + if s.MaxCaptures != nil && *s.MaxCaptures < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxCaptures", 1)) + } + if s.Quality != nil && *s.Quality < 1 { + invalidParams.Add(request.NewErrParamMinValue("Quality", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetFramerateDenominator sets the FramerateDenominator field's value. func (s *FrameCaptureSettings) SetFramerateDenominator(v int64) *FrameCaptureSettings { s.FramerateDenominator = &v @@ -4981,11 +6906,11 @@ func (s *GetPresetOutput) SetPreset(v *Preset) *GetPresetOutput { return s } -// Query a queue by sending a request with the queue name. +// Get information about a queue by sending a request with the queue name. type GetQueueInput struct { _ struct{} `type:"structure"` - // The name of the queue. + // The name of the queue that you want information about. 
// // Name is a required field Name *string `location:"uri" locationName:"name" type:"string" required:"true"` @@ -5020,13 +6945,15 @@ func (s *GetQueueInput) SetName(v string) *GetQueueInput { return s } -// Successful get queue requests will return an OK message and the queue JSON. +// Successful get queue requests return an OK message and information about +// the queue in JSON. type GetQueueOutput struct { _ struct{} `type:"structure"` - // MediaConvert jobs are submitted to a queue. Unless specified otherwise jobs - // are submitted to a built-in default queue. User can create additional queues - // to separate the jobs of different categories or priority. + // You can use queues to manage the resources that are available to your AWS + // account for running multiple transcoding jobs at the same time. If you don't + // specify a queue, the service sends all jobs through the default queue. For + // more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/about-resource-allocation-and-job-prioritization.html. Queue *Queue `locationName:"queue" type:"structure"` } @@ -5046,6 +6973,67 @@ func (s *GetQueueOutput) SetQueue(v *Queue) *GetQueueOutput { return s } +// Settings for quality-defined variable bitrate encoding with the H.264 codec. +// Required when you set Rate control mode to QVBR. Not valid when you set Rate +// control mode to a value other than QVBR, or when you don't define Rate control +// mode. +type H264QvbrSettings struct { + _ struct{} `type:"structure"` + + // Use this setting only when Rate control mode is QVBR and Quality tuning level + // is Multi-pass HQ. For Max average bitrate values suited to the complexity + // of your input video, the service limits the average bitrate of the video + // part of this output to the value you choose. That is, the total size of the + // video element is less than or equal to the value you set multiplied by the + // number of seconds of encoded output. + MaxAverageBitrate *int64 `locationName:"maxAverageBitrate" min:"1000" type:"integer"` + + // Required when you use QVBR rate control mode. That is, when you specify qvbrSettings + // within h264Settings. Specify the target quality level for this output, from + // 1 to 10. Use higher numbers for greater quality. Level 10 results in nearly + // lossless compression. The quality level for most broadcast-quality transcodes + // is between 6 and 9. + QvbrQualityLevel *int64 `locationName:"qvbrQualityLevel" min:"1" type:"integer"` +} + +// String returns the string representation +func (s H264QvbrSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s H264QvbrSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *H264QvbrSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "H264QvbrSettings"} + if s.MaxAverageBitrate != nil && *s.MaxAverageBitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("MaxAverageBitrate", 1000)) + } + if s.QvbrQualityLevel != nil && *s.QvbrQualityLevel < 1 { + invalidParams.Add(request.NewErrParamMinValue("QvbrQualityLevel", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxAverageBitrate sets the MaxAverageBitrate field's value. +func (s *H264QvbrSettings) SetMaxAverageBitrate(v int64) *H264QvbrSettings { + s.MaxAverageBitrate = &v + return s +} + +// SetQvbrQualityLevel sets the QvbrQualityLevel field's value. 
+func (s *H264QvbrSettings) SetQvbrQualityLevel(v int64) *H264QvbrSettings { + s.QvbrQualityLevel = &v + return s +} + // Required when you set (Codec) under (VideoDescription)>(CodecSettings) to // the value H_264. type H264Settings struct { @@ -5055,11 +7043,9 @@ type H264Settings struct { // quality. AdaptiveQuantization *string `locationName:"adaptiveQuantization" type:"string" enum:"H264AdaptiveQuantization"` - // Average bitrate in bits/second. Required for VBR, CBR, and ABR. Five megabits - // can be entered as 5000000 or 5m. Five hundred kilobits can be entered as - // 500000 or 0.5m. For MS Smooth outputs, bitrates must be unique when rounded - // down to the nearest multiple of 1000. - Bitrate *int64 `locationName:"bitrate" type:"integer"` + // Average bitrate in bits/second. Required for VBR and CBR. For MS Smooth outputs, + // bitrates must be unique when rounded down to the nearest multiple of 1000. + Bitrate *int64 `locationName:"bitrate" min:"1000" type:"integer"` // H.264 Level. CodecLevel *string `locationName:"codecLevel" type:"string" enum:"H264CodecLevel"` @@ -5068,6 +7054,13 @@ type H264Settings struct { // AVC-I License. CodecProfile *string `locationName:"codecProfile" type:"string" enum:"H264CodecProfile"` + // Choose Adaptive to improve subjective video quality for high-motion content. + // This will cause the service to use fewer B-frames (which infer information + // based on other frames) for high-motion portions of the video and more B-frames + // for low-motion portions. The maximum number of B-frames is limited by the + // value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames). + DynamicSubGop *string `locationName:"dynamicSubGop" type:"string" enum:"H264DynamicSubGop"` + // Entropy encoding mode. Use CABAC (must be in Main or High profile) or CAVLC. EntropyEncoding *string `locationName:"entropyEncoding" type:"string" enum:"H264EntropyEncoding"` @@ -5077,9 +7070,17 @@ type H264Settings struct { // Adjust quantization within each frame to reduce flicker or 'pop' on I-frames. FlickerAdaptiveQuantization *string `locationName:"flickerAdaptiveQuantization" type:"string" enum:"H264FlickerAdaptiveQuantization"` - // Using the API, set FramerateControl to INITIALIZE_FROM_SOURCE if you want - // the service to use the framerate from the input. Using the console, do this - // by choosing INITIALIZE_FROM_SOURCE for Framerate. + // If you are using the console, use the Framerate setting to specify the framerate + // for this output. If you want to keep the same framerate as the input video, + // choose Follow source. If you want to do framerate conversion, choose a framerate + // from the dropdown list or choose Custom. The framerates shown in the dropdown + // list are decimal approximations of fractions. If you choose Custom, specify + // your framerate as a fraction. If you are creating your transcoding job specification + // as a JSON file without the console, use FramerateControl to specify which + // value the service uses for the framerate for this output. Choose INITIALIZE_FROM_SOURCE + // if you want the service to use the framerate from the input. Choose SPECIFIED + // if you want the service to use the framerate you specify in the settings + // FramerateNumerator and FramerateDenominator. FramerateControl *string `locationName:"framerateControl" type:"string" enum:"H264FramerateControl"` // When set to INTERPOLATE, produces smoother motion during framerate conversion. 
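Putting the QVBR pieces together: a hedged sketch of an H.264 configuration that uses quality-defined variable bitrate, based on the ranges documented above (the specific bitrate and quality values are illustrative):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	// Quality level 7 sits in the 6-9 range suggested above for broadcast-quality
	// transcodes; both values respect the documented minimums.
	qvbr := &mediaconvert.H264QvbrSettings{}
	qvbr.SetQvbrQualityLevel(7)
	qvbr.SetMaxAverageBitrate(4000000)

	h264 := &mediaconvert.H264Settings{}
	h264.SetRateControlMode("QVBR") // quality-defined variable bitrate
	h264.SetMaxBitrate(6000000)     // required when rate control mode is QVBR
	h264.SetQvbrSettings(qvbr)

	if err := h264.Validate(); err != nil {
		fmt.Println("invalid H.264 settings:", err)
		return
	}
	fmt.Println("H.264 QVBR settings validated")
}
```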
@@ -5091,11 +7092,11 @@ type H264Settings struct { // example, use 1001 for the value of FramerateDenominator. When you use the // console for transcode jobs that use framerate conversion, provide the value // as a decimal number for Framerate. In this example, specify 23.976. - FramerateDenominator *int64 `locationName:"framerateDenominator" type:"integer"` + FramerateDenominator *int64 `locationName:"framerateDenominator" min:"1" type:"integer"` // Framerate numerator - framerate is a fraction, e.g. 24000 / 1001 = 23.976 // fps. - FramerateNumerator *int64 `locationName:"framerateNumerator" type:"integer"` + FramerateNumerator *int64 `locationName:"framerateNumerator" min:"1" type:"integer"` // If enable, use reference B frames for GOP structures that have B frames > // 1. @@ -5117,27 +7118,26 @@ type H264Settings struct { // Percentage of the buffer that should initially be filled (HRD buffer model). HrdBufferInitialFillPercentage *int64 `locationName:"hrdBufferInitialFillPercentage" type:"integer"` - // Size of buffer (HRD buffer model). Five megabits can be entered as 5000000 - // or 5m. Five hundred kilobits can be entered as 500000 or 0.5m. + // Size of buffer (HRD buffer model) in bits. For example, enter five megabits + // as 5000000. HrdBufferSize *int64 `locationName:"hrdBufferSize" type:"integer"` // Use Interlace mode (InterlaceMode) to choose the scan line type for the output. // * Top Field First (TOP_FIELD) and Bottom Field First (BOTTOM_FIELD) produce // interlaced output with the entire output having the same field polarity (top - // or bottom first). * Follow, Default Top (FOLLOw_TOP_FIELD) and Follow, Default + // or bottom first). * Follow, Default Top (FOLLOW_TOP_FIELD) and Follow, Default // Bottom (FOLLOW_BOTTOM_FIELD) use the same field polarity as the source. Therefore, - // behavior depends on the input scan type. - If the source is interlaced, the - // output will be interlaced with the same polarity as the source (it will follow - // the source). The output could therefore be a mix of "top field first" and - // "bottom field first". - If the source is progressive, the output will be - // interlaced with "top field first" or "bottom field first" polarity, depending + // behavior depends on the input scan type, as follows. - If the source is interlaced, + // the output will be interlaced with the same polarity as the source (it will + // follow the source). The output could therefore be a mix of "top field first" + // and "bottom field first". - If the source is progressive, the output will + // be interlaced with "top field first" or "bottom field first" polarity, depending // on which of the Follow options you chose. InterlaceMode *string `locationName:"interlaceMode" type:"string" enum:"H264InterlaceMode"` - // Maximum bitrate in bits/second (for VBR mode only). Five megabits can be - // entered as 5000000 or 5m. Five hundred kilobits can be entered as 500000 - // or 0.5m. - MaxBitrate *int64 `locationName:"maxBitrate" type:"integer"` + // Maximum bitrate in bits/second. For example, enter five megabits per second + // as 5000000. Required when Rate control mode is QVBR. + MaxBitrate *int64 `locationName:"maxBitrate" min:"1000" type:"integer"` // Enforces separation between repeated (cadence) I-frames and I-frames inserted // by Scene Change Detection. If a scene change I-frame is within I-interval @@ -5153,7 +7153,7 @@ type H264Settings struct { // Number of reference frames to use. 
The encoder may use more than requested // if using B-frames and/or interlaced encoding. - NumberReferenceFrames *int64 `locationName:"numberReferenceFrames" type:"integer"` + NumberReferenceFrames *int64 `locationName:"numberReferenceFrames" min:"1" type:"integer"` // Using the API, enable ParFollowSource if you want the service to use the // pixel aspect ratio from the input. Using the console, do this by choosing @@ -5161,18 +7161,24 @@ type H264Settings struct { ParControl *string `locationName:"parControl" type:"string" enum:"H264ParControl"` // Pixel Aspect Ratio denominator. - ParDenominator *int64 `locationName:"parDenominator" type:"integer"` + ParDenominator *int64 `locationName:"parDenominator" min:"1" type:"integer"` // Pixel Aspect Ratio numerator. - ParNumerator *int64 `locationName:"parNumerator" type:"integer"` + ParNumerator *int64 `locationName:"parNumerator" min:"1" type:"integer"` // Use Quality tuning level (H264QualityTuningLevel) to specifiy whether to // use fast single-pass, high-quality singlepass, or high-quality multipass // video encoding. QualityTuningLevel *string `locationName:"qualityTuningLevel" type:"string" enum:"H264QualityTuningLevel"` - // Rate control mode. CQ uses constant quantizer (qp), ABR (average bitrate) - // does not write HRD parameters. + // Settings for quality-defined variable bitrate encoding with the H.264 codec. + // Required when you set Rate control mode to QVBR. Not valid when you set Rate + // control mode to a value other than QVBR, or when you don't define Rate control + // mode. + QvbrSettings *H264QvbrSettings `locationName:"qvbrSettings" type:"structure"` + + // Use this setting to specify whether this output has a variable bitrate (VBR), + // constant bitrate (CBR) or quality-defined variable bitrate (QVBR). RateControlMode *string `locationName:"rateControlMode" type:"string" enum:"H264RateControlMode"` // Places a PPS header on each encoded picture, even if repeated. @@ -5184,7 +7190,7 @@ type H264Settings struct { // Number of slices per picture. Must be less than or equal to the number of // macroblock rows for progressive pictures, and less than or equal to half // the number of macroblock rows for interlaced pictures. - Slices *int64 `locationName:"slices" type:"integer"` + Slices *int64 `locationName:"slices" min:"1" type:"integer"` // Enables Slow PAL rate conversion. 23.976fps and 24fps input is relabeled // as 25fps, and audio is sped up correspondingly. @@ -5228,6 +7234,45 @@ func (s H264Settings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *H264Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "H264Settings"} + if s.Bitrate != nil && *s.Bitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("Bitrate", 1000)) + } + if s.FramerateDenominator != nil && *s.FramerateDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateDenominator", 1)) + } + if s.FramerateNumerator != nil && *s.FramerateNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateNumerator", 1)) + } + if s.MaxBitrate != nil && *s.MaxBitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("MaxBitrate", 1000)) + } + if s.NumberReferenceFrames != nil && *s.NumberReferenceFrames < 1 { + invalidParams.Add(request.NewErrParamMinValue("NumberReferenceFrames", 1)) + } + if s.ParDenominator != nil && *s.ParDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParDenominator", 1)) + } + if s.ParNumerator != nil && *s.ParNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParNumerator", 1)) + } + if s.Slices != nil && *s.Slices < 1 { + invalidParams.Add(request.NewErrParamMinValue("Slices", 1)) + } + if s.QvbrSettings != nil { + if err := s.QvbrSettings.Validate(); err != nil { + invalidParams.AddNested("QvbrSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAdaptiveQuantization sets the AdaptiveQuantization field's value. func (s *H264Settings) SetAdaptiveQuantization(v string) *H264Settings { s.AdaptiveQuantization = &v @@ -5252,6 +7297,12 @@ func (s *H264Settings) SetCodecProfile(v string) *H264Settings { return s } +// SetDynamicSubGop sets the DynamicSubGop field's value. +func (s *H264Settings) SetDynamicSubGop(v string) *H264Settings { + s.DynamicSubGop = &v + return s +} + // SetEntropyEncoding sets the EntropyEncoding field's value. func (s *H264Settings) SetEntropyEncoding(v string) *H264Settings { s.EntropyEncoding = &v @@ -5384,6 +7435,12 @@ func (s *H264Settings) SetQualityTuningLevel(v string) *H264Settings { return s } +// SetQvbrSettings sets the QvbrSettings field's value. +func (s *H264Settings) SetQvbrSettings(v *H264QvbrSettings) *H264Settings { + s.QvbrSettings = v + return s +} + // SetRateControlMode sets the RateControlMode field's value. func (s *H264Settings) SetRateControlMode(v string) *H264Settings { s.RateControlMode = &v @@ -5450,6 +7507,67 @@ func (s *H264Settings) SetUnregisteredSeiTimecode(v string) *H264Settings { return s } +// Settings for quality-defined variable bitrate encoding with the H.265 codec. +// Required when you set Rate control mode to QVBR. Not valid when you set Rate +// control mode to a value other than QVBR, or when you don't define Rate control +// mode. +type H265QvbrSettings struct { + _ struct{} `type:"structure"` + + // Use this setting only when Rate control mode is QVBR and Quality tuning level + // is Multi-pass HQ. For Max average bitrate values suited to the complexity + // of your input video, the service limits the average bitrate of the video + // part of this output to the value you choose. That is, the total size of the + // video element is less than or equal to the value you set multiplied by the + // number of seconds of encoded output. + MaxAverageBitrate *int64 `locationName:"maxAverageBitrate" min:"1000" type:"integer"` + + // Required when you use QVBR rate control mode. That is, when you specify qvbrSettings + // within h265Settings. 
Specify the target quality level for this output, from + // 1 to 10. Use higher numbers for greater quality. Level 10 results in nearly + // lossless compression. The quality level for most broadcast-quality transcodes + // is between 6 and 9. + QvbrQualityLevel *int64 `locationName:"qvbrQualityLevel" min:"1" type:"integer"` +} + +// String returns the string representation +func (s H265QvbrSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s H265QvbrSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *H265QvbrSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "H265QvbrSettings"} + if s.MaxAverageBitrate != nil && *s.MaxAverageBitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("MaxAverageBitrate", 1000)) + } + if s.QvbrQualityLevel != nil && *s.QvbrQualityLevel < 1 { + invalidParams.Add(request.NewErrParamMinValue("QvbrQualityLevel", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxAverageBitrate sets the MaxAverageBitrate field's value. +func (s *H265QvbrSettings) SetMaxAverageBitrate(v int64) *H265QvbrSettings { + s.MaxAverageBitrate = &v + return s +} + +// SetQvbrQualityLevel sets the QvbrQualityLevel field's value. +func (s *H265QvbrSettings) SetQvbrQualityLevel(v int64) *H265QvbrSettings { + s.QvbrQualityLevel = &v + return s +} + // Settings for H265 codec type H265Settings struct { _ struct{} `type:"structure"` @@ -5462,11 +7580,9 @@ type H265Settings struct { // Log Gamma (HLG) Electro-Optical Transfer Function (EOTF). AlternateTransferFunctionSei *string `locationName:"alternateTransferFunctionSei" type:"string" enum:"H265AlternateTransferFunctionSei"` - // Average bitrate in bits/second. Required for VBR, CBR, and ABR. Five megabits - // can be entered as 5000000 or 5m. Five hundred kilobits can be entered as - // 500000 or 0.5m. For MS Smooth outputs, bitrates must be unique when rounded - // down to the nearest multiple of 1000. - Bitrate *int64 `locationName:"bitrate" type:"integer"` + // Average bitrate in bits/second. Required for VBR and CBR. For MS Smooth outputs, + // bitrates must be unique when rounded down to the nearest multiple of 1000. + Bitrate *int64 `locationName:"bitrate" min:"1000" type:"integer"` // H.265 Level. CodecLevel *string `locationName:"codecLevel" type:"string" enum:"H265CodecLevel"` @@ -5476,23 +7592,38 @@ type H265Settings struct { // with High Tier. 4:2:2 profiles are only available with the HEVC 4:2:2 License. CodecProfile *string `locationName:"codecProfile" type:"string" enum:"H265CodecProfile"` + // Choose Adaptive to improve subjective video quality for high-motion content. + // This will cause the service to use fewer B-frames (which infer information + // based on other frames) for high-motion portions of the video and more B-frames + // for low-motion portions. The maximum number of B-frames is limited by the + // value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames). + DynamicSubGop *string `locationName:"dynamicSubGop" type:"string" enum:"H265DynamicSubGop"` + // Adjust quantization within each frame to reduce flicker or 'pop' on I-frames. 
FlickerAdaptiveQuantization *string `locationName:"flickerAdaptiveQuantization" type:"string" enum:"H265FlickerAdaptiveQuantization"` - // Using the API, set FramerateControl to INITIALIZE_FROM_SOURCE if you want - // the service to use the framerate from the input. Using the console, do this - // by choosing INITIALIZE_FROM_SOURCE for Framerate. + // If you are using the console, use the Framerate setting to specify the framerate + // for this output. If you want to keep the same framerate as the input video, + // choose Follow source. If you want to do framerate conversion, choose a framerate + // from the dropdown list or choose Custom. The framerates shown in the dropdown + // list are decimal approximations of fractions. If you choose Custom, specify + // your framerate as a fraction. If you are creating your transcoding job sepecification + // as a JSON file without the console, use FramerateControl to specify which + // value the service uses for the framerate for this output. Choose INITIALIZE_FROM_SOURCE + // if you want the service to use the framerate from the input. Choose SPECIFIED + // if you want the service to use the framerate you specify in the settings + // FramerateNumerator and FramerateDenominator. FramerateControl *string `locationName:"framerateControl" type:"string" enum:"H265FramerateControl"` // When set to INTERPOLATE, produces smoother motion during framerate conversion. FramerateConversionAlgorithm *string `locationName:"framerateConversionAlgorithm" type:"string" enum:"H265FramerateConversionAlgorithm"` // Framerate denominator. - FramerateDenominator *int64 `locationName:"framerateDenominator" type:"integer"` + FramerateDenominator *int64 `locationName:"framerateDenominator" min:"1" type:"integer"` // Framerate numerator - framerate is a fraction, e.g. 24000 / 1001 = 23.976 // fps. - FramerateNumerator *int64 `locationName:"framerateNumerator" type:"integer"` + FramerateNumerator *int64 `locationName:"framerateNumerator" min:"1" type:"integer"` // If enable, use reference B frames for GOP structures that have B frames > // 1. @@ -5514,14 +7645,14 @@ type H265Settings struct { // Percentage of the buffer that should initially be filled (HRD buffer model). HrdBufferInitialFillPercentage *int64 `locationName:"hrdBufferInitialFillPercentage" type:"integer"` - // Size of buffer (HRD buffer model). Five megabits can be entered as 5000000 - // or 5m. Five hundred kilobits can be entered as 500000 or 0.5m. + // Size of buffer (HRD buffer model) in bits. For example, enter five megabits + // as 5000000. HrdBufferSize *int64 `locationName:"hrdBufferSize" type:"integer"` // Use Interlace mode (InterlaceMode) to choose the scan line type for the output. // * Top Field First (TOP_FIELD) and Bottom Field First (BOTTOM_FIELD) produce // interlaced output with the entire output having the same field polarity (top - // or bottom first). * Follow, Default Top (FOLLOw_TOP_FIELD) and Follow, Default + // or bottom first). * Follow, Default Top (FOLLOW_TOP_FIELD) and Follow, Default // Bottom (FOLLOW_BOTTOM_FIELD) use the same field polarity as the source. Therefore, // behavior depends on the input scan type. - If the source is interlaced, the // output will be interlaced with the same polarity as the source (it will follow @@ -5531,10 +7662,9 @@ type H265Settings struct { // on which of the Follow options you chose. InterlaceMode *string `locationName:"interlaceMode" type:"string" enum:"H265InterlaceMode"` - // Maximum bitrate in bits/second (for VBR mode only). 
Five megabits can be - // entered as 5000000 or 5m. Five hundred kilobits can be entered as 500000 - // or 0.5m. - MaxBitrate *int64 `locationName:"maxBitrate" type:"integer"` + // Maximum bitrate in bits/second. For example, enter five megabits per second + // as 5000000. Required when Rate control mode is QVBR. + MaxBitrate *int64 `locationName:"maxBitrate" min:"1000" type:"integer"` // Enforces separation between repeated (cadence) I-frames and I-frames inserted // by Scene Change Detection. If a scene change I-frame is within I-interval @@ -5550,7 +7680,7 @@ type H265Settings struct { // Number of reference frames to use. The encoder may use more than requested // if using B-frames and/or interlaced encoding. - NumberReferenceFrames *int64 `locationName:"numberReferenceFrames" type:"integer"` + NumberReferenceFrames *int64 `locationName:"numberReferenceFrames" min:"1" type:"integer"` // Using the API, enable ParFollowSource if you want the service to use the // pixel aspect ratio from the input. Using the console, do this by choosing @@ -5558,18 +7688,24 @@ type H265Settings struct { ParControl *string `locationName:"parControl" type:"string" enum:"H265ParControl"` // Pixel Aspect Ratio denominator. - ParDenominator *int64 `locationName:"parDenominator" type:"integer"` + ParDenominator *int64 `locationName:"parDenominator" min:"1" type:"integer"` // Pixel Aspect Ratio numerator. - ParNumerator *int64 `locationName:"parNumerator" type:"integer"` + ParNumerator *int64 `locationName:"parNumerator" min:"1" type:"integer"` // Use Quality tuning level (H265QualityTuningLevel) to specifiy whether to // use fast single-pass, high-quality singlepass, or high-quality multipass // video encoding. QualityTuningLevel *string `locationName:"qualityTuningLevel" type:"string" enum:"H265QualityTuningLevel"` - // Rate control mode. CQ uses constant quantizer (qp), ABR (average bitrate) - // does not write HRD parameters. + // Settings for quality-defined variable bitrate encoding with the H.265 codec. + // Required when you set Rate control mode to QVBR. Not valid when you set Rate + // control mode to a value other than QVBR, or when you don't define Rate control + // mode. + QvbrSettings *H265QvbrSettings `locationName:"qvbrSettings" type:"structure"` + + // Use this setting to specify whether this output has a variable bitrate (VBR), + // constant bitrate (CBR) or quality-defined variable bitrate (QVBR). RateControlMode *string `locationName:"rateControlMode" type:"string" enum:"H265RateControlMode"` // Specify Sample Adaptive Offset (SAO) filter strength. Adaptive mode dynamically @@ -5582,7 +7718,7 @@ type H265Settings struct { // Number of slices per picture. Must be less than or equal to the number of // macroblock rows for progressive pictures, and less than or equal to half // the number of macroblock rows for interlaced pictures. - Slices *int64 `locationName:"slices" type:"integer"` + Slices *int64 `locationName:"slices" min:"1" type:"integer"` // Enables Slow PAL rate conversion. 23.976fps and 24fps input is relabeled // as 25fps, and audio is sped up correspondingly. @@ -5621,6 +7757,12 @@ type H265Settings struct { // Inserts timecode for each frame as 4 bytes of an unregistered SEI message. 
UnregisteredSeiTimecode *string `locationName:"unregisteredSeiTimecode" type:"string" enum:"H265UnregisteredSeiTimecode"` + + // If HVC1, output that is H.265 will be marked as HVC1 and adhere to the ISO-IECJTC1-SC29_N13798_Text_ISOIEC_FDIS_14496-15_3rd_E + // spec which states that parameter set NAL units will be stored in the sample + // headers but not in the samples directly. If HEV1, then H.265 will be marked + // as HEV1 and parameter set NAL units will be written into the samples. + WriteMp4PackagingType *string `locationName:"writeMp4PackagingType" type:"string" enum:"H265WriteMp4PackagingType"` } // String returns the string representation @@ -5633,6 +7775,45 @@ func (s H265Settings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *H265Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "H265Settings"} + if s.Bitrate != nil && *s.Bitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("Bitrate", 1000)) + } + if s.FramerateDenominator != nil && *s.FramerateDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateDenominator", 1)) + } + if s.FramerateNumerator != nil && *s.FramerateNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateNumerator", 1)) + } + if s.MaxBitrate != nil && *s.MaxBitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("MaxBitrate", 1000)) + } + if s.NumberReferenceFrames != nil && *s.NumberReferenceFrames < 1 { + invalidParams.Add(request.NewErrParamMinValue("NumberReferenceFrames", 1)) + } + if s.ParDenominator != nil && *s.ParDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParDenominator", 1)) + } + if s.ParNumerator != nil && *s.ParNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParNumerator", 1)) + } + if s.Slices != nil && *s.Slices < 1 { + invalidParams.Add(request.NewErrParamMinValue("Slices", 1)) + } + if s.QvbrSettings != nil { + if err := s.QvbrSettings.Validate(); err != nil { + invalidParams.AddNested("QvbrSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAdaptiveQuantization sets the AdaptiveQuantization field's value. func (s *H265Settings) SetAdaptiveQuantization(v string) *H265Settings { s.AdaptiveQuantization = &v @@ -5663,6 +7844,12 @@ func (s *H265Settings) SetCodecProfile(v string) *H265Settings { return s } +// SetDynamicSubGop sets the DynamicSubGop field's value. +func (s *H265Settings) SetDynamicSubGop(v string) *H265Settings { + s.DynamicSubGop = &v + return s +} + // SetFlickerAdaptiveQuantization sets the FlickerAdaptiveQuantization field's value. func (s *H265Settings) SetFlickerAdaptiveQuantization(v string) *H265Settings { s.FlickerAdaptiveQuantization = &v @@ -5783,6 +7970,12 @@ func (s *H265Settings) SetQualityTuningLevel(v string) *H265Settings { return s } +// SetQvbrSettings sets the QvbrSettings field's value. +func (s *H265Settings) SetQvbrSettings(v *H265QvbrSettings) *H265Settings { + s.QvbrSettings = v + return s +} + // SetRateControlMode sets the RateControlMode field's value. func (s *H265Settings) SetRateControlMode(v string) *H265Settings { s.RateControlMode = &v @@ -5849,31 +8042,40 @@ func (s *H265Settings) SetUnregisteredSeiTimecode(v string) *H265Settings { return s } -// Use the HDR master display (Hdr10Metadata) settings to provide values for -// HDR color. 
These values vary depending on the input video and must be provided -// by a color grader. Range is 0 to 50,000, each increment represents 0.00002 -// in CIE1931 color coordinate. +// SetWriteMp4PackagingType sets the WriteMp4PackagingType field's value. +func (s *H265Settings) SetWriteMp4PackagingType(v string) *H265Settings { + s.WriteMp4PackagingType = &v + return s +} + +// Use the HDR master display (Hdr10Metadata) settings to correct HDR metadata +// or to provide missing metadata. These values vary depending on the input +// video and must be provided by a color grader. Range is 0 to 50,000, each +// increment represents 0.00002 in CIE1931 color coordinate. Note that these +// settings are not color correction. Note that if you are creating HDR outputs +// inside of an HLS CMAF package, to comply with the Apple specification, you +// must use the HVC1 for H.265 setting. type Hdr10Metadata struct { _ struct{} `type:"structure"` - // HDR Master Display Information comes from the color grader and the color - // grading tools. Range is 0 to 50,000, each increment represents 0.00002 in - // CIE1931 color coordinate. + // HDR Master Display Information must be provided by a color grader, using + // color grading tools. Range is 0 to 50,000, each increment represents 0.00002 + // in CIE1931 color coordinate. Note that this setting is not for color correction. BluePrimaryX *int64 `locationName:"bluePrimaryX" type:"integer"` - // HDR Master Display Information comes from the color grader and the color - // grading tools. Range is 0 to 50,000, each increment represents 0.00002 in - // CIE1931 color coordinate. + // HDR Master Display Information must be provided by a color grader, using + // color grading tools. Range is 0 to 50,000, each increment represents 0.00002 + // in CIE1931 color coordinate. Note that this setting is not for color correction. BluePrimaryY *int64 `locationName:"bluePrimaryY" type:"integer"` - // HDR Master Display Information comes from the color grader and the color - // grading tools. Range is 0 to 50,000, each increment represents 0.00002 in - // CIE1931 color coordinate. + // HDR Master Display Information must be provided by a color grader, using + // color grading tools. Range is 0 to 50,000, each increment represents 0.00002 + // in CIE1931 color coordinate. Note that this setting is not for color correction. GreenPrimaryX *int64 `locationName:"greenPrimaryX" type:"integer"` - // HDR Master Display Information comes from the color grader and the color - // grading tools. Range is 0 to 50,000, each increment represents 0.00002 in - // CIE1931 color coordinate. + // HDR Master Display Information must be provided by a color grader, using + // color grading tools. Range is 0 to 50,000, each increment represents 0.00002 + // in CIE1931 color coordinate. Note that this setting is not for color correction. GreenPrimaryY *int64 `locationName:"greenPrimaryY" type:"integer"` // Maximum light level among all samples in the coded video sequence, in units @@ -5892,24 +8094,24 @@ type Hdr10Metadata struct { // per square meter MinLuminance *int64 `locationName:"minLuminance" type:"integer"` - // HDR Master Display Information comes from the color grader and the color - // grading tools. Range is 0 to 50,000, each increment represents 0.00002 in - // CIE1931 color coordinate. + // HDR Master Display Information must be provided by a color grader, using + // color grading tools. Range is 0 to 50,000, each increment represents 0.00002 + // in CIE1931 color coordinate. 
Note that this setting is not for color correction. RedPrimaryX *int64 `locationName:"redPrimaryX" type:"integer"` - // HDR Master Display Information comes from the color grader and the color - // grading tools. Range is 0 to 50,000, each increment represents 0.00002 in - // CIE1931 color coordinate. + // HDR Master Display Information must be provided by a color grader, using + // color grading tools. Range is 0 to 50,000, each increment represents 0.00002 + // in CIE1931 color coordinate. Note that this setting is not for color correction. RedPrimaryY *int64 `locationName:"redPrimaryY" type:"integer"` - // HDR Master Display Information comes from the color grader and the color - // grading tools. Range is 0 to 50,000, each increment represents 0.00002 in - // CIE1931 color coordinate. + // HDR Master Display Information must be provided by a color grader, using + // color grading tools. Range is 0 to 50,000, each increment represents 0.00002 + // in CIE1931 color coordinate. Note that this setting is not for color correction. WhitePointX *int64 `locationName:"whitePointX" type:"integer"` - // HDR Master Display Information comes from the color grader and the color - // grading tools. Range is 0 to 50,000, each increment represents 0.00002 in - // CIE1931 color coordinate. + // HDR Master Display Information must be provided by a color grader, using + // color grading tools. Range is 0 to 50,000, each increment represents 0.00002 + // in CIE1931 color coordinate. Note that this setting is not for color correction. WhitePointY *int64 `locationName:"whitePointY" type:"integer"` } @@ -6002,8 +8204,11 @@ type HlsCaptionLanguageMapping struct { // Caption channel. CaptionChannel *int64 `locationName:"captionChannel" type:"integer"` - // Code to specify the language, following the specification "ISO 639-2 three-digit - // code":http://www.loc.gov/standards/iso639-2/ + // Specify the language for this caption channel, using the ISO 639-2 or ISO + // 639-3 three-letter language code + CustomLanguageCode *string `locationName:"customLanguageCode" min:"3" type:"string"` + + // Specify the language, using the ISO 639-2 three-letter code listed at https://www.loc.gov/standards/iso639-2/php/code_list.php. LanguageCode *string `locationName:"languageCode" type:"string" enum:"LanguageCode"` // Caption language description. @@ -6020,12 +8225,34 @@ func (s HlsCaptionLanguageMapping) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *HlsCaptionLanguageMapping) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HlsCaptionLanguageMapping"} + if s.CaptionChannel != nil && *s.CaptionChannel < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("CaptionChannel", -2.147483648e+09)) + } + if s.CustomLanguageCode != nil && len(*s.CustomLanguageCode) < 3 { + invalidParams.Add(request.NewErrParamMinLen("CustomLanguageCode", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCaptionChannel sets the CaptionChannel field's value. func (s *HlsCaptionLanguageMapping) SetCaptionChannel(v int64) *HlsCaptionLanguageMapping { s.CaptionChannel = &v return s } +// SetCustomLanguageCode sets the CustomLanguageCode field's value. +func (s *HlsCaptionLanguageMapping) SetCustomLanguageCode(v string) *HlsCaptionLanguageMapping { + s.CustomLanguageCode = &v + return s +} + // SetLanguageCode sets the LanguageCode field's value. 
func (s *HlsCaptionLanguageMapping) SetLanguageCode(v string) *HlsCaptionLanguageMapping { s.LanguageCode = &v @@ -6045,7 +8272,7 @@ type HlsEncryptionSettings struct { // This is a 128-bit, 16-byte hex value represented by a 32-character text string. // If this parameter is not set then the Initialization Vector will follow the // segment number by default. - ConstantInitializationVector *string `locationName:"constantInitializationVector" type:"string"` + ConstantInitializationVector *string `locationName:"constantInitializationVector" min:"32" type:"string"` // Encrypts the segments with the given encryption scheme. Leave blank to disable. // Selecting 'Disabled' in the web interface also disables encryption. @@ -6059,7 +8286,7 @@ type HlsEncryptionSettings struct { // Settings for use with a SPEKE key provider SpekeKeyProvider *SpekeKeyProvider `locationName:"spekeKeyProvider" type:"structure"` - // Settings for use with a SPEKE key provider. + // Use these settings to set up encryption with a static key provider. StaticKeyProvider *StaticKeyProvider `locationName:"staticKeyProvider" type:"structure"` // Indicates which type of key provider is used for encryption. @@ -6076,6 +8303,19 @@ func (s HlsEncryptionSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *HlsEncryptionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HlsEncryptionSettings"} + if s.ConstantInitializationVector != nil && len(*s.ConstantInitializationVector) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ConstantInitializationVector", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetConstantInitializationVector sets the ConstantInitializationVector field's value. func (s *HlsEncryptionSettings) SetConstantInitializationVector(v string) *HlsEncryptionSettings { s.ConstantInitializationVector = &v @@ -6168,6 +8408,19 @@ type HlsGroupSettings struct { // segment duration. ManifestDurationFormat *string `locationName:"manifestDurationFormat" type:"string" enum:"HlsManifestDurationFormat"` + // Keep this setting at the default value of 0, unless you are troubleshooting + // a problem with how devices play back the end of your video asset. If you + // know that player devices are hanging on the final segment of your video because + // the length of your final segment is too short, use this setting to specify + // a minimum final segment length, in seconds. Choose a value that is greater + // than or equal to 1 and less than your segment length. When you specify a + // value for this setting, the encoder will combine any final segment that is + // shorter than the length that you specify with the previous segment. For example, + // your segment length is 3 seconds and your final segment is .5 seconds without + // a minimum final segment length; when you set the minimum final segment length + // to 1, your final segment is 3.5 seconds. + MinFinalSegmentLength *float64 `locationName:"minFinalSegmentLength" type:"double"` + // When set, Minimum Segment Size is enforced by looking ahead and back within // the specified range for a nearby avail and extending the segment size if // needed. @@ -6193,11 +8446,11 @@ type HlsGroupSettings struct { // Length of MPEG-2 Transport Stream segments to create (in seconds). Note that // segments will end on the next keyframe after this number of seconds, so actual // segment length may be longer. 
- SegmentLength *int64 `locationName:"segmentLength" type:"integer"` + SegmentLength *int64 `locationName:"segmentLength" min:"1" type:"integer"` // Number of segments to write to a subdirectory before starting a new one. // directoryStructure must be SINGLE_DIRECTORY for this setting to have an effect. - SegmentsPerSubdirectory *int64 `locationName:"segmentsPerSubdirectory" type:"integer"` + SegmentsPerSubdirectory *int64 `locationName:"segmentsPerSubdirectory" min:"1" type:"integer"` // Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag // of variant manifest. @@ -6223,6 +8476,43 @@ func (s HlsGroupSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *HlsGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HlsGroupSettings"} + if s.SegmentLength != nil && *s.SegmentLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("SegmentLength", 1)) + } + if s.SegmentsPerSubdirectory != nil && *s.SegmentsPerSubdirectory < 1 { + invalidParams.Add(request.NewErrParamMinValue("SegmentsPerSubdirectory", 1)) + } + if s.TimedMetadataId3Period != nil && *s.TimedMetadataId3Period < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("TimedMetadataId3Period", -2.147483648e+09)) + } + if s.TimestampDeltaMilliseconds != nil && *s.TimestampDeltaMilliseconds < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("TimestampDeltaMilliseconds", -2.147483648e+09)) + } + if s.CaptionLanguageMappings != nil { + for i, v := range s.CaptionLanguageMappings { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionLanguageMappings", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Encryption != nil { + if err := s.Encryption.Validate(); err != nil { + invalidParams.AddNested("Encryption", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAdMarkers sets the AdMarkers field's value. func (s *HlsGroupSettings) SetAdMarkers(v []*string) *HlsGroupSettings { s.AdMarkers = v @@ -6289,6 +8579,12 @@ func (s *HlsGroupSettings) SetManifestDurationFormat(v string) *HlsGroupSettings return s } +// SetMinFinalSegmentLength sets the MinFinalSegmentLength field's value. +func (s *HlsGroupSettings) SetMinFinalSegmentLength(v float64) *HlsGroupSettings { + s.MinFinalSegmentLength = &v + return s +} + // SetMinSegmentLength sets the MinSegmentLength field's value. func (s *HlsGroupSettings) SetMinSegmentLength(v int64) *HlsGroupSettings { s.MinSegmentLength = &v @@ -6430,7 +8726,7 @@ func (s *HlsSettings) SetSegmentModifier(v string) *HlsSettings { // To insert ID3 tags in your output, specify two values. Use ID3 tag (Id3) // to specify the base 64 encoded string and use Timecode (TimeCode) to specify // the time when the tag should be inserted. To insert multiple ID3 tags in -// your output, create mulitple instances of ID3 insertion (Id3Insertion). +// your output, create multiple instances of ID3 insertion (Id3Insertion). type Id3Insertion struct { _ struct{} `type:"structure"` @@ -6464,13 +8760,13 @@ func (s *Id3Insertion) SetTimecode(v string) *Id3Insertion { } // Enable the Image inserter (ImageInserter) feature to include a graphic overlay -// on your video. Enable or disable this feature for each output individually. +// on your video. Enable or disable this feature for each input or output individually. 
// This setting is disabled by default. type ImageInserter struct { _ struct{} `type:"structure"` - // Image to insert. Must be 32 bit windows BMP, PNG, or TGA file. Must not be - // larger than the output frames. + // Specify the images that you want to overlay on your video. The images must + // be PNG or TGA files. InsertableImages []*InsertableImage `locationName:"insertableImages" type:"list"` } @@ -6484,6 +8780,26 @@ func (s ImageInserter) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ImageInserter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ImageInserter"} + if s.InsertableImages != nil { + for i, v := range s.InsertableImages { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InsertableImages", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetInsertableImages sets the InsertableImages field's value. func (s *ImageInserter) SetInsertableImages(v []*InsertableImage) *ImageInserter { s.InsertableImages = v @@ -6514,6 +8830,10 @@ type Input struct { // video inputs. DeblockFilter *string `locationName:"deblockFilter" type:"string" enum:"InputDeblockFilter"` + // If the input file is encrypted, decryption settings to decrypt the media + // file + DecryptionSettings *InputDecryptionSettings `locationName:"decryptionSettings" type:"structure"` + // Enable Denoise (InputDenoiseFilter) to filter noise from the input. Default // is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video // inputs. @@ -6537,6 +8857,11 @@ type Input struct { // settings (Deblock and Denoise). The range is -5 to 5. Default is 0. FilterStrength *int64 `locationName:"filterStrength" type:"integer"` + // Enable the Image inserter (ImageInserter) feature to include a graphic overlay + // on your video. Enable or disable this feature for each input individually. + // This setting is disabled by default. + ImageInserter *ImageInserter `locationName:"imageInserter" type:"structure"` + // (InputClippings) contains sets of start and end times that together specify // a portion of the input to be used in the outputs. If you provide only a start // time, the clip will be the entire input from that point to the end. If you @@ -6549,20 +8874,20 @@ type Input struct { // transport stream. Note that Quad 4K is not currently supported. Default is // the first program within the transport stream. If the program you specify // doesn't exist, the transcoding service will use this default. - ProgramNumber *int64 `locationName:"programNumber" type:"integer"` + ProgramNumber *int64 `locationName:"programNumber" min:"1" type:"integer"` // Set PSI control (InputPsiControl) for transport stream inputs to specify // which data the demux process to scans. * Ignore PSI - Scan all PIDs for audio // and video. * Use PSI - Scan only PSI data. PsiControl *string `locationName:"psiControl" type:"string" enum:"InputPsiControl"` - // Use Timecode source (InputTimecodeSource) to specify how timecode information - // from your input is adjusted and encoded in all outputs for the job. Default - // is embedded. Set to Embedded (EMBEDDED) to use the timecode that is in the - // input video. If no embedded timecode is in the source, will set the timecode - // for the first frame to 00:00:00:00. 
Set to Start at 0 (ZEROBASED) to set - // the timecode of the initial frame to 00:00:00:00. Set to Specified start - // (SPECIFIEDSTART) to provide the initial timecode yourself the setting (Start). + // Timecode source under input settings (InputTimecodeSource) only affects the + // behavior of features that apply to a single input at a time, such as input + // clipping and synchronizing some captions formats. Use this setting to specify + // whether the service counts frames by timecodes embedded in the video (EMBEDDED) + // or by starting the first frame at zero (ZEROBASED). In both cases, the timecode + // format is HH:MM:SS:FF or HH:MM:SS;FF, where FF is the frame number. Only + // set this to EMBEDDED if your source video has embedded timecodes. TimecodeSource *string `locationName:"timecodeSource" type:"string" enum:"InputTimecodeSource"` // Selector for video. @@ -6579,6 +8904,57 @@ func (s Input) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *Input) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Input"} + if s.FilterStrength != nil && *s.FilterStrength < -5 { + invalidParams.Add(request.NewErrParamMinValue("FilterStrength", -5)) + } + if s.ProgramNumber != nil && *s.ProgramNumber < 1 { + invalidParams.Add(request.NewErrParamMinValue("ProgramNumber", 1)) + } + if s.AudioSelectors != nil { + for i, v := range s.AudioSelectors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AudioSelectors", i), err.(request.ErrInvalidParams)) + } + } + } + if s.CaptionSelectors != nil { + for i, v := range s.CaptionSelectors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionSelectors", i), err.(request.ErrInvalidParams)) + } + } + } + if s.DecryptionSettings != nil { + if err := s.DecryptionSettings.Validate(); err != nil { + invalidParams.AddNested("DecryptionSettings", err.(request.ErrInvalidParams)) + } + } + if s.ImageInserter != nil { + if err := s.ImageInserter.Validate(); err != nil { + invalidParams.AddNested("ImageInserter", err.(request.ErrInvalidParams)) + } + } + if s.VideoSelector != nil { + if err := s.VideoSelector.Validate(); err != nil { + invalidParams.AddNested("VideoSelector", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioSelectorGroups sets the AudioSelectorGroups field's value. func (s *Input) SetAudioSelectorGroups(v map[string]*AudioSelectorGroup) *Input { s.AudioSelectorGroups = v @@ -6603,6 +8979,12 @@ func (s *Input) SetDeblockFilter(v string) *Input { return s } +// SetDecryptionSettings sets the DecryptionSettings field's value. +func (s *Input) SetDecryptionSettings(v *InputDecryptionSettings) *Input { + s.DecryptionSettings = v + return s +} + // SetDenoiseFilter sets the DenoiseFilter field's value. func (s *Input) SetDenoiseFilter(v string) *Input { s.DenoiseFilter = &v @@ -6627,6 +9009,12 @@ func (s *Input) SetFilterStrength(v int64) *Input { return s } +// SetImageInserter sets the ImageInserter field's value. +func (s *Input) SetImageInserter(v *ImageInserter) *Input { + s.ImageInserter = v + return s +} + // SetInputClippings sets the InputClippings field's value. 
func (s *Input) SetInputClippings(v []*InputClipping) *Input { s.InputClippings = v @@ -6657,25 +9045,31 @@ func (s *Input) SetVideoSelector(v *VideoSelector) *Input { return s } -// Include one instance of (InputClipping) for each input clip. +// To transcode only portions of your input (clips), include one Input clipping +// (one instance of InputClipping in the JSON job file) for each input clip. +// All input clips you specify will be included in every output of the job. type InputClipping struct { _ struct{} `type:"structure"` // Set End timecode (EndTimecode) to the end of the portion of the input you // are clipping. The frame corresponding to the End timecode value is included // in the clip. Start timecode or End timecode may be left blank, but not both. - // When choosing this value, take into account your setting for Input timecode - // source. For example, if you have embedded timecodes that start at 01:00:00:00 - // and you want your clip to begin five minutes into the video, use 01:00:05:00. + // Use the format HH:MM:SS:FF or HH:MM:SS;FF, where HH is the hour, MM is the + // minute, SS is the second, and FF is the frame number. When choosing this + // value, take into account your setting for timecode source under input settings + // (InputTimecodeSource). For example, if you have embedded timecodes that start + // at 01:00:00:00 and you want your clip to end six minutes into the video, + // use 01:06:00:00. EndTimecode *string `locationName:"endTimecode" type:"string"` // Set Start timecode (StartTimecode) to the beginning of the portion of the // input you are clipping. The frame corresponding to the Start timecode value // is included in the clip. Start timecode or End timecode may be left blank, - // but not both. When choosing this value, take into account your setting for - // Input timecode source. For example, if you have embedded timecodes that start - // at 01:00:00:00 and you want your clip to begin five minutes into the video, - // use 01:00:05:00. + // but not both. Use the format HH:MM:SS:FF or HH:MM:SS;FF, where HH is the + // hour, MM is the minute, SS is the second, and FF is the frame number. When + // choosing this value, take into account your setting for Input timecode source. + // For example, if you have embedded timecodes that start at 01:00:00:00 and + // you want your clip to begin five minutes into the video, use 01:05:00:00. StartTimecode *string `locationName:"startTimecode" type:"string"` } @@ -6701,6 +9095,76 @@ func (s *InputClipping) SetStartTimecode(v string) *InputClipping { return s } +// Specify the decryption settings used to decrypt encrypted input +type InputDecryptionSettings struct { + _ struct{} `type:"structure"` + + // This specifies how the encrypted file needs to be decrypted. + DecryptionMode *string `locationName:"decryptionMode" type:"string" enum:"DecryptionMode"` + + // Decryption key either 128 or 192 or 256 bits encrypted with KMS + EncryptedDecryptionKey *string `locationName:"encryptedDecryptionKey" min:"24" type:"string"` + + // Initialization Vector 96 bits (CTR/GCM mode only) or 128 bits. 
+ InitializationVector *string `locationName:"initializationVector" min:"16" type:"string"` + + // The AWS region in which decryption key was encrypted with KMS + KmsKeyRegion *string `locationName:"kmsKeyRegion" min:"9" type:"string"` +} + +// String returns the string representation +func (s InputDecryptionSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputDecryptionSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputDecryptionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputDecryptionSettings"} + if s.EncryptedDecryptionKey != nil && len(*s.EncryptedDecryptionKey) < 24 { + invalidParams.Add(request.NewErrParamMinLen("EncryptedDecryptionKey", 24)) + } + if s.InitializationVector != nil && len(*s.InitializationVector) < 16 { + invalidParams.Add(request.NewErrParamMinLen("InitializationVector", 16)) + } + if s.KmsKeyRegion != nil && len(*s.KmsKeyRegion) < 9 { + invalidParams.Add(request.NewErrParamMinLen("KmsKeyRegion", 9)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDecryptionMode sets the DecryptionMode field's value. +func (s *InputDecryptionSettings) SetDecryptionMode(v string) *InputDecryptionSettings { + s.DecryptionMode = &v + return s +} + +// SetEncryptedDecryptionKey sets the EncryptedDecryptionKey field's value. +func (s *InputDecryptionSettings) SetEncryptedDecryptionKey(v string) *InputDecryptionSettings { + s.EncryptedDecryptionKey = &v + return s +} + +// SetInitializationVector sets the InitializationVector field's value. +func (s *InputDecryptionSettings) SetInitializationVector(v string) *InputDecryptionSettings { + s.InitializationVector = &v + return s +} + +// SetKmsKeyRegion sets the KmsKeyRegion field's value. +func (s *InputDecryptionSettings) SetKmsKeyRegion(v string) *InputDecryptionSettings { + s.KmsKeyRegion = &v + return s +} + // Specified video input in a template. type InputTemplate struct { _ struct{} `type:"structure"` @@ -6743,6 +9207,11 @@ type InputTemplate struct { // settings (Deblock and Denoise). The range is -5 to 5. Default is 0. FilterStrength *int64 `locationName:"filterStrength" type:"integer"` + // Enable the Image inserter (ImageInserter) feature to include a graphic overlay + // on your video. Enable or disable this feature for each input individually. + // This setting is disabled by default. + ImageInserter *ImageInserter `locationName:"imageInserter" type:"structure"` + // (InputClippings) contains sets of start and end times that together specify // a portion of the input to be used in the outputs. If you provide only a start // time, the clip will be the entire input from that point to the end. If you @@ -6755,20 +9224,20 @@ type InputTemplate struct { // transport stream. Note that Quad 4K is not currently supported. Default is // the first program within the transport stream. If the program you specify // doesn't exist, the transcoding service will use this default. - ProgramNumber *int64 `locationName:"programNumber" type:"integer"` + ProgramNumber *int64 `locationName:"programNumber" min:"1" type:"integer"` // Set PSI control (InputPsiControl) for transport stream inputs to specify // which data the demux process to scans. * Ignore PSI - Scan all PIDs for audio // and video. * Use PSI - Scan only PSI data. 
PsiControl *string `locationName:"psiControl" type:"string" enum:"InputPsiControl"` - // Use Timecode source (InputTimecodeSource) to specify how timecode information - // from your input is adjusted and encoded in all outputs for the job. Default - // is embedded. Set to Embedded (EMBEDDED) to use the timecode that is in the - // input video. If no embedded timecode is in the source, will set the timecode - // for the first frame to 00:00:00:00. Set to Start at 0 (ZEROBASED) to set - // the timecode of the initial frame to 00:00:00:00. Set to Specified start - // (SPECIFIEDSTART) to provide the initial timecode yourself the setting (Start). + // Timecode source under input settings (InputTimecodeSource) only affects the + // behavior of features that apply to a single input at a time, such as input + // clipping and synchronizing some captions formats. Use this setting to specify + // whether the service counts frames by timecodes embedded in the video (EMBEDDED) + // or by starting the first frame at zero (ZEROBASED). In both cases, the timecode + // format is HH:MM:SS:FF or HH:MM:SS;FF, where FF is the frame number. Only + // set this to EMBEDDED if your source video has embedded timecodes. TimecodeSource *string `locationName:"timecodeSource" type:"string" enum:"InputTimecodeSource"` // Selector for video. @@ -6785,6 +9254,52 @@ func (s InputTemplate) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputTemplate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputTemplate"} + if s.FilterStrength != nil && *s.FilterStrength < -5 { + invalidParams.Add(request.NewErrParamMinValue("FilterStrength", -5)) + } + if s.ProgramNumber != nil && *s.ProgramNumber < 1 { + invalidParams.Add(request.NewErrParamMinValue("ProgramNumber", 1)) + } + if s.AudioSelectors != nil { + for i, v := range s.AudioSelectors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AudioSelectors", i), err.(request.ErrInvalidParams)) + } + } + } + if s.CaptionSelectors != nil { + for i, v := range s.CaptionSelectors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionSelectors", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ImageInserter != nil { + if err := s.ImageInserter.Validate(); err != nil { + invalidParams.AddNested("ImageInserter", err.(request.ErrInvalidParams)) + } + } + if s.VideoSelector != nil { + if err := s.VideoSelector.Validate(); err != nil { + invalidParams.AddNested("VideoSelector", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioSelectorGroups sets the AudioSelectorGroups field's value. func (s *InputTemplate) SetAudioSelectorGroups(v map[string]*AudioSelectorGroup) *InputTemplate { s.AudioSelectorGroups = v @@ -6827,6 +9342,12 @@ func (s *InputTemplate) SetFilterStrength(v int64) *InputTemplate { return s } +// SetImageInserter sets the ImageInserter field's value. +func (s *InputTemplate) SetImageInserter(v *ImageInserter) *InputTemplate { + s.ImageInserter = v + return s +} + // SetInputClippings sets the InputClippings field's value. 
func (s *InputTemplate) SetInputClippings(v []*InputClipping) *InputTemplate { s.InputClippings = v @@ -6857,45 +9378,49 @@ func (s *InputTemplate) SetVideoSelector(v *VideoSelector) *InputTemplate { return s } -// Settings for Insertable Image +// Settings that specify how your overlay appears. type InsertableImage struct { _ struct{} `type:"structure"` - // Use Duration (Duration) to set the time, in milliseconds, for the image to - // remain on the output video. + // Set the time, in milliseconds, for the image to remain on the output video. Duration *int64 `locationName:"duration" type:"integer"` - // Use Fade in (FadeIut) to set the length, in milliseconds, of the inserted - // image fade in. If you don't specify a value for Fade in, the image will appear - // abruptly at the Start time. + // Set the length of time, in milliseconds, between the Start time that you + // specify for the image insertion and the time that the image appears at full + // opacity. Full opacity is the level that you specify for the opacity setting. + // If you don't specify a value for Fade-in, the image will appear abruptly + // at the overlay start time. FadeIn *int64 `locationName:"fadeIn" type:"integer"` - // Use Fade out (FadeOut) to set the length, in milliseconds, of the inserted - // image fade out. If you don't specify a value for Fade out, the image will - // disappear abruptly at the end of the inserted image duration. + // Specify the length of time, in milliseconds, between the end of the time + // that you have specified for the image overlay Duration and when the overlaid + // image has faded to total transparency. If you don't specify a value for Fade-out, + // the image will disappear abruptly at the end of the inserted image duration. FadeOut *int64 `locationName:"fadeOut" type:"integer"` - // Specify the Height (Height) of the inserted image. Use a value that is less - // than or equal to the video resolution height. Leave this setting blank to - // use the native height of the image. + // Specify the height of the inserted image in pixels. If you specify a value + // that's larger than the video resolution height, the service will crop your + // overlaid image to fit. To use the native height of the image, keep this setting + // blank. Height *int64 `locationName:"height" type:"integer"` // Use Image location (imageInserterInput) to specify the Amazon S3 location - // of the image to be inserted into the output. Use a 32 bit BMP, PNG, or TGA - // file that fits inside the video frame. - ImageInserterInput *string `locationName:"imageInserterInput" type:"string"` + // of the image to be inserted into the output. Use a PNG or TGA file that fits + // inside the video frame. + ImageInserterInput *string `locationName:"imageInserterInput" min:"14" type:"string"` // Use Left (ImageX) to set the distance, in pixels, between the inserted image - // and the left edge of the frame. Required for BMP, PNG and TGA input. + // and the left edge of the video frame. Required for any image overlay that + // you specify. ImageX *int64 `locationName:"imageX" type:"integer"` - // Use Top (ImageY) to set the distance, in pixels, between the inserted image - // and the top edge of the video frame. Required for BMP, PNG and TGA input. + // Use Top (ImageY) to set the distance, in pixels, between the overlaid image + // and the top edge of the video frame. Required for any image overlay that + // you specify. 
ImageY *int64 `locationName:"imageY" type:"integer"` - // Use Layer (Layer) to specify how overlapping inserted images appear. Images - // with higher values of layer appear on top of images with lower values of - // layer. + // Specify how overlapping inserted images appear. Images with higher values + // for Layer appear on top of images with lower values for Layer. Layer *int64 `locationName:"layer" type:"integer"` // Use Opacity (Opacity) to specify how much of the underlying video shows through @@ -6904,12 +9429,14 @@ type InsertableImage struct { Opacity *int64 `locationName:"opacity" type:"integer"` // Use Start time (StartTime) to specify the video timecode when the image is - // inserted in the output. This must be in timecode format (HH:MM:SS:FF) + // inserted in the output. This must be in timecode (HH:MM:SS:FF or HH:MM:SS;FF) + // format. StartTime *string `locationName:"startTime" type:"string"` - // Specify the Width (Width) of the inserted image. Use a value that is less - // than or equal to the video resolution width. Leave this setting blank to - // use the native width of the image. + // Specify the width of the inserted image in pixels. If you specify a value + // that's larger than the video resolution width, the service will crop your + // overlaid image to fit. To use the native width of the image, keep this setting + // blank. Width *int64 `locationName:"width" type:"integer"` } @@ -6923,6 +9450,19 @@ func (s InsertableImage) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *InsertableImage) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InsertableImage"} + if s.ImageInserterInput != nil && len(*s.ImageInserterInput) < 14 { + invalidParams.Add(request.NewErrParamMinLen("ImageInserterInput", 14)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetDuration sets the Duration field's value. func (s *InsertableImage) SetDuration(v int64) *InsertableImage { s.Duration = &v @@ -6997,8 +9537,15 @@ type Job struct { // An identifier for this resource that is unique within all of AWS. Arn *string `locationName:"arn" type:"string"` + // Optional. Choose a tag type that AWS Billing and Cost Management will use + // to sort your AWS Elemental MediaConvert costs on any billing report that + // you set up. Any transcoding outputs that don't have an associated tag will + // appear in your billing report unsorted. If you don't choose a valid value + // for this field, your job outputs will appear on the billing report unsorted. + BillingTagsSource *string `locationName:"billingTagsSource" type:"string" enum:"BillingTagsSource"` + // The time, in Unix epoch format in seconds, when the job got created. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unixTimestamp"` // Error code for the job ErrorCode *int64 `locationName:"errorCode" type:"integer"` @@ -7024,10 +9571,14 @@ type Job struct { // The IAM role you use for creating this job. For details about permissions, // see the User Guide topic at the User Guide at http://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html - Role *string `locationName:"role" type:"string"` + // + // Role is a required field + Role *string `locationName:"role" type:"string" required:"true"` // JobSettings contains all the transcode settings for a job. 
- Settings *JobSettings `locationName:"settings" type:"structure"` + // + // Settings is a required field + Settings *JobSettings `locationName:"settings" type:"structure" required:"true"` // A job's status can be SUBMITTED, PROGRESSING, COMPLETE, CANCELED, or ERROR. Status *string `locationName:"status" type:"string" enum:"JobStatus"` @@ -7057,6 +9608,12 @@ func (s *Job) SetArn(v string) *Job { return s } +// SetBillingTagsSource sets the BillingTagsSource field's value. +func (s *Job) SetBillingTagsSource(v string) *Job { + s.BillingTagsSource = &v + return s +} + // SetCreatedAt sets the CreatedAt field's value. func (s *Job) SetCreatedAt(v time.Time) *Job { s.CreatedAt = &v @@ -7146,18 +9703,22 @@ type JobSettings struct { // to create the output. Inputs []*Input `locationName:"inputs" type:"list"` + // Overlay motion graphics on top of your video. The motion graphics that you + // specify here appear on all outputs in all output groups. + MotionImageInserter *MotionImageInserter `locationName:"motionImageInserter" type:"structure"` + // Settings for Nielsen Configuration NielsenConfiguration *NielsenConfiguration `locationName:"nielsenConfiguration" type:"structure"` - // **!!**(OutputGroups) contains one group of settings for each set of outputs - // that share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, - // Quicktime, MXF, and no container) are grouped in a single output group as - // well. Required in (OutputGroups) is a group of settings that apply to the - // whole group. This required object depends on the value you set for (Type) - // under (OutputGroups)>(OutputGroupSettings). Type, settings object pairs are - // as follows. * FILE_GROUP_SETTINGS, FileGroupSettings * HLS_GROUP_SETTINGS, - // HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings * MS_SMOOTH_GROUP_SETTINGS, - // MsSmoothGroupSettings + // (OutputGroups) contains one group of settings for each set of outputs that + // share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, Quicktime, + // MXF, and no container) are grouped in a single output group as well. Required + // in (OutputGroups) is a group of settings that apply to the whole group. This + // required object depends on the value you set for (Type) under (OutputGroups)>(OutputGroupSettings). + // Type, settings object pairs are as follows. * FILE_GROUP_SETTINGS, FileGroupSettings + // * HLS_GROUP_SETTINGS, HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings + // * MS_SMOOTH_GROUP_SETTINGS, MsSmoothGroupSettings * CMAF_GROUP_SETTINGS, + // CmafGroupSettings OutputGroups []*OutputGroup `locationName:"outputGroups" type:"list"` // Contains settings used to acquire and adjust timecode information from inputs. @@ -7180,6 +9741,49 @@ func (s JobSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *JobSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "JobSettings"} + if s.AdAvailOffset != nil && *s.AdAvailOffset < -1000 { + invalidParams.Add(request.NewErrParamMinValue("AdAvailOffset", -1000)) + } + if s.AvailBlanking != nil { + if err := s.AvailBlanking.Validate(); err != nil { + invalidParams.AddNested("AvailBlanking", err.(request.ErrInvalidParams)) + } + } + if s.Inputs != nil { + for i, v := range s.Inputs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Inputs", i), err.(request.ErrInvalidParams)) + } + } + } + if s.MotionImageInserter != nil { + if err := s.MotionImageInserter.Validate(); err != nil { + invalidParams.AddNested("MotionImageInserter", err.(request.ErrInvalidParams)) + } + } + if s.OutputGroups != nil { + for i, v := range s.OutputGroups { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OutputGroups", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAdAvailOffset sets the AdAvailOffset field's value. func (s *JobSettings) SetAdAvailOffset(v int64) *JobSettings { s.AdAvailOffset = &v @@ -7198,6 +9802,12 @@ func (s *JobSettings) SetInputs(v []*Input) *JobSettings { return s } +// SetMotionImageInserter sets the MotionImageInserter field's value. +func (s *JobSettings) SetMotionImageInserter(v *MotionImageInserter) *JobSettings { + s.MotionImageInserter = v + return s +} + // SetNielsenConfiguration sets the NielsenConfiguration field's value. func (s *JobSettings) SetNielsenConfiguration(v *NielsenConfiguration) *JobSettings { s.NielsenConfiguration = v @@ -7234,17 +9844,19 @@ type JobTemplate struct { Category *string `locationName:"category" type:"string"` // The timestamp in epoch seconds for Job template creation. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unixTimestamp"` // An optional description you create for each job template. Description *string `locationName:"description" type:"string"` // The timestamp in epoch seconds when the Job template was last updated. - LastUpdated *time.Time `locationName:"lastUpdated" type:"timestamp" timestampFormat:"unix"` + LastUpdated *time.Time `locationName:"lastUpdated" type:"timestamp" timestampFormat:"unixTimestamp"` // A name you create for each job template. Each name must be unique within // your account. - Name *string `locationName:"name" type:"string"` + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` // Optional. The queue that jobs created from this template are assigned to. // If you don't specify this, jobs will go to the default queue. @@ -7252,7 +9864,9 @@ type JobTemplate struct { // JobTemplateSettings contains all the transcode settings saved in the template // that will be applied to jobs created from it. - Settings *JobTemplateSettings `locationName:"settings" type:"structure"` + // + // Settings is a required field + Settings *JobTemplateSettings `locationName:"settings" type:"structure" required:"true"` // A job template can be of two types: system or custom. System or built-in // job templates can't be modified or deleted by the user. @@ -7341,18 +9955,22 @@ type JobTemplateSettings struct { // multiple inputs when referencing a job template. 
Inputs []*InputTemplate `locationName:"inputs" type:"list"` + // Overlay motion graphics on top of your video. The motion graphics that you + // specify here appear on all outputs in all output groups. + MotionImageInserter *MotionImageInserter `locationName:"motionImageInserter" type:"structure"` + // Settings for Nielsen Configuration NielsenConfiguration *NielsenConfiguration `locationName:"nielsenConfiguration" type:"structure"` - // **!!**(OutputGroups) contains one group of settings for each set of outputs - // that share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, - // Quicktime, MXF, and no container) are grouped in a single output group as - // well. Required in (OutputGroups) is a group of settings that apply to the - // whole group. This required object depends on the value you set for (Type) - // under (OutputGroups)>(OutputGroupSettings). Type, settings object pairs are - // as follows. * FILE_GROUP_SETTINGS, FileGroupSettings * HLS_GROUP_SETTINGS, - // HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings * MS_SMOOTH_GROUP_SETTINGS, - // MsSmoothGroupSettings + // (OutputGroups) contains one group of settings for each set of outputs that + // share a common package type. All unpackaged files (MPEG-4, MPEG-2 TS, Quicktime, + // MXF, and no container) are grouped in a single output group as well. Required + // in (OutputGroups) is a group of settings that apply to the whole group. This + // required object depends on the value you set for (Type) under (OutputGroups)>(OutputGroupSettings). + // Type, settings object pairs are as follows. * FILE_GROUP_SETTINGS, FileGroupSettings + // * HLS_GROUP_SETTINGS, HlsGroupSettings * DASH_ISO_GROUP_SETTINGS, DashIsoGroupSettings + // * MS_SMOOTH_GROUP_SETTINGS, MsSmoothGroupSettings * CMAF_GROUP_SETTINGS, + // CmafGroupSettings OutputGroups []*OutputGroup `locationName:"outputGroups" type:"list"` // Contains settings used to acquire and adjust timecode information from inputs. @@ -7375,6 +9993,49 @@ func (s JobTemplateSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *JobTemplateSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "JobTemplateSettings"} + if s.AdAvailOffset != nil && *s.AdAvailOffset < -1000 { + invalidParams.Add(request.NewErrParamMinValue("AdAvailOffset", -1000)) + } + if s.AvailBlanking != nil { + if err := s.AvailBlanking.Validate(); err != nil { + invalidParams.AddNested("AvailBlanking", err.(request.ErrInvalidParams)) + } + } + if s.Inputs != nil { + for i, v := range s.Inputs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Inputs", i), err.(request.ErrInvalidParams)) + } + } + } + if s.MotionImageInserter != nil { + if err := s.MotionImageInserter.Validate(); err != nil { + invalidParams.AddNested("MotionImageInserter", err.(request.ErrInvalidParams)) + } + } + if s.OutputGroups != nil { + for i, v := range s.OutputGroups { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OutputGroups", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAdAvailOffset sets the AdAvailOffset field's value. 
func (s *JobTemplateSettings) SetAdAvailOffset(v int64) *JobTemplateSettings { s.AdAvailOffset = &v @@ -7393,6 +10054,12 @@ func (s *JobTemplateSettings) SetInputs(v []*InputTemplate) *JobTemplateSettings return s } +// SetMotionImageInserter sets the MotionImageInserter field's value. +func (s *JobTemplateSettings) SetMotionImageInserter(v *MotionImageInserter) *JobTemplateSettings { + s.MotionImageInserter = v + return s +} + // SetNielsenConfiguration sets the NielsenConfiguration field's value. func (s *JobTemplateSettings) SetNielsenConfiguration(v *NielsenConfiguration) *JobTemplateSettings { s.NielsenConfiguration = v @@ -7435,7 +10102,7 @@ type ListJobTemplatesInput struct { // Optional. Number of job templates, up to twenty, that will be returned at // one time. - MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` // Use this string, provided with the response to a previous request, to request // the next batch of job templates. @@ -7456,6 +10123,19 @@ func (s ListJobTemplatesInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListJobTemplatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListJobTemplatesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCategory sets the Category field's value. func (s *ListJobTemplatesInput) SetCategory(v string) *ListJobTemplatesInput { s.Category = &v @@ -7487,7 +10167,7 @@ func (s *ListJobTemplatesInput) SetOrder(v string) *ListJobTemplatesInput { } // Successful list job templates requests return a JSON array of job templates. -// If you do not specify how they are ordered, you will receive them in alphabetical +// If you don't specify how they are ordered, you will receive them in alphabetical // order by name. type ListJobTemplatesOutput struct { _ struct{} `type:"structure"` @@ -7529,7 +10209,7 @@ type ListJobsInput struct { _ struct{} `type:"structure"` // Optional. Number of jobs, up to twenty, that will be returned at one time. - MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` // Use this string, provided with the response to a previous request, to request // the next batch of jobs. @@ -7556,6 +10236,19 @@ func (s ListJobsInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListJobsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetMaxResults sets the MaxResults field's value. func (s *ListJobsInput) SetMaxResults(v int64) *ListJobsInput { s.MaxResults = &v @@ -7586,9 +10279,8 @@ func (s *ListJobsInput) SetStatus(v string) *ListJobsInput { return s } -// Successful list jobs requests return a JSON array of jobs. If you do not -// specify how they are ordered, you will receive the most recently created -// first. +// Successful list jobs requests return a JSON array of jobs. 
If you don't specify +// how they are ordered, you will receive the most recently created first. type ListJobsOutput struct { _ struct{} `type:"structure"` @@ -7638,7 +10330,7 @@ type ListPresetsInput struct { ListBy *string `location:"querystring" locationName:"listBy" type:"string" enum:"PresetListBy"` // Optional. Number of presets, up to twenty, that will be returned at one time - MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` // Use this string, provided with the response to a previous request, to request // the next batch of presets. @@ -7659,6 +10351,19 @@ func (s ListPresetsInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListPresetsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListPresetsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCategory sets the Category field's value. func (s *ListPresetsInput) SetCategory(v string) *ListPresetsInput { s.Category = &v @@ -7689,9 +10394,8 @@ func (s *ListPresetsInput) SetOrder(v string) *ListPresetsInput { return s } -// Successful list presets requests return a JSON array of presets. If you do -// not specify how they are ordered, you will receive them alphabetically by -// name. +// Successful list presets requests return a JSON array of presets. If you don't +// specify how they are ordered, you will receive them alphabetically by name. type ListPresetsOutput struct { _ struct{} `type:"structure"` @@ -7735,7 +10439,7 @@ type ListQueuesInput struct { ListBy *string `location:"querystring" locationName:"listBy" type:"string" enum:"QueueListBy"` // Optional. Number of queues, up to twenty, that will be returned at one time. - MaxResults *int64 `location:"querystring" locationName:"maxResults" type:"integer"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` // Use this string, provided with the response to a previous request, to request // the next batch of queues. @@ -7756,6 +10460,19 @@ func (s ListQueuesInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListQueuesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListQueuesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetListBy sets the ListBy field's value. func (s *ListQueuesInput) SetListBy(v string) *ListQueuesInput { s.ListBy = &v @@ -7780,7 +10497,7 @@ func (s *ListQueuesInput) SetOrder(v string) *ListQueuesInput { return s } -// Successful list queues return a JSON array of queues. If you do not specify +// Successful list queues return a JSON array of queues. If you don't specify // how they are ordered, you will receive them alphabetically by name. type ListQueuesOutput struct { _ struct{} `type:"structure"` @@ -7788,7 +10505,7 @@ type ListQueuesOutput struct { // Use this string to request the next batch of queues. NextToken *string `locationName:"nextToken" type:"string"` - // List of queues + // List of queues. 
Queues []*Queue `locationName:"queues" type:"list"` } @@ -7814,6 +10531,74 @@ func (s *ListQueuesOutput) SetQueues(v []*Queue) *ListQueuesOutput { return s } +// List the tags for your AWS Elemental MediaConvert resource by sending a request +// with the Amazon Resource Name (ARN) of the resource. To get the ARN, send +// a GET request with the resource name. +type ListTagsForResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource that you want to list tags + // for. To get the ARN, send a GET request with the resource name. + // + // Arn is a required field + Arn *string `location:"uri" locationName:"arn" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListTagsForResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsForResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *ListTagsForResourceInput) SetArn(v string) *ListTagsForResourceInput { + s.Arn = &v + return s +} + +// A successful request to list the tags for a resource returns a JSON map of +// tags. +type ListTagsForResourceOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) and tags for an AWS Elemental MediaConvert + // resource. + ResourceTags *ResourceTags `locationName:"resourceTags" type:"structure"` +} + +// String returns the string representation +func (s ListTagsForResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceOutput) GoString() string { + return s.String() +} + +// SetResourceTags sets the ResourceTags field's value. +func (s *ListTagsForResourceOutput) SetResourceTags(v *ResourceTags) *ListTagsForResourceOutput { + s.ResourceTags = v + return s +} + // Settings for M2TS Container. type M2tsSettings struct { _ struct{} `type:"structure"` @@ -7856,7 +10641,7 @@ type M2tsSettings struct { DvbTdtSettings *DvbTdtSettings `locationName:"dvbTdtSettings" type:"structure"` // Packet Identifier (PID) for input source DVB Teletext data to this output. - DvbTeletextPid *int64 `locationName:"dvbTeletextPid" type:"integer"` + DvbTeletextPid *int64 `locationName:"dvbTeletextPid" min:"32" type:"integer"` // When set to VIDEO_AND_FIXED_INTERVALS, audio EBP markers will be added to // partitions 3 and 4. The interval between these additional markers will be @@ -7911,7 +10696,7 @@ type M2tsSettings struct { // Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport // stream. When no value is given, the encoder will assign the same value as // the Video PID. - PcrPid *int64 `locationName:"pcrPid" type:"integer"` + PcrPid *int64 `locationName:"pcrPid" min:"32" type:"integer"` // The number of milliseconds between instances of this table in the output // transport stream. @@ -7919,10 +10704,10 @@ type M2tsSettings struct { // Packet Identifier (PID) for the Program Map Table (PMT) in the transport // stream. 
- PmtPid *int64 `locationName:"pmtPid" type:"integer"` + PmtPid *int64 `locationName:"pmtPid" min:"32" type:"integer"` // Packet Identifier (PID) of the private metadata stream in the transport stream. - PrivateMetadataPid *int64 `locationName:"privateMetadataPid" type:"integer"` + PrivateMetadataPid *int64 `locationName:"privateMetadataPid" min:"32" type:"integer"` // The value of the program number field in the Program Map Table. ProgramNumber *int64 `locationName:"programNumber" type:"integer"` @@ -7933,7 +10718,7 @@ type M2tsSettings struct { RateMode *string `locationName:"rateMode" type:"string" enum:"M2tsRateMode"` // Packet Identifier (PID) of the SCTE-35 stream in the transport stream. - Scte35Pid *int64 `locationName:"scte35Pid" type:"integer"` + Scte35Pid *int64 `locationName:"scte35Pid" min:"32" type:"integer"` // Enables SCTE-35 passthrough (scte35Source) to pass any SCTE-35 signals from // input to output. @@ -7966,13 +10751,13 @@ type M2tsSettings struct { SegmentationTime *float64 `locationName:"segmentationTime" type:"double"` // Packet Identifier (PID) of the timed metadata stream in the transport stream. - TimedMetadataPid *int64 `locationName:"timedMetadataPid" type:"integer"` + TimedMetadataPid *int64 `locationName:"timedMetadataPid" min:"32" type:"integer"` // The value of the transport stream ID field in the Program Map Table. TransportStreamId *int64 `locationName:"transportStreamId" type:"integer"` // Packet Identifier (PID) of the elementary video stream in the transport stream. - VideoPid *int64 `locationName:"videoPid" type:"integer"` + VideoPid *int64 `locationName:"videoPid" min:"32" type:"integer"` } // String returns the string representation @@ -7985,6 +10770,52 @@ func (s M2tsSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *M2tsSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "M2tsSettings"} + if s.DvbTeletextPid != nil && *s.DvbTeletextPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("DvbTeletextPid", 32)) + } + if s.PcrPid != nil && *s.PcrPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("PcrPid", 32)) + } + if s.PmtPid != nil && *s.PmtPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("PmtPid", 32)) + } + if s.PrivateMetadataPid != nil && *s.PrivateMetadataPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("PrivateMetadataPid", 32)) + } + if s.Scte35Pid != nil && *s.Scte35Pid < 32 { + invalidParams.Add(request.NewErrParamMinValue("Scte35Pid", 32)) + } + if s.TimedMetadataPid != nil && *s.TimedMetadataPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("TimedMetadataPid", 32)) + } + if s.VideoPid != nil && *s.VideoPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("VideoPid", 32)) + } + if s.DvbNitSettings != nil { + if err := s.DvbNitSettings.Validate(); err != nil { + invalidParams.AddNested("DvbNitSettings", err.(request.ErrInvalidParams)) + } + } + if s.DvbSdtSettings != nil { + if err := s.DvbSdtSettings.Validate(); err != nil { + invalidParams.AddNested("DvbSdtSettings", err.(request.ErrInvalidParams)) + } + } + if s.DvbTdtSettings != nil { + if err := s.DvbTdtSettings.Validate(); err != nil { + invalidParams.AddNested("DvbTdtSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioBufferModel sets the AudioBufferModel field's value. 
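With the `min:"32"` tags added to the M2TS PID fields above, the generated `M2tsSettings.Validate` now rejects low PID values locally. A small sketch using only the struct and method shown in this hunk (import paths assumed to be the standard aws-sdk-go ones):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	// VideoPid below the new minimum of 32 is flagged by Validate; PcrPid
	// at 480 passes the min check.
	m2ts := &mediaconvert.M2tsSettings{
		VideoPid: aws.Int64(31),
		PcrPid:   aws.Int64(480),
	}
	if err := m2ts.Validate(); err != nil {
		fmt.Println(err)
	}
}
```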
func (s *M2tsSettings) SetAudioBufferModel(v string) *M2tsSettings { s.AudioBufferModel = &v @@ -8217,7 +11048,7 @@ type M3u8Settings struct { // Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport // stream. When no value is given, the encoder will assign the same value as // the Video PID. - PcrPid *int64 `locationName:"pcrPid" type:"integer"` + PcrPid *int64 `locationName:"pcrPid" min:"32" type:"integer"` // The number of milliseconds between instances of this table in the output // transport stream. @@ -8225,33 +11056,33 @@ type M3u8Settings struct { // Packet Identifier (PID) for the Program Map Table (PMT) in the transport // stream. - PmtPid *int64 `locationName:"pmtPid" type:"integer"` + PmtPid *int64 `locationName:"pmtPid" min:"32" type:"integer"` // Packet Identifier (PID) of the private metadata stream in the transport stream. - PrivateMetadataPid *int64 `locationName:"privateMetadataPid" type:"integer"` + PrivateMetadataPid *int64 `locationName:"privateMetadataPid" min:"32" type:"integer"` // The value of the program number field in the Program Map Table. ProgramNumber *int64 `locationName:"programNumber" type:"integer"` // Packet Identifier (PID) of the SCTE-35 stream in the transport stream. - Scte35Pid *int64 `locationName:"scte35Pid" type:"integer"` + Scte35Pid *int64 `locationName:"scte35Pid" min:"32" type:"integer"` // Enables SCTE-35 passthrough (scte35Source) to pass any SCTE-35 signals from // input to output. Scte35Source *string `locationName:"scte35Source" type:"string" enum:"M3u8Scte35Source"` - // If PASSTHROUGH, inserts ID3 timed metadata from the timed_metadata REST command - // into this output. + // Applies only to HLS outputs. Use this setting to specify whether the service + // inserts the ID3 timed metadata from the input in this output. TimedMetadata *string `locationName:"timedMetadata" type:"string" enum:"TimedMetadata"` // Packet Identifier (PID) of the timed metadata stream in the transport stream. - TimedMetadataPid *int64 `locationName:"timedMetadataPid" type:"integer"` + TimedMetadataPid *int64 `locationName:"timedMetadataPid" min:"32" type:"integer"` // The value of the transport stream ID field in the Program Map Table. TransportStreamId *int64 `locationName:"transportStreamId" type:"integer"` // Packet Identifier (PID) of the elementary video stream in the transport stream. - VideoPid *int64 `locationName:"videoPid" type:"integer"` + VideoPid *int64 `locationName:"videoPid" min:"32" type:"integer"` } // String returns the string representation @@ -8264,6 +11095,34 @@ func (s M3u8Settings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *M3u8Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "M3u8Settings"} + if s.PcrPid != nil && *s.PcrPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("PcrPid", 32)) + } + if s.PmtPid != nil && *s.PmtPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("PmtPid", 32)) + } + if s.PrivateMetadataPid != nil && *s.PrivateMetadataPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("PrivateMetadataPid", 32)) + } + if s.Scte35Pid != nil && *s.Scte35Pid < 32 { + invalidParams.Add(request.NewErrParamMinValue("Scte35Pid", 32)) + } + if s.TimedMetadataPid != nil && *s.TimedMetadataPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("TimedMetadataPid", 32)) + } + if s.VideoPid != nil && *s.VideoPid < 32 { + invalidParams.Add(request.NewErrParamMinValue("VideoPid", 32)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioFramesPerPes sets the AudioFramesPerPes field's value. func (s *M3u8Settings) SetAudioFramesPerPes(v int64) *M3u8Settings { s.AudioFramesPerPes = &v @@ -8360,6 +11219,215 @@ func (s *M3u8Settings) SetVideoPid(v int64) *M3u8Settings { return s } +// Overlay motion graphics on top of your video at the time that you specify. +type MotionImageInserter struct { + _ struct{} `type:"structure"` + + // If your motion graphic asset is a .mov file, keep this setting unspecified. + // If your motion graphic asset is a series of .png files, specify the framerate + // of the overlay in frames per second, as a fraction. For example, specify + // 24 fps as 24/1. Make sure that the number of images in your series matches + // the framerate and your intended overlay duration. For example, if you want + // a 30-second overlay at 30 fps, you should have 900 .png images. This overlay + // framerate doesn't need to match the framerate of the underlying video. + Framerate *MotionImageInsertionFramerate `locationName:"framerate" type:"structure"` + + // Specify the .mov file or series of .png files that you want to overlay on + // your video. For .png files, provide the file name of the first file in the + // series. Make sure that the names of the .png files end with sequential numbers + // that specify the order that they are played in. For example, overlay_000.png, + // overlay_001.png, overlay_002.png, and so on. The sequence must start at zero, + // and each image file name must have the same number of digits. Pad your initial + // file names with enough zeros to complete the sequence. For example, if the + // first image is overlay_0.png, there can be only 10 images in the sequence, + // with the last image being overlay_9.png. But if the first image is overlay_00.png, + // there can be 100 images in the sequence. + Input *string `locationName:"input" min:"14" type:"string"` + + // Choose the type of motion graphic asset that you are providing for your overlay. + // You can choose either a .mov file or a series of .png files. + InsertionMode *string `locationName:"insertionMode" type:"string" enum:"MotionImageInsertionMode"` + + // Use Offset to specify the placement of your motion graphic overlay on the + // video frame. Specify in pixels, from the upper-left corner of the frame. + // If you don't specify an offset, the service scales your overlay to the full + // size of the frame. Otherwise, the service inserts the overlay at its native + // resolution and scales the size up or down with any video scaling. 
+ Offset *MotionImageInsertionOffset `locationName:"offset" type:"structure"` + + // Specify whether your motion graphic overlay repeats on a loop or plays only + // once. + Playback *string `locationName:"playback" type:"string" enum:"MotionImagePlayback"` + + // Specify when the motion overlay begins. Use timecode format (HH:MM:SS:FF + // or HH:MM:SS;FF). Make sure that the timecode you provide here takes into + // account how you have set up your timecode configuration under both job settings + // and input settings. The simplest way to do that is to set both to start at + // 0. If you need to set up your job to follow timecodes embedded in your source + // that don't start at zero, make sure that you specify a start time that is + // after the first embedded timecode. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/setting-up-timecode.html + // Find job-wide and input timecode configuration settings in your JSON job + // settings specification at settings>timecodeConfig>source and settings>inputs>timecodeSource. + StartTime *string `locationName:"startTime" min:"11" type:"string"` +} + +// String returns the string representation +func (s MotionImageInserter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MotionImageInserter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MotionImageInserter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MotionImageInserter"} + if s.Input != nil && len(*s.Input) < 14 { + invalidParams.Add(request.NewErrParamMinLen("Input", 14)) + } + if s.StartTime != nil && len(*s.StartTime) < 11 { + invalidParams.Add(request.NewErrParamMinLen("StartTime", 11)) + } + if s.Framerate != nil { + if err := s.Framerate.Validate(); err != nil { + invalidParams.AddNested("Framerate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFramerate sets the Framerate field's value. +func (s *MotionImageInserter) SetFramerate(v *MotionImageInsertionFramerate) *MotionImageInserter { + s.Framerate = v + return s +} + +// SetInput sets the Input field's value. +func (s *MotionImageInserter) SetInput(v string) *MotionImageInserter { + s.Input = &v + return s +} + +// SetInsertionMode sets the InsertionMode field's value. +func (s *MotionImageInserter) SetInsertionMode(v string) *MotionImageInserter { + s.InsertionMode = &v + return s +} + +// SetOffset sets the Offset field's value. +func (s *MotionImageInserter) SetOffset(v *MotionImageInsertionOffset) *MotionImageInserter { + s.Offset = v + return s +} + +// SetPlayback sets the Playback field's value. +func (s *MotionImageInserter) SetPlayback(v string) *MotionImageInserter { + s.Playback = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *MotionImageInserter) SetStartTime(v string) *MotionImageInserter { + s.StartTime = &v + return s +} + +// For motion overlays that don't have a built-in framerate, specify the framerate +// of the overlay in frames per second, as a fraction. For example, specify +// 24 fps as 24/1. The overlay framerate doesn't need to match the framerate +// of the underlying video. +type MotionImageInsertionFramerate struct { + _ struct{} `type:"structure"` + + // The bottom of the fraction that expresses your overlay framerate. For example, + // if your framerate is 24 fps, set this value to 1. 
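`MotionImageInserter` is new in this revision; note the minimum lengths its `Validate` enforces (`Input` at least 14 characters, `StartTime` at least 11) and the overlay framerate expressed as a fraction. A sketch of building overlay settings for a .png sequence; the S3 URI and timecode are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	// Overlay a .png series at 24/1 fps, five seconds into the output.
	inserter := &mediaconvert.MotionImageInserter{
		Input:     aws.String("s3://my-bucket/overlays/overlay_000.png"), // placeholder URI
		StartTime: aws.String("00:00:05:00"),                             // 11-character timecode
		Framerate: &mediaconvert.MotionImageInsertionFramerate{
			FramerateNumerator:   aws.Int64(24),
			FramerateDenominator: aws.Int64(1),
		},
		Offset: &mediaconvert.MotionImageInsertionOffset{
			ImageX: aws.Int64(100),
			ImageY: aws.Int64(50),
		},
	}
	if err := inserter.Validate(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("motion image inserter settings pass client-side validation")
}
```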
+ FramerateDenominator *int64 `locationName:"framerateDenominator" min:"1" type:"integer"` + + // The top of the fraction that expresses your overlay framerate. For example, + // if your framerate is 24 fps, set this value to 24. + FramerateNumerator *int64 `locationName:"framerateNumerator" min:"1" type:"integer"` +} + +// String returns the string representation +func (s MotionImageInsertionFramerate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MotionImageInsertionFramerate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MotionImageInsertionFramerate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MotionImageInsertionFramerate"} + if s.FramerateDenominator != nil && *s.FramerateDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateDenominator", 1)) + } + if s.FramerateNumerator != nil && *s.FramerateNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateNumerator", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFramerateDenominator sets the FramerateDenominator field's value. +func (s *MotionImageInsertionFramerate) SetFramerateDenominator(v int64) *MotionImageInsertionFramerate { + s.FramerateDenominator = &v + return s +} + +// SetFramerateNumerator sets the FramerateNumerator field's value. +func (s *MotionImageInsertionFramerate) SetFramerateNumerator(v int64) *MotionImageInsertionFramerate { + s.FramerateNumerator = &v + return s +} + +// Specify the offset between the upper-left corner of the video frame and the +// top left corner of the overlay. +type MotionImageInsertionOffset struct { + _ struct{} `type:"structure"` + + // Set the distance, in pixels, between the overlay and the left edge of the + // video frame. + ImageX *int64 `locationName:"imageX" type:"integer"` + + // Set the distance, in pixels, between the overlay and the top edge of the + // video frame. + ImageY *int64 `locationName:"imageY" type:"integer"` +} + +// String returns the string representation +func (s MotionImageInsertionOffset) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MotionImageInsertionOffset) GoString() string { + return s.String() +} + +// SetImageX sets the ImageX field's value. +func (s *MotionImageInsertionOffset) SetImageX(v int64) *MotionImageInsertionOffset { + s.ImageX = &v + return s +} + +// SetImageY sets the ImageY field's value. +func (s *MotionImageInsertionOffset) SetImageY(v int64) *MotionImageInsertionOffset { + s.ImageY = &v + return s +} + // Settings for MOV Container. type MovSettings struct { _ struct{} `type:"structure"` @@ -8434,15 +11502,15 @@ type Mp2Settings struct { _ struct{} `type:"structure"` // Average bitrate in bits/second. - Bitrate *int64 `locationName:"bitrate" type:"integer"` + Bitrate *int64 `locationName:"bitrate" min:"32000" type:"integer"` // Set Channels to specify the number of channels in this output audio track. // Choosing Mono in the console will give you 1 output channel; choosing Stereo // will give you 2. In the API, valid values are 1 and 2. - Channels *int64 `locationName:"channels" type:"integer"` + Channels *int64 `locationName:"channels" min:"1" type:"integer"` // Sample rate in hz. 
- SampleRate *int64 `locationName:"sampleRate" type:"integer"` + SampleRate *int64 `locationName:"sampleRate" min:"32000" type:"integer"` } // String returns the string representation @@ -8455,6 +11523,25 @@ func (s Mp2Settings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *Mp2Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Mp2Settings"} + if s.Bitrate != nil && *s.Bitrate < 32000 { + invalidParams.Add(request.NewErrParamMinValue("Bitrate", 32000)) + } + if s.Channels != nil && *s.Channels < 1 { + invalidParams.Add(request.NewErrParamMinValue("Channels", 1)) + } + if s.SampleRate != nil && *s.SampleRate < 32000 { + invalidParams.Add(request.NewErrParamMinValue("SampleRate", 32000)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetBitrate sets the Bitrate field's value. func (s *Mp2Settings) SetBitrate(v int64) *Mp2Settings { s.Bitrate = &v @@ -8539,11 +11626,9 @@ type Mpeg2Settings struct { // quality. AdaptiveQuantization *string `locationName:"adaptiveQuantization" type:"string" enum:"Mpeg2AdaptiveQuantization"` - // Average bitrate in bits/second. Required for VBR, CBR, and ABR. Five megabits - // can be entered as 5000000 or 5m. Five hundred kilobits can be entered as - // 500000 or 0.5m. For MS Smooth outputs, bitrates must be unique when rounded - // down to the nearest multiple of 1000. - Bitrate *int64 `locationName:"bitrate" type:"integer"` + // Average bitrate in bits/second. Required for VBR and CBR. For MS Smooth outputs, + // bitrates must be unique when rounded down to the nearest multiple of 1000. + Bitrate *int64 `locationName:"bitrate" min:"1000" type:"integer"` // Use Level (Mpeg2CodecLevel) to set the MPEG-2 level for the video output. CodecLevel *string `locationName:"codecLevel" type:"string" enum:"Mpeg2CodecLevel"` @@ -8551,20 +11636,35 @@ type Mpeg2Settings struct { // Use Profile (Mpeg2CodecProfile) to set the MPEG-2 profile for the video output. CodecProfile *string `locationName:"codecProfile" type:"string" enum:"Mpeg2CodecProfile"` - // Using the API, set FramerateControl to INITIALIZE_FROM_SOURCE if you want - // the service to use the framerate from the input. Using the console, do this - // by choosing INITIALIZE_FROM_SOURCE for Framerate. + // Choose Adaptive to improve subjective video quality for high-motion content. + // This will cause the service to use fewer B-frames (which infer information + // based on other frames) for high-motion portions of the video and more B-frames + // for low-motion portions. The maximum number of B-frames is limited by the + // value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames). + DynamicSubGop *string `locationName:"dynamicSubGop" type:"string" enum:"Mpeg2DynamicSubGop"` + + // If you are using the console, use the Framerate setting to specify the framerate + // for this output. If you want to keep the same framerate as the input video, + // choose Follow source. If you want to do framerate conversion, choose a framerate + // from the dropdown list or choose Custom. The framerates shown in the dropdown + // list are decimal approximations of fractions. If you choose Custom, specify + // your framerate as a fraction. If you are creating your transcoding job sepecification + // as a JSON file without the console, use FramerateControl to specify which + // value the service uses for the framerate for this output. 
Choose INITIALIZE_FROM_SOURCE + // if you want the service to use the framerate from the input. Choose SPECIFIED + // if you want the service to use the framerate you specify in the settings + // FramerateNumerator and FramerateDenominator. FramerateControl *string `locationName:"framerateControl" type:"string" enum:"Mpeg2FramerateControl"` // When set to INTERPOLATE, produces smoother motion during framerate conversion. FramerateConversionAlgorithm *string `locationName:"framerateConversionAlgorithm" type:"string" enum:"Mpeg2FramerateConversionAlgorithm"` // Framerate denominator. - FramerateDenominator *int64 `locationName:"framerateDenominator" type:"integer"` + FramerateDenominator *int64 `locationName:"framerateDenominator" min:"1" type:"integer"` // Framerate numerator - framerate is a fraction, e.g. 24000 / 1001 = 23.976 // fps. - FramerateNumerator *int64 `locationName:"framerateNumerator" type:"integer"` + FramerateNumerator *int64 `locationName:"framerateNumerator" min:"24" type:"integer"` // Frequency of closed GOPs. In streaming applications, it is recommended that // this be set to 1 so a decoder joining mid-stream will receive an IDR frame @@ -8582,14 +11682,14 @@ type Mpeg2Settings struct { // Percentage of the buffer that should initially be filled (HRD buffer model). HrdBufferInitialFillPercentage *int64 `locationName:"hrdBufferInitialFillPercentage" type:"integer"` - // Size of buffer (HRD buffer model). Five megabits can be entered as 5000000 - // or 5m. Five hundred kilobits can be entered as 500000 or 0.5m. + // Size of buffer (HRD buffer model) in bits. For example, enter five megabits + // as 5000000. HrdBufferSize *int64 `locationName:"hrdBufferSize" type:"integer"` // Use Interlace mode (InterlaceMode) to choose the scan line type for the output. // * Top Field First (TOP_FIELD) and Bottom Field First (BOTTOM_FIELD) produce // interlaced output with the entire output having the same field polarity (top - // or bottom first). * Follow, Default Top (FOLLOw_TOP_FIELD) and Follow, Default + // or bottom first). * Follow, Default Top (FOLLOW_TOP_FIELD) and Follow, Default // Bottom (FOLLOW_BOTTOM_FIELD) use the same field polarity as the source. Therefore, // behavior depends on the input scan type. - If the source is interlaced, the // output will be interlaced with the same polarity as the source (it will follow @@ -8605,10 +11705,9 @@ type Mpeg2Settings struct { // ratio. IntraDcPrecision *string `locationName:"intraDcPrecision" type:"string" enum:"Mpeg2IntraDcPrecision"` - // Maximum bitrate in bits/second (for VBR mode only). Five megabits can be - // entered as 5000000 or 5m. Five hundred kilobits can be entered as 500000 - // or 0.5m. - MaxBitrate *int64 `locationName:"maxBitrate" type:"integer"` + // Maximum bitrate in bits/second. For example, enter five megabits per second + // as 5000000. + MaxBitrate *int64 `locationName:"maxBitrate" min:"1000" type:"integer"` // Enforces separation between repeated (cadence) I-frames and I-frames inserted // by Scene Change Detection. If a scene change I-frame is within I-interval @@ -8628,10 +11727,10 @@ type Mpeg2Settings struct { ParControl *string `locationName:"parControl" type:"string" enum:"Mpeg2ParControl"` // Pixel Aspect Ratio denominator. - ParDenominator *int64 `locationName:"parDenominator" type:"integer"` + ParDenominator *int64 `locationName:"parDenominator" min:"1" type:"integer"` // Pixel Aspect Ratio numerator. 
- ParNumerator *int64 `locationName:"parNumerator" type:"integer"` + ParNumerator *int64 `locationName:"parNumerator" min:"1" type:"integer"` // Use Quality tuning level (Mpeg2QualityTuningLevel) to specifiy whether to // use single-pass or multipass video encoding. @@ -8680,6 +11779,34 @@ func (s Mpeg2Settings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *Mpeg2Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Mpeg2Settings"} + if s.Bitrate != nil && *s.Bitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("Bitrate", 1000)) + } + if s.FramerateDenominator != nil && *s.FramerateDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateDenominator", 1)) + } + if s.FramerateNumerator != nil && *s.FramerateNumerator < 24 { + invalidParams.Add(request.NewErrParamMinValue("FramerateNumerator", 24)) + } + if s.MaxBitrate != nil && *s.MaxBitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("MaxBitrate", 1000)) + } + if s.ParDenominator != nil && *s.ParDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParDenominator", 1)) + } + if s.ParNumerator != nil && *s.ParNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParNumerator", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAdaptiveQuantization sets the AdaptiveQuantization field's value. func (s *Mpeg2Settings) SetAdaptiveQuantization(v string) *Mpeg2Settings { s.AdaptiveQuantization = &v @@ -8704,6 +11831,12 @@ func (s *Mpeg2Settings) SetCodecProfile(v string) *Mpeg2Settings { return s } +// SetDynamicSubGop sets the DynamicSubGop field's value. +func (s *Mpeg2Settings) SetDynamicSubGop(v string) *Mpeg2Settings { + s.DynamicSubGop = &v + return s +} + // SetFramerateControl sets the FramerateControl field's value. func (s *Mpeg2Settings) SetFramerateControl(v string) *Mpeg2Settings { s.FramerateControl = &v @@ -8907,7 +12040,7 @@ type MsSmoothGroupSettings struct { // Use Fragment length (FragmentLength) to specify the mp4 fragment sizes in // seconds. Fragment length must be compatible with GOP size and framerate. - FragmentLength *int64 `locationName:"fragmentLength" type:"integer"` + FragmentLength *int64 `locationName:"fragmentLength" min:"1" type:"integer"` // Use Manifest encoding (MsSmoothManifestEncoding) to specify the encoding // format for the server and client manifest. Valid options are utf8 and utf16. @@ -8924,6 +12057,19 @@ func (s MsSmoothGroupSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *MsSmoothGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MsSmoothGroupSettings"} + if s.FragmentLength != nil && *s.FragmentLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("FragmentLength", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioDeduplication sets the AudioDeduplication field's value. func (s *MsSmoothGroupSettings) SetAudioDeduplication(v string) *MsSmoothGroupSettings { s.AudioDeduplication = &v @@ -9000,8 +12146,8 @@ type NoiseReducer struct { // Use Noise reducer filter (NoiseReducerFilter) to select one of the following // spatial image filtering functions. To use this setting, you must also enable // Noise reducer (NoiseReducer). 
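The `Mpeg2Settings` hunk above introduces floors of 1000 bit/s for `Bitrate` and `MaxBitrate`, 24 for `FramerateNumerator`, and 1 for the PAR and framerate denominators, all enforced by the generated `Validate`. A short sketch of a value that now fails locally (import paths assumed as usual):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	mpeg2 := &mediaconvert.Mpeg2Settings{
		Bitrate:              aws.Int64(500), // below the new 1000 bit/s floor
		FramerateNumerator:   aws.Int64(24000),
		FramerateDenominator: aws.Int64(1001), // 24000/1001 = 23.976 fps
	}
	if err := mpeg2.Validate(); err != nil {
		fmt.Println(err) // min-value error for Bitrate
	}
}
```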
* Bilateral is an edge preserving noise reduction - // filter * Mean (softest), Gaussian, Lanczos, and Sharpen (sharpest) are convolution - // filters * Conserve is a min/max noise reduction filter * Spatial is frequency-domain + // filter. * Mean (softest), Gaussian, Lanczos, and Sharpen (sharpest) are convolution + // filters. * Conserve is a min/max noise reduction filter. * Spatial is a frequency-domain // filter based on JND principles. Filter *string `locationName:"filter" type:"string" enum:"NoiseReducerFilter"` @@ -9022,6 +12168,21 @@ func (s NoiseReducer) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *NoiseReducer) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NoiseReducer"} + if s.SpatialFilterSettings != nil { + if err := s.SpatialFilterSettings.Validate(); err != nil { + invalidParams.AddNested("SpatialFilterSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetFilter sets the Filter field's value. func (s *NoiseReducer) SetFilter(v string) *NoiseReducer { s.Filter = &v @@ -9092,6 +12253,19 @@ func (s NoiseReducerSpatialFilterSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *NoiseReducerSpatialFilterSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NoiseReducerSpatialFilterSettings"} + if s.Speed != nil && *s.Speed < -2 { + invalidParams.Add(request.NewErrParamMinValue("Speed", -2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetPostFilterSharpenStrength sets the PostFilterSharpenStrength field's value. func (s *NoiseReducerSpatialFilterSettings) SetPostFilterSharpenStrength(v int64) *NoiseReducerSpatialFilterSettings { s.PostFilterSharpenStrength = &v @@ -9142,7 +12316,7 @@ type Output struct { // identifiers. For DASH ISO outputs, if you use the format identifiers $Number$ // or $Time$ in one output, you must use them in the same way in all outputs // of the output group. - NameModifier *string `locationName:"nameModifier" type:"string"` + NameModifier *string `locationName:"nameModifier" min:"1" type:"string"` // Specific settings for this type of output. OutputSettings *OutputSettings `locationName:"outputSettings" type:"structure"` @@ -9168,6 +12342,49 @@ func (s Output) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *Output) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Output"} + if s.NameModifier != nil && len(*s.NameModifier) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NameModifier", 1)) + } + if s.AudioDescriptions != nil { + for i, v := range s.AudioDescriptions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AudioDescriptions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.CaptionDescriptions != nil { + for i, v := range s.CaptionDescriptions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionDescriptions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ContainerSettings != nil { + if err := s.ContainerSettings.Validate(); err != nil { + invalidParams.AddNested("ContainerSettings", err.(request.ErrInvalidParams)) + } + } + if s.VideoDescription != nil { + if err := s.VideoDescription.Validate(); err != nil { + invalidParams.AddNested("VideoDescription", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioDescriptions sets the AudioDescriptions field's value. func (s *Output) SetAudioDescriptions(v []*AudioDescription) *Output { s.AudioDescriptions = v @@ -9304,6 +12521,31 @@ func (s OutputGroup) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *OutputGroup) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputGroup"} + if s.OutputGroupSettings != nil { + if err := s.OutputGroupSettings.Validate(); err != nil { + invalidParams.AddNested("OutputGroupSettings", err.(request.ErrInvalidParams)) + } + } + if s.Outputs != nil { + for i, v := range s.Outputs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Outputs", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCustomName sets the CustomName field's value. func (s *OutputGroup) SetCustomName(v string) *OutputGroup { s.CustomName = &v @@ -9356,6 +12598,11 @@ func (s *OutputGroupDetail) SetOutputDetails(v []*OutputDetail) *OutputGroupDeta type OutputGroupSettings struct { _ struct{} `type:"structure"` + // Required when you set (Type) under (OutputGroups)>(OutputGroupSettings) to + // CMAF_GROUP_SETTINGS. Each output in a CMAF Output Group may only contain + // a single video, audio, or caption output. + CmafGroupSettings *CmafGroupSettings `locationName:"cmafGroupSettings" type:"structure"` + // Required when you set (Type) under (OutputGroups)>(OutputGroupSettings) to // DASH_ISO_GROUP_SETTINGS. DashIsoGroupSettings *DashIsoGroupSettings `locationName:"dashIsoGroupSettings" type:"structure"` @@ -9372,7 +12619,8 @@ type OutputGroupSettings struct { // MS_SMOOTH_GROUP_SETTINGS. MsSmoothGroupSettings *MsSmoothGroupSettings `locationName:"msSmoothGroupSettings" type:"structure"` - // Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming) + // Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming, + // CMAF) Type *string `locationName:"type" type:"string" enum:"OutputGroupType"` } @@ -9386,6 +12634,42 @@ func (s OutputGroupSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *OutputGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputGroupSettings"} + if s.CmafGroupSettings != nil { + if err := s.CmafGroupSettings.Validate(); err != nil { + invalidParams.AddNested("CmafGroupSettings", err.(request.ErrInvalidParams)) + } + } + if s.DashIsoGroupSettings != nil { + if err := s.DashIsoGroupSettings.Validate(); err != nil { + invalidParams.AddNested("DashIsoGroupSettings", err.(request.ErrInvalidParams)) + } + } + if s.HlsGroupSettings != nil { + if err := s.HlsGroupSettings.Validate(); err != nil { + invalidParams.AddNested("HlsGroupSettings", err.(request.ErrInvalidParams)) + } + } + if s.MsSmoothGroupSettings != nil { + if err := s.MsSmoothGroupSettings.Validate(); err != nil { + invalidParams.AddNested("MsSmoothGroupSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCmafGroupSettings sets the CmafGroupSettings field's value. +func (s *OutputGroupSettings) SetCmafGroupSettings(v *CmafGroupSettings) *OutputGroupSettings { + s.CmafGroupSettings = v + return s +} + // SetDashIsoGroupSettings sets the DashIsoGroupSettings field's value. func (s *OutputGroupSettings) SetDashIsoGroupSettings(v *DashIsoGroupSettings) *OutputGroupSettings { s.DashIsoGroupSettings = v @@ -9452,19 +12736,23 @@ type Preset struct { Category *string `locationName:"category" type:"string"` // The timestamp in epoch seconds for preset creation. - CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unixTimestamp"` // An optional description you create for each preset. Description *string `locationName:"description" type:"string"` // The timestamp in epoch seconds when the preset was last updated. - LastUpdated *time.Time `locationName:"lastUpdated" type:"timestamp" timestampFormat:"unix"` + LastUpdated *time.Time `locationName:"lastUpdated" type:"timestamp" timestampFormat:"unixTimestamp"` // A name you create for each preset. Each name must be unique within your account. - Name *string `locationName:"name" type:"string"` + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` // Settings for preset - Settings *PresetSettings `locationName:"settings" type:"structure"` + // + // Settings is a required field + Settings *PresetSettings `locationName:"settings" type:"structure" required:"true"` // A preset can be of two types: system or custom. System or built-in preset // can't be modified or deleted by the user. @@ -9561,6 +12849,46 @@ func (s PresetSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
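`OutputGroupSettings` gains a `CmafGroupSettings` member, and its `Validate` now descends into it alongside the existing DASH, HLS, and Smooth groups. A rough sketch; the empty `CmafGroupSettings` body is purely illustrative, and `CMAF_GROUP_SETTINGS` is the type string named in the doc comment above:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	// Validate walks into the nested CMAF settings; any problems there are
	// reported with the CmafGroupSettings prefix.
	ogs := &mediaconvert.OutputGroupSettings{
		Type:              aws.String("CMAF_GROUP_SETTINGS"),
		CmafGroupSettings: &mediaconvert.CmafGroupSettings{}, // illustrative, not a complete configuration
	}
	if err := ogs.Validate(); err != nil {
		fmt.Println(err)
	}
}
```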
+func (s *PresetSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PresetSettings"} + if s.AudioDescriptions != nil { + for i, v := range s.AudioDescriptions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AudioDescriptions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.CaptionDescriptions != nil { + for i, v := range s.CaptionDescriptions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionDescriptions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ContainerSettings != nil { + if err := s.ContainerSettings.Validate(); err != nil { + invalidParams.AddNested("ContainerSettings", err.(request.ErrInvalidParams)) + } + } + if s.VideoDescription != nil { + if err := s.VideoDescription.Validate(); err != nil { + invalidParams.AddNested("VideoDescription", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAudioDescriptions sets the AudioDescriptions field's value. func (s *PresetSettings) SetAudioDescriptions(v []*AudioDescription) *PresetSettings { s.AudioDescriptions = v @@ -9594,27 +12922,35 @@ type ProresSettings struct { // to use for this output. CodecProfile *string `locationName:"codecProfile" type:"string" enum:"ProresCodecProfile"` - // Using the API, set FramerateControl to INITIALIZE_FROM_SOURCE if you want - // the service to use the framerate from the input. Using the console, do this - // by choosing INITIALIZE_FROM_SOURCE for Framerate. + // If you are using the console, use the Framerate setting to specify the framerate + // for this output. If you want to keep the same framerate as the input video, + // choose Follow source. If you want to do framerate conversion, choose a framerate + // from the dropdown list or choose Custom. The framerates shown in the dropdown + // list are decimal approximations of fractions. If you choose Custom, specify + // your framerate as a fraction. If you are creating your transcoding job sepecification + // as a JSON file without the console, use FramerateControl to specify which + // value the service uses for the framerate for this output. Choose INITIALIZE_FROM_SOURCE + // if you want the service to use the framerate from the input. Choose SPECIFIED + // if you want the service to use the framerate you specify in the settings + // FramerateNumerator and FramerateDenominator. FramerateControl *string `locationName:"framerateControl" type:"string" enum:"ProresFramerateControl"` // When set to INTERPOLATE, produces smoother motion during framerate conversion. FramerateConversionAlgorithm *string `locationName:"framerateConversionAlgorithm" type:"string" enum:"ProresFramerateConversionAlgorithm"` // Framerate denominator. - FramerateDenominator *int64 `locationName:"framerateDenominator" type:"integer"` + FramerateDenominator *int64 `locationName:"framerateDenominator" min:"1" type:"integer"` // When you use the API for transcode jobs that use framerate conversion, specify // the framerate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use // FramerateNumerator to specify the numerator of this fraction. In this example, // use 24000 for the value of FramerateNumerator. 
- FramerateNumerator *int64 `locationName:"framerateNumerator" type:"integer"` + FramerateNumerator *int64 `locationName:"framerateNumerator" min:"1" type:"integer"` // Use Interlace mode (InterlaceMode) to choose the scan line type for the output. // * Top Field First (TOP_FIELD) and Bottom Field First (BOTTOM_FIELD) produce // interlaced output with the entire output having the same field polarity (top - // or bottom first). * Follow, Default Top (FOLLOw_TOP_FIELD) and Follow, Default + // or bottom first). * Follow, Default Top (FOLLOW_TOP_FIELD) and Follow, Default // Bottom (FOLLOW_BOTTOM_FIELD) use the same field polarity as the source. Therefore, // behavior depends on the input scan type. - If the source is interlaced, the // output will be interlaced with the same polarity as the source (it will follow @@ -9632,10 +12968,10 @@ type ProresSettings struct { ParControl *string `locationName:"parControl" type:"string" enum:"ProresParControl"` // Pixel Aspect Ratio denominator. - ParDenominator *int64 `locationName:"parDenominator" type:"integer"` + ParDenominator *int64 `locationName:"parDenominator" min:"1" type:"integer"` // Pixel Aspect Ratio numerator. - ParNumerator *int64 `locationName:"parNumerator" type:"integer"` + ParNumerator *int64 `locationName:"parNumerator" min:"1" type:"integer"` // Enables Slow PAL rate conversion. 23.976fps and 24fps input is relabeled // as 25fps, and audio is sped up correspondingly. @@ -9658,6 +12994,28 @@ func (s ProresSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *ProresSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ProresSettings"} + if s.FramerateDenominator != nil && *s.FramerateDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateDenominator", 1)) + } + if s.FramerateNumerator != nil && *s.FramerateNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("FramerateNumerator", 1)) + } + if s.ParDenominator != nil && *s.ParDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParDenominator", 1)) + } + if s.ParNumerator != nil && *s.ParNumerator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParNumerator", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCodecProfile sets the CodecProfile field's value. func (s *ProresSettings) SetCodecProfile(v string) *ProresSettings { s.CodecProfile = &v @@ -9724,34 +13082,56 @@ func (s *ProresSettings) SetTelecine(v string) *ProresSettings { return s } -// MediaConvert jobs are submitted to a queue. Unless specified otherwise jobs -// are submitted to a built-in default queue. User can create additional queues -// to separate the jobs of different categories or priority. +// You can use queues to manage the resources that are available to your AWS +// account for running multiple transcoding jobs at the same time. If you don't +// specify a queue, the service sends all jobs through the default queue. For +// more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/about-resource-allocation-and-job-prioritization.html. type Queue struct { _ struct{} `type:"structure"` // An identifier for this resource that is unique within all of AWS. Arn *string `locationName:"arn" type:"string"` - // The timestamp in epoch seconds for queue creation. 
- CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` + // The time stamp in epoch seconds for queue creation. + CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unixTimestamp"` - // An optional description you create for each queue. + // An optional description that you create for each queue. Description *string `locationName:"description" type:"string"` - // The timestamp in epoch seconds when the queue was last updated. - LastUpdated *time.Time `locationName:"lastUpdated" type:"timestamp" timestampFormat:"unix"` + // The time stamp in epoch seconds when the queue was last updated. + LastUpdated *time.Time `locationName:"lastUpdated" type:"timestamp" timestampFormat:"unixTimestamp"` - // A name you create for each queue. Each name must be unique within your account. - Name *string `locationName:"name" type:"string"` + // A name that you create for each queue. Each name must be unique within your + // account. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // Specifies whether the pricing plan for the queue is On-demand or Reserved. + // The pricing plan for the queue determines whether you pay On-demand or Reserved + // pricing for the transcoding jobs that you run through the queue. For Reserved + // queue pricing, you must set up a contract. You can create a Reserved queue + // contract through the AWS Elemental MediaConvert console. + PricingPlan *string `locationName:"pricingPlan" type:"string" enum:"PricingPlan"` - // Queues can be ACTIVE or PAUSED. If you pause a queue, jobs in that queue - // will not begin. Jobs running when a queue is paused continue to run until - // they finish or error out. + // The estimated number of jobs with a PROGRESSING status. + ProgressingJobsCount *int64 `locationName:"progressingJobsCount" type:"integer"` + + // Details about the pricing plan for your reserved queue. Required for reserved + // queues and not applicable to on-demand queues. + ReservationPlan *ReservationPlan `locationName:"reservationPlan" type:"structure"` + + // Queues can be ACTIVE or PAUSED. If you pause a queue, the service won't begin + // processing jobs in that queue. Jobs that are running when you pause the queue + // continue to run until they finish or result in an error. Status *string `locationName:"status" type:"string" enum:"QueueStatus"` - // A queue can be of two types: system or custom. System or built-in queues - // can't be modified or deleted by the user. + // The estimated number of jobs with a SUBMITTED status. + SubmittedJobsCount *int64 `locationName:"submittedJobsCount" type:"integer"` + + // Specifies whether this queue is system or custom. System queues are built + // in. You can't modify or delete system queues. You can create and modify custom + // queues. Type *string `locationName:"type" type:"string" enum:"Type"` } @@ -9795,12 +13175,36 @@ func (s *Queue) SetName(v string) *Queue { return s } +// SetPricingPlan sets the PricingPlan field's value. +func (s *Queue) SetPricingPlan(v string) *Queue { + s.PricingPlan = &v + return s +} + +// SetProgressingJobsCount sets the ProgressingJobsCount field's value. +func (s *Queue) SetProgressingJobsCount(v int64) *Queue { + s.ProgressingJobsCount = &v + return s +} + +// SetReservationPlan sets the ReservationPlan field's value. +func (s *Queue) SetReservationPlan(v *ReservationPlan) *Queue { + s.ReservationPlan = v + return s +} + // SetStatus sets the Status field's value. 
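The `Queue` shape above picks up read-only `PricingPlan`, `ProgressingJobsCount`, `SubmittedJobsCount`, and `ReservationPlan` fields. A hedged sketch that lists queues and prints the new fields, assuming the usual session setup (MediaConvert's account-specific endpoint is again omitted):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := mediaconvert.New(sess)

	out, err := svc.ListQueues(&mediaconvert.ListQueuesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, q := range out.Queues {
		// PricingPlan and the job counters are the new read-only fields.
		fmt.Printf("%s plan=%s progressing=%d submitted=%d\n",
			aws.StringValue(q.Name),
			aws.StringValue(q.PricingPlan),
			aws.Int64Value(q.ProgressingJobsCount),
			aws.Int64Value(q.SubmittedJobsCount))
	}
}
```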
func (s *Queue) SetStatus(v string) *Queue { s.Status = &v return s } +// SetSubmittedJobsCount sets the SubmittedJobsCount field's value. +func (s *Queue) SetSubmittedJobsCount(v int64) *Queue { + s.SubmittedJobsCount = &v + return s +} + // SetType sets the Type field's value. func (s *Queue) SetType(v string) *Queue { s.Type = &v @@ -9811,18 +13215,18 @@ func (s *Queue) SetType(v string) *Queue { type Rectangle struct { _ struct{} `type:"structure"` - // Height of rectangle in pixels. - Height *int64 `locationName:"height" type:"integer"` + // Height of rectangle in pixels. Specify only even numbers. + Height *int64 `locationName:"height" min:"2" type:"integer"` - // Width of rectangle in pixels. - Width *int64 `locationName:"width" type:"integer"` + // Width of rectangle in pixels. Specify only even numbers. + Width *int64 `locationName:"width" min:"2" type:"integer"` // The distance, in pixels, between the rectangle and the left edge of the video - // frame. + // frame. Specify only even numbers. X *int64 `locationName:"x" type:"integer"` // The distance, in pixels, between the rectangle and the top edge of the video - // frame. + // frame. Specify only even numbers. Y *int64 `locationName:"y" type:"integer"` } @@ -9836,6 +13240,22 @@ func (s Rectangle) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *Rectangle) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Rectangle"} + if s.Height != nil && *s.Height < 2 { + invalidParams.Add(request.NewErrParamMinValue("Height", 2)) + } + if s.Width != nil && *s.Width < 2 { + invalidParams.Add(request.NewErrParamMinValue("Width", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetHeight sets the Height field's value. func (s *Rectangle) SetHeight(v int64) *Rectangle { s.Height = &v @@ -9861,8 +13281,8 @@ func (s *Rectangle) SetY(v int64) *Rectangle { } // Use Manual audio remixing (RemixSettings) to adjust audio levels for each -// output channel. With audio remixing, you can output more or fewer audio channels -// than your input audio source provides. +// audio channel in each output of your job. With audio remixing, you can output +// more or fewer audio channels than your input audio source provides. type RemixSettings struct { _ struct{} `type:"structure"` @@ -9875,11 +13295,11 @@ type RemixSettings struct { // Specify the number of audio channels from your input that you want to use // in your output. With remixing, you might combine or split the data in these // channels, so the number of channels in your final output might be different. - ChannelsIn *int64 `locationName:"channelsIn" type:"integer"` + ChannelsIn *int64 `locationName:"channelsIn" min:"1" type:"integer"` // Specify the number of channels in this output after remixing. Valid values: // 1, 2, 4, 6, 8 - ChannelsOut *int64 `locationName:"channelsOut" type:"integer"` + ChannelsOut *int64 `locationName:"channelsOut" min:"1" type:"integer"` } // String returns the string representation @@ -9892,6 +13312,22 @@ func (s RemixSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
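`Rectangle` now carries `min:"2"` on `Height` and `Width`, and the doc comments ask for even values; the generated `Validate` only enforces the minimums, so the even-number requirement stays documentation-only at this layer. A minimal sketch of a crop rectangle that trips the new check:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	crop := &mediaconvert.Rectangle{
		Width:  aws.Int64(1280),
		Height: aws.Int64(1), // below the new minimum of 2
		X:      aws.Int64(0),
		Y:      aws.Int64(0),
	}
	if err := crop.Validate(); err != nil {
		fmt.Println(err) // min-value error for Height
	}
}
```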
+func (s *RemixSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemixSettings"} + if s.ChannelsIn != nil && *s.ChannelsIn < 1 { + invalidParams.Add(request.NewErrParamMinValue("ChannelsIn", 1)) + } + if s.ChannelsOut != nil && *s.ChannelsOut < 1 { + invalidParams.Add(request.NewErrParamMinValue("ChannelsOut", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetChannelMapping sets the ChannelMapping field's value. func (s *RemixSettings) SetChannelMapping(v *ChannelMapping) *RemixSettings { s.ChannelMapping = v @@ -9910,6 +13346,190 @@ func (s *RemixSettings) SetChannelsOut(v int64) *RemixSettings { return s } +// Details about the pricing plan for your reserved queue. Required for reserved +// queues and not applicable to on-demand queues. +type ReservationPlan struct { + _ struct{} `type:"structure"` + + // The length of time that you commit to when you set up a pricing plan contract + // for a reserved queue. + Commitment *string `locationName:"commitment" type:"string" enum:"Commitment"` + + // The time stamp, in epoch seconds, for when the pricing plan for this reserved + // queue expires. + ExpiresAt *time.Time `locationName:"expiresAt" type:"timestamp" timestampFormat:"unixTimestamp"` + + // The time stamp in epoch seconds when the reserved queue's reservation plan + // was created. + PurchasedAt *time.Time `locationName:"purchasedAt" type:"timestamp" timestampFormat:"unixTimestamp"` + + // Specifies whether the pricing plan contract for your reserved queue automatically + // renews (AUTO_RENEW) or expires (EXPIRE) at the end of the contract period. + RenewalType *string `locationName:"renewalType" type:"string" enum:"RenewalType"` + + // Specifies the number of reserved transcode slots (RTS) for this queue. The + // number of RTS determines how many jobs the queue can process in parallel; + // each RTS can process one job at a time. To increase this number, create a + // replacement contract through the AWS Elemental MediaConvert console. + ReservedSlots *int64 `locationName:"reservedSlots" type:"integer"` + + // Specifies whether the pricing plan for your reserved queue is ACTIVE or EXPIRED. + Status *string `locationName:"status" type:"string" enum:"ReservationPlanStatus"` +} + +// String returns the string representation +func (s ReservationPlan) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReservationPlan) GoString() string { + return s.String() +} + +// SetCommitment sets the Commitment field's value. +func (s *ReservationPlan) SetCommitment(v string) *ReservationPlan { + s.Commitment = &v + return s +} + +// SetExpiresAt sets the ExpiresAt field's value. +func (s *ReservationPlan) SetExpiresAt(v time.Time) *ReservationPlan { + s.ExpiresAt = &v + return s +} + +// SetPurchasedAt sets the PurchasedAt field's value. +func (s *ReservationPlan) SetPurchasedAt(v time.Time) *ReservationPlan { + s.PurchasedAt = &v + return s +} + +// SetRenewalType sets the RenewalType field's value. +func (s *ReservationPlan) SetRenewalType(v string) *ReservationPlan { + s.RenewalType = &v + return s +} + +// SetReservedSlots sets the ReservedSlots field's value. +func (s *ReservationPlan) SetReservedSlots(v int64) *ReservationPlan { + s.ReservedSlots = &v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *ReservationPlan) SetStatus(v string) *ReservationPlan { + s.Status = &v + return s +} + +// Details about the pricing plan for your reserved queue. Required for reserved +// queues and not applicable to on-demand queues. +type ReservationPlanSettings struct { + _ struct{} `type:"structure"` + + // The length of time that you commit to when you set up a pricing plan contract + // for a reserved queue. + // + // Commitment is a required field + Commitment *string `locationName:"commitment" type:"string" required:"true" enum:"Commitment"` + + // Specifies whether the pricing plan contract for your reserved queue automatically + // renews (AUTO_RENEW) or expires (EXPIRE) at the end of the contract period. + // + // RenewalType is a required field + RenewalType *string `locationName:"renewalType" type:"string" required:"true" enum:"RenewalType"` + + // Specifies the number of reserved transcode slots (RTS) for this queue. The + // number of RTS determines how many jobs the queue can process in parallel; + // each RTS can process one job at a time. To increase this number, create a + // replacement contract through the AWS Elemental MediaConvert console. + // + // ReservedSlots is a required field + ReservedSlots *int64 `locationName:"reservedSlots" type:"integer" required:"true"` +} + +// String returns the string representation +func (s ReservationPlanSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReservationPlanSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReservationPlanSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReservationPlanSettings"} + if s.Commitment == nil { + invalidParams.Add(request.NewErrParamRequired("Commitment")) + } + if s.RenewalType == nil { + invalidParams.Add(request.NewErrParamRequired("RenewalType")) + } + if s.ReservedSlots == nil { + invalidParams.Add(request.NewErrParamRequired("ReservedSlots")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCommitment sets the Commitment field's value. +func (s *ReservationPlanSettings) SetCommitment(v string) *ReservationPlanSettings { + s.Commitment = &v + return s +} + +// SetRenewalType sets the RenewalType field's value. +func (s *ReservationPlanSettings) SetRenewalType(v string) *ReservationPlanSettings { + s.RenewalType = &v + return s +} + +// SetReservedSlots sets the ReservedSlots field's value. +func (s *ReservationPlanSettings) SetReservedSlots(v int64) *ReservationPlanSettings { + s.ReservedSlots = &v + return s +} + +// The Amazon Resource Name (ARN) and tags for an AWS Elemental MediaConvert +// resource. +type ResourceTags struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource. + Arn *string `locationName:"arn" type:"string"` + + // The tags for the resource. + Tags map[string]*string `locationName:"tags" type:"map"` +} + +// String returns the string representation +func (s ResourceTags) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceTags) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *ResourceTags) SetArn(v string) *ResourceTags { + s.Arn = &v + return s +} + +// SetTags sets the Tags field's value. 
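`ReservationPlanSettings` marks `Commitment`, `RenewalType`, and `ReservedSlots` as required, so an empty settings block fails validation with three missing-parameter errors. A minimal sketch using only the generated type above:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	// All three fields are required; an empty struct is rejected client-side.
	rps := &mediaconvert.ReservationPlanSettings{}
	if err := rps.Validate(); err != nil {
		fmt.Println(err)
	}
}
```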
+func (s *ResourceTags) SetTags(v map[string]*string) *ResourceTags { + s.Tags = v + return s +} + // Settings for SCC caption output. type SccDestinationSettings struct { _ struct{} `type:"structure"` @@ -9942,6 +13562,11 @@ func (s *SccDestinationSettings) SetFramerate(v string) *SccDestinationSettings type SpekeKeyProvider struct { _ struct{} `type:"structure"` + // Optional AWS Certificate Manager ARN for a certificate to send to the keyprovider. + // The certificate holds a key used by the keyprovider to encrypt the keys in + // its response. + CertificateArn *string `locationName:"certificateArn" type:"string"` + // The SPEKE-compliant server uses Resource ID (ResourceId) to identify content. ResourceId *string `locationName:"resourceId" type:"string"` @@ -9964,6 +13589,12 @@ func (s SpekeKeyProvider) GoString() string { return s.String() } +// SetCertificateArn sets the CertificateArn field's value. +func (s *SpekeKeyProvider) SetCertificateArn(v string) *SpekeKeyProvider { + s.CertificateArn = &v + return s +} + // SetResourceId sets the ResourceId field's value. func (s *SpekeKeyProvider) SetResourceId(v string) *SpekeKeyProvider { s.ResourceId = &v @@ -9982,7 +13613,7 @@ func (s *SpekeKeyProvider) SetUrl(v string) *SpekeKeyProvider { return s } -// Settings for use with a SPEKE key provider. +// Use these settings to set up encryption with a static key provider. type StaticKeyProvider struct { _ struct{} `type:"structure"` @@ -10038,6 +13669,78 @@ func (s *StaticKeyProvider) SetUrl(v string) *StaticKeyProvider { return s } +// To add tags to a queue, preset, or job template, send a request with the +// Amazon Resource Name (ARN) of the resource and the tags that you want to +// add. +type TagResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource that you want to tag. To get + // the ARN, send a GET request with the resource name. + // + // Arn is a required field + Arn *string `locationName:"arn" type:"string" required:"true"` + + // The tags that you want to add to the resource. You can tag resources with + // a key-value pair or with only a key. + // + // Tags is a required field + Tags map[string]*string `locationName:"tags" type:"map" required:"true"` +} + +// String returns the string representation +func (s TagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagResourceInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *TagResourceInput) SetArn(v string) *TagResourceInput { + s.Arn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *TagResourceInput) SetTags(v map[string]*string) *TagResourceInput { + s.Tags = v + return s +} + +// A successful request to add tags to a resource returns an OK message. 
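`TagResourceInput` above requires both `Arn` and `Tags`. A hedged sketch of the corresponding client call, with a placeholder preset ARN and the account-specific endpoint omitted:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := mediaconvert.New(sess) // account-specific endpoint omitted

	// Both Arn and Tags are required; the ARN below is a placeholder.
	_, err := svc.TagResource(&mediaconvert.TagResourceInput{
		Arn: aws.String("arn:aws:mediaconvert:us-east-1:111122223333:presets/my-preset"),
		Tags: map[string]*string{
			"team": aws.String("video"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```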
+type TagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s TagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceOutput) GoString() string { + return s.String() +} + // Settings for Teletext caption output type TeletextDestinationSettings struct { _ struct{} `type:"structure"` @@ -10046,7 +13749,7 @@ type TeletextDestinationSettings struct { // this output. This value must be a three-digit hexadecimal string; strings // ending in -FF are invalid. If you are passing through the entire set of Teletext // data, do not use this field. - PageNumber *string `locationName:"pageNumber" type:"string"` + PageNumber *string `locationName:"pageNumber" min:"3" type:"string"` } // String returns the string representation @@ -10059,6 +13762,19 @@ func (s TeletextDestinationSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *TeletextDestinationSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TeletextDestinationSettings"} + if s.PageNumber != nil && len(*s.PageNumber) < 3 { + invalidParams.Add(request.NewErrParamMinLen("PageNumber", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetPageNumber sets the PageNumber field's value. func (s *TeletextDestinationSettings) SetPageNumber(v string) *TeletextDestinationSettings { s.PageNumber = &v @@ -10072,7 +13788,7 @@ type TeletextSourceSettings struct { // Use Page Number (PageNumber) to specify the three-digit hexadecimal page // number that will be used for Teletext captions. Do not use this setting if // you are passing through teletext from the input source to output. - PageNumber *string `locationName:"pageNumber" type:"string"` + PageNumber *string `locationName:"pageNumber" min:"3" type:"string"` } // String returns the string representation @@ -10085,6 +13801,19 @@ func (s TeletextSourceSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *TeletextSourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TeletextSourceSettings"} + if s.PageNumber != nil && len(*s.PageNumber) < 3 { + invalidParams.Add(request.NewErrParamMinLen("PageNumber", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetPageNumber sets the PageNumber field's value. func (s *TeletextSourceSettings) SetPageNumber(v string) *TeletextSourceSettings { s.PageNumber = &v @@ -10098,7 +13827,7 @@ type TimecodeBurnin struct { // Use Font Size (FontSize) to set the font size of any burned-in timecode. // Valid values are 10, 16, 32, 48. - FontSize *int64 `locationName:"fontSize" type:"integer"` + FontSize *int64 `locationName:"fontSize" min:"10" type:"integer"` // Use Position (Position) under under Timecode burn-in (TimecodeBurnIn) to // specify the location the burned-in timecode on output video. @@ -10122,6 +13851,19 @@ func (s TimecodeBurnin) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *TimecodeBurnin) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TimecodeBurnin"} + if s.FontSize != nil && *s.FontSize < 10 { + invalidParams.Add(request.NewErrParamMinValue("FontSize", 10)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetFontSize sets the FontSize field's value. func (s *TimecodeBurnin) SetFontSize(v int64) *TimecodeBurnin { s.FontSize = &v @@ -10140,7 +13882,8 @@ func (s *TimecodeBurnin) SetPrefix(v string) *TimecodeBurnin { return s } -// Contains settings used to acquire and adjust timecode information from inputs. +// These settings control how the service handles timecodes throughout the job. +// These settings don't affect input clipping. type TimecodeConfig struct { _ struct{} `type:"structure"` @@ -10148,40 +13891,40 @@ type TimecodeConfig struct { // Timecode (Anchor) to specify a timecode that will match the input video frame // to the output video frame. Use 24-hour format with frame number, (HH:MM:SS:FF) // or (HH:MM:SS;FF). This setting ignores framerate conversion. System behavior - // for Anchor Timecode varies depending on your setting for Timecode source - // (TimecodeSource). * If Timecode source (TimecodeSource) is set to Specified - // Start (specifiedstart), the first input frame is the specified value in Start - // Timecode (Start). Anchor Timecode (Anchor) and Start Timecode (Start) are - // used calculate output timecode. * If Timecode source (TimecodeSource) is - // set to Start at 0 (zerobased) the first frame is 00:00:00:00. * If Timecode - // source (TimecodeSource) is set to Embedded (embedded), the first frame is - // the timecode value on the first input frame of the input. + // for Anchor Timecode varies depending on your setting for Source (TimecodeSource). + // * If Source (TimecodeSource) is set to Specified Start (SPECIFIEDSTART), + // the first input frame is the specified value in Start Timecode (Start). Anchor + // Timecode (Anchor) and Start Timecode (Start) are used calculate output timecode. + // * If Source (TimecodeSource) is set to Start at 0 (ZEROBASED) the first frame + // is 00:00:00:00. * If Source (TimecodeSource) is set to Embedded (EMBEDDED), + // the first frame is the timecode value on the first input frame of the input. Anchor *string `locationName:"anchor" type:"string"` - // Use Timecode source (TimecodeSource) to set how timecodes are handled within - // this input. To make sure that your video, audio, captions, and markers are - // synchronized and that time-based features, such as image inserter, work correctly, - // choose the Timecode source option that matches your assets. All timecodes - // are in a 24-hour format with frame number (HH:MM:SS:FF). * Embedded (EMBEDDED) - // - Use the timecode that is in the input video. If no embedded timecode is - // in the source, the service will use Start at 0 (ZEROBASED) instead. * Start + // Use Source (TimecodeSource) to set how timecodes are handled within this + // job. To make sure that your video, audio, captions, and markers are synchronized + // and that time-based features, such as image inserter, work correctly, choose + // the Timecode source option that matches your assets. All timecodes are in + // a 24-hour format with frame number (HH:MM:SS:FF). * Embedded (EMBEDDED) - + // Use the timecode that is in the input video. If no embedded timecode is in + // the source, the service will use Start at 0 (ZEROBASED) instead. 
* Start // at 0 (ZEROBASED) - Set the timecode of the initial frame to 00:00:00:00. // * Specified Start (SPECIFIEDSTART) - Set the timecode of the initial frame // to a value other than zero. You use Start timecode (Start) to provide this // value. Source *string `locationName:"source" type:"string" enum:"TimecodeSource"` - // Only use when you set Timecode Source (TimecodeSource) to Specified Start - // (SPECIFIEDSTART). Use Start timecode (Start) to specify the timecode for - // the initial frame. Use 24-hour format with frame number, (HH:MM:SS:FF) or - // (HH:MM:SS;FF). + // Only use when you set Source (TimecodeSource) to Specified start (SPECIFIEDSTART). + // Use Start timecode (Start) to specify the timecode for the initial frame. + // Use 24-hour format with frame number, (HH:MM:SS:FF) or (HH:MM:SS;FF). Start *string `locationName:"start" type:"string"` - // Only applies to outputs that support program-date-time stamp. Use Time stamp + // Only applies to outputs that support program-date-time stamp. Use Timestamp // offset (TimestampOffset) to overwrite the timecode date without affecting // the time and frame number. Provide the new date as a string in the format // "yyyy-mm-dd". To use Time stamp offset, you must also enable Insert program-date-time - // (InsertProgramDateTime) in the output settings. + // (InsertProgramDateTime) in the output settings. For example, if the date + // part of your timecodes is 2002-1-25 and you want to change it to one year + // later, set Timestamp offset (TimestampOffset) to 2003-1-25. TimestampOffset *string `locationName:"timestampOffset" type:"string"` } @@ -10252,13 +13995,13 @@ type Timing struct { _ struct{} `type:"structure"` // The time, in Unix epoch format, that the transcoding job finished - FinishTime *time.Time `locationName:"finishTime" type:"timestamp" timestampFormat:"unix"` + FinishTime *time.Time `locationName:"finishTime" type:"timestamp" timestampFormat:"unixTimestamp"` // The time, in Unix epoch format, that transcoding for the job began. - StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `locationName:"startTime" type:"timestamp" timestampFormat:"unixTimestamp"` // The time, in Unix epoch format, that you submitted the job. - SubmitTime *time.Time `locationName:"submitTime" type:"timestamp" timestampFormat:"unix"` + SubmitTime *time.Time `locationName:"submitTime" type:"timestamp" timestampFormat:"unixTimestamp"` } // String returns the string representation @@ -10315,6 +14058,71 @@ func (s *TtmlDestinationSettings) SetStylePassthrough(v string) *TtmlDestination return s } +// To remove tags from a resource, send a request with the Amazon Resource Name +// (ARN) of the resource and the keys of the tags that you want to remove. +type UntagResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource that you want to remove tags + // from. To get the ARN, send a GET request with the resource name. + // + // Arn is a required field + Arn *string `location:"uri" locationName:"arn" type:"string" required:"true"` + + // The keys of the tags that you want to remove from the resource. 
+ TagKeys []*string `locationName:"tagKeys" type:"list"` +} + +// String returns the string representation +func (s UntagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagResourceInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *UntagResourceInput) SetArn(v string) *UntagResourceInput { + s.Arn = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *UntagResourceInput) SetTagKeys(v []*string) *UntagResourceInput { + s.TagKeys = v + return s +} + +// A successful request to remove tags from a resource returns an OK message. +type UntagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UntagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceOutput) GoString() string { + return s.String() +} + // Modify a job template by sending a request with the job template name and // any of the following that you wish to change: description, category, and // queue. @@ -10356,6 +14164,11 @@ func (s *UpdateJobTemplateInput) Validate() error { if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } + if s.Settings != nil { + if err := s.Settings.Validate(); err != nil { + invalidParams.AddNested("Settings", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -10456,6 +14269,11 @@ func (s *UpdatePresetInput) Validate() error { if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } + if s.Settings != nil { + if err := s.Settings.Validate(); err != nil { + invalidParams.AddNested("Settings", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -10512,23 +14330,27 @@ func (s *UpdatePresetOutput) SetPreset(v *Preset) *UpdatePresetOutput { return s } -// Modify a queue by sending a request with the queue name and any of the following -// that you wish to change - description, status. You pause or activate a queue -// by changing its status between ACTIVE and PAUSED. +// Modify a queue by sending a request with the queue name and any changes to +// the queue. type UpdateQueueInput struct { _ struct{} `type:"structure"` // The new description for the queue, if you are changing it. Description *string `locationName:"description" type:"string"` - // The name of the queue you are modifying. + // The name of the queue that you are modifying. // // Name is a required field Name *string `location:"uri" locationName:"name" type:"string" required:"true"` - // Queues can be ACTIVE or PAUSED. If you pause a queue, jobs in that queue - // will not begin. Jobs running when a queue is paused continue to run until - // they finish or error out. + // Details about the pricing plan for your reserved queue. Required for reserved + // queues and not applicable to on-demand queues. 
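// A companion sketch for removing tags, assuming the generated UntagResource
// operation wrapper and an already-configured *mediaconvert.MediaConvert
// client; the tag key is a placeholder.
package snippets

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func untagResource(svc *mediaconvert.MediaConvert, arn string) error {
	input := &mediaconvert.UntagResourceInput{}
	input.SetArn(arn) // ARN of the queue, preset, or job template
	input.SetTagKeys([]*string{aws.String("CostCenter")})

	if err := input.Validate(); err != nil { // only Arn is required
		return err
	}
	_, err := svc.UntagResource(input)
	return err
}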
+ ReservationPlanSettings *ReservationPlanSettings `locationName:"reservationPlanSettings" type:"structure"` + + // Pause or activate a queue by changing its status between ACTIVE and PAUSED. + // If you pause a queue, jobs in that queue won't begin. Jobs that are running + // when you pause the queue continue to run until they finish or result in an + // error. Status *string `locationName:"status" type:"string" enum:"QueueStatus"` } @@ -10548,6 +14370,11 @@ func (s *UpdateQueueInput) Validate() error { if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } + if s.ReservationPlanSettings != nil { + if err := s.ReservationPlanSettings.Validate(); err != nil { + invalidParams.AddNested("ReservationPlanSettings", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -10567,19 +14394,26 @@ func (s *UpdateQueueInput) SetName(v string) *UpdateQueueInput { return s } +// SetReservationPlanSettings sets the ReservationPlanSettings field's value. +func (s *UpdateQueueInput) SetReservationPlanSettings(v *ReservationPlanSettings) *UpdateQueueInput { + s.ReservationPlanSettings = v + return s +} + // SetStatus sets the Status field's value. func (s *UpdateQueueInput) SetStatus(v string) *UpdateQueueInput { s.Status = &v return s } -// Successful update queue requests will return the new queue JSON. +// Successful update queue requests return the new queue information in JSON. type UpdateQueueOutput struct { _ struct{} `type:"structure"` - // MediaConvert jobs are submitted to a queue. Unless specified otherwise jobs - // are submitted to a built-in default queue. User can create additional queues - // to separate the jobs of different categories or priority. + // You can use queues to manage the resources that are available to your AWS + // account for running multiple transcoding jobs at the same time. If you don't + // specify a queue, the service sends all jobs through the default queue. For + // more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/about-resource-allocation-and-job-prioritization.html. Queue *Queue `locationName:"queue" type:"structure"` } @@ -10609,7 +14443,8 @@ func (s *UpdateQueueOutput) SetQueue(v *Queue) *UpdateQueueOutput { type VideoCodecSettings struct { _ struct{} `type:"structure"` - // Type of video codec + // Specifies the video codec. This must be equal to one of the enum values defined + // by the object VideoCodec. Codec *string `locationName:"codec" type:"string" enum:"VideoCodec"` // Required when you set (Codec) under (VideoDescription)>(CodecSettings) to @@ -10642,6 +14477,41 @@ func (s VideoCodecSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
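// A sketch of pausing a queue and, for reserved queues, updating its pricing
// plan in the same call, assuming the generated UpdateQueue operation wrapper
// and an already-configured client.
package snippets

import (
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func pauseQueue(svc *mediaconvert.MediaConvert, name string, plan *mediaconvert.ReservationPlanSettings) (*mediaconvert.Queue, error) {
	input := &mediaconvert.UpdateQueueInput{}
	input.SetName(name)
	input.SetStatus(mediaconvert.QueueStatusPaused) // running jobs finish; queued jobs won't start
	if plan != nil {
		input.SetReservationPlanSettings(plan) // reserved queues only; validated as a nested struct
	}

	if err := input.Validate(); err != nil {
		return nil, err
	}
	out, err := svc.UpdateQueue(input)
	if err != nil {
		return nil, err
	}
	return out.Queue, nil
}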
+func (s *VideoCodecSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "VideoCodecSettings"} + if s.FrameCaptureSettings != nil { + if err := s.FrameCaptureSettings.Validate(); err != nil { + invalidParams.AddNested("FrameCaptureSettings", err.(request.ErrInvalidParams)) + } + } + if s.H264Settings != nil { + if err := s.H264Settings.Validate(); err != nil { + invalidParams.AddNested("H264Settings", err.(request.ErrInvalidParams)) + } + } + if s.H265Settings != nil { + if err := s.H265Settings.Validate(); err != nil { + invalidParams.AddNested("H265Settings", err.(request.ErrInvalidParams)) + } + } + if s.Mpeg2Settings != nil { + if err := s.Mpeg2Settings.Validate(); err != nil { + invalidParams.AddNested("Mpeg2Settings", err.(request.ErrInvalidParams)) + } + } + if s.ProresSettings != nil { + if err := s.ProresSettings.Validate(); err != nil { + invalidParams.AddNested("ProresSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetCodec sets the Codec field's value. func (s *VideoCodecSettings) SetCodec(v string) *VideoCodecSettings { s.Codec = &v @@ -10682,12 +14552,12 @@ func (s *VideoCodecSettings) SetProresSettings(v *ProresSettings) *VideoCodecSet type VideoDescription struct { _ struct{} `type:"structure"` - // This setting only applies to H.264 and MPEG2 outputs. Use Insert AFD signaling - // (AfdSignaling) to whether there are AFD values in the output video data and - // what those values are. * Choose None to remove all AFD values from this output. - // * Choose Fixed to ignore input AFD values and instead encode the value specified - // in the job. * Choose Auto to calculate output AFD values based on the input - // AFD scaler data. + // This setting only applies to H.264, H.265, and MPEG2 outputs. Use Insert + // AFD signaling (AfdSignaling) to specify whether the service includes AFD + // values in the output video data and what those values are. * Choose None + // to remove all AFD values from this output. * Choose Fixed to ignore input + // AFD values and instead encode the value specified in the job. * Choose Auto + // to calculate output AFD values based on the input AFD scaler data. AfdSignaling *string `locationName:"afdSignaling" type:"string" enum:"AfdSignaling"` // Enable Anti-alias (AntiAlias) to enhance sharp edges in video output when @@ -10729,7 +14599,7 @@ type VideoDescription struct { // Use the Height (Height) setting to define the video resolution height for // this output. Specify in pixels. If you don't provide a value here, the service // will use the input height. - Height *int64 `locationName:"height" type:"integer"` + Height *int64 `locationName:"height" min:"32" type:"integer"` // Use Position (Position) to point to a rectangle object to define your position. // This setting overrides any other aspect ratio. @@ -10759,12 +14629,18 @@ type VideoDescription struct { // setting, 100 the sharpest, and 50 recommended for most content. Sharpness *int64 `locationName:"sharpness" type:"integer"` - // Enable Timecode insertion to include timecode information in this output. - // Do this in the API by setting (VideoTimecodeInsertion) to (PIC_TIMING_SEI). - // To get timecodes to appear correctly in your output, also set up the timecode - // configuration for your job in the input settings. Only enable Timecode insertion - // when the input framerate is identical to output framerate. Disable this setting - // to remove the timecode from the output. 
Default is disabled. + // Applies only to H.264, H.265, MPEG2, and ProRes outputs. Only enable Timecode + // insertion when the input framerate is identical to the output framerate. + // To include timecodes in this output, set Timecode insertion (VideoTimecodeInsertion) + // to PIC_TIMING_SEI. To leave them out, set it to DISABLED. Default is DISABLED. + // When the service inserts timecodes in an output, by default, it uses any + // embedded timecodes from the input. If none are present, the service will + // set the timecode for the first output frame to zero. To change this default + // behavior, adjust the settings under Timecode configuration (TimecodeConfig). + // In the console, these settings are located under Job > Job settings > Timecode + // configuration. Note - Timecode source under input settings (InputTimecodeSource) + // does not affect the timecodes that are inserted in the output. Source under + // Job settings > Timecode configuration (TimecodeSource) does. TimecodeInsertion *string `locationName:"timecodeInsertion" type:"string" enum:"VideoTimecodeInsertion"` // Find additional transcoding features under Preprocessors (VideoPreprocessors). @@ -10775,7 +14651,7 @@ type VideoDescription struct { // Use Width (Width) to define the video resolution width, in pixels, for this // output. If you don't provide a value here, the service will use the input // width. - Width *int64 `locationName:"width" type:"integer"` + Width *int64 `locationName:"width" min:"32" type:"integer"` } // String returns the string representation @@ -10788,6 +14664,42 @@ func (s VideoDescription) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *VideoDescription) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "VideoDescription"} + if s.Height != nil && *s.Height < 32 { + invalidParams.Add(request.NewErrParamMinValue("Height", 32)) + } + if s.Width != nil && *s.Width < 32 { + invalidParams.Add(request.NewErrParamMinValue("Width", 32)) + } + if s.CodecSettings != nil { + if err := s.CodecSettings.Validate(); err != nil { + invalidParams.AddNested("CodecSettings", err.(request.ErrInvalidParams)) + } + } + if s.Crop != nil { + if err := s.Crop.Validate(); err != nil { + invalidParams.AddNested("Crop", err.(request.ErrInvalidParams)) + } + } + if s.Position != nil { + if err := s.Position.Validate(); err != nil { + invalidParams.AddNested("Position", err.(request.ErrInvalidParams)) + } + } + if s.VideoPreprocessors != nil { + if err := s.VideoPreprocessors.Validate(); err != nil { + invalidParams.AddNested("VideoPreprocessors", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetAfdSignaling sets the AfdSignaling field's value. func (s *VideoDescription) SetAfdSignaling(v string) *VideoDescription { s.AfdSignaling = &v @@ -10951,6 +14863,36 @@ func (s VideoPreprocessor) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
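// A sketch of the new client-side minimum on output dimensions: Height and
// Width on VideoDescription must now be at least 32 pixels, and Validate
// reports a violation before any request is made. Values are illustrative.
package snippets

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func checkVideoDescription() {
	desc := &mediaconvert.VideoDescription{
		Height: aws.Int64(16), // deliberately below the 32-pixel minimum
		Width:  aws.Int64(1280),
	}
	if err := desc.Validate(); err != nil {
		fmt.Println(err) // reports the minimum-value violation for Height
	}
}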
+func (s *VideoPreprocessor) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "VideoPreprocessor"} + if s.ColorCorrector != nil { + if err := s.ColorCorrector.Validate(); err != nil { + invalidParams.AddNested("ColorCorrector", err.(request.ErrInvalidParams)) + } + } + if s.ImageInserter != nil { + if err := s.ImageInserter.Validate(); err != nil { + invalidParams.AddNested("ImageInserter", err.(request.ErrInvalidParams)) + } + } + if s.NoiseReducer != nil { + if err := s.NoiseReducer.Validate(); err != nil { + invalidParams.AddNested("NoiseReducer", err.(request.ErrInvalidParams)) + } + } + if s.TimecodeBurnin != nil { + if err := s.TimecodeBurnin.Validate(); err != nil { + invalidParams.AddNested("TimecodeBurnin", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetColorCorrector sets the ColorCorrector field's value. func (s *VideoPreprocessor) SetColorCorrector(v *ColorCorrector) *VideoPreprocessor { s.ColorCorrector = v @@ -10985,30 +14927,39 @@ func (s *VideoPreprocessor) SetTimecodeBurnin(v *TimecodeBurnin) *VideoPreproces type VideoSelector struct { _ struct{} `type:"structure"` - // Specifies the colorspace of an input. This setting works in tandem with "Color - // Corrector":#color_corrector > color_space_conversion to determine if any - // conversion will be performed. + // If your input video has accurate color space metadata, or if you don't know + // about color space, leave this set to the default value FOLLOW. The service + // will automatically detect your input color space. If your input video has + // metadata indicating the wrong color space, or if your input video is missing + // color space metadata that should be there, specify the accurate color space + // here. If you choose HDR10, you can also correct inaccurate color space coefficients, + // using the HDR master display information controls. You must also set Color + // space usage (ColorSpaceUsage) to FORCE for the service to use these values. ColorSpace *string `locationName:"colorSpace" type:"string" enum:"ColorSpace"` - // There are two sources for color metadata, the input file and the job configuration. - // This enum controls which takes precedence. FORCE: System will use color metadata - // supplied by user, if any. If the user does not supply color metadata the - // system will use data from the source. FALLBACK: System will use color metadata - // from the source. If source has no color metadata, the system will use user-supplied - // color metadata values if available. + // There are two sources for color metadata, the input file and the job configuration + // (in the Color space and HDR master display informaiton settings). The Color + // space usage setting controls which takes precedence. FORCE: The system will + // use color metadata supplied by user, if any. If the user does not supply + // color metadata, the system will use data from the source. FALLBACK: The system + // will use color metadata from the source. If source has no color metadata, + // the system will use user-supplied color metadata values if available. ColorSpaceUsage *string `locationName:"colorSpaceUsage" type:"string" enum:"ColorSpaceUsage"` - // Use the HDR master display (Hdr10Metadata) settings to provide values for - // HDR color. These values vary depending on the input video and must be provided - // by a color grader. Range is 0 to 50,000, each increment represents 0.00002 - // in CIE1931 color coordinate. 
+ // Use the HDR master display (Hdr10Metadata) settings to correct HDR metadata + // or to provide missing metadata. These values vary depending on the input + // video and must be provided by a color grader. Range is 0 to 50,000, each + // increment represents 0.00002 in CIE1931 color coordinate. Note that these + // settings are not color correction. Note that if you are creating HDR outputs + // inside of an HLS CMAF package, to comply with the Apple specification, you + // must use the HVC1 for H.265 setting. Hdr10Metadata *Hdr10Metadata `locationName:"hdr10Metadata" type:"structure"` // Use PID (Pid) to select specific video data from an input file. Specify this // value as an integer; the system automatically converts it to the hexidecimal // value. For example, 257 selects PID 0x101. A PID, or packet identifier, is // an identifier for a set of data in an MPEG-2 transport stream container. - Pid *int64 `locationName:"pid" type:"integer"` + Pid *int64 `locationName:"pid" min:"1" type:"integer"` // Selects a specific program from within a multi-program transport stream. // Note that Quad 4K is not currently supported. @@ -11025,6 +14976,22 @@ func (s VideoSelector) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *VideoSelector) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "VideoSelector"} + if s.Pid != nil && *s.Pid < 1 { + invalidParams.Add(request.NewErrParamMinValue("Pid", 1)) + } + if s.ProgramNumber != nil && *s.ProgramNumber < -2.147483648e+09 { + invalidParams.Add(request.NewErrParamMinValue("ProgramNumber", -2.147483648e+09)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetColorSpace sets the ColorSpace field's value. func (s *VideoSelector) SetColorSpace(v string) *VideoSelector { s.ColorSpace = &v @@ -11062,15 +15029,20 @@ type WavSettings struct { // Specify Bit depth (BitDepth), in bits per sample, to choose the encoding // quality for this audio track. - BitDepth *int64 `locationName:"bitDepth" type:"integer"` + BitDepth *int64 `locationName:"bitDepth" min:"16" type:"integer"` // Set Channels to specify the number of channels in this output audio track. // With WAV, valid values 1, 2, 4, and 8. In the console, these values are Mono, // Stereo, 4-Channel, and 8-Channel, respectively. - Channels *int64 `locationName:"channels" type:"integer"` + Channels *int64 `locationName:"channels" min:"1" type:"integer"` + + // The service defaults to using RIFF for WAV outputs. If your output audio + // is likely to exceed 4 GB in file size, or if you otherwise need the extended + // support of the RF64 format, set your output WAV file format to RF64. + Format *string `locationName:"format" type:"string" enum:"WavFormat"` // Sample rate in Hz. - SampleRate *int64 `locationName:"sampleRate" type:"integer"` + SampleRate *int64 `locationName:"sampleRate" min:"8000" type:"integer"` } // String returns the string representation @@ -11083,6 +15055,25 @@ func (s WavSettings) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. 
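// A sketch of overriding inaccurate input color metadata, following the
// updated ColorSpace/ColorSpaceUsage documentation: name the correct color
// space and set usage to FORCE. "HDR10" is used as a raw ColorSpace enum
// string here; the PID value is illustrative.
package snippets

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func forcedHdr10Selector() (*mediaconvert.VideoSelector, error) {
	sel := &mediaconvert.VideoSelector{
		ColorSpace:      aws.String("HDR10"), // ColorSpace enum value
		ColorSpaceUsage: aws.String(mediaconvert.ColorSpaceUsageForce),
		Pid:             aws.Int64(257), // selects PID 0x101; the new minimum is 1
	}
	if err := sel.Validate(); err != nil {
		return nil, err
	}
	return sel, nil
}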
+func (s *WavSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "WavSettings"} + if s.BitDepth != nil && *s.BitDepth < 16 { + invalidParams.Add(request.NewErrParamMinValue("BitDepth", 16)) + } + if s.Channels != nil && *s.Channels < 1 { + invalidParams.Add(request.NewErrParamMinValue("Channels", 1)) + } + if s.SampleRate != nil && *s.SampleRate < 8000 { + invalidParams.Add(request.NewErrParamMinValue("SampleRate", 8000)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetBitDepth sets the BitDepth field's value. func (s *WavSettings) SetBitDepth(v int64) *WavSettings { s.BitDepth = &v @@ -11095,6 +15086,12 @@ func (s *WavSettings) SetChannels(v int64) *WavSettings { return s } +// SetFormat sets the Format field's value. +func (s *WavSettings) SetFormat(v string) *WavSettings { + s.Format = &v + return s +} + // SetSampleRate sets the SampleRate field's value. func (s *WavSettings) SetSampleRate(v int64) *WavSettings { s.SampleRate = &v @@ -11270,12 +15267,12 @@ const ( Ac3MetadataControlUseConfigured = "USE_CONFIGURED" ) -// This setting only applies to H.264 and MPEG2 outputs. Use Insert AFD signaling -// (AfdSignaling) to whether there are AFD values in the output video data and -// what those values are. * Choose None to remove all AFD values from this output. -// * Choose Fixed to ignore input AFD values and instead encode the value specified -// in the job. * Choose Auto to calculate output AFD values based on the input -// AFD scaler data. +// This setting only applies to H.264, H.265, and MPEG2 outputs. Use Insert +// AFD signaling (AfdSignaling) to specify whether the service includes AFD +// values in the output video data and what those values are. * Choose None +// to remove all AFD values from this output. * Choose Fixed to ignore input +// AFD values and instead encode the value specified in the job. * Choose Auto +// to calculate output AFD values based on the input AFD scaler data. const ( // AfdSignalingNone is a AfdSignaling enum value AfdSignalingNone = "NONE" @@ -11322,10 +15319,9 @@ const ( AudioCodecPassthrough = "PASSTHROUGH" ) -// When an "Audio Description":#audio_description specifies an AudioSelector -// or AudioSelectorGroup for which no matching source is found in the input, -// then the audio selector marked as DEFAULT will be used. If none are marked -// as default, silence will be inserted for the duration of the input. +// Enable this setting on one audio selector to set it as the default for the +// job. The service uses this default for outputs where it can't find the specified +// input audio. If you don't set a default, those outputs have no audio. const ( // AudioDefaultSelectionDefault is a AudioDefaultSelection enum value AudioDefaultSelectionDefault = "DEFAULT" @@ -11410,6 +15406,22 @@ const ( AudioTypeControlUseConfigured = "USE_CONFIGURED" ) +// Optional. Choose a tag type that AWS Billing and Cost Management will use +// to sort your AWS Elemental MediaConvert costs on any billing report that +// you set up. Any transcoding outputs that don't have an associated tag will +// appear in your billing report unsorted. If you don't choose a valid value +// for this field, your job outputs will appear on the billing report unsorted. 
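// A sketch of the new WAV Format option: per the field documentation, choose
// RF64 when output audio may exceed 4 GB. The raw "RF64" enum string is used
// here rather than a named WavFormat constant; other values are illustrative.
package snippets

import (
	"github.com/aws/aws-sdk-go/service/mediaconvert"
)

func rf64WavSettings() (*mediaconvert.WavSettings, error) {
	w := &mediaconvert.WavSettings{}
	w.SetBitDepth(24)      // minimum is 16
	w.SetChannels(2)       // stereo; WAV supports 1, 2, 4, or 8 channels
	w.SetSampleRate(48000) // Hz; minimum is 8000
	w.SetFormat("RF64")    // the service defaults to RIFF
	if err := w.Validate(); err != nil {
		return nil, err
	}
	return w, nil
}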
+const ( + // BillingTagsSourceQueue is a BillingTagsSource enum value + BillingTagsSourceQueue = "QUEUE" + + // BillingTagsSourcePreset is a BillingTagsSource enum value + BillingTagsSourcePreset = "PRESET" + + // BillingTagsSourceJobTemplate is a BillingTagsSource enum value + BillingTagsSourceJobTemplate = "JOB_TEMPLATE" +) + // If no explicit x_position or y_position is provided, setting alignment to // centered will place the captions at the bottom center of the output. Similarly, // setting a left alignment will align captions to the bottom left of the output. @@ -11500,9 +15512,11 @@ const ( BurninSubtitleShadowColorWhite = "WHITE" ) -// Controls whether a fixed grid size or proportional font spacing will be used -// to generate the output subtitles bitmap. Only applicable for Teletext inputs -// and DVB-Sub/Burn-in outputs. +// Only applies to jobs with input captions in Teletext or STL formats. Specify +// whether the spacing between letters in your captions is set by the captions +// grid or varies depending on letter width. Choose fixed grid to conform to +// the spacing specified in the captions file more accurately. Choose proportional +// to make the text easier to read if the captions are closed caption. const ( // BurninSubtitleTeletextSpacingFixedGrid is a BurninSubtitleTeletextSpacing enum value BurninSubtitleTeletextSpacingFixedGrid = "FIXED_GRID" @@ -11511,8 +15525,8 @@ const ( BurninSubtitleTeletextSpacingProportional = "PROPORTIONAL" ) -// Type of Caption output, including Burn-In, Embedded, SCC, SRT, TTML, WebVTT, -// DVB-Sub, Teletext. +// Type of Caption output, including Burn-In, Embedded (with or without SCTE20), +// SCC, SMI, SRT, TTML, WebVTT, DVB-Sub, Teletext. const ( // CaptionDestinationTypeBurnIn is a CaptionDestinationType enum value CaptionDestinationTypeBurnIn = "BURN_IN" @@ -11523,12 +15537,21 @@ const ( // CaptionDestinationTypeEmbedded is a CaptionDestinationType enum value CaptionDestinationTypeEmbedded = "EMBEDDED" + // CaptionDestinationTypeEmbeddedPlusScte20 is a CaptionDestinationType enum value + CaptionDestinationTypeEmbeddedPlusScte20 = "EMBEDDED_PLUS_SCTE20" + + // CaptionDestinationTypeScte20PlusEmbedded is a CaptionDestinationType enum value + CaptionDestinationTypeScte20PlusEmbedded = "SCTE20_PLUS_EMBEDDED" + // CaptionDestinationTypeScc is a CaptionDestinationType enum value CaptionDestinationTypeScc = "SCC" // CaptionDestinationTypeSrt is a CaptionDestinationType enum value CaptionDestinationTypeSrt = "SRT" + // CaptionDestinationTypeSmi is a CaptionDestinationType enum value + CaptionDestinationTypeSmi = "SMI" + // CaptionDestinationTypeTeletext is a CaptionDestinationType enum value CaptionDestinationTypeTeletext = "TELETEXT" @@ -11551,6 +15574,9 @@ const ( // CaptionSourceTypeEmbedded is a CaptionSourceType enum value CaptionSourceTypeEmbedded = "EMBEDDED" + // CaptionSourceTypeScte20 is a CaptionSourceType enum value + CaptionSourceTypeScte20 = "SCTE20" + // CaptionSourceTypeScc is a CaptionSourceType enum value CaptionSourceTypeScc = "SCC" @@ -11563,6 +15589,9 @@ const ( // CaptionSourceTypeSrt is a CaptionSourceType enum value CaptionSourceTypeSrt = "SRT" + // CaptionSourceTypeSmi is a CaptionSourceType enum value + CaptionSourceTypeSmi = "SMI" + // CaptionSourceTypeTeletext is a CaptionSourceType enum value CaptionSourceTypeTeletext = "TELETEXT" @@ -11570,6 +15599,108 @@ const ( CaptionSourceTypeNullSource = "NULL_SOURCE" ) +// When set to ENABLED, sets #EXT-X-ALLOW-CACHE:no tag, which prevents client +// from saving media 
segments for later replay. +const ( + // CmafClientCacheDisabled is a CmafClientCache enum value + CmafClientCacheDisabled = "DISABLED" + + // CmafClientCacheEnabled is a CmafClientCache enum value + CmafClientCacheEnabled = "ENABLED" +) + +// Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist +// generation. +const ( + // CmafCodecSpecificationRfc6381 is a CmafCodecSpecification enum value + CmafCodecSpecificationRfc6381 = "RFC_6381" + + // CmafCodecSpecificationRfc4281 is a CmafCodecSpecification enum value + CmafCodecSpecificationRfc4281 = "RFC_4281" +) + +// Encrypts the segments with the given encryption scheme. Leave blank to disable. +// Selecting 'Disabled' in the web interface also disables encryption. +const ( + // CmafEncryptionTypeSampleAes is a CmafEncryptionType enum value + CmafEncryptionTypeSampleAes = "SAMPLE_AES" +) + +// The Initialization Vector is a 128-bit number used in conjunction with the +// key for encrypting blocks. If set to INCLUDE, Initialization Vector is listed +// in the manifest. Otherwise Initialization Vector is not in the manifest. +const ( + // CmafInitializationVectorInManifestInclude is a CmafInitializationVectorInManifest enum value + CmafInitializationVectorInManifestInclude = "INCLUDE" + + // CmafInitializationVectorInManifestExclude is a CmafInitializationVectorInManifest enum value + CmafInitializationVectorInManifestExclude = "EXCLUDE" +) + +// Indicates which type of key provider is used for encryption. +const ( + // CmafKeyProviderTypeStaticKey is a CmafKeyProviderType enum value + CmafKeyProviderTypeStaticKey = "STATIC_KEY" +) + +// When set to GZIP, compresses HLS playlist. +const ( + // CmafManifestCompressionGzip is a CmafManifestCompression enum value + CmafManifestCompressionGzip = "GZIP" + + // CmafManifestCompressionNone is a CmafManifestCompression enum value + CmafManifestCompressionNone = "NONE" +) + +// Indicates whether the output manifest should use floating point values for +// segment duration. +const ( + // CmafManifestDurationFormatFloatingPoint is a CmafManifestDurationFormat enum value + CmafManifestDurationFormatFloatingPoint = "FLOATING_POINT" + + // CmafManifestDurationFormatInteger is a CmafManifestDurationFormat enum value + CmafManifestDurationFormatInteger = "INTEGER" +) + +// When set to SINGLE_FILE, a single output file is generated, which is internally +// segmented using the Fragment Length and Segment Length. When set to SEGMENTED_FILES, +// separate segment files will be created. +const ( + // CmafSegmentControlSingleFile is a CmafSegmentControl enum value + CmafSegmentControlSingleFile = "SINGLE_FILE" + + // CmafSegmentControlSegmentedFiles is a CmafSegmentControl enum value + CmafSegmentControlSegmentedFiles = "SEGMENTED_FILES" +) + +// Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag +// of variant manifest. +const ( + // CmafStreamInfResolutionInclude is a CmafStreamInfResolution enum value + CmafStreamInfResolutionInclude = "INCLUDE" + + // CmafStreamInfResolutionExclude is a CmafStreamInfResolution enum value + CmafStreamInfResolutionExclude = "EXCLUDE" +) + +// When set to ENABLED, a DASH MPD manifest will be generated for this output. 
+const ( + // CmafWriteDASHManifestDisabled is a CmafWriteDASHManifest enum value + CmafWriteDASHManifestDisabled = "DISABLED" + + // CmafWriteDASHManifestEnabled is a CmafWriteDASHManifest enum value + CmafWriteDASHManifestEnabled = "ENABLED" +) + +// When set to ENABLED, an Apple HLS manifest will be generated for this output. +const ( + // CmafWriteHLSManifestDisabled is a CmafWriteHLSManifest enum value + CmafWriteHLSManifestDisabled = "DISABLED" + + // CmafWriteHLSManifestEnabled is a CmafWriteHLSManifest enum value + CmafWriteHLSManifestEnabled = "ENABLED" +) + // Enable Insert color metadata (ColorMetadata) to include color metadata in // this output. This setting is enabled by default. const ( @@ -11580,9 +15711,14 @@ const ( ColorMetadataInsert = "INSERT" ) -// Specifies the colorspace of an input. This setting works in tandem with "Color -// Corrector":#color_corrector > color_space_conversion to determine if any -// conversion will be performed. +// If your input video has accurate color space metadata, or if you don't know +// about color space, leave this set to the default value FOLLOW. The service +// will automatically detect your input color space. If your input video has +// metadata indicating the wrong color space, or if your input video is missing +// color space metadata that should be there, specify the accurate color space +// here. If you choose HDR10, you can also correct inaccurate color space coefficients, +// using the HDR master display information controls. You must also set Color +// space usage (ColorSpaceUsage) to FORCE for the service to use these values. const ( // ColorSpaceFollow is a ColorSpace enum value ColorSpaceFollow = "FOLLOW" @@ -11622,12 +15758,13 @@ const ( ColorSpaceConversionForceHlg2020 = "FORCE_HLG_2020" ) -// There are two sources for color metadata, the input file and the job configuration. -// This enum controls which takes precedence. FORCE: System will use color metadata -// supplied by user, if any. If the user does not supply color metadata the -// system will use data from the source. FALLBACK: System will use color metadata -// from the source. If source has no color metadata, the system will use user-supplied -// color metadata values if available. +// There are two sources for color metadata, the input file and the job configuration +// (in the Color space and HDR master display informaiton settings). The Color +// space usage setting controls which takes precedence. FORCE: The system will +// use color metadata supplied by user, if any. If the user does not supply +// color metadata, the system will use data from the source. FALLBACK: The system +// will use color metadata from the source. If source has no color metadata, +// the system will use user-supplied color metadata values if available. const ( // ColorSpaceUsageForce is a ColorSpaceUsage enum value ColorSpaceUsageForce = "FORCE" @@ -11636,6 +15773,13 @@ const ( ColorSpaceUsageFallback = "FALLBACK" ) +// The length of time that you commit to when you set up a pricing plan contract +// for a reserved queue. +const ( + // CommitmentOneYear is a Commitment enum value + CommitmentOneYear = "ONE_YEAR" +) + // Container for this output. Some containers require a container settings object. // If not specified, the default object will be created. 
const ( @@ -11651,6 +15795,9 @@ const ( // ContainerTypeM3u8 is a ContainerType enum value ContainerTypeM3u8 = "M3U8" + // ContainerTypeCmfc is a ContainerType enum value + ContainerTypeCmfc = "CMFC" + // ContainerTypeMov is a ContainerType enum value ContainerTypeMov = "MOV" @@ -11687,6 +15834,32 @@ const ( DashIsoSegmentControlSegmentedFiles = "SEGMENTED_FILES" ) +// When you enable Precise segment duration in manifests (writeSegmentTimelineInRepresentation), +// your DASH manifest shows precise segment durations. The segment duration +// information appears inside the SegmentTimeline element, inside SegmentTemplate +// at the Representation level. When this feature isn't enabled, the segment +// durations in your DASH manifest are approximate. The segment duration information +// appears in the duration attribute of the SegmentTemplate element. +const ( + // DashIsoWriteSegmentTimelineInRepresentationEnabled is a DashIsoWriteSegmentTimelineInRepresentation enum value + DashIsoWriteSegmentTimelineInRepresentationEnabled = "ENABLED" + + // DashIsoWriteSegmentTimelineInRepresentationDisabled is a DashIsoWriteSegmentTimelineInRepresentation enum value + DashIsoWriteSegmentTimelineInRepresentationDisabled = "DISABLED" +) + +// This specifies how the encrypted file needs to be decrypted. +const ( + // DecryptionModeAesCtr is a DecryptionMode enum value + DecryptionModeAesCtr = "AES_CTR" + + // DecryptionModeAesCbc is a DecryptionMode enum value + DecryptionModeAesCbc = "AES_CBC" + + // DecryptionModeAesGcm is a DecryptionMode enum value + DecryptionModeAesGcm = "AES_GCM" +) + // Only applies when you set Deinterlacer (DeinterlaceMode) to Deinterlace (DEINTERLACE) // or Adaptive (ADAPTIVE). Motion adaptive interpolate (INTERPOLATE) produces // sharper pictures, while blend (BLEND) produces smoother motion. Use (INTERPOLATE_TICKER) @@ -11737,6 +15910,18 @@ const ( DeinterlacerModeAdaptive = "ADAPTIVE" ) +// Optional field, defaults to DEFAULT. Specify DEFAULT for this operation to +// return your endpoints if any exist, or to create an endpoint for you and +// return it if one doesn't already exist. Specify GET_ONLY to return your endpoints +// if any exist, or an empty list if none exist. +const ( + // DescribeEndpointsModeDefault is a DescribeEndpointsMode enum value + DescribeEndpointsModeDefault = "DEFAULT" + + // DescribeEndpointsModeGetOnly is a DescribeEndpointsMode enum value + DescribeEndpointsModeGetOnly = "GET_ONLY" +) + // Applies only to 29.97 fps outputs. When this feature is enabled, the service // will use drop-frame timecode on outputs. If it is not possible to use drop-frame // timecode, the system will fall back to non-drop-frame. This setting is enabled @@ -11839,9 +16024,11 @@ const ( DvbSubtitleShadowColorWhite = "WHITE" ) -// Controls whether a fixed grid size or proportional font spacing will be used -// to generate the output subtitles bitmap. Only applicable for Teletext inputs -// and DVB-Sub/Burn-in outputs. +// Only applies to jobs with input captions in Teletext or STL formats. Specify +// whether the spacing between letters in your captions is set by the captions +// grid or varies depending on letter width. Choose fixed grid to conform to +// the spacing specified in the captions file more accurately. Choose proportional +// to make the text easier to read if the captions are closed caption. 
const ( // DvbSubtitleTeletextSpacingFixedGrid is a DvbSubtitleTeletextSpacing enum value DvbSubtitleTeletextSpacingFixedGrid = "FIXED_GRID" @@ -12168,6 +16355,19 @@ const ( H264CodecProfileMain = "MAIN" ) +// Choose Adaptive to improve subjective video quality for high-motion content. +// This will cause the service to use fewer B-frames (which infer information +// based on other frames) for high-motion portions of the video and more B-frames +// for low-motion portions. The maximum number of B-frames is limited by the +// value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames). +const ( + // H264DynamicSubGopAdaptive is a H264DynamicSubGop enum value + H264DynamicSubGopAdaptive = "ADAPTIVE" + + // H264DynamicSubGopStatic is a H264DynamicSubGop enum value + H264DynamicSubGopStatic = "STATIC" +) + // Entropy encoding mode. Use CABAC (must be in Main or High profile) or CAVLC. const ( // H264EntropyEncodingCabac is a H264EntropyEncoding enum value @@ -12195,9 +16395,17 @@ const ( H264FlickerAdaptiveQuantizationEnabled = "ENABLED" ) -// Using the API, set FramerateControl to INITIALIZE_FROM_SOURCE if you want -// the service to use the framerate from the input. Using the console, do this -// by choosing INITIALIZE_FROM_SOURCE for Framerate. +// If you are using the console, use the Framerate setting to specify the framerate +// for this output. If you want to keep the same framerate as the input video, +// choose Follow source. If you want to do framerate conversion, choose a framerate +// from the dropdown list or choose Custom. The framerates shown in the dropdown +// list are decimal approximations of fractions. If you choose Custom, specify +// your framerate as a fraction. If you are creating your transcoding job specification +// as a JSON file without the console, use FramerateControl to specify which +// value the service uses for the framerate for this output. Choose INITIALIZE_FROM_SOURCE +// if you want the service to use the framerate from the input. Choose SPECIFIED +// if you want the service to use the framerate you specify in the settings +// FramerateNumerator and FramerateDenominator. const ( // H264FramerateControlInitializeFromSource is a H264FramerateControl enum value H264FramerateControlInitializeFromSource = "INITIALIZE_FROM_SOURCE" @@ -12238,13 +16446,13 @@ const ( // Use Interlace mode (InterlaceMode) to choose the scan line type for the output. // * Top Field First (TOP_FIELD) and Bottom Field First (BOTTOM_FIELD) produce // interlaced output with the entire output having the same field polarity (top -// or bottom first). * Follow, Default Top (FOLLOw_TOP_FIELD) and Follow, Default +// or bottom first). * Follow, Default Top (FOLLOW_TOP_FIELD) and Follow, Default // Bottom (FOLLOW_BOTTOM_FIELD) use the same field polarity as the source. Therefore, -// behavior depends on the input scan type. - If the source is interlaced, the -// output will be interlaced with the same polarity as the source (it will follow -// the source). The output could therefore be a mix of "top field first" and -// "bottom field first". - If the source is progressive, the output will be -// interlaced with "top field first" or "bottom field first" polarity, depending +// behavior depends on the input scan type, as follows. - If the source is interlaced, +// the output will be interlaced with the same polarity as the source (it will +// follow the source). The output could therefore be a mix of "top field first" +// and "bottom field first". 
- If the source is progressive, the output will +// be interlaced with "top field first" or "bottom field first" polarity, depending // on which of the Follow options you chose. const ( // H264InterlaceModeProgressive is a H264InterlaceMode enum value @@ -12288,14 +16496,17 @@ const ( H264QualityTuningLevelMultiPassHq = "MULTI_PASS_HQ" ) -// Rate control mode. CQ uses constant quantizer (qp), ABR (average bitrate) -// does not write HRD parameters. +// Use this setting to specify whether this output has a variable bitrate (VBR), +// constant bitrate (CBR) or quality-defined variable bitrate (QVBR). const ( // H264RateControlModeVbr is a H264RateControlMode enum value H264RateControlModeVbr = "VBR" // H264RateControlModeCbr is a H264RateControlMode enum value H264RateControlModeCbr = "CBR" + + // H264RateControlModeQvbr is a H264RateControlMode enum value + H264RateControlModeQvbr = "QVBR" ) // Places a PPS header on each encoded picture, even if repeated. @@ -12488,6 +16699,19 @@ const ( H265CodecProfileMain42210bitHigh = "MAIN_422_10BIT_HIGH" ) +// Choose Adaptive to improve subjective video quality for high-motion content. +// This will cause the service to use fewer B-frames (which infer information +// based on other frames) for high-motion portions of the video and more B-frames +// for low-motion portions. The maximum number of B-frames is limited by the +// value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames). +const ( + // H265DynamicSubGopAdaptive is a H265DynamicSubGop enum value + H265DynamicSubGopAdaptive = "ADAPTIVE" + + // H265DynamicSubGopStatic is a H265DynamicSubGop enum value + H265DynamicSubGopStatic = "STATIC" +) + // Adjust quantization within each frame to reduce flicker or 'pop' on I-frames. const ( // H265FlickerAdaptiveQuantizationDisabled is a H265FlickerAdaptiveQuantization enum value @@ -12497,9 +16721,17 @@ const ( H265FlickerAdaptiveQuantizationEnabled = "ENABLED" ) -// Using the API, set FramerateControl to INITIALIZE_FROM_SOURCE if you want -// the service to use the framerate from the input. Using the console, do this -// by choosing INITIALIZE_FROM_SOURCE for Framerate. +// If you are using the console, use the Framerate setting to specify the framerate +// for this output. If you want to keep the same framerate as the input video, +// choose Follow source. If you want to do framerate conversion, choose a framerate +// from the dropdown list or choose Custom. The framerates shown in the dropdown +// list are decimal approximations of fractions. If you choose Custom, specify +// your framerate as a fraction. If you are creating your transcoding job sepecification +// as a JSON file without the console, use FramerateControl to specify which +// value the service uses for the framerate for this output. Choose INITIALIZE_FROM_SOURCE +// if you want the service to use the framerate from the input. Choose SPECIFIED +// if you want the service to use the framerate you specify in the settings +// FramerateNumerator and FramerateDenominator. const ( // H265FramerateControlInitializeFromSource is a H265FramerateControl enum value H265FramerateControlInitializeFromSource = "INITIALIZE_FROM_SOURCE" @@ -12540,7 +16772,7 @@ const ( // Use Interlace mode (InterlaceMode) to choose the scan line type for the output. // * Top Field First (TOP_FIELD) and Bottom Field First (BOTTOM_FIELD) produce // interlaced output with the entire output having the same field polarity (top -// or bottom first). 
* Follow, Default Top (FOLLOw_TOP_FIELD) and Follow, Default +// or bottom first). * Follow, Default Top (FOLLOW_TOP_FIELD) and Follow, Default // Bottom (FOLLOW_BOTTOM_FIELD) use the same field polarity as the source. Therefore, // behavior depends on the input scan type. - If the source is interlaced, the // output will be interlaced with the same polarity as the source (it will follow @@ -12590,14 +16822,17 @@ const ( H265QualityTuningLevelMultiPassHq = "MULTI_PASS_HQ" ) -// Rate control mode. CQ uses constant quantizer (qp), ABR (average bitrate) -// does not write HRD parameters. +// Use this setting to specify whether this output has a variable bitrate (VBR), +// constant bitrate (CBR) or quality-defined variable bitrate (QVBR). const ( // H265RateControlModeVbr is a H265RateControlMode enum value H265RateControlModeVbr = "VBR" // H265RateControlModeCbr is a H265RateControlMode enum value H265RateControlModeCbr = "CBR" + + // H265RateControlModeQvbr is a H265RateControlMode enum value + H265RateControlModeQvbr = "QVBR" ) // Specify Sample Adaptive Offset (SAO) filter strength. Adaptive mode dynamically @@ -12705,6 +16940,18 @@ const ( H265UnregisteredSeiTimecodeEnabled = "ENABLED" ) +// If HVC1, output that is H.265 will be marked as HVC1 and adhere to the ISO-IECJTC1-SC29_N13798_Text_ISOIEC_FDIS_14496-15_3rd_E +// spec which states that parameter set NAL units will be stored in the sample +// headers but not in the samples directly. If HEV1, then H.265 will be marked +// as HEV1 and parameter set NAL units will be written into the samples. +const ( + // H265WriteMp4PackagingTypeHvc1 is a H265WriteMp4PackagingType enum value + H265WriteMp4PackagingTypeHvc1 = "HVC1" + + // H265WriteMp4PackagingTypeHev1 is a H265WriteMp4PackagingType enum value + H265WriteMp4PackagingTypeHev1 = "HEV1" +) + const ( // HlsAdMarkersElemental is a HlsAdMarkers enum value HlsAdMarkersElemental = "ELEMENTAL" @@ -12951,13 +17198,13 @@ const ( InputPsiControlUsePsi = "USE_PSI" ) -// Use Timecode source (InputTimecodeSource) to specify how timecode information -// from your input is adjusted and encoded in all outputs for the job. Default -// is embedded. Set to Embedded (EMBEDDED) to use the timecode that is in the -// input video. If no embedded timecode is in the source, will set the timecode -// for the first frame to 00:00:00:00. Set to Start at 0 (ZEROBASED) to set -// the timecode of the initial frame to 00:00:00:00. Set to Specified start -// (SPECIFIEDSTART) to provide the initial timecode yourself the setting (Start). +// Timecode source under input settings (InputTimecodeSource) only affects the +// behavior of features that apply to a single input at a time, such as input +// clipping and synchronizing some captions formats. Use this setting to specify +// whether the service counts frames by timecodes embedded in the video (EMBEDDED) +// or by starting the first frame at zero (ZEROBASED). In both cases, the timecode +// format is HH:MM:SS:FF or HH:MM:SS;FF, where FF is the frame number. Only +// set this to EMBEDDED if your source video has embedded timecodes. 
const ( // InputTimecodeSourceEmbedded is a InputTimecodeSource enum value InputTimecodeSourceEmbedded = "EMBEDDED" @@ -13001,8 +17248,7 @@ const ( JobTemplateListBySystem = "SYSTEM" ) -// Code to specify the language, following the specification "ISO 639-2 three-digit -// code":http://www.loc.gov/standards/iso639-2/ +// Specify the language, using the ISO 639-2 three-letter code listed at https://www.loc.gov/standards/iso639-2/php/code_list.php. const ( // LanguageCodeEng is a LanguageCode enum value LanguageCodeEng = "ENG" @@ -13752,6 +17998,26 @@ const ( M3u8Scte35SourceNone = "NONE" ) +// Choose the type of motion graphic asset that you are providing for your overlay. +// You can choose either a .mov file or a series of .png files. +const ( + // MotionImageInsertionModeMov is a MotionImageInsertionMode enum value + MotionImageInsertionModeMov = "MOV" + + // MotionImageInsertionModePng is a MotionImageInsertionMode enum value + MotionImageInsertionModePng = "PNG" +) + +// Specify whether your motion graphic overlay repeats on a loop or plays only +// once. +const ( + // MotionImagePlaybackOnce is a MotionImagePlayback enum value + MotionImagePlaybackOnce = "ONCE" + + // MotionImagePlaybackRepeat is a MotionImagePlayback enum value + MotionImagePlaybackRepeat = "REPEAT" +) + // When enabled, include 'clap' atom if appropriate for the video output settings. const ( // MovClapAtomInclude is a MovClapAtom enum value @@ -13880,9 +18146,30 @@ const ( Mpeg2CodecProfileProfile422 = "PROFILE_422" ) -// Using the API, set FramerateControl to INITIALIZE_FROM_SOURCE if you want -// the service to use the framerate from the input. Using the console, do this -// by choosing INITIALIZE_FROM_SOURCE for Framerate. +// Choose Adaptive to improve subjective video quality for high-motion content. +// This will cause the service to use fewer B-frames (which infer information +// based on other frames) for high-motion portions of the video and more B-frames +// for low-motion portions. The maximum number of B-frames is limited by the +// value you provide for the setting B frames between reference frames (numberBFramesBetweenReferenceFrames). +const ( + // Mpeg2DynamicSubGopAdaptive is a Mpeg2DynamicSubGop enum value + Mpeg2DynamicSubGopAdaptive = "ADAPTIVE" + + // Mpeg2DynamicSubGopStatic is a Mpeg2DynamicSubGop enum value + Mpeg2DynamicSubGopStatic = "STATIC" +) + +// If you are using the console, use the Framerate setting to specify the framerate +// for this output. If you want to keep the same framerate as the input video, +// choose Follow source. If you want to do framerate conversion, choose a framerate +// from the dropdown list or choose Custom. The framerates shown in the dropdown +// list are decimal approximations of fractions. If you choose Custom, specify +// your framerate as a fraction. If you are creating your transcoding job sepecification +// as a JSON file without the console, use FramerateControl to specify which +// value the service uses for the framerate for this output. Choose INITIALIZE_FROM_SOURCE +// if you want the service to use the framerate from the input. Choose SPECIFIED +// if you want the service to use the framerate you specify in the settings +// FramerateNumerator and FramerateDenominator. const ( // Mpeg2FramerateControlInitializeFromSource is a Mpeg2FramerateControl enum value Mpeg2FramerateControlInitializeFromSource = "INITIALIZE_FROM_SOURCE" @@ -13913,7 +18200,7 @@ const ( // Use Interlace mode (InterlaceMode) to choose the scan line type for the output. 
// * Top Field First (TOP_FIELD) and Bottom Field First (BOTTOM_FIELD) produce // interlaced output with the entire output having the same field polarity (top -// or bottom first). * Follow, Default Top (FOLLOw_TOP_FIELD) and Follow, Default +// or bottom first). * Follow, Default Top (FOLLOW_TOP_FIELD) and Follow, Default // Bottom (FOLLOW_BOTTOM_FIELD) use the same field polarity as the source. Therefore, // behavior depends on the input scan type. - If the source is interlaced, the // output will be interlaced with the same polarity as the source (it will follow @@ -14076,8 +18363,8 @@ const ( // Use Noise reducer filter (NoiseReducerFilter) to select one of the following // spatial image filtering functions. To use this setting, you must also enable // Noise reducer (NoiseReducer). * Bilateral is an edge preserving noise reduction -// filter * Mean (softest), Gaussian, Lanczos, and Sharpen (sharpest) are convolution -// filters * Conserve is a min/max noise reduction filter * Spatial is frequency-domain +// filter. * Mean (softest), Gaussian, Lanczos, and Sharpen (sharpest) are convolution +// filters. * Conserve is a min/max noise reduction filter. * Spatial is a frequency-domain // filter based on JND principles. const ( // NoiseReducerFilterBilateral is a NoiseReducerFilter enum value @@ -14112,7 +18399,8 @@ const ( OrderDescending = "DESCENDING" ) -// Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming) +// Type of output group (File group, Apple HLS, DASH ISO, Microsoft Smooth Streaming, +// CMAF) const ( // OutputGroupTypeHlsGroupSettings is a OutputGroupType enum value OutputGroupTypeHlsGroupSettings = "HLS_GROUP_SETTINGS" @@ -14125,6 +18413,9 @@ const ( // OutputGroupTypeMsSmoothGroupSettings is a OutputGroupType enum value OutputGroupTypeMsSmoothGroupSettings = "MS_SMOOTH_GROUP_SETTINGS" + + // OutputGroupTypeCmafGroupSettings is a OutputGroupType enum value + OutputGroupTypeCmafGroupSettings = "CMAF_GROUP_SETTINGS" ) // Selects method of inserting SDT information into output stream. "Follow input @@ -14161,6 +18452,19 @@ const ( PresetListBySystem = "SYSTEM" ) +// Specifies whether the pricing plan for the queue is On-demand or Reserved. +// The pricing plan for the queue determines whether you pay On-demand or Reserved +// pricing for the transcoding jobs that you run through the queue. For Reserved +// queue pricing, you must set up a contract. You can create a Reserved queue +// contract through the AWS Elemental MediaConvert console. +const ( + // PricingPlanOnDemand is a PricingPlan enum value + PricingPlanOnDemand = "ON_DEMAND" + + // PricingPlanReserved is a PricingPlan enum value + PricingPlanReserved = "RESERVED" +) + // Use Profile (ProResCodecProfile) to specifiy the type of Apple ProRes codec // to use for this output. const ( @@ -14177,9 +18481,17 @@ const ( ProresCodecProfileAppleProres422Proxy = "APPLE_PRORES_422_PROXY" ) -// Using the API, set FramerateControl to INITIALIZE_FROM_SOURCE if you want -// the service to use the framerate from the input. Using the console, do this -// by choosing INITIALIZE_FROM_SOURCE for Framerate. +// If you are using the console, use the Framerate setting to specify the framerate +// for this output. If you want to keep the same framerate as the input video, +// choose Follow source. If you want to do framerate conversion, choose a framerate +// from the dropdown list or choose Custom. The framerates shown in the dropdown +// list are decimal approximations of fractions. 
If you choose Custom, specify +// your framerate as a fraction. If you are creating your transcoding job sepecification +// as a JSON file without the console, use FramerateControl to specify which +// value the service uses for the framerate for this output. Choose INITIALIZE_FROM_SOURCE +// if you want the service to use the framerate from the input. Choose SPECIFIED +// if you want the service to use the framerate you specify in the settings +// FramerateNumerator and FramerateDenominator. const ( // ProresFramerateControlInitializeFromSource is a ProresFramerateControl enum value ProresFramerateControlInitializeFromSource = "INITIALIZE_FROM_SOURCE" @@ -14200,7 +18512,7 @@ const ( // Use Interlace mode (InterlaceMode) to choose the scan line type for the output. // * Top Field First (TOP_FIELD) and Bottom Field First (BOTTOM_FIELD) produce // interlaced output with the entire output having the same field polarity (top -// or bottom first). * Follow, Default Top (FOLLOw_TOP_FIELD) and Follow, Default +// or bottom first). * Follow, Default Top (FOLLOW_TOP_FIELD) and Follow, Default // Bottom (FOLLOW_BOTTOM_FIELD) use the same field polarity as the source. Therefore, // behavior depends on the input scan type. - If the source is interlaced, the // output will be interlaced with the same polarity as the source (it will follow @@ -14272,8 +18584,8 @@ const ( ) // Queues can be ACTIVE or PAUSED. If you pause a queue, jobs in that queue -// will not begin. Jobs running when a queue is paused continue to run until -// they finish or error out. +// won't begin. Jobs that are running when you pause a queue continue to run +// until they finish or result in an error. const ( // QueueStatusActive is a QueueStatus enum value QueueStatusActive = "ACTIVE" @@ -14282,6 +18594,25 @@ const ( QueueStatusPaused = "PAUSED" ) +// Specifies whether the pricing plan contract for your reserved queue automatically +// renews (AUTO_RENEW) or expires (EXPIRE) at the end of the contract period. +const ( + // RenewalTypeAutoRenew is a RenewalType enum value + RenewalTypeAutoRenew = "AUTO_RENEW" + + // RenewalTypeExpire is a RenewalType enum value + RenewalTypeExpire = "EXPIRE" +) + +// Specifies whether the pricing plan for your reserved queue is ACTIVE or EXPIRED. +const ( + // ReservationPlanStatusActive is a ReservationPlanStatus enum value + ReservationPlanStatusActive = "ACTIVE" + + // ReservationPlanStatusExpired is a ReservationPlanStatus enum value + ReservationPlanStatusExpired = "EXPIRED" +) + // Use Respond to AFD (RespondToAfd) to specify how the service changes the // video itself in response to AFD values in the input. * Choose Respond to // clip the input video frame according to the AFD value, input display aspect @@ -14364,13 +18695,13 @@ const ( TimecodeBurninPositionBottomRight = "BOTTOM_RIGHT" ) -// Use Timecode source (TimecodeSource) to set how timecodes are handled within -// this input. To make sure that your video, audio, captions, and markers are -// synchronized and that time-based features, such as image inserter, work correctly, -// choose the Timecode source option that matches your assets. All timecodes -// are in a 24-hour format with frame number (HH:MM:SS:FF). * Embedded (EMBEDDED) -// - Use the timecode that is in the input video. If no embedded timecode is -// in the source, the service will use Start at 0 (ZEROBASED) instead. * Start +// Use Source (TimecodeSource) to set how timecodes are handled within this +// job. 
To make sure that your video, audio, captions, and markers are synchronized +// and that time-based features, such as image inserter, work correctly, choose +// the Timecode source option that matches your assets. All timecodes are in +// a 24-hour format with frame number (HH:MM:SS:FF). * Embedded (EMBEDDED) - +// Use the timecode that is in the input video. If no embedded timecode is in +// the source, the service will use Start at 0 (ZEROBASED) instead. * Start // at 0 (ZEROBASED) - Set the timecode of the initial frame to 00:00:00:00. // * Specified Start (SPECIFIEDSTART) - Set the timecode of the initial frame // to a value other than zero. You use Start timecode (Start) to provide this @@ -14386,8 +18717,8 @@ const ( TimecodeSourceSpecifiedstart = "SPECIFIEDSTART" ) -// If PASSTHROUGH, inserts ID3 timed metadata from the timed_metadata REST command -// into this output. +// Applies only to HLS outputs. Use this setting to specify whether the service +// inserts the ID3 timed metadata from the input in this output. const ( // TimedMetadataPassthrough is a TimedMetadata enum value TimedMetadataPassthrough = "PASSTHROUGH" @@ -14432,12 +18763,18 @@ const ( VideoCodecProres = "PRORES" ) -// Enable Timecode insertion to include timecode information in this output. -// Do this in the API by setting (VideoTimecodeInsertion) to (PIC_TIMING_SEI). -// To get timecodes to appear correctly in your output, also set up the timecode -// configuration for your job in the input settings. Only enable Timecode insertion -// when the input framerate is identical to output framerate. Disable this setting -// to remove the timecode from the output. Default is disabled. +// Applies only to H.264, H.265, MPEG2, and ProRes outputs. Only enable Timecode +// insertion when the input framerate is identical to the output framerate. +// To include timecodes in this output, set Timecode insertion (VideoTimecodeInsertion) +// to PIC_TIMING_SEI. To leave them out, set it to DISABLED. Default is DISABLED. +// When the service inserts timecodes in an output, by default, it uses any +// embedded timecodes from the input. If none are present, the service will +// set the timecode for the first output frame to zero. To change this default +// behavior, adjust the settings under Timecode configuration (TimecodeConfig). +// In the console, these settings are located under Job > Job settings > Timecode +// configuration. Note - Timecode source under input settings (InputTimecodeSource) +// does not affect the timecodes that are inserted in the output. Source under +// Job settings > Timecode configuration (TimecodeSource) does. const ( // VideoTimecodeInsertionDisabled is a VideoTimecodeInsertion enum value VideoTimecodeInsertionDisabled = "DISABLED" @@ -14445,3 +18782,14 @@ const ( // VideoTimecodeInsertionPicTimingSei is a VideoTimecodeInsertion enum value VideoTimecodeInsertionPicTimingSei = "PIC_TIMING_SEI" ) + +// The service defaults to using RIFF for WAV outputs. If your output audio +// is likely to exceed 4 GB in file size, or if you otherwise need the extended +// support of the RF64 format, set your output WAV file format to RF64. 
+const ( + // WavFormatRiff is a WavFormat enum value + WavFormatRiff = "RIFF" + + // WavFormatRf64 is a WavFormat enum value + WavFormatRf64 = "RF64" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/mediaconvert/service.go b/vendor/github.com/aws/aws-sdk-go/service/mediaconvert/service.go index 57088839281..441039b6851 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/mediaconvert/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/mediaconvert/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "mediaconvert" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "mediaconvert" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "MediaConvert" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the MediaConvert client with a session. @@ -45,19 +46,20 @@ const ( // svc := mediaconvert.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *MediaConvert { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "mediaconvert" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *MediaConvert { - if len(signingName) == 0 { - signingName = "mediaconvert" - } svc := &MediaConvert{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/medialive/api.go b/vendor/github.com/aws/aws-sdk-go/service/medialive/api.go index e99fae51ec0..0e12c37ed85 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/medialive/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/medialive/api.go @@ -10,12 +10,104 @@ import ( "github.com/aws/aws-sdk-go/aws/request" ) +const opBatchUpdateSchedule = "BatchUpdateSchedule" + +// BatchUpdateScheduleRequest generates a "aws/request.Request" representing the +// client's request for the BatchUpdateSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BatchUpdateSchedule for more information on using the BatchUpdateSchedule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchUpdateScheduleRequest method. 
+// req, resp := client.BatchUpdateScheduleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/BatchUpdateSchedule +func (c *MediaLive) BatchUpdateScheduleRequest(input *BatchUpdateScheduleInput) (req *request.Request, output *BatchUpdateScheduleOutput) { + op := &request.Operation{ + Name: opBatchUpdateSchedule, + HTTPMethod: "PUT", + HTTPPath: "/prod/channels/{channelId}/schedule", + } + + if input == nil { + input = &BatchUpdateScheduleInput{} + } + + output = &BatchUpdateScheduleOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchUpdateSchedule API operation for AWS Elemental MediaLive. +// +// Update a channel schedule +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation BatchUpdateSchedule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeUnprocessableEntityException "UnprocessableEntityException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/BatchUpdateSchedule +func (c *MediaLive) BatchUpdateSchedule(input *BatchUpdateScheduleInput) (*BatchUpdateScheduleOutput, error) { + req, out := c.BatchUpdateScheduleRequest(input) + return out, req.Send() +} + +// BatchUpdateScheduleWithContext is the same as BatchUpdateSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See BatchUpdateSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) BatchUpdateScheduleWithContext(ctx aws.Context, input *BatchUpdateScheduleInput, opts ...request.Option) (*BatchUpdateScheduleOutput, error) { + req, out := c.BatchUpdateScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateChannel = "CreateChannel" // CreateChannelRequest generates a "aws/request.Request" representing the // client's request for the CreateChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -106,8 +198,8 @@ const opCreateInput = "CreateInput" // CreateInputRequest generates a "aws/request.Request" representing the // client's request for the CreateInput operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -194,8 +286,8 @@ const opCreateInputSecurityGroup = "CreateInputSecurityGroup" // CreateInputSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateInputSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -282,8 +374,8 @@ const opDeleteChannel = "DeleteChannel" // DeleteChannelRequest generates a "aws/request.Request" representing the // client's request for the DeleteChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -374,8 +466,8 @@ const opDeleteInput = "DeleteInput" // DeleteInputRequest generates a "aws/request.Request" representing the // client's request for the DeleteInput operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -466,8 +558,8 @@ const opDeleteInputSecurityGroup = "DeleteInputSecurityGroup" // DeleteInputSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteInputSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -552,12 +644,104 @@ func (c *MediaLive) DeleteInputSecurityGroupWithContext(ctx aws.Context, input * return out, req.Send() } +const opDeleteReservation = "DeleteReservation" + +// DeleteReservationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteReservation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteReservation for more information on using the DeleteReservation +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteReservationRequest method. +// req, resp := client.DeleteReservationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/DeleteReservation +func (c *MediaLive) DeleteReservationRequest(input *DeleteReservationInput) (req *request.Request, output *DeleteReservationOutput) { + op := &request.Operation{ + Name: opDeleteReservation, + HTTPMethod: "DELETE", + HTTPPath: "/prod/reservations/{reservationId}", + } + + if input == nil { + input = &DeleteReservationInput{} + } + + output = &DeleteReservationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteReservation API operation for AWS Elemental MediaLive. +// +// Delete an expired reservation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation DeleteReservation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/DeleteReservation +func (c *MediaLive) DeleteReservation(input *DeleteReservationInput) (*DeleteReservationOutput, error) { + req, out := c.DeleteReservationRequest(input) + return out, req.Send() +} + +// DeleteReservationWithContext is the same as DeleteReservation with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteReservation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) DeleteReservationWithContext(ctx aws.Context, input *DeleteReservationInput, opts ...request.Option) (*DeleteReservationOutput, error) { + req, out := c.DeleteReservationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeChannel = "DescribeChannel" // DescribeChannelRequest generates a "aws/request.Request" representing the // client's request for the DescribeChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
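For reviewers, a minimal sketch of the request/`WithContext` pattern that the generated doc comments above describe, using the newly added `DeleteReservation` operation. This is illustrative only and not part of the vendored files in this diff; it assumes the canonical `github.com/aws/aws-sdk-go` import path (the vendor tree above uses a mirror path) and a `ReservationId` input field inferred from the `{reservationId}` path parameter.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	// Standard v1 SDK client construction.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := medialive.New(sess)

	// The generated *WithContext helpers panic on a nil context, so always
	// pass a real one; here it also bounds the call to 30 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// ReservationId is an assumption inferred from the {reservationId} path
	// parameter; confirm the field name on the generated DeleteReservationInput.
	out, err := svc.DeleteReservationWithContext(ctx, &medialive.DeleteReservationInput{
		ReservationId: aws.String("123456"),
	})
	if err != nil {
		fmt.Println("DeleteReservation failed:", err)
		return
	}
	fmt.Println(out)
}
```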
@@ -646,8 +830,8 @@ const opDescribeInput = "DescribeInput" // DescribeInputRequest generates a "aws/request.Request" representing the // client's request for the DescribeInput operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -736,8 +920,8 @@ const opDescribeInputSecurityGroup = "DescribeInputSecurityGroup" // DescribeInputSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the DescribeInputSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -822,64 +1006,58 @@ func (c *MediaLive) DescribeInputSecurityGroupWithContext(ctx aws.Context, input return out, req.Send() } -const opListChannels = "ListChannels" +const opDescribeOffering = "DescribeOffering" -// ListChannelsRequest generates a "aws/request.Request" representing the -// client's request for the ListChannels operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeOfferingRequest generates a "aws/request.Request" representing the +// client's request for the DescribeOffering operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListChannels for more information on using the ListChannels +// See DescribeOffering for more information on using the DescribeOffering // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListChannelsRequest method. -// req, resp := client.ListChannelsRequest(params) +// // Example sending a request using the DescribeOfferingRequest method. 
+// req, resp := client.DescribeOfferingRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListChannels -func (c *MediaLive) ListChannelsRequest(input *ListChannelsInput) (req *request.Request, output *ListChannelsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/DescribeOffering +func (c *MediaLive) DescribeOfferingRequest(input *DescribeOfferingInput) (req *request.Request, output *DescribeOfferingOutput) { op := &request.Operation{ - Name: opListChannels, + Name: opDescribeOffering, HTTPMethod: "GET", - HTTPPath: "/prod/channels", - Paginator: &request.Paginator{ - InputTokens: []string{"NextToken"}, - OutputTokens: []string{"NextToken"}, - LimitToken: "MaxResults", - TruncationToken: "", - }, + HTTPPath: "/prod/offerings/{offeringId}", } if input == nil { - input = &ListChannelsInput{} + input = &DescribeOfferingInput{} } - output = &ListChannelsOutput{} + output = &DescribeOfferingOutput{} req = c.newRequest(op, input, output) return } -// ListChannels API operation for AWS Elemental MediaLive. +// DescribeOffering API operation for AWS Elemental MediaLive. // -// Produces list of channels that have been created +// Get details for an offering. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Elemental MediaLive's -// API operation ListChannels for usage and error information. +// API operation DescribeOffering for usage and error information. // // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -890,140 +1068,86 @@ func (c *MediaLive) ListChannelsRequest(input *ListChannelsInput) (req *request. // // * ErrCodeBadGatewayException "BadGatewayException" // +// * ErrCodeNotFoundException "NotFoundException" +// // * ErrCodeGatewayTimeoutException "GatewayTimeoutException" // // * ErrCodeTooManyRequestsException "TooManyRequestsException" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListChannels -func (c *MediaLive) ListChannels(input *ListChannelsInput) (*ListChannelsOutput, error) { - req, out := c.ListChannelsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/DescribeOffering +func (c *MediaLive) DescribeOffering(input *DescribeOfferingInput) (*DescribeOfferingOutput, error) { + req, out := c.DescribeOfferingRequest(input) return out, req.Send() } -// ListChannelsWithContext is the same as ListChannels with the addition of +// DescribeOfferingWithContext is the same as DescribeOffering with the addition of // the ability to pass a context and additional request options. // -// See ListChannels for details on how to use this API operation. +// See DescribeOffering for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *MediaLive) ListChannelsWithContext(ctx aws.Context, input *ListChannelsInput, opts ...request.Option) (*ListChannelsOutput, error) { - req, out := c.ListChannelsRequest(input) +func (c *MediaLive) DescribeOfferingWithContext(ctx aws.Context, input *DescribeOfferingInput, opts ...request.Option) (*DescribeOfferingOutput, error) { + req, out := c.DescribeOfferingRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListChannelsPages iterates over the pages of a ListChannels operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. -// -// See ListChannels method for more information on how to use this operation. -// -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a ListChannels operation. -// pageNum := 0 -// err := client.ListChannelsPages(params, -// func(page *ListChannelsOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *MediaLive) ListChannelsPages(input *ListChannelsInput, fn func(*ListChannelsOutput, bool) bool) error { - return c.ListChannelsPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// ListChannelsPagesWithContext same as ListChannelsPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *MediaLive) ListChannelsPagesWithContext(ctx aws.Context, input *ListChannelsInput, fn func(*ListChannelsOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *ListChannelsInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.ListChannelsRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*ListChannelsOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opListInputSecurityGroups = "ListInputSecurityGroups" +const opDescribeReservation = "DescribeReservation" -// ListInputSecurityGroupsRequest generates a "aws/request.Request" representing the -// client's request for the ListInputSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeReservationRequest generates a "aws/request.Request" representing the +// client's request for the DescribeReservation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListInputSecurityGroups for more information on using the ListInputSecurityGroups +// See DescribeReservation for more information on using the DescribeReservation // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListInputSecurityGroupsRequest method. 
-// req, resp := client.ListInputSecurityGroupsRequest(params) +// // Example sending a request using the DescribeReservationRequest method. +// req, resp := client.DescribeReservationRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListInputSecurityGroups -func (c *MediaLive) ListInputSecurityGroupsRequest(input *ListInputSecurityGroupsInput) (req *request.Request, output *ListInputSecurityGroupsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/DescribeReservation +func (c *MediaLive) DescribeReservationRequest(input *DescribeReservationInput) (req *request.Request, output *DescribeReservationOutput) { op := &request.Operation{ - Name: opListInputSecurityGroups, + Name: opDescribeReservation, HTTPMethod: "GET", - HTTPPath: "/prod/inputSecurityGroups", - Paginator: &request.Paginator{ - InputTokens: []string{"NextToken"}, - OutputTokens: []string{"NextToken"}, - LimitToken: "MaxResults", - TruncationToken: "", - }, + HTTPPath: "/prod/reservations/{reservationId}", } if input == nil { - input = &ListInputSecurityGroupsInput{} + input = &DescribeReservationInput{} } - output = &ListInputSecurityGroupsOutput{} + output = &DescribeReservationOutput{} req = c.newRequest(op, input, output) return } -// ListInputSecurityGroups API operation for AWS Elemental MediaLive. +// DescribeReservation API operation for AWS Elemental MediaLive. // -// Produces a list of Input Security Groups for an account +// Get details for a reservation. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Elemental MediaLive's -// API operation ListInputSecurityGroups for usage and error information. +// API operation DescribeReservation for usage and error information. // // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -1034,113 +1158,65 @@ func (c *MediaLive) ListInputSecurityGroupsRequest(input *ListInputSecurityGroup // // * ErrCodeBadGatewayException "BadGatewayException" // +// * ErrCodeNotFoundException "NotFoundException" +// // * ErrCodeGatewayTimeoutException "GatewayTimeoutException" // // * ErrCodeTooManyRequestsException "TooManyRequestsException" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListInputSecurityGroups -func (c *MediaLive) ListInputSecurityGroups(input *ListInputSecurityGroupsInput) (*ListInputSecurityGroupsOutput, error) { - req, out := c.ListInputSecurityGroupsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/DescribeReservation +func (c *MediaLive) DescribeReservation(input *DescribeReservationInput) (*DescribeReservationOutput, error) { + req, out := c.DescribeReservationRequest(input) return out, req.Send() } -// ListInputSecurityGroupsWithContext is the same as ListInputSecurityGroups with the addition of +// DescribeReservationWithContext is the same as DescribeReservation with the addition of // the ability to pass a context and additional request options. // -// See ListInputSecurityGroups for details on how to use this API operation. +// See DescribeReservation for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *MediaLive) ListInputSecurityGroupsWithContext(ctx aws.Context, input *ListInputSecurityGroupsInput, opts ...request.Option) (*ListInputSecurityGroupsOutput, error) { - req, out := c.ListInputSecurityGroupsRequest(input) +func (c *MediaLive) DescribeReservationWithContext(ctx aws.Context, input *DescribeReservationInput, opts ...request.Option) (*DescribeReservationOutput, error) { + req, out := c.DescribeReservationRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListInputSecurityGroupsPages iterates over the pages of a ListInputSecurityGroups operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. +const opDescribeSchedule = "DescribeSchedule" + +// DescribeScheduleRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // -// See ListInputSecurityGroups method for more information on how to use this operation. +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. // -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a ListInputSecurityGroups operation. -// pageNum := 0 -// err := client.ListInputSecurityGroupsPages(params, -// func(page *ListInputSecurityGroupsOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *MediaLive) ListInputSecurityGroupsPages(input *ListInputSecurityGroupsInput, fn func(*ListInputSecurityGroupsOutput, bool) bool) error { - return c.ListInputSecurityGroupsPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// ListInputSecurityGroupsPagesWithContext same as ListInputSecurityGroupsPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *MediaLive) ListInputSecurityGroupsPagesWithContext(ctx aws.Context, input *ListInputSecurityGroupsInput, fn func(*ListInputSecurityGroupsOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *ListInputSecurityGroupsInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.ListInputSecurityGroupsRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*ListInputSecurityGroupsOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opListInputs = "ListInputs" - -// ListInputsRequest generates a "aws/request.Request" representing the -// client's request for the ListInputs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. -// -// Use "Send" method on the returned Request to send the API call to the service. 
-// the "output" return value is not valid until after Send returns without error. -// -// See ListInputs for more information on using the ListInputs -// API call, and error handling. +// See DescribeSchedule for more information on using the DescribeSchedule +// API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListInputsRequest method. -// req, resp := client.ListInputsRequest(params) +// // Example sending a request using the DescribeScheduleRequest method. +// req, resp := client.DescribeScheduleRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListInputs -func (c *MediaLive) ListInputsRequest(input *ListInputsInput) (req *request.Request, output *ListInputsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/DescribeSchedule +func (c *MediaLive) DescribeScheduleRequest(input *DescribeScheduleInput) (req *request.Request, output *DescribeScheduleOutput) { op := &request.Operation{ - Name: opListInputs, + Name: opDescribeSchedule, HTTPMethod: "GET", - HTTPPath: "/prod/inputs", + HTTPPath: "/prod/channels/{channelId}/schedule", Paginator: &request.Paginator{ InputTokens: []string{"NextToken"}, OutputTokens: []string{"NextToken"}, @@ -1150,24 +1226,24 @@ func (c *MediaLive) ListInputsRequest(input *ListInputsInput) (req *request.Requ } if input == nil { - input = &ListInputsInput{} + input = &DescribeScheduleInput{} } - output = &ListInputsOutput{} + output = &DescribeScheduleOutput{} req = c.newRequest(op, input, output) return } -// ListInputs API operation for AWS Elemental MediaLive. +// DescribeSchedule API operation for AWS Elemental MediaLive. // -// Produces list of inputs that have been created +// Get a channel schedule // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Elemental MediaLive's -// API operation ListInputs for usage and error information. +// API operation DescribeSchedule for usage and error information. // // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -1178,69 +1254,71 @@ func (c *MediaLive) ListInputsRequest(input *ListInputsInput) (req *request.Requ // // * ErrCodeBadGatewayException "BadGatewayException" // +// * ErrCodeNotFoundException "NotFoundException" +// // * ErrCodeGatewayTimeoutException "GatewayTimeoutException" // // * ErrCodeTooManyRequestsException "TooManyRequestsException" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListInputs -func (c *MediaLive) ListInputs(input *ListInputsInput) (*ListInputsOutput, error) { - req, out := c.ListInputsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/DescribeSchedule +func (c *MediaLive) DescribeSchedule(input *DescribeScheduleInput) (*DescribeScheduleOutput, error) { + req, out := c.DescribeScheduleRequest(input) return out, req.Send() } -// ListInputsWithContext is the same as ListInputs with the addition of +// DescribeScheduleWithContext is the same as DescribeSchedule with the addition of // the ability to pass a context and additional request options. 
// -// See ListInputs for details on how to use this API operation. +// See DescribeSchedule for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *MediaLive) ListInputsWithContext(ctx aws.Context, input *ListInputsInput, opts ...request.Option) (*ListInputsOutput, error) { - req, out := c.ListInputsRequest(input) +func (c *MediaLive) DescribeScheduleWithContext(ctx aws.Context, input *DescribeScheduleInput, opts ...request.Option) (*DescribeScheduleOutput, error) { + req, out := c.DescribeScheduleRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListInputsPages iterates over the pages of a ListInputs operation, +// DescribeSchedulePages iterates over the pages of a DescribeSchedule operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See ListInputs method for more information on how to use this operation. +// See DescribeSchedule method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a ListInputs operation. +// // Example iterating over at most 3 pages of a DescribeSchedule operation. // pageNum := 0 -// err := client.ListInputsPages(params, -// func(page *ListInputsOutput, lastPage bool) bool { +// err := client.DescribeSchedulePages(params, +// func(page *DescribeScheduleOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) // -func (c *MediaLive) ListInputsPages(input *ListInputsInput, fn func(*ListInputsOutput, bool) bool) error { - return c.ListInputsPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *MediaLive) DescribeSchedulePages(input *DescribeScheduleInput, fn func(*DescribeScheduleOutput, bool) bool) error { + return c.DescribeSchedulePagesWithContext(aws.BackgroundContext(), input, fn) } -// ListInputsPagesWithContext same as ListInputsPages except +// DescribeSchedulePagesWithContext same as DescribeSchedulePages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *MediaLive) ListInputsPagesWithContext(ctx aws.Context, input *ListInputsInput, fn func(*ListInputsOutput, bool) bool, opts ...request.Option) error { +func (c *MediaLive) DescribeSchedulePagesWithContext(ctx aws.Context, input *DescribeScheduleInput, fn func(*DescribeScheduleOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ NewRequest: func() (*request.Request, error) { - var inCpy *ListInputsInput + var inCpy *DescribeScheduleInput if input != nil { tmp := *input inCpy = &tmp } - req, _ := c.ListInputsRequest(inCpy) + req, _ := c.DescribeScheduleRequest(inCpy) req.SetContext(ctx) req.ApplyOptions(opts...) 
return req, nil @@ -1249,63 +1327,69 @@ func (c *MediaLive) ListInputsPagesWithContext(ctx aws.Context, input *ListInput cont := true for p.Next() && cont { - cont = fn(p.Page().(*ListInputsOutput), !p.HasNextPage()) + cont = fn(p.Page().(*DescribeScheduleOutput), !p.HasNextPage()) } return p.Err() } -const opStartChannel = "StartChannel" +const opListChannels = "ListChannels" -// StartChannelRequest generates a "aws/request.Request" representing the -// client's request for the StartChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListChannelsRequest generates a "aws/request.Request" representing the +// client's request for the ListChannels operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StartChannel for more information on using the StartChannel +// See ListChannels for more information on using the ListChannels // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartChannelRequest method. -// req, resp := client.StartChannelRequest(params) +// // Example sending a request using the ListChannelsRequest method. +// req, resp := client.ListChannelsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/StartChannel -func (c *MediaLive) StartChannelRequest(input *StartChannelInput) (req *request.Request, output *StartChannelOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListChannels +func (c *MediaLive) ListChannelsRequest(input *ListChannelsInput) (req *request.Request, output *ListChannelsOutput) { op := &request.Operation{ - Name: opStartChannel, - HTTPMethod: "POST", - HTTPPath: "/prod/channels/{channelId}/start", + Name: opListChannels, + HTTPMethod: "GET", + HTTPPath: "/prod/channels", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { - input = &StartChannelInput{} + input = &ListChannelsInput{} } - output = &StartChannelOutput{} + output = &ListChannelsOutput{} req = c.newRequest(op, input, output) return } -// StartChannel API operation for AWS Elemental MediaLive. +// ListChannels API operation for AWS Elemental MediaLive. // -// Starts an existing channel +// Produces list of channels that have been created // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Elemental MediaLive's -// API operation StartChannel for usage and error information. +// API operation ListChannels for usage and error information. // // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -1316,88 +1400,140 @@ func (c *MediaLive) StartChannelRequest(input *StartChannelInput) (req *request. 
// // * ErrCodeBadGatewayException "BadGatewayException" // -// * ErrCodeNotFoundException "NotFoundException" -// // * ErrCodeGatewayTimeoutException "GatewayTimeoutException" // // * ErrCodeTooManyRequestsException "TooManyRequestsException" // -// * ErrCodeConflictException "ConflictException" -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/StartChannel -func (c *MediaLive) StartChannel(input *StartChannelInput) (*StartChannelOutput, error) { - req, out := c.StartChannelRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListChannels +func (c *MediaLive) ListChannels(input *ListChannelsInput) (*ListChannelsOutput, error) { + req, out := c.ListChannelsRequest(input) return out, req.Send() } -// StartChannelWithContext is the same as StartChannel with the addition of +// ListChannelsWithContext is the same as ListChannels with the addition of // the ability to pass a context and additional request options. // -// See StartChannel for details on how to use this API operation. +// See ListChannels for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *MediaLive) StartChannelWithContext(ctx aws.Context, input *StartChannelInput, opts ...request.Option) (*StartChannelOutput, error) { - req, out := c.StartChannelRequest(input) +func (c *MediaLive) ListChannelsWithContext(ctx aws.Context, input *ListChannelsInput, opts ...request.Option) (*ListChannelsOutput, error) { + req, out := c.ListChannelsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStopChannel = "StopChannel" +// ListChannelsPages iterates over the pages of a ListChannels operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListChannels method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListChannels operation. +// pageNum := 0 +// err := client.ListChannelsPages(params, +// func(page *ListChannelsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaLive) ListChannelsPages(input *ListChannelsInput, fn func(*ListChannelsOutput, bool) bool) error { + return c.ListChannelsPagesWithContext(aws.BackgroundContext(), input, fn) +} -// StopChannelRequest generates a "aws/request.Request" representing the -// client's request for the StopChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListChannelsPagesWithContext same as ListChannelsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *MediaLive) ListChannelsPagesWithContext(ctx aws.Context, input *ListChannelsInput, fn func(*ListChannelsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListChannelsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListChannelsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListChannelsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListInputSecurityGroups = "ListInputSecurityGroups" + +// ListInputSecurityGroupsRequest generates a "aws/request.Request" representing the +// client's request for the ListInputSecurityGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StopChannel for more information on using the StopChannel +// See ListInputSecurityGroups for more information on using the ListInputSecurityGroups // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StopChannelRequest method. -// req, resp := client.StopChannelRequest(params) +// // Example sending a request using the ListInputSecurityGroupsRequest method. +// req, resp := client.ListInputSecurityGroupsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/StopChannel -func (c *MediaLive) StopChannelRequest(input *StopChannelInput) (req *request.Request, output *StopChannelOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListInputSecurityGroups +func (c *MediaLive) ListInputSecurityGroupsRequest(input *ListInputSecurityGroupsInput) (req *request.Request, output *ListInputSecurityGroupsOutput) { op := &request.Operation{ - Name: opStopChannel, - HTTPMethod: "POST", - HTTPPath: "/prod/channels/{channelId}/stop", + Name: opListInputSecurityGroups, + HTTPMethod: "GET", + HTTPPath: "/prod/inputSecurityGroups", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { - input = &StopChannelInput{} + input = &ListInputSecurityGroupsInput{} } - output = &StopChannelOutput{} + output = &ListInputSecurityGroupsOutput{} req = c.newRequest(op, input, output) return } -// StopChannel API operation for AWS Elemental MediaLive. +// ListInputSecurityGroups API operation for AWS Elemental MediaLive. // -// Stops a running channel +// Produces a list of Input Security Groups for an account // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Elemental MediaLive's -// API operation StopChannel for usage and error information. +// API operation ListInputSecurityGroups for usage and error information. 
// // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" @@ -1408,94 +1544,144 @@ func (c *MediaLive) StopChannelRequest(input *StopChannelInput) (req *request.Re // // * ErrCodeBadGatewayException "BadGatewayException" // -// * ErrCodeNotFoundException "NotFoundException" -// // * ErrCodeGatewayTimeoutException "GatewayTimeoutException" // // * ErrCodeTooManyRequestsException "TooManyRequestsException" // -// * ErrCodeConflictException "ConflictException" -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/StopChannel -func (c *MediaLive) StopChannel(input *StopChannelInput) (*StopChannelOutput, error) { - req, out := c.StopChannelRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListInputSecurityGroups +func (c *MediaLive) ListInputSecurityGroups(input *ListInputSecurityGroupsInput) (*ListInputSecurityGroupsOutput, error) { + req, out := c.ListInputSecurityGroupsRequest(input) return out, req.Send() } -// StopChannelWithContext is the same as StopChannel with the addition of +// ListInputSecurityGroupsWithContext is the same as ListInputSecurityGroups with the addition of // the ability to pass a context and additional request options. // -// See StopChannel for details on how to use this API operation. +// See ListInputSecurityGroups for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *MediaLive) StopChannelWithContext(ctx aws.Context, input *StopChannelInput, opts ...request.Option) (*StopChannelOutput, error) { - req, out := c.StopChannelRequest(input) +func (c *MediaLive) ListInputSecurityGroupsWithContext(ctx aws.Context, input *ListInputSecurityGroupsInput, opts ...request.Option) (*ListInputSecurityGroupsOutput, error) { + req, out := c.ListInputSecurityGroupsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateChannel = "UpdateChannel" - -// UpdateChannelRequest generates a "aws/request.Request" representing the -// client's request for the UpdateChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListInputSecurityGroupsPages iterates over the pages of a ListInputSecurityGroups operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. // -// Use "Send" method on the returned Request to send the API call to the service. -// the "output" return value is not valid until after Send returns without error. +// See ListInputSecurityGroups method for more information on how to use this operation. // -// See UpdateChannel for more information on using the UpdateChannel +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListInputSecurityGroups operation. 
+// pageNum := 0 +// err := client.ListInputSecurityGroupsPages(params, +// func(page *ListInputSecurityGroupsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaLive) ListInputSecurityGroupsPages(input *ListInputSecurityGroupsInput, fn func(*ListInputSecurityGroupsOutput, bool) bool) error { + return c.ListInputSecurityGroupsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListInputSecurityGroupsPagesWithContext same as ListInputSecurityGroupsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) ListInputSecurityGroupsPagesWithContext(ctx aws.Context, input *ListInputSecurityGroupsInput, fn func(*ListInputSecurityGroupsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListInputSecurityGroupsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListInputSecurityGroupsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListInputSecurityGroupsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListInputs = "ListInputs" + +// ListInputsRequest generates a "aws/request.Request" representing the +// client's request for the ListInputs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListInputs for more information on using the ListInputs // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateChannelRequest method. -// req, resp := client.UpdateChannelRequest(params) +// // Example sending a request using the ListInputsRequest method. 
+// req, resp := client.ListInputsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/UpdateChannel -func (c *MediaLive) UpdateChannelRequest(input *UpdateChannelInput) (req *request.Request, output *UpdateChannelOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListInputs +func (c *MediaLive) ListInputsRequest(input *ListInputsInput) (req *request.Request, output *ListInputsOutput) { op := &request.Operation{ - Name: opUpdateChannel, - HTTPMethod: "PUT", - HTTPPath: "/prod/channels/{channelId}", + Name: opListInputs, + HTTPMethod: "GET", + HTTPPath: "/prod/inputs", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { - input = &UpdateChannelInput{} + input = &ListInputsInput{} } - output = &UpdateChannelOutput{} + output = &ListInputsOutput{} req = c.newRequest(op, input, output) return } -// UpdateChannel API operation for AWS Elemental MediaLive. +// ListInputs API operation for AWS Elemental MediaLive. // -// Updates a channel. +// Produces list of inputs that have been created // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Elemental MediaLive's -// API operation UpdateChannel for usage and error information. +// API operation ListInputs for usage and error information. // // Returned Error Codes: // * ErrCodeBadRequestException "BadRequestException" // -// * ErrCodeUnprocessableEntityException "UnprocessableEntityException" -// // * ErrCodeInternalServerErrorException "InternalServerErrorException" // // * ErrCodeForbiddenException "ForbiddenException" @@ -1504,183 +1690,3359 @@ func (c *MediaLive) UpdateChannelRequest(input *UpdateChannelInput) (req *reques // // * ErrCodeGatewayTimeoutException "GatewayTimeoutException" // -// * ErrCodeConflictException "ConflictException" +// * ErrCodeTooManyRequestsException "TooManyRequestsException" // -// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/UpdateChannel -func (c *MediaLive) UpdateChannel(input *UpdateChannelInput) (*UpdateChannelOutput, error) { - req, out := c.UpdateChannelRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListInputs +func (c *MediaLive) ListInputs(input *ListInputsInput) (*ListInputsOutput, error) { + req, out := c.ListInputsRequest(input) return out, req.Send() } -// UpdateChannelWithContext is the same as UpdateChannel with the addition of +// ListInputsWithContext is the same as ListInputs with the addition of // the ability to pass a context and additional request options. // -// See UpdateChannel for details on how to use this API operation. +// See ListInputs for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *MediaLive) UpdateChannelWithContext(ctx aws.Context, input *UpdateChannelInput, opts ...request.Option) (*UpdateChannelOutput, error) { - req, out := c.UpdateChannelRequest(input) +func (c *MediaLive) ListInputsWithContext(ctx aws.Context, input *ListInputsInput, opts ...request.Option) (*ListInputsOutput, error) { + req, out := c.ListInputsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -type AacSettings struct { - _ struct{} `type:"structure"` - - // Average bitrate in bits/second. Valid values depend on rate control mode - // and profile. - Bitrate *float64 `locationName:"bitrate" type:"double"` - - // Mono, Stereo, or 5.1 channel layout. Valid values depend on rate control - // mode and profile. The adReceiverMix setting receives a stereo description - // plus control track and emits a mono AAC encode of the description track, - // with control data emitted in the PES header as per ETSI TS 101 154 Annex - // E. - CodingMode *string `locationName:"codingMode" type:"string" enum:"AacCodingMode"` - - // Set to "broadcasterMixedAd" when input contains pre-mixed main audio + AD - // (narration) as a stereo pair. The Audio Type field (audioType) will be set - // to 3, which signals to downstream systems that this stream contains "broadcaster - // mixed AD". Note that the input received by the encoder must contain pre-mixed - // audio; the encoder does not perform the mixing. The values in audioTypeControl - // and audioType (in AudioDescription) are ignored when set to broadcasterMixedAd.Leave - // set to "normal" when input does not contain pre-mixed audio + AD. - InputType *string `locationName:"inputType" type:"string" enum:"AacInputType"` +// ListInputsPages iterates over the pages of a ListInputs operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListInputs method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListInputs operation. +// pageNum := 0 +// err := client.ListInputsPages(params, +// func(page *ListInputsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaLive) ListInputsPages(input *ListInputsInput, fn func(*ListInputsOutput, bool) bool) error { + return c.ListInputsPagesWithContext(aws.BackgroundContext(), input, fn) +} - // AAC Profile. - Profile *string `locationName:"profile" type:"string" enum:"AacProfile"` +// ListInputsPagesWithContext same as ListInputsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) ListInputsPagesWithContext(ctx aws.Context, input *ListInputsInput, fn func(*ListInputsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListInputsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListInputsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } - // Rate Control Mode. 
- RateControlMode *string `locationName:"rateControlMode" type:"string" enum:"AacRateControlMode"` + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListInputsOutput), !p.HasNextPage()) + } + return p.Err() +} - // Sets LATM / LOAS AAC output for raw containers. - RawFormat *string `locationName:"rawFormat" type:"string" enum:"AacRawFormat"` +const opListOfferings = "ListOfferings" - // Sample rate in Hz. Valid values depend on rate control mode and profile. - SampleRate *float64 `locationName:"sampleRate" type:"double"` +// ListOfferingsRequest generates a "aws/request.Request" representing the +// client's request for the ListOfferings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListOfferings for more information on using the ListOfferings +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListOfferingsRequest method. +// req, resp := client.ListOfferingsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListOfferings +func (c *MediaLive) ListOfferingsRequest(input *ListOfferingsInput) (req *request.Request, output *ListOfferingsOutput) { + op := &request.Operation{ + Name: opListOfferings, + HTTPMethod: "GET", + HTTPPath: "/prod/offerings", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } - // Use MPEG-2 AAC audio instead of MPEG-4 AAC audio for raw or MPEG-2 Transport - // Stream containers. - Spec *string `locationName:"spec" type:"string" enum:"AacSpec"` + if input == nil { + input = &ListOfferingsInput{} + } - // VBR Quality Level - Only used if rateControlMode is VBR. - VbrQuality *string `locationName:"vbrQuality" type:"string" enum:"AacVbrQuality"` + output = &ListOfferingsOutput{} + req = c.newRequest(op, input, output) + return } -// String returns the string representation -func (s AacSettings) String() string { - return awsutil.Prettify(s) +// ListOfferings API operation for AWS Elemental MediaLive. +// +// List offerings available for purchase. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation ListOfferings for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListOfferings +func (c *MediaLive) ListOfferings(input *ListOfferingsInput) (*ListOfferingsOutput, error) { + req, out := c.ListOfferingsRequest(input) + return out, req.Send() } -// GoString returns the string representation -func (s AacSettings) GoString() string { - return s.String() +// ListOfferingsWithContext is the same as ListOfferings with the addition of +// the ability to pass a context and additional request options. +// +// See ListOfferings for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) ListOfferingsWithContext(ctx aws.Context, input *ListOfferingsInput, opts ...request.Option) (*ListOfferingsOutput, error) { + req, out := c.ListOfferingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() } -// SetBitrate sets the Bitrate field's value. -func (s *AacSettings) SetBitrate(v float64) *AacSettings { - s.Bitrate = &v - return s -} +// ListOfferingsPages iterates over the pages of a ListOfferings operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListOfferings method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListOfferings operation. +// pageNum := 0 +// err := client.ListOfferingsPages(params, +// func(page *ListOfferingsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaLive) ListOfferingsPages(input *ListOfferingsInput, fn func(*ListOfferingsOutput, bool) bool) error { + return c.ListOfferingsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListOfferingsPagesWithContext same as ListOfferingsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) ListOfferingsPagesWithContext(ctx aws.Context, input *ListOfferingsInput, fn func(*ListOfferingsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListOfferingsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListOfferingsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListOfferingsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListReservations = "ListReservations" + +// ListReservationsRequest generates a "aws/request.Request" representing the +// client's request for the ListReservations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListReservations for more information on using the ListReservations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListReservationsRequest method. +// req, resp := client.ListReservationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListReservations +func (c *MediaLive) ListReservationsRequest(input *ListReservationsInput) (req *request.Request, output *ListReservationsOutput) { + op := &request.Operation{ + Name: opListReservations, + HTTPMethod: "GET", + HTTPPath: "/prod/reservations", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListReservationsInput{} + } + + output = &ListReservationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListReservations API operation for AWS Elemental MediaLive. +// +// List purchased reservations. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation ListReservations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/ListReservations +func (c *MediaLive) ListReservations(input *ListReservationsInput) (*ListReservationsOutput, error) { + req, out := c.ListReservationsRequest(input) + return out, req.Send() +} + +// ListReservationsWithContext is the same as ListReservations with the addition of +// the ability to pass a context and additional request options. +// +// See ListReservations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *MediaLive) ListReservationsWithContext(ctx aws.Context, input *ListReservationsInput, opts ...request.Option) (*ListReservationsOutput, error) { + req, out := c.ListReservationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListReservationsPages iterates over the pages of a ListReservations operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListReservations method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListReservations operation. +// pageNum := 0 +// err := client.ListReservationsPages(params, +// func(page *ListReservationsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *MediaLive) ListReservationsPages(input *ListReservationsInput, fn func(*ListReservationsOutput, bool) bool) error { + return c.ListReservationsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListReservationsPagesWithContext same as ListReservationsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) ListReservationsPagesWithContext(ctx aws.Context, input *ListReservationsInput, fn func(*ListReservationsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListReservationsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListReservationsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListReservationsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opPurchaseOffering = "PurchaseOffering" + +// PurchaseOfferingRequest generates a "aws/request.Request" representing the +// client's request for the PurchaseOffering operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PurchaseOffering for more information on using the PurchaseOffering +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PurchaseOfferingRequest method. 
+// req, resp := client.PurchaseOfferingRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/PurchaseOffering +func (c *MediaLive) PurchaseOfferingRequest(input *PurchaseOfferingInput) (req *request.Request, output *PurchaseOfferingOutput) { + op := &request.Operation{ + Name: opPurchaseOffering, + HTTPMethod: "POST", + HTTPPath: "/prod/offerings/{offeringId}/purchase", + } + + if input == nil { + input = &PurchaseOfferingInput{} + } + + output = &PurchaseOfferingOutput{} + req = c.newRequest(op, input, output) + return +} + +// PurchaseOffering API operation for AWS Elemental MediaLive. +// +// Purchase an offering and create a reservation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation PurchaseOffering for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/PurchaseOffering +func (c *MediaLive) PurchaseOffering(input *PurchaseOfferingInput) (*PurchaseOfferingOutput, error) { + req, out := c.PurchaseOfferingRequest(input) + return out, req.Send() +} + +// PurchaseOfferingWithContext is the same as PurchaseOffering with the addition of +// the ability to pass a context and additional request options. +// +// See PurchaseOffering for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) PurchaseOfferingWithContext(ctx aws.Context, input *PurchaseOfferingInput, opts ...request.Option) (*PurchaseOfferingOutput, error) { + req, out := c.PurchaseOfferingRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartChannel = "StartChannel" + +// StartChannelRequest generates a "aws/request.Request" representing the +// client's request for the StartChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartChannel for more information on using the StartChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartChannelRequest method. 
+// req, resp := client.StartChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/StartChannel +func (c *MediaLive) StartChannelRequest(input *StartChannelInput) (req *request.Request, output *StartChannelOutput) { + op := &request.Operation{ + Name: opStartChannel, + HTTPMethod: "POST", + HTTPPath: "/prod/channels/{channelId}/start", + } + + if input == nil { + input = &StartChannelInput{} + } + + output = &StartChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartChannel API operation for AWS Elemental MediaLive. +// +// Starts an existing channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation StartChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/StartChannel +func (c *MediaLive) StartChannel(input *StartChannelInput) (*StartChannelOutput, error) { + req, out := c.StartChannelRequest(input) + return out, req.Send() +} + +// StartChannelWithContext is the same as StartChannel with the addition of +// the ability to pass a context and additional request options. +// +// See StartChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) StartChannelWithContext(ctx aws.Context, input *StartChannelInput, opts ...request.Option) (*StartChannelOutput, error) { + req, out := c.StartChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopChannel = "StopChannel" + +// StopChannelRequest generates a "aws/request.Request" representing the +// client's request for the StopChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopChannel for more information on using the StopChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopChannelRequest method. 
+// req, resp := client.StopChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/StopChannel +func (c *MediaLive) StopChannelRequest(input *StopChannelInput) (req *request.Request, output *StopChannelOutput) { + op := &request.Operation{ + Name: opStopChannel, + HTTPMethod: "POST", + HTTPPath: "/prod/channels/{channelId}/stop", + } + + if input == nil { + input = &StopChannelInput{} + } + + output = &StopChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopChannel API operation for AWS Elemental MediaLive. +// +// Stops a running channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation StopChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/StopChannel +func (c *MediaLive) StopChannel(input *StopChannelInput) (*StopChannelOutput, error) { + req, out := c.StopChannelRequest(input) + return out, req.Send() +} + +// StopChannelWithContext is the same as StopChannel with the addition of +// the ability to pass a context and additional request options. +// +// See StopChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) StopChannelWithContext(ctx aws.Context, input *StopChannelInput, opts ...request.Option) (*StopChannelOutput, error) { + req, out := c.StopChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateChannel = "UpdateChannel" + +// UpdateChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateChannel for more information on using the UpdateChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateChannelRequest method. 
+// req, resp := client.UpdateChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/UpdateChannel +func (c *MediaLive) UpdateChannelRequest(input *UpdateChannelInput) (req *request.Request, output *UpdateChannelOutput) { + op := &request.Operation{ + Name: opUpdateChannel, + HTTPMethod: "PUT", + HTTPPath: "/prod/channels/{channelId}", + } + + if input == nil { + input = &UpdateChannelInput{} + } + + output = &UpdateChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateChannel API operation for AWS Elemental MediaLive. +// +// Updates a channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation UpdateChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeUnprocessableEntityException "UnprocessableEntityException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/UpdateChannel +func (c *MediaLive) UpdateChannel(input *UpdateChannelInput) (*UpdateChannelOutput, error) { + req, out := c.UpdateChannelRequest(input) + return out, req.Send() +} + +// UpdateChannelWithContext is the same as UpdateChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) UpdateChannelWithContext(ctx aws.Context, input *UpdateChannelInput, opts ...request.Option) (*UpdateChannelOutput, error) { + req, out := c.UpdateChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateInput = "UpdateInput" + +// UpdateInputRequest generates a "aws/request.Request" representing the +// client's request for the UpdateInput operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateInput for more information on using the UpdateInput +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateInputRequest method. 
+// req, resp := client.UpdateInputRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/UpdateInput +func (c *MediaLive) UpdateInputRequest(input *UpdateInputInput) (req *request.Request, output *UpdateInputOutput) { + op := &request.Operation{ + Name: opUpdateInput, + HTTPMethod: "PUT", + HTTPPath: "/prod/inputs/{inputId}", + } + + if input == nil { + input = &UpdateInputInput{} + } + + output = &UpdateInputOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateInput API operation for AWS Elemental MediaLive. +// +// Updates an input. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation UpdateInput for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/UpdateInput +func (c *MediaLive) UpdateInput(input *UpdateInputInput) (*UpdateInputOutput, error) { + req, out := c.UpdateInputRequest(input) + return out, req.Send() +} + +// UpdateInputWithContext is the same as UpdateInput with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateInput for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) UpdateInputWithContext(ctx aws.Context, input *UpdateInputInput, opts ...request.Option) (*UpdateInputOutput, error) { + req, out := c.UpdateInputRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateInputSecurityGroup = "UpdateInputSecurityGroup" + +// UpdateInputSecurityGroupRequest generates a "aws/request.Request" representing the +// client's request for the UpdateInputSecurityGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateInputSecurityGroup for more information on using the UpdateInputSecurityGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateInputSecurityGroupRequest method. 
+// req, resp := client.UpdateInputSecurityGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/UpdateInputSecurityGroup +func (c *MediaLive) UpdateInputSecurityGroupRequest(input *UpdateInputSecurityGroupInput) (req *request.Request, output *UpdateInputSecurityGroupOutput) { + op := &request.Operation{ + Name: opUpdateInputSecurityGroup, + HTTPMethod: "PUT", + HTTPPath: "/prod/inputSecurityGroups/{inputSecurityGroupId}", + } + + if input == nil { + input = &UpdateInputSecurityGroupInput{} + } + + output = &UpdateInputSecurityGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateInputSecurityGroup API operation for AWS Elemental MediaLive. +// +// Update an Input Security Group's Whilelists. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaLive's +// API operation UpdateInputSecurityGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeBadGatewayException "BadGatewayException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeGatewayTimeoutException "GatewayTimeoutException" +// +// * ErrCodeConflictException "ConflictException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/medialive-2017-10-14/UpdateInputSecurityGroup +func (c *MediaLive) UpdateInputSecurityGroup(input *UpdateInputSecurityGroupInput) (*UpdateInputSecurityGroupOutput, error) { + req, out := c.UpdateInputSecurityGroupRequest(input) + return out, req.Send() +} + +// UpdateInputSecurityGroupWithContext is the same as UpdateInputSecurityGroup with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateInputSecurityGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaLive) UpdateInputSecurityGroupWithContext(ctx aws.Context, input *UpdateInputSecurityGroupInput, opts ...request.Option) (*UpdateInputSecurityGroupOutput, error) { + req, out := c.UpdateInputSecurityGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AacSettings struct { + _ struct{} `type:"structure"` + + // Average bitrate in bits/second. Valid values depend on rate control mode + // and profile. + Bitrate *float64 `locationName:"bitrate" type:"double"` + + // Mono, Stereo, or 5.1 channel layout. Valid values depend on rate control + // mode and profile. The adReceiverMix setting receives a stereo description + // plus control track and emits a mono AAC encode of the description track, + // with control data emitted in the PES header as per ETSI TS 101 154 Annex + // E. + CodingMode *string `locationName:"codingMode" type:"string" enum:"AacCodingMode"` + + // Set to "broadcasterMixedAd" when input contains pre-mixed main audio + AD + // (narration) as a stereo pair. 
The Audio Type field (audioType) will be set + // to 3, which signals to downstream systems that this stream contains "broadcaster + // mixed AD". Note that the input received by the encoder must contain pre-mixed + // audio; the encoder does not perform the mixing. The values in audioTypeControl + // and audioType (in AudioDescription) are ignored when set to broadcasterMixedAd.Leave + // set to "normal" when input does not contain pre-mixed audio + AD. + InputType *string `locationName:"inputType" type:"string" enum:"AacInputType"` + + // AAC Profile. + Profile *string `locationName:"profile" type:"string" enum:"AacProfile"` + + // Rate Control Mode. + RateControlMode *string `locationName:"rateControlMode" type:"string" enum:"AacRateControlMode"` + + // Sets LATM / LOAS AAC output for raw containers. + RawFormat *string `locationName:"rawFormat" type:"string" enum:"AacRawFormat"` + + // Sample rate in Hz. Valid values depend on rate control mode and profile. + SampleRate *float64 `locationName:"sampleRate" type:"double"` + + // Use MPEG-2 AAC audio instead of MPEG-4 AAC audio for raw or MPEG-2 Transport + // Stream containers. + Spec *string `locationName:"spec" type:"string" enum:"AacSpec"` + + // VBR Quality Level - Only used if rateControlMode is VBR. + VbrQuality *string `locationName:"vbrQuality" type:"string" enum:"AacVbrQuality"` +} + +// String returns the string representation +func (s AacSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AacSettings) GoString() string { + return s.String() +} + +// SetBitrate sets the Bitrate field's value. +func (s *AacSettings) SetBitrate(v float64) *AacSettings { + s.Bitrate = &v + return s +} + +// SetCodingMode sets the CodingMode field's value. +func (s *AacSettings) SetCodingMode(v string) *AacSettings { + s.CodingMode = &v + return s +} + +// SetInputType sets the InputType field's value. +func (s *AacSettings) SetInputType(v string) *AacSettings { + s.InputType = &v + return s +} + +// SetProfile sets the Profile field's value. +func (s *AacSettings) SetProfile(v string) *AacSettings { + s.Profile = &v + return s +} + +// SetRateControlMode sets the RateControlMode field's value. +func (s *AacSettings) SetRateControlMode(v string) *AacSettings { + s.RateControlMode = &v + return s +} + +// SetRawFormat sets the RawFormat field's value. +func (s *AacSettings) SetRawFormat(v string) *AacSettings { + s.RawFormat = &v + return s +} + +// SetSampleRate sets the SampleRate field's value. +func (s *AacSettings) SetSampleRate(v float64) *AacSettings { + s.SampleRate = &v + return s +} + +// SetSpec sets the Spec field's value. +func (s *AacSettings) SetSpec(v string) *AacSettings { + s.Spec = &v + return s +} + +// SetVbrQuality sets the VbrQuality field's value. +func (s *AacSettings) SetVbrQuality(v string) *AacSettings { + s.VbrQuality = &v + return s +} + +type Ac3Settings struct { + _ struct{} `type:"structure"` + + // Average bitrate in bits/second. Valid bitrates depend on the coding mode. + Bitrate *float64 `locationName:"bitrate" type:"double"` + + // Specifies the bitstream mode (bsmod) for the emitted AC-3 stream. See ATSC + // A/52-2012 for background on these values. + BitstreamMode *string `locationName:"bitstreamMode" type:"string" enum:"Ac3BitstreamMode"` + + // Dolby Digital coding mode. Determines number of channels. + CodingMode *string `locationName:"codingMode" type:"string" enum:"Ac3CodingMode"` + + // Sets the dialnorm for the output. 
If excluded and input audio is Dolby Digital, + // dialnorm will be passed through. + Dialnorm *int64 `locationName:"dialnorm" min:"1" type:"integer"` + + // If set to filmStandard, adds dynamic range compression signaling to the output + // bitstream as defined in the Dolby Digital specification. + DrcProfile *string `locationName:"drcProfile" type:"string" enum:"Ac3DrcProfile"` + + // When set to enabled, applies a 120Hz lowpass filter to the LFE channel prior + // to encoding. Only valid in codingMode32Lfe mode. + LfeFilter *string `locationName:"lfeFilter" type:"string" enum:"Ac3LfeFilter"` + + // When set to "followInput", encoder metadata will be sourced from the DD, + // DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied + // from one of these streams, then the static metadata settings will be used. + MetadataControl *string `locationName:"metadataControl" type:"string" enum:"Ac3MetadataControl"` +} + +// String returns the string representation +func (s Ac3Settings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Ac3Settings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Ac3Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Ac3Settings"} + if s.Dialnorm != nil && *s.Dialnorm < 1 { + invalidParams.Add(request.NewErrParamMinValue("Dialnorm", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBitrate sets the Bitrate field's value. +func (s *Ac3Settings) SetBitrate(v float64) *Ac3Settings { + s.Bitrate = &v + return s +} + +// SetBitstreamMode sets the BitstreamMode field's value. +func (s *Ac3Settings) SetBitstreamMode(v string) *Ac3Settings { + s.BitstreamMode = &v + return s +} + +// SetCodingMode sets the CodingMode field's value. +func (s *Ac3Settings) SetCodingMode(v string) *Ac3Settings { + s.CodingMode = &v + return s +} + +// SetDialnorm sets the Dialnorm field's value. +func (s *Ac3Settings) SetDialnorm(v int64) *Ac3Settings { + s.Dialnorm = &v + return s +} + +// SetDrcProfile sets the DrcProfile field's value. +func (s *Ac3Settings) SetDrcProfile(v string) *Ac3Settings { + s.DrcProfile = &v + return s +} + +// SetLfeFilter sets the LfeFilter field's value. +func (s *Ac3Settings) SetLfeFilter(v string) *Ac3Settings { + s.LfeFilter = &v + return s +} + +// SetMetadataControl sets the MetadataControl field's value. +func (s *Ac3Settings) SetMetadataControl(v string) *Ac3Settings { + s.MetadataControl = &v + return s +} + +type ArchiveContainerSettings struct { + _ struct{} `type:"structure"` + + M2tsSettings *M2tsSettings `locationName:"m2tsSettings" type:"structure"` +} + +// String returns the string representation +func (s ArchiveContainerSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ArchiveContainerSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ArchiveContainerSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ArchiveContainerSettings"} + if s.M2tsSettings != nil { + if err := s.M2tsSettings.Validate(); err != nil { + invalidParams.AddNested("M2tsSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetM2tsSettings sets the M2tsSettings field's value. 
+func (s *ArchiveContainerSettings) SetM2tsSettings(v *M2tsSettings) *ArchiveContainerSettings { + s.M2tsSettings = v + return s +} + +type ArchiveGroupSettings struct { + _ struct{} `type:"structure"` + + // A directory and base filename where archive files should be written. If the + // base filename portion of the URI is left blank, the base filename of the + // first input will be automatically inserted. + // + // Destination is a required field + Destination *OutputLocationRef `locationName:"destination" type:"structure" required:"true"` + + // Number of seconds to write to archive file before closing and starting a + // new one. + RolloverInterval *int64 `locationName:"rolloverInterval" min:"1" type:"integer"` +} + +// String returns the string representation +func (s ArchiveGroupSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ArchiveGroupSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ArchiveGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ArchiveGroupSettings"} + if s.Destination == nil { + invalidParams.Add(request.NewErrParamRequired("Destination")) + } + if s.RolloverInterval != nil && *s.RolloverInterval < 1 { + invalidParams.Add(request.NewErrParamMinValue("RolloverInterval", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestination sets the Destination field's value. +func (s *ArchiveGroupSettings) SetDestination(v *OutputLocationRef) *ArchiveGroupSettings { + s.Destination = v + return s +} + +// SetRolloverInterval sets the RolloverInterval field's value. +func (s *ArchiveGroupSettings) SetRolloverInterval(v int64) *ArchiveGroupSettings { + s.RolloverInterval = &v + return s +} + +type ArchiveOutputSettings struct { + _ struct{} `type:"structure"` + + // Settings specific to the container type of the file. + // + // ContainerSettings is a required field + ContainerSettings *ArchiveContainerSettings `locationName:"containerSettings" type:"structure" required:"true"` + + // Output file extension. If excluded, this will be auto-selected from the container + // type. + Extension *string `locationName:"extension" type:"string"` + + // String concatenated to the end of the destination filename. Required for + // multiple outputs of the same type. + NameModifier *string `locationName:"nameModifier" type:"string"` +} + +// String returns the string representation +func (s ArchiveOutputSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ArchiveOutputSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ArchiveOutputSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ArchiveOutputSettings"} + if s.ContainerSettings == nil { + invalidParams.Add(request.NewErrParamRequired("ContainerSettings")) + } + if s.ContainerSettings != nil { + if err := s.ContainerSettings.Validate(); err != nil { + invalidParams.AddNested("ContainerSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContainerSettings sets the ContainerSettings field's value. 
+func (s *ArchiveOutputSettings) SetContainerSettings(v *ArchiveContainerSettings) *ArchiveOutputSettings { + s.ContainerSettings = v + return s +} + +// SetExtension sets the Extension field's value. +func (s *ArchiveOutputSettings) SetExtension(v string) *ArchiveOutputSettings { + s.Extension = &v + return s +} + +// SetNameModifier sets the NameModifier field's value. +func (s *ArchiveOutputSettings) SetNameModifier(v string) *ArchiveOutputSettings { + s.NameModifier = &v + return s +} + +type AribDestinationSettings struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AribDestinationSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AribDestinationSettings) GoString() string { + return s.String() +} + +type AribSourceSettings struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AribSourceSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AribSourceSettings) GoString() string { + return s.String() +} + +type AudioChannelMapping struct { + _ struct{} `type:"structure"` + + // Indices and gain values for each input channel that should be remixed into + // this output channel. + // + // InputChannelLevels is a required field + InputChannelLevels []*InputChannelLevel `locationName:"inputChannelLevels" type:"list" required:"true"` + + // The index of the output channel being produced. + // + // OutputChannel is a required field + OutputChannel *int64 `locationName:"outputChannel" type:"integer" required:"true"` +} + +// String returns the string representation +func (s AudioChannelMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioChannelMapping) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioChannelMapping) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioChannelMapping"} + if s.InputChannelLevels == nil { + invalidParams.Add(request.NewErrParamRequired("InputChannelLevels")) + } + if s.OutputChannel == nil { + invalidParams.Add(request.NewErrParamRequired("OutputChannel")) + } + if s.InputChannelLevels != nil { + for i, v := range s.InputChannelLevels { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InputChannelLevels", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputChannelLevels sets the InputChannelLevels field's value. +func (s *AudioChannelMapping) SetInputChannelLevels(v []*InputChannelLevel) *AudioChannelMapping { + s.InputChannelLevels = v + return s +} + +// SetOutputChannel sets the OutputChannel field's value. 
+func (s *AudioChannelMapping) SetOutputChannel(v int64) *AudioChannelMapping { + s.OutputChannel = &v + return s +} + +type AudioCodecSettings struct { + _ struct{} `type:"structure"` + + AacSettings *AacSettings `locationName:"aacSettings" type:"structure"` + + Ac3Settings *Ac3Settings `locationName:"ac3Settings" type:"structure"` + + Eac3Settings *Eac3Settings `locationName:"eac3Settings" type:"structure"` + + Mp2Settings *Mp2Settings `locationName:"mp2Settings" type:"structure"` + + PassThroughSettings *PassThroughSettings `locationName:"passThroughSettings" type:"structure"` +} + +// String returns the string representation +func (s AudioCodecSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioCodecSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioCodecSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioCodecSettings"} + if s.Ac3Settings != nil { + if err := s.Ac3Settings.Validate(); err != nil { + invalidParams.AddNested("Ac3Settings", err.(request.ErrInvalidParams)) + } + } + if s.Eac3Settings != nil { + if err := s.Eac3Settings.Validate(); err != nil { + invalidParams.AddNested("Eac3Settings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAacSettings sets the AacSettings field's value. +func (s *AudioCodecSettings) SetAacSettings(v *AacSettings) *AudioCodecSettings { + s.AacSettings = v + return s +} + +// SetAc3Settings sets the Ac3Settings field's value. +func (s *AudioCodecSettings) SetAc3Settings(v *Ac3Settings) *AudioCodecSettings { + s.Ac3Settings = v + return s +} + +// SetEac3Settings sets the Eac3Settings field's value. +func (s *AudioCodecSettings) SetEac3Settings(v *Eac3Settings) *AudioCodecSettings { + s.Eac3Settings = v + return s +} + +// SetMp2Settings sets the Mp2Settings field's value. +func (s *AudioCodecSettings) SetMp2Settings(v *Mp2Settings) *AudioCodecSettings { + s.Mp2Settings = v + return s +} + +// SetPassThroughSettings sets the PassThroughSettings field's value. +func (s *AudioCodecSettings) SetPassThroughSettings(v *PassThroughSettings) *AudioCodecSettings { + s.PassThroughSettings = v + return s +} + +type AudioDescription struct { + _ struct{} `type:"structure"` + + // Advanced audio normalization settings. + AudioNormalizationSettings *AudioNormalizationSettings `locationName:"audioNormalizationSettings" type:"structure"` + + // The name of the AudioSelector used as the source for this AudioDescription. + // + // AudioSelectorName is a required field + AudioSelectorName *string `locationName:"audioSelectorName" type:"string" required:"true"` + + // Applies only if audioTypeControl is useConfigured. The values for audioType + // are defined in ISO-IEC 13818-1. + AudioType *string `locationName:"audioType" type:"string" enum:"AudioType"` + + // Determines how audio type is determined. followInput: If the input contains + // an ISO 639 audioType, then that value is passed through to the output. If + // the input contains no ISO 639 audioType, the value in Audio Type is included + // in the output. useConfigured: The value in Audio Type is included in the + // output.Note that this field and audioType are both ignored if inputType is + // broadcasterMixedAd. 
+ AudioTypeControl *string `locationName:"audioTypeControl" type:"string" enum:"AudioDescriptionAudioTypeControl"` + + // Audio codec settings. + CodecSettings *AudioCodecSettings `locationName:"codecSettings" type:"structure"` + + // Indicates the language of the audio output track. Only used if languageControlMode + // is useConfigured, or there is no ISO 639 language code specified in the input. + LanguageCode *string `locationName:"languageCode" min:"3" type:"string"` + + // Choosing followInput will cause the ISO 639 language code of the output to + // follow the ISO 639 language code of the input. The languageCode will be used + // when useConfigured is set, or when followInput is selected but there is no + // ISO 639 language code specified by the input. + LanguageCodeControl *string `locationName:"languageCodeControl" type:"string" enum:"AudioDescriptionLanguageCodeControl"` + + // The name of this AudioDescription. Outputs will use this name to uniquely + // identify this AudioDescription. Description names should be unique within + // this Live Event. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // Settings that control how input audio channels are remixed into the output + // audio channels. + RemixSettings *RemixSettings `locationName:"remixSettings" type:"structure"` + + // Used for MS Smooth and Apple HLS outputs. Indicates the name displayed by + // the player (eg. English, or Director Commentary). + StreamName *string `locationName:"streamName" type:"string"` +} + +// String returns the string representation +func (s AudioDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioDescription) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioDescription) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioDescription"} + if s.AudioSelectorName == nil { + invalidParams.Add(request.NewErrParamRequired("AudioSelectorName")) + } + if s.LanguageCode != nil && len(*s.LanguageCode) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LanguageCode", 3)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.CodecSettings != nil { + if err := s.CodecSettings.Validate(); err != nil { + invalidParams.AddNested("CodecSettings", err.(request.ErrInvalidParams)) + } + } + if s.RemixSettings != nil { + if err := s.RemixSettings.Validate(); err != nil { + invalidParams.AddNested("RemixSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAudioNormalizationSettings sets the AudioNormalizationSettings field's value. +func (s *AudioDescription) SetAudioNormalizationSettings(v *AudioNormalizationSettings) *AudioDescription { + s.AudioNormalizationSettings = v + return s +} + +// SetAudioSelectorName sets the AudioSelectorName field's value. +func (s *AudioDescription) SetAudioSelectorName(v string) *AudioDescription { + s.AudioSelectorName = &v + return s +} + +// SetAudioType sets the AudioType field's value. +func (s *AudioDescription) SetAudioType(v string) *AudioDescription { + s.AudioType = &v + return s +} + +// SetAudioTypeControl sets the AudioTypeControl field's value. 
+func (s *AudioDescription) SetAudioTypeControl(v string) *AudioDescription { + s.AudioTypeControl = &v + return s +} + +// SetCodecSettings sets the CodecSettings field's value. +func (s *AudioDescription) SetCodecSettings(v *AudioCodecSettings) *AudioDescription { + s.CodecSettings = v + return s +} + +// SetLanguageCode sets the LanguageCode field's value. +func (s *AudioDescription) SetLanguageCode(v string) *AudioDescription { + s.LanguageCode = &v + return s +} + +// SetLanguageCodeControl sets the LanguageCodeControl field's value. +func (s *AudioDescription) SetLanguageCodeControl(v string) *AudioDescription { + s.LanguageCodeControl = &v + return s +} + +// SetName sets the Name field's value. +func (s *AudioDescription) SetName(v string) *AudioDescription { + s.Name = &v + return s +} + +// SetRemixSettings sets the RemixSettings field's value. +func (s *AudioDescription) SetRemixSettings(v *RemixSettings) *AudioDescription { + s.RemixSettings = v + return s +} + +// SetStreamName sets the StreamName field's value. +func (s *AudioDescription) SetStreamName(v string) *AudioDescription { + s.StreamName = &v + return s +} + +type AudioLanguageSelection struct { + _ struct{} `type:"structure"` + + // Selects a specific three-letter language code from within an audio source. + // + // LanguageCode is a required field + LanguageCode *string `locationName:"languageCode" type:"string" required:"true"` + + // When set to "strict", the transport stream demux strictly identifies audio + // streams by their language descriptor. If a PMT update occurs such that an + // audio stream matching the initially selected language is no longer present + // then mute will be encoded until the language returns. If "loose", then on + // a PMT update the demux will choose another audio stream in the program with + // the same stream type if it can't find one with the same language. + LanguageSelectionPolicy *string `locationName:"languageSelectionPolicy" type:"string" enum:"AudioLanguageSelectionPolicy"` +} + +// String returns the string representation +func (s AudioLanguageSelection) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioLanguageSelection) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioLanguageSelection) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioLanguageSelection"} + if s.LanguageCode == nil { + invalidParams.Add(request.NewErrParamRequired("LanguageCode")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLanguageCode sets the LanguageCode field's value. +func (s *AudioLanguageSelection) SetLanguageCode(v string) *AudioLanguageSelection { + s.LanguageCode = &v + return s +} + +// SetLanguageSelectionPolicy sets the LanguageSelectionPolicy field's value. +func (s *AudioLanguageSelection) SetLanguageSelectionPolicy(v string) *AudioLanguageSelection { + s.LanguageSelectionPolicy = &v + return s +} + +type AudioNormalizationSettings struct { + _ struct{} `type:"structure"` + + // Audio normalization algorithm to use. itu17701 conforms to the CALM Act specification, + // itu17702 conforms to the EBU R-128 specification. + Algorithm *string `locationName:"algorithm" type:"string" enum:"AudioNormalizationAlgorithm"` + + // When set to correctAudio the output audio is corrected using the chosen algorithm. 
+ // If set to measureOnly, the audio will be measured but not adjusted. + AlgorithmControl *string `locationName:"algorithmControl" type:"string" enum:"AudioNormalizationAlgorithmControl"` + + // Target LKFS(loudness) to adjust volume to. If no value is entered, a default + // value will be used according to the chosen algorithm. The CALM Act (1770-1) + // recommends a target of -24 LKFS. The EBU R-128 specification (1770-2) recommends + // a target of -23 LKFS. + TargetLkfs *float64 `locationName:"targetLkfs" type:"double"` +} + +// String returns the string representation +func (s AudioNormalizationSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioNormalizationSettings) GoString() string { + return s.String() +} + +// SetAlgorithm sets the Algorithm field's value. +func (s *AudioNormalizationSettings) SetAlgorithm(v string) *AudioNormalizationSettings { + s.Algorithm = &v + return s +} + +// SetAlgorithmControl sets the AlgorithmControl field's value. +func (s *AudioNormalizationSettings) SetAlgorithmControl(v string) *AudioNormalizationSettings { + s.AlgorithmControl = &v + return s +} + +// SetTargetLkfs sets the TargetLkfs field's value. +func (s *AudioNormalizationSettings) SetTargetLkfs(v float64) *AudioNormalizationSettings { + s.TargetLkfs = &v + return s +} + +type AudioOnlyHlsSettings struct { + _ struct{} `type:"structure"` + + // Specifies the group to which the audio Rendition belongs. + AudioGroupId *string `locationName:"audioGroupId" type:"string"` + + // For use with an audio only Stream. Must be a .jpg or .png file. If given, + // this image will be used as the cover-art for the audio only output. Ideally, + // it should be formatted for an iPhone screen for two reasons. The iPhone does + // not resize the image, it crops a centered image on the top/bottom and left/right. + // Additionally, this image file gets saved bit-for-bit into every 10-second + // segment file, so will increase bandwidth by {image file size} * {segment + // count} * {user count}. + AudioOnlyImage *InputLocation `locationName:"audioOnlyImage" type:"structure"` + + // Four types of audio-only tracks are supported: "Audio-Only Variant Stream" - + // the client can play back this audio-only stream instead of video in low-bandwidth + // scenarios; represented as an EXT-X-STREAM-INF in the HLS manifest. "Alternate + // Audio, Auto Select, Default" - alternate rendition that the client should try + // to play back by default; represented as an EXT-X-MEDIA in the HLS manifest + // with DEFAULT=YES, AUTOSELECT=YES. "Alternate Audio, Auto Select, Not Default" - + // alternate rendition that the client may try to play back by default; represented + // as an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=YES. "Alternate + // Audio, not Auto Select" - alternate rendition that the client will not try to + // play back by default; represented as an EXT-X-MEDIA in the HLS manifest with + // DEFAULT=NO, AUTOSELECT=NO. + AudioTrackType *string `locationName:"audioTrackType" type:"string" enum:"AudioOnlyHlsTrackType"` +} + +// String returns the string representation +func (s AudioOnlyHlsSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioOnlyHlsSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid.
+func (s *AudioOnlyHlsSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioOnlyHlsSettings"} + if s.AudioOnlyImage != nil { + if err := s.AudioOnlyImage.Validate(); err != nil { + invalidParams.AddNested("AudioOnlyImage", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAudioGroupId sets the AudioGroupId field's value. +func (s *AudioOnlyHlsSettings) SetAudioGroupId(v string) *AudioOnlyHlsSettings { + s.AudioGroupId = &v + return s +} + +// SetAudioOnlyImage sets the AudioOnlyImage field's value. +func (s *AudioOnlyHlsSettings) SetAudioOnlyImage(v *InputLocation) *AudioOnlyHlsSettings { + s.AudioOnlyImage = v + return s +} + +// SetAudioTrackType sets the AudioTrackType field's value. +func (s *AudioOnlyHlsSettings) SetAudioTrackType(v string) *AudioOnlyHlsSettings { + s.AudioTrackType = &v + return s +} + +type AudioPidSelection struct { + _ struct{} `type:"structure"` + + // Selects a specific PID from within a source. + // + // Pid is a required field + Pid *int64 `locationName:"pid" type:"integer" required:"true"` +} + +// String returns the string representation +func (s AudioPidSelection) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioPidSelection) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioPidSelection) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioPidSelection"} + if s.Pid == nil { + invalidParams.Add(request.NewErrParamRequired("Pid")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPid sets the Pid field's value. +func (s *AudioPidSelection) SetPid(v int64) *AudioPidSelection { + s.Pid = &v + return s +} + +type AudioSelector struct { + _ struct{} `type:"structure"` + + // The name of this AudioSelector. AudioDescriptions will use this name to uniquely + // identify this Selector. Selector names should be unique per input. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` + + // The audio selector settings. + SelectorSettings *AudioSelectorSettings `locationName:"selectorSettings" type:"structure"` +} + +// String returns the string representation +func (s AudioSelector) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioSelector) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioSelector) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioSelector"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.SelectorSettings != nil { + if err := s.SelectorSettings.Validate(); err != nil { + invalidParams.AddNested("SelectorSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *AudioSelector) SetName(v string) *AudioSelector { + s.Name = &v + return s +} + +// SetSelectorSettings sets the SelectorSettings field's value. 
+func (s *AudioSelector) SetSelectorSettings(v *AudioSelectorSettings) *AudioSelector { + s.SelectorSettings = v + return s +} + +type AudioSelectorSettings struct { + _ struct{} `type:"structure"` + + AudioLanguageSelection *AudioLanguageSelection `locationName:"audioLanguageSelection" type:"structure"` + + AudioPidSelection *AudioPidSelection `locationName:"audioPidSelection" type:"structure"` +} + +// String returns the string representation +func (s AudioSelectorSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AudioSelectorSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AudioSelectorSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AudioSelectorSettings"} + if s.AudioLanguageSelection != nil { + if err := s.AudioLanguageSelection.Validate(); err != nil { + invalidParams.AddNested("AudioLanguageSelection", err.(request.ErrInvalidParams)) + } + } + if s.AudioPidSelection != nil { + if err := s.AudioPidSelection.Validate(); err != nil { + invalidParams.AddNested("AudioPidSelection", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAudioLanguageSelection sets the AudioLanguageSelection field's value. +func (s *AudioSelectorSettings) SetAudioLanguageSelection(v *AudioLanguageSelection) *AudioSelectorSettings { + s.AudioLanguageSelection = v + return s +} + +// SetAudioPidSelection sets the AudioPidSelection field's value. +func (s *AudioSelectorSettings) SetAudioPidSelection(v *AudioPidSelection) *AudioSelectorSettings { + s.AudioPidSelection = v + return s +} + +type AvailBlanking struct { + _ struct{} `type:"structure"` + + // Blanking image to be used. Leave empty for solid black. Only bmp and png + // images are supported. + AvailBlankingImage *InputLocation `locationName:"availBlankingImage" type:"structure"` + + // When set to enabled, causes video, audio and captions to be blanked when + // insertion metadata is added. + State *string `locationName:"state" type:"string" enum:"AvailBlankingState"` +} + +// String returns the string representation +func (s AvailBlanking) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AvailBlanking) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AvailBlanking) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AvailBlanking"} + if s.AvailBlankingImage != nil { + if err := s.AvailBlankingImage.Validate(); err != nil { + invalidParams.AddNested("AvailBlankingImage", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailBlankingImage sets the AvailBlankingImage field's value. +func (s *AvailBlanking) SetAvailBlankingImage(v *InputLocation) *AvailBlanking { + s.AvailBlankingImage = v + return s +} + +// SetState sets the State field's value. +func (s *AvailBlanking) SetState(v string) *AvailBlanking { + s.State = &v + return s +} + +type AvailConfiguration struct { + _ struct{} `type:"structure"` + + // Ad avail settings. 
+ AvailSettings *AvailSettings `locationName:"availSettings" type:"structure"` +} + +// String returns the string representation +func (s AvailConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AvailConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AvailConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AvailConfiguration"} + if s.AvailSettings != nil { + if err := s.AvailSettings.Validate(); err != nil { + invalidParams.AddNested("AvailSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailSettings sets the AvailSettings field's value. +func (s *AvailConfiguration) SetAvailSettings(v *AvailSettings) *AvailConfiguration { + s.AvailSettings = v + return s +} + +type AvailSettings struct { + _ struct{} `type:"structure"` + + Scte35SpliceInsert *Scte35SpliceInsert `locationName:"scte35SpliceInsert" type:"structure"` + + Scte35TimeSignalApos *Scte35TimeSignalApos `locationName:"scte35TimeSignalApos" type:"structure"` +} + +// String returns the string representation +func (s AvailSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AvailSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AvailSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AvailSettings"} + if s.Scte35SpliceInsert != nil { + if err := s.Scte35SpliceInsert.Validate(); err != nil { + invalidParams.AddNested("Scte35SpliceInsert", err.(request.ErrInvalidParams)) + } + } + if s.Scte35TimeSignalApos != nil { + if err := s.Scte35TimeSignalApos.Validate(); err != nil { + invalidParams.AddNested("Scte35TimeSignalApos", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetScte35SpliceInsert sets the Scte35SpliceInsert field's value. +func (s *AvailSettings) SetScte35SpliceInsert(v *Scte35SpliceInsert) *AvailSettings { + s.Scte35SpliceInsert = v + return s +} + +// SetScte35TimeSignalApos sets the Scte35TimeSignalApos field's value. +func (s *AvailSettings) SetScte35TimeSignalApos(v *Scte35TimeSignalApos) *AvailSettings { + s.Scte35TimeSignalApos = v + return s +} + +// A list of schedule actions to create (in a request) or that have been created +// (in a response). +type BatchScheduleActionCreateRequest struct { + _ struct{} `type:"structure"` + + // A list of schedule actions to create. + // + // ScheduleActions is a required field + ScheduleActions []*ScheduleAction `locationName:"scheduleActions" type:"list" required:"true"` +} + +// String returns the string representation +func (s BatchScheduleActionCreateRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchScheduleActionCreateRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *BatchScheduleActionCreateRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchScheduleActionCreateRequest"} + if s.ScheduleActions == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduleActions")) + } + if s.ScheduleActions != nil { + for i, v := range s.ScheduleActions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ScheduleActions", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetScheduleActions sets the ScheduleActions field's value. +func (s *BatchScheduleActionCreateRequest) SetScheduleActions(v []*ScheduleAction) *BatchScheduleActionCreateRequest { + s.ScheduleActions = v + return s +} + +// List of actions that have been created in the schedule. +type BatchScheduleActionCreateResult struct { + _ struct{} `type:"structure"` + + // List of actions that have been created in the schedule. + // + // ScheduleActions is a required field + ScheduleActions []*ScheduleAction `locationName:"scheduleActions" type:"list" required:"true"` +} + +// String returns the string representation +func (s BatchScheduleActionCreateResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchScheduleActionCreateResult) GoString() string { + return s.String() +} + +// SetScheduleActions sets the ScheduleActions field's value. +func (s *BatchScheduleActionCreateResult) SetScheduleActions(v []*ScheduleAction) *BatchScheduleActionCreateResult { + s.ScheduleActions = v + return s +} + +// A list of schedule actions to delete. +type BatchScheduleActionDeleteRequest struct { + _ struct{} `type:"structure"` + + // A list of schedule actions to delete. + // + // ActionNames is a required field + ActionNames []*string `locationName:"actionNames" type:"list" required:"true"` +} + +// String returns the string representation +func (s BatchScheduleActionDeleteRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchScheduleActionDeleteRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchScheduleActionDeleteRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchScheduleActionDeleteRequest"} + if s.ActionNames == nil { + invalidParams.Add(request.NewErrParamRequired("ActionNames")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActionNames sets the ActionNames field's value. +func (s *BatchScheduleActionDeleteRequest) SetActionNames(v []*string) *BatchScheduleActionDeleteRequest { + s.ActionNames = v + return s +} + +// List of actions that have been deleted from the schedule. +type BatchScheduleActionDeleteResult struct { + _ struct{} `type:"structure"` + + // List of actions that have been deleted from the schedule. + // + // ScheduleActions is a required field + ScheduleActions []*ScheduleAction `locationName:"scheduleActions" type:"list" required:"true"` +} + +// String returns the string representation +func (s BatchScheduleActionDeleteResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchScheduleActionDeleteResult) GoString() string { + return s.String() +} + +// SetScheduleActions sets the ScheduleActions field's value. 
+func (s *BatchScheduleActionDeleteResult) SetScheduleActions(v []*ScheduleAction) *BatchScheduleActionDeleteResult { + s.ScheduleActions = v + return s +} + +// A request to create actions (add actions to the schedule), delete actions +// (remove actions from the schedule), or both create and delete actions. +type BatchUpdateScheduleInput struct { + _ struct{} `type:"structure"` + + // ChannelId is a required field + ChannelId *string `location:"uri" locationName:"channelId" type:"string" required:"true"` + + // Schedule actions to create in the schedule. + Creates *BatchScheduleActionCreateRequest `locationName:"creates" type:"structure"` + + // Schedule actions to delete from the schedule. + Deletes *BatchScheduleActionDeleteRequest `locationName:"deletes" type:"structure"` +} + +// String returns the string representation +func (s BatchUpdateScheduleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchUpdateScheduleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchUpdateScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchUpdateScheduleInput"} + if s.ChannelId == nil { + invalidParams.Add(request.NewErrParamRequired("ChannelId")) + } + if s.Creates != nil { + if err := s.Creates.Validate(); err != nil { + invalidParams.AddNested("Creates", err.(request.ErrInvalidParams)) + } + } + if s.Deletes != nil { + if err := s.Deletes.Validate(); err != nil { + invalidParams.AddNested("Deletes", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChannelId sets the ChannelId field's value. +func (s *BatchUpdateScheduleInput) SetChannelId(v string) *BatchUpdateScheduleInput { + s.ChannelId = &v + return s +} + +// SetCreates sets the Creates field's value. +func (s *BatchUpdateScheduleInput) SetCreates(v *BatchScheduleActionCreateRequest) *BatchUpdateScheduleInput { + s.Creates = v + return s +} + +// SetDeletes sets the Deletes field's value. +func (s *BatchUpdateScheduleInput) SetDeletes(v *BatchScheduleActionDeleteRequest) *BatchUpdateScheduleInput { + s.Deletes = v + return s +} + +type BatchUpdateScheduleOutput struct { + _ struct{} `type:"structure"` + + // List of actions that have been created in the schedule. + Creates *BatchScheduleActionCreateResult `locationName:"creates" type:"structure"` + + // List of actions that have been deleted from the schedule. + Deletes *BatchScheduleActionDeleteResult `locationName:"deletes" type:"structure"` +} + +// String returns the string representation +func (s BatchUpdateScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchUpdateScheduleOutput) GoString() string { + return s.String() +} + +// SetCreates sets the Creates field's value. +func (s *BatchUpdateScheduleOutput) SetCreates(v *BatchScheduleActionCreateResult) *BatchUpdateScheduleOutput { + s.Creates = v + return s +} + +// SetDeletes sets the Deletes field's value. +func (s *BatchUpdateScheduleOutput) SetDeletes(v *BatchScheduleActionDeleteResult) *BatchUpdateScheduleOutput { + s.Deletes = v + return s +} + +type BlackoutSlate struct { + _ struct{} `type:"structure"` + + // Blackout slate image to be used. Leave empty for solid black. Only bmp and + // png images are supported. 
+ BlackoutSlateImage *InputLocation `locationName:"blackoutSlateImage" type:"structure"` + + // Setting to enabled causes the encoder to blackout the video, audio, and captions, + // and raise the "Network Blackout Image" slate when an SCTE104/35 Network End + // Segmentation Descriptor is encountered. The blackout will be lifted when + // the Network Start Segmentation Descriptor is encountered. The Network End + // and Network Start descriptors must contain a network ID that matches the + // value entered in "Network ID". + NetworkEndBlackout *string `locationName:"networkEndBlackout" type:"string" enum:"BlackoutSlateNetworkEndBlackout"` + + // Path to local file to use as Network End Blackout image. Image will be scaled + // to fill the entire output raster. + NetworkEndBlackoutImage *InputLocation `locationName:"networkEndBlackoutImage" type:"structure"` + + // Provides Network ID that matches EIDR ID format (e.g., "10.XXXX/XXXX-XXXX-XXXX-XXXX-XXXX-C"). + NetworkId *string `locationName:"networkId" min:"34" type:"string"` + + // When set to enabled, causes video, audio and captions to be blanked when + // indicated by program metadata. + State *string `locationName:"state" type:"string" enum:"BlackoutSlateState"` +} + +// String returns the string representation +func (s BlackoutSlate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BlackoutSlate) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BlackoutSlate) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BlackoutSlate"} + if s.NetworkId != nil && len(*s.NetworkId) < 34 { + invalidParams.Add(request.NewErrParamMinLen("NetworkId", 34)) + } + if s.BlackoutSlateImage != nil { + if err := s.BlackoutSlateImage.Validate(); err != nil { + invalidParams.AddNested("BlackoutSlateImage", err.(request.ErrInvalidParams)) + } + } + if s.NetworkEndBlackoutImage != nil { + if err := s.NetworkEndBlackoutImage.Validate(); err != nil { + invalidParams.AddNested("NetworkEndBlackoutImage", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBlackoutSlateImage sets the BlackoutSlateImage field's value. +func (s *BlackoutSlate) SetBlackoutSlateImage(v *InputLocation) *BlackoutSlate { + s.BlackoutSlateImage = v + return s +} + +// SetNetworkEndBlackout sets the NetworkEndBlackout field's value. +func (s *BlackoutSlate) SetNetworkEndBlackout(v string) *BlackoutSlate { + s.NetworkEndBlackout = &v + return s +} + +// SetNetworkEndBlackoutImage sets the NetworkEndBlackoutImage field's value. +func (s *BlackoutSlate) SetNetworkEndBlackoutImage(v *InputLocation) *BlackoutSlate { + s.NetworkEndBlackoutImage = v + return s +} + +// SetNetworkId sets the NetworkId field's value. +func (s *BlackoutSlate) SetNetworkId(v string) *BlackoutSlate { + s.NetworkId = &v + return s +} + +// SetState sets the State field's value. +func (s *BlackoutSlate) SetState(v string) *BlackoutSlate { + s.State = &v + return s +} + +type BurnInDestinationSettings struct { + _ struct{} `type:"structure"` + + // If no explicit xPosition or yPosition is provided, setting alignment to centered + // will place the captions at the bottom center of the output. Similarly, setting + // a left alignment will align captions to the bottom left of the output. 
If + // x and y positions are given in conjunction with the alignment parameter, + // the font will be justified (either left or centered) relative to those coordinates. + // Selecting "smart" justification will left-justify live subtitles and center-justify + // pre-recorded subtitles. All burn-in and DVB-Sub font settings must match. + Alignment *string `locationName:"alignment" type:"string" enum:"BurnInAlignment"` + + // Specifies the color of the rectangle behind the captions. All burn-in and + // DVB-Sub font settings must match. + BackgroundColor *string `locationName:"backgroundColor" type:"string" enum:"BurnInBackgroundColor"` + + // Specifies the opacity of the background rectangle. 255 is opaque; 0 is transparent. + // Leaving this parameter out is equivalent to setting it to 0 (transparent). + // All burn-in and DVB-Sub font settings must match. + BackgroundOpacity *int64 `locationName:"backgroundOpacity" type:"integer"` + + // External font file used for caption burn-in. File extension must be 'ttf' + // or 'tte'. Although the user can select output fonts for many different types + // of input captions, embedded, STL and teletext sources use a strict grid system. + // Using external fonts with these caption sources could cause unexpected display + // of proportional fonts. All burn-in and DVB-Sub font settings must match. + Font *InputLocation `locationName:"font" type:"structure"` + + // Specifies the color of the burned-in captions. This option is not valid for + // source captions that are STL, 608/embedded or teletext. These source settings + // are already pre-defined by the caption stream. All burn-in and DVB-Sub font + // settings must match. + FontColor *string `locationName:"fontColor" type:"string" enum:"BurnInFontColor"` + + // Specifies the opacity of the burned-in captions. 255 is opaque; 0 is transparent. + // All burn-in and DVB-Sub font settings must match. + FontOpacity *int64 `locationName:"fontOpacity" type:"integer"` + + // Font resolution in DPI (dots per inch); default is 96 dpi. All burn-in and + // DVB-Sub font settings must match. + FontResolution *int64 `locationName:"fontResolution" min:"96" type:"integer"` + + // When set to 'auto' fontSize will scale depending on the size of the output. + // Giving a positive integer will specify the exact font size in points. All + // burn-in and DVB-Sub font settings must match. + FontSize *string `locationName:"fontSize" type:"string"` + + // Specifies font outline color. This option is not valid for source captions + // that are either 608/embedded or teletext. These source settings are already + // pre-defined by the caption stream. All burn-in and DVB-Sub font settings + // must match. + OutlineColor *string `locationName:"outlineColor" type:"string" enum:"BurnInOutlineColor"` + + // Specifies font outline size in pixels. This option is not valid for source + // captions that are either 608/embedded or teletext. These source settings + // are already pre-defined by the caption stream. All burn-in and DVB-Sub font + // settings must match. + OutlineSize *int64 `locationName:"outlineSize" type:"integer"` + + // Specifies the color of the shadow cast by the captions. All burn-in and DVB-Sub + // font settings must match. + ShadowColor *string `locationName:"shadowColor" type:"string" enum:"BurnInShadowColor"` + + // Specifies the opacity of the shadow. 255 is opaque; 0 is transparent. Leaving + // this parameter out is equivalent to setting it to 0 (transparent). 
All burn-in + // and DVB-Sub font settings must match. + ShadowOpacity *int64 `locationName:"shadowOpacity" type:"integer"` + + // Specifies the horizontal offset of the shadow relative to the captions in + // pixels. A value of -2 would result in a shadow offset 2 pixels to the left. + // All burn-in and DVB-Sub font settings must match. + ShadowXOffset *int64 `locationName:"shadowXOffset" type:"integer"` + + // Specifies the vertical offset of the shadow relative to the captions in pixels. + // A value of -2 would result in a shadow offset 2 pixels above the text. All + // burn-in and DVB-Sub font settings must match. + ShadowYOffset *int64 `locationName:"shadowYOffset" type:"integer"` + + // Controls whether a fixed grid size will be used to generate the output subtitles + // bitmap. Only applicable for Teletext inputs and DVB-Sub/Burn-in outputs. + TeletextGridControl *string `locationName:"teletextGridControl" type:"string" enum:"BurnInTeletextGridControl"` + + // Specifies the horizontal position of the caption relative to the left side + // of the output in pixels. A value of 10 would result in the captions starting + // 10 pixels from the left of the output. If no explicit xPosition is provided, + // the horizontal caption position will be determined by the alignment parameter. + // All burn-in and DVB-Sub font settings must match. + XPosition *int64 `locationName:"xPosition" type:"integer"` + + // Specifies the vertical position of the caption relative to the top of the + // output in pixels. A value of 10 would result in the captions starting 10 + // pixels from the top of the output. If no explicit yPosition is provided, + // the caption will be positioned towards the bottom of the output. All burn-in + // and DVB-Sub font settings must match. + YPosition *int64 `locationName:"yPosition" type:"integer"` +} + +// String returns the string representation +func (s BurnInDestinationSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BurnInDestinationSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BurnInDestinationSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BurnInDestinationSettings"} + if s.FontResolution != nil && *s.FontResolution < 96 { + invalidParams.Add(request.NewErrParamMinValue("FontResolution", 96)) + } + if s.Font != nil { + if err := s.Font.Validate(); err != nil { + invalidParams.AddNested("Font", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAlignment sets the Alignment field's value. +func (s *BurnInDestinationSettings) SetAlignment(v string) *BurnInDestinationSettings { + s.Alignment = &v + return s +} + +// SetBackgroundColor sets the BackgroundColor field's value. +func (s *BurnInDestinationSettings) SetBackgroundColor(v string) *BurnInDestinationSettings { + s.BackgroundColor = &v + return s +} + +// SetBackgroundOpacity sets the BackgroundOpacity field's value. +func (s *BurnInDestinationSettings) SetBackgroundOpacity(v int64) *BurnInDestinationSettings { + s.BackgroundOpacity = &v + return s +} + +// SetFont sets the Font field's value. +func (s *BurnInDestinationSettings) SetFont(v *InputLocation) *BurnInDestinationSettings { + s.Font = v + return s +} + +// SetFontColor sets the FontColor field's value. 
+func (s *BurnInDestinationSettings) SetFontColor(v string) *BurnInDestinationSettings { + s.FontColor = &v + return s +} + +// SetFontOpacity sets the FontOpacity field's value. +func (s *BurnInDestinationSettings) SetFontOpacity(v int64) *BurnInDestinationSettings { + s.FontOpacity = &v + return s +} + +// SetFontResolution sets the FontResolution field's value. +func (s *BurnInDestinationSettings) SetFontResolution(v int64) *BurnInDestinationSettings { + s.FontResolution = &v + return s +} + +// SetFontSize sets the FontSize field's value. +func (s *BurnInDestinationSettings) SetFontSize(v string) *BurnInDestinationSettings { + s.FontSize = &v + return s +} + +// SetOutlineColor sets the OutlineColor field's value. +func (s *BurnInDestinationSettings) SetOutlineColor(v string) *BurnInDestinationSettings { + s.OutlineColor = &v + return s +} + +// SetOutlineSize sets the OutlineSize field's value. +func (s *BurnInDestinationSettings) SetOutlineSize(v int64) *BurnInDestinationSettings { + s.OutlineSize = &v + return s +} + +// SetShadowColor sets the ShadowColor field's value. +func (s *BurnInDestinationSettings) SetShadowColor(v string) *BurnInDestinationSettings { + s.ShadowColor = &v + return s +} + +// SetShadowOpacity sets the ShadowOpacity field's value. +func (s *BurnInDestinationSettings) SetShadowOpacity(v int64) *BurnInDestinationSettings { + s.ShadowOpacity = &v + return s +} + +// SetShadowXOffset sets the ShadowXOffset field's value. +func (s *BurnInDestinationSettings) SetShadowXOffset(v int64) *BurnInDestinationSettings { + s.ShadowXOffset = &v + return s +} + +// SetShadowYOffset sets the ShadowYOffset field's value. +func (s *BurnInDestinationSettings) SetShadowYOffset(v int64) *BurnInDestinationSettings { + s.ShadowYOffset = &v + return s +} + +// SetTeletextGridControl sets the TeletextGridControl field's value. +func (s *BurnInDestinationSettings) SetTeletextGridControl(v string) *BurnInDestinationSettings { + s.TeletextGridControl = &v + return s +} + +// SetXPosition sets the XPosition field's value. +func (s *BurnInDestinationSettings) SetXPosition(v int64) *BurnInDestinationSettings { + s.XPosition = &v + return s +} + +// SetYPosition sets the YPosition field's value. +func (s *BurnInDestinationSettings) SetYPosition(v int64) *BurnInDestinationSettings { + s.YPosition = &v + return s +} + +// Output groups for this Live Event. Output groups contain information about +// where streams should be distributed. +type CaptionDescription struct { + _ struct{} `type:"structure"` + + // Specifies which input caption selector to use as a caption source when generating + // output captions. This field should match a captionSelector name. + // + // CaptionSelectorName is a required field + CaptionSelectorName *string `locationName:"captionSelectorName" type:"string" required:"true"` + + // Additional settings for captions destination that depend on the destination + // type. + DestinationSettings *CaptionDestinationSettings `locationName:"destinationSettings" type:"structure"` + + // ISO 639-2 three-digit code: http://www.loc.gov/standards/iso639-2/ + LanguageCode *string `locationName:"languageCode" type:"string"` + + // Human readable information to indicate captions available for players (eg. + // English, or Spanish). + LanguageDescription *string `locationName:"languageDescription" type:"string"` + + // Name of the caption description. Used to associate a caption description + // with an output. Names must be unique within an event. 
+ // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` +} + +// String returns the string representation +func (s CaptionDescription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CaptionDescription) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CaptionDescription) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionDescription"} + if s.CaptionSelectorName == nil { + invalidParams.Add(request.NewErrParamRequired("CaptionSelectorName")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.DestinationSettings != nil { + if err := s.DestinationSettings.Validate(); err != nil { + invalidParams.AddNested("DestinationSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCaptionSelectorName sets the CaptionSelectorName field's value. +func (s *CaptionDescription) SetCaptionSelectorName(v string) *CaptionDescription { + s.CaptionSelectorName = &v + return s +} + +// SetDestinationSettings sets the DestinationSettings field's value. +func (s *CaptionDescription) SetDestinationSettings(v *CaptionDestinationSettings) *CaptionDescription { + s.DestinationSettings = v + return s +} + +// SetLanguageCode sets the LanguageCode field's value. +func (s *CaptionDescription) SetLanguageCode(v string) *CaptionDescription { + s.LanguageCode = &v + return s +} + +// SetLanguageDescription sets the LanguageDescription field's value. +func (s *CaptionDescription) SetLanguageDescription(v string) *CaptionDescription { + s.LanguageDescription = &v + return s +} + +// SetName sets the Name field's value. 
+func (s *CaptionDescription) SetName(v string) *CaptionDescription { + s.Name = &v + return s +} + +type CaptionDestinationSettings struct { + _ struct{} `type:"structure"` + + AribDestinationSettings *AribDestinationSettings `locationName:"aribDestinationSettings" type:"structure"` + + BurnInDestinationSettings *BurnInDestinationSettings `locationName:"burnInDestinationSettings" type:"structure"` + + DvbSubDestinationSettings *DvbSubDestinationSettings `locationName:"dvbSubDestinationSettings" type:"structure"` + + EmbeddedDestinationSettings *EmbeddedDestinationSettings `locationName:"embeddedDestinationSettings" type:"structure"` + + EmbeddedPlusScte20DestinationSettings *EmbeddedPlusScte20DestinationSettings `locationName:"embeddedPlusScte20DestinationSettings" type:"structure"` + + RtmpCaptionInfoDestinationSettings *RtmpCaptionInfoDestinationSettings `locationName:"rtmpCaptionInfoDestinationSettings" type:"structure"` + + Scte20PlusEmbeddedDestinationSettings *Scte20PlusEmbeddedDestinationSettings `locationName:"scte20PlusEmbeddedDestinationSettings" type:"structure"` + + Scte27DestinationSettings *Scte27DestinationSettings `locationName:"scte27DestinationSettings" type:"structure"` + + SmpteTtDestinationSettings *SmpteTtDestinationSettings `locationName:"smpteTtDestinationSettings" type:"structure"` + + TeletextDestinationSettings *TeletextDestinationSettings `locationName:"teletextDestinationSettings" type:"structure"` + + TtmlDestinationSettings *TtmlDestinationSettings `locationName:"ttmlDestinationSettings" type:"structure"` + + WebvttDestinationSettings *WebvttDestinationSettings `locationName:"webvttDestinationSettings" type:"structure"` +} + +// String returns the string representation +func (s CaptionDestinationSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CaptionDestinationSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CaptionDestinationSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionDestinationSettings"} + if s.BurnInDestinationSettings != nil { + if err := s.BurnInDestinationSettings.Validate(); err != nil { + invalidParams.AddNested("BurnInDestinationSettings", err.(request.ErrInvalidParams)) + } + } + if s.DvbSubDestinationSettings != nil { + if err := s.DvbSubDestinationSettings.Validate(); err != nil { + invalidParams.AddNested("DvbSubDestinationSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAribDestinationSettings sets the AribDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetAribDestinationSettings(v *AribDestinationSettings) *CaptionDestinationSettings { + s.AribDestinationSettings = v + return s +} + +// SetBurnInDestinationSettings sets the BurnInDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetBurnInDestinationSettings(v *BurnInDestinationSettings) *CaptionDestinationSettings { + s.BurnInDestinationSettings = v + return s +} + +// SetDvbSubDestinationSettings sets the DvbSubDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetDvbSubDestinationSettings(v *DvbSubDestinationSettings) *CaptionDestinationSettings { + s.DvbSubDestinationSettings = v + return s +} + +// SetEmbeddedDestinationSettings sets the EmbeddedDestinationSettings field's value. 
+func (s *CaptionDestinationSettings) SetEmbeddedDestinationSettings(v *EmbeddedDestinationSettings) *CaptionDestinationSettings { + s.EmbeddedDestinationSettings = v + return s +} + +// SetEmbeddedPlusScte20DestinationSettings sets the EmbeddedPlusScte20DestinationSettings field's value. +func (s *CaptionDestinationSettings) SetEmbeddedPlusScte20DestinationSettings(v *EmbeddedPlusScte20DestinationSettings) *CaptionDestinationSettings { + s.EmbeddedPlusScte20DestinationSettings = v + return s +} + +// SetRtmpCaptionInfoDestinationSettings sets the RtmpCaptionInfoDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetRtmpCaptionInfoDestinationSettings(v *RtmpCaptionInfoDestinationSettings) *CaptionDestinationSettings { + s.RtmpCaptionInfoDestinationSettings = v + return s +} + +// SetScte20PlusEmbeddedDestinationSettings sets the Scte20PlusEmbeddedDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetScte20PlusEmbeddedDestinationSettings(v *Scte20PlusEmbeddedDestinationSettings) *CaptionDestinationSettings { + s.Scte20PlusEmbeddedDestinationSettings = v + return s +} + +// SetScte27DestinationSettings sets the Scte27DestinationSettings field's value. +func (s *CaptionDestinationSettings) SetScte27DestinationSettings(v *Scte27DestinationSettings) *CaptionDestinationSettings { + s.Scte27DestinationSettings = v + return s +} + +// SetSmpteTtDestinationSettings sets the SmpteTtDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetSmpteTtDestinationSettings(v *SmpteTtDestinationSettings) *CaptionDestinationSettings { + s.SmpteTtDestinationSettings = v + return s +} + +// SetTeletextDestinationSettings sets the TeletextDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetTeletextDestinationSettings(v *TeletextDestinationSettings) *CaptionDestinationSettings { + s.TeletextDestinationSettings = v + return s +} + +// SetTtmlDestinationSettings sets the TtmlDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetTtmlDestinationSettings(v *TtmlDestinationSettings) *CaptionDestinationSettings { + s.TtmlDestinationSettings = v + return s +} + +// SetWebvttDestinationSettings sets the WebvttDestinationSettings field's value. +func (s *CaptionDestinationSettings) SetWebvttDestinationSettings(v *WebvttDestinationSettings) *CaptionDestinationSettings { + s.WebvttDestinationSettings = v + return s +} + +// Maps a caption channel to an ISO 639-2 language code (http://www.loc.gov/standards/iso639-2), +// with an optional description. +type CaptionLanguageMapping struct { + _ struct{} `type:"structure"` + + // The closed caption channel being described by this CaptionLanguageMapping.
+ // Each channel mapping must have a unique channel number (maximum of 4) + // + // CaptionChannel is a required field + CaptionChannel *int64 `locationName:"captionChannel" min:"1" type:"integer" required:"true"` + + // Three character ISO 639-2 language code (see http://www.loc.gov/standards/iso639-2) + // + // LanguageCode is a required field + LanguageCode *string `locationName:"languageCode" min:"3" type:"string" required:"true"` + + // Textual description of language + // + // LanguageDescription is a required field + LanguageDescription *string `locationName:"languageDescription" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CaptionLanguageMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CaptionLanguageMapping) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CaptionLanguageMapping) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionLanguageMapping"} + if s.CaptionChannel == nil { + invalidParams.Add(request.NewErrParamRequired("CaptionChannel")) + } + if s.CaptionChannel != nil && *s.CaptionChannel < 1 { + invalidParams.Add(request.NewErrParamMinValue("CaptionChannel", 1)) + } + if s.LanguageCode == nil { + invalidParams.Add(request.NewErrParamRequired("LanguageCode")) + } + if s.LanguageCode != nil && len(*s.LanguageCode) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LanguageCode", 3)) + } + if s.LanguageDescription == nil { + invalidParams.Add(request.NewErrParamRequired("LanguageDescription")) + } + if s.LanguageDescription != nil && len(*s.LanguageDescription) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LanguageDescription", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCaptionChannel sets the CaptionChannel field's value. +func (s *CaptionLanguageMapping) SetCaptionChannel(v int64) *CaptionLanguageMapping { + s.CaptionChannel = &v + return s +} + +// SetLanguageCode sets the LanguageCode field's value. +func (s *CaptionLanguageMapping) SetLanguageCode(v string) *CaptionLanguageMapping { + s.LanguageCode = &v + return s +} + +// SetLanguageDescription sets the LanguageDescription field's value. +func (s *CaptionLanguageMapping) SetLanguageDescription(v string) *CaptionLanguageMapping { + s.LanguageDescription = &v + return s +} + +// Output groups for this Live Event. Output groups contain information about +// where streams should be distributed. +type CaptionSelector struct { + _ struct{} `type:"structure"` + + // When specified this field indicates the three letter language code of the + // caption track to extract from the source. + LanguageCode *string `locationName:"languageCode" type:"string"` + + // Name identifier for a caption selector. This name is used to associate this + // caption selector with one or more caption descriptions. Names must be unique + // within an event. + // + // Name is a required field + Name *string `locationName:"name" min:"1" type:"string" required:"true"` + + // Caption selector settings. 
+ SelectorSettings *CaptionSelectorSettings `locationName:"selectorSettings" type:"structure"` +} + +// String returns the string representation +func (s CaptionSelector) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CaptionSelector) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CaptionSelector) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionSelector"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.SelectorSettings != nil { + if err := s.SelectorSettings.Validate(); err != nil { + invalidParams.AddNested("SelectorSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLanguageCode sets the LanguageCode field's value. +func (s *CaptionSelector) SetLanguageCode(v string) *CaptionSelector { + s.LanguageCode = &v + return s +} + +// SetName sets the Name field's value. +func (s *CaptionSelector) SetName(v string) *CaptionSelector { + s.Name = &v + return s +} + +// SetSelectorSettings sets the SelectorSettings field's value. +func (s *CaptionSelector) SetSelectorSettings(v *CaptionSelectorSettings) *CaptionSelector { + s.SelectorSettings = v + return s +} + +type CaptionSelectorSettings struct { + _ struct{} `type:"structure"` + + AribSourceSettings *AribSourceSettings `locationName:"aribSourceSettings" type:"structure"` + + DvbSubSourceSettings *DvbSubSourceSettings `locationName:"dvbSubSourceSettings" type:"structure"` + + EmbeddedSourceSettings *EmbeddedSourceSettings `locationName:"embeddedSourceSettings" type:"structure"` + + Scte20SourceSettings *Scte20SourceSettings `locationName:"scte20SourceSettings" type:"structure"` + + Scte27SourceSettings *Scte27SourceSettings `locationName:"scte27SourceSettings" type:"structure"` + + TeletextSourceSettings *TeletextSourceSettings `locationName:"teletextSourceSettings" type:"structure"` +} + +// String returns the string representation +func (s CaptionSelectorSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CaptionSelectorSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CaptionSelectorSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CaptionSelectorSettings"} + if s.DvbSubSourceSettings != nil { + if err := s.DvbSubSourceSettings.Validate(); err != nil { + invalidParams.AddNested("DvbSubSourceSettings", err.(request.ErrInvalidParams)) + } + } + if s.EmbeddedSourceSettings != nil { + if err := s.EmbeddedSourceSettings.Validate(); err != nil { + invalidParams.AddNested("EmbeddedSourceSettings", err.(request.ErrInvalidParams)) + } + } + if s.Scte20SourceSettings != nil { + if err := s.Scte20SourceSettings.Validate(); err != nil { + invalidParams.AddNested("Scte20SourceSettings", err.(request.ErrInvalidParams)) + } + } + if s.Scte27SourceSettings != nil { + if err := s.Scte27SourceSettings.Validate(); err != nil { + invalidParams.AddNested("Scte27SourceSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAribSourceSettings sets the AribSourceSettings field's value. 
+func (s *CaptionSelectorSettings) SetAribSourceSettings(v *AribSourceSettings) *CaptionSelectorSettings { + s.AribSourceSettings = v + return s +} + +// SetDvbSubSourceSettings sets the DvbSubSourceSettings field's value. +func (s *CaptionSelectorSettings) SetDvbSubSourceSettings(v *DvbSubSourceSettings) *CaptionSelectorSettings { + s.DvbSubSourceSettings = v + return s +} + +// SetEmbeddedSourceSettings sets the EmbeddedSourceSettings field's value. +func (s *CaptionSelectorSettings) SetEmbeddedSourceSettings(v *EmbeddedSourceSettings) *CaptionSelectorSettings { + s.EmbeddedSourceSettings = v + return s +} + +// SetScte20SourceSettings sets the Scte20SourceSettings field's value. +func (s *CaptionSelectorSettings) SetScte20SourceSettings(v *Scte20SourceSettings) *CaptionSelectorSettings { + s.Scte20SourceSettings = v + return s +} + +// SetScte27SourceSettings sets the Scte27SourceSettings field's value. +func (s *CaptionSelectorSettings) SetScte27SourceSettings(v *Scte27SourceSettings) *CaptionSelectorSettings { + s.Scte27SourceSettings = v + return s +} + +// SetTeletextSourceSettings sets the TeletextSourceSettings field's value. +func (s *CaptionSelectorSettings) SetTeletextSourceSettings(v *TeletextSourceSettings) *CaptionSelectorSettings { + s.TeletextSourceSettings = v + return s +} + +type Channel struct { + _ struct{} `type:"structure"` + + // The unique arn of the channel. + Arn *string `locationName:"arn" type:"string"` + + // A list of destinations of the channel. For UDP outputs, there is onedestination + // per output. For other types (HLS, for example), there isone destination per + // packager. + Destinations []*OutputDestination `locationName:"destinations" type:"list"` + + // The endpoints where outgoing connections initiate from + EgressEndpoints []*ChannelEgressEndpoint `locationName:"egressEndpoints" type:"list"` + + EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` + + // The unique id of the channel. + Id *string `locationName:"id" type:"string"` + + // List of input attachments for channel. + InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` + + InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + + // The log level being written to CloudWatch Logs. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + + // The name of the channel. (user-mutable) + Name *string `locationName:"name" type:"string"` + + // The number of currently healthy pipelines. + PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` + + // The Amazon Resource Name (ARN) of the role assumed when running the Channel. + RoleArn *string `locationName:"roleArn" type:"string"` + + State *string `locationName:"state" type:"string" enum:"ChannelState"` +} + +// String returns the string representation +func (s Channel) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Channel) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *Channel) SetArn(v string) *Channel { + s.Arn = &v + return s +} + +// SetDestinations sets the Destinations field's value. +func (s *Channel) SetDestinations(v []*OutputDestination) *Channel { + s.Destinations = v + return s +} + +// SetEgressEndpoints sets the EgressEndpoints field's value. 
+func (s *Channel) SetEgressEndpoints(v []*ChannelEgressEndpoint) *Channel { + s.EgressEndpoints = v + return s +} + +// SetEncoderSettings sets the EncoderSettings field's value. +func (s *Channel) SetEncoderSettings(v *EncoderSettings) *Channel { + s.EncoderSettings = v + return s +} + +// SetId sets the Id field's value. +func (s *Channel) SetId(v string) *Channel { + s.Id = &v + return s +} + +// SetInputAttachments sets the InputAttachments field's value. +func (s *Channel) SetInputAttachments(v []*InputAttachment) *Channel { + s.InputAttachments = v + return s +} + +// SetInputSpecification sets the InputSpecification field's value. +func (s *Channel) SetInputSpecification(v *InputSpecification) *Channel { + s.InputSpecification = v + return s +} + +// SetLogLevel sets the LogLevel field's value. +func (s *Channel) SetLogLevel(v string) *Channel { + s.LogLevel = &v + return s +} + +// SetName sets the Name field's value. +func (s *Channel) SetName(v string) *Channel { + s.Name = &v + return s +} + +// SetPipelinesRunningCount sets the PipelinesRunningCount field's value. +func (s *Channel) SetPipelinesRunningCount(v int64) *Channel { + s.PipelinesRunningCount = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *Channel) SetRoleArn(v string) *Channel { + s.RoleArn = &v + return s +} + +// SetState sets the State field's value. +func (s *Channel) SetState(v string) *Channel { + s.State = &v + return s +} + +type ChannelEgressEndpoint struct { + _ struct{} `type:"structure"` + + // Public IP of where a channel's output comes from + SourceIp *string `locationName:"sourceIp" type:"string"` +} + +// String returns the string representation +func (s ChannelEgressEndpoint) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ChannelEgressEndpoint) GoString() string { + return s.String() +} + +// SetSourceIp sets the SourceIp field's value. +func (s *ChannelEgressEndpoint) SetSourceIp(v string) *ChannelEgressEndpoint { + s.SourceIp = &v + return s +} + +type ChannelSummary struct { + _ struct{} `type:"structure"` + + // The unique arn of the channel. + Arn *string `locationName:"arn" type:"string"` + + // A list of destinations of the channel. For UDP outputs, there is onedestination + // per output. For other types (HLS, for example), there isone destination per + // packager. + Destinations []*OutputDestination `locationName:"destinations" type:"list"` + + // The endpoints where outgoing connections initiate from + EgressEndpoints []*ChannelEgressEndpoint `locationName:"egressEndpoints" type:"list"` + + // The unique id of the channel. + Id *string `locationName:"id" type:"string"` + + // List of input attachments for channel. + InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` + + InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + + // The log level being written to CloudWatch Logs. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + + // The name of the channel. (user-mutable) + Name *string `locationName:"name" type:"string"` -// SetCodingMode sets the CodingMode field's value. -func (s *AacSettings) SetCodingMode(v string) *AacSettings { - s.CodingMode = &v + // The number of currently healthy pipelines. + PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` + + // The Amazon Resource Name (ARN) of the role assumed when running the Channel. 
+ RoleArn *string `locationName:"roleArn" type:"string"` + + State *string `locationName:"state" type:"string" enum:"ChannelState"` +} + +// String returns the string representation +func (s ChannelSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ChannelSummary) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *ChannelSummary) SetArn(v string) *ChannelSummary { + s.Arn = &v return s } -// SetInputType sets the InputType field's value. -func (s *AacSettings) SetInputType(v string) *AacSettings { - s.InputType = &v +// SetDestinations sets the Destinations field's value. +func (s *ChannelSummary) SetDestinations(v []*OutputDestination) *ChannelSummary { + s.Destinations = v return s } -// SetProfile sets the Profile field's value. -func (s *AacSettings) SetProfile(v string) *AacSettings { - s.Profile = &v +// SetEgressEndpoints sets the EgressEndpoints field's value. +func (s *ChannelSummary) SetEgressEndpoints(v []*ChannelEgressEndpoint) *ChannelSummary { + s.EgressEndpoints = v return s } -// SetRateControlMode sets the RateControlMode field's value. -func (s *AacSettings) SetRateControlMode(v string) *AacSettings { - s.RateControlMode = &v +// SetId sets the Id field's value. +func (s *ChannelSummary) SetId(v string) *ChannelSummary { + s.Id = &v return s } -// SetRawFormat sets the RawFormat field's value. -func (s *AacSettings) SetRawFormat(v string) *AacSettings { - s.RawFormat = &v +// SetInputAttachments sets the InputAttachments field's value. +func (s *ChannelSummary) SetInputAttachments(v []*InputAttachment) *ChannelSummary { + s.InputAttachments = v return s } -// SetSampleRate sets the SampleRate field's value. -func (s *AacSettings) SetSampleRate(v float64) *AacSettings { - s.SampleRate = &v +// SetInputSpecification sets the InputSpecification field's value. +func (s *ChannelSummary) SetInputSpecification(v *InputSpecification) *ChannelSummary { + s.InputSpecification = v return s } -// SetSpec sets the Spec field's value. -func (s *AacSettings) SetSpec(v string) *AacSettings { - s.Spec = &v +// SetLogLevel sets the LogLevel field's value. +func (s *ChannelSummary) SetLogLevel(v string) *ChannelSummary { + s.LogLevel = &v return s } -// SetVbrQuality sets the VbrQuality field's value. -func (s *AacSettings) SetVbrQuality(v string) *AacSettings { - s.VbrQuality = &v +// SetName sets the Name field's value. +func (s *ChannelSummary) SetName(v string) *ChannelSummary { + s.Name = &v return s } -type Ac3Settings struct { +// SetPipelinesRunningCount sets the PipelinesRunningCount field's value. +func (s *ChannelSummary) SetPipelinesRunningCount(v int64) *ChannelSummary { + s.PipelinesRunningCount = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *ChannelSummary) SetRoleArn(v string) *ChannelSummary { + s.RoleArn = &v + return s +} + +// SetState sets the State field's value. +func (s *ChannelSummary) SetState(v string) *ChannelSummary { + s.State = &v + return s +} + +type CreateChannelInput struct { _ struct{} `type:"structure"` - // Average bitrate in bits/second. Valid bitrates depend on the coding mode. - Bitrate *float64 `locationName:"bitrate" type:"double"` + Destinations []*OutputDestination `locationName:"destinations" type:"list"` - // Specifies the bitstream mode (bsmod) for the emitted AC-3 stream. See ATSC - // A/52-2012 for background on these values. 
- BitstreamMode *string `locationName:"bitstreamMode" type:"string" enum:"Ac3BitstreamMode"` + EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` - // Dolby Digital coding mode. Determines number of channels. - CodingMode *string `locationName:"codingMode" type:"string" enum:"Ac3CodingMode"` + InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` - // Sets the dialnorm for the output. If excluded and input audio is Dolby Digital, - // dialnorm will be passed through. - Dialnorm *int64 `locationName:"dialnorm" min:"1" type:"integer"` + InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` - // If set to filmStandard, adds dynamic range compression signaling to the output - // bitstream as defined in the Dolby Digital specification. - DrcProfile *string `locationName:"drcProfile" type:"string" enum:"Ac3DrcProfile"` + // The log level the user wants for their channel. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` - // When set to enabled, applies a 120Hz lowpass filter to the LFE channel prior - // to encoding. Only valid in codingMode32Lfe mode. - LfeFilter *string `locationName:"lfeFilter" type:"string" enum:"Ac3LfeFilter"` + Name *string `locationName:"name" type:"string"` - // When set to "followInput", encoder metadata will be sourced from the DD, - // DD+, or DolbyE decoder that supplied this audio data. If audio was not supplied - // from one of these streams, then the static metadata settings will be used. - MetadataControl *string `locationName:"metadataControl" type:"string" enum:"Ac3MetadataControl"` + RequestId *string `locationName:"requestId" type:"string" idempotencyToken:"true"` + + Reserved *string `locationName:"reserved" deprecated:"true" type:"string"` + + RoleArn *string `locationName:"roleArn" type:"string"` } // String returns the string representation -func (s Ac3Settings) String() string { +func (s CreateChannelInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Ac3Settings) GoString() string { +func (s CreateChannelInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *Ac3Settings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Ac3Settings"} - if s.Dialnorm != nil && *s.Dialnorm < 1 { - invalidParams.Add(request.NewErrParamMinValue("Dialnorm", 1)) +func (s *CreateChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateChannelInput"} + if s.EncoderSettings != nil { + if err := s.EncoderSettings.Validate(); err != nil { + invalidParams.AddNested("EncoderSettings", err.(request.ErrInvalidParams)) + } + } + if s.InputAttachments != nil { + for i, v := range s.InputAttachments { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InputAttachments", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -1689,272 +5051,233 @@ func (s *Ac3Settings) Validate() error { return nil } -// SetBitrate sets the Bitrate field's value. -func (s *Ac3Settings) SetBitrate(v float64) *Ac3Settings { - s.Bitrate = &v +// SetDestinations sets the Destinations field's value. +func (s *CreateChannelInput) SetDestinations(v []*OutputDestination) *CreateChannelInput { + s.Destinations = v return s } -// SetBitstreamMode sets the BitstreamMode field's value. 
-func (s *Ac3Settings) SetBitstreamMode(v string) *Ac3Settings { - s.BitstreamMode = &v +// SetEncoderSettings sets the EncoderSettings field's value. +func (s *CreateChannelInput) SetEncoderSettings(v *EncoderSettings) *CreateChannelInput { + s.EncoderSettings = v return s } -// SetCodingMode sets the CodingMode field's value. -func (s *Ac3Settings) SetCodingMode(v string) *Ac3Settings { - s.CodingMode = &v +// SetInputAttachments sets the InputAttachments field's value. +func (s *CreateChannelInput) SetInputAttachments(v []*InputAttachment) *CreateChannelInput { + s.InputAttachments = v return s } -// SetDialnorm sets the Dialnorm field's value. -func (s *Ac3Settings) SetDialnorm(v int64) *Ac3Settings { - s.Dialnorm = &v +// SetInputSpecification sets the InputSpecification field's value. +func (s *CreateChannelInput) SetInputSpecification(v *InputSpecification) *CreateChannelInput { + s.InputSpecification = v return s } -// SetDrcProfile sets the DrcProfile field's value. -func (s *Ac3Settings) SetDrcProfile(v string) *Ac3Settings { - s.DrcProfile = &v +// SetLogLevel sets the LogLevel field's value. +func (s *CreateChannelInput) SetLogLevel(v string) *CreateChannelInput { + s.LogLevel = &v return s } -// SetLfeFilter sets the LfeFilter field's value. -func (s *Ac3Settings) SetLfeFilter(v string) *Ac3Settings { - s.LfeFilter = &v +// SetName sets the Name field's value. +func (s *CreateChannelInput) SetName(v string) *CreateChannelInput { + s.Name = &v return s } -// SetMetadataControl sets the MetadataControl field's value. -func (s *Ac3Settings) SetMetadataControl(v string) *Ac3Settings { - s.MetadataControl = &v +// SetRequestId sets the RequestId field's value. +func (s *CreateChannelInput) SetRequestId(v string) *CreateChannelInput { + s.RequestId = &v return s } -type ArchiveContainerSettings struct { - _ struct{} `type:"structure"` - - M2tsSettings *M2tsSettings `locationName:"m2tsSettings" type:"structure"` -} - -// String returns the string representation -func (s ArchiveContainerSettings) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s ArchiveContainerSettings) GoString() string { - return s.String() -} - -// Validate inspects the fields of the type to determine if they are valid. -func (s *ArchiveContainerSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ArchiveContainerSettings"} - if s.M2tsSettings != nil { - if err := s.M2tsSettings.Validate(); err != nil { - invalidParams.AddNested("M2tsSettings", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetReserved sets the Reserved field's value. +func (s *CreateChannelInput) SetReserved(v string) *CreateChannelInput { + s.Reserved = &v + return s } -// SetM2tsSettings sets the M2tsSettings field's value. -func (s *ArchiveContainerSettings) SetM2tsSettings(v *M2tsSettings) *ArchiveContainerSettings { - s.M2tsSettings = v +// SetRoleArn sets the RoleArn field's value. +func (s *CreateChannelInput) SetRoleArn(v string) *CreateChannelInput { + s.RoleArn = &v return s } -type ArchiveGroupSettings struct { +type CreateChannelOutput struct { _ struct{} `type:"structure"` - // A directory and base filename where archive files should be written. If the - // base filename portion of the URI is left blank, the base filename of the - // first input will be automatically inserted. 
- // - // Destination is a required field - Destination *OutputLocationRef `locationName:"destination" type:"structure" required:"true"` - - // Number of seconds to write to archive file before closing and starting a - // new one. - RolloverInterval *int64 `locationName:"rolloverInterval" min:"1" type:"integer"` + Channel *Channel `locationName:"channel" type:"structure"` } // String returns the string representation -func (s ArchiveGroupSettings) String() string { +func (s CreateChannelOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ArchiveGroupSettings) GoString() string { - return s.String() -} - -// Validate inspects the fields of the type to determine if they are valid. -func (s *ArchiveGroupSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ArchiveGroupSettings"} - if s.Destination == nil { - invalidParams.Add(request.NewErrParamRequired("Destination")) - } - if s.RolloverInterval != nil && *s.RolloverInterval < 1 { - invalidParams.Add(request.NewErrParamMinValue("RolloverInterval", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetDestination sets the Destination field's value. -func (s *ArchiveGroupSettings) SetDestination(v *OutputLocationRef) *ArchiveGroupSettings { - s.Destination = v - return s +func (s CreateChannelOutput) GoString() string { + return s.String() } -// SetRolloverInterval sets the RolloverInterval field's value. -func (s *ArchiveGroupSettings) SetRolloverInterval(v int64) *ArchiveGroupSettings { - s.RolloverInterval = &v +// SetChannel sets the Channel field's value. +func (s *CreateChannelOutput) SetChannel(v *Channel) *CreateChannelOutput { + s.Channel = v return s } -type ArchiveOutputSettings struct { +type CreateInputInput struct { _ struct{} `type:"structure"` - // Settings specific to the container type of the file. - // - // ContainerSettings is a required field - ContainerSettings *ArchiveContainerSettings `locationName:"containerSettings" type:"structure" required:"true"` + Destinations []*InputDestinationRequest `locationName:"destinations" type:"list"` - // Output file extension. If excluded, this will be auto-selected from the container - // type. - Extension *string `locationName:"extension" type:"string"` + InputSecurityGroups []*string `locationName:"inputSecurityGroups" type:"list"` - // String concatenated to the end of the destination filename. Required for - // multiple outputs of the same type. - NameModifier *string `locationName:"nameModifier" type:"string"` + Name *string `locationName:"name" type:"string"` + + RequestId *string `locationName:"requestId" type:"string" idempotencyToken:"true"` + + Sources []*InputSourceRequest `locationName:"sources" type:"list"` + + Type *string `locationName:"type" type:"string" enum:"InputType"` } // String returns the string representation -func (s ArchiveOutputSettings) String() string { +func (s CreateInputInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ArchiveOutputSettings) GoString() string { +func (s CreateInputInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *ArchiveOutputSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ArchiveOutputSettings"} - if s.ContainerSettings == nil { - invalidParams.Add(request.NewErrParamRequired("ContainerSettings")) - } - if s.ContainerSettings != nil { - if err := s.ContainerSettings.Validate(); err != nil { - invalidParams.AddNested("ContainerSettings", err.(request.ErrInvalidParams)) - } - } +// SetDestinations sets the Destinations field's value. +func (s *CreateInputInput) SetDestinations(v []*InputDestinationRequest) *CreateInputInput { + s.Destinations = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetInputSecurityGroups sets the InputSecurityGroups field's value. +func (s *CreateInputInput) SetInputSecurityGroups(v []*string) *CreateInputInput { + s.InputSecurityGroups = v + return s } -// SetContainerSettings sets the ContainerSettings field's value. -func (s *ArchiveOutputSettings) SetContainerSettings(v *ArchiveContainerSettings) *ArchiveOutputSettings { - s.ContainerSettings = v +// SetName sets the Name field's value. +func (s *CreateInputInput) SetName(v string) *CreateInputInput { + s.Name = &v return s } -// SetExtension sets the Extension field's value. -func (s *ArchiveOutputSettings) SetExtension(v string) *ArchiveOutputSettings { - s.Extension = &v +// SetRequestId sets the RequestId field's value. +func (s *CreateInputInput) SetRequestId(v string) *CreateInputInput { + s.RequestId = &v return s } -// SetNameModifier sets the NameModifier field's value. -func (s *ArchiveOutputSettings) SetNameModifier(v string) *ArchiveOutputSettings { - s.NameModifier = &v +// SetSources sets the Sources field's value. +func (s *CreateInputInput) SetSources(v []*InputSourceRequest) *CreateInputInput { + s.Sources = v return s } -type AribDestinationSettings struct { +// SetType sets the Type field's value. +func (s *CreateInputInput) SetType(v string) *CreateInputInput { + s.Type = &v + return s +} + +type CreateInputOutput struct { _ struct{} `type:"structure"` + + Input *Input `locationName:"input" type:"structure"` } // String returns the string representation -func (s AribDestinationSettings) String() string { +func (s CreateInputOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AribDestinationSettings) GoString() string { +func (s CreateInputOutput) GoString() string { return s.String() } -type AribSourceSettings struct { +// SetInput sets the Input field's value. +func (s *CreateInputOutput) SetInput(v *Input) *CreateInputOutput { + s.Input = v + return s +} + +type CreateInputSecurityGroupInput struct { _ struct{} `type:"structure"` + + WhitelistRules []*InputWhitelistRuleCidr `locationName:"whitelistRules" type:"list"` } // String returns the string representation -func (s AribSourceSettings) String() string { +func (s CreateInputSecurityGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AribSourceSettings) GoString() string { +func (s CreateInputSecurityGroupInput) GoString() string { return s.String() } -type AudioChannelMapping struct { +// SetWhitelistRules sets the WhitelistRules field's value. 
+func (s *CreateInputSecurityGroupInput) SetWhitelistRules(v []*InputWhitelistRuleCidr) *CreateInputSecurityGroupInput { + s.WhitelistRules = v + return s +} + +type CreateInputSecurityGroupOutput struct { _ struct{} `type:"structure"` - // Indices and gain values for each input channel that should be remixed into - // this output channel. - // - // InputChannelLevels is a required field - InputChannelLevels []*InputChannelLevel `locationName:"inputChannelLevels" type:"list" required:"true"` + // An Input Security Group + SecurityGroup *InputSecurityGroup `locationName:"securityGroup" type:"structure"` +} - // The index of the output channel being produced. - // - // OutputChannel is a required field - OutputChannel *int64 `locationName:"outputChannel" type:"integer" required:"true"` +// String returns the string representation +func (s CreateInputSecurityGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateInputSecurityGroupOutput) GoString() string { + return s.String() +} + +// SetSecurityGroup sets the SecurityGroup field's value. +func (s *CreateInputSecurityGroupOutput) SetSecurityGroup(v *InputSecurityGroup) *CreateInputSecurityGroupOutput { + s.SecurityGroup = v + return s +} + +type DeleteChannelInput struct { + _ struct{} `type:"structure"` + + // ChannelId is a required field + ChannelId *string `location:"uri" locationName:"channelId" type:"string" required:"true"` } // String returns the string representation -func (s AudioChannelMapping) String() string { +func (s DeleteChannelInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AudioChannelMapping) GoString() string { +func (s DeleteChannelInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AudioChannelMapping) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AudioChannelMapping"} - if s.InputChannelLevels == nil { - invalidParams.Add(request.NewErrParamRequired("InputChannelLevels")) - } - if s.OutputChannel == nil { - invalidParams.Add(request.NewErrParamRequired("OutputChannel")) - } - if s.InputChannelLevels != nil { - for i, v := range s.InputChannelLevels { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InputChannelLevels", i), err.(request.ErrInvalidParams)) - } - } +func (s *DeleteChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteChannelInput"} + if s.ChannelId == nil { + invalidParams.Add(request.NewErrParamRequired("ChannelId")) } if invalidParams.Len() > 0 { @@ -1963,175 +5286,145 @@ func (s *AudioChannelMapping) Validate() error { return nil } -// SetInputChannelLevels sets the InputChannelLevels field's value. -func (s *AudioChannelMapping) SetInputChannelLevels(v []*InputChannelLevel) *AudioChannelMapping { - s.InputChannelLevels = v - return s -} - -// SetOutputChannel sets the OutputChannel field's value. -func (s *AudioChannelMapping) SetOutputChannel(v int64) *AudioChannelMapping { - s.OutputChannel = &v +// SetChannelId sets the ChannelId field's value. 
+func (s *DeleteChannelInput) SetChannelId(v string) *DeleteChannelInput { + s.ChannelId = &v return s } -type AudioCodecSettings struct { +type DeleteChannelOutput struct { _ struct{} `type:"structure"` - AacSettings *AacSettings `locationName:"aacSettings" type:"structure"` + Arn *string `locationName:"arn" type:"string"` - Ac3Settings *Ac3Settings `locationName:"ac3Settings" type:"structure"` + Destinations []*OutputDestination `locationName:"destinations" type:"list"` - Eac3Settings *Eac3Settings `locationName:"eac3Settings" type:"structure"` + EgressEndpoints []*ChannelEgressEndpoint `locationName:"egressEndpoints" type:"list"` - Mp2Settings *Mp2Settings `locationName:"mp2Settings" type:"structure"` + EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` - PassThroughSettings *PassThroughSettings `locationName:"passThroughSettings" type:"structure"` + Id *string `locationName:"id" type:"string"` + + InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` + + InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + + // The log level the user wants for their channel. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + + Name *string `locationName:"name" type:"string"` + + PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` + + RoleArn *string `locationName:"roleArn" type:"string"` + + State *string `locationName:"state" type:"string" enum:"ChannelState"` } // String returns the string representation -func (s AudioCodecSettings) String() string { +func (s DeleteChannelOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AudioCodecSettings) GoString() string { +func (s DeleteChannelOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *AudioCodecSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AudioCodecSettings"} - if s.Ac3Settings != nil { - if err := s.Ac3Settings.Validate(); err != nil { - invalidParams.AddNested("Ac3Settings", err.(request.ErrInvalidParams)) - } - } - if s.Eac3Settings != nil { - if err := s.Eac3Settings.Validate(); err != nil { - invalidParams.AddNested("Eac3Settings", err.(request.ErrInvalidParams)) - } - } +// SetArn sets the Arn field's value. +func (s *DeleteChannelOutput) SetArn(v string) *DeleteChannelOutput { + s.Arn = &v + return s +} + +// SetDestinations sets the Destinations field's value. +func (s *DeleteChannelOutput) SetDestinations(v []*OutputDestination) *DeleteChannelOutput { + s.Destinations = v + return s +} + +// SetEgressEndpoints sets the EgressEndpoints field's value. +func (s *DeleteChannelOutput) SetEgressEndpoints(v []*ChannelEgressEndpoint) *DeleteChannelOutput { + s.EgressEndpoints = v + return s +} + +// SetEncoderSettings sets the EncoderSettings field's value. +func (s *DeleteChannelOutput) SetEncoderSettings(v *EncoderSettings) *DeleteChannelOutput { + s.EncoderSettings = v + return s +} + +// SetId sets the Id field's value. +func (s *DeleteChannelOutput) SetId(v string) *DeleteChannelOutput { + s.Id = &v + return s +} + +// SetInputAttachments sets the InputAttachments field's value. 
+func (s *DeleteChannelOutput) SetInputAttachments(v []*InputAttachment) *DeleteChannelOutput { + s.InputAttachments = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetInputSpecification sets the InputSpecification field's value. +func (s *DeleteChannelOutput) SetInputSpecification(v *InputSpecification) *DeleteChannelOutput { + s.InputSpecification = v + return s } -// SetAacSettings sets the AacSettings field's value. -func (s *AudioCodecSettings) SetAacSettings(v *AacSettings) *AudioCodecSettings { - s.AacSettings = v +// SetLogLevel sets the LogLevel field's value. +func (s *DeleteChannelOutput) SetLogLevel(v string) *DeleteChannelOutput { + s.LogLevel = &v return s } -// SetAc3Settings sets the Ac3Settings field's value. -func (s *AudioCodecSettings) SetAc3Settings(v *Ac3Settings) *AudioCodecSettings { - s.Ac3Settings = v +// SetName sets the Name field's value. +func (s *DeleteChannelOutput) SetName(v string) *DeleteChannelOutput { + s.Name = &v return s } -// SetEac3Settings sets the Eac3Settings field's value. -func (s *AudioCodecSettings) SetEac3Settings(v *Eac3Settings) *AudioCodecSettings { - s.Eac3Settings = v +// SetPipelinesRunningCount sets the PipelinesRunningCount field's value. +func (s *DeleteChannelOutput) SetPipelinesRunningCount(v int64) *DeleteChannelOutput { + s.PipelinesRunningCount = &v return s } -// SetMp2Settings sets the Mp2Settings field's value. -func (s *AudioCodecSettings) SetMp2Settings(v *Mp2Settings) *AudioCodecSettings { - s.Mp2Settings = v +// SetRoleArn sets the RoleArn field's value. +func (s *DeleteChannelOutput) SetRoleArn(v string) *DeleteChannelOutput { + s.RoleArn = &v return s } -// SetPassThroughSettings sets the PassThroughSettings field's value. -func (s *AudioCodecSettings) SetPassThroughSettings(v *PassThroughSettings) *AudioCodecSettings { - s.PassThroughSettings = v +// SetState sets the State field's value. +func (s *DeleteChannelOutput) SetState(v string) *DeleteChannelOutput { + s.State = &v return s } -type AudioDescription struct { +type DeleteInputInput struct { _ struct{} `type:"structure"` - // Advanced audio normalization settings. - AudioNormalizationSettings *AudioNormalizationSettings `locationName:"audioNormalizationSettings" type:"structure"` - - // The name of the AudioSelector used as the source for this AudioDescription. - // - // AudioSelectorName is a required field - AudioSelectorName *string `locationName:"audioSelectorName" type:"string" required:"true"` - - // Applies only if audioTypeControl is useConfigured. The values for audioType - // are defined in ISO-IEC 13818-1. - AudioType *string `locationName:"audioType" type:"string" enum:"AudioType"` - - // Determines how audio type is determined. followInput: If the input contains - // an ISO 639 audioType, then that value is passed through to the output. If - // the input contains no ISO 639 audioType, the value in Audio Type is included - // in the output. useConfigured: The value in Audio Type is included in the - // output.Note that this field and audioType are both ignored if inputType is - // broadcasterMixedAd. - AudioTypeControl *string `locationName:"audioTypeControl" type:"string" enum:"AudioDescriptionAudioTypeControl"` - - // Audio codec settings. - CodecSettings *AudioCodecSettings `locationName:"codecSettings" type:"structure"` - - // Indicates the language of the audio output track. Only used if languageControlMode - // is useConfigured, or there is no ISO 639 language code specified in the input. 
- LanguageCode *string `locationName:"languageCode" min:"3" type:"string"` - - // Choosing followInput will cause the ISO 639 language code of the output to - // follow the ISO 639 language code of the input. The languageCode will be used - // when useConfigured is set, or when followInput is selected but there is no - // ISO 639 language code specified by the input. - LanguageCodeControl *string `locationName:"languageCodeControl" type:"string" enum:"AudioDescriptionLanguageCodeControl"` - - // The name of this AudioDescription. Outputs will use this name to uniquely - // identify this AudioDescription. Description names should be unique within - // this Live Event. - // - // Name is a required field - Name *string `locationName:"name" type:"string" required:"true"` - - // Settings that control how input audio channels are remixed into the output - // audio channels. - RemixSettings *RemixSettings `locationName:"remixSettings" type:"structure"` - - // Used for MS Smooth and Apple HLS outputs. Indicates the name displayed by - // the player (eg. English, or Director Commentary). - StreamName *string `locationName:"streamName" type:"string"` + // InputId is a required field + InputId *string `location:"uri" locationName:"inputId" type:"string" required:"true"` } // String returns the string representation -func (s AudioDescription) String() string { +func (s DeleteInputInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AudioDescription) GoString() string { +func (s DeleteInputInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AudioDescription) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AudioDescription"} - if s.AudioSelectorName == nil { - invalidParams.Add(request.NewErrParamRequired("AudioSelectorName")) - } - if s.LanguageCode != nil && len(*s.LanguageCode) < 3 { - invalidParams.Add(request.NewErrParamMinLen("LanguageCode", 3)) - } - if s.Name == nil { - invalidParams.Add(request.NewErrParamRequired("Name")) - } - if s.CodecSettings != nil { - if err := s.CodecSettings.Validate(); err != nil { - invalidParams.AddNested("CodecSettings", err.(request.ErrInvalidParams)) - } - } - if s.RemixSettings != nil { - if err := s.RemixSettings.Validate(); err != nil { - invalidParams.AddNested("RemixSettings", err.(request.ErrInvalidParams)) - } +func (s *DeleteInputInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInputInput"} + if s.InputId == nil { + invalidParams.Add(request.NewErrParamRequired("InputId")) } if invalidParams.Len() > 0 { @@ -2140,98 +5433,98 @@ func (s *AudioDescription) Validate() error { return nil } -// SetAudioNormalizationSettings sets the AudioNormalizationSettings field's value. -func (s *AudioDescription) SetAudioNormalizationSettings(v *AudioNormalizationSettings) *AudioDescription { - s.AudioNormalizationSettings = v +// SetInputId sets the InputId field's value. +func (s *DeleteInputInput) SetInputId(v string) *DeleteInputInput { + s.InputId = &v return s } -// SetAudioSelectorName sets the AudioSelectorName field's value. -func (s *AudioDescription) SetAudioSelectorName(v string) *AudioDescription { - s.AudioSelectorName = &v - return s +type DeleteInputOutput struct { + _ struct{} `type:"structure"` } -// SetAudioType sets the AudioType field's value. 
-func (s *AudioDescription) SetAudioType(v string) *AudioDescription { - s.AudioType = &v - return s +// String returns the string representation +func (s DeleteInputOutput) String() string { + return awsutil.Prettify(s) } -// SetAudioTypeControl sets the AudioTypeControl field's value. -func (s *AudioDescription) SetAudioTypeControl(v string) *AudioDescription { - s.AudioTypeControl = &v - return s +// GoString returns the string representation +func (s DeleteInputOutput) GoString() string { + return s.String() } -// SetCodecSettings sets the CodecSettings field's value. -func (s *AudioDescription) SetCodecSettings(v *AudioCodecSettings) *AudioDescription { - s.CodecSettings = v - return s -} +type DeleteInputSecurityGroupInput struct { + _ struct{} `type:"structure"` -// SetLanguageCode sets the LanguageCode field's value. -func (s *AudioDescription) SetLanguageCode(v string) *AudioDescription { - s.LanguageCode = &v - return s + // InputSecurityGroupId is a required field + InputSecurityGroupId *string `location:"uri" locationName:"inputSecurityGroupId" type:"string" required:"true"` } -// SetLanguageCodeControl sets the LanguageCodeControl field's value. -func (s *AudioDescription) SetLanguageCodeControl(v string) *AudioDescription { - s.LanguageCodeControl = &v - return s +// String returns the string representation +func (s DeleteInputSecurityGroupInput) String() string { + return awsutil.Prettify(s) } -// SetName sets the Name field's value. -func (s *AudioDescription) SetName(v string) *AudioDescription { - s.Name = &v - return s +// GoString returns the string representation +func (s DeleteInputSecurityGroupInput) GoString() string { + return s.String() } -// SetRemixSettings sets the RemixSettings field's value. -func (s *AudioDescription) SetRemixSettings(v *RemixSettings) *AudioDescription { - s.RemixSettings = v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteInputSecurityGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInputSecurityGroupInput"} + if s.InputSecurityGroupId == nil { + invalidParams.Add(request.NewErrParamRequired("InputSecurityGroupId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetStreamName sets the StreamName field's value. -func (s *AudioDescription) SetStreamName(v string) *AudioDescription { - s.StreamName = &v +// SetInputSecurityGroupId sets the InputSecurityGroupId field's value. +func (s *DeleteInputSecurityGroupInput) SetInputSecurityGroupId(v string) *DeleteInputSecurityGroupInput { + s.InputSecurityGroupId = &v return s } -type AudioLanguageSelection struct { +type DeleteInputSecurityGroupOutput struct { _ struct{} `type:"structure"` +} - // Selects a specific three-letter language code from within an audio source. - // - // LanguageCode is a required field - LanguageCode *string `locationName:"languageCode" type:"string" required:"true"` +// String returns the string representation +func (s DeleteInputSecurityGroupOutput) String() string { + return awsutil.Prettify(s) +} - // When set to "strict", the transport stream demux strictly identifies audio - // streams by their language descriptor. If a PMT update occurs such that an - // audio stream matching the initially selected language is no longer present - // then mute will be encoded until the language returns. 
If "loose", then on - // a PMT update the demux will choose another audio stream in the program with - // the same stream type if it can't find one with the same language. - LanguageSelectionPolicy *string `locationName:"languageSelectionPolicy" type:"string" enum:"AudioLanguageSelectionPolicy"` +// GoString returns the string representation +func (s DeleteInputSecurityGroupOutput) GoString() string { + return s.String() +} + +type DeleteReservationInput struct { + _ struct{} `type:"structure"` + + // ReservationId is a required field + ReservationId *string `location:"uri" locationName:"reservationId" type:"string" required:"true"` } // String returns the string representation -func (s AudioLanguageSelection) String() string { +func (s DeleteReservationInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AudioLanguageSelection) GoString() string { +func (s DeleteReservationInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AudioLanguageSelection) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AudioLanguageSelection"} - if s.LanguageCode == nil { - invalidParams.Add(request.NewErrParamRequired("LanguageCode")) +func (s *DeleteReservationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteReservationInput"} + if s.ReservationId == nil { + invalidParams.Add(request.NewErrParamRequired("ReservationId")) } if invalidParams.Len() > 0 { @@ -2240,207 +5533,188 @@ func (s *AudioLanguageSelection) Validate() error { return nil } -// SetLanguageCode sets the LanguageCode field's value. -func (s *AudioLanguageSelection) SetLanguageCode(v string) *AudioLanguageSelection { - s.LanguageCode = &v - return s -} - -// SetLanguageSelectionPolicy sets the LanguageSelectionPolicy field's value. -func (s *AudioLanguageSelection) SetLanguageSelectionPolicy(v string) *AudioLanguageSelection { - s.LanguageSelectionPolicy = &v +// SetReservationId sets the ReservationId field's value. +func (s *DeleteReservationInput) SetReservationId(v string) *DeleteReservationInput { + s.ReservationId = &v return s } -type AudioNormalizationSettings struct { +type DeleteReservationOutput struct { _ struct{} `type:"structure"` - // Audio normalization algorithm to use. itu17701 conforms to the CALM Act specification, - // itu17702 conforms to the EBU R-128 specification. - Algorithm *string `locationName:"algorithm" type:"string" enum:"AudioNormalizationAlgorithm"` + Arn *string `locationName:"arn" type:"string"` - // When set to correctAudio the output audio is corrected using the chosen algorithm. - // If set to measureOnly, the audio will be measured but not adjusted. - AlgorithmControl *string `locationName:"algorithmControl" type:"string" enum:"AudioNormalizationAlgorithmControl"` + Count *int64 `locationName:"count" type:"integer"` - // Target LKFS(loudness) to adjust volume to. If no value is entered, a default - // value will be used according to the chosen algorithm. The CALM Act (1770-1) - // recommends a target of -24 LKFS. The EBU R-128 specification (1770-2) recommends - // a target of -23 LKFS. - TargetLkfs *float64 `locationName:"targetLkfs" type:"double"` + CurrencyCode *string `locationName:"currencyCode" type:"string"` + + Duration *int64 `locationName:"duration" type:"integer"` + + // Units for duration, e.g. 
'MONTHS' + DurationUnits *string `locationName:"durationUnits" type:"string" enum:"OfferingDurationUnits"` + + End *string `locationName:"end" type:"string"` + + FixedPrice *float64 `locationName:"fixedPrice" type:"double"` + + Name *string `locationName:"name" type:"string"` + + OfferingDescription *string `locationName:"offeringDescription" type:"string"` + + OfferingId *string `locationName:"offeringId" type:"string"` + + // Offering type, e.g. 'NO_UPFRONT' + OfferingType *string `locationName:"offeringType" type:"string" enum:"OfferingType"` + + Region *string `locationName:"region" type:"string"` + + ReservationId *string `locationName:"reservationId" type:"string"` + + // Resource configuration (codec, resolution, bitrate, ...) + ResourceSpecification *ReservationResourceSpecification `locationName:"resourceSpecification" type:"structure"` + + Start *string `locationName:"start" type:"string"` + + // Current reservation state + State *string `locationName:"state" type:"string" enum:"ReservationState"` + + UsagePrice *float64 `locationName:"usagePrice" type:"double"` } // String returns the string representation -func (s AudioNormalizationSettings) String() string { +func (s DeleteReservationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AudioNormalizationSettings) GoString() string { +func (s DeleteReservationOutput) GoString() string { return s.String() } -// SetAlgorithm sets the Algorithm field's value. -func (s *AudioNormalizationSettings) SetAlgorithm(v string) *AudioNormalizationSettings { - s.Algorithm = &v +// SetArn sets the Arn field's value. +func (s *DeleteReservationOutput) SetArn(v string) *DeleteReservationOutput { + s.Arn = &v return s } -// SetAlgorithmControl sets the AlgorithmControl field's value. -func (s *AudioNormalizationSettings) SetAlgorithmControl(v string) *AudioNormalizationSettings { - s.AlgorithmControl = &v +// SetCount sets the Count field's value. +func (s *DeleteReservationOutput) SetCount(v int64) *DeleteReservationOutput { + s.Count = &v return s } -// SetTargetLkfs sets the TargetLkfs field's value. -func (s *AudioNormalizationSettings) SetTargetLkfs(v float64) *AudioNormalizationSettings { - s.TargetLkfs = &v +// SetCurrencyCode sets the CurrencyCode field's value. +func (s *DeleteReservationOutput) SetCurrencyCode(v string) *DeleteReservationOutput { + s.CurrencyCode = &v return s } -type AudioOnlyHlsSettings struct { - _ struct{} `type:"structure"` - - // Specifies the group to which the audio Rendition belongs. - AudioGroupId *string `locationName:"audioGroupId" type:"string"` - - // For use with an audio only Stream. Must be a .jpg or .png file. If given, - // this image will be used as the cover-art for the audio only output. Ideally, - // it should be formatted for an iPhone screen for two reasons. The iPhone does - // not resize the image, it crops a centered image on the top/bottom and left/right. - // Additionally, this image file gets saved bit-for-bit into every 10-second - // segment file, so will increase bandwidth by {image file size} * {segment - // count} * {user count.}. - AudioOnlyImage *InputLocation `locationName:"audioOnlyImage" type:"structure"` - - // Four types of audio-only tracks are supported:Audio-Only Variant StreamThe - // client can play back this audio-only stream instead of video in low-bandwidth - // scenarios. 
Represented as an EXT-X-STREAM-INF in the HLS manifest.Alternate - // Audio, Auto Select, DefaultAlternate rendition that the client should try - // to play back by default. Represented as an EXT-X-MEDIA in the HLS manifest - // with DEFAULT=YES, AUTOSELECT=YESAlternate Audio, Auto Select, Not DefaultAlternate - // rendition that the client may try to play back by default. Represented as - // an EXT-X-MEDIA in the HLS manifest with DEFAULT=NO, AUTOSELECT=YESAlternate - // Audio, not Auto SelectAlternate rendition that the client will not try to - // play back by default. Represented as an EXT-X-MEDIA in the HLS manifest with - // DEFAULT=NO, AUTOSELECT=NO - AudioTrackType *string `locationName:"audioTrackType" type:"string" enum:"AudioOnlyHlsTrackType"` +// SetDuration sets the Duration field's value. +func (s *DeleteReservationOutput) SetDuration(v int64) *DeleteReservationOutput { + s.Duration = &v + return s } -// String returns the string representation -func (s AudioOnlyHlsSettings) String() string { - return awsutil.Prettify(s) +// SetDurationUnits sets the DurationUnits field's value. +func (s *DeleteReservationOutput) SetDurationUnits(v string) *DeleteReservationOutput { + s.DurationUnits = &v + return s } -// GoString returns the string representation -func (s AudioOnlyHlsSettings) GoString() string { - return s.String() +// SetEnd sets the End field's value. +func (s *DeleteReservationOutput) SetEnd(v string) *DeleteReservationOutput { + s.End = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *AudioOnlyHlsSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AudioOnlyHlsSettings"} - if s.AudioOnlyImage != nil { - if err := s.AudioOnlyImage.Validate(); err != nil { - invalidParams.AddNested("AudioOnlyImage", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetFixedPrice sets the FixedPrice field's value. +func (s *DeleteReservationOutput) SetFixedPrice(v float64) *DeleteReservationOutput { + s.FixedPrice = &v + return s } -// SetAudioGroupId sets the AudioGroupId field's value. -func (s *AudioOnlyHlsSettings) SetAudioGroupId(v string) *AudioOnlyHlsSettings { - s.AudioGroupId = &v +// SetName sets the Name field's value. +func (s *DeleteReservationOutput) SetName(v string) *DeleteReservationOutput { + s.Name = &v return s } -// SetAudioOnlyImage sets the AudioOnlyImage field's value. -func (s *AudioOnlyHlsSettings) SetAudioOnlyImage(v *InputLocation) *AudioOnlyHlsSettings { - s.AudioOnlyImage = v +// SetOfferingDescription sets the OfferingDescription field's value. +func (s *DeleteReservationOutput) SetOfferingDescription(v string) *DeleteReservationOutput { + s.OfferingDescription = &v return s } -// SetAudioTrackType sets the AudioTrackType field's value. -func (s *AudioOnlyHlsSettings) SetAudioTrackType(v string) *AudioOnlyHlsSettings { - s.AudioTrackType = &v +// SetOfferingId sets the OfferingId field's value. +func (s *DeleteReservationOutput) SetOfferingId(v string) *DeleteReservationOutput { + s.OfferingId = &v return s } -type AudioPidSelection struct { - _ struct{} `type:"structure"` +// SetOfferingType sets the OfferingType field's value. +func (s *DeleteReservationOutput) SetOfferingType(v string) *DeleteReservationOutput { + s.OfferingType = &v + return s +} - // Selects a specific PID from within a source. 
- // - // Pid is a required field - Pid *int64 `locationName:"pid" type:"integer" required:"true"` +// SetRegion sets the Region field's value. +func (s *DeleteReservationOutput) SetRegion(v string) *DeleteReservationOutput { + s.Region = &v + return s } -// String returns the string representation -func (s AudioPidSelection) String() string { - return awsutil.Prettify(s) +// SetReservationId sets the ReservationId field's value. +func (s *DeleteReservationOutput) SetReservationId(v string) *DeleteReservationOutput { + s.ReservationId = &v + return s } -// GoString returns the string representation -func (s AudioPidSelection) GoString() string { - return s.String() +// SetResourceSpecification sets the ResourceSpecification field's value. +func (s *DeleteReservationOutput) SetResourceSpecification(v *ReservationResourceSpecification) *DeleteReservationOutput { + s.ResourceSpecification = v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *AudioPidSelection) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AudioPidSelection"} - if s.Pid == nil { - invalidParams.Add(request.NewErrParamRequired("Pid")) - } +// SetStart sets the Start field's value. +func (s *DeleteReservationOutput) SetStart(v string) *DeleteReservationOutput { + s.Start = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetState sets the State field's value. +func (s *DeleteReservationOutput) SetState(v string) *DeleteReservationOutput { + s.State = &v + return s } -// SetPid sets the Pid field's value. -func (s *AudioPidSelection) SetPid(v int64) *AudioPidSelection { - s.Pid = &v +// SetUsagePrice sets the UsagePrice field's value. +func (s *DeleteReservationOutput) SetUsagePrice(v float64) *DeleteReservationOutput { + s.UsagePrice = &v return s } -type AudioSelector struct { +type DescribeChannelInput struct { _ struct{} `type:"structure"` - // The name of this AudioSelector. AudioDescriptions will use this name to uniquely - // identify this Selector. Selector names should be unique per input. - // - // Name is a required field - Name *string `locationName:"name" type:"string" required:"true"` - - // The audio selector settings. - SelectorSettings *AudioSelectorSettings `locationName:"selectorSettings" type:"structure"` + // ChannelId is a required field + ChannelId *string `location:"uri" locationName:"channelId" type:"string" required:"true"` } // String returns the string representation -func (s AudioSelector) String() string { +func (s DescribeChannelInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AudioSelector) GoString() string { +func (s DescribeChannelInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *AudioSelector) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AudioSelector"} - if s.Name == nil { - invalidParams.Add(request.NewErrParamRequired("Name")) - } - if s.SelectorSettings != nil { - if err := s.SelectorSettings.Validate(); err != nil { - invalidParams.AddNested("SelectorSettings", err.(request.ErrInvalidParams)) - } +func (s *DescribeChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeChannelInput"} + if s.ChannelId == nil { + invalidParams.Add(request.NewErrParamRequired("ChannelId")) } if invalidParams.Len() > 0 { @@ -2449,141 +5723,145 @@ func (s *AudioSelector) Validate() error { return nil } -// SetName sets the Name field's value. -func (s *AudioSelector) SetName(v string) *AudioSelector { - s.Name = &v - return s -} - -// SetSelectorSettings sets the SelectorSettings field's value. -func (s *AudioSelector) SetSelectorSettings(v *AudioSelectorSettings) *AudioSelector { - s.SelectorSettings = v +// SetChannelId sets the ChannelId field's value. +func (s *DescribeChannelInput) SetChannelId(v string) *DescribeChannelInput { + s.ChannelId = &v return s } -type AudioSelectorSettings struct { +type DescribeChannelOutput struct { _ struct{} `type:"structure"` - AudioLanguageSelection *AudioLanguageSelection `locationName:"audioLanguageSelection" type:"structure"` + Arn *string `locationName:"arn" type:"string"` - AudioPidSelection *AudioPidSelection `locationName:"audioPidSelection" type:"structure"` + Destinations []*OutputDestination `locationName:"destinations" type:"list"` + + EgressEndpoints []*ChannelEgressEndpoint `locationName:"egressEndpoints" type:"list"` + + EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` + + Id *string `locationName:"id" type:"string"` + + InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` + + InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + + // The log level the user wants for their channel. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + + Name *string `locationName:"name" type:"string"` + + PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` + + RoleArn *string `locationName:"roleArn" type:"string"` + + State *string `locationName:"state" type:"string" enum:"ChannelState"` } // String returns the string representation -func (s AudioSelectorSettings) String() string { +func (s DescribeChannelOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AudioSelectorSettings) GoString() string { +func (s DescribeChannelOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *AudioSelectorSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AudioSelectorSettings"} - if s.AudioLanguageSelection != nil { - if err := s.AudioLanguageSelection.Validate(); err != nil { - invalidParams.AddNested("AudioLanguageSelection", err.(request.ErrInvalidParams)) - } - } - if s.AudioPidSelection != nil { - if err := s.AudioPidSelection.Validate(); err != nil { - invalidParams.AddNested("AudioPidSelection", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetArn sets the Arn field's value. 
+func (s *DescribeChannelOutput) SetArn(v string) *DescribeChannelOutput { + s.Arn = &v + return s } -// SetAudioLanguageSelection sets the AudioLanguageSelection field's value. -func (s *AudioSelectorSettings) SetAudioLanguageSelection(v *AudioLanguageSelection) *AudioSelectorSettings { - s.AudioLanguageSelection = v +// SetDestinations sets the Destinations field's value. +func (s *DescribeChannelOutput) SetDestinations(v []*OutputDestination) *DescribeChannelOutput { + s.Destinations = v return s } -// SetAudioPidSelection sets the AudioPidSelection field's value. -func (s *AudioSelectorSettings) SetAudioPidSelection(v *AudioPidSelection) *AudioSelectorSettings { - s.AudioPidSelection = v +// SetEgressEndpoints sets the EgressEndpoints field's value. +func (s *DescribeChannelOutput) SetEgressEndpoints(v []*ChannelEgressEndpoint) *DescribeChannelOutput { + s.EgressEndpoints = v return s } -type AvailBlanking struct { - _ struct{} `type:"structure"` +// SetEncoderSettings sets the EncoderSettings field's value. +func (s *DescribeChannelOutput) SetEncoderSettings(v *EncoderSettings) *DescribeChannelOutput { + s.EncoderSettings = v + return s +} - // Blanking image to be used. Leave empty for solid black. Only bmp and png - // images are supported. - AvailBlankingImage *InputLocation `locationName:"availBlankingImage" type:"structure"` +// SetId sets the Id field's value. +func (s *DescribeChannelOutput) SetId(v string) *DescribeChannelOutput { + s.Id = &v + return s +} - // When set to enabled, causes video, audio and captions to be blanked when - // insertion metadata is added. - State *string `locationName:"state" type:"string" enum:"AvailBlankingState"` +// SetInputAttachments sets the InputAttachments field's value. +func (s *DescribeChannelOutput) SetInputAttachments(v []*InputAttachment) *DescribeChannelOutput { + s.InputAttachments = v + return s } -// String returns the string representation -func (s AvailBlanking) String() string { - return awsutil.Prettify(s) +// SetInputSpecification sets the InputSpecification field's value. +func (s *DescribeChannelOutput) SetInputSpecification(v *InputSpecification) *DescribeChannelOutput { + s.InputSpecification = v + return s } -// GoString returns the string representation -func (s AvailBlanking) GoString() string { - return s.String() +// SetLogLevel sets the LogLevel field's value. +func (s *DescribeChannelOutput) SetLogLevel(v string) *DescribeChannelOutput { + s.LogLevel = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *AvailBlanking) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AvailBlanking"} - if s.AvailBlankingImage != nil { - if err := s.AvailBlankingImage.Validate(); err != nil { - invalidParams.AddNested("AvailBlankingImage", err.(request.ErrInvalidParams)) - } - } +// SetName sets the Name field's value. +func (s *DescribeChannelOutput) SetName(v string) *DescribeChannelOutput { + s.Name = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetPipelinesRunningCount sets the PipelinesRunningCount field's value. +func (s *DescribeChannelOutput) SetPipelinesRunningCount(v int64) *DescribeChannelOutput { + s.PipelinesRunningCount = &v + return s } -// SetAvailBlankingImage sets the AvailBlankingImage field's value. -func (s *AvailBlanking) SetAvailBlankingImage(v *InputLocation) *AvailBlanking { - s.AvailBlankingImage = v +// SetRoleArn sets the RoleArn field's value. 
+func (s *DescribeChannelOutput) SetRoleArn(v string) *DescribeChannelOutput { + s.RoleArn = &v return s } // SetState sets the State field's value. -func (s *AvailBlanking) SetState(v string) *AvailBlanking { +func (s *DescribeChannelOutput) SetState(v string) *DescribeChannelOutput { s.State = &v return s } -type AvailConfiguration struct { +type DescribeInputInput struct { _ struct{} `type:"structure"` - // Ad avail settings. - AvailSettings *AvailSettings `locationName:"availSettings" type:"structure"` + // InputId is a required field + InputId *string `location:"uri" locationName:"inputId" type:"string" required:"true"` } // String returns the string representation -func (s AvailConfiguration) String() string { +func (s DescribeInputInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AvailConfiguration) GoString() string { +func (s DescribeInputInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AvailConfiguration) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AvailConfiguration"} - if s.AvailSettings != nil { - if err := s.AvailSettings.Validate(); err != nil { - invalidParams.AddNested("AvailSettings", err.(request.ErrInvalidParams)) - } +func (s *DescribeInputInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInputInput"} + if s.InputId == nil { + invalidParams.Add(request.NewErrParamRequired("InputId")) } if invalidParams.Len() > 0 { @@ -2592,114 +5870,120 @@ func (s *AvailConfiguration) Validate() error { return nil } -// SetAvailSettings sets the AvailSettings field's value. -func (s *AvailConfiguration) SetAvailSettings(v *AvailSettings) *AvailConfiguration { - s.AvailSettings = v +// SetInputId sets the InputId field's value. +func (s *DescribeInputInput) SetInputId(v string) *DescribeInputInput { + s.InputId = &v return s } -type AvailSettings struct { +type DescribeInputOutput struct { _ struct{} `type:"structure"` - Scte35SpliceInsert *Scte35SpliceInsert `locationName:"scte35SpliceInsert" type:"structure"` + Arn *string `locationName:"arn" type:"string"` - Scte35TimeSignalApos *Scte35TimeSignalApos `locationName:"scte35TimeSignalApos" type:"structure"` + AttachedChannels []*string `locationName:"attachedChannels" type:"list"` + + Destinations []*InputDestination `locationName:"destinations" type:"list"` + + Id *string `locationName:"id" type:"string"` + + Name *string `locationName:"name" type:"string"` + + SecurityGroups []*string `locationName:"securityGroups" type:"list"` + + Sources []*InputSource `locationName:"sources" type:"list"` + + State *string `locationName:"state" type:"string" enum:"InputState"` + + Type *string `locationName:"type" type:"string" enum:"InputType"` } // String returns the string representation -func (s AvailSettings) String() string { +func (s DescribeInputOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AvailSettings) GoString() string { +func (s DescribeInputOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *AvailSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AvailSettings"} - if s.Scte35SpliceInsert != nil { - if err := s.Scte35SpliceInsert.Validate(); err != nil { - invalidParams.AddNested("Scte35SpliceInsert", err.(request.ErrInvalidParams)) - } - } - if s.Scte35TimeSignalApos != nil { - if err := s.Scte35TimeSignalApos.Validate(); err != nil { - invalidParams.AddNested("Scte35TimeSignalApos", err.(request.ErrInvalidParams)) - } - } +// SetArn sets the Arn field's value. +func (s *DescribeInputOutput) SetArn(v string) *DescribeInputOutput { + s.Arn = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetAttachedChannels sets the AttachedChannels field's value. +func (s *DescribeInputOutput) SetAttachedChannels(v []*string) *DescribeInputOutput { + s.AttachedChannels = v + return s } -// SetScte35SpliceInsert sets the Scte35SpliceInsert field's value. -func (s *AvailSettings) SetScte35SpliceInsert(v *Scte35SpliceInsert) *AvailSettings { - s.Scte35SpliceInsert = v +// SetDestinations sets the Destinations field's value. +func (s *DescribeInputOutput) SetDestinations(v []*InputDestination) *DescribeInputOutput { + s.Destinations = v return s } -// SetScte35TimeSignalApos sets the Scte35TimeSignalApos field's value. -func (s *AvailSettings) SetScte35TimeSignalApos(v *Scte35TimeSignalApos) *AvailSettings { - s.Scte35TimeSignalApos = v +// SetId sets the Id field's value. +func (s *DescribeInputOutput) SetId(v string) *DescribeInputOutput { + s.Id = &v return s } -type BlackoutSlate struct { - _ struct{} `type:"structure"` +// SetName sets the Name field's value. +func (s *DescribeInputOutput) SetName(v string) *DescribeInputOutput { + s.Name = &v + return s +} - // Blackout slate image to be used. Leave empty for solid black. Only bmp and - // png images are supported. - BlackoutSlateImage *InputLocation `locationName:"blackoutSlateImage" type:"structure"` +// SetSecurityGroups sets the SecurityGroups field's value. +func (s *DescribeInputOutput) SetSecurityGroups(v []*string) *DescribeInputOutput { + s.SecurityGroups = v + return s +} - // Setting to enabled causes the encoder to blackout the video, audio, and captions, - // and raise the "Network Blackout Image" slate when an SCTE104/35 Network End - // Segmentation Descriptor is encountered. The blackout will be lifted when - // the Network Start Segmentation Descriptor is encountered. The Network End - // and Network Start descriptors must contain a network ID that matches the - // value entered in "Network ID". - NetworkEndBlackout *string `locationName:"networkEndBlackout" type:"string" enum:"BlackoutSlateNetworkEndBlackout"` +// SetSources sets the Sources field's value. +func (s *DescribeInputOutput) SetSources(v []*InputSource) *DescribeInputOutput { + s.Sources = v + return s +} - // Path to local file to use as Network End Blackout image. Image will be scaled - // to fill the entire output raster. - NetworkEndBlackoutImage *InputLocation `locationName:"networkEndBlackoutImage" type:"structure"` +// SetState sets the State field's value. +func (s *DescribeInputOutput) SetState(v string) *DescribeInputOutput { + s.State = &v + return s +} - // Provides Network ID that matches EIDR ID format (e.g., "10.XXXX/XXXX-XXXX-XXXX-XXXX-XXXX-C"). - NetworkId *string `locationName:"networkId" min:"34" type:"string"` +// SetType sets the Type field's value. 
+func (s *DescribeInputOutput) SetType(v string) *DescribeInputOutput { + s.Type = &v + return s +} - // When set to enabled, causes video, audio and captions to be blanked when - // indicated by program metadata. - State *string `locationName:"state" type:"string" enum:"BlackoutSlateState"` +type DescribeInputSecurityGroupInput struct { + _ struct{} `type:"structure"` + + // InputSecurityGroupId is a required field + InputSecurityGroupId *string `location:"uri" locationName:"inputSecurityGroupId" type:"string" required:"true"` } // String returns the string representation -func (s BlackoutSlate) String() string { +func (s DescribeInputSecurityGroupInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s BlackoutSlate) GoString() string { +func (s DescribeInputSecurityGroupInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *BlackoutSlate) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "BlackoutSlate"} - if s.NetworkId != nil && len(*s.NetworkId) < 34 { - invalidParams.Add(request.NewErrParamMinLen("NetworkId", 34)) - } - if s.BlackoutSlateImage != nil { - if err := s.BlackoutSlateImage.Validate(); err != nil { - invalidParams.AddNested("BlackoutSlateImage", err.(request.ErrInvalidParams)) - } - } - if s.NetworkEndBlackoutImage != nil { - if err := s.NetworkEndBlackoutImage.Validate(); err != nil { - invalidParams.AddNested("NetworkEndBlackoutImage", err.(request.ErrInvalidParams)) - } +func (s *DescribeInputSecurityGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInputSecurityGroupInput"} + if s.InputSecurityGroupId == nil { + invalidParams.Add(request.NewErrParamRequired("InputSecurityGroupId")) } if invalidParams.Len() > 0 { @@ -2708,153 +5992,88 @@ func (s *BlackoutSlate) Validate() error { return nil } -// SetBlackoutSlateImage sets the BlackoutSlateImage field's value. -func (s *BlackoutSlate) SetBlackoutSlateImage(v *InputLocation) *BlackoutSlate { - s.BlackoutSlateImage = v - return s -} - -// SetNetworkEndBlackout sets the NetworkEndBlackout field's value. -func (s *BlackoutSlate) SetNetworkEndBlackout(v string) *BlackoutSlate { - s.NetworkEndBlackout = &v - return s -} - -// SetNetworkEndBlackoutImage sets the NetworkEndBlackoutImage field's value. -func (s *BlackoutSlate) SetNetworkEndBlackoutImage(v *InputLocation) *BlackoutSlate { - s.NetworkEndBlackoutImage = v - return s -} - -// SetNetworkId sets the NetworkId field's value. -func (s *BlackoutSlate) SetNetworkId(v string) *BlackoutSlate { - s.NetworkId = &v - return s -} - -// SetState sets the State field's value. -func (s *BlackoutSlate) SetState(v string) *BlackoutSlate { - s.State = &v +// SetInputSecurityGroupId sets the InputSecurityGroupId field's value. +func (s *DescribeInputSecurityGroupInput) SetInputSecurityGroupId(v string) *DescribeInputSecurityGroupInput { + s.InputSecurityGroupId = &v return s } -type BurnInDestinationSettings struct { +type DescribeInputSecurityGroupOutput struct { _ struct{} `type:"structure"` - // If no explicit xPosition or yPosition is provided, setting alignment to centered - // will place the captions at the bottom center of the output. Similarly, setting - // a left alignment will align captions to the bottom left of the output. 
If - // x and y positions are given in conjunction with the alignment parameter, - // the font will be justified (either left or centered) relative to those coordinates. - // Selecting "smart" justification will left-justify live subtitles and center-justify - // pre-recorded subtitles. All burn-in and DVB-Sub font settings must match. - Alignment *string `locationName:"alignment" type:"string" enum:"BurnInAlignment"` - - // Specifies the color of the rectangle behind the captions. All burn-in and - // DVB-Sub font settings must match. - BackgroundColor *string `locationName:"backgroundColor" type:"string" enum:"BurnInBackgroundColor"` - - // Specifies the opacity of the background rectangle. 255 is opaque; 0 is transparent. - // Leaving this parameter out is equivalent to setting it to 0 (transparent). - // All burn-in and DVB-Sub font settings must match. - BackgroundOpacity *int64 `locationName:"backgroundOpacity" type:"integer"` - - // External font file used for caption burn-in. File extension must be 'ttf' - // or 'tte'. Although the user can select output fonts for many different types - // of input captions, embedded, STL and teletext sources use a strict grid system. - // Using external fonts with these caption sources could cause unexpected display - // of proportional fonts. All burn-in and DVB-Sub font settings must match. - Font *InputLocation `locationName:"font" type:"structure"` - - // Specifies the color of the burned-in captions. This option is not valid for - // source captions that are STL, 608/embedded or teletext. These source settings - // are already pre-defined by the caption stream. All burn-in and DVB-Sub font - // settings must match. - FontColor *string `locationName:"fontColor" type:"string" enum:"BurnInFontColor"` - - // Specifies the opacity of the burned-in captions. 255 is opaque; 0 is transparent. - // All burn-in and DVB-Sub font settings must match. - FontOpacity *int64 `locationName:"fontOpacity" type:"integer"` - - // Font resolution in DPI (dots per inch); default is 96 dpi. All burn-in and - // DVB-Sub font settings must match. - FontResolution *int64 `locationName:"fontResolution" min:"96" type:"integer"` + Arn *string `locationName:"arn" type:"string"` - // When set to 'auto' fontSize will scale depending on the size of the output. - // Giving a positive integer will specify the exact font size in points. All - // burn-in and DVB-Sub font settings must match. - FontSize *string `locationName:"fontSize" type:"string"` + Id *string `locationName:"id" type:"string"` - // Specifies font outline color. This option is not valid for source captions - // that are either 608/embedded or teletext. These source settings are already - // pre-defined by the caption stream. All burn-in and DVB-Sub font settings - // must match. - OutlineColor *string `locationName:"outlineColor" type:"string" enum:"BurnInOutlineColor"` + Inputs []*string `locationName:"inputs" type:"list"` - // Specifies font outline size in pixels. This option is not valid for source - // captions that are either 608/embedded or teletext. These source settings - // are already pre-defined by the caption stream. All burn-in and DVB-Sub font - // settings must match. - OutlineSize *int64 `locationName:"outlineSize" type:"integer"` + State *string `locationName:"state" type:"string" enum:"InputSecurityGroupState"` - // Specifies the color of the shadow cast by the captions. All burn-in and DVB-Sub - // font settings must match. 
- ShadowColor *string `locationName:"shadowColor" type:"string" enum:"BurnInShadowColor"` + WhitelistRules []*InputWhitelistRule `locationName:"whitelistRules" type:"list"` +} - // Specifies the opacity of the shadow. 255 is opaque; 0 is transparent. Leaving - // this parameter out is equivalent to setting it to 0 (transparent). All burn-in - // and DVB-Sub font settings must match. - ShadowOpacity *int64 `locationName:"shadowOpacity" type:"integer"` +// String returns the string representation +func (s DescribeInputSecurityGroupOutput) String() string { + return awsutil.Prettify(s) +} - // Specifies the horizontal offset of the shadow relative to the captions in - // pixels. A value of -2 would result in a shadow offset 2 pixels to the left. - // All burn-in and DVB-Sub font settings must match. - ShadowXOffset *int64 `locationName:"shadowXOffset" type:"integer"` +// GoString returns the string representation +func (s DescribeInputSecurityGroupOutput) GoString() string { + return s.String() +} - // Specifies the vertical offset of the shadow relative to the captions in pixels. - // A value of -2 would result in a shadow offset 2 pixels above the text. All - // burn-in and DVB-Sub font settings must match. - ShadowYOffset *int64 `locationName:"shadowYOffset" type:"integer"` +// SetArn sets the Arn field's value. +func (s *DescribeInputSecurityGroupOutput) SetArn(v string) *DescribeInputSecurityGroupOutput { + s.Arn = &v + return s +} - // Controls whether a fixed grid size will be used to generate the output subtitles - // bitmap. Only applicable for Teletext inputs and DVB-Sub/Burn-in outputs. - TeletextGridControl *string `locationName:"teletextGridControl" type:"string" enum:"BurnInTeletextGridControl"` +// SetId sets the Id field's value. +func (s *DescribeInputSecurityGroupOutput) SetId(v string) *DescribeInputSecurityGroupOutput { + s.Id = &v + return s +} - // Specifies the horizontal position of the caption relative to the left side - // of the output in pixels. A value of 10 would result in the captions starting - // 10 pixels from the left of the output. If no explicit xPosition is provided, - // the horizontal caption position will be determined by the alignment parameter. - // All burn-in and DVB-Sub font settings must match. - XPosition *int64 `locationName:"xPosition" type:"integer"` +// SetInputs sets the Inputs field's value. +func (s *DescribeInputSecurityGroupOutput) SetInputs(v []*string) *DescribeInputSecurityGroupOutput { + s.Inputs = v + return s +} - // Specifies the vertical position of the caption relative to the top of the - // output in pixels. A value of 10 would result in the captions starting 10 - // pixels from the top of the output. If no explicit yPosition is provided, - // the caption will be positioned towards the bottom of the output. All burn-in - // and DVB-Sub font settings must match. - YPosition *int64 `locationName:"yPosition" type:"integer"` +// SetState sets the State field's value. +func (s *DescribeInputSecurityGroupOutput) SetState(v string) *DescribeInputSecurityGroupOutput { + s.State = &v + return s +} + +// SetWhitelistRules sets the WhitelistRules field's value. 
+func (s *DescribeInputSecurityGroupOutput) SetWhitelistRules(v []*InputWhitelistRule) *DescribeInputSecurityGroupOutput { + s.WhitelistRules = v + return s +} + +type DescribeOfferingInput struct { + _ struct{} `type:"structure"` + + // OfferingId is a required field + OfferingId *string `location:"uri" locationName:"offeringId" type:"string" required:"true"` } // String returns the string representation -func (s BurnInDestinationSettings) String() string { +func (s DescribeOfferingInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s BurnInDestinationSettings) GoString() string { +func (s DescribeOfferingInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *BurnInDestinationSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "BurnInDestinationSettings"} - if s.FontResolution != nil && *s.FontResolution < 96 { - invalidParams.Add(request.NewErrParamMinValue("FontResolution", 96)) - } - if s.Font != nil { - if err := s.Font.Validate(); err != nil { - invalidParams.AddNested("Font", err.(request.ErrInvalidParams)) - } +func (s *DescribeOfferingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeOfferingInput"} + if s.OfferingId == nil { + invalidParams.Add(request.NewErrParamRequired("OfferingId")) } if invalidParams.Len() > 0 { @@ -2863,246 +6082,336 @@ func (s *BurnInDestinationSettings) Validate() error { return nil } -// SetAlignment sets the Alignment field's value. -func (s *BurnInDestinationSettings) SetAlignment(v string) *BurnInDestinationSettings { - s.Alignment = &v +// SetOfferingId sets the OfferingId field's value. +func (s *DescribeOfferingInput) SetOfferingId(v string) *DescribeOfferingInput { + s.OfferingId = &v return s } -// SetBackgroundColor sets the BackgroundColor field's value. -func (s *BurnInDestinationSettings) SetBackgroundColor(v string) *BurnInDestinationSettings { - s.BackgroundColor = &v - return s +type DescribeOfferingOutput struct { + _ struct{} `type:"structure"` + + Arn *string `locationName:"arn" type:"string"` + + CurrencyCode *string `locationName:"currencyCode" type:"string"` + + Duration *int64 `locationName:"duration" type:"integer"` + + // Units for duration, e.g. 'MONTHS' + DurationUnits *string `locationName:"durationUnits" type:"string" enum:"OfferingDurationUnits"` + + FixedPrice *float64 `locationName:"fixedPrice" type:"double"` + + OfferingDescription *string `locationName:"offeringDescription" type:"string"` + + OfferingId *string `locationName:"offeringId" type:"string"` + + // Offering type, e.g. 'NO_UPFRONT' + OfferingType *string `locationName:"offeringType" type:"string" enum:"OfferingType"` + + Region *string `locationName:"region" type:"string"` + + // Resource configuration (codec, resolution, bitrate, ...) + ResourceSpecification *ReservationResourceSpecification `locationName:"resourceSpecification" type:"structure"` + + UsagePrice *float64 `locationName:"usagePrice" type:"double"` } -// SetBackgroundOpacity sets the BackgroundOpacity field's value. -func (s *BurnInDestinationSettings) SetBackgroundOpacity(v int64) *BurnInDestinationSettings { - s.BackgroundOpacity = &v - return s +// String returns the string representation +func (s DescribeOfferingOutput) String() string { + return awsutil.Prettify(s) } -// SetFont sets the Font field's value. 
-func (s *BurnInDestinationSettings) SetFont(v *InputLocation) *BurnInDestinationSettings { - s.Font = v - return s +// GoString returns the string representation +func (s DescribeOfferingOutput) GoString() string { + return s.String() } -// SetFontColor sets the FontColor field's value. -func (s *BurnInDestinationSettings) SetFontColor(v string) *BurnInDestinationSettings { - s.FontColor = &v +// SetArn sets the Arn field's value. +func (s *DescribeOfferingOutput) SetArn(v string) *DescribeOfferingOutput { + s.Arn = &v return s } -// SetFontOpacity sets the FontOpacity field's value. -func (s *BurnInDestinationSettings) SetFontOpacity(v int64) *BurnInDestinationSettings { - s.FontOpacity = &v +// SetCurrencyCode sets the CurrencyCode field's value. +func (s *DescribeOfferingOutput) SetCurrencyCode(v string) *DescribeOfferingOutput { + s.CurrencyCode = &v return s } -// SetFontResolution sets the FontResolution field's value. -func (s *BurnInDestinationSettings) SetFontResolution(v int64) *BurnInDestinationSettings { - s.FontResolution = &v +// SetDuration sets the Duration field's value. +func (s *DescribeOfferingOutput) SetDuration(v int64) *DescribeOfferingOutput { + s.Duration = &v return s } -// SetFontSize sets the FontSize field's value. -func (s *BurnInDestinationSettings) SetFontSize(v string) *BurnInDestinationSettings { - s.FontSize = &v +// SetDurationUnits sets the DurationUnits field's value. +func (s *DescribeOfferingOutput) SetDurationUnits(v string) *DescribeOfferingOutput { + s.DurationUnits = &v return s } -// SetOutlineColor sets the OutlineColor field's value. -func (s *BurnInDestinationSettings) SetOutlineColor(v string) *BurnInDestinationSettings { - s.OutlineColor = &v +// SetFixedPrice sets the FixedPrice field's value. +func (s *DescribeOfferingOutput) SetFixedPrice(v float64) *DescribeOfferingOutput { + s.FixedPrice = &v return s } -// SetOutlineSize sets the OutlineSize field's value. -func (s *BurnInDestinationSettings) SetOutlineSize(v int64) *BurnInDestinationSettings { - s.OutlineSize = &v +// SetOfferingDescription sets the OfferingDescription field's value. +func (s *DescribeOfferingOutput) SetOfferingDescription(v string) *DescribeOfferingOutput { + s.OfferingDescription = &v return s } -// SetShadowColor sets the ShadowColor field's value. -func (s *BurnInDestinationSettings) SetShadowColor(v string) *BurnInDestinationSettings { - s.ShadowColor = &v +// SetOfferingId sets the OfferingId field's value. +func (s *DescribeOfferingOutput) SetOfferingId(v string) *DescribeOfferingOutput { + s.OfferingId = &v return s } -// SetShadowOpacity sets the ShadowOpacity field's value. -func (s *BurnInDestinationSettings) SetShadowOpacity(v int64) *BurnInDestinationSettings { - s.ShadowOpacity = &v +// SetOfferingType sets the OfferingType field's value. +func (s *DescribeOfferingOutput) SetOfferingType(v string) *DescribeOfferingOutput { + s.OfferingType = &v return s } -// SetShadowXOffset sets the ShadowXOffset field's value. -func (s *BurnInDestinationSettings) SetShadowXOffset(v int64) *BurnInDestinationSettings { - s.ShadowXOffset = &v +// SetRegion sets the Region field's value. +func (s *DescribeOfferingOutput) SetRegion(v string) *DescribeOfferingOutput { + s.Region = &v return s } -// SetShadowYOffset sets the ShadowYOffset field's value. -func (s *BurnInDestinationSettings) SetShadowYOffset(v int64) *BurnInDestinationSettings { - s.ShadowYOffset = &v +// SetResourceSpecification sets the ResourceSpecification field's value. 
+func (s *DescribeOfferingOutput) SetResourceSpecification(v *ReservationResourceSpecification) *DescribeOfferingOutput { + s.ResourceSpecification = v return s } -// SetTeletextGridControl sets the TeletextGridControl field's value. -func (s *BurnInDestinationSettings) SetTeletextGridControl(v string) *BurnInDestinationSettings { - s.TeletextGridControl = &v +// SetUsagePrice sets the UsagePrice field's value. +func (s *DescribeOfferingOutput) SetUsagePrice(v float64) *DescribeOfferingOutput { + s.UsagePrice = &v return s } -// SetXPosition sets the XPosition field's value. -func (s *BurnInDestinationSettings) SetXPosition(v int64) *BurnInDestinationSettings { - s.XPosition = &v - return s +type DescribeReservationInput struct { + _ struct{} `type:"structure"` + + // ReservationId is a required field + ReservationId *string `location:"uri" locationName:"reservationId" type:"string" required:"true"` } -// SetYPosition sets the YPosition field's value. -func (s *BurnInDestinationSettings) SetYPosition(v int64) *BurnInDestinationSettings { - s.YPosition = &v +// String returns the string representation +func (s DescribeReservationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeReservationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeReservationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeReservationInput"} + if s.ReservationId == nil { + invalidParams.Add(request.NewErrParamRequired("ReservationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetReservationId sets the ReservationId field's value. +func (s *DescribeReservationInput) SetReservationId(v string) *DescribeReservationInput { + s.ReservationId = &v return s } -// Output groups for this Live Event. Output groups contain information about -// where streams should be distributed. -type CaptionDescription struct { +type DescribeReservationOutput struct { _ struct{} `type:"structure"` - // Specifies which input caption selector to use as a caption source when generating - // output captions. This field should match a captionSelector name. - // - // CaptionSelectorName is a required field - CaptionSelectorName *string `locationName:"captionSelectorName" type:"string" required:"true"` + Arn *string `locationName:"arn" type:"string"` - // Additional settings for captions destination that depend on the destination - // type. - DestinationSettings *CaptionDestinationSettings `locationName:"destinationSettings" type:"structure"` + Count *int64 `locationName:"count" type:"integer"` - // ISO 639-2 three-digit code: http://www.loc.gov/standards/iso639-2/ - LanguageCode *string `locationName:"languageCode" type:"string"` + CurrencyCode *string `locationName:"currencyCode" type:"string"` - // Human readable information to indicate captions available for players (eg. - // English, or Spanish). - LanguageDescription *string `locationName:"languageDescription" type:"string"` + Duration *int64 `locationName:"duration" type:"integer"` - // Name of the caption description. Used to associate a caption description - // with an output. Names must be unique within an event. - // - // Name is a required field - Name *string `locationName:"name" type:"string" required:"true"` + // Units for duration, e.g. 
'MONTHS' + DurationUnits *string `locationName:"durationUnits" type:"string" enum:"OfferingDurationUnits"` + + End *string `locationName:"end" type:"string"` + + FixedPrice *float64 `locationName:"fixedPrice" type:"double"` + + Name *string `locationName:"name" type:"string"` + + OfferingDescription *string `locationName:"offeringDescription" type:"string"` + + OfferingId *string `locationName:"offeringId" type:"string"` + + // Offering type, e.g. 'NO_UPFRONT' + OfferingType *string `locationName:"offeringType" type:"string" enum:"OfferingType"` + + Region *string `locationName:"region" type:"string"` + + ReservationId *string `locationName:"reservationId" type:"string"` + + // Resource configuration (codec, resolution, bitrate, ...) + ResourceSpecification *ReservationResourceSpecification `locationName:"resourceSpecification" type:"structure"` + + Start *string `locationName:"start" type:"string"` + + // Current reservation state + State *string `locationName:"state" type:"string" enum:"ReservationState"` + + UsagePrice *float64 `locationName:"usagePrice" type:"double"` } // String returns the string representation -func (s CaptionDescription) String() string { +func (s DescribeReservationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CaptionDescription) GoString() string { +func (s DescribeReservationOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CaptionDescription) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CaptionDescription"} - if s.CaptionSelectorName == nil { - invalidParams.Add(request.NewErrParamRequired("CaptionSelectorName")) - } - if s.Name == nil { - invalidParams.Add(request.NewErrParamRequired("Name")) - } - if s.DestinationSettings != nil { - if err := s.DestinationSettings.Validate(); err != nil { - invalidParams.AddNested("DestinationSettings", err.(request.ErrInvalidParams)) - } - } +// SetArn sets the Arn field's value. +func (s *DescribeReservationOutput) SetArn(v string) *DescribeReservationOutput { + s.Arn = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCount sets the Count field's value. +func (s *DescribeReservationOutput) SetCount(v int64) *DescribeReservationOutput { + s.Count = &v + return s } -// SetCaptionSelectorName sets the CaptionSelectorName field's value. -func (s *CaptionDescription) SetCaptionSelectorName(v string) *CaptionDescription { - s.CaptionSelectorName = &v +// SetCurrencyCode sets the CurrencyCode field's value. +func (s *DescribeReservationOutput) SetCurrencyCode(v string) *DescribeReservationOutput { + s.CurrencyCode = &v return s } -// SetDestinationSettings sets the DestinationSettings field's value. -func (s *CaptionDescription) SetDestinationSettings(v *CaptionDestinationSettings) *CaptionDescription { - s.DestinationSettings = v +// SetDuration sets the Duration field's value. +func (s *DescribeReservationOutput) SetDuration(v int64) *DescribeReservationOutput { + s.Duration = &v return s } -// SetLanguageCode sets the LanguageCode field's value. -func (s *CaptionDescription) SetLanguageCode(v string) *CaptionDescription { - s.LanguageCode = &v +// SetDurationUnits sets the DurationUnits field's value. +func (s *DescribeReservationOutput) SetDurationUnits(v string) *DescribeReservationOutput { + s.DurationUnits = &v return s } -// SetLanguageDescription sets the LanguageDescription field's value. 
-func (s *CaptionDescription) SetLanguageDescription(v string) *CaptionDescription { - s.LanguageDescription = &v +// SetEnd sets the End field's value. +func (s *DescribeReservationOutput) SetEnd(v string) *DescribeReservationOutput { + s.End = &v + return s +} + +// SetFixedPrice sets the FixedPrice field's value. +func (s *DescribeReservationOutput) SetFixedPrice(v float64) *DescribeReservationOutput { + s.FixedPrice = &v return s } // SetName sets the Name field's value. -func (s *CaptionDescription) SetName(v string) *CaptionDescription { +func (s *DescribeReservationOutput) SetName(v string) *DescribeReservationOutput { s.Name = &v return s } -type CaptionDestinationSettings struct { - _ struct{} `type:"structure"` +// SetOfferingDescription sets the OfferingDescription field's value. +func (s *DescribeReservationOutput) SetOfferingDescription(v string) *DescribeReservationOutput { + s.OfferingDescription = &v + return s +} - AribDestinationSettings *AribDestinationSettings `locationName:"aribDestinationSettings" type:"structure"` +// SetOfferingId sets the OfferingId field's value. +func (s *DescribeReservationOutput) SetOfferingId(v string) *DescribeReservationOutput { + s.OfferingId = &v + return s +} - BurnInDestinationSettings *BurnInDestinationSettings `locationName:"burnInDestinationSettings" type:"structure"` +// SetOfferingType sets the OfferingType field's value. +func (s *DescribeReservationOutput) SetOfferingType(v string) *DescribeReservationOutput { + s.OfferingType = &v + return s +} - DvbSubDestinationSettings *DvbSubDestinationSettings `locationName:"dvbSubDestinationSettings" type:"structure"` +// SetRegion sets the Region field's value. +func (s *DescribeReservationOutput) SetRegion(v string) *DescribeReservationOutput { + s.Region = &v + return s +} - EmbeddedDestinationSettings *EmbeddedDestinationSettings `locationName:"embeddedDestinationSettings" type:"structure"` +// SetReservationId sets the ReservationId field's value. +func (s *DescribeReservationOutput) SetReservationId(v string) *DescribeReservationOutput { + s.ReservationId = &v + return s +} - EmbeddedPlusScte20DestinationSettings *EmbeddedPlusScte20DestinationSettings `locationName:"embeddedPlusScte20DestinationSettings" type:"structure"` +// SetResourceSpecification sets the ResourceSpecification field's value. +func (s *DescribeReservationOutput) SetResourceSpecification(v *ReservationResourceSpecification) *DescribeReservationOutput { + s.ResourceSpecification = v + return s +} - Scte20PlusEmbeddedDestinationSettings *Scte20PlusEmbeddedDestinationSettings `locationName:"scte20PlusEmbeddedDestinationSettings" type:"structure"` +// SetStart sets the Start field's value. +func (s *DescribeReservationOutput) SetStart(v string) *DescribeReservationOutput { + s.Start = &v + return s +} - Scte27DestinationSettings *Scte27DestinationSettings `locationName:"scte27DestinationSettings" type:"structure"` +// SetState sets the State field's value. +func (s *DescribeReservationOutput) SetState(v string) *DescribeReservationOutput { + s.State = &v + return s +} - SmpteTtDestinationSettings *SmpteTtDestinationSettings `locationName:"smpteTtDestinationSettings" type:"structure"` +// SetUsagePrice sets the UsagePrice field's value. 
+func (s *DescribeReservationOutput) SetUsagePrice(v float64) *DescribeReservationOutput { + s.UsagePrice = &v + return s +} - TeletextDestinationSettings *TeletextDestinationSettings `locationName:"teletextDestinationSettings" type:"structure"` +type DescribeScheduleInput struct { + _ struct{} `type:"structure"` - TtmlDestinationSettings *TtmlDestinationSettings `locationName:"ttmlDestinationSettings" type:"structure"` + // ChannelId is a required field + ChannelId *string `location:"uri" locationName:"channelId" type:"string" required:"true"` - WebvttDestinationSettings *WebvttDestinationSettings `locationName:"webvttDestinationSettings" type:"structure"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s CaptionDestinationSettings) String() string { +func (s DescribeScheduleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CaptionDestinationSettings) GoString() string { +func (s DescribeScheduleInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CaptionDestinationSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CaptionDestinationSettings"} - if s.BurnInDestinationSettings != nil { - if err := s.BurnInDestinationSettings.Validate(); err != nil { - invalidParams.AddNested("BurnInDestinationSettings", err.(request.ErrInvalidParams)) - } +func (s *DescribeScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeScheduleInput"} + if s.ChannelId == nil { + invalidParams.Add(request.NewErrParamRequired("ChannelId")) } - if s.DvbSubDestinationSettings != nil { - if err := s.DvbSubDestinationSettings.Validate(); err != nil { - invalidParams.AddNested("DvbSubDestinationSettings", err.(request.ErrInvalidParams)) - } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -3111,124 +6420,98 @@ func (s *CaptionDestinationSettings) Validate() error { return nil } -// SetAribDestinationSettings sets the AribDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetAribDestinationSettings(v *AribDestinationSettings) *CaptionDestinationSettings { - s.AribDestinationSettings = v - return s -} - -// SetBurnInDestinationSettings sets the BurnInDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetBurnInDestinationSettings(v *BurnInDestinationSettings) *CaptionDestinationSettings { - s.BurnInDestinationSettings = v +// SetChannelId sets the ChannelId field's value. +func (s *DescribeScheduleInput) SetChannelId(v string) *DescribeScheduleInput { + s.ChannelId = &v return s } -// SetDvbSubDestinationSettings sets the DvbSubDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetDvbSubDestinationSettings(v *DvbSubDestinationSettings) *CaptionDestinationSettings { - s.DvbSubDestinationSettings = v +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeScheduleInput) SetMaxResults(v int64) *DescribeScheduleInput { + s.MaxResults = &v return s } -// SetEmbeddedDestinationSettings sets the EmbeddedDestinationSettings field's value. 
-func (s *CaptionDestinationSettings) SetEmbeddedDestinationSettings(v *EmbeddedDestinationSettings) *CaptionDestinationSettings { - s.EmbeddedDestinationSettings = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeScheduleInput) SetNextToken(v string) *DescribeScheduleInput { + s.NextToken = &v return s } -// SetEmbeddedPlusScte20DestinationSettings sets the EmbeddedPlusScte20DestinationSettings field's value. -func (s *CaptionDestinationSettings) SetEmbeddedPlusScte20DestinationSettings(v *EmbeddedPlusScte20DestinationSettings) *CaptionDestinationSettings { - s.EmbeddedPlusScte20DestinationSettings = v - return s -} +type DescribeScheduleOutput struct { + _ struct{} `type:"structure"` -// SetScte20PlusEmbeddedDestinationSettings sets the Scte20PlusEmbeddedDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetScte20PlusEmbeddedDestinationSettings(v *Scte20PlusEmbeddedDestinationSettings) *CaptionDestinationSettings { - s.Scte20PlusEmbeddedDestinationSettings = v - return s -} + NextToken *string `locationName:"nextToken" type:"string"` -// SetScte27DestinationSettings sets the Scte27DestinationSettings field's value. -func (s *CaptionDestinationSettings) SetScte27DestinationSettings(v *Scte27DestinationSettings) *CaptionDestinationSettings { - s.Scte27DestinationSettings = v - return s + ScheduleActions []*ScheduleAction `locationName:"scheduleActions" type:"list"` } -// SetSmpteTtDestinationSettings sets the SmpteTtDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetSmpteTtDestinationSettings(v *SmpteTtDestinationSettings) *CaptionDestinationSettings { - s.SmpteTtDestinationSettings = v - return s +// String returns the string representation +func (s DescribeScheduleOutput) String() string { + return awsutil.Prettify(s) } -// SetTeletextDestinationSettings sets the TeletextDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetTeletextDestinationSettings(v *TeletextDestinationSettings) *CaptionDestinationSettings { - s.TeletextDestinationSettings = v - return s +// GoString returns the string representation +func (s DescribeScheduleOutput) GoString() string { + return s.String() } -// SetTtmlDestinationSettings sets the TtmlDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetTtmlDestinationSettings(v *TtmlDestinationSettings) *CaptionDestinationSettings { - s.TtmlDestinationSettings = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeScheduleOutput) SetNextToken(v string) *DescribeScheduleOutput { + s.NextToken = &v return s } -// SetWebvttDestinationSettings sets the WebvttDestinationSettings field's value. -func (s *CaptionDestinationSettings) SetWebvttDestinationSettings(v *WebvttDestinationSettings) *CaptionDestinationSettings { - s.WebvttDestinationSettings = v +// SetScheduleActions sets the ScheduleActions field's value. +func (s *DescribeScheduleOutput) SetScheduleActions(v []*ScheduleAction) *DescribeScheduleOutput { + s.ScheduleActions = v return s } -// Maps a caption channel to an ISO 693-2 language code (http://www.loc.gov/standards/iso639-2), -// with an optional description. -type CaptionLanguageMapping struct { +// DVB Network Information Table (NIT) +type DvbNitSettings struct { _ struct{} `type:"structure"` - // The closed caption channel being described by this CaptionLanguageMapping. - // Each channel mapping must have a unique channel number (maximum of 4) + // The numeric value placed in the Network Information Table (NIT). 
// - // CaptionChannel is a required field - CaptionChannel *int64 `locationName:"captionChannel" min:"1" type:"integer" required:"true"` + // NetworkId is a required field + NetworkId *int64 `locationName:"networkId" type:"integer" required:"true"` - // Three character ISO 639-2 language code (see http://www.loc.gov/standards/iso639-2) + // The network name text placed in the networkNameDescriptor inside the Network + // Information Table. Maximum length is 256 characters. // - // LanguageCode is a required field - LanguageCode *string `locationName:"languageCode" min:"3" type:"string" required:"true"` + // NetworkName is a required field + NetworkName *string `locationName:"networkName" min:"1" type:"string" required:"true"` - // Textual description of language - // - // LanguageDescription is a required field - LanguageDescription *string `locationName:"languageDescription" min:"1" type:"string" required:"true"` + // The number of milliseconds between instances of this table in the output + // transport stream. + RepInterval *int64 `locationName:"repInterval" min:"25" type:"integer"` } // String returns the string representation -func (s CaptionLanguageMapping) String() string { +func (s DvbNitSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CaptionLanguageMapping) GoString() string { +func (s DvbNitSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CaptionLanguageMapping) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CaptionLanguageMapping"} - if s.CaptionChannel == nil { - invalidParams.Add(request.NewErrParamRequired("CaptionChannel")) - } - if s.CaptionChannel != nil && *s.CaptionChannel < 1 { - invalidParams.Add(request.NewErrParamMinValue("CaptionChannel", 1)) - } - if s.LanguageCode == nil { - invalidParams.Add(request.NewErrParamRequired("LanguageCode")) +func (s *DvbNitSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbNitSettings"} + if s.NetworkId == nil { + invalidParams.Add(request.NewErrParamRequired("NetworkId")) } - if s.LanguageCode != nil && len(*s.LanguageCode) < 3 { - invalidParams.Add(request.NewErrParamMinLen("LanguageCode", 3)) + if s.NetworkName == nil { + invalidParams.Add(request.NewErrParamRequired("NetworkName")) } - if s.LanguageDescription == nil { - invalidParams.Add(request.NewErrParamRequired("LanguageDescription")) + if s.NetworkName != nil && len(*s.NetworkName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NetworkName", 1)) } - if s.LanguageDescription != nil && len(*s.LanguageDescription) < 1 { - invalidParams.Add(request.NewErrParamMinLen("LanguageDescription", 1)) + if s.RepInterval != nil && *s.RepInterval < 25 { + invalidParams.Add(request.NewErrParamMinValue("RepInterval", 25)) } if invalidParams.Len() > 0 { @@ -3237,64 +6520,70 @@ func (s *CaptionLanguageMapping) Validate() error { return nil } -// SetCaptionChannel sets the CaptionChannel field's value. -func (s *CaptionLanguageMapping) SetCaptionChannel(v int64) *CaptionLanguageMapping { - s.CaptionChannel = &v +// SetNetworkId sets the NetworkId field's value. +func (s *DvbNitSettings) SetNetworkId(v int64) *DvbNitSettings { + s.NetworkId = &v return s } -// SetLanguageCode sets the LanguageCode field's value. -func (s *CaptionLanguageMapping) SetLanguageCode(v string) *CaptionLanguageMapping { - s.LanguageCode = &v +// SetNetworkName sets the NetworkName field's value. 
+func (s *DvbNitSettings) SetNetworkName(v string) *DvbNitSettings { + s.NetworkName = &v return s } -// SetLanguageDescription sets the LanguageDescription field's value. -func (s *CaptionLanguageMapping) SetLanguageDescription(v string) *CaptionLanguageMapping { - s.LanguageDescription = &v +// SetRepInterval sets the RepInterval field's value. +func (s *DvbNitSettings) SetRepInterval(v int64) *DvbNitSettings { + s.RepInterval = &v return s } -// Output groups for this Live Event. Output groups contain information about -// where streams should be distributed. -type CaptionSelector struct { +// DVB Service Description Table (SDT) +type DvbSdtSettings struct { _ struct{} `type:"structure"` - // When specified this field indicates the three letter language code of the - // caption track to extract from the source. - LanguageCode *string `locationName:"languageCode" type:"string"` + // Selects method of inserting SDT information into output stream. The sdtFollow + // setting copies SDT information from input stream to output stream. The sdtFollowIfPresent + // setting copies SDT information from input stream to output stream if SDT + // information is present in the input, otherwise it will fall back on the user-defined + // values. The sdtManual setting means user will enter the SDT information. + // The sdtNone setting means output stream will not contain SDT information. + OutputSdt *string `locationName:"outputSdt" type:"string" enum:"DvbSdtOutputSdt"` - // Name identifier for a caption selector. This name is used to associate this - // caption selector with one or more caption descriptions. Names must be unique - // within an event. - // - // Name is a required field - Name *string `locationName:"name" type:"string" required:"true"` + // The number of milliseconds between instances of this table in the output + // transport stream. + RepInterval *int64 `locationName:"repInterval" min:"25" type:"integer"` - // Caption selector settings. - SelectorSettings *CaptionSelectorSettings `locationName:"selectorSettings" type:"structure"` + // The service name placed in the serviceDescriptor in the Service Description + // Table. Maximum length is 256 characters. + ServiceName *string `locationName:"serviceName" min:"1" type:"string"` + + // The service provider name placed in the serviceDescriptor in the Service + // Description Table. Maximum length is 256 characters. + ServiceProviderName *string `locationName:"serviceProviderName" min:"1" type:"string"` } // String returns the string representation -func (s CaptionSelector) String() string { +func (s DvbSdtSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CaptionSelector) GoString() string { +func (s DvbSdtSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CaptionSelector) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CaptionSelector"} - if s.Name == nil { - invalidParams.Add(request.NewErrParamRequired("Name")) +func (s *DvbSdtSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbSdtSettings"} + if s.RepInterval != nil && *s.RepInterval < 25 { + invalidParams.Add(request.NewErrParamMinValue("RepInterval", 25)) } - if s.SelectorSettings != nil { - if err := s.SelectorSettings.Validate(); err != nil { - invalidParams.AddNested("SelectorSettings", err.(request.ErrInvalidParams)) - } + if s.ServiceName != nil && len(*s.ServiceName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceName", 1)) + } + if s.ServiceProviderName != nil && len(*s.ServiceProviderName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceProviderName", 1)) } if invalidParams.Len() > 0 { @@ -3303,71 +6592,152 @@ func (s *CaptionSelector) Validate() error { return nil } -// SetLanguageCode sets the LanguageCode field's value. -func (s *CaptionSelector) SetLanguageCode(v string) *CaptionSelector { - s.LanguageCode = &v +// SetOutputSdt sets the OutputSdt field's value. +func (s *DvbSdtSettings) SetOutputSdt(v string) *DvbSdtSettings { + s.OutputSdt = &v return s } -// SetName sets the Name field's value. -func (s *CaptionSelector) SetName(v string) *CaptionSelector { - s.Name = &v +// SetRepInterval sets the RepInterval field's value. +func (s *DvbSdtSettings) SetRepInterval(v int64) *DvbSdtSettings { + s.RepInterval = &v + return s +} + +// SetServiceName sets the ServiceName field's value. +func (s *DvbSdtSettings) SetServiceName(v string) *DvbSdtSettings { + s.ServiceName = &v return s } -// SetSelectorSettings sets the SelectorSettings field's value. -func (s *CaptionSelector) SetSelectorSettings(v *CaptionSelectorSettings) *CaptionSelector { - s.SelectorSettings = v +// SetServiceProviderName sets the ServiceProviderName field's value. +func (s *DvbSdtSettings) SetServiceProviderName(v string) *DvbSdtSettings { + s.ServiceProviderName = &v return s } -type CaptionSelectorSettings struct { +type DvbSubDestinationSettings struct { _ struct{} `type:"structure"` - AribSourceSettings *AribSourceSettings `locationName:"aribSourceSettings" type:"structure"` + // If no explicit xPosition or yPosition is provided, setting alignment to centered + // will place the captions at the bottom center of the output. Similarly, setting + // a left alignment will align captions to the bottom left of the output. If + // x and y positions are given in conjunction with the alignment parameter, + // the font will be justified (either left or centered) relative to those coordinates. + // Selecting "smart" justification will left-justify live subtitles and center-justify + // pre-recorded subtitles. This option is not valid for source captions that + // are STL or 608/embedded. These source settings are already pre-defined by + // the caption stream. All burn-in and DVB-Sub font settings must match. + Alignment *string `locationName:"alignment" type:"string" enum:"DvbSubDestinationAlignment"` - DvbSubSourceSettings *DvbSubSourceSettings `locationName:"dvbSubSourceSettings" type:"structure"` + // Specifies the color of the rectangle behind the captions. All burn-in and + // DVB-Sub font settings must match. 
+ BackgroundColor *string `locationName:"backgroundColor" type:"string" enum:"DvbSubDestinationBackgroundColor"` - EmbeddedSourceSettings *EmbeddedSourceSettings `locationName:"embeddedSourceSettings" type:"structure"` + // Specifies the opacity of the background rectangle. 255 is opaque; 0 is transparent. + // Leaving this parameter blank is equivalent to setting it to 0 (transparent). + // All burn-in and DVB-Sub font settings must match. + BackgroundOpacity *int64 `locationName:"backgroundOpacity" type:"integer"` - Scte20SourceSettings *Scte20SourceSettings `locationName:"scte20SourceSettings" type:"structure"` + // External font file used for caption burn-in. File extension must be 'ttf' + // or 'tte'. Although the user can select output fonts for many different types + // of input captions, embedded, STL and teletext sources use a strict grid system. + // Using external fonts with these caption sources could cause unexpected display + // of proportional fonts. All burn-in and DVB-Sub font settings must match. + Font *InputLocation `locationName:"font" type:"structure"` - Scte27SourceSettings *Scte27SourceSettings `locationName:"scte27SourceSettings" type:"structure"` + // Specifies the color of the burned-in captions. This option is not valid for + // source captions that are STL, 608/embedded or teletext. These source settings + // are already pre-defined by the caption stream. All burn-in and DVB-Sub font + // settings must match. + FontColor *string `locationName:"fontColor" type:"string" enum:"DvbSubDestinationFontColor"` - TeletextSourceSettings *TeletextSourceSettings `locationName:"teletextSourceSettings" type:"structure"` + // Specifies the opacity of the burned-in captions. 255 is opaque; 0 is transparent. + // All burn-in and DVB-Sub font settings must match. + FontOpacity *int64 `locationName:"fontOpacity" type:"integer"` + + // Font resolution in DPI (dots per inch); default is 96 dpi. All burn-in and + // DVB-Sub font settings must match. + FontResolution *int64 `locationName:"fontResolution" min:"96" type:"integer"` + + // When set to auto fontSize will scale depending on the size of the output. + // Giving a positive integer will specify the exact font size in points. All + // burn-in and DVB-Sub font settings must match. + FontSize *string `locationName:"fontSize" type:"string"` + + // Specifies font outline color. This option is not valid for source captions + // that are either 608/embedded or teletext. These source settings are already + // pre-defined by the caption stream. All burn-in and DVB-Sub font settings + // must match. + OutlineColor *string `locationName:"outlineColor" type:"string" enum:"DvbSubDestinationOutlineColor"` + + // Specifies font outline size in pixels. This option is not valid for source + // captions that are either 608/embedded or teletext. These source settings + // are already pre-defined by the caption stream. All burn-in and DVB-Sub font + // settings must match. + OutlineSize *int64 `locationName:"outlineSize" type:"integer"` + + // Specifies the color of the shadow cast by the captions. All burn-in and DVB-Sub + // font settings must match. + ShadowColor *string `locationName:"shadowColor" type:"string" enum:"DvbSubDestinationShadowColor"` + + // Specifies the opacity of the shadow. 255 is opaque; 0 is transparent. Leaving + // this parameter blank is equivalent to setting it to 0 (transparent). All + // burn-in and DVB-Sub font settings must match. 
+ ShadowOpacity *int64 `locationName:"shadowOpacity" type:"integer"` + + // Specifies the horizontal offset of the shadow relative to the captions in + // pixels. A value of -2 would result in a shadow offset 2 pixels to the left. + // All burn-in and DVB-Sub font settings must match. + ShadowXOffset *int64 `locationName:"shadowXOffset" type:"integer"` + + // Specifies the vertical offset of the shadow relative to the captions in pixels. + // A value of -2 would result in a shadow offset 2 pixels above the text. All + // burn-in and DVB-Sub font settings must match. + ShadowYOffset *int64 `locationName:"shadowYOffset" type:"integer"` + + // Controls whether a fixed grid size will be used to generate the output subtitles + // bitmap. Only applicable for Teletext inputs and DVB-Sub/Burn-in outputs. + TeletextGridControl *string `locationName:"teletextGridControl" type:"string" enum:"DvbSubDestinationTeletextGridControl"` + + // Specifies the horizontal position of the caption relative to the left side + // of the output in pixels. A value of 10 would result in the captions starting + // 10 pixels from the left of the output. If no explicit xPosition is provided, + // the horizontal caption position will be determined by the alignment parameter. + // This option is not valid for source captions that are STL, 608/embedded or + // teletext. These source settings are already pre-defined by the caption stream. + // All burn-in and DVB-Sub font settings must match. + XPosition *int64 `locationName:"xPosition" type:"integer"` + + // Specifies the vertical position of the caption relative to the top of the + // output in pixels. A value of 10 would result in the captions starting 10 + // pixels from the top of the output. If no explicit yPosition is provided, + // the caption will be positioned towards the bottom of the output. This option + // is not valid for source captions that are STL, 608/embedded or teletext. + // These source settings are already pre-defined by the caption stream. All + // burn-in and DVB-Sub font settings must match. + YPosition *int64 `locationName:"yPosition" type:"integer"` } // String returns the string representation -func (s CaptionSelectorSettings) String() string { +func (s DvbSubDestinationSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CaptionSelectorSettings) GoString() string { +func (s DvbSubDestinationSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CaptionSelectorSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CaptionSelectorSettings"} - if s.DvbSubSourceSettings != nil { - if err := s.DvbSubSourceSettings.Validate(); err != nil { - invalidParams.AddNested("DvbSubSourceSettings", err.(request.ErrInvalidParams)) - } - } - if s.EmbeddedSourceSettings != nil { - if err := s.EmbeddedSourceSettings.Validate(); err != nil { - invalidParams.AddNested("EmbeddedSourceSettings", err.(request.ErrInvalidParams)) - } - } - if s.Scte20SourceSettings != nil { - if err := s.Scte20SourceSettings.Validate(); err != nil { - invalidParams.AddNested("Scte20SourceSettings", err.(request.ErrInvalidParams)) - } +func (s *DvbSubDestinationSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbSubDestinationSettings"} + if s.FontResolution != nil && *s.FontResolution < 96 { + invalidParams.Add(request.NewErrParamMinValue("FontResolution", 96)) } - if s.Scte27SourceSettings != nil { - if err := s.Scte27SourceSettings.Validate(); err != nil { - invalidParams.AddNested("Scte27SourceSettings", err.(request.ErrInvalidParams)) + if s.Font != nil { + if err := s.Font.Validate(); err != nil { + invalidParams.AddNested("Font", err.(request.ErrInvalidParams)) } } @@ -3377,328 +6747,276 @@ func (s *CaptionSelectorSettings) Validate() error { return nil } -// SetAribSourceSettings sets the AribSourceSettings field's value. -func (s *CaptionSelectorSettings) SetAribSourceSettings(v *AribSourceSettings) *CaptionSelectorSettings { - s.AribSourceSettings = v +// SetAlignment sets the Alignment field's value. +func (s *DvbSubDestinationSettings) SetAlignment(v string) *DvbSubDestinationSettings { + s.Alignment = &v return s } -// SetDvbSubSourceSettings sets the DvbSubSourceSettings field's value. -func (s *CaptionSelectorSettings) SetDvbSubSourceSettings(v *DvbSubSourceSettings) *CaptionSelectorSettings { - s.DvbSubSourceSettings = v +// SetBackgroundColor sets the BackgroundColor field's value. +func (s *DvbSubDestinationSettings) SetBackgroundColor(v string) *DvbSubDestinationSettings { + s.BackgroundColor = &v return s } -// SetEmbeddedSourceSettings sets the EmbeddedSourceSettings field's value. -func (s *CaptionSelectorSettings) SetEmbeddedSourceSettings(v *EmbeddedSourceSettings) *CaptionSelectorSettings { - s.EmbeddedSourceSettings = v +// SetBackgroundOpacity sets the BackgroundOpacity field's value. +func (s *DvbSubDestinationSettings) SetBackgroundOpacity(v int64) *DvbSubDestinationSettings { + s.BackgroundOpacity = &v return s } -// SetScte20SourceSettings sets the Scte20SourceSettings field's value. -func (s *CaptionSelectorSettings) SetScte20SourceSettings(v *Scte20SourceSettings) *CaptionSelectorSettings { - s.Scte20SourceSettings = v +// SetFont sets the Font field's value. +func (s *DvbSubDestinationSettings) SetFont(v *InputLocation) *DvbSubDestinationSettings { + s.Font = v return s } -// SetScte27SourceSettings sets the Scte27SourceSettings field's value. -func (s *CaptionSelectorSettings) SetScte27SourceSettings(v *Scte27SourceSettings) *CaptionSelectorSettings { - s.Scte27SourceSettings = v +// SetFontColor sets the FontColor field's value. +func (s *DvbSubDestinationSettings) SetFontColor(v string) *DvbSubDestinationSettings { + s.FontColor = &v return s } -// SetTeletextSourceSettings sets the TeletextSourceSettings field's value. 
-func (s *CaptionSelectorSettings) SetTeletextSourceSettings(v *TeletextSourceSettings) *CaptionSelectorSettings { - s.TeletextSourceSettings = v +// SetFontOpacity sets the FontOpacity field's value. +func (s *DvbSubDestinationSettings) SetFontOpacity(v int64) *DvbSubDestinationSettings { + s.FontOpacity = &v return s } -type Channel struct { - _ struct{} `type:"structure"` - - // The unique arn of the channel. - Arn *string `locationName:"arn" type:"string"` - - // A list of destinations of the channel. For UDP outputs, there is onedestination - // per output. For other types (HLS, for example), there isone destination per - // packager. - Destinations []*OutputDestination `locationName:"destinations" type:"list"` - - // The endpoints where outgoing connections initiate from - EgressEndpoints []*ChannelEgressEndpoint `locationName:"egressEndpoints" type:"list"` - - EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` - - // The unique id of the channel. - Id *string `locationName:"id" type:"string"` - - // List of input attachments for channel. - InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` - - InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` - - // The name of the channel. (user-mutable) - Name *string `locationName:"name" type:"string"` - - // The number of currently healthy pipelines. - PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` - - // The Amazon Resource Name (ARN) of the role assumed when running the Channel. - RoleArn *string `locationName:"roleArn" type:"string"` - - State *string `locationName:"state" type:"string" enum:"ChannelState"` -} - -// String returns the string representation -func (s Channel) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s Channel) GoString() string { - return s.String() -} - -// SetArn sets the Arn field's value. -func (s *Channel) SetArn(v string) *Channel { - s.Arn = &v +// SetFontResolution sets the FontResolution field's value. +func (s *DvbSubDestinationSettings) SetFontResolution(v int64) *DvbSubDestinationSettings { + s.FontResolution = &v return s } -// SetDestinations sets the Destinations field's value. -func (s *Channel) SetDestinations(v []*OutputDestination) *Channel { - s.Destinations = v +// SetFontSize sets the FontSize field's value. +func (s *DvbSubDestinationSettings) SetFontSize(v string) *DvbSubDestinationSettings { + s.FontSize = &v return s } -// SetEgressEndpoints sets the EgressEndpoints field's value. -func (s *Channel) SetEgressEndpoints(v []*ChannelEgressEndpoint) *Channel { - s.EgressEndpoints = v +// SetOutlineColor sets the OutlineColor field's value. +func (s *DvbSubDestinationSettings) SetOutlineColor(v string) *DvbSubDestinationSettings { + s.OutlineColor = &v return s } -// SetEncoderSettings sets the EncoderSettings field's value. -func (s *Channel) SetEncoderSettings(v *EncoderSettings) *Channel { - s.EncoderSettings = v +// SetOutlineSize sets the OutlineSize field's value. +func (s *DvbSubDestinationSettings) SetOutlineSize(v int64) *DvbSubDestinationSettings { + s.OutlineSize = &v return s } -// SetId sets the Id field's value. -func (s *Channel) SetId(v string) *Channel { - s.Id = &v +// SetShadowColor sets the ShadowColor field's value. 
+func (s *DvbSubDestinationSettings) SetShadowColor(v string) *DvbSubDestinationSettings { + s.ShadowColor = &v return s } -// SetInputAttachments sets the InputAttachments field's value. -func (s *Channel) SetInputAttachments(v []*InputAttachment) *Channel { - s.InputAttachments = v +// SetShadowOpacity sets the ShadowOpacity field's value. +func (s *DvbSubDestinationSettings) SetShadowOpacity(v int64) *DvbSubDestinationSettings { + s.ShadowOpacity = &v return s } -// SetInputSpecification sets the InputSpecification field's value. -func (s *Channel) SetInputSpecification(v *InputSpecification) *Channel { - s.InputSpecification = v +// SetShadowXOffset sets the ShadowXOffset field's value. +func (s *DvbSubDestinationSettings) SetShadowXOffset(v int64) *DvbSubDestinationSettings { + s.ShadowXOffset = &v return s } -// SetName sets the Name field's value. -func (s *Channel) SetName(v string) *Channel { - s.Name = &v +// SetShadowYOffset sets the ShadowYOffset field's value. +func (s *DvbSubDestinationSettings) SetShadowYOffset(v int64) *DvbSubDestinationSettings { + s.ShadowYOffset = &v return s } -// SetPipelinesRunningCount sets the PipelinesRunningCount field's value. -func (s *Channel) SetPipelinesRunningCount(v int64) *Channel { - s.PipelinesRunningCount = &v +// SetTeletextGridControl sets the TeletextGridControl field's value. +func (s *DvbSubDestinationSettings) SetTeletextGridControl(v string) *DvbSubDestinationSettings { + s.TeletextGridControl = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *Channel) SetRoleArn(v string) *Channel { - s.RoleArn = &v +// SetXPosition sets the XPosition field's value. +func (s *DvbSubDestinationSettings) SetXPosition(v int64) *DvbSubDestinationSettings { + s.XPosition = &v return s } -// SetState sets the State field's value. -func (s *Channel) SetState(v string) *Channel { - s.State = &v +// SetYPosition sets the YPosition field's value. +func (s *DvbSubDestinationSettings) SetYPosition(v int64) *DvbSubDestinationSettings { + s.YPosition = &v return s } -type ChannelEgressEndpoint struct { +type DvbSubSourceSettings struct { _ struct{} `type:"structure"` - // Public IP of where a channel's output comes from - SourceIp *string `locationName:"sourceIp" type:"string"` + // When using DVB-Sub with Burn-In or SMPTE-TT, use this PID for the source + // content. Unused for DVB-Sub passthrough. All DVB-Sub content is passed through, + // regardless of selectors. + Pid *int64 `locationName:"pid" min:"1" type:"integer"` } // String returns the string representation -func (s ChannelEgressEndpoint) String() string { +func (s DvbSubSourceSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ChannelEgressEndpoint) GoString() string { +func (s DvbSubSourceSettings) GoString() string { return s.String() } -// SetSourceIp sets the SourceIp field's value. -func (s *ChannelEgressEndpoint) SetSourceIp(v string) *ChannelEgressEndpoint { - s.SourceIp = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *DvbSubSourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbSubSourceSettings"} + if s.Pid != nil && *s.Pid < 1 { + invalidParams.Add(request.NewErrParamMinValue("Pid", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPid sets the Pid field's value. 
+func (s *DvbSubSourceSettings) SetPid(v int64) *DvbSubSourceSettings { + s.Pid = &v return s } -type ChannelSummary struct { +// DVB Time and Date Table (SDT) +type DvbTdtSettings struct { _ struct{} `type:"structure"` - // The unique arn of the channel. - Arn *string `locationName:"arn" type:"string"` - - // A list of destinations of the channel. For UDP outputs, there is onedestination - // per output. For other types (HLS, for example), there isone destination per - // packager. - Destinations []*OutputDestination `locationName:"destinations" type:"list"` - - // The endpoints where outgoing connections initiate from - EgressEndpoints []*ChannelEgressEndpoint `locationName:"egressEndpoints" type:"list"` - - // The unique id of the channel. - Id *string `locationName:"id" type:"string"` - - // List of input attachments for channel. - InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` - - InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` - - // The name of the channel. (user-mutable) - Name *string `locationName:"name" type:"string"` - - // The number of currently healthy pipelines. - PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` - - // The Amazon Resource Name (ARN) of the role assumed when running the Channel. - RoleArn *string `locationName:"roleArn" type:"string"` - - State *string `locationName:"state" type:"string" enum:"ChannelState"` + // The number of milliseconds between instances of this table in the output + // transport stream. + RepInterval *int64 `locationName:"repInterval" min:"1000" type:"integer"` } // String returns the string representation -func (s ChannelSummary) String() string { +func (s DvbTdtSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ChannelSummary) GoString() string { +func (s DvbTdtSettings) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *ChannelSummary) SetArn(v string) *ChannelSummary { - s.Arn = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *DvbTdtSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DvbTdtSettings"} + if s.RepInterval != nil && *s.RepInterval < 1000 { + invalidParams.Add(request.NewErrParamMinValue("RepInterval", 1000)) + } -// SetDestinations sets the Destinations field's value. -func (s *ChannelSummary) SetDestinations(v []*OutputDestination) *ChannelSummary { - s.Destinations = v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetEgressEndpoints sets the EgressEndpoints field's value. -func (s *ChannelSummary) SetEgressEndpoints(v []*ChannelEgressEndpoint) *ChannelSummary { - s.EgressEndpoints = v +// SetRepInterval sets the RepInterval field's value. +func (s *DvbTdtSettings) SetRepInterval(v int64) *DvbTdtSettings { + s.RepInterval = &v return s } -// SetId sets the Id field's value. -func (s *ChannelSummary) SetId(v string) *ChannelSummary { - s.Id = &v - return s -} +type Eac3Settings struct { + _ struct{} `type:"structure"` -// SetInputAttachments sets the InputAttachments field's value. -func (s *ChannelSummary) SetInputAttachments(v []*InputAttachment) *ChannelSummary { - s.InputAttachments = v - return s -} + // When set to attenuate3Db, applies a 3 dB attenuation to the surround channels. + // Only used for 3/2 coding mode. 
+ AttenuationControl *string `locationName:"attenuationControl" type:"string" enum:"Eac3AttenuationControl"` -// SetInputSpecification sets the InputSpecification field's value. -func (s *ChannelSummary) SetInputSpecification(v *InputSpecification) *ChannelSummary { - s.InputSpecification = v - return s -} + // Average bitrate in bits/second. Valid bitrates depend on the coding mode. + Bitrate *float64 `locationName:"bitrate" type:"double"` -// SetName sets the Name field's value. -func (s *ChannelSummary) SetName(v string) *ChannelSummary { - s.Name = &v - return s -} + // Specifies the bitstream mode (bsmod) for the emitted E-AC-3 stream. See ATSC + // A/52-2012 (Annex E) for background on these values. + BitstreamMode *string `locationName:"bitstreamMode" type:"string" enum:"Eac3BitstreamMode"` -// SetPipelinesRunningCount sets the PipelinesRunningCount field's value. -func (s *ChannelSummary) SetPipelinesRunningCount(v int64) *ChannelSummary { - s.PipelinesRunningCount = &v - return s -} + // Dolby Digital Plus coding mode. Determines number of channels. + CodingMode *string `locationName:"codingMode" type:"string" enum:"Eac3CodingMode"` -// SetRoleArn sets the RoleArn field's value. -func (s *ChannelSummary) SetRoleArn(v string) *ChannelSummary { - s.RoleArn = &v - return s -} + // When set to enabled, activates a DC highpass filter for all input channels. + DcFilter *string `locationName:"dcFilter" type:"string" enum:"Eac3DcFilter"` -// SetState sets the State field's value. -func (s *ChannelSummary) SetState(v string) *ChannelSummary { - s.State = &v - return s -} + // Sets the dialnorm for the output. If blank and input audio is Dolby Digital + // Plus, dialnorm will be passed through. + Dialnorm *int64 `locationName:"dialnorm" min:"1" type:"integer"` -type CreateChannelInput struct { - _ struct{} `type:"structure"` + // Sets the Dolby dynamic range compression profile. + DrcLine *string `locationName:"drcLine" type:"string" enum:"Eac3DrcLine"` - Destinations []*OutputDestination `locationName:"destinations" type:"list"` + // Sets the profile for heavy Dolby dynamic range compression, ensures that + // the instantaneous signal peaks do not exceed specified levels. + DrcRf *string `locationName:"drcRf" type:"string" enum:"Eac3DrcRf"` - EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` + // When encoding 3/2 audio, setting to lfe enables the LFE channel + LfeControl *string `locationName:"lfeControl" type:"string" enum:"Eac3LfeControl"` - InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` + // When set to enabled, applies a 120Hz lowpass filter to the LFE channel prior + // to encoding. Only valid with codingMode32 coding mode. + LfeFilter *string `locationName:"lfeFilter" type:"string" enum:"Eac3LfeFilter"` - InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + // Left only/Right only center mix level. Only used for 3/2 coding mode. + LoRoCenterMixLevel *float64 `locationName:"loRoCenterMixLevel" type:"double"` - Name *string `locationName:"name" type:"string"` + // Left only/Right only surround mix level. Only used for 3/2 coding mode. + LoRoSurroundMixLevel *float64 `locationName:"loRoSurroundMixLevel" type:"double"` - RequestId *string `locationName:"requestId" type:"string" idempotencyToken:"true"` + // Left total/Right total center mix level. Only used for 3/2 coding mode. 
+ LtRtCenterMixLevel *float64 `locationName:"ltRtCenterMixLevel" type:"double"` - Reserved *string `locationName:"reserved" deprecated:"true" type:"string"` + // Left total/Right total surround mix level. Only used for 3/2 coding mode. + LtRtSurroundMixLevel *float64 `locationName:"ltRtSurroundMixLevel" type:"double"` - RoleArn *string `locationName:"roleArn" type:"string"` + // When set to followInput, encoder metadata will be sourced from the DD, DD+, + // or DolbyE decoder that supplied this audio data. If audio was not supplied + // from one of these streams, then the static metadata settings will be used. + MetadataControl *string `locationName:"metadataControl" type:"string" enum:"Eac3MetadataControl"` + + // When set to whenPossible, input DD+ audio will be passed through if it is + // present on the input. This detection is dynamic over the life of the transcode. + // Inputs that alternate between DD+ and non-DD+ content will have a consistent + // DD+ output as the system alternates between passthrough and encoding. + PassthroughControl *string `locationName:"passthroughControl" type:"string" enum:"Eac3PassthroughControl"` + + // When set to shift90Degrees, applies a 90-degree phase shift to the surround + // channels. Only used for 3/2 coding mode. + PhaseControl *string `locationName:"phaseControl" type:"string" enum:"Eac3PhaseControl"` + + // Stereo downmix preference. Only used for 3/2 coding mode. + StereoDownmix *string `locationName:"stereoDownmix" type:"string" enum:"Eac3StereoDownmix"` + + // When encoding 3/2 audio, sets whether an extra center back surround channel + // is matrix encoded into the left and right surround channels. + SurroundExMode *string `locationName:"surroundExMode" type:"string" enum:"Eac3SurroundExMode"` + + // When encoding 2/0 audio, sets whether Dolby Surround is matrix encoded into + // the two channels. + SurroundMode *string `locationName:"surroundMode" type:"string" enum:"Eac3SurroundMode"` } // String returns the string representation -func (s CreateChannelInput) String() string { +func (s Eac3Settings) String() string { return awsutil.Prettify(s) } -// GoString returns the string representation -func (s CreateChannelInput) GoString() string { - return s.String() -} - -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateChannelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateChannelInput"} - if s.EncoderSettings != nil { - if err := s.EncoderSettings.Validate(); err != nil { - invalidParams.AddNested("EncoderSettings", err.(request.ErrInvalidParams)) - } - } - if s.InputAttachments != nil { - for i, v := range s.InputAttachments { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InputAttachments", i), err.(request.ErrInvalidParams)) - } - } +// GoString returns the string representation +func (s Eac3Settings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Eac3Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Eac3Settings"} + if s.Dialnorm != nil && *s.Dialnorm < 1 { + invalidParams.Add(request.NewErrParamMinValue("Dialnorm", 1)) } if invalidParams.Len() > 0 { @@ -3707,227 +7025,346 @@ func (s *CreateChannelInput) Validate() error { return nil } -// SetDestinations sets the Destinations field's value. 
-func (s *CreateChannelInput) SetDestinations(v []*OutputDestination) *CreateChannelInput { - s.Destinations = v +// SetAttenuationControl sets the AttenuationControl field's value. +func (s *Eac3Settings) SetAttenuationControl(v string) *Eac3Settings { + s.AttenuationControl = &v return s } -// SetEncoderSettings sets the EncoderSettings field's value. -func (s *CreateChannelInput) SetEncoderSettings(v *EncoderSettings) *CreateChannelInput { - s.EncoderSettings = v +// SetBitrate sets the Bitrate field's value. +func (s *Eac3Settings) SetBitrate(v float64) *Eac3Settings { + s.Bitrate = &v return s } -// SetInputAttachments sets the InputAttachments field's value. -func (s *CreateChannelInput) SetInputAttachments(v []*InputAttachment) *CreateChannelInput { - s.InputAttachments = v +// SetBitstreamMode sets the BitstreamMode field's value. +func (s *Eac3Settings) SetBitstreamMode(v string) *Eac3Settings { + s.BitstreamMode = &v return s } -// SetInputSpecification sets the InputSpecification field's value. -func (s *CreateChannelInput) SetInputSpecification(v *InputSpecification) *CreateChannelInput { - s.InputSpecification = v +// SetCodingMode sets the CodingMode field's value. +func (s *Eac3Settings) SetCodingMode(v string) *Eac3Settings { + s.CodingMode = &v return s } -// SetName sets the Name field's value. -func (s *CreateChannelInput) SetName(v string) *CreateChannelInput { - s.Name = &v +// SetDcFilter sets the DcFilter field's value. +func (s *Eac3Settings) SetDcFilter(v string) *Eac3Settings { + s.DcFilter = &v return s } -// SetRequestId sets the RequestId field's value. -func (s *CreateChannelInput) SetRequestId(v string) *CreateChannelInput { - s.RequestId = &v +// SetDialnorm sets the Dialnorm field's value. +func (s *Eac3Settings) SetDialnorm(v int64) *Eac3Settings { + s.Dialnorm = &v return s } -// SetReserved sets the Reserved field's value. -func (s *CreateChannelInput) SetReserved(v string) *CreateChannelInput { - s.Reserved = &v +// SetDrcLine sets the DrcLine field's value. +func (s *Eac3Settings) SetDrcLine(v string) *Eac3Settings { + s.DrcLine = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *CreateChannelInput) SetRoleArn(v string) *CreateChannelInput { - s.RoleArn = &v +// SetDrcRf sets the DrcRf field's value. +func (s *Eac3Settings) SetDrcRf(v string) *Eac3Settings { + s.DrcRf = &v return s } -type CreateChannelOutput struct { - _ struct{} `type:"structure"` - - Channel *Channel `locationName:"channel" type:"structure"` -} - -// String returns the string representation -func (s CreateChannelOutput) String() string { - return awsutil.Prettify(s) +// SetLfeControl sets the LfeControl field's value. +func (s *Eac3Settings) SetLfeControl(v string) *Eac3Settings { + s.LfeControl = &v + return s } -// GoString returns the string representation -func (s CreateChannelOutput) GoString() string { - return s.String() +// SetLfeFilter sets the LfeFilter field's value. +func (s *Eac3Settings) SetLfeFilter(v string) *Eac3Settings { + s.LfeFilter = &v + return s } -// SetChannel sets the Channel field's value. -func (s *CreateChannelOutput) SetChannel(v *Channel) *CreateChannelOutput { - s.Channel = v +// SetLoRoCenterMixLevel sets the LoRoCenterMixLevel field's value. 
+func (s *Eac3Settings) SetLoRoCenterMixLevel(v float64) *Eac3Settings { + s.LoRoCenterMixLevel = &v return s } -type CreateInputInput struct { - _ struct{} `type:"structure"` - - Destinations []*InputDestinationRequest `locationName:"destinations" type:"list"` - - InputSecurityGroups []*string `locationName:"inputSecurityGroups" type:"list"` - - Name *string `locationName:"name" type:"string"` - - RequestId *string `locationName:"requestId" type:"string" idempotencyToken:"true"` - - Sources []*InputSourceRequest `locationName:"sources" type:"list"` - - Type *string `locationName:"type" type:"string" enum:"InputType"` +// SetLoRoSurroundMixLevel sets the LoRoSurroundMixLevel field's value. +func (s *Eac3Settings) SetLoRoSurroundMixLevel(v float64) *Eac3Settings { + s.LoRoSurroundMixLevel = &v + return s } -// String returns the string representation -func (s CreateInputInput) String() string { - return awsutil.Prettify(s) +// SetLtRtCenterMixLevel sets the LtRtCenterMixLevel field's value. +func (s *Eac3Settings) SetLtRtCenterMixLevel(v float64) *Eac3Settings { + s.LtRtCenterMixLevel = &v + return s } -// GoString returns the string representation -func (s CreateInputInput) GoString() string { - return s.String() +// SetLtRtSurroundMixLevel sets the LtRtSurroundMixLevel field's value. +func (s *Eac3Settings) SetLtRtSurroundMixLevel(v float64) *Eac3Settings { + s.LtRtSurroundMixLevel = &v + return s } -// SetDestinations sets the Destinations field's value. -func (s *CreateInputInput) SetDestinations(v []*InputDestinationRequest) *CreateInputInput { - s.Destinations = v +// SetMetadataControl sets the MetadataControl field's value. +func (s *Eac3Settings) SetMetadataControl(v string) *Eac3Settings { + s.MetadataControl = &v return s } -// SetInputSecurityGroups sets the InputSecurityGroups field's value. -func (s *CreateInputInput) SetInputSecurityGroups(v []*string) *CreateInputInput { - s.InputSecurityGroups = v +// SetPassthroughControl sets the PassthroughControl field's value. +func (s *Eac3Settings) SetPassthroughControl(v string) *Eac3Settings { + s.PassthroughControl = &v return s } -// SetName sets the Name field's value. -func (s *CreateInputInput) SetName(v string) *CreateInputInput { - s.Name = &v +// SetPhaseControl sets the PhaseControl field's value. +func (s *Eac3Settings) SetPhaseControl(v string) *Eac3Settings { + s.PhaseControl = &v return s } -// SetRequestId sets the RequestId field's value. -func (s *CreateInputInput) SetRequestId(v string) *CreateInputInput { - s.RequestId = &v +// SetStereoDownmix sets the StereoDownmix field's value. +func (s *Eac3Settings) SetStereoDownmix(v string) *Eac3Settings { + s.StereoDownmix = &v return s } -// SetSources sets the Sources field's value. -func (s *CreateInputInput) SetSources(v []*InputSourceRequest) *CreateInputInput { - s.Sources = v +// SetSurroundExMode sets the SurroundExMode field's value. +func (s *Eac3Settings) SetSurroundExMode(v string) *Eac3Settings { + s.SurroundExMode = &v return s } -// SetType sets the Type field's value. -func (s *CreateInputInput) SetType(v string) *CreateInputInput { - s.Type = &v +// SetSurroundMode sets the SurroundMode field's value. 
+func (s *Eac3Settings) SetSurroundMode(v string) *Eac3Settings { + s.SurroundMode = &v return s } -type CreateInputOutput struct { +type EmbeddedDestinationSettings struct { _ struct{} `type:"structure"` - - Input *Input `locationName:"input" type:"structure"` } // String returns the string representation -func (s CreateInputOutput) String() string { +func (s EmbeddedDestinationSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateInputOutput) GoString() string { +func (s EmbeddedDestinationSettings) GoString() string { return s.String() } -// SetInput sets the Input field's value. -func (s *CreateInputOutput) SetInput(v *Input) *CreateInputOutput { - s.Input = v - return s +type EmbeddedPlusScte20DestinationSettings struct { + _ struct{} `type:"structure"` } -type CreateInputSecurityGroupInput struct { +// String returns the string representation +func (s EmbeddedPlusScte20DestinationSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EmbeddedPlusScte20DestinationSettings) GoString() string { + return s.String() +} + +type EmbeddedSourceSettings struct { _ struct{} `type:"structure"` - WhitelistRules []*InputWhitelistRuleCidr `locationName:"whitelistRules" type:"list"` + // If upconvert, 608 data is both passed through via the "608 compatibility + // bytes" fields of the 708 wrapper as well as translated into 708. 708 data + // present in the source content will be discarded. + Convert608To708 *string `locationName:"convert608To708" type:"string" enum:"EmbeddedConvert608To708"` + + // Set to "auto" to handle streams with intermittent and/or non-aligned SCTE-20 + // and Embedded captions. + Scte20Detection *string `locationName:"scte20Detection" type:"string" enum:"EmbeddedScte20Detection"` + + // Specifies the 608/708 channel number within the video track from which to + // extract captions. Unused for passthrough. + Source608ChannelNumber *int64 `locationName:"source608ChannelNumber" min:"1" type:"integer"` + + // This field is unused and deprecated. + Source608TrackNumber *int64 `locationName:"source608TrackNumber" min:"1" type:"integer"` } // String returns the string representation -func (s CreateInputSecurityGroupInput) String() string { +func (s EmbeddedSourceSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateInputSecurityGroupInput) GoString() string { +func (s EmbeddedSourceSettings) GoString() string { return s.String() } -// SetWhitelistRules sets the WhitelistRules field's value. -func (s *CreateInputSecurityGroupInput) SetWhitelistRules(v []*InputWhitelistRuleCidr) *CreateInputSecurityGroupInput { - s.WhitelistRules = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *EmbeddedSourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EmbeddedSourceSettings"} + if s.Source608ChannelNumber != nil && *s.Source608ChannelNumber < 1 { + invalidParams.Add(request.NewErrParamMinValue("Source608ChannelNumber", 1)) + } + if s.Source608TrackNumber != nil && *s.Source608TrackNumber < 1 { + invalidParams.Add(request.NewErrParamMinValue("Source608TrackNumber", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConvert608To708 sets the Convert608To708 field's value. 
+func (s *EmbeddedSourceSettings) SetConvert608To708(v string) *EmbeddedSourceSettings { + s.Convert608To708 = &v return s } -type CreateInputSecurityGroupOutput struct { - _ struct{} `type:"structure"` +// SetScte20Detection sets the Scte20Detection field's value. +func (s *EmbeddedSourceSettings) SetScte20Detection(v string) *EmbeddedSourceSettings { + s.Scte20Detection = &v + return s +} - // An Input Security Group - SecurityGroup *InputSecurityGroup `locationName:"securityGroup" type:"structure"` +// SetSource608ChannelNumber sets the Source608ChannelNumber field's value. +func (s *EmbeddedSourceSettings) SetSource608ChannelNumber(v int64) *EmbeddedSourceSettings { + s.Source608ChannelNumber = &v + return s } -// String returns the string representation -func (s CreateInputSecurityGroupOutput) String() string { - return awsutil.Prettify(s) +// SetSource608TrackNumber sets the Source608TrackNumber field's value. +func (s *EmbeddedSourceSettings) SetSource608TrackNumber(v int64) *EmbeddedSourceSettings { + s.Source608TrackNumber = &v + return s } -// GoString returns the string representation -func (s CreateInputSecurityGroupOutput) GoString() string { - return s.String() -} +type EncoderSettings struct { + _ struct{} `type:"structure"` + + // AudioDescriptions is a required field + AudioDescriptions []*AudioDescription `locationName:"audioDescriptions" type:"list" required:"true"` + + // Settings for ad avail blanking. + AvailBlanking *AvailBlanking `locationName:"availBlanking" type:"structure"` + + // Event-wide configuration settings for ad avail insertion. + AvailConfiguration *AvailConfiguration `locationName:"availConfiguration" type:"structure"` + + // Settings for blackout slate. + BlackoutSlate *BlackoutSlate `locationName:"blackoutSlate" type:"structure"` + + // Settings for caption decriptions + CaptionDescriptions []*CaptionDescription `locationName:"captionDescriptions" type:"list"` + + // Configuration settings that apply to the event as a whole. + GlobalConfiguration *GlobalConfiguration `locationName:"globalConfiguration" type:"structure"` -// SetSecurityGroup sets the SecurityGroup field's value. -func (s *CreateInputSecurityGroupOutput) SetSecurityGroup(v *InputSecurityGroup) *CreateInputSecurityGroupOutput { - s.SecurityGroup = v - return s -} + // OutputGroups is a required field + OutputGroups []*OutputGroup `locationName:"outputGroups" type:"list" required:"true"` -type DeleteChannelInput struct { - _ struct{} `type:"structure"` + // Contains settings used to acquire and adjust timecode information from inputs. + // + // TimecodeConfig is a required field + TimecodeConfig *TimecodeConfig `locationName:"timecodeConfig" type:"structure" required:"true"` - // ChannelId is a required field - ChannelId *string `location:"uri" locationName:"channelId" type:"string" required:"true"` + // VideoDescriptions is a required field + VideoDescriptions []*VideoDescription `locationName:"videoDescriptions" type:"list" required:"true"` } // String returns the string representation -func (s DeleteChannelInput) String() string { +func (s EncoderSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteChannelInput) GoString() string { +func (s EncoderSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DeleteChannelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteChannelInput"} - if s.ChannelId == nil { - invalidParams.Add(request.NewErrParamRequired("ChannelId")) +func (s *EncoderSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "EncoderSettings"} + if s.AudioDescriptions == nil { + invalidParams.Add(request.NewErrParamRequired("AudioDescriptions")) + } + if s.OutputGroups == nil { + invalidParams.Add(request.NewErrParamRequired("OutputGroups")) + } + if s.TimecodeConfig == nil { + invalidParams.Add(request.NewErrParamRequired("TimecodeConfig")) + } + if s.VideoDescriptions == nil { + invalidParams.Add(request.NewErrParamRequired("VideoDescriptions")) + } + if s.AudioDescriptions != nil { + for i, v := range s.AudioDescriptions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AudioDescriptions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.AvailBlanking != nil { + if err := s.AvailBlanking.Validate(); err != nil { + invalidParams.AddNested("AvailBlanking", err.(request.ErrInvalidParams)) + } + } + if s.AvailConfiguration != nil { + if err := s.AvailConfiguration.Validate(); err != nil { + invalidParams.AddNested("AvailConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.BlackoutSlate != nil { + if err := s.BlackoutSlate.Validate(); err != nil { + invalidParams.AddNested("BlackoutSlate", err.(request.ErrInvalidParams)) + } + } + if s.CaptionDescriptions != nil { + for i, v := range s.CaptionDescriptions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionDescriptions", i), err.(request.ErrInvalidParams)) + } + } + } + if s.GlobalConfiguration != nil { + if err := s.GlobalConfiguration.Validate(); err != nil { + invalidParams.AddNested("GlobalConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.OutputGroups != nil { + for i, v := range s.OutputGroups { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OutputGroups", i), err.(request.ErrInvalidParams)) + } + } + } + if s.TimecodeConfig != nil { + if err := s.TimecodeConfig.Validate(); err != nil { + invalidParams.AddNested("TimecodeConfig", err.(request.ErrInvalidParams)) + } + } + if s.VideoDescriptions != nil { + for i, v := range s.VideoDescriptions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "VideoDescriptions", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -3936,136 +7373,153 @@ func (s *DeleteChannelInput) Validate() error { return nil } -// SetChannelId sets the ChannelId field's value. -func (s *DeleteChannelInput) SetChannelId(v string) *DeleteChannelInput { - s.ChannelId = &v +// SetAudioDescriptions sets the AudioDescriptions field's value. 
+func (s *EncoderSettings) SetAudioDescriptions(v []*AudioDescription) *EncoderSettings { + s.AudioDescriptions = v return s } -type DeleteChannelOutput struct { - _ struct{} `type:"structure"` - - Arn *string `locationName:"arn" type:"string"` - - Destinations []*OutputDestination `locationName:"destinations" type:"list"` - - EgressEndpoints []*ChannelEgressEndpoint `locationName:"egressEndpoints" type:"list"` - - EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` - - Id *string `locationName:"id" type:"string"` - - InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` - - InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` - - Name *string `locationName:"name" type:"string"` - - PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` - - RoleArn *string `locationName:"roleArn" type:"string"` - - State *string `locationName:"state" type:"string" enum:"ChannelState"` +// SetAvailBlanking sets the AvailBlanking field's value. +func (s *EncoderSettings) SetAvailBlanking(v *AvailBlanking) *EncoderSettings { + s.AvailBlanking = v + return s } -// String returns the string representation -func (s DeleteChannelOutput) String() string { - return awsutil.Prettify(s) +// SetAvailConfiguration sets the AvailConfiguration field's value. +func (s *EncoderSettings) SetAvailConfiguration(v *AvailConfiguration) *EncoderSettings { + s.AvailConfiguration = v + return s } -// GoString returns the string representation -func (s DeleteChannelOutput) GoString() string { - return s.String() +// SetBlackoutSlate sets the BlackoutSlate field's value. +func (s *EncoderSettings) SetBlackoutSlate(v *BlackoutSlate) *EncoderSettings { + s.BlackoutSlate = v + return s } -// SetArn sets the Arn field's value. -func (s *DeleteChannelOutput) SetArn(v string) *DeleteChannelOutput { - s.Arn = &v +// SetCaptionDescriptions sets the CaptionDescriptions field's value. +func (s *EncoderSettings) SetCaptionDescriptions(v []*CaptionDescription) *EncoderSettings { + s.CaptionDescriptions = v return s } -// SetDestinations sets the Destinations field's value. -func (s *DeleteChannelOutput) SetDestinations(v []*OutputDestination) *DeleteChannelOutput { - s.Destinations = v +// SetGlobalConfiguration sets the GlobalConfiguration field's value. +func (s *EncoderSettings) SetGlobalConfiguration(v *GlobalConfiguration) *EncoderSettings { + s.GlobalConfiguration = v return s } -// SetEgressEndpoints sets the EgressEndpoints field's value. -func (s *DeleteChannelOutput) SetEgressEndpoints(v []*ChannelEgressEndpoint) *DeleteChannelOutput { - s.EgressEndpoints = v +// SetOutputGroups sets the OutputGroups field's value. +func (s *EncoderSettings) SetOutputGroups(v []*OutputGroup) *EncoderSettings { + s.OutputGroups = v return s } -// SetEncoderSettings sets the EncoderSettings field's value. -func (s *DeleteChannelOutput) SetEncoderSettings(v *EncoderSettings) *DeleteChannelOutput { - s.EncoderSettings = v +// SetTimecodeConfig sets the TimecodeConfig field's value. +func (s *EncoderSettings) SetTimecodeConfig(v *TimecodeConfig) *EncoderSettings { + s.TimecodeConfig = v return s } -// SetId sets the Id field's value. -func (s *DeleteChannelOutput) SetId(v string) *DeleteChannelOutput { - s.Id = &v +// SetVideoDescriptions sets the VideoDescriptions field's value. 
+func (s *EncoderSettings) SetVideoDescriptions(v []*VideoDescription) *EncoderSettings { + s.VideoDescriptions = v return s } -// SetInputAttachments sets the InputAttachments field's value. -func (s *DeleteChannelOutput) SetInputAttachments(v []*InputAttachment) *DeleteChannelOutput { - s.InputAttachments = v - return s +type FecOutputSettings struct { + _ struct{} `type:"structure"` + + // Parameter D from SMPTE 2022-1. The height of the FEC protection matrix. The + // number of transport stream packets per column error correction packet. Must + // be between 4 and 20, inclusive. + ColumnDepth *int64 `locationName:"columnDepth" min:"4" type:"integer"` + + // Enables column only or column and row based FEC + IncludeFec *string `locationName:"includeFec" type:"string" enum:"FecOutputIncludeFec"` + + // Parameter L from SMPTE 2022-1. The width of the FEC protection matrix. Must + // be between 1 and 20, inclusive. If only Column FEC is used, then larger values + // increase robustness. If Row FEC is used, then this is the number of transport + // stream packets per row error correction packet, and the value must be between + // 4 and 20, inclusive, if includeFec is columnAndRow. If includeFec is column, + // this value must be 1 to 20, inclusive. + RowLength *int64 `locationName:"rowLength" min:"1" type:"integer"` } -// SetInputSpecification sets the InputSpecification field's value. -func (s *DeleteChannelOutput) SetInputSpecification(v *InputSpecification) *DeleteChannelOutput { - s.InputSpecification = v - return s +// String returns the string representation +func (s FecOutputSettings) String() string { + return awsutil.Prettify(s) } -// SetName sets the Name field's value. -func (s *DeleteChannelOutput) SetName(v string) *DeleteChannelOutput { - s.Name = &v - return s +// GoString returns the string representation +func (s FecOutputSettings) GoString() string { + return s.String() } -// SetPipelinesRunningCount sets the PipelinesRunningCount field's value. -func (s *DeleteChannelOutput) SetPipelinesRunningCount(v int64) *DeleteChannelOutput { - s.PipelinesRunningCount = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *FecOutputSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FecOutputSettings"} + if s.ColumnDepth != nil && *s.ColumnDepth < 4 { + invalidParams.Add(request.NewErrParamMinValue("ColumnDepth", 4)) + } + if s.RowLength != nil && *s.RowLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("RowLength", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetColumnDepth sets the ColumnDepth field's value. +func (s *FecOutputSettings) SetColumnDepth(v int64) *FecOutputSettings { + s.ColumnDepth = &v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *DeleteChannelOutput) SetRoleArn(v string) *DeleteChannelOutput { - s.RoleArn = &v +// SetIncludeFec sets the IncludeFec field's value. +func (s *FecOutputSettings) SetIncludeFec(v string) *FecOutputSettings { + s.IncludeFec = &v return s } -// SetState sets the State field's value. -func (s *DeleteChannelOutput) SetState(v string) *DeleteChannelOutput { - s.State = &v +// SetRowLength sets the RowLength field's value. +func (s *FecOutputSettings) SetRowLength(v int64) *FecOutputSettings { + s.RowLength = &v return s } -type DeleteInputInput struct { +// Start time for the action. 
+type FixedModeScheduleActionStartSettings struct { _ struct{} `type:"structure"` - // InputId is a required field - InputId *string `location:"uri" locationName:"inputId" type:"string" required:"true"` + // Start time for the action to start in the channel. (Not the time for the + // action to be added to the schedule: actions are always added to the schedule + // immediately.) UTC format: yyyy-mm-ddThh:mm:ss.nnnZ. All the letters are digits + // (for example, mm might be 01) except for the two constants "T" for time and + // "Z" for "UTC format". + // + // Time is a required field + Time *string `locationName:"time" type:"string" required:"true"` } // String returns the string representation -func (s DeleteInputInput) String() string { +func (s FixedModeScheduleActionStartSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteInputInput) GoString() string { +func (s FixedModeScheduleActionStartSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteInputInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteInputInput"} - if s.InputId == nil { - invalidParams.Add(request.NewErrParamRequired("InputId")) +func (s *FixedModeScheduleActionStartSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FixedModeScheduleActionStartSettings"} + if s.Time == nil { + invalidParams.Add(request.NewErrParamRequired("Time")) } if invalidParams.Len() > 0 { @@ -4074,48 +7528,46 @@ func (s *DeleteInputInput) Validate() error { return nil } -// SetInputId sets the InputId field's value. -func (s *DeleteInputInput) SetInputId(v string) *DeleteInputInput { - s.InputId = &v +// SetTime sets the Time field's value. +func (s *FixedModeScheduleActionStartSettings) SetTime(v string) *FixedModeScheduleActionStartSettings { + s.Time = &v return s } -type DeleteInputOutput struct { +// Settings to specify if an action follows another. +type FollowModeScheduleActionStartSettings struct { _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteInputOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DeleteInputOutput) GoString() string { - return s.String() -} -type DeleteInputSecurityGroupInput struct { - _ struct{} `type:"structure"` + // Identifies whether this action starts relative to the start or relative to + // the end of the reference action. + // + // FollowPoint is a required field + FollowPoint *string `locationName:"followPoint" type:"string" required:"true" enum:"FollowPoint"` - // InputSecurityGroupId is a required field - InputSecurityGroupId *string `location:"uri" locationName:"inputSecurityGroupId" type:"string" required:"true"` + // The action name of another action that this one refers to. + // + // ReferenceActionName is a required field + ReferenceActionName *string `locationName:"referenceActionName" type:"string" required:"true"` } // String returns the string representation -func (s DeleteInputSecurityGroupInput) String() string { +func (s FollowModeScheduleActionStartSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteInputSecurityGroupInput) GoString() string { +func (s FollowModeScheduleActionStartSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DeleteInputSecurityGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteInputSecurityGroupInput"} - if s.InputSecurityGroupId == nil { - invalidParams.Add(request.NewErrParamRequired("InputSecurityGroupId")) +func (s *FollowModeScheduleActionStartSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FollowModeScheduleActionStartSettings"} + if s.FollowPoint == nil { + invalidParams.Add(request.NewErrParamRequired("FollowPoint")) + } + if s.ReferenceActionName == nil { + invalidParams.Add(request.NewErrParamRequired("ReferenceActionName")) } if invalidParams.Len() > 0 { @@ -4124,48 +7576,67 @@ func (s *DeleteInputSecurityGroupInput) Validate() error { return nil } -// SetInputSecurityGroupId sets the InputSecurityGroupId field's value. -func (s *DeleteInputSecurityGroupInput) SetInputSecurityGroupId(v string) *DeleteInputSecurityGroupInput { - s.InputSecurityGroupId = &v +// SetFollowPoint sets the FollowPoint field's value. +func (s *FollowModeScheduleActionStartSettings) SetFollowPoint(v string) *FollowModeScheduleActionStartSettings { + s.FollowPoint = &v return s } -type DeleteInputSecurityGroupOutput struct { - _ struct{} `type:"structure"` +// SetReferenceActionName sets the ReferenceActionName field's value. +func (s *FollowModeScheduleActionStartSettings) SetReferenceActionName(v string) *FollowModeScheduleActionStartSettings { + s.ReferenceActionName = &v + return s } -// String returns the string representation -func (s DeleteInputSecurityGroupOutput) String() string { - return awsutil.Prettify(s) -} +type GlobalConfiguration struct { + _ struct{} `type:"structure"` -// GoString returns the string representation -func (s DeleteInputSecurityGroupOutput) GoString() string { - return s.String() -} + // Value to set the initial audio gain for the Live Event. + InitialAudioGain *int64 `locationName:"initialAudioGain" type:"integer"` -type DescribeChannelInput struct { - _ struct{} `type:"structure"` + // Indicates the action to take when the current input completes (e.g. end-of-file). + // When switchAndLoopInputs is configured the encoder will restart at the beginning + // of the first input. When "none" is configured the encoder will transcode + // either black, a solid color, or a user specified slate images per the "Input + // Loss Behavior" configuration until the next input switch occurs (which is + // controlled through the Channel Schedule API). + InputEndAction *string `locationName:"inputEndAction" type:"string" enum:"GlobalConfigurationInputEndAction"` - // ChannelId is a required field - ChannelId *string `location:"uri" locationName:"channelId" type:"string" required:"true"` + // Settings for system actions when input is lost. + InputLossBehavior *InputLossBehavior `locationName:"inputLossBehavior" type:"structure"` + + // Indicates whether the rate of frames emitted by the Live encoder should be + // paced by its system clock (which optionally may be locked to another source + // via NTP) or should be locked to the clock of the source that is providing + // the input stream. + OutputTimingSource *string `locationName:"outputTimingSource" type:"string" enum:"GlobalConfigurationOutputTimingSource"` + + // Adjusts video input buffer for streams with very low video framerates. This + // is commonly set to enabled for music channels with less than one video frame + // per second. 
+ SupportLowFramerateInputs *string `locationName:"supportLowFramerateInputs" type:"string" enum:"GlobalConfigurationLowFramerateInputs"` } // String returns the string representation -func (s DescribeChannelInput) String() string { +func (s GlobalConfiguration) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeChannelInput) GoString() string { +func (s GlobalConfiguration) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeChannelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeChannelInput"} - if s.ChannelId == nil { - invalidParams.Add(request.NewErrParamRequired("ChannelId")) +func (s *GlobalConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GlobalConfiguration"} + if s.InitialAudioGain != nil && *s.InitialAudioGain < -60 { + invalidParams.Add(request.NewErrParamMinValue("InitialAudioGain", -60)) + } + if s.InputLossBehavior != nil { + if err := s.InputLossBehavior.Validate(); err != nil { + invalidParams.AddNested("InputLossBehavior", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -4174,136 +7645,225 @@ func (s *DescribeChannelInput) Validate() error { return nil } -// SetChannelId sets the ChannelId field's value. -func (s *DescribeChannelInput) SetChannelId(v string) *DescribeChannelInput { - s.ChannelId = &v +// SetInitialAudioGain sets the InitialAudioGain field's value. +func (s *GlobalConfiguration) SetInitialAudioGain(v int64) *GlobalConfiguration { + s.InitialAudioGain = &v return s } -type DescribeChannelOutput struct { +// SetInputEndAction sets the InputEndAction field's value. +func (s *GlobalConfiguration) SetInputEndAction(v string) *GlobalConfiguration { + s.InputEndAction = &v + return s +} + +// SetInputLossBehavior sets the InputLossBehavior field's value. +func (s *GlobalConfiguration) SetInputLossBehavior(v *InputLossBehavior) *GlobalConfiguration { + s.InputLossBehavior = v + return s +} + +// SetOutputTimingSource sets the OutputTimingSource field's value. +func (s *GlobalConfiguration) SetOutputTimingSource(v string) *GlobalConfiguration { + s.OutputTimingSource = &v + return s +} + +// SetSupportLowFramerateInputs sets the SupportLowFramerateInputs field's value. +func (s *GlobalConfiguration) SetSupportLowFramerateInputs(v string) *GlobalConfiguration { + s.SupportLowFramerateInputs = &v + return s +} + +type H264Settings struct { _ struct{} `type:"structure"` - Arn *string `locationName:"arn" type:"string"` + // Adaptive quantization. Allows intra-frame quantizers to vary to improve visual + // quality. + AdaptiveQuantization *string `locationName:"adaptiveQuantization" type:"string" enum:"H264AdaptiveQuantization"` - Destinations []*OutputDestination `locationName:"destinations" type:"list"` + // Indicates that AFD values will be written into the output stream. If afdSignaling + // is "auto", the system will try to preserve the input AFD value (in cases + // where multiple AFD values are valid). If set to "fixed", the AFD value will + // be the value configured in the fixedAfd parameter. + AfdSignaling *string `locationName:"afdSignaling" type:"string" enum:"AfdSignaling"` - EgressEndpoints []*ChannelEgressEndpoint `locationName:"egressEndpoints" type:"list"` + // Average bitrate in bits/second. Required for VBR, CBR, and ABR. 
For MS Smooth + // outputs, bitrates must be unique when rounded down to the nearest multiple + // of 1000. + Bitrate *int64 `locationName:"bitrate" min:"1000" type:"integer"` - EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` + // Percentage of the buffer that should initially be filled (HRD buffer model). + BufFillPct *int64 `locationName:"bufFillPct" type:"integer"` - Id *string `locationName:"id" type:"string"` + // Size of buffer (HRD buffer model) in bits/second. + BufSize *int64 `locationName:"bufSize" type:"integer"` - InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` + // Includes colorspace metadata in the output. + ColorMetadata *string `locationName:"colorMetadata" type:"string" enum:"H264ColorMetadata"` - InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + // Entropy encoding mode. Use cabac (must be in Main or High profile) or cavlc. + EntropyEncoding *string `locationName:"entropyEncoding" type:"string" enum:"H264EntropyEncoding"` - Name *string `locationName:"name" type:"string"` + // Four bit AFD value to write on all frames of video in the output stream. + // Only valid when afdSignaling is set to 'Fixed'. + FixedAfd *string `locationName:"fixedAfd" type:"string" enum:"FixedAfd"` - PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` + // If set to enabled, adjust quantization within each frame to reduce flicker + // or 'pop' on I-frames. + FlickerAq *string `locationName:"flickerAq" type:"string" enum:"H264FlickerAq"` - RoleArn *string `locationName:"roleArn" type:"string"` + // This field indicates how the output video frame rate is specified. If "specified" + // is selected then the output video frame rate is determined by framerateNumerator + // and framerateDenominator, else if "initializeFromSource" is selected then + // the output video frame rate will be set equal to the input video frame rate + // of the first input. + FramerateControl *string `locationName:"framerateControl" type:"string" enum:"H264FramerateControl"` - State *string `locationName:"state" type:"string" enum:"ChannelState"` -} + // Framerate denominator. + FramerateDenominator *int64 `locationName:"framerateDenominator" type:"integer"` -// String returns the string representation -func (s DescribeChannelOutput) String() string { - return awsutil.Prettify(s) -} + // Framerate numerator - framerate is a fraction, e.g. 24000 / 1001 = 23.976 + // fps. + FramerateNumerator *int64 `locationName:"framerateNumerator" type:"integer"` -// GoString returns the string representation -func (s DescribeChannelOutput) GoString() string { - return s.String() -} + // If enabled, use reference B frames for GOP structures that have B frames + // > 1. + GopBReference *string `locationName:"gopBReference" type:"string" enum:"H264GopBReference"` -// SetArn sets the Arn field's value. -func (s *DescribeChannelOutput) SetArn(v string) *DescribeChannelOutput { - s.Arn = &v - return s -} + // Frequency of closed GOPs. In streaming applications, it is recommended that + // this be set to 1 so a decoder joining mid-stream will receive an IDR frame + // as quickly as possible. Setting this value to 0 will break output segmenting. + GopClosedCadence *int64 `locationName:"gopClosedCadence" type:"integer"` -// SetDestinations sets the Destinations field's value. 
-func (s *DescribeChannelOutput) SetDestinations(v []*OutputDestination) *DescribeChannelOutput { - s.Destinations = v - return s -} + // Number of B-frames between reference frames. + GopNumBFrames *int64 `locationName:"gopNumBFrames" type:"integer"` -// SetEgressEndpoints sets the EgressEndpoints field's value. -func (s *DescribeChannelOutput) SetEgressEndpoints(v []*ChannelEgressEndpoint) *DescribeChannelOutput { - s.EgressEndpoints = v - return s -} + // GOP size (keyframe interval) in units of either frames or seconds per gopSizeUnits. + // Must be greater than zero. + GopSize *float64 `locationName:"gopSize" type:"double"` -// SetEncoderSettings sets the EncoderSettings field's value. -func (s *DescribeChannelOutput) SetEncoderSettings(v *EncoderSettings) *DescribeChannelOutput { - s.EncoderSettings = v - return s -} + // Indicates if the gopSize is specified in frames or seconds. If seconds the + // system will convert the gopSize into a frame count at run time. + GopSizeUnits *string `locationName:"gopSizeUnits" type:"string" enum:"H264GopSizeUnits"` + + // H.264 Level. + Level *string `locationName:"level" type:"string" enum:"H264Level"` + + // Amount of lookahead. A value of low can decrease latency and memory usage, + // while high can produce better quality for certain content. + LookAheadRateControl *string `locationName:"lookAheadRateControl" type:"string" enum:"H264LookAheadRateControl"` + + // Maximum bitrate in bits/second (for VBR and QVBR modes only).Required when + // rateControlMode is "qvbr". + MaxBitrate *int64 `locationName:"maxBitrate" min:"1000" type:"integer"` + + // Only meaningful if sceneChangeDetect is set to enabled. Enforces separation + // between repeated (cadence) I-frames and I-frames inserted by Scene Change + // Detection. If a scene change I-frame is within I-interval frames of a cadence + // I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. + // GOP stretch requires enabling lookahead as well as setting I-interval. The + // normal cadence resumes for the next GOP. Note: Maximum GOP stretch = GOP + // size + Min-I-interval - 1 + MinIInterval *int64 `locationName:"minIInterval" type:"integer"` + + // Number of reference frames to use. The encoder may use more than requested + // if using B-frames and/or interlaced encoding. + NumRefFrames *int64 `locationName:"numRefFrames" min:"1" type:"integer"` + + // This field indicates how the output pixel aspect ratio is specified. If "specified" + // is selected then the output video pixel aspect ratio is determined by parNumerator + // and parDenominator, else if "initializeFromSource" is selected then the output + // pixsel aspect ratio will be set equal to the input video pixel aspect ratio + // of the first input. + ParControl *string `locationName:"parControl" type:"string" enum:"H264ParControl"` + + // Pixel Aspect Ratio denominator. + ParDenominator *int64 `locationName:"parDenominator" min:"1" type:"integer"` + + // Pixel Aspect Ratio numerator. + ParNumerator *int64 `locationName:"parNumerator" type:"integer"` + + // H.264 Profile. + Profile *string `locationName:"profile" type:"string" enum:"H264Profile"` -// SetId sets the Id field's value. -func (s *DescribeChannelOutput) SetId(v string) *DescribeChannelOutput { - s.Id = &v - return s -} + // Target quality value. Applicable only to QVBR mode. 1 is the lowest quality + // and 10 is thehighest and approaches lossless. Typical levels for content + // distribution are between 6 and 8. 
+ QvbrQualityLevel *int64 `locationName:"qvbrQualityLevel" min:"1" type:"integer"` -// SetInputAttachments sets the InputAttachments field's value. -func (s *DescribeChannelOutput) SetInputAttachments(v []*InputAttachment) *DescribeChannelOutput { - s.InputAttachments = v - return s -} + // Rate control mode. - CBR: Constant Bit Rate- VBR: Variable Bit Rate- QVBR: + // Encoder dynamically controls the bitrate to meet the desired quality (specifiedthrough + // the qvbrQualityLevel field). The bitrate will not exceed the bitrate specified + // inthe maxBitrate field and will not fall below the bitrate required to meet + // the desiredquality level. + RateControlMode *string `locationName:"rateControlMode" type:"string" enum:"H264RateControlMode"` -// SetInputSpecification sets the InputSpecification field's value. -func (s *DescribeChannelOutput) SetInputSpecification(v *InputSpecification) *DescribeChannelOutput { - s.InputSpecification = v - return s -} + // Sets the scan type of the output to progressive or top-field-first interlaced. + ScanType *string `locationName:"scanType" type:"string" enum:"H264ScanType"` -// SetName sets the Name field's value. -func (s *DescribeChannelOutput) SetName(v string) *DescribeChannelOutput { - s.Name = &v - return s -} + // Scene change detection.- On: inserts I-frames when scene change is detected.- + // Off: does not force an I-frame when scene change is detected. + SceneChangeDetect *string `locationName:"sceneChangeDetect" type:"string" enum:"H264SceneChangeDetect"` -// SetPipelinesRunningCount sets the PipelinesRunningCount field's value. -func (s *DescribeChannelOutput) SetPipelinesRunningCount(v int64) *DescribeChannelOutput { - s.PipelinesRunningCount = &v - return s -} + // Number of slices per picture. Must be less than or equal to the number of + // macroblock rows for progressive pictures, and less than or equal to half + // the number of macroblock rows for interlaced pictures.This field is optional; + // when no value is specified the encoder will choose the number of slices based + // on encode resolution. + Slices *int64 `locationName:"slices" min:"1" type:"integer"` -// SetRoleArn sets the RoleArn field's value. -func (s *DescribeChannelOutput) SetRoleArn(v string) *DescribeChannelOutput { - s.RoleArn = &v - return s -} + // Softness. Selects quantizer matrix, larger values reduce high-frequency content + // in the encoded image. + Softness *int64 `locationName:"softness" type:"integer"` -// SetState sets the State field's value. -func (s *DescribeChannelOutput) SetState(v string) *DescribeChannelOutput { - s.State = &v - return s -} + // If set to enabled, adjust quantization within each frame based on spatial + // variation of content complexity. + SpatialAq *string `locationName:"spatialAq" type:"string" enum:"H264SpatialAq"` -type DescribeInputInput struct { - _ struct{} `type:"structure"` + // Produces a bitstream compliant with SMPTE RP-2027. + Syntax *string `locationName:"syntax" type:"string" enum:"H264Syntax"` - // InputId is a required field - InputId *string `location:"uri" locationName:"inputId" type:"string" required:"true"` + // If set to enabled, adjust quantization within each frame based on temporal + // variation of content complexity. 
+ TemporalAq *string `locationName:"temporalAq" type:"string" enum:"H264TemporalAq"` + + // Determines how timecodes should be inserted into the video elementary stream.- + // 'disabled': Do not include timecodes- 'picTimingSei': Pass through picture + // timing SEI messages from the source specified in Timecode Config + TimecodeInsertion *string `locationName:"timecodeInsertion" type:"string" enum:"H264TimecodeInsertionBehavior"` } // String returns the string representation -func (s DescribeInputInput) String() string { +func (s H264Settings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeInputInput) GoString() string { +func (s H264Settings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeInputInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeInputInput"} - if s.InputId == nil { - invalidParams.Add(request.NewErrParamRequired("InputId")) +func (s *H264Settings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "H264Settings"} + if s.Bitrate != nil && *s.Bitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("Bitrate", 1000)) + } + if s.MaxBitrate != nil && *s.MaxBitrate < 1000 { + invalidParams.Add(request.NewErrParamMinValue("MaxBitrate", 1000)) + } + if s.NumRefFrames != nil && *s.NumRefFrames < 1 { + invalidParams.Add(request.NewErrParamMinValue("NumRefFrames", 1)) + } + if s.ParDenominator != nil && *s.ParDenominator < 1 { + invalidParams.Add(request.NewErrParamMinValue("ParDenominator", 1)) + } + if s.QvbrQualityLevel != nil && *s.QvbrQualityLevel < 1 { + invalidParams.Add(request.NewErrParamMinValue("QvbrQualityLevel", 1)) + } + if s.Slices != nil && *s.Slices < 1 { + invalidParams.Add(request.NewErrParamMinValue("Slices", 1)) } if invalidParams.Len() > 0 { @@ -4312,1169 +7872,988 @@ func (s *DescribeInputInput) Validate() error { return nil } -// SetInputId sets the InputId field's value. -func (s *DescribeInputInput) SetInputId(v string) *DescribeInputInput { - s.InputId = &v +// SetAdaptiveQuantization sets the AdaptiveQuantization field's value. +func (s *H264Settings) SetAdaptiveQuantization(v string) *H264Settings { + s.AdaptiveQuantization = &v return s } -type DescribeInputOutput struct { - _ struct{} `type:"structure"` - - Arn *string `locationName:"arn" type:"string"` - - AttachedChannels []*string `locationName:"attachedChannels" type:"list"` - - Destinations []*InputDestination `locationName:"destinations" type:"list"` - - Id *string `locationName:"id" type:"string"` - - Name *string `locationName:"name" type:"string"` - - SecurityGroups []*string `locationName:"securityGroups" type:"list"` - - Sources []*InputSource `locationName:"sources" type:"list"` - - State *string `locationName:"state" type:"string" enum:"InputState"` - - Type *string `locationName:"type" type:"string" enum:"InputType"` -} - -// String returns the string representation -func (s DescribeInputOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DescribeInputOutput) GoString() string { - return s.String() -} - -// SetArn sets the Arn field's value. -func (s *DescribeInputOutput) SetArn(v string) *DescribeInputOutput { - s.Arn = &v +// SetAfdSignaling sets the AfdSignaling field's value. 
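// Editorial aside (not part of the vendored SDK patch): a minimal sketch of how the
// H264Settings type vendored in this hunk could be populated from a caller's side,
// assuming the usual AWS SDK for Go v1 import path
// "github.com/aws/aws-sdk-go/service/medialive" imported as medialive. The fluent
// Set* helpers and the Validate() minimums (Bitrate/MaxBitrate >= 1000,
// QvbrQualityLevel >= 1, Slices >= 1, ...) come from the generated code in this hunk;
// the "QVBR" literal, the concrete numbers, and the function name are illustrative
// assumptions, not recommendations.
func exampleH264QvbrSettings() *medialive.H264Settings {
	h264 := &medialive.H264Settings{}
	h264.SetRateControlMode("QVBR"). // assumed H264RateControlMode enum literal
		SetQvbrQualityLevel(7). // field docs suggest 6-8 for content distribution
		SetMaxBitrate(5000000)  // required for QVBR; Validate() enforces a minimum of 1000
	if err := h264.Validate(); err != nil {
		panic(err) // values below the documented minimums surface as request.ErrInvalidParams
	}
	return h264
}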
+func (s *H264Settings) SetAfdSignaling(v string) *H264Settings { + s.AfdSignaling = &v return s } -// SetAttachedChannels sets the AttachedChannels field's value. -func (s *DescribeInputOutput) SetAttachedChannels(v []*string) *DescribeInputOutput { - s.AttachedChannels = v +// SetBitrate sets the Bitrate field's value. +func (s *H264Settings) SetBitrate(v int64) *H264Settings { + s.Bitrate = &v return s } -// SetDestinations sets the Destinations field's value. -func (s *DescribeInputOutput) SetDestinations(v []*InputDestination) *DescribeInputOutput { - s.Destinations = v +// SetBufFillPct sets the BufFillPct field's value. +func (s *H264Settings) SetBufFillPct(v int64) *H264Settings { + s.BufFillPct = &v return s } -// SetId sets the Id field's value. -func (s *DescribeInputOutput) SetId(v string) *DescribeInputOutput { - s.Id = &v +// SetBufSize sets the BufSize field's value. +func (s *H264Settings) SetBufSize(v int64) *H264Settings { + s.BufSize = &v return s } -// SetName sets the Name field's value. -func (s *DescribeInputOutput) SetName(v string) *DescribeInputOutput { - s.Name = &v +// SetColorMetadata sets the ColorMetadata field's value. +func (s *H264Settings) SetColorMetadata(v string) *H264Settings { + s.ColorMetadata = &v return s } -// SetSecurityGroups sets the SecurityGroups field's value. -func (s *DescribeInputOutput) SetSecurityGroups(v []*string) *DescribeInputOutput { - s.SecurityGroups = v +// SetEntropyEncoding sets the EntropyEncoding field's value. +func (s *H264Settings) SetEntropyEncoding(v string) *H264Settings { + s.EntropyEncoding = &v return s } -// SetSources sets the Sources field's value. -func (s *DescribeInputOutput) SetSources(v []*InputSource) *DescribeInputOutput { - s.Sources = v +// SetFixedAfd sets the FixedAfd field's value. +func (s *H264Settings) SetFixedAfd(v string) *H264Settings { + s.FixedAfd = &v return s } -// SetState sets the State field's value. -func (s *DescribeInputOutput) SetState(v string) *DescribeInputOutput { - s.State = &v +// SetFlickerAq sets the FlickerAq field's value. +func (s *H264Settings) SetFlickerAq(v string) *H264Settings { + s.FlickerAq = &v return s } -// SetType sets the Type field's value. -func (s *DescribeInputOutput) SetType(v string) *DescribeInputOutput { - s.Type = &v +// SetFramerateControl sets the FramerateControl field's value. +func (s *H264Settings) SetFramerateControl(v string) *H264Settings { + s.FramerateControl = &v return s } -type DescribeInputSecurityGroupInput struct { - _ struct{} `type:"structure"` - - // InputSecurityGroupId is a required field - InputSecurityGroupId *string `location:"uri" locationName:"inputSecurityGroupId" type:"string" required:"true"` -} - -// String returns the string representation -func (s DescribeInputSecurityGroupInput) String() string { - return awsutil.Prettify(s) +// SetFramerateDenominator sets the FramerateDenominator field's value. +func (s *H264Settings) SetFramerateDenominator(v int64) *H264Settings { + s.FramerateDenominator = &v + return s } -// GoString returns the string representation -func (s DescribeInputSecurityGroupInput) GoString() string { - return s.String() +// SetFramerateNumerator sets the FramerateNumerator field's value. +func (s *H264Settings) SetFramerateNumerator(v int64) *H264Settings { + s.FramerateNumerator = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *DescribeInputSecurityGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeInputSecurityGroupInput"} - if s.InputSecurityGroupId == nil { - invalidParams.Add(request.NewErrParamRequired("InputSecurityGroupId")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetGopBReference sets the GopBReference field's value. +func (s *H264Settings) SetGopBReference(v string) *H264Settings { + s.GopBReference = &v + return s } -// SetInputSecurityGroupId sets the InputSecurityGroupId field's value. -func (s *DescribeInputSecurityGroupInput) SetInputSecurityGroupId(v string) *DescribeInputSecurityGroupInput { - s.InputSecurityGroupId = &v +// SetGopClosedCadence sets the GopClosedCadence field's value. +func (s *H264Settings) SetGopClosedCadence(v int64) *H264Settings { + s.GopClosedCadence = &v return s } -type DescribeInputSecurityGroupOutput struct { - _ struct{} `type:"structure"` - - Arn *string `locationName:"arn" type:"string"` - - Id *string `locationName:"id" type:"string"` - - WhitelistRules []*InputWhitelistRule `locationName:"whitelistRules" type:"list"` +// SetGopNumBFrames sets the GopNumBFrames field's value. +func (s *H264Settings) SetGopNumBFrames(v int64) *H264Settings { + s.GopNumBFrames = &v + return s } -// String returns the string representation -func (s DescribeInputSecurityGroupOutput) String() string { - return awsutil.Prettify(s) +// SetGopSize sets the GopSize field's value. +func (s *H264Settings) SetGopSize(v float64) *H264Settings { + s.GopSize = &v + return s } -// GoString returns the string representation -func (s DescribeInputSecurityGroupOutput) GoString() string { - return s.String() +// SetGopSizeUnits sets the GopSizeUnits field's value. +func (s *H264Settings) SetGopSizeUnits(v string) *H264Settings { + s.GopSizeUnits = &v + return s } -// SetArn sets the Arn field's value. -func (s *DescribeInputSecurityGroupOutput) SetArn(v string) *DescribeInputSecurityGroupOutput { - s.Arn = &v +// SetLevel sets the Level field's value. +func (s *H264Settings) SetLevel(v string) *H264Settings { + s.Level = &v return s } -// SetId sets the Id field's value. -func (s *DescribeInputSecurityGroupOutput) SetId(v string) *DescribeInputSecurityGroupOutput { - s.Id = &v +// SetLookAheadRateControl sets the LookAheadRateControl field's value. +func (s *H264Settings) SetLookAheadRateControl(v string) *H264Settings { + s.LookAheadRateControl = &v return s } -// SetWhitelistRules sets the WhitelistRules field's value. -func (s *DescribeInputSecurityGroupOutput) SetWhitelistRules(v []*InputWhitelistRule) *DescribeInputSecurityGroupOutput { - s.WhitelistRules = v +// SetMaxBitrate sets the MaxBitrate field's value. +func (s *H264Settings) SetMaxBitrate(v int64) *H264Settings { + s.MaxBitrate = &v return s } -// DVB Network Information Table (NIT) -type DvbNitSettings struct { - _ struct{} `type:"structure"` - - // The numeric value placed in the Network Information Table (NIT). - // - // NetworkId is a required field - NetworkId *int64 `locationName:"networkId" type:"integer" required:"true"` - - // The network name text placed in the networkNameDescriptor inside the Network - // Information Table. Maximum length is 256 characters. - // - // NetworkName is a required field - NetworkName *string `locationName:"networkName" min:"1" type:"string" required:"true"` - - // The number of milliseconds between instances of this table in the output - // transport stream. 
- RepInterval *int64 `locationName:"repInterval" min:"25" type:"integer"` +// SetMinIInterval sets the MinIInterval field's value. +func (s *H264Settings) SetMinIInterval(v int64) *H264Settings { + s.MinIInterval = &v + return s } -// String returns the string representation -func (s DvbNitSettings) String() string { - return awsutil.Prettify(s) +// SetNumRefFrames sets the NumRefFrames field's value. +func (s *H264Settings) SetNumRefFrames(v int64) *H264Settings { + s.NumRefFrames = &v + return s } -// GoString returns the string representation -func (s DvbNitSettings) GoString() string { - return s.String() +// SetParControl sets the ParControl field's value. +func (s *H264Settings) SetParControl(v string) *H264Settings { + s.ParControl = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DvbNitSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DvbNitSettings"} - if s.NetworkId == nil { - invalidParams.Add(request.NewErrParamRequired("NetworkId")) - } - if s.NetworkName == nil { - invalidParams.Add(request.NewErrParamRequired("NetworkName")) - } - if s.NetworkName != nil && len(*s.NetworkName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("NetworkName", 1)) - } - if s.RepInterval != nil && *s.RepInterval < 25 { - invalidParams.Add(request.NewErrParamMinValue("RepInterval", 25)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetParDenominator sets the ParDenominator field's value. +func (s *H264Settings) SetParDenominator(v int64) *H264Settings { + s.ParDenominator = &v + return s } -// SetNetworkId sets the NetworkId field's value. -func (s *DvbNitSettings) SetNetworkId(v int64) *DvbNitSettings { - s.NetworkId = &v +// SetParNumerator sets the ParNumerator field's value. +func (s *H264Settings) SetParNumerator(v int64) *H264Settings { + s.ParNumerator = &v return s } -// SetNetworkName sets the NetworkName field's value. -func (s *DvbNitSettings) SetNetworkName(v string) *DvbNitSettings { - s.NetworkName = &v +// SetProfile sets the Profile field's value. +func (s *H264Settings) SetProfile(v string) *H264Settings { + s.Profile = &v return s } -// SetRepInterval sets the RepInterval field's value. -func (s *DvbNitSettings) SetRepInterval(v int64) *DvbNitSettings { - s.RepInterval = &v +// SetQvbrQualityLevel sets the QvbrQualityLevel field's value. +func (s *H264Settings) SetQvbrQualityLevel(v int64) *H264Settings { + s.QvbrQualityLevel = &v return s } -// DVB Service Description Table (SDT) -type DvbSdtSettings struct { - _ struct{} `type:"structure"` - - // Selects method of inserting SDT information into output stream. The sdtFollow - // setting copies SDT information from input stream to output stream. The sdtFollowIfPresent - // setting copies SDT information from input stream to output stream if SDT - // information is present in the input, otherwise it will fall back on the user-defined - // values. The sdtManual setting means user will enter the SDT information. - // The sdtNone setting means output stream will not contain SDT information. - OutputSdt *string `locationName:"outputSdt" type:"string" enum:"DvbSdtOutputSdt"` - - // The number of milliseconds between instances of this table in the output - // transport stream. - RepInterval *int64 `locationName:"repInterval" min:"25" type:"integer"` - - // The service name placed in the serviceDescriptor in the Service Description - // Table. Maximum length is 256 characters. 
- ServiceName *string `locationName:"serviceName" min:"1" type:"string"` - - // The service provider name placed in the serviceDescriptor in the Service - // Description Table. Maximum length is 256 characters. - ServiceProviderName *string `locationName:"serviceProviderName" min:"1" type:"string"` +// SetRateControlMode sets the RateControlMode field's value. +func (s *H264Settings) SetRateControlMode(v string) *H264Settings { + s.RateControlMode = &v + return s } -// String returns the string representation -func (s DvbSdtSettings) String() string { - return awsutil.Prettify(s) +// SetScanType sets the ScanType field's value. +func (s *H264Settings) SetScanType(v string) *H264Settings { + s.ScanType = &v + return s } -// GoString returns the string representation -func (s DvbSdtSettings) GoString() string { - return s.String() +// SetSceneChangeDetect sets the SceneChangeDetect field's value. +func (s *H264Settings) SetSceneChangeDetect(v string) *H264Settings { + s.SceneChangeDetect = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DvbSdtSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DvbSdtSettings"} - if s.RepInterval != nil && *s.RepInterval < 25 { - invalidParams.Add(request.NewErrParamMinValue("RepInterval", 25)) - } - if s.ServiceName != nil && len(*s.ServiceName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ServiceName", 1)) - } - if s.ServiceProviderName != nil && len(*s.ServiceProviderName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ServiceProviderName", 1)) - } +// SetSlices sets the Slices field's value. +func (s *H264Settings) SetSlices(v int64) *H264Settings { + s.Slices = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetSoftness sets the Softness field's value. +func (s *H264Settings) SetSoftness(v int64) *H264Settings { + s.Softness = &v + return s } -// SetOutputSdt sets the OutputSdt field's value. -func (s *DvbSdtSettings) SetOutputSdt(v string) *DvbSdtSettings { - s.OutputSdt = &v +// SetSpatialAq sets the SpatialAq field's value. +func (s *H264Settings) SetSpatialAq(v string) *H264Settings { + s.SpatialAq = &v return s } -// SetRepInterval sets the RepInterval field's value. -func (s *DvbSdtSettings) SetRepInterval(v int64) *DvbSdtSettings { - s.RepInterval = &v +// SetSyntax sets the Syntax field's value. +func (s *H264Settings) SetSyntax(v string) *H264Settings { + s.Syntax = &v return s } -// SetServiceName sets the ServiceName field's value. -func (s *DvbSdtSettings) SetServiceName(v string) *DvbSdtSettings { - s.ServiceName = &v +// SetTemporalAq sets the TemporalAq field's value. +func (s *H264Settings) SetTemporalAq(v string) *H264Settings { + s.TemporalAq = &v return s } -// SetServiceProviderName sets the ServiceProviderName field's value. -func (s *DvbSdtSettings) SetServiceProviderName(v string) *DvbSdtSettings { - s.ServiceProviderName = &v +// SetTimecodeInsertion sets the TimecodeInsertion field's value. +func (s *H264Settings) SetTimecodeInsertion(v string) *H264Settings { + s.TimecodeInsertion = &v return s } -type DvbSubDestinationSettings struct { +type HlsAkamaiSettings struct { _ struct{} `type:"structure"` - // If no explicit xPosition or yPosition is provided, setting alignment to centered - // will place the captions at the bottom center of the output. Similarly, setting - // a left alignment will align captions to the bottom left of the output. 
If - // x and y positions are given in conjunction with the alignment parameter, - // the font will be justified (either left or centered) relative to those coordinates. - // Selecting "smart" justification will left-justify live subtitles and center-justify - // pre-recorded subtitles. This option is not valid for source captions that - // are STL or 608/embedded. These source settings are already pre-defined by - // the caption stream. All burn-in and DVB-Sub font settings must match. - Alignment *string `locationName:"alignment" type:"string" enum:"DvbSubDestinationAlignment"` + // Number of seconds to wait before retrying connection to the CDN if the connection + // is lost. + ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` - // Specifies the color of the rectangle behind the captions. All burn-in and - // DVB-Sub font settings must match. - BackgroundColor *string `locationName:"backgroundColor" type:"string" enum:"DvbSubDestinationBackgroundColor"` + // Size in seconds of file cache for streaming outputs. + FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` - // Specifies the opacity of the background rectangle. 255 is opaque; 0 is transparent. - // Leaving this parameter blank is equivalent to setting it to 0 (transparent). - // All burn-in and DVB-Sub font settings must match. - BackgroundOpacity *int64 `locationName:"backgroundOpacity" type:"integer"` + // Specify whether or not to use chunked transfer encoding to Akamai. User should + // contact Akamai to enable this feature. + HttpTransferMode *string `locationName:"httpTransferMode" type:"string" enum:"HlsAkamaiHttpTransferMode"` - // External font file used for caption burn-in. File extension must be 'ttf' - // or 'tte'. Although the user can select output fonts for many different types - // of input captions, embedded, STL and teletext sources use a strict grid system. - // Using external fonts with these caption sources could cause unexpected display - // of proportional fonts. All burn-in and DVB-Sub font settings must match. - Font *InputLocation `locationName:"font" type:"structure"` + // Number of retry attempts that will be made before the Live Event is put into + // an error state. + NumRetries *int64 `locationName:"numRetries" type:"integer"` - // Specifies the color of the burned-in captions. This option is not valid for - // source captions that are STL, 608/embedded or teletext. These source settings - // are already pre-defined by the caption stream. All burn-in and DVB-Sub font - // settings must match. - FontColor *string `locationName:"fontColor" type:"string" enum:"DvbSubDestinationFontColor"` + // If a streaming output fails, number of seconds to wait until a restart is + // initiated. A value of 0 means never restart. + RestartDelay *int64 `locationName:"restartDelay" type:"integer"` - // Specifies the opacity of the burned-in captions. 255 is opaque; 0 is transparent. - // All burn-in and DVB-Sub font settings must match. - FontOpacity *int64 `locationName:"fontOpacity" type:"integer"` + // Salt for authenticated Akamai. + Salt *string `locationName:"salt" type:"string"` - // Font resolution in DPI (dots per inch); default is 96 dpi. All burn-in and - // DVB-Sub font settings must match. - FontResolution *int64 `locationName:"fontResolution" min:"96" type:"integer"` + // Token parameter for authenticated akamai. If not specified, _gda_ is used. 
+ Token *string `locationName:"token" type:"string"` +} - // When set to auto fontSize will scale depending on the size of the output. - // Giving a positive integer will specify the exact font size in points. All - // burn-in and DVB-Sub font settings must match. - FontSize *string `locationName:"fontSize" type:"string"` +// String returns the string representation +func (s HlsAkamaiSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HlsAkamaiSettings) GoString() string { + return s.String() +} + +// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. +func (s *HlsAkamaiSettings) SetConnectionRetryInterval(v int64) *HlsAkamaiSettings { + s.ConnectionRetryInterval = &v + return s +} + +// SetFilecacheDuration sets the FilecacheDuration field's value. +func (s *HlsAkamaiSettings) SetFilecacheDuration(v int64) *HlsAkamaiSettings { + s.FilecacheDuration = &v + return s +} - // Specifies font outline color. This option is not valid for source captions - // that are either 608/embedded or teletext. These source settings are already - // pre-defined by the caption stream. All burn-in and DVB-Sub font settings - // must match. - OutlineColor *string `locationName:"outlineColor" type:"string" enum:"DvbSubDestinationOutlineColor"` +// SetHttpTransferMode sets the HttpTransferMode field's value. +func (s *HlsAkamaiSettings) SetHttpTransferMode(v string) *HlsAkamaiSettings { + s.HttpTransferMode = &v + return s +} - // Specifies font outline size in pixels. This option is not valid for source - // captions that are either 608/embedded or teletext. These source settings - // are already pre-defined by the caption stream. All burn-in and DVB-Sub font - // settings must match. - OutlineSize *int64 `locationName:"outlineSize" type:"integer"` +// SetNumRetries sets the NumRetries field's value. +func (s *HlsAkamaiSettings) SetNumRetries(v int64) *HlsAkamaiSettings { + s.NumRetries = &v + return s +} - // Specifies the color of the shadow cast by the captions. All burn-in and DVB-Sub - // font settings must match. - ShadowColor *string `locationName:"shadowColor" type:"string" enum:"DvbSubDestinationShadowColor"` +// SetRestartDelay sets the RestartDelay field's value. +func (s *HlsAkamaiSettings) SetRestartDelay(v int64) *HlsAkamaiSettings { + s.RestartDelay = &v + return s +} - // Specifies the opacity of the shadow. 255 is opaque; 0 is transparent. Leaving - // this parameter blank is equivalent to setting it to 0 (transparent). All - // burn-in and DVB-Sub font settings must match. - ShadowOpacity *int64 `locationName:"shadowOpacity" type:"integer"` +// SetSalt sets the Salt field's value. +func (s *HlsAkamaiSettings) SetSalt(v string) *HlsAkamaiSettings { + s.Salt = &v + return s +} - // Specifies the horizontal offset of the shadow relative to the captions in - // pixels. A value of -2 would result in a shadow offset 2 pixels to the left. - // All burn-in and DVB-Sub font settings must match. - ShadowXOffset *int64 `locationName:"shadowXOffset" type:"integer"` +// SetToken sets the Token field's value. +func (s *HlsAkamaiSettings) SetToken(v string) *HlsAkamaiSettings { + s.Token = &v + return s +} - // Specifies the vertical offset of the shadow relative to the captions in pixels. - // A value of -2 would result in a shadow offset 2 pixels above the text. All - // burn-in and DVB-Sub font settings must match. 
- ShadowYOffset *int64 `locationName:"shadowYOffset" type:"integer"` +type HlsBasicPutSettings struct { + _ struct{} `type:"structure"` - // Controls whether a fixed grid size will be used to generate the output subtitles - // bitmap. Only applicable for Teletext inputs and DVB-Sub/Burn-in outputs. - TeletextGridControl *string `locationName:"teletextGridControl" type:"string" enum:"DvbSubDestinationTeletextGridControl"` + // Number of seconds to wait before retrying connection to the CDN if the connection + // is lost. + ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` - // Specifies the horizontal position of the caption relative to the left side - // of the output in pixels. A value of 10 would result in the captions starting - // 10 pixels from the left of the output. If no explicit xPosition is provided, - // the horizontal caption position will be determined by the alignment parameter. - // This option is not valid for source captions that are STL, 608/embedded or - // teletext. These source settings are already pre-defined by the caption stream. - // All burn-in and DVB-Sub font settings must match. - XPosition *int64 `locationName:"xPosition" type:"integer"` + // Size in seconds of file cache for streaming outputs. + FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` - // Specifies the vertical position of the caption relative to the top of the - // output in pixels. A value of 10 would result in the captions starting 10 - // pixels from the top of the output. If no explicit yPosition is provided, - // the caption will be positioned towards the bottom of the output. This option - // is not valid for source captions that are STL, 608/embedded or teletext. - // These source settings are already pre-defined by the caption stream. All - // burn-in and DVB-Sub font settings must match. - YPosition *int64 `locationName:"yPosition" type:"integer"` + // Number of retry attempts that will be made before the Live Event is put into + // an error state. + NumRetries *int64 `locationName:"numRetries" type:"integer"` + + // If a streaming output fails, number of seconds to wait until a restart is + // initiated. A value of 0 means never restart. + RestartDelay *int64 `locationName:"restartDelay" type:"integer"` } // String returns the string representation -func (s DvbSubDestinationSettings) String() string { +func (s HlsBasicPutSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DvbSubDestinationSettings) GoString() string { +func (s HlsBasicPutSettings) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DvbSubDestinationSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DvbSubDestinationSettings"} - if s.FontResolution != nil && *s.FontResolution < 96 { - invalidParams.Add(request.NewErrParamMinValue("FontResolution", 96)) - } - if s.Font != nil { - if err := s.Font.Validate(); err != nil { - invalidParams.AddNested("Font", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. +func (s *HlsBasicPutSettings) SetConnectionRetryInterval(v int64) *HlsBasicPutSettings { + s.ConnectionRetryInterval = &v + return s } -// SetAlignment sets the Alignment field's value. 
-func (s *DvbSubDestinationSettings) SetAlignment(v string) *DvbSubDestinationSettings { - s.Alignment = &v +// SetFilecacheDuration sets the FilecacheDuration field's value. +func (s *HlsBasicPutSettings) SetFilecacheDuration(v int64) *HlsBasicPutSettings { + s.FilecacheDuration = &v return s } -// SetBackgroundColor sets the BackgroundColor field's value. -func (s *DvbSubDestinationSettings) SetBackgroundColor(v string) *DvbSubDestinationSettings { - s.BackgroundColor = &v +// SetNumRetries sets the NumRetries field's value. +func (s *HlsBasicPutSettings) SetNumRetries(v int64) *HlsBasicPutSettings { + s.NumRetries = &v return s } -// SetBackgroundOpacity sets the BackgroundOpacity field's value. -func (s *DvbSubDestinationSettings) SetBackgroundOpacity(v int64) *DvbSubDestinationSettings { - s.BackgroundOpacity = &v +// SetRestartDelay sets the RestartDelay field's value. +func (s *HlsBasicPutSettings) SetRestartDelay(v int64) *HlsBasicPutSettings { + s.RestartDelay = &v return s } -// SetFont sets the Font field's value. -func (s *DvbSubDestinationSettings) SetFont(v *InputLocation) *DvbSubDestinationSettings { - s.Font = v - return s +type HlsCdnSettings struct { + _ struct{} `type:"structure"` + + HlsAkamaiSettings *HlsAkamaiSettings `locationName:"hlsAkamaiSettings" type:"structure"` + + HlsBasicPutSettings *HlsBasicPutSettings `locationName:"hlsBasicPutSettings" type:"structure"` + + HlsMediaStoreSettings *HlsMediaStoreSettings `locationName:"hlsMediaStoreSettings" type:"structure"` + + HlsWebdavSettings *HlsWebdavSettings `locationName:"hlsWebdavSettings" type:"structure"` } -// SetFontColor sets the FontColor field's value. -func (s *DvbSubDestinationSettings) SetFontColor(v string) *DvbSubDestinationSettings { - s.FontColor = &v +// String returns the string representation +func (s HlsCdnSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HlsCdnSettings) GoString() string { + return s.String() +} + +// SetHlsAkamaiSettings sets the HlsAkamaiSettings field's value. +func (s *HlsCdnSettings) SetHlsAkamaiSettings(v *HlsAkamaiSettings) *HlsCdnSettings { + s.HlsAkamaiSettings = v return s } -// SetFontOpacity sets the FontOpacity field's value. -func (s *DvbSubDestinationSettings) SetFontOpacity(v int64) *DvbSubDestinationSettings { - s.FontOpacity = &v +// SetHlsBasicPutSettings sets the HlsBasicPutSettings field's value. +func (s *HlsCdnSettings) SetHlsBasicPutSettings(v *HlsBasicPutSettings) *HlsCdnSettings { + s.HlsBasicPutSettings = v return s } -// SetFontResolution sets the FontResolution field's value. -func (s *DvbSubDestinationSettings) SetFontResolution(v int64) *DvbSubDestinationSettings { - s.FontResolution = &v +// SetHlsMediaStoreSettings sets the HlsMediaStoreSettings field's value. +func (s *HlsCdnSettings) SetHlsMediaStoreSettings(v *HlsMediaStoreSettings) *HlsCdnSettings { + s.HlsMediaStoreSettings = v return s } -// SetFontSize sets the FontSize field's value. -func (s *DvbSubDestinationSettings) SetFontSize(v string) *DvbSubDestinationSettings { - s.FontSize = &v +// SetHlsWebdavSettings sets the HlsWebdavSettings field's value. +func (s *HlsCdnSettings) SetHlsWebdavSettings(v *HlsWebdavSettings) *HlsCdnSettings { + s.HlsWebdavSettings = v return s } -// SetOutlineColor sets the OutlineColor field's value. 
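// Editorial aside (not part of the vendored SDK patch): a small sketch, under the same
// import assumption as the H264Settings example above, showing how the
// HlsBasicPutSettings and HlsCdnSettings types vendored in this hunk fit together. The
// Set* helpers are taken from this hunk; the retry and delay values and the function
// name are illustrative assumptions.
func exampleHlsCdnBasicPut() *medialive.HlsCdnSettings {
	basicPut := &medialive.HlsBasicPutSettings{}
	basicPut.SetConnectionRetryInterval(30). // seconds to wait before reconnecting to the CDN
		SetNumRetries(5).   // attempts before the live event is put into an error state
		SetRestartDelay(15) // 0 would mean "never restart"
	cdn := &medialive.HlsCdnSettings{}
	cdn.SetHlsBasicPutSettings(basicPut) // choose the member that matches the target CDN
	return cdn
}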
-func (s *DvbSubDestinationSettings) SetOutlineColor(v string) *DvbSubDestinationSettings { - s.OutlineColor = &v - return s -} +type HlsGroupSettings struct { + _ struct{} `type:"structure"` + + // Choose one or more ad marker types to pass SCTE35 signals through to this + // group of Apple HLS outputs. + AdMarkers []*string `locationName:"adMarkers" type:"list"` + + // A partial URI prefix that will be prepended to each output in the media .m3u8 + // file. Can be used if base manifest is delivered from a different URL than + // the main .m3u8 file. + BaseUrlContent *string `locationName:"baseUrlContent" type:"string"` + + // A partial URI prefix that will be prepended to each output in the media .m3u8 + // file. Can be used if base manifest is delivered from a different URL than + // the main .m3u8 file. + BaseUrlManifest *string `locationName:"baseUrlManifest" type:"string"` + + // Mapping of up to 4 caption channels to caption languages. Is only meaningful + // if captionLanguageSetting is set to "insert". + CaptionLanguageMappings []*CaptionLanguageMapping `locationName:"captionLanguageMappings" type:"list"` + + // Applies only to 608 Embedded output captions.insert: Include CLOSED-CAPTIONS + // lines in the manifest. Specify at least one language in the CC1 Language + // Code field. One CLOSED-CAPTION line is added for each Language Code you specify. + // Make sure to specify the languages in the order in which they appear in the + // original source (if the source is embedded format) or the order of the caption + // selectors (if the source is other than embedded). Otherwise, languages in + // the manifest will not match up properly with the output captions.none: Include + // CLOSED-CAPTIONS=NONE line in the manifest.omit: Omit any CLOSED-CAPTIONS + // line from the manifest. + CaptionLanguageSetting *string `locationName:"captionLanguageSetting" type:"string" enum:"HlsCaptionLanguageSetting"` + + // When set to "disabled", sets the #EXT-X-ALLOW-CACHE:no tag in the manifest, + // which prevents clients from saving media segments for later replay. + ClientCache *string `locationName:"clientCache" type:"string" enum:"HlsClientCache"` + + // Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist + // generation. + CodecSpecification *string `locationName:"codecSpecification" type:"string" enum:"HlsCodecSpecification"` + + // For use with encryptionType. This is a 128-bit, 16-byte hex value represented + // by a 32-character text string. If ivSource is set to "explicit" then this + // parameter is required and is used as the IV for encryption. + ConstantIv *string `locationName:"constantIv" min:"32" type:"string"` + + // A directory or HTTP destination for the HLS segments, manifest files, and + // encryption keys (if enabled). + // + // Destination is a required field + Destination *OutputLocationRef `locationName:"destination" type:"structure" required:"true"` + + // Place segments in subdirectories. + DirectoryStructure *string `locationName:"directoryStructure" type:"string" enum:"HlsDirectoryStructure"` + + // Encrypts the segments with the given encryption scheme. Exclude this parameter + // if no encryption is desired. + EncryptionType *string `locationName:"encryptionType" type:"string" enum:"HlsEncryptionType"` + + // Parameters that control interactions with the CDN. + HlsCdnSettings *HlsCdnSettings `locationName:"hlsCdnSettings" type:"structure"` + + // If mode is "live", the number of segments to retain in the manifest (.m3u8) + // file. 
This number must be less than or equal to keepSegments. If mode is + // "vod", this parameter has no effect. + IndexNSegments *int64 `locationName:"indexNSegments" min:"3" type:"integer"` + + // Parameter that control output group behavior on input loss. + InputLossAction *string `locationName:"inputLossAction" type:"string" enum:"InputLossActionForHlsOut"` + + // For use with encryptionType. The IV (Initialization Vector) is a 128-bit + // number used in conjunction with the key for encrypting blocks. If set to + // "include", IV is listed in the manifest, otherwise the IV is not in the manifest. + IvInManifest *string `locationName:"ivInManifest" type:"string" enum:"HlsIvInManifest"` + + // For use with encryptionType. The IV (Initialization Vector) is a 128-bit + // number used in conjunction with the key for encrypting blocks. If this setting + // is "followsSegmentNumber", it will cause the IV to change every segment (to + // match the segment number). If this is set to "explicit", you must enter a + // constantIv value. + IvSource *string `locationName:"ivSource" type:"string" enum:"HlsIvSource"` + + // If mode is "live", the number of TS segments to retain in the destination + // directory. If mode is "vod", this parameter has no effect. + KeepSegments *int64 `locationName:"keepSegments" min:"1" type:"integer"` + + // The value specifies how the key is represented in the resource identified + // by the URI. If parameter is absent, an implicit value of "identity" is used. + // A reverse DNS string can also be given. + KeyFormat *string `locationName:"keyFormat" type:"string"` -// SetOutlineSize sets the OutlineSize field's value. -func (s *DvbSubDestinationSettings) SetOutlineSize(v int64) *DvbSubDestinationSettings { - s.OutlineSize = &v - return s -} + // Either a single positive integer version value or a slash delimited list + // of version values (1/2/3). + KeyFormatVersions *string `locationName:"keyFormatVersions" type:"string"` -// SetShadowColor sets the ShadowColor field's value. -func (s *DvbSubDestinationSettings) SetShadowColor(v string) *DvbSubDestinationSettings { - s.ShadowColor = &v - return s -} + // The key provider settings. + KeyProviderSettings *KeyProviderSettings `locationName:"keyProviderSettings" type:"structure"` -// SetShadowOpacity sets the ShadowOpacity field's value. -func (s *DvbSubDestinationSettings) SetShadowOpacity(v int64) *DvbSubDestinationSettings { - s.ShadowOpacity = &v - return s -} + // When set to gzip, compresses HLS playlist. + ManifestCompression *string `locationName:"manifestCompression" type:"string" enum:"HlsManifestCompression"` -// SetShadowXOffset sets the ShadowXOffset field's value. -func (s *DvbSubDestinationSettings) SetShadowXOffset(v int64) *DvbSubDestinationSettings { - s.ShadowXOffset = &v - return s -} + // Indicates whether the output manifest should use floating point or integer + // values for segment duration. + ManifestDurationFormat *string `locationName:"manifestDurationFormat" type:"string" enum:"HlsManifestDurationFormat"` -// SetShadowYOffset sets the ShadowYOffset field's value. -func (s *DvbSubDestinationSettings) SetShadowYOffset(v int64) *DvbSubDestinationSettings { - s.ShadowYOffset = &v - return s -} + // When set, minimumSegmentLength is enforced by looking ahead and back within + // the specified range for a nearby avail and extending the segment size if + // needed. 
+ MinSegmentLength *int64 `locationName:"minSegmentLength" type:"integer"` -// SetTeletextGridControl sets the TeletextGridControl field's value. -func (s *DvbSubDestinationSettings) SetTeletextGridControl(v string) *DvbSubDestinationSettings { - s.TeletextGridControl = &v - return s -} + // If "vod", all segments are indexed and kept permanently in the destination + // and manifest. If "live", only the number segments specified in keepSegments + // and indexNSegments are kept; newer segments replace older segments, which + // may prevent players from rewinding all the way to the beginning of the event.VOD + // mode uses HLS EXT-X-PLAYLIST-TYPE of EVENT while the channel is running, + // converting it to a "VOD" type manifest on completion of the stream. + Mode *string `locationName:"mode" type:"string" enum:"HlsMode"` -// SetXPosition sets the XPosition field's value. -func (s *DvbSubDestinationSettings) SetXPosition(v int64) *DvbSubDestinationSettings { - s.XPosition = &v - return s -} + // Generates the .m3u8 playlist file for this HLS output group. The segmentsOnly + // option will output segments without the .m3u8 file. + OutputSelection *string `locationName:"outputSelection" type:"string" enum:"HlsOutputSelection"` -// SetYPosition sets the YPosition field's value. -func (s *DvbSubDestinationSettings) SetYPosition(v int64) *DvbSubDestinationSettings { - s.YPosition = &v - return s -} + // Includes or excludes EXT-X-PROGRAM-DATE-TIME tag in .m3u8 manifest files. + // The value is calculated as follows: either the program date and time are + // initialized using the input timecode source, or the time is initialized using + // the input timecode source and the date is initialized using the timestampOffset. + ProgramDateTime *string `locationName:"programDateTime" type:"string" enum:"HlsProgramDateTime"` -type DvbSubSourceSettings struct { - _ struct{} `type:"structure"` + // Period of insertion of EXT-X-PROGRAM-DATE-TIME entry, in seconds. + ProgramDateTimePeriod *int64 `locationName:"programDateTimePeriod" type:"integer"` - // When using DVB-Sub with Burn-In or SMPTE-TT, use this PID for the source - // content. Unused for DVB-Sub passthrough. All DVB-Sub content is passed through, - // regardless of selectors. - Pid *int64 `locationName:"pid" min:"1" type:"integer"` -} + // Length of MPEG-2 Transport Stream segments to create (in seconds). Note that + // segments will end on the next keyframe after this number of seconds, so actual + // segment length may be longer. + SegmentLength *int64 `locationName:"segmentLength" min:"1" type:"integer"` -// String returns the string representation -func (s DvbSubSourceSettings) String() string { - return awsutil.Prettify(s) -} + // When set to useInputSegmentation, the output segment or fragment points are + // set by the RAI markers from the input streams. + SegmentationMode *string `locationName:"segmentationMode" type:"string" enum:"HlsSegmentationMode"` -// GoString returns the string representation -func (s DvbSubSourceSettings) GoString() string { - return s.String() -} + // Number of segments to write to a subdirectory before starting a new one. + // directoryStructure must be subdirectoryPerStream for this setting to have + // an effect. + SegmentsPerSubdirectory *int64 `locationName:"segmentsPerSubdirectory" min:"1" type:"integer"` -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *DvbSubSourceSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DvbSubSourceSettings"} - if s.Pid != nil && *s.Pid < 1 { - invalidParams.Add(request.NewErrParamMinValue("Pid", 1)) - } + // Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag + // of variant manifest. + StreamInfResolution *string `locationName:"streamInfResolution" type:"string" enum:"HlsStreamInfResolution"` - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} + // Indicates ID3 frame that has the timecode. + TimedMetadataId3Frame *string `locationName:"timedMetadataId3Frame" type:"string" enum:"HlsTimedMetadataId3Frame"` -// SetPid sets the Pid field's value. -func (s *DvbSubSourceSettings) SetPid(v int64) *DvbSubSourceSettings { - s.Pid = &v - return s -} + // Timed Metadata interval in seconds. + TimedMetadataId3Period *int64 `locationName:"timedMetadataId3Period" type:"integer"` -// DVB Time and Date Table (SDT) -type DvbTdtSettings struct { - _ struct{} `type:"structure"` + // Provides an extra millisecond delta offset to fine tune the timestamps. + TimestampDeltaMilliseconds *int64 `locationName:"timestampDeltaMilliseconds" type:"integer"` - // The number of milliseconds between instances of this table in the output - // transport stream. - RepInterval *int64 `locationName:"repInterval" min:"1000" type:"integer"` + // When set to "singleFile", emits the program as a single media resource (.ts) + // file, and uses #EXT-X-BYTERANGE tags to index segment for playback. Playback + // of VOD mode content during event is not guaranteed due to HTTP server caching. + TsFileMode *string `locationName:"tsFileMode" type:"string" enum:"HlsTsFileMode"` } // String returns the string representation -func (s DvbTdtSettings) String() string { +func (s HlsGroupSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DvbTdtSettings) GoString() string { +func (s HlsGroupSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DvbTdtSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DvbTdtSettings"} - if s.RepInterval != nil && *s.RepInterval < 1000 { - invalidParams.Add(request.NewErrParamMinValue("RepInterval", 1000)) +func (s *HlsGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HlsGroupSettings"} + if s.ConstantIv != nil && len(*s.ConstantIv) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ConstantIv", 32)) + } + if s.Destination == nil { + invalidParams.Add(request.NewErrParamRequired("Destination")) + } + if s.IndexNSegments != nil && *s.IndexNSegments < 3 { + invalidParams.Add(request.NewErrParamMinValue("IndexNSegments", 3)) + } + if s.KeepSegments != nil && *s.KeepSegments < 1 { + invalidParams.Add(request.NewErrParamMinValue("KeepSegments", 1)) + } + if s.SegmentLength != nil && *s.SegmentLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("SegmentLength", 1)) + } + if s.SegmentsPerSubdirectory != nil && *s.SegmentsPerSubdirectory < 1 { + invalidParams.Add(request.NewErrParamMinValue("SegmentsPerSubdirectory", 1)) + } + if s.CaptionLanguageMappings != nil { + for i, v := range s.CaptionLanguageMappings { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionLanguageMappings", i), err.(request.ErrInvalidParams)) + } + } + } + if s.KeyProviderSettings != nil { + if err := s.KeyProviderSettings.Validate(); err != nil { + invalidParams.AddNested("KeyProviderSettings", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { return invalidParams - } - return nil -} - -// SetRepInterval sets the RepInterval field's value. -func (s *DvbTdtSettings) SetRepInterval(v int64) *DvbTdtSettings { - s.RepInterval = &v - return s -} - -type Eac3Settings struct { - _ struct{} `type:"structure"` - - // When set to attenuate3Db, applies a 3 dB attenuation to the surround channels. - // Only used for 3/2 coding mode. - AttenuationControl *string `locationName:"attenuationControl" type:"string" enum:"Eac3AttenuationControl"` - - // Average bitrate in bits/second. Valid bitrates depend on the coding mode. - Bitrate *float64 `locationName:"bitrate" type:"double"` - - // Specifies the bitstream mode (bsmod) for the emitted E-AC-3 stream. See ATSC - // A/52-2012 (Annex E) for background on these values. - BitstreamMode *string `locationName:"bitstreamMode" type:"string" enum:"Eac3BitstreamMode"` - - // Dolby Digital Plus coding mode. Determines number of channels. - CodingMode *string `locationName:"codingMode" type:"string" enum:"Eac3CodingMode"` - - // When set to enabled, activates a DC highpass filter for all input channels. - DcFilter *string `locationName:"dcFilter" type:"string" enum:"Eac3DcFilter"` - - // Sets the dialnorm for the output. If blank and input audio is Dolby Digital - // Plus, dialnorm will be passed through. - Dialnorm *int64 `locationName:"dialnorm" min:"1" type:"integer"` - - // Sets the Dolby dynamic range compression profile. - DrcLine *string `locationName:"drcLine" type:"string" enum:"Eac3DrcLine"` - - // Sets the profile for heavy Dolby dynamic range compression, ensures that - // the instantaneous signal peaks do not exceed specified levels. 
- DrcRf *string `locationName:"drcRf" type:"string" enum:"Eac3DrcRf"` - - // When encoding 3/2 audio, setting to lfe enables the LFE channel - LfeControl *string `locationName:"lfeControl" type:"string" enum:"Eac3LfeControl"` - - // When set to enabled, applies a 120Hz lowpass filter to the LFE channel prior - // to encoding. Only valid with codingMode32 coding mode. - LfeFilter *string `locationName:"lfeFilter" type:"string" enum:"Eac3LfeFilter"` - - // Left only/Right only center mix level. Only used for 3/2 coding mode. - LoRoCenterMixLevel *float64 `locationName:"loRoCenterMixLevel" type:"double"` - - // Left only/Right only surround mix level. Only used for 3/2 coding mode. - LoRoSurroundMixLevel *float64 `locationName:"loRoSurroundMixLevel" type:"double"` - - // Left total/Right total center mix level. Only used for 3/2 coding mode. - LtRtCenterMixLevel *float64 `locationName:"ltRtCenterMixLevel" type:"double"` - - // Left total/Right total surround mix level. Only used for 3/2 coding mode. - LtRtSurroundMixLevel *float64 `locationName:"ltRtSurroundMixLevel" type:"double"` - - // When set to followInput, encoder metadata will be sourced from the DD, DD+, - // or DolbyE decoder that supplied this audio data. If audio was not supplied - // from one of these streams, then the static metadata settings will be used. - MetadataControl *string `locationName:"metadataControl" type:"string" enum:"Eac3MetadataControl"` - - // When set to whenPossible, input DD+ audio will be passed through if it is - // present on the input. This detection is dynamic over the life of the transcode. - // Inputs that alternate between DD+ and non-DD+ content will have a consistent - // DD+ output as the system alternates between passthrough and encoding. - PassthroughControl *string `locationName:"passthroughControl" type:"string" enum:"Eac3PassthroughControl"` + } + return nil +} - // When set to shift90Degrees, applies a 90-degree phase shift to the surround - // channels. Only used for 3/2 coding mode. - PhaseControl *string `locationName:"phaseControl" type:"string" enum:"Eac3PhaseControl"` +// SetAdMarkers sets the AdMarkers field's value. +func (s *HlsGroupSettings) SetAdMarkers(v []*string) *HlsGroupSettings { + s.AdMarkers = v + return s +} - // Stereo downmix preference. Only used for 3/2 coding mode. - StereoDownmix *string `locationName:"stereoDownmix" type:"string" enum:"Eac3StereoDownmix"` +// SetBaseUrlContent sets the BaseUrlContent field's value. +func (s *HlsGroupSettings) SetBaseUrlContent(v string) *HlsGroupSettings { + s.BaseUrlContent = &v + return s +} - // When encoding 3/2 audio, sets whether an extra center back surround channel - // is matrix encoded into the left and right surround channels. - SurroundExMode *string `locationName:"surroundExMode" type:"string" enum:"Eac3SurroundExMode"` +// SetBaseUrlManifest sets the BaseUrlManifest field's value. +func (s *HlsGroupSettings) SetBaseUrlManifest(v string) *HlsGroupSettings { + s.BaseUrlManifest = &v + return s +} - // When encoding 2/0 audio, sets whether Dolby Surround is matrix encoded into - // the two channels. - SurroundMode *string `locationName:"surroundMode" type:"string" enum:"Eac3SurroundMode"` +// SetCaptionLanguageMappings sets the CaptionLanguageMappings field's value. 
+func (s *HlsGroupSettings) SetCaptionLanguageMappings(v []*CaptionLanguageMapping) *HlsGroupSettings { + s.CaptionLanguageMappings = v + return s } -// String returns the string representation -func (s Eac3Settings) String() string { - return awsutil.Prettify(s) +// SetCaptionLanguageSetting sets the CaptionLanguageSetting field's value. +func (s *HlsGroupSettings) SetCaptionLanguageSetting(v string) *HlsGroupSettings { + s.CaptionLanguageSetting = &v + return s } -// GoString returns the string representation -func (s Eac3Settings) GoString() string { - return s.String() +// SetClientCache sets the ClientCache field's value. +func (s *HlsGroupSettings) SetClientCache(v string) *HlsGroupSettings { + s.ClientCache = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *Eac3Settings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Eac3Settings"} - if s.Dialnorm != nil && *s.Dialnorm < 1 { - invalidParams.Add(request.NewErrParamMinValue("Dialnorm", 1)) - } +// SetCodecSpecification sets the CodecSpecification field's value. +func (s *HlsGroupSettings) SetCodecSpecification(v string) *HlsGroupSettings { + s.CodecSpecification = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetConstantIv sets the ConstantIv field's value. +func (s *HlsGroupSettings) SetConstantIv(v string) *HlsGroupSettings { + s.ConstantIv = &v + return s } -// SetAttenuationControl sets the AttenuationControl field's value. -func (s *Eac3Settings) SetAttenuationControl(v string) *Eac3Settings { - s.AttenuationControl = &v +// SetDestination sets the Destination field's value. +func (s *HlsGroupSettings) SetDestination(v *OutputLocationRef) *HlsGroupSettings { + s.Destination = v return s } -// SetBitrate sets the Bitrate field's value. -func (s *Eac3Settings) SetBitrate(v float64) *Eac3Settings { - s.Bitrate = &v +// SetDirectoryStructure sets the DirectoryStructure field's value. +func (s *HlsGroupSettings) SetDirectoryStructure(v string) *HlsGroupSettings { + s.DirectoryStructure = &v return s } -// SetBitstreamMode sets the BitstreamMode field's value. -func (s *Eac3Settings) SetBitstreamMode(v string) *Eac3Settings { - s.BitstreamMode = &v +// SetEncryptionType sets the EncryptionType field's value. +func (s *HlsGroupSettings) SetEncryptionType(v string) *HlsGroupSettings { + s.EncryptionType = &v return s } -// SetCodingMode sets the CodingMode field's value. -func (s *Eac3Settings) SetCodingMode(v string) *Eac3Settings { - s.CodingMode = &v +// SetHlsCdnSettings sets the HlsCdnSettings field's value. +func (s *HlsGroupSettings) SetHlsCdnSettings(v *HlsCdnSettings) *HlsGroupSettings { + s.HlsCdnSettings = v return s } -// SetDcFilter sets the DcFilter field's value. -func (s *Eac3Settings) SetDcFilter(v string) *Eac3Settings { - s.DcFilter = &v +// SetIndexNSegments sets the IndexNSegments field's value. +func (s *HlsGroupSettings) SetIndexNSegments(v int64) *HlsGroupSettings { + s.IndexNSegments = &v return s } -// SetDialnorm sets the Dialnorm field's value. -func (s *Eac3Settings) SetDialnorm(v int64) *Eac3Settings { - s.Dialnorm = &v +// SetInputLossAction sets the InputLossAction field's value. +func (s *HlsGroupSettings) SetInputLossAction(v string) *HlsGroupSettings { + s.InputLossAction = &v return s } -// SetDrcLine sets the DrcLine field's value. -func (s *Eac3Settings) SetDrcLine(v string) *Eac3Settings { - s.DrcLine = &v +// SetIvInManifest sets the IvInManifest field's value. 
+func (s *HlsGroupSettings) SetIvInManifest(v string) *HlsGroupSettings { + s.IvInManifest = &v return s } -// SetDrcRf sets the DrcRf field's value. -func (s *Eac3Settings) SetDrcRf(v string) *Eac3Settings { - s.DrcRf = &v +// SetIvSource sets the IvSource field's value. +func (s *HlsGroupSettings) SetIvSource(v string) *HlsGroupSettings { + s.IvSource = &v return s } -// SetLfeControl sets the LfeControl field's value. -func (s *Eac3Settings) SetLfeControl(v string) *Eac3Settings { - s.LfeControl = &v +// SetKeepSegments sets the KeepSegments field's value. +func (s *HlsGroupSettings) SetKeepSegments(v int64) *HlsGroupSettings { + s.KeepSegments = &v return s } -// SetLfeFilter sets the LfeFilter field's value. -func (s *Eac3Settings) SetLfeFilter(v string) *Eac3Settings { - s.LfeFilter = &v +// SetKeyFormat sets the KeyFormat field's value. +func (s *HlsGroupSettings) SetKeyFormat(v string) *HlsGroupSettings { + s.KeyFormat = &v return s } -// SetLoRoCenterMixLevel sets the LoRoCenterMixLevel field's value. -func (s *Eac3Settings) SetLoRoCenterMixLevel(v float64) *Eac3Settings { - s.LoRoCenterMixLevel = &v +// SetKeyFormatVersions sets the KeyFormatVersions field's value. +func (s *HlsGroupSettings) SetKeyFormatVersions(v string) *HlsGroupSettings { + s.KeyFormatVersions = &v return s } -// SetLoRoSurroundMixLevel sets the LoRoSurroundMixLevel field's value. -func (s *Eac3Settings) SetLoRoSurroundMixLevel(v float64) *Eac3Settings { - s.LoRoSurroundMixLevel = &v +// SetKeyProviderSettings sets the KeyProviderSettings field's value. +func (s *HlsGroupSettings) SetKeyProviderSettings(v *KeyProviderSettings) *HlsGroupSettings { + s.KeyProviderSettings = v return s } -// SetLtRtCenterMixLevel sets the LtRtCenterMixLevel field's value. -func (s *Eac3Settings) SetLtRtCenterMixLevel(v float64) *Eac3Settings { - s.LtRtCenterMixLevel = &v +// SetManifestCompression sets the ManifestCompression field's value. +func (s *HlsGroupSettings) SetManifestCompression(v string) *HlsGroupSettings { + s.ManifestCompression = &v return s } -// SetLtRtSurroundMixLevel sets the LtRtSurroundMixLevel field's value. -func (s *Eac3Settings) SetLtRtSurroundMixLevel(v float64) *Eac3Settings { - s.LtRtSurroundMixLevel = &v +// SetManifestDurationFormat sets the ManifestDurationFormat field's value. +func (s *HlsGroupSettings) SetManifestDurationFormat(v string) *HlsGroupSettings { + s.ManifestDurationFormat = &v return s } -// SetMetadataControl sets the MetadataControl field's value. -func (s *Eac3Settings) SetMetadataControl(v string) *Eac3Settings { - s.MetadataControl = &v +// SetMinSegmentLength sets the MinSegmentLength field's value. +func (s *HlsGroupSettings) SetMinSegmentLength(v int64) *HlsGroupSettings { + s.MinSegmentLength = &v return s } -// SetPassthroughControl sets the PassthroughControl field's value. -func (s *Eac3Settings) SetPassthroughControl(v string) *Eac3Settings { - s.PassthroughControl = &v +// SetMode sets the Mode field's value. +func (s *HlsGroupSettings) SetMode(v string) *HlsGroupSettings { + s.Mode = &v return s } -// SetPhaseControl sets the PhaseControl field's value. -func (s *Eac3Settings) SetPhaseControl(v string) *Eac3Settings { - s.PhaseControl = &v +// SetOutputSelection sets the OutputSelection field's value. +func (s *HlsGroupSettings) SetOutputSelection(v string) *HlsGroupSettings { + s.OutputSelection = &v return s } -// SetStereoDownmix sets the StereoDownmix field's value. 
-func (s *Eac3Settings) SetStereoDownmix(v string) *Eac3Settings { - s.StereoDownmix = &v +// SetProgramDateTime sets the ProgramDateTime field's value. +func (s *HlsGroupSettings) SetProgramDateTime(v string) *HlsGroupSettings { + s.ProgramDateTime = &v return s } -// SetSurroundExMode sets the SurroundExMode field's value. -func (s *Eac3Settings) SetSurroundExMode(v string) *Eac3Settings { - s.SurroundExMode = &v +// SetProgramDateTimePeriod sets the ProgramDateTimePeriod field's value. +func (s *HlsGroupSettings) SetProgramDateTimePeriod(v int64) *HlsGroupSettings { + s.ProgramDateTimePeriod = &v return s } -// SetSurroundMode sets the SurroundMode field's value. -func (s *Eac3Settings) SetSurroundMode(v string) *Eac3Settings { - s.SurroundMode = &v +// SetSegmentLength sets the SegmentLength field's value. +func (s *HlsGroupSettings) SetSegmentLength(v int64) *HlsGroupSettings { + s.SegmentLength = &v return s } -type EmbeddedDestinationSettings struct { - _ struct{} `type:"structure"` +// SetSegmentationMode sets the SegmentationMode field's value. +func (s *HlsGroupSettings) SetSegmentationMode(v string) *HlsGroupSettings { + s.SegmentationMode = &v + return s } -// String returns the string representation -func (s EmbeddedDestinationSettings) String() string { - return awsutil.Prettify(s) +// SetSegmentsPerSubdirectory sets the SegmentsPerSubdirectory field's value. +func (s *HlsGroupSettings) SetSegmentsPerSubdirectory(v int64) *HlsGroupSettings { + s.SegmentsPerSubdirectory = &v + return s } -// GoString returns the string representation -func (s EmbeddedDestinationSettings) GoString() string { - return s.String() +// SetStreamInfResolution sets the StreamInfResolution field's value. +func (s *HlsGroupSettings) SetStreamInfResolution(v string) *HlsGroupSettings { + s.StreamInfResolution = &v + return s } -type EmbeddedPlusScte20DestinationSettings struct { - _ struct{} `type:"structure"` +// SetTimedMetadataId3Frame sets the TimedMetadataId3Frame field's value. +func (s *HlsGroupSettings) SetTimedMetadataId3Frame(v string) *HlsGroupSettings { + s.TimedMetadataId3Frame = &v + return s } -// String returns the string representation -func (s EmbeddedPlusScte20DestinationSettings) String() string { - return awsutil.Prettify(s) +// SetTimedMetadataId3Period sets the TimedMetadataId3Period field's value. +func (s *HlsGroupSettings) SetTimedMetadataId3Period(v int64) *HlsGroupSettings { + s.TimedMetadataId3Period = &v + return s } -// GoString returns the string representation -func (s EmbeddedPlusScte20DestinationSettings) GoString() string { - return s.String() +// SetTimestampDeltaMilliseconds sets the TimestampDeltaMilliseconds field's value. +func (s *HlsGroupSettings) SetTimestampDeltaMilliseconds(v int64) *HlsGroupSettings { + s.TimestampDeltaMilliseconds = &v + return s } -type EmbeddedSourceSettings struct { +// SetTsFileMode sets the TsFileMode field's value. +func (s *HlsGroupSettings) SetTsFileMode(v string) *HlsGroupSettings { + s.TsFileMode = &v + return s +} + +type HlsInputSettings struct { _ struct{} `type:"structure"` - // If upconvert, 608 data is both passed through via the "608 compatibility - // bytes" fields of the 708 wrapper as well as translated into 708. 708 data - // present in the source content will be discarded. 
- Convert608To708 *string `locationName:"convert608To708" type:"string" enum:"EmbeddedConvert608To708"` + // When specified the HLS stream with the m3u8 BANDWIDTH that most closely matches + // this value will be chosen, otherwise the highest bandwidth stream in the + // m3u8 will be chosen. The bitrate is specified in bits per second, as in an + // HLS manifest. + Bandwidth *int64 `locationName:"bandwidth" type:"integer"` - // Set to "auto" to handle streams with intermittent and/or non-aligned SCTE-20 - // and Embedded captions. - Scte20Detection *string `locationName:"scte20Detection" type:"string" enum:"EmbeddedScte20Detection"` + // When specified, reading of the HLS input will begin this many buffer segments + // from the end (most recently written segment). When not specified, the HLS + // input will begin with the first segment specified in the m3u8. + BufferSegments *int64 `locationName:"bufferSegments" type:"integer"` - // Specifies the 608/708 channel number within the video track from which to - // extract captions. Unused for passthrough. - Source608ChannelNumber *int64 `locationName:"source608ChannelNumber" min:"1" type:"integer"` + // The number of consecutive times that attempts to read a manifest or segment + // must fail before the input is considered unavailable. + Retries *int64 `locationName:"retries" type:"integer"` - // This field is unused and deprecated. - Source608TrackNumber *int64 `locationName:"source608TrackNumber" min:"1" type:"integer"` + // The number of seconds between retries when an attempt to read a manifest + // or segment fails. + RetryInterval *int64 `locationName:"retryInterval" type:"integer"` } // String returns the string representation -func (s EmbeddedSourceSettings) String() string { +func (s HlsInputSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EmbeddedSourceSettings) GoString() string { +func (s HlsInputSettings) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *EmbeddedSourceSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "EmbeddedSourceSettings"} - if s.Source608ChannelNumber != nil && *s.Source608ChannelNumber < 1 { - invalidParams.Add(request.NewErrParamMinValue("Source608ChannelNumber", 1)) - } - if s.Source608TrackNumber != nil && *s.Source608TrackNumber < 1 { - invalidParams.Add(request.NewErrParamMinValue("Source608TrackNumber", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetConvert608To708 sets the Convert608To708 field's value. -func (s *EmbeddedSourceSettings) SetConvert608To708(v string) *EmbeddedSourceSettings { - s.Convert608To708 = &v +// SetBandwidth sets the Bandwidth field's value. +func (s *HlsInputSettings) SetBandwidth(v int64) *HlsInputSettings { + s.Bandwidth = &v return s } -// SetScte20Detection sets the Scte20Detection field's value. -func (s *EmbeddedSourceSettings) SetScte20Detection(v string) *EmbeddedSourceSettings { - s.Scte20Detection = &v +// SetBufferSegments sets the BufferSegments field's value. +func (s *HlsInputSettings) SetBufferSegments(v int64) *HlsInputSettings { + s.BufferSegments = &v return s } -// SetSource608ChannelNumber sets the Source608ChannelNumber field's value. -func (s *EmbeddedSourceSettings) SetSource608ChannelNumber(v int64) *EmbeddedSourceSettings { - s.Source608ChannelNumber = &v +// SetRetries sets the Retries field's value. 
+func (s *HlsInputSettings) SetRetries(v int64) *HlsInputSettings { + s.Retries = &v return s } -// SetSource608TrackNumber sets the Source608TrackNumber field's value. -func (s *EmbeddedSourceSettings) SetSource608TrackNumber(v int64) *EmbeddedSourceSettings { - s.Source608TrackNumber = &v +// SetRetryInterval sets the RetryInterval field's value. +func (s *HlsInputSettings) SetRetryInterval(v int64) *HlsInputSettings { + s.RetryInterval = &v return s } -type EncoderSettings struct { +type HlsMediaStoreSettings struct { _ struct{} `type:"structure"` - // AudioDescriptions is a required field - AudioDescriptions []*AudioDescription `locationName:"audioDescriptions" type:"list" required:"true"` - - // Settings for ad avail blanking. - AvailBlanking *AvailBlanking `locationName:"availBlanking" type:"structure"` - - // Event-wide configuration settings for ad avail insertion. - AvailConfiguration *AvailConfiguration `locationName:"availConfiguration" type:"structure"` - - // Settings for blackout slate. - BlackoutSlate *BlackoutSlate `locationName:"blackoutSlate" type:"structure"` - - // Settings for caption decriptions - CaptionDescriptions []*CaptionDescription `locationName:"captionDescriptions" type:"list"` + // Number of seconds to wait before retrying connection to the CDN if the connection + // is lost. + ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` - // Configuration settings that apply to the event as a whole. - GlobalConfiguration *GlobalConfiguration `locationName:"globalConfiguration" type:"structure"` + // Size in seconds of file cache for streaming outputs. + FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` - // OutputGroups is a required field - OutputGroups []*OutputGroup `locationName:"outputGroups" type:"list" required:"true"` + // When set to temporal, output files are stored in non-persistent memory for + // faster reading and writing. + MediaStoreStorageClass *string `locationName:"mediaStoreStorageClass" type:"string" enum:"HlsMediaStoreStorageClass"` - // Contains settings used to acquire and adjust timecode information from inputs. - // - // TimecodeConfig is a required field - TimecodeConfig *TimecodeConfig `locationName:"timecodeConfig" type:"structure" required:"true"` + // Number of retry attempts that will be made before the Live Event is put into + // an error state. + NumRetries *int64 `locationName:"numRetries" type:"integer"` - // VideoDescriptions is a required field - VideoDescriptions []*VideoDescription `locationName:"videoDescriptions" type:"list" required:"true"` + // If a streaming output fails, number of seconds to wait until a restart is + // initiated. A value of 0 means never restart. + RestartDelay *int64 `locationName:"restartDelay" type:"integer"` } // String returns the string representation -func (s EncoderSettings) String() string { +func (s HlsMediaStoreSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EncoderSettings) GoString() string { +func (s HlsMediaStoreSettings) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *EncoderSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "EncoderSettings"} - if s.AudioDescriptions == nil { - invalidParams.Add(request.NewErrParamRequired("AudioDescriptions")) - } - if s.OutputGroups == nil { - invalidParams.Add(request.NewErrParamRequired("OutputGroups")) - } - if s.TimecodeConfig == nil { - invalidParams.Add(request.NewErrParamRequired("TimecodeConfig")) - } - if s.VideoDescriptions == nil { - invalidParams.Add(request.NewErrParamRequired("VideoDescriptions")) - } - if s.AudioDescriptions != nil { - for i, v := range s.AudioDescriptions { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AudioDescriptions", i), err.(request.ErrInvalidParams)) - } - } - } - if s.AvailBlanking != nil { - if err := s.AvailBlanking.Validate(); err != nil { - invalidParams.AddNested("AvailBlanking", err.(request.ErrInvalidParams)) - } - } - if s.AvailConfiguration != nil { - if err := s.AvailConfiguration.Validate(); err != nil { - invalidParams.AddNested("AvailConfiguration", err.(request.ErrInvalidParams)) - } - } - if s.BlackoutSlate != nil { - if err := s.BlackoutSlate.Validate(); err != nil { - invalidParams.AddNested("BlackoutSlate", err.(request.ErrInvalidParams)) - } - } - if s.CaptionDescriptions != nil { - for i, v := range s.CaptionDescriptions { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionDescriptions", i), err.(request.ErrInvalidParams)) - } - } - } - if s.GlobalConfiguration != nil { - if err := s.GlobalConfiguration.Validate(); err != nil { - invalidParams.AddNested("GlobalConfiguration", err.(request.ErrInvalidParams)) - } - } - if s.OutputGroups != nil { - for i, v := range s.OutputGroups { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OutputGroups", i), err.(request.ErrInvalidParams)) - } - } - } - if s.TimecodeConfig != nil { - if err := s.TimecodeConfig.Validate(); err != nil { - invalidParams.AddNested("TimecodeConfig", err.(request.ErrInvalidParams)) - } - } - if s.VideoDescriptions != nil { - for i, v := range s.VideoDescriptions { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "VideoDescriptions", i), err.(request.ErrInvalidParams)) - } - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetAudioDescriptions sets the AudioDescriptions field's value. -func (s *EncoderSettings) SetAudioDescriptions(v []*AudioDescription) *EncoderSettings { - s.AudioDescriptions = v - return s -} - -// SetAvailBlanking sets the AvailBlanking field's value. -func (s *EncoderSettings) SetAvailBlanking(v *AvailBlanking) *EncoderSettings { - s.AvailBlanking = v - return s -} - -// SetAvailConfiguration sets the AvailConfiguration field's value. -func (s *EncoderSettings) SetAvailConfiguration(v *AvailConfiguration) *EncoderSettings { - s.AvailConfiguration = v - return s -} - -// SetBlackoutSlate sets the BlackoutSlate field's value. -func (s *EncoderSettings) SetBlackoutSlate(v *BlackoutSlate) *EncoderSettings { - s.BlackoutSlate = v - return s -} - -// SetCaptionDescriptions sets the CaptionDescriptions field's value. -func (s *EncoderSettings) SetCaptionDescriptions(v []*CaptionDescription) *EncoderSettings { - s.CaptionDescriptions = v +// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. 
+func (s *HlsMediaStoreSettings) SetConnectionRetryInterval(v int64) *HlsMediaStoreSettings { + s.ConnectionRetryInterval = &v return s } -// SetGlobalConfiguration sets the GlobalConfiguration field's value. -func (s *EncoderSettings) SetGlobalConfiguration(v *GlobalConfiguration) *EncoderSettings { - s.GlobalConfiguration = v +// SetFilecacheDuration sets the FilecacheDuration field's value. +func (s *HlsMediaStoreSettings) SetFilecacheDuration(v int64) *HlsMediaStoreSettings { + s.FilecacheDuration = &v return s } -// SetOutputGroups sets the OutputGroups field's value. -func (s *EncoderSettings) SetOutputGroups(v []*OutputGroup) *EncoderSettings { - s.OutputGroups = v +// SetMediaStoreStorageClass sets the MediaStoreStorageClass field's value. +func (s *HlsMediaStoreSettings) SetMediaStoreStorageClass(v string) *HlsMediaStoreSettings { + s.MediaStoreStorageClass = &v return s } -// SetTimecodeConfig sets the TimecodeConfig field's value. -func (s *EncoderSettings) SetTimecodeConfig(v *TimecodeConfig) *EncoderSettings { - s.TimecodeConfig = v +// SetNumRetries sets the NumRetries field's value. +func (s *HlsMediaStoreSettings) SetNumRetries(v int64) *HlsMediaStoreSettings { + s.NumRetries = &v return s } -// SetVideoDescriptions sets the VideoDescriptions field's value. -func (s *EncoderSettings) SetVideoDescriptions(v []*VideoDescription) *EncoderSettings { - s.VideoDescriptions = v +// SetRestartDelay sets the RestartDelay field's value. +func (s *HlsMediaStoreSettings) SetRestartDelay(v int64) *HlsMediaStoreSettings { + s.RestartDelay = &v return s } -type FecOutputSettings struct { +type HlsOutputSettings struct { _ struct{} `type:"structure"` - // Parameter D from SMPTE 2022-1. The height of the FEC protection matrix. The - // number of transport stream packets per column error correction packet. Must - // be between 4 and 20, inclusive. - ColumnDepth *int64 `locationName:"columnDepth" min:"4" type:"integer"` - - // Enables column only or column and row based FEC - IncludeFec *string `locationName:"includeFec" type:"string" enum:"FecOutputIncludeFec"` + // Settings regarding the underlying stream. These settings are different for + // audio-only outputs. + // + // HlsSettings is a required field + HlsSettings *HlsSettings `locationName:"hlsSettings" type:"structure" required:"true"` - // Parameter L from SMPTE 2022-1. The width of the FEC protection matrix. Must - // be between 1 and 20, inclusive. If only Column FEC is used, then larger values - // increase robustness. If Row FEC is used, then this is the number of transport - // stream packets per row error correction packet, and the value must be between - // 4 and 20, inclusive, if includeFec is columnAndRow. If includeFec is column, - // this value must be 1 to 20, inclusive. - RowLength *int64 `locationName:"rowLength" min:"1" type:"integer"` + // String concatenated to the end of the destination filename. Accepts \"Format + // Identifiers\":#formatIdentifierParameters. + NameModifier *string `locationName:"nameModifier" min:"1" type:"string"` + + // String concatenated to end of segment filenames. 
+ SegmentModifier *string `locationName:"segmentModifier" type:"string"` } // String returns the string representation -func (s FecOutputSettings) String() string { +func (s HlsOutputSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s FecOutputSettings) GoString() string { +func (s HlsOutputSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *FecOutputSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "FecOutputSettings"} - if s.ColumnDepth != nil && *s.ColumnDepth < 4 { - invalidParams.Add(request.NewErrParamMinValue("ColumnDepth", 4)) +func (s *HlsOutputSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HlsOutputSettings"} + if s.HlsSettings == nil { + invalidParams.Add(request.NewErrParamRequired("HlsSettings")) } - if s.RowLength != nil && *s.RowLength < 1 { - invalidParams.Add(request.NewErrParamMinValue("RowLength", 1)) + if s.NameModifier != nil && len(*s.NameModifier) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NameModifier", 1)) + } + if s.HlsSettings != nil { + if err := s.HlsSettings.Validate(); err != nil { + invalidParams.AddNested("HlsSettings", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -5483,72 +8862,53 @@ func (s *FecOutputSettings) Validate() error { return nil } -// SetColumnDepth sets the ColumnDepth field's value. -func (s *FecOutputSettings) SetColumnDepth(v int64) *FecOutputSettings { - s.ColumnDepth = &v +// SetHlsSettings sets the HlsSettings field's value. +func (s *HlsOutputSettings) SetHlsSettings(v *HlsSettings) *HlsOutputSettings { + s.HlsSettings = v return s } -// SetIncludeFec sets the IncludeFec field's value. -func (s *FecOutputSettings) SetIncludeFec(v string) *FecOutputSettings { - s.IncludeFec = &v +// SetNameModifier sets the NameModifier field's value. +func (s *HlsOutputSettings) SetNameModifier(v string) *HlsOutputSettings { + s.NameModifier = &v return s } -// SetRowLength sets the RowLength field's value. -func (s *FecOutputSettings) SetRowLength(v int64) *FecOutputSettings { - s.RowLength = &v +// SetSegmentModifier sets the SegmentModifier field's value. +func (s *HlsOutputSettings) SetSegmentModifier(v string) *HlsOutputSettings { + s.SegmentModifier = &v return s } -type GlobalConfiguration struct { +type HlsSettings struct { _ struct{} `type:"structure"` - // Value to set the initial audio gain for the Live Event. - InitialAudioGain *int64 `locationName:"initialAudioGain" type:"integer"` - - // Indicates the action to take when an input completes (e.g. end-of-file.) - // Options include immediately switching to the next sequential input (via "switchInput"), - // switching to the next input and looping back to the first input when last - // input ends (via "switchAndLoopInputs") or not switching inputs and instead - // transcoding black / color / slate images per the "Input Loss Behavior" configuration - // until an activateInput REST command is received (via "none"). - InputEndAction *string `locationName:"inputEndAction" type:"string" enum:"GlobalConfigurationInputEndAction"` - - // Settings for system actions when input is lost. 
- InputLossBehavior *InputLossBehavior `locationName:"inputLossBehavior" type:"structure"` - - // Indicates whether the rate of frames emitted by the Live encoder should be - // paced by its system clock (which optionally may be locked to another source - // via NTP) or should be locked to the clock of the source that is providing - // the input stream. - OutputTimingSource *string `locationName:"outputTimingSource" type:"string" enum:"GlobalConfigurationOutputTimingSource"` + AudioOnlyHlsSettings *AudioOnlyHlsSettings `locationName:"audioOnlyHlsSettings" type:"structure"` - // Adjusts video input buffer for streams with very low video framerates. This - // is commonly set to enabled for music channels with less than one video frame - // per second. - SupportLowFramerateInputs *string `locationName:"supportLowFramerateInputs" type:"string" enum:"GlobalConfigurationLowFramerateInputs"` + StandardHlsSettings *StandardHlsSettings `locationName:"standardHlsSettings" type:"structure"` } // String returns the string representation -func (s GlobalConfiguration) String() string { +func (s HlsSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GlobalConfiguration) GoString() string { +func (s HlsSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GlobalConfiguration) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GlobalConfiguration"} - if s.InitialAudioGain != nil && *s.InitialAudioGain < -60 { - invalidParams.Add(request.NewErrParamMinValue("InitialAudioGain", -60)) +func (s *HlsSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HlsSettings"} + if s.AudioOnlyHlsSettings != nil { + if err := s.AudioOnlyHlsSettings.Validate(); err != nil { + invalidParams.AddNested("AudioOnlyHlsSettings", err.(request.ErrInvalidParams)) + } } - if s.InputLossBehavior != nil { - if err := s.InputLossBehavior.Validate(); err != nil { - invalidParams.AddNested("InputLossBehavior", err.(request.ErrInvalidParams)) + if s.StandardHlsSettings != nil { + if err := s.StandardHlsSettings.Validate(); err != nil { + invalidParams.AddNested("StandardHlsSettings", err.(request.ErrInvalidParams)) } } @@ -5558,816 +8918,631 @@ func (s *GlobalConfiguration) Validate() error { return nil } -// SetInitialAudioGain sets the InitialAudioGain field's value. -func (s *GlobalConfiguration) SetInitialAudioGain(v int64) *GlobalConfiguration { - s.InitialAudioGain = &v - return s -} - -// SetInputEndAction sets the InputEndAction field's value. -func (s *GlobalConfiguration) SetInputEndAction(v string) *GlobalConfiguration { - s.InputEndAction = &v - return s -} - -// SetInputLossBehavior sets the InputLossBehavior field's value. -func (s *GlobalConfiguration) SetInputLossBehavior(v *InputLossBehavior) *GlobalConfiguration { - s.InputLossBehavior = v - return s -} - -// SetOutputTimingSource sets the OutputTimingSource field's value. -func (s *GlobalConfiguration) SetOutputTimingSource(v string) *GlobalConfiguration { - s.OutputTimingSource = &v +// SetAudioOnlyHlsSettings sets the AudioOnlyHlsSettings field's value. +func (s *HlsSettings) SetAudioOnlyHlsSettings(v *AudioOnlyHlsSettings) *HlsSettings { + s.AudioOnlyHlsSettings = v return s } -// SetSupportLowFramerateInputs sets the SupportLowFramerateInputs field's value. 
-func (s *GlobalConfiguration) SetSupportLowFramerateInputs(v string) *GlobalConfiguration { - s.SupportLowFramerateInputs = &v +// SetStandardHlsSettings sets the StandardHlsSettings field's value. +func (s *HlsSettings) SetStandardHlsSettings(v *StandardHlsSettings) *HlsSettings { + s.StandardHlsSettings = v return s } -type H264Settings struct { +type HlsWebdavSettings struct { _ struct{} `type:"structure"` - // Adaptive quantization. Allows intra-frame quantizers to vary to improve visual - // quality. - AdaptiveQuantization *string `locationName:"adaptiveQuantization" type:"string" enum:"H264AdaptiveQuantization"` - - // Indicates that AFD values will be written into the output stream. If afdSignaling - // is "auto", the system will try to preserve the input AFD value (in cases - // where multiple AFD values are valid). If set to "fixed", the AFD value will - // be the value configured in the fixedAfd parameter. - AfdSignaling *string `locationName:"afdSignaling" type:"string" enum:"AfdSignaling"` - - // Average bitrate in bits/second. Required for VBR, CBR, and ABR. For MS Smooth - // outputs, bitrates must be unique when rounded down to the nearest multiple - // of 1000. - Bitrate *int64 `locationName:"bitrate" min:"1000" type:"integer"` - - // Percentage of the buffer that should initially be filled (HRD buffer model). - BufFillPct *int64 `locationName:"bufFillPct" type:"integer"` - - // Size of buffer (HRD buffer model) in bits/second. - BufSize *int64 `locationName:"bufSize" type:"integer"` - - // Includes colorspace metadata in the output. - ColorMetadata *string `locationName:"colorMetadata" type:"string" enum:"H264ColorMetadata"` - - // Entropy encoding mode. Use cabac (must be in Main or High profile) or cavlc. - EntropyEncoding *string `locationName:"entropyEncoding" type:"string" enum:"H264EntropyEncoding"` - - // Four bit AFD value to write on all frames of video in the output stream. - // Only valid when afdSignaling is set to 'Fixed'. - FixedAfd *string `locationName:"fixedAfd" type:"string" enum:"FixedAfd"` - - // If set to enabled, adjust quantization within each frame to reduce flicker - // or 'pop' on I-frames. - FlickerAq *string `locationName:"flickerAq" type:"string" enum:"H264FlickerAq"` - - // This field indicates how the output video frame rate is specified. If "specified" - // is selected then the output video frame rate is determined by framerateNumerator - // and framerateDenominator, else if "initializeFromSource" is selected then - // the output video frame rate will be set equal to the input video frame rate - // of the first input. - FramerateControl *string `locationName:"framerateControl" type:"string" enum:"H264FramerateControl"` - - // Framerate denominator. - FramerateDenominator *int64 `locationName:"framerateDenominator" type:"integer"` - - // Framerate numerator - framerate is a fraction, e.g. 24000 / 1001 = 23.976 - // fps. - FramerateNumerator *int64 `locationName:"framerateNumerator" type:"integer"` - - // If enabled, use reference B frames for GOP structures that have B frames - // > 1. - GopBReference *string `locationName:"gopBReference" type:"string" enum:"H264GopBReference"` - - // Frequency of closed GOPs. In streaming applications, it is recommended that - // this be set to 1 so a decoder joining mid-stream will receive an IDR frame - // as quickly as possible. Setting this value to 0 will break output segmenting. 
- GopClosedCadence *int64 `locationName:"gopClosedCadence" type:"integer"` + // Number of seconds to wait before retrying connection to the CDN if the connection + // is lost. + ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` - // Number of B-frames between reference frames. - GopNumBFrames *int64 `locationName:"gopNumBFrames" type:"integer"` + // Size in seconds of file cache for streaming outputs. + FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` - // GOP size (keyframe interval) in units of either frames or seconds per gopSizeUnits. - // Must be greater than zero. - GopSize *float64 `locationName:"gopSize" type:"double"` + // Specify whether or not to use chunked transfer encoding to WebDAV. + HttpTransferMode *string `locationName:"httpTransferMode" type:"string" enum:"HlsWebdavHttpTransferMode"` - // Indicates if the gopSize is specified in frames or seconds. If seconds the - // system will convert the gopSize into a frame count at run time. - GopSizeUnits *string `locationName:"gopSizeUnits" type:"string" enum:"H264GopSizeUnits"` + // Number of retry attempts that will be made before the Live Event is put into + // an error state. + NumRetries *int64 `locationName:"numRetries" type:"integer"` - // H.264 Level. - Level *string `locationName:"level" type:"string" enum:"H264Level"` + // If a streaming output fails, number of seconds to wait until a restart is + // initiated. A value of 0 means never restart. + RestartDelay *int64 `locationName:"restartDelay" type:"integer"` +} - // Amount of lookahead. A value of low can decrease latency and memory usage, - // while high can produce better quality for certain content. - LookAheadRateControl *string `locationName:"lookAheadRateControl" type:"string" enum:"H264LookAheadRateControl"` +// String returns the string representation +func (s HlsWebdavSettings) String() string { + return awsutil.Prettify(s) +} - // Maximum bitrate in bits/second (for VBR mode only). - MaxBitrate *int64 `locationName:"maxBitrate" min:"1000" type:"integer"` +// GoString returns the string representation +func (s HlsWebdavSettings) GoString() string { + return s.String() +} - // Only meaningful if sceneChangeDetect is set to enabled. Enforces separation - // between repeated (cadence) I-frames and I-frames inserted by Scene Change - // Detection. If a scene change I-frame is within I-interval frames of a cadence - // I-frame, the GOP is shrunk and/or stretched to the scene change I-frame. - // GOP stretch requires enabling lookahead as well as setting I-interval. The - // normal cadence resumes for the next GOP. Note: Maximum GOP stretch = GOP - // size + Min-I-interval - 1 - MinIInterval *int64 `locationName:"minIInterval" type:"integer"` +// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. +func (s *HlsWebdavSettings) SetConnectionRetryInterval(v int64) *HlsWebdavSettings { + s.ConnectionRetryInterval = &v + return s +} - // Number of reference frames to use. The encoder may use more than requested - // if using B-frames and/or interlaced encoding. - NumRefFrames *int64 `locationName:"numRefFrames" min:"1" type:"integer"` +// SetFilecacheDuration sets the FilecacheDuration field's value. +func (s *HlsWebdavSettings) SetFilecacheDuration(v int64) *HlsWebdavSettings { + s.FilecacheDuration = &v + return s +} - // This field indicates how the output pixel aspect ratio is specified. 
If "specified" - // is selected then the output video pixel aspect ratio is determined by parNumerator - // and parDenominator, else if "initializeFromSource" is selected then the output - // pixsel aspect ratio will be set equal to the input video pixel aspect ratio - // of the first input. - ParControl *string `locationName:"parControl" type:"string" enum:"H264ParControl"` +// SetHttpTransferMode sets the HttpTransferMode field's value. +func (s *HlsWebdavSettings) SetHttpTransferMode(v string) *HlsWebdavSettings { + s.HttpTransferMode = &v + return s +} - // Pixel Aspect Ratio denominator. - ParDenominator *int64 `locationName:"parDenominator" min:"1" type:"integer"` +// SetNumRetries sets the NumRetries field's value. +func (s *HlsWebdavSettings) SetNumRetries(v int64) *HlsWebdavSettings { + s.NumRetries = &v + return s +} - // Pixel Aspect Ratio numerator. - ParNumerator *int64 `locationName:"parNumerator" type:"integer"` +// SetRestartDelay sets the RestartDelay field's value. +func (s *HlsWebdavSettings) SetRestartDelay(v int64) *HlsWebdavSettings { + s.RestartDelay = &v + return s +} - // H.264 Profile. - Profile *string `locationName:"profile" type:"string" enum:"H264Profile"` +type Input struct { + _ struct{} `type:"structure"` - // Rate control mode. - RateControlMode *string `locationName:"rateControlMode" type:"string" enum:"H264RateControlMode"` + // The Unique ARN of the input (generated, immutable). + Arn *string `locationName:"arn" type:"string"` - // Sets the scan type of the output to progressive or top-field-first interlaced. - ScanType *string `locationName:"scanType" type:"string" enum:"H264ScanType"` + // A list of channel IDs that that input is attached to (currently an input + // can only be attached to one channel). + AttachedChannels []*string `locationName:"attachedChannels" type:"list"` - // Scene change detection. Inserts I-frames on scene changes when enabled. - SceneChangeDetect *string `locationName:"sceneChangeDetect" type:"string" enum:"H264SceneChangeDetect"` + // A list of the destinations of the input (PUSH-type). + Destinations []*InputDestination `locationName:"destinations" type:"list"` - // Number of slices per picture. Must be less than or equal to the number of - // macroblock rows for progressive pictures, and less than or equal to half - // the number of macroblock rows for interlaced pictures.This field is optional; - // when no value is specified the encoder will choose the number of slices based - // on encode resolution. - Slices *int64 `locationName:"slices" min:"1" type:"integer"` + // The generated ID of the input (unique for user account, immutable). + Id *string `locationName:"id" type:"string"` - // Softness. Selects quantizer matrix, larger values reduce high-frequency content - // in the encoded image. - Softness *int64 `locationName:"softness" type:"integer"` + // The user-assigned name (This is a mutable value). + Name *string `locationName:"name" type:"string"` - // If set to enabled, adjust quantization within each frame based on spatial - // variation of content complexity. - SpatialAq *string `locationName:"spatialAq" type:"string" enum:"H264SpatialAq"` + // A list of IDs for all the security groups attached to the input. + SecurityGroups []*string `locationName:"securityGroups" type:"list"` - // Produces a bitstream compliant with SMPTE RP-2027. - Syntax *string `locationName:"syntax" type:"string" enum:"H264Syntax"` + // A list of the sources of the input (PULL-type). 
+ Sources []*InputSource `locationName:"sources" type:"list"` - // If set to enabled, adjust quantization within each frame based on temporal - // variation of content complexity. - TemporalAq *string `locationName:"temporalAq" type:"string" enum:"H264TemporalAq"` + State *string `locationName:"state" type:"string" enum:"InputState"` - // Determines how timecodes should be inserted into the video elementary stream.- - // 'disabled': Do not include timecodes- 'picTimingSei': Pass through picture - // timing SEI messages from the source specified in Timecode Config - TimecodeInsertion *string `locationName:"timecodeInsertion" type:"string" enum:"H264TimecodeInsertionBehavior"` + Type *string `locationName:"type" type:"string" enum:"InputType"` } // String returns the string representation -func (s H264Settings) String() string { +func (s Input) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s H264Settings) GoString() string { +func (s Input) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *H264Settings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "H264Settings"} - if s.Bitrate != nil && *s.Bitrate < 1000 { - invalidParams.Add(request.NewErrParamMinValue("Bitrate", 1000)) - } - if s.MaxBitrate != nil && *s.MaxBitrate < 1000 { - invalidParams.Add(request.NewErrParamMinValue("MaxBitrate", 1000)) - } - if s.NumRefFrames != nil && *s.NumRefFrames < 1 { - invalidParams.Add(request.NewErrParamMinValue("NumRefFrames", 1)) - } - if s.ParDenominator != nil && *s.ParDenominator < 1 { - invalidParams.Add(request.NewErrParamMinValue("ParDenominator", 1)) - } - if s.Slices != nil && *s.Slices < 1 { - invalidParams.Add(request.NewErrParamMinValue("Slices", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetAdaptiveQuantization sets the AdaptiveQuantization field's value. -func (s *H264Settings) SetAdaptiveQuantization(v string) *H264Settings { - s.AdaptiveQuantization = &v +// SetArn sets the Arn field's value. +func (s *Input) SetArn(v string) *Input { + s.Arn = &v return s } -// SetAfdSignaling sets the AfdSignaling field's value. -func (s *H264Settings) SetAfdSignaling(v string) *H264Settings { - s.AfdSignaling = &v +// SetAttachedChannels sets the AttachedChannels field's value. +func (s *Input) SetAttachedChannels(v []*string) *Input { + s.AttachedChannels = v return s } -// SetBitrate sets the Bitrate field's value. -func (s *H264Settings) SetBitrate(v int64) *H264Settings { - s.Bitrate = &v +// SetDestinations sets the Destinations field's value. +func (s *Input) SetDestinations(v []*InputDestination) *Input { + s.Destinations = v return s } -// SetBufFillPct sets the BufFillPct field's value. -func (s *H264Settings) SetBufFillPct(v int64) *H264Settings { - s.BufFillPct = &v +// SetId sets the Id field's value. +func (s *Input) SetId(v string) *Input { + s.Id = &v return s } -// SetBufSize sets the BufSize field's value. -func (s *H264Settings) SetBufSize(v int64) *H264Settings { - s.BufSize = &v +// SetName sets the Name field's value. +func (s *Input) SetName(v string) *Input { + s.Name = &v return s } -// SetColorMetadata sets the ColorMetadata field's value. -func (s *H264Settings) SetColorMetadata(v string) *H264Settings { - s.ColorMetadata = &v +// SetSecurityGroups sets the SecurityGroups field's value. 
+func (s *Input) SetSecurityGroups(v []*string) *Input { + s.SecurityGroups = v return s } -// SetEntropyEncoding sets the EntropyEncoding field's value. -func (s *H264Settings) SetEntropyEncoding(v string) *H264Settings { - s.EntropyEncoding = &v +// SetSources sets the Sources field's value. +func (s *Input) SetSources(v []*InputSource) *Input { + s.Sources = v return s } -// SetFixedAfd sets the FixedAfd field's value. -func (s *H264Settings) SetFixedAfd(v string) *H264Settings { - s.FixedAfd = &v +// SetState sets the State field's value. +func (s *Input) SetState(v string) *Input { + s.State = &v return s } -// SetFlickerAq sets the FlickerAq field's value. -func (s *H264Settings) SetFlickerAq(v string) *H264Settings { - s.FlickerAq = &v +// SetType sets the Type field's value. +func (s *Input) SetType(v string) *Input { + s.Type = &v return s } -// SetFramerateControl sets the FramerateControl field's value. -func (s *H264Settings) SetFramerateControl(v string) *H264Settings { - s.FramerateControl = &v - return s -} +type InputAttachment struct { + _ struct{} `type:"structure"` -// SetFramerateDenominator sets the FramerateDenominator field's value. -func (s *H264Settings) SetFramerateDenominator(v int64) *H264Settings { - s.FramerateDenominator = &v - return s -} + // User-specified name for the attachment. This is required if the user wants + // to use this input in an input switch action. + InputAttachmentName *string `locationName:"inputAttachmentName" type:"string"` -// SetFramerateNumerator sets the FramerateNumerator field's value. -func (s *H264Settings) SetFramerateNumerator(v int64) *H264Settings { - s.FramerateNumerator = &v - return s -} + // The ID of the input + InputId *string `locationName:"inputId" type:"string"` -// SetGopBReference sets the GopBReference field's value. -func (s *H264Settings) SetGopBReference(v string) *H264Settings { - s.GopBReference = &v - return s + // Settings of an input (caption selector, etc.) + InputSettings *InputSettings `locationName:"inputSettings" type:"structure"` } -// SetGopClosedCadence sets the GopClosedCadence field's value. -func (s *H264Settings) SetGopClosedCadence(v int64) *H264Settings { - s.GopClosedCadence = &v - return s +// String returns the string representation +func (s InputAttachment) String() string { + return awsutil.Prettify(s) } -// SetGopNumBFrames sets the GopNumBFrames field's value. -func (s *H264Settings) SetGopNumBFrames(v int64) *H264Settings { - s.GopNumBFrames = &v - return s +// GoString returns the string representation +func (s InputAttachment) GoString() string { + return s.String() } -// SetGopSize sets the GopSize field's value. -func (s *H264Settings) SetGopSize(v float64) *H264Settings { - s.GopSize = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputAttachment) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputAttachment"} + if s.InputSettings != nil { + if err := s.InputSettings.Validate(); err != nil { + invalidParams.AddNested("InputSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetGopSizeUnits sets the GopSizeUnits field's value. -func (s *H264Settings) SetGopSizeUnits(v string) *H264Settings { - s.GopSizeUnits = &v +// SetInputAttachmentName sets the InputAttachmentName field's value. 
+func (s *InputAttachment) SetInputAttachmentName(v string) *InputAttachment { + s.InputAttachmentName = &v return s } -// SetLevel sets the Level field's value. -func (s *H264Settings) SetLevel(v string) *H264Settings { - s.Level = &v +// SetInputId sets the InputId field's value. +func (s *InputAttachment) SetInputId(v string) *InputAttachment { + s.InputId = &v return s } -// SetLookAheadRateControl sets the LookAheadRateControl field's value. -func (s *H264Settings) SetLookAheadRateControl(v string) *H264Settings { - s.LookAheadRateControl = &v +// SetInputSettings sets the InputSettings field's value. +func (s *InputAttachment) SetInputSettings(v *InputSettings) *InputAttachment { + s.InputSettings = v return s } -// SetMaxBitrate sets the MaxBitrate field's value. -func (s *H264Settings) SetMaxBitrate(v int64) *H264Settings { - s.MaxBitrate = &v - return s +type InputChannelLevel struct { + _ struct{} `type:"structure"` + + // Remixing value. Units are in dB and acceptable values are within the range + // from -60 (mute) and 6 dB. + // + // Gain is a required field + Gain *int64 `locationName:"gain" type:"integer" required:"true"` + + // The index of the input channel used as a source. + // + // InputChannel is a required field + InputChannel *int64 `locationName:"inputChannel" type:"integer" required:"true"` } -// SetMinIInterval sets the MinIInterval field's value. -func (s *H264Settings) SetMinIInterval(v int64) *H264Settings { - s.MinIInterval = &v - return s +// String returns the string representation +func (s InputChannelLevel) String() string { + return awsutil.Prettify(s) } -// SetNumRefFrames sets the NumRefFrames field's value. -func (s *H264Settings) SetNumRefFrames(v int64) *H264Settings { - s.NumRefFrames = &v - return s +// GoString returns the string representation +func (s InputChannelLevel) GoString() string { + return s.String() } -// SetParControl sets the ParControl field's value. -func (s *H264Settings) SetParControl(v string) *H264Settings { - s.ParControl = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputChannelLevel) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputChannelLevel"} + if s.Gain == nil { + invalidParams.Add(request.NewErrParamRequired("Gain")) + } + if s.Gain != nil && *s.Gain < -60 { + invalidParams.Add(request.NewErrParamMinValue("Gain", -60)) + } + if s.InputChannel == nil { + invalidParams.Add(request.NewErrParamRequired("InputChannel")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetParDenominator sets the ParDenominator field's value. -func (s *H264Settings) SetParDenominator(v int64) *H264Settings { - s.ParDenominator = &v +// SetGain sets the Gain field's value. +func (s *InputChannelLevel) SetGain(v int64) *InputChannelLevel { + s.Gain = &v return s } -// SetParNumerator sets the ParNumerator field's value. -func (s *H264Settings) SetParNumerator(v int64) *H264Settings { - s.ParNumerator = &v +// SetInputChannel sets the InputChannel field's value. +func (s *InputChannelLevel) SetInputChannel(v int64) *InputChannelLevel { + s.InputChannel = &v return s } -// SetProfile sets the Profile field's value. -func (s *H264Settings) SetProfile(v string) *H264Settings { - s.Profile = &v - return s +// The settings for a PUSH type input. +type InputDestination struct { + _ struct{} `type:"structure"` + + // The system-generated static IP address of endpoint.It remains fixed for the + // lifetime of the input. 
+ Ip *string `locationName:"ip" type:"string"` + + // The port number for the input. + Port *string `locationName:"port" type:"string"` + + // This represents the endpoint that the customer stream will bepushed to. + Url *string `locationName:"url" type:"string"` } -// SetRateControlMode sets the RateControlMode field's value. -func (s *H264Settings) SetRateControlMode(v string) *H264Settings { - s.RateControlMode = &v - return s +// String returns the string representation +func (s InputDestination) String() string { + return awsutil.Prettify(s) } -// SetScanType sets the ScanType field's value. -func (s *H264Settings) SetScanType(v string) *H264Settings { - s.ScanType = &v - return s +// GoString returns the string representation +func (s InputDestination) GoString() string { + return s.String() } -// SetSceneChangeDetect sets the SceneChangeDetect field's value. -func (s *H264Settings) SetSceneChangeDetect(v string) *H264Settings { - s.SceneChangeDetect = &v +// SetIp sets the Ip field's value. +func (s *InputDestination) SetIp(v string) *InputDestination { + s.Ip = &v return s } -// SetSlices sets the Slices field's value. -func (s *H264Settings) SetSlices(v int64) *H264Settings { - s.Slices = &v +// SetPort sets the Port field's value. +func (s *InputDestination) SetPort(v string) *InputDestination { + s.Port = &v return s } -// SetSoftness sets the Softness field's value. -func (s *H264Settings) SetSoftness(v int64) *H264Settings { - s.Softness = &v +// SetUrl sets the Url field's value. +func (s *InputDestination) SetUrl(v string) *InputDestination { + s.Url = &v return s } -// SetSpatialAq sets the SpatialAq field's value. -func (s *H264Settings) SetSpatialAq(v string) *H264Settings { - s.SpatialAq = &v - return s +// Endpoint settings for a PUSH type input. +type InputDestinationRequest struct { + _ struct{} `type:"structure"` + + // A unique name for the location the RTMP stream is being pushedto. + StreamName *string `locationName:"streamName" type:"string"` } -// SetSyntax sets the Syntax field's value. -func (s *H264Settings) SetSyntax(v string) *H264Settings { - s.Syntax = &v - return s +// String returns the string representation +func (s InputDestinationRequest) String() string { + return awsutil.Prettify(s) } -// SetTemporalAq sets the TemporalAq field's value. -func (s *H264Settings) SetTemporalAq(v string) *H264Settings { - s.TemporalAq = &v - return s +// GoString returns the string representation +func (s InputDestinationRequest) GoString() string { + return s.String() } -// SetTimecodeInsertion sets the TimecodeInsertion field's value. -func (s *H264Settings) SetTimecodeInsertion(v string) *H264Settings { - s.TimecodeInsertion = &v +// SetStreamName sets the StreamName field's value. +func (s *InputDestinationRequest) SetStreamName(v string) *InputDestinationRequest { + s.StreamName = &v return s } -type HlsAkamaiSettings struct { +type InputLocation struct { _ struct{} `type:"structure"` - // Number of seconds to wait before retrying connection to the CDN if the connection - // is lost. - ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` - - // Size in seconds of file cache for streaming outputs. - FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` - - // Specify whether or not to use chunked transfer encoding to Akamai. User should - // contact Akamai to enable this feature. 
- HttpTransferMode *string `locationName:"httpTransferMode" type:"string" enum:"HlsAkamaiHttpTransferMode"` - - // Number of retry attempts that will be made before the Live Event is put into - // an error state. - NumRetries *int64 `locationName:"numRetries" type:"integer"` - - // If a streaming output fails, number of seconds to wait until a restart is - // initiated. A value of 0 means never restart. - RestartDelay *int64 `locationName:"restartDelay" type:"integer"` + // key used to extract the password from EC2 Parameter store + PasswordParam *string `locationName:"passwordParam" type:"string"` - // Salt for authenticated Akamai. - Salt *string `locationName:"salt" type:"string"` + // Uniform Resource Identifier - This should be a path to a file accessible + // to the Live system (eg. a http:// URI) depending on the output type. For + // example, a RTMP destination should have a uri simliar to: "rtmp://fmsserver/live". + // + // Uri is a required field + Uri *string `locationName:"uri" type:"string" required:"true"` - // Token parameter for authenticated akamai. If not specified, _gda_ is used. - Token *string `locationName:"token" type:"string"` + // Username if credentials are required to access a file or publishing point. + // This can be either a plaintext username, or a reference to an AWS parameter + // store name from which the username can be retrieved. AWS Parameter store + // format: "ssm://" + Username *string `locationName:"username" type:"string"` } // String returns the string representation -func (s HlsAkamaiSettings) String() string { +func (s InputLocation) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsAkamaiSettings) GoString() string { +func (s InputLocation) GoString() string { return s.String() } -// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. -func (s *HlsAkamaiSettings) SetConnectionRetryInterval(v int64) *HlsAkamaiSettings { - s.ConnectionRetryInterval = &v - return s -} - -// SetFilecacheDuration sets the FilecacheDuration field's value. -func (s *HlsAkamaiSettings) SetFilecacheDuration(v int64) *HlsAkamaiSettings { - s.FilecacheDuration = &v - return s -} - -// SetHttpTransferMode sets the HttpTransferMode field's value. -func (s *HlsAkamaiSettings) SetHttpTransferMode(v string) *HlsAkamaiSettings { - s.HttpTransferMode = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputLocation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputLocation"} + if s.Uri == nil { + invalidParams.Add(request.NewErrParamRequired("Uri")) + } -// SetNumRetries sets the NumRetries field's value. -func (s *HlsAkamaiSettings) SetNumRetries(v int64) *HlsAkamaiSettings { - s.NumRetries = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetRestartDelay sets the RestartDelay field's value. -func (s *HlsAkamaiSettings) SetRestartDelay(v int64) *HlsAkamaiSettings { - s.RestartDelay = &v +// SetPasswordParam sets the PasswordParam field's value. +func (s *InputLocation) SetPasswordParam(v string) *InputLocation { + s.PasswordParam = &v return s } -// SetSalt sets the Salt field's value. -func (s *HlsAkamaiSettings) SetSalt(v string) *HlsAkamaiSettings { - s.Salt = &v +// SetUri sets the Uri field's value. +func (s *InputLocation) SetUri(v string) *InputLocation { + s.Uri = &v return s } -// SetToken sets the Token field's value. 
-func (s *HlsAkamaiSettings) SetToken(v string) *HlsAkamaiSettings { - s.Token = &v +// SetUsername sets the Username field's value. +func (s *InputLocation) SetUsername(v string) *InputLocation { + s.Username = &v return s } -type HlsBasicPutSettings struct { +type InputLossBehavior struct { _ struct{} `type:"structure"` - // Number of seconds to wait before retrying connection to the CDN if the connection - // is lost. - ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` + // On input loss, the number of milliseconds to substitute black into the output + // before switching to the frame specified by inputLossImageType. A value x, + // where 0 <= x <= 1,000,000 and a value of 1,000,000 will be interpreted as + // infinite. + BlackFrameMsec *int64 `locationName:"blackFrameMsec" type:"integer"` - // Size in seconds of file cache for streaming outputs. - FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` + // When input loss image type is "color" this field specifies the color to use. + // Value: 6 hex characters representing the values of RGB. + InputLossImageColor *string `locationName:"inputLossImageColor" min:"6" type:"string"` - // Number of retry attempts that will be made before the Live Event is put into - // an error state. - NumRetries *int64 `locationName:"numRetries" type:"integer"` + // When input loss image type is "slate" these fields specify the parameters + // for accessing the slate. + InputLossImageSlate *InputLocation `locationName:"inputLossImageSlate" type:"structure"` - // If a streaming output fails, number of seconds to wait until a restart is - // initiated. A value of 0 means never restart. - RestartDelay *int64 `locationName:"restartDelay" type:"integer"` + // Indicates whether to substitute a solid color or a slate into the output + // after input loss exceeds blackFrameMsec. + InputLossImageType *string `locationName:"inputLossImageType" type:"string" enum:"InputLossImageType"` + + // On input loss, the number of milliseconds to repeat the previous picture + // before substituting black into the output. A value x, where 0 <= x <= 1,000,000 + // and a value of 1,000,000 will be interpreted as infinite. + RepeatFrameMsec *int64 `locationName:"repeatFrameMsec" type:"integer"` } // String returns the string representation -func (s HlsBasicPutSettings) String() string { +func (s InputLossBehavior) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsBasicPutSettings) GoString() string { +func (s InputLossBehavior) GoString() string { return s.String() } -// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. -func (s *HlsBasicPutSettings) SetConnectionRetryInterval(v int64) *HlsBasicPutSettings { - s.ConnectionRetryInterval = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputLossBehavior) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputLossBehavior"} + if s.InputLossImageColor != nil && len(*s.InputLossImageColor) < 6 { + invalidParams.Add(request.NewErrParamMinLen("InputLossImageColor", 6)) + } + if s.InputLossImageSlate != nil { + if err := s.InputLossImageSlate.Validate(); err != nil { + invalidParams.AddNested("InputLossImageSlate", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBlackFrameMsec sets the BlackFrameMsec field's value. 
+func (s *InputLossBehavior) SetBlackFrameMsec(v int64) *InputLossBehavior { + s.BlackFrameMsec = &v return s } -// SetFilecacheDuration sets the FilecacheDuration field's value. -func (s *HlsBasicPutSettings) SetFilecacheDuration(v int64) *HlsBasicPutSettings { - s.FilecacheDuration = &v +// SetInputLossImageColor sets the InputLossImageColor field's value. +func (s *InputLossBehavior) SetInputLossImageColor(v string) *InputLossBehavior { + s.InputLossImageColor = &v return s } -// SetNumRetries sets the NumRetries field's value. -func (s *HlsBasicPutSettings) SetNumRetries(v int64) *HlsBasicPutSettings { - s.NumRetries = &v +// SetInputLossImageSlate sets the InputLossImageSlate field's value. +func (s *InputLossBehavior) SetInputLossImageSlate(v *InputLocation) *InputLossBehavior { + s.InputLossImageSlate = v return s } -// SetRestartDelay sets the RestartDelay field's value. -func (s *HlsBasicPutSettings) SetRestartDelay(v int64) *HlsBasicPutSettings { - s.RestartDelay = &v +// SetInputLossImageType sets the InputLossImageType field's value. +func (s *InputLossBehavior) SetInputLossImageType(v string) *InputLossBehavior { + s.InputLossImageType = &v + return s +} + +// SetRepeatFrameMsec sets the RepeatFrameMsec field's value. +func (s *InputLossBehavior) SetRepeatFrameMsec(v int64) *InputLossBehavior { + s.RepeatFrameMsec = &v return s } -type HlsCdnSettings struct { +// An Input Security Group +type InputSecurityGroup struct { _ struct{} `type:"structure"` - HlsAkamaiSettings *HlsAkamaiSettings `locationName:"hlsAkamaiSettings" type:"structure"` + // Unique ARN of Input Security Group + Arn *string `locationName:"arn" type:"string"` - HlsBasicPutSettings *HlsBasicPutSettings `locationName:"hlsBasicPutSettings" type:"structure"` + // The Id of the Input Security Group + Id *string `locationName:"id" type:"string"` - HlsMediaStoreSettings *HlsMediaStoreSettings `locationName:"hlsMediaStoreSettings" type:"structure"` + // The list of inputs currently using this Input Security Group. + Inputs []*string `locationName:"inputs" type:"list"` - HlsWebdavSettings *HlsWebdavSettings `locationName:"hlsWebdavSettings" type:"structure"` + // The current state of the Input Security Group. + State *string `locationName:"state" type:"string" enum:"InputSecurityGroupState"` + + // Whitelist rules and their sync status + WhitelistRules []*InputWhitelistRule `locationName:"whitelistRules" type:"list"` } // String returns the string representation -func (s HlsCdnSettings) String() string { +func (s InputSecurityGroup) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsCdnSettings) GoString() string { +func (s InputSecurityGroup) GoString() string { return s.String() } -// SetHlsAkamaiSettings sets the HlsAkamaiSettings field's value. -func (s *HlsCdnSettings) SetHlsAkamaiSettings(v *HlsAkamaiSettings) *HlsCdnSettings { - s.HlsAkamaiSettings = v +// SetArn sets the Arn field's value. +func (s *InputSecurityGroup) SetArn(v string) *InputSecurityGroup { + s.Arn = &v return s } -// SetHlsBasicPutSettings sets the HlsBasicPutSettings field's value. -func (s *HlsCdnSettings) SetHlsBasicPutSettings(v *HlsBasicPutSettings) *HlsCdnSettings { - s.HlsBasicPutSettings = v +// SetId sets the Id field's value. +func (s *InputSecurityGroup) SetId(v string) *InputSecurityGroup { + s.Id = &v return s } -// SetHlsMediaStoreSettings sets the HlsMediaStoreSettings field's value. 
-func (s *HlsCdnSettings) SetHlsMediaStoreSettings(v *HlsMediaStoreSettings) *HlsCdnSettings { - s.HlsMediaStoreSettings = v +// SetInputs sets the Inputs field's value. +func (s *InputSecurityGroup) SetInputs(v []*string) *InputSecurityGroup { + s.Inputs = v return s } -// SetHlsWebdavSettings sets the HlsWebdavSettings field's value. -func (s *HlsCdnSettings) SetHlsWebdavSettings(v *HlsWebdavSettings) *HlsCdnSettings { - s.HlsWebdavSettings = v +// SetState sets the State field's value. +func (s *InputSecurityGroup) SetState(v string) *InputSecurityGroup { + s.State = &v return s } -type HlsGroupSettings struct { - _ struct{} `type:"structure"` - - // Choose one or more ad marker types to pass SCTE35 signals through to this - // group of Apple HLS outputs. - AdMarkers []*string `locationName:"adMarkers" type:"list"` - - // A partial URI prefix that will be prepended to each output in the media .m3u8 - // file. Can be used if base manifest is delivered from a different URL than - // the main .m3u8 file. - BaseUrlContent *string `locationName:"baseUrlContent" type:"string"` - - // A partial URI prefix that will be prepended to each output in the media .m3u8 - // file. Can be used if base manifest is delivered from a different URL than - // the main .m3u8 file. - BaseUrlManifest *string `locationName:"baseUrlManifest" type:"string"` - - // Mapping of up to 4 caption channels to caption languages. Is only meaningful - // if captionLanguageSetting is set to "insert". - CaptionLanguageMappings []*CaptionLanguageMapping `locationName:"captionLanguageMappings" type:"list"` - - // Applies only to 608 Embedded output captions.insert: Include CLOSED-CAPTIONS - // lines in the manifest. Specify at least one language in the CC1 Language - // Code field. One CLOSED-CAPTION line is added for each Language Code you specify. - // Make sure to specify the languages in the order in which they appear in the - // original source (if the source is embedded format) or the order of the caption - // selectors (if the source is other than embedded). Otherwise, languages in - // the manifest will not match up properly with the output captions.none: Include - // CLOSED-CAPTIONS=NONE line in the manifest.omit: Omit any CLOSED-CAPTIONS - // line from the manifest. - CaptionLanguageSetting *string `locationName:"captionLanguageSetting" type:"string" enum:"HlsCaptionLanguageSetting"` - - // When set to "disabled", sets the #EXT-X-ALLOW-CACHE:no tag in the manifest, - // which prevents clients from saving media segments for later replay. - ClientCache *string `locationName:"clientCache" type:"string" enum:"HlsClientCache"` - - // Specification to use (RFC-6381 or the default RFC-4281) during m3u8 playlist - // generation. - CodecSpecification *string `locationName:"codecSpecification" type:"string" enum:"HlsCodecSpecification"` - - // For use with encryptionType. This is a 128-bit, 16-byte hex value represented - // by a 32-character text string. If ivSource is set to "explicit" then this - // parameter is required and is used as the IV for encryption. - ConstantIv *string `locationName:"constantIv" min:"32" type:"string"` - - // A directory or HTTP destination for the HLS segments, manifest files, and - // encryption keys (if enabled). - // - // Destination is a required field - Destination *OutputLocationRef `locationName:"destination" type:"structure" required:"true"` - - // Place segments in subdirectories. 
- DirectoryStructure *string `locationName:"directoryStructure" type:"string" enum:"HlsDirectoryStructure"` - - // Encrypts the segments with the given encryption scheme. Exclude this parameter - // if no encryption is desired. - EncryptionType *string `locationName:"encryptionType" type:"string" enum:"HlsEncryptionType"` - - // Parameters that control interactions with the CDN. - HlsCdnSettings *HlsCdnSettings `locationName:"hlsCdnSettings" type:"structure"` - - // If mode is "live", the number of segments to retain in the manifest (.m3u8) - // file. This number must be less than or equal to keepSegments. If mode is - // "vod", this parameter has no effect. - IndexNSegments *int64 `locationName:"indexNSegments" min:"3" type:"integer"` - - // Parameter that control output group behavior on input loss. - InputLossAction *string `locationName:"inputLossAction" type:"string" enum:"InputLossActionForHlsOut"` - - // For use with encryptionType. The IV (Initialization Vector) is a 128-bit - // number used in conjunction with the key for encrypting blocks. If set to - // "include", IV is listed in the manifest, otherwise the IV is not in the manifest. - IvInManifest *string `locationName:"ivInManifest" type:"string" enum:"HlsIvInManifest"` - - // For use with encryptionType. The IV (Initialization Vector) is a 128-bit - // number used in conjunction with the key for encrypting blocks. If this setting - // is "followsSegmentNumber", it will cause the IV to change every segment (to - // match the segment number). If this is set to "explicit", you must enter a - // constantIv value. - IvSource *string `locationName:"ivSource" type:"string" enum:"HlsIvSource"` - - // If mode is "live", the number of TS segments to retain in the destination - // directory. If mode is "vod", this parameter has no effect. - KeepSegments *int64 `locationName:"keepSegments" min:"1" type:"integer"` - - // The value specifies how the key is represented in the resource identified - // by the URI. If parameter is absent, an implicit value of "identity" is used. - // A reverse DNS string can also be given. - KeyFormat *string `locationName:"keyFormat" type:"string"` - - // Either a single positive integer version value or a slash delimited list - // of version values (1/2/3). - KeyFormatVersions *string `locationName:"keyFormatVersions" type:"string"` - - // The key provider settings. - KeyProviderSettings *KeyProviderSettings `locationName:"keyProviderSettings" type:"structure"` - - // When set to gzip, compresses HLS playlist. - ManifestCompression *string `locationName:"manifestCompression" type:"string" enum:"HlsManifestCompression"` - - // Indicates whether the output manifest should use floating point or integer - // values for segment duration. - ManifestDurationFormat *string `locationName:"manifestDurationFormat" type:"string" enum:"HlsManifestDurationFormat"` - - // When set, minimumSegmentLength is enforced by looking ahead and back within - // the specified range for a nearby avail and extending the segment size if - // needed. - MinSegmentLength *int64 `locationName:"minSegmentLength" type:"integer"` - - // If "vod", all segments are indexed and kept permanently in the destination - // and manifest. 
If "live", only the number segments specified in keepSegments - // and indexNSegments are kept; newer segments replace older segments, which - // may prevent players from rewinding all the way to the beginning of the event.VOD - // mode uses HLS EXT-X-PLAYLIST-TYPE of EVENT while the channel is running, - // converting it to a "VOD" type manifest on completion of the stream. - Mode *string `locationName:"mode" type:"string" enum:"HlsMode"` - - // Generates the .m3u8 playlist file for this HLS output group. The segmentsOnly - // option will output segments without the .m3u8 file. - OutputSelection *string `locationName:"outputSelection" type:"string" enum:"HlsOutputSelection"` - - // Includes or excludes EXT-X-PROGRAM-DATE-TIME tag in .m3u8 manifest files. - // The value is calculated as follows: either the program date and time are - // initialized using the input timecode source, or the time is initialized using - // the input timecode source and the date is initialized using the timestampOffset. - ProgramDateTime *string `locationName:"programDateTime" type:"string" enum:"HlsProgramDateTime"` - - // Period of insertion of EXT-X-PROGRAM-DATE-TIME entry, in seconds. - ProgramDateTimePeriod *int64 `locationName:"programDateTimePeriod" type:"integer"` +// SetWhitelistRules sets the WhitelistRules field's value. +func (s *InputSecurityGroup) SetWhitelistRules(v []*InputWhitelistRule) *InputSecurityGroup { + s.WhitelistRules = v + return s +} - // Length of MPEG-2 Transport Stream segments to create (in seconds). Note that - // segments will end on the next keyframe after this number of seconds, so actual - // segment length may be longer. - SegmentLength *int64 `locationName:"segmentLength" min:"1" type:"integer"` +// Live Event input parameters. There can be multiple inputs in a single Live +// Event. +type InputSettings struct { + _ struct{} `type:"structure"` - // When set to useInputSegmentation, the output segment or fragment points are - // set by the RAI markers from the input streams. - SegmentationMode *string `locationName:"segmentationMode" type:"string" enum:"HlsSegmentationMode"` + // Used to select the audio stream to decode for inputs that have multiple available. + AudioSelectors []*AudioSelector `locationName:"audioSelectors" type:"list"` - // Number of segments to write to a subdirectory before starting a new one. - // directoryStructure must be subdirectoryPerStream for this setting to have - // an effect. - SegmentsPerSubdirectory *int64 `locationName:"segmentsPerSubdirectory" min:"1" type:"integer"` + // Used to select the caption input to use for inputs that have multiple available. + CaptionSelectors []*CaptionSelector `locationName:"captionSelectors" type:"list"` - // Include or exclude RESOLUTION attribute for video in EXT-X-STREAM-INF tag - // of variant manifest. - StreamInfResolution *string `locationName:"streamInfResolution" type:"string" enum:"HlsStreamInfResolution"` + // Enable or disable the deblock filter when filtering. + DeblockFilter *string `locationName:"deblockFilter" type:"string" enum:"InputDeblockFilter"` - // Indicates ID3 frame that has the timecode. - TimedMetadataId3Frame *string `locationName:"timedMetadataId3Frame" type:"string" enum:"HlsTimedMetadataId3Frame"` + // Enable or disable the denoise filter when filtering. + DenoiseFilter *string `locationName:"denoiseFilter" type:"string" enum:"InputDenoiseFilter"` - // Timed Metadata interval in seconds. 
- TimedMetadataId3Period *int64 `locationName:"timedMetadataId3Period" type:"integer"` + // Adjusts the magnitude of filtering from 1 (minimal) to 5 (strongest). + FilterStrength *int64 `locationName:"filterStrength" min:"1" type:"integer"` - // Provides an extra millisecond delta offset to fine tune the timestamps. - TimestampDeltaMilliseconds *int64 `locationName:"timestampDeltaMilliseconds" type:"integer"` + // Turns on the filter for this input. MPEG-2 inputs have the deblocking filter + // enabled by default.1) auto - filtering will be applied depending on input + // type/quality2) disabled - no filtering will be applied to the input3) forced + // - filtering will be applied regardless of input type + InputFilter *string `locationName:"inputFilter" type:"string" enum:"InputFilter"` - // When set to "singleFile", emits the program as a single media resource (.ts) - // file, and uses #EXT-X-BYTERANGE tags to index segment for playback. Playback - // of VOD mode content during event is not guaranteed due to HTTP server caching. - TsFileMode *string `locationName:"tsFileMode" type:"string" enum:"HlsTsFileMode"` -} + // Input settings. + NetworkInputSettings *NetworkInputSettings `locationName:"networkInputSettings" type:"structure"` -// String returns the string representation -func (s HlsGroupSettings) String() string { - return awsutil.Prettify(s) -} + // Loop input if it is a file. This allows a file input to be streamed indefinitely. + SourceEndBehavior *string `locationName:"sourceEndBehavior" type:"string" enum:"InputSourceEndBehavior"` -// GoString returns the string representation -func (s HlsGroupSettings) GoString() string { - return s.String() + // Informs which video elementary stream to decode for input types that have + // multiple available. + VideoSelector *VideoSelector `locationName:"videoSelector" type:"structure"` } -// Validate inspects the fields of the type to determine if they are valid. -func (s *HlsGroupSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "HlsGroupSettings"} - if s.ConstantIv != nil && len(*s.ConstantIv) < 32 { - invalidParams.Add(request.NewErrParamMinLen("ConstantIv", 32)) - } - if s.Destination == nil { - invalidParams.Add(request.NewErrParamRequired("Destination")) - } - if s.IndexNSegments != nil && *s.IndexNSegments < 3 { - invalidParams.Add(request.NewErrParamMinValue("IndexNSegments", 3)) - } - if s.KeepSegments != nil && *s.KeepSegments < 1 { - invalidParams.Add(request.NewErrParamMinValue("KeepSegments", 1)) - } - if s.SegmentLength != nil && *s.SegmentLength < 1 { - invalidParams.Add(request.NewErrParamMinValue("SegmentLength", 1)) - } - if s.SegmentsPerSubdirectory != nil && *s.SegmentsPerSubdirectory < 1 { - invalidParams.Add(request.NewErrParamMinValue("SegmentsPerSubdirectory", 1)) +// String returns the string representation +func (s InputSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *InputSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputSettings"} + if s.FilterStrength != nil && *s.FilterStrength < 1 { + invalidParams.Add(request.NewErrParamMinValue("FilterStrength", 1)) } - if s.CaptionLanguageMappings != nil { - for i, v := range s.CaptionLanguageMappings { + if s.AudioSelectors != nil { + for i, v := range s.AudioSelectors { if v == nil { continue } if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionLanguageMappings", i), err.(request.ErrInvalidParams)) + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AudioSelectors", i), err.(request.ErrInvalidParams)) } } } - if s.KeyProviderSettings != nil { - if err := s.KeyProviderSettings.Validate(); err != nil { - invalidParams.AddNested("KeyProviderSettings", err.(request.ErrInvalidParams)) + if s.CaptionSelectors != nil { + for i, v := range s.CaptionSelectors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionSelectors", i), err.(request.ErrInvalidParams)) + } } } @@ -6377,376 +9552,405 @@ func (s *HlsGroupSettings) Validate() error { return nil } -// SetAdMarkers sets the AdMarkers field's value. -func (s *HlsGroupSettings) SetAdMarkers(v []*string) *HlsGroupSettings { - s.AdMarkers = v +// SetAudioSelectors sets the AudioSelectors field's value. +func (s *InputSettings) SetAudioSelectors(v []*AudioSelector) *InputSettings { + s.AudioSelectors = v return s } -// SetBaseUrlContent sets the BaseUrlContent field's value. -func (s *HlsGroupSettings) SetBaseUrlContent(v string) *HlsGroupSettings { - s.BaseUrlContent = &v +// SetCaptionSelectors sets the CaptionSelectors field's value. +func (s *InputSettings) SetCaptionSelectors(v []*CaptionSelector) *InputSettings { + s.CaptionSelectors = v return s } -// SetBaseUrlManifest sets the BaseUrlManifest field's value. -func (s *HlsGroupSettings) SetBaseUrlManifest(v string) *HlsGroupSettings { - s.BaseUrlManifest = &v +// SetDeblockFilter sets the DeblockFilter field's value. +func (s *InputSettings) SetDeblockFilter(v string) *InputSettings { + s.DeblockFilter = &v return s } -// SetCaptionLanguageMappings sets the CaptionLanguageMappings field's value. -func (s *HlsGroupSettings) SetCaptionLanguageMappings(v []*CaptionLanguageMapping) *HlsGroupSettings { - s.CaptionLanguageMappings = v +// SetDenoiseFilter sets the DenoiseFilter field's value. +func (s *InputSettings) SetDenoiseFilter(v string) *InputSettings { + s.DenoiseFilter = &v return s } -// SetCaptionLanguageSetting sets the CaptionLanguageSetting field's value. -func (s *HlsGroupSettings) SetCaptionLanguageSetting(v string) *HlsGroupSettings { - s.CaptionLanguageSetting = &v +// SetFilterStrength sets the FilterStrength field's value. +func (s *InputSettings) SetFilterStrength(v int64) *InputSettings { + s.FilterStrength = &v return s } -// SetClientCache sets the ClientCache field's value. -func (s *HlsGroupSettings) SetClientCache(v string) *HlsGroupSettings { - s.ClientCache = &v +// SetInputFilter sets the InputFilter field's value. +func (s *InputSettings) SetInputFilter(v string) *InputSettings { + s.InputFilter = &v return s } -// SetCodecSpecification sets the CodecSpecification field's value. -func (s *HlsGroupSettings) SetCodecSpecification(v string) *HlsGroupSettings { - s.CodecSpecification = &v +// SetNetworkInputSettings sets the NetworkInputSettings field's value. 
+func (s *InputSettings) SetNetworkInputSettings(v *NetworkInputSettings) *InputSettings { + s.NetworkInputSettings = v return s } -// SetConstantIv sets the ConstantIv field's value. -func (s *HlsGroupSettings) SetConstantIv(v string) *HlsGroupSettings { - s.ConstantIv = &v +// SetSourceEndBehavior sets the SourceEndBehavior field's value. +func (s *InputSettings) SetSourceEndBehavior(v string) *InputSettings { + s.SourceEndBehavior = &v return s } -// SetDestination sets the Destination field's value. -func (s *HlsGroupSettings) SetDestination(v *OutputLocationRef) *HlsGroupSettings { - s.Destination = v +// SetVideoSelector sets the VideoSelector field's value. +func (s *InputSettings) SetVideoSelector(v *VideoSelector) *InputSettings { + s.VideoSelector = v return s } -// SetDirectoryStructure sets the DirectoryStructure field's value. -func (s *HlsGroupSettings) SetDirectoryStructure(v string) *HlsGroupSettings { - s.DirectoryStructure = &v - return s +// The settings for a PULL type input. +type InputSource struct { + _ struct{} `type:"structure"` + + // The key used to extract the password from EC2 Parameter store. + PasswordParam *string `locationName:"passwordParam" type:"string"` + + // This represents the customer's source URL where stream ispulled from. + Url *string `locationName:"url" type:"string"` + + // The username for the input source. + Username *string `locationName:"username" type:"string"` } -// SetEncryptionType sets the EncryptionType field's value. -func (s *HlsGroupSettings) SetEncryptionType(v string) *HlsGroupSettings { - s.EncryptionType = &v - return s +// String returns the string representation +func (s InputSource) String() string { + return awsutil.Prettify(s) } -// SetHlsCdnSettings sets the HlsCdnSettings field's value. -func (s *HlsGroupSettings) SetHlsCdnSettings(v *HlsCdnSettings) *HlsGroupSettings { - s.HlsCdnSettings = v - return s +// GoString returns the string representation +func (s InputSource) GoString() string { + return s.String() } -// SetIndexNSegments sets the IndexNSegments field's value. -func (s *HlsGroupSettings) SetIndexNSegments(v int64) *HlsGroupSettings { - s.IndexNSegments = &v +// SetPasswordParam sets the PasswordParam field's value. +func (s *InputSource) SetPasswordParam(v string) *InputSource { + s.PasswordParam = &v return s } -// SetInputLossAction sets the InputLossAction field's value. -func (s *HlsGroupSettings) SetInputLossAction(v string) *HlsGroupSettings { - s.InputLossAction = &v +// SetUrl sets the Url field's value. +func (s *InputSource) SetUrl(v string) *InputSource { + s.Url = &v return s } -// SetIvInManifest sets the IvInManifest field's value. -func (s *HlsGroupSettings) SetIvInManifest(v string) *HlsGroupSettings { - s.IvInManifest = &v +// SetUsername sets the Username field's value. +func (s *InputSource) SetUsername(v string) *InputSource { + s.Username = &v return s } -// SetIvSource sets the IvSource field's value. -func (s *HlsGroupSettings) SetIvSource(v string) *HlsGroupSettings { - s.IvSource = &v - return s +// Settings for for a PULL type input. +type InputSourceRequest struct { + _ struct{} `type:"structure"` + + // The key used to extract the password from EC2 Parameter store. + PasswordParam *string `locationName:"passwordParam" type:"string"` + + // This represents the customer's source URL where stream ispulled from. + Url *string `locationName:"url" type:"string"` + + // The username for the input source. 
+ Username *string `locationName:"username" type:"string"` } -// SetKeepSegments sets the KeepSegments field's value. -func (s *HlsGroupSettings) SetKeepSegments(v int64) *HlsGroupSettings { - s.KeepSegments = &v - return s +// String returns the string representation +func (s InputSourceRequest) String() string { + return awsutil.Prettify(s) } -// SetKeyFormat sets the KeyFormat field's value. -func (s *HlsGroupSettings) SetKeyFormat(v string) *HlsGroupSettings { - s.KeyFormat = &v +// GoString returns the string representation +func (s InputSourceRequest) GoString() string { + return s.String() +} + +// SetPasswordParam sets the PasswordParam field's value. +func (s *InputSourceRequest) SetPasswordParam(v string) *InputSourceRequest { + s.PasswordParam = &v return s } -// SetKeyFormatVersions sets the KeyFormatVersions field's value. -func (s *HlsGroupSettings) SetKeyFormatVersions(v string) *HlsGroupSettings { - s.KeyFormatVersions = &v +// SetUrl sets the Url field's value. +func (s *InputSourceRequest) SetUrl(v string) *InputSourceRequest { + s.Url = &v return s } -// SetKeyProviderSettings sets the KeyProviderSettings field's value. -func (s *HlsGroupSettings) SetKeyProviderSettings(v *KeyProviderSettings) *HlsGroupSettings { - s.KeyProviderSettings = v +// SetUsername sets the Username field's value. +func (s *InputSourceRequest) SetUsername(v string) *InputSourceRequest { + s.Username = &v return s } -// SetManifestCompression sets the ManifestCompression field's value. -func (s *HlsGroupSettings) SetManifestCompression(v string) *HlsGroupSettings { - s.ManifestCompression = &v +type InputSpecification struct { + _ struct{} `type:"structure"` + + // Input codec + Codec *string `locationName:"codec" type:"string" enum:"InputCodec"` + + // Maximum input bitrate, categorized coarsely + MaximumBitrate *string `locationName:"maximumBitrate" type:"string" enum:"InputMaximumBitrate"` + + // Input resolution, categorized coarsely + Resolution *string `locationName:"resolution" type:"string" enum:"InputResolution"` +} + +// String returns the string representation +func (s InputSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputSpecification) GoString() string { + return s.String() +} + +// SetCodec sets the Codec field's value. +func (s *InputSpecification) SetCodec(v string) *InputSpecification { + s.Codec = &v return s } -// SetManifestDurationFormat sets the ManifestDurationFormat field's value. -func (s *HlsGroupSettings) SetManifestDurationFormat(v string) *HlsGroupSettings { - s.ManifestDurationFormat = &v +// SetMaximumBitrate sets the MaximumBitrate field's value. +func (s *InputSpecification) SetMaximumBitrate(v string) *InputSpecification { + s.MaximumBitrate = &v return s } -// SetMinSegmentLength sets the MinSegmentLength field's value. -func (s *HlsGroupSettings) SetMinSegmentLength(v int64) *HlsGroupSettings { - s.MinSegmentLength = &v +// SetResolution sets the Resolution field's value. +func (s *InputSpecification) SetResolution(v string) *InputSpecification { + s.Resolution = &v return s } -// SetMode sets the Mode field's value. -func (s *HlsGroupSettings) SetMode(v string) *HlsGroupSettings { - s.Mode = &v +// Settings for the action to switch an input. +type InputSwitchScheduleActionSettings struct { + _ struct{} `type:"structure"` + + // The name of the input attachment that should be switched to by this action. 
+ // + // InputAttachmentNameReference is a required field + InputAttachmentNameReference *string `locationName:"inputAttachmentNameReference" type:"string" required:"true"` +} + +// String returns the string representation +func (s InputSwitchScheduleActionSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputSwitchScheduleActionSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputSwitchScheduleActionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputSwitchScheduleActionSettings"} + if s.InputAttachmentNameReference == nil { + invalidParams.Add(request.NewErrParamRequired("InputAttachmentNameReference")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputAttachmentNameReference sets the InputAttachmentNameReference field's value. +func (s *InputSwitchScheduleActionSettings) SetInputAttachmentNameReference(v string) *InputSwitchScheduleActionSettings { + s.InputAttachmentNameReference = &v return s } -// SetOutputSelection sets the OutputSelection field's value. -func (s *HlsGroupSettings) SetOutputSelection(v string) *HlsGroupSettings { - s.OutputSelection = &v - return s +// Whitelist rule +type InputWhitelistRule struct { + _ struct{} `type:"structure"` + + // The IPv4 CIDR that's whitelisted. + Cidr *string `locationName:"cidr" type:"string"` +} + +// String returns the string representation +func (s InputWhitelistRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputWhitelistRule) GoString() string { + return s.String() } -// SetProgramDateTime sets the ProgramDateTime field's value. -func (s *HlsGroupSettings) SetProgramDateTime(v string) *HlsGroupSettings { - s.ProgramDateTime = &v +// SetCidr sets the Cidr field's value. +func (s *InputWhitelistRule) SetCidr(v string) *InputWhitelistRule { + s.Cidr = &v return s } -// SetProgramDateTimePeriod sets the ProgramDateTimePeriod field's value. -func (s *HlsGroupSettings) SetProgramDateTimePeriod(v int64) *HlsGroupSettings { - s.ProgramDateTimePeriod = &v - return s +// An IPv4 CIDR to whitelist. +type InputWhitelistRuleCidr struct { + _ struct{} `type:"structure"` + + // The IPv4 CIDR to whitelist. + Cidr *string `locationName:"cidr" type:"string"` } -// SetSegmentLength sets the SegmentLength field's value. -func (s *HlsGroupSettings) SetSegmentLength(v int64) *HlsGroupSettings { - s.SegmentLength = &v - return s +// String returns the string representation +func (s InputWhitelistRuleCidr) String() string { + return awsutil.Prettify(s) } -// SetSegmentationMode sets the SegmentationMode field's value. -func (s *HlsGroupSettings) SetSegmentationMode(v string) *HlsGroupSettings { - s.SegmentationMode = &v - return s +// GoString returns the string representation +func (s InputWhitelistRuleCidr) GoString() string { + return s.String() } -// SetSegmentsPerSubdirectory sets the SegmentsPerSubdirectory field's value. -func (s *HlsGroupSettings) SetSegmentsPerSubdirectory(v int64) *HlsGroupSettings { - s.SegmentsPerSubdirectory = &v +// SetCidr sets the Cidr field's value. +func (s *InputWhitelistRuleCidr) SetCidr(v string) *InputWhitelistRuleCidr { + s.Cidr = &v return s } -// SetStreamInfResolution sets the StreamInfResolution field's value. 
-func (s *HlsGroupSettings) SetStreamInfResolution(v string) *HlsGroupSettings { - s.StreamInfResolution = &v - return s +type KeyProviderSettings struct { + _ struct{} `type:"structure"` + + StaticKeySettings *StaticKeySettings `locationName:"staticKeySettings" type:"structure"` } -// SetTimedMetadataId3Frame sets the TimedMetadataId3Frame field's value. -func (s *HlsGroupSettings) SetTimedMetadataId3Frame(v string) *HlsGroupSettings { - s.TimedMetadataId3Frame = &v - return s +// String returns the string representation +func (s KeyProviderSettings) String() string { + return awsutil.Prettify(s) } -// SetTimedMetadataId3Period sets the TimedMetadataId3Period field's value. -func (s *HlsGroupSettings) SetTimedMetadataId3Period(v int64) *HlsGroupSettings { - s.TimedMetadataId3Period = &v - return s +// GoString returns the string representation +func (s KeyProviderSettings) GoString() string { + return s.String() } -// SetTimestampDeltaMilliseconds sets the TimestampDeltaMilliseconds field's value. -func (s *HlsGroupSettings) SetTimestampDeltaMilliseconds(v int64) *HlsGroupSettings { - s.TimestampDeltaMilliseconds = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *KeyProviderSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "KeyProviderSettings"} + if s.StaticKeySettings != nil { + if err := s.StaticKeySettings.Validate(); err != nil { + invalidParams.AddNested("StaticKeySettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetTsFileMode sets the TsFileMode field's value. -func (s *HlsGroupSettings) SetTsFileMode(v string) *HlsGroupSettings { - s.TsFileMode = &v +// SetStaticKeySettings sets the StaticKeySettings field's value. +func (s *KeyProviderSettings) SetStaticKeySettings(v *StaticKeySettings) *KeyProviderSettings { + s.StaticKeySettings = v return s } -type HlsInputSettings struct { +type ListChannelsInput struct { _ struct{} `type:"structure"` - // When specified the HLS stream with the m3u8 BANDWIDTH that most closely matches - // this value will be chosen, otherwise the highest bandwidth stream in the - // m3u8 will be chosen. The bitrate is specified in bits per second, as in an - // HLS manifest. - Bandwidth *int64 `locationName:"bandwidth" type:"integer"` - - // When specified, reading of the HLS input will begin this many buffer segments - // from the end (most recently written segment). When not specified, the HLS - // input will begin with the first segment specified in the m3u8. - BufferSegments *int64 `locationName:"bufferSegments" type:"integer"` - - // The number of consecutive times that attempts to read a manifest or segment - // must fail before the input is considered unavailable. - Retries *int64 `locationName:"retries" type:"integer"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The number of seconds between retries when an attempt to read a manifest - // or segment fails. 
- RetryInterval *int64 `locationName:"retryInterval" type:"integer"` + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s HlsInputSettings) String() string { +func (s ListChannelsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsInputSettings) GoString() string { +func (s ListChannelsInput) GoString() string { return s.String() } -// SetBandwidth sets the Bandwidth field's value. -func (s *HlsInputSettings) SetBandwidth(v int64) *HlsInputSettings { - s.Bandwidth = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListChannelsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListChannelsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } -// SetBufferSegments sets the BufferSegments field's value. -func (s *HlsInputSettings) SetBufferSegments(v int64) *HlsInputSettings { - s.BufferSegments = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetRetries sets the Retries field's value. -func (s *HlsInputSettings) SetRetries(v int64) *HlsInputSettings { - s.Retries = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListChannelsInput) SetMaxResults(v int64) *ListChannelsInput { + s.MaxResults = &v return s } -// SetRetryInterval sets the RetryInterval field's value. -func (s *HlsInputSettings) SetRetryInterval(v int64) *HlsInputSettings { - s.RetryInterval = &v +// SetNextToken sets the NextToken field's value. +func (s *ListChannelsInput) SetNextToken(v string) *ListChannelsInput { + s.NextToken = &v return s } -type HlsMediaStoreSettings struct { +type ListChannelsOutput struct { _ struct{} `type:"structure"` - // Number of seconds to wait before retrying connection to the CDN if the connection - // is lost. - ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` - - // Size in seconds of file cache for streaming outputs. - FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` - - // When set to temporal, output files are stored in non-persistent memory for - // faster reading and writing. - MediaStoreStorageClass *string `locationName:"mediaStoreStorageClass" type:"string" enum:"HlsMediaStoreStorageClass"` - - // Number of retry attempts that will be made before the Live Event is put into - // an error state. - NumRetries *int64 `locationName:"numRetries" type:"integer"` + Channels []*ChannelSummary `locationName:"channels" type:"list"` - // If a streaming output fails, number of seconds to wait until a restart is - // initiated. A value of 0 means never restart. - RestartDelay *int64 `locationName:"restartDelay" type:"integer"` + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s HlsMediaStoreSettings) String() string { +func (s ListChannelsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsMediaStoreSettings) GoString() string { +func (s ListChannelsOutput) GoString() string { return s.String() } -// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. 
-func (s *HlsMediaStoreSettings) SetConnectionRetryInterval(v int64) *HlsMediaStoreSettings { - s.ConnectionRetryInterval = &v - return s -} - -// SetFilecacheDuration sets the FilecacheDuration field's value. -func (s *HlsMediaStoreSettings) SetFilecacheDuration(v int64) *HlsMediaStoreSettings { - s.FilecacheDuration = &v - return s -} - -// SetMediaStoreStorageClass sets the MediaStoreStorageClass field's value. -func (s *HlsMediaStoreSettings) SetMediaStoreStorageClass(v string) *HlsMediaStoreSettings { - s.MediaStoreStorageClass = &v - return s -} - -// SetNumRetries sets the NumRetries field's value. -func (s *HlsMediaStoreSettings) SetNumRetries(v int64) *HlsMediaStoreSettings { - s.NumRetries = &v +// SetChannels sets the Channels field's value. +func (s *ListChannelsOutput) SetChannels(v []*ChannelSummary) *ListChannelsOutput { + s.Channels = v return s } -// SetRestartDelay sets the RestartDelay field's value. -func (s *HlsMediaStoreSettings) SetRestartDelay(v int64) *HlsMediaStoreSettings { - s.RestartDelay = &v +// SetNextToken sets the NextToken field's value. +func (s *ListChannelsOutput) SetNextToken(v string) *ListChannelsOutput { + s.NextToken = &v return s } -type HlsOutputSettings struct { +type ListInputSecurityGroupsInput struct { _ struct{} `type:"structure"` - // Settings regarding the underlying stream. These settings are different for - // audio-only outputs. - // - // HlsSettings is a required field - HlsSettings *HlsSettings `locationName:"hlsSettings" type:"structure" required:"true"` - - // String concatenated to the end of the destination filename. Accepts \"Format - // Identifiers\":#formatIdentifierParameters. - NameModifier *string `locationName:"nameModifier" min:"1" type:"string"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // String concatenated to end of segment filenames. - SegmentModifier *string `locationName:"segmentModifier" type:"string"` + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s HlsOutputSettings) String() string { +func (s ListInputSecurityGroupsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsOutputSettings) GoString() string { +func (s ListInputSecurityGroupsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *HlsOutputSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "HlsOutputSettings"} - if s.HlsSettings == nil { - invalidParams.Add(request.NewErrParamRequired("HlsSettings")) - } - if s.NameModifier != nil && len(*s.NameModifier) < 1 { - invalidParams.Add(request.NewErrParamMinLen("NameModifier", 1)) - } - if s.HlsSettings != nil { - if err := s.HlsSettings.Validate(); err != nil { - invalidParams.AddNested("HlsSettings", err.(request.ErrInvalidParams)) - } +func (s *ListInputSecurityGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListInputSecurityGroupsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -6755,54 +9959,71 @@ func (s *HlsOutputSettings) Validate() error { return nil } -// SetHlsSettings sets the HlsSettings field's value. 
-func (s *HlsOutputSettings) SetHlsSettings(v *HlsSettings) *HlsOutputSettings { - s.HlsSettings = v - return s +// SetMaxResults sets the MaxResults field's value. +func (s *ListInputSecurityGroupsInput) SetMaxResults(v int64) *ListInputSecurityGroupsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListInputSecurityGroupsInput) SetNextToken(v string) *ListInputSecurityGroupsInput { + s.NextToken = &v + return s +} + +type ListInputSecurityGroupsOutput struct { + _ struct{} `type:"structure"` + + InputSecurityGroups []*InputSecurityGroup `locationName:"inputSecurityGroups" type:"list"` + + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListInputSecurityGroupsOutput) String() string { + return awsutil.Prettify(s) } -// SetNameModifier sets the NameModifier field's value. -func (s *HlsOutputSettings) SetNameModifier(v string) *HlsOutputSettings { - s.NameModifier = &v +// GoString returns the string representation +func (s ListInputSecurityGroupsOutput) GoString() string { + return s.String() +} + +// SetInputSecurityGroups sets the InputSecurityGroups field's value. +func (s *ListInputSecurityGroupsOutput) SetInputSecurityGroups(v []*InputSecurityGroup) *ListInputSecurityGroupsOutput { + s.InputSecurityGroups = v return s } -// SetSegmentModifier sets the SegmentModifier field's value. -func (s *HlsOutputSettings) SetSegmentModifier(v string) *HlsOutputSettings { - s.SegmentModifier = &v +// SetNextToken sets the NextToken field's value. +func (s *ListInputSecurityGroupsOutput) SetNextToken(v string) *ListInputSecurityGroupsOutput { + s.NextToken = &v return s } -type HlsSettings struct { +type ListInputsInput struct { _ struct{} `type:"structure"` - AudioOnlyHlsSettings *AudioOnlyHlsSettings `locationName:"audioOnlyHlsSettings" type:"structure"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - StandardHlsSettings *StandardHlsSettings `locationName:"standardHlsSettings" type:"structure"` + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` } // String returns the string representation -func (s HlsSettings) String() string { +func (s ListInputsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsSettings) GoString() string { +func (s ListInputsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *HlsSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "HlsSettings"} - if s.AudioOnlyHlsSettings != nil { - if err := s.AudioOnlyHlsSettings.Validate(); err != nil { - invalidParams.AddNested("AudioOnlyHlsSettings", err.(request.ErrInvalidParams)) - } - } - if s.StandardHlsSettings != nil { - if err := s.StandardHlsSettings.Validate(); err != nil { - invalidParams.AddNested("StandardHlsSettings", err.(request.ErrInvalidParams)) - } +func (s *ListInputsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListInputsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -6811,257 +10032,222 @@ func (s *HlsSettings) Validate() error { return nil } -// SetAudioOnlyHlsSettings sets the AudioOnlyHlsSettings field's value. 
-func (s *HlsSettings) SetAudioOnlyHlsSettings(v *AudioOnlyHlsSettings) *HlsSettings { - s.AudioOnlyHlsSettings = v +// SetMaxResults sets the MaxResults field's value. +func (s *ListInputsInput) SetMaxResults(v int64) *ListInputsInput { + s.MaxResults = &v return s } -// SetStandardHlsSettings sets the StandardHlsSettings field's value. -func (s *HlsSettings) SetStandardHlsSettings(v *StandardHlsSettings) *HlsSettings { - s.StandardHlsSettings = v +// SetNextToken sets the NextToken field's value. +func (s *ListInputsInput) SetNextToken(v string) *ListInputsInput { + s.NextToken = &v return s } -type HlsWebdavSettings struct { +type ListInputsOutput struct { _ struct{} `type:"structure"` - // Number of seconds to wait before retrying connection to the CDN if the connection - // is lost. - ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` - - // Size in seconds of file cache for streaming outputs. - FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` - - // Specify whether or not to use chunked transfer encoding to WebDAV. - HttpTransferMode *string `locationName:"httpTransferMode" type:"string" enum:"HlsWebdavHttpTransferMode"` - - // Number of retry attempts that will be made before the Live Event is put into - // an error state. - NumRetries *int64 `locationName:"numRetries" type:"integer"` + Inputs []*Input `locationName:"inputs" type:"list"` - // If a streaming output fails, number of seconds to wait until a restart is - // initiated. A value of 0 means never restart. - RestartDelay *int64 `locationName:"restartDelay" type:"integer"` + NextToken *string `locationName:"nextToken" type:"string"` } // String returns the string representation -func (s HlsWebdavSettings) String() string { +func (s ListInputsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsWebdavSettings) GoString() string { +func (s ListInputsOutput) GoString() string { return s.String() } -// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. -func (s *HlsWebdavSettings) SetConnectionRetryInterval(v int64) *HlsWebdavSettings { - s.ConnectionRetryInterval = &v - return s -} - -// SetFilecacheDuration sets the FilecacheDuration field's value. -func (s *HlsWebdavSettings) SetFilecacheDuration(v int64) *HlsWebdavSettings { - s.FilecacheDuration = &v - return s -} - -// SetHttpTransferMode sets the HttpTransferMode field's value. -func (s *HlsWebdavSettings) SetHttpTransferMode(v string) *HlsWebdavSettings { - s.HttpTransferMode = &v - return s -} - -// SetNumRetries sets the NumRetries field's value. -func (s *HlsWebdavSettings) SetNumRetries(v int64) *HlsWebdavSettings { - s.NumRetries = &v +// SetInputs sets the Inputs field's value. +func (s *ListInputsOutput) SetInputs(v []*Input) *ListInputsOutput { + s.Inputs = v return s } -// SetRestartDelay sets the RestartDelay field's value. -func (s *HlsWebdavSettings) SetRestartDelay(v int64) *HlsWebdavSettings { - s.RestartDelay = &v +// SetNextToken sets the NextToken field's value. +func (s *ListInputsOutput) SetNextToken(v string) *ListInputsOutput { + s.NextToken = &v return s } -type Input struct { +type ListOfferingsInput struct { _ struct{} `type:"structure"` - // The Unique ARN of the input (generated, immutable). 
- Arn *string `locationName:"arn" type:"string"` + ChannelConfiguration *string `location:"querystring" locationName:"channelConfiguration" type:"string"` - // A list of channel IDs that that input is attached to (currently an input - // can only be attached to one channel). - AttachedChannels []*string `locationName:"attachedChannels" type:"list"` + Codec *string `location:"querystring" locationName:"codec" type:"string"` - // A list of the destinations of the input (PUSH-type). - Destinations []*InputDestination `locationName:"destinations" type:"list"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - // The generated ID of the input (unique for user account, immutable). - Id *string `locationName:"id" type:"string"` + MaximumBitrate *string `location:"querystring" locationName:"maximumBitrate" type:"string"` - // The user-assigned name (This is a mutable value). - Name *string `locationName:"name" type:"string"` + MaximumFramerate *string `location:"querystring" locationName:"maximumFramerate" type:"string"` - // A list of IDs for all the security groups attached to the input. - SecurityGroups []*string `locationName:"securityGroups" type:"list"` + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` - // A list of the sources of the input (PULL-type). - Sources []*InputSource `locationName:"sources" type:"list"` + Resolution *string `location:"querystring" locationName:"resolution" type:"string"` - State *string `locationName:"state" type:"string" enum:"InputState"` + ResourceType *string `location:"querystring" locationName:"resourceType" type:"string"` - Type *string `locationName:"type" type:"string" enum:"InputType"` + SpecialFeature *string `location:"querystring" locationName:"specialFeature" type:"string"` + + VideoQuality *string `location:"querystring" locationName:"videoQuality" type:"string"` } // String returns the string representation -func (s Input) String() string { +func (s ListOfferingsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Input) GoString() string { +func (s ListOfferingsInput) GoString() string { return s.String() } -// SetArn sets the Arn field's value. -func (s *Input) SetArn(v string) *Input { - s.Arn = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListOfferingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListOfferingsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChannelConfiguration sets the ChannelConfiguration field's value. +func (s *ListOfferingsInput) SetChannelConfiguration(v string) *ListOfferingsInput { + s.ChannelConfiguration = &v return s } -// SetAttachedChannels sets the AttachedChannels field's value. -func (s *Input) SetAttachedChannels(v []*string) *Input { - s.AttachedChannels = v +// SetCodec sets the Codec field's value. +func (s *ListOfferingsInput) SetCodec(v string) *ListOfferingsInput { + s.Codec = &v return s } -// SetDestinations sets the Destinations field's value. -func (s *Input) SetDestinations(v []*InputDestination) *Input { - s.Destinations = v +// SetMaxResults sets the MaxResults field's value. +func (s *ListOfferingsInput) SetMaxResults(v int64) *ListOfferingsInput { + s.MaxResults = &v return s } -// SetId sets the Id field's value. 
-func (s *Input) SetId(v string) *Input { - s.Id = &v +// SetMaximumBitrate sets the MaximumBitrate field's value. +func (s *ListOfferingsInput) SetMaximumBitrate(v string) *ListOfferingsInput { + s.MaximumBitrate = &v return s } -// SetName sets the Name field's value. -func (s *Input) SetName(v string) *Input { - s.Name = &v +// SetMaximumFramerate sets the MaximumFramerate field's value. +func (s *ListOfferingsInput) SetMaximumFramerate(v string) *ListOfferingsInput { + s.MaximumFramerate = &v return s } -// SetSecurityGroups sets the SecurityGroups field's value. -func (s *Input) SetSecurityGroups(v []*string) *Input { - s.SecurityGroups = v +// SetNextToken sets the NextToken field's value. +func (s *ListOfferingsInput) SetNextToken(v string) *ListOfferingsInput { + s.NextToken = &v return s } -// SetSources sets the Sources field's value. -func (s *Input) SetSources(v []*InputSource) *Input { - s.Sources = v +// SetResolution sets the Resolution field's value. +func (s *ListOfferingsInput) SetResolution(v string) *ListOfferingsInput { + s.Resolution = &v return s } -// SetState sets the State field's value. -func (s *Input) SetState(v string) *Input { - s.State = &v +// SetResourceType sets the ResourceType field's value. +func (s *ListOfferingsInput) SetResourceType(v string) *ListOfferingsInput { + s.ResourceType = &v return s } -// SetType sets the Type field's value. -func (s *Input) SetType(v string) *Input { - s.Type = &v +// SetSpecialFeature sets the SpecialFeature field's value. +func (s *ListOfferingsInput) SetSpecialFeature(v string) *ListOfferingsInput { + s.SpecialFeature = &v return s } -type InputAttachment struct { +// SetVideoQuality sets the VideoQuality field's value. +func (s *ListOfferingsInput) SetVideoQuality(v string) *ListOfferingsInput { + s.VideoQuality = &v + return s +} + +type ListOfferingsOutput struct { _ struct{} `type:"structure"` - // The ID of the input - InputId *string `locationName:"inputId" type:"string"` + NextToken *string `locationName:"nextToken" type:"string"` - // Settings of an input (caption selector, etc.) - InputSettings *InputSettings `locationName:"inputSettings" type:"structure"` + Offerings []*Offering `locationName:"offerings" type:"list"` } // String returns the string representation -func (s InputAttachment) String() string { +func (s ListOfferingsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InputAttachment) GoString() string { +func (s ListOfferingsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *InputAttachment) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "InputAttachment"} - if s.InputSettings != nil { - if err := s.InputSettings.Validate(); err != nil { - invalidParams.AddNested("InputSettings", err.(request.ErrInvalidParams)) - } - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetInputId sets the InputId field's value. -func (s *InputAttachment) SetInputId(v string) *InputAttachment { - s.InputId = &v +// SetNextToken sets the NextToken field's value. +func (s *ListOfferingsOutput) SetNextToken(v string) *ListOfferingsOutput { + s.NextToken = &v return s } -// SetInputSettings sets the InputSettings field's value. -func (s *InputAttachment) SetInputSettings(v *InputSettings) *InputAttachment { - s.InputSettings = v +// SetOfferings sets the Offerings field's value. 
+func (s *ListOfferingsOutput) SetOfferings(v []*Offering) *ListOfferingsOutput { + s.Offerings = v return s } -type InputChannelLevel struct { +type ListReservationsInput struct { _ struct{} `type:"structure"` - // Remixing value. Units are in dB and acceptable values are within the range - // from -60 (mute) and 6 dB. - // - // Gain is a required field - Gain *int64 `locationName:"gain" type:"integer" required:"true"` + Codec *string `location:"querystring" locationName:"codec" type:"string"` - // The index of the input channel used as a source. - // - // InputChannel is a required field - InputChannel *int64 `locationName:"inputChannel" type:"integer" required:"true"` + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + MaximumBitrate *string `location:"querystring" locationName:"maximumBitrate" type:"string"` + + MaximumFramerate *string `location:"querystring" locationName:"maximumFramerate" type:"string"` + + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + + Resolution *string `location:"querystring" locationName:"resolution" type:"string"` + + ResourceType *string `location:"querystring" locationName:"resourceType" type:"string"` + + SpecialFeature *string `location:"querystring" locationName:"specialFeature" type:"string"` + + VideoQuality *string `location:"querystring" locationName:"videoQuality" type:"string"` } // String returns the string representation -func (s InputChannelLevel) String() string { +func (s ListReservationsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InputChannelLevel) GoString() string { +func (s ListReservationsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *InputChannelLevel) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "InputChannelLevel"} - if s.Gain == nil { - invalidParams.Add(request.NewErrParamRequired("Gain")) - } - if s.Gain != nil && *s.Gain < -60 { - invalidParams.Add(request.NewErrParamMinValue("Gain", -60)) - } - if s.InputChannel == nil { - invalidParams.Add(request.NewErrParamRequired("InputChannel")) +func (s *ListReservationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListReservationsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -7070,120 +10256,342 @@ func (s *InputChannelLevel) Validate() error { return nil } -// SetGain sets the Gain field's value. -func (s *InputChannelLevel) SetGain(v int64) *InputChannelLevel { - s.Gain = &v +// SetCodec sets the Codec field's value. +func (s *ListReservationsInput) SetCodec(v string) *ListReservationsInput { + s.Codec = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListReservationsInput) SetMaxResults(v int64) *ListReservationsInput { + s.MaxResults = &v return s } -// SetInputChannel sets the InputChannel field's value. -func (s *InputChannelLevel) SetInputChannel(v int64) *InputChannelLevel { - s.InputChannel = &v +// SetMaximumBitrate sets the MaximumBitrate field's value. +func (s *ListReservationsInput) SetMaximumBitrate(v string) *ListReservationsInput { + s.MaximumBitrate = &v return s } -// The settings for a PUSH type input. 
-type InputDestination struct { - _ struct{} `type:"structure"` - - // The system-generated static IP address of endpoint.It remains fixed for the - // lifetime of the input. - Ip *string `locationName:"ip" type:"string"` - - // The port number for the input. - Port *string `locationName:"port" type:"string"` - - // This represents the endpoint that the customer stream will bepushed to. - Url *string `locationName:"url" type:"string"` +// SetMaximumFramerate sets the MaximumFramerate field's value. +func (s *ListReservationsInput) SetMaximumFramerate(v string) *ListReservationsInput { + s.MaximumFramerate = &v + return s } -// String returns the string representation -func (s InputDestination) String() string { - return awsutil.Prettify(s) +// SetNextToken sets the NextToken field's value. +func (s *ListReservationsInput) SetNextToken(v string) *ListReservationsInput { + s.NextToken = &v + return s } -// GoString returns the string representation -func (s InputDestination) GoString() string { - return s.String() +// SetResolution sets the Resolution field's value. +func (s *ListReservationsInput) SetResolution(v string) *ListReservationsInput { + s.Resolution = &v + return s } -// SetIp sets the Ip field's value. -func (s *InputDestination) SetIp(v string) *InputDestination { - s.Ip = &v +// SetResourceType sets the ResourceType field's value. +func (s *ListReservationsInput) SetResourceType(v string) *ListReservationsInput { + s.ResourceType = &v return s } -// SetPort sets the Port field's value. -func (s *InputDestination) SetPort(v string) *InputDestination { - s.Port = &v +// SetSpecialFeature sets the SpecialFeature field's value. +func (s *ListReservationsInput) SetSpecialFeature(v string) *ListReservationsInput { + s.SpecialFeature = &v return s } -// SetUrl sets the Url field's value. -func (s *InputDestination) SetUrl(v string) *InputDestination { - s.Url = &v +// SetVideoQuality sets the VideoQuality field's value. +func (s *ListReservationsInput) SetVideoQuality(v string) *ListReservationsInput { + s.VideoQuality = &v return s } -// Endpoint settings for a PUSH type input. -type InputDestinationRequest struct { +type ListReservationsOutput struct { _ struct{} `type:"structure"` - // A unique name for the location the RTMP stream is being pushedto. - StreamName *string `locationName:"streamName" type:"string"` + NextToken *string `locationName:"nextToken" type:"string"` + + Reservations []*Reservation `locationName:"reservations" type:"list"` } // String returns the string representation -func (s InputDestinationRequest) String() string { +func (s ListReservationsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InputDestinationRequest) GoString() string { +func (s ListReservationsOutput) GoString() string { return s.String() } -// SetStreamName sets the StreamName field's value. -func (s *InputDestinationRequest) SetStreamName(v string) *InputDestinationRequest { - s.StreamName = &v - return s -} +// SetNextToken sets the NextToken field's value. +func (s *ListReservationsOutput) SetNextToken(v string) *ListReservationsOutput { + s.NextToken = &v + return s +} + +// SetReservations sets the Reservations field's value. 
+func (s *ListReservationsOutput) SetReservations(v []*Reservation) *ListReservationsOutput { + s.Reservations = v + return s +} + +type M2tsSettings struct { + _ struct{} `type:"structure"` + + // When set to drop, output audio streams will be removed from the program if + // the selected input audio stream is removed from the input. This allows the + // output audio configuration to dynamically change based on input configuration. + // If this is set to encodeSilence, all output audio streams will output encoded + // silence when not connected to an active input stream. + AbsentInputAudioBehavior *string `locationName:"absentInputAudioBehavior" type:"string" enum:"M2tsAbsentInputAudioBehavior"` + + // When set to enabled, uses ARIB-compliant field muxing and removes video descriptor. + Arib *string `locationName:"arib" type:"string" enum:"M2tsArib"` + + // Packet Identifier (PID) for ARIB Captions in the transport stream. Can be + // entered as a decimal or hexadecimal value. Valid values are 32 (or 0x20)..8182 + // (or 0x1ff6). + AribCaptionsPid *string `locationName:"aribCaptionsPid" type:"string"` + + // If set to auto, pid number used for ARIB Captions will be auto-selected from + // unused pids. If set to useConfigured, ARIB Captions will be on the configured + // pid number. + AribCaptionsPidControl *string `locationName:"aribCaptionsPidControl" type:"string" enum:"M2tsAribCaptionsPidControl"` + + // When set to dvb, uses DVB buffer model for Dolby Digital audio. When set + // to atsc, the ATSC model is used. + AudioBufferModel *string `locationName:"audioBufferModel" type:"string" enum:"M2tsAudioBufferModel"` + + // The number of audio frames to insert for each PES packet. + AudioFramesPerPes *int64 `locationName:"audioFramesPerPes" type:"integer"` + + // Packet Identifier (PID) of the elementary audio stream(s) in the transport + // stream. Multiple values are accepted, and can be entered in ranges and/or + // by comma separation. Can be entered as decimal or hexadecimal values. Each + // PID specified must be in the range of 32 (or 0x20)..8182 (or 0x1ff6). + AudioPids *string `locationName:"audioPids" type:"string"` + + // When set to atsc, uses stream type = 0x81 for AC3 and stream type = 0x87 + // for EAC3. When set to dvb, uses stream type = 0x06. + AudioStreamType *string `locationName:"audioStreamType" type:"string" enum:"M2tsAudioStreamType"` + + // The output bitrate of the transport stream in bits per second. Setting to + // 0 lets the muxer automatically determine the appropriate bitrate. + Bitrate *int64 `locationName:"bitrate" type:"integer"` + + // If set to multiplex, use multiplex buffer model for accurate interleaving. + // Setting to bufferModel to none can lead to lower latency, but low-memory + // devices may not be able to play back the stream without interruptions. + BufferModel *string `locationName:"bufferModel" type:"string" enum:"M2tsBufferModel"` + + // When set to enabled, generates captionServiceDescriptor in PMT. + CcDescriptor *string `locationName:"ccDescriptor" type:"string" enum:"M2tsCcDescriptor"` + + // Inserts DVB Network Information Table (NIT) at the specified table repetition + // interval. + DvbNitSettings *DvbNitSettings `locationName:"dvbNitSettings" type:"structure"` + + // Inserts DVB Service Description Table (SDT) at the specified table repetition + // interval. + DvbSdtSettings *DvbSdtSettings `locationName:"dvbSdtSettings" type:"structure"` + + // Packet Identifier (PID) for input source DVB Subtitle data to this output. 
+ // Multiple values are accepted, and can be entered in ranges and/or by comma + // separation. Can be entered as decimal or hexadecimal values. Each PID specified + // must be in the range of 32 (or 0x20)..8182 (or 0x1ff6). + DvbSubPids *string `locationName:"dvbSubPids" type:"string"` + + // Inserts DVB Time and Date Table (TDT) at the specified table repetition interval. + DvbTdtSettings *DvbTdtSettings `locationName:"dvbTdtSettings" type:"structure"` + + // Packet Identifier (PID) for input source DVB Teletext data to this output. + // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or + // 0x20)..8182 (or 0x1ff6). + DvbTeletextPid *string `locationName:"dvbTeletextPid" type:"string"` + + // If set to passthrough, passes any EBIF data from the input source to this + // output. + Ebif *string `locationName:"ebif" type:"string" enum:"M2tsEbifControl"` + + // When videoAndFixedIntervals is selected, audio EBP markers will be added + // to partitions 3 and 4. The interval between these additional markers will + // be fixed, and will be slightly shorter than the video EBP marker interval. + // Only available when EBP Cablelabs segmentation markers are selected. Partitions + // 1 and 2 will always follow the video interval. + EbpAudioInterval *string `locationName:"ebpAudioInterval" type:"string" enum:"M2tsAudioInterval"` + + // When set, enforces that Encoder Boundary Points do not come within the specified + // time interval of each other by looking ahead at input video. If another EBP + // is going to come in within the specified time interval, the current EBP is + // not emitted, and the segment is "stretched" to the next marker. The lookahead + // value does not add latency to the system. The Live Event must be configured + // elsewhere to create sufficient latency to make the lookahead accurate. + EbpLookaheadMs *int64 `locationName:"ebpLookaheadMs" type:"integer"` + + // Controls placement of EBP on Audio PIDs. If set to videoAndAudioPids, EBP + // markers will be placed on the video PID and all audio PIDs. If set to videoPid, + // EBP markers will be placed on only the video PID. + EbpPlacement *string `locationName:"ebpPlacement" type:"string" enum:"M2tsEbpPlacement"` + + // This field is unused and deprecated. + EcmPid *string `locationName:"ecmPid" type:"string"` + + // Include or exclude the ES Rate field in the PES header. + EsRateInPes *string `locationName:"esRateInPes" type:"string" enum:"M2tsEsRateInPes"` + + // Packet Identifier (PID) for input source ETV Platform data to this output. + // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or + // 0x20)..8182 (or 0x1ff6). + EtvPlatformPid *string `locationName:"etvPlatformPid" type:"string"` + + // Packet Identifier (PID) for input source ETV Signal data to this output. + // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or + // 0x20)..8182 (or 0x1ff6). + EtvSignalPid *string `locationName:"etvSignalPid" type:"string"` + + // The length in seconds of each fragment. Only used with EBP markers. + FragmentTime *float64 `locationName:"fragmentTime" type:"double"` + + // If set to passthrough, passes any KLV data from the input source to this + // output. + Klv *string `locationName:"klv" type:"string" enum:"M2tsKlv"` + + // Packet Identifier (PID) for input source KLV data to this output. Multiple + // values are accepted, and can be entered in ranges and/or by comma separation. + // Can be entered as decimal or hexadecimal values. 
Each PID specified must + // be in the range of 32 (or 0x20)..8182 (or 0x1ff6). + KlvDataPids *string `locationName:"klvDataPids" type:"string"` + + // Value in bits per second of extra null packets to insert into the transport + // stream. This can be used if a downstream encryption system requires periodic + // null packets. + NullPacketBitrate *float64 `locationName:"nullPacketBitrate" type:"double"` + + // The number of milliseconds between instances of this table in the output + // transport stream. Valid values are 0, 10..1000. + PatInterval *int64 `locationName:"patInterval" type:"integer"` + + // When set to pcrEveryPesPacket, a Program Clock Reference value is inserted + // for every Packetized Elementary Stream (PES) header. This parameter is effective + // only when the PCR PID is the same as the video or audio elementary stream. + PcrControl *string `locationName:"pcrControl" type:"string" enum:"M2tsPcrControl"` + + // Maximum time in milliseconds between Program Clock Reference (PCRs) inserted + // into the transport stream. + PcrPeriod *int64 `locationName:"pcrPeriod" type:"integer"` + + // Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport + // stream. When no value is given, the encoder will assign the same value as + // the Video PID. Can be entered as a decimal or hexadecimal value. Valid values + // are 32 (or 0x20)..8182 (or 0x1ff6). + PcrPid *string `locationName:"pcrPid" type:"string"` + + // The number of milliseconds between instances of this table in the output + // transport stream. Valid values are 0, 10..1000. + PmtInterval *int64 `locationName:"pmtInterval" type:"integer"` + + // Packet Identifier (PID) for the Program Map Table (PMT) in the transport + // stream. Can be entered as a decimal or hexadecimal value. Valid values are + // 32 (or 0x20)..8182 (or 0x1ff6). + PmtPid *string `locationName:"pmtPid" type:"string"` + + // The value of the program number field in the Program Map Table. + ProgramNum *int64 `locationName:"programNum" type:"integer"` + + // When vbr, does not insert null packets into transport stream to fill specified + // bitrate. The bitrate setting acts as the maximum bitrate when vbr is set. + RateMode *string `locationName:"rateMode" type:"string" enum:"M2tsRateMode"` + + // Packet Identifier (PID) for input source SCTE-27 data to this output. Multiple + // values are accepted, and can be entered in ranges and/or by comma separation. + // Can be entered as decimal or hexadecimal values. Each PID specified must + // be in the range of 32 (or 0x20)..8182 (or 0x1ff6). + Scte27Pids *string `locationName:"scte27Pids" type:"string"` + + // Optionally pass SCTE-35 signals from the input source to this output. + Scte35Control *string `locationName:"scte35Control" type:"string" enum:"M2tsScte35Control"` + + // Packet Identifier (PID) of the SCTE-35 stream in the transport stream. Can + // be entered as a decimal or hexadecimal value. Valid values are 32 (or 0x20)..8182 + // (or 0x1ff6). + Scte35Pid *string `locationName:"scte35Pid" type:"string"` + + // Inserts segmentation markers at each segmentationTime period. raiSegstart + // sets the Random Access Indicator bit in the adaptation field. raiAdapt sets + // the RAI bit and adds the current timecode in the private data bytes. psiSegstart + // inserts PAT and PMT tables at the start of segments. ebp adds Encoder Boundary + // Point information to the adaptation field as per OpenCable specification + // OC-SP-EBP-I01-130118. 
ebpLegacy adds Encoder Boundary Point information to + // the adaptation field using a legacy proprietary format. + SegmentationMarkers *string `locationName:"segmentationMarkers" type:"string" enum:"M2tsSegmentationMarkers"` + + // The segmentation style parameter controls how segmentation markers are inserted + // into the transport stream. With avails, it is possible that segments may + // be truncated, which can influence where future segmentation markers are inserted.When + // a segmentation style of "resetCadence" is selected and a segment is truncated + // due to an avail, we will reset the segmentation cadence. This means the subsequent + // segment will have a duration of $segmentationTime seconds.When a segmentation + // style of "maintainCadence" is selected and a segment is truncated due to + // an avail, we will not reset the segmentation cadence. This means the subsequent + // segment will likely be truncated as well. However, all segments after that + // will have a duration of $segmentationTime seconds. Note that EBP lookahead + // is a slight exception to this rule. + SegmentationStyle *string `locationName:"segmentationStyle" type:"string" enum:"M2tsSegmentationStyle"` + + // The length in seconds of each segment. Required unless markers is set to + // None_. + SegmentationTime *float64 `locationName:"segmentationTime" type:"double"` -type InputLocation struct { - _ struct{} `type:"structure"` + // When set to passthrough, timed metadata will be passed through from input + // to output. + TimedMetadataBehavior *string `locationName:"timedMetadataBehavior" type:"string" enum:"M2tsTimedMetadataBehavior"` - // key used to extract the password from EC2 Parameter store - PasswordParam *string `locationName:"passwordParam" type:"string"` + // Packet Identifier (PID) of the timed metadata stream in the transport stream. + // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or + // 0x20)..8182 (or 0x1ff6). + TimedMetadataPid *string `locationName:"timedMetadataPid" type:"string"` - // Uniform Resource Identifier - This should be a path to a file accessible - // to the Live system (eg. a http:// URI) depending on the output type. For - // example, a rtmpEndpoint should have a uri simliar to: "rtmp://fmsserver/live". - // - // Uri is a required field - Uri *string `locationName:"uri" type:"string" required:"true"` + // The value of the transport stream ID field in the Program Map Table. + TransportStreamId *int64 `locationName:"transportStreamId" type:"integer"` - // Username if credentials are required to access a file or publishing point. - // This can be either a plaintext username, or a reference to an AWS parameter - // store name from which the username can be retrieved. AWS Parameter store - // format: "ssm://" - Username *string `locationName:"username" type:"string"` + // Packet Identifier (PID) of the elementary video stream in the transport stream. + // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or + // 0x20)..8182 (or 0x1ff6). + VideoPid *string `locationName:"videoPid" type:"string"` } // String returns the string representation -func (s InputLocation) String() string { +func (s M2tsSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InputLocation) GoString() string { +func (s M2tsSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *InputLocation) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "InputLocation"} - if s.Uri == nil { - invalidParams.Add(request.NewErrParamRequired("Uri")) +func (s *M2tsSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "M2tsSettings"} + if s.DvbNitSettings != nil { + if err := s.DvbNitSettings.Validate(); err != nil { + invalidParams.AddNested("DvbNitSettings", err.(request.ErrInvalidParams)) + } + } + if s.DvbSdtSettings != nil { + if err := s.DvbSdtSettings.Validate(); err != nil { + invalidParams.AddNested("DvbSdtSettings", err.(request.ErrInvalidParams)) + } + } + if s.DvbTdtSettings != nil { + if err := s.DvbTdtSettings.Validate(); err != nil { + invalidParams.AddNested("DvbTdtSettings", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -7192,591 +10600,601 @@ func (s *InputLocation) Validate() error { return nil } -// SetPasswordParam sets the PasswordParam field's value. -func (s *InputLocation) SetPasswordParam(v string) *InputLocation { - s.PasswordParam = &v +// SetAbsentInputAudioBehavior sets the AbsentInputAudioBehavior field's value. +func (s *M2tsSettings) SetAbsentInputAudioBehavior(v string) *M2tsSettings { + s.AbsentInputAudioBehavior = &v return s } -// SetUri sets the Uri field's value. -func (s *InputLocation) SetUri(v string) *InputLocation { - s.Uri = &v +// SetArib sets the Arib field's value. +func (s *M2tsSettings) SetArib(v string) *M2tsSettings { + s.Arib = &v return s } -// SetUsername sets the Username field's value. -func (s *InputLocation) SetUsername(v string) *InputLocation { - s.Username = &v +// SetAribCaptionsPid sets the AribCaptionsPid field's value. +func (s *M2tsSettings) SetAribCaptionsPid(v string) *M2tsSettings { + s.AribCaptionsPid = &v return s } -type InputLossBehavior struct { - _ struct{} `type:"structure"` +// SetAribCaptionsPidControl sets the AribCaptionsPidControl field's value. +func (s *M2tsSettings) SetAribCaptionsPidControl(v string) *M2tsSettings { + s.AribCaptionsPidControl = &v + return s +} - // On input loss, the number of milliseconds to substitute black into the output - // before switching to the frame specified by inputLossImageType. A value x, - // where 0 <= x <= 1,000,000 and a value of 1,000,000 will be interpreted as - // infinite. - BlackFrameMsec *int64 `locationName:"blackFrameMsec" type:"integer"` +// SetAudioBufferModel sets the AudioBufferModel field's value. +func (s *M2tsSettings) SetAudioBufferModel(v string) *M2tsSettings { + s.AudioBufferModel = &v + return s +} - // When input loss image type is "color" this field specifies the color to use. - // Value: 6 hex characters representing the values of RGB. - InputLossImageColor *string `locationName:"inputLossImageColor" min:"6" type:"string"` +// SetAudioFramesPerPes sets the AudioFramesPerPes field's value. +func (s *M2tsSettings) SetAudioFramesPerPes(v int64) *M2tsSettings { + s.AudioFramesPerPes = &v + return s +} - // When input loss image type is "slate" these fields specify the parameters - // for accessing the slate. - InputLossImageSlate *InputLocation `locationName:"inputLossImageSlate" type:"structure"` +// SetAudioPids sets the AudioPids field's value. +func (s *M2tsSettings) SetAudioPids(v string) *M2tsSettings { + s.AudioPids = &v + return s +} - // Indicates whether to substitute a solid color or a slate into the output - // after input loss exceeds blackFrameMsec. 
- InputLossImageType *string `locationName:"inputLossImageType" type:"string" enum:"InputLossImageType"` +// SetAudioStreamType sets the AudioStreamType field's value. +func (s *M2tsSettings) SetAudioStreamType(v string) *M2tsSettings { + s.AudioStreamType = &v + return s +} - // On input loss, the number of milliseconds to repeat the previous picture - // before substituting black into the output. A value x, where 0 <= x <= 1,000,000 - // and a value of 1,000,000 will be interpreted as infinite. - RepeatFrameMsec *int64 `locationName:"repeatFrameMsec" type:"integer"` +// SetBitrate sets the Bitrate field's value. +func (s *M2tsSettings) SetBitrate(v int64) *M2tsSettings { + s.Bitrate = &v + return s } -// String returns the string representation -func (s InputLossBehavior) String() string { - return awsutil.Prettify(s) +// SetBufferModel sets the BufferModel field's value. +func (s *M2tsSettings) SetBufferModel(v string) *M2tsSettings { + s.BufferModel = &v + return s } -// GoString returns the string representation -func (s InputLossBehavior) GoString() string { - return s.String() +// SetCcDescriptor sets the CcDescriptor field's value. +func (s *M2tsSettings) SetCcDescriptor(v string) *M2tsSettings { + s.CcDescriptor = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *InputLossBehavior) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "InputLossBehavior"} - if s.InputLossImageColor != nil && len(*s.InputLossImageColor) < 6 { - invalidParams.Add(request.NewErrParamMinLen("InputLossImageColor", 6)) - } - if s.InputLossImageSlate != nil { - if err := s.InputLossImageSlate.Validate(); err != nil { - invalidParams.AddNested("InputLossImageSlate", err.(request.ErrInvalidParams)) - } - } +// SetDvbNitSettings sets the DvbNitSettings field's value. +func (s *M2tsSettings) SetDvbNitSettings(v *DvbNitSettings) *M2tsSettings { + s.DvbNitSettings = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetDvbSdtSettings sets the DvbSdtSettings field's value. +func (s *M2tsSettings) SetDvbSdtSettings(v *DvbSdtSettings) *M2tsSettings { + s.DvbSdtSettings = v + return s } -// SetBlackFrameMsec sets the BlackFrameMsec field's value. -func (s *InputLossBehavior) SetBlackFrameMsec(v int64) *InputLossBehavior { - s.BlackFrameMsec = &v +// SetDvbSubPids sets the DvbSubPids field's value. +func (s *M2tsSettings) SetDvbSubPids(v string) *M2tsSettings { + s.DvbSubPids = &v return s } -// SetInputLossImageColor sets the InputLossImageColor field's value. -func (s *InputLossBehavior) SetInputLossImageColor(v string) *InputLossBehavior { - s.InputLossImageColor = &v +// SetDvbTdtSettings sets the DvbTdtSettings field's value. +func (s *M2tsSettings) SetDvbTdtSettings(v *DvbTdtSettings) *M2tsSettings { + s.DvbTdtSettings = v return s } -// SetInputLossImageSlate sets the InputLossImageSlate field's value. -func (s *InputLossBehavior) SetInputLossImageSlate(v *InputLocation) *InputLossBehavior { - s.InputLossImageSlate = v +// SetDvbTeletextPid sets the DvbTeletextPid field's value. +func (s *M2tsSettings) SetDvbTeletextPid(v string) *M2tsSettings { + s.DvbTeletextPid = &v return s } -// SetInputLossImageType sets the InputLossImageType field's value. -func (s *InputLossBehavior) SetInputLossImageType(v string) *InputLossBehavior { - s.InputLossImageType = &v +// SetEbif sets the Ebif field's value. 
+func (s *M2tsSettings) SetEbif(v string) *M2tsSettings { + s.Ebif = &v return s } -// SetRepeatFrameMsec sets the RepeatFrameMsec field's value. -func (s *InputLossBehavior) SetRepeatFrameMsec(v int64) *InputLossBehavior { - s.RepeatFrameMsec = &v +// SetEbpAudioInterval sets the EbpAudioInterval field's value. +func (s *M2tsSettings) SetEbpAudioInterval(v string) *M2tsSettings { + s.EbpAudioInterval = &v return s } -// An Input Security Group -type InputSecurityGroup struct { - _ struct{} `type:"structure"` +// SetEbpLookaheadMs sets the EbpLookaheadMs field's value. +func (s *M2tsSettings) SetEbpLookaheadMs(v int64) *M2tsSettings { + s.EbpLookaheadMs = &v + return s +} - // Unique ARN of Input Security Group - Arn *string `locationName:"arn" type:"string"` +// SetEbpPlacement sets the EbpPlacement field's value. +func (s *M2tsSettings) SetEbpPlacement(v string) *M2tsSettings { + s.EbpPlacement = &v + return s +} - // The Id of the Input Security Group - Id *string `locationName:"id" type:"string"` +// SetEcmPid sets the EcmPid field's value. +func (s *M2tsSettings) SetEcmPid(v string) *M2tsSettings { + s.EcmPid = &v + return s +} - // Whitelist rules and their sync status - WhitelistRules []*InputWhitelistRule `locationName:"whitelistRules" type:"list"` +// SetEsRateInPes sets the EsRateInPes field's value. +func (s *M2tsSettings) SetEsRateInPes(v string) *M2tsSettings { + s.EsRateInPes = &v + return s +} + +// SetEtvPlatformPid sets the EtvPlatformPid field's value. +func (s *M2tsSettings) SetEtvPlatformPid(v string) *M2tsSettings { + s.EtvPlatformPid = &v + return s +} + +// SetEtvSignalPid sets the EtvSignalPid field's value. +func (s *M2tsSettings) SetEtvSignalPid(v string) *M2tsSettings { + s.EtvSignalPid = &v + return s +} + +// SetFragmentTime sets the FragmentTime field's value. +func (s *M2tsSettings) SetFragmentTime(v float64) *M2tsSettings { + s.FragmentTime = &v + return s +} + +// SetKlv sets the Klv field's value. +func (s *M2tsSettings) SetKlv(v string) *M2tsSettings { + s.Klv = &v + return s } -// String returns the string representation -func (s InputSecurityGroup) String() string { - return awsutil.Prettify(s) +// SetKlvDataPids sets the KlvDataPids field's value. +func (s *M2tsSettings) SetKlvDataPids(v string) *M2tsSettings { + s.KlvDataPids = &v + return s } -// GoString returns the string representation -func (s InputSecurityGroup) GoString() string { - return s.String() +// SetNullPacketBitrate sets the NullPacketBitrate field's value. +func (s *M2tsSettings) SetNullPacketBitrate(v float64) *M2tsSettings { + s.NullPacketBitrate = &v + return s } -// SetArn sets the Arn field's value. -func (s *InputSecurityGroup) SetArn(v string) *InputSecurityGroup { - s.Arn = &v +// SetPatInterval sets the PatInterval field's value. +func (s *M2tsSettings) SetPatInterval(v int64) *M2tsSettings { + s.PatInterval = &v return s } -// SetId sets the Id field's value. -func (s *InputSecurityGroup) SetId(v string) *InputSecurityGroup { - s.Id = &v +// SetPcrControl sets the PcrControl field's value. +func (s *M2tsSettings) SetPcrControl(v string) *M2tsSettings { + s.PcrControl = &v return s } -// SetWhitelistRules sets the WhitelistRules field's value. -func (s *InputSecurityGroup) SetWhitelistRules(v []*InputWhitelistRule) *InputSecurityGroup { - s.WhitelistRules = v +// SetPcrPeriod sets the PcrPeriod field's value. +func (s *M2tsSettings) SetPcrPeriod(v int64) *M2tsSettings { + s.PcrPeriod = &v return s } -// Live Event input parameters. 
There can be multiple inputs in a single Live -// Event. -type InputSettings struct { - _ struct{} `type:"structure"` - - // Used to select the audio stream to decode for inputs that have multiple available. - AudioSelectors []*AudioSelector `locationName:"audioSelectors" type:"list"` - - // Used to select the caption input to use for inputs that have multiple available. - CaptionSelectors []*CaptionSelector `locationName:"captionSelectors" type:"list"` - - // Enable or disable the deblock filter when filtering. - DeblockFilter *string `locationName:"deblockFilter" type:"string" enum:"InputDeblockFilter"` - - // Enable or disable the denoise filter when filtering. - DenoiseFilter *string `locationName:"denoiseFilter" type:"string" enum:"InputDenoiseFilter"` - - // Adjusts the magnitude of filtering from 1 (minimal) to 5 (strongest). - FilterStrength *int64 `locationName:"filterStrength" min:"1" type:"integer"` - - // Turns on the filter for this input. MPEG-2 inputs have the deblocking filter - // enabled by default.1) auto - filtering will be applied depending on input - // type/quality2) disabled - no filtering will be applied to the input3) forced - // - filtering will be applied regardless of input type - InputFilter *string `locationName:"inputFilter" type:"string" enum:"InputFilter"` - - // Input settings. - NetworkInputSettings *NetworkInputSettings `locationName:"networkInputSettings" type:"structure"` - - // Loop input if it is a file. This allows a file input to be streamed indefinitely. - SourceEndBehavior *string `locationName:"sourceEndBehavior" type:"string" enum:"InputSourceEndBehavior"` +// SetPcrPid sets the PcrPid field's value. +func (s *M2tsSettings) SetPcrPid(v string) *M2tsSettings { + s.PcrPid = &v + return s +} - // Informs which video elementary stream to decode for input types that have - // multiple available. - VideoSelector *VideoSelector `locationName:"videoSelector" type:"structure"` +// SetPmtInterval sets the PmtInterval field's value. +func (s *M2tsSettings) SetPmtInterval(v int64) *M2tsSettings { + s.PmtInterval = &v + return s } -// String returns the string representation -func (s InputSettings) String() string { - return awsutil.Prettify(s) +// SetPmtPid sets the PmtPid field's value. +func (s *M2tsSettings) SetPmtPid(v string) *M2tsSettings { + s.PmtPid = &v + return s } -// GoString returns the string representation -func (s InputSettings) GoString() string { - return s.String() +// SetProgramNum sets the ProgramNum field's value. +func (s *M2tsSettings) SetProgramNum(v int64) *M2tsSettings { + s.ProgramNum = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *InputSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "InputSettings"} - if s.FilterStrength != nil && *s.FilterStrength < 1 { - invalidParams.Add(request.NewErrParamMinValue("FilterStrength", 1)) - } - if s.AudioSelectors != nil { - for i, v := range s.AudioSelectors { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AudioSelectors", i), err.(request.ErrInvalidParams)) - } - } - } - if s.CaptionSelectors != nil { - for i, v := range s.CaptionSelectors { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CaptionSelectors", i), err.(request.ErrInvalidParams)) - } - } - } +// SetRateMode sets the RateMode field's value. 
+func (s *M2tsSettings) SetRateMode(v string) *M2tsSettings { + s.RateMode = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetScte27Pids sets the Scte27Pids field's value. +func (s *M2tsSettings) SetScte27Pids(v string) *M2tsSettings { + s.Scte27Pids = &v + return s } -// SetAudioSelectors sets the AudioSelectors field's value. -func (s *InputSettings) SetAudioSelectors(v []*AudioSelector) *InputSettings { - s.AudioSelectors = v +// SetScte35Control sets the Scte35Control field's value. +func (s *M2tsSettings) SetScte35Control(v string) *M2tsSettings { + s.Scte35Control = &v return s } -// SetCaptionSelectors sets the CaptionSelectors field's value. -func (s *InputSettings) SetCaptionSelectors(v []*CaptionSelector) *InputSettings { - s.CaptionSelectors = v +// SetScte35Pid sets the Scte35Pid field's value. +func (s *M2tsSettings) SetScte35Pid(v string) *M2tsSettings { + s.Scte35Pid = &v return s } -// SetDeblockFilter sets the DeblockFilter field's value. -func (s *InputSettings) SetDeblockFilter(v string) *InputSettings { - s.DeblockFilter = &v +// SetSegmentationMarkers sets the SegmentationMarkers field's value. +func (s *M2tsSettings) SetSegmentationMarkers(v string) *M2tsSettings { + s.SegmentationMarkers = &v return s } -// SetDenoiseFilter sets the DenoiseFilter field's value. -func (s *InputSettings) SetDenoiseFilter(v string) *InputSettings { - s.DenoiseFilter = &v +// SetSegmentationStyle sets the SegmentationStyle field's value. +func (s *M2tsSettings) SetSegmentationStyle(v string) *M2tsSettings { + s.SegmentationStyle = &v return s } -// SetFilterStrength sets the FilterStrength field's value. -func (s *InputSettings) SetFilterStrength(v int64) *InputSettings { - s.FilterStrength = &v +// SetSegmentationTime sets the SegmentationTime field's value. +func (s *M2tsSettings) SetSegmentationTime(v float64) *M2tsSettings { + s.SegmentationTime = &v return s } -// SetInputFilter sets the InputFilter field's value. -func (s *InputSettings) SetInputFilter(v string) *InputSettings { - s.InputFilter = &v +// SetTimedMetadataBehavior sets the TimedMetadataBehavior field's value. +func (s *M2tsSettings) SetTimedMetadataBehavior(v string) *M2tsSettings { + s.TimedMetadataBehavior = &v return s } -// SetNetworkInputSettings sets the NetworkInputSettings field's value. -func (s *InputSettings) SetNetworkInputSettings(v *NetworkInputSettings) *InputSettings { - s.NetworkInputSettings = v +// SetTimedMetadataPid sets the TimedMetadataPid field's value. +func (s *M2tsSettings) SetTimedMetadataPid(v string) *M2tsSettings { + s.TimedMetadataPid = &v return s } -// SetSourceEndBehavior sets the SourceEndBehavior field's value. -func (s *InputSettings) SetSourceEndBehavior(v string) *InputSettings { - s.SourceEndBehavior = &v +// SetTransportStreamId sets the TransportStreamId field's value. +func (s *M2tsSettings) SetTransportStreamId(v int64) *M2tsSettings { + s.TransportStreamId = &v return s } -// SetVideoSelector sets the VideoSelector field's value. -func (s *InputSettings) SetVideoSelector(v *VideoSelector) *InputSettings { - s.VideoSelector = v +// SetVideoPid sets the VideoPid field's value. +func (s *M2tsSettings) SetVideoPid(v string) *M2tsSettings { + s.VideoPid = &v return s } -// The settings for a PULL type input. -type InputSource struct { +// Settings information for the .m3u8 container +type M3u8Settings struct { _ struct{} `type:"structure"` - // The key used to extract the password from EC2 Parameter store. 
- PasswordParam *string `locationName:"passwordParam" type:"string"` + // The number of audio frames to insert for each PES packet. + AudioFramesPerPes *int64 `locationName:"audioFramesPerPes" type:"integer"` - // This represents the customer's source URL where stream ispulled from. - Url *string `locationName:"url" type:"string"` + // Packet Identifier (PID) of the elementary audio stream(s) in the transport + // stream. Multiple values are accepted, and can be entered in ranges and/or + // by comma separation. Can be entered as decimal or hexadecimal values. + AudioPids *string `locationName:"audioPids" type:"string"` - // The username for the input source. - Username *string `locationName:"username" type:"string"` -} + // This parameter is unused and deprecated. + EcmPid *string `locationName:"ecmPid" type:"string"` -// String returns the string representation -func (s InputSource) String() string { - return awsutil.Prettify(s) -} + // The number of milliseconds between instances of this table in the output + // transport stream. A value of \"0\" writes out the PMT once per segment file. + PatInterval *int64 `locationName:"patInterval" type:"integer"` -// GoString returns the string representation -func (s InputSource) GoString() string { - return s.String() -} + // When set to pcrEveryPesPacket, a Program Clock Reference value is inserted + // for every Packetized Elementary Stream (PES) header. This parameter is effective + // only when the PCR PID is the same as the video or audio elementary stream. + PcrControl *string `locationName:"pcrControl" type:"string" enum:"M3u8PcrControl"` -// SetPasswordParam sets the PasswordParam field's value. -func (s *InputSource) SetPasswordParam(v string) *InputSource { - s.PasswordParam = &v - return s -} + // Maximum time in milliseconds between Program Clock References (PCRs) inserted + // into the transport stream. + PcrPeriod *int64 `locationName:"pcrPeriod" type:"integer"` -// SetUrl sets the Url field's value. -func (s *InputSource) SetUrl(v string) *InputSource { - s.Url = &v - return s -} + // Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport + // stream. When no value is given, the encoder will assign the same value as + // the Video PID. Can be entered as a decimal or hexadecimal value. + PcrPid *string `locationName:"pcrPid" type:"string"` -// SetUsername sets the Username field's value. -func (s *InputSource) SetUsername(v string) *InputSource { - s.Username = &v - return s -} + // The number of milliseconds between instances of this table in the output + // transport stream. A value of \"0\" writes out the PMT once per segment file. + PmtInterval *int64 `locationName:"pmtInterval" type:"integer"` + + // Packet Identifier (PID) for the Program Map Table (PMT) in the transport + // stream. Can be entered as a decimal or hexadecimal value. + PmtPid *string `locationName:"pmtPid" type:"string"` + + // The value of the program number field in the Program Map Table. + ProgramNum *int64 `locationName:"programNum" type:"integer"` + + // If set to passthrough, passes any SCTE-35 signals from the input source to + // this output. + Scte35Behavior *string `locationName:"scte35Behavior" type:"string" enum:"M3u8Scte35Behavior"` + + // Packet Identifier (PID) of the SCTE-35 stream in the transport stream. Can + // be entered as a decimal or hexadecimal value. + Scte35Pid *string `locationName:"scte35Pid" type:"string"` -// Settings for for a PULL type input. 
-type InputSourceRequest struct { - _ struct{} `type:"structure"` + // When set to passthrough, timed metadata is passed through from input to output. + TimedMetadataBehavior *string `locationName:"timedMetadataBehavior" type:"string" enum:"M3u8TimedMetadataBehavior"` - // The key used to extract the password from EC2 Parameter store. - PasswordParam *string `locationName:"passwordParam" type:"string"` + // Packet Identifier (PID) of the timed metadata stream in the transport stream. + // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or + // 0x20)..8182 (or 0x1ff6). + TimedMetadataPid *string `locationName:"timedMetadataPid" type:"string"` - // This represents the customer's source URL where stream ispulled from. - Url *string `locationName:"url" type:"string"` + // The value of the transport stream ID field in the Program Map Table. + TransportStreamId *int64 `locationName:"transportStreamId" type:"integer"` - // The username for the input source. - Username *string `locationName:"username" type:"string"` + // Packet Identifier (PID) of the elementary video stream in the transport stream. + // Can be entered as a decimal or hexadecimal value. + VideoPid *string `locationName:"videoPid" type:"string"` } // String returns the string representation -func (s InputSourceRequest) String() string { +func (s M3u8Settings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s InputSourceRequest) GoString() string { +func (s M3u8Settings) GoString() string { return s.String() } -// SetPasswordParam sets the PasswordParam field's value. -func (s *InputSourceRequest) SetPasswordParam(v string) *InputSourceRequest { - s.PasswordParam = &v +// SetAudioFramesPerPes sets the AudioFramesPerPes field's value. +func (s *M3u8Settings) SetAudioFramesPerPes(v int64) *M3u8Settings { + s.AudioFramesPerPes = &v return s } -// SetUrl sets the Url field's value. -func (s *InputSourceRequest) SetUrl(v string) *InputSourceRequest { - s.Url = &v +// SetAudioPids sets the AudioPids field's value. +func (s *M3u8Settings) SetAudioPids(v string) *M3u8Settings { + s.AudioPids = &v return s } -// SetUsername sets the Username field's value. -func (s *InputSourceRequest) SetUsername(v string) *InputSourceRequest { - s.Username = &v +// SetEcmPid sets the EcmPid field's value. +func (s *M3u8Settings) SetEcmPid(v string) *M3u8Settings { + s.EcmPid = &v return s } -type InputSpecification struct { - _ struct{} `type:"structure"` - - // Input codec - Codec *string `locationName:"codec" type:"string" enum:"InputCodec"` - - // Maximum input bitrate, categorized coarsely - MaximumBitrate *string `locationName:"maximumBitrate" type:"string" enum:"InputMaximumBitrate"` - - // Input resolution, categorized coarsely - Resolution *string `locationName:"resolution" type:"string" enum:"InputResolution"` -} - -// String returns the string representation -func (s InputSpecification) String() string { - return awsutil.Prettify(s) +// SetPatInterval sets the PatInterval field's value. +func (s *M3u8Settings) SetPatInterval(v int64) *M3u8Settings { + s.PatInterval = &v + return s } -// GoString returns the string representation -func (s InputSpecification) GoString() string { - return s.String() +// SetPcrControl sets the PcrControl field's value. +func (s *M3u8Settings) SetPcrControl(v string) *M3u8Settings { + s.PcrControl = &v + return s } -// SetCodec sets the Codec field's value. 
-func (s *InputSpecification) SetCodec(v string) *InputSpecification { - s.Codec = &v +// SetPcrPeriod sets the PcrPeriod field's value. +func (s *M3u8Settings) SetPcrPeriod(v int64) *M3u8Settings { + s.PcrPeriod = &v return s } -// SetMaximumBitrate sets the MaximumBitrate field's value. -func (s *InputSpecification) SetMaximumBitrate(v string) *InputSpecification { - s.MaximumBitrate = &v +// SetPcrPid sets the PcrPid field's value. +func (s *M3u8Settings) SetPcrPid(v string) *M3u8Settings { + s.PcrPid = &v return s } -// SetResolution sets the Resolution field's value. -func (s *InputSpecification) SetResolution(v string) *InputSpecification { - s.Resolution = &v +// SetPmtInterval sets the PmtInterval field's value. +func (s *M3u8Settings) SetPmtInterval(v int64) *M3u8Settings { + s.PmtInterval = &v return s } -// Whitelist rule -type InputWhitelistRule struct { - _ struct{} `type:"structure"` - - // The IPv4 CIDR that's whitelisted. - Cidr *string `locationName:"cidr" type:"string"` +// SetPmtPid sets the PmtPid field's value. +func (s *M3u8Settings) SetPmtPid(v string) *M3u8Settings { + s.PmtPid = &v + return s } -// String returns the string representation -func (s InputWhitelistRule) String() string { - return awsutil.Prettify(s) +// SetProgramNum sets the ProgramNum field's value. +func (s *M3u8Settings) SetProgramNum(v int64) *M3u8Settings { + s.ProgramNum = &v + return s } -// GoString returns the string representation -func (s InputWhitelistRule) GoString() string { - return s.String() +// SetScte35Behavior sets the Scte35Behavior field's value. +func (s *M3u8Settings) SetScte35Behavior(v string) *M3u8Settings { + s.Scte35Behavior = &v + return s } -// SetCidr sets the Cidr field's value. -func (s *InputWhitelistRule) SetCidr(v string) *InputWhitelistRule { - s.Cidr = &v +// SetScte35Pid sets the Scte35Pid field's value. +func (s *M3u8Settings) SetScte35Pid(v string) *M3u8Settings { + s.Scte35Pid = &v return s } -// An IPv4 CIDR to whitelist. -type InputWhitelistRuleCidr struct { - _ struct{} `type:"structure"` - - // The IPv4 CIDR to whitelist - Cidr *string `locationName:"cidr" type:"string"` +// SetTimedMetadataBehavior sets the TimedMetadataBehavior field's value. +func (s *M3u8Settings) SetTimedMetadataBehavior(v string) *M3u8Settings { + s.TimedMetadataBehavior = &v + return s } -// String returns the string representation -func (s InputWhitelistRuleCidr) String() string { - return awsutil.Prettify(s) +// SetTimedMetadataPid sets the TimedMetadataPid field's value. +func (s *M3u8Settings) SetTimedMetadataPid(v string) *M3u8Settings { + s.TimedMetadataPid = &v + return s } -// GoString returns the string representation -func (s InputWhitelistRuleCidr) GoString() string { - return s.String() +// SetTransportStreamId sets the TransportStreamId field's value. +func (s *M3u8Settings) SetTransportStreamId(v int64) *M3u8Settings { + s.TransportStreamId = &v + return s } -// SetCidr sets the Cidr field's value. -func (s *InputWhitelistRuleCidr) SetCidr(v string) *InputWhitelistRuleCidr { - s.Cidr = &v +// SetVideoPid sets the VideoPid field's value. +func (s *M3u8Settings) SetVideoPid(v string) *M3u8Settings { + s.VideoPid = &v return s } -type KeyProviderSettings struct { +type Mp2Settings struct { _ struct{} `type:"structure"` - StaticKeySettings *StaticKeySettings `locationName:"staticKeySettings" type:"structure"` + // Average bitrate in bits/second. + Bitrate *float64 `locationName:"bitrate" type:"double"` + + // The MPEG2 Audio coding mode. 
Valid values are codingMode10 (for mono) or + // codingMode20 (for stereo). + CodingMode *string `locationName:"codingMode" type:"string" enum:"Mp2CodingMode"` + + // Sample rate in Hz. + SampleRate *float64 `locationName:"sampleRate" type:"double"` } // String returns the string representation -func (s KeyProviderSettings) String() string { +func (s Mp2Settings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s KeyProviderSettings) GoString() string { +func (s Mp2Settings) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *KeyProviderSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "KeyProviderSettings"} - if s.StaticKeySettings != nil { - if err := s.StaticKeySettings.Validate(); err != nil { - invalidParams.AddNested("StaticKeySettings", err.(request.ErrInvalidParams)) - } - } +// SetBitrate sets the Bitrate field's value. +func (s *Mp2Settings) SetBitrate(v float64) *Mp2Settings { + s.Bitrate = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCodingMode sets the CodingMode field's value. +func (s *Mp2Settings) SetCodingMode(v string) *Mp2Settings { + s.CodingMode = &v + return s } -// SetStaticKeySettings sets the StaticKeySettings field's value. -func (s *KeyProviderSettings) SetStaticKeySettings(v *StaticKeySettings) *KeyProviderSettings { - s.StaticKeySettings = v +// SetSampleRate sets the SampleRate field's value. +func (s *Mp2Settings) SetSampleRate(v float64) *Mp2Settings { + s.SampleRate = &v return s } -type ListChannelsInput struct { +type MsSmoothGroupSettings struct { _ struct{} `type:"structure"` - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // The value of the "Acquisition Point Identity" element used in each message + // placed in the sparse track. Only enabled if sparseTrackType is not "none". + AcquisitionPointId *string `locationName:"acquisitionPointId" type:"string"` - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` -} + // If set to passthrough for an audio-only MS Smooth output, the fragment absolute + // time will be set to the current timecode. This option does not write timecodes + // to the audio elementary stream. + AudioOnlyTimecodeControl *string `locationName:"audioOnlyTimecodeControl" type:"string" enum:"SmoothGroupAudioOnlyTimecodeControl"` -// String returns the string representation -func (s ListChannelsInput) String() string { - return awsutil.Prettify(s) -} + // If set to verifyAuthenticity, verify the https certificate chain to a trusted + // Certificate Authority (CA). This will cause https outputs to self-signed + // certificates to fail. + CertificateMode *string `locationName:"certificateMode" type:"string" enum:"SmoothGroupCertificateMode"` -// GoString returns the string representation -func (s ListChannelsInput) GoString() string { - return s.String() -} + // Number of seconds to wait before retrying connection to the IIS server if + // the connection is lost. Content will be cached during this time and the cache + // will be be delivered to the IIS server once the connection is re-established. + ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListChannelsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListChannelsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } + // Smooth Streaming publish point on an IIS server. Elemental Live acts as a + // "Push" encoder to IIS. + // + // Destination is a required field + Destination *OutputLocationRef `locationName:"destination" type:"structure" required:"true"` - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} + // MS Smooth event ID to be sent to the IIS server.Should only be specified + // if eventIdMode is set to useConfigured. + EventId *string `locationName:"eventId" type:"string"` -// SetMaxResults sets the MaxResults field's value. -func (s *ListChannelsInput) SetMaxResults(v int64) *ListChannelsInput { - s.MaxResults = &v - return s -} + // Specifies whether or not to send an event ID to the IIS server. If no event + // ID is sent and the same Live Event is used without changing the publishing + // point, clients might see cached video from the previous run.Options:- "useConfigured" + // - use the value provided in eventId- "useTimestamp" - generate and send an + // event ID based on the current timestamp- "noEventId" - do not send an event + // ID to the IIS server. + EventIdMode *string `locationName:"eventIdMode" type:"string" enum:"SmoothGroupEventIdMode"` -// SetNextToken sets the NextToken field's value. -func (s *ListChannelsInput) SetNextToken(v string) *ListChannelsInput { - s.NextToken = &v - return s -} + // When set to sendEos, send EOS signal to IIS server when stopping the event + EventStopBehavior *string `locationName:"eventStopBehavior" type:"string" enum:"SmoothGroupEventStopBehavior"` + + // Size in seconds of file cache for streaming outputs. + FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` -type ListChannelsOutput struct { - _ struct{} `type:"structure"` + // Length of mp4 fragments to generate (in seconds). Fragment length must be + // compatible with GOP size and framerate. + FragmentLength *int64 `locationName:"fragmentLength" min:"1" type:"integer"` - Channels []*ChannelSummary `locationName:"channels" type:"list"` + // Parameter that control output group behavior on input loss. + InputLossAction *string `locationName:"inputLossAction" type:"string" enum:"InputLossActionForMsSmoothOut"` - NextToken *string `locationName:"nextToken" type:"string"` -} + // Number of retry attempts. + NumRetries *int64 `locationName:"numRetries" type:"integer"` -// String returns the string representation -func (s ListChannelsOutput) String() string { - return awsutil.Prettify(s) -} + // Number of seconds before initiating a restart due to output failure, due + // to exhausting the numRetries on one segment, or exceeding filecacheDuration. + RestartDelay *int64 `locationName:"restartDelay" type:"integer"` -// GoString returns the string representation -func (s ListChannelsOutput) GoString() string { - return s.String() -} + // When set to useInputSegmentation, the output segment or fragment points are + // set by the RAI markers from the input streams. + SegmentationMode *string `locationName:"segmentationMode" type:"string" enum:"SmoothGroupSegmentationMode"` -// SetChannels sets the Channels field's value. -func (s *ListChannelsOutput) SetChannels(v []*ChannelSummary) *ListChannelsOutput { - s.Channels = v - return s -} + // Number of milliseconds to delay the output from the second pipeline. 
+ SendDelayMs *int64 `locationName:"sendDelayMs" type:"integer"` -// SetNextToken sets the NextToken field's value. -func (s *ListChannelsOutput) SetNextToken(v string) *ListChannelsOutput { - s.NextToken = &v - return s -} + // If set to scte35, use incoming SCTE-35 messages to generate a sparse track + // in this group of MS-Smooth outputs. + SparseTrackType *string `locationName:"sparseTrackType" type:"string" enum:"SmoothGroupSparseTrackType"` -type ListInputSecurityGroupsInput struct { - _ struct{} `type:"structure"` + // When set to send, send stream manifest so publishing point doesn't start + // until all streams start. + StreamManifestBehavior *string `locationName:"streamManifestBehavior" type:"string" enum:"SmoothGroupStreamManifestBehavior"` - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + // Timestamp offset for the event. Only used if timestampOffsetMode is set to + // useConfiguredOffset. + TimestampOffset *string `locationName:"timestampOffset" type:"string"` - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` + // Type of timestamp date offset to use.- useEventStartDate: Use the date the + // event was started as the offset- useConfiguredOffset: Use an explicitly configured + // date as the offset + TimestampOffsetMode *string `locationName:"timestampOffsetMode" type:"string" enum:"SmoothGroupTimestampOffsetMode"` } // String returns the string representation -func (s ListInputSecurityGroupsInput) String() string { +func (s MsSmoothGroupSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListInputSecurityGroupsInput) GoString() string { +func (s MsSmoothGroupSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListInputSecurityGroupsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListInputSecurityGroupsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *MsSmoothGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MsSmoothGroupSettings"} + if s.Destination == nil { + invalidParams.Add(request.NewErrParamRequired("Destination")) + } + if s.FragmentLength != nil && *s.FragmentLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("FragmentLength", 1)) } if invalidParams.Len() > 0 { @@ -7785,372 +11203,342 @@ func (s *ListInputSecurityGroupsInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListInputSecurityGroupsInput) SetMaxResults(v int64) *ListInputSecurityGroupsInput { - s.MaxResults = &v +// SetAcquisitionPointId sets the AcquisitionPointId field's value. +func (s *MsSmoothGroupSettings) SetAcquisitionPointId(v string) *MsSmoothGroupSettings { + s.AcquisitionPointId = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListInputSecurityGroupsInput) SetNextToken(v string) *ListInputSecurityGroupsInput { - s.NextToken = &v +// SetAudioOnlyTimecodeControl sets the AudioOnlyTimecodeControl field's value. 
+func (s *MsSmoothGroupSettings) SetAudioOnlyTimecodeControl(v string) *MsSmoothGroupSettings { + s.AudioOnlyTimecodeControl = &v return s } -type ListInputSecurityGroupsOutput struct { - _ struct{} `type:"structure"` - - InputSecurityGroups []*InputSecurityGroup `locationName:"inputSecurityGroups" type:"list"` - - NextToken *string `locationName:"nextToken" type:"string"` +// SetCertificateMode sets the CertificateMode field's value. +func (s *MsSmoothGroupSettings) SetCertificateMode(v string) *MsSmoothGroupSettings { + s.CertificateMode = &v + return s } -// String returns the string representation -func (s ListInputSecurityGroupsOutput) String() string { - return awsutil.Prettify(s) +// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. +func (s *MsSmoothGroupSettings) SetConnectionRetryInterval(v int64) *MsSmoothGroupSettings { + s.ConnectionRetryInterval = &v + return s } -// GoString returns the string representation -func (s ListInputSecurityGroupsOutput) GoString() string { - return s.String() +// SetDestination sets the Destination field's value. +func (s *MsSmoothGroupSettings) SetDestination(v *OutputLocationRef) *MsSmoothGroupSettings { + s.Destination = v + return s } -// SetInputSecurityGroups sets the InputSecurityGroups field's value. -func (s *ListInputSecurityGroupsOutput) SetInputSecurityGroups(v []*InputSecurityGroup) *ListInputSecurityGroupsOutput { - s.InputSecurityGroups = v +// SetEventId sets the EventId field's value. +func (s *MsSmoothGroupSettings) SetEventId(v string) *MsSmoothGroupSettings { + s.EventId = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListInputSecurityGroupsOutput) SetNextToken(v string) *ListInputSecurityGroupsOutput { - s.NextToken = &v +// SetEventIdMode sets the EventIdMode field's value. +func (s *MsSmoothGroupSettings) SetEventIdMode(v string) *MsSmoothGroupSettings { + s.EventIdMode = &v return s } -type ListInputsInput struct { - _ struct{} `type:"structure"` - - MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` - - NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +// SetEventStopBehavior sets the EventStopBehavior field's value. +func (s *MsSmoothGroupSettings) SetEventStopBehavior(v string) *MsSmoothGroupSettings { + s.EventStopBehavior = &v + return s } -// String returns the string representation -func (s ListInputsInput) String() string { - return awsutil.Prettify(s) +// SetFilecacheDuration sets the FilecacheDuration field's value. +func (s *MsSmoothGroupSettings) SetFilecacheDuration(v int64) *MsSmoothGroupSettings { + s.FilecacheDuration = &v + return s } -// GoString returns the string representation -func (s ListInputsInput) GoString() string { - return s.String() +// SetFragmentLength sets the FragmentLength field's value. +func (s *MsSmoothGroupSettings) SetFragmentLength(v int64) *MsSmoothGroupSettings { + s.FragmentLength = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ListInputsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListInputsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetInputLossAction sets the InputLossAction field's value. 
+func (s *MsSmoothGroupSettings) SetInputLossAction(v string) *MsSmoothGroupSettings { + s.InputLossAction = &v + return s } -// SetMaxResults sets the MaxResults field's value. -func (s *ListInputsInput) SetMaxResults(v int64) *ListInputsInput { - s.MaxResults = &v +// SetNumRetries sets the NumRetries field's value. +func (s *MsSmoothGroupSettings) SetNumRetries(v int64) *MsSmoothGroupSettings { + s.NumRetries = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListInputsInput) SetNextToken(v string) *ListInputsInput { - s.NextToken = &v +// SetRestartDelay sets the RestartDelay field's value. +func (s *MsSmoothGroupSettings) SetRestartDelay(v int64) *MsSmoothGroupSettings { + s.RestartDelay = &v return s } -type ListInputsOutput struct { - _ struct{} `type:"structure"` - - Inputs []*Input `locationName:"inputs" type:"list"` +// SetSegmentationMode sets the SegmentationMode field's value. +func (s *MsSmoothGroupSettings) SetSegmentationMode(v string) *MsSmoothGroupSettings { + s.SegmentationMode = &v + return s +} - NextToken *string `locationName:"nextToken" type:"string"` +// SetSendDelayMs sets the SendDelayMs field's value. +func (s *MsSmoothGroupSettings) SetSendDelayMs(v int64) *MsSmoothGroupSettings { + s.SendDelayMs = &v + return s } -// String returns the string representation -func (s ListInputsOutput) String() string { - return awsutil.Prettify(s) +// SetSparseTrackType sets the SparseTrackType field's value. +func (s *MsSmoothGroupSettings) SetSparseTrackType(v string) *MsSmoothGroupSettings { + s.SparseTrackType = &v + return s } -// GoString returns the string representation -func (s ListInputsOutput) GoString() string { - return s.String() +// SetStreamManifestBehavior sets the StreamManifestBehavior field's value. +func (s *MsSmoothGroupSettings) SetStreamManifestBehavior(v string) *MsSmoothGroupSettings { + s.StreamManifestBehavior = &v + return s } -// SetInputs sets the Inputs field's value. -func (s *ListInputsOutput) SetInputs(v []*Input) *ListInputsOutput { - s.Inputs = v +// SetTimestampOffset sets the TimestampOffset field's value. +func (s *MsSmoothGroupSettings) SetTimestampOffset(v string) *MsSmoothGroupSettings { + s.TimestampOffset = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListInputsOutput) SetNextToken(v string) *ListInputsOutput { - s.NextToken = &v +// SetTimestampOffsetMode sets the TimestampOffsetMode field's value. +func (s *MsSmoothGroupSettings) SetTimestampOffsetMode(v string) *MsSmoothGroupSettings { + s.TimestampOffsetMode = &v return s } -type M2tsSettings struct { +type MsSmoothOutputSettings struct { _ struct{} `type:"structure"` - // When set to drop, output audio streams will be removed from the program if - // the selected input audio stream is removed from the input. This allows the - // output audio configuration to dynamically change based on input configuration. - // If this is set to encodeSilence, all output audio streams will output encoded - // silence when not connected to an active input stream. - AbsentInputAudioBehavior *string `locationName:"absentInputAudioBehavior" type:"string" enum:"M2tsAbsentInputAudioBehavior"` - - // When set to enabled, uses ARIB-compliant field muxing and removes video descriptor. - Arib *string `locationName:"arib" type:"string" enum:"M2tsArib"` - - // Packet Identifier (PID) for ARIB Captions in the transport stream. Can be - // entered as a decimal or hexadecimal value. Valid values are 32 (or 0x20)..8182 - // (or 0x1ff6). 
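For orientation, the generated `MsSmoothGroupSettings` type added above follows the usual aws-sdk-go pattern: pointer-valued fields, chainable `Set*` helpers, and a `Validate()` that enforces the required `Destination` and the `min:"1"` constraint on `FragmentLength`. Below is a minimal sketch of how calling code might build and validate one of these structs; the import paths assume the upstream `github.com/aws/aws-sdk-go` module, and the destination reference and event values are placeholders, not values from this changeset.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	// Destination is required and must reference an OutputDestination id
	// defined on the channel ("destination1" is a placeholder).
	settings := &medialive.MsSmoothGroupSettings{
		Destination: &medialive.OutputLocationRef{
			DestinationRefId: aws.String("destination1"),
		},
	}

	// The chainable setters generated above fill in optional fields.
	settings.
		SetFragmentLength(2).            // must be >= 1 per the min constraint
		SetEventIdMode("useConfigured"). // one of the documented enum values
		SetEventId("my-event")

	// Validate reports the same client-side errors the SDK raises before an
	// API call, e.g. a missing Destination or FragmentLength below 1.
	if err := settings.Validate(); err != nil {
		fmt.Println("invalid settings:", err)
		return
	}
	fmt.Println("settings ok")
}
```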
- AribCaptionsPid *string `locationName:"aribCaptionsPid" type:"string"` - - // If set to auto, pid number used for ARIB Captions will be auto-selected from - // unused pids. If set to useConfigured, ARIB Captions will be on the configured - // pid number. - AribCaptionsPidControl *string `locationName:"aribCaptionsPidControl" type:"string" enum:"M2tsAribCaptionsPidControl"` - - // When set to dvb, uses DVB buffer model for Dolby Digital audio. When set - // to atsc, the ATSC model is used. - AudioBufferModel *string `locationName:"audioBufferModel" type:"string" enum:"M2tsAudioBufferModel"` + // String concatenated to the end of the destination filename. Required for + // multiple outputs of the same type. + NameModifier *string `locationName:"nameModifier" type:"string"` +} - // The number of audio frames to insert for each PES packet. - AudioFramesPerPes *int64 `locationName:"audioFramesPerPes" type:"integer"` +// String returns the string representation +func (s MsSmoothOutputSettings) String() string { + return awsutil.Prettify(s) +} - // Packet Identifier (PID) of the elementary audio stream(s) in the transport - // stream. Multiple values are accepted, and can be entered in ranges and/or - // by comma separation. Can be entered as decimal or hexadecimal values. Each - // PID specified must be in the range of 32 (or 0x20)..8182 (or 0x1ff6). - AudioPids *string `locationName:"audioPids" type:"string"` +// GoString returns the string representation +func (s MsSmoothOutputSettings) GoString() string { + return s.String() +} - // When set to atsc, uses stream type = 0x81 for AC3 and stream type = 0x87 - // for EAC3. When set to dvb, uses stream type = 0x06. - AudioStreamType *string `locationName:"audioStreamType" type:"string" enum:"M2tsAudioStreamType"` +// SetNameModifier sets the NameModifier field's value. +func (s *MsSmoothOutputSettings) SetNameModifier(v string) *MsSmoothOutputSettings { + s.NameModifier = &v + return s +} - // The output bitrate of the transport stream in bits per second. Setting to - // 0 lets the muxer automatically determine the appropriate bitrate. - Bitrate *int64 `locationName:"bitrate" type:"integer"` +// Network source to transcode. Must be accessible to the Elemental Live node +// that is running the live event through a network connection. +type NetworkInputSettings struct { + _ struct{} `type:"structure"` - // If set to multiplex, use multiplex buffer model for accurate interleaving. - // Setting to bufferModel to none can lead to lower latency, but low-memory - // devices may not be able to play back the stream without interruptions. - BufferModel *string `locationName:"bufferModel" type:"string" enum:"M2tsBufferModel"` + // Specifies HLS input settings when the uri is for a HLS manifest. + HlsInputSettings *HlsInputSettings `locationName:"hlsInputSettings" type:"structure"` - // When set to enabled, generates captionServiceDescriptor in PMT. - CcDescriptor *string `locationName:"ccDescriptor" type:"string" enum:"M2tsCcDescriptor"` + // Check HTTPS server certificates. When set to checkCryptographyOnly, cryptography + // in the certificate will be checked, but not the server's name. Certain subdomains + // (notably S3 buckets that use dots in the bucket name) do not strictly match + // the corresponding certificate's wildcard pattern and would otherwise cause + // the event to error. This setting is ignored for protocols that do not use + // https. 
+ ServerValidation *string `locationName:"serverValidation" type:"string" enum:"NetworkInputServerValidation"` +} - // Inserts DVB Network Information Table (NIT) at the specified table repetition - // interval. - DvbNitSettings *DvbNitSettings `locationName:"dvbNitSettings" type:"structure"` +// String returns the string representation +func (s NetworkInputSettings) String() string { + return awsutil.Prettify(s) +} - // Inserts DVB Service Description Table (SDT) at the specified table repetition - // interval. - DvbSdtSettings *DvbSdtSettings `locationName:"dvbSdtSettings" type:"structure"` +// GoString returns the string representation +func (s NetworkInputSettings) GoString() string { + return s.String() +} - // Packet Identifier (PID) for input source DVB Subtitle data to this output. - // Multiple values are accepted, and can be entered in ranges and/or by comma - // separation. Can be entered as decimal or hexadecimal values. Each PID specified - // must be in the range of 32 (or 0x20)..8182 (or 0x1ff6). - DvbSubPids *string `locationName:"dvbSubPids" type:"string"` +// SetHlsInputSettings sets the HlsInputSettings field's value. +func (s *NetworkInputSettings) SetHlsInputSettings(v *HlsInputSettings) *NetworkInputSettings { + s.HlsInputSettings = v + return s +} - // Inserts DVB Time and Date Table (TDT) at the specified table repetition interval. - DvbTdtSettings *DvbTdtSettings `locationName:"dvbTdtSettings" type:"structure"` +// SetServerValidation sets the ServerValidation field's value. +func (s *NetworkInputSettings) SetServerValidation(v string) *NetworkInputSettings { + s.ServerValidation = &v + return s +} - // Packet Identifier (PID) for input source DVB Teletext data to this output. - // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or - // 0x20)..8182 (or 0x1ff6). - DvbTeletextPid *string `locationName:"dvbTeletextPid" type:"string"` +// Reserved resources available for purchase +type Offering struct { + _ struct{} `type:"structure"` - // If set to passthrough, passes any EBIF data from the input source to this - // output. - Ebif *string `locationName:"ebif" type:"string" enum:"M2tsEbifControl"` + // Unique offering ARN, e.g. 'arn:aws:medialive:us-west-2:123456789012:offering:87654321' + Arn *string `locationName:"arn" type:"string"` - // When videoAndFixedIntervals is selected, audio EBP markers will be added - // to partitions 3 and 4. The interval between these additional markers will - // be fixed, and will be slightly shorter than the video EBP marker interval. - // Only available when EBP Cablelabs segmentation markers are selected. Partitions - // 1 and 2 will always follow the video interval. - EbpAudioInterval *string `locationName:"ebpAudioInterval" type:"string" enum:"M2tsAudioInterval"` + // Currency code for usagePrice and fixedPrice in ISO-4217 format, e.g. 'USD' + CurrencyCode *string `locationName:"currencyCode" type:"string"` - // When set, enforces that Encoder Boundary Points do not come within the specified - // time interval of each other by looking ahead at input video. If another EBP - // is going to come in within the specified time interval, the current EBP is - // not emitted, and the segment is "stretched" to the next marker. The lookahead - // value does not add latency to the system. The Live Event must be configured - // elsewhere to create sufficient latency to make the lookahead accurate. - EbpLookaheadMs *int64 `locationName:"ebpLookaheadMs" type:"integer"` + // Lease duration, e.g. 
'12' + Duration *int64 `locationName:"duration" type:"integer"` - // Controls placement of EBP on Audio PIDs. If set to videoAndAudioPids, EBP - // markers will be placed on the video PID and all audio PIDs. If set to videoPid, - // EBP markers will be placed on only the video PID. - EbpPlacement *string `locationName:"ebpPlacement" type:"string" enum:"M2tsEbpPlacement"` + // Units for duration, e.g. 'MONTHS' + DurationUnits *string `locationName:"durationUnits" type:"string" enum:"OfferingDurationUnits"` - // This field is unused and deprecated. - EcmPid *string `locationName:"ecmPid" type:"string"` + // One-time charge for each reserved resource, e.g. '0.0' for a NO_UPFRONT offering + FixedPrice *float64 `locationName:"fixedPrice" type:"double"` - // Include or exclude the ES Rate field in the PES header. - EsRateInPes *string `locationName:"esRateInPes" type:"string" enum:"M2tsEsRateInPes"` + // Offering description, e.g. 'HD AVC output at 10-20 Mbps, 30 fps, and standard + // VQ in US West (Oregon)' + OfferingDescription *string `locationName:"offeringDescription" type:"string"` - // Packet Identifier (PID) for input source ETV Platform data to this output. - // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or - // 0x20)..8182 (or 0x1ff6). - EtvPlatformPid *string `locationName:"etvPlatformPid" type:"string"` + // Unique offering ID, e.g. '87654321' + OfferingId *string `locationName:"offeringId" type:"string"` - // Packet Identifier (PID) for input source ETV Signal data to this output. - // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or - // 0x20)..8182 (or 0x1ff6). - EtvSignalPid *string `locationName:"etvSignalPid" type:"string"` + // Offering type, e.g. 'NO_UPFRONT' + OfferingType *string `locationName:"offeringType" type:"string" enum:"OfferingType"` - // The length in seconds of each fragment. Only used with EBP markers. - FragmentTime *float64 `locationName:"fragmentTime" type:"double"` + // AWS region, e.g. 'us-west-2' + Region *string `locationName:"region" type:"string"` - // If set to passthrough, passes any KLV data from the input source to this - // output. - Klv *string `locationName:"klv" type:"string" enum:"M2tsKlv"` + // Resource configuration details + ResourceSpecification *ReservationResourceSpecification `locationName:"resourceSpecification" type:"structure"` - // Packet Identifier (PID) for input source KLV data to this output. Multiple - // values are accepted, and can be entered in ranges and/or by comma separation. - // Can be entered as decimal or hexadecimal values. Each PID specified must - // be in the range of 32 (or 0x20)..8182 (or 0x1ff6). - KlvDataPids *string `locationName:"klvDataPids" type:"string"` + // Recurring usage charge for each reserved resource, e.g. '157.0' + UsagePrice *float64 `locationName:"usagePrice" type:"double"` +} - // Value in bits per second of extra null packets to insert into the transport - // stream. This can be used if a downstream encryption system requires periodic - // null packets. - NullPacketBitrate *float64 `locationName:"nullPacketBitrate" type:"double"` +// String returns the string representation +func (s Offering) String() string { + return awsutil.Prettify(s) +} - // The number of milliseconds between instances of this table in the output - // transport stream. Valid values are 0, 10..1000. 
- PatInterval *int64 `locationName:"patInterval" type:"integer"` +// GoString returns the string representation +func (s Offering) GoString() string { + return s.String() +} - // When set to pcrEveryPesPacket, a Program Clock Reference value is inserted - // for every Packetized Elementary Stream (PES) header. This parameter is effective - // only when the PCR PID is the same as the video or audio elementary stream. - PcrControl *string `locationName:"pcrControl" type:"string" enum:"M2tsPcrControl"` +// SetArn sets the Arn field's value. +func (s *Offering) SetArn(v string) *Offering { + s.Arn = &v + return s +} - // Maximum time in milliseconds between Program Clock Reference (PCRs) inserted - // into the transport stream. - PcrPeriod *int64 `locationName:"pcrPeriod" type:"integer"` +// SetCurrencyCode sets the CurrencyCode field's value. +func (s *Offering) SetCurrencyCode(v string) *Offering { + s.CurrencyCode = &v + return s +} - // Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport - // stream. When no value is given, the encoder will assign the same value as - // the Video PID. Can be entered as a decimal or hexadecimal value. Valid values - // are 32 (or 0x20)..8182 (or 0x1ff6). - PcrPid *string `locationName:"pcrPid" type:"string"` +// SetDuration sets the Duration field's value. +func (s *Offering) SetDuration(v int64) *Offering { + s.Duration = &v + return s +} - // The number of milliseconds between instances of this table in the output - // transport stream. Valid values are 0, 10..1000. - PmtInterval *int64 `locationName:"pmtInterval" type:"integer"` +// SetDurationUnits sets the DurationUnits field's value. +func (s *Offering) SetDurationUnits(v string) *Offering { + s.DurationUnits = &v + return s +} - // Packet Identifier (PID) for the Program Map Table (PMT) in the transport - // stream. Can be entered as a decimal or hexadecimal value. Valid values are - // 32 (or 0x20)..8182 (or 0x1ff6). - PmtPid *string `locationName:"pmtPid" type:"string"` +// SetFixedPrice sets the FixedPrice field's value. +func (s *Offering) SetFixedPrice(v float64) *Offering { + s.FixedPrice = &v + return s +} - // The value of the program number field in the Program Map Table. - ProgramNum *int64 `locationName:"programNum" type:"integer"` +// SetOfferingDescription sets the OfferingDescription field's value. +func (s *Offering) SetOfferingDescription(v string) *Offering { + s.OfferingDescription = &v + return s +} - // When vbr, does not insert null packets into transport stream to fill specified - // bitrate. The bitrate setting acts as the maximum bitrate when vbr is set. - RateMode *string `locationName:"rateMode" type:"string" enum:"M2tsRateMode"` +// SetOfferingId sets the OfferingId field's value. +func (s *Offering) SetOfferingId(v string) *Offering { + s.OfferingId = &v + return s +} - // Packet Identifier (PID) for input source SCTE-27 data to this output. Multiple - // values are accepted, and can be entered in ranges and/or by comma separation. - // Can be entered as decimal or hexadecimal values. Each PID specified must - // be in the range of 32 (or 0x20)..8182 (or 0x1ff6). - Scte27Pids *string `locationName:"scte27Pids" type:"string"` +// SetOfferingType sets the OfferingType field's value. +func (s *Offering) SetOfferingType(v string) *Offering { + s.OfferingType = &v + return s +} - // Optionally pass SCTE-35 signals from the input source to this output. 
- Scte35Control *string `locationName:"scte35Control" type:"string" enum:"M2tsScte35Control"` +// SetRegion sets the Region field's value. +func (s *Offering) SetRegion(v string) *Offering { + s.Region = &v + return s +} - // Packet Identifier (PID) of the SCTE-35 stream in the transport stream. Can - // be entered as a decimal or hexadecimal value. Valid values are 32 (or 0x20)..8182 - // (or 0x1ff6). - Scte35Pid *string `locationName:"scte35Pid" type:"string"` +// SetResourceSpecification sets the ResourceSpecification field's value. +func (s *Offering) SetResourceSpecification(v *ReservationResourceSpecification) *Offering { + s.ResourceSpecification = v + return s +} - // Inserts segmentation markers at each segmentationTime period. raiSegstart - // sets the Random Access Indicator bit in the adaptation field. raiAdapt sets - // the RAI bit and adds the current timecode in the private data bytes. psiSegstart - // inserts PAT and PMT tables at the start of segments. ebp adds Encoder Boundary - // Point information to the adaptation field as per OpenCable specification - // OC-SP-EBP-I01-130118. ebpLegacy adds Encoder Boundary Point information to - // the adaptation field using a legacy proprietary format. - SegmentationMarkers *string `locationName:"segmentationMarkers" type:"string" enum:"M2tsSegmentationMarkers"` +// SetUsagePrice sets the UsagePrice field's value. +func (s *Offering) SetUsagePrice(v float64) *Offering { + s.UsagePrice = &v + return s +} - // The segmentation style parameter controls how segmentation markers are inserted - // into the transport stream. With avails, it is possible that segments may - // be truncated, which can influence where future segmentation markers are inserted.When - // a segmentation style of "resetCadence" is selected and a segment is truncated - // due to an avail, we will reset the segmentation cadence. This means the subsequent - // segment will have a duration of $segmentationTime seconds.When a segmentation - // style of "maintainCadence" is selected and a segment is truncated due to - // an avail, we will not reset the segmentation cadence. This means the subsequent - // segment will likely be truncated as well. However, all segments after that - // will have a duration of $segmentationTime seconds. Note that EBP lookahead - // is a slight exception to this rule. - SegmentationStyle *string `locationName:"segmentationStyle" type:"string" enum:"M2tsSegmentationStyle"` +// Output settings. There can be multiple outputs within a group. +type Output struct { + _ struct{} `type:"structure"` - // The length in seconds of each segment. Required unless markers is set to - // None_. - SegmentationTime *float64 `locationName:"segmentationTime" type:"double"` + // The names of the AudioDescriptions used as audio sources for this output. + AudioDescriptionNames []*string `locationName:"audioDescriptionNames" type:"list"` - // When set to passthrough, timed metadata will be passed through from input - // to output. - TimedMetadataBehavior *string `locationName:"timedMetadataBehavior" type:"string" enum:"M2tsTimedMetadataBehavior"` + // The names of the CaptionDescriptions used as caption sources for this output. + CaptionDescriptionNames []*string `locationName:"captionDescriptionNames" type:"list"` - // Packet Identifier (PID) of the timed metadata stream in the transport stream. - // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or - // 0x20)..8182 (or 0x1ff6). 
- TimedMetadataPid *string `locationName:"timedMetadataPid" type:"string"` + // The name used to identify an output. + OutputName *string `locationName:"outputName" min:"1" type:"string"` - // The value of the transport stream ID field in the Program Map Table. - TransportStreamId *int64 `locationName:"transportStreamId" type:"integer"` + // Output type-specific settings. + // + // OutputSettings is a required field + OutputSettings *OutputSettings `locationName:"outputSettings" type:"structure" required:"true"` - // Packet Identifier (PID) of the elementary video stream in the transport stream. - // Can be entered as a decimal or hexadecimal value. Valid values are 32 (or - // 0x20)..8182 (or 0x1ff6). - VideoPid *string `locationName:"videoPid" type:"string"` + // The name of the VideoDescription used as the source for this output. + VideoDescriptionName *string `locationName:"videoDescriptionName" type:"string"` } // String returns the string representation -func (s M2tsSettings) String() string { +func (s Output) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s M2tsSettings) GoString() string { +func (s Output) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *M2tsSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "M2tsSettings"} - if s.DvbNitSettings != nil { - if err := s.DvbNitSettings.Validate(); err != nil { - invalidParams.AddNested("DvbNitSettings", err.(request.ErrInvalidParams)) - } +func (s *Output) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Output"} + if s.OutputName != nil && len(*s.OutputName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("OutputName", 1)) } - if s.DvbSdtSettings != nil { - if err := s.DvbSdtSettings.Validate(); err != nil { - invalidParams.AddNested("DvbSdtSettings", err.(request.ErrInvalidParams)) - } + if s.OutputSettings == nil { + invalidParams.Add(request.NewErrParamRequired("OutputSettings")) } - if s.DvbTdtSettings != nil { - if err := s.DvbTdtSettings.Validate(); err != nil { - invalidParams.AddNested("DvbTdtSettings", err.(request.ErrInvalidParams)) + if s.OutputSettings != nil { + if err := s.OutputSettings.Validate(); err != nil { + invalidParams.AddNested("OutputSettings", err.(request.ErrInvalidParams)) } } @@ -8160,829 +11548,955 @@ func (s *M2tsSettings) Validate() error { return nil } -// SetAbsentInputAudioBehavior sets the AbsentInputAudioBehavior field's value. -func (s *M2tsSettings) SetAbsentInputAudioBehavior(v string) *M2tsSettings { - s.AbsentInputAudioBehavior = &v +// SetAudioDescriptionNames sets the AudioDescriptionNames field's value. +func (s *Output) SetAudioDescriptionNames(v []*string) *Output { + s.AudioDescriptionNames = v return s } -// SetArib sets the Arib field's value. -func (s *M2tsSettings) SetArib(v string) *M2tsSettings { - s.Arib = &v +// SetCaptionDescriptionNames sets the CaptionDescriptionNames field's value. +func (s *Output) SetCaptionDescriptionNames(v []*string) *Output { + s.CaptionDescriptionNames = v return s } -// SetAribCaptionsPid sets the AribCaptionsPid field's value. -func (s *M2tsSettings) SetAribCaptionsPid(v string) *M2tsSettings { - s.AribCaptionsPid = &v +// SetOutputName sets the OutputName field's value. +func (s *Output) SetOutputName(v string) *Output { + s.OutputName = &v return s } -// SetAribCaptionsPidControl sets the AribCaptionsPidControl field's value. 
-func (s *M2tsSettings) SetAribCaptionsPidControl(v string) *M2tsSettings { - s.AribCaptionsPidControl = &v +// SetOutputSettings sets the OutputSettings field's value. +func (s *Output) SetOutputSettings(v *OutputSettings) *Output { + s.OutputSettings = v return s } -// SetAudioBufferModel sets the AudioBufferModel field's value. -func (s *M2tsSettings) SetAudioBufferModel(v string) *M2tsSettings { - s.AudioBufferModel = &v +// SetVideoDescriptionName sets the VideoDescriptionName field's value. +func (s *Output) SetVideoDescriptionName(v string) *Output { + s.VideoDescriptionName = &v return s } -// SetAudioFramesPerPes sets the AudioFramesPerPes field's value. -func (s *M2tsSettings) SetAudioFramesPerPes(v int64) *M2tsSettings { - s.AudioFramesPerPes = &v - return s +type OutputDestination struct { + _ struct{} `type:"structure"` + + // User-specified id. This is used in an output group or an output. + Id *string `locationName:"id" type:"string"` + + // Destination settings for output; one for each redundant encoder. + Settings []*OutputDestinationSettings `locationName:"settings" type:"list"` } -// SetAudioPids sets the AudioPids field's value. -func (s *M2tsSettings) SetAudioPids(v string) *M2tsSettings { - s.AudioPids = &v - return s +// String returns the string representation +func (s OutputDestination) String() string { + return awsutil.Prettify(s) } -// SetAudioStreamType sets the AudioStreamType field's value. -func (s *M2tsSettings) SetAudioStreamType(v string) *M2tsSettings { - s.AudioStreamType = &v - return s +// GoString returns the string representation +func (s OutputDestination) GoString() string { + return s.String() } -// SetBitrate sets the Bitrate field's value. -func (s *M2tsSettings) SetBitrate(v int64) *M2tsSettings { - s.Bitrate = &v +// SetId sets the Id field's value. +func (s *OutputDestination) SetId(v string) *OutputDestination { + s.Id = &v return s } -// SetBufferModel sets the BufferModel field's value. -func (s *M2tsSettings) SetBufferModel(v string) *M2tsSettings { - s.BufferModel = &v +// SetSettings sets the Settings field's value. +func (s *OutputDestination) SetSettings(v []*OutputDestinationSettings) *OutputDestination { + s.Settings = v return s } -// SetCcDescriptor sets the CcDescriptor field's value. -func (s *M2tsSettings) SetCcDescriptor(v string) *M2tsSettings { - s.CcDescriptor = &v - return s +type OutputDestinationSettings struct { + _ struct{} `type:"structure"` + + // key used to extract the password from EC2 Parameter store + PasswordParam *string `locationName:"passwordParam" type:"string"` + + // Stream name for RTMP destinations (URLs of type rtmp://) + StreamName *string `locationName:"streamName" type:"string"` + + // A URL specifying a destination + Url *string `locationName:"url" type:"string"` + + // username for destination + Username *string `locationName:"username" type:"string"` } -// SetDvbNitSettings sets the DvbNitSettings field's value. -func (s *M2tsSettings) SetDvbNitSettings(v *DvbNitSettings) *M2tsSettings { - s.DvbNitSettings = v - return s +// String returns the string representation +func (s OutputDestinationSettings) String() string { + return awsutil.Prettify(s) } -// SetDvbSdtSettings sets the DvbSdtSettings field's value. -func (s *M2tsSettings) SetDvbSdtSettings(v *DvbSdtSettings) *M2tsSettings { - s.DvbSdtSettings = v - return s +// GoString returns the string representation +func (s OutputDestinationSettings) GoString() string { + return s.String() } -// SetDvbSubPids sets the DvbSubPids field's value. 
-func (s *M2tsSettings) SetDvbSubPids(v string) *M2tsSettings { - s.DvbSubPids = &v +// SetPasswordParam sets the PasswordParam field's value. +func (s *OutputDestinationSettings) SetPasswordParam(v string) *OutputDestinationSettings { + s.PasswordParam = &v return s } -// SetDvbTdtSettings sets the DvbTdtSettings field's value. -func (s *M2tsSettings) SetDvbTdtSettings(v *DvbTdtSettings) *M2tsSettings { - s.DvbTdtSettings = v +// SetStreamName sets the StreamName field's value. +func (s *OutputDestinationSettings) SetStreamName(v string) *OutputDestinationSettings { + s.StreamName = &v return s } -// SetDvbTeletextPid sets the DvbTeletextPid field's value. -func (s *M2tsSettings) SetDvbTeletextPid(v string) *M2tsSettings { - s.DvbTeletextPid = &v +// SetUrl sets the Url field's value. +func (s *OutputDestinationSettings) SetUrl(v string) *OutputDestinationSettings { + s.Url = &v return s } -// SetEbif sets the Ebif field's value. -func (s *M2tsSettings) SetEbif(v string) *M2tsSettings { - s.Ebif = &v +// SetUsername sets the Username field's value. +func (s *OutputDestinationSettings) SetUsername(v string) *OutputDestinationSettings { + s.Username = &v return s } -// SetEbpAudioInterval sets the EbpAudioInterval field's value. -func (s *M2tsSettings) SetEbpAudioInterval(v string) *M2tsSettings { - s.EbpAudioInterval = &v - return s +// Output groups for this Live Event. Output groups contain information about +// where streams should be distributed. +type OutputGroup struct { + _ struct{} `type:"structure"` + + // Custom output group name optionally defined by the user. Only letters, numbers, + // and the underscore character allowed; only 32 characters allowed. + Name *string `locationName:"name" type:"string"` + + // Settings associated with the output group. + // + // OutputGroupSettings is a required field + OutputGroupSettings *OutputGroupSettings `locationName:"outputGroupSettings" type:"structure" required:"true"` + + // Outputs is a required field + Outputs []*Output `locationName:"outputs" type:"list" required:"true"` } -// SetEbpLookaheadMs sets the EbpLookaheadMs field's value. -func (s *M2tsSettings) SetEbpLookaheadMs(v int64) *M2tsSettings { - s.EbpLookaheadMs = &v - return s +// String returns the string representation +func (s OutputGroup) String() string { + return awsutil.Prettify(s) } -// SetEbpPlacement sets the EbpPlacement field's value. -func (s *M2tsSettings) SetEbpPlacement(v string) *M2tsSettings { - s.EbpPlacement = &v - return s +// GoString returns the string representation +func (s OutputGroup) GoString() string { + return s.String() } -// SetEcmPid sets the EcmPid field's value. -func (s *M2tsSettings) SetEcmPid(v string) *M2tsSettings { - s.EcmPid = &v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *OutputGroup) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputGroup"} + if s.OutputGroupSettings == nil { + invalidParams.Add(request.NewErrParamRequired("OutputGroupSettings")) + } + if s.Outputs == nil { + invalidParams.Add(request.NewErrParamRequired("Outputs")) + } + if s.OutputGroupSettings != nil { + if err := s.OutputGroupSettings.Validate(); err != nil { + invalidParams.AddNested("OutputGroupSettings", err.(request.ErrInvalidParams)) + } + } + if s.Outputs != nil { + for i, v := range s.Outputs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Outputs", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetEsRateInPes sets the EsRateInPes field's value. -func (s *M2tsSettings) SetEsRateInPes(v string) *M2tsSettings { - s.EsRateInPes = &v +// SetName sets the Name field's value. +func (s *OutputGroup) SetName(v string) *OutputGroup { + s.Name = &v return s } -// SetEtvPlatformPid sets the EtvPlatformPid field's value. -func (s *M2tsSettings) SetEtvPlatformPid(v string) *M2tsSettings { - s.EtvPlatformPid = &v +// SetOutputGroupSettings sets the OutputGroupSettings field's value. +func (s *OutputGroup) SetOutputGroupSettings(v *OutputGroupSettings) *OutputGroup { + s.OutputGroupSettings = v return s } -// SetEtvSignalPid sets the EtvSignalPid field's value. -func (s *M2tsSettings) SetEtvSignalPid(v string) *M2tsSettings { - s.EtvSignalPid = &v +// SetOutputs sets the Outputs field's value. +func (s *OutputGroup) SetOutputs(v []*Output) *OutputGroup { + s.Outputs = v return s } -// SetFragmentTime sets the FragmentTime field's value. -func (s *M2tsSettings) SetFragmentTime(v float64) *M2tsSettings { - s.FragmentTime = &v +type OutputGroupSettings struct { + _ struct{} `type:"structure"` + + ArchiveGroupSettings *ArchiveGroupSettings `locationName:"archiveGroupSettings" type:"structure"` + + HlsGroupSettings *HlsGroupSettings `locationName:"hlsGroupSettings" type:"structure"` + + MsSmoothGroupSettings *MsSmoothGroupSettings `locationName:"msSmoothGroupSettings" type:"structure"` + + RtmpGroupSettings *RtmpGroupSettings `locationName:"rtmpGroupSettings" type:"structure"` + + UdpGroupSettings *UdpGroupSettings `locationName:"udpGroupSettings" type:"structure"` +} + +// String returns the string representation +func (s OutputGroupSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputGroupSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *OutputGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputGroupSettings"} + if s.ArchiveGroupSettings != nil { + if err := s.ArchiveGroupSettings.Validate(); err != nil { + invalidParams.AddNested("ArchiveGroupSettings", err.(request.ErrInvalidParams)) + } + } + if s.HlsGroupSettings != nil { + if err := s.HlsGroupSettings.Validate(); err != nil { + invalidParams.AddNested("HlsGroupSettings", err.(request.ErrInvalidParams)) + } + } + if s.MsSmoothGroupSettings != nil { + if err := s.MsSmoothGroupSettings.Validate(); err != nil { + invalidParams.AddNested("MsSmoothGroupSettings", err.(request.ErrInvalidParams)) + } + } + if s.RtmpGroupSettings != nil { + if err := s.RtmpGroupSettings.Validate(); err != nil { + invalidParams.AddNested("RtmpGroupSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArchiveGroupSettings sets the ArchiveGroupSettings field's value. +func (s *OutputGroupSettings) SetArchiveGroupSettings(v *ArchiveGroupSettings) *OutputGroupSettings { + s.ArchiveGroupSettings = v return s } -// SetKlv sets the Klv field's value. -func (s *M2tsSettings) SetKlv(v string) *M2tsSettings { - s.Klv = &v +// SetHlsGroupSettings sets the HlsGroupSettings field's value. +func (s *OutputGroupSettings) SetHlsGroupSettings(v *HlsGroupSettings) *OutputGroupSettings { + s.HlsGroupSettings = v return s } -// SetKlvDataPids sets the KlvDataPids field's value. -func (s *M2tsSettings) SetKlvDataPids(v string) *M2tsSettings { - s.KlvDataPids = &v +// SetMsSmoothGroupSettings sets the MsSmoothGroupSettings field's value. +func (s *OutputGroupSettings) SetMsSmoothGroupSettings(v *MsSmoothGroupSettings) *OutputGroupSettings { + s.MsSmoothGroupSettings = v return s } -// SetNullPacketBitrate sets the NullPacketBitrate field's value. -func (s *M2tsSettings) SetNullPacketBitrate(v float64) *M2tsSettings { - s.NullPacketBitrate = &v +// SetRtmpGroupSettings sets the RtmpGroupSettings field's value. +func (s *OutputGroupSettings) SetRtmpGroupSettings(v *RtmpGroupSettings) *OutputGroupSettings { + s.RtmpGroupSettings = v return s } -// SetPatInterval sets the PatInterval field's value. -func (s *M2tsSettings) SetPatInterval(v int64) *M2tsSettings { - s.PatInterval = &v +// SetUdpGroupSettings sets the UdpGroupSettings field's value. +func (s *OutputGroupSettings) SetUdpGroupSettings(v *UdpGroupSettings) *OutputGroupSettings { + s.UdpGroupSettings = v return s } -// SetPcrControl sets the PcrControl field's value. -func (s *M2tsSettings) SetPcrControl(v string) *M2tsSettings { - s.PcrControl = &v - return s -} +// Reference to an OutputDestination ID defined in the channel +type OutputLocationRef struct { + _ struct{} `type:"structure"` -// SetPcrPeriod sets the PcrPeriod field's value. -func (s *M2tsSettings) SetPcrPeriod(v int64) *M2tsSettings { - s.PcrPeriod = &v - return s + DestinationRefId *string `locationName:"destinationRefId" type:"string"` } -// SetPcrPid sets the PcrPid field's value. -func (s *M2tsSettings) SetPcrPid(v string) *M2tsSettings { - s.PcrPid = &v - return s +// String returns the string representation +func (s OutputLocationRef) String() string { + return awsutil.Prettify(s) } -// SetPmtInterval sets the PmtInterval field's value. 
-func (s *M2tsSettings) SetPmtInterval(v int64) *M2tsSettings { - s.PmtInterval = &v - return s +// GoString returns the string representation +func (s OutputLocationRef) GoString() string { + return s.String() } -// SetPmtPid sets the PmtPid field's value. -func (s *M2tsSettings) SetPmtPid(v string) *M2tsSettings { - s.PmtPid = &v +// SetDestinationRefId sets the DestinationRefId field's value. +func (s *OutputLocationRef) SetDestinationRefId(v string) *OutputLocationRef { + s.DestinationRefId = &v return s } -// SetProgramNum sets the ProgramNum field's value. -func (s *M2tsSettings) SetProgramNum(v int64) *M2tsSettings { - s.ProgramNum = &v - return s -} +type OutputSettings struct { + _ struct{} `type:"structure"` -// SetRateMode sets the RateMode field's value. -func (s *M2tsSettings) SetRateMode(v string) *M2tsSettings { - s.RateMode = &v - return s -} + ArchiveOutputSettings *ArchiveOutputSettings `locationName:"archiveOutputSettings" type:"structure"` -// SetScte27Pids sets the Scte27Pids field's value. -func (s *M2tsSettings) SetScte27Pids(v string) *M2tsSettings { - s.Scte27Pids = &v - return s -} + HlsOutputSettings *HlsOutputSettings `locationName:"hlsOutputSettings" type:"structure"` -// SetScte35Control sets the Scte35Control field's value. -func (s *M2tsSettings) SetScte35Control(v string) *M2tsSettings { - s.Scte35Control = &v - return s + MsSmoothOutputSettings *MsSmoothOutputSettings `locationName:"msSmoothOutputSettings" type:"structure"` + + RtmpOutputSettings *RtmpOutputSettings `locationName:"rtmpOutputSettings" type:"structure"` + + UdpOutputSettings *UdpOutputSettings `locationName:"udpOutputSettings" type:"structure"` } -// SetScte35Pid sets the Scte35Pid field's value. -func (s *M2tsSettings) SetScte35Pid(v string) *M2tsSettings { - s.Scte35Pid = &v - return s +// String returns the string representation +func (s OutputSettings) String() string { + return awsutil.Prettify(s) } -// SetSegmentationMarkers sets the SegmentationMarkers field's value. -func (s *M2tsSettings) SetSegmentationMarkers(v string) *M2tsSettings { - s.SegmentationMarkers = &v - return s +// GoString returns the string representation +func (s OutputSettings) GoString() string { + return s.String() } -// SetSegmentationStyle sets the SegmentationStyle field's value. -func (s *M2tsSettings) SetSegmentationStyle(v string) *M2tsSettings { - s.SegmentationStyle = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *OutputSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputSettings"} + if s.ArchiveOutputSettings != nil { + if err := s.ArchiveOutputSettings.Validate(); err != nil { + invalidParams.AddNested("ArchiveOutputSettings", err.(request.ErrInvalidParams)) + } + } + if s.HlsOutputSettings != nil { + if err := s.HlsOutputSettings.Validate(); err != nil { + invalidParams.AddNested("HlsOutputSettings", err.(request.ErrInvalidParams)) + } + } + if s.RtmpOutputSettings != nil { + if err := s.RtmpOutputSettings.Validate(); err != nil { + invalidParams.AddNested("RtmpOutputSettings", err.(request.ErrInvalidParams)) + } + } + if s.UdpOutputSettings != nil { + if err := s.UdpOutputSettings.Validate(); err != nil { + invalidParams.AddNested("UdpOutputSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetSegmentationTime sets the SegmentationTime field's value. 
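Putting the new types together: an `OutputGroup` pairs one `OutputGroupSettings` (which carries one optional member per output flavor: archive, HLS, MS Smooth, RTMP, UDP) with a list of `Output`s, and `Validate()` at the group level walks the nested settings. A hedged sketch of assembling an MS Smooth group, again assuming the upstream import paths and using placeholder names and ids:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	group := &medialive.OutputGroup{
		Name: aws.String("smooth_group"), // placeholder group name
		OutputGroupSettings: &medialive.OutputGroupSettings{
			MsSmoothGroupSettings: &medialive.MsSmoothGroupSettings{
				Destination: &medialive.OutputLocationRef{
					DestinationRefId: aws.String("destination1"), // placeholder id
				},
			},
		},
		Outputs: []*medialive.Output{
			{
				OutputName: aws.String("output1"),
				OutputSettings: &medialive.OutputSettings{
					MsSmoothOutputSettings: &medialive.MsSmoothOutputSettings{
						NameModifier: aws.String("_hi"),
					},
				},
			},
		},
	}

	// Validate recurses into OutputGroupSettings and each Output, aggregating
	// required-field and min-value violations into one ErrInvalidParams.
	if err := group.Validate(); err != nil {
		fmt.Println("invalid output group:", err)
		return
	}
	fmt.Println("output group ok")
}
```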
-func (s *M2tsSettings) SetSegmentationTime(v float64) *M2tsSettings { - s.SegmentationTime = &v +// SetArchiveOutputSettings sets the ArchiveOutputSettings field's value. +func (s *OutputSettings) SetArchiveOutputSettings(v *ArchiveOutputSettings) *OutputSettings { + s.ArchiveOutputSettings = v return s } -// SetTimedMetadataBehavior sets the TimedMetadataBehavior field's value. -func (s *M2tsSettings) SetTimedMetadataBehavior(v string) *M2tsSettings { - s.TimedMetadataBehavior = &v +// SetHlsOutputSettings sets the HlsOutputSettings field's value. +func (s *OutputSettings) SetHlsOutputSettings(v *HlsOutputSettings) *OutputSettings { + s.HlsOutputSettings = v return s } -// SetTimedMetadataPid sets the TimedMetadataPid field's value. -func (s *M2tsSettings) SetTimedMetadataPid(v string) *M2tsSettings { - s.TimedMetadataPid = &v +// SetMsSmoothOutputSettings sets the MsSmoothOutputSettings field's value. +func (s *OutputSettings) SetMsSmoothOutputSettings(v *MsSmoothOutputSettings) *OutputSettings { + s.MsSmoothOutputSettings = v return s } -// SetTransportStreamId sets the TransportStreamId field's value. -func (s *M2tsSettings) SetTransportStreamId(v int64) *M2tsSettings { - s.TransportStreamId = &v +// SetRtmpOutputSettings sets the RtmpOutputSettings field's value. +func (s *OutputSettings) SetRtmpOutputSettings(v *RtmpOutputSettings) *OutputSettings { + s.RtmpOutputSettings = v return s } -// SetVideoPid sets the VideoPid field's value. -func (s *M2tsSettings) SetVideoPid(v string) *M2tsSettings { - s.VideoPid = &v +// SetUdpOutputSettings sets the UdpOutputSettings field's value. +func (s *OutputSettings) SetUdpOutputSettings(v *UdpOutputSettings) *OutputSettings { + s.UdpOutputSettings = v return s } -// Settings information for the .m3u8 container -type M3u8Settings struct { +type PassThroughSettings struct { _ struct{} `type:"structure"` +} - // The number of audio frames to insert for each PES packet. - AudioFramesPerPes *int64 `locationName:"audioFramesPerPes" type:"integer"` - - // Packet Identifier (PID) of the elementary audio stream(s) in the transport - // stream. Multiple values are accepted, and can be entered in ranges and/or - // by comma separation. Can be entered as decimal or hexadecimal values. - AudioPids *string `locationName:"audioPids" type:"string"` - - // This parameter is unused and deprecated. - EcmPid *string `locationName:"ecmPid" type:"string"` - - // The number of milliseconds between instances of this table in the output - // transport stream. A value of \"0\" writes out the PMT once per segment file. - PatInterval *int64 `locationName:"patInterval" type:"integer"` - - // When set to pcrEveryPesPacket, a Program Clock Reference value is inserted - // for every Packetized Elementary Stream (PES) header. This parameter is effective - // only when the PCR PID is the same as the video or audio elementary stream. - PcrControl *string `locationName:"pcrControl" type:"string" enum:"M3u8PcrControl"` - - // Maximum time in milliseconds between Program Clock References (PCRs) inserted - // into the transport stream. - PcrPeriod *int64 `locationName:"pcrPeriod" type:"integer"` - - // Packet Identifier (PID) of the Program Clock Reference (PCR) in the transport - // stream. When no value is given, the encoder will assign the same value as - // the Video PID. Can be entered as a decimal or hexadecimal value. - PcrPid *string `locationName:"pcrPid" type:"string"` - - // The number of milliseconds between instances of this table in the output - // transport stream. 
A value of \"0\" writes out the PMT once per segment file. - PmtInterval *int64 `locationName:"pmtInterval" type:"integer"` +// String returns the string representation +func (s PassThroughSettings) String() string { + return awsutil.Prettify(s) +} - // Packet Identifier (PID) for the Program Map Table (PMT) in the transport - // stream. Can be entered as a decimal or hexadecimal value. - PmtPid *string `locationName:"pmtPid" type:"string"` +// GoString returns the string representation +func (s PassThroughSettings) GoString() string { + return s.String() +} - // The value of the program number field in the Program Map Table. - ProgramNum *int64 `locationName:"programNum" type:"integer"` +type PurchaseOfferingInput struct { + _ struct{} `type:"structure"` - // If set to passthrough, passes any SCTE-35 signals from the input source to - // this output. - Scte35Behavior *string `locationName:"scte35Behavior" type:"string" enum:"M3u8Scte35Behavior"` + // Count is a required field + Count *int64 `locationName:"count" min:"1" type:"integer" required:"true"` - // Packet Identifier (PID) of the SCTE-35 stream in the transport stream. Can - // be entered as a decimal or hexadecimal value. - Scte35Pid *string `locationName:"scte35Pid" type:"string"` + Name *string `locationName:"name" type:"string"` - // When set to passthrough, timed metadata is passed through from input to output. - TimedMetadataBehavior *string `locationName:"timedMetadataBehavior" type:"string" enum:"M3u8TimedMetadataBehavior"` + // OfferingId is a required field + OfferingId *string `location:"uri" locationName:"offeringId" type:"string" required:"true"` - // The value of the transport stream ID field in the Program Map Table. - TransportStreamId *int64 `locationName:"transportStreamId" type:"integer"` + RequestId *string `locationName:"requestId" type:"string" idempotencyToken:"true"` - // Packet Identifier (PID) of the elementary video stream in the transport stream. - // Can be entered as a decimal or hexadecimal value. - VideoPid *string `locationName:"videoPid" type:"string"` + Start *string `locationName:"start" type:"string"` } // String returns the string representation -func (s M3u8Settings) String() string { +func (s PurchaseOfferingInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s M3u8Settings) GoString() string { +func (s PurchaseOfferingInput) GoString() string { return s.String() } -// SetAudioFramesPerPes sets the AudioFramesPerPes field's value. -func (s *M3u8Settings) SetAudioFramesPerPes(v int64) *M3u8Settings { - s.AudioFramesPerPes = &v - return s -} - -// SetAudioPids sets the AudioPids field's value. -func (s *M3u8Settings) SetAudioPids(v string) *M3u8Settings { - s.AudioPids = &v - return s -} - -// SetEcmPid sets the EcmPid field's value. -func (s *M3u8Settings) SetEcmPid(v string) *M3u8Settings { - s.EcmPid = &v - return s -} - -// SetPatInterval sets the PatInterval field's value. -func (s *M3u8Settings) SetPatInterval(v int64) *M3u8Settings { - s.PatInterval = &v - return s -} - -// SetPcrControl sets the PcrControl field's value. -func (s *M3u8Settings) SetPcrControl(v string) *M3u8Settings { - s.PcrControl = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PurchaseOfferingInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PurchaseOfferingInput"} + if s.Count == nil { + invalidParams.Add(request.NewErrParamRequired("Count")) + } + if s.Count != nil && *s.Count < 1 { + invalidParams.Add(request.NewErrParamMinValue("Count", 1)) + } + if s.OfferingId == nil { + invalidParams.Add(request.NewErrParamRequired("OfferingId")) + } -// SetPcrPeriod sets the PcrPeriod field's value. -func (s *M3u8Settings) SetPcrPeriod(v int64) *M3u8Settings { - s.PcrPeriod = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetPcrPid sets the PcrPid field's value. -func (s *M3u8Settings) SetPcrPid(v string) *M3u8Settings { - s.PcrPid = &v +// SetCount sets the Count field's value. +func (s *PurchaseOfferingInput) SetCount(v int64) *PurchaseOfferingInput { + s.Count = &v return s } -// SetPmtInterval sets the PmtInterval field's value. -func (s *M3u8Settings) SetPmtInterval(v int64) *M3u8Settings { - s.PmtInterval = &v +// SetName sets the Name field's value. +func (s *PurchaseOfferingInput) SetName(v string) *PurchaseOfferingInput { + s.Name = &v return s } -// SetPmtPid sets the PmtPid field's value. -func (s *M3u8Settings) SetPmtPid(v string) *M3u8Settings { - s.PmtPid = &v +// SetOfferingId sets the OfferingId field's value. +func (s *PurchaseOfferingInput) SetOfferingId(v string) *PurchaseOfferingInput { + s.OfferingId = &v return s } -// SetProgramNum sets the ProgramNum field's value. -func (s *M3u8Settings) SetProgramNum(v int64) *M3u8Settings { - s.ProgramNum = &v +// SetRequestId sets the RequestId field's value. +func (s *PurchaseOfferingInput) SetRequestId(v string) *PurchaseOfferingInput { + s.RequestId = &v return s } -// SetScte35Behavior sets the Scte35Behavior field's value. -func (s *M3u8Settings) SetScte35Behavior(v string) *M3u8Settings { - s.Scte35Behavior = &v +// SetStart sets the Start field's value. +func (s *PurchaseOfferingInput) SetStart(v string) *PurchaseOfferingInput { + s.Start = &v return s } -// SetScte35Pid sets the Scte35Pid field's value. -func (s *M3u8Settings) SetScte35Pid(v string) *M3u8Settings { - s.Scte35Pid = &v - return s +type PurchaseOfferingOutput struct { + _ struct{} `type:"structure"` + + // Reserved resources available to use + Reservation *Reservation `locationName:"reservation" type:"structure"` } -// SetTimedMetadataBehavior sets the TimedMetadataBehavior field's value. -func (s *M3u8Settings) SetTimedMetadataBehavior(v string) *M3u8Settings { - s.TimedMetadataBehavior = &v - return s +// String returns the string representation +func (s PurchaseOfferingOutput) String() string { + return awsutil.Prettify(s) } -// SetTransportStreamId sets the TransportStreamId field's value. -func (s *M3u8Settings) SetTransportStreamId(v int64) *M3u8Settings { - s.TransportStreamId = &v - return s +// GoString returns the string representation +func (s PurchaseOfferingOutput) GoString() string { + return s.String() } -// SetVideoPid sets the VideoPid field's value. -func (s *M3u8Settings) SetVideoPid(v string) *M3u8Settings { - s.VideoPid = &v +// SetReservation sets the Reservation field's value. +func (s *PurchaseOfferingOutput) SetReservation(v *Reservation) *PurchaseOfferingOutput { + s.Reservation = v return s } -type Mp2Settings struct { +type RemixSettings struct { _ struct{} `type:"structure"` - // Average bitrate in bits/second. 
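The reservation purchase input above shows the required-field pattern in this generated code: `Count` (minimum 1) and `OfferingId` must be set, and `Validate()` surfaces violations before any request is made. A small sketch under the same import-path assumption, with the example offering ID taken from the doc comments:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	in := &medialive.PurchaseOfferingInput{}
	in.SetOfferingId("87654321") // example offering ID from the doc comments
	in.SetCount(0)               // below the documented minimum of 1

	// Validate catches the min-value breach client-side, without calling
	// the MediaLive API.
	if err := in.Validate(); err != nil {
		fmt.Println("request not valid:", err) // reports Count < 1
	}

	in.SetCount(1)
	if err := in.Validate(); err == nil {
		fmt.Println("request valid; ready for the PurchaseOffering call")
	}
}
```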
- Bitrate *float64 `locationName:"bitrate" type:"double"` + // Mapping of input channels to output channels, with appropriate gain adjustments. + // + // ChannelMappings is a required field + ChannelMappings []*AudioChannelMapping `locationName:"channelMappings" type:"list" required:"true"` - // The MPEG2 Audio coding mode. Valid values are codingMode10 (for mono) or - // codingMode20 (for stereo). - CodingMode *string `locationName:"codingMode" type:"string" enum:"Mp2CodingMode"` + // Number of input channels to be used. + ChannelsIn *int64 `locationName:"channelsIn" min:"1" type:"integer"` - // Sample rate in Hz. - SampleRate *float64 `locationName:"sampleRate" type:"double"` + // Number of output channels to be produced.Valid values: 1, 2, 4, 6, 8 + ChannelsOut *int64 `locationName:"channelsOut" min:"1" type:"integer"` } // String returns the string representation -func (s Mp2Settings) String() string { +func (s RemixSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Mp2Settings) GoString() string { +func (s RemixSettings) GoString() string { return s.String() } -// SetBitrate sets the Bitrate field's value. -func (s *Mp2Settings) SetBitrate(v float64) *Mp2Settings { - s.Bitrate = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *RemixSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemixSettings"} + if s.ChannelMappings == nil { + invalidParams.Add(request.NewErrParamRequired("ChannelMappings")) + } + if s.ChannelsIn != nil && *s.ChannelsIn < 1 { + invalidParams.Add(request.NewErrParamMinValue("ChannelsIn", 1)) + } + if s.ChannelsOut != nil && *s.ChannelsOut < 1 { + invalidParams.Add(request.NewErrParamMinValue("ChannelsOut", 1)) + } + if s.ChannelMappings != nil { + for i, v := range s.ChannelMappings { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ChannelMappings", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChannelMappings sets the ChannelMappings field's value. +func (s *RemixSettings) SetChannelMappings(v []*AudioChannelMapping) *RemixSettings { + s.ChannelMappings = v return s } -// SetCodingMode sets the CodingMode field's value. -func (s *Mp2Settings) SetCodingMode(v string) *Mp2Settings { - s.CodingMode = &v +// SetChannelsIn sets the ChannelsIn field's value. +func (s *RemixSettings) SetChannelsIn(v int64) *RemixSettings { + s.ChannelsIn = &v return s } -// SetSampleRate sets the SampleRate field's value. -func (s *Mp2Settings) SetSampleRate(v float64) *Mp2Settings { - s.SampleRate = &v +// SetChannelsOut sets the ChannelsOut field's value. +func (s *RemixSettings) SetChannelsOut(v int64) *RemixSettings { + s.ChannelsOut = &v return s } -type MsSmoothGroupSettings struct { +// Reserved resources available to use +type Reservation struct { _ struct{} `type:"structure"` - // The value of the "Acquisition Point Identity" element used in each message - // placed in the sparse track. Only enabled if sparseTrackType is not "none". - AcquisitionPointId *string `locationName:"acquisitionPointId" type:"string"` - - // If set to passthrough for an audio-only MS Smooth output, the fragment absolute - // time will be set to the current timecode. This option does not write timecodes - // to the audio elementary stream. 
- AudioOnlyTimecodeControl *string `locationName:"audioOnlyTimecodeControl" type:"string" enum:"SmoothGroupAudioOnlyTimecodeControl"` - - // If set to verifyAuthenticity, verify the https certificate chain to a trusted - // Certificate Authority (CA). This will cause https outputs to self-signed - // certificates to fail unless those certificates are manually added to the - // OS trusted keystore. - CertificateMode *string `locationName:"certificateMode" type:"string" enum:"SmoothGroupCertificateMode"` + // Unique reservation ARN, e.g. 'arn:aws:medialive:us-west-2:123456789012:reservation:1234567' + Arn *string `locationName:"arn" type:"string"` - // Number of seconds to wait before retrying connection to the IIS server if - // the connection is lost. Content will be cached during this time and the cache - // will be be delivered to the IIS server once the connection is re-established. - ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" type:"integer"` + // Number of reserved resources + Count *int64 `locationName:"count" type:"integer"` - // Smooth Streaming publish point on an IIS server. Elemental Live acts as a - // "Push" encoder to IIS. - // - // Destination is a required field - Destination *OutputLocationRef `locationName:"destination" type:"structure" required:"true"` + // Currency code for usagePrice and fixedPrice in ISO-4217 format, e.g. 'USD' + CurrencyCode *string `locationName:"currencyCode" type:"string"` - // MS Smooth event ID to be sent to the IIS server.Should only be specified - // if eventIdMode is set to useConfigured. - EventId *string `locationName:"eventId" type:"string"` + // Lease duration, e.g. '12' + Duration *int64 `locationName:"duration" type:"integer"` - // Specifies whether or not to send an event ID to the IIS server. If no event - // ID is sent and the same Live Event is used without changing the publishing - // point, clients might see cached video from the previous run.Options:- "useConfigured" - // - use the value provided in eventId- "useTimestamp" - generate and send an - // event ID based on the current timestamp- "noEventId" - do not send an event - // ID to the IIS server. - EventIdMode *string `locationName:"eventIdMode" type:"string" enum:"SmoothGroupEventIdMode"` + // Units for duration, e.g. 'MONTHS' + DurationUnits *string `locationName:"durationUnits" type:"string" enum:"OfferingDurationUnits"` - // When set to sendEos, send EOS signal to IIS server when stopping the event - EventStopBehavior *string `locationName:"eventStopBehavior" type:"string" enum:"SmoothGroupEventStopBehavior"` + // Reservation UTC end date and time in ISO-8601 format, e.g. '2019-03-01T00:00:00' + End *string `locationName:"end" type:"string"` - // Size in seconds of file cache for streaming outputs. - FilecacheDuration *int64 `locationName:"filecacheDuration" type:"integer"` + // One-time charge for each reserved resource, e.g. '0.0' for a NO_UPFRONT offering + FixedPrice *float64 `locationName:"fixedPrice" type:"double"` - // Length of mp4 fragments to generate (in seconds). Fragment length must be - // compatible with GOP size and framerate. - FragmentLength *int64 `locationName:"fragmentLength" min:"1" type:"integer"` + // User specified reservation name + Name *string `locationName:"name" type:"string"` - // Parameter that control output group behavior on input loss. - InputLossAction *string `locationName:"inputLossAction" type:"string" enum:"InputLossActionForMsSmoothOut"` + // Offering description, e.g. 
'HD AVC output at 10-20 Mbps, 30 fps, and standard + // VQ in US West (Oregon)' + OfferingDescription *string `locationName:"offeringDescription" type:"string"` - // Number of retry attempts. - NumRetries *int64 `locationName:"numRetries" type:"integer"` + // Unique offering ID, e.g. '87654321' + OfferingId *string `locationName:"offeringId" type:"string"` - // Number of seconds before initiating a restart due to output failure, due - // to exhausting the numRetries on one segment, or exceeding filecacheDuration. - RestartDelay *int64 `locationName:"restartDelay" type:"integer"` + // Offering type, e.g. 'NO_UPFRONT' + OfferingType *string `locationName:"offeringType" type:"string" enum:"OfferingType"` - // When set to useInputSegmentation, the output segment or fragment points are - // set by the RAI markers from the input streams. - SegmentationMode *string `locationName:"segmentationMode" type:"string" enum:"SmoothGroupSegmentationMode"` + // AWS region, e.g. 'us-west-2' + Region *string `locationName:"region" type:"string"` - // Outputs that are "output locked" can use this delay. Assign a delay to the - // output that is "secondary". Do not assign a delay to the "primary" output. - // The delay means that the primary output will always reach the downstream - // system before the secondary, which helps ensure that the downstream system - // always uses the primary output. (If there were no delay, the downstream system - // might flip-flop between whichever output happens to arrive first.) If the - // primary fails, the downstream system will switch to the secondary output. - // When the primary is restarted, the downstream system will switch back to - // the primary (because once again it is always arriving first) - SendDelayMs *int64 `locationName:"sendDelayMs" type:"integer"` + // Unique reservation ID, e.g. '1234567' + ReservationId *string `locationName:"reservationId" type:"string"` - // If set to scte35, use incoming SCTE-35 messages to generate a sparse track - // in this group of MS-Smooth outputs. - SparseTrackType *string `locationName:"sparseTrackType" type:"string" enum:"SmoothGroupSparseTrackType"` + // Resource configuration details + ResourceSpecification *ReservationResourceSpecification `locationName:"resourceSpecification" type:"structure"` - // When set to send, send stream manifest so publishing point doesn't start - // until all streams start. - StreamManifestBehavior *string `locationName:"streamManifestBehavior" type:"string" enum:"SmoothGroupStreamManifestBehavior"` + // Reservation UTC start date and time in ISO-8601 format, e.g. '2018-03-01T00:00:00' + Start *string `locationName:"start" type:"string"` - // Timestamp offset for the event. Only used if timestampOffsetMode is set to - // useConfiguredOffset. - TimestampOffset *string `locationName:"timestampOffset" type:"string"` + // Current state of reservation, e.g. 'ACTIVE' + State *string `locationName:"state" type:"string" enum:"ReservationState"` - // Type of timestamp date offset to use.- useEventStartDate: Use the date the - // event was started as the offset- useConfiguredOffset: Use an explicitly configured - // date as the offset - TimestampOffsetMode *string `locationName:"timestampOffsetMode" type:"string" enum:"SmoothGroupTimestampOffsetMode"` + // Recurring usage charge for each reserved resource, e.g. 
'157.0' + UsagePrice *float64 `locationName:"usagePrice" type:"double"` } // String returns the string representation -func (s MsSmoothGroupSettings) String() string { +func (s Reservation) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s MsSmoothGroupSettings) GoString() string { +func (s Reservation) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *MsSmoothGroupSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "MsSmoothGroupSettings"} - if s.Destination == nil { - invalidParams.Add(request.NewErrParamRequired("Destination")) - } - if s.FragmentLength != nil && *s.FragmentLength < 1 { - invalidParams.Add(request.NewErrParamMinValue("FragmentLength", 1)) - } +// SetArn sets the Arn field's value. +func (s *Reservation) SetArn(v string) *Reservation { + s.Arn = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCount sets the Count field's value. +func (s *Reservation) SetCount(v int64) *Reservation { + s.Count = &v + return s } -// SetAcquisitionPointId sets the AcquisitionPointId field's value. -func (s *MsSmoothGroupSettings) SetAcquisitionPointId(v string) *MsSmoothGroupSettings { - s.AcquisitionPointId = &v +// SetCurrencyCode sets the CurrencyCode field's value. +func (s *Reservation) SetCurrencyCode(v string) *Reservation { + s.CurrencyCode = &v return s } -// SetAudioOnlyTimecodeControl sets the AudioOnlyTimecodeControl field's value. -func (s *MsSmoothGroupSettings) SetAudioOnlyTimecodeControl(v string) *MsSmoothGroupSettings { - s.AudioOnlyTimecodeControl = &v +// SetDuration sets the Duration field's value. +func (s *Reservation) SetDuration(v int64) *Reservation { + s.Duration = &v return s } -// SetCertificateMode sets the CertificateMode field's value. -func (s *MsSmoothGroupSettings) SetCertificateMode(v string) *MsSmoothGroupSettings { - s.CertificateMode = &v +// SetDurationUnits sets the DurationUnits field's value. +func (s *Reservation) SetDurationUnits(v string) *Reservation { + s.DurationUnits = &v return s } -// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. -func (s *MsSmoothGroupSettings) SetConnectionRetryInterval(v int64) *MsSmoothGroupSettings { - s.ConnectionRetryInterval = &v +// SetEnd sets the End field's value. +func (s *Reservation) SetEnd(v string) *Reservation { + s.End = &v return s } -// SetDestination sets the Destination field's value. -func (s *MsSmoothGroupSettings) SetDestination(v *OutputLocationRef) *MsSmoothGroupSettings { - s.Destination = v +// SetFixedPrice sets the FixedPrice field's value. +func (s *Reservation) SetFixedPrice(v float64) *Reservation { + s.FixedPrice = &v return s } -// SetEventId sets the EventId field's value. -func (s *MsSmoothGroupSettings) SetEventId(v string) *MsSmoothGroupSettings { - s.EventId = &v +// SetName sets the Name field's value. +func (s *Reservation) SetName(v string) *Reservation { + s.Name = &v return s } -// SetEventIdMode sets the EventIdMode field's value. -func (s *MsSmoothGroupSettings) SetEventIdMode(v string) *MsSmoothGroupSettings { - s.EventIdMode = &v +// SetOfferingDescription sets the OfferingDescription field's value. +func (s *Reservation) SetOfferingDescription(v string) *Reservation { + s.OfferingDescription = &v return s } -// SetEventStopBehavior sets the EventStopBehavior field's value. 
-func (s *MsSmoothGroupSettings) SetEventStopBehavior(v string) *MsSmoothGroupSettings { - s.EventStopBehavior = &v +// SetOfferingId sets the OfferingId field's value. +func (s *Reservation) SetOfferingId(v string) *Reservation { + s.OfferingId = &v return s } -// SetFilecacheDuration sets the FilecacheDuration field's value. -func (s *MsSmoothGroupSettings) SetFilecacheDuration(v int64) *MsSmoothGroupSettings { - s.FilecacheDuration = &v +// SetOfferingType sets the OfferingType field's value. +func (s *Reservation) SetOfferingType(v string) *Reservation { + s.OfferingType = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *Reservation) SetRegion(v string) *Reservation { + s.Region = &v + return s +} + +// SetReservationId sets the ReservationId field's value. +func (s *Reservation) SetReservationId(v string) *Reservation { + s.ReservationId = &v + return s +} + +// SetResourceSpecification sets the ResourceSpecification field's value. +func (s *Reservation) SetResourceSpecification(v *ReservationResourceSpecification) *Reservation { + s.ResourceSpecification = v + return s +} + +// SetStart sets the Start field's value. +func (s *Reservation) SetStart(v string) *Reservation { + s.Start = &v + return s +} + +// SetState sets the State field's value. +func (s *Reservation) SetState(v string) *Reservation { + s.State = &v + return s +} + +// SetUsagePrice sets the UsagePrice field's value. +func (s *Reservation) SetUsagePrice(v float64) *Reservation { + s.UsagePrice = &v return s } -// SetFragmentLength sets the FragmentLength field's value. -func (s *MsSmoothGroupSettings) SetFragmentLength(v int64) *MsSmoothGroupSettings { - s.FragmentLength = &v - return s +// Resource configuration (codec, resolution, bitrate, ...) +type ReservationResourceSpecification struct { + _ struct{} `type:"structure"` + + // Codec, e.g. 'AVC' + Codec *string `locationName:"codec" type:"string" enum:"ReservationCodec"` + + // Maximum bitrate, e.g. 'MAX_20_MBPS' + MaximumBitrate *string `locationName:"maximumBitrate" type:"string" enum:"ReservationMaximumBitrate"` + + // Maximum framerate, e.g. 'MAX_30_FPS' (Outputs only) + MaximumFramerate *string `locationName:"maximumFramerate" type:"string" enum:"ReservationMaximumFramerate"` + + // Resolution, e.g. 'HD' + Resolution *string `locationName:"resolution" type:"string" enum:"ReservationResolution"` + + // Resource type, 'INPUT', 'OUTPUT', or 'CHANNEL' + ResourceType *string `locationName:"resourceType" type:"string" enum:"ReservationResourceType"` + + // Special feature, e.g. 'AUDIO_NORMALIZATION' (Channels only) + SpecialFeature *string `locationName:"specialFeature" type:"string" enum:"ReservationSpecialFeature"` + + // Video quality, e.g. 'STANDARD' (Outputs only) + VideoQuality *string `locationName:"videoQuality" type:"string" enum:"ReservationVideoQuality"` } -// SetInputLossAction sets the InputLossAction field's value. -func (s *MsSmoothGroupSettings) SetInputLossAction(v string) *MsSmoothGroupSettings { - s.InputLossAction = &v - return s +// String returns the string representation +func (s ReservationResourceSpecification) String() string { + return awsutil.Prettify(s) } -// SetNumRetries sets the NumRetries field's value. -func (s *MsSmoothGroupSettings) SetNumRetries(v int64) *MsSmoothGroupSettings { - s.NumRetries = &v - return s +// GoString returns the string representation +func (s ReservationResourceSpecification) GoString() string { + return s.String() } -// SetRestartDelay sets the RestartDelay field's value. 
-func (s *MsSmoothGroupSettings) SetRestartDelay(v int64) *MsSmoothGroupSettings { - s.RestartDelay = &v +// SetCodec sets the Codec field's value. +func (s *ReservationResourceSpecification) SetCodec(v string) *ReservationResourceSpecification { + s.Codec = &v return s } -// SetSegmentationMode sets the SegmentationMode field's value. -func (s *MsSmoothGroupSettings) SetSegmentationMode(v string) *MsSmoothGroupSettings { - s.SegmentationMode = &v +// SetMaximumBitrate sets the MaximumBitrate field's value. +func (s *ReservationResourceSpecification) SetMaximumBitrate(v string) *ReservationResourceSpecification { + s.MaximumBitrate = &v return s } -// SetSendDelayMs sets the SendDelayMs field's value. -func (s *MsSmoothGroupSettings) SetSendDelayMs(v int64) *MsSmoothGroupSettings { - s.SendDelayMs = &v +// SetMaximumFramerate sets the MaximumFramerate field's value. +func (s *ReservationResourceSpecification) SetMaximumFramerate(v string) *ReservationResourceSpecification { + s.MaximumFramerate = &v return s } -// SetSparseTrackType sets the SparseTrackType field's value. -func (s *MsSmoothGroupSettings) SetSparseTrackType(v string) *MsSmoothGroupSettings { - s.SparseTrackType = &v +// SetResolution sets the Resolution field's value. +func (s *ReservationResourceSpecification) SetResolution(v string) *ReservationResourceSpecification { + s.Resolution = &v return s } -// SetStreamManifestBehavior sets the StreamManifestBehavior field's value. -func (s *MsSmoothGroupSettings) SetStreamManifestBehavior(v string) *MsSmoothGroupSettings { - s.StreamManifestBehavior = &v +// SetResourceType sets the ResourceType field's value. +func (s *ReservationResourceSpecification) SetResourceType(v string) *ReservationResourceSpecification { + s.ResourceType = &v return s } -// SetTimestampOffset sets the TimestampOffset field's value. -func (s *MsSmoothGroupSettings) SetTimestampOffset(v string) *MsSmoothGroupSettings { - s.TimestampOffset = &v +// SetSpecialFeature sets the SpecialFeature field's value. +func (s *ReservationResourceSpecification) SetSpecialFeature(v string) *ReservationResourceSpecification { + s.SpecialFeature = &v return s } -// SetTimestampOffsetMode sets the TimestampOffsetMode field's value. -func (s *MsSmoothGroupSettings) SetTimestampOffsetMode(v string) *MsSmoothGroupSettings { - s.TimestampOffsetMode = &v +// SetVideoQuality sets the VideoQuality field's value. +func (s *ReservationResourceSpecification) SetVideoQuality(v string) *ReservationResourceSpecification { + s.VideoQuality = &v return s } -type MsSmoothOutputSettings struct { +type RtmpCaptionInfoDestinationSettings struct { _ struct{} `type:"structure"` - - // String concatenated to the end of the destination filename. Required for - // multiple outputs of the same type. - NameModifier *string `locationName:"nameModifier" type:"string"` } // String returns the string representation -func (s MsSmoothOutputSettings) String() string { +func (s RtmpCaptionInfoDestinationSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s MsSmoothOutputSettings) GoString() string { +func (s RtmpCaptionInfoDestinationSettings) GoString() string { return s.String() } -// SetNameModifier sets the NameModifier field's value. -func (s *MsSmoothOutputSettings) SetNameModifier(v string) *MsSmoothOutputSettings { - s.NameModifier = &v - return s -} - -// Network source to transcode. 
Must be accessible to the Elemental Live node -// that is running the live event through a network connection. -type NetworkInputSettings struct { +type RtmpGroupSettings struct { _ struct{} `type:"structure"` - // Specifies HLS input settings when the uri is for a HLS manifest. - HlsInputSettings *HlsInputSettings `locationName:"hlsInputSettings" type:"structure"` + // Authentication scheme to use when connecting with CDN + AuthenticationScheme *string `locationName:"authenticationScheme" type:"string" enum:"AuthenticationScheme"` - // Check HTTPS server certificates. When set to checkCryptographyOnly, cryptography - // in the certificate will be checked, but not the server's name. Certain subdomains - // (notably S3 buckets that use dots in the bucket name) do not strictly match - // the corresponding certificate's wildcard pattern and would otherwise cause - // the event to error. This setting is ignored for protocols that do not use - // https. - ServerValidation *string `locationName:"serverValidation" type:"string" enum:"NetworkInputServerValidation"` + // Controls behavior when content cache fills up. If remote origin server stalls + // the RTMP connection and does not accept content fast enough the 'Media Cache' + // will fill up. When the cache reaches the duration specified by cacheLength + // the cache will stop accepting new content. If set to disconnectImmediately, + // the RTMP output will force a disconnect. Clear the media cache, and reconnect + // after restartDelay seconds. If set to waitForServer, the RTMP output will + // wait up to 5 minutes to allow the origin server to begin accepting data again. + CacheFullBehavior *string `locationName:"cacheFullBehavior" type:"string" enum:"RtmpCacheFullBehavior"` + + // Cache length, in seconds, is used to calculate buffer size. + CacheLength *int64 `locationName:"cacheLength" min:"30" type:"integer"` + + // Controls the types of data that passes to onCaptionInfo outputs. If set to + // 'all' then 608 and 708 carried DTVCC data will be passed. If set to 'field1AndField2608' + // then DTVCC data will be stripped out, but 608 data from both fields will + // be passed. If set to 'field1608' then only the data carried in 608 from field + // 1 video will be passed. + CaptionData *string `locationName:"captionData" type:"string" enum:"RtmpCaptionData"` + + // If a streaming output fails, number of seconds to wait until a restart is + // initiated. A value of 0 means never restart. + RestartDelay *int64 `locationName:"restartDelay" type:"integer"` } // String returns the string representation -func (s NetworkInputSettings) String() string { +func (s RtmpGroupSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s NetworkInputSettings) GoString() string { +func (s RtmpGroupSettings) GoString() string { return s.String() } -// SetHlsInputSettings sets the HlsInputSettings field's value. -func (s *NetworkInputSettings) SetHlsInputSettings(v *HlsInputSettings) *NetworkInputSettings { - s.HlsInputSettings = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *RtmpGroupSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RtmpGroupSettings"} + if s.CacheLength != nil && *s.CacheLength < 30 { + invalidParams.Add(request.NewErrParamMinValue("CacheLength", 30)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthenticationScheme sets the AuthenticationScheme field's value. 
+func (s *RtmpGroupSettings) SetAuthenticationScheme(v string) *RtmpGroupSettings { + s.AuthenticationScheme = &v return s } -// SetServerValidation sets the ServerValidation field's value. -func (s *NetworkInputSettings) SetServerValidation(v string) *NetworkInputSettings { - s.ServerValidation = &v +// SetCacheFullBehavior sets the CacheFullBehavior field's value. +func (s *RtmpGroupSettings) SetCacheFullBehavior(v string) *RtmpGroupSettings { + s.CacheFullBehavior = &v return s } -// Output settings. There can be multiple outputs within a group. -type Output struct { - _ struct{} `type:"structure"` +// SetCacheLength sets the CacheLength field's value. +func (s *RtmpGroupSettings) SetCacheLength(v int64) *RtmpGroupSettings { + s.CacheLength = &v + return s +} - // The names of the AudioDescriptions used as audio sources for this output. - AudioDescriptionNames []*string `locationName:"audioDescriptionNames" type:"list"` +// SetCaptionData sets the CaptionData field's value. +func (s *RtmpGroupSettings) SetCaptionData(v string) *RtmpGroupSettings { + s.CaptionData = &v + return s +} - // The names of the CaptionDescriptions used as caption sources for this output. - CaptionDescriptionNames []*string `locationName:"captionDescriptionNames" type:"list"` +// SetRestartDelay sets the RestartDelay field's value. +func (s *RtmpGroupSettings) SetRestartDelay(v int64) *RtmpGroupSettings { + s.RestartDelay = &v + return s +} - // The name used to identify an output. - OutputName *string `locationName:"outputName" min:"1" type:"string"` +type RtmpOutputSettings struct { + _ struct{} `type:"structure"` - // Output type-specific settings. + // If set to verifyAuthenticity, verify the tls certificate chain to a trusted + // Certificate Authority (CA). This will cause rtmps outputs with self-signed + // certificates to fail. + CertificateMode *string `locationName:"certificateMode" type:"string" enum:"RtmpOutputCertificateMode"` + + // Number of seconds to wait before retrying a connection to the Flash Media + // server if the connection is lost. + ConnectionRetryInterval *int64 `locationName:"connectionRetryInterval" min:"1" type:"integer"` + + // The RTMP endpoint excluding the stream name (eg. rtmp://host/appname). For + // connection to Akamai, a username and password must be supplied. URI fields + // accept format identifiers. // - // OutputSettings is a required field - OutputSettings *OutputSettings `locationName:"outputSettings" type:"structure" required:"true"` + // Destination is a required field + Destination *OutputLocationRef `locationName:"destination" type:"structure" required:"true"` - // The name of the VideoDescription used as the source for this output. - VideoDescriptionName *string `locationName:"videoDescriptionName" type:"string"` + // Number of retry attempts. + NumRetries *int64 `locationName:"numRetries" type:"integer"` } // String returns the string representation -func (s Output) String() string { +func (s RtmpOutputSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Output) GoString() string { +func (s RtmpOutputSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
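`RtmpGroupSettings` enforces the 30-second floor on `cacheLength` through the `Validate` method shown above. A hedged sketch of how that check behaves (the values are arbitrary and chosen only to trigger the error):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	group := (&medialive.RtmpGroupSettings{}).
		SetCacheLength(15). // below the documented 30-second minimum
		SetRestartDelay(15)

	if err := group.Validate(); err != nil {
		fmt.Println(err) // reports the CacheLength minimum of 30
	}
}
```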
-func (s *Output) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Output"} - if s.OutputName != nil && len(*s.OutputName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("OutputName", 1)) - } - if s.OutputSettings == nil { - invalidParams.Add(request.NewErrParamRequired("OutputSettings")) +func (s *RtmpOutputSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RtmpOutputSettings"} + if s.ConnectionRetryInterval != nil && *s.ConnectionRetryInterval < 1 { + invalidParams.Add(request.NewErrParamMinValue("ConnectionRetryInterval", 1)) } - if s.OutputSettings != nil { - if err := s.OutputSettings.Validate(); err != nil { - invalidParams.AddNested("OutputSettings", err.(request.ErrInvalidParams)) - } + if s.Destination == nil { + invalidParams.Add(request.NewErrParamRequired("Destination")) } if invalidParams.Len() > 0 { @@ -8991,159 +12505,247 @@ func (s *Output) Validate() error { return nil } -// SetAudioDescriptionNames sets the AudioDescriptionNames field's value. -func (s *Output) SetAudioDescriptionNames(v []*string) *Output { - s.AudioDescriptionNames = v - return s -} - -// SetCaptionDescriptionNames sets the CaptionDescriptionNames field's value. -func (s *Output) SetCaptionDescriptionNames(v []*string) *Output { - s.CaptionDescriptionNames = v +// SetCertificateMode sets the CertificateMode field's value. +func (s *RtmpOutputSettings) SetCertificateMode(v string) *RtmpOutputSettings { + s.CertificateMode = &v return s } -// SetOutputName sets the OutputName field's value. -func (s *Output) SetOutputName(v string) *Output { - s.OutputName = &v +// SetConnectionRetryInterval sets the ConnectionRetryInterval field's value. +func (s *RtmpOutputSettings) SetConnectionRetryInterval(v int64) *RtmpOutputSettings { + s.ConnectionRetryInterval = &v return s } -// SetOutputSettings sets the OutputSettings field's value. -func (s *Output) SetOutputSettings(v *OutputSettings) *Output { - s.OutputSettings = v +// SetDestination sets the Destination field's value. +func (s *RtmpOutputSettings) SetDestination(v *OutputLocationRef) *RtmpOutputSettings { + s.Destination = v return s } -// SetVideoDescriptionName sets the VideoDescriptionName field's value. -func (s *Output) SetVideoDescriptionName(v string) *Output { - s.VideoDescriptionName = &v +// SetNumRetries sets the NumRetries field's value. +func (s *RtmpOutputSettings) SetNumRetries(v int64) *RtmpOutputSettings { + s.NumRetries = &v return s } -type OutputDestination struct { +// Contains information on a single schedule action. +type ScheduleAction struct { _ struct{} `type:"structure"` - // User-specified id. This is used in an output group or an output. - Id *string `locationName:"id" type:"string"` + // The name of the action, must be unique within the schedule. This name provides + // the main reference to an action once it is added to the schedule. A name + // is unique if it is no longer in the schedule. The schedule is automatically + // cleaned up to remove actions with a start time of more than 1 hour ago (approximately) + // so at that point a name can be reused. + // + // ActionName is a required field + ActionName *string `locationName:"actionName" type:"string" required:"true"` - // Destination settings for output; one for each redundant encoder. - Settings []*OutputDestinationSettings `locationName:"settings" type:"list"` + // Settings for this schedule action. 
+ // + // ScheduleActionSettings is a required field + ScheduleActionSettings *ScheduleActionSettings `locationName:"scheduleActionSettings" type:"structure" required:"true"` + + // The time for the action to start in the channel. + // + // ScheduleActionStartSettings is a required field + ScheduleActionStartSettings *ScheduleActionStartSettings `locationName:"scheduleActionStartSettings" type:"structure" required:"true"` } // String returns the string representation -func (s OutputDestination) String() string { +func (s ScheduleAction) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OutputDestination) GoString() string { +func (s ScheduleAction) GoString() string { return s.String() } -// SetId sets the Id field's value. -func (s *OutputDestination) SetId(v string) *OutputDestination { - s.Id = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ScheduleAction) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ScheduleAction"} + if s.ActionName == nil { + invalidParams.Add(request.NewErrParamRequired("ActionName")) + } + if s.ScheduleActionSettings == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduleActionSettings")) + } + if s.ScheduleActionStartSettings == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduleActionStartSettings")) + } + if s.ScheduleActionSettings != nil { + if err := s.ScheduleActionSettings.Validate(); err != nil { + invalidParams.AddNested("ScheduleActionSettings", err.(request.ErrInvalidParams)) + } + } + if s.ScheduleActionStartSettings != nil { + if err := s.ScheduleActionStartSettings.Validate(); err != nil { + invalidParams.AddNested("ScheduleActionStartSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActionName sets the ActionName field's value. +func (s *ScheduleAction) SetActionName(v string) *ScheduleAction { + s.ActionName = &v return s } -// SetSettings sets the Settings field's value. -func (s *OutputDestination) SetSettings(v []*OutputDestinationSettings) *OutputDestination { - s.Settings = v +// SetScheduleActionSettings sets the ScheduleActionSettings field's value. +func (s *ScheduleAction) SetScheduleActionSettings(v *ScheduleActionSettings) *ScheduleAction { + s.ScheduleActionSettings = v return s } -type OutputDestinationSettings struct { +// SetScheduleActionStartSettings sets the ScheduleActionStartSettings field's value. +func (s *ScheduleAction) SetScheduleActionStartSettings(v *ScheduleActionStartSettings) *ScheduleAction { + s.ScheduleActionStartSettings = v + return s +} + +// Holds the settings for a single schedule action. 
+type ScheduleActionSettings struct { _ struct{} `type:"structure"` - // key used to extract the password from EC2 Parameter store - PasswordParam *string `locationName:"passwordParam" type:"string"` + // Settings to switch an input + InputSwitchSettings *InputSwitchScheduleActionSettings `locationName:"inputSwitchSettings" type:"structure"` - // A URL specifying a destination - Url *string `locationName:"url" type:"string"` + // Settings for SCTE-35 return_to_network message + Scte35ReturnToNetworkSettings *Scte35ReturnToNetworkScheduleActionSettings `locationName:"scte35ReturnToNetworkSettings" type:"structure"` - // username for destination - Username *string `locationName:"username" type:"string"` + // Settings for SCTE-35 splice_insert message + Scte35SpliceInsertSettings *Scte35SpliceInsertScheduleActionSettings `locationName:"scte35SpliceInsertSettings" type:"structure"` + + // Settings for SCTE-35 time_signal message + Scte35TimeSignalSettings *Scte35TimeSignalScheduleActionSettings `locationName:"scte35TimeSignalSettings" type:"structure"` + + // Settings to activate a static image overlay + StaticImageActivateSettings *StaticImageActivateScheduleActionSettings `locationName:"staticImageActivateSettings" type:"structure"` + + // Settings to deactivate a static image overlay + StaticImageDeactivateSettings *StaticImageDeactivateScheduleActionSettings `locationName:"staticImageDeactivateSettings" type:"structure"` } // String returns the string representation -func (s OutputDestinationSettings) String() string { +func (s ScheduleActionSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OutputDestinationSettings) GoString() string { +func (s ScheduleActionSettings) GoString() string { return s.String() } -// SetPasswordParam sets the PasswordParam field's value. -func (s *OutputDestinationSettings) SetPasswordParam(v string) *OutputDestinationSettings { - s.PasswordParam = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ScheduleActionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ScheduleActionSettings"} + if s.InputSwitchSettings != nil { + if err := s.InputSwitchSettings.Validate(); err != nil { + invalidParams.AddNested("InputSwitchSettings", err.(request.ErrInvalidParams)) + } + } + if s.Scte35ReturnToNetworkSettings != nil { + if err := s.Scte35ReturnToNetworkSettings.Validate(); err != nil { + invalidParams.AddNested("Scte35ReturnToNetworkSettings", err.(request.ErrInvalidParams)) + } + } + if s.Scte35SpliceInsertSettings != nil { + if err := s.Scte35SpliceInsertSettings.Validate(); err != nil { + invalidParams.AddNested("Scte35SpliceInsertSettings", err.(request.ErrInvalidParams)) + } + } + if s.Scte35TimeSignalSettings != nil { + if err := s.Scte35TimeSignalSettings.Validate(); err != nil { + invalidParams.AddNested("Scte35TimeSignalSettings", err.(request.ErrInvalidParams)) + } + } + if s.StaticImageActivateSettings != nil { + if err := s.StaticImageActivateSettings.Validate(); err != nil { + invalidParams.AddNested("StaticImageActivateSettings", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputSwitchSettings sets the InputSwitchSettings field's value. +func (s *ScheduleActionSettings) SetInputSwitchSettings(v *InputSwitchScheduleActionSettings) *ScheduleActionSettings { + s.InputSwitchSettings = v return s } -// SetUrl sets the Url field's value. 
-func (s *OutputDestinationSettings) SetUrl(v string) *OutputDestinationSettings { - s.Url = &v +// SetScte35ReturnToNetworkSettings sets the Scte35ReturnToNetworkSettings field's value. +func (s *ScheduleActionSettings) SetScte35ReturnToNetworkSettings(v *Scte35ReturnToNetworkScheduleActionSettings) *ScheduleActionSettings { + s.Scte35ReturnToNetworkSettings = v return s } -// SetUsername sets the Username field's value. -func (s *OutputDestinationSettings) SetUsername(v string) *OutputDestinationSettings { - s.Username = &v +// SetScte35SpliceInsertSettings sets the Scte35SpliceInsertSettings field's value. +func (s *ScheduleActionSettings) SetScte35SpliceInsertSettings(v *Scte35SpliceInsertScheduleActionSettings) *ScheduleActionSettings { + s.Scte35SpliceInsertSettings = v return s } -// Output groups for this Live Event. Output groups contain information about -// where streams should be distributed. -type OutputGroup struct { - _ struct{} `type:"structure"` +// SetScte35TimeSignalSettings sets the Scte35TimeSignalSettings field's value. +func (s *ScheduleActionSettings) SetScte35TimeSignalSettings(v *Scte35TimeSignalScheduleActionSettings) *ScheduleActionSettings { + s.Scte35TimeSignalSettings = v + return s +} - // Custom output group name optionally defined by the user. Only letters, numbers, - // and the underscore character allowed; only 32 characters allowed. - Name *string `locationName:"name" type:"string"` +// SetStaticImageActivateSettings sets the StaticImageActivateSettings field's value. +func (s *ScheduleActionSettings) SetStaticImageActivateSettings(v *StaticImageActivateScheduleActionSettings) *ScheduleActionSettings { + s.StaticImageActivateSettings = v + return s +} - // Settings associated with the output group. - // - // OutputGroupSettings is a required field - OutputGroupSettings *OutputGroupSettings `locationName:"outputGroupSettings" type:"structure" required:"true"` +// SetStaticImageDeactivateSettings sets the StaticImageDeactivateSettings field's value. +func (s *ScheduleActionSettings) SetStaticImageDeactivateSettings(v *StaticImageDeactivateScheduleActionSettings) *ScheduleActionSettings { + s.StaticImageDeactivateSettings = v + return s +} - // Outputs is a required field - Outputs []*Output `locationName:"outputs" type:"list" required:"true"` +// Settings to specify the start time for an action. +type ScheduleActionStartSettings struct { + _ struct{} `type:"structure"` + + // Holds the start time for the action. + FixedModeScheduleActionStartSettings *FixedModeScheduleActionStartSettings `locationName:"fixedModeScheduleActionStartSettings" type:"structure"` + + // Specifies an action to follow for scheduling this action. + FollowModeScheduleActionStartSettings *FollowModeScheduleActionStartSettings `locationName:"followModeScheduleActionStartSettings" type:"structure"` } // String returns the string representation -func (s OutputGroup) String() string { +func (s ScheduleActionStartSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OutputGroup) GoString() string { +func (s ScheduleActionStartSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
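The schedule-action types introduced here nest several levels deep (action, settings, start settings), which is easier to see with a worked construction. This is purely illustrative: the action name and splice event ID are placeholders, and the SCTE-35 return_to_network settings used below are defined further down in this same file.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	action := &medialive.ScheduleAction{
		// Must be unique within the channel's schedule.
		ActionName: aws.String("example-return-to-network"),
		ScheduleActionSettings: &medialive.ScheduleActionSettings{
			Scte35ReturnToNetworkSettings: &medialive.Scte35ReturnToNetworkScheduleActionSettings{
				SpliceEventId: aws.Int64(1234567),
			},
		},
		// Fixed- or follow-mode start settings would normally be filled in
		// here; they are omitted from this sketch, which Validate still accepts.
		ScheduleActionStartSettings: &medialive.ScheduleActionStartSettings{},
	}

	if err := action.Validate(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("schedule action is structurally valid")
}
```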
-func (s *OutputGroup) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OutputGroup"} - if s.OutputGroupSettings == nil { - invalidParams.Add(request.NewErrParamRequired("OutputGroupSettings")) - } - if s.Outputs == nil { - invalidParams.Add(request.NewErrParamRequired("Outputs")) - } - if s.OutputGroupSettings != nil { - if err := s.OutputGroupSettings.Validate(); err != nil { - invalidParams.AddNested("OutputGroupSettings", err.(request.ErrInvalidParams)) +func (s *ScheduleActionStartSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ScheduleActionStartSettings"} + if s.FixedModeScheduleActionStartSettings != nil { + if err := s.FixedModeScheduleActionStartSettings.Validate(); err != nil { + invalidParams.AddNested("FixedModeScheduleActionStartSettings", err.(request.ErrInvalidParams)) } } - if s.Outputs != nil { - for i, v := range s.Outputs { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Outputs", i), err.(request.ErrInvalidParams)) - } + if s.FollowModeScheduleActionStartSettings != nil { + if err := s.FollowModeScheduleActionStartSettings.Validate(); err != nil { + invalidParams.AddNested("FollowModeScheduleActionStartSettings", err.(request.ErrInvalidParams)) } } @@ -9153,63 +12755,60 @@ func (s *OutputGroup) Validate() error { return nil } -// SetName sets the Name field's value. -func (s *OutputGroup) SetName(v string) *OutputGroup { - s.Name = &v +// SetFixedModeScheduleActionStartSettings sets the FixedModeScheduleActionStartSettings field's value. +func (s *ScheduleActionStartSettings) SetFixedModeScheduleActionStartSettings(v *FixedModeScheduleActionStartSettings) *ScheduleActionStartSettings { + s.FixedModeScheduleActionStartSettings = v return s } -// SetOutputGroupSettings sets the OutputGroupSettings field's value. -func (s *OutputGroup) SetOutputGroupSettings(v *OutputGroupSettings) *OutputGroup { - s.OutputGroupSettings = v +// SetFollowModeScheduleActionStartSettings sets the FollowModeScheduleActionStartSettings field's value. +func (s *ScheduleActionStartSettings) SetFollowModeScheduleActionStartSettings(v *FollowModeScheduleActionStartSettings) *ScheduleActionStartSettings { + s.FollowModeScheduleActionStartSettings = v return s } -// SetOutputs sets the Outputs field's value. -func (s *OutputGroup) SetOutputs(v []*Output) *OutputGroup { - s.Outputs = v - return s +type Scte20PlusEmbeddedDestinationSettings struct { + _ struct{} `type:"structure"` } -type OutputGroupSettings struct { - _ struct{} `type:"structure"` +// String returns the string representation +func (s Scte20PlusEmbeddedDestinationSettings) String() string { + return awsutil.Prettify(s) +} - ArchiveGroupSettings *ArchiveGroupSettings `locationName:"archiveGroupSettings" type:"structure"` +// GoString returns the string representation +func (s Scte20PlusEmbeddedDestinationSettings) GoString() string { + return s.String() +} - HlsGroupSettings *HlsGroupSettings `locationName:"hlsGroupSettings" type:"structure"` +type Scte20SourceSettings struct { + _ struct{} `type:"structure"` - MsSmoothGroupSettings *MsSmoothGroupSettings `locationName:"msSmoothGroupSettings" type:"structure"` + // If upconvert, 608 data is both passed through via the "608 compatibility + // bytes" fields of the 708 wrapper as well as translated into 708. 708 data + // present in the source content will be discarded. 
+ Convert608To708 *string `locationName:"convert608To708" type:"string" enum:"Scte20Convert608To708"` - UdpGroupSettings *UdpGroupSettings `locationName:"udpGroupSettings" type:"structure"` + // Specifies the 608/708 channel number within the video track from which to + // extract captions. Unused for passthrough. + Source608ChannelNumber *int64 `locationName:"source608ChannelNumber" min:"1" type:"integer"` } // String returns the string representation -func (s OutputGroupSettings) String() string { +func (s Scte20SourceSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OutputGroupSettings) GoString() string { +func (s Scte20SourceSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *OutputGroupSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OutputGroupSettings"} - if s.ArchiveGroupSettings != nil { - if err := s.ArchiveGroupSettings.Validate(); err != nil { - invalidParams.AddNested("ArchiveGroupSettings", err.(request.ErrInvalidParams)) - } - } - if s.HlsGroupSettings != nil { - if err := s.HlsGroupSettings.Validate(); err != nil { - invalidParams.AddNested("HlsGroupSettings", err.(request.ErrInvalidParams)) - } - } - if s.MsSmoothGroupSettings != nil { - if err := s.MsSmoothGroupSettings.Validate(); err != nil { - invalidParams.AddNested("MsSmoothGroupSettings", err.(request.ErrInvalidParams)) - } +func (s *Scte20SourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte20SourceSettings"} + if s.Source608ChannelNumber != nil && *s.Source608ChannelNumber < 1 { + invalidParams.Add(request.NewErrParamMinValue("Source608ChannelNumber", 1)) } if invalidParams.Len() > 0 { @@ -9218,92 +12817,125 @@ func (s *OutputGroupSettings) Validate() error { return nil } -// SetArchiveGroupSettings sets the ArchiveGroupSettings field's value. -func (s *OutputGroupSettings) SetArchiveGroupSettings(v *ArchiveGroupSettings) *OutputGroupSettings { - s.ArchiveGroupSettings = v +// SetConvert608To708 sets the Convert608To708 field's value. +func (s *Scte20SourceSettings) SetConvert608To708(v string) *Scte20SourceSettings { + s.Convert608To708 = &v return s } -// SetHlsGroupSettings sets the HlsGroupSettings field's value. -func (s *OutputGroupSettings) SetHlsGroupSettings(v *HlsGroupSettings) *OutputGroupSettings { - s.HlsGroupSettings = v +// SetSource608ChannelNumber sets the Source608ChannelNumber field's value. +func (s *Scte20SourceSettings) SetSource608ChannelNumber(v int64) *Scte20SourceSettings { + s.Source608ChannelNumber = &v return s } -// SetMsSmoothGroupSettings sets the MsSmoothGroupSettings field's value. -func (s *OutputGroupSettings) SetMsSmoothGroupSettings(v *MsSmoothGroupSettings) *OutputGroupSettings { - s.MsSmoothGroupSettings = v - return s +type Scte27DestinationSettings struct { + _ struct{} `type:"structure"` } -// SetUdpGroupSettings sets the UdpGroupSettings field's value. 
-func (s *OutputGroupSettings) SetUdpGroupSettings(v *UdpGroupSettings) *OutputGroupSettings { - s.UdpGroupSettings = v - return s +// String returns the string representation +func (s Scte27DestinationSettings) String() string { + return awsutil.Prettify(s) } -// Reference to an OutputDestination ID defined in the channel -type OutputLocationRef struct { +// GoString returns the string representation +func (s Scte27DestinationSettings) GoString() string { + return s.String() +} + +type Scte27SourceSettings struct { _ struct{} `type:"structure"` - DestinationRefId *string `locationName:"destinationRefId" type:"string"` + // The pid field is used in conjunction with the caption selector languageCode + // field as follows: - Specify PID and Language: Extracts captions from that + // PID; the language is "informational". - Specify PID and omit Language: Extracts + // the specified PID. - Omit PID and specify Language: Extracts the specified + // language, whichever PID that happens to be. - Omit PID and omit Language: + // Valid only if source is DVB-Sub that is being passed through; all languages + // will be passed through. + Pid *int64 `locationName:"pid" min:"1" type:"integer"` } // String returns the string representation -func (s OutputLocationRef) String() string { +func (s Scte27SourceSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OutputLocationRef) GoString() string { +func (s Scte27SourceSettings) GoString() string { return s.String() } -// SetDestinationRefId sets the DestinationRefId field's value. -func (s *OutputLocationRef) SetDestinationRefId(v string) *OutputLocationRef { - s.DestinationRefId = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *Scte27SourceSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte27SourceSettings"} + if s.Pid != nil && *s.Pid < 1 { + invalidParams.Add(request.NewErrParamMinValue("Pid", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPid sets the Pid field's value. +func (s *Scte27SourceSettings) SetPid(v int64) *Scte27SourceSettings { + s.Pid = &v return s } -type OutputSettings struct { +// Corresponds to SCTE-35 delivery_not_restricted_flag parameter. To declare +// delivery restrictions, include this element and its four "restriction" flags. +// To declare that there are no restrictions, omit this element. +type Scte35DeliveryRestrictions struct { _ struct{} `type:"structure"` - ArchiveOutputSettings *ArchiveOutputSettings `locationName:"archiveOutputSettings" type:"structure"` + // Corresponds to SCTE-35 archive_allowed_flag. + // + // ArchiveAllowedFlag is a required field + ArchiveAllowedFlag *string `locationName:"archiveAllowedFlag" type:"string" required:"true" enum:"Scte35ArchiveAllowedFlag"` - HlsOutputSettings *HlsOutputSettings `locationName:"hlsOutputSettings" type:"structure"` + // Corresponds to SCTE-35 device_restrictions parameter. + // + // DeviceRestrictions is a required field + DeviceRestrictions *string `locationName:"deviceRestrictions" type:"string" required:"true" enum:"Scte35DeviceRestrictions"` - MsSmoothOutputSettings *MsSmoothOutputSettings `locationName:"msSmoothOutputSettings" type:"structure"` + // Corresponds to SCTE-35 no_regional_blackout_flag parameter. 
+ // + // NoRegionalBlackoutFlag is a required field + NoRegionalBlackoutFlag *string `locationName:"noRegionalBlackoutFlag" type:"string" required:"true" enum:"Scte35NoRegionalBlackoutFlag"` - UdpOutputSettings *UdpOutputSettings `locationName:"udpOutputSettings" type:"structure"` + // Corresponds to SCTE-35 web_delivery_allowed_flag parameter. + // + // WebDeliveryAllowedFlag is a required field + WebDeliveryAllowedFlag *string `locationName:"webDeliveryAllowedFlag" type:"string" required:"true" enum:"Scte35WebDeliveryAllowedFlag"` } // String returns the string representation -func (s OutputSettings) String() string { +func (s Scte35DeliveryRestrictions) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OutputSettings) GoString() string { +func (s Scte35DeliveryRestrictions) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *OutputSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OutputSettings"} - if s.ArchiveOutputSettings != nil { - if err := s.ArchiveOutputSettings.Validate(); err != nil { - invalidParams.AddNested("ArchiveOutputSettings", err.(request.ErrInvalidParams)) - } +func (s *Scte35DeliveryRestrictions) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte35DeliveryRestrictions"} + if s.ArchiveAllowedFlag == nil { + invalidParams.Add(request.NewErrParamRequired("ArchiveAllowedFlag")) } - if s.HlsOutputSettings != nil { - if err := s.HlsOutputSettings.Validate(); err != nil { - invalidParams.AddNested("HlsOutputSettings", err.(request.ErrInvalidParams)) - } + if s.DeviceRestrictions == nil { + invalidParams.Add(request.NewErrParamRequired("DeviceRestrictions")) } - if s.UdpOutputSettings != nil { - if err := s.UdpOutputSettings.Validate(); err != nil { - invalidParams.AddNested("UdpOutputSettings", err.(request.ErrInvalidParams)) - } + if s.NoRegionalBlackoutFlag == nil { + invalidParams.Add(request.NewErrParamRequired("NoRegionalBlackoutFlag")) + } + if s.WebDeliveryAllowedFlag == nil { + invalidParams.Add(request.NewErrParamRequired("WebDeliveryAllowedFlag")) } if invalidParams.Len() > 0 { @@ -9312,158 +12944,241 @@ func (s *OutputSettings) Validate() error { return nil } -// SetArchiveOutputSettings sets the ArchiveOutputSettings field's value. -func (s *OutputSettings) SetArchiveOutputSettings(v *ArchiveOutputSettings) *OutputSettings { - s.ArchiveOutputSettings = v +// SetArchiveAllowedFlag sets the ArchiveAllowedFlag field's value. +func (s *Scte35DeliveryRestrictions) SetArchiveAllowedFlag(v string) *Scte35DeliveryRestrictions { + s.ArchiveAllowedFlag = &v return s } -// SetHlsOutputSettings sets the HlsOutputSettings field's value. -func (s *OutputSettings) SetHlsOutputSettings(v *HlsOutputSettings) *OutputSettings { - s.HlsOutputSettings = v +// SetDeviceRestrictions sets the DeviceRestrictions field's value. +func (s *Scte35DeliveryRestrictions) SetDeviceRestrictions(v string) *Scte35DeliveryRestrictions { + s.DeviceRestrictions = &v return s } -// SetMsSmoothOutputSettings sets the MsSmoothOutputSettings field's value. -func (s *OutputSettings) SetMsSmoothOutputSettings(v *MsSmoothOutputSettings) *OutputSettings { - s.MsSmoothOutputSettings = v +// SetNoRegionalBlackoutFlag sets the NoRegionalBlackoutFlag field's value. 
+func (s *Scte35DeliveryRestrictions) SetNoRegionalBlackoutFlag(v string) *Scte35DeliveryRestrictions { + s.NoRegionalBlackoutFlag = &v return s } -// SetUdpOutputSettings sets the UdpOutputSettings field's value. -func (s *OutputSettings) SetUdpOutputSettings(v *UdpOutputSettings) *OutputSettings { - s.UdpOutputSettings = v +// SetWebDeliveryAllowedFlag sets the WebDeliveryAllowedFlag field's value. +func (s *Scte35DeliveryRestrictions) SetWebDeliveryAllowedFlag(v string) *Scte35DeliveryRestrictions { + s.WebDeliveryAllowedFlag = &v return s } -type PassThroughSettings struct { +// Holds one set of SCTE-35 Descriptor Settings. +type Scte35Descriptor struct { _ struct{} `type:"structure"` + + // SCTE-35 Descriptor Settings. + // + // Scte35DescriptorSettings is a required field + Scte35DescriptorSettings *Scte35DescriptorSettings `locationName:"scte35DescriptorSettings" type:"structure" required:"true"` } // String returns the string representation -func (s PassThroughSettings) String() string { +func (s Scte35Descriptor) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s PassThroughSettings) GoString() string { +func (s Scte35Descriptor) GoString() string { return s.String() } -type RemixSettings struct { - _ struct{} `type:"structure"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *Scte35Descriptor) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte35Descriptor"} + if s.Scte35DescriptorSettings == nil { + invalidParams.Add(request.NewErrParamRequired("Scte35DescriptorSettings")) + } + if s.Scte35DescriptorSettings != nil { + if err := s.Scte35DescriptorSettings.Validate(); err != nil { + invalidParams.AddNested("Scte35DescriptorSettings", err.(request.ErrInvalidParams)) + } + } - // Mapping of input channels to output channels, with appropriate gain adjustments. - // - // ChannelMappings is a required field - ChannelMappings []*AudioChannelMapping `locationName:"channelMappings" type:"list" required:"true"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // Number of input channels to be used. - ChannelsIn *int64 `locationName:"channelsIn" min:"1" type:"integer"` +// SetScte35DescriptorSettings sets the Scte35DescriptorSettings field's value. +func (s *Scte35Descriptor) SetScte35DescriptorSettings(v *Scte35DescriptorSettings) *Scte35Descriptor { + s.Scte35DescriptorSettings = v + return s +} - // Number of output channels to be produced.Valid values: 1, 2, 4, 6, 8 - ChannelsOut *int64 `locationName:"channelsOut" min:"1" type:"integer"` +// SCTE-35 Descriptor settings. +type Scte35DescriptorSettings struct { + _ struct{} `type:"structure"` + + // SCTE-35 Segmentation Descriptor. + // + // SegmentationDescriptorScte35DescriptorSettings is a required field + SegmentationDescriptorScte35DescriptorSettings *Scte35SegmentationDescriptor `locationName:"segmentationDescriptorScte35DescriptorSettings" type:"structure" required:"true"` } // String returns the string representation -func (s RemixSettings) String() string { +func (s Scte35DescriptorSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s RemixSettings) GoString() string { +func (s Scte35DescriptorSettings) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
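As the doc comment above notes, delivery restrictions are either declared in full (all four flags) or omitted entirely; there is no partial form. A tiny illustrative check of that contract, again not part of the vendored code:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	// Including the element without the four flags is rejected client-side;
	// callers either set all four or leave DeliveryRestrictions nil entirely.
	restrictions := &medialive.Scte35DeliveryRestrictions{}

	if err := restrictions.Validate(); err != nil {
		fmt.Println(err) // lists all four missing required flags
	}
}
```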
-func (s *RemixSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "RemixSettings"} - if s.ChannelMappings == nil { - invalidParams.Add(request.NewErrParamRequired("ChannelMappings")) - } - if s.ChannelsIn != nil && *s.ChannelsIn < 1 { - invalidParams.Add(request.NewErrParamMinValue("ChannelsIn", 1)) - } - if s.ChannelsOut != nil && *s.ChannelsOut < 1 { - invalidParams.Add(request.NewErrParamMinValue("ChannelsOut", 1)) - } - if s.ChannelMappings != nil { - for i, v := range s.ChannelMappings { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ChannelMappings", i), err.(request.ErrInvalidParams)) - } +func (s *Scte35DescriptorSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte35DescriptorSettings"} + if s.SegmentationDescriptorScte35DescriptorSettings == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentationDescriptorScte35DescriptorSettings")) + } + if s.SegmentationDescriptorScte35DescriptorSettings != nil { + if err := s.SegmentationDescriptorScte35DescriptorSettings.Validate(); err != nil { + invalidParams.AddNested("SegmentationDescriptorScte35DescriptorSettings", err.(request.ErrInvalidParams)) } } if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetChannelMappings sets the ChannelMappings field's value. -func (s *RemixSettings) SetChannelMappings(v []*AudioChannelMapping) *RemixSettings { - s.ChannelMappings = v - return s -} - -// SetChannelsIn sets the ChannelsIn field's value. -func (s *RemixSettings) SetChannelsIn(v int64) *RemixSettings { - s.ChannelsIn = &v - return s + return invalidParams + } + return nil } -// SetChannelsOut sets the ChannelsOut field's value. -func (s *RemixSettings) SetChannelsOut(v int64) *RemixSettings { - s.ChannelsOut = &v +// SetSegmentationDescriptorScte35DescriptorSettings sets the SegmentationDescriptorScte35DescriptorSettings field's value. +func (s *Scte35DescriptorSettings) SetSegmentationDescriptorScte35DescriptorSettings(v *Scte35SegmentationDescriptor) *Scte35DescriptorSettings { + s.SegmentationDescriptorScte35DescriptorSettings = v return s } -type Scte20PlusEmbeddedDestinationSettings struct { +// Settings for a SCTE-35 return_to_network message. +type Scte35ReturnToNetworkScheduleActionSettings struct { _ struct{} `type:"structure"` + + // The splice_event_id for the SCTE-35 splice_insert, as defined in SCTE-35. + // + // SpliceEventId is a required field + SpliceEventId *int64 `locationName:"spliceEventId" type:"long" required:"true"` } // String returns the string representation -func (s Scte20PlusEmbeddedDestinationSettings) String() string { +func (s Scte35ReturnToNetworkScheduleActionSettings) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Scte20PlusEmbeddedDestinationSettings) GoString() string { +func (s Scte35ReturnToNetworkScheduleActionSettings) GoString() string { return s.String() } -type Scte20SourceSettings struct { +// Validate inspects the fields of the type to determine if they are valid. +func (s *Scte35ReturnToNetworkScheduleActionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte35ReturnToNetworkScheduleActionSettings"} + if s.SpliceEventId == nil { + invalidParams.Add(request.NewErrParamRequired("SpliceEventId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSpliceEventId sets the SpliceEventId field's value. 
+func (s *Scte35ReturnToNetworkScheduleActionSettings) SetSpliceEventId(v int64) *Scte35ReturnToNetworkScheduleActionSettings { + s.SpliceEventId = &v + return s +} + +// Corresponds to SCTE-35 segmentation_descriptor. +type Scte35SegmentationDescriptor struct { _ struct{} `type:"structure"` - // If upconvert, 608 data is both passed through via the "608 compatibility - // bytes" fields of the 708 wrapper as well as translated into 708. 708 data - // present in the source content will be discarded. - Convert608To708 *string `locationName:"convert608To708" type:"string" enum:"Scte20Convert608To708"` + // Holds the four SCTE-35 delivery restriction parameters. + DeliveryRestrictions *Scte35DeliveryRestrictions `locationName:"deliveryRestrictions" type:"structure"` - // Specifies the 608/708 channel number within the video track from which to - // extract captions. Unused for passthrough. - Source608ChannelNumber *int64 `locationName:"source608ChannelNumber" min:"1" type:"integer"` + // Corresponds to SCTE-35 segment_num. A value that is valid for the specified + // segmentation_type_id. + SegmentNum *int64 `locationName:"segmentNum" type:"integer"` + + // Corresponds to SCTE-35 segmentation_event_cancel_indicator. + // + // SegmentationCancelIndicator is a required field + SegmentationCancelIndicator *string `locationName:"segmentationCancelIndicator" type:"string" required:"true" enum:"Scte35SegmentationCancelIndicator"` + + // Corresponds to SCTE-35 segmentation_duration. Optional. The duration for + // the time_signal, in 90 KHz ticks. To convert seconds to ticks, multiple the + // seconds by 90,000. Enter time in 90 KHz clock ticks. If you do not enter + // a duration, the time_signal will continue until you insert a cancellation + // message. + SegmentationDuration *int64 `locationName:"segmentationDuration" type:"long"` + + // Corresponds to SCTE-35 segmentation_event_id. + // + // SegmentationEventId is a required field + SegmentationEventId *int64 `locationName:"segmentationEventId" type:"long" required:"true"` + + // Corresponds to SCTE-35 segmentation_type_id. One of the segmentation_type_id + // values listed in the SCTE-35 specification. On the console, enter the ID + // in decimal (for example, "52"). In the CLI, API, or an SDK, enter the ID + // in hex (for example, "0x34") or decimal (for example, "52"). + SegmentationTypeId *int64 `locationName:"segmentationTypeId" type:"integer"` + + // Corresponds to SCTE-35 segmentation_upid. Enter a string containing the hexadecimal + // representation of the characters that make up the SCTE-35 segmentation_upid + // value. Must contain an even number of hex characters. Do not include spaces + // between each hex pair. For example, the ASCII "ADS Information" becomes hex + // "41445320496e666f726d6174696f6e. + SegmentationUpid *string `locationName:"segmentationUpid" type:"string"` + + // Corresponds to SCTE-35 segmentation_upid_type. On the console, enter one + // of the types listed in the SCTE-35 specification, converted to a decimal. + // For example, "0x0C" hex from the specification is "12" in decimal. In the + // CLI, API, or an SDK, enter one of the types listed in the SCTE-35 specification, + // in either hex (for example, "0x0C" ) or in decimal (for example, "12"). + SegmentationUpidType *int64 `locationName:"segmentationUpidType" type:"integer"` + + // Corresponds to SCTE-35 segments_expected. A value that is valid for the specified + // segmentation_type_id. 
+ SegmentsExpected *int64 `locationName:"segmentsExpected" type:"integer"` + + // Corresponds to SCTE-35 sub_segment_num. A value that is valid for the specified + // segmentation_type_id. + SubSegmentNum *int64 `locationName:"subSegmentNum" type:"integer"` + + // Corresponds to SCTE-35 sub_segments_expected. A value that is valid for the + // specified segmentation_type_id. + SubSegmentsExpected *int64 `locationName:"subSegmentsExpected" type:"integer"` } // String returns the string representation -func (s Scte20SourceSettings) String() string { +func (s Scte35SegmentationDescriptor) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Scte20SourceSettings) GoString() string { +func (s Scte35SegmentationDescriptor) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *Scte20SourceSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Scte20SourceSettings"} - if s.Source608ChannelNumber != nil && *s.Source608ChannelNumber < 1 { - invalidParams.Add(request.NewErrParamMinValue("Source608ChannelNumber", 1)) +func (s *Scte35SegmentationDescriptor) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte35SegmentationDescriptor"} + if s.SegmentationCancelIndicator == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentationCancelIndicator")) + } + if s.SegmentationEventId == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentationEventId")) + } + if s.DeliveryRestrictions != nil { + if err := s.DeliveryRestrictions.Validate(); err != nil { + invalidParams.AddNested("DeliveryRestrictions", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -9472,71 +13187,69 @@ func (s *Scte20SourceSettings) Validate() error { return nil } -// SetConvert608To708 sets the Convert608To708 field's value. -func (s *Scte20SourceSettings) SetConvert608To708(v string) *Scte20SourceSettings { - s.Convert608To708 = &v +// SetDeliveryRestrictions sets the DeliveryRestrictions field's value. +func (s *Scte35SegmentationDescriptor) SetDeliveryRestrictions(v *Scte35DeliveryRestrictions) *Scte35SegmentationDescriptor { + s.DeliveryRestrictions = v return s } -// SetSource608ChannelNumber sets the Source608ChannelNumber field's value. -func (s *Scte20SourceSettings) SetSource608ChannelNumber(v int64) *Scte20SourceSettings { - s.Source608ChannelNumber = &v +// SetSegmentNum sets the SegmentNum field's value. +func (s *Scte35SegmentationDescriptor) SetSegmentNum(v int64) *Scte35SegmentationDescriptor { + s.SegmentNum = &v return s } -type Scte27DestinationSettings struct { - _ struct{} `type:"structure"` +// SetSegmentationCancelIndicator sets the SegmentationCancelIndicator field's value. +func (s *Scte35SegmentationDescriptor) SetSegmentationCancelIndicator(v string) *Scte35SegmentationDescriptor { + s.SegmentationCancelIndicator = &v + return s } -// String returns the string representation -func (s Scte27DestinationSettings) String() string { - return awsutil.Prettify(s) +// SetSegmentationDuration sets the SegmentationDuration field's value. +func (s *Scte35SegmentationDescriptor) SetSegmentationDuration(v int64) *Scte35SegmentationDescriptor { + s.SegmentationDuration = &v + return s } -// GoString returns the string representation -func (s Scte27DestinationSettings) GoString() string { - return s.String() +// SetSegmentationEventId sets the SegmentationEventId field's value. 
+func (s *Scte35SegmentationDescriptor) SetSegmentationEventId(v int64) *Scte35SegmentationDescriptor { + s.SegmentationEventId = &v + return s } -type Scte27SourceSettings struct { - _ struct{} `type:"structure"` - - // The pid field is used in conjunction with the caption selector languageCode - // field as follows: - Specify PID and Language: Extracts captions from that - // PID; the language is "informational". - Specify PID and omit Language: Extracts - // the specified PID. - Omit PID and specify Language: Extracts the specified - // language, whichever PID that happens to be. - Omit PID and omit Language: - // Valid only if source is DVB-Sub that is being passed through; all languages - // will be passed through. - Pid *int64 `locationName:"pid" min:"1" type:"integer"` +// SetSegmentationTypeId sets the SegmentationTypeId field's value. +func (s *Scte35SegmentationDescriptor) SetSegmentationTypeId(v int64) *Scte35SegmentationDescriptor { + s.SegmentationTypeId = &v + return s } -// String returns the string representation -func (s Scte27SourceSettings) String() string { - return awsutil.Prettify(s) +// SetSegmentationUpid sets the SegmentationUpid field's value. +func (s *Scte35SegmentationDescriptor) SetSegmentationUpid(v string) *Scte35SegmentationDescriptor { + s.SegmentationUpid = &v + return s } -// GoString returns the string representation -func (s Scte27SourceSettings) GoString() string { - return s.String() +// SetSegmentationUpidType sets the SegmentationUpidType field's value. +func (s *Scte35SegmentationDescriptor) SetSegmentationUpidType(v int64) *Scte35SegmentationDescriptor { + s.SegmentationUpidType = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *Scte27SourceSettings) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Scte27SourceSettings"} - if s.Pid != nil && *s.Pid < 1 { - invalidParams.Add(request.NewErrParamMinValue("Pid", 1)) - } +// SetSegmentsExpected sets the SegmentsExpected field's value. +func (s *Scte35SegmentationDescriptor) SetSegmentsExpected(v int64) *Scte35SegmentationDescriptor { + s.SegmentsExpected = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetSubSegmentNum sets the SubSegmentNum field's value. +func (s *Scte35SegmentationDescriptor) SetSubSegmentNum(v int64) *Scte35SegmentationDescriptor { + s.SubSegmentNum = &v + return s } -// SetPid sets the Pid field's value. -func (s *Scte27SourceSettings) SetPid(v int64) *Scte27SourceSettings { - s.Pid = &v +// SetSubSegmentsExpected sets the SubSegmentsExpected field's value. +func (s *Scte35SegmentationDescriptor) SetSubSegmentsExpected(v int64) *Scte35SegmentationDescriptor { + s.SubSegmentsExpected = &v return s } @@ -9598,6 +13311,59 @@ func (s *Scte35SpliceInsert) SetWebDeliveryAllowedFlag(v string) *Scte35SpliceIn return s } +// Settings for a SCTE-35 splice_insert message. +type Scte35SpliceInsertScheduleActionSettings struct { + _ struct{} `type:"structure"` + + // Optional, the duration for the splice_insert, in 90 KHz ticks. To convert + // seconds to ticks, multiple the seconds by 90,000. If you enter a duration, + // there is an expectation that the downstream system can read the duration + // and cue in at that time. If you do not enter a duration, the splice_insert + // will continue indefinitely and there is an expectation that you will enter + // a return_to_network to end the splice_insert at the appropriate time. 
+ Duration *int64 `locationName:"duration" type:"long"` + + // The splice_event_id for the SCTE-35 splice_insert, as defined in SCTE-35. + // + // SpliceEventId is a required field + SpliceEventId *int64 `locationName:"spliceEventId" type:"long" required:"true"` +} + +// String returns the string representation +func (s Scte35SpliceInsertScheduleActionSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Scte35SpliceInsertScheduleActionSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Scte35SpliceInsertScheduleActionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte35SpliceInsertScheduleActionSettings"} + if s.SpliceEventId == nil { + invalidParams.Add(request.NewErrParamRequired("SpliceEventId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDuration sets the Duration field's value. +func (s *Scte35SpliceInsertScheduleActionSettings) SetDuration(v int64) *Scte35SpliceInsertScheduleActionSettings { + s.Duration = &v + return s +} + +// SetSpliceEventId sets the SpliceEventId field's value. +func (s *Scte35SpliceInsertScheduleActionSettings) SetSpliceEventId(v int64) *Scte35SpliceInsertScheduleActionSettings { + s.SpliceEventId = &v + return s +} + type Scte35TimeSignalApos struct { _ struct{} `type:"structure"` @@ -9656,6 +13422,55 @@ func (s *Scte35TimeSignalApos) SetWebDeliveryAllowedFlag(v string) *Scte35TimeSi return s } +// Settings for a SCTE-35 time_signal. +type Scte35TimeSignalScheduleActionSettings struct { + _ struct{} `type:"structure"` + + // The list of SCTE-35 descriptors accompanying the SCTE-35 time_signal. + // + // Scte35Descriptors is a required field + Scte35Descriptors []*Scte35Descriptor `locationName:"scte35Descriptors" type:"list" required:"true"` +} + +// String returns the string representation +func (s Scte35TimeSignalScheduleActionSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Scte35TimeSignalScheduleActionSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Scte35TimeSignalScheduleActionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Scte35TimeSignalScheduleActionSettings"} + if s.Scte35Descriptors == nil { + invalidParams.Add(request.NewErrParamRequired("Scte35Descriptors")) + } + if s.Scte35Descriptors != nil { + for i, v := range s.Scte35Descriptors { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Scte35Descriptors", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetScte35Descriptors sets the Scte35Descriptors field's value. +func (s *Scte35TimeSignalScheduleActionSettings) SetScte35Descriptors(v []*Scte35Descriptor) *Scte35TimeSignalScheduleActionSettings { + s.Scte35Descriptors = v + return s +} + type SmpteTtDestinationSettings struct { _ struct{} `type:"structure"` } @@ -9771,6 +13586,9 @@ type StartChannelOutput struct { InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + // The log level the user wants for their channel. 
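As a rough sketch of how the SCTE-35 schedule action shapes added above nest (time_signal settings → descriptor → descriptor settings → segmentation descriptor), the snippet below builds a time_signal action from a single segmentation descriptor using only the generated setters, enum constants, and Validate helpers from this patch; the canonical aws-sdk-go import path, the event ID, and the surrounding main function are illustrative assumptions, not part of the vendored code.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	// Build one segmentation_descriptor: an insertion (not a cancelation)
	// for segmentation_type_id 0x34 (52 decimal), lasting 30 seconds.
	segDesc := (&medialive.Scte35SegmentationDescriptor{}).
		SetSegmentationEventId(1234567).
		SetSegmentationCancelIndicator(medialive.Scte35SegmentationCancelIndicatorSegmentationEventNotCanceled).
		SetSegmentationTypeId(52).
		SetSegmentationDuration(30 * 90000) // duration is in 90 kHz ticks: seconds * 90,000

	// Wrap the descriptor in the descriptor-settings envelope and attach it
	// to the time_signal schedule action settings.
	timeSignal := (&medialive.Scte35TimeSignalScheduleActionSettings{}).
		SetScte35Descriptors([]*medialive.Scte35Descriptor{
			(&medialive.Scte35Descriptor{}).SetScte35DescriptorSettings(
				(&medialive.Scte35DescriptorSettings{}).
					SetSegmentationDescriptorScte35DescriptorSettings(segDesc)),
		})

	if err := timeSignal.Validate(); err != nil {
		fmt.Println("invalid time_signal settings:", err)
		return
	}
	fmt.Println(timeSignal)
}
```

The splice_insert and return_to_network action settings defined earlier follow the same pattern, keyed by SetSpliceEventId (with an optional SetDuration, also in 90 kHz ticks, for splice_insert).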
+ LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + Name *string `locationName:"name" type:"string"` PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` @@ -9832,6 +13650,12 @@ func (s *StartChannelOutput) SetInputSpecification(v *InputSpecification) *Start return s } +// SetLogLevel sets the LogLevel field's value. +func (s *StartChannelOutput) SetLogLevel(v string) *StartChannelOutput { + s.LogLevel = &v + return s +} + // SetName sets the Name field's value. func (s *StartChannelOutput) SetName(v string) *StartChannelOutput { s.Name = &v @@ -9856,6 +13680,191 @@ func (s *StartChannelOutput) SetState(v string) *StartChannelOutput { return s } +// Settings for the action to activate a static image. +type StaticImageActivateScheduleActionSettings struct { + _ struct{} `type:"structure"` + + // The duration in milliseconds for the image to remain on the video. If omitted + // or set to 0 the duration is unlimited and the image will remain until it + // is explicitly deactivated. + Duration *int64 `locationName:"duration" type:"integer"` + + // The time in milliseconds for the image to fade in. The fade-in starts at + // the start time of the overlay. Default is 0 (no fade-in). + FadeIn *int64 `locationName:"fadeIn" type:"integer"` + + // Applies only if a duration is specified. The time in milliseconds for the + // image to fade out. The fade-out starts when the duration time is hit, so + // it effectively extends the duration. Default is 0 (no fade-out). + FadeOut *int64 `locationName:"fadeOut" type:"integer"` + + // The height of the image when inserted into the video, in pixels. The overlay + // will be scaled up or down to the specified height. Leave blank to use the + // native height of the overlay. + Height *int64 `locationName:"height" min:"1" type:"integer"` + + // The location and filename of the image file to overlay on the video. The + // file must be a 32-bit BMP, PNG, or TGA file, and must not be larger (in pixels) + // than the input video. + // + // Image is a required field + Image *InputLocation `locationName:"image" type:"structure" required:"true"` + + // Placement of the left edge of the overlay relative to the left edge of the + // video frame, in pixels. 0 (the default) is the left edge of the frame. If + // the placement causes the overlay to extend beyond the right edge of the underlying + // video, then the overlay is cropped on the right. + ImageX *int64 `locationName:"imageX" type:"integer"` + + // Placement of the top edge of the overlay relative to the top edge of the + // video frame, in pixels. 0 (the default) is the top edge of the frame. If + // the placement causes the overlay to extend beyond the bottom edge of the + // underlying video, then the overlay is cropped on the bottom. + ImageY *int64 `locationName:"imageY" type:"integer"` + + // The number of the layer, 0 to 7. There are 8 layers that can be overlaid + // on the video, each layer with a different image. The layers are in Z order, + // which means that overlays with higher values of layer are inserted on top + // of overlays with lower values of layer. Default is 0. + Layer *int64 `locationName:"layer" type:"integer"` + + // Opacity of image where 0 is transparent and 100 is fully opaque. Default + // is 100. + Opacity *int64 `locationName:"opacity" type:"integer"` + + // The width of the image when inserted into the video, in pixels. The overlay + // will be scaled up or down to the specified width. 
Leave blank to use the + // native width of the overlay. + Width *int64 `locationName:"width" min:"1" type:"integer"` +} + +// String returns the string representation +func (s StaticImageActivateScheduleActionSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StaticImageActivateScheduleActionSettings) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StaticImageActivateScheduleActionSettings) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StaticImageActivateScheduleActionSettings"} + if s.Height != nil && *s.Height < 1 { + invalidParams.Add(request.NewErrParamMinValue("Height", 1)) + } + if s.Image == nil { + invalidParams.Add(request.NewErrParamRequired("Image")) + } + if s.Width != nil && *s.Width < 1 { + invalidParams.Add(request.NewErrParamMinValue("Width", 1)) + } + if s.Image != nil { + if err := s.Image.Validate(); err != nil { + invalidParams.AddNested("Image", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDuration sets the Duration field's value. +func (s *StaticImageActivateScheduleActionSettings) SetDuration(v int64) *StaticImageActivateScheduleActionSettings { + s.Duration = &v + return s +} + +// SetFadeIn sets the FadeIn field's value. +func (s *StaticImageActivateScheduleActionSettings) SetFadeIn(v int64) *StaticImageActivateScheduleActionSettings { + s.FadeIn = &v + return s +} + +// SetFadeOut sets the FadeOut field's value. +func (s *StaticImageActivateScheduleActionSettings) SetFadeOut(v int64) *StaticImageActivateScheduleActionSettings { + s.FadeOut = &v + return s +} + +// SetHeight sets the Height field's value. +func (s *StaticImageActivateScheduleActionSettings) SetHeight(v int64) *StaticImageActivateScheduleActionSettings { + s.Height = &v + return s +} + +// SetImage sets the Image field's value. +func (s *StaticImageActivateScheduleActionSettings) SetImage(v *InputLocation) *StaticImageActivateScheduleActionSettings { + s.Image = v + return s +} + +// SetImageX sets the ImageX field's value. +func (s *StaticImageActivateScheduleActionSettings) SetImageX(v int64) *StaticImageActivateScheduleActionSettings { + s.ImageX = &v + return s +} + +// SetImageY sets the ImageY field's value. +func (s *StaticImageActivateScheduleActionSettings) SetImageY(v int64) *StaticImageActivateScheduleActionSettings { + s.ImageY = &v + return s +} + +// SetLayer sets the Layer field's value. +func (s *StaticImageActivateScheduleActionSettings) SetLayer(v int64) *StaticImageActivateScheduleActionSettings { + s.Layer = &v + return s +} + +// SetOpacity sets the Opacity field's value. +func (s *StaticImageActivateScheduleActionSettings) SetOpacity(v int64) *StaticImageActivateScheduleActionSettings { + s.Opacity = &v + return s +} + +// SetWidth sets the Width field's value. +func (s *StaticImageActivateScheduleActionSettings) SetWidth(v int64) *StaticImageActivateScheduleActionSettings { + s.Width = &v + return s +} + +// Settings for the action to deactivate the image in a specific layer. +type StaticImageDeactivateScheduleActionSettings struct { + _ struct{} `type:"structure"` + + // The time in milliseconds for the image to fade out. Default is 0 (no fade-out). + FadeOut *int64 `locationName:"fadeOut" type:"integer"` + + // The image overlay layer to deactivate, 0 to 7. Default is 0. 
+ Layer *int64 `locationName:"layer" type:"integer"` +} + +// String returns the string representation +func (s StaticImageDeactivateScheduleActionSettings) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StaticImageDeactivateScheduleActionSettings) GoString() string { + return s.String() +} + +// SetFadeOut sets the FadeOut field's value. +func (s *StaticImageDeactivateScheduleActionSettings) SetFadeOut(v int64) *StaticImageDeactivateScheduleActionSettings { + s.FadeOut = &v + return s +} + +// SetLayer sets the Layer field's value. +func (s *StaticImageDeactivateScheduleActionSettings) SetLayer(v int64) *StaticImageDeactivateScheduleActionSettings { + s.Layer = &v + return s +} + type StaticKeySettings struct { _ struct{} `type:"structure"` @@ -9964,6 +13973,9 @@ type StopChannelOutput struct { InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + // The log level the user wants for their channel. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + Name *string `locationName:"name" type:"string"` PipelinesRunningCount *int64 `locationName:"pipelinesRunningCount" type:"integer"` @@ -10025,6 +14037,12 @@ func (s *StopChannelOutput) SetInputSpecification(v *InputSpecification) *StopCh return s } +// SetLogLevel sets the LogLevel field's value. +func (s *StopChannelOutput) SetLogLevel(v string) *StopChannelOutput { + s.LogLevel = &v + return s +} + // SetName sets the Name field's value. func (s *StopChannelOutput) SetName(v string) *StopChannelOutput { s.Name = &v @@ -10347,8 +14365,13 @@ type UpdateChannelInput struct { EncoderSettings *EncoderSettings `locationName:"encoderSettings" type:"structure"` + InputAttachments []*InputAttachment `locationName:"inputAttachments" type:"list"` + InputSpecification *InputSpecification `locationName:"inputSpecification" type:"structure"` + // The log level the user wants for their channel. + LogLevel *string `locationName:"logLevel" type:"string" enum:"LogLevel"` + Name *string `locationName:"name" type:"string"` RoleArn *string `locationName:"roleArn" type:"string"` @@ -10375,6 +14398,16 @@ func (s *UpdateChannelInput) Validate() error { invalidParams.AddNested("EncoderSettings", err.(request.ErrInvalidParams)) } } + if s.InputAttachments != nil { + for i, v := range s.InputAttachments { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InputAttachments", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -10400,12 +14433,24 @@ func (s *UpdateChannelInput) SetEncoderSettings(v *EncoderSettings) *UpdateChann return s } +// SetInputAttachments sets the InputAttachments field's value. +func (s *UpdateChannelInput) SetInputAttachments(v []*InputAttachment) *UpdateChannelInput { + s.InputAttachments = v + return s +} + // SetInputSpecification sets the InputSpecification field's value. func (s *UpdateChannelInput) SetInputSpecification(v *InputSpecification) *UpdateChannelInput { s.InputSpecification = v return s } +// SetLogLevel sets the LogLevel field's value. +func (s *UpdateChannelInput) SetLogLevel(v string) *UpdateChannelInput { + s.LogLevel = &v + return s +} + // SetName sets the Name field's value. 
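The static image activate and deactivate settings pair up by layer number. The sketch below is an assumed usage example built from the setters and Validate helper in this patch; InputLocation and its SetUri setter are defined elsewhere in the package, and the bucket URL and timing values are placeholders.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	// Overlay an image on layer 1 for 30 seconds with half-second fades.
	// InputLocation (and its SetUri setter) lives elsewhere in this package;
	// the S3 URL is a placeholder.
	activate := (&medialive.StaticImageActivateScheduleActionSettings{}).
		SetImage((&medialive.InputLocation{}).SetUri("s3://my-bucket/logo.png")).
		SetLayer(1).
		SetOpacity(80).
		SetFadeIn(500).
		SetFadeOut(500).
		SetDuration(30000) // milliseconds; 0 or unset means the image stays until deactivated

	if err := activate.Validate(); err != nil {
		fmt.Println("invalid static image activate settings:", err)
		return
	}

	// The matching deactivate action only needs the layer and an optional fade-out.
	deactivate := (&medialive.StaticImageDeactivateScheduleActionSettings{}).
		SetLayer(1).
		SetFadeOut(500)

	fmt.Println(activate, deactivate)
}
```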
func (s *UpdateChannelInput) SetName(v string) *UpdateChannelInput { s.Name = &v @@ -10440,6 +14485,163 @@ func (s *UpdateChannelOutput) SetChannel(v *Channel) *UpdateChannelOutput { return s } +type UpdateInputInput struct { + _ struct{} `type:"structure"` + + Destinations []*InputDestinationRequest `locationName:"destinations" type:"list"` + + // InputId is a required field + InputId *string `location:"uri" locationName:"inputId" type:"string" required:"true"` + + InputSecurityGroups []*string `locationName:"inputSecurityGroups" type:"list"` + + Name *string `locationName:"name" type:"string"` + + Sources []*InputSourceRequest `locationName:"sources" type:"list"` +} + +// String returns the string representation +func (s UpdateInputInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateInputInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateInputInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateInputInput"} + if s.InputId == nil { + invalidParams.Add(request.NewErrParamRequired("InputId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestinations sets the Destinations field's value. +func (s *UpdateInputInput) SetDestinations(v []*InputDestinationRequest) *UpdateInputInput { + s.Destinations = v + return s +} + +// SetInputId sets the InputId field's value. +func (s *UpdateInputInput) SetInputId(v string) *UpdateInputInput { + s.InputId = &v + return s +} + +// SetInputSecurityGroups sets the InputSecurityGroups field's value. +func (s *UpdateInputInput) SetInputSecurityGroups(v []*string) *UpdateInputInput { + s.InputSecurityGroups = v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateInputInput) SetName(v string) *UpdateInputInput { + s.Name = &v + return s +} + +// SetSources sets the Sources field's value. +func (s *UpdateInputInput) SetSources(v []*InputSourceRequest) *UpdateInputInput { + s.Sources = v + return s +} + +type UpdateInputOutput struct { + _ struct{} `type:"structure"` + + Input *Input `locationName:"input" type:"structure"` +} + +// String returns the string representation +func (s UpdateInputOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateInputOutput) GoString() string { + return s.String() +} + +// SetInput sets the Input field's value. +func (s *UpdateInputOutput) SetInput(v *Input) *UpdateInputOutput { + s.Input = v + return s +} + +type UpdateInputSecurityGroupInput struct { + _ struct{} `type:"structure"` + + // InputSecurityGroupId is a required field + InputSecurityGroupId *string `location:"uri" locationName:"inputSecurityGroupId" type:"string" required:"true"` + + WhitelistRules []*InputWhitelistRuleCidr `locationName:"whitelistRules" type:"list"` +} + +// String returns the string representation +func (s UpdateInputSecurityGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateInputSecurityGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateInputSecurityGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateInputSecurityGroupInput"} + if s.InputSecurityGroupId == nil { + invalidParams.Add(request.NewErrParamRequired("InputSecurityGroupId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInputSecurityGroupId sets the InputSecurityGroupId field's value. +func (s *UpdateInputSecurityGroupInput) SetInputSecurityGroupId(v string) *UpdateInputSecurityGroupInput { + s.InputSecurityGroupId = &v + return s +} + +// SetWhitelistRules sets the WhitelistRules field's value. +func (s *UpdateInputSecurityGroupInput) SetWhitelistRules(v []*InputWhitelistRuleCidr) *UpdateInputSecurityGroupInput { + s.WhitelistRules = v + return s +} + +type UpdateInputSecurityGroupOutput struct { + _ struct{} `type:"structure"` + + // An Input Security Group + SecurityGroup *InputSecurityGroup `locationName:"securityGroup" type:"structure"` +} + +// String returns the string representation +func (s UpdateInputSecurityGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateInputSecurityGroupOutput) GoString() string { + return s.String() +} + +// SetSecurityGroup sets the SecurityGroup field's value. +func (s *UpdateInputSecurityGroupOutput) SetSecurityGroup(v *InputSecurityGroup) *UpdateInputSecurityGroupOutput { + s.SecurityGroup = v + return s +} + type ValidationError struct { _ struct{} `type:"structure"` @@ -10973,6 +15175,14 @@ const ( AudioTypeVisualImpairedCommentary = "VISUAL_IMPAIRED_COMMENTARY" ) +const ( + // AuthenticationSchemeAkamai is a AuthenticationScheme enum value + AuthenticationSchemeAkamai = "AKAMAI" + + // AuthenticationSchemeCommon is a AuthenticationScheme enum value + AuthenticationSchemeCommon = "COMMON" +) + const ( // AvailBlankingStateDisabled is a AvailBlankingState enum value AvailBlankingStateDisabled = "DISABLED" @@ -11421,6 +15631,15 @@ const ( FixedAfdAfd1111 = "AFD_1111" ) +// Follow reference point. 
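The new UpdateInput and UpdateInputSecurityGroup request shapes only hard-require their respective IDs. The sketch below, with placeholder IDs and a bare main function, simply constructs and validates the two inputs; sending them through the generated MediaLive client is left out because the operation wrappers are not part of this excerpt.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/medialive"
)

func main() {
	// Rename an existing input; InputId is the only field Validate requires.
	// The ID values are placeholders.
	updateInput := (&medialive.UpdateInputInput{}).
		SetInputId("1234567").
		SetName("studio-feed-renamed")

	if err := updateInput.Validate(); err != nil {
		fmt.Println("invalid UpdateInput request:", err)
		return
	}

	// The security group update shape works the same way: only its ID is
	// required, with whitelist rules supplied separately when they change.
	updateSG := (&medialive.UpdateInputSecurityGroupInput{}).
		SetInputSecurityGroupId("7654321")

	if err := updateSG.Validate(); err != nil {
		fmt.Println("invalid UpdateInputSecurityGroup request:", err)
		return
	}

	fmt.Println(updateInput, updateSG)
}
```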
+const ( + // FollowPointEnd is a FollowPoint enum value + FollowPointEnd = "END" + + // FollowPointStart is a FollowPoint enum value + FollowPointStart = "START" +) + const ( // GlobalConfigurationInputEndActionNone is a GlobalConfigurationInputEndAction enum value GlobalConfigurationInputEndActionNone = "NONE" @@ -11609,6 +15828,9 @@ const ( // H264RateControlModeCbr is a H264RateControlMode enum value H264RateControlModeCbr = "CBR" + // H264RateControlModeQvbr is a H264RateControlMode enum value + H264RateControlModeQvbr = "QVBR" + // H264RateControlModeVbr is a H264RateControlMode enum value H264RateControlModeVbr = "VBR" ) @@ -11927,6 +16149,20 @@ const ( InputResolutionUhd = "UHD" ) +const ( + // InputSecurityGroupStateIdle is a InputSecurityGroupState enum value + InputSecurityGroupStateIdle = "IDLE" + + // InputSecurityGroupStateInUse is a InputSecurityGroupState enum value + InputSecurityGroupStateInUse = "IN_USE" + + // InputSecurityGroupStateUpdating is a InputSecurityGroupState enum value + InputSecurityGroupStateUpdating = "UPDATING" + + // InputSecurityGroupStateDeleted is a InputSecurityGroupState enum value + InputSecurityGroupStateDeleted = "DELETED" +) + const ( // InputSourceEndBehaviorContinue is a InputSourceEndBehavior enum value InputSourceEndBehaviorContinue = "CONTINUE" @@ -11967,6 +16203,27 @@ const ( // InputTypeUrlPull is a InputType enum value InputTypeUrlPull = "URL_PULL" + + // InputTypeMp4File is a InputType enum value + InputTypeMp4File = "MP4_FILE" +) + +// The log level the user wants for their channel. +const ( + // LogLevelError is a LogLevel enum value + LogLevelError = "ERROR" + + // LogLevelWarning is a LogLevel enum value + LogLevelWarning = "WARNING" + + // LogLevelInfo is a LogLevel enum value + LogLevelInfo = "INFO" + + // LogLevelDebug is a LogLevel enum value + LogLevelDebug = "DEBUG" + + // LogLevelDisabled is a LogLevel enum value + LogLevelDisabled = "DISABLED" ) const ( @@ -12165,6 +16422,142 @@ const ( NetworkInputServerValidationCheckCryptographyOnly = "CHECK_CRYPTOGRAPHY_ONLY" ) +// Units for duration, e.g. 'MONTHS' +const ( + // OfferingDurationUnitsMonths is a OfferingDurationUnits enum value + OfferingDurationUnitsMonths = "MONTHS" +) + +// Offering type, e.g. 
'NO_UPFRONT' +const ( + // OfferingTypeNoUpfront is a OfferingType enum value + OfferingTypeNoUpfront = "NO_UPFRONT" +) + +// Codec, 'MPEG2', 'AVC', 'HEVC', or 'AUDIO' +const ( + // ReservationCodecMpeg2 is a ReservationCodec enum value + ReservationCodecMpeg2 = "MPEG2" + + // ReservationCodecAvc is a ReservationCodec enum value + ReservationCodecAvc = "AVC" + + // ReservationCodecHevc is a ReservationCodec enum value + ReservationCodecHevc = "HEVC" + + // ReservationCodecAudio is a ReservationCodec enum value + ReservationCodecAudio = "AUDIO" +) + +// Maximum bitrate in megabits per second +const ( + // ReservationMaximumBitrateMax10Mbps is a ReservationMaximumBitrate enum value + ReservationMaximumBitrateMax10Mbps = "MAX_10_MBPS" + + // ReservationMaximumBitrateMax20Mbps is a ReservationMaximumBitrate enum value + ReservationMaximumBitrateMax20Mbps = "MAX_20_MBPS" + + // ReservationMaximumBitrateMax50Mbps is a ReservationMaximumBitrate enum value + ReservationMaximumBitrateMax50Mbps = "MAX_50_MBPS" +) + +// Maximum framerate in frames per second (Outputs only) +const ( + // ReservationMaximumFramerateMax30Fps is a ReservationMaximumFramerate enum value + ReservationMaximumFramerateMax30Fps = "MAX_30_FPS" + + // ReservationMaximumFramerateMax60Fps is a ReservationMaximumFramerate enum value + ReservationMaximumFramerateMax60Fps = "MAX_60_FPS" +) + +// Resolution based on lines of vertical resolution; SD is less than 720 lines, +// HD is 720 to 1080 lines, UHD is greater than 1080 lines +const ( + // ReservationResolutionSd is a ReservationResolution enum value + ReservationResolutionSd = "SD" + + // ReservationResolutionHd is a ReservationResolution enum value + ReservationResolutionHd = "HD" + + // ReservationResolutionUhd is a ReservationResolution enum value + ReservationResolutionUhd = "UHD" +) + +// Resource type, 'INPUT', 'OUTPUT', or 'CHANNEL' +const ( + // ReservationResourceTypeInput is a ReservationResourceType enum value + ReservationResourceTypeInput = "INPUT" + + // ReservationResourceTypeOutput is a ReservationResourceType enum value + ReservationResourceTypeOutput = "OUTPUT" + + // ReservationResourceTypeChannel is a ReservationResourceType enum value + ReservationResourceTypeChannel = "CHANNEL" +) + +// Special features, 'ADVANCED_AUDIO' or 'AUDIO_NORMALIZATION' +const ( + // ReservationSpecialFeatureAdvancedAudio is a ReservationSpecialFeature enum value + ReservationSpecialFeatureAdvancedAudio = "ADVANCED_AUDIO" + + // ReservationSpecialFeatureAudioNormalization is a ReservationSpecialFeature enum value + ReservationSpecialFeatureAudioNormalization = "AUDIO_NORMALIZATION" +) + +// Current reservation state +const ( + // ReservationStateActive is a ReservationState enum value + ReservationStateActive = "ACTIVE" + + // ReservationStateExpired is a ReservationState enum value + ReservationStateExpired = "EXPIRED" + + // ReservationStateCanceled is a ReservationState enum value + ReservationStateCanceled = "CANCELED" + + // ReservationStateDeleted is a ReservationState enum value + ReservationStateDeleted = "DELETED" +) + +// Video quality, e.g. 
'STANDARD' (Outputs only) +const ( + // ReservationVideoQualityStandard is a ReservationVideoQuality enum value + ReservationVideoQualityStandard = "STANDARD" + + // ReservationVideoQualityEnhanced is a ReservationVideoQuality enum value + ReservationVideoQualityEnhanced = "ENHANCED" + + // ReservationVideoQualityPremium is a ReservationVideoQuality enum value + ReservationVideoQualityPremium = "PREMIUM" +) + +const ( + // RtmpCacheFullBehaviorDisconnectImmediately is a RtmpCacheFullBehavior enum value + RtmpCacheFullBehaviorDisconnectImmediately = "DISCONNECT_IMMEDIATELY" + + // RtmpCacheFullBehaviorWaitForServer is a RtmpCacheFullBehavior enum value + RtmpCacheFullBehaviorWaitForServer = "WAIT_FOR_SERVER" +) + +const ( + // RtmpCaptionDataAll is a RtmpCaptionData enum value + RtmpCaptionDataAll = "ALL" + + // RtmpCaptionDataField1608 is a RtmpCaptionData enum value + RtmpCaptionDataField1608 = "FIELD1_608" + + // RtmpCaptionDataField1AndField2608 is a RtmpCaptionData enum value + RtmpCaptionDataField1AndField2608 = "FIELD1_AND_FIELD2_608" +) + +const ( + // RtmpOutputCertificateModeSelfSigned is a RtmpOutputCertificateMode enum value + RtmpOutputCertificateModeSelfSigned = "SELF_SIGNED" + + // RtmpOutputCertificateModeVerifyAuthenticity is a RtmpOutputCertificateMode enum value + RtmpOutputCertificateModeVerifyAuthenticity = "VERIFY_AUTHENTICITY" +) + const ( // Scte20Convert608To708Disabled is a Scte20Convert608To708 enum value Scte20Convert608To708Disabled = "DISABLED" @@ -12189,6 +16582,58 @@ const ( Scte35AposWebDeliveryAllowedBehaviorIgnore = "IGNORE" ) +// Corresponds to the archive_allowed parameter. A value of ARCHIVE_NOT_ALLOWED +// corresponds to 0 (false) in the SCTE-35 specification. If you include one +// of the "restriction" flags then you must include all four of them. +const ( + // Scte35ArchiveAllowedFlagArchiveNotAllowed is a Scte35ArchiveAllowedFlag enum value + Scte35ArchiveAllowedFlagArchiveNotAllowed = "ARCHIVE_NOT_ALLOWED" + + // Scte35ArchiveAllowedFlagArchiveAllowed is a Scte35ArchiveAllowedFlag enum value + Scte35ArchiveAllowedFlagArchiveAllowed = "ARCHIVE_ALLOWED" +) + +// Corresponds to the device_restrictions parameter in a segmentation_descriptor. +// If you include one of the "restriction" flags then you must include all four +// of them. +const ( + // Scte35DeviceRestrictionsNone is a Scte35DeviceRestrictions enum value + Scte35DeviceRestrictionsNone = "NONE" + + // Scte35DeviceRestrictionsRestrictGroup0 is a Scte35DeviceRestrictions enum value + Scte35DeviceRestrictionsRestrictGroup0 = "RESTRICT_GROUP0" + + // Scte35DeviceRestrictionsRestrictGroup1 is a Scte35DeviceRestrictions enum value + Scte35DeviceRestrictionsRestrictGroup1 = "RESTRICT_GROUP1" + + // Scte35DeviceRestrictionsRestrictGroup2 is a Scte35DeviceRestrictions enum value + Scte35DeviceRestrictionsRestrictGroup2 = "RESTRICT_GROUP2" +) + +// Corresponds to the no_regional_blackout_flag parameter. A value of REGIONAL_BLACKOUT +// corresponds to 0 (false) in the SCTE-35 specification. If you include one +// of the "restriction" flags then you must include all four of them. +const ( + // Scte35NoRegionalBlackoutFlagRegionalBlackout is a Scte35NoRegionalBlackoutFlag enum value + Scte35NoRegionalBlackoutFlagRegionalBlackout = "REGIONAL_BLACKOUT" + + // Scte35NoRegionalBlackoutFlagNoRegionalBlackout is a Scte35NoRegionalBlackoutFlag enum value + Scte35NoRegionalBlackoutFlagNoRegionalBlackout = "NO_REGIONAL_BLACKOUT" +) + +// Corresponds to SCTE-35 segmentation_event_cancel_indicator. 
SEGMENTATION_EVENT_NOT_CANCELED +// corresponds to 0 in the SCTE-35 specification and indicates that this is +// an insertion request. SEGMENTATION_EVENT_CANCELED corresponds to 1 in the +// SCTE-35 specification and indicates that this is a cancelation request, in +// which case complete this field and the existing event ID to cancel. +const ( + // Scte35SegmentationCancelIndicatorSegmentationEventNotCanceled is a Scte35SegmentationCancelIndicator enum value + Scte35SegmentationCancelIndicatorSegmentationEventNotCanceled = "SEGMENTATION_EVENT_NOT_CANCELED" + + // Scte35SegmentationCancelIndicatorSegmentationEventCanceled is a Scte35SegmentationCancelIndicator enum value + Scte35SegmentationCancelIndicatorSegmentationEventCanceled = "SEGMENTATION_EVENT_CANCELED" +) + const ( // Scte35SpliceInsertNoRegionalBlackoutBehaviorFollow is a Scte35SpliceInsertNoRegionalBlackoutBehavior enum value Scte35SpliceInsertNoRegionalBlackoutBehaviorFollow = "FOLLOW" @@ -12205,6 +16650,17 @@ const ( Scte35SpliceInsertWebDeliveryAllowedBehaviorIgnore = "IGNORE" ) +// Corresponds to the web_delivery_allowed_flag parameter. A value of WEB_DELIVERY_NOT_ALLOWED +// corresponds to 0 (false) in the SCTE-35 specification. If you include one +// of the "restriction" flags then you must include all four of them. +const ( + // Scte35WebDeliveryAllowedFlagWebDeliveryNotAllowed is a Scte35WebDeliveryAllowedFlag enum value + Scte35WebDeliveryAllowedFlagWebDeliveryNotAllowed = "WEB_DELIVERY_NOT_ALLOWED" + + // Scte35WebDeliveryAllowedFlagWebDeliveryAllowed is a Scte35WebDeliveryAllowedFlag enum value + Scte35WebDeliveryAllowedFlagWebDeliveryAllowed = "WEB_DELIVERY_ALLOWED" +) + const ( // SmoothGroupAudioOnlyTimecodeControlPassthrough is a SmoothGroupAudioOnlyTimecodeControl enum value SmoothGroupAudioOnlyTimecodeControlPassthrough = "PASSTHROUGH" diff --git a/vendor/github.com/aws/aws-sdk-go/service/medialive/service.go b/vendor/github.com/aws/aws-sdk-go/service/medialive/service.go index 96e7506c17f..582d570f177 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/medialive/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/medialive/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "medialive" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "medialive" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "MediaLive" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the MediaLive client with a session. @@ -45,19 +46,20 @@ const ( // svc := medialive.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *MediaLive { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "medialive" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. 
func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *MediaLive { - if len(signingName) == 0 { - signingName = "medialive" - } svc := &MediaLive{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/mediapackage/api.go b/vendor/github.com/aws/aws-sdk-go/service/mediapackage/api.go index 074dee83694..c36d0f0a7ba 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/mediapackage/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/mediapackage/api.go @@ -3,6 +3,8 @@ package mediapackage import ( + "fmt" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awsutil" "github.com/aws/aws-sdk-go/aws/request" @@ -12,8 +14,8 @@ const opCreateChannel = "CreateChannel" // CreateChannelRequest generates a "aws/request.Request" representing the // client's request for the CreateChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -100,8 +102,8 @@ const opCreateOriginEndpoint = "CreateOriginEndpoint" // CreateOriginEndpointRequest generates a "aws/request.Request" representing the // client's request for the CreateOriginEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -188,8 +190,8 @@ const opDeleteChannel = "DeleteChannel" // DeleteChannelRequest generates a "aws/request.Request" representing the // client's request for the DeleteChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -276,8 +278,8 @@ const opDeleteOriginEndpoint = "DeleteOriginEndpoint" // DeleteOriginEndpointRequest generates a "aws/request.Request" representing the // client's request for the DeleteOriginEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -364,8 +366,8 @@ const opDescribeChannel = "DescribeChannel" // DescribeChannelRequest generates a "aws/request.Request" representing the // client's request for the DescribeChannel operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -452,8 +454,8 @@ const opDescribeOriginEndpoint = "DescribeOriginEndpoint" // DescribeOriginEndpointRequest generates a "aws/request.Request" representing the // client's request for the DescribeOriginEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -540,8 +542,8 @@ const opListChannels = "ListChannels" // ListChannelsRequest generates a "aws/request.Request" representing the // client's request for the ListChannels operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -684,8 +686,8 @@ const opListOriginEndpoints = "ListOriginEndpoints" // ListOriginEndpointsRequest generates a "aws/request.Request" representing the // client's request for the ListOriginEndpoints operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -828,8 +830,8 @@ const opRotateChannelCredentials = "RotateChannelCredentials" // RotateChannelCredentialsRequest generates a "aws/request.Request" representing the // client's request for the RotateChannelCredentials operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -850,7 +852,12 @@ const opRotateChannelCredentials = "RotateChannelCredentials" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/mediapackage-2017-10-12/RotateChannelCredentials +// +// Deprecated: This API is deprecated. 
Please use RotateIngestEndpointCredentials instead func (c *MediaPackage) RotateChannelCredentialsRequest(input *RotateChannelCredentialsInput) (req *request.Request, output *RotateChannelCredentialsOutput) { + if c.Client.Config.Logger != nil { + c.Client.Config.Logger.Log("This operation, RotateChannelCredentials, has been deprecated") + } op := &request.Operation{ Name: opRotateChannelCredentials, HTTPMethod: "PUT", @@ -868,7 +875,8 @@ func (c *MediaPackage) RotateChannelCredentialsRequest(input *RotateChannelCrede // RotateChannelCredentials API operation for AWS Elemental MediaPackage. // -// Changes the Channel ingest username and password. +// Changes the Channel's first IngestEndpoint's username and password. WARNING +// - This API is deprecated. Please use RotateIngestEndpointCredentials instead // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -891,6 +899,8 @@ func (c *MediaPackage) RotateChannelCredentialsRequest(input *RotateChannelCrede // * ErrCodeTooManyRequestsException "TooManyRequestsException" // // See also, https://docs.aws.amazon.com/goto/WebAPI/mediapackage-2017-10-12/RotateChannelCredentials +// +// Deprecated: This API is deprecated. Please use RotateIngestEndpointCredentials instead func (c *MediaPackage) RotateChannelCredentials(input *RotateChannelCredentialsInput) (*RotateChannelCredentialsOutput, error) { req, out := c.RotateChannelCredentialsRequest(input) return out, req.Send() @@ -905,6 +915,8 @@ func (c *MediaPackage) RotateChannelCredentials(input *RotateChannelCredentialsI // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: This API is deprecated. Please use RotateIngestEndpointCredentials instead func (c *MediaPackage) RotateChannelCredentialsWithContext(ctx aws.Context, input *RotateChannelCredentialsInput, opts ...request.Option) (*RotateChannelCredentialsOutput, error) { req, out := c.RotateChannelCredentialsRequest(input) req.SetContext(ctx) @@ -912,12 +924,101 @@ func (c *MediaPackage) RotateChannelCredentialsWithContext(ctx aws.Context, inpu return out, req.Send() } +const opRotateIngestEndpointCredentials = "RotateIngestEndpointCredentials" + +// RotateIngestEndpointCredentialsRequest generates a "aws/request.Request" representing the +// client's request for the RotateIngestEndpointCredentials operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RotateIngestEndpointCredentials for more information on using the RotateIngestEndpointCredentials +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RotateIngestEndpointCredentialsRequest method. 
+// req, resp := client.RotateIngestEndpointCredentialsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediapackage-2017-10-12/RotateIngestEndpointCredentials +func (c *MediaPackage) RotateIngestEndpointCredentialsRequest(input *RotateIngestEndpointCredentialsInput) (req *request.Request, output *RotateIngestEndpointCredentialsOutput) { + op := &request.Operation{ + Name: opRotateIngestEndpointCredentials, + HTTPMethod: "PUT", + HTTPPath: "/channels/{id}/ingest_endpoints/{ingest_endpoint_id}/credentials", + } + + if input == nil { + input = &RotateIngestEndpointCredentialsInput{} + } + + output = &RotateIngestEndpointCredentialsOutput{} + req = c.newRequest(op, input, output) + return +} + +// RotateIngestEndpointCredentials API operation for AWS Elemental MediaPackage. +// +// Rotate the IngestEndpoint's username and password, as specified by the IngestEndpoint's +// id. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Elemental MediaPackage's +// API operation RotateIngestEndpointCredentials for usage and error information. +// +// Returned Error Codes: +// * ErrCodeUnprocessableEntityException "UnprocessableEntityException" +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// +// * ErrCodeForbiddenException "ForbiddenException" +// +// * ErrCodeNotFoundException "NotFoundException" +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/mediapackage-2017-10-12/RotateIngestEndpointCredentials +func (c *MediaPackage) RotateIngestEndpointCredentials(input *RotateIngestEndpointCredentialsInput) (*RotateIngestEndpointCredentialsOutput, error) { + req, out := c.RotateIngestEndpointCredentialsRequest(input) + return out, req.Send() +} + +// RotateIngestEndpointCredentialsWithContext is the same as RotateIngestEndpointCredentials with the addition of +// the ability to pass a context and additional request options. +// +// See RotateIngestEndpointCredentials for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *MediaPackage) RotateIngestEndpointCredentialsWithContext(ctx aws.Context, input *RotateIngestEndpointCredentialsInput, opts ...request.Option) (*RotateIngestEndpointCredentialsOutput, error) { + req, out := c.RotateIngestEndpointCredentialsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateChannel = "UpdateChannel" // UpdateChannelRequest generates a "aws/request.Request" representing the // client's request for the UpdateChannel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1004,8 +1105,8 @@ const opUpdateOriginEndpoint = "UpdateOriginEndpoint" // UpdateOriginEndpointRequest generates a "aws/request.Request" representing the // client's request for the UpdateOriginEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1139,6 +1240,209 @@ func (s *Channel) SetId(v string) *Channel { return s } +// A Common Media Application Format (CMAF) encryption configuration. +type CmafEncryption struct { + _ struct{} `type:"structure"` + + // Time (in seconds) between each encryption key rotation. + KeyRotationIntervalSeconds *int64 `locationName:"keyRotationIntervalSeconds" type:"integer"` + + // A configuration for accessing an external Secure Packager and Encoder Key + // Exchange (SPEKE) service that will provide encryption keys. + // + // SpekeKeyProvider is a required field + SpekeKeyProvider *SpekeKeyProvider `locationName:"spekeKeyProvider" type:"structure" required:"true"` +} + +// String returns the string representation +func (s CmafEncryption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CmafEncryption) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CmafEncryption) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CmafEncryption"} + if s.SpekeKeyProvider == nil { + invalidParams.Add(request.NewErrParamRequired("SpekeKeyProvider")) + } + if s.SpekeKeyProvider != nil { + if err := s.SpekeKeyProvider.Validate(); err != nil { + invalidParams.AddNested("SpekeKeyProvider", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKeyRotationIntervalSeconds sets the KeyRotationIntervalSeconds field's value. +func (s *CmafEncryption) SetKeyRotationIntervalSeconds(v int64) *CmafEncryption { + s.KeyRotationIntervalSeconds = &v + return s +} + +// SetSpekeKeyProvider sets the SpekeKeyProvider field's value. +func (s *CmafEncryption) SetSpekeKeyProvider(v *SpekeKeyProvider) *CmafEncryption { + s.SpekeKeyProvider = v + return s +} + +// A Common Media Application Format (CMAF) packaging configuration. +type CmafPackage struct { + _ struct{} `type:"structure"` + + // A Common Media Application Format (CMAF) encryption configuration. + Encryption *CmafEncryption `locationName:"encryption" type:"structure"` + + // A list of HLS manifest configurations + HlsManifests []*HlsManifest `locationName:"hlsManifests" type:"list"` + + // Duration (in seconds) of each segment. Actual segments will berounded to + // the nearest multiple of the source segment duration. + SegmentDurationSeconds *int64 `locationName:"segmentDurationSeconds" type:"integer"` + + // An optional custom string that is prepended to the name of each segment. + // If not specified, it defaults to the ChannelId. + SegmentPrefix *string `locationName:"segmentPrefix" type:"string"` + + // A StreamSelection configuration. 
+ StreamSelection *StreamSelection `locationName:"streamSelection" type:"structure"` +} + +// String returns the string representation +func (s CmafPackage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CmafPackage) GoString() string { + return s.String() +} + +// SetEncryption sets the Encryption field's value. +func (s *CmafPackage) SetEncryption(v *CmafEncryption) *CmafPackage { + s.Encryption = v + return s +} + +// SetHlsManifests sets the HlsManifests field's value. +func (s *CmafPackage) SetHlsManifests(v []*HlsManifest) *CmafPackage { + s.HlsManifests = v + return s +} + +// SetSegmentDurationSeconds sets the SegmentDurationSeconds field's value. +func (s *CmafPackage) SetSegmentDurationSeconds(v int64) *CmafPackage { + s.SegmentDurationSeconds = &v + return s +} + +// SetSegmentPrefix sets the SegmentPrefix field's value. +func (s *CmafPackage) SetSegmentPrefix(v string) *CmafPackage { + s.SegmentPrefix = &v + return s +} + +// SetStreamSelection sets the StreamSelection field's value. +func (s *CmafPackage) SetStreamSelection(v *StreamSelection) *CmafPackage { + s.StreamSelection = v + return s +} + +// A Common Media Application Format (CMAF) packaging configuration. +type CmafPackageCreateOrUpdateParameters struct { + _ struct{} `type:"structure"` + + // A Common Media Application Format (CMAF) encryption configuration. + Encryption *CmafEncryption `locationName:"encryption" type:"structure"` + + // A list of HLS manifest configurations + HlsManifests []*HlsManifestCreateOrUpdateParameters `locationName:"hlsManifests" type:"list"` + + // Duration (in seconds) of each segment. Actual segments will berounded to + // the nearest multiple of the source segment duration. + SegmentDurationSeconds *int64 `locationName:"segmentDurationSeconds" type:"integer"` + + // An optional custom string that is prepended to the name of each segment. + // If not specified, it defaults to the ChannelId. + SegmentPrefix *string `locationName:"segmentPrefix" type:"string"` + + // A StreamSelection configuration. + StreamSelection *StreamSelection `locationName:"streamSelection" type:"structure"` +} + +// String returns the string representation +func (s CmafPackageCreateOrUpdateParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CmafPackageCreateOrUpdateParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CmafPackageCreateOrUpdateParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CmafPackageCreateOrUpdateParameters"} + if s.Encryption != nil { + if err := s.Encryption.Validate(); err != nil { + invalidParams.AddNested("Encryption", err.(request.ErrInvalidParams)) + } + } + if s.HlsManifests != nil { + for i, v := range s.HlsManifests { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "HlsManifests", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEncryption sets the Encryption field's value. +func (s *CmafPackageCreateOrUpdateParameters) SetEncryption(v *CmafEncryption) *CmafPackageCreateOrUpdateParameters { + s.Encryption = v + return s +} + +// SetHlsManifests sets the HlsManifests field's value. 
+func (s *CmafPackageCreateOrUpdateParameters) SetHlsManifests(v []*HlsManifestCreateOrUpdateParameters) *CmafPackageCreateOrUpdateParameters { + s.HlsManifests = v + return s +} + +// SetSegmentDurationSeconds sets the SegmentDurationSeconds field's value. +func (s *CmafPackageCreateOrUpdateParameters) SetSegmentDurationSeconds(v int64) *CmafPackageCreateOrUpdateParameters { + s.SegmentDurationSeconds = &v + return s +} + +// SetSegmentPrefix sets the SegmentPrefix field's value. +func (s *CmafPackageCreateOrUpdateParameters) SetSegmentPrefix(v string) *CmafPackageCreateOrUpdateParameters { + s.SegmentPrefix = &v + return s +} + +// SetStreamSelection sets the StreamSelection field's value. +func (s *CmafPackageCreateOrUpdateParameters) SetStreamSelection(v *StreamSelection) *CmafPackageCreateOrUpdateParameters { + s.StreamSelection = v + return s +} + type CreateChannelInput struct { _ struct{} `type:"structure"` @@ -1236,6 +1540,9 @@ type CreateOriginEndpointInput struct { // ChannelId is a required field ChannelId *string `locationName:"channelId" type:"string" required:"true"` + // A Common Media Application Format (CMAF) packaging configuration. + CmafPackage *CmafPackageCreateOrUpdateParameters `locationName:"cmafPackage" type:"structure"` + // A Dynamic Adaptive Streaming over HTTP (DASH) packaging configuration. DashPackage *DashPackage `locationName:"dashPackage" type:"structure"` @@ -1278,6 +1585,11 @@ func (s *CreateOriginEndpointInput) Validate() error { if s.Id == nil { invalidParams.Add(request.NewErrParamRequired("Id")) } + if s.CmafPackage != nil { + if err := s.CmafPackage.Validate(); err != nil { + invalidParams.AddNested("CmafPackage", err.(request.ErrInvalidParams)) + } + } if s.DashPackage != nil { if err := s.DashPackage.Validate(); err != nil { invalidParams.AddNested("DashPackage", err.(request.ErrInvalidParams)) @@ -1306,6 +1618,12 @@ func (s *CreateOriginEndpointInput) SetChannelId(v string) *CreateOriginEndpoint return s } +// SetCmafPackage sets the CmafPackage field's value. +func (s *CreateOriginEndpointInput) SetCmafPackage(v *CmafPackageCreateOrUpdateParameters) *CreateOriginEndpointInput { + s.CmafPackage = v + return s +} + // SetDashPackage sets the DashPackage field's value. func (s *CreateOriginEndpointInput) SetDashPackage(v *DashPackage) *CreateOriginEndpointInput { s.DashPackage = v @@ -1367,6 +1685,9 @@ type CreateOriginEndpointOutput struct { ChannelId *string `locationName:"channelId" type:"string"` + // A Common Media Application Format (CMAF) packaging configuration. + CmafPackage *CmafPackage `locationName:"cmafPackage" type:"structure"` + // A Dynamic Adaptive Streaming over HTTP (DASH) packaging configuration. DashPackage *DashPackage `locationName:"dashPackage" type:"structure"` @@ -1413,6 +1734,12 @@ func (s *CreateOriginEndpointOutput) SetChannelId(v string) *CreateOriginEndpoin return s } +// SetCmafPackage sets the CmafPackage field's value. +func (s *CreateOriginEndpointOutput) SetCmafPackage(v *CmafPackage) *CreateOriginEndpointOutput { + s.CmafPackage = v + return s +} + // SetDashPackage sets the DashPackage field's value. func (s *CreateOriginEndpointOutput) SetDashPackage(v *DashPackage) *CreateOriginEndpointOutput { s.DashPackage = v @@ -1545,6 +1872,13 @@ type DashPackage struct { // Streaming over HTTP (DASH) Media Presentation Description (MPD). 
MinUpdatePeriodSeconds *int64 `locationName:"minUpdatePeriodSeconds" type:"integer"` + // A list of triggers that controls when the outgoing Dynamic Adaptive Streaming + // over HTTP (DASH)Media Presentation Description (MPD) will be partitioned + // into multiple periods. If empty, the content will notbe partitioned into + // more than one period. If the list contains "ADS", new periods will be created + // wherethe Channel source contains SCTE-35 ad markers. + PeriodTriggers []*string `locationName:"periodTriggers" type:"list"` + // The Dynamic Adaptive Streaming over HTTP (DASH) profile type. When set to // "HBBTV_1_5", HbbTV 1.5 compliant output is enabled. Profile *string `locationName:"profile" type:"string" enum:"Profile"` @@ -1609,6 +1943,12 @@ func (s *DashPackage) SetMinUpdatePeriodSeconds(v int64) *DashPackage { return s } +// SetPeriodTriggers sets the PeriodTriggers field's value. +func (s *DashPackage) SetPeriodTriggers(v []*string) *DashPackage { + s.PeriodTriggers = v + return s +} + // SetProfile sets the Profile field's value. func (s *DashPackage) SetProfile(v string) *DashPackage { s.Profile = &v @@ -1859,6 +2199,9 @@ type DescribeOriginEndpointOutput struct { ChannelId *string `locationName:"channelId" type:"string"` + // A Common Media Application Format (CMAF) packaging configuration. + CmafPackage *CmafPackage `locationName:"cmafPackage" type:"structure"` + // A Dynamic Adaptive Streaming over HTTP (DASH) packaging configuration. DashPackage *DashPackage `locationName:"dashPackage" type:"structure"` @@ -1905,6 +2248,12 @@ func (s *DescribeOriginEndpointOutput) SetChannelId(v string) *DescribeOriginEnd return s } +// SetCmafPackage sets the CmafPackage field's value. +func (s *DescribeOriginEndpointOutput) SetCmafPackage(v *CmafPackage) *DescribeOriginEndpointOutput { + s.CmafPackage = v + return s +} + // SetDashPackage sets the DashPackage field's value. func (s *DescribeOriginEndpointOutput) SetDashPackage(v *DashPackage) *DescribeOriginEndpointOutput { s.DashPackage = v @@ -2071,8 +2420,8 @@ func (s *HlsIngest) SetIngestEndpoints(v []*IngestEndpoint) *HlsIngest { return s } -// An HTTP Live Streaming (HLS) packaging configuration. -type HlsPackage struct { +// A HTTP Live Streaming (HLS) manifest configuration. +type HlsManifest struct { _ struct{} `type:"structure"` // This setting controls how ad markers are included in the packaged OriginEndpoint."NONE" @@ -2082,12 +2431,19 @@ type HlsPackage struct { // ad markers and blackout tags based on SCTE-35messages in the input source. AdMarkers *string `locationName:"adMarkers" type:"string" enum:"AdMarkers"` - // An HTTP Live Streaming (HLS) encryption configuration. - Encryption *HlsEncryption `locationName:"encryption" type:"structure"` + // The ID of the manifest. The ID must be unique within the OriginEndpoint and + // it cannot be changed after it is created. + // + // Id is a required field + Id *string `locationName:"id" type:"string" required:"true"` // When enabled, an I-Frame only stream will be included in the output. IncludeIframeOnlyStream *bool `locationName:"includeIframeOnlyStream" type:"boolean"` + // An optional short string appended to the end of the OriginEndpoint URL. If + // not specified, defaults to the manifestName for the OriginEndpoint. + ManifestName *string `locationName:"manifestName" type:"string"` + // The HTTP Live Streaming (HLS) playlist type.When either "EVENT" or "VOD" // is specified, a corresponding EXT-X-PLAYLIST-TYPEentry will be included in // the media playlist. 
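The hunks above add CMAF support to the generated MediaPackage client: read-side `CmafPackage`/`CmafEncryption` structures, a write-side `CmafPackageCreateOrUpdateParameters` carrying `HlsManifestCreateOrUpdateParameters` entries, a new `cmafPackage` field on the origin-endpoint inputs and outputs, and `PeriodTriggers` on `DashPackage`. Below is a minimal sketch of how a caller might exercise the new setters when creating an OriginEndpoint; the region, channel ID, endpoint ID, and manifest values are placeholders, and the optional `CmafEncryption`/`SpekeKeyProvider` block is omitted.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/mediapackage"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := mediapackage.New(sess)

	// One HLS manifest rendered from the CMAF segments; the ID is required
	// and must be unique within the OriginEndpoint ("cmaf-hls" is a placeholder).
	hls := &mediapackage.HlsManifestCreateOrUpdateParameters{}
	hls.SetId("cmaf-hls")
	hls.SetAdMarkers("SCTE35_ENHANCED")

	// Write-side CMAF packaging configuration built with the new setters.
	cmaf := &mediapackage.CmafPackageCreateOrUpdateParameters{}
	cmaf.SetSegmentDurationSeconds(6)
	cmaf.SetSegmentPrefix("example")
	cmaf.SetHlsManifests([]*mediapackage.HlsManifestCreateOrUpdateParameters{hls})

	input := &mediapackage.CreateOriginEndpointInput{}
	input.SetChannelId("example-channel")  // placeholder channel ID
	input.SetId("example-cmaf-endpoint")   // placeholder endpoint ID
	input.SetCmafPackage(cmaf)

	out, err := svc.CreateOriginEndpoint(input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```

The same `CmafPackageCreateOrUpdateParameters` value can be passed to `UpdateOriginEndpointInput.SetCmafPackage`, while describe and list responses surface the read-side `CmafPackage` type.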
@@ -2098,7 +2454,7 @@ type HlsPackage struct { // The interval (in seconds) between each EXT-X-PROGRAM-DATE-TIME taginserted // into manifests. Additionally, when an interval is specifiedID3Timed Metadata - // messages will be generated every 5 seconds using the ingest time of the content.If + // messages will be generated every 5 seconds using theingest time of the content.If // the interval is not specified, or set to 0, thenno EXT-X-PROGRAM-DATE-TIME // tags will be inserted into manifests and noID3Timed Metadata messages will // be generated. Note that irrespectiveof this parameter, if any ID3 Timed Metadata @@ -2106,24 +2462,229 @@ type HlsPackage struct { // HLS output. ProgramDateTimeIntervalSeconds *int64 `locationName:"programDateTimeIntervalSeconds" type:"integer"` - // Duration (in seconds) of each fragment. Actual fragments will berounded to - // the nearest multiple of the source fragment duration. - SegmentDurationSeconds *int64 `locationName:"segmentDurationSeconds" type:"integer"` - - // A StreamSelection configuration. - StreamSelection *StreamSelection `locationName:"streamSelection" type:"structure"` - - // When enabled, audio streams will be placed in rendition groups in the output. - UseAudioRenditionGroup *bool `locationName:"useAudioRenditionGroup" type:"boolean"` + // The URL of the packaged OriginEndpoint for consumption. + Url *string `locationName:"url" type:"string"` } // String returns the string representation -func (s HlsPackage) String() string { +func (s HlsManifest) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s HlsPackage) GoString() string { +func (s HlsManifest) GoString() string { + return s.String() +} + +// SetAdMarkers sets the AdMarkers field's value. +func (s *HlsManifest) SetAdMarkers(v string) *HlsManifest { + s.AdMarkers = &v + return s +} + +// SetId sets the Id field's value. +func (s *HlsManifest) SetId(v string) *HlsManifest { + s.Id = &v + return s +} + +// SetIncludeIframeOnlyStream sets the IncludeIframeOnlyStream field's value. +func (s *HlsManifest) SetIncludeIframeOnlyStream(v bool) *HlsManifest { + s.IncludeIframeOnlyStream = &v + return s +} + +// SetManifestName sets the ManifestName field's value. +func (s *HlsManifest) SetManifestName(v string) *HlsManifest { + s.ManifestName = &v + return s +} + +// SetPlaylistType sets the PlaylistType field's value. +func (s *HlsManifest) SetPlaylistType(v string) *HlsManifest { + s.PlaylistType = &v + return s +} + +// SetPlaylistWindowSeconds sets the PlaylistWindowSeconds field's value. +func (s *HlsManifest) SetPlaylistWindowSeconds(v int64) *HlsManifest { + s.PlaylistWindowSeconds = &v + return s +} + +// SetProgramDateTimeIntervalSeconds sets the ProgramDateTimeIntervalSeconds field's value. +func (s *HlsManifest) SetProgramDateTimeIntervalSeconds(v int64) *HlsManifest { + s.ProgramDateTimeIntervalSeconds = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *HlsManifest) SetUrl(v string) *HlsManifest { + s.Url = &v + return s +} + +// A HTTP Live Streaming (HLS) manifest configuration. 
+type HlsManifestCreateOrUpdateParameters struct { + _ struct{} `type:"structure"` + + // This setting controls how ad markers are included in the packaged OriginEndpoint."NONE" + // will omit all SCTE-35 ad markers from the output."PASSTHROUGH" causes the + // manifest to contain a copy of the SCTE-35 admarkers (comments) taken directly + // from the input HTTP Live Streaming (HLS) manifest."SCTE35_ENHANCED" generates + // ad markers and blackout tags based on SCTE-35messages in the input source. + AdMarkers *string `locationName:"adMarkers" type:"string" enum:"AdMarkers"` + + // The ID of the manifest. The ID must be unique within the OriginEndpoint and + // it cannot be changed after it is created. + // + // Id is a required field + Id *string `locationName:"id" type:"string" required:"true"` + + // When enabled, an I-Frame only stream will be included in the output. + IncludeIframeOnlyStream *bool `locationName:"includeIframeOnlyStream" type:"boolean"` + + // An optional short string appended to the end of the OriginEndpoint URL. If + // not specified, defaults to the manifestName for the OriginEndpoint. + ManifestName *string `locationName:"manifestName" type:"string"` + + // The HTTP Live Streaming (HLS) playlist type.When either "EVENT" or "VOD" + // is specified, a corresponding EXT-X-PLAYLIST-TYPEentry will be included in + // the media playlist. + PlaylistType *string `locationName:"playlistType" type:"string" enum:"PlaylistType"` + + // Time window (in seconds) contained in each parent manifest. + PlaylistWindowSeconds *int64 `locationName:"playlistWindowSeconds" type:"integer"` + + // The interval (in seconds) between each EXT-X-PROGRAM-DATE-TIME taginserted + // into manifests. Additionally, when an interval is specifiedID3Timed Metadata + // messages will be generated every 5 seconds using theingest time of the content.If + // the interval is not specified, or set to 0, thenno EXT-X-PROGRAM-DATE-TIME + // tags will be inserted into manifests and noID3Timed Metadata messages will + // be generated. Note that irrespectiveof this parameter, if any ID3 Timed Metadata + // is found in HTTP Live Streaming (HLS) input,it will be passed through to + // HLS output. + ProgramDateTimeIntervalSeconds *int64 `locationName:"programDateTimeIntervalSeconds" type:"integer"` +} + +// String returns the string representation +func (s HlsManifestCreateOrUpdateParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HlsManifestCreateOrUpdateParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *HlsManifestCreateOrUpdateParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HlsManifestCreateOrUpdateParameters"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAdMarkers sets the AdMarkers field's value. +func (s *HlsManifestCreateOrUpdateParameters) SetAdMarkers(v string) *HlsManifestCreateOrUpdateParameters { + s.AdMarkers = &v + return s +} + +// SetId sets the Id field's value. +func (s *HlsManifestCreateOrUpdateParameters) SetId(v string) *HlsManifestCreateOrUpdateParameters { + s.Id = &v + return s +} + +// SetIncludeIframeOnlyStream sets the IncludeIframeOnlyStream field's value. 
+func (s *HlsManifestCreateOrUpdateParameters) SetIncludeIframeOnlyStream(v bool) *HlsManifestCreateOrUpdateParameters { + s.IncludeIframeOnlyStream = &v + return s +} + +// SetManifestName sets the ManifestName field's value. +func (s *HlsManifestCreateOrUpdateParameters) SetManifestName(v string) *HlsManifestCreateOrUpdateParameters { + s.ManifestName = &v + return s +} + +// SetPlaylistType sets the PlaylistType field's value. +func (s *HlsManifestCreateOrUpdateParameters) SetPlaylistType(v string) *HlsManifestCreateOrUpdateParameters { + s.PlaylistType = &v + return s +} + +// SetPlaylistWindowSeconds sets the PlaylistWindowSeconds field's value. +func (s *HlsManifestCreateOrUpdateParameters) SetPlaylistWindowSeconds(v int64) *HlsManifestCreateOrUpdateParameters { + s.PlaylistWindowSeconds = &v + return s +} + +// SetProgramDateTimeIntervalSeconds sets the ProgramDateTimeIntervalSeconds field's value. +func (s *HlsManifestCreateOrUpdateParameters) SetProgramDateTimeIntervalSeconds(v int64) *HlsManifestCreateOrUpdateParameters { + s.ProgramDateTimeIntervalSeconds = &v + return s +} + +// An HTTP Live Streaming (HLS) packaging configuration. +type HlsPackage struct { + _ struct{} `type:"structure"` + + // This setting controls how ad markers are included in the packaged OriginEndpoint."NONE" + // will omit all SCTE-35 ad markers from the output."PASSTHROUGH" causes the + // manifest to contain a copy of the SCTE-35 admarkers (comments) taken directly + // from the input HTTP Live Streaming (HLS) manifest."SCTE35_ENHANCED" generates + // ad markers and blackout tags based on SCTE-35messages in the input source. + AdMarkers *string `locationName:"adMarkers" type:"string" enum:"AdMarkers"` + + // An HTTP Live Streaming (HLS) encryption configuration. + Encryption *HlsEncryption `locationName:"encryption" type:"structure"` + + // When enabled, an I-Frame only stream will be included in the output. + IncludeIframeOnlyStream *bool `locationName:"includeIframeOnlyStream" type:"boolean"` + + // The HTTP Live Streaming (HLS) playlist type.When either "EVENT" or "VOD" + // is specified, a corresponding EXT-X-PLAYLIST-TYPEentry will be included in + // the media playlist. + PlaylistType *string `locationName:"playlistType" type:"string" enum:"PlaylistType"` + + // Time window (in seconds) contained in each parent manifest. + PlaylistWindowSeconds *int64 `locationName:"playlistWindowSeconds" type:"integer"` + + // The interval (in seconds) between each EXT-X-PROGRAM-DATE-TIME taginserted + // into manifests. Additionally, when an interval is specifiedID3Timed Metadata + // messages will be generated every 5 seconds using theingest time of the content.If + // the interval is not specified, or set to 0, thenno EXT-X-PROGRAM-DATE-TIME + // tags will be inserted into manifests and noID3Timed Metadata messages will + // be generated. Note that irrespectiveof this parameter, if any ID3 Timed Metadata + // is found in HTTP Live Streaming (HLS) input,it will be passed through to + // HLS output. + ProgramDateTimeIntervalSeconds *int64 `locationName:"programDateTimeIntervalSeconds" type:"integer"` + + // Duration (in seconds) of each fragment. Actual fragments will berounded to + // the nearest multiple of the source fragment duration. + SegmentDurationSeconds *int64 `locationName:"segmentDurationSeconds" type:"integer"` + + // A StreamSelection configuration. 
+ StreamSelection *StreamSelection `locationName:"streamSelection" type:"structure"` + + // When enabled, audio streams will be placed in rendition groups in the output. + UseAudioRenditionGroup *bool `locationName:"useAudioRenditionGroup" type:"boolean"` +} + +// String returns the string representation +func (s HlsPackage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s HlsPackage) GoString() string { return s.String() } @@ -2200,6 +2761,9 @@ func (s *HlsPackage) SetUseAudioRenditionGroup(v bool) *HlsPackage { type IngestEndpoint struct { _ struct{} `type:"structure"` + // The system generated unique identifier for the IngestEndpoint + Id *string `locationName:"id" type:"string"` + // The system generated password for ingest authentication. Password *string `locationName:"password" type:"string"` @@ -2220,6 +2784,12 @@ func (s IngestEndpoint) GoString() string { return s.String() } +// SetId sets the Id field's value. +func (s *IngestEndpoint) SetId(v string) *IngestEndpoint { + s.Id = &v + return s +} + // SetPassword sets the Password field's value. func (s *IngestEndpoint) SetPassword(v string) *IngestEndpoint { s.Password = &v @@ -2513,6 +3083,9 @@ type OriginEndpoint struct { // The ID of the Channel the OriginEndpoint is associated with. ChannelId *string `locationName:"channelId" type:"string"` + // A Common Media Application Format (CMAF) packaging configuration. + CmafPackage *CmafPackage `locationName:"cmafPackage" type:"structure"` + // A Dynamic Adaptive Streaming over HTTP (DASH) packaging configuration. DashPackage *DashPackage `locationName:"dashPackage" type:"structure"` @@ -2568,6 +3141,12 @@ func (s *OriginEndpoint) SetChannelId(v string) *OriginEndpoint { return s } +// SetCmafPackage sets the CmafPackage field's value. +func (s *OriginEndpoint) SetCmafPackage(v *CmafPackage) *OriginEndpoint { + s.CmafPackage = v + return s +} + // SetDashPackage sets the DashPackage field's value. 
func (s *OriginEndpoint) SetDashPackage(v *DashPackage) *OriginEndpoint { s.DashPackage = v @@ -2628,8 +3207,9 @@ func (s *OriginEndpoint) SetWhitelist(v []*string) *OriginEndpoint { return s } +// Deprecated: RotateChannelCredentialsInput has been deprecated type RotateChannelCredentialsInput struct { - _ struct{} `type:"structure"` + _ struct{} `deprecated:"true" type:"structure"` // Id is a required field Id *string `location:"uri" locationName:"id" type:"string" required:"true"` @@ -2664,8 +3244,9 @@ func (s *RotateChannelCredentialsInput) SetId(v string) *RotateChannelCredential return s } +// Deprecated: RotateChannelCredentialsOutput has been deprecated type RotateChannelCredentialsOutput struct { - _ struct{} `type:"structure"` + _ struct{} `deprecated:"true" type:"structure"` Arn *string `locationName:"arn" type:"string"` @@ -2711,11 +3292,111 @@ func (s *RotateChannelCredentialsOutput) SetId(v string) *RotateChannelCredentia return s } +type RotateIngestEndpointCredentialsInput struct { + _ struct{} `type:"structure"` + + // Id is a required field + Id *string `location:"uri" locationName:"id" type:"string" required:"true"` + + // IngestEndpointId is a required field + IngestEndpointId *string `location:"uri" locationName:"ingest_endpoint_id" type:"string" required:"true"` +} + +// String returns the string representation +func (s RotateIngestEndpointCredentialsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RotateIngestEndpointCredentialsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RotateIngestEndpointCredentialsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RotateIngestEndpointCredentialsInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.IngestEndpointId == nil { + invalidParams.Add(request.NewErrParamRequired("IngestEndpointId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetId sets the Id field's value. +func (s *RotateIngestEndpointCredentialsInput) SetId(v string) *RotateIngestEndpointCredentialsInput { + s.Id = &v + return s +} + +// SetIngestEndpointId sets the IngestEndpointId field's value. +func (s *RotateIngestEndpointCredentialsInput) SetIngestEndpointId(v string) *RotateIngestEndpointCredentialsInput { + s.IngestEndpointId = &v + return s +} + +type RotateIngestEndpointCredentialsOutput struct { + _ struct{} `type:"structure"` + + Arn *string `locationName:"arn" type:"string"` + + Description *string `locationName:"description" type:"string"` + + // An HTTP Live Streaming (HLS) ingest resource configuration. + HlsIngest *HlsIngest `locationName:"hlsIngest" type:"structure"` + + Id *string `locationName:"id" type:"string"` +} + +// String returns the string representation +func (s RotateIngestEndpointCredentialsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RotateIngestEndpointCredentialsOutput) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *RotateIngestEndpointCredentialsOutput) SetArn(v string) *RotateIngestEndpointCredentialsOutput { + s.Arn = &v + return s +} + +// SetDescription sets the Description field's value. 
+func (s *RotateIngestEndpointCredentialsOutput) SetDescription(v string) *RotateIngestEndpointCredentialsOutput { + s.Description = &v + return s +} + +// SetHlsIngest sets the HlsIngest field's value. +func (s *RotateIngestEndpointCredentialsOutput) SetHlsIngest(v *HlsIngest) *RotateIngestEndpointCredentialsOutput { + s.HlsIngest = v + return s +} + +// SetId sets the Id field's value. +func (s *RotateIngestEndpointCredentialsOutput) SetId(v string) *RotateIngestEndpointCredentialsOutput { + s.Id = &v + return s +} + // A configuration for accessing an external Secure Packager and Encoder Key // Exchange (SPEKE) service that will provide encryption keys. type SpekeKeyProvider struct { _ struct{} `type:"structure"` + // An Amazon Resource Name (ARN) of a Certificate Manager certificatethat MediaPackage + // will use for enforcing secure end-to-end datatransfer with the key provider + // service. + CertificateArn *string `locationName:"certificateArn" type:"string"` + // The resource ID to include in key requests. // // ResourceId is a required field @@ -2770,6 +3451,12 @@ func (s *SpekeKeyProvider) Validate() error { return nil } +// SetCertificateArn sets the CertificateArn field's value. +func (s *SpekeKeyProvider) SetCertificateArn(v string) *SpekeKeyProvider { + s.CertificateArn = &v + return s +} + // SetResourceId sets the ResourceId field's value. func (s *SpekeKeyProvider) SetResourceId(v string) *SpekeKeyProvider { s.ResourceId = &v @@ -2930,6 +3617,9 @@ func (s *UpdateChannelOutput) SetId(v string) *UpdateChannelOutput { type UpdateOriginEndpointInput struct { _ struct{} `type:"structure"` + // A Common Media Application Format (CMAF) packaging configuration. + CmafPackage *CmafPackageCreateOrUpdateParameters `locationName:"cmafPackage" type:"structure"` + // A Dynamic Adaptive Streaming over HTTP (DASH) packaging configuration. DashPackage *DashPackage `locationName:"dashPackage" type:"structure"` @@ -2969,6 +3659,11 @@ func (s *UpdateOriginEndpointInput) Validate() error { if s.Id == nil { invalidParams.Add(request.NewErrParamRequired("Id")) } + if s.CmafPackage != nil { + if err := s.CmafPackage.Validate(); err != nil { + invalidParams.AddNested("CmafPackage", err.(request.ErrInvalidParams)) + } + } if s.DashPackage != nil { if err := s.DashPackage.Validate(); err != nil { invalidParams.AddNested("DashPackage", err.(request.ErrInvalidParams)) @@ -2991,6 +3686,12 @@ func (s *UpdateOriginEndpointInput) Validate() error { return nil } +// SetCmafPackage sets the CmafPackage field's value. +func (s *UpdateOriginEndpointInput) SetCmafPackage(v *CmafPackageCreateOrUpdateParameters) *UpdateOriginEndpointInput { + s.CmafPackage = v + return s +} + // SetDashPackage sets the DashPackage field's value. func (s *UpdateOriginEndpointInput) SetDashPackage(v *DashPackage) *UpdateOriginEndpointInput { s.DashPackage = v @@ -3052,6 +3753,9 @@ type UpdateOriginEndpointOutput struct { ChannelId *string `locationName:"channelId" type:"string"` + // A Common Media Application Format (CMAF) packaging configuration. + CmafPackage *CmafPackage `locationName:"cmafPackage" type:"structure"` + // A Dynamic Adaptive Streaming over HTTP (DASH) packaging configuration. DashPackage *DashPackage `locationName:"dashPackage" type:"structure"` @@ -3098,6 +3802,12 @@ func (s *UpdateOriginEndpointOutput) SetChannelId(v string) *UpdateOriginEndpoin return s } +// SetCmafPackage sets the CmafPackage field's value. 
+func (s *UpdateOriginEndpointOutput) SetCmafPackage(v *CmafPackage) *UpdateOriginEndpointOutput { + s.CmafPackage = v + return s +} + // SetDashPackage sets the DashPackage field's value. func (s *UpdateOriginEndpointOutput) SetDashPackage(v *DashPackage) *UpdateOriginEndpointOutput { s.DashPackage = v @@ -3206,3 +3916,8 @@ const ( // StreamOrderVideoBitrateDescending is a StreamOrder enum value StreamOrderVideoBitrateDescending = "VIDEO_BITRATE_DESCENDING" ) + +const ( + // __PeriodTriggersElementAds is a __PeriodTriggersElement enum value + __PeriodTriggersElementAds = "ADS" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/mediapackage/service.go b/vendor/github.com/aws/aws-sdk-go/service/mediapackage/service.go index 2447a2092bf..96dd74dcf3d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/mediapackage/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/mediapackage/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "mediapackage" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "mediapackage" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "MediaPackage" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the MediaPackage client with a session. @@ -45,19 +46,20 @@ const ( // svc := mediapackage.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *MediaPackage { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "mediapackage" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *MediaPackage { - if len(signingName) == 0 { - signingName = "mediapackage" - } svc := &MediaPackage{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/mediastore/api.go b/vendor/github.com/aws/aws-sdk-go/service/mediastore/api.go index f5d369592ab..4ee35c42e5a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/mediastore/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/mediastore/api.go @@ -14,8 +14,8 @@ const opCreateContainer = "CreateContainer" // CreateContainerRequest generates a "aws/request.Request" representing the // client's request for the CreateContainer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -100,8 +100,8 @@ const opDeleteContainer = "DeleteContainer" // DeleteContainerRequest generates a "aws/request.Request" representing the // client's request for the DeleteContainer operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -187,8 +187,8 @@ const opDeleteContainerPolicy = "DeleteContainerPolicy" // DeleteContainerPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteContainerPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -275,8 +275,8 @@ const opDeleteCorsPolicy = "DeleteCorsPolicy" // DeleteCorsPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteCorsPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -368,8 +368,8 @@ const opDescribeContainer = "DescribeContainer" // DescribeContainerRequest generates a "aws/request.Request" representing the // client's request for the DescribeContainer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -455,8 +455,8 @@ const opGetContainerPolicy = "GetContainerPolicy" // GetContainerPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetContainerPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -545,8 +545,8 @@ const opGetCorsPolicy = "GetCorsPolicy" // GetCorsPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetCorsPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -638,8 +638,8 @@ const opListContainers = "ListContainers" // ListContainersRequest generates a "aws/request.Request" representing the // client's request for the ListContainers operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -726,8 +726,8 @@ const opPutContainerPolicy = "PutContainerPolicy" // PutContainerPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutContainerPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -818,8 +818,8 @@ const opPutCorsPolicy = "PutCorsPolicy" // PutCorsPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutCorsPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -923,7 +923,7 @@ type Container struct { ARN *string `min:"1" type:"string"` // Unix timestamp. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreationTime *time.Time `type:"timestamp"` // The DNS endpoint of the container. Use the endpoint to identify the specific // container when sending requests to the data plane. The service assigns this diff --git a/vendor/github.com/aws/aws-sdk-go/service/mediastore/service.go b/vendor/github.com/aws/aws-sdk-go/service/mediastore/service.go index fa36fb91063..492b3bf04c3 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/mediastore/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/mediastore/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "mediastore" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "mediastore" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "MediaStore" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the MediaStore client with a session. @@ -45,19 +46,20 @@ const ( // svc := mediastore.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *MediaStore { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "mediastore" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. 
func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *MediaStore { - if len(signingName) == 0 { - signingName = "mediastore" - } svc := &MediaStore{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/mediastoredata/api.go b/vendor/github.com/aws/aws-sdk-go/service/mediastoredata/api.go index 8b47a0e816d..5163c04cbd1 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/mediastoredata/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/mediastoredata/api.go @@ -16,8 +16,8 @@ const opDeleteObject = "DeleteObject" // DeleteObjectRequest generates a "aws/request.Request" representing the // client's request for the DeleteObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -101,8 +101,8 @@ const opDescribeObject = "DescribeObject" // DescribeObjectRequest generates a "aws/request.Request" representing the // client's request for the DescribeObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -186,8 +186,8 @@ const opGetObject = "GetObject" // GetObjectRequest generates a "aws/request.Request" representing the // client's request for the GetObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -274,8 +274,8 @@ const opListItems = "ListItems" // ListItemsRequest generates a "aws/request.Request" representing the // client's request for the ListItems operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -357,8 +357,8 @@ const opPutObject = "PutObject" // PutObjectRequest generates a "aws/request.Request" representing the // client's request for the PutObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
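Each regenerated doc comment above describes the same calling convention: `XxxRequest` returns a `request.Request` plus an output value that is only valid after `Send` returns without error, while the plain `Xxx`/`XxxWithContext` wrappers perform the send themselves. The sketch below shows both styles against the MediaStore Data operations touched here; the container data endpoint and object paths are placeholders, and pointing the data-plane client at the container endpoint (as reported by the control-plane `DescribeContainer`) reflects typical usage rather than anything stated in these hunks.

```go
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/mediastoredata"
)

func main() {
	sess := session.Must(session.NewSession())

	// The data-plane client is addressed via the container's data endpoint
	// (placeholder below), not a plain regional service endpoint.
	svc := mediastoredata.New(sess, aws.NewConfig().
		WithRegion("us-west-2").
		WithEndpoint("https://examplecontainer.data.mediastore.us-west-2.amazonaws.com"))

	// Style 1: convenience wrapper, which builds and sends the request.
	out, err := svc.GetObject(&mediastoredata.GetObjectInput{
		Path: aws.String("/folder/example.m3u8"), // placeholder object path
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()
	body, err := ioutil.ReadAll(out.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(body), aws.StringValue(out.ETag))

	// Style 2: build the request, attach a context, then Send; the output
	// value is not valid until Send returns without error.
	req, resp := svc.DescribeObjectRequest(&mediastoredata.DescribeObjectInput{
		Path: aws.String("/folder/example.m3u8"),
	})
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	req.SetContext(ctx)
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.TimeValue(resp.LastModified))
}
```

Context-aware wrappers such as `GetObjectWithContext` combine both styles, as the `*WithContext` doc comments in these files describe.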
@@ -400,7 +400,7 @@ func (c *MediaStoreData) PutObjectRequest(input *PutObjectInput) (req *request.R // PutObject API operation for AWS Elemental MediaStore Data Plane. // -// Uploads an object to the specified path. Object sizes are limited to 10 MB. +// Uploads an object to the specified path. Object sizes are limited to 25 MB. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -556,7 +556,7 @@ type DescribeObjectOutput struct { ETag *string `location:"header" locationName:"ETag" min:"1" type:"string"` // The date and time that the object was last modified. - LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp" timestampFormat:"rfc822"` + LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp"` } // String returns the string representation @@ -699,7 +699,7 @@ type GetObjectOutput struct { ETag *string `location:"header" locationName:"ETag" min:"1" type:"string"` // The date and time that the object was last modified. - LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp" timestampFormat:"rfc822"` + LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp"` // The HTML status code of the request. Status codes ranging from 200 to 299 // indicate success. All other status codes indicate the type of error that @@ -781,7 +781,7 @@ type Item struct { ETag *string `min:"1" type:"string"` // The date and time that the item was last modified. - LastModified *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModified *time.Time `type:"timestamp"` // The name of the item. Name *string `type:"string"` @@ -839,11 +839,24 @@ func (s *Item) SetType(v string) *Item { type ListItemsInput struct { _ struct{} `type:"structure"` - // The maximum results to return. The service might return fewer results. + // The maximum number of results to return per API request. For example, you + // submit a ListItems request with MaxResults set at 500. Although 2,000 items + // match your request, the service returns no more than the first 500 items. + // (The service also returns a NextToken value that you can use to fetch the + // next batch of results.) The service might return fewer results than the MaxResults + // value. + // + // If MaxResults is not included in the request, the service defaults to pagination + // with a maximum of 1,000 results per page. MaxResults *int64 `location:"querystring" locationName:"MaxResults" min:"1" type:"integer"` - // The NextToken received in the ListItemsResponse for the same container and - // path. Tokens expire after 15 minutes. + // The token that identifies which batch of results that you want to see. For + // example, you submit a ListItems request with MaxResults set at 500. The service + // returns the first batch of results (up to 500) and a NextToken value. To + // see the next batch of results, you can submit the ListItems request a second + // time and specify the NextToken value. + // + // Tokens expire after 15 minutes. NextToken *string `location:"querystring" locationName:"NextToken" type:"string"` // The path in the container from which to retrieve items. Format: / 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. 
+func (s *AddRoleToDBClusterInput) SetDBClusterIdentifier(v string) *AddRoleToDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *AddRoleToDBClusterInput) SetRoleArn(v string) *AddRoleToDBClusterInput { + s.RoleArn = &v + return s +} + +type AddRoleToDBClusterOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddRoleToDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddRoleToDBClusterOutput) GoString() string { + return s.String() +} + +type AddSourceIdentifierToSubscriptionInput struct { + _ struct{} `type:"structure"` + + // The identifier of the event source to be added. + // + // Constraints: + // + // * If the source type is a DB instance, then a DBInstanceIdentifier must + // be supplied. + // + // * If the source type is a DB security group, a DBSecurityGroupName must + // be supplied. + // + // * If the source type is a DB parameter group, a DBParameterGroupName must + // be supplied. + // + // * If the source type is a DB snapshot, a DBSnapshotIdentifier must be + // supplied. + // + // SourceIdentifier is a required field + SourceIdentifier *string `type:"string" required:"true"` + + // The name of the event notification subscription you want to add a source + // identifier to. + // + // SubscriptionName is a required field + SubscriptionName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s AddSourceIdentifierToSubscriptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddSourceIdentifierToSubscriptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddSourceIdentifierToSubscriptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddSourceIdentifierToSubscriptionInput"} + if s.SourceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SourceIdentifier")) + } + if s.SubscriptionName == nil { + invalidParams.Add(request.NewErrParamRequired("SubscriptionName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSourceIdentifier sets the SourceIdentifier field's value. +func (s *AddSourceIdentifierToSubscriptionInput) SetSourceIdentifier(v string) *AddSourceIdentifierToSubscriptionInput { + s.SourceIdentifier = &v + return s +} + +// SetSubscriptionName sets the SubscriptionName field's value. +func (s *AddSourceIdentifierToSubscriptionInput) SetSubscriptionName(v string) *AddSourceIdentifierToSubscriptionInput { + s.SubscriptionName = &v + return s +} + +type AddSourceIdentifierToSubscriptionOutput struct { + _ struct{} `type:"structure"` + + // Contains the results of a successful invocation of the DescribeEventSubscriptions + // action. + EventSubscription *EventSubscription `type:"structure"` +} + +// String returns the string representation +func (s AddSourceIdentifierToSubscriptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddSourceIdentifierToSubscriptionOutput) GoString() string { + return s.String() +} + +// SetEventSubscription sets the EventSubscription field's value. 
+func (s *AddSourceIdentifierToSubscriptionOutput) SetEventSubscription(v *EventSubscription) *AddSourceIdentifierToSubscriptionOutput { + s.EventSubscription = v + return s +} + +type AddTagsToResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Neptune resource that the tags are added to. This value is an + // Amazon Resource Name (ARN). For information about creating an ARN, see Constructing + // an Amazon Resource Name (ARN) (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html#tagging.ARN.Constructing). + // + // ResourceName is a required field + ResourceName *string `type:"string" required:"true"` + + // The tags to be assigned to the Amazon Neptune resource. + // + // Tags is a required field + Tags []*Tag `locationNameList:"Tag" type:"list" required:"true"` +} + +// String returns the string representation +func (s AddTagsToResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsToResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddTagsToResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddTagsToResourceInput"} + if s.ResourceName == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceName")) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceName sets the ResourceName field's value. +func (s *AddTagsToResourceInput) SetResourceName(v string) *AddTagsToResourceInput { + s.ResourceName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *AddTagsToResourceInput) SetTags(v []*Tag) *AddTagsToResourceInput { + s.Tags = v + return s +} + +type AddTagsToResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AddTagsToResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsToResourceOutput) GoString() string { + return s.String() +} + +type ApplyPendingMaintenanceActionInput struct { + _ struct{} `type:"structure"` + + // The pending maintenance action to apply to this resource. + // + // Valid values: system-update, db-upgrade + // + // ApplyAction is a required field + ApplyAction *string `type:"string" required:"true"` + + // A value that specifies the type of opt-in request, or undoes an opt-in request. + // An opt-in request of type immediate can't be undone. + // + // Valid values: + // + // * immediate - Apply the maintenance action immediately. + // + // * next-maintenance - Apply the maintenance action during the next maintenance + // window for the resource. + // + // * undo-opt-in - Cancel any existing next-maintenance opt-in requests. + // + // OptInType is a required field + OptInType *string `type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the resource that the pending maintenance + // action applies to. For information about creating an ARN, see Constructing + // an Amazon Resource Name (ARN) (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html#tagging.ARN.Constructing). 
+ // + // ResourceIdentifier is a required field + ResourceIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ApplyPendingMaintenanceActionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ApplyPendingMaintenanceActionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ApplyPendingMaintenanceActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ApplyPendingMaintenanceActionInput"} + if s.ApplyAction == nil { + invalidParams.Add(request.NewErrParamRequired("ApplyAction")) + } + if s.OptInType == nil { + invalidParams.Add(request.NewErrParamRequired("OptInType")) + } + if s.ResourceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplyAction sets the ApplyAction field's value. +func (s *ApplyPendingMaintenanceActionInput) SetApplyAction(v string) *ApplyPendingMaintenanceActionInput { + s.ApplyAction = &v + return s +} + +// SetOptInType sets the OptInType field's value. +func (s *ApplyPendingMaintenanceActionInput) SetOptInType(v string) *ApplyPendingMaintenanceActionInput { + s.OptInType = &v + return s +} + +// SetResourceIdentifier sets the ResourceIdentifier field's value. +func (s *ApplyPendingMaintenanceActionInput) SetResourceIdentifier(v string) *ApplyPendingMaintenanceActionInput { + s.ResourceIdentifier = &v + return s +} + +type ApplyPendingMaintenanceActionOutput struct { + _ struct{} `type:"structure"` + + // Describes the pending maintenance actions for a resource. + ResourcePendingMaintenanceActions *ResourcePendingMaintenanceActions `type:"structure"` +} + +// String returns the string representation +func (s ApplyPendingMaintenanceActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ApplyPendingMaintenanceActionOutput) GoString() string { + return s.String() +} + +// SetResourcePendingMaintenanceActions sets the ResourcePendingMaintenanceActions field's value. +func (s *ApplyPendingMaintenanceActionOutput) SetResourcePendingMaintenanceActions(v *ResourcePendingMaintenanceActions) *ApplyPendingMaintenanceActionOutput { + s.ResourcePendingMaintenanceActions = v + return s +} + +// Contains Availability Zone information. +// +// This data type is used as an element in the following data type: +// +// * OrderableDBInstanceOption +type AvailabilityZone struct { + _ struct{} `type:"structure"` + + // The name of the availability zone. + Name *string `type:"string"` +} + +// String returns the string representation +func (s AvailabilityZone) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AvailabilityZone) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *AvailabilityZone) SetName(v string) *AvailabilityZone { + s.Name = &v + return s +} + +// This data type is used as a response element in the action DescribeDBEngineVersions. +type CharacterSet struct { + _ struct{} `type:"structure"` + + // The description of the character set. + CharacterSetDescription *string `type:"string"` + + // The name of the character set. 
+ CharacterSetName *string `type:"string"` +} + +// String returns the string representation +func (s CharacterSet) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CharacterSet) GoString() string { + return s.String() +} + +// SetCharacterSetDescription sets the CharacterSetDescription field's value. +func (s *CharacterSet) SetCharacterSetDescription(v string) *CharacterSet { + s.CharacterSetDescription = &v + return s +} + +// SetCharacterSetName sets the CharacterSetName field's value. +func (s *CharacterSet) SetCharacterSetName(v string) *CharacterSet { + s.CharacterSetName = &v + return s +} + +// The configuration setting for the log types to be enabled for export to CloudWatch +// Logs for a specific DB instance or DB cluster. +type CloudwatchLogsExportConfiguration struct { + _ struct{} `type:"structure"` + + // The list of log types to disable. + DisableLogTypes []*string `type:"list"` + + // The list of log types to enable. + EnableLogTypes []*string `type:"list"` +} + +// String returns the string representation +func (s CloudwatchLogsExportConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudwatchLogsExportConfiguration) GoString() string { + return s.String() +} + +// SetDisableLogTypes sets the DisableLogTypes field's value. +func (s *CloudwatchLogsExportConfiguration) SetDisableLogTypes(v []*string) *CloudwatchLogsExportConfiguration { + s.DisableLogTypes = v + return s +} + +// SetEnableLogTypes sets the EnableLogTypes field's value. +func (s *CloudwatchLogsExportConfiguration) SetEnableLogTypes(v []*string) *CloudwatchLogsExportConfiguration { + s.EnableLogTypes = v + return s +} + +type CopyDBClusterParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The identifier or Amazon Resource Name (ARN) for the source DB cluster parameter + // group. For information about creating an ARN, see Constructing an Amazon + // Resource Name (ARN) (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html#tagging.ARN.Constructing). + // + // Constraints: + // + // * Must specify a valid DB cluster parameter group. + // + // * If the source DB cluster parameter group is in the same AWS Region as + // the copy, specify a valid DB parameter group identifier, for example my-db-cluster-param-group, + // or a valid ARN. + // + // * If the source DB parameter group is in a different AWS Region than the + // copy, specify a valid DB cluster parameter group ARN, for example arn:aws:rds:us-east-1:123456789012:cluster-pg:custom-cluster-group1. + // + // SourceDBClusterParameterGroupIdentifier is a required field + SourceDBClusterParameterGroupIdentifier *string `type:"string" required:"true"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). + Tags []*Tag `locationNameList:"Tag" type:"list"` + + // A description for the copied DB cluster parameter group. + // + // TargetDBClusterParameterGroupDescription is a required field + TargetDBClusterParameterGroupDescription *string `type:"string" required:"true"` + + // The identifier for the copied DB cluster parameter group. 
+ // + // Constraints: + // + // * Cannot be null, empty, or blank + // + // * Must contain from 1 to 255 letters, numbers, or hyphens + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + // + // Example: my-cluster-param-group1 + // + // TargetDBClusterParameterGroupIdentifier is a required field + TargetDBClusterParameterGroupIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CopyDBClusterParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyDBClusterParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CopyDBClusterParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CopyDBClusterParameterGroupInput"} + if s.SourceDBClusterParameterGroupIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SourceDBClusterParameterGroupIdentifier")) + } + if s.TargetDBClusterParameterGroupDescription == nil { + invalidParams.Add(request.NewErrParamRequired("TargetDBClusterParameterGroupDescription")) + } + if s.TargetDBClusterParameterGroupIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("TargetDBClusterParameterGroupIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSourceDBClusterParameterGroupIdentifier sets the SourceDBClusterParameterGroupIdentifier field's value. +func (s *CopyDBClusterParameterGroupInput) SetSourceDBClusterParameterGroupIdentifier(v string) *CopyDBClusterParameterGroupInput { + s.SourceDBClusterParameterGroupIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CopyDBClusterParameterGroupInput) SetTags(v []*Tag) *CopyDBClusterParameterGroupInput { + s.Tags = v + return s +} + +// SetTargetDBClusterParameterGroupDescription sets the TargetDBClusterParameterGroupDescription field's value. +func (s *CopyDBClusterParameterGroupInput) SetTargetDBClusterParameterGroupDescription(v string) *CopyDBClusterParameterGroupInput { + s.TargetDBClusterParameterGroupDescription = &v + return s +} + +// SetTargetDBClusterParameterGroupIdentifier sets the TargetDBClusterParameterGroupIdentifier field's value. +func (s *CopyDBClusterParameterGroupInput) SetTargetDBClusterParameterGroupIdentifier(v string) *CopyDBClusterParameterGroupInput { + s.TargetDBClusterParameterGroupIdentifier = &v + return s +} + +type CopyDBClusterParameterGroupOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster parameter group. + // + // This data type is used as a response element in the DescribeDBClusterParameterGroups + // action. + DBClusterParameterGroup *DBClusterParameterGroup `type:"structure"` +} + +// String returns the string representation +func (s CopyDBClusterParameterGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyDBClusterParameterGroupOutput) GoString() string { + return s.String() +} + +// SetDBClusterParameterGroup sets the DBClusterParameterGroup field's value. 
+func (s *CopyDBClusterParameterGroupOutput) SetDBClusterParameterGroup(v *DBClusterParameterGroup) *CopyDBClusterParameterGroupOutput {
+ s.DBClusterParameterGroup = v
+ return s
+}
+
+type CopyDBClusterSnapshotInput struct {
+ _ struct{} `type:"structure"`
+
+ // True to copy all tags from the source DB cluster snapshot to the target DB
+ // cluster snapshot, and otherwise false. The default is false.
+ CopyTags *bool `type:"boolean"`
+
+ // The AWS KMS key ID for an encrypted DB cluster snapshot. The KMS key
+ // ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key
+ // alias for the KMS encryption key.
+ //
+ // If you copy an unencrypted DB cluster snapshot and specify a value for the
+ // KmsKeyId parameter, Amazon Neptune encrypts the target DB cluster snapshot
+ // using the specified KMS encryption key.
+ //
+ // If you copy an encrypted DB cluster snapshot from your AWS account, you can
+ // specify a value for KmsKeyId to encrypt the copy with a new KMS encryption
+ // key. If you don't specify a value for KmsKeyId, then the copy of the DB cluster
+ // snapshot is encrypted with the same KMS key as the source DB cluster snapshot.
+ //
+ // If you copy an encrypted DB cluster snapshot that is shared from another
+ // AWS account, then you must specify a value for KmsKeyId.
+ //
+ // To copy an encrypted DB cluster snapshot to another AWS Region, you must
+ // set KmsKeyId to the KMS key ID you want to use to encrypt the copy of the
+ // DB cluster snapshot in the destination AWS Region. KMS encryption keys are
+ // specific to the AWS Region that they are created in, and you can't use encryption
+ // keys from one AWS Region in another AWS Region.
+ KmsKeyId *string `type:"string"`
+
+ // The URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot
+ // API action in the AWS Region that contains the source DB cluster snapshot
+ // to copy. The PreSignedUrl parameter must be used when copying an encrypted
+ // DB cluster snapshot from another AWS Region.
+ //
+ // The pre-signed URL must be a valid request for the CopyDBClusterSnapshot
+ // API action that can be executed in the source AWS Region that contains the
+ // encrypted DB cluster snapshot to be copied. The pre-signed URL request must
+ // contain the following parameter values:
+ //
+ // * KmsKeyId - The AWS KMS key identifier for the key to use to encrypt
+ // the copy of the DB cluster snapshot in the destination AWS Region. This
+ // is the same identifier for both the CopyDBClusterSnapshot action that
+ // is called in the destination AWS Region, and the action contained in the
+ // pre-signed URL.
+ //
+ // * DestinationRegion - The name of the AWS Region that the DB cluster snapshot
+ // will be created in.
+ //
+ // * SourceDBClusterSnapshotIdentifier - The DB cluster snapshot identifier
+ // for the encrypted DB cluster snapshot to be copied. This identifier must
+ // be in the Amazon Resource Name (ARN) format for the source AWS Region.
+ // For example, if you are copying an encrypted DB cluster snapshot from
+ // the us-west-2 AWS Region, then your SourceDBClusterSnapshotIdentifier
+ // looks like the following example: arn:aws:rds:us-west-2:123456789012:cluster-snapshot:neptune-cluster1-snapshot-20161115.
+ // + // To learn how to generate a Signature Version 4 signed request, see Authenticating + // Requests: Using Query Parameters (AWS Signature Version 4) (http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html) + // and Signature Version 4 Signing Process (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). + PreSignedUrl *string `type:"string"` + + // The identifier of the DB cluster snapshot to copy. This parameter is not + // case-sensitive. + // + // You can't copy an encrypted, shared DB cluster snapshot from one AWS Region + // to another. + // + // Constraints: + // + // * Must specify a valid system snapshot in the "available" state. + // + // * If the source snapshot is in the same AWS Region as the copy, specify + // a valid DB snapshot identifier. + // + // * If the source snapshot is in a different AWS Region than the copy, specify + // a valid DB cluster snapshot ARN. + // + // Example: my-cluster-snapshot1 + // + // SourceDBClusterSnapshotIdentifier is a required field + SourceDBClusterSnapshotIdentifier *string `type:"string" required:"true"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). + Tags []*Tag `locationNameList:"Tag" type:"list"` + + // The identifier of the new DB cluster snapshot to create from the source DB + // cluster snapshot. This parameter is not case-sensitive. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Cannot end with a hyphen or contain two consecutive hyphens. + // + // Example: my-cluster-snapshot2 + // + // TargetDBClusterSnapshotIdentifier is a required field + TargetDBClusterSnapshotIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CopyDBClusterSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyDBClusterSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CopyDBClusterSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CopyDBClusterSnapshotInput"} + if s.SourceDBClusterSnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SourceDBClusterSnapshotIdentifier")) + } + if s.TargetDBClusterSnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("TargetDBClusterSnapshotIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCopyTags sets the CopyTags field's value. +func (s *CopyDBClusterSnapshotInput) SetCopyTags(v bool) *CopyDBClusterSnapshotInput { + s.CopyTags = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *CopyDBClusterSnapshotInput) SetKmsKeyId(v string) *CopyDBClusterSnapshotInput { + s.KmsKeyId = &v + return s +} + +// SetPreSignedUrl sets the PreSignedUrl field's value. +func (s *CopyDBClusterSnapshotInput) SetPreSignedUrl(v string) *CopyDBClusterSnapshotInput { + s.PreSignedUrl = &v + return s +} + +// SetSourceDBClusterSnapshotIdentifier sets the SourceDBClusterSnapshotIdentifier field's value. +func (s *CopyDBClusterSnapshotInput) SetSourceDBClusterSnapshotIdentifier(v string) *CopyDBClusterSnapshotInput { + s.SourceDBClusterSnapshotIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. 
+func (s *CopyDBClusterSnapshotInput) SetTags(v []*Tag) *CopyDBClusterSnapshotInput { + s.Tags = v + return s +} + +// SetTargetDBClusterSnapshotIdentifier sets the TargetDBClusterSnapshotIdentifier field's value. +func (s *CopyDBClusterSnapshotInput) SetTargetDBClusterSnapshotIdentifier(v string) *CopyDBClusterSnapshotInput { + s.TargetDBClusterSnapshotIdentifier = &v + return s +} + +type CopyDBClusterSnapshotOutput struct { + _ struct{} `type:"structure"` + + // Contains the details for an Amazon Neptune DB cluster snapshot + // + // This data type is used as a response element in the DescribeDBClusterSnapshots + // action. + DBClusterSnapshot *DBClusterSnapshot `type:"structure"` +} + +// String returns the string representation +func (s CopyDBClusterSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyDBClusterSnapshotOutput) GoString() string { + return s.String() +} + +// SetDBClusterSnapshot sets the DBClusterSnapshot field's value. +func (s *CopyDBClusterSnapshotOutput) SetDBClusterSnapshot(v *DBClusterSnapshot) *CopyDBClusterSnapshotOutput { + s.DBClusterSnapshot = v + return s +} + +type CopyDBParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The identifier or ARN for the source DB parameter group. For information + // about creating an ARN, see Constructing an Amazon Resource Name (ARN) (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html#tagging.ARN.Constructing). + // + // Constraints: + // + // * Must specify a valid DB parameter group. + // + // * Must specify a valid DB parameter group identifier, for example my-db-param-group, + // or a valid ARN. + // + // SourceDBParameterGroupIdentifier is a required field + SourceDBParameterGroupIdentifier *string `type:"string" required:"true"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). + Tags []*Tag `locationNameList:"Tag" type:"list"` + + // A description for the copied DB parameter group. + // + // TargetDBParameterGroupDescription is a required field + TargetDBParameterGroupDescription *string `type:"string" required:"true"` + + // The identifier for the copied DB parameter group. + // + // Constraints: + // + // * Cannot be null, empty, or blank + // + // * Must contain from 1 to 255 letters, numbers, or hyphens + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + // + // Example: my-db-parameter-group + // + // TargetDBParameterGroupIdentifier is a required field + TargetDBParameterGroupIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CopyDBParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyDBParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CopyDBParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CopyDBParameterGroupInput"} + if s.SourceDBParameterGroupIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SourceDBParameterGroupIdentifier")) + } + if s.TargetDBParameterGroupDescription == nil { + invalidParams.Add(request.NewErrParamRequired("TargetDBParameterGroupDescription")) + } + if s.TargetDBParameterGroupIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("TargetDBParameterGroupIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSourceDBParameterGroupIdentifier sets the SourceDBParameterGroupIdentifier field's value. +func (s *CopyDBParameterGroupInput) SetSourceDBParameterGroupIdentifier(v string) *CopyDBParameterGroupInput { + s.SourceDBParameterGroupIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CopyDBParameterGroupInput) SetTags(v []*Tag) *CopyDBParameterGroupInput { + s.Tags = v + return s +} + +// SetTargetDBParameterGroupDescription sets the TargetDBParameterGroupDescription field's value. +func (s *CopyDBParameterGroupInput) SetTargetDBParameterGroupDescription(v string) *CopyDBParameterGroupInput { + s.TargetDBParameterGroupDescription = &v + return s +} + +// SetTargetDBParameterGroupIdentifier sets the TargetDBParameterGroupIdentifier field's value. +func (s *CopyDBParameterGroupInput) SetTargetDBParameterGroupIdentifier(v string) *CopyDBParameterGroupInput { + s.TargetDBParameterGroupIdentifier = &v + return s +} + +type CopyDBParameterGroupOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB parameter group. + // + // This data type is used as a response element in the DescribeDBParameterGroups + // action. + DBParameterGroup *DBParameterGroup `type:"structure"` +} + +// String returns the string representation +func (s CopyDBParameterGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyDBParameterGroupOutput) GoString() string { + return s.String() +} + +// SetDBParameterGroup sets the DBParameterGroup field's value. +func (s *CopyDBParameterGroupOutput) SetDBParameterGroup(v *DBParameterGroup) *CopyDBParameterGroupOutput { + s.DBParameterGroup = v + return s +} + +type CreateDBClusterInput struct { + _ struct{} `type:"structure"` + + // A list of EC2 Availability Zones that instances in the DB cluster can be + // created in. + AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` + + // The number of days for which automated backups are retained. You must specify + // a minimum value of 1. + // + // Default: 1 + // + // Constraints: + // + // * Must be a value from 1 to 35 + BackupRetentionPeriod *int64 `type:"integer"` + + // A value that indicates that the DB cluster should be associated with the + // specified CharacterSet. + CharacterSetName *string `type:"string"` + + // The DB cluster identifier. This parameter is stored as a lowercase string. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Cannot end with a hyphen or contain two consecutive hyphens. + // + // Example: my-cluster1 + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The name of the DB cluster parameter group to associate with this DB cluster. 
+ // If this argument is omitted, the default is used. + // + // Constraints: + // + // * If supplied, must match the name of an existing DBClusterParameterGroup. + DBClusterParameterGroupName *string `type:"string"` + + // A DB subnet group to associate with this DB cluster. + // + // Constraints: Must match the name of an existing DBSubnetGroup. Must not be + // default. + // + // Example: mySubnetgroup + DBSubnetGroupName *string `type:"string"` + + // The name for your database of up to 64 alpha-numeric characters. If you do + // not provide a name, Amazon Neptune will not create a database in the DB cluster + // you are creating. + DatabaseName *string `type:"string"` + + // True to enable mapping of AWS Identity and Access Management (IAM) accounts + // to database accounts, and otherwise false. + // + // Default: false + EnableIAMDatabaseAuthentication *bool `type:"boolean"` + + // The name of the database engine to be used for this DB cluster. + // + // Valid Values: neptune + // + // Engine is a required field + Engine *string `type:"string" required:"true"` + + // The version number of the database engine to use. + // + // Example: 1.0.1 + EngineVersion *string `type:"string"` + + // The AWS KMS key identifier for an encrypted DB cluster. + // + // The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption + // key. If you are creating a DB cluster with the same AWS account that owns + // the KMS encryption key used to encrypt the new DB cluster, then you can use + // the KMS key alias instead of the ARN for the KMS encryption key. + // + // If an encryption key is not specified in KmsKeyId: + // + // * If ReplicationSourceIdentifier identifies an encrypted source, then + // Amazon Neptune will use the encryption key used to encrypt the source. + // Otherwise, Amazon Neptune will use your default encryption key. + // + // * If the StorageEncrypted parameter is true and ReplicationSourceIdentifier + // is not specified, then Amazon Neptune will use your default encryption + // key. + // + // AWS KMS creates the default encryption key for your AWS account. Your AWS + // account has a different default encryption key for each AWS Region. + // + // If you create a Read Replica of an encrypted DB cluster in another AWS Region, + // you must set KmsKeyId to a KMS key ID that is valid in the destination AWS + // Region. This key is used to encrypt the Read Replica in that AWS Region. + KmsKeyId *string `type:"string"` + + // The password for the master database user. This password can contain any + // printable ASCII character except "/", """, or "@". + // + // Constraints: Must contain from 8 to 41 characters. + MasterUserPassword *string `type:"string"` + + // The name of the master user for the DB cluster. + // + // Constraints: + // + // * Must be 1 to 16 letters or numbers. + // + // * First character must be a letter. + // + // * Cannot be a reserved word for the chosen database engine. + MasterUsername *string `type:"string"` + + // A value that indicates that the DB cluster should be associated with the + // specified option group. + // + // Permanent options can't be removed from an option group. The option group + // can't be removed from a DB cluster once it is associated with a DB cluster. + OptionGroupName *string `type:"string"` + + // The port number on which the instances in the DB cluster accept connections. 
+ // + // Default: 8182 + Port *int64 `type:"integer"` + + // A URL that contains a Signature Version 4 signed request for the CreateDBCluster + // action to be called in the source AWS Region where the DB cluster is replicated + // from. You only need to specify PreSignedUrl when you are performing cross-region + // replication from an encrypted DB cluster. + // + // The pre-signed URL must be a valid request for the CreateDBCluster API action + // that can be executed in the source AWS Region that contains the encrypted + // DB cluster to be copied. + // + // The pre-signed URL request must contain the following parameter values: + // + // * KmsKeyId - The AWS KMS key identifier for the key to use to encrypt + // the copy of the DB cluster in the destination AWS Region. This should + // refer to the same KMS key for both the CreateDBCluster action that is + // called in the destination AWS Region, and the action contained in the + // pre-signed URL. + // + // * DestinationRegion - The name of the AWS Region that Read Replica will + // be created in. + // + // * ReplicationSourceIdentifier - The DB cluster identifier for the encrypted + // DB cluster to be copied. This identifier must be in the Amazon Resource + // Name (ARN) format for the source AWS Region. For example, if you are copying + // an encrypted DB cluster from the us-west-2 AWS Region, then your ReplicationSourceIdentifier + // would look like Example: arn:aws:rds:us-west-2:123456789012:cluster:neptune-cluster1. + // + // To learn how to generate a Signature Version 4 signed request, see Authenticating + // Requests: Using Query Parameters (AWS Signature Version 4) (http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html) + // and Signature Version 4 Signing Process (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). + PreSignedUrl *string `type:"string"` + + // The daily time range during which automated backups are created if automated + // backups are enabled using the BackupRetentionPeriod parameter. + // + // The default is a 30-minute window selected at random from an 8-hour block + // of time for each AWS Region. To see the time blocks available, see Adjusting + // the Preferred Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) + // in the Amazon Neptune User Guide. + // + // Constraints: + // + // * Must be in the format hh24:mi-hh24:mi. + // + // * Must be in Universal Coordinated Time (UTC). + // + // * Must not conflict with the preferred maintenance window. + // + // * Must be at least 30 minutes. + PreferredBackupWindow *string `type:"string"` + + // The weekly time range during which system maintenance can occur, in Universal + // Coordinated Time (UTC). + // + // Format: ddd:hh24:mi-ddd:hh24:mi + // + // The default is a 30-minute window selected at random from an 8-hour block + // of time for each AWS Region, occurring on a random day of the week. To see + // the time blocks available, see Adjusting the Preferred Maintenance Window + // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) + // in the Amazon Neptune User Guide. + // + // Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. + // + // Constraints: Minimum 30-minute window. + PreferredMaintenanceWindow *string `type:"string"` + + // The Amazon Resource Name (ARN) of the source DB instance or DB cluster if + // this DB cluster is created as a Read Replica. 
+ ReplicationSourceIdentifier *string `type:"string"` + + // Specifies whether the DB cluster is encrypted. + StorageEncrypted *bool `type:"boolean"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). + Tags []*Tag `locationNameList:"Tag" type:"list"` + + // A list of EC2 VPC security groups to associate with this DB cluster. + VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` +} + +// String returns the string representation +func (s CreateDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDBClusterInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + if s.Engine == nil { + invalidParams.Add(request.NewErrParamRequired("Engine")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *CreateDBClusterInput) SetAvailabilityZones(v []*string) *CreateDBClusterInput { + s.AvailabilityZones = v + return s +} + +// SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. +func (s *CreateDBClusterInput) SetBackupRetentionPeriod(v int64) *CreateDBClusterInput { + s.BackupRetentionPeriod = &v + return s +} + +// SetCharacterSetName sets the CharacterSetName field's value. +func (s *CreateDBClusterInput) SetCharacterSetName(v string) *CreateDBClusterInput { + s.CharacterSetName = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *CreateDBClusterInput) SetDBClusterIdentifier(v string) *CreateDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *CreateDBClusterInput) SetDBClusterParameterGroupName(v string) *CreateDBClusterInput { + s.DBClusterParameterGroupName = &v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *CreateDBClusterInput) SetDBSubnetGroupName(v string) *CreateDBClusterInput { + s.DBSubnetGroupName = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *CreateDBClusterInput) SetDatabaseName(v string) *CreateDBClusterInput { + s.DatabaseName = &v + return s +} + +// SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. +func (s *CreateDBClusterInput) SetEnableIAMDatabaseAuthentication(v bool) *CreateDBClusterInput { + s.EnableIAMDatabaseAuthentication = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *CreateDBClusterInput) SetEngine(v string) *CreateDBClusterInput { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *CreateDBClusterInput) SetEngineVersion(v string) *CreateDBClusterInput { + s.EngineVersion = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *CreateDBClusterInput) SetKmsKeyId(v string) *CreateDBClusterInput { + s.KmsKeyId = &v + return s +} + +// SetMasterUserPassword sets the MasterUserPassword field's value. 
+func (s *CreateDBClusterInput) SetMasterUserPassword(v string) *CreateDBClusterInput { + s.MasterUserPassword = &v + return s +} + +// SetMasterUsername sets the MasterUsername field's value. +func (s *CreateDBClusterInput) SetMasterUsername(v string) *CreateDBClusterInput { + s.MasterUsername = &v + return s +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *CreateDBClusterInput) SetOptionGroupName(v string) *CreateDBClusterInput { + s.OptionGroupName = &v + return s +} + +// SetPort sets the Port field's value. +func (s *CreateDBClusterInput) SetPort(v int64) *CreateDBClusterInput { + s.Port = &v + return s +} + +// SetPreSignedUrl sets the PreSignedUrl field's value. +func (s *CreateDBClusterInput) SetPreSignedUrl(v string) *CreateDBClusterInput { + s.PreSignedUrl = &v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *CreateDBClusterInput) SetPreferredBackupWindow(v string) *CreateDBClusterInput { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *CreateDBClusterInput) SetPreferredMaintenanceWindow(v string) *CreateDBClusterInput { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetReplicationSourceIdentifier sets the ReplicationSourceIdentifier field's value. +func (s *CreateDBClusterInput) SetReplicationSourceIdentifier(v string) *CreateDBClusterInput { + s.ReplicationSourceIdentifier = &v + return s +} + +// SetStorageEncrypted sets the StorageEncrypted field's value. +func (s *CreateDBClusterInput) SetStorageEncrypted(v bool) *CreateDBClusterInput { + s.StorageEncrypted = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDBClusterInput) SetTags(v []*Tag) *CreateDBClusterInput { + s.Tags = v + return s +} + +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *CreateDBClusterInput) SetVpcSecurityGroupIds(v []*string) *CreateDBClusterInput { + s.VpcSecurityGroupIds = v + return s +} + +type CreateDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters action. + DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s CreateDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *CreateDBClusterOutput) SetDBCluster(v *DBCluster) *CreateDBClusterOutput { + s.DBCluster = v + return s +} + +type CreateDBClusterParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the DB cluster parameter group. + // + // Constraints: + // + // * Must match the name of an existing DBClusterParameterGroup. + // + // This value is stored as a lowercase string. + // + // DBClusterParameterGroupName is a required field + DBClusterParameterGroupName *string `type:"string" required:"true"` + + // The DB cluster parameter group family name. A DB cluster parameter group + // can be associated with one and only one DB cluster parameter group family, + // and can be applied only to a DB cluster running a database engine and engine + // version compatible with that DB cluster parameter group family. 
+ // + // DBParameterGroupFamily is a required field + DBParameterGroupFamily *string `type:"string" required:"true"` + + // The description for the DB cluster parameter group. + // + // Description is a required field + Description *string `type:"string" required:"true"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). + Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s CreateDBClusterParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDBClusterParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDBClusterParameterGroupInput"} + if s.DBClusterParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterParameterGroupName")) + } + if s.DBParameterGroupFamily == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupFamily")) + } + if s.Description == nil { + invalidParams.Add(request.NewErrParamRequired("Description")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *CreateDBClusterParameterGroupInput) SetDBClusterParameterGroupName(v string) *CreateDBClusterParameterGroupInput { + s.DBClusterParameterGroupName = &v + return s +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. +func (s *CreateDBClusterParameterGroupInput) SetDBParameterGroupFamily(v string) *CreateDBClusterParameterGroupInput { + s.DBParameterGroupFamily = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateDBClusterParameterGroupInput) SetDescription(v string) *CreateDBClusterParameterGroupInput { + s.Description = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDBClusterParameterGroupInput) SetTags(v []*Tag) *CreateDBClusterParameterGroupInput { + s.Tags = v + return s +} + +type CreateDBClusterParameterGroupOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster parameter group. + // + // This data type is used as a response element in the DescribeDBClusterParameterGroups + // action. + DBClusterParameterGroup *DBClusterParameterGroup `type:"structure"` +} + +// String returns the string representation +func (s CreateDBClusterParameterGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterParameterGroupOutput) GoString() string { + return s.String() +} + +// SetDBClusterParameterGroup sets the DBClusterParameterGroup field's value. +func (s *CreateDBClusterParameterGroupOutput) SetDBClusterParameterGroup(v *DBClusterParameterGroup) *CreateDBClusterParameterGroupOutput { + s.DBClusterParameterGroup = v + return s +} + +type CreateDBClusterSnapshotInput struct { + _ struct{} `type:"structure"` + + // The identifier of the DB cluster to create a snapshot for. This parameter + // is not case-sensitive. + // + // Constraints: + // + // * Must match the identifier of an existing DBCluster. 
+ // + // Example: my-cluster1 + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The identifier of the DB cluster snapshot. This parameter is stored as a + // lowercase string. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Cannot end with a hyphen or contain two consecutive hyphens. + // + // Example: my-cluster1-snapshot1 + // + // DBClusterSnapshotIdentifier is a required field + DBClusterSnapshotIdentifier *string `type:"string" required:"true"` + + // The tags to be assigned to the DB cluster snapshot. + Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s CreateDBClusterSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDBClusterSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDBClusterSnapshotInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + if s.DBClusterSnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterSnapshotIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *CreateDBClusterSnapshotInput) SetDBClusterIdentifier(v string) *CreateDBClusterSnapshotInput { + s.DBClusterIdentifier = &v + return s +} + +// SetDBClusterSnapshotIdentifier sets the DBClusterSnapshotIdentifier field's value. +func (s *CreateDBClusterSnapshotInput) SetDBClusterSnapshotIdentifier(v string) *CreateDBClusterSnapshotInput { + s.DBClusterSnapshotIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDBClusterSnapshotInput) SetTags(v []*Tag) *CreateDBClusterSnapshotInput { + s.Tags = v + return s +} + +type CreateDBClusterSnapshotOutput struct { + _ struct{} `type:"structure"` + + // Contains the details for an Amazon Neptune DB cluster snapshot + // + // This data type is used as a response element in the DescribeDBClusterSnapshots + // action. + DBClusterSnapshot *DBClusterSnapshot `type:"structure"` +} + +// String returns the string representation +func (s CreateDBClusterSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterSnapshotOutput) GoString() string { + return s.String() +} + +// SetDBClusterSnapshot sets the DBClusterSnapshot field's value. +func (s *CreateDBClusterSnapshotOutput) SetDBClusterSnapshot(v *DBClusterSnapshot) *CreateDBClusterSnapshotOutput { + s.DBClusterSnapshot = v + return s +} + +type CreateDBInstanceInput struct { + _ struct{} `type:"structure"` + + // The amount of storage (in gibibytes) to allocate for the DB instance. + // + // Type: Integer + // + // Not applicable. Neptune cluster volumes automatically grow as the amount + // of data in your database increases, though you are only charged for the space + // that you use in a Neptune cluster volume. + AllocatedStorage *int64 `type:"integer"` + + // Indicates that minor engine upgrades are applied automatically to the DB + // instance during the maintenance window. 
+ // + // Default: true + AutoMinorVersionUpgrade *bool `type:"boolean"` + + // The EC2 Availability Zone that the DB instance is created in. + // + // Default: A random, system-chosen Availability Zone in the endpoint's AWS + // Region. + // + // Example: us-east-1d + // + // Constraint: The AvailabilityZone parameter can't be specified if the MultiAZ + // parameter is set to true. The specified Availability Zone must be in the + // same AWS Region as the current endpoint. + AvailabilityZone *string `type:"string"` + + // The number of days for which automated backups are retained. + // + // Not applicable. The retention period for automated backups is managed by + // the DB cluster. For more information, see CreateDBCluster. + // + // Default: 1 + // + // Constraints: + // + // * Must be a value from 0 to 35 + // + // * Cannot be set to 0 if the DB instance is a source to Read Replicas + BackupRetentionPeriod *int64 `type:"integer"` + + // Indicates that the DB instance should be associated with the specified CharacterSet. + // + // Not applicable. The character set is managed by the DB cluster. For more + // information, see CreateDBCluster. + CharacterSetName *string `type:"string"` + + // True to copy all tags from the DB instance to snapshots of the DB instance, + // and otherwise false. The default is false. + CopyTagsToSnapshot *bool `type:"boolean"` + + // The identifier of the DB cluster that the instance will belong to. + // + // For information on creating a DB cluster, see CreateDBCluster. + // + // Type: String + DBClusterIdentifier *string `type:"string"` + + // The compute and memory capacity of the DB instance, for example, db.m4.large. + // Not all DB instance classes are available in all AWS Regions. + // + // DBInstanceClass is a required field + DBInstanceClass *string `type:"string" required:"true"` + + // The DB instance identifier. This parameter is stored as a lowercase string. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Cannot end with a hyphen or contain two consecutive hyphens. + // + // Example: mydbinstance + // + // DBInstanceIdentifier is a required field + DBInstanceIdentifier *string `type:"string" required:"true"` + + // The database name. + // + // Type: String + DBName *string `type:"string"` + + // The name of the DB parameter group to associate with this DB instance. If + // this argument is omitted, the default DBParameterGroup for the specified + // engine is used. + // + // Constraints: + // + // * Must be 1 to 255 letters, numbers, or hyphens. + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + DBParameterGroupName *string `type:"string"` + + // A list of DB security groups to associate with this DB instance. + // + // Default: The default DB security group for the database engine. + DBSecurityGroups []*string `locationNameList:"DBSecurityGroupName" type:"list"` + + // A DB subnet group to associate with this DB instance. + // + // If there is no DB subnet group, then it is a non-VPC DB instance. + DBSubnetGroupName *string `type:"string"` + + // Specify the Active Directory Domain to create the instance in. + Domain *string `type:"string"` + + // Specify the name of the IAM role to be used when making API calls to the + // Directory Service. + DomainIAMRoleName *string `type:"string"` + + // The list of log types that need to be enabled for exporting to CloudWatch + // Logs. 
+ EnableCloudwatchLogsExports []*string `type:"list"`
+
+ // True to enable AWS Identity and Access Management (IAM) authentication for
+ // Neptune.
+ //
+ // Default: false
+ EnableIAMDatabaseAuthentication *bool `type:"boolean"`
+
+ // True to enable Performance Insights for the DB instance, and otherwise false.
+ EnablePerformanceInsights *bool `type:"boolean"`
+
+ // The name of the database engine to be used for this instance.
+ //
+ // Valid Values: neptune
+ //
+ // Engine is a required field
+ Engine *string `type:"string" required:"true"`
+
+ // The version number of the database engine to use.
+ EngineVersion *string `type:"string"`
+
+ // The amount of Provisioned IOPS (input/output operations per second) to be
+ // initially allocated for the DB instance.
+ Iops *int64 `type:"integer"`
+
+ // The AWS KMS key identifier for an encrypted DB instance.
+ //
+ // The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption
+ // key. If you are creating a DB instance with the same AWS account that owns
+ // the KMS encryption key used to encrypt the new DB instance, then you can
+ // use the KMS key alias instead of the ARN for the KMS encryption key.
+ //
+ // Not applicable. The KMS key identifier is managed by the DB cluster. For
+ // more information, see CreateDBCluster.
+ //
+ // If the StorageEncrypted parameter is true, and you do not specify a value
+ // for the KmsKeyId parameter, then Amazon Neptune will use your default encryption
+ // key. AWS KMS creates the default encryption key for your AWS account. Your
+ // AWS account has a different default encryption key for each AWS Region.
+ KmsKeyId *string `type:"string"`
+
+ // License model information for this DB instance.
+ //
+ // Valid values: license-included | bring-your-own-license | general-public-license
+ LicenseModel *string `type:"string"`
+
+ // The password for the master user. The password can include any printable
+ // ASCII character except "/", """, or "@".
+ //
+ // Not used.
+ MasterUserPassword *string `type:"string"`
+
+ // The name for the master user. Not used.
+ MasterUsername *string `type:"string"`
+
+ // The interval, in seconds, between points when Enhanced Monitoring metrics
+ // are collected for the DB instance. To disable collecting Enhanced Monitoring
+ // metrics, specify 0. The default is 0.
+ //
+ // If MonitoringRoleArn is specified, then you must also set MonitoringInterval
+ // to a value other than 0.
+ //
+ // Valid Values: 0, 1, 5, 10, 15, 30, 60
+ MonitoringInterval *int64 `type:"integer"`
+
+ // The ARN for the IAM role that permits Neptune to send enhanced monitoring
+ // metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess.
+ //
+ // If MonitoringInterval is set to a value other than 0, then you must supply
+ // a MonitoringRoleArn value.
+ MonitoringRoleArn *string `type:"string"`
+
+ // Specifies if the DB instance is a Multi-AZ deployment. You can't set the
+ // AvailabilityZone parameter if the MultiAZ parameter is set to true.
+ MultiAZ *bool `type:"boolean"`
+
+ // Indicates that the DB instance should be associated with the specified option
+ // group.
+ //
+ // Permanent options, such as the TDE option for Oracle Advanced Security TDE,
+ // can't be removed from an option group, and that option group can't be removed
+ // from a DB instance once it is associated with a DB instance.
+ OptionGroupName *string `type:"string"`
+
+ // The AWS KMS key identifier for encryption of Performance Insights data.
The
+ // KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the
+ // KMS key alias for the KMS encryption key.
+ PerformanceInsightsKMSKeyId *string `type:"string"`
+
+ // The port number on which the database accepts connections.
+ //
+ // Not applicable. The port is managed by the DB cluster. For more information,
+ // see CreateDBCluster.
+ //
+ // Default: 8182
+ //
+ // Type: Integer
+ Port *int64 `type:"integer"`
+
+ // The daily time range during which automated backups are created.
+ //
+ // Not applicable. The daily time range for creating automated backups is managed
+ // by the DB cluster. For more information, see CreateDBCluster.
+ PreferredBackupWindow *string `type:"string"`
+
+ // The time range each week during which system maintenance can occur, in Universal
+ // Coordinated Time (UTC).
+ //
+ // Format: ddd:hh24:mi-ddd:hh24:mi
+ //
+ // The default is a 30-minute window selected at random from an 8-hour block
+ // of time for each AWS Region, occurring on a random day of the week.
+ //
+ // Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.
+ //
+ // Constraints: Minimum 30-minute window.
+ PreferredMaintenanceWindow *string `type:"string"`
+
+ // A value that specifies the order in which a Read Replica is promoted to
+ // the primary instance after a failure of the existing primary instance.
+ //
+ // Default: 1
+ //
+ // Valid Values: 0 - 15
+ PromotionTier *int64 `type:"integer"`
+
+ // This parameter is not supported.
+ //
+ // Deprecated: PubliclyAccessible has been deprecated
+ PubliclyAccessible *bool `deprecated:"true" type:"boolean"`
+
+ // Specifies whether the DB instance is encrypted.
+ //
+ // Not applicable. The encryption for DB instances is managed by the DB cluster.
+ // For more information, see CreateDBCluster.
+ //
+ // Default: false
+ StorageEncrypted *bool `type:"boolean"`
+
+ // Specifies the storage type to be associated with the DB instance.
+ //
+ // Not applicable. Storage is managed by the DB Cluster.
+ StorageType *string `type:"string"`
+
+ // A list of tags. For more information, see Tagging Amazon Neptune Resources
+ // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html).
+ Tags []*Tag `locationNameList:"Tag" type:"list"`
+
+ // The ARN from the key store with which to associate the instance for TDE encryption.
+ TdeCredentialArn *string `type:"string"`
+
+ // The password for the given ARN from the key store in order to access the
+ // device.
+ TdeCredentialPassword *string `type:"string"`
+
+ // The time zone of the DB instance.
+ Timezone *string `type:"string"`
+
+ // A list of EC2 VPC security groups to associate with this DB instance.
+ //
+ // Not applicable. The associated list of EC2 VPC security groups is managed
+ // by the DB cluster. For more information, see CreateDBCluster.
+ //
+ // Default: The default EC2 VPC security group for the DB subnet group's VPC.
+ VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"`
+}
+
+// String returns the string representation
+func (s CreateDBInstanceInput) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateDBInstanceInput) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *CreateDBInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDBInstanceInput"} + if s.DBInstanceClass == nil { + invalidParams.Add(request.NewErrParamRequired("DBInstanceClass")) + } + if s.DBInstanceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBInstanceIdentifier")) + } + if s.Engine == nil { + invalidParams.Add(request.NewErrParamRequired("Engine")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllocatedStorage sets the AllocatedStorage field's value. +func (s *CreateDBInstanceInput) SetAllocatedStorage(v int64) *CreateDBInstanceInput { + s.AllocatedStorage = &v + return s +} + +// SetAutoMinorVersionUpgrade sets the AutoMinorVersionUpgrade field's value. +func (s *CreateDBInstanceInput) SetAutoMinorVersionUpgrade(v bool) *CreateDBInstanceInput { + s.AutoMinorVersionUpgrade = &v + return s +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateDBInstanceInput) SetAvailabilityZone(v string) *CreateDBInstanceInput { + s.AvailabilityZone = &v + return s +} + +// SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. +func (s *CreateDBInstanceInput) SetBackupRetentionPeriod(v int64) *CreateDBInstanceInput { + s.BackupRetentionPeriod = &v + return s +} + +// SetCharacterSetName sets the CharacterSetName field's value. +func (s *CreateDBInstanceInput) SetCharacterSetName(v string) *CreateDBInstanceInput { + s.CharacterSetName = &v + return s +} + +// SetCopyTagsToSnapshot sets the CopyTagsToSnapshot field's value. +func (s *CreateDBInstanceInput) SetCopyTagsToSnapshot(v bool) *CreateDBInstanceInput { + s.CopyTagsToSnapshot = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *CreateDBInstanceInput) SetDBClusterIdentifier(v string) *CreateDBInstanceInput { + s.DBClusterIdentifier = &v + return s +} + +// SetDBInstanceClass sets the DBInstanceClass field's value. +func (s *CreateDBInstanceInput) SetDBInstanceClass(v string) *CreateDBInstanceInput { + s.DBInstanceClass = &v + return s +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *CreateDBInstanceInput) SetDBInstanceIdentifier(v string) *CreateDBInstanceInput { + s.DBInstanceIdentifier = &v + return s +} + +// SetDBName sets the DBName field's value. +func (s *CreateDBInstanceInput) SetDBName(v string) *CreateDBInstanceInput { + s.DBName = &v + return s +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *CreateDBInstanceInput) SetDBParameterGroupName(v string) *CreateDBInstanceInput { + s.DBParameterGroupName = &v + return s +} + +// SetDBSecurityGroups sets the DBSecurityGroups field's value. +func (s *CreateDBInstanceInput) SetDBSecurityGroups(v []*string) *CreateDBInstanceInput { + s.DBSecurityGroups = v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *CreateDBInstanceInput) SetDBSubnetGroupName(v string) *CreateDBInstanceInput { + s.DBSubnetGroupName = &v + return s +} + +// SetDomain sets the Domain field's value. +func (s *CreateDBInstanceInput) SetDomain(v string) *CreateDBInstanceInput { + s.Domain = &v + return s +} + +// SetDomainIAMRoleName sets the DomainIAMRoleName field's value. +func (s *CreateDBInstanceInput) SetDomainIAMRoleName(v string) *CreateDBInstanceInput { + s.DomainIAMRoleName = &v + return s +} + +// SetEnableCloudwatchLogsExports sets the EnableCloudwatchLogsExports field's value. 
+func (s *CreateDBInstanceInput) SetEnableCloudwatchLogsExports(v []*string) *CreateDBInstanceInput { + s.EnableCloudwatchLogsExports = v + return s +} + +// SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. +func (s *CreateDBInstanceInput) SetEnableIAMDatabaseAuthentication(v bool) *CreateDBInstanceInput { + s.EnableIAMDatabaseAuthentication = &v + return s +} + +// SetEnablePerformanceInsights sets the EnablePerformanceInsights field's value. +func (s *CreateDBInstanceInput) SetEnablePerformanceInsights(v bool) *CreateDBInstanceInput { + s.EnablePerformanceInsights = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *CreateDBInstanceInput) SetEngine(v string) *CreateDBInstanceInput { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *CreateDBInstanceInput) SetEngineVersion(v string) *CreateDBInstanceInput { + s.EngineVersion = &v + return s +} + +// SetIops sets the Iops field's value. +func (s *CreateDBInstanceInput) SetIops(v int64) *CreateDBInstanceInput { + s.Iops = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *CreateDBInstanceInput) SetKmsKeyId(v string) *CreateDBInstanceInput { + s.KmsKeyId = &v + return s +} + +// SetLicenseModel sets the LicenseModel field's value. +func (s *CreateDBInstanceInput) SetLicenseModel(v string) *CreateDBInstanceInput { + s.LicenseModel = &v + return s +} + +// SetMasterUserPassword sets the MasterUserPassword field's value. +func (s *CreateDBInstanceInput) SetMasterUserPassword(v string) *CreateDBInstanceInput { + s.MasterUserPassword = &v + return s +} + +// SetMasterUsername sets the MasterUsername field's value. +func (s *CreateDBInstanceInput) SetMasterUsername(v string) *CreateDBInstanceInput { + s.MasterUsername = &v + return s +} + +// SetMonitoringInterval sets the MonitoringInterval field's value. +func (s *CreateDBInstanceInput) SetMonitoringInterval(v int64) *CreateDBInstanceInput { + s.MonitoringInterval = &v + return s +} + +// SetMonitoringRoleArn sets the MonitoringRoleArn field's value. +func (s *CreateDBInstanceInput) SetMonitoringRoleArn(v string) *CreateDBInstanceInput { + s.MonitoringRoleArn = &v + return s +} + +// SetMultiAZ sets the MultiAZ field's value. +func (s *CreateDBInstanceInput) SetMultiAZ(v bool) *CreateDBInstanceInput { + s.MultiAZ = &v + return s +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *CreateDBInstanceInput) SetOptionGroupName(v string) *CreateDBInstanceInput { + s.OptionGroupName = &v + return s +} + +// SetPerformanceInsightsKMSKeyId sets the PerformanceInsightsKMSKeyId field's value. +func (s *CreateDBInstanceInput) SetPerformanceInsightsKMSKeyId(v string) *CreateDBInstanceInput { + s.PerformanceInsightsKMSKeyId = &v + return s +} + +// SetPort sets the Port field's value. +func (s *CreateDBInstanceInput) SetPort(v int64) *CreateDBInstanceInput { + s.Port = &v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *CreateDBInstanceInput) SetPreferredBackupWindow(v string) *CreateDBInstanceInput { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *CreateDBInstanceInput) SetPreferredMaintenanceWindow(v string) *CreateDBInstanceInput { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetPromotionTier sets the PromotionTier field's value. 
+func (s *CreateDBInstanceInput) SetPromotionTier(v int64) *CreateDBInstanceInput { + s.PromotionTier = &v + return s +} + +// SetPubliclyAccessible sets the PubliclyAccessible field's value. +func (s *CreateDBInstanceInput) SetPubliclyAccessible(v bool) *CreateDBInstanceInput { + s.PubliclyAccessible = &v + return s +} + +// SetStorageEncrypted sets the StorageEncrypted field's value. +func (s *CreateDBInstanceInput) SetStorageEncrypted(v bool) *CreateDBInstanceInput { + s.StorageEncrypted = &v + return s +} + +// SetStorageType sets the StorageType field's value. +func (s *CreateDBInstanceInput) SetStorageType(v string) *CreateDBInstanceInput { + s.StorageType = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDBInstanceInput) SetTags(v []*Tag) *CreateDBInstanceInput { + s.Tags = v + return s +} + +// SetTdeCredentialArn sets the TdeCredentialArn field's value. +func (s *CreateDBInstanceInput) SetTdeCredentialArn(v string) *CreateDBInstanceInput { + s.TdeCredentialArn = &v + return s +} + +// SetTdeCredentialPassword sets the TdeCredentialPassword field's value. +func (s *CreateDBInstanceInput) SetTdeCredentialPassword(v string) *CreateDBInstanceInput { + s.TdeCredentialPassword = &v + return s +} + +// SetTimezone sets the Timezone field's value. +func (s *CreateDBInstanceInput) SetTimezone(v string) *CreateDBInstanceInput { + s.Timezone = &v + return s +} + +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *CreateDBInstanceInput) SetVpcSecurityGroupIds(v []*string) *CreateDBInstanceInput { + s.VpcSecurityGroupIds = v + return s +} + +type CreateDBInstanceOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB instance. + // + // This data type is used as a response element in the DescribeDBInstances action. + DBInstance *DBInstance `type:"structure"` +} + +// String returns the string representation +func (s CreateDBInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBInstanceOutput) GoString() string { + return s.String() +} + +// SetDBInstance sets the DBInstance field's value. +func (s *CreateDBInstanceOutput) SetDBInstance(v *DBInstance) *CreateDBInstanceOutput { + s.DBInstance = v + return s +} + +type CreateDBParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The DB parameter group family name. A DB parameter group can be associated + // with one and only one DB parameter group family, and can be applied only + // to a DB instance running a database engine and engine version compatible + // with that DB parameter group family. + // + // DBParameterGroupFamily is a required field + DBParameterGroupFamily *string `type:"string" required:"true"` + + // The name of the DB parameter group. + // + // Constraints: + // + // * Must be 1 to 255 letters, numbers, or hyphens. + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + // + // This value is stored as a lowercase string. + // + // DBParameterGroupName is a required field + DBParameterGroupName *string `type:"string" required:"true"` + + // The description for the DB parameter group. + // + // Description is a required field + Description *string `type:"string" required:"true"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). 
+ Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s CreateDBParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDBParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDBParameterGroupInput"} + if s.DBParameterGroupFamily == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupFamily")) + } + if s.DBParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupName")) + } + if s.Description == nil { + invalidParams.Add(request.NewErrParamRequired("Description")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. +func (s *CreateDBParameterGroupInput) SetDBParameterGroupFamily(v string) *CreateDBParameterGroupInput { + s.DBParameterGroupFamily = &v + return s +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *CreateDBParameterGroupInput) SetDBParameterGroupName(v string) *CreateDBParameterGroupInput { + s.DBParameterGroupName = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateDBParameterGroupInput) SetDescription(v string) *CreateDBParameterGroupInput { + s.Description = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDBParameterGroupInput) SetTags(v []*Tag) *CreateDBParameterGroupInput { + s.Tags = v + return s +} + +type CreateDBParameterGroupOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB parameter group. + // + // This data type is used as a response element in the DescribeDBParameterGroups + // action. + DBParameterGroup *DBParameterGroup `type:"structure"` +} + +// String returns the string representation +func (s CreateDBParameterGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBParameterGroupOutput) GoString() string { + return s.String() +} + +// SetDBParameterGroup sets the DBParameterGroup field's value. +func (s *CreateDBParameterGroupOutput) SetDBParameterGroup(v *DBParameterGroup) *CreateDBParameterGroupOutput { + s.DBParameterGroup = v + return s +} + +type CreateDBSubnetGroupInput struct { + _ struct{} `type:"structure"` + + // The description for the DB subnet group. + // + // DBSubnetGroupDescription is a required field + DBSubnetGroupDescription *string `type:"string" required:"true"` + + // The name for the DB subnet group. This value is stored as a lowercase string. + // + // Constraints: Must contain no more than 255 letters, numbers, periods, underscores, + // spaces, or hyphens. Must not be default. + // + // Example: mySubnetgroup + // + // DBSubnetGroupName is a required field + DBSubnetGroupName *string `type:"string" required:"true"` + + // The EC2 Subnet IDs for the DB subnet group. + // + // SubnetIds is a required field + SubnetIds []*string `locationNameList:"SubnetIdentifier" type:"list" required:"true"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). 
+ Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s CreateDBSubnetGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBSubnetGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDBSubnetGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDBSubnetGroupInput"} + if s.DBSubnetGroupDescription == nil { + invalidParams.Add(request.NewErrParamRequired("DBSubnetGroupDescription")) + } + if s.DBSubnetGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBSubnetGroupName")) + } + if s.SubnetIds == nil { + invalidParams.Add(request.NewErrParamRequired("SubnetIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBSubnetGroupDescription sets the DBSubnetGroupDescription field's value. +func (s *CreateDBSubnetGroupInput) SetDBSubnetGroupDescription(v string) *CreateDBSubnetGroupInput { + s.DBSubnetGroupDescription = &v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *CreateDBSubnetGroupInput) SetDBSubnetGroupName(v string) *CreateDBSubnetGroupInput { + s.DBSubnetGroupName = &v + return s +} + +// SetSubnetIds sets the SubnetIds field's value. +func (s *CreateDBSubnetGroupInput) SetSubnetIds(v []*string) *CreateDBSubnetGroupInput { + s.SubnetIds = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDBSubnetGroupInput) SetTags(v []*Tag) *CreateDBSubnetGroupInput { + s.Tags = v + return s +} + +type CreateDBSubnetGroupOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB subnet group. + // + // This data type is used as a response element in the DescribeDBSubnetGroups + // action. + DBSubnetGroup *DBSubnetGroup `type:"structure"` +} + +// String returns the string representation +func (s CreateDBSubnetGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBSubnetGroupOutput) GoString() string { + return s.String() +} + +// SetDBSubnetGroup sets the DBSubnetGroup field's value. +func (s *CreateDBSubnetGroupOutput) SetDBSubnetGroup(v *DBSubnetGroup) *CreateDBSubnetGroupOutput { + s.DBSubnetGroup = v + return s +} + +type CreateEventSubscriptionInput struct { + _ struct{} `type:"structure"` + + // A Boolean value; set to true to activate the subscription, set to false to + // create the subscription but not active it. + Enabled *bool `type:"boolean"` + + // A list of event categories for a SourceType that you want to subscribe to. + // You can see a list of the categories for a given SourceType by using the + // DescribeEventCategories action. + EventCategories []*string `locationNameList:"EventCategory" type:"list"` + + // The Amazon Resource Name (ARN) of the SNS topic created for event notification. + // The ARN is created by Amazon SNS when you create a topic and subscribe to + // it. + // + // SnsTopicArn is a required field + SnsTopicArn *string `type:"string" required:"true"` + + // The list of identifiers of the event sources for which events are returned. + // If not specified, then all sources are included in the response. An identifier + // must begin with a letter and must contain only ASCII letters, digits, and + // hyphens; it can't end with a hyphen or contain two consecutive hyphens. 
+ // + // Constraints: + // + // * If SourceIds are supplied, SourceType must also be provided. + // + // * If the source type is a DB instance, then a DBInstanceIdentifier must + // be supplied. + // + // * If the source type is a DB security group, a DBSecurityGroupName must + // be supplied. + // + // * If the source type is a DB parameter group, a DBParameterGroupName must + // be supplied. + // + // * If the source type is a DB snapshot, a DBSnapshotIdentifier must be + // supplied. + SourceIds []*string `locationNameList:"SourceId" type:"list"` + + // The type of source that is generating the events. For example, if you want + // to be notified of events generated by a DB instance, you would set this parameter + // to db-instance. if this value is not specified, all events are returned. + // + // Valid values: db-instance | db-cluster | db-parameter-group | db-security-group + // | db-snapshot | db-cluster-snapshot + SourceType *string `type:"string"` + + // The name of the subscription. + // + // Constraints: The name must be less than 255 characters. + // + // SubscriptionName is a required field + SubscriptionName *string `type:"string" required:"true"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). + Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s CreateEventSubscriptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateEventSubscriptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateEventSubscriptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateEventSubscriptionInput"} + if s.SnsTopicArn == nil { + invalidParams.Add(request.NewErrParamRequired("SnsTopicArn")) + } + if s.SubscriptionName == nil { + invalidParams.Add(request.NewErrParamRequired("SubscriptionName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnabled sets the Enabled field's value. +func (s *CreateEventSubscriptionInput) SetEnabled(v bool) *CreateEventSubscriptionInput { + s.Enabled = &v + return s +} + +// SetEventCategories sets the EventCategories field's value. +func (s *CreateEventSubscriptionInput) SetEventCategories(v []*string) *CreateEventSubscriptionInput { + s.EventCategories = v + return s +} + +// SetSnsTopicArn sets the SnsTopicArn field's value. +func (s *CreateEventSubscriptionInput) SetSnsTopicArn(v string) *CreateEventSubscriptionInput { + s.SnsTopicArn = &v + return s +} + +// SetSourceIds sets the SourceIds field's value. +func (s *CreateEventSubscriptionInput) SetSourceIds(v []*string) *CreateEventSubscriptionInput { + s.SourceIds = v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *CreateEventSubscriptionInput) SetSourceType(v string) *CreateEventSubscriptionInput { + s.SourceType = &v + return s +} + +// SetSubscriptionName sets the SubscriptionName field's value. +func (s *CreateEventSubscriptionInput) SetSubscriptionName(v string) *CreateEventSubscriptionInput { + s.SubscriptionName = &v + return s +} + +// SetTags sets the Tags field's value. 
+func (s *CreateEventSubscriptionInput) SetTags(v []*Tag) *CreateEventSubscriptionInput { + s.Tags = v + return s +} + +type CreateEventSubscriptionOutput struct { + _ struct{} `type:"structure"` + + // Contains the results of a successful invocation of the DescribeEventSubscriptions + // action. + EventSubscription *EventSubscription `type:"structure"` +} + +// String returns the string representation +func (s CreateEventSubscriptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateEventSubscriptionOutput) GoString() string { + return s.String() +} + +// SetEventSubscription sets the EventSubscription field's value. +func (s *CreateEventSubscriptionOutput) SetEventSubscription(v *EventSubscription) *CreateEventSubscriptionOutput { + s.EventSubscription = v + return s +} + +// Contains the details of an Amazon Neptune DB cluster. +// +// This data type is used as a response element in the DescribeDBClusters action. +type DBCluster struct { + _ struct{} `type:"structure"` + + // AllocatedStorage always returns 1, because Neptune DB cluster storage size + // is not fixed, but instead automatically adjusts as needed. + AllocatedStorage *int64 `type:"integer"` + + // Provides a list of the AWS Identity and Access Management (IAM) roles that + // are associated with the DB cluster. IAM roles that are associated with a + // DB cluster grant permission for the DB cluster to access other AWS services + // on your behalf. + AssociatedRoles []*DBClusterRole `locationNameList:"DBClusterRole" type:"list"` + + // Provides the list of EC2 Availability Zones that instances in the DB cluster + // can be created in. + AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` + + // Specifies the number of days for which automatic DB snapshots are retained. + BackupRetentionPeriod *int64 `type:"integer"` + + // If present, specifies the name of the character set that this cluster is + // associated with. + CharacterSetName *string `type:"string"` + + // Identifies the clone group to which the DB cluster is associated. + CloneGroupId *string `type:"string"` + + // Specifies the time when the DB cluster was created, in Universal Coordinated + // Time (UTC). + ClusterCreateTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) for the DB cluster. + DBClusterArn *string `type:"string"` + + // Contains a user-supplied DB cluster identifier. This identifier is the unique + // key that identifies a DB cluster. + DBClusterIdentifier *string `type:"string"` + + // Provides the list of instances that make up the DB cluster. + DBClusterMembers []*DBClusterMember `locationNameList:"DBClusterMember" type:"list"` + + // Provides the list of option group memberships for this DB cluster. + DBClusterOptionGroupMemberships []*DBClusterOptionGroupStatus `locationNameList:"DBClusterOptionGroup" type:"list"` + + // Specifies the name of the DB cluster parameter group for the DB cluster. + DBClusterParameterGroup *string `type:"string"` + + // Specifies information on the subnet group associated with the DB cluster, + // including the name, description, and subnets in the subnet group. + DBSubnetGroup *string `type:"string"` + + // Contains the name of the initial database of this DB cluster that was provided + // at create time, if one was specified when the DB cluster was created. This + // same name is returned for the life of the DB cluster. 
+ DatabaseName *string `type:"string"` + + // The AWS Region-unique, immutable identifier for the DB cluster. This identifier + // is found in AWS CloudTrail log entries whenever the AWS KMS key for the DB + // cluster is accessed. + DbClusterResourceId *string `type:"string"` + + // Specifies the earliest time to which a database can be restored with point-in-time + // restore. + EarliestRestorableTime *time.Time `type:"timestamp"` + + // Specifies the connection endpoint for the primary instance of the DB cluster. + Endpoint *string `type:"string"` + + // Provides the name of the database engine to be used for this DB cluster. + Engine *string `type:"string"` + + // Indicates the database engine version. + EngineVersion *string `type:"string"` + + // Specifies the ID that Amazon Route 53 assigns when you create a hosted zone. + HostedZoneId *string `type:"string"` + + // True if mapping of AWS Identity and Access Management (IAM) accounts to database + // accounts is enabled, and otherwise false. + IAMDatabaseAuthenticationEnabled *bool `type:"boolean"` + + // If StorageEncrypted is true, the AWS KMS key identifier for the encrypted + // DB cluster. + KmsKeyId *string `type:"string"` + + // Specifies the latest time to which a database can be restored with point-in-time + // restore. + LatestRestorableTime *time.Time `type:"timestamp"` + + // Contains the master username for the DB cluster. + MasterUsername *string `type:"string"` + + // Specifies whether the DB cluster has instances in multiple Availability Zones. + MultiAZ *bool `type:"boolean"` + + // Specifies the progress of the operation as a percentage. + PercentProgress *string `type:"string"` + + // Specifies the port that the database engine is listening on. + Port *int64 `type:"integer"` + + // Specifies the daily time range during which automated backups are created + // if automated backups are enabled, as determined by the BackupRetentionPeriod. + PreferredBackupWindow *string `type:"string"` + + // Specifies the weekly time range during which system maintenance can occur, + // in Universal Coordinated Time (UTC). + PreferredMaintenanceWindow *string `type:"string"` + + // Contains one or more identifiers of the Read Replicas associated with this + // DB cluster. + ReadReplicaIdentifiers []*string `locationNameList:"ReadReplicaIdentifier" type:"list"` + + // The reader endpoint for the DB cluster. The reader endpoint for a DB cluster + // load-balances connections across the Read Replicas that are available in + // a DB cluster. As clients request new connections to the reader endpoint, + // Neptune distributes the connection requests among the Read Replicas in the + // DB cluster. This functionality can help balance your read workload across + // multiple Read Replicas in your DB cluster. + // + // If a failover occurs, and the Read Replica that you are connected to is promoted + // to be the primary instance, your connection is dropped. To continue sending + // your read workload to other Read Replicas in the cluster, you can then reconnect + // to the reader endpoint. + ReaderEndpoint *string `type:"string"` + + // Contains the identifier of the source DB cluster if this DB cluster is a + // Read Replica. + ReplicationSourceIdentifier *string `type:"string"` + + // Specifies the current state of this DB cluster. + Status *string `type:"string"` + + // Specifies whether the DB cluster is encrypted. + StorageEncrypted *bool `type:"boolean"` + + // Provides a list of VPC security groups that the DB cluster belongs to. 
+ VpcSecurityGroups []*VpcSecurityGroupMembership `locationNameList:"VpcSecurityGroupMembership" type:"list"` +} + +// String returns the string representation +func (s DBCluster) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBCluster) GoString() string { + return s.String() +} + +// SetAllocatedStorage sets the AllocatedStorage field's value. +func (s *DBCluster) SetAllocatedStorage(v int64) *DBCluster { + s.AllocatedStorage = &v + return s +} + +// SetAssociatedRoles sets the AssociatedRoles field's value. +func (s *DBCluster) SetAssociatedRoles(v []*DBClusterRole) *DBCluster { + s.AssociatedRoles = v + return s +} + +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *DBCluster) SetAvailabilityZones(v []*string) *DBCluster { + s.AvailabilityZones = v + return s +} + +// SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. +func (s *DBCluster) SetBackupRetentionPeriod(v int64) *DBCluster { + s.BackupRetentionPeriod = &v + return s +} + +// SetCharacterSetName sets the CharacterSetName field's value. +func (s *DBCluster) SetCharacterSetName(v string) *DBCluster { + s.CharacterSetName = &v + return s +} + +// SetCloneGroupId sets the CloneGroupId field's value. +func (s *DBCluster) SetCloneGroupId(v string) *DBCluster { + s.CloneGroupId = &v + return s +} + +// SetClusterCreateTime sets the ClusterCreateTime field's value. +func (s *DBCluster) SetClusterCreateTime(v time.Time) *DBCluster { + s.ClusterCreateTime = &v + return s +} + +// SetDBClusterArn sets the DBClusterArn field's value. +func (s *DBCluster) SetDBClusterArn(v string) *DBCluster { + s.DBClusterArn = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DBCluster) SetDBClusterIdentifier(v string) *DBCluster { + s.DBClusterIdentifier = &v + return s +} + +// SetDBClusterMembers sets the DBClusterMembers field's value. +func (s *DBCluster) SetDBClusterMembers(v []*DBClusterMember) *DBCluster { + s.DBClusterMembers = v + return s +} + +// SetDBClusterOptionGroupMemberships sets the DBClusterOptionGroupMemberships field's value. +func (s *DBCluster) SetDBClusterOptionGroupMemberships(v []*DBClusterOptionGroupStatus) *DBCluster { + s.DBClusterOptionGroupMemberships = v + return s +} + +// SetDBClusterParameterGroup sets the DBClusterParameterGroup field's value. +func (s *DBCluster) SetDBClusterParameterGroup(v string) *DBCluster { + s.DBClusterParameterGroup = &v + return s +} + +// SetDBSubnetGroup sets the DBSubnetGroup field's value. +func (s *DBCluster) SetDBSubnetGroup(v string) *DBCluster { + s.DBSubnetGroup = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *DBCluster) SetDatabaseName(v string) *DBCluster { + s.DatabaseName = &v + return s +} + +// SetDbClusterResourceId sets the DbClusterResourceId field's value. +func (s *DBCluster) SetDbClusterResourceId(v string) *DBCluster { + s.DbClusterResourceId = &v + return s +} + +// SetEarliestRestorableTime sets the EarliestRestorableTime field's value. +func (s *DBCluster) SetEarliestRestorableTime(v time.Time) *DBCluster { + s.EarliestRestorableTime = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *DBCluster) SetEndpoint(v string) *DBCluster { + s.Endpoint = &v + return s +} + +// SetEngine sets the Engine field's value. 
+func (s *DBCluster) SetEngine(v string) *DBCluster { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *DBCluster) SetEngineVersion(v string) *DBCluster { + s.EngineVersion = &v + return s +} + +// SetHostedZoneId sets the HostedZoneId field's value. +func (s *DBCluster) SetHostedZoneId(v string) *DBCluster { + s.HostedZoneId = &v + return s +} + +// SetIAMDatabaseAuthenticationEnabled sets the IAMDatabaseAuthenticationEnabled field's value. +func (s *DBCluster) SetIAMDatabaseAuthenticationEnabled(v bool) *DBCluster { + s.IAMDatabaseAuthenticationEnabled = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *DBCluster) SetKmsKeyId(v string) *DBCluster { + s.KmsKeyId = &v + return s +} + +// SetLatestRestorableTime sets the LatestRestorableTime field's value. +func (s *DBCluster) SetLatestRestorableTime(v time.Time) *DBCluster { + s.LatestRestorableTime = &v + return s +} + +// SetMasterUsername sets the MasterUsername field's value. +func (s *DBCluster) SetMasterUsername(v string) *DBCluster { + s.MasterUsername = &v + return s +} + +// SetMultiAZ sets the MultiAZ field's value. +func (s *DBCluster) SetMultiAZ(v bool) *DBCluster { + s.MultiAZ = &v + return s +} + +// SetPercentProgress sets the PercentProgress field's value. +func (s *DBCluster) SetPercentProgress(v string) *DBCluster { + s.PercentProgress = &v + return s +} + +// SetPort sets the Port field's value. +func (s *DBCluster) SetPort(v int64) *DBCluster { + s.Port = &v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *DBCluster) SetPreferredBackupWindow(v string) *DBCluster { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *DBCluster) SetPreferredMaintenanceWindow(v string) *DBCluster { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetReadReplicaIdentifiers sets the ReadReplicaIdentifiers field's value. +func (s *DBCluster) SetReadReplicaIdentifiers(v []*string) *DBCluster { + s.ReadReplicaIdentifiers = v + return s +} + +// SetReaderEndpoint sets the ReaderEndpoint field's value. +func (s *DBCluster) SetReaderEndpoint(v string) *DBCluster { + s.ReaderEndpoint = &v + return s +} + +// SetReplicationSourceIdentifier sets the ReplicationSourceIdentifier field's value. +func (s *DBCluster) SetReplicationSourceIdentifier(v string) *DBCluster { + s.ReplicationSourceIdentifier = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBCluster) SetStatus(v string) *DBCluster { + s.Status = &v + return s +} + +// SetStorageEncrypted sets the StorageEncrypted field's value. +func (s *DBCluster) SetStorageEncrypted(v bool) *DBCluster { + s.StorageEncrypted = &v + return s +} + +// SetVpcSecurityGroups sets the VpcSecurityGroups field's value. +func (s *DBCluster) SetVpcSecurityGroups(v []*VpcSecurityGroupMembership) *DBCluster { + s.VpcSecurityGroups = v + return s +} + +// Contains information about an instance that is part of a DB cluster. +type DBClusterMember struct { + _ struct{} `type:"structure"` + + // Specifies the status of the DB cluster parameter group for this member of + // the DB cluster. + DBClusterParameterGroupStatus *string `type:"string"` + + // Specifies the instance identifier for this member of the DB cluster. + DBInstanceIdentifier *string `type:"string"` + + // Value that is true if the cluster member is the primary instance for the + // DB cluster and false otherwise. 
+ IsClusterWriter *bool `type:"boolean"` + + // A value that specifies the order in which a Read Replica is promoted to the + // primary instance after a failure of the existing primary instance. + PromotionTier *int64 `type:"integer"` +} + +// String returns the string representation +func (s DBClusterMember) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterMember) GoString() string { + return s.String() +} + +// SetDBClusterParameterGroupStatus sets the DBClusterParameterGroupStatus field's value. +func (s *DBClusterMember) SetDBClusterParameterGroupStatus(v string) *DBClusterMember { + s.DBClusterParameterGroupStatus = &v + return s +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *DBClusterMember) SetDBInstanceIdentifier(v string) *DBClusterMember { + s.DBInstanceIdentifier = &v + return s +} + +// SetIsClusterWriter sets the IsClusterWriter field's value. +func (s *DBClusterMember) SetIsClusterWriter(v bool) *DBClusterMember { + s.IsClusterWriter = &v + return s +} + +// SetPromotionTier sets the PromotionTier field's value. +func (s *DBClusterMember) SetPromotionTier(v int64) *DBClusterMember { + s.PromotionTier = &v + return s +} + +// Contains status information for a DB cluster option group. +type DBClusterOptionGroupStatus struct { + _ struct{} `type:"structure"` + + // Specifies the name of the DB cluster option group. + DBClusterOptionGroupName *string `type:"string"` + + // Specifies the status of the DB cluster option group. + Status *string `type:"string"` +} + +// String returns the string representation +func (s DBClusterOptionGroupStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterOptionGroupStatus) GoString() string { + return s.String() +} + +// SetDBClusterOptionGroupName sets the DBClusterOptionGroupName field's value. +func (s *DBClusterOptionGroupStatus) SetDBClusterOptionGroupName(v string) *DBClusterOptionGroupStatus { + s.DBClusterOptionGroupName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBClusterOptionGroupStatus) SetStatus(v string) *DBClusterOptionGroupStatus { + s.Status = &v + return s +} + +// Contains the details of an Amazon Neptune DB cluster parameter group. +// +// This data type is used as a response element in the DescribeDBClusterParameterGroups +// action. +type DBClusterParameterGroup struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) for the DB cluster parameter group. + DBClusterParameterGroupArn *string `type:"string"` + + // Provides the name of the DB cluster parameter group. + DBClusterParameterGroupName *string `type:"string"` + + // Provides the name of the DB parameter group family that this DB cluster parameter + // group is compatible with. + DBParameterGroupFamily *string `type:"string"` + + // Provides the customer-specified description for this DB cluster parameter + // group. + Description *string `type:"string"` +} + +// String returns the string representation +func (s DBClusterParameterGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterParameterGroup) GoString() string { + return s.String() +} + +// SetDBClusterParameterGroupArn sets the DBClusterParameterGroupArn field's value. 
+func (s *DBClusterParameterGroup) SetDBClusterParameterGroupArn(v string) *DBClusterParameterGroup { + s.DBClusterParameterGroupArn = &v + return s +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *DBClusterParameterGroup) SetDBClusterParameterGroupName(v string) *DBClusterParameterGroup { + s.DBClusterParameterGroupName = &v + return s +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. +func (s *DBClusterParameterGroup) SetDBParameterGroupFamily(v string) *DBClusterParameterGroup { + s.DBParameterGroupFamily = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *DBClusterParameterGroup) SetDescription(v string) *DBClusterParameterGroup { + s.Description = &v + return s +} + +// Describes an AWS Identity and Access Management (IAM) role that is associated +// with a DB cluster. +type DBClusterRole struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM role that is associated with the + // DB cluster. + RoleArn *string `type:"string"` + + // Describes the state of association between the IAM role and the DB cluster. + // The Status property returns one of the following values: + // + // * ACTIVE - the IAM role ARN is associated with the DB cluster and can + // be used to access other AWS services on your behalf. + // + // * PENDING - the IAM role ARN is being associated with the DB cluster. + // + // * INVALID - the IAM role ARN is associated with the DB cluster, but the + // DB cluster is unable to assume the IAM role in order to access other AWS + // services on your behalf. + Status *string `type:"string"` +} + +// String returns the string representation +func (s DBClusterRole) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterRole) GoString() string { + return s.String() +} + +// SetRoleArn sets the RoleArn field's value. +func (s *DBClusterRole) SetRoleArn(v string) *DBClusterRole { + s.RoleArn = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBClusterRole) SetStatus(v string) *DBClusterRole { + s.Status = &v + return s +} + +// Contains the details for an Amazon Neptune DB cluster snapshot +// +// This data type is used as a response element in the DescribeDBClusterSnapshots +// action. +type DBClusterSnapshot struct { + _ struct{} `type:"structure"` + + // Specifies the allocated storage size in gibibytes (GiB). + AllocatedStorage *int64 `type:"integer"` + + // Provides the list of EC2 Availability Zones that instances in the DB cluster + // snapshot can be restored in. + AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` + + // Specifies the time when the DB cluster was created, in Universal Coordinated + // Time (UTC). + ClusterCreateTime *time.Time `type:"timestamp"` + + // Specifies the DB cluster identifier of the DB cluster that this DB cluster + // snapshot was created from. + DBClusterIdentifier *string `type:"string"` + + // The Amazon Resource Name (ARN) for the DB cluster snapshot. + DBClusterSnapshotArn *string `type:"string"` + + // Specifies the identifier for the DB cluster snapshot. + DBClusterSnapshotIdentifier *string `type:"string"` + + // Specifies the name of the database engine. + Engine *string `type:"string"` + + // Provides the version of the database engine for this DB cluster snapshot. 
+ EngineVersion *string `type:"string"` + + // True if mapping of AWS Identity and Access Management (IAM) accounts to database + // accounts is enabled, and otherwise false. + IAMDatabaseAuthenticationEnabled *bool `type:"boolean"` + + // If StorageEncrypted is true, the AWS KMS key identifier for the encrypted + // DB cluster snapshot. + KmsKeyId *string `type:"string"` + + // Provides the license model information for this DB cluster snapshot. + LicenseModel *string `type:"string"` + + // Provides the master username for the DB cluster snapshot. + MasterUsername *string `type:"string"` + + // Specifies the percentage of the estimated data that has been transferred. + PercentProgress *int64 `type:"integer"` + + // Specifies the port that the DB cluster was listening on at the time of the + // snapshot. + Port *int64 `type:"integer"` + + // Provides the time when the snapshot was taken, in Universal Coordinated Time + // (UTC). + SnapshotCreateTime *time.Time `type:"timestamp"` + + // Provides the type of the DB cluster snapshot. + SnapshotType *string `type:"string"` + + // If the DB cluster snapshot was copied from a source DB cluster snapshot, + // the Amazon Resource Name (ARN) for the source DB cluster snapshot, otherwise, + // a null value. + SourceDBClusterSnapshotArn *string `type:"string"` + + // Specifies the status of this DB cluster snapshot. + Status *string `type:"string"` + + // Specifies whether the DB cluster snapshot is encrypted. + StorageEncrypted *bool `type:"boolean"` + + // Provides the VPC ID associated with the DB cluster snapshot. + VpcId *string `type:"string"` +} + +// String returns the string representation +func (s DBClusterSnapshot) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterSnapshot) GoString() string { + return s.String() +} + +// SetAllocatedStorage sets the AllocatedStorage field's value. +func (s *DBClusterSnapshot) SetAllocatedStorage(v int64) *DBClusterSnapshot { + s.AllocatedStorage = &v + return s +} + +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *DBClusterSnapshot) SetAvailabilityZones(v []*string) *DBClusterSnapshot { + s.AvailabilityZones = v + return s +} + +// SetClusterCreateTime sets the ClusterCreateTime field's value. +func (s *DBClusterSnapshot) SetClusterCreateTime(v time.Time) *DBClusterSnapshot { + s.ClusterCreateTime = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DBClusterSnapshot) SetDBClusterIdentifier(v string) *DBClusterSnapshot { + s.DBClusterIdentifier = &v + return s +} + +// SetDBClusterSnapshotArn sets the DBClusterSnapshotArn field's value. +func (s *DBClusterSnapshot) SetDBClusterSnapshotArn(v string) *DBClusterSnapshot { + s.DBClusterSnapshotArn = &v + return s +} + +// SetDBClusterSnapshotIdentifier sets the DBClusterSnapshotIdentifier field's value. +func (s *DBClusterSnapshot) SetDBClusterSnapshotIdentifier(v string) *DBClusterSnapshot { + s.DBClusterSnapshotIdentifier = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *DBClusterSnapshot) SetEngine(v string) *DBClusterSnapshot { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *DBClusterSnapshot) SetEngineVersion(v string) *DBClusterSnapshot { + s.EngineVersion = &v + return s +} + +// SetIAMDatabaseAuthenticationEnabled sets the IAMDatabaseAuthenticationEnabled field's value. 
+func (s *DBClusterSnapshot) SetIAMDatabaseAuthenticationEnabled(v bool) *DBClusterSnapshot { + s.IAMDatabaseAuthenticationEnabled = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *DBClusterSnapshot) SetKmsKeyId(v string) *DBClusterSnapshot { + s.KmsKeyId = &v + return s +} + +// SetLicenseModel sets the LicenseModel field's value. +func (s *DBClusterSnapshot) SetLicenseModel(v string) *DBClusterSnapshot { + s.LicenseModel = &v + return s +} + +// SetMasterUsername sets the MasterUsername field's value. +func (s *DBClusterSnapshot) SetMasterUsername(v string) *DBClusterSnapshot { + s.MasterUsername = &v + return s +} + +// SetPercentProgress sets the PercentProgress field's value. +func (s *DBClusterSnapshot) SetPercentProgress(v int64) *DBClusterSnapshot { + s.PercentProgress = &v + return s +} + +// SetPort sets the Port field's value. +func (s *DBClusterSnapshot) SetPort(v int64) *DBClusterSnapshot { + s.Port = &v + return s +} + +// SetSnapshotCreateTime sets the SnapshotCreateTime field's value. +func (s *DBClusterSnapshot) SetSnapshotCreateTime(v time.Time) *DBClusterSnapshot { + s.SnapshotCreateTime = &v + return s +} + +// SetSnapshotType sets the SnapshotType field's value. +func (s *DBClusterSnapshot) SetSnapshotType(v string) *DBClusterSnapshot { + s.SnapshotType = &v + return s +} + +// SetSourceDBClusterSnapshotArn sets the SourceDBClusterSnapshotArn field's value. +func (s *DBClusterSnapshot) SetSourceDBClusterSnapshotArn(v string) *DBClusterSnapshot { + s.SourceDBClusterSnapshotArn = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBClusterSnapshot) SetStatus(v string) *DBClusterSnapshot { + s.Status = &v + return s +} + +// SetStorageEncrypted sets the StorageEncrypted field's value. +func (s *DBClusterSnapshot) SetStorageEncrypted(v bool) *DBClusterSnapshot { + s.StorageEncrypted = &v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *DBClusterSnapshot) SetVpcId(v string) *DBClusterSnapshot { + s.VpcId = &v + return s +} + +// Contains the name and values of a manual DB cluster snapshot attribute. +// +// Manual DB cluster snapshot attributes are used to authorize other AWS accounts +// to restore a manual DB cluster snapshot. For more information, see the ModifyDBClusterSnapshotAttribute +// API action. +type DBClusterSnapshotAttribute struct { + _ struct{} `type:"structure"` + + // The name of the manual DB cluster snapshot attribute. + // + // The attribute named restore refers to the list of AWS accounts that have + // permission to copy or restore the manual DB cluster snapshot. For more information, + // see the ModifyDBClusterSnapshotAttribute API action. + AttributeName *string `type:"string"` + + // The value(s) for the manual DB cluster snapshot attribute. + // + // If the AttributeName field is set to restore, then this element returns a + // list of IDs of the AWS accounts that are authorized to copy or restore the + // manual DB cluster snapshot. If a value of all is in the list, then the manual + // DB cluster snapshot is public and available for any AWS account to copy or + // restore. + AttributeValues []*string `locationNameList:"AttributeValue" type:"list"` +} + +// String returns the string representation +func (s DBClusterSnapshotAttribute) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterSnapshotAttribute) GoString() string { + return s.String() +} + +// SetAttributeName sets the AttributeName field's value. 
+func (s *DBClusterSnapshotAttribute) SetAttributeName(v string) *DBClusterSnapshotAttribute { + s.AttributeName = &v + return s +} + +// SetAttributeValues sets the AttributeValues field's value. +func (s *DBClusterSnapshotAttribute) SetAttributeValues(v []*string) *DBClusterSnapshotAttribute { + s.AttributeValues = v + return s +} + +// Contains the results of a successful call to the DescribeDBClusterSnapshotAttributes +// API action. +// +// Manual DB cluster snapshot attributes are used to authorize other AWS accounts +// to copy or restore a manual DB cluster snapshot. For more information, see +// the ModifyDBClusterSnapshotAttribute API action. +type DBClusterSnapshotAttributesResult struct { + _ struct{} `type:"structure"` + + // The list of attributes and values for the manual DB cluster snapshot. + DBClusterSnapshotAttributes []*DBClusterSnapshotAttribute `locationNameList:"DBClusterSnapshotAttribute" type:"list"` + + // The identifier of the manual DB cluster snapshot that the attributes apply + // to. + DBClusterSnapshotIdentifier *string `type:"string"` +} + +// String returns the string representation +func (s DBClusterSnapshotAttributesResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterSnapshotAttributesResult) GoString() string { + return s.String() +} + +// SetDBClusterSnapshotAttributes sets the DBClusterSnapshotAttributes field's value. +func (s *DBClusterSnapshotAttributesResult) SetDBClusterSnapshotAttributes(v []*DBClusterSnapshotAttribute) *DBClusterSnapshotAttributesResult { + s.DBClusterSnapshotAttributes = v + return s +} + +// SetDBClusterSnapshotIdentifier sets the DBClusterSnapshotIdentifier field's value. +func (s *DBClusterSnapshotAttributesResult) SetDBClusterSnapshotIdentifier(v string) *DBClusterSnapshotAttributesResult { + s.DBClusterSnapshotIdentifier = &v + return s +} + +// This data type is used as a response element in the action DescribeDBEngineVersions. +type DBEngineVersion struct { + _ struct{} `type:"structure"` + + // The description of the database engine. + DBEngineDescription *string `type:"string"` + + // The description of the database engine version. + DBEngineVersionDescription *string `type:"string"` + + // The name of the DB parameter group family for the database engine. + DBParameterGroupFamily *string `type:"string"` + + // The default character set for new instances of this engine version, if the + // CharacterSetName parameter of the CreateDBInstance API is not specified. + DefaultCharacterSet *CharacterSet `type:"structure"` + + // The name of the database engine. + Engine *string `type:"string"` + + // The version number of the database engine. + EngineVersion *string `type:"string"` + + // The types of logs that the database engine has available for export to CloudWatch + // Logs. + ExportableLogTypes []*string `type:"list"` + + // A list of the character sets supported by this engine for the CharacterSetName + // parameter of the CreateDBInstance action. + SupportedCharacterSets []*CharacterSet `locationNameList:"CharacterSet" type:"list"` + + // A list of the time zones supported by this engine for the Timezone parameter + // of the CreateDBInstance action. + SupportedTimezones []*Timezone `locationNameList:"Timezone" type:"list"` + + // A value that indicates whether the engine version supports exporting the + // log types specified by ExportableLogTypes to CloudWatch Logs. 
+ SupportsLogExportsToCloudwatchLogs *bool `type:"boolean"` + + // Indicates whether the database engine version supports read replicas. + SupportsReadReplica *bool `type:"boolean"` + + // A list of engine versions that this database engine version can be upgraded + // to. + ValidUpgradeTarget []*UpgradeTarget `locationNameList:"UpgradeTarget" type:"list"` +} + +// String returns the string representation +func (s DBEngineVersion) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBEngineVersion) GoString() string { + return s.String() +} + +// SetDBEngineDescription sets the DBEngineDescription field's value. +func (s *DBEngineVersion) SetDBEngineDescription(v string) *DBEngineVersion { + s.DBEngineDescription = &v + return s +} + +// SetDBEngineVersionDescription sets the DBEngineVersionDescription field's value. +func (s *DBEngineVersion) SetDBEngineVersionDescription(v string) *DBEngineVersion { + s.DBEngineVersionDescription = &v + return s +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. +func (s *DBEngineVersion) SetDBParameterGroupFamily(v string) *DBEngineVersion { + s.DBParameterGroupFamily = &v + return s +} + +// SetDefaultCharacterSet sets the DefaultCharacterSet field's value. +func (s *DBEngineVersion) SetDefaultCharacterSet(v *CharacterSet) *DBEngineVersion { + s.DefaultCharacterSet = v + return s +} + +// SetEngine sets the Engine field's value. +func (s *DBEngineVersion) SetEngine(v string) *DBEngineVersion { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *DBEngineVersion) SetEngineVersion(v string) *DBEngineVersion { + s.EngineVersion = &v + return s +} + +// SetExportableLogTypes sets the ExportableLogTypes field's value. +func (s *DBEngineVersion) SetExportableLogTypes(v []*string) *DBEngineVersion { + s.ExportableLogTypes = v + return s +} + +// SetSupportedCharacterSets sets the SupportedCharacterSets field's value. +func (s *DBEngineVersion) SetSupportedCharacterSets(v []*CharacterSet) *DBEngineVersion { + s.SupportedCharacterSets = v + return s +} + +// SetSupportedTimezones sets the SupportedTimezones field's value. +func (s *DBEngineVersion) SetSupportedTimezones(v []*Timezone) *DBEngineVersion { + s.SupportedTimezones = v + return s +} + +// SetSupportsLogExportsToCloudwatchLogs sets the SupportsLogExportsToCloudwatchLogs field's value. +func (s *DBEngineVersion) SetSupportsLogExportsToCloudwatchLogs(v bool) *DBEngineVersion { + s.SupportsLogExportsToCloudwatchLogs = &v + return s +} + +// SetSupportsReadReplica sets the SupportsReadReplica field's value. +func (s *DBEngineVersion) SetSupportsReadReplica(v bool) *DBEngineVersion { + s.SupportsReadReplica = &v + return s +} + +// SetValidUpgradeTarget sets the ValidUpgradeTarget field's value. +func (s *DBEngineVersion) SetValidUpgradeTarget(v []*UpgradeTarget) *DBEngineVersion { + s.ValidUpgradeTarget = v + return s +} + +// Contains the details of an Amazon Neptune DB instance. +// +// This data type is used as a response element in the DescribeDBInstances action. +type DBInstance struct { + _ struct{} `type:"structure"` + + // Specifies the allocated storage size specified in gibibytes. + AllocatedStorage *int64 `type:"integer"` + + // Indicates that minor version patches are applied automatically. + AutoMinorVersionUpgrade *bool `type:"boolean"` + + // Specifies the name of the Availability Zone the DB instance is located in. 
+ AvailabilityZone *string `type:"string"` + + // Specifies the number of days for which automatic DB snapshots are retained. + BackupRetentionPeriod *int64 `type:"integer"` + + // The identifier of the CA certificate for this DB instance. + CACertificateIdentifier *string `type:"string"` + + // If present, specifies the name of the character set that this instance is + // associated with. + CharacterSetName *string `type:"string"` + + // Specifies whether tags are copied from the DB instance to snapshots of the + // DB instance. + CopyTagsToSnapshot *bool `type:"boolean"` + + // If the DB instance is a member of a DB cluster, contains the name of the + // DB cluster that the DB instance is a member of. + DBClusterIdentifier *string `type:"string"` + + // The Amazon Resource Name (ARN) for the DB instance. + DBInstanceArn *string `type:"string"` + + // Contains the name of the compute and memory capacity class of the DB instance. + DBInstanceClass *string `type:"string"` + + // Contains a user-supplied database identifier. This identifier is the unique + // key that identifies a DB instance. + DBInstanceIdentifier *string `type:"string"` + + // Specifies the current state of this database. + DBInstanceStatus *string `type:"string"` + + // The database name. + DBName *string `type:"string"` + + // Provides the list of DB parameter groups applied to this DB instance. + DBParameterGroups []*DBParameterGroupStatus `locationNameList:"DBParameterGroup" type:"list"` + + // Provides List of DB security group elements containing only DBSecurityGroup.Name + // and DBSecurityGroup.Status subelements. + DBSecurityGroups []*DBSecurityGroupMembership `locationNameList:"DBSecurityGroup" type:"list"` + + // Specifies information on the subnet group associated with the DB instance, + // including the name, description, and subnets in the subnet group. + DBSubnetGroup *DBSubnetGroup `type:"structure"` + + // Specifies the port that the DB instance listens on. If the DB instance is + // part of a DB cluster, this can be a different port than the DB cluster port. + DbInstancePort *int64 `type:"integer"` + + // The AWS Region-unique, immutable identifier for the DB instance. This identifier + // is found in AWS CloudTrail log entries whenever the AWS KMS key for the DB + // instance is accessed. + DbiResourceId *string `type:"string"` + + // Not supported + DomainMemberships []*DomainMembership `locationNameList:"DomainMembership" type:"list"` + + // A list of log types that this DB instance is configured to export to CloudWatch + // Logs. + EnabledCloudwatchLogsExports []*string `type:"list"` + + // Specifies the connection endpoint. + Endpoint *Endpoint `type:"structure"` + + // Provides the name of the database engine to be used for this DB instance. + Engine *string `type:"string"` + + // Indicates the database engine version. + EngineVersion *string `type:"string"` + + // The Amazon Resource Name (ARN) of the Amazon CloudWatch Logs log stream that + // receives the Enhanced Monitoring metrics data for the DB instance. + EnhancedMonitoringResourceArn *string `type:"string"` + + // True if AWS Identity and Access Management (IAM) authentication is enabled, + // and otherwise false. + IAMDatabaseAuthenticationEnabled *bool `type:"boolean"` + + // Provides the date and time the DB instance was created. + InstanceCreateTime *time.Time `type:"timestamp"` + + // Specifies the Provisioned IOPS (I/O operations per second) value. 
+ Iops *int64 `type:"integer"` + + // If StorageEncrypted is true, the AWS KMS key identifier for the encrypted + // DB instance. + KmsKeyId *string `type:"string"` + + // Specifies the latest time to which a database can be restored with point-in-time + // restore. + LatestRestorableTime *time.Time `type:"timestamp"` + + // License model information for this DB instance. + LicenseModel *string `type:"string"` + + // Contains the master username for the DB instance. + MasterUsername *string `type:"string"` + + // The interval, in seconds, between points when Enhanced Monitoring metrics + // are collected for the DB instance. + MonitoringInterval *int64 `type:"integer"` + + // The ARN for the IAM role that permits Neptune to send Enhanced Monitoring + // metrics to Amazon CloudWatch Logs. + MonitoringRoleArn *string `type:"string"` + + // Specifies if the DB instance is a Multi-AZ deployment. + MultiAZ *bool `type:"boolean"` + + // Provides the list of option group memberships for this DB instance. + OptionGroupMemberships []*OptionGroupMembership `locationNameList:"OptionGroupMembership" type:"list"` + + // Specifies that changes to the DB instance are pending. This element is only + // included when changes are pending. Specific changes are identified by subelements. + PendingModifiedValues *PendingModifiedValues `type:"structure"` + + // True if Performance Insights is enabled for the DB instance, and otherwise + // false. + PerformanceInsightsEnabled *bool `type:"boolean"` + + // The AWS KMS key identifier for encryption of Performance Insights data. The + // KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the + // KMS key alias for the KMS encryption key. + PerformanceInsightsKMSKeyId *string `type:"string"` + + // Specifies the daily time range during which automated backups are created + // if automated backups are enabled, as determined by the BackupRetentionPeriod. + PreferredBackupWindow *string `type:"string"` + + // Specifies the weekly time range during which system maintenance can occur, + // in Universal Coordinated Time (UTC). + PreferredMaintenanceWindow *string `type:"string"` + + // A value that specifies the order in which a Read Replica is promoted to the + // primary instance after a failure of the existing primary instance. + PromotionTier *int64 `type:"integer"` + + // This parameter is not supported. + // + // Deprecated: PubliclyAccessible has been deprecated + PubliclyAccessible *bool `deprecated:"true" type:"boolean"` + + // Contains one or more identifiers of DB clusters that are Read Replicas of + // this DB instance. + ReadReplicaDBClusterIdentifiers []*string `locationNameList:"ReadReplicaDBClusterIdentifier" type:"list"` + + // Contains one or more identifiers of the Read Replicas associated with this + // DB instance. + ReadReplicaDBInstanceIdentifiers []*string `locationNameList:"ReadReplicaDBInstanceIdentifier" type:"list"` + + // Contains the identifier of the source DB instance if this DB instance is + // a Read Replica. + ReadReplicaSourceDBInstanceIdentifier *string `type:"string"` + + // If present, specifies the name of the secondary Availability Zone for a DB + // instance with multi-AZ support. + SecondaryAvailabilityZone *string `type:"string"` + + // The status of a Read Replica. If the instance is not a Read Replica, this + // is blank. + StatusInfos []*DBInstanceStatusInfo `locationNameList:"DBInstanceStatusInfo" type:"list"` + + // Specifies whether the DB instance is encrypted. 
+ StorageEncrypted *bool `type:"boolean"` + + // Specifies the storage type associated with DB instance. + StorageType *string `type:"string"` + + // The ARN from the key store with which the instance is associated for TDE + // encryption. + TdeCredentialArn *string `type:"string"` + + // Not supported. + Timezone *string `type:"string"` + + // Provides a list of VPC security group elements that the DB instance belongs + // to. + VpcSecurityGroups []*VpcSecurityGroupMembership `locationNameList:"VpcSecurityGroupMembership" type:"list"` +} + +// String returns the string representation +func (s DBInstance) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBInstance) GoString() string { + return s.String() +} + +// SetAllocatedStorage sets the AllocatedStorage field's value. +func (s *DBInstance) SetAllocatedStorage(v int64) *DBInstance { + s.AllocatedStorage = &v + return s +} + +// SetAutoMinorVersionUpgrade sets the AutoMinorVersionUpgrade field's value. +func (s *DBInstance) SetAutoMinorVersionUpgrade(v bool) *DBInstance { + s.AutoMinorVersionUpgrade = &v + return s +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *DBInstance) SetAvailabilityZone(v string) *DBInstance { + s.AvailabilityZone = &v + return s +} + +// SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. +func (s *DBInstance) SetBackupRetentionPeriod(v int64) *DBInstance { + s.BackupRetentionPeriod = &v + return s +} + +// SetCACertificateIdentifier sets the CACertificateIdentifier field's value. +func (s *DBInstance) SetCACertificateIdentifier(v string) *DBInstance { + s.CACertificateIdentifier = &v + return s +} + +// SetCharacterSetName sets the CharacterSetName field's value. +func (s *DBInstance) SetCharacterSetName(v string) *DBInstance { + s.CharacterSetName = &v + return s +} + +// SetCopyTagsToSnapshot sets the CopyTagsToSnapshot field's value. +func (s *DBInstance) SetCopyTagsToSnapshot(v bool) *DBInstance { + s.CopyTagsToSnapshot = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DBInstance) SetDBClusterIdentifier(v string) *DBInstance { + s.DBClusterIdentifier = &v + return s +} + +// SetDBInstanceArn sets the DBInstanceArn field's value. +func (s *DBInstance) SetDBInstanceArn(v string) *DBInstance { + s.DBInstanceArn = &v + return s +} + +// SetDBInstanceClass sets the DBInstanceClass field's value. +func (s *DBInstance) SetDBInstanceClass(v string) *DBInstance { + s.DBInstanceClass = &v + return s +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *DBInstance) SetDBInstanceIdentifier(v string) *DBInstance { + s.DBInstanceIdentifier = &v + return s +} + +// SetDBInstanceStatus sets the DBInstanceStatus field's value. +func (s *DBInstance) SetDBInstanceStatus(v string) *DBInstance { + s.DBInstanceStatus = &v + return s +} + +// SetDBName sets the DBName field's value. +func (s *DBInstance) SetDBName(v string) *DBInstance { + s.DBName = &v + return s +} + +// SetDBParameterGroups sets the DBParameterGroups field's value. +func (s *DBInstance) SetDBParameterGroups(v []*DBParameterGroupStatus) *DBInstance { + s.DBParameterGroups = v + return s +} + +// SetDBSecurityGroups sets the DBSecurityGroups field's value. +func (s *DBInstance) SetDBSecurityGroups(v []*DBSecurityGroupMembership) *DBInstance { + s.DBSecurityGroups = v + return s +} + +// SetDBSubnetGroup sets the DBSubnetGroup field's value. 
+func (s *DBInstance) SetDBSubnetGroup(v *DBSubnetGroup) *DBInstance { + s.DBSubnetGroup = v + return s +} + +// SetDbInstancePort sets the DbInstancePort field's value. +func (s *DBInstance) SetDbInstancePort(v int64) *DBInstance { + s.DbInstancePort = &v + return s +} + +// SetDbiResourceId sets the DbiResourceId field's value. +func (s *DBInstance) SetDbiResourceId(v string) *DBInstance { + s.DbiResourceId = &v + return s +} + +// SetDomainMemberships sets the DomainMemberships field's value. +func (s *DBInstance) SetDomainMemberships(v []*DomainMembership) *DBInstance { + s.DomainMemberships = v + return s +} + +// SetEnabledCloudwatchLogsExports sets the EnabledCloudwatchLogsExports field's value. +func (s *DBInstance) SetEnabledCloudwatchLogsExports(v []*string) *DBInstance { + s.EnabledCloudwatchLogsExports = v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *DBInstance) SetEndpoint(v *Endpoint) *DBInstance { + s.Endpoint = v + return s +} + +// SetEngine sets the Engine field's value. +func (s *DBInstance) SetEngine(v string) *DBInstance { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *DBInstance) SetEngineVersion(v string) *DBInstance { + s.EngineVersion = &v + return s +} + +// SetEnhancedMonitoringResourceArn sets the EnhancedMonitoringResourceArn field's value. +func (s *DBInstance) SetEnhancedMonitoringResourceArn(v string) *DBInstance { + s.EnhancedMonitoringResourceArn = &v + return s +} + +// SetIAMDatabaseAuthenticationEnabled sets the IAMDatabaseAuthenticationEnabled field's value. +func (s *DBInstance) SetIAMDatabaseAuthenticationEnabled(v bool) *DBInstance { + s.IAMDatabaseAuthenticationEnabled = &v + return s +} + +// SetInstanceCreateTime sets the InstanceCreateTime field's value. +func (s *DBInstance) SetInstanceCreateTime(v time.Time) *DBInstance { + s.InstanceCreateTime = &v + return s +} + +// SetIops sets the Iops field's value. +func (s *DBInstance) SetIops(v int64) *DBInstance { + s.Iops = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *DBInstance) SetKmsKeyId(v string) *DBInstance { + s.KmsKeyId = &v + return s +} + +// SetLatestRestorableTime sets the LatestRestorableTime field's value. +func (s *DBInstance) SetLatestRestorableTime(v time.Time) *DBInstance { + s.LatestRestorableTime = &v + return s +} + +// SetLicenseModel sets the LicenseModel field's value. +func (s *DBInstance) SetLicenseModel(v string) *DBInstance { + s.LicenseModel = &v + return s +} + +// SetMasterUsername sets the MasterUsername field's value. +func (s *DBInstance) SetMasterUsername(v string) *DBInstance { + s.MasterUsername = &v + return s +} + +// SetMonitoringInterval sets the MonitoringInterval field's value. +func (s *DBInstance) SetMonitoringInterval(v int64) *DBInstance { + s.MonitoringInterval = &v + return s +} + +// SetMonitoringRoleArn sets the MonitoringRoleArn field's value. +func (s *DBInstance) SetMonitoringRoleArn(v string) *DBInstance { + s.MonitoringRoleArn = &v + return s +} + +// SetMultiAZ sets the MultiAZ field's value. +func (s *DBInstance) SetMultiAZ(v bool) *DBInstance { + s.MultiAZ = &v + return s +} + +// SetOptionGroupMemberships sets the OptionGroupMemberships field's value. +func (s *DBInstance) SetOptionGroupMemberships(v []*OptionGroupMembership) *DBInstance { + s.OptionGroupMemberships = v + return s +} + +// SetPendingModifiedValues sets the PendingModifiedValues field's value. 
+func (s *DBInstance) SetPendingModifiedValues(v *PendingModifiedValues) *DBInstance { + s.PendingModifiedValues = v + return s +} + +// SetPerformanceInsightsEnabled sets the PerformanceInsightsEnabled field's value. +func (s *DBInstance) SetPerformanceInsightsEnabled(v bool) *DBInstance { + s.PerformanceInsightsEnabled = &v + return s +} + +// SetPerformanceInsightsKMSKeyId sets the PerformanceInsightsKMSKeyId field's value. +func (s *DBInstance) SetPerformanceInsightsKMSKeyId(v string) *DBInstance { + s.PerformanceInsightsKMSKeyId = &v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *DBInstance) SetPreferredBackupWindow(v string) *DBInstance { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *DBInstance) SetPreferredMaintenanceWindow(v string) *DBInstance { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetPromotionTier sets the PromotionTier field's value. +func (s *DBInstance) SetPromotionTier(v int64) *DBInstance { + s.PromotionTier = &v + return s +} + +// SetPubliclyAccessible sets the PubliclyAccessible field's value. +func (s *DBInstance) SetPubliclyAccessible(v bool) *DBInstance { + s.PubliclyAccessible = &v + return s +} + +// SetReadReplicaDBClusterIdentifiers sets the ReadReplicaDBClusterIdentifiers field's value. +func (s *DBInstance) SetReadReplicaDBClusterIdentifiers(v []*string) *DBInstance { + s.ReadReplicaDBClusterIdentifiers = v + return s +} + +// SetReadReplicaDBInstanceIdentifiers sets the ReadReplicaDBInstanceIdentifiers field's value. +func (s *DBInstance) SetReadReplicaDBInstanceIdentifiers(v []*string) *DBInstance { + s.ReadReplicaDBInstanceIdentifiers = v + return s +} + +// SetReadReplicaSourceDBInstanceIdentifier sets the ReadReplicaSourceDBInstanceIdentifier field's value. +func (s *DBInstance) SetReadReplicaSourceDBInstanceIdentifier(v string) *DBInstance { + s.ReadReplicaSourceDBInstanceIdentifier = &v + return s +} + +// SetSecondaryAvailabilityZone sets the SecondaryAvailabilityZone field's value. +func (s *DBInstance) SetSecondaryAvailabilityZone(v string) *DBInstance { + s.SecondaryAvailabilityZone = &v + return s +} + +// SetStatusInfos sets the StatusInfos field's value. +func (s *DBInstance) SetStatusInfos(v []*DBInstanceStatusInfo) *DBInstance { + s.StatusInfos = v + return s +} + +// SetStorageEncrypted sets the StorageEncrypted field's value. +func (s *DBInstance) SetStorageEncrypted(v bool) *DBInstance { + s.StorageEncrypted = &v + return s +} + +// SetStorageType sets the StorageType field's value. +func (s *DBInstance) SetStorageType(v string) *DBInstance { + s.StorageType = &v + return s +} + +// SetTdeCredentialArn sets the TdeCredentialArn field's value. +func (s *DBInstance) SetTdeCredentialArn(v string) *DBInstance { + s.TdeCredentialArn = &v + return s +} + +// SetTimezone sets the Timezone field's value. +func (s *DBInstance) SetTimezone(v string) *DBInstance { + s.Timezone = &v + return s +} + +// SetVpcSecurityGroups sets the VpcSecurityGroups field's value. +func (s *DBInstance) SetVpcSecurityGroups(v []*VpcSecurityGroupMembership) *DBInstance { + s.VpcSecurityGroups = v + return s +} + +// Provides a list of status information for a DB instance. +type DBInstanceStatusInfo struct { + _ struct{} `type:"structure"` + + // Details of the error if there is an error for the instance. If the instance + // is not in an error state, this value is blank. 
+ Message *string `type:"string"` + + // Boolean value that is true if the instance is operating normally, or false + // if the instance is in an error state. + Normal *bool `type:"boolean"` + + // Status of the DB instance. For a StatusType of read replica, the values can + // be replicating, error, stopped, or terminated. + Status *string `type:"string"` + + // This value is currently "read replication." + StatusType *string `type:"string"` +} + +// String returns the string representation +func (s DBInstanceStatusInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBInstanceStatusInfo) GoString() string { + return s.String() +} + +// SetMessage sets the Message field's value. +func (s *DBInstanceStatusInfo) SetMessage(v string) *DBInstanceStatusInfo { + s.Message = &v + return s +} + +// SetNormal sets the Normal field's value. +func (s *DBInstanceStatusInfo) SetNormal(v bool) *DBInstanceStatusInfo { + s.Normal = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBInstanceStatusInfo) SetStatus(v string) *DBInstanceStatusInfo { + s.Status = &v + return s +} + +// SetStatusType sets the StatusType field's value. +func (s *DBInstanceStatusInfo) SetStatusType(v string) *DBInstanceStatusInfo { + s.StatusType = &v + return s +} + +// Contains the details of an Amazon Neptune DB parameter group. +// +// This data type is used as a response element in the DescribeDBParameterGroups +// action. +type DBParameterGroup struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) for the DB parameter group. + DBParameterGroupArn *string `type:"string"` + + // Provides the name of the DB parameter group family that this DB parameter + // group is compatible with. + DBParameterGroupFamily *string `type:"string"` + + // Provides the name of the DB parameter group. + DBParameterGroupName *string `type:"string"` + + // Provides the customer-specified description for this DB parameter group. + Description *string `type:"string"` +} + +// String returns the string representation +func (s DBParameterGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBParameterGroup) GoString() string { + return s.String() +} + +// SetDBParameterGroupArn sets the DBParameterGroupArn field's value. +func (s *DBParameterGroup) SetDBParameterGroupArn(v string) *DBParameterGroup { + s.DBParameterGroupArn = &v + return s +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. +func (s *DBParameterGroup) SetDBParameterGroupFamily(v string) *DBParameterGroup { + s.DBParameterGroupFamily = &v + return s +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *DBParameterGroup) SetDBParameterGroupName(v string) *DBParameterGroup { + s.DBParameterGroupName = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *DBParameterGroup) SetDescription(v string) *DBParameterGroup { + s.Description = &v + return s +} + +// The status of the DB parameter group. +// +// This data type is used as a response element in the following actions: +// +// * CreateDBInstance +// +// * DeleteDBInstance +// +// * ModifyDBInstance +// +// * RebootDBInstance +type DBParameterGroupStatus struct { + _ struct{} `type:"structure"` + + // The name of the DP parameter group. + DBParameterGroupName *string `type:"string"` + + // The status of parameter updates. 
+ ParameterApplyStatus *string `type:"string"` +} + +// String returns the string representation +func (s DBParameterGroupStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBParameterGroupStatus) GoString() string { + return s.String() +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *DBParameterGroupStatus) SetDBParameterGroupName(v string) *DBParameterGroupStatus { + s.DBParameterGroupName = &v + return s +} + +// SetParameterApplyStatus sets the ParameterApplyStatus field's value. +func (s *DBParameterGroupStatus) SetParameterApplyStatus(v string) *DBParameterGroupStatus { + s.ParameterApplyStatus = &v + return s +} + +// This data type is used as a response element in the following actions: +// +// * ModifyDBInstance +// +// * RebootDBInstance +type DBSecurityGroupMembership struct { + _ struct{} `type:"structure"` + + // The name of the DB security group. + DBSecurityGroupName *string `type:"string"` + + // The status of the DB security group. + Status *string `type:"string"` +} + +// String returns the string representation +func (s DBSecurityGroupMembership) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBSecurityGroupMembership) GoString() string { + return s.String() +} + +// SetDBSecurityGroupName sets the DBSecurityGroupName field's value. +func (s *DBSecurityGroupMembership) SetDBSecurityGroupName(v string) *DBSecurityGroupMembership { + s.DBSecurityGroupName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBSecurityGroupMembership) SetStatus(v string) *DBSecurityGroupMembership { + s.Status = &v + return s +} + +// Contains the details of an Amazon Neptune DB subnet group. +// +// This data type is used as a response element in the DescribeDBSubnetGroups +// action. +type DBSubnetGroup struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) for the DB subnet group. + DBSubnetGroupArn *string `type:"string"` + + // Provides the description of the DB subnet group. + DBSubnetGroupDescription *string `type:"string"` + + // The name of the DB subnet group. + DBSubnetGroupName *string `type:"string"` + + // Provides the status of the DB subnet group. + SubnetGroupStatus *string `type:"string"` + + // Contains a list of Subnet elements. + Subnets []*Subnet `locationNameList:"Subnet" type:"list"` + + // Provides the VpcId of the DB subnet group. + VpcId *string `type:"string"` +} + +// String returns the string representation +func (s DBSubnetGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBSubnetGroup) GoString() string { + return s.String() +} + +// SetDBSubnetGroupArn sets the DBSubnetGroupArn field's value. +func (s *DBSubnetGroup) SetDBSubnetGroupArn(v string) *DBSubnetGroup { + s.DBSubnetGroupArn = &v + return s +} + +// SetDBSubnetGroupDescription sets the DBSubnetGroupDescription field's value. +func (s *DBSubnetGroup) SetDBSubnetGroupDescription(v string) *DBSubnetGroup { + s.DBSubnetGroupDescription = &v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *DBSubnetGroup) SetDBSubnetGroupName(v string) *DBSubnetGroup { + s.DBSubnetGroupName = &v + return s +} + +// SetSubnetGroupStatus sets the SubnetGroupStatus field's value. 
+func (s *DBSubnetGroup) SetSubnetGroupStatus(v string) *DBSubnetGroup { + s.SubnetGroupStatus = &v + return s +} + +// SetSubnets sets the Subnets field's value. +func (s *DBSubnetGroup) SetSubnets(v []*Subnet) *DBSubnetGroup { + s.Subnets = v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *DBSubnetGroup) SetVpcId(v string) *DBSubnetGroup { + s.VpcId = &v + return s +} + +type DeleteDBClusterInput struct { + _ struct{} `type:"structure"` + + // The DB cluster identifier for the DB cluster to be deleted. This parameter + // isn't case-sensitive. + // + // Constraints: + // + // * Must match an existing DBClusterIdentifier. + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The DB cluster snapshot identifier of the new DB cluster snapshot created + // when SkipFinalSnapshot is set to false. + // + // Specifying this parameter and also setting the SkipFinalShapshot parameter + // to true results in an error. + // + // Constraints: + // + // * Must be 1 to 255 letters, numbers, or hyphens. + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + FinalDBSnapshotIdentifier *string `type:"string"` + + // Determines whether a final DB cluster snapshot is created before the DB cluster + // is deleted. If true is specified, no DB cluster snapshot is created. If false + // is specified, a DB cluster snapshot is created before the DB cluster is deleted. + // + // You must specify a FinalDBSnapshotIdentifier parameter if SkipFinalSnapshot + // is false. + // + // Default: false + SkipFinalSnapshot *bool `type:"boolean"` +} + +// String returns the string representation +func (s DeleteDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBClusterInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DeleteDBClusterInput) SetDBClusterIdentifier(v string) *DeleteDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetFinalDBSnapshotIdentifier sets the FinalDBSnapshotIdentifier field's value. +func (s *DeleteDBClusterInput) SetFinalDBSnapshotIdentifier(v string) *DeleteDBClusterInput { + s.FinalDBSnapshotIdentifier = &v + return s +} + +// SetSkipFinalSnapshot sets the SkipFinalSnapshot field's value. +func (s *DeleteDBClusterInput) SetSkipFinalSnapshot(v bool) *DeleteDBClusterInput { + s.SkipFinalSnapshot = &v + return s +} + +type DeleteDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters action. + DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s DeleteDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. 
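The `DeleteDBClusterInput` documentation above says a final snapshot is taken unless `SkipFinalSnapshot` is true, that `FinalDBSnapshotIdentifier` is required when the snapshot is taken, and that combining the two settings the other way round is an error. A hedged sketch of a call that keeps the default and names the final snapshot (client setup, package/function names, and the identifiers are placeholders, not part of this change):

```go
package example

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/neptune"
)

func deleteClusterWithFinalSnapshot() {
	svc := neptune.New(session.Must(session.NewSession()))

	// SkipFinalSnapshot defaults to false, so a final snapshot is created and
	// must be named. Setting SkipFinalSnapshot to true together with a
	// FinalDBSnapshotIdentifier would be rejected, per the field documentation.
	out, err := svc.DeleteDBCluster(&neptune.DeleteDBClusterInput{
		DBClusterIdentifier:       aws.String("example-cluster"),
		FinalDBSnapshotIdentifier: aws.String("example-cluster-final"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("deletion requested:", out)
}
```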
+func (s *DeleteDBClusterOutput) SetDBCluster(v *DBCluster) *DeleteDBClusterOutput { + s.DBCluster = v + return s +} + +type DeleteDBClusterParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the DB cluster parameter group. + // + // Constraints: + // + // * Must be the name of an existing DB cluster parameter group. + // + // * You can't delete a default DB cluster parameter group. + // + // * Cannot be associated with any DB clusters. + // + // DBClusterParameterGroupName is a required field + DBClusterParameterGroupName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDBClusterParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDBClusterParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBClusterParameterGroupInput"} + if s.DBClusterParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterParameterGroupName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *DeleteDBClusterParameterGroupInput) SetDBClusterParameterGroupName(v string) *DeleteDBClusterParameterGroupInput { + s.DBClusterParameterGroupName = &v + return s +} + +type DeleteDBClusterParameterGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDBClusterParameterGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterParameterGroupOutput) GoString() string { + return s.String() +} + +type DeleteDBClusterSnapshotInput struct { + _ struct{} `type:"structure"` + + // The identifier of the DB cluster snapshot to delete. + // + // Constraints: Must be the name of an existing DB cluster snapshot in the available + // state. + // + // DBClusterSnapshotIdentifier is a required field + DBClusterSnapshotIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDBClusterSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDBClusterSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBClusterSnapshotInput"} + if s.DBClusterSnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterSnapshotIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterSnapshotIdentifier sets the DBClusterSnapshotIdentifier field's value. +func (s *DeleteDBClusterSnapshotInput) SetDBClusterSnapshotIdentifier(v string) *DeleteDBClusterSnapshotInput { + s.DBClusterSnapshotIdentifier = &v + return s +} + +type DeleteDBClusterSnapshotOutput struct { + _ struct{} `type:"structure"` + + // Contains the details for an Amazon Neptune DB cluster snapshot + // + // This data type is used as a response element in the DescribeDBClusterSnapshots + // action. 
+ DBClusterSnapshot *DBClusterSnapshot `type:"structure"` +} + +// String returns the string representation +func (s DeleteDBClusterSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterSnapshotOutput) GoString() string { + return s.String() +} + +// SetDBClusterSnapshot sets the DBClusterSnapshot field's value. +func (s *DeleteDBClusterSnapshotOutput) SetDBClusterSnapshot(v *DBClusterSnapshot) *DeleteDBClusterSnapshotOutput { + s.DBClusterSnapshot = v + return s +} + +type DeleteDBInstanceInput struct { + _ struct{} `type:"structure"` + + // The DB instance identifier for the DB instance to be deleted. This parameter + // isn't case-sensitive. + // + // Constraints: + // + // * Must match the name of an existing DB instance. + // + // DBInstanceIdentifier is a required field + DBInstanceIdentifier *string `type:"string" required:"true"` + + // The DBSnapshotIdentifier of the new DBSnapshot created when SkipFinalSnapshot + // is set to false. + // + // Specifying this parameter and also setting the SkipFinalShapshot parameter + // to true results in an error. + // + // Constraints: + // + // * Must be 1 to 255 letters or numbers. + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + // + // * Cannot be specified when deleting a Read Replica. + FinalDBSnapshotIdentifier *string `type:"string"` + + // Determines whether a final DB snapshot is created before the DB instance + // is deleted. If true is specified, no DBSnapshot is created. If false is specified, + // a DB snapshot is created before the DB instance is deleted. + // + // Note that when a DB instance is in a failure state and has a status of 'failed', + // 'incompatible-restore', or 'incompatible-network', it can only be deleted + // when the SkipFinalSnapshot parameter is set to "true". + // + // Specify true when deleting a Read Replica. + // + // The FinalDBSnapshotIdentifier parameter must be specified if SkipFinalSnapshot + // is false. + // + // Default: false + SkipFinalSnapshot *bool `type:"boolean"` +} + +// String returns the string representation +func (s DeleteDBInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDBInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBInstanceInput"} + if s.DBInstanceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBInstanceIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *DeleteDBInstanceInput) SetDBInstanceIdentifier(v string) *DeleteDBInstanceInput { + s.DBInstanceIdentifier = &v + return s +} + +// SetFinalDBSnapshotIdentifier sets the FinalDBSnapshotIdentifier field's value. +func (s *DeleteDBInstanceInput) SetFinalDBSnapshotIdentifier(v string) *DeleteDBInstanceInput { + s.FinalDBSnapshotIdentifier = &v + return s +} + +// SetSkipFinalSnapshot sets the SkipFinalSnapshot field's value. 
+func (s *DeleteDBInstanceInput) SetSkipFinalSnapshot(v bool) *DeleteDBInstanceInput { + s.SkipFinalSnapshot = &v + return s +} + +type DeleteDBInstanceOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB instance. + // + // This data type is used as a response element in the DescribeDBInstances action. + DBInstance *DBInstance `type:"structure"` +} + +// String returns the string representation +func (s DeleteDBInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBInstanceOutput) GoString() string { + return s.String() +} + +// SetDBInstance sets the DBInstance field's value. +func (s *DeleteDBInstanceOutput) SetDBInstance(v *DBInstance) *DeleteDBInstanceOutput { + s.DBInstance = v + return s +} + +type DeleteDBParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the DB parameter group. + // + // Constraints: + // + // * Must be the name of an existing DB parameter group + // + // * You can't delete a default DB parameter group + // + // * Cannot be associated with any DB instances + // + // DBParameterGroupName is a required field + DBParameterGroupName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDBParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDBParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBParameterGroupInput"} + if s.DBParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *DeleteDBParameterGroupInput) SetDBParameterGroupName(v string) *DeleteDBParameterGroupInput { + s.DBParameterGroupName = &v + return s +} + +type DeleteDBParameterGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDBParameterGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBParameterGroupOutput) GoString() string { + return s.String() +} + +type DeleteDBSubnetGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the database subnet group to delete. + // + // You can't delete the default subnet group. + // + // Constraints: + // + // Constraints: Must match the name of an existing DBSubnetGroup. Must not be + // default. + // + // Example: mySubnetgroup + // + // DBSubnetGroupName is a required field + DBSubnetGroupName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDBSubnetGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBSubnetGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
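For `DeleteDBInstanceInput`, the field documentation above notes the opposite case: a Read Replica (or an instance in a failed state) must be deleted with `SkipFinalSnapshot` set to true and without a `FinalDBSnapshotIdentifier`. A minimal sketch, with the client setup and identifier as placeholder assumptions:

```go
package example

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/neptune"
)

func deleteReadReplica() {
	svc := neptune.New(session.Must(session.NewSession()))

	// Per the field documentation, skip the final snapshot for a Read Replica
	// and do not set FinalDBSnapshotIdentifier.
	if _, err := svc.DeleteDBInstance(&neptune.DeleteDBInstanceInput{
		DBInstanceIdentifier: aws.String("example-replica"),
		SkipFinalSnapshot:    aws.Bool(true),
	}); err != nil {
		log.Fatal(err)
	}
}
```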
+func (s *DeleteDBSubnetGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBSubnetGroupInput"} + if s.DBSubnetGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBSubnetGroupName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *DeleteDBSubnetGroupInput) SetDBSubnetGroupName(v string) *DeleteDBSubnetGroupInput { + s.DBSubnetGroupName = &v + return s +} + +type DeleteDBSubnetGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDBSubnetGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBSubnetGroupOutput) GoString() string { + return s.String() +} + +type DeleteEventSubscriptionInput struct { + _ struct{} `type:"structure"` + + // The name of the event notification subscription you want to delete. + // + // SubscriptionName is a required field + SubscriptionName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteEventSubscriptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEventSubscriptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteEventSubscriptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteEventSubscriptionInput"} + if s.SubscriptionName == nil { + invalidParams.Add(request.NewErrParamRequired("SubscriptionName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSubscriptionName sets the SubscriptionName field's value. +func (s *DeleteEventSubscriptionInput) SetSubscriptionName(v string) *DeleteEventSubscriptionInput { + s.SubscriptionName = &v + return s +} + +type DeleteEventSubscriptionOutput struct { + _ struct{} `type:"structure"` + + // Contains the results of a successful invocation of the DescribeEventSubscriptions + // action. + EventSubscription *EventSubscription `type:"structure"` +} + +// String returns the string representation +func (s DeleteEventSubscriptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEventSubscriptionOutput) GoString() string { + return s.String() +} + +// SetEventSubscription sets the EventSubscription field's value. +func (s *DeleteEventSubscriptionOutput) SetEventSubscription(v *EventSubscription) *DeleteEventSubscriptionOutput { + s.EventSubscription = v + return s +} + +type DescribeDBClusterParameterGroupsInput struct { + _ struct{} `type:"structure"` + + // The name of a specific DB cluster parameter group to return details for. + // + // Constraints: + // + // * If supplied, must match the name of an existing DBClusterParameterGroup. + DBClusterParameterGroupName *string `type:"string"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBClusterParameterGroups + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. 
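The generated `Validate` methods above report missing required fields as a `request.ErrInvalidParams` error before any request is sent; the SDK runs the same check when the operation is invoked. A small sketch calling it directly (the package and function names are illustrative):

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/neptune"
)

func validateBeforeSend() {
	// DBSubnetGroupName is a required field, so Validate returns a
	// client-side error without contacting the API.
	in := &neptune.DeleteDBSubnetGroupInput{}
	if err := in.Validate(); err != nil {
		fmt.Println(err) // the error names the missing DBSubnetGroupName parameter
	}
}
```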
If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeDBClusterParameterGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterParameterGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBClusterParameterGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBClusterParameterGroupsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *DescribeDBClusterParameterGroupsInput) SetDBClusterParameterGroupName(v string) *DescribeDBClusterParameterGroupsInput { + s.DBClusterParameterGroupName = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBClusterParameterGroupsInput) SetFilters(v []*Filter) *DescribeDBClusterParameterGroupsInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterParameterGroupsInput) SetMarker(v string) *DescribeDBClusterParameterGroupsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBClusterParameterGroupsInput) SetMaxRecords(v int64) *DescribeDBClusterParameterGroupsInput { + s.MaxRecords = &v + return s +} + +type DescribeDBClusterParameterGroupsOutput struct { + _ struct{} `type:"structure"` + + // A list of DB cluster parameter groups. + DBClusterParameterGroups []*DBClusterParameterGroup `locationNameList:"DBClusterParameterGroup" type:"list"` + + // An optional pagination token provided by a previous DescribeDBClusterParameterGroups + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBClusterParameterGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterParameterGroupsOutput) GoString() string { + return s.String() +} + +// SetDBClusterParameterGroups sets the DBClusterParameterGroups field's value. +func (s *DescribeDBClusterParameterGroupsOutput) SetDBClusterParameterGroups(v []*DBClusterParameterGroup) *DescribeDBClusterParameterGroupsOutput { + s.DBClusterParameterGroups = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterParameterGroupsOutput) SetMarker(v string) *DescribeDBClusterParameterGroupsOutput { + s.Marker = &v + return s +} + +type DescribeDBClusterParametersInput struct { + _ struct{} `type:"structure"` + + // The name of a specific DB cluster parameter group to return parameter details + // for. + // + // Constraints: + // + // * If supplied, must match the name of an existing DBClusterParameterGroup. 
+ // + // DBClusterParameterGroupName is a required field + DBClusterParameterGroupName *string `type:"string" required:"true"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBClusterParameters + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` + + // A value that indicates to return only parameters for a specific source. Parameter + // sources can be engine, service, or customer. + Source *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBClusterParametersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterParametersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBClusterParametersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBClusterParametersInput"} + if s.DBClusterParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterParameterGroupName")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *DescribeDBClusterParametersInput) SetDBClusterParameterGroupName(v string) *DescribeDBClusterParametersInput { + s.DBClusterParameterGroupName = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBClusterParametersInput) SetFilters(v []*Filter) *DescribeDBClusterParametersInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterParametersInput) SetMarker(v string) *DescribeDBClusterParametersInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBClusterParametersInput) SetMaxRecords(v int64) *DescribeDBClusterParametersInput { + s.MaxRecords = &v + return s +} + +// SetSource sets the Source field's value. +func (s *DescribeDBClusterParametersInput) SetSource(v string) *DescribeDBClusterParametersInput { + s.Source = &v + return s +} + +// Provides details about a DB cluster parameter group including the parameters +// in the DB cluster parameter group. +type DescribeDBClusterParametersOutput struct { + _ struct{} `type:"structure"` + + // An optional pagination token provided by a previous DescribeDBClusterParameters + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords . + Marker *string `type:"string"` + + // Provides a list of parameters for the DB cluster parameter group. 
+ Parameters []*Parameter `locationNameList:"Parameter" type:"list"` +} + +// String returns the string representation +func (s DescribeDBClusterParametersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterParametersOutput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterParametersOutput) SetMarker(v string) *DescribeDBClusterParametersOutput { + s.Marker = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *DescribeDBClusterParametersOutput) SetParameters(v []*Parameter) *DescribeDBClusterParametersOutput { + s.Parameters = v + return s +} + +type DescribeDBClusterSnapshotAttributesInput struct { + _ struct{} `type:"structure"` + + // The identifier for the DB cluster snapshot to describe the attributes for. + // + // DBClusterSnapshotIdentifier is a required field + DBClusterSnapshotIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeDBClusterSnapshotAttributesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterSnapshotAttributesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBClusterSnapshotAttributesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBClusterSnapshotAttributesInput"} + if s.DBClusterSnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterSnapshotIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterSnapshotIdentifier sets the DBClusterSnapshotIdentifier field's value. +func (s *DescribeDBClusterSnapshotAttributesInput) SetDBClusterSnapshotIdentifier(v string) *DescribeDBClusterSnapshotAttributesInput { + s.DBClusterSnapshotIdentifier = &v + return s +} + +type DescribeDBClusterSnapshotAttributesOutput struct { + _ struct{} `type:"structure"` + + // Contains the results of a successful call to the DescribeDBClusterSnapshotAttributes + // API action. + // + // Manual DB cluster snapshot attributes are used to authorize other AWS accounts + // to copy or restore a manual DB cluster snapshot. For more information, see + // the ModifyDBClusterSnapshotAttribute API action. + DBClusterSnapshotAttributesResult *DBClusterSnapshotAttributesResult `type:"structure"` +} + +// String returns the string representation +func (s DescribeDBClusterSnapshotAttributesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterSnapshotAttributesOutput) GoString() string { + return s.String() +} + +// SetDBClusterSnapshotAttributesResult sets the DBClusterSnapshotAttributesResult field's value. +func (s *DescribeDBClusterSnapshotAttributesOutput) SetDBClusterSnapshotAttributesResult(v *DBClusterSnapshotAttributesResult) *DescribeDBClusterSnapshotAttributesOutput { + s.DBClusterSnapshotAttributesResult = v + return s +} + +type DescribeDBClusterSnapshotsInput struct { + _ struct{} `type:"structure"` + + // The ID of the DB cluster to retrieve the list of DB cluster snapshots for. + // This parameter can't be used in conjunction with the DBClusterSnapshotIdentifier + // parameter. This parameter is not case-sensitive. 
+ // + // Constraints: + // + // * If supplied, must match the identifier of an existing DBCluster. + DBClusterIdentifier *string `type:"string"` + + // A specific DB cluster snapshot identifier to describe. This parameter can't + // be used in conjunction with the DBClusterIdentifier parameter. This value + // is stored as a lowercase string. + // + // Constraints: + // + // * If supplied, must match the identifier of an existing DBClusterSnapshot. + // + // * If this identifier is for an automated snapshot, the SnapshotType parameter + // must also be specified. + DBClusterSnapshotIdentifier *string `type:"string"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // True to include manual DB cluster snapshots that are public and can be copied + // or restored by any AWS account, and otherwise false. The default is false. + // The default is false. + // + // You can share a manual DB cluster snapshot as public by using the ModifyDBClusterSnapshotAttribute + // API action. + IncludePublic *bool `type:"boolean"` + + // True to include shared manual DB cluster snapshots from other AWS accounts + // that this AWS account has been given permission to copy or restore, and otherwise + // false. The default is false. + // + // You can give an AWS account permission to restore a manual DB cluster snapshot + // from another AWS account by the ModifyDBClusterSnapshotAttribute API action. + IncludeShared *bool `type:"boolean"` + + // An optional pagination token provided by a previous DescribeDBClusterSnapshots + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` + + // The type of DB cluster snapshots to be returned. You can specify one of the + // following values: + // + // * automated - Return all DB cluster snapshots that have been automatically + // taken by Amazon Neptune for my AWS account. + // + // * manual - Return all DB cluster snapshots that have been taken by my + // AWS account. + // + // * shared - Return all manual DB cluster snapshots that have been shared + // to my AWS account. + // + // * public - Return all DB cluster snapshots that have been marked as public. + // + // If you don't specify a SnapshotType value, then both automated and manual + // DB cluster snapshots are returned. You can include shared DB cluster snapshots + // with these results by setting the IncludeShared parameter to true. You can + // include public DB cluster snapshots with these results by setting the IncludePublic + // parameter to true. + // + // The IncludeShared and IncludePublic parameters don't apply for SnapshotType + // values of manual or automated. The IncludePublic parameter doesn't apply + // when SnapshotType is set to shared. The IncludeShared parameter doesn't apply + // when SnapshotType is set to public. 
+ SnapshotType *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBClusterSnapshotsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterSnapshotsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBClusterSnapshotsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBClusterSnapshotsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DescribeDBClusterSnapshotsInput) SetDBClusterIdentifier(v string) *DescribeDBClusterSnapshotsInput { + s.DBClusterIdentifier = &v + return s +} + +// SetDBClusterSnapshotIdentifier sets the DBClusterSnapshotIdentifier field's value. +func (s *DescribeDBClusterSnapshotsInput) SetDBClusterSnapshotIdentifier(v string) *DescribeDBClusterSnapshotsInput { + s.DBClusterSnapshotIdentifier = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBClusterSnapshotsInput) SetFilters(v []*Filter) *DescribeDBClusterSnapshotsInput { + s.Filters = v + return s +} + +// SetIncludePublic sets the IncludePublic field's value. +func (s *DescribeDBClusterSnapshotsInput) SetIncludePublic(v bool) *DescribeDBClusterSnapshotsInput { + s.IncludePublic = &v + return s +} + +// SetIncludeShared sets the IncludeShared field's value. +func (s *DescribeDBClusterSnapshotsInput) SetIncludeShared(v bool) *DescribeDBClusterSnapshotsInput { + s.IncludeShared = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterSnapshotsInput) SetMarker(v string) *DescribeDBClusterSnapshotsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBClusterSnapshotsInput) SetMaxRecords(v int64) *DescribeDBClusterSnapshotsInput { + s.MaxRecords = &v + return s +} + +// SetSnapshotType sets the SnapshotType field's value. +func (s *DescribeDBClusterSnapshotsInput) SetSnapshotType(v string) *DescribeDBClusterSnapshotsInput { + s.SnapshotType = &v + return s +} + +// Provides a list of DB cluster snapshots for the user as the result of a call +// to the DescribeDBClusterSnapshots action. +type DescribeDBClusterSnapshotsOutput struct { + _ struct{} `type:"structure"` + + // Provides a list of DB cluster snapshots for the user. + DBClusterSnapshots []*DBClusterSnapshot `locationNameList:"DBClusterSnapshot" type:"list"` + + // An optional pagination token provided by a previous DescribeDBClusterSnapshots + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBClusterSnapshotsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterSnapshotsOutput) GoString() string { + return s.String() +} + +// SetDBClusterSnapshots sets the DBClusterSnapshots field's value. 
+func (s *DescribeDBClusterSnapshotsOutput) SetDBClusterSnapshots(v []*DBClusterSnapshot) *DescribeDBClusterSnapshotsOutput { + s.DBClusterSnapshots = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterSnapshotsOutput) SetMarker(v string) *DescribeDBClusterSnapshotsOutput { + s.Marker = &v + return s +} + +type DescribeDBClustersInput struct { + _ struct{} `type:"structure"` + + // The user-supplied DB cluster identifier. If this parameter is specified, + // information from only the specific DB cluster is returned. This parameter + // isn't case-sensitive. + // + // Constraints: + // + // * If supplied, must match an existing DBClusterIdentifier. + DBClusterIdentifier *string `type:"string"` + + // A filter that specifies one or more DB clusters to describe. + // + // Supported filters: + // + // * db-cluster-id - Accepts DB cluster identifiers and DB cluster Amazon + // Resource Names (ARNs). The results list will only include information + // about the DB clusters identified by these ARNs. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBClusters request. + // If this parameter is specified, the response includes only records beyond + // the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeDBClustersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClustersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBClustersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBClustersInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DescribeDBClustersInput) SetDBClusterIdentifier(v string) *DescribeDBClustersInput { + s.DBClusterIdentifier = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBClustersInput) SetFilters(v []*Filter) *DescribeDBClustersInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClustersInput) SetMarker(v string) *DescribeDBClustersInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBClustersInput) SetMaxRecords(v int64) *DescribeDBClustersInput { + s.MaxRecords = &v + return s +} + +// Contains the result of a successful invocation of the DescribeDBClusters +// action. +type DescribeDBClustersOutput struct { + _ struct{} `type:"structure"` + + // Contains a list of DB clusters for the user. 
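The `MaxRecords`/`Marker` fields described above drive pagination: when more records exist than `MaxRecords`, the response carries a marker that is passed back on the next call. A hedged sketch of that loop for `DescribeDBClusterSnapshots`, listing manual snapshots (client setup and the chosen page size are assumptions for illustration):

```go
package example

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/neptune"
)

func listManualClusterSnapshots() {
	svc := neptune.New(session.Must(session.NewSession()))

	in := &neptune.DescribeDBClusterSnapshotsInput{
		SnapshotType: aws.String("manual"),
		MaxRecords:   aws.Int64(20), // smallest allowed page size, to show paging
	}
	for {
		out, err := svc.DescribeDBClusterSnapshots(in)
		if err != nil {
			log.Fatal(err)
		}
		for _, snap := range out.DBClusterSnapshots {
			log.Println(aws.StringValue(snap.DBClusterSnapshotIdentifier))
		}
		if out.Marker == nil {
			break // no marker in the response means this was the last page
		}
		in.Marker = out.Marker
	}
}
```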
+ DBClusters []*DBCluster `locationNameList:"DBCluster" type:"list"` + + // A pagination token that can be used in a subsequent DescribeDBClusters request. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBClustersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClustersOutput) GoString() string { + return s.String() +} + +// SetDBClusters sets the DBClusters field's value. +func (s *DescribeDBClustersOutput) SetDBClusters(v []*DBCluster) *DescribeDBClustersOutput { + s.DBClusters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClustersOutput) SetMarker(v string) *DescribeDBClustersOutput { + s.Marker = &v + return s +} + +type DescribeDBEngineVersionsInput struct { + _ struct{} `type:"structure"` + + // The name of a specific DB parameter group family to return details for. + // + // Constraints: + // + // * If supplied, must match an existing DBParameterGroupFamily. + DBParameterGroupFamily *string `type:"string"` + + // Indicates that only the default version of the specified engine or engine + // and major version combination is returned. + DefaultOnly *bool `type:"boolean"` + + // The database engine to return. + Engine *string `type:"string"` + + // The database engine version to return. + // + // Example: 5.1.49 + EngineVersion *string `type:"string"` + + // Not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // If this parameter is specified and the requested engine supports the CharacterSetName + // parameter for CreateDBInstance, the response includes a list of supported + // character sets for each engine version. + ListSupportedCharacterSets *bool `type:"boolean"` + + // If this parameter is specified and the requested engine supports the TimeZone + // parameter for CreateDBInstance, the response includes a list of supported + // time zones for each engine version. + ListSupportedTimezones *bool `type:"boolean"` + + // An optional pagination token provided by a previous request. If this parameter + // is specified, the response includes only records beyond the marker, up to + // the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more than the + // MaxRecords value is available, a pagination token called a marker is included + // in the response so that the following results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeDBEngineVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBEngineVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBEngineVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBEngineVersionsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. 
+func (s *DescribeDBEngineVersionsInput) SetDBParameterGroupFamily(v string) *DescribeDBEngineVersionsInput { + s.DBParameterGroupFamily = &v + return s +} + +// SetDefaultOnly sets the DefaultOnly field's value. +func (s *DescribeDBEngineVersionsInput) SetDefaultOnly(v bool) *DescribeDBEngineVersionsInput { + s.DefaultOnly = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *DescribeDBEngineVersionsInput) SetEngine(v string) *DescribeDBEngineVersionsInput { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *DescribeDBEngineVersionsInput) SetEngineVersion(v string) *DescribeDBEngineVersionsInput { + s.EngineVersion = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBEngineVersionsInput) SetFilters(v []*Filter) *DescribeDBEngineVersionsInput { + s.Filters = v + return s +} + +// SetListSupportedCharacterSets sets the ListSupportedCharacterSets field's value. +func (s *DescribeDBEngineVersionsInput) SetListSupportedCharacterSets(v bool) *DescribeDBEngineVersionsInput { + s.ListSupportedCharacterSets = &v + return s +} + +// SetListSupportedTimezones sets the ListSupportedTimezones field's value. +func (s *DescribeDBEngineVersionsInput) SetListSupportedTimezones(v bool) *DescribeDBEngineVersionsInput { + s.ListSupportedTimezones = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBEngineVersionsInput) SetMarker(v string) *DescribeDBEngineVersionsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBEngineVersionsInput) SetMaxRecords(v int64) *DescribeDBEngineVersionsInput { + s.MaxRecords = &v + return s +} + +// Contains the result of a successful invocation of the DescribeDBEngineVersions +// action. +type DescribeDBEngineVersionsOutput struct { + _ struct{} `type:"structure"` + + // A list of DBEngineVersion elements. + DBEngineVersions []*DBEngineVersion `locationNameList:"DBEngineVersion" type:"list"` + + // An optional pagination token provided by a previous request. If this parameter + // is specified, the response includes only records beyond the marker, up to + // the value specified by MaxRecords. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBEngineVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBEngineVersionsOutput) GoString() string { + return s.String() +} + +// SetDBEngineVersions sets the DBEngineVersions field's value. +func (s *DescribeDBEngineVersionsOutput) SetDBEngineVersions(v []*DBEngineVersion) *DescribeDBEngineVersionsOutput { + s.DBEngineVersions = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBEngineVersionsOutput) SetMarker(v string) *DescribeDBEngineVersionsOutput { + s.Marker = &v + return s +} + +type DescribeDBInstancesInput struct { + _ struct{} `type:"structure"` + + // The user-supplied instance identifier. If this parameter is specified, information + // from only the specific DB instance is returned. This parameter isn't case-sensitive. + // + // Constraints: + // + // * If supplied, must match the identifier of an existing DBInstance. + DBInstanceIdentifier *string `type:"string"` + + // A filter that specifies one or more DB instances to describe. + // + // Supported filters: + // + // * db-cluster-id - Accepts DB cluster identifiers and DB cluster Amazon + // Resource Names (ARNs). 
The results list will only include information + // about the DB instances associated with the DB clusters identified by these + // ARNs. + // + // * db-instance-id - Accepts DB instance identifiers and DB instance Amazon + // Resource Names (ARNs). The results list will only include information + // about the DB instances identified by these ARNs. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBInstances request. + // If this parameter is specified, the response includes only records beyond + // the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeDBInstancesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBInstancesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBInstancesInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *DescribeDBInstancesInput) SetDBInstanceIdentifier(v string) *DescribeDBInstancesInput { + s.DBInstanceIdentifier = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBInstancesInput) SetFilters(v []*Filter) *DescribeDBInstancesInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBInstancesInput) SetMarker(v string) *DescribeDBInstancesInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBInstancesInput) SetMaxRecords(v int64) *DescribeDBInstancesInput { + s.MaxRecords = &v + return s +} + +// Contains the result of a successful invocation of the DescribeDBInstances +// action. +type DescribeDBInstancesOutput struct { + _ struct{} `type:"structure"` + + // A list of DBInstance instances. + DBInstances []*DBInstance `locationNameList:"DBInstance" type:"list"` + + // An optional pagination token provided by a previous request. If this parameter + // is specified, the response includes only records beyond the marker, up to + // the value specified by MaxRecords . + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBInstancesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBInstancesOutput) GoString() string { + return s.String() +} + +// SetDBInstances sets the DBInstances field's value. +func (s *DescribeDBInstancesOutput) SetDBInstances(v []*DBInstance) *DescribeDBInstancesOutput { + s.DBInstances = v + return s +} + +// SetMarker sets the Marker field's value. 
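The `db-cluster-id` filter documented above restricts `DescribeDBInstances` to the members of one cluster, and because every response field is a pointer, the `aws.*Value` helpers are the usual way to read them. A minimal sketch under those assumptions (client setup, names, and the cluster identifier are placeholders):

```go
package example

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/neptune"
)

func listClusterMembers() {
	svc := neptune.New(session.Must(session.NewSession()))

	// db-cluster-id accepts a cluster identifier or ARN and limits the results
	// to the DB instances that belong to that cluster.
	out, err := svc.DescribeDBInstances(&neptune.DescribeDBInstancesInput{
		Filters: []*neptune.Filter{{
			Name:   aws.String("db-cluster-id"),
			Values: []*string{aws.String("example-cluster")},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, db := range out.DBInstances {
		// The aws.*Value helpers return the zero value for nil pointers,
		// which avoids explicit nil checks when printing.
		log.Printf("%s class=%s encrypted=%t",
			aws.StringValue(db.DBInstanceIdentifier),
			aws.StringValue(db.DBInstanceClass),
			aws.BoolValue(db.StorageEncrypted))
	}
}
```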
+func (s *DescribeDBInstancesOutput) SetMarker(v string) *DescribeDBInstancesOutput { + s.Marker = &v + return s +} + +type DescribeDBParameterGroupsInput struct { + _ struct{} `type:"structure"` + + // The name of a specific DB parameter group to return details for. + // + // Constraints: + // + // * If supplied, must match the name of an existing DBClusterParameterGroup. + DBParameterGroupName *string `type:"string"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBParameterGroups + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeDBParameterGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBParameterGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBParameterGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBParameterGroupsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *DescribeDBParameterGroupsInput) SetDBParameterGroupName(v string) *DescribeDBParameterGroupsInput { + s.DBParameterGroupName = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBParameterGroupsInput) SetFilters(v []*Filter) *DescribeDBParameterGroupsInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBParameterGroupsInput) SetMarker(v string) *DescribeDBParameterGroupsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBParameterGroupsInput) SetMaxRecords(v int64) *DescribeDBParameterGroupsInput { + s.MaxRecords = &v + return s +} + +// Contains the result of a successful invocation of the DescribeDBParameterGroups +// action. +type DescribeDBParameterGroupsOutput struct { + _ struct{} `type:"structure"` + + // A list of DBParameterGroup instances. + DBParameterGroups []*DBParameterGroup `locationNameList:"DBParameterGroup" type:"list"` + + // An optional pagination token provided by a previous request. If this parameter + // is specified, the response includes only records beyond the marker, up to + // the value specified by MaxRecords. 
+ Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBParameterGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBParameterGroupsOutput) GoString() string { + return s.String() +} + +// SetDBParameterGroups sets the DBParameterGroups field's value. +func (s *DescribeDBParameterGroupsOutput) SetDBParameterGroups(v []*DBParameterGroup) *DescribeDBParameterGroupsOutput { + s.DBParameterGroups = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBParameterGroupsOutput) SetMarker(v string) *DescribeDBParameterGroupsOutput { + s.Marker = &v + return s +} + +type DescribeDBParametersInput struct { + _ struct{} `type:"structure"` + + // The name of a specific DB parameter group to return details for. + // + // Constraints: + // + // * If supplied, must match the name of an existing DBParameterGroup. + // + // DBParameterGroupName is a required field + DBParameterGroupName *string `type:"string" required:"true"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBParameters + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` + + // The parameter types to return. + // + // Default: All parameter types returned + // + // Valid Values: user | system | engine-default + Source *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBParametersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBParametersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBParametersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBParametersInput"} + if s.DBParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupName")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *DescribeDBParametersInput) SetDBParameterGroupName(v string) *DescribeDBParametersInput { + s.DBParameterGroupName = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBParametersInput) SetFilters(v []*Filter) *DescribeDBParametersInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBParametersInput) SetMarker(v string) *DescribeDBParametersInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. 
+func (s *DescribeDBParametersInput) SetMaxRecords(v int64) *DescribeDBParametersInput { + s.MaxRecords = &v + return s +} + +// SetSource sets the Source field's value. +func (s *DescribeDBParametersInput) SetSource(v string) *DescribeDBParametersInput { + s.Source = &v + return s +} + +// Contains the result of a successful invocation of the DescribeDBParameters +// action. +type DescribeDBParametersOutput struct { + _ struct{} `type:"structure"` + + // An optional pagination token provided by a previous request. If this parameter + // is specified, the response includes only records beyond the marker, up to + // the value specified by MaxRecords. + Marker *string `type:"string"` + + // A list of Parameter values. + Parameters []*Parameter `locationNameList:"Parameter" type:"list"` +} + +// String returns the string representation +func (s DescribeDBParametersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBParametersOutput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBParametersOutput) SetMarker(v string) *DescribeDBParametersOutput { + s.Marker = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *DescribeDBParametersOutput) SetParameters(v []*Parameter) *DescribeDBParametersOutput { + s.Parameters = v + return s +} + +type DescribeDBSubnetGroupsInput struct { + _ struct{} `type:"structure"` + + // The name of the DB subnet group to return details for. + DBSubnetGroupName *string `type:"string"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBSubnetGroups + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeDBSubnetGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBSubnetGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBSubnetGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBSubnetGroupsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *DescribeDBSubnetGroupsInput) SetDBSubnetGroupName(v string) *DescribeDBSubnetGroupsInput { + s.DBSubnetGroupName = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBSubnetGroupsInput) SetFilters(v []*Filter) *DescribeDBSubnetGroupsInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. 
+func (s *DescribeDBSubnetGroupsInput) SetMarker(v string) *DescribeDBSubnetGroupsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBSubnetGroupsInput) SetMaxRecords(v int64) *DescribeDBSubnetGroupsInput { + s.MaxRecords = &v + return s +} + +// Contains the result of a successful invocation of the DescribeDBSubnetGroups +// action. +type DescribeDBSubnetGroupsOutput struct { + _ struct{} `type:"structure"` + + // A list of DBSubnetGroup instances. + DBSubnetGroups []*DBSubnetGroup `locationNameList:"DBSubnetGroup" type:"list"` + + // An optional pagination token provided by a previous request. If this parameter + // is specified, the response includes only records beyond the marker, up to + // the value specified by MaxRecords. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBSubnetGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBSubnetGroupsOutput) GoString() string { + return s.String() +} + +// SetDBSubnetGroups sets the DBSubnetGroups field's value. +func (s *DescribeDBSubnetGroupsOutput) SetDBSubnetGroups(v []*DBSubnetGroup) *DescribeDBSubnetGroupsOutput { + s.DBSubnetGroups = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBSubnetGroupsOutput) SetMarker(v string) *DescribeDBSubnetGroupsOutput { + s.Marker = &v + return s +} + +type DescribeEngineDefaultClusterParametersInput struct { + _ struct{} `type:"structure"` + + // The name of the DB cluster parameter group family to return engine parameter + // information for. + // + // DBParameterGroupFamily is a required field + DBParameterGroupFamily *string `type:"string" required:"true"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeEngineDefaultClusterParameters + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeEngineDefaultClusterParametersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEngineDefaultClusterParametersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeEngineDefaultClusterParametersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEngineDefaultClusterParametersInput"} + if s.DBParameterGroupFamily == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupFamily")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. +func (s *DescribeEngineDefaultClusterParametersInput) SetDBParameterGroupFamily(v string) *DescribeEngineDefaultClusterParametersInput { + s.DBParameterGroupFamily = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeEngineDefaultClusterParametersInput) SetFilters(v []*Filter) *DescribeEngineDefaultClusterParametersInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeEngineDefaultClusterParametersInput) SetMarker(v string) *DescribeEngineDefaultClusterParametersInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeEngineDefaultClusterParametersInput) SetMaxRecords(v int64) *DescribeEngineDefaultClusterParametersInput { + s.MaxRecords = &v + return s +} + +type DescribeEngineDefaultClusterParametersOutput struct { + _ struct{} `type:"structure"` + + // Contains the result of a successful invocation of the DescribeEngineDefaultParameters + // action. + EngineDefaults *EngineDefaults `type:"structure"` +} + +// String returns the string representation +func (s DescribeEngineDefaultClusterParametersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEngineDefaultClusterParametersOutput) GoString() string { + return s.String() +} + +// SetEngineDefaults sets the EngineDefaults field's value. +func (s *DescribeEngineDefaultClusterParametersOutput) SetEngineDefaults(v *EngineDefaults) *DescribeEngineDefaultClusterParametersOutput { + s.EngineDefaults = v + return s +} + +type DescribeEngineDefaultParametersInput struct { + _ struct{} `type:"structure"` + + // The name of the DB parameter group family. + // + // DBParameterGroupFamily is a required field + DBParameterGroupFamily *string `type:"string" required:"true"` + + // Not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeEngineDefaultParameters + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. 
+ MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeEngineDefaultParametersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEngineDefaultParametersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeEngineDefaultParametersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEngineDefaultParametersInput"} + if s.DBParameterGroupFamily == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupFamily")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. +func (s *DescribeEngineDefaultParametersInput) SetDBParameterGroupFamily(v string) *DescribeEngineDefaultParametersInput { + s.DBParameterGroupFamily = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeEngineDefaultParametersInput) SetFilters(v []*Filter) *DescribeEngineDefaultParametersInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeEngineDefaultParametersInput) SetMarker(v string) *DescribeEngineDefaultParametersInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeEngineDefaultParametersInput) SetMaxRecords(v int64) *DescribeEngineDefaultParametersInput { + s.MaxRecords = &v + return s +} + +type DescribeEngineDefaultParametersOutput struct { + _ struct{} `type:"structure"` + + // Contains the result of a successful invocation of the DescribeEngineDefaultParameters + // action. + EngineDefaults *EngineDefaults `type:"structure"` +} + +// String returns the string representation +func (s DescribeEngineDefaultParametersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEngineDefaultParametersOutput) GoString() string { + return s.String() +} + +// SetEngineDefaults sets the EngineDefaults field's value. +func (s *DescribeEngineDefaultParametersOutput) SetEngineDefaults(v *EngineDefaults) *DescribeEngineDefaultParametersOutput { + s.EngineDefaults = v + return s +} + +type DescribeEventCategoriesInput struct { + _ struct{} `type:"structure"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // The type of source that is generating the events. + // + // Valid values: db-instance | db-parameter-group | db-security-group | db-snapshot + SourceType *string `type:"string"` +} + +// String returns the string representation +func (s DescribeEventCategoriesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEventCategoriesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeEventCategoriesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEventCategoriesInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeEventCategoriesInput) SetFilters(v []*Filter) *DescribeEventCategoriesInput { + s.Filters = v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *DescribeEventCategoriesInput) SetSourceType(v string) *DescribeEventCategoriesInput { + s.SourceType = &v + return s +} + +// Data returned from the DescribeEventCategories action. +type DescribeEventCategoriesOutput struct { + _ struct{} `type:"structure"` + + // A list of EventCategoriesMap data types. + EventCategoriesMapList []*EventCategoriesMap `locationNameList:"EventCategoriesMap" type:"list"` +} + +// String returns the string representation +func (s DescribeEventCategoriesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEventCategoriesOutput) GoString() string { + return s.String() +} + +// SetEventCategoriesMapList sets the EventCategoriesMapList field's value. +func (s *DescribeEventCategoriesOutput) SetEventCategoriesMapList(v []*EventCategoriesMap) *DescribeEventCategoriesOutput { + s.EventCategoriesMapList = v + return s +} + +type DescribeEventSubscriptionsInput struct { + _ struct{} `type:"structure"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeOrderableDBInstanceOptions + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords . + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` + + // The name of the event notification subscription you want to describe. + SubscriptionName *string `type:"string"` +} + +// String returns the string representation +func (s DescribeEventSubscriptionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEventSubscriptionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeEventSubscriptionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEventSubscriptionsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. 
+func (s *DescribeEventSubscriptionsInput) SetFilters(v []*Filter) *DescribeEventSubscriptionsInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeEventSubscriptionsInput) SetMarker(v string) *DescribeEventSubscriptionsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeEventSubscriptionsInput) SetMaxRecords(v int64) *DescribeEventSubscriptionsInput { + s.MaxRecords = &v + return s +} + +// SetSubscriptionName sets the SubscriptionName field's value. +func (s *DescribeEventSubscriptionsInput) SetSubscriptionName(v string) *DescribeEventSubscriptionsInput { + s.SubscriptionName = &v + return s +} + +// Data returned by the DescribeEventSubscriptions action. +type DescribeEventSubscriptionsOutput struct { + _ struct{} `type:"structure"` + + // A list of EventSubscriptions data types. + EventSubscriptionsList []*EventSubscription `locationNameList:"EventSubscription" type:"list"` + + // An optional pagination token provided by a previous DescribeOrderableDBInstanceOptions + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeEventSubscriptionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEventSubscriptionsOutput) GoString() string { + return s.String() +} + +// SetEventSubscriptionsList sets the EventSubscriptionsList field's value. +func (s *DescribeEventSubscriptionsOutput) SetEventSubscriptionsList(v []*EventSubscription) *DescribeEventSubscriptionsOutput { + s.EventSubscriptionsList = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeEventSubscriptionsOutput) SetMarker(v string) *DescribeEventSubscriptionsOutput { + s.Marker = &v + return s +} + +type DescribeEventsInput struct { + _ struct{} `type:"structure"` + + // The number of minutes to retrieve events for. + // + // Default: 60 + Duration *int64 `type:"integer"` + + // The end of the time interval for which to retrieve events, specified in ISO + // 8601 format. For more information about ISO 8601, go to the ISO8601 Wikipedia + // page. (http://en.wikipedia.org/wiki/ISO_8601) + // + // Example: 2009-07-08T18:00Z + EndTime *time.Time `type:"timestamp"` + + // A list of event categories that trigger notifications for a event notification + // subscription. + EventCategories []*string `locationNameList:"EventCategory" type:"list"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeEvents request. + // If this parameter is specified, the response includes only records beyond + // the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` + + // The identifier of the event source for which events are returned. If not + // specified, then all sources are included in the response. 
+ // + // Constraints: + // + // * If SourceIdentifier is supplied, SourceType must also be provided. + // + // * If the source type is DBInstance, then a DBInstanceIdentifier must be + // supplied. + // + // * If the source type is DBSecurityGroup, a DBSecurityGroupName must be + // supplied. + // + // * If the source type is DBParameterGroup, a DBParameterGroupName must + // be supplied. + // + // * If the source type is DBSnapshot, a DBSnapshotIdentifier must be supplied. + // + // * Cannot end with a hyphen or contain two consecutive hyphens. + SourceIdentifier *string `type:"string"` + + // The event source to retrieve events for. If no value is specified, all events + // are returned. + SourceType *string `type:"string" enum:"SourceType"` + + // The beginning of the time interval to retrieve events for, specified in ISO + // 8601 format. For more information about ISO 8601, go to the ISO8601 Wikipedia + // page. (http://en.wikipedia.org/wiki/ISO_8601) + // + // Example: 2009-07-08T18:00Z + StartTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s DescribeEventsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEventsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEventsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDuration sets the Duration field's value. +func (s *DescribeEventsInput) SetDuration(v int64) *DescribeEventsInput { + s.Duration = &v + return s +} + +// SetEndTime sets the EndTime field's value. +func (s *DescribeEventsInput) SetEndTime(v time.Time) *DescribeEventsInput { + s.EndTime = &v + return s +} + +// SetEventCategories sets the EventCategories field's value. +func (s *DescribeEventsInput) SetEventCategories(v []*string) *DescribeEventsInput { + s.EventCategories = v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeEventsInput) SetFilters(v []*Filter) *DescribeEventsInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeEventsInput) SetMarker(v string) *DescribeEventsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeEventsInput) SetMaxRecords(v int64) *DescribeEventsInput { + s.MaxRecords = &v + return s +} + +// SetSourceIdentifier sets the SourceIdentifier field's value. +func (s *DescribeEventsInput) SetSourceIdentifier(v string) *DescribeEventsInput { + s.SourceIdentifier = &v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *DescribeEventsInput) SetSourceType(v string) *DescribeEventsInput { + s.SourceType = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *DescribeEventsInput) SetStartTime(v time.Time) *DescribeEventsInput { + s.StartTime = &v + return s +} + +// Contains the result of a successful invocation of the DescribeEvents action. +type DescribeEventsOutput struct { + _ struct{} `type:"structure"` + + // A list of Event instances. 
+ Events []*Event `locationNameList:"Event" type:"list"` + + // An optional pagination token provided by a previous Events request. If this + // parameter is specified, the response includes only records beyond the marker, + // up to the value specified by MaxRecords . + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeEventsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEventsOutput) GoString() string { + return s.String() +} + +// SetEvents sets the Events field's value. +func (s *DescribeEventsOutput) SetEvents(v []*Event) *DescribeEventsOutput { + s.Events = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeEventsOutput) SetMarker(v string) *DescribeEventsOutput { + s.Marker = &v + return s +} + +type DescribeOrderableDBInstanceOptionsInput struct { + _ struct{} `type:"structure"` + + // The DB instance class filter value. Specify this parameter to show only the + // available offerings matching the specified DB instance class. + DBInstanceClass *string `type:"string"` + + // The name of the engine to retrieve DB instance options for. + // + // Engine is a required field + Engine *string `type:"string" required:"true"` + + // The engine version filter value. Specify this parameter to show only the + // available offerings matching the specified engine version. + EngineVersion *string `type:"string"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // The license model filter value. Specify this parameter to show only the available + // offerings matching the specified license model. + LicenseModel *string `type:"string"` + + // An optional pagination token provided by a previous DescribeOrderableDBInstanceOptions + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords . + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` + + // The VPC filter value. Specify this parameter to show only the available VPC + // or non-VPC offerings. + Vpc *bool `type:"boolean"` +} + +// String returns the string representation +func (s DescribeOrderableDBInstanceOptionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeOrderableDBInstanceOptionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeOrderableDBInstanceOptionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeOrderableDBInstanceOptionsInput"} + if s.Engine == nil { + invalidParams.Add(request.NewErrParamRequired("Engine")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBInstanceClass sets the DBInstanceClass field's value. 
+func (s *DescribeOrderableDBInstanceOptionsInput) SetDBInstanceClass(v string) *DescribeOrderableDBInstanceOptionsInput { + s.DBInstanceClass = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *DescribeOrderableDBInstanceOptionsInput) SetEngine(v string) *DescribeOrderableDBInstanceOptionsInput { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *DescribeOrderableDBInstanceOptionsInput) SetEngineVersion(v string) *DescribeOrderableDBInstanceOptionsInput { + s.EngineVersion = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeOrderableDBInstanceOptionsInput) SetFilters(v []*Filter) *DescribeOrderableDBInstanceOptionsInput { + s.Filters = v + return s +} + +// SetLicenseModel sets the LicenseModel field's value. +func (s *DescribeOrderableDBInstanceOptionsInput) SetLicenseModel(v string) *DescribeOrderableDBInstanceOptionsInput { + s.LicenseModel = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeOrderableDBInstanceOptionsInput) SetMarker(v string) *DescribeOrderableDBInstanceOptionsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeOrderableDBInstanceOptionsInput) SetMaxRecords(v int64) *DescribeOrderableDBInstanceOptionsInput { + s.MaxRecords = &v + return s +} + +// SetVpc sets the Vpc field's value. +func (s *DescribeOrderableDBInstanceOptionsInput) SetVpc(v bool) *DescribeOrderableDBInstanceOptionsInput { + s.Vpc = &v + return s +} + +// Contains the result of a successful invocation of the DescribeOrderableDBInstanceOptions +// action. +type DescribeOrderableDBInstanceOptionsOutput struct { + _ struct{} `type:"structure"` + + // An optional pagination token provided by a previous OrderableDBInstanceOptions + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords . + Marker *string `type:"string"` + + // An OrderableDBInstanceOption structure containing information about orderable + // options for the DB instance. + OrderableDBInstanceOptions []*OrderableDBInstanceOption `locationNameList:"OrderableDBInstanceOption" type:"list"` +} + +// String returns the string representation +func (s DescribeOrderableDBInstanceOptionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeOrderableDBInstanceOptionsOutput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *DescribeOrderableDBInstanceOptionsOutput) SetMarker(v string) *DescribeOrderableDBInstanceOptionsOutput { + s.Marker = &v + return s +} + +// SetOrderableDBInstanceOptions sets the OrderableDBInstanceOptions field's value. +func (s *DescribeOrderableDBInstanceOptionsOutput) SetOrderableDBInstanceOptions(v []*OrderableDBInstanceOption) *DescribeOrderableDBInstanceOptionsOutput { + s.OrderableDBInstanceOptions = v + return s +} + +type DescribePendingMaintenanceActionsInput struct { + _ struct{} `type:"structure"` + + // A filter that specifies one or more resources to return pending maintenance + // actions for. + // + // Supported filters: + // + // * db-cluster-id - Accepts DB cluster identifiers and DB cluster Amazon + // Resource Names (ARNs). The results list will only include pending maintenance + // actions for the DB clusters identified by these ARNs. 
+ // + // * db-instance-id - Accepts DB instance identifiers and DB instance ARNs. + // The results list will only include pending maintenance actions for the + // DB instances identified by these ARNs. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribePendingMaintenanceActions + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to a number of records specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` + + // The ARN of a resource to return pending maintenance actions for. + ResourceIdentifier *string `type:"string"` +} + +// String returns the string representation +func (s DescribePendingMaintenanceActionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePendingMaintenanceActionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribePendingMaintenanceActionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribePendingMaintenanceActionsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribePendingMaintenanceActionsInput) SetFilters(v []*Filter) *DescribePendingMaintenanceActionsInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribePendingMaintenanceActionsInput) SetMarker(v string) *DescribePendingMaintenanceActionsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribePendingMaintenanceActionsInput) SetMaxRecords(v int64) *DescribePendingMaintenanceActionsInput { + s.MaxRecords = &v + return s +} + +// SetResourceIdentifier sets the ResourceIdentifier field's value. +func (s *DescribePendingMaintenanceActionsInput) SetResourceIdentifier(v string) *DescribePendingMaintenanceActionsInput { + s.ResourceIdentifier = &v + return s +} + +// Data returned from the DescribePendingMaintenanceActions action. +type DescribePendingMaintenanceActionsOutput struct { + _ struct{} `type:"structure"` + + // An optional pagination token provided by a previous DescribePendingMaintenanceActions + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to a number of records specified by MaxRecords. + Marker *string `type:"string"` + + // A list of the pending maintenance actions for the resource. 
+ PendingMaintenanceActions []*ResourcePendingMaintenanceActions `locationNameList:"ResourcePendingMaintenanceActions" type:"list"` +} + +// String returns the string representation +func (s DescribePendingMaintenanceActionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePendingMaintenanceActionsOutput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *DescribePendingMaintenanceActionsOutput) SetMarker(v string) *DescribePendingMaintenanceActionsOutput { + s.Marker = &v + return s +} + +// SetPendingMaintenanceActions sets the PendingMaintenanceActions field's value. +func (s *DescribePendingMaintenanceActionsOutput) SetPendingMaintenanceActions(v []*ResourcePendingMaintenanceActions) *DescribePendingMaintenanceActionsOutput { + s.PendingMaintenanceActions = v + return s +} + +type DescribeValidDBInstanceModificationsInput struct { + _ struct{} `type:"structure"` + + // The customer identifier or the ARN of your DB instance. + // + // DBInstanceIdentifier is a required field + DBInstanceIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeValidDBInstanceModificationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeValidDBInstanceModificationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeValidDBInstanceModificationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeValidDBInstanceModificationsInput"} + if s.DBInstanceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBInstanceIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *DescribeValidDBInstanceModificationsInput) SetDBInstanceIdentifier(v string) *DescribeValidDBInstanceModificationsInput { + s.DBInstanceIdentifier = &v + return s +} + +type DescribeValidDBInstanceModificationsOutput struct { + _ struct{} `type:"structure"` + + // Information about valid modifications that you can make to your DB instance. + // Contains the result of a successful call to the DescribeValidDBInstanceModifications + // action. You can use this information when you call ModifyDBInstance. + ValidDBInstanceModificationsMessage *ValidDBInstanceModificationsMessage `type:"structure"` +} + +// String returns the string representation +func (s DescribeValidDBInstanceModificationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeValidDBInstanceModificationsOutput) GoString() string { + return s.String() +} + +// SetValidDBInstanceModificationsMessage sets the ValidDBInstanceModificationsMessage field's value. +func (s *DescribeValidDBInstanceModificationsOutput) SetValidDBInstanceModificationsMessage(v *ValidDBInstanceModificationsMessage) *DescribeValidDBInstanceModificationsOutput { + s.ValidDBInstanceModificationsMessage = v + return s +} + +// An Active Directory Domain membership record associated with the DB instance. +type DomainMembership struct { + _ struct{} `type:"structure"` + + // The identifier of the Active Directory Domain. + Domain *string `type:"string"` + + // The fully qualified domain name of the Active Directory Domain. 
+ FQDN *string `type:"string"` + + // The name of the IAM role to be used when making API calls to the Directory + // Service. + IAMRoleName *string `type:"string"` + + // The status of the DB instance's Active Directory Domain membership, such + // as joined, pending-join, failed etc). + Status *string `type:"string"` +} + +// String returns the string representation +func (s DomainMembership) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DomainMembership) GoString() string { + return s.String() +} + +// SetDomain sets the Domain field's value. +func (s *DomainMembership) SetDomain(v string) *DomainMembership { + s.Domain = &v + return s +} + +// SetFQDN sets the FQDN field's value. +func (s *DomainMembership) SetFQDN(v string) *DomainMembership { + s.FQDN = &v + return s +} + +// SetIAMRoleName sets the IAMRoleName field's value. +func (s *DomainMembership) SetIAMRoleName(v string) *DomainMembership { + s.IAMRoleName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DomainMembership) SetStatus(v string) *DomainMembership { + s.Status = &v + return s +} + +// A range of double values. +type DoubleRange struct { + _ struct{} `type:"structure"` + + // The minimum value in the range. + From *float64 `type:"double"` + + // The maximum value in the range. + To *float64 `type:"double"` +} + +// String returns the string representation +func (s DoubleRange) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DoubleRange) GoString() string { + return s.String() +} + +// SetFrom sets the From field's value. +func (s *DoubleRange) SetFrom(v float64) *DoubleRange { + s.From = &v + return s +} + +// SetTo sets the To field's value. +func (s *DoubleRange) SetTo(v float64) *DoubleRange { + s.To = &v + return s +} + +// This data type is used as a response element in the following actions: +// +// * CreateDBInstance +// +// * DescribeDBInstances +// +// * DeleteDBInstance +type Endpoint struct { + _ struct{} `type:"structure"` + + // Specifies the DNS address of the DB instance. + Address *string `type:"string"` + + // Specifies the ID that Amazon Route 53 assigns when you create a hosted zone. + HostedZoneId *string `type:"string"` + + // Specifies the port that the database engine is listening on. + Port *int64 `type:"integer"` +} + +// String returns the string representation +func (s Endpoint) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Endpoint) GoString() string { + return s.String() +} + +// SetAddress sets the Address field's value. +func (s *Endpoint) SetAddress(v string) *Endpoint { + s.Address = &v + return s +} + +// SetHostedZoneId sets the HostedZoneId field's value. +func (s *Endpoint) SetHostedZoneId(v string) *Endpoint { + s.HostedZoneId = &v + return s +} + +// SetPort sets the Port field's value. +func (s *Endpoint) SetPort(v int64) *Endpoint { + s.Port = &v + return s +} + +// Contains the result of a successful invocation of the DescribeEngineDefaultParameters +// action. +type EngineDefaults struct { + _ struct{} `type:"structure"` + + // Specifies the name of the DB parameter group family that the engine default + // parameters apply to. + DBParameterGroupFamily *string `type:"string"` + + // An optional pagination token provided by a previous EngineDefaults request. 
+ // If this parameter is specified, the response includes only records beyond + // the marker, up to the value specified by MaxRecords . + Marker *string `type:"string"` + + // Contains a list of engine default parameters. + Parameters []*Parameter `locationNameList:"Parameter" type:"list"` +} + +// String returns the string representation +func (s EngineDefaults) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EngineDefaults) GoString() string { + return s.String() +} + +// SetDBParameterGroupFamily sets the DBParameterGroupFamily field's value. +func (s *EngineDefaults) SetDBParameterGroupFamily(v string) *EngineDefaults { + s.DBParameterGroupFamily = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *EngineDefaults) SetMarker(v string) *EngineDefaults { + s.Marker = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *EngineDefaults) SetParameters(v []*Parameter) *EngineDefaults { + s.Parameters = v + return s +} + +// This data type is used as a response element in the DescribeEvents action. +type Event struct { + _ struct{} `type:"structure"` + + // Specifies the date and time of the event. + Date *time.Time `type:"timestamp"` + + // Specifies the category for the event. + EventCategories []*string `locationNameList:"EventCategory" type:"list"` + + // Provides the text of this event. + Message *string `type:"string"` + + // The Amazon Resource Name (ARN) for the event. + SourceArn *string `type:"string"` + + // Provides the identifier for the source of the event. + SourceIdentifier *string `type:"string"` + + // Specifies the source type for this event. + SourceType *string `type:"string" enum:"SourceType"` +} + +// String returns the string representation +func (s Event) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Event) GoString() string { + return s.String() +} + +// SetDate sets the Date field's value. +func (s *Event) SetDate(v time.Time) *Event { + s.Date = &v + return s +} + +// SetEventCategories sets the EventCategories field's value. +func (s *Event) SetEventCategories(v []*string) *Event { + s.EventCategories = v + return s +} + +// SetMessage sets the Message field's value. +func (s *Event) SetMessage(v string) *Event { + s.Message = &v + return s +} + +// SetSourceArn sets the SourceArn field's value. +func (s *Event) SetSourceArn(v string) *Event { + s.SourceArn = &v + return s +} + +// SetSourceIdentifier sets the SourceIdentifier field's value. +func (s *Event) SetSourceIdentifier(v string) *Event { + s.SourceIdentifier = &v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *Event) SetSourceType(v string) *Event { + s.SourceType = &v + return s +} + +// Contains the results of a successful invocation of the DescribeEventCategories +// action. +type EventCategoriesMap struct { + _ struct{} `type:"structure"` + + // The event categories for the specified source type + EventCategories []*string `locationNameList:"EventCategory" type:"list"` + + // The source type that the returned categories belong to + SourceType *string `type:"string"` +} + +// String returns the string representation +func (s EventCategoriesMap) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EventCategoriesMap) GoString() string { + return s.String() +} + +// SetEventCategories sets the EventCategories field's value. 
+func (s *EventCategoriesMap) SetEventCategories(v []*string) *EventCategoriesMap { + s.EventCategories = v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *EventCategoriesMap) SetSourceType(v string) *EventCategoriesMap { + s.SourceType = &v + return s +} + +// Contains the results of a successful invocation of the DescribeEventSubscriptions +// action. +type EventSubscription struct { + _ struct{} `type:"structure"` + + // The event notification subscription Id. + CustSubscriptionId *string `type:"string"` + + // The AWS customer account associated with the event notification subscription. + CustomerAwsId *string `type:"string"` + + // A Boolean value indicating if the subscription is enabled. True indicates + // the subscription is enabled. + Enabled *bool `type:"boolean"` + + // A list of event categories for the event notification subscription. + EventCategoriesList []*string `locationNameList:"EventCategory" type:"list"` + + // The Amazon Resource Name (ARN) for the event subscription. + EventSubscriptionArn *string `type:"string"` + + // The topic ARN of the event notification subscription. + SnsTopicArn *string `type:"string"` + + // A list of source IDs for the event notification subscription. + SourceIdsList []*string `locationNameList:"SourceId" type:"list"` + + // The source type for the event notification subscription. + SourceType *string `type:"string"` + + // The status of the event notification subscription. + // + // Constraints: + // + // Can be one of the following: creating | modifying | deleting | active | no-permission + // | topic-not-exist + // + // The status "no-permission" indicates that Neptune no longer has permission + // to post to the SNS topic. The status "topic-not-exist" indicates that the + // topic was deleted after the subscription was created. + Status *string `type:"string"` + + // The time the event notification subscription was created. + SubscriptionCreationTime *string `type:"string"` +} + +// String returns the string representation +func (s EventSubscription) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EventSubscription) GoString() string { + return s.String() +} + +// SetCustSubscriptionId sets the CustSubscriptionId field's value. +func (s *EventSubscription) SetCustSubscriptionId(v string) *EventSubscription { + s.CustSubscriptionId = &v + return s +} + +// SetCustomerAwsId sets the CustomerAwsId field's value. +func (s *EventSubscription) SetCustomerAwsId(v string) *EventSubscription { + s.CustomerAwsId = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *EventSubscription) SetEnabled(v bool) *EventSubscription { + s.Enabled = &v + return s +} + +// SetEventCategoriesList sets the EventCategoriesList field's value. +func (s *EventSubscription) SetEventCategoriesList(v []*string) *EventSubscription { + s.EventCategoriesList = v + return s +} + +// SetEventSubscriptionArn sets the EventSubscriptionArn field's value. +func (s *EventSubscription) SetEventSubscriptionArn(v string) *EventSubscription { + s.EventSubscriptionArn = &v + return s +} + +// SetSnsTopicArn sets the SnsTopicArn field's value. +func (s *EventSubscription) SetSnsTopicArn(v string) *EventSubscription { + s.SnsTopicArn = &v + return s +} + +// SetSourceIdsList sets the SourceIdsList field's value. 
+func (s *EventSubscription) SetSourceIdsList(v []*string) *EventSubscription { + s.SourceIdsList = v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *EventSubscription) SetSourceType(v string) *EventSubscription { + s.SourceType = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *EventSubscription) SetStatus(v string) *EventSubscription { + s.Status = &v + return s +} + +// SetSubscriptionCreationTime sets the SubscriptionCreationTime field's value. +func (s *EventSubscription) SetSubscriptionCreationTime(v string) *EventSubscription { + s.SubscriptionCreationTime = &v + return s +} + +type FailoverDBClusterInput struct { + _ struct{} `type:"structure"` + + // A DB cluster identifier to force a failover for. This parameter is not case-sensitive. + // + // Constraints: + // + // * Must match the identifier of an existing DBCluster. + DBClusterIdentifier *string `type:"string"` + + // The name of the instance to promote to the primary instance. + // + // You must specify the instance identifier for an Read Replica in the DB cluster. + // For example, mydbcluster-replica1. + TargetDBInstanceIdentifier *string `type:"string"` +} + +// String returns the string representation +func (s FailoverDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FailoverDBClusterInput) GoString() string { + return s.String() +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *FailoverDBClusterInput) SetDBClusterIdentifier(v string) *FailoverDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetTargetDBInstanceIdentifier sets the TargetDBInstanceIdentifier field's value. +func (s *FailoverDBClusterInput) SetTargetDBInstanceIdentifier(v string) *FailoverDBClusterInput { + s.TargetDBInstanceIdentifier = &v + return s +} + +type FailoverDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters action. + DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s FailoverDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FailoverDBClusterOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *FailoverDBClusterOutput) SetDBCluster(v *DBCluster) *FailoverDBClusterOutput { + s.DBCluster = v + return s +} + +// This type is not currently supported. +type Filter struct { + _ struct{} `type:"structure"` + + // This parameter is not currently supported. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // This parameter is not currently supported. + // + // Values is a required field + Values []*string `locationNameList:"Value" type:"list" required:"true"` +} + +// String returns the string representation +func (s Filter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Filter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *Filter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Filter"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *Filter) SetName(v string) *Filter { + s.Name = &v + return s +} + +// SetValues sets the Values field's value. +func (s *Filter) SetValues(v []*string) *Filter { + s.Values = v + return s +} + +type ListTagsForResourceInput struct { + _ struct{} `type:"structure"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // The Amazon Neptune resource with tags to be listed. This value is an Amazon + // Resource Name (ARN). For information about creating an ARN, see Constructing + // an Amazon Resource Name (ARN) (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html#tagging.ARN.Constructing). + // + // ResourceName is a required field + ResourceName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ListTagsForResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsForResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} + if s.ResourceName == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceName")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *ListTagsForResourceInput) SetFilters(v []*Filter) *ListTagsForResourceInput { + s.Filters = v + return s +} + +// SetResourceName sets the ResourceName field's value. +func (s *ListTagsForResourceInput) SetResourceName(v string) *ListTagsForResourceInput { + s.ResourceName = &v + return s +} + +type ListTagsForResourceOutput struct { + _ struct{} `type:"structure"` + + // List of tags returned by the ListTagsForResource operation. + TagList []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s ListTagsForResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceOutput) GoString() string { + return s.String() +} + +// SetTagList sets the TagList field's value. +func (s *ListTagsForResourceOutput) SetTagList(v []*Tag) *ListTagsForResourceOutput { + s.TagList = v + return s +} + +type ModifyDBClusterInput struct { + _ struct{} `type:"structure"` + + // A value that specifies whether the modifications in this request and any + // pending modifications are asynchronously applied as soon as possible, regardless + // of the PreferredMaintenanceWindow setting for the DB cluster. If this parameter + // is set to false, changes to the DB cluster are applied during the next maintenance + // window. 
+ // + // The ApplyImmediately parameter only affects the NewDBClusterIdentifier and + // MasterUserPassword values. If you set the ApplyImmediately parameter value + // to false, then changes to the NewDBClusterIdentifier and MasterUserPassword + // values are applied during the next maintenance window. All other changes + // are applied immediately, regardless of the value of the ApplyImmediately + // parameter. + // + // Default: false + ApplyImmediately *bool `type:"boolean"` + + // The number of days for which automated backups are retained. You must specify + // a minimum value of 1. + // + // Default: 1 + // + // Constraints: + // + // * Must be a value from 1 to 35 + BackupRetentionPeriod *int64 `type:"integer"` + + // The DB cluster identifier for the cluster being modified. This parameter + // is not case-sensitive. + // + // Constraints: + // + // * Must match the identifier of an existing DBCluster. + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The name of the DB cluster parameter group to use for the DB cluster. + DBClusterParameterGroupName *string `type:"string"` + + // True to enable mapping of AWS Identity and Access Management (IAM) accounts + // to database accounts, and otherwise false. + // + // Default: false + EnableIAMDatabaseAuthentication *bool `type:"boolean"` + + // The version number of the database engine to which you want to upgrade. Changing + // this parameter results in an outage. The change is applied during the next + // maintenance window unless the ApplyImmediately parameter is set to true. + // + // For a list of valid engine versions, see CreateDBInstance, or call DescribeDBEngineVersions. + EngineVersion *string `type:"string"` + + // The new password for the master database user. This password can contain + // any printable ASCII character except "/", """, or "@". + // + // Constraints: Must contain from 8 to 41 characters. + MasterUserPassword *string `type:"string"` + + // The new DB cluster identifier for the DB cluster when renaming a DB cluster. + // This value is stored as a lowercase string. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens + // + // * The first character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + // + // Example: my-cluster2 + NewDBClusterIdentifier *string `type:"string"` + + // A value that indicates that the DB cluster should be associated with the + // specified option group. Changing this parameter doesn't result in an outage + // except in the following case, and the change is applied during the next maintenance + // window unless the ApplyImmediately parameter is set to true for this request. + // If the parameter change results in an option group that enables OEM, this + // change can cause a brief (sub-second) period during which new connections + // are rejected but existing connections are not interrupted. + // + // Permanent options can't be removed from an option group. The option group + // can't be removed from a DB cluster once it is associated with a DB cluster. + OptionGroupName *string `type:"string"` + + // The port number on which the DB cluster accepts connections. + // + // Constraints: Value must be 1150-65535 + // + // Default: The same port as the original DB cluster. 
+ Port *int64 `type:"integer"` + + // The daily time range during which automated backups are created if automated + // backups are enabled, using the BackupRetentionPeriod parameter. + // + // The default is a 30-minute window selected at random from an 8-hour block + // of time for each AWS Region. + // + // Constraints: + // + // * Must be in the format hh24:mi-hh24:mi. + // + // * Must be in Universal Coordinated Time (UTC). + // + // * Must not conflict with the preferred maintenance window. + // + // * Must be at least 30 minutes. + PreferredBackupWindow *string `type:"string"` + + // The weekly time range during which system maintenance can occur, in Universal + // Coordinated Time (UTC). + // + // Format: ddd:hh24:mi-ddd:hh24:mi + // + // The default is a 30-minute window selected at random from an 8-hour block + // of time for each AWS Region, occurring on a random day of the week. + // + // Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. + // + // Constraints: Minimum 30-minute window. + PreferredMaintenanceWindow *string `type:"string"` + + // A list of VPC security groups that the DB cluster will belong to. + VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` +} + +// String returns the string representation +func (s ModifyDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyDBClusterInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplyImmediately sets the ApplyImmediately field's value. +func (s *ModifyDBClusterInput) SetApplyImmediately(v bool) *ModifyDBClusterInput { + s.ApplyImmediately = &v + return s +} + +// SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. +func (s *ModifyDBClusterInput) SetBackupRetentionPeriod(v int64) *ModifyDBClusterInput { + s.BackupRetentionPeriod = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *ModifyDBClusterInput) SetDBClusterIdentifier(v string) *ModifyDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *ModifyDBClusterInput) SetDBClusterParameterGroupName(v string) *ModifyDBClusterInput { + s.DBClusterParameterGroupName = &v + return s +} + +// SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. +func (s *ModifyDBClusterInput) SetEnableIAMDatabaseAuthentication(v bool) *ModifyDBClusterInput { + s.EnableIAMDatabaseAuthentication = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *ModifyDBClusterInput) SetEngineVersion(v string) *ModifyDBClusterInput { + s.EngineVersion = &v + return s +} + +// SetMasterUserPassword sets the MasterUserPassword field's value. +func (s *ModifyDBClusterInput) SetMasterUserPassword(v string) *ModifyDBClusterInput { + s.MasterUserPassword = &v + return s +} + +// SetNewDBClusterIdentifier sets the NewDBClusterIdentifier field's value. 
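+//
+// As an illustrative sketch (the cluster identifiers are only examples): the
+// setters return the receiver, so a rename request can be built fluently, and
+// the new identifier takes effect at the next maintenance window unless
+// ApplyImmediately is also set to true:
+//
+//    input := (&ModifyDBClusterInput{}).
+//        SetDBClusterIdentifier("my-cluster").
+//        SetNewDBClusterIdentifier("my-cluster2").
+//        SetApplyImmediately(true)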
+func (s *ModifyDBClusterInput) SetNewDBClusterIdentifier(v string) *ModifyDBClusterInput { + s.NewDBClusterIdentifier = &v + return s +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *ModifyDBClusterInput) SetOptionGroupName(v string) *ModifyDBClusterInput { + s.OptionGroupName = &v + return s +} + +// SetPort sets the Port field's value. +func (s *ModifyDBClusterInput) SetPort(v int64) *ModifyDBClusterInput { + s.Port = &v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *ModifyDBClusterInput) SetPreferredBackupWindow(v string) *ModifyDBClusterInput { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *ModifyDBClusterInput) SetPreferredMaintenanceWindow(v string) *ModifyDBClusterInput { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *ModifyDBClusterInput) SetVpcSecurityGroupIds(v []*string) *ModifyDBClusterInput { + s.VpcSecurityGroupIds = v + return s +} + +type ModifyDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters action. + DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s ModifyDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBClusterOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *ModifyDBClusterOutput) SetDBCluster(v *DBCluster) *ModifyDBClusterOutput { + s.DBCluster = v + return s +} + +type ModifyDBClusterParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the DB cluster parameter group to modify. + // + // DBClusterParameterGroupName is a required field + DBClusterParameterGroupName *string `type:"string" required:"true"` + + // A list of parameters in the DB cluster parameter group to modify. + // + // Parameters is a required field + Parameters []*Parameter `locationNameList:"Parameter" type:"list" required:"true"` +} + +// String returns the string representation +func (s ModifyDBClusterParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBClusterParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyDBClusterParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyDBClusterParameterGroupInput"} + if s.DBClusterParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterParameterGroupName")) + } + if s.Parameters == nil { + invalidParams.Add(request.NewErrParamRequired("Parameters")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *ModifyDBClusterParameterGroupInput) SetDBClusterParameterGroupName(v string) *ModifyDBClusterParameterGroupInput { + s.DBClusterParameterGroupName = &v + return s +} + +// SetParameters sets the Parameters field's value. 
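+//
+// As an illustrative sketch (the parameter name, value, and apply method are
+// placeholders, not verified Neptune settings): each Parameter is built with
+// its own setters and the results are passed as a slice:
+//
+//    p := (&Parameter{}).
+//        SetParameterName("neptune_query_timeout").
+//        SetParameterValue("120000").
+//        SetApplyMethod("pending-reboot")
+//    input := (&ModifyDBClusterParameterGroupInput{}).
+//        SetDBClusterParameterGroupName("my-cluster-params").
+//        SetParameters([]*Parameter{p})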
+func (s *ModifyDBClusterParameterGroupInput) SetParameters(v []*Parameter) *ModifyDBClusterParameterGroupInput { + s.Parameters = v + return s +} + +type ModifyDBClusterSnapshotAttributeInput struct { + _ struct{} `type:"structure"` + + // The name of the DB cluster snapshot attribute to modify. + // + // To manage authorization for other AWS accounts to copy or restore a manual + // DB cluster snapshot, set this value to restore. + // + // AttributeName is a required field + AttributeName *string `type:"string" required:"true"` + + // The identifier for the DB cluster snapshot to modify the attributes for. + // + // DBClusterSnapshotIdentifier is a required field + DBClusterSnapshotIdentifier *string `type:"string" required:"true"` + + // A list of DB cluster snapshot attributes to add to the attribute specified + // by AttributeName. + // + // To authorize other AWS accounts to copy or restore a manual DB cluster snapshot, + // set this list to include one or more AWS account IDs, or all to make the + // manual DB cluster snapshot restorable by any AWS account. Do not add the + // all value for any manual DB cluster snapshots that contain private information + // that you don't want available to all AWS accounts. + ValuesToAdd []*string `locationNameList:"AttributeValue" type:"list"` + + // A list of DB cluster snapshot attributes to remove from the attribute specified + // by AttributeName. + // + // To remove authorization for other AWS accounts to copy or restore a manual + // DB cluster snapshot, set this list to include one or more AWS account identifiers, + // or all to remove authorization for any AWS account to copy or restore the + // DB cluster snapshot. If you specify all, an AWS account whose account ID + // is explicitly added to the restore attribute can still copy or restore a + // manual DB cluster snapshot. + ValuesToRemove []*string `locationNameList:"AttributeValue" type:"list"` +} + +// String returns the string representation +func (s ModifyDBClusterSnapshotAttributeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBClusterSnapshotAttributeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyDBClusterSnapshotAttributeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyDBClusterSnapshotAttributeInput"} + if s.AttributeName == nil { + invalidParams.Add(request.NewErrParamRequired("AttributeName")) + } + if s.DBClusterSnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterSnapshotIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributeName sets the AttributeName field's value. +func (s *ModifyDBClusterSnapshotAttributeInput) SetAttributeName(v string) *ModifyDBClusterSnapshotAttributeInput { + s.AttributeName = &v + return s +} + +// SetDBClusterSnapshotIdentifier sets the DBClusterSnapshotIdentifier field's value. +func (s *ModifyDBClusterSnapshotAttributeInput) SetDBClusterSnapshotIdentifier(v string) *ModifyDBClusterSnapshotAttributeInput { + s.DBClusterSnapshotIdentifier = &v + return s +} + +// SetValuesToAdd sets the ValuesToAdd field's value. +func (s *ModifyDBClusterSnapshotAttributeInput) SetValuesToAdd(v []*string) *ModifyDBClusterSnapshotAttributeInput { + s.ValuesToAdd = v + return s +} + +// SetValuesToRemove sets the ValuesToRemove field's value. 
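+//
+// As an illustrative sketch (the snapshot identifier and account ID are only
+// examples): to share a manual DB cluster snapshot with another AWS account,
+// modify the "restore" attribute and add that account's ID:
+//
+//    account := "123456789012"
+//    input := (&ModifyDBClusterSnapshotAttributeInput{}).
+//        SetDBClusterSnapshotIdentifier("my-manual-snapshot").
+//        SetAttributeName("restore").
+//        SetValuesToAdd([]*string{&account})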
+func (s *ModifyDBClusterSnapshotAttributeInput) SetValuesToRemove(v []*string) *ModifyDBClusterSnapshotAttributeInput { + s.ValuesToRemove = v + return s +} + +type ModifyDBClusterSnapshotAttributeOutput struct { + _ struct{} `type:"structure"` + + // Contains the results of a successful call to the DescribeDBClusterSnapshotAttributes + // API action. + // + // Manual DB cluster snapshot attributes are used to authorize other AWS accounts + // to copy or restore a manual DB cluster snapshot. For more information, see + // the ModifyDBClusterSnapshotAttribute API action. + DBClusterSnapshotAttributesResult *DBClusterSnapshotAttributesResult `type:"structure"` +} + +// String returns the string representation +func (s ModifyDBClusterSnapshotAttributeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBClusterSnapshotAttributeOutput) GoString() string { + return s.String() +} + +// SetDBClusterSnapshotAttributesResult sets the DBClusterSnapshotAttributesResult field's value. +func (s *ModifyDBClusterSnapshotAttributeOutput) SetDBClusterSnapshotAttributesResult(v *DBClusterSnapshotAttributesResult) *ModifyDBClusterSnapshotAttributeOutput { + s.DBClusterSnapshotAttributesResult = v + return s +} + +type ModifyDBInstanceInput struct { + _ struct{} `type:"structure"` + + // The new amount of storage (in gibibytes) to allocate for the DB instance. + // + // Not applicable. Storage is managed by the DB Cluster. + AllocatedStorage *int64 `type:"integer"` + + // Indicates that major version upgrades are allowed. Changing this parameter + // doesn't result in an outage and the change is asynchronously applied as soon + // as possible. + // + // Constraints: This parameter must be set to true when specifying a value for + // the EngineVersion parameter that is a different major version than the DB + // instance's current version. + AllowMajorVersionUpgrade *bool `type:"boolean"` + + // Specifies whether the modifications in this request and any pending modifications + // are asynchronously applied as soon as possible, regardless of the PreferredMaintenanceWindow + // setting for the DB instance. + // + // If this parameter is set to false, changes to the DB instance are applied + // during the next maintenance window. Some parameter changes can cause an outage + // and are applied on the next call to RebootDBInstance, or the next failure + // reboot. + // + // Default: false + ApplyImmediately *bool `type:"boolean"` + + // Indicates that minor version upgrades are applied automatically to the DB + // instance during the maintenance window. Changing this parameter doesn't result + // in an outage except in the following case and the change is asynchronously + // applied as soon as possible. An outage will result if this parameter is set + // to true during the maintenance window, and a newer minor version is available, + // and Neptune has enabled auto patching for that engine version. + AutoMinorVersionUpgrade *bool `type:"boolean"` + + // The number of days to retain automated backups. Setting this parameter to + // a positive number enables backups. Setting this parameter to 0 disables automated + // backups. + // + // Not applicable. The retention period for automated backups is managed by + // the DB cluster. For more information, see ModifyDBCluster. + // + // Default: Uses existing setting + BackupRetentionPeriod *int64 `type:"integer"` + + // Indicates the certificate that needs to be associated with the instance. 
+ CACertificateIdentifier *string `type:"string"` + + // The configuration setting for the log types to be enabled for export to CloudWatch + // Logs for a specific DB instance or DB cluster. + CloudwatchLogsExportConfiguration *CloudwatchLogsExportConfiguration `type:"structure"` + + // True to copy all tags from the DB instance to snapshots of the DB instance, + // and otherwise false. The default is false. + CopyTagsToSnapshot *bool `type:"boolean"` + + // The new compute and memory capacity of the DB instance, for example, db.m4.large. + // Not all DB instance classes are available in all AWS Regions. + // + // If you modify the DB instance class, an outage occurs during the change. + // The change is applied during the next maintenance window, unless ApplyImmediately + // is specified as true for this request. + // + // Default: Uses existing setting + DBInstanceClass *string `type:"string"` + + // The DB instance identifier. This value is stored as a lowercase string. + // + // Constraints: + // + // * Must match the identifier of an existing DBInstance. + // + // DBInstanceIdentifier is a required field + DBInstanceIdentifier *string `type:"string" required:"true"` + + // The name of the DB parameter group to apply to the DB instance. Changing + // this setting doesn't result in an outage. The parameter group name itself + // is changed immediately, but the actual parameter changes are not applied + // until you reboot the instance without failover. The db instance will NOT + // be rebooted automatically and the parameter changes will NOT be applied during + // the next maintenance window. + // + // Default: Uses existing setting + // + // Constraints: The DB parameter group must be in the same DB parameter group + // family as this DB instance. + DBParameterGroupName *string `type:"string"` + + // The port number on which the database accepts connections. + // + // The value of the DBPortNumber parameter must not match any of the port values + // specified for options in the option group for the DB instance. + // + // Your database will restart when you change the DBPortNumber value regardless + // of the value of the ApplyImmediately parameter. + // + // Default: 8182 + DBPortNumber *int64 `type:"integer"` + + // A list of DB security groups to authorize on this DB instance. Changing this + // setting doesn't result in an outage and the change is asynchronously applied + // as soon as possible. + // + // Constraints: + // + // * If supplied, must match existing DBSecurityGroups. + DBSecurityGroups []*string `locationNameList:"DBSecurityGroupName" type:"list"` + + // The new DB subnet group for the DB instance. You can use this parameter to + // move your DB instance to a different VPC. + // + // Changing the subnet group causes an outage during the change. The change + // is applied during the next maintenance window, unless you specify true for + // the ApplyImmediately parameter. + // + // Constraints: If supplied, must match the name of an existing DBSubnetGroup. + // + // Example: mySubnetGroup + DBSubnetGroupName *string `type:"string"` + + // Not supported. + Domain *string `type:"string"` + + // Not supported + DomainIAMRoleName *string `type:"string"` + + // True to enable mapping of AWS Identity and Access Management (IAM) accounts + // to database accounts, and otherwise false. + // + // You can enable IAM database authentication for the following database engines + // + // Not applicable. Mapping AWS IAM accounts to database accounts is managed + // by the DB cluster. 
For more information, see ModifyDBCluster. + // + // Default: false + EnableIAMDatabaseAuthentication *bool `type:"boolean"` + + // True to enable Performance Insights for the DB instance, and otherwise false. + EnablePerformanceInsights *bool `type:"boolean"` + + // The version number of the database engine to upgrade to. Changing this parameter + // results in an outage and the change is applied during the next maintenance + // window unless the ApplyImmediately parameter is set to true for this request. + // + // For major version upgrades, if a nondefault DB parameter group is currently + // in use, a new DB parameter group in the DB parameter group family for the + // new engine version must be specified. The new DB parameter group can be the + // default for that DB parameter group family. + EngineVersion *string `type:"string"` + + // The new Provisioned IOPS (I/O operations per second) value for the instance. + // + // Changing this setting doesn't result in an outage and the change is applied + // during the next maintenance window unless the ApplyImmediately parameter + // is set to true for this request. + // + // Default: Uses existing setting + Iops *int64 `type:"integer"` + + // The license model for the DB instance. + // + // Valid values: license-included | bring-your-own-license | general-public-license + LicenseModel *string `type:"string"` + + // The new password for the master user. The password can include any printable + // ASCII character except "/", """, or "@". + // + // Not applicable. + // + // Default: Uses existing setting + MasterUserPassword *string `type:"string"` + + // The interval, in seconds, between points when Enhanced Monitoring metrics + // are collected for the DB instance. To disable collecting Enhanced Monitoring + // metrics, specify 0. The default is 0. + // + // If MonitoringRoleArn is specified, then you must also set MonitoringInterval + // to a value other than 0. + // + // Valid Values: 0, 1, 5, 10, 15, 30, 60 + MonitoringInterval *int64 `type:"integer"` + + // The ARN for the IAM role that permits Neptune to send enhanced monitoring + // metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. + // + // If MonitoringInterval is set to a value other than 0, then you must supply + // a MonitoringRoleArn value. + MonitoringRoleArn *string `type:"string"` + + // Specifies if the DB instance is a Multi-AZ deployment. Changing this parameter + // doesn't result in an outage and the change is applied during the next maintenance + // window unless the ApplyImmediately parameter is set to true for this request. + MultiAZ *bool `type:"boolean"` + + // The new DB instance identifier for the DB instance when renaming a DB instance. + // When you change the DB instance identifier, an instance reboot will occur + // immediately if you set Apply Immediately to true, or will occur during the + // next maintenance window if Apply Immediately to false. This value is stored + // as a lowercase string. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens. + // + // * The first character must be a letter. + // + // * Cannot end with a hyphen or contain two consecutive hyphens. + // + // Example: mydbinstance + NewDBInstanceIdentifier *string `type:"string"` + + // Indicates that the DB instance should be associated with the specified option + // group. 
Changing this parameter doesn't result in an outage except in the + // following case and the change is applied during the next maintenance window + // unless the ApplyImmediately parameter is set to true for this request. If + // the parameter change results in an option group that enables OEM, this change + // can cause a brief (sub-second) period during which new connections are rejected + // but existing connections are not interrupted. + // + // Permanent options, such as the TDE option for Oracle Advanced Security TDE, + // can't be removed from an option group, and that option group can't be removed + // from a DB instance once it is associated with a DB instance + OptionGroupName *string `type:"string"` + + // The AWS KMS key identifier for encryption of Performance Insights data. The + // KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the + // KMS key alias for the KMS encryption key. + PerformanceInsightsKMSKeyId *string `type:"string"` + + // The daily time range during which automated backups are created if automated + // backups are enabled. + // + // Not applicable. The daily time range for creating automated backups is managed + // by the DB cluster. For more information, see ModifyDBCluster. + // + // Constraints: + // + // * Must be in the format hh24:mi-hh24:mi + // + // * Must be in Universal Time Coordinated (UTC) + // + // * Must not conflict with the preferred maintenance window + // + // * Must be at least 30 minutes + PreferredBackupWindow *string `type:"string"` + + // The weekly time range (in UTC) during which system maintenance can occur, + // which might result in an outage. Changing this parameter doesn't result in + // an outage, except in the following situation, and the change is asynchronously + // applied as soon as possible. If there are pending actions that cause a reboot, + // and the maintenance window is changed to include the current time, then changing + // this parameter will cause a reboot of the DB instance. If moving this window + // to the current time, there must be at least 30 minutes between the current + // time and end of the window to ensure pending changes are applied. + // + // Default: Uses existing setting + // + // Format: ddd:hh24:mi-ddd:hh24:mi + // + // Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun + // + // Constraints: Must be at least 30 minutes + PreferredMaintenanceWindow *string `type:"string"` + + // A value that specifies the order in which a Read Replica is promoted to the + // primary instance after a failure of the existing primary instance. + // + // Default: 1 + // + // Valid Values: 0 - 15 + PromotionTier *int64 `type:"integer"` + + // This parameter is not supported. + // + // Deprecated: PubliclyAccessible has been deprecated + PubliclyAccessible *bool `deprecated:"true" type:"boolean"` + + // Specifies the storage type to be associated with the DB instance. + // + // If you specify Provisioned IOPS (io1), you must also include a value for + // the Iops parameter. + // + // If you choose to migrate your DB instance from using standard storage to + // using Provisioned IOPS, or from using Provisioned IOPS to using standard + // storage, the process can take time. The duration of the migration depends + // on several factors such as database load, storage size, storage type (standard + // or Provisioned IOPS), amount of IOPS provisioned (if any), and the number + // of prior scale storage operations. 
Typical migration times are under 24 hours, + // but the process can take up to several days in some cases. During the migration, + // the DB instance is available for use, but might experience performance degradation. + // While the migration takes place, nightly backups for the instance are suspended. + // No other Amazon Neptune operations can take place for the instance, including + // modifying the instance, rebooting the instance, deleting the instance, creating + // a Read Replica for the instance, and creating a DB snapshot of the instance. + // + // Valid values: standard | gp2 | io1 + // + // Default: io1 if the Iops parameter is specified, otherwise standard + StorageType *string `type:"string"` + + // The ARN from the key store with which to associate the instance for TDE encryption. + TdeCredentialArn *string `type:"string"` + + // The password for the given ARN from the key store in order to access the + // device. + TdeCredentialPassword *string `type:"string"` + + // A list of EC2 VPC security groups to authorize on this DB instance. This + // change is asynchronously applied as soon as possible. + // + // Not applicable. The associated list of EC2 VPC security groups is managed + // by the DB cluster. For more information, see ModifyDBCluster. + // + // Constraints: + // + // * If supplied, must match existing VpcSecurityGroupIds. + VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` +} + +// String returns the string representation +func (s ModifyDBInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyDBInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyDBInstanceInput"} + if s.DBInstanceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBInstanceIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAllocatedStorage sets the AllocatedStorage field's value. +func (s *ModifyDBInstanceInput) SetAllocatedStorage(v int64) *ModifyDBInstanceInput { + s.AllocatedStorage = &v + return s +} + +// SetAllowMajorVersionUpgrade sets the AllowMajorVersionUpgrade field's value. +func (s *ModifyDBInstanceInput) SetAllowMajorVersionUpgrade(v bool) *ModifyDBInstanceInput { + s.AllowMajorVersionUpgrade = &v + return s +} + +// SetApplyImmediately sets the ApplyImmediately field's value. +func (s *ModifyDBInstanceInput) SetApplyImmediately(v bool) *ModifyDBInstanceInput { + s.ApplyImmediately = &v + return s +} + +// SetAutoMinorVersionUpgrade sets the AutoMinorVersionUpgrade field's value. +func (s *ModifyDBInstanceInput) SetAutoMinorVersionUpgrade(v bool) *ModifyDBInstanceInput { + s.AutoMinorVersionUpgrade = &v + return s +} + +// SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. +func (s *ModifyDBInstanceInput) SetBackupRetentionPeriod(v int64) *ModifyDBInstanceInput { + s.BackupRetentionPeriod = &v + return s +} + +// SetCACertificateIdentifier sets the CACertificateIdentifier field's value. +func (s *ModifyDBInstanceInput) SetCACertificateIdentifier(v string) *ModifyDBInstanceInput { + s.CACertificateIdentifier = &v + return s +} + +// SetCloudwatchLogsExportConfiguration sets the CloudwatchLogsExportConfiguration field's value. 
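+//
+// As an illustrative sketch (the instance identifier and the "audit" log type
+// are only examples, and the EnableLogTypes field is assumed from the
+// CloudwatchLogsExportConfiguration type defined elsewhere in this package):
+//
+//    logType := "audit"
+//    input := (&ModifyDBInstanceInput{}).
+//        SetDBInstanceIdentifier("my-instance").
+//        SetCloudwatchLogsExportConfiguration(&CloudwatchLogsExportConfiguration{
+//            EnableLogTypes: []*string{&logType},
+//        })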
+func (s *ModifyDBInstanceInput) SetCloudwatchLogsExportConfiguration(v *CloudwatchLogsExportConfiguration) *ModifyDBInstanceInput { + s.CloudwatchLogsExportConfiguration = v + return s +} + +// SetCopyTagsToSnapshot sets the CopyTagsToSnapshot field's value. +func (s *ModifyDBInstanceInput) SetCopyTagsToSnapshot(v bool) *ModifyDBInstanceInput { + s.CopyTagsToSnapshot = &v + return s +} + +// SetDBInstanceClass sets the DBInstanceClass field's value. +func (s *ModifyDBInstanceInput) SetDBInstanceClass(v string) *ModifyDBInstanceInput { + s.DBInstanceClass = &v + return s +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *ModifyDBInstanceInput) SetDBInstanceIdentifier(v string) *ModifyDBInstanceInput { + s.DBInstanceIdentifier = &v + return s +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *ModifyDBInstanceInput) SetDBParameterGroupName(v string) *ModifyDBInstanceInput { + s.DBParameterGroupName = &v + return s +} + +// SetDBPortNumber sets the DBPortNumber field's value. +func (s *ModifyDBInstanceInput) SetDBPortNumber(v int64) *ModifyDBInstanceInput { + s.DBPortNumber = &v + return s +} + +// SetDBSecurityGroups sets the DBSecurityGroups field's value. +func (s *ModifyDBInstanceInput) SetDBSecurityGroups(v []*string) *ModifyDBInstanceInput { + s.DBSecurityGroups = v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *ModifyDBInstanceInput) SetDBSubnetGroupName(v string) *ModifyDBInstanceInput { + s.DBSubnetGroupName = &v + return s +} + +// SetDomain sets the Domain field's value. +func (s *ModifyDBInstanceInput) SetDomain(v string) *ModifyDBInstanceInput { + s.Domain = &v + return s +} + +// SetDomainIAMRoleName sets the DomainIAMRoleName field's value. +func (s *ModifyDBInstanceInput) SetDomainIAMRoleName(v string) *ModifyDBInstanceInput { + s.DomainIAMRoleName = &v + return s +} + +// SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. +func (s *ModifyDBInstanceInput) SetEnableIAMDatabaseAuthentication(v bool) *ModifyDBInstanceInput { + s.EnableIAMDatabaseAuthentication = &v + return s +} + +// SetEnablePerformanceInsights sets the EnablePerformanceInsights field's value. +func (s *ModifyDBInstanceInput) SetEnablePerformanceInsights(v bool) *ModifyDBInstanceInput { + s.EnablePerformanceInsights = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *ModifyDBInstanceInput) SetEngineVersion(v string) *ModifyDBInstanceInput { + s.EngineVersion = &v + return s +} + +// SetIops sets the Iops field's value. +func (s *ModifyDBInstanceInput) SetIops(v int64) *ModifyDBInstanceInput { + s.Iops = &v + return s +} + +// SetLicenseModel sets the LicenseModel field's value. +func (s *ModifyDBInstanceInput) SetLicenseModel(v string) *ModifyDBInstanceInput { + s.LicenseModel = &v + return s +} + +// SetMasterUserPassword sets the MasterUserPassword field's value. +func (s *ModifyDBInstanceInput) SetMasterUserPassword(v string) *ModifyDBInstanceInput { + s.MasterUserPassword = &v + return s +} + +// SetMonitoringInterval sets the MonitoringInterval field's value. +func (s *ModifyDBInstanceInput) SetMonitoringInterval(v int64) *ModifyDBInstanceInput { + s.MonitoringInterval = &v + return s +} + +// SetMonitoringRoleArn sets the MonitoringRoleArn field's value. 
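+//
+// As an illustrative sketch (the instance identifier, account number, and role
+// name are only examples): a non-zero MonitoringInterval must be paired with a
+// MonitoringRoleArn, so the two are set together:
+//
+//    input := (&ModifyDBInstanceInput{}).
+//        SetDBInstanceIdentifier("my-instance").
+//        SetMonitoringInterval(60).
+//        SetMonitoringRoleArn("arn:aws:iam::123456789012:role/emaccess")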
+func (s *ModifyDBInstanceInput) SetMonitoringRoleArn(v string) *ModifyDBInstanceInput { + s.MonitoringRoleArn = &v + return s +} + +// SetMultiAZ sets the MultiAZ field's value. +func (s *ModifyDBInstanceInput) SetMultiAZ(v bool) *ModifyDBInstanceInput { + s.MultiAZ = &v + return s +} + +// SetNewDBInstanceIdentifier sets the NewDBInstanceIdentifier field's value. +func (s *ModifyDBInstanceInput) SetNewDBInstanceIdentifier(v string) *ModifyDBInstanceInput { + s.NewDBInstanceIdentifier = &v + return s +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *ModifyDBInstanceInput) SetOptionGroupName(v string) *ModifyDBInstanceInput { + s.OptionGroupName = &v + return s +} + +// SetPerformanceInsightsKMSKeyId sets the PerformanceInsightsKMSKeyId field's value. +func (s *ModifyDBInstanceInput) SetPerformanceInsightsKMSKeyId(v string) *ModifyDBInstanceInput { + s.PerformanceInsightsKMSKeyId = &v + return s +} + +// SetPreferredBackupWindow sets the PreferredBackupWindow field's value. +func (s *ModifyDBInstanceInput) SetPreferredBackupWindow(v string) *ModifyDBInstanceInput { + s.PreferredBackupWindow = &v + return s +} + +// SetPreferredMaintenanceWindow sets the PreferredMaintenanceWindow field's value. +func (s *ModifyDBInstanceInput) SetPreferredMaintenanceWindow(v string) *ModifyDBInstanceInput { + s.PreferredMaintenanceWindow = &v + return s +} + +// SetPromotionTier sets the PromotionTier field's value. +func (s *ModifyDBInstanceInput) SetPromotionTier(v int64) *ModifyDBInstanceInput { + s.PromotionTier = &v + return s +} + +// SetPubliclyAccessible sets the PubliclyAccessible field's value. +func (s *ModifyDBInstanceInput) SetPubliclyAccessible(v bool) *ModifyDBInstanceInput { + s.PubliclyAccessible = &v + return s +} + +// SetStorageType sets the StorageType field's value. +func (s *ModifyDBInstanceInput) SetStorageType(v string) *ModifyDBInstanceInput { + s.StorageType = &v + return s +} + +// SetTdeCredentialArn sets the TdeCredentialArn field's value. +func (s *ModifyDBInstanceInput) SetTdeCredentialArn(v string) *ModifyDBInstanceInput { + s.TdeCredentialArn = &v + return s +} + +// SetTdeCredentialPassword sets the TdeCredentialPassword field's value. +func (s *ModifyDBInstanceInput) SetTdeCredentialPassword(v string) *ModifyDBInstanceInput { + s.TdeCredentialPassword = &v + return s +} + +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *ModifyDBInstanceInput) SetVpcSecurityGroupIds(v []*string) *ModifyDBInstanceInput { + s.VpcSecurityGroupIds = v + return s +} + +type ModifyDBInstanceOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB instance. + // + // This data type is used as a response element in the DescribeDBInstances action. + DBInstance *DBInstance `type:"structure"` +} + +// String returns the string representation +func (s ModifyDBInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBInstanceOutput) GoString() string { + return s.String() +} + +// SetDBInstance sets the DBInstance field's value. +func (s *ModifyDBInstanceOutput) SetDBInstance(v *DBInstance) *ModifyDBInstanceOutput { + s.DBInstance = v + return s +} + +type ModifyDBParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the DB parameter group. + // + // Constraints: + // + // * If supplied, must match the name of an existing DBParameterGroup. 
+ // + // DBParameterGroupName is a required field + DBParameterGroupName *string `type:"string" required:"true"` + + // An array of parameter names, values, and the apply method for the parameter + // update. At least one parameter name, value, and apply method must be supplied; + // subsequent arguments are optional. A maximum of 20 parameters can be modified + // in a single request. + // + // Valid Values (for the application method): immediate | pending-reboot + // + // You can use the immediate value with dynamic parameters only. You can use + // the pending-reboot value for both dynamic and static parameters, and changes + // are applied when you reboot the DB instance without failover. + // + // Parameters is a required field + Parameters []*Parameter `locationNameList:"Parameter" type:"list" required:"true"` +} + +// String returns the string representation +func (s ModifyDBParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyDBParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyDBParameterGroupInput"} + if s.DBParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupName")) + } + if s.Parameters == nil { + invalidParams.Add(request.NewErrParamRequired("Parameters")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *ModifyDBParameterGroupInput) SetDBParameterGroupName(v string) *ModifyDBParameterGroupInput { + s.DBParameterGroupName = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *ModifyDBParameterGroupInput) SetParameters(v []*Parameter) *ModifyDBParameterGroupInput { + s.Parameters = v + return s +} + +type ModifyDBSubnetGroupInput struct { + _ struct{} `type:"structure"` + + // The description for the DB subnet group. + DBSubnetGroupDescription *string `type:"string"` + + // The name for the DB subnet group. This value is stored as a lowercase string. + // You can't modify the default subnet group. + // + // Constraints: Must match the name of an existing DBSubnetGroup. Must not be + // default. + // + // Example: mySubnetgroup + // + // DBSubnetGroupName is a required field + DBSubnetGroupName *string `type:"string" required:"true"` + + // The EC2 subnet IDs for the DB subnet group. + // + // SubnetIds is a required field + SubnetIds []*string `locationNameList:"SubnetIdentifier" type:"list" required:"true"` +} + +// String returns the string representation +func (s ModifyDBSubnetGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBSubnetGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ModifyDBSubnetGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyDBSubnetGroupInput"} + if s.DBSubnetGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBSubnetGroupName")) + } + if s.SubnetIds == nil { + invalidParams.Add(request.NewErrParamRequired("SubnetIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBSubnetGroupDescription sets the DBSubnetGroupDescription field's value. +func (s *ModifyDBSubnetGroupInput) SetDBSubnetGroupDescription(v string) *ModifyDBSubnetGroupInput { + s.DBSubnetGroupDescription = &v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *ModifyDBSubnetGroupInput) SetDBSubnetGroupName(v string) *ModifyDBSubnetGroupInput { + s.DBSubnetGroupName = &v + return s +} + +// SetSubnetIds sets the SubnetIds field's value. +func (s *ModifyDBSubnetGroupInput) SetSubnetIds(v []*string) *ModifyDBSubnetGroupInput { + s.SubnetIds = v + return s +} + +type ModifyDBSubnetGroupOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB subnet group. + // + // This data type is used as a response element in the DescribeDBSubnetGroups + // action. + DBSubnetGroup *DBSubnetGroup `type:"structure"` +} + +// String returns the string representation +func (s ModifyDBSubnetGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBSubnetGroupOutput) GoString() string { + return s.String() +} + +// SetDBSubnetGroup sets the DBSubnetGroup field's value. +func (s *ModifyDBSubnetGroupOutput) SetDBSubnetGroup(v *DBSubnetGroup) *ModifyDBSubnetGroupOutput { + s.DBSubnetGroup = v + return s +} + +type ModifyEventSubscriptionInput struct { + _ struct{} `type:"structure"` + + // A Boolean value; set to true to activate the subscription. + Enabled *bool `type:"boolean"` + + // A list of event categories for a SourceType that you want to subscribe to. + // You can see a list of the categories for a given SourceType by using the + // DescribeEventCategories action. + EventCategories []*string `locationNameList:"EventCategory" type:"list"` + + // The Amazon Resource Name (ARN) of the SNS topic created for event notification. + // The ARN is created by Amazon SNS when you create a topic and subscribe to + // it. + SnsTopicArn *string `type:"string"` + + // The type of source that is generating the events. For example, if you want + // to be notified of events generated by a DB instance, you would set this parameter + // to db-instance. if this value is not specified, all events are returned. + // + // Valid values: db-instance | db-parameter-group | db-security-group | db-snapshot + SourceType *string `type:"string"` + + // The name of the event notification subscription. + // + // SubscriptionName is a required field + SubscriptionName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ModifyEventSubscriptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyEventSubscriptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
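+//
+// As an illustrative sketch (the subscription name is only an example): only
+// SubscriptionName is required, so the following input passes validation; the
+// remaining setters narrow the source type and toggle the subscription:
+//
+//    input := (&ModifyEventSubscriptionInput{}).
+//        SetSubscriptionName("my-subscription").
+//        SetSourceType("db-instance").
+//        SetEnabled(true)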
+func (s *ModifyEventSubscriptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyEventSubscriptionInput"} + if s.SubscriptionName == nil { + invalidParams.Add(request.NewErrParamRequired("SubscriptionName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEnabled sets the Enabled field's value. +func (s *ModifyEventSubscriptionInput) SetEnabled(v bool) *ModifyEventSubscriptionInput { + s.Enabled = &v + return s +} + +// SetEventCategories sets the EventCategories field's value. +func (s *ModifyEventSubscriptionInput) SetEventCategories(v []*string) *ModifyEventSubscriptionInput { + s.EventCategories = v + return s +} + +// SetSnsTopicArn sets the SnsTopicArn field's value. +func (s *ModifyEventSubscriptionInput) SetSnsTopicArn(v string) *ModifyEventSubscriptionInput { + s.SnsTopicArn = &v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *ModifyEventSubscriptionInput) SetSourceType(v string) *ModifyEventSubscriptionInput { + s.SourceType = &v + return s +} + +// SetSubscriptionName sets the SubscriptionName field's value. +func (s *ModifyEventSubscriptionInput) SetSubscriptionName(v string) *ModifyEventSubscriptionInput { + s.SubscriptionName = &v + return s +} + +type ModifyEventSubscriptionOutput struct { + _ struct{} `type:"structure"` + + // Contains the results of a successful invocation of the DescribeEventSubscriptions + // action. + EventSubscription *EventSubscription `type:"structure"` +} + +// String returns the string representation +func (s ModifyEventSubscriptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyEventSubscriptionOutput) GoString() string { + return s.String() +} + +// SetEventSubscription sets the EventSubscription field's value. +func (s *ModifyEventSubscriptionOutput) SetEventSubscription(v *EventSubscription) *ModifyEventSubscriptionOutput { + s.EventSubscription = v + return s +} + +// Provides information on the option groups the DB instance is a member of. +type OptionGroupMembership struct { + _ struct{} `type:"structure"` + + // The name of the option group that the instance belongs to. + OptionGroupName *string `type:"string"` + + // The status of the DB instance's option group membership. Valid values are: + // in-sync, pending-apply, pending-removal, pending-maintenance-apply, pending-maintenance-removal, + // applying, removing, and failed. + Status *string `type:"string"` +} + +// String returns the string representation +func (s OptionGroupMembership) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OptionGroupMembership) GoString() string { + return s.String() +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *OptionGroupMembership) SetOptionGroupName(v string) *OptionGroupMembership { + s.OptionGroupName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *OptionGroupMembership) SetStatus(v string) *OptionGroupMembership { + s.Status = &v + return s +} + +// Contains a list of available options for a DB instance. +// +// This data type is used as a response element in the DescribeOrderableDBInstanceOptions +// action. +type OrderableDBInstanceOption struct { + _ struct{} `type:"structure"` + + // A list of Availability Zones for a DB instance. 
+ AvailabilityZones []*AvailabilityZone `locationNameList:"AvailabilityZone" type:"list"` + + // The DB instance class for a DB instance. + DBInstanceClass *string `type:"string"` + + // The engine type of a DB instance. + Engine *string `type:"string"` + + // The engine version of a DB instance. + EngineVersion *string `type:"string"` + + // The license model for a DB instance. + LicenseModel *string `type:"string"` + + // Maximum total provisioned IOPS for a DB instance. + MaxIopsPerDbInstance *int64 `type:"integer"` + + // Maximum provisioned IOPS per GiB for a DB instance. + MaxIopsPerGib *float64 `type:"double"` + + // Maximum storage size for a DB instance. + MaxStorageSize *int64 `type:"integer"` + + // Minimum total provisioned IOPS for a DB instance. + MinIopsPerDbInstance *int64 `type:"integer"` + + // Minimum provisioned IOPS per GiB for a DB instance. + MinIopsPerGib *float64 `type:"double"` + + // Minimum storage size for a DB instance. + MinStorageSize *int64 `type:"integer"` + + // Indicates whether a DB instance is Multi-AZ capable. + MultiAZCapable *bool `type:"boolean"` + + // Indicates whether a DB instance can have a Read Replica. + ReadReplicaCapable *bool `type:"boolean"` + + // Indicates the storage type for a DB instance. + StorageType *string `type:"string"` + + // Indicates whether a DB instance supports Enhanced Monitoring at intervals + // from 1 to 60 seconds. + SupportsEnhancedMonitoring *bool `type:"boolean"` + + // Indicates whether a DB instance supports IAM database authentication. + SupportsIAMDatabaseAuthentication *bool `type:"boolean"` + + // Indicates whether a DB instance supports provisioned IOPS. + SupportsIops *bool `type:"boolean"` + + // True if a DB instance supports Performance Insights, otherwise false. + SupportsPerformanceInsights *bool `type:"boolean"` + + // Indicates whether a DB instance supports encrypted storage. + SupportsStorageEncryption *bool `type:"boolean"` + + // Indicates whether a DB instance is in a VPC. + Vpc *bool `type:"boolean"` +} + +// String returns the string representation +func (s OrderableDBInstanceOption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OrderableDBInstanceOption) GoString() string { + return s.String() +} + +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *OrderableDBInstanceOption) SetAvailabilityZones(v []*AvailabilityZone) *OrderableDBInstanceOption { + s.AvailabilityZones = v + return s +} + +// SetDBInstanceClass sets the DBInstanceClass field's value. +func (s *OrderableDBInstanceOption) SetDBInstanceClass(v string) *OrderableDBInstanceOption { + s.DBInstanceClass = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *OrderableDBInstanceOption) SetEngine(v string) *OrderableDBInstanceOption { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *OrderableDBInstanceOption) SetEngineVersion(v string) *OrderableDBInstanceOption { + s.EngineVersion = &v + return s +} + +// SetLicenseModel sets the LicenseModel field's value. +func (s *OrderableDBInstanceOption) SetLicenseModel(v string) *OrderableDBInstanceOption { + s.LicenseModel = &v + return s +} + +// SetMaxIopsPerDbInstance sets the MaxIopsPerDbInstance field's value. +func (s *OrderableDBInstanceOption) SetMaxIopsPerDbInstance(v int64) *OrderableDBInstanceOption { + s.MaxIopsPerDbInstance = &v + return s +} + +// SetMaxIopsPerGib sets the MaxIopsPerGib field's value. 
+func (s *OrderableDBInstanceOption) SetMaxIopsPerGib(v float64) *OrderableDBInstanceOption { + s.MaxIopsPerGib = &v + return s +} + +// SetMaxStorageSize sets the MaxStorageSize field's value. +func (s *OrderableDBInstanceOption) SetMaxStorageSize(v int64) *OrderableDBInstanceOption { + s.MaxStorageSize = &v + return s +} + +// SetMinIopsPerDbInstance sets the MinIopsPerDbInstance field's value. +func (s *OrderableDBInstanceOption) SetMinIopsPerDbInstance(v int64) *OrderableDBInstanceOption { + s.MinIopsPerDbInstance = &v + return s +} + +// SetMinIopsPerGib sets the MinIopsPerGib field's value. +func (s *OrderableDBInstanceOption) SetMinIopsPerGib(v float64) *OrderableDBInstanceOption { + s.MinIopsPerGib = &v + return s +} + +// SetMinStorageSize sets the MinStorageSize field's value. +func (s *OrderableDBInstanceOption) SetMinStorageSize(v int64) *OrderableDBInstanceOption { + s.MinStorageSize = &v + return s +} + +// SetMultiAZCapable sets the MultiAZCapable field's value. +func (s *OrderableDBInstanceOption) SetMultiAZCapable(v bool) *OrderableDBInstanceOption { + s.MultiAZCapable = &v + return s +} + +// SetReadReplicaCapable sets the ReadReplicaCapable field's value. +func (s *OrderableDBInstanceOption) SetReadReplicaCapable(v bool) *OrderableDBInstanceOption { + s.ReadReplicaCapable = &v + return s +} + +// SetStorageType sets the StorageType field's value. +func (s *OrderableDBInstanceOption) SetStorageType(v string) *OrderableDBInstanceOption { + s.StorageType = &v + return s +} + +// SetSupportsEnhancedMonitoring sets the SupportsEnhancedMonitoring field's value. +func (s *OrderableDBInstanceOption) SetSupportsEnhancedMonitoring(v bool) *OrderableDBInstanceOption { + s.SupportsEnhancedMonitoring = &v + return s +} + +// SetSupportsIAMDatabaseAuthentication sets the SupportsIAMDatabaseAuthentication field's value. +func (s *OrderableDBInstanceOption) SetSupportsIAMDatabaseAuthentication(v bool) *OrderableDBInstanceOption { + s.SupportsIAMDatabaseAuthentication = &v + return s +} + +// SetSupportsIops sets the SupportsIops field's value. +func (s *OrderableDBInstanceOption) SetSupportsIops(v bool) *OrderableDBInstanceOption { + s.SupportsIops = &v + return s +} + +// SetSupportsPerformanceInsights sets the SupportsPerformanceInsights field's value. +func (s *OrderableDBInstanceOption) SetSupportsPerformanceInsights(v bool) *OrderableDBInstanceOption { + s.SupportsPerformanceInsights = &v + return s +} + +// SetSupportsStorageEncryption sets the SupportsStorageEncryption field's value. +func (s *OrderableDBInstanceOption) SetSupportsStorageEncryption(v bool) *OrderableDBInstanceOption { + s.SupportsStorageEncryption = &v + return s +} + +// SetVpc sets the Vpc field's value. +func (s *OrderableDBInstanceOption) SetVpc(v bool) *OrderableDBInstanceOption { + s.Vpc = &v + return s +} + +// This data type is used as a request parameter in the ModifyDBParameterGroup +// and ResetDBParameterGroup actions. +// +// This data type is used as a response element in the DescribeEngineDefaultParameters +// and DescribeDBParameters actions. +type Parameter struct { + _ struct{} `type:"structure"` + + // Specifies the valid range of values for the parameter. + AllowedValues *string `type:"string"` + + // Indicates when to apply parameter updates. + ApplyMethod *string `type:"string" enum:"ApplyMethod"` + + // Specifies the engine specific parameters type. + ApplyType *string `type:"string"` + + // Specifies the valid data type for the parameter. 
+ DataType *string `type:"string"` + + // Provides a description of the parameter. + Description *string `type:"string"` + + // Indicates whether (true) or not (false) the parameter can be modified. Some + // parameters have security or operational implications that prevent them from + // being changed. + IsModifiable *bool `type:"boolean"` + + // The earliest engine version to which the parameter can apply. + MinimumEngineVersion *string `type:"string"` + + // Specifies the name of the parameter. + ParameterName *string `type:"string"` + + // Specifies the value of the parameter. + ParameterValue *string `type:"string"` + + // Indicates the source of the parameter value. + Source *string `type:"string"` +} + +// String returns the string representation +func (s Parameter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Parameter) GoString() string { + return s.String() +} + +// SetAllowedValues sets the AllowedValues field's value. +func (s *Parameter) SetAllowedValues(v string) *Parameter { + s.AllowedValues = &v + return s +} + +// SetApplyMethod sets the ApplyMethod field's value. +func (s *Parameter) SetApplyMethod(v string) *Parameter { + s.ApplyMethod = &v + return s +} + +// SetApplyType sets the ApplyType field's value. +func (s *Parameter) SetApplyType(v string) *Parameter { + s.ApplyType = &v + return s +} + +// SetDataType sets the DataType field's value. +func (s *Parameter) SetDataType(v string) *Parameter { + s.DataType = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *Parameter) SetDescription(v string) *Parameter { + s.Description = &v + return s +} + +// SetIsModifiable sets the IsModifiable field's value. +func (s *Parameter) SetIsModifiable(v bool) *Parameter { + s.IsModifiable = &v + return s +} + +// SetMinimumEngineVersion sets the MinimumEngineVersion field's value. +func (s *Parameter) SetMinimumEngineVersion(v string) *Parameter { + s.MinimumEngineVersion = &v + return s +} + +// SetParameterName sets the ParameterName field's value. +func (s *Parameter) SetParameterName(v string) *Parameter { + s.ParameterName = &v + return s +} + +// SetParameterValue sets the ParameterValue field's value. +func (s *Parameter) SetParameterValue(v string) *Parameter { + s.ParameterValue = &v + return s +} + +// SetSource sets the Source field's value. +func (s *Parameter) SetSource(v string) *Parameter { + s.Source = &v + return s +} + +// A list of the log types whose configuration is still pending. In other words, +// these log types are in the process of being activated or deactivated. +type PendingCloudwatchLogsExports struct { + _ struct{} `type:"structure"` + + // Log types that are in the process of being enabled. After they are enabled, + // these log types are exported to CloudWatch Logs. + LogTypesToDisable []*string `type:"list"` + + // Log types that are in the process of being deactivated. After they are deactivated, + // these log types aren't exported to CloudWatch Logs. + LogTypesToEnable []*string `type:"list"` +} + +// String returns the string representation +func (s PendingCloudwatchLogsExports) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PendingCloudwatchLogsExports) GoString() string { + return s.String() +} + +// SetLogTypesToDisable sets the LogTypesToDisable field's value. 
+func (s *PendingCloudwatchLogsExports) SetLogTypesToDisable(v []*string) *PendingCloudwatchLogsExports { + s.LogTypesToDisable = v + return s +} + +// SetLogTypesToEnable sets the LogTypesToEnable field's value. +func (s *PendingCloudwatchLogsExports) SetLogTypesToEnable(v []*string) *PendingCloudwatchLogsExports { + s.LogTypesToEnable = v + return s +} + +// Provides information about a pending maintenance action for a resource. +type PendingMaintenanceAction struct { + _ struct{} `type:"structure"` + + // The type of pending maintenance action that is available for the resource. + Action *string `type:"string"` + + // The date of the maintenance window when the action is applied. The maintenance + // action is applied to the resource during its first maintenance window after + // this date. If this date is specified, any next-maintenance opt-in requests + // are ignored. + AutoAppliedAfterDate *time.Time `type:"timestamp"` + + // The effective date when the pending maintenance action is applied to the + // resource. This date takes into account opt-in requests received from the + // ApplyPendingMaintenanceAction API, the AutoAppliedAfterDate, and the ForcedApplyDate. + // This value is blank if an opt-in request has not been received and nothing + // has been specified as AutoAppliedAfterDate or ForcedApplyDate. + CurrentApplyDate *time.Time `type:"timestamp"` + + // A description providing more detail about the maintenance action. + Description *string `type:"string"` + + // The date when the maintenance action is automatically applied. The maintenance + // action is applied to the resource on this date regardless of the maintenance + // window for the resource. If this date is specified, any immediate opt-in + // requests are ignored. + ForcedApplyDate *time.Time `type:"timestamp"` + + // Indicates the type of opt-in request that has been received for the resource. + OptInStatus *string `type:"string"` +} + +// String returns the string representation +func (s PendingMaintenanceAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PendingMaintenanceAction) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *PendingMaintenanceAction) SetAction(v string) *PendingMaintenanceAction { + s.Action = &v + return s +} + +// SetAutoAppliedAfterDate sets the AutoAppliedAfterDate field's value. +func (s *PendingMaintenanceAction) SetAutoAppliedAfterDate(v time.Time) *PendingMaintenanceAction { + s.AutoAppliedAfterDate = &v + return s +} + +// SetCurrentApplyDate sets the CurrentApplyDate field's value. +func (s *PendingMaintenanceAction) SetCurrentApplyDate(v time.Time) *PendingMaintenanceAction { + s.CurrentApplyDate = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *PendingMaintenanceAction) SetDescription(v string) *PendingMaintenanceAction { + s.Description = &v + return s +} + +// SetForcedApplyDate sets the ForcedApplyDate field's value. +func (s *PendingMaintenanceAction) SetForcedApplyDate(v time.Time) *PendingMaintenanceAction { + s.ForcedApplyDate = &v + return s +} + +// SetOptInStatus sets the OptInStatus field's value. +func (s *PendingMaintenanceAction) SetOptInStatus(v string) *PendingMaintenanceAction { + s.OptInStatus = &v + return s +} + +// This data type is used as a response element in the ModifyDBInstance action. 
+type PendingModifiedValues struct { + _ struct{} `type:"structure"` + + // Contains the new AllocatedStorage size for the DB instance that will be applied + // or is currently being applied. + AllocatedStorage *int64 `type:"integer"` + + // Specifies the pending number of days for which automated backups are retained. + BackupRetentionPeriod *int64 `type:"integer"` + + // Specifies the identifier of the CA certificate for the DB instance. + CACertificateIdentifier *string `type:"string"` + + // Contains the new DBInstanceClass for the DB instance that will be applied + // or is currently being applied. + DBInstanceClass *string `type:"string"` + + // Contains the new DBInstanceIdentifier for the DB instance that will be applied + // or is currently being applied. + DBInstanceIdentifier *string `type:"string"` + + // The new DB subnet group for the DB instance. + DBSubnetGroupName *string `type:"string"` + + // Indicates the database engine version. + EngineVersion *string `type:"string"` + + // Specifies the new Provisioned IOPS value for the DB instance that will be + // applied or is currently being applied. + Iops *int64 `type:"integer"` + + // The license model for the DB instance. + // + // Valid values: license-included | bring-your-own-license | general-public-license + LicenseModel *string `type:"string"` + + // Contains the pending or currently-in-progress change of the master credentials + // for the DB instance. + MasterUserPassword *string `type:"string"` + + // Indicates that the Single-AZ DB instance is to change to a Multi-AZ deployment. + MultiAZ *bool `type:"boolean"` + + // A list of the log types whose configuration is still pending. In other words, + // these log types are in the process of being activated or deactivated. + PendingCloudwatchLogsExports *PendingCloudwatchLogsExports `type:"structure"` + + // Specifies the pending port for the DB instance. + Port *int64 `type:"integer"` + + // Specifies the storage type to be associated with the DB instance. + StorageType *string `type:"string"` +} + +// String returns the string representation +func (s PendingModifiedValues) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PendingModifiedValues) GoString() string { + return s.String() +} + +// SetAllocatedStorage sets the AllocatedStorage field's value. +func (s *PendingModifiedValues) SetAllocatedStorage(v int64) *PendingModifiedValues { + s.AllocatedStorage = &v + return s +} + +// SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. +func (s *PendingModifiedValues) SetBackupRetentionPeriod(v int64) *PendingModifiedValues { + s.BackupRetentionPeriod = &v + return s +} + +// SetCACertificateIdentifier sets the CACertificateIdentifier field's value. +func (s *PendingModifiedValues) SetCACertificateIdentifier(v string) *PendingModifiedValues { + s.CACertificateIdentifier = &v + return s +} + +// SetDBInstanceClass sets the DBInstanceClass field's value. +func (s *PendingModifiedValues) SetDBInstanceClass(v string) *PendingModifiedValues { + s.DBInstanceClass = &v + return s +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *PendingModifiedValues) SetDBInstanceIdentifier(v string) *PendingModifiedValues { + s.DBInstanceIdentifier = &v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. 
+func (s *PendingModifiedValues) SetDBSubnetGroupName(v string) *PendingModifiedValues { + s.DBSubnetGroupName = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *PendingModifiedValues) SetEngineVersion(v string) *PendingModifiedValues { + s.EngineVersion = &v + return s +} + +// SetIops sets the Iops field's value. +func (s *PendingModifiedValues) SetIops(v int64) *PendingModifiedValues { + s.Iops = &v + return s +} + +// SetLicenseModel sets the LicenseModel field's value. +func (s *PendingModifiedValues) SetLicenseModel(v string) *PendingModifiedValues { + s.LicenseModel = &v + return s +} + +// SetMasterUserPassword sets the MasterUserPassword field's value. +func (s *PendingModifiedValues) SetMasterUserPassword(v string) *PendingModifiedValues { + s.MasterUserPassword = &v + return s +} + +// SetMultiAZ sets the MultiAZ field's value. +func (s *PendingModifiedValues) SetMultiAZ(v bool) *PendingModifiedValues { + s.MultiAZ = &v + return s +} + +// SetPendingCloudwatchLogsExports sets the PendingCloudwatchLogsExports field's value. +func (s *PendingModifiedValues) SetPendingCloudwatchLogsExports(v *PendingCloudwatchLogsExports) *PendingModifiedValues { + s.PendingCloudwatchLogsExports = v + return s +} + +// SetPort sets the Port field's value. +func (s *PendingModifiedValues) SetPort(v int64) *PendingModifiedValues { + s.Port = &v + return s +} + +// SetStorageType sets the StorageType field's value. +func (s *PendingModifiedValues) SetStorageType(v string) *PendingModifiedValues { + s.StorageType = &v + return s +} + +type PromoteReadReplicaDBClusterInput struct { + _ struct{} `type:"structure"` + + // The identifier of the DB cluster Read Replica to promote. This parameter + // is not case-sensitive. + // + // Constraints: + // + // * Must match the identifier of an existing DBCluster Read Replica. + // + // Example: my-cluster-replica1 + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s PromoteReadReplicaDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PromoteReadReplicaDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PromoteReadReplicaDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PromoteReadReplicaDBClusterInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *PromoteReadReplicaDBClusterInput) SetDBClusterIdentifier(v string) *PromoteReadReplicaDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +type PromoteReadReplicaDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters action. 
+ DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s PromoteReadReplicaDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PromoteReadReplicaDBClusterOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *PromoteReadReplicaDBClusterOutput) SetDBCluster(v *DBCluster) *PromoteReadReplicaDBClusterOutput { + s.DBCluster = v + return s +} + +// A range of integer values. +type Range struct { + _ struct{} `type:"structure"` + + // The minimum value in the range. + From *int64 `type:"integer"` + + // The step value for the range. For example, if you have a range of 5,000 to + // 10,000, with a step value of 1,000, the valid values start at 5,000 and step + // up by 1,000. Even though 7,500 is within the range, it isn't a valid value + // for the range. The valid values are 5,000, 6,000, 7,000, 8,000... + Step *int64 `type:"integer"` + + // The maximum value in the range. + To *int64 `type:"integer"` +} + +// String returns the string representation +func (s Range) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Range) GoString() string { + return s.String() +} + +// SetFrom sets the From field's value. +func (s *Range) SetFrom(v int64) *Range { + s.From = &v + return s +} + +// SetStep sets the Step field's value. +func (s *Range) SetStep(v int64) *Range { + s.Step = &v + return s +} + +// SetTo sets the To field's value. +func (s *Range) SetTo(v int64) *Range { + s.To = &v + return s +} + +type RebootDBInstanceInput struct { + _ struct{} `type:"structure"` + + // The DB instance identifier. This parameter is stored as a lowercase string. + // + // Constraints: + // + // * Must match the identifier of an existing DBInstance. + // + // DBInstanceIdentifier is a required field + DBInstanceIdentifier *string `type:"string" required:"true"` + + // When true, the reboot is conducted through a MultiAZ failover. + // + // Constraint: You can't specify true if the instance is not configured for + // MultiAZ. + ForceFailover *bool `type:"boolean"` +} + +// String returns the string representation +func (s RebootDBInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RebootDBInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RebootDBInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RebootDBInstanceInput"} + if s.DBInstanceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBInstanceIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *RebootDBInstanceInput) SetDBInstanceIdentifier(v string) *RebootDBInstanceInput { + s.DBInstanceIdentifier = &v + return s +} + +// SetForceFailover sets the ForceFailover field's value. +func (s *RebootDBInstanceInput) SetForceFailover(v bool) *RebootDBInstanceInput { + s.ForceFailover = &v + return s +} + +type RebootDBInstanceOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB instance. + // + // This data type is used as a response element in the DescribeDBInstances action. 
+ DBInstance *DBInstance `type:"structure"` +} + +// String returns the string representation +func (s RebootDBInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RebootDBInstanceOutput) GoString() string { + return s.String() +} + +// SetDBInstance sets the DBInstance field's value. +func (s *RebootDBInstanceOutput) SetDBInstance(v *DBInstance) *RebootDBInstanceOutput { + s.DBInstance = v + return s +} + +type RemoveRoleFromDBClusterInput struct { + _ struct{} `type:"structure"` + + // The name of the DB cluster to disassociate the IAM role from. + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the IAM role to disassociate from the DB + // cluster, for example arn:aws:iam::123456789012:role/NeptuneAccessRole. + // + // RoleArn is a required field + RoleArn *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s RemoveRoleFromDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveRoleFromDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RemoveRoleFromDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveRoleFromDBClusterInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *RemoveRoleFromDBClusterInput) SetDBClusterIdentifier(v string) *RemoveRoleFromDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *RemoveRoleFromDBClusterInput) SetRoleArn(v string) *RemoveRoleFromDBClusterInput { + s.RoleArn = &v + return s +} + +type RemoveRoleFromDBClusterOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RemoveRoleFromDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveRoleFromDBClusterOutput) GoString() string { + return s.String() +} + +type RemoveSourceIdentifierFromSubscriptionInput struct { + _ struct{} `type:"structure"` + + // The source identifier to be removed from the subscription, such as the DB + // instance identifier for a DB instance or the name of a security group. + // + // SourceIdentifier is a required field + SourceIdentifier *string `type:"string" required:"true"` + + // The name of the event notification subscription you want to remove a source + // identifier from. + // + // SubscriptionName is a required field + SubscriptionName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s RemoveSourceIdentifierFromSubscriptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveSourceIdentifierFromSubscriptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RemoveSourceIdentifierFromSubscriptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveSourceIdentifierFromSubscriptionInput"} + if s.SourceIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SourceIdentifier")) + } + if s.SubscriptionName == nil { + invalidParams.Add(request.NewErrParamRequired("SubscriptionName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSourceIdentifier sets the SourceIdentifier field's value. +func (s *RemoveSourceIdentifierFromSubscriptionInput) SetSourceIdentifier(v string) *RemoveSourceIdentifierFromSubscriptionInput { + s.SourceIdentifier = &v + return s +} + +// SetSubscriptionName sets the SubscriptionName field's value. +func (s *RemoveSourceIdentifierFromSubscriptionInput) SetSubscriptionName(v string) *RemoveSourceIdentifierFromSubscriptionInput { + s.SubscriptionName = &v + return s +} + +type RemoveSourceIdentifierFromSubscriptionOutput struct { + _ struct{} `type:"structure"` + + // Contains the results of a successful invocation of the DescribeEventSubscriptions + // action. + EventSubscription *EventSubscription `type:"structure"` +} + +// String returns the string representation +func (s RemoveSourceIdentifierFromSubscriptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveSourceIdentifierFromSubscriptionOutput) GoString() string { + return s.String() +} + +// SetEventSubscription sets the EventSubscription field's value. +func (s *RemoveSourceIdentifierFromSubscriptionOutput) SetEventSubscription(v *EventSubscription) *RemoveSourceIdentifierFromSubscriptionOutput { + s.EventSubscription = v + return s +} + +type RemoveTagsFromResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Neptune resource that the tags are removed from. This value is + // an Amazon Resource Name (ARN). For information about creating an ARN, see + // Constructing an Amazon Resource Name (ARN) (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html#tagging.ARN.Constructing). + // + // ResourceName is a required field + ResourceName *string `type:"string" required:"true"` + + // The tag key (name) of the tag to be removed. + // + // TagKeys is a required field + TagKeys []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s RemoveTagsFromResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveTagsFromResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RemoveTagsFromResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveTagsFromResourceInput"} + if s.ResourceName == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceName")) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceName sets the ResourceName field's value. +func (s *RemoveTagsFromResourceInput) SetResourceName(v string) *RemoveTagsFromResourceInput { + s.ResourceName = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. 
+func (s *RemoveTagsFromResourceInput) SetTagKeys(v []*string) *RemoveTagsFromResourceInput { + s.TagKeys = v + return s +} + +type RemoveTagsFromResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RemoveTagsFromResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveTagsFromResourceOutput) GoString() string { + return s.String() +} + +type ResetDBClusterParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the DB cluster parameter group to reset. + // + // DBClusterParameterGroupName is a required field + DBClusterParameterGroupName *string `type:"string" required:"true"` + + // A list of parameter names in the DB cluster parameter group to reset to the + // default values. You can't use this parameter if the ResetAllParameters parameter + // is set to true. + Parameters []*Parameter `locationNameList:"Parameter" type:"list"` + + // A value that is set to true to reset all parameters in the DB cluster parameter + // group to their default values, and false otherwise. You can't use this parameter + // if there is a list of parameter names specified for the Parameters parameter. + ResetAllParameters *bool `type:"boolean"` +} + +// String returns the string representation +func (s ResetDBClusterParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetDBClusterParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResetDBClusterParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResetDBClusterParameterGroupInput"} + if s.DBClusterParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterParameterGroupName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *ResetDBClusterParameterGroupInput) SetDBClusterParameterGroupName(v string) *ResetDBClusterParameterGroupInput { + s.DBClusterParameterGroupName = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *ResetDBClusterParameterGroupInput) SetParameters(v []*Parameter) *ResetDBClusterParameterGroupInput { + s.Parameters = v + return s +} + +// SetResetAllParameters sets the ResetAllParameters field's value. +func (s *ResetDBClusterParameterGroupInput) SetResetAllParameters(v bool) *ResetDBClusterParameterGroupInput { + s.ResetAllParameters = &v + return s +} + +type ResetDBClusterParameterGroupOutput struct { + _ struct{} `type:"structure"` + + // The name of the DB cluster parameter group. + // + // Constraints: + // + // * Must be 1 to 255 letters or numbers. + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + // + // This value is stored as a lowercase string. + DBClusterParameterGroupName *string `type:"string"` +} + +// String returns the string representation +func (s ResetDBClusterParameterGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetDBClusterParameterGroupOutput) GoString() string { + return s.String() +} + +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. 
+func (s *ResetDBClusterParameterGroupOutput) SetDBClusterParameterGroupName(v string) *ResetDBClusterParameterGroupOutput { + s.DBClusterParameterGroupName = &v + return s +} + +type ResetDBParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the DB parameter group. + // + // Constraints: + // + // * Must match the name of an existing DBParameterGroup. + // + // DBParameterGroupName is a required field + DBParameterGroupName *string `type:"string" required:"true"` + + // To reset the entire DB parameter group, specify the DBParameterGroup name + // and ResetAllParameters parameters. To reset specific parameters, provide + // a list of the following: ParameterName and ApplyMethod. A maximum of 20 parameters + // can be modified in a single request. + // + // Valid Values (for Apply method): pending-reboot + Parameters []*Parameter `locationNameList:"Parameter" type:"list"` + + // Specifies whether (true) or not (false) to reset all parameters in the DB + // parameter group to default values. + // + // Default: true + ResetAllParameters *bool `type:"boolean"` +} + +// String returns the string representation +func (s ResetDBParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetDBParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResetDBParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResetDBParameterGroupInput"} + if s.DBParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("DBParameterGroupName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *ResetDBParameterGroupInput) SetDBParameterGroupName(v string) *ResetDBParameterGroupInput { + s.DBParameterGroupName = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *ResetDBParameterGroupInput) SetParameters(v []*Parameter) *ResetDBParameterGroupInput { + s.Parameters = v + return s +} + +// SetResetAllParameters sets the ResetAllParameters field's value. +func (s *ResetDBParameterGroupInput) SetResetAllParameters(v bool) *ResetDBParameterGroupInput { + s.ResetAllParameters = &v + return s +} + +// Contains the result of a successful invocation of the ModifyDBParameterGroup +// or ResetDBParameterGroup action. +type ResetDBParameterGroupOutput struct { + _ struct{} `type:"structure"` + + // Provides the name of the DB parameter group. + DBParameterGroupName *string `type:"string"` +} + +// String returns the string representation +func (s ResetDBParameterGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetDBParameterGroupOutput) GoString() string { + return s.String() +} + +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *ResetDBParameterGroupOutput) SetDBParameterGroupName(v string) *ResetDBParameterGroupOutput { + s.DBParameterGroupName = &v + return s +} + +// Describes the pending maintenance actions for a resource. +type ResourcePendingMaintenanceActions struct { + _ struct{} `type:"structure"` + + // A list that provides details about the pending maintenance actions for the + // resource. 
+ PendingMaintenanceActionDetails []*PendingMaintenanceAction `locationNameList:"PendingMaintenanceAction" type:"list"` + + // The ARN of the resource that has pending maintenance actions. + ResourceIdentifier *string `type:"string"` +} + +// String returns the string representation +func (s ResourcePendingMaintenanceActions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourcePendingMaintenanceActions) GoString() string { + return s.String() +} + +// SetPendingMaintenanceActionDetails sets the PendingMaintenanceActionDetails field's value. +func (s *ResourcePendingMaintenanceActions) SetPendingMaintenanceActionDetails(v []*PendingMaintenanceAction) *ResourcePendingMaintenanceActions { + s.PendingMaintenanceActionDetails = v + return s +} + +// SetResourceIdentifier sets the ResourceIdentifier field's value. +func (s *ResourcePendingMaintenanceActions) SetResourceIdentifier(v string) *ResourcePendingMaintenanceActions { + s.ResourceIdentifier = &v + return s +} + +type RestoreDBClusterFromSnapshotInput struct { + _ struct{} `type:"structure"` + + // Provides the list of EC2 Availability Zones that instances in the restored + // DB cluster can be created in. + AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` + + // The name of the DB cluster to create from the DB snapshot or DB cluster snapshot. + // This parameter isn't case-sensitive. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + // + // Example: my-snapshot-id + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The name of the DB subnet group to use for the new DB cluster. + // + // Constraints: If supplied, must match the name of an existing DBSubnetGroup. + // + // Example: mySubnetgroup + DBSubnetGroupName *string `type:"string"` + + // The database name for the restored DB cluster. + DatabaseName *string `type:"string"` + + // True to enable mapping of AWS Identity and Access Management (IAM) accounts + // to database accounts, and otherwise false. + // + // Default: false + EnableIAMDatabaseAuthentication *bool `type:"boolean"` + + // The database engine to use for the new DB cluster. + // + // Default: The same as source + // + // Constraint: Must be compatible with the engine of the source + // + // Engine is a required field + Engine *string `type:"string" required:"true"` + + // The version of the database engine to use for the new DB cluster. + EngineVersion *string `type:"string"` + + // The AWS KMS key identifier to use when restoring an encrypted DB cluster + // from a DB snapshot or DB cluster snapshot. + // + // The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption + // key. If you are restoring a DB cluster with the same AWS account that owns + // the KMS encryption key used to encrypt the new DB cluster, then you can use + // the KMS key alias instead of the ARN for the KMS encryption key. + // + // If you do not specify a value for the KmsKeyId parameter, then the following + // will occur: + // + // * If the DB snapshot or DB cluster snapshot in SnapshotIdentifier is encrypted, + // then the restored DB cluster is encrypted using the KMS key that was used + // to encrypt the DB snapshot or DB cluster snapshot. 
+ // + // * If the DB snapshot or DB cluster snapshot in SnapshotIdentifier is not + // encrypted, then the restored DB cluster is not encrypted. + KmsKeyId *string `type:"string"` + + // The name of the option group to use for the restored DB cluster. + OptionGroupName *string `type:"string"` + + // The port number on which the new DB cluster accepts connections. + // + // Constraints: Value must be 1150-65535 + // + // Default: The same port as the original DB cluster. + Port *int64 `type:"integer"` + + // The identifier for the DB snapshot or DB cluster snapshot to restore from. + // + // You can use either the name or the Amazon Resource Name (ARN) to specify + // a DB cluster snapshot. However, you can use only the ARN to specify a DB + // snapshot. + // + // Constraints: + // + // * Must match the identifier of an existing Snapshot. + // + // SnapshotIdentifier is a required field + SnapshotIdentifier *string `type:"string" required:"true"` + + // The tags to be assigned to the restored DB cluster. + Tags []*Tag `locationNameList:"Tag" type:"list"` + + // A list of VPC security groups that the new DB cluster will belong to. + VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` +} + +// String returns the string representation +func (s RestoreDBClusterFromSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreDBClusterFromSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RestoreDBClusterFromSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreDBClusterFromSnapshotInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + if s.Engine == nil { + invalidParams.Add(request.NewErrParamRequired("Engine")) + } + if s.SnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SnapshotIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetAvailabilityZones(v []*string) *RestoreDBClusterFromSnapshotInput { + s.AvailabilityZones = v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetDBClusterIdentifier(v string) *RestoreDBClusterFromSnapshotInput { + s.DBClusterIdentifier = &v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetDBSubnetGroupName(v string) *RestoreDBClusterFromSnapshotInput { + s.DBSubnetGroupName = &v + return s +} + +// SetDatabaseName sets the DatabaseName field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetDatabaseName(v string) *RestoreDBClusterFromSnapshotInput { + s.DatabaseName = &v + return s +} + +// SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetEnableIAMDatabaseAuthentication(v bool) *RestoreDBClusterFromSnapshotInput { + s.EnableIAMDatabaseAuthentication = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetEngine(v string) *RestoreDBClusterFromSnapshotInput { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. 
+func (s *RestoreDBClusterFromSnapshotInput) SetEngineVersion(v string) *RestoreDBClusterFromSnapshotInput { + s.EngineVersion = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetKmsKeyId(v string) *RestoreDBClusterFromSnapshotInput { + s.KmsKeyId = &v + return s +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetOptionGroupName(v string) *RestoreDBClusterFromSnapshotInput { + s.OptionGroupName = &v + return s +} + +// SetPort sets the Port field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetPort(v int64) *RestoreDBClusterFromSnapshotInput { + s.Port = &v + return s +} + +// SetSnapshotIdentifier sets the SnapshotIdentifier field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetSnapshotIdentifier(v string) *RestoreDBClusterFromSnapshotInput { + s.SnapshotIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetTags(v []*Tag) *RestoreDBClusterFromSnapshotInput { + s.Tags = v + return s +} + +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetVpcSecurityGroupIds(v []*string) *RestoreDBClusterFromSnapshotInput { + s.VpcSecurityGroupIds = v + return s +} + +type RestoreDBClusterFromSnapshotOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters action. + DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s RestoreDBClusterFromSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreDBClusterFromSnapshotOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *RestoreDBClusterFromSnapshotOutput) SetDBCluster(v *DBCluster) *RestoreDBClusterFromSnapshotOutput { + s.DBCluster = v + return s +} + +type RestoreDBClusterToPointInTimeInput struct { + _ struct{} `type:"structure"` + + // The name of the new DB cluster to be created. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens + // + // * First character must be a letter + // + // * Cannot end with a hyphen or contain two consecutive hyphens + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The DB subnet group name to use for the new DB cluster. + // + // Constraints: If supplied, must match the name of an existing DBSubnetGroup. + // + // Example: mySubnetgroup + DBSubnetGroupName *string `type:"string"` + + // True to enable mapping of AWS Identity and Access Management (IAM) accounts + // to database accounts, and otherwise false. + // + // Default: false + EnableIAMDatabaseAuthentication *bool `type:"boolean"` + + // The AWS KMS key identifier to use when restoring an encrypted DB cluster + // from an encrypted DB cluster. + // + // The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption + // key. If you are restoring a DB cluster with the same AWS account that owns + // the KMS encryption key used to encrypt the new DB cluster, then you can use + // the KMS key alias instead of the ARN for the KMS encryption key. 
+ // + // You can restore to a new DB cluster and encrypt the new DB cluster with a + // KMS key that is different than the KMS key used to encrypt the source DB + // cluster. The new DB cluster is encrypted with the KMS key identified by the + // KmsKeyId parameter. + // + // If you do not specify a value for the KmsKeyId parameter, then the following + // will occur: + // + // * If the DB cluster is encrypted, then the restored DB cluster is encrypted + // using the KMS key that was used to encrypt the source DB cluster. + // + // * If the DB cluster is not encrypted, then the restored DB cluster is + // not encrypted. + // + // If DBClusterIdentifier refers to a DB cluster that is not encrypted, then + // the restore request is rejected. + KmsKeyId *string `type:"string"` + + // The name of the option group for the new DB cluster. + OptionGroupName *string `type:"string"` + + // The port number on which the new DB cluster accepts connections. + // + // Constraints: Value must be 1150-65535 + // + // Default: The same port as the original DB cluster. + Port *int64 `type:"integer"` + + // The date and time to restore the DB cluster to. + // + // Valid Values: Value must be a time in Universal Coordinated Time (UTC) format + // + // Constraints: + // + // * Must be before the latest restorable time for the DB instance + // + // * Must be specified if UseLatestRestorableTime parameter is not provided + // + // * Cannot be specified if UseLatestRestorableTime parameter is true + // + // * Cannot be specified if RestoreType parameter is copy-on-write + // + // Example: 2015-03-07T23:45:00Z + RestoreToTime *time.Time `type:"timestamp"` + + // The type of restore to be performed. You can specify one of the following + // values: + // + // * full-copy - The new DB cluster is restored as a full copy of the source + // DB cluster. + // + // * copy-on-write - The new DB cluster is restored as a clone of the source + // DB cluster. + // + // Constraints: You can't specify copy-on-write if the engine version of the + // source DB cluster is earlier than 1.11. + // + // If you don't specify a RestoreType value, then the new DB cluster is restored + // as a full copy of the source DB cluster. + RestoreType *string `type:"string"` + + // The identifier of the source DB cluster from which to restore. + // + // Constraints: + // + // * Must match the identifier of an existing DBCluster. + // + // SourceDBClusterIdentifier is a required field + SourceDBClusterIdentifier *string `type:"string" required:"true"` + + // A list of tags. For more information, see Tagging Amazon Neptune Resources + // (http://docs.aws.amazon.com/neptune/latest/UserGuide/tagging.ARN.html). + Tags []*Tag `locationNameList:"Tag" type:"list"` + + // A value that is set to true to restore the DB cluster to the latest restorable + // backup time, and false otherwise. + // + // Default: false + // + // Constraints: Cannot be specified if RestoreToTime parameter is provided. + UseLatestRestorableTime *bool `type:"boolean"` + + // A list of VPC security groups that the new DB cluster belongs to. + VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` +} + +// String returns the string representation +func (s RestoreDBClusterToPointInTimeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreDBClusterToPointInTimeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RestoreDBClusterToPointInTimeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreDBClusterToPointInTimeInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + if s.SourceDBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SourceDBClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetDBClusterIdentifier(v string) *RestoreDBClusterToPointInTimeInput { + s.DBClusterIdentifier = &v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetDBSubnetGroupName(v string) *RestoreDBClusterToPointInTimeInput { + s.DBSubnetGroupName = &v + return s +} + +// SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetEnableIAMDatabaseAuthentication(v bool) *RestoreDBClusterToPointInTimeInput { + s.EnableIAMDatabaseAuthentication = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetKmsKeyId(v string) *RestoreDBClusterToPointInTimeInput { + s.KmsKeyId = &v + return s +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetOptionGroupName(v string) *RestoreDBClusterToPointInTimeInput { + s.OptionGroupName = &v + return s +} + +// SetPort sets the Port field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetPort(v int64) *RestoreDBClusterToPointInTimeInput { + s.Port = &v + return s +} + +// SetRestoreToTime sets the RestoreToTime field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetRestoreToTime(v time.Time) *RestoreDBClusterToPointInTimeInput { + s.RestoreToTime = &v + return s +} + +// SetRestoreType sets the RestoreType field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetRestoreType(v string) *RestoreDBClusterToPointInTimeInput { + s.RestoreType = &v + return s +} + +// SetSourceDBClusterIdentifier sets the SourceDBClusterIdentifier field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetSourceDBClusterIdentifier(v string) *RestoreDBClusterToPointInTimeInput { + s.SourceDBClusterIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetTags(v []*Tag) *RestoreDBClusterToPointInTimeInput { + s.Tags = v + return s +} + +// SetUseLatestRestorableTime sets the UseLatestRestorableTime field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetUseLatestRestorableTime(v bool) *RestoreDBClusterToPointInTimeInput { + s.UseLatestRestorableTime = &v + return s +} + +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetVpcSecurityGroupIds(v []*string) *RestoreDBClusterToPointInTimeInput { + s.VpcSecurityGroupIds = v + return s +} + +type RestoreDBClusterToPointInTimeOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Neptune DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters action. 
+ DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s RestoreDBClusterToPointInTimeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreDBClusterToPointInTimeOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *RestoreDBClusterToPointInTimeOutput) SetDBCluster(v *DBCluster) *RestoreDBClusterToPointInTimeOutput { + s.DBCluster = v + return s +} + +// This data type is used as a response element in the DescribeDBSubnetGroups +// action. +type Subnet struct { + _ struct{} `type:"structure"` + + // Contains Availability Zone information. + // + // This data type is used as an element in the following data type: + // + // * OrderableDBInstanceOption + SubnetAvailabilityZone *AvailabilityZone `type:"structure"` + + // Specifies the identifier of the subnet. + SubnetIdentifier *string `type:"string"` + + // Specifies the status of the subnet. + SubnetStatus *string `type:"string"` +} + +// String returns the string representation +func (s Subnet) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Subnet) GoString() string { + return s.String() +} + +// SetSubnetAvailabilityZone sets the SubnetAvailabilityZone field's value. +func (s *Subnet) SetSubnetAvailabilityZone(v *AvailabilityZone) *Subnet { + s.SubnetAvailabilityZone = v + return s +} + +// SetSubnetIdentifier sets the SubnetIdentifier field's value. +func (s *Subnet) SetSubnetIdentifier(v string) *Subnet { + s.SubnetIdentifier = &v + return s +} + +// SetSubnetStatus sets the SubnetStatus field's value. +func (s *Subnet) SetSubnetStatus(v string) *Subnet { + s.SubnetStatus = &v + return s +} + +// Metadata assigned to an Amazon Neptune resource consisting of a key-value +// pair. +type Tag struct { + _ struct{} `type:"structure"` + + // A key is the required name of the tag. The string value can be from 1 to + // 128 Unicode characters in length and can't be prefixed with "aws:" or "rds:". + // The string can only contain only the set of Unicode letters, digits, white-space, + // '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$"). + Key *string `type:"string"` + + // A value is the optional value of the tag. The string value can be from 1 + // to 256 Unicode characters in length and can't be prefixed with "aws:" or + // "rds:". The string can only contain only the set of Unicode letters, digits, + // white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$"). + Value *string `type:"string"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +// A time zone associated with a DBInstance. This data type is an element in +// the response to the DescribeDBInstances, and the DescribeDBEngineVersions +// actions. +type Timezone struct { + _ struct{} `type:"structure"` + + // The name of the time zone. 
+ TimezoneName *string `type:"string"` +} + +// String returns the string representation +func (s Timezone) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Timezone) GoString() string { + return s.String() +} + +// SetTimezoneName sets the TimezoneName field's value. +func (s *Timezone) SetTimezoneName(v string) *Timezone { + s.TimezoneName = &v + return s +} + +// The version of the database engine that a DB instance can be upgraded to. +type UpgradeTarget struct { + _ struct{} `type:"structure"` + + // A value that indicates whether the target version is applied to any source + // DB instances that have AutoMinorVersionUpgrade set to true. + AutoUpgrade *bool `type:"boolean"` + + // The version of the database engine that a DB instance can be upgraded to. + Description *string `type:"string"` + + // The name of the upgrade target database engine. + Engine *string `type:"string"` + + // The version number of the upgrade target database engine. + EngineVersion *string `type:"string"` + + // A value that indicates whether a database engine is upgraded to a major version. + IsMajorVersionUpgrade *bool `type:"boolean"` +} + +// String returns the string representation +func (s UpgradeTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpgradeTarget) GoString() string { + return s.String() +} + +// SetAutoUpgrade sets the AutoUpgrade field's value. +func (s *UpgradeTarget) SetAutoUpgrade(v bool) *UpgradeTarget { + s.AutoUpgrade = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpgradeTarget) SetDescription(v string) *UpgradeTarget { + s.Description = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *UpgradeTarget) SetEngine(v string) *UpgradeTarget { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *UpgradeTarget) SetEngineVersion(v string) *UpgradeTarget { + s.EngineVersion = &v + return s +} + +// SetIsMajorVersionUpgrade sets the IsMajorVersionUpgrade field's value. +func (s *UpgradeTarget) SetIsMajorVersionUpgrade(v bool) *UpgradeTarget { + s.IsMajorVersionUpgrade = &v + return s +} + +// Information about valid modifications that you can make to your DB instance. +// Contains the result of a successful call to the DescribeValidDBInstanceModifications +// action. You can use this information when you call ModifyDBInstance. +type ValidDBInstanceModificationsMessage struct { + _ struct{} `type:"structure"` + + // Valid storage options for your DB instance. + Storage []*ValidStorageOptions `locationNameList:"ValidStorageOptions" type:"list"` +} + +// String returns the string representation +func (s ValidDBInstanceModificationsMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidDBInstanceModificationsMessage) GoString() string { + return s.String() +} + +// SetStorage sets the Storage field's value. +func (s *ValidDBInstanceModificationsMessage) SetStorage(v []*ValidStorageOptions) *ValidDBInstanceModificationsMessage { + s.Storage = v + return s +} + +// Information about valid modifications that you can make to your DB instance. +// Contains the result of a successful call to the DescribeValidDBInstanceModifications +// action. +type ValidStorageOptions struct { + _ struct{} `type:"structure"` + + // The valid range of Provisioned IOPS to gibibytes of storage multiplier. 
For + // example, 3-10, which means that provisioned IOPS can be between 3 and 10 + // times storage. + IopsToStorageRatio []*DoubleRange `locationNameList:"DoubleRange" type:"list"` + + // The valid range of provisioned IOPS. For example, 1000-20000. + ProvisionedIops []*Range `locationNameList:"Range" type:"list"` + + // The valid range of storage in gibibytes. For example, 100 to 16384. + StorageSize []*Range `locationNameList:"Range" type:"list"` + + // The valid storage types for your DB instance. For example, gp2, io1. + StorageType *string `type:"string"` +} + +// String returns the string representation +func (s ValidStorageOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ValidStorageOptions) GoString() string { + return s.String() +} + +// SetIopsToStorageRatio sets the IopsToStorageRatio field's value. +func (s *ValidStorageOptions) SetIopsToStorageRatio(v []*DoubleRange) *ValidStorageOptions { + s.IopsToStorageRatio = v + return s +} + +// SetProvisionedIops sets the ProvisionedIops field's value. +func (s *ValidStorageOptions) SetProvisionedIops(v []*Range) *ValidStorageOptions { + s.ProvisionedIops = v + return s +} + +// SetStorageSize sets the StorageSize field's value. +func (s *ValidStorageOptions) SetStorageSize(v []*Range) *ValidStorageOptions { + s.StorageSize = v + return s +} + +// SetStorageType sets the StorageType field's value. +func (s *ValidStorageOptions) SetStorageType(v string) *ValidStorageOptions { + s.StorageType = &v + return s +} + +// This data type is used as a response element for queries on VPC security +// group membership. +type VpcSecurityGroupMembership struct { + _ struct{} `type:"structure"` + + // The status of the VPC security group. + Status *string `type:"string"` + + // The name of the VPC security group. + VpcSecurityGroupId *string `type:"string"` +} + +// String returns the string representation +func (s VpcSecurityGroupMembership) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VpcSecurityGroupMembership) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *VpcSecurityGroupMembership) SetStatus(v string) *VpcSecurityGroupMembership { + s.Status = &v + return s +} + +// SetVpcSecurityGroupId sets the VpcSecurityGroupId field's value. 
+func (s *VpcSecurityGroupMembership) SetVpcSecurityGroupId(v string) *VpcSecurityGroupMembership { + s.VpcSecurityGroupId = &v + return s +} + +const ( + // ApplyMethodImmediate is a ApplyMethod enum value + ApplyMethodImmediate = "immediate" + + // ApplyMethodPendingReboot is a ApplyMethod enum value + ApplyMethodPendingReboot = "pending-reboot" +) + +const ( + // SourceTypeDbInstance is a SourceType enum value + SourceTypeDbInstance = "db-instance" + + // SourceTypeDbParameterGroup is a SourceType enum value + SourceTypeDbParameterGroup = "db-parameter-group" + + // SourceTypeDbSecurityGroup is a SourceType enum value + SourceTypeDbSecurityGroup = "db-security-group" + + // SourceTypeDbSnapshot is a SourceType enum value + SourceTypeDbSnapshot = "db-snapshot" + + // SourceTypeDbCluster is a SourceType enum value + SourceTypeDbCluster = "db-cluster" + + // SourceTypeDbClusterSnapshot is a SourceType enum value + SourceTypeDbClusterSnapshot = "db-cluster-snapshot" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/neptune/doc.go b/vendor/github.com/aws/aws-sdk-go/service/neptune/doc.go new file mode 100644 index 00000000000..503af846651 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/neptune/doc.go @@ -0,0 +1,48 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package neptune provides the client and types for making API +// requests to Amazon Neptune. +// +// Amazon Neptune is a fast, reliable, fully-managed graph database service +// that makes it easy to build and run applications that work with highly connected +// datasets. The core of Amazon Neptune is a purpose-built, high-performance +// graph database engine optimized for storing billions of relationships and +// querying the graph with milliseconds latency. Amazon Neptune supports popular +// graph models Property Graph and W3C's RDF, and their respective query languages +// Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries +// that efficiently navigate highly connected datasets. Neptune powers graph +// use cases such as recommendation engines, fraud detection, knowledge graphs, +// drug discovery, and network security. +// +// This interface reference for Amazon Neptune contains documentation for a +// programming or command line interface you can use to manage Amazon Neptune. +// Note that Amazon Neptune is asynchronous, which means that some interfaces +// might require techniques such as polling or callback functions to determine +// when a command has been applied. In this reference, the parameter descriptions +// indicate whether a command is applied immediately, on the next instance reboot, +// or during the maintenance window. The reference structure is as follows, +// and we list following some related topics from the user guide. +// +// Amazon Neptune API Reference +// +// See https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31 for more information on this service. +// +// See neptune package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/neptune/ +// +// Using the Client +// +// To contact Amazon Neptune with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. 
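As an aside on the "Using the Client" guidance in this package doc: a minimal sketch of constructing a Neptune client and issuing a request, assuming default credential resolution; the region and the empty `DescribeDBInstancesInput` are only examples, not part of the generated code.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/neptune"
)

func main() {
	// Credentials and region resolve the same way as for any other
	// aws-sdk-go client; the region here is only illustrative.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))

	// neptune.New is the constructor generated in service.go below; the
	// returned client is safe for concurrent use.
	svc := neptune.New(sess)

	// DescribeDBInstancesInput is the same input type the waiters in this
	// package consume; an empty input lists all DB instances.
	out, err := svc.DescribeDBInstances(&neptune.DescribeDBInstancesInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```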
+// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon Neptune client Neptune for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/neptune/#New +package neptune diff --git a/vendor/github.com/aws/aws-sdk-go/service/neptune/errors.go b/vendor/github.com/aws/aws-sdk-go/service/neptune/errors.go new file mode 100644 index 00000000000..8c0c3f96794 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/neptune/errors.go @@ -0,0 +1,361 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package neptune + +const ( + + // ErrCodeAuthorizationNotFoundFault for service response error code + // "AuthorizationNotFound". + // + // Specified CIDRIP or EC2 security group is not authorized for the specified + // DB security group. + // + // Neptune may not also be authorized via IAM to perform necessary actions on + // your behalf. + ErrCodeAuthorizationNotFoundFault = "AuthorizationNotFound" + + // ErrCodeCertificateNotFoundFault for service response error code + // "CertificateNotFound". + // + // CertificateIdentifier does not refer to an existing certificate. + ErrCodeCertificateNotFoundFault = "CertificateNotFound" + + // ErrCodeDBClusterAlreadyExistsFault for service response error code + // "DBClusterAlreadyExistsFault". + // + // User already has a DB cluster with the given identifier. + ErrCodeDBClusterAlreadyExistsFault = "DBClusterAlreadyExistsFault" + + // ErrCodeDBClusterNotFoundFault for service response error code + // "DBClusterNotFoundFault". + // + // DBClusterIdentifier does not refer to an existing DB cluster. + ErrCodeDBClusterNotFoundFault = "DBClusterNotFoundFault" + + // ErrCodeDBClusterParameterGroupNotFoundFault for service response error code + // "DBClusterParameterGroupNotFound". + // + // DBClusterParameterGroupName does not refer to an existing DB Cluster parameter + // group. + ErrCodeDBClusterParameterGroupNotFoundFault = "DBClusterParameterGroupNotFound" + + // ErrCodeDBClusterQuotaExceededFault for service response error code + // "DBClusterQuotaExceededFault". + // + // User attempted to create a new DB cluster and the user has already reached + // the maximum allowed DB cluster quota. + ErrCodeDBClusterQuotaExceededFault = "DBClusterQuotaExceededFault" + + // ErrCodeDBClusterRoleAlreadyExistsFault for service response error code + // "DBClusterRoleAlreadyExists". + // + // The specified IAM role Amazon Resource Name (ARN) is already associated with + // the specified DB cluster. + ErrCodeDBClusterRoleAlreadyExistsFault = "DBClusterRoleAlreadyExists" + + // ErrCodeDBClusterRoleNotFoundFault for service response error code + // "DBClusterRoleNotFound". + // + // The specified IAM role Amazon Resource Name (ARN) is not associated with + // the specified DB cluster. + ErrCodeDBClusterRoleNotFoundFault = "DBClusterRoleNotFound" + + // ErrCodeDBClusterRoleQuotaExceededFault for service response error code + // "DBClusterRoleQuotaExceeded". + // + // You have exceeded the maximum number of IAM roles that can be associated + // with the specified DB cluster. + ErrCodeDBClusterRoleQuotaExceededFault = "DBClusterRoleQuotaExceeded" + + // ErrCodeDBClusterSnapshotAlreadyExistsFault for service response error code + // "DBClusterSnapshotAlreadyExistsFault". 
+ // + // User already has a DB cluster snapshot with the given identifier. + ErrCodeDBClusterSnapshotAlreadyExistsFault = "DBClusterSnapshotAlreadyExistsFault" + + // ErrCodeDBClusterSnapshotNotFoundFault for service response error code + // "DBClusterSnapshotNotFoundFault". + // + // DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. + ErrCodeDBClusterSnapshotNotFoundFault = "DBClusterSnapshotNotFoundFault" + + // ErrCodeDBInstanceAlreadyExistsFault for service response error code + // "DBInstanceAlreadyExists". + // + // User already has a DB instance with the given identifier. + ErrCodeDBInstanceAlreadyExistsFault = "DBInstanceAlreadyExists" + + // ErrCodeDBInstanceNotFoundFault for service response error code + // "DBInstanceNotFound". + // + // DBInstanceIdentifier does not refer to an existing DB instance. + ErrCodeDBInstanceNotFoundFault = "DBInstanceNotFound" + + // ErrCodeDBParameterGroupAlreadyExistsFault for service response error code + // "DBParameterGroupAlreadyExists". + // + // A DB parameter group with the same name exists. + ErrCodeDBParameterGroupAlreadyExistsFault = "DBParameterGroupAlreadyExists" + + // ErrCodeDBParameterGroupNotFoundFault for service response error code + // "DBParameterGroupNotFound". + // + // DBParameterGroupName does not refer to an existing DB parameter group. + ErrCodeDBParameterGroupNotFoundFault = "DBParameterGroupNotFound" + + // ErrCodeDBParameterGroupQuotaExceededFault for service response error code + // "DBParameterGroupQuotaExceeded". + // + // Request would result in user exceeding the allowed number of DB parameter + // groups. + ErrCodeDBParameterGroupQuotaExceededFault = "DBParameterGroupQuotaExceeded" + + // ErrCodeDBSecurityGroupNotFoundFault for service response error code + // "DBSecurityGroupNotFound". + // + // DBSecurityGroupName does not refer to an existing DB security group. + ErrCodeDBSecurityGroupNotFoundFault = "DBSecurityGroupNotFound" + + // ErrCodeDBSnapshotAlreadyExistsFault for service response error code + // "DBSnapshotAlreadyExists". + // + // DBSnapshotIdentifier is already used by an existing snapshot. + ErrCodeDBSnapshotAlreadyExistsFault = "DBSnapshotAlreadyExists" + + // ErrCodeDBSnapshotNotFoundFault for service response error code + // "DBSnapshotNotFound". + // + // DBSnapshotIdentifier does not refer to an existing DB snapshot. + ErrCodeDBSnapshotNotFoundFault = "DBSnapshotNotFound" + + // ErrCodeDBSubnetGroupAlreadyExistsFault for service response error code + // "DBSubnetGroupAlreadyExists". + // + // DBSubnetGroupName is already used by an existing DB subnet group. + ErrCodeDBSubnetGroupAlreadyExistsFault = "DBSubnetGroupAlreadyExists" + + // ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs for service response error code + // "DBSubnetGroupDoesNotCoverEnoughAZs". + // + // Subnets in the DB subnet group should cover at least two Availability Zones + // unless there is only one Availability Zone. + ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs = "DBSubnetGroupDoesNotCoverEnoughAZs" + + // ErrCodeDBSubnetGroupNotFoundFault for service response error code + // "DBSubnetGroupNotFoundFault". + // + // DBSubnetGroupName does not refer to an existing DB subnet group. + ErrCodeDBSubnetGroupNotFoundFault = "DBSubnetGroupNotFoundFault" + + // ErrCodeDBSubnetGroupQuotaExceededFault for service response error code + // "DBSubnetGroupQuotaExceeded". + // + // Request would result in user exceeding the allowed number of DB subnet groups. 
+ ErrCodeDBSubnetGroupQuotaExceededFault = "DBSubnetGroupQuotaExceeded" + + // ErrCodeDBSubnetQuotaExceededFault for service response error code + // "DBSubnetQuotaExceededFault". + // + // Request would result in user exceeding the allowed number of subnets in a + // DB subnet groups. + ErrCodeDBSubnetQuotaExceededFault = "DBSubnetQuotaExceededFault" + + // ErrCodeDBUpgradeDependencyFailureFault for service response error code + // "DBUpgradeDependencyFailure". + // + // The DB upgrade failed because a resource the DB depends on could not be modified. + ErrCodeDBUpgradeDependencyFailureFault = "DBUpgradeDependencyFailure" + + // ErrCodeDomainNotFoundFault for service response error code + // "DomainNotFoundFault". + // + // Domain does not refer to an existing Active Directory Domain. + ErrCodeDomainNotFoundFault = "DomainNotFoundFault" + + // ErrCodeEventSubscriptionQuotaExceededFault for service response error code + // "EventSubscriptionQuotaExceeded". + ErrCodeEventSubscriptionQuotaExceededFault = "EventSubscriptionQuotaExceeded" + + // ErrCodeInstanceQuotaExceededFault for service response error code + // "InstanceQuotaExceeded". + // + // Request would result in user exceeding the allowed number of DB instances. + ErrCodeInstanceQuotaExceededFault = "InstanceQuotaExceeded" + + // ErrCodeInsufficientDBClusterCapacityFault for service response error code + // "InsufficientDBClusterCapacityFault". + // + // The DB cluster does not have enough capacity for the current operation. + ErrCodeInsufficientDBClusterCapacityFault = "InsufficientDBClusterCapacityFault" + + // ErrCodeInsufficientDBInstanceCapacityFault for service response error code + // "InsufficientDBInstanceCapacity". + // + // Specified DB instance class is not available in the specified Availability + // Zone. + ErrCodeInsufficientDBInstanceCapacityFault = "InsufficientDBInstanceCapacity" + + // ErrCodeInsufficientStorageClusterCapacityFault for service response error code + // "InsufficientStorageClusterCapacity". + // + // There is insufficient storage available for the current action. You may be + // able to resolve this error by updating your subnet group to use different + // Availability Zones that have more storage available. + ErrCodeInsufficientStorageClusterCapacityFault = "InsufficientStorageClusterCapacity" + + // ErrCodeInvalidDBClusterSnapshotStateFault for service response error code + // "InvalidDBClusterSnapshotStateFault". + // + // The supplied value is not a valid DB cluster snapshot state. + ErrCodeInvalidDBClusterSnapshotStateFault = "InvalidDBClusterSnapshotStateFault" + + // ErrCodeInvalidDBClusterStateFault for service response error code + // "InvalidDBClusterStateFault". + // + // The DB cluster is not in a valid state. + ErrCodeInvalidDBClusterStateFault = "InvalidDBClusterStateFault" + + // ErrCodeInvalidDBInstanceStateFault for service response error code + // "InvalidDBInstanceState". + // + // The specified DB instance is not in the available state. + ErrCodeInvalidDBInstanceStateFault = "InvalidDBInstanceState" + + // ErrCodeInvalidDBParameterGroupStateFault for service response error code + // "InvalidDBParameterGroupState". + // + // The DB parameter group is in use or is in an invalid state. If you are attempting + // to delete the parameter group, you cannot delete it when the parameter group + // is in this state. 
+ ErrCodeInvalidDBParameterGroupStateFault = "InvalidDBParameterGroupState" + + // ErrCodeInvalidDBSecurityGroupStateFault for service response error code + // "InvalidDBSecurityGroupState". + // + // The state of the DB security group does not allow deletion. + ErrCodeInvalidDBSecurityGroupStateFault = "InvalidDBSecurityGroupState" + + // ErrCodeInvalidDBSnapshotStateFault for service response error code + // "InvalidDBSnapshotState". + // + // The state of the DB snapshot does not allow deletion. + ErrCodeInvalidDBSnapshotStateFault = "InvalidDBSnapshotState" + + // ErrCodeInvalidDBSubnetGroupStateFault for service response error code + // "InvalidDBSubnetGroupStateFault". + // + // The DB subnet group cannot be deleted because it is in use. + ErrCodeInvalidDBSubnetGroupStateFault = "InvalidDBSubnetGroupStateFault" + + // ErrCodeInvalidDBSubnetStateFault for service response error code + // "InvalidDBSubnetStateFault". + // + // The DB subnet is not in the available state. + ErrCodeInvalidDBSubnetStateFault = "InvalidDBSubnetStateFault" + + // ErrCodeInvalidEventSubscriptionStateFault for service response error code + // "InvalidEventSubscriptionState". + ErrCodeInvalidEventSubscriptionStateFault = "InvalidEventSubscriptionState" + + // ErrCodeInvalidRestoreFault for service response error code + // "InvalidRestoreFault". + // + // Cannot restore from vpc backup to non-vpc DB instance. + ErrCodeInvalidRestoreFault = "InvalidRestoreFault" + + // ErrCodeInvalidSubnet for service response error code + // "InvalidSubnet". + // + // The requested subnet is invalid, or multiple subnets were requested that + // are not all in a common VPC. + ErrCodeInvalidSubnet = "InvalidSubnet" + + // ErrCodeInvalidVPCNetworkStateFault for service response error code + // "InvalidVPCNetworkStateFault". + // + // DB subnet group does not cover all Availability Zones after it is created + // because users' change. + ErrCodeInvalidVPCNetworkStateFault = "InvalidVPCNetworkStateFault" + + // ErrCodeKMSKeyNotAccessibleFault for service response error code + // "KMSKeyNotAccessibleFault". + // + // Error accessing KMS key. + ErrCodeKMSKeyNotAccessibleFault = "KMSKeyNotAccessibleFault" + + // ErrCodeOptionGroupNotFoundFault for service response error code + // "OptionGroupNotFoundFault". + ErrCodeOptionGroupNotFoundFault = "OptionGroupNotFoundFault" + + // ErrCodeProvisionedIopsNotAvailableInAZFault for service response error code + // "ProvisionedIopsNotAvailableInAZFault". + // + // Provisioned IOPS not available in the specified Availability Zone. + ErrCodeProvisionedIopsNotAvailableInAZFault = "ProvisionedIopsNotAvailableInAZFault" + + // ErrCodeResourceNotFoundFault for service response error code + // "ResourceNotFoundFault". + // + // The specified resource ID was not found. + ErrCodeResourceNotFoundFault = "ResourceNotFoundFault" + + // ErrCodeSNSInvalidTopicFault for service response error code + // "SNSInvalidTopic". + ErrCodeSNSInvalidTopicFault = "SNSInvalidTopic" + + // ErrCodeSNSNoAuthorizationFault for service response error code + // "SNSNoAuthorization". + ErrCodeSNSNoAuthorizationFault = "SNSNoAuthorization" + + // ErrCodeSNSTopicArnNotFoundFault for service response error code + // "SNSTopicArnNotFound". + ErrCodeSNSTopicArnNotFoundFault = "SNSTopicArnNotFound" + + // ErrCodeSharedSnapshotQuotaExceededFault for service response error code + // "SharedSnapshotQuotaExceeded". + // + // You have exceeded the maximum number of accounts that you can share a manual + // DB snapshot with. 
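To connect the `ErrCode` constants in this block to actual error handling: a sketch, assuming an already-initialized client, that type-asserts the returned error to `awserr.Error` and branches on `Code()`, as the SDK's doc comments suggest. The helper name is hypothetical.

```go
import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/neptune"
)

// describeInstanceIfExists is a hypothetical helper: it returns nil (and no
// error) when the identifier does not name an existing DB instance, instead
// of surfacing the "DBInstanceNotFound" service error.
func describeInstanceIfExists(svc *neptune.Neptune, id string) (*neptune.DescribeDBInstancesOutput, error) {
	out, err := svc.DescribeDBInstances(&neptune.DescribeDBInstancesInput{
		DBInstanceIdentifier: aws.String(id),
	})
	if err != nil {
		// Service errors satisfy awserr.Error; compare Code() against the
		// ErrCode constants defined in this file.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == neptune.ErrCodeDBInstanceNotFoundFault {
			return nil, nil
		}
		return nil, fmt.Errorf("describing Neptune DB instance %q: %s", id, err)
	}
	return out, nil
}
```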
+ ErrCodeSharedSnapshotQuotaExceededFault = "SharedSnapshotQuotaExceeded" + + // ErrCodeSnapshotQuotaExceededFault for service response error code + // "SnapshotQuotaExceeded". + // + // Request would result in user exceeding the allowed number of DB snapshots. + ErrCodeSnapshotQuotaExceededFault = "SnapshotQuotaExceeded" + + // ErrCodeSourceNotFoundFault for service response error code + // "SourceNotFound". + ErrCodeSourceNotFoundFault = "SourceNotFound" + + // ErrCodeStorageQuotaExceededFault for service response error code + // "StorageQuotaExceeded". + // + // Request would result in user exceeding the allowed amount of storage available + // across all DB instances. + ErrCodeStorageQuotaExceededFault = "StorageQuotaExceeded" + + // ErrCodeStorageTypeNotSupportedFault for service response error code + // "StorageTypeNotSupported". + // + // StorageType specified cannot be associated with the DB Instance. + ErrCodeStorageTypeNotSupportedFault = "StorageTypeNotSupported" + + // ErrCodeSubnetAlreadyInUse for service response error code + // "SubnetAlreadyInUse". + // + // The DB subnet is already in use in the Availability Zone. + ErrCodeSubnetAlreadyInUse = "SubnetAlreadyInUse" + + // ErrCodeSubscriptionAlreadyExistFault for service response error code + // "SubscriptionAlreadyExist". + ErrCodeSubscriptionAlreadyExistFault = "SubscriptionAlreadyExist" + + // ErrCodeSubscriptionCategoryNotFoundFault for service response error code + // "SubscriptionCategoryNotFound". + ErrCodeSubscriptionCategoryNotFoundFault = "SubscriptionCategoryNotFound" + + // ErrCodeSubscriptionNotFoundFault for service response error code + // "SubscriptionNotFound". + ErrCodeSubscriptionNotFoundFault = "SubscriptionNotFound" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/neptune/service.go b/vendor/github.com/aws/aws-sdk-go/service/neptune/service.go new file mode 100644 index 00000000000..3ddc5e5fba7 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/neptune/service.go @@ -0,0 +1,98 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package neptune + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/query" +) + +// Neptune provides the API operation methods for making requests to +// Amazon Neptune. See this package's package overview docs +// for details on the service. +// +// Neptune methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type Neptune struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "rds" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Neptune" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the Neptune client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a Neptune client from just a session. 
+// svc := neptune.New(mySession) +// +// // Create a Neptune client with additional configuration +// svc := neptune.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *Neptune { + c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "rds" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *Neptune { + svc := &Neptune{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2014-10-31", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(query.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(query.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(query.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(query.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a Neptune operation and runs any +// custom request initialization. +func (c *Neptune) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/neptune/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/neptune/waiters.go new file mode 100644 index 00000000000..acc80183c97 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/neptune/waiters.go @@ -0,0 +1,152 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package neptune + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" +) + +// WaitUntilDBInstanceAvailable uses the Amazon Neptune API operation +// DescribeDBInstances to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *Neptune) WaitUntilDBInstanceAvailable(input *DescribeDBInstancesInput) error { + return c.WaitUntilDBInstanceAvailableWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilDBInstanceAvailableWithContext is an extended version of WaitUntilDBInstanceAvailable. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
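A usage sketch for the waiter documented immediately above: it polls DescribeDBInstances on the waiter's schedule until the instance reports `available` or a failure state is seen. The helper name, identifier, and timeout are placeholders; it assumes the imports from the earlier sketch plus `context` and `time`.

```go
// waitAvailable blocks until the named DB instance reports "available",
// using the caller's timeout for cancellation. Assumes the neptune/aws
// imports shown earlier plus "context" and "time".
func waitAvailable(svc *neptune.Neptune, id string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	// Polls DescribeDBInstances per the waiter's acceptors (30s delay, up to
	// 60 attempts) until "available" or a terminal failure status is seen.
	return svc.WaitUntilDBInstanceAvailableWithContext(ctx, &neptune.DescribeDBInstancesInput{
		DBInstanceIdentifier: aws.String(id),
	})
}
```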
+func (c *Neptune) WaitUntilDBInstanceAvailableWithContext(ctx aws.Context, input *DescribeDBInstancesInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilDBInstanceAvailable", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(30 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "available", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "deleted", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "deleting", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "failed", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "incompatible-restore", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "incompatible-parameters", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeDBInstancesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeDBInstancesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} + +// WaitUntilDBInstanceDeleted uses the Amazon Neptune API operation +// DescribeDBInstances to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *Neptune) WaitUntilDBInstanceDeleted(input *DescribeDBInstancesInput) error { + return c.WaitUntilDBInstanceDeletedWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilDBInstanceDeletedWithContext is an extended version of WaitUntilDBInstanceDeleted. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
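For the deletion waiter documented above, the variadic `request.WaiterOption` parameters can override the default 60 × 30 s polling schedule. A sketch under the assumption that a tighter window is wanted; the helper and its values are illustrative only.

```go
// waitDeleted is a hypothetical helper showing the variadic WaiterOption
// parameters; assumes the imports from the earlier sketches plus
// "github.com/aws/aws-sdk-go/aws/request".
func waitDeleted(ctx aws.Context, svc *neptune.Neptune, id string) error {
	return svc.WaitUntilDBInstanceDeletedWithContext(ctx,
		&neptune.DescribeDBInstancesInput{DBInstanceIdentifier: aws.String(id)},
		request.WithWaiterMaxAttempts(20),                                    // default: 60 attempts
		request.WithWaiterDelay(request.ConstantWaiterDelay(15*time.Second)), // default: 30s delay
	)
}
```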
+func (c *Neptune) WaitUntilDBInstanceDeletedWithContext(ctx aws.Context, input *DescribeDBInstancesInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilDBInstanceDeleted", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(30 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "deleted", + }, + { + State: request.SuccessWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "DBInstanceNotFound", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "creating", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "modifying", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "rebooting", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathAnyWaiterMatch, Argument: "DBInstances[].DBInstanceStatus", + Expected: "resetting-master-credentials", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeDBInstancesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeDBInstancesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/opsworks/api.go b/vendor/github.com/aws/aws-sdk-go/service/opsworks/api.go index f266280ef0d..8ca5b1407de 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/opsworks/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/opsworks/api.go @@ -16,8 +16,8 @@ const opAssignInstance = "AssignInstance" // AssignInstanceRequest generates a "aws/request.Request" representing the // client's request for the AssignInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -112,8 +112,8 @@ const opAssignVolume = "AssignVolume" // AssignVolumeRequest generates a "aws/request.Request" representing the // client's request for the AssignVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -205,8 +205,8 @@ const opAssociateElasticIp = "AssociateElasticIp" // AssociateElasticIpRequest generates a "aws/request.Request" representing the // client's request for the AssociateElasticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -296,8 +296,8 @@ const opAttachElasticLoadBalancer = "AttachElasticLoadBalancer" // AttachElasticLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the AttachElasticLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -392,8 +392,8 @@ const opCloneStack = "CloneStack" // CloneStackRequest generates a "aws/request.Request" representing the // client's request for the CloneStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -437,8 +437,8 @@ func (c *OpsWorks) CloneStackRequest(input *CloneStackInput) (req *request.Reque // By default, all parameters are set to the values used by the parent stack. // // Required Permissions: To use this action, an IAM user must have an attached -// policy that explicitly grants permissions. For more information on user permissions, -// see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). +// policy that explicitly grants permissions. For more information about user +// permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -480,8 +480,8 @@ const opCreateApp = "CreateApp" // CreateAppRequest generates a "aws/request.Request" representing the // client's request for the CreateApp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -568,8 +568,8 @@ const opCreateDeployment = "CreateDeployment" // CreateDeploymentRequest generates a "aws/request.Request" representing the // client's request for the CreateDeployment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -657,8 +657,8 @@ const opCreateInstance = "CreateInstance" // CreateInstanceRequest generates a "aws/request.Request" representing the // client's request for the CreateInstance operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -745,8 +745,8 @@ const opCreateLayer = "CreateLayer" // CreateLayerRequest generates a "aws/request.Request" representing the // client's request for the CreateLayer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -839,8 +839,8 @@ const opCreateStack = "CreateStack" // CreateStackRequest generates a "aws/request.Request" representing the // client's request for the CreateStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -882,8 +882,8 @@ func (c *OpsWorks) CreateStackRequest(input *CreateStackInput) (req *request.Req // Creates a new stack. For more information, see Create a New Stack (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-edit.html). // // Required Permissions: To use this action, an IAM user must have an attached -// policy that explicitly grants permissions. For more information on user permissions, -// see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). +// policy that explicitly grants permissions. For more information about user +// permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -922,8 +922,8 @@ const opCreateUserProfile = "CreateUserProfile" // CreateUserProfileRequest generates a "aws/request.Request" representing the // client's request for the CreateUserProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -965,8 +965,8 @@ func (c *OpsWorks) CreateUserProfileRequest(input *CreateUserProfileInput) (req // Creates a new user profile. // // Required Permissions: To use this action, an IAM user must have an attached -// policy that explicitly grants permissions. For more information on user permissions, -// see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). +// policy that explicitly grants permissions. 
For more information about user +// permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1005,8 +1005,8 @@ const opDeleteApp = "DeleteApp" // DeleteAppRequest generates a "aws/request.Request" representing the // client's request for the DeleteApp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1094,8 +1094,8 @@ const opDeleteInstance = "DeleteInstance" // DeleteInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeleteInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1186,8 +1186,8 @@ const opDeleteLayer = "DeleteLayer" // DeleteLayerRequest generates a "aws/request.Request" representing the // client's request for the DeleteLayer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1277,8 +1277,8 @@ const opDeleteStack = "DeleteStack" // DeleteStackRequest generates a "aws/request.Request" representing the // client's request for the DeleteStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1368,8 +1368,8 @@ const opDeleteUserProfile = "DeleteUserProfile" // DeleteUserProfileRequest generates a "aws/request.Request" representing the // client's request for the DeleteUserProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1413,8 +1413,8 @@ func (c *OpsWorks) DeleteUserProfileRequest(input *DeleteUserProfileInput) (req // Deletes a user profile. // // Required Permissions: To use this action, an IAM user must have an attached -// policy that explicitly grants permissions. 
For more information on user permissions, -// see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). +// policy that explicitly grants permissions. For more information about user +// permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1456,8 +1456,8 @@ const opDeregisterEcsCluster = "DeregisterEcsCluster" // DeregisterEcsClusterRequest generates a "aws/request.Request" representing the // client's request for the DeregisterEcsCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1546,8 +1546,8 @@ const opDeregisterElasticIp = "DeregisterElasticIp" // DeregisterElasticIpRequest generates a "aws/request.Request" representing the // client's request for the DeregisterElasticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1636,8 +1636,8 @@ const opDeregisterInstance = "DeregisterInstance" // DeregisterInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeregisterInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1679,8 +1679,8 @@ func (c *OpsWorks) DeregisterInstanceRequest(input *DeregisterInstanceInput) (re // DeregisterInstance API operation for AWS OpsWorks. // // Deregister a registered Amazon EC2 or on-premises instance. This action removes -// the instance from the stack and returns it to your control. This action can -// not be used with instances that were created with AWS OpsWorks Stacks. +// the instance from the stack and returns it to your control. This action cannot +// be used with instances that were created with AWS OpsWorks Stacks. // // Required Permissions: To use this action, an IAM user must have a Manage // permissions level for the stack or an attached policy that explicitly grants @@ -1727,8 +1727,8 @@ const opDeregisterRdsDbInstance = "DeregisterRdsDbInstance" // DeregisterRdsDbInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeregisterRdsDbInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1816,8 +1816,8 @@ const opDeregisterVolume = "DeregisterVolume" // DeregisterVolumeRequest generates a "aws/request.Request" representing the // client's request for the DeregisterVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1906,8 +1906,8 @@ const opDescribeAgentVersions = "DescribeAgentVersions" // DescribeAgentVersionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAgentVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1990,8 +1990,8 @@ const opDescribeApps = "DescribeApps" // DescribeAppsRequest generates a "aws/request.Request" representing the // client's request for the DescribeApps operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2036,7 +2036,7 @@ func (c *OpsWorks) DescribeAppsRequest(input *DescribeAppsInput) (req *request.R // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2079,8 +2079,8 @@ const opDescribeCommands = "DescribeCommands" // DescribeCommandsRequest generates a "aws/request.Request" representing the // client's request for the DescribeCommands operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2125,7 +2125,7 @@ func (c *OpsWorks) DescribeCommandsRequest(input *DescribeCommandsInput) (req *r // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. 
For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2168,8 +2168,8 @@ const opDescribeDeployments = "DescribeDeployments" // DescribeDeploymentsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDeployments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2214,7 +2214,7 @@ func (c *OpsWorks) DescribeDeploymentsRequest(input *DescribeDeploymentsInput) ( // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2257,8 +2257,8 @@ const opDescribeEcsClusters = "DescribeEcsClusters" // DescribeEcsClustersRequest generates a "aws/request.Request" representing the // client's request for the DescribeEcsClusters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2310,7 +2310,7 @@ func (c *OpsWorks) DescribeEcsClustersRequest(input *DescribeEcsClustersInput) ( // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack or an attached policy that explicitly -// grants permission. For more information on user permissions, see Managing +// grants permission. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // This call accepts only one resource-identifying parameter. @@ -2405,8 +2405,8 @@ const opDescribeElasticIps = "DescribeElasticIps" // DescribeElasticIpsRequest generates a "aws/request.Request" representing the // client's request for the DescribeElasticIps operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2451,7 +2451,7 @@ func (c *OpsWorks) DescribeElasticIpsRequest(input *DescribeElasticIpsInput) (re // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2494,8 +2494,8 @@ const opDescribeElasticLoadBalancers = "DescribeElasticLoadBalancers" // DescribeElasticLoadBalancersRequest generates a "aws/request.Request" representing the // client's request for the DescribeElasticLoadBalancers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2540,7 +2540,7 @@ func (c *OpsWorks) DescribeElasticLoadBalancersRequest(input *DescribeElasticLoa // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2583,8 +2583,8 @@ const opDescribeInstances = "DescribeInstances" // DescribeInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2629,7 +2629,7 @@ func (c *OpsWorks) DescribeInstancesRequest(input *DescribeInstancesInput) (req // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2672,8 +2672,8 @@ const opDescribeLayers = "DescribeLayers" // DescribeLayersRequest generates a "aws/request.Request" representing the // client's request for the DescribeLayers operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2718,7 +2718,7 @@ func (c *OpsWorks) DescribeLayersRequest(input *DescribeLayersInput) (req *reque // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2761,8 +2761,8 @@ const opDescribeLoadBasedAutoScaling = "DescribeLoadBasedAutoScaling" // DescribeLoadBasedAutoScalingRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoadBasedAutoScaling operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2807,7 +2807,7 @@ func (c *OpsWorks) DescribeLoadBasedAutoScalingRequest(input *DescribeLoadBasedA // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2850,8 +2850,8 @@ const opDescribeMyUserProfile = "DescribeMyUserProfile" // DescribeMyUserProfileRequest generates a "aws/request.Request" representing the // client's request for the DescribeMyUserProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2894,7 +2894,7 @@ func (c *OpsWorks) DescribeMyUserProfileRequest(input *DescribeMyUserProfileInpu // // Required Permissions: To use this action, an IAM user must have self-management // enabled or an attached policy that explicitly grants permissions. For more -// information on user permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). +// information about user permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2928,8 +2928,8 @@ const opDescribeOperatingSystems = "DescribeOperatingSystems" // DescribeOperatingSystemsRequest generates a "aws/request.Request" representing the // client's request for the DescribeOperatingSystems operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3002,8 +3002,8 @@ const opDescribePermissions = "DescribePermissions" // DescribePermissionsRequest generates a "aws/request.Request" representing the // client's request for the DescribePermissions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3089,8 +3089,8 @@ const opDescribeRaidArrays = "DescribeRaidArrays" // DescribeRaidArraysRequest generates a "aws/request.Request" representing the // client's request for the DescribeRaidArrays operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3135,7 +3135,7 @@ func (c *OpsWorks) DescribeRaidArraysRequest(input *DescribeRaidArraysInput) (re // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3178,8 +3178,8 @@ const opDescribeRdsDbInstances = "DescribeRdsDbInstances" // DescribeRdsDbInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeRdsDbInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3222,7 +3222,7 @@ func (c *OpsWorks) DescribeRdsDbInstancesRequest(input *DescribeRdsDbInstancesIn // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. 
For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // This call accepts only one resource-identifying parameter. @@ -3267,8 +3267,8 @@ const opDescribeServiceErrors = "DescribeServiceErrors" // DescribeServiceErrorsRequest generates a "aws/request.Request" representing the // client's request for the DescribeServiceErrors operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3311,7 +3311,7 @@ func (c *OpsWorks) DescribeServiceErrorsRequest(input *DescribeServiceErrorsInpu // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // This call accepts only one resource-identifying parameter. @@ -3356,8 +3356,8 @@ const opDescribeStackProvisioningParameters = "DescribeStackProvisioningParamete // DescribeStackProvisioningParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeStackProvisioningParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3400,7 +3400,7 @@ func (c *OpsWorks) DescribeStackProvisioningParametersRequest(input *DescribeSta // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3443,8 +3443,8 @@ const opDescribeStackSummary = "DescribeStackSummary" // DescribeStackSummaryRequest generates a "aws/request.Request" representing the // client's request for the DescribeStackSummary operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3488,7 +3488,7 @@ func (c *OpsWorks) DescribeStackSummaryRequest(input *DescribeStackSummaryInput) // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3531,8 +3531,8 @@ const opDescribeStacks = "DescribeStacks" // DescribeStacksRequest generates a "aws/request.Request" representing the // client's request for the DescribeStacks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3575,7 +3575,7 @@ func (c *OpsWorks) DescribeStacksRequest(input *DescribeStacksInput) (req *reque // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3618,8 +3618,8 @@ const opDescribeTimeBasedAutoScaling = "DescribeTimeBasedAutoScaling" // DescribeTimeBasedAutoScalingRequest generates a "aws/request.Request" representing the // client's request for the DescribeTimeBasedAutoScaling operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3664,7 +3664,7 @@ func (c *OpsWorks) DescribeTimeBasedAutoScalingRequest(input *DescribeTimeBasedA // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3707,8 +3707,8 @@ const opDescribeUserProfiles = "DescribeUserProfiles" // DescribeUserProfilesRequest generates a "aws/request.Request" representing the // client's request for the DescribeUserProfiles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3750,8 +3750,8 @@ func (c *OpsWorks) DescribeUserProfilesRequest(input *DescribeUserProfilesInput) // Describe specified users. // // Required Permissions: To use this action, an IAM user must have an attached -// policy that explicitly grants permissions. For more information on user permissions, -// see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). +// policy that explicitly grants permissions. For more information about user +// permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3793,8 +3793,8 @@ const opDescribeVolumes = "DescribeVolumes" // DescribeVolumesRequest generates a "aws/request.Request" representing the // client's request for the DescribeVolumes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3839,7 +3839,7 @@ func (c *OpsWorks) DescribeVolumesRequest(input *DescribeVolumesInput) (req *req // // Required Permissions: To use this action, an IAM user must have a Show, Deploy, // or Manage permissions level for the stack, or an attached policy that explicitly -// grants permissions. For more information on user permissions, see Managing +// grants permissions. For more information about user permissions, see Managing // User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3882,8 +3882,8 @@ const opDetachElasticLoadBalancer = "DetachElasticLoadBalancer" // DetachElasticLoadBalancerRequest generates a "aws/request.Request" representing the // client's request for the DetachElasticLoadBalancer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3968,8 +3968,8 @@ const opDisassociateElasticIp = "DisassociateElasticIp" // DisassociateElasticIpRequest generates a "aws/request.Request" representing the // client's request for the DisassociateElasticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -4059,8 +4059,8 @@ const opGetHostnameSuggestion = "GetHostnameSuggestion" // GetHostnameSuggestionRequest generates a "aws/request.Request" representing the // client's request for the GetHostnameSuggestion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4147,8 +4147,8 @@ const opGrantAccess = "GrantAccess" // GrantAccessRequest generates a "aws/request.Request" representing the // client's request for the GrantAccess operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4231,8 +4231,8 @@ const opListTags = "ListTags" // ListTagsRequest generates a "aws/request.Request" representing the // client's request for the ListTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4313,8 +4313,8 @@ const opRebootInstance = "RebootInstance" // RebootInstanceRequest generates a "aws/request.Request" representing the // client's request for the RebootInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4403,8 +4403,8 @@ const opRegisterEcsCluster = "RegisterEcsCluster" // RegisterEcsClusterRequest generates a "aws/request.Request" representing the // client's request for the RegisterEcsCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4492,8 +4492,8 @@ const opRegisterElasticIp = "RegisterElasticIp" // RegisterElasticIpRequest generates a "aws/request.Request" representing the // client's request for the RegisterElasticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -4582,8 +4582,8 @@ const opRegisterInstance = "RegisterInstance" // RegisterInstanceRequest generates a "aws/request.Request" representing the // client's request for the RegisterInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4683,8 +4683,8 @@ const opRegisterRdsDbInstance = "RegisterRdsDbInstance" // RegisterRdsDbInstanceRequest generates a "aws/request.Request" representing the // client's request for the RegisterRdsDbInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4772,8 +4772,8 @@ const opRegisterVolume = "RegisterVolume" // RegisterVolumeRequest generates a "aws/request.Request" representing the // client's request for the RegisterVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4862,8 +4862,8 @@ const opSetLoadBasedAutoScaling = "SetLoadBasedAutoScaling" // SetLoadBasedAutoScalingRequest generates a "aws/request.Request" representing the // client's request for the SetLoadBasedAutoScaling operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4958,8 +4958,8 @@ const opSetPermission = "SetPermission" // SetPermissionRequest generates a "aws/request.Request" representing the // client's request for the SetPermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5048,8 +5048,8 @@ const opSetTimeBasedAutoScaling = "SetTimeBasedAutoScaling" // SetTimeBasedAutoScalingRequest generates a "aws/request.Request" representing the // client's request for the SetTimeBasedAutoScaling operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -5139,8 +5139,8 @@ const opStartInstance = "StartInstance" // StartInstanceRequest generates a "aws/request.Request" representing the // client's request for the StartInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5229,8 +5229,8 @@ const opStartStack = "StartStack" // StartStackRequest generates a "aws/request.Request" representing the // client's request for the StartStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5318,8 +5318,8 @@ const opStopInstance = "StopInstance" // StopInstanceRequest generates a "aws/request.Request" representing the // client's request for the StopInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5410,8 +5410,8 @@ const opStopStack = "StopStack" // StopStackRequest generates a "aws/request.Request" representing the // client's request for the StopStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5499,8 +5499,8 @@ const opTagResource = "TagResource" // TagResourceRequest generates a "aws/request.Request" representing the // client's request for the TagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5585,8 +5585,8 @@ const opUnassignInstance = "UnassignInstance" // UnassignInstanceRequest generates a "aws/request.Request" representing the // client's request for the UnassignInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5627,14 +5627,14 @@ func (c *OpsWorks) UnassignInstanceRequest(input *UnassignInstanceInput) (req *r // UnassignInstance API operation for AWS OpsWorks. // -// Unassigns a registered instance from all of it's layers. The instance remains -// in the stack as an unassigned instance and can be assigned to another layer, -// as needed. You cannot use this action with instances that were created with -// AWS OpsWorks Stacks. +// Unassigns a registered instance from all layers that are using the instance. +// The instance remains in the stack as an unassigned instance, and can be assigned +// to another layer as needed. You cannot use this action with instances that +// were created with AWS OpsWorks Stacks. // // Required Permissions: To use this action, an IAM user must have a Manage // permissions level for the stack or an attached policy that explicitly grants -// permissions. For more information on user permissions, see Managing User +// permissions. For more information about user permissions, see Managing User // Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -5677,8 +5677,8 @@ const opUnassignVolume = "UnassignVolume" // UnassignVolumeRequest generates a "aws/request.Request" representing the // client's request for the UnassignVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5767,8 +5767,8 @@ const opUntagResource = "UntagResource" // UntagResourceRequest generates a "aws/request.Request" representing the // client's request for the UntagResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5851,8 +5851,8 @@ const opUpdateApp = "UpdateApp" // UpdateAppRequest generates a "aws/request.Request" representing the // client's request for the UpdateApp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5940,8 +5940,8 @@ const opUpdateElasticIp = "UpdateElasticIp" // UpdateElasticIpRequest generates a "aws/request.Request" representing the // client's request for the UpdateElasticIp operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -6030,8 +6030,8 @@ const opUpdateInstance = "UpdateInstance" // UpdateInstanceRequest generates a "aws/request.Request" representing the // client's request for the UpdateInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6119,8 +6119,8 @@ const opUpdateLayer = "UpdateLayer" // UpdateLayerRequest generates a "aws/request.Request" representing the // client's request for the UpdateLayer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6208,8 +6208,8 @@ const opUpdateMyUserProfile = "UpdateMyUserProfile" // UpdateMyUserProfileRequest generates a "aws/request.Request" representing the // client's request for the UpdateMyUserProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6254,7 +6254,7 @@ func (c *OpsWorks) UpdateMyUserProfileRequest(input *UpdateMyUserProfileInput) ( // // Required Permissions: To use this action, an IAM user must have self-management // enabled or an attached policy that explicitly grants permissions. For more -// information on user permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). +// information about user permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6293,8 +6293,8 @@ const opUpdateRdsDbInstance = "UpdateRdsDbInstance" // UpdateRdsDbInstanceRequest generates a "aws/request.Request" representing the // client's request for the UpdateRdsDbInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6382,8 +6382,8 @@ const opUpdateStack = "UpdateStack" // UpdateStackRequest generates a "aws/request.Request" representing the // client's request for the UpdateStack operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6471,8 +6471,8 @@ const opUpdateUserProfile = "UpdateUserProfile" // UpdateUserProfileRequest generates a "aws/request.Request" representing the // client's request for the UpdateUserProfile operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6516,8 +6516,8 @@ func (c *OpsWorks) UpdateUserProfileRequest(input *UpdateUserProfileInput) (req // Updates a specified user profile. // // Required Permissions: To use this action, an IAM user must have an attached -// policy that explicitly grants permissions. For more information on user permissions, -// see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). +// policy that explicitly grants permissions. For more information about user +// permissions, see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6559,8 +6559,8 @@ const opUpdateVolume = "UpdateVolume" // UpdateVolumeRequest generates a "aws/request.Request" representing the // client's request for the UpdateVolume operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7027,8 +7027,8 @@ type AttachElasticLoadBalancerInput struct { // ElasticLoadBalancerName is a required field ElasticLoadBalancerName *string `type:"string" required:"true"` - // The ID of the layer that the Elastic Load Balancing instance is to be attached - // to. + // The ID of the layer to which the Elastic Load Balancing instance is to be + // attached. // // LayerId is a required field LayerId *string `type:"string" required:"true"` @@ -7343,8 +7343,8 @@ type CloneStackInput struct { // // "{\"key1\": \"value1\", \"key2\": \"value2\",...}" // - // For more information on custom JSON, see Use Custom JSON to Modify the Stack - // Configuration Attributes (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html) + // For more information about custom JSON, see Use Custom JSON to Modify the + // Stack Configuration Attributes (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html) CustomJson *string `type:"string"` // The cloned stack's default Availability Zone, which must be in the specified @@ -7376,11 +7376,11 @@ type CloneStackInput struct { // Server Standard, or Microsoft Windows Server 2012 R2 with SQL Server Web. // // * A custom AMI: Custom. You specify the custom AMI you want to use when - // you create instances. 
For more information on how to use custom AMIs with - // OpsWorks, see Using Custom AMIs (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html). + // you create instances. For more information about how to use custom AMIs + // with OpsWorks, see Using Custom AMIs (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html). // // The default option is the parent stack's operating system. For more information - // on the supported operating systems, see AWS OpsWorks Stacks Operating Systems + // about supported operating systems, see AWS OpsWorks Stacks Operating Systems // (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html). // // You can specify a different Linux operating system for the cloned stack, @@ -7515,9 +7515,9 @@ type CloneStackInput struct { // // * You must specify a value for DefaultSubnetId. // - // For more information on how to use AWS OpsWorks Stacks with a VPC, see Running - // a Stack in a VPC (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html). - // For more information on default VPC and EC2 Classic, see Supported Platforms + // For more information about how to use AWS OpsWorks Stacks with a VPC, see + // Running a Stack in a VPC (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html). + // For more information about default VPC and EC2 Classic, see Supported Platforms // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html). VpcId *string `type:"string"` } @@ -8244,8 +8244,8 @@ type CreateDeploymentInput struct { // // "{\"key1\": \"value1\", \"key2\": \"value2\",...}" // - // For more information on custom JSON, see Use Custom JSON to Modify the Stack - // Configuration Attributes (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html). + // For more information about custom JSON, see Use Custom JSON to Modify the + // Stack Configuration Attributes (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html). CustomJson *string `type:"string"` // The instance IDs for the deployment targets. @@ -8453,15 +8453,15 @@ type CreateInstanceInput struct { // // * A custom AMI: Custom. // - // For more information on the supported operating systems, see AWS OpsWorks + // For more information about the supported operating systems, see AWS OpsWorks // Stacks Operating Systems (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html). // // The default option is the current Amazon Linux version. If you set this parameter // to Custom, you must use the CreateInstance action's AmiId parameter to specify // the custom AMI that you want to use. Block device mappings are not supported - // if the value is Custom. For more information on the supported operating systems, + // if the value is Custom. For more information about supported operating systems, // see Operating Systems (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html)For - // more information on how to use custom AMIs with AWS OpsWorks Stacks, see + // more information about how to use custom AMIs with AWS OpsWorks Stacks, see // Using Custom AMIs (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html). Os *string `type:"string"` @@ -8962,7 +8962,7 @@ type CreateStackInput struct { // The configuration manager. 
When you create a stack we recommend that you // use the configuration manager to specify the Chef version: 12, 11.10, or // 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for - // Linux stacks is currently 11.4. + // Linux stacks is currently 12. ConfigurationManager *StackConfigurationManager `type:"structure"` // Contains the information required to retrieve an app or cookbook from a repository. @@ -8976,8 +8976,8 @@ type CreateStackInput struct { // // "{\"key1\": \"value1\", \"key2\": \"value2\",...}" // - // For more information on custom JSON, see Use Custom JSON to Modify the Stack - // Configuration Attributes (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html). + // For more information about custom JSON, see Use Custom JSON to Modify the + // Stack Configuration Attributes (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html). CustomJson *string `type:"string"` // The stack's default Availability Zone, which must be in the specified region. @@ -9017,7 +9017,7 @@ type CreateStackInput struct { // you create instances. For more information, see Using Custom AMIs (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html). // // The default option is the current Amazon Linux version. For more information - // on the supported operating systems, see AWS OpsWorks Stacks Operating Systems + // about supported operating systems, see AWS OpsWorks Stacks Operating Systems // (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html). DefaultOs *string `type:"string"` @@ -9080,8 +9080,24 @@ type CreateStackInput struct { // Name is a required field Name *string `type:"string" required:"true"` - // The stack's AWS region, such as "ap-south-1". For more information about - // Amazon regions, see Regions and Endpoints (http://docs.aws.amazon.com/general/latest/gr/rande.html). + // The stack's AWS region, such as ap-south-1. For more information about Amazon + // regions, see Regions and Endpoints (http://docs.aws.amazon.com/general/latest/gr/rande.html). + // + // In the AWS CLI, this API maps to the --stack-region parameter. If the --stack-region + // parameter and the AWS CLI common parameter --region are set to the same value, + // the stack uses a regional endpoint. If the --stack-region parameter is not + // set, but the AWS CLI --region parameter is, this also results in a stack + // with a regional endpoint. However, if the --region parameter is set to us-east-1, + // and the --stack-region parameter is set to one of the following, then the + // stack uses a legacy or classic region: us-west-1, us-west-2, sa-east-1, eu-central-1, + // eu-west-1, ap-northeast-1, ap-southeast-1, ap-southeast-2. In this case, + // the actual API endpoint of the stack is in us-east-1. Only the preceding + // regions are supported as classic regions in the us-east-1 API endpoint. Because + // it is a best practice to choose the regional endpoint that is closest to + // where you manage AWS, we recommend that you use regional endpoints for new + // stacks. The AWS CLI common --region parameter always specifies a regional + // API endpoint; it cannot be used to specify a classic AWS OpsWorks Stacks + // region. // // Region is a required field Region *string `type:"string" required:"true"` @@ -9142,9 +9158,9 @@ type CreateStackInput struct { // // * You must specify a value for DefaultSubnetId. 
// - // For more information on how to use AWS OpsWorks Stacks with a VPC, see Running - // a Stack in a VPC (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html). - // For more information on default VPC and EC2-Classic, see Supported Platforms + // For more information about how to use AWS OpsWorks Stacks with a VPC, see + // Running a Stack in a VPC (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html). + // For more information about default VPC and EC2-Classic, see Supported Platforms // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html). VpcId *string `type:"string"` } @@ -9976,7 +9992,7 @@ func (s *DeploymentCommand) SetName(v string) *DeploymentCommand { type DeregisterEcsClusterInput struct { _ struct{} `type:"structure"` - // The cluster's ARN. + // The cluster's Amazon Resource Number (ARN). // // EcsClusterArn is a required field EcsClusterArn *string `type:"string" required:"true"` @@ -10958,6 +10974,7 @@ func (s DescribeOperatingSystemsInput) GoString() string { type DescribeOperatingSystemsOutput struct { _ struct{} `type:"structure"` + // Contains information in response to a DescribeOperatingSystems request. OperatingSystems []*OperatingSystem `type:"list"` } @@ -11118,8 +11135,8 @@ type DescribeRdsDbInstancesInput struct { // An array containing the ARNs of the instances to be described. RdsDbInstanceArns []*string `type:"list"` - // The stack ID that the instances are registered with. The operation returns - // descriptions of all registered Amazon RDS instances. + // The ID of the stack with which the instances are registered. The operation + // returns descriptions of all registered Amazon RDS instances. // // StackId is a required field StackId *string `type:"string" required:"true"` @@ -11256,7 +11273,7 @@ func (s *DescribeServiceErrorsOutput) SetServiceErrors(v []*ServiceError) *Descr type DescribeStackProvisioningParametersInput struct { _ struct{} `type:"structure"` - // The stack ID + // The stack ID. // // StackId is a required field StackId *string `type:"string" required:"true"` @@ -12253,6 +12270,7 @@ type Instance struct { // The instance architecture: "i386" or "x86_64". Architecture *string `type:"string" enum:"Architecture"` + // The instance's Amazon Resource Number (ARN). Arn *string `type:"string"` // For load-based or time-based instances, the type. @@ -12744,6 +12762,7 @@ type InstancesCount struct { // The number of instances with start_failed status. StartFailed *int64 `type:"integer"` + // The number of instances with stop_failed status. StopFailed *int64 `type:"integer"` // The number of instances with stopped status. @@ -12896,6 +12915,7 @@ func (s *InstancesCount) SetUnassigning(v int64) *InstancesCount { type Layer struct { _ struct{} `type:"structure"` + // The Amazon Resource Number (ARN) of a layer. Arn *string `type:"string"` // The layer attributes. @@ -14575,8 +14595,8 @@ type SetPermissionInput struct { // // * iam_only // - // For more information on the permissions associated with these levels, see - // Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). + // For more information about the permissions associated with these levels, + // see Managing User Permissions (http://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users.html). Level *string `type:"string"` // The stack ID. 
@@ -15352,6 +15372,7 @@ func (s StartStackOutput) GoString() string { type StopInstanceInput struct { _ struct{} `type:"structure"` + // Specifies whether to force an instance to stop. Force *bool `type:"boolean"` // The instance ID. @@ -15975,7 +15996,7 @@ func (s UpdateAppOutput) GoString() string { type UpdateElasticIpInput struct { _ struct{} `type:"structure"` - // The address. + // The IP address for which you want to update the name. // // ElasticIp is a required field ElasticIp *string `type:"string" required:"true"` @@ -16118,15 +16139,15 @@ type UpdateInstanceInput struct { // Microsoft Windows Server 2012 R2 with SQL Server Standard, or Microsoft // Windows Server 2012 R2 with SQL Server Web. // - // For more information on the supported operating systems, see AWS OpsWorks + // For more information about supported operating systems, see AWS OpsWorks // Stacks Operating Systems (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html). // // The default option is the current Amazon Linux version. If you set this parameter // to Custom, you must use the AmiId parameter to specify the custom AMI that - // you want to use. For more information on the supported operating systems, + // you want to use. For more information about supported operating systems, // see Operating Systems (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html). - // For more information on how to use custom AMIs with OpsWorks, see Using Custom - // AMIs (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html). + // For more information about how to use custom AMIs with OpsWorks, see Using + // Custom AMIs (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html). // // You can specify a different Linux operating system for the updated stack, // but you cannot change from Linux to Windows or Windows to Linux. @@ -16613,7 +16634,7 @@ type UpdateStackInput struct { // The configuration manager. When you update a stack, we recommend that you // use the configuration manager to specify the Chef version: 12, 11.10, or // 11.4 for Linux stacks, or 12.2 for Windows stacks. The default value for - // Linux stacks is currently 11.4. + // Linux stacks is currently 12. ConfigurationManager *StackConfigurationManager `type:"structure"` // Contains the information required to retrieve an app or cookbook from a repository. @@ -16627,8 +16648,8 @@ type UpdateStackInput struct { // // "{\"key1\": \"value1\", \"key2\": \"value2\",...}" // - // For more information on custom JSON, see Use Custom JSON to Modify the Stack - // Configuration Attributes (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html). + // For more information about custom JSON, see Use Custom JSON to Modify the + // Stack Configuration Attributes (http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-json.html). CustomJson *string `type:"string"` // The stack's default Availability Zone, which must be in the stack's region. @@ -16661,11 +16682,11 @@ type UpdateStackInput struct { // Windows Server 2012 R2 with SQL Server Web. // // * A custom AMI: Custom. You specify the custom AMI you want to use when - // you create instances. For more information on how to use custom AMIs with - // OpsWorks, see Using Custom AMIs (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html). + // you create instances. 
For more information about how to use custom AMIs + // with OpsWorks, see Using Custom AMIs (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html). // // The default option is the stack's current operating system. For more information - // on the supported operating systems, see AWS OpsWorks Stacks Operating Systems + // about supported operating systems, see AWS OpsWorks Stacks Operating Systems // (http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html). DefaultOs *string `type:"string"` @@ -17133,6 +17154,8 @@ type Volume struct { // The Amazon EC2 volume ID. Ec2VolumeId *string `type:"string"` + // Specifies whether an Amazon EBS volume is encrypted. For more information, + // see Amazon EBS Encryption (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html). Encrypted *bool `type:"boolean"` // The instance ID. @@ -17163,7 +17186,23 @@ type Volume struct { // The volume ID. VolumeId *string `type:"string"` - // The volume type, standard or PIOPS. + // The volume type. For more information, see Amazon EBS Volume Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). + // + // * standard - Magnetic. Magnetic volumes must have a minimum size of 1 + // GiB and a maximum size of 1024 GiB. + // + // * io1 - Provisioned IOPS (SSD). PIOPS volumes must have a minimum size + // of 4 GiB and a maximum size of 16384 GiB. + // + // * gp2 - General Purpose (SSD). General purpose volumes must have a minimum + // size of 1 GiB and a maximum size of 16384 GiB. + // + // * st1 - Throughput Optimized hard disk drive (HDD). Throughput optimized + // HDD volumes must have a minimum size of 500 GiB and a maximum size of + // 16384 GiB. + // + // * sc1 - Cold HDD. Cold HDD volumes must have a minimum size of 500 GiB + // and a maximum size of 16384 GiB. VolumeType *string `type:"string"` } @@ -17292,15 +17331,21 @@ type VolumeConfiguration struct { // The volume type. For more information, see Amazon EBS Volume Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html). // - // * standard - Magnetic + // * standard - Magnetic. Magnetic volumes must have a minimum size of 1 + // GiB and a maximum size of 1024 GiB. // - // * io1 - Provisioned IOPS (SSD) + // * io1 - Provisioned IOPS (SSD). PIOPS volumes must have a minimum size + // of 4 GiB and a maximum size of 16384 GiB. // - // * gp2 - General Purpose (SSD) + // * gp2 - General Purpose (SSD). General purpose volumes must have a minimum + // size of 1 GiB and a maximum size of 16384 GiB. // - // * st1 - Throughput Optimized hard disk drive (HDD) + // * st1 - Throughput Optimized hard disk drive (HDD). Throughput optimized + // HDD volumes must have a minimum size of 500 GiB and a maximum size of + // 16384 GiB. // - // * sc1 - Cold HDD + // * sc1 - Cold HDD. Cold HDD volumes must have a minimum size of 500 GiB + // and a maximum size of 16384 GiB. VolumeType *string `type:"string"` } diff --git a/vendor/github.com/aws/aws-sdk-go/service/opsworks/service.go b/vendor/github.com/aws/aws-sdk-go/service/opsworks/service.go index 3e94bb43ddf..a1a8307cae4 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/opsworks/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/opsworks/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "opsworks" // Service endpoint prefix API calls made to. 
- EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "opsworks" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "OpsWorks" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the OpsWorks client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/organizations/api.go b/vendor/github.com/aws/aws-sdk-go/service/organizations/api.go index 816b78dfeac..974cf99b39c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/organizations/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/organizations/api.go @@ -16,8 +16,8 @@ const opAcceptHandshake = "AcceptHandshake" // AcceptHandshakeRequest generates a "aws/request.Request" representing the // client's request for the AcceptHandshake operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -68,7 +68,7 @@ func (c *Organizations) AcceptHandshakeRequest(input *AcceptHandshakeInput) (req // The user who calls the API for an invitation to join must have the organizations:AcceptHandshake // permission. If you enabled all features in the organization, then the // user must also have the iam:CreateServiceLinkedRole permission so that -// Organizations can create the required service-linked role named OrgsServiceLinkedRoleName. +// Organizations can create the required service-linked role named AWSServiceRoleForOrganizations. // For more information, see AWS Organizations and Service-Linked Roles (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_integration_services.html#orgs_integration_service-linked-roles) // in the AWS Organizations User Guide. // @@ -101,7 +101,7 @@ func (c *Organizations) AcceptHandshakeRequest(input *AcceptHandshakeInput) (req // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeHandshakeConstraintViolationException "HandshakeConstraintViolationException" @@ -112,15 +112,15 @@ func (c *Organizations) AcceptHandshakeRequest(input *AcceptHandshakeInput) (req // specific API or operation: // // * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on -// the number of accounts in an organization. Note: deleted and closed accounts -// still count toward your limit. +// the number of accounts in an organization. Note that deleted and closed +// accounts still count toward your limit. 
// -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get this exception immediately after creating the organization, wait +// one hour and try again. If after an hour it continues to fail with this +// error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. // // * ALREADY_IN_AN_ORGANIZATION: The handshake request is invalid because // the invited account is already a member of an organization. @@ -128,13 +128,13 @@ func (c *Organizations) AcceptHandshakeRequest(input *AcceptHandshakeInput) (req // * ORGANIZATION_ALREADY_HAS_ALL_FEATURES: The handshake request is invalid // because the organization has already enabled all features. // -// * INVITE_DISABLED_DURING_ENABLE_ALL_FEATURES: You cannot issue new invitations -// to join an organization while it is in the process of enabling all features. +// * INVITE_DISABLED_DURING_ENABLE_ALL_FEATURES: You can't issue new invitations +// to join an organization while it's in the process of enabling all features. // You can resume inviting accounts after you finalize the process when all // accounts have agreed to the change. // -// * PAYMENT_INSTRUMENT_REQUIRED: You cannot complete the operation with -// an account that does not have a payment instrument, such as a credit card, +// * PAYMENT_INSTRUMENT_REQUIRED: You can't complete the operation with an +// account that doesn't have a payment instrument, such as a credit card, // associated with it. // // * ORGANIZATION_FROM_DIFFERENT_SELLER_OF_RECORD: The request failed because @@ -151,7 +151,7 @@ func (c *Organizations) AcceptHandshakeRequest(input *AcceptHandshakeInput) (req // // * ErrCodeInvalidHandshakeTransitionException "InvalidHandshakeTransitionException" // You can't perform the operation on the handshake in its current state. For -// example, you can't cancel a handshake that was already accepted, or accept +// example, you can't cancel a handshake that was already accepted or accept // a handshake that was already declined. // // * ErrCodeHandshakeAlreadyInStateException "HandshakeAlreadyInStateException" @@ -167,11 +167,11 @@ func (c *Organizations) AcceptHandshakeRequest(input *AcceptHandshakeInput) (req // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -191,11 +191,11 @@ func (c *Organizations) AcceptHandshakeRequest(input *AcceptHandshakeInput) (req // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. 
A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -230,9 +230,9 @@ func (c *Organizations) AcceptHandshakeRequest(input *AcceptHandshakeInput) (req // protect against denial-of-service attacks. Try again later. // // * ErrCodeAccessDeniedForDependencyException "AccessDeniedForDependencyException" -// The operation you attempted requires you to have the iam:CreateServiceLinkedRole -// so that Organizations can create the required service-linked role. You do -// not have that permission. +// The operation that you attempted requires you to have the iam:CreateServiceLinkedRole +// so that AWS Organizations can create the required service-linked role. You +// don't have that permission. // // See also, https://docs.aws.amazon.com/goto/WebAPI/organizations-2016-11-28/AcceptHandshake func (c *Organizations) AcceptHandshake(input *AcceptHandshakeInput) (*AcceptHandshakeOutput, error) { @@ -260,8 +260,8 @@ const opAttachPolicy = "AttachPolicy" // AttachPolicyRequest generates a "aws/request.Request" representing the // client's request for the AttachPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -302,8 +302,8 @@ func (c *Organizations) AttachPolicyRequest(input *AttachPolicyInput) (req *requ // AttachPolicy API operation for AWS Organizations. // -// Attaches a policy to a root, an organizational unit, or an individual account. -// How the policy affects accounts depends on the type of policy: +// Attaches a policy to a root, an organizational unit (OU), or an individual +// account. How the policy affects accounts depends on the type of policy: // // * Service control policy (SCP) - An SCP specifies what permissions can // be delegated to users in affected member accounts. The scope of influence @@ -358,7 +358,7 @@ func (c *Organizations) AttachPolicyRequest(input *AttachPolicyInput) (req *requ // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -367,36 +367,43 @@ func (c *Organizations) AttachPolicyRequest(input *AttachPolicyInput) (req *requ // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. 
This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to remove the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contact AWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. +// +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -410,23 +417,24 @@ func (c *Organizations) AttachPolicyRequest(input *AttachPolicyInput) (req *requ // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account.
This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -462,11 +470,11 @@ func (c *Organizations) AttachPolicyRequest(input *AttachPolicyInput) (req *requ // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -486,11 +494,11 @@ func (c *Organizations) AttachPolicyRequest(input *AttachPolicyInput) (req *requ // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. 
+// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -516,7 +524,7 @@ func (c *Organizations) AttachPolicyRequest(input *AttachPolicyInput) (req *requ // We can't find a policy with the PolicyId that you specified. // // * ErrCodePolicyTypeNotEnabledException "PolicyTypeNotEnabledException" -// The specified policy type is not currently enabled in this root. You cannot +// The specified policy type isn't currently enabled in this root. You can't // attach policies of the specified type to entities in a root until you enable // that type in the root. For more information, see Enabling All Features in // Your Organization (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html) @@ -559,8 +567,8 @@ const opCancelHandshake = "CancelHandshake" // CancelHandshakeRequest generates a "aws/request.Request" representing the // client's request for the CancelHandshake operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -633,7 +641,7 @@ func (c *Organizations) CancelHandshakeRequest(input *CancelHandshakeInput) (req // // * ErrCodeInvalidHandshakeTransitionException "InvalidHandshakeTransitionException" // You can't perform the operation on the handshake in its current state. For -// example, you can't cancel a handshake that was already accepted, or accept +// example, you can't cancel a handshake that was already accepted or accept // a handshake that was already declined. // // * ErrCodeHandshakeAlreadyInStateException "HandshakeAlreadyInStateException" @@ -649,11 +657,11 @@ func (c *Organizations) CancelHandshakeRequest(input *CancelHandshakeInput) (req // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -673,11 +681,11 @@ func (c *Organizations) CancelHandshakeRequest(input *CancelHandshakeInput) (req // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. 
// // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -733,8 +741,8 @@ const opCreateAccount = "CreateAccount" // CreateAccountRequest generates a "aws/request.Request" representing the // client's request for the CreateAccount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -775,53 +783,57 @@ func (c *Organizations) CreateAccountRequest(input *CreateAccountInput) (req *re // // Creates an AWS account that is automatically a member of the organization // whose credentials made the request. This is an asynchronous request that -// AWS performs in the background. If you want to check the status of the request -// later, you need the OperationId response element from this operation to provide -// as a parameter to the DescribeCreateAccountStatus operation. -// -// The user who calls the API for an invitation to join must have the organizations:CreateAccount -// permission. If you enabled all features in the organization, then the user -// must also have the iam:CreateServiceLinkedRole permission so that Organizations -// can create the required service-linked role named OrgsServiceLinkedRoleName. -// For more information, see AWS Organizations and Service-Linked Roles (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_integration_services.html#orgs_integration_service-linked-roles) -// in the AWS Organizations User Guide. +// AWS performs in the background. Because CreateAccount operates asynchronously, +// it can return a successful completion message even though account initialization +// might still be in progress. You might need to wait a few minutes before you +// can successfully access the account. To check the status of the request, +// do one of the following: +// +// * Use the OperationId response element from this operation to provide +// as a parameter to the DescribeCreateAccountStatus operation. +// +// * Check the AWS CloudTrail log for the CreateAccountResult event. For +// information on using AWS CloudTrail with Organizations, see Monitoring +// the Activity in Your Organization (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_monitoring.html) +// in the AWS Organizations User Guide. +// +// The user who calls the API to create an account must have the organizations:CreateAccount permission. If you enabled all features in the organization, AWS Organizations +// will create the required service-linked role named AWSServiceRoleForOrganizations. For more information, see AWS Organizations and Service-Linked Roles (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html#orgs_integrate_services-using_slrs) in the AWS Organizations User Guide. // -// The user in the master account who calls this API must also have the iam:CreateRole -// permission because AWS Organizations preconfigures the new member account -// with a role (named OrganizationAccountAccessRole by default) that grants -// users in the master account administrator permissions in the new member account. -// Principals in the master account can assume the role. AWS Organizations clones -// the company name and address information for the new account from the organization's -// master account.
+// AWS Organizations preconfigures the new member account with a role (named +// OrganizationAccountAccessRole by default) that grants users in the master account administrator permissions +// in the new member account. Principals in the master account can assume the +// role. AWS Organizations clones the company name and address information for +// the new account from the organization's master account. // // This operation can be called only from the organization's master account. // // For more information about creating accounts, see Creating an AWS Account -// in Your Organization (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html) -// in the AWS Organizations User Guide. +// in Your Organization (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html) in the AWS Organizations User Guide. // // When you create an account in an organization using the AWS Organizations // console, API, or CLI commands, the information required for the account to // operate as a standalone account, such as a payment method and signing the -// End User Licence Agreement (EULA) is not automatically collected. If you +// end user license agreement (EULA) is not automatically collected. If you // must remove an account from your organization later, you can do so only after // you provide the missing information. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// as a member account (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // +// If you get an exception that indicates that you exceeded your account limits +// for the organization, contact AWS Support (https://console.aws.amazon.com/support/home#/). +// +// If you get an exception that indicates that the operation failed because +// your organization is still initializing, wait one hour and then try again. +// If the error persists, contact AWS Support (https://console.aws.amazon.com/support/home#/). +// // When you create a member account with this operation, you can choose whether // to create the account with the IAM User and Role Access to Billing Information // switch enabled. If you enable it, IAM users and roles that have appropriate // permissions can view billing information for the account. If you disable -// this, then only the account root user can access billing information. For -// information about how to disable this for an account, see Granting Access -// to Your Billing Information and Tools (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/grantaccess.html). -// -// This operation can be called only from the organization's master account. -// -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// it, only the account root user can access billing information. For information +// about how to disable this switch for an account, see Granting Access to Your +// Billing Information and Tools (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/grantaccess.html). // // Returns awserr.Error for service API and SDK errors.
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -839,7 +851,7 @@ func (c *Organizations) CreateAccountRequest(input *CreateAccountInput) (req *re // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -848,36 +860,43 @@ func (c *Organizations) CreateAccountRequest(input *CreateAccountInput) (req *re // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to remove the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contact AWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization.
// -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -891,23 +910,24 @@ func (c *Organizations) CreateAccountRequest(input *CreateAccountInput) (req *re // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. 
Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -940,11 +960,11 @@ func (c *Organizations) CreateAccountRequest(input *CreateAccountInput) (req *re // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -964,11 +984,11 @@ func (c *Organizations) CreateAccountRequest(input *CreateAccountInput) (req *re // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -991,8 +1011,10 @@ func (c *Organizations) CreateAccountRequest(input *CreateAccountInput) (req *re // between entities in the same root. // // * ErrCodeFinalizingOrganizationException "FinalizingOrganizationException" -// AWS Organizations could not finalize the creation of your organization. Try -// again later. If this persists, contact AWS customer support. +// AWS Organizations couldn't perform the operation because your organization +// hasn't finished initializing. This can take up to an hour. Try again later. +// If after one hour you continue to receive this error, contact AWS Support +// (https://console.aws.amazon.com/support/home#/). // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -1028,8 +1050,8 @@ const opCreateOrganization = "CreateOrganization" // CreateOrganizationRequest generates a "aws/request.Request" representing the // client's request for the CreateOrganization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1108,36 +1130,43 @@ func (c *Organizations) CreateOrganizationRequest(input *CreateOrganizationInput // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. 
For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to remove the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contact AWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. +// +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -1151,23 +1180,24 @@ func (c *Organizations) CreateOrganizationRequest(input *CreateOrganizationInput // minimum number of policies of a certain type required.
// // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -1200,11 +1230,11 @@ func (c *Organizations) CreateOrganizationRequest(input *CreateOrganizationInput // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. 
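The reworked CreateAccount comments in the hunks above stress that account creation is asynchronous: the call hands back an OperationId, and callers are expected to poll DescribeCreateAccountStatus (or watch CloudTrail for CreateAccountResult) until initialization finishes. The following is only a minimal sketch of that polling flow with aws-sdk-go, assuming placeholder account details and the standard v1 session setup; it is illustrative and not part of the vendored change:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/organizations"
)

func main() {
	// Credentials must belong to the organization's master account.
	svc := organizations.New(session.Must(session.NewSession()))

	// CreateAccount only accepts the request; the new account may not be
	// usable yet when this call returns.
	out, err := svc.CreateAccount(&organizations.CreateAccountInput{
		AccountName: aws.String("example-member"),     // placeholder
		Email:       aws.String("member@example.com"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}

	// Poll DescribeCreateAccountStatus with the returned request ID until the
	// creation leaves the IN_PROGRESS state, as the documentation recommends.
	for {
		status, err := svc.DescribeCreateAccountStatus(&organizations.DescribeCreateAccountStatusInput{
			CreateAccountRequestId: out.CreateAccountStatus.Id,
		})
		if err != nil {
			log.Fatal(err)
		}
		state := aws.StringValue(status.CreateAccountStatus.State)
		if state != "IN_PROGRESS" {
			fmt.Println("create account finished with state:", state)
			break
		}
		time.Sleep(10 * time.Second)
	}
}
```

Waiting on the status record rather than the CreateAccount response mirrors the note above that a successful completion message can arrive while account initialization is still in progress.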
@@ -1224,11 +1254,11 @@ func (c *Organizations) CreateOrganizationRequest(input *CreateOrganizationInput // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -1259,9 +1289,9 @@ func (c *Organizations) CreateOrganizationRequest(input *CreateOrganizationInput // protect against denial-of-service attacks. Try again later. // // * ErrCodeAccessDeniedForDependencyException "AccessDeniedForDependencyException" -// The operation you attempted requires you to have the iam:CreateServiceLinkedRole -// so that Organizations can create the required service-linked role. You do -// not have that permission. +// The operation that you attempted requires you to have the iam:CreateServiceLinkedRole +// so that AWS Organizations can create the required service-linked role. You +// don't have that permission. // // See also, https://docs.aws.amazon.com/goto/WebAPI/organizations-2016-11-28/CreateOrganization func (c *Organizations) CreateOrganization(input *CreateOrganizationInput) (*CreateOrganizationOutput, error) { @@ -1289,8 +1319,8 @@ const opCreateOrganizationalUnit = "CreateOrganizationalUnit" // CreateOrganizationalUnitRequest generates a "aws/request.Request" representing the // client's request for the CreateOrganizationalUnit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1356,7 +1386,7 @@ func (c *Organizations) CreateOrganizationalUnitRequest(input *CreateOrganizatio // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -1365,36 +1395,43 @@ func (c *Organizations) CreateOrganizationalUnitRequest(input *CreateOrganizatio // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. 
This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to remove the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contact AWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -1408,23 +1445,24 @@ func (c *Organizations) CreateOrganizationalUnitRequest(input *CreateOrganizatio // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account.
This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -1449,7 +1487,7 @@ func (c *Organizations) CreateOrganizationalUnitRequest(input *CreateOrganizatio // account. Then try the operation again. // // * ErrCodeDuplicateOrganizationalUnitException "DuplicateOrganizationalUnitException" -// An organizational unit (OU) with the same name already exists. +// An OU with the same name already exists. // // * ErrCodeInvalidInputException "InvalidInputException" // The requested operation failed because you provided invalid values for one @@ -1460,11 +1498,11 @@ func (c *Organizations) CreateOrganizationalUnitRequest(input *CreateOrganizatio // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. 
+// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -1484,11 +1522,11 @@ func (c *Organizations) CreateOrganizationalUnitRequest(input *CreateOrganizatio // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -1511,8 +1549,7 @@ func (c *Organizations) CreateOrganizationalUnitRequest(input *CreateOrganizatio // between entities in the same root. // // * ErrCodeParentNotFoundException "ParentNotFoundException" -// We can't find a root or organizational unit (OU) with the ParentId that you -// specified. +// We can't find a root or OU with the ParentId that you specified. // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -1548,8 +1585,8 @@ const opCreatePolicy = "CreatePolicy" // CreatePolicyRequest generates a "aws/request.Request" representing the // client's request for the CreatePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1612,7 +1649,7 @@ func (c *Organizations) CreatePolicyRequest(input *CreatePolicyInput) (req *requ // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -1621,36 +1658,43 @@ func (c *Organizations) CreatePolicyRequest(input *CreatePolicyInput) (req *requ // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to removing the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. 
// // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contactAWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. +// +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -1664,23 +1708,24 @@ func (c *Organizations) CreatePolicyRequest(input *CreatePolicyInput) (req *requ // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. 
This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -1716,11 +1761,11 @@ func (c *Organizations) CreatePolicyRequest(input *CreatePolicyInput) (req *requ // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -1740,11 +1785,11 @@ func (c *Organizations) CreatePolicyRequest(input *CreatePolicyInput) (req *requ // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. 
// // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -1767,16 +1812,16 @@ func (c *Organizations) CreatePolicyRequest(input *CreatePolicyInput) (req *requ // between entities in the same root. // // * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocumentException" -// The provided policy document does not meet the requirements of the specified +// The provided policy document doesn't meet the requirements of the specified // policy type. For example, the syntax might be incorrect. For details about // service control policy syntax, see Service Control Policy Syntax (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_reference_scp-syntax.html) // in the AWS Organizations User Guide. // // * ErrCodePolicyTypeNotAvailableForOrganizationException "PolicyTypeNotAvailableForOrganizationException" // You can't use the specified policy type with the feature set currently enabled -// for this organization. For example, you can enable service control policies -// (SCPs) only after you enable all features in the organization. For more information, -// see Enabling and Disabling a Policy Type on a Root (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html#enable_policies_on_root) +// for this organization. For example, you can enable SCPs only after you enable +// all features in the organization. For more information, see Enabling and +// Disabling a Policy Type on a Root (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html#enable_policies_on_root) // in the AWS Organizations User Guide. // // * ErrCodeServiceException "ServiceException" @@ -1813,8 +1858,8 @@ const opDeclineHandshake = "DeclineHandshake" // DeclineHandshakeRequest generates a "aws/request.Request" representing the // client's request for the DeclineHandshake operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1888,7 +1933,7 @@ func (c *Organizations) DeclineHandshakeRequest(input *DeclineHandshakeInput) (r // // * ErrCodeInvalidHandshakeTransitionException "InvalidHandshakeTransitionException" // You can't perform the operation on the handshake in its current state. For -// example, you can't cancel a handshake that was already accepted, or accept +// example, you can't cancel a handshake that was already accepted or accept // a handshake that was already declined. // // * ErrCodeHandshakeAlreadyInStateException "HandshakeAlreadyInStateException" @@ -1904,11 +1949,11 @@ func (c *Organizations) DeclineHandshakeRequest(input *DeclineHandshakeInput) (r // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. 
@@ -1928,11 +1973,11 @@ func (c *Organizations) DeclineHandshakeRequest(input *DeclineHandshakeInput) (r // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -1988,8 +2033,8 @@ const opDeleteOrganization = "DeleteOrganization" // DeleteOrganizationRequest generates a "aws/request.Request" representing the // client's request for the DeleteOrganization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2031,8 +2076,7 @@ func (c *Organizations) DeleteOrganizationRequest(input *DeleteOrganizationInput // DeleteOrganization API operation for AWS Organizations. // // Deletes the organization. You can delete an organization only by using credentials -// from the master account. The organization must be empty of member accounts, -// OUs, and policies. +// from the master account. The organization must be empty of member accounts. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2050,7 +2094,7 @@ func (c *Organizations) DeleteOrganizationRequest(input *DeleteOrganizationInput // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -2066,11 +2110,11 @@ func (c *Organizations) DeleteOrganizationRequest(input *DeleteOrganizationInput // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -2090,11 +2134,11 @@ func (c *Organizations) DeleteOrganizationRequest(input *DeleteOrganizationInput // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. 
A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -2118,8 +2162,7 @@ func (c *Organizations) DeleteOrganizationRequest(input *DeleteOrganizationInput // // * ErrCodeOrganizationNotEmptyException "OrganizationNotEmptyException" // The organization isn't empty. To delete an organization, you must first remove -// all accounts except the master account, delete all organizational units (OUs), -// and delete all policies. +// all accounts except the master account, delete all OUs, and delete all policies. // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -2155,8 +2198,8 @@ const opDeleteOrganizationalUnit = "DeleteOrganizationalUnit" // DeleteOrganizationalUnitRequest generates a "aws/request.Request" representing the // client's request for the DeleteOrganizationalUnit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2197,7 +2240,7 @@ func (c *Organizations) DeleteOrganizationalUnitRequest(input *DeleteOrganizatio // DeleteOrganizationalUnit API operation for AWS Organizations. // -// Deletes an organizational unit from a root or another OU. You must first +// Deletes an organizational unit (OU) from a root or another OU. You must first // remove all accounts and child OUs from the OU that you want to delete. // // This operation can be called only from the organization's master account. @@ -2218,7 +2261,7 @@ func (c *Organizations) DeleteOrganizationalUnitRequest(input *DeleteOrganizatio // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -2234,11 +2277,11 @@ func (c *Organizations) DeleteOrganizationalUnitRequest(input *DeleteOrganizatio // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -2258,11 +2301,11 @@ func (c *Organizations) DeleteOrganizationalUnitRequest(input *DeleteOrganizatio // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. 
+// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -2285,13 +2328,11 @@ func (c *Organizations) DeleteOrganizationalUnitRequest(input *DeleteOrganizatio // between entities in the same root. // // * ErrCodeOrganizationalUnitNotEmptyException "OrganizationalUnitNotEmptyException" -// The specified organizational unit (OU) is not empty. Move all accounts to -// another root or to other OUs, remove all child OUs, and then try the operation -// again. +// The specified OU is not empty. Move all accounts to another root or to other +// OUs, remove all child OUs, and try the operation again. // // * ErrCodeOrganizationalUnitNotFoundException "OrganizationalUnitNotFoundException" -// We can't find an organizational unit (OU) with the OrganizationalUnitId that -// you specified. +// We can't find an OU with the OrganizationalUnitId that you specified. // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -2327,8 +2368,8 @@ const opDeletePolicy = "DeletePolicy" // DeletePolicyRequest generates a "aws/request.Request" representing the // client's request for the DeletePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2370,7 +2411,8 @@ func (c *Organizations) DeletePolicyRequest(input *DeletePolicyInput) (req *requ // DeletePolicy API operation for AWS Organizations. // // Deletes the specified policy from your organization. Before you perform this -// operation, you must first detach the policy from all OUs, roots, and accounts. +// operation, you must first detach the policy from all organizational units +// (OUs), roots, and accounts. // // This operation can be called only from the organization's master account. // @@ -2390,7 +2432,7 @@ func (c *Organizations) DeletePolicyRequest(input *DeletePolicyInput) (req *requ // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -2406,11 +2448,11 @@ func (c *Organizations) DeletePolicyRequest(input *DeletePolicyInput) (req *requ // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. 
// // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -2430,11 +2472,11 @@ func (c *Organizations) DeletePolicyRequest(input *DeletePolicyInput) (req *requ // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -2458,7 +2500,7 @@ func (c *Organizations) DeletePolicyRequest(input *DeletePolicyInput) (req *requ // // * ErrCodePolicyInUseException "PolicyInUseException" // The policy is attached to one or more entities. You must detach it from all -// roots, organizational units (OUs), and accounts before performing this operation. +// roots, OUs, and accounts before performing this operation. // // * ErrCodePolicyNotFoundException "PolicyNotFoundException" // We can't find a policy with the PolicyId that you specified. @@ -2497,8 +2539,8 @@ const opDescribeAccount = "DescribeAccount" // DescribeAccountRequest generates a "aws/request.Request" representing the // client's request for the DescribeAccount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2558,11 +2600,11 @@ func (c *Organizations) DescribeAccountRequest(input *DescribeAccountInput) (req // // * ErrCodeAccountNotFoundException "AccountNotFoundException" // We can't find an AWS account with the AccountId that you specified, or the -// account whose credentials you used to make this request is not a member of +// account whose credentials you used to make this request isn't a member of // an organization. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -2574,11 +2616,11 @@ func (c *Organizations) DescribeAccountRequest(input *DescribeAccountInput) (req // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. 
@@ -2598,11 +2640,11 @@ func (c *Organizations) DescribeAccountRequest(input *DescribeAccountInput) (req // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -2658,8 +2700,8 @@ const opDescribeCreateAccountStatus = "DescribeCreateAccountStatus" // DescribeCreateAccountStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeCreateAccountStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2718,7 +2760,7 @@ func (c *Organizations) DescribeCreateAccountStatusRequest(input *DescribeCreate // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeCreateAccountStatusNotFoundException "CreateAccountStatusNotFoundException" @@ -2734,11 +2776,11 @@ func (c *Organizations) DescribeCreateAccountStatusRequest(input *DescribeCreate // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -2758,11 +2800,11 @@ func (c *Organizations) DescribeCreateAccountStatusRequest(input *DescribeCreate // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -2818,8 +2860,8 @@ const opDescribeHandshake = "DescribeHandshake" // DescribeHandshakeRequest generates a "aws/request.Request" representing the // client's request for the DescribeHandshake operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2899,11 +2941,11 @@ func (c *Organizations) DescribeHandshakeRequest(input *DescribeHandshakeInput) // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -2923,11 +2965,11 @@ func (c *Organizations) DescribeHandshakeRequest(input *DescribeHandshakeInput) // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -2983,8 +3025,8 @@ const opDescribeOrganization = "DescribeOrganization" // DescribeOrganizationRequest generates a "aws/request.Request" representing the // client's request for the DescribeOrganization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3048,7 +3090,7 @@ func (c *Organizations) DescribeOrganizationRequest(input *DescribeOrganizationI // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -3089,8 +3131,8 @@ const opDescribeOrganizationalUnit = "DescribeOrganizationalUnit" // DescribeOrganizationalUnitRequest generates a "aws/request.Request" representing the // client's request for the DescribeOrganizationalUnit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3149,7 +3191,7 @@ func (c *Organizations) DescribeOrganizationalUnitRequest(input *DescribeOrganiz // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -3161,11 +3203,11 @@ func (c *Organizations) DescribeOrganizationalUnitRequest(input *DescribeOrganiz // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -3185,11 +3227,11 @@ func (c *Organizations) DescribeOrganizationalUnitRequest(input *DescribeOrganiz // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -3212,8 +3254,7 @@ func (c *Organizations) DescribeOrganizationalUnitRequest(input *DescribeOrganiz // between entities in the same root. // // * ErrCodeOrganizationalUnitNotFoundException "OrganizationalUnitNotFoundException" -// We can't find an organizational unit (OU) with the OrganizationalUnitId that -// you specified. +// We can't find an OU with the OrganizationalUnitId that you specified. // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -3249,8 +3290,8 @@ const opDescribePolicy = "DescribePolicy" // DescribePolicyRequest generates a "aws/request.Request" representing the // client's request for the DescribePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3309,7 +3350,7 @@ func (c *Organizations) DescribePolicyRequest(input *DescribePolicyInput) (req * // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. 
// // * ErrCodeInvalidInputException "InvalidInputException" @@ -3321,11 +3362,11 @@ func (c *Organizations) DescribePolicyRequest(input *DescribePolicyInput) (req * // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -3345,11 +3386,11 @@ func (c *Organizations) DescribePolicyRequest(input *DescribePolicyInput) (req * // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -3408,8 +3449,8 @@ const opDetachPolicy = "DetachPolicy" // DetachPolicyRequest generates a "aws/request.Request" representing the // client's request for the DetachPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3450,8 +3491,8 @@ func (c *Organizations) DetachPolicyRequest(input *DetachPolicyInput) (req *requ // DetachPolicy API operation for AWS Organizations. // -// Detaches a policy from a target root, organizational unit, or account. If -// the policy being detached is a service control policy (SCP), the changes +// Detaches a policy from a target root, organizational unit (OU), or account. +// If the policy being detached is a service control policy (SCP), the changes // to permissions for IAM users and roles in affected accounts are immediate. // // Note: Every root, OU, and account must have at least one SCP attached. If @@ -3482,7 +3523,7 @@ func (c *Organizations) DetachPolicyRequest(input *DetachPolicyInput) (req *requ // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -3491,36 +3532,43 @@ func (c *Organizations) DetachPolicyRequest(input *DetachPolicyInput) (req *requ // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. 
For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to removing the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contactAWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -3534,23 +3582,24 @@ func (c *Organizations) DetachPolicyRequest(input *DetachPolicyInput) (req *requ // minimum number of policies of a certain type required. 
// // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -3583,11 +3632,11 @@ func (c *Organizations) DetachPolicyRequest(input *DetachPolicyInput) (req *requ // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. 
@@ -3607,11 +3656,11 @@ func (c *Organizations) DetachPolicyRequest(input *DetachPolicyInput) (req *requ // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -3676,8 +3725,8 @@ const opDisableAWSServiceAccess = "DisableAWSServiceAccess" // DisableAWSServiceAccessRequest generates a "aws/request.Request" representing the // client's request for the DisableAWSServiceAccess operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3758,7 +3807,7 @@ func (c *Organizations) DisableAWSServiceAccessRequest(input *DisableAWSServiceA // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -3767,36 +3816,43 @@ func (c *Organizations) DisableAWSServiceAccessRequest(input *DisableAWSServiceA // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to removing the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contactAWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. 
Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -3810,23 +3866,24 @@ func (c *Organizations) DisableAWSServiceAccessRequest(input *DisableAWSServiceA // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. 
Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -3859,11 +3916,11 @@ func (c *Organizations) DisableAWSServiceAccessRequest(input *DisableAWSServiceA // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -3883,11 +3940,11 @@ func (c *Organizations) DisableAWSServiceAccessRequest(input *DisableAWSServiceA // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -3943,8 +4000,8 @@ const opDisablePolicyType = "DisablePolicyType" // DisablePolicyTypeRequest generates a "aws/request.Request" representing the // client's request for the DisablePolicyType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3986,8 +4043,8 @@ func (c *Organizations) DisablePolicyTypeRequest(input *DisablePolicyTypeInput) // Disables an organizational control policy type in a root. A policy of a certain // type can be attached to entities in a root only if that type is enabled in // the root. After you perform this operation, you no longer can attach policies -// of the specified type to that root or to any OU or account in that root. -// You can undo this by using the EnablePolicyType operation. +// of the specified type to that root or to any organizational unit (OU) or +// account in that root. You can undo this by using the EnablePolicyType operation. // // This operation can be called only from the organization's master account. // @@ -4012,7 +4069,7 @@ func (c *Organizations) DisablePolicyTypeRequest(input *DisablePolicyTypeInput) // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -4021,36 +4078,43 @@ func (c *Organizations) DisablePolicyTypeRequest(input *DisablePolicyTypeInput) // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to removing the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contactAWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. 
// -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. +// +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -4064,23 +4128,24 @@ func (c *Organizations) DisablePolicyTypeRequest(input *DisablePolicyTypeInput) // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. 
Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -4113,11 +4178,11 @@ func (c *Organizations) DisablePolicyTypeRequest(input *DisablePolicyTypeInput) // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -4137,11 +4202,11 @@ func (c *Organizations) DisablePolicyTypeRequest(input *DisablePolicyTypeInput) // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -4164,7 +4229,7 @@ func (c *Organizations) DisablePolicyTypeRequest(input *DisablePolicyTypeInput) // between entities in the same root. // // * ErrCodePolicyTypeNotEnabledException "PolicyTypeNotEnabledException" -// The specified policy type is not currently enabled in this root. You cannot +// The specified policy type isn't currently enabled in this root. You can't // attach policies of the specified type to entities in a root until you enable // that type in the root. For more information, see Enabling All Features in // Your Organization (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html) @@ -4207,8 +4272,8 @@ const opEnableAWSServiceAccess = "EnableAWSServiceAccess" // EnableAWSServiceAccessRequest generates a "aws/request.Request" representing the // client's request for the EnableAWSServiceAccess operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4286,7 +4351,7 @@ func (c *Organizations) EnableAWSServiceAccessRequest(input *EnableAWSServiceAcc // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -4295,36 +4360,43 @@ func (c *Organizations) EnableAWSServiceAccessRequest(input *EnableAWSServiceAcc // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to remove the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contact AWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. 
+// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -4338,23 +4410,24 @@ func (c *Organizations) EnableAWSServiceAccessRequest(input *EnableAWSServiceAcc // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. 
Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -4387,11 +4460,11 @@ func (c *Organizations) EnableAWSServiceAccessRequest(input *EnableAWSServiceAcc // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -4411,11 +4484,11 @@ func (c *Organizations) EnableAWSServiceAccessRequest(input *EnableAWSServiceAcc // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -4471,8 +4544,8 @@ const opEnableAllFeatures = "EnableAllFeatures" // EnableAllFeaturesRequest generates a "aws/request.Request" representing the // client's request for the EnableAllFeatures operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4520,12 +4593,11 @@ func (c *Organizations) EnableAllFeaturesRequest(input *EnableAllFeaturesInput) // in the AWS Organizations User Guide. // // This operation is required only for organizations that were created explicitly -// with only the consolidated billing features enabled, or that were migrated -// from a Consolidated Billing account family to Organizations. Calling this -// operation sends a handshake to every invited account in the organization. -// The feature set change can be finalized and the additional features enabled -// only after all administrators in the invited accounts approve the change -// by accepting the handshake. +// with only the consolidated billing features enabled. Calling this operation +// sends a handshake to every invited account in the organization. The feature +// set change can be finalized and the additional features enabled only after +// all administrators in the invited accounts approve the change by accepting +// the handshake. // // After you enable all features, you can separately enable or disable individual // policy types in a root using EnablePolicyType and DisablePolicyType. 
To see @@ -4559,7 +4631,7 @@ func (c *Organizations) EnableAllFeaturesRequest(input *EnableAllFeaturesInput) // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -4574,15 +4646,15 @@ func (c *Organizations) EnableAllFeaturesRequest(input *EnableAllFeaturesInput) // specific API or operation: // // * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on -// the number of accounts in an organization. Note: deleted and closed accounts -// still count toward your limit. +// the number of accounts in an organization. Note that deleted and closed +// accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get this exception immediately after creating the organization, wait +// one hour and try again. If after an hour it continues to fail with this +// error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. // // * ALREADY_IN_AN_ORGANIZATION: The handshake request is invalid because // the invited account is already a member of an organization. @@ -4590,13 +4662,13 @@ func (c *Organizations) EnableAllFeaturesRequest(input *EnableAllFeaturesInput) // * ORGANIZATION_ALREADY_HAS_ALL_FEATURES: The handshake request is invalid // because the organization has already enabled all features. // -// * INVITE_DISABLED_DURING_ENABLE_ALL_FEATURES: You cannot issue new invitations -// to join an organization while it is in the process of enabling all features. +// * INVITE_DISABLED_DURING_ENABLE_ALL_FEATURES: You can't issue new invitations +// to join an organization while it's in the process of enabling all features. // You can resume inviting accounts after you finalize the process when all // accounts have agreed to the change. // -// * PAYMENT_INSTRUMENT_REQUIRED: You cannot complete the operation with -// an account that does not have a payment instrument, such as a credit card, +// * PAYMENT_INSTRUMENT_REQUIRED: You can't complete the operation with an +// account that doesn't have a payment instrument, such as a credit card, // associated with it. // // * ORGANIZATION_FROM_DIFFERENT_SELLER_OF_RECORD: The request failed because @@ -4617,11 +4689,11 @@ func (c *Organizations) EnableAllFeaturesRequest(input *EnableAllFeaturesInput) // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. 
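For reviewers of this vendored update, a minimal sketch of driving the EnableAllFeatures call documented above through this SDK. It is illustrative only and not part of the generated code or of this provider; it assumes the default credential chain resolves to the organization's master account.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/organizations"
)

func main() {
	// Assumes credentials for the organization's master account are available
	// through the default chain (environment, shared config, or instance role).
	sess := session.Must(session.NewSession())
	svc := organizations.New(sess)

	// EnableAllFeatures only starts the process: it sends a handshake to every
	// invited member account, and the feature-set change is finalized once all
	// of them accept.
	out, err := svc.EnableAllFeatures(&organizations.EnableAllFeaturesInput{})
	if err != nil {
		log.Fatal(err)
	}
	if out.Handshake != nil {
		fmt.Println("handshake ID:", aws.StringValue(out.Handshake.Id))
	}
}
```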
@@ -4641,11 +4713,11 @@ func (c *Organizations) EnableAllFeaturesRequest(input *EnableAllFeaturesInput) // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -4701,8 +4773,8 @@ const opEnablePolicyType = "EnablePolicyType" // EnablePolicyTypeRequest generates a "aws/request.Request" representing the // client's request for the EnablePolicyType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4742,8 +4814,9 @@ func (c *Organizations) EnablePolicyTypeRequest(input *EnablePolicyTypeInput) (r // EnablePolicyType API operation for AWS Organizations. // // Enables a policy type in a root. After you enable a policy type in a root, -// you can attach policies of that type to the root, any OU, or account in that -// root. You can undo this by using the DisablePolicyType operation. +// you can attach policies of that type to the root, any organizational unit +// (OU), or account in that root. You can undo this by using the DisablePolicyType +// operation. // // This operation can be called only from the organization's master account. // @@ -4769,7 +4842,7 @@ func (c *Organizations) EnablePolicyTypeRequest(input *EnablePolicyTypeInput) (r // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -4778,36 +4851,43 @@ func (c *Organizations) EnablePolicyTypeRequest(input *EnablePolicyTypeInput) (r // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to remove the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. 
// // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contact AWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -4821,23 +4901,24 @@ func (c *Organizations) EnablePolicyTypeRequest(input *EnablePolicyTypeInput) (r // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. 
This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -4870,11 +4951,11 @@ func (c *Organizations) EnablePolicyTypeRequest(input *EnablePolicyTypeInput) (r // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -4894,11 +4975,11 @@ func (c *Organizations) EnablePolicyTypeRequest(input *EnablePolicyTypeInput) (r // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. 
// // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -4936,9 +5017,9 @@ func (c *Organizations) EnablePolicyTypeRequest(input *EnablePolicyTypeInput) (r // // * ErrCodePolicyTypeNotAvailableForOrganizationException "PolicyTypeNotAvailableForOrganizationException" // You can't use the specified policy type with the feature set currently enabled -// for this organization. For example, you can enable service control policies -// (SCPs) only after you enable all features in the organization. For more information, -// see Enabling and Disabling a Policy Type on a Root (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html#enable_policies_on_root) +// for this organization. For example, you can enable SCPs only after you enable +// all features in the organization. For more information, see Enabling and +// Disabling a Policy Type on a Root (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html#enable_policies_on_root) // in the AWS Organizations User Guide. // // See also, https://docs.aws.amazon.com/goto/WebAPI/organizations-2016-11-28/EnablePolicyType @@ -4967,8 +5048,8 @@ const opInviteAccountToOrganization = "InviteAccountToOrganization" // InviteAccountToOrganizationRequest generates a "aws/request.Request" representing the // client's request for the InviteAccountToOrganization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5019,11 +5100,12 @@ func (c *Organizations) InviteAccountToOrganizationRequest(input *InviteAccountT // accounts from AISPL and AWS, or any other AWS seller. For more information, // see Consolidated Billing in India (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/useconsolidatedbilliing-India.html). // -// This operation can be called only from the organization's master account. +// If you receive an exception that indicates that you exceeded your account +// limits for the organization or that the operation failed because your organization +// is still initializing, wait one hour and then try again. If the error persists +// after an hour, then contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// This operation can be called only from the organization's master account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5041,9 +5123,15 @@ func (c *Organizations) InviteAccountToOrganizationRequest(input *InviteAccountT // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. 
// +// * ErrCodeAccountOwnerNotVerifiedException "AccountOwnerNotVerifiedException" +// You can't invite an existing account to your organization until you verify +// that you own the email address associated with the master account. For more +// information, see Email Address Verification (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_create.html#about-email-verification) +// in the AWS Organizations User Guide. +// // * ErrCodeConcurrentModificationException "ConcurrentModificationException" // The target of the operation is currently being modified by a different request. // Try again later. @@ -5056,15 +5144,15 @@ func (c *Organizations) InviteAccountToOrganizationRequest(input *InviteAccountT // specific API or operation: // // * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on -// the number of accounts in an organization. Note: deleted and closed accounts -// still count toward your limit. +// the number of accounts in an organization. Note that deleted and closed +// accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get this exception immediately after creating the organization, wait +// one hour and try again. If after an hour it continues to fail with this +// error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. // // * ALREADY_IN_AN_ORGANIZATION: The handshake request is invalid because // the invited account is already a member of an organization. @@ -5072,13 +5160,13 @@ func (c *Organizations) InviteAccountToOrganizationRequest(input *InviteAccountT // * ORGANIZATION_ALREADY_HAS_ALL_FEATURES: The handshake request is invalid // because the organization has already enabled all features. // -// * INVITE_DISABLED_DURING_ENABLE_ALL_FEATURES: You cannot issue new invitations -// to join an organization while it is in the process of enabling all features. +// * INVITE_DISABLED_DURING_ENABLE_ALL_FEATURES: You can't issue new invitations +// to join an organization while it's in the process of enabling all features. // You can resume inviting accounts after you finalize the process when all // accounts have agreed to the change. // -// * PAYMENT_INSTRUMENT_REQUIRED: You cannot complete the operation with -// an account that does not have a payment instrument, such as a credit card, +// * PAYMENT_INSTRUMENT_REQUIRED: You can't complete the operation with an +// account that doesn't have a payment instrument, such as a credit card, // associated with it. // // * ORGANIZATION_FROM_DIFFERENT_SELLER_OF_RECORD: The request failed because @@ -5106,11 +5194,11 @@ func (c *Organizations) InviteAccountToOrganizationRequest(input *InviteAccountT // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. 
// // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -5130,11 +5218,11 @@ func (c *Organizations) InviteAccountToOrganizationRequest(input *InviteAccountT // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -5157,8 +5245,10 @@ func (c *Organizations) InviteAccountToOrganizationRequest(input *InviteAccountT // between entities in the same root. // // * ErrCodeFinalizingOrganizationException "FinalizingOrganizationException" -// AWS Organizations could not finalize the creation of your organization. Try -// again later. If this persists, contact AWS customer support. +// AWS Organizations couldn't perform the operation because your organization +// hasn't finished initializing. This can take up to an hour. Try again later. +// If after one hour you continue to receive this error, contact AWS Support +// (https://console.aws.amazon.com/support/home#/). // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -5194,8 +5284,8 @@ const opLeaveOrganization = "LeaveOrganization" // LeaveOrganizationRequest generates a "aws/request.Request" representing the // client's request for the LeaveOrganization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5283,11 +5373,11 @@ func (c *Organizations) LeaveOrganizationRequest(input *LeaveOrganizationInput) // // * ErrCodeAccountNotFoundException "AccountNotFoundException" // We can't find an AWS account with the AccountId that you specified, or the -// account whose credentials you used to make this request is not a member of +// account whose credentials you used to make this request isn't a member of // an organization. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -5296,36 +5386,43 @@ func (c *Organizations) LeaveOrganizationRequest(input *LeaveOrganizationInput) // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. 
This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to remove the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contact AWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -5339,23 +5436,24 @@ func (c *Organizations) LeaveOrganizationRequest(input *LeaveOrganizationInput) // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. 
This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -5388,11 +5486,11 @@ func (c *Organizations) LeaveOrganizationRequest(input *LeaveOrganizationInput) // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -5412,11 +5510,11 @@ func (c *Organizations) LeaveOrganizationRequest(input *LeaveOrganizationInput) // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. 
+// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -5477,8 +5575,8 @@ const opListAWSServiceAccessForOrganization = "ListAWSServiceAccessForOrganizati // ListAWSServiceAccessForOrganizationRequest generates a "aws/request.Request" representing the // client's request for the ListAWSServiceAccessForOrganization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5551,41 +5649,48 @@ func (c *Organizations) ListAWSServiceAccessForOrganizationRequest(input *ListAW // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to remove the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contact AWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. 
// -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. +// +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -5599,23 +5704,24 @@ func (c *Organizations) ListAWSServiceAccessForOrganizationRequest(input *ListAW // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. 
Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -5648,11 +5754,11 @@ func (c *Organizations) ListAWSServiceAccessForOrganizationRequest(input *ListAW // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -5672,11 +5778,11 @@ func (c *Organizations) ListAWSServiceAccessForOrganizationRequest(input *ListAW // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -5782,8 +5888,8 @@ const opListAccounts = "ListAccounts" // ListAccountsRequest generates a "aws/request.Request" representing the // client's request for the ListAccounts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5829,7 +5935,8 @@ func (c *Organizations) ListAccountsRequest(input *ListAccountsInput) (req *requ // ListAccounts API operation for AWS Organizations. // // Lists all the accounts in the organization. To request only the accounts -// in a specified root or OU, use the ListAccountsForParent operation instead. +// in a specified root or organizational unit (OU), use the ListAccountsForParent +// operation instead. 
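Reviewer note rather than generated SDK text: a minimal sketch of how the ListAccounts pagination described here is usually consumed through this SDK. The listAllAccountIDs helper is a hypothetical name used only for illustration; ListAccountsPages follows NextToken for the caller, which matters because a page can be empty while more results remain.

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/organizations"
)

// listAllAccountIDs collects every account ID in the organization, letting the
// SDK keep requesting pages until the service returns a null NextToken.
func listAllAccountIDs(svc *organizations.Organizations) ([]string, error) {
	var ids []string
	err := svc.ListAccountsPages(&organizations.ListAccountsInput{},
		func(page *organizations.ListAccountsOutput, lastPage bool) bool {
			for _, acct := range page.Accounts {
				ids = append(ids, aws.StringValue(acct.Id))
			}
			return true // continue while more pages exist
		})
	return ids, err
}
```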
// // Always check the NextToken response parameter for a null value when calling // a List* operation. These operations can occasionally return an empty set @@ -5854,7 +5961,7 @@ func (c *Organizations) ListAccountsRequest(input *ListAccountsInput) (req *requ // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -5866,11 +5973,11 @@ func (c *Organizations) ListAccountsRequest(input *ListAccountsInput) (req *requ // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -5890,11 +5997,11 @@ func (c *Organizations) ListAccountsRequest(input *ListAccountsInput) (req *requ // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -6000,8 +6107,8 @@ const opListAccountsForParent = "ListAccountsForParent" // ListAccountsForParentRequest generates a "aws/request.Request" representing the // client's request for the ListAccountsForParent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6076,7 +6183,7 @@ func (c *Organizations) ListAccountsForParentRequest(input *ListAccountsForParen // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -6088,11 +6195,11 @@ func (c *Organizations) ListAccountsForParentRequest(input *ListAccountsForParen // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. 
// -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -6112,11 +6219,11 @@ func (c *Organizations) ListAccountsForParentRequest(input *ListAccountsForParen // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -6139,8 +6246,7 @@ func (c *Organizations) ListAccountsForParentRequest(input *ListAccountsForParen // between entities in the same root. // // * ErrCodeParentNotFoundException "ParentNotFoundException" -// We can't find a root or organizational unit (OU) with the ParentId that you -// specified. +// We can't find a root or OU with the ParentId that you specified. // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -6226,8 +6332,8 @@ const opListChildren = "ListChildren" // ListChildrenRequest generates a "aws/request.Request" representing the // client's request for the ListChildren operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6272,9 +6378,9 @@ func (c *Organizations) ListChildrenRequest(input *ListChildrenInput) (req *requ // ListChildren API operation for AWS Organizations. // -// Lists all of the OUs or accounts that are contained in the specified parent -// OU or root. This operation, along with ListParents enables you to traverse -// the tree structure that makes up this root. +// Lists all of the organizational units (OUs) or accounts that are contained +// in the specified parent OU or root. This operation, along with ListParents +// enables you to traverse the tree structure that makes up this root. // // Always check the NextToken response parameter for a null value when calling // a List* operation. These operations can occasionally return an empty set @@ -6299,7 +6405,7 @@ func (c *Organizations) ListChildrenRequest(input *ListChildrenInput) (req *requ // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. 
// // * ErrCodeInvalidInputException "InvalidInputException" @@ -6311,11 +6417,11 @@ func (c *Organizations) ListChildrenRequest(input *ListChildrenInput) (req *requ // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -6335,11 +6441,11 @@ func (c *Organizations) ListChildrenRequest(input *ListChildrenInput) (req *requ // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -6362,8 +6468,7 @@ func (c *Organizations) ListChildrenRequest(input *ListChildrenInput) (req *requ // between entities in the same root. // // * ErrCodeParentNotFoundException "ParentNotFoundException" -// We can't find a root or organizational unit (OU) with the ParentId that you -// specified. +// We can't find a root or OU with the ParentId that you specified. // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -6449,8 +6554,8 @@ const opListCreateAccountStatus = "ListCreateAccountStatus" // ListCreateAccountStatusRequest generates a "aws/request.Request" representing the // client's request for the ListCreateAccountStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6521,7 +6626,7 @@ func (c *Organizations) ListCreateAccountStatusRequest(input *ListCreateAccountS // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -6533,11 +6638,11 @@ func (c *Organizations) ListCreateAccountStatusRequest(input *ListCreateAccountS // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. 
+// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -6557,11 +6662,11 @@ func (c *Organizations) ListCreateAccountStatusRequest(input *ListCreateAccountS // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -6667,8 +6772,8 @@ const opListHandshakesForAccount = "ListHandshakesForAccount" // ListHandshakesForAccountRequest generates a "aws/request.Request" representing the // client's request for the ListHandshakesForAccount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6755,11 +6860,11 @@ func (c *Organizations) ListHandshakesForAccountRequest(input *ListHandshakesFor // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -6779,11 +6884,11 @@ func (c *Organizations) ListHandshakesForAccountRequest(input *ListHandshakesFor // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -6889,8 +6994,8 @@ const opListHandshakesForOrganization = "ListHandshakesForOrganization" // ListHandshakesForOrganizationRequest generates a "aws/request.Request" representing the // client's request for the ListHandshakesForOrganization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -6967,7 +7072,7 @@ func (c *Organizations) ListHandshakesForOrganizationRequest(input *ListHandshak // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -6983,11 +7088,11 @@ func (c *Organizations) ListHandshakesForOrganizationRequest(input *ListHandshak // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -7007,11 +7112,11 @@ func (c *Organizations) ListHandshakesForOrganizationRequest(input *ListHandshak // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -7117,8 +7222,8 @@ const opListOrganizationalUnitsForParent = "ListOrganizationalUnitsForParent" // ListOrganizationalUnitsForParentRequest generates a "aws/request.Request" representing the // client's request for the ListOrganizationalUnitsForParent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7188,7 +7293,7 @@ func (c *Organizations) ListOrganizationalUnitsForParentRequest(input *ListOrgan // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -7200,11 +7305,11 @@ func (c *Organizations) ListOrganizationalUnitsForParentRequest(input *ListOrgan // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. 
// -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -7224,11 +7329,11 @@ func (c *Organizations) ListOrganizationalUnitsForParentRequest(input *ListOrgan // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -7251,8 +7356,7 @@ func (c *Organizations) ListOrganizationalUnitsForParentRequest(input *ListOrgan // between entities in the same root. // // * ErrCodeParentNotFoundException "ParentNotFoundException" -// We can't find a root or organizational unit (OU) with the ParentId that you -// specified. +// We can't find a root or OU with the ParentId that you specified. // // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -7338,8 +7442,8 @@ const opListParents = "ListParents" // ListParentsRequest generates a "aws/request.Request" representing the // client's request for the ListParents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7413,7 +7517,7 @@ func (c *Organizations) ListParentsRequest(input *ListParentsInput) (req *reques // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeChildNotFoundException "ChildNotFoundException" @@ -7429,11 +7533,11 @@ func (c *Organizations) ListParentsRequest(input *ListParentsInput) (req *reques // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -7453,11 +7557,11 @@ func (c *Organizations) ListParentsRequest(input *ListParentsInput) (req *reques // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. 
A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -7563,8 +7667,8 @@ const opListPolicies = "ListPolicies" // ListPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7634,7 +7738,7 @@ func (c *Organizations) ListPoliciesRequest(input *ListPoliciesInput) (req *requ // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -7646,11 +7750,11 @@ func (c *Organizations) ListPoliciesRequest(input *ListPoliciesInput) (req *requ // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -7670,11 +7774,11 @@ func (c *Organizations) ListPoliciesRequest(input *ListPoliciesInput) (req *requ // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -7780,8 +7884,8 @@ const opListPoliciesForTarget = "ListPoliciesForTarget" // ListPoliciesForTargetRequest generates a "aws/request.Request" representing the // client's request for the ListPoliciesForTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -7853,7 +7957,7 @@ func (c *Organizations) ListPoliciesForTargetRequest(input *ListPoliciesForTarge // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -7865,11 +7969,11 @@ func (c *Organizations) ListPoliciesForTargetRequest(input *ListPoliciesForTarge // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -7889,11 +7993,11 @@ func (c *Organizations) ListPoliciesForTargetRequest(input *ListPoliciesForTarge // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -8002,8 +8106,8 @@ const opListRoots = "ListRoots" // ListRootsRequest generates a "aws/request.Request" representing the // client's request for the ListRoots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8079,7 +8183,7 @@ func (c *Organizations) ListRootsRequest(input *ListRootsInput) (req *request.Re // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -8091,11 +8195,11 @@ func (c *Organizations) ListRootsRequest(input *ListRootsInput) (req *request.Re // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. 
// // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -8115,11 +8219,11 @@ func (c *Organizations) ListRootsRequest(input *ListRootsInput) (req *request.Re // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -8225,8 +8329,8 @@ const opListTargetsForPolicy = "ListTargetsForPolicy" // ListTargetsForPolicyRequest generates a "aws/request.Request" representing the // client's request for the ListTargetsForPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8271,7 +8375,8 @@ func (c *Organizations) ListTargetsForPolicyRequest(input *ListTargetsForPolicyI // ListTargetsForPolicy API operation for AWS Organizations. // -// Lists all the roots, OUs, and accounts to which the specified policy is attached. +// Lists all the roots, organizaitonal units (OUs), and accounts to which the +// specified policy is attached. // // Always check the NextToken response parameter for a null value when calling // a List* operation. These operations can occasionally return an empty set @@ -8296,7 +8401,7 @@ func (c *Organizations) ListTargetsForPolicyRequest(input *ListTargetsForPolicyI // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeInvalidInputException "InvalidInputException" @@ -8308,11 +8413,11 @@ func (c *Organizations) ListTargetsForPolicyRequest(input *ListTargetsForPolicyI // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -8332,11 +8437,11 @@ func (c *Organizations) ListTargetsForPolicyRequest(input *ListTargetsForPolicyI // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. 
A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -8445,8 +8550,8 @@ const opMoveAccount = "MoveAccount" // MoveAccountRequest generates a "aws/request.Request" representing the // client's request for the MoveAccount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8487,8 +8592,8 @@ func (c *Organizations) MoveAccountRequest(input *MoveAccountInput) (req *reques // MoveAccount API operation for AWS Organizations. // -// Moves an account from its current source parent root or OU to the specified -// destination parent root or OU. +// Moves an account from its current source parent root or organizational unit +// (OU) to the specified destination parent root or OU. // // This operation can be called only from the organization's master account. // @@ -8516,11 +8621,11 @@ func (c *Organizations) MoveAccountRequest(input *MoveAccountInput) (req *reques // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -8540,11 +8645,11 @@ func (c *Organizations) MoveAccountRequest(input *MoveAccountInput) (req *reques // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -8578,7 +8683,7 @@ func (c *Organizations) MoveAccountRequest(input *MoveAccountInput) (req *reques // // * ErrCodeAccountNotFoundException "AccountNotFoundException" // We can't find an AWS account with the AccountId that you specified, or the -// account whose credentials you used to make this request is not a member of +// account whose credentials you used to make this request isn't a member of // an organization. // // * ErrCodeTooManyRequestsException "TooManyRequestsException" @@ -8590,7 +8695,7 @@ func (c *Organizations) MoveAccountRequest(input *MoveAccountInput) (req *reques // Try again later. 
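The MoveAccount comments above describe moving an account from its current parent root or OU to another parent, and note the call must come from the organization's master account. Here is a minimal sketch under those assumptions; the `AccountId`, `SourceParentId`, and `DestinationParentId` field names follow the AWS API parameters but are not shown in this excerpt, and all IDs are hypothetical placeholders.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/organizations"
)

func main() {
	// Must be run with master-account credentials.
	svc := organizations.New(session.Must(session.NewSession()))

	_, err := svc.MoveAccount(&organizations.MoveAccountInput{
		AccountId:           aws.String("111111111111"),                        // hypothetical member account
		SourceParentId:      aws.String("r-examplerootid111"),                  // hypothetical root
		DestinationParentId: aws.String("ou-examplerootid111-exampleouid111"), // hypothetical OU
	})
	if err != nil {
		log.Fatal(err)
	}
}
```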
// // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeServiceException "ServiceException" @@ -8623,8 +8728,8 @@ const opRemoveAccountFromOrganization = "RemoveAccountFromOrganization" // RemoveAccountFromOrganizationRequest generates a "aws/request.Request" representing the // client's request for the RemoveAccountFromOrganization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8691,11 +8796,6 @@ func (c *Organizations) RemoveAccountFromOrganizationRequest(input *RemoveAccoun // (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // -// You can remove a member account only after you enable IAM user access to -// billing in the member account. For more information, see Activating Access -// to the Billing and Cost Management Console (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/grantaccess.html#ControllingAccessWebsite-Activate) -// in the AWS Billing and Cost Management User Guide. -// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -8713,11 +8813,11 @@ func (c *Organizations) RemoveAccountFromOrganizationRequest(input *RemoveAccoun // // * ErrCodeAccountNotFoundException "AccountNotFoundException" // We can't find an AWS account with the AccountId that you specified, or the -// account whose credentials you used to make this request is not a member of +// account whose credentials you used to make this request isn't a member of // an organization. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -8726,36 +8826,43 @@ func (c *Organizations) RemoveAccountFromOrganizationRequest(input *RemoveAccoun // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to removing the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. 
// // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contactAWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. // -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -8769,23 +8876,24 @@ func (c *Organizations) RemoveAccountFromOrganizationRequest(input *RemoveAccoun // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. 
This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -8818,11 +8926,11 @@ func (c *Organizations) RemoveAccountFromOrganizationRequest(input *RemoveAccoun // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -8842,11 +8950,11 @@ func (c *Organizations) RemoveAccountFromOrganizationRequest(input *RemoveAccoun // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. 
// // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -8907,8 +9015,8 @@ const opUpdateOrganizationalUnit = "UpdateOrganizationalUnit" // UpdateOrganizationalUnitRequest generates a "aws/request.Request" representing the // client's request for the UpdateOrganizationalUnit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8969,7 +9077,7 @@ func (c *Organizations) UpdateOrganizationalUnitRequest(input *UpdateOrganizatio // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -8977,7 +9085,7 @@ func (c *Organizations) UpdateOrganizationalUnitRequest(input *UpdateOrganizatio // Try again later. // // * ErrCodeDuplicateOrganizationalUnitException "DuplicateOrganizationalUnitException" -// An organizational unit (OU) with the same name already exists. +// An OU with the same name already exists. // // * ErrCodeInvalidInputException "InvalidInputException" // The requested operation failed because you provided invalid values for one @@ -8988,11 +9096,11 @@ func (c *Organizations) UpdateOrganizationalUnitRequest(input *UpdateOrganizatio // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -9012,11 +9120,11 @@ func (c *Organizations) UpdateOrganizationalUnitRequest(input *UpdateOrganizatio // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -9039,8 +9147,7 @@ func (c *Organizations) UpdateOrganizationalUnitRequest(input *UpdateOrganizatio // between entities in the same root. // // * ErrCodeOrganizationalUnitNotFoundException "OrganizationalUnitNotFoundException" -// We can't find an organizational unit (OU) with the OrganizationalUnitId that -// you specified. +// We can't find an OU with the OrganizationalUnitId that you specified. 
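The comments in this file repeatedly say that operations return `awserr.Error` and that callers should use runtime type assertions with its `Code` and `Message` methods. The error code strings listed above (for example `OrganizationalUnitNotFoundException`) correspond to generated `ErrCode*` constants, so they can be switched on directly. A minimal sketch follows; the `UpdateOrganizationalUnitInput` field names (`OrganizationalUnitId`, `Name`) are assumed from the generated SDK, and the OU ID is a hypothetical placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/organizations"
)

func main() {
	svc := organizations.New(session.Must(session.NewSession()))

	_, err := svc.UpdateOrganizationalUnit(&organizations.UpdateOrganizationalUnitInput{
		OrganizationalUnitId: aws.String("ou-examplerootid111-exampleouid111"), // hypothetical ID
		Name:                 aws.String("renamed-ou"),
	})
	if err != nil {
		// Assert to awserr.Error and compare Code() against the ErrCode* constants.
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case organizations.ErrCodeOrganizationalUnitNotFoundException:
				fmt.Println("no OU with that OrganizationalUnitId:", aerr.Message())
			case organizations.ErrCodeAWSOrganizationsNotInUseException:
				fmt.Println("this account is not in an organization:", aerr.Message())
			default:
				fmt.Println(aerr.Code(), aerr.Message())
			}
			return
		}
		log.Fatal(err)
	}
}
```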
// // * ErrCodeServiceException "ServiceException" // AWS Organizations can't complete your request because of an internal service @@ -9076,8 +9183,8 @@ const opUpdatePolicy = "UpdatePolicy" // UpdatePolicyRequest generates a "aws/request.Request" representing the // client's request for the UpdatePolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9138,7 +9245,7 @@ func (c *Organizations) UpdatePolicyRequest(input *UpdatePolicyInput) (req *requ // in the IAM User Guide. // // * ErrCodeAWSOrganizationsNotInUseException "AWSOrganizationsNotInUseException" -// Your account is not a member of an organization. To make this request, you +// Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. // // * ErrCodeConcurrentModificationException "ConcurrentModificationException" @@ -9147,36 +9254,43 @@ func (c *Organizations) UpdatePolicyRequest(input *UpdatePolicyInput) (req *requ // // * ErrCodeConstraintViolationException "ConstraintViolationException" // Performing this operation violates a minimum or maximum value limit. For -// example, attempting to removing the last SCP from an OU or root, inviting -// or creating too many accounts to the organization, or attaching too many -// policies to an account, OU, or root. This exception includes a reason that -// contains additional information about the violated limit: +// example, attempting to removing the last service control policy (SCP) from +// an OU or root, inviting or creating too many accounts to the organization, +// or attaching too many policies to an account, OU, or root. This exception +// includes a reason that contains additional information about the violated +// limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // -// ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number -// of accounts in an organization. If you need more accounts, contact AWS Support -// to request an increase in your limit. +// * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on +// the number of accounts in an organization. If you need more accounts, +// contactAWS Support (https://console.aws.amazon.com/support/home#/) to +// request an increase in your limit. // -// Or, The number of invitations that you tried to send would cause you to exceed -// the limit of accounts in your organization. Send fewer invitations, or contact -// AWS Support to request an increase in the number of accounts. +// Or the number of invitations that you tried to send would cause you to exceed +// the limit of accounts in your organization. Send fewer invitations or +// contact AWS Support to request an increase in the number of accounts. // -// Note: deleted and closed accounts still count toward your limit. +// Deleted and closed accounts still count toward your limit. 
// -// If you get an exception that indicates that you exceeded your account limits -// for the organization or that you can"t add an account because your organization -// is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). +// If you get receive this exception when running a command immediately after +// creating the organization, wait one hour and try again. If after an hour +// it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of -// handshakes you can send in one day. +// handshakes that you can send in one day. +// +// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs +// that you can have in an organization. // -// * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational -// units you can have in an organization. +// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is +// too many levels deep. // -// * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit -// tree that is too many levels deep. +// * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation +// that requires the organization to be configured to support all features. +// An organization that supports only consolidated billing features can't +// perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -9190,23 +9304,24 @@ func (c *Organizations) UpdatePolicyRequest(input *UpdatePolicyInput) (req *requ // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account -// from the organization that does not yet have enough information to exist -// as a stand-alone account. This account requires you to first agree to -// the AWS Customer Agreement. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// from the organization that doesn't yet have enough information to exist +// as a standalone account. This account requires you to first agree to the +// AWS Customer Agreement. Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove -// an account from the organization that does not yet have enough information -// to exist as a stand-alone account. This account requires you to first -// complete phone verification. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// an account from the organization that doesn't yet have enough information +// to exist as a standalone account. This account requires you to first complete +// phone verification. 
Follow the steps at To leave an organization when +// all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization -// with this account, you first must associate a payment instrument, such -// as a credit card, with the account. Follow the steps at To leave an organization -// when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) +// with this master account, you first must associate a payment instrument, +// such as a credit card, with the account. Follow the steps at To leave +// an organization when all required account information has not yet been +// provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -9242,11 +9357,11 @@ func (c *Organizations) UpdatePolicyRequest(input *UpdatePolicyInput) (req *requ // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and -// cannot be modified. +// can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // -// * INVALID_ENUM: You specified a value that is not valid for that parameter. +// * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -9266,11 +9381,11 @@ func (c *Organizations) UpdatePolicyRequest(input *UpdatePolicyInput) (req *requ // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // -// * INVALID_ROLE_NAME: You provided a role name that is not valid. A role -// name can’t begin with the reserved prefix 'AWSServiceRoleFor'. +// * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role +// name can't begin with the reserved prefix AWSServiceRoleFor. // -// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the -// organization. +// * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource +// Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -9293,7 +9408,7 @@ func (c *Organizations) UpdatePolicyRequest(input *UpdatePolicyInput) (req *requ // between entities in the same root. // // * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocumentException" -// The provided policy document does not meet the requirements of the specified +// The provided policy document doesn't meet the requirements of the specified // policy type. For example, the syntax might be incorrect. For details about // service control policy syntax, see Service Control Policy Syntax (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_reference_scp-syntax.html) // in the AWS Organizations User Guide. @@ -9422,7 +9537,7 @@ type Account struct { JoinedMethod *string `type:"string" enum:"AccountJoinedMethod"` // The date the account became a part of the organization. - JoinedTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + JoinedTimestamp *time.Time `type:"timestamp"` // The friendly name of the account. 
// @@ -9690,7 +9805,7 @@ type CreateAccountInput struct { // The email address of the owner to assign to the new member account. This // email address must not already be associated with another AWS account. You - // must use a valid email address to complete account creation. You cannot access + // must use a valid email address to complete account creation. You can't access // the root user of the account or remove an account that was created with an // invalid email address. // @@ -9698,25 +9813,26 @@ type CreateAccountInput struct { Email *string `min:"6" type:"string" required:"true"` // If set to ALLOW, the new account enables IAM users to access account billing - // information if they have the required permissions. If set to DENY, then only - // the root user of the new account can access account billing information. - // For more information, see Activating Access to the Billing and Cost Management + // information if they have the required permissions. If set to DENY, only the + // root user of the new account can access account billing information. For + // more information, see Activating Access to the Billing and Cost Management // Console (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/grantaccess.html#ControllingAccessWebsite-Activate) // in the AWS Billing and Cost Management User Guide. // - // If you do not specify this parameter, the value defaults to ALLOW, and IAM + // If you don't specify this parameter, the value defaults to ALLOW, and IAM // users and roles with the required permissions can access billing information // for the new account. IamUserAccessToBilling *string `type:"string" enum:"IAMUserAccessToBilling"` // (Optional) // - // The name of an IAM role that Organizations automatically preconfigures in - // the new member account. This role trusts the master account, allowing users - // in the master account to assume the role, as permitted by the master account - // administrator. The role has administrator permissions in the new member account. + // The name of an IAM role that AWS Organizations automatically preconfigures + // in the new member account. This role trusts the master account, allowing + // users in the master account to assume the role, as permitted by the master + // account administrator. The role has administrator permissions in the new + // member account. // - // If you do not specify this parameter, the role name defaults to OrganizationAccountAccessRole. + // If you don't specify this parameter, the role name defaults to OrganizationAccountAccessRole. // // For more information about how to use this role to access the member account, // see Accessing and Administering the Member Accounts in Your Organization @@ -9795,7 +9911,10 @@ type CreateAccountOutput struct { // This response structure might not be fully populated when you first receive // it because account creation is an asynchronous process. You can pass the // returned CreateAccountStatus ID as a parameter to DescribeCreateAccountStatus - // to get status about the progress of the request at later times. + // to get status about the progress of the request at later times. You can also + // check the AWS CloudTrail log for the CreateAccountResult event. For more + // information, see Monitoring the Activity in Your Organization (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_monitoring.html) + // in the AWS Organizations User Guide. 
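The CreateAccountOutput comment above notes that account creation is asynchronous and that the returned CreateAccountStatus ID can be passed to DescribeCreateAccountStatus (or tracked via the CloudTrail CreateAccountResult event). The sketch below polls that status; the `CreateAccountRequestId` input field and the `IAMUserAccessToBillingAllow` / `CreateAccountStateInProgress` enum constants are assumptions about the generated SDK, and the account name is a hypothetical placeholder (the email address is the one used in this diff's examples).

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/organizations"
)

func main() {
	svc := organizations.New(session.Must(session.NewSession()))

	out, err := svc.CreateAccount(&organizations.CreateAccountInput{
		AccountName:            aws.String("example-member"),
		Email:                  aws.String("diego@example.com"),
		IamUserAccessToBilling: aws.String(organizations.IAMUserAccessToBillingAllow),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Poll until the asynchronous request leaves the IN_PROGRESS state.
	id := out.CreateAccountStatus.Id
	for {
		status, err := svc.DescribeCreateAccountStatus(&organizations.DescribeCreateAccountStatusInput{
			CreateAccountRequestId: id,
		})
		if err != nil {
			log.Fatal(err)
		}
		state := aws.StringValue(status.CreateAccountStatus.State)
		if state != organizations.CreateAccountStateInProgress {
			fmt.Println("account creation finished with state:", state)
			break
		}
		time.Sleep(10 * time.Second)
	}
}
```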
CreateAccountStatus *CreateAccountStatus `type:"structure"` } @@ -9831,7 +9950,7 @@ type CreateAccountStatus struct { AccountName *string `min:"1" type:"string"` // The date and time that the account was created and the request completed. - CompletedTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + CompletedTimestamp *time.Time `type:"timestamp"` // If the request failed, a description of the reason for the failure. // @@ -9860,7 +9979,7 @@ type CreateAccountStatus struct { Id *string `type:"string"` // The date and time that the request was made for the account creation. - RequestedTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + RequestedTimestamp *time.Time `type:"timestamp"` // The status of the request. State *string `type:"string" enum:"CreateAccountState"` @@ -11165,7 +11284,7 @@ type EnabledServicePrincipal struct { // The date that the service principal was enabled for integration with AWS // Organizations. - DateEnabled *time.Time `type:"timestamp" timestampFormat:"unix"` + DateEnabled *time.Time `type:"timestamp"` // The name of the service principal. This is typically in the form of a URL, // such as: servicename.amazonaws.com. @@ -11233,7 +11352,7 @@ type Handshake struct { // The date and time that the handshake expires. If the recipient of the handshake // request fails to respond before the specified date and time, the handshake // becomes inactive and is no longer valid. - ExpirationTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + ExpirationTimestamp *time.Time `type:"timestamp"` // The unique identifier (ID) of a handshake. The originating account creates // the ID when it initiates the handshake. @@ -11246,7 +11365,7 @@ type Handshake struct { Parties []*HandshakeParty `type:"list"` // The date and time that the handshake request was made. - RequestedTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + RequestedTimestamp *time.Time `type:"timestamp"` // Additional information that is needed to process the handshake. Resources []*HandshakeResource `type:"list"` @@ -11517,7 +11636,7 @@ type InviteAccountToOrganizationInput struct { // number as the Id. If you specify "Type": "EMAIL", then you must specify the // email address that is associated with the account. // - // --target Id=bill@example.com,Type=EMAIL + // --target Id=diego@example.com,Type=EMAIL // // Target is a required field Target *HandshakeParty `type:"structure" required:"true"` @@ -11618,15 +11737,15 @@ func (s LeaveOrganizationOutput) GoString() string { type ListAWSServiceAccessForOrganizationInput struct { _ struct{} `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. 
If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -11712,15 +11831,15 @@ func (s *ListAWSServiceAccessForOrganizationOutput) SetNextToken(v string) *List type ListAccountsForParentInput struct { _ struct{} `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -11819,15 +11938,15 @@ func (s *ListAccountsForParentOutput) SetNextToken(v string) *ListAccountsForPar type ListAccountsInput struct { _ struct{} `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). 
Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -11916,15 +12035,15 @@ type ListChildrenInput struct { // ChildType is a required field ChildType *string `type:"string" required:"true" enum:"ChildType"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12043,15 +12162,15 @@ func (s *ListChildrenOutput) SetNextToken(v string) *ListChildrenOutput { type ListCreateAccountStatusInput struct { _ struct{} `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. 
Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12155,15 +12274,15 @@ type ListHandshakesForAccountInput struct { // handshakes that were generated by that parent request. Filter *HandshakeFilter `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12262,15 +12381,15 @@ type ListHandshakesForOrganizationInput struct { // the handshakes that were generated by that parent request. Filter *HandshakeFilter `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. 
You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12361,15 +12480,15 @@ func (s *ListHandshakesForOrganizationOutput) SetNextToken(v string) *ListHandsh type ListOrganizationalUnitsForParentInput struct { _ struct{} `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12495,15 +12614,15 @@ type ListParentsInput struct { // ChildId is a required field ChildId *string `type:"string" required:"true"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. 
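The MaxResults/NextToken wording repeated through these List* inputs describes one pagination contract: request a page size, then keep resubmitting the returned NextToken until it comes back empty, since the service may return fewer items per page than requested. A short sketch of that loop, using ListAccounts as an arbitrary example (the page size and printed fields are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/organizations"
)

func main() {
	svc := organizations.New(session.Must(session.NewSession()))

	input := &organizations.ListAccountsInput{
		// Ask for small pages; Organizations may still return fewer results
		// per page than the maximum requested.
		MaxResults: aws.Int64(10),
	}

	for {
		page, err := svc.ListAccounts(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, acct := range page.Accounts {
			fmt.Println(aws.StringValue(acct.Id), aws.StringValue(acct.Name))
		}

		// A non-empty NextToken means more results remain; feed it back into
		// the next request. An empty token means the listing is complete.
		if aws.StringValue(page.NextToken) == "" {
			break
		}
		input.NextToken = page.NextToken
	}
}
```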
MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12601,15 +12720,15 @@ type ListPoliciesForTargetInput struct { // Filter is a required field Filter *string `type:"string" required:"true" enum:"PolicyType"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12735,15 +12854,15 @@ type ListPoliciesInput struct { // Filter is a required field Filter *string `type:"string" required:"true" enum:"PolicyType"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. 
MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12838,15 +12957,15 @@ func (s *ListPoliciesOutput) SetPolicies(v []*PolicySummary) *ListPoliciesOutput type ListRootsInput struct { _ struct{} `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -12930,15 +13049,15 @@ func (s *ListRootsOutput) SetRoots(v []*Root) *ListRootsOutput { type ListTargetsForPolicyInput struct { _ struct{} `type:"structure"` - // (Optional) Use this to limit the number of results you want included in the - // response. If you do not include this parameter, it defaults to a value that - // is specific to the operation. If additional items exist beyond the maximum - // you specify, the NextToken response element is present and has a value (is - // not null). Include that value as the NextToken request parameter in the next - // call to the operation to get the next part of the results. Note that Organizations - // might return fewer results than the maximum even when there are more results - // available. You should check NextToken after every operation to ensure that - // you receive all of the results. + // (Optional) Use this to limit the number of results you want included per + // page in the response. If you do not include this parameter, it defaults to + // a value that is specific to the operation. If additional items exist beyond + // the maximum you specify, the NextToken response element is present and has + // a value (is not null). Include that value as the NextToken request parameter + // in the next call to the operation to get the next part of the results. Note + // that Organizations might return fewer results than the maximum even when + // there are more results available. You should check NextToken after every + // operation to ensure that you receive all of the results. 
MaxResults *int64 `min:"1" type:"integer"` // Use this parameter if you receive a NextToken response in a previous request @@ -13990,6 +14109,12 @@ const ( // ConstraintViolationExceptionReasonOrganizationNotInAllFeaturesMode is a ConstraintViolationExceptionReason enum value ConstraintViolationExceptionReasonOrganizationNotInAllFeaturesMode = "ORGANIZATION_NOT_IN_ALL_FEATURES_MODE" + + // ConstraintViolationExceptionReasonEmailVerificationCodeExpired is a ConstraintViolationExceptionReason enum value + ConstraintViolationExceptionReasonEmailVerificationCodeExpired = "EMAIL_VERIFICATION_CODE_EXPIRED" + + // ConstraintViolationExceptionReasonWaitPeriodActive is a ConstraintViolationExceptionReason enum value + ConstraintViolationExceptionReasonWaitPeriodActive = "WAIT_PERIOD_ACTIVE" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/organizations/doc.go b/vendor/github.com/aws/aws-sdk-go/service/organizations/doc.go index 7838186af7a..cc704a61a84 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/organizations/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/organizations/doc.go @@ -109,7 +109,7 @@ // requests were successfully made to Organizations, who made the request, when // it was made, and so on. For more about AWS Organizations and its support // for AWS CloudTrail, see Logging AWS Organizations Events with AWS CloudTrail -// (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_cloudtrail-integration.html) +// (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_monitoring.html#orgs_cloudtrail-integration) // in the AWS Organizations User Guide. To learn more about CloudTrail, including // how to turn it on and find your log files, see the AWS CloudTrail User Guide // (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/what_is_cloud_trail_top_level.html). diff --git a/vendor/github.com/aws/aws-sdk-go/service/organizations/errors.go b/vendor/github.com/aws/aws-sdk-go/service/organizations/errors.go index 7ccbd82e447..5d886170ef4 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/organizations/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/organizations/errors.go @@ -7,7 +7,7 @@ const ( // ErrCodeAWSOrganizationsNotInUseException for service response error code // "AWSOrganizationsNotInUseException". // - // Your account is not a member of an organization. To make this request, you + // Your account isn't a member of an organization. To make this request, you // must use the credentials of an account that belongs to an organization. ErrCodeAWSOrganizationsNotInUseException = "AWSOrganizationsNotInUseException" @@ -24,19 +24,28 @@ const ( // ErrCodeAccessDeniedForDependencyException for service response error code // "AccessDeniedForDependencyException". // - // The operation you attempted requires you to have the iam:CreateServiceLinkedRole - // so that Organizations can create the required service-linked role. You do - // not have that permission. + // The operation that you attempted requires you to have the iam:CreateServiceLinkedRole + // so that AWS Organizations can create the required service-linked role. You + // don't have that permission. ErrCodeAccessDeniedForDependencyException = "AccessDeniedForDependencyException" // ErrCodeAccountNotFoundException for service response error code // "AccountNotFoundException". 
// // We can't find an AWS account with the AccountId that you specified, or the - // account whose credentials you used to make this request is not a member of + // account whose credentials you used to make this request isn't a member of // an organization. ErrCodeAccountNotFoundException = "AccountNotFoundException" + // ErrCodeAccountOwnerNotVerifiedException for service response error code + // "AccountOwnerNotVerifiedException". + // + // You can't invite an existing account to your organization until you verify + // that you own the email address associated with the master account. For more + // information, see Email Address Verification (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_create.html#about-email-verification) + // in the AWS Organizations User Guide. + ErrCodeAccountOwnerNotVerifiedException = "AccountOwnerNotVerifiedException" + // ErrCodeAlreadyInOrganizationException for service response error code // "AlreadyInOrganizationException". // @@ -62,36 +71,43 @@ const ( // "ConstraintViolationException". // // Performing this operation violates a minimum or maximum value limit. For - // example, attempting to removing the last SCP from an OU or root, inviting - // or creating too many accounts to the organization, or attaching too many - // policies to an account, OU, or root. This exception includes a reason that - // contains additional information about the violated limit: + // example, attempting to removing the last service control policy (SCP) from + // an OU or root, inviting or creating too many accounts to the organization, + // or attaching too many policies to an account, OU, or root. This exception + // includes a reason that contains additional information about the violated + // limit. // // Some of the reasons in the following list might not be applicable to this // specific API or operation: // - // ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number - // of accounts in an organization. If you need more accounts, contact AWS Support - // to request an increase in your limit. + // * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on + // the number of accounts in an organization. If you need more accounts, + // contactAWS Support (https://console.aws.amazon.com/support/home#/) to + // request an increase in your limit. // - // Or, The number of invitations that you tried to send would cause you to exceed - // the limit of accounts in your organization. Send fewer invitations, or contact - // AWS Support to request an increase in the number of accounts. + // Or the number of invitations that you tried to send would cause you to exceed + // the limit of accounts in your organization. Send fewer invitations or + // contact AWS Support to request an increase in the number of accounts. // - // Note: deleted and closed accounts still count toward your limit. + // Deleted and closed accounts still count toward your limit. // - // If you get an exception that indicates that you exceeded your account limits - // for the organization or that you can"t add an account because your organization - // is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). + // If you get receive this exception when running a command immediately after + // creating the organization, wait one hour and try again. If after an hour + // it continues to fail with this error, contact AWS Support (https://console.aws.amazon.com/support/home#/). 
// // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of - // handshakes you can send in one day. + // handshakes that you can send in one day. // - // * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of organizational - // units you can have in an organization. + // * OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs + // that you can have in an organization. // - // * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an organizational unit - // tree that is too many levels deep. + // * OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is + // too many levels deep. + // + // * ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation + // that requires the organization to be configured to support all features. + // An organization that supports only consolidated billing features can't + // perform this operation. // // * POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of // policies that you can have in an organization. @@ -105,23 +121,24 @@ const ( // minimum number of policies of a certain type required. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account - // from the organization that does not yet have enough information to exist - // as a stand-alone account. This account requires you to first agree to - // the AWS Customer Agreement. Follow the steps at To leave an organization - // when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) + // from the organization that doesn't yet have enough information to exist + // as a standalone account. This account requires you to first agree to the + // AWS Customer Agreement. Follow the steps at To leave an organization when + // all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove - // an account from the organization that does not yet have enough information - // to exist as a stand-alone account. This account requires you to first - // complete phone verification. Follow the steps at To leave an organization - // when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) + // an account from the organization that doesn't yet have enough information + // to exist as a standalone account. This account requires you to first complete + // phone verification. Follow the steps at To leave an organization when + // all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization - // with this account, you first must associate a payment instrument, such - // as a credit card, with the account. 
Follow the steps at To leave an organization - // when all required account information has not yet been provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) + // with this master account, you first must associate a payment instrument, + // such as a credit card, with the account. Follow the steps at To leave + // an organization when all required account information has not yet been + // provided (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html#leave-without-all-info) // in the AWS Organizations User Guide. // // * MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation @@ -179,7 +196,7 @@ const ( // ErrCodeDuplicateOrganizationalUnitException for service response error code // "DuplicateOrganizationalUnitException". // - // An organizational unit (OU) with the same name already exists. + // An OU with the same name already exists. ErrCodeDuplicateOrganizationalUnitException = "DuplicateOrganizationalUnitException" // ErrCodeDuplicatePolicyAttachmentException for service response error code @@ -197,8 +214,10 @@ const ( // ErrCodeFinalizingOrganizationException for service response error code // "FinalizingOrganizationException". // - // AWS Organizations could not finalize the creation of your organization. Try - // again later. If this persists, contact AWS customer support. + // AWS Organizations couldn't perform the operation because your organization + // hasn't finished initializing. This can take up to an hour. Try again later. + // If after one hour you continue to receive this error, contact AWS Support + // (https://console.aws.amazon.com/support/home#/). ErrCodeFinalizingOrganizationException = "FinalizingOrganizationException" // ErrCodeHandshakeAlreadyInStateException for service response error code @@ -218,15 +237,15 @@ const ( // specific API or operation: // // * ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on - // the number of accounts in an organization. Note: deleted and closed accounts - // still count toward your limit. + // the number of accounts in an organization. Note that deleted and closed + // accounts still count toward your limit. // - // If you get an exception that indicates that you exceeded your account limits - // for the organization or that you can"t add an account because your organization - // is still initializing, please contact AWS Customer Support (https://console.aws.amazon.com/support/home#/). + // If you get this exception immediately after creating the organization, wait + // one hour and try again. If after an hour it continues to fail with this + // error, contact AWS Support (https://console.aws.amazon.com/support/home#/). // // * HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of - // handshakes you can send in one day. + // handshakes that you can send in one day. // // * ALREADY_IN_AN_ORGANIZATION: The handshake request is invalid because // the invited account is already a member of an organization. @@ -234,13 +253,13 @@ const ( // * ORGANIZATION_ALREADY_HAS_ALL_FEATURES: The handshake request is invalid // because the organization has already enabled all features. // - // * INVITE_DISABLED_DURING_ENABLE_ALL_FEATURES: You cannot issue new invitations - // to join an organization while it is in the process of enabling all features. 
+ // * INVITE_DISABLED_DURING_ENABLE_ALL_FEATURES: You can't issue new invitations + // to join an organization while it's in the process of enabling all features. // You can resume inviting accounts after you finalize the process when all // accounts have agreed to the change. // - // * PAYMENT_INSTRUMENT_REQUIRED: You cannot complete the operation with - // an account that does not have a payment instrument, such as a credit card, + // * PAYMENT_INSTRUMENT_REQUIRED: You can't complete the operation with an + // account that doesn't have a payment instrument, such as a credit card, // associated with it. // // * ORGANIZATION_FROM_DIFFERENT_SELLER_OF_RECORD: The request failed because @@ -263,7 +282,7 @@ const ( // "InvalidHandshakeTransitionException". // // You can't perform the operation on the handshake in its current state. For - // example, you can't cancel a handshake that was already accepted, or accept + // example, you can't cancel a handshake that was already accepted or accept // a handshake that was already declined. ErrCodeInvalidHandshakeTransitionException = "InvalidHandshakeTransitionException" @@ -278,11 +297,11 @@ const ( // specific API or operation: // // * IMMUTABLE_POLICY: You specified a policy that is managed by AWS and - // cannot be modified. + // can't be modified. // // * INPUT_REQUIRED: You must include a value for all required parameters. // - // * INVALID_ENUM: You specified a value that is not valid for that parameter. + // * INVALID_ENUM: You specified a value that isn't valid for that parameter. // // * INVALID_FULL_NAME_TARGET: You specified a full name that contains invalid // characters. @@ -302,11 +321,11 @@ const ( // * INVALID_PATTERN_TARGET_ID: You specified a policy target ID that doesn't // match the required pattern. // - // * INVALID_ROLE_NAME: You provided a role name that is not valid. A role - // name can’t begin with the reserved prefix 'AWSServiceRoleFor'. + // * INVALID_ROLE_NAME: You provided a role name that isn't valid. A role + // name can't begin with the reserved prefix AWSServiceRoleFor. // - // * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid ARN for the - // organization. + // * INVALID_SYNTAX_ORGANIZATION_ARN: You specified an invalid Amazon Resource + // Name (ARN) for the organization. // // * INVALID_SYNTAX_POLICY_ID: You specified an invalid policy ID. // @@ -332,7 +351,7 @@ const ( // ErrCodeMalformedPolicyDocumentException for service response error code // "MalformedPolicyDocumentException". // - // The provided policy document does not meet the requirements of the specified + // The provided policy document doesn't meet the requirements of the specified // policy type. For example, the syntax might be incorrect. For details about // service control policy syntax, see Service Control Policy Syntax (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_reference_scp-syntax.html) // in the AWS Organizations User Guide. @@ -350,37 +369,33 @@ const ( // "OrganizationNotEmptyException". // // The organization isn't empty. To delete an organization, you must first remove - // all accounts except the master account, delete all organizational units (OUs), - // and delete all policies. + // all accounts except the master account, delete all OUs, and delete all policies. ErrCodeOrganizationNotEmptyException = "OrganizationNotEmptyException" // ErrCodeOrganizationalUnitNotEmptyException for service response error code // "OrganizationalUnitNotEmptyException". 
// - // The specified organizational unit (OU) is not empty. Move all accounts to - // another root or to other OUs, remove all child OUs, and then try the operation - // again. + // The specified OU is not empty. Move all accounts to another root or to other + // OUs, remove all child OUs, and try the operation again. ErrCodeOrganizationalUnitNotEmptyException = "OrganizationalUnitNotEmptyException" // ErrCodeOrganizationalUnitNotFoundException for service response error code // "OrganizationalUnitNotFoundException". // - // We can't find an organizational unit (OU) with the OrganizationalUnitId that - // you specified. + // We can't find an OU with the OrganizationalUnitId that you specified. ErrCodeOrganizationalUnitNotFoundException = "OrganizationalUnitNotFoundException" // ErrCodeParentNotFoundException for service response error code // "ParentNotFoundException". // - // We can't find a root or organizational unit (OU) with the ParentId that you - // specified. + // We can't find a root or OU with the ParentId that you specified. ErrCodeParentNotFoundException = "ParentNotFoundException" // ErrCodePolicyInUseException for service response error code // "PolicyInUseException". // // The policy is attached to one or more entities. You must detach it from all - // roots, organizational units (OUs), and accounts before performing this operation. + // roots, OUs, and accounts before performing this operation. ErrCodePolicyInUseException = "PolicyInUseException" // ErrCodePolicyNotAttachedException for service response error code @@ -405,16 +420,16 @@ const ( // "PolicyTypeNotAvailableForOrganizationException". // // You can't use the specified policy type with the feature set currently enabled - // for this organization. For example, you can enable service control policies - // (SCPs) only after you enable all features in the organization. For more information, - // see Enabling and Disabling a Policy Type on a Root (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html#enable_policies_on_root) + // for this organization. For example, you can enable SCPs only after you enable + // all features in the organization. For more information, see Enabling and + // Disabling a Policy Type on a Root (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html#enable_policies_on_root) // in the AWS Organizations User Guide. ErrCodePolicyTypeNotAvailableForOrganizationException = "PolicyTypeNotAvailableForOrganizationException" // ErrCodePolicyTypeNotEnabledException for service response error code // "PolicyTypeNotEnabledException". // - // The specified policy type is not currently enabled in this root. You cannot + // The specified policy type isn't currently enabled in this root. You can't // attach policies of the specified type to entities in a root until you enable // that type in the root. 
For more information, see Enabling All Features in // Your Organization (http://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html) diff --git a/vendor/github.com/aws/aws-sdk-go/service/organizations/service.go b/vendor/github.com/aws/aws-sdk-go/service/organizations/service.go index 0ca4f04f13d..565c1715f3c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/organizations/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/organizations/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "organizations" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "organizations" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Organizations" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Organizations client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/pinpoint/api.go b/vendor/github.com/aws/aws-sdk-go/service/pinpoint/api.go new file mode 100644 index 00000000000..f4a8c811378 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/pinpoint/api.go @@ -0,0 +1,20721 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package pinpoint + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opCreateApp = "CreateApp" + +// CreateAppRequest generates a "aws/request.Request" representing the +// client's request for the CreateApp operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateApp for more information on using the CreateApp +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateAppRequest method. +// req, resp := client.CreateAppRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateApp +func (c *Pinpoint) CreateAppRequest(input *CreateAppInput) (req *request.Request, output *CreateAppOutput) { + op := &request.Operation{ + Name: opCreateApp, + HTTPMethod: "POST", + HTTPPath: "/v1/apps", + } + + if input == nil { + input = &CreateAppInput{} + } + + output = &CreateAppOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateApp API operation for Amazon Pinpoint. +// +// Creates or updates an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
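The boilerplate above notes that service and SDK failures come back as awserr.Error and suggests runtime type assertions on Code and Message. One way that could look for CreateApp is sketched below; the CreateApplicationRequest payload shape and the application name are assumptions for illustration, not taken from this diff:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	svc := pinpoint.New(session.Must(session.NewSession()))

	_, err := svc.CreateApp(&pinpoint.CreateAppInput{
		// CreateApplicationRequest and its Name field are assumed from the
		// generated input shapes; the name itself is a placeholder.
		CreateApplicationRequest: &pinpoint.CreateApplicationRequest{
			Name: aws.String("example-app"),
		},
	})
	if err != nil {
		// Type-assert to awserr.Error to branch on the service error code.
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case pinpoint.ErrCodeTooManyRequestsException:
				fmt.Println("throttled, retry later:", aerr.Message())
			default:
				fmt.Println(aerr.Code(), aerr.Message())
			}
			return
		}
		log.Fatal(err)
	}
}
```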
+// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation CreateApp for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateApp +func (c *Pinpoint) CreateApp(input *CreateAppInput) (*CreateAppOutput, error) { + req, out := c.CreateAppRequest(input) + return out, req.Send() +} + +// CreateAppWithContext is the same as CreateApp with the addition of +// the ability to pass a context and additional request options. +// +// See CreateApp for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) CreateAppWithContext(ctx aws.Context, input *CreateAppInput, opts ...request.Option) (*CreateAppOutput, error) { + req, out := c.CreateAppRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateCampaign = "CreateCampaign" + +// CreateCampaignRequest generates a "aws/request.Request" representing the +// client's request for the CreateCampaign operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCampaign for more information on using the CreateCampaign +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateCampaignRequest method. +// req, resp := client.CreateCampaignRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateCampaign +func (c *Pinpoint) CreateCampaignRequest(input *CreateCampaignInput) (req *request.Request, output *CreateCampaignOutput) { + op := &request.Operation{ + Name: opCreateCampaign, + HTTPMethod: "POST", + HTTPPath: "/v1/apps/{application-id}/campaigns", + } + + if input == nil { + input = &CreateCampaignInput{} + } + + output = &CreateCampaignOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCampaign API operation for Amazon Pinpoint. +// +// Creates or updates a campaign. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation CreateCampaign for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateCampaign +func (c *Pinpoint) CreateCampaign(input *CreateCampaignInput) (*CreateCampaignOutput, error) { + req, out := c.CreateCampaignRequest(input) + return out, req.Send() +} + +// CreateCampaignWithContext is the same as CreateCampaign with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCampaign for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) CreateCampaignWithContext(ctx aws.Context, input *CreateCampaignInput, opts ...request.Option) (*CreateCampaignOutput, error) { + req, out := c.CreateCampaignRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateExportJob = "CreateExportJob" + +// CreateExportJobRequest generates a "aws/request.Request" representing the +// client's request for the CreateExportJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateExportJob for more information on using the CreateExportJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateExportJobRequest method. +// req, resp := client.CreateExportJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateExportJob +func (c *Pinpoint) CreateExportJobRequest(input *CreateExportJobInput) (req *request.Request, output *CreateExportJobOutput) { + op := &request.Operation{ + Name: opCreateExportJob, + HTTPMethod: "POST", + HTTPPath: "/v1/apps/{application-id}/jobs/export", + } + + if input == nil { + input = &CreateExportJobInput{} + } + + output = &CreateExportJobOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateExportJob API operation for Amazon Pinpoint. +// +// Creates an export job. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation CreateExportJob for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateExportJob +func (c *Pinpoint) CreateExportJob(input *CreateExportJobInput) (*CreateExportJobOutput, error) { + req, out := c.CreateExportJobRequest(input) + return out, req.Send() +} + +// CreateExportJobWithContext is the same as CreateExportJob with the addition of +// the ability to pass a context and additional request options. +// +// See CreateExportJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) CreateExportJobWithContext(ctx aws.Context, input *CreateExportJobInput, opts ...request.Option) (*CreateExportJobOutput, error) { + req, out := c.CreateExportJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateImportJob = "CreateImportJob" + +// CreateImportJobRequest generates a "aws/request.Request" representing the +// client's request for the CreateImportJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateImportJob for more information on using the CreateImportJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateImportJobRequest method. +// req, resp := client.CreateImportJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateImportJob +func (c *Pinpoint) CreateImportJobRequest(input *CreateImportJobInput) (req *request.Request, output *CreateImportJobOutput) { + op := &request.Operation{ + Name: opCreateImportJob, + HTTPMethod: "POST", + HTTPPath: "/v1/apps/{application-id}/jobs/import", + } + + if input == nil { + input = &CreateImportJobInput{} + } + + output = &CreateImportJobOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateImportJob API operation for Amazon Pinpoint. +// +// Creates or updates an import job. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation CreateImportJob for usage and error information. 
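Each generated *Request constructor returns a request.Request before anything is sent, which is what these comments mean by injecting custom logic, such as headers or retry behavior, into the request lifecycle. A sketch using CreateImportJobRequest follows; the ImportJobRequest field names, the application ID, the role ARN, the S3 URL, and the header name are illustrative assumptions:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	svc := pinpoint.New(session.Must(session.NewSession()))

	// Build the request without sending it so it can still be customized.
	// The payload field names below are assumptions based on the generated
	// shapes for this operation; the IDs, ARN, and bucket are placeholders.
	req, resp := svc.CreateImportJobRequest(&pinpoint.CreateImportJobInput{
		ApplicationId: aws.String("application-id"),
		ImportJobRequest: &pinpoint.ImportJobRequest{
			Format:  aws.String("CSV"),
			RoleArn: aws.String("arn:aws:iam::111122223333:role/pinpoint-import"),
			S3Url:   aws.String("s3://example-bucket/endpoints/"),
		},
	})

	// Inject a custom header into the underlying HTTP request before Send;
	// the header name here is purely illustrative.
	req.HTTPRequest.Header.Set("X-Example-Trace-Id", "demo")

	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.ImportJobResponse)
}
```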
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateImportJob +func (c *Pinpoint) CreateImportJob(input *CreateImportJobInput) (*CreateImportJobOutput, error) { + req, out := c.CreateImportJobRequest(input) + return out, req.Send() +} + +// CreateImportJobWithContext is the same as CreateImportJob with the addition of +// the ability to pass a context and additional request options. +// +// See CreateImportJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) CreateImportJobWithContext(ctx aws.Context, input *CreateImportJobInput, opts ...request.Option) (*CreateImportJobOutput, error) { + req, out := c.CreateImportJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateSegment = "CreateSegment" + +// CreateSegmentRequest generates a "aws/request.Request" representing the +// client's request for the CreateSegment operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSegment for more information on using the CreateSegment +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSegmentRequest method. +// req, resp := client.CreateSegmentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateSegment +func (c *Pinpoint) CreateSegmentRequest(input *CreateSegmentInput) (req *request.Request, output *CreateSegmentOutput) { + op := &request.Operation{ + Name: opCreateSegment, + HTTPMethod: "POST", + HTTPPath: "/v1/apps/{application-id}/segments", + } + + if input == nil { + input = &CreateSegmentInput{} + } + + output = &CreateSegmentOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSegment API operation for Amazon Pinpoint. +// +// Used to create or update a segment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation CreateSegment for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/CreateSegment +func (c *Pinpoint) CreateSegment(input *CreateSegmentInput) (*CreateSegmentOutput, error) { + req, out := c.CreateSegmentRequest(input) + return out, req.Send() +} + +// CreateSegmentWithContext is the same as CreateSegment with the addition of +// the ability to pass a context and additional request options. +// +// See CreateSegment for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) CreateSegmentWithContext(ctx aws.Context, input *CreateSegmentInput, opts ...request.Option) (*CreateSegmentOutput, error) { + req, out := c.CreateSegmentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteAdmChannel = "DeleteAdmChannel" + +// DeleteAdmChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAdmChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteAdmChannel for more information on using the DeleteAdmChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAdmChannelRequest method. +// req, resp := client.DeleteAdmChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteAdmChannel +func (c *Pinpoint) DeleteAdmChannelRequest(input *DeleteAdmChannelInput) (req *request.Request, output *DeleteAdmChannelOutput) { + op := &request.Operation{ + Name: opDeleteAdmChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/adm", + } + + if input == nil { + input = &DeleteAdmChannelInput{} + } + + output = &DeleteAdmChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteAdmChannel API operation for Amazon Pinpoint. +// +// Delete an ADM channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteAdmChannel for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteAdmChannel +func (c *Pinpoint) DeleteAdmChannel(input *DeleteAdmChannelInput) (*DeleteAdmChannelOutput, error) { + req, out := c.DeleteAdmChannelRequest(input) + return out, req.Send() +} + +// DeleteAdmChannelWithContext is the same as DeleteAdmChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteAdmChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteAdmChannelWithContext(ctx aws.Context, input *DeleteAdmChannelInput, opts ...request.Option) (*DeleteAdmChannelOutput, error) { + req, out := c.DeleteAdmChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApnsChannel = "DeleteApnsChannel" + +// DeleteApnsChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApnsChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApnsChannel for more information on using the DeleteApnsChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteApnsChannelRequest method. +// req, resp := client.DeleteApnsChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApnsChannel +func (c *Pinpoint) DeleteApnsChannelRequest(input *DeleteApnsChannelInput) (req *request.Request, output *DeleteApnsChannelOutput) { + op := &request.Operation{ + Name: opDeleteApnsChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/apns", + } + + if input == nil { + input = &DeleteApnsChannelInput{} + } + + output = &DeleteApnsChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApnsChannel API operation for Amazon Pinpoint. +// +// Deletes the APNs channel for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
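+//
+//    // Editor's sketch (not part of the generated SDK documentation): one
+//    // hypothetical way to inspect the error codes listed below via a runtime
+//    // type assertion to awserr.Error, as the paragraph above describes.
+//    // Assumes the aws, awserr, fmt and pinpoint packages are imported;
+//    // "client" and the application ID are placeholders.
+//    _, err := client.DeleteApnsChannel(&pinpoint.DeleteApnsChannelInput{
+//        ApplicationId: aws.String("application-id"),
+//    })
+//    if aerr, ok := err.(awserr.Error); ok {
+//        switch aerr.Code() {
+//        case pinpoint.ErrCodeNotFoundException:
+//            // The application or its APNs channel does not exist.
+//        default:
+//            fmt.Println(aerr.Code(), aerr.Message())
+//        }
+//    }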
+// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteApnsChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApnsChannel +func (c *Pinpoint) DeleteApnsChannel(input *DeleteApnsChannelInput) (*DeleteApnsChannelOutput, error) { + req, out := c.DeleteApnsChannelRequest(input) + return out, req.Send() +} + +// DeleteApnsChannelWithContext is the same as DeleteApnsChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApnsChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteApnsChannelWithContext(ctx aws.Context, input *DeleteApnsChannelInput, opts ...request.Option) (*DeleteApnsChannelOutput, error) { + req, out := c.DeleteApnsChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApnsSandboxChannel = "DeleteApnsSandboxChannel" + +// DeleteApnsSandboxChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApnsSandboxChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApnsSandboxChannel for more information on using the DeleteApnsSandboxChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteApnsSandboxChannelRequest method. +// req, resp := client.DeleteApnsSandboxChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApnsSandboxChannel +func (c *Pinpoint) DeleteApnsSandboxChannelRequest(input *DeleteApnsSandboxChannelInput) (req *request.Request, output *DeleteApnsSandboxChannelOutput) { + op := &request.Operation{ + Name: opDeleteApnsSandboxChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/apns_sandbox", + } + + if input == nil { + input = &DeleteApnsSandboxChannelInput{} + } + + output = &DeleteApnsSandboxChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApnsSandboxChannel API operation for Amazon Pinpoint. +// +// Delete an APNS sandbox channel. 
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteApnsSandboxChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApnsSandboxChannel +func (c *Pinpoint) DeleteApnsSandboxChannel(input *DeleteApnsSandboxChannelInput) (*DeleteApnsSandboxChannelOutput, error) { + req, out := c.DeleteApnsSandboxChannelRequest(input) + return out, req.Send() +} + +// DeleteApnsSandboxChannelWithContext is the same as DeleteApnsSandboxChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApnsSandboxChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteApnsSandboxChannelWithContext(ctx aws.Context, input *DeleteApnsSandboxChannelInput, opts ...request.Option) (*DeleteApnsSandboxChannelOutput, error) { + req, out := c.DeleteApnsSandboxChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApnsVoipChannel = "DeleteApnsVoipChannel" + +// DeleteApnsVoipChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApnsVoipChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApnsVoipChannel for more information on using the DeleteApnsVoipChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteApnsVoipChannelRequest method. 
+// req, resp := client.DeleteApnsVoipChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApnsVoipChannel +func (c *Pinpoint) DeleteApnsVoipChannelRequest(input *DeleteApnsVoipChannelInput) (req *request.Request, output *DeleteApnsVoipChannelOutput) { + op := &request.Operation{ + Name: opDeleteApnsVoipChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/apns_voip", + } + + if input == nil { + input = &DeleteApnsVoipChannelInput{} + } + + output = &DeleteApnsVoipChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApnsVoipChannel API operation for Amazon Pinpoint. +// +// Delete an APNS VoIP channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteApnsVoipChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApnsVoipChannel +func (c *Pinpoint) DeleteApnsVoipChannel(input *DeleteApnsVoipChannelInput) (*DeleteApnsVoipChannelOutput, error) { + req, out := c.DeleteApnsVoipChannelRequest(input) + return out, req.Send() +} + +// DeleteApnsVoipChannelWithContext is the same as DeleteApnsVoipChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApnsVoipChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteApnsVoipChannelWithContext(ctx aws.Context, input *DeleteApnsVoipChannelInput, opts ...request.Option) (*DeleteApnsVoipChannelOutput, error) { + req, out := c.DeleteApnsVoipChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApnsVoipSandboxChannel = "DeleteApnsVoipSandboxChannel" + +// DeleteApnsVoipSandboxChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApnsVoipSandboxChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApnsVoipSandboxChannel for more information on using the DeleteApnsVoipSandboxChannel +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteApnsVoipSandboxChannelRequest method. +// req, resp := client.DeleteApnsVoipSandboxChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApnsVoipSandboxChannel +func (c *Pinpoint) DeleteApnsVoipSandboxChannelRequest(input *DeleteApnsVoipSandboxChannelInput) (req *request.Request, output *DeleteApnsVoipSandboxChannelOutput) { + op := &request.Operation{ + Name: opDeleteApnsVoipSandboxChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/apns_voip_sandbox", + } + + if input == nil { + input = &DeleteApnsVoipSandboxChannelInput{} + } + + output = &DeleteApnsVoipSandboxChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApnsVoipSandboxChannel API operation for Amazon Pinpoint. +// +// Delete an APNS VoIP sandbox channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteApnsVoipSandboxChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApnsVoipSandboxChannel +func (c *Pinpoint) DeleteApnsVoipSandboxChannel(input *DeleteApnsVoipSandboxChannelInput) (*DeleteApnsVoipSandboxChannelOutput, error) { + req, out := c.DeleteApnsVoipSandboxChannelRequest(input) + return out, req.Send() +} + +// DeleteApnsVoipSandboxChannelWithContext is the same as DeleteApnsVoipSandboxChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApnsVoipSandboxChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteApnsVoipSandboxChannelWithContext(ctx aws.Context, input *DeleteApnsVoipSandboxChannelInput, opts ...request.Option) (*DeleteApnsVoipSandboxChannelOutput, error) { + req, out := c.DeleteApnsVoipSandboxChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApp = "DeleteApp" + +// DeleteAppRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApp operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApp for more information on using the DeleteApp +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteAppRequest method. +// req, resp := client.DeleteAppRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApp +func (c *Pinpoint) DeleteAppRequest(input *DeleteAppInput) (req *request.Request, output *DeleteAppOutput) { + op := &request.Operation{ + Name: opDeleteApp, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}", + } + + if input == nil { + input = &DeleteAppInput{} + } + + output = &DeleteAppOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteApp API operation for Amazon Pinpoint. +// +// Deletes an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteApp for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteApp +func (c *Pinpoint) DeleteApp(input *DeleteAppInput) (*DeleteAppOutput, error) { + req, out := c.DeleteAppRequest(input) + return out, req.Send() +} + +// DeleteAppWithContext is the same as DeleteApp with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApp for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteAppWithContext(ctx aws.Context, input *DeleteAppInput, opts ...request.Option) (*DeleteAppOutput, error) { + req, out := c.DeleteAppRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBaiduChannel = "DeleteBaiduChannel" + +// DeleteBaiduChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBaiduChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
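+//
+//    // Editor's sketch (not part of the generated SDK documentation): the
+//    // two-step Request/Send flow described above, with a hypothetical custom
+//    // header set on the underlying HTTP request before it is sent. Assumes
+//    // the aws, fmt and pinpoint packages are imported; "client" and the
+//    // application ID are placeholders.
+//    req, out := client.DeleteBaiduChannelRequest(&pinpoint.DeleteBaiduChannelInput{
+//        ApplicationId: aws.String("application-id"),
+//    })
+//    req.HTTPRequest.Header.Set("X-Example-Header", "value")
+//    if err := req.Send(); err == nil {
+//        fmt.Println(out) // out is only populated once Send returns without error
+//    }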
+// +// See DeleteBaiduChannel for more information on using the DeleteBaiduChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBaiduChannelRequest method. +// req, resp := client.DeleteBaiduChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteBaiduChannel +func (c *Pinpoint) DeleteBaiduChannelRequest(input *DeleteBaiduChannelInput) (req *request.Request, output *DeleteBaiduChannelOutput) { + op := &request.Operation{ + Name: opDeleteBaiduChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/baidu", + } + + if input == nil { + input = &DeleteBaiduChannelInput{} + } + + output = &DeleteBaiduChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteBaiduChannel API operation for Amazon Pinpoint. +// +// Delete a BAIDU GCM channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteBaiduChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteBaiduChannel +func (c *Pinpoint) DeleteBaiduChannel(input *DeleteBaiduChannelInput) (*DeleteBaiduChannelOutput, error) { + req, out := c.DeleteBaiduChannelRequest(input) + return out, req.Send() +} + +// DeleteBaiduChannelWithContext is the same as DeleteBaiduChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBaiduChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteBaiduChannelWithContext(ctx aws.Context, input *DeleteBaiduChannelInput, opts ...request.Option) (*DeleteBaiduChannelOutput, error) { + req, out := c.DeleteBaiduChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteCampaign = "DeleteCampaign" + +// DeleteCampaignRequest generates a "aws/request.Request" representing the +// client's request for the DeleteCampaign operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DeleteCampaign for more information on using the DeleteCampaign +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteCampaignRequest method. +// req, resp := client.DeleteCampaignRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteCampaign +func (c *Pinpoint) DeleteCampaignRequest(input *DeleteCampaignInput) (req *request.Request, output *DeleteCampaignOutput) { + op := &request.Operation{ + Name: opDeleteCampaign, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/campaigns/{campaign-id}", + } + + if input == nil { + input = &DeleteCampaignInput{} + } + + output = &DeleteCampaignOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteCampaign API operation for Amazon Pinpoint. +// +// Deletes a campaign. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteCampaign for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteCampaign +func (c *Pinpoint) DeleteCampaign(input *DeleteCampaignInput) (*DeleteCampaignOutput, error) { + req, out := c.DeleteCampaignRequest(input) + return out, req.Send() +} + +// DeleteCampaignWithContext is the same as DeleteCampaign with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteCampaign for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteCampaignWithContext(ctx aws.Context, input *DeleteCampaignInput, opts ...request.Option) (*DeleteCampaignOutput, error) { + req, out := c.DeleteCampaignRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteEmailChannel = "DeleteEmailChannel" + +// DeleteEmailChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteEmailChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DeleteEmailChannel for more information on using the DeleteEmailChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteEmailChannelRequest method. +// req, resp := client.DeleteEmailChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteEmailChannel +func (c *Pinpoint) DeleteEmailChannelRequest(input *DeleteEmailChannelInput) (req *request.Request, output *DeleteEmailChannelOutput) { + op := &request.Operation{ + Name: opDeleteEmailChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/email", + } + + if input == nil { + input = &DeleteEmailChannelInput{} + } + + output = &DeleteEmailChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteEmailChannel API operation for Amazon Pinpoint. +// +// Delete an email channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteEmailChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteEmailChannel +func (c *Pinpoint) DeleteEmailChannel(input *DeleteEmailChannelInput) (*DeleteEmailChannelOutput, error) { + req, out := c.DeleteEmailChannelRequest(input) + return out, req.Send() +} + +// DeleteEmailChannelWithContext is the same as DeleteEmailChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteEmailChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteEmailChannelWithContext(ctx aws.Context, input *DeleteEmailChannelInput, opts ...request.Option) (*DeleteEmailChannelOutput, error) { + req, out := c.DeleteEmailChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteEndpoint = "DeleteEndpoint" + +// DeleteEndpointRequest generates a "aws/request.Request" representing the +// client's request for the DeleteEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteEndpoint for more information on using the DeleteEndpoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteEndpointRequest method. +// req, resp := client.DeleteEndpointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteEndpoint +func (c *Pinpoint) DeleteEndpointRequest(input *DeleteEndpointInput) (req *request.Request, output *DeleteEndpointOutput) { + op := &request.Operation{ + Name: opDeleteEndpoint, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/endpoints/{endpoint-id}", + } + + if input == nil { + input = &DeleteEndpointInput{} + } + + output = &DeleteEndpointOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteEndpoint API operation for Amazon Pinpoint. +// +// Deletes an endpoint. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteEndpoint +func (c *Pinpoint) DeleteEndpoint(input *DeleteEndpointInput) (*DeleteEndpointOutput, error) { + req, out := c.DeleteEndpointRequest(input) + return out, req.Send() +} + +// DeleteEndpointWithContext is the same as DeleteEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteEndpointWithContext(ctx aws.Context, input *DeleteEndpointInput, opts ...request.Option) (*DeleteEndpointOutput, error) { + req, out := c.DeleteEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteEventStream = "DeleteEventStream" + +// DeleteEventStreamRequest generates a "aws/request.Request" representing the +// client's request for the DeleteEventStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteEventStream for more information on using the DeleteEventStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteEventStreamRequest method. +// req, resp := client.DeleteEventStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteEventStream +func (c *Pinpoint) DeleteEventStreamRequest(input *DeleteEventStreamInput) (req *request.Request, output *DeleteEventStreamOutput) { + op := &request.Operation{ + Name: opDeleteEventStream, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/eventstream", + } + + if input == nil { + input = &DeleteEventStreamInput{} + } + + output = &DeleteEventStreamOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteEventStream API operation for Amazon Pinpoint. +// +// Deletes the event stream for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteEventStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteEventStream +func (c *Pinpoint) DeleteEventStream(input *DeleteEventStreamInput) (*DeleteEventStreamOutput, error) { + req, out := c.DeleteEventStreamRequest(input) + return out, req.Send() +} + +// DeleteEventStreamWithContext is the same as DeleteEventStream with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteEventStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteEventStreamWithContext(ctx aws.Context, input *DeleteEventStreamInput, opts ...request.Option) (*DeleteEventStreamOutput, error) { + req, out := c.DeleteEventStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteGcmChannel = "DeleteGcmChannel" + +// DeleteGcmChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteGcmChannel operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteGcmChannel for more information on using the DeleteGcmChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteGcmChannelRequest method. +// req, resp := client.DeleteGcmChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteGcmChannel +func (c *Pinpoint) DeleteGcmChannelRequest(input *DeleteGcmChannelInput) (req *request.Request, output *DeleteGcmChannelOutput) { + op := &request.Operation{ + Name: opDeleteGcmChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/gcm", + } + + if input == nil { + input = &DeleteGcmChannelInput{} + } + + output = &DeleteGcmChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteGcmChannel API operation for Amazon Pinpoint. +// +// Deletes the GCM channel for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteGcmChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteGcmChannel +func (c *Pinpoint) DeleteGcmChannel(input *DeleteGcmChannelInput) (*DeleteGcmChannelOutput, error) { + req, out := c.DeleteGcmChannelRequest(input) + return out, req.Send() +} + +// DeleteGcmChannelWithContext is the same as DeleteGcmChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteGcmChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteGcmChannelWithContext(ctx aws.Context, input *DeleteGcmChannelInput, opts ...request.Option) (*DeleteGcmChannelOutput, error) { + req, out := c.DeleteGcmChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteSegment = "DeleteSegment" + +// DeleteSegmentRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSegment operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteSegment for more information on using the DeleteSegment +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteSegmentRequest method. +// req, resp := client.DeleteSegmentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteSegment +func (c *Pinpoint) DeleteSegmentRequest(input *DeleteSegmentInput) (req *request.Request, output *DeleteSegmentOutput) { + op := &request.Operation{ + Name: opDeleteSegment, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/segments/{segment-id}", + } + + if input == nil { + input = &DeleteSegmentInput{} + } + + output = &DeleteSegmentOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteSegment API operation for Amazon Pinpoint. +// +// Deletes a segment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteSegment for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteSegment +func (c *Pinpoint) DeleteSegment(input *DeleteSegmentInput) (*DeleteSegmentOutput, error) { + req, out := c.DeleteSegmentRequest(input) + return out, req.Send() +} + +// DeleteSegmentWithContext is the same as DeleteSegment with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSegment for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteSegmentWithContext(ctx aws.Context, input *DeleteSegmentInput, opts ...request.Option) (*DeleteSegmentOutput, error) { + req, out := c.DeleteSegmentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteSmsChannel = "DeleteSmsChannel" + +// DeleteSmsChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSmsChannel operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteSmsChannel for more information on using the DeleteSmsChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteSmsChannelRequest method. +// req, resp := client.DeleteSmsChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteSmsChannel +func (c *Pinpoint) DeleteSmsChannelRequest(input *DeleteSmsChannelInput) (req *request.Request, output *DeleteSmsChannelOutput) { + op := &request.Operation{ + Name: opDeleteSmsChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/sms", + } + + if input == nil { + input = &DeleteSmsChannelInput{} + } + + output = &DeleteSmsChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteSmsChannel API operation for Amazon Pinpoint. +// +// Delete an SMS channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteSmsChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteSmsChannel +func (c *Pinpoint) DeleteSmsChannel(input *DeleteSmsChannelInput) (*DeleteSmsChannelOutput, error) { + req, out := c.DeleteSmsChannelRequest(input) + return out, req.Send() +} + +// DeleteSmsChannelWithContext is the same as DeleteSmsChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSmsChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteSmsChannelWithContext(ctx aws.Context, input *DeleteSmsChannelInput, opts ...request.Option) (*DeleteSmsChannelOutput, error) { + req, out := c.DeleteSmsChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteUserEndpoints = "DeleteUserEndpoints" + +// DeleteUserEndpointsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteUserEndpoints operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteUserEndpoints for more information on using the DeleteUserEndpoints +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteUserEndpointsRequest method. +// req, resp := client.DeleteUserEndpointsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteUserEndpoints +func (c *Pinpoint) DeleteUserEndpointsRequest(input *DeleteUserEndpointsInput) (req *request.Request, output *DeleteUserEndpointsOutput) { + op := &request.Operation{ + Name: opDeleteUserEndpoints, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/users/{user-id}", + } + + if input == nil { + input = &DeleteUserEndpointsInput{} + } + + output = &DeleteUserEndpointsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteUserEndpoints API operation for Amazon Pinpoint. +// +// Deletes endpoints that are associated with a User ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteUserEndpoints for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteUserEndpoints +func (c *Pinpoint) DeleteUserEndpoints(input *DeleteUserEndpointsInput) (*DeleteUserEndpointsOutput, error) { + req, out := c.DeleteUserEndpointsRequest(input) + return out, req.Send() +} + +// DeleteUserEndpointsWithContext is the same as DeleteUserEndpoints with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteUserEndpoints for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) DeleteUserEndpointsWithContext(ctx aws.Context, input *DeleteUserEndpointsInput, opts ...request.Option) (*DeleteUserEndpointsOutput, error) { + req, out := c.DeleteUserEndpointsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDeleteVoiceChannel = "DeleteVoiceChannel" + +// DeleteVoiceChannelRequest generates a "aws/request.Request" representing the +// client's request for the DeleteVoiceChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteVoiceChannel for more information on using the DeleteVoiceChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteVoiceChannelRequest method. +// req, resp := client.DeleteVoiceChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteVoiceChannel +func (c *Pinpoint) DeleteVoiceChannelRequest(input *DeleteVoiceChannelInput) (req *request.Request, output *DeleteVoiceChannelOutput) { + op := &request.Operation{ + Name: opDeleteVoiceChannel, + HTTPMethod: "DELETE", + HTTPPath: "/v1/apps/{application-id}/channels/voice", + } + + if input == nil { + input = &DeleteVoiceChannelInput{} + } + + output = &DeleteVoiceChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteVoiceChannel API operation for Amazon Pinpoint. +// +// Delete an Voice channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation DeleteVoiceChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/DeleteVoiceChannel +func (c *Pinpoint) DeleteVoiceChannel(input *DeleteVoiceChannelInput) (*DeleteVoiceChannelOutput, error) { + req, out := c.DeleteVoiceChannelRequest(input) + return out, req.Send() +} + +// DeleteVoiceChannelWithContext is the same as DeleteVoiceChannel with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteVoiceChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
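+//
+//    // Editor's sketch (not part of the generated SDK documentation): passing
+//    // a context with a timeout, as described above. Assumes the standard
+//    // library context and time packages plus aws and pinpoint are imported;
+//    // "client" and the application ID are placeholders.
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    _, err := client.DeleteVoiceChannelWithContext(ctx, &pinpoint.DeleteVoiceChannelInput{
+//        ApplicationId: aws.String("application-id"),
+//    })
+//    if err != nil {
+//        // If the timeout fires before the call completes, the request is
+//        // cancelled and a deadline-exceeded error is returned here.
+//    }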
+func (c *Pinpoint) DeleteVoiceChannelWithContext(ctx aws.Context, input *DeleteVoiceChannelInput, opts ...request.Option) (*DeleteVoiceChannelOutput, error) { + req, out := c.DeleteVoiceChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetAdmChannel = "GetAdmChannel" + +// GetAdmChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetAdmChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAdmChannel for more information on using the GetAdmChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAdmChannelRequest method. +// req, resp := client.GetAdmChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetAdmChannel +func (c *Pinpoint) GetAdmChannelRequest(input *GetAdmChannelInput) (req *request.Request, output *GetAdmChannelOutput) { + op := &request.Operation{ + Name: opGetAdmChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/adm", + } + + if input == nil { + input = &GetAdmChannelInput{} + } + + output = &GetAdmChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAdmChannel API operation for Amazon Pinpoint. +// +// Get an ADM channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetAdmChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetAdmChannel +func (c *Pinpoint) GetAdmChannel(input *GetAdmChannelInput) (*GetAdmChannelOutput, error) { + req, out := c.GetAdmChannelRequest(input) + return out, req.Send() +} + +// GetAdmChannelWithContext is the same as GetAdmChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetAdmChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *Pinpoint) GetAdmChannelWithContext(ctx aws.Context, input *GetAdmChannelInput, opts ...request.Option) (*GetAdmChannelOutput, error) { + req, out := c.GetAdmChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApnsChannel = "GetApnsChannel" + +// GetApnsChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetApnsChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApnsChannel for more information on using the GetApnsChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetApnsChannelRequest method. +// req, resp := client.GetApnsChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApnsChannel +func (c *Pinpoint) GetApnsChannelRequest(input *GetApnsChannelInput) (req *request.Request, output *GetApnsChannelOutput) { + op := &request.Operation{ + Name: opGetApnsChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/apns", + } + + if input == nil { + input = &GetApnsChannelInput{} + } + + output = &GetApnsChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApnsChannel API operation for Amazon Pinpoint. +// +// Returns information about the APNs channel for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetApnsChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApnsChannel +func (c *Pinpoint) GetApnsChannel(input *GetApnsChannelInput) (*GetApnsChannelOutput, error) { + req, out := c.GetApnsChannelRequest(input) + return out, req.Send() +} + +// GetApnsChannelWithContext is the same as GetApnsChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetApnsChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
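
(Editorial aside, not part of the vendored file.) Each operation also has a `WithContext` variant that, as documented above, requires a non-nil context and can be used for request cancellation. A hedged sketch using `GetAdmChannelWithContext` with a deadline, under the same assumptions about client setup and the `ApplicationId` field name:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	client := pinpoint.New(session.Must(session.NewSession()))

	// The context must be non-nil; here it also bounds how long the call may run.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	out, err := client.GetAdmChannelWithContext(ctx, &pinpoint.GetAdmChannelInput{
		ApplicationId: aws.String("APPLICATION_ID"), // assumed field name
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```
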
+func (c *Pinpoint) GetApnsChannelWithContext(ctx aws.Context, input *GetApnsChannelInput, opts ...request.Option) (*GetApnsChannelOutput, error) { + req, out := c.GetApnsChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApnsSandboxChannel = "GetApnsSandboxChannel" + +// GetApnsSandboxChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetApnsSandboxChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApnsSandboxChannel for more information on using the GetApnsSandboxChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetApnsSandboxChannelRequest method. +// req, resp := client.GetApnsSandboxChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApnsSandboxChannel +func (c *Pinpoint) GetApnsSandboxChannelRequest(input *GetApnsSandboxChannelInput) (req *request.Request, output *GetApnsSandboxChannelOutput) { + op := &request.Operation{ + Name: opGetApnsSandboxChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/apns_sandbox", + } + + if input == nil { + input = &GetApnsSandboxChannelInput{} + } + + output = &GetApnsSandboxChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApnsSandboxChannel API operation for Amazon Pinpoint. +// +// Get an APNS sandbox channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetApnsSandboxChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApnsSandboxChannel +func (c *Pinpoint) GetApnsSandboxChannel(input *GetApnsSandboxChannelInput) (*GetApnsSandboxChannelOutput, error) { + req, out := c.GetApnsSandboxChannelRequest(input) + return out, req.Send() +} + +// GetApnsSandboxChannelWithContext is the same as GetApnsSandboxChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetApnsSandboxChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetApnsSandboxChannelWithContext(ctx aws.Context, input *GetApnsSandboxChannelInput, opts ...request.Option) (*GetApnsSandboxChannelOutput, error) { + req, out := c.GetApnsSandboxChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApnsVoipChannel = "GetApnsVoipChannel" + +// GetApnsVoipChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetApnsVoipChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApnsVoipChannel for more information on using the GetApnsVoipChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetApnsVoipChannelRequest method. +// req, resp := client.GetApnsVoipChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApnsVoipChannel +func (c *Pinpoint) GetApnsVoipChannelRequest(input *GetApnsVoipChannelInput) (req *request.Request, output *GetApnsVoipChannelOutput) { + op := &request.Operation{ + Name: opGetApnsVoipChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/apns_voip", + } + + if input == nil { + input = &GetApnsVoipChannelInput{} + } + + output = &GetApnsVoipChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApnsVoipChannel API operation for Amazon Pinpoint. +// +// Get an APNS VoIP channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetApnsVoipChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApnsVoipChannel +func (c *Pinpoint) GetApnsVoipChannel(input *GetApnsVoipChannelInput) (*GetApnsVoipChannelOutput, error) { + req, out := c.GetApnsVoipChannelRequest(input) + return out, req.Send() +} + +// GetApnsVoipChannelWithContext is the same as GetApnsVoipChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetApnsVoipChannel for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetApnsVoipChannelWithContext(ctx aws.Context, input *GetApnsVoipChannelInput, opts ...request.Option) (*GetApnsVoipChannelOutput, error) { + req, out := c.GetApnsVoipChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApnsVoipSandboxChannel = "GetApnsVoipSandboxChannel" + +// GetApnsVoipSandboxChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetApnsVoipSandboxChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApnsVoipSandboxChannel for more information on using the GetApnsVoipSandboxChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetApnsVoipSandboxChannelRequest method. +// req, resp := client.GetApnsVoipSandboxChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApnsVoipSandboxChannel +func (c *Pinpoint) GetApnsVoipSandboxChannelRequest(input *GetApnsVoipSandboxChannelInput) (req *request.Request, output *GetApnsVoipSandboxChannelOutput) { + op := &request.Operation{ + Name: opGetApnsVoipSandboxChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/apns_voip_sandbox", + } + + if input == nil { + input = &GetApnsVoipSandboxChannelInput{} + } + + output = &GetApnsVoipSandboxChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApnsVoipSandboxChannel API operation for Amazon Pinpoint. +// +// Get an APNS VoIPSandbox channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetApnsVoipSandboxChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApnsVoipSandboxChannel +func (c *Pinpoint) GetApnsVoipSandboxChannel(input *GetApnsVoipSandboxChannelInput) (*GetApnsVoipSandboxChannelOutput, error) { + req, out := c.GetApnsVoipSandboxChannelRequest(input) + return out, req.Send() +} + +// GetApnsVoipSandboxChannelWithContext is the same as GetApnsVoipSandboxChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetApnsVoipSandboxChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetApnsVoipSandboxChannelWithContext(ctx aws.Context, input *GetApnsVoipSandboxChannelInput, opts ...request.Option) (*GetApnsVoipSandboxChannelOutput, error) { + req, out := c.GetApnsVoipSandboxChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApp = "GetApp" + +// GetAppRequest generates a "aws/request.Request" representing the +// client's request for the GetApp operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApp for more information on using the GetApp +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAppRequest method. +// req, resp := client.GetAppRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApp +func (c *Pinpoint) GetAppRequest(input *GetAppInput) (req *request.Request, output *GetAppOutput) { + op := &request.Operation{ + Name: opGetApp, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}", + } + + if input == nil { + input = &GetAppInput{} + } + + output = &GetAppOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApp API operation for Amazon Pinpoint. +// +// Returns information about an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetApp for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApp +func (c *Pinpoint) GetApp(input *GetAppInput) (*GetAppOutput, error) { + req, out := c.GetAppRequest(input) + return out, req.Send() +} + +// GetAppWithContext is the same as GetApp with the addition of +// the ability to pass a context and additional request options. +// +// See GetApp for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetAppWithContext(ctx aws.Context, input *GetAppInput, opts ...request.Option) (*GetAppOutput, error) { + req, out := c.GetAppRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApplicationSettings = "GetApplicationSettings" + +// GetApplicationSettingsRequest generates a "aws/request.Request" representing the +// client's request for the GetApplicationSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApplicationSettings for more information on using the GetApplicationSettings +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetApplicationSettingsRequest method. +// req, resp := client.GetApplicationSettingsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApplicationSettings +func (c *Pinpoint) GetApplicationSettingsRequest(input *GetApplicationSettingsInput) (req *request.Request, output *GetApplicationSettingsOutput) { + op := &request.Operation{ + Name: opGetApplicationSettings, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/settings", + } + + if input == nil { + input = &GetApplicationSettingsInput{} + } + + output = &GetApplicationSettingsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApplicationSettings API operation for Amazon Pinpoint. +// +// Used to request the settings for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetApplicationSettings for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
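
(Editorial aside, not part of the vendored file.) The comments above state that service errors satisfy `awserr.Error` and list the error codes each operation can return. A small sketch showing how a caller might branch on one of those codes after `GetApp`, again assuming the `ApplicationId` field name and environment-provided credentials:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	client := pinpoint.New(session.Must(session.NewSession()))

	out, err := client.GetApp(&pinpoint.GetAppInput{
		ApplicationId: aws.String("APPLICATION_ID"), // assumed field name
	})
	if err != nil {
		// Service errors satisfy awserr.Error; branch on the documented codes.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == pinpoint.ErrCodeNotFoundException {
			log.Fatalf("no such Pinpoint app: %s", aerr.Message())
		}
		log.Fatal(err)
	}
	fmt.Println(out)
}
```
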
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApplicationSettings +func (c *Pinpoint) GetApplicationSettings(input *GetApplicationSettingsInput) (*GetApplicationSettingsOutput, error) { + req, out := c.GetApplicationSettingsRequest(input) + return out, req.Send() +} + +// GetApplicationSettingsWithContext is the same as GetApplicationSettings with the addition of +// the ability to pass a context and additional request options. +// +// See GetApplicationSettings for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetApplicationSettingsWithContext(ctx aws.Context, input *GetApplicationSettingsInput, opts ...request.Option) (*GetApplicationSettingsOutput, error) { + req, out := c.GetApplicationSettingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApps = "GetApps" + +// GetAppsRequest generates a "aws/request.Request" representing the +// client's request for the GetApps operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApps for more information on using the GetApps +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAppsRequest method. +// req, resp := client.GetAppsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApps +func (c *Pinpoint) GetAppsRequest(input *GetAppsInput) (req *request.Request, output *GetAppsOutput) { + op := &request.Operation{ + Name: opGetApps, + HTTPMethod: "GET", + HTTPPath: "/v1/apps", + } + + if input == nil { + input = &GetAppsInput{} + } + + output = &GetAppsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApps API operation for Amazon Pinpoint. +// +// Returns information about your apps. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetApps for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetApps +func (c *Pinpoint) GetApps(input *GetAppsInput) (*GetAppsOutput, error) { + req, out := c.GetAppsRequest(input) + return out, req.Send() +} + +// GetAppsWithContext is the same as GetApps with the addition of +// the ability to pass a context and additional request options. +// +// See GetApps for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetAppsWithContext(ctx aws.Context, input *GetAppsInput, opts ...request.Option) (*GetAppsOutput, error) { + req, out := c.GetAppsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetBaiduChannel = "GetBaiduChannel" + +// GetBaiduChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetBaiduChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBaiduChannel for more information on using the GetBaiduChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBaiduChannelRequest method. +// req, resp := client.GetBaiduChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetBaiduChannel +func (c *Pinpoint) GetBaiduChannelRequest(input *GetBaiduChannelInput) (req *request.Request, output *GetBaiduChannelOutput) { + op := &request.Operation{ + Name: opGetBaiduChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/baidu", + } + + if input == nil { + input = &GetBaiduChannelInput{} + } + + output = &GetBaiduChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBaiduChannel API operation for Amazon Pinpoint. +// +// Get a BAIDU GCM channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetBaiduChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetBaiduChannel +func (c *Pinpoint) GetBaiduChannel(input *GetBaiduChannelInput) (*GetBaiduChannelOutput, error) { + req, out := c.GetBaiduChannelRequest(input) + return out, req.Send() +} + +// GetBaiduChannelWithContext is the same as GetBaiduChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetBaiduChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetBaiduChannelWithContext(ctx aws.Context, input *GetBaiduChannelInput, opts ...request.Option) (*GetBaiduChannelOutput, error) { + req, out := c.GetBaiduChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCampaign = "GetCampaign" + +// GetCampaignRequest generates a "aws/request.Request" representing the +// client's request for the GetCampaign operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCampaign for more information on using the GetCampaign +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCampaignRequest method. +// req, resp := client.GetCampaignRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaign +func (c *Pinpoint) GetCampaignRequest(input *GetCampaignInput) (req *request.Request, output *GetCampaignOutput) { + op := &request.Operation{ + Name: opGetCampaign, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/campaigns/{campaign-id}", + } + + if input == nil { + input = &GetCampaignInput{} + } + + output = &GetCampaignOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCampaign API operation for Amazon Pinpoint. +// +// Returns information about a campaign. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetCampaign for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaign +func (c *Pinpoint) GetCampaign(input *GetCampaignInput) (*GetCampaignOutput, error) { + req, out := c.GetCampaignRequest(input) + return out, req.Send() +} + +// GetCampaignWithContext is the same as GetCampaign with the addition of +// the ability to pass a context and additional request options. +// +// See GetCampaign for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetCampaignWithContext(ctx aws.Context, input *GetCampaignInput, opts ...request.Option) (*GetCampaignOutput, error) { + req, out := c.GetCampaignRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCampaignActivities = "GetCampaignActivities" + +// GetCampaignActivitiesRequest generates a "aws/request.Request" representing the +// client's request for the GetCampaignActivities operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCampaignActivities for more information on using the GetCampaignActivities +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCampaignActivitiesRequest method. +// req, resp := client.GetCampaignActivitiesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaignActivities +func (c *Pinpoint) GetCampaignActivitiesRequest(input *GetCampaignActivitiesInput) (req *request.Request, output *GetCampaignActivitiesOutput) { + op := &request.Operation{ + Name: opGetCampaignActivities, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/campaigns/{campaign-id}/activities", + } + + if input == nil { + input = &GetCampaignActivitiesInput{} + } + + output = &GetCampaignActivitiesOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCampaignActivities API operation for Amazon Pinpoint. +// +// Returns information about the activity performed by a campaign. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetCampaignActivities for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. 
+// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaignActivities +func (c *Pinpoint) GetCampaignActivities(input *GetCampaignActivitiesInput) (*GetCampaignActivitiesOutput, error) { + req, out := c.GetCampaignActivitiesRequest(input) + return out, req.Send() +} + +// GetCampaignActivitiesWithContext is the same as GetCampaignActivities with the addition of +// the ability to pass a context and additional request options. +// +// See GetCampaignActivities for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetCampaignActivitiesWithContext(ctx aws.Context, input *GetCampaignActivitiesInput, opts ...request.Option) (*GetCampaignActivitiesOutput, error) { + req, out := c.GetCampaignActivitiesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCampaignVersion = "GetCampaignVersion" + +// GetCampaignVersionRequest generates a "aws/request.Request" representing the +// client's request for the GetCampaignVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCampaignVersion for more information on using the GetCampaignVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCampaignVersionRequest method. +// req, resp := client.GetCampaignVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaignVersion +func (c *Pinpoint) GetCampaignVersionRequest(input *GetCampaignVersionInput) (req *request.Request, output *GetCampaignVersionOutput) { + op := &request.Operation{ + Name: opGetCampaignVersion, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/campaigns/{campaign-id}/versions/{version}", + } + + if input == nil { + input = &GetCampaignVersionInput{} + } + + output = &GetCampaignVersionOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCampaignVersion API operation for Amazon Pinpoint. +// +// Returns information about a specific version of a campaign. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetCampaignVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. 
+// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaignVersion +func (c *Pinpoint) GetCampaignVersion(input *GetCampaignVersionInput) (*GetCampaignVersionOutput, error) { + req, out := c.GetCampaignVersionRequest(input) + return out, req.Send() +} + +// GetCampaignVersionWithContext is the same as GetCampaignVersion with the addition of +// the ability to pass a context and additional request options. +// +// See GetCampaignVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetCampaignVersionWithContext(ctx aws.Context, input *GetCampaignVersionInput, opts ...request.Option) (*GetCampaignVersionOutput, error) { + req, out := c.GetCampaignVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCampaignVersions = "GetCampaignVersions" + +// GetCampaignVersionsRequest generates a "aws/request.Request" representing the +// client's request for the GetCampaignVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCampaignVersions for more information on using the GetCampaignVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCampaignVersionsRequest method. +// req, resp := client.GetCampaignVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaignVersions +func (c *Pinpoint) GetCampaignVersionsRequest(input *GetCampaignVersionsInput) (req *request.Request, output *GetCampaignVersionsOutput) { + op := &request.Operation{ + Name: opGetCampaignVersions, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/campaigns/{campaign-id}/versions", + } + + if input == nil { + input = &GetCampaignVersionsInput{} + } + + output = &GetCampaignVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCampaignVersions API operation for Amazon Pinpoint. +// +// Returns information about your campaign versions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetCampaignVersions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. 
+// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaignVersions +func (c *Pinpoint) GetCampaignVersions(input *GetCampaignVersionsInput) (*GetCampaignVersionsOutput, error) { + req, out := c.GetCampaignVersionsRequest(input) + return out, req.Send() +} + +// GetCampaignVersionsWithContext is the same as GetCampaignVersions with the addition of +// the ability to pass a context and additional request options. +// +// See GetCampaignVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetCampaignVersionsWithContext(ctx aws.Context, input *GetCampaignVersionsInput, opts ...request.Option) (*GetCampaignVersionsOutput, error) { + req, out := c.GetCampaignVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCampaigns = "GetCampaigns" + +// GetCampaignsRequest generates a "aws/request.Request" representing the +// client's request for the GetCampaigns operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetCampaigns for more information on using the GetCampaigns +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCampaignsRequest method. +// req, resp := client.GetCampaignsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaigns +func (c *Pinpoint) GetCampaignsRequest(input *GetCampaignsInput) (req *request.Request, output *GetCampaignsOutput) { + op := &request.Operation{ + Name: opGetCampaigns, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/campaigns", + } + + if input == nil { + input = &GetCampaignsInput{} + } + + output = &GetCampaignsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCampaigns API operation for Amazon Pinpoint. +// +// Returns information about your campaigns. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetCampaigns for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. 
+// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetCampaigns +func (c *Pinpoint) GetCampaigns(input *GetCampaignsInput) (*GetCampaignsOutput, error) { + req, out := c.GetCampaignsRequest(input) + return out, req.Send() +} + +// GetCampaignsWithContext is the same as GetCampaigns with the addition of +// the ability to pass a context and additional request options. +// +// See GetCampaigns for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetCampaignsWithContext(ctx aws.Context, input *GetCampaignsInput, opts ...request.Option) (*GetCampaignsOutput, error) { + req, out := c.GetCampaignsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetChannels = "GetChannels" + +// GetChannelsRequest generates a "aws/request.Request" representing the +// client's request for the GetChannels operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetChannels for more information on using the GetChannels +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetChannelsRequest method. +// req, resp := client.GetChannelsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetChannels +func (c *Pinpoint) GetChannelsRequest(input *GetChannelsInput) (req *request.Request, output *GetChannelsOutput) { + op := &request.Operation{ + Name: opGetChannels, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels", + } + + if input == nil { + input = &GetChannelsInput{} + } + + output = &GetChannelsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetChannels API operation for Amazon Pinpoint. +// +// Get all channels. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetChannels for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. 
+// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetChannels +func (c *Pinpoint) GetChannels(input *GetChannelsInput) (*GetChannelsOutput, error) { + req, out := c.GetChannelsRequest(input) + return out, req.Send() +} + +// GetChannelsWithContext is the same as GetChannels with the addition of +// the ability to pass a context and additional request options. +// +// See GetChannels for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetChannelsWithContext(ctx aws.Context, input *GetChannelsInput, opts ...request.Option) (*GetChannelsOutput, error) { + req, out := c.GetChannelsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetEmailChannel = "GetEmailChannel" + +// GetEmailChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetEmailChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetEmailChannel for more information on using the GetEmailChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetEmailChannelRequest method. +// req, resp := client.GetEmailChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetEmailChannel +func (c *Pinpoint) GetEmailChannelRequest(input *GetEmailChannelInput) (req *request.Request, output *GetEmailChannelOutput) { + op := &request.Operation{ + Name: opGetEmailChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/email", + } + + if input == nil { + input = &GetEmailChannelInput{} + } + + output = &GetEmailChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetEmailChannel API operation for Amazon Pinpoint. +// +// Get an email channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetEmailChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. 
+// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetEmailChannel +func (c *Pinpoint) GetEmailChannel(input *GetEmailChannelInput) (*GetEmailChannelOutput, error) { + req, out := c.GetEmailChannelRequest(input) + return out, req.Send() +} + +// GetEmailChannelWithContext is the same as GetEmailChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetEmailChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetEmailChannelWithContext(ctx aws.Context, input *GetEmailChannelInput, opts ...request.Option) (*GetEmailChannelOutput, error) { + req, out := c.GetEmailChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetEndpoint = "GetEndpoint" + +// GetEndpointRequest generates a "aws/request.Request" representing the +// client's request for the GetEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetEndpoint for more information on using the GetEndpoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetEndpointRequest method. +// req, resp := client.GetEndpointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetEndpoint +func (c *Pinpoint) GetEndpointRequest(input *GetEndpointInput) (req *request.Request, output *GetEndpointOutput) { + op := &request.Operation{ + Name: opGetEndpoint, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/endpoints/{endpoint-id}", + } + + if input == nil { + input = &GetEndpointInput{} + } + + output = &GetEndpointOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetEndpoint API operation for Amazon Pinpoint. +// +// Returns information about an endpoint. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. 
+// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetEndpoint +func (c *Pinpoint) GetEndpoint(input *GetEndpointInput) (*GetEndpointOutput, error) { + req, out := c.GetEndpointRequest(input) + return out, req.Send() +} + +// GetEndpointWithContext is the same as GetEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See GetEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetEndpointWithContext(ctx aws.Context, input *GetEndpointInput, opts ...request.Option) (*GetEndpointOutput, error) { + req, out := c.GetEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetEventStream = "GetEventStream" + +// GetEventStreamRequest generates a "aws/request.Request" representing the +// client's request for the GetEventStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetEventStream for more information on using the GetEventStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetEventStreamRequest method. +// req, resp := client.GetEventStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetEventStream +func (c *Pinpoint) GetEventStreamRequest(input *GetEventStreamInput) (req *request.Request, output *GetEventStreamOutput) { + op := &request.Operation{ + Name: opGetEventStream, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/eventstream", + } + + if input == nil { + input = &GetEventStreamInput{} + } + + output = &GetEventStreamOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetEventStream API operation for Amazon Pinpoint. +// +// Returns the event stream for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetEventStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. 
+// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetEventStream +func (c *Pinpoint) GetEventStream(input *GetEventStreamInput) (*GetEventStreamOutput, error) { + req, out := c.GetEventStreamRequest(input) + return out, req.Send() +} + +// GetEventStreamWithContext is the same as GetEventStream with the addition of +// the ability to pass a context and additional request options. +// +// See GetEventStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetEventStreamWithContext(ctx aws.Context, input *GetEventStreamInput, opts ...request.Option) (*GetEventStreamOutput, error) { + req, out := c.GetEventStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetExportJob = "GetExportJob" + +// GetExportJobRequest generates a "aws/request.Request" representing the +// client's request for the GetExportJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetExportJob for more information on using the GetExportJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetExportJobRequest method. +// req, resp := client.GetExportJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetExportJob +func (c *Pinpoint) GetExportJobRequest(input *GetExportJobInput) (req *request.Request, output *GetExportJobOutput) { + op := &request.Operation{ + Name: opGetExportJob, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/jobs/export/{job-id}", + } + + if input == nil { + input = &GetExportJobInput{} + } + + output = &GetExportJobOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetExportJob API operation for Amazon Pinpoint. +// +// Returns information about an export job. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetExportJob for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. 
+// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetExportJob +func (c *Pinpoint) GetExportJob(input *GetExportJobInput) (*GetExportJobOutput, error) { + req, out := c.GetExportJobRequest(input) + return out, req.Send() +} + +// GetExportJobWithContext is the same as GetExportJob with the addition of +// the ability to pass a context and additional request options. +// +// See GetExportJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetExportJobWithContext(ctx aws.Context, input *GetExportJobInput, opts ...request.Option) (*GetExportJobOutput, error) { + req, out := c.GetExportJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetExportJobs = "GetExportJobs" + +// GetExportJobsRequest generates a "aws/request.Request" representing the +// client's request for the GetExportJobs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetExportJobs for more information on using the GetExportJobs +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetExportJobsRequest method. +// req, resp := client.GetExportJobsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetExportJobs +func (c *Pinpoint) GetExportJobsRequest(input *GetExportJobsInput) (req *request.Request, output *GetExportJobsOutput) { + op := &request.Operation{ + Name: opGetExportJobs, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/jobs/export", + } + + if input == nil { + input = &GetExportJobsInput{} + } + + output = &GetExportJobsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetExportJobs API operation for Amazon Pinpoint. +// +// Returns information about your export jobs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetExportJobs for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
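+//
+// A minimal sketch, in the style of the request example above, of inspecting
+// the error codes listed here with a runtime type assertion to awserr.Error.
+// The ApplicationId field name is an assumption taken from the
+// {application-id} path parameter; it is not declared in this excerpt.
+//
+//    out, err := client.GetExportJobs(&pinpoint.GetExportJobsInput{
+//        ApplicationId: aws.String("my-app-id"), // assumed field name
+//    })
+//    if err != nil {
+//        if aerr, ok := err.(awserr.Error); ok {
+//            switch aerr.Code() {
+//            case pinpoint.ErrCodeNotFoundException:
+//                fmt.Println("application not found:", aerr.Message())
+//            default:
+//                fmt.Println(aerr.Code(), aerr.Message())
+//            }
+//        }
+//    } else {
+//        fmt.Println(out)
+//    }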
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetExportJobs +func (c *Pinpoint) GetExportJobs(input *GetExportJobsInput) (*GetExportJobsOutput, error) { + req, out := c.GetExportJobsRequest(input) + return out, req.Send() +} + +// GetExportJobsWithContext is the same as GetExportJobs with the addition of +// the ability to pass a context and additional request options. +// +// See GetExportJobs for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetExportJobsWithContext(ctx aws.Context, input *GetExportJobsInput, opts ...request.Option) (*GetExportJobsOutput, error) { + req, out := c.GetExportJobsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetGcmChannel = "GetGcmChannel" + +// GetGcmChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetGcmChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetGcmChannel for more information on using the GetGcmChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetGcmChannelRequest method. +// req, resp := client.GetGcmChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetGcmChannel +func (c *Pinpoint) GetGcmChannelRequest(input *GetGcmChannelInput) (req *request.Request, output *GetGcmChannelOutput) { + op := &request.Operation{ + Name: opGetGcmChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/gcm", + } + + if input == nil { + input = &GetGcmChannelInput{} + } + + output = &GetGcmChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetGcmChannel API operation for Amazon Pinpoint. +// +// Returns information about the GCM channel for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetGcmChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetGcmChannel +func (c *Pinpoint) GetGcmChannel(input *GetGcmChannelInput) (*GetGcmChannelOutput, error) { + req, out := c.GetGcmChannelRequest(input) + return out, req.Send() +} + +// GetGcmChannelWithContext is the same as GetGcmChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetGcmChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetGcmChannelWithContext(ctx aws.Context, input *GetGcmChannelInput, opts ...request.Option) (*GetGcmChannelOutput, error) { + req, out := c.GetGcmChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetImportJob = "GetImportJob" + +// GetImportJobRequest generates a "aws/request.Request" representing the +// client's request for the GetImportJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetImportJob for more information on using the GetImportJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetImportJobRequest method. +// req, resp := client.GetImportJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetImportJob +func (c *Pinpoint) GetImportJobRequest(input *GetImportJobInput) (req *request.Request, output *GetImportJobOutput) { + op := &request.Operation{ + Name: opGetImportJob, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/jobs/import/{job-id}", + } + + if input == nil { + input = &GetImportJobInput{} + } + + output = &GetImportJobOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetImportJob API operation for Amazon Pinpoint. +// +// Returns information about an import job. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetImportJob for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetImportJob +func (c *Pinpoint) GetImportJob(input *GetImportJobInput) (*GetImportJobOutput, error) { + req, out := c.GetImportJobRequest(input) + return out, req.Send() +} + +// GetImportJobWithContext is the same as GetImportJob with the addition of +// the ability to pass a context and additional request options. +// +// See GetImportJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetImportJobWithContext(ctx aws.Context, input *GetImportJobInput, opts ...request.Option) (*GetImportJobOutput, error) { + req, out := c.GetImportJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetImportJobs = "GetImportJobs" + +// GetImportJobsRequest generates a "aws/request.Request" representing the +// client's request for the GetImportJobs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetImportJobs for more information on using the GetImportJobs +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetImportJobsRequest method. +// req, resp := client.GetImportJobsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetImportJobs +func (c *Pinpoint) GetImportJobsRequest(input *GetImportJobsInput) (req *request.Request, output *GetImportJobsOutput) { + op := &request.Operation{ + Name: opGetImportJobs, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/jobs/import", + } + + if input == nil { + input = &GetImportJobsInput{} + } + + output = &GetImportJobsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetImportJobs API operation for Amazon Pinpoint. +// +// Returns information about your import jobs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetImportJobs for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
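+//
+// A minimal sketch of passing additional request options to the WithContext
+// form of this operation. request.WithLogLevel comes from the aws/request
+// package; ApplicationId is an assumed field name taken from the
+// {application-id} path parameter.
+//
+//    out, err := client.GetImportJobsWithContext(aws.BackgroundContext(),
+//        &pinpoint.GetImportJobsInput{
+//            ApplicationId: aws.String("my-app-id"), // assumed field name
+//        },
+//        request.WithLogLevel(aws.LogDebugWithHTTPBody),
+//    )
+//    if err != nil {
+//        fmt.Println(err)
+//        return
+//    }
+//    fmt.Println(out)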
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetImportJobs +func (c *Pinpoint) GetImportJobs(input *GetImportJobsInput) (*GetImportJobsOutput, error) { + req, out := c.GetImportJobsRequest(input) + return out, req.Send() +} + +// GetImportJobsWithContext is the same as GetImportJobs with the addition of +// the ability to pass a context and additional request options. +// +// See GetImportJobs for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetImportJobsWithContext(ctx aws.Context, input *GetImportJobsInput, opts ...request.Option) (*GetImportJobsOutput, error) { + req, out := c.GetImportJobsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSegment = "GetSegment" + +// GetSegmentRequest generates a "aws/request.Request" representing the +// client's request for the GetSegment operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSegment for more information on using the GetSegment +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSegmentRequest method. +// req, resp := client.GetSegmentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegment +func (c *Pinpoint) GetSegmentRequest(input *GetSegmentInput) (req *request.Request, output *GetSegmentOutput) { + op := &request.Operation{ + Name: opGetSegment, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/segments/{segment-id}", + } + + if input == nil { + input = &GetSegmentInput{} + } + + output = &GetSegmentOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSegment API operation for Amazon Pinpoint. +// +// Returns information about a segment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetSegment for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. 
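+//
+// A minimal sketch of calling GetSegmentWithContext with a deadline so the
+// request is cancelled if it runs too long; the context must be non-nil, as
+// noted for the WithContext variants. ApplicationId and SegmentId are assumed
+// field names taken from the {application-id} and {segment-id} path
+// parameters.
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+//    defer cancel()
+//
+//    out, err := client.GetSegmentWithContext(ctx, &pinpoint.GetSegmentInput{
+//        ApplicationId: aws.String("my-app-id"),  // assumed field name
+//        SegmentId:     aws.String("my-segment"), // assumed field name
+//    })
+//    if err != nil {
+//        fmt.Println(err)
+//        return
+//    }
+//    fmt.Println(out)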
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegment +func (c *Pinpoint) GetSegment(input *GetSegmentInput) (*GetSegmentOutput, error) { + req, out := c.GetSegmentRequest(input) + return out, req.Send() +} + +// GetSegmentWithContext is the same as GetSegment with the addition of +// the ability to pass a context and additional request options. +// +// See GetSegment for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetSegmentWithContext(ctx aws.Context, input *GetSegmentInput, opts ...request.Option) (*GetSegmentOutput, error) { + req, out := c.GetSegmentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSegmentExportJobs = "GetSegmentExportJobs" + +// GetSegmentExportJobsRequest generates a "aws/request.Request" representing the +// client's request for the GetSegmentExportJobs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSegmentExportJobs for more information on using the GetSegmentExportJobs +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSegmentExportJobsRequest method. +// req, resp := client.GetSegmentExportJobsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegmentExportJobs +func (c *Pinpoint) GetSegmentExportJobsRequest(input *GetSegmentExportJobsInput) (req *request.Request, output *GetSegmentExportJobsOutput) { + op := &request.Operation{ + Name: opGetSegmentExportJobs, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/segments/{segment-id}/jobs/export", + } + + if input == nil { + input = &GetSegmentExportJobsInput{} + } + + output = &GetSegmentExportJobsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSegmentExportJobs API operation for Amazon Pinpoint. +// +// Returns a list of export jobs for a specific segment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetSegmentExportJobs for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. 
+// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegmentExportJobs +func (c *Pinpoint) GetSegmentExportJobs(input *GetSegmentExportJobsInput) (*GetSegmentExportJobsOutput, error) { + req, out := c.GetSegmentExportJobsRequest(input) + return out, req.Send() +} + +// GetSegmentExportJobsWithContext is the same as GetSegmentExportJobs with the addition of +// the ability to pass a context and additional request options. +// +// See GetSegmentExportJobs for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetSegmentExportJobsWithContext(ctx aws.Context, input *GetSegmentExportJobsInput, opts ...request.Option) (*GetSegmentExportJobsOutput, error) { + req, out := c.GetSegmentExportJobsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSegmentImportJobs = "GetSegmentImportJobs" + +// GetSegmentImportJobsRequest generates a "aws/request.Request" representing the +// client's request for the GetSegmentImportJobs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSegmentImportJobs for more information on using the GetSegmentImportJobs +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSegmentImportJobsRequest method. +// req, resp := client.GetSegmentImportJobsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegmentImportJobs +func (c *Pinpoint) GetSegmentImportJobsRequest(input *GetSegmentImportJobsInput) (req *request.Request, output *GetSegmentImportJobsOutput) { + op := &request.Operation{ + Name: opGetSegmentImportJobs, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/segments/{segment-id}/jobs/import", + } + + if input == nil { + input = &GetSegmentImportJobsInput{} + } + + output = &GetSegmentImportJobsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSegmentImportJobs API operation for Amazon Pinpoint. +// +// Returns a list of import jobs for a specific segment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetSegmentImportJobs for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. 
+// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegmentImportJobs +func (c *Pinpoint) GetSegmentImportJobs(input *GetSegmentImportJobsInput) (*GetSegmentImportJobsOutput, error) { + req, out := c.GetSegmentImportJobsRequest(input) + return out, req.Send() +} + +// GetSegmentImportJobsWithContext is the same as GetSegmentImportJobs with the addition of +// the ability to pass a context and additional request options. +// +// See GetSegmentImportJobs for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetSegmentImportJobsWithContext(ctx aws.Context, input *GetSegmentImportJobsInput, opts ...request.Option) (*GetSegmentImportJobsOutput, error) { + req, out := c.GetSegmentImportJobsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSegmentVersion = "GetSegmentVersion" + +// GetSegmentVersionRequest generates a "aws/request.Request" representing the +// client's request for the GetSegmentVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSegmentVersion for more information on using the GetSegmentVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSegmentVersionRequest method. +// req, resp := client.GetSegmentVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegmentVersion +func (c *Pinpoint) GetSegmentVersionRequest(input *GetSegmentVersionInput) (req *request.Request, output *GetSegmentVersionOutput) { + op := &request.Operation{ + Name: opGetSegmentVersion, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/segments/{segment-id}/versions/{version}", + } + + if input == nil { + input = &GetSegmentVersionInput{} + } + + output = &GetSegmentVersionOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSegmentVersion API operation for Amazon Pinpoint. +// +// Returns information about a segment version. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetSegmentVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. 
+// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegmentVersion +func (c *Pinpoint) GetSegmentVersion(input *GetSegmentVersionInput) (*GetSegmentVersionOutput, error) { + req, out := c.GetSegmentVersionRequest(input) + return out, req.Send() +} + +// GetSegmentVersionWithContext is the same as GetSegmentVersion with the addition of +// the ability to pass a context and additional request options. +// +// See GetSegmentVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetSegmentVersionWithContext(ctx aws.Context, input *GetSegmentVersionInput, opts ...request.Option) (*GetSegmentVersionOutput, error) { + req, out := c.GetSegmentVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSegmentVersions = "GetSegmentVersions" + +// GetSegmentVersionsRequest generates a "aws/request.Request" representing the +// client's request for the GetSegmentVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSegmentVersions for more information on using the GetSegmentVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSegmentVersionsRequest method. +// req, resp := client.GetSegmentVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegmentVersions +func (c *Pinpoint) GetSegmentVersionsRequest(input *GetSegmentVersionsInput) (req *request.Request, output *GetSegmentVersionsOutput) { + op := &request.Operation{ + Name: opGetSegmentVersions, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/segments/{segment-id}/versions", + } + + if input == nil { + input = &GetSegmentVersionsInput{} + } + + output = &GetSegmentVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSegmentVersions API operation for Amazon Pinpoint. +// +// Returns information about your segment versions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetSegmentVersions for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegmentVersions +func (c *Pinpoint) GetSegmentVersions(input *GetSegmentVersionsInput) (*GetSegmentVersionsOutput, error) { + req, out := c.GetSegmentVersionsRequest(input) + return out, req.Send() +} + +// GetSegmentVersionsWithContext is the same as GetSegmentVersions with the addition of +// the ability to pass a context and additional request options. +// +// See GetSegmentVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetSegmentVersionsWithContext(ctx aws.Context, input *GetSegmentVersionsInput, opts ...request.Option) (*GetSegmentVersionsOutput, error) { + req, out := c.GetSegmentVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSegments = "GetSegments" + +// GetSegmentsRequest generates a "aws/request.Request" representing the +// client's request for the GetSegments operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSegments for more information on using the GetSegments +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSegmentsRequest method. +// req, resp := client.GetSegmentsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegments +func (c *Pinpoint) GetSegmentsRequest(input *GetSegmentsInput) (req *request.Request, output *GetSegmentsOutput) { + op := &request.Operation{ + Name: opGetSegments, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/segments", + } + + if input == nil { + input = &GetSegmentsInput{} + } + + output = &GetSegmentsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSegments API operation for Amazon Pinpoint. +// +// Used to get information about your segments. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetSegments for usage and error information. 
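+//
+// A minimal sketch of requesting one page of segments at a time. PageSize and
+// Token are assumed to be the paging parameters on GetSegmentsInput; they are
+// not declared in this excerpt.
+//
+//    out, err := client.GetSegments(&pinpoint.GetSegmentsInput{
+//        ApplicationId: aws.String("my-app-id"), // assumed field name
+//        PageSize:      aws.String("25"),        // assumed field name
+//    })
+//    if err != nil {
+//        fmt.Println(err)
+//        return
+//    }
+//    fmt.Println(out)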
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSegments +func (c *Pinpoint) GetSegments(input *GetSegmentsInput) (*GetSegmentsOutput, error) { + req, out := c.GetSegmentsRequest(input) + return out, req.Send() +} + +// GetSegmentsWithContext is the same as GetSegments with the addition of +// the ability to pass a context and additional request options. +// +// See GetSegments for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetSegmentsWithContext(ctx aws.Context, input *GetSegmentsInput, opts ...request.Option) (*GetSegmentsOutput, error) { + req, out := c.GetSegmentsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSmsChannel = "GetSmsChannel" + +// GetSmsChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetSmsChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSmsChannel for more information on using the GetSmsChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSmsChannelRequest method. +// req, resp := client.GetSmsChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSmsChannel +func (c *Pinpoint) GetSmsChannelRequest(input *GetSmsChannelInput) (req *request.Request, output *GetSmsChannelOutput) { + op := &request.Operation{ + Name: opGetSmsChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/sms", + } + + if input == nil { + input = &GetSmsChannelInput{} + } + + output = &GetSmsChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSmsChannel API operation for Amazon Pinpoint. +// +// Get an SMS channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetSmsChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. 
+// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetSmsChannel +func (c *Pinpoint) GetSmsChannel(input *GetSmsChannelInput) (*GetSmsChannelOutput, error) { + req, out := c.GetSmsChannelRequest(input) + return out, req.Send() +} + +// GetSmsChannelWithContext is the same as GetSmsChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetSmsChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetSmsChannelWithContext(ctx aws.Context, input *GetSmsChannelInput, opts ...request.Option) (*GetSmsChannelOutput, error) { + req, out := c.GetSmsChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetUserEndpoints = "GetUserEndpoints" + +// GetUserEndpointsRequest generates a "aws/request.Request" representing the +// client's request for the GetUserEndpoints operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetUserEndpoints for more information on using the GetUserEndpoints +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetUserEndpointsRequest method. +// req, resp := client.GetUserEndpointsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetUserEndpoints +func (c *Pinpoint) GetUserEndpointsRequest(input *GetUserEndpointsInput) (req *request.Request, output *GetUserEndpointsOutput) { + op := &request.Operation{ + Name: opGetUserEndpoints, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/users/{user-id}", + } + + if input == nil { + input = &GetUserEndpointsInput{} + } + + output = &GetUserEndpointsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetUserEndpoints API operation for Amazon Pinpoint. +// +// Returns information about the endpoints that are associated with a User ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetUserEndpoints for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetUserEndpoints +func (c *Pinpoint) GetUserEndpoints(input *GetUserEndpointsInput) (*GetUserEndpointsOutput, error) { + req, out := c.GetUserEndpointsRequest(input) + return out, req.Send() +} + +// GetUserEndpointsWithContext is the same as GetUserEndpoints with the addition of +// the ability to pass a context and additional request options. +// +// See GetUserEndpoints for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetUserEndpointsWithContext(ctx aws.Context, input *GetUserEndpointsInput, opts ...request.Option) (*GetUserEndpointsOutput, error) { + req, out := c.GetUserEndpointsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetVoiceChannel = "GetVoiceChannel" + +// GetVoiceChannelRequest generates a "aws/request.Request" representing the +// client's request for the GetVoiceChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetVoiceChannel for more information on using the GetVoiceChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetVoiceChannelRequest method. +// req, resp := client.GetVoiceChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetVoiceChannel +func (c *Pinpoint) GetVoiceChannelRequest(input *GetVoiceChannelInput) (req *request.Request, output *GetVoiceChannelOutput) { + op := &request.Operation{ + Name: opGetVoiceChannel, + HTTPMethod: "GET", + HTTPPath: "/v1/apps/{application-id}/channels/voice", + } + + if input == nil { + input = &GetVoiceChannelInput{} + } + + output = &GetVoiceChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetVoiceChannel API operation for Amazon Pinpoint. +// +// Get a Voice Channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation GetVoiceChannel for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/GetVoiceChannel +func (c *Pinpoint) GetVoiceChannel(input *GetVoiceChannelInput) (*GetVoiceChannelOutput, error) { + req, out := c.GetVoiceChannelRequest(input) + return out, req.Send() +} + +// GetVoiceChannelWithContext is the same as GetVoiceChannel with the addition of +// the ability to pass a context and additional request options. +// +// See GetVoiceChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) GetVoiceChannelWithContext(ctx aws.Context, input *GetVoiceChannelInput, opts ...request.Option) (*GetVoiceChannelOutput, error) { + req, out := c.GetVoiceChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPhoneNumberValidate = "PhoneNumberValidate" + +// PhoneNumberValidateRequest generates a "aws/request.Request" representing the +// client's request for the PhoneNumberValidate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PhoneNumberValidate for more information on using the PhoneNumberValidate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PhoneNumberValidateRequest method. +// req, resp := client.PhoneNumberValidateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/PhoneNumberValidate +func (c *Pinpoint) PhoneNumberValidateRequest(input *PhoneNumberValidateInput) (req *request.Request, output *PhoneNumberValidateOutput) { + op := &request.Operation{ + Name: opPhoneNumberValidate, + HTTPMethod: "POST", + HTTPPath: "/v1/phone/number/validate", + } + + if input == nil { + input = &PhoneNumberValidateInput{} + } + + output = &PhoneNumberValidateOutput{} + req = c.newRequest(op, input, output) + return +} + +// PhoneNumberValidate API operation for Amazon Pinpoint. +// +// Returns information about the specified phone number. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
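+//
+// A minimal sketch of validating a phone number. The NumberValidateRequest,
+// PhoneNumber, and IsoCountryCode names are assumptions based on the Pinpoint
+// number-validate shapes, which are not shown in this excerpt.
+//
+//    out, err := client.PhoneNumberValidate(&pinpoint.PhoneNumberValidateInput{
+//        NumberValidateRequest: &pinpoint.NumberValidateRequest{ // assumed type
+//            PhoneNumber:    aws.String("+12065550142"), // assumed field name
+//            IsoCountryCode: aws.String("US"),           // assumed field name
+//        },
+//    })
+//    if err != nil {
+//        fmt.Println(err)
+//        return
+//    }
+//    fmt.Println(out)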
+// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation PhoneNumberValidate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/PhoneNumberValidate +func (c *Pinpoint) PhoneNumberValidate(input *PhoneNumberValidateInput) (*PhoneNumberValidateOutput, error) { + req, out := c.PhoneNumberValidateRequest(input) + return out, req.Send() +} + +// PhoneNumberValidateWithContext is the same as PhoneNumberValidate with the addition of +// the ability to pass a context and additional request options. +// +// See PhoneNumberValidate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) PhoneNumberValidateWithContext(ctx aws.Context, input *PhoneNumberValidateInput, opts ...request.Option) (*PhoneNumberValidateOutput, error) { + req, out := c.PhoneNumberValidateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutEventStream = "PutEventStream" + +// PutEventStreamRequest generates a "aws/request.Request" representing the +// client's request for the PutEventStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutEventStream for more information on using the PutEventStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutEventStreamRequest method. +// req, resp := client.PutEventStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/PutEventStream +func (c *Pinpoint) PutEventStreamRequest(input *PutEventStreamInput) (req *request.Request, output *PutEventStreamOutput) { + op := &request.Operation{ + Name: opPutEventStream, + HTTPMethod: "POST", + HTTPPath: "/v1/apps/{application-id}/eventstream", + } + + if input == nil { + input = &PutEventStreamInput{} + } + + output = &PutEventStreamOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutEventStream API operation for Amazon Pinpoint. +// +// Use to create or update the event stream for an app. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation PutEventStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/PutEventStream +func (c *Pinpoint) PutEventStream(input *PutEventStreamInput) (*PutEventStreamOutput, error) { + req, out := c.PutEventStreamRequest(input) + return out, req.Send() +} + +// PutEventStreamWithContext is the same as PutEventStream with the addition of +// the ability to pass a context and additional request options. +// +// See PutEventStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) PutEventStreamWithContext(ctx aws.Context, input *PutEventStreamInput, opts ...request.Option) (*PutEventStreamOutput, error) { + req, out := c.PutEventStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutEvents = "PutEvents" + +// PutEventsRequest generates a "aws/request.Request" representing the +// client's request for the PutEvents operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutEvents for more information on using the PutEvents +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutEventsRequest method. +// req, resp := client.PutEventsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/PutEvents +func (c *Pinpoint) PutEventsRequest(input *PutEventsInput) (req *request.Request, output *PutEventsOutput) { + op := &request.Operation{ + Name: opPutEvents, + HTTPMethod: "POST", + HTTPPath: "/v1/apps/{application-id}/events", + } + + if input == nil { + input = &PutEventsInput{} + } + + output = &PutEventsOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutEvents API operation for Amazon Pinpoint. +// +// Use to record events for endpoints. This method creates events and creates +// or updates the endpoints that those events are associated with. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation PutEvents for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/PutEvents +func (c *Pinpoint) PutEvents(input *PutEventsInput) (*PutEventsOutput, error) { + req, out := c.PutEventsRequest(input) + return out, req.Send() +} + +// PutEventsWithContext is the same as PutEvents with the addition of +// the ability to pass a context and additional request options. +// +// See PutEvents for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) PutEventsWithContext(ctx aws.Context, input *PutEventsInput, opts ...request.Option) (*PutEventsOutput, error) { + req, out := c.PutEventsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRemoveAttributes = "RemoveAttributes" + +// RemoveAttributesRequest generates a "aws/request.Request" representing the +// client's request for the RemoveAttributes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RemoveAttributes for more information on using the RemoveAttributes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveAttributesRequest method. +// req, resp := client.RemoveAttributesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/RemoveAttributes +func (c *Pinpoint) RemoveAttributesRequest(input *RemoveAttributesInput) (req *request.Request, output *RemoveAttributesOutput) { + op := &request.Operation{ + Name: opRemoveAttributes, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/attributes/{attribute-type}", + } + + if input == nil { + input = &RemoveAttributesInput{} + } + + output = &RemoveAttributesOutput{} + req = c.newRequest(op, input, output) + return +} + +// RemoveAttributes API operation for Amazon Pinpoint. +// +// Used to remove the attributes for an app +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation RemoveAttributes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/RemoveAttributes +func (c *Pinpoint) RemoveAttributes(input *RemoveAttributesInput) (*RemoveAttributesOutput, error) { + req, out := c.RemoveAttributesRequest(input) + return out, req.Send() +} + +// RemoveAttributesWithContext is the same as RemoveAttributes with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveAttributes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) RemoveAttributesWithContext(ctx aws.Context, input *RemoveAttributesInput, opts ...request.Option) (*RemoveAttributesOutput, error) { + req, out := c.RemoveAttributesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSendMessages = "SendMessages" + +// SendMessagesRequest generates a "aws/request.Request" representing the +// client's request for the SendMessages operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SendMessages for more information on using the SendMessages +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SendMessagesRequest method. +// req, resp := client.SendMessagesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/SendMessages +func (c *Pinpoint) SendMessagesRequest(input *SendMessagesInput) (req *request.Request, output *SendMessagesOutput) { + op := &request.Operation{ + Name: opSendMessages, + HTTPMethod: "POST", + HTTPPath: "/v1/apps/{application-id}/messages", + } + + if input == nil { + input = &SendMessagesInput{} + } + + output = &SendMessagesOutput{} + req = c.newRequest(op, input, output) + return +} + +// SendMessages API operation for Amazon Pinpoint. +// +// Used to send a direct message. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation SendMessages for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/SendMessages +func (c *Pinpoint) SendMessages(input *SendMessagesInput) (*SendMessagesOutput, error) { + req, out := c.SendMessagesRequest(input) + return out, req.Send() +} + +// SendMessagesWithContext is the same as SendMessages with the addition of +// the ability to pass a context and additional request options. +// +// See SendMessages for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) SendMessagesWithContext(ctx aws.Context, input *SendMessagesInput, opts ...request.Option) (*SendMessagesOutput, error) { + req, out := c.SendMessagesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSendUsersMessages = "SendUsersMessages" + +// SendUsersMessagesRequest generates a "aws/request.Request" representing the +// client's request for the SendUsersMessages operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SendUsersMessages for more information on using the SendUsersMessages +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SendUsersMessagesRequest method. +// req, resp := client.SendUsersMessagesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/SendUsersMessages +func (c *Pinpoint) SendUsersMessagesRequest(input *SendUsersMessagesInput) (req *request.Request, output *SendUsersMessagesOutput) { + op := &request.Operation{ + Name: opSendUsersMessages, + HTTPMethod: "POST", + HTTPPath: "/v1/apps/{application-id}/users-messages", + } + + if input == nil { + input = &SendUsersMessagesInput{} + } + + output = &SendUsersMessagesOutput{} + req = c.newRequest(op, input, output) + return +} + +// SendUsersMessages API operation for Amazon Pinpoint. +// +// Used to send a message to a list of users. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation SendUsersMessages for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/SendUsersMessages +func (c *Pinpoint) SendUsersMessages(input *SendUsersMessagesInput) (*SendUsersMessagesOutput, error) { + req, out := c.SendUsersMessagesRequest(input) + return out, req.Send() +} + +// SendUsersMessagesWithContext is the same as SendUsersMessages with the addition of +// the ability to pass a context and additional request options. +// +// See SendUsersMessages for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) SendUsersMessagesWithContext(ctx aws.Context, input *SendUsersMessagesInput, opts ...request.Option) (*SendUsersMessagesOutput, error) { + req, out := c.SendUsersMessagesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateAdmChannel = "UpdateAdmChannel" + +// UpdateAdmChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAdmChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAdmChannel for more information on using the UpdateAdmChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateAdmChannelRequest method. +// req, resp := client.UpdateAdmChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateAdmChannel +func (c *Pinpoint) UpdateAdmChannelRequest(input *UpdateAdmChannelInput) (req *request.Request, output *UpdateAdmChannelOutput) { + op := &request.Operation{ + Name: opUpdateAdmChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/adm", + } + + if input == nil { + input = &UpdateAdmChannelInput{} + } + + output = &UpdateAdmChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateAdmChannel API operation for Amazon Pinpoint. +// +// Update an ADM channel. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateAdmChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateAdmChannel +func (c *Pinpoint) UpdateAdmChannel(input *UpdateAdmChannelInput) (*UpdateAdmChannelOutput, error) { + req, out := c.UpdateAdmChannelRequest(input) + return out, req.Send() +} + +// UpdateAdmChannelWithContext is the same as UpdateAdmChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateAdmChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateAdmChannelWithContext(ctx aws.Context, input *UpdateAdmChannelInput, opts ...request.Option) (*UpdateAdmChannelOutput, error) { + req, out := c.UpdateAdmChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateApnsChannel = "UpdateApnsChannel" + +// UpdateApnsChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateApnsChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateApnsChannel for more information on using the UpdateApnsChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateApnsChannelRequest method. +// req, resp := client.UpdateApnsChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApnsChannel +func (c *Pinpoint) UpdateApnsChannelRequest(input *UpdateApnsChannelInput) (req *request.Request, output *UpdateApnsChannelOutput) { + op := &request.Operation{ + Name: opUpdateApnsChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/apns", + } + + if input == nil { + input = &UpdateApnsChannelInput{} + } + + output = &UpdateApnsChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateApnsChannel API operation for Amazon Pinpoint. +// +// Use to update the APNs channel for an app. 
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateApnsChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApnsChannel +func (c *Pinpoint) UpdateApnsChannel(input *UpdateApnsChannelInput) (*UpdateApnsChannelOutput, error) { + req, out := c.UpdateApnsChannelRequest(input) + return out, req.Send() +} + +// UpdateApnsChannelWithContext is the same as UpdateApnsChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateApnsChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateApnsChannelWithContext(ctx aws.Context, input *UpdateApnsChannelInput, opts ...request.Option) (*UpdateApnsChannelOutput, error) { + req, out := c.UpdateApnsChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateApnsSandboxChannel = "UpdateApnsSandboxChannel" + +// UpdateApnsSandboxChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateApnsSandboxChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateApnsSandboxChannel for more information on using the UpdateApnsSandboxChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateApnsSandboxChannelRequest method. 
+// req, resp := client.UpdateApnsSandboxChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApnsSandboxChannel +func (c *Pinpoint) UpdateApnsSandboxChannelRequest(input *UpdateApnsSandboxChannelInput) (req *request.Request, output *UpdateApnsSandboxChannelOutput) { + op := &request.Operation{ + Name: opUpdateApnsSandboxChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/apns_sandbox", + } + + if input == nil { + input = &UpdateApnsSandboxChannelInput{} + } + + output = &UpdateApnsSandboxChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateApnsSandboxChannel API operation for Amazon Pinpoint. +// +// Update an APNS sandbox channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateApnsSandboxChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApnsSandboxChannel +func (c *Pinpoint) UpdateApnsSandboxChannel(input *UpdateApnsSandboxChannelInput) (*UpdateApnsSandboxChannelOutput, error) { + req, out := c.UpdateApnsSandboxChannelRequest(input) + return out, req.Send() +} + +// UpdateApnsSandboxChannelWithContext is the same as UpdateApnsSandboxChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateApnsSandboxChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateApnsSandboxChannelWithContext(ctx aws.Context, input *UpdateApnsSandboxChannelInput, opts ...request.Option) (*UpdateApnsSandboxChannelOutput, error) { + req, out := c.UpdateApnsSandboxChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateApnsVoipChannel = "UpdateApnsVoipChannel" + +// UpdateApnsVoipChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateApnsVoipChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateApnsVoipChannel for more information on using the UpdateApnsVoipChannel +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateApnsVoipChannelRequest method. +// req, resp := client.UpdateApnsVoipChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApnsVoipChannel +func (c *Pinpoint) UpdateApnsVoipChannelRequest(input *UpdateApnsVoipChannelInput) (req *request.Request, output *UpdateApnsVoipChannelOutput) { + op := &request.Operation{ + Name: opUpdateApnsVoipChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/apns_voip", + } + + if input == nil { + input = &UpdateApnsVoipChannelInput{} + } + + output = &UpdateApnsVoipChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateApnsVoipChannel API operation for Amazon Pinpoint. +// +// Update an APNS VoIP channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateApnsVoipChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApnsVoipChannel +func (c *Pinpoint) UpdateApnsVoipChannel(input *UpdateApnsVoipChannelInput) (*UpdateApnsVoipChannelOutput, error) { + req, out := c.UpdateApnsVoipChannelRequest(input) + return out, req.Send() +} + +// UpdateApnsVoipChannelWithContext is the same as UpdateApnsVoipChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateApnsVoipChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateApnsVoipChannelWithContext(ctx aws.Context, input *UpdateApnsVoipChannelInput, opts ...request.Option) (*UpdateApnsVoipChannelOutput, error) { + req, out := c.UpdateApnsVoipChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateApnsVoipSandboxChannel = "UpdateApnsVoipSandboxChannel" + +// UpdateApnsVoipSandboxChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateApnsVoipSandboxChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See UpdateApnsVoipSandboxChannel for more information on using the UpdateApnsVoipSandboxChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateApnsVoipSandboxChannelRequest method. +// req, resp := client.UpdateApnsVoipSandboxChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApnsVoipSandboxChannel +func (c *Pinpoint) UpdateApnsVoipSandboxChannelRequest(input *UpdateApnsVoipSandboxChannelInput) (req *request.Request, output *UpdateApnsVoipSandboxChannelOutput) { + op := &request.Operation{ + Name: opUpdateApnsVoipSandboxChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/apns_voip_sandbox", + } + + if input == nil { + input = &UpdateApnsVoipSandboxChannelInput{} + } + + output = &UpdateApnsVoipSandboxChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateApnsVoipSandboxChannel API operation for Amazon Pinpoint. +// +// Update an APNS VoIP sandbox channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateApnsVoipSandboxChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApnsVoipSandboxChannel +func (c *Pinpoint) UpdateApnsVoipSandboxChannel(input *UpdateApnsVoipSandboxChannelInput) (*UpdateApnsVoipSandboxChannelOutput, error) { + req, out := c.UpdateApnsVoipSandboxChannelRequest(input) + return out, req.Send() +} + +// UpdateApnsVoipSandboxChannelWithContext is the same as UpdateApnsVoipSandboxChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateApnsVoipSandboxChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateApnsVoipSandboxChannelWithContext(ctx aws.Context, input *UpdateApnsVoipSandboxChannelInput, opts ...request.Option) (*UpdateApnsVoipSandboxChannelOutput, error) { + req, out := c.UpdateApnsVoipSandboxChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opUpdateApplicationSettings = "UpdateApplicationSettings" + +// UpdateApplicationSettingsRequest generates a "aws/request.Request" representing the +// client's request for the UpdateApplicationSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateApplicationSettings for more information on using the UpdateApplicationSettings +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateApplicationSettingsRequest method. +// req, resp := client.UpdateApplicationSettingsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApplicationSettings +func (c *Pinpoint) UpdateApplicationSettingsRequest(input *UpdateApplicationSettingsInput) (req *request.Request, output *UpdateApplicationSettingsOutput) { + op := &request.Operation{ + Name: opUpdateApplicationSettings, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/settings", + } + + if input == nil { + input = &UpdateApplicationSettingsInput{} + } + + output = &UpdateApplicationSettingsOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateApplicationSettings API operation for Amazon Pinpoint. +// +// Used to update the settings for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateApplicationSettings for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateApplicationSettings +func (c *Pinpoint) UpdateApplicationSettings(input *UpdateApplicationSettingsInput) (*UpdateApplicationSettingsOutput, error) { + req, out := c.UpdateApplicationSettingsRequest(input) + return out, req.Send() +} + +// UpdateApplicationSettingsWithContext is the same as UpdateApplicationSettings with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateApplicationSettings for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *Pinpoint) UpdateApplicationSettingsWithContext(ctx aws.Context, input *UpdateApplicationSettingsInput, opts ...request.Option) (*UpdateApplicationSettingsOutput, error) { + req, out := c.UpdateApplicationSettingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateBaiduChannel = "UpdateBaiduChannel" + +// UpdateBaiduChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateBaiduChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateBaiduChannel for more information on using the UpdateBaiduChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateBaiduChannelRequest method. +// req, resp := client.UpdateBaiduChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateBaiduChannel +func (c *Pinpoint) UpdateBaiduChannelRequest(input *UpdateBaiduChannelInput) (req *request.Request, output *UpdateBaiduChannelOutput) { + op := &request.Operation{ + Name: opUpdateBaiduChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/baidu", + } + + if input == nil { + input = &UpdateBaiduChannelInput{} + } + + output = &UpdateBaiduChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateBaiduChannel API operation for Amazon Pinpoint. +// +// Update a BAIDU GCM channel +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateBaiduChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateBaiduChannel +func (c *Pinpoint) UpdateBaiduChannel(input *UpdateBaiduChannelInput) (*UpdateBaiduChannelOutput, error) { + req, out := c.UpdateBaiduChannelRequest(input) + return out, req.Send() +} + +// UpdateBaiduChannelWithContext is the same as UpdateBaiduChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateBaiduChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateBaiduChannelWithContext(ctx aws.Context, input *UpdateBaiduChannelInput, opts ...request.Option) (*UpdateBaiduChannelOutput, error) { + req, out := c.UpdateBaiduChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateCampaign = "UpdateCampaign" + +// UpdateCampaignRequest generates a "aws/request.Request" representing the +// client's request for the UpdateCampaign operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateCampaign for more information on using the UpdateCampaign +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateCampaignRequest method. +// req, resp := client.UpdateCampaignRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateCampaign +func (c *Pinpoint) UpdateCampaignRequest(input *UpdateCampaignInput) (req *request.Request, output *UpdateCampaignOutput) { + op := &request.Operation{ + Name: opUpdateCampaign, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/campaigns/{campaign-id}", + } + + if input == nil { + input = &UpdateCampaignInput{} + } + + output = &UpdateCampaignOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateCampaign API operation for Amazon Pinpoint. +// +// Use to update a campaign. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateCampaign for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateCampaign +func (c *Pinpoint) UpdateCampaign(input *UpdateCampaignInput) (*UpdateCampaignOutput, error) { + req, out := c.UpdateCampaignRequest(input) + return out, req.Send() +} + +// UpdateCampaignWithContext is the same as UpdateCampaign with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateCampaign for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateCampaignWithContext(ctx aws.Context, input *UpdateCampaignInput, opts ...request.Option) (*UpdateCampaignOutput, error) { + req, out := c.UpdateCampaignRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateEmailChannel = "UpdateEmailChannel" + +// UpdateEmailChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateEmailChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateEmailChannel for more information on using the UpdateEmailChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateEmailChannelRequest method. +// req, resp := client.UpdateEmailChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateEmailChannel +func (c *Pinpoint) UpdateEmailChannelRequest(input *UpdateEmailChannelInput) (req *request.Request, output *UpdateEmailChannelOutput) { + op := &request.Operation{ + Name: opUpdateEmailChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/email", + } + + if input == nil { + input = &UpdateEmailChannelInput{} + } + + output = &UpdateEmailChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateEmailChannel API operation for Amazon Pinpoint. +// +// Update an email channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateEmailChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateEmailChannel +func (c *Pinpoint) UpdateEmailChannel(input *UpdateEmailChannelInput) (*UpdateEmailChannelOutput, error) { + req, out := c.UpdateEmailChannelRequest(input) + return out, req.Send() +} + +// UpdateEmailChannelWithContext is the same as UpdateEmailChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateEmailChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateEmailChannelWithContext(ctx aws.Context, input *UpdateEmailChannelInput, opts ...request.Option) (*UpdateEmailChannelOutput, error) { + req, out := c.UpdateEmailChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateEndpoint = "UpdateEndpoint" + +// UpdateEndpointRequest generates a "aws/request.Request" representing the +// client's request for the UpdateEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateEndpoint for more information on using the UpdateEndpoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateEndpointRequest method. +// req, resp := client.UpdateEndpointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateEndpoint +func (c *Pinpoint) UpdateEndpointRequest(input *UpdateEndpointInput) (req *request.Request, output *UpdateEndpointOutput) { + op := &request.Operation{ + Name: opUpdateEndpoint, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/endpoints/{endpoint-id}", + } + + if input == nil { + input = &UpdateEndpointInput{} + } + + output = &UpdateEndpointOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateEndpoint API operation for Amazon Pinpoint. +// +// Creates or updates an endpoint. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateEndpoint +func (c *Pinpoint) UpdateEndpoint(input *UpdateEndpointInput) (*UpdateEndpointOutput, error) { + req, out := c.UpdateEndpointRequest(input) + return out, req.Send() +} + +// UpdateEndpointWithContext is the same as UpdateEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateEndpointWithContext(ctx aws.Context, input *UpdateEndpointInput, opts ...request.Option) (*UpdateEndpointOutput, error) { + req, out := c.UpdateEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateEndpointsBatch = "UpdateEndpointsBatch" + +// UpdateEndpointsBatchRequest generates a "aws/request.Request" representing the +// client's request for the UpdateEndpointsBatch operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateEndpointsBatch for more information on using the UpdateEndpointsBatch +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateEndpointsBatchRequest method. +// req, resp := client.UpdateEndpointsBatchRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateEndpointsBatch +func (c *Pinpoint) UpdateEndpointsBatchRequest(input *UpdateEndpointsBatchInput) (req *request.Request, output *UpdateEndpointsBatchOutput) { + op := &request.Operation{ + Name: opUpdateEndpointsBatch, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/endpoints", + } + + if input == nil { + input = &UpdateEndpointsBatchInput{} + } + + output = &UpdateEndpointsBatchOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateEndpointsBatch API operation for Amazon Pinpoint. +// +// Use to update a batch of endpoints. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateEndpointsBatch for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateEndpointsBatch +func (c *Pinpoint) UpdateEndpointsBatch(input *UpdateEndpointsBatchInput) (*UpdateEndpointsBatchOutput, error) { + req, out := c.UpdateEndpointsBatchRequest(input) + return out, req.Send() +} + +// UpdateEndpointsBatchWithContext is the same as UpdateEndpointsBatch with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateEndpointsBatch for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateEndpointsBatchWithContext(ctx aws.Context, input *UpdateEndpointsBatchInput, opts ...request.Option) (*UpdateEndpointsBatchOutput, error) { + req, out := c.UpdateEndpointsBatchRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateGcmChannel = "UpdateGcmChannel" + +// UpdateGcmChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGcmChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateGcmChannel for more information on using the UpdateGcmChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateGcmChannelRequest method. +// req, resp := client.UpdateGcmChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateGcmChannel +func (c *Pinpoint) UpdateGcmChannelRequest(input *UpdateGcmChannelInput) (req *request.Request, output *UpdateGcmChannelOutput) { + op := &request.Operation{ + Name: opUpdateGcmChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/gcm", + } + + if input == nil { + input = &UpdateGcmChannelInput{} + } + + output = &UpdateGcmChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateGcmChannel API operation for Amazon Pinpoint. +// +// Use to update the GCM channel for an app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateGcmChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateGcmChannel +func (c *Pinpoint) UpdateGcmChannel(input *UpdateGcmChannelInput) (*UpdateGcmChannelOutput, error) { + req, out := c.UpdateGcmChannelRequest(input) + return out, req.Send() +} + +// UpdateGcmChannelWithContext is the same as UpdateGcmChannel with the addition of +// the ability to pass a context and additional request options. 
+// +// See UpdateGcmChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateGcmChannelWithContext(ctx aws.Context, input *UpdateGcmChannelInput, opts ...request.Option) (*UpdateGcmChannelOutput, error) { + req, out := c.UpdateGcmChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSegment = "UpdateSegment" + +// UpdateSegmentRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSegment operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSegment for more information on using the UpdateSegment +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSegmentRequest method. +// req, resp := client.UpdateSegmentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateSegment +func (c *Pinpoint) UpdateSegmentRequest(input *UpdateSegmentInput) (req *request.Request, output *UpdateSegmentOutput) { + op := &request.Operation{ + Name: opUpdateSegment, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/segments/{segment-id}", + } + + if input == nil { + input = &UpdateSegmentInput{} + } + + output = &UpdateSegmentOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSegment API operation for Amazon Pinpoint. +// +// Used to update a segment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateSegment for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateSegment +func (c *Pinpoint) UpdateSegment(input *UpdateSegmentInput) (*UpdateSegmentOutput, error) { + req, out := c.UpdateSegmentRequest(input) + return out, req.Send() +} + +// UpdateSegmentWithContext is the same as UpdateSegment with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSegment for details on how to use this API operation. 
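+//
+// As an illustrative sketch only (not part of the generated SDK), additional
+// request options can be passed alongside the context; request.WithLogLevel
+// and aws.LogDebugWithHTTPBody are assumed to be available from the SDK's
+// aws and aws/request packages, and "client"/"params" are placeholders:
+//
+//    resp, err := client.UpdateSegmentWithContext(aws.BackgroundContext(), params,
+//        request.WithLogLevel(aws.LogDebugWithHTTPBody))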
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateSegmentWithContext(ctx aws.Context, input *UpdateSegmentInput, opts ...request.Option) (*UpdateSegmentOutput, error) { + req, out := c.UpdateSegmentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSmsChannel = "UpdateSmsChannel" + +// UpdateSmsChannelRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSmsChannel operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSmsChannel for more information on using the UpdateSmsChannel +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSmsChannelRequest method. +// req, resp := client.UpdateSmsChannelRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateSmsChannel +func (c *Pinpoint) UpdateSmsChannelRequest(input *UpdateSmsChannelInput) (req *request.Request, output *UpdateSmsChannelOutput) { + op := &request.Operation{ + Name: opUpdateSmsChannel, + HTTPMethod: "PUT", + HTTPPath: "/v1/apps/{application-id}/channels/sms", + } + + if input == nil { + input = &UpdateSmsChannelInput{} + } + + output = &UpdateSmsChannelOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSmsChannel API operation for Amazon Pinpoint. +// +// Update an SMS channel. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Pinpoint's +// API operation UpdateSmsChannel for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// Simple message object. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// Simple message object. +// +// * ErrCodeForbiddenException "ForbiddenException" +// Simple message object. +// +// * ErrCodeNotFoundException "NotFoundException" +// Simple message object. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// Simple message object. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// Simple message object. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateSmsChannel +func (c *Pinpoint) UpdateSmsChannel(input *UpdateSmsChannelInput) (*UpdateSmsChannelOutput, error) { + req, out := c.UpdateSmsChannelRequest(input) + return out, req.Send() +} + +// UpdateSmsChannelWithContext is the same as UpdateSmsChannel with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSmsChannel for details on how to use this API operation. 
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *Pinpoint) UpdateSmsChannelWithContext(ctx aws.Context, input *UpdateSmsChannelInput, opts ...request.Option) (*UpdateSmsChannelOutput, error) {
+	req, out := c.UpdateSmsChannelRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opUpdateVoiceChannel = "UpdateVoiceChannel"
+
+// UpdateVoiceChannelRequest generates a "aws/request.Request" representing the
+// client's request for the UpdateVoiceChannel operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See UpdateVoiceChannel for more information on using the UpdateVoiceChannel
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+//    // Example sending a request using the UpdateVoiceChannelRequest method.
+//    req, resp := client.UpdateVoiceChannelRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateVoiceChannel
+func (c *Pinpoint) UpdateVoiceChannelRequest(input *UpdateVoiceChannelInput) (req *request.Request, output *UpdateVoiceChannelOutput) {
+	op := &request.Operation{
+		Name:       opUpdateVoiceChannel,
+		HTTPMethod: "PUT",
+		HTTPPath:   "/v1/apps/{application-id}/channels/voice",
+	}
+
+	if input == nil {
+		input = &UpdateVoiceChannelInput{}
+	}
+
+	output = &UpdateVoiceChannelOutput{}
+	req = c.newRequest(op, input, output)
+	return
+}
+
+// UpdateVoiceChannel API operation for Amazon Pinpoint.
+//
+// Update a Voice channel.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon Pinpoint's
+// API operation UpdateVoiceChannel for usage and error information.
+//
+// Returned Error Codes:
+//   * ErrCodeBadRequestException "BadRequestException"
+//   Simple message object.
+//
+//   * ErrCodeInternalServerErrorException "InternalServerErrorException"
+//   Simple message object.
+//
+//   * ErrCodeForbiddenException "ForbiddenException"
+//   Simple message object.
+//
+//   * ErrCodeNotFoundException "NotFoundException"
+//   Simple message object.
+//
+//   * ErrCodeMethodNotAllowedException "MethodNotAllowedException"
+//   Simple message object.
+//
+//   * ErrCodeTooManyRequestsException "TooManyRequestsException"
+//   Simple message object.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01/UpdateVoiceChannel
+func (c *Pinpoint) UpdateVoiceChannel(input *UpdateVoiceChannelInput) (*UpdateVoiceChannelOutput, error) {
+	req, out := c.UpdateVoiceChannelRequest(input)
+	return out, req.Send()
+}
+
+// UpdateVoiceChannelWithContext is the same as UpdateVoiceChannel with the addition of
+// the ability to pass a context and additional request options.
+// +// See UpdateVoiceChannel for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pinpoint) UpdateVoiceChannelWithContext(ctx aws.Context, input *UpdateVoiceChannelInput, opts ...request.Option) (*UpdateVoiceChannelOutput, error) { + req, out := c.UpdateVoiceChannelRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Amazon Device Messaging channel definition. +type ADMChannelRequest struct { + _ struct{} `type:"structure"` + + // The Client ID that you obtained from the Amazon App Distribution Portal. + ClientId *string `type:"string"` + + // The Client Secret that you obtained from the Amazon App Distribution Portal. + ClientSecret *string `type:"string"` + + // Indicates whether or not the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` +} + +// String returns the string representation +func (s ADMChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ADMChannelRequest) GoString() string { + return s.String() +} + +// SetClientId sets the ClientId field's value. +func (s *ADMChannelRequest) SetClientId(v string) *ADMChannelRequest { + s.ClientId = &v + return s +} + +// SetClientSecret sets the ClientSecret field's value. +func (s *ADMChannelRequest) SetClientSecret(v string) *ADMChannelRequest { + s.ClientSecret = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *ADMChannelRequest) SetEnabled(v bool) *ADMChannelRequest { + s.Enabled = &v + return s +} + +// Amazon Device Messaging channel definition. +type ADMChannelResponse struct { + _ struct{} `type:"structure"` + + // The ID of the application to which the channel applies. + ApplicationId *string `type:"string"` + + // The date and time when this channel was created. + CreationDate *string `type:"string"` + + // Indicates whether or not the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // (Deprecated) An identifier for the channel. Retained for backwards compatibility. + Id *string `type:"string"` + + // Indicates whether or not the channel is archived. + IsArchived *bool `type:"boolean"` + + // The user who last updated this channel. + LastModifiedBy *string `type:"string"` + + // The date and time when this channel was last modified. + LastModifiedDate *string `type:"string"` + + // The platform type. For this channel, the value is always "ADM." + Platform *string `type:"string"` + + // The channel version. + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s ADMChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ADMChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *ADMChannelResponse) SetApplicationId(v string) *ADMChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. 
+func (s *ADMChannelResponse) SetCreationDate(v string) *ADMChannelResponse { + s.CreationDate = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *ADMChannelResponse) SetEnabled(v bool) *ADMChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *ADMChannelResponse) SetHasCredential(v bool) *ADMChannelResponse { + s.HasCredential = &v + return s +} + +// SetId sets the Id field's value. +func (s *ADMChannelResponse) SetId(v string) *ADMChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *ADMChannelResponse) SetIsArchived(v bool) *ADMChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *ADMChannelResponse) SetLastModifiedBy(v string) *ADMChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *ADMChannelResponse) SetLastModifiedDate(v string) *ADMChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *ADMChannelResponse) SetPlatform(v string) *ADMChannelResponse { + s.Platform = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *ADMChannelResponse) SetVersion(v int64) *ADMChannelResponse { + s.Version = &v + return s +} + +// ADM Message. +type ADMMessage struct { + _ struct{} `type:"structure"` + + // The action that occurs if the user taps a push notification delivered by + // the campaign: OPEN_APP - Your app launches, or it becomes the foreground + // app if it has been sent to the background. This is the default action. DEEP_LINK + // - Uses deep linking features in iOS and Android to open your app and display + // a designated user interface within the app. URL - The default mobile browser + // on the user's device launches and opens a web page at the URL you specify. + // Possible values include: OPEN_APP | DEEP_LINK | URL + Action *string `type:"string" enum:"Action"` + + // The message body of the notification. + Body *string `type:"string"` + + // Optional. Arbitrary string used to indicate multiple messages are logically + // the same and that ADM is allowed to drop previously enqueued messages in + // favor of this one. + ConsolidationKey *string `type:"string"` + + // The data payload used for a silent push. This payload is added to the notifications' + // data.pinpoint.jsonBody' object + Data map[string]*string `type:"map"` + + // Optional. Number of seconds ADM should retain the message if the device is + // offline + ExpiresAfter *string `type:"string"` + + // The icon image name of the asset saved in your application. + IconReference *string `type:"string"` + + // The URL that points to an image used as the large icon to the notification + // content view. + ImageIconUrl *string `type:"string"` + + // The URL that points to an image used in the push notification. + ImageUrl *string `type:"string"` + + // Optional. Base-64-encoded MD5 checksum of the data parameter. Used to verify + // data integrity + MD5 *string `type:"string"` + + // The Raw JSON formatted string to be used as the payload. This value overrides + // the message. + RawContent *string `type:"string"` + + // Indicates if the message should display on the users device. Silent pushes + // can be used for Remote Configuration and Phone Home use cases. 
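+	//
+	// As an illustrative sketch only (not generated SDK code), a data-only
+	// silent push could be assembled with the setters defined below; the map
+	// keys and values are placeholders:
+	//
+	//    msg := (&ADMMessage{}).
+	//        SetSilentPush(true).
+	//        SetData(map[string]*string{"configKey": aws.String("configValue")})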
+ SilentPush *bool `type:"boolean"` + + // The URL that points to an image used as the small icon for the notification + // which will be used to represent the notification in the status bar and content + // view + SmallImageIconUrl *string `type:"string"` + + // Indicates a sound to play when the device receives the notification. Supports + // default, or the filename of a sound resource bundled in the app. Android + // sound files must reside in /res/raw/ + Sound *string `type:"string"` + + // Default message substitutions. Can be overridden by individual address substitutions. + Substitutions map[string][]*string `type:"map"` + + // The message title that displays above the message on the user's device. + Title *string `type:"string"` + + // The URL to open in the user's mobile browser. Used if the value for Action + // is URL. + Url *string `type:"string"` +} + +// String returns the string representation +func (s ADMMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ADMMessage) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *ADMMessage) SetAction(v string) *ADMMessage { + s.Action = &v + return s +} + +// SetBody sets the Body field's value. +func (s *ADMMessage) SetBody(v string) *ADMMessage { + s.Body = &v + return s +} + +// SetConsolidationKey sets the ConsolidationKey field's value. +func (s *ADMMessage) SetConsolidationKey(v string) *ADMMessage { + s.ConsolidationKey = &v + return s +} + +// SetData sets the Data field's value. +func (s *ADMMessage) SetData(v map[string]*string) *ADMMessage { + s.Data = v + return s +} + +// SetExpiresAfter sets the ExpiresAfter field's value. +func (s *ADMMessage) SetExpiresAfter(v string) *ADMMessage { + s.ExpiresAfter = &v + return s +} + +// SetIconReference sets the IconReference field's value. +func (s *ADMMessage) SetIconReference(v string) *ADMMessage { + s.IconReference = &v + return s +} + +// SetImageIconUrl sets the ImageIconUrl field's value. +func (s *ADMMessage) SetImageIconUrl(v string) *ADMMessage { + s.ImageIconUrl = &v + return s +} + +// SetImageUrl sets the ImageUrl field's value. +func (s *ADMMessage) SetImageUrl(v string) *ADMMessage { + s.ImageUrl = &v + return s +} + +// SetMD5 sets the MD5 field's value. +func (s *ADMMessage) SetMD5(v string) *ADMMessage { + s.MD5 = &v + return s +} + +// SetRawContent sets the RawContent field's value. +func (s *ADMMessage) SetRawContent(v string) *ADMMessage { + s.RawContent = &v + return s +} + +// SetSilentPush sets the SilentPush field's value. +func (s *ADMMessage) SetSilentPush(v bool) *ADMMessage { + s.SilentPush = &v + return s +} + +// SetSmallImageIconUrl sets the SmallImageIconUrl field's value. +func (s *ADMMessage) SetSmallImageIconUrl(v string) *ADMMessage { + s.SmallImageIconUrl = &v + return s +} + +// SetSound sets the Sound field's value. +func (s *ADMMessage) SetSound(v string) *ADMMessage { + s.Sound = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *ADMMessage) SetSubstitutions(v map[string][]*string) *ADMMessage { + s.Substitutions = v + return s +} + +// SetTitle sets the Title field's value. +func (s *ADMMessage) SetTitle(v string) *ADMMessage { + s.Title = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *ADMMessage) SetUrl(v string) *ADMMessage { + s.Url = &v + return s +} + +// Apple Push Notification Service channel definition. 
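+//
+// As an illustrative sketch only (not generated SDK code), a token-based APNs
+// channel request could be assembled with the setters defined below; every
+// value shown is a placeholder:
+//
+//    req := (&APNSChannelRequest{}).
+//        SetBundleId("com.example.app").
+//        SetTeamId("EXAMPLETEAMID").
+//        SetTokenKey("-----BEGIN PRIVATE KEY-----...").
+//        SetTokenKeyId("EXAMPLEKEYID").
+//        SetEnabled(true)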
+type APNSChannelRequest struct { + _ struct{} `type:"structure"` + + // The bundle id used for APNs Tokens. + BundleId *string `type:"string"` + + // The distribution certificate from Apple. + Certificate *string `type:"string"` + + // The default authentication method used for APNs. + DefaultAuthenticationMethod *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // The certificate private key. + PrivateKey *string `type:"string"` + + // The team id used for APNs Tokens. + TeamId *string `type:"string"` + + // The token key used for APNs Tokens. + TokenKey *string `type:"string"` + + // The token key used for APNs Tokens. + TokenKeyId *string `type:"string"` +} + +// String returns the string representation +func (s APNSChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSChannelRequest) GoString() string { + return s.String() +} + +// SetBundleId sets the BundleId field's value. +func (s *APNSChannelRequest) SetBundleId(v string) *APNSChannelRequest { + s.BundleId = &v + return s +} + +// SetCertificate sets the Certificate field's value. +func (s *APNSChannelRequest) SetCertificate(v string) *APNSChannelRequest { + s.Certificate = &v + return s +} + +// SetDefaultAuthenticationMethod sets the DefaultAuthenticationMethod field's value. +func (s *APNSChannelRequest) SetDefaultAuthenticationMethod(v string) *APNSChannelRequest { + s.DefaultAuthenticationMethod = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *APNSChannelRequest) SetEnabled(v bool) *APNSChannelRequest { + s.Enabled = &v + return s +} + +// SetPrivateKey sets the PrivateKey field's value. +func (s *APNSChannelRequest) SetPrivateKey(v string) *APNSChannelRequest { + s.PrivateKey = &v + return s +} + +// SetTeamId sets the TeamId field's value. +func (s *APNSChannelRequest) SetTeamId(v string) *APNSChannelRequest { + s.TeamId = &v + return s +} + +// SetTokenKey sets the TokenKey field's value. +func (s *APNSChannelRequest) SetTokenKey(v string) *APNSChannelRequest { + s.TokenKey = &v + return s +} + +// SetTokenKeyId sets the TokenKeyId field's value. +func (s *APNSChannelRequest) SetTokenKeyId(v string) *APNSChannelRequest { + s.TokenKeyId = &v + return s +} + +// Apple Distribution Push Notification Service channel definition. +type APNSChannelResponse struct { + _ struct{} `type:"structure"` + + // The ID of the application that the channel applies to. + ApplicationId *string `type:"string"` + + // The date and time when this channel was created. + CreationDate *string `type:"string"` + + // The default authentication method used for APNs. + DefaultAuthenticationMethod *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // Indicates whether the channel is configured with a key for APNs token authentication. + // Provide a token key by setting the TokenKey attribute. + HasTokenKey *bool `type:"boolean"` + + // (Deprecated) An identifier for the channel. Retained for backwards compatibility. + Id *string `type:"string"` + + // Indicates whether or not the channel is archived. + IsArchived *bool `type:"boolean"` + + // The user who last updated this channel. + LastModifiedBy *string `type:"string"` + + // The date and time when this channel was last modified. 
+ LastModifiedDate *string `type:"string"` + + // The platform type. For this channel, the value is always "ADM." + Platform *string `type:"string"` + + // The channel version. + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s APNSChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *APNSChannelResponse) SetApplicationId(v string) *APNSChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *APNSChannelResponse) SetCreationDate(v string) *APNSChannelResponse { + s.CreationDate = &v + return s +} + +// SetDefaultAuthenticationMethod sets the DefaultAuthenticationMethod field's value. +func (s *APNSChannelResponse) SetDefaultAuthenticationMethod(v string) *APNSChannelResponse { + s.DefaultAuthenticationMethod = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *APNSChannelResponse) SetEnabled(v bool) *APNSChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *APNSChannelResponse) SetHasCredential(v bool) *APNSChannelResponse { + s.HasCredential = &v + return s +} + +// SetHasTokenKey sets the HasTokenKey field's value. +func (s *APNSChannelResponse) SetHasTokenKey(v bool) *APNSChannelResponse { + s.HasTokenKey = &v + return s +} + +// SetId sets the Id field's value. +func (s *APNSChannelResponse) SetId(v string) *APNSChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *APNSChannelResponse) SetIsArchived(v bool) *APNSChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *APNSChannelResponse) SetLastModifiedBy(v string) *APNSChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *APNSChannelResponse) SetLastModifiedDate(v string) *APNSChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *APNSChannelResponse) SetPlatform(v string) *APNSChannelResponse { + s.Platform = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *APNSChannelResponse) SetVersion(v int64) *APNSChannelResponse { + s.Version = &v + return s +} + +// APNS Message. +type APNSMessage struct { + _ struct{} `type:"structure"` + + // The action that occurs if the user taps a push notification delivered by + // the campaign: OPEN_APP - Your app launches, or it becomes the foreground + // app if it has been sent to the background. This is the default action. DEEP_LINK + // - Uses deep linking features in iOS and Android to open your app and display + // a designated user interface within the app. URL - The default mobile browser + // on the user's device launches and opens a web page at the URL you specify. + // Possible values include: OPEN_APP | DEEP_LINK | URL + Action *string `type:"string" enum:"Action"` + + // Include this key when you want the system to modify the badge of your app + // icon. If this key is not included in the dictionary, the badge is not changed. + // To remove the badge, set the value of this key to 0. + Badge *int64 `type:"integer"` + + // The message body of the notification. 
+ Body *string `type:"string"` + + // Provide this key with a string value that represents the notification's type. + // This value corresponds to the value in the identifier property of one of + // your app's registered categories. + Category *string `type:"string"` + + // An ID that, if assigned to multiple messages, causes APNs to coalesce the + // messages into a single push notification instead of delivering each message + // individually. The value must not exceed 64 bytes. Amazon Pinpoint uses this + // value to set the apns-collapse-id request header when it sends the message + // to APNs. + CollapseId *string `type:"string"` + + // The data payload used for a silent push. This payload is added to the notifications' + // data.pinpoint.jsonBody' object + Data map[string]*string `type:"map"` + + // A URL that refers to the location of an image or video that you want to display + // in the push notification. + MediaUrl *string `type:"string"` + + // The preferred authentication method, either "CERTIFICATE" or "TOKEN" + PreferredAuthenticationMethod *string `type:"string"` + + // The message priority. Amazon Pinpoint uses this value to set the apns-priority + // request header when it sends the message to APNs. Accepts the following values:"5" + // - Low priority. Messages might be delayed, delivered in groups, and throttled."10" + // - High priority. Messages are sent immediately. High priority messages must + // cause an alert, sound, or badge on the receiving device.The default value + // is "10".The equivalent values for FCM or GCM messages are "normal" and "high". + // Amazon Pinpoint accepts these values for APNs messages and converts them.For + // more information about the apns-priority parameter, see Communicating with + // APNs in the APNs Local and Remote Notification Programming Guide. + Priority *string `type:"string"` + + // The Raw JSON formatted string to be used as the payload. This value overrides + // the message. + RawContent *string `type:"string"` + + // Indicates if the message should display on the users device. Silent pushes + // can be used for Remote Configuration and Phone Home use cases. + SilentPush *bool `type:"boolean"` + + // Include this key when you want the system to play a sound. The value of this + // key is the name of a sound file in your app's main bundle or in the Library/Sounds + // folder of your app's data container. If the sound file cannot be found, or + // if you specify defaultfor the value, the system plays the default alert sound. + Sound *string `type:"string"` + + // Default message substitutions. Can be overridden by individual address substitutions. + Substitutions map[string][]*string `type:"map"` + + // Provide this key with a string value that represents the app-specific identifier + // for grouping notifications. If you provide a Notification Content app extension, + // you can use this value to group your notifications together. + ThreadId *string `type:"string"` + + // The length of time (in seconds) that APNs stores and attempts to deliver + // the message. If the value is 0, APNs does not store the message or attempt + // to deliver it more than once. Amazon Pinpoint uses this value to set the + // apns-expiration request header when it sends the message to APNs. + TimeToLive *int64 `type:"integer"` + + // The message title that displays above the message on the user's device. + Title *string `type:"string"` + + // The URL to open in the user's mobile browser. Used if the value for Action + // is URL. 
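+	//
+	// As an illustrative sketch only (not generated SDK code), a notification
+	// that opens a web page could be assembled with the setters defined below;
+	// all values are placeholders:
+	//
+	//    msg := (&APNSMessage{}).
+	//        SetAction("URL").
+	//        SetTitle("Example title").
+	//        SetBody("Example body").
+	//        SetUrl("https://example.com")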
+ Url *string `type:"string"` +} + +// String returns the string representation +func (s APNSMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSMessage) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *APNSMessage) SetAction(v string) *APNSMessage { + s.Action = &v + return s +} + +// SetBadge sets the Badge field's value. +func (s *APNSMessage) SetBadge(v int64) *APNSMessage { + s.Badge = &v + return s +} + +// SetBody sets the Body field's value. +func (s *APNSMessage) SetBody(v string) *APNSMessage { + s.Body = &v + return s +} + +// SetCategory sets the Category field's value. +func (s *APNSMessage) SetCategory(v string) *APNSMessage { + s.Category = &v + return s +} + +// SetCollapseId sets the CollapseId field's value. +func (s *APNSMessage) SetCollapseId(v string) *APNSMessage { + s.CollapseId = &v + return s +} + +// SetData sets the Data field's value. +func (s *APNSMessage) SetData(v map[string]*string) *APNSMessage { + s.Data = v + return s +} + +// SetMediaUrl sets the MediaUrl field's value. +func (s *APNSMessage) SetMediaUrl(v string) *APNSMessage { + s.MediaUrl = &v + return s +} + +// SetPreferredAuthenticationMethod sets the PreferredAuthenticationMethod field's value. +func (s *APNSMessage) SetPreferredAuthenticationMethod(v string) *APNSMessage { + s.PreferredAuthenticationMethod = &v + return s +} + +// SetPriority sets the Priority field's value. +func (s *APNSMessage) SetPriority(v string) *APNSMessage { + s.Priority = &v + return s +} + +// SetRawContent sets the RawContent field's value. +func (s *APNSMessage) SetRawContent(v string) *APNSMessage { + s.RawContent = &v + return s +} + +// SetSilentPush sets the SilentPush field's value. +func (s *APNSMessage) SetSilentPush(v bool) *APNSMessage { + s.SilentPush = &v + return s +} + +// SetSound sets the Sound field's value. +func (s *APNSMessage) SetSound(v string) *APNSMessage { + s.Sound = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *APNSMessage) SetSubstitutions(v map[string][]*string) *APNSMessage { + s.Substitutions = v + return s +} + +// SetThreadId sets the ThreadId field's value. +func (s *APNSMessage) SetThreadId(v string) *APNSMessage { + s.ThreadId = &v + return s +} + +// SetTimeToLive sets the TimeToLive field's value. +func (s *APNSMessage) SetTimeToLive(v int64) *APNSMessage { + s.TimeToLive = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *APNSMessage) SetTitle(v string) *APNSMessage { + s.Title = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *APNSMessage) SetUrl(v string) *APNSMessage { + s.Url = &v + return s +} + +// Apple Development Push Notification Service channel definition. +type APNSSandboxChannelRequest struct { + _ struct{} `type:"structure"` + + // The bundle id used for APNs Tokens. + BundleId *string `type:"string"` + + // The distribution certificate from Apple. + Certificate *string `type:"string"` + + // The default authentication method used for APNs. + DefaultAuthenticationMethod *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // The certificate private key. + PrivateKey *string `type:"string"` + + // The team id used for APNs Tokens. + TeamId *string `type:"string"` + + // The token key used for APNs Tokens. + TokenKey *string `type:"string"` + + // The token key used for APNs Tokens. 
+ TokenKeyId *string `type:"string"` +} + +// String returns the string representation +func (s APNSSandboxChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSSandboxChannelRequest) GoString() string { + return s.String() +} + +// SetBundleId sets the BundleId field's value. +func (s *APNSSandboxChannelRequest) SetBundleId(v string) *APNSSandboxChannelRequest { + s.BundleId = &v + return s +} + +// SetCertificate sets the Certificate field's value. +func (s *APNSSandboxChannelRequest) SetCertificate(v string) *APNSSandboxChannelRequest { + s.Certificate = &v + return s +} + +// SetDefaultAuthenticationMethod sets the DefaultAuthenticationMethod field's value. +func (s *APNSSandboxChannelRequest) SetDefaultAuthenticationMethod(v string) *APNSSandboxChannelRequest { + s.DefaultAuthenticationMethod = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *APNSSandboxChannelRequest) SetEnabled(v bool) *APNSSandboxChannelRequest { + s.Enabled = &v + return s +} + +// SetPrivateKey sets the PrivateKey field's value. +func (s *APNSSandboxChannelRequest) SetPrivateKey(v string) *APNSSandboxChannelRequest { + s.PrivateKey = &v + return s +} + +// SetTeamId sets the TeamId field's value. +func (s *APNSSandboxChannelRequest) SetTeamId(v string) *APNSSandboxChannelRequest { + s.TeamId = &v + return s +} + +// SetTokenKey sets the TokenKey field's value. +func (s *APNSSandboxChannelRequest) SetTokenKey(v string) *APNSSandboxChannelRequest { + s.TokenKey = &v + return s +} + +// SetTokenKeyId sets the TokenKeyId field's value. +func (s *APNSSandboxChannelRequest) SetTokenKeyId(v string) *APNSSandboxChannelRequest { + s.TokenKeyId = &v + return s +} + +// Apple Development Push Notification Service channel definition. +type APNSSandboxChannelResponse struct { + _ struct{} `type:"structure"` + + // The ID of the application to which the channel applies. + ApplicationId *string `type:"string"` + + // When was this segment created + CreationDate *string `type:"string"` + + // The default authentication method used for APNs. + DefaultAuthenticationMethod *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // Indicates whether the channel is configured with a key for APNs token authentication. + // Provide a token key by setting the TokenKey attribute. + HasTokenKey *bool `type:"boolean"` + + // Channel ID. Not used, only for backwards compatibility. + Id *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who last updated this entry + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // The platform type. Will be APNS_SANDBOX. + Platform *string `type:"string"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s APNSSandboxChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSSandboxChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *APNSSandboxChannelResponse) SetApplicationId(v string) *APNSSandboxChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. 
+func (s *APNSSandboxChannelResponse) SetCreationDate(v string) *APNSSandboxChannelResponse { + s.CreationDate = &v + return s +} + +// SetDefaultAuthenticationMethod sets the DefaultAuthenticationMethod field's value. +func (s *APNSSandboxChannelResponse) SetDefaultAuthenticationMethod(v string) *APNSSandboxChannelResponse { + s.DefaultAuthenticationMethod = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *APNSSandboxChannelResponse) SetEnabled(v bool) *APNSSandboxChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *APNSSandboxChannelResponse) SetHasCredential(v bool) *APNSSandboxChannelResponse { + s.HasCredential = &v + return s +} + +// SetHasTokenKey sets the HasTokenKey field's value. +func (s *APNSSandboxChannelResponse) SetHasTokenKey(v bool) *APNSSandboxChannelResponse { + s.HasTokenKey = &v + return s +} + +// SetId sets the Id field's value. +func (s *APNSSandboxChannelResponse) SetId(v string) *APNSSandboxChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *APNSSandboxChannelResponse) SetIsArchived(v bool) *APNSSandboxChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *APNSSandboxChannelResponse) SetLastModifiedBy(v string) *APNSSandboxChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *APNSSandboxChannelResponse) SetLastModifiedDate(v string) *APNSSandboxChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *APNSSandboxChannelResponse) SetPlatform(v string) *APNSSandboxChannelResponse { + s.Platform = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *APNSSandboxChannelResponse) SetVersion(v int64) *APNSSandboxChannelResponse { + s.Version = &v + return s +} + +// Apple VoIP Push Notification Service channel definition. +type APNSVoipChannelRequest struct { + _ struct{} `type:"structure"` + + // The bundle id used for APNs Tokens. + BundleId *string `type:"string"` + + // The distribution certificate from Apple. + Certificate *string `type:"string"` + + // The default authentication method used for APNs. + DefaultAuthenticationMethod *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // The certificate private key. + PrivateKey *string `type:"string"` + + // The team id used for APNs Tokens. + TeamId *string `type:"string"` + + // The token key used for APNs Tokens. + TokenKey *string `type:"string"` + + // The token key used for APNs Tokens. + TokenKeyId *string `type:"string"` +} + +// String returns the string representation +func (s APNSVoipChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSVoipChannelRequest) GoString() string { + return s.String() +} + +// SetBundleId sets the BundleId field's value. +func (s *APNSVoipChannelRequest) SetBundleId(v string) *APNSVoipChannelRequest { + s.BundleId = &v + return s +} + +// SetCertificate sets the Certificate field's value. +func (s *APNSVoipChannelRequest) SetCertificate(v string) *APNSVoipChannelRequest { + s.Certificate = &v + return s +} + +// SetDefaultAuthenticationMethod sets the DefaultAuthenticationMethod field's value. 
+func (s *APNSVoipChannelRequest) SetDefaultAuthenticationMethod(v string) *APNSVoipChannelRequest { + s.DefaultAuthenticationMethod = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *APNSVoipChannelRequest) SetEnabled(v bool) *APNSVoipChannelRequest { + s.Enabled = &v + return s +} + +// SetPrivateKey sets the PrivateKey field's value. +func (s *APNSVoipChannelRequest) SetPrivateKey(v string) *APNSVoipChannelRequest { + s.PrivateKey = &v + return s +} + +// SetTeamId sets the TeamId field's value. +func (s *APNSVoipChannelRequest) SetTeamId(v string) *APNSVoipChannelRequest { + s.TeamId = &v + return s +} + +// SetTokenKey sets the TokenKey field's value. +func (s *APNSVoipChannelRequest) SetTokenKey(v string) *APNSVoipChannelRequest { + s.TokenKey = &v + return s +} + +// SetTokenKeyId sets the TokenKeyId field's value. +func (s *APNSVoipChannelRequest) SetTokenKeyId(v string) *APNSVoipChannelRequest { + s.TokenKeyId = &v + return s +} + +// Apple VoIP Push Notification Service channel definition. +type APNSVoipChannelResponse struct { + _ struct{} `type:"structure"` + + // Application id + ApplicationId *string `type:"string"` + + // When was this segment created + CreationDate *string `type:"string"` + + // The default authentication method used for APNs. + DefaultAuthenticationMethod *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // If the channel is registered with a token key for authentication. + HasTokenKey *bool `type:"boolean"` + + // Channel ID. Not used, only for backwards compatibility. + Id *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who made the last change + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // The platform type. Will be APNS. + Platform *string `type:"string"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s APNSVoipChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSVoipChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *APNSVoipChannelResponse) SetApplicationId(v string) *APNSVoipChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *APNSVoipChannelResponse) SetCreationDate(v string) *APNSVoipChannelResponse { + s.CreationDate = &v + return s +} + +// SetDefaultAuthenticationMethod sets the DefaultAuthenticationMethod field's value. +func (s *APNSVoipChannelResponse) SetDefaultAuthenticationMethod(v string) *APNSVoipChannelResponse { + s.DefaultAuthenticationMethod = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *APNSVoipChannelResponse) SetEnabled(v bool) *APNSVoipChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *APNSVoipChannelResponse) SetHasCredential(v bool) *APNSVoipChannelResponse { + s.HasCredential = &v + return s +} + +// SetHasTokenKey sets the HasTokenKey field's value. +func (s *APNSVoipChannelResponse) SetHasTokenKey(v bool) *APNSVoipChannelResponse { + s.HasTokenKey = &v + return s +} + +// SetId sets the Id field's value. 
+func (s *APNSVoipChannelResponse) SetId(v string) *APNSVoipChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *APNSVoipChannelResponse) SetIsArchived(v bool) *APNSVoipChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *APNSVoipChannelResponse) SetLastModifiedBy(v string) *APNSVoipChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *APNSVoipChannelResponse) SetLastModifiedDate(v string) *APNSVoipChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *APNSVoipChannelResponse) SetPlatform(v string) *APNSVoipChannelResponse { + s.Platform = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *APNSVoipChannelResponse) SetVersion(v int64) *APNSVoipChannelResponse { + s.Version = &v + return s +} + +// Apple VoIP Developer Push Notification Service channel definition. +type APNSVoipSandboxChannelRequest struct { + _ struct{} `type:"structure"` + + // The bundle id used for APNs Tokens. + BundleId *string `type:"string"` + + // The distribution certificate from Apple. + Certificate *string `type:"string"` + + // The default authentication method used for APNs. + DefaultAuthenticationMethod *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // The certificate private key. + PrivateKey *string `type:"string"` + + // The team id used for APNs Tokens. + TeamId *string `type:"string"` + + // The token key used for APNs Tokens. + TokenKey *string `type:"string"` + + // The token key used for APNs Tokens. + TokenKeyId *string `type:"string"` +} + +// String returns the string representation +func (s APNSVoipSandboxChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSVoipSandboxChannelRequest) GoString() string { + return s.String() +} + +// SetBundleId sets the BundleId field's value. +func (s *APNSVoipSandboxChannelRequest) SetBundleId(v string) *APNSVoipSandboxChannelRequest { + s.BundleId = &v + return s +} + +// SetCertificate sets the Certificate field's value. +func (s *APNSVoipSandboxChannelRequest) SetCertificate(v string) *APNSVoipSandboxChannelRequest { + s.Certificate = &v + return s +} + +// SetDefaultAuthenticationMethod sets the DefaultAuthenticationMethod field's value. +func (s *APNSVoipSandboxChannelRequest) SetDefaultAuthenticationMethod(v string) *APNSVoipSandboxChannelRequest { + s.DefaultAuthenticationMethod = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *APNSVoipSandboxChannelRequest) SetEnabled(v bool) *APNSVoipSandboxChannelRequest { + s.Enabled = &v + return s +} + +// SetPrivateKey sets the PrivateKey field's value. +func (s *APNSVoipSandboxChannelRequest) SetPrivateKey(v string) *APNSVoipSandboxChannelRequest { + s.PrivateKey = &v + return s +} + +// SetTeamId sets the TeamId field's value. +func (s *APNSVoipSandboxChannelRequest) SetTeamId(v string) *APNSVoipSandboxChannelRequest { + s.TeamId = &v + return s +} + +// SetTokenKey sets the TokenKey field's value. +func (s *APNSVoipSandboxChannelRequest) SetTokenKey(v string) *APNSVoipSandboxChannelRequest { + s.TokenKey = &v + return s +} + +// SetTokenKeyId sets the TokenKeyId field's value. 
+func (s *APNSVoipSandboxChannelRequest) SetTokenKeyId(v string) *APNSVoipSandboxChannelRequest { + s.TokenKeyId = &v + return s +} + +// Apple VoIP Developer Push Notification Service channel definition. +type APNSVoipSandboxChannelResponse struct { + _ struct{} `type:"structure"` + + // Application id + ApplicationId *string `type:"string"` + + // When was this segment created + CreationDate *string `type:"string"` + + // The default authentication method used for APNs. + DefaultAuthenticationMethod *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // If the channel is registered with a token key for authentication. + HasTokenKey *bool `type:"boolean"` + + // Channel ID. Not used, only for backwards compatibility. + Id *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who made the last change + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // The platform type. Will be APNS. + Platform *string `type:"string"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s APNSVoipSandboxChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s APNSVoipSandboxChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *APNSVoipSandboxChannelResponse) SetApplicationId(v string) *APNSVoipSandboxChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *APNSVoipSandboxChannelResponse) SetCreationDate(v string) *APNSVoipSandboxChannelResponse { + s.CreationDate = &v + return s +} + +// SetDefaultAuthenticationMethod sets the DefaultAuthenticationMethod field's value. +func (s *APNSVoipSandboxChannelResponse) SetDefaultAuthenticationMethod(v string) *APNSVoipSandboxChannelResponse { + s.DefaultAuthenticationMethod = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *APNSVoipSandboxChannelResponse) SetEnabled(v bool) *APNSVoipSandboxChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *APNSVoipSandboxChannelResponse) SetHasCredential(v bool) *APNSVoipSandboxChannelResponse { + s.HasCredential = &v + return s +} + +// SetHasTokenKey sets the HasTokenKey field's value. +func (s *APNSVoipSandboxChannelResponse) SetHasTokenKey(v bool) *APNSVoipSandboxChannelResponse { + s.HasTokenKey = &v + return s +} + +// SetId sets the Id field's value. +func (s *APNSVoipSandboxChannelResponse) SetId(v string) *APNSVoipSandboxChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *APNSVoipSandboxChannelResponse) SetIsArchived(v bool) *APNSVoipSandboxChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *APNSVoipSandboxChannelResponse) SetLastModifiedBy(v string) *APNSVoipSandboxChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. 
+func (s *APNSVoipSandboxChannelResponse) SetLastModifiedDate(v string) *APNSVoipSandboxChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *APNSVoipSandboxChannelResponse) SetPlatform(v string) *APNSVoipSandboxChannelResponse { + s.Platform = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *APNSVoipSandboxChannelResponse) SetVersion(v int64) *APNSVoipSandboxChannelResponse { + s.Version = &v + return s +} + +// Activities for campaign. +type ActivitiesResponse struct { + _ struct{} `type:"structure"` + + // List of campaign activities + Item []*ActivityResponse `type:"list"` + + // The string that you use in a subsequent request to get the next page of results + // in a paginated response. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ActivitiesResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ActivitiesResponse) GoString() string { + return s.String() +} + +// SetItem sets the Item field's value. +func (s *ActivitiesResponse) SetItem(v []*ActivityResponse) *ActivitiesResponse { + s.Item = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ActivitiesResponse) SetNextToken(v string) *ActivitiesResponse { + s.NextToken = &v + return s +} + +// Activity definition +type ActivityResponse struct { + _ struct{} `type:"structure"` + + // The ID of the application to which the campaign applies. + ApplicationId *string `type:"string"` + + // The ID of the campaign to which the activity applies. + CampaignId *string `type:"string"` + + // The actual time the activity was marked CANCELLED or COMPLETED. Provided + // in ISO 8601 format. + End *string `type:"string"` + + // The unique activity ID. + Id *string `type:"string"` + + // Indicates whether the activity succeeded.Valid values: SUCCESS, FAIL + Result *string `type:"string"` + + // The scheduled start time for the activity in ISO 8601 format. + ScheduledStart *string `type:"string"` + + // The actual start time of the activity in ISO 8601 format. + Start *string `type:"string"` + + // The state of the activity.Valid values: PENDING, INITIALIZING, RUNNING, PAUSED, + // CANCELLED, COMPLETED + State *string `type:"string"` + + // The total number of endpoints to which the campaign successfully delivered + // messages. + SuccessfulEndpointCount *int64 `type:"integer"` + + // The total number of timezones completed. + TimezonesCompletedCount *int64 `type:"integer"` + + // The total number of unique timezones present in the segment. + TimezonesTotalCount *int64 `type:"integer"` + + // The total number of endpoints to which the campaign attempts to deliver messages. + TotalEndpointCount *int64 `type:"integer"` + + // The ID of a variation of the campaign used for A/B testing. + TreatmentId *string `type:"string"` +} + +// String returns the string representation +func (s ActivityResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ActivityResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *ActivityResponse) SetApplicationId(v string) *ActivityResponse { + s.ApplicationId = &v + return s +} + +// SetCampaignId sets the CampaignId field's value. 
+func (s *ActivityResponse) SetCampaignId(v string) *ActivityResponse { + s.CampaignId = &v + return s +} + +// SetEnd sets the End field's value. +func (s *ActivityResponse) SetEnd(v string) *ActivityResponse { + s.End = &v + return s +} + +// SetId sets the Id field's value. +func (s *ActivityResponse) SetId(v string) *ActivityResponse { + s.Id = &v + return s +} + +// SetResult sets the Result field's value. +func (s *ActivityResponse) SetResult(v string) *ActivityResponse { + s.Result = &v + return s +} + +// SetScheduledStart sets the ScheduledStart field's value. +func (s *ActivityResponse) SetScheduledStart(v string) *ActivityResponse { + s.ScheduledStart = &v + return s +} + +// SetStart sets the Start field's value. +func (s *ActivityResponse) SetStart(v string) *ActivityResponse { + s.Start = &v + return s +} + +// SetState sets the State field's value. +func (s *ActivityResponse) SetState(v string) *ActivityResponse { + s.State = &v + return s +} + +// SetSuccessfulEndpointCount sets the SuccessfulEndpointCount field's value. +func (s *ActivityResponse) SetSuccessfulEndpointCount(v int64) *ActivityResponse { + s.SuccessfulEndpointCount = &v + return s +} + +// SetTimezonesCompletedCount sets the TimezonesCompletedCount field's value. +func (s *ActivityResponse) SetTimezonesCompletedCount(v int64) *ActivityResponse { + s.TimezonesCompletedCount = &v + return s +} + +// SetTimezonesTotalCount sets the TimezonesTotalCount field's value. +func (s *ActivityResponse) SetTimezonesTotalCount(v int64) *ActivityResponse { + s.TimezonesTotalCount = &v + return s +} + +// SetTotalEndpointCount sets the TotalEndpointCount field's value. +func (s *ActivityResponse) SetTotalEndpointCount(v int64) *ActivityResponse { + s.TotalEndpointCount = &v + return s +} + +// SetTreatmentId sets the TreatmentId field's value. +func (s *ActivityResponse) SetTreatmentId(v string) *ActivityResponse { + s.TreatmentId = &v + return s +} + +// Address configuration. +type AddressConfiguration struct { + _ struct{} `type:"structure"` + + // Body override. If specified will override default body. + BodyOverride *string `type:"string"` + + // The channel type.Valid values: GCM | APNS | APNS_SANDBOX | APNS_VOIP | APNS_VOIP_SANDBOX + // | ADM | SMS | EMAIL | BAIDU + ChannelType *string `type:"string" enum:"ChannelType"` + + // A map of custom attributes to attributes to be attached to the message for + // this address. This payload is added to the push notification's 'data.pinpoint' + // object or added to the email/sms delivery receipt event attributes. + Context map[string]*string `type:"map"` + + // The Raw JSON formatted string to be used as the payload. This value overrides + // the message. + RawContent *string `type:"string"` + + // A map of substitution values for the message to be merged with the DefaultMessage's + // substitutions. Substitutions on this map take precedence over the all other + // substitutions. + Substitutions map[string][]*string `type:"map"` + + // Title override. If specified will override default title if applicable. + TitleOverride *string `type:"string"` +} + +// String returns the string representation +func (s AddressConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddressConfiguration) GoString() string { + return s.String() +} + +// SetBodyOverride sets the BodyOverride field's value. 
+func (s *AddressConfiguration) SetBodyOverride(v string) *AddressConfiguration { + s.BodyOverride = &v + return s +} + +// SetChannelType sets the ChannelType field's value. +func (s *AddressConfiguration) SetChannelType(v string) *AddressConfiguration { + s.ChannelType = &v + return s +} + +// SetContext sets the Context field's value. +func (s *AddressConfiguration) SetContext(v map[string]*string) *AddressConfiguration { + s.Context = v + return s +} + +// SetRawContent sets the RawContent field's value. +func (s *AddressConfiguration) SetRawContent(v string) *AddressConfiguration { + s.RawContent = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *AddressConfiguration) SetSubstitutions(v map[string][]*string) *AddressConfiguration { + s.Substitutions = v + return s +} + +// SetTitleOverride sets the TitleOverride field's value. +func (s *AddressConfiguration) SetTitleOverride(v string) *AddressConfiguration { + s.TitleOverride = &v + return s +} + +// Application Response. +type ApplicationResponse struct { + _ struct{} `type:"structure"` + + // The unique application ID. + Id *string `type:"string"` + + // The display name of the application. + Name *string `type:"string"` +} + +// String returns the string representation +func (s ApplicationResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ApplicationResponse) GoString() string { + return s.String() +} + +// SetId sets the Id field's value. +func (s *ApplicationResponse) SetId(v string) *ApplicationResponse { + s.Id = &v + return s +} + +// SetName sets the Name field's value. +func (s *ApplicationResponse) SetName(v string) *ApplicationResponse { + s.Name = &v + return s +} + +// Application settings. +type ApplicationSettingsResource struct { + _ struct{} `type:"structure"` + + // The unique ID for the application. + ApplicationId *string `type:"string"` + + // Default campaign hook. + CampaignHook *CampaignHook `type:"structure"` + + // The date that the settings were last updated in ISO 8601 format. + LastModifiedDate *string `type:"string"` + + // The default campaign limits for the app. These limits apply to each campaign + // for the app, unless the campaign overrides the default with limits of its + // own. + Limits *CampaignLimits `type:"structure"` + + // The default quiet time for the app. Campaigns in the app don't send messages + // to endpoints during the quiet time.Note: Make sure that your endpoints include + // the Demographics.Timezone attribute if you plan to enable a quiet time for + // your app. If your endpoints don't include this attribute, they'll receive + // the messages that you send them, even if quiet time is enabled.When you set + // up an app to use quiet time, campaigns in that app don't send messages during + // the time range you specified, as long as all of the following are true:- + // The endpoint includes a valid Demographic.Timezone attribute.- The current + // time in the endpoint's time zone is later than or equal to the time specified + // in the QuietTime.Start attribute for the app (or campaign, if applicable).- + // The current time in the endpoint's time zone is earlier than or equal to + // the time specified in the QuietTime.End attribute for the app (or campaign, + // if applicable).Individual campaigns within the app can have their own quiet + // time settings, which override the quiet time settings at the app level. 
+ QuietTime *QuietTime `type:"structure"`
+}
+
+// String returns the string representation
+func (s ApplicationSettingsResource) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ApplicationSettingsResource) GoString() string {
+ return s.String()
+}
+
+// SetApplicationId sets the ApplicationId field's value.
+func (s *ApplicationSettingsResource) SetApplicationId(v string) *ApplicationSettingsResource {
+ s.ApplicationId = &v
+ return s
+}
+
+// SetCampaignHook sets the CampaignHook field's value.
+func (s *ApplicationSettingsResource) SetCampaignHook(v *CampaignHook) *ApplicationSettingsResource {
+ s.CampaignHook = v
+ return s
+}
+
+// SetLastModifiedDate sets the LastModifiedDate field's value.
+func (s *ApplicationSettingsResource) SetLastModifiedDate(v string) *ApplicationSettingsResource {
+ s.LastModifiedDate = &v
+ return s
+}
+
+// SetLimits sets the Limits field's value.
+func (s *ApplicationSettingsResource) SetLimits(v *CampaignLimits) *ApplicationSettingsResource {
+ s.Limits = v
+ return s
+}
+
+// SetQuietTime sets the QuietTime field's value.
+func (s *ApplicationSettingsResource) SetQuietTime(v *QuietTime) *ApplicationSettingsResource {
+ s.QuietTime = v
+ return s
+}
+
+// Get Applications Result.
+type ApplicationsResponse struct {
+ _ struct{} `type:"structure"`
+
+ // List of applications returned in this page.
+ Item []*ApplicationResponse `type:"list"`
+
+ // The string that you use in a subsequent request to get the next page of results
+ // in a paginated response.
+ NextToken *string `type:"string"`
+}
+
+// String returns the string representation
+func (s ApplicationsResponse) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ApplicationsResponse) GoString() string {
+ return s.String()
+}
+
+// SetItem sets the Item field's value.
+func (s *ApplicationsResponse) SetItem(v []*ApplicationResponse) *ApplicationsResponse {
+ s.Item = v
+ return s
+}
+
+// SetNextToken sets the NextToken field's value.
+func (s *ApplicationsResponse) SetNextToken(v string) *ApplicationsResponse {
+ s.NextToken = &v
+ return s
+}
+
+// Custom attribute dimension.
+type AttributeDimension struct {
+ _ struct{} `type:"structure"`
+
+ // The type of dimension: INCLUSIVE - Endpoints that match the criteria are
+ // included in the segment. EXCLUSIVE - Endpoints that match the criteria are
+ // excluded from the segment.
+ AttributeType *string `type:"string" enum:"AttributeType"`
+
+ // The criteria values for the segment dimension. Endpoints with matching attribute
+ // values are included or excluded from the segment, depending on the setting
+ // for Type.
+ Values []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s AttributeDimension) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AttributeDimension) GoString() string {
+ return s.String()
+}
+
+// SetAttributeType sets the AttributeType field's value.
+func (s *AttributeDimension) SetAttributeType(v string) *AttributeDimension {
+ s.AttributeType = &v
+ return s
+}
+
+// SetValues sets the Values field's value.
+func (s *AttributeDimension) SetValues(v []*string) *AttributeDimension {
+ s.Values = v
+ return s
+}
+
+// Attributes.
+type AttributesResource struct {
+ _ struct{} `type:"structure"`
+
+ // The unique ID for the application.
+ ApplicationId *string `type:"string"`
+
+ // The attribute type for the application.
+ AttributeType *string `type:"string"` + + // The attributes for the application. + Attributes []*string `type:"list"` +} + +// String returns the string representation +func (s AttributesResource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttributesResource) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *AttributesResource) SetApplicationId(v string) *AttributesResource { + s.ApplicationId = &v + return s +} + +// SetAttributeType sets the AttributeType field's value. +func (s *AttributesResource) SetAttributeType(v string) *AttributesResource { + s.AttributeType = &v + return s +} + +// SetAttributes sets the Attributes field's value. +func (s *AttributesResource) SetAttributes(v []*string) *AttributesResource { + s.Attributes = v + return s +} + +// Baidu Cloud Push credentials +type BaiduChannelRequest struct { + _ struct{} `type:"structure"` + + // Platform credential API key from Baidu. + ApiKey *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Platform credential Secret key from Baidu. + SecretKey *string `type:"string"` +} + +// String returns the string representation +func (s BaiduChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BaiduChannelRequest) GoString() string { + return s.String() +} + +// SetApiKey sets the ApiKey field's value. +func (s *BaiduChannelRequest) SetApiKey(v string) *BaiduChannelRequest { + s.ApiKey = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *BaiduChannelRequest) SetEnabled(v bool) *BaiduChannelRequest { + s.Enabled = &v + return s +} + +// SetSecretKey sets the SecretKey field's value. +func (s *BaiduChannelRequest) SetSecretKey(v string) *BaiduChannelRequest { + s.SecretKey = &v + return s +} + +// Baidu Cloud Messaging channel definition +type BaiduChannelResponse struct { + _ struct{} `type:"structure"` + + // Application id + ApplicationId *string `type:"string"` + + // When was this segment created + CreationDate *string `type:"string"` + + // The Baidu API key from Baidu. + Credential *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // Channel ID. Not used, only for backwards compatibility. + Id *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who made the last change + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // The platform type. Will be BAIDU + Platform *string `type:"string"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s BaiduChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BaiduChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *BaiduChannelResponse) SetApplicationId(v string) *BaiduChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. 
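Because every generated setter returns its receiver, request structures such as the `BaiduChannelRequest` above can be built fluently. A small illustrative sketch follows; the function name and credential values are placeholders, and the import path assumes the usual aws-sdk-go location.

```go
package example

import "github.com/aws/aws-sdk-go/service/pinpoint"

// newBaiduChannelRequest chains the generated setters; apiKey and secretKey
// are the Baidu platform credentials described in the struct comments.
func newBaiduChannelRequest(apiKey, secretKey string) *pinpoint.BaiduChannelRequest {
	return (&pinpoint.BaiduChannelRequest{}).
		SetApiKey(apiKey).
		SetSecretKey(secretKey).
		SetEnabled(true)
}
```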
+func (s *BaiduChannelResponse) SetCreationDate(v string) *BaiduChannelResponse {
+ s.CreationDate = &v
+ return s
+}
+
+// SetCredential sets the Credential field's value.
+func (s *BaiduChannelResponse) SetCredential(v string) *BaiduChannelResponse {
+ s.Credential = &v
+ return s
+}
+
+// SetEnabled sets the Enabled field's value.
+func (s *BaiduChannelResponse) SetEnabled(v bool) *BaiduChannelResponse {
+ s.Enabled = &v
+ return s
+}
+
+// SetHasCredential sets the HasCredential field's value.
+func (s *BaiduChannelResponse) SetHasCredential(v bool) *BaiduChannelResponse {
+ s.HasCredential = &v
+ return s
+}
+
+// SetId sets the Id field's value.
+func (s *BaiduChannelResponse) SetId(v string) *BaiduChannelResponse {
+ s.Id = &v
+ return s
+}
+
+// SetIsArchived sets the IsArchived field's value.
+func (s *BaiduChannelResponse) SetIsArchived(v bool) *BaiduChannelResponse {
+ s.IsArchived = &v
+ return s
+}
+
+// SetLastModifiedBy sets the LastModifiedBy field's value.
+func (s *BaiduChannelResponse) SetLastModifiedBy(v string) *BaiduChannelResponse {
+ s.LastModifiedBy = &v
+ return s
+}
+
+// SetLastModifiedDate sets the LastModifiedDate field's value.
+func (s *BaiduChannelResponse) SetLastModifiedDate(v string) *BaiduChannelResponse {
+ s.LastModifiedDate = &v
+ return s
+}
+
+// SetPlatform sets the Platform field's value.
+func (s *BaiduChannelResponse) SetPlatform(v string) *BaiduChannelResponse {
+ s.Platform = &v
+ return s
+}
+
+// SetVersion sets the Version field's value.
+func (s *BaiduChannelResponse) SetVersion(v int64) *BaiduChannelResponse {
+ s.Version = &v
+ return s
+}
+
+// Baidu Message.
+type BaiduMessage struct {
+ _ struct{} `type:"structure"`
+
+ // The action that occurs if the user taps a push notification delivered by
+ // the campaign: OPEN_APP - Your app launches, or it becomes the foreground
+ // app if it has been sent to the background. This is the default action. DEEP_LINK
+ // - Uses deep linking features in iOS and Android to open your app and display
+ // a designated user interface within the app. URL - The default mobile browser
+ // on the user's device launches and opens a web page at the URL you specify.
+ // Possible values include: OPEN_APP | DEEP_LINK | URL
+ Action *string `type:"string" enum:"Action"`
+
+ // The message body of the notification.
+ Body *string `type:"string"`
+
+ // The data payload used for a silent push. This payload is added to the notification's
+ // 'data.pinpoint.jsonBody' object.
+ Data map[string]*string `type:"map"`
+
+ // The icon image name of the asset saved in your application.
+ IconReference *string `type:"string"`
+
+ // The URL that points to an image used as the large icon for the notification
+ // content view.
+ ImageIconUrl *string `type:"string"`
+
+ // The URL that points to an image used in the push notification.
+ ImageUrl *string `type:"string"`
+
+ // The Raw JSON formatted string to be used as the payload. This value overrides
+ // the message.
+ RawContent *string `type:"string"`
+
+ // Indicates if the message should display on the user's device. Silent pushes
+ // can be used for Remote Configuration and Phone Home use cases.
+ SilentPush *bool `type:"boolean"`
+
+ // The URL that points to an image used as the small icon for the notification,
+ // which will be used to represent the notification in the status bar and content
+ // view.
+ SmallImageIconUrl *string `type:"string"`
+
+ // Indicates a sound to play when the device receives the notification. Supports
+ // default, or the filename of a sound resource bundled in the app. Android
+ // sound files must reside in /res/raw/
+ Sound *string `type:"string"`
+
+ // Default message substitutions. Can be overridden by individual address substitutions.
+ Substitutions map[string][]*string `type:"map"`
+
+ // This parameter specifies how long (in seconds) the message should be kept
+ // in Baidu storage if the device is offline. The default value and the maximum
+ // time to live supported is 7 days (604800 seconds).
+ TimeToLive *int64 `type:"integer"`
+
+ // The message title that displays above the message on the user's device.
+ Title *string `type:"string"`
+
+ // The URL to open in the user's mobile browser. Used if the value for Action
+ // is URL.
+ Url *string `type:"string"`
+}
+
+// String returns the string representation
+func (s BaiduMessage) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s BaiduMessage) GoString() string {
+ return s.String()
+}
+
+// SetAction sets the Action field's value.
+func (s *BaiduMessage) SetAction(v string) *BaiduMessage {
+ s.Action = &v
+ return s
+}
+
+// SetBody sets the Body field's value.
+func (s *BaiduMessage) SetBody(v string) *BaiduMessage {
+ s.Body = &v
+ return s
+}
+
+// SetData sets the Data field's value.
+func (s *BaiduMessage) SetData(v map[string]*string) *BaiduMessage {
+ s.Data = v
+ return s
+}
+
+// SetIconReference sets the IconReference field's value.
+func (s *BaiduMessage) SetIconReference(v string) *BaiduMessage {
+ s.IconReference = &v
+ return s
+}
+
+// SetImageIconUrl sets the ImageIconUrl field's value.
+func (s *BaiduMessage) SetImageIconUrl(v string) *BaiduMessage {
+ s.ImageIconUrl = &v
+ return s
+}
+
+// SetImageUrl sets the ImageUrl field's value.
+func (s *BaiduMessage) SetImageUrl(v string) *BaiduMessage {
+ s.ImageUrl = &v
+ return s
+}
+
+// SetRawContent sets the RawContent field's value.
+func (s *BaiduMessage) SetRawContent(v string) *BaiduMessage {
+ s.RawContent = &v
+ return s
+}
+
+// SetSilentPush sets the SilentPush field's value.
+func (s *BaiduMessage) SetSilentPush(v bool) *BaiduMessage {
+ s.SilentPush = &v
+ return s
+}
+
+// SetSmallImageIconUrl sets the SmallImageIconUrl field's value.
+func (s *BaiduMessage) SetSmallImageIconUrl(v string) *BaiduMessage {
+ s.SmallImageIconUrl = &v
+ return s
+}
+
+// SetSound sets the Sound field's value.
+func (s *BaiduMessage) SetSound(v string) *BaiduMessage {
+ s.Sound = &v
+ return s
+}
+
+// SetSubstitutions sets the Substitutions field's value.
+func (s *BaiduMessage) SetSubstitutions(v map[string][]*string) *BaiduMessage {
+ s.Substitutions = v
+ return s
+}
+
+// SetTimeToLive sets the TimeToLive field's value.
+func (s *BaiduMessage) SetTimeToLive(v int64) *BaiduMessage {
+ s.TimeToLive = &v
+ return s
+}
+
+// SetTitle sets the Title field's value.
+func (s *BaiduMessage) SetTitle(v string) *BaiduMessage {
+ s.Title = &v
+ return s
+}
+
+// SetUrl sets the Url field's value.
+func (s *BaiduMessage) SetUrl(v string) *BaiduMessage {
+ s.Url = &v
+ return s
+}
+
+// The email message configuration.
+type CampaignEmailMessage struct {
+ _ struct{} `type:"structure"`
+
+ // The email text body.
+ Body *string `type:"string"`
+
+ // The email address that the email is sent from. Defaults to the FromAddress
+ // specified in the Email Channel.
+ FromAddress *string `type:"string"`
+
+ // The email HTML body.
+ HtmlBody *string `type:"string"`
+
+ // The email title (or subject).
+ Title *string `type:"string"` +} + +// String returns the string representation +func (s CampaignEmailMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CampaignEmailMessage) GoString() string { + return s.String() +} + +// SetBody sets the Body field's value. +func (s *CampaignEmailMessage) SetBody(v string) *CampaignEmailMessage { + s.Body = &v + return s +} + +// SetFromAddress sets the FromAddress field's value. +func (s *CampaignEmailMessage) SetFromAddress(v string) *CampaignEmailMessage { + s.FromAddress = &v + return s +} + +// SetHtmlBody sets the HtmlBody field's value. +func (s *CampaignEmailMessage) SetHtmlBody(v string) *CampaignEmailMessage { + s.HtmlBody = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *CampaignEmailMessage) SetTitle(v string) *CampaignEmailMessage { + s.Title = &v + return s +} + +// An object that defines the events that cause the campaign to be sent. +type CampaignEventFilter struct { + _ struct{} `type:"structure"` + + // An object that defines the dimensions for the event filter. + Dimensions *EventDimensions `type:"structure"` + + // The type of event that causes the campaign to be sent. Possible values:SYSTEM + // - Send the campaign when a system event occurs. See the System resource for + // more information.ENDPOINT - Send the campaign when an endpoint event occurs. + // See the Event resource for more information. + FilterType *string `type:"string" enum:"FilterType"` +} + +// String returns the string representation +func (s CampaignEventFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CampaignEventFilter) GoString() string { + return s.String() +} + +// SetDimensions sets the Dimensions field's value. +func (s *CampaignEventFilter) SetDimensions(v *EventDimensions) *CampaignEventFilter { + s.Dimensions = v + return s +} + +// SetFilterType sets the FilterType field's value. +func (s *CampaignEventFilter) SetFilterType(v string) *CampaignEventFilter { + s.FilterType = &v + return s +} + +// Campaign hook information. +type CampaignHook struct { + _ struct{} `type:"structure"` + + // Lambda function name or arn to be called for delivery + LambdaFunctionName *string `type:"string"` + + // What mode Lambda should be invoked in. + Mode *string `type:"string" enum:"Mode"` + + // Web URL to call for hook. If the URL has authentication specified it will + // be added as authentication to the request + WebUrl *string `type:"string"` +} + +// String returns the string representation +func (s CampaignHook) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CampaignHook) GoString() string { + return s.String() +} + +// SetLambdaFunctionName sets the LambdaFunctionName field's value. +func (s *CampaignHook) SetLambdaFunctionName(v string) *CampaignHook { + s.LambdaFunctionName = &v + return s +} + +// SetMode sets the Mode field's value. +func (s *CampaignHook) SetMode(v string) *CampaignHook { + s.Mode = &v + return s +} + +// SetWebUrl sets the WebUrl field's value. +func (s *CampaignHook) SetWebUrl(v string) *CampaignHook { + s.WebUrl = &v + return s +} + +// Campaign Limits are used to limit the number of messages that can be sent +// to a single endpoint. +type CampaignLimits struct { + _ struct{} `type:"structure"` + + // The maximum number of messages that each campaign can send to a single endpoint + // in a 24-hour period. 
+ Daily *int64 `type:"integer"` + + // The length of time (in seconds) that the campaign can run before it ends + // and message deliveries stop. This duration begins at the scheduled start + // time for the campaign. The minimum value is 60. + MaximumDuration *int64 `type:"integer"` + + // The number of messages that the campaign can send per second. The minimum + // value is 50, and the maximum is 20000. + MessagesPerSecond *int64 `type:"integer"` + + // The maximum number of messages that an individual campaign can send to a + // single endpoint over the course of the campaign. + Total *int64 `type:"integer"` +} + +// String returns the string representation +func (s CampaignLimits) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CampaignLimits) GoString() string { + return s.String() +} + +// SetDaily sets the Daily field's value. +func (s *CampaignLimits) SetDaily(v int64) *CampaignLimits { + s.Daily = &v + return s +} + +// SetMaximumDuration sets the MaximumDuration field's value. +func (s *CampaignLimits) SetMaximumDuration(v int64) *CampaignLimits { + s.MaximumDuration = &v + return s +} + +// SetMessagesPerSecond sets the MessagesPerSecond field's value. +func (s *CampaignLimits) SetMessagesPerSecond(v int64) *CampaignLimits { + s.MessagesPerSecond = &v + return s +} + +// SetTotal sets the Total field's value. +func (s *CampaignLimits) SetTotal(v int64) *CampaignLimits { + s.Total = &v + return s +} + +// Campaign definition +type CampaignResponse struct { + _ struct{} `type:"structure"` + + // Treatments that are defined in addition to the default treatment. + AdditionalTreatments []*TreatmentResource `type:"list"` + + // The ID of the application to which the campaign applies. + ApplicationId *string `type:"string"` + + // The date the campaign was created in ISO 8601 format. + CreationDate *string `type:"string"` + + // The status of the campaign's default treatment. Only present for A/B test + // campaigns. + DefaultState *CampaignState `type:"structure"` + + // A description of the campaign. + Description *string `type:"string"` + + // The allocated percentage of end users who will not receive messages from + // this campaign. + HoldoutPercent *int64 `type:"integer"` + + // Campaign hook information. + Hook *CampaignHook `type:"structure"` + + // The unique campaign ID. + Id *string `type:"string"` + + // Indicates whether the campaign is paused. A paused campaign does not send + // messages unless you resume it by setting IsPaused to false. + IsPaused *bool `type:"boolean"` + + // The date the campaign was last updated in ISO 8601 format. + LastModifiedDate *string `type:"string"` + + // The campaign limits settings. + Limits *CampaignLimits `type:"structure"` + + // The message configuration settings. + MessageConfiguration *MessageConfiguration `type:"structure"` + + // The custom name of the campaign. + Name *string `type:"string"` + + // The campaign schedule. + Schedule *Schedule `type:"structure"` + + // The ID of the segment to which the campaign sends messages. + SegmentId *string `type:"string"` + + // The version of the segment to which the campaign sends messages. + SegmentVersion *int64 `type:"integer"` + + // The campaign status.An A/B test campaign will have a status of COMPLETED + // only when all treatments have a status of COMPLETED. + State *CampaignState `type:"structure"` + + // A custom description for the treatment. 
+ TreatmentDescription *string `type:"string"` + + // The custom name of a variation of the campaign used for A/B testing. + TreatmentName *string `type:"string"` + + // The campaign version number. + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s CampaignResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CampaignResponse) GoString() string { + return s.String() +} + +// SetAdditionalTreatments sets the AdditionalTreatments field's value. +func (s *CampaignResponse) SetAdditionalTreatments(v []*TreatmentResource) *CampaignResponse { + s.AdditionalTreatments = v + return s +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CampaignResponse) SetApplicationId(v string) *CampaignResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *CampaignResponse) SetCreationDate(v string) *CampaignResponse { + s.CreationDate = &v + return s +} + +// SetDefaultState sets the DefaultState field's value. +func (s *CampaignResponse) SetDefaultState(v *CampaignState) *CampaignResponse { + s.DefaultState = v + return s +} + +// SetDescription sets the Description field's value. +func (s *CampaignResponse) SetDescription(v string) *CampaignResponse { + s.Description = &v + return s +} + +// SetHoldoutPercent sets the HoldoutPercent field's value. +func (s *CampaignResponse) SetHoldoutPercent(v int64) *CampaignResponse { + s.HoldoutPercent = &v + return s +} + +// SetHook sets the Hook field's value. +func (s *CampaignResponse) SetHook(v *CampaignHook) *CampaignResponse { + s.Hook = v + return s +} + +// SetId sets the Id field's value. +func (s *CampaignResponse) SetId(v string) *CampaignResponse { + s.Id = &v + return s +} + +// SetIsPaused sets the IsPaused field's value. +func (s *CampaignResponse) SetIsPaused(v bool) *CampaignResponse { + s.IsPaused = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *CampaignResponse) SetLastModifiedDate(v string) *CampaignResponse { + s.LastModifiedDate = &v + return s +} + +// SetLimits sets the Limits field's value. +func (s *CampaignResponse) SetLimits(v *CampaignLimits) *CampaignResponse { + s.Limits = v + return s +} + +// SetMessageConfiguration sets the MessageConfiguration field's value. +func (s *CampaignResponse) SetMessageConfiguration(v *MessageConfiguration) *CampaignResponse { + s.MessageConfiguration = v + return s +} + +// SetName sets the Name field's value. +func (s *CampaignResponse) SetName(v string) *CampaignResponse { + s.Name = &v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *CampaignResponse) SetSchedule(v *Schedule) *CampaignResponse { + s.Schedule = v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *CampaignResponse) SetSegmentId(v string) *CampaignResponse { + s.SegmentId = &v + return s +} + +// SetSegmentVersion sets the SegmentVersion field's value. +func (s *CampaignResponse) SetSegmentVersion(v int64) *CampaignResponse { + s.SegmentVersion = &v + return s +} + +// SetState sets the State field's value. +func (s *CampaignResponse) SetState(v *CampaignState) *CampaignResponse { + s.State = v + return s +} + +// SetTreatmentDescription sets the TreatmentDescription field's value. 
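As a rough sketch of how the campaign settings structures above fit together, the following populates `CampaignLimits` within its documented bounds (`MaximumDuration` of at least 60, `MessagesPerSecond` between 50 and 20000) and a `CampaignHook` that names a Lambda function. The helper name, numbers, and function name are illustrative only.

```go
package example

import "github.com/aws/aws-sdk-go/service/pinpoint"

// exampleCampaignSettings builds limits and a hook using the fluent setters.
// "my-campaign-hook" is a placeholder Lambda function name; Mode and WebUrl
// are left unset in this sketch.
func exampleCampaignSettings() (*pinpoint.CampaignLimits, *pinpoint.CampaignHook) {
	limits := (&pinpoint.CampaignLimits{}).
		SetDaily(5).
		SetMaximumDuration(600).
		SetMessagesPerSecond(50).
		SetTotal(20)

	hook := (&pinpoint.CampaignHook{}).
		SetLambdaFunctionName("my-campaign-hook")

	return limits, hook
}
```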
+func (s *CampaignResponse) SetTreatmentDescription(v string) *CampaignResponse {
+ s.TreatmentDescription = &v
+ return s
+}
+
+// SetTreatmentName sets the TreatmentName field's value.
+func (s *CampaignResponse) SetTreatmentName(v string) *CampaignResponse {
+ s.TreatmentName = &v
+ return s
+}
+
+// SetVersion sets the Version field's value.
+func (s *CampaignResponse) SetVersion(v int64) *CampaignResponse {
+ s.Version = &v
+ return s
+}
+
+// SMS message configuration.
+type CampaignSmsMessage struct {
+ _ struct{} `type:"structure"`
+
+ // The SMS text body.
+ Body *string `type:"string"`
+
+ // Indicates whether this is a transactional SMS message; otherwise, it is
+ // a promotional message.
+ MessageType *string `type:"string" enum:"MessageType"`
+
+ // Sender ID of sent message.
+ SenderId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CampaignSmsMessage) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CampaignSmsMessage) GoString() string {
+ return s.String()
+}
+
+// SetBody sets the Body field's value.
+func (s *CampaignSmsMessage) SetBody(v string) *CampaignSmsMessage {
+ s.Body = &v
+ return s
+}
+
+// SetMessageType sets the MessageType field's value.
+func (s *CampaignSmsMessage) SetMessageType(v string) *CampaignSmsMessage {
+ s.MessageType = &v
+ return s
+}
+
+// SetSenderId sets the SenderId field's value.
+func (s *CampaignSmsMessage) SetSenderId(v string) *CampaignSmsMessage {
+ s.SenderId = &v
+ return s
+}
+
+// State of the Campaign
+type CampaignState struct {
+ _ struct{} `type:"structure"`
+
+ // The status of the campaign, or the status of a treatment that belongs to
+ // an A/B test campaign. Valid values: SCHEDULED, EXECUTING, PENDING_NEXT_RUN,
+ // COMPLETED, PAUSED
+ CampaignStatus *string `type:"string" enum:"CampaignStatus"`
+}
+
+// String returns the string representation
+func (s CampaignState) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CampaignState) GoString() string {
+ return s.String()
+}
+
+// SetCampaignStatus sets the CampaignStatus field's value.
+func (s *CampaignState) SetCampaignStatus(v string) *CampaignState {
+ s.CampaignStatus = &v
+ return s
+}
+
+// List of available campaigns.
+type CampaignsResponse struct {
+ _ struct{} `type:"structure"`
+
+ // A list of campaigns.
+ Item []*CampaignResponse `type:"list"`
+
+ // The string that you use in a subsequent request to get the next page of results
+ // in a paginated response.
+ NextToken *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CampaignsResponse) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CampaignsResponse) GoString() string {
+ return s.String()
+}
+
+// SetItem sets the Item field's value.
+func (s *CampaignsResponse) SetItem(v []*CampaignResponse) *CampaignsResponse {
+ s.Item = v
+ return s
+}
+
+// SetNextToken sets the NextToken field's value.
+func (s *CampaignsResponse) SetNextToken(v string) *CampaignsResponse {
+ s.NextToken = &v
+ return s
+}
+
+// Base definition for channel response.
+type ChannelResponse struct {
+ _ struct{} `type:"structure"`
+
+ // Application id
+ ApplicationId *string `type:"string"`
+
+ // When this channel was created.
+ CreationDate *string `type:"string"`
+
+ // If the channel is enabled for sending messages.
+ Enabled *bool `type:"boolean"`
+
+ // Not used. Retained for backwards compatibility.
+ HasCredential *bool `type:"boolean"` + + // Channel ID. Not used, only for backwards compatibility. + Id *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who made the last change + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s ChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *ChannelResponse) SetApplicationId(v string) *ChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *ChannelResponse) SetCreationDate(v string) *ChannelResponse { + s.CreationDate = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *ChannelResponse) SetEnabled(v bool) *ChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *ChannelResponse) SetHasCredential(v bool) *ChannelResponse { + s.HasCredential = &v + return s +} + +// SetId sets the Id field's value. +func (s *ChannelResponse) SetId(v string) *ChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *ChannelResponse) SetIsArchived(v bool) *ChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *ChannelResponse) SetLastModifiedBy(v string) *ChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *ChannelResponse) SetLastModifiedDate(v string) *ChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *ChannelResponse) SetVersion(v int64) *ChannelResponse { + s.Version = &v + return s +} + +// Get channels definition +type ChannelsResponse struct { + _ struct{} `type:"structure"` + + // A map of channels, with the ChannelType as the key and the Channel as the + // value. + Channels map[string]*ChannelResponse `type:"map"` +} + +// String returns the string representation +func (s ChannelsResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ChannelsResponse) GoString() string { + return s.String() +} + +// SetChannels sets the Channels field's value. +func (s *ChannelsResponse) SetChannels(v map[string]*ChannelResponse) *ChannelsResponse { + s.Channels = v + return s +} + +type CreateAppInput struct { + _ struct{} `type:"structure" payload:"CreateApplicationRequest"` + + // Application Request. + // + // CreateApplicationRequest is a required field + CreateApplicationRequest *CreateApplicationRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateAppInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAppInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
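`ChannelsResponse` above exposes a map keyed by channel type. A minimal sketch of walking that map follows; the helper name is invented, and obtaining the response is out of scope here.

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/pinpoint"
)

// listEnabledChannels prints the channel types that report Enabled == true.
func listEnabledChannels(resp *pinpoint.ChannelsResponse) {
	for channelType, ch := range resp.Channels {
		if ch != nil && ch.Enabled != nil && *ch.Enabled {
			fmt.Println("enabled channel:", channelType)
		}
	}
}
```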
+func (s *CreateAppInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateAppInput"} + if s.CreateApplicationRequest == nil { + invalidParams.Add(request.NewErrParamRequired("CreateApplicationRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCreateApplicationRequest sets the CreateApplicationRequest field's value. +func (s *CreateAppInput) SetCreateApplicationRequest(v *CreateApplicationRequest) *CreateAppInput { + s.CreateApplicationRequest = v + return s +} + +type CreateAppOutput struct { + _ struct{} `type:"structure" payload:"ApplicationResponse"` + + // Application Response. + // + // ApplicationResponse is a required field + ApplicationResponse *ApplicationResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateAppOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAppOutput) GoString() string { + return s.String() +} + +// SetApplicationResponse sets the ApplicationResponse field's value. +func (s *CreateAppOutput) SetApplicationResponse(v *ApplicationResponse) *CreateAppOutput { + s.ApplicationResponse = v + return s +} + +// Application Request. +type CreateApplicationRequest struct { + _ struct{} `type:"structure"` + + // The display name of the application. Used in the Amazon Pinpoint console. + Name *string `type:"string"` +} + +// String returns the string representation +func (s CreateApplicationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateApplicationRequest) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *CreateApplicationRequest) SetName(v string) *CreateApplicationRequest { + s.Name = &v + return s +} + +type CreateCampaignInput struct { + _ struct{} `type:"structure" payload:"WriteCampaignRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Used to create a campaign. + // + // WriteCampaignRequest is a required field + WriteCampaignRequest *WriteCampaignRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateCampaignInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCampaignInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateCampaignInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCampaignInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.WriteCampaignRequest == nil { + invalidParams.Add(request.NewErrParamRequired("WriteCampaignRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateCampaignInput) SetApplicationId(v string) *CreateCampaignInput { + s.ApplicationId = &v + return s +} + +// SetWriteCampaignRequest sets the WriteCampaignRequest field's value. 
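The `Validate` methods in this file follow one pattern: required fields are checked client-side and reported through `request.ErrInvalidParams` before anything is sent. A sketch of that pattern with `CreateAppInput` follows; the helper name is invented, and the actual CreateApp call on a Pinpoint client is assumed to happen elsewhere.

```go
package example

import "github.com/aws/aws-sdk-go/service/pinpoint"

// buildCreateAppInput validates the input before it is handed to the client.
func buildCreateAppInput(name string) (*pinpoint.CreateAppInput, error) {
	input := (&pinpoint.CreateAppInput{}).
		SetCreateApplicationRequest((&pinpoint.CreateApplicationRequest{}).SetName(name))
	if err := input.Validate(); err != nil {
		return nil, err
	}
	return input, nil
}
```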
+func (s *CreateCampaignInput) SetWriteCampaignRequest(v *WriteCampaignRequest) *CreateCampaignInput { + s.WriteCampaignRequest = v + return s +} + +type CreateCampaignOutput struct { + _ struct{} `type:"structure" payload:"CampaignResponse"` + + // Campaign definition + // + // CampaignResponse is a required field + CampaignResponse *CampaignResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateCampaignOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCampaignOutput) GoString() string { + return s.String() +} + +// SetCampaignResponse sets the CampaignResponse field's value. +func (s *CreateCampaignOutput) SetCampaignResponse(v *CampaignResponse) *CreateCampaignOutput { + s.CampaignResponse = v + return s +} + +type CreateExportJobInput struct { + _ struct{} `type:"structure" payload:"ExportJobRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Export job request. + // + // ExportJobRequest is a required field + ExportJobRequest *ExportJobRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateExportJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateExportJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateExportJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateExportJobInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.ExportJobRequest == nil { + invalidParams.Add(request.NewErrParamRequired("ExportJobRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateExportJobInput) SetApplicationId(v string) *CreateExportJobInput { + s.ApplicationId = &v + return s +} + +// SetExportJobRequest sets the ExportJobRequest field's value. +func (s *CreateExportJobInput) SetExportJobRequest(v *ExportJobRequest) *CreateExportJobInput { + s.ExportJobRequest = v + return s +} + +type CreateExportJobOutput struct { + _ struct{} `type:"structure" payload:"ExportJobResponse"` + + // Export job response. + // + // ExportJobResponse is a required field + ExportJobResponse *ExportJobResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateExportJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateExportJobOutput) GoString() string { + return s.String() +} + +// SetExportJobResponse sets the ExportJobResponse field's value. +func (s *CreateExportJobOutput) SetExportJobResponse(v *ExportJobResponse) *CreateExportJobOutput { + s.ExportJobResponse = v + return s +} + +type CreateImportJobInput struct { + _ struct{} `type:"structure" payload:"ImportJobRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Import job request. 
+ // + // ImportJobRequest is a required field + ImportJobRequest *ImportJobRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateImportJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateImportJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateImportJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateImportJobInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.ImportJobRequest == nil { + invalidParams.Add(request.NewErrParamRequired("ImportJobRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateImportJobInput) SetApplicationId(v string) *CreateImportJobInput { + s.ApplicationId = &v + return s +} + +// SetImportJobRequest sets the ImportJobRequest field's value. +func (s *CreateImportJobInput) SetImportJobRequest(v *ImportJobRequest) *CreateImportJobInput { + s.ImportJobRequest = v + return s +} + +type CreateImportJobOutput struct { + _ struct{} `type:"structure" payload:"ImportJobResponse"` + + // Import job response. + // + // ImportJobResponse is a required field + ImportJobResponse *ImportJobResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateImportJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateImportJobOutput) GoString() string { + return s.String() +} + +// SetImportJobResponse sets the ImportJobResponse field's value. +func (s *CreateImportJobOutput) SetImportJobResponse(v *ImportJobResponse) *CreateImportJobOutput { + s.ImportJobResponse = v + return s +} + +type CreateSegmentInput struct { + _ struct{} `type:"structure" payload:"WriteSegmentRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Segment definition. + // + // WriteSegmentRequest is a required field + WriteSegmentRequest *WriteSegmentRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateSegmentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSegmentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateSegmentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateSegmentInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.WriteSegmentRequest == nil { + invalidParams.Add(request.NewErrParamRequired("WriteSegmentRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateSegmentInput) SetApplicationId(v string) *CreateSegmentInput { + s.ApplicationId = &v + return s +} + +// SetWriteSegmentRequest sets the WriteSegmentRequest field's value. 
+func (s *CreateSegmentInput) SetWriteSegmentRequest(v *WriteSegmentRequest) *CreateSegmentInput { + s.WriteSegmentRequest = v + return s +} + +type CreateSegmentOutput struct { + _ struct{} `type:"structure" payload:"SegmentResponse"` + + // Segment definition. + // + // SegmentResponse is a required field + SegmentResponse *SegmentResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateSegmentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSegmentOutput) GoString() string { + return s.String() +} + +// SetSegmentResponse sets the SegmentResponse field's value. +func (s *CreateSegmentOutput) SetSegmentResponse(v *SegmentResponse) *CreateSegmentOutput { + s.SegmentResponse = v + return s +} + +// The default message to use across all channels. +type DefaultMessage struct { + _ struct{} `type:"structure"` + + // The message body of the notification, the email body or the text message. + Body *string `type:"string"` + + // Default message substitutions. Can be overridden by individual address substitutions. + Substitutions map[string][]*string `type:"map"` +} + +// String returns the string representation +func (s DefaultMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DefaultMessage) GoString() string { + return s.String() +} + +// SetBody sets the Body field's value. +func (s *DefaultMessage) SetBody(v string) *DefaultMessage { + s.Body = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *DefaultMessage) SetSubstitutions(v map[string][]*string) *DefaultMessage { + s.Substitutions = v + return s +} + +// Default Push Notification Message. +type DefaultPushNotificationMessage struct { + _ struct{} `type:"structure"` + + // The action that occurs if the user taps a push notification delivered by + // the campaign: OPEN_APP - Your app launches, or it becomes the foreground + // app if it has been sent to the background. This is the default action. DEEP_LINK + // - Uses deep linking features in iOS and Android to open your app and display + // a designated user interface within the app. URL - The default mobile browser + // on the user's device launches and opens a web page at the URL you specify. + // Possible values include: OPEN_APP | DEEP_LINK | URL + Action *string `type:"string" enum:"Action"` + + // The message body of the notification. + Body *string `type:"string"` + + // The data payload used for a silent push. This payload is added to the notifications' + // data.pinpoint.jsonBody' object + Data map[string]*string `type:"map"` + + // Indicates if the message should display on the recipient's device. You can + // use silent pushes for remote configuration or to deliver messages to in-app + // notification centers. + SilentPush *bool `type:"boolean"` + + // Default message substitutions. Can be overridden by individual address substitutions. + Substitutions map[string][]*string `type:"map"` + + // The message title that displays above the message on the user's device. + Title *string `type:"string"` + + // The URL to open in the user's mobile browser. Used if the value for Action + // is URL. 
+ Url *string `type:"string"` +} + +// String returns the string representation +func (s DefaultPushNotificationMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DefaultPushNotificationMessage) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *DefaultPushNotificationMessage) SetAction(v string) *DefaultPushNotificationMessage { + s.Action = &v + return s +} + +// SetBody sets the Body field's value. +func (s *DefaultPushNotificationMessage) SetBody(v string) *DefaultPushNotificationMessage { + s.Body = &v + return s +} + +// SetData sets the Data field's value. +func (s *DefaultPushNotificationMessage) SetData(v map[string]*string) *DefaultPushNotificationMessage { + s.Data = v + return s +} + +// SetSilentPush sets the SilentPush field's value. +func (s *DefaultPushNotificationMessage) SetSilentPush(v bool) *DefaultPushNotificationMessage { + s.SilentPush = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *DefaultPushNotificationMessage) SetSubstitutions(v map[string][]*string) *DefaultPushNotificationMessage { + s.Substitutions = v + return s +} + +// SetTitle sets the Title field's value. +func (s *DefaultPushNotificationMessage) SetTitle(v string) *DefaultPushNotificationMessage { + s.Title = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *DefaultPushNotificationMessage) SetUrl(v string) *DefaultPushNotificationMessage { + s.Url = &v + return s +} + +type DeleteAdmChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteAdmChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAdmChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteAdmChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteAdmChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteAdmChannelInput) SetApplicationId(v string) *DeleteAdmChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteAdmChannelOutput struct { + _ struct{} `type:"structure" payload:"ADMChannelResponse"` + + // Amazon Device Messaging channel definition. + // + // ADMChannelResponse is a required field + ADMChannelResponse *ADMChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteAdmChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAdmChannelOutput) GoString() string { + return s.String() +} + +// SetADMChannelResponse sets the ADMChannelResponse field's value. 
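`DefaultPushNotificationMessage` above carries a `Substitutions` map of per-recipient values. A hedged sketch follows; the substitution key, text, and helper name are invented, and `OPEN_APP` is one of the Action values listed in the struct's doc comment.

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

// defaultPush assembles a default push notification with one substitution key.
func defaultPush() *pinpoint.DefaultPushNotificationMessage {
	return (&pinpoint.DefaultPushNotificationMessage{}).
		SetAction("OPEN_APP").
		SetTitle("Hello").
		SetBody("Thanks for installing the app.").
		SetSubstitutions(map[string][]*string{
			"FirstName": {aws.String("Ada")},
		})
}
```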
+func (s *DeleteAdmChannelOutput) SetADMChannelResponse(v *ADMChannelResponse) *DeleteAdmChannelOutput { + s.ADMChannelResponse = v + return s +} + +type DeleteApnsChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteApnsChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApnsChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteApnsChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApnsChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteApnsChannelInput) SetApplicationId(v string) *DeleteApnsChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteApnsChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSChannelResponse"` + + // Apple Distribution Push Notification Service channel definition. + // + // APNSChannelResponse is a required field + APNSChannelResponse *APNSChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteApnsChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApnsChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSChannelResponse sets the APNSChannelResponse field's value. +func (s *DeleteApnsChannelOutput) SetAPNSChannelResponse(v *APNSChannelResponse) *DeleteApnsChannelOutput { + s.APNSChannelResponse = v + return s +} + +type DeleteApnsSandboxChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteApnsSandboxChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApnsSandboxChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteApnsSandboxChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApnsSandboxChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteApnsSandboxChannelInput) SetApplicationId(v string) *DeleteApnsSandboxChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteApnsSandboxChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSSandboxChannelResponse"` + + // Apple Development Push Notification Service channel definition. 
+ // + // APNSSandboxChannelResponse is a required field + APNSSandboxChannelResponse *APNSSandboxChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteApnsSandboxChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApnsSandboxChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSSandboxChannelResponse sets the APNSSandboxChannelResponse field's value. +func (s *DeleteApnsSandboxChannelOutput) SetAPNSSandboxChannelResponse(v *APNSSandboxChannelResponse) *DeleteApnsSandboxChannelOutput { + s.APNSSandboxChannelResponse = v + return s +} + +type DeleteApnsVoipChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteApnsVoipChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApnsVoipChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteApnsVoipChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApnsVoipChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteApnsVoipChannelInput) SetApplicationId(v string) *DeleteApnsVoipChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteApnsVoipChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSVoipChannelResponse"` + + // Apple VoIP Push Notification Service channel definition. + // + // APNSVoipChannelResponse is a required field + APNSVoipChannelResponse *APNSVoipChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteApnsVoipChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApnsVoipChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSVoipChannelResponse sets the APNSVoipChannelResponse field's value. +func (s *DeleteApnsVoipChannelOutput) SetAPNSVoipChannelResponse(v *APNSVoipChannelResponse) *DeleteApnsVoipChannelOutput { + s.APNSVoipChannelResponse = v + return s +} + +type DeleteApnsVoipSandboxChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteApnsVoipSandboxChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApnsVoipSandboxChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteApnsVoipSandboxChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApnsVoipSandboxChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteApnsVoipSandboxChannelInput) SetApplicationId(v string) *DeleteApnsVoipSandboxChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteApnsVoipSandboxChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSVoipSandboxChannelResponse"` + + // Apple VoIP Developer Push Notification Service channel definition. + // + // APNSVoipSandboxChannelResponse is a required field + APNSVoipSandboxChannelResponse *APNSVoipSandboxChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteApnsVoipSandboxChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApnsVoipSandboxChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSVoipSandboxChannelResponse sets the APNSVoipSandboxChannelResponse field's value. +func (s *DeleteApnsVoipSandboxChannelOutput) SetAPNSVoipSandboxChannelResponse(v *APNSVoipSandboxChannelResponse) *DeleteApnsVoipSandboxChannelOutput { + s.APNSVoipSandboxChannelResponse = v + return s +} + +type DeleteAppInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteAppInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAppInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteAppInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteAppInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteAppInput) SetApplicationId(v string) *DeleteAppInput { + s.ApplicationId = &v + return s +} + +type DeleteAppOutput struct { + _ struct{} `type:"structure" payload:"ApplicationResponse"` + + // Application Response. + // + // ApplicationResponse is a required field + ApplicationResponse *ApplicationResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteAppOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteAppOutput) GoString() string { + return s.String() +} + +// SetApplicationResponse sets the ApplicationResponse field's value. 
+func (s *DeleteAppOutput) SetApplicationResponse(v *ApplicationResponse) *DeleteAppOutput { + s.ApplicationResponse = v + return s +} + +type DeleteBaiduChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBaiduChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBaiduChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBaiduChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBaiduChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteBaiduChannelInput) SetApplicationId(v string) *DeleteBaiduChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteBaiduChannelOutput struct { + _ struct{} `type:"structure" payload:"BaiduChannelResponse"` + + // Baidu Cloud Messaging channel definition + // + // BaiduChannelResponse is a required field + BaiduChannelResponse *BaiduChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteBaiduChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBaiduChannelOutput) GoString() string { + return s.String() +} + +// SetBaiduChannelResponse sets the BaiduChannelResponse field's value. +func (s *DeleteBaiduChannelOutput) SetBaiduChannelResponse(v *BaiduChannelResponse) *DeleteBaiduChannelOutput { + s.BaiduChannelResponse = v + return s +} + +type DeleteCampaignInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // CampaignId is a required field + CampaignId *string `location:"uri" locationName:"campaign-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteCampaignInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCampaignInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteCampaignInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteCampaignInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.CampaignId == nil { + invalidParams.Add(request.NewErrParamRequired("CampaignId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteCampaignInput) SetApplicationId(v string) *DeleteCampaignInput { + s.ApplicationId = &v + return s +} + +// SetCampaignId sets the CampaignId field's value. 
+func (s *DeleteCampaignInput) SetCampaignId(v string) *DeleteCampaignInput { + s.CampaignId = &v + return s +} + +type DeleteCampaignOutput struct { + _ struct{} `type:"structure" payload:"CampaignResponse"` + + // Campaign definition + // + // CampaignResponse is a required field + CampaignResponse *CampaignResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteCampaignOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteCampaignOutput) GoString() string { + return s.String() +} + +// SetCampaignResponse sets the CampaignResponse field's value. +func (s *DeleteCampaignOutput) SetCampaignResponse(v *CampaignResponse) *DeleteCampaignOutput { + s.CampaignResponse = v + return s +} + +type DeleteEmailChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteEmailChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEmailChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteEmailChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteEmailChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteEmailChannelInput) SetApplicationId(v string) *DeleteEmailChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteEmailChannelOutput struct { + _ struct{} `type:"structure" payload:"EmailChannelResponse"` + + // Email Channel Response. + // + // EmailChannelResponse is a required field + EmailChannelResponse *EmailChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteEmailChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEmailChannelOutput) GoString() string { + return s.String() +} + +// SetEmailChannelResponse sets the EmailChannelResponse field's value. +func (s *DeleteEmailChannelOutput) SetEmailChannelResponse(v *EmailChannelResponse) *DeleteEmailChannelOutput { + s.EmailChannelResponse = v + return s +} + +type DeleteEndpointInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // EndpointId is a required field + EndpointId *string `location:"uri" locationName:"endpoint-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteEndpointInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.EndpointId == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteEndpointInput) SetApplicationId(v string) *DeleteEndpointInput { + s.ApplicationId = &v + return s +} + +// SetEndpointId sets the EndpointId field's value. +func (s *DeleteEndpointInput) SetEndpointId(v string) *DeleteEndpointInput { + s.EndpointId = &v + return s +} + +type DeleteEndpointOutput struct { + _ struct{} `type:"structure" payload:"EndpointResponse"` + + // Endpoint response + // + // EndpointResponse is a required field + EndpointResponse *EndpointResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEndpointOutput) GoString() string { + return s.String() +} + +// SetEndpointResponse sets the EndpointResponse field's value. +func (s *DeleteEndpointOutput) SetEndpointResponse(v *EndpointResponse) *DeleteEndpointOutput { + s.EndpointResponse = v + return s +} + +type DeleteEventStreamInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteEventStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEventStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteEventStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteEventStreamInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteEventStreamInput) SetApplicationId(v string) *DeleteEventStreamInput { + s.ApplicationId = &v + return s +} + +type DeleteEventStreamOutput struct { + _ struct{} `type:"structure" payload:"EventStream"` + + // Model for an event publishing subscription export. + // + // EventStream is a required field + EventStream *EventStream `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteEventStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEventStreamOutput) GoString() string { + return s.String() +} + +// SetEventStream sets the EventStream field's value. 
+func (s *DeleteEventStreamOutput) SetEventStream(v *EventStream) *DeleteEventStreamOutput { + s.EventStream = v + return s +} + +type DeleteGcmChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteGcmChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGcmChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteGcmChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteGcmChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteGcmChannelInput) SetApplicationId(v string) *DeleteGcmChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteGcmChannelOutput struct { + _ struct{} `type:"structure" payload:"GCMChannelResponse"` + + // Google Cloud Messaging channel definition + // + // GCMChannelResponse is a required field + GCMChannelResponse *GCMChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteGcmChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGcmChannelOutput) GoString() string { + return s.String() +} + +// SetGCMChannelResponse sets the GCMChannelResponse field's value. +func (s *DeleteGcmChannelOutput) SetGCMChannelResponse(v *GCMChannelResponse) *DeleteGcmChannelOutput { + s.GCMChannelResponse = v + return s +} + +type DeleteSegmentInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // SegmentId is a required field + SegmentId *string `location:"uri" locationName:"segment-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteSegmentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSegmentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSegmentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSegmentInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SegmentId == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteSegmentInput) SetApplicationId(v string) *DeleteSegmentInput { + s.ApplicationId = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *DeleteSegmentInput) SetSegmentId(v string) *DeleteSegmentInput { + s.SegmentId = &v + return s +} + +type DeleteSegmentOutput struct { + _ struct{} `type:"structure" payload:"SegmentResponse"` + + // Segment definition. 
+ // + // SegmentResponse is a required field + SegmentResponse *SegmentResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteSegmentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSegmentOutput) GoString() string { + return s.String() +} + +// SetSegmentResponse sets the SegmentResponse field's value. +func (s *DeleteSegmentOutput) SetSegmentResponse(v *SegmentResponse) *DeleteSegmentOutput { + s.SegmentResponse = v + return s +} + +type DeleteSmsChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteSmsChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSmsChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSmsChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSmsChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteSmsChannelInput) SetApplicationId(v string) *DeleteSmsChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteSmsChannelOutput struct { + _ struct{} `type:"structure" payload:"SMSChannelResponse"` + + // SMS Channel Response. + // + // SMSChannelResponse is a required field + SMSChannelResponse *SMSChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteSmsChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSmsChannelOutput) GoString() string { + return s.String() +} + +// SetSMSChannelResponse sets the SMSChannelResponse field's value. +func (s *DeleteSmsChannelOutput) SetSMSChannelResponse(v *SMSChannelResponse) *DeleteSmsChannelOutput { + s.SMSChannelResponse = v + return s +} + +type DeleteUserEndpointsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // UserId is a required field + UserId *string `location:"uri" locationName:"user-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteUserEndpointsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteUserEndpointsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteUserEndpointsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteUserEndpointsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.UserId == nil { + invalidParams.Add(request.NewErrParamRequired("UserId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. 
+func (s *DeleteUserEndpointsInput) SetApplicationId(v string) *DeleteUserEndpointsInput { + s.ApplicationId = &v + return s +} + +// SetUserId sets the UserId field's value. +func (s *DeleteUserEndpointsInput) SetUserId(v string) *DeleteUserEndpointsInput { + s.UserId = &v + return s +} + +type DeleteUserEndpointsOutput struct { + _ struct{} `type:"structure" payload:"EndpointsResponse"` + + // List of endpoints + // + // EndpointsResponse is a required field + EndpointsResponse *EndpointsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteUserEndpointsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteUserEndpointsOutput) GoString() string { + return s.String() +} + +// SetEndpointsResponse sets the EndpointsResponse field's value. +func (s *DeleteUserEndpointsOutput) SetEndpointsResponse(v *EndpointsResponse) *DeleteUserEndpointsOutput { + s.EndpointsResponse = v + return s +} + +type DeleteVoiceChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteVoiceChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVoiceChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteVoiceChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteVoiceChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteVoiceChannelInput) SetApplicationId(v string) *DeleteVoiceChannelInput { + s.ApplicationId = &v + return s +} + +type DeleteVoiceChannelOutput struct { + _ struct{} `type:"structure" payload:"VoiceChannelResponse"` + + // Voice Channel Response. + // + // VoiceChannelResponse is a required field + VoiceChannelResponse *VoiceChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DeleteVoiceChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVoiceChannelOutput) GoString() string { + return s.String() +} + +// SetVoiceChannelResponse sets the VoiceChannelResponse field's value. +func (s *DeleteVoiceChannelOutput) SetVoiceChannelResponse(v *VoiceChannelResponse) *DeleteVoiceChannelOutput { + s.VoiceChannelResponse = v + return s +} + +// Message definitions for the default message and any messages that are tailored +// for specific channels. +type DirectMessageConfiguration struct { + _ struct{} `type:"structure"` + + // The message to ADM channels. Overrides the default push notification message. + ADMMessage *ADMMessage `type:"structure"` + + // The message to APNS channels. Overrides the default push notification message. + APNSMessage *APNSMessage `type:"structure"` + + // The message to Baidu GCM channels. Overrides the default push notification + // message. + BaiduMessage *BaiduMessage `type:"structure"` + + // The default message for all channels. 
+ DefaultMessage *DefaultMessage `type:"structure"` + + // The default push notification message for all push channels. + DefaultPushNotificationMessage *DefaultPushNotificationMessage `type:"structure"` + + // The message to Email channels. Overrides the default message. + EmailMessage *EmailMessage `type:"structure"` + + // The message to GCM channels. Overrides the default push notification message. + GCMMessage *GCMMessage `type:"structure"` + + // The message to SMS channels. Overrides the default message. + SMSMessage *SMSMessage `type:"structure"` + + // The message to Voice channels. Overrides the default message. + VoiceMessage *VoiceMessage `type:"structure"` +} + +// String returns the string representation +func (s DirectMessageConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DirectMessageConfiguration) GoString() string { + return s.String() +} + +// SetADMMessage sets the ADMMessage field's value. +func (s *DirectMessageConfiguration) SetADMMessage(v *ADMMessage) *DirectMessageConfiguration { + s.ADMMessage = v + return s +} + +// SetAPNSMessage sets the APNSMessage field's value. +func (s *DirectMessageConfiguration) SetAPNSMessage(v *APNSMessage) *DirectMessageConfiguration { + s.APNSMessage = v + return s +} + +// SetBaiduMessage sets the BaiduMessage field's value. +func (s *DirectMessageConfiguration) SetBaiduMessage(v *BaiduMessage) *DirectMessageConfiguration { + s.BaiduMessage = v + return s +} + +// SetDefaultMessage sets the DefaultMessage field's value. +func (s *DirectMessageConfiguration) SetDefaultMessage(v *DefaultMessage) *DirectMessageConfiguration { + s.DefaultMessage = v + return s +} + +// SetDefaultPushNotificationMessage sets the DefaultPushNotificationMessage field's value. +func (s *DirectMessageConfiguration) SetDefaultPushNotificationMessage(v *DefaultPushNotificationMessage) *DirectMessageConfiguration { + s.DefaultPushNotificationMessage = v + return s +} + +// SetEmailMessage sets the EmailMessage field's value. +func (s *DirectMessageConfiguration) SetEmailMessage(v *EmailMessage) *DirectMessageConfiguration { + s.EmailMessage = v + return s +} + +// SetGCMMessage sets the GCMMessage field's value. +func (s *DirectMessageConfiguration) SetGCMMessage(v *GCMMessage) *DirectMessageConfiguration { + s.GCMMessage = v + return s +} + +// SetSMSMessage sets the SMSMessage field's value. +func (s *DirectMessageConfiguration) SetSMSMessage(v *SMSMessage) *DirectMessageConfiguration { + s.SMSMessage = v + return s +} + +// SetVoiceMessage sets the VoiceMessage field's value. +func (s *DirectMessageConfiguration) SetVoiceMessage(v *VoiceMessage) *DirectMessageConfiguration { + s.VoiceMessage = v + return s +} + +// Email Channel Request +type EmailChannelRequest struct { + _ struct{} `type:"structure"` + + // The configuration set that you want to use when you send email using the + // Pinpoint Email API. + ConfigurationSet *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // The email address used to send emails from. + FromAddress *string `type:"string"` + + // The ARN of an identity verified with SES. 
+ Identity *string `type:"string"` + + // The ARN of an IAM Role used to submit events to Mobile Analytics' event ingestion + // service + RoleArn *string `type:"string"` +} + +// String returns the string representation +func (s EmailChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EmailChannelRequest) GoString() string { + return s.String() +} + +// SetConfigurationSet sets the ConfigurationSet field's value. +func (s *EmailChannelRequest) SetConfigurationSet(v string) *EmailChannelRequest { + s.ConfigurationSet = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *EmailChannelRequest) SetEnabled(v bool) *EmailChannelRequest { + s.Enabled = &v + return s +} + +// SetFromAddress sets the FromAddress field's value. +func (s *EmailChannelRequest) SetFromAddress(v string) *EmailChannelRequest { + s.FromAddress = &v + return s +} + +// SetIdentity sets the Identity field's value. +func (s *EmailChannelRequest) SetIdentity(v string) *EmailChannelRequest { + s.Identity = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *EmailChannelRequest) SetRoleArn(v string) *EmailChannelRequest { + s.RoleArn = &v + return s +} + +// Email Channel Response. +type EmailChannelResponse struct { + _ struct{} `type:"structure"` + + // The unique ID of the application to which the email channel belongs. + ApplicationId *string `type:"string"` + + // The configuration set that you want to use when you send email using the + // Pinpoint Email API. + ConfigurationSet *string `type:"string"` + + // The date that the settings were last updated in ISO 8601 format. + CreationDate *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // The email address used to send emails from. + FromAddress *string `type:"string"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // Channel ID. Not used, only for backwards compatibility. + Id *string `type:"string"` + + // The ARN of an identity verified with SES. + Identity *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who last updated this entry + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // Messages per second that can be sent + MessagesPerSecond *int64 `type:"integer"` + + // Platform type. Will be "EMAIL" + Platform *string `type:"string"` + + // The ARN of an IAM Role used to submit events to Mobile Analytics' event ingestion + // service + RoleArn *string `type:"string"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s EmailChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EmailChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *EmailChannelResponse) SetApplicationId(v string) *EmailChannelResponse { + s.ApplicationId = &v + return s +} + +// SetConfigurationSet sets the ConfigurationSet field's value. +func (s *EmailChannelResponse) SetConfigurationSet(v string) *EmailChannelResponse { + s.ConfigurationSet = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. 
+func (s *EmailChannelResponse) SetCreationDate(v string) *EmailChannelResponse { + s.CreationDate = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *EmailChannelResponse) SetEnabled(v bool) *EmailChannelResponse { + s.Enabled = &v + return s +} + +// SetFromAddress sets the FromAddress field's value. +func (s *EmailChannelResponse) SetFromAddress(v string) *EmailChannelResponse { + s.FromAddress = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *EmailChannelResponse) SetHasCredential(v bool) *EmailChannelResponse { + s.HasCredential = &v + return s +} + +// SetId sets the Id field's value. +func (s *EmailChannelResponse) SetId(v string) *EmailChannelResponse { + s.Id = &v + return s +} + +// SetIdentity sets the Identity field's value. +func (s *EmailChannelResponse) SetIdentity(v string) *EmailChannelResponse { + s.Identity = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *EmailChannelResponse) SetIsArchived(v bool) *EmailChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *EmailChannelResponse) SetLastModifiedBy(v string) *EmailChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *EmailChannelResponse) SetLastModifiedDate(v string) *EmailChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetMessagesPerSecond sets the MessagesPerSecond field's value. +func (s *EmailChannelResponse) SetMessagesPerSecond(v int64) *EmailChannelResponse { + s.MessagesPerSecond = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *EmailChannelResponse) SetPlatform(v string) *EmailChannelResponse { + s.Platform = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *EmailChannelResponse) SetRoleArn(v string) *EmailChannelResponse { + s.RoleArn = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *EmailChannelResponse) SetVersion(v int64) *EmailChannelResponse { + s.Version = &v + return s +} + +// Email Message. +type EmailMessage struct { + _ struct{} `type:"structure"` + + // The body of the email message. + Body *string `type:"string"` + + // The email address that bounces and complaints will be forwarded to when feedback + // forwarding is enabled. + FeedbackForwardingAddress *string `type:"string"` + + // The email address used to send the email from. Defaults to use FromAddress + // specified in the Email Channel. + FromAddress *string `type:"string"` + + // An email represented as a raw MIME message. + RawEmail *RawEmail `type:"structure"` + + // The reply-to email address(es) for the email. If the recipient replies to + // the email, each reply-to address will receive the reply. + ReplyToAddresses []*string `type:"list"` + + // An email composed of a subject, a text part and a html part. + SimpleEmail *SimpleEmail `type:"structure"` + + // Default message substitutions. Can be overridden by individual address substitutions. + Substitutions map[string][]*string `type:"map"` +} + +// String returns the string representation +func (s EmailMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EmailMessage) GoString() string { + return s.String() +} + +// SetBody sets the Body field's value. 
+func (s *EmailMessage) SetBody(v string) *EmailMessage { + s.Body = &v + return s +} + +// SetFeedbackForwardingAddress sets the FeedbackForwardingAddress field's value. +func (s *EmailMessage) SetFeedbackForwardingAddress(v string) *EmailMessage { + s.FeedbackForwardingAddress = &v + return s +} + +// SetFromAddress sets the FromAddress field's value. +func (s *EmailMessage) SetFromAddress(v string) *EmailMessage { + s.FromAddress = &v + return s +} + +// SetRawEmail sets the RawEmail field's value. +func (s *EmailMessage) SetRawEmail(v *RawEmail) *EmailMessage { + s.RawEmail = v + return s +} + +// SetReplyToAddresses sets the ReplyToAddresses field's value. +func (s *EmailMessage) SetReplyToAddresses(v []*string) *EmailMessage { + s.ReplyToAddresses = v + return s +} + +// SetSimpleEmail sets the SimpleEmail field's value. +func (s *EmailMessage) SetSimpleEmail(v *SimpleEmail) *EmailMessage { + s.SimpleEmail = v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *EmailMessage) SetSubstitutions(v map[string][]*string) *EmailMessage { + s.Substitutions = v + return s +} + +// Endpoint update request +type EndpointBatchItem struct { + _ struct{} `type:"structure"` + + // The destination for messages that you send to this endpoint. The address + // varies by channel. For mobile push channels, use the token provided by the + // push notification service, such as the APNs device token or the FCM registration + // token. For the SMS channel, use a phone number in E.164 format, such as +12065550100. + // For the email channel, use an email address. + Address *string `type:"string"` + + // Custom attributes that describe the endpoint by associating a name with an + // array of values. For example, an attribute named "interests" might have the + // values ["science", "politics", "travel"]. You can use these attributes as + // selection criteria when you create a segment of users to engage with a messaging + // campaign.The following characters are not recommended in attribute names: + // # : ? \ /. The Amazon Pinpoint console does not display attributes that include + // these characters in the name. This limitation does not apply to attribute + // values. + Attributes map[string][]*string `type:"map"` + + // The channel type.Valid values: GCM | APNS | APNS_SANDBOX | APNS_VOIP | APNS_VOIP_SANDBOX + // | ADM | SMS | EMAIL | BAIDU + ChannelType *string `type:"string" enum:"ChannelType"` + + // The endpoint demographic attributes. + Demographic *EndpointDemographic `type:"structure"` + + // The last time the endpoint was updated. Provided in ISO 8601 format. + EffectiveDate *string `type:"string"` + + // Unused. + EndpointStatus *string `type:"string"` + + // The unique Id for the Endpoint in the batch. + Id *string `type:"string"` + + // The endpoint location attributes. + Location *EndpointLocation `type:"structure"` + + // Custom metrics that your app reports to Amazon Pinpoint. + Metrics map[string]*float64 `type:"map"` + + // Indicates whether a user has opted out of receiving messages with one of + // the following values:ALL - User has opted out of all messages.NONE - Users + // has not opted out and receives all messages. + OptOut *string `type:"string"` + + // The unique ID for the most recent request to update the endpoint. + RequestId *string `type:"string"` + + // Custom user-specific attributes that your app reports to Amazon Pinpoint. 
+ User *EndpointUser `type:"structure"` +} + +// String returns the string representation +func (s EndpointBatchItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointBatchItem) GoString() string { + return s.String() +} + +// SetAddress sets the Address field's value. +func (s *EndpointBatchItem) SetAddress(v string) *EndpointBatchItem { + s.Address = &v + return s +} + +// SetAttributes sets the Attributes field's value. +func (s *EndpointBatchItem) SetAttributes(v map[string][]*string) *EndpointBatchItem { + s.Attributes = v + return s +} + +// SetChannelType sets the ChannelType field's value. +func (s *EndpointBatchItem) SetChannelType(v string) *EndpointBatchItem { + s.ChannelType = &v + return s +} + +// SetDemographic sets the Demographic field's value. +func (s *EndpointBatchItem) SetDemographic(v *EndpointDemographic) *EndpointBatchItem { + s.Demographic = v + return s +} + +// SetEffectiveDate sets the EffectiveDate field's value. +func (s *EndpointBatchItem) SetEffectiveDate(v string) *EndpointBatchItem { + s.EffectiveDate = &v + return s +} + +// SetEndpointStatus sets the EndpointStatus field's value. +func (s *EndpointBatchItem) SetEndpointStatus(v string) *EndpointBatchItem { + s.EndpointStatus = &v + return s +} + +// SetId sets the Id field's value. +func (s *EndpointBatchItem) SetId(v string) *EndpointBatchItem { + s.Id = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *EndpointBatchItem) SetLocation(v *EndpointLocation) *EndpointBatchItem { + s.Location = v + return s +} + +// SetMetrics sets the Metrics field's value. +func (s *EndpointBatchItem) SetMetrics(v map[string]*float64) *EndpointBatchItem { + s.Metrics = v + return s +} + +// SetOptOut sets the OptOut field's value. +func (s *EndpointBatchItem) SetOptOut(v string) *EndpointBatchItem { + s.OptOut = &v + return s +} + +// SetRequestId sets the RequestId field's value. +func (s *EndpointBatchItem) SetRequestId(v string) *EndpointBatchItem { + s.RequestId = &v + return s +} + +// SetUser sets the User field's value. +func (s *EndpointBatchItem) SetUser(v *EndpointUser) *EndpointBatchItem { + s.User = v + return s +} + +// Endpoint batch update request. +type EndpointBatchRequest struct { + _ struct{} `type:"structure"` + + // List of items to update. Maximum 100 items + Item []*EndpointBatchItem `type:"list"` +} + +// String returns the string representation +func (s EndpointBatchRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointBatchRequest) GoString() string { + return s.String() +} + +// SetItem sets the Item field's value. +func (s *EndpointBatchRequest) SetItem(v []*EndpointBatchItem) *EndpointBatchRequest { + s.Item = v + return s +} + +// Demographic information about the endpoint. +type EndpointDemographic struct { + _ struct{} `type:"structure"` + + // The version of the application associated with the endpoint. + AppVersion *string `type:"string"` + + // The endpoint locale in the following format: The ISO 639-1 alpha-2 code, + // followed by an underscore, followed by an ISO 3166-1 alpha-2 value. + Locale *string `type:"string"` + + // The manufacturer of the endpoint device, such as Apple or Samsung. + Make *string `type:"string"` + + // The model name or number of the endpoint device, such as iPhone. + Model *string `type:"string"` + + // The model version of the endpoint device. 
+ ModelVersion *string `type:"string"` + + // The platform of the endpoint device, such as iOS or Android. + Platform *string `type:"string"` + + // The platform version of the endpoint device. + PlatformVersion *string `type:"string"` + + // The timezone of the endpoint. Specified as a tz database value, such as Americas/Los_Angeles. + Timezone *string `type:"string"` +} + +// String returns the string representation +func (s EndpointDemographic) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointDemographic) GoString() string { + return s.String() +} + +// SetAppVersion sets the AppVersion field's value. +func (s *EndpointDemographic) SetAppVersion(v string) *EndpointDemographic { + s.AppVersion = &v + return s +} + +// SetLocale sets the Locale field's value. +func (s *EndpointDemographic) SetLocale(v string) *EndpointDemographic { + s.Locale = &v + return s +} + +// SetMake sets the Make field's value. +func (s *EndpointDemographic) SetMake(v string) *EndpointDemographic { + s.Make = &v + return s +} + +// SetModel sets the Model field's value. +func (s *EndpointDemographic) SetModel(v string) *EndpointDemographic { + s.Model = &v + return s +} + +// SetModelVersion sets the ModelVersion field's value. +func (s *EndpointDemographic) SetModelVersion(v string) *EndpointDemographic { + s.ModelVersion = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *EndpointDemographic) SetPlatform(v string) *EndpointDemographic { + s.Platform = &v + return s +} + +// SetPlatformVersion sets the PlatformVersion field's value. +func (s *EndpointDemographic) SetPlatformVersion(v string) *EndpointDemographic { + s.PlatformVersion = &v + return s +} + +// SetTimezone sets the Timezone field's value. +func (s *EndpointDemographic) SetTimezone(v string) *EndpointDemographic { + s.Timezone = &v + return s +} + +// A complex object that holds the status code and message as a result of processing +// an endpoint. +type EndpointItemResponse struct { + _ struct{} `type:"structure"` + + // A custom message associated with the registration of an endpoint when issuing + // a response. + Message *string `type:"string"` + + // The status code associated with the merging of an endpoint when issuing a + // response. + StatusCode *int64 `type:"integer"` +} + +// String returns the string representation +func (s EndpointItemResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointItemResponse) GoString() string { + return s.String() +} + +// SetMessage sets the Message field's value. +func (s *EndpointItemResponse) SetMessage(v string) *EndpointItemResponse { + s.Message = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. +func (s *EndpointItemResponse) SetStatusCode(v int64) *EndpointItemResponse { + s.StatusCode = &v + return s +} + +// Location data for the endpoint. +type EndpointLocation struct { + _ struct{} `type:"structure"` + + // The city where the endpoint is located. + City *string `type:"string"` + + // The two-letter code for the country or region of the endpoint. Specified + // as an ISO 3166-1 alpha-2 code, such as "US" for the United States. + Country *string `type:"string"` + + // The latitude of the endpoint location, rounded to one decimal place. + Latitude *float64 `type:"double"` + + // The longitude of the endpoint location, rounded to one decimal place. 
+ Longitude *float64 `type:"double"` + + // The postal code or zip code of the endpoint. + PostalCode *string `type:"string"` + + // The region of the endpoint location. For example, in the United States, this + // corresponds to a state. + Region *string `type:"string"` +} + +// String returns the string representation +func (s EndpointLocation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointLocation) GoString() string { + return s.String() +} + +// SetCity sets the City field's value. +func (s *EndpointLocation) SetCity(v string) *EndpointLocation { + s.City = &v + return s +} + +// SetCountry sets the Country field's value. +func (s *EndpointLocation) SetCountry(v string) *EndpointLocation { + s.Country = &v + return s +} + +// SetLatitude sets the Latitude field's value. +func (s *EndpointLocation) SetLatitude(v float64) *EndpointLocation { + s.Latitude = &v + return s +} + +// SetLongitude sets the Longitude field's value. +func (s *EndpointLocation) SetLongitude(v float64) *EndpointLocation { + s.Longitude = &v + return s +} + +// SetPostalCode sets the PostalCode field's value. +func (s *EndpointLocation) SetPostalCode(v string) *EndpointLocation { + s.PostalCode = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *EndpointLocation) SetRegion(v string) *EndpointLocation { + s.Region = &v + return s +} + +// The result from sending a message to an endpoint. +type EndpointMessageResult struct { + _ struct{} `type:"structure"` + + // Address that endpoint message was delivered to. + Address *string `type:"string"` + + // The delivery status of the message. Possible values:SUCCESS - The message + // was successfully delivered to the endpoint.TRANSIENT_FAILURE - A temporary + // error occurred. Amazon Pinpoint will attempt to deliver the message again + // later.FAILURE_PERMANENT - An error occurred when delivering the message to + // the endpoint. Amazon Pinpoint won't attempt to send the message again.TIMEOUT + // - The message couldn't be sent within the timeout period.QUIET_TIME - The + // local time for the endpoint was within the QuietTime for the campaign or + // app.DAILY_CAP - The endpoint has received the maximum number of messages + // it can receive within a 24-hour period.HOLDOUT - The endpoint was in a hold + // out treatment for the campaign.THROTTLED - Amazon Pinpoint throttled sending + // to this endpoint.EXPIRED - The endpoint address is expired.CAMPAIGN_CAP - + // The endpoint received the maximum number of messages allowed by the campaign.SERVICE_FAILURE + // - A service-level failure prevented Amazon Pinpoint from delivering the message.UNKNOWN + // - An unknown error occurred. + DeliveryStatus *string `type:"string" enum:"DeliveryStatus"` + + // Unique message identifier associated with the message that was sent. + MessageId *string `type:"string"` + + // Downstream service status code. + StatusCode *int64 `type:"integer"` + + // Status message for message delivery. + StatusMessage *string `type:"string"` + + // If token was updated as part of delivery. (This is GCM Specific) + UpdatedToken *string `type:"string"` +} + +// String returns the string representation +func (s EndpointMessageResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointMessageResult) GoString() string { + return s.String() +} + +// SetAddress sets the Address field's value. 
+func (s *EndpointMessageResult) SetAddress(v string) *EndpointMessageResult { + s.Address = &v + return s +} + +// SetDeliveryStatus sets the DeliveryStatus field's value. +func (s *EndpointMessageResult) SetDeliveryStatus(v string) *EndpointMessageResult { + s.DeliveryStatus = &v + return s +} + +// SetMessageId sets the MessageId field's value. +func (s *EndpointMessageResult) SetMessageId(v string) *EndpointMessageResult { + s.MessageId = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. +func (s *EndpointMessageResult) SetStatusCode(v int64) *EndpointMessageResult { + s.StatusCode = &v + return s +} + +// SetStatusMessage sets the StatusMessage field's value. +func (s *EndpointMessageResult) SetStatusMessage(v string) *EndpointMessageResult { + s.StatusMessage = &v + return s +} + +// SetUpdatedToken sets the UpdatedToken field's value. +func (s *EndpointMessageResult) SetUpdatedToken(v string) *EndpointMessageResult { + s.UpdatedToken = &v + return s +} + +// An endpoint update request. +type EndpointRequest struct { + _ struct{} `type:"structure"` + + // The destination for messages that you send to this endpoint. The address + // varies by channel. For mobile push channels, use the token provided by the + // push notification service, such as the APNs device token or the FCM registration + // token. For the SMS channel, use a phone number in E.164 format, such as +12065550100. + // For the email channel, use an email address. + Address *string `type:"string"` + + // Custom attributes that describe the endpoint by associating a name with an + // array of values. For example, an attribute named "interests" might have the + // values ["science", "politics", "travel"]. You can use these attributes as + // selection criteria when you create a segment of users to engage with a messaging + // campaign.The following characters are not recommended in attribute names: + // # : ? \ /. The Amazon Pinpoint console does not display attributes that include + // these characters in the name. This limitation does not apply to attribute + // values. + Attributes map[string][]*string `type:"map"` + + // The channel type.Valid values: GCM | APNS | APNS_SANDBOX | APNS_VOIP | APNS_VOIP_SANDBOX + // | ADM | SMS | EMAIL | BAIDU + ChannelType *string `type:"string" enum:"ChannelType"` + + // Demographic attributes for the endpoint. + Demographic *EndpointDemographic `type:"structure"` + + // The date and time when the endpoint was updated, shown in ISO 8601 format. + EffectiveDate *string `type:"string"` + + // Unused. + EndpointStatus *string `type:"string"` + + // The endpoint location attributes. + Location *EndpointLocation `type:"structure"` + + // Custom metrics that your app reports to Amazon Pinpoint. + Metrics map[string]*float64 `type:"map"` + + // Indicates whether a user has opted out of receiving messages with one of + // the following values:ALL - User has opted out of all messages.NONE - Users + // has not opted out and receives all messages. + OptOut *string `type:"string"` + + // The unique ID for the most recent request to update the endpoint. + RequestId *string `type:"string"` + + // Custom user-specific attributes that your app reports to Amazon Pinpoint. 
+ User *EndpointUser `type:"structure"` +} + +// String returns the string representation +func (s EndpointRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointRequest) GoString() string { + return s.String() +} + +// SetAddress sets the Address field's value. +func (s *EndpointRequest) SetAddress(v string) *EndpointRequest { + s.Address = &v + return s +} + +// SetAttributes sets the Attributes field's value. +func (s *EndpointRequest) SetAttributes(v map[string][]*string) *EndpointRequest { + s.Attributes = v + return s +} + +// SetChannelType sets the ChannelType field's value. +func (s *EndpointRequest) SetChannelType(v string) *EndpointRequest { + s.ChannelType = &v + return s +} + +// SetDemographic sets the Demographic field's value. +func (s *EndpointRequest) SetDemographic(v *EndpointDemographic) *EndpointRequest { + s.Demographic = v + return s +} + +// SetEffectiveDate sets the EffectiveDate field's value. +func (s *EndpointRequest) SetEffectiveDate(v string) *EndpointRequest { + s.EffectiveDate = &v + return s +} + +// SetEndpointStatus sets the EndpointStatus field's value. +func (s *EndpointRequest) SetEndpointStatus(v string) *EndpointRequest { + s.EndpointStatus = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *EndpointRequest) SetLocation(v *EndpointLocation) *EndpointRequest { + s.Location = v + return s +} + +// SetMetrics sets the Metrics field's value. +func (s *EndpointRequest) SetMetrics(v map[string]*float64) *EndpointRequest { + s.Metrics = v + return s +} + +// SetOptOut sets the OptOut field's value. +func (s *EndpointRequest) SetOptOut(v string) *EndpointRequest { + s.OptOut = &v + return s +} + +// SetRequestId sets the RequestId field's value. +func (s *EndpointRequest) SetRequestId(v string) *EndpointRequest { + s.RequestId = &v + return s +} + +// SetUser sets the User field's value. +func (s *EndpointRequest) SetUser(v *EndpointUser) *EndpointRequest { + s.User = v + return s +} + +// Endpoint response +type EndpointResponse struct { + _ struct{} `type:"structure"` + + // The address of the endpoint as provided by your push provider. For example, + // the DeviceToken or RegistrationId. + Address *string `type:"string"` + + // The ID of the application that is associated with the endpoint. + ApplicationId *string `type:"string"` + + // Custom attributes that describe the endpoint by associating a name with an + // array of values. For example, an attribute named "interests" might have the + // following values: ["science", "politics", "travel"]. You can use these attributes + // as selection criteria when you create segments.The Amazon Pinpoint console + // can't display attribute names that include the following characters: hash/pound + // sign (#), colon (:), question mark (?), backslash (\), and forward slash + // (/). For this reason, you should avoid using these characters in the names + // of custom attributes. + Attributes map[string][]*string `type:"map"` + + // The channel type.Valid values: GCM | APNS | APNS_SANDBOX | APNS_VOIP | APNS_VOIP_SANDBOX + // | ADM | SMS | EMAIL | BAIDU + ChannelType *string `type:"string" enum:"ChannelType"` + + // A number from 0-99 that represents the cohort the endpoint is assigned to. + // Endpoints are grouped into cohorts randomly, and each cohort contains approximately + // 1 percent of the endpoints for an app. Amazon Pinpoint assigns cohorts to + // the holdout or treatment allocations for a campaign. 
+ CohortId *string `type:"string"` + + // The date and time when the endpoint was created, shown in ISO 8601 format. + CreationDate *string `type:"string"` + + // The endpoint demographic attributes. + Demographic *EndpointDemographic `type:"structure"` + + // The date and time when the endpoint was last updated, shown in ISO 8601 format. + EffectiveDate *string `type:"string"` + + // Unused. + EndpointStatus *string `type:"string"` + + // The unique ID that you assigned to the endpoint. The ID should be a globally + // unique identifier (GUID) to ensure that it doesn't conflict with other endpoint + // IDs associated with the application. + Id *string `type:"string"` + + // The endpoint location attributes. + Location *EndpointLocation `type:"structure"` + + // Custom metrics that your app reports to Amazon Pinpoint. + Metrics map[string]*float64 `type:"map"` + + // Indicates whether a user has opted out of receiving messages with one of + // the following values:ALL - User has opted out of all messages.NONE - Users + // has not opted out and receives all messages. + OptOut *string `type:"string"` + + // The unique ID for the most recent request to update the endpoint. + RequestId *string `type:"string"` + + // Custom user-specific attributes that your app reports to Amazon Pinpoint. + User *EndpointUser `type:"structure"` +} + +// String returns the string representation +func (s EndpointResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointResponse) GoString() string { + return s.String() +} + +// SetAddress sets the Address field's value. +func (s *EndpointResponse) SetAddress(v string) *EndpointResponse { + s.Address = &v + return s +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *EndpointResponse) SetApplicationId(v string) *EndpointResponse { + s.ApplicationId = &v + return s +} + +// SetAttributes sets the Attributes field's value. +func (s *EndpointResponse) SetAttributes(v map[string][]*string) *EndpointResponse { + s.Attributes = v + return s +} + +// SetChannelType sets the ChannelType field's value. +func (s *EndpointResponse) SetChannelType(v string) *EndpointResponse { + s.ChannelType = &v + return s +} + +// SetCohortId sets the CohortId field's value. +func (s *EndpointResponse) SetCohortId(v string) *EndpointResponse { + s.CohortId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *EndpointResponse) SetCreationDate(v string) *EndpointResponse { + s.CreationDate = &v + return s +} + +// SetDemographic sets the Demographic field's value. +func (s *EndpointResponse) SetDemographic(v *EndpointDemographic) *EndpointResponse { + s.Demographic = v + return s +} + +// SetEffectiveDate sets the EffectiveDate field's value. +func (s *EndpointResponse) SetEffectiveDate(v string) *EndpointResponse { + s.EffectiveDate = &v + return s +} + +// SetEndpointStatus sets the EndpointStatus field's value. +func (s *EndpointResponse) SetEndpointStatus(v string) *EndpointResponse { + s.EndpointStatus = &v + return s +} + +// SetId sets the Id field's value. +func (s *EndpointResponse) SetId(v string) *EndpointResponse { + s.Id = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *EndpointResponse) SetLocation(v *EndpointLocation) *EndpointResponse { + s.Location = v + return s +} + +// SetMetrics sets the Metrics field's value. 
+func (s *EndpointResponse) SetMetrics(v map[string]*float64) *EndpointResponse { + s.Metrics = v + return s +} + +// SetOptOut sets the OptOut field's value. +func (s *EndpointResponse) SetOptOut(v string) *EndpointResponse { + s.OptOut = &v + return s +} + +// SetRequestId sets the RequestId field's value. +func (s *EndpointResponse) SetRequestId(v string) *EndpointResponse { + s.RequestId = &v + return s +} + +// SetUser sets the User field's value. +func (s *EndpointResponse) SetUser(v *EndpointUser) *EndpointResponse { + s.User = v + return s +} + +// Endpoint send configuration. +type EndpointSendConfiguration struct { + _ struct{} `type:"structure"` + + // Body override. If specified will override default body. + BodyOverride *string `type:"string"` + + // A map of custom attributes to attributes to be attached to the message for + // this address. This payload is added to the push notification's 'data.pinpoint' + // object or added to the email/sms delivery receipt event attributes. + Context map[string]*string `type:"map"` + + // The Raw JSON formatted string to be used as the payload. This value overrides + // the message. + RawContent *string `type:"string"` + + // A map of substitution values for the message to be merged with the DefaultMessage's + // substitutions. Substitutions on this map take precedence over the all other + // substitutions. + Substitutions map[string][]*string `type:"map"` + + // Title override. If specified will override default title if applicable. + TitleOverride *string `type:"string"` +} + +// String returns the string representation +func (s EndpointSendConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointSendConfiguration) GoString() string { + return s.String() +} + +// SetBodyOverride sets the BodyOverride field's value. +func (s *EndpointSendConfiguration) SetBodyOverride(v string) *EndpointSendConfiguration { + s.BodyOverride = &v + return s +} + +// SetContext sets the Context field's value. +func (s *EndpointSendConfiguration) SetContext(v map[string]*string) *EndpointSendConfiguration { + s.Context = v + return s +} + +// SetRawContent sets the RawContent field's value. +func (s *EndpointSendConfiguration) SetRawContent(v string) *EndpointSendConfiguration { + s.RawContent = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *EndpointSendConfiguration) SetSubstitutions(v map[string][]*string) *EndpointSendConfiguration { + s.Substitutions = v + return s +} + +// SetTitleOverride sets the TitleOverride field's value. +func (s *EndpointSendConfiguration) SetTitleOverride(v string) *EndpointSendConfiguration { + s.TitleOverride = &v + return s +} + +// Endpoint user specific custom userAttributes +type EndpointUser struct { + _ struct{} `type:"structure"` + + // Custom attributes that describe the user by associating a name with an array + // of values. For example, an attribute named "interests" might have the following + // values: ["science", "politics", "travel"]. You can use these attributes as + // selection criteria when you create segments.The Amazon Pinpoint console can't + // display attribute names that include the following characters: hash/pound + // sign (#), colon (:), question mark (?), backslash (\), and forward slash + // (/). For this reason, you should avoid using these characters in the names + // of custom attributes. + UserAttributes map[string][]*string `type:"map"` + + // The unique ID of the user. 
+ UserId *string `type:"string"` +} + +// String returns the string representation +func (s EndpointUser) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointUser) GoString() string { + return s.String() +} + +// SetUserAttributes sets the UserAttributes field's value. +func (s *EndpointUser) SetUserAttributes(v map[string][]*string) *EndpointUser { + s.UserAttributes = v + return s +} + +// SetUserId sets the UserId field's value. +func (s *EndpointUser) SetUserId(v string) *EndpointUser { + s.UserId = &v + return s +} + +// List of endpoints +type EndpointsResponse struct { + _ struct{} `type:"structure"` + + // The list of endpoints. + Item []*EndpointResponse `type:"list"` +} + +// String returns the string representation +func (s EndpointsResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndpointsResponse) GoString() string { + return s.String() +} + +// SetItem sets the Item field's value. +func (s *EndpointsResponse) SetItem(v []*EndpointResponse) *EndpointsResponse { + s.Item = v + return s +} + +// Model for creating or updating events. +type Event struct { + _ struct{} `type:"structure"` + + // Custom attributes that are associated with the event you're adding or updating. + Attributes map[string]*string `type:"map"` + + // The version of the SDK that's running on the client device. + ClientSdkVersion *string `type:"string"` + + // The name of the custom event that you're recording. + EventType *string `type:"string"` + + // Custom metrics related to the event. + Metrics map[string]*float64 `type:"map"` + + // Information about the session in which the event occurred. + Session *Session `type:"structure"` + + // The date and time when the event occurred, in ISO 8601 format. + Timestamp *string `type:"string"` +} + +// String returns the string representation +func (s Event) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Event) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *Event) SetAttributes(v map[string]*string) *Event { + s.Attributes = v + return s +} + +// SetClientSdkVersion sets the ClientSdkVersion field's value. +func (s *Event) SetClientSdkVersion(v string) *Event { + s.ClientSdkVersion = &v + return s +} + +// SetEventType sets the EventType field's value. +func (s *Event) SetEventType(v string) *Event { + s.EventType = &v + return s +} + +// SetMetrics sets the Metrics field's value. +func (s *Event) SetMetrics(v map[string]*float64) *Event { + s.Metrics = v + return s +} + +// SetSession sets the Session field's value. +func (s *Event) SetSession(v *Session) *Event { + s.Session = v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *Event) SetTimestamp(v string) *Event { + s.Timestamp = &v + return s +} + +// Event dimensions. +type EventDimensions struct { + _ struct{} `type:"structure"` + + // Custom attributes that your app reports to Amazon Pinpoint. You can use these + // attributes as selection criteria when you create an event filter. + Attributes map[string]*AttributeDimension `type:"map"` + + // The name of the event that causes the campaign to be sent. This can be a + // standard event type that Amazon Pinpoint generates, such as _session.start, + // or a custom event that's specific to your app. 
+ EventType *SetDimension `type:"structure"` + + // Custom metrics that your app reports to Amazon Pinpoint. You can use these + // attributes as selection criteria when you create an event filter. + Metrics map[string]*MetricDimension `type:"map"` +} + +// String returns the string representation +func (s EventDimensions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EventDimensions) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *EventDimensions) SetAttributes(v map[string]*AttributeDimension) *EventDimensions { + s.Attributes = v + return s +} + +// SetEventType sets the EventType field's value. +func (s *EventDimensions) SetEventType(v *SetDimension) *EventDimensions { + s.EventType = v + return s +} + +// SetMetrics sets the Metrics field's value. +func (s *EventDimensions) SetMetrics(v map[string]*MetricDimension) *EventDimensions { + s.Metrics = v + return s +} + +// A complex object that holds the status code and message as a result of processing +// an event. +type EventItemResponse struct { + _ struct{} `type:"structure"` + + // A custom message that is associated with the processing of an event. + Message *string `type:"string"` + + // The status returned in the response as a result of processing the event.Possible + // values: 400 (for invalid events) and 202 (for events that were accepted). + StatusCode *int64 `type:"integer"` +} + +// String returns the string representation +func (s EventItemResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EventItemResponse) GoString() string { + return s.String() +} + +// SetMessage sets the Message field's value. +func (s *EventItemResponse) SetMessage(v string) *EventItemResponse { + s.Message = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. +func (s *EventItemResponse) SetStatusCode(v int64) *EventItemResponse { + s.StatusCode = &v + return s +} + +// Model for an event publishing subscription export. +type EventStream struct { + _ struct{} `type:"structure"` + + // The ID of the application from which events should be published. + ApplicationId *string `type:"string"` + + // The Amazon Resource Name (ARN) of the Amazon Kinesis stream or Firehose delivery + // stream to which you want to publish events. Firehose ARN: arn:aws:firehose:REGION:ACCOUNT_ID:deliverystream/STREAM_NAME + // Kinesis ARN: arn:aws:kinesis:REGION:ACCOUNT_ID:stream/STREAM_NAME + DestinationStreamArn *string `type:"string"` + + // (Deprecated) Your AWS account ID, which you assigned to the ExternalID key + // in an IAM trust policy. Used by Amazon Pinpoint to assume an IAM role. This + // requirement is removed, and external IDs are not recommended for IAM roles + // assumed by Amazon Pinpoint. + ExternalId *string `type:"string"` + + // The date the event stream was last updated in ISO 8601 format. + LastModifiedDate *string `type:"string"` + + // The IAM user who last modified the event stream. + LastUpdatedBy *string `type:"string"` + + // The IAM role that authorizes Amazon Pinpoint to publish events to the stream + // in your account. 
+ RoleArn *string `type:"string"` +} + +// String returns the string representation +func (s EventStream) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EventStream) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *EventStream) SetApplicationId(v string) *EventStream { + s.ApplicationId = &v + return s +} + +// SetDestinationStreamArn sets the DestinationStreamArn field's value. +func (s *EventStream) SetDestinationStreamArn(v string) *EventStream { + s.DestinationStreamArn = &v + return s +} + +// SetExternalId sets the ExternalId field's value. +func (s *EventStream) SetExternalId(v string) *EventStream { + s.ExternalId = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *EventStream) SetLastModifiedDate(v string) *EventStream { + s.LastModifiedDate = &v + return s +} + +// SetLastUpdatedBy sets the LastUpdatedBy field's value. +func (s *EventStream) SetLastUpdatedBy(v string) *EventStream { + s.LastUpdatedBy = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *EventStream) SetRoleArn(v string) *EventStream { + s.RoleArn = &v + return s +} + +// A batch of PublicEndpoints and Events to process. +type EventsBatch struct { + _ struct{} `type:"structure"` + + // The PublicEndpoint attached to the EndpointId from the request. + Endpoint *PublicEndpoint `type:"structure"` + + // An object that contains a set of events associated with the endpoint. + Events map[string]*Event `type:"map"` +} + +// String returns the string representation +func (s EventsBatch) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EventsBatch) GoString() string { + return s.String() +} + +// SetEndpoint sets the Endpoint field's value. +func (s *EventsBatch) SetEndpoint(v *PublicEndpoint) *EventsBatch { + s.Endpoint = v + return s +} + +// SetEvents sets the Events field's value. +func (s *EventsBatch) SetEvents(v map[string]*Event) *EventsBatch { + s.Events = v + return s +} + +// A set of events to process. +type EventsRequest struct { + _ struct{} `type:"structure"` + + // A batch of events to process. Each BatchItem consists of an endpoint ID as + // the key, and an EventsBatch object as the value. + BatchItem map[string]*EventsBatch `type:"map"` +} + +// String returns the string representation +func (s EventsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EventsRequest) GoString() string { + return s.String() +} + +// SetBatchItem sets the BatchItem field's value. +func (s *EventsRequest) SetBatchItem(v map[string]*EventsBatch) *EventsRequest { + s.BatchItem = v + return s +} + +// Custom messages associated with events. +type EventsResponse struct { + _ struct{} `type:"structure"` + + // A map that contains a multipart response for each endpoint. Each item in + // this object uses the endpoint ID as the key, and the item response as the + // value.If no item response exists, the value can also be one of the following: + // 202 (if the request was processed successfully) or 400 (if the payload was + // invalid, or required fields were missing). 
+ Results map[string]*ItemResponse `type:"map"` +} + +// String returns the string representation +func (s EventsResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EventsResponse) GoString() string { + return s.String() +} + +// SetResults sets the Results field's value. +func (s *EventsResponse) SetResults(v map[string]*ItemResponse) *EventsResponse { + s.Results = v + return s +} + +// Export job request. +type ExportJobRequest struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of an IAM role that grants Amazon Pinpoint + // access to the Amazon S3 location that endpoints will be exported to. + RoleArn *string `type:"string"` + + // A URL that points to the location within an Amazon S3 bucket that will receive + // the export. The location is typically a folder with multiple files.The URL + // should follow this format: s3://bucket-name/folder-name/Amazon Pinpoint will + // export endpoints to this location. + S3UrlPrefix *string `type:"string"` + + // The ID of the segment to export endpoints from. If not present, Amazon Pinpoint + // exports all of the endpoints that belong to the application. + SegmentId *string `type:"string"` + + // The version of the segment to export if specified. + SegmentVersion *int64 `type:"integer"` +} + +// String returns the string representation +func (s ExportJobRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportJobRequest) GoString() string { + return s.String() +} + +// SetRoleArn sets the RoleArn field's value. +func (s *ExportJobRequest) SetRoleArn(v string) *ExportJobRequest { + s.RoleArn = &v + return s +} + +// SetS3UrlPrefix sets the S3UrlPrefix field's value. +func (s *ExportJobRequest) SetS3UrlPrefix(v string) *ExportJobRequest { + s.S3UrlPrefix = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *ExportJobRequest) SetSegmentId(v string) *ExportJobRequest { + s.SegmentId = &v + return s +} + +// SetSegmentVersion sets the SegmentVersion field's value. +func (s *ExportJobRequest) SetSegmentVersion(v int64) *ExportJobRequest { + s.SegmentVersion = &v + return s +} + +// Export job resource. +type ExportJobResource struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of an IAM role that grants Amazon Pinpoint + // access to the Amazon S3 location that endpoints will be exported to. + RoleArn *string `type:"string"` + + // A URL that points to the location within an Amazon S3 bucket that will receive + // the export. The location is typically a folder with multiple files.The URL + // should follow this format: s3://bucket-name/folder-name/Amazon Pinpoint will + // export endpoints to this location. + S3UrlPrefix *string `type:"string"` + + // The ID of the segment to export endpoints from. If not present, Amazon Pinpoint + // exports all of the endpoints that belong to the application. + SegmentId *string `type:"string"` + + // The version of the segment to export if specified. + SegmentVersion *int64 `type:"integer"` +} + +// String returns the string representation +func (s ExportJobResource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportJobResource) GoString() string { + return s.String() +} + +// SetRoleArn sets the RoleArn field's value. 
+func (s *ExportJobResource) SetRoleArn(v string) *ExportJobResource { + s.RoleArn = &v + return s +} + +// SetS3UrlPrefix sets the S3UrlPrefix field's value. +func (s *ExportJobResource) SetS3UrlPrefix(v string) *ExportJobResource { + s.S3UrlPrefix = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *ExportJobResource) SetSegmentId(v string) *ExportJobResource { + s.SegmentId = &v + return s +} + +// SetSegmentVersion sets the SegmentVersion field's value. +func (s *ExportJobResource) SetSegmentVersion(v int64) *ExportJobResource { + s.SegmentVersion = &v + return s +} + +// Export job response. +type ExportJobResponse struct { + _ struct{} `type:"structure"` + + // The unique ID of the application associated with the export job. + ApplicationId *string `type:"string"` + + // The number of pieces that have successfully completed as of the time of the + // request. + CompletedPieces *int64 `type:"integer"` + + // The date the job completed in ISO 8601 format. + CompletionDate *string `type:"string"` + + // The date the job was created in ISO 8601 format. + CreationDate *string `type:"string"` + + // The export job settings. + Definition *ExportJobResource `type:"structure"` + + // The number of pieces that failed to be processed as of the time of the request. + FailedPieces *int64 `type:"integer"` + + // Provides up to 100 of the first failed entries for the job, if any exist. + Failures []*string `type:"list"` + + // The unique ID of the job. + Id *string `type:"string"` + + // The status of the job.Valid values: CREATED, INITIALIZING, PROCESSING, COMPLETING, + // COMPLETED, FAILING, FAILEDThe job status is FAILED if one or more pieces + // failed. + JobStatus *string `type:"string" enum:"JobStatus"` + + // The number of endpoints that were not processed; for example, because of + // syntax errors. + TotalFailures *int64 `type:"integer"` + + // The total number of pieces that must be processed to finish the job. Each + // piece is an approximately equal portion of the endpoints. + TotalPieces *int64 `type:"integer"` + + // The number of endpoints that were processed by the job. + TotalProcessed *int64 `type:"integer"` + + // The job type. Will be 'EXPORT'. + Type *string `type:"string"` +} + +// String returns the string representation +func (s ExportJobResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportJobResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *ExportJobResponse) SetApplicationId(v string) *ExportJobResponse { + s.ApplicationId = &v + return s +} + +// SetCompletedPieces sets the CompletedPieces field's value. +func (s *ExportJobResponse) SetCompletedPieces(v int64) *ExportJobResponse { + s.CompletedPieces = &v + return s +} + +// SetCompletionDate sets the CompletionDate field's value. +func (s *ExportJobResponse) SetCompletionDate(v string) *ExportJobResponse { + s.CompletionDate = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *ExportJobResponse) SetCreationDate(v string) *ExportJobResponse { + s.CreationDate = &v + return s +} + +// SetDefinition sets the Definition field's value. +func (s *ExportJobResponse) SetDefinition(v *ExportJobResource) *ExportJobResponse { + s.Definition = v + return s +} + +// SetFailedPieces sets the FailedPieces field's value. 
+func (s *ExportJobResponse) SetFailedPieces(v int64) *ExportJobResponse { + s.FailedPieces = &v + return s +} + +// SetFailures sets the Failures field's value. +func (s *ExportJobResponse) SetFailures(v []*string) *ExportJobResponse { + s.Failures = v + return s +} + +// SetId sets the Id field's value. +func (s *ExportJobResponse) SetId(v string) *ExportJobResponse { + s.Id = &v + return s +} + +// SetJobStatus sets the JobStatus field's value. +func (s *ExportJobResponse) SetJobStatus(v string) *ExportJobResponse { + s.JobStatus = &v + return s +} + +// SetTotalFailures sets the TotalFailures field's value. +func (s *ExportJobResponse) SetTotalFailures(v int64) *ExportJobResponse { + s.TotalFailures = &v + return s +} + +// SetTotalPieces sets the TotalPieces field's value. +func (s *ExportJobResponse) SetTotalPieces(v int64) *ExportJobResponse { + s.TotalPieces = &v + return s +} + +// SetTotalProcessed sets the TotalProcessed field's value. +func (s *ExportJobResponse) SetTotalProcessed(v int64) *ExportJobResponse { + s.TotalProcessed = &v + return s +} + +// SetType sets the Type field's value. +func (s *ExportJobResponse) SetType(v string) *ExportJobResponse { + s.Type = &v + return s +} + +// Export job list. +type ExportJobsResponse struct { + _ struct{} `type:"structure"` + + // A list of export jobs for the application. + Item []*ExportJobResponse `type:"list"` + + // The string that you use in a subsequent request to get the next page of results + // in a paginated response. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ExportJobsResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportJobsResponse) GoString() string { + return s.String() +} + +// SetItem sets the Item field's value. +func (s *ExportJobsResponse) SetItem(v []*ExportJobResponse) *ExportJobsResponse { + s.Item = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ExportJobsResponse) SetNextToken(v string) *ExportJobsResponse { + s.NextToken = &v + return s +} + +// Google Cloud Messaging credentials +type GCMChannelRequest struct { + _ struct{} `type:"structure"` + + // Platform credential API key from Google. + ApiKey *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` +} + +// String returns the string representation +func (s GCMChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GCMChannelRequest) GoString() string { + return s.String() +} + +// SetApiKey sets the ApiKey field's value. +func (s *GCMChannelRequest) SetApiKey(v string) *GCMChannelRequest { + s.ApiKey = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *GCMChannelRequest) SetEnabled(v bool) *GCMChannelRequest { + s.Enabled = &v + return s +} + +// Google Cloud Messaging channel definition +type GCMChannelResponse struct { + _ struct{} `type:"structure"` + + // The ID of the application to which the channel applies. + ApplicationId *string `type:"string"` + + // When was this segment created + CreationDate *string `type:"string"` + + // The GCM API key from Google. + Credential *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // Channel ID. Not used. Present only for backwards compatibility. 
+ Id *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who last updated this entry + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // The platform type. Will be GCM + Platform *string `type:"string"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s GCMChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GCMChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GCMChannelResponse) SetApplicationId(v string) *GCMChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *GCMChannelResponse) SetCreationDate(v string) *GCMChannelResponse { + s.CreationDate = &v + return s +} + +// SetCredential sets the Credential field's value. +func (s *GCMChannelResponse) SetCredential(v string) *GCMChannelResponse { + s.Credential = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *GCMChannelResponse) SetEnabled(v bool) *GCMChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *GCMChannelResponse) SetHasCredential(v bool) *GCMChannelResponse { + s.HasCredential = &v + return s +} + +// SetId sets the Id field's value. +func (s *GCMChannelResponse) SetId(v string) *GCMChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *GCMChannelResponse) SetIsArchived(v bool) *GCMChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *GCMChannelResponse) SetLastModifiedBy(v string) *GCMChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *GCMChannelResponse) SetLastModifiedDate(v string) *GCMChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *GCMChannelResponse) SetPlatform(v string) *GCMChannelResponse { + s.Platform = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *GCMChannelResponse) SetVersion(v int64) *GCMChannelResponse { + s.Version = &v + return s +} + +// GCM Message. +type GCMMessage struct { + _ struct{} `type:"structure"` + + // The action that occurs if the user taps a push notification delivered by + // the campaign: OPEN_APP - Your app launches, or it becomes the foreground + // app if it has been sent to the background. This is the default action. DEEP_LINK + // - Uses deep linking features in iOS and Android to open your app and display + // a designated user interface within the app. URL - The default mobile browser + // on the user's device launches and opens a web page at the URL you specify. + // Possible values include: OPEN_APP | DEEP_LINK | URL + Action *string `type:"string" enum:"Action"` + + // The message body of the notification. + Body *string `type:"string"` + + // This parameter identifies a group of messages (e.g., with collapse_key: "Updates + // Available") that can be collapsed, so that only the last message gets sent + // when delivery can be resumed. This is intended to avoid sending too many + // of the same messages when the device comes back online or becomes active. 
+ CollapseKey *string `type:"string"` + + // The data payload used for a silent push. This payload is added to the notifications' + // data.pinpoint.jsonBody' object + Data map[string]*string `type:"map"` + + // The icon image name of the asset saved in your application. + IconReference *string `type:"string"` + + // The URL that points to an image used as the large icon to the notification + // content view. + ImageIconUrl *string `type:"string"` + + // The URL that points to an image used in the push notification. + ImageUrl *string `type:"string"` + + // The message priority. Amazon Pinpoint uses this value to set the FCM or GCM + // priority parameter when it sends the message. Accepts the following values:"Normal" + // - Messages might be delayed. Delivery is optimized for battery usage on the + // receiving device. Use normal priority unless immediate delivery is required."High" + // - Messages are sent immediately and might wake a sleeping device.The equivalent + // values for APNs messages are "5" and "10". Amazon Pinpoint accepts these + // values here and converts them.For more information, see About FCM Messages + // in the Firebase documentation. + Priority *string `type:"string"` + + // The Raw JSON formatted string to be used as the payload. This value overrides + // the message. + RawContent *string `type:"string"` + + // This parameter specifies the package name of the application where the registration + // tokens must match in order to receive the message. + RestrictedPackageName *string `type:"string"` + + // Indicates if the message should display on the users device. Silent pushes + // can be used for Remote Configuration and Phone Home use cases. + SilentPush *bool `type:"boolean"` + + // The URL that points to an image used as the small icon for the notification + // which will be used to represent the notification in the status bar and content + // view + SmallImageIconUrl *string `type:"string"` + + // Indicates a sound to play when the device receives the notification. Supports + // default, or the filename of a sound resource bundled in the app. Android + // sound files must reside in /res/raw/ + Sound *string `type:"string"` + + // Default message substitutions. Can be overridden by individual address substitutions. + Substitutions map[string][]*string `type:"map"` + + // The length of time (in seconds) that FCM or GCM stores and attempts to deliver + // the message. If unspecified, the value defaults to the maximum, which is + // 2,419,200 seconds (28 days). Amazon Pinpoint uses this value to set the FCM + // or GCM time_to_live parameter. + TimeToLive *int64 `type:"integer"` + + // The message title that displays above the message on the user's device. + Title *string `type:"string"` + + // The URL to open in the user's mobile browser. Used if the value for Action + // is URL. + Url *string `type:"string"` +} + +// String returns the string representation +func (s GCMMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GCMMessage) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *GCMMessage) SetAction(v string) *GCMMessage { + s.Action = &v + return s +} + +// SetBody sets the Body field's value. +func (s *GCMMessage) SetBody(v string) *GCMMessage { + s.Body = &v + return s +} + +// SetCollapseKey sets the CollapseKey field's value. 
+func (s *GCMMessage) SetCollapseKey(v string) *GCMMessage { + s.CollapseKey = &v + return s +} + +// SetData sets the Data field's value. +func (s *GCMMessage) SetData(v map[string]*string) *GCMMessage { + s.Data = v + return s +} + +// SetIconReference sets the IconReference field's value. +func (s *GCMMessage) SetIconReference(v string) *GCMMessage { + s.IconReference = &v + return s +} + +// SetImageIconUrl sets the ImageIconUrl field's value. +func (s *GCMMessage) SetImageIconUrl(v string) *GCMMessage { + s.ImageIconUrl = &v + return s +} + +// SetImageUrl sets the ImageUrl field's value. +func (s *GCMMessage) SetImageUrl(v string) *GCMMessage { + s.ImageUrl = &v + return s +} + +// SetPriority sets the Priority field's value. +func (s *GCMMessage) SetPriority(v string) *GCMMessage { + s.Priority = &v + return s +} + +// SetRawContent sets the RawContent field's value. +func (s *GCMMessage) SetRawContent(v string) *GCMMessage { + s.RawContent = &v + return s +} + +// SetRestrictedPackageName sets the RestrictedPackageName field's value. +func (s *GCMMessage) SetRestrictedPackageName(v string) *GCMMessage { + s.RestrictedPackageName = &v + return s +} + +// SetSilentPush sets the SilentPush field's value. +func (s *GCMMessage) SetSilentPush(v bool) *GCMMessage { + s.SilentPush = &v + return s +} + +// SetSmallImageIconUrl sets the SmallImageIconUrl field's value. +func (s *GCMMessage) SetSmallImageIconUrl(v string) *GCMMessage { + s.SmallImageIconUrl = &v + return s +} + +// SetSound sets the Sound field's value. +func (s *GCMMessage) SetSound(v string) *GCMMessage { + s.Sound = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *GCMMessage) SetSubstitutions(v map[string][]*string) *GCMMessage { + s.Substitutions = v + return s +} + +// SetTimeToLive sets the TimeToLive field's value. +func (s *GCMMessage) SetTimeToLive(v int64) *GCMMessage { + s.TimeToLive = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *GCMMessage) SetTitle(v string) *GCMMessage { + s.Title = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *GCMMessage) SetUrl(v string) *GCMMessage { + s.Url = &v + return s +} + +// GPS coordinates +type GPSCoordinates struct { + _ struct{} `type:"structure"` + + // Latitude + Latitude *float64 `type:"double"` + + // Longitude + Longitude *float64 `type:"double"` +} + +// String returns the string representation +func (s GPSCoordinates) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GPSCoordinates) GoString() string { + return s.String() +} + +// SetLatitude sets the Latitude field's value. +func (s *GPSCoordinates) SetLatitude(v float64) *GPSCoordinates { + s.Latitude = &v + return s +} + +// SetLongitude sets the Longitude field's value. +func (s *GPSCoordinates) SetLongitude(v float64) *GPSCoordinates { + s.Longitude = &v + return s +} + +// GPS point location dimension +type GPSPointDimension struct { + _ struct{} `type:"structure"` + + // Coordinate to measure distance from. + Coordinates *GPSCoordinates `type:"structure"` + + // Range in kilometers from the coordinate. + RangeInKilometers *float64 `type:"double"` +} + +// String returns the string representation +func (s GPSPointDimension) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GPSPointDimension) GoString() string { + return s.String() +} + +// SetCoordinates sets the Coordinates field's value. 
+func (s *GPSPointDimension) SetCoordinates(v *GPSCoordinates) *GPSPointDimension { + s.Coordinates = v + return s +} + +// SetRangeInKilometers sets the RangeInKilometers field's value. +func (s *GPSPointDimension) SetRangeInKilometers(v float64) *GPSPointDimension { + s.RangeInKilometers = &v + return s +} + +type GetAdmChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetAdmChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAdmChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetAdmChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAdmChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetAdmChannelInput) SetApplicationId(v string) *GetAdmChannelInput { + s.ApplicationId = &v + return s +} + +type GetAdmChannelOutput struct { + _ struct{} `type:"structure" payload:"ADMChannelResponse"` + + // Amazon Device Messaging channel definition. + // + // ADMChannelResponse is a required field + ADMChannelResponse *ADMChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetAdmChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAdmChannelOutput) GoString() string { + return s.String() +} + +// SetADMChannelResponse sets the ADMChannelResponse field's value. +func (s *GetAdmChannelOutput) SetADMChannelResponse(v *ADMChannelResponse) *GetAdmChannelOutput { + s.ADMChannelResponse = v + return s +} + +type GetApnsChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetApnsChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApnsChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetApnsChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetApnsChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetApnsChannelInput) SetApplicationId(v string) *GetApnsChannelInput { + s.ApplicationId = &v + return s +} + +type GetApnsChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSChannelResponse"` + + // Apple Distribution Push Notification Service channel definition. 
+ // + // APNSChannelResponse is a required field + APNSChannelResponse *APNSChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetApnsChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApnsChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSChannelResponse sets the APNSChannelResponse field's value. +func (s *GetApnsChannelOutput) SetAPNSChannelResponse(v *APNSChannelResponse) *GetApnsChannelOutput { + s.APNSChannelResponse = v + return s +} + +type GetApnsSandboxChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetApnsSandboxChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApnsSandboxChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetApnsSandboxChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetApnsSandboxChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetApnsSandboxChannelInput) SetApplicationId(v string) *GetApnsSandboxChannelInput { + s.ApplicationId = &v + return s +} + +type GetApnsSandboxChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSSandboxChannelResponse"` + + // Apple Development Push Notification Service channel definition. + // + // APNSSandboxChannelResponse is a required field + APNSSandboxChannelResponse *APNSSandboxChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetApnsSandboxChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApnsSandboxChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSSandboxChannelResponse sets the APNSSandboxChannelResponse field's value. +func (s *GetApnsSandboxChannelOutput) SetAPNSSandboxChannelResponse(v *APNSSandboxChannelResponse) *GetApnsSandboxChannelOutput { + s.APNSSandboxChannelResponse = v + return s +} + +type GetApnsVoipChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetApnsVoipChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApnsVoipChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetApnsVoipChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetApnsVoipChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. 
+func (s *GetApnsVoipChannelInput) SetApplicationId(v string) *GetApnsVoipChannelInput { + s.ApplicationId = &v + return s +} + +type GetApnsVoipChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSVoipChannelResponse"` + + // Apple VoIP Push Notification Service channel definition. + // + // APNSVoipChannelResponse is a required field + APNSVoipChannelResponse *APNSVoipChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetApnsVoipChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApnsVoipChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSVoipChannelResponse sets the APNSVoipChannelResponse field's value. +func (s *GetApnsVoipChannelOutput) SetAPNSVoipChannelResponse(v *APNSVoipChannelResponse) *GetApnsVoipChannelOutput { + s.APNSVoipChannelResponse = v + return s +} + +type GetApnsVoipSandboxChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetApnsVoipSandboxChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApnsVoipSandboxChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetApnsVoipSandboxChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetApnsVoipSandboxChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetApnsVoipSandboxChannelInput) SetApplicationId(v string) *GetApnsVoipSandboxChannelInput { + s.ApplicationId = &v + return s +} + +type GetApnsVoipSandboxChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSVoipSandboxChannelResponse"` + + // Apple VoIP Developer Push Notification Service channel definition. + // + // APNSVoipSandboxChannelResponse is a required field + APNSVoipSandboxChannelResponse *APNSVoipSandboxChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetApnsVoipSandboxChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApnsVoipSandboxChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSVoipSandboxChannelResponse sets the APNSVoipSandboxChannelResponse field's value. +func (s *GetApnsVoipSandboxChannelOutput) SetAPNSVoipSandboxChannelResponse(v *APNSVoipSandboxChannelResponse) *GetApnsVoipSandboxChannelOutput { + s.APNSVoipSandboxChannelResponse = v + return s +} + +type GetAppInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetAppInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAppInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetAppInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAppInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetAppInput) SetApplicationId(v string) *GetAppInput { + s.ApplicationId = &v + return s +} + +type GetAppOutput struct { + _ struct{} `type:"structure" payload:"ApplicationResponse"` + + // Application Response. + // + // ApplicationResponse is a required field + ApplicationResponse *ApplicationResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetAppOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAppOutput) GoString() string { + return s.String() +} + +// SetApplicationResponse sets the ApplicationResponse field's value. +func (s *GetAppOutput) SetApplicationResponse(v *ApplicationResponse) *GetAppOutput { + s.ApplicationResponse = v + return s +} + +type GetApplicationSettingsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetApplicationSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApplicationSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetApplicationSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetApplicationSettingsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetApplicationSettingsInput) SetApplicationId(v string) *GetApplicationSettingsInput { + s.ApplicationId = &v + return s +} + +type GetApplicationSettingsOutput struct { + _ struct{} `type:"structure" payload:"ApplicationSettingsResource"` + + // Application settings. + // + // ApplicationSettingsResource is a required field + ApplicationSettingsResource *ApplicationSettingsResource `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetApplicationSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApplicationSettingsOutput) GoString() string { + return s.String() +} + +// SetApplicationSettingsResource sets the ApplicationSettingsResource field's value. 
+func (s *GetApplicationSettingsOutput) SetApplicationSettingsResource(v *ApplicationSettingsResource) *GetApplicationSettingsOutput { + s.ApplicationSettingsResource = v + return s +} + +type GetAppsInput struct { + _ struct{} `type:"structure"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetAppsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAppsInput) GoString() string { + return s.String() +} + +// SetPageSize sets the PageSize field's value. +func (s *GetAppsInput) SetPageSize(v string) *GetAppsInput { + s.PageSize = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetAppsInput) SetToken(v string) *GetAppsInput { + s.Token = &v + return s +} + +type GetAppsOutput struct { + _ struct{} `type:"structure" payload:"ApplicationsResponse"` + + // Get Applications Result. + // + // ApplicationsResponse is a required field + ApplicationsResponse *ApplicationsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetAppsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAppsOutput) GoString() string { + return s.String() +} + +// SetApplicationsResponse sets the ApplicationsResponse field's value. +func (s *GetAppsOutput) SetApplicationsResponse(v *ApplicationsResponse) *GetAppsOutput { + s.ApplicationsResponse = v + return s +} + +type GetBaiduChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBaiduChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBaiduChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBaiduChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBaiduChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetBaiduChannelInput) SetApplicationId(v string) *GetBaiduChannelInput { + s.ApplicationId = &v + return s +} + +type GetBaiduChannelOutput struct { + _ struct{} `type:"structure" payload:"BaiduChannelResponse"` + + // Baidu Cloud Messaging channel definition + // + // BaiduChannelResponse is a required field + BaiduChannelResponse *BaiduChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetBaiduChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBaiduChannelOutput) GoString() string { + return s.String() +} + +// SetBaiduChannelResponse sets the BaiduChannelResponse field's value. 
+func (s *GetBaiduChannelOutput) SetBaiduChannelResponse(v *BaiduChannelResponse) *GetBaiduChannelOutput { + s.BaiduChannelResponse = v + return s +} + +type GetCampaignActivitiesInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // CampaignId is a required field + CampaignId *string `location:"uri" locationName:"campaign-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetCampaignActivitiesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignActivitiesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCampaignActivitiesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCampaignActivitiesInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.CampaignId == nil { + invalidParams.Add(request.NewErrParamRequired("CampaignId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetCampaignActivitiesInput) SetApplicationId(v string) *GetCampaignActivitiesInput { + s.ApplicationId = &v + return s +} + +// SetCampaignId sets the CampaignId field's value. +func (s *GetCampaignActivitiesInput) SetCampaignId(v string) *GetCampaignActivitiesInput { + s.CampaignId = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *GetCampaignActivitiesInput) SetPageSize(v string) *GetCampaignActivitiesInput { + s.PageSize = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetCampaignActivitiesInput) SetToken(v string) *GetCampaignActivitiesInput { + s.Token = &v + return s +} + +type GetCampaignActivitiesOutput struct { + _ struct{} `type:"structure" payload:"ActivitiesResponse"` + + // Activities for campaign. + // + // ActivitiesResponse is a required field + ActivitiesResponse *ActivitiesResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetCampaignActivitiesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignActivitiesOutput) GoString() string { + return s.String() +} + +// SetActivitiesResponse sets the ActivitiesResponse field's value. +func (s *GetCampaignActivitiesOutput) SetActivitiesResponse(v *ActivitiesResponse) *GetCampaignActivitiesOutput { + s.ActivitiesResponse = v + return s +} + +type GetCampaignInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // CampaignId is a required field + CampaignId *string `location:"uri" locationName:"campaign-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetCampaignInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetCampaignInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCampaignInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.CampaignId == nil { + invalidParams.Add(request.NewErrParamRequired("CampaignId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetCampaignInput) SetApplicationId(v string) *GetCampaignInput { + s.ApplicationId = &v + return s +} + +// SetCampaignId sets the CampaignId field's value. +func (s *GetCampaignInput) SetCampaignId(v string) *GetCampaignInput { + s.CampaignId = &v + return s +} + +type GetCampaignOutput struct { + _ struct{} `type:"structure" payload:"CampaignResponse"` + + // Campaign definition + // + // CampaignResponse is a required field + CampaignResponse *CampaignResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetCampaignOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignOutput) GoString() string { + return s.String() +} + +// SetCampaignResponse sets the CampaignResponse field's value. +func (s *GetCampaignOutput) SetCampaignResponse(v *CampaignResponse) *GetCampaignOutput { + s.CampaignResponse = v + return s +} + +type GetCampaignVersionInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // CampaignId is a required field + CampaignId *string `location:"uri" locationName:"campaign-id" type:"string" required:"true"` + + // Version is a required field + Version *string `location:"uri" locationName:"version" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetCampaignVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCampaignVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCampaignVersionInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.CampaignId == nil { + invalidParams.Add(request.NewErrParamRequired("CampaignId")) + } + if s.Version == nil { + invalidParams.Add(request.NewErrParamRequired("Version")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetCampaignVersionInput) SetApplicationId(v string) *GetCampaignVersionInput { + s.ApplicationId = &v + return s +} + +// SetCampaignId sets the CampaignId field's value. +func (s *GetCampaignVersionInput) SetCampaignId(v string) *GetCampaignVersionInput { + s.CampaignId = &v + return s +} + +// SetVersion sets the Version field's value. 
+func (s *GetCampaignVersionInput) SetVersion(v string) *GetCampaignVersionInput { + s.Version = &v + return s +} + +type GetCampaignVersionOutput struct { + _ struct{} `type:"structure" payload:"CampaignResponse"` + + // Campaign definition + // + // CampaignResponse is a required field + CampaignResponse *CampaignResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetCampaignVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignVersionOutput) GoString() string { + return s.String() +} + +// SetCampaignResponse sets the CampaignResponse field's value. +func (s *GetCampaignVersionOutput) SetCampaignResponse(v *CampaignResponse) *GetCampaignVersionOutput { + s.CampaignResponse = v + return s +} + +type GetCampaignVersionsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // CampaignId is a required field + CampaignId *string `location:"uri" locationName:"campaign-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetCampaignVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCampaignVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCampaignVersionsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.CampaignId == nil { + invalidParams.Add(request.NewErrParamRequired("CampaignId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetCampaignVersionsInput) SetApplicationId(v string) *GetCampaignVersionsInput { + s.ApplicationId = &v + return s +} + +// SetCampaignId sets the CampaignId field's value. +func (s *GetCampaignVersionsInput) SetCampaignId(v string) *GetCampaignVersionsInput { + s.CampaignId = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *GetCampaignVersionsInput) SetPageSize(v string) *GetCampaignVersionsInput { + s.PageSize = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetCampaignVersionsInput) SetToken(v string) *GetCampaignVersionsInput { + s.Token = &v + return s +} + +type GetCampaignVersionsOutput struct { + _ struct{} `type:"structure" payload:"CampaignsResponse"` + + // List of available campaigns. + // + // CampaignsResponse is a required field + CampaignsResponse *CampaignsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetCampaignVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignVersionsOutput) GoString() string { + return s.String() +} + +// SetCampaignsResponse sets the CampaignsResponse field's value. 
+func (s *GetCampaignVersionsOutput) SetCampaignsResponse(v *CampaignsResponse) *GetCampaignVersionsOutput { + s.CampaignsResponse = v + return s +} + +type GetCampaignsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetCampaignsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCampaignsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCampaignsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetCampaignsInput) SetApplicationId(v string) *GetCampaignsInput { + s.ApplicationId = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *GetCampaignsInput) SetPageSize(v string) *GetCampaignsInput { + s.PageSize = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetCampaignsInput) SetToken(v string) *GetCampaignsInput { + s.Token = &v + return s +} + +type GetCampaignsOutput struct { + _ struct{} `type:"structure" payload:"CampaignsResponse"` + + // List of available campaigns. + // + // CampaignsResponse is a required field + CampaignsResponse *CampaignsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetCampaignsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCampaignsOutput) GoString() string { + return s.String() +} + +// SetCampaignsResponse sets the CampaignsResponse field's value. +func (s *GetCampaignsOutput) SetCampaignsResponse(v *CampaignsResponse) *GetCampaignsOutput { + s.CampaignsResponse = v + return s +} + +type GetChannelsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetChannelsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetChannelsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetChannelsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetChannelsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. 
+func (s *GetChannelsInput) SetApplicationId(v string) *GetChannelsInput { + s.ApplicationId = &v + return s +} + +type GetChannelsOutput struct { + _ struct{} `type:"structure" payload:"ChannelsResponse"` + + // Get channels definition + // + // ChannelsResponse is a required field + ChannelsResponse *ChannelsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetChannelsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetChannelsOutput) GoString() string { + return s.String() +} + +// SetChannelsResponse sets the ChannelsResponse field's value. +func (s *GetChannelsOutput) SetChannelsResponse(v *ChannelsResponse) *GetChannelsOutput { + s.ChannelsResponse = v + return s +} + +type GetEmailChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetEmailChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetEmailChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetEmailChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetEmailChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetEmailChannelInput) SetApplicationId(v string) *GetEmailChannelInput { + s.ApplicationId = &v + return s +} + +type GetEmailChannelOutput struct { + _ struct{} `type:"structure" payload:"EmailChannelResponse"` + + // Email Channel Response. + // + // EmailChannelResponse is a required field + EmailChannelResponse *EmailChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetEmailChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetEmailChannelOutput) GoString() string { + return s.String() +} + +// SetEmailChannelResponse sets the EmailChannelResponse field's value. +func (s *GetEmailChannelOutput) SetEmailChannelResponse(v *EmailChannelResponse) *GetEmailChannelOutput { + s.EmailChannelResponse = v + return s +} + +type GetEndpointInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // EndpointId is a required field + EndpointId *string `location:"uri" locationName:"endpoint-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetEndpointInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.EndpointId == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetEndpointInput) SetApplicationId(v string) *GetEndpointInput { + s.ApplicationId = &v + return s +} + +// SetEndpointId sets the EndpointId field's value. +func (s *GetEndpointInput) SetEndpointId(v string) *GetEndpointInput { + s.EndpointId = &v + return s +} + +type GetEndpointOutput struct { + _ struct{} `type:"structure" payload:"EndpointResponse"` + + // Endpoint response + // + // EndpointResponse is a required field + EndpointResponse *EndpointResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetEndpointOutput) GoString() string { + return s.String() +} + +// SetEndpointResponse sets the EndpointResponse field's value. +func (s *GetEndpointOutput) SetEndpointResponse(v *EndpointResponse) *GetEndpointOutput { + s.EndpointResponse = v + return s +} + +type GetEventStreamInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetEventStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetEventStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetEventStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetEventStreamInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetEventStreamInput) SetApplicationId(v string) *GetEventStreamInput { + s.ApplicationId = &v + return s +} + +type GetEventStreamOutput struct { + _ struct{} `type:"structure" payload:"EventStream"` + + // Model for an event publishing subscription export. + // + // EventStream is a required field + EventStream *EventStream `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetEventStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetEventStreamOutput) GoString() string { + return s.String() +} + +// SetEventStream sets the EventStream field's value. 
+func (s *GetEventStreamOutput) SetEventStream(v *EventStream) *GetEventStreamOutput { + s.EventStream = v + return s +} + +type GetExportJobInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // JobId is a required field + JobId *string `location:"uri" locationName:"job-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetExportJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetExportJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetExportJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetExportJobInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetExportJobInput) SetApplicationId(v string) *GetExportJobInput { + s.ApplicationId = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *GetExportJobInput) SetJobId(v string) *GetExportJobInput { + s.JobId = &v + return s +} + +type GetExportJobOutput struct { + _ struct{} `type:"structure" payload:"ExportJobResponse"` + + // Export job response. + // + // ExportJobResponse is a required field + ExportJobResponse *ExportJobResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetExportJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetExportJobOutput) GoString() string { + return s.String() +} + +// SetExportJobResponse sets the ExportJobResponse field's value. +func (s *GetExportJobOutput) SetExportJobResponse(v *ExportJobResponse) *GetExportJobOutput { + s.ExportJobResponse = v + return s +} + +type GetExportJobsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetExportJobsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetExportJobsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetExportJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetExportJobsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetExportJobsInput) SetApplicationId(v string) *GetExportJobsInput { + s.ApplicationId = &v + return s +} + +// SetPageSize sets the PageSize field's value. 
+func (s *GetExportJobsInput) SetPageSize(v string) *GetExportJobsInput { + s.PageSize = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetExportJobsInput) SetToken(v string) *GetExportJobsInput { + s.Token = &v + return s +} + +type GetExportJobsOutput struct { + _ struct{} `type:"structure" payload:"ExportJobsResponse"` + + // Export job list. + // + // ExportJobsResponse is a required field + ExportJobsResponse *ExportJobsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetExportJobsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetExportJobsOutput) GoString() string { + return s.String() +} + +// SetExportJobsResponse sets the ExportJobsResponse field's value. +func (s *GetExportJobsOutput) SetExportJobsResponse(v *ExportJobsResponse) *GetExportJobsOutput { + s.ExportJobsResponse = v + return s +} + +type GetGcmChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetGcmChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGcmChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetGcmChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetGcmChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetGcmChannelInput) SetApplicationId(v string) *GetGcmChannelInput { + s.ApplicationId = &v + return s +} + +type GetGcmChannelOutput struct { + _ struct{} `type:"structure" payload:"GCMChannelResponse"` + + // Google Cloud Messaging channel definition + // + // GCMChannelResponse is a required field + GCMChannelResponse *GCMChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetGcmChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGcmChannelOutput) GoString() string { + return s.String() +} + +// SetGCMChannelResponse sets the GCMChannelResponse field's value. +func (s *GetGcmChannelOutput) SetGCMChannelResponse(v *GCMChannelResponse) *GetGcmChannelOutput { + s.GCMChannelResponse = v + return s +} + +type GetImportJobInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // JobId is a required field + JobId *string `location:"uri" locationName:"job-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetImportJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetImportJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetImportJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetImportJobInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.JobId == nil { + invalidParams.Add(request.NewErrParamRequired("JobId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetImportJobInput) SetApplicationId(v string) *GetImportJobInput { + s.ApplicationId = &v + return s +} + +// SetJobId sets the JobId field's value. +func (s *GetImportJobInput) SetJobId(v string) *GetImportJobInput { + s.JobId = &v + return s +} + +type GetImportJobOutput struct { + _ struct{} `type:"structure" payload:"ImportJobResponse"` + + // Import job response. + // + // ImportJobResponse is a required field + ImportJobResponse *ImportJobResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetImportJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetImportJobOutput) GoString() string { + return s.String() +} + +// SetImportJobResponse sets the ImportJobResponse field's value. +func (s *GetImportJobOutput) SetImportJobResponse(v *ImportJobResponse) *GetImportJobOutput { + s.ImportJobResponse = v + return s +} + +type GetImportJobsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetImportJobsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetImportJobsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetImportJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetImportJobsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetImportJobsInput) SetApplicationId(v string) *GetImportJobsInput { + s.ApplicationId = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *GetImportJobsInput) SetPageSize(v string) *GetImportJobsInput { + s.PageSize = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetImportJobsInput) SetToken(v string) *GetImportJobsInput { + s.Token = &v + return s +} + +type GetImportJobsOutput struct { + _ struct{} `type:"structure" payload:"ImportJobsResponse"` + + // Import job list. + // + // ImportJobsResponse is a required field + ImportJobsResponse *ImportJobsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetImportJobsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetImportJobsOutput) GoString() string { + return s.String() +} + +// SetImportJobsResponse sets the ImportJobsResponse field's value. 
+func (s *GetImportJobsOutput) SetImportJobsResponse(v *ImportJobsResponse) *GetImportJobsOutput { + s.ImportJobsResponse = v + return s +} + +type GetSegmentExportJobsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + // SegmentId is a required field + SegmentId *string `location:"uri" locationName:"segment-id" type:"string" required:"true"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetSegmentExportJobsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentExportJobsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSegmentExportJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSegmentExportJobsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SegmentId == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetSegmentExportJobsInput) SetApplicationId(v string) *GetSegmentExportJobsInput { + s.ApplicationId = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *GetSegmentExportJobsInput) SetPageSize(v string) *GetSegmentExportJobsInput { + s.PageSize = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *GetSegmentExportJobsInput) SetSegmentId(v string) *GetSegmentExportJobsInput { + s.SegmentId = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetSegmentExportJobsInput) SetToken(v string) *GetSegmentExportJobsInput { + s.Token = &v + return s +} + +type GetSegmentExportJobsOutput struct { + _ struct{} `type:"structure" payload:"ExportJobsResponse"` + + // Export job list. + // + // ExportJobsResponse is a required field + ExportJobsResponse *ExportJobsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetSegmentExportJobsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentExportJobsOutput) GoString() string { + return s.String() +} + +// SetExportJobsResponse sets the ExportJobsResponse field's value. 
+func (s *GetSegmentExportJobsOutput) SetExportJobsResponse(v *ExportJobsResponse) *GetSegmentExportJobsOutput { + s.ExportJobsResponse = v + return s +} + +type GetSegmentImportJobsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + // SegmentId is a required field + SegmentId *string `location:"uri" locationName:"segment-id" type:"string" required:"true"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetSegmentImportJobsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentImportJobsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSegmentImportJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSegmentImportJobsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SegmentId == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetSegmentImportJobsInput) SetApplicationId(v string) *GetSegmentImportJobsInput { + s.ApplicationId = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *GetSegmentImportJobsInput) SetPageSize(v string) *GetSegmentImportJobsInput { + s.PageSize = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *GetSegmentImportJobsInput) SetSegmentId(v string) *GetSegmentImportJobsInput { + s.SegmentId = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetSegmentImportJobsInput) SetToken(v string) *GetSegmentImportJobsInput { + s.Token = &v + return s +} + +type GetSegmentImportJobsOutput struct { + _ struct{} `type:"structure" payload:"ImportJobsResponse"` + + // Import job list. + // + // ImportJobsResponse is a required field + ImportJobsResponse *ImportJobsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetSegmentImportJobsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentImportJobsOutput) GoString() string { + return s.String() +} + +// SetImportJobsResponse sets the ImportJobsResponse field's value. +func (s *GetSegmentImportJobsOutput) SetImportJobsResponse(v *ImportJobsResponse) *GetSegmentImportJobsOutput { + s.ImportJobsResponse = v + return s +} + +type GetSegmentInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // SegmentId is a required field + SegmentId *string `location:"uri" locationName:"segment-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetSegmentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetSegmentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSegmentInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SegmentId == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetSegmentInput) SetApplicationId(v string) *GetSegmentInput { + s.ApplicationId = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *GetSegmentInput) SetSegmentId(v string) *GetSegmentInput { + s.SegmentId = &v + return s +} + +type GetSegmentOutput struct { + _ struct{} `type:"structure" payload:"SegmentResponse"` + + // Segment definition. + // + // SegmentResponse is a required field + SegmentResponse *SegmentResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetSegmentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentOutput) GoString() string { + return s.String() +} + +// SetSegmentResponse sets the SegmentResponse field's value. +func (s *GetSegmentOutput) SetSegmentResponse(v *SegmentResponse) *GetSegmentOutput { + s.SegmentResponse = v + return s +} + +type GetSegmentVersionInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // SegmentId is a required field + SegmentId *string `location:"uri" locationName:"segment-id" type:"string" required:"true"` + + // Version is a required field + Version *string `location:"uri" locationName:"version" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetSegmentVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSegmentVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSegmentVersionInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SegmentId == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentId")) + } + if s.Version == nil { + invalidParams.Add(request.NewErrParamRequired("Version")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetSegmentVersionInput) SetApplicationId(v string) *GetSegmentVersionInput { + s.ApplicationId = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *GetSegmentVersionInput) SetSegmentId(v string) *GetSegmentVersionInput { + s.SegmentId = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *GetSegmentVersionInput) SetVersion(v string) *GetSegmentVersionInput { + s.Version = &v + return s +} + +type GetSegmentVersionOutput struct { + _ struct{} `type:"structure" payload:"SegmentResponse"` + + // Segment definition. 
+ // + // SegmentResponse is a required field + SegmentResponse *SegmentResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetSegmentVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentVersionOutput) GoString() string { + return s.String() +} + +// SetSegmentResponse sets the SegmentResponse field's value. +func (s *GetSegmentVersionOutput) SetSegmentResponse(v *SegmentResponse) *GetSegmentVersionOutput { + s.SegmentResponse = v + return s +} + +type GetSegmentVersionsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + // SegmentId is a required field + SegmentId *string `location:"uri" locationName:"segment-id" type:"string" required:"true"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetSegmentVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSegmentVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSegmentVersionsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SegmentId == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetSegmentVersionsInput) SetApplicationId(v string) *GetSegmentVersionsInput { + s.ApplicationId = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *GetSegmentVersionsInput) SetPageSize(v string) *GetSegmentVersionsInput { + s.PageSize = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *GetSegmentVersionsInput) SetSegmentId(v string) *GetSegmentVersionsInput { + s.SegmentId = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetSegmentVersionsInput) SetToken(v string) *GetSegmentVersionsInput { + s.Token = &v + return s +} + +type GetSegmentVersionsOutput struct { + _ struct{} `type:"structure" payload:"SegmentsResponse"` + + // Segments in your account. + // + // SegmentsResponse is a required field + SegmentsResponse *SegmentsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetSegmentVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentVersionsOutput) GoString() string { + return s.String() +} + +// SetSegmentsResponse sets the SegmentsResponse field's value. 
+func (s *GetSegmentVersionsOutput) SetSegmentsResponse(v *SegmentsResponse) *GetSegmentVersionsOutput { + s.SegmentsResponse = v + return s +} + +type GetSegmentsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + PageSize *string `location:"querystring" locationName:"page-size" type:"string"` + + Token *string `location:"querystring" locationName:"token" type:"string"` +} + +// String returns the string representation +func (s GetSegmentsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSegmentsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSegmentsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetSegmentsInput) SetApplicationId(v string) *GetSegmentsInput { + s.ApplicationId = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *GetSegmentsInput) SetPageSize(v string) *GetSegmentsInput { + s.PageSize = &v + return s +} + +// SetToken sets the Token field's value. +func (s *GetSegmentsInput) SetToken(v string) *GetSegmentsInput { + s.Token = &v + return s +} + +type GetSegmentsOutput struct { + _ struct{} `type:"structure" payload:"SegmentsResponse"` + + // Segments in your account. + // + // SegmentsResponse is a required field + SegmentsResponse *SegmentsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetSegmentsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSegmentsOutput) GoString() string { + return s.String() +} + +// SetSegmentsResponse sets the SegmentsResponse field's value. +func (s *GetSegmentsOutput) SetSegmentsResponse(v *SegmentsResponse) *GetSegmentsOutput { + s.SegmentsResponse = v + return s +} + +type GetSmsChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetSmsChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSmsChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetSmsChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSmsChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetSmsChannelInput) SetApplicationId(v string) *GetSmsChannelInput { + s.ApplicationId = &v + return s +} + +type GetSmsChannelOutput struct { + _ struct{} `type:"structure" payload:"SMSChannelResponse"` + + // SMS Channel Response. 
+ // + // SMSChannelResponse is a required field + SMSChannelResponse *SMSChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetSmsChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSmsChannelOutput) GoString() string { + return s.String() +} + +// SetSMSChannelResponse sets the SMSChannelResponse field's value. +func (s *GetSmsChannelOutput) SetSMSChannelResponse(v *SMSChannelResponse) *GetSmsChannelOutput { + s.SMSChannelResponse = v + return s +} + +type GetUserEndpointsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // UserId is a required field + UserId *string `location:"uri" locationName:"user-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetUserEndpointsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetUserEndpointsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetUserEndpointsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetUserEndpointsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.UserId == nil { + invalidParams.Add(request.NewErrParamRequired("UserId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetUserEndpointsInput) SetApplicationId(v string) *GetUserEndpointsInput { + s.ApplicationId = &v + return s +} + +// SetUserId sets the UserId field's value. +func (s *GetUserEndpointsInput) SetUserId(v string) *GetUserEndpointsInput { + s.UserId = &v + return s +} + +type GetUserEndpointsOutput struct { + _ struct{} `type:"structure" payload:"EndpointsResponse"` + + // List of endpoints + // + // EndpointsResponse is a required field + EndpointsResponse *EndpointsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetUserEndpointsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetUserEndpointsOutput) GoString() string { + return s.String() +} + +// SetEndpointsResponse sets the EndpointsResponse field's value. +func (s *GetUserEndpointsOutput) SetEndpointsResponse(v *EndpointsResponse) *GetUserEndpointsOutput { + s.EndpointsResponse = v + return s +} + +type GetVoiceChannelInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetVoiceChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetVoiceChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetVoiceChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetVoiceChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetVoiceChannelInput) SetApplicationId(v string) *GetVoiceChannelInput { + s.ApplicationId = &v + return s +} + +type GetVoiceChannelOutput struct { + _ struct{} `type:"structure" payload:"VoiceChannelResponse"` + + // Voice Channel Response. + // + // VoiceChannelResponse is a required field + VoiceChannelResponse *VoiceChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GetVoiceChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetVoiceChannelOutput) GoString() string { + return s.String() +} + +// SetVoiceChannelResponse sets the VoiceChannelResponse field's value. +func (s *GetVoiceChannelOutput) SetVoiceChannelResponse(v *VoiceChannelResponse) *GetVoiceChannelOutput { + s.VoiceChannelResponse = v + return s +} + +// Import job request. +type ImportJobRequest struct { + _ struct{} `type:"structure"` + + // Sets whether the endpoints create a segment when they are imported. + DefineSegment *bool `type:"boolean"` + + // (Deprecated) Your AWS account ID, which you assigned to the ExternalID key + // in an IAM trust policy. Used by Amazon Pinpoint to assume an IAM role. This + // requirement is removed, and external IDs are not recommended for IAM roles + // assumed by Amazon Pinpoint. + ExternalId *string `type:"string"` + + // The format of the files that contain the endpoint definitions.Valid values: + // CSV, JSON + Format *string `type:"string" enum:"Format"` + + // Sets whether the endpoints are registered with Amazon Pinpoint when they + // are imported. + RegisterEndpoints *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of an IAM role that grants Amazon Pinpoint + // access to the Amazon S3 location that contains the endpoints to import. + RoleArn *string `type:"string"` + + // The URL of the S3 bucket that contains the segment information to import. + // The location can be a folder or a single file. The URL should use the following + // format: s3://bucket-name/folder-name/file-nameAmazon Pinpoint imports endpoints + // from this location and any subfolders it contains. + S3Url *string `type:"string"` + + // The ID of the segment to update if the import job is meant to update an existing + // segment. + SegmentId *string `type:"string"` + + // A custom name for the segment created by the import job. Use if DefineSegment + // is true. + SegmentName *string `type:"string"` +} + +// String returns the string representation +func (s ImportJobRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportJobRequest) GoString() string { + return s.String() +} + +// SetDefineSegment sets the DefineSegment field's value. +func (s *ImportJobRequest) SetDefineSegment(v bool) *ImportJobRequest { + s.DefineSegment = &v + return s +} + +// SetExternalId sets the ExternalId field's value. +func (s *ImportJobRequest) SetExternalId(v string) *ImportJobRequest { + s.ExternalId = &v + return s +} + +// SetFormat sets the Format field's value. 
+func (s *ImportJobRequest) SetFormat(v string) *ImportJobRequest { + s.Format = &v + return s +} + +// SetRegisterEndpoints sets the RegisterEndpoints field's value. +func (s *ImportJobRequest) SetRegisterEndpoints(v bool) *ImportJobRequest { + s.RegisterEndpoints = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *ImportJobRequest) SetRoleArn(v string) *ImportJobRequest { + s.RoleArn = &v + return s +} + +// SetS3Url sets the S3Url field's value. +func (s *ImportJobRequest) SetS3Url(v string) *ImportJobRequest { + s.S3Url = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *ImportJobRequest) SetSegmentId(v string) *ImportJobRequest { + s.SegmentId = &v + return s +} + +// SetSegmentName sets the SegmentName field's value. +func (s *ImportJobRequest) SetSegmentName(v string) *ImportJobRequest { + s.SegmentName = &v + return s +} + +// Import job resource +type ImportJobResource struct { + _ struct{} `type:"structure"` + + // Sets whether the endpoints create a segment when they are imported. + DefineSegment *bool `type:"boolean"` + + // (Deprecated) Your AWS account ID, which you assigned to the ExternalID key + // in an IAM trust policy. Used by Amazon Pinpoint to assume an IAM role. This + // requirement is removed, and external IDs are not recommended for IAM roles + // assumed by Amazon Pinpoint. + ExternalId *string `type:"string"` + + // The format of the files that contain the endpoint definitions.Valid values: + // CSV, JSON + Format *string `type:"string" enum:"Format"` + + // Sets whether the endpoints are registered with Amazon Pinpoint when they + // are imported. + RegisterEndpoints *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of an IAM role that grants Amazon Pinpoint + // access to the Amazon S3 location that contains the endpoints to import. + RoleArn *string `type:"string"` + + // The URL of the S3 bucket that contains the segment information to import. + // The location can be a folder or a single file. The URL should use the following + // format: s3://bucket-name/folder-name/file-nameAmazon Pinpoint imports endpoints + // from this location and any subfolders it contains. + S3Url *string `type:"string"` + + // The ID of the segment to update if the import job is meant to update an existing + // segment. + SegmentId *string `type:"string"` + + // A custom name for the segment created by the import job. Use if DefineSegment + // is true. + SegmentName *string `type:"string"` +} + +// String returns the string representation +func (s ImportJobResource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportJobResource) GoString() string { + return s.String() +} + +// SetDefineSegment sets the DefineSegment field's value. +func (s *ImportJobResource) SetDefineSegment(v bool) *ImportJobResource { + s.DefineSegment = &v + return s +} + +// SetExternalId sets the ExternalId field's value. +func (s *ImportJobResource) SetExternalId(v string) *ImportJobResource { + s.ExternalId = &v + return s +} + +// SetFormat sets the Format field's value. +func (s *ImportJobResource) SetFormat(v string) *ImportJobResource { + s.Format = &v + return s +} + +// SetRegisterEndpoints sets the RegisterEndpoints field's value. +func (s *ImportJobResource) SetRegisterEndpoints(v bool) *ImportJobResource { + s.RegisterEndpoints = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. 
+func (s *ImportJobResource) SetRoleArn(v string) *ImportJobResource { + s.RoleArn = &v + return s +} + +// SetS3Url sets the S3Url field's value. +func (s *ImportJobResource) SetS3Url(v string) *ImportJobResource { + s.S3Url = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *ImportJobResource) SetSegmentId(v string) *ImportJobResource { + s.SegmentId = &v + return s +} + +// SetSegmentName sets the SegmentName field's value. +func (s *ImportJobResource) SetSegmentName(v string) *ImportJobResource { + s.SegmentName = &v + return s +} + +// Import job response. +type ImportJobResponse struct { + _ struct{} `type:"structure"` + + // The unique ID of the application to which the import job applies. + ApplicationId *string `type:"string"` + + // The number of pieces that have successfully imported as of the time of the + // request. + CompletedPieces *int64 `type:"integer"` + + // The date the import job completed in ISO 8601 format. + CompletionDate *string `type:"string"` + + // The date the import job was created in ISO 8601 format. + CreationDate *string `type:"string"` + + // The import job settings. + Definition *ImportJobResource `type:"structure"` + + // The number of pieces that have failed to import as of the time of the request. + FailedPieces *int64 `type:"integer"` + + // Provides up to 100 of the first failed entries for the job, if any exist. + Failures []*string `type:"list"` + + // The unique ID of the import job. + Id *string `type:"string"` + + // The status of the import job.Valid values: CREATED, INITIALIZING, PROCESSING, + // COMPLETING, COMPLETED, FAILING, FAILEDThe job status is FAILED if one or + // more pieces failed to import. + JobStatus *string `type:"string" enum:"JobStatus"` + + // The number of endpoints that failed to import; for example, because of syntax + // errors. + TotalFailures *int64 `type:"integer"` + + // The total number of pieces that must be imported to finish the job. Each + // piece is an approximately equal portion of the endpoints to import. + TotalPieces *int64 `type:"integer"` + + // The number of endpoints that were processed by the import job. + TotalProcessed *int64 `type:"integer"` + + // The job type. Will be Import. + Type *string `type:"string"` +} + +// String returns the string representation +func (s ImportJobResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportJobResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *ImportJobResponse) SetApplicationId(v string) *ImportJobResponse { + s.ApplicationId = &v + return s +} + +// SetCompletedPieces sets the CompletedPieces field's value. +func (s *ImportJobResponse) SetCompletedPieces(v int64) *ImportJobResponse { + s.CompletedPieces = &v + return s +} + +// SetCompletionDate sets the CompletionDate field's value. +func (s *ImportJobResponse) SetCompletionDate(v string) *ImportJobResponse { + s.CompletionDate = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *ImportJobResponse) SetCreationDate(v string) *ImportJobResponse { + s.CreationDate = &v + return s +} + +// SetDefinition sets the Definition field's value. +func (s *ImportJobResponse) SetDefinition(v *ImportJobResource) *ImportJobResponse { + s.Definition = v + return s +} + +// SetFailedPieces sets the FailedPieces field's value. 
+func (s *ImportJobResponse) SetFailedPieces(v int64) *ImportJobResponse { + s.FailedPieces = &v + return s +} + +// SetFailures sets the Failures field's value. +func (s *ImportJobResponse) SetFailures(v []*string) *ImportJobResponse { + s.Failures = v + return s +} + +// SetId sets the Id field's value. +func (s *ImportJobResponse) SetId(v string) *ImportJobResponse { + s.Id = &v + return s +} + +// SetJobStatus sets the JobStatus field's value. +func (s *ImportJobResponse) SetJobStatus(v string) *ImportJobResponse { + s.JobStatus = &v + return s +} + +// SetTotalFailures sets the TotalFailures field's value. +func (s *ImportJobResponse) SetTotalFailures(v int64) *ImportJobResponse { + s.TotalFailures = &v + return s +} + +// SetTotalPieces sets the TotalPieces field's value. +func (s *ImportJobResponse) SetTotalPieces(v int64) *ImportJobResponse { + s.TotalPieces = &v + return s +} + +// SetTotalProcessed sets the TotalProcessed field's value. +func (s *ImportJobResponse) SetTotalProcessed(v int64) *ImportJobResponse { + s.TotalProcessed = &v + return s +} + +// SetType sets the Type field's value. +func (s *ImportJobResponse) SetType(v string) *ImportJobResponse { + s.Type = &v + return s +} + +// Import job list. +type ImportJobsResponse struct { + _ struct{} `type:"structure"` + + // A list of import jobs for the application. + Item []*ImportJobResponse `type:"list"` + + // The string that you use in a subsequent request to get the next page of results + // in a paginated response. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ImportJobsResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportJobsResponse) GoString() string { + return s.String() +} + +// SetItem sets the Item field's value. +func (s *ImportJobsResponse) SetItem(v []*ImportJobResponse) *ImportJobsResponse { + s.Item = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ImportJobsResponse) SetNextToken(v string) *ImportJobsResponse { + s.NextToken = &v + return s +} + +// The response that's provided after registering the endpoint. +type ItemResponse struct { + _ struct{} `type:"structure"` + + // The response received after the endpoint was accepted. + EndpointItemResponse *EndpointItemResponse `type:"structure"` + + // A multipart response object that contains a key and value for each event + // ID in the request. In each object, the event ID is the key, and an EventItemResponse + // object is the value. + EventsItemResponse map[string]*EventItemResponse `type:"map"` +} + +// String returns the string representation +func (s ItemResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ItemResponse) GoString() string { + return s.String() +} + +// SetEndpointItemResponse sets the EndpointItemResponse field's value. +func (s *ItemResponse) SetEndpointItemResponse(v *EndpointItemResponse) *ItemResponse { + s.EndpointItemResponse = v + return s +} + +// SetEventsItemResponse sets the EventsItemResponse field's value. 
+func (s *ItemResponse) SetEventsItemResponse(v map[string]*EventItemResponse) *ItemResponse { + s.EventsItemResponse = v + return s +} + +// Message to send +type Message struct { + _ struct{} `type:"structure"` + + // The action that occurs if the user taps a push notification delivered by + // the campaign:OPEN_APP - Your app launches, or it becomes the foreground app + // if it has been sent to the background. This is the default action.DEEP_LINK + // - Uses deep linking features in iOS and Android to open your app and display + // a designated user interface within the app.URL - The default mobile browser + // on the user's device launches and opens a web page at the URL you specify. + Action *string `type:"string" enum:"Action"` + + // The message body. Can include up to 140 characters. + Body *string `type:"string"` + + // The URL that points to the icon image for the push notification icon, for + // example, the app icon. + ImageIconUrl *string `type:"string"` + + // The URL that points to the small icon image for the push notification icon, + // for example, the app icon. + ImageSmallIconUrl *string `type:"string"` + + // The URL that points to an image used in the push notification. + ImageUrl *string `type:"string"` + + // The JSON payload used for a silent push. + JsonBody *string `type:"string"` + + // A URL that refers to the location of an image or video that you want to display + // in the push notification. + MediaUrl *string `type:"string"` + + // The Raw JSON formatted string to be used as the payload. This value overrides + // the message. + RawContent *string `type:"string"` + + // Indicates if the message should display on the users device.Silent pushes + // can be used for Remote Configuration and Phone Home use cases. + SilentPush *bool `type:"boolean"` + + // This parameter specifies how long (in seconds) the message should be kept + // if the service is unable to deliver the notification the first time. If the + // value is 0, it treats the notification as if it expires immediately and does + // not store the notification or attempt to redeliver it. This value is converted + // to the expiration field when sent to the service. It only applies to APNs + // and GCM + TimeToLive *int64 `type:"integer"` + + // The message title that displays above the message on the user's device. + Title *string `type:"string"` + + // The URL to open in the user's mobile browser. Used if the value for Action + // is URL. + Url *string `type:"string"` +} + +// String returns the string representation +func (s Message) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Message) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *Message) SetAction(v string) *Message { + s.Action = &v + return s +} + +// SetBody sets the Body field's value. +func (s *Message) SetBody(v string) *Message { + s.Body = &v + return s +} + +// SetImageIconUrl sets the ImageIconUrl field's value. +func (s *Message) SetImageIconUrl(v string) *Message { + s.ImageIconUrl = &v + return s +} + +// SetImageSmallIconUrl sets the ImageSmallIconUrl field's value. +func (s *Message) SetImageSmallIconUrl(v string) *Message { + s.ImageSmallIconUrl = &v + return s +} + +// SetImageUrl sets the ImageUrl field's value. +func (s *Message) SetImageUrl(v string) *Message { + s.ImageUrl = &v + return s +} + +// SetJsonBody sets the JsonBody field's value. 
+func (s *Message) SetJsonBody(v string) *Message { + s.JsonBody = &v + return s +} + +// SetMediaUrl sets the MediaUrl field's value. +func (s *Message) SetMediaUrl(v string) *Message { + s.MediaUrl = &v + return s +} + +// SetRawContent sets the RawContent field's value. +func (s *Message) SetRawContent(v string) *Message { + s.RawContent = &v + return s +} + +// SetSilentPush sets the SilentPush field's value. +func (s *Message) SetSilentPush(v bool) *Message { + s.SilentPush = &v + return s +} + +// SetTimeToLive sets the TimeToLive field's value. +func (s *Message) SetTimeToLive(v int64) *Message { + s.TimeToLive = &v + return s +} + +// SetTitle sets the Title field's value. +func (s *Message) SetTitle(v string) *Message { + s.Title = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *Message) SetUrl(v string) *Message { + s.Url = &v + return s +} + +// Simple message object. +type MessageBody struct { + _ struct{} `type:"structure"` + + // The error message that's returned from the API. + Message *string `type:"string"` + + // The unique message body ID. + RequestID *string `type:"string"` +} + +// String returns the string representation +func (s MessageBody) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MessageBody) GoString() string { + return s.String() +} + +// SetMessage sets the Message field's value. +func (s *MessageBody) SetMessage(v string) *MessageBody { + s.Message = &v + return s +} + +// SetRequestID sets the RequestID field's value. +func (s *MessageBody) SetRequestID(v string) *MessageBody { + s.RequestID = &v + return s +} + +// Message configuration for a campaign. +type MessageConfiguration struct { + _ struct{} `type:"structure"` + + // The message that the campaign delivers to ADM channels. Overrides the default + // message. + ADMMessage *Message `type:"structure"` + + // The message that the campaign delivers to APNS channels. Overrides the default + // message. + APNSMessage *Message `type:"structure"` + + // The message that the campaign delivers to Baidu channels. Overrides the default + // message. + BaiduMessage *Message `type:"structure"` + + // The default message for all channels. + DefaultMessage *Message `type:"structure"` + + // The email message configuration. + EmailMessage *CampaignEmailMessage `type:"structure"` + + // The message that the campaign delivers to GCM channels. Overrides the default + // message. + GCMMessage *Message `type:"structure"` + + // The SMS message configuration. + SMSMessage *CampaignSmsMessage `type:"structure"` +} + +// String returns the string representation +func (s MessageConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MessageConfiguration) GoString() string { + return s.String() +} + +// SetADMMessage sets the ADMMessage field's value. +func (s *MessageConfiguration) SetADMMessage(v *Message) *MessageConfiguration { + s.ADMMessage = v + return s +} + +// SetAPNSMessage sets the APNSMessage field's value. +func (s *MessageConfiguration) SetAPNSMessage(v *Message) *MessageConfiguration { + s.APNSMessage = v + return s +} + +// SetBaiduMessage sets the BaiduMessage field's value. +func (s *MessageConfiguration) SetBaiduMessage(v *Message) *MessageConfiguration { + s.BaiduMessage = v + return s +} + +// SetDefaultMessage sets the DefaultMessage field's value. 
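These generated types all follow the same pattern: every field is a pointer, and every field has a chainable `SetX` method that takes a plain value and stores its address. A minimal sketch of building a campaign `MessageConfiguration` with a default push `Message` using only the setters shown here, assuming the vendored import path `github.com/aws/aws-sdk-go/service/pinpoint`:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	// Build the default push message with the chained setters generated above.
	msg := (&pinpoint.Message{}).
		SetAction("OPEN_APP"). // OPEN_APP, DEEP_LINK, or URL
		SetTitle("Weekly digest").
		SetBody("Your weekly summary is ready.").
		SetSilentPush(false).
		SetTimeToLive(3600) // seconds; only applies to APNs and GCM

	// Attach it as the campaign-wide default; channel-specific overrides are optional.
	cfg := (&pinpoint.MessageConfiguration{}).
		SetDefaultMessage(msg).
		SetGCMMessage(msg)

	fmt.Println(cfg.String()) // String() pretty-prints via awsutil.Prettify
}
```

Because every setter returns its receiver, an entire configuration can be written as one chained expression.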
+func (s *MessageConfiguration) SetDefaultMessage(v *Message) *MessageConfiguration { + s.DefaultMessage = v + return s +} + +// SetEmailMessage sets the EmailMessage field's value. +func (s *MessageConfiguration) SetEmailMessage(v *CampaignEmailMessage) *MessageConfiguration { + s.EmailMessage = v + return s +} + +// SetGCMMessage sets the GCMMessage field's value. +func (s *MessageConfiguration) SetGCMMessage(v *Message) *MessageConfiguration { + s.GCMMessage = v + return s +} + +// SetSMSMessage sets the SMSMessage field's value. +func (s *MessageConfiguration) SetSMSMessage(v *CampaignSmsMessage) *MessageConfiguration { + s.SMSMessage = v + return s +} + +// Send message request. +type MessageRequest struct { + _ struct{} `type:"structure"` + + // A map of key-value pairs, where each key is an address and each value is + // an AddressConfiguration object. An address can be a push notification token, + // a phone number, or an email address. + Addresses map[string]*AddressConfiguration `type:"map"` + + // A map of custom attributes to attributes to be attached to the message. This + // payload is added to the push notification's 'data.pinpoint' object or added + // to the email/sms delivery receipt event attributes. + Context map[string]*string `type:"map"` + + // A map of key-value pairs, where each key is an endpoint ID and each value + // is an EndpointSendConfiguration object. Within an EndpointSendConfiguration + // object, you can tailor the message for an endpoint by specifying message + // overrides or substitutions. + Endpoints map[string]*EndpointSendConfiguration `type:"map"` + + // Message configuration. + MessageConfiguration *DirectMessageConfiguration `type:"structure"` + + // A unique ID that you can use to trace a message. This ID is visible to recipients. + TraceId *string `type:"string"` +} + +// String returns the string representation +func (s MessageRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MessageRequest) GoString() string { + return s.String() +} + +// SetAddresses sets the Addresses field's value. +func (s *MessageRequest) SetAddresses(v map[string]*AddressConfiguration) *MessageRequest { + s.Addresses = v + return s +} + +// SetContext sets the Context field's value. +func (s *MessageRequest) SetContext(v map[string]*string) *MessageRequest { + s.Context = v + return s +} + +// SetEndpoints sets the Endpoints field's value. +func (s *MessageRequest) SetEndpoints(v map[string]*EndpointSendConfiguration) *MessageRequest { + s.Endpoints = v + return s +} + +// SetMessageConfiguration sets the MessageConfiguration field's value. +func (s *MessageRequest) SetMessageConfiguration(v *DirectMessageConfiguration) *MessageRequest { + s.MessageConfiguration = v + return s +} + +// SetTraceId sets the TraceId field's value. +func (s *MessageRequest) SetTraceId(v string) *MessageRequest { + s.TraceId = &v + return s +} + +// Send message response. +type MessageResponse struct { + _ struct{} `type:"structure"` + + // Application id of the message. + ApplicationId *string `type:"string"` + + // A map containing a multi part response for each address, with the endpointId + // as the key and the result as the value. + EndpointResult map[string]*EndpointMessageResult `type:"map"` + + // Original request Id for which this message was delivered. 
+ RequestId *string `type:"string"` + + // A map containing a multi part response for each address, with the address + // as the key(Email address, phone number or push token) and the result as the + // value. + Result map[string]*MessageResult `type:"map"` +} + +// String returns the string representation +func (s MessageResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MessageResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *MessageResponse) SetApplicationId(v string) *MessageResponse { + s.ApplicationId = &v + return s +} + +// SetEndpointResult sets the EndpointResult field's value. +func (s *MessageResponse) SetEndpointResult(v map[string]*EndpointMessageResult) *MessageResponse { + s.EndpointResult = v + return s +} + +// SetRequestId sets the RequestId field's value. +func (s *MessageResponse) SetRequestId(v string) *MessageResponse { + s.RequestId = &v + return s +} + +// SetResult sets the Result field's value. +func (s *MessageResponse) SetResult(v map[string]*MessageResult) *MessageResponse { + s.Result = v + return s +} + +// The result from sending a message to an address. +type MessageResult struct { + _ struct{} `type:"structure"` + + // The delivery status of the message. Possible values:SUCCESS - The message + // was successfully delivered to the endpoint.TRANSIENT_FAILURE - A temporary + // error occurred. Amazon Pinpoint will attempt to deliver the message again + // later.FAILURE_PERMANENT - An error occurred when delivering the message to + // the endpoint. Amazon Pinpoint won't attempt to send the message again.TIMEOUT + // - The message couldn't be sent within the timeout period.QUIET_TIME - The + // local time for the endpoint was within the QuietTime for the campaign or + // app.DAILY_CAP - The endpoint has received the maximum number of messages + // it can receive within a 24-hour period.HOLDOUT - The endpoint was in a hold + // out treatment for the campaign.THROTTLED - Amazon Pinpoint throttled sending + // to this endpoint.EXPIRED - The endpoint address is expired.CAMPAIGN_CAP - + // The endpoint received the maximum number of messages allowed by the campaign.SERVICE_FAILURE + // - A service-level failure prevented Amazon Pinpoint from delivering the message.UNKNOWN + // - An unknown error occurred. + DeliveryStatus *string `type:"string" enum:"DeliveryStatus"` + + // Unique message identifier associated with the message that was sent. + MessageId *string `type:"string"` + + // Downstream service status code. + StatusCode *int64 `type:"integer"` + + // Status message for message delivery. + StatusMessage *string `type:"string"` + + // If token was updated as part of delivery. (This is GCM Specific) + UpdatedToken *string `type:"string"` +} + +// String returns the string representation +func (s MessageResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MessageResult) GoString() string { + return s.String() +} + +// SetDeliveryStatus sets the DeliveryStatus field's value. +func (s *MessageResult) SetDeliveryStatus(v string) *MessageResult { + s.DeliveryStatus = &v + return s +} + +// SetMessageId sets the MessageId field's value. +func (s *MessageResult) SetMessageId(v string) *MessageResult { + s.MessageId = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. 
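After a send, the per-address outcome comes back in `MessageResponse.Result`, keyed by address, with each `MessageResult` carrying one of the `DeliveryStatus` values documented above. A small helper sketch for walking that map (the response is assumed to come from the `SendMessages` operation defined later in this file):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

// inspectResults logs every non-successful delivery outcome in a MessageResponse.
// DeliveryStatus values (SUCCESS, TRANSIENT_FAILURE, THROTTLED, ...) are listed
// in the MessageResult documentation above.
func inspectResults(resp *pinpoint.MessageResponse) {
	for address, result := range resp.Result {
		status := aws.StringValue(result.DeliveryStatus)
		if status == "SUCCESS" {
			continue
		}
		log.Printf("address %s: status=%s code=%d message=%s",
			address, status,
			aws.Int64Value(result.StatusCode),
			aws.StringValue(result.StatusMessage))
	}
}
```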
+func (s *MessageResult) SetStatusCode(v int64) *MessageResult { + s.StatusCode = &v + return s +} + +// SetStatusMessage sets the StatusMessage field's value. +func (s *MessageResult) SetStatusMessage(v string) *MessageResult { + s.StatusMessage = &v + return s +} + +// SetUpdatedToken sets the UpdatedToken field's value. +func (s *MessageResult) SetUpdatedToken(v string) *MessageResult { + s.UpdatedToken = &v + return s +} + +// Custom metric dimension +type MetricDimension struct { + _ struct{} `type:"structure"` + + // The operator that you're using to compare metric values. Possible values: + // GREATER_THAN, LESS_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN_OR_EQUAL, or EQUAL + ComparisonOperator *string `type:"string"` + + // The value to be compared. + Value *float64 `type:"double"` +} + +// String returns the string representation +func (s MetricDimension) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricDimension) GoString() string { + return s.String() +} + +// SetComparisonOperator sets the ComparisonOperator field's value. +func (s *MetricDimension) SetComparisonOperator(v string) *MetricDimension { + s.ComparisonOperator = &v + return s +} + +// SetValue sets the Value field's value. +func (s *MetricDimension) SetValue(v float64) *MetricDimension { + s.Value = &v + return s +} + +// Phone Number Validate request. +type NumberValidateRequest struct { + _ struct{} `type:"structure"` + + // (Optional) The two-character ISO country code for the country or region where + // the phone number was originally registered. + IsoCountryCode *string `type:"string"` + + // The phone number to get information about. The phone number that you provide + // should include a country code. If the number doesn't include a valid country + // code, the operation might result in an error. + PhoneNumber *string `type:"string"` +} + +// String returns the string representation +func (s NumberValidateRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NumberValidateRequest) GoString() string { + return s.String() +} + +// SetIsoCountryCode sets the IsoCountryCode field's value. +func (s *NumberValidateRequest) SetIsoCountryCode(v string) *NumberValidateRequest { + s.IsoCountryCode = &v + return s +} + +// SetPhoneNumber sets the PhoneNumber field's value. +func (s *NumberValidateRequest) SetPhoneNumber(v string) *NumberValidateRequest { + s.PhoneNumber = &v + return s +} + +// Phone Number Validate response. +type NumberValidateResponse struct { + _ struct{} `type:"structure"` + + // The carrier or servive provider that the phone number is currently registered + // with. + Carrier *string `type:"string"` + + // The city where the phone number was originally registered. + City *string `type:"string"` + + // The cleansed phone number, shown in E.164 format. + CleansedPhoneNumberE164 *string `type:"string"` + + // The cleansed phone number, shown in the local phone number format. + CleansedPhoneNumberNational *string `type:"string"` + + // The country or region where the phone number was originally registered. + Country *string `type:"string"` + + // The two-character ISO code for the country or region where the phone number + // was originally registered. + CountryCodeIso2 *string `type:"string"` + + // The numeric code for the country or region where the phone number was originally + // registered. 
+ CountryCodeNumeric *string `type:"string"` + + // The county where the phone number was originally registered. + County *string `type:"string"` + + // The two-character code (in ISO 3166-1 alpha-2 format) for the country or + // region in the request body. + OriginalCountryCodeIso2 *string `type:"string"` + + // The phone number that you included in the request body. + OriginalPhoneNumber *string `type:"string"` + + // A description of the phone type. Possible values are MOBILE, LANDLINE, VOIP, + // INVALID, PREPAID, and OTHER. + PhoneType *string `type:"string"` + + // The phone type, represented by an integer. Possible values include 0 (MOBILE), + // 1 (LANDLINE), 2 (VOIP), 3 (INVALID), 4 (OTHER), and 5 (PREPAID). + PhoneTypeCode *int64 `type:"integer"` + + // The time zone for the location where the phone number was originally registered. + Timezone *string `type:"string"` + + // The postal code for the location where the phone number was originally registered. + ZipCode *string `type:"string"` +} + +// String returns the string representation +func (s NumberValidateResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NumberValidateResponse) GoString() string { + return s.String() +} + +// SetCarrier sets the Carrier field's value. +func (s *NumberValidateResponse) SetCarrier(v string) *NumberValidateResponse { + s.Carrier = &v + return s +} + +// SetCity sets the City field's value. +func (s *NumberValidateResponse) SetCity(v string) *NumberValidateResponse { + s.City = &v + return s +} + +// SetCleansedPhoneNumberE164 sets the CleansedPhoneNumberE164 field's value. +func (s *NumberValidateResponse) SetCleansedPhoneNumberE164(v string) *NumberValidateResponse { + s.CleansedPhoneNumberE164 = &v + return s +} + +// SetCleansedPhoneNumberNational sets the CleansedPhoneNumberNational field's value. +func (s *NumberValidateResponse) SetCleansedPhoneNumberNational(v string) *NumberValidateResponse { + s.CleansedPhoneNumberNational = &v + return s +} + +// SetCountry sets the Country field's value. +func (s *NumberValidateResponse) SetCountry(v string) *NumberValidateResponse { + s.Country = &v + return s +} + +// SetCountryCodeIso2 sets the CountryCodeIso2 field's value. +func (s *NumberValidateResponse) SetCountryCodeIso2(v string) *NumberValidateResponse { + s.CountryCodeIso2 = &v + return s +} + +// SetCountryCodeNumeric sets the CountryCodeNumeric field's value. +func (s *NumberValidateResponse) SetCountryCodeNumeric(v string) *NumberValidateResponse { + s.CountryCodeNumeric = &v + return s +} + +// SetCounty sets the County field's value. +func (s *NumberValidateResponse) SetCounty(v string) *NumberValidateResponse { + s.County = &v + return s +} + +// SetOriginalCountryCodeIso2 sets the OriginalCountryCodeIso2 field's value. +func (s *NumberValidateResponse) SetOriginalCountryCodeIso2(v string) *NumberValidateResponse { + s.OriginalCountryCodeIso2 = &v + return s +} + +// SetOriginalPhoneNumber sets the OriginalPhoneNumber field's value. +func (s *NumberValidateResponse) SetOriginalPhoneNumber(v string) *NumberValidateResponse { + s.OriginalPhoneNumber = &v + return s +} + +// SetPhoneType sets the PhoneType field's value. +func (s *NumberValidateResponse) SetPhoneType(v string) *NumberValidateResponse { + s.PhoneType = &v + return s +} + +// SetPhoneTypeCode sets the PhoneTypeCode field's value. 
+func (s *NumberValidateResponse) SetPhoneTypeCode(v int64) *NumberValidateResponse { + s.PhoneTypeCode = &v + return s +} + +// SetTimezone sets the Timezone field's value. +func (s *NumberValidateResponse) SetTimezone(v string) *NumberValidateResponse { + s.Timezone = &v + return s +} + +// SetZipCode sets the ZipCode field's value. +func (s *NumberValidateResponse) SetZipCode(v string) *NumberValidateResponse { + s.ZipCode = &v + return s +} + +type PhoneNumberValidateInput struct { + _ struct{} `type:"structure" payload:"NumberValidateRequest"` + + // Phone Number Validate request. + // + // NumberValidateRequest is a required field + NumberValidateRequest *NumberValidateRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PhoneNumberValidateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PhoneNumberValidateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PhoneNumberValidateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PhoneNumberValidateInput"} + if s.NumberValidateRequest == nil { + invalidParams.Add(request.NewErrParamRequired("NumberValidateRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNumberValidateRequest sets the NumberValidateRequest field's value. +func (s *PhoneNumberValidateInput) SetNumberValidateRequest(v *NumberValidateRequest) *PhoneNumberValidateInput { + s.NumberValidateRequest = v + return s +} + +type PhoneNumberValidateOutput struct { + _ struct{} `type:"structure" payload:"NumberValidateResponse"` + + // Phone Number Validate response. + // + // NumberValidateResponse is a required field + NumberValidateResponse *NumberValidateResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PhoneNumberValidateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PhoneNumberValidateOutput) GoString() string { + return s.String() +} + +// SetNumberValidateResponse sets the NumberValidateResponse field's value. +func (s *PhoneNumberValidateOutput) SetNumberValidateResponse(v *NumberValidateResponse) *PhoneNumberValidateOutput { + s.NumberValidateResponse = v + return s +} + +// Public endpoint attributes. +type PublicEndpoint struct { + _ struct{} `type:"structure"` + + // The unique identifier for the recipient. For example, an address could be + // a device token, email address, or mobile phone number. + Address *string `type:"string"` + + // Custom attributes that your app reports to Amazon Pinpoint. You can use these + // attributes as selection criteria when you create a segment. + Attributes map[string][]*string `type:"map"` + + // The channel type.Valid values: APNS, GCM + ChannelType *string `type:"string" enum:"ChannelType"` + + // The endpoint demographic attributes. + Demographic *EndpointDemographic `type:"structure"` + + // The date and time when the endpoint was last updated, in ISO 8601 format. + EffectiveDate *string `type:"string"` + + // The status of the endpoint. If the update fails, the value is INACTIVE. If + // the endpoint is updated successfully, the value is ACTIVE. + EndpointStatus *string `type:"string"` + + // The endpoint location attributes. + Location *EndpointLocation `type:"structure"` + + // Custom metrics that your app reports to Amazon Pinpoint. 
+ Metrics map[string]*float64 `type:"map"` + + // Indicates whether a user has opted out of receiving messages with one of + // the following values:ALL - User has opted out of all messages.NONE - Users + // has not opted out and receives all messages. + OptOut *string `type:"string"` + + // A unique identifier that is generated each time the endpoint is updated. + RequestId *string `type:"string"` + + // Custom user-specific attributes that your app reports to Amazon Pinpoint. + User *EndpointUser `type:"structure"` +} + +// String returns the string representation +func (s PublicEndpoint) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PublicEndpoint) GoString() string { + return s.String() +} + +// SetAddress sets the Address field's value. +func (s *PublicEndpoint) SetAddress(v string) *PublicEndpoint { + s.Address = &v + return s +} + +// SetAttributes sets the Attributes field's value. +func (s *PublicEndpoint) SetAttributes(v map[string][]*string) *PublicEndpoint { + s.Attributes = v + return s +} + +// SetChannelType sets the ChannelType field's value. +func (s *PublicEndpoint) SetChannelType(v string) *PublicEndpoint { + s.ChannelType = &v + return s +} + +// SetDemographic sets the Demographic field's value. +func (s *PublicEndpoint) SetDemographic(v *EndpointDemographic) *PublicEndpoint { + s.Demographic = v + return s +} + +// SetEffectiveDate sets the EffectiveDate field's value. +func (s *PublicEndpoint) SetEffectiveDate(v string) *PublicEndpoint { + s.EffectiveDate = &v + return s +} + +// SetEndpointStatus sets the EndpointStatus field's value. +func (s *PublicEndpoint) SetEndpointStatus(v string) *PublicEndpoint { + s.EndpointStatus = &v + return s +} + +// SetLocation sets the Location field's value. +func (s *PublicEndpoint) SetLocation(v *EndpointLocation) *PublicEndpoint { + s.Location = v + return s +} + +// SetMetrics sets the Metrics field's value. +func (s *PublicEndpoint) SetMetrics(v map[string]*float64) *PublicEndpoint { + s.Metrics = v + return s +} + +// SetOptOut sets the OptOut field's value. +func (s *PublicEndpoint) SetOptOut(v string) *PublicEndpoint { + s.OptOut = &v + return s +} + +// SetRequestId sets the RequestId field's value. +func (s *PublicEndpoint) SetRequestId(v string) *PublicEndpoint { + s.RequestId = &v + return s +} + +// SetUser sets the User field's value. +func (s *PublicEndpoint) SetUser(v *EndpointUser) *PublicEndpoint { + s.User = v + return s +} + +type PutEventStreamInput struct { + _ struct{} `type:"structure" payload:"WriteEventStream"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Request to save an EventStream. + // + // WriteEventStream is a required field + WriteEventStream *WriteEventStream `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PutEventStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutEventStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
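The `NumberValidateRequest`/`NumberValidateResponse` pair defined above is wrapped by `PhoneNumberValidateInput`/`Output`. A hedged sketch of calling it end to end, assuming the `PhoneNumberValidate` client method generated elsewhere in this file and the standard `session` helper from the base SDK; the country code and phone number are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	svc := pinpoint.New(session.Must(session.NewSession()))

	// Wrap the NumberValidateRequest in the PhoneNumberValidateInput payload defined above.
	input := (&pinpoint.PhoneNumberValidateInput{}).
		SetNumberValidateRequest((&pinpoint.NumberValidateRequest{}).
			SetIsoCountryCode("US").
			SetPhoneNumber("+12065550100")) // placeholder number

	out, err := svc.PhoneNumberValidate(input)
	if err != nil {
		log.Fatal(err)
	}

	resp := out.NumberValidateResponse
	fmt.Printf("type=%s carrier=%s e164=%s\n",
		aws.StringValue(resp.PhoneType),
		aws.StringValue(resp.Carrier),
		aws.StringValue(resp.CleansedPhoneNumberE164))
}
```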
+func (s *PutEventStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutEventStreamInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.WriteEventStream == nil { + invalidParams.Add(request.NewErrParamRequired("WriteEventStream")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *PutEventStreamInput) SetApplicationId(v string) *PutEventStreamInput { + s.ApplicationId = &v + return s +} + +// SetWriteEventStream sets the WriteEventStream field's value. +func (s *PutEventStreamInput) SetWriteEventStream(v *WriteEventStream) *PutEventStreamInput { + s.WriteEventStream = v + return s +} + +type PutEventStreamOutput struct { + _ struct{} `type:"structure" payload:"EventStream"` + + // Model for an event publishing subscription export. + // + // EventStream is a required field + EventStream *EventStream `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PutEventStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutEventStreamOutput) GoString() string { + return s.String() +} + +// SetEventStream sets the EventStream field's value. +func (s *PutEventStreamOutput) SetEventStream(v *EventStream) *PutEventStreamOutput { + s.EventStream = v + return s +} + +type PutEventsInput struct { + _ struct{} `type:"structure" payload:"EventsRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // A set of events to process. + // + // EventsRequest is a required field + EventsRequest *EventsRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PutEventsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutEventsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutEventsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.EventsRequest == nil { + invalidParams.Add(request.NewErrParamRequired("EventsRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *PutEventsInput) SetApplicationId(v string) *PutEventsInput { + s.ApplicationId = &v + return s +} + +// SetEventsRequest sets the EventsRequest field's value. +func (s *PutEventsInput) SetEventsRequest(v *EventsRequest) *PutEventsInput { + s.EventsRequest = v + return s +} + +type PutEventsOutput struct { + _ struct{} `type:"structure" payload:"EventsResponse"` + + // Custom messages associated with events. + // + // EventsResponse is a required field + EventsResponse *EventsResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PutEventsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutEventsOutput) GoString() string { + return s.String() +} + +// SetEventsResponse sets the EventsResponse field's value. 
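Each `*Input` wrapper also carries a client-side `Validate` method, which the SDK runs before sending, so required members left nil fail fast without a network call. A tiny illustration with the `PutEventsInput` defined above (the application ID is a placeholder):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	// EventsRequest is required but deliberately left nil here, so Validate
	// reports the missing parameter before any request would be made.
	input := (&pinpoint.PutEventsInput{}).
		SetApplicationId("exampleapplicationid") // placeholder application ID

	if err := input.Validate(); err != nil {
		fmt.Println(err) // lists EventsRequest as a missing required field
	}
}
```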
+func (s *PutEventsOutput) SetEventsResponse(v *EventsResponse) *PutEventsOutput { + s.EventsResponse = v + return s +} + +// Quiet Time +type QuietTime struct { + _ struct{} `type:"structure"` + + // The time at which quiet time should end. The value that you specify has to + // be in HH:mm format, where HH is the hour in 24-hour format (with a leading + // zero, if applicable), and mm is the minutes. For example, use 02:30 to represent + // 2:30 AM, or 14:30 to represent 2:30 PM. + End *string `type:"string"` + + // The time at which quiet time should begin. The value that you specify has + // to be in HH:mm format, where HH is the hour in 24-hour format (with a leading + // zero, if applicable), and mm is the minutes. For example, use 02:30 to represent + // 2:30 AM, or 14:30 to represent 2:30 PM. + Start *string `type:"string"` +} + +// String returns the string representation +func (s QuietTime) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QuietTime) GoString() string { + return s.String() +} + +// SetEnd sets the End field's value. +func (s *QuietTime) SetEnd(v string) *QuietTime { + s.End = &v + return s +} + +// SetStart sets the Start field's value. +func (s *QuietTime) SetStart(v string) *QuietTime { + s.Start = &v + return s +} + +// An email represented as a raw MIME message. +type RawEmail struct { + _ struct{} `type:"structure"` + + // The raw email message itself. Then entire message must be base64-encoded. + // + // Data is automatically base64 encoded/decoded by the SDK. + Data []byte `type:"blob"` +} + +// String returns the string representation +func (s RawEmail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RawEmail) GoString() string { + return s.String() +} + +// SetData sets the Data field's value. +func (s *RawEmail) SetData(v []byte) *RawEmail { + s.Data = v + return s +} + +// Define how a segment based on recency of use. +type RecencyDimension struct { + _ struct{} `type:"structure"` + + // The length of time during which users have been active or inactive with your + // app.Valid values: HR_24, DAY_7, DAY_14, DAY_30 + Duration *string `type:"string" enum:"Duration"` + + // The recency dimension type:ACTIVE - Users who have used your app within the + // specified duration are included in the segment.INACTIVE - Users who have + // not used your app within the specified duration are included in the segment. + RecencyType *string `type:"string" enum:"RecencyType"` +} + +// String returns the string representation +func (s RecencyDimension) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RecencyDimension) GoString() string { + return s.String() +} + +// SetDuration sets the Duration field's value. +func (s *RecencyDimension) SetDuration(v string) *RecencyDimension { + s.Duration = &v + return s +} + +// SetRecencyType sets the RecencyType field's value. 
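`QuietTime` and `RecencyDimension` are plain value holders, but their string fields are constrained to the formats and enum values described in the comments above. A short sketch of populating both:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	// Quiet-time boundaries use 24-hour HH:mm strings, as documented above.
	quiet := (&pinpoint.QuietTime{}).
		SetStart("22:00").
		SetEnd("06:30")

	// A recency dimension selecting endpoints active within the last 30 days.
	recency := (&pinpoint.RecencyDimension{}).
		SetDuration("DAY_30").   // HR_24, DAY_7, DAY_14, DAY_30
		SetRecencyType("ACTIVE") // ACTIVE or INACTIVE

	fmt.Println(quiet.String(), recency.String())
}
```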
+func (s *RecencyDimension) SetRecencyType(v string) *RecencyDimension { + s.RecencyType = &v + return s +} + +type RemoveAttributesInput struct { + _ struct{} `type:"structure" payload:"UpdateAttributesRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // AttributeType is a required field + AttributeType *string `location:"uri" locationName:"attribute-type" type:"string" required:"true"` + + // Update attributes request + // + // UpdateAttributesRequest is a required field + UpdateAttributesRequest *UpdateAttributesRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s RemoveAttributesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveAttributesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RemoveAttributesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveAttributesInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.AttributeType == nil { + invalidParams.Add(request.NewErrParamRequired("AttributeType")) + } + if s.UpdateAttributesRequest == nil { + invalidParams.Add(request.NewErrParamRequired("UpdateAttributesRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *RemoveAttributesInput) SetApplicationId(v string) *RemoveAttributesInput { + s.ApplicationId = &v + return s +} + +// SetAttributeType sets the AttributeType field's value. +func (s *RemoveAttributesInput) SetAttributeType(v string) *RemoveAttributesInput { + s.AttributeType = &v + return s +} + +// SetUpdateAttributesRequest sets the UpdateAttributesRequest field's value. +func (s *RemoveAttributesInput) SetUpdateAttributesRequest(v *UpdateAttributesRequest) *RemoveAttributesInput { + s.UpdateAttributesRequest = v + return s +} + +type RemoveAttributesOutput struct { + _ struct{} `type:"structure" payload:"AttributesResource"` + + // Attributes. + // + // AttributesResource is a required field + AttributesResource *AttributesResource `type:"structure" required:"true"` +} + +// String returns the string representation +func (s RemoveAttributesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveAttributesOutput) GoString() string { + return s.String() +} + +// SetAttributesResource sets the AttributesResource field's value. +func (s *RemoveAttributesOutput) SetAttributesResource(v *AttributesResource) *RemoveAttributesOutput { + s.AttributesResource = v + return s +} + +// SMS Channel Request +type SMSChannelRequest struct { + _ struct{} `type:"structure"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Sender identifier of your messages. + SenderId *string `type:"string"` + + // ShortCode registered with phone provider. + ShortCode *string `type:"string"` +} + +// String returns the string representation +func (s SMSChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SMSChannelRequest) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. 
+func (s *SMSChannelRequest) SetEnabled(v bool) *SMSChannelRequest { + s.Enabled = &v + return s +} + +// SetSenderId sets the SenderId field's value. +func (s *SMSChannelRequest) SetSenderId(v string) *SMSChannelRequest { + s.SenderId = &v + return s +} + +// SetShortCode sets the ShortCode field's value. +func (s *SMSChannelRequest) SetShortCode(v string) *SMSChannelRequest { + s.ShortCode = &v + return s +} + +// SMS Channel Response. +type SMSChannelResponse struct { + _ struct{} `type:"structure"` + + // The unique ID of the application to which the SMS channel belongs. + ApplicationId *string `type:"string"` + + // The date that the settings were last updated in ISO 8601 format. + CreationDate *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + // Not used. Retained for backwards compatibility. + HasCredential *bool `type:"boolean"` + + // Channel ID. Not used, only for backwards compatibility. + Id *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who last updated this entry + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // Platform type. Will be "SMS" + Platform *string `type:"string"` + + // Promotional messages per second that can be sent + PromotionalMessagesPerSecond *int64 `type:"integer"` + + // Sender identifier of your messages. + SenderId *string `type:"string"` + + // The short code registered with the phone provider. + ShortCode *string `type:"string"` + + // Transactional messages per second that can be sent + TransactionalMessagesPerSecond *int64 `type:"integer"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s SMSChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SMSChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *SMSChannelResponse) SetApplicationId(v string) *SMSChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *SMSChannelResponse) SetCreationDate(v string) *SMSChannelResponse { + s.CreationDate = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *SMSChannelResponse) SetEnabled(v bool) *SMSChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *SMSChannelResponse) SetHasCredential(v bool) *SMSChannelResponse { + s.HasCredential = &v + return s +} + +// SetId sets the Id field's value. +func (s *SMSChannelResponse) SetId(v string) *SMSChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *SMSChannelResponse) SetIsArchived(v bool) *SMSChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *SMSChannelResponse) SetLastModifiedBy(v string) *SMSChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *SMSChannelResponse) SetLastModifiedDate(v string) *SMSChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetPlatform sets the Platform field's value. 
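A hedged sketch of enabling the SMS channel for a project: the `SMSChannelRequest` is built with the setters above, while `UpdateSmsChannelInput` and the `UpdateSmsChannel` operation are assumed from their generated definitions elsewhere in this file; the application ID and sender ID are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	svc := pinpoint.New(session.Must(session.NewSession()))

	req := (&pinpoint.SMSChannelRequest{}).
		SetEnabled(true).
		SetSenderId("MYSENDER") // sender ID support varies by country or region

	// UpdateSmsChannelInput/UpdateSmsChannel are generated elsewhere in this file.
	out, err := svc.UpdateSmsChannel(&pinpoint.UpdateSmsChannelInput{
		ApplicationId:     aws.String("exampleapplicationid"), // placeholder
		SMSChannelRequest: req,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("channel version:", aws.Int64Value(out.SMSChannelResponse.Version))
}
```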
+func (s *SMSChannelResponse) SetPlatform(v string) *SMSChannelResponse { + s.Platform = &v + return s +} + +// SetPromotionalMessagesPerSecond sets the PromotionalMessagesPerSecond field's value. +func (s *SMSChannelResponse) SetPromotionalMessagesPerSecond(v int64) *SMSChannelResponse { + s.PromotionalMessagesPerSecond = &v + return s +} + +// SetSenderId sets the SenderId field's value. +func (s *SMSChannelResponse) SetSenderId(v string) *SMSChannelResponse { + s.SenderId = &v + return s +} + +// SetShortCode sets the ShortCode field's value. +func (s *SMSChannelResponse) SetShortCode(v string) *SMSChannelResponse { + s.ShortCode = &v + return s +} + +// SetTransactionalMessagesPerSecond sets the TransactionalMessagesPerSecond field's value. +func (s *SMSChannelResponse) SetTransactionalMessagesPerSecond(v int64) *SMSChannelResponse { + s.TransactionalMessagesPerSecond = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *SMSChannelResponse) SetVersion(v int64) *SMSChannelResponse { + s.Version = &v + return s +} + +// SMS Message. +type SMSMessage struct { + _ struct{} `type:"structure"` + + // The body of the SMS message. + Body *string `type:"string"` + + // The SMS program name that you provided to AWS Support when you requested + // your dedicated number. + Keyword *string `type:"string"` + + // Is this a transaction priority message or lower priority. + MessageType *string `type:"string" enum:"MessageType"` + + // The phone number that the SMS message originates from. Specify one of the + // dedicated long codes or short codes that you requested from AWS Support and + // that is assigned to your account. If this attribute is not specified, Amazon + // Pinpoint randomly assigns a long code. + OriginationNumber *string `type:"string"` + + // The sender ID that is shown as the message sender on the recipient's device. + // Support for sender IDs varies by country or region. + SenderId *string `type:"string"` + + // Default message substitutions. Can be overridden by individual address substitutions. + Substitutions map[string][]*string `type:"map"` +} + +// String returns the string representation +func (s SMSMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SMSMessage) GoString() string { + return s.String() +} + +// SetBody sets the Body field's value. +func (s *SMSMessage) SetBody(v string) *SMSMessage { + s.Body = &v + return s +} + +// SetKeyword sets the Keyword field's value. +func (s *SMSMessage) SetKeyword(v string) *SMSMessage { + s.Keyword = &v + return s +} + +// SetMessageType sets the MessageType field's value. +func (s *SMSMessage) SetMessageType(v string) *SMSMessage { + s.MessageType = &v + return s +} + +// SetOriginationNumber sets the OriginationNumber field's value. +func (s *SMSMessage) SetOriginationNumber(v string) *SMSMessage { + s.OriginationNumber = &v + return s +} + +// SetSenderId sets the SenderId field's value. +func (s *SMSMessage) SetSenderId(v string) *SMSMessage { + s.SenderId = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *SMSMessage) SetSubstitutions(v map[string][]*string) *SMSMessage { + s.Substitutions = v + return s +} + +// Shcedule that defines when a campaign is run. +type Schedule struct { + _ struct{} `type:"structure"` + + // The scheduled time that the campaign ends in ISO 8601 format. + EndTime *string `type:"string"` + + // Defines the type of events that can trigger the campaign. 
Used when the Frequency + // is set to EVENT. + EventFilter *CampaignEventFilter `type:"structure"` + + // How often the campaign delivers messages.Valid values:ONCEHOURLYDAILYWEEKLYMONTHLYEVENT + Frequency *string `type:"string" enum:"Frequency"` + + // Indicates whether the campaign schedule takes effect according to each user's + // local time. + IsLocalTime *bool `type:"boolean"` + + // The default quiet time for the campaign. The campaign doesn't send messages + // to endpoints during the quiet time.Note: Make sure that your endpoints include + // the Demographics.Timezone attribute if you plan to enable a quiet time for + // your campaign. If your endpoints don't include this attribute, they'll receive + // the messages that you send them, even if quiet time is enabled.When you set + // up a campaign to use quiet time, the campaign doesn't send messages during + // the time range you specified, as long as all of the following are true:- + // The endpoint includes a valid Demographic.Timezone attribute.- The current + // time in the endpoint's time zone is later than or equal to the time specified + // in the QuietTime.Start attribute for the campaign.- The current time in the + // endpoint's time zone is earlier than or equal to the time specified in the + // QuietTime.End attribute for the campaign. + QuietTime *QuietTime `type:"structure"` + + // The scheduled time that the campaign begins in ISO 8601 format. + StartTime *string `type:"string"` + + // The starting UTC offset for the schedule if the value for isLocalTime is + // trueValid values: UTCUTC+01UTC+02UTC+03UTC+03:30UTC+04UTC+04:30UTC+05UTC+05:30UTC+05:45UTC+06UTC+06:30UTC+07UTC+08UTC+09UTC+09:30UTC+10UTC+10:30UTC+11UTC+12UTC+13UTC-02UTC-03UTC-04UTC-05UTC-06UTC-07UTC-08UTC-09UTC-10UTC-11 + Timezone *string `type:"string"` +} + +// String returns the string representation +func (s Schedule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Schedule) GoString() string { + return s.String() +} + +// SetEndTime sets the EndTime field's value. +func (s *Schedule) SetEndTime(v string) *Schedule { + s.EndTime = &v + return s +} + +// SetEventFilter sets the EventFilter field's value. +func (s *Schedule) SetEventFilter(v *CampaignEventFilter) *Schedule { + s.EventFilter = v + return s +} + +// SetFrequency sets the Frequency field's value. +func (s *Schedule) SetFrequency(v string) *Schedule { + s.Frequency = &v + return s +} + +// SetIsLocalTime sets the IsLocalTime field's value. +func (s *Schedule) SetIsLocalTime(v bool) *Schedule { + s.IsLocalTime = &v + return s +} + +// SetQuietTime sets the QuietTime field's value. +func (s *Schedule) SetQuietTime(v *QuietTime) *Schedule { + s.QuietTime = v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *Schedule) SetStartTime(v string) *Schedule { + s.StartTime = &v + return s +} + +// SetTimezone sets the Timezone field's value. +func (s *Schedule) SetTimezone(v string) *Schedule { + s.Timezone = &v + return s +} + +// Segment behavior dimensions +type SegmentBehaviors struct { + _ struct{} `type:"structure"` + + // The recency of use. + Recency *RecencyDimension `type:"structure"` +} + +// String returns the string representation +func (s SegmentBehaviors) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentBehaviors) GoString() string { + return s.String() +} + +// SetRecency sets the Recency field's value. 
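A sketch of a complete `Schedule`, tying together the frequency, local-time, and quiet-time rules described in the comments above; the dates are arbitrary examples:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	// A daily campaign schedule that honours each endpoint's local time and
	// stays quiet overnight. Times are ISO 8601 / HH:mm strings as documented above.
	schedule := (&pinpoint.Schedule{}).
		SetStartTime("2018-09-01T00:00:00Z").
		SetEndTime("2018-12-01T00:00:00Z").
		SetFrequency("DAILY"). // ONCE, HOURLY, DAILY, WEEKLY, MONTHLY, EVENT
		SetIsLocalTime(true).
		SetTimezone("UTC+01").
		SetQuietTime((&pinpoint.QuietTime{}).SetStart("22:00").SetEnd("06:30"))

	fmt.Println(schedule.String())
}
```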
+func (s *SegmentBehaviors) SetRecency(v *RecencyDimension) *SegmentBehaviors { + s.Recency = v + return s +} + +// Segment demographic dimensions +type SegmentDemographics struct { + _ struct{} `type:"structure"` + + // The app version criteria for the segment. + AppVersion *SetDimension `type:"structure"` + + // The channel criteria for the segment. + Channel *SetDimension `type:"structure"` + + // The device type criteria for the segment. + DeviceType *SetDimension `type:"structure"` + + // The device make criteria for the segment. + Make *SetDimension `type:"structure"` + + // The device model criteria for the segment. + Model *SetDimension `type:"structure"` + + // The device platform criteria for the segment. + Platform *SetDimension `type:"structure"` +} + +// String returns the string representation +func (s SegmentDemographics) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentDemographics) GoString() string { + return s.String() +} + +// SetAppVersion sets the AppVersion field's value. +func (s *SegmentDemographics) SetAppVersion(v *SetDimension) *SegmentDemographics { + s.AppVersion = v + return s +} + +// SetChannel sets the Channel field's value. +func (s *SegmentDemographics) SetChannel(v *SetDimension) *SegmentDemographics { + s.Channel = v + return s +} + +// SetDeviceType sets the DeviceType field's value. +func (s *SegmentDemographics) SetDeviceType(v *SetDimension) *SegmentDemographics { + s.DeviceType = v + return s +} + +// SetMake sets the Make field's value. +func (s *SegmentDemographics) SetMake(v *SetDimension) *SegmentDemographics { + s.Make = v + return s +} + +// SetModel sets the Model field's value. +func (s *SegmentDemographics) SetModel(v *SetDimension) *SegmentDemographics { + s.Model = v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *SegmentDemographics) SetPlatform(v *SetDimension) *SegmentDemographics { + s.Platform = v + return s +} + +// Segment dimensions +type SegmentDimensions struct { + _ struct{} `type:"structure"` + + // Custom segment attributes. + Attributes map[string]*AttributeDimension `type:"map"` + + // The segment behaviors attributes. + Behavior *SegmentBehaviors `type:"structure"` + + // The segment demographics attributes. + Demographic *SegmentDemographics `type:"structure"` + + // The segment location attributes. + Location *SegmentLocation `type:"structure"` + + // Custom segment metrics. + Metrics map[string]*MetricDimension `type:"map"` + + // Custom segment user attributes. + UserAttributes map[string]*AttributeDimension `type:"map"` +} + +// String returns the string representation +func (s SegmentDimensions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentDimensions) GoString() string { + return s.String() +} + +// SetAttributes sets the Attributes field's value. +func (s *SegmentDimensions) SetAttributes(v map[string]*AttributeDimension) *SegmentDimensions { + s.Attributes = v + return s +} + +// SetBehavior sets the Behavior field's value. +func (s *SegmentDimensions) SetBehavior(v *SegmentBehaviors) *SegmentDimensions { + s.Behavior = v + return s +} + +// SetDemographic sets the Demographic field's value. +func (s *SegmentDimensions) SetDemographic(v *SegmentDemographics) *SegmentDimensions { + s.Demographic = v + return s +} + +// SetLocation sets the Location field's value. 
+func (s *SegmentDimensions) SetLocation(v *SegmentLocation) *SegmentDimensions { + s.Location = v + return s +} + +// SetMetrics sets the Metrics field's value. +func (s *SegmentDimensions) SetMetrics(v map[string]*MetricDimension) *SegmentDimensions { + s.Metrics = v + return s +} + +// SetUserAttributes sets the UserAttributes field's value. +func (s *SegmentDimensions) SetUserAttributes(v map[string]*AttributeDimension) *SegmentDimensions { + s.UserAttributes = v + return s +} + +// Segment group definition. +type SegmentGroup struct { + _ struct{} `type:"structure"` + + // List of dimensions to include or exclude. + Dimensions []*SegmentDimensions `type:"list"` + + // The base segment that you build your segment on. The source segment defines + // the starting "universe" of endpoints. When you add dimensions to the segment, + // it filters the source segment based on the dimensions that you specify. You + // can specify more than one dimensional segment. You can only specify one imported + // segment.NOTE: If you specify an imported segment for this attribute, the + // segment size estimate that appears in the Amazon Pinpoint console shows the + // size of the imported segment, without any filters applied to it. + SourceSegments []*SegmentReference `type:"list"` + + // Specify how to handle multiple source segments. For example, if you specify + // three source segments, should the resulting segment be based on any or all + // of the segments? Acceptable values: ANY or ALL. + SourceType *string `type:"string" enum:"SourceType"` + + // Specify how to handle multiple segment dimensions. For example, if you specify + // three dimensions, should the resulting segment include endpoints that are + // matched by all, any, or none of the dimensions? Acceptable values: ALL, ANY, + // or NONE. + Type *string `type:"string" enum:"Type"` +} + +// String returns the string representation +func (s SegmentGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentGroup) GoString() string { + return s.String() +} + +// SetDimensions sets the Dimensions field's value. +func (s *SegmentGroup) SetDimensions(v []*SegmentDimensions) *SegmentGroup { + s.Dimensions = v + return s +} + +// SetSourceSegments sets the SourceSegments field's value. +func (s *SegmentGroup) SetSourceSegments(v []*SegmentReference) *SegmentGroup { + s.SourceSegments = v + return s +} + +// SetSourceType sets the SourceType field's value. +func (s *SegmentGroup) SetSourceType(v string) *SegmentGroup { + s.SourceType = &v + return s +} + +// SetType sets the Type field's value. +func (s *SegmentGroup) SetType(v string) *SegmentGroup { + s.Type = &v + return s +} + +// Segment group definition. +type SegmentGroupList struct { + _ struct{} `type:"structure"` + + // A set of segment criteria to evaluate. + Groups []*SegmentGroup `type:"list"` + + // Specify how to handle multiple segment groups. For example, if the segment + // includes three segment groups, should the resulting segment include endpoints + // that are matched by all, any, or none of the segment groups you created. + // Acceptable values: ALL, ANY, or NONE. + Include *string `type:"string" enum:"Include"` +} + +// String returns the string representation +func (s SegmentGroupList) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentGroupList) GoString() string { + return s.String() +} + +// SetGroups sets the Groups field's value. 
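The segment types above compose bottom-up: dimensions go into groups, and groups into a `SegmentGroupList`. A minimal sketch of a recency-based segment definition built only from the types shown here:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	// Dimensions selecting endpoints that were active within the last 30 days.
	dims := (&pinpoint.SegmentDimensions{}).
		SetBehavior((&pinpoint.SegmentBehaviors{}).
			SetRecency((&pinpoint.RecencyDimension{}).
				SetDuration("DAY_30").
				SetRecencyType("ACTIVE")))

	// One segment group over the full endpoint "universe"; ALL dimensions must match.
	group := (&pinpoint.SegmentGroup{}).
		SetDimensions([]*pinpoint.SegmentDimensions{dims}).
		SetType("ALL")

	groups := (&pinpoint.SegmentGroupList{}).
		SetGroups([]*pinpoint.SegmentGroup{group}).
		SetInclude("ALL")

	fmt.Println(groups.String())
}
```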
+func (s *SegmentGroupList) SetGroups(v []*SegmentGroup) *SegmentGroupList { + s.Groups = v + return s +} + +// SetInclude sets the Include field's value. +func (s *SegmentGroupList) SetInclude(v string) *SegmentGroupList { + s.Include = &v + return s +} + +// Segment import definition. +type SegmentImportResource struct { + _ struct{} `type:"structure"` + + // The number of channel types in the imported segment. + ChannelCounts map[string]*int64 `type:"map"` + + // (Deprecated) Your AWS account ID, which you assigned to the ExternalID key + // in an IAM trust policy. Used by Amazon Pinpoint to assume an IAM role. This + // requirement is removed, and external IDs are not recommended for IAM roles + // assumed by Amazon Pinpoint. + ExternalId *string `type:"string"` + + // The format of the endpoint files that were imported to create this segment.Valid + // values: CSV, JSON + Format *string `type:"string" enum:"Format"` + + // The Amazon Resource Name (ARN) of an IAM role that grants Amazon Pinpoint + // access to the endpoints in Amazon S3. + RoleArn *string `type:"string"` + + // The URL of the S3 bucket that the segment was imported from. + S3Url *string `type:"string"` + + // The number of endpoints that were successfully imported to create this segment. + Size *int64 `type:"integer"` +} + +// String returns the string representation +func (s SegmentImportResource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentImportResource) GoString() string { + return s.String() +} + +// SetChannelCounts sets the ChannelCounts field's value. +func (s *SegmentImportResource) SetChannelCounts(v map[string]*int64) *SegmentImportResource { + s.ChannelCounts = v + return s +} + +// SetExternalId sets the ExternalId field's value. +func (s *SegmentImportResource) SetExternalId(v string) *SegmentImportResource { + s.ExternalId = &v + return s +} + +// SetFormat sets the Format field's value. +func (s *SegmentImportResource) SetFormat(v string) *SegmentImportResource { + s.Format = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *SegmentImportResource) SetRoleArn(v string) *SegmentImportResource { + s.RoleArn = &v + return s +} + +// SetS3Url sets the S3Url field's value. +func (s *SegmentImportResource) SetS3Url(v string) *SegmentImportResource { + s.S3Url = &v + return s +} + +// SetSize sets the Size field's value. +func (s *SegmentImportResource) SetSize(v int64) *SegmentImportResource { + s.Size = &v + return s +} + +// Segment location dimensions +type SegmentLocation struct { + _ struct{} `type:"structure"` + + // The country or region, in ISO 3166-1 alpha-2 format. + Country *SetDimension `type:"structure"` + + // The GPS Point dimension. + GPSPoint *GPSPointDimension `type:"structure"` +} + +// String returns the string representation +func (s SegmentLocation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentLocation) GoString() string { + return s.String() +} + +// SetCountry sets the Country field's value. +func (s *SegmentLocation) SetCountry(v *SetDimension) *SegmentLocation { + s.Country = v + return s +} + +// SetGPSPoint sets the GPSPoint field's value. +func (s *SegmentLocation) SetGPSPoint(v *GPSPointDimension) *SegmentLocation { + s.GPSPoint = v + return s +} + +// Segment reference. +type SegmentReference struct { + _ struct{} `type:"structure"` + + // A unique identifier for the segment. 
+ Id *string `type:"string"` + + // If specified contains a specific version of the segment included. + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s SegmentReference) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentReference) GoString() string { + return s.String() +} + +// SetId sets the Id field's value. +func (s *SegmentReference) SetId(v string) *SegmentReference { + s.Id = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *SegmentReference) SetVersion(v int64) *SegmentReference { + s.Version = &v + return s +} + +// Segment definition. +type SegmentResponse struct { + _ struct{} `type:"structure"` + + // The ID of the application that the segment applies to. + ApplicationId *string `type:"string"` + + // The date and time when the segment was created. + CreationDate *string `type:"string"` + + // The segment dimensions attributes. + Dimensions *SegmentDimensions `type:"structure"` + + // The unique segment ID. + Id *string `type:"string"` + + // The import job settings. + ImportDefinition *SegmentImportResource `type:"structure"` + + // The date and time when the segment was last modified. + LastModifiedDate *string `type:"string"` + + // The name of the segment. + Name *string `type:"string"` + + // A segment group, which consists of zero or more source segments, plus dimensions + // that are applied to those source segments. + SegmentGroups *SegmentGroupList `type:"structure"` + + // The segment type:DIMENSIONAL - A dynamic segment built from selection criteria + // based on endpoint data reported by your app. You create this type of segment + // by using the segment builder in the Amazon Pinpoint console or by making + // a POST request to the segments resource.IMPORT - A static segment built from + // an imported set of endpoint definitions. You create this type of segment + // by importing a segment in the Amazon Pinpoint console or by making a POST + // request to the jobs/import resource. + SegmentType *string `type:"string" enum:"SegmentType"` + + // The segment version number. + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s SegmentResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *SegmentResponse) SetApplicationId(v string) *SegmentResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *SegmentResponse) SetCreationDate(v string) *SegmentResponse { + s.CreationDate = &v + return s +} + +// SetDimensions sets the Dimensions field's value. +func (s *SegmentResponse) SetDimensions(v *SegmentDimensions) *SegmentResponse { + s.Dimensions = v + return s +} + +// SetId sets the Id field's value. +func (s *SegmentResponse) SetId(v string) *SegmentResponse { + s.Id = &v + return s +} + +// SetImportDefinition sets the ImportDefinition field's value. +func (s *SegmentResponse) SetImportDefinition(v *SegmentImportResource) *SegmentResponse { + s.ImportDefinition = v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *SegmentResponse) SetLastModifiedDate(v string) *SegmentResponse { + s.LastModifiedDate = &v + return s +} + +// SetName sets the Name field's value. 
+func (s *SegmentResponse) SetName(v string) *SegmentResponse { + s.Name = &v + return s +} + +// SetSegmentGroups sets the SegmentGroups field's value. +func (s *SegmentResponse) SetSegmentGroups(v *SegmentGroupList) *SegmentResponse { + s.SegmentGroups = v + return s +} + +// SetSegmentType sets the SegmentType field's value. +func (s *SegmentResponse) SetSegmentType(v string) *SegmentResponse { + s.SegmentType = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *SegmentResponse) SetVersion(v int64) *SegmentResponse { + s.Version = &v + return s +} + +// Segments in your account. +type SegmentsResponse struct { + _ struct{} `type:"structure"` + + // The list of segments. + Item []*SegmentResponse `type:"list"` + + // An identifier used to retrieve the next page of results. The token is null + // if no additional pages exist. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s SegmentsResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SegmentsResponse) GoString() string { + return s.String() +} + +// SetItem sets the Item field's value. +func (s *SegmentsResponse) SetItem(v []*SegmentResponse) *SegmentsResponse { + s.Item = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *SegmentsResponse) SetNextToken(v string) *SegmentsResponse { + s.NextToken = &v + return s +} + +type SendMessagesInput struct { + _ struct{} `type:"structure" payload:"MessageRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Send message request. + // + // MessageRequest is a required field + MessageRequest *MessageRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s SendMessagesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendMessagesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SendMessagesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SendMessagesInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.MessageRequest == nil { + invalidParams.Add(request.NewErrParamRequired("MessageRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *SendMessagesInput) SetApplicationId(v string) *SendMessagesInput { + s.ApplicationId = &v + return s +} + +// SetMessageRequest sets the MessageRequest field's value. +func (s *SendMessagesInput) SetMessageRequest(v *MessageRequest) *SendMessagesInput { + s.MessageRequest = v + return s +} + +type SendMessagesOutput struct { + _ struct{} `type:"structure" payload:"MessageResponse"` + + // Send message response. + // + // MessageResponse is a required field + MessageResponse *MessageResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s SendMessagesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendMessagesOutput) GoString() string { + return s.String() +} + +// SetMessageResponse sets the MessageResponse field's value. 
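The `SendMessagesInput` and `SendMessagesOutput` pair above back the Pinpoint `SendMessages` operation. Below is a minimal, hedged sketch of how a caller might wire them together with the generated fluent setters; the application ID is a placeholder and the `MessageRequest` payload (addresses and per-channel message configuration, defined elsewhere in this file) is left empty for brevity.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	// Standard client construction; credentials and region come from the environment.
	svc := pinpoint.New(session.Must(session.NewSession()))

	// "application-id" is a placeholder; a real MessageRequest would also carry
	// addresses/endpoints and a message configuration.
	input := &pinpoint.SendMessagesInput{}
	input.SetApplicationId("application-id")
	input.SetMessageRequest(&pinpoint.MessageRequest{})

	out, err := svc.SendMessages(input)
	if err != nil {
		fmt.Println("SendMessages failed:", err)
		return
	}
	fmt.Println(out.MessageResponse)
}
```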
+func (s *SendMessagesOutput) SetMessageResponse(v *MessageResponse) *SendMessagesOutput { + s.MessageResponse = v + return s +} + +// Send message request. +type SendUsersMessageRequest struct { + _ struct{} `type:"structure"` + + // A map of custom attribute-value pairs. Amazon Pinpoint adds these attributes + // to the data.pinpoint object in the body of the push notification payload. + // Amazon Pinpoint also provides these attributes in the events that it generates + // for users-messages deliveries. + Context map[string]*string `type:"map"` + + // Message definitions for the default message and any messages that are tailored + // for specific channels. + MessageConfiguration *DirectMessageConfiguration `type:"structure"` + + // A unique ID that you can use to trace a message. This ID is visible to recipients. + TraceId *string `type:"string"` + + // A map that associates user IDs with EndpointSendConfiguration objects. Within + // an EndpointSendConfiguration object, you can tailor the message for a user + // by specifying message overrides or substitutions. + Users map[string]*EndpointSendConfiguration `type:"map"` +} + +// String returns the string representation +func (s SendUsersMessageRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendUsersMessageRequest) GoString() string { + return s.String() +} + +// SetContext sets the Context field's value. +func (s *SendUsersMessageRequest) SetContext(v map[string]*string) *SendUsersMessageRequest { + s.Context = v + return s +} + +// SetMessageConfiguration sets the MessageConfiguration field's value. +func (s *SendUsersMessageRequest) SetMessageConfiguration(v *DirectMessageConfiguration) *SendUsersMessageRequest { + s.MessageConfiguration = v + return s +} + +// SetTraceId sets the TraceId field's value. +func (s *SendUsersMessageRequest) SetTraceId(v string) *SendUsersMessageRequest { + s.TraceId = &v + return s +} + +// SetUsers sets the Users field's value. +func (s *SendUsersMessageRequest) SetUsers(v map[string]*EndpointSendConfiguration) *SendUsersMessageRequest { + s.Users = v + return s +} + +// User send message response. +type SendUsersMessageResponse struct { + _ struct{} `type:"structure"` + + // The unique ID of the Amazon Pinpoint project used to send the message. + ApplicationId *string `type:"string"` + + // The unique ID assigned to the users-messages request. + RequestId *string `type:"string"` + + // An object that shows the endpoints that were messaged for each user. The + // object provides a list of user IDs. For each user ID, it provides the endpoint + // IDs that were messaged. For each endpoint ID, it provides an EndpointMessageResult + // object. + Result map[string]map[string]*EndpointMessageResult `type:"map"` +} + +// String returns the string representation +func (s SendUsersMessageResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendUsersMessageResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *SendUsersMessageResponse) SetApplicationId(v string) *SendUsersMessageResponse { + s.ApplicationId = &v + return s +} + +// SetRequestId sets the RequestId field's value. +func (s *SendUsersMessageResponse) SetRequestId(v string) *SendUsersMessageResponse { + s.RequestId = &v + return s +} + +// SetResult sets the Result field's value. 
+func (s *SendUsersMessageResponse) SetResult(v map[string]map[string]*EndpointMessageResult) *SendUsersMessageResponse { + s.Result = v + return s +} + +type SendUsersMessagesInput struct { + _ struct{} `type:"structure" payload:"SendUsersMessageRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Send message request. + // + // SendUsersMessageRequest is a required field + SendUsersMessageRequest *SendUsersMessageRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s SendUsersMessagesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendUsersMessagesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SendUsersMessagesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SendUsersMessagesInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SendUsersMessageRequest == nil { + invalidParams.Add(request.NewErrParamRequired("SendUsersMessageRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *SendUsersMessagesInput) SetApplicationId(v string) *SendUsersMessagesInput { + s.ApplicationId = &v + return s +} + +// SetSendUsersMessageRequest sets the SendUsersMessageRequest field's value. +func (s *SendUsersMessagesInput) SetSendUsersMessageRequest(v *SendUsersMessageRequest) *SendUsersMessagesInput { + s.SendUsersMessageRequest = v + return s +} + +type SendUsersMessagesOutput struct { + _ struct{} `type:"structure" payload:"SendUsersMessageResponse"` + + // User send message response. + // + // SendUsersMessageResponse is a required field + SendUsersMessageResponse *SendUsersMessageResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s SendUsersMessagesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendUsersMessagesOutput) GoString() string { + return s.String() +} + +// SetSendUsersMessageResponse sets the SendUsersMessageResponse field's value. +func (s *SendUsersMessagesOutput) SetSendUsersMessageResponse(v *SendUsersMessageResponse) *SendUsersMessagesOutput { + s.SendUsersMessageResponse = v + return s +} + +// Information about a session. +type Session struct { + _ struct{} `type:"structure"` + + // The duration of the session, in milliseconds. + Duration *int64 `type:"integer"` + + // A unique identifier for the session. + Id *string `type:"string"` + + // The date and time when the session began. + StartTimestamp *string `type:"string"` + + // The date and time when the session ended. + StopTimestamp *string `type:"string"` +} + +// String returns the string representation +func (s Session) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Session) GoString() string { + return s.String() +} + +// SetDuration sets the Duration field's value. +func (s *Session) SetDuration(v int64) *Session { + s.Duration = &v + return s +} + +// SetId sets the Id field's value. +func (s *Session) SetId(v string) *Session { + s.Id = &v + return s +} + +// SetStartTimestamp sets the StartTimestamp field's value. 
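`SendUsersMessageRequest` targets users rather than individual endpoints through the `Users` map shown above. The sketch below assumes a standard SDK session; the user ID, trace ID, and application ID are placeholders, and the `DirectMessageConfiguration` and per-user `EndpointSendConfiguration` values are left empty.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	svc := pinpoint.New(session.Must(session.NewSession()))

	// Per-user targeting: keys are user IDs, values carry optional per-user
	// overrides and substitutions. All IDs here are placeholders.
	req := &pinpoint.SendUsersMessageRequest{}
	req.SetMessageConfiguration(&pinpoint.DirectMessageConfiguration{})
	req.SetTraceId("example-trace-id")
	req.SetUsers(map[string]*pinpoint.EndpointSendConfiguration{
		"example-user-id": {},
	})

	input := &pinpoint.SendUsersMessagesInput{}
	input.SetApplicationId("application-id")
	input.SetSendUsersMessageRequest(req)

	out, err := svc.SendUsersMessages(input)
	if err != nil {
		log.Fatal(err)
	}
	// Result maps user IDs to endpoint IDs to per-endpoint delivery results.
	log.Println(out.SendUsersMessageResponse.Result)
}
```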
+func (s *Session) SetStartTimestamp(v string) *Session { + s.StartTimestamp = &v + return s +} + +// SetStopTimestamp sets the StopTimestamp field's value. +func (s *Session) SetStopTimestamp(v string) *Session { + s.StopTimestamp = &v + return s +} + +// Dimension specification of a segment. +type SetDimension struct { + _ struct{} `type:"structure"` + + // The type of dimension:INCLUSIVE - Endpoints that match the criteria are included + // in the segment.EXCLUSIVE - Endpoints that match the criteria are excluded + // from the segment. + DimensionType *string `type:"string" enum:"DimensionType"` + + // The criteria values for the segment dimension. Endpoints with matching attribute + // values are included or excluded from the segment, depending on the setting + // for Type. + Values []*string `type:"list"` +} + +// String returns the string representation +func (s SetDimension) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetDimension) GoString() string { + return s.String() +} + +// SetDimensionType sets the DimensionType field's value. +func (s *SetDimension) SetDimensionType(v string) *SetDimension { + s.DimensionType = &v + return s +} + +// SetValues sets the Values field's value. +func (s *SetDimension) SetValues(v []*string) *SetDimension { + s.Values = v + return s +} + +// An email composed of a subject, a text part and a html part. +type SimpleEmail struct { + _ struct{} `type:"structure"` + + // The content of the message, in HTML format. Use this for email clients that + // can process HTML. You can include clickable links, formatted text, and much + // more in an HTML message. + HtmlPart *SimpleEmailPart `type:"structure"` + + // The subject of the message: A short summary of the content, which will appear + // in the recipient's inbox. + Subject *SimpleEmailPart `type:"structure"` + + // The content of the message, in text format. Use this for text-based email + // clients, or clients on high-latency networks (such as mobile devices). + TextPart *SimpleEmailPart `type:"structure"` +} + +// String returns the string representation +func (s SimpleEmail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SimpleEmail) GoString() string { + return s.String() +} + +// SetHtmlPart sets the HtmlPart field's value. +func (s *SimpleEmail) SetHtmlPart(v *SimpleEmailPart) *SimpleEmail { + s.HtmlPart = v + return s +} + +// SetSubject sets the Subject field's value. +func (s *SimpleEmail) SetSubject(v *SimpleEmailPart) *SimpleEmail { + s.Subject = v + return s +} + +// SetTextPart sets the TextPart field's value. +func (s *SimpleEmail) SetTextPart(v *SimpleEmailPart) *SimpleEmail { + s.TextPart = v + return s +} + +// Textual email data, plus an optional character set specification. +type SimpleEmailPart struct { + _ struct{} `type:"structure"` + + // The character set of the content. + Charset *string `type:"string"` + + // The textual data of the content. + Data *string `type:"string"` +} + +// String returns the string representation +func (s SimpleEmailPart) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SimpleEmailPart) GoString() string { + return s.String() +} + +// SetCharset sets the Charset field's value. +func (s *SimpleEmailPart) SetCharset(v string) *SimpleEmailPart { + s.Charset = &v + return s +} + +// SetData sets the Data field's value. 
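`SimpleEmail` splits a message into subject, text, and HTML parts, each carried by a `SimpleEmailPart` with its own character set. A small sketch of composing one follows; the content and charset values are illustrative only, and attaching the result to an outgoing email message is outside this excerpt.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/pinpoint"
)

// buildEmail assembles a SimpleEmail from its subject, text, and HTML parts.
// The charset and content values are illustrative placeholders.
func buildEmail() *pinpoint.SimpleEmail {
	subject := &pinpoint.SimpleEmailPart{}
	subject.SetCharset("UTF-8")
	subject.SetData("Welcome!")

	text := &pinpoint.SimpleEmailPart{}
	text.SetCharset("UTF-8")
	text.SetData("Plain-text body for clients that cannot render HTML.")

	html := &pinpoint.SimpleEmailPart{}
	html.SetCharset("UTF-8")
	html.SetData("<h1>Welcome!</h1><p>Formatted HTML body.</p>")

	email := &pinpoint.SimpleEmail{}
	email.SetSubject(subject)
	email.SetTextPart(text)
	email.SetHtmlPart(html)
	return email
}

func main() {
	fmt.Println(buildEmail())
}
```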
+func (s *SimpleEmailPart) SetData(v string) *SimpleEmailPart { + s.Data = &v + return s +} + +// Treatment resource +type TreatmentResource struct { + _ struct{} `type:"structure"` + + // The unique treatment ID. + Id *string `type:"string"` + + // The message configuration settings. + MessageConfiguration *MessageConfiguration `type:"structure"` + + // The campaign schedule. + Schedule *Schedule `type:"structure"` + + // The allocated percentage of users for this treatment. + SizePercent *int64 `type:"integer"` + + // The treatment status. + State *CampaignState `type:"structure"` + + // A custom description for the treatment. + TreatmentDescription *string `type:"string"` + + // The custom name of a variation of the campaign used for A/B testing. + TreatmentName *string `type:"string"` +} + +// String returns the string representation +func (s TreatmentResource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TreatmentResource) GoString() string { + return s.String() +} + +// SetId sets the Id field's value. +func (s *TreatmentResource) SetId(v string) *TreatmentResource { + s.Id = &v + return s +} + +// SetMessageConfiguration sets the MessageConfiguration field's value. +func (s *TreatmentResource) SetMessageConfiguration(v *MessageConfiguration) *TreatmentResource { + s.MessageConfiguration = v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *TreatmentResource) SetSchedule(v *Schedule) *TreatmentResource { + s.Schedule = v + return s +} + +// SetSizePercent sets the SizePercent field's value. +func (s *TreatmentResource) SetSizePercent(v int64) *TreatmentResource { + s.SizePercent = &v + return s +} + +// SetState sets the State field's value. +func (s *TreatmentResource) SetState(v *CampaignState) *TreatmentResource { + s.State = v + return s +} + +// SetTreatmentDescription sets the TreatmentDescription field's value. +func (s *TreatmentResource) SetTreatmentDescription(v string) *TreatmentResource { + s.TreatmentDescription = &v + return s +} + +// SetTreatmentName sets the TreatmentName field's value. +func (s *TreatmentResource) SetTreatmentName(v string) *TreatmentResource { + s.TreatmentName = &v + return s +} + +type UpdateAdmChannelInput struct { + _ struct{} `type:"structure" payload:"ADMChannelRequest"` + + // Amazon Device Messaging channel definition. + // + // ADMChannelRequest is a required field + ADMChannelRequest *ADMChannelRequest `type:"structure" required:"true"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateAdmChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAdmChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateAdmChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateAdmChannelInput"} + if s.ADMChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("ADMChannelRequest")) + } + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetADMChannelRequest sets the ADMChannelRequest field's value. 
+func (s *UpdateAdmChannelInput) SetADMChannelRequest(v *ADMChannelRequest) *UpdateAdmChannelInput { + s.ADMChannelRequest = v + return s +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateAdmChannelInput) SetApplicationId(v string) *UpdateAdmChannelInput { + s.ApplicationId = &v + return s +} + +type UpdateAdmChannelOutput struct { + _ struct{} `type:"structure" payload:"ADMChannelResponse"` + + // Amazon Device Messaging channel definition. + // + // ADMChannelResponse is a required field + ADMChannelResponse *ADMChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateAdmChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAdmChannelOutput) GoString() string { + return s.String() +} + +// SetADMChannelResponse sets the ADMChannelResponse field's value. +func (s *UpdateAdmChannelOutput) SetADMChannelResponse(v *ADMChannelResponse) *UpdateAdmChannelOutput { + s.ADMChannelResponse = v + return s +} + +type UpdateApnsChannelInput struct { + _ struct{} `type:"structure" payload:"APNSChannelRequest"` + + // Apple Push Notification Service channel definition. + // + // APNSChannelRequest is a required field + APNSChannelRequest *APNSChannelRequest `type:"structure" required:"true"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateApnsChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApnsChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateApnsChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateApnsChannelInput"} + if s.APNSChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("APNSChannelRequest")) + } + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAPNSChannelRequest sets the APNSChannelRequest field's value. +func (s *UpdateApnsChannelInput) SetAPNSChannelRequest(v *APNSChannelRequest) *UpdateApnsChannelInput { + s.APNSChannelRequest = v + return s +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateApnsChannelInput) SetApplicationId(v string) *UpdateApnsChannelInput { + s.ApplicationId = &v + return s +} + +type UpdateApnsChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSChannelResponse"` + + // Apple Distribution Push Notification Service channel definition. + // + // APNSChannelResponse is a required field + APNSChannelResponse *APNSChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateApnsChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApnsChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSChannelResponse sets the APNSChannelResponse field's value. 
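Each of the `Update*ChannelInput` types in this file follows the same shape: an `ApplicationId` bound to the request URI plus a channel request payload, with `Validate()` checking that both required fields are set (the SDK also runs this validation automatically before sending the request). Below is a hedged sketch using `UpdateApnsChannelInput`; the application ID is a placeholder and the APNs credential fields on `APNSChannelRequest` (defined elsewhere in this file) are omitted.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	svc := pinpoint.New(session.Must(session.NewSession()))

	input := &pinpoint.UpdateApnsChannelInput{}
	input.SetApplicationId("application-id") // placeholder
	// Certificate or token credentials would be set on the request; those
	// fields are defined elsewhere in this file and are omitted here.
	input.SetAPNSChannelRequest(&pinpoint.APNSChannelRequest{})

	// Explicit validation surfaces missing required fields before the call.
	if err := input.Validate(); err != nil {
		fmt.Println("invalid input:", err)
		return
	}

	if _, err := svc.UpdateApnsChannel(input); err != nil {
		fmt.Println("UpdateApnsChannel failed:", err)
	}
}
```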
+func (s *UpdateApnsChannelOutput) SetAPNSChannelResponse(v *APNSChannelResponse) *UpdateApnsChannelOutput { + s.APNSChannelResponse = v + return s +} + +type UpdateApnsSandboxChannelInput struct { + _ struct{} `type:"structure" payload:"APNSSandboxChannelRequest"` + + // Apple Development Push Notification Service channel definition. + // + // APNSSandboxChannelRequest is a required field + APNSSandboxChannelRequest *APNSSandboxChannelRequest `type:"structure" required:"true"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateApnsSandboxChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApnsSandboxChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateApnsSandboxChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateApnsSandboxChannelInput"} + if s.APNSSandboxChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("APNSSandboxChannelRequest")) + } + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAPNSSandboxChannelRequest sets the APNSSandboxChannelRequest field's value. +func (s *UpdateApnsSandboxChannelInput) SetAPNSSandboxChannelRequest(v *APNSSandboxChannelRequest) *UpdateApnsSandboxChannelInput { + s.APNSSandboxChannelRequest = v + return s +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateApnsSandboxChannelInput) SetApplicationId(v string) *UpdateApnsSandboxChannelInput { + s.ApplicationId = &v + return s +} + +type UpdateApnsSandboxChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSSandboxChannelResponse"` + + // Apple Development Push Notification Service channel definition. + // + // APNSSandboxChannelResponse is a required field + APNSSandboxChannelResponse *APNSSandboxChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateApnsSandboxChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApnsSandboxChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSSandboxChannelResponse sets the APNSSandboxChannelResponse field's value. +func (s *UpdateApnsSandboxChannelOutput) SetAPNSSandboxChannelResponse(v *APNSSandboxChannelResponse) *UpdateApnsSandboxChannelOutput { + s.APNSSandboxChannelResponse = v + return s +} + +type UpdateApnsVoipChannelInput struct { + _ struct{} `type:"structure" payload:"APNSVoipChannelRequest"` + + // Apple VoIP Push Notification Service channel definition. 
+ // + // APNSVoipChannelRequest is a required field + APNSVoipChannelRequest *APNSVoipChannelRequest `type:"structure" required:"true"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateApnsVoipChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApnsVoipChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateApnsVoipChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateApnsVoipChannelInput"} + if s.APNSVoipChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("APNSVoipChannelRequest")) + } + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAPNSVoipChannelRequest sets the APNSVoipChannelRequest field's value. +func (s *UpdateApnsVoipChannelInput) SetAPNSVoipChannelRequest(v *APNSVoipChannelRequest) *UpdateApnsVoipChannelInput { + s.APNSVoipChannelRequest = v + return s +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateApnsVoipChannelInput) SetApplicationId(v string) *UpdateApnsVoipChannelInput { + s.ApplicationId = &v + return s +} + +type UpdateApnsVoipChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSVoipChannelResponse"` + + // Apple VoIP Push Notification Service channel definition. + // + // APNSVoipChannelResponse is a required field + APNSVoipChannelResponse *APNSVoipChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateApnsVoipChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApnsVoipChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSVoipChannelResponse sets the APNSVoipChannelResponse field's value. +func (s *UpdateApnsVoipChannelOutput) SetAPNSVoipChannelResponse(v *APNSVoipChannelResponse) *UpdateApnsVoipChannelOutput { + s.APNSVoipChannelResponse = v + return s +} + +type UpdateApnsVoipSandboxChannelInput struct { + _ struct{} `type:"structure" payload:"APNSVoipSandboxChannelRequest"` + + // Apple VoIP Developer Push Notification Service channel definition. + // + // APNSVoipSandboxChannelRequest is a required field + APNSVoipSandboxChannelRequest *APNSVoipSandboxChannelRequest `type:"structure" required:"true"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateApnsVoipSandboxChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApnsVoipSandboxChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateApnsVoipSandboxChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateApnsVoipSandboxChannelInput"} + if s.APNSVoipSandboxChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("APNSVoipSandboxChannelRequest")) + } + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAPNSVoipSandboxChannelRequest sets the APNSVoipSandboxChannelRequest field's value. +func (s *UpdateApnsVoipSandboxChannelInput) SetAPNSVoipSandboxChannelRequest(v *APNSVoipSandboxChannelRequest) *UpdateApnsVoipSandboxChannelInput { + s.APNSVoipSandboxChannelRequest = v + return s +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateApnsVoipSandboxChannelInput) SetApplicationId(v string) *UpdateApnsVoipSandboxChannelInput { + s.ApplicationId = &v + return s +} + +type UpdateApnsVoipSandboxChannelOutput struct { + _ struct{} `type:"structure" payload:"APNSVoipSandboxChannelResponse"` + + // Apple VoIP Developer Push Notification Service channel definition. + // + // APNSVoipSandboxChannelResponse is a required field + APNSVoipSandboxChannelResponse *APNSVoipSandboxChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateApnsVoipSandboxChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApnsVoipSandboxChannelOutput) GoString() string { + return s.String() +} + +// SetAPNSVoipSandboxChannelResponse sets the APNSVoipSandboxChannelResponse field's value. +func (s *UpdateApnsVoipSandboxChannelOutput) SetAPNSVoipSandboxChannelResponse(v *APNSVoipSandboxChannelResponse) *UpdateApnsVoipSandboxChannelOutput { + s.APNSVoipSandboxChannelResponse = v + return s +} + +type UpdateApplicationSettingsInput struct { + _ struct{} `type:"structure" payload:"WriteApplicationSettingsRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Creating application setting request + // + // WriteApplicationSettingsRequest is a required field + WriteApplicationSettingsRequest *WriteApplicationSettingsRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateApplicationSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApplicationSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateApplicationSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateApplicationSettingsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.WriteApplicationSettingsRequest == nil { + invalidParams.Add(request.NewErrParamRequired("WriteApplicationSettingsRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateApplicationSettingsInput) SetApplicationId(v string) *UpdateApplicationSettingsInput { + s.ApplicationId = &v + return s +} + +// SetWriteApplicationSettingsRequest sets the WriteApplicationSettingsRequest field's value. 
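`UpdateApplicationSettingsInput` wraps a `WriteApplicationSettingsRequest` (defined later in this file) that carries project-level defaults such as campaign limits, quiet time, and CloudWatch metrics. A minimal sketch follows, assuming a standard session and a placeholder application ID, that only toggles the CloudWatch metrics flag.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	svc := pinpoint.New(session.Must(session.NewSession()))

	// Project-level settings; campaign limits and quiet time could also be
	// attached via SetLimits/SetQuietTime (types defined elsewhere in this file).
	settings := &pinpoint.WriteApplicationSettingsRequest{}
	settings.SetCloudWatchMetricsEnabled(true)

	input := &pinpoint.UpdateApplicationSettingsInput{}
	input.SetApplicationId("application-id") // placeholder
	input.SetWriteApplicationSettingsRequest(settings)

	out, err := svc.UpdateApplicationSettings(input)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(out.ApplicationSettingsResource)
}
```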
+func (s *UpdateApplicationSettingsInput) SetWriteApplicationSettingsRequest(v *WriteApplicationSettingsRequest) *UpdateApplicationSettingsInput { + s.WriteApplicationSettingsRequest = v + return s +} + +type UpdateApplicationSettingsOutput struct { + _ struct{} `type:"structure" payload:"ApplicationSettingsResource"` + + // Application settings. + // + // ApplicationSettingsResource is a required field + ApplicationSettingsResource *ApplicationSettingsResource `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateApplicationSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApplicationSettingsOutput) GoString() string { + return s.String() +} + +// SetApplicationSettingsResource sets the ApplicationSettingsResource field's value. +func (s *UpdateApplicationSettingsOutput) SetApplicationSettingsResource(v *ApplicationSettingsResource) *UpdateApplicationSettingsOutput { + s.ApplicationSettingsResource = v + return s +} + +// Update attributes request +type UpdateAttributesRequest struct { + _ struct{} `type:"structure"` + + // The GLOB wildcard for removing the attributes in the application + Blacklist []*string `type:"list"` +} + +// String returns the string representation +func (s UpdateAttributesRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateAttributesRequest) GoString() string { + return s.String() +} + +// SetBlacklist sets the Blacklist field's value. +func (s *UpdateAttributesRequest) SetBlacklist(v []*string) *UpdateAttributesRequest { + s.Blacklist = v + return s +} + +type UpdateBaiduChannelInput struct { + _ struct{} `type:"structure" payload:"BaiduChannelRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Baidu Cloud Push credentials + // + // BaiduChannelRequest is a required field + BaiduChannelRequest *BaiduChannelRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateBaiduChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateBaiduChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateBaiduChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateBaiduChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.BaiduChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("BaiduChannelRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateBaiduChannelInput) SetApplicationId(v string) *UpdateBaiduChannelInput { + s.ApplicationId = &v + return s +} + +// SetBaiduChannelRequest sets the BaiduChannelRequest field's value. 
+func (s *UpdateBaiduChannelInput) SetBaiduChannelRequest(v *BaiduChannelRequest) *UpdateBaiduChannelInput { + s.BaiduChannelRequest = v + return s +} + +type UpdateBaiduChannelOutput struct { + _ struct{} `type:"structure" payload:"BaiduChannelResponse"` + + // Baidu Cloud Messaging channel definition + // + // BaiduChannelResponse is a required field + BaiduChannelResponse *BaiduChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateBaiduChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateBaiduChannelOutput) GoString() string { + return s.String() +} + +// SetBaiduChannelResponse sets the BaiduChannelResponse field's value. +func (s *UpdateBaiduChannelOutput) SetBaiduChannelResponse(v *BaiduChannelResponse) *UpdateBaiduChannelOutput { + s.BaiduChannelResponse = v + return s +} + +type UpdateCampaignInput struct { + _ struct{} `type:"structure" payload:"WriteCampaignRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // CampaignId is a required field + CampaignId *string `location:"uri" locationName:"campaign-id" type:"string" required:"true"` + + // Used to create a campaign. + // + // WriteCampaignRequest is a required field + WriteCampaignRequest *WriteCampaignRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateCampaignInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateCampaignInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateCampaignInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateCampaignInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.CampaignId == nil { + invalidParams.Add(request.NewErrParamRequired("CampaignId")) + } + if s.WriteCampaignRequest == nil { + invalidParams.Add(request.NewErrParamRequired("WriteCampaignRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateCampaignInput) SetApplicationId(v string) *UpdateCampaignInput { + s.ApplicationId = &v + return s +} + +// SetCampaignId sets the CampaignId field's value. +func (s *UpdateCampaignInput) SetCampaignId(v string) *UpdateCampaignInput { + s.CampaignId = &v + return s +} + +// SetWriteCampaignRequest sets the WriteCampaignRequest field's value. +func (s *UpdateCampaignInput) SetWriteCampaignRequest(v *WriteCampaignRequest) *UpdateCampaignInput { + s.WriteCampaignRequest = v + return s +} + +type UpdateCampaignOutput struct { + _ struct{} `type:"structure" payload:"CampaignResponse"` + + // Campaign definition + // + // CampaignResponse is a required field + CampaignResponse *CampaignResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateCampaignOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateCampaignOutput) GoString() string { + return s.String() +} + +// SetCampaignResponse sets the CampaignResponse field's value. 
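`UpdateCampaignInput` combines the application and campaign IDs from the URI with a `WriteCampaignRequest` payload (defined later in this file). The sketch below sets only a few of the writable fields; the IDs and values are placeholders, and schedule, message configuration, and segment targeting are left out.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pinpoint"
)

func main() {
	svc := pinpoint.New(session.Must(session.NewSession()))

	// Only a few WriteCampaignRequest fields are shown; a real update would
	// normally also set the schedule, message configuration, and segment.
	write := &pinpoint.WriteCampaignRequest{
		Name: aws.String("example-campaign"), // placeholder
	}
	write.SetDescription("Updated campaign description")
	write.SetIsPaused(true)

	input := &pinpoint.UpdateCampaignInput{}
	input.SetApplicationId("application-id") // placeholder
	input.SetCampaignId("campaign-id")       // placeholder
	input.SetWriteCampaignRequest(write)

	if _, err := svc.UpdateCampaign(input); err != nil {
		log.Fatal(err)
	}
}
```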
+func (s *UpdateCampaignOutput) SetCampaignResponse(v *CampaignResponse) *UpdateCampaignOutput { + s.CampaignResponse = v + return s +} + +type UpdateEmailChannelInput struct { + _ struct{} `type:"structure" payload:"EmailChannelRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Email Channel Request + // + // EmailChannelRequest is a required field + EmailChannelRequest *EmailChannelRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateEmailChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateEmailChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateEmailChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateEmailChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.EmailChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("EmailChannelRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateEmailChannelInput) SetApplicationId(v string) *UpdateEmailChannelInput { + s.ApplicationId = &v + return s +} + +// SetEmailChannelRequest sets the EmailChannelRequest field's value. +func (s *UpdateEmailChannelInput) SetEmailChannelRequest(v *EmailChannelRequest) *UpdateEmailChannelInput { + s.EmailChannelRequest = v + return s +} + +type UpdateEmailChannelOutput struct { + _ struct{} `type:"structure" payload:"EmailChannelResponse"` + + // Email Channel Response. + // + // EmailChannelResponse is a required field + EmailChannelResponse *EmailChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateEmailChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateEmailChannelOutput) GoString() string { + return s.String() +} + +// SetEmailChannelResponse sets the EmailChannelResponse field's value. +func (s *UpdateEmailChannelOutput) SetEmailChannelResponse(v *EmailChannelResponse) *UpdateEmailChannelOutput { + s.EmailChannelResponse = v + return s +} + +type UpdateEndpointInput struct { + _ struct{} `type:"structure" payload:"EndpointRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // EndpointId is a required field + EndpointId *string `location:"uri" locationName:"endpoint-id" type:"string" required:"true"` + + // An endpoint update request. + // + // EndpointRequest is a required field + EndpointRequest *EndpointRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateEndpointInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.EndpointId == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointId")) + } + if s.EndpointRequest == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateEndpointInput) SetApplicationId(v string) *UpdateEndpointInput { + s.ApplicationId = &v + return s +} + +// SetEndpointId sets the EndpointId field's value. +func (s *UpdateEndpointInput) SetEndpointId(v string) *UpdateEndpointInput { + s.EndpointId = &v + return s +} + +// SetEndpointRequest sets the EndpointRequest field's value. +func (s *UpdateEndpointInput) SetEndpointRequest(v *EndpointRequest) *UpdateEndpointInput { + s.EndpointRequest = v + return s +} + +type UpdateEndpointOutput struct { + _ struct{} `type:"structure" payload:"MessageBody"` + + // Simple message object. + // + // MessageBody is a required field + MessageBody *MessageBody `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateEndpointOutput) GoString() string { + return s.String() +} + +// SetMessageBody sets the MessageBody field's value. +func (s *UpdateEndpointOutput) SetMessageBody(v *MessageBody) *UpdateEndpointOutput { + s.MessageBody = v + return s +} + +type UpdateEndpointsBatchInput struct { + _ struct{} `type:"structure" payload:"EndpointBatchRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Endpoint batch update request. + // + // EndpointBatchRequest is a required field + EndpointBatchRequest *EndpointBatchRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateEndpointsBatchInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateEndpointsBatchInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateEndpointsBatchInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateEndpointsBatchInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.EndpointBatchRequest == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointBatchRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateEndpointsBatchInput) SetApplicationId(v string) *UpdateEndpointsBatchInput { + s.ApplicationId = &v + return s +} + +// SetEndpointBatchRequest sets the EndpointBatchRequest field's value. +func (s *UpdateEndpointsBatchInput) SetEndpointBatchRequest(v *EndpointBatchRequest) *UpdateEndpointsBatchInput { + s.EndpointBatchRequest = v + return s +} + +type UpdateEndpointsBatchOutput struct { + _ struct{} `type:"structure" payload:"MessageBody"` + + // Simple message object. 
+ // + // MessageBody is a required field + MessageBody *MessageBody `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateEndpointsBatchOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateEndpointsBatchOutput) GoString() string { + return s.String() +} + +// SetMessageBody sets the MessageBody field's value. +func (s *UpdateEndpointsBatchOutput) SetMessageBody(v *MessageBody) *UpdateEndpointsBatchOutput { + s.MessageBody = v + return s +} + +type UpdateGcmChannelInput struct { + _ struct{} `type:"structure" payload:"GCMChannelRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Google Cloud Messaging credentials + // + // GCMChannelRequest is a required field + GCMChannelRequest *GCMChannelRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateGcmChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGcmChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateGcmChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGcmChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.GCMChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("GCMChannelRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateGcmChannelInput) SetApplicationId(v string) *UpdateGcmChannelInput { + s.ApplicationId = &v + return s +} + +// SetGCMChannelRequest sets the GCMChannelRequest field's value. +func (s *UpdateGcmChannelInput) SetGCMChannelRequest(v *GCMChannelRequest) *UpdateGcmChannelInput { + s.GCMChannelRequest = v + return s +} + +type UpdateGcmChannelOutput struct { + _ struct{} `type:"structure" payload:"GCMChannelResponse"` + + // Google Cloud Messaging channel definition + // + // GCMChannelResponse is a required field + GCMChannelResponse *GCMChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateGcmChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGcmChannelOutput) GoString() string { + return s.String() +} + +// SetGCMChannelResponse sets the GCMChannelResponse field's value. +func (s *UpdateGcmChannelOutput) SetGCMChannelResponse(v *GCMChannelResponse) *UpdateGcmChannelOutput { + s.GCMChannelResponse = v + return s +} + +type UpdateSegmentInput struct { + _ struct{} `type:"structure" payload:"WriteSegmentRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // SegmentId is a required field + SegmentId *string `location:"uri" locationName:"segment-id" type:"string" required:"true"` + + // Segment definition. 
+ // + // WriteSegmentRequest is a required field + WriteSegmentRequest *WriteSegmentRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateSegmentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSegmentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateSegmentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSegmentInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SegmentId == nil { + invalidParams.Add(request.NewErrParamRequired("SegmentId")) + } + if s.WriteSegmentRequest == nil { + invalidParams.Add(request.NewErrParamRequired("WriteSegmentRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateSegmentInput) SetApplicationId(v string) *UpdateSegmentInput { + s.ApplicationId = &v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *UpdateSegmentInput) SetSegmentId(v string) *UpdateSegmentInput { + s.SegmentId = &v + return s +} + +// SetWriteSegmentRequest sets the WriteSegmentRequest field's value. +func (s *UpdateSegmentInput) SetWriteSegmentRequest(v *WriteSegmentRequest) *UpdateSegmentInput { + s.WriteSegmentRequest = v + return s +} + +type UpdateSegmentOutput struct { + _ struct{} `type:"structure" payload:"SegmentResponse"` + + // Segment definition. + // + // SegmentResponse is a required field + SegmentResponse *SegmentResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateSegmentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSegmentOutput) GoString() string { + return s.String() +} + +// SetSegmentResponse sets the SegmentResponse field's value. +func (s *UpdateSegmentOutput) SetSegmentResponse(v *SegmentResponse) *UpdateSegmentOutput { + s.SegmentResponse = v + return s +} + +type UpdateSmsChannelInput struct { + _ struct{} `type:"structure" payload:"SMSChannelRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // SMS Channel Request + // + // SMSChannelRequest is a required field + SMSChannelRequest *SMSChannelRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateSmsChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSmsChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateSmsChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSmsChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SMSChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("SMSChannelRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. 
+func (s *UpdateSmsChannelInput) SetApplicationId(v string) *UpdateSmsChannelInput { + s.ApplicationId = &v + return s +} + +// SetSMSChannelRequest sets the SMSChannelRequest field's value. +func (s *UpdateSmsChannelInput) SetSMSChannelRequest(v *SMSChannelRequest) *UpdateSmsChannelInput { + s.SMSChannelRequest = v + return s +} + +type UpdateSmsChannelOutput struct { + _ struct{} `type:"structure" payload:"SMSChannelResponse"` + + // SMS Channel Response. + // + // SMSChannelResponse is a required field + SMSChannelResponse *SMSChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateSmsChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSmsChannelOutput) GoString() string { + return s.String() +} + +// SetSMSChannelResponse sets the SMSChannelResponse field's value. +func (s *UpdateSmsChannelOutput) SetSMSChannelResponse(v *SMSChannelResponse) *UpdateSmsChannelOutput { + s.SMSChannelResponse = v + return s +} + +type UpdateVoiceChannelInput struct { + _ struct{} `type:"structure" payload:"VoiceChannelRequest"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"application-id" type:"string" required:"true"` + + // Voice Channel Request + // + // VoiceChannelRequest is a required field + VoiceChannelRequest *VoiceChannelRequest `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateVoiceChannelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateVoiceChannelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateVoiceChannelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateVoiceChannelInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.VoiceChannelRequest == nil { + invalidParams.Add(request.NewErrParamRequired("VoiceChannelRequest")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateVoiceChannelInput) SetApplicationId(v string) *UpdateVoiceChannelInput { + s.ApplicationId = &v + return s +} + +// SetVoiceChannelRequest sets the VoiceChannelRequest field's value. +func (s *UpdateVoiceChannelInput) SetVoiceChannelRequest(v *VoiceChannelRequest) *UpdateVoiceChannelInput { + s.VoiceChannelRequest = v + return s +} + +type UpdateVoiceChannelOutput struct { + _ struct{} `type:"structure" payload:"VoiceChannelResponse"` + + // Voice Channel Response. + // + // VoiceChannelResponse is a required field + VoiceChannelResponse *VoiceChannelResponse `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateVoiceChannelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateVoiceChannelOutput) GoString() string { + return s.String() +} + +// SetVoiceChannelResponse sets the VoiceChannelResponse field's value. +func (s *UpdateVoiceChannelOutput) SetVoiceChannelResponse(v *VoiceChannelResponse) *UpdateVoiceChannelOutput { + s.VoiceChannelResponse = v + return s +} + +// Voice Channel Request +type VoiceChannelRequest struct { + _ struct{} `type:"structure"` + + // If the channel is enabled for sending messages. 
+ Enabled *bool `type:"boolean"` +} + +// String returns the string representation +func (s VoiceChannelRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VoiceChannelRequest) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. +func (s *VoiceChannelRequest) SetEnabled(v bool) *VoiceChannelRequest { + s.Enabled = &v + return s +} + +// Voice Channel Response. +type VoiceChannelResponse struct { + _ struct{} `type:"structure"` + + // Application id + ApplicationId *string `type:"string"` + + // The date that the settings were last updated in ISO 8601 format. + CreationDate *string `type:"string"` + + // If the channel is enabled for sending messages. + Enabled *bool `type:"boolean"` + + HasCredential *bool `type:"boolean"` + + // Channel ID. Not used, only for backwards compatibility. + Id *string `type:"string"` + + // Is this channel archived + IsArchived *bool `type:"boolean"` + + // Who made the last change + LastModifiedBy *string `type:"string"` + + // Last date this was updated + LastModifiedDate *string `type:"string"` + + // Platform type. Will be "Voice" + Platform *string `type:"string"` + + // Version of channel + Version *int64 `type:"integer"` +} + +// String returns the string representation +func (s VoiceChannelResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VoiceChannelResponse) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *VoiceChannelResponse) SetApplicationId(v string) *VoiceChannelResponse { + s.ApplicationId = &v + return s +} + +// SetCreationDate sets the CreationDate field's value. +func (s *VoiceChannelResponse) SetCreationDate(v string) *VoiceChannelResponse { + s.CreationDate = &v + return s +} + +// SetEnabled sets the Enabled field's value. +func (s *VoiceChannelResponse) SetEnabled(v bool) *VoiceChannelResponse { + s.Enabled = &v + return s +} + +// SetHasCredential sets the HasCredential field's value. +func (s *VoiceChannelResponse) SetHasCredential(v bool) *VoiceChannelResponse { + s.HasCredential = &v + return s +} + +// SetId sets the Id field's value. +func (s *VoiceChannelResponse) SetId(v string) *VoiceChannelResponse { + s.Id = &v + return s +} + +// SetIsArchived sets the IsArchived field's value. +func (s *VoiceChannelResponse) SetIsArchived(v bool) *VoiceChannelResponse { + s.IsArchived = &v + return s +} + +// SetLastModifiedBy sets the LastModifiedBy field's value. +func (s *VoiceChannelResponse) SetLastModifiedBy(v string) *VoiceChannelResponse { + s.LastModifiedBy = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *VoiceChannelResponse) SetLastModifiedDate(v string) *VoiceChannelResponse { + s.LastModifiedDate = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *VoiceChannelResponse) SetPlatform(v string) *VoiceChannelResponse { + s.Platform = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *VoiceChannelResponse) SetVersion(v int64) *VoiceChannelResponse { + s.Version = &v + return s +} + +// Voice Message. +type VoiceMessage struct { + _ struct{} `type:"structure"` + + // The message body of the notification, the email body or the text message. 
+ Body *string `type:"string"` + + // Language of sent message + LanguageCode *string `type:"string"` + + // Is the number from the pool or messaging service to send from. + OriginationNumber *string `type:"string"` + + // Default message substitutions. Can be overridden by individual address substitutions. + Substitutions map[string][]*string `type:"map"` + + // Voice ID of sent message. + VoiceId *string `type:"string"` +} + +// String returns the string representation +func (s VoiceMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VoiceMessage) GoString() string { + return s.String() +} + +// SetBody sets the Body field's value. +func (s *VoiceMessage) SetBody(v string) *VoiceMessage { + s.Body = &v + return s +} + +// SetLanguageCode sets the LanguageCode field's value. +func (s *VoiceMessage) SetLanguageCode(v string) *VoiceMessage { + s.LanguageCode = &v + return s +} + +// SetOriginationNumber sets the OriginationNumber field's value. +func (s *VoiceMessage) SetOriginationNumber(v string) *VoiceMessage { + s.OriginationNumber = &v + return s +} + +// SetSubstitutions sets the Substitutions field's value. +func (s *VoiceMessage) SetSubstitutions(v map[string][]*string) *VoiceMessage { + s.Substitutions = v + return s +} + +// SetVoiceId sets the VoiceId field's value. +func (s *VoiceMessage) SetVoiceId(v string) *VoiceMessage { + s.VoiceId = &v + return s +} + +// Creating application setting request +type WriteApplicationSettingsRequest struct { + _ struct{} `type:"structure"` + + // Default campaign hook information. + CampaignHook *CampaignHook `type:"structure"` + + // The CloudWatchMetrics settings for the app. + CloudWatchMetricsEnabled *bool `type:"boolean"` + + // The limits that apply to each campaign in the project by default. Campaigns + // can also have their own limits, which override the settings at the project + // level. + Limits *CampaignLimits `type:"structure"` + + // The default quiet time for the app. Campaigns in the app don't send messages + // to endpoints during the quiet time.Note: Make sure that your endpoints include + // the Demographics.Timezone attribute if you plan to enable a quiet time for + // your app. If your endpoints don't include this attribute, they'll receive + // the messages that you send them, even if quiet time is enabled.When you set + // up an app to use quiet time, campaigns in that app don't send messages during + // the time range you specified, as long as all of the following are true:- + // The endpoint includes a valid Demographic.Timezone attribute.- The current + // time in the endpoint's time zone is later than or equal to the time specified + // in the QuietTime.Start attribute for the app (or campaign, if applicable).- + // The current time in the endpoint's time zone is earlier than or equal to + // the time specified in the QuietTime.End attribute for the app (or campaign, + // if applicable).Individual campaigns within the app can have their own quiet + // time settings, which override the quiet time settings at the app level. + QuietTime *QuietTime `type:"structure"` +} + +// String returns the string representation +func (s WriteApplicationSettingsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WriteApplicationSettingsRequest) GoString() string { + return s.String() +} + +// SetCampaignHook sets the CampaignHook field's value. 
+func (s *WriteApplicationSettingsRequest) SetCampaignHook(v *CampaignHook) *WriteApplicationSettingsRequest { + s.CampaignHook = v + return s +} + +// SetCloudWatchMetricsEnabled sets the CloudWatchMetricsEnabled field's value. +func (s *WriteApplicationSettingsRequest) SetCloudWatchMetricsEnabled(v bool) *WriteApplicationSettingsRequest { + s.CloudWatchMetricsEnabled = &v + return s +} + +// SetLimits sets the Limits field's value. +func (s *WriteApplicationSettingsRequest) SetLimits(v *CampaignLimits) *WriteApplicationSettingsRequest { + s.Limits = v + return s +} + +// SetQuietTime sets the QuietTime field's value. +func (s *WriteApplicationSettingsRequest) SetQuietTime(v *QuietTime) *WriteApplicationSettingsRequest { + s.QuietTime = v + return s +} + +// Used to create a campaign. +type WriteCampaignRequest struct { + _ struct{} `type:"structure"` + + // Treatments that are defined in addition to the default treatment. + AdditionalTreatments []*WriteTreatmentResource `type:"list"` + + // A description of the campaign. + Description *string `type:"string"` + + // The allocated percentage of end users who will not receive messages from + // this campaign. + HoldoutPercent *int64 `type:"integer"` + + // Campaign hook information. + Hook *CampaignHook `type:"structure"` + + // Indicates whether the campaign is paused. A paused campaign does not send + // messages unless you resume it by setting IsPaused to false. + IsPaused *bool `type:"boolean"` + + // The campaign limits settings. + Limits *CampaignLimits `type:"structure"` + + // The message configuration settings. + MessageConfiguration *MessageConfiguration `type:"structure"` + + // The custom name of the campaign. + Name *string `type:"string"` + + // The campaign schedule. + Schedule *Schedule `type:"structure"` + + // The ID of the segment to which the campaign sends messages. + SegmentId *string `type:"string"` + + // The version of the segment to which the campaign sends messages. + SegmentVersion *int64 `type:"integer"` + + // A custom description for the treatment. + TreatmentDescription *string `type:"string"` + + // The custom name of a variation of the campaign used for A/B testing. + TreatmentName *string `type:"string"` +} + +// String returns the string representation +func (s WriteCampaignRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WriteCampaignRequest) GoString() string { + return s.String() +} + +// SetAdditionalTreatments sets the AdditionalTreatments field's value. +func (s *WriteCampaignRequest) SetAdditionalTreatments(v []*WriteTreatmentResource) *WriteCampaignRequest { + s.AdditionalTreatments = v + return s +} + +// SetDescription sets the Description field's value. +func (s *WriteCampaignRequest) SetDescription(v string) *WriteCampaignRequest { + s.Description = &v + return s +} + +// SetHoldoutPercent sets the HoldoutPercent field's value. +func (s *WriteCampaignRequest) SetHoldoutPercent(v int64) *WriteCampaignRequest { + s.HoldoutPercent = &v + return s +} + +// SetHook sets the Hook field's value. +func (s *WriteCampaignRequest) SetHook(v *CampaignHook) *WriteCampaignRequest { + s.Hook = v + return s +} + +// SetIsPaused sets the IsPaused field's value. +func (s *WriteCampaignRequest) SetIsPaused(v bool) *WriteCampaignRequest { + s.IsPaused = &v + return s +} + +// SetLimits sets the Limits field's value. 
+func (s *WriteCampaignRequest) SetLimits(v *CampaignLimits) *WriteCampaignRequest { + s.Limits = v + return s +} + +// SetMessageConfiguration sets the MessageConfiguration field's value. +func (s *WriteCampaignRequest) SetMessageConfiguration(v *MessageConfiguration) *WriteCampaignRequest { + s.MessageConfiguration = v + return s +} + +// SetName sets the Name field's value. +func (s *WriteCampaignRequest) SetName(v string) *WriteCampaignRequest { + s.Name = &v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *WriteCampaignRequest) SetSchedule(v *Schedule) *WriteCampaignRequest { + s.Schedule = v + return s +} + +// SetSegmentId sets the SegmentId field's value. +func (s *WriteCampaignRequest) SetSegmentId(v string) *WriteCampaignRequest { + s.SegmentId = &v + return s +} + +// SetSegmentVersion sets the SegmentVersion field's value. +func (s *WriteCampaignRequest) SetSegmentVersion(v int64) *WriteCampaignRequest { + s.SegmentVersion = &v + return s +} + +// SetTreatmentDescription sets the TreatmentDescription field's value. +func (s *WriteCampaignRequest) SetTreatmentDescription(v string) *WriteCampaignRequest { + s.TreatmentDescription = &v + return s +} + +// SetTreatmentName sets the TreatmentName field's value. +func (s *WriteCampaignRequest) SetTreatmentName(v string) *WriteCampaignRequest { + s.TreatmentName = &v + return s +} + +// Request to save an EventStream. +type WriteEventStream struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the Amazon Kinesis stream or Firehose delivery + // stream to which you want to publish events. Firehose ARN: arn:aws:firehose:REGION:ACCOUNT_ID:deliverystream/STREAM_NAME + // Kinesis ARN: arn:aws:kinesis:REGION:ACCOUNT_ID:stream/STREAM_NAME + DestinationStreamArn *string `type:"string"` + + // The IAM role that authorizes Amazon Pinpoint to publish events to the stream + // in your account. + RoleArn *string `type:"string"` +} + +// String returns the string representation +func (s WriteEventStream) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WriteEventStream) GoString() string { + return s.String() +} + +// SetDestinationStreamArn sets the DestinationStreamArn field's value. +func (s *WriteEventStream) SetDestinationStreamArn(v string) *WriteEventStream { + s.DestinationStreamArn = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *WriteEventStream) SetRoleArn(v string) *WriteEventStream { + s.RoleArn = &v + return s +} + +// Segment definition. +type WriteSegmentRequest struct { + _ struct{} `type:"structure"` + + // The segment dimensions attributes. + Dimensions *SegmentDimensions `type:"structure"` + + // The name of segment + Name *string `type:"string"` + + // A segment group, which consists of zero or more source segments, plus dimensions + // that are applied to those source segments. Your request can only include + // one segment group. Your request can include either a SegmentGroups object + // or a Dimensions object, but not both. + SegmentGroups *SegmentGroupList `type:"structure"` +} + +// String returns the string representation +func (s WriteSegmentRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WriteSegmentRequest) GoString() string { + return s.String() +} + +// SetDimensions sets the Dimensions field's value. 
+func (s *WriteSegmentRequest) SetDimensions(v *SegmentDimensions) *WriteSegmentRequest { + s.Dimensions = v + return s +} + +// SetName sets the Name field's value. +func (s *WriteSegmentRequest) SetName(v string) *WriteSegmentRequest { + s.Name = &v + return s +} + +// SetSegmentGroups sets the SegmentGroups field's value. +func (s *WriteSegmentRequest) SetSegmentGroups(v *SegmentGroupList) *WriteSegmentRequest { + s.SegmentGroups = v + return s +} + +// Used to create a campaign treatment. +type WriteTreatmentResource struct { + _ struct{} `type:"structure"` + + // The message configuration settings. + MessageConfiguration *MessageConfiguration `type:"structure"` + + // The campaign schedule. + Schedule *Schedule `type:"structure"` + + // The allocated percentage of users for this treatment. + SizePercent *int64 `type:"integer"` + + // A custom description for the treatment. + TreatmentDescription *string `type:"string"` + + // The custom name of a variation of the campaign used for A/B testing. + TreatmentName *string `type:"string"` +} + +// String returns the string representation +func (s WriteTreatmentResource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WriteTreatmentResource) GoString() string { + return s.String() +} + +// SetMessageConfiguration sets the MessageConfiguration field's value. +func (s *WriteTreatmentResource) SetMessageConfiguration(v *MessageConfiguration) *WriteTreatmentResource { + s.MessageConfiguration = v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *WriteTreatmentResource) SetSchedule(v *Schedule) *WriteTreatmentResource { + s.Schedule = v + return s +} + +// SetSizePercent sets the SizePercent field's value. +func (s *WriteTreatmentResource) SetSizePercent(v int64) *WriteTreatmentResource { + s.SizePercent = &v + return s +} + +// SetTreatmentDescription sets the TreatmentDescription field's value. +func (s *WriteTreatmentResource) SetTreatmentDescription(v string) *WriteTreatmentResource { + s.TreatmentDescription = &v + return s +} + +// SetTreatmentName sets the TreatmentName field's value. 
+func (s *WriteTreatmentResource) SetTreatmentName(v string) *WriteTreatmentResource { + s.TreatmentName = &v + return s +} + +const ( + // ActionOpenApp is a Action enum value + ActionOpenApp = "OPEN_APP" + + // ActionDeepLink is a Action enum value + ActionDeepLink = "DEEP_LINK" + + // ActionUrl is a Action enum value + ActionUrl = "URL" +) + +const ( + // AttributeTypeInclusive is a AttributeType enum value + AttributeTypeInclusive = "INCLUSIVE" + + // AttributeTypeExclusive is a AttributeType enum value + AttributeTypeExclusive = "EXCLUSIVE" +) + +const ( + // CampaignStatusScheduled is a CampaignStatus enum value + CampaignStatusScheduled = "SCHEDULED" + + // CampaignStatusExecuting is a CampaignStatus enum value + CampaignStatusExecuting = "EXECUTING" + + // CampaignStatusPendingNextRun is a CampaignStatus enum value + CampaignStatusPendingNextRun = "PENDING_NEXT_RUN" + + // CampaignStatusCompleted is a CampaignStatus enum value + CampaignStatusCompleted = "COMPLETED" + + // CampaignStatusPaused is a CampaignStatus enum value + CampaignStatusPaused = "PAUSED" + + // CampaignStatusDeleted is a CampaignStatus enum value + CampaignStatusDeleted = "DELETED" +) + +const ( + // ChannelTypeGcm is a ChannelType enum value + ChannelTypeGcm = "GCM" + + // ChannelTypeApns is a ChannelType enum value + ChannelTypeApns = "APNS" + + // ChannelTypeApnsSandbox is a ChannelType enum value + ChannelTypeApnsSandbox = "APNS_SANDBOX" + + // ChannelTypeApnsVoip is a ChannelType enum value + ChannelTypeApnsVoip = "APNS_VOIP" + + // ChannelTypeApnsVoipSandbox is a ChannelType enum value + ChannelTypeApnsVoipSandbox = "APNS_VOIP_SANDBOX" + + // ChannelTypeAdm is a ChannelType enum value + ChannelTypeAdm = "ADM" + + // ChannelTypeSms is a ChannelType enum value + ChannelTypeSms = "SMS" + + // ChannelTypeVoice is a ChannelType enum value + ChannelTypeVoice = "VOICE" + + // ChannelTypeEmail is a ChannelType enum value + ChannelTypeEmail = "EMAIL" + + // ChannelTypeBaidu is a ChannelType enum value + ChannelTypeBaidu = "BAIDU" + + // ChannelTypeCustom is a ChannelType enum value + ChannelTypeCustom = "CUSTOM" +) + +const ( + // DeliveryStatusSuccessful is a DeliveryStatus enum value + DeliveryStatusSuccessful = "SUCCESSFUL" + + // DeliveryStatusThrottled is a DeliveryStatus enum value + DeliveryStatusThrottled = "THROTTLED" + + // DeliveryStatusTemporaryFailure is a DeliveryStatus enum value + DeliveryStatusTemporaryFailure = "TEMPORARY_FAILURE" + + // DeliveryStatusPermanentFailure is a DeliveryStatus enum value + DeliveryStatusPermanentFailure = "PERMANENT_FAILURE" + + // DeliveryStatusUnknownFailure is a DeliveryStatus enum value + DeliveryStatusUnknownFailure = "UNKNOWN_FAILURE" + + // DeliveryStatusOptOut is a DeliveryStatus enum value + DeliveryStatusOptOut = "OPT_OUT" + + // DeliveryStatusDuplicate is a DeliveryStatus enum value + DeliveryStatusDuplicate = "DUPLICATE" +) + +const ( + // DimensionTypeInclusive is a DimensionType enum value + DimensionTypeInclusive = "INCLUSIVE" + + // DimensionTypeExclusive is a DimensionType enum value + DimensionTypeExclusive = "EXCLUSIVE" +) + +const ( + // DurationHr24 is a Duration enum value + DurationHr24 = "HR_24" + + // DurationDay7 is a Duration enum value + DurationDay7 = "DAY_7" + + // DurationDay14 is a Duration enum value + DurationDay14 = "DAY_14" + + // DurationDay30 is a Duration enum value + DurationDay30 = "DAY_30" +) + +const ( + // FilterTypeSystem is a FilterType enum value + FilterTypeSystem = "SYSTEM" + + // FilterTypeEndpoint is a FilterType enum 
value + FilterTypeEndpoint = "ENDPOINT" +) + +const ( + // FormatCsv is a Format enum value + FormatCsv = "CSV" + + // FormatJson is a Format enum value + FormatJson = "JSON" +) + +const ( + // FrequencyOnce is a Frequency enum value + FrequencyOnce = "ONCE" + + // FrequencyHourly is a Frequency enum value + FrequencyHourly = "HOURLY" + + // FrequencyDaily is a Frequency enum value + FrequencyDaily = "DAILY" + + // FrequencyWeekly is a Frequency enum value + FrequencyWeekly = "WEEKLY" + + // FrequencyMonthly is a Frequency enum value + FrequencyMonthly = "MONTHLY" + + // FrequencyEvent is a Frequency enum value + FrequencyEvent = "EVENT" +) + +const ( + // IncludeAll is a Include enum value + IncludeAll = "ALL" + + // IncludeAny is a Include enum value + IncludeAny = "ANY" + + // IncludeNone is a Include enum value + IncludeNone = "NONE" +) + +const ( + // JobStatusCreated is a JobStatus enum value + JobStatusCreated = "CREATED" + + // JobStatusInitializing is a JobStatus enum value + JobStatusInitializing = "INITIALIZING" + + // JobStatusProcessing is a JobStatus enum value + JobStatusProcessing = "PROCESSING" + + // JobStatusCompleting is a JobStatus enum value + JobStatusCompleting = "COMPLETING" + + // JobStatusCompleted is a JobStatus enum value + JobStatusCompleted = "COMPLETED" + + // JobStatusFailing is a JobStatus enum value + JobStatusFailing = "FAILING" + + // JobStatusFailed is a JobStatus enum value + JobStatusFailed = "FAILED" +) + +const ( + // MessageTypeTransactional is a MessageType enum value + MessageTypeTransactional = "TRANSACTIONAL" + + // MessageTypePromotional is a MessageType enum value + MessageTypePromotional = "PROMOTIONAL" +) + +const ( + // ModeDelivery is a Mode enum value + ModeDelivery = "DELIVERY" + + // ModeFilter is a Mode enum value + ModeFilter = "FILTER" +) + +const ( + // RecencyTypeActive is a RecencyType enum value + RecencyTypeActive = "ACTIVE" + + // RecencyTypeInactive is a RecencyType enum value + RecencyTypeInactive = "INACTIVE" +) + +const ( + // SegmentTypeDimensional is a SegmentType enum value + SegmentTypeDimensional = "DIMENSIONAL" + + // SegmentTypeImport is a SegmentType enum value + SegmentTypeImport = "IMPORT" +) + +const ( + // SourceTypeAll is a SourceType enum value + SourceTypeAll = "ALL" + + // SourceTypeAny is a SourceType enum value + SourceTypeAny = "ANY" + + // SourceTypeNone is a SourceType enum value + SourceTypeNone = "NONE" +) + +const ( + // TypeAll is a Type enum value + TypeAll = "ALL" + + // TypeAny is a Type enum value + TypeAny = "ANY" + + // TypeNone is a Type enum value + TypeNone = "NONE" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/pinpoint/doc.go b/vendor/github.com/aws/aws-sdk-go/service/pinpoint/doc.go new file mode 100644 index 00000000000..f41551af66e --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/pinpoint/doc.go @@ -0,0 +1,26 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package pinpoint provides the client and types for making API +// requests to Amazon Pinpoint. +// +// See https://docs.aws.amazon.com/goto/WebAPI/pinpoint-2016-12-01 for more information on this service. +// +// See pinpoint package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/pinpoint/ +// +// Using the Client +// +// To contact Amazon Pinpoint with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. 
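+//
+// A minimal sketch of that pattern, assuming an existing *session.Session
+// named mySession and an illustrative application ID (the UpdateVoiceChannel
+// operation and its request types are defined in this package's api.go):
+//
+//    svc := pinpoint.New(mySession)
+//    out, err := svc.UpdateVoiceChannel(&pinpoint.UpdateVoiceChannelInput{
+//        ApplicationId:       aws.String("application-id"),
+//        VoiceChannelRequest: &pinpoint.VoiceChannelRequest{Enabled: aws.Bool(true)},
+//    })
+//    if err == nil { // out is now filled
+//        fmt.Println(out)
+//    }
+//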
+// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon Pinpoint client Pinpoint for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/pinpoint/#New +package pinpoint diff --git a/vendor/github.com/aws/aws-sdk-go/service/pinpoint/errors.go b/vendor/github.com/aws/aws-sdk-go/service/pinpoint/errors.go new file mode 100644 index 00000000000..7bd7525b6a6 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/pinpoint/errors.go @@ -0,0 +1,42 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package pinpoint + +const ( + + // ErrCodeBadRequestException for service response error code + // "BadRequestException". + // + // Simple message object. + ErrCodeBadRequestException = "BadRequestException" + + // ErrCodeForbiddenException for service response error code + // "ForbiddenException". + // + // Simple message object. + ErrCodeForbiddenException = "ForbiddenException" + + // ErrCodeInternalServerErrorException for service response error code + // "InternalServerErrorException". + // + // Simple message object. + ErrCodeInternalServerErrorException = "InternalServerErrorException" + + // ErrCodeMethodNotAllowedException for service response error code + // "MethodNotAllowedException". + // + // Simple message object. + ErrCodeMethodNotAllowedException = "MethodNotAllowedException" + + // ErrCodeNotFoundException for service response error code + // "NotFoundException". + // + // Simple message object. + ErrCodeNotFoundException = "NotFoundException" + + // ErrCodeTooManyRequestsException for service response error code + // "TooManyRequestsException". + // + // Simple message object. + ErrCodeTooManyRequestsException = "TooManyRequestsException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/pinpoint/service.go b/vendor/github.com/aws/aws-sdk-go/service/pinpoint/service.go new file mode 100644 index 00000000000..9bee611da82 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/pinpoint/service.go @@ -0,0 +1,99 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package pinpoint + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/restjson" +) + +// Pinpoint provides the API operation methods for making requests to +// Amazon Pinpoint. See this package's package overview docs +// for details on the service. +// +// Pinpoint methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type Pinpoint struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "pinpoint" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. 
+ ServiceID = "Pinpoint" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the Pinpoint client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a Pinpoint client from just a session. +// svc := pinpoint.New(mySession) +// +// // Create a Pinpoint client with additional configuration +// svc := pinpoint.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *Pinpoint { + c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "mobiletargeting" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *Pinpoint { + svc := &Pinpoint{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2016-12-01", + JSONVersion: "1.1", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(restjson.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(restjson.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(restjson.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(restjson.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a Pinpoint operation and runs any +// custom request initialization. +func (c *Pinpoint) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/pricing/api.go b/vendor/github.com/aws/aws-sdk-go/service/pricing/api.go new file mode 100644 index 00000000000..395d8c6e50d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/pricing/api.go @@ -0,0 +1,955 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package pricing + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opDescribeServices = "DescribeServices" + +// DescribeServicesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeServices operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeServices for more information on using the DescribeServices +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeServicesRequest method. 
+// req, resp := client.DescribeServicesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pricing-2017-10-15/DescribeServices +func (c *Pricing) DescribeServicesRequest(input *DescribeServicesInput) (req *request.Request, output *DescribeServicesOutput) { + op := &request.Operation{ + Name: opDescribeServices, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeServicesInput{} + } + + output = &DescribeServicesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeServices API operation for AWS Price List Service. +// +// Returns the metadata for one service or a list of the metadata for all services. +// Use this without a service code to get the service codes for all services. +// Use it with a service code, such as AmazonEC2, to get information specific +// to that service, such as the attribute names available for that service. +// For example, some of the attribute names available for EC2 are volumeType, +// maxIopsVolume, operation, locationType, and instanceCapacity10xlarge. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Price List Service's +// API operation DescribeServices for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalErrorException "InternalErrorException" +// An error on the server occurred during the processing of your request. Try +// again later. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters had an invalid value. +// +// * ErrCodeNotFoundException "NotFoundException" +// The requested resource can't be found. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The pagination token is invalid. Try again without a pagination token. +// +// * ErrCodeExpiredNextTokenException "ExpiredNextTokenException" +// The pagination token expired. Try again without a pagination token. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pricing-2017-10-15/DescribeServices +func (c *Pricing) DescribeServices(input *DescribeServicesInput) (*DescribeServicesOutput, error) { + req, out := c.DescribeServicesRequest(input) + return out, req.Send() +} + +// DescribeServicesWithContext is the same as DescribeServices with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeServices for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pricing) DescribeServicesWithContext(ctx aws.Context, input *DescribeServicesInput, opts ...request.Option) (*DescribeServicesOutput, error) { + req, out := c.DescribeServicesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +// DescribeServicesPages iterates over the pages of a DescribeServices operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeServices method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeServices operation. +// pageNum := 0 +// err := client.DescribeServicesPages(params, +// func(page *DescribeServicesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Pricing) DescribeServicesPages(input *DescribeServicesInput, fn func(*DescribeServicesOutput, bool) bool) error { + return c.DescribeServicesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeServicesPagesWithContext same as DescribeServicesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pricing) DescribeServicesPagesWithContext(ctx aws.Context, input *DescribeServicesInput, fn func(*DescribeServicesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeServicesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeServicesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeServicesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opGetAttributeValues = "GetAttributeValues" + +// GetAttributeValuesRequest generates a "aws/request.Request" representing the +// client's request for the GetAttributeValues operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAttributeValues for more information on using the GetAttributeValues +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAttributeValuesRequest method. 
+// req, resp := client.GetAttributeValuesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pricing-2017-10-15/GetAttributeValues +func (c *Pricing) GetAttributeValuesRequest(input *GetAttributeValuesInput) (req *request.Request, output *GetAttributeValuesOutput) { + op := &request.Operation{ + Name: opGetAttributeValues, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &GetAttributeValuesInput{} + } + + output = &GetAttributeValuesOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAttributeValues API operation for AWS Price List Service. +// +// Returns a list of attribute values. Attibutes are similar to the details +// in a Price List API offer file. For a list of available attributes, see Offer +// File Definitions (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/reading-an-offer.html#pps-defs) +// in the AWS Billing and Cost Management User Guide (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-what-is.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Price List Service's +// API operation GetAttributeValues for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalErrorException "InternalErrorException" +// An error on the server occurred during the processing of your request. Try +// again later. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters had an invalid value. +// +// * ErrCodeNotFoundException "NotFoundException" +// The requested resource can't be found. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The pagination token is invalid. Try again without a pagination token. +// +// * ErrCodeExpiredNextTokenException "ExpiredNextTokenException" +// The pagination token expired. Try again without a pagination token. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pricing-2017-10-15/GetAttributeValues +func (c *Pricing) GetAttributeValues(input *GetAttributeValuesInput) (*GetAttributeValuesOutput, error) { + req, out := c.GetAttributeValuesRequest(input) + return out, req.Send() +} + +// GetAttributeValuesWithContext is the same as GetAttributeValues with the addition of +// the ability to pass a context and additional request options. +// +// See GetAttributeValues for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pricing) GetAttributeValuesWithContext(ctx aws.Context, input *GetAttributeValuesInput, opts ...request.Option) (*GetAttributeValuesOutput, error) { + req, out := c.GetAttributeValuesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetAttributeValuesPages iterates over the pages of a GetAttributeValues operation, +// calling the "fn" function with the response data for each page. 
To stop +// iterating, return false from the fn function. +// +// See GetAttributeValues method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetAttributeValues operation. +// pageNum := 0 +// err := client.GetAttributeValuesPages(params, +// func(page *GetAttributeValuesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Pricing) GetAttributeValuesPages(input *GetAttributeValuesInput, fn func(*GetAttributeValuesOutput, bool) bool) error { + return c.GetAttributeValuesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetAttributeValuesPagesWithContext same as GetAttributeValuesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pricing) GetAttributeValuesPagesWithContext(ctx aws.Context, input *GetAttributeValuesInput, fn func(*GetAttributeValuesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetAttributeValuesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetAttributeValuesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetAttributeValuesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opGetProducts = "GetProducts" + +// GetProductsRequest generates a "aws/request.Request" representing the +// client's request for the GetProducts operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetProducts for more information on using the GetProducts +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetProductsRequest method. +// req, resp := client.GetProductsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pricing-2017-10-15/GetProducts +func (c *Pricing) GetProductsRequest(input *GetProductsInput) (req *request.Request, output *GetProductsOutput) { + op := &request.Operation{ + Name: opGetProducts, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &GetProductsInput{} + } + + output = &GetProductsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetProducts API operation for AWS Price List Service. +// +// Returns a list of all products that match the filter criteria. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Price List Service's +// API operation GetProducts for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalErrorException "InternalErrorException" +// An error on the server occurred during the processing of your request. Try +// again later. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// One or more parameters had an invalid value. +// +// * ErrCodeNotFoundException "NotFoundException" +// The requested resource can't be found. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// The pagination token is invalid. Try again without a pagination token. +// +// * ErrCodeExpiredNextTokenException "ExpiredNextTokenException" +// The pagination token expired. Try again without a pagination token. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pricing-2017-10-15/GetProducts +func (c *Pricing) GetProducts(input *GetProductsInput) (*GetProductsOutput, error) { + req, out := c.GetProductsRequest(input) + return out, req.Send() +} + +// GetProductsWithContext is the same as GetProducts with the addition of +// the ability to pass a context and additional request options. +// +// See GetProducts for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Pricing) GetProductsWithContext(ctx aws.Context, input *GetProductsInput, opts ...request.Option) (*GetProductsOutput, error) { + req, out := c.GetProductsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// GetProductsPages iterates over the pages of a GetProducts operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetProducts method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetProducts operation. +// pageNum := 0 +// err := client.GetProductsPages(params, +// func(page *GetProductsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Pricing) GetProductsPages(input *GetProductsInput, fn func(*GetProductsOutput, bool) bool) error { + return c.GetProductsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetProductsPagesWithContext same as GetProductsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
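+//
+// A brief sketch combining a TERM_MATCH filter with page iteration, assuming
+// an existing *session.Session named sess; the service code and attribute
+// values are illustrative:
+//
+//    svc := pricing.New(sess)
+//    input := &pricing.GetProductsInput{
+//        ServiceCode: aws.String("AmazonEC2"),
+//        Filters: []*pricing.Filter{{
+//            Type:  aws.String(pricing.FilterTypeTermMatch),
+//            Field: aws.String("volumeType"),
+//            Value: aws.String("Provisioned IOPS"),
+//        }},
+//    }
+//    err := svc.GetProductsPagesWithContext(aws.BackgroundContext(), input,
+//        func(page *pricing.GetProductsOutput, lastPage bool) bool {
+//            fmt.Println(len(page.PriceList))
+//            return true // keep requesting pages until the last one
+//        })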
+func (c *Pricing) GetProductsPagesWithContext(ctx aws.Context, input *GetProductsInput, fn func(*GetProductsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetProductsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetProductsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetProductsOutput), !p.HasNextPage()) + } + return p.Err() +} + +// The values of a given attribute, such as Throughput Optimized HDD or Provisioned +// IOPS for the Amazon EC2volumeType attribute. +type AttributeValue struct { + _ struct{} `type:"structure"` + + // The specific value of an attributeName. + Value *string `type:"string"` +} + +// String returns the string representation +func (s AttributeValue) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttributeValue) GoString() string { + return s.String() +} + +// SetValue sets the Value field's value. +func (s *AttributeValue) SetValue(v string) *AttributeValue { + s.Value = &v + return s +} + +type DescribeServicesInput struct { + _ struct{} `type:"structure"` + + // The format version that you want the response to be in. + // + // Valid values are: aws_v1 + FormatVersion *string `type:"string"` + + // The maximum number of results that you want returned in the response. + MaxResults *int64 `min:"1" type:"integer"` + + // The pagination token that indicates the next set of results that you want + // to retrieve. + NextToken *string `type:"string"` + + // The code for the service whose information you want to retrieve, such as + // AmazonEC2. You can use the ServiceCode to filter the results in a GetProducts + // call. To retrieve a list of all services, leave this blank. + ServiceCode *string `type:"string"` +} + +// String returns the string representation +func (s DescribeServicesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeServicesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeServicesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeServicesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFormatVersion sets the FormatVersion field's value. +func (s *DescribeServicesInput) SetFormatVersion(v string) *DescribeServicesInput { + s.FormatVersion = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeServicesInput) SetMaxResults(v int64) *DescribeServicesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeServicesInput) SetNextToken(v string) *DescribeServicesInput { + s.NextToken = &v + return s +} + +// SetServiceCode sets the ServiceCode field's value. +func (s *DescribeServicesInput) SetServiceCode(v string) *DescribeServicesInput { + s.ServiceCode = &v + return s +} + +type DescribeServicesOutput struct { + _ struct{} `type:"structure"` + + // The format version of the response. For example, aws_v1. + FormatVersion *string `type:"string"` + + // The pagination token for the next set of retreivable results. 
+ NextToken *string `type:"string"` + + // The service metadata for the service or services in the response. + Services []*Service `type:"list"` +} + +// String returns the string representation +func (s DescribeServicesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeServicesOutput) GoString() string { + return s.String() +} + +// SetFormatVersion sets the FormatVersion field's value. +func (s *DescribeServicesOutput) SetFormatVersion(v string) *DescribeServicesOutput { + s.FormatVersion = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeServicesOutput) SetNextToken(v string) *DescribeServicesOutput { + s.NextToken = &v + return s +} + +// SetServices sets the Services field's value. +func (s *DescribeServicesOutput) SetServices(v []*Service) *DescribeServicesOutput { + s.Services = v + return s +} + +// The constraints that you want all returned products to match. +type Filter struct { + _ struct{} `type:"structure"` + + // The product metadata field that you want to filter on. You can filter by + // just the service code to see all products for a specific service, filter + // by just the attribute name to see a specific attribute for multiple services, + // or use both a service code and an attribute name to retrieve only products + // that match both fields. + // + // Valid values include: ServiceCode, and all attribute names + // + // For example, you can filter by the AmazonEC2 service code and the volumeType + // attribute name to get the prices for only Amazon EC2 volumes. + // + // Field is a required field + Field *string `type:"string" required:"true"` + + // The type of filter that you want to use. + // + // Valid values are: TERM_MATCH. TERM_MATCH returns only products that match + // both the given filter field and the given value. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"FilterType"` + + // The service code or attribute value that you want to filter by. If you are + // filtering by service code this is the actual service code, such as AmazonEC2. + // If you are filtering by attribute name, this is the attribute value that + // you want the returned products to match, such as a Provisioned IOPS volume. + // + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Filter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Filter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Filter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Filter"} + if s.Field == nil { + invalidParams.Add(request.NewErrParamRequired("Field")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetField sets the Field field's value. +func (s *Filter) SetField(v string) *Filter { + s.Field = &v + return s +} + +// SetType sets the Type field's value. +func (s *Filter) SetType(v string) *Filter { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. 
+func (s *Filter) SetValue(v string) *Filter { + s.Value = &v + return s +} + +type GetAttributeValuesInput struct { + _ struct{} `type:"structure"` + + // The name of the attribute that you want to retrieve the values for, such + // as volumeType. + // + // AttributeName is a required field + AttributeName *string `type:"string" required:"true"` + + // The maximum number of results to return in response. + MaxResults *int64 `min:"1" type:"integer"` + + // The pagination token that indicates the next set of results that you want + // to retrieve. + NextToken *string `type:"string"` + + // The service code for the service whose attributes you want to retrieve. For + // example, if you want the retrieve an EC2 attribute, use AmazonEC2. + // + // ServiceCode is a required field + ServiceCode *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s GetAttributeValuesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAttributeValuesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetAttributeValuesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetAttributeValuesInput"} + if s.AttributeName == nil { + invalidParams.Add(request.NewErrParamRequired("AttributeName")) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.ServiceCode == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceCode")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttributeName sets the AttributeName field's value. +func (s *GetAttributeValuesInput) SetAttributeName(v string) *GetAttributeValuesInput { + s.AttributeName = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetAttributeValuesInput) SetMaxResults(v int64) *GetAttributeValuesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetAttributeValuesInput) SetNextToken(v string) *GetAttributeValuesInput { + s.NextToken = &v + return s +} + +// SetServiceCode sets the ServiceCode field's value. +func (s *GetAttributeValuesInput) SetServiceCode(v string) *GetAttributeValuesInput { + s.ServiceCode = &v + return s +} + +type GetAttributeValuesOutput struct { + _ struct{} `type:"structure"` + + // The list of values for an attribute. For example, Throughput Optimized HDD + // and Provisioned IOPS are two available values for the AmazonEC2volumeType. + AttributeValues []*AttributeValue `type:"list"` + + // The pagination token that indicates the next set of results to retrieve. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s GetAttributeValuesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAttributeValuesOutput) GoString() string { + return s.String() +} + +// SetAttributeValues sets the AttributeValues field's value. +func (s *GetAttributeValuesOutput) SetAttributeValues(v []*AttributeValue) *GetAttributeValuesOutput { + s.AttributeValues = v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *GetAttributeValuesOutput) SetNextToken(v string) *GetAttributeValuesOutput { + s.NextToken = &v + return s +} + +type GetProductsInput struct { + _ struct{} `type:"structure"` + + // The list of filters that limit the returned products. only products that + // match all filters are returned. + Filters []*Filter `type:"list"` + + // The format version that you want the response to be in. + // + // Valid values are: aws_v1 + FormatVersion *string `type:"string"` + + // The maximum number of results to return in the response. + MaxResults *int64 `min:"1" type:"integer"` + + // The pagination token that indicates the next set of results that you want + // to retrieve. + NextToken *string `type:"string"` + + // The code for the service whose products you want to retrieve. + ServiceCode *string `type:"string"` +} + +// String returns the string representation +func (s GetProductsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetProductsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetProductsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetProductsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *GetProductsInput) SetFilters(v []*Filter) *GetProductsInput { + s.Filters = v + return s +} + +// SetFormatVersion sets the FormatVersion field's value. +func (s *GetProductsInput) SetFormatVersion(v string) *GetProductsInput { + s.FormatVersion = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetProductsInput) SetMaxResults(v int64) *GetProductsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetProductsInput) SetNextToken(v string) *GetProductsInput { + s.NextToken = &v + return s +} + +// SetServiceCode sets the ServiceCode field's value. +func (s *GetProductsInput) SetServiceCode(v string) *GetProductsInput { + s.ServiceCode = &v + return s +} + +type GetProductsOutput struct { + _ struct{} `type:"structure"` + + // The format version of the response. For example, aws_v1. + FormatVersion *string `type:"string"` + + // The pagination token that indicates the next set of results to retrieve. + NextToken *string `type:"string"` + + // The list of products that match your filters. The list contains both the + // product metadata and the price information. + PriceList []aws.JSONValue `type:"list"` +} + +// String returns the string representation +func (s GetProductsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetProductsOutput) GoString() string { + return s.String() +} + +// SetFormatVersion sets the FormatVersion field's value. +func (s *GetProductsOutput) SetFormatVersion(v string) *GetProductsOutput { + s.FormatVersion = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *GetProductsOutput) SetNextToken(v string) *GetProductsOutput { + s.NextToken = &v + return s +} + +// SetPriceList sets the PriceList field's value. +func (s *GetProductsOutput) SetPriceList(v []aws.JSONValue) *GetProductsOutput { + s.PriceList = v + return s +} + +// The metadata for a service, such as the service code and available attribute +// names. +type Service struct { + _ struct{} `type:"structure"` + + // The attributes that are available for this service. + AttributeNames []*string `type:"list"` + + // The code for the AWS service. + ServiceCode *string `type:"string"` +} + +// String returns the string representation +func (s Service) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Service) GoString() string { + return s.String() +} + +// SetAttributeNames sets the AttributeNames field's value. +func (s *Service) SetAttributeNames(v []*string) *Service { + s.AttributeNames = v + return s +} + +// SetServiceCode sets the ServiceCode field's value. +func (s *Service) SetServiceCode(v string) *Service { + s.ServiceCode = &v + return s +} + +const ( + // FilterTypeTermMatch is a FilterType enum value + FilterTypeTermMatch = "TERM_MATCH" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/pricing/doc.go b/vendor/github.com/aws/aws-sdk-go/service/pricing/doc.go new file mode 100644 index 00000000000..0555bc4c3cb --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/pricing/doc.go @@ -0,0 +1,51 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package pricing provides the client and types for making API +// requests to AWS Price List Service. +// +// AWS Price List Service API (AWS Price List Service) is a centralized and +// convenient way to programmatically query Amazon Web Services for services, +// products, and pricing information. The AWS Price List Service uses standardized +// product attributes such as Location, Storage Class, and Operating System, +// and provides prices at the SKU level. You can use the AWS Price List Service +// to build cost control and scenario planning tools, reconcile billing data, +// forecast future spend for budgeting purposes, and provide cost benefit analysis +// that compare your internal workloads with AWS. +// +// Use GetServices without a service code to retrieve the service codes for +// all AWS services, then GetServices with a service code to retreive the attribute +// names for that service. After you have the service code and attribute names, +// you can use GetAttributeValues to see what values are available for an attribute. +// With the service code and an attribute name and value, you can use GetProducts +// to find specific products that you're interested in, such as an AmazonEC2 +// instance, with a Provisioned IOPSvolumeType. +// +// Service Endpoint +// +// AWS Price List Service API provides the following two endpoints: +// +// * https://api.pricing.us-east-1.amazonaws.com +// +// * https://api.pricing.ap-south-1.amazonaws.com +// +// See https://docs.aws.amazon.com/goto/WebAPI/pricing-2017-10-15 for more information on this service. +// +// See pricing package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/pricing/ +// +// Using the Client +// +// To contact AWS Price List Service with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. 
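+//
+// A minimal sketch of the query flow described above, assuming an existing
+// *session.Session named sess; the service code and attribute name are
+// illustrative:
+//
+//    svc := pricing.New(sess)
+//    // List the attribute names available for a service.
+//    services, err := svc.DescribeServices(&pricing.DescribeServicesInput{
+//        ServiceCode: aws.String("AmazonEC2"),
+//    })
+//    // Then list the values available for one of those attributes.
+//    values, err := svc.GetAttributeValues(&pricing.GetAttributeValuesInput{
+//        ServiceCode:   aws.String("AmazonEC2"),
+//        AttributeName: aws.String("volumeType"),
+//    })
+//    fmt.Println(services, values, err)
+//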
+// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS Price List Service client Pricing for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/pricing/#New +package pricing diff --git a/vendor/github.com/aws/aws-sdk-go/service/pricing/errors.go b/vendor/github.com/aws/aws-sdk-go/service/pricing/errors.go new file mode 100644 index 00000000000..10e4c44fe92 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/pricing/errors.go @@ -0,0 +1,37 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package pricing + +const ( + + // ErrCodeExpiredNextTokenException for service response error code + // "ExpiredNextTokenException". + // + // The pagination token expired. Try again without a pagination token. + ErrCodeExpiredNextTokenException = "ExpiredNextTokenException" + + // ErrCodeInternalErrorException for service response error code + // "InternalErrorException". + // + // An error on the server occurred during the processing of your request. Try + // again later. + ErrCodeInternalErrorException = "InternalErrorException" + + // ErrCodeInvalidNextTokenException for service response error code + // "InvalidNextTokenException". + // + // The pagination token is invalid. Try again without a pagination token. + ErrCodeInvalidNextTokenException = "InvalidNextTokenException" + + // ErrCodeInvalidParameterException for service response error code + // "InvalidParameterException". + // + // One or more parameters had an invalid value. + ErrCodeInvalidParameterException = "InvalidParameterException" + + // ErrCodeNotFoundException for service response error code + // "NotFoundException". + // + // The requested resource can't be found. + ErrCodeNotFoundException = "NotFoundException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/pricing/service.go b/vendor/github.com/aws/aws-sdk-go/service/pricing/service.go new file mode 100644 index 00000000000..90ff33d0a08 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/pricing/service.go @@ -0,0 +1,100 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package pricing + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// Pricing provides the API operation methods for making requests to +// AWS Price List Service. See this package's package overview docs +// for details on the service. +// +// Pricing methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type Pricing struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "api.pricing" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. 
+ ServiceID = "Pricing" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the Pricing client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a Pricing client from just a session. +// svc := pricing.New(mySession) +// +// // Create a Pricing client with additional configuration +// svc := pricing.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *Pricing { + c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "pricing" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *Pricing { + svc := &Pricing{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2017-10-15", + JSONVersion: "1.1", + TargetPrefix: "AWSPriceListService", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a Pricing operation and runs any +// custom request initialization. +func (c *Pricing) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/rds/api.go b/vendor/github.com/aws/aws-sdk-go/service/rds/api.go index 4c6754ecf4c..0ba8144dce1 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/rds/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/rds/api.go @@ -17,8 +17,8 @@ const opAddRoleToDBCluster = "AddRoleToDBCluster" // AddRoleToDBClusterRequest generates a "aws/request.Request" representing the // client's request for the AddRoleToDBCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -60,8 +60,9 @@ func (c *RDS) AddRoleToDBClusterRequest(input *AddRoleToDBClusterInput) (req *re // AddRoleToDBCluster API operation for Amazon Relational Database Service. // // Associates an Identity and Access Management (IAM) role from an Aurora DB -// cluster. For more information, see Authorizing Amazon Aurora to Access Other -// AWS Services On Your Behalf (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Authorizing.AWSServices.html). +// cluster. 
For more information, see Authorizing Amazon Aurora MySQL to Access +// Other AWS Services on Your Behalf (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -72,14 +73,14 @@ func (c *RDS) AddRoleToDBClusterRequest(input *AddRoleToDBClusterInput) (req *re // // Returned Error Codes: // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeDBClusterRoleAlreadyExistsFault "DBClusterRoleAlreadyExists" // The specified IAM role Amazon Resource Name (ARN) is already associated with // the specified DB cluster. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeDBClusterRoleQuotaExceededFault "DBClusterRoleQuotaExceeded" // You have exceeded the maximum number of IAM roles that can be associated @@ -111,8 +112,8 @@ const opAddSourceIdentifierToSubscription = "AddSourceIdentifierToSubscription" // AddSourceIdentifierToSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the AddSourceIdentifierToSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -193,8 +194,8 @@ const opAddTagsToResource = "AddTagsToResource" // AddTagsToResourceRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -251,13 +252,13 @@ func (c *RDS) AddTagsToResourceRequest(input *AddTagsToResourceInput) (req *requ // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/AddTagsToResource func (c *RDS) AddTagsToResource(input *AddTagsToResourceInput) (*AddTagsToResourceOutput, error) { @@ -285,8 +286,8 @@ const opApplyPendingMaintenanceAction = "ApplyPendingMaintenanceAction" // ApplyPendingMaintenanceActionRequest generates a "aws/request.Request" representing the // client's request for the ApplyPendingMaintenanceAction operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -339,6 +340,12 @@ func (c *RDS) ApplyPendingMaintenanceActionRequest(input *ApplyPendingMaintenanc // * ErrCodeResourceNotFoundFault "ResourceNotFoundFault" // The specified resource ID was not found. // +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The requested operation can't be performed while the cluster is in this state. +// +// * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" +// The DB instance isn't in a valid state. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ApplyPendingMaintenanceAction func (c *RDS) ApplyPendingMaintenanceAction(input *ApplyPendingMaintenanceActionInput) (*ApplyPendingMaintenanceActionOutput, error) { req, out := c.ApplyPendingMaintenanceActionRequest(input) @@ -365,8 +372,8 @@ const opAuthorizeDBSecurityGroupIngress = "AuthorizeDBSecurityGroupIngress" // AuthorizeDBSecurityGroupIngressRequest generates a "aws/request.Request" representing the // client's request for the AuthorizeDBSecurityGroupIngress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -428,17 +435,17 @@ func (c *RDS) AuthorizeDBSecurityGroupIngressRequest(input *AuthorizeDBSecurityG // // Returned Error Codes: // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // * ErrCodeInvalidDBSecurityGroupStateFault "InvalidDBSecurityGroupState" -// The state of the DB security group does not allow deletion. +// The state of the DB security group doesn't allow deletion. // // * ErrCodeAuthorizationAlreadyExistsFault "AuthorizationAlreadyExists" -// The specified CIDRIP or EC2 security group is already authorized for the -// specified DB security group. +// The specified CIDRIP or Amazon EC2 security group is already authorized for +// the specified DB security group. // // * ErrCodeAuthorizationQuotaExceededFault "AuthorizationQuotaExceeded" -// DB security group authorization quota has been reached. +// The DB security group authorization quota has been reached. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/AuthorizeDBSecurityGroupIngress func (c *RDS) AuthorizeDBSecurityGroupIngress(input *AuthorizeDBSecurityGroupIngressInput) (*AuthorizeDBSecurityGroupIngressOutput, error) { @@ -462,12 +469,98 @@ func (c *RDS) AuthorizeDBSecurityGroupIngressWithContext(ctx aws.Context, input return out, req.Send() } +const opBacktrackDBCluster = "BacktrackDBCluster" + +// BacktrackDBClusterRequest generates a "aws/request.Request" representing the +// client's request for the BacktrackDBCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BacktrackDBCluster for more information on using the BacktrackDBCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BacktrackDBClusterRequest method. +// req, resp := client.BacktrackDBClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/BacktrackDBCluster +func (c *RDS) BacktrackDBClusterRequest(input *BacktrackDBClusterInput) (req *request.Request, output *BacktrackDBClusterOutput) { + op := &request.Operation{ + Name: opBacktrackDBCluster, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BacktrackDBClusterInput{} + } + + output = &BacktrackDBClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// BacktrackDBCluster API operation for Amazon Relational Database Service. +// +// Backtracks a DB cluster to a specific time, without creating a new DB cluster. +// +// For more information on backtracking, see Backtracking an Aurora DB Cluster +// (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Backtrack.html) +// in the Amazon Aurora User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation BacktrackDBCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier doesn't refer to an existing DB cluster. +// +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The requested operation can't be performed while the cluster is in this state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/BacktrackDBCluster +func (c *RDS) BacktrackDBCluster(input *BacktrackDBClusterInput) (*BacktrackDBClusterOutput, error) { + req, out := c.BacktrackDBClusterRequest(input) + return out, req.Send() +} + +// BacktrackDBClusterWithContext is the same as BacktrackDBCluster with the addition of +// the ability to pass a context and additional request options. +// +// See BacktrackDBCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) BacktrackDBClusterWithContext(ctx aws.Context, input *BacktrackDBClusterInput, opts ...request.Option) (*BacktrackDBClusterOutput, error) { + req, out := c.BacktrackDBClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCopyDBClusterParameterGroup = "CopyDBClusterParameterGroup" // CopyDBClusterParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the CopyDBClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -517,10 +610,10 @@ func (c *RDS) CopyDBClusterParameterGroupRequest(input *CopyDBClusterParameterGr // // Returned Error Codes: // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // * ErrCodeDBParameterGroupQuotaExceededFault "DBParameterGroupQuotaExceeded" -// Request would result in user exceeding the allowed number of DB parameter +// The request would result in the user exceeding the allowed number of DB parameter // groups. // // * ErrCodeDBParameterGroupAlreadyExistsFault "DBParameterGroupAlreadyExists" @@ -552,8 +645,8 @@ const opCopyDBClusterSnapshot = "CopyDBClusterSnapshot" // CopyDBClusterSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CopyDBClusterSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -649,12 +742,11 @@ func (c *RDS) CopyDBClusterSnapshotRequest(input *CopyDBClusterSnapshotInput) (r // DB cluster snapshot is in "copying" status. // // For more information on copying encrypted DB cluster snapshots from one AWS -// Region to another, see Copying a DB Cluster Snapshot in the Same Account, -// Either in the Same Region or Across Regions (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html#USER_CopyDBClusterSnapshot.CrossRegion) -// in the Amazon RDS User Guide. +// Region to another, see Copying a Snapshot (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_CopySnapshot.html) +// in the Amazon Aurora User Guide. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -665,22 +757,22 @@ func (c *RDS) CopyDBClusterSnapshotRequest(input *CopyDBClusterSnapshotInput) (r // // Returned Error Codes: // * ErrCodeDBClusterSnapshotAlreadyExistsFault "DBClusterSnapshotAlreadyExistsFault" -// User already has a DB cluster snapshot with the given identifier. +// The user already has a DB cluster snapshot with the given identifier. // // * ErrCodeDBClusterSnapshotNotFoundFault "DBClusterSnapshotNotFoundFault" -// DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. +// DBClusterSnapshotIdentifier doesn't refer to an existing DB cluster snapshot. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeInvalidDBClusterSnapshotStateFault "InvalidDBClusterSnapshotStateFault" -// The supplied value is not a valid DB cluster snapshot state. +// The supplied value isn't a valid DB cluster snapshot state. // // * ErrCodeSnapshotQuotaExceededFault "SnapshotQuotaExceeded" -// Request would result in user exceeding the allowed number of DB snapshots. +// The request would result in the user exceeding the allowed number of DB snapshots. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CopyDBClusterSnapshot func (c *RDS) CopyDBClusterSnapshot(input *CopyDBClusterSnapshotInput) (*CopyDBClusterSnapshotOutput, error) { @@ -708,8 +800,8 @@ const opCopyDBParameterGroup = "CopyDBParameterGroup" // CopyDBParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the CopyDBParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -759,13 +851,13 @@ func (c *RDS) CopyDBParameterGroupRequest(input *CopyDBParameterGroupInput) (req // // Returned Error Codes: // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // * ErrCodeDBParameterGroupAlreadyExistsFault "DBParameterGroupAlreadyExists" // A DB parameter group with the same name exists. // // * ErrCodeDBParameterGroupQuotaExceededFault "DBParameterGroupQuotaExceeded" -// Request would result in user exceeding the allowed number of DB parameter +// The request would result in the user exceeding the allowed number of DB parameter // groups. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CopyDBParameterGroup @@ -794,8 +886,8 @@ const opCopyDBSnapshot = "CopyDBSnapshot" // CopyDBSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CopyDBSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -856,16 +948,16 @@ func (c *RDS) CopyDBSnapshotRequest(input *CopyDBSnapshotInput) (req *request.Re // DBSnapshotIdentifier is already used by an existing snapshot. // // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // * ErrCodeInvalidDBSnapshotStateFault "InvalidDBSnapshotState" -// The state of the DB snapshot does not allow deletion. +// The state of the DB snapshot doesn't allow deletion. // // * ErrCodeSnapshotQuotaExceededFault "SnapshotQuotaExceeded" -// Request would result in user exceeding the allowed number of DB snapshots. +// The request would result in the user exceeding the allowed number of DB snapshots. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CopyDBSnapshot func (c *RDS) CopyDBSnapshot(input *CopyDBSnapshotInput) (*CopyDBSnapshotOutput, error) { @@ -893,8 +985,8 @@ const opCopyOptionGroup = "CopyOptionGroup" // CopyOptionGroupRequest generates a "aws/request.Request" representing the // client's request for the CopyOptionGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -978,8 +1070,8 @@ const opCreateDBCluster = "CreateDBCluster" // CreateDBClusterRequest generates a "aws/request.Request" representing the // client's request for the CreateDBCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1025,8 +1117,8 @@ func (c *RDS) CreateDBClusterRequest(input *CreateDBClusterInput) (req *request. // For cross-region replication where the DB cluster identified by ReplicationSourceIdentifier // is encrypted, you must also specify the PreSignedUrl parameter. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1037,53 +1129,53 @@ func (c *RDS) CreateDBClusterRequest(input *CreateDBClusterInput) (req *request. // // Returned Error Codes: // * ErrCodeDBClusterAlreadyExistsFault "DBClusterAlreadyExistsFault" -// User already has a DB cluster with the given identifier. 
+// The user already has a DB cluster with the given identifier. // // * ErrCodeInsufficientStorageClusterCapacityFault "InsufficientStorageClusterCapacity" -// There is insufficient storage available for the current action. You may be -// able to resolve this error by updating your subnet group to use different +// There is insufficient storage available for the current action. You might +// be able to resolve this error by updating your subnet group to use different // Availability Zones that have more storage available. // // * ErrCodeDBClusterQuotaExceededFault "DBClusterQuotaExceededFault" -// User attempted to create a new DB cluster and the user has already reached +// The user attempted to create a new DB cluster and the user has already reached // the maximum allowed DB cluster quota. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeInvalidDBSubnetGroupStateFault "InvalidDBSubnetGroupStateFault" -// The DB subnet group cannot be deleted because it is in use. +// The DB subnet group cannot be deleted because it's in use. // // * ErrCodeInvalidSubnet "InvalidSubnet" // The requested subnet is invalid, or multiple subnets were requested that // are not all in a common VPC. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeDBClusterParameterGroupNotFoundFault "DBClusterParameterGroupNotFound" -// DBClusterParameterGroupName does not refer to an existing DB Cluster parameter +// DBClusterParameterGroupName doesn't refer to an existing DB cluster parameter // group. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. 
// // * ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs "DBSubnetGroupDoesNotCoverEnoughAZs" // Subnets in the DB subnet group should cover at least two Availability Zones @@ -1111,12 +1203,107 @@ func (c *RDS) CreateDBClusterWithContext(ctx aws.Context, input *CreateDBCluster return out, req.Send() } +const opCreateDBClusterEndpoint = "CreateDBClusterEndpoint" + +// CreateDBClusterEndpointRequest generates a "aws/request.Request" representing the +// client's request for the CreateDBClusterEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateDBClusterEndpoint for more information on using the CreateDBClusterEndpoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateDBClusterEndpointRequest method. +// req, resp := client.CreateDBClusterEndpointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CreateDBClusterEndpoint +func (c *RDS) CreateDBClusterEndpointRequest(input *CreateDBClusterEndpointInput) (req *request.Request, output *CreateDBClusterEndpointOutput) { + op := &request.Operation{ + Name: opCreateDBClusterEndpoint, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateDBClusterEndpointInput{} + } + + output = &CreateDBClusterEndpointOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateDBClusterEndpoint API operation for Amazon Relational Database Service. +// +// Creates a new custom endpoint and associates it with an Amazon Aurora DB +// cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation CreateDBClusterEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBClusterEndpointQuotaExceededFault "DBClusterEndpointQuotaExceededFault" +// The cluster already has the maximum number of custom endpoints. +// +// * ErrCodeDBClusterEndpointAlreadyExistsFault "DBClusterEndpointAlreadyExistsFault" +// The specified custom endpoint can't be created because it already exists. +// +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier doesn't refer to an existing DB cluster. +// +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The requested operation can't be performed while the cluster is in this state. +// +// * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" +// DBInstanceIdentifier doesn't refer to an existing DB instance. +// +// * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" +// The DB instance isn't in a valid state. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CreateDBClusterEndpoint +func (c *RDS) CreateDBClusterEndpoint(input *CreateDBClusterEndpointInput) (*CreateDBClusterEndpointOutput, error) { + req, out := c.CreateDBClusterEndpointRequest(input) + return out, req.Send() +} + +// CreateDBClusterEndpointWithContext is the same as CreateDBClusterEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See CreateDBClusterEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) CreateDBClusterEndpointWithContext(ctx aws.Context, input *CreateDBClusterEndpointInput, opts ...request.Option) (*CreateDBClusterEndpointOutput, error) { + req, out := c.CreateDBClusterEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateDBClusterParameterGroup = "CreateDBClusterParameterGroup" // CreateDBClusterParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateDBClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1181,8 +1368,8 @@ func (c *RDS) CreateDBClusterParameterGroupRequest(input *CreateDBClusterParamet // command to verify that your DB cluster parameter group has been created or // modified. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1193,7 +1380,7 @@ func (c *RDS) CreateDBClusterParameterGroupRequest(input *CreateDBClusterParamet // // Returned Error Codes: // * ErrCodeDBParameterGroupQuotaExceededFault "DBParameterGroupQuotaExceeded" -// Request would result in user exceeding the allowed number of DB parameter +// The request would result in the user exceeding the allowed number of DB parameter // groups. // // * ErrCodeDBParameterGroupAlreadyExistsFault "DBParameterGroupAlreadyExists" @@ -1225,8 +1412,8 @@ const opCreateDBClusterSnapshot = "CreateDBClusterSnapshot" // CreateDBClusterSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateDBClusterSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -1266,8 +1453,8 @@ func (c *RDS) CreateDBClusterSnapshotRequest(input *CreateDBClusterSnapshotInput // CreateDBClusterSnapshot API operation for Amazon Relational Database Service. // // Creates a snapshot of a DB cluster. For more information on Amazon Aurora, -// see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1278,19 +1465,19 @@ func (c *RDS) CreateDBClusterSnapshotRequest(input *CreateDBClusterSnapshotInput // // Returned Error Codes: // * ErrCodeDBClusterSnapshotAlreadyExistsFault "DBClusterSnapshotAlreadyExistsFault" -// User already has a DB cluster snapshot with the given identifier. +// The user already has a DB cluster snapshot with the given identifier. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeSnapshotQuotaExceededFault "SnapshotQuotaExceeded" -// Request would result in user exceeding the allowed number of DB snapshots. +// The request would result in the user exceeding the allowed number of DB snapshots. // // * ErrCodeInvalidDBClusterSnapshotStateFault "InvalidDBClusterSnapshotStateFault" -// The supplied value is not a valid DB cluster snapshot state. +// The supplied value isn't a valid DB cluster snapshot state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CreateDBClusterSnapshot func (c *RDS) CreateDBClusterSnapshot(input *CreateDBClusterSnapshotInput) (*CreateDBClusterSnapshotOutput, error) { @@ -1318,8 +1505,8 @@ const opCreateDBInstance = "CreateDBInstance" // CreateDBInstanceRequest generates a "aws/request.Request" representing the // client's request for the CreateDBInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1369,42 +1556,42 @@ func (c *RDS) CreateDBInstanceRequest(input *CreateDBInstanceInput) (req *reques // // Returned Error Codes: // * ErrCodeDBInstanceAlreadyExistsFault "DBInstanceAlreadyExists" -// User already has a DB instance with the given identifier. +// The user already has a DB instance with the given identifier. // // * ErrCodeInsufficientDBInstanceCapacityFault "InsufficientDBInstanceCapacity" -// Specified DB instance class is not available in the specified Availability +// The specified DB instance class isn't available in the specified Availability // Zone. // // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. 
+// DBParameterGroupName doesn't refer to an existing DB parameter group. // // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // * ErrCodeInstanceQuotaExceededFault "InstanceQuotaExceeded" -// Request would result in user exceeding the allowed number of DB instances. +// The request would result in the user exceeding the allowed number of DB instances. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs "DBSubnetGroupDoesNotCoverEnoughAZs" // Subnets in the DB subnet group should cover at least two Availability Zones // unless there is only one Availability Zone. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeInvalidSubnet "InvalidSubnet" // The requested subnet is invalid, or multiple subnets were requested that // are not all in a common VPC. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeProvisionedIopsNotAvailableInAZFault "ProvisionedIopsNotAvailableInAZFault" // Provisioned IOPS not available in the specified Availability Zone. @@ -1413,23 +1600,25 @@ func (c *RDS) CreateDBInstanceRequest(input *CreateDBInstanceInput) (req *reques // The specified option group could not be found. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeStorageTypeNotSupportedFault "StorageTypeNotSupported" -// StorageType specified cannot be associated with the DB Instance. +// Storage of the StorageType specified can't be associated with the DB instance. // // * ErrCodeAuthorizationNotFoundFault "AuthorizationNotFound" -// Specified CIDRIP or EC2 security group is not authorized for the specified -// DB security group. +// The specified CIDRIP or Amazon EC2 security group isn't authorized for the +// specified DB security group. // -// RDS may not also be authorized via IAM to perform necessary actions on your -// behalf. +// RDS also may not be authorized by using IAM to perform necessary actions +// on your behalf. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // * ErrCodeDomainNotFoundFault "DomainNotFoundFault" -// Domain does not refer to an existing Active Directory Domain. +// Domain doesn't refer to an existing Active Directory domain. 
+// +// * ErrCodeBackupPolicyNotFoundFault "BackupPolicyNotFoundFault" // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CreateDBInstance func (c *RDS) CreateDBInstance(input *CreateDBInstanceInput) (*CreateDBInstanceOutput, error) { @@ -1457,8 +1646,8 @@ const opCreateDBInstanceReadReplica = "CreateDBInstanceReadReplica" // CreateDBInstanceReadReplicaRequest generates a "aws/request.Request" representing the // client's request for the CreateDBInstanceReadReplica operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1500,7 +1689,8 @@ func (c *RDS) CreateDBInstanceReadReplicaRequest(input *CreateDBInstanceReadRepl // Creates a new DB instance that acts as a Read Replica for an existing source // DB instance. You can create a Read Replica for a DB instance running MySQL, // MariaDB, or PostgreSQL. For more information, see Working with PostgreSQL, -// MySQL, and MariaDB Read Replicas (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html). +// MySQL, and MariaDB Read Replicas (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html) +// in the Amazon RDS User Guide. // // Amazon Aurora doesn't support this action. You must call the CreateDBInstance // action to create a DB instance for an Aurora DB cluster. @@ -1520,33 +1710,33 @@ func (c *RDS) CreateDBInstanceReadReplicaRequest(input *CreateDBInstanceReadRepl // // Returned Error Codes: // * ErrCodeDBInstanceAlreadyExistsFault "DBInstanceAlreadyExists" -// User already has a DB instance with the given identifier. +// The user already has a DB instance with the given identifier. // // * ErrCodeInsufficientDBInstanceCapacityFault "InsufficientDBInstanceCapacity" -// Specified DB instance class is not available in the specified Availability +// The specified DB instance class isn't available in the specified Availability // Zone. // // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // * ErrCodeInstanceQuotaExceededFault "InstanceQuotaExceeded" -// Request would result in user exceeding the allowed number of DB instances. +// The request would result in the user exceeding the allowed number of DB instances. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. 
// // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs "DBSubnetGroupDoesNotCoverEnoughAZs" // Subnets in the DB subnet group should cover at least two Availability Zones @@ -1557,8 +1747,8 @@ func (c *RDS) CreateDBInstanceReadReplicaRequest(input *CreateDBInstanceReadRepl // are not all in a common VPC. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeProvisionedIopsNotAvailableInAZFault "ProvisionedIopsNotAvailableInAZFault" // Provisioned IOPS not available in the specified Availability Zone. @@ -1567,18 +1757,18 @@ func (c *RDS) CreateDBInstanceReadReplicaRequest(input *CreateDBInstanceReadRepl // The specified option group could not be found. // // * ErrCodeDBSubnetGroupNotAllowedFault "DBSubnetGroupNotAllowedFault" -// Indicates that the DBSubnetGroup should not be specified while creating read -// replicas that lie in the same region as the source instance. +// The DBSubnetGroup shouldn't be specified while creating read replicas that +// lie in the same region as the source instance. // // * ErrCodeInvalidDBSubnetGroupFault "InvalidDBSubnetGroupFault" -// Indicates the DBSubnetGroup does not belong to the same VPC as that of an -// existing cross region read replica of the same source instance. +// The DBSubnetGroup doesn't belong to the same VPC as that of an existing cross-region +// read replica of the same source instance. // // * ErrCodeStorageTypeNotSupportedFault "StorageTypeNotSupported" -// StorageType specified cannot be associated with the DB Instance. +// Storage of the StorageType specified can't be associated with the DB instance. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CreateDBInstanceReadReplica func (c *RDS) CreateDBInstanceReadReplica(input *CreateDBInstanceReadReplicaInput) (*CreateDBInstanceReadReplicaOutput, error) { @@ -1606,8 +1796,8 @@ const opCreateDBParameterGroup = "CreateDBParameterGroup" // CreateDBParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateDBParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1677,7 +1867,7 @@ func (c *RDS) CreateDBParameterGroupRequest(input *CreateDBParameterGroupInput) // // Returned Error Codes: // * ErrCodeDBParameterGroupQuotaExceededFault "DBParameterGroupQuotaExceeded" -// Request would result in user exceeding the allowed number of DB parameter +// The request would result in the user exceeding the allowed number of DB parameter // groups. 
// // * ErrCodeDBParameterGroupAlreadyExistsFault "DBParameterGroupAlreadyExists" @@ -1709,8 +1899,8 @@ const opCreateDBSecurityGroup = "CreateDBSecurityGroup" // CreateDBSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateDBSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1752,6 +1942,9 @@ func (c *RDS) CreateDBSecurityGroupRequest(input *CreateDBSecurityGroupInput) (r // Creates a new DB security group. DB security groups control access to a DB // instance. // +// A DB security group controls access to EC2-Classic DB instances that are +// not in a VPC. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1765,11 +1958,11 @@ func (c *RDS) CreateDBSecurityGroupRequest(input *CreateDBSecurityGroupInput) (r // exists. // // * ErrCodeDBSecurityGroupQuotaExceededFault "QuotaExceeded.DBSecurityGroup" -// Request would result in user exceeding the allowed number of DB security +// The request would result in the user exceeding the allowed number of DB security // groups. // // * ErrCodeDBSecurityGroupNotSupportedFault "DBSecurityGroupNotSupported" -// A DB security group is not allowed for this action. +// A DB security group isn't allowed for this action. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CreateDBSecurityGroup func (c *RDS) CreateDBSecurityGroup(input *CreateDBSecurityGroupInput) (*CreateDBSecurityGroupOutput, error) { @@ -1797,8 +1990,8 @@ const opCreateDBSnapshot = "CreateDBSnapshot" // CreateDBSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateDBSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1851,13 +2044,13 @@ func (c *RDS) CreateDBSnapshotRequest(input *CreateDBSnapshotInput) (req *reques // DBSnapshotIdentifier is already used by an existing snapshot. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeSnapshotQuotaExceededFault "SnapshotQuotaExceeded" -// Request would result in user exceeding the allowed number of DB snapshots. +// The request would result in the user exceeding the allowed number of DB snapshots. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/CreateDBSnapshot func (c *RDS) CreateDBSnapshot(input *CreateDBSnapshotInput) (*CreateDBSnapshotOutput, error) { @@ -1885,8 +2078,8 @@ const opCreateDBSubnetGroup = "CreateDBSubnetGroup" // CreateDBSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateDBSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1940,11 +2133,12 @@ func (c *RDS) CreateDBSubnetGroupRequest(input *CreateDBSubnetGroupInput) (req * // DBSubnetGroupName is already used by an existing DB subnet group. // // * ErrCodeDBSubnetGroupQuotaExceededFault "DBSubnetGroupQuotaExceeded" -// Request would result in user exceeding the allowed number of DB subnet groups. +// The request would result in the user exceeding the allowed number of DB subnet +// groups. // // * ErrCodeDBSubnetQuotaExceededFault "DBSubnetQuotaExceededFault" -// Request would result in user exceeding the allowed number of subnets in a -// DB subnet groups. +// The request would result in the user exceeding the allowed number of subnets +// in a DB subnet groups. // // * ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs "DBSubnetGroupDoesNotCoverEnoughAZs" // Subnets in the DB subnet group should cover at least two Availability Zones @@ -1980,8 +2174,8 @@ const opCreateEventSubscription = "CreateEventSubscription" // CreateEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the CreateEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2094,8 +2288,8 @@ const opCreateOptionGroup = "CreateOptionGroup" // CreateOptionGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateOptionGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2176,8 +2370,8 @@ const opDeleteDBCluster = "DeleteDBCluster" // DeleteDBClusterRequest generates a "aws/request.Request" representing the // client's request for the DeleteDBCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2221,7 +2415,7 @@ func (c *RDS) DeleteDBClusterRequest(input *DeleteDBClusterInput) (req *request. // and can't be recovered. Manual DB cluster snapshots of the specified DB cluster // are not deleted. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html)in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html)in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2232,19 +2426,19 @@ func (c *RDS) DeleteDBClusterRequest(input *DeleteDBClusterInput) (req *request. // // Returned Error Codes: // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeDBClusterSnapshotAlreadyExistsFault "DBClusterSnapshotAlreadyExistsFault" -// User already has a DB cluster snapshot with the given identifier. +// The user already has a DB cluster snapshot with the given identifier. // // * ErrCodeSnapshotQuotaExceededFault "SnapshotQuotaExceeded" -// Request would result in user exceeding the allowed number of DB snapshots. +// The request would result in the user exceeding the allowed number of DB snapshots. // // * ErrCodeInvalidDBClusterSnapshotStateFault "InvalidDBClusterSnapshotStateFault" -// The supplied value is not a valid DB cluster snapshot state. +// The supplied value isn't a valid DB cluster snapshot state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBCluster func (c *RDS) DeleteDBCluster(input *DeleteDBClusterInput) (*DeleteDBClusterOutput, error) { @@ -2268,12 +2462,98 @@ func (c *RDS) DeleteDBClusterWithContext(ctx aws.Context, input *DeleteDBCluster return out, req.Send() } +const opDeleteDBClusterEndpoint = "DeleteDBClusterEndpoint" + +// DeleteDBClusterEndpointRequest generates a "aws/request.Request" representing the +// client's request for the DeleteDBClusterEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteDBClusterEndpoint for more information on using the DeleteDBClusterEndpoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteDBClusterEndpointRequest method. 
+// req, resp := client.DeleteDBClusterEndpointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBClusterEndpoint +func (c *RDS) DeleteDBClusterEndpointRequest(input *DeleteDBClusterEndpointInput) (req *request.Request, output *DeleteDBClusterEndpointOutput) { + op := &request.Operation{ + Name: opDeleteDBClusterEndpoint, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteDBClusterEndpointInput{} + } + + output = &DeleteDBClusterEndpointOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteDBClusterEndpoint API operation for Amazon Relational Database Service. +// +// Deletes a custom endpoint and removes it from an Amazon Aurora DB cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation DeleteDBClusterEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidDBClusterEndpointStateFault "InvalidDBClusterEndpointStateFault" +// The requested operation can't be performed on the endpoint while the endpoint +// is in this state. +// +// * ErrCodeDBClusterEndpointNotFoundFault "DBClusterEndpointNotFoundFault" +// The specified custom endpoint doesn't exist. +// +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The requested operation can't be performed while the cluster is in this state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBClusterEndpoint +func (c *RDS) DeleteDBClusterEndpoint(input *DeleteDBClusterEndpointInput) (*DeleteDBClusterEndpointOutput, error) { + req, out := c.DeleteDBClusterEndpointRequest(input) + return out, req.Send() +} + +// DeleteDBClusterEndpointWithContext is the same as DeleteDBClusterEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteDBClusterEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) DeleteDBClusterEndpointWithContext(ctx aws.Context, input *DeleteDBClusterEndpointInput, opts ...request.Option) (*DeleteDBClusterEndpointOutput, error) { + req, out := c.DeleteDBClusterEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteDBClusterParameterGroup = "DeleteDBClusterParameterGroup" // DeleteDBClusterParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteDBClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
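As an illustrative sketch (not part of the vendored file), calling the new `DeleteDBClusterEndpoint` operation follows the same client pattern its godoc describes above; the `DBClusterEndpointIdentifier` field name is assumed here, since the input struct is defined further down in the generated file:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	// Shared session and RDS client, the usual aws-sdk-go setup.
	sess := session.Must(session.NewSession())
	svc := rds.New(sess)

	// DBClusterEndpointIdentifier is an assumed field name; see the
	// DeleteDBClusterEndpointInput definition for the authoritative shape.
	out, err := svc.DeleteDBClusterEndpoint(&rds.DeleteDBClusterEndpointInput{
		DBClusterEndpointIdentifier: aws.String("example-custom-endpoint"),
	})
	if err != nil {
		// Service errors surface as awserr.Error, as documented above.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == rds.ErrCodeDBClusterEndpointNotFoundFault {
			log.Printf("custom endpoint already gone: %s", aerr.Message())
			return
		}
		log.Fatal(err)
	}
	fmt.Println(out)
}
```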
@@ -2317,8 +2597,8 @@ func (c *RDS) DeleteDBClusterParameterGroupRequest(input *DeleteDBClusterParamet // Deletes a specified DB cluster parameter group. The DB cluster parameter // group to be deleted can't be associated with any DB clusters. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2330,11 +2610,11 @@ func (c *RDS) DeleteDBClusterParameterGroupRequest(input *DeleteDBClusterParamet // Returned Error Codes: // * ErrCodeInvalidDBParameterGroupStateFault "InvalidDBParameterGroupState" // The DB parameter group is in use or is in an invalid state. If you are attempting -// to delete the parameter group, you cannot delete it when the parameter group +// to delete the parameter group, you can't delete it when the parameter group // is in this state. // // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBClusterParameterGroup func (c *RDS) DeleteDBClusterParameterGroup(input *DeleteDBClusterParameterGroupInput) (*DeleteDBClusterParameterGroupOutput, error) { @@ -2362,8 +2642,8 @@ const opDeleteDBClusterSnapshot = "DeleteDBClusterSnapshot" // DeleteDBClusterSnapshotRequest generates a "aws/request.Request" representing the // client's request for the DeleteDBClusterSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2407,8 +2687,8 @@ func (c *RDS) DeleteDBClusterSnapshotRequest(input *DeleteDBClusterSnapshotInput // // The DB cluster snapshot must be in the available state to be deleted. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2419,10 +2699,10 @@ func (c *RDS) DeleteDBClusterSnapshotRequest(input *DeleteDBClusterSnapshotInput // // Returned Error Codes: // * ErrCodeInvalidDBClusterSnapshotStateFault "InvalidDBClusterSnapshotStateFault" -// The supplied value is not a valid DB cluster snapshot state. +// The supplied value isn't a valid DB cluster snapshot state. 
// // * ErrCodeDBClusterSnapshotNotFoundFault "DBClusterSnapshotNotFoundFault" -// DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. +// DBClusterSnapshotIdentifier doesn't refer to an existing DB cluster snapshot. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBClusterSnapshot func (c *RDS) DeleteDBClusterSnapshot(input *DeleteDBClusterSnapshotInput) (*DeleteDBClusterSnapshotOutput, error) { @@ -2450,8 +2730,8 @@ const opDeleteDBInstance = "DeleteDBInstance" // DeleteDBInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeleteDBInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2525,19 +2805,24 @@ func (c *RDS) DeleteDBInstanceRequest(input *DeleteDBInstanceInput) (req *reques // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeDBSnapshotAlreadyExistsFault "DBSnapshotAlreadyExists" // DBSnapshotIdentifier is already used by an existing snapshot. // // * ErrCodeSnapshotQuotaExceededFault "SnapshotQuotaExceeded" -// Request would result in user exceeding the allowed number of DB snapshots. +// The request would result in the user exceeding the allowed number of DB snapshots. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. +// +// * ErrCodeDBInstanceAutomatedBackupQuotaExceededFault "DBInstanceAutomatedBackupQuotaExceeded" +// The quota for retained automated backups was exceeded. This prevents you +// from retaining any additional automated backups. The retained automated backups +// quota is the same as your DB Instance quota. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBInstance func (c *RDS) DeleteDBInstance(input *DeleteDBInstanceInput) (*DeleteDBInstanceOutput, error) { @@ -2561,12 +2846,96 @@ func (c *RDS) DeleteDBInstanceWithContext(ctx aws.Context, input *DeleteDBInstan return out, req.Send() } +const opDeleteDBInstanceAutomatedBackup = "DeleteDBInstanceAutomatedBackup" + +// DeleteDBInstanceAutomatedBackupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteDBInstanceAutomatedBackup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteDBInstanceAutomatedBackup for more information on using the DeleteDBInstanceAutomatedBackup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. 
Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteDBInstanceAutomatedBackupRequest method. +// req, resp := client.DeleteDBInstanceAutomatedBackupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBInstanceAutomatedBackup +func (c *RDS) DeleteDBInstanceAutomatedBackupRequest(input *DeleteDBInstanceAutomatedBackupInput) (req *request.Request, output *DeleteDBInstanceAutomatedBackupOutput) { + op := &request.Operation{ + Name: opDeleteDBInstanceAutomatedBackup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteDBInstanceAutomatedBackupInput{} + } + + output = &DeleteDBInstanceAutomatedBackupOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteDBInstanceAutomatedBackup API operation for Amazon Relational Database Service. +// +// Deletes automated backups based on the source instance's DbiResourceId value +// or the restorable instance's resource ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation DeleteDBInstanceAutomatedBackup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidDBInstanceAutomatedBackupStateFault "InvalidDBInstanceAutomatedBackupState" +// The automated backup is in an invalid state. For example, this automated +// backup is associated with an active instance. +// +// * ErrCodeDBInstanceAutomatedBackupNotFoundFault "DBInstanceAutomatedBackupNotFound" +// No automated backup for this DB instance was found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBInstanceAutomatedBackup +func (c *RDS) DeleteDBInstanceAutomatedBackup(input *DeleteDBInstanceAutomatedBackupInput) (*DeleteDBInstanceAutomatedBackupOutput, error) { + req, out := c.DeleteDBInstanceAutomatedBackupRequest(input) + return out, req.Send() +} + +// DeleteDBInstanceAutomatedBackupWithContext is the same as DeleteDBInstanceAutomatedBackup with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteDBInstanceAutomatedBackup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) DeleteDBInstanceAutomatedBackupWithContext(ctx aws.Context, input *DeleteDBInstanceAutomatedBackupInput, opts ...request.Option) (*DeleteDBInstanceAutomatedBackupOutput, error) { + req, out := c.DeleteDBInstanceAutomatedBackupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteDBParameterGroup = "DeleteDBParameterGroup" // DeleteDBParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteDBParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
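As a rough usage sketch (not part of the vendored file), `DeleteDBInstanceAutomatedBackup` is keyed off the source instance's `DbiResourceId` mentioned in the operation description above; the exact input field name is assumed from that description:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	svc := rds.New(session.Must(session.NewSession()))

	// Delete the retained automated backups of a deleted instance, identified
	// by its resource ID (DbiResourceId is an assumed field name).
	out, err := svc.DeleteDBInstanceAutomatedBackup(&rds.DeleteDBInstanceAutomatedBackupInput{
		DbiResourceId: aws.String("db-EXAMPLERESOURCEID"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```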
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2607,7 +2976,7 @@ func (c *RDS) DeleteDBParameterGroupRequest(input *DeleteDBParameterGroupInput) // DeleteDBParameterGroup API operation for Amazon Relational Database Service. // -// Deletes a specified DBParameterGroup. The DBParameterGroup to be deleted +// Deletes a specified DB parameter group. The DB parameter group to be deleted // can't be associated with any DB instances. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2620,11 +2989,11 @@ func (c *RDS) DeleteDBParameterGroupRequest(input *DeleteDBParameterGroupInput) // Returned Error Codes: // * ErrCodeInvalidDBParameterGroupStateFault "InvalidDBParameterGroupState" // The DB parameter group is in use or is in an invalid state. If you are attempting -// to delete the parameter group, you cannot delete it when the parameter group +// to delete the parameter group, you can't delete it when the parameter group // is in this state. // // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBParameterGroup func (c *RDS) DeleteDBParameterGroup(input *DeleteDBParameterGroupInput) (*DeleteDBParameterGroupOutput, error) { @@ -2652,8 +3021,8 @@ const opDeleteDBSecurityGroup = "DeleteDBSecurityGroup" // DeleteDBSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteDBSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2707,10 +3076,10 @@ func (c *RDS) DeleteDBSecurityGroupRequest(input *DeleteDBSecurityGroupInput) (r // // Returned Error Codes: // * ErrCodeInvalidDBSecurityGroupStateFault "InvalidDBSecurityGroupState" -// The state of the DB security group does not allow deletion. +// The state of the DB security group doesn't allow deletion. // // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBSecurityGroup func (c *RDS) DeleteDBSecurityGroup(input *DeleteDBSecurityGroupInput) (*DeleteDBSecurityGroupOutput, error) { @@ -2738,8 +3107,8 @@ const opDeleteDBSnapshot = "DeleteDBSnapshot" // DeleteDBSnapshotRequest generates a "aws/request.Request" representing the // client's request for the DeleteDBSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2778,10 +3147,10 @@ func (c *RDS) DeleteDBSnapshotRequest(input *DeleteDBSnapshotInput) (req *reques // DeleteDBSnapshot API operation for Amazon Relational Database Service. // -// Deletes a DBSnapshot. If the snapshot is being copied, the copy operation +// Deletes a DB snapshot. If the snapshot is being copied, the copy operation // is terminated. // -// The DBSnapshot must be in the available state to be deleted. +// The DB snapshot must be in the available state to be deleted. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2792,10 +3161,10 @@ func (c *RDS) DeleteDBSnapshotRequest(input *DeleteDBSnapshotInput) (req *reques // // Returned Error Codes: // * ErrCodeInvalidDBSnapshotStateFault "InvalidDBSnapshotState" -// The state of the DB snapshot does not allow deletion. +// The state of the DB snapshot doesn't allow deletion. // // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBSnapshot func (c *RDS) DeleteDBSnapshot(input *DeleteDBSnapshotInput) (*DeleteDBSnapshotOutput, error) { @@ -2823,8 +3192,8 @@ const opDeleteDBSubnetGroup = "DeleteDBSubnetGroup" // DeleteDBSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteDBSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2878,13 +3247,13 @@ func (c *RDS) DeleteDBSubnetGroupRequest(input *DeleteDBSubnetGroupInput) (req * // // Returned Error Codes: // * ErrCodeInvalidDBSubnetGroupStateFault "InvalidDBSubnetGroupStateFault" -// The DB subnet group cannot be deleted because it is in use. +// The DB subnet group cannot be deleted because it's in use. // // * ErrCodeInvalidDBSubnetStateFault "InvalidDBSubnetStateFault" -// The DB subnet is not in the available state. +// The DB subnet isn't in the available state. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteDBSubnetGroup func (c *RDS) DeleteDBSubnetGroup(input *DeleteDBSubnetGroupInput) (*DeleteDBSubnetGroupOutput, error) { @@ -2912,8 +3281,8 @@ const opDeleteEventSubscription = "DeleteEventSubscription" // DeleteEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the DeleteEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2995,8 +3364,8 @@ const opDeleteOptionGroup = "DeleteOptionGroup" // DeleteOptionGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteOptionGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3051,7 +3420,7 @@ func (c *RDS) DeleteOptionGroupRequest(input *DeleteOptionGroupInput) (req *requ // The specified option group could not be found. // // * ErrCodeInvalidOptionGroupStateFault "InvalidOptionGroupStateFault" -// The option group is not in the available state. +// The option group isn't in the available state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DeleteOptionGroup func (c *RDS) DeleteOptionGroup(input *DeleteOptionGroupInput) (*DeleteOptionGroupOutput, error) { @@ -3079,8 +3448,8 @@ const opDescribeAccountAttributes = "DescribeAccountAttributes" // DescribeAccountAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAccountAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3158,8 +3527,8 @@ const opDescribeCertificates = "DescribeCertificates" // DescribeCertificatesRequest generates a "aws/request.Request" representing the // client's request for the DescribeCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3209,7 +3578,7 @@ func (c *RDS) DescribeCertificatesRequest(input *DescribeCertificatesInput) (req // // Returned Error Codes: // * ErrCodeCertificateNotFoundFault "CertificateNotFound" -// CertificateIdentifier does not refer to an existing certificate. +// CertificateIdentifier doesn't refer to an existing certificate. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeCertificates func (c *RDS) DescribeCertificates(input *DescribeCertificatesInput) (*DescribeCertificatesOutput, error) { @@ -3233,17 +3602,181 @@ func (c *RDS) DescribeCertificatesWithContext(ctx aws.Context, input *DescribeCe return out, req.Send() } -const opDescribeDBClusterParameterGroups = "DescribeDBClusterParameterGroups" +const opDescribeDBClusterBacktracks = "DescribeDBClusterBacktracks" -// DescribeDBClusterParameterGroupsRequest generates a "aws/request.Request" representing the -// client's request for the DescribeDBClusterParameterGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// DescribeDBClusterBacktracksRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDBClusterBacktracks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeDBClusterParameterGroups for more information on using the DescribeDBClusterParameterGroups +// See DescribeDBClusterBacktracks for more information on using the DescribeDBClusterBacktracks +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeDBClusterBacktracksRequest method. +// req, resp := client.DescribeDBClusterBacktracksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusterBacktracks +func (c *RDS) DescribeDBClusterBacktracksRequest(input *DescribeDBClusterBacktracksInput) (req *request.Request, output *DescribeDBClusterBacktracksOutput) { + op := &request.Operation{ + Name: opDescribeDBClusterBacktracks, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeDBClusterBacktracksInput{} + } + + output = &DescribeDBClusterBacktracksOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeDBClusterBacktracks API operation for Amazon Relational Database Service. +// +// Returns information about backtracks for a DB cluster. +// +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation DescribeDBClusterBacktracks for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier doesn't refer to an existing DB cluster. +// +// * ErrCodeDBClusterBacktrackNotFoundFault "DBClusterBacktrackNotFoundFault" +// BacktrackIdentifier doesn't refer to an existing backtrack. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusterBacktracks +func (c *RDS) DescribeDBClusterBacktracks(input *DescribeDBClusterBacktracksInput) (*DescribeDBClusterBacktracksOutput, error) { + req, out := c.DescribeDBClusterBacktracksRequest(input) + return out, req.Send() +} + +// DescribeDBClusterBacktracksWithContext is the same as DescribeDBClusterBacktracks with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeDBClusterBacktracks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *RDS) DescribeDBClusterBacktracksWithContext(ctx aws.Context, input *DescribeDBClusterBacktracksInput, opts ...request.Option) (*DescribeDBClusterBacktracksOutput, error) { + req, out := c.DescribeDBClusterBacktracksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeDBClusterEndpoints = "DescribeDBClusterEndpoints" + +// DescribeDBClusterEndpointsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDBClusterEndpoints operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeDBClusterEndpoints for more information on using the DescribeDBClusterEndpoints +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeDBClusterEndpointsRequest method. +// req, resp := client.DescribeDBClusterEndpointsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusterEndpoints +func (c *RDS) DescribeDBClusterEndpointsRequest(input *DescribeDBClusterEndpointsInput) (req *request.Request, output *DescribeDBClusterEndpointsOutput) { + op := &request.Operation{ + Name: opDescribeDBClusterEndpoints, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeDBClusterEndpointsInput{} + } + + output = &DescribeDBClusterEndpointsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeDBClusterEndpoints API operation for Amazon Relational Database Service. +// +// Returns information about endpoints for an Amazon Aurora DB cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation DescribeDBClusterEndpoints for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier doesn't refer to an existing DB cluster. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusterEndpoints +func (c *RDS) DescribeDBClusterEndpoints(input *DescribeDBClusterEndpointsInput) (*DescribeDBClusterEndpointsOutput, error) { + req, out := c.DescribeDBClusterEndpointsRequest(input) + return out, req.Send() +} + +// DescribeDBClusterEndpointsWithContext is the same as DescribeDBClusterEndpoints with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeDBClusterEndpoints for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
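Both new describe operations above also ship `WithContext` variants, which is how a caller bounds the request with a deadline. A minimal sketch (not part of the vendored file), assuming a `DBClusterIdentifier` filter field on `DescribeDBClusterEndpointsInput`:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	svc := rds.New(session.Must(session.NewSession()))

	// A standard context satisfies aws.Context; the request is cancelled
	// if the timeout expires before the call returns.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// DBClusterIdentifier is an assumed filter field; check the generated
	// DescribeDBClusterEndpointsInput struct for the authoritative shape.
	out, err := svc.DescribeDBClusterEndpointsWithContext(ctx, &rds.DescribeDBClusterEndpointsInput{
		DBClusterIdentifier: aws.String("example-aurora-cluster"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```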
+func (c *RDS) DescribeDBClusterEndpointsWithContext(ctx aws.Context, input *DescribeDBClusterEndpointsInput, opts ...request.Option) (*DescribeDBClusterEndpointsOutput, error) { + req, out := c.DescribeDBClusterEndpointsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeDBClusterParameterGroups = "DescribeDBClusterParameterGroups" + +// DescribeDBClusterParameterGroupsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDBClusterParameterGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeDBClusterParameterGroups for more information on using the DescribeDBClusterParameterGroups // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration @@ -3281,8 +3814,8 @@ func (c *RDS) DescribeDBClusterParameterGroupsRequest(input *DescribeDBClusterPa // parameter is specified, the list will contain only the description of the // specified DB cluster parameter group. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3293,7 +3826,7 @@ func (c *RDS) DescribeDBClusterParameterGroupsRequest(input *DescribeDBClusterPa // // Returned Error Codes: // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusterParameterGroups func (c *RDS) DescribeDBClusterParameterGroups(input *DescribeDBClusterParameterGroupsInput) (*DescribeDBClusterParameterGroupsOutput, error) { @@ -3321,8 +3854,8 @@ const opDescribeDBClusterParameters = "DescribeDBClusterParameters" // DescribeDBClusterParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBClusterParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3364,8 +3897,8 @@ func (c *RDS) DescribeDBClusterParametersRequest(input *DescribeDBClusterParamet // Returns the detailed parameter list for a particular DB cluster parameter // group. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? 
(http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3376,7 +3909,7 @@ func (c *RDS) DescribeDBClusterParametersRequest(input *DescribeDBClusterParamet // // Returned Error Codes: // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusterParameters func (c *RDS) DescribeDBClusterParameters(input *DescribeDBClusterParametersInput) (*DescribeDBClusterParametersOutput, error) { @@ -3404,8 +3937,8 @@ const opDescribeDBClusterSnapshotAttributes = "DescribeDBClusterSnapshotAttribut // DescribeDBClusterSnapshotAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBClusterSnapshotAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3466,7 +3999,7 @@ func (c *RDS) DescribeDBClusterSnapshotAttributesRequest(input *DescribeDBCluste // // Returned Error Codes: // * ErrCodeDBClusterSnapshotNotFoundFault "DBClusterSnapshotNotFoundFault" -// DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. +// DBClusterSnapshotIdentifier doesn't refer to an existing DB cluster snapshot. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusterSnapshotAttributes func (c *RDS) DescribeDBClusterSnapshotAttributes(input *DescribeDBClusterSnapshotAttributesInput) (*DescribeDBClusterSnapshotAttributesOutput, error) { @@ -3494,8 +4027,8 @@ const opDescribeDBClusterSnapshots = "DescribeDBClusterSnapshots" // DescribeDBClusterSnapshotsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBClusterSnapshots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3537,8 +4070,8 @@ func (c *RDS) DescribeDBClusterSnapshotsRequest(input *DescribeDBClusterSnapshot // Returns information about DB cluster snapshots. This API action supports // pagination. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3549,7 +4082,7 @@ func (c *RDS) DescribeDBClusterSnapshotsRequest(input *DescribeDBClusterSnapshot // // Returned Error Codes: // * ErrCodeDBClusterSnapshotNotFoundFault "DBClusterSnapshotNotFoundFault" -// DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. +// DBClusterSnapshotIdentifier doesn't refer to an existing DB cluster snapshot. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusterSnapshots func (c *RDS) DescribeDBClusterSnapshots(input *DescribeDBClusterSnapshotsInput) (*DescribeDBClusterSnapshotsOutput, error) { @@ -3577,8 +4110,8 @@ const opDescribeDBClusters = "DescribeDBClusters" // DescribeDBClustersRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBClusters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3604,6 +4137,12 @@ func (c *RDS) DescribeDBClustersRequest(input *DescribeDBClustersInput) (req *re Name: opDescribeDBClusters, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxRecords", + TruncationToken: "", + }, } if input == nil { @@ -3620,8 +4159,8 @@ func (c *RDS) DescribeDBClustersRequest(input *DescribeDBClustersInput) (req *re // Returns information about provisioned Aurora DB clusters. This API supports // pagination. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3632,7 +4171,7 @@ func (c *RDS) DescribeDBClustersRequest(input *DescribeDBClustersInput) (req *re // // Returned Error Codes: // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBClusters func (c *RDS) DescribeDBClusters(input *DescribeDBClustersInput) (*DescribeDBClustersOutput, error) { @@ -3656,12 +4195,62 @@ func (c *RDS) DescribeDBClustersWithContext(ctx aws.Context, input *DescribeDBCl return out, req.Send() } +// DescribeDBClustersPages iterates over the pages of a DescribeDBClusters operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeDBClusters method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeDBClusters operation. 
+// pageNum := 0 +// err := client.DescribeDBClustersPages(params, +// func(page *DescribeDBClustersOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *RDS) DescribeDBClustersPages(input *DescribeDBClustersInput, fn func(*DescribeDBClustersOutput, bool) bool) error { + return c.DescribeDBClustersPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeDBClustersPagesWithContext same as DescribeDBClustersPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) DescribeDBClustersPagesWithContext(ctx aws.Context, input *DescribeDBClustersInput, fn func(*DescribeDBClustersOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeDBClustersInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeDBClustersRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeDBClustersOutput), !p.HasNextPage()) + } + return p.Err() +} + const opDescribeDBEngineVersions = "DescribeDBEngineVersions" // DescribeDBEngineVersionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBEngineVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3786,35 +4375,35 @@ func (c *RDS) DescribeDBEngineVersionsPagesWithContext(ctx aws.Context, input *D return p.Err() } -const opDescribeDBInstances = "DescribeDBInstances" +const opDescribeDBInstanceAutomatedBackups = "DescribeDBInstanceAutomatedBackups" -// DescribeDBInstancesRequest generates a "aws/request.Request" representing the -// client's request for the DescribeDBInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeDBInstanceAutomatedBackupsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDBInstanceAutomatedBackups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeDBInstances for more information on using the DescribeDBInstances +// See DescribeDBInstanceAutomatedBackups for more information on using the DescribeDBInstanceAutomatedBackups // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeDBInstancesRequest method. 
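The new `DescribeDBClustersPages` helper wraps the `Marker`/`MaxRecords` paginator configured above so callers never track tokens themselves. A minimal sketch (not part of the vendored file), assuming the conventional `DBClusters` and `DBClusterIdentifier` field names on the output:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	svc := rds.New(session.Must(session.NewSession()))

	// MaxRecords maps to the paginator's LimitToken; the SDK follows the
	// Marker token across pages until the result set is exhausted.
	input := &rds.DescribeDBClustersInput{MaxRecords: aws.Int64(20)}

	err := svc.DescribeDBClustersPages(input,
		func(page *rds.DescribeDBClustersOutput, lastPage bool) bool {
			// DBClusters and DBClusterIdentifier are assumed field names.
			for _, cluster := range page.DBClusters {
				fmt.Println(aws.StringValue(cluster.DBClusterIdentifier))
			}
			return true // keep paging; return false to stop early
		})
	if err != nil {
		log.Fatal(err)
	}
}
```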
-// req, resp := client.DescribeDBInstancesRequest(params) +// // Example sending a request using the DescribeDBInstanceAutomatedBackupsRequest method. +// req, resp := client.DescribeDBInstanceAutomatedBackupsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBInstances -func (c *RDS) DescribeDBInstancesRequest(input *DescribeDBInstancesInput) (req *request.Request, output *DescribeDBInstancesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBInstanceAutomatedBackups +func (c *RDS) DescribeDBInstanceAutomatedBackupsRequest(input *DescribeDBInstanceAutomatedBackupsInput) (req *request.Request, output *DescribeDBInstanceAutomatedBackupsOutput) { op := &request.Operation{ - Name: opDescribeDBInstances, + Name: opDescribeDBInstanceAutomatedBackups, HTTPMethod: "POST", HTTPPath: "/", Paginator: &request.Paginator{ @@ -3826,88 +4415,94 @@ func (c *RDS) DescribeDBInstancesRequest(input *DescribeDBInstancesInput) (req * } if input == nil { - input = &DescribeDBInstancesInput{} + input = &DescribeDBInstanceAutomatedBackupsInput{} } - output = &DescribeDBInstancesOutput{} + output = &DescribeDBInstanceAutomatedBackupsOutput{} req = c.newRequest(op, input, output) return } -// DescribeDBInstances API operation for Amazon Relational Database Service. +// DescribeDBInstanceAutomatedBackups API operation for Amazon Relational Database Service. // -// Returns information about provisioned RDS instances. This API supports pagination. +// Displays backups for both current and deleted instances. For example, use +// this operation to find details about automated backups for previously deleted +// instances. Current instances with retention periods greater than zero (0) +// are returned for both the DescribeDBInstanceAutomatedBackups and DescribeDBInstances +// operations. +// +// All parameters are optional. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Relational Database Service's -// API operation DescribeDBInstances for usage and error information. +// API operation DescribeDBInstanceAutomatedBackups for usage and error information. // // Returned Error Codes: -// * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// * ErrCodeDBInstanceAutomatedBackupNotFoundFault "DBInstanceAutomatedBackupNotFound" +// No automated backup for this DB instance was found. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBInstances -func (c *RDS) DescribeDBInstances(input *DescribeDBInstancesInput) (*DescribeDBInstancesOutput, error) { - req, out := c.DescribeDBInstancesRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBInstanceAutomatedBackups +func (c *RDS) DescribeDBInstanceAutomatedBackups(input *DescribeDBInstanceAutomatedBackupsInput) (*DescribeDBInstanceAutomatedBackupsOutput, error) { + req, out := c.DescribeDBInstanceAutomatedBackupsRequest(input) return out, req.Send() } -// DescribeDBInstancesWithContext is the same as DescribeDBInstances with the addition of +// DescribeDBInstanceAutomatedBackupsWithContext is the same as DescribeDBInstanceAutomatedBackups with the addition of // the ability to pass a context and additional request options. // -// See DescribeDBInstances for details on how to use this API operation. +// See DescribeDBInstanceAutomatedBackups for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *RDS) DescribeDBInstancesWithContext(ctx aws.Context, input *DescribeDBInstancesInput, opts ...request.Option) (*DescribeDBInstancesOutput, error) { - req, out := c.DescribeDBInstancesRequest(input) +func (c *RDS) DescribeDBInstanceAutomatedBackupsWithContext(ctx aws.Context, input *DescribeDBInstanceAutomatedBackupsInput, opts ...request.Option) (*DescribeDBInstanceAutomatedBackupsOutput, error) { + req, out := c.DescribeDBInstanceAutomatedBackupsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// DescribeDBInstancesPages iterates over the pages of a DescribeDBInstances operation, +// DescribeDBInstanceAutomatedBackupsPages iterates over the pages of a DescribeDBInstanceAutomatedBackups operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See DescribeDBInstances method for more information on how to use this operation. +// See DescribeDBInstanceAutomatedBackups method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a DescribeDBInstances operation. +// // Example iterating over at most 3 pages of a DescribeDBInstanceAutomatedBackups operation. 
// pageNum := 0 -// err := client.DescribeDBInstancesPages(params, -// func(page *DescribeDBInstancesOutput, lastPage bool) bool { +// err := client.DescribeDBInstanceAutomatedBackupsPages(params, +// func(page *DescribeDBInstanceAutomatedBackupsOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) // -func (c *RDS) DescribeDBInstancesPages(input *DescribeDBInstancesInput, fn func(*DescribeDBInstancesOutput, bool) bool) error { - return c.DescribeDBInstancesPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *RDS) DescribeDBInstanceAutomatedBackupsPages(input *DescribeDBInstanceAutomatedBackupsInput, fn func(*DescribeDBInstanceAutomatedBackupsOutput, bool) bool) error { + return c.DescribeDBInstanceAutomatedBackupsPagesWithContext(aws.BackgroundContext(), input, fn) } -// DescribeDBInstancesPagesWithContext same as DescribeDBInstancesPages except +// DescribeDBInstanceAutomatedBackupsPagesWithContext same as DescribeDBInstanceAutomatedBackupsPages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *RDS) DescribeDBInstancesPagesWithContext(ctx aws.Context, input *DescribeDBInstancesInput, fn func(*DescribeDBInstancesOutput, bool) bool, opts ...request.Option) error { +func (c *RDS) DescribeDBInstanceAutomatedBackupsPagesWithContext(ctx aws.Context, input *DescribeDBInstanceAutomatedBackupsInput, fn func(*DescribeDBInstanceAutomatedBackupsOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ NewRequest: func() (*request.Request, error) { - var inCpy *DescribeDBInstancesInput + var inCpy *DescribeDBInstanceAutomatedBackupsInput if input != nil { tmp := *input inCpy = &tmp } - req, _ := c.DescribeDBInstancesRequest(inCpy) + req, _ := c.DescribeDBInstanceAutomatedBackupsRequest(inCpy) req.SetContext(ctx) req.ApplyOptions(opts...) return req, nil @@ -3916,40 +4511,40 @@ func (c *RDS) DescribeDBInstancesPagesWithContext(ctx aws.Context, input *Descri cont := true for p.Next() && cont { - cont = fn(p.Page().(*DescribeDBInstancesOutput), !p.HasNextPage()) + cont = fn(p.Page().(*DescribeDBInstanceAutomatedBackupsOutput), !p.HasNextPage()) } return p.Err() } -const opDescribeDBLogFiles = "DescribeDBLogFiles" +const opDescribeDBInstances = "DescribeDBInstances" -// DescribeDBLogFilesRequest generates a "aws/request.Request" representing the -// client's request for the DescribeDBLogFiles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeDBInstancesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDBInstances operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeDBLogFiles for more information on using the DescribeDBLogFiles +// See DescribeDBInstances for more information on using the DescribeDBInstances // API call, and error handling. 
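`DescribeDBInstanceAutomatedBackups` is documented above as taking only optional parameters and covering both current and deleted instances, and it gains the same `Pages` helpers. A minimal sketch (not part of the vendored file), assuming the `DBInstanceAutomatedBackups` output field name:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	svc := rds.New(session.Must(session.NewSession()))

	// All parameters are optional, so an empty input lists every retained
	// automated backup visible to the account.
	err := svc.DescribeDBInstanceAutomatedBackupsPages(&rds.DescribeDBInstanceAutomatedBackupsInput{},
		func(page *rds.DescribeDBInstanceAutomatedBackupsOutput, lastPage bool) bool {
			// DBInstanceAutomatedBackups is an assumed output field name.
			for _, backup := range page.DBInstanceAutomatedBackups {
				fmt.Println(aws.StringValue(backup.DbiResourceId))
			}
			return !lastPage
		})
	if err != nil {
		log.Fatal(err)
	}
}
```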
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeDBLogFilesRequest method. -// req, resp := client.DescribeDBLogFilesRequest(params) +// // Example sending a request using the DescribeDBInstancesRequest method. +// req, resp := client.DescribeDBInstancesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBLogFiles -func (c *RDS) DescribeDBLogFilesRequest(input *DescribeDBLogFilesInput) (req *request.Request, output *DescribeDBLogFilesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBInstances +func (c *RDS) DescribeDBInstancesRequest(input *DescribeDBInstancesInput) (req *request.Request, output *DescribeDBInstancesOutput) { op := &request.Operation{ - Name: opDescribeDBLogFiles, + Name: opDescribeDBInstances, HTTPMethod: "POST", HTTPPath: "/", Paginator: &request.Paginator{ @@ -3961,73 +4556,208 @@ func (c *RDS) DescribeDBLogFilesRequest(input *DescribeDBLogFilesInput) (req *re } if input == nil { - input = &DescribeDBLogFilesInput{} + input = &DescribeDBInstancesInput{} } - output = &DescribeDBLogFilesOutput{} + output = &DescribeDBInstancesOutput{} req = c.newRequest(op, input, output) return } -// DescribeDBLogFiles API operation for Amazon Relational Database Service. +// DescribeDBInstances API operation for Amazon Relational Database Service. // -// Returns a list of DB log files for the DB instance. +// Returns information about provisioned RDS instances. This API supports pagination. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Relational Database Service's -// API operation DescribeDBLogFiles for usage and error information. +// API operation DescribeDBInstances for usage and error information. // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBLogFiles -func (c *RDS) DescribeDBLogFiles(input *DescribeDBLogFilesInput) (*DescribeDBLogFilesOutput, error) { - req, out := c.DescribeDBLogFilesRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBInstances +func (c *RDS) DescribeDBInstances(input *DescribeDBInstancesInput) (*DescribeDBInstancesOutput, error) { + req, out := c.DescribeDBInstancesRequest(input) return out, req.Send() } -// DescribeDBLogFilesWithContext is the same as DescribeDBLogFiles with the addition of +// DescribeDBInstancesWithContext is the same as DescribeDBInstances with the addition of // the ability to pass a context and additional request options. // -// See DescribeDBLogFiles for details on how to use this API operation. +// See DescribeDBInstances for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *RDS) DescribeDBLogFilesWithContext(ctx aws.Context, input *DescribeDBLogFilesInput, opts ...request.Option) (*DescribeDBLogFilesOutput, error) { - req, out := c.DescribeDBLogFilesRequest(input) +func (c *RDS) DescribeDBInstancesWithContext(ctx aws.Context, input *DescribeDBInstancesInput, opts ...request.Option) (*DescribeDBInstancesOutput, error) { + req, out := c.DescribeDBInstancesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// DescribeDBLogFilesPages iterates over the pages of a DescribeDBLogFiles operation, +// DescribeDBInstancesPages iterates over the pages of a DescribeDBInstances operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See DescribeDBLogFiles method for more information on how to use this operation. +// See DescribeDBInstances method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a DescribeDBLogFiles operation. +// // Example iterating over at most 3 pages of a DescribeDBInstances operation. // pageNum := 0 -// err := client.DescribeDBLogFilesPages(params, -// func(page *DescribeDBLogFilesOutput, lastPage bool) bool { +// err := client.DescribeDBInstancesPages(params, +// func(page *DescribeDBInstancesOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) // -func (c *RDS) DescribeDBLogFilesPages(input *DescribeDBLogFilesInput, fn func(*DescribeDBLogFilesOutput, bool) bool) error { - return c.DescribeDBLogFilesPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *RDS) DescribeDBInstancesPages(input *DescribeDBInstancesInput, fn func(*DescribeDBInstancesOutput, bool) bool) error { + return c.DescribeDBInstancesPagesWithContext(aws.BackgroundContext(), input, fn) } -// DescribeDBLogFilesPagesWithContext same as DescribeDBLogFilesPages except +// DescribeDBInstancesPagesWithContext same as DescribeDBInstancesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) DescribeDBInstancesPagesWithContext(ctx aws.Context, input *DescribeDBInstancesInput, fn func(*DescribeDBInstancesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeDBInstancesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeDBInstancesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeDBInstancesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeDBLogFiles = "DescribeDBLogFiles" + +// DescribeDBLogFilesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDBLogFiles operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DescribeDBLogFiles for more information on using the DescribeDBLogFiles +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeDBLogFilesRequest method. +// req, resp := client.DescribeDBLogFilesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBLogFiles +func (c *RDS) DescribeDBLogFilesRequest(input *DescribeDBLogFilesInput) (req *request.Request, output *DescribeDBLogFilesOutput) { + op := &request.Operation{ + Name: opDescribeDBLogFiles, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxRecords", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeDBLogFilesInput{} + } + + output = &DescribeDBLogFilesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeDBLogFiles API operation for Amazon Relational Database Service. +// +// Returns a list of DB log files for the DB instance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation DescribeDBLogFiles for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" +// DBInstanceIdentifier doesn't refer to an existing DB instance. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBLogFiles +func (c *RDS) DescribeDBLogFiles(input *DescribeDBLogFilesInput) (*DescribeDBLogFilesOutput, error) { + req, out := c.DescribeDBLogFilesRequest(input) + return out, req.Send() +} + +// DescribeDBLogFilesWithContext is the same as DescribeDBLogFiles with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeDBLogFiles for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) DescribeDBLogFilesWithContext(ctx aws.Context, input *DescribeDBLogFilesInput, opts ...request.Option) (*DescribeDBLogFilesOutput, error) { + req, out := c.DescribeDBLogFilesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeDBLogFilesPages iterates over the pages of a DescribeDBLogFiles operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeDBLogFiles method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeDBLogFiles operation. 
+// pageNum := 0 +// err := client.DescribeDBLogFilesPages(params, +// func(page *DescribeDBLogFilesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *RDS) DescribeDBLogFilesPages(input *DescribeDBLogFilesInput, fn func(*DescribeDBLogFilesOutput, bool) bool) error { + return c.DescribeDBLogFilesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeDBLogFilesPagesWithContext same as DescribeDBLogFilesPages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If @@ -4060,8 +4790,8 @@ const opDescribeDBParameterGroups = "DescribeDBParameterGroups" // DescribeDBParameterGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBParameterGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4119,7 +4849,7 @@ func (c *RDS) DescribeDBParameterGroupsRequest(input *DescribeDBParameterGroupsI // // Returned Error Codes: // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBParameterGroups func (c *RDS) DescribeDBParameterGroups(input *DescribeDBParameterGroupsInput) (*DescribeDBParameterGroupsOutput, error) { @@ -4197,8 +4927,8 @@ const opDescribeDBParameters = "DescribeDBParameters" // DescribeDBParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4254,7 +4984,7 @@ func (c *RDS) DescribeDBParametersRequest(input *DescribeDBParametersInput) (req // // Returned Error Codes: // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBParameters func (c *RDS) DescribeDBParameters(input *DescribeDBParametersInput) (*DescribeDBParametersOutput, error) { @@ -4332,8 +5062,8 @@ const opDescribeDBSecurityGroups = "DescribeDBSecurityGroups" // DescribeDBSecurityGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -4391,7 +5121,7 @@ func (c *RDS) DescribeDBSecurityGroupsRequest(input *DescribeDBSecurityGroupsInp // // Returned Error Codes: // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBSecurityGroups func (c *RDS) DescribeDBSecurityGroups(input *DescribeDBSecurityGroupsInput) (*DescribeDBSecurityGroupsOutput, error) { @@ -4469,8 +5199,8 @@ const opDescribeDBSnapshotAttributes = "DescribeDBSnapshotAttributes" // DescribeDBSnapshotAttributesRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBSnapshotAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4531,7 +5261,7 @@ func (c *RDS) DescribeDBSnapshotAttributesRequest(input *DescribeDBSnapshotAttri // // Returned Error Codes: // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBSnapshotAttributes func (c *RDS) DescribeDBSnapshotAttributes(input *DescribeDBSnapshotAttributesInput) (*DescribeDBSnapshotAttributesOutput, error) { @@ -4559,8 +5289,8 @@ const opDescribeDBSnapshots = "DescribeDBSnapshots" // DescribeDBSnapshotsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBSnapshots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4616,7 +5346,7 @@ func (c *RDS) DescribeDBSnapshotsRequest(input *DescribeDBSnapshotsInput) (req * // // Returned Error Codes: // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBSnapshots func (c *RDS) DescribeDBSnapshots(input *DescribeDBSnapshotsInput) (*DescribeDBSnapshotsOutput, error) { @@ -4694,8 +5424,8 @@ const opDescribeDBSubnetGroups = "DescribeDBSubnetGroups" // DescribeDBSubnetGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeDBSubnetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -4754,7 +5484,7 @@ func (c *RDS) DescribeDBSubnetGroupsRequest(input *DescribeDBSubnetGroupsInput) // // Returned Error Codes: // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBSubnetGroups func (c *RDS) DescribeDBSubnetGroups(input *DescribeDBSubnetGroupsInput) (*DescribeDBSubnetGroupsOutput, error) { @@ -4832,8 +5562,8 @@ const opDescribeEngineDefaultClusterParameters = "DescribeEngineDefaultClusterPa // DescribeEngineDefaultClusterParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeEngineDefaultClusterParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4875,8 +5605,8 @@ func (c *RDS) DescribeEngineDefaultClusterParametersRequest(input *DescribeEngin // Returns the default engine and system parameter information for the cluster // database engine. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4910,8 +5640,8 @@ const opDescribeEngineDefaultParameters = "DescribeEngineDefaultParameters" // DescribeEngineDefaultParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeEngineDefaultParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5041,8 +5771,8 @@ const opDescribeEventCategories = "DescribeEventCategories" // DescribeEventCategoriesRequest generates a "aws/request.Request" representing the // client's request for the DescribeEventCategories operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5118,8 +5848,8 @@ const opDescribeEventSubscriptions = "DescribeEventSubscriptions" // DescribeEventSubscriptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEventSubscriptions operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5257,8 +5987,8 @@ const opDescribeEvents = "DescribeEvents" // DescribeEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5391,8 +6121,8 @@ const opDescribeOptionGroupOptions = "DescribeOptionGroupOptions" // DescribeOptionGroupOptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeOptionGroupOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5521,8 +6251,8 @@ const opDescribeOptionGroups = "DescribeOptionGroups" // DescribeOptionGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeOptionGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5656,8 +6386,8 @@ const opDescribeOrderableDBInstanceOptions = "DescribeOrderableDBInstanceOptions // DescribeOrderableDBInstanceOptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeOrderableDBInstanceOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5786,8 +6516,8 @@ const opDescribePendingMaintenanceActions = "DescribePendingMaintenanceActions" // DescribePendingMaintenanceActionsRequest generates a "aws/request.Request" representing the // client's request for the DescribePendingMaintenanceActions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5866,8 +6596,8 @@ const opDescribeReservedDBInstances = "DescribeReservedDBInstances" // DescribeReservedDBInstancesRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedDBInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6002,8 +6732,8 @@ const opDescribeReservedDBInstancesOfferings = "DescribeReservedDBInstancesOffer // DescribeReservedDBInstancesOfferingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedDBInstancesOfferings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6137,8 +6867,8 @@ const opDescribeSourceRegions = "DescribeSourceRegions" // DescribeSourceRegionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSourceRegions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6213,8 +6943,8 @@ const opDescribeValidDBInstanceModifications = "DescribeValidDBInstanceModificat // DescribeValidDBInstanceModificationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeValidDBInstanceModifications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6266,10 +6996,10 @@ func (c *RDS) DescribeValidDBInstanceModificationsRequest(input *DescribeValidDB // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeValidDBInstanceModifications func (c *RDS) DescribeValidDBInstanceModifications(input *DescribeValidDBInstanceModificationsInput) (*DescribeValidDBInstanceModificationsOutput, error) { @@ -6297,8 +7027,8 @@ const opDownloadDBLogFilePortion = "DownloadDBLogFilePortion" // DownloadDBLogFilePortionRequest generates a "aws/request.Request" representing the // client's request for the DownloadDBLogFilePortion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6354,10 +7084,10 @@ func (c *RDS) DownloadDBLogFilePortionRequest(input *DownloadDBLogFilePortionInp // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeDBLogFileNotFoundFault "DBLogFileNotFoundFault" -// LogFileName does not refer to an existing DB log file. +// LogFileName doesn't refer to an existing DB log file. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DownloadDBLogFilePortion func (c *RDS) DownloadDBLogFilePortion(input *DownloadDBLogFilePortionInput) (*DownloadDBLogFilePortionOutput, error) { @@ -6435,8 +7165,8 @@ const opFailoverDBCluster = "FailoverDBCluster" // FailoverDBClusterRequest generates a "aws/request.Request" representing the // client's request for the FailoverDBCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6487,8 +7217,8 @@ func (c *RDS) FailoverDBClusterRequest(input *FailoverDBClusterInput) (req *requ // re-establish any existing connections that use those endpoint addresses when // the failover is complete. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6499,13 +7229,13 @@ func (c *RDS) FailoverDBClusterRequest(input *FailoverDBClusterInput) (req *requ // // Returned Error Codes: // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. 
// // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/FailoverDBCluster func (c *RDS) FailoverDBCluster(input *FailoverDBClusterInput) (*FailoverDBClusterOutput, error) { @@ -6533,8 +7263,8 @@ const opListTagsForResource = "ListTagsForResource" // ListTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6576,7 +7306,8 @@ func (c *RDS) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req * // Lists all tags on an Amazon RDS resource. // // For an overview on tagging an Amazon RDS resource, see Tagging Amazon RDS -// Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Tagging.html). +// Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Tagging.html) +// in the Amazon RDS User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6587,13 +7318,13 @@ func (c *RDS) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req * // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ListTagsForResource func (c *RDS) ListTagsForResource(input *ListTagsForResourceInput) (*ListTagsForResourceOutput, error) { @@ -6617,12 +7348,117 @@ func (c *RDS) ListTagsForResourceWithContext(ctx aws.Context, input *ListTagsFor return out, req.Send() } +const opModifyCurrentDBClusterCapacity = "ModifyCurrentDBClusterCapacity" + +// ModifyCurrentDBClusterCapacityRequest generates a "aws/request.Request" representing the +// client's request for the ModifyCurrentDBClusterCapacity operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyCurrentDBClusterCapacity for more information on using the ModifyCurrentDBClusterCapacity +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyCurrentDBClusterCapacityRequest method. 
+// req, resp := client.ModifyCurrentDBClusterCapacityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyCurrentDBClusterCapacity +func (c *RDS) ModifyCurrentDBClusterCapacityRequest(input *ModifyCurrentDBClusterCapacityInput) (req *request.Request, output *ModifyCurrentDBClusterCapacityOutput) { + op := &request.Operation{ + Name: opModifyCurrentDBClusterCapacity, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyCurrentDBClusterCapacityInput{} + } + + output = &ModifyCurrentDBClusterCapacityOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyCurrentDBClusterCapacity API operation for Amazon Relational Database Service. +// +// Set the capacity of an Aurora Serverless DB cluster to a specific value. +// +// Aurora Serverless scales seamlessly based on the workload on the DB cluster. +// In some cases, the capacity might not scale fast enough to meet a sudden +// change in workload, such as a large number of new transactions. Call ModifyCurrentDBClusterCapacity +// to set the capacity explicitly. +// +// After this call sets the DB cluster capacity, Aurora Serverless can automatically +// scale the DB cluster based on the cooldown period for scaling up and the +// cooldown period for scaling down. +// +// For more information about Aurora Serverless, see Using Amazon Aurora Serverless +// (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html) +// in the Amazon Aurora User Guide. +// +// If you call ModifyCurrentDBClusterCapacity with the default TimeoutAction, +// connections that prevent Aurora Serverless from finding a scaling point might +// be dropped. For more information about scaling points, see Autoscaling for +// Aurora Serverless (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.how-it-works.html#aurora-serverless.how-it-works.auto-scaling) +// in the Amazon Aurora User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation ModifyCurrentDBClusterCapacity for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier doesn't refer to an existing DB cluster. +// +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The requested operation can't be performed while the cluster is in this state. +// +// * ErrCodeInvalidDBClusterCapacityFault "InvalidDBClusterCapacityFault" +// Capacity isn't a valid Aurora Serverless DB cluster capacity. Valid capacity +// values are 2, 4, 8, 16, 32, 64, 128, and 256. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyCurrentDBClusterCapacity +func (c *RDS) ModifyCurrentDBClusterCapacity(input *ModifyCurrentDBClusterCapacityInput) (*ModifyCurrentDBClusterCapacityOutput, error) { + req, out := c.ModifyCurrentDBClusterCapacityRequest(input) + return out, req.Send() +} + +// ModifyCurrentDBClusterCapacityWithContext is the same as ModifyCurrentDBClusterCapacity with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyCurrentDBClusterCapacity for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) ModifyCurrentDBClusterCapacityWithContext(ctx aws.Context, input *ModifyCurrentDBClusterCapacityInput, opts ...request.Option) (*ModifyCurrentDBClusterCapacityOutput, error) { + req, out := c.ModifyCurrentDBClusterCapacityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opModifyDBCluster = "ModifyDBCluster" // ModifyDBClusterRequest generates a "aws/request.Request" representing the // client's request for the ModifyDBCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6663,9 +7499,9 @@ func (c *RDS) ModifyDBClusterRequest(input *ModifyDBClusterInput) (req *request. // // Modify a setting for an Amazon Aurora DB cluster. You can change one or more // database configuration parameters by specifying these parameters and the -// new values in the request. For more information on Amazon Aurora, see Aurora -// on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// new values in the request. For more information on Amazon Aurora, see What +// Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6676,41 +7512,41 @@ func (c *RDS) ModifyDBClusterRequest(input *ModifyDBClusterInput) (req *request. // // Returned Error Codes: // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeInvalidDBSubnetGroupStateFault "InvalidDBSubnetGroupStateFault" -// The DB subnet group cannot be deleted because it is in use. +// The DB subnet group cannot be deleted because it's in use. 
// // * ErrCodeInvalidSubnet "InvalidSubnet" // The requested subnet is invalid, or multiple subnets were requested that // are not all in a common VPC. // // * ErrCodeDBClusterParameterGroupNotFoundFault "DBClusterParameterGroupNotFound" -// DBClusterParameterGroupName does not refer to an existing DB Cluster parameter +// DBClusterParameterGroupName doesn't refer to an existing DB cluster parameter // group. // // * ErrCodeInvalidDBSecurityGroupStateFault "InvalidDBSecurityGroupState" -// The state of the DB security group does not allow deletion. +// The state of the DB security group doesn't allow deletion. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeDBClusterAlreadyExistsFault "DBClusterAlreadyExistsFault" -// User already has a DB cluster with the given identifier. +// The user already has a DB cluster with the given identifier. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBCluster func (c *RDS) ModifyDBCluster(input *ModifyDBClusterInput) (*ModifyDBClusterOutput, error) { @@ -6734,121 +7570,213 @@ func (c *RDS) ModifyDBClusterWithContext(ctx aws.Context, input *ModifyDBCluster return out, req.Send() } -const opModifyDBClusterParameterGroup = "ModifyDBClusterParameterGroup" +const opModifyDBClusterEndpoint = "ModifyDBClusterEndpoint" -// ModifyDBClusterParameterGroupRequest generates a "aws/request.Request" representing the -// client's request for the ModifyDBClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ModifyDBClusterEndpointRequest generates a "aws/request.Request" representing the +// client's request for the ModifyDBClusterEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ModifyDBClusterParameterGroup for more information on using the ModifyDBClusterParameterGroup +// See ModifyDBClusterEndpoint for more information on using the ModifyDBClusterEndpoint // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ModifyDBClusterParameterGroupRequest method. -// req, resp := client.ModifyDBClusterParameterGroupRequest(params) +// // Example sending a request using the ModifyDBClusterEndpointRequest method. 
+// req, resp := client.ModifyDBClusterEndpointRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBClusterParameterGroup -func (c *RDS) ModifyDBClusterParameterGroupRequest(input *ModifyDBClusterParameterGroupInput) (req *request.Request, output *DBClusterParameterGroupNameMessage) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBClusterEndpoint +func (c *RDS) ModifyDBClusterEndpointRequest(input *ModifyDBClusterEndpointInput) (req *request.Request, output *ModifyDBClusterEndpointOutput) { op := &request.Operation{ - Name: opModifyDBClusterParameterGroup, + Name: opModifyDBClusterEndpoint, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ModifyDBClusterParameterGroupInput{} + input = &ModifyDBClusterEndpointInput{} } - output = &DBClusterParameterGroupNameMessage{} + output = &ModifyDBClusterEndpointOutput{} req = c.newRequest(op, input, output) return } -// ModifyDBClusterParameterGroup API operation for Amazon Relational Database Service. -// -// Modifies the parameters of a DB cluster parameter group. To modify more than -// one parameter, submit a list of the following: ParameterName, ParameterValue, -// and ApplyMethod. A maximum of 20 parameters can be modified in a single request. -// -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. -// -// Changes to dynamic parameters are applied immediately. Changes to static -// parameters require a reboot without failover to the DB cluster associated -// with the parameter group before the change can take effect. +// ModifyDBClusterEndpoint API operation for Amazon Relational Database Service. // -// After you create a DB cluster parameter group, you should wait at least 5 -// minutes before creating your first DB cluster that uses that DB cluster parameter -// group as the default parameter group. This allows Amazon RDS to fully complete -// the create action before the parameter group is used as the default for a -// new DB cluster. This is especially important for parameters that are critical -// when creating the default database for a DB cluster, such as the character -// set for the default database defined by the character_set_database parameter. -// You can use the Parameter Groups option of the Amazon RDS console (https://console.aws.amazon.com/rds/) -// or the DescribeDBClusterParameters command to verify that your DB cluster -// parameter group has been created or modified. +// Modifies the properties of an endpoint in an Amazon Aurora DB cluster. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Relational Database Service's -// API operation ModifyDBClusterParameterGroup for usage and error information. +// API operation ModifyDBClusterEndpoint for usage and error information. // // Returned Error Codes: -// * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The requested operation can't be performed while the cluster is in this state. 
// -// * ErrCodeInvalidDBParameterGroupStateFault "InvalidDBParameterGroupState" -// The DB parameter group is in use or is in an invalid state. If you are attempting -// to delete the parameter group, you cannot delete it when the parameter group +// * ErrCodeInvalidDBClusterEndpointStateFault "InvalidDBClusterEndpointStateFault" +// The requested operation can't be performed on the endpoint while the endpoint // is in this state. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBClusterParameterGroup -func (c *RDS) ModifyDBClusterParameterGroup(input *ModifyDBClusterParameterGroupInput) (*DBClusterParameterGroupNameMessage, error) { - req, out := c.ModifyDBClusterParameterGroupRequest(input) +// * ErrCodeDBClusterEndpointNotFoundFault "DBClusterEndpointNotFoundFault" +// The specified custom endpoint doesn't exist. +// +// * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" +// DBInstanceIdentifier doesn't refer to an existing DB instance. +// +// * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" +// The DB instance isn't in a valid state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBClusterEndpoint +func (c *RDS) ModifyDBClusterEndpoint(input *ModifyDBClusterEndpointInput) (*ModifyDBClusterEndpointOutput, error) { + req, out := c.ModifyDBClusterEndpointRequest(input) return out, req.Send() } -// ModifyDBClusterParameterGroupWithContext is the same as ModifyDBClusterParameterGroup with the addition of +// ModifyDBClusterEndpointWithContext is the same as ModifyDBClusterEndpoint with the addition of // the ability to pass a context and additional request options. // -// See ModifyDBClusterParameterGroup for details on how to use this API operation. +// See ModifyDBClusterEndpoint for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *RDS) ModifyDBClusterParameterGroupWithContext(ctx aws.Context, input *ModifyDBClusterParameterGroupInput, opts ...request.Option) (*DBClusterParameterGroupNameMessage, error) { - req, out := c.ModifyDBClusterParameterGroupRequest(input) +func (c *RDS) ModifyDBClusterEndpointWithContext(ctx aws.Context, input *ModifyDBClusterEndpointInput, opts ...request.Option) (*ModifyDBClusterEndpointOutput, error) { + req, out := c.ModifyDBClusterEndpointRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opModifyDBClusterSnapshotAttribute = "ModifyDBClusterSnapshotAttribute" +const opModifyDBClusterParameterGroup = "ModifyDBClusterParameterGroup" -// ModifyDBClusterSnapshotAttributeRequest generates a "aws/request.Request" representing the -// client's request for the ModifyDBClusterSnapshotAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ModifyDBClusterParameterGroupRequest generates a "aws/request.Request" representing the +// client's request for the ModifyDBClusterParameterGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See ModifyDBClusterSnapshotAttribute for more information on using the ModifyDBClusterSnapshotAttribute +// See ModifyDBClusterParameterGroup for more information on using the ModifyDBClusterParameterGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyDBClusterParameterGroupRequest method. +// req, resp := client.ModifyDBClusterParameterGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBClusterParameterGroup +func (c *RDS) ModifyDBClusterParameterGroupRequest(input *ModifyDBClusterParameterGroupInput) (req *request.Request, output *DBClusterParameterGroupNameMessage) { + op := &request.Operation{ + Name: opModifyDBClusterParameterGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyDBClusterParameterGroupInput{} + } + + output = &DBClusterParameterGroupNameMessage{} + req = c.newRequest(op, input, output) + return +} + +// ModifyDBClusterParameterGroup API operation for Amazon Relational Database Service. +// +// Modifies the parameters of a DB cluster parameter group. To modify more than +// one parameter, submit a list of the following: ParameterName, ParameterValue, +// and ApplyMethod. A maximum of 20 parameters can be modified in a single request. +// +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. +// +// Changes to dynamic parameters are applied immediately. Changes to static +// parameters require a reboot without failover to the DB cluster associated +// with the parameter group before the change can take effect. +// +// After you create a DB cluster parameter group, you should wait at least 5 +// minutes before creating your first DB cluster that uses that DB cluster parameter +// group as the default parameter group. This allows Amazon RDS to fully complete +// the create action before the parameter group is used as the default for a +// new DB cluster. This is especially important for parameters that are critical +// when creating the default database for a DB cluster, such as the character +// set for the default database defined by the character_set_database parameter. +// You can use the Parameter Groups option of the Amazon RDS console (https://console.aws.amazon.com/rds/) +// or the DescribeDBClusterParameters command to verify that your DB cluster +// parameter group has been created or modified. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation ModifyDBClusterParameterGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" +// DBParameterGroupName doesn't refer to an existing DB parameter group. +// +// * ErrCodeInvalidDBParameterGroupStateFault "InvalidDBParameterGroupState" +// The DB parameter group is in use or is in an invalid state. 
If you are attempting +// to delete the parameter group, you can't delete it when the parameter group +// is in this state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBClusterParameterGroup +func (c *RDS) ModifyDBClusterParameterGroup(input *ModifyDBClusterParameterGroupInput) (*DBClusterParameterGroupNameMessage, error) { + req, out := c.ModifyDBClusterParameterGroupRequest(input) + return out, req.Send() +} + +// ModifyDBClusterParameterGroupWithContext is the same as ModifyDBClusterParameterGroup with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyDBClusterParameterGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) ModifyDBClusterParameterGroupWithContext(ctx aws.Context, input *ModifyDBClusterParameterGroupInput, opts ...request.Option) (*DBClusterParameterGroupNameMessage, error) { + req, out := c.ModifyDBClusterParameterGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyDBClusterSnapshotAttribute = "ModifyDBClusterSnapshotAttribute" + +// ModifyDBClusterSnapshotAttributeRequest generates a "aws/request.Request" representing the +// client's request for the ModifyDBClusterSnapshotAttribute operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyDBClusterSnapshotAttribute for more information on using the ModifyDBClusterSnapshotAttribute // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration @@ -6909,10 +7837,10 @@ func (c *RDS) ModifyDBClusterSnapshotAttributeRequest(input *ModifyDBClusterSnap // // Returned Error Codes: // * ErrCodeDBClusterSnapshotNotFoundFault "DBClusterSnapshotNotFoundFault" -// DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. +// DBClusterSnapshotIdentifier doesn't refer to an existing DB cluster snapshot. // // * ErrCodeInvalidDBClusterSnapshotStateFault "InvalidDBClusterSnapshotStateFault" -// The supplied value is not a valid DB cluster snapshot state. +// The supplied value isn't a valid DB cluster snapshot state. // // * ErrCodeSharedSnapshotQuotaExceededFault "SharedSnapshotQuotaExceeded" // You have exceeded the maximum number of accounts that you can share a manual @@ -6944,8 +7872,8 @@ const opModifyDBInstance = "ModifyDBInstance" // ModifyDBInstanceRequest generates a "aws/request.Request" representing the // client's request for the ModifyDBInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
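ModifyCurrentDBClusterCapacity is one of the new operations this vendor update introduces for Aurora Serverless. A short sketch of calling it through the generated WithContext variant shown above; the cluster identifier is hypothetical, the `DBClusterIdentifier` and `Capacity` fields are assumed from the SDK's input struct rather than shown in this excerpt, and the capacity value must be one of the valid values (2, 4, 8, 16, 32, 64, 128, 256) called out in the error text:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	sess := session.Must(session.NewSession())
	conn := rds.New(sess)

	// Hypothetical cluster identifier; Capacity must be a valid Aurora Serverless
	// capacity value (2, 4, 8, 16, 32, 64, 128, or 256).
	input := &rds.ModifyCurrentDBClusterCapacityInput{
		DBClusterIdentifier: aws.String("example-serverless-cluster"),
		Capacity:            aws.Int64(8),
	}

	// aws.BackgroundContext() is the SDK's default context, the same one the
	// generated *Pages helpers use when no caller context is supplied.
	out, err := conn.ModifyCurrentDBClusterCapacityWithContext(aws.BackgroundContext(), input)
	if err != nil {
		log.Fatalf("error setting Aurora Serverless capacity: %s", err)
	}
	fmt.Println(out)
}
```

Note that, per the operation's doc comment, calling it with the default TimeoutAction can drop connections that prevent Aurora Serverless from finding a scaling point.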
@@ -6998,34 +7926,34 @@ func (c *RDS) ModifyDBInstanceRequest(input *ModifyDBInstanceInput) (req *reques // // Returned Error Codes: // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeInvalidDBSecurityGroupStateFault "InvalidDBSecurityGroupState" -// The state of the DB security group does not allow deletion. +// The state of the DB security group doesn't allow deletion. // // * ErrCodeDBInstanceAlreadyExistsFault "DBInstanceAlreadyExists" -// User already has a DB instance with the given identifier. +// The user already has a DB instance with the given identifier. // // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // * ErrCodeInsufficientDBInstanceCapacityFault "InsufficientDBInstanceCapacity" -// Specified DB instance class is not available in the specified Availability +// The specified DB instance class isn't available in the specified Availability // Zone. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeProvisionedIopsNotAvailableInAZFault "ProvisionedIopsNotAvailableInAZFault" // Provisioned IOPS not available in the specified Availability Zone. @@ -7034,23 +7962,25 @@ func (c *RDS) ModifyDBInstanceRequest(input *ModifyDBInstanceInput) (req *reques // The specified option group could not be found. // // * ErrCodeDBUpgradeDependencyFailureFault "DBUpgradeDependencyFailure" -// The DB upgrade failed because a resource the DB depends on could not be modified. +// The DB upgrade failed because a resource the DB depends on can't be modified. // // * ErrCodeStorageTypeNotSupportedFault "StorageTypeNotSupported" -// StorageType specified cannot be associated with the DB Instance. +// Storage of the StorageType specified can't be associated with the DB instance. // // * ErrCodeAuthorizationNotFoundFault "AuthorizationNotFound" -// Specified CIDRIP or EC2 security group is not authorized for the specified -// DB security group. +// The specified CIDRIP or Amazon EC2 security group isn't authorized for the +// specified DB security group. // -// RDS may not also be authorized via IAM to perform necessary actions on your -// behalf. +// RDS also may not be authorized by using IAM to perform necessary actions +// on your behalf. 
// // * ErrCodeCertificateNotFoundFault "CertificateNotFound" -// CertificateIdentifier does not refer to an existing certificate. +// CertificateIdentifier doesn't refer to an existing certificate. // // * ErrCodeDomainNotFoundFault "DomainNotFoundFault" -// Domain does not refer to an existing Active Directory Domain. +// Domain doesn't refer to an existing Active Directory domain. +// +// * ErrCodeBackupPolicyNotFoundFault "BackupPolicyNotFoundFault" // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBInstance func (c *RDS) ModifyDBInstance(input *ModifyDBInstanceInput) (*ModifyDBInstanceOutput, error) { @@ -7078,8 +8008,8 @@ const opModifyDBParameterGroup = "ModifyDBParameterGroup" // ModifyDBParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyDBParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7146,11 +8076,11 @@ func (c *RDS) ModifyDBParameterGroupRequest(input *ModifyDBParameterGroupInput) // // Returned Error Codes: // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // * ErrCodeInvalidDBParameterGroupStateFault "InvalidDBParameterGroupState" // The DB parameter group is in use or is in an invalid state. If you are attempting -// to delete the parameter group, you cannot delete it when the parameter group +// to delete the parameter group, you can't delete it when the parameter group // is in this state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBParameterGroup @@ -7179,8 +8109,8 @@ const opModifyDBSnapshot = "ModifyDBSnapshot" // ModifyDBSnapshotRequest generates a "aws/request.Request" representing the // client's request for the ModifyDBSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7233,7 +8163,7 @@ func (c *RDS) ModifyDBSnapshotRequest(input *ModifyDBSnapshotInput) (req *reques // // Returned Error Codes: // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ModifyDBSnapshot func (c *RDS) ModifyDBSnapshot(input *ModifyDBSnapshotInput) (*ModifyDBSnapshotOutput, error) { @@ -7261,8 +8191,8 @@ const opModifyDBSnapshotAttribute = "ModifyDBSnapshotAttribute" // ModifyDBSnapshotAttributeRequest generates a "aws/request.Request" representing the // client's request for the ModifyDBSnapshotAttribute operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7328,10 +8258,10 @@ func (c *RDS) ModifyDBSnapshotAttributeRequest(input *ModifyDBSnapshotAttributeI // // Returned Error Codes: // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // * ErrCodeInvalidDBSnapshotStateFault "InvalidDBSnapshotState" -// The state of the DB snapshot does not allow deletion. +// The state of the DB snapshot doesn't allow deletion. // // * ErrCodeSharedSnapshotQuotaExceededFault "SharedSnapshotQuotaExceeded" // You have exceeded the maximum number of accounts that you can share a manual @@ -7363,8 +8293,8 @@ const opModifyDBSubnetGroup = "ModifyDBSubnetGroup" // ModifyDBSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyDBSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7415,11 +8345,11 @@ func (c *RDS) ModifyDBSubnetGroupRequest(input *ModifyDBSubnetGroupInput) (req * // // Returned Error Codes: // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeDBSubnetQuotaExceededFault "DBSubnetQuotaExceededFault" -// Request would result in user exceeding the allowed number of subnets in a -// DB subnet groups. +// The request would result in the user exceeding the allowed number of subnets +// in a DB subnet groups. // // * ErrCodeSubnetAlreadyInUse "SubnetAlreadyInUse" // The DB subnet is already in use in the Availability Zone. @@ -7458,8 +8388,8 @@ const opModifyEventSubscription = "ModifyEventSubscription" // ModifyEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the ModifyEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7560,8 +8490,8 @@ const opModifyOptionGroup = "ModifyOptionGroup" // ModifyOptionGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyOptionGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
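Several of these hunks repeat the generated guidance that operations return `awserr.Error` values (inspect `Code` and `Message` via a runtime type assertion) and that every operation has a `WithContext` variant requiring a non-nil context. A minimal sketch combining both patterns around `ModifyDBSnapshotAttribute`, assuming a hypothetical snapshot identifier and account ID:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	sess := session.Must(session.NewSession())
	client := rds.New(sess)

	// Bound the call with a deadline; the SDK cancels the request if the
	// context expires (the context must be non-nil, as noted above).
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Share a manual snapshot with another account. The snapshot identifier
	// and account ID are hypothetical placeholders.
	_, err := client.ModifyDBSnapshotAttributeWithContext(ctx, &rds.ModifyDBSnapshotAttributeInput{
		DBSnapshotIdentifier: aws.String("example-snapshot"),
		AttributeName:        aws.String("restore"),
		ValuesToAdd:          []*string{aws.String("123456789012")},
	})
	if err != nil {
		// Distinguish service errors by code, as the operation docs suggest.
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case rds.ErrCodeDBSnapshotNotFoundFault:
				fmt.Println("snapshot does not exist:", aerr.Message())
			case rds.ErrCodeSharedSnapshotQuotaExceededFault:
				fmt.Println("sharing quota exceeded:", aerr.Message())
			default:
				fmt.Println(aerr.Code(), aerr.Message())
			}
			return
		}
		log.Fatal(err)
	}
}
```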
@@ -7611,7 +8541,7 @@ func (c *RDS) ModifyOptionGroupRequest(input *ModifyOptionGroupInput) (req *requ // // Returned Error Codes: // * ErrCodeInvalidOptionGroupStateFault "InvalidOptionGroupStateFault" -// The option group is not in the available state. +// The option group isn't in the available state. // // * ErrCodeOptionGroupNotFoundFault "OptionGroupNotFoundFault" // The specified option group could not be found. @@ -7642,8 +8572,8 @@ const opPromoteReadReplica = "PromoteReadReplica" // PromoteReadReplicaRequest generates a "aws/request.Request" representing the // client's request for the PromoteReadReplica operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7703,10 +8633,10 @@ func (c *RDS) PromoteReadReplicaRequest(input *PromoteReadReplicaInput) (req *re // // Returned Error Codes: // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/PromoteReadReplica func (c *RDS) PromoteReadReplica(input *PromoteReadReplicaInput) (*PromoteReadReplicaOutput, error) { @@ -7734,8 +8664,8 @@ const opPromoteReadReplicaDBCluster = "PromoteReadReplicaDBCluster" // PromoteReadReplicaDBClusterRequest generates a "aws/request.Request" representing the // client's request for the PromoteReadReplicaDBCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7785,10 +8715,10 @@ func (c *RDS) PromoteReadReplicaDBClusterRequest(input *PromoteReadReplicaDBClus // // Returned Error Codes: // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/PromoteReadReplicaDBCluster func (c *RDS) PromoteReadReplicaDBCluster(input *PromoteReadReplicaDBClusterInput) (*PromoteReadReplicaDBClusterOutput, error) { @@ -7816,8 +8746,8 @@ const opPurchaseReservedDBInstancesOffering = "PurchaseReservedDBInstancesOfferi // PurchaseReservedDBInstancesOfferingRequest generates a "aws/request.Request" representing the // client's request for the PurchaseReservedDBInstancesOffering operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7901,8 +8831,8 @@ const opRebootDBInstance = "RebootDBInstance" // RebootDBInstanceRequest generates a "aws/request.Request" representing the // client's request for the RebootDBInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7950,7 +8880,8 @@ func (c *RDS) RebootDBInstanceRequest(input *RebootDBInstanceInput) (req *reques // DB instance results in a momentary outage, during which the DB instance status // is set to rebooting. // -// For more information about rebooting, see Rebooting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RebootInstance.html). +// For more information about rebooting, see Rebooting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RebootInstance.html) +// in the Amazon RDS User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -7961,10 +8892,10 @@ func (c *RDS) RebootDBInstanceRequest(input *RebootDBInstanceInput) (req *reques // // Returned Error Codes: // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RebootDBInstance func (c *RDS) RebootDBInstance(input *RebootDBInstanceInput) (*RebootDBInstanceOutput, error) { @@ -7992,8 +8923,8 @@ const opRemoveRoleFromDBCluster = "RemoveRoleFromDBCluster" // RemoveRoleFromDBClusterRequest generates a "aws/request.Request" representing the // client's request for the RemoveRoleFromDBCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8035,8 +8966,9 @@ func (c *RDS) RemoveRoleFromDBClusterRequest(input *RemoveRoleFromDBClusterInput // RemoveRoleFromDBCluster API operation for Amazon Relational Database Service. // // Disassociates an Identity and Access Management (IAM) role from an Aurora -// DB cluster. For more information, see Authorizing Amazon Aurora to Access -// Other AWS Services On Your Behalf (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Authorizing.AWSServices.html). +// DB cluster. 
For more information, see Authorizing Amazon Aurora MySQL to +// Access Other AWS Services on Your Behalf (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -8047,14 +8979,14 @@ func (c *RDS) RemoveRoleFromDBClusterRequest(input *RemoveRoleFromDBClusterInput // // Returned Error Codes: // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeDBClusterRoleNotFoundFault "DBClusterRoleNotFound" -// The specified IAM role Amazon Resource Name (ARN) is not associated with -// the specified DB cluster. +// The specified IAM role Amazon Resource Name (ARN) isn't associated with the +// specified DB cluster. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RemoveRoleFromDBCluster func (c *RDS) RemoveRoleFromDBCluster(input *RemoveRoleFromDBClusterInput) (*RemoveRoleFromDBClusterOutput, error) { @@ -8082,8 +9014,8 @@ const opRemoveSourceIdentifierFromSubscription = "RemoveSourceIdentifierFromSubs // RemoveSourceIdentifierFromSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the RemoveSourceIdentifierFromSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8164,8 +9096,8 @@ const opRemoveTagsFromResource = "RemoveTagsFromResource" // RemoveTagsFromResourceRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8209,7 +9141,8 @@ func (c *RDS) RemoveTagsFromResourceRequest(input *RemoveTagsFromResourceInput) // Removes metadata tags from an Amazon RDS resource. // // For an overview on tagging an Amazon RDS resource, see Tagging Amazon RDS -// Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Tagging.html). +// Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Tagging.html) +// in the Amazon RDS User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -8220,13 +9153,13 @@ func (c *RDS) RemoveTagsFromResourceRequest(input *RemoveTagsFromResourceInput) // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RemoveTagsFromResource func (c *RDS) RemoveTagsFromResource(input *RemoveTagsFromResourceInput) (*RemoveTagsFromResourceOutput, error) { @@ -8254,8 +9187,8 @@ const opResetDBClusterParameterGroup = "ResetDBClusterParameterGroup" // ResetDBClusterParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the ResetDBClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8305,8 +9238,8 @@ func (c *RDS) ResetDBClusterParameterGroupRequest(input *ResetDBClusterParameter // for every DB instance in your DB cluster that you want the updated static // parameter to apply to. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -8318,11 +9251,11 @@ func (c *RDS) ResetDBClusterParameterGroupRequest(input *ResetDBClusterParameter // Returned Error Codes: // * ErrCodeInvalidDBParameterGroupStateFault "InvalidDBParameterGroupState" // The DB parameter group is in use or is in an invalid state. If you are attempting -// to delete the parameter group, you cannot delete it when the parameter group +// to delete the parameter group, you can't delete it when the parameter group // is in this state. // // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ResetDBClusterParameterGroup func (c *RDS) ResetDBClusterParameterGroup(input *ResetDBClusterParameterGroupInput) (*DBClusterParameterGroupNameMessage, error) { @@ -8350,8 +9283,8 @@ const opResetDBParameterGroup = "ResetDBParameterGroup" // ResetDBParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the ResetDBParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8408,11 +9341,11 @@ func (c *RDS) ResetDBParameterGroupRequest(input *ResetDBParameterGroupInput) (r // Returned Error Codes: // * ErrCodeInvalidDBParameterGroupStateFault "InvalidDBParameterGroupState" // The DB parameter group is in use or is in an invalid state. If you are attempting -// to delete the parameter group, you cannot delete it when the parameter group +// to delete the parameter group, you can't delete it when the parameter group // is in this state. // // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/ResetDBParameterGroup func (c *RDS) ResetDBParameterGroup(input *ResetDBParameterGroupInput) (*DBParameterGroupNameMessage, error) { @@ -8440,8 +9373,8 @@ const opRestoreDBClusterFromS3 = "RestoreDBClusterFromS3" // RestoreDBClusterFromS3Request generates a "aws/request.Request" representing the // client's request for the RestoreDBClusterFromS3 operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8482,8 +9415,9 @@ func (c *RDS) RestoreDBClusterFromS3Request(input *RestoreDBClusterFromS3Input) // // Creates an Amazon Aurora DB cluster from data stored in an Amazon S3 bucket. // Amazon RDS must be authorized to access the Amazon S3 bucket and the data -// must be created using the Percona XtraBackup utility as described in Migrating -// Data from MySQL by Using an Amazon S3 Bucket (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Migrate.MySQL.html#Aurora.Migrate.MySQL.S3). +// must be created using the Percona XtraBackup utility as described in Migrating +// Data to an Amazon Aurora MySQL DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -8494,51 +9428,51 @@ func (c *RDS) RestoreDBClusterFromS3Request(input *RestoreDBClusterFromS3Input) // // Returned Error Codes: // * ErrCodeDBClusterAlreadyExistsFault "DBClusterAlreadyExistsFault" -// User already has a DB cluster with the given identifier. 
+// The user already has a DB cluster with the given identifier. // // * ErrCodeDBClusterQuotaExceededFault "DBClusterQuotaExceededFault" -// User attempted to create a new DB cluster and the user has already reached +// The user attempted to create a new DB cluster and the user has already reached // the maximum allowed DB cluster quota. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeInvalidDBSubnetGroupStateFault "InvalidDBSubnetGroupStateFault" -// The DB subnet group cannot be deleted because it is in use. +// The DB subnet group cannot be deleted because it's in use. // // * ErrCodeInvalidSubnet "InvalidSubnet" // The requested subnet is invalid, or multiple subnets were requested that // are not all in a common VPC. // // * ErrCodeInvalidS3BucketFault "InvalidS3BucketFault" -// The specified Amazon S3 bucket name could not be found or Amazon RDS is not -// authorized to access the specified Amazon S3 bucket. Verify the SourceS3BucketName -// and S3IngestionRoleArn values and try again. +// The specified Amazon S3 bucket name can't be found or Amazon RDS isn't authorized +// to access the specified Amazon S3 bucket. Verify the SourceS3BucketName and +// S3IngestionRoleArn values and try again. // // * ErrCodeDBClusterParameterGroupNotFoundFault "DBClusterParameterGroupNotFound" -// DBClusterParameterGroupName does not refer to an existing DB Cluster parameter +// DBClusterParameterGroupName doesn't refer to an existing DB cluster parameter // group. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeInsufficientStorageClusterCapacityFault "InsufficientStorageClusterCapacity" -// There is insufficient storage available for the current action. You may be -// able to resolve this error by updating your subnet group to use different +// There is insufficient storage available for the current action. You might +// be able to resolve this error by updating your subnet group to use different // Availability Zones that have more storage available. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RestoreDBClusterFromS3 @@ -8567,8 +9501,8 @@ const opRestoreDBClusterFromSnapshot = "RestoreDBClusterFromSnapshot" // RestoreDBClusterFromSnapshotRequest generates a "aws/request.Request" representing the // client's request for the RestoreDBClusterFromSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8617,8 +9551,8 @@ func (c *RDS) RestoreDBClusterFromSnapshotRequest(input *RestoreDBClusterFromSna // source DB cluster, except that the new DB cluster is created with the default // security group. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. +// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -8629,52 +9563,52 @@ func (c *RDS) RestoreDBClusterFromSnapshotRequest(input *RestoreDBClusterFromSna // // Returned Error Codes: // * ErrCodeDBClusterAlreadyExistsFault "DBClusterAlreadyExistsFault" -// User already has a DB cluster with the given identifier. +// The user already has a DB cluster with the given identifier. // // * ErrCodeDBClusterQuotaExceededFault "DBClusterQuotaExceededFault" -// User attempted to create a new DB cluster and the user has already reached +// The user attempted to create a new DB cluster and the user has already reached // the maximum allowed DB cluster quota. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // * ErrCodeDBClusterSnapshotNotFoundFault "DBClusterSnapshotNotFoundFault" -// DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. +// DBClusterSnapshotIdentifier doesn't refer to an existing DB cluster snapshot. // // * ErrCodeInsufficientDBClusterCapacityFault "InsufficientDBClusterCapacityFault" -// The DB cluster does not have enough capacity for the current operation. +// The DB cluster doesn't have enough capacity for the current operation. // // * ErrCodeInsufficientStorageClusterCapacityFault "InsufficientStorageClusterCapacity" -// There is insufficient storage available for the current action. 
You may be -// able to resolve this error by updating your subnet group to use different +// There is insufficient storage available for the current action. You might +// be able to resolve this error by updating your subnet group to use different // Availability Zones that have more storage available. // // * ErrCodeInvalidDBSnapshotStateFault "InvalidDBSnapshotState" -// The state of the DB snapshot does not allow deletion. +// The state of the DB snapshot doesn't allow deletion. // // * ErrCodeInvalidDBClusterSnapshotStateFault "InvalidDBClusterSnapshotStateFault" -// The supplied value is not a valid DB cluster snapshot state. +// The supplied value isn't a valid DB cluster snapshot state. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeInvalidRestoreFault "InvalidRestoreFault" -// Cannot restore from vpc backup to non-vpc DB instance. +// Cannot restore from VPC backup to non-VPC DB instance. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeInvalidSubnet "InvalidSubnet" // The requested subnet is invalid, or multiple subnets were requested that @@ -8684,7 +9618,11 @@ func (c *RDS) RestoreDBClusterFromSnapshotRequest(input *RestoreDBClusterFromSna // The specified option group could not be found. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. +// +// * ErrCodeDBClusterParameterGroupNotFoundFault "DBClusterParameterGroupNotFound" +// DBClusterParameterGroupName doesn't refer to an existing DB cluster parameter +// group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RestoreDBClusterFromSnapshot func (c *RDS) RestoreDBClusterFromSnapshot(input *RestoreDBClusterFromSnapshotInput) (*RestoreDBClusterFromSnapshotOutput, error) { @@ -8712,8 +9650,8 @@ const opRestoreDBClusterToPointInTime = "RestoreDBClusterToPointInTime" // RestoreDBClusterToPointInTimeRequest generates a "aws/request.Request" representing the // client's request for the RestoreDBClusterToPointInTime operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8765,8 +9703,8 @@ func (c *RDS) RestoreDBClusterToPointInTimeRequest(input *RestoreDBClusterToPoin // RestoreDBClusterToPointInTime action has completed and the DB cluster is // available. // -// For more information on Amazon Aurora, see Aurora on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html) -// in the Amazon RDS User Guide. 
+// For more information on Amazon Aurora, see What Is Amazon Aurora? (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) +// in the Amazon Aurora User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -8777,58 +9715,62 @@ func (c *RDS) RestoreDBClusterToPointInTimeRequest(input *RestoreDBClusterToPoin // // Returned Error Codes: // * ErrCodeDBClusterAlreadyExistsFault "DBClusterAlreadyExistsFault" -// User already has a DB cluster with the given identifier. +// The user already has a DB cluster with the given identifier. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeDBClusterQuotaExceededFault "DBClusterQuotaExceededFault" -// User attempted to create a new DB cluster and the user has already reached +// The user attempted to create a new DB cluster and the user has already reached // the maximum allowed DB cluster quota. // // * ErrCodeDBClusterSnapshotNotFoundFault "DBClusterSnapshotNotFoundFault" -// DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. +// DBClusterSnapshotIdentifier doesn't refer to an existing DB cluster snapshot. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeInsufficientDBClusterCapacityFault "InsufficientDBClusterCapacityFault" -// The DB cluster does not have enough capacity for the current operation. +// The DB cluster doesn't have enough capacity for the current operation. // // * ErrCodeInsufficientStorageClusterCapacityFault "InsufficientStorageClusterCapacity" -// There is insufficient storage available for the current action. You may be -// able to resolve this error by updating your subnet group to use different +// There is insufficient storage available for the current action. You might +// be able to resolve this error by updating your subnet group to use different // Availability Zones that have more storage available. // // * ErrCodeInvalidDBClusterSnapshotStateFault "InvalidDBClusterSnapshotStateFault" -// The supplied value is not a valid DB cluster snapshot state. +// The supplied value isn't a valid DB cluster snapshot state. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeInvalidDBSnapshotStateFault "InvalidDBSnapshotState" -// The state of the DB snapshot does not allow deletion. +// The state of the DB snapshot doesn't allow deletion. // // * ErrCodeInvalidRestoreFault "InvalidRestoreFault" -// Cannot restore from vpc backup to non-vpc DB instance. +// Cannot restore from VPC backup to non-VPC DB instance. // // * ErrCodeInvalidSubnet "InvalidSubnet" // The requested subnet is invalid, or multiple subnets were requested that // are not all in a common VPC. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. 
// // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // * ErrCodeOptionGroupNotFoundFault "OptionGroupNotFoundFault" // The specified option group could not be found. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. +// +// * ErrCodeDBClusterParameterGroupNotFoundFault "DBClusterParameterGroupNotFound" +// DBClusterParameterGroupName doesn't refer to an existing DB cluster parameter +// group. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RestoreDBClusterToPointInTime func (c *RDS) RestoreDBClusterToPointInTime(input *RestoreDBClusterToPointInTimeInput) (*RestoreDBClusterToPointInTimeOutput, error) { @@ -8856,8 +9798,8 @@ const opRestoreDBInstanceFromDBSnapshot = "RestoreDBInstanceFromDBSnapshot" // RestoreDBInstanceFromDBSnapshotRequest generates a "aws/request.Request" representing the // client's request for the RestoreDBInstanceFromDBSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8928,34 +9870,34 @@ func (c *RDS) RestoreDBInstanceFromDBSnapshotRequest(input *RestoreDBInstanceFro // // Returned Error Codes: // * ErrCodeDBInstanceAlreadyExistsFault "DBInstanceAlreadyExists" -// User already has a DB instance with the given identifier. +// The user already has a DB instance with the given identifier. // // * ErrCodeDBSnapshotNotFoundFault "DBSnapshotNotFound" -// DBSnapshotIdentifier does not refer to an existing DB snapshot. +// DBSnapshotIdentifier doesn't refer to an existing DB snapshot. // // * ErrCodeInstanceQuotaExceededFault "InstanceQuotaExceeded" -// Request would result in user exceeding the allowed number of DB instances. +// The request would result in the user exceeding the allowed number of DB instances. // // * ErrCodeInsufficientDBInstanceCapacityFault "InsufficientDBInstanceCapacity" -// Specified DB instance class is not available in the specified Availability +// The specified DB instance class isn't available in the specified Availability // Zone. // // * ErrCodeInvalidDBSnapshotStateFault "InvalidDBSnapshotState" -// The state of the DB snapshot does not allow deletion. +// The state of the DB snapshot doesn't allow deletion. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeInvalidRestoreFault "InvalidRestoreFault" -// Cannot restore from vpc backup to non-vpc DB instance. 
+// Cannot restore from VPC backup to non-VPC DB instance. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs "DBSubnetGroupDoesNotCoverEnoughAZs" // Subnets in the DB subnet group should cover at least two Availability Zones @@ -8972,23 +9914,28 @@ func (c *RDS) RestoreDBInstanceFromDBSnapshotRequest(input *RestoreDBInstanceFro // The specified option group could not be found. // // * ErrCodeStorageTypeNotSupportedFault "StorageTypeNotSupported" -// StorageType specified cannot be associated with the DB Instance. +// Storage of the StorageType specified can't be associated with the DB instance. // // * ErrCodeAuthorizationNotFoundFault "AuthorizationNotFound" -// Specified CIDRIP or EC2 security group is not authorized for the specified -// DB security group. +// The specified CIDRIP or Amazon EC2 security group isn't authorized for the +// specified DB security group. // -// RDS may not also be authorized via IAM to perform necessary actions on your -// behalf. +// RDS also may not be authorized by using IAM to perform necessary actions +// on your behalf. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // * ErrCodeDomainNotFoundFault "DomainNotFoundFault" -// Domain does not refer to an existing Active Directory Domain. +// Domain doesn't refer to an existing Active Directory domain. +// +// * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" +// DBParameterGroupName doesn't refer to an existing DB parameter group. +// +// * ErrCodeBackupPolicyNotFoundFault "BackupPolicyNotFoundFault" // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RestoreDBInstanceFromDBSnapshot func (c *RDS) RestoreDBInstanceFromDBSnapshot(input *RestoreDBInstanceFromDBSnapshotInput) (*RestoreDBInstanceFromDBSnapshotOutput, error) { @@ -9016,8 +9963,8 @@ const opRestoreDBInstanceFromS3 = "RestoreDBInstanceFromS3" // RestoreDBInstanceFromS3Request generates a "aws/request.Request" representing the // client's request for the RestoreDBInstanceFromS3 operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9061,7 +10008,8 @@ func (c *RDS) RestoreDBInstanceFromS3Request(input *RestoreDBInstanceFromS3Input // database, store it on Amazon Simple Storage Service (Amazon S3), and then // restore the backup file onto a new Amazon RDS DB instance running MySQL. // For more information, see Importing Data into an Amazon RDS MySQL DB Instance -// (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.html). +// (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.html) +// in the Amazon RDS User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9072,27 +10020,27 @@ func (c *RDS) RestoreDBInstanceFromS3Request(input *RestoreDBInstanceFromS3Input // // Returned Error Codes: // * ErrCodeDBInstanceAlreadyExistsFault "DBInstanceAlreadyExists" -// User already has a DB instance with the given identifier. +// The user already has a DB instance with the given identifier. // // * ErrCodeInsufficientDBInstanceCapacityFault "InsufficientDBInstanceCapacity" -// Specified DB instance class is not available in the specified Availability +// The specified DB instance class isn't available in the specified Availability // Zone. // // * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" -// DBParameterGroupName does not refer to an existing DB parameter group. +// DBParameterGroupName doesn't refer to an existing DB parameter group. // // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // * ErrCodeInstanceQuotaExceededFault "InstanceQuotaExceeded" -// Request would result in user exceeding the allowed number of DB instances. +// The request would result in the user exceeding the allowed number of DB instances. // // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs "DBSubnetGroupDoesNotCoverEnoughAZs" // Subnets in the DB subnet group should cover at least two Availability Zones @@ -9103,13 +10051,13 @@ func (c *RDS) RestoreDBInstanceFromS3Request(input *RestoreDBInstanceFromS3Input // are not all in a common VPC. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeInvalidS3BucketFault "InvalidS3BucketFault" -// The specified Amazon S3 bucket name could not be found or Amazon RDS is not -// authorized to access the specified Amazon S3 bucket. Verify the SourceS3BucketName -// and S3IngestionRoleArn values and try again. +// The specified Amazon S3 bucket name can't be found or Amazon RDS isn't authorized +// to access the specified Amazon S3 bucket. Verify the SourceS3BucketName and +// S3IngestionRoleArn values and try again. // // * ErrCodeProvisionedIopsNotAvailableInAZFault "ProvisionedIopsNotAvailableInAZFault" // Provisioned IOPS not available in the specified Availability Zone. @@ -9118,17 +10066,19 @@ func (c *RDS) RestoreDBInstanceFromS3Request(input *RestoreDBInstanceFromS3Input // The specified option group could not be found. // // * ErrCodeStorageTypeNotSupportedFault "StorageTypeNotSupported" -// StorageType specified cannot be associated with the DB Instance. +// Storage of the StorageType specified can't be associated with the DB instance. 
// // * ErrCodeAuthorizationNotFoundFault "AuthorizationNotFound" -// Specified CIDRIP or EC2 security group is not authorized for the specified -// DB security group. +// The specified CIDRIP or Amazon EC2 security group isn't authorized for the +// specified DB security group. // -// RDS may not also be authorized via IAM to perform necessary actions on your -// behalf. +// RDS also may not be authorized by using IAM to perform necessary actions +// on your behalf. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. +// +// * ErrCodeBackupPolicyNotFoundFault "BackupPolicyNotFoundFault" // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RestoreDBInstanceFromS3 func (c *RDS) RestoreDBInstanceFromS3(input *RestoreDBInstanceFromS3Input) (*RestoreDBInstanceFromS3Output, error) { @@ -9156,8 +10106,8 @@ const opRestoreDBInstanceToPointInTime = "RestoreDBInstanceToPointInTime" // RestoreDBInstanceToPointInTimeRequest generates a "aws/request.Request" representing the // client's request for the RestoreDBInstanceToPointInTime operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9202,7 +10152,7 @@ func (c *RDS) RestoreDBInstanceToPointInTimeRequest(input *RestoreDBInstanceToPo // the BackupRetentionPeriod property. // // The target database is created with most of the original configuration, but -// in a system-selected availability zone, with the default security group, +// in a system-selected Availability Zone, with the default security group, // the default subnet group, and the default DB parameter group. By default, // the new DB instance is created as a single-AZ deployment except when the // instance is a SQL Server instance that has an option group that is associated @@ -9221,38 +10171,38 @@ func (c *RDS) RestoreDBInstanceToPointInTimeRequest(input *RestoreDBInstanceToPo // // Returned Error Codes: // * ErrCodeDBInstanceAlreadyExistsFault "DBInstanceAlreadyExists" -// User already has a DB instance with the given identifier. +// The user already has a DB instance with the given identifier. // // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeInstanceQuotaExceededFault "InstanceQuotaExceeded" -// Request would result in user exceeding the allowed number of DB instances. +// The request would result in the user exceeding the allowed number of DB instances. // // * ErrCodeInsufficientDBInstanceCapacityFault "InsufficientDBInstanceCapacity" -// Specified DB instance class is not available in the specified Availability +// The specified DB instance class isn't available in the specified Availability // Zone. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodePointInTimeRestoreNotEnabledFault "PointInTimeRestoreNotEnabled" // SourceDBInstanceIdentifier refers to a DB instance with BackupRetentionPeriod // equal to 0. 
// // * ErrCodeStorageQuotaExceededFault "StorageQuotaExceeded" -// Request would result in user exceeding the allowed amount of storage available -// across all DB instances. +// The request would result in the user exceeding the allowed amount of storage +// available across all DB instances. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeInvalidRestoreFault "InvalidRestoreFault" -// Cannot restore from vpc backup to non-vpc DB instance. +// Cannot restore from VPC backup to non-VPC DB instance. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs "DBSubnetGroupDoesNotCoverEnoughAZs" // Subnets in the DB subnet group should cover at least two Availability Zones @@ -9269,23 +10219,31 @@ func (c *RDS) RestoreDBInstanceToPointInTimeRequest(input *RestoreDBInstanceToPo // The specified option group could not be found. // // * ErrCodeStorageTypeNotSupportedFault "StorageTypeNotSupported" -// StorageType specified cannot be associated with the DB Instance. +// Storage of the StorageType specified can't be associated with the DB instance. // // * ErrCodeAuthorizationNotFoundFault "AuthorizationNotFound" -// Specified CIDRIP or EC2 security group is not authorized for the specified -// DB security group. +// The specified CIDRIP or Amazon EC2 security group isn't authorized for the +// specified DB security group. // -// RDS may not also be authorized via IAM to perform necessary actions on your -// behalf. +// RDS also may not be authorized by using IAM to perform necessary actions +// on your behalf. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // * ErrCodeDomainNotFoundFault "DomainNotFoundFault" -// Domain does not refer to an existing Active Directory Domain. +// Domain doesn't refer to an existing Active Directory domain. +// +// * ErrCodeBackupPolicyNotFoundFault "BackupPolicyNotFoundFault" +// +// * ErrCodeDBParameterGroupNotFoundFault "DBParameterGroupNotFound" +// DBParameterGroupName doesn't refer to an existing DB parameter group. +// +// * ErrCodeDBInstanceAutomatedBackupNotFoundFault "DBInstanceAutomatedBackupNotFound" +// No automated backup for this DB instance was found. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RestoreDBInstanceToPointInTime func (c *RDS) RestoreDBInstanceToPointInTime(input *RestoreDBInstanceToPointInTimeInput) (*RestoreDBInstanceToPointInTimeOutput, error) { @@ -9313,8 +10271,8 @@ const opRevokeDBSecurityGroupIngress = "RevokeDBSecurityGroupIngress" // RevokeDBSecurityGroupIngressRequest generates a "aws/request.Request" representing the // client's request for the RevokeDBSecurityGroupIngress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9367,17 +10325,17 @@ func (c *RDS) RevokeDBSecurityGroupIngressRequest(input *RevokeDBSecurityGroupIn // // Returned Error Codes: // * ErrCodeDBSecurityGroupNotFoundFault "DBSecurityGroupNotFound" -// DBSecurityGroupName does not refer to an existing DB security group. +// DBSecurityGroupName doesn't refer to an existing DB security group. // // * ErrCodeAuthorizationNotFoundFault "AuthorizationNotFound" -// Specified CIDRIP or EC2 security group is not authorized for the specified -// DB security group. +// The specified CIDRIP or Amazon EC2 security group isn't authorized for the +// specified DB security group. // -// RDS may not also be authorized via IAM to perform necessary actions on your -// behalf. +// RDS also may not be authorized by using IAM to perform necessary actions +// on your behalf. // // * ErrCodeInvalidDBSecurityGroupStateFault "InvalidDBSecurityGroupState" -// The state of the DB security group does not allow deletion. +// The state of the DB security group doesn't allow deletion. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/RevokeDBSecurityGroupIngress func (c *RDS) RevokeDBSecurityGroupIngress(input *RevokeDBSecurityGroupIngressInput) (*RevokeDBSecurityGroupIngressOutput, error) { @@ -9401,12 +10359,101 @@ func (c *RDS) RevokeDBSecurityGroupIngressWithContext(ctx aws.Context, input *Re return out, req.Send() } +const opStartDBCluster = "StartDBCluster" + +// StartDBClusterRequest generates a "aws/request.Request" representing the +// client's request for the StartDBCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartDBCluster for more information on using the StartDBCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartDBClusterRequest method. +// req, resp := client.StartDBClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/StartDBCluster +func (c *RDS) StartDBClusterRequest(input *StartDBClusterInput) (req *request.Request, output *StartDBClusterOutput) { + op := &request.Operation{ + Name: opStartDBCluster, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartDBClusterInput{} + } + + output = &StartDBClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartDBCluster API operation for Amazon Relational Database Service. +// +// Starts an Amazon Aurora DB cluster that was stopped using the AWS console, +// the stop-db-cluster AWS CLI command, or the StopDBCluster action. +// +// For more information, see Stopping and Starting an Aurora Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-cluster-stop-start.html) +// in the Amazon Aurora User Guide. 
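The StartDBCluster operation introduced above restarts an Aurora cluster that was stopped through the console, the CLI, or StopDBCluster. A minimal sketch of calling it through the generated client, assuming credentials and region resolve from the usual session lookup and using a placeholder cluster identifier:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	// Credentials and region come from the environment / shared config.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := rds.New(sess)

	// Start a previously stopped Aurora cluster; "my-cluster1" is a placeholder.
	out, err := svc.StartDBCluster(&rds.StartDBClusterInput{
		DBClusterIdentifier: aws.String("my-cluster1"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```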
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation StartDBCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier doesn't refer to an existing DB cluster. +// +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The requested operation can't be performed while the cluster is in this state. +// +// * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" +// The DB instance isn't in a valid state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/StartDBCluster +func (c *RDS) StartDBCluster(input *StartDBClusterInput) (*StartDBClusterOutput, error) { + req, out := c.StartDBClusterRequest(input) + return out, req.Send() +} + +// StartDBClusterWithContext is the same as StartDBCluster with the addition of +// the ability to pass a context and additional request options. +// +// See StartDBCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) StartDBClusterWithContext(ctx aws.Context, input *StartDBClusterInput, opts ...request.Option) (*StartDBClusterOutput, error) { + req, out := c.StartDBClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opStartDBInstance = "StartDBInstance" // StartDBInstanceRequest generates a "aws/request.Request" representing the // client's request for the StartDBInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9445,11 +10492,15 @@ func (c *RDS) StartDBInstanceRequest(input *StartDBInstanceInput) (req *request. // StartDBInstance API operation for Amazon Relational Database Service. // -// Starts a DB instance that was stopped using the AWS console, the stop-db-instance -// AWS CLI command, or the StopDBInstance action. For more information, see -// Stopping and Starting a DB instance in the AWS RDS user guide. +// Starts an Amazon RDS DB instance that was stopped using the AWS console, +// the stop-db-instance AWS CLI command, or the StopDBInstance action. // -// This command doesn't apply to Aurora MySQL and Aurora PostgreSQL. +// For more information, see Starting an Amazon RDS DB instance That Was Previously +// Stopped (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StartInstance.html) +// in the Amazon RDS User Guide. +// +// This command doesn't apply to Aurora MySQL and Aurora PostgreSQL. For Aurora +// DB clusters, use StartDBCluster instead. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9460,45 +10511,45 @@ func (c *RDS) StartDBInstanceRequest(input *StartDBInstanceInput) (req *request. // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeInsufficientDBInstanceCapacityFault "InsufficientDBInstanceCapacity" -// Specified DB instance class is not available in the specified Availability +// The specified DB instance class isn't available in the specified Availability // Zone. // // * ErrCodeDBSubnetGroupNotFoundFault "DBSubnetGroupNotFoundFault" -// DBSubnetGroupName does not refer to an existing DB subnet group. +// DBSubnetGroupName doesn't refer to an existing DB subnet group. // // * ErrCodeDBSubnetGroupDoesNotCoverEnoughAZs "DBSubnetGroupDoesNotCoverEnoughAZs" // Subnets in the DB subnet group should cover at least two Availability Zones // unless there is only one Availability Zone. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // * ErrCodeInvalidSubnet "InvalidSubnet" // The requested subnet is invalid, or multiple subnets were requested that // are not all in a common VPC. // // * ErrCodeInvalidVPCNetworkStateFault "InvalidVPCNetworkStateFault" -// DB subnet group does not cover all Availability Zones after it is created -// because users' change. +// The DB subnet group doesn't cover all Availability Zones after it's created +// because of users' change. // // * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" -// DBClusterIdentifier does not refer to an existing DB cluster. +// DBClusterIdentifier doesn't refer to an existing DB cluster. // // * ErrCodeAuthorizationNotFoundFault "AuthorizationNotFound" -// Specified CIDRIP or EC2 security group is not authorized for the specified -// DB security group. +// The specified CIDRIP or Amazon EC2 security group isn't authorized for the +// specified DB security group. // -// RDS may not also be authorized via IAM to perform necessary actions on your -// behalf. +// RDS also may not be authorized by using IAM to perform necessary actions +// on your behalf. // // * ErrCodeKMSKeyNotAccessibleFault "KMSKeyNotAccessibleFault" -// Error accessing KMS key. +// An error occurred accessing an AWS KMS key. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/StartDBInstance func (c *RDS) StartDBInstance(input *StartDBInstanceInput) (*StartDBInstanceOutput, error) { @@ -9522,12 +10573,103 @@ func (c *RDS) StartDBInstanceWithContext(ctx aws.Context, input *StartDBInstance return out, req.Send() } +const opStopDBCluster = "StopDBCluster" + +// StopDBClusterRequest generates a "aws/request.Request" representing the +// client's request for the StopDBCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
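The updated StartDBInstance documentation above notes that it does not apply to Aurora MySQL or Aurora PostgreSQL, which use StartDBCluster instead. A sketch of the non-Aurora path, checking the error codes listed for the operation with awserr; the instance identifier is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := rds.New(sess)

	// StartDBInstance covers RDS instances only; Aurora members are started
	// via StartDBCluster on their cluster.
	_, err := svc.StartDBInstance(&rds.StartDBInstanceInput{
		DBInstanceIdentifier: aws.String("mydbinstance"),
	})
	if aerr, ok := err.(awserr.Error); ok {
		// Codes correspond to the error constants documented for this operation.
		switch aerr.Code() {
		case rds.ErrCodeDBInstanceNotFoundFault:
			fmt.Println("no such DB instance:", aerr.Message())
		case rds.ErrCodeInvalidDBInstanceStateFault:
			fmt.Println("instance is not in a startable state:", aerr.Message())
		default:
			fmt.Println(aerr.Code(), aerr.Message())
		}
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("start requested")
}
```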
+// +// See StopDBCluster for more information on using the StopDBCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopDBClusterRequest method. +// req, resp := client.StopDBClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/StopDBCluster +func (c *RDS) StopDBClusterRequest(input *StopDBClusterInput) (req *request.Request, output *StopDBClusterOutput) { + op := &request.Operation{ + Name: opStopDBCluster, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopDBClusterInput{} + } + + output = &StopDBClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopDBCluster API operation for Amazon Relational Database Service. +// +// Stops an Amazon Aurora DB cluster. When you stop a DB cluster, Aurora retains +// the DB cluster's metadata, including its endpoints and DB parameter groups. +// Aurora also retains the transaction logs so you can do a point-in-time restore +// if necessary. +// +// For more information, see Stopping and Starting an Aurora Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-cluster-stop-start.html) +// in the Amazon Aurora User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Relational Database Service's +// API operation StopDBCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier doesn't refer to an existing DB cluster. +// +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The requested operation can't be performed while the cluster is in this state. +// +// * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" +// The DB instance isn't in a valid state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/StopDBCluster +func (c *RDS) StopDBCluster(input *StopDBClusterInput) (*StopDBClusterOutput, error) { + req, out := c.StopDBClusterRequest(input) + return out, req.Send() +} + +// StopDBClusterWithContext is the same as StopDBCluster with the addition of +// the ability to pass a context and additional request options. +// +// See StopDBCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *RDS) StopDBClusterWithContext(ctx aws.Context, input *StopDBClusterInput, opts ...request.Option) (*StopDBClusterOutput, error) { + req, out := c.StopDBClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opStopDBInstance = "StopDBInstance" // StopDBInstanceRequest generates a "aws/request.Request" representing the // client's request for the StopDBInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9566,13 +10708,17 @@ func (c *RDS) StopDBInstanceRequest(input *StopDBInstanceInput) (req *request.Re // StopDBInstance API operation for Amazon Relational Database Service. // -// Stops a DB instance. When you stop a DB instance, Amazon RDS retains the -// DB instance's metadata, including its endpoint, DB parameter group, and option -// group membership. Amazon RDS also retains the transaction logs so you can -// do a point-in-time restore if necessary. For more information, see Stopping -// and Starting a DB instance in the AWS RDS user guide. +// Stops an Amazon RDS DB instance. When you stop a DB instance, Amazon RDS +// retains the DB instance's metadata, including its endpoint, DB parameter +// group, and option group membership. Amazon RDS also retains the transaction +// logs so you can do a point-in-time restore if necessary. // -// This command doesn't apply to Aurora MySQL and Aurora PostgreSQL. +// For more information, see Stopping an Amazon RDS DB Instance Temporarily +// (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html) +// in the Amazon RDS User Guide. +// +// This command doesn't apply to Aurora MySQL and Aurora PostgreSQL. For Aurora +// clusters, use StopDBCluster instead. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -9583,19 +10729,19 @@ func (c *RDS) StopDBInstanceRequest(input *StopDBInstanceInput) (req *request.Re // // Returned Error Codes: // * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" -// DBInstanceIdentifier does not refer to an existing DB instance. +// DBInstanceIdentifier doesn't refer to an existing DB instance. // // * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" -// The specified DB instance is not in the available state. +// The DB instance isn't in a valid state. // // * ErrCodeDBSnapshotAlreadyExistsFault "DBSnapshotAlreadyExists" // DBSnapshotIdentifier is already used by an existing snapshot. // // * ErrCodeSnapshotQuotaExceededFault "SnapshotQuotaExceeded" -// Request would result in user exceeding the allowed number of DB snapshots. +// The request would result in the user exceeding the allowed number of DB snapshots. // // * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" -// The DB cluster is not in a valid state. +// The requested operation can't be performed while the cluster is in this state. // // See also, https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/StopDBInstance func (c *RDS) StopDBInstance(input *StopDBInstanceInput) (*StopDBInstanceOutput, error) { @@ -10107,7 +11253,7 @@ func (s *AuthorizeDBSecurityGroupIngressOutput) SetDBSecurityGroup(v *DBSecurity type AvailabilityZone struct { _ struct{} `type:"structure"` - // The name of the availability zone. + // The name of the Availability Zone. Name *string `type:"string"` } @@ -10127,8 +11273,235 @@ func (s *AvailabilityZone) SetName(v string) *AvailabilityZone { return s } -// A CA certificate for an AWS account. -type Certificate struct { +// Contains the available processor feature information for the DB instance +// class of a DB instance. 
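The reworked StopDBInstance documentation above explains that RDS keeps the instance's metadata and transaction logs while it is stopped, and that Aurora clusters should use StopDBCluster instead. A minimal sketch, assuming a placeholder instance identifier and an optional snapshot taken at stop time:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := rds.New(sess)

	// DBSnapshotIdentifier is optional; when set, RDS creates a snapshot
	// of the instance before stopping it.
	out, err := svc.StopDBInstance(&rds.StopDBInstanceInput{
		DBInstanceIdentifier: aws.String("mydbinstance"),
		DBSnapshotIdentifier: aws.String("mydbinstance-stop-snapshot"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```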
+// +// For more information, see Configuring the Processor of the DB Instance Class +// (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#USER_ConfigureProcessor) +// in the Amazon RDS User Guide. +type AvailableProcessorFeature struct { + _ struct{} `type:"structure"` + + // The allowed values for the processor feature of the DB instance class. + AllowedValues *string `type:"string"` + + // The default value for the processor feature of the DB instance class. + DefaultValue *string `type:"string"` + + // The name of the processor feature. Valid names are coreCount and threadsPerCore. + Name *string `type:"string"` +} + +// String returns the string representation +func (s AvailableProcessorFeature) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AvailableProcessorFeature) GoString() string { + return s.String() +} + +// SetAllowedValues sets the AllowedValues field's value. +func (s *AvailableProcessorFeature) SetAllowedValues(v string) *AvailableProcessorFeature { + s.AllowedValues = &v + return s +} + +// SetDefaultValue sets the DefaultValue field's value. +func (s *AvailableProcessorFeature) SetDefaultValue(v string) *AvailableProcessorFeature { + s.DefaultValue = &v + return s +} + +// SetName sets the Name field's value. +func (s *AvailableProcessorFeature) SetName(v string) *AvailableProcessorFeature { + s.Name = &v + return s +} + +type BacktrackDBClusterInput struct { + _ struct{} `type:"structure"` + + // The timestamp of the time to backtrack the DB cluster to, specified in ISO + // 8601 format. For more information about ISO 8601, see the ISO8601 Wikipedia + // page. (http://en.wikipedia.org/wiki/ISO_8601) + // + // If the specified time is not a consistent time for the DB cluster, Aurora + // automatically chooses the nearest possible consistent time for the DB cluster. + // + // Constraints: + // + // * Must contain a valid ISO 8601 timestamp. + // + // * Can't contain a timestamp set in the future. + // + // Example: 2017-07-08T18:00Z + // + // BacktrackTo is a required field + BacktrackTo *time.Time `type:"timestamp" required:"true"` + + // The DB cluster identifier of the DB cluster to be backtracked. This parameter + // is stored as a lowercase string. + // + // Constraints: + // + // * Must contain from 1 to 63 alphanumeric characters or hyphens. + // + // * First character must be a letter. + // + // * Can't end with a hyphen or contain two consecutive hyphens. + // + // Example: my-cluster1 + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // A value that, if specified, forces the DB cluster to backtrack when binary + // logging is enabled. Otherwise, an error occurs when binary logging is enabled. + Force *bool `type:"boolean"` + + // If BacktrackTo is set to a timestamp earlier than the earliest backtrack + // time, this value backtracks the DB cluster to the earliest possible backtrack + // time. Otherwise, an error occurs. + UseEarliestTimeOnPointInTimeUnavailable *bool `type:"boolean"` +} + +// String returns the string representation +func (s BacktrackDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BacktrackDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *BacktrackDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BacktrackDBClusterInput"} + if s.BacktrackTo == nil { + invalidParams.Add(request.NewErrParamRequired("BacktrackTo")) + } + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBacktrackTo sets the BacktrackTo field's value. +func (s *BacktrackDBClusterInput) SetBacktrackTo(v time.Time) *BacktrackDBClusterInput { + s.BacktrackTo = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *BacktrackDBClusterInput) SetDBClusterIdentifier(v string) *BacktrackDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetForce sets the Force field's value. +func (s *BacktrackDBClusterInput) SetForce(v bool) *BacktrackDBClusterInput { + s.Force = &v + return s +} + +// SetUseEarliestTimeOnPointInTimeUnavailable sets the UseEarliestTimeOnPointInTimeUnavailable field's value. +func (s *BacktrackDBClusterInput) SetUseEarliestTimeOnPointInTimeUnavailable(v bool) *BacktrackDBClusterInput { + s.UseEarliestTimeOnPointInTimeUnavailable = &v + return s +} + +// This data type is used as a response element in the DescribeDBClusterBacktracks +// action. +type BacktrackDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the backtrack identifier. + BacktrackIdentifier *string `type:"string"` + + // The timestamp of the time at which the backtrack was requested. + BacktrackRequestCreationTime *time.Time `type:"timestamp"` + + // The timestamp of the time to which the DB cluster was backtracked. + BacktrackTo *time.Time `type:"timestamp"` + + // The timestamp of the time from which the DB cluster was backtracked. + BacktrackedFrom *time.Time `type:"timestamp"` + + // Contains a user-supplied DB cluster identifier. This identifier is the unique + // key that identifies a DB cluster. + DBClusterIdentifier *string `type:"string"` + + // The status of the backtrack. This property returns one of the following values: + // + // * applying - The backtrack is currently being applied to or rolled back + // from the DB cluster. + // + // * completed - The backtrack has successfully been applied to or rolled + // back from the DB cluster. + // + // * failed - An error occurred while the backtrack was applied to or rolled + // back from the DB cluster. + // + // * pending - The backtrack is currently pending application to or rollback + // from the DB cluster. + Status *string `type:"string"` +} + +// String returns the string representation +func (s BacktrackDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BacktrackDBClusterOutput) GoString() string { + return s.String() +} + +// SetBacktrackIdentifier sets the BacktrackIdentifier field's value. +func (s *BacktrackDBClusterOutput) SetBacktrackIdentifier(v string) *BacktrackDBClusterOutput { + s.BacktrackIdentifier = &v + return s +} + +// SetBacktrackRequestCreationTime sets the BacktrackRequestCreationTime field's value. +func (s *BacktrackDBClusterOutput) SetBacktrackRequestCreationTime(v time.Time) *BacktrackDBClusterOutput { + s.BacktrackRequestCreationTime = &v + return s +} + +// SetBacktrackTo sets the BacktrackTo field's value. 
+func (s *BacktrackDBClusterOutput) SetBacktrackTo(v time.Time) *BacktrackDBClusterOutput { + s.BacktrackTo = &v + return s +} + +// SetBacktrackedFrom sets the BacktrackedFrom field's value. +func (s *BacktrackDBClusterOutput) SetBacktrackedFrom(v time.Time) *BacktrackDBClusterOutput { + s.BacktrackedFrom = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *BacktrackDBClusterOutput) SetDBClusterIdentifier(v string) *BacktrackDBClusterOutput { + s.DBClusterIdentifier = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *BacktrackDBClusterOutput) SetStatus(v string) *BacktrackDBClusterOutput { + s.Status = &v + return s +} + +// A CA certificate for an AWS account. +type Certificate struct { _ struct{} `type:"structure"` // The Amazon Resource Name (ARN) for the certificate. @@ -10144,10 +11517,10 @@ type Certificate struct { Thumbprint *string `type:"string"` // The starting date from which the certificate is valid. - ValidFrom *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ValidFrom *time.Time `type:"timestamp"` // The final date that the certificate continues to be valid. - ValidTill *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ValidTill *time.Time `type:"timestamp"` } // String returns the string representation @@ -10231,6 +11604,12 @@ func (s *CharacterSet) SetCharacterSetName(v string) *CharacterSet { // The configuration setting for the log types to be enabled for export to CloudWatch // Logs for a specific DB instance or DB cluster. +// +// The EnableLogTypes and DisableLogTypes arrays determine which logs will be +// exported (or not exported) to CloudWatch Logs. The values within these arrays +// depend on the DB engine being used. For more information, see Publishing +// Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) +// in the Amazon RDS User Guide. type CloudwatchLogsExportConfiguration struct { _ struct{} `type:"structure"` @@ -10267,8 +11646,9 @@ type CopyDBClusterParameterGroupInput struct { _ struct{} `type:"structure"` // The identifier or Amazon Resource Name (ARN) for the source DB cluster parameter - // group. For information about creating an ARN, see Constructing an RDS Amazon - // Resource Name (ARN) (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing). + // group. For information about creating an ARN, see Constructing an ARN for + // Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing) + // in the Amazon Aurora User Guide. // // Constraints: // @@ -10284,7 +11664,8 @@ type CopyDBClusterParameterGroupInput struct { // SourceDBClusterParameterGroupIdentifier is a required field SourceDBClusterParameterGroupIdentifier *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // A description for the copied DB cluster parameter group. 
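The BacktrackDBClusterInput type added above requires an ISO 8601 BacktrackTo timestamp and a cluster identifier, with optional Force and UseEarliestTimeOnPointInTimeUnavailable flags. A sketch of building and validating such an input; the identifier and timestamp are placeholders, and the BacktrackDBCluster operation that consumes it is generated elsewhere in this file:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rds"
)

func main() {
	// Aurora picks the nearest consistent time if this exact point is unavailable.
	backtrackTo, err := time.Parse(time.RFC3339, "2017-07-08T18:00:00Z")
	if err != nil {
		log.Fatal(err)
	}

	input := &rds.BacktrackDBClusterInput{
		BacktrackTo:         aws.Time(backtrackTo),
		DBClusterIdentifier: aws.String("my-cluster1"),
		// Force backtracking even when binary logging is enabled.
		Force: aws.Bool(true),
		// Fall back to the earliest possible backtrack time if needed.
		UseEarliestTimeOnPointInTimeUnavailable: aws.Bool(true),
	}

	// Validate enforces the required BacktrackTo and DBClusterIdentifier fields.
	if err := input.Validate(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(input)
}
```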
@@ -10296,13 +11677,13 @@ type CopyDBClusterParameterGroupInput struct { // // Constraints: // - // * Cannot be null, empty, or blank + // * Can't be null, empty, or blank // // * Must contain from 1 to 255 letters, numbers, or hyphens // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: my-cluster-param-group1 // @@ -10403,10 +11784,6 @@ type CopyDBClusterSnapshotInput struct { // ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key // alias for the KMS encryption key. // - // If you copy an unencrypted DB cluster snapshot and specify a value for the - // KmsKeyId parameter, Amazon RDS encrypts the target DB cluster snapshot using - // the specified KMS encryption key. - // // If you copy an encrypted DB cluster snapshot from your AWS account, you can // specify a value for KmsKeyId to encrypt the copy with a new KMS encryption // key. If you don't specify a value for KmsKeyId, then the copy of the DB cluster @@ -10420,6 +11797,9 @@ type CopyDBClusterSnapshotInput struct { // DB cluster snapshot in the destination AWS Region. KMS encryption keys are // specific to the AWS Region that they are created in, and you can't use encryption // keys from one AWS Region in another AWS Region. + // + // If you copy an unencrypted DB cluster snapshot and specify a value for the + // KmsKeyId parameter, an error is returned. KmsKeyId *string `type:"string"` // The URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot @@ -10468,7 +11848,8 @@ type CopyDBClusterSnapshotInput struct { // // * If the source snapshot is in a different AWS Region than the copy, specify // a valid DB cluster snapshot ARN. For more information, go to Copying - // a DB Snapshot or DB Cluster Snapshot (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html). + // Snapshots Across AWS Regions (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_CopySnapshot.html#USER_CopySnapshot.AcrossRegions) + // in the Amazon Aurora User Guide. // // Example: my-cluster-snapshot1 // @@ -10480,7 +11861,8 @@ type CopyDBClusterSnapshotInput struct { // have the same region as the source ARN. SourceRegion *string `type:"string" ignore:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // The identifier of the new DB cluster snapshot to create from the source DB @@ -10492,7 +11874,7 @@ type CopyDBClusterSnapshotInput struct { // // * First character must be a letter. // - // * Cannot end with a hyphen or contain two consecutive hyphens. + // * Can't end with a hyphen or contain two consecutive hyphens. // // Example: my-cluster-snapshot2 // @@ -10604,8 +11986,8 @@ type CopyDBParameterGroupInput struct { _ struct{} `type:"structure"` // The identifier or ARN for the source DB parameter group. For information - // about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN) - // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing). 
+ // about creating an ARN, see Constructing an ARN for Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing) + // in the Amazon RDS User Guide. // // Constraints: // @@ -10617,7 +11999,8 @@ type CopyDBParameterGroupInput struct { // SourceDBParameterGroupIdentifier is a required field SourceDBParameterGroupIdentifier *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // A description for the copied DB parameter group. @@ -10629,13 +12012,13 @@ type CopyDBParameterGroupInput struct { // // Constraints: // - // * Cannot be null, empty, or blank + // * Can't be null, empty, or blank // // * Must contain from 1 to 255 letters, numbers, or hyphens // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: my-db-parameter-group // @@ -10759,7 +12142,8 @@ type CopyDBSnapshotInput struct { // another, and your DB instance uses a nondefault option group. If your source // DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL // Server, you must specify this option when copying across AWS Regions. For - // more information, see Option Group Considerations (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html#USER_CopySnapshot.Options). + // more information, see Option Group Considerations (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html#USER_CopySnapshot.Options) + // in the Amazon RDS User Guide. OptionGroupName *string `type:"string"` // The URL that contains a Signature Version 4 signed request for the CopyDBSnapshot @@ -10836,20 +12220,21 @@ type CopyDBSnapshotInput struct { // have the same region as the source ARN. SourceRegion *string `type:"string" ignore:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // The identifier for the copy of the snapshot. // // Constraints: // - // * Cannot be null, empty, or blank + // * Can't be null, empty, or blank // // * Must contain from 1 to 255 letters, numbers, or hyphens // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: my-db-snapshot // @@ -10966,7 +12351,8 @@ type CopyOptionGroupInput struct { _ struct{} `type:"structure"` // The identifier or ARN for the source option group. For information about - // creating an ARN, see Constructing an RDS Amazon Resource Name (ARN) (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing). 
+ // creating an ARN, see Constructing an ARN for Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing) + // in the Amazon RDS User Guide. // // Constraints: // @@ -10982,7 +12368,8 @@ type CopyOptionGroupInput struct { // SourceOptionGroupIdentifier is a required field SourceOptionGroupIdentifier *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // The description for the copied option group. @@ -10994,13 +12381,13 @@ type CopyOptionGroupInput struct { // // Constraints: // - // * Cannot be null, empty, or blank + // * Can't be null, empty, or blank // // * Must contain from 1 to 255 letters, numbers, or hyphens // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: my-option-group // @@ -11083,105 +12470,345 @@ func (s *CopyOptionGroupOutput) SetOptionGroup(v *OptionGroup) *CopyOptionGroupO return s } -type CreateDBClusterInput struct { +type CreateDBClusterEndpointInput struct { _ struct{} `type:"structure"` - // A list of EC2 Availability Zones that instances in the DB cluster can be - // created in. For information on AWS Regions and Availability Zones, see Regions - // and Availability Zones (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html). - AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` - - // The number of days for which automated backups are retained. You must specify - // a minimum value of 1. - // - // Default: 1 - // - // Constraints: + // The identifier to use for the new endpoint. This parameter is stored as a + // lowercase string. // - // * Must be a value from 1 to 35 - BackupRetentionPeriod *int64 `type:"integer"` - - // A value that indicates that the DB cluster should be associated with the - // specified CharacterSet. - CharacterSetName *string `type:"string"` + // DBClusterEndpointIdentifier is a required field + DBClusterEndpointIdentifier *string `type:"string" required:"true"` - // The DB cluster identifier. This parameter is stored as a lowercase string. - // - // Constraints: - // - // * Must contain from 1 to 63 letters, numbers, or hyphens. - // - // * First character must be a letter. - // - // * Cannot end with a hyphen or contain two consecutive hyphens. - // - // Example: my-cluster1 + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. // // DBClusterIdentifier is a required field DBClusterIdentifier *string `type:"string" required:"true"` - // The name of the DB cluster parameter group to associate with this DB cluster. - // If this argument is omitted, default.aurora5.6 is used. - // - // Constraints: + // The type of the endpoint. One of: READER, ANY. // - // * If supplied, must match the name of an existing DBClusterParameterGroup. 
- DBClusterParameterGroupName *string `type:"string"` + // EndpointType is a required field + EndpointType *string `type:"string" required:"true"` - // A DB subnet group to associate with this DB cluster. - // - // Constraints: Must match the name of an existing DBSubnetGroup. Must not be - // default. - // - // Example: mySubnetgroup - DBSubnetGroupName *string `type:"string"` + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` - // The name for your database of up to 64 alpha-numeric characters. If you do - // not provide a name, Amazon RDS will not create a database in the DB cluster - // you are creating. - DatabaseName *string `type:"string"` + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` +} - // DestinationRegion is used for presigning the request to a given region. - DestinationRegion *string `type:"string"` +// String returns the string representation +func (s CreateDBClusterEndpointInput) String() string { + return awsutil.Prettify(s) +} - // True to enable mapping of AWS Identity and Access Management (IAM) accounts - // to database accounts, and otherwise false. - // - // Default: false - EnableIAMDatabaseAuthentication *bool `type:"boolean"` +// GoString returns the string representation +func (s CreateDBClusterEndpointInput) GoString() string { + return s.String() +} - // The name of the database engine to be used for this DB cluster. - // - // Valid Values: aurora (for MySQL 5.6-compatible Aurora), aurora-mysql (for - // MySQL 5.7-compatible Aurora), and aurora-postgresql - // - // Engine is a required field - Engine *string `type:"string" required:"true"` +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDBClusterEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDBClusterEndpointInput"} + if s.DBClusterEndpointIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterEndpointIdentifier")) + } + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + if s.EndpointType == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointType")) + } - // The version number of the database engine to use. - // - // Aurora MySQL - // - // Example: 5.6.10a, 5.7.12 - // - // Aurora PostgreSQL - // - // Example: 9.6.3 - EngineVersion *string `type:"string"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // The AWS KMS key identifier for an encrypted DB cluster. - // - // The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption - // key. If you are creating a DB cluster with the same AWS account that owns - // the KMS encryption key used to encrypt the new DB cluster, then you can use - // the KMS key alias instead of the ARN for the KMS encryption key. - // - // If an encryption key is not specified in KmsKeyId: - // - // * If ReplicationSourceIdentifier identifies an encrypted source, then - // Amazon RDS will use the encryption key used to encrypt the source. Otherwise, - // Amazon RDS will use your default encryption key. - // +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. 
+func (s *CreateDBClusterEndpointInput) SetDBClusterEndpointIdentifier(v string) *CreateDBClusterEndpointInput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *CreateDBClusterEndpointInput) SetDBClusterIdentifier(v string) *CreateDBClusterEndpointInput { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *CreateDBClusterEndpointInput) SetEndpointType(v string) *CreateDBClusterEndpointInput { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *CreateDBClusterEndpointInput) SetExcludedMembers(v []*string) *CreateDBClusterEndpointInput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *CreateDBClusterEndpointInput) SetStaticMembers(v []*string) *CreateDBClusterEndpointInput { + s.StaticMembers = v + return s +} + +// This data type represents the information you need to connect to an Amazon +// Aurora DB cluster. This data type is used as a response element in the following +// actions: +// +// * CreateDBClusterEndpoint +// +// * DescribeDBClusterEndpoints +// +// * ModifyDBClusterEndpoint +// +// * DeleteDBClusterEndpoint +// +// For the data structure that represents Amazon RDS DB instance endpoints, +// see Endpoint. +type CreateDBClusterEndpointOutput struct { + _ struct{} `type:"structure"` + + // The type associated with a custom endpoint. One of: READER, ANY. + CustomEndpointType *string `type:"string"` + + // The Amazon Resource Name (ARN) for the endpoint. + DBClusterEndpointArn *string `type:"string"` + + // The identifier associated with the endpoint. This parameter is stored as + // a lowercase string. + DBClusterEndpointIdentifier *string `type:"string"` + + // A unique system-generated identifier for an endpoint. It remains the same + // for the whole life of the endpoint. + DBClusterEndpointResourceIdentifier *string `type:"string"` + + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. + DBClusterIdentifier *string `type:"string"` + + // The DNS address of the endpoint. + Endpoint *string `type:"string"` + + // The type of the endpoint. One of: READER, WRITER, CUSTOM. + EndpointType *string `type:"string"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` + + // The current status of the endpoint. One of: creating, available, deleting, + // modifying. + Status *string `type:"string"` +} + +// String returns the string representation +func (s CreateDBClusterEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterEndpointOutput) GoString() string { + return s.String() +} + +// SetCustomEndpointType sets the CustomEndpointType field's value. +func (s *CreateDBClusterEndpointOutput) SetCustomEndpointType(v string) *CreateDBClusterEndpointOutput { + s.CustomEndpointType = &v + return s +} + +// SetDBClusterEndpointArn sets the DBClusterEndpointArn field's value. 
+func (s *CreateDBClusterEndpointOutput) SetDBClusterEndpointArn(v string) *CreateDBClusterEndpointOutput { + s.DBClusterEndpointArn = &v + return s +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *CreateDBClusterEndpointOutput) SetDBClusterEndpointIdentifier(v string) *CreateDBClusterEndpointOutput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterEndpointResourceIdentifier sets the DBClusterEndpointResourceIdentifier field's value. +func (s *CreateDBClusterEndpointOutput) SetDBClusterEndpointResourceIdentifier(v string) *CreateDBClusterEndpointOutput { + s.DBClusterEndpointResourceIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *CreateDBClusterEndpointOutput) SetDBClusterIdentifier(v string) *CreateDBClusterEndpointOutput { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *CreateDBClusterEndpointOutput) SetEndpoint(v string) *CreateDBClusterEndpointOutput { + s.Endpoint = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *CreateDBClusterEndpointOutput) SetEndpointType(v string) *CreateDBClusterEndpointOutput { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *CreateDBClusterEndpointOutput) SetExcludedMembers(v []*string) *CreateDBClusterEndpointOutput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *CreateDBClusterEndpointOutput) SetStaticMembers(v []*string) *CreateDBClusterEndpointOutput { + s.StaticMembers = v + return s +} + +// SetStatus sets the Status field's value. +func (s *CreateDBClusterEndpointOutput) SetStatus(v string) *CreateDBClusterEndpointOutput { + s.Status = &v + return s +} + +type CreateDBClusterInput struct { + _ struct{} `type:"structure"` + + // A list of EC2 Availability Zones that instances in the DB cluster can be + // created in. For information on AWS Regions and Availability Zones, see Choosing + // the Regions and Availability Zones (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.RegionsAndAvailabilityZones.html) + // in the Amazon Aurora User Guide. + AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` + + // The target backtrack window, in seconds. To disable backtracking, set this + // value to 0. + // + // Default: 0 + // + // Constraints: + // + // * If specified, this value must be set to a number from 0 to 259,200 (72 + // hours). + BacktrackWindow *int64 `type:"long"` + + // The number of days for which automated backups are retained. You must specify + // a minimum value of 1. + // + // Default: 1 + // + // Constraints: + // + // * Must be a value from 1 to 35 + BackupRetentionPeriod *int64 `type:"integer"` + + // A value that indicates that the DB cluster should be associated with the + // specified CharacterSet. + CharacterSetName *string `type:"string"` + + // The DB cluster identifier. This parameter is stored as a lowercase string. + // + // Constraints: + // + // * Must contain from 1 to 63 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Can't end with a hyphen or contain two consecutive hyphens. 
+ // + // Example: my-cluster1 + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The name of the DB cluster parameter group to associate with this DB cluster. + // If this argument is omitted, default.aurora5.6 is used. + // + // Constraints: + // + // * If supplied, must match the name of an existing DB cluster parameter + // group. + DBClusterParameterGroupName *string `type:"string"` + + // A DB subnet group to associate with this DB cluster. + // + // Constraints: Must match the name of an existing DBSubnetGroup. Must not be + // default. + // + // Example: mySubnetgroup + DBSubnetGroupName *string `type:"string"` + + // The name for your database of up to 64 alpha-numeric characters. If you do + // not provide a name, Amazon RDS will not create a database in the DB cluster + // you are creating. + DatabaseName *string `type:"string"` + + // Indicates if the DB cluster should have deletion protection enabled. The + // database can't be deleted when this value is set to true. The default is + // false. + DeletionProtection *bool `type:"boolean"` + + // DestinationRegion is used for presigning the request to a given region. + DestinationRegion *string `type:"string"` + + // The list of log types that need to be enabled for exporting to CloudWatch + // Logs. The values in the list depend on the DB engine being used. For more + // information, see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon Aurora User Guide. + EnableCloudwatchLogsExports []*string `type:"list"` + + // True to enable mapping of AWS Identity and Access Management (IAM) accounts + // to database accounts, and otherwise false. + // + // Default: false + EnableIAMDatabaseAuthentication *bool `type:"boolean"` + + // The name of the database engine to be used for this DB cluster. + // + // Valid Values: aurora (for MySQL 5.6-compatible Aurora), aurora-mysql (for + // MySQL 5.7-compatible Aurora), and aurora-postgresql + // + // Engine is a required field + Engine *string `type:"string" required:"true"` + + // The DB engine mode of the DB cluster, either provisioned, serverless, or + // parallelquery. + EngineMode *string `type:"string"` + + // The version number of the database engine to use. + // + // Aurora MySQL + // + // Example: 5.6.10a, 5.7.12 + // + // Aurora PostgreSQL + // + // Example: 9.6.3 + EngineVersion *string `type:"string"` + + // The AWS KMS key identifier for an encrypted DB cluster. + // + // The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption + // key. If you are creating a DB cluster with the same AWS account that owns + // the KMS encryption key used to encrypt the new DB cluster, then you can use + // the KMS key alias instead of the ARN for the KMS encryption key. + // + // If an encryption key is not specified in KmsKeyId: + // + // * If ReplicationSourceIdentifier identifies an encrypted source, then + // Amazon RDS will use the encryption key used to encrypt the source. Otherwise, + // Amazon RDS will use your default encryption key. + // // * If the StorageEncrypted parameter is true and ReplicationSourceIdentifier // is not specified, then Amazon RDS will use your default encryption key. // @@ -11207,7 +12834,7 @@ type CreateDBClusterInput struct { // // * First character must be a letter. 
// - // * Cannot be a reserved word for the chosen database engine. + // * Can't be a reserved word for the chosen database engine. MasterUsername *string `type:"string"` // A value that indicates that the DB cluster should be associated with the @@ -11258,8 +12885,8 @@ type CreateDBClusterInput struct { // // The default is a 30-minute window selected at random from an 8-hour block // of time for each AWS Region. To see the time blocks available, see Adjusting - // the Preferred Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) - // in the Amazon RDS User Guide. + // the Preferred DB Cluster Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow.Aurora) + // in the Amazon Aurora User Guide. // // Constraints: // @@ -11279,9 +12906,9 @@ type CreateDBClusterInput struct { // // The default is a 30-minute window selected at random from an 8-hour block // of time for each AWS Region, occurring on a random day of the week. To see - // the time blocks available, see Adjusting the Preferred Maintenance Window - // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) - // in the Amazon RDS User Guide. + // the time blocks available, see Adjusting the Preferred DB Cluster Maintenance + // Window (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow.Aurora) + // in the Amazon Aurora User Guide. // // Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. // @@ -11292,6 +12919,10 @@ type CreateDBClusterInput struct { // this DB cluster is created as a Read Replica. ReplicationSourceIdentifier *string `type:"string"` + // For DB clusters in serverless DB engine mode, the scaling properties of the + // DB cluster. + ScalingConfiguration *ScalingConfiguration `type:"structure"` + // SourceRegion is the source region where the resource exists. This is not // sent over the wire and is only used for presigning. This value should always // have the same region as the source ARN. @@ -11300,7 +12931,8 @@ type CreateDBClusterInput struct { // Specifies whether the DB cluster is encrypted. StorageEncrypted *bool `type:"boolean"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // A list of EC2 VPC security groups to associate with this DB cluster. @@ -11339,6 +12971,12 @@ func (s *CreateDBClusterInput) SetAvailabilityZones(v []*string) *CreateDBCluste return s } +// SetBacktrackWindow sets the BacktrackWindow field's value. +func (s *CreateDBClusterInput) SetBacktrackWindow(v int64) *CreateDBClusterInput { + s.BacktrackWindow = &v + return s +} + // SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. func (s *CreateDBClusterInput) SetBackupRetentionPeriod(v int64) *CreateDBClusterInput { s.BackupRetentionPeriod = &v @@ -11375,12 +13013,24 @@ func (s *CreateDBClusterInput) SetDatabaseName(v string) *CreateDBClusterInput { return s } +// SetDeletionProtection sets the DeletionProtection field's value. 
+func (s *CreateDBClusterInput) SetDeletionProtection(v bool) *CreateDBClusterInput { + s.DeletionProtection = &v + return s +} + // SetDestinationRegion sets the DestinationRegion field's value. func (s *CreateDBClusterInput) SetDestinationRegion(v string) *CreateDBClusterInput { s.DestinationRegion = &v return s } +// SetEnableCloudwatchLogsExports sets the EnableCloudwatchLogsExports field's value. +func (s *CreateDBClusterInput) SetEnableCloudwatchLogsExports(v []*string) *CreateDBClusterInput { + s.EnableCloudwatchLogsExports = v + return s +} + // SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. func (s *CreateDBClusterInput) SetEnableIAMDatabaseAuthentication(v bool) *CreateDBClusterInput { s.EnableIAMDatabaseAuthentication = &v @@ -11393,6 +13043,12 @@ func (s *CreateDBClusterInput) SetEngine(v string) *CreateDBClusterInput { return s } +// SetEngineMode sets the EngineMode field's value. +func (s *CreateDBClusterInput) SetEngineMode(v string) *CreateDBClusterInput { + s.EngineMode = &v + return s +} + // SetEngineVersion sets the EngineVersion field's value. func (s *CreateDBClusterInput) SetEngineVersion(v string) *CreateDBClusterInput { s.EngineVersion = &v @@ -11453,6 +13109,12 @@ func (s *CreateDBClusterInput) SetReplicationSourceIdentifier(v string) *CreateD return s } +// SetScalingConfiguration sets the ScalingConfiguration field's value. +func (s *CreateDBClusterInput) SetScalingConfiguration(v *ScalingConfiguration) *CreateDBClusterInput { + s.ScalingConfiguration = v + return s +} + // SetSourceRegion sets the SourceRegion field's value. func (s *CreateDBClusterInput) SetSourceRegion(v string) *CreateDBClusterInput { s.SourceRegion = &v @@ -11480,9 +13142,10 @@ func (s *CreateDBClusterInput) SetVpcSecurityGroupIds(v []*string) *CreateDBClus type CreateDBClusterOutput struct { _ struct{} `type:"structure"` - // Contains the details of an Amazon RDS DB cluster. + // Contains the details of an Amazon Aurora DB cluster. // - // This data type is used as a response element in the DescribeDBClusters action. + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. DBCluster *DBCluster `type:"structure"` } @@ -11509,7 +13172,7 @@ type CreateDBClusterParameterGroupInput struct { // // Constraints: // - // * Must match the name of an existing DBClusterParameterGroup. + // * Must match the name of an existing DB cluster parameter group. // // This value is stored as a lowercase string. // @@ -11537,7 +13200,8 @@ type CreateDBClusterParameterGroupInput struct { // Description is a required field Description *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` } @@ -11644,7 +13308,7 @@ type CreateDBClusterSnapshotInput struct { // // * First character must be a letter. // - // * Cannot end with a hyphen or contain two consecutive hyphens. + // * Can't end with a hyphen or contain two consecutive hyphens. 
// // Example: my-cluster1-snapshot1 // @@ -11772,9 +13436,9 @@ type CreateDBInstanceInput struct { // // Constraints to the amount of storage for each storage type are the following: // - // * General Purpose (SSD) storage (gp2): Must be an integer from 20 to 16384. + // * General Purpose (SSD) storage (gp2): Must be an integer from 20 to 32768. // - // * Provisioned IOPS storage (io1): Must be an integer from 100 to 16384. + // * Provisioned IOPS storage (io1): Must be an integer from 100 to 32768. // // * Magnetic storage (standard): Must be an integer from 10 to 3072. // @@ -11836,7 +13500,7 @@ type CreateDBInstanceInput struct { // // * Must be a value from 0 to 35 // - // * Cannot be set to 0 if the DB instance is a source to Read Replicas + // * Can't be set to 0 if the DB instance is a source to Read Replicas BackupRetentionPeriod *int64 `type:"integer"` // For supported engines, indicates that the DB instance should be associated @@ -11876,7 +13540,7 @@ type CreateDBInstanceInput struct { // // * First character must be a letter. // - // * Cannot end with a hyphen or contain two consecutive hyphens. + // * Can't end with a hyphen or contain two consecutive hyphens. // // Example: mydbinstance // @@ -11897,7 +13561,7 @@ type CreateDBInstanceInput struct { // // * Must contain 1 to 64 letters or numbers. // - // * Cannot be a word reserved by the specified database engine + // * Can't be a word reserved by the specified database engine // // MariaDB // @@ -11908,7 +13572,7 @@ type CreateDBInstanceInput struct { // // * Must contain 1 to 64 letters or numbers. // - // * Cannot be a word reserved by the specified database engine + // * Can't be a word reserved by the specified database engine // // PostgreSQL // @@ -11923,7 +13587,7 @@ type CreateDBInstanceInput struct { // * Must begin with a letter or an underscore. Subsequent characters can // be letters, underscores, or digits (0-9). // - // * Cannot be a word reserved by the specified database engine + // * Can't be a word reserved by the specified database engine // // Oracle // @@ -11935,7 +13599,7 @@ type CreateDBInstanceInput struct { // // Constraints: // - // * Cannot be longer than 8 characters + // * Can't be longer than 8 characters // // SQL Server // @@ -11951,7 +13615,7 @@ type CreateDBInstanceInput struct { // // * Must contain 1 to 64 letters or numbers. // - // * Cannot be a word reserved by the specified database engine + // * Can't be a word reserved by the specified database engine DBName *string `type:"string"` // The name of the DB parameter group to associate with this DB instance. If @@ -11964,7 +13628,7 @@ type CreateDBInstanceInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens DBParameterGroupName *string `type:"string"` // A list of DB security groups to associate with this DB instance. @@ -11977,6 +13641,11 @@ type CreateDBInstanceInput struct { // If there is no DB subnet group, then it is a non-VPC DB instance. DBSubnetGroupName *string `type:"string"` + // Indicates if the DB instance should have deletion protection enabled. The + // database can't be deleted when this value is set to true. The default is + // false. For more information, see Deleting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html). + DeletionProtection *bool `type:"boolean"` + // Specify the Active Directory Domain to create the instance in. 
Domain *string `type:"string"` @@ -11985,7 +13654,9 @@ type CreateDBInstanceInput struct { DomainIAMRoleName *string `type:"string"` // The list of log types that need to be enabled for exporting to CloudWatch - // Logs. + // Logs. The values in the list depend on the DB engine being used. For more + // information, see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon Relational Database Service User Guide. EnableCloudwatchLogsExports []*string `type:"list"` // True to enable mapping of AWS Identity and Access Management (IAM) accounts @@ -12008,6 +13679,9 @@ type CreateDBInstanceInput struct { EnableIAMDatabaseAuthentication *bool `type:"boolean"` // True to enable Performance Insights for the DB instance, and otherwise false. + // + // For more information, see Using Amazon Performance Insights (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html) + // in the Amazon Relational Database Service User Guide. EnablePerformanceInsights *bool `type:"boolean"` // The name of the database engine to be used for this instance. @@ -12049,9 +13723,11 @@ type CreateDBInstanceInput struct { // The version number of the database engine to use. // - // The following are the database engines and major and minor versions that - // are available with Amazon RDS. Not every database engine is available for - // every AWS Region. + // For a list of valid engine versions, call DescribeDBEngineVersions. + // + // The following are the database engines and links to information about the + // major and minor versions that are available with Amazon RDS. Not every database + // engine is available for every AWS Region. // // Amazon Aurora // @@ -12060,120 +13736,59 @@ type CreateDBInstanceInput struct { // // MariaDB // - // * 10.2.11 (supported in all AWS Regions) - // - // 10.1.26 (supported in all AWS Regions) - // - // * 10.1.23 (supported in all AWS Regions) + // See MariaDB on Amazon RDS Versions (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MariaDB.html#MariaDB.Concepts.VersionMgmt) + // in the Amazon RDS User Guide. // - // * 10.1.19 (supported in all AWS Regions) + // Microsoft SQL Server // - // * 10.1.14 (supported in all AWS Regions except us-east-2) + // See Version and Feature Support on Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html#SQLServer.Concepts.General.FeatureSupport) + // in the Amazon RDS User Guide. // - // * 10.0.32 (supported in all AWS Regions) + // MySQL // - // * 10.0.31 (supported in all AWS Regions) + // See MySQL on Amazon RDS Versions (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.VersionMgmt) + // in the Amazon RDS User Guide. // - // * 10.0.28 (supported in all AWS Regions) + // Oracle // - // * 10.0.24 (supported in all AWS Regions) + // See Oracle Database Engine Release Notes (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.PatchComposition.html) + // in the Amazon RDS User Guide. // - // * 10.0.17 (supported in all AWS Regions except us-east-2, ca-central-1, - // eu-west-2) + // PostgreSQL // - // Microsoft SQL Server 2017 + // See Supported PostgreSQL Database Versions (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.DBVersions) + // in the Amazon RDS User Guide. 
+ EngineVersion *string `type:"string"` + + // The amount of Provisioned IOPS (input/output operations per second) to be + // initially allocated for the DB instance. For information about valid Iops + // values, see see Amazon RDS Provisioned IOPS Storage to Improve Performance + // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS) + // in the Amazon RDS User Guide. // - // * 14.00.1000.169.v1 (supported for all editions, and all AWS Regions) + // Constraints: Must be a multiple between 1 and 50 of the storage amount for + // the DB instance. + Iops *int64 `type:"integer"` + + // The AWS KMS key identifier for an encrypted DB instance. // - // Microsoft SQL Server 2016 + // The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption + // key. If you are creating a DB instance with the same AWS account that owns + // the KMS encryption key used to encrypt the new DB instance, then you can + // use the KMS key alias instead of the ARN for the KM encryption key. // - // * 13.00.4451.0.v1 (supported for all editions, and all AWS Regions) + // Amazon Aurora // - // * 13.00.4422.0.v1 (supported for all editions, and all AWS Regions) + // Not applicable. The KMS key identifier is managed by the DB cluster. For + // more information, see CreateDBCluster. // - // * 13.00.2164.0.v1 (supported for all editions, and all AWS Regions) - // - // Microsoft SQL Server 2014 - // - // * 12.00.5546.0.v1 (supported for all editions, and all AWS Regions) - // - // * 12.00.5000.0.v1 (supported for all editions, and all AWS Regions) - // - // * 12.00.4422.0.v1 (supported for all editions except Enterprise Edition, - // and all AWS Regions except ca-central-1 and eu-west-2) - // - // Microsoft SQL Server 2012 - // - // * 11.00.6594.0.v1 (supported for all editions, and all AWS Regions) - // - // * 11.00.6020.0.v1 (supported for all editions, and all AWS Regions) - // - // * 11.00.5058.0.v1 (supported for all editions, and all AWS Regions except - // us-east-2, ca-central-1, and eu-west-2) - // - // * 11.00.2100.60.v1 (supported for all editions, and all AWS Regions except - // us-east-2, ca-central-1, and eu-west-2) - // - // Microsoft SQL Server 2008 R2 - // - // * 10.50.6529.0.v1 (supported for all editions, and all AWS Regions except - // us-east-2, ca-central-1, and eu-west-2) - // - // * 10.50.6000.34.v1 (supported for all editions, and all AWS Regions except - // us-east-2, ca-central-1, and eu-west-2) - // - // * 10.50.2789.0.v1 (supported for all editions, and all AWS Regions except - // us-east-2, ca-central-1, and eu-west-2) - // - // MySQL - // - // * 5.7.19 (supported in all AWS regions) - // - // * 5.7.17 (supported in all AWS regions) - // - // * 5.7.16 (supported in all AWS regions) - // - // 5.6.37(supported in all AWS Regions) - // - // 5.6.35(supported in all AWS Regions) - // - // 5.6.34(supported in all AWS Regions) - // - // 5.6.29(supported in all AWS Regions) - // - // 5.6.27 - EngineVersion *string `type:"string"` - - // The amount of Provisioned IOPS (input/output operations per second) to be - // initially allocated for the DB instance. For information about valid Iops - // values, see see Amazon RDS Provisioned IOPS Storage to Improve Performance - // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS). - // - // Constraints: Must be a multiple between 1 and 50 of the storage amount for - // the DB instance. Must also be an integer multiple of 1000. 
For example, if - // the size of your DB instance is 500 GiB, then your Iops value can be 2000, - // 3000, 4000, or 5000. - Iops *int64 `type:"integer"` - - // The AWS KMS key identifier for an encrypted DB instance. - // - // The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption - // key. If you are creating a DB instance with the same AWS account that owns - // the KMS encryption key used to encrypt the new DB instance, then you can - // use the KMS key alias instead of the ARN for the KM encryption key. - // - // Amazon Aurora - // - // Not applicable. The KMS key identifier is managed by the DB cluster. For - // more information, see CreateDBCluster. - // - // If the StorageEncrypted parameter is true, and you do not specify a value - // for the KmsKeyId parameter, then Amazon RDS will use your default encryption - // key. AWS KMS creates the default encryption key for your AWS account. Your - // AWS account has a different default encryption key for each AWS Region. - KmsKeyId *string `type:"string"` - - // License model information for this DB instance. + // If the StorageEncrypted parameter is true, and you do not specify a value + // for the KmsKeyId parameter, then Amazon RDS will use your default encryption + // key. AWS KMS creates the default encryption key for your AWS account. Your + // AWS account has a different default encryption key for each AWS Region. + KmsKeyId *string `type:"string"` + + // License model information for this DB instance. // // Valid values: license-included | bring-your-own-license | general-public-license LicenseModel *string `type:"string"` @@ -12222,7 +13837,7 @@ type CreateDBInstanceInput struct { // // * Must be 1 to 16 letters or numbers. // - // * Cannot be a reserved word for the chosen database engine. + // * Can't be a reserved word for the chosen database engine. // // Microsoft SQL Server // @@ -12234,7 +13849,7 @@ type CreateDBInstanceInput struct { // // * The first character must be a letter. // - // * Cannot be a reserved word for the chosen database engine. + // * Can't be a reserved word for the chosen database engine. // // MySQL // @@ -12246,7 +13861,7 @@ type CreateDBInstanceInput struct { // // * First character must be a letter. // - // * Cannot be a reserved word for the chosen database engine. + // * Can't be a reserved word for the chosen database engine. // // Oracle // @@ -12258,7 +13873,7 @@ type CreateDBInstanceInput struct { // // * First character must be a letter. // - // * Cannot be a reserved word for the chosen database engine. + // * Can't be a reserved word for the chosen database engine. // // PostgreSQL // @@ -12270,7 +13885,7 @@ type CreateDBInstanceInput struct { // // * First character must be a letter. // - // * Cannot be a reserved word for the chosen database engine. + // * Can't be a reserved word for the chosen database engine. MasterUsername *string `type:"string"` // The interval, in seconds, between points when Enhanced Monitoring metrics @@ -12286,7 +13901,8 @@ type CreateDBInstanceInput struct { // The ARN for the IAM role that permits RDS to send enhanced monitoring metrics // to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. // For information on creating a monitoring role, go to Setting Up and Enabling - // Enhanced Monitoring (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html#USER_Monitoring.OS.Enabling). 
+ // Enhanced Monitoring (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html#USER_Monitoring.OS.Enabling) + // in the Amazon RDS User Guide. // // If MonitoringInterval is set to a value other than 0, then you must supply // a MonitoringRoleArn value. @@ -12309,6 +13925,10 @@ type CreateDBInstanceInput struct { // KMS key alias for the KMS encryption key. PerformanceInsightsKMSKeyId *string `type:"string"` + // The amount of time, in days, to retain Performance Insights data. Valid values + // are 7 or 731 (2 years). + PerformanceInsightsRetentionPeriod *int64 `type:"integer"` + // The port number on which the database accepts connections. // // MySQL @@ -12359,7 +13979,8 @@ type CreateDBInstanceInput struct { // The daily time range during which automated backups are created if automated // backups are enabled, using the BackupRetentionPeriod parameter. For more - // information, see The Backup Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow). + // information, see The Backup Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow) + // in the Amazon RDS User Guide. // // Amazon Aurora // @@ -12368,7 +13989,8 @@ type CreateDBInstanceInput struct { // // The default is a 30-minute window selected at random from an 8-hour block // of time for each AWS Region. To see the time blocks available, see Adjusting - // the Preferred DB Instance Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow). + // the Preferred DB Instance Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow) + // in the Amazon RDS User Guide. // // Constraints: // @@ -12395,9 +14017,14 @@ type CreateDBInstanceInput struct { // Constraints: Minimum 30-minute window. PreferredMaintenanceWindow *string `type:"string"` + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + // A value that specifies the order in which an Aurora Replica is promoted to // the primary instance after a failure of the existing primary instance. For - // more information, see Fault Tolerance for an Aurora DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html#Aurora.Managing.FaultTolerance). + // more information, see Fault Tolerance for an Aurora DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html#Aurora.Managing.FaultTolerance) + // in the Amazon Aurora User Guide. // // Default: 1 // @@ -12409,17 +14036,26 @@ type CreateDBInstanceInput struct { // which resolves to a public IP address. A value of false specifies an internal // instance with a DNS name that resolves to a private IP address. // - // Default: The default behavior varies depending on whether a VPC has been - // requested or not. The following list shows the default behavior in each case. + // Default: The default behavior varies depending on whether DBSubnetGroupName + // is specified. 
// - // * Default VPC: true + // If DBSubnetGroupName is not specified, and PubliclyAccessible is not specified, + // the following applies: // - // * VPC: false + // * If the default VPC in the target region doesn’t have an Internet gateway + // attached to it, the DB instance is private. // - // If no DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is publicly accessible. If a specific - // DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is private. + // * If the default VPC in the target region has an Internet gateway attached + // to it, the DB instance is public. + // + // If DBSubnetGroupName is specified, and PubliclyAccessible is not specified, + // the following applies: + // + // * If the subnets are part of a VPC that doesn’t have an Internet gateway + // attached to it, the DB instance is private. + // + // * If the subnets are part of a VPC that has an Internet gateway attached + // to it, the DB instance is public. PubliclyAccessible *bool `type:"boolean"` // Specifies whether the DB instance is encrypted. @@ -12441,7 +14077,8 @@ type CreateDBInstanceInput struct { // Default: io1 if the Iops parameter is specified, otherwise standard StorageType *string `type:"string"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // The ARN from the key store with which to associate the instance for TDE encryption. @@ -12573,6 +14210,12 @@ func (s *CreateDBInstanceInput) SetDBSubnetGroupName(v string) *CreateDBInstance return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *CreateDBInstanceInput) SetDeletionProtection(v bool) *CreateDBInstanceInput { + s.DeletionProtection = &v + return s +} + // SetDomain sets the Domain field's value. func (s *CreateDBInstanceInput) SetDomain(v string) *CreateDBInstanceInput { s.Domain = &v @@ -12675,6 +14318,12 @@ func (s *CreateDBInstanceInput) SetPerformanceInsightsKMSKeyId(v string) *Create return s } +// SetPerformanceInsightsRetentionPeriod sets the PerformanceInsightsRetentionPeriod field's value. +func (s *CreateDBInstanceInput) SetPerformanceInsightsRetentionPeriod(v int64) *CreateDBInstanceInput { + s.PerformanceInsightsRetentionPeriod = &v + return s +} + // SetPort sets the Port field's value. func (s *CreateDBInstanceInput) SetPort(v int64) *CreateDBInstanceInput { s.Port = &v @@ -12693,6 +14342,12 @@ func (s *CreateDBInstanceInput) SetPreferredMaintenanceWindow(v string) *CreateD return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. +func (s *CreateDBInstanceInput) SetProcessorFeatures(v []*ProcessorFeature) *CreateDBInstanceInput { + s.ProcessorFeatures = v + return s +} + // SetPromotionTier sets the PromotionTier field's value. func (s *CreateDBInstanceInput) SetPromotionTier(v int64) *CreateDBInstanceInput { s.PromotionTier = &v @@ -12835,10 +14490,18 @@ type CreateDBInstanceReadReplicaInput struct { // Example: mySubnetgroup DBSubnetGroupName *string `type:"string"` + // Indicates if the DB instance should have deletion protection enabled. 
The + // database can't be deleted when this value is set to true. The default is + // false. For more information, see Deleting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html). + DeletionProtection *bool `type:"boolean"` + // DestinationRegion is used for presigning the request to a given region. DestinationRegion *string `type:"string"` // The list of logs that the new DB instance is to export to CloudWatch Logs. + // The values in the list depend on the DB engine being used. For more information, + // see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon RDS User Guide. EnableCloudwatchLogsExports []*string `type:"list"` // True to enable mapping of AWS Identity and Access Management (IAM) accounts @@ -12850,12 +14513,15 @@ type CreateDBInstanceReadReplicaInput struct { // // * For MySQL 5.7, minor version 5.7.16 or higher // - // * Aurora 5.6 or higher. + // * Aurora MySQL 5.6 or higher // // Default: false EnableIAMDatabaseAuthentication *bool `type:"boolean"` // True to enable Performance Insights for the read replica, and otherwise false. + // + // For more information, see Using Amazon Performance Insights (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html) + // in the Amazon RDS User Guide. EnablePerformanceInsights *bool `type:"boolean"` // The amount of Provisioned IOPS (input/output operations per second) to be @@ -12892,7 +14558,8 @@ type CreateDBInstanceReadReplicaInput struct { // The ARN for the IAM role that permits RDS to send enhanced monitoring metrics // to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. // For information on creating a monitoring role, go to To create an IAM role - // for Amazon RDS Enhanced Monitoring (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html#USER_Monitoring.OS.IAMRole). + // for Amazon RDS Enhanced Monitoring (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html#USER_Monitoring.OS.IAMRole) + // in the Amazon RDS User Guide. // // If MonitoringInterval is set to a value other than 0, then you must supply // a MonitoringRoleArn value. @@ -12915,6 +14582,10 @@ type CreateDBInstanceReadReplicaInput struct { // KMS key alias for the KMS encryption key. PerformanceInsightsKMSKeyId *string `type:"string"` + // The amount of time, in days, to retain Performance Insights data. Valid values + // are 7 or 731 (2 years). + PerformanceInsightsRetentionPeriod *int64 `type:"integer"` + // The port number that the DB instance uses for connections. // // Default: Inherits from the source DB instance @@ -12965,22 +14636,15 @@ type CreateDBInstanceReadReplicaInput struct { // and Signature Version 4 Signing Process (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). PreSignedUrl *string `type:"string"` + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + // Specifies the accessibility options for the DB instance. A value of true // specifies an Internet-facing instance with a publicly resolvable DNS name, // which resolves to a public IP address. A value of false specifies an internal - // instance with a DNS name that resolves to a private IP address. 
- // - // Default: The default behavior varies depending on whether a VPC has been - // requested or not. The following list shows the default behavior in each case. - // - // * Default VPC:true - // - // * VPC:false - // - // If no DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is publicly accessible. If a specific - // DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is private. + // instance with a DNS name that resolves to a private IP address. For more + // information, see CreateDBInstance. PubliclyAccessible *bool `type:"boolean"` // The identifier of the DB instance that will act as the source for the Read @@ -12992,7 +14656,7 @@ type CreateDBInstanceReadReplicaInput struct { // DB instance. // // * Can specify a DB instance that is a MySQL Read Replica only if the source - // is running MySQL 5.6. + // is running MySQL 5.6 or later. // // * Can specify a DB instance that is a PostgreSQL DB instance only if the // source is running PostgreSQL 9.3.5 or later (9.4.7 and higher for cross-region @@ -13006,7 +14670,8 @@ type CreateDBInstanceReadReplicaInput struct { // // * If the source DB instance is in a different AWS Region than the Read // Replica, specify a valid DB instance ARN. For more information, go to - // Constructing a Amazon RDS Amazon Resource Name (ARN) (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing). + // Constructing an ARN for Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing) + // in the Amazon RDS User Guide. // // SourceDBInstanceIdentifier is a required field SourceDBInstanceIdentifier *string `type:"string" required:"true"` @@ -13025,8 +14690,18 @@ type CreateDBInstanceReadReplicaInput struct { // Default: io1 if the Iops parameter is specified, otherwise standard StorageType *string `type:"string"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` + + // A value that specifies that the DB instance class of the DB instance uses + // its default processor features. + UseDefaultProcessorFeatures *bool `type:"boolean"` + + // A list of EC2 VPC security groups to associate with the Read Replica. + // + // Default: The default EC2 VPC security group for the DB subnet group's VPC. + VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` } // String returns the string representation @@ -13091,6 +14766,12 @@ func (s *CreateDBInstanceReadReplicaInput) SetDBSubnetGroupName(v string) *Creat return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *CreateDBInstanceReadReplicaInput) SetDeletionProtection(v bool) *CreateDBInstanceReadReplicaInput { + s.DeletionProtection = &v + return s +} + // SetDestinationRegion sets the DestinationRegion field's value. 
func (s *CreateDBInstanceReadReplicaInput) SetDestinationRegion(v string) *CreateDBInstanceReadReplicaInput { s.DestinationRegion = &v @@ -13157,6 +14838,12 @@ func (s *CreateDBInstanceReadReplicaInput) SetPerformanceInsightsKMSKeyId(v stri return s } +// SetPerformanceInsightsRetentionPeriod sets the PerformanceInsightsRetentionPeriod field's value. +func (s *CreateDBInstanceReadReplicaInput) SetPerformanceInsightsRetentionPeriod(v int64) *CreateDBInstanceReadReplicaInput { + s.PerformanceInsightsRetentionPeriod = &v + return s +} + // SetPort sets the Port field's value. func (s *CreateDBInstanceReadReplicaInput) SetPort(v int64) *CreateDBInstanceReadReplicaInput { s.Port = &v @@ -13169,6 +14856,12 @@ func (s *CreateDBInstanceReadReplicaInput) SetPreSignedUrl(v string) *CreateDBIn return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. +func (s *CreateDBInstanceReadReplicaInput) SetProcessorFeatures(v []*ProcessorFeature) *CreateDBInstanceReadReplicaInput { + s.ProcessorFeatures = v + return s +} + // SetPubliclyAccessible sets the PubliclyAccessible field's value. func (s *CreateDBInstanceReadReplicaInput) SetPubliclyAccessible(v bool) *CreateDBInstanceReadReplicaInput { s.PubliclyAccessible = &v @@ -13199,6 +14892,18 @@ func (s *CreateDBInstanceReadReplicaInput) SetTags(v []*Tag) *CreateDBInstanceRe return s } +// SetUseDefaultProcessorFeatures sets the UseDefaultProcessorFeatures field's value. +func (s *CreateDBInstanceReadReplicaInput) SetUseDefaultProcessorFeatures(v bool) *CreateDBInstanceReadReplicaInput { + s.UseDefaultProcessorFeatures = &v + return s +} + +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *CreateDBInstanceReadReplicaInput) SetVpcSecurityGroupIds(v []*string) *CreateDBInstanceReadReplicaInput { + s.VpcSecurityGroupIds = v + return s +} + type CreateDBInstanceReadReplicaOutput struct { _ struct{} `type:"structure"` @@ -13232,6 +14937,13 @@ type CreateDBParameterGroupInput struct { // to a DB instance running a database engine and engine version compatible // with that DB parameter group family. // + // To list all of the available parameter group families, use the following + // command: + // + // aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily" + // + // The output contains duplicates. + // // DBParameterGroupFamily is a required field DBParameterGroupFamily *string `type:"string" required:"true"` @@ -13243,7 +14955,7 @@ type CreateDBParameterGroupInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // This value is stored as a lowercase string. // @@ -13255,7 +14967,8 @@ type CreateDBParameterGroupInput struct { // Description is a required field Description *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. 
Tags []*Tag `locationNameList:"Tag" type:"list"` } @@ -13354,7 +15067,7 @@ type CreateDBSecurityGroupInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // * Must not be "Default" // @@ -13363,7 +15076,8 @@ type CreateDBSecurityGroupInput struct { // DBSecurityGroupName is a required field DBSecurityGroupName *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` } @@ -13453,20 +15167,21 @@ type CreateDBSnapshotInput struct { // // Constraints: // - // * Cannot be null, empty, or blank + // * Can't be null, empty, or blank // // * Must contain from 1 to 255 letters, numbers, or hyphens // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: my-snapshot-id // // DBSnapshotIdentifier is a required field DBSnapshotIdentifier *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` } @@ -13562,7 +15277,8 @@ type CreateDBSubnetGroupInput struct { // SubnetIds is a required field SubnetIds []*string `locationNameList:"SubnetIdentifier" type:"list" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` } @@ -13703,7 +15419,8 @@ type CreateEventSubscriptionInput struct { // SubscriptionName is a required field SubscriptionName *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` } @@ -13827,14 +15544,15 @@ type CreateOptionGroupInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: myoptiongroup // // OptionGroupName is a required field OptionGroupName *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. 
For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` } @@ -13922,9 +15640,10 @@ func (s *CreateOptionGroupOutput) SetOptionGroup(v *OptionGroup) *CreateOptionGr return s } -// Contains the details of an Amazon RDS DB cluster. +// Contains the details of an Amazon Aurora DB cluster. // -// This data type is used as a response element in the DescribeDBClusters action. +// This data type is used as a response element in the DescribeDBClusters, StopDBCluster, +// and StartDBCluster actions. type DBCluster struct { _ struct{} `type:"structure"` @@ -13944,9 +15663,18 @@ type DBCluster struct { // can be created in. AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` + // The number of change records stored for Backtrack. + BacktrackConsumedChangeRecords *int64 `type:"long"` + + // The target backtrack window, in seconds. If this value is set to 0, backtracking + // is disabled for the DB cluster. Otherwise, backtracking is enabled. + BacktrackWindow *int64 `type:"long"` + // Specifies the number of days for which automatic DB snapshots are retained. BackupRetentionPeriod *int64 `type:"integer"` + Capacity *int64 `type:"integer"` + // If present, specifies the name of the character set that this cluster is // associated with. CharacterSetName *string `type:"string"` @@ -13956,7 +15684,10 @@ type DBCluster struct { // Specifies the time when the DB cluster was created, in Universal Coordinated // Time (UTC). - ClusterCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ClusterCreateTime *time.Time `type:"timestamp"` + + // Identifies all custom endpoints associated with the cluster. + CustomEndpoints []*string `type:"list"` // The Amazon Resource Name (ARN) for the DB cluster. DBClusterArn *string `type:"string"` @@ -13988,9 +15719,24 @@ type DBCluster struct { // cluster is accessed. DbClusterResourceId *string `type:"string"` - // Specifies the earliest time to which a database can be restored with point-in-time + // Indicates if the DB cluster has deletion protection enabled. The database + // can't be deleted when this value is set to true. + DeletionProtection *bool `type:"boolean"` + + // The earliest time to which a DB cluster can be backtracked. + EarliestBacktrackTime *time.Time `type:"timestamp"` + + // The earliest time to which a database can be restored with point-in-time // restore. - EarliestRestorableTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EarliestRestorableTime *time.Time `type:"timestamp"` + + // A list of log types that this DB cluster is configured to export to CloudWatch + // Logs. + // + // Log types vary by DB engine. For information about the log types for each + // DB engine, see Amazon RDS Database Log Files (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.html) + // in the Amazon Aurora User Guide. + EnabledCloudwatchLogsExports []*string `type:"list"` // Specifies the connection endpoint for the primary instance of the DB cluster. Endpoint *string `type:"string"` @@ -13998,6 +15744,10 @@ type DBCluster struct { // Provides the name of the database engine to be used for this DB cluster. Engine *string `type:"string"` + // The DB engine mode of the DB cluster, either provisioned, serverless, or + // parallelquery. + EngineMode *string `type:"string"` + // Indicates the database engine version. 
EngineVersion *string `type:"string"` @@ -14014,7 +15764,7 @@ type DBCluster struct { // Specifies the latest time to which a database can be restored with point-in-time // restore. - LatestRestorableTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LatestRestorableTime *time.Time `type:"timestamp"` // Contains the master username for the DB cluster. MasterUsername *string `type:"string"` @@ -14057,6 +15807,13 @@ type DBCluster struct { // Read Replica. ReplicationSourceIdentifier *string `type:"string"` + // Shows the scaling configuration for an Aurora DB cluster in serverless DB + // engine mode. + // + // For more information, see Using Amazon Aurora Serverless (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html) + // in the Amazon Aurora User Guide. + ScalingConfigurationInfo *ScalingConfigurationInfo `type:"structure"` + // Specifies the current state of this DB cluster. Status *string `type:"string"` @@ -14095,12 +15852,30 @@ func (s *DBCluster) SetAvailabilityZones(v []*string) *DBCluster { return s } +// SetBacktrackConsumedChangeRecords sets the BacktrackConsumedChangeRecords field's value. +func (s *DBCluster) SetBacktrackConsumedChangeRecords(v int64) *DBCluster { + s.BacktrackConsumedChangeRecords = &v + return s +} + +// SetBacktrackWindow sets the BacktrackWindow field's value. +func (s *DBCluster) SetBacktrackWindow(v int64) *DBCluster { + s.BacktrackWindow = &v + return s +} + // SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. func (s *DBCluster) SetBackupRetentionPeriod(v int64) *DBCluster { s.BackupRetentionPeriod = &v return s } +// SetCapacity sets the Capacity field's value. +func (s *DBCluster) SetCapacity(v int64) *DBCluster { + s.Capacity = &v + return s +} + // SetCharacterSetName sets the CharacterSetName field's value. func (s *DBCluster) SetCharacterSetName(v string) *DBCluster { s.CharacterSetName = &v @@ -14119,6 +15894,12 @@ func (s *DBCluster) SetClusterCreateTime(v time.Time) *DBCluster { return s } +// SetCustomEndpoints sets the CustomEndpoints field's value. +func (s *DBCluster) SetCustomEndpoints(v []*string) *DBCluster { + s.CustomEndpoints = v + return s +} + // SetDBClusterArn sets the DBClusterArn field's value. func (s *DBCluster) SetDBClusterArn(v string) *DBCluster { s.DBClusterArn = &v @@ -14167,12 +15948,30 @@ func (s *DBCluster) SetDbClusterResourceId(v string) *DBCluster { return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *DBCluster) SetDeletionProtection(v bool) *DBCluster { + s.DeletionProtection = &v + return s +} + +// SetEarliestBacktrackTime sets the EarliestBacktrackTime field's value. +func (s *DBCluster) SetEarliestBacktrackTime(v time.Time) *DBCluster { + s.EarliestBacktrackTime = &v + return s +} + // SetEarliestRestorableTime sets the EarliestRestorableTime field's value. func (s *DBCluster) SetEarliestRestorableTime(v time.Time) *DBCluster { s.EarliestRestorableTime = &v return s } +// SetEnabledCloudwatchLogsExports sets the EnabledCloudwatchLogsExports field's value. +func (s *DBCluster) SetEnabledCloudwatchLogsExports(v []*string) *DBCluster { + s.EnabledCloudwatchLogsExports = v + return s +} + // SetEndpoint sets the Endpoint field's value. func (s *DBCluster) SetEndpoint(v string) *DBCluster { s.Endpoint = &v @@ -14185,6 +15984,12 @@ func (s *DBCluster) SetEngine(v string) *DBCluster { return s } +// SetEngineMode sets the EngineMode field's value. 
+func (s *DBCluster) SetEngineMode(v string) *DBCluster { + s.EngineMode = &v + return s +} + // SetEngineVersion sets the EngineVersion field's value. func (s *DBCluster) SetEngineVersion(v string) *DBCluster { s.EngineVersion = &v @@ -14269,6 +16074,12 @@ func (s *DBCluster) SetReplicationSourceIdentifier(v string) *DBCluster { return s } +// SetScalingConfigurationInfo sets the ScalingConfigurationInfo field's value. +func (s *DBCluster) SetScalingConfigurationInfo(v *ScalingConfigurationInfo) *DBCluster { + s.ScalingConfigurationInfo = v + return s +} + // SetStatus sets the Status field's value. func (s *DBCluster) SetStatus(v string) *DBCluster { s.Status = &v @@ -14287,6 +16098,130 @@ func (s *DBCluster) SetVpcSecurityGroups(v []*VpcSecurityGroupMembership) *DBClu return s } +// This data type represents the information you need to connect to an Amazon +// Aurora DB cluster. This data type is used as a response element in the following +// actions: +// +// * CreateDBClusterEndpoint +// +// * DescribeDBClusterEndpoints +// +// * ModifyDBClusterEndpoint +// +// * DeleteDBClusterEndpoint +// +// For the data structure that represents Amazon RDS DB instance endpoints, +// see Endpoint. +type DBClusterEndpoint struct { + _ struct{} `type:"structure"` + + // The type associated with a custom endpoint. One of: READER, ANY. + CustomEndpointType *string `type:"string"` + + // The Amazon Resource Name (ARN) for the endpoint. + DBClusterEndpointArn *string `type:"string"` + + // The identifier associated with the endpoint. This parameter is stored as + // a lowercase string. + DBClusterEndpointIdentifier *string `type:"string"` + + // A unique system-generated identifier for an endpoint. It remains the same + // for the whole life of the endpoint. + DBClusterEndpointResourceIdentifier *string `type:"string"` + + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. + DBClusterIdentifier *string `type:"string"` + + // The DNS address of the endpoint. + Endpoint *string `type:"string"` + + // The type of the endpoint. One of: READER, WRITER, CUSTOM. + EndpointType *string `type:"string"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` + + // The current status of the endpoint. One of: creating, available, deleting, + // modifying. + Status *string `type:"string"` +} + +// String returns the string representation +func (s DBClusterEndpoint) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterEndpoint) GoString() string { + return s.String() +} + +// SetCustomEndpointType sets the CustomEndpointType field's value. +func (s *DBClusterEndpoint) SetCustomEndpointType(v string) *DBClusterEndpoint { + s.CustomEndpointType = &v + return s +} + +// SetDBClusterEndpointArn sets the DBClusterEndpointArn field's value. +func (s *DBClusterEndpoint) SetDBClusterEndpointArn(v string) *DBClusterEndpoint { + s.DBClusterEndpointArn = &v + return s +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. 
+func (s *DBClusterEndpoint) SetDBClusterEndpointIdentifier(v string) *DBClusterEndpoint { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterEndpointResourceIdentifier sets the DBClusterEndpointResourceIdentifier field's value. +func (s *DBClusterEndpoint) SetDBClusterEndpointResourceIdentifier(v string) *DBClusterEndpoint { + s.DBClusterEndpointResourceIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DBClusterEndpoint) SetDBClusterIdentifier(v string) *DBClusterEndpoint { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *DBClusterEndpoint) SetEndpoint(v string) *DBClusterEndpoint { + s.Endpoint = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *DBClusterEndpoint) SetEndpointType(v string) *DBClusterEndpoint { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *DBClusterEndpoint) SetExcludedMembers(v []*string) *DBClusterEndpoint { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *DBClusterEndpoint) SetStaticMembers(v []*string) *DBClusterEndpoint { + s.StaticMembers = v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBClusterEndpoint) SetStatus(v string) *DBClusterEndpoint { + s.Status = &v + return s +} + // Contains information about an instance that is part of a DB cluster. type DBClusterMember struct { _ struct{} `type:"structure"` @@ -14304,7 +16239,8 @@ type DBClusterMember struct { // A value that specifies the order in which an Aurora Replica is promoted to // the primary instance after a failure of the existing primary instance. For - // more information, see Fault Tolerance for an Aurora DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html#Aurora.Managing.FaultTolerance). + // more information, see Fault Tolerance for an Aurora DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html#Aurora.Managing.FaultTolerance) + // in the Amazon Aurora User Guide. PromotionTier *int64 `type:"integer"` } @@ -14442,7 +16378,7 @@ type DBClusterParameterGroupNameMessage struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // This value is stored as a lowercase string. DBClusterParameterGroupName *string `type:"string"` @@ -14469,6 +16405,8 @@ func (s *DBClusterParameterGroupNameMessage) SetDBClusterParameterGroupName(v st type DBClusterRole struct { _ struct{} `type:"structure"` + FeatureName *string `type:"string"` + // The Amazon Resource Name (ARN) of the IAM role that is associated with the // DB cluster. RoleArn *string `type:"string"` @@ -14497,6 +16435,12 @@ func (s DBClusterRole) GoString() string { return s.String() } +// SetFeatureName sets the FeatureName field's value. +func (s *DBClusterRole) SetFeatureName(v string) *DBClusterRole { + s.FeatureName = &v + return s +} + // SetRoleArn sets the RoleArn field's value. func (s *DBClusterRole) SetRoleArn(v string) *DBClusterRole { s.RoleArn = &v @@ -14525,7 +16469,7 @@ type DBClusterSnapshot struct { // Specifies the time when the DB cluster was created, in Universal Coordinated // Time (UTC). 
- ClusterCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ClusterCreateTime *time.Time `type:"timestamp"` // Specifies the DB cluster identifier of the DB cluster that this DB cluster // snapshot was created from. @@ -14566,7 +16510,7 @@ type DBClusterSnapshot struct { // Provides the time when the snapshot was taken, in Universal Coordinated Time // (UTC). - SnapshotCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + SnapshotCreateTime *time.Time `type:"timestamp"` // Provides the type of the DB cluster snapshot. SnapshotType *string `type:"string"` @@ -14833,6 +16777,9 @@ type DBEngineVersion struct { // parameter of the CreateDBInstance action. SupportedCharacterSets []*CharacterSet `locationNameList:"CharacterSet" type:"list"` + // A list of the supported DB engine modes. + SupportedEngineModes []*string `type:"list"` + // A list of the time zones supported by this engine for the Timezone parameter // of the CreateDBInstance action. SupportedTimezones []*Timezone `locationNameList:"Timezone" type:"list"` @@ -14907,6 +16854,12 @@ func (s *DBEngineVersion) SetSupportedCharacterSets(v []*CharacterSet) *DBEngine return s } +// SetSupportedEngineModes sets the SupportedEngineModes field's value. +func (s *DBEngineVersion) SetSupportedEngineModes(v []*string) *DBEngineVersion { + s.SupportedEngineModes = v + return s +} + // SetSupportedTimezones sets the SupportedTimezones field's value. func (s *DBEngineVersion) SetSupportedTimezones(v []*Timezone) *DBEngineVersion { s.SupportedTimezones = v @@ -15016,11 +16969,20 @@ type DBInstance struct { // instance is accessed. DbiResourceId *string `type:"string"` + // Indicates if the DB instance has deletion protection enabled. The database + // can't be deleted when this value is set to true. For more information, see + // Deleting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html). + DeletionProtection *bool `type:"boolean"` + // The Active Directory Domain membership records associated with the DB instance. DomainMemberships []*DomainMembership `locationNameList:"DomainMembership" type:"list"` // A list of log types that this DB instance is configured to export to CloudWatch // Logs. + // + // Log types vary by DB engine. For information about the log types for each + // DB engine, see Amazon RDS Database Log Files (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html) + // in the Amazon RDS User Guide. EnabledCloudwatchLogsExports []*string `type:"list"` // Specifies the connection endpoint. @@ -15050,7 +17012,7 @@ type DBInstance struct { IAMDatabaseAuthenticationEnabled *bool `type:"boolean"` // Provides the date and time the DB instance was created. - InstanceCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + InstanceCreateTime *time.Time `type:"timestamp"` // Specifies the Provisioned IOPS (I/O operations per second) value. Iops *int64 `type:"integer"` @@ -15061,11 +17023,14 @@ type DBInstance struct { // Specifies the latest time to which a database can be restored with point-in-time // restore. - LatestRestorableTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LatestRestorableTime *time.Time `type:"timestamp"` // License model information for this DB instance. LicenseModel *string `type:"string"` + // Specifies the listener connection endpoint for SQL Server Always On. + ListenerEndpoint *Endpoint `type:"structure"` + // Contains the master username for the DB instance. 
MasterUsername *string `type:"string"` @@ -15096,6 +17061,10 @@ type DBInstance struct { // KMS key alias for the KMS encryption key. PerformanceInsightsKMSKeyId *string `type:"string"` + // The amount of time, in days, to retain Performance Insights data. Valid values + // are 7 or 731 (2 years). + PerformanceInsightsRetentionPeriod *int64 `type:"integer"` + // Specifies the daily time range during which automated backups are created // if automated backups are enabled, as determined by the BackupRetentionPeriod. PreferredBackupWindow *string `type:"string"` @@ -15104,31 +17073,27 @@ type DBInstance struct { // in Universal Coordinated Time (UTC). PreferredMaintenanceWindow *string `type:"string"` + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + // A value that specifies the order in which an Aurora Replica is promoted to // the primary instance after a failure of the existing primary instance. For - // more information, see Fault Tolerance for an Aurora DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html#Aurora.Managing.FaultTolerance). + // more information, see Fault Tolerance for an Aurora DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html#Aurora.Managing.FaultTolerance) + // in the Amazon Aurora User Guide. PromotionTier *int64 `type:"integer"` // Specifies the accessibility options for the DB instance. A value of true // specifies an Internet-facing instance with a publicly resolvable DNS name, // which resolves to a public IP address. A value of false specifies an internal // instance with a DNS name that resolves to a private IP address. - // - // Default: The default behavior varies depending on whether a VPC has been - // requested or not. The following list shows the default behavior in each case. - // - // * Default VPC:true - // - // * VPC:false - // - // If no DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is publicly accessible. If a specific - // DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is private. PubliclyAccessible *bool `type:"boolean"` - // Contains one or more identifiers of Aurora DB clusters that are Read Replicas - // of this DB instance. + // Contains one or more identifiers of Aurora DB clusters to which the RDS DB + // instance is replicated as a Read Replica. For example, when you create an + // Aurora Read Replica of an RDS MySQL DB instance, the Aurora MySQL DB cluster + // for the Aurora Read Replica is shown. This output does not contain information + // about cross region Aurora Read Replicas. ReadReplicaDBClusterIdentifiers []*string `locationNameList:"ReadReplicaDBClusterIdentifier" type:"list"` // Contains one or more identifiers of the Read Replicas associated with this @@ -15285,6 +17250,12 @@ func (s *DBInstance) SetDbiResourceId(v string) *DBInstance { return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *DBInstance) SetDeletionProtection(v bool) *DBInstance { + s.DeletionProtection = &v + return s +} + // SetDomainMemberships sets the DomainMemberships field's value. 
func (s *DBInstance) SetDomainMemberships(v []*DomainMembership) *DBInstance { s.DomainMemberships = v @@ -15357,6 +17328,12 @@ func (s *DBInstance) SetLicenseModel(v string) *DBInstance { return s } +// SetListenerEndpoint sets the ListenerEndpoint field's value. +func (s *DBInstance) SetListenerEndpoint(v *Endpoint) *DBInstance { + s.ListenerEndpoint = v + return s +} + // SetMasterUsername sets the MasterUsername field's value. func (s *DBInstance) SetMasterUsername(v string) *DBInstance { s.MasterUsername = &v @@ -15405,6 +17382,12 @@ func (s *DBInstance) SetPerformanceInsightsKMSKeyId(v string) *DBInstance { return s } +// SetPerformanceInsightsRetentionPeriod sets the PerformanceInsightsRetentionPeriod field's value. +func (s *DBInstance) SetPerformanceInsightsRetentionPeriod(v int64) *DBInstance { + s.PerformanceInsightsRetentionPeriod = &v + return s +} + // SetPreferredBackupWindow sets the PreferredBackupWindow field's value. func (s *DBInstance) SetPreferredBackupWindow(v string) *DBInstance { s.PreferredBackupWindow = &v @@ -15417,6 +17400,12 @@ func (s *DBInstance) SetPreferredMaintenanceWindow(v string) *DBInstance { return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. +func (s *DBInstance) SetProcessorFeatures(v []*ProcessorFeature) *DBInstance { + s.ProcessorFeatures = v + return s +} + // SetPromotionTier sets the PromotionTier field's value. func (s *DBInstance) SetPromotionTier(v int64) *DBInstance { s.PromotionTier = &v @@ -15489,20 +17478,267 @@ func (s *DBInstance) SetVpcSecurityGroups(v []*VpcSecurityGroupMembership) *DBIn return s } -// Provides a list of status information for a DB instance. -type DBInstanceStatusInfo struct { +// An automated backup of a DB instance. It it consists of system backups, transaction +// logs, and the database instance properties that existed at the time you deleted +// the source instance. +type DBInstanceAutomatedBackup struct { _ struct{} `type:"structure"` - // Details of the error if there is an error for the instance. If the instance - // is not in an error state, this value is blank. - Message *string `type:"string"` + // Specifies the allocated storage size in gibibytes (GiB). + AllocatedStorage *int64 `type:"integer"` + + // The Availability Zone that the automated backup was created in. For information + // on AWS Regions and Availability Zones, see Regions and Availability Zones + // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html). + AvailabilityZone *string `type:"string"` + + // The Amazon Resource Name (ARN) for the automated backup. + DBInstanceArn *string `type:"string"` + + // The customer id of the instance that is/was associated with the automated + // backup. + DBInstanceIdentifier *string `type:"string"` + + // The identifier for the source DB instance, which can't be changed and which + // is unique to an AWS Region. + DbiResourceId *string `type:"string"` + + // Specifies whether the automated backup is encrypted. + Encrypted *bool `type:"boolean"` + + // The name of the database engine for this automated backup. + Engine *string `type:"string"` + + // The version of the database engine for the automated backup. + EngineVersion *string `type:"string"` + + // True if mapping of AWS Identity and Access Management (IAM) accounts to database + // accounts is enabled, and otherwise false. + IAMDatabaseAuthenticationEnabled *bool `type:"boolean"` + + // Provides the date and time that the DB instance was created. 
+ InstanceCreateTime *time.Time `type:"timestamp"` + + // The IOPS (I/O operations per second) value for the automated backup. + Iops *int64 `type:"integer"` + + // The AWS KMS key ID for an automated backup. The KMS key ID is the Amazon + // Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS + // encryption key. + KmsKeyId *string `type:"string"` + + // License model information for the automated backup. + LicenseModel *string `type:"string"` + + // The license model of an automated backup. + MasterUsername *string `type:"string"` + + // The option group the automated backup is associated with. If omitted, the + // default option group for the engine specified is used. + OptionGroupName *string `type:"string"` + + // The port number that the automated backup used for connections. + // + // Default: Inherits from the source DB instance + // + // Valid Values: 1150-65535 + Port *int64 `type:"integer"` + + // The AWS Region associated with the automated backup. + Region *string `type:"string"` + + // Earliest and latest time an instance can be restored to. + RestoreWindow *RestoreWindow `type:"structure"` + + // Provides a list of status information for an automated backup: + // + // * active - automated backups for current instances + // + // * retained - automated backups for deleted instances + // + // * creating - automated backups that are waiting for the first automated + // snapshot to be available. + Status *string `type:"string"` + + // Specifies the storage type associated with the automated backup. + StorageType *string `type:"string"` + + // The ARN from the key store with which the automated backup is associated + // for TDE encryption. + TdeCredentialArn *string `type:"string"` + + // The time zone of the automated backup. In most cases, the Timezone element + // is empty. Timezone content appears only for Microsoft SQL Server DB instances + // that were created with a time zone specified. + Timezone *string `type:"string"` + + // Provides the VPC ID associated with the DB instance + VpcId *string `type:"string"` +} + +// String returns the string representation +func (s DBInstanceAutomatedBackup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBInstanceAutomatedBackup) GoString() string { + return s.String() +} + +// SetAllocatedStorage sets the AllocatedStorage field's value. +func (s *DBInstanceAutomatedBackup) SetAllocatedStorage(v int64) *DBInstanceAutomatedBackup { + s.AllocatedStorage = &v + return s +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *DBInstanceAutomatedBackup) SetAvailabilityZone(v string) *DBInstanceAutomatedBackup { + s.AvailabilityZone = &v + return s +} + +// SetDBInstanceArn sets the DBInstanceArn field's value. +func (s *DBInstanceAutomatedBackup) SetDBInstanceArn(v string) *DBInstanceAutomatedBackup { + s.DBInstanceArn = &v + return s +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *DBInstanceAutomatedBackup) SetDBInstanceIdentifier(v string) *DBInstanceAutomatedBackup { + s.DBInstanceIdentifier = &v + return s +} + +// SetDbiResourceId sets the DbiResourceId field's value. +func (s *DBInstanceAutomatedBackup) SetDbiResourceId(v string) *DBInstanceAutomatedBackup { + s.DbiResourceId = &v + return s +} + +// SetEncrypted sets the Encrypted field's value. 
+func (s *DBInstanceAutomatedBackup) SetEncrypted(v bool) *DBInstanceAutomatedBackup { + s.Encrypted = &v + return s +} + +// SetEngine sets the Engine field's value. +func (s *DBInstanceAutomatedBackup) SetEngine(v string) *DBInstanceAutomatedBackup { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *DBInstanceAutomatedBackup) SetEngineVersion(v string) *DBInstanceAutomatedBackup { + s.EngineVersion = &v + return s +} + +// SetIAMDatabaseAuthenticationEnabled sets the IAMDatabaseAuthenticationEnabled field's value. +func (s *DBInstanceAutomatedBackup) SetIAMDatabaseAuthenticationEnabled(v bool) *DBInstanceAutomatedBackup { + s.IAMDatabaseAuthenticationEnabled = &v + return s +} + +// SetInstanceCreateTime sets the InstanceCreateTime field's value. +func (s *DBInstanceAutomatedBackup) SetInstanceCreateTime(v time.Time) *DBInstanceAutomatedBackup { + s.InstanceCreateTime = &v + return s +} + +// SetIops sets the Iops field's value. +func (s *DBInstanceAutomatedBackup) SetIops(v int64) *DBInstanceAutomatedBackup { + s.Iops = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *DBInstanceAutomatedBackup) SetKmsKeyId(v string) *DBInstanceAutomatedBackup { + s.KmsKeyId = &v + return s +} + +// SetLicenseModel sets the LicenseModel field's value. +func (s *DBInstanceAutomatedBackup) SetLicenseModel(v string) *DBInstanceAutomatedBackup { + s.LicenseModel = &v + return s +} + +// SetMasterUsername sets the MasterUsername field's value. +func (s *DBInstanceAutomatedBackup) SetMasterUsername(v string) *DBInstanceAutomatedBackup { + s.MasterUsername = &v + return s +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *DBInstanceAutomatedBackup) SetOptionGroupName(v string) *DBInstanceAutomatedBackup { + s.OptionGroupName = &v + return s +} + +// SetPort sets the Port field's value. +func (s *DBInstanceAutomatedBackup) SetPort(v int64) *DBInstanceAutomatedBackup { + s.Port = &v + return s +} + +// SetRegion sets the Region field's value. +func (s *DBInstanceAutomatedBackup) SetRegion(v string) *DBInstanceAutomatedBackup { + s.Region = &v + return s +} + +// SetRestoreWindow sets the RestoreWindow field's value. +func (s *DBInstanceAutomatedBackup) SetRestoreWindow(v *RestoreWindow) *DBInstanceAutomatedBackup { + s.RestoreWindow = v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBInstanceAutomatedBackup) SetStatus(v string) *DBInstanceAutomatedBackup { + s.Status = &v + return s +} + +// SetStorageType sets the StorageType field's value. +func (s *DBInstanceAutomatedBackup) SetStorageType(v string) *DBInstanceAutomatedBackup { + s.StorageType = &v + return s +} + +// SetTdeCredentialArn sets the TdeCredentialArn field's value. +func (s *DBInstanceAutomatedBackup) SetTdeCredentialArn(v string) *DBInstanceAutomatedBackup { + s.TdeCredentialArn = &v + return s +} + +// SetTimezone sets the Timezone field's value. +func (s *DBInstanceAutomatedBackup) SetTimezone(v string) *DBInstanceAutomatedBackup { + s.Timezone = &v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *DBInstanceAutomatedBackup) SetVpcId(v string) *DBInstanceAutomatedBackup { + s.VpcId = &v + return s +} + +// Provides a list of status information for a DB instance. +type DBInstanceStatusInfo struct { + _ struct{} `type:"structure"` + + // Details of the error if there is an error for the instance. If the instance + // is not in an error state, this value is blank. 
+ Message *string `type:"string"` // Boolean value that is true if the instance is operating normally, or false // if the instance is in an error state. Normal *bool `type:"boolean"` // Status of the DB instance. For a StatusType of read replica, the values can - // be replicating, error, stopped, or terminated. + // be replicating, replication stop point set, replication stop point reached, + // error, stopped, or terminated. Status *string `type:"string"` // This value is currently "read replication." @@ -15815,6 +18051,10 @@ type DBSnapshot struct { // Specifies the identifier for the DB snapshot. DBSnapshotIdentifier *string `type:"string"` + // The identifier for the source DB instance, which can't be changed and which + // is unique to an AWS Region. + DbiResourceId *string `type:"string"` + // Specifies whether the DB snapshot is encrypted. Encrypted *bool `type:"boolean"` @@ -15830,7 +18070,7 @@ type DBSnapshot struct { // Specifies the time when the snapshot was taken, in Universal Coordinated // Time (UTC). - InstanceCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + InstanceCreateTime *time.Time `type:"timestamp"` // Specifies the Provisioned IOPS (I/O operations per second) value of the DB // instance at the time of the snapshot. @@ -15855,9 +18095,13 @@ type DBSnapshot struct { // of the snapshot. Port *int64 `type:"integer"` + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance when the DB snapshot was created. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + // Provides the time when the snapshot was taken, in Universal Coordinated Time // (UTC). - SnapshotCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + SnapshotCreateTime *time.Time `type:"timestamp"` // Provides the type of the DB snapshot. SnapshotType *string `type:"string"` @@ -15927,6 +18171,12 @@ func (s *DBSnapshot) SetDBSnapshotIdentifier(v string) *DBSnapshot { return s } +// SetDbiResourceId sets the DbiResourceId field's value. +func (s *DBSnapshot) SetDbiResourceId(v string) *DBSnapshot { + s.DbiResourceId = &v + return s +} + // SetEncrypted sets the Encrypted field's value. func (s *DBSnapshot) SetEncrypted(v bool) *DBSnapshot { s.Encrypted = &v @@ -15999,6 +18249,12 @@ func (s *DBSnapshot) SetPort(v int64) *DBSnapshot { return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. +func (s *DBSnapshot) SetProcessorFeatures(v []*ProcessorFeature) *DBSnapshot { + s.ProcessorFeatures = v + return s +} + // SetSnapshotCreateTime sets the SnapshotCreateTime field's value. func (s *DBSnapshot) SetSnapshotCreateTime(v time.Time) *DBSnapshot { s.SnapshotCreateTime = &v @@ -16209,60 +18465,31 @@ func (s *DBSubnetGroup) SetVpcId(v string) *DBSubnetGroup { return s } -type DeleteDBClusterInput struct { +type DeleteDBClusterEndpointInput struct { _ struct{} `type:"structure"` - // The DB cluster identifier for the DB cluster to be deleted. This parameter - // isn't case-sensitive. - // - // Constraints: - // - // * Must match an existing DBClusterIdentifier. - // - // DBClusterIdentifier is a required field - DBClusterIdentifier *string `type:"string" required:"true"` - - // The DB cluster snapshot identifier of the new DB cluster snapshot created - // when SkipFinalSnapshot is set to false. - // - // Specifying this parameter and also setting the SkipFinalShapshot parameter - // to true results in an error. 
- // - // Constraints: - // - // * Must be 1 to 255 letters, numbers, or hyphens. - // - // * First character must be a letter - // - // * Cannot end with a hyphen or contain two consecutive hyphens - FinalDBSnapshotIdentifier *string `type:"string"` - - // Determines whether a final DB cluster snapshot is created before the DB cluster - // is deleted. If true is specified, no DB cluster snapshot is created. If false - // is specified, a DB cluster snapshot is created before the DB cluster is deleted. - // - // You must specify a FinalDBSnapshotIdentifier parameter if SkipFinalSnapshot - // is false. + // The identifier associated with the custom endpoint. This parameter is stored + // as a lowercase string. // - // Default: false - SkipFinalSnapshot *bool `type:"boolean"` + // DBClusterEndpointIdentifier is a required field + DBClusterEndpointIdentifier *string `type:"string" required:"true"` } // String returns the string representation -func (s DeleteDBClusterInput) String() string { +func (s DeleteDBClusterEndpointInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDBClusterInput) GoString() string { +func (s DeleteDBClusterEndpointInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDBClusterInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDBClusterInput"} - if s.DBClusterIdentifier == nil { - invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) +func (s *DeleteDBClusterEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBClusterEndpointInput"} + if s.DBClusterEndpointIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterEndpointIdentifier")) } if invalidParams.Len() > 0 { @@ -16271,30 +18498,223 @@ func (s *DeleteDBClusterInput) Validate() error { return nil } -// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. -func (s *DeleteDBClusterInput) SetDBClusterIdentifier(v string) *DeleteDBClusterInput { - s.DBClusterIdentifier = &v +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *DeleteDBClusterEndpointInput) SetDBClusterEndpointIdentifier(v string) *DeleteDBClusterEndpointInput { + s.DBClusterEndpointIdentifier = &v return s } -// SetFinalDBSnapshotIdentifier sets the FinalDBSnapshotIdentifier field's value. -func (s *DeleteDBClusterInput) SetFinalDBSnapshotIdentifier(v string) *DeleteDBClusterInput { - s.FinalDBSnapshotIdentifier = &v - return s -} +// This data type represents the information you need to connect to an Amazon +// Aurora DB cluster. This data type is used as a response element in the following +// actions: +// +// * CreateDBClusterEndpoint +// +// * DescribeDBClusterEndpoints +// +// * ModifyDBClusterEndpoint +// +// * DeleteDBClusterEndpoint +// +// For the data structure that represents Amazon RDS DB instance endpoints, +// see Endpoint. +type DeleteDBClusterEndpointOutput struct { + _ struct{} `type:"structure"` -// SetSkipFinalSnapshot sets the SkipFinalSnapshot field's value. -func (s *DeleteDBClusterInput) SetSkipFinalSnapshot(v bool) *DeleteDBClusterInput { - s.SkipFinalSnapshot = &v - return s -} + // The type associated with a custom endpoint. One of: READER, ANY. + CustomEndpointType *string `type:"string"` -type DeleteDBClusterOutput struct { + // The Amazon Resource Name (ARN) for the endpoint. 
+ DBClusterEndpointArn *string `type:"string"` + + // The identifier associated with the endpoint. This parameter is stored as + // a lowercase string. + DBClusterEndpointIdentifier *string `type:"string"` + + // A unique system-generated identifier for an endpoint. It remains the same + // for the whole life of the endpoint. + DBClusterEndpointResourceIdentifier *string `type:"string"` + + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. + DBClusterIdentifier *string `type:"string"` + + // The DNS address of the endpoint. + Endpoint *string `type:"string"` + + // The type of the endpoint. One of: READER, WRITER, CUSTOM. + EndpointType *string `type:"string"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` + + // The current status of the endpoint. One of: creating, available, deleting, + // modifying. + Status *string `type:"string"` +} + +// String returns the string representation +func (s DeleteDBClusterEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterEndpointOutput) GoString() string { + return s.String() +} + +// SetCustomEndpointType sets the CustomEndpointType field's value. +func (s *DeleteDBClusterEndpointOutput) SetCustomEndpointType(v string) *DeleteDBClusterEndpointOutput { + s.CustomEndpointType = &v + return s +} + +// SetDBClusterEndpointArn sets the DBClusterEndpointArn field's value. +func (s *DeleteDBClusterEndpointOutput) SetDBClusterEndpointArn(v string) *DeleteDBClusterEndpointOutput { + s.DBClusterEndpointArn = &v + return s +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *DeleteDBClusterEndpointOutput) SetDBClusterEndpointIdentifier(v string) *DeleteDBClusterEndpointOutput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterEndpointResourceIdentifier sets the DBClusterEndpointResourceIdentifier field's value. +func (s *DeleteDBClusterEndpointOutput) SetDBClusterEndpointResourceIdentifier(v string) *DeleteDBClusterEndpointOutput { + s.DBClusterEndpointResourceIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DeleteDBClusterEndpointOutput) SetDBClusterIdentifier(v string) *DeleteDBClusterEndpointOutput { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *DeleteDBClusterEndpointOutput) SetEndpoint(v string) *DeleteDBClusterEndpointOutput { + s.Endpoint = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *DeleteDBClusterEndpointOutput) SetEndpointType(v string) *DeleteDBClusterEndpointOutput { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *DeleteDBClusterEndpointOutput) SetExcludedMembers(v []*string) *DeleteDBClusterEndpointOutput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. 
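// A minimal, illustrative usage sketch (not part of the generated SDK): deleting a custom
// Aurora cluster endpoint with the DeleteDBClusterEndpointInput/Output shapes defined above.
// It assumes the generated DeleteDBClusterEndpoint method on *rds.RDS and imports of fmt,
// github.com/aws/aws-sdk-go/aws and github.com/aws/aws-sdk-go/service/rds; the endpoint
// identifier is a placeholder.
func exampleDeleteClusterEndpoint(client *rds.RDS) error {
	out, err := client.DeleteDBClusterEndpoint(&rds.DeleteDBClusterEndpointInput{
		DBClusterEndpointIdentifier: aws.String("example-custom-endpoint"), // placeholder
	})
	if err != nil {
		return err
	}
	// The response echoes the endpoint's last known state, typically "deleting".
	fmt.Println("endpoint", aws.StringValue(out.DBClusterEndpointIdentifier),
		"status:", aws.StringValue(out.Status))
	return nil
}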
+func (s *DeleteDBClusterEndpointOutput) SetStaticMembers(v []*string) *DeleteDBClusterEndpointOutput { + s.StaticMembers = v + return s +} + +// SetStatus sets the Status field's value. +func (s *DeleteDBClusterEndpointOutput) SetStatus(v string) *DeleteDBClusterEndpointOutput { + s.Status = &v + return s +} + +type DeleteDBClusterInput struct { + _ struct{} `type:"structure"` + + // The DB cluster identifier for the DB cluster to be deleted. This parameter + // isn't case-sensitive. + // + // Constraints: + // + // * Must match an existing DBClusterIdentifier. + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The DB cluster snapshot identifier of the new DB cluster snapshot created + // when SkipFinalSnapshot is set to false. + // + // Specifying this parameter and also setting the SkipFinalShapshot parameter + // to true results in an error. + // + // Constraints: + // + // * Must be 1 to 255 letters, numbers, or hyphens. + // + // * First character must be a letter + // + // * Can't end with a hyphen or contain two consecutive hyphens + FinalDBSnapshotIdentifier *string `type:"string"` + + // Determines whether a final DB cluster snapshot is created before the DB cluster + // is deleted. If true is specified, no DB cluster snapshot is created. If false + // is specified, a DB cluster snapshot is created before the DB cluster is deleted. + // + // You must specify a FinalDBSnapshotIdentifier parameter if SkipFinalSnapshot + // is false. + // + // Default: false + SkipFinalSnapshot *bool `type:"boolean"` +} + +// String returns the string representation +func (s DeleteDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBClusterInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DeleteDBClusterInput) SetDBClusterIdentifier(v string) *DeleteDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetFinalDBSnapshotIdentifier sets the FinalDBSnapshotIdentifier field's value. +func (s *DeleteDBClusterInput) SetFinalDBSnapshotIdentifier(v string) *DeleteDBClusterInput { + s.FinalDBSnapshotIdentifier = &v + return s +} + +// SetSkipFinalSnapshot sets the SkipFinalSnapshot field's value. +func (s *DeleteDBClusterInput) SetSkipFinalSnapshot(v bool) *DeleteDBClusterInput { + s.SkipFinalSnapshot = &v + return s +} + +type DeleteDBClusterOutput struct { _ struct{} `type:"structure"` - // Contains the details of an Amazon RDS DB cluster. + // Contains the details of an Amazon Aurora DB cluster. // - // This data type is used as a response element in the DescribeDBClusters action. + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. DBCluster *DBCluster `type:"structure"` } @@ -16325,7 +18745,7 @@ type DeleteDBClusterParameterGroupInput struct { // // * You can't delete a default DB cluster parameter group. // - // * Cannot be associated with any DB clusters. + // * Can't be associated with any DB clusters. 
// // DBClusterParameterGroupName is a required field DBClusterParameterGroupName *string `type:"string" required:"true"` @@ -16441,6 +18861,71 @@ func (s *DeleteDBClusterSnapshotOutput) SetDBClusterSnapshot(v *DBClusterSnapsho return s } +// Parameter input for the DeleteDBInstanceAutomatedBackup operation. +type DeleteDBInstanceAutomatedBackupInput struct { + _ struct{} `type:"structure"` + + // The identifier for the source DB instance, which can't be changed and which + // is unique to an AWS Region. + // + // DbiResourceId is a required field + DbiResourceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDBInstanceAutomatedBackupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBInstanceAutomatedBackupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDBInstanceAutomatedBackupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBInstanceAutomatedBackupInput"} + if s.DbiResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("DbiResourceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDbiResourceId sets the DbiResourceId field's value. +func (s *DeleteDBInstanceAutomatedBackupInput) SetDbiResourceId(v string) *DeleteDBInstanceAutomatedBackupInput { + s.DbiResourceId = &v + return s +} + +type DeleteDBInstanceAutomatedBackupOutput struct { + _ struct{} `type:"structure"` + + // An automated backup of a DB instance. It it consists of system backups, transaction + // logs, and the database instance properties that existed at the time you deleted + // the source instance. + DBInstanceAutomatedBackup *DBInstanceAutomatedBackup `type:"structure"` +} + +// String returns the string representation +func (s DeleteDBInstanceAutomatedBackupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBInstanceAutomatedBackupOutput) GoString() string { + return s.String() +} + +// SetDBInstanceAutomatedBackup sets the DBInstanceAutomatedBackup field's value. +func (s *DeleteDBInstanceAutomatedBackupOutput) SetDBInstanceAutomatedBackup(v *DBInstanceAutomatedBackup) *DeleteDBInstanceAutomatedBackupOutput { + s.DBInstanceAutomatedBackup = v + return s +} + type DeleteDBInstanceInput struct { _ struct{} `type:"structure"` @@ -16454,7 +18939,12 @@ type DeleteDBInstanceInput struct { // DBInstanceIdentifier is a required field DBInstanceIdentifier *string `type:"string" required:"true"` - // The DBSnapshotIdentifier of the new DBSnapshot created when SkipFinalSnapshot + // A value that indicates whether to remove automated backups immediately after + // the DB instance is deleted. This parameter isn't case-sensitive. This parameter + // defaults to true. + DeleteAutomatedBackups *bool `type:"boolean"` + + // The DBSnapshotIdentifier of the new DB snapshot created when SkipFinalSnapshot // is set to false. // // Specifying this parameter and also setting the SkipFinalShapshot parameter @@ -16464,20 +18954,21 @@ type DeleteDBInstanceInput struct { // // * Must be 1 to 255 letters or numbers. // - // * First character must be a letter + // * First character must be a letter. // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens. 
// - // * Cannot be specified when deleting a Read Replica. + // * Can't be specified when deleting a Read Replica. FinalDBSnapshotIdentifier *string `type:"string"` - // Determines whether a final DB snapshot is created before the DB instance - // is deleted. If true is specified, no DBSnapshot is created. If false is specified, - // a DB snapshot is created before the DB instance is deleted. + // A value that indicates whether a final DB snapshot is created before the + // DB instance is deleted. If true is specified, no DB snapshot is created. + // If false is specified, a DB snapshot is created before the DB instance is + // deleted. // - // Note that when a DB instance is in a failure state and has a status of 'failed', - // 'incompatible-restore', or 'incompatible-network', it can only be deleted - // when the SkipFinalSnapshot parameter is set to "true". + // When a DB instance is in a failure state and has a status of failed, incompatible-restore, + // or incompatible-network, you can only delete it when the SkipFinalSnapshot + // parameter is set to true. // // Specify true when deleting a Read Replica. // @@ -16517,6 +19008,12 @@ func (s *DeleteDBInstanceInput) SetDBInstanceIdentifier(v string) *DeleteDBInsta return s } +// SetDeleteAutomatedBackups sets the DeleteAutomatedBackups field's value. +func (s *DeleteDBInstanceInput) SetDeleteAutomatedBackups(v bool) *DeleteDBInstanceInput { + s.DeleteAutomatedBackups = &v + return s +} + // SetFinalDBSnapshotIdentifier sets the FinalDBSnapshotIdentifier field's value. func (s *DeleteDBInstanceInput) SetFinalDBSnapshotIdentifier(v string) *DeleteDBInstanceInput { s.FinalDBSnapshotIdentifier = &v @@ -16565,7 +19062,7 @@ type DeleteDBParameterGroupInput struct { // // * You can't delete a default DB parameter group // - // * Cannot be associated with any DB instances + // * Can't be associated with any DB instances // // DBParameterGroupName is a required field DBParameterGroupName *string `type:"string" required:"true"` @@ -16627,7 +19124,7 @@ type DeleteDBSecurityGroupInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // * Must not be "Default" // @@ -16681,7 +19178,7 @@ func (s DeleteDBSecurityGroupOutput) GoString() string { type DeleteDBSnapshotInput struct { _ struct{} `type:"structure"` - // The DBSnapshot identifier. + // The DB snapshot identifier. // // Constraints: Must be the name of an existing DB snapshot in the available // state. @@ -16870,29 +19367,309 @@ func (s *DeleteEventSubscriptionOutput) SetEventSubscription(v *EventSubscriptio type DeleteOptionGroupInput struct { _ struct{} `type:"structure"` - // The name of the option group to be deleted. + // The name of the option group to be deleted. + // + // You can't delete default option groups. + // + // OptionGroupName is a required field + OptionGroupName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteOptionGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteOptionGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
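// A minimal, illustrative usage sketch (not part of the generated SDK): deleting a DB
// instance while keeping its automated backups, using the DeleteAutomatedBackups flag
// added above together with a final snapshot. Assumes imports of fmt,
// github.com/aws/aws-sdk-go/aws and github.com/aws/aws-sdk-go/service/rds; identifiers
// are placeholders.
func exampleDeleteInstanceKeepBackups(client *rds.RDS) error {
	out, err := client.DeleteDBInstance(&rds.DeleteDBInstanceInput{
		DBInstanceIdentifier: aws.String("example-instance"), // placeholder
		// Keep the automated backups around as a "retained" backup set.
		DeleteAutomatedBackups: aws.Bool(false),
		// Take a final snapshot; FinalDBSnapshotIdentifier is required when
		// SkipFinalSnapshot is false.
		SkipFinalSnapshot:         aws.Bool(false),
		FinalDBSnapshotIdentifier: aws.String("example-instance-final"), // placeholder
	})
	if err != nil {
		return err
	}
	fmt.Println("instance status:", aws.StringValue(out.DBInstance.DBInstanceStatus))
	return nil
}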
+func (s *DeleteOptionGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteOptionGroupInput"} + if s.OptionGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("OptionGroupName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOptionGroupName sets the OptionGroupName field's value. +func (s *DeleteOptionGroupInput) SetOptionGroupName(v string) *DeleteOptionGroupInput { + s.OptionGroupName = &v + return s +} + +type DeleteOptionGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteOptionGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteOptionGroupOutput) GoString() string { + return s.String() +} + +type DescribeAccountAttributesInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DescribeAccountAttributesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountAttributesInput) GoString() string { + return s.String() +} + +// Data returned by the DescribeAccountAttributes action. +type DescribeAccountAttributesOutput struct { + _ struct{} `type:"structure"` + + // A list of AccountQuota objects. Within this list, each quota has a name, + // a count of usage toward the quota maximum, and a maximum value for the quota. + AccountQuotas []*AccountQuota `locationNameList:"AccountQuota" type:"list"` +} + +// String returns the string representation +func (s DescribeAccountAttributesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountAttributesOutput) GoString() string { + return s.String() +} + +// SetAccountQuotas sets the AccountQuotas field's value. +func (s *DescribeAccountAttributesOutput) SetAccountQuotas(v []*AccountQuota) *DescribeAccountAttributesOutput { + s.AccountQuotas = v + return s +} + +type DescribeCertificatesInput struct { + _ struct{} `type:"structure"` + + // The user-supplied certificate identifier. If this parameter is specified, + // information for only the identified certificate is returned. This parameter + // isn't case-sensitive. + // + // Constraints: + // + // * Must match an existing CertificateIdentifier. + CertificateIdentifier *string `type:"string"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeCertificates + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeCertificatesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCertificatesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
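// A minimal, illustrative usage sketch (not part of the generated SDK): listing CA
// certificates with the Marker/MaxRecords pagination fields documented above. Assumes
// imports of fmt, github.com/aws/aws-sdk-go/aws and github.com/aws/aws-sdk-go/service/rds.
func exampleListCertificates(client *rds.RDS) error {
	input := &rds.DescribeCertificatesInput{MaxRecords: aws.Int64(20)} // 20 is the documented minimum
	for {
		out, err := client.DescribeCertificates(input)
		if err != nil {
			return err
		}
		for _, cert := range out.Certificates {
			fmt.Println("certificate:", aws.StringValue(cert.CertificateIdentifier))
		}
		// An empty marker means the last page has been reached.
		if aws.StringValue(out.Marker) == "" {
			return nil
		}
		input.Marker = out.Marker
	}
}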
+func (s *DescribeCertificatesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeCertificatesInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCertificateIdentifier sets the CertificateIdentifier field's value. +func (s *DescribeCertificatesInput) SetCertificateIdentifier(v string) *DescribeCertificatesInput { + s.CertificateIdentifier = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeCertificatesInput) SetFilters(v []*Filter) *DescribeCertificatesInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeCertificatesInput) SetMarker(v string) *DescribeCertificatesInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeCertificatesInput) SetMaxRecords(v int64) *DescribeCertificatesInput { + s.MaxRecords = &v + return s +} + +// Data returned by the DescribeCertificates action. +type DescribeCertificatesOutput struct { + _ struct{} `type:"structure"` + + // The list of Certificate objects for the AWS account. + Certificates []*Certificate `locationNameList:"Certificate" type:"list"` + + // An optional pagination token provided by a previous DescribeCertificates + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords . + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeCertificatesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCertificatesOutput) GoString() string { + return s.String() +} + +// SetCertificates sets the Certificates field's value. +func (s *DescribeCertificatesOutput) SetCertificates(v []*Certificate) *DescribeCertificatesOutput { + s.Certificates = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeCertificatesOutput) SetMarker(v string) *DescribeCertificatesOutput { + s.Marker = &v + return s +} + +type DescribeDBClusterBacktracksInput struct { + _ struct{} `type:"structure"` + + // If specified, this value is the backtrack identifier of the backtrack to + // be described. + // + // Constraints: + // + // * Must contain a valid universally unique identifier (UUID). For more + // information about UUIDs, see A Universally Unique Identifier (UUID) URN + // Namespace (http://www.ietf.org/rfc/rfc4122.txt). + // + // Example: 123e4567-e89b-12d3-a456-426655440000 + BacktrackIdentifier *string `type:"string"` + + // The DB cluster identifier of the DB cluster to be described. This parameter + // is stored as a lowercase string. + // + // Constraints: + // + // * Must contain from 1 to 63 alphanumeric characters or hyphens. + // + // * First character must be a letter. + // + // * Can't end with a hyphen or contain two consecutive hyphens. + // + // Example: my-cluster1 + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // A filter that specifies one or more DB clusters to describe. Supported filters + // include the following: + // + // * db-cluster-backtrack-id - Accepts backtrack identifiers. 
The results + // list includes information about only the backtracks identified by these + // identifiers. + // + // * db-cluster-backtrack-status - Accepts any of the following backtrack + // status values: + // + // applying + // + // completed + // + // failed + // + // pending + // + // The results list includes information about only the backtracks identified + // by these values. For more information about backtrack status values, see + // DBClusterBacktrack. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBClusterBacktracks + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. // - // You can't delete default option groups. + // Default: 100 // - // OptionGroupName is a required field - OptionGroupName *string `type:"string" required:"true"` + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` } // String returns the string representation -func (s DeleteOptionGroupInput) String() string { +func (s DescribeDBClusterBacktracksInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteOptionGroupInput) GoString() string { +func (s DescribeDBClusterBacktracksInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteOptionGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteOptionGroupInput"} - if s.OptionGroupName == nil { - invalidParams.Add(request.NewErrParamRequired("OptionGroupName")) +func (s *DescribeDBClusterBacktracksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBClusterBacktracksInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -16901,81 +19678,92 @@ func (s *DeleteOptionGroupInput) Validate() error { return nil } -// SetOptionGroupName sets the OptionGroupName field's value. -func (s *DeleteOptionGroupInput) SetOptionGroupName(v string) *DeleteOptionGroupInput { - s.OptionGroupName = &v +// SetBacktrackIdentifier sets the BacktrackIdentifier field's value. +func (s *DescribeDBClusterBacktracksInput) SetBacktrackIdentifier(v string) *DescribeDBClusterBacktracksInput { + s.BacktrackIdentifier = &v return s } -type DeleteOptionGroupOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteOptionGroupOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DeleteOptionGroupOutput) GoString() string { - return s.String() +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. 
+func (s *DescribeDBClusterBacktracksInput) SetDBClusterIdentifier(v string) *DescribeDBClusterBacktracksInput { + s.DBClusterIdentifier = &v + return s } -type DescribeAccountAttributesInput struct { - _ struct{} `type:"structure"` +// SetFilters sets the Filters field's value. +func (s *DescribeDBClusterBacktracksInput) SetFilters(v []*Filter) *DescribeDBClusterBacktracksInput { + s.Filters = v + return s } -// String returns the string representation -func (s DescribeAccountAttributesInput) String() string { - return awsutil.Prettify(s) +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterBacktracksInput) SetMarker(v string) *DescribeDBClusterBacktracksInput { + s.Marker = &v + return s } -// GoString returns the string representation -func (s DescribeAccountAttributesInput) GoString() string { - return s.String() +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBClusterBacktracksInput) SetMaxRecords(v int64) *DescribeDBClusterBacktracksInput { + s.MaxRecords = &v + return s } -// Data returned by the DescribeAccountAttributes action. -type DescribeAccountAttributesOutput struct { +// Contains the result of a successful invocation of the DescribeDBClusterBacktracks +// action. +type DescribeDBClusterBacktracksOutput struct { _ struct{} `type:"structure"` - // A list of AccountQuota objects. Within this list, each quota has a name, - // a count of usage toward the quota maximum, and a maximum value for the quota. - AccountQuotas []*AccountQuota `locationNameList:"AccountQuota" type:"list"` + // Contains a list of backtracks for the user. + DBClusterBacktracks []*BacktrackDBClusterOutput `locationNameList:"DBClusterBacktrack" type:"list"` + + // A pagination token that can be used in a subsequent DescribeDBClusterBacktracks + // request. + Marker *string `type:"string"` } // String returns the string representation -func (s DescribeAccountAttributesOutput) String() string { +func (s DescribeDBClusterBacktracksOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeAccountAttributesOutput) GoString() string { +func (s DescribeDBClusterBacktracksOutput) GoString() string { return s.String() } -// SetAccountQuotas sets the AccountQuotas field's value. -func (s *DescribeAccountAttributesOutput) SetAccountQuotas(v []*AccountQuota) *DescribeAccountAttributesOutput { - s.AccountQuotas = v +// SetDBClusterBacktracks sets the DBClusterBacktracks field's value. +func (s *DescribeDBClusterBacktracksOutput) SetDBClusterBacktracks(v []*BacktrackDBClusterOutput) *DescribeDBClusterBacktracksOutput { + s.DBClusterBacktracks = v return s } -type DescribeCertificatesInput struct { +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterBacktracksOutput) SetMarker(v string) *DescribeDBClusterBacktracksOutput { + s.Marker = &v + return s +} + +type DescribeDBClusterEndpointsInput struct { _ struct{} `type:"structure"` - // The user-supplied certificate identifier. If this parameter is specified, - // information for only the identified certificate is returned. This parameter - // isn't case-sensitive. - // - // Constraints: - // - // * Must match an existing CertificateIdentifier. - CertificateIdentifier *string `type:"string"` + // The identifier of the endpoint to describe. This parameter is stored as a + // lowercase string. + DBClusterEndpointIdentifier *string `type:"string"` - // This parameter is not currently supported. 
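// A minimal, illustrative usage sketch (not part of the generated SDK): listing completed
// backtracks for an Aurora cluster using the DescribeDBClusterBacktracks shapes and the
// db-cluster-backtrack-status filter described above. Assumes imports of fmt,
// github.com/aws/aws-sdk-go/aws and github.com/aws/aws-sdk-go/service/rds; the cluster
// identifier is a placeholder.
func exampleListCompletedBacktracks(client *rds.RDS) error {
	out, err := client.DescribeDBClusterBacktracks(&rds.DescribeDBClusterBacktracksInput{
		DBClusterIdentifier: aws.String("my-cluster1"), // placeholder
		Filters: []*rds.Filter{{
			Name:   aws.String("db-cluster-backtrack-status"),
			Values: []*string{aws.String("completed")},
		}},
	})
	if err != nil {
		return err
	}
	// Each element is a BacktrackDBClusterOutput; its String() method produces a
	// readable dump without depending on individual field names.
	for _, bt := range out.DBClusterBacktracks {
		fmt.Println(bt)
	}
	return nil
}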
+ // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. + DBClusterIdentifier *string `type:"string"` + + // A set of name-value pairs that define which endpoints to include in the output. + // The filters are specified as name-value pairs, in the format Name=endpoint_type,Values=endpoint_type1,endpoint_type2,.... + // Name can be one of: db-cluster-endpoint-type, db-cluster-endpoint-custom-type, + // db-cluster-endpoint-id, db-cluster-endpoint-status. Values for the db-cluster-endpoint-type + // filter can be one or more of: reader, writer, custom. Values for the db-cluster-endpoint-custom-type + // filter can be one or more of: reader, any. Values for the db-cluster-endpoint-status + // filter can be one or more of: available, creating, deleting, modifying. Filters []*Filter `locationNameList:"Filter" type:"list"` - // An optional pagination token provided by a previous DescribeCertificates + // An optional pagination token provided by a previous DescribeDBClusterEndpoints // request. If this parameter is specified, the response includes only records // beyond the marker, up to the value specified by MaxRecords. Marker *string `type:"string"` @@ -16991,18 +19779,18 @@ type DescribeCertificatesInput struct { } // String returns the string representation -func (s DescribeCertificatesInput) String() string { +func (s DescribeDBClusterEndpointsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeCertificatesInput) GoString() string { +func (s DescribeDBClusterEndpointsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeCertificatesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeCertificatesInput"} +func (s *DescribeDBClusterEndpointsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBClusterEndpointsInput"} if s.Filters != nil { for i, v := range s.Filters { if v == nil { @@ -17020,61 +19808,67 @@ func (s *DescribeCertificatesInput) Validate() error { return nil } -// SetCertificateIdentifier sets the CertificateIdentifier field's value. -func (s *DescribeCertificatesInput) SetCertificateIdentifier(v string) *DescribeCertificatesInput { - s.CertificateIdentifier = &v +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *DescribeDBClusterEndpointsInput) SetDBClusterEndpointIdentifier(v string) *DescribeDBClusterEndpointsInput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DescribeDBClusterEndpointsInput) SetDBClusterIdentifier(v string) *DescribeDBClusterEndpointsInput { + s.DBClusterIdentifier = &v return s } // SetFilters sets the Filters field's value. -func (s *DescribeCertificatesInput) SetFilters(v []*Filter) *DescribeCertificatesInput { +func (s *DescribeDBClusterEndpointsInput) SetFilters(v []*Filter) *DescribeDBClusterEndpointsInput { s.Filters = v return s } // SetMarker sets the Marker field's value. -func (s *DescribeCertificatesInput) SetMarker(v string) *DescribeCertificatesInput { +func (s *DescribeDBClusterEndpointsInput) SetMarker(v string) *DescribeDBClusterEndpointsInput { s.Marker = &v return s } // SetMaxRecords sets the MaxRecords field's value. 
-func (s *DescribeCertificatesInput) SetMaxRecords(v int64) *DescribeCertificatesInput { +func (s *DescribeDBClusterEndpointsInput) SetMaxRecords(v int64) *DescribeDBClusterEndpointsInput { s.MaxRecords = &v return s } -// Data returned by the DescribeCertificates action. -type DescribeCertificatesOutput struct { +type DescribeDBClusterEndpointsOutput struct { _ struct{} `type:"structure"` - // The list of Certificate objects for the AWS account. - Certificates []*Certificate `locationNameList:"Certificate" type:"list"` + // Contains the details of the endpoints associated with the cluster and matching + // any filter conditions. + DBClusterEndpoints []*DBClusterEndpoint `locationNameList:"DBClusterEndpointList" type:"list"` - // An optional pagination token provided by a previous DescribeCertificates + // An optional pagination token provided by a previous DescribeDBClusterEndpoints // request. If this parameter is specified, the response includes only records - // beyond the marker, up to the value specified by MaxRecords . + // beyond the marker, up to the value specified by MaxRecords. Marker *string `type:"string"` } // String returns the string representation -func (s DescribeCertificatesOutput) String() string { +func (s DescribeDBClusterEndpointsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeCertificatesOutput) GoString() string { +func (s DescribeDBClusterEndpointsOutput) GoString() string { return s.String() } -// SetCertificates sets the Certificates field's value. -func (s *DescribeCertificatesOutput) SetCertificates(v []*Certificate) *DescribeCertificatesOutput { - s.Certificates = v +// SetDBClusterEndpoints sets the DBClusterEndpoints field's value. +func (s *DescribeDBClusterEndpointsOutput) SetDBClusterEndpoints(v []*DBClusterEndpoint) *DescribeDBClusterEndpointsOutput { + s.DBClusterEndpoints = v return s } // SetMarker sets the Marker field's value. -func (s *DescribeCertificatesOutput) SetMarker(v string) *DescribeCertificatesOutput { +func (s *DescribeDBClusterEndpointsOutput) SetMarker(v string) *DescribeDBClusterEndpointsOutput { s.Marker = &v return s } @@ -17739,7 +20533,7 @@ type DescribeDBEngineVersionsInput struct { // Example: 5.1.49 EngineVersion *string `type:"string"` - // Not currently supported. + // This parameter is not currently supported. Filters []*Filter `locationNameList:"Filter" type:"list"` // If this parameter is specified and the requested engine supports the CharacterSetName @@ -17809,80 +20603,225 @@ func (s *DescribeDBEngineVersionsInput) SetDefaultOnly(v bool) *DescribeDBEngine return s } -// SetEngine sets the Engine field's value. -func (s *DescribeDBEngineVersionsInput) SetEngine(v string) *DescribeDBEngineVersionsInput { - s.Engine = &v +// SetEngine sets the Engine field's value. +func (s *DescribeDBEngineVersionsInput) SetEngine(v string) *DescribeDBEngineVersionsInput { + s.Engine = &v + return s +} + +// SetEngineVersion sets the EngineVersion field's value. +func (s *DescribeDBEngineVersionsInput) SetEngineVersion(v string) *DescribeDBEngineVersionsInput { + s.EngineVersion = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBEngineVersionsInput) SetFilters(v []*Filter) *DescribeDBEngineVersionsInput { + s.Filters = v + return s +} + +// SetListSupportedCharacterSets sets the ListSupportedCharacterSets field's value. 
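// A minimal, illustrative usage sketch (not part of the generated SDK): listing the custom
// endpoints of an Aurora cluster with the DescribeDBClusterEndpoints shapes and the
// db-cluster-endpoint-type filter documented above. Assumes imports of fmt,
// github.com/aws/aws-sdk-go/aws and github.com/aws/aws-sdk-go/service/rds; the cluster
// identifier is a placeholder.
func exampleListCustomClusterEndpoints(client *rds.RDS) error {
	out, err := client.DescribeDBClusterEndpoints(&rds.DescribeDBClusterEndpointsInput{
		DBClusterIdentifier: aws.String("example-cluster"), // placeholder
		Filters: []*rds.Filter{{
			Name:   aws.String("db-cluster-endpoint-type"),
			Values: []*string{aws.String("custom")},
		}},
	})
	if err != nil {
		return err
	}
	// DBClusterEndpoint carries the same endpoint attributes as the
	// DeleteDBClusterEndpointOutput shape shown earlier.
	for _, ep := range out.DBClusterEndpoints {
		fmt.Println(aws.StringValue(ep.DBClusterEndpointIdentifier),
			aws.StringValue(ep.EndpointType),
			aws.StringValue(ep.Status))
	}
	return nil
}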
+func (s *DescribeDBEngineVersionsInput) SetListSupportedCharacterSets(v bool) *DescribeDBEngineVersionsInput { + s.ListSupportedCharacterSets = &v + return s +} + +// SetListSupportedTimezones sets the ListSupportedTimezones field's value. +func (s *DescribeDBEngineVersionsInput) SetListSupportedTimezones(v bool) *DescribeDBEngineVersionsInput { + s.ListSupportedTimezones = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBEngineVersionsInput) SetMarker(v string) *DescribeDBEngineVersionsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBEngineVersionsInput) SetMaxRecords(v int64) *DescribeDBEngineVersionsInput { + s.MaxRecords = &v + return s +} + +// Contains the result of a successful invocation of the DescribeDBEngineVersions +// action. +type DescribeDBEngineVersionsOutput struct { + _ struct{} `type:"structure"` + + // A list of DBEngineVersion elements. + DBEngineVersions []*DBEngineVersion `locationNameList:"DBEngineVersion" type:"list"` + + // An optional pagination token provided by a previous request. If this parameter + // is specified, the response includes only records beyond the marker, up to + // the value specified by MaxRecords. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBEngineVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBEngineVersionsOutput) GoString() string { + return s.String() +} + +// SetDBEngineVersions sets the DBEngineVersions field's value. +func (s *DescribeDBEngineVersionsOutput) SetDBEngineVersions(v []*DBEngineVersion) *DescribeDBEngineVersionsOutput { + s.DBEngineVersions = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBEngineVersionsOutput) SetMarker(v string) *DescribeDBEngineVersionsOutput { + s.Marker = &v + return s +} + +// Parameter input for DescribeDBInstanceAutomatedBackups. +type DescribeDBInstanceAutomatedBackupsInput struct { + _ struct{} `type:"structure"` + + // (Optional) The user-supplied instance identifier. If this parameter is specified, + // it must match the identifier of an existing DB instance. It returns information + // from the specific DB instance' automated backup. This parameter isn't case-sensitive. + DBInstanceIdentifier *string `type:"string"` + + // The resource ID of the DB instance that is the source of the automated backup. + // This parameter isn't case-sensitive. + DbiResourceId *string `type:"string"` + + // A filter that specifies which resources to return based on status. + // + // Supported filters are the following: + // + // * status + // + // active - automated backups for current instances + // + // retained - automated backups for deleted instances + // + // creating - automated backups that are waiting for the first automated snapshot + // to be available + // + // * db-instance-id - Accepts DB instance identifiers and Amazon Resource + // Names (ARNs) for DB instances. The results list includes only information + // about the DB instance automated backupss identified by these ARNs. + // + // * dbi-resource-id - Accepts DB instance resource identifiers and DB Amazon + // Resource Names (ARNs) for DB instances. The results list includes only + // information about the DB instance resources identified by these ARNs. + // + // Returns all resources by default. The status for each resource is specified + // in the response. 
+ Filters []*Filter `locationNameList:"Filter" type:"list"` + + // The pagination token provided in the previous request. If this parameter + // is specified the response includes only records beyond the marker, up to + // MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so that the remaining results can be retrieved. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeDBInstanceAutomatedBackupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBInstanceAutomatedBackupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBInstanceAutomatedBackupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBInstanceAutomatedBackupsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBInstanceIdentifier sets the DBInstanceIdentifier field's value. +func (s *DescribeDBInstanceAutomatedBackupsInput) SetDBInstanceIdentifier(v string) *DescribeDBInstanceAutomatedBackupsInput { + s.DBInstanceIdentifier = &v return s } -// SetEngineVersion sets the EngineVersion field's value. -func (s *DescribeDBEngineVersionsInput) SetEngineVersion(v string) *DescribeDBEngineVersionsInput { - s.EngineVersion = &v +// SetDbiResourceId sets the DbiResourceId field's value. +func (s *DescribeDBInstanceAutomatedBackupsInput) SetDbiResourceId(v string) *DescribeDBInstanceAutomatedBackupsInput { + s.DbiResourceId = &v return s } // SetFilters sets the Filters field's value. -func (s *DescribeDBEngineVersionsInput) SetFilters(v []*Filter) *DescribeDBEngineVersionsInput { +func (s *DescribeDBInstanceAutomatedBackupsInput) SetFilters(v []*Filter) *DescribeDBInstanceAutomatedBackupsInput { s.Filters = v return s } -// SetListSupportedCharacterSets sets the ListSupportedCharacterSets field's value. -func (s *DescribeDBEngineVersionsInput) SetListSupportedCharacterSets(v bool) *DescribeDBEngineVersionsInput { - s.ListSupportedCharacterSets = &v - return s -} - -// SetListSupportedTimezones sets the ListSupportedTimezones field's value. -func (s *DescribeDBEngineVersionsInput) SetListSupportedTimezones(v bool) *DescribeDBEngineVersionsInput { - s.ListSupportedTimezones = &v - return s -} - // SetMarker sets the Marker field's value. -func (s *DescribeDBEngineVersionsInput) SetMarker(v string) *DescribeDBEngineVersionsInput { +func (s *DescribeDBInstanceAutomatedBackupsInput) SetMarker(v string) *DescribeDBInstanceAutomatedBackupsInput { s.Marker = &v return s } // SetMaxRecords sets the MaxRecords field's value. 
-func (s *DescribeDBEngineVersionsInput) SetMaxRecords(v int64) *DescribeDBEngineVersionsInput { +func (s *DescribeDBInstanceAutomatedBackupsInput) SetMaxRecords(v int64) *DescribeDBInstanceAutomatedBackupsInput { s.MaxRecords = &v return s } -// Contains the result of a successful invocation of the DescribeDBEngineVersions +// Contains the result of a successful invocation of the DescribeDBInstanceAutomatedBackups // action. -type DescribeDBEngineVersionsOutput struct { +type DescribeDBInstanceAutomatedBackupsOutput struct { _ struct{} `type:"structure"` - // A list of DBEngineVersion elements. - DBEngineVersions []*DBEngineVersion `locationNameList:"DBEngineVersion" type:"list"` + // A list of DBInstanceAutomatedBackup instances. + DBInstanceAutomatedBackups []*DBInstanceAutomatedBackup `locationNameList:"DBInstanceAutomatedBackup" type:"list"` // An optional pagination token provided by a previous request. If this parameter // is specified, the response includes only records beyond the marker, up to - // the value specified by MaxRecords. + // the value specified by MaxRecords . Marker *string `type:"string"` } // String returns the string representation -func (s DescribeDBEngineVersionsOutput) String() string { +func (s DescribeDBInstanceAutomatedBackupsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeDBEngineVersionsOutput) GoString() string { +func (s DescribeDBInstanceAutomatedBackupsOutput) GoString() string { return s.String() } -// SetDBEngineVersions sets the DBEngineVersions field's value. -func (s *DescribeDBEngineVersionsOutput) SetDBEngineVersions(v []*DBEngineVersion) *DescribeDBEngineVersionsOutput { - s.DBEngineVersions = v +// SetDBInstanceAutomatedBackups sets the DBInstanceAutomatedBackups field's value. +func (s *DescribeDBInstanceAutomatedBackupsOutput) SetDBInstanceAutomatedBackups(v []*DBInstanceAutomatedBackup) *DescribeDBInstanceAutomatedBackupsOutput { + s.DBInstanceAutomatedBackups = v return s } // SetMarker sets the Marker field's value. -func (s *DescribeDBEngineVersionsOutput) SetMarker(v string) *DescribeDBEngineVersionsOutput { +func (s *DescribeDBInstanceAutomatedBackupsOutput) SetMarker(v string) *DescribeDBInstanceAutomatedBackupsOutput { s.Marker = &v return s } @@ -18663,6 +21602,9 @@ type DescribeDBSnapshotsInput struct { // must also be specified. DBSnapshotIdentifier *string `type:"string"` + // A specific DB resource ID to describe. + DbiResourceId *string `type:"string"` + // This parameter is not currently supported. Filters []*Filter `locationNameList:"Filter" type:"list"` @@ -18763,6 +21705,12 @@ func (s *DescribeDBSnapshotsInput) SetDBSnapshotIdentifier(v string) *DescribeDB return s } +// SetDbiResourceId sets the DbiResourceId field's value. +func (s *DescribeDBSnapshotsInput) SetDbiResourceId(v string) *DescribeDBSnapshotsInput { + s.DbiResourceId = &v + return s +} + // SetFilters sets the Filters field's value. func (s *DescribeDBSnapshotsInput) SetFilters(v []*Filter) *DescribeDBSnapshotsInput { s.Filters = v @@ -19065,7 +22013,7 @@ type DescribeEngineDefaultParametersInput struct { // DBParameterGroupFamily is a required field DBParameterGroupFamily *string `type:"string" required:"true"` - // Not currently supported. + // This parameter is not currently supported. 
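// A minimal, illustrative usage sketch (not part of the generated SDK): listing the retained
// automated backups of deleted instances with the DescribeDBInstanceAutomatedBackups shapes
// above, then removing each one by its DbiResourceId via DeleteDBInstanceAutomatedBackup.
// Assumes imports of fmt, github.com/aws/aws-sdk-go/aws and
// github.com/aws/aws-sdk-go/service/rds.
func exampleCleanUpRetainedBackups(client *rds.RDS) error {
	out, err := client.DescribeDBInstanceAutomatedBackups(&rds.DescribeDBInstanceAutomatedBackupsInput{
		Filters: []*rds.Filter{{
			Name:   aws.String("status"),
			Values: []*string{aws.String("retained")}, // backups left over from deleted instances
		}},
	})
	if err != nil {
		return err
	}
	for _, backup := range out.DBInstanceAutomatedBackups {
		fmt.Println("deleting retained backup for",
			aws.StringValue(backup.DBInstanceIdentifier),
			"resource", aws.StringValue(backup.DbiResourceId))
		if _, err := client.DeleteDBInstanceAutomatedBackup(&rds.DeleteDBInstanceAutomatedBackupInput{
			DbiResourceId: backup.DbiResourceId,
		}); err != nil {
			return err
		}
	}
	return nil
}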
Filters []*Filter `locationNameList:"Filter" type:"list"` // An optional pagination token provided by a previous DescribeEngineDefaultParameters @@ -19368,7 +22316,7 @@ type DescribeEventsInput struct { // page. (http://en.wikipedia.org/wiki/ISO_8601) // // Example: 2009-07-08T18:00Z - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `type:"timestamp"` // A list of event categories that trigger notifications for a event notification // subscription. @@ -19409,7 +22357,7 @@ type DescribeEventsInput struct { // // * If the source type is DBSnapshot, a DBSnapshotIdentifier must be supplied. // - // * Cannot end with a hyphen or contain two consecutive hyphens. + // * Can't end with a hyphen or contain two consecutive hyphens. SourceIdentifier *string `type:"string"` // The event source to retrieve events for. If no value is specified, all events @@ -19421,7 +22369,7 @@ type DescribeEventsInput struct { // page. (http://en.wikipedia.org/wiki/ISO_8601) // // Example: 2009-07-08T18:00Z - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -19699,7 +22647,7 @@ type DescribeOptionGroupsInput struct { // Constraints: Minimum 20, maximum 100. MaxRecords *int64 `type:"integer"` - // The name of the option group to describe. Cannot be supplied together with + // The name of the option group to describe. Can't be supplied together with // EngineName or MajorEngineVersion. OptionGroupName *string `type:"string"` } @@ -20273,7 +23221,9 @@ type DescribeReservedDBInstancesOfferingsInput struct { OfferingType *string `type:"string"` // Product description filter value. Specify this parameter to show only the - // available offerings matching the specified product description. + // available offerings that contain the specified product description. + // + // The results show offerings that partially match the filter value. ProductDescription *string `type:"string"` // The offering identifier filter value. Specify this parameter to show only @@ -20907,13 +23857,18 @@ func (s *EC2SecurityGroup) SetStatus(v string) *EC2SecurityGroup { return s } -// This data type is used as a response element in the following actions: +// This data type represents the information you need to connect to an Amazon +// RDS DB instance. This data type is used as a response element in the following +// actions: // // * CreateDBInstance // // * DescribeDBInstances // // * DeleteDBInstance +// +// For the data structure that represents Amazon Aurora DB cluster endpoints, +// see DBClusterEndpoint. type Endpoint struct { _ struct{} `type:"structure"` @@ -21006,7 +23961,7 @@ type Event struct { _ struct{} `type:"structure"` // Specifies the date and time of the event. - Date *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Date *time.Time `type:"timestamp"` // Specifies the category for the event. EventCategories []*string `locationNameList:"EventCategory" type:"list"` @@ -21247,76 +24202,319 @@ func (s FailoverDBClusterInput) GoString() string { return s.String() } -// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. -func (s *FailoverDBClusterInput) SetDBClusterIdentifier(v string) *FailoverDBClusterInput { - s.DBClusterIdentifier = &v - return s -} - -// SetTargetDBInstanceIdentifier sets the TargetDBInstanceIdentifier field's value. 
-func (s *FailoverDBClusterInput) SetTargetDBInstanceIdentifier(v string) *FailoverDBClusterInput { - s.TargetDBInstanceIdentifier = &v +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *FailoverDBClusterInput) SetDBClusterIdentifier(v string) *FailoverDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +// SetTargetDBInstanceIdentifier sets the TargetDBInstanceIdentifier field's value. +func (s *FailoverDBClusterInput) SetTargetDBInstanceIdentifier(v string) *FailoverDBClusterInput { + s.TargetDBInstanceIdentifier = &v + return s +} + +type FailoverDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Aurora DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. + DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s FailoverDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FailoverDBClusterOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *FailoverDBClusterOutput) SetDBCluster(v *DBCluster) *FailoverDBClusterOutput { + s.DBCluster = v + return s +} + +// A filter name and value pair that is used to return a more specific list +// of results from a describe operation. Filters can be used to match a set +// of resources by specific criteria, such as IDs. The filters supported by +// a describe operation are documented with the describe operation. +// +// Currently, wildcards are not supported in filters. +// +// The following actions can be filtered: +// +// * DescribeDBClusterBacktracks +// +// * DescribeDBClusterEndpoints +// +// * DescribeDBClusters +// +// * DescribeDBInstances +// +// * DescribePendingMaintenanceActions +type Filter struct { + _ struct{} `type:"structure"` + + // The name of the filter. Filter names are case-sensitive. + // + // Name is a required field + Name *string `type:"string" required:"true"` + + // One or more filter values. Filter values are case-sensitive. + // + // Values is a required field + Values []*string `locationNameList:"Value" type:"list" required:"true"` +} + +// String returns the string representation +func (s Filter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Filter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Filter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Filter"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *Filter) SetName(v string) *Filter { + s.Name = &v + return s +} + +// SetValues sets the Values field's value. +func (s *Filter) SetValues(v []*string) *Filter { + s.Values = v + return s +} + +// This data type is used as a response element in the DescribeDBSecurityGroups +// action. +type IPRange struct { + _ struct{} `type:"structure"` + + // Specifies the IP range. + CIDRIP *string `type:"string"` + + // Specifies the status of the IP range. Status can be "authorizing", "authorized", + // "revoking", and "revoked". 
+ Status *string `type:"string"` +} + +// String returns the string representation +func (s IPRange) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s IPRange) GoString() string { + return s.String() +} + +// SetCIDRIP sets the CIDRIP field's value. +func (s *IPRange) SetCIDRIP(v string) *IPRange { + s.CIDRIP = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *IPRange) SetStatus(v string) *IPRange { + s.Status = &v + return s +} + +type ListTagsForResourceInput struct { + _ struct{} `type:"structure"` + + // This parameter is not currently supported. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // The Amazon RDS resource with tags to be listed. This value is an Amazon Resource + // Name (ARN). For information about creating an ARN, see Constructing an ARN + // for Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing) + // in the Amazon RDS User Guide. + // + // ResourceName is a required field + ResourceName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ListTagsForResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsForResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} + if s.ResourceName == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceName")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *ListTagsForResourceInput) SetFilters(v []*Filter) *ListTagsForResourceInput { + s.Filters = v + return s +} + +// SetResourceName sets the ResourceName field's value. +func (s *ListTagsForResourceInput) SetResourceName(v string) *ListTagsForResourceInput { + s.ResourceName = &v + return s +} + +type ListTagsForResourceOutput struct { + _ struct{} `type:"structure"` + + // List of tags returned by the ListTagsForResource operation. + TagList []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s ListTagsForResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsForResourceOutput) GoString() string { + return s.String() +} + +// SetTagList sets the TagList field's value. +func (s *ListTagsForResourceOutput) SetTagList(v []*Tag) *ListTagsForResourceOutput { + s.TagList = v return s } -type FailoverDBClusterOutput struct { +// The minimum DB engine version required for each corresponding allowed value +// for an option setting. +type MinimumEngineVersionPerAllowedValue struct { _ struct{} `type:"structure"` - // Contains the details of an Amazon RDS DB cluster. - // - // This data type is used as a response element in the DescribeDBClusters action. - DBCluster *DBCluster `type:"structure"` + // The allowed value for an option setting. + AllowedValue *string `type:"string"` + + // The minimum DB engine version required for the allowed value. 
+ MinimumEngineVersion *string `type:"string"` } // String returns the string representation -func (s FailoverDBClusterOutput) String() string { +func (s MinimumEngineVersionPerAllowedValue) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s FailoverDBClusterOutput) GoString() string { +func (s MinimumEngineVersionPerAllowedValue) GoString() string { return s.String() } -// SetDBCluster sets the DBCluster field's value. -func (s *FailoverDBClusterOutput) SetDBCluster(v *DBCluster) *FailoverDBClusterOutput { - s.DBCluster = v +// SetAllowedValue sets the AllowedValue field's value. +func (s *MinimumEngineVersionPerAllowedValue) SetAllowedValue(v string) *MinimumEngineVersionPerAllowedValue { + s.AllowedValue = &v return s } -// This type is not currently supported. -type Filter struct { +// SetMinimumEngineVersion sets the MinimumEngineVersion field's value. +func (s *MinimumEngineVersionPerAllowedValue) SetMinimumEngineVersion(v string) *MinimumEngineVersionPerAllowedValue { + s.MinimumEngineVersion = &v + return s +} + +type ModifyCurrentDBClusterCapacityInput struct { _ struct{} `type:"structure"` - // This parameter is not currently supported. + // The DB cluster capacity. // - // Name is a required field - Name *string `type:"string" required:"true"` + // Constraints: + // + // * Value must be 2, 4, 8, 16, 32, 64, 128, or 256. + Capacity *int64 `type:"integer"` - // This parameter is not currently supported. + // The DB cluster identifier for the cluster being modified. This parameter + // is not case-sensitive. // - // Values is a required field - Values []*string `locationNameList:"Value" type:"list" required:"true"` + // Constraints: + // + // * Must match the identifier of an existing DB cluster. + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The amount of time, in seconds, that Aurora Serverless tries to find a scaling + // point to perform seamless scaling before enforcing the timeout action. The + // default is 300. + // + // * Value must be from 10 through 600. + SecondsBeforeTimeout *int64 `type:"integer"` + + // The action to take when the timeout is reached, either ForceApplyCapacityChange + // or RollbackCapacityChange. + // + // ForceApplyCapacityChange, the default, sets the capacity to the specified + // value as soon as possible. + // + // RollbackCapacityChange ignores the capacity change if a scaling point is + // not found in the timeout period. + TimeoutAction *string `type:"string"` } // String returns the string representation -func (s Filter) String() string { +func (s ModifyCurrentDBClusterCapacityInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Filter) GoString() string { +func (s ModifyCurrentDBClusterCapacityInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *Filter) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Filter"} - if s.Name == nil { - invalidParams.Add(request.NewErrParamRequired("Name")) - } - if s.Values == nil { - invalidParams.Add(request.NewErrParamRequired("Values")) +func (s *ModifyCurrentDBClusterCapacityInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyCurrentDBClusterCapacityInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) } if invalidParams.Len() > 0 { @@ -21325,92 +24523,128 @@ func (s *Filter) Validate() error { return nil } -// SetName sets the Name field's value. -func (s *Filter) SetName(v string) *Filter { - s.Name = &v +// SetCapacity sets the Capacity field's value. +func (s *ModifyCurrentDBClusterCapacityInput) SetCapacity(v int64) *ModifyCurrentDBClusterCapacityInput { + s.Capacity = &v return s } -// SetValues sets the Values field's value. -func (s *Filter) SetValues(v []*string) *Filter { - s.Values = v +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *ModifyCurrentDBClusterCapacityInput) SetDBClusterIdentifier(v string) *ModifyCurrentDBClusterCapacityInput { + s.DBClusterIdentifier = &v return s } -// This data type is used as a response element in the DescribeDBSecurityGroups -// action. -type IPRange struct { +// SetSecondsBeforeTimeout sets the SecondsBeforeTimeout field's value. +func (s *ModifyCurrentDBClusterCapacityInput) SetSecondsBeforeTimeout(v int64) *ModifyCurrentDBClusterCapacityInput { + s.SecondsBeforeTimeout = &v + return s +} + +// SetTimeoutAction sets the TimeoutAction field's value. +func (s *ModifyCurrentDBClusterCapacityInput) SetTimeoutAction(v string) *ModifyCurrentDBClusterCapacityInput { + s.TimeoutAction = &v + return s +} + +type ModifyCurrentDBClusterCapacityOutput struct { _ struct{} `type:"structure"` - // Specifies the IP range. - CIDRIP *string `type:"string"` + // The current capacity of the DB cluster. + CurrentCapacity *int64 `type:"integer"` - // Specifies the status of the IP range. Status can be "authorizing", "authorized", - // "revoking", and "revoked". - Status *string `type:"string"` + // A user-supplied DB cluster identifier. This identifier is the unique key + // that identifies a DB cluster. + DBClusterIdentifier *string `type:"string"` + + // A value that specifies the capacity that the DB cluster scales to next. + PendingCapacity *int64 `type:"integer"` + + // The number of seconds before a call to ModifyCurrentDBClusterCapacity times + // out. + SecondsBeforeTimeout *int64 `type:"integer"` + + // The timeout action of a call to ModifyCurrentDBClusterCapacity, either ForceApplyCapacityChange + // or RollbackCapacityChange. + TimeoutAction *string `type:"string"` } // String returns the string representation -func (s IPRange) String() string { +func (s ModifyCurrentDBClusterCapacityOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s IPRange) GoString() string { +func (s ModifyCurrentDBClusterCapacityOutput) GoString() string { return s.String() } -// SetCIDRIP sets the CIDRIP field's value. -func (s *IPRange) SetCIDRIP(v string) *IPRange { - s.CIDRIP = &v +// SetCurrentCapacity sets the CurrentCapacity field's value. +func (s *ModifyCurrentDBClusterCapacityOutput) SetCurrentCapacity(v int64) *ModifyCurrentDBClusterCapacityOutput { + s.CurrentCapacity = &v return s } -// SetStatus sets the Status field's value. 
-func (s *IPRange) SetStatus(v string) *IPRange { - s.Status = &v +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *ModifyCurrentDBClusterCapacityOutput) SetDBClusterIdentifier(v string) *ModifyCurrentDBClusterCapacityOutput { + s.DBClusterIdentifier = &v return s } -type ListTagsForResourceInput struct { - _ struct{} `type:"structure"` +// SetPendingCapacity sets the PendingCapacity field's value. +func (s *ModifyCurrentDBClusterCapacityOutput) SetPendingCapacity(v int64) *ModifyCurrentDBClusterCapacityOutput { + s.PendingCapacity = &v + return s +} - // This parameter is not currently supported. - Filters []*Filter `locationNameList:"Filter" type:"list"` +// SetSecondsBeforeTimeout sets the SecondsBeforeTimeout field's value. +func (s *ModifyCurrentDBClusterCapacityOutput) SetSecondsBeforeTimeout(v int64) *ModifyCurrentDBClusterCapacityOutput { + s.SecondsBeforeTimeout = &v + return s +} - // The Amazon RDS resource with tags to be listed. This value is an Amazon Resource - // Name (ARN). For information about creating an ARN, see Constructing an RDS - // Amazon Resource Name (ARN) (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing). +// SetTimeoutAction sets the TimeoutAction field's value. +func (s *ModifyCurrentDBClusterCapacityOutput) SetTimeoutAction(v string) *ModifyCurrentDBClusterCapacityOutput { + s.TimeoutAction = &v + return s +} + +type ModifyDBClusterEndpointInput struct { + _ struct{} `type:"structure"` + + // The identifier of the endpoint to modify. This parameter is stored as a lowercase + // string. // - // ResourceName is a required field - ResourceName *string `type:"string" required:"true"` + // DBClusterEndpointIdentifier is a required field + DBClusterEndpointIdentifier *string `type:"string" required:"true"` + + // The type of the endpoint. One of: READER, ANY. + EndpointType *string `type:"string"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` } // String returns the string representation -func (s ListTagsForResourceInput) String() string { +func (s ModifyDBClusterEndpointInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTagsForResourceInput) GoString() string { +func (s ModifyDBClusterEndpointInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListTagsForResourceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"} - if s.ResourceName == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceName")) - } - if s.Filters != nil { - for i, v := range s.Filters { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) - } - } +func (s *ModifyDBClusterEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyDBClusterEndpointInput"} + if s.DBClusterEndpointIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterEndpointIdentifier")) } if invalidParams.Len() > 0 { @@ -21419,38 +24653,151 @@ func (s *ListTagsForResourceInput) Validate() error { return nil } -// SetFilters sets the Filters field's value. -func (s *ListTagsForResourceInput) SetFilters(v []*Filter) *ListTagsForResourceInput { - s.Filters = v +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *ModifyDBClusterEndpointInput) SetDBClusterEndpointIdentifier(v string) *ModifyDBClusterEndpointInput { + s.DBClusterEndpointIdentifier = &v return s } -// SetResourceName sets the ResourceName field's value. -func (s *ListTagsForResourceInput) SetResourceName(v string) *ListTagsForResourceInput { - s.ResourceName = &v +// SetEndpointType sets the EndpointType field's value. +func (s *ModifyDBClusterEndpointInput) SetEndpointType(v string) *ModifyDBClusterEndpointInput { + s.EndpointType = &v return s } -type ListTagsForResourceOutput struct { +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *ModifyDBClusterEndpointInput) SetExcludedMembers(v []*string) *ModifyDBClusterEndpointInput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *ModifyDBClusterEndpointInput) SetStaticMembers(v []*string) *ModifyDBClusterEndpointInput { + s.StaticMembers = v + return s +} + +// This data type represents the information you need to connect to an Amazon +// Aurora DB cluster. This data type is used as a response element in the following +// actions: +// +// * CreateDBClusterEndpoint +// +// * DescribeDBClusterEndpoints +// +// * ModifyDBClusterEndpoint +// +// * DeleteDBClusterEndpoint +// +// For the data structure that represents Amazon RDS DB instance endpoints, +// see Endpoint. +type ModifyDBClusterEndpointOutput struct { _ struct{} `type:"structure"` - // List of tags returned by the ListTagsForResource operation. - TagList []*Tag `locationNameList:"Tag" type:"list"` + // The type associated with a custom endpoint. One of: READER, ANY. + CustomEndpointType *string `type:"string"` + + // The Amazon Resource Name (ARN) for the endpoint. + DBClusterEndpointArn *string `type:"string"` + + // The identifier associated with the endpoint. This parameter is stored as + // a lowercase string. + DBClusterEndpointIdentifier *string `type:"string"` + + // A unique system-generated identifier for an endpoint. It remains the same + // for the whole life of the endpoint. + DBClusterEndpointResourceIdentifier *string `type:"string"` + + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. + DBClusterIdentifier *string `type:"string"` + + // The DNS address of the endpoint. + Endpoint *string `type:"string"` + + // The type of the endpoint. One of: READER, WRITER, CUSTOM. 
+ EndpointType *string `type:"string"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` + + // The current status of the endpoint. One of: creating, available, deleting, + // modifying. + Status *string `type:"string"` } // String returns the string representation -func (s ListTagsForResourceOutput) String() string { +func (s ModifyDBClusterEndpointOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTagsForResourceOutput) GoString() string { +func (s ModifyDBClusterEndpointOutput) GoString() string { return s.String() } -// SetTagList sets the TagList field's value. -func (s *ListTagsForResourceOutput) SetTagList(v []*Tag) *ListTagsForResourceOutput { - s.TagList = v +// SetCustomEndpointType sets the CustomEndpointType field's value. +func (s *ModifyDBClusterEndpointOutput) SetCustomEndpointType(v string) *ModifyDBClusterEndpointOutput { + s.CustomEndpointType = &v + return s +} + +// SetDBClusterEndpointArn sets the DBClusterEndpointArn field's value. +func (s *ModifyDBClusterEndpointOutput) SetDBClusterEndpointArn(v string) *ModifyDBClusterEndpointOutput { + s.DBClusterEndpointArn = &v + return s +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *ModifyDBClusterEndpointOutput) SetDBClusterEndpointIdentifier(v string) *ModifyDBClusterEndpointOutput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterEndpointResourceIdentifier sets the DBClusterEndpointResourceIdentifier field's value. +func (s *ModifyDBClusterEndpointOutput) SetDBClusterEndpointResourceIdentifier(v string) *ModifyDBClusterEndpointOutput { + s.DBClusterEndpointResourceIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *ModifyDBClusterEndpointOutput) SetDBClusterIdentifier(v string) *ModifyDBClusterEndpointOutput { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *ModifyDBClusterEndpointOutput) SetEndpoint(v string) *ModifyDBClusterEndpointOutput { + s.Endpoint = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *ModifyDBClusterEndpointOutput) SetEndpointType(v string) *ModifyDBClusterEndpointOutput { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *ModifyDBClusterEndpointOutput) SetExcludedMembers(v []*string) *ModifyDBClusterEndpointOutput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *ModifyDBClusterEndpointOutput) SetStaticMembers(v []*string) *ModifyDBClusterEndpointOutput { + s.StaticMembers = v + return s +} + +// SetStatus sets the Status field's value. +func (s *ModifyDBClusterEndpointOutput) SetStatus(v string) *ModifyDBClusterEndpointOutput { + s.Status = &v return s } @@ -21463,16 +24810,27 @@ type ModifyDBClusterInput struct { // is set to false, changes to the DB cluster are applied during the next maintenance // window. // - // The ApplyImmediately parameter only affects the NewDBClusterIdentifier and - // MasterUserPassword values. 
If you set the ApplyImmediately parameter value - // to false, then changes to the NewDBClusterIdentifier and MasterUserPassword - // values are applied during the next maintenance window. All other changes - // are applied immediately, regardless of the value of the ApplyImmediately - // parameter. + // The ApplyImmediately parameter only affects the EnableIAMDatabaseAuthentication, + // MasterUserPassword, and NewDBClusterIdentifier values. If you set the ApplyImmediately + // parameter value to false, then changes to the EnableIAMDatabaseAuthentication, + // MasterUserPassword, and NewDBClusterIdentifier values are applied during + // the next maintenance window. All other changes are applied immediately, regardless + // of the value of the ApplyImmediately parameter. // // Default: false ApplyImmediately *bool `type:"boolean"` + // The target backtrack window, in seconds. To disable backtracking, set this + // value to 0. + // + // Default: 0 + // + // Constraints: + // + // * If specified, this value must be set to a number from 0 to 259,200 (72 + // hours). + BacktrackWindow *int64 `type:"long"` + // The number of days for which automated backups are retained. You must specify // a minimum value of 1. // @@ -21483,6 +24841,10 @@ type ModifyDBClusterInput struct { // * Must be a value from 1 to 35 BackupRetentionPeriod *int64 `type:"integer"` + // The configuration setting for the log types to be enabled for export to CloudWatch + // Logs for a specific DB cluster. + CloudwatchLogsExportConfiguration *CloudwatchLogsExportConfiguration `type:"structure"` + // The DB cluster identifier for the cluster being modified. This parameter // is not case-sensitive. // @@ -21496,12 +24858,23 @@ type ModifyDBClusterInput struct { // The name of the DB cluster parameter group to use for the DB cluster. DBClusterParameterGroupName *string `type:"string"` + // Indicates if the DB cluster has deletion protection enabled. The database + // can't be deleted when this value is set to true. + DeletionProtection *bool `type:"boolean"` + // True to enable mapping of AWS Identity and Access Management (IAM) accounts // to database accounts, and otherwise false. // // Default: false EnableIAMDatabaseAuthentication *bool `type:"boolean"` + // The version number of the database engine to which you want to upgrade. Changing + // this parameter results in an outage. The change is applied during the next + // maintenance window unless the ApplyImmediately parameter is set to true. + // + // For a list of valid engine versions, see CreateDBCluster, or call DescribeDBEngineVersions. + EngineVersion *string `type:"string"` + // The new password for the master database user. This password can contain // any printable ASCII character except "/", """, or "@". // @@ -21517,7 +24890,7 @@ type ModifyDBClusterInput struct { // // * The first character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: my-cluster2 NewDBClusterIdentifier *string `type:"string"` @@ -21546,8 +24919,8 @@ type ModifyDBClusterInput struct { // // The default is a 30-minute window selected at random from an 8-hour block // of time for each AWS Region. To see the time blocks available, see Adjusting - // the Preferred Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) - // in the Amazon RDS User Guide. 
+ // the Preferred DB Cluster Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow.Aurora) + // in the Amazon Aurora User Guide. // // Constraints: // @@ -21567,15 +24940,19 @@ type ModifyDBClusterInput struct { // // The default is a 30-minute window selected at random from an 8-hour block // of time for each AWS Region, occurring on a random day of the week. To see - // the time blocks available, see Adjusting the Preferred Maintenance Window - // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) - // in the Amazon RDS User Guide. + // the time blocks available, see Adjusting the Preferred DB Cluster Maintenance + // Window (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow.Aurora) + // in the Amazon Aurora User Guide. // // Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. // // Constraints: Minimum 30-minute window. PreferredMaintenanceWindow *string `type:"string"` + // The scaling properties of the DB cluster. You can only modify scaling properties + // for DB clusters in serverless DB engine mode. + ScalingConfiguration *ScalingConfiguration `type:"structure"` + // A list of VPC security groups that the DB cluster will belong to. VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` } @@ -21609,12 +24986,24 @@ func (s *ModifyDBClusterInput) SetApplyImmediately(v bool) *ModifyDBClusterInput return s } +// SetBacktrackWindow sets the BacktrackWindow field's value. +func (s *ModifyDBClusterInput) SetBacktrackWindow(v int64) *ModifyDBClusterInput { + s.BacktrackWindow = &v + return s +} + // SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. func (s *ModifyDBClusterInput) SetBackupRetentionPeriod(v int64) *ModifyDBClusterInput { s.BackupRetentionPeriod = &v return s } +// SetCloudwatchLogsExportConfiguration sets the CloudwatchLogsExportConfiguration field's value. +func (s *ModifyDBClusterInput) SetCloudwatchLogsExportConfiguration(v *CloudwatchLogsExportConfiguration) *ModifyDBClusterInput { + s.CloudwatchLogsExportConfiguration = v + return s +} + // SetDBClusterIdentifier sets the DBClusterIdentifier field's value. func (s *ModifyDBClusterInput) SetDBClusterIdentifier(v string) *ModifyDBClusterInput { s.DBClusterIdentifier = &v @@ -21627,12 +25016,24 @@ func (s *ModifyDBClusterInput) SetDBClusterParameterGroupName(v string) *ModifyD return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *ModifyDBClusterInput) SetDeletionProtection(v bool) *ModifyDBClusterInput { + s.DeletionProtection = &v + return s +} + // SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. func (s *ModifyDBClusterInput) SetEnableIAMDatabaseAuthentication(v bool) *ModifyDBClusterInput { s.EnableIAMDatabaseAuthentication = &v return s } +// SetEngineVersion sets the EngineVersion field's value. +func (s *ModifyDBClusterInput) SetEngineVersion(v string) *ModifyDBClusterInput { + s.EngineVersion = &v + return s +} + // SetMasterUserPassword sets the MasterUserPassword field's value. func (s *ModifyDBClusterInput) SetMasterUserPassword(v string) *ModifyDBClusterInput { s.MasterUserPassword = &v @@ -21669,6 +25070,12 @@ func (s *ModifyDBClusterInput) SetPreferredMaintenanceWindow(v string) *ModifyDB return s } +// SetScalingConfiguration sets the ScalingConfiguration field's value. 
+func (s *ModifyDBClusterInput) SetScalingConfiguration(v *ScalingConfiguration) *ModifyDBClusterInput { + s.ScalingConfiguration = v + return s +} + // SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. func (s *ModifyDBClusterInput) SetVpcSecurityGroupIds(v []*string) *ModifyDBClusterInput { s.VpcSecurityGroupIds = v @@ -21678,9 +25085,10 @@ func (s *ModifyDBClusterInput) SetVpcSecurityGroupIds(v []*string) *ModifyDBClus type ModifyDBClusterOutput struct { _ struct{} `type:"structure"` - // Contains the details of an Amazon RDS DB cluster. + // Contains the details of an Amazon Aurora DB cluster. // - // This data type is used as a response element in the DescribeDBClusters action. + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. DBCluster *DBCluster `type:"structure"` } @@ -21899,8 +25307,9 @@ type ModifyDBInstanceInput struct { // and are applied on the next call to RebootDBInstance, or the next failure // reboot. Review the table of parameters in Modifying a DB Instance and Using // the Apply Immediately Parameter (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) - // to see the impact that setting ApplyImmediately to true or false has for - // each modified parameter and to determine when the changes are applied. + // in the Amazon RDS User Guide. to see the impact that setting ApplyImmediately + // to true or false has for each modified parameter and to determine when the + // changes are applied. // // Default: false ApplyImmediately *bool `type:"boolean"` @@ -21936,19 +25345,19 @@ type ModifyDBInstanceInput struct { // * Must be a value from 0 to 35 // // * Can be specified for a MySQL Read Replica only if the source is running - // MySQL 5.6 + // MySQL 5.6 or later // // * Can be specified for a PostgreSQL Read Replica only if the source is // running PostgreSQL 9.3.5 // - // * Cannot be set to 0 if the DB instance is a source to Read Replicas + // * Can't be set to 0 if the DB instance is a source to Read Replicas BackupRetentionPeriod *int64 `type:"integer"` // Indicates the certificate that needs to be associated with the instance. CACertificateIdentifier *string `type:"string"` // The configuration setting for the log types to be enabled for export to CloudWatch - // Logs for a specific DB instance or DB cluster. + // Logs for a specific DB instance. CloudwatchLogsExportConfiguration *CloudwatchLogsExportConfiguration `type:"structure"` // True to copy all tags from the DB instance to snapshots of the DB instance, @@ -22050,7 +25459,8 @@ type ModifyDBInstanceInput struct { // The new DB subnet group for the DB instance. You can use this parameter to // move your DB instance to a different VPC. If your DB instance is not in a // VPC, you can also use this parameter to move your DB instance into a VPC. - // For more information, see Updating the VPC for a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.Non-VPC2VPC). + // For more information, see Updating the VPC for a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.Non-VPC2VPC) + // in the Amazon RDS User Guide. // // Changing the subnet group causes an outage during the change. 
The change // is applied during the next maintenance window, unless you specify true for @@ -22061,6 +25471,11 @@ type ModifyDBInstanceInput struct { // Example: mySubnetGroup DBSubnetGroupName *string `type:"string"` + // Indicates if the DB instance has deletion protection enabled. The database + // can't be deleted when this value is set to true. For more information, see + // Deleting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html). + DeletionProtection *bool `type:"boolean"` + // The Active Directory Domain to move the instance to. Specify none to remove // the instance from its current domain. The domain must be created prior to // this operation. Currently only a Microsoft SQL Server instance can be created @@ -22090,6 +25505,9 @@ type ModifyDBInstanceInput struct { EnableIAMDatabaseAuthentication *bool `type:"boolean"` // True to enable Performance Insights for the DB instance, and otherwise false. + // + // For more information, see Using Amazon Performance Insights (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html) + // in the Amazon Relational Database Service User Guide. EnablePerformanceInsights *bool `type:"boolean"` // The version number of the database engine to upgrade to. Changing this parameter @@ -22101,7 +25519,8 @@ type ModifyDBInstanceInput struct { // new engine version must be specified. The new DB parameter group can be the // default for that DB parameter group family. // - // For a list of valid engine versions, see CreateDBInstance. + // For information about valid engine versions, see CreateDBInstance, or call + // DescribeDBEngineVersions. EngineVersion *string `type:"string"` // The new Provisioned IOPS (I/O operations per second) value for the RDS instance. @@ -22191,7 +25610,8 @@ type ModifyDBInstanceInput struct { // The ARN for the IAM role that permits RDS to send enhanced monitoring metrics // to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. // For information on creating a monitoring role, go to To create an IAM role - // for Amazon RDS Enhanced Monitoring (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html#USER_Monitoring.OS.IAMRole). + // for Amazon RDS Enhanced Monitoring (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html#USER_Monitoring.OS.IAMRole) + // in the Amazon RDS User Guide. // // If MonitoringInterval is set to a value other than 0, then you must supply // a MonitoringRoleArn value. @@ -22214,7 +25634,7 @@ type ModifyDBInstanceInput struct { // // * The first character must be a letter. // - // * Cannot end with a hyphen or contain two consecutive hyphens. + // * Can't end with a hyphen or contain two consecutive hyphens. // // Example: mydbinstance NewDBInstanceIdentifier *string `type:"string"` @@ -22237,6 +25657,10 @@ type ModifyDBInstanceInput struct { // KMS key alias for the KMS encryption key. PerformanceInsightsKMSKeyId *string `type:"string"` + // The amount of time, in days, to retain Performance Insights data. Valid values + // are 7 or 731 (2 years). + PerformanceInsightsRetentionPeriod *int64 `type:"integer"` + // The daily time range during which automated backups are created if automated // backups are enabled, as determined by the BackupRetentionPeriod parameter. 
// Changing this parameter doesn't result in an outage and the change is asynchronously @@ -22276,9 +25700,14 @@ type ModifyDBInstanceInput struct { // Constraints: Must be at least 30 minutes PreferredMaintenanceWindow *string `type:"string"` + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + // A value that specifies the order in which an Aurora Replica is promoted to // the primary instance after a failure of the existing primary instance. For - // more information, see Fault Tolerance for an Aurora DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html#Aurora.Managing.FaultTolerance). + // more information, see Fault Tolerance for an Aurora DB Cluster (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html#Aurora.Managing.FaultTolerance) + // in the Amazon Aurora User Guide. // // Default: 1 // @@ -22331,6 +25760,10 @@ type ModifyDBInstanceInput struct { // device. TdeCredentialPassword *string `type:"string"` + // A value that specifies that the DB instance class of the DB instance uses + // its default processor features. + UseDefaultProcessorFeatures *bool `type:"boolean"` + // A list of EC2 VPC security groups to authorize on this DB instance. This // change is asynchronously applied as soon as possible. // @@ -22452,6 +25885,12 @@ func (s *ModifyDBInstanceInput) SetDBSubnetGroupName(v string) *ModifyDBInstance return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *ModifyDBInstanceInput) SetDeletionProtection(v bool) *ModifyDBInstanceInput { + s.DeletionProtection = &v + return s +} + // SetDomain sets the Domain field's value. func (s *ModifyDBInstanceInput) SetDomain(v string) *ModifyDBInstanceInput { s.Domain = &v @@ -22536,6 +25975,12 @@ func (s *ModifyDBInstanceInput) SetPerformanceInsightsKMSKeyId(v string) *Modify return s } +// SetPerformanceInsightsRetentionPeriod sets the PerformanceInsightsRetentionPeriod field's value. +func (s *ModifyDBInstanceInput) SetPerformanceInsightsRetentionPeriod(v int64) *ModifyDBInstanceInput { + s.PerformanceInsightsRetentionPeriod = &v + return s +} + // SetPreferredBackupWindow sets the PreferredBackupWindow field's value. func (s *ModifyDBInstanceInput) SetPreferredBackupWindow(v string) *ModifyDBInstanceInput { s.PreferredBackupWindow = &v @@ -22548,6 +25993,12 @@ func (s *ModifyDBInstanceInput) SetPreferredMaintenanceWindow(v string) *ModifyD return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. +func (s *ModifyDBInstanceInput) SetProcessorFeatures(v []*ProcessorFeature) *ModifyDBInstanceInput { + s.ProcessorFeatures = v + return s +} + // SetPromotionTier sets the PromotionTier field's value. func (s *ModifyDBInstanceInput) SetPromotionTier(v int64) *ModifyDBInstanceInput { s.PromotionTier = &v @@ -22578,6 +26029,12 @@ func (s *ModifyDBInstanceInput) SetTdeCredentialPassword(v string) *ModifyDBInst return s } +// SetUseDefaultProcessorFeatures sets the UseDefaultProcessorFeatures field's value. +func (s *ModifyDBInstanceInput) SetUseDefaultProcessorFeatures(v bool) *ModifyDBInstanceInput { + s.UseDefaultProcessorFeatures = &v + return s +} + // SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. 
func (s *ModifyDBInstanceInput) SetVpcSecurityGroupIds(v []*string) *ModifyDBInstanceInput { s.VpcSecurityGroupIds = v @@ -22819,7 +26276,8 @@ type ModifyDBSnapshotInput struct { // You can specify this parameter when you upgrade an Oracle DB snapshot. The // same option group considerations apply when upgrading a DB snapshot as when // upgrading a DB instance. For more information, see Option Group Considerations - // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Oracle.html#USER_UpgradeDBInstance.Oracle.OGPG.OG). + // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Oracle.html#USER_UpgradeDBInstance.Oracle.OGPG.OG) + // in the Amazon RDS User Guide. OptionGroupName *string `type:"string"` } @@ -23689,6 +27147,14 @@ type OptionGroupOptionSetting struct { // from the default value. IsModifiable *bool `type:"boolean"` + // Boolean value where true indicates that a value must be specified for this + // option setting of the option group option. + IsRequired *bool `type:"boolean"` + + // The minimum DB engine version required for the corresponding allowed value + // for this option setting. + MinimumEngineVersionPerAllowedValue []*MinimumEngineVersionPerAllowedValue `locationNameList:"MinimumEngineVersionPerAllowedValue" type:"list"` + // The description of the option group option. SettingDescription *string `type:"string"` @@ -23730,6 +27196,18 @@ func (s *OptionGroupOptionSetting) SetIsModifiable(v bool) *OptionGroupOptionSet return s } +// SetIsRequired sets the IsRequired field's value. +func (s *OptionGroupOptionSetting) SetIsRequired(v bool) *OptionGroupOptionSetting { + s.IsRequired = &v + return s +} + +// SetMinimumEngineVersionPerAllowedValue sets the MinimumEngineVersionPerAllowedValue field's value. +func (s *OptionGroupOptionSetting) SetMinimumEngineVersionPerAllowedValue(v []*MinimumEngineVersionPerAllowedValue) *OptionGroupOptionSetting { + s.MinimumEngineVersionPerAllowedValue = v + return s +} + // SetSettingDescription sets the SettingDescription field's value. func (s *OptionGroupOptionSetting) SetSettingDescription(v string) *OptionGroupOptionSetting { s.SettingDescription = &v @@ -23886,6 +27364,10 @@ type OrderableDBInstanceOption struct { // A list of Availability Zones for a DB instance. AvailabilityZones []*AvailabilityZone `locationNameList:"AvailabilityZone" type:"list"` + // A list of the available processor features for the DB instance class of a + // DB instance. + AvailableProcessorFeatures []*AvailableProcessorFeature `locationNameList:"AvailableProcessorFeature" type:"list"` + // The DB instance class for a DB instance. DBInstanceClass *string `type:"string"` @@ -23925,6 +27407,9 @@ type OrderableDBInstanceOption struct { // Indicates the storage type for a DB instance. StorageType *string `type:"string"` + // A list of the supported DB engine modes. + SupportedEngineModes []*string `type:"list"` + // Indicates whether a DB instance supports Enhanced Monitoring at intervals // from 1 to 60 seconds. SupportsEnhancedMonitoring *bool `type:"boolean"` @@ -23961,6 +27446,12 @@ func (s *OrderableDBInstanceOption) SetAvailabilityZones(v []*AvailabilityZone) return s } +// SetAvailableProcessorFeatures sets the AvailableProcessorFeatures field's value. +func (s *OrderableDBInstanceOption) SetAvailableProcessorFeatures(v []*AvailableProcessorFeature) *OrderableDBInstanceOption { + s.AvailableProcessorFeatures = v + return s +} + // SetDBInstanceClass sets the DBInstanceClass field's value. 
func (s *OrderableDBInstanceOption) SetDBInstanceClass(v string) *OrderableDBInstanceOption { s.DBInstanceClass = &v @@ -24039,6 +27530,12 @@ func (s *OrderableDBInstanceOption) SetStorageType(v string) *OrderableDBInstanc return s } +// SetSupportedEngineModes sets the SupportedEngineModes field's value. +func (s *OrderableDBInstanceOption) SetSupportedEngineModes(v []*string) *OrderableDBInstanceOption { + s.SupportedEngineModes = v + return s +} + // SetSupportsEnhancedMonitoring sets the SupportsEnhancedMonitoring field's value. func (s *OrderableDBInstanceOption) SetSupportsEnhancedMonitoring(v bool) *OrderableDBInstanceOption { s.SupportsEnhancedMonitoring = &v @@ -24114,6 +27611,9 @@ type Parameter struct { // Indicates the source of the parameter value. Source *string `type:"string"` + + // The valid DB engine modes. + SupportedEngineModes []*string `type:"list"` } // String returns the string representation @@ -24186,6 +27686,12 @@ func (s *Parameter) SetSource(v string) *Parameter { return s } +// SetSupportedEngineModes sets the SupportedEngineModes field's value. +func (s *Parameter) SetSupportedEngineModes(v []*string) *Parameter { + s.SupportedEngineModes = v + return s +} + // A list of the log types whose configuration is still pending. In other words, // these log types are in the process of being activated or deactivated. type PendingCloudwatchLogsExports struct { @@ -24233,14 +27739,14 @@ type PendingMaintenanceAction struct { // action is applied to the resource during its first maintenance window after // this date. If this date is specified, any next-maintenance opt-in requests // are ignored. - AutoAppliedAfterDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + AutoAppliedAfterDate *time.Time `type:"timestamp"` // The effective date when the pending maintenance action is applied to the // resource. This date takes into account opt-in requests received from the // ApplyPendingMaintenanceAction API, the AutoAppliedAfterDate, and the ForcedApplyDate. // This value is blank if an opt-in request has not been received and nothing // has been specified as AutoAppliedAfterDate or ForcedApplyDate. - CurrentApplyDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CurrentApplyDate *time.Time `type:"timestamp"` // A description providing more detail about the maintenance action. Description *string `type:"string"` @@ -24249,7 +27755,7 @@ type PendingMaintenanceAction struct { // action is applied to the resource on this date regardless of the maintenance // window for the resource. If this date is specified, any immediate opt-in // requests are ignored. - ForcedApplyDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ForcedApplyDate *time.Time `type:"timestamp"` // Indicates the type of opt-in request that has been received for the resource. OptInStatus *string `type:"string"` @@ -24352,6 +27858,10 @@ type PendingModifiedValues struct { // Specifies the pending port for the DB instance. Port *int64 `type:"integer"` + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + // Specifies the storage type to be associated with the DB instance. StorageType *string `type:"string"` } @@ -24444,12 +27954,85 @@ func (s *PendingModifiedValues) SetPort(v int64) *PendingModifiedValues { return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. 
+func (s *PendingModifiedValues) SetProcessorFeatures(v []*ProcessorFeature) *PendingModifiedValues { + s.ProcessorFeatures = v + return s +} + // SetStorageType sets the StorageType field's value. func (s *PendingModifiedValues) SetStorageType(v string) *PendingModifiedValues { s.StorageType = &v return s } +// Contains the processor features of a DB instance class. +// +// To specify the number of CPU cores, use the coreCount feature name for the +// Name parameter. To specify the number of threads per core, use the threadsPerCore +// feature name for the Name parameter. +// +// You can set the processor features of the DB instance class for a DB instance +// when you call one of the following actions: +// +// * CreateDBInstance +// +// * ModifyDBInstance +// +// * RestoreDBInstanceFromDBSnapshot +// +// * RestoreDBInstanceFromS3 +// +// * RestoreDBInstanceToPointInTime +// +// You can view the valid processor values for a particular instance class by +// calling the DescribeOrderableDBInstanceOptions action and specifying the +// instance class for the DBInstanceClass parameter. +// +// In addition, you can use the following actions for DB instance class processor +// information: +// +// * DescribeDBInstances +// +// * DescribeDBSnapshots +// +// * DescribeValidDBInstanceModifications +// +// For more information, see Configuring the Processor of the DB Instance Class +// (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#USER_ConfigureProcessor) +// in the Amazon RDS User Guide. +type ProcessorFeature struct { + _ struct{} `type:"structure"` + + // The name of the processor feature. Valid names are coreCount and threadsPerCore. + Name *string `type:"string"` + + // The value of a processor feature name. + Value *string `type:"string"` +} + +// String returns the string representation +func (s ProcessorFeature) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProcessorFeature) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *ProcessorFeature) SetName(v string) *ProcessorFeature { + s.Name = &v + return s +} + +// SetValue sets the Value field's value. +func (s *ProcessorFeature) SetValue(v string) *ProcessorFeature { + s.Value = &v + return s +} + type PromoteReadReplicaDBClusterInput struct { _ struct{} `type:"structure"` @@ -24498,9 +28081,10 @@ func (s *PromoteReadReplicaDBClusterInput) SetDBClusterIdentifier(v string) *Pro type PromoteReadReplicaDBClusterOutput struct { _ struct{} `type:"structure"` - // Contains the details of an Amazon RDS DB cluster. + // Contains the details of an Amazon Aurora DB cluster. // - // This data type is used as a response element in the DescribeDBClusters action. + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. DBCluster *DBCluster `type:"structure"` } @@ -24651,7 +28235,8 @@ type PurchaseReservedDBInstancesOfferingInput struct { // ReservedDBInstancesOfferingId is a required field ReservedDBInstancesOfferingId *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. 
Tags []*Tag `locationNameList:"Tag" type:"list"` } @@ -25034,7 +28619,8 @@ type RemoveTagsFromResourceInput struct { // The Amazon RDS resource that the tags are removed from. This value is an // Amazon Resource Name (ARN). For information about creating an ARN, see Constructing - // an RDS Amazon Resource Name (ARN) (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing). + // an ARN for Amazon RDS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html#USER_Tagging.ARN.Constructing) + // in the Amazon RDS User Guide. // // ResourceName is a required field ResourceName *string `type:"string" required:"true"` @@ -25139,7 +28725,7 @@ type ReservedDBInstance struct { ReservedDBInstancesOfferingId *string `type:"string"` // The time the reservation started. - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` // The state of the reserved DB instance. State *string `type:"string"` @@ -25541,6 +29127,17 @@ type RestoreDBClusterFromS3Input struct { // can be created in. AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` + // The target backtrack window, in seconds. To disable backtracking, set this + // value to 0. + // + // Default: 0 + // + // Constraints: + // + // * If specified, this value must be set to a number from 0 to 259,200 (72 + // hours). + BacktrackWindow *int64 `type:"long"` + // The number of days for which automated backups of the restored DB cluster // are retained. You must specify a minimum value of 1. // @@ -25564,7 +29161,7 @@ type RestoreDBClusterFromS3Input struct { // // * First character must be a letter. // - // * Cannot end with a hyphen or contain two consecutive hyphens. + // * Can't end with a hyphen or contain two consecutive hyphens. // // Example: my-cluster1 // @@ -25589,6 +29186,17 @@ type RestoreDBClusterFromS3Input struct { // The database name for the restored DB cluster. DatabaseName *string `type:"string"` + // Indicates if the DB cluster should have deletion protection enabled. The + // database can't be deleted when this value is set to true. The default is + // false. + DeletionProtection *bool `type:"boolean"` + + // The list of logs that the restored DB cluster is to export to CloudWatch + // Logs. The values in the list depend on the DB engine being used. For more + // information, see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon Aurora User Guide. + EnableCloudwatchLogsExports []*string `type:"list"` + // True to enable mapping of AWS Identity and Access Management (IAM) accounts // to database accounts, and otherwise false. // @@ -25642,7 +29250,7 @@ type RestoreDBClusterFromS3Input struct { // // * First character must be a letter. // - // * Cannot be a reserved word for the chosen database engine. + // * Can't be a reserved word for the chosen database engine. // // MasterUsername is a required field MasterUsername *string `type:"string" required:"true"` @@ -25665,8 +29273,8 @@ type RestoreDBClusterFromS3Input struct { // // The default is a 30-minute window selected at random from an 8-hour block // of time for each AWS Region. To see the time blocks available, see Adjusting - // the Preferred Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) - // in the Amazon RDS User Guide. 
+ // the Preferred Maintenance Window (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow.Aurora) + // in the Amazon Aurora User Guide. // // Constraints: // @@ -25687,8 +29295,8 @@ type RestoreDBClusterFromS3Input struct { // The default is a 30-minute window selected at random from an 8-hour block // of time for each AWS Region, occurring on a random day of the week. To see // the time blocks available, see Adjusting the Preferred Maintenance Window - // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AdjustingTheMaintenanceWindow.html) - // in the Amazon RDS User Guide. + // (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow.Aurora) + // in the Amazon Aurora User Guide. // // Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. // @@ -25734,7 +29342,8 @@ type RestoreDBClusterFromS3Input struct { // Specifies whether the restored DB cluster is encrypted. StorageEncrypted *bool `type:"boolean"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // A list of EC2 VPC security groups to associate with the restored DB cluster. @@ -25791,6 +29400,12 @@ func (s *RestoreDBClusterFromS3Input) SetAvailabilityZones(v []*string) *Restore return s } +// SetBacktrackWindow sets the BacktrackWindow field's value. +func (s *RestoreDBClusterFromS3Input) SetBacktrackWindow(v int64) *RestoreDBClusterFromS3Input { + s.BacktrackWindow = &v + return s +} + // SetBackupRetentionPeriod sets the BackupRetentionPeriod field's value. func (s *RestoreDBClusterFromS3Input) SetBackupRetentionPeriod(v int64) *RestoreDBClusterFromS3Input { s.BackupRetentionPeriod = &v @@ -25827,6 +29442,18 @@ func (s *RestoreDBClusterFromS3Input) SetDatabaseName(v string) *RestoreDBCluste return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *RestoreDBClusterFromS3Input) SetDeletionProtection(v bool) *RestoreDBClusterFromS3Input { + s.DeletionProtection = &v + return s +} + +// SetEnableCloudwatchLogsExports sets the EnableCloudwatchLogsExports field's value. +func (s *RestoreDBClusterFromS3Input) SetEnableCloudwatchLogsExports(v []*string) *RestoreDBClusterFromS3Input { + s.EnableCloudwatchLogsExports = v + return s +} + // SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. func (s *RestoreDBClusterFromS3Input) SetEnableIAMDatabaseAuthentication(v bool) *RestoreDBClusterFromS3Input { s.EnableIAMDatabaseAuthentication = &v @@ -25938,9 +29565,10 @@ func (s *RestoreDBClusterFromS3Input) SetVpcSecurityGroupIds(v []*string) *Resto type RestoreDBClusterFromS3Output struct { _ struct{} `type:"structure"` - // Contains the details of an Amazon RDS DB cluster. + // Contains the details of an Amazon Aurora DB cluster. // - // This data type is used as a response element in the DescribeDBClusters action. + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. 
DBCluster *DBCluster `type:"structure"` } @@ -25963,10 +29591,21 @@ func (s *RestoreDBClusterFromS3Output) SetDBCluster(v *DBCluster) *RestoreDBClus type RestoreDBClusterFromSnapshotInput struct { _ struct{} `type:"structure"` - // Provides the list of EC2 Availability Zones that instances in the restored - // DB cluster can be created in. + // Provides the list of Amazon EC2 Availability Zones that instances in the + // restored DB cluster can be created in. AvailabilityZones []*string `locationNameList:"AvailabilityZone" type:"list"` + // The target backtrack window, in seconds. To disable backtracking, set this + // value to 0. + // + // Default: 0 + // + // Constraints: + // + // * If specified, this value must be set to a number from 0 to 259,200 (72 + // hours). + BacktrackWindow *int64 `type:"long"` + // The name of the DB cluster to create from the DB snapshot or DB cluster snapshot. // This parameter isn't case-sensitive. // @@ -25976,16 +29615,32 @@ type RestoreDBClusterFromSnapshotInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: my-snapshot-id // // DBClusterIdentifier is a required field DBClusterIdentifier *string `type:"string" required:"true"` + // The name of the DB cluster parameter group to associate with this DB cluster. + // If this argument is omitted, the default DB cluster parameter group for the + // specified engine is used. + // + // Constraints: + // + // * If supplied, must match the name of an existing default DB cluster parameter + // group. + // + // * Must be 1 to 255 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Can't end with a hyphen or contain two consecutive hyphens. + DBClusterParameterGroupName *string `type:"string"` + // The name of the DB subnet group to use for the new DB cluster. // - // Constraints: If supplied, must match the name of an existing DBSubnetGroup. + // Constraints: If supplied, must match the name of an existing DB subnet group. // // Example: mySubnetgroup DBSubnetGroupName *string `type:"string"` @@ -25993,6 +29648,17 @@ type RestoreDBClusterFromSnapshotInput struct { // The database name for the restored DB cluster. DatabaseName *string `type:"string"` + // Indicates if the DB cluster should have deletion protection enabled. The + // database can't be deleted when this value is set to true. The default is + // false. + DeletionProtection *bool `type:"boolean"` + + // The list of logs that the restored DB cluster is to export to Amazon CloudWatch + // Logs. The values in the list depend on the DB engine being used. For more + // information, see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon Aurora User Guide. + EnableCloudwatchLogsExports []*string `type:"list"` + // True to enable mapping of AWS Identity and Access Management (IAM) accounts // to database accounts, and otherwise false. // @@ -26008,6 +29674,10 @@ type RestoreDBClusterFromSnapshotInput struct { // Engine is a required field Engine *string `type:"string" required:"true"` + // The DB engine mode of the DB cluster, either provisioned, serverless, or + // parallelquery. + EngineMode *string `type:"string"` + // The version of the database engine to use for the new DB cluster. 
EngineVersion *string `type:"string"` @@ -26019,8 +29689,8 @@ type RestoreDBClusterFromSnapshotInput struct { // the KMS encryption key used to encrypt the new DB cluster, then you can use // the KMS key alias instead of the ARN for the KMS encryption key. // - // If you do not specify a value for the KmsKeyId parameter, then the following - // will occur: + // If you don't specify a value for the KmsKeyId parameter, then the following + // occurs: // // * If the DB snapshot or DB cluster snapshot in SnapshotIdentifier is encrypted, // then the restored DB cluster is encrypted using the KMS key that was used @@ -26035,11 +29705,15 @@ type RestoreDBClusterFromSnapshotInput struct { // The port number on which the new DB cluster accepts connections. // - // Constraints: Value must be 1150-65535 + // Constraints: This value must be 1150-65535 // // Default: The same port as the original DB cluster. Port *int64 `type:"integer"` + // For DB clusters in serverless DB engine mode, the scaling properties of the + // DB cluster. + ScalingConfiguration *ScalingConfiguration `type:"structure"` + // The identifier for the DB snapshot or DB cluster snapshot to restore from. // // You can use either the name or the Amazon Resource Name (ARN) to specify @@ -26095,12 +29769,24 @@ func (s *RestoreDBClusterFromSnapshotInput) SetAvailabilityZones(v []*string) *R return s } +// SetBacktrackWindow sets the BacktrackWindow field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetBacktrackWindow(v int64) *RestoreDBClusterFromSnapshotInput { + s.BacktrackWindow = &v + return s +} + // SetDBClusterIdentifier sets the DBClusterIdentifier field's value. func (s *RestoreDBClusterFromSnapshotInput) SetDBClusterIdentifier(v string) *RestoreDBClusterFromSnapshotInput { s.DBClusterIdentifier = &v return s } +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetDBClusterParameterGroupName(v string) *RestoreDBClusterFromSnapshotInput { + s.DBClusterParameterGroupName = &v + return s +} + // SetDBSubnetGroupName sets the DBSubnetGroupName field's value. func (s *RestoreDBClusterFromSnapshotInput) SetDBSubnetGroupName(v string) *RestoreDBClusterFromSnapshotInput { s.DBSubnetGroupName = &v @@ -26113,6 +29799,18 @@ func (s *RestoreDBClusterFromSnapshotInput) SetDatabaseName(v string) *RestoreDB return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetDeletionProtection(v bool) *RestoreDBClusterFromSnapshotInput { + s.DeletionProtection = &v + return s +} + +// SetEnableCloudwatchLogsExports sets the EnableCloudwatchLogsExports field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetEnableCloudwatchLogsExports(v []*string) *RestoreDBClusterFromSnapshotInput { + s.EnableCloudwatchLogsExports = v + return s +} + // SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. func (s *RestoreDBClusterFromSnapshotInput) SetEnableIAMDatabaseAuthentication(v bool) *RestoreDBClusterFromSnapshotInput { s.EnableIAMDatabaseAuthentication = &v @@ -26125,6 +29823,12 @@ func (s *RestoreDBClusterFromSnapshotInput) SetEngine(v string) *RestoreDBCluste return s } +// SetEngineMode sets the EngineMode field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetEngineMode(v string) *RestoreDBClusterFromSnapshotInput { + s.EngineMode = &v + return s +} + // SetEngineVersion sets the EngineVersion field's value. 
func (s *RestoreDBClusterFromSnapshotInput) SetEngineVersion(v string) *RestoreDBClusterFromSnapshotInput { s.EngineVersion = &v @@ -26149,6 +29853,12 @@ func (s *RestoreDBClusterFromSnapshotInput) SetPort(v int64) *RestoreDBClusterFr return s } +// SetScalingConfiguration sets the ScalingConfiguration field's value. +func (s *RestoreDBClusterFromSnapshotInput) SetScalingConfiguration(v *ScalingConfiguration) *RestoreDBClusterFromSnapshotInput { + s.ScalingConfiguration = v + return s +} + // SetSnapshotIdentifier sets the SnapshotIdentifier field's value. func (s *RestoreDBClusterFromSnapshotInput) SetSnapshotIdentifier(v string) *RestoreDBClusterFromSnapshotInput { s.SnapshotIdentifier = &v @@ -26170,9 +29880,10 @@ func (s *RestoreDBClusterFromSnapshotInput) SetVpcSecurityGroupIds(v []*string) type RestoreDBClusterFromSnapshotOutput struct { _ struct{} `type:"structure"` - // Contains the details of an Amazon RDS DB cluster. + // Contains the details of an Amazon Aurora DB cluster. // - // This data type is used as a response element in the DescribeDBClusters action. + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. DBCluster *DBCluster `type:"structure"` } @@ -26195,6 +29906,17 @@ func (s *RestoreDBClusterFromSnapshotOutput) SetDBCluster(v *DBCluster) *Restore type RestoreDBClusterToPointInTimeInput struct { _ struct{} `type:"structure"` + // The target backtrack window, in seconds. To disable backtracking, set this + // value to 0. + // + // Default: 0 + // + // Constraints: + // + // * If specified, this value must be set to a number from 0 to 259,200 (72 + // hours). + BacktrackWindow *int64 `type:"long"` + // The name of the new DB cluster to be created. // // Constraints: @@ -26203,11 +29925,27 @@ type RestoreDBClusterToPointInTimeInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // DBClusterIdentifier is a required field DBClusterIdentifier *string `type:"string" required:"true"` + // The name of the DB cluster parameter group to associate with this DB cluster. + // If this argument is omitted, the default DB cluster parameter group for the + // specified engine is used. + // + // Constraints: + // + // * If supplied, must match the name of an existing DB cluster parameter + // group. + // + // * Must be 1 to 255 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Can't end with a hyphen or contain two consecutive hyphens. + DBClusterParameterGroupName *string `type:"string"` + // The DB subnet group name to use for the new DB cluster. // // Constraints: If supplied, must match the name of an existing DBSubnetGroup. @@ -26215,6 +29953,17 @@ type RestoreDBClusterToPointInTimeInput struct { // Example: mySubnetgroup DBSubnetGroupName *string `type:"string"` + // Indicates if the DB cluster should have deletion protection enabled. The + // database can't be deleted when this value is set to true. The default is + // false. + DeletionProtection *bool `type:"boolean"` + + // The list of logs that the restored DB cluster is to export to CloudWatch + // Logs. The values in the list depend on the DB engine being used. 
For more + // information, see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon Aurora User Guide. + EnableCloudwatchLogsExports []*string `type:"list"` + // True to enable mapping of AWS Identity and Access Management (IAM) accounts // to database accounts, and otherwise false. // @@ -26234,8 +29983,8 @@ type RestoreDBClusterToPointInTimeInput struct { // cluster. The new DB cluster is encrypted with the KMS key identified by the // KmsKeyId parameter. // - // If you do not specify a value for the KmsKeyId parameter, then the following - // will occur: + // If you don't specify a value for the KmsKeyId parameter, then the following + // occurs: // // * If the DB cluster is encrypted, then the restored DB cluster is encrypted // using the KMS key that was used to encrypt the source DB cluster. @@ -26252,9 +30001,9 @@ type RestoreDBClusterToPointInTimeInput struct { // The port number on which the new DB cluster accepts connections. // - // Constraints: Value must be 1150-65535 + // Constraints: A value from 1150-65535. // - // Default: The same port as the original DB cluster. + // Default: The default port for the engine. Port *int64 `type:"integer"` // The date and time to restore the DB cluster to. @@ -26267,12 +30016,12 @@ type RestoreDBClusterToPointInTimeInput struct { // // * Must be specified if UseLatestRestorableTime parameter is not provided // - // * Cannot be specified if UseLatestRestorableTime parameter is true + // * Can't be specified if UseLatestRestorableTime parameter is true // - // * Cannot be specified if RestoreType parameter is copy-on-write + // * Can't be specified if RestoreType parameter is copy-on-write // // Example: 2015-03-07T23:45:00Z - RestoreToTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + RestoreToTime *time.Time `type:"timestamp"` // The type of restore to be performed. You can specify one of the following // values: @@ -26299,7 +30048,8 @@ type RestoreDBClusterToPointInTimeInput struct { // SourceDBClusterIdentifier is a required field SourceDBClusterIdentifier *string `type:"string" required:"true"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // A value that is set to true to restore the DB cluster to the latest restorable @@ -26307,7 +30057,7 @@ type RestoreDBClusterToPointInTimeInput struct { // // Default: false // - // Constraints: Cannot be specified if RestoreToTime parameter is provided. + // Constraints: Can't be specified if RestoreToTime parameter is provided. UseLatestRestorableTime *bool `type:"boolean"` // A list of VPC security groups that the new DB cluster belongs to. @@ -26340,18 +30090,42 @@ func (s *RestoreDBClusterToPointInTimeInput) Validate() error { return nil } +// SetBacktrackWindow sets the BacktrackWindow field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetBacktrackWindow(v int64) *RestoreDBClusterToPointInTimeInput { + s.BacktrackWindow = &v + return s +} + // SetDBClusterIdentifier sets the DBClusterIdentifier field's value. 
func (s *RestoreDBClusterToPointInTimeInput) SetDBClusterIdentifier(v string) *RestoreDBClusterToPointInTimeInput { s.DBClusterIdentifier = &v return s } +// SetDBClusterParameterGroupName sets the DBClusterParameterGroupName field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetDBClusterParameterGroupName(v string) *RestoreDBClusterToPointInTimeInput { + s.DBClusterParameterGroupName = &v + return s +} + // SetDBSubnetGroupName sets the DBSubnetGroupName field's value. func (s *RestoreDBClusterToPointInTimeInput) SetDBSubnetGroupName(v string) *RestoreDBClusterToPointInTimeInput { s.DBSubnetGroupName = &v return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetDeletionProtection(v bool) *RestoreDBClusterToPointInTimeInput { + s.DeletionProtection = &v + return s +} + +// SetEnableCloudwatchLogsExports sets the EnableCloudwatchLogsExports field's value. +func (s *RestoreDBClusterToPointInTimeInput) SetEnableCloudwatchLogsExports(v []*string) *RestoreDBClusterToPointInTimeInput { + s.EnableCloudwatchLogsExports = v + return s +} + // SetEnableIAMDatabaseAuthentication sets the EnableIAMDatabaseAuthentication field's value. func (s *RestoreDBClusterToPointInTimeInput) SetEnableIAMDatabaseAuthentication(v bool) *RestoreDBClusterToPointInTimeInput { s.EnableIAMDatabaseAuthentication = &v @@ -26415,9 +30189,10 @@ func (s *RestoreDBClusterToPointInTimeInput) SetVpcSecurityGroupIds(v []*string) type RestoreDBClusterToPointInTimeOutput struct { _ struct{} `type:"structure"` - // Contains the details of an Amazon RDS DB cluster. + // Contains the details of an Amazon Aurora DB cluster. // - // This data type is used as a response element in the DescribeDBClusters action. + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. DBCluster *DBCluster `type:"structure"` } @@ -26476,7 +30251,7 @@ type RestoreDBInstanceFromDBSnapshotInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // Example: my-snapshot-id // @@ -26488,6 +30263,21 @@ type RestoreDBInstanceFromDBSnapshotInput struct { // This parameter doesn't apply to the MySQL, PostgreSQL, or MariaDB engines. DBName *string `type:"string"` + // The name of the DB parameter group to associate with this DB instance. If + // this argument is omitted, the default DBParameterGroup for the specified + // engine is used. + // + // Constraints: + // + // * If supplied, must match the name of an existing DBParameterGroup. + // + // * Must be 1 to 255 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Can't end with a hyphen or contain two consecutive hyphens. + DBParameterGroupName *string `type:"string"` + // The identifier for the DB snapshot to restore from. // // Constraints: @@ -26507,6 +30297,11 @@ type RestoreDBInstanceFromDBSnapshotInput struct { // Example: mySubnetgroup DBSubnetGroupName *string `type:"string"` + // Indicates if the DB instance should have deletion protection enabled. The + // database can't be deleted when this value is set to true. The default is + // false. For more information, see Deleting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html). + DeletionProtection *bool `type:"boolean"` + // Specify the Active Directory Domain to restore the instance in. 
Domain *string `type:"string"` @@ -26515,7 +30310,9 @@ type RestoreDBInstanceFromDBSnapshotInput struct { DomainIAMRoleName *string `type:"string"` // The list of logs that the restored DB instance is to export to CloudWatch - // Logs. + // Logs. The values in the list depend on the DB engine being used. For more + // information, see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon Aurora User Guide. EnableCloudwatchLogsExports []*string `type:"list"` // True to enable mapping of AWS Identity and Access Management (IAM) accounts @@ -26571,7 +30368,8 @@ type RestoreDBInstanceFromDBSnapshotInput struct { // // The provisioned IOPS value must follow the requirements for your database // engine. For more information, see Amazon RDS Provisioned IOPS Storage to - // Improve Performance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS). + // Improve Performance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS) + // in the Amazon RDS User Guide. // // Constraints: Must be an integer greater than 1000. Iops *int64 `type:"integer"` @@ -26603,22 +30401,15 @@ type RestoreDBInstanceFromDBSnapshotInput struct { // Constraints: Value must be 1150-65535 Port *int64 `type:"integer"` + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + // Specifies the accessibility options for the DB instance. A value of true // specifies an Internet-facing instance with a publicly resolvable DNS name, // which resolves to a public IP address. A value of false specifies an internal - // instance with a DNS name that resolves to a private IP address. - // - // Default: The default behavior varies depending on whether a VPC has been - // requested or not. The following list shows the default behavior in each case. - // - // * Default VPC: true - // - // * VPC: false - // - // If no DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is publicly accessible. If a specific - // DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is private. + // instance with a DNS name that resolves to a private IP address. For more + // information, see CreateDBInstance. PubliclyAccessible *bool `type:"boolean"` // Specifies the storage type to be associated with the DB instance. @@ -26630,7 +30421,8 @@ type RestoreDBInstanceFromDBSnapshotInput struct { // Default: io1 if the Iops parameter is specified, otherwise standard StorageType *string `type:"string"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // The ARN from the key store with which to associate the instance for TDE encryption. @@ -26639,6 +30431,15 @@ type RestoreDBInstanceFromDBSnapshotInput struct { // The password for the given ARN from the key store in order to access the // device. 
TdeCredentialPassword *string `type:"string"` + + // A value that specifies that the DB instance class of the DB instance uses + // its default processor features. + UseDefaultProcessorFeatures *bool `type:"boolean"` + + // A list of EC2 VPC security groups to associate with this DB instance. + // + // Default: The default EC2 VPC security group for the DB subnet group's VPC. + VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` } // String returns the string representation @@ -26703,6 +30504,12 @@ func (s *RestoreDBInstanceFromDBSnapshotInput) SetDBName(v string) *RestoreDBIns return s } +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *RestoreDBInstanceFromDBSnapshotInput) SetDBParameterGroupName(v string) *RestoreDBInstanceFromDBSnapshotInput { + s.DBParameterGroupName = &v + return s +} + // SetDBSnapshotIdentifier sets the DBSnapshotIdentifier field's value. func (s *RestoreDBInstanceFromDBSnapshotInput) SetDBSnapshotIdentifier(v string) *RestoreDBInstanceFromDBSnapshotInput { s.DBSnapshotIdentifier = &v @@ -26715,6 +30522,12 @@ func (s *RestoreDBInstanceFromDBSnapshotInput) SetDBSubnetGroupName(v string) *R return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *RestoreDBInstanceFromDBSnapshotInput) SetDeletionProtection(v bool) *RestoreDBInstanceFromDBSnapshotInput { + s.DeletionProtection = &v + return s +} + // SetDomain sets the Domain field's value. func (s *RestoreDBInstanceFromDBSnapshotInput) SetDomain(v string) *RestoreDBInstanceFromDBSnapshotInput { s.Domain = &v @@ -26775,6 +30588,12 @@ func (s *RestoreDBInstanceFromDBSnapshotInput) SetPort(v int64) *RestoreDBInstan return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. +func (s *RestoreDBInstanceFromDBSnapshotInput) SetProcessorFeatures(v []*ProcessorFeature) *RestoreDBInstanceFromDBSnapshotInput { + s.ProcessorFeatures = v + return s +} + // SetPubliclyAccessible sets the PubliclyAccessible field's value. func (s *RestoreDBInstanceFromDBSnapshotInput) SetPubliclyAccessible(v bool) *RestoreDBInstanceFromDBSnapshotInput { s.PubliclyAccessible = &v @@ -26805,6 +30624,18 @@ func (s *RestoreDBInstanceFromDBSnapshotInput) SetTdeCredentialPassword(v string return s } +// SetUseDefaultProcessorFeatures sets the UseDefaultProcessorFeatures field's value. +func (s *RestoreDBInstanceFromDBSnapshotInput) SetUseDefaultProcessorFeatures(v bool) *RestoreDBInstanceFromDBSnapshotInput { + s.UseDefaultProcessorFeatures = &v + return s +} + +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *RestoreDBInstanceFromDBSnapshotInput) SetVpcSecurityGroupIds(v []*string) *RestoreDBInstanceFromDBSnapshotInput { + s.VpcSecurityGroupIds = v + return s +} + type RestoreDBInstanceFromDBSnapshotOutput struct { _ struct{} `type:"structure"` @@ -26849,7 +30680,8 @@ type RestoreDBInstanceFromS3Input struct { // The Availability Zone that the DB instance is created in. For information // about AWS Regions and Availability Zones, see Regions and Availability Zones - // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html). + // (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html) + // in the Amazon RDS User Guide. // // Default: A random, system-chosen Availability Zone in the endpoint's AWS // Region. @@ -26892,7 +30724,7 @@ type RestoreDBInstanceFromS3Input struct { // // * First character must be a letter. 
// - // * Cannot end with a hyphen or contain two consecutive hyphens. + // * Can't end with a hyphen or contain two consecutive hyphens. // // Example: mydbinstance // @@ -26916,8 +30748,15 @@ type RestoreDBInstanceFromS3Input struct { // A DB subnet group to associate with this DB instance. DBSubnetGroupName *string `type:"string"` + // Indicates if the DB instance should have deletion protection enabled. The + // database can't be deleted when this value is set to true. The default is + // false. For more information, see Deleting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html). + DeletionProtection *bool `type:"boolean"` + // The list of logs that the restored DB instance is to export to CloudWatch - // Logs. + // Logs. The values in the list depend on the DB engine being used. For more + // information, see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon RDS User Guide. EnableCloudwatchLogsExports []*string `type:"list"` // True to enable mapping of AWS Identity and Access Management (IAM) accounts @@ -26927,6 +30766,9 @@ type RestoreDBInstanceFromS3Input struct { EnableIAMDatabaseAuthentication *bool `type:"boolean"` // True to enable Performance Insights for the DB instance, and otherwise false. + // + // For more information, see Using Amazon Performance Insights (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html) + // in the Amazon Relational Database Service User Guide. EnablePerformanceInsights *bool `type:"boolean"` // The name of the database engine to be used for this instance. @@ -26937,12 +30779,14 @@ type RestoreDBInstanceFromS3Input struct { Engine *string `type:"string" required:"true"` // The version number of the database engine to use. Choose the latest minor - // version of your database engine as specified in CreateDBInstance. + // version of your database engine. For information about engine versions, see + // CreateDBInstance, or call DescribeDBEngineVersions. EngineVersion *string `type:"string"` // The amount of Provisioned IOPS (input/output operations per second) to allocate // initially for the DB instance. For information about valid Iops values, see - // see Amazon RDS Provisioned IOPS Storage to Improve Performance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS). + // see Amazon RDS Provisioned IOPS Storage to Improve Performance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS) + // in the Amazon RDS User Guide. Iops *int64 `type:"integer"` // The AWS KMS key identifier for an encrypted DB instance. @@ -26975,7 +30819,7 @@ type RestoreDBInstanceFromS3Input struct { // // * First character must be a letter. // - // * Cannot be a reserved word for the chosen database engine. + // * Can't be a reserved word for the chosen database engine. MasterUsername *string `type:"string"` // The interval, in seconds, between points when Enhanced Monitoring metrics @@ -26993,7 +30837,8 @@ type RestoreDBInstanceFromS3Input struct { // The ARN for the IAM role that permits RDS to send enhanced monitoring metrics // to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. 
// For information on creating a monitoring role, see Setting Up and Enabling - // Enhanced Monitoring (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html#USER_Monitoring.OS.Enabling). + // Enhanced Monitoring (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html#USER_Monitoring.OS.Enabling) + // in the Amazon RDS User Guide. // // If MonitoringInterval is set to a value other than 0, then you must supply // a MonitoringRoleArn value. @@ -27013,6 +30858,10 @@ type RestoreDBInstanceFromS3Input struct { // the KMS key alias for the KMS encryption key. PerformanceInsightsKMSKeyId *string `type:"string"` + // The amount of time, in days, to retain Performance Insights data. Valid values + // are 7 or 731 (2 years). + PerformanceInsightsRetentionPeriod *int64 `type:"integer"` + // The port number on which the database accepts connections. // // Type: Integer @@ -27023,7 +30872,8 @@ type RestoreDBInstanceFromS3Input struct { Port *int64 `type:"integer"` // The time range each day during which automated backups are created if automated - // backups are enabled. For more information, see The Backup Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow). + // backups are enabled. For more information, see The Backup Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow) + // in the Amazon RDS User Guide. // // Constraints: // @@ -27038,7 +30888,8 @@ type RestoreDBInstanceFromS3Input struct { // The time range each week during which system maintenance can occur, in Universal // Coordinated Time (UTC). For more information, see Amazon RDS Maintenance - // Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html#Concepts.DBMaintenance). + // Window (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html#Concepts.DBMaintenance) + // in the Amazon RDS User Guide. // // Constraints: // @@ -27053,7 +30904,14 @@ type RestoreDBInstanceFromS3Input struct { // * Must be at least 30 minutes. PreferredMaintenanceWindow *string `type:"string"` - // Specifies whether the DB instance is publicly accessible or not. For more + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + + // Specifies the accessibility options for the DB instance. A value of true + // specifies an Internet-facing instance with a publicly resolvable DNS name, + // which resolves to a public IP address. A value of false specifies an internal + // instance with a DNS name that resolves to a private IP address. For more // information, see CreateDBInstance. PubliclyAccessible *bool `type:"boolean"` @@ -27098,9 +30956,14 @@ type RestoreDBInstanceFromS3Input struct { StorageType *string `type:"string"` // A list of tags to associate with this DB instance. For more information, - // see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. 
Tags []*Tag `locationNameList:"Tag" type:"list"` + // A value that specifies that the DB instance class of the DB instance uses + // its default processor features. + UseDefaultProcessorFeatures *bool `type:"boolean"` + // A list of VPC security groups to associate with this DB instance. VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` } @@ -27212,6 +31075,12 @@ func (s *RestoreDBInstanceFromS3Input) SetDBSubnetGroupName(v string) *RestoreDB return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *RestoreDBInstanceFromS3Input) SetDeletionProtection(v bool) *RestoreDBInstanceFromS3Input { + s.DeletionProtection = &v + return s +} + // SetEnableCloudwatchLogsExports sets the EnableCloudwatchLogsExports field's value. func (s *RestoreDBInstanceFromS3Input) SetEnableCloudwatchLogsExports(v []*string) *RestoreDBInstanceFromS3Input { s.EnableCloudwatchLogsExports = v @@ -27302,6 +31171,12 @@ func (s *RestoreDBInstanceFromS3Input) SetPerformanceInsightsKMSKeyId(v string) return s } +// SetPerformanceInsightsRetentionPeriod sets the PerformanceInsightsRetentionPeriod field's value. +func (s *RestoreDBInstanceFromS3Input) SetPerformanceInsightsRetentionPeriod(v int64) *RestoreDBInstanceFromS3Input { + s.PerformanceInsightsRetentionPeriod = &v + return s +} + // SetPort sets the Port field's value. func (s *RestoreDBInstanceFromS3Input) SetPort(v int64) *RestoreDBInstanceFromS3Input { s.Port = &v @@ -27320,6 +31195,12 @@ func (s *RestoreDBInstanceFromS3Input) SetPreferredMaintenanceWindow(v string) * return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. +func (s *RestoreDBInstanceFromS3Input) SetProcessorFeatures(v []*ProcessorFeature) *RestoreDBInstanceFromS3Input { + s.ProcessorFeatures = v + return s +} + // SetPubliclyAccessible sets the PubliclyAccessible field's value. func (s *RestoreDBInstanceFromS3Input) SetPubliclyAccessible(v bool) *RestoreDBInstanceFromS3Input { s.PubliclyAccessible = &v @@ -27374,6 +31255,12 @@ func (s *RestoreDBInstanceFromS3Input) SetTags(v []*Tag) *RestoreDBInstanceFromS return s } +// SetUseDefaultProcessorFeatures sets the UseDefaultProcessorFeatures field's value. +func (s *RestoreDBInstanceFromS3Input) SetUseDefaultProcessorFeatures(v bool) *RestoreDBInstanceFromS3Input { + s.UseDefaultProcessorFeatures = &v + return s +} + // SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. func (s *RestoreDBInstanceFromS3Input) SetVpcSecurityGroupIds(v []*string) *RestoreDBInstanceFromS3Input { s.VpcSecurityGroupIds = v @@ -27440,6 +31327,21 @@ type RestoreDBInstanceToPointInTimeInput struct { // This parameter is not used for the MySQL or MariaDB engines. DBName *string `type:"string"` + // The name of the DB parameter group to associate with this DB instance. If + // this argument is omitted, the default DBParameterGroup for the specified + // engine is used. + // + // Constraints: + // + // * If supplied, must match the name of an existing DBParameterGroup. + // + // * Must be 1 to 255 letters, numbers, or hyphens. + // + // * First character must be a letter. + // + // * Can't end with a hyphen or contain two consecutive hyphens. + DBParameterGroupName *string `type:"string"` + // The DB subnet group name to use for the new instance. // // Constraints: If supplied, must match the name of an existing DBSubnetGroup. 
@@ -27447,6 +31349,11 @@ type RestoreDBInstanceToPointInTimeInput struct { // Example: mySubnetgroup DBSubnetGroupName *string `type:"string"` + // Indicates if the DB instance should have deletion protection enabled. The + // database can't be deleted when this value is set to true. The default is + // false. For more information, see Deleting a DB Instance (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html). + DeletionProtection *bool `type:"boolean"` + // Specify the Active Directory Domain to restore the instance in. Domain *string `type:"string"` @@ -27455,7 +31362,9 @@ type RestoreDBInstanceToPointInTimeInput struct { DomainIAMRoleName *string `type:"string"` // The list of logs that the restored DB instance is to export to CloudWatch - // Logs. + // Logs. The values in the list depend on the DB engine being used. For more + // information, see Publishing Database Logs to Amazon CloudWatch Logs (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html#USER_LogAccess.Procedural.UploadtoCloudWatch) + // in the Amazon RDS User Guide. EnableCloudwatchLogsExports []*string `type:"list"` // True to enable mapping of AWS Identity and Access Management (IAM) accounts @@ -27538,22 +31447,15 @@ type RestoreDBInstanceToPointInTimeInput struct { // Default: The same port as the original DB instance. Port *int64 `type:"integer"` + // The number of CPU cores and the number of threads per core for the DB instance + // class of the DB instance. + ProcessorFeatures []*ProcessorFeature `locationNameList:"ProcessorFeature" type:"list"` + // Specifies the accessibility options for the DB instance. A value of true // specifies an Internet-facing instance with a publicly resolvable DNS name, // which resolves to a public IP address. A value of false specifies an internal - // instance with a DNS name that resolves to a private IP address. - // - // Default: The default behavior varies depending on whether a VPC has been - // requested or not. The following list shows the default behavior in each case. - // - // * Default VPC:true - // - // * VPC:false - // - // If no DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is publicly accessible. If a specific - // DB subnet group has been specified as part of the request and the PubliclyAccessible - // value has not been set, the DB instance is private. + // instance with a DNS name that resolves to a private IP address. For more + // information, see CreateDBInstance. PubliclyAccessible *bool `type:"boolean"` // The date and time to restore from. @@ -27564,19 +31466,20 @@ type RestoreDBInstanceToPointInTimeInput struct { // // * Must be before the latest restorable time for the DB instance // - // * Cannot be specified if UseLatestRestorableTime parameter is true + // * Can't be specified if UseLatestRestorableTime parameter is true // // Example: 2009-09-07T23:45:00Z - RestoreTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + RestoreTime *time.Time `type:"timestamp"` // The identifier of the source DB instance from which to restore. // // Constraints: // // * Must match the identifier of an existing DB instance. - // - // SourceDBInstanceIdentifier is a required field - SourceDBInstanceIdentifier *string `type:"string" required:"true"` + SourceDBInstanceIdentifier *string `type:"string"` + + // The resource ID of the source DB instance from which to restore. 
+ SourceDbiResourceId *string `type:"string"` // Specifies the storage type to be associated with the DB instance. // @@ -27587,7 +31490,8 @@ type RestoreDBInstanceToPointInTimeInput struct { // Default: io1 if the Iops parameter is specified, otherwise standard StorageType *string `type:"string"` - // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). + // A list of tags. For more information, see Tagging Amazon RDS Resources (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) + // in the Amazon RDS User Guide. Tags []*Tag `locationNameList:"Tag" type:"list"` // The name of the new DB instance to be created. @@ -27598,7 +31502,7 @@ type RestoreDBInstanceToPointInTimeInput struct { // // * First character must be a letter // - // * Cannot end with a hyphen or contain two consecutive hyphens + // * Can't end with a hyphen or contain two consecutive hyphens // // TargetDBInstanceIdentifier is a required field TargetDBInstanceIdentifier *string `type:"string" required:"true"` @@ -27610,13 +31514,22 @@ type RestoreDBInstanceToPointInTimeInput struct { // device. TdeCredentialPassword *string `type:"string"` + // A value that specifies that the DB instance class of the DB instance uses + // its default processor features. + UseDefaultProcessorFeatures *bool `type:"boolean"` + // Specifies whether (true) or not (false) the DB instance is restored from // the latest backup time. // // Default: false // - // Constraints: Cannot be specified if RestoreTime parameter is provided. + // Constraints: Can't be specified if RestoreTime parameter is provided. UseLatestRestorableTime *bool `type:"boolean"` + + // A list of EC2 VPC security groups to associate with this DB instance. + // + // Default: The default EC2 VPC security group for the DB subnet group's VPC. + VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` } // String returns the string representation @@ -27632,9 +31545,6 @@ func (s RestoreDBInstanceToPointInTimeInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *RestoreDBInstanceToPointInTimeInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "RestoreDBInstanceToPointInTimeInput"} - if s.SourceDBInstanceIdentifier == nil { - invalidParams.Add(request.NewErrParamRequired("SourceDBInstanceIdentifier")) - } if s.TargetDBInstanceIdentifier == nil { invalidParams.Add(request.NewErrParamRequired("TargetDBInstanceIdentifier")) } @@ -27675,12 +31585,24 @@ func (s *RestoreDBInstanceToPointInTimeInput) SetDBName(v string) *RestoreDBInst return s } +// SetDBParameterGroupName sets the DBParameterGroupName field's value. +func (s *RestoreDBInstanceToPointInTimeInput) SetDBParameterGroupName(v string) *RestoreDBInstanceToPointInTimeInput { + s.DBParameterGroupName = &v + return s +} + // SetDBSubnetGroupName sets the DBSubnetGroupName field's value. func (s *RestoreDBInstanceToPointInTimeInput) SetDBSubnetGroupName(v string) *RestoreDBInstanceToPointInTimeInput { s.DBSubnetGroupName = &v return s } +// SetDeletionProtection sets the DeletionProtection field's value. +func (s *RestoreDBInstanceToPointInTimeInput) SetDeletionProtection(v bool) *RestoreDBInstanceToPointInTimeInput { + s.DeletionProtection = &v + return s +} + // SetDomain sets the Domain field's value. 
func (s *RestoreDBInstanceToPointInTimeInput) SetDomain(v string) *RestoreDBInstanceToPointInTimeInput { s.Domain = &v @@ -27741,6 +31663,12 @@ func (s *RestoreDBInstanceToPointInTimeInput) SetPort(v int64) *RestoreDBInstanc return s } +// SetProcessorFeatures sets the ProcessorFeatures field's value. +func (s *RestoreDBInstanceToPointInTimeInput) SetProcessorFeatures(v []*ProcessorFeature) *RestoreDBInstanceToPointInTimeInput { + s.ProcessorFeatures = v + return s +} + // SetPubliclyAccessible sets the PubliclyAccessible field's value. func (s *RestoreDBInstanceToPointInTimeInput) SetPubliclyAccessible(v bool) *RestoreDBInstanceToPointInTimeInput { s.PubliclyAccessible = &v @@ -27759,6 +31687,12 @@ func (s *RestoreDBInstanceToPointInTimeInput) SetSourceDBInstanceIdentifier(v st return s } +// SetSourceDbiResourceId sets the SourceDbiResourceId field's value. +func (s *RestoreDBInstanceToPointInTimeInput) SetSourceDbiResourceId(v string) *RestoreDBInstanceToPointInTimeInput { + s.SourceDbiResourceId = &v + return s +} + // SetStorageType sets the StorageType field's value. func (s *RestoreDBInstanceToPointInTimeInput) SetStorageType(v string) *RestoreDBInstanceToPointInTimeInput { s.StorageType = &v @@ -27789,12 +31723,24 @@ func (s *RestoreDBInstanceToPointInTimeInput) SetTdeCredentialPassword(v string) return s } +// SetUseDefaultProcessorFeatures sets the UseDefaultProcessorFeatures field's value. +func (s *RestoreDBInstanceToPointInTimeInput) SetUseDefaultProcessorFeatures(v bool) *RestoreDBInstanceToPointInTimeInput { + s.UseDefaultProcessorFeatures = &v + return s +} + // SetUseLatestRestorableTime sets the UseLatestRestorableTime field's value. func (s *RestoreDBInstanceToPointInTimeInput) SetUseLatestRestorableTime(v bool) *RestoreDBInstanceToPointInTimeInput { s.UseLatestRestorableTime = &v return s } +// SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. +func (s *RestoreDBInstanceToPointInTimeInput) SetVpcSecurityGroupIds(v []*string) *RestoreDBInstanceToPointInTimeInput { + s.VpcSecurityGroupIds = v + return s +} + type RestoreDBInstanceToPointInTimeOutput struct { _ struct{} `type:"structure"` @@ -27820,6 +31766,39 @@ func (s *RestoreDBInstanceToPointInTimeOutput) SetDBInstance(v *DBInstance) *Res return s } +// Earliest and latest time an instance can be restored to: +type RestoreWindow struct { + _ struct{} `type:"structure"` + + // The earliest time you can restore an instance to. + EarliestTime *time.Time `type:"timestamp"` + + // The latest time you can restore an instance to. + LatestTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s RestoreWindow) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreWindow) GoString() string { + return s.String() +} + +// SetEarliestTime sets the EarliestTime field's value. +func (s *RestoreWindow) SetEarliestTime(v time.Time) *RestoreWindow { + s.EarliestTime = &v + return s +} + +// SetLatestTime sets the LatestTime field's value. +func (s *RestoreWindow) SetLatestTime(v time.Time) *RestoreWindow { + s.LatestTime = &v + return s +} + type RevokeDBSecurityGroupIngressInput struct { _ struct{} `type:"structure"` @@ -27930,6 +31909,132 @@ func (s *RevokeDBSecurityGroupIngressOutput) SetDBSecurityGroup(v *DBSecurityGro return s } +// Contains the scaling configuration of an Aurora Serverless DB cluster. 
+// +// For more information, see Using Amazon Aurora Serverless (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html) +// in the Amazon Aurora User Guide. +type ScalingConfiguration struct { + _ struct{} `type:"structure"` + + // A value that specifies whether to allow or disallow automatic pause for an + // Aurora DB cluster in serverless DB engine mode. A DB cluster can be paused + // only when it's idle (it has no connections). + // + // If a DB cluster is paused for more than seven days, the DB cluster might + // be backed up with a snapshot. In this case, the DB cluster is restored when + // there is a request to connect to it. + AutoPause *bool `type:"boolean"` + + // The maximum capacity for an Aurora DB cluster in serverless DB engine mode. + // + // Valid capacity values are 2, 4, 8, 16, 32, 64, 128, and 256. + // + // The maximum capacity must be greater than or equal to the minimum capacity. + MaxCapacity *int64 `type:"integer"` + + // The minimum capacity for an Aurora DB cluster in serverless DB engine mode. + // + // Valid capacity values are 2, 4, 8, 16, 32, 64, 128, and 256. + // + // The minimum capacity must be less than or equal to the maximum capacity. + MinCapacity *int64 `type:"integer"` + + // The time, in seconds, before an Aurora DB cluster in serverless mode is paused. + SecondsUntilAutoPause *int64 `type:"integer"` +} + +// String returns the string representation +func (s ScalingConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScalingConfiguration) GoString() string { + return s.String() +} + +// SetAutoPause sets the AutoPause field's value. +func (s *ScalingConfiguration) SetAutoPause(v bool) *ScalingConfiguration { + s.AutoPause = &v + return s +} + +// SetMaxCapacity sets the MaxCapacity field's value. +func (s *ScalingConfiguration) SetMaxCapacity(v int64) *ScalingConfiguration { + s.MaxCapacity = &v + return s +} + +// SetMinCapacity sets the MinCapacity field's value. +func (s *ScalingConfiguration) SetMinCapacity(v int64) *ScalingConfiguration { + s.MinCapacity = &v + return s +} + +// SetSecondsUntilAutoPause sets the SecondsUntilAutoPause field's value. +func (s *ScalingConfiguration) SetSecondsUntilAutoPause(v int64) *ScalingConfiguration { + s.SecondsUntilAutoPause = &v + return s +} + +// Shows the scaling configuration for an Aurora DB cluster in serverless DB +// engine mode. +// +// For more information, see Using Amazon Aurora Serverless (http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html) +// in the Amazon Aurora User Guide. +type ScalingConfigurationInfo struct { + _ struct{} `type:"structure"` + + // A value that indicates whether automatic pause is allowed for the Aurora + // DB cluster in serverless DB engine mode. + AutoPause *bool `type:"boolean"` + + // The maximum capacity for an Aurora DB cluster in serverless DB engine mode. + MaxCapacity *int64 `type:"integer"` + + // The minimum capacity for the Aurora DB cluster in serverless DB engine mode. + MinCapacity *int64 `type:"integer"` + + // The remaining amount of time, in seconds, before the Aurora DB cluster in + // serverless mode is paused. A DB cluster can be paused only when it's idle + // (it has no connections). 
+ SecondsUntilAutoPause *int64 `type:"integer"` +} + +// String returns the string representation +func (s ScalingConfigurationInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScalingConfigurationInfo) GoString() string { + return s.String() +} + +// SetAutoPause sets the AutoPause field's value. +func (s *ScalingConfigurationInfo) SetAutoPause(v bool) *ScalingConfigurationInfo { + s.AutoPause = &v + return s +} + +// SetMaxCapacity sets the MaxCapacity field's value. +func (s *ScalingConfigurationInfo) SetMaxCapacity(v int64) *ScalingConfigurationInfo { + s.MaxCapacity = &v + return s +} + +// SetMinCapacity sets the MinCapacity field's value. +func (s *ScalingConfigurationInfo) SetMinCapacity(v int64) *ScalingConfigurationInfo { + s.MinCapacity = &v + return s +} + +// SetSecondsUntilAutoPause sets the SecondsUntilAutoPause field's value. +func (s *ScalingConfigurationInfo) SetSecondsUntilAutoPause(v int64) *ScalingConfigurationInfo { + s.SecondsUntilAutoPause = &v + return s +} + // Contains an AWS Region name as the result of a successful call to the DescribeSourceRegions // action. type SourceRegion struct { @@ -27973,6 +32078,71 @@ func (s *SourceRegion) SetStatus(v string) *SourceRegion { return s } +type StartDBClusterInput struct { + _ struct{} `type:"structure"` + + // The DB cluster identifier of the Amazon Aurora DB cluster to be started. + // This parameter is stored as a lowercase string. + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s StartDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StartDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartDBClusterInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *StartDBClusterInput) SetDBClusterIdentifier(v string) *StartDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +type StartDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Aurora DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. + DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s StartDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartDBClusterOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *StartDBClusterOutput) SetDBCluster(v *DBCluster) *StartDBClusterOutput { + s.DBCluster = v + return s +} + type StartDBInstanceInput struct { _ struct{} `type:"structure"` @@ -28036,6 +32206,71 @@ func (s *StartDBInstanceOutput) SetDBInstance(v *DBInstance) *StartDBInstanceOut return s } +type StopDBClusterInput struct { + _ struct{} `type:"structure"` + + // The DB cluster identifier of the Amazon Aurora DB cluster to be stopped. 
+ // This parameter is stored as a lowercase string. + // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s StopDBClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopDBClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StopDBClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopDBClusterInput"} + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *StopDBClusterInput) SetDBClusterIdentifier(v string) *StopDBClusterInput { + s.DBClusterIdentifier = &v + return s +} + +type StopDBClusterOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of an Amazon Aurora DB cluster. + // + // This data type is used as a response element in the DescribeDBClusters, StopDBCluster, + // and StartDBCluster actions. + DBCluster *DBCluster `type:"structure"` +} + +// String returns the string representation +func (s StopDBClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopDBClusterOutput) GoString() string { + return s.String() +} + +// SetDBCluster sets the DBCluster field's value. +func (s *StopDBClusterOutput) SetDBCluster(v *DBCluster) *StopDBClusterOutput { + s.DBCluster = v + return s +} + type StopDBInstanceInput struct { _ struct{} `type:"structure"` @@ -28290,6 +32525,9 @@ type ValidDBInstanceModificationsMessage struct { // Valid storage options for your DB instance. Storage []*ValidStorageOptions `locationNameList:"ValidStorageOptions" type:"list"` + + // Valid processor features for your DB instance. + ValidProcessorFeatures []*AvailableProcessorFeature `locationNameList:"AvailableProcessorFeature" type:"list"` } // String returns the string representation @@ -28308,6 +32546,12 @@ func (s *ValidDBInstanceModificationsMessage) SetStorage(v []*ValidStorageOption return s } +// SetValidProcessorFeatures sets the ValidProcessorFeatures field's value. +func (s *ValidDBInstanceModificationsMessage) SetValidProcessorFeatures(v []*AvailableProcessorFeature) *ValidDBInstanceModificationsMessage { + s.ValidProcessorFeatures = v + return s +} + // Information about valid modifications that you can make to your DB instance. // Contains the result of a successful call to the DescribeValidDBInstanceModifications // action. diff --git a/vendor/github.com/aws/aws-sdk-go/service/rds/errors.go b/vendor/github.com/aws/aws-sdk-go/service/rds/errors.go index 4cb1983df84..ed23a490c59 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/rds/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/rds/errors.go @@ -7,55 +7,83 @@ const ( // ErrCodeAuthorizationAlreadyExistsFault for service response error code // "AuthorizationAlreadyExists". // - // The specified CIDRIP or EC2 security group is already authorized for the - // specified DB security group. + // The specified CIDRIP or Amazon EC2 security group is already authorized for + // the specified DB security group. 
ErrCodeAuthorizationAlreadyExistsFault = "AuthorizationAlreadyExists" // ErrCodeAuthorizationNotFoundFault for service response error code // "AuthorizationNotFound". // - // Specified CIDRIP or EC2 security group is not authorized for the specified - // DB security group. + // The specified CIDRIP or Amazon EC2 security group isn't authorized for the + // specified DB security group. // - // RDS may not also be authorized via IAM to perform necessary actions on your - // behalf. + // RDS also may not be authorized by using IAM to perform necessary actions + // on your behalf. ErrCodeAuthorizationNotFoundFault = "AuthorizationNotFound" // ErrCodeAuthorizationQuotaExceededFault for service response error code // "AuthorizationQuotaExceeded". // - // DB security group authorization quota has been reached. + // The DB security group authorization quota has been reached. ErrCodeAuthorizationQuotaExceededFault = "AuthorizationQuotaExceeded" + // ErrCodeBackupPolicyNotFoundFault for service response error code + // "BackupPolicyNotFoundFault". + ErrCodeBackupPolicyNotFoundFault = "BackupPolicyNotFoundFault" + // ErrCodeCertificateNotFoundFault for service response error code // "CertificateNotFound". // - // CertificateIdentifier does not refer to an existing certificate. + // CertificateIdentifier doesn't refer to an existing certificate. ErrCodeCertificateNotFoundFault = "CertificateNotFound" // ErrCodeDBClusterAlreadyExistsFault for service response error code // "DBClusterAlreadyExistsFault". // - // User already has a DB cluster with the given identifier. + // The user already has a DB cluster with the given identifier. ErrCodeDBClusterAlreadyExistsFault = "DBClusterAlreadyExistsFault" + // ErrCodeDBClusterBacktrackNotFoundFault for service response error code + // "DBClusterBacktrackNotFoundFault". + // + // BacktrackIdentifier doesn't refer to an existing backtrack. + ErrCodeDBClusterBacktrackNotFoundFault = "DBClusterBacktrackNotFoundFault" + + // ErrCodeDBClusterEndpointAlreadyExistsFault for service response error code + // "DBClusterEndpointAlreadyExistsFault". + // + // The specified custom endpoint can't be created because it already exists. + ErrCodeDBClusterEndpointAlreadyExistsFault = "DBClusterEndpointAlreadyExistsFault" + + // ErrCodeDBClusterEndpointNotFoundFault for service response error code + // "DBClusterEndpointNotFoundFault". + // + // The specified custom endpoint doesn't exist. + ErrCodeDBClusterEndpointNotFoundFault = "DBClusterEndpointNotFoundFault" + + // ErrCodeDBClusterEndpointQuotaExceededFault for service response error code + // "DBClusterEndpointQuotaExceededFault". + // + // The cluster already has the maximum number of custom endpoints. + ErrCodeDBClusterEndpointQuotaExceededFault = "DBClusterEndpointQuotaExceededFault" + // ErrCodeDBClusterNotFoundFault for service response error code // "DBClusterNotFoundFault". // - // DBClusterIdentifier does not refer to an existing DB cluster. + // DBClusterIdentifier doesn't refer to an existing DB cluster. ErrCodeDBClusterNotFoundFault = "DBClusterNotFoundFault" // ErrCodeDBClusterParameterGroupNotFoundFault for service response error code // "DBClusterParameterGroupNotFound". // - // DBClusterParameterGroupName does not refer to an existing DB Cluster parameter + // DBClusterParameterGroupName doesn't refer to an existing DB cluster parameter // group. 
ErrCodeDBClusterParameterGroupNotFoundFault = "DBClusterParameterGroupNotFound" // ErrCodeDBClusterQuotaExceededFault for service response error code // "DBClusterQuotaExceededFault". // - // User attempted to create a new DB cluster and the user has already reached + // The user attempted to create a new DB cluster and the user has already reached // the maximum allowed DB cluster quota. ErrCodeDBClusterQuotaExceededFault = "DBClusterQuotaExceededFault" @@ -69,8 +97,8 @@ const ( // ErrCodeDBClusterRoleNotFoundFault for service response error code // "DBClusterRoleNotFound". // - // The specified IAM role Amazon Resource Name (ARN) is not associated with - // the specified DB cluster. + // The specified IAM role Amazon Resource Name (ARN) isn't associated with the + // specified DB cluster. ErrCodeDBClusterRoleNotFoundFault = "DBClusterRoleNotFound" // ErrCodeDBClusterRoleQuotaExceededFault for service response error code @@ -83,31 +111,45 @@ const ( // ErrCodeDBClusterSnapshotAlreadyExistsFault for service response error code // "DBClusterSnapshotAlreadyExistsFault". // - // User already has a DB cluster snapshot with the given identifier. + // The user already has a DB cluster snapshot with the given identifier. ErrCodeDBClusterSnapshotAlreadyExistsFault = "DBClusterSnapshotAlreadyExistsFault" // ErrCodeDBClusterSnapshotNotFoundFault for service response error code // "DBClusterSnapshotNotFoundFault". // - // DBClusterSnapshotIdentifier does not refer to an existing DB cluster snapshot. + // DBClusterSnapshotIdentifier doesn't refer to an existing DB cluster snapshot. ErrCodeDBClusterSnapshotNotFoundFault = "DBClusterSnapshotNotFoundFault" // ErrCodeDBInstanceAlreadyExistsFault for service response error code // "DBInstanceAlreadyExists". // - // User already has a DB instance with the given identifier. + // The user already has a DB instance with the given identifier. ErrCodeDBInstanceAlreadyExistsFault = "DBInstanceAlreadyExists" + // ErrCodeDBInstanceAutomatedBackupNotFoundFault for service response error code + // "DBInstanceAutomatedBackupNotFound". + // + // No automated backup for this DB instance was found. + ErrCodeDBInstanceAutomatedBackupNotFoundFault = "DBInstanceAutomatedBackupNotFound" + + // ErrCodeDBInstanceAutomatedBackupQuotaExceededFault for service response error code + // "DBInstanceAutomatedBackupQuotaExceeded". + // + // The quota for retained automated backups was exceeded. This prevents you + // from retaining any additional automated backups. The retained automated backups + // quota is the same as your DB Instance quota. + ErrCodeDBInstanceAutomatedBackupQuotaExceededFault = "DBInstanceAutomatedBackupQuotaExceeded" + // ErrCodeDBInstanceNotFoundFault for service response error code // "DBInstanceNotFound". // - // DBInstanceIdentifier does not refer to an existing DB instance. + // DBInstanceIdentifier doesn't refer to an existing DB instance. ErrCodeDBInstanceNotFoundFault = "DBInstanceNotFound" // ErrCodeDBLogFileNotFoundFault for service response error code // "DBLogFileNotFoundFault". // - // LogFileName does not refer to an existing DB log file. + // LogFileName doesn't refer to an existing DB log file. ErrCodeDBLogFileNotFoundFault = "DBLogFileNotFoundFault" // ErrCodeDBParameterGroupAlreadyExistsFault for service response error code @@ -119,13 +161,13 @@ const ( // ErrCodeDBParameterGroupNotFoundFault for service response error code // "DBParameterGroupNotFound". // - // DBParameterGroupName does not refer to an existing DB parameter group. 
+ // DBParameterGroupName doesn't refer to an existing DB parameter group. ErrCodeDBParameterGroupNotFoundFault = "DBParameterGroupNotFound" // ErrCodeDBParameterGroupQuotaExceededFault for service response error code // "DBParameterGroupQuotaExceeded". // - // Request would result in user exceeding the allowed number of DB parameter + // The request would result in the user exceeding the allowed number of DB parameter // groups. ErrCodeDBParameterGroupQuotaExceededFault = "DBParameterGroupQuotaExceeded" @@ -139,19 +181,19 @@ const ( // ErrCodeDBSecurityGroupNotFoundFault for service response error code // "DBSecurityGroupNotFound". // - // DBSecurityGroupName does not refer to an existing DB security group. + // DBSecurityGroupName doesn't refer to an existing DB security group. ErrCodeDBSecurityGroupNotFoundFault = "DBSecurityGroupNotFound" // ErrCodeDBSecurityGroupNotSupportedFault for service response error code // "DBSecurityGroupNotSupported". // - // A DB security group is not allowed for this action. + // A DB security group isn't allowed for this action. ErrCodeDBSecurityGroupNotSupportedFault = "DBSecurityGroupNotSupported" // ErrCodeDBSecurityGroupQuotaExceededFault for service response error code // "QuotaExceeded.DBSecurityGroup". // - // Request would result in user exceeding the allowed number of DB security + // The request would result in the user exceeding the allowed number of DB security // groups. ErrCodeDBSecurityGroupQuotaExceededFault = "QuotaExceeded.DBSecurityGroup" @@ -164,7 +206,7 @@ const ( // ErrCodeDBSnapshotNotFoundFault for service response error code // "DBSnapshotNotFound". // - // DBSnapshotIdentifier does not refer to an existing DB snapshot. + // DBSnapshotIdentifier doesn't refer to an existing DB snapshot. ErrCodeDBSnapshotNotFoundFault = "DBSnapshotNotFound" // ErrCodeDBSubnetGroupAlreadyExistsFault for service response error code @@ -183,39 +225,40 @@ const ( // ErrCodeDBSubnetGroupNotAllowedFault for service response error code // "DBSubnetGroupNotAllowedFault". // - // Indicates that the DBSubnetGroup should not be specified while creating read - // replicas that lie in the same region as the source instance. + // The DBSubnetGroup shouldn't be specified while creating read replicas that + // lie in the same region as the source instance. ErrCodeDBSubnetGroupNotAllowedFault = "DBSubnetGroupNotAllowedFault" // ErrCodeDBSubnetGroupNotFoundFault for service response error code // "DBSubnetGroupNotFoundFault". // - // DBSubnetGroupName does not refer to an existing DB subnet group. + // DBSubnetGroupName doesn't refer to an existing DB subnet group. ErrCodeDBSubnetGroupNotFoundFault = "DBSubnetGroupNotFoundFault" // ErrCodeDBSubnetGroupQuotaExceededFault for service response error code // "DBSubnetGroupQuotaExceeded". // - // Request would result in user exceeding the allowed number of DB subnet groups. + // The request would result in the user exceeding the allowed number of DB subnet + // groups. ErrCodeDBSubnetGroupQuotaExceededFault = "DBSubnetGroupQuotaExceeded" // ErrCodeDBSubnetQuotaExceededFault for service response error code // "DBSubnetQuotaExceededFault". // - // Request would result in user exceeding the allowed number of subnets in a - // DB subnet groups. + // The request would result in the user exceeding the allowed number of subnets + // in a DB subnet groups. ErrCodeDBSubnetQuotaExceededFault = "DBSubnetQuotaExceededFault" // ErrCodeDBUpgradeDependencyFailureFault for service response error code // "DBUpgradeDependencyFailure". 
// - // The DB upgrade failed because a resource the DB depends on could not be modified. + // The DB upgrade failed because a resource the DB depends on can't be modified. ErrCodeDBUpgradeDependencyFailureFault = "DBUpgradeDependencyFailure" // ErrCodeDomainNotFoundFault for service response error code // "DomainNotFoundFault". // - // Domain does not refer to an existing Active Directory Domain. + // Domain doesn't refer to an existing Active Directory domain. ErrCodeDomainNotFoundFault = "DomainNotFoundFault" // ErrCodeEventSubscriptionQuotaExceededFault for service response error code @@ -227,85 +270,106 @@ const ( // ErrCodeInstanceQuotaExceededFault for service response error code // "InstanceQuotaExceeded". // - // Request would result in user exceeding the allowed number of DB instances. + // The request would result in the user exceeding the allowed number of DB instances. ErrCodeInstanceQuotaExceededFault = "InstanceQuotaExceeded" // ErrCodeInsufficientDBClusterCapacityFault for service response error code // "InsufficientDBClusterCapacityFault". // - // The DB cluster does not have enough capacity for the current operation. + // The DB cluster doesn't have enough capacity for the current operation. ErrCodeInsufficientDBClusterCapacityFault = "InsufficientDBClusterCapacityFault" // ErrCodeInsufficientDBInstanceCapacityFault for service response error code // "InsufficientDBInstanceCapacity". // - // Specified DB instance class is not available in the specified Availability + // The specified DB instance class isn't available in the specified Availability // Zone. ErrCodeInsufficientDBInstanceCapacityFault = "InsufficientDBInstanceCapacity" // ErrCodeInsufficientStorageClusterCapacityFault for service response error code // "InsufficientStorageClusterCapacity". // - // There is insufficient storage available for the current action. You may be - // able to resolve this error by updating your subnet group to use different + // There is insufficient storage available for the current action. You might + // be able to resolve this error by updating your subnet group to use different // Availability Zones that have more storage available. ErrCodeInsufficientStorageClusterCapacityFault = "InsufficientStorageClusterCapacity" + // ErrCodeInvalidDBClusterCapacityFault for service response error code + // "InvalidDBClusterCapacityFault". + // + // Capacity isn't a valid Aurora Serverless DB cluster capacity. Valid capacity + // values are 2, 4, 8, 16, 32, 64, 128, and 256. + ErrCodeInvalidDBClusterCapacityFault = "InvalidDBClusterCapacityFault" + + // ErrCodeInvalidDBClusterEndpointStateFault for service response error code + // "InvalidDBClusterEndpointStateFault". + // + // The requested operation can't be performed on the endpoint while the endpoint + // is in this state. + ErrCodeInvalidDBClusterEndpointStateFault = "InvalidDBClusterEndpointStateFault" + // ErrCodeInvalidDBClusterSnapshotStateFault for service response error code // "InvalidDBClusterSnapshotStateFault". // - // The supplied value is not a valid DB cluster snapshot state. + // The supplied value isn't a valid DB cluster snapshot state. ErrCodeInvalidDBClusterSnapshotStateFault = "InvalidDBClusterSnapshotStateFault" // ErrCodeInvalidDBClusterStateFault for service response error code // "InvalidDBClusterStateFault". // - // The DB cluster is not in a valid state. + // The requested operation can't be performed while the cluster is in this state. 
ErrCodeInvalidDBClusterStateFault = "InvalidDBClusterStateFault" + // ErrCodeInvalidDBInstanceAutomatedBackupStateFault for service response error code + // "InvalidDBInstanceAutomatedBackupState". + // + // The automated backup is in an invalid state. For example, this automated + // backup is associated with an active instance. + ErrCodeInvalidDBInstanceAutomatedBackupStateFault = "InvalidDBInstanceAutomatedBackupState" + // ErrCodeInvalidDBInstanceStateFault for service response error code // "InvalidDBInstanceState". // - // The specified DB instance is not in the available state. + // The DB instance isn't in a valid state. ErrCodeInvalidDBInstanceStateFault = "InvalidDBInstanceState" // ErrCodeInvalidDBParameterGroupStateFault for service response error code // "InvalidDBParameterGroupState". // // The DB parameter group is in use or is in an invalid state. If you are attempting - // to delete the parameter group, you cannot delete it when the parameter group + // to delete the parameter group, you can't delete it when the parameter group // is in this state. ErrCodeInvalidDBParameterGroupStateFault = "InvalidDBParameterGroupState" // ErrCodeInvalidDBSecurityGroupStateFault for service response error code // "InvalidDBSecurityGroupState". // - // The state of the DB security group does not allow deletion. + // The state of the DB security group doesn't allow deletion. ErrCodeInvalidDBSecurityGroupStateFault = "InvalidDBSecurityGroupState" // ErrCodeInvalidDBSnapshotStateFault for service response error code // "InvalidDBSnapshotState". // - // The state of the DB snapshot does not allow deletion. + // The state of the DB snapshot doesn't allow deletion. ErrCodeInvalidDBSnapshotStateFault = "InvalidDBSnapshotState" // ErrCodeInvalidDBSubnetGroupFault for service response error code // "InvalidDBSubnetGroupFault". // - // Indicates the DBSubnetGroup does not belong to the same VPC as that of an - // existing cross region read replica of the same source instance. + // The DBSubnetGroup doesn't belong to the same VPC as that of an existing cross-region + // read replica of the same source instance. ErrCodeInvalidDBSubnetGroupFault = "InvalidDBSubnetGroupFault" // ErrCodeInvalidDBSubnetGroupStateFault for service response error code // "InvalidDBSubnetGroupStateFault". // - // The DB subnet group cannot be deleted because it is in use. + // The DB subnet group cannot be deleted because it's in use. ErrCodeInvalidDBSubnetGroupStateFault = "InvalidDBSubnetGroupStateFault" // ErrCodeInvalidDBSubnetStateFault for service response error code // "InvalidDBSubnetStateFault". // - // The DB subnet is not in the available state. + // The DB subnet isn't in the available state. ErrCodeInvalidDBSubnetStateFault = "InvalidDBSubnetStateFault" // ErrCodeInvalidEventSubscriptionStateFault for service response error code @@ -318,21 +382,21 @@ const ( // ErrCodeInvalidOptionGroupStateFault for service response error code // "InvalidOptionGroupStateFault". // - // The option group is not in the available state. + // The option group isn't in the available state. ErrCodeInvalidOptionGroupStateFault = "InvalidOptionGroupStateFault" // ErrCodeInvalidRestoreFault for service response error code // "InvalidRestoreFault". // - // Cannot restore from vpc backup to non-vpc DB instance. + // Cannot restore from VPC backup to non-VPC DB instance. ErrCodeInvalidRestoreFault = "InvalidRestoreFault" // ErrCodeInvalidS3BucketFault for service response error code // "InvalidS3BucketFault". 
// - // The specified Amazon S3 bucket name could not be found or Amazon RDS is not - // authorized to access the specified Amazon S3 bucket. Verify the SourceS3BucketName - // and S3IngestionRoleArn values and try again. + // The specified Amazon S3 bucket name can't be found or Amazon RDS isn't authorized + // to access the specified Amazon S3 bucket. Verify the SourceS3BucketName and + // S3IngestionRoleArn values and try again. ErrCodeInvalidS3BucketFault = "InvalidS3BucketFault" // ErrCodeInvalidSubnet for service response error code @@ -345,14 +409,14 @@ const ( // ErrCodeInvalidVPCNetworkStateFault for service response error code // "InvalidVPCNetworkStateFault". // - // DB subnet group does not cover all Availability Zones after it is created - // because users' change. + // The DB subnet group doesn't cover all Availability Zones after it's created + // because of users' change. ErrCodeInvalidVPCNetworkStateFault = "InvalidVPCNetworkStateFault" // ErrCodeKMSKeyNotAccessibleFault for service response error code // "KMSKeyNotAccessibleFault". // - // Error accessing KMS key. + // An error occurred accessing an AWS KMS key. ErrCodeKMSKeyNotAccessibleFault = "KMSKeyNotAccessibleFault" // ErrCodeOptionGroupAlreadyExistsFault for service response error code @@ -444,7 +508,7 @@ const ( // ErrCodeSnapshotQuotaExceededFault for service response error code // "SnapshotQuotaExceeded". // - // Request would result in user exceeding the allowed number of DB snapshots. + // The request would result in the user exceeding the allowed number of DB snapshots. ErrCodeSnapshotQuotaExceededFault = "SnapshotQuotaExceeded" // ErrCodeSourceNotFoundFault for service response error code @@ -456,14 +520,14 @@ const ( // ErrCodeStorageQuotaExceededFault for service response error code // "StorageQuotaExceeded". // - // Request would result in user exceeding the allowed amount of storage available - // across all DB instances. + // The request would result in the user exceeding the allowed amount of storage + // available across all DB instances. ErrCodeStorageQuotaExceededFault = "StorageQuotaExceeded" // ErrCodeStorageTypeNotSupportedFault for service response error code // "StorageTypeNotSupported". // - // StorageType specified cannot be associated with the DB Instance. + // Storage of the StorageType specified can't be associated with the DB instance. ErrCodeStorageTypeNotSupportedFault = "StorageTypeNotSupported" // ErrCodeSubnetAlreadyInUse for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/rds/service.go b/vendor/github.com/aws/aws-sdk-go/service/rds/service.go index 2e2ec2e97ee..f2d0efaf7d0 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/rds/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/rds/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "rds" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "rds" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "RDS" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the RDS client with a session. 
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/redshift/api.go b/vendor/github.com/aws/aws-sdk-go/service/redshift/api.go index e84ed828ff0..913675699dd 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/redshift/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/redshift/api.go @@ -3,6 +3,7 @@ package redshift import ( + "fmt" "time" "github.com/aws/aws-sdk-go/aws" @@ -12,12 +13,112 @@ import ( "github.com/aws/aws-sdk-go/private/protocol/query" ) +const opAcceptReservedNodeExchange = "AcceptReservedNodeExchange" + +// AcceptReservedNodeExchangeRequest generates a "aws/request.Request" representing the +// client's request for the AcceptReservedNodeExchange operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AcceptReservedNodeExchange for more information on using the AcceptReservedNodeExchange +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AcceptReservedNodeExchangeRequest method. +// req, resp := client.AcceptReservedNodeExchangeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/AcceptReservedNodeExchange +func (c *Redshift) AcceptReservedNodeExchangeRequest(input *AcceptReservedNodeExchangeInput) (req *request.Request, output *AcceptReservedNodeExchangeOutput) { + op := &request.Operation{ + Name: opAcceptReservedNodeExchange, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AcceptReservedNodeExchangeInput{} + } + + output = &AcceptReservedNodeExchangeOutput{} + req = c.newRequest(op, input, output) + return +} + +// AcceptReservedNodeExchange API operation for Amazon Redshift. +// +// Exchanges a DC1 Reserved Node for a DC2 Reserved Node with no changes to +// the configuration (term, payment type, or number of nodes) and no additional +// costs. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation AcceptReservedNodeExchange for usage and error information. +// +// Returned Error Codes: +// * ErrCodeReservedNodeNotFoundFault "ReservedNodeNotFound" +// The specified reserved compute node not found. +// +// * ErrCodeInvalidReservedNodeStateFault "InvalidReservedNodeState" +// Indicates that the Reserved Node being exchanged is not in an active state. +// +// * ErrCodeReservedNodeAlreadyMigratedFault "ReservedNodeAlreadyMigrated" +// Indicates that the reserved node has already been exchanged. +// +// * ErrCodeReservedNodeOfferingNotFoundFault "ReservedNodeOfferingNotFound" +// Specified offering does not exist. 
+// +// * ErrCodeUnsupportedOperationFault "UnsupportedOperation" +// The requested operation isn't supported. +// +// * ErrCodeDependentServiceUnavailableFault "DependentServiceUnavailableFault" +// Your request cannot be completed because a dependent internal service is +// temporarily unavailable. Wait 30 to 60 seconds and try again. +// +// * ErrCodeReservedNodeAlreadyExistsFault "ReservedNodeAlreadyExists" +// User already has a reservation with the given identifier. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/AcceptReservedNodeExchange +func (c *Redshift) AcceptReservedNodeExchange(input *AcceptReservedNodeExchangeInput) (*AcceptReservedNodeExchangeOutput, error) { + req, out := c.AcceptReservedNodeExchangeRequest(input) + return out, req.Send() +} + +// AcceptReservedNodeExchangeWithContext is the same as AcceptReservedNodeExchange with the addition of +// the ability to pass a context and additional request options. +// +// See AcceptReservedNodeExchange for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) AcceptReservedNodeExchangeWithContext(ctx aws.Context, input *AcceptReservedNodeExchangeInput, opts ...request.Option) (*AcceptReservedNodeExchangeOutput, error) { + req, out := c.AcceptReservedNodeExchangeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opAuthorizeClusterSecurityGroupIngress = "AuthorizeClusterSecurityGroupIngress" // AuthorizeClusterSecurityGroupIngressRequest generates a "aws/request.Request" representing the // client's request for the AuthorizeClusterSecurityGroupIngress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -125,8 +226,8 @@ const opAuthorizeSnapshotAccess = "AuthorizeSnapshotAccess" // AuthorizeSnapshotAccessRequest generates a "aws/request.Request" representing the // client's request for the AuthorizeSnapshotAccess operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -222,12 +323,265 @@ func (c *Redshift) AuthorizeSnapshotAccessWithContext(ctx aws.Context, input *Au return out, req.Send() } +const opBatchDeleteClusterSnapshots = "BatchDeleteClusterSnapshots" + +// BatchDeleteClusterSnapshotsRequest generates a "aws/request.Request" representing the +// client's request for the BatchDeleteClusterSnapshots operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See BatchDeleteClusterSnapshots for more information on using the BatchDeleteClusterSnapshots +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchDeleteClusterSnapshotsRequest method. +// req, resp := client.BatchDeleteClusterSnapshotsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/BatchDeleteClusterSnapshots +func (c *Redshift) BatchDeleteClusterSnapshotsRequest(input *BatchDeleteClusterSnapshotsInput) (req *request.Request, output *BatchDeleteClusterSnapshotsOutput) { + op := &request.Operation{ + Name: opBatchDeleteClusterSnapshots, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchDeleteClusterSnapshotsInput{} + } + + output = &BatchDeleteClusterSnapshotsOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchDeleteClusterSnapshots API operation for Amazon Redshift. +// +// Deletes a set of cluster snapshots. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation BatchDeleteClusterSnapshots for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBatchDeleteRequestSizeExceededFault "BatchDeleteRequestSizeExceeded" +// The maximum number for a batch delete of snapshots has been reached. The +// limit is 100. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/BatchDeleteClusterSnapshots +func (c *Redshift) BatchDeleteClusterSnapshots(input *BatchDeleteClusterSnapshotsInput) (*BatchDeleteClusterSnapshotsOutput, error) { + req, out := c.BatchDeleteClusterSnapshotsRequest(input) + return out, req.Send() +} + +// BatchDeleteClusterSnapshotsWithContext is the same as BatchDeleteClusterSnapshots with the addition of +// the ability to pass a context and additional request options. +// +// See BatchDeleteClusterSnapshots for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) BatchDeleteClusterSnapshotsWithContext(ctx aws.Context, input *BatchDeleteClusterSnapshotsInput, opts ...request.Option) (*BatchDeleteClusterSnapshotsOutput, error) { + req, out := c.BatchDeleteClusterSnapshotsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opBatchModifyClusterSnapshots = "BatchModifyClusterSnapshots" + +// BatchModifyClusterSnapshotsRequest generates a "aws/request.Request" representing the +// client's request for the BatchModifyClusterSnapshots operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See BatchModifyClusterSnapshots for more information on using the BatchModifyClusterSnapshots +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchModifyClusterSnapshotsRequest method. +// req, resp := client.BatchModifyClusterSnapshotsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/BatchModifyClusterSnapshots +func (c *Redshift) BatchModifyClusterSnapshotsRequest(input *BatchModifyClusterSnapshotsInput) (req *request.Request, output *BatchModifyClusterSnapshotsOutput) { + op := &request.Operation{ + Name: opBatchModifyClusterSnapshots, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchModifyClusterSnapshotsInput{} + } + + output = &BatchModifyClusterSnapshotsOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchModifyClusterSnapshots API operation for Amazon Redshift. +// +// Modifies the settings for a list of snapshots. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation BatchModifyClusterSnapshots for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. +// +// The value must be either -1 or an integer between 1 and 3,653. +// +// * ErrCodeBatchModifyClusterSnapshotsLimitExceededFault "BatchModifyClusterSnapshotsLimitExceededFault" +// The maximum number for snapshot identifiers has been reached. The limit is +// 100. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/BatchModifyClusterSnapshots +func (c *Redshift) BatchModifyClusterSnapshots(input *BatchModifyClusterSnapshotsInput) (*BatchModifyClusterSnapshotsOutput, error) { + req, out := c.BatchModifyClusterSnapshotsRequest(input) + return out, req.Send() +} + +// BatchModifyClusterSnapshotsWithContext is the same as BatchModifyClusterSnapshots with the addition of +// the ability to pass a context and additional request options. +// +// See BatchModifyClusterSnapshots for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) BatchModifyClusterSnapshotsWithContext(ctx aws.Context, input *BatchModifyClusterSnapshotsInput, opts ...request.Option) (*BatchModifyClusterSnapshotsOutput, error) { + req, out := c.BatchModifyClusterSnapshotsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCancelResize = "CancelResize" + +// CancelResizeRequest generates a "aws/request.Request" representing the +// client's request for the CancelResize operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelResize for more information on using the CancelResize +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelResizeRequest method. +// req, resp := client.CancelResizeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/CancelResize +func (c *Redshift) CancelResizeRequest(input *CancelResizeInput) (req *request.Request, output *CancelResizeOutput) { + op := &request.Operation{ + Name: opCancelResize, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelResizeInput{} + } + + output = &CancelResizeOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelResize API operation for Amazon Redshift. +// +// Cancels a resize operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation CancelResize for usage and error information. +// +// Returned Error Codes: +// * ErrCodeClusterNotFoundFault "ClusterNotFound" +// The ClusterIdentifier parameter does not refer to an existing cluster. +// +// * ErrCodeResizeNotFoundFault "ResizeNotFound" +// A resize operation for the specified cluster is not found. +// +// * ErrCodeInvalidClusterStateFault "InvalidClusterState" +// The specified cluster is not in the available state. +// +// * ErrCodeUnsupportedOperationFault "UnsupportedOperation" +// The requested operation isn't supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/CancelResize +func (c *Redshift) CancelResize(input *CancelResizeInput) (*CancelResizeOutput, error) { + req, out := c.CancelResizeRequest(input) + return out, req.Send() +} + +// CancelResizeWithContext is the same as CancelResize with the addition of +// the ability to pass a context and additional request options. +// +// See CancelResize for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) CancelResizeWithContext(ctx aws.Context, input *CancelResizeInput, opts ...request.Option) (*CancelResizeOutput, error) { + req, out := c.CancelResizeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCopyClusterSnapshot = "CopyClusterSnapshot" // CopyClusterSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CopyClusterSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -303,6 +657,11 @@ func (c *Redshift) CopyClusterSnapshotRequest(input *CopyClusterSnapshotInput) ( // The request would result in the user exceeding the allowed number of cluster // snapshots. // +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. +// +// The value must be either -1 or an integer between 1 and 3,653. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/CopyClusterSnapshot func (c *Redshift) CopyClusterSnapshot(input *CopyClusterSnapshotInput) (*CopyClusterSnapshotOutput, error) { req, out := c.CopyClusterSnapshotRequest(input) @@ -329,8 +688,8 @@ const opCreateCluster = "CreateCluster" // CreateClusterRequest generates a "aws/request.Request" representing the // client's request for the CreateCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -371,10 +730,10 @@ func (c *Redshift) CreateClusterRequest(input *CreateClusterInput) (req *request // // Creates a new cluster. // -// To create the cluster in Virtual Private Cloud (VPC), you must provide a -// cluster subnet group name. The cluster subnet group identifies the subnets -// of your VPC that Amazon Redshift uses when creating the cluster. For more -// information about managing clusters, go to Amazon Redshift Clusters (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) +// To create a cluster in Virtual Private Cloud (VPC), you must provide a cluster +// subnet group name. The cluster subnet group identifies the subnets of your +// VPC that Amazon Redshift uses when creating the cluster. For more information +// about managing clusters, go to Amazon Redshift Clusters (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) // in the Amazon Redshift Cluster Management Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -440,7 +799,7 @@ func (c *Redshift) CreateClusterRequest(input *CreateClusterInput) (req *request // The Elastic IP (EIP) is invalid or cannot be found. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. @@ -452,6 +811,17 @@ func (c *Redshift) CreateClusterRequest(input *CreateClusterInput) (req *request // The request cannot be completed because a dependent service is throttling // requests made by Amazon Redshift on your behalf. Wait and retry the request. // +// * ErrCodeInvalidClusterTrackFault "InvalidClusterTrack" +// The provided cluster track name is not valid. +// +// * ErrCodeSnapshotScheduleNotFoundFault "SnapshotScheduleNotFound" +// We could not find the specified snapshot schedule. +// +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. +// +// The value must be either -1 or an integer between 1 and 3,653. 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/CreateCluster func (c *Redshift) CreateCluster(input *CreateClusterInput) (*CreateClusterOutput, error) { req, out := c.CreateClusterRequest(input) @@ -478,8 +848,8 @@ const opCreateClusterParameterGroup = "CreateClusterParameterGroup" // CreateClusterParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -548,7 +918,7 @@ func (c *Redshift) CreateClusterParameterGroupRequest(input *CreateClusterParame // A cluster parameter group with the same name already exists. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. @@ -579,8 +949,8 @@ const opCreateClusterSecurityGroup = "CreateClusterSecurityGroup" // CreateClusterSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateClusterSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -644,7 +1014,7 @@ func (c *Redshift) CreateClusterSecurityGroupRequest(input *CreateClusterSecurit // in the Amazon Redshift Cluster Management Guide. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. @@ -675,8 +1045,8 @@ const opCreateClusterSnapshot = "CreateClusterSnapshot" // CreateClusterSnapshotRequest generates a "aws/request.Request" representing the // client's request for the CreateClusterSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -745,11 +1115,16 @@ func (c *Redshift) CreateClusterSnapshotRequest(input *CreateClusterSnapshotInpu // snapshots. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. // +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. +// +// The value must be either -1 or an integer between 1 and 3,653. 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/CreateClusterSnapshot func (c *Redshift) CreateClusterSnapshot(input *CreateClusterSnapshotInput) (*CreateClusterSnapshotOutput, error) { req, out := c.CreateClusterSnapshotRequest(input) @@ -776,8 +1151,8 @@ const opCreateClusterSubnetGroup = "CreateClusterSubnetGroup" // CreateClusterSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateClusterSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -855,7 +1230,7 @@ func (c *Redshift) CreateClusterSubnetGroupRequest(input *CreateClusterSubnetGro // Your account is not authorized to perform the requested operation. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. @@ -890,8 +1265,8 @@ const opCreateEventSubscription = "CreateEventSubscription" // CreateEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the CreateEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -999,7 +1374,7 @@ func (c *Redshift) CreateEventSubscriptionRequest(input *CreateEventSubscription // The specified Amazon Redshift event source could not be found. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. @@ -1030,8 +1405,8 @@ const opCreateHsmClientCertificate = "CreateHsmClientCertificate" // CreateHsmClientCertificateRequest generates a "aws/request.Request" representing the // client's request for the CreateHsmClientCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1098,7 +1473,7 @@ func (c *Redshift) CreateHsmClientCertificateRequest(input *CreateHsmClientCerti // in the Amazon Redshift Cluster Management Guide. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. 
@@ -1129,8 +1504,8 @@ const opCreateHsmConfiguration = "CreateHsmConfiguration" // CreateHsmConfigurationRequest generates a "aws/request.Request" representing the // client's request for the CreateHsmConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1198,7 +1573,7 @@ func (c *Redshift) CreateHsmConfigurationRequest(input *CreateHsmConfigurationIn // in the Amazon Redshift Cluster Management Guide. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. @@ -1229,8 +1604,8 @@ const opCreateSnapshotCopyGrant = "CreateSnapshotCopyGrant" // CreateSnapshotCopyGrantRequest generates a "aws/request.Request" representing the // client's request for the CreateSnapshotCopyGrant operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1297,7 +1672,7 @@ func (c *Redshift) CreateSnapshotCopyGrantRequest(input *CreateSnapshotCopyGrant // The encryption key has exceeded its grant limit in AWS KMS. // // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeInvalidTagFault "InvalidTagFault" // The tag is invalid. @@ -1328,12 +1703,103 @@ func (c *Redshift) CreateSnapshotCopyGrantWithContext(ctx aws.Context, input *Cr return out, req.Send() } +const opCreateSnapshotSchedule = "CreateSnapshotSchedule" + +// CreateSnapshotScheduleRequest generates a "aws/request.Request" representing the +// client's request for the CreateSnapshotSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSnapshotSchedule for more information on using the CreateSnapshotSchedule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSnapshotScheduleRequest method. 
+// req, resp := client.CreateSnapshotScheduleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/CreateSnapshotSchedule +func (c *Redshift) CreateSnapshotScheduleRequest(input *CreateSnapshotScheduleInput) (req *request.Request, output *CreateSnapshotScheduleOutput) { + op := &request.Operation{ + Name: opCreateSnapshotSchedule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateSnapshotScheduleInput{} + } + + output = &CreateSnapshotScheduleOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSnapshotSchedule API operation for Amazon Redshift. +// +// Creates a new snapshot schedule. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation CreateSnapshotSchedule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeSnapshotScheduleAlreadyExistsFault "SnapshotScheduleAlreadyExists" +// The specified snapshot schedule already exists. +// +// * ErrCodeInvalidScheduleFault "InvalidSchedule" +// The schedule you submitted isn't valid. +// +// * ErrCodeSnapshotScheduleQuotaExceededFault "SnapshotScheduleQuotaExceeded" +// You have exceeded the quota of snapshot schedules. +// +// * ErrCodeTagLimitExceededFault "TagLimitExceededFault" +// You have exceeded the number of tags allowed. +// +// * ErrCodeScheduleDefinitionTypeUnsupportedFault "ScheduleDefinitionTypeUnsupported" +// The definition you submitted is not supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/CreateSnapshotSchedule +func (c *Redshift) CreateSnapshotSchedule(input *CreateSnapshotScheduleInput) (*CreateSnapshotScheduleOutput, error) { + req, out := c.CreateSnapshotScheduleRequest(input) + return out, req.Send() +} + +// CreateSnapshotScheduleWithContext is the same as CreateSnapshotSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See CreateSnapshotSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) CreateSnapshotScheduleWithContext(ctx aws.Context, input *CreateSnapshotScheduleInput, opts ...request.Option) (*CreateSnapshotScheduleOutput, error) { + req, out := c.CreateSnapshotScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateTags = "CreateTags" // CreateTagsRequest generates a "aws/request.Request" representing the // client's request for the CreateTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1376,7 +1842,7 @@ func (c *Redshift) CreateTagsRequest(input *CreateTagsInput) (req *request.Reque // // Adds one or more tags to a specified resource. // -// A resource can have up to 10 tags. If you try to create more than 10 tags +// A resource can have up to 50 tags. If you try to create more than 50 tags // for a resource, you will receive an error and the attempt will fail. // // If you specify a key that already exists for the resource, the value for @@ -1391,7 +1857,7 @@ func (c *Redshift) CreateTagsRequest(input *CreateTagsInput) (req *request.Reque // // Returned Error Codes: // * ErrCodeTagLimitExceededFault "TagLimitExceededFault" -// The request exceeds the limit of 10 tags for the resource. +// You have exceeded the number of tags allowed. // // * ErrCodeResourceNotFoundFault "ResourceNotFoundFault" // The resource could not be found. @@ -1425,8 +1891,8 @@ const opDeleteCluster = "DeleteCluster" // DeleteClusterRequest generates a "aws/request.Request" representing the // client's request for the DeleteCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1505,6 +1971,11 @@ func (c *Redshift) DeleteClusterRequest(input *DeleteClusterInput) (req *request // The request would result in the user exceeding the allowed number of cluster // snapshots. // +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. +// +// The value must be either -1 or an integer between 1 and 3,653. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DeleteCluster func (c *Redshift) DeleteCluster(input *DeleteClusterInput) (*DeleteClusterOutput, error) { req, out := c.DeleteClusterRequest(input) @@ -1531,8 +2002,8 @@ const opDeleteClusterParameterGroup = "DeleteClusterParameterGroup" // DeleteClusterParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1619,8 +2090,8 @@ const opDeleteClusterSecurityGroup = "DeleteClusterSecurityGroup" // DeleteClusterSecurityGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteClusterSecurityGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1711,8 +2182,8 @@ const opDeleteClusterSnapshot = "DeleteClusterSnapshot" // DeleteClusterSnapshotRequest generates a "aws/request.Request" representing the // client's request for the DeleteClusterSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1801,8 +2272,8 @@ const opDeleteClusterSubnetGroup = "DeleteClusterSubnetGroup" // DeleteClusterSubnetGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteClusterSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1889,8 +2360,8 @@ const opDeleteEventSubscription = "DeleteEventSubscription" // DeleteEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the DeleteEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1975,8 +2446,8 @@ const opDeleteHsmClientCertificate = "DeleteHsmClientCertificate" // DeleteHsmClientCertificateRequest generates a "aws/request.Request" representing the // client's request for the DeleteHsmClientCertificate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2060,8 +2531,8 @@ const opDeleteHsmConfiguration = "DeleteHsmConfiguration" // DeleteHsmConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteHsmConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2145,8 +2616,8 @@ const opDeleteSnapshotCopyGrant = "DeleteSnapshotCopyGrant" // DeleteSnapshotCopyGrantRequest generates a "aws/request.Request" representing the // client's request for the DeleteSnapshotCopyGrant operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2227,12 +2698,96 @@ func (c *Redshift) DeleteSnapshotCopyGrantWithContext(ctx aws.Context, input *De return out, req.Send() } +const opDeleteSnapshotSchedule = "DeleteSnapshotSchedule" + +// DeleteSnapshotScheduleRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSnapshotSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteSnapshotSchedule for more information on using the DeleteSnapshotSchedule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteSnapshotScheduleRequest method. +// req, resp := client.DeleteSnapshotScheduleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DeleteSnapshotSchedule +func (c *Redshift) DeleteSnapshotScheduleRequest(input *DeleteSnapshotScheduleInput) (req *request.Request, output *DeleteSnapshotScheduleOutput) { + op := &request.Operation{ + Name: opDeleteSnapshotSchedule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteSnapshotScheduleInput{} + } + + output = &DeleteSnapshotScheduleOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteSnapshotSchedule API operation for Amazon Redshift. +// +// Deletes a snapshot schedule. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation DeleteSnapshotSchedule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidClusterSnapshotScheduleStateFault "InvalidClusterSnapshotScheduleState" +// The cluster snapshot schedule state is not valid. +// +// * ErrCodeSnapshotScheduleNotFoundFault "SnapshotScheduleNotFound" +// We could not find the specified snapshot schedule. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DeleteSnapshotSchedule +func (c *Redshift) DeleteSnapshotSchedule(input *DeleteSnapshotScheduleInput) (*DeleteSnapshotScheduleOutput, error) { + req, out := c.DeleteSnapshotScheduleRequest(input) + return out, req.Send() +} + +// DeleteSnapshotScheduleWithContext is the same as DeleteSnapshotSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSnapshotSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *Redshift) DeleteSnapshotScheduleWithContext(ctx aws.Context, input *DeleteSnapshotScheduleInput, opts ...request.Option) (*DeleteSnapshotScheduleOutput, error) { + req, out := c.DeleteSnapshotScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteTags = "DeleteTags" // DeleteTagsRequest generates a "aws/request.Request" representing the // client's request for the DeleteTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2312,12 +2867,168 @@ func (c *Redshift) DeleteTagsWithContext(ctx aws.Context, input *DeleteTagsInput return out, req.Send() } +const opDescribeAccountAttributes = "DescribeAccountAttributes" + +// DescribeAccountAttributesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAccountAttributes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAccountAttributes for more information on using the DescribeAccountAttributes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAccountAttributesRequest method. +// req, resp := client.DescribeAccountAttributesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeAccountAttributes +func (c *Redshift) DescribeAccountAttributesRequest(input *DescribeAccountAttributesInput) (req *request.Request, output *DescribeAccountAttributesOutput) { + op := &request.Operation{ + Name: opDescribeAccountAttributes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAccountAttributesInput{} + } + + output = &DescribeAccountAttributesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAccountAttributes API operation for Amazon Redshift. +// +// Returns a list of attributes attached to an account +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation DescribeAccountAttributes for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeAccountAttributes +func (c *Redshift) DescribeAccountAttributes(input *DescribeAccountAttributesInput) (*DescribeAccountAttributesOutput, error) { + req, out := c.DescribeAccountAttributesRequest(input) + return out, req.Send() +} + +// DescribeAccountAttributesWithContext is the same as DescribeAccountAttributes with the addition of +// the ability to pass a context and additional request options. 
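The WithContext variants defined in this hunk accept any non-nil `aws.Context`, which a plain `context.Context` satisfies. A sketch of deleting a snapshot schedule under a timeout; `ScheduleIdentifier` is an assumed field name, since the input shape is outside this hunk:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/redshift"
)

func main() {
	svc := redshift.New(session.Must(session.NewSession()))

	// A standard context satisfies aws.Context; it must be non-nil.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// ScheduleIdentifier is an assumed field name.
	_, err := svc.DeleteSnapshotScheduleWithContext(ctx, &redshift.DeleteSnapshotScheduleInput{
		ScheduleIdentifier: aws.String("every-12-hours"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("snapshot schedule deleted")
}
```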
+// +// See DescribeAccountAttributes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) DescribeAccountAttributesWithContext(ctx aws.Context, input *DescribeAccountAttributesInput, opts ...request.Option) (*DescribeAccountAttributesOutput, error) { + req, out := c.DescribeAccountAttributesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeClusterDbRevisions = "DescribeClusterDbRevisions" + +// DescribeClusterDbRevisionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeClusterDbRevisions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeClusterDbRevisions for more information on using the DescribeClusterDbRevisions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeClusterDbRevisionsRequest method. +// req, resp := client.DescribeClusterDbRevisionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeClusterDbRevisions +func (c *Redshift) DescribeClusterDbRevisionsRequest(input *DescribeClusterDbRevisionsInput) (req *request.Request, output *DescribeClusterDbRevisionsOutput) { + op := &request.Operation{ + Name: opDescribeClusterDbRevisions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeClusterDbRevisionsInput{} + } + + output = &DescribeClusterDbRevisionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeClusterDbRevisions API operation for Amazon Redshift. +// +// Returns an array of ClusterDbRevision objects. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation DescribeClusterDbRevisions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeClusterNotFoundFault "ClusterNotFound" +// The ClusterIdentifier parameter does not refer to an existing cluster. +// +// * ErrCodeInvalidClusterStateFault "InvalidClusterState" +// The specified cluster is not in the available state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeClusterDbRevisions +func (c *Redshift) DescribeClusterDbRevisions(input *DescribeClusterDbRevisionsInput) (*DescribeClusterDbRevisionsOutput, error) { + req, out := c.DescribeClusterDbRevisionsRequest(input) + return out, req.Send() +} + +// DescribeClusterDbRevisionsWithContext is the same as DescribeClusterDbRevisions with the addition of +// the ability to pass a context and additional request options. 
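DescribeAccountAttributes, added above, returns the attributes attached to an account and can optionally be filtered. A minimal sketch, assuming an `AttributeNames` field on the input and treating the attribute name as illustrative only:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/redshift"
)

func main() {
	svc := redshift.New(session.Must(session.NewSession()))

	// AttributeNames is an assumed field name; leaving it nil would ask for
	// every account attribute. The attribute name shown is only an example.
	out, err := svc.DescribeAccountAttributes(&redshift.DescribeAccountAttributesInput{
		AttributeNames: []*string{aws.String("max-defer-maintenance-duration")},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```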
+// +// See DescribeClusterDbRevisions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) DescribeClusterDbRevisionsWithContext(ctx aws.Context, input *DescribeClusterDbRevisionsInput, opts ...request.Option) (*DescribeClusterDbRevisionsOutput, error) { + req, out := c.DescribeClusterDbRevisionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeClusterParameterGroups = "DescribeClusterParameterGroups" // DescribeClusterParameterGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeClusterParameterGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2472,8 +3183,8 @@ const opDescribeClusterParameters = "DescribeClusterParameters" // DescribeClusterParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeClusterParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2618,8 +3329,8 @@ const opDescribeClusterSecurityGroups = "DescribeClusterSecurityGroups" // DescribeClusterSecurityGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeClusterSecurityGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2773,8 +3484,8 @@ const opDescribeClusterSnapshots = "DescribeClusterSnapshots" // DescribeClusterSnapshotsRequest generates a "aws/request.Request" representing the // client's request for the DescribeClusterSnapshots operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2928,8 +3639,8 @@ const opDescribeClusterSubnetGroups = "DescribeClusterSubnetGroups" // DescribeClusterSubnetGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribeClusterSubnetGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3075,72 +3786,154 @@ func (c *Redshift) DescribeClusterSubnetGroupsPagesWithContext(ctx aws.Context, return p.Err() } -const opDescribeClusterVersions = "DescribeClusterVersions" +const opDescribeClusterTracks = "DescribeClusterTracks" -// DescribeClusterVersionsRequest generates a "aws/request.Request" representing the -// client's request for the DescribeClusterVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeClusterTracksRequest generates a "aws/request.Request" representing the +// client's request for the DescribeClusterTracks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeClusterVersions for more information on using the DescribeClusterVersions +// See DescribeClusterTracks for more information on using the DescribeClusterTracks // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeClusterVersionsRequest method. -// req, resp := client.DescribeClusterVersionsRequest(params) +// // Example sending a request using the DescribeClusterTracksRequest method. +// req, resp := client.DescribeClusterTracksRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeClusterVersions -func (c *Redshift) DescribeClusterVersionsRequest(input *DescribeClusterVersionsInput) (req *request.Request, output *DescribeClusterVersionsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeClusterTracks +func (c *Redshift) DescribeClusterTracksRequest(input *DescribeClusterTracksInput) (req *request.Request, output *DescribeClusterTracksOutput) { op := &request.Operation{ - Name: opDescribeClusterVersions, + Name: opDescribeClusterTracks, HTTPMethod: "POST", HTTPPath: "/", - Paginator: &request.Paginator{ - InputTokens: []string{"Marker"}, - OutputTokens: []string{"Marker"}, - LimitToken: "MaxRecords", - TruncationToken: "", - }, } if input == nil { - input = &DescribeClusterVersionsInput{} + input = &DescribeClusterTracksInput{} } - output = &DescribeClusterVersionsOutput{} + output = &DescribeClusterTracksOutput{} req = c.newRequest(op, input, output) return } -// DescribeClusterVersions API operation for Amazon Redshift. +// DescribeClusterTracks API operation for Amazon Redshift. // -// Returns descriptions of the available Amazon Redshift cluster versions. You -// can call this operation even before creating any clusters to learn more about -// the Amazon Redshift versions. For more information about managing clusters, -// go to Amazon Redshift Clusters (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) -// in the Amazon Redshift Cluster Management Guide. 
+// Returns a list of all the available maintenance tracks. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Redshift's -// API operation DescribeClusterVersions for usage and error information. -// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeClusterVersions -func (c *Redshift) DescribeClusterVersions(input *DescribeClusterVersionsInput) (*DescribeClusterVersionsOutput, error) { - req, out := c.DescribeClusterVersionsRequest(input) - return out, req.Send() +// API operation DescribeClusterTracks for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidClusterTrackFault "InvalidClusterTrack" +// The provided cluster track name is not valid. +// +// * ErrCodeUnauthorizedOperation "UnauthorizedOperation" +// Your account is not authorized to perform the requested operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeClusterTracks +func (c *Redshift) DescribeClusterTracks(input *DescribeClusterTracksInput) (*DescribeClusterTracksOutput, error) { + req, out := c.DescribeClusterTracksRequest(input) + return out, req.Send() +} + +// DescribeClusterTracksWithContext is the same as DescribeClusterTracks with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeClusterTracks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) DescribeClusterTracksWithContext(ctx aws.Context, input *DescribeClusterTracksInput, opts ...request.Option) (*DescribeClusterTracksOutput, error) { + req, out := c.DescribeClusterTracksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeClusterVersions = "DescribeClusterVersions" + +// DescribeClusterVersionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeClusterVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeClusterVersions for more information on using the DescribeClusterVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeClusterVersionsRequest method. 
+// req, resp := client.DescribeClusterVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeClusterVersions +func (c *Redshift) DescribeClusterVersionsRequest(input *DescribeClusterVersionsInput) (req *request.Request, output *DescribeClusterVersionsOutput) { + op := &request.Operation{ + Name: opDescribeClusterVersions, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxRecords", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeClusterVersionsInput{} + } + + output = &DescribeClusterVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeClusterVersions API operation for Amazon Redshift. +// +// Returns descriptions of the available Amazon Redshift cluster versions. You +// can call this operation even before creating any clusters to learn more about +// the Amazon Redshift versions. For more information about managing clusters, +// go to Amazon Redshift Clusters (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) +// in the Amazon Redshift Cluster Management Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation DescribeClusterVersions for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeClusterVersions +func (c *Redshift) DescribeClusterVersions(input *DescribeClusterVersionsInput) (*DescribeClusterVersionsOutput, error) { + req, out := c.DescribeClusterVersionsRequest(input) + return out, req.Send() } // DescribeClusterVersionsWithContext is the same as DescribeClusterVersions with the addition of @@ -3213,8 +4006,8 @@ const opDescribeClusters = "DescribeClusters" // DescribeClustersRequest generates a "aws/request.Request" representing the // client's request for the DescribeClusters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3364,8 +4157,8 @@ const opDescribeDefaultClusterParameters = "DescribeDefaultClusterParameters" // DescribeDefaultClusterParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeDefaultClusterParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3498,8 +4291,8 @@ const opDescribeEventCategories = "DescribeEventCategories" // DescribeEventCategoriesRequest generates a "aws/request.Request" representing the // client's request for the DescribeEventCategories operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3574,8 +4367,8 @@ const opDescribeEventSubscriptions = "DescribeEventSubscriptions" // DescribeEventSubscriptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEventSubscriptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3725,8 +4518,8 @@ const opDescribeEvents = "DescribeEvents" // DescribeEventsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEvents operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3858,8 +4651,8 @@ const opDescribeHsmClientCertificates = "DescribeHsmClientCertificates" // DescribeHsmClientCertificatesRequest generates a "aws/request.Request" representing the // client's request for the DescribeHsmClientCertificates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4008,8 +4801,8 @@ const opDescribeHsmConfigurations = "DescribeHsmConfigurations" // DescribeHsmConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeHsmConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4158,8 +4951,8 @@ const opDescribeLoggingStatus = "DescribeLoggingStatus" // DescribeLoggingStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeLoggingStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -4238,8 +5031,8 @@ const opDescribeOrderableClusterOptions = "DescribeOrderableClusterOptions" // DescribeOrderableClusterOptionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeOrderableClusterOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4376,8 +5169,8 @@ const opDescribeReservedNodeOfferings = "DescribeReservedNodeOfferings" // DescribeReservedNodeOfferingsRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedNodeOfferings operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4527,8 +5320,8 @@ const opDescribeReservedNodes = "DescribeReservedNodes" // DescribeReservedNodesRequest generates a "aws/request.Request" representing the // client's request for the DescribeReservedNodes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4666,8 +5459,8 @@ const opDescribeResize = "DescribeResize" // DescribeResizeRequest generates a "aws/request.Request" representing the // client's request for the DescribeResize operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4754,8 +5547,8 @@ const opDescribeSnapshotCopyGrants = "DescribeSnapshotCopyGrants" // DescribeSnapshotCopyGrantsRequest generates a "aws/request.Request" representing the // client's request for the DescribeSnapshotCopyGrants operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4838,12 +5631,161 @@ func (c *Redshift) DescribeSnapshotCopyGrantsWithContext(ctx aws.Context, input return out, req.Send() } +const opDescribeSnapshotSchedules = "DescribeSnapshotSchedules" + +// DescribeSnapshotSchedulesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSnapshotSchedules operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeSnapshotSchedules for more information on using the DescribeSnapshotSchedules +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeSnapshotSchedulesRequest method. +// req, resp := client.DescribeSnapshotSchedulesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeSnapshotSchedules +func (c *Redshift) DescribeSnapshotSchedulesRequest(input *DescribeSnapshotSchedulesInput) (req *request.Request, output *DescribeSnapshotSchedulesOutput) { + op := &request.Operation{ + Name: opDescribeSnapshotSchedules, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeSnapshotSchedulesInput{} + } + + output = &DescribeSnapshotSchedulesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeSnapshotSchedules API operation for Amazon Redshift. +// +// Returns a list of snapshot schedules. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation DescribeSnapshotSchedules for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeSnapshotSchedules +func (c *Redshift) DescribeSnapshotSchedules(input *DescribeSnapshotSchedulesInput) (*DescribeSnapshotSchedulesOutput, error) { + req, out := c.DescribeSnapshotSchedulesRequest(input) + return out, req.Send() +} + +// DescribeSnapshotSchedulesWithContext is the same as DescribeSnapshotSchedules with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSnapshotSchedules for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) DescribeSnapshotSchedulesWithContext(ctx aws.Context, input *DescribeSnapshotSchedulesInput, opts ...request.Option) (*DescribeSnapshotSchedulesOutput, error) { + req, out := c.DescribeSnapshotSchedulesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeStorage = "DescribeStorage" + +// DescribeStorageRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStorage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See DescribeStorage for more information on using the DescribeStorage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStorageRequest method. +// req, resp := client.DescribeStorageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeStorage +func (c *Redshift) DescribeStorageRequest(input *DescribeStorageInput) (req *request.Request, output *DescribeStorageOutput) { + op := &request.Operation{ + Name: opDescribeStorage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStorageInput{} + } + + output = &DescribeStorageOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStorage API operation for Amazon Redshift. +// +// Returns the total amount of snapshot usage and provisioned storage for a +// user in megabytes. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation DescribeStorage for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/DescribeStorage +func (c *Redshift) DescribeStorage(input *DescribeStorageInput) (*DescribeStorageOutput, error) { + req, out := c.DescribeStorageRequest(input) + return out, req.Send() +} + +// DescribeStorageWithContext is the same as DescribeStorage with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStorage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) DescribeStorageWithContext(ctx aws.Context, input *DescribeStorageInput, opts ...request.Option) (*DescribeStorageOutput, error) { + req, out := c.DescribeStorageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeTableRestoreStatus = "DescribeTableRestoreStatus" // DescribeTableRestoreStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeTableRestoreStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4928,8 +5870,8 @@ const opDescribeTags = "DescribeTags" // DescribeTagsRequest generates a "aws/request.Request" representing the // client's request for the DescribeTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
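DescribeStorage, added above, reports account-level snapshot usage and provisioned storage in megabytes and, as far as this hunk shows, takes no parameters. A minimal sketch:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/redshift"
)

func main() {
	svc := redshift.New(session.Must(session.NewSession()))

	// No required fields are set here; the call returns account-wide totals.
	out, err := svc.DescribeStorage(&redshift.DescribeStorageInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```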
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5032,8 +5974,8 @@ const opDisableLogging = "DisableLogging" // DisableLoggingRequest generates a "aws/request.Request" representing the // client's request for the DisableLogging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5112,8 +6054,8 @@ const opDisableSnapshotCopy = "DisableSnapshotCopy" // DisableSnapshotCopyRequest generates a "aws/request.Request" representing the // client's request for the DisableSnapshotCopy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5205,8 +6147,8 @@ const opEnableLogging = "EnableLogging" // EnableLoggingRequest generates a "aws/request.Request" representing the // client's request for the EnableLogging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5301,8 +6243,8 @@ const opEnableSnapshotCopy = "EnableSnapshotCopy" // EnableSnapshotCopyRequest generates a "aws/request.Request" representing the // client's request for the EnableSnapshotCopy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5384,6 +6326,11 @@ func (c *Redshift) EnableSnapshotCopyRequest(input *EnableSnapshotCopyInput) (re // The request cannot be completed because a dependent service is throttling // requests made by Amazon Redshift on your behalf. Wait and retry the request. // +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. +// +// The value must be either -1 or an integer between 1 and 3,653. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/EnableSnapshotCopy func (c *Redshift) EnableSnapshotCopy(input *EnableSnapshotCopyInput) (*EnableSnapshotCopyOutput, error) { req, out := c.EnableSnapshotCopyRequest(input) @@ -5410,8 +6357,8 @@ const opGetClusterCredentials = "GetClusterCredentials" // GetClusterCredentialsRequest generates a "aws/request.Request" representing the // client's request for the GetClusterCredentials operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5511,108 +6458,204 @@ func (c *Redshift) GetClusterCredentialsWithContext(ctx aws.Context, input *GetC return out, req.Send() } -const opModifyCluster = "ModifyCluster" +const opGetReservedNodeExchangeOfferings = "GetReservedNodeExchangeOfferings" -// ModifyClusterRequest generates a "aws/request.Request" representing the -// client's request for the ModifyCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetReservedNodeExchangeOfferingsRequest generates a "aws/request.Request" representing the +// client's request for the GetReservedNodeExchangeOfferings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ModifyCluster for more information on using the ModifyCluster +// See GetReservedNodeExchangeOfferings for more information on using the GetReservedNodeExchangeOfferings // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ModifyClusterRequest method. -// req, resp := client.ModifyClusterRequest(params) +// // Example sending a request using the GetReservedNodeExchangeOfferingsRequest method. +// req, resp := client.GetReservedNodeExchangeOfferingsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyCluster -func (c *Redshift) ModifyClusterRequest(input *ModifyClusterInput) (req *request.Request, output *ModifyClusterOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/GetReservedNodeExchangeOfferings +func (c *Redshift) GetReservedNodeExchangeOfferingsRequest(input *GetReservedNodeExchangeOfferingsInput) (req *request.Request, output *GetReservedNodeExchangeOfferingsOutput) { op := &request.Operation{ - Name: opModifyCluster, + Name: opGetReservedNodeExchangeOfferings, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ModifyClusterInput{} + input = &GetReservedNodeExchangeOfferingsInput{} } - output = &ModifyClusterOutput{} + output = &GetReservedNodeExchangeOfferingsOutput{} req = c.newRequest(op, input, output) return } -// ModifyCluster API operation for Amazon Redshift. -// -// Modifies the settings for a cluster. For example, you can add another security -// or parameter group, update the preferred maintenance window, or change the -// master user password. Resetting a cluster password or modifying the security -// groups associated with a cluster do not need a reboot. However, modifying -// a parameter group requires a reboot for parameters to take effect. 
For more -// information about managing clusters, go to Amazon Redshift Clusters (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) -// in the Amazon Redshift Cluster Management Guide. +// GetReservedNodeExchangeOfferings API operation for Amazon Redshift. // -// You can also change node type and the number of nodes to scale up or down -// the cluster. When resizing a cluster, you must specify both the number of -// nodes and the node type even if one of the parameters does not change. +// Returns an array of DC2 ReservedNodeOfferings that matches the payment type, +// term, and usage price of the given DC1 reserved node. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Redshift's -// API operation ModifyCluster for usage and error information. +// API operation GetReservedNodeExchangeOfferings for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidClusterStateFault "InvalidClusterState" -// The specified cluster is not in the available state. -// -// * ErrCodeInvalidClusterSecurityGroupStateFault "InvalidClusterSecurityGroupState" -// The state of the cluster security group is not available. -// -// * ErrCodeClusterNotFoundFault "ClusterNotFound" -// The ClusterIdentifier parameter does not refer to an existing cluster. -// -// * ErrCodeNumberOfNodesQuotaExceededFault "NumberOfNodesQuotaExceeded" -// The operation would exceed the number of nodes allotted to the account. For -// information about increasing your quota, go to Limits in Amazon Redshift -// (http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) -// in the Amazon Redshift Cluster Management Guide. +// * ErrCodeReservedNodeNotFoundFault "ReservedNodeNotFound" +// The specified reserved compute node not found. // -// * ErrCodeNumberOfNodesPerClusterLimitExceededFault "NumberOfNodesPerClusterLimitExceeded" -// The operation would exceed the number of nodes allowed for a cluster. +// * ErrCodeInvalidReservedNodeStateFault "InvalidReservedNodeState" +// Indicates that the Reserved Node being exchanged is not in an active state. // -// * ErrCodeClusterSecurityGroupNotFoundFault "ClusterSecurityGroupNotFound" -// The cluster security group name does not refer to an existing cluster security -// group. +// * ErrCodeReservedNodeAlreadyMigratedFault "ReservedNodeAlreadyMigrated" +// Indicates that the reserved node has already been exchanged. // -// * ErrCodeClusterParameterGroupNotFoundFault "ClusterParameterGroupNotFound" -// The parameter group name does not refer to an existing parameter group. +// * ErrCodeReservedNodeOfferingNotFoundFault "ReservedNodeOfferingNotFound" +// Specified offering does not exist. // -// * ErrCodeInsufficientClusterCapacityFault "InsufficientClusterCapacity" -// The number of nodes specified exceeds the allotted capacity of the cluster. +// * ErrCodeUnsupportedOperationFault "UnsupportedOperation" +// The requested operation isn't supported. // -// * ErrCodeUnsupportedOptionFault "UnsupportedOptionFault" -// A request option was specified that is not supported. +// * ErrCodeDependentServiceUnavailableFault "DependentServiceUnavailableFault" +// Your request cannot be completed because a dependent internal service is +// temporarily unavailable. Wait 30 to 60 seconds and try again. 
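GetReservedNodeExchangeOfferings maps an existing DC1 reserved node to the DC2 offerings it can be exchanged for. A sketch using the Request/Send pattern shown in the generated doc comments; `ReservedNodeId` is an assumed field name and its value a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/redshift"
)

func main() {
	svc := redshift.New(session.Must(session.NewSession()))

	// ReservedNodeId is an assumed field name identifying the DC1 reserved node.
	req, resp := svc.GetReservedNodeExchangeOfferingsRequest(&redshift.GetReservedNodeExchangeOfferingsInput{
		ReservedNodeId: aws.String("12345678-1234-1234-1234-123456789012"),
	})

	// Send the request; resp is only valid once Send returns without error.
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```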
// -// * ErrCodeUnauthorizedOperation "UnauthorizedOperation" -// Your account is not authorized to perform the requested operation. +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/GetReservedNodeExchangeOfferings +func (c *Redshift) GetReservedNodeExchangeOfferings(input *GetReservedNodeExchangeOfferingsInput) (*GetReservedNodeExchangeOfferingsOutput, error) { + req, out := c.GetReservedNodeExchangeOfferingsRequest(input) + return out, req.Send() +} + +// GetReservedNodeExchangeOfferingsWithContext is the same as GetReservedNodeExchangeOfferings with the addition of +// the ability to pass a context and additional request options. // -// * ErrCodeHsmClientCertificateNotFoundFault "HsmClientCertificateNotFoundFault" -// There is no Amazon Redshift HSM client certificate with the specified identifier. +// See GetReservedNodeExchangeOfferings for details on how to use this API operation. // -// * ErrCodeHsmConfigurationNotFoundFault "HsmConfigurationNotFoundFault" +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) GetReservedNodeExchangeOfferingsWithContext(ctx aws.Context, input *GetReservedNodeExchangeOfferingsInput, opts ...request.Option) (*GetReservedNodeExchangeOfferingsOutput, error) { + req, out := c.GetReservedNodeExchangeOfferingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyCluster = "ModifyCluster" + +// ModifyClusterRequest generates a "aws/request.Request" representing the +// client's request for the ModifyCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyCluster for more information on using the ModifyCluster +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyClusterRequest method. +// req, resp := client.ModifyClusterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyCluster +func (c *Redshift) ModifyClusterRequest(input *ModifyClusterInput) (req *request.Request, output *ModifyClusterOutput) { + op := &request.Operation{ + Name: opModifyCluster, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyClusterInput{} + } + + output = &ModifyClusterOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyCluster API operation for Amazon Redshift. +// +// Modifies the settings for a cluster. For example, you can add another security +// or parameter group, update the preferred maintenance window, or change the +// master user password. Resetting a cluster password or modifying the security +// groups associated with a cluster do not need a reboot. However, modifying +// a parameter group requires a reboot for parameters to take effect. 
For more +// information about managing clusters, go to Amazon Redshift Clusters (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) +// in the Amazon Redshift Cluster Management Guide. +// +// You can also change node type and the number of nodes to scale up or down +// the cluster. When resizing a cluster, you must specify both the number of +// nodes and the node type even if one of the parameters does not change. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation ModifyCluster for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidClusterStateFault "InvalidClusterState" +// The specified cluster is not in the available state. +// +// * ErrCodeInvalidClusterSecurityGroupStateFault "InvalidClusterSecurityGroupState" +// The state of the cluster security group is not available. +// +// * ErrCodeClusterNotFoundFault "ClusterNotFound" +// The ClusterIdentifier parameter does not refer to an existing cluster. +// +// * ErrCodeNumberOfNodesQuotaExceededFault "NumberOfNodesQuotaExceeded" +// The operation would exceed the number of nodes allotted to the account. For +// information about increasing your quota, go to Limits in Amazon Redshift +// (http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) +// in the Amazon Redshift Cluster Management Guide. +// +// * ErrCodeNumberOfNodesPerClusterLimitExceededFault "NumberOfNodesPerClusterLimitExceeded" +// The operation would exceed the number of nodes allowed for a cluster. +// +// * ErrCodeClusterSecurityGroupNotFoundFault "ClusterSecurityGroupNotFound" +// The cluster security group name does not refer to an existing cluster security +// group. +// +// * ErrCodeClusterParameterGroupNotFoundFault "ClusterParameterGroupNotFound" +// The parameter group name does not refer to an existing parameter group. +// +// * ErrCodeInsufficientClusterCapacityFault "InsufficientClusterCapacity" +// The number of nodes specified exceeds the allotted capacity of the cluster. +// +// * ErrCodeUnsupportedOptionFault "UnsupportedOptionFault" +// A request option was specified that is not supported. +// +// * ErrCodeUnauthorizedOperation "UnauthorizedOperation" +// Your account is not authorized to perform the requested operation. +// +// * ErrCodeHsmClientCertificateNotFoundFault "HsmClientCertificateNotFoundFault" +// There is no Amazon Redshift HSM client certificate with the specified identifier. +// +// * ErrCodeHsmConfigurationNotFoundFault "HsmConfigurationNotFoundFault" // There is no Amazon Redshift HSM configuration with the specified identifier. // // * ErrCodeClusterAlreadyExistsFault "ClusterAlreadyExists" @@ -5628,6 +6671,18 @@ func (c *Redshift) ModifyClusterRequest(input *ModifyClusterInput) (req *request // * ErrCodeInvalidElasticIpFault "InvalidElasticIpFault" // The Elastic IP (EIP) is invalid or cannot be found. // +// * ErrCodeTableLimitExceededFault "TableLimitExceeded" +// The number of tables in the cluster exceeds the limit for the requested new +// cluster node type. +// +// * ErrCodeInvalidClusterTrackFault "InvalidClusterTrack" +// The provided cluster track name is not valid. +// +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. 
+// +// The value must be either -1 or an integer between 1 and 3,653. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyCluster func (c *Redshift) ModifyCluster(input *ModifyClusterInput) (*ModifyClusterOutput, error) { req, out := c.ModifyClusterRequest(input) @@ -5650,12 +6705,98 @@ func (c *Redshift) ModifyClusterWithContext(ctx aws.Context, input *ModifyCluste return out, req.Send() } +const opModifyClusterDbRevision = "ModifyClusterDbRevision" + +// ModifyClusterDbRevisionRequest generates a "aws/request.Request" representing the +// client's request for the ModifyClusterDbRevision operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyClusterDbRevision for more information on using the ModifyClusterDbRevision +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyClusterDbRevisionRequest method. +// req, resp := client.ModifyClusterDbRevisionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterDbRevision +func (c *Redshift) ModifyClusterDbRevisionRequest(input *ModifyClusterDbRevisionInput) (req *request.Request, output *ModifyClusterDbRevisionOutput) { + op := &request.Operation{ + Name: opModifyClusterDbRevision, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyClusterDbRevisionInput{} + } + + output = &ModifyClusterDbRevisionOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyClusterDbRevision API operation for Amazon Redshift. +// +// Modifies the database revision of a cluster. The database revision is a unique +// revision of the database running in a cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation ModifyClusterDbRevision for usage and error information. +// +// Returned Error Codes: +// * ErrCodeClusterNotFoundFault "ClusterNotFound" +// The ClusterIdentifier parameter does not refer to an existing cluster. +// +// * ErrCodeClusterOnLatestRevisionFault "ClusterOnLatestRevision" +// Cluster is already on the latest database revision. +// +// * ErrCodeInvalidClusterStateFault "InvalidClusterState" +// The specified cluster is not in the available state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterDbRevision +func (c *Redshift) ModifyClusterDbRevision(input *ModifyClusterDbRevisionInput) (*ModifyClusterDbRevisionOutput, error) { + req, out := c.ModifyClusterDbRevisionRequest(input) + return out, req.Send() +} + +// ModifyClusterDbRevisionWithContext is the same as ModifyClusterDbRevision with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyClusterDbRevision for details on how to use this API operation. 
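+//
+//    // A minimal usage sketch, assuming a configured *Redshift client named
+//    // client and a populated ModifyClusterDbRevisionInput named params:
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    out, err := client.ModifyClusterDbRevisionWithContext(ctx, params)
+//    if err != nil {
+//        // Inspect err with awserr.Error type assertions for the error codes
+//        // documented on ModifyClusterDbRevision.
+//    }
+//    fmt.Println(out)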
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) ModifyClusterDbRevisionWithContext(ctx aws.Context, input *ModifyClusterDbRevisionInput, opts ...request.Option) (*ModifyClusterDbRevisionOutput, error) { + req, out := c.ModifyClusterDbRevisionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opModifyClusterIamRoles = "ModifyClusterIamRoles" // ModifyClusterIamRolesRequest generates a "aws/request.Request" representing the // client's request for the ModifyClusterIamRoles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5735,12 +6876,92 @@ func (c *Redshift) ModifyClusterIamRolesWithContext(ctx aws.Context, input *Modi return out, req.Send() } +const opModifyClusterMaintenance = "ModifyClusterMaintenance" + +// ModifyClusterMaintenanceRequest generates a "aws/request.Request" representing the +// client's request for the ModifyClusterMaintenance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyClusterMaintenance for more information on using the ModifyClusterMaintenance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyClusterMaintenanceRequest method. +// req, resp := client.ModifyClusterMaintenanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterMaintenance +func (c *Redshift) ModifyClusterMaintenanceRequest(input *ModifyClusterMaintenanceInput) (req *request.Request, output *ModifyClusterMaintenanceOutput) { + op := &request.Operation{ + Name: opModifyClusterMaintenance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyClusterMaintenanceInput{} + } + + output = &ModifyClusterMaintenanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyClusterMaintenance API operation for Amazon Redshift. +// +// Modifies the maintenance settings of a cluster. For example, you can defer +// a maintenance window. You can also update or cancel a deferment. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation ModifyClusterMaintenance for usage and error information. 
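+//
+//    // A minimal sketch of deferring an upcoming maintenance window, assuming a
+//    // configured client; the DeferMaintenance field name is an assumption, see
+//    // ModifyClusterMaintenanceInput for the exact fields:
+//    out, err := client.ModifyClusterMaintenance(&ModifyClusterMaintenanceInput{
+//        ClusterIdentifier: aws.String("my-cluster"),
+//        DeferMaintenance:  aws.Bool(true),
+//    })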
+// +// Returned Error Codes: +// * ErrCodeClusterNotFoundFault "ClusterNotFound" +// The ClusterIdentifier parameter does not refer to an existing cluster. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterMaintenance +func (c *Redshift) ModifyClusterMaintenance(input *ModifyClusterMaintenanceInput) (*ModifyClusterMaintenanceOutput, error) { + req, out := c.ModifyClusterMaintenanceRequest(input) + return out, req.Send() +} + +// ModifyClusterMaintenanceWithContext is the same as ModifyClusterMaintenance with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyClusterMaintenance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) ModifyClusterMaintenanceWithContext(ctx aws.Context, input *ModifyClusterMaintenanceInput, opts ...request.Option) (*ModifyClusterMaintenanceOutput, error) { + req, out := c.ModifyClusterMaintenanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opModifyClusterParameterGroup = "ModifyClusterParameterGroup" // ModifyClusterParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the ModifyClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5823,71 +7044,246 @@ func (c *Redshift) ModifyClusterParameterGroupWithContext(ctx aws.Context, input return out, req.Send() } -const opModifyClusterSubnetGroup = "ModifyClusterSubnetGroup" +const opModifyClusterSnapshot = "ModifyClusterSnapshot" -// ModifyClusterSubnetGroupRequest generates a "aws/request.Request" representing the -// client's request for the ModifyClusterSubnetGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ModifyClusterSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the ModifyClusterSnapshot operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ModifyClusterSubnetGroup for more information on using the ModifyClusterSubnetGroup +// See ModifyClusterSnapshot for more information on using the ModifyClusterSnapshot // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ModifyClusterSubnetGroupRequest method. -// req, resp := client.ModifyClusterSubnetGroupRequest(params) +// // Example sending a request using the ModifyClusterSnapshotRequest method. 
+// req, resp := client.ModifyClusterSnapshotRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterSubnetGroup -func (c *Redshift) ModifyClusterSubnetGroupRequest(input *ModifyClusterSubnetGroupInput) (req *request.Request, output *ModifyClusterSubnetGroupOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterSnapshot +func (c *Redshift) ModifyClusterSnapshotRequest(input *ModifyClusterSnapshotInput) (req *request.Request, output *ModifyClusterSnapshotOutput) { op := &request.Operation{ - Name: opModifyClusterSubnetGroup, + Name: opModifyClusterSnapshot, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ModifyClusterSubnetGroupInput{} + input = &ModifyClusterSnapshotInput{} } - output = &ModifyClusterSubnetGroupOutput{} + output = &ModifyClusterSnapshotOutput{} req = c.newRequest(op, input, output) return } -// ModifyClusterSubnetGroup API operation for Amazon Redshift. +// ModifyClusterSnapshot API operation for Amazon Redshift. // -// Modifies a cluster subnet group to include the specified list of VPC subnets. -// The operation replaces the existing list of subnets with the new list of -// subnets. +// Modifies the settings for a snapshot. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Redshift's -// API operation ModifyClusterSubnetGroup for usage and error information. +// API operation ModifyClusterSnapshot for usage and error information. // // Returned Error Codes: -// * ErrCodeClusterSubnetGroupNotFoundFault "ClusterSubnetGroupNotFoundFault" -// The cluster subnet group name does not refer to an existing cluster subnet -// group. +// * ErrCodeInvalidClusterSnapshotStateFault "InvalidClusterSnapshotState" +// The specified cluster snapshot is not in the available state, or other accounts +// are authorized to access the snapshot. // -// * ErrCodeClusterSubnetQuotaExceededFault "ClusterSubnetQuotaExceededFault" -// The request would result in user exceeding the allowed number of subnets -// in a cluster subnet groups. For information about increasing your quota, -// go to Limits in Amazon Redshift (http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) -// in the Amazon Redshift Cluster Management Guide. +// * ErrCodeClusterSnapshotNotFoundFault "ClusterSnapshotNotFound" +// The snapshot identifier does not refer to an existing cluster snapshot. +// +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. +// +// The value must be either -1 or an integer between 1 and 3,653. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterSnapshot +func (c *Redshift) ModifyClusterSnapshot(input *ModifyClusterSnapshotInput) (*ModifyClusterSnapshotOutput, error) { + req, out := c.ModifyClusterSnapshotRequest(input) + return out, req.Send() +} + +// ModifyClusterSnapshotWithContext is the same as ModifyClusterSnapshot with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyClusterSnapshot for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) ModifyClusterSnapshotWithContext(ctx aws.Context, input *ModifyClusterSnapshotInput, opts ...request.Option) (*ModifyClusterSnapshotOutput, error) { + req, out := c.ModifyClusterSnapshotRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyClusterSnapshotSchedule = "ModifyClusterSnapshotSchedule" + +// ModifyClusterSnapshotScheduleRequest generates a "aws/request.Request" representing the +// client's request for the ModifyClusterSnapshotSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyClusterSnapshotSchedule for more information on using the ModifyClusterSnapshotSchedule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyClusterSnapshotScheduleRequest method. +// req, resp := client.ModifyClusterSnapshotScheduleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterSnapshotSchedule +func (c *Redshift) ModifyClusterSnapshotScheduleRequest(input *ModifyClusterSnapshotScheduleInput) (req *request.Request, output *ModifyClusterSnapshotScheduleOutput) { + op := &request.Operation{ + Name: opModifyClusterSnapshotSchedule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyClusterSnapshotScheduleInput{} + } + + output = &ModifyClusterSnapshotScheduleOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(query.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// ModifyClusterSnapshotSchedule API operation for Amazon Redshift. +// +// Modifies a snapshot schedule for a cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation ModifyClusterSnapshotSchedule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeClusterNotFoundFault "ClusterNotFound" +// The ClusterIdentifier parameter does not refer to an existing cluster. +// +// * ErrCodeSnapshotScheduleNotFoundFault "SnapshotScheduleNotFound" +// We could not find the specified snapshot schedule. +// +// * ErrCodeInvalidClusterSnapshotScheduleStateFault "InvalidClusterSnapshotScheduleState" +// The cluster snapshot schedule state is not valid. 
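+//
+//    // A minimal sketch of attaching a snapshot schedule to a cluster, assuming
+//    // a configured client; the ScheduleIdentifier field name is an assumption,
+//    // see ModifyClusterSnapshotScheduleInput for the exact fields:
+//    _, err := client.ModifyClusterSnapshotSchedule(&ModifyClusterSnapshotScheduleInput{
+//        ClusterIdentifier:  aws.String("my-cluster"),
+//        ScheduleIdentifier: aws.String("my-snapshot-schedule"),
+//    })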
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterSnapshotSchedule +func (c *Redshift) ModifyClusterSnapshotSchedule(input *ModifyClusterSnapshotScheduleInput) (*ModifyClusterSnapshotScheduleOutput, error) { + req, out := c.ModifyClusterSnapshotScheduleRequest(input) + return out, req.Send() +} + +// ModifyClusterSnapshotScheduleWithContext is the same as ModifyClusterSnapshotSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyClusterSnapshotSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) ModifyClusterSnapshotScheduleWithContext(ctx aws.Context, input *ModifyClusterSnapshotScheduleInput, opts ...request.Option) (*ModifyClusterSnapshotScheduleOutput, error) { + req, out := c.ModifyClusterSnapshotScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyClusterSubnetGroup = "ModifyClusterSubnetGroup" + +// ModifyClusterSubnetGroupRequest generates a "aws/request.Request" representing the +// client's request for the ModifyClusterSubnetGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyClusterSubnetGroup for more information on using the ModifyClusterSubnetGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyClusterSubnetGroupRequest method. +// req, resp := client.ModifyClusterSubnetGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifyClusterSubnetGroup +func (c *Redshift) ModifyClusterSubnetGroupRequest(input *ModifyClusterSubnetGroupInput) (req *request.Request, output *ModifyClusterSubnetGroupOutput) { + op := &request.Operation{ + Name: opModifyClusterSubnetGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyClusterSubnetGroupInput{} + } + + output = &ModifyClusterSubnetGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyClusterSubnetGroup API operation for Amazon Redshift. +// +// Modifies a cluster subnet group to include the specified list of VPC subnets. +// The operation replaces the existing list of subnets with the new list of +// subnets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation ModifyClusterSubnetGroup for usage and error information. 
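+//
+//    // A minimal sketch, assuming a configured client; the field names below are
+//    // assumptions, see ModifyClusterSubnetGroupInput for the exact fields. The
+//    // subnets supplied here replace the group's existing subnet list entirely:
+//    out, err := client.ModifyClusterSubnetGroup(&ModifyClusterSubnetGroupInput{
+//        ClusterSubnetGroupName: aws.String("my-subnet-group"),
+//        SubnetIds:              []*string{aws.String("subnet-0123456789abcdef0")},
+//    })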
+// +// Returned Error Codes: +// * ErrCodeClusterSubnetGroupNotFoundFault "ClusterSubnetGroupNotFoundFault" +// The cluster subnet group name does not refer to an existing cluster subnet +// group. +// +// * ErrCodeClusterSubnetQuotaExceededFault "ClusterSubnetQuotaExceededFault" +// The request would result in user exceeding the allowed number of subnets +// in a cluster subnet groups. For information about increasing your quota, +// go to Limits in Amazon Redshift (http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) +// in the Amazon Redshift Cluster Management Guide. // // * ErrCodeSubnetAlreadyInUse "SubnetAlreadyInUse" // A specified subnet is already in use by another cluster. @@ -5929,8 +7325,8 @@ const opModifyEventSubscription = "ModifyEventSubscription" // ModifyEventSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the ModifyEventSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6040,8 +7436,8 @@ const opModifySnapshotCopyRetentionPeriod = "ModifySnapshotCopyRetentionPeriod" // ModifySnapshotCopyRetentionPeriodRequest generates a "aws/request.Request" representing the // client's request for the ModifySnapshotCopyRetentionPeriod operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6080,8 +7476,13 @@ func (c *Redshift) ModifySnapshotCopyRetentionPeriodRequest(input *ModifySnapsho // ModifySnapshotCopyRetentionPeriod API operation for Amazon Redshift. // -// Modifies the number of days to retain automated snapshots in the destination -// region after they are copied from the source region. +// Modifies the number of days to retain snapshots in the destination region +// after they are copied from the source region. By default, this only changes +// the retention period of copied automated snapshots. The retention periods +// for both new and existing copied automated snapshots will be updated with +// the new retention period. You can set the manual option to change only the +// retention periods of copied manual snapshots. If you set this option only +// newly copied manual snapshots will have the new retention period // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6103,6 +7504,11 @@ func (c *Redshift) ModifySnapshotCopyRetentionPeriodRequest(input *ModifySnapsho // * ErrCodeInvalidClusterStateFault "InvalidClusterState" // The specified cluster is not in the available state. // +// * ErrCodeInvalidRetentionPeriodFault "InvalidRetentionPeriodFault" +// The retention period specified is either in the past or is not a valide value. +// +// The value must be either -1 or an integer between 1 and 3,653. 
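+//
+//    // A minimal sketch of shortening the retention of copied manual snapshots,
+//    // assuming a configured client; the RetentionPeriod and Manual field names
+//    // are assumptions, see ModifySnapshotCopyRetentionPeriodInput for the exact
+//    // fields:
+//    out, err := client.ModifySnapshotCopyRetentionPeriod(&ModifySnapshotCopyRetentionPeriodInput{
+//        ClusterIdentifier: aws.String("my-cluster"),
+//        RetentionPeriod:   aws.Int64(7),
+//        Manual:            aws.Bool(true), // change only copied manual snapshots
+//    })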
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifySnapshotCopyRetentionPeriod func (c *Redshift) ModifySnapshotCopyRetentionPeriod(input *ModifySnapshotCopyRetentionPeriodInput) (*ModifySnapshotCopyRetentionPeriodOutput, error) { req, out := c.ModifySnapshotCopyRetentionPeriodRequest(input) @@ -6125,12 +7531,98 @@ func (c *Redshift) ModifySnapshotCopyRetentionPeriodWithContext(ctx aws.Context, return out, req.Send() } +const opModifySnapshotSchedule = "ModifySnapshotSchedule" + +// ModifySnapshotScheduleRequest generates a "aws/request.Request" representing the +// client's request for the ModifySnapshotSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifySnapshotSchedule for more information on using the ModifySnapshotSchedule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifySnapshotScheduleRequest method. +// req, resp := client.ModifySnapshotScheduleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifySnapshotSchedule +func (c *Redshift) ModifySnapshotScheduleRequest(input *ModifySnapshotScheduleInput) (req *request.Request, output *ModifySnapshotScheduleOutput) { + op := &request.Operation{ + Name: opModifySnapshotSchedule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifySnapshotScheduleInput{} + } + + output = &ModifySnapshotScheduleOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifySnapshotSchedule API operation for Amazon Redshift. +// +// Modifies a snapshot schedule. Any schedule associate with a cluster will +// be modified asynchronously. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation ModifySnapshotSchedule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidScheduleFault "InvalidSchedule" +// The schedule you submitted isn't valid. +// +// * ErrCodeSnapshotScheduleNotFoundFault "SnapshotScheduleNotFound" +// We could not find the specified snapshot schedule. +// +// * ErrCodeSnapshotScheduleUpdateInProgressFault "SnapshotScheduleUpdateInProgress" +// The specified snapshot schedule is already being updated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ModifySnapshotSchedule +func (c *Redshift) ModifySnapshotSchedule(input *ModifySnapshotScheduleInput) (*ModifySnapshotScheduleOutput, error) { + req, out := c.ModifySnapshotScheduleRequest(input) + return out, req.Send() +} + +// ModifySnapshotScheduleWithContext is the same as ModifySnapshotSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See ModifySnapshotSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) ModifySnapshotScheduleWithContext(ctx aws.Context, input *ModifySnapshotScheduleInput, opts ...request.Option) (*ModifySnapshotScheduleOutput, error) { + req, out := c.ModifySnapshotScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPurchaseReservedNodeOffering = "PurchaseReservedNodeOffering" // PurchaseReservedNodeOfferingRequest generates a "aws/request.Request" representing the // client's request for the PurchaseReservedNodeOffering operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6227,8 +7719,8 @@ const opRebootCluster = "RebootCluster" // RebootClusterRequest generates a "aws/request.Request" representing the // client's request for the RebootCluster operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6315,8 +7807,8 @@ const opResetClusterParameterGroup = "ResetClusterParameterGroup" // ResetClusterParameterGroupRequest generates a "aws/request.Request" representing the // client's request for the ResetClusterParameterGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6398,94 +7890,217 @@ func (c *Redshift) ResetClusterParameterGroupWithContext(ctx aws.Context, input return out, req.Send() } -const opRestoreFromClusterSnapshot = "RestoreFromClusterSnapshot" +const opResizeCluster = "ResizeCluster" -// RestoreFromClusterSnapshotRequest generates a "aws/request.Request" representing the -// client's request for the RestoreFromClusterSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ResizeClusterRequest generates a "aws/request.Request" representing the +// client's request for the ResizeCluster operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RestoreFromClusterSnapshot for more information on using the RestoreFromClusterSnapshot +// See ResizeCluster for more information on using the ResizeCluster // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RestoreFromClusterSnapshotRequest method. -// req, resp := client.RestoreFromClusterSnapshotRequest(params) +// // Example sending a request using the ResizeClusterRequest method. +// req, resp := client.ResizeClusterRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/RestoreFromClusterSnapshot -func (c *Redshift) RestoreFromClusterSnapshotRequest(input *RestoreFromClusterSnapshotInput) (req *request.Request, output *RestoreFromClusterSnapshotOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ResizeCluster +func (c *Redshift) ResizeClusterRequest(input *ResizeClusterInput) (req *request.Request, output *ResizeClusterOutput) { op := &request.Operation{ - Name: opRestoreFromClusterSnapshot, + Name: opResizeCluster, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &RestoreFromClusterSnapshotInput{} + input = &ResizeClusterInput{} } - output = &RestoreFromClusterSnapshotOutput{} + output = &ResizeClusterOutput{} req = c.newRequest(op, input, output) return } -// RestoreFromClusterSnapshot API operation for Amazon Redshift. +// ResizeCluster API operation for Amazon Redshift. // -// Creates a new cluster from a snapshot. By default, Amazon Redshift creates -// the resulting cluster with the same configuration as the original cluster -// from which the snapshot was created, except that the new cluster is created -// with the default cluster security and parameter groups. After Amazon Redshift -// creates the cluster, you can use the ModifyCluster API to associate a different -// security group and different parameter group with the restored cluster. If -// you are using a DS node type, you can also choose to change to another DS -// node type of the same size during restore. +// Changes the size of the cluster. You can change the cluster's type, or change +// the number or type of nodes. The default behavior is to use the elastic resize +// method. With an elastic resize, your cluster is available for read and write +// operations more quickly than with the classic resize method. // -// If you restore a cluster into a VPC, you must provide a cluster subnet group -// where you want the cluster restored. +// Elastic resize operations have the following restrictions: // -// For more information about working with snapshots, go to Amazon Redshift -// Snapshots (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html) -// in the Amazon Redshift Cluster Management Guide. +// * You can only resize clusters of the following types: +// +// dc2.large +// +// dc2.8xlarge +// +// ds2.xlarge +// +// ds2.8xlarge +// +// * The type of nodes that you add must match the node type for the cluster. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Redshift's -// API operation RestoreFromClusterSnapshot for usage and error information. +// API operation ResizeCluster for usage and error information. 
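+//
+//    // A minimal sketch of an elastic resize to four nodes, assuming a configured
+//    // client; the NumberOfNodes and Classic field names are assumptions, see
+//    // ResizeClusterInput for the exact fields:
+//    out, err := client.ResizeCluster(&ResizeClusterInput{
+//        ClusterIdentifier: aws.String("my-cluster"),
+//        NumberOfNodes:     aws.Int64(4),
+//        Classic:           aws.Bool(false), // false (the default) requests an elastic resize
+//    })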
// // Returned Error Codes: -// * ErrCodeAccessToSnapshotDeniedFault "AccessToSnapshotDenied" -// The owner of the specified snapshot has not authorized your account to access -// the snapshot. -// -// * ErrCodeClusterAlreadyExistsFault "ClusterAlreadyExists" -// The account already has a cluster with the given identifier. +// * ErrCodeInvalidClusterStateFault "InvalidClusterState" +// The specified cluster is not in the available state. // -// * ErrCodeClusterSnapshotNotFoundFault "ClusterSnapshotNotFound" -// The snapshot identifier does not refer to an existing cluster snapshot. +// * ErrCodeClusterNotFoundFault "ClusterNotFound" +// The ClusterIdentifier parameter does not refer to an existing cluster. // -// * ErrCodeClusterQuotaExceededFault "ClusterQuotaExceeded" -// The request would exceed the allowed number of cluster instances for this -// account. For information about increasing your quota, go to Limits in Amazon -// Redshift (http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) +// * ErrCodeNumberOfNodesQuotaExceededFault "NumberOfNodesQuotaExceeded" +// The operation would exceed the number of nodes allotted to the account. For +// information about increasing your quota, go to Limits in Amazon Redshift +// (http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) // in the Amazon Redshift Cluster Management Guide. // +// * ErrCodeNumberOfNodesPerClusterLimitExceededFault "NumberOfNodesPerClusterLimitExceeded" +// The operation would exceed the number of nodes allowed for a cluster. +// // * ErrCodeInsufficientClusterCapacityFault "InsufficientClusterCapacity" // The number of nodes specified exceeds the allotted capacity of the cluster. // -// * ErrCodeInvalidClusterSnapshotStateFault "InvalidClusterSnapshotState" +// * ErrCodeUnsupportedOptionFault "UnsupportedOptionFault" +// A request option was specified that is not supported. +// +// * ErrCodeUnsupportedOperationFault "UnsupportedOperation" +// The requested operation isn't supported. +// +// * ErrCodeUnauthorizedOperation "UnauthorizedOperation" +// Your account is not authorized to perform the requested operation. +// +// * ErrCodeLimitExceededFault "LimitExceededFault" +// The encryption key has exceeded its grant limit in AWS KMS. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/ResizeCluster +func (c *Redshift) ResizeCluster(input *ResizeClusterInput) (*ResizeClusterOutput, error) { + req, out := c.ResizeClusterRequest(input) + return out, req.Send() +} + +// ResizeClusterWithContext is the same as ResizeCluster with the addition of +// the ability to pass a context and additional request options. +// +// See ResizeCluster for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Redshift) ResizeClusterWithContext(ctx aws.Context, input *ResizeClusterInput, opts ...request.Option) (*ResizeClusterOutput, error) { + req, out := c.ResizeClusterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRestoreFromClusterSnapshot = "RestoreFromClusterSnapshot" + +// RestoreFromClusterSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the RestoreFromClusterSnapshot operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RestoreFromClusterSnapshot for more information on using the RestoreFromClusterSnapshot +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RestoreFromClusterSnapshotRequest method. +// req, resp := client.RestoreFromClusterSnapshotRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/RestoreFromClusterSnapshot +func (c *Redshift) RestoreFromClusterSnapshotRequest(input *RestoreFromClusterSnapshotInput) (req *request.Request, output *RestoreFromClusterSnapshotOutput) { + op := &request.Operation{ + Name: opRestoreFromClusterSnapshot, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RestoreFromClusterSnapshotInput{} + } + + output = &RestoreFromClusterSnapshotOutput{} + req = c.newRequest(op, input, output) + return +} + +// RestoreFromClusterSnapshot API operation for Amazon Redshift. +// +// Creates a new cluster from a snapshot. By default, Amazon Redshift creates +// the resulting cluster with the same configuration as the original cluster +// from which the snapshot was created, except that the new cluster is created +// with the default cluster security and parameter groups. After Amazon Redshift +// creates the cluster, you can use the ModifyCluster API to associate a different +// security group and different parameter group with the restored cluster. If +// you are using a DS node type, you can also choose to change to another DS +// node type of the same size during restore. +// +// If you restore a cluster into a VPC, you must provide a cluster subnet group +// where you want the cluster restored. +// +// For more information about working with snapshots, go to Amazon Redshift +// Snapshots (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html) +// in the Amazon Redshift Cluster Management Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Redshift's +// API operation RestoreFromClusterSnapshot for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessToSnapshotDeniedFault "AccessToSnapshotDenied" +// The owner of the specified snapshot has not authorized your account to access +// the snapshot. +// +// * ErrCodeClusterAlreadyExistsFault "ClusterAlreadyExists" +// The account already has a cluster with the given identifier. +// +// * ErrCodeClusterSnapshotNotFoundFault "ClusterSnapshotNotFound" +// The snapshot identifier does not refer to an existing cluster snapshot. +// +// * ErrCodeClusterQuotaExceededFault "ClusterQuotaExceeded" +// The request would exceed the allowed number of cluster instances for this +// account. 
For information about increasing your quota, go to Limits in Amazon +// Redshift (http://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) +// in the Amazon Redshift Cluster Management Guide. +// +// * ErrCodeInsufficientClusterCapacityFault "InsufficientClusterCapacity" +// The number of nodes specified exceeds the allotted capacity of the cluster. +// +// * ErrCodeInvalidClusterSnapshotStateFault "InvalidClusterSnapshotState" // The specified cluster snapshot is not in the available state, or other accounts // are authorized to access the snapshot. // @@ -6541,6 +8156,12 @@ func (c *Redshift) RestoreFromClusterSnapshotRequest(input *RestoreFromClusterSn // The request cannot be completed because a dependent service is throttling // requests made by Amazon Redshift on your behalf. Wait and retry the request. // +// * ErrCodeInvalidClusterTrackFault "InvalidClusterTrack" +// The provided cluster track name is not valid. +// +// * ErrCodeSnapshotScheduleNotFoundFault "SnapshotScheduleNotFound" +// We could not find the specified snapshot schedule. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/redshift-2012-12-01/RestoreFromClusterSnapshot func (c *Redshift) RestoreFromClusterSnapshot(input *RestoreFromClusterSnapshotInput) (*RestoreFromClusterSnapshotOutput, error) { req, out := c.RestoreFromClusterSnapshotRequest(input) @@ -6567,8 +8188,8 @@ const opRestoreTableFromClusterSnapshot = "RestoreTableFromClusterSnapshot" // RestoreTableFromClusterSnapshotRequest generates a "aws/request.Request" representing the // client's request for the RestoreTableFromClusterSnapshot operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6679,8 +8300,8 @@ const opRevokeClusterSecurityGroupIngress = "RevokeClusterSecurityGroupIngress" // RevokeClusterSecurityGroupIngressRequest generates a "aws/request.Request" representing the // client's request for the RevokeClusterSecurityGroupIngress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6770,8 +8391,8 @@ const opRevokeSnapshotAccess = "RevokeSnapshotAccess" // RevokeSnapshotAccessRequest generates a "aws/request.Request" representing the // client's request for the RevokeSnapshotAccess operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6863,8 +8484,8 @@ const opRotateEncryptionKey = "RotateEncryptionKey" // RotateEncryptionKeyRequest generates a "aws/request.Request" representing the // client's request for the RotateEncryptionKey operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6945,6 +8566,117 @@ func (c *Redshift) RotateEncryptionKeyWithContext(ctx aws.Context, input *Rotate return out, req.Send() } +type AcceptReservedNodeExchangeInput struct { + _ struct{} `type:"structure"` + + // A string representing the node identifier of the DC1 Reserved Node to be + // exchanged. + // + // ReservedNodeId is a required field + ReservedNodeId *string `type:"string" required:"true"` + + // The unique identifier of the DC2 Reserved Node offering to be used for the + // exchange. You can obtain the value for the parameter by calling GetReservedNodeExchangeOfferings + // + // TargetReservedNodeOfferingId is a required field + TargetReservedNodeOfferingId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s AcceptReservedNodeExchangeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AcceptReservedNodeExchangeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AcceptReservedNodeExchangeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AcceptReservedNodeExchangeInput"} + if s.ReservedNodeId == nil { + invalidParams.Add(request.NewErrParamRequired("ReservedNodeId")) + } + if s.TargetReservedNodeOfferingId == nil { + invalidParams.Add(request.NewErrParamRequired("TargetReservedNodeOfferingId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetReservedNodeId sets the ReservedNodeId field's value. +func (s *AcceptReservedNodeExchangeInput) SetReservedNodeId(v string) *AcceptReservedNodeExchangeInput { + s.ReservedNodeId = &v + return s +} + +// SetTargetReservedNodeOfferingId sets the TargetReservedNodeOfferingId field's value. +func (s *AcceptReservedNodeExchangeInput) SetTargetReservedNodeOfferingId(v string) *AcceptReservedNodeExchangeInput { + s.TargetReservedNodeOfferingId = &v + return s +} + +type AcceptReservedNodeExchangeOutput struct { + _ struct{} `type:"structure"` + + // Describes a reserved node. You can call the DescribeReservedNodeOfferings + // API to obtain the available reserved node offerings. + ExchangedReservedNode *ReservedNode `type:"structure"` +} + +// String returns the string representation +func (s AcceptReservedNodeExchangeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AcceptReservedNodeExchangeOutput) GoString() string { + return s.String() +} + +// SetExchangedReservedNode sets the ExchangedReservedNode field's value. +func (s *AcceptReservedNodeExchangeOutput) SetExchangedReservedNode(v *ReservedNode) *AcceptReservedNodeExchangeOutput { + s.ExchangedReservedNode = v + return s +} + +// A name value pair that describes an aspect of an account. +type AccountAttribute struct { + _ struct{} `type:"structure"` + + // The name of the attribute. + AttributeName *string `type:"string"` + + // A list of attribute values. 
+ AttributeValues []*AttributeValueTarget `locationNameList:"AttributeValueTarget" type:"list"` +} + +// String returns the string representation +func (s AccountAttribute) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccountAttribute) GoString() string { + return s.String() +} + +// SetAttributeName sets the AttributeName field's value. +func (s *AccountAttribute) SetAttributeName(v string) *AccountAttribute { + s.AttributeName = &v + return s +} + +// SetAttributeValues sets the AttributeValues field's value. +func (s *AccountAttribute) SetAttributeValues(v []*AttributeValueTarget) *AccountAttribute { + s.AttributeValues = v + return s +} + // Describes an AWS customer account authorized to restore a snapshot. type AccountWithRestoreAccess struct { _ struct{} `type:"structure"` @@ -6979,6 +8711,30 @@ func (s *AccountWithRestoreAccess) SetAccountId(v string) *AccountWithRestoreAcc return s } +// Describes an attribute value. +type AttributeValueTarget struct { + _ struct{} `type:"structure"` + + // The value of the attribute. + AttributeValue *string `type:"string"` +} + +// String returns the string representation +func (s AttributeValueTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttributeValueTarget) GoString() string { + return s.String() +} + +// SetAttributeValue sets the AttributeValue field's value. +func (s *AttributeValueTarget) SetAttributeValue(v string) *AttributeValueTarget { + s.AttributeValue = &v + return s +} + type AuthorizeClusterSecurityGroupIngressInput struct { _ struct{} `type:"structure"` @@ -7119,76 +8875,469 @@ func (s *AuthorizeSnapshotAccessInput) Validate() error { return nil } -// SetAccountWithRestoreAccess sets the AccountWithRestoreAccess field's value. -func (s *AuthorizeSnapshotAccessInput) SetAccountWithRestoreAccess(v string) *AuthorizeSnapshotAccessInput { - s.AccountWithRestoreAccess = &v +// SetAccountWithRestoreAccess sets the AccountWithRestoreAccess field's value. +func (s *AuthorizeSnapshotAccessInput) SetAccountWithRestoreAccess(v string) *AuthorizeSnapshotAccessInput { + s.AccountWithRestoreAccess = &v + return s +} + +// SetSnapshotClusterIdentifier sets the SnapshotClusterIdentifier field's value. +func (s *AuthorizeSnapshotAccessInput) SetSnapshotClusterIdentifier(v string) *AuthorizeSnapshotAccessInput { + s.SnapshotClusterIdentifier = &v + return s +} + +// SetSnapshotIdentifier sets the SnapshotIdentifier field's value. +func (s *AuthorizeSnapshotAccessInput) SetSnapshotIdentifier(v string) *AuthorizeSnapshotAccessInput { + s.SnapshotIdentifier = &v + return s +} + +type AuthorizeSnapshotAccessOutput struct { + _ struct{} `type:"structure"` + + // Describes a snapshot. + Snapshot *Snapshot `type:"structure"` +} + +// String returns the string representation +func (s AuthorizeSnapshotAccessOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthorizeSnapshotAccessOutput) GoString() string { + return s.String() +} + +// SetSnapshot sets the Snapshot field's value. +func (s *AuthorizeSnapshotAccessOutput) SetSnapshot(v *Snapshot) *AuthorizeSnapshotAccessOutput { + s.Snapshot = v + return s +} + +// Describes an availability zone. +type AvailabilityZone struct { + _ struct{} `type:"structure"` + + // The name of the availability zone. 
+ Name *string `type:"string"` + + SupportedPlatforms []*SupportedPlatform `locationNameList:"SupportedPlatform" type:"list"` +} + +// String returns the string representation +func (s AvailabilityZone) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AvailabilityZone) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *AvailabilityZone) SetName(v string) *AvailabilityZone { + s.Name = &v + return s +} + +// SetSupportedPlatforms sets the SupportedPlatforms field's value. +func (s *AvailabilityZone) SetSupportedPlatforms(v []*SupportedPlatform) *AvailabilityZone { + s.SupportedPlatforms = v + return s +} + +type BatchDeleteClusterSnapshotsInput struct { + _ struct{} `type:"structure"` + + // A list of indentifiers for the snapshots you want to delete. + // + // Identifiers is a required field + Identifiers []*DeleteClusterSnapshotMessage `locationNameList:"DeleteClusterSnapshotMessage" type:"list" required:"true"` +} + +// String returns the string representation +func (s BatchDeleteClusterSnapshotsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchDeleteClusterSnapshotsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchDeleteClusterSnapshotsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchDeleteClusterSnapshotsInput"} + if s.Identifiers == nil { + invalidParams.Add(request.NewErrParamRequired("Identifiers")) + } + if s.Identifiers != nil { + for i, v := range s.Identifiers { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Identifiers", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIdentifiers sets the Identifiers field's value. +func (s *BatchDeleteClusterSnapshotsInput) SetIdentifiers(v []*DeleteClusterSnapshotMessage) *BatchDeleteClusterSnapshotsInput { + s.Identifiers = v + return s +} + +type BatchDeleteClusterSnapshotsOutput struct { + _ struct{} `type:"structure"` + + // A list of any errors returned. + Errors []*SnapshotErrorMessage `locationNameList:"SnapshotErrorMessage" type:"list"` + + // A list of the snapshot identifiers that were deleted. + Resources []*string `locationNameList:"String" type:"list"` +} + +// String returns the string representation +func (s BatchDeleteClusterSnapshotsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchDeleteClusterSnapshotsOutput) GoString() string { + return s.String() +} + +// SetErrors sets the Errors field's value. +func (s *BatchDeleteClusterSnapshotsOutput) SetErrors(v []*SnapshotErrorMessage) *BatchDeleteClusterSnapshotsOutput { + s.Errors = v + return s +} + +// SetResources sets the Resources field's value. +func (s *BatchDeleteClusterSnapshotsOutput) SetResources(v []*string) *BatchDeleteClusterSnapshotsOutput { + s.Resources = v + return s +} + +type BatchModifyClusterSnapshotsInput struct { + _ struct{} `type:"structure"` + + // A boolean value indicating whether to override an exception if the retention + // period has passed. + Force *bool `type:"boolean"` + + // The number of days that a manual snapshot is retained. If you specify the + // value -1, the manual snapshot is retained indefinitely. 
+ // + // The number must be either -1 or an integer between 1 and 3,653. + // + // If you decrease the manual snapshot retention period from its current value, + // existing manual snapshots that fall outside of the new retention period will + // return an error. If you want to suppress the errors and delete the snapshots, + // use the force option. + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + + // A list of snapshot identifiers you want to modify. + // + // SnapshotIdentifierList is a required field + SnapshotIdentifierList []*string `locationNameList:"String" type:"list" required:"true"` +} + +// String returns the string representation +func (s BatchModifyClusterSnapshotsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchModifyClusterSnapshotsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchModifyClusterSnapshotsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchModifyClusterSnapshotsInput"} + if s.SnapshotIdentifierList == nil { + invalidParams.Add(request.NewErrParamRequired("SnapshotIdentifierList")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetForce sets the Force field's value. +func (s *BatchModifyClusterSnapshotsInput) SetForce(v bool) *BatchModifyClusterSnapshotsInput { + s.Force = &v + return s +} + +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *BatchModifyClusterSnapshotsInput) SetManualSnapshotRetentionPeriod(v int64) *BatchModifyClusterSnapshotsInput { + s.ManualSnapshotRetentionPeriod = &v + return s +} + +// SetSnapshotIdentifierList sets the SnapshotIdentifierList field's value. +func (s *BatchModifyClusterSnapshotsInput) SetSnapshotIdentifierList(v []*string) *BatchModifyClusterSnapshotsInput { + s.SnapshotIdentifierList = v + return s +} + +type BatchModifyClusterSnapshotsOutput struct { + _ struct{} `type:"structure"` + + // A list of any errors returned. + Errors []*SnapshotErrorMessage `locationNameList:"SnapshotErrorMessage" type:"list"` + + // A list of the snapshots that were modified. + Resources []*string `locationNameList:"String" type:"list"` +} + +// String returns the string representation +func (s BatchModifyClusterSnapshotsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchModifyClusterSnapshotsOutput) GoString() string { + return s.String() +} + +// SetErrors sets the Errors field's value. +func (s *BatchModifyClusterSnapshotsOutput) SetErrors(v []*SnapshotErrorMessage) *BatchModifyClusterSnapshotsOutput { + s.Errors = v + return s +} + +// SetResources sets the Resources field's value. +func (s *BatchModifyClusterSnapshotsOutput) SetResources(v []*string) *BatchModifyClusterSnapshotsOutput { + s.Resources = v + return s +} + +type CancelResizeInput struct { + _ struct{} `type:"structure"` + + // The unique identifier for the cluster that you want to cancel a resize operation + // for. 
+ // + // ClusterIdentifier is a required field + ClusterIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelResizeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelResizeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelResizeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelResizeInput"} + if s.ClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClusterIdentifier sets the ClusterIdentifier field's value. +func (s *CancelResizeInput) SetClusterIdentifier(v string) *CancelResizeInput { + s.ClusterIdentifier = &v + return s +} + +// Describes the result of a cluster resize operation. +type CancelResizeOutput struct { + _ struct{} `type:"structure"` + + // The average rate of the resize operation over the last few minutes, measured + // in megabytes per second. After the resize operation completes, this value + // shows the average rate of the entire resize operation. + AvgResizeRateInMegaBytesPerSecond *float64 `type:"double"` + + // The amount of seconds that have elapsed since the resize operation began. + // After the resize operation completes, this value shows the total actual time, + // in seconds, for the resize operation. + ElapsedTimeInSeconds *int64 `type:"long"` + + // The estimated time remaining, in seconds, until the resize operation is complete. + // This value is calculated based on the average resize rate and the estimated + // amount of data remaining to be processed. Once the resize operation is complete, + // this value will be 0. + EstimatedTimeToCompletionInSeconds *int64 `type:"long"` + + // The names of tables that have been completely imported . + // + // Valid Values: List of table names. + ImportTablesCompleted []*string `type:"list"` + + // The names of tables that are being currently imported. + // + // Valid Values: List of table names. + ImportTablesInProgress []*string `type:"list"` + + // The names of tables that have not been yet imported. + // + // Valid Values: List of table names + ImportTablesNotStarted []*string `type:"list"` + + // An optional string to provide additional details about the resize action. + Message *string `type:"string"` + + // While the resize operation is in progress, this value shows the current amount + // of data, in megabytes, that has been processed so far. When the resize operation + // is complete, this value shows the total amount of data, in megabytes, on + // the cluster, which may be more or less than TotalResizeDataInMegaBytes (the + // estimated total amount of data before resize). + ProgressInMegaBytes *int64 `type:"long"` + + // An enum with possible values of ClassicResize and ElasticResize. These values + // describe the type of resize operation being performed. + ResizeType *string `type:"string"` + + // The status of the resize operation. + // + // Valid Values: NONE | IN_PROGRESS | FAILED | SUCCEEDED | CANCELLING + Status *string `type:"string"` + + // The cluster type after the resize operation is complete. + // + // Valid Values: multi-node | single-node + TargetClusterType *string `type:"string"` + + // The type of encryption for the cluster after the resize is complete. + // + // Possible values are KMS and None. 
In the China region possible values are: + // Legacy and None. + TargetEncryptionType *string `type:"string"` + + // The node type that the cluster will have after the resize operation is complete. + TargetNodeType *string `type:"string"` + + // The number of nodes that the cluster will have after the resize operation + // is complete. + TargetNumberOfNodes *int64 `type:"integer"` + + // The estimated total amount of data, in megabytes, on the cluster before the + // resize operation began. + TotalResizeDataInMegaBytes *int64 `type:"long"` +} + +// String returns the string representation +func (s CancelResizeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelResizeOutput) GoString() string { + return s.String() +} + +// SetAvgResizeRateInMegaBytesPerSecond sets the AvgResizeRateInMegaBytesPerSecond field's value. +func (s *CancelResizeOutput) SetAvgResizeRateInMegaBytesPerSecond(v float64) *CancelResizeOutput { + s.AvgResizeRateInMegaBytesPerSecond = &v return s } -// SetSnapshotClusterIdentifier sets the SnapshotClusterIdentifier field's value. -func (s *AuthorizeSnapshotAccessInput) SetSnapshotClusterIdentifier(v string) *AuthorizeSnapshotAccessInput { - s.SnapshotClusterIdentifier = &v +// SetElapsedTimeInSeconds sets the ElapsedTimeInSeconds field's value. +func (s *CancelResizeOutput) SetElapsedTimeInSeconds(v int64) *CancelResizeOutput { + s.ElapsedTimeInSeconds = &v return s } -// SetSnapshotIdentifier sets the SnapshotIdentifier field's value. -func (s *AuthorizeSnapshotAccessInput) SetSnapshotIdentifier(v string) *AuthorizeSnapshotAccessInput { - s.SnapshotIdentifier = &v +// SetEstimatedTimeToCompletionInSeconds sets the EstimatedTimeToCompletionInSeconds field's value. +func (s *CancelResizeOutput) SetEstimatedTimeToCompletionInSeconds(v int64) *CancelResizeOutput { + s.EstimatedTimeToCompletionInSeconds = &v return s } -type AuthorizeSnapshotAccessOutput struct { - _ struct{} `type:"structure"` +// SetImportTablesCompleted sets the ImportTablesCompleted field's value. +func (s *CancelResizeOutput) SetImportTablesCompleted(v []*string) *CancelResizeOutput { + s.ImportTablesCompleted = v + return s +} - // Describes a snapshot. - Snapshot *Snapshot `type:"structure"` +// SetImportTablesInProgress sets the ImportTablesInProgress field's value. +func (s *CancelResizeOutput) SetImportTablesInProgress(v []*string) *CancelResizeOutput { + s.ImportTablesInProgress = v + return s } -// String returns the string representation -func (s AuthorizeSnapshotAccessOutput) String() string { - return awsutil.Prettify(s) +// SetImportTablesNotStarted sets the ImportTablesNotStarted field's value. +func (s *CancelResizeOutput) SetImportTablesNotStarted(v []*string) *CancelResizeOutput { + s.ImportTablesNotStarted = v + return s } -// GoString returns the string representation -func (s AuthorizeSnapshotAccessOutput) GoString() string { - return s.String() +// SetMessage sets the Message field's value. +func (s *CancelResizeOutput) SetMessage(v string) *CancelResizeOutput { + s.Message = &v + return s } -// SetSnapshot sets the Snapshot field's value. -func (s *AuthorizeSnapshotAccessOutput) SetSnapshot(v *Snapshot) *AuthorizeSnapshotAccessOutput { - s.Snapshot = v +// SetProgressInMegaBytes sets the ProgressInMegaBytes field's value. +func (s *CancelResizeOutput) SetProgressInMegaBytes(v int64) *CancelResizeOutput { + s.ProgressInMegaBytes = &v return s } -// Describes an availability zone. 
-type AvailabilityZone struct { - _ struct{} `type:"structure"` +// SetResizeType sets the ResizeType field's value. +func (s *CancelResizeOutput) SetResizeType(v string) *CancelResizeOutput { + s.ResizeType = &v + return s +} - // The name of the availability zone. - Name *string `type:"string"` +// SetStatus sets the Status field's value. +func (s *CancelResizeOutput) SetStatus(v string) *CancelResizeOutput { + s.Status = &v + return s +} - SupportedPlatforms []*SupportedPlatform `locationNameList:"SupportedPlatform" type:"list"` +// SetTargetClusterType sets the TargetClusterType field's value. +func (s *CancelResizeOutput) SetTargetClusterType(v string) *CancelResizeOutput { + s.TargetClusterType = &v + return s } -// String returns the string representation -func (s AvailabilityZone) String() string { - return awsutil.Prettify(s) +// SetTargetEncryptionType sets the TargetEncryptionType field's value. +func (s *CancelResizeOutput) SetTargetEncryptionType(v string) *CancelResizeOutput { + s.TargetEncryptionType = &v + return s } -// GoString returns the string representation -func (s AvailabilityZone) GoString() string { - return s.String() +// SetTargetNodeType sets the TargetNodeType field's value. +func (s *CancelResizeOutput) SetTargetNodeType(v string) *CancelResizeOutput { + s.TargetNodeType = &v + return s } -// SetName sets the Name field's value. -func (s *AvailabilityZone) SetName(v string) *AvailabilityZone { - s.Name = &v +// SetTargetNumberOfNodes sets the TargetNumberOfNodes field's value. +func (s *CancelResizeOutput) SetTargetNumberOfNodes(v int64) *CancelResizeOutput { + s.TargetNumberOfNodes = &v return s } -// SetSupportedPlatforms sets the SupportedPlatforms field's value. -func (s *AvailabilityZone) SetSupportedPlatforms(v []*SupportedPlatform) *AvailabilityZone { - s.SupportedPlatforms = v +// SetTotalResizeDataInMegaBytes sets the TotalResizeDataInMegaBytes field's value. +func (s *CancelResizeOutput) SetTotalResizeDataInMegaBytes(v int64) *CancelResizeOutput { + s.TotalResizeDataInMegaBytes = &v return s } @@ -7207,7 +9356,7 @@ type Cluster struct { AvailabilityZone *string `type:"string"` // The date and time that the cluster was created. - ClusterCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ClusterCreateTime *time.Time `type:"timestamp"` // The unique identifier of the cluster. ClusterIdentifier *string `type:"string"` @@ -7242,6 +9391,8 @@ type Cluster struct { // // * available // + // * cancelling-resize + // // * creating // // * deleting @@ -7285,9 +9436,20 @@ type Cluster struct { // was not specified, a database named devdev was created by default. DBName *string `type:"string"` + // Describes the status of a cluster while it is in the process of resizing + // with an incremental resize. + DataTransferProgress *DataTransferProgress `type:"structure"` + + // Describes a group of DeferredMaintenanceWindow objects. + DeferredMaintenanceWindows []*DeferredMaintenanceWindow `locationNameList:"DeferredMaintenanceWindow" type:"list"` + // The status of the elastic IP (EIP) address. ElasticIpStatus *ElasticIpStatus `type:"structure"` + // The number of nodes that you can resize the cluster to with the elastic resize + // method. + ElasticResizeNumberOfNodeOptions *string `type:"string"` + // A Boolean value that, if true, indicates that data in the cluster is encrypted // at rest. Encrypted *bool `type:"boolean"` @@ -7321,6 +9483,16 @@ type Cluster struct { // to encrypt data in the cluster. 
KmsKeyId *string `type:"string"` + // The name of the maintenance track for the cluster. + MaintenanceTrackName *string `type:"string"` + + // The default number of days to retain a manual snapshot. If the value is -1, + // the snapshot is retained indefinitely. This setting does not change the retention + // period of existing snapshots. + // + // The value must be either -1 or an integer between 1 and 3,653 + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // The master user name for the cluster. This name is used to connect to the // database that is specified in the DBName parameter. MasterUsername *string `type:"string"` @@ -7334,6 +9506,9 @@ type Cluster struct { // The number of compute nodes in the cluster. NumberOfNodes *int64 `type:"integer"` + // Cluster operations that are waiting to be started. + PendingActions []*string `type:"list"` + // A value that, if present, indicates that changes to the cluster are pending. // Specific pending changes are identified by subelements. PendingModifiedValues *PendingModifiedValues `type:"structure"` @@ -7346,10 +9521,24 @@ type Cluster struct { // from a public network. PubliclyAccessible *bool `type:"boolean"` + // Returns the following: + // + // * AllowCancelResize: a boolean value indicating if the resize operation + // can be cancelled. + // + // * ResizeType: Returns ClassicResize + ResizeInfo *ResizeInfo `type:"structure"` + // A value that describes the status of a cluster restore action. This parameter // returns null if the cluster was not created by restoring a snapshot. RestoreStatus *RestoreStatus `type:"structure"` + // A unique identifier for the cluster snapshot schedule. + SnapshotScheduleIdentifier *string `type:"string"` + + // The current state of the cluster snapshot schedule. + SnapshotScheduleState *string `type:"string" enum:"ScheduleState"` + // The list of tags for the cluster. Tags []*Tag `locationNameList:"Tag" type:"list"` @@ -7462,12 +9651,30 @@ func (s *Cluster) SetDBName(v string) *Cluster { return s } +// SetDataTransferProgress sets the DataTransferProgress field's value. +func (s *Cluster) SetDataTransferProgress(v *DataTransferProgress) *Cluster { + s.DataTransferProgress = v + return s +} + +// SetDeferredMaintenanceWindows sets the DeferredMaintenanceWindows field's value. +func (s *Cluster) SetDeferredMaintenanceWindows(v []*DeferredMaintenanceWindow) *Cluster { + s.DeferredMaintenanceWindows = v + return s +} + // SetElasticIpStatus sets the ElasticIpStatus field's value. func (s *Cluster) SetElasticIpStatus(v *ElasticIpStatus) *Cluster { s.ElasticIpStatus = v return s } +// SetElasticResizeNumberOfNodeOptions sets the ElasticResizeNumberOfNodeOptions field's value. +func (s *Cluster) SetElasticResizeNumberOfNodeOptions(v string) *Cluster { + s.ElasticResizeNumberOfNodeOptions = &v + return s +} + // SetEncrypted sets the Encrypted field's value. func (s *Cluster) SetEncrypted(v bool) *Cluster { s.Encrypted = &v @@ -7504,6 +9711,18 @@ func (s *Cluster) SetKmsKeyId(v string) *Cluster { return s } +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *Cluster) SetMaintenanceTrackName(v string) *Cluster { + s.MaintenanceTrackName = &v + return s +} + +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *Cluster) SetManualSnapshotRetentionPeriod(v int64) *Cluster { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetMasterUsername sets the MasterUsername field's value. 
func (s *Cluster) SetMasterUsername(v string) *Cluster { s.MasterUsername = &v @@ -7528,6 +9747,12 @@ func (s *Cluster) SetNumberOfNodes(v int64) *Cluster { return s } +// SetPendingActions sets the PendingActions field's value. +func (s *Cluster) SetPendingActions(v []*string) *Cluster { + s.PendingActions = v + return s +} + // SetPendingModifiedValues sets the PendingModifiedValues field's value. func (s *Cluster) SetPendingModifiedValues(v *PendingModifiedValues) *Cluster { s.PendingModifiedValues = v @@ -7546,12 +9771,30 @@ func (s *Cluster) SetPubliclyAccessible(v bool) *Cluster { return s } +// SetResizeInfo sets the ResizeInfo field's value. +func (s *Cluster) SetResizeInfo(v *ResizeInfo) *Cluster { + s.ResizeInfo = v + return s +} + // SetRestoreStatus sets the RestoreStatus field's value. func (s *Cluster) SetRestoreStatus(v *RestoreStatus) *Cluster { s.RestoreStatus = v return s } +// SetSnapshotScheduleIdentifier sets the SnapshotScheduleIdentifier field's value. +func (s *Cluster) SetSnapshotScheduleIdentifier(v string) *Cluster { + s.SnapshotScheduleIdentifier = &v + return s +} + +// SetSnapshotScheduleState sets the SnapshotScheduleState field's value. +func (s *Cluster) SetSnapshotScheduleState(v string) *Cluster { + s.SnapshotScheduleState = &v + return s +} + // SetTags sets the Tags field's value. func (s *Cluster) SetTags(v []*Tag) *Cluster { s.Tags = v @@ -7570,6 +9813,58 @@ func (s *Cluster) SetVpcSecurityGroups(v []*VpcSecurityGroupMembership) *Cluster return s } +// Describes a ClusterDbRevision. +type ClusterDbRevision struct { + _ struct{} `type:"structure"` + + // The unique identifier of the cluster. + ClusterIdentifier *string `type:"string"` + + // A string representing the current cluster version. + CurrentDatabaseRevision *string `type:"string"` + + // The date on which the database revision was released. + DatabaseRevisionReleaseDate *time.Time `type:"timestamp"` + + // A list of RevisionTarget objects, where each object describes the database + // revision that a cluster can be updated to. + RevisionTargets []*RevisionTarget `locationNameList:"RevisionTarget" type:"list"` +} + +// String returns the string representation +func (s ClusterDbRevision) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClusterDbRevision) GoString() string { + return s.String() +} + +// SetClusterIdentifier sets the ClusterIdentifier field's value. +func (s *ClusterDbRevision) SetClusterIdentifier(v string) *ClusterDbRevision { + s.ClusterIdentifier = &v + return s +} + +// SetCurrentDatabaseRevision sets the CurrentDatabaseRevision field's value. +func (s *ClusterDbRevision) SetCurrentDatabaseRevision(v string) *ClusterDbRevision { + s.CurrentDatabaseRevision = &v + return s +} + +// SetDatabaseRevisionReleaseDate sets the DatabaseRevisionReleaseDate field's value. +func (s *ClusterDbRevision) SetDatabaseRevisionReleaseDate(v time.Time) *ClusterDbRevision { + s.DatabaseRevisionReleaseDate = &v + return s +} + +// SetRevisionTargets sets the RevisionTargets field's value. +func (s *ClusterDbRevision) SetRevisionTargets(v []*RevisionTarget) *ClusterDbRevision { + s.RevisionTargets = v + return s +} + // An AWS Identity and Access Management (IAM) role that can be used by the // associated Amazon Redshift cluster to access other AWS services. type ClusterIamRole struct { @@ -7957,6 +10252,13 @@ type ClusterSnapshotCopyStatus struct { // snapshot copy is enabled. 
DestinationRegion *string `type:"string"` + // The number of days that automated snapshots are retained in the destination + // region after they are copied from a source region. If the value is -1, the + // manual snapshot is retained indefinitely. + // + // The value must be either -1 or an integer between 1 and 3,653. + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // The number of days that automated snapshots are retained in the destination // region after they are copied from a source region. RetentionPeriod *int64 `type:"long"` @@ -7981,6 +10283,12 @@ func (s *ClusterSnapshotCopyStatus) SetDestinationRegion(v string) *ClusterSnaps return s } +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *ClusterSnapshotCopyStatus) SetManualSnapshotRetentionPeriod(v int64) *ClusterSnapshotCopyStatus { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetRetentionPeriod sets the RetentionPeriod field's value. func (s *ClusterSnapshotCopyStatus) SetRetentionPeriod(v int64) *ClusterSnapshotCopyStatus { s.RetentionPeriod = &v @@ -8109,6 +10417,14 @@ func (s *ClusterVersion) SetDescription(v string) *ClusterVersion { type CopyClusterSnapshotInput struct { _ struct{} `type:"structure"` + // The number of days that a manual snapshot is retained. If the value is -1, + // the manual snapshot is retained indefinitely. + // + // The value must be either -1 or an integer between 1 and 3,653. + // + // The default value is -1. + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // The identifier of the cluster the source snapshot was created from. This // parameter is required if your IAM user has a policy containing a snapshot // resource element that specifies anything other than * for the cluster name. @@ -8172,6 +10488,12 @@ func (s *CopyClusterSnapshotInput) Validate() error { return nil } +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *CopyClusterSnapshotInput) SetManualSnapshotRetentionPeriod(v int64) *CopyClusterSnapshotInput { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetSourceSnapshotClusterIdentifier sets the SourceSnapshotClusterIdentifier field's value. func (s *CopyClusterSnapshotInput) SetSourceSnapshotClusterIdentifier(v string) *CopyClusterSnapshotInput { s.SourceSnapshotClusterIdentifier = &v @@ -8384,6 +10706,18 @@ type CreateClusterInput struct { // want to use to encrypt data in the cluster. KmsKeyId *string `type:"string"` + // An optional parameter for the name of the maintenance track for the cluster. + // If you don't provide a maintenance track name, the cluster is assigned to + // the current track. + MaintenanceTrackName *string `type:"string"` + + // The default number of days to retain a manual snapshot. If the value is -1, + // the snapshot is retained indefinitely. This setting does not change the retention + // period of existing snapshots. + // + // The value must be either -1 or an integer between 1 and 3,653 + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // The password associated with the master user account for the cluster that // is being created. // @@ -8474,6 +10808,9 @@ type CreateClusterInput struct { // If true, the cluster can be accessed from a public network. PubliclyAccessible *bool `type:"boolean"` + // A unique identifier for the snapshot schedule. + SnapshotScheduleIdentifier *string `type:"string"` + // A list of tag instances. 
Tags []*Tag `locationNameList:"Tag" type:"list"` @@ -8624,6 +10961,18 @@ func (s *CreateClusterInput) SetKmsKeyId(v string) *CreateClusterInput { return s } +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *CreateClusterInput) SetMaintenanceTrackName(v string) *CreateClusterInput { + s.MaintenanceTrackName = &v + return s +} + +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *CreateClusterInput) SetManualSnapshotRetentionPeriod(v int64) *CreateClusterInput { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetMasterUserPassword sets the MasterUserPassword field's value. func (s *CreateClusterInput) SetMasterUserPassword(v string) *CreateClusterInput { s.MasterUserPassword = &v @@ -8666,6 +11015,12 @@ func (s *CreateClusterInput) SetPubliclyAccessible(v bool) *CreateClusterInput { return s } +// SetSnapshotScheduleIdentifier sets the SnapshotScheduleIdentifier field's value. +func (s *CreateClusterInput) SetSnapshotScheduleIdentifier(v string) *CreateClusterInput { + s.SnapshotScheduleIdentifier = &v + return s +} + // SetTags sets the Tags field's value. func (s *CreateClusterInput) SetTags(v []*Tag) *CreateClusterInput { s.Tags = v @@ -8923,6 +11278,14 @@ type CreateClusterSnapshotInput struct { // ClusterIdentifier is a required field ClusterIdentifier *string `type:"string" required:"true"` + // The number of days that a manual snapshot is retained. If the value is -1, + // the manual snapshot is retained indefinitely. + // + // The value must be either -1 or an integer between 1 and 3,653. + // + // The default value is -1. + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // A unique identifier for the snapshot that you are requesting. This identifier // must be unique for all snapshots within the AWS account. // @@ -8977,6 +11340,12 @@ func (s *CreateClusterSnapshotInput) SetClusterIdentifier(v string) *CreateClust return s } +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *CreateClusterSnapshotInput) SetManualSnapshotRetentionPeriod(v int64) *CreateClusterSnapshotInput { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetSnapshotIdentifier sets the SnapshotIdentifier field's value. func (s *CreateClusterSnapshotInput) SetSnapshotIdentifier(v string) *CreateClusterSnapshotInput { s.SnapshotIdentifier = &v @@ -9132,7 +11501,7 @@ type CreateEventSubscriptionInput struct { // Specifies the Amazon Redshift event categories to be published by the event // notification subscription. // - // Values: Configuration, Management, Monitoring, Security + // Values: configuration, management, monitoring, security EventCategories []*string `locationNameList:"EventCategory" type:"list"` // Specifies the Amazon Redshift event severity to be published by the event @@ -9602,6 +11971,133 @@ func (s *CreateSnapshotCopyGrantOutput) SetSnapshotCopyGrant(v *SnapshotCopyGran return s } +type CreateSnapshotScheduleInput struct { + _ struct{} `type:"structure"` + + DryRun *bool `type:"boolean"` + + NextInvocations *int64 `type:"integer"` + + // The definition of the snapshot schedule. The definition is made up of schedule + // expressions. For example, "cron(30 12 *)" or "rate(12 hours)". + ScheduleDefinitions []*string `locationNameList:"ScheduleDefinition" type:"list"` + + // The description of the snapshot schedule. + ScheduleDescription *string `type:"string"` + + // A unique identifier for a snapshot schedule. 
Only alphanumeric characters + // are allowed for the identifier. + ScheduleIdentifier *string `type:"string"` + + Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s CreateSnapshotScheduleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSnapshotScheduleInput) GoString() string { + return s.String() +} + +// SetDryRun sets the DryRun field's value. +func (s *CreateSnapshotScheduleInput) SetDryRun(v bool) *CreateSnapshotScheduleInput { + s.DryRun = &v + return s +} + +// SetNextInvocations sets the NextInvocations field's value. +func (s *CreateSnapshotScheduleInput) SetNextInvocations(v int64) *CreateSnapshotScheduleInput { + s.NextInvocations = &v + return s +} + +// SetScheduleDefinitions sets the ScheduleDefinitions field's value. +func (s *CreateSnapshotScheduleInput) SetScheduleDefinitions(v []*string) *CreateSnapshotScheduleInput { + s.ScheduleDefinitions = v + return s +} + +// SetScheduleDescription sets the ScheduleDescription field's value. +func (s *CreateSnapshotScheduleInput) SetScheduleDescription(v string) *CreateSnapshotScheduleInput { + s.ScheduleDescription = &v + return s +} + +// SetScheduleIdentifier sets the ScheduleIdentifier field's value. +func (s *CreateSnapshotScheduleInput) SetScheduleIdentifier(v string) *CreateSnapshotScheduleInput { + s.ScheduleIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateSnapshotScheduleInput) SetTags(v []*Tag) *CreateSnapshotScheduleInput { + s.Tags = v + return s +} + +// Describes a snapshot schedule. You can set a regular interval for creating +// snapshots of a cluster. You can also schedule snapshots for specific dates. +type CreateSnapshotScheduleOutput struct { + _ struct{} `type:"structure"` + + NextInvocations []*time.Time `locationNameList:"SnapshotTime" type:"list"` + + // A list of ScheduleDefinitions + ScheduleDefinitions []*string `locationNameList:"ScheduleDefinition" type:"list"` + + // The description of the schedule. + ScheduleDescription *string `type:"string"` + + // A unique identifier for the schedule. + ScheduleIdentifier *string `type:"string"` + + // An optional set of tags describing the schedule. + Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s CreateSnapshotScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSnapshotScheduleOutput) GoString() string { + return s.String() +} + +// SetNextInvocations sets the NextInvocations field's value. +func (s *CreateSnapshotScheduleOutput) SetNextInvocations(v []*time.Time) *CreateSnapshotScheduleOutput { + s.NextInvocations = v + return s +} + +// SetScheduleDefinitions sets the ScheduleDefinitions field's value. +func (s *CreateSnapshotScheduleOutput) SetScheduleDefinitions(v []*string) *CreateSnapshotScheduleOutput { + s.ScheduleDefinitions = v + return s +} + +// SetScheduleDescription sets the ScheduleDescription field's value. +func (s *CreateSnapshotScheduleOutput) SetScheduleDescription(v string) *CreateSnapshotScheduleOutput { + s.ScheduleDescription = &v + return s +} + +// SetScheduleIdentifier sets the ScheduleIdentifier field's value. +func (s *CreateSnapshotScheduleOutput) SetScheduleIdentifier(v string) *CreateSnapshotScheduleOutput { + s.ScheduleIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. 
+func (s *CreateSnapshotScheduleOutput) SetTags(v []*Tag) *CreateSnapshotScheduleOutput { + s.Tags = v + return s +} + // Contains the output from the CreateTags action. type CreateTagsInput struct { _ struct{} `type:"structure"` @@ -9674,6 +12170,77 @@ func (s CreateTagsOutput) GoString() string { return s.String() } +// Describes the status of a cluster while it is in the process of resizing +// with an incremental resize. +type DataTransferProgress struct { + _ struct{} `type:"structure"` + + // Describes the data transfer rate in MB's per second. + CurrentRateInMegaBytesPerSecond *float64 `type:"double"` + + // Describes the total amount of data that has been transfered in MB's. + DataTransferredInMegaBytes *int64 `type:"long"` + + // Describes the number of seconds that have elapsed during the data transfer. + ElapsedTimeInSeconds *int64 `type:"long"` + + // Describes the estimated number of seconds remaining to complete the transfer. + EstimatedTimeToCompletionInSeconds *int64 `type:"long"` + + // Describes the status of the cluster. While the transfer is in progress the + // status is transferringdata. + Status *string `type:"string"` + + // Describes the total amount of data to be transfered in megabytes. + TotalDataInMegaBytes *int64 `type:"long"` +} + +// String returns the string representation +func (s DataTransferProgress) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DataTransferProgress) GoString() string { + return s.String() +} + +// SetCurrentRateInMegaBytesPerSecond sets the CurrentRateInMegaBytesPerSecond field's value. +func (s *DataTransferProgress) SetCurrentRateInMegaBytesPerSecond(v float64) *DataTransferProgress { + s.CurrentRateInMegaBytesPerSecond = &v + return s +} + +// SetDataTransferredInMegaBytes sets the DataTransferredInMegaBytes field's value. +func (s *DataTransferProgress) SetDataTransferredInMegaBytes(v int64) *DataTransferProgress { + s.DataTransferredInMegaBytes = &v + return s +} + +// SetElapsedTimeInSeconds sets the ElapsedTimeInSeconds field's value. +func (s *DataTransferProgress) SetElapsedTimeInSeconds(v int64) *DataTransferProgress { + s.ElapsedTimeInSeconds = &v + return s +} + +// SetEstimatedTimeToCompletionInSeconds sets the EstimatedTimeToCompletionInSeconds field's value. +func (s *DataTransferProgress) SetEstimatedTimeToCompletionInSeconds(v int64) *DataTransferProgress { + s.EstimatedTimeToCompletionInSeconds = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DataTransferProgress) SetStatus(v string) *DataTransferProgress { + s.Status = &v + return s +} + +// SetTotalDataInMegaBytes sets the TotalDataInMegaBytes field's value. +func (s *DataTransferProgress) SetTotalDataInMegaBytes(v int64) *DataTransferProgress { + s.TotalDataInMegaBytes = &v + return s +} + // Describes the default cluster parameters for a parameter group family. type DefaultClusterParameters struct { _ struct{} `type:"structure"` @@ -9721,6 +12288,48 @@ func (s *DefaultClusterParameters) SetParameters(v []*Parameter) *DefaultCluster return s } +// Describes a deferred maintenance window +type DeferredMaintenanceWindow struct { + _ struct{} `type:"structure"` + + // A timestamp for the end of the time period when we defer maintenance. + DeferMaintenanceEndTime *time.Time `type:"timestamp"` + + // A unique identifier for the maintenance window. + DeferMaintenanceIdentifier *string `type:"string"` + + // A timestamp for the beginning of the time period when we defer maintenance. 
+ DeferMaintenanceStartTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s DeferredMaintenanceWindow) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeferredMaintenanceWindow) GoString() string { + return s.String() +} + +// SetDeferMaintenanceEndTime sets the DeferMaintenanceEndTime field's value. +func (s *DeferredMaintenanceWindow) SetDeferMaintenanceEndTime(v time.Time) *DeferredMaintenanceWindow { + s.DeferMaintenanceEndTime = &v + return s +} + +// SetDeferMaintenanceIdentifier sets the DeferMaintenanceIdentifier field's value. +func (s *DeferredMaintenanceWindow) SetDeferMaintenanceIdentifier(v string) *DeferredMaintenanceWindow { + s.DeferMaintenanceIdentifier = &v + return s +} + +// SetDeferMaintenanceStartTime sets the DeferMaintenanceStartTime field's value. +func (s *DeferredMaintenanceWindow) SetDeferMaintenanceStartTime(v time.Time) *DeferredMaintenanceWindow { + s.DeferMaintenanceStartTime = &v + return s +} + type DeleteClusterInput struct { _ struct{} `type:"structure"` @@ -9752,6 +12361,14 @@ type DeleteClusterInput struct { // * Cannot end with a hyphen or contain two consecutive hyphens. FinalClusterSnapshotIdentifier *string `type:"string"` + // The number of days that a manual snapshot is retained. If the value is -1, + // the manual snapshot is retained indefinitely. + // + // The value must be either -1 or an integer between 1 and 3,653. + // + // The default value is -1. + FinalClusterSnapshotRetentionPeriod *int64 `type:"integer"` + // Determines whether a final snapshot of the cluster is created before Amazon // Redshift deletes the cluster. If true, a final cluster snapshot is not created. // If false, a final cluster snapshot is created before the cluster is deleted. @@ -9798,6 +12415,12 @@ func (s *DeleteClusterInput) SetFinalClusterSnapshotIdentifier(v string) *Delete return s } +// SetFinalClusterSnapshotRetentionPeriod sets the FinalClusterSnapshotRetentionPeriod field's value. +func (s *DeleteClusterInput) SetFinalClusterSnapshotRetentionPeriod(v int64) *DeleteClusterInput { + s.FinalClusterSnapshotRetentionPeriod = &v + return s +} + // SetSkipFinalClusterSnapshot sets the SkipFinalClusterSnapshot field's value. func (s *DeleteClusterInput) SetSkipFinalClusterSnapshot(v bool) *DeleteClusterInput { s.SkipFinalClusterSnapshot = &v @@ -9949,8 +12572,8 @@ type DeleteClusterSnapshotInput struct { // The unique identifier of the manual snapshot to be deleted. // - // Constraints: Must be the name of an existing snapshot that is in the available - // state. + // Constraints: Must be the name of an existing snapshot that is in the available, + // failed, or cancelled state. // // SnapshotIdentifier is a required field SnapshotIdentifier *string `type:"string" required:"true"` @@ -9991,6 +12614,60 @@ func (s *DeleteClusterSnapshotInput) SetSnapshotIdentifier(v string) *DeleteClus return s } +type DeleteClusterSnapshotMessage struct { + _ struct{} `type:"structure"` + + // The unique identifier of the cluster the snapshot was created from. This + // parameter is required if your IAM user has a policy containing a snapshot + // resource element that specifies anything other than * for the cluster name. + // + // Constraints: Must be the name of valid cluster. + SnapshotClusterIdentifier *string `type:"string"` + + // The unique identifier of the manual snapshot to be deleted. 
+ // + // Constraints: Must be the name of an existing snapshot that is in the available, + // failed, or cancelled state. + // + // SnapshotIdentifier is a required field + SnapshotIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteClusterSnapshotMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteClusterSnapshotMessage) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteClusterSnapshotMessage) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteClusterSnapshotMessage"} + if s.SnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SnapshotIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSnapshotClusterIdentifier sets the SnapshotClusterIdentifier field's value. +func (s *DeleteClusterSnapshotMessage) SetSnapshotClusterIdentifier(v string) *DeleteClusterSnapshotMessage { + s.SnapshotClusterIdentifier = &v + return s +} + +// SetSnapshotIdentifier sets the SnapshotIdentifier field's value. +func (s *DeleteClusterSnapshotMessage) SetSnapshotIdentifier(v string) *DeleteClusterSnapshotMessage { + s.SnapshotIdentifier = &v + return s +} + type DeleteClusterSnapshotOutput struct { _ struct{} `type:"structure"` @@ -10275,6 +12952,58 @@ func (s DeleteSnapshotCopyGrantOutput) GoString() string { return s.String() } +type DeleteSnapshotScheduleInput struct { + _ struct{} `type:"structure"` + + // A unique identifier of the snapshot schedule to delete. + // + // ScheduleIdentifier is a required field + ScheduleIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteSnapshotScheduleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSnapshotScheduleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSnapshotScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSnapshotScheduleInput"} + if s.ScheduleIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduleIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetScheduleIdentifier sets the ScheduleIdentifier field's value. +func (s *DeleteSnapshotScheduleInput) SetScheduleIdentifier(v string) *DeleteSnapshotScheduleInput { + s.ScheduleIdentifier = &v + return s +} + +type DeleteSnapshotScheduleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteSnapshotScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSnapshotScheduleOutput) GoString() string { + return s.String() +} + // Contains the output from the DeleteTags action. type DeleteTagsInput struct { _ struct{} `type:"structure"` @@ -10287,62 +13016,202 @@ type DeleteTagsInput struct { // The tag key that you want to delete. 
// - // TagKeys is a required field - TagKeys []*string `locationNameList:"TagKey" type:"list" required:"true"` + // TagKeys is a required field + TagKeys []*string `locationNameList:"TagKey" type:"list" required:"true"` +} + +// String returns the string representation +func (s DeleteTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteTagsInput"} + if s.ResourceName == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceName")) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceName sets the ResourceName field's value. +func (s *DeleteTagsInput) SetResourceName(v string) *DeleteTagsInput { + s.ResourceName = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *DeleteTagsInput) SetTagKeys(v []*string) *DeleteTagsInput { + s.TagKeys = v + return s +} + +type DeleteTagsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTagsOutput) GoString() string { + return s.String() +} + +type DescribeAccountAttributesInput struct { + _ struct{} `type:"structure"` + + // A list of attribute names. + AttributeNames []*string `locationNameList:"AttributeName" type:"list"` +} + +// String returns the string representation +func (s DescribeAccountAttributesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountAttributesInput) GoString() string { + return s.String() +} + +// SetAttributeNames sets the AttributeNames field's value. +func (s *DescribeAccountAttributesInput) SetAttributeNames(v []*string) *DescribeAccountAttributesInput { + s.AttributeNames = v + return s +} + +type DescribeAccountAttributesOutput struct { + _ struct{} `type:"structure"` + + // A list of attributes assigned to an account. + AccountAttributes []*AccountAttribute `locationNameList:"AccountAttribute" type:"list"` +} + +// String returns the string representation +func (s DescribeAccountAttributesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountAttributesOutput) GoString() string { + return s.String() +} + +// SetAccountAttributes sets the AccountAttributes field's value. +func (s *DescribeAccountAttributesOutput) SetAccountAttributes(v []*AccountAttribute) *DescribeAccountAttributesOutput { + s.AccountAttributes = v + return s +} + +type DescribeClusterDbRevisionsInput struct { + _ struct{} `type:"structure"` + + // A unique identifier for a cluster whose ClusterDbRevisions you are requesting. + // This parameter is case sensitive. All clusters defined for an account are + // returned by default. + ClusterIdentifier *string `type:"string"` + + // An optional parameter that specifies the starting point for returning a set + // of response records. When the results of a DescribeClusterDbRevisions request + // exceed the value specified in MaxRecords, Amazon Redshift returns a value + // in the marker field of the response. 
You can retrieve the next set of response + // records by providing the returned marker value in the marker parameter and + // retrying the request. + // + // Constraints: You can specify either the ClusterIdentifier parameter, or the + // marker parameter, but not both. + Marker *string `type:"string"` + + // The maximum number of response records to return in each call. If the number + // of remaining response records exceeds the specified MaxRecords value, a value + // is returned in the marker field of the response. You can retrieve the next + // set of response records by providing the returned marker value in the marker + // parameter and retrying the request. + // + // Default: 100 + // + // Constraints: minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` } // String returns the string representation -func (s DeleteTagsInput) String() string { +func (s DescribeClusterDbRevisionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteTagsInput) GoString() string { +func (s DescribeClusterDbRevisionsInput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteTagsInput"} - if s.ResourceName == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceName")) - } - if s.TagKeys == nil { - invalidParams.Add(request.NewErrParamRequired("TagKeys")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetClusterIdentifier sets the ClusterIdentifier field's value. +func (s *DescribeClusterDbRevisionsInput) SetClusterIdentifier(v string) *DescribeClusterDbRevisionsInput { + s.ClusterIdentifier = &v + return s } -// SetResourceName sets the ResourceName field's value. -func (s *DeleteTagsInput) SetResourceName(v string) *DeleteTagsInput { - s.ResourceName = &v +// SetMarker sets the Marker field's value. +func (s *DescribeClusterDbRevisionsInput) SetMarker(v string) *DescribeClusterDbRevisionsInput { + s.Marker = &v return s } -// SetTagKeys sets the TagKeys field's value. -func (s *DeleteTagsInput) SetTagKeys(v []*string) *DeleteTagsInput { - s.TagKeys = v +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeClusterDbRevisionsInput) SetMaxRecords(v int64) *DescribeClusterDbRevisionsInput { + s.MaxRecords = &v return s } -type DeleteTagsOutput struct { +type DescribeClusterDbRevisionsOutput struct { _ struct{} `type:"structure"` + + // A list of revisions. + ClusterDbRevisions []*ClusterDbRevision `locationNameList:"ClusterDbRevision" type:"list"` + + // A string representing the starting point for the next set of revisions. If + // a value is returned in a response, you can retrieve the next set of revisions + // by providing the value in the marker parameter and retrying the command. + // If the marker field is empty, all revisions have already been returned. + Marker *string `type:"string"` } // String returns the string representation -func (s DeleteTagsOutput) String() string { +func (s DescribeClusterDbRevisionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteTagsOutput) GoString() string { +func (s DescribeClusterDbRevisionsOutput) GoString() string { return s.String() } +// SetClusterDbRevisions sets the ClusterDbRevisions field's value. 
+func (s *DescribeClusterDbRevisionsOutput) SetClusterDbRevisions(v []*ClusterDbRevision) *DescribeClusterDbRevisionsOutput { + s.ClusterDbRevisions = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeClusterDbRevisionsOutput) SetMarker(v string) *DescribeClusterDbRevisionsOutput { + s.Marker = &v + return s +} + type DescribeClusterParameterGroupsInput struct { _ struct{} `type:"structure"` @@ -10725,7 +13594,7 @@ type DescribeClusterSnapshotsInput struct { // about ISO 8601, go to the ISO8601 Wikipedia page. (http://en.wikipedia.org/wiki/ISO_8601) // // Example: 2012-07-16T18:00:00Z - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `type:"timestamp"` // An optional parameter that specifies the starting point to return a set of // response records. When the results of a DescribeClusterSnapshots request @@ -10760,12 +13629,14 @@ type DescribeClusterSnapshotsInput struct { // Valid Values: automated | manual SnapshotType *string `type:"string"` + SortingEntities []*SnapshotSortingEntity `locationNameList:"SnapshotSortingEntity" type:"list"` + // A value that requests only snapshots created at or after the specified time. // The time value is specified in ISO 8601 format. For more information about // ISO 8601, go to the ISO8601 Wikipedia page. (http://en.wikipedia.org/wiki/ISO_8601) // // Example: 2012-07-16T18:00:00Z - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` // A tag key or keys for which you want to return all matching cluster snapshots // that are associated with the specified key or keys. For example, suppose @@ -10794,6 +13665,26 @@ func (s DescribeClusterSnapshotsInput) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeClusterSnapshotsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeClusterSnapshotsInput"} + if s.SortingEntities != nil { + for i, v := range s.SortingEntities { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SortingEntities", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + // SetClusterExists sets the ClusterExists field's value. func (s *DescribeClusterSnapshotsInput) SetClusterExists(v bool) *DescribeClusterSnapshotsInput { s.ClusterExists = &v @@ -10842,6 +13733,12 @@ func (s *DescribeClusterSnapshotsInput) SetSnapshotType(v string) *DescribeClust return s } +// SetSortingEntities sets the SortingEntities field's value. +func (s *DescribeClusterSnapshotsInput) SetSortingEntities(v []*SnapshotSortingEntity) *DescribeClusterSnapshotsInput { + s.SortingEntities = v + return s +} + // SetStartTime sets the StartTime field's value. func (s *DescribeClusterSnapshotsInput) SetStartTime(v time.Time) *DescribeClusterSnapshotsInput { s.StartTime = &v @@ -11015,6 +13912,86 @@ func (s *DescribeClusterSubnetGroupsOutput) SetMarker(v string) *DescribeCluster return s } +type DescribeClusterTracksInput struct { + _ struct{} `type:"structure"` + + // The name of the maintenance track. + MaintenanceTrackName *string `type:"string"` + + // An optional parameter that specifies the starting point to return a set of + // response records. 
When the results of a DescribeClusterTracks request exceed + // the value specified in MaxRecords, Amazon Redshift returns a value in the + // Marker field of the response. You can retrieve the next set of response records + // by providing the returned marker value in the Marker parameter and retrying + // the request. + Marker *string `type:"string"` + + // An integer value for the maximum number of maintenance tracks to return. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeClusterTracksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeClusterTracksInput) GoString() string { + return s.String() +} + +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *DescribeClusterTracksInput) SetMaintenanceTrackName(v string) *DescribeClusterTracksInput { + s.MaintenanceTrackName = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeClusterTracksInput) SetMarker(v string) *DescribeClusterTracksInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeClusterTracksInput) SetMaxRecords(v int64) *DescribeClusterTracksInput { + s.MaxRecords = &v + return s +} + +type DescribeClusterTracksOutput struct { + _ struct{} `type:"structure"` + + // A list of maintenance tracks output by the DescribeClusterTracks operation. + MaintenanceTracks []*MaintenanceTrack `locationNameList:"MaintenanceTrack" type:"list"` + + // The starting point to return a set of response tracklist records. You can + // retrieve the next set of response records by providing the returned marker + // value in the Marker parameter and retrying the request. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeClusterTracksOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeClusterTracksOutput) GoString() string { + return s.String() +} + +// SetMaintenanceTracks sets the MaintenanceTracks field's value. +func (s *DescribeClusterTracksOutput) SetMaintenanceTracks(v []*MaintenanceTrack) *DescribeClusterTracksOutput { + s.MaintenanceTracks = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeClusterTracksOutput) SetMarker(v string) *DescribeClusterTracksOutput { + s.Marker = &v + return s +} + type DescribeClusterVersionsInput struct { _ struct{} `type:"structure"` @@ -11518,7 +14495,7 @@ type DescribeEventsInput struct { // page. (http://en.wikipedia.org/wiki/ISO_8601) // // Example: 2009-07-08T18:00Z - EndTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + EndTime *time.Time `type:"timestamp"` // An optional parameter that specifies the starting point to return a set of // response records. When the results of a DescribeEvents request exceed the @@ -11577,7 +14554,7 @@ type DescribeEventsInput struct { // page. (http://en.wikipedia.org/wiki/ISO_8601) // // Example: 2009-07-08T18:00Z - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -12308,6 +15285,9 @@ type DescribeResizeOutput struct { // Valid Values: List of table names ImportTablesNotStarted []*string `type:"list"` + // An optional string to provide additional details about the resize action. 
+ Message *string `type:"string"` + // While the resize operation is in progress, this value shows the current amount // of data, in megabytes, that has been processed so far. When the resize operation // is complete, this value shows the total amount of data, in megabytes, on @@ -12315,9 +15295,13 @@ type DescribeResizeOutput struct { // estimated total amount of data before resize). ProgressInMegaBytes *int64 `type:"long"` + // An enum with possible values of ClassicResize and ElasticResize. These values + // describe the type of resize operation being performed. + ResizeType *string `type:"string"` + // The status of the resize operation. // - // Valid Values: NONE | IN_PROGRESS | FAILED | SUCCEEDED + // Valid Values: NONE | IN_PROGRESS | FAILED | SUCCEEDED | CANCELLING Status *string `type:"string"` // The cluster type after the resize operation is complete. @@ -12325,6 +15309,12 @@ type DescribeResizeOutput struct { // Valid Values: multi-node | single-node TargetClusterType *string `type:"string"` + // The type of encryption for the cluster after the resize is complete. + // + // Possible values are KMS and None. In the China region possible values are: + // Legacy and None. + TargetEncryptionType *string `type:"string"` + // The node type that the cluster will have after the resize operation is complete. TargetNodeType *string `type:"string"` @@ -12383,12 +15373,24 @@ func (s *DescribeResizeOutput) SetImportTablesNotStarted(v []*string) *DescribeR return s } +// SetMessage sets the Message field's value. +func (s *DescribeResizeOutput) SetMessage(v string) *DescribeResizeOutput { + s.Message = &v + return s +} + // SetProgressInMegaBytes sets the ProgressInMegaBytes field's value. func (s *DescribeResizeOutput) SetProgressInMegaBytes(v int64) *DescribeResizeOutput { s.ProgressInMegaBytes = &v return s } +// SetResizeType sets the ResizeType field's value. +func (s *DescribeResizeOutput) SetResizeType(v string) *DescribeResizeOutput { + s.ResizeType = &v + return s +} + // SetStatus sets the Status field's value. func (s *DescribeResizeOutput) SetStatus(v string) *DescribeResizeOutput { s.Status = &v @@ -12401,6 +15403,12 @@ func (s *DescribeResizeOutput) SetTargetClusterType(v string) *DescribeResizeOut return s } +// SetTargetEncryptionType sets the TargetEncryptionType field's value. +func (s *DescribeResizeOutput) SetTargetEncryptionType(v string) *DescribeResizeOutput { + s.TargetEncryptionType = &v + return s +} + // SetTargetNodeType sets the TargetNodeType field's value. func (s *DescribeResizeOutput) SetTargetNodeType(v string) *DescribeResizeOutput { s.TargetNodeType = &v @@ -12419,128 +15427,286 @@ func (s *DescribeResizeOutput) SetTotalResizeDataInMegaBytes(v int64) *DescribeR return s } -// The result of the DescribeSnapshotCopyGrants action. -type DescribeSnapshotCopyGrantsInput struct { +// The result of the DescribeSnapshotCopyGrants action. +type DescribeSnapshotCopyGrantsInput struct { + _ struct{} `type:"structure"` + + // An optional parameter that specifies the starting point to return a set of + // response records. When the results of a DescribeSnapshotCopyGrant request + // exceed the value specified in MaxRecords, AWS returns a value in the Marker + // field of the response. You can retrieve the next set of response records + // by providing the returned marker value in the Marker parameter and retrying + // the request. + // + // Constraints: You can specify either the SnapshotCopyGrantName parameter or + // the Marker parameter, but not both. 
+ Marker *string `type:"string"` + + // The maximum number of response records to return in each call. If the number + // of remaining response records exceeds the specified MaxRecords value, a value + // is returned in a marker field of the response. You can retrieve the next + // set of records by retrying the command with the returned marker value. + // + // Default: 100 + // + // Constraints: minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` + + // The name of the snapshot copy grant. + SnapshotCopyGrantName *string `type:"string"` + + // A tag key or keys for which you want to return all matching resources that + // are associated with the specified key or keys. For example, suppose that + // you have resources tagged with keys called owner and environment. If you + // specify both of these tag keys in the request, Amazon Redshift returns a + // response with all resources that have either or both of these tag keys associated + // with them. + TagKeys []*string `locationNameList:"TagKey" type:"list"` + + // A tag value or values for which you want to return all matching resources + // that are associated with the specified value or values. For example, suppose + // that you have resources tagged with values called admin and test. If you + // specify both of these tag values in the request, Amazon Redshift returns + // a response with all resources that have either or both of these tag values + // associated with them. + TagValues []*string `locationNameList:"TagValue" type:"list"` +} + +// String returns the string representation +func (s DescribeSnapshotCopyGrantsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSnapshotCopyGrantsInput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *DescribeSnapshotCopyGrantsInput) SetMarker(v string) *DescribeSnapshotCopyGrantsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeSnapshotCopyGrantsInput) SetMaxRecords(v int64) *DescribeSnapshotCopyGrantsInput { + s.MaxRecords = &v + return s +} + +// SetSnapshotCopyGrantName sets the SnapshotCopyGrantName field's value. +func (s *DescribeSnapshotCopyGrantsInput) SetSnapshotCopyGrantName(v string) *DescribeSnapshotCopyGrantsInput { + s.SnapshotCopyGrantName = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *DescribeSnapshotCopyGrantsInput) SetTagKeys(v []*string) *DescribeSnapshotCopyGrantsInput { + s.TagKeys = v + return s +} + +// SetTagValues sets the TagValues field's value. +func (s *DescribeSnapshotCopyGrantsInput) SetTagValues(v []*string) *DescribeSnapshotCopyGrantsInput { + s.TagValues = v + return s +} + +type DescribeSnapshotCopyGrantsOutput struct { + _ struct{} `type:"structure"` + + // An optional parameter that specifies the starting point to return a set of + // response records. When the results of a DescribeSnapshotCopyGrant request + // exceed the value specified in MaxRecords, AWS returns a value in the Marker + // field of the response. You can retrieve the next set of response records + // by providing the returned marker value in the Marker parameter and retrying + // the request. + // + // Constraints: You can specify either the SnapshotCopyGrantName parameter or + // the Marker parameter, but not both. + Marker *string `type:"string"` + + // The list of SnapshotCopyGrant objects. 
+ SnapshotCopyGrants []*SnapshotCopyGrant `locationNameList:"SnapshotCopyGrant" type:"list"` +} + +// String returns the string representation +func (s DescribeSnapshotCopyGrantsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSnapshotCopyGrantsOutput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *DescribeSnapshotCopyGrantsOutput) SetMarker(v string) *DescribeSnapshotCopyGrantsOutput { + s.Marker = &v + return s +} + +// SetSnapshotCopyGrants sets the SnapshotCopyGrants field's value. +func (s *DescribeSnapshotCopyGrantsOutput) SetSnapshotCopyGrants(v []*SnapshotCopyGrant) *DescribeSnapshotCopyGrantsOutput { + s.SnapshotCopyGrants = v + return s +} + +type DescribeSnapshotSchedulesInput struct { _ struct{} `type:"structure"` - // An optional parameter that specifies the starting point to return a set of - // response records. When the results of a DescribeSnapshotCopyGrant request - // exceed the value specified in MaxRecords, AWS returns a value in the Marker - // field of the response. You can retrieve the next set of response records - // by providing the returned marker value in the Marker parameter and retrying - // the request. - // - // Constraints: You can specify either the SnapshotCopyGrantName parameter or - // the Marker parameter, but not both. + // The unique identifier for the cluster whose snapshot schedules you want to + // view. + ClusterIdentifier *string `type:"string"` + + // A value that indicates the starting point for the next set of response records + // in a subsequent request. If a value is returned in a response, you can retrieve + // the next set of records by providing this returned marker value in the marker + // parameter and retrying the command. If the marker field is empty, all response + // records have been retrieved for the request. Marker *string `type:"string"` - // The maximum number of response records to return in each call. If the number + // The maximum number or response records to return in each call. If the number // of remaining response records exceeds the specified MaxRecords value, a value // is returned in a marker field of the response. You can retrieve the next // set of records by retrying the command with the returned marker value. - // - // Default: 100 - // - // Constraints: minimum 20, maximum 100. MaxRecords *int64 `type:"integer"` - // The name of the snapshot copy grant. - SnapshotCopyGrantName *string `type:"string"` + // A unique identifier for a snapshot schedule. + ScheduleIdentifier *string `type:"string"` - // A tag key or keys for which you want to return all matching resources that - // are associated with the specified key or keys. For example, suppose that - // you have resources tagged with keys called owner and environment. If you - // specify both of these tag keys in the request, Amazon Redshift returns a - // response with all resources that have either or both of these tag keys associated - // with them. + // The key value for a snapshot schedule tag. TagKeys []*string `locationNameList:"TagKey" type:"list"` - // A tag value or values for which you want to return all matching resources - // that are associated with the specified value or values. For example, suppose - // that you have resources tagged with values called admin and test. 
If you - // specify both of these tag values in the request, Amazon Redshift returns - // a response with all resources that have either or both of these tag values - // associated with them. + // The value corresponding to the key of the snapshot schedule tag. TagValues []*string `locationNameList:"TagValue" type:"list"` } // String returns the string representation -func (s DescribeSnapshotCopyGrantsInput) String() string { +func (s DescribeSnapshotSchedulesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeSnapshotCopyGrantsInput) GoString() string { +func (s DescribeSnapshotSchedulesInput) GoString() string { return s.String() } +// SetClusterIdentifier sets the ClusterIdentifier field's value. +func (s *DescribeSnapshotSchedulesInput) SetClusterIdentifier(v string) *DescribeSnapshotSchedulesInput { + s.ClusterIdentifier = &v + return s +} + // SetMarker sets the Marker field's value. -func (s *DescribeSnapshotCopyGrantsInput) SetMarker(v string) *DescribeSnapshotCopyGrantsInput { +func (s *DescribeSnapshotSchedulesInput) SetMarker(v string) *DescribeSnapshotSchedulesInput { s.Marker = &v return s } // SetMaxRecords sets the MaxRecords field's value. -func (s *DescribeSnapshotCopyGrantsInput) SetMaxRecords(v int64) *DescribeSnapshotCopyGrantsInput { +func (s *DescribeSnapshotSchedulesInput) SetMaxRecords(v int64) *DescribeSnapshotSchedulesInput { s.MaxRecords = &v return s } -// SetSnapshotCopyGrantName sets the SnapshotCopyGrantName field's value. -func (s *DescribeSnapshotCopyGrantsInput) SetSnapshotCopyGrantName(v string) *DescribeSnapshotCopyGrantsInput { - s.SnapshotCopyGrantName = &v +// SetScheduleIdentifier sets the ScheduleIdentifier field's value. +func (s *DescribeSnapshotSchedulesInput) SetScheduleIdentifier(v string) *DescribeSnapshotSchedulesInput { + s.ScheduleIdentifier = &v return s } // SetTagKeys sets the TagKeys field's value. -func (s *DescribeSnapshotCopyGrantsInput) SetTagKeys(v []*string) *DescribeSnapshotCopyGrantsInput { +func (s *DescribeSnapshotSchedulesInput) SetTagKeys(v []*string) *DescribeSnapshotSchedulesInput { s.TagKeys = v return s } // SetTagValues sets the TagValues field's value. -func (s *DescribeSnapshotCopyGrantsInput) SetTagValues(v []*string) *DescribeSnapshotCopyGrantsInput { +func (s *DescribeSnapshotSchedulesInput) SetTagValues(v []*string) *DescribeSnapshotSchedulesInput { s.TagValues = v return s } -type DescribeSnapshotCopyGrantsOutput struct { +type DescribeSnapshotSchedulesOutput struct { _ struct{} `type:"structure"` - // An optional parameter that specifies the starting point to return a set of - // response records. When the results of a DescribeSnapshotCopyGrant request - // exceed the value specified in MaxRecords, AWS returns a value in the Marker - // field of the response. You can retrieve the next set of response records - // by providing the returned marker value in the Marker parameter and retrying - // the request. - // - // Constraints: You can specify either the SnapshotCopyGrantName parameter or - // the Marker parameter, but not both. + // A value that indicates the starting point for the next set of response records + // in a subsequent request. If a value is returned in a response, you can retrieve + // the next set of records by providing this returned marker value in the marker + // parameter and retrying the command. If the marker field is empty, all response + // records have been retrieved for the request. 
Marker *string `type:"string"` - // The list of SnapshotCopyGrant objects. - SnapshotCopyGrants []*SnapshotCopyGrant `locationNameList:"SnapshotCopyGrant" type:"list"` + // A list of SnapshotSchedules. + SnapshotSchedules []*SnapshotSchedule `locationNameList:"SnapshotSchedule" type:"list"` } // String returns the string representation -func (s DescribeSnapshotCopyGrantsOutput) String() string { +func (s DescribeSnapshotSchedulesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeSnapshotCopyGrantsOutput) GoString() string { +func (s DescribeSnapshotSchedulesOutput) GoString() string { return s.String() } // SetMarker sets the Marker field's value. -func (s *DescribeSnapshotCopyGrantsOutput) SetMarker(v string) *DescribeSnapshotCopyGrantsOutput { +func (s *DescribeSnapshotSchedulesOutput) SetMarker(v string) *DescribeSnapshotSchedulesOutput { s.Marker = &v return s } -// SetSnapshotCopyGrants sets the SnapshotCopyGrants field's value. -func (s *DescribeSnapshotCopyGrantsOutput) SetSnapshotCopyGrants(v []*SnapshotCopyGrant) *DescribeSnapshotCopyGrantsOutput { - s.SnapshotCopyGrants = v +// SetSnapshotSchedules sets the SnapshotSchedules field's value. +func (s *DescribeSnapshotSchedulesOutput) SetSnapshotSchedules(v []*SnapshotSchedule) *DescribeSnapshotSchedulesOutput { + s.SnapshotSchedules = v + return s +} + +type DescribeStorageInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DescribeStorageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStorageInput) GoString() string { + return s.String() +} + +type DescribeStorageOutput struct { + _ struct{} `type:"structure"` + + // The total amount of storage currently used for snapshots. + TotalBackupSizeInMegaBytes *float64 `type:"double"` + + // The total amount of storage currently provisioned. + TotalProvisionedStorageInMegaBytes *float64 `type:"double"` +} + +// String returns the string representation +func (s DescribeStorageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStorageOutput) GoString() string { + return s.String() +} + +// SetTotalBackupSizeInMegaBytes sets the TotalBackupSizeInMegaBytes field's value. +func (s *DescribeStorageOutput) SetTotalBackupSizeInMegaBytes(v float64) *DescribeStorageOutput { + s.TotalBackupSizeInMegaBytes = &v + return s +} + +// SetTotalProvisionedStorageInMegaBytes sets the TotalProvisionedStorageInMegaBytes field's value. +func (s *DescribeStorageOutput) SetTotalProvisionedStorageInMegaBytes(v float64) *DescribeStorageOutput { + s.TotalProvisionedStorageInMegaBytes = &v return s } @@ -13078,6 +16244,13 @@ type EnableSnapshotCopyInput struct { // DestinationRegion is a required field DestinationRegion *string `type:"string" required:"true"` + // The number of days to retain newly copied snapshots in the destination region + // after they are copied from the source region. If the value is -1, the manual + // snapshot is retained indefinitely. + // + // The value must be either -1 or an integer between 1 and 3,653. + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // The number of days to retain automated snapshots in the destination region // after they are copied from the source region. 
// @@ -13129,6 +16302,12 @@ func (s *EnableSnapshotCopyInput) SetDestinationRegion(v string) *EnableSnapshot return s } +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *EnableSnapshotCopyInput) SetManualSnapshotRetentionPeriod(v int64) *EnableSnapshotCopyInput { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetRetentionPeriod sets the RetentionPeriod field's value. func (s *EnableSnapshotCopyInput) SetRetentionPeriod(v int64) *EnableSnapshotCopyInput { s.RetentionPeriod = &v @@ -13202,7 +16381,7 @@ type Event struct { _ struct{} `type:"structure"` // The date and time of the event. - Date *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Date *time.Time `type:"timestamp"` // A list of the event categories. // @@ -13417,7 +16596,7 @@ type EventSubscription struct { // The date and time the Amazon Redshift event notification subscription was // created. - SubscriptionCreationTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + SubscriptionCreationTime *time.Time `type:"timestamp"` // The list of tags for the event subscription. Tags []*Tag `locationNameList:"Tag" type:"list"` @@ -13670,7 +16849,7 @@ type GetClusterCredentialsOutput struct { DbUser *string `type:"string"` // The date and time the password in DbPassword expires. - Expiration *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Expiration *time.Time `type:"timestamp"` } // String returns the string representation @@ -13701,6 +16880,100 @@ func (s *GetClusterCredentialsOutput) SetExpiration(v time.Time) *GetClusterCred return s } +type GetReservedNodeExchangeOfferingsInput struct { + _ struct{} `type:"structure"` + + // A value that indicates the starting point for the next set of ReservedNodeOfferings. + Marker *string `type:"string"` + + // An integer setting the maximum number of ReservedNodeOfferings to retrieve. + MaxRecords *int64 `type:"integer"` + + // A string representing the node identifier for the DC1 Reserved Node to be + // exchanged. + // + // ReservedNodeId is a required field + ReservedNodeId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s GetReservedNodeExchangeOfferingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetReservedNodeExchangeOfferingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetReservedNodeExchangeOfferingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetReservedNodeExchangeOfferingsInput"} + if s.ReservedNodeId == nil { + invalidParams.Add(request.NewErrParamRequired("ReservedNodeId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMarker sets the Marker field's value. +func (s *GetReservedNodeExchangeOfferingsInput) SetMarker(v string) *GetReservedNodeExchangeOfferingsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *GetReservedNodeExchangeOfferingsInput) SetMaxRecords(v int64) *GetReservedNodeExchangeOfferingsInput { + s.MaxRecords = &v + return s +} + +// SetReservedNodeId sets the ReservedNodeId field's value. 
+func (s *GetReservedNodeExchangeOfferingsInput) SetReservedNodeId(v string) *GetReservedNodeExchangeOfferingsInput { + s.ReservedNodeId = &v + return s +} + +type GetReservedNodeExchangeOfferingsOutput struct { + _ struct{} `type:"structure"` + + // An optional parameter that specifies the starting point for returning a set + // of response records. When the results of a GetReservedNodeExchangeOfferings + // request exceed the value specified in MaxRecords, Amazon Redshift returns + // a value in the marker field of the response. You can retrieve the next set + // of response records by providing the returned marker value in the marker + // parameter and retrying the request. + Marker *string `type:"string"` + + // Returns an array of ReservedNodeOffering objects. + ReservedNodeOfferings []*ReservedNodeOffering `locationNameList:"ReservedNodeOffering" type:"list"` +} + +// String returns the string representation +func (s GetReservedNodeExchangeOfferingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetReservedNodeExchangeOfferingsOutput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *GetReservedNodeExchangeOfferingsOutput) SetMarker(v string) *GetReservedNodeExchangeOfferingsOutput { + s.Marker = &v + return s +} + +// SetReservedNodeOfferings sets the ReservedNodeOfferings field's value. +func (s *GetReservedNodeExchangeOfferingsOutput) SetReservedNodeOfferings(v []*ReservedNodeOffering) *GetReservedNodeExchangeOfferingsOutput { + s.ReservedNodeOfferings = v + return s +} + // Returns information about an HSM client certificate. The certificate is stored // in a secure Hardware Storage Module (HSM), and used by the Amazon Redshift // cluster to encrypt data files. @@ -13909,61 +17182,185 @@ type LoggingStatus struct { LastFailureMessage *string `type:"string"` // The last time when logs failed to be delivered. - LastFailureTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastFailureTime *time.Time `type:"timestamp"` // The last time that logs were delivered. - LastSuccessfulDeliveryTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastSuccessfulDeliveryTime *time.Time `type:"timestamp"` // true if logging is on, false if logging is off. LoggingEnabled *bool `type:"boolean"` - // The prefix applied to the log file names. - S3KeyPrefix *string `type:"string"` + // The prefix applied to the log file names. + S3KeyPrefix *string `type:"string"` +} + +// String returns the string representation +func (s LoggingStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoggingStatus) GoString() string { + return s.String() +} + +// SetBucketName sets the BucketName field's value. +func (s *LoggingStatus) SetBucketName(v string) *LoggingStatus { + s.BucketName = &v + return s +} + +// SetLastFailureMessage sets the LastFailureMessage field's value. +func (s *LoggingStatus) SetLastFailureMessage(v string) *LoggingStatus { + s.LastFailureMessage = &v + return s +} + +// SetLastFailureTime sets the LastFailureTime field's value. +func (s *LoggingStatus) SetLastFailureTime(v time.Time) *LoggingStatus { + s.LastFailureTime = &v + return s +} + +// SetLastSuccessfulDeliveryTime sets the LastSuccessfulDeliveryTime field's value. 
+func (s *LoggingStatus) SetLastSuccessfulDeliveryTime(v time.Time) *LoggingStatus { + s.LastSuccessfulDeliveryTime = &v + return s +} + +// SetLoggingEnabled sets the LoggingEnabled field's value. +func (s *LoggingStatus) SetLoggingEnabled(v bool) *LoggingStatus { + s.LoggingEnabled = &v + return s +} + +// SetS3KeyPrefix sets the S3KeyPrefix field's value. +func (s *LoggingStatus) SetS3KeyPrefix(v string) *LoggingStatus { + s.S3KeyPrefix = &v + return s +} + +// Defines a maintenance track that determines which Amazon Redshift version +// to apply during a maintenance window. If the value for MaintenanceTrack is +// current, the cluster is updated to the most recently certified maintenance +// release. If the value is trailing, the cluster is updated to the previously +// certified maintenance release. +type MaintenanceTrack struct { + _ struct{} `type:"structure"` + + // The version number for the cluster release. + DatabaseVersion *string `type:"string"` + + // The name of the maintenance track. Possible values are current and trailing. + MaintenanceTrackName *string `type:"string"` + + // An array of UpdateTarget objects to update with the maintenance track. + UpdateTargets []*UpdateTarget `locationNameList:"UpdateTarget" type:"list"` +} + +// String returns the string representation +func (s MaintenanceTrack) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceTrack) GoString() string { + return s.String() +} + +// SetDatabaseVersion sets the DatabaseVersion field's value. +func (s *MaintenanceTrack) SetDatabaseVersion(v string) *MaintenanceTrack { + s.DatabaseVersion = &v + return s +} + +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *MaintenanceTrack) SetMaintenanceTrackName(v string) *MaintenanceTrack { + s.MaintenanceTrackName = &v + return s +} + +// SetUpdateTargets sets the UpdateTargets field's value. +func (s *MaintenanceTrack) SetUpdateTargets(v []*UpdateTarget) *MaintenanceTrack { + s.UpdateTargets = v + return s +} + +type ModifyClusterDbRevisionInput struct { + _ struct{} `type:"structure"` + + // The unique identifier of a cluster whose database revision you want to modify. + // + // Example: examplecluster + // + // ClusterIdentifier is a required field + ClusterIdentifier *string `type:"string" required:"true"` + + // The identifier of the database revision. You can retrieve this value from + // the response to the DescribeClusterDbRevisions request. + // + // RevisionTarget is a required field + RevisionTarget *string `type:"string" required:"true"` } // String returns the string representation -func (s LoggingStatus) String() string { +func (s ModifyClusterDbRevisionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s LoggingStatus) GoString() string { +func (s ModifyClusterDbRevisionInput) GoString() string { return s.String() } -// SetBucketName sets the BucketName field's value. -func (s *LoggingStatus) SetBucketName(v string) *LoggingStatus { - s.BucketName = &v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ModifyClusterDbRevisionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyClusterDbRevisionInput"} + if s.ClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterIdentifier")) + } + if s.RevisionTarget == nil { + invalidParams.Add(request.NewErrParamRequired("RevisionTarget")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetLastFailureMessage sets the LastFailureMessage field's value. -func (s *LoggingStatus) SetLastFailureMessage(v string) *LoggingStatus { - s.LastFailureMessage = &v +// SetClusterIdentifier sets the ClusterIdentifier field's value. +func (s *ModifyClusterDbRevisionInput) SetClusterIdentifier(v string) *ModifyClusterDbRevisionInput { + s.ClusterIdentifier = &v return s } -// SetLastFailureTime sets the LastFailureTime field's value. -func (s *LoggingStatus) SetLastFailureTime(v time.Time) *LoggingStatus { - s.LastFailureTime = &v +// SetRevisionTarget sets the RevisionTarget field's value. +func (s *ModifyClusterDbRevisionInput) SetRevisionTarget(v string) *ModifyClusterDbRevisionInput { + s.RevisionTarget = &v return s } -// SetLastSuccessfulDeliveryTime sets the LastSuccessfulDeliveryTime field's value. -func (s *LoggingStatus) SetLastSuccessfulDeliveryTime(v time.Time) *LoggingStatus { - s.LastSuccessfulDeliveryTime = &v - return s +type ModifyClusterDbRevisionOutput struct { + _ struct{} `type:"structure"` + + // Describes a cluster. + Cluster *Cluster `type:"structure"` } -// SetLoggingEnabled sets the LoggingEnabled field's value. -func (s *LoggingStatus) SetLoggingEnabled(v bool) *LoggingStatus { - s.LoggingEnabled = &v - return s +// String returns the string representation +func (s ModifyClusterDbRevisionOutput) String() string { + return awsutil.Prettify(s) } -// SetS3KeyPrefix sets the S3KeyPrefix field's value. -func (s *LoggingStatus) SetS3KeyPrefix(v string) *LoggingStatus { - s.S3KeyPrefix = &v +// GoString returns the string representation +func (s ModifyClusterDbRevisionOutput) GoString() string { + return s.String() +} + +// SetCluster sets the Cluster field's value. +func (s *ModifyClusterDbRevisionOutput) SetCluster(v *Cluster) *ModifyClusterDbRevisionOutput { + s.Cluster = v return s } @@ -14135,6 +17532,13 @@ type ModifyClusterInput struct { // in the Amazon Redshift Cluster Management Guide. ElasticIp *string `type:"string"` + // Indicates whether the cluster is encrypted. If the cluster is encrypted and + // you provide a value for the KmsKeyId parameter, we will encrypt the cluster + // with the provided KmsKeyId. If you don't provide a KmsKeyId, we will encrypt + // with the default key. In the China region we will use legacy encryption if + // you specify that the cluster is encrypted. + Encrypted *bool `type:"boolean"` + // An option that specifies whether to create the cluster with enhanced VPC // routing enabled. To create a cluster that uses enhanced VPC routing, the // cluster must be in a VPC. For more information, see Enhanced VPC Routing @@ -14154,6 +17558,26 @@ type ModifyClusterInput struct { // the Amazon Redshift cluster can use to retrieve and store keys in an HSM. HsmConfigurationIdentifier *string `type:"string"` + // The AWS Key Management Service (KMS) key ID of the encryption key that you + // want to use to encrypt data in the cluster. + KmsKeyId *string `type:"string"` + + // The name for the maintenance track that you want to assign for the cluster. + // This name change is asynchronous. 
The new track name stays in the PendingModifiedValues + // for the cluster until the next maintenance window. When the maintenance track + // changes, the cluster is switched to the latest cluster release available + // for the maintenance track. At this point, the maintenance track name is applied. + MaintenanceTrackName *string `type:"string"` + + // The default for number of days that a newly created manual snapshot is retained. + // If the value is -1, the manual snapshot is retained indefinitely. This value + // will not retroactively change the retention periods of existing manual snapshots + // + // The value must be either -1 or an integer between 1 and 3,653. + // + // The default value is -1. + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // The new password for the cluster master user. This change is asynchronously // applied as soon as possible. Between the time of the request and the completion // of the request, the MasterUserPassword element exists in the PendingModifiedValues @@ -14248,7 +17672,7 @@ type ModifyClusterInput struct { PubliclyAccessible *bool `type:"boolean"` // A list of virtual private cloud (VPC) security groups to be associated with - // the cluster. + // the cluster. This change is asynchronously applied as soon as possible. VpcSecurityGroupIds []*string `locationNameList:"VpcSecurityGroupId" type:"list"` } @@ -14323,6 +17747,12 @@ func (s *ModifyClusterInput) SetElasticIp(v string) *ModifyClusterInput { return s } +// SetEncrypted sets the Encrypted field's value. +func (s *ModifyClusterInput) SetEncrypted(v bool) *ModifyClusterInput { + s.Encrypted = &v + return s +} + // SetEnhancedVpcRouting sets the EnhancedVpcRouting field's value. func (s *ModifyClusterInput) SetEnhancedVpcRouting(v bool) *ModifyClusterInput { s.EnhancedVpcRouting = &v @@ -14341,6 +17771,24 @@ func (s *ModifyClusterInput) SetHsmConfigurationIdentifier(v string) *ModifyClus return s } +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *ModifyClusterInput) SetKmsKeyId(v string) *ModifyClusterInput { + s.KmsKeyId = &v + return s +} + +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *ModifyClusterInput) SetMaintenanceTrackName(v string) *ModifyClusterInput { + s.MaintenanceTrackName = &v + return s +} + +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *ModifyClusterInput) SetManualSnapshotRetentionPeriod(v int64) *ModifyClusterInput { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetMasterUserPassword sets the MasterUserPassword field's value. func (s *ModifyClusterInput) SetMasterUserPassword(v string) *ModifyClusterInput { s.MasterUserPassword = &v @@ -14383,6 +17831,115 @@ func (s *ModifyClusterInput) SetVpcSecurityGroupIds(v []*string) *ModifyClusterI return s } +type ModifyClusterMaintenanceInput struct { + _ struct{} `type:"structure"` + + // A unique identifier for the cluster. + // + // ClusterIdentifier is a required field + ClusterIdentifier *string `type:"string" required:"true"` + + // A boolean indicating whether to enable the deferred maintenance window. + DeferMaintenance *bool `type:"boolean"` + + // An integer indicating the duration of the maintenance window in days. If + // you specify a duration, you can't specify an end time. The duration must + // be 14 days or less. + DeferMaintenanceDuration *int64 `type:"integer"` + + // A timestamp indicating end time for the deferred maintenance window. 
If you + // specify an end time, you can't specify a duration. + DeferMaintenanceEndTime *time.Time `type:"timestamp"` + + // A unique identifier for the deferred maintenance window. + DeferMaintenanceIdentifier *string `type:"string"` + + // A timestamp indicating the start time for the deferred maintenance window. + DeferMaintenanceStartTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s ModifyClusterMaintenanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyClusterMaintenanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyClusterMaintenanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyClusterMaintenanceInput"} + if s.ClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClusterIdentifier sets the ClusterIdentifier field's value. +func (s *ModifyClusterMaintenanceInput) SetClusterIdentifier(v string) *ModifyClusterMaintenanceInput { + s.ClusterIdentifier = &v + return s +} + +// SetDeferMaintenance sets the DeferMaintenance field's value. +func (s *ModifyClusterMaintenanceInput) SetDeferMaintenance(v bool) *ModifyClusterMaintenanceInput { + s.DeferMaintenance = &v + return s +} + +// SetDeferMaintenanceDuration sets the DeferMaintenanceDuration field's value. +func (s *ModifyClusterMaintenanceInput) SetDeferMaintenanceDuration(v int64) *ModifyClusterMaintenanceInput { + s.DeferMaintenanceDuration = &v + return s +} + +// SetDeferMaintenanceEndTime sets the DeferMaintenanceEndTime field's value. +func (s *ModifyClusterMaintenanceInput) SetDeferMaintenanceEndTime(v time.Time) *ModifyClusterMaintenanceInput { + s.DeferMaintenanceEndTime = &v + return s +} + +// SetDeferMaintenanceIdentifier sets the DeferMaintenanceIdentifier field's value. +func (s *ModifyClusterMaintenanceInput) SetDeferMaintenanceIdentifier(v string) *ModifyClusterMaintenanceInput { + s.DeferMaintenanceIdentifier = &v + return s +} + +// SetDeferMaintenanceStartTime sets the DeferMaintenanceStartTime field's value. +func (s *ModifyClusterMaintenanceInput) SetDeferMaintenanceStartTime(v time.Time) *ModifyClusterMaintenanceInput { + s.DeferMaintenanceStartTime = &v + return s +} + +type ModifyClusterMaintenanceOutput struct { + _ struct{} `type:"structure"` + + // Describes a cluster. + Cluster *Cluster `type:"structure"` +} + +// String returns the string representation +func (s ModifyClusterMaintenanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyClusterMaintenanceOutput) GoString() string { + return s.String() +} + +// SetCluster sets the Cluster field's value. 
+func (s *ModifyClusterMaintenanceOutput) SetCluster(v *Cluster) *ModifyClusterMaintenanceOutput { + s.Cluster = v + return s +} + type ModifyClusterOutput struct { _ struct{} `type:"structure"` @@ -14428,23 +17985,161 @@ type ModifyClusterParameterGroupInput struct { } // String returns the string representation -func (s ModifyClusterParameterGroupInput) String() string { +func (s ModifyClusterParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyClusterParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyClusterParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyClusterParameterGroupInput"} + if s.ParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ParameterGroupName")) + } + if s.Parameters == nil { + invalidParams.Add(request.NewErrParamRequired("Parameters")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetParameterGroupName sets the ParameterGroupName field's value. +func (s *ModifyClusterParameterGroupInput) SetParameterGroupName(v string) *ModifyClusterParameterGroupInput { + s.ParameterGroupName = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *ModifyClusterParameterGroupInput) SetParameters(v []*Parameter) *ModifyClusterParameterGroupInput { + s.Parameters = v + return s +} + +type ModifyClusterSnapshotInput struct { + _ struct{} `type:"structure"` + + // A Boolean option to override an exception if the retention period has already + // passed. + Force *bool `type:"boolean"` + + // The number of days that a manual snapshot is retained. If the value is -1, + // the manual snapshot is retained indefinitely. + // + // If the manual snapshot falls outside of the new retention period, you can + // specify the force option to immediately delete the snapshot. + // + // The value must be either -1 or an integer between 1 and 3,653. + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + + // The identifier of the snapshot whose setting you want to modify. + // + // SnapshotIdentifier is a required field + SnapshotIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ModifyClusterSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyClusterSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyClusterSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyClusterSnapshotInput"} + if s.SnapshotIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("SnapshotIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetForce sets the Force field's value. +func (s *ModifyClusterSnapshotInput) SetForce(v bool) *ModifyClusterSnapshotInput { + s.Force = &v + return s +} + +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *ModifyClusterSnapshotInput) SetManualSnapshotRetentionPeriod(v int64) *ModifyClusterSnapshotInput { + s.ManualSnapshotRetentionPeriod = &v + return s +} + +// SetSnapshotIdentifier sets the SnapshotIdentifier field's value. 
+func (s *ModifyClusterSnapshotInput) SetSnapshotIdentifier(v string) *ModifyClusterSnapshotInput { + s.SnapshotIdentifier = &v + return s +} + +type ModifyClusterSnapshotOutput struct { + _ struct{} `type:"structure"` + + // Describes a snapshot. + Snapshot *Snapshot `type:"structure"` +} + +// String returns the string representation +func (s ModifyClusterSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyClusterSnapshotOutput) GoString() string { + return s.String() +} + +// SetSnapshot sets the Snapshot field's value. +func (s *ModifyClusterSnapshotOutput) SetSnapshot(v *Snapshot) *ModifyClusterSnapshotOutput { + s.Snapshot = v + return s +} + +type ModifyClusterSnapshotScheduleInput struct { + _ struct{} `type:"structure"` + + // A unique identifier for the cluster whose snapshot schedule you want to modify. + // + // ClusterIdentifier is a required field + ClusterIdentifier *string `type:"string" required:"true"` + + // A boolean to indicate whether to remove the assoiciation between the cluster + // and the schedule. + DisassociateSchedule *bool `type:"boolean"` + + // A unique alphanumeric identifier for the schedule you want to associate with + // the cluster. + ScheduleIdentifier *string `type:"string"` +} + +// String returns the string representation +func (s ModifyClusterSnapshotScheduleInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ModifyClusterParameterGroupInput) GoString() string { +func (s ModifyClusterSnapshotScheduleInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ModifyClusterParameterGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ModifyClusterParameterGroupInput"} - if s.ParameterGroupName == nil { - invalidParams.Add(request.NewErrParamRequired("ParameterGroupName")) - } - if s.Parameters == nil { - invalidParams.Add(request.NewErrParamRequired("Parameters")) +func (s *ModifyClusterSnapshotScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyClusterSnapshotScheduleInput"} + if s.ClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterIdentifier")) } if invalidParams.Len() > 0 { @@ -14453,18 +18148,38 @@ func (s *ModifyClusterParameterGroupInput) Validate() error { return nil } -// SetParameterGroupName sets the ParameterGroupName field's value. -func (s *ModifyClusterParameterGroupInput) SetParameterGroupName(v string) *ModifyClusterParameterGroupInput { - s.ParameterGroupName = &v +// SetClusterIdentifier sets the ClusterIdentifier field's value. +func (s *ModifyClusterSnapshotScheduleInput) SetClusterIdentifier(v string) *ModifyClusterSnapshotScheduleInput { + s.ClusterIdentifier = &v return s } -// SetParameters sets the Parameters field's value. -func (s *ModifyClusterParameterGroupInput) SetParameters(v []*Parameter) *ModifyClusterParameterGroupInput { - s.Parameters = v +// SetDisassociateSchedule sets the DisassociateSchedule field's value. +func (s *ModifyClusterSnapshotScheduleInput) SetDisassociateSchedule(v bool) *ModifyClusterSnapshotScheduleInput { + s.DisassociateSchedule = &v + return s +} + +// SetScheduleIdentifier sets the ScheduleIdentifier field's value. 
+func (s *ModifyClusterSnapshotScheduleInput) SetScheduleIdentifier(v string) *ModifyClusterSnapshotScheduleInput { + s.ScheduleIdentifier = &v return s } +type ModifyClusterSnapshotScheduleOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ModifyClusterSnapshotScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyClusterSnapshotScheduleOutput) GoString() string { + return s.String() +} + type ModifyClusterSubnetGroupInput struct { _ struct{} `type:"structure"` @@ -14560,7 +18275,7 @@ type ModifyEventSubscriptionInput struct { // Specifies the Amazon Redshift event categories to be published by the event // notification subscription. // - // Values: Configuration, Management, Monitoring, Security + // Values: configuration, management, monitoring, security EventCategories []*string `locationNameList:"EventCategory" type:"list"` // Specifies the Amazon Redshift event severity to be published by the event @@ -14692,7 +18407,8 @@ type ModifySnapshotCopyRetentionPeriodInput struct { _ struct{} `type:"structure"` // The unique identifier of the cluster for which you want to change the retention - // period for automated snapshots that are copied to a destination region. + // period for either automated or manual snapshots that are copied to a destination + // region. // // Constraints: Must be the valid name of an existing cluster that has cross-region // snapshot copy enabled. @@ -14700,15 +18416,30 @@ type ModifySnapshotCopyRetentionPeriodInput struct { // ClusterIdentifier is a required field ClusterIdentifier *string `type:"string" required:"true"` + // Indicates whether to apply the snapshot retention period to newly copied + // manual snapshots instead of automated snapshots. + Manual *bool `type:"boolean"` + // The number of days to retain automated snapshots in the destination region // after they are copied from the source region. // + // By default, this only changes the retention period of copied automated snapshots. + // // If you decrease the retention period for automated snapshots that are copied // to a destination region, Amazon Redshift will delete any existing automated // snapshots that were copied to the destination region and that fall outside // of the new retention period. // - // Constraints: Must be at least 1 and no more than 35. + // Constraints: Must be at least 1 and no more than 35 for automated snapshots. + // + // If you specify the manual option, only newly copied manual snapshots will + // have the new retention period. + // + // If you specify the value of -1 newly copied manual snapshots are retained + // indefinitely. + // + // Constraints: The number of days must be either -1 or an integer between 1 + // and 3,653 for manual snapshots. // // RetentionPeriod is a required field RetentionPeriod *int64 `type:"integer" required:"true"` @@ -14746,6 +18477,12 @@ func (s *ModifySnapshotCopyRetentionPeriodInput) SetClusterIdentifier(v string) return s } +// SetManual sets the Manual field's value. +func (s *ModifySnapshotCopyRetentionPeriodInput) SetManual(v bool) *ModifySnapshotCopyRetentionPeriodInput { + s.Manual = &v + return s +} + // SetRetentionPeriod sets the RetentionPeriod field's value. 
func (s *ModifySnapshotCopyRetentionPeriodInput) SetRetentionPeriod(v int64) *ModifySnapshotCopyRetentionPeriodInput { s.RetentionPeriod = &v @@ -14775,6 +18512,119 @@ func (s *ModifySnapshotCopyRetentionPeriodOutput) SetCluster(v *Cluster) *Modify return s } +type ModifySnapshotScheduleInput struct { + _ struct{} `type:"structure"` + + // An updated list of schedule definitions. A schedule definition is made up + // of schedule expressions. For example, "cron(30 12 *)" or "rate(12 hours)". + // + // ScheduleDefinitions is a required field + ScheduleDefinitions []*string `locationNameList:"ScheduleDefinition" type:"list" required:"true"` + + // A unique alphanumeric identifier of the schedule to modify. + // + // ScheduleIdentifier is a required field + ScheduleIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ModifySnapshotScheduleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifySnapshotScheduleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifySnapshotScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifySnapshotScheduleInput"} + if s.ScheduleDefinitions == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduleDefinitions")) + } + if s.ScheduleIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ScheduleIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetScheduleDefinitions sets the ScheduleDefinitions field's value. +func (s *ModifySnapshotScheduleInput) SetScheduleDefinitions(v []*string) *ModifySnapshotScheduleInput { + s.ScheduleDefinitions = v + return s +} + +// SetScheduleIdentifier sets the ScheduleIdentifier field's value. +func (s *ModifySnapshotScheduleInput) SetScheduleIdentifier(v string) *ModifySnapshotScheduleInput { + s.ScheduleIdentifier = &v + return s +} + +// Describes a snapshot schedule. You can set a regular interval for creating +// snapshots of a cluster. You can also schedule snapshots for specific dates. +type ModifySnapshotScheduleOutput struct { + _ struct{} `type:"structure"` + + NextInvocations []*time.Time `locationNameList:"SnapshotTime" type:"list"` + + // A list of ScheduleDefinitions + ScheduleDefinitions []*string `locationNameList:"ScheduleDefinition" type:"list"` + + // The description of the schedule. + ScheduleDescription *string `type:"string"` + + // A unique identifier for the schedule. + ScheduleIdentifier *string `type:"string"` + + // An optional set of tags describing the schedule. + Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s ModifySnapshotScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifySnapshotScheduleOutput) GoString() string { + return s.String() +} + +// SetNextInvocations sets the NextInvocations field's value. +func (s *ModifySnapshotScheduleOutput) SetNextInvocations(v []*time.Time) *ModifySnapshotScheduleOutput { + s.NextInvocations = v + return s +} + +// SetScheduleDefinitions sets the ScheduleDefinitions field's value. +func (s *ModifySnapshotScheduleOutput) SetScheduleDefinitions(v []*string) *ModifySnapshotScheduleOutput { + s.ScheduleDefinitions = v + return s +} + +// SetScheduleDescription sets the ScheduleDescription field's value. 
+func (s *ModifySnapshotScheduleOutput) SetScheduleDescription(v string) *ModifySnapshotScheduleOutput { + s.ScheduleDescription = &v + return s +} + +// SetScheduleIdentifier sets the ScheduleIdentifier field's value. +func (s *ModifySnapshotScheduleOutput) SetScheduleIdentifier(v string) *ModifySnapshotScheduleOutput { + s.ScheduleIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *ModifySnapshotScheduleOutput) SetTags(v []*Tag) *ModifySnapshotScheduleOutput { + s.Tags = v + return s +} + // Describes an orderable cluster option. type OrderableClusterOption struct { _ struct{} `type:"structure"` @@ -14945,6 +18795,10 @@ type PendingModifiedValues struct { // The pending or in-progress change of the service version. ClusterVersion *string `type:"string"` + // The encryption type for a cluster. Possible values are: KMS and None. For + // the China region the possible values are None, and Legacy. + EncryptionType *string `type:"string"` + // An option that specifies whether to create the cluster with enhanced VPC // routing enabled. To create a cluster that uses enhanced VPC routing, the // cluster must be in a VPC. For more information, see Enhanced VPC Routing @@ -14956,6 +18810,10 @@ type PendingModifiedValues struct { // Default: false EnhancedVpcRouting *bool `type:"boolean"` + // The name of the maintenance track that the cluster will change to during + // the next maintenance window. + MaintenanceTrackName *string `type:"string"` + // The pending or in-progress change of the master user password for the cluster. MasterUserPassword *string `type:"string"` @@ -15004,12 +18862,24 @@ func (s *PendingModifiedValues) SetClusterVersion(v string) *PendingModifiedValu return s } +// SetEncryptionType sets the EncryptionType field's value. +func (s *PendingModifiedValues) SetEncryptionType(v string) *PendingModifiedValues { + s.EncryptionType = &v + return s +} + // SetEnhancedVpcRouting sets the EnhancedVpcRouting field's value. func (s *PendingModifiedValues) SetEnhancedVpcRouting(v bool) *PendingModifiedValues { s.EnhancedVpcRouting = &v return s } +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *PendingModifiedValues) SetMaintenanceTrackName(v string) *PendingModifiedValues { + s.MaintenanceTrackName = &v + return s +} + // SetMasterUserPassword sets the MasterUserPassword field's value. func (s *PendingModifiedValues) SetMasterUserPassword(v string) *PendingModifiedValues { s.MasterUserPassword = &v @@ -15239,7 +19109,7 @@ type ReservedNode struct { // The time the reservation started. You purchase a reserved node offering for // a duration. This is the start time of that duration. - StartTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + StartTime *time.Time `type:"timestamp"` // The state of the reserved compute node. // @@ -15252,6 +19122,11 @@ type ReservedNode struct { // use. // // * payment-failed-Payment failed for the purchase attempt. + // + // * retired-The reserved node is no longer available. + // + // * exchanging-The owner is exchanging the reserved node for another reserved + // node. State *string `type:"string"` // The hourly rate Amazon Redshift charges you for this reserved node. @@ -15440,71 +19315,208 @@ func (s *ReservedNodeOffering) SetReservedNodeOfferingType(v string) *ReservedNo return s } -// SetUsagePrice sets the UsagePrice field's value. 
-func (s *ReservedNodeOffering) SetUsagePrice(v float64) *ReservedNodeOffering { - s.UsagePrice = &v +// SetUsagePrice sets the UsagePrice field's value. +func (s *ReservedNodeOffering) SetUsagePrice(v float64) *ReservedNodeOffering { + s.UsagePrice = &v + return s +} + +type ResetClusterParameterGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the cluster parameter group to be reset. + // + // ParameterGroupName is a required field + ParameterGroupName *string `type:"string" required:"true"` + + // An array of names of parameters to be reset. If ResetAllParameters option + // is not used, then at least one parameter name must be supplied. + // + // Constraints: A maximum of 20 parameters can be reset in a single request. + Parameters []*Parameter `locationNameList:"Parameter" type:"list"` + + // If true, all parameters in the specified parameter group will be reset to + // their default values. + // + // Default: true + ResetAllParameters *bool `type:"boolean"` +} + +// String returns the string representation +func (s ResetClusterParameterGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetClusterParameterGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResetClusterParameterGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResetClusterParameterGroupInput"} + if s.ParameterGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("ParameterGroupName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetParameterGroupName sets the ParameterGroupName field's value. +func (s *ResetClusterParameterGroupInput) SetParameterGroupName(v string) *ResetClusterParameterGroupInput { + s.ParameterGroupName = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *ResetClusterParameterGroupInput) SetParameters(v []*Parameter) *ResetClusterParameterGroupInput { + s.Parameters = v + return s +} + +// SetResetAllParameters sets the ResetAllParameters field's value. +func (s *ResetClusterParameterGroupInput) SetResetAllParameters(v bool) *ResetClusterParameterGroupInput { + s.ResetAllParameters = &v + return s +} + +type ResizeClusterInput struct { + _ struct{} `type:"structure"` + + // A boolean value indicating whether the resize operation is using the classic + // resize process. If you don't provide this parameter or set the value to false, + // the resize type is elastic. + Classic *bool `type:"boolean"` + + // The unique identifier for the cluster to resize. + // + // ClusterIdentifier is a required field + ClusterIdentifier *string `type:"string" required:"true"` + + // The new cluster type for the specified cluster. + ClusterType *string `type:"string"` + + // The new node type for the nodes you are adding. + NodeType *string `type:"string"` + + // The new number of nodes for the cluster. + // + // NumberOfNodes is a required field + NumberOfNodes *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s ResizeClusterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResizeClusterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ResizeClusterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResizeClusterInput"} + if s.ClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("ClusterIdentifier")) + } + if s.NumberOfNodes == nil { + invalidParams.Add(request.NewErrParamRequired("NumberOfNodes")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClassic sets the Classic field's value. +func (s *ResizeClusterInput) SetClassic(v bool) *ResizeClusterInput { + s.Classic = &v + return s +} + +// SetClusterIdentifier sets the ClusterIdentifier field's value. +func (s *ResizeClusterInput) SetClusterIdentifier(v string) *ResizeClusterInput { + s.ClusterIdentifier = &v + return s +} + +// SetClusterType sets the ClusterType field's value. +func (s *ResizeClusterInput) SetClusterType(v string) *ResizeClusterInput { + s.ClusterType = &v + return s +} + +// SetNodeType sets the NodeType field's value. +func (s *ResizeClusterInput) SetNodeType(v string) *ResizeClusterInput { + s.NodeType = &v + return s +} + +// SetNumberOfNodes sets the NumberOfNodes field's value. +func (s *ResizeClusterInput) SetNumberOfNodes(v int64) *ResizeClusterInput { + s.NumberOfNodes = &v + return s +} + +type ResizeClusterOutput struct { + _ struct{} `type:"structure"` + + // Describes a cluster. + Cluster *Cluster `type:"structure"` +} + +// String returns the string representation +func (s ResizeClusterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResizeClusterOutput) GoString() string { + return s.String() +} + +// SetCluster sets the Cluster field's value. +func (s *ResizeClusterOutput) SetCluster(v *Cluster) *ResizeClusterOutput { + s.Cluster = v return s } -type ResetClusterParameterGroupInput struct { +// Describes a resize operation. +type ResizeInfo struct { _ struct{} `type:"structure"` - // The name of the cluster parameter group to be reset. - // - // ParameterGroupName is a required field - ParameterGroupName *string `type:"string" required:"true"` - - // An array of names of parameters to be reset. If ResetAllParameters option - // is not used, then at least one parameter name must be supplied. - // - // Constraints: A maximum of 20 parameters can be reset in a single request. - Parameters []*Parameter `locationNameList:"Parameter" type:"list"` + // A boolean value indicating if the resize operation can be cancelled. + AllowCancelResize *bool `type:"boolean"` - // If true, all parameters in the specified parameter group will be reset to - // their default values. - // - // Default: true - ResetAllParameters *bool `type:"boolean"` + // Returns the value ClassicResize. + ResizeType *string `type:"string"` } // String returns the string representation -func (s ResetClusterParameterGroupInput) String() string { +func (s ResizeInfo) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ResetClusterParameterGroupInput) GoString() string { +func (s ResizeInfo) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *ResetClusterParameterGroupInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ResetClusterParameterGroupInput"} - if s.ParameterGroupName == nil { - invalidParams.Add(request.NewErrParamRequired("ParameterGroupName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetParameterGroupName sets the ParameterGroupName field's value. -func (s *ResetClusterParameterGroupInput) SetParameterGroupName(v string) *ResetClusterParameterGroupInput { - s.ParameterGroupName = &v - return s -} - -// SetParameters sets the Parameters field's value. -func (s *ResetClusterParameterGroupInput) SetParameters(v []*Parameter) *ResetClusterParameterGroupInput { - s.Parameters = v +// SetAllowCancelResize sets the AllowCancelResize field's value. +func (s *ResizeInfo) SetAllowCancelResize(v bool) *ResizeInfo { + s.AllowCancelResize = &v return s } -// SetResetAllParameters sets the ResetAllParameters field's value. -func (s *ResetClusterParameterGroupInput) SetResetAllParameters(v bool) *ResetClusterParameterGroupInput { - s.ResetAllParameters = &v +// SetResizeType sets the ResizeType field's value. +func (s *ResizeInfo) SetResizeType(v string) *ResizeInfo { + s.ResizeType = &v return s } @@ -15616,6 +19628,17 @@ type RestoreFromClusterSnapshotInput struct { // snapshot. KmsKeyId *string `type:"string"` + // The name of the maintenance track for the restored cluster. When you take + // a snapshot, the snapshot inherits the MaintenanceTrack value from the cluster. + // The snapshot might be on a different track than the cluster that was the + // source for the snapshot. For example, suppose that you take a snapshot of + // a cluster that is on the current track and then change the cluster to be + // on the trailing track. In this case, the snapshot and the source cluster + // are on different tracks. + MaintenanceTrackName *string `type:"string"` + + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // The node type that the restored cluster will be provisioned with. // // Default: The node type of the cluster from which the snapshot was taken. @@ -15673,6 +19696,9 @@ type RestoreFromClusterSnapshotInput struct { // SnapshotIdentifier is a required field SnapshotIdentifier *string `type:"string" required:"true"` + // A unique identifier for the snapshot schedule. + SnapshotScheduleIdentifier *string `type:"string"` + // A list of Virtual Private Cloud (VPC) security groups to be associated with // the cluster. // @@ -15792,6 +19818,18 @@ func (s *RestoreFromClusterSnapshotInput) SetKmsKeyId(v string) *RestoreFromClus return s } +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *RestoreFromClusterSnapshotInput) SetMaintenanceTrackName(v string) *RestoreFromClusterSnapshotInput { + s.MaintenanceTrackName = &v + return s +} + +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *RestoreFromClusterSnapshotInput) SetManualSnapshotRetentionPeriod(v int64) *RestoreFromClusterSnapshotInput { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetNodeType sets the NodeType field's value. func (s *RestoreFromClusterSnapshotInput) SetNodeType(v string) *RestoreFromClusterSnapshotInput { s.NodeType = &v @@ -15834,6 +19872,12 @@ func (s *RestoreFromClusterSnapshotInput) SetSnapshotIdentifier(v string) *Resto return s } +// SetSnapshotScheduleIdentifier sets the SnapshotScheduleIdentifier field's value. 
+func (s *RestoreFromClusterSnapshotInput) SetSnapshotScheduleIdentifier(v string) *RestoreFromClusterSnapshotInput { + s.SnapshotScheduleIdentifier = &v + return s +} + // SetVpcSecurityGroupIds sets the VpcSecurityGroupIds field's value. func (s *RestoreFromClusterSnapshotInput) SetVpcSecurityGroupIds(v []*string) *RestoreFromClusterSnapshotInput { s.VpcSecurityGroupIds = v @@ -16084,6 +20128,50 @@ func (s *RestoreTableFromClusterSnapshotOutput) SetTableRestoreStatus(v *TableRe return s } +// Describes a RevisionTarget. +type RevisionTarget struct { + _ struct{} `type:"structure"` + + // A unique string that identifies the version to update the cluster to. You + // can use this value in ModifyClusterDbRevision. + DatabaseRevision *string `type:"string"` + + // The date on which the database revision was released. + DatabaseRevisionReleaseDate *time.Time `type:"timestamp"` + + // A string that describes the changes and features that will be applied to + // the cluster when it is updated to the corresponding ClusterDbRevision. + Description *string `type:"string"` +} + +// String returns the string representation +func (s RevisionTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RevisionTarget) GoString() string { + return s.String() +} + +// SetDatabaseRevision sets the DatabaseRevision field's value. +func (s *RevisionTarget) SetDatabaseRevision(v string) *RevisionTarget { + s.DatabaseRevision = &v + return s +} + +// SetDatabaseRevisionReleaseDate sets the DatabaseRevisionReleaseDate field's value. +func (s *RevisionTarget) SetDatabaseRevisionReleaseDate(v time.Time) *RevisionTarget { + s.DatabaseRevisionReleaseDate = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *RevisionTarget) SetDescription(v string) *RevisionTarget { + s.Description = &v + return s +} + type RevokeClusterSecurityGroupIngressInput struct { _ struct{} `type:"structure"` @@ -16350,7 +20438,7 @@ type Snapshot struct { BackupProgressInMegaBytes *float64 `type:"double"` // The time (UTC) when the cluster was originally created. - ClusterCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ClusterCreateTime *time.Time `type:"timestamp"` // The identifier of the cluster for which the snapshot was taken. ClusterIdentifier *string `type:"string"` @@ -16396,6 +20484,18 @@ type Snapshot struct { // used to encrypt data in the cluster from which the snapshot was taken. KmsKeyId *string `type:"string"` + // The name of the maintenance track for the snapshot. + MaintenanceTrackName *string `type:"string"` + + // The number of days until a manual snapshot will pass its retention period. + ManualSnapshotRemainingDays *int64 `type:"integer"` + + // The number of days that a manual snapshot is retained. If the value is -1, + // the manual snapshot is retained indefinitely. + // + // The value must be either -1 or an integer between 1 and 3,653. + ManualSnapshotRetentionPeriod *int64 `type:"integer"` + // The master user name for the cluster. MasterUsername *string `type:"string"` @@ -16418,11 +20518,14 @@ type Snapshot struct { // The time (UTC) when Amazon Redshift began the snapshot. A snapshot contains // a copy of the cluster data as of this exact time. - SnapshotCreateTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + SnapshotCreateTime *time.Time `type:"timestamp"` // The snapshot identifier that is provided in the request. 
SnapshotIdentifier *string `type:"string"` + // A timestamp representing the start of the retention period for the snapshot. + SnapshotRetentionStartTime *time.Time `type:"timestamp"` + // The snapshot type. Snapshots created using CreateClusterSnapshot and CopyClusterSnapshot // will be of type "manual". SnapshotType *string `type:"string"` @@ -16554,6 +20657,24 @@ func (s *Snapshot) SetKmsKeyId(v string) *Snapshot { return s } +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *Snapshot) SetMaintenanceTrackName(v string) *Snapshot { + s.MaintenanceTrackName = &v + return s +} + +// SetManualSnapshotRemainingDays sets the ManualSnapshotRemainingDays field's value. +func (s *Snapshot) SetManualSnapshotRemainingDays(v int64) *Snapshot { + s.ManualSnapshotRemainingDays = &v + return s +} + +// SetManualSnapshotRetentionPeriod sets the ManualSnapshotRetentionPeriod field's value. +func (s *Snapshot) SetManualSnapshotRetentionPeriod(v int64) *Snapshot { + s.ManualSnapshotRetentionPeriod = &v + return s +} + // SetMasterUsername sets the MasterUsername field's value. func (s *Snapshot) SetMasterUsername(v string) *Snapshot { s.MasterUsername = &v @@ -16602,6 +20723,12 @@ func (s *Snapshot) SetSnapshotIdentifier(v string) *Snapshot { return s } +// SetSnapshotRetentionStartTime sets the SnapshotRetentionStartTime field's value. +func (s *Snapshot) SetSnapshotRetentionStartTime(v time.Time) *Snapshot { + s.SnapshotRetentionStartTime = &v + return s +} + // SetSnapshotType sets the SnapshotType field's value. func (s *Snapshot) SetSnapshotType(v string) *Snapshot { s.SnapshotType = &v @@ -16687,6 +20814,165 @@ func (s *SnapshotCopyGrant) SetTags(v []*Tag) *SnapshotCopyGrant { return s } +// Describes the errors returned by a snapshot. +type SnapshotErrorMessage struct { + _ struct{} `type:"structure"` + + // The failure code for the error. + FailureCode *string `type:"string"` + + // The text message describing the error. + FailureReason *string `type:"string"` + + // A unique identifier for the cluster. + SnapshotClusterIdentifier *string `type:"string"` + + // A unique identifier for the snapshot returning the error. + SnapshotIdentifier *string `type:"string"` +} + +// String returns the string representation +func (s SnapshotErrorMessage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SnapshotErrorMessage) GoString() string { + return s.String() +} + +// SetFailureCode sets the FailureCode field's value. +func (s *SnapshotErrorMessage) SetFailureCode(v string) *SnapshotErrorMessage { + s.FailureCode = &v + return s +} + +// SetFailureReason sets the FailureReason field's value. +func (s *SnapshotErrorMessage) SetFailureReason(v string) *SnapshotErrorMessage { + s.FailureReason = &v + return s +} + +// SetSnapshotClusterIdentifier sets the SnapshotClusterIdentifier field's value. +func (s *SnapshotErrorMessage) SetSnapshotClusterIdentifier(v string) *SnapshotErrorMessage { + s.SnapshotClusterIdentifier = &v + return s +} + +// SetSnapshotIdentifier sets the SnapshotIdentifier field's value. +func (s *SnapshotErrorMessage) SetSnapshotIdentifier(v string) *SnapshotErrorMessage { + s.SnapshotIdentifier = &v + return s +} + +// Describes a snapshot schedule. You can set a regular interval for creating +// snapshots of a cluster. You can also schedule snapshots for specific dates. 
+type SnapshotSchedule struct { + _ struct{} `type:"structure"` + + NextInvocations []*time.Time `locationNameList:"SnapshotTime" type:"list"` + + // A list of ScheduleDefinitions + ScheduleDefinitions []*string `locationNameList:"ScheduleDefinition" type:"list"` + + // The description of the schedule. + ScheduleDescription *string `type:"string"` + + // A unique identifier for the schedule. + ScheduleIdentifier *string `type:"string"` + + // An optional set of tags describing the schedule. + Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s SnapshotSchedule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SnapshotSchedule) GoString() string { + return s.String() +} + +// SetNextInvocations sets the NextInvocations field's value. +func (s *SnapshotSchedule) SetNextInvocations(v []*time.Time) *SnapshotSchedule { + s.NextInvocations = v + return s +} + +// SetScheduleDefinitions sets the ScheduleDefinitions field's value. +func (s *SnapshotSchedule) SetScheduleDefinitions(v []*string) *SnapshotSchedule { + s.ScheduleDefinitions = v + return s +} + +// SetScheduleDescription sets the ScheduleDescription field's value. +func (s *SnapshotSchedule) SetScheduleDescription(v string) *SnapshotSchedule { + s.ScheduleDescription = &v + return s +} + +// SetScheduleIdentifier sets the ScheduleIdentifier field's value. +func (s *SnapshotSchedule) SetScheduleIdentifier(v string) *SnapshotSchedule { + s.ScheduleIdentifier = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *SnapshotSchedule) SetTags(v []*Tag) *SnapshotSchedule { + s.Tags = v + return s +} + +// Describes a sorting entity +type SnapshotSortingEntity struct { + _ struct{} `type:"structure"` + + // The category for sorting the snapshots. + // + // Attribute is a required field + Attribute *string `type:"string" required:"true" enum:"SnapshotAttributeToSortBy"` + + // The order for listing the attributes. + SortOrder *string `type:"string" enum:"SortByOrder"` +} + +// String returns the string representation +func (s SnapshotSortingEntity) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SnapshotSortingEntity) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SnapshotSortingEntity) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SnapshotSortingEntity"} + if s.Attribute == nil { + invalidParams.Add(request.NewErrParamRequired("Attribute")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttribute sets the Attribute field's value. +func (s *SnapshotSortingEntity) SetAttribute(v string) *SnapshotSortingEntity { + s.Attribute = &v + return s +} + +// SetSortOrder sets the SortOrder field's value. +func (s *SnapshotSortingEntity) SetSortOrder(v string) *SnapshotSortingEntity { + s.SortOrder = &v + return s +} + // Describes a subnet. type Subnet struct { _ struct{} `type:"structure"` @@ -16729,6 +21015,30 @@ func (s *Subnet) SetSubnetStatus(v string) *Subnet { return s } +// Describes the operations that are allowed on a maintenance track. +type SupportedOperation struct { + _ struct{} `type:"structure"` + + // A list of the supported operations. 
+ OperationName *string `type:"string"` +} + +// String returns the string representation +func (s SupportedOperation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SupportedOperation) GoString() string { + return s.String() +} + +// SetOperationName sets the OperationName field's value. +func (s *SupportedOperation) SetOperationName(v string) *SupportedOperation { + s.OperationName = &v + return s +} + // A list of supported platforms for orderable clusters. type SupportedPlatform struct { _ struct{} `type:"structure"` @@ -16772,7 +21082,7 @@ type TableRestoreStatus struct { // The time that the table restore request was made, in Universal Coordinated // Time (UTC). - RequestTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + RequestTime *time.Time `type:"timestamp"` // The identifier of the snapshot that the table is being restored from. SnapshotIdentifier *string `type:"string"` @@ -16997,6 +21307,48 @@ func (s *TaggedResource) SetTag(v *Tag) *TaggedResource { return s } +// A maintenance track that you can switch the current track to. +type UpdateTarget struct { + _ struct{} `type:"structure"` + + // The cluster version for the new maintenance track. + DatabaseVersion *string `type:"string"` + + // The name of the new maintenance track. + MaintenanceTrackName *string `type:"string"` + + // A list of operations supported by the maintenance track. + SupportedOperations []*SupportedOperation `locationNameList:"SupportedOperation" type:"list"` +} + +// String returns the string representation +func (s UpdateTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateTarget) GoString() string { + return s.String() +} + +// SetDatabaseVersion sets the DatabaseVersion field's value. +func (s *UpdateTarget) SetDatabaseVersion(v string) *UpdateTarget { + s.DatabaseVersion = &v + return s +} + +// SetMaintenanceTrackName sets the MaintenanceTrackName field's value. +func (s *UpdateTarget) SetMaintenanceTrackName(v string) *UpdateTarget { + s.MaintenanceTrackName = &v + return s +} + +// SetSupportedOperations sets the SupportedOperations field's value. +func (s *UpdateTarget) SetSupportedOperations(v []*SupportedOperation) *UpdateTarget { + s.SupportedOperations = v + return s +} + // Describes the members of a VPC security group. 
type VpcSecurityGroupMembership struct { _ struct{} `type:"structure"` @@ -17046,6 +21398,36 @@ const ( ReservedNodeOfferingTypeUpgradable = "Upgradable" ) +const ( + // ScheduleStateModifying is a ScheduleState enum value + ScheduleStateModifying = "MODIFYING" + + // ScheduleStateActive is a ScheduleState enum value + ScheduleStateActive = "ACTIVE" + + // ScheduleStateFailed is a ScheduleState enum value + ScheduleStateFailed = "FAILED" +) + +const ( + // SnapshotAttributeToSortBySourceType is a SnapshotAttributeToSortBy enum value + SnapshotAttributeToSortBySourceType = "SOURCE_TYPE" + + // SnapshotAttributeToSortByTotalSize is a SnapshotAttributeToSortBy enum value + SnapshotAttributeToSortByTotalSize = "TOTAL_SIZE" + + // SnapshotAttributeToSortByCreateTime is a SnapshotAttributeToSortBy enum value + SnapshotAttributeToSortByCreateTime = "CREATE_TIME" +) + +const ( + // SortByOrderAsc is a SortByOrder enum value + SortByOrderAsc = "ASC" + + // SortByOrderDesc is a SortByOrder enum value + SortByOrderDesc = "DESC" +) + const ( // SourceTypeCluster is a SourceType enum value SourceTypeCluster = "cluster" diff --git a/vendor/github.com/aws/aws-sdk-go/service/redshift/errors.go b/vendor/github.com/aws/aws-sdk-go/service/redshift/errors.go index 8b0570fe3e3..ddb25ae66d3 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/redshift/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/redshift/errors.go @@ -31,6 +31,20 @@ const ( // The authorization quota for the cluster security group has been reached. ErrCodeAuthorizationQuotaExceededFault = "AuthorizationQuotaExceeded" + // ErrCodeBatchDeleteRequestSizeExceededFault for service response error code + // "BatchDeleteRequestSizeExceeded". + // + // The maximum number for a batch delete of snapshots has been reached. The + // limit is 100. + ErrCodeBatchDeleteRequestSizeExceededFault = "BatchDeleteRequestSizeExceeded" + + // ErrCodeBatchModifyClusterSnapshotsLimitExceededFault for service response error code + // "BatchModifyClusterSnapshotsLimitExceededFault". + // + // The maximum number for snapshot identifiers has been reached. The limit is + // 100. + ErrCodeBatchModifyClusterSnapshotsLimitExceededFault = "BatchModifyClusterSnapshotsLimitExceededFault" + // ErrCodeBucketNotFoundFault for service response error code // "BucketNotFoundFault". // @@ -49,6 +63,12 @@ const ( // The ClusterIdentifier parameter does not refer to an existing cluster. ErrCodeClusterNotFoundFault = "ClusterNotFound" + // ErrCodeClusterOnLatestRevisionFault for service response error code + // "ClusterOnLatestRevision". + // + // Cluster is already on the latest database revision. + ErrCodeClusterOnLatestRevisionFault = "ClusterOnLatestRevision" + // ErrCodeClusterParameterGroupAlreadyExistsFault for service response error code // "ClusterParameterGroupAlreadyExists". // @@ -263,6 +283,12 @@ const ( // The state of the cluster security group is not available. ErrCodeInvalidClusterSecurityGroupStateFault = "InvalidClusterSecurityGroupState" + // ErrCodeInvalidClusterSnapshotScheduleStateFault for service response error code + // "InvalidClusterSnapshotScheduleState". + // + // The cluster snapshot schedule state is not valid. + ErrCodeInvalidClusterSnapshotScheduleStateFault = "InvalidClusterSnapshotScheduleState" + // ErrCodeInvalidClusterSnapshotStateFault for service response error code // "InvalidClusterSnapshotState". // @@ -288,6 +314,12 @@ const ( // The state of the subnet is invalid. 
ErrCodeInvalidClusterSubnetStateFault = "InvalidClusterSubnetStateFault" + // ErrCodeInvalidClusterTrackFault for service response error code + // "InvalidClusterTrack". + // + // The provided cluster track name is not valid. + ErrCodeInvalidClusterTrackFault = "InvalidClusterTrack" + // ErrCodeInvalidElasticIpFault for service response error code // "InvalidElasticIpFault". // @@ -308,12 +340,26 @@ const ( // in use by one or more Amazon Redshift clusters. ErrCodeInvalidHsmConfigurationStateFault = "InvalidHsmConfigurationStateFault" + // ErrCodeInvalidReservedNodeStateFault for service response error code + // "InvalidReservedNodeState". + // + // Indicates that the Reserved Node being exchanged is not in an active state. + ErrCodeInvalidReservedNodeStateFault = "InvalidReservedNodeState" + // ErrCodeInvalidRestoreFault for service response error code // "InvalidRestore". // // The restore is invalid. ErrCodeInvalidRestoreFault = "InvalidRestore" + // ErrCodeInvalidRetentionPeriodFault for service response error code + // "InvalidRetentionPeriodFault". + // + // The retention period specified is either in the past or is not a valide value. + // + // The value must be either -1 or an integer between 1 and 3,653. + ErrCodeInvalidRetentionPeriodFault = "InvalidRetentionPeriodFault" + // ErrCodeInvalidS3BucketNameFault for service response error code // "InvalidS3BucketNameFault". // @@ -329,6 +375,12 @@ const ( // documented constraints. ErrCodeInvalidS3KeyPrefixFault = "InvalidS3KeyPrefixFault" + // ErrCodeInvalidScheduleFault for service response error code + // "InvalidSchedule". + // + // The schedule you submitted isn't valid. + ErrCodeInvalidScheduleFault = "InvalidSchedule" + // ErrCodeInvalidSnapshotCopyGrantStateFault for service response error code // "InvalidSnapshotCopyGrantStateFault". // @@ -396,6 +448,12 @@ const ( // User already has a reservation with the given identifier. ErrCodeReservedNodeAlreadyExistsFault = "ReservedNodeAlreadyExists" + // ErrCodeReservedNodeAlreadyMigratedFault for service response error code + // "ReservedNodeAlreadyMigrated". + // + // Indicates that the reserved node has already been exchanged. + ErrCodeReservedNodeAlreadyMigratedFault = "ReservedNodeAlreadyMigrated" + // ErrCodeReservedNodeNotFoundFault for service response error code // "ReservedNodeNotFound". // @@ -448,6 +506,12 @@ const ( // exist. ErrCodeSNSTopicArnNotFoundFault = "SNSTopicArnNotFound" + // ErrCodeScheduleDefinitionTypeUnsupportedFault for service response error code + // "ScheduleDefinitionTypeUnsupported". + // + // The definition you submitted is not supported. + ErrCodeScheduleDefinitionTypeUnsupportedFault = "ScheduleDefinitionTypeUnsupported" + // ErrCodeSnapshotCopyAlreadyDisabledFault for service response error code // "SnapshotCopyAlreadyDisabledFault". // @@ -487,6 +551,30 @@ const ( // this region. ErrCodeSnapshotCopyGrantQuotaExceededFault = "SnapshotCopyGrantQuotaExceededFault" + // ErrCodeSnapshotScheduleAlreadyExistsFault for service response error code + // "SnapshotScheduleAlreadyExists". + // + // The specified snapshot schedule already exists. + ErrCodeSnapshotScheduleAlreadyExistsFault = "SnapshotScheduleAlreadyExists" + + // ErrCodeSnapshotScheduleNotFoundFault for service response error code + // "SnapshotScheduleNotFound". + // + // We could not find the specified snapshot schedule. 
+ ErrCodeSnapshotScheduleNotFoundFault = "SnapshotScheduleNotFound" + + // ErrCodeSnapshotScheduleQuotaExceededFault for service response error code + // "SnapshotScheduleQuotaExceeded". + // + // You have exceeded the quota of snapshot schedules. + ErrCodeSnapshotScheduleQuotaExceededFault = "SnapshotScheduleQuotaExceeded" + + // ErrCodeSnapshotScheduleUpdateInProgressFault for service response error code + // "SnapshotScheduleUpdateInProgress". + // + // The specified snapshot schedule is already being updated. + ErrCodeSnapshotScheduleUpdateInProgressFault = "SnapshotScheduleUpdateInProgress" + // ErrCodeSourceNotFoundFault for service response error code // "SourceNotFound". // @@ -535,6 +623,13 @@ const ( // The allowed values are ERROR and INFO. ErrCodeSubscriptionSeverityNotFoundFault = "SubscriptionSeverityNotFound" + // ErrCodeTableLimitExceededFault for service response error code + // "TableLimitExceeded". + // + // The number of tables in the cluster exceeds the limit for the requested new + // cluster node type. + ErrCodeTableLimitExceededFault = "TableLimitExceeded" + // ErrCodeTableRestoreNotFoundFault for service response error code // "TableRestoreNotFoundFault". // @@ -544,7 +639,7 @@ const ( // ErrCodeTagLimitExceededFault for service response error code // "TagLimitExceededFault". // - // The request exceeds the limit of 10 tags for the resource. + // You have exceeded the number of tags allowed. ErrCodeTagLimitExceededFault = "TagLimitExceededFault" // ErrCodeUnauthorizedOperation for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/redshift/service.go b/vendor/github.com/aws/aws-sdk-go/service/redshift/service.go index b859170a40e..a750d141c62 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/redshift/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/redshift/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "redshift" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "redshift" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Redshift" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the Redshift client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/api.go b/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/api.go new file mode 100644 index 00000000000..323d41878a7 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/api.go @@ -0,0 +1,2830 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package resourcegroups + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opCreateGroup = "CreateGroup" + +// CreateGroupRequest generates a "aws/request.Request" representing the +// client's request for the CreateGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateGroup for more information on using the CreateGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateGroupRequest method. +// req, resp := client.CreateGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/CreateGroup +func (c *ResourceGroups) CreateGroupRequest(input *CreateGroupInput) (req *request.Request, output *CreateGroupOutput) { + op := &request.Operation{ + Name: opCreateGroup, + HTTPMethod: "POST", + HTTPPath: "/groups", + } + + if input == nil { + input = &CreateGroupInput{} + } + + output = &CreateGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateGroup API operation for AWS Resource Groups. +// +// Creates a group with a specified name, description, and resource query. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation CreateGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/CreateGroup +func (c *ResourceGroups) CreateGroup(input *CreateGroupInput) (*CreateGroupOutput, error) { + req, out := c.CreateGroupRequest(input) + return out, req.Send() +} + +// CreateGroupWithContext is the same as CreateGroup with the addition of +// the ability to pass a context and additional request options. +// +// See CreateGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) CreateGroupWithContext(ctx aws.Context, input *CreateGroupInput, opts ...request.Option) (*CreateGroupOutput, error) { + req, out := c.CreateGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteGroup = "DeleteGroup" + +// DeleteGroupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteGroup operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteGroup for more information on using the DeleteGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteGroupRequest method. +// req, resp := client.DeleteGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/DeleteGroup +func (c *ResourceGroups) DeleteGroupRequest(input *DeleteGroupInput) (req *request.Request, output *DeleteGroupOutput) { + op := &request.Operation{ + Name: opDeleteGroup, + HTTPMethod: "DELETE", + HTTPPath: "/groups/{GroupName}", + } + + if input == nil { + input = &DeleteGroupInput{} + } + + output = &DeleteGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteGroup API operation for AWS Resource Groups. +// +// Deletes a specified resource group. Deleting a resource group does not delete +// resources that are members of the group; it only deletes the group structure. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation DeleteGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/DeleteGroup +func (c *ResourceGroups) DeleteGroup(input *DeleteGroupInput) (*DeleteGroupOutput, error) { + req, out := c.DeleteGroupRequest(input) + return out, req.Send() +} + +// DeleteGroupWithContext is the same as DeleteGroup with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ResourceGroups) DeleteGroupWithContext(ctx aws.Context, input *DeleteGroupInput, opts ...request.Option) (*DeleteGroupOutput, error) { + req, out := c.DeleteGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetGroup = "GetGroup" + +// GetGroupRequest generates a "aws/request.Request" representing the +// client's request for the GetGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetGroup for more information on using the GetGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetGroupRequest method. +// req, resp := client.GetGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/GetGroup +func (c *ResourceGroups) GetGroupRequest(input *GetGroupInput) (req *request.Request, output *GetGroupOutput) { + op := &request.Operation{ + Name: opGetGroup, + HTTPMethod: "GET", + HTTPPath: "/groups/{GroupName}", + } + + if input == nil { + input = &GetGroupInput{} + } + + output = &GetGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetGroup API operation for AWS Resource Groups. +// +// Returns information about a specified resource group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation GetGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/GetGroup +func (c *ResourceGroups) GetGroup(input *GetGroupInput) (*GetGroupOutput, error) { + req, out := c.GetGroupRequest(input) + return out, req.Send() +} + +// GetGroupWithContext is the same as GetGroup with the addition of +// the ability to pass a context and additional request options. +// +// See GetGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) GetGroupWithContext(ctx aws.Context, input *GetGroupInput, opts ...request.Option) (*GetGroupOutput, error) { + req, out := c.GetGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetGroupQuery = "GetGroupQuery" + +// GetGroupQueryRequest generates a "aws/request.Request" representing the +// client's request for the GetGroupQuery operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetGroupQuery for more information on using the GetGroupQuery +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetGroupQueryRequest method. +// req, resp := client.GetGroupQueryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/GetGroupQuery +func (c *ResourceGroups) GetGroupQueryRequest(input *GetGroupQueryInput) (req *request.Request, output *GetGroupQueryOutput) { + op := &request.Operation{ + Name: opGetGroupQuery, + HTTPMethod: "GET", + HTTPPath: "/groups/{GroupName}/query", + } + + if input == nil { + input = &GetGroupQueryInput{} + } + + output = &GetGroupQueryOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetGroupQuery API operation for AWS Resource Groups. +// +// Returns the resource query associated with the specified resource group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation GetGroupQuery for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/GetGroupQuery +func (c *ResourceGroups) GetGroupQuery(input *GetGroupQueryInput) (*GetGroupQueryOutput, error) { + req, out := c.GetGroupQueryRequest(input) + return out, req.Send() +} + +// GetGroupQueryWithContext is the same as GetGroupQuery with the addition of +// the ability to pass a context and additional request options. 
+// +// See GetGroupQuery for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) GetGroupQueryWithContext(ctx aws.Context, input *GetGroupQueryInput, opts ...request.Option) (*GetGroupQueryOutput, error) { + req, out := c.GetGroupQueryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetTags = "GetTags" + +// GetTagsRequest generates a "aws/request.Request" representing the +// client's request for the GetTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetTags for more information on using the GetTags +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetTagsRequest method. +// req, resp := client.GetTagsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/GetTags +func (c *ResourceGroups) GetTagsRequest(input *GetTagsInput) (req *request.Request, output *GetTagsOutput) { + op := &request.Operation{ + Name: opGetTags, + HTTPMethod: "GET", + HTTPPath: "/resources/{Arn}/tags", + } + + if input == nil { + input = &GetTagsInput{} + } + + output = &GetTagsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetTags API operation for AWS Resource Groups. +// +// Returns a list of tags that are associated with a resource, specified by +// an ARN. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation GetTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/GetTags +func (c *ResourceGroups) GetTags(input *GetTagsInput) (*GetTagsOutput, error) { + req, out := c.GetTagsRequest(input) + return out, req.Send() +} + +// GetTagsWithContext is the same as GetTags with the addition of +// the ability to pass a context and additional request options. +// +// See GetTags for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) GetTagsWithContext(ctx aws.Context, input *GetTagsInput, opts ...request.Option) (*GetTagsOutput, error) { + req, out := c.GetTagsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListGroupResources = "ListGroupResources" + +// ListGroupResourcesRequest generates a "aws/request.Request" representing the +// client's request for the ListGroupResources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListGroupResources for more information on using the ListGroupResources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListGroupResourcesRequest method. +// req, resp := client.ListGroupResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/ListGroupResources +func (c *ResourceGroups) ListGroupResourcesRequest(input *ListGroupResourcesInput) (req *request.Request, output *ListGroupResourcesOutput) { + op := &request.Operation{ + Name: opListGroupResources, + HTTPMethod: "POST", + HTTPPath: "/groups/{GroupName}/resource-identifiers-list", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListGroupResourcesInput{} + } + + output = &ListGroupResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListGroupResources API operation for AWS Resource Groups. +// +// Returns a list of ARNs of resources that are members of a specified resource +// group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation ListGroupResources for usage and error information. +// +// Returned Error Codes: +// * ErrCodeUnauthorizedException "UnauthorizedException" +// The request has not been applied because it lacks valid authentication credentials +// for the target resource. +// +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. 
+// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/ListGroupResources +func (c *ResourceGroups) ListGroupResources(input *ListGroupResourcesInput) (*ListGroupResourcesOutput, error) { + req, out := c.ListGroupResourcesRequest(input) + return out, req.Send() +} + +// ListGroupResourcesWithContext is the same as ListGroupResources with the addition of +// the ability to pass a context and additional request options. +// +// See ListGroupResources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) ListGroupResourcesWithContext(ctx aws.Context, input *ListGroupResourcesInput, opts ...request.Option) (*ListGroupResourcesOutput, error) { + req, out := c.ListGroupResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListGroupResourcesPages iterates over the pages of a ListGroupResources operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListGroupResources method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListGroupResources operation. +// pageNum := 0 +// err := client.ListGroupResourcesPages(params, +// func(page *ListGroupResourcesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ResourceGroups) ListGroupResourcesPages(input *ListGroupResourcesInput, fn func(*ListGroupResourcesOutput, bool) bool) error { + return c.ListGroupResourcesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListGroupResourcesPagesWithContext same as ListGroupResourcesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) ListGroupResourcesPagesWithContext(ctx aws.Context, input *ListGroupResourcesInput, fn func(*ListGroupResourcesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListGroupResourcesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListGroupResourcesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListGroupResourcesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListGroups = "ListGroups" + +// ListGroupsRequest generates a "aws/request.Request" representing the +// client's request for the ListGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListGroups for more information on using the ListGroups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListGroupsRequest method. +// req, resp := client.ListGroupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/ListGroups +func (c *ResourceGroups) ListGroupsRequest(input *ListGroupsInput) (req *request.Request, output *ListGroupsOutput) { + op := &request.Operation{ + Name: opListGroups, + HTTPMethod: "POST", + HTTPPath: "/groups-list", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListGroupsInput{} + } + + output = &ListGroupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListGroups API operation for AWS Resource Groups. +// +// Returns a list of existing resource groups in your account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation ListGroups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/ListGroups +func (c *ResourceGroups) ListGroups(input *ListGroupsInput) (*ListGroupsOutput, error) { + req, out := c.ListGroupsRequest(input) + return out, req.Send() +} + +// ListGroupsWithContext is the same as ListGroups with the addition of +// the ability to pass a context and additional request options. +// +// See ListGroups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) ListGroupsWithContext(ctx aws.Context, input *ListGroupsInput, opts ...request.Option) (*ListGroupsOutput, error) { + req, out := c.ListGroupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListGroupsPages iterates over the pages of a ListGroups operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListGroups method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListGroups operation. +// pageNum := 0 +// err := client.ListGroupsPages(params, +// func(page *ListGroupsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ResourceGroups) ListGroupsPages(input *ListGroupsInput, fn func(*ListGroupsOutput, bool) bool) error { + return c.ListGroupsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListGroupsPagesWithContext same as ListGroupsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) ListGroupsPagesWithContext(ctx aws.Context, input *ListGroupsInput, fn func(*ListGroupsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListGroupsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListGroupsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListGroupsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opSearchResources = "SearchResources" + +// SearchResourcesRequest generates a "aws/request.Request" representing the +// client's request for the SearchResources operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SearchResources for more information on using the SearchResources +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SearchResourcesRequest method. 
+// req, resp := client.SearchResourcesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/SearchResources +func (c *ResourceGroups) SearchResourcesRequest(input *SearchResourcesInput) (req *request.Request, output *SearchResourcesOutput) { + op := &request.Operation{ + Name: opSearchResources, + HTTPMethod: "POST", + HTTPPath: "/resources/search", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &SearchResourcesInput{} + } + + output = &SearchResourcesOutput{} + req = c.newRequest(op, input, output) + return +} + +// SearchResources API operation for AWS Resource Groups. +// +// Returns a list of AWS resource identifiers that matches a specified query. +// The query uses the same format as a resource query in a CreateGroup or UpdateGroupQuery +// operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation SearchResources for usage and error information. +// +// Returned Error Codes: +// * ErrCodeUnauthorizedException "UnauthorizedException" +// The request has not been applied because it lacks valid authentication credentials +// for the target resource. +// +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/SearchResources +func (c *ResourceGroups) SearchResources(input *SearchResourcesInput) (*SearchResourcesOutput, error) { + req, out := c.SearchResourcesRequest(input) + return out, req.Send() +} + +// SearchResourcesWithContext is the same as SearchResources with the addition of +// the ability to pass a context and additional request options. +// +// See SearchResources for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) SearchResourcesWithContext(ctx aws.Context, input *SearchResourcesInput, opts ...request.Option) (*SearchResourcesOutput, error) { + req, out := c.SearchResourcesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// SearchResourcesPages iterates over the pages of a SearchResources operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. 
+// +// See SearchResources method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a SearchResources operation. +// pageNum := 0 +// err := client.SearchResourcesPages(params, +// func(page *SearchResourcesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ResourceGroups) SearchResourcesPages(input *SearchResourcesInput, fn func(*SearchResourcesOutput, bool) bool) error { + return c.SearchResourcesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// SearchResourcesPagesWithContext same as SearchResourcesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) SearchResourcesPagesWithContext(ctx aws.Context, input *SearchResourcesInput, fn func(*SearchResourcesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *SearchResourcesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.SearchResourcesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*SearchResourcesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opTag = "Tag" + +// TagRequest generates a "aws/request.Request" representing the +// client's request for the Tag operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See Tag for more information on using the Tag +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagRequest method. +// req, resp := client.TagRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/Tag +func (c *ResourceGroups) TagRequest(input *TagInput) (req *request.Request, output *TagOutput) { + op := &request.Operation{ + Name: opTag, + HTTPMethod: "PUT", + HTTPPath: "/resources/{Arn}/tags", + } + + if input == nil { + input = &TagInput{} + } + + output = &TagOutput{} + req = c.newRequest(op, input, output) + return +} + +// Tag API operation for AWS Resource Groups. +// +// Adds specified tags to a resource with the specified ARN. Existing tags on +// a resource are not changed if they are not specified in the request parameters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation Tag for usage and error information. 
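+//
+// Illustrative sketch only (not part of the generated documentation), showing
+// a Tag call from a consuming package; the group ARN and tag values below are
+// placeholder assumptions:
+//
+//    out, err := client.Tag(&resourcegroups.TagInput{
+//        Arn:  aws.String("arn:aws:resource-groups:us-west-2:123456789012:group/example"),
+//        Tags: aws.StringMap(map[string]string{"Environment": "Test"}),
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValueMap(out.Tags))
+//    }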
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/Tag +func (c *ResourceGroups) Tag(input *TagInput) (*TagOutput, error) { + req, out := c.TagRequest(input) + return out, req.Send() +} + +// TagWithContext is the same as Tag with the addition of +// the ability to pass a context and additional request options. +// +// See Tag for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) TagWithContext(ctx aws.Context, input *TagInput, opts ...request.Option) (*TagOutput, error) { + req, out := c.TagRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUntag = "Untag" + +// UntagRequest generates a "aws/request.Request" representing the +// client's request for the Untag operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See Untag for more information on using the Untag +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UntagRequest method. +// req, resp := client.UntagRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/Untag +func (c *ResourceGroups) UntagRequest(input *UntagInput) (req *request.Request, output *UntagOutput) { + op := &request.Operation{ + Name: opUntag, + HTTPMethod: "PATCH", + HTTPPath: "/resources/{Arn}/tags", + } + + if input == nil { + input = &UntagInput{} + } + + output = &UntagOutput{} + req = c.newRequest(op, input, output) + return +} + +// Untag API operation for AWS Resource Groups. +// +// Deletes specified tags from a specified resource. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation Untag for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/Untag +func (c *ResourceGroups) Untag(input *UntagInput) (*UntagOutput, error) { + req, out := c.UntagRequest(input) + return out, req.Send() +} + +// UntagWithContext is the same as Untag with the addition of +// the ability to pass a context and additional request options. +// +// See Untag for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) UntagWithContext(ctx aws.Context, input *UntagInput, opts ...request.Option) (*UntagOutput, error) { + req, out := c.UntagRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateGroup = "UpdateGroup" + +// UpdateGroupRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateGroup for more information on using the UpdateGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateGroupRequest method. +// req, resp := client.UpdateGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/UpdateGroup +func (c *ResourceGroups) UpdateGroupRequest(input *UpdateGroupInput) (req *request.Request, output *UpdateGroupOutput) { + op := &request.Operation{ + Name: opUpdateGroup, + HTTPMethod: "PUT", + HTTPPath: "/groups/{GroupName}", + } + + if input == nil { + input = &UpdateGroupInput{} + } + + output = &UpdateGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateGroup API operation for AWS Resource Groups. +// +// Updates an existing group with a new or changed description. You cannot update +// the name of a resource group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Resource Groups's +// API operation UpdateGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/UpdateGroup +func (c *ResourceGroups) UpdateGroup(input *UpdateGroupInput) (*UpdateGroupOutput, error) { + req, out := c.UpdateGroupRequest(input) + return out, req.Send() +} + +// UpdateGroupWithContext is the same as UpdateGroup with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) UpdateGroupWithContext(ctx aws.Context, input *UpdateGroupInput, opts ...request.Option) (*UpdateGroupOutput, error) { + req, out := c.UpdateGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateGroupQuery = "UpdateGroupQuery" + +// UpdateGroupQueryRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGroupQuery operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateGroupQuery for more information on using the UpdateGroupQuery +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateGroupQueryRequest method. +// req, resp := client.UpdateGroupQueryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/UpdateGroupQuery +func (c *ResourceGroups) UpdateGroupQueryRequest(input *UpdateGroupQueryInput) (req *request.Request, output *UpdateGroupQueryOutput) { + op := &request.Operation{ + Name: opUpdateGroupQuery, + HTTPMethod: "PUT", + HTTPPath: "/groups/{GroupName}/query", + } + + if input == nil { + input = &UpdateGroupQueryInput{} + } + + output = &UpdateGroupQueryOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateGroupQuery API operation for AWS Resource Groups. 
+// +// Updates the resource query of a group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Resource Groups's +// API operation UpdateGroupQuery for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// The request does not comply with validation rules that are defined for the +// request parameters. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The caller is not authorized to make the request. +// +// * ErrCodeNotFoundException "NotFoundException" +// One or more resources specified in the request do not exist. +// +// * ErrCodeMethodNotAllowedException "MethodNotAllowedException" +// The request uses an HTTP method which is not allowed for the specified resource. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The caller has exceeded throttling limits. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// An internal error occurred while processing the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27/UpdateGroupQuery +func (c *ResourceGroups) UpdateGroupQuery(input *UpdateGroupQueryInput) (*UpdateGroupQueryOutput, error) { + req, out := c.UpdateGroupQueryRequest(input) + return out, req.Send() +} + +// UpdateGroupQueryWithContext is the same as UpdateGroupQuery with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateGroupQuery for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ResourceGroups) UpdateGroupQueryWithContext(ctx aws.Context, input *UpdateGroupQueryInput, opts ...request.Option) (*UpdateGroupQueryOutput, error) { + req, out := c.UpdateGroupQueryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type CreateGroupInput struct { + _ struct{} `type:"structure"` + + // The description of the resource group. Descriptions can have a maximum of + // 511 characters, including letters, numbers, hyphens, underscores, punctuation, + // and spaces. + Description *string `type:"string"` + + // The name of the group, which is the identifier of the group in other operations. + // A resource group name cannot be updated after it is created. A resource group + // name can have a maximum of 128 characters, including letters, numbers, hyphens, + // dots, and underscores. The name cannot start with AWS or aws; these are reserved. + // A resource group name must be unique within your account. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // The resource query that determines which AWS resources are members of this + // group. + // + // ResourceQuery is a required field + ResourceQuery *ResourceQuery `type:"structure" required:"true"` + + // The tags to add to the group. A tag is a string-to-string map of key-value + // pairs. Tag keys can have a maximum character length of 128 characters, and + // tag values can have a maximum length of 256 characters. 
+ Tags map[string]*string `type:"map"` +} + +// String returns the string representation +func (s CreateGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateGroupInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.ResourceQuery == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceQuery")) + } + if s.ResourceQuery != nil { + if err := s.ResourceQuery.Validate(); err != nil { + invalidParams.AddNested("ResourceQuery", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *CreateGroupInput) SetDescription(v string) *CreateGroupInput { + s.Description = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateGroupInput) SetName(v string) *CreateGroupInput { + s.Name = &v + return s +} + +// SetResourceQuery sets the ResourceQuery field's value. +func (s *CreateGroupInput) SetResourceQuery(v *ResourceQuery) *CreateGroupInput { + s.ResourceQuery = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateGroupInput) SetTags(v map[string]*string) *CreateGroupInput { + s.Tags = v + return s +} + +type CreateGroupOutput struct { + _ struct{} `type:"structure"` + + // A full description of the resource group after it is created. + Group *Group `type:"structure"` + + // The resource query associated with the group. + ResourceQuery *ResourceQuery `type:"structure"` + + // The tags associated with the group. + Tags map[string]*string `type:"map"` +} + +// String returns the string representation +func (s CreateGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateGroupOutput) GoString() string { + return s.String() +} + +// SetGroup sets the Group field's value. +func (s *CreateGroupOutput) SetGroup(v *Group) *CreateGroupOutput { + s.Group = v + return s +} + +// SetResourceQuery sets the ResourceQuery field's value. +func (s *CreateGroupOutput) SetResourceQuery(v *ResourceQuery) *CreateGroupOutput { + s.ResourceQuery = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateGroupOutput) SetTags(v map[string]*string) *CreateGroupOutput { + s.Tags = v + return s +} + +type DeleteGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the resource group to delete. + // + // GroupName is a required field + GroupName *string `location:"uri" locationName:"GroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *DeleteGroupInput) SetGroupName(v string) *DeleteGroupInput { + s.GroupName = &v + return s +} + +type DeleteGroupOutput struct { + _ struct{} `type:"structure"` + + // A full description of the deleted resource group. + Group *Group `type:"structure"` +} + +// String returns the string representation +func (s DeleteGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGroupOutput) GoString() string { + return s.String() +} + +// SetGroup sets the Group field's value. +func (s *DeleteGroupOutput) SetGroup(v *Group) *DeleteGroupOutput { + s.Group = v + return s +} + +type GetGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the resource group. + // + // GroupName is a required field + GroupName *string `location:"uri" locationName:"GroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *GetGroupInput) SetGroupName(v string) *GetGroupInput { + s.GroupName = &v + return s +} + +type GetGroupOutput struct { + _ struct{} `type:"structure"` + + // A full description of the resource group. + Group *Group `type:"structure"` +} + +// String returns the string representation +func (s GetGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGroupOutput) GoString() string { + return s.String() +} + +// SetGroup sets the Group field's value. +func (s *GetGroupOutput) SetGroup(v *Group) *GetGroupOutput { + s.Group = v + return s +} + +type GetGroupQueryInput struct { + _ struct{} `type:"structure"` + + // The name of the resource group. + // + // GroupName is a required field + GroupName *string `location:"uri" locationName:"GroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetGroupQueryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGroupQueryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetGroupQueryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetGroupQueryInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *GetGroupQueryInput) SetGroupName(v string) *GetGroupQueryInput { + s.GroupName = &v + return s +} + +type GetGroupQueryOutput struct { + _ struct{} `type:"structure"` + + // The resource query associated with the specified group. + GroupQuery *GroupQuery `type:"structure"` +} + +// String returns the string representation +func (s GetGroupQueryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetGroupQueryOutput) GoString() string { + return s.String() +} + +// SetGroupQuery sets the GroupQuery field's value. +func (s *GetGroupQueryOutput) SetGroupQuery(v *GroupQuery) *GetGroupQueryOutput { + s.GroupQuery = v + return s +} + +type GetTagsInput struct { + _ struct{} `type:"structure"` + + // The ARN of the resource for which you want a list of tags. The resource must + // exist within the account you are using. + // + // Arn is a required field + Arn *string `location:"uri" locationName:"Arn" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetTagsInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *GetTagsInput) SetArn(v string) *GetTagsInput { + s.Arn = &v + return s +} + +type GetTagsOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the tagged resource. + Arn *string `type:"string"` + + // The tags associated with the specified resource. + Tags map[string]*string `type:"map"` +} + +// String returns the string representation +func (s GetTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetTagsOutput) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *GetTagsOutput) SetArn(v string) *GetTagsOutput { + s.Arn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *GetTagsOutput) SetTags(v map[string]*string) *GetTagsOutput { + s.Tags = v + return s +} + +// A resource group. +type Group struct { + _ struct{} `type:"structure"` + + // The description of the resource group. + Description *string `type:"string"` + + // The ARN of a resource group. + // + // GroupArn is a required field + GroupArn *string `type:"string" required:"true"` + + // The name of a resource group. 
+ // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s Group) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Group) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *Group) SetDescription(v string) *Group { + s.Description = &v + return s +} + +// SetGroupArn sets the GroupArn field's value. +func (s *Group) SetGroupArn(v string) *Group { + s.GroupArn = &v + return s +} + +// SetName sets the Name field's value. +func (s *Group) SetName(v string) *Group { + s.Name = &v + return s +} + +// A filter name and value pair that is used to obtain more specific results +// from a list of groups. +type GroupFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. Filter names are case-sensitive. + // + // Name is a required field + Name *string `type:"string" required:"true" enum:"GroupFilterName"` + + // One or more filter values. Allowed filter values vary by group filter name, + // and are case-sensitive. + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s GroupFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GroupFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GroupFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GroupFilter"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *GroupFilter) SetName(v string) *GroupFilter { + s.Name = &v + return s +} + +// SetValues sets the Values field's value. +func (s *GroupFilter) SetValues(v []*string) *GroupFilter { + s.Values = v + return s +} + +// The ARN and group name of a group. +type GroupIdentifier struct { + _ struct{} `type:"structure"` + + // The ARN of a resource group. + GroupArn *string `type:"string"` + + // The name of a resource group. + GroupName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GroupIdentifier) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GroupIdentifier) GoString() string { + return s.String() +} + +// SetGroupArn sets the GroupArn field's value. +func (s *GroupIdentifier) SetGroupArn(v string) *GroupIdentifier { + s.GroupArn = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *GroupIdentifier) SetGroupName(v string) *GroupIdentifier { + s.GroupName = &v + return s +} + +// The underlying resource query of a resource group. Resources that match query +// results are part of the group. +type GroupQuery struct { + _ struct{} `type:"structure"` + + // The name of a resource group that is associated with a specific resource + // query. 
+ // + // GroupName is a required field + GroupName *string `min:"1" type:"string" required:"true"` + + // The resource query which determines which AWS resources are members of the + // associated resource group. + // + // ResourceQuery is a required field + ResourceQuery *ResourceQuery `type:"structure" required:"true"` +} + +// String returns the string representation +func (s GroupQuery) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GroupQuery) GoString() string { + return s.String() +} + +// SetGroupName sets the GroupName field's value. +func (s *GroupQuery) SetGroupName(v string) *GroupQuery { + s.GroupName = &v + return s +} + +// SetResourceQuery sets the ResourceQuery field's value. +func (s *GroupQuery) SetResourceQuery(v *ResourceQuery) *GroupQuery { + s.ResourceQuery = v + return s +} + +type ListGroupResourcesInput struct { + _ struct{} `type:"structure"` + + // Filters, formatted as ResourceFilter objects, that you want to apply to a + // ListGroupResources operation. + // + // * resource-type - Filter resources by their type. Specify up to five resource + // types in the format AWS::ServiceCode::ResourceType. For example, AWS::EC2::Instance, + // or AWS::S3::Bucket. + Filters []*ResourceFilter `type:"list"` + + // The name of the resource group. + // + // GroupName is a required field + GroupName *string `location:"uri" locationName:"GroupName" min:"1" type:"string" required:"true"` + + // The maximum number of group member ARNs that are returned in a single call + // by ListGroupResources, in paginated output. By default, this number is 50. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The NextToken value that is returned in a paginated ListGroupResources request. + // To get the next page of results, run the call again, add the NextToken parameter, + // and specify the NextToken value. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListGroupResourcesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupResourcesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListGroupResourcesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListGroupResourcesInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *ListGroupResourcesInput) SetFilters(v []*ResourceFilter) *ListGroupResourcesInput { + s.Filters = v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *ListGroupResourcesInput) SetGroupName(v string) *ListGroupResourcesInput { + s.GroupName = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. 
+func (s *ListGroupResourcesInput) SetMaxResults(v int64) *ListGroupResourcesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListGroupResourcesInput) SetNextToken(v string) *ListGroupResourcesInput { + s.NextToken = &v + return s +} + +type ListGroupResourcesOutput struct { + _ struct{} `type:"structure"` + + // The NextToken value to include in a subsequent ListGroupResources request, + // to get more results. + NextToken *string `type:"string"` + + // A list of QueryError objects. Each error is an object that contains ErrorCode + // and Message structures. Possible values for ErrorCode are CLOUDFORMATION_STACK_INACTIVE + // and CLOUDFORMATION_STACK_NOT_EXISTING. + QueryErrors []*QueryError `type:"list"` + + // The ARNs and resource types of resources that are members of the group that + // you specified. + ResourceIdentifiers []*ResourceIdentifier `type:"list"` +} + +// String returns the string representation +func (s ListGroupResourcesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupResourcesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListGroupResourcesOutput) SetNextToken(v string) *ListGroupResourcesOutput { + s.NextToken = &v + return s +} + +// SetQueryErrors sets the QueryErrors field's value. +func (s *ListGroupResourcesOutput) SetQueryErrors(v []*QueryError) *ListGroupResourcesOutput { + s.QueryErrors = v + return s +} + +// SetResourceIdentifiers sets the ResourceIdentifiers field's value. +func (s *ListGroupResourcesOutput) SetResourceIdentifiers(v []*ResourceIdentifier) *ListGroupResourcesOutput { + s.ResourceIdentifiers = v + return s +} + +type ListGroupsInput struct { + _ struct{} `type:"structure"` + + // Filters, formatted as GroupFilter objects, that you want to apply to a ListGroups + // operation. + // + // * resource-type - Filter groups by resource type. Specify up to five resource + // types in the format AWS::ServiceCode::ResourceType. For example, AWS::EC2::Instance, + // or AWS::S3::Bucket. + Filters []*GroupFilter `type:"list"` + + // The maximum number of resource group results that are returned by ListGroups + // in paginated output. By default, this number is 50. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The NextToken value that is returned in a paginated ListGroups request. To + // get the next page of results, run the call again, add the NextToken parameter, + // and specify the NextToken value. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListGroupsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *ListGroupsInput) SetFilters(v []*GroupFilter) *ListGroupsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListGroupsInput) SetMaxResults(v int64) *ListGroupsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListGroupsInput) SetNextToken(v string) *ListGroupsInput { + s.NextToken = &v + return s +} + +type ListGroupsOutput struct { + _ struct{} `type:"structure"` + + // A list of GroupIdentifier objects. Each identifier is an object that contains + // both the GroupName and the GroupArn. + GroupIdentifiers []*GroupIdentifier `type:"list"` + + // A list of resource groups. + // + // Deprecated: This field is deprecated, use GroupIdentifiers instead. + Groups []*Group `deprecated:"true" type:"list"` + + // The NextToken value to include in a subsequent ListGroups request, to get + // more results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGroupsOutput) GoString() string { + return s.String() +} + +// SetGroupIdentifiers sets the GroupIdentifiers field's value. +func (s *ListGroupsOutput) SetGroupIdentifiers(v []*GroupIdentifier) *ListGroupsOutput { + s.GroupIdentifiers = v + return s +} + +// SetGroups sets the Groups field's value. +func (s *ListGroupsOutput) SetGroups(v []*Group) *ListGroupsOutput { + s.Groups = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListGroupsOutput) SetNextToken(v string) *ListGroupsOutput { + s.NextToken = &v + return s +} + +// A two-part error structure that can occur in ListGroupResources or SearchResources +// operations on CloudFormation stack-based queries. The error occurs if the +// CloudFormation stack on which the query is based either does not exist, or +// has a status that renders the stack inactive. A QueryError occurrence does +// not necessarily mean that AWS Resource Groups could not complete the operation, +// but the resulting group might have no member resources. +type QueryError struct { + _ struct{} `type:"structure"` + + // Possible values are CLOUDFORMATION_STACK_INACTIVE and CLOUDFORMATION_STACK_NOT_EXISTING. + ErrorCode *string `type:"string" enum:"QueryErrorCode"` + + // A message that explains the ErrorCode value. Messages might state that the + // specified CloudFormation stack does not exist (or no longer exists). For + // CLOUDFORMATION_STACK_INACTIVE, the message typically states that the CloudFormation + // stack has a status that is not (or no longer) active, such as CREATE_FAILED. 
+ Message *string `type:"string"` +} + +// String returns the string representation +func (s QueryError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s QueryError) GoString() string { + return s.String() +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *QueryError) SetErrorCode(v string) *QueryError { + s.ErrorCode = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *QueryError) SetMessage(v string) *QueryError { + s.Message = &v + return s +} + +// A filter name and value pair that is used to obtain more specific results +// from a list of resources. +type ResourceFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. Filter names are case-sensitive. + // + // Name is a required field + Name *string `type:"string" required:"true" enum:"ResourceFilterName"` + + // One or more filter values. Allowed filter values vary by resource filter + // name, and are case-sensitive. + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s ResourceFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResourceFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceFilter"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *ResourceFilter) SetName(v string) *ResourceFilter { + s.Name = &v + return s +} + +// SetValues sets the Values field's value. +func (s *ResourceFilter) SetValues(v []*string) *ResourceFilter { + s.Values = v + return s +} + +// The ARN of a resource, and its resource type. +type ResourceIdentifier struct { + _ struct{} `type:"structure"` + + // The ARN of a resource. + ResourceArn *string `type:"string"` + + // The resource type of a resource, such as AWS::EC2::Instance. + ResourceType *string `type:"string"` +} + +// String returns the string representation +func (s ResourceIdentifier) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceIdentifier) GoString() string { + return s.String() +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *ResourceIdentifier) SetResourceArn(v string) *ResourceIdentifier { + s.ResourceArn = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *ResourceIdentifier) SetResourceType(v string) *ResourceIdentifier { + s.ResourceType = &v + return s +} + +// The query that is used to define a resource group or a search for resources. +type ResourceQuery struct { + _ struct{} `type:"structure"` + + // The query that defines a group or a search. + // + // Query is a required field + Query *string `type:"string" required:"true"` + + // The type of the query. The valid values in this release are TAG_FILTERS_1_0 + // and CLOUDFORMATION_STACK_1_0. 
+ // + // TAG_FILTERS_1_0: A JSON syntax that lets you specify a collection of simple + // tag filters for resource types and tags, as supported by the AWS Tagging + // API GetResources (https://docs.aws.amazon.com/resourcegroupstagging/latest/APIReference/API_GetResources.html) + // operation. If you specify more than one tag key, only resources that match + // all tag keys, and at least one value of each specified tag key, are returned + // in your query. If you specify more than one value for a tag key, a resource + // matches the filter if it has a tag key value that matches any of the specified + // values. + // + // For example, consider the following sample query for resources that have + // two tags, Stage and Version, with two values each. ([{"Key":"Stage","Values":["Test","Deploy"]},{"Key":"Version","Values":["1","2"]}]) + // The results of this query might include the following. + // + // * An EC2 instance that has the following two tags: {"Key":"Stage","Values":["Deploy"]}, + // and {"Key":"Version","Values":["2"]} + // + // * An S3 bucket that has the following two tags: {"Key":"Stage","Values":["Test","Deploy"]}, + // and {"Key":"Version","Values":["1"]} + // + // The query would not return the following results, however. The following + // EC2 instance does not have all tag keys specified in the filter, so it is + // rejected. The RDS database has all of the tag keys, but no values that match + // at least one of the specified tag key values in the filter. + // + // * An EC2 instance that has only the following tag: {"Key":"Stage","Values":["Deploy"]}. + // + // * An RDS database that has the following two tags: {"Key":"Stage","Values":["Archived"]}, + // and {"Key":"Version","Values":["4"]} + // + // CLOUDFORMATION_STACK_1_0: A JSON syntax that lets you specify a CloudFormation + // stack ARN. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"QueryType"` +} + +// String returns the string representation +func (s ResourceQuery) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourceQuery) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResourceQuery) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceQuery"} + if s.Query == nil { + invalidParams.Add(request.NewErrParamRequired("Query")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetQuery sets the Query field's value. +func (s *ResourceQuery) SetQuery(v string) *ResourceQuery { + s.Query = &v + return s +} + +// SetType sets the Type field's value. +func (s *ResourceQuery) SetType(v string) *ResourceQuery { + s.Type = &v + return s +} + +type SearchResourcesInput struct { + _ struct{} `type:"structure"` + + // The maximum number of group member ARNs returned by SearchResources in paginated + // output. By default, this number is 50. + MaxResults *int64 `min:"1" type:"integer"` + + // The NextToken value that is returned in a paginated SearchResources request. + // To get the next page of results, run the call again, add the NextToken parameter, + // and specify the NextToken value. + NextToken *string `type:"string"` + + // The search query, using the same formats that are supported for resource + // group definition. 
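+	//
+	// Illustrative sketch only (not part of the generated documentation): building
+	// a TAG_FILTERS_1_0 query from a consuming package and paging through the
+	// matching resources. The JSON envelope keys and the tag values shown are
+	// assumed placeholder values.
+	//
+	//    query := &resourcegroups.ResourceQuery{
+	//        Type:  aws.String(resourcegroups.QueryTypeTagFilters10),
+	//        Query: aws.String(`{"ResourceTypeFilters":["AWS::EC2::Instance"],"TagFilters":[{"Key":"Stage","Values":["Test","Deploy"]}]}`),
+	//    }
+	//    err := client.SearchResourcesPages(
+	//        &resourcegroups.SearchResourcesInput{ResourceQuery: query},
+	//        func(page *resourcegroups.SearchResourcesOutput, lastPage bool) bool {
+	//            for _, id := range page.ResourceIdentifiers {
+	//                fmt.Println(aws.StringValue(id.ResourceArn))
+	//            }
+	//            return true
+	//        })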
+ // + // ResourceQuery is a required field + ResourceQuery *ResourceQuery `type:"structure" required:"true"` +} + +// String returns the string representation +func (s SearchResourcesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SearchResourcesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SearchResourcesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SearchResourcesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.ResourceQuery == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceQuery")) + } + if s.ResourceQuery != nil { + if err := s.ResourceQuery.Validate(); err != nil { + invalidParams.AddNested("ResourceQuery", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *SearchResourcesInput) SetMaxResults(v int64) *SearchResourcesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *SearchResourcesInput) SetNextToken(v string) *SearchResourcesInput { + s.NextToken = &v + return s +} + +// SetResourceQuery sets the ResourceQuery field's value. +func (s *SearchResourcesInput) SetResourceQuery(v *ResourceQuery) *SearchResourcesInput { + s.ResourceQuery = v + return s +} + +type SearchResourcesOutput struct { + _ struct{} `type:"structure"` + + // The NextToken value to include in a subsequent SearchResources request, to + // get more results. + NextToken *string `type:"string"` + + // A list of QueryError objects. Each error is an object that contains ErrorCode + // and Message structures. Possible values for ErrorCode are CLOUDFORMATION_STACK_INACTIVE + // and CLOUDFORMATION_STACK_NOT_EXISTING. + QueryErrors []*QueryError `type:"list"` + + // The ARNs and resource types of resources that are members of the group that + // you specified. + ResourceIdentifiers []*ResourceIdentifier `type:"list"` +} + +// String returns the string representation +func (s SearchResourcesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SearchResourcesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *SearchResourcesOutput) SetNextToken(v string) *SearchResourcesOutput { + s.NextToken = &v + return s +} + +// SetQueryErrors sets the QueryErrors field's value. +func (s *SearchResourcesOutput) SetQueryErrors(v []*QueryError) *SearchResourcesOutput { + s.QueryErrors = v + return s +} + +// SetResourceIdentifiers sets the ResourceIdentifiers field's value. +func (s *SearchResourcesOutput) SetResourceIdentifiers(v []*ResourceIdentifier) *SearchResourcesOutput { + s.ResourceIdentifiers = v + return s +} + +type TagInput struct { + _ struct{} `type:"structure"` + + // The ARN of the resource to which to add tags. + // + // Arn is a required field + Arn *string `location:"uri" locationName:"Arn" type:"string" required:"true"` + + // The tags to add to the specified resource. A tag is a string-to-string map + // of key-value pairs. Tag keys can have a maximum character length of 128 characters, + // and tag values can have a maximum length of 256 characters. 
+ // + // Tags is a required field + Tags map[string]*string `type:"map" required:"true"` +} + +// String returns the string representation +func (s TagInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TagInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *TagInput) SetArn(v string) *TagInput { + s.Arn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *TagInput) SetTags(v map[string]*string) *TagInput { + s.Tags = v + return s +} + +type TagOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the tagged resource. + Arn *string `type:"string"` + + // The tags that have been added to the specified resource. + Tags map[string]*string `type:"map"` +} + +// String returns the string representation +func (s TagOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagOutput) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *TagOutput) SetArn(v string) *TagOutput { + s.Arn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *TagOutput) SetTags(v map[string]*string) *TagOutput { + s.Tags = v + return s +} + +type UntagInput struct { + _ struct{} `type:"structure"` + + // The ARN of the resource from which to remove tags. + // + // Arn is a required field + Arn *string `location:"uri" locationName:"Arn" type:"string" required:"true"` + + // The keys of the tags to be removed. + // + // Keys is a required field + Keys []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s UntagInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagInput"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Keys == nil { + invalidParams.Add(request.NewErrParamRequired("Keys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *UntagInput) SetArn(v string) *UntagInput { + s.Arn = &v + return s +} + +// SetKeys sets the Keys field's value. +func (s *UntagInput) SetKeys(v []*string) *UntagInput { + s.Keys = v + return s +} + +type UntagOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the resource from which tags have been removed. + Arn *string `type:"string"` + + // The keys of tags that have been removed. + Keys []*string `type:"list"` +} + +// String returns the string representation +func (s UntagOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagOutput) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. 
+func (s *UntagOutput) SetArn(v string) *UntagOutput { + s.Arn = &v + return s +} + +// SetKeys sets the Keys field's value. +func (s *UntagOutput) SetKeys(v []*string) *UntagOutput { + s.Keys = v + return s +} + +type UpdateGroupInput struct { + _ struct{} `type:"structure"` + + // The description of the resource group. Descriptions can have a maximum of + // 511 characters, including letters, numbers, hyphens, underscores, punctuation, + // and spaces. + Description *string `type:"string"` + + // The name of the resource group for which you want to update its description. + // + // GroupName is a required field + GroupName *string `location:"uri" locationName:"GroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *UpdateGroupInput) SetDescription(v string) *UpdateGroupInput { + s.Description = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *UpdateGroupInput) SetGroupName(v string) *UpdateGroupInput { + s.GroupName = &v + return s +} + +type UpdateGroupOutput struct { + _ struct{} `type:"structure"` + + // The full description of the resource group after it has been updated. + Group *Group `type:"structure"` +} + +// String returns the string representation +func (s UpdateGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGroupOutput) GoString() string { + return s.String() +} + +// SetGroup sets the Group field's value. +func (s *UpdateGroupOutput) SetGroup(v *Group) *UpdateGroupOutput { + s.Group = v + return s +} + +type UpdateGroupQueryInput struct { + _ struct{} `type:"structure"` + + // The name of the resource group for which you want to edit the query. + // + // GroupName is a required field + GroupName *string `location:"uri" locationName:"GroupName" min:"1" type:"string" required:"true"` + + // The resource query that determines which AWS resources are members of the + // resource group. + // + // ResourceQuery is a required field + ResourceQuery *ResourceQuery `type:"structure" required:"true"` +} + +// String returns the string representation +func (s UpdateGroupQueryInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGroupQueryInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateGroupQueryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGroupQueryInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + if s.GroupName != nil && len(*s.GroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GroupName", 1)) + } + if s.ResourceQuery == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceQuery")) + } + if s.ResourceQuery != nil { + if err := s.ResourceQuery.Validate(); err != nil { + invalidParams.AddNested("ResourceQuery", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupName sets the GroupName field's value. +func (s *UpdateGroupQueryInput) SetGroupName(v string) *UpdateGroupQueryInput { + s.GroupName = &v + return s +} + +// SetResourceQuery sets the ResourceQuery field's value. +func (s *UpdateGroupQueryInput) SetResourceQuery(v *ResourceQuery) *UpdateGroupQueryInput { + s.ResourceQuery = v + return s +} + +type UpdateGroupQueryOutput struct { + _ struct{} `type:"structure"` + + // The resource query associated with the resource group after the update. + GroupQuery *GroupQuery `type:"structure"` +} + +// String returns the string representation +func (s UpdateGroupQueryOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGroupQueryOutput) GoString() string { + return s.String() +} + +// SetGroupQuery sets the GroupQuery field's value. +func (s *UpdateGroupQueryOutput) SetGroupQuery(v *GroupQuery) *UpdateGroupQueryOutput { + s.GroupQuery = v + return s +} + +const ( + // GroupFilterNameResourceType is a GroupFilterName enum value + GroupFilterNameResourceType = "resource-type" +) + +const ( + // QueryErrorCodeCloudformationStackInactive is a QueryErrorCode enum value + QueryErrorCodeCloudformationStackInactive = "CLOUDFORMATION_STACK_INACTIVE" + + // QueryErrorCodeCloudformationStackNotExisting is a QueryErrorCode enum value + QueryErrorCodeCloudformationStackNotExisting = "CLOUDFORMATION_STACK_NOT_EXISTING" +) + +const ( + // QueryTypeTagFilters10 is a QueryType enum value + QueryTypeTagFilters10 = "TAG_FILTERS_1_0" + + // QueryTypeCloudformationStack10 is a QueryType enum value + QueryTypeCloudformationStack10 = "CLOUDFORMATION_STACK_1_0" +) + +const ( + // ResourceFilterNameResourceType is a ResourceFilterName enum value + ResourceFilterNameResourceType = "resource-type" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/doc.go b/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/doc.go new file mode 100644 index 00000000000..5d182652de0 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/doc.go @@ -0,0 +1,60 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package resourcegroups provides the client and types for making API +// requests to AWS Resource Groups. +// +// AWS Resource Groups lets you organize AWS resources such as Amazon EC2 instances, +// Amazon Relational Database Service databases, and Amazon S3 buckets into +// groups using criteria that you define as tags. A resource group is a collection +// of resources that match the resource types specified in a query, and share +// one or more tags or portions of tags. You can create a group of resources +// based on their roles in your cloud infrastructure, lifecycle stages, regions, +// application layers, or virtually any criteria. 
Resource groups enable you +// to automate management tasks, such as those in AWS Systems Manager Automation +// documents, on tag-related resources in AWS Systems Manager. Groups of tagged +// resources also let you quickly view a custom console in AWS Systems Manager +// that shows AWS Config compliance and other monitoring data about member resources. +// +// To create a resource group, build a resource query, and specify tags that +// identify the criteria that members of the group have in common. Tags are +// key-value pairs. +// +// For more information about Resource Groups, see the AWS Resource Groups User +// Guide (https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html). +// +// AWS Resource Groups uses a REST-compliant API that you can use to perform +// the following types of operations. +// +// * Create, Read, Update, and Delete (CRUD) operations on resource groups +// and resource query entities +// +// * Applying, editing, and removing tags from resource groups +// +// * Resolving resource group member ARNs so they can be returned as search +// results +// +// * Getting data about resources that are members of a group +// +// * Searching AWS resources based on a resource query +// +// See https://docs.aws.amazon.com/goto/WebAPI/resource-groups-2017-11-27 for more information on this service. +// +// See resourcegroups package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/resourcegroups/ +// +// Using the Client +// +// To contact AWS Resource Groups with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS Resource Groups client ResourceGroups for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/resourcegroups/#New +package resourcegroups diff --git a/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/errors.go b/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/errors.go new file mode 100644 index 00000000000..02234d49436 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/errors.go @@ -0,0 +1,50 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package resourcegroups + +const ( + + // ErrCodeBadRequestException for service response error code + // "BadRequestException". + // + // The request does not comply with validation rules that are defined for the + // request parameters. + ErrCodeBadRequestException = "BadRequestException" + + // ErrCodeForbiddenException for service response error code + // "ForbiddenException". + // + // The caller is not authorized to make the request. + ErrCodeForbiddenException = "ForbiddenException" + + // ErrCodeInternalServerErrorException for service response error code + // "InternalServerErrorException". + // + // An internal error occurred while processing the request. + ErrCodeInternalServerErrorException = "InternalServerErrorException" + + // ErrCodeMethodNotAllowedException for service response error code + // "MethodNotAllowedException". + // + // The request uses an HTTP method which is not allowed for the specified resource. 
+ ErrCodeMethodNotAllowedException = "MethodNotAllowedException" + + // ErrCodeNotFoundException for service response error code + // "NotFoundException". + // + // One or more resources specified in the request do not exist. + ErrCodeNotFoundException = "NotFoundException" + + // ErrCodeTooManyRequestsException for service response error code + // "TooManyRequestsException". + // + // The caller has exceeded throttling limits. + ErrCodeTooManyRequestsException = "TooManyRequestsException" + + // ErrCodeUnauthorizedException for service response error code + // "UnauthorizedException". + // + // The request has not been applied because it lacks valid authentication credentials + // for the target resource. + ErrCodeUnauthorizedException = "UnauthorizedException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/service.go b/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/service.go new file mode 100644 index 00000000000..46a19ff2316 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/resourcegroups/service.go @@ -0,0 +1,98 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package resourcegroups + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/restjson" +) + +// ResourceGroups provides the API operation methods for making requests to +// AWS Resource Groups. See this package's package overview docs +// for details on the service. +// +// ResourceGroups methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type ResourceGroups struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "resource-groups" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Resource Groups" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the ResourceGroups client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a ResourceGroups client from just a session. +// svc := resourcegroups.New(mySession) +// +// // Create a ResourceGroups client with additional configuration +// svc := resourcegroups.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *ResourceGroups { + c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "resource-groups" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. 
+func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *ResourceGroups { + svc := &ResourceGroups{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2017-11-27", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(restjson.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(restjson.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(restjson.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(restjson.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a ResourceGroups operation and runs any +// custom request initialization. +func (c *ResourceGroups) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/route53/api.go b/vendor/github.com/aws/aws-sdk-go/service/route53/api.go index e610485f070..7672212114e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/route53/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/route53/api.go @@ -15,8 +15,8 @@ const opAssociateVPCWithHostedZone = "AssociateVPCWithHostedZone" // AssociateVPCWithHostedZoneRequest generates a "aws/request.Request" representing the // client's request for the AssociateVPCWithHostedZone operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -146,8 +146,8 @@ const opChangeResourceRecordSets = "ChangeResourceRecordSets" // ChangeResourceRecordSetsRequest generates a "aws/request.Request" representing the // client's request for the ChangeResourceRecordSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -197,28 +197,28 @@ func (c *Route53) ChangeResourceRecordSetsRequest(input *ChangeResourceRecordSet // The request body must include a document with a ChangeResourceRecordSetsRequest // element. The request body contains a list of change items, known as a change // batch. Change batches are considered transactional changes. When using the -// Amazon Route 53 API to change resource record sets, Amazon Route 53 either -// makes all or none of the changes in a change batch request. This ensures -// that Amazon Route 53 never partially implements the intended changes to the -// resource record sets in a hosted zone. 
+// Amazon Route 53 API to change resource record sets, Route 53 either makes +// all or none of the changes in a change batch request. This ensures that Route +// 53 never partially implements the intended changes to the resource record +// sets in a hosted zone. // // For example, a change batch request that deletes the CNAME record for www.example.com -// and creates an alias resource record set for www.example.com. Amazon Route -// 53 deletes the first resource record set and creates the second resource -// record set in a single operation. If either the DELETE or the CREATE action -// fails, then both changes (plus any other changes in the batch) fail, and -// the original CNAME record continues to exist. +// and creates an alias resource record set for www.example.com. Route 53 deletes +// the first resource record set and creates the second resource record set +// in a single operation. If either the DELETE or the CREATE action fails, then +// both changes (plus any other changes in the batch) fail, and the original +// CNAME record continues to exist. // // Due to the nature of transactional changes, you can't delete the same resource // record set more than once in a single change batch. If you attempt to delete -// the same change batch more than once, Amazon Route 53 returns an InvalidChangeBatch +// the same change batch more than once, Route 53 returns an InvalidChangeBatch // error. // // Traffic Flow // // To create resource record sets for complex routing configurations, use either -// the traffic flow visual editor in the Amazon Route 53 console or the API -// actions for traffic policies and traffic policy instances. Save the configuration +// the traffic flow visual editor in the Route 53 console or the API actions +// for traffic policies and traffic policy instances. Save the configuration // as a traffic policy, then associate the traffic policy with one or more domain // names (such as example.com) or subdomain names (such as www.example.com), // in the same hosted zone or in multiple hosted zones. You can roll back the @@ -236,8 +236,8 @@ func (c *Route53) ChangeResourceRecordSetsRequest(input *ChangeResourceRecordSet // values. // // * UPSERT: If a resource record set does not already exist, AWS creates -// it. If a resource set does exist, Amazon Route 53 updates it with the -// values in the request. +// it. If a resource set does exist, Route 53 updates it with the values +// in the request. // // Syntaxes for Creating, Updating, and Deleting Resource Record Sets // @@ -251,14 +251,14 @@ func (c *Route53) ChangeResourceRecordSetsRequest(input *ChangeResourceRecordSet // all of the elements for every kind of resource record set that you can create, // delete, or update by using ChangeResourceRecordSets. // -// Change Propagation to Amazon Route 53 DNS Servers +// Change Propagation to Route 53 DNS Servers // -// When you submit a ChangeResourceRecordSets request, Amazon Route 53 propagates -// your changes to all of the Amazon Route 53 authoritative DNS servers. While -// your changes are propagating, GetChange returns a status of PENDING. When -// propagation is complete, GetChange returns a status of INSYNC. Changes generally -// propagate to all Amazon Route 53 name servers within 60 seconds. For more -// information, see GetChange. +// When you submit a ChangeResourceRecordSets request, Route 53 propagates your +// changes to all of the Route 53 authoritative DNS servers. While your changes +// are propagating, GetChange returns a status of PENDING. 
When propagation +// is complete, GetChange returns a status of INSYNC. Changes generally propagate +// to all Route 53 name servers within 60 seconds. For more information, see +// GetChange. // // Limits on ChangeResourceRecordSets Requests // @@ -278,8 +278,7 @@ func (c *Route53) ChangeResourceRecordSetsRequest(input *ChangeResourceRecordSet // No hosted zone exists with the ID that you specified. // // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. +// No health check exists with the specified ID. // // * ErrCodeInvalidChangeBatch "InvalidChangeBatch" // This exception contains a list of messages that might contain one or more @@ -291,8 +290,8 @@ func (c *Route53) ChangeResourceRecordSetsRequest(input *ChangeResourceRecordSet // * ErrCodePriorRequestNotComplete "PriorRequestNotComplete" // If Amazon Route 53 can't process a request before the next request arrives, // it will reject subsequent requests for the same hosted zone and return an -// HTTP 400 error (Bad request). If Amazon Route 53 returns this error repeatedly -// for the same request, we recommend that you wait, in intervals of increasing +// HTTP 400 error (Bad request). If Route 53 returns this error repeatedly for +// the same request, we recommend that you wait, in intervals of increasing // duration, before you try the request again. // // See also, https://docs.aws.amazon.com/goto/WebAPI/route53-2013-04-01/ChangeResourceRecordSets @@ -321,8 +320,8 @@ const opChangeTagsForResource = "ChangeTagsForResource" // ChangeTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ChangeTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -379,8 +378,7 @@ func (c *Route53) ChangeTagsForResourceRequest(input *ChangeTagsForResourceInput // The input is not valid. // // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. +// No health check exists with the specified ID. // // * ErrCodeNoSuchHostedZone "NoSuchHostedZone" // No hosted zone exists with the ID that you specified. @@ -388,8 +386,8 @@ func (c *Route53) ChangeTagsForResourceRequest(input *ChangeTagsForResourceInput // * ErrCodePriorRequestNotComplete "PriorRequestNotComplete" // If Amazon Route 53 can't process a request before the next request arrives, // it will reject subsequent requests for the same hosted zone and return an -// HTTP 400 error (Bad request). If Amazon Route 53 returns this error repeatedly -// for the same request, we recommend that you wait, in intervals of increasing +// HTTP 400 error (Bad request). If Route 53 returns this error repeatedly for +// the same request, we recommend that you wait, in intervals of increasing // duration, before you try the request again. // // * ErrCodeThrottlingException "ThrottlingException" @@ -421,8 +419,8 @@ const opCreateHealthCheck = "CreateHealthCheck" // CreateHealthCheckRequest generates a "aws/request.Request" representing the // client's request for the CreateHealthCheck operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -471,17 +469,17 @@ func (c *Route53) CreateHealthCheckRequest(input *CreateHealthCheckInput) (req * // If you're registering EC2 instances with an Elastic Load Balancing (ELB) // load balancer, do not create Amazon Route 53 health checks for the EC2 instances. // When you register an EC2 instance with a load balancer, you configure settings -// for an ELB health check, which performs a similar function to an Amazon Route -// 53 health check. +// for an ELB health check, which performs a similar function to a Route 53 +// health check. // // Private Hosted Zones // // You can associate health checks with failover resource record sets in a private // hosted zone. Note the following: // -// * Amazon Route 53 health checkers are outside the VPC. To check the health -// of an endpoint within a VPC by IP address, you must assign a public IP -// address to the instance in the VPC. +// * Route 53 health checkers are outside the VPC. To check the health of +// an endpoint within a VPC by IP address, you must assign a public IP address +// to the instance in the VPC. // // * You can configure a health checker to check the health of an external // resource that the instance relies on, such as a database server. @@ -557,8 +555,8 @@ const opCreateHostedZone = "CreateHostedZone" // CreateHostedZoneRequest generates a "aws/request.Request" representing the // client's request for the CreateHostedZone operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -597,41 +595,45 @@ func (c *Route53) CreateHostedZoneRequest(input *CreateHostedZoneInput) (req *re // CreateHostedZone API operation for Amazon Route 53. // -// Creates a new public hosted zone, which you use to specify how the Domain -// Name System (DNS) routes traffic on the Internet for a domain, such as example.com, -// and its subdomains. +// Creates a new public or private hosted zone. You create records in a public +// hosted zone to define how you want to route traffic on the internet for a +// domain, such as example.com, and its subdomains (apex.example.com, acme.example.com). +// You create records in a private hosted zone to define how you want to route +// traffic for a domain and its subdomains within one or more Amazon Virtual +// Private Clouds (Amazon VPCs). // -// You can't convert a public hosted zones to a private hosted zone or vice -// versa. Instead, you must create a new hosted zone with the same name and -// create new resource record sets. +// You can't convert a public hosted zone to a private hosted zone or vice versa. +// Instead, you must create a new hosted zone with the same name and create +// new resource record sets. // // For more information about charges for hosted zones, see Amazon Route 53 // Pricing (http://aws.amazon.com/route53/pricing/). 
// // Note the following: // -// * You can't create a hosted zone for a top-level domain (TLD). +// * You can't create a hosted zone for a top-level domain (TLD) such as +// .com. // -// * Amazon Route 53 automatically creates a default SOA record and four -// NS records for the zone. For more information about SOA and NS records, -// see NS and SOA Records that Amazon Route 53 Creates for a Hosted Zone -// (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html) +// * For public hosted zones, Amazon Route 53 automatically creates a default +// SOA record and four NS records for the zone. For more information about +// SOA and NS records, see NS and SOA Records that Route 53 Creates for a +// Hosted Zone (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html) // in the Amazon Route 53 Developer Guide. // -// If you want to use the same name servers for multiple hosted zones, you can -// optionally associate a reusable delegation set with the hosted zone. See -// the DelegationSetId element. +// If you want to use the same name servers for multiple public hosted zones, +// you can optionally associate a reusable delegation set with the hosted +// zone. See the DelegationSetId element. // -// * If your domain is registered with a registrar other than Amazon Route -// 53, you must update the name servers with your registrar to make Amazon -// Route 53 your DNS service. For more information, see Configuring Amazon -// Route 53 as your DNS Service (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/creating-migrating.html) +// * If your domain is registered with a registrar other than Route 53, you +// must update the name servers with your registrar to make Route 53 the +// DNS service for the domain. For more information, see Migrating DNS Service +// for an Existing Domain to Amazon Route 53 (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html) // in the Amazon Route 53 Developer Guide. // // When you submit a CreateHostedZone request, the initial status of the hosted -// zone is PENDING. This means that the NS and SOA records are not yet available -// on all Amazon Route 53 DNS servers. When the NS and SOA records are available, -// the status of the zone changes to INSYNC. +// zone is PENDING. For public hosted zones, this means that the NS and SOA +// records are not yet available on all Route 53 DNS servers. When the NS and +// SOA records are available, the status of the zone changes to INSYNC. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -678,8 +680,8 @@ func (c *Route53) CreateHostedZoneRequest(input *CreateHostedZoneInput) (req *re // You can create a hosted zone that has the same name as an existing hosted // zone (example.com is common), but there is a limit to the number of hosted // zones that have the same name. If you get this error, Amazon Route 53 has -// reached that limit. If you own the domain name and Amazon Route 53 generates -// this error, contact Customer Support. +// reached that limit. If you own the domain name and Route 53 generates this +// error, contact Customer Support. 
// // * ErrCodeConflictingDomainExists "ConflictingDomainExists" // The cause of this error depends on whether you're trying to create a public @@ -731,8 +733,8 @@ const opCreateQueryLoggingConfig = "CreateQueryLoggingConfig" // CreateQueryLoggingConfigRequest generates a "aws/request.Request" representing the // client's request for the CreateQueryLoggingConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -775,10 +777,10 @@ func (c *Route53) CreateQueryLoggingConfigRequest(input *CreateQueryLoggingConfi // configuration, Amazon Route 53 begins to publish log data to an Amazon CloudWatch // Logs log group. // -// DNS query logs contain information about the queries that Amazon Route 53 -// receives for a specified public hosted zone, such as the following: +// DNS query logs contain information about the queries that Route 53 receives +// for a specified public hosted zone, such as the following: // -// * Amazon Route 53 edge location that responded to the DNS query +// * Route 53 edge location that responded to the DNS query // // * Domain or subdomain that was requested // @@ -789,8 +791,8 @@ func (c *Route53) CreateQueryLoggingConfigRequest(input *CreateQueryLoggingConfi // Log Group and Resource PolicyBefore you create a query logging configuration, // perform the following operations. // -// If you create a query logging configuration using the Amazon Route 53 console, -// Amazon Route 53 performs these operations automatically. +// If you create a query logging configuration using the Route 53 console, Route +// 53 performs these operations automatically. // // Create a CloudWatch Logs log group, and make note of the ARN, which you specify // when you create a query logging configuration. Note the following: @@ -806,30 +808,30 @@ func (c *Route53) CreateQueryLoggingConfigRequest(input *CreateQueryLoggingConfi // /aws/route53/hosted zone name // // In the next step, you'll create a resource policy, which controls access -// to one or more log groups and the associated AWS resources, such as Amazon -// Route 53 hosted zones. There's a limit on the number of resource policies -// that you can create, so we recommend that you use a consistent prefix so -// you can use the same resource policy for all the log groups that you create -// for query logging. +// to one or more log groups and the associated AWS resources, such as Route +// 53 hosted zones. There's a limit on the number of resource policies that +// you can create, so we recommend that you use a consistent prefix so you can +// use the same resource policy for all the log groups that you create for query +// logging. // // Create a CloudWatch Logs resource policy, and give it the permissions that -// Amazon Route 53 needs to create log streams and to send query logs to log -// streams. For the value of Resource, specify the ARN for the log group that -// you created in the previous step. To use the same resource policy for all -// the CloudWatch Logs log groups that you created for query logging configurations, -// replace the hosted zone name with *, for example: +// Route 53 needs to create log streams and to send query logs to log streams. 
+// For the value of Resource, specify the ARN for the log group that you created +// in the previous step. To use the same resource policy for all the CloudWatch +// Logs log groups that you created for query logging configurations, replace +// the hosted zone name with *, for example: // // arn:aws:logs:us-east-1:123412341234:log-group:/aws/route53/* // // You can't use the CloudWatch console to create or edit a resource policy. // You must use the CloudWatch API, one of the AWS SDKs, or the AWS CLI. // -// Log Streams and Edge LocationsWhen Amazon Route 53 finishes creating the -// configuration for DNS query logging, it does the following: +// Log Streams and Edge LocationsWhen Route 53 finishes creating the configuration +// for DNS query logging, it does the following: // // Creates a log stream for an edge location the first time that the edge location // responds to DNS queries for the specified hosted zone. That log stream is -// used to log all queries that Amazon Route 53 responds to for that edge location. +// used to log all queries that Route 53 responds to for that edge location. // // Begins to send query logs to the applicable log stream. // @@ -841,18 +843,17 @@ func (c *Route53) CreateQueryLoggingConfigRequest(input *CreateQueryLoggingConfi // number, for example, DFW3. The three-letter code typically corresponds with // the International Air Transport Association airport code for an airport near // the edge location. (These abbreviations might change in the future.) For -// a list of edge locations, see "The Amazon Route 53 Global Network" on the -// Amazon Route 53 Product Details (http://aws.amazon.com/route53/details/) -// page. +// a list of edge locations, see "The Route 53 Global Network" on the Route +// 53 Product Details (http://aws.amazon.com/route53/details/) page. // // Queries That Are LoggedQuery logs contain only the queries that DNS resolvers -// forward to Amazon Route 53. If a DNS resolver has already cached the response -// to a query (such as the IP address for a load balancer for example.com), -// the resolver will continue to return the cached response. It doesn't forward -// another query to Amazon Route 53 until the TTL for the corresponding resource -// record set expires. Depending on how many DNS queries are submitted for a -// resource record set, and depending on the TTL for that resource record set, -// query logs might contain information about only one query out of every several +// forward to Route 53. If a DNS resolver has already cached the response to +// a query (such as the IP address for a load balancer for example.com), the +// resolver will continue to return the cached response. It doesn't forward +// another query to Route 53 until the TTL for the corresponding resource record +// set expires. Depending on how many DNS queries are submitted for a resource +// record set, and depending on the TTL for that resource record set, query +// logs might contain information about only one query out of every several // thousand queries that are submitted to DNS. For more information about how // DNS works, see Routing Internet Traffic to Your Website or Web Application // (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/welcome-dns-service.html) @@ -865,9 +866,8 @@ func (c *Route53) CreateQueryLoggingConfigRequest(input *CreateQueryLoggingConfi // PricingFor information about charges for query logs, see Amazon CloudWatch // Pricing (http://aws.amazon.com/cloudwatch/pricing/). 
// -// How to Stop LoggingIf you want Amazon Route 53 to stop sending query logs -// to CloudWatch Logs, delete the query logging configuration. For more information, -// see DeleteQueryLoggingConfig. +// How to Stop LoggingIf you want Route 53 to stop sending query logs to CloudWatch +// Logs, delete the query logging configuration. For more information, see DeleteQueryLoggingConfig. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -932,8 +932,8 @@ const opCreateReusableDelegationSet = "CreateReusableDelegationSet" // CreateReusableDelegationSetRequest generates a "aws/request.Request" representing the // client's request for the CreateReusableDelegationSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1052,8 +1052,8 @@ func (c *Route53) CreateReusableDelegationSetRequest(input *CreateReusableDelega // You can create a hosted zone that has the same name as an existing hosted // zone (example.com is common), but there is a limit to the number of hosted // zones that have the same name. If you get this error, Amazon Route 53 has -// reached that limit. If you own the domain name and Amazon Route 53 generates -// this error, contact Customer Support. +// reached that limit. If you own the domain name and Route 53 generates this +// error, contact Customer Support. // // * ErrCodeDelegationSetAlreadyReusable "DelegationSetAlreadyReusable" // The specified delegation set has already been marked as reusable. @@ -1084,8 +1084,8 @@ const opCreateTrafficPolicy = "CreateTrafficPolicy" // CreateTrafficPolicyRequest generates a "aws/request.Request" representing the // client's request for the CreateTrafficPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1184,8 +1184,8 @@ const opCreateTrafficPolicyInstance = "CreateTrafficPolicyInstance" // CreateTrafficPolicyInstanceRequest generates a "aws/request.Request" representing the // client's request for the CreateTrafficPolicyInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1289,8 +1289,8 @@ const opCreateTrafficPolicyVersion = "CreateTrafficPolicyVersion" // CreateTrafficPolicyVersionRequest generates a "aws/request.Request" representing the // client's request for the CreateTrafficPolicyVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1396,8 +1396,8 @@ const opCreateVPCAssociationAuthorization = "CreateVPCAssociationAuthorization" // CreateVPCAssociationAuthorizationRequest generates a "aws/request.Request" representing the // client's request for the CreateVPCAssociationAuthorization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1502,8 +1502,8 @@ const opDeleteHealthCheck = "DeleteHealthCheck" // DeleteHealthCheckRequest generates a "aws/request.Request" representing the // client's request for the DeleteHealthCheck operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1561,8 +1561,7 @@ func (c *Route53) DeleteHealthCheckRequest(input *DeleteHealthCheckInput) (req * // // Returned Error Codes: // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. +// No health check exists with the specified ID. // // * ErrCodeHealthCheckInUse "HealthCheckInUse" // This error code is not in use. @@ -1596,8 +1595,8 @@ const opDeleteHostedZone = "DeleteHostedZone" // DeleteHostedZoneRequest generates a "aws/request.Request" representing the // client's request for the DeleteHostedZone operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1656,8 +1655,8 @@ func (c *Route53) DeleteHostedZoneRequest(input *DeleteHostedZoneInput) (req *re // and NS resource record sets. If the hosted zone contains other resource record // sets, you must delete them before you can delete the hosted zone. If you // try to delete a hosted zone that contains other resource record sets, the -// request fails, and Amazon Route 53 returns a HostedZoneNotEmpty error. For -// information about deleting records from your hosted zone, see ChangeResourceRecordSets. +// request fails, and Route 53 returns a HostedZoneNotEmpty error. For information +// about deleting records from your hosted zone, see ChangeResourceRecordSets. 
// // To verify that the hosted zone has been deleted, do one of the following: // @@ -1684,8 +1683,8 @@ func (c *Route53) DeleteHostedZoneRequest(input *DeleteHostedZoneInput) (req *re // * ErrCodePriorRequestNotComplete "PriorRequestNotComplete" // If Amazon Route 53 can't process a request before the next request arrives, // it will reject subsequent requests for the same hosted zone and return an -// HTTP 400 error (Bad request). If Amazon Route 53 returns this error repeatedly -// for the same request, we recommend that you wait, in intervals of increasing +// HTTP 400 error (Bad request). If Route 53 returns this error repeatedly for +// the same request, we recommend that you wait, in intervals of increasing // duration, before you try the request again. // // * ErrCodeInvalidInput "InvalidInput" @@ -1720,8 +1719,8 @@ const opDeleteQueryLoggingConfig = "DeleteQueryLoggingConfig" // DeleteQueryLoggingConfigRequest generates a "aws/request.Request" representing the // client's request for the DeleteQueryLoggingConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1761,8 +1760,8 @@ func (c *Route53) DeleteQueryLoggingConfigRequest(input *DeleteQueryLoggingConfi // DeleteQueryLoggingConfig API operation for Amazon Route 53. // // Deletes a configuration for DNS query logging. If you delete a configuration, -// Amazon Route 53 stops sending query logs to CloudWatch Logs. Amazon Route -// 53 doesn't delete any logs that are already in CloudWatch Logs. +// Amazon Route 53 stops sending query logs to CloudWatch Logs. Route 53 doesn't +// delete any logs that are already in CloudWatch Logs. // // For more information about DNS query logs, see CreateQueryLoggingConfig. // @@ -1810,8 +1809,8 @@ const opDeleteReusableDelegationSet = "DeleteReusableDelegationSet" // DeleteReusableDelegationSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteReusableDelegationSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1906,8 +1905,8 @@ const opDeleteTrafficPolicy = "DeleteTrafficPolicy" // DeleteTrafficPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteTrafficPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1996,8 +1995,8 @@ const opDeleteTrafficPolicyInstance = "DeleteTrafficPolicyInstance" // DeleteTrafficPolicyInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeleteTrafficPolicyInstance operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2039,8 +2038,7 @@ func (c *Route53) DeleteTrafficPolicyInstanceRequest(input *DeleteTrafficPolicyI // Deletes a traffic policy instance and all of the resource record sets that // Amazon Route 53 created when you created the instance. // -// In the Amazon Route 53 console, traffic policy instances are known as policy -// records. +// In the Route 53 console, traffic policy instances are known as policy records. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2059,8 +2057,8 @@ func (c *Route53) DeleteTrafficPolicyInstanceRequest(input *DeleteTrafficPolicyI // * ErrCodePriorRequestNotComplete "PriorRequestNotComplete" // If Amazon Route 53 can't process a request before the next request arrives, // it will reject subsequent requests for the same hosted zone and return an -// HTTP 400 error (Bad request). If Amazon Route 53 returns this error repeatedly -// for the same request, we recommend that you wait, in intervals of increasing +// HTTP 400 error (Bad request). If Route 53 returns this error repeatedly for +// the same request, we recommend that you wait, in intervals of increasing // duration, before you try the request again. // // See also, https://docs.aws.amazon.com/goto/WebAPI/route53-2013-04-01/DeleteTrafficPolicyInstance @@ -2089,8 +2087,8 @@ const opDeleteVPCAssociationAuthorization = "DeleteVPCAssociationAuthorization" // DeleteVPCAssociationAuthorizationRequest generates a "aws/request.Request" representing the // client's request for the DeleteVPCAssociationAuthorization operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2192,8 +2190,8 @@ const opDisassociateVPCFromHostedZone = "DisassociateVPCFromHostedZone" // DisassociateVPCFromHostedZoneRequest generates a "aws/request.Request" representing the // client's request for the DisassociateVPCFromHostedZone operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2232,13 +2230,16 @@ func (c *Route53) DisassociateVPCFromHostedZoneRequest(input *DisassociateVPCFro // DisassociateVPCFromHostedZone API operation for Amazon Route 53. // -// Disassociates a VPC from a Amazon Route 53 private hosted zone. +// Disassociates a VPC from a Amazon Route 53 private hosted zone. Note the +// following: +// +// * You can't disassociate the last VPC from a private hosted zone. // -// You can't disassociate the last VPC from a private hosted zone. 
+// * You can't convert a private hosted zone into a public hosted zone. // -// You can't disassociate a VPC from a private hosted zone when only one VPC -// is associated with the hosted zone. You also can't convert a private hosted -// zone into a public hosted zone. +// * You can submit a DisassociateVPCFromHostedZone request using either +// the account that created the hosted zone or the account that created the +// VPC. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2292,8 +2293,8 @@ const opGetAccountLimit = "GetAccountLimit" // GetAccountLimitRequest generates a "aws/request.Request" representing the // client's request for the GetAccountLimit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2376,8 +2377,8 @@ const opGetChange = "GetChange" // GetChangeRequest generates a "aws/request.Request" representing the // client's request for the GetChange operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2423,8 +2424,8 @@ func (c *Route53) GetChangeRequest(input *GetChangeInput) (req *request.Request, // to all Amazon Route 53 DNS servers. This is the initial status of all // change batch requests. // -// * INSYNC indicates that the changes have propagated to all Amazon Route -// 53 DNS servers. +// * INSYNC indicates that the changes have propagated to all Route 53 DNS +// servers. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2466,8 +2467,8 @@ const opGetCheckerIpRanges = "GetCheckerIpRanges" // GetCheckerIpRangesRequest generates a "aws/request.Request" representing the // client's request for the GetCheckerIpRanges operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2543,8 +2544,8 @@ const opGetGeoLocation = "GetGeoLocation" // GetGeoLocationRequest generates a "aws/request.Request" representing the // client's request for the GetGeoLocation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2589,17 +2590,17 @@ func (c *Route53) GetGeoLocationRequest(input *GetGeoLocationInput) (req *reques // Use the following syntax to determine whether a continent is supported for // geolocation: // -// GET /2013-04-01/geolocation?ContinentCode=two-letter abbreviation for a continent +// GET /2013-04-01/geolocation?continentcode=two-letter abbreviation for a continent // // Use the following syntax to determine whether a country is supported for // geolocation: // -// GET /2013-04-01/geolocation?CountryCode=two-character country code +// GET /2013-04-01/geolocation?countrycode=two-character country code // // Use the following syntax to determine whether a subdivision of a country // is supported for geolocation: // -// GET /2013-04-01/geolocation?CountryCode=two-character country code&SubdivisionCode=subdivision +// GET /2013-04-01/geolocation?countrycode=two-character country code&subdivisioncode=subdivision // code // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2611,7 +2612,7 @@ func (c *Route53) GetGeoLocationRequest(input *GetGeoLocationInput) (req *reques // // Returned Error Codes: // * ErrCodeNoSuchGeoLocation "NoSuchGeoLocation" -// Amazon Route 53 doesn't support the specified geolocation. +// Amazon Route 53 doesn't support the specified geographic location. // // * ErrCodeInvalidInput "InvalidInput" // The input is not valid. @@ -2642,8 +2643,8 @@ const opGetHealthCheck = "GetHealthCheck" // GetHealthCheckRequest generates a "aws/request.Request" representing the // client's request for the GetHealthCheck operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2693,8 +2694,7 @@ func (c *Route53) GetHealthCheckRequest(input *GetHealthCheckInput) (req *reques // // Returned Error Codes: // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. +// No health check exists with the specified ID. // // * ErrCodeInvalidInput "InvalidInput" // The input is not valid. @@ -2729,8 +2729,8 @@ const opGetHealthCheckCount = "GetHealthCheckCount" // GetHealthCheckCountRequest generates a "aws/request.Request" representing the // client's request for the GetHealthCheckCount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2804,8 +2804,8 @@ const opGetHealthCheckLastFailureReason = "GetHealthCheckLastFailureReason" // GetHealthCheckLastFailureReasonRequest generates a "aws/request.Request" representing the // client's request for the GetHealthCheckLastFailureReason operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2855,8 +2855,7 @@ func (c *Route53) GetHealthCheckLastFailureReasonRequest(input *GetHealthCheckLa // // Returned Error Codes: // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. +// No health check exists with the specified ID. // // * ErrCodeInvalidInput "InvalidInput" // The input is not valid. @@ -2887,8 +2886,8 @@ const opGetHealthCheckStatus = "GetHealthCheckStatus" // GetHealthCheckStatusRequest generates a "aws/request.Request" representing the // client's request for the GetHealthCheckStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2938,8 +2937,7 @@ func (c *Route53) GetHealthCheckStatusRequest(input *GetHealthCheckStatusInput) // // Returned Error Codes: // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. +// No health check exists with the specified ID. // // * ErrCodeInvalidInput "InvalidInput" // The input is not valid. @@ -2970,8 +2968,8 @@ const opGetHostedZone = "GetHostedZone" // GetHostedZoneRequest generates a "aws/request.Request" representing the // client's request for the GetHostedZone operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3053,8 +3051,8 @@ const opGetHostedZoneCount = "GetHostedZoneCount" // GetHostedZoneCountRequest generates a "aws/request.Request" representing the // client's request for the GetHostedZoneCount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3133,8 +3131,8 @@ const opGetHostedZoneLimit = "GetHostedZoneLimit" // GetHostedZoneLimitRequest generates a "aws/request.Request" representing the // client's request for the GetHostedZoneLimit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3223,8 +3221,8 @@ const opGetQueryLoggingConfig = "GetQueryLoggingConfig" // GetQueryLoggingConfigRequest generates a "aws/request.Request" representing the // client's request for the GetQueryLoggingConfig operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3308,8 +3306,8 @@ const opGetReusableDelegationSet = "GetReusableDelegationSet" // GetReusableDelegationSetRequest generates a "aws/request.Request" representing the // client's request for the GetReusableDelegationSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3394,8 +3392,8 @@ const opGetReusableDelegationSetLimit = "GetReusableDelegationSetLimit" // GetReusableDelegationSetLimitRequest generates a "aws/request.Request" representing the // client's request for the GetReusableDelegationSetLimit operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3481,8 +3479,8 @@ const opGetTrafficPolicy = "GetTrafficPolicy" // GetTrafficPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetTrafficPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3563,8 +3561,8 @@ const opGetTrafficPolicyInstance = "GetTrafficPolicyInstance" // GetTrafficPolicyInstanceRequest generates a "aws/request.Request" representing the // client's request for the GetTrafficPolicyInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3610,8 +3608,7 @@ func (c *Route53) GetTrafficPolicyInstanceRequest(input *GetTrafficPolicyInstanc // record sets that are specified in the traffic policy definition. For more // information, see the State response element. // -// In the Amazon Route 53 console, traffic policy instances are known as policy -// records. +// In the Route 53 console, traffic policy instances are known as policy records. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -3653,8 +3650,8 @@ const opGetTrafficPolicyInstanceCount = "GetTrafficPolicyInstanceCount" // GetTrafficPolicyInstanceCountRequest generates a "aws/request.Request" representing the // client's request for the GetTrafficPolicyInstanceCount operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3728,8 +3725,8 @@ const opListGeoLocations = "ListGeoLocations" // ListGeoLocationsRequest generates a "aws/request.Request" representing the // client's request for the ListGeoLocations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3768,7 +3765,7 @@ func (c *Route53) ListGeoLocationsRequest(input *ListGeoLocationsInput) (req *re // ListGeoLocations API operation for Amazon Route 53. // -// Retrieves a list of supported geo locations. +// Retrieves a list of supported geographic locations. // // Countries are listed first, and continents are listed last. If Amazon Route // 53 supports subdivisions for a country (for example, states or provinces), @@ -3812,8 +3809,8 @@ const opListHealthChecks = "ListHealthChecks" // ListHealthChecksRequest generates a "aws/request.Request" representing the // client's request for the ListHealthChecks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3952,8 +3949,8 @@ const opListHostedZones = "ListHostedZones" // ListHostedZonesRequest generates a "aws/request.Request" representing the // client's request for the ListHostedZones operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4099,8 +4096,8 @@ const opListHostedZonesByName = "ListHostedZonesByName" // ListHostedZonesByNameRequest generates a "aws/request.Request" representing the // client's request for the ListHostedZonesByName operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
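The `ListGeoLocations` documentation above notes that countries are listed first and that supported subdivisions immediately follow their country. A minimal sketch of one such request (the `MaxItems` value is arbitrary):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	// Fetch up to 50 supported geographic locations; countries come first and
	// any supported subdivisions follow the country they belong to.
	out, err := svc.ListGeoLocations(&route53.ListGeoLocationsInput{
		MaxItems: aws.String("50"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, loc := range out.GeoLocationDetailsList {
		fmt.Println(aws.StringValue(loc.CountryCode), aws.StringValue(loc.SubdivisionCode))
	}
	if aws.BoolValue(out.IsTruncated) {
		fmt.Println("more locations available; continue from the Next* values in the response")
	}
}
```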
@@ -4163,10 +4160,10 @@ func (c *Route53) ListHostedZonesByNameRequest(input *ListHostedZonesByNameInput // domain names, see DNS Domain Name Format (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DomainNameFormat.html) // in the Amazon Route 53 Developer Guide. // -// Amazon Route 53 returns up to 100 items in each response. If you have a lot -// of hosted zones, use the MaxItems parameter to list them in groups of up -// to 100. The response includes values that help navigate from one group of -// MaxItems hosted zones to the next: +// Route 53 returns up to 100 items in each response. If you have a lot of hosted +// zones, use the MaxItems parameter to list them in groups of up to 100. The +// response includes values that help navigate from one group of MaxItems hosted +// zones to the next: // // * The DNSName and HostedZoneId elements in the response contain the values, // if any, specified for the dnsname and hostedzoneid parameters in the request @@ -4230,8 +4227,8 @@ const opListQueryLoggingConfigs = "ListQueryLoggingConfigs" // ListQueryLoggingConfigsRequest generates a "aws/request.Request" representing the // client's request for the ListQueryLoggingConfigs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4323,8 +4320,8 @@ const opListResourceRecordSets = "ListResourceRecordSets" // ListResourceRecordSetsRequest generates a "aws/request.Request" representing the // client's request for the ListResourceRecordSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4373,18 +4370,25 @@ func (c *Route53) ListResourceRecordSetsRequest(input *ListResourceRecordSetsInp // // ListResourceRecordSets returns up to 100 resource record sets at a time in // ASCII order, beginning at a position specified by the name and type elements. -// The action sorts results first by DNS name with the labels reversed, for -// example: +// +// Sort order +// +// ListResourceRecordSets sorts results first by DNS name with the labels reversed, +// for example: // // com.example.www. // -// Note the trailing dot, which can change the sort order in some circumstances. +// Note the trailing dot, which can change the sort order when the record name +// contains characters that appear before . (decimal 46) in the ASCII table. +// These characters include the following: ! " # $ % & ' ( ) * + , - +// +// When multiple records have the same DNS name, ListResourceRecordSets sorts +// results by the record type. // -// When multiple records have the same DNS name, the action sorts results by -// the record type. 
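The reworded `ListHostedZonesByName` docs above describe paging through zones in groups of up to 100 using `MaxItems`, with the response supplying the values needed to reach the next group. A rough sketch of that loop (no values below come from the change itself):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	in := &route53.ListHostedZonesByNameInput{MaxItems: aws.String("100")}
	for {
		out, err := svc.ListHostedZonesByName(in)
		if err != nil {
			log.Fatal(err)
		}
		for _, zone := range out.HostedZones {
			fmt.Println(aws.StringValue(zone.Name), aws.StringValue(zone.Id))
		}
		if !aws.BoolValue(out.IsTruncated) {
			break
		}
		// Navigate to the next group of up to MaxItems hosted zones using the
		// values the response provides for that purpose.
		in.DNSName = out.NextDNSName
		in.HostedZoneId = out.NextHostedZoneId
	}
}
```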
+// Specifying where to start listing records // -// You can use the name and type elements to adjust the beginning position of -// the list of resource record sets returned: +// You can use the name and type elements to specify the resource record set +// that the list begins with: // // If you do not specify Name or TypeThe results begin with the first resource // record set that the hosted zone contains. @@ -4399,9 +4403,13 @@ func (c *Route53) ListResourceRecordSetsRequest(input *ListResourceRecordSetsInp // record set in the list whose name is greater than or equal to Name, and whose // type is greater than or equal to Type. // +// Resource record sets that are PENDING +// // This action returns the most current version of the records. This includes -// records that are PENDING, and that are not yet available on all Amazon Route -// 53 DNS servers. +// records that are PENDING, and that are not yet available on all Route 53 +// DNS servers. +// +// Changing resource record sets // // To ensure that you get an accurate listing of the resource record sets for // a hosted zone at a point in time, do not submit a ChangeResourceRecordSets @@ -4409,6 +4417,14 @@ func (c *Route53) ListResourceRecordSetsRequest(input *ListResourceRecordSetsInp // request. If you do, some pages may display results without the latest changes // while other pages display results with the latest changes. // +// Displaying the next page of results +// +// If a ListResourceRecordSets command returns more than one page of results, +// the value of IsTruncated is true. To display the next page of results, get +// the values of NextRecordName, NextRecordType, and NextRecordIdentifier (if +// any) from the response. Then submit another ListResourceRecordSets request, +// and specify those values for StartRecordName, StartRecordType, and StartRecordIdentifier. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -4499,8 +4515,8 @@ const opListReusableDelegationSets = "ListReusableDelegationSets" // ListReusableDelegationSetsRequest generates a "aws/request.Request" representing the // client's request for the ListReusableDelegationSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4579,8 +4595,8 @@ const opListTagsForResource = "ListTagsForResource" // ListTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4637,8 +4653,7 @@ func (c *Route53) ListTagsForResourceRequest(input *ListTagsForResourceInput) (r // The input is not valid. // // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. 
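The new "Displaying the next page of results" paragraph above spells out how `IsTruncated` and the `NextRecord*` values feed the `StartRecord*` parameters of the following request. A minimal sketch of that pattern (the hosted zone ID is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	in := &route53.ListResourceRecordSetsInput{
		HostedZoneId: aws.String("Z123456789EXAMPLE"), // placeholder zone ID
	}
	for {
		out, err := svc.ListResourceRecordSets(in)
		if err != nil {
			log.Fatal(err)
		}
		for _, rrset := range out.ResourceRecordSets {
			fmt.Println(aws.StringValue(rrset.Name), aws.StringValue(rrset.Type))
		}
		if !aws.BoolValue(out.IsTruncated) {
			break
		}
		// Feed the Next* values from this page into the Start* parameters of
		// the next request, as the updated documentation describes.
		in.StartRecordName = out.NextRecordName
		in.StartRecordType = out.NextRecordType
		in.StartRecordIdentifier = out.NextRecordIdentifier
	}
}
```

The SDK's generated `ListResourceRecordSetsPages` helper wraps the same loop, if a callback style is preferred.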
+// No health check exists with the specified ID. // // * ErrCodeNoSuchHostedZone "NoSuchHostedZone" // No hosted zone exists with the ID that you specified. @@ -4646,8 +4661,8 @@ func (c *Route53) ListTagsForResourceRequest(input *ListTagsForResourceInput) (r // * ErrCodePriorRequestNotComplete "PriorRequestNotComplete" // If Amazon Route 53 can't process a request before the next request arrives, // it will reject subsequent requests for the same hosted zone and return an -// HTTP 400 error (Bad request). If Amazon Route 53 returns this error repeatedly -// for the same request, we recommend that you wait, in intervals of increasing +// HTTP 400 error (Bad request). If Route 53 returns this error repeatedly for +// the same request, we recommend that you wait, in intervals of increasing // duration, before you try the request again. // // * ErrCodeThrottlingException "ThrottlingException" @@ -4679,8 +4694,8 @@ const opListTagsForResources = "ListTagsForResources" // ListTagsForResourcesRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResources operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4737,8 +4752,7 @@ func (c *Route53) ListTagsForResourcesRequest(input *ListTagsForResourcesInput) // The input is not valid. // // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. +// No health check exists with the specified ID. // // * ErrCodeNoSuchHostedZone "NoSuchHostedZone" // No hosted zone exists with the ID that you specified. @@ -4746,8 +4760,8 @@ func (c *Route53) ListTagsForResourcesRequest(input *ListTagsForResourcesInput) // * ErrCodePriorRequestNotComplete "PriorRequestNotComplete" // If Amazon Route 53 can't process a request before the next request arrives, // it will reject subsequent requests for the same hosted zone and return an -// HTTP 400 error (Bad request). If Amazon Route 53 returns this error repeatedly -// for the same request, we recommend that you wait, in intervals of increasing +// HTTP 400 error (Bad request). If Route 53 returns this error repeatedly for +// the same request, we recommend that you wait, in intervals of increasing // duration, before you try the request again. // // * ErrCodeThrottlingException "ThrottlingException" @@ -4779,8 +4793,8 @@ const opListTrafficPolicies = "ListTrafficPolicies" // ListTrafficPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListTrafficPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4821,7 +4835,7 @@ func (c *Route53) ListTrafficPoliciesRequest(input *ListTrafficPoliciesInput) (r // // Gets information about the latest version for every traffic policy that is // associated with the current AWS account. 
Policies are listed in the order -// in which they were created. +// that they were created in. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4860,8 +4874,8 @@ const opListTrafficPolicyInstances = "ListTrafficPolicyInstances" // ListTrafficPolicyInstancesRequest generates a "aws/request.Request" representing the // client's request for the ListTrafficPolicyInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4908,9 +4922,9 @@ func (c *Route53) ListTrafficPolicyInstancesRequest(input *ListTrafficPolicyInst // in the traffic policy definition. For more information, see the State response // element. // -// Amazon Route 53 returns a maximum of 100 items in each response. If you have -// a lot of traffic policy instances, you can use the MaxItems parameter to -// list them in groups of up to 100. +// Route 53 returns a maximum of 100 items in each response. If you have a lot +// of traffic policy instances, you can use the MaxItems parameter to list them +// in groups of up to 100. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4952,8 +4966,8 @@ const opListTrafficPolicyInstancesByHostedZone = "ListTrafficPolicyInstancesByHo // ListTrafficPolicyInstancesByHostedZoneRequest generates a "aws/request.Request" representing the // client's request for the ListTrafficPolicyInstancesByHostedZone operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5000,9 +5014,9 @@ func (c *Route53) ListTrafficPolicyInstancesByHostedZoneRequest(input *ListTraff // record sets that are specified in the traffic policy definition. For more // information, see the State response element. // -// Amazon Route 53 returns a maximum of 100 items in each response. If you have -// a lot of traffic policy instances, you can use the MaxItems parameter to -// list them in groups of up to 100. +// Route 53 returns a maximum of 100 items in each response. If you have a lot +// of traffic policy instances, you can use the MaxItems parameter to list them +// in groups of up to 100. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5047,8 +5061,8 @@ const opListTrafficPolicyInstancesByPolicy = "ListTrafficPolicyInstancesByPolicy // ListTrafficPolicyInstancesByPolicyRequest generates a "aws/request.Request" representing the // client's request for the ListTrafficPolicyInstancesByPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
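The traffic-policy-instance listing operations documented above all cap responses at 100 items and take a `MaxItems` parameter to page through larger sets. A small sketch of the first request (the `MaxItems` value is arbitrary):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	// Request the first group of up to 100 traffic policy instances.
	out, err := svc.ListTrafficPolicyInstances(&route53.ListTrafficPolicyInstancesInput{
		MaxItems: aws.String("100"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, inst := range out.TrafficPolicyInstances {
		fmt.Println(aws.StringValue(inst.Name), aws.StringValue(inst.State))
	}
	if aws.BoolValue(out.IsTruncated) {
		fmt.Println("more instances exist; repeat the request with the marker values from this response")
	}
}
```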
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5095,9 +5109,9 @@ func (c *Route53) ListTrafficPolicyInstancesByPolicyRequest(input *ListTrafficPo // record sets that are specified in the traffic policy definition. For more // information, see the State response element. // -// Amazon Route 53 returns a maximum of 100 items in each response. If you have -// a lot of traffic policy instances, you can use the MaxItems parameter to -// list them in groups of up to 100. +// Route 53 returns a maximum of 100 items in each response. If you have a lot +// of traffic policy instances, you can use the MaxItems parameter to list them +// in groups of up to 100. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5142,8 +5156,8 @@ const opListTrafficPolicyVersions = "ListTrafficPolicyVersions" // ListTrafficPolicyVersionsRequest generates a "aws/request.Request" representing the // client's request for the ListTrafficPolicyVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5226,8 +5240,8 @@ const opListVPCAssociationAuthorizations = "ListVPCAssociationAuthorizations" // ListVPCAssociationAuthorizationsRequest generates a "aws/request.Request" representing the // client's request for the ListVPCAssociationAuthorizations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5317,8 +5331,8 @@ const opTestDNSAnswer = "TestDNSAnswer" // TestDNSAnswerRequest generates a "aws/request.Request" representing the // client's request for the TestDNSAnswer operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5401,8 +5415,8 @@ const opUpdateHealthCheck = "UpdateHealthCheck" // UpdateHealthCheckRequest generates a "aws/request.Request" representing the // client's request for the UpdateHealthCheck operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5456,8 +5470,7 @@ func (c *Route53) UpdateHealthCheckRequest(input *UpdateHealthCheckInput) (req * // // Returned Error Codes: // * ErrCodeNoSuchHealthCheck "NoSuchHealthCheck" -// No health check exists with the ID that you specified in the DeleteHealthCheck -// request. +// No health check exists with the specified ID. // // * ErrCodeInvalidInput "InvalidInput" // The input is not valid. @@ -5492,8 +5505,8 @@ const opUpdateHostedZoneComment = "UpdateHostedZoneComment" // UpdateHostedZoneCommentRequest generates a "aws/request.Request" representing the // client's request for the UpdateHostedZoneComment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5574,8 +5587,8 @@ const opUpdateTrafficPolicyComment = "UpdateTrafficPolicyComment" // UpdateTrafficPolicyCommentRequest generates a "aws/request.Request" representing the // client's request for the UpdateTrafficPolicyComment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5660,8 +5673,8 @@ const opUpdateTrafficPolicyInstance = "UpdateTrafficPolicyInstance" // UpdateTrafficPolicyInstanceRequest generates a "aws/request.Request" representing the // client's request for the UpdateTrafficPolicyInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5705,19 +5718,19 @@ func (c *Route53) UpdateTrafficPolicyInstanceRequest(input *UpdateTrafficPolicyI // // When you update a traffic policy instance, Amazon Route 53 continues to respond // to DNS queries for the root resource record set name (such as example.com) -// while it replaces one group of resource record sets with another. Amazon -// Route 53 performs the following operations: +// while it replaces one group of resource record sets with another. Route 53 +// performs the following operations: // -// Amazon Route 53 creates a new group of resource record sets based on the -// specified traffic policy. This is true regardless of how significant the -// differences are between the existing resource record sets and the new resource -// record sets. +// Route 53 creates a new group of resource record sets based on the specified +// traffic policy. This is true regardless of how significant the differences +// are between the existing resource record sets and the new resource record +// sets. // -// When all of the new resource record sets have been created, Amazon Route -// 53 starts to respond to DNS queries for the root resource record set name -// (such as example.com) by using the new resource record sets. 
+// When all of the new resource record sets have been created, Route 53 starts +// to respond to DNS queries for the root resource record set name (such as +// example.com) by using the new resource record sets. // -// Amazon Route 53 deletes the old group of resource record sets that are associated +// Route 53 deletes the old group of resource record sets that are associated // with the root resource record set name. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -5740,8 +5753,8 @@ func (c *Route53) UpdateTrafficPolicyInstanceRequest(input *UpdateTrafficPolicyI // * ErrCodePriorRequestNotComplete "PriorRequestNotComplete" // If Amazon Route 53 can't process a request before the next request arrives, // it will reject subsequent requests for the same hosted zone and return an -// HTTP 400 error (Bad request). If Amazon Route 53 returns this error repeatedly -// for the same request, we recommend that you wait, in intervals of increasing +// HTTP 400 error (Bad request). If Route 53 returns this error repeatedly for +// the same request, we recommend that you wait, in intervals of increasing // duration, before you try the request again. // // * ErrCodeConflictingTypes "ConflictingTypes" @@ -5827,20 +5840,29 @@ func (s *AccountLimit) SetValue(v int64) *AccountLimit { } // A complex type that identifies the CloudWatch alarm that you want Amazon -// Route 53 health checkers to use to determine whether this health check is -// healthy. +// Route 53 health checkers to use to determine whether the specified health +// check is healthy. type AlarmIdentifier struct { _ struct{} `type:"structure"` // The name of the CloudWatch alarm that you want Amazon Route 53 health checkers // to use to determine whether this health check is healthy. // + // Route 53 supports CloudWatch alarms with the following features: + // + // Standard-resolution metrics. High-resolution metrics aren't supported. For + // more information, see High-Resolution Metrics (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/publishingMetrics.html#high-resolution-metrics) + // in the Amazon CloudWatch User Guide. + // + // Statistics: Average, Minimum, Maximum, Sum, and SampleCount. Extended statistics + // aren't supported. + // // Name is a required field Name *string `min:"1" type:"string" required:"true"` - // A complex type that identifies the CloudWatch alarm that you want Amazon - // Route 53 health checkers to use to determine whether this health check is - // healthy. + // For the CloudWatch alarm that you want Route 53 health checkers to use to + // determine whether this health check is healthy, the region that the alarm + // was created in. // // For the current list of CloudWatch regions, see Amazon CloudWatch (http://docs.aws.amazon.com/general/latest/gr/rande.html#cw_region) // in the AWS Regions and Endpoints chapter of the Amazon Web Services General @@ -5925,9 +5947,29 @@ type AliasTarget struct { // see Using Alternate Domain Names (CNAMEs) (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html) // in the Amazon CloudFront Developer Guide. // - // Elastic Beanstalk environmentSpecify the CNAME attribute for the environment. - // (The environment must have a regionalized domain name.) You can use the following - // methods to get the value of the CNAME attribute: + // For failover alias records, you can't specify a CloudFront distribution for + // both the primary and secondary records. 
A distribution must include an alternate + // domain name that matches the name of the record. However, the primary and + // secondary records have the same name, and you can't include the same alternate + // domain name in more than one distribution. + // + // Elastic Beanstalk environmentIf the domain name for your Elastic Beanstalk + // environment includes the region that you deployed the environment in, you + // can create an alias record that routes traffic to the environment. For example, + // the domain name my-environment.us-west-2.elasticbeanstalk.com is a regionalized + // domain name. + // + // For environments that were created before early 2016, the domain name doesn't + // include the region. To route traffic to these environments, you must create + // a CNAME record instead of an alias record. Note that you can't create a CNAME + // record for the root domain name. For example, if your domain name is example.com, + // you can create a record that routes traffic for acme.example.com to your + // Elastic Beanstalk environment, but you can't create a record that routes + // traffic for example.com to your Elastic Beanstalk environment. + // + // For Elastic Beanstalk environments that have regionalized subdomains, specify + // the CNAME attribute for the environment. You can use the following methods + // to get the value of the CNAME attribute: // // AWS Management Console: For information about how to get the value by using // the console, see Using Custom Domains with AWS Elastic Beanstalk (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customdomains.html) @@ -5965,7 +6007,7 @@ type AliasTarget struct { // Application and Network Load Balancers: describe-load-balancers (http://docs.aws.amazon.com/cli/latest/reference/elbv2/describe-load-balancers.html) // // Amazon S3 bucket that is configured as a static websiteSpecify the domain - // name of the Amazon S3 website endpoint in which you created the bucket, for + // name of the Amazon S3 website endpoint that you created the bucket in, for // example, s3-website-us-east-2.amazonaws.com. For more information about valid // values, see the table Amazon Simple Storage Service (S3) Website Endpoints // (http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the @@ -5973,8 +6015,14 @@ type AliasTarget struct { // buckets for websites, see Getting Started with Amazon Route 53 (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started.html) // in the Amazon Route 53 Developer Guide. // - // Another Amazon Route 53 resource record setSpecify the value of the Name - // element for a resource record set in the current hosted zone. + // Another Route 53 resource record setSpecify the value of the Name element + // for a resource record set in the current hosted zone. + // + // If you're creating an alias record that has the same name as the hosted zone + // (known as the zone apex), you can't specify the domain name for a record + // for which the value of Type is CNAME. This is because the alias record must + // have the same type as the record that you're routing traffic to, and creating + // a CNAME record for the zone apex isn't supported even for an alias record. 
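The expanded `AliasTarget` docs above explain that an alias at the zone apex can't be a CNAME, so apex aliases use the same type as the target (typically `A`). As an illustrative sketch, upserting such a record via `ChangeResourceRecordSets` might look like the following; the zone IDs and domain names are placeholders, not values from the change above:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	// Create (or update) an alias record at the zone apex. Because a CNAME
	// can't exist at the apex, the record uses type A rather than CNAME.
	_, err := svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("Z123456789EXAMPLE"), // placeholder zone ID
		ChangeBatch: &route53.ChangeBatch{
			Changes: []*route53.Change{{
				Action: aws.String("UPSERT"),
				ResourceRecordSet: &route53.ResourceRecordSet{
					Name: aws.String("example.com."),
					Type: aws.String("A"),
					AliasTarget: &route53.AliasTarget{
						DNSName:              aws.String("s3-website-us-east-2.amazonaws.com."),
						HostedZoneId:         aws.String("ZEXAMPLES3WEBSITE"), // placeholder endpoint zone ID
						EvaluateTargetHealth: aws.Bool(false),
					},
				},
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```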
// // DNSName is a required field DNSName *string `type:"string" required:"true"` @@ -5982,48 +6030,61 @@ type AliasTarget struct { // Applies only to alias, failover alias, geolocation alias, latency alias, // and weighted alias resource record sets: When EvaluateTargetHealth is true, // an alias resource record set inherits the health of the referenced AWS resource, - // such as an ELB load balancer, or the referenced resource record set. + // such as an ELB load balancer or another resource record set in the hosted + // zone. // // Note the following: // - // * You can't set EvaluateTargetHealth to true when the alias target is - // a CloudFront distribution. - // - // * If the AWS resource that you specify in AliasTarget is a resource record - // set or a group of resource record sets (for example, a group of weighted - // resource record sets), but it is not another alias resource record set, - // we recommend that you associate a health check with all of the resource - // record sets in the alias target. For more information, see What Happens - // When You Omit Health Checks? (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html#dns-failover-complex-configs-hc-omitting) - // in the Amazon Route 53 Developer Guide. - // - // * If you specify an Elastic Beanstalk environment in HostedZoneId and - // DNSName, and if the environment contains an ELB load balancer, Elastic - // Load Balancing routes queries only to the healthy Amazon EC2 instances - // that are registered with the load balancer. (An environment automatically - // contains an ELB load balancer if it includes more than one EC2 instance.) - // If you set EvaluateTargetHealth to true and either no EC2 instances are - // healthy or the load balancer itself is unhealthy, Amazon Route 53 routes - // queries to other available resources that are healthy, if any. - // - // If the environment contains a single EC2 instance, there are no special requirements. - // - // * If you specify an ELB load balancer in AliasTarget, ELB routes queries - // only to the healthy EC2 instances that are registered with the load balancer. - // If no EC2 instances are healthy or if the load balancer itself is unhealthy, - // and if EvaluateTargetHealth is true for the corresponding alias resource - // record set, Amazon Route 53 routes queries to other resources. When you - // create a load balancer, you configure settings for ELB health checks; - // they're not Amazon Route 53 health checks, but they perform a similar - // function. Do not create Amazon Route 53 health checks for the EC2 instances - // that you register with an ELB load balancer. - // - // For more information, see How Health Checks Work in More Complex Amazon Route - // 53 Configurations (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html) - // in the Amazon Route 53 Developer Guide. - // - // * We recommend that you set EvaluateTargetHealth to true only when you - // have enough idle capacity to handle the failure of one or more endpoints. + // CloudFront distributionsYou can't set EvaluateTargetHealth to true when the + // alias target is a CloudFront distribution. + // + // Elastic Beanstalk environments that have regionalized subdomainsIf you specify + // an Elastic Beanstalk environment in DNSName and the environment contains + // an ELB load balancer, Elastic Load Balancing routes queries only to the healthy + // Amazon EC2 instances that are registered with the load balancer. 
(An environment + // automatically contains an ELB load balancer if it includes more than one + // Amazon EC2 instance.) If you set EvaluateTargetHealth to true and either + // no Amazon EC2 instances are healthy or the load balancer itself is unhealthy, + // Route 53 routes queries to other available resources that are healthy, if + // any. + // + // If the environment contains a single Amazon EC2 instance, there are no special + // requirements. + // + // ELB load balancersHealth checking behavior depends on the type of load balancer: + // + // Classic Load Balancers: If you specify an ELB Classic Load Balancer in DNSName, + // Elastic Load Balancing routes queries only to the healthy Amazon EC2 instances + // that are registered with the load balancer. If you set EvaluateTargetHealth + // to true and either no EC2 instances are healthy or the load balancer itself + // is unhealthy, Route 53 routes queries to other resources. + // + // Application and Network Load Balancers: If you specify an ELB Application + // or Network Load Balancer and you set EvaluateTargetHealth to true, Route + // 53 routes queries to the load balancer based on the health of the target + // groups that are associated with the load balancer: + // + // For an Application or Network Load Balancer to be considered healthy, every + // target group that contains targets must contain at least one healthy target. + // If any target group contains only unhealthy targets, the load balancer is + // considered unhealthy, and Route 53 routes queries to other resources. + // + // A target group that has no registered targets is considered healthy. + // + // When you create a load balancer, you configure settings for Elastic Load + // Balancing health checks; they're not Route 53 health checks, but they perform + // a similar function. Do not create Route 53 health checks for the EC2 instances + // that you register with an ELB load balancer. + // + // S3 bucketsThere are no special requirements for setting EvaluateTargetHealth + // to true when the alias target is an S3 bucket. + // + // Other records in the same hosted zoneIf the AWS resource that you specify + // in DNSName is a record or a group of records (for example, a group of weighted + // records) but is not another alias record, we recommend that you associate + // a health check with all of the records in the alias target. For more information, + // see What Happens When You Omit Health Checks? (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html#dns-failover-complex-configs-hc-omitting) + // in the Amazon Route 53 Developer Guide. // // For more information and examples, see Amazon Route 53 Health Checks and // DNS Failover (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html) @@ -6039,8 +6100,8 @@ type AliasTarget struct { // // Alias resource record sets for CloudFront can't be created in a private zone. // - // Elastic Beanstalk environmentSpecify the hosted zone ID for the region in - // which you created the environment. The environment must have a regionalized + // Elastic Beanstalk environmentSpecify the hosted zone ID for the region that + // you created the environment in. The environment must have a regionalized // subdomain. 
For a list of regions and the corresponding hosted zone IDs, see // AWS Elastic Beanstalk (http://docs.aws.amazon.com/general/latest/gr/rande.html#elasticbeanstalk_region) // in the "AWS Regions and Endpoints" chapter of the Amazon Web Services General @@ -6083,8 +6144,8 @@ type AliasTarget struct { // table in the "AWS Regions and Endpoints" chapter of the Amazon Web Services // General Reference. // - // Another Amazon Route 53 resource record set in your hosted zoneSpecify the - // hosted zone ID of your hosted zone. (An alias resource record set can't reference + // Another Route 53 resource record set in your hosted zoneSpecify the hosted + // zone ID of your hosted zone. (An alias resource record set can't reference // a resource record set in a different hosted zone.) // // HostedZoneId is a required field @@ -6251,13 +6312,13 @@ type Change struct { // To delete the resource record set that is associated with a traffic policy // instance, use DeleteTrafficPolicyInstance. Amazon Route 53 will delete // the resource record set automatically. If you delete the resource record - // set by using ChangeResourceRecordSets, Amazon Route 53 doesn't automatically + // set by using ChangeResourceRecordSets, Route 53 doesn't automatically // delete the traffic policy instance, and you'll continue to be charged // for it even though it's no longer in use. // - // * UPSERT: If a resource record set doesn't already exist, Amazon Route - // 53 creates it. If a resource record set does exist, Amazon Route 53 updates - // it with the values in the request. + // * UPSERT: If a resource record set doesn't already exist, Route 53 creates + // it. If a resource record set does exist, Route 53 updates it with the + // values in the request. // // Action is a required field Action *string `type:"string" required:"true" enum:"ChangeAction"` @@ -6401,7 +6462,7 @@ type ChangeInfo struct { // at 17:48:16.751 UTC. // // SubmittedAt is a required field - SubmittedAt *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + SubmittedAt *time.Time `type:"timestamp" required:"true"` } // String returns the string representation @@ -6752,24 +6813,24 @@ type CreateHealthCheckInput struct { // * If you send a CreateHealthCheck request with the same CallerReference // and settings as a previous request, and if the health check doesn't exist, // Amazon Route 53 creates the health check. If the health check does exist, - // Amazon Route 53 returns the settings for the existing health check. + // Route 53 returns the settings for the existing health check. // // * If you send a CreateHealthCheck request with the same CallerReference - // as a deleted health check, regardless of the settings, Amazon Route 53 - // returns a HealthCheckAlreadyExists error. + // as a deleted health check, regardless of the settings, Route 53 returns + // a HealthCheckAlreadyExists error. // // * If you send a CreateHealthCheck request with the same CallerReference - // as an existing health check but with different settings, Amazon Route - // 53 returns a HealthCheckAlreadyExists error. + // as an existing health check but with different settings, Route 53 returns + // a HealthCheckAlreadyExists error. // // * If you send a CreateHealthCheck request with a unique CallerReference - // but settings identical to an existing health check, Amazon Route 53 creates - // the health check. + // but settings identical to an existing health check, Route 53 creates the + // health check. 
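The `CallerReference` bullets above describe the idempotency rules for `CreateHealthCheck`: the same reference with identical settings returns the existing health check, while the same reference with different settings returns `HealthCheckAlreadyExists`. A minimal sketch of a request that relies on that behavior (the reference string and endpoint are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	out, err := svc.CreateHealthCheck(&route53.CreateHealthCheckInput{
		// Reusing this exact string with identical settings returns the existing
		// health check; reusing it with different settings fails with
		// HealthCheckAlreadyExists. The value here is a placeholder.
		CallerReference: aws.String("example-hc-2018-06-01"),
		HealthCheckConfig: &route53.HealthCheckConfig{
			Type:                     aws.String("HTTPS"),
			FullyQualifiedDomainName: aws.String("example.com"),
			Port:                     aws.Int64(443),
			ResourcePath:             aws.String("/health"),
			FailureThreshold:         aws.Int64(3),
			RequestInterval:          aws.Int64(30),
		},
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == route53.ErrCodeHealthCheckAlreadyExists {
			log.Fatal("CallerReference was reused with different settings")
		}
		log.Fatal(err)
	}
	fmt.Println("health check ID:", aws.StringValue(out.HealthCheck.Id))
}
```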
// // CallerReference is a required field CallerReference *string `min:"1" type:"string" required:"true"` - // A complex type that contains the response to a CreateHealthCheck request. + // A complex type that contains settings for a new health check. // // HealthCheckConfig is a required field HealthCheckConfig *HealthCheckConfig `type:"structure" required:"true"` @@ -6858,8 +6919,8 @@ func (s *CreateHealthCheckOutput) SetLocation(v string) *CreateHealthCheckOutput return s } -// A complex type that contains information about the request to create a hosted -// zone. +// A complex type that contains information about the request to create a public +// or private hosted zone. type CreateHostedZoneInput struct { _ struct{} `locationName:"CreateHostedZoneRequest" type:"structure" xmlURI:"https://route53.amazonaws.com/doc/2013-04-01/"` @@ -6888,16 +6949,15 @@ type CreateHostedZoneInput struct { // and the other elements. HostedZoneConfig *HostedZoneConfig `type:"structure"` - // The name of the domain. For resource record types that include a domain name, - // specify a fully qualified domain name, for example, www.example.com. The - // trailing dot is optional; Amazon Route 53 assumes that the domain name is - // fully qualified. This means that Amazon Route 53 treats www.example.com (without - // a trailing dot) and www.example.com. (with a trailing dot) as identical. + // The name of the domain. Specify a fully qualified domain name, for example, + // www.example.com. The trailing dot is optional; Amazon Route 53 assumes that + // the domain name is fully qualified. This means that Route 53 treats www.example.com + // (without a trailing dot) and www.example.com. (with a trailing dot) as identical. // // If you're creating a public hosted zone, this is the name you have registered // with your DNS registrar. If your domain name is registered with a registrar - // other than Amazon Route 53, change the name servers for your domain to the - // set of NameServers that CreateHostedZone returns in DelegationSet. + // other than Route 53, change the name servers for your domain to the set of + // NameServers that CreateHostedZone returns in DelegationSet. // // Name is a required field Name *string `type:"string" required:"true"` @@ -7303,15 +7363,15 @@ func (s *CreateTrafficPolicyInput) SetName(v string) *CreateTrafficPolicyInput { type CreateTrafficPolicyInstanceInput struct { _ struct{} `locationName:"CreateTrafficPolicyInstanceRequest" type:"structure" xmlURI:"https://route53.amazonaws.com/doc/2013-04-01/"` - // The ID of the hosted zone in which you want Amazon Route 53 to create resource - // record sets by using the configuration in a traffic policy. + // The ID of the hosted zone that you want Amazon Route 53 to create resource + // record sets in by using the configuration in a traffic policy. // // HostedZoneId is a required field HostedZoneId *string `type:"string" required:"true"` // The domain name (such as example.com) or subdomain name (such as www.example.com) // for which Amazon Route 53 responds to DNS queries by using the resource record - // sets that Amazon Route 53 creates for this traffic policy instance. + // sets that Route 53 creates for this traffic policy instance. // // Name is a required field Name *string `type:"string" required:"true"` @@ -8317,7 +8377,7 @@ func (s *DisassociateVPCFromHostedZoneOutput) SetChangeInfo(v *ChangeInfo) *Disa return s } -// A complex type that contains information about a geo location. 
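The reworded `CreateHostedZone` docs above note that the trailing dot in the domain name is optional and that, for domains registered outside Route 53, the registrar must be pointed at the name servers returned in the delegation set. A rough sketch of creating a public zone (all names and references below are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	out, err := svc.CreateHostedZone(&route53.CreateHostedZoneInput{
		// The trailing dot is optional; "example.com" and "example.com." are
		// treated as the same name.
		Name:            aws.String("example.com."),
		CallerReference: aws.String("example-zone-2018-06-01"),
		HostedZoneConfig: &route53.HostedZoneConfig{
			Comment: aws.String("public zone for example.com"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// If the domain is registered elsewhere, point the registrar at these name servers.
	for _, ns := range out.DelegationSet.NameServers {
		fmt.Println(aws.StringValue(ns))
	}
}
```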
+// A complex type that contains information about a geographic location. type GeoLocation struct { _ struct{} `type:"structure"` @@ -8332,8 +8392,8 @@ type GeoLocation struct { // The two-letter code for the country. CountryCode *string `min:"1" type:"string"` - // The code for the subdivision, for example, a state in the United States or - // a province in Canada. + // The code for the subdivision. Route 53 currently supports only states in + // the United States. SubdivisionCode *string `min:"1" type:"string"` } @@ -8401,12 +8461,12 @@ type GeoLocationDetails struct { // The name of the country. CountryName *string `min:"1" type:"string"` - // The code for the subdivision, for example, a state in the United States or - // a province in Canada. + // The code for the subdivision. Route 53 currently supports only states in + // the United States. SubdivisionCode *string `min:"1" type:"string"` - // The full name of the subdivision, for example, a state in the United States - // or a province in Canada. + // The full name of the subdivision. Route 53 currently supports only states + // in the United States. SubdivisionName *string `min:"1" type:"string"` } @@ -8688,8 +8748,8 @@ type GetGeoLocationInput struct { // Amazon Route 53 uses the one- to three-letter subdivision codes that are // specified in ISO standard 3166-1 alpha-2 (https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2). - // Amazon Route 53 doesn't support subdivision codes for all countries. If you - // specify SubdivisionCode, you must also specify CountryCode. + // Route 53 doesn't support subdivision codes for all countries. If you specify + // subdivisioncode, you must also specify countrycode. SubdivisionCode *string `location:"querystring" locationName:"subdivisioncode" min:"1" type:"string"` } @@ -9776,8 +9836,8 @@ type HealthCheckConfig struct { _ struct{} `type:"structure"` // A complex type that identifies the CloudWatch alarm that you want Amazon - // Route 53 health checkers to use to determine whether this health check is - // healthy. + // Route 53 health checkers to use to determine whether the specified health + // check is healthy. AlarmIdentifier *AlarmIdentifier `type:"structure"` // (CALCULATED Health Checks Only) A complex type that contains one ChildHealthCheck @@ -9785,6 +9845,27 @@ type HealthCheckConfig struct { // health check. ChildHealthChecks []*string `locationNameList:"ChildHealthCheck" type:"list"` + // Stops Route 53 from performing health checks. When you disable a health check, + // here's what happens: + // + // * Health checks that check the health of endpoints: Route 53 stops submitting + // requests to your application, server, or other resource. + // + // * Calculated health checks: Route 53 stops aggregating the status of the + // referenced health checks. + // + // * Health checks that monitor CloudWatch alarms: Route 53 stops monitoring + // the corresponding CloudWatch metrics. + // + // After you disable a health check, Route 53 considers the status of the health + // check to always be healthy. If you configured DNS failover, Route 53 continues + // to route traffic to the corresponding resources. If you want to stop routing + // traffic to a resource, change the value of UpdateHealthCheckRequest$Inverted. + // + // Charges for a health check still apply when the health check is disabled. + // For more information, see Amazon Route 53 Pricing (http://aws.amazon.com/route53/pricing/). 
+ Disabled *bool `type:"boolean"` + // Specify whether you want Amazon Route 53 to send the value of FullyQualifiedDomainName // to the endpoint in the client_hello message during TLS negotiation. This // allows the endpoint to respond to HTTPS health check requests with the applicable @@ -9824,38 +9905,37 @@ type HealthCheckConfig struct { // Amazon Route 53 sends health check requests to the specified IPv4 or IPv6 // address and passes the value of FullyQualifiedDomainName in the Host header // for all health checks except TCP health checks. This is typically the fully - // qualified DNS name of the endpoint on which you want Amazon Route 53 to perform + // qualified DNS name of the endpoint on which you want Route 53 to perform // health checks. // - // When Amazon Route 53 checks the health of an endpoint, here is how it constructs + // When Route 53 checks the health of an endpoint, here is how it constructs // the Host header: // // * If you specify a value of 80 for Port and HTTP or HTTP_STR_MATCH for - // Type, Amazon Route 53 passes the value of FullyQualifiedDomainName to - // the endpoint in the Host header. + // Type, Route 53 passes the value of FullyQualifiedDomainName to the endpoint + // in the Host header. // // * If you specify a value of 443 for Port and HTTPS or HTTPS_STR_MATCH - // for Type, Amazon Route 53 passes the value of FullyQualifiedDomainName - // to the endpoint in the Host header. + // for Type, Route 53 passes the value of FullyQualifiedDomainName to the + // endpoint in the Host header. // // * If you specify another value for Port and any value except TCP for Type, - // Amazon Route 53 passes FullyQualifiedDomainName:Port to the endpoint in - // the Host header. + // Route 53 passes FullyQualifiedDomainName:Port to the endpoint in the Host + // header. // - // If you don't specify a value for FullyQualifiedDomainName, Amazon Route 53 - // substitutes the value of IPAddress in the Host header in each of the preceding - // cases. + // If you don't specify a value for FullyQualifiedDomainName, Route 53 substitutes + // the value of IPAddress in the Host header in each of the preceding cases. // // If you don't specify a value for IPAddress: // - // Amazon Route 53 sends a DNS request to the domain that you specify for FullyQualifiedDomainName + // Route 53 sends a DNS request to the domain that you specify for FullyQualifiedDomainName // at the interval that you specify for RequestInterval. Using an IPv4 address - // that DNS returns, Amazon Route 53 then checks the health of the endpoint. + // that DNS returns, Route 53 then checks the health of the endpoint. // - // If you don't specify a value for IPAddress, Amazon Route 53 uses only IPv4 - // to send health checks to the endpoint. If there's no resource record set - // with a type of A for the name that you specify for FullyQualifiedDomainName, - // the health check fails with a "DNS resolution failed" error. + // If you don't specify a value for IPAddress, Route 53 uses only IPv4 to send + // health checks to the endpoint. If there's no resource record set with a type + // of A for the name that you specify for FullyQualifiedDomainName, the health + // check fails with a "DNS resolution failed" error. // // If you want to check the health of weighted, latency, or failover resource // record sets and you choose to specify the endpoint only by FullyQualifiedDomainName, @@ -9871,9 +9951,9 @@ type HealthCheckConfig struct { // check results will be unpredictable. 
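The passage above covers the new `Disabled` flag (a disabled check is treated as always healthy) and the rules for constructing the `Host` header from `Type`, `Port`, and `FullyQualifiedDomainName`. A small sketch using the generated fluent setters, including the `SetDisabled` setter added by this change; the endpoint values are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	// With port 443 and type HTTPS, Route 53 passes "Host: example.com" to the
	// endpoint. The check starts out disabled, so Route 53 treats it as always
	// healthy until Disabled is set back to false.
	cfg := &route53.HealthCheckConfig{}
	cfg.SetType("HTTPS").
		SetFullyQualifiedDomainName("example.com").
		SetPort(443).
		SetResourcePath("/health").
		SetDisabled(true)

	fmt.Println(cfg)
}
```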
// // In addition, if the value that you specify for Type is HTTP, HTTPS, HTTP_STR_MATCH, - // or HTTPS_STR_MATCH, Amazon Route 53 passes the value of FullyQualifiedDomainName + // or HTTPS_STR_MATCH, Route 53 passes the value of FullyQualifiedDomainName // in the Host header, as it does when you specify a value for IPAddress. If - // the value of Type is TCP, Amazon Route 53 doesn't pass a Host header. + // the value of Type is TCP, Route 53 doesn't pass a Host header. FullyQualifiedDomainName *string `type:"string"` // The number of child health checks that are associated with a CALCULATED health @@ -9885,18 +9965,18 @@ type HealthCheckConfig struct { // Note the following: // // * If you specify a number greater than the number of child health checks, - // Amazon Route 53 always considers this health check to be unhealthy. + // Route 53 always considers this health check to be unhealthy. // - // * If you specify 0, Amazon Route 53 always considers this health check - // to be healthy. + // * If you specify 0, Route 53 always considers this health check to be + // healthy. HealthThreshold *int64 `type:"integer"` // The IPv4 or IPv6 IP address of the endpoint that you want Amazon Route 53 // to perform health checks on. If you don't specify a value for IPAddress, - // Amazon Route 53 sends a DNS request to resolve the domain name that you specify + // Route 53 sends a DNS request to resolve the domain name that you specify // in FullyQualifiedDomainName at the interval that you specify in RequestInterval. - // Using an IP address returned by DNS, Amazon Route 53 then checks the health - // of the endpoint. + // Using an IP address returned by DNS, Route 53 then checks the health of the + // endpoint. // // Use one of the following formats for the value of IPAddress: // @@ -9915,9 +9995,9 @@ type HealthCheckConfig struct { // // For more information, see HealthCheckConfig$FullyQualifiedDomainName. // - // Constraints: Amazon Route 53 can't check the health of endpoints for which - // the IP address is in local, private, non-routable, or multicast ranges. For - // more information about IP addresses for which you can't create health checks, + // Constraints: Route 53 can't check the health of endpoints for which the IP + // address is in local, private, non-routable, or multicast ranges. For more + // information about IP addresses for which you can't create health checks, // see the following documents: // // * RFC 5735, Special Use IPv4 Addresses (https://tools.ietf.org/html/rfc5735) @@ -9932,14 +10012,14 @@ type HealthCheckConfig struct { // When CloudWatch has insufficient data about the metric to determine the alarm // state, the status that you want Amazon Route 53 to assign to the health check: // - // * Healthy: Amazon Route 53 considers the health check to be healthy. + // * Healthy: Route 53 considers the health check to be healthy. // - // * Unhealthy: Amazon Route 53 considers the health check to be unhealthy. + // * Unhealthy: Route 53 considers the health check to be unhealthy. // - // * LastKnownStatus: Amazon Route 53 uses the status of the health check - // from the last time that CloudWatch had sufficient data to determine the - // alarm state. For new health checks that have no last known status, the - // default status for the health check is healthy. + // * LastKnownStatus: Route 53 uses the status of the health check from the + // last time that CloudWatch had sufficient data to determine the alarm state. 
+ // For new health checks that have no last known status, the default status + // for the health check is healthy. InsufficientDataHealthStatus *string `type:"string" enum:"InsufficientDataHealthStatus"` // Specify whether you want Amazon Route 53 to invert the status of a health @@ -9949,7 +10029,7 @@ type HealthCheckConfig struct { // Specify whether you want Amazon Route 53 to measure the latency between health // checkers in multiple AWS regions and your endpoint, and to display CloudWatch - // latency graphs on the Health Checks page in the Amazon Route 53 console. + // latency graphs on the Health Checks page in the Route 53 console. // // You can't change the value of MeasureLatency after you create a health check. MeasureLatency *bool `type:"boolean"` @@ -9961,18 +10041,18 @@ type HealthCheckConfig struct { // A complex type that contains one Region element for each region from which // you want Amazon Route 53 health checkers to check the specified endpoint. // - // If you don't specify any regions, Amazon Route 53 health checkers automatically + // If you don't specify any regions, Route 53 health checkers automatically // performs checks from all of the regions that are listed under Valid Values. // // If you update a health check to remove a region that has been performing - // health checks, Amazon Route 53 will briefly continue to perform checks from - // that region to ensure that some health checkers are always checking the endpoint + // health checks, Route 53 will briefly continue to perform checks from that + // region to ensure that some health checkers are always checking the endpoint // (for example, if you replace three regions with four different regions). Regions []*string `locationNameList:"Region" min:"3" type:"list"` // The number of seconds between the time that Amazon Route 53 gets a response // from your endpoint and the time that it sends the next health check request. - // Each Amazon Route 53 health checker makes requests at this interval. + // Each Route 53 health checker makes requests at this interval. // // You can't change the value of RequestInterval after you create a health check. // @@ -9983,16 +10063,16 @@ type HealthCheckConfig struct { // The path, if any, that you want Amazon Route 53 to request when performing // health checks. The path can be any value for which your endpoint will return // an HTTP status code of 2xx or 3xx when the endpoint is healthy, for example, - // the file /docs/route53-health-check.html. + // the file /docs/route53-health-check.html. You can also include query string + // parameters, for example, /welcome.html?language=jp&login=y. ResourcePath *string `type:"string"` // If the value of Type is HTTP_STR_MATCH or HTTP_STR_MATCH, the string that // you want Amazon Route 53 to search for in the response body from the specified - // resource. If the string appears in the response body, Amazon Route 53 considers + // resource. If the string appears in the response body, Route 53 considers // the resource healthy. // - // Amazon Route 53 considers case when searching for SearchString in the response - // body. + // Route 53 considers case when searching for SearchString in the response body. SearchString *string `type:"string"` // The type of health check that you want to create, which indicates how Amazon @@ -10002,28 +10082,26 @@ type HealthCheckConfig struct { // // You can create the following types of health checks: // - // * HTTP: Amazon Route 53 tries to establish a TCP connection. 
If successful, - // Amazon Route 53 submits an HTTP request and waits for an HTTP status code - // of 200 or greater and less than 400. + // * HTTP: Route 53 tries to establish a TCP connection. If successful, Route + // 53 submits an HTTP request and waits for an HTTP status code of 200 or + // greater and less than 400. // - // * HTTPS: Amazon Route 53 tries to establish a TCP connection. If successful, - // Amazon Route 53 submits an HTTPS request and waits for an HTTP status - // code of 200 or greater and less than 400. + // * HTTPS: Route 53 tries to establish a TCP connection. If successful, + // Route 53 submits an HTTPS request and waits for an HTTP status code of + // 200 or greater and less than 400. // // If you specify HTTPS for the value of Type, the endpoint must support TLS // v1.0 or later. // - // * HTTP_STR_MATCH: Amazon Route 53 tries to establish a TCP connection. - // If successful, Amazon Route 53 submits an HTTP request and searches the - // first 5,120 bytes of the response body for the string that you specify - // in SearchString. + // * HTTP_STR_MATCH: Route 53 tries to establish a TCP connection. If successful, + // Route 53 submits an HTTP request and searches the first 5,120 bytes of + // the response body for the string that you specify in SearchString. // - // * HTTPS_STR_MATCH: Amazon Route 53 tries to establish a TCP connection. - // If successful, Amazon Route 53 submits an HTTPS request and searches the - // first 5,120 bytes of the response body for the string that you specify - // in SearchString. + // * HTTPS_STR_MATCH: Route 53 tries to establish a TCP connection. If successful, + // Route 53 submits an HTTPS request and searches the first 5,120 bytes of + // the response body for the string that you specify in SearchString. // - // * TCP: Amazon Route 53 tries to establish a TCP connection. + // * TCP: Route 53 tries to establish a TCP connection. // // * CLOUDWATCH_METRIC: The health check is associated with a CloudWatch // alarm. If the state of the alarm is OK, the health check is considered @@ -10033,12 +10111,12 @@ type HealthCheckConfig struct { // Healthy, Unhealthy, or LastKnownStatus. // // * CALCULATED: For health checks that monitor the status of other health - // checks, Amazon Route 53 adds up the number of health checks that Amazon - // Route 53 health checkers consider to be healthy and compares that number - // with the value of HealthThreshold. + // checks, Route 53 adds up the number of health checks that Route 53 health + // checkers consider to be healthy and compares that number with the value + // of HealthThreshold. // - // For more information, see How Amazon Route 53 Determines Whether an Endpoint - // Is Healthy (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-determining-health-of-endpoints.html) + // For more information, see How Route 53 Determines Whether an Endpoint Is + // Healthy (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-determining-health-of-endpoints.html) // in the Amazon Route 53 Developer Guide. // // Type is a required field @@ -10097,6 +10175,12 @@ func (s *HealthCheckConfig) SetChildHealthChecks(v []*string) *HealthCheckConfig return s } +// SetDisabled sets the Disabled field's value. +func (s *HealthCheckConfig) SetDisabled(v bool) *HealthCheckConfig { + s.Disabled = &v + return s +} + // SetEnableSNI sets the EnableSNI field's value. 
func (s *HealthCheckConfig) SetEnableSNI(v bool) *HealthCheckConfig { s.EnableSNI = &v @@ -10250,7 +10334,7 @@ type HostedZone struct { // If the hosted zone was created by another service, the service that created // the hosted zone. When a hosted zone is created by another service, you can't - // edit or delete it using Amazon Route 53. + // edit or delete it using Route 53. LinkedService *LinkedService `type:"structure"` // The name of the domain. For public hosted zones, this is the name that you @@ -10438,39 +10522,39 @@ type ListGeoLocationsInput struct { _ struct{} `type:"structure"` // (Optional) The maximum number of geolocations to be included in the response - // body for this request. If more than MaxItems geolocations remain to be listed, + // body for this request. If more than maxitems geolocations remain to be listed, // then the value of the IsTruncated element in the response is true. MaxItems *string `location:"querystring" locationName:"maxitems" type:"string"` // The code for the continent with which you want to start listing locations - // that Amazon Route 53 supports for geolocation. If Amazon Route 53 has already - // returned a page or more of results, if IsTruncated is true, and if NextContinentCode - // from the previous response has a value, enter that value in StartContinentCode + // that Amazon Route 53 supports for geolocation. If Route 53 has already returned + // a page or more of results, if IsTruncated is true, and if NextContinentCode + // from the previous response has a value, enter that value in startcontinentcode // to return the next page of results. // - // Include StartContinentCode only if you want to list continents. Don't include - // StartContinentCode when you're listing countries or countries with their + // Include startcontinentcode only if you want to list continents. Don't include + // startcontinentcode when you're listing countries or countries with their // subdivisions. StartContinentCode *string `location:"querystring" locationName:"startcontinentcode" min:"2" type:"string"` // The code for the country with which you want to start listing locations that - // Amazon Route 53 supports for geolocation. If Amazon Route 53 has already - // returned a page or more of results, if IsTruncated is true, and if NextCountryCode - // from the previous response has a value, enter that value in StartCountryCode + // Amazon Route 53 supports for geolocation. If Route 53 has already returned + // a page or more of results, if IsTruncated is true, and if NextCountryCode + // from the previous response has a value, enter that value in startcountrycode // to return the next page of results. // - // Amazon Route 53 uses the two-letter country codes that are specified in ISO - // standard 3166-1 alpha-2 (https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2). + // Route 53 uses the two-letter country codes that are specified in ISO standard + // 3166-1 alpha-2 (https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2). StartCountryCode *string `location:"querystring" locationName:"startcountrycode" min:"1" type:"string"` // The code for the subdivision (for example, state or province) with which // you want to start listing locations that Amazon Route 53 supports for geolocation. 
- // If Amazon Route 53 has already returned a page or more of results, if IsTruncated + // If Route 53 has already returned a page or more of results, if IsTruncated // is true, and if NextSubdivisionCode from the previous response has a value, - // enter that value in StartSubdivisionCode to return the next page of results. + // enter that value in startsubdivisioncode to return the next page of results. // - // To list subdivisions of a country, you must include both StartCountryCode - // and StartSubdivisionCode. + // To list subdivisions of a country, you must include both startcountrycode + // and startsubdivisioncode. StartSubdivisionCode *string `location:"querystring" locationName:"startsubdivisioncode" min:"1" type:"string"` } @@ -10540,8 +10624,8 @@ type ListGeoLocationsOutput struct { // A value that indicates whether more locations remain to be listed after the // last location in this response. If so, the value of IsTruncated is true. // To get more values, submit another request and include the values of NextContinentCode, - // NextCountryCode, and NextSubdivisionCode in the StartContinentCode, StartCountryCode, - // and StartSubdivisionCode, as applicable. + // NextCountryCode, and NextSubdivisionCode in the startcontinentcode, startcountrycode, + // and startsubdivisioncode, as applicable. // // IsTruncated is a required field IsTruncated *bool `type:"boolean" required:"true"` @@ -10552,17 +10636,17 @@ type ListGeoLocationsOutput struct { MaxItems *string `type:"string" required:"true"` // If IsTruncated is true, you can make a follow-up request to display more - // locations. Enter the value of NextContinentCode in the StartContinentCode + // locations. Enter the value of NextContinentCode in the startcontinentcode // parameter in another ListGeoLocations request. NextContinentCode *string `min:"2" type:"string"` // If IsTruncated is true, you can make a follow-up request to display more - // locations. Enter the value of NextCountryCode in the StartCountryCode parameter + // locations. Enter the value of NextCountryCode in the startcountrycode parameter // in another ListGeoLocations request. NextCountryCode *string `min:"1" type:"string"` // If IsTruncated is true, you can make a follow-up request to display more - // locations. Enter the value of NextSubdivisionCode in the StartSubdivisionCode + // locations. Enter the value of NextSubdivisionCode in the startsubdivisioncode // parameter in another ListGeoLocations request. NextSubdivisionCode *string `min:"1" type:"string"` } @@ -10631,8 +10715,8 @@ type ListHealthChecksInput struct { // The maximum number of health checks that you want ListHealthChecks to return // in response to the current request. Amazon Route 53 returns a maximum of - // 100 items. If you set MaxItems to a value greater than 100, Amazon Route - // 53 returns only the first 100 health checks. + // 100 items. If you set MaxItems to a value greater than 100, Route 53 returns + // only the first 100 health checks. MaxItems *string `location:"querystring" locationName:"maxitems" type:"string"` } @@ -10920,7 +11004,7 @@ type ListHostedZonesInput struct { // (Optional) The maximum number of hosted zones that you want Amazon Route // 53 to return. 
If you have more than maxitems hosted zones, the value of IsTruncated // in the response is true, and the value of NextMarker is the hosted zone ID - // of the first hosted zone that Amazon Route 53 will return if you submit another + // of the first hosted zone that Route 53 will return if you submit another // request. MaxItems *string `location:"querystring" locationName:"maxitems" type:"string"` } @@ -11045,8 +11129,7 @@ type ListQueryLoggingConfigsInput struct { // AWS account has more than MaxResults configurations, use the value of ListQueryLoggingConfigsResponse$NextToken // in the response to get the next page of results. // - // If you don't specify a value for MaxResults, Amazon Route 53 returns up to - // 100 configurations. + // If you don't specify a value for MaxResults, Route 53 returns up to 100 configurations. MaxResults *string `location:"querystring" locationName:"maxresults" type:"string"` // (Optional) If the current AWS account has more than MaxResults query logging @@ -11163,8 +11246,8 @@ type ListResourceRecordSetsInput struct { // Valid values for basic resource record sets: A | AAAA | CAA | CNAME | MX // | NAPTR | NS | PTR | SOA | SPF | SRV | TXT // - // Values for weighted, latency, geo, and failover resource record sets: A | - // AAAA | CAA | CNAME | MX | NAPTR | PTR | SPF | SRV | TXT + // Values for weighted, latency, geolocation, and failover resource record sets: + // A | AAAA | CAA | CNAME | MX | NAPTR | PTR | SPF | SRV | TXT // // Values for alias resource record sets: // @@ -11256,9 +11339,12 @@ type ListResourceRecordSetsOutput struct { // MaxItems is a required field MaxItems *string `type:"string" required:"true"` - // Weighted, latency, geolocation, and failover resource record sets only: If - // results were truncated for a given DNS name and type, the value of SetIdentifier + // Resource record sets that have a routing policy other than simple: If results + // were truncated for a given DNS name and type, the value of SetIdentifier // for the next resource record set that has the current DNS name and type. + // + // For information about routing policies, see Choosing a Routing Policy (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html) + // in the Amazon Route 53 Developer Guide. NextRecordIdentifier *string `min:"1" type:"string"` // If the results were truncated, the name of the next record in the list. @@ -11342,7 +11428,7 @@ type ListReusableDelegationSetsInput struct { // The number of reusable delegation sets that you want Amazon Route 53 to return // in the response to this request. If you specify a value greater than 100, - // Amazon Route 53 returns only the first 100 reusable delegation sets. + // Route 53 returns only the first 100 reusable delegation sets. MaxItems *string `location:"querystring" locationName:"maxitems" type:"string"` } @@ -11626,7 +11712,7 @@ type ListTrafficPoliciesInput struct { // 53 to return in response to this request. If you have more than MaxItems // traffic policies, the value of IsTruncated in the response is true, and the // value of TrafficPolicyIdMarker is the ID of the first traffic policy that - // Amazon Route 53 will return if you submit another request. + // Route 53 will return if you submit another request. 
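// Illustrative sketch (not part of the vendored diff): the MaxItems / IsTruncated /
// NextMarker pagination pattern described in the surrounding comments, shown here
// for ListHostedZones. The page size of "100" is a placeholder; assumes the usual
// aws and route53 imports from github.com/aws/aws-sdk-go.
func exampleListAllHostedZones(svc *route53.Route53) ([]*route53.HostedZone, error) {
	var zones []*route53.HostedZone
	in := &route53.ListHostedZonesInput{MaxItems: aws.String("100")}
	for {
		out, err := svc.ListHostedZones(in)
		if err != nil {
			return nil, err
		}
		zones = append(zones, out.HostedZones...)
		// While IsTruncated is true, pass NextMarker back as Marker to get the next page.
		if !aws.BoolValue(out.IsTruncated) {
			return zones, nil
		}
		in.Marker = out.NextMarker
	}
}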
MaxItems *string `location:"querystring" locationName:"maxitems" type:"string"` // (Conditional) For your first request to ListTrafficPolicies, don't include @@ -12207,8 +12293,8 @@ type ListTrafficPolicyInstancesOutput struct { _ struct{} `type:"structure"` // If IsTruncated is true, HostedZoneIdMarker is the ID of the hosted zone of - // the first traffic policy instance that Amazon Route 53 will return if you - // submit another ListTrafficPolicyInstances request. + // the first traffic policy instance that Route 53 will return if you submit + // another ListTrafficPolicyInstances request. HostedZoneIdMarker *string `type:"string"` // A flag that indicates whether there are more traffic policy instances to @@ -12227,8 +12313,8 @@ type ListTrafficPolicyInstancesOutput struct { MaxItems *string `type:"string" required:"true"` // If IsTruncated is true, TrafficPolicyInstanceNameMarker is the name of the - // first traffic policy instance that Amazon Route 53 will return if you submit - // another ListTrafficPolicyInstances request. + // first traffic policy instance that Route 53 will return if you submit another + // ListTrafficPolicyInstances request. TrafficPolicyInstanceNameMarker *string `type:"string"` // If IsTruncated is true, TrafficPolicyInstanceTypeMarker is the DNS type of @@ -12305,8 +12391,7 @@ type ListTrafficPolicyVersionsInput struct { // 53 to include in the response body for this request. If the specified traffic // policy has more than MaxItems versions, the value of IsTruncated in the response // is true, and the value of the TrafficPolicyVersionMarker element is the ID - // of the first version that Amazon Route 53 will return if you submit another - // request. + // of the first version that Route 53 will return if you submit another request. MaxItems *string `location:"querystring" locationName:"maxitems" type:"string"` // For your first request to ListTrafficPolicyVersions, don't include the TrafficPolicyVersionMarker @@ -12445,8 +12530,8 @@ type ListVPCAssociationAuthorizationsInput struct { HostedZoneId *string `location:"uri" locationName:"Id" type:"string" required:"true"` // Optional: An integer that specifies the maximum number of VPCs that you want - // Amazon Route 53 to return. If you don't specify a value for MaxResults, Amazon - // Route 53 returns up to 50 VPCs per page. + // Amazon Route 53 to return. If you don't specify a value for MaxResults, Route + // 53 returns up to 50 VPCs per page. MaxResults *string `location:"querystring" locationName:"maxresults" type:"string"` // Optional: If a response includes a NextToken element, there are more VPCs @@ -12682,23 +12767,22 @@ type ResourceRecordSet struct { // Except where noted, the following failover behaviors assume that you have // included the HealthCheckId element in both resource record sets: // - // * When the primary resource record set is healthy, Amazon Route 53 responds - // to DNS queries with the applicable value from the primary resource record + // * When the primary resource record set is healthy, Route 53 responds to + // DNS queries with the applicable value from the primary resource record // set regardless of the health of the secondary resource record set. // // * When the primary resource record set is unhealthy and the secondary - // resource record set is healthy, Amazon Route 53 responds to DNS queries - // with the applicable value from the secondary resource record set. 
+ // resource record set is healthy, Route 53 responds to DNS queries with + // the applicable value from the secondary resource record set. // - // * When the secondary resource record set is unhealthy, Amazon Route 53 - // responds to DNS queries with the applicable value from the primary resource - // record set regardless of the health of the primary resource record set. + // * When the secondary resource record set is unhealthy, Route 53 responds + // to DNS queries with the applicable value from the primary resource record + // set regardless of the health of the primary resource record set. // // * If you omit the HealthCheckId element for the secondary resource record - // set, and if the primary resource record set is unhealthy, Amazon Route - // 53 always responds to DNS queries with the applicable value from the secondary - // resource record set. This is true regardless of the health of the associated - // endpoint. + // set, and if the primary resource record set is unhealthy, Route 53 always + // responds to DNS queries with the applicable value from the secondary resource + // record set. This is true regardless of the health of the associated endpoint. // // You can't create non-failover resource record sets that have the same values // for the Name and Type elements as failover resource record sets. @@ -12706,15 +12790,15 @@ type ResourceRecordSet struct { // For failover alias resource record sets, you must also include the EvaluateTargetHealth // element and set the value to true. // - // For more information about configuring failover for Amazon Route 53, see - // the following topics in the Amazon Route 53 Developer Guide: + // For more information about configuring failover for Route 53, see the following + // topics in the Amazon Route 53 Developer Guide: // - // * Amazon Route 53 Health Checks and DNS Failover (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html) + // * Route 53 Health Checks and DNS Failover (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html) // // * Configuring Failover in a Private Hosted Zone (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-private-hosted-zones.html) Failover *string `type:"string" enum:"ResourceRecordSetFailover"` - // Geo location resource record sets only: A complex type that lets you control + // Geolocation resource record sets only: A complex type that lets you control // how Amazon Route 53 responds to DNS queries based on the geographic origin // of the query. For example, if you want all queries from Africa to be routed // to a web server with an IP address of 192.0.2.111, create a resource record @@ -12738,24 +12822,24 @@ type ResourceRecordSet struct { // // Geolocation works by mapping IP addresses to locations. However, some IP // addresses aren't mapped to geographic locations, so even if you create geolocation - // resource record sets that cover all seven continents, Amazon Route 53 will - // receive some DNS queries from locations that it can't identify. We recommend - // that you create a resource record set for which the value of CountryCode - // is *, which handles both queries that come from locations for which you haven't + // resource record sets that cover all seven continents, Route 53 will receive + // some DNS queries from locations that it can't identify. 
We recommend that + // you create a resource record set for which the value of CountryCode is *, + // which handles both queries that come from locations for which you haven't // created geolocation resource record sets and queries from IP addresses that // aren't mapped to a location. If you don't create a * resource record set, - // Amazon Route 53 returns a "no answer" response for queries from those locations. + // Route 53 returns a "no answer" response for queries from those locations. // // You can't create non-geolocation resource record sets that have the same // values for the Name and Type elements as geolocation resource record sets. GeoLocation *GeoLocation `type:"structure"` // If you want Amazon Route 53 to return this resource record set in response - // to a DNS query only when a health check is passing, include the HealthCheckId - // element and specify the ID of the applicable health check. + // to a DNS query only when the status of a health check is healthy, include + // the HealthCheckId element and specify the ID of the applicable health check. // - // Amazon Route 53 determines whether a resource record set is healthy based - // on one of the following: + // Route 53 determines whether a resource record set is healthy based on one + // of the following: // // * By periodically sending a request to the endpoint that is specified // in the health check @@ -12766,61 +12850,102 @@ type ResourceRecordSet struct { // * By determining the current state of a CloudWatch alarm (CloudWatch metric // health checks) // - // For more information, see How Amazon Route 53 Determines Whether an Endpoint - // Is Healthy (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-determining-health-of-endpoints.html). - // - // The HealthCheckId element is only useful when Amazon Route 53 is choosing - // between two or more resource record sets to respond to a DNS query, and you - // want Amazon Route 53 to base the choice in part on the status of a health - // check. Configuring health checks only makes sense in the following configurations: - // - // * You're checking the health of the resource record sets in a group of - // weighted, latency, geolocation, or failover resource record sets, and - // you specify health check IDs for all of the resource record sets. If the - // health check for one resource record set specifies an endpoint that is - // not healthy, Amazon Route 53 stops responding to queries using the value - // for that resource record set. - // - // * You set EvaluateTargetHealth to true for the resource record sets in - // a group of alias, weighted alias, latency alias, geolocation alias, or - // failover alias resource record sets, and you specify health check IDs - // for all of the resource record sets that are referenced by the alias resource - // record sets. - // - // Amazon Route 53 doesn't check the health of the endpoint specified in the + // Route 53 doesn't check the health of the endpoint that is specified in the // resource record set, for example, the endpoint specified by the IP address // in the Value element. When you add a HealthCheckId element to a resource - // record set, Amazon Route 53 checks the health of the endpoint that you specified + // record set, Route 53 checks the health of the endpoint that you specified // in the health check. 
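// Illustrative sketch (not part of the vendored diff): a failover PRIMARY record
// that references a health check, as described in the HealthCheckId documentation
// above. The hosted zone ID, record name, IP address, and health check ID are
// placeholders; assumes the usual aws and route53 imports from github.com/aws/aws-sdk-go.
func exampleUpsertFailoverPrimary(svc *route53.Route53) error {
	_, err := svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("Z123EXAMPLE"), // placeholder hosted zone ID
		ChangeBatch: &route53.ChangeBatch{
			Changes: []*route53.Change{{
				Action: aws.String("UPSERT"),
				ResourceRecordSet: &route53.ResourceRecordSet{
					Name:          aws.String("www.example.com"),
					Type:          aws.String("A"),
					TTL:           aws.Int64(60),
					SetIdentifier: aws.String("www-primary"),
					Failover:      aws.String("PRIMARY"),
					HealthCheckId: aws.String("11111111-2222-3333-4444-555555555555"), // placeholder
					ResourceRecords: []*route53.ResourceRecord{
						{Value: aws.String("192.0.2.44")},
					},
				},
			}},
		},
	})
	return err
}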
// - // For geolocation resource record sets, if an endpoint is unhealthy, Amazon - // Route 53 looks for a resource record set for the larger, associated geographic + // For more information, see the following topics in the Amazon Route 53 Developer + // Guide: + // + // * How Amazon Route 53 Determines Whether an Endpoint Is Healthy (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-determining-health-of-endpoints.html) + // + // * Route 53 Health Checks and DNS Failover (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html) + // + // * Configuring Failover in a Private Hosted Zone (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-private-hosted-zones.html) + // + // When to Specify HealthCheckId + // + // Specifying a value for HealthCheckId is useful only when Route 53 is choosing + // between two or more resource record sets to respond to a DNS query, and you + // want Route 53 to base the choice in part on the status of a health check. + // Configuring health checks makes sense only in the following configurations: + // + // * Non-alias resource record sets: You're checking the health of a group + // of non-alias resource record sets that have the same routing policy, name, + // and type (such as multiple weighted records named www.example.com with + // a type of A) and you specify health check IDs for all the resource record + // sets. + // + // If the health check status for a resource record set is healthy, Route 53 + // includes the record among the records that it responds to DNS queries + // with. + // + // If the health check status for a resource record set is unhealthy, Route + // 53 stops responding to DNS queries using the value for that resource record + // set. + // + // If the health check status for all resource record sets in the group is unhealthy, + // Route 53 considers all resource record sets in the group healthy and responds + // to DNS queries accordingly. + // + // * Alias resource record sets: You specify the following settings: + // + // You set EvaluateTargetHealth to true for an alias resource record set in + // a group of resource record sets that have the same routing policy, name, + // and type (such as multiple weighted records named www.example.com with + // a type of A). + // + // You configure the alias resource record set to route traffic to a non-alias + // resource record set in the same hosted zone. + // + // You specify a health check ID for the non-alias resource record set. + // + // If the health check status is healthy, Route 53 considers the alias resource + // record set to be healthy and includes the alias record among the records + // that it responds to DNS queries with. + // + // If the health check status is unhealthy, Route 53 stops responding to DNS + // queries using the alias resource record set. + // + // The alias resource record set can also route traffic to a group of non-alias + // resource record sets that have the same routing policy, name, and type. + // In that configuration, associate health checks with all of the resource + // record sets in the group of non-alias resource record sets. + // + // Geolocation Routing + // + // For geolocation resource record sets, if an endpoint is unhealthy, Route + // 53 looks for a resource record set for the larger, associated geographic // region. 
For example, suppose you have resource record sets for a state in - // the United States, for the United States, for North America, and for all + // the United States, for the entire United States, for North America, and a + // resource record set that has * for CountryCode is *, which applies to all // locations. If the endpoint for the state resource record set is unhealthy, - // Amazon Route 53 checks the resource record sets for the United States, for - // North America, and for all locations (a resource record set for which the - // value of CountryCode is *), in that order, until it finds a resource record - // set for which the endpoint is healthy. + // Route 53 checks for healthy resource record sets in the following order until + // it finds a resource record set for which the endpoint is healthy: + // + // * The United States + // + // * North America + // + // * The default resource record set + // + // Specifying the Health Check Endpoint by Domain Name // // If your health checks specify the endpoint only by domain name, we recommend // that you create a separate health check for each endpoint. For example, create // a health check for each HTTP server that is serving content for www.example.com. // For the value of FullyQualifiedDomainName, specify the domain name of the // server (such as us-east-2-www.example.com), not the name of the resource - // record sets (example.com). + // record sets (www.example.com). // - // n this configuration, if you create a health check for which the value of - // FullyQualifiedDomainName matches the name of the resource record sets and - // then associate the health check with those resource record sets, health check - // results will be unpredictable. - // - // For more information, see the following topics in the Amazon Route 53 Developer - // Guide: + // Health check results will be unpredictable if you do the following: // - // * Amazon Route 53 Health Checks and DNS Failover (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html) + // Create a health check that has the same value for FullyQualifiedDomainName + // as the name of a resource record set. // - // * Configuring Failover in a Private Hosted Zone (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-private-hosted-zones.html) + // Associate that health check with the resource record set. HealthCheckId *string `type:"string"` // Multivalue answer resource record sets only: To route traffic approximately @@ -12833,17 +12958,17 @@ type ResourceRecordSet struct { // address only when the health check is healthy. // // * If you don't associate a health check with a multivalue answer record, - // Amazon Route 53 always considers the record to be healthy. + // Route 53 always considers the record to be healthy. // - // * Amazon Route 53 responds to DNS queries with up to eight healthy records; - // if you have eight or fewer healthy records, Amazon Route 53 responds to - // all DNS queries with all the healthy records. + // * Route 53 responds to DNS queries with up to eight healthy records; if + // you have eight or fewer healthy records, Route 53 responds to all DNS + // queries with all the healthy records. // - // * If you have more than eight healthy records, Amazon Route 53 responds - // to different DNS resolvers with different combinations of healthy records. + // * If you have more than eight healthy records, Route 53 responds to different + // DNS resolvers with different combinations of healthy records. 
// - // * When all records are unhealthy, Amazon Route 53 responds to DNS queries - // with up to eight unhealthy records. + // * When all records are unhealthy, Route 53 responds to DNS queries with + // up to eight unhealthy records. // // * If a resource becomes unavailable after a resolver caches a response, // client software typically tries another of the IP addresses in the response. @@ -12851,13 +12976,17 @@ type ResourceRecordSet struct { // You can't create multivalue answer alias records. MultiValueAnswer *bool `type:"boolean"` - // The name of the domain you want to perform the action on. + // For ChangeResourceRecordSets requests, the name of the record that you want + // to create, update, or delete. For ListResourceRecordSets responses, the name + // of a record in the specified hosted zone. + // + // ChangeResourceRecordSets Only // // Enter a fully qualified domain name, for example, www.example.com. You can // optionally include a trailing dot. If you omit the trailing dot, Amazon Route - // 53 still assumes that the domain name that you specify is fully qualified. - // This means that Amazon Route 53 treats www.example.com (without a trailing - // dot) and www.example.com. (with a trailing dot) as identical. + // 53 assumes that the domain name that you specify is fully qualified. This + // means that Route 53 treats www.example.com (without a trailing dot) and www.example.com. + // (with a trailing dot) as identical. // // For information about how to specify characters other than a-z, 0-9, and // - (hyphen) and how to specify internationalized domain names, see DNS Domain @@ -12896,10 +13025,10 @@ type ResourceRecordSet struct { // zones is not supported. // // When Amazon Route 53 receives a DNS query for a domain name and type for - // which you have created latency resource record sets, Amazon Route 53 selects - // the latency resource record set that has the lowest latency between the end - // user and the associated Amazon EC2 Region. Amazon Route 53 then returns the - // value that is associated with the selected resource record set. + // which you have created latency resource record sets, Route 53 selects the + // latency resource record set that has the lowest latency between the end user + // and the associated Amazon EC2 Region. Route 53 then returns the value that + // is associated with the selected resource record set. // // Note the following: // @@ -12910,8 +13039,8 @@ type ResourceRecordSet struct { // EC2 Region. // // * You aren't required to create latency resource record sets for all Amazon - // EC2 Regions. Amazon Route 53 will choose the region with the best latency - // from among the regions that you create latency resource record sets for. + // EC2 Regions. Route 53 will choose the region with the best latency from + // among the regions that you create latency resource record sets for. // // * You can't create non-latency resource record sets that have the same // values for the Name and Type elements as latency resource record sets. @@ -12922,11 +13051,15 @@ type ResourceRecordSet struct { // If you're creating an alias resource record set, omit ResourceRecords. ResourceRecords []*ResourceRecord `locationNameList:"ResourceRecord" min:"1" type:"list"` - // Weighted, Latency, Geo, and Failover resource record sets only: An identifier + // Resource record sets that have a routing policy other than simple: An identifier // that differentiates among multiple resource record sets that have the same - // combination of DNS name and type. 
The value of SetIdentifier must be unique - // for each resource record set that has the same combination of DNS name and - // type. Omit SetIdentifier for any other types of record sets. + // combination of name and type, such as multiple weighted resource record sets + // named acme.example.com that have a type of A. In a group of resource record + // sets that have the same name and type, the value of SetIdentifier must be + // unique for each resource record set. + // + // For information about routing policies, see Choosing a Routing Policy (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html) + // in the Amazon Route 53 Developer Guide. SetIdentifier *string `min:"1" type:"string"` // The resource record cache time to live (TTL), in seconds. Note the following: @@ -12952,14 +13085,14 @@ type ResourceRecordSet struct { // When you create a traffic policy instance, Amazon Route 53 automatically // creates a resource record set. TrafficPolicyInstanceId is the ID of the traffic - // policy instance that Amazon Route 53 created this resource record set for. + // policy instance that Route 53 created this resource record set for. // // To delete the resource record set that is associated with a traffic policy - // instance, use DeleteTrafficPolicyInstance. Amazon Route 53 will delete the - // resource record set automatically. If you delete the resource record set - // by using ChangeResourceRecordSets, Amazon Route 53 doesn't automatically - // delete the traffic policy instance, and you'll continue to be charged for - // it even though it's no longer in use. + // instance, use DeleteTrafficPolicyInstance. Route 53 will delete the resource + // record set automatically. If you delete the resource record set by using + // ChangeResourceRecordSets, Route 53 doesn't automatically delete the traffic + // policy instance, and you'll continue to be charged for it even though it's + // no longer in use. TrafficPolicyInstanceId *string `min:"1" type:"string"` // The DNS record type. For information about different record types and how @@ -13005,16 +13138,22 @@ type ResourceRecordSet struct { // the resource record set that you're creating the alias for. All values // are supported except NS and SOA. // + // If you're creating an alias record that has the same name as the hosted zone + // (known as the zone apex), you can't route traffic to a record for which + // the value of Type is CNAME. This is because the alias record must have + // the same type as the record you're routing traffic to, and creating a + // CNAME record for the zone apex isn't supported even for an alias record. + // // Type is a required field Type *string `type:"string" required:"true" enum:"RRType"` // Weighted resource record sets only: Among resource record sets that have // the same combination of DNS name and type, a value that determines the proportion // of DNS queries that Amazon Route 53 responds to using the current resource - // record set. Amazon Route 53 calculates the sum of the weights for the resource - // record sets that have the same combination of DNS name and type. Amazon Route - // 53 then responds to queries based on the ratio of a resource's weight to - // the total. Note the following: + // record set. Route 53 calculates the sum of the weights for the resource record + // sets that have the same combination of DNS name and type. Route 53 then responds + // to queries based on the ratio of a resource's weight to the total. 
Note the + // following: // // * You must specify a value for the Weight element for every weighted resource // record set. @@ -13030,16 +13169,14 @@ type ResourceRecordSet struct { // the same values for the Name and Type elements. // // * For weighted (but not weighted alias) resource record sets, if you set - // Weight to 0 for a resource record set, Amazon Route 53 never responds - // to queries with the applicable value for that resource record set. However, - // if you set Weight to 0 for all resource record sets that have the same - // combination of DNS name and type, traffic is routed to all resources with - // equal probability. + // Weight to 0 for a resource record set, Route 53 never responds to queries + // with the applicable value for that resource record set. However, if you + // set Weight to 0 for all resource record sets that have the same combination + // of DNS name and type, traffic is routed to all resources with equal probability. // // The effect of setting Weight to 0 is different when you associate health // checks with weighted resource record sets. For more information, see Options - // for Configuring Amazon Route 53 Active-Active and Active-Passive Failover - // (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring-options.html) + // for Configuring Route 53 Active-Active and Active-Passive Failover (http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring-options.html) // in the Amazon Route 53 Developer Guide. Weight *int64 `type:"long"` } @@ -13275,7 +13412,7 @@ type StatusReport struct { // 8601 format (https://en.wikipedia.org/wiki/ISO_8601) and Coordinated Universal // Time (UTC). For example, the value 2017-03-27T17:48:16.751Z represents March // 27, 2017 at 17:48:16.751 UTC. - CheckedTime *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CheckedTime *time.Time `type:"timestamp"` // A description of the status of the health check endpoint as reported by one // of the Amazon Route 53 health checkers. @@ -13372,6 +13509,13 @@ type TestDNSAnswerInput struct { // and 24 for edns0clientsubnetmask, the checking tool will simulate a request // from 192.0.2.0/24. The default value is 24 bits for IPv4 addresses and 64 // bits for IPv6 addresses. + // + // The range of valid values depends on whether edns0clientsubnetip is an IPv4 + // or an IPv6 address: + // + // * IPv4: Specify a value between 0 and 32 + // + // * IPv6: Specify a value between 0 and 128 EDNS0ClientSubnetMask *string `location:"querystring" locationName:"edns0clientsubnetmask" type:"string"` // The ID of the hosted zone that you want Amazon Route 53 to simulate a query @@ -13665,14 +13809,14 @@ type TrafficPolicyInstance struct { // The value of State is one of the following values: // // AppliedAmazon Route 53 has finished creating resource record sets, and changes - // have propagated to all Amazon Route 53 edge locations. + // have propagated to all Route 53 edge locations. // - // CreatingAmazon Route 53 is creating the resource record sets. Use GetTrafficPolicyInstance + // CreatingRoute 53 is creating the resource record sets. Use GetTrafficPolicyInstance // to confirm that the CreateTrafficPolicyInstance request completed successfully. // - // FailedAmazon Route 53 wasn't able to create or update the resource record - // sets. When the value of State is Failed, see Message for an explanation of - // what caused the request to fail. + // FailedRoute 53 wasn't able to create or update the resource record sets. 
+ // When the value of State is Failed, see Message for an explanation of what + // caused the request to fail. // // State is a required field State *string `type:"string" required:"true"` @@ -13845,14 +13989,35 @@ type UpdateHealthCheckInput struct { _ struct{} `locationName:"UpdateHealthCheckRequest" type:"structure" xmlURI:"https://route53.amazonaws.com/doc/2013-04-01/"` // A complex type that identifies the CloudWatch alarm that you want Amazon - // Route 53 health checkers to use to determine whether this health check is - // healthy. + // Route 53 health checkers to use to determine whether the specified health + // check is healthy. AlarmIdentifier *AlarmIdentifier `type:"structure"` // A complex type that contains one ChildHealthCheck element for each health // check that you want to associate with a CALCULATED health check. ChildHealthChecks []*string `locationNameList:"ChildHealthCheck" type:"list"` + // Stops Route 53 from performing health checks. When you disable a health check, + // here's what happens: + // + // * Health checks that check the health of endpoints: Route 53 stops submitting + // requests to your application, server, or other resource. + // + // * Calculated health checks: Route 53 stops aggregating the status of the + // referenced health checks. + // + // * Health checks that monitor CloudWatch alarms: Route 53 stops monitoring + // the corresponding CloudWatch metrics. + // + // After you disable a health check, Route 53 considers the status of the health + // check to always be healthy. If you configured DNS failover, Route 53 continues + // to route traffic to the corresponding resources. If you want to stop routing + // traffic to a resource, change the value of UpdateHealthCheckRequest$Inverted. + // + // Charges for a health check still apply when the health check is disabled. + // For more information, see Amazon Route 53 Pricing (http://aws.amazon.com/route53/pricing/). + Disabled *bool `type:"boolean"` + // Specify whether you want Amazon Route 53 to send the value of FullyQualifiedDomainName // to the endpoint in the client_hello message during TLS negotiation. This // allows the endpoint to respond to HTTPS health check requests with the applicable @@ -13893,42 +14058,40 @@ type UpdateHealthCheckInput struct { // // If you specify a value forIPAddress: // - // Amazon Route 53 sends health check requests to the specified IPv4 or IPv6 - // address and passes the value of FullyQualifiedDomainName in the Host header - // for all health checks except TCP health checks. This is typically the fully - // qualified DNS name of the endpoint on which you want Amazon Route 53 to perform - // health checks. + // Route 53 sends health check requests to the specified IPv4 or IPv6 address + // and passes the value of FullyQualifiedDomainName in the Host header for all + // health checks except TCP health checks. This is typically the fully qualified + // DNS name of the endpoint on which you want Route 53 to perform health checks. // - // When Amazon Route 53 checks the health of an endpoint, here is how it constructs + // When Route 53 checks the health of an endpoint, here is how it constructs // the Host header: // // * If you specify a value of 80 for Port and HTTP or HTTP_STR_MATCH for - // Type, Amazon Route 53 passes the value of FullyQualifiedDomainName to - // the endpoint in the Host header. + // Type, Route 53 passes the value of FullyQualifiedDomainName to the endpoint + // in the Host header. 
// // * If you specify a value of 443 for Port and HTTPS or HTTPS_STR_MATCH - // for Type, Amazon Route 53 passes the value of FullyQualifiedDomainName - // to the endpoint in the Host header. + // for Type, Route 53 passes the value of FullyQualifiedDomainName to the + // endpoint in the Host header. // // * If you specify another value for Port and any value except TCP for Type, - // Amazon Route 53 passes FullyQualifiedDomainName:Port to the endpoint in - // the Host header. + // Route 53 passes FullyQualifiedDomainName:Port to the endpoint in the Host + // header. // - // If you don't specify a value for FullyQualifiedDomainName, Amazon Route 53 - // substitutes the value of IPAddress in the Host header in each of the above - // cases. + // If you don't specify a value for FullyQualifiedDomainName, Route 53 substitutes + // the value of IPAddress in the Host header in each of the above cases. // // If you don't specify a value forIPAddress: // - // If you don't specify a value for IPAddress, Amazon Route 53 sends a DNS request + // If you don't specify a value for IPAddress, Route 53 sends a DNS request // to the domain that you specify in FullyQualifiedDomainName at the interval // you specify in RequestInterval. Using an IPv4 address that is returned by - // DNS, Amazon Route 53 then checks the health of the endpoint. + // DNS, Route 53 then checks the health of the endpoint. // - // If you don't specify a value for IPAddress, Amazon Route 53 uses only IPv4 - // to send health checks to the endpoint. If there's no resource record set - // with a type of A for the name that you specify for FullyQualifiedDomainName, - // the health check fails with a "DNS resolution failed" error. + // If you don't specify a value for IPAddress, Route 53 uses only IPv4 to send + // health checks to the endpoint. If there's no resource record set with a type + // of A for the name that you specify for FullyQualifiedDomainName, the health + // check fails with a "DNS resolution failed" error. // // If you want to check the health of weighted, latency, or failover resource // record sets and you choose to specify the endpoint only by FullyQualifiedDomainName, @@ -13943,9 +14106,9 @@ type UpdateHealthCheckInput struct { // with those resource record sets, health check results will be unpredictable. // // In addition, if the value of Type is HTTP, HTTPS, HTTP_STR_MATCH, or HTTPS_STR_MATCH, - // Amazon Route 53 passes the value of FullyQualifiedDomainName in the Host - // header, as it does when you specify a value for IPAddress. If the value of - // Type is TCP, Amazon Route 53 doesn't pass a Host header. + // Route 53 passes the value of FullyQualifiedDomainName in the Host header, + // as it does when you specify a value for IPAddress. If the value of Type is + // TCP, Route 53 doesn't pass a Host header. FullyQualifiedDomainName *string `type:"string"` // The ID for the health check for which you want detailed information. When @@ -13961,15 +14124,14 @@ type UpdateHealthCheckInput struct { // We recommend that you use GetHealthCheck or ListHealthChecks to get the current // value of HealthCheckVersion for the health check that you want to update, // and that you include that value in your UpdateHealthCheck request. 
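// Illustrative sketch (not part of the vendored diff): disabling an existing
// health check with the newly added Disabled field on UpdateHealthCheckInput,
// passing HealthCheckVersion so an intervening update is not overwritten, as
// described in the surrounding comments. The health check ID and version are
// placeholders; assumes the usual aws and route53 imports from github.com/aws/aws-sdk-go.
func exampleDisableHealthCheck(svc *route53.Route53) error {
	_, err := svc.UpdateHealthCheck(&route53.UpdateHealthCheckInput{
		HealthCheckId:      aws.String("11111111-2222-3333-4444-555555555555"), // placeholder
		HealthCheckVersion: aws.Int64(1),                                       // value from GetHealthCheck or ListHealthChecks
		Disabled:           aws.Bool(true),
	})
	return err
}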
This prevents - // Amazon Route 53 from overwriting an intervening update: + // Route 53 from overwriting an intervening update: // // * If the value in the UpdateHealthCheck request matches the value of HealthCheckVersion - // in the health check, Amazon Route 53 updates the health check with the - // new settings. + // in the health check, Route 53 updates the health check with the new settings. // // * If the value of HealthCheckVersion in the health check is greater, the - // health check was changed after you got the version number. Amazon Route - // 53 does not update the health check, and it returns a HealthCheckVersionMismatch + // health check was changed after you got the version number. Route 53 does + // not update the health check, and it returns a HealthCheckVersionMismatch // error. HealthCheckVersion *int64 `min:"1" type:"long"` @@ -13982,18 +14144,18 @@ type UpdateHealthCheckInput struct { // Note the following: // // * If you specify a number greater than the number of child health checks, - // Amazon Route 53 always considers this health check to be unhealthy. + // Route 53 always considers this health check to be unhealthy. // - // * If you specify 0, Amazon Route 53 always considers this health check - // to be healthy. + // * If you specify 0, Route 53 always considers this health check to be + // healthy. HealthThreshold *int64 `type:"integer"` // The IPv4 or IPv6 IP address for the endpoint that you want Amazon Route 53 // to perform health checks on. If you don't specify a value for IPAddress, - // Amazon Route 53 sends a DNS request to resolve the domain name that you specify + // Route 53 sends a DNS request to resolve the domain name that you specify // in FullyQualifiedDomainName at the interval that you specify in RequestInterval. - // Using an IP address that is returned by DNS, Amazon Route 53 then checks - // the health of the endpoint. + // Using an IP address that is returned by DNS, Route 53 then checks the health + // of the endpoint. // // Use one of the following formats for the value of IPAddress: // @@ -14022,9 +14184,9 @@ type UpdateHealthCheckInput struct { // // For more information, see UpdateHealthCheckRequest$FullyQualifiedDomainName. // - // Constraints: Amazon Route 53 can't check the health of endpoints for which - // the IP address is in local, private, non-routable, or multicast ranges. For - // more information about IP addresses for which you can't create health checks, + // Constraints: Route 53 can't check the health of endpoints for which the IP + // address is in local, private, non-routable, or multicast ranges. For more + // information about IP addresses for which you can't create health checks, // see the following documents: // // * RFC 5735, Special Use IPv4 Addresses (https://tools.ietf.org/html/rfc5735) @@ -14037,14 +14199,14 @@ type UpdateHealthCheckInput struct { // When CloudWatch has insufficient data about the metric to determine the alarm // state, the status that you want Amazon Route 53 to assign to the health check: // - // * Healthy: Amazon Route 53 considers the health check to be healthy. + // * Healthy: Route 53 considers the health check to be healthy. // - // * Unhealthy: Amazon Route 53 considers the health check to be unhealthy. + // * Unhealthy: Route 53 considers the health check to be unhealthy. // - // * LastKnownStatus: Amazon Route 53 uses the status of the health check - // from the last time CloudWatch had sufficient data to determine the alarm - // state. 
For new health checks that have no last known status, the default - // status for the health check is healthy. + // * LastKnownStatus: Route 53 uses the status of the health check from the + // last time CloudWatch had sufficient data to determine the alarm state. + // For new health checks that have no last known status, the default status + // for the health check is healthy. InsufficientDataHealthStatus *string `type:"string" enum:"InsufficientDataHealthStatus"` // Specify whether you want Amazon Route 53 to invert the status of a health @@ -14067,27 +14229,27 @@ type UpdateHealthCheckInput struct { // * ChildHealthChecks: Amazon Route 53 resets HealthCheckConfig$ChildHealthChecks // to null. // - // * FullyQualifiedDomainName: Amazon Route 53 resets HealthCheckConfig$FullyQualifiedDomainName + // * FullyQualifiedDomainName: Route 53 resets HealthCheckConfig$FullyQualifiedDomainName // to null. // - // * Regions: Amazon Route 53 resets the HealthCheckConfig$Regions list to - // the default set of regions. + // * Regions: Route 53 resets the HealthCheckConfig$Regions list to the default + // set of regions. // - // * ResourcePath: Amazon Route 53 resets HealthCheckConfig$ResourcePath - // to null. + // * ResourcePath: Route 53 resets HealthCheckConfig$ResourcePath to null. ResetElements []*string `locationNameList:"ResettableElementName" type:"list"` // The path that you want Amazon Route 53 to request when performing health // checks. The path can be any value for which your endpoint will return an // HTTP status code of 2xx or 3xx when the endpoint is healthy, for example - // the file /docs/route53-health-check.html. + // the file /docs/route53-health-check.html. You can also include query string + // parameters, for example, /welcome.html?language=jp&login=y. // // Specify this value only if you want to change it. ResourcePath *string `type:"string"` // If the value of Type is HTTP_STR_MATCH or HTTP_STR_MATCH, the string that // you want Amazon Route 53 to search for in the response body from the specified - // resource. If the string appears in the response body, Amazon Route 53 considers + // resource. If the string appears in the response body, Route 53 considers // the resource healthy. (You can't change the value of Type when you update // a health check.) SearchString *string `type:"string"` @@ -14145,6 +14307,12 @@ func (s *UpdateHealthCheckInput) SetChildHealthChecks(v []*string) *UpdateHealth return s } +// SetDisabled sets the Disabled field's value. +func (s *UpdateHealthCheckInput) SetDisabled(v bool) *UpdateHealthCheckInput { + s.Disabled = &v + return s +} + // SetEnableSNI sets the EnableSNI field's value. func (s *UpdateHealthCheckInput) SetEnableSNI(v bool) *UpdateHealthCheckInput { s.EnableSNI = &v @@ -14232,8 +14400,7 @@ func (s *UpdateHealthCheckInput) SetSearchString(v string) *UpdateHealthCheckInp type UpdateHealthCheckOutput struct { _ struct{} `type:"structure"` - // A complex type that contains information about one health check that is associated - // with the current AWS account. + // A complex type that contains the response to an UpdateHealthCheck request. // // HealthCheck is a required field HealthCheck *HealthCheck `type:"structure" required:"true"` @@ -14309,7 +14476,8 @@ func (s *UpdateHostedZoneCommentInput) SetId(v string) *UpdateHostedZoneCommentI type UpdateHostedZoneCommentOutput struct { _ struct{} `type:"structure"` - // A complex type that contains general information about the hosted zone. 
+ // A complex type that contains the response to the UpdateHostedZoneComment + // request. // // HostedZone is a required field HostedZone *HostedZone `type:"structure" required:"true"` @@ -14562,7 +14730,7 @@ type VPC struct { // (Private hosted zones only) The ID of an Amazon VPC. VPCId *string `type:"string"` - // (Private hosted zones only) The region in which you created an Amazon VPC. + // (Private hosted zones only) The region that an Amazon VPC was created in. VPCRegion *string `min:"1" type:"string" enum:"VPCRegion"` } diff --git a/vendor/github.com/aws/aws-sdk-go/service/route53/errors.go b/vendor/github.com/aws/aws-sdk-go/service/route53/errors.go index d37e10cdebd..a2e70bfc622 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/route53/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/route53/errors.go @@ -66,8 +66,8 @@ const ( // You can create a hosted zone that has the same name as an existing hosted // zone (example.com is common), but there is a limit to the number of hosted // zones that have the same name. If you get this error, Amazon Route 53 has - // reached that limit. If you own the domain name and Amazon Route 53 generates - // this error, contact Customer Support. + // reached that limit. If you own the domain name and Route 53 generates this + // error, contact Customer Support. ErrCodeDelegationSetNotAvailable = "DelegationSetNotAvailable" // ErrCodeDelegationSetNotReusable for service response error code @@ -239,14 +239,13 @@ const ( // ErrCodeNoSuchGeoLocation for service response error code // "NoSuchGeoLocation". // - // Amazon Route 53 doesn't support the specified geolocation. + // Amazon Route 53 doesn't support the specified geographic location. ErrCodeNoSuchGeoLocation = "NoSuchGeoLocation" // ErrCodeNoSuchHealthCheck for service response error code // "NoSuchHealthCheck". // - // No health check exists with the ID that you specified in the DeleteHealthCheck - // request. + // No health check exists with the specified ID. ErrCodeNoSuchHealthCheck = "NoSuchHealthCheck" // ErrCodeNoSuchHostedZone for service response error code @@ -285,8 +284,8 @@ const ( // // If Amazon Route 53 can't process a request before the next request arrives, // it will reject subsequent requests for the same hosted zone and return an - // HTTP 400 error (Bad request). If Amazon Route 53 returns this error repeatedly - // for the same request, we recommend that you wait, in intervals of increasing + // HTTP 400 error (Bad request). If Route 53 returns this error repeatedly for + // the same request, we recommend that you wait, in intervals of increasing // duration, before you try the request again. ErrCodePriorRequestNotComplete = "PriorRequestNotComplete" diff --git a/vendor/github.com/aws/aws-sdk-go/service/route53/service.go b/vendor/github.com/aws/aws-sdk-go/service/route53/service.go index 98ba1c8f82f..dd22cb2cd84 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/route53/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/route53/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "route53" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "route53" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Route 53" // ServiceID is a unique identifer of a specific service. 
) // New creates a new instance of the Route53 client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go index 2faeee1ec8f..d67cfde79e2 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go @@ -3,14 +3,22 @@ package s3 import ( + "bytes" "fmt" "io" + "sync" + "sync/atomic" "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/client" "github.com/aws/aws-sdk-go/aws/request" "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/eventstream" + "github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi" + "github.com/aws/aws-sdk-go/private/protocol/rest" "github.com/aws/aws-sdk-go/private/protocol/restxml" ) @@ -18,8 +26,8 @@ const opAbortMultipartUpload = "AbortMultipartUpload" // AbortMultipartUploadRequest generates a "aws/request.Request" representing the // client's request for the AbortMultipartUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -101,8 +109,8 @@ const opCompleteMultipartUpload = "CompleteMultipartUpload" // CompleteMultipartUploadRequest generates a "aws/request.Request" representing the // client's request for the CompleteMultipartUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -175,8 +183,8 @@ const opCopyObject = "CopyObject" // CopyObjectRequest generates a "aws/request.Request" representing the // client's request for the CopyObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -255,8 +263,8 @@ const opCreateBucket = "CreateBucket" // CreateBucketRequest generates a "aws/request.Request" representing the // client's request for the CreateBucket operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
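The HealthCheckVersion handling documented for UpdateHealthCheckInput above amounts to optimistic locking: read the current version with GetHealthCheck, echo it back unchanged, and retry on a HealthCheckVersionMismatch error. The following is a minimal sketch of that flow; the health check ID, the chosen FailureThreshold, and the retry handling are illustrative assumptions, not taken from this diff.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))
	id := "hypothetical-health-check-id" // placeholder, not a real ID

	// Read the current version so Route 53 can detect an intervening update.
	got, err := svc.GetHealthCheck(&route53.GetHealthCheckInput{
		HealthCheckId: aws.String(id),
	})
	if err != nil {
		log.Fatal(err)
	}

	_, err = svc.UpdateHealthCheck(&route53.UpdateHealthCheckInput{
		HealthCheckId:      aws.String(id),
		HealthCheckVersion: got.HealthCheck.HealthCheckVersion, // echo the version we just read
		FailureThreshold:   aws.Int64(5),                       // illustrative change
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == route53.ErrCodeHealthCheckVersionMismatch {
		// The health check changed after we read it; re-read and retry with a fresh version.
		log.Println("version mismatch, retry with a fresh HealthCheckVersion")
	} else if err != nil {
		log.Fatal(err)
	}
}
```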
// the "output" return value is not valid until after Send returns without error. @@ -337,8 +345,8 @@ const opCreateMultipartUpload = "CreateMultipartUpload" // CreateMultipartUploadRequest generates a "aws/request.Request" representing the // client's request for the CreateMultipartUpload operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -417,8 +425,8 @@ const opDeleteBucket = "DeleteBucket" // DeleteBucketRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucket operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -494,8 +502,8 @@ const opDeleteBucketAnalyticsConfiguration = "DeleteBucketAnalyticsConfiguration // DeleteBucketAnalyticsConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketAnalyticsConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -571,8 +579,8 @@ const opDeleteBucketCors = "DeleteBucketCors" // DeleteBucketCorsRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketCors operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -613,7 +621,7 @@ func (c *S3) DeleteBucketCorsRequest(input *DeleteBucketCorsInput) (req *request // DeleteBucketCors API operation for Amazon Simple Storage Service. // -// Deletes the cors configuration information set for the bucket. +// Deletes the CORS configuration information set for the bucket. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -647,8 +655,8 @@ const opDeleteBucketEncryption = "DeleteBucketEncryption" // DeleteBucketEncryptionRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketEncryption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -723,8 +731,8 @@ const opDeleteBucketInventoryConfiguration = "DeleteBucketInventoryConfiguration // DeleteBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketInventoryConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -800,8 +808,8 @@ const opDeleteBucketLifecycle = "DeleteBucketLifecycle" // DeleteBucketLifecycleRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketLifecycle operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -876,8 +884,8 @@ const opDeleteBucketMetricsConfiguration = "DeleteBucketMetricsConfiguration" // DeleteBucketMetricsConfigurationRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketMetricsConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -953,8 +961,8 @@ const opDeleteBucketPolicy = "DeleteBucketPolicy" // DeleteBucketPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1029,8 +1037,8 @@ const opDeleteBucketReplication = "DeleteBucketReplication" // DeleteBucketReplicationRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketReplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1071,7 +1079,9 @@ func (c *S3) DeleteBucketReplicationRequest(input *DeleteBucketReplicationInput) // DeleteBucketReplication API operation for Amazon Simple Storage Service. // -// Deletes the replication configuration from the bucket. +// Deletes the replication configuration from the bucket. 
For information about +// replication configuration, see Cross-Region Replication (CRR) ( https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html) +// in the Amazon S3 Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1105,8 +1115,8 @@ const opDeleteBucketTagging = "DeleteBucketTagging" // DeleteBucketTaggingRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketTagging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1181,8 +1191,8 @@ const opDeleteBucketWebsite = "DeleteBucketWebsite" // DeleteBucketWebsiteRequest generates a "aws/request.Request" representing the // client's request for the DeleteBucketWebsite operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1257,8 +1267,8 @@ const opDeleteObject = "DeleteObject" // DeleteObjectRequest generates a "aws/request.Request" representing the // client's request for the DeleteObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1333,8 +1343,8 @@ const opDeleteObjectTagging = "DeleteObjectTagging" // DeleteObjectTaggingRequest generates a "aws/request.Request" representing the // client's request for the DeleteObjectTagging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1407,8 +1417,8 @@ const opDeleteObjects = "DeleteObjects" // DeleteObjectsRequest generates a "aws/request.Request" representing the // client's request for the DeleteObjects operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1478,12 +1488,88 @@ func (c *S3) DeleteObjectsWithContext(ctx aws.Context, input *DeleteObjectsInput return out, req.Send() } +const opDeletePublicAccessBlock = "DeletePublicAccessBlock" + +// DeletePublicAccessBlockRequest generates a "aws/request.Request" representing the +// client's request for the DeletePublicAccessBlock operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeletePublicAccessBlock for more information on using the DeletePublicAccessBlock +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeletePublicAccessBlockRequest method. +// req, resp := client.DeletePublicAccessBlockRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeletePublicAccessBlock +func (c *S3) DeletePublicAccessBlockRequest(input *DeletePublicAccessBlockInput) (req *request.Request, output *DeletePublicAccessBlockOutput) { + op := &request.Operation{ + Name: opDeletePublicAccessBlock, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?publicAccessBlock", + } + + if input == nil { + input = &DeletePublicAccessBlockInput{} + } + + output = &DeletePublicAccessBlockOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeletePublicAccessBlock API operation for Amazon Simple Storage Service. +// +// Removes the Public Access Block configuration for an Amazon S3 bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeletePublicAccessBlock for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeletePublicAccessBlock +func (c *S3) DeletePublicAccessBlock(input *DeletePublicAccessBlockInput) (*DeletePublicAccessBlockOutput, error) { + req, out := c.DeletePublicAccessBlockRequest(input) + return out, req.Send() +} + +// DeletePublicAccessBlockWithContext is the same as DeletePublicAccessBlock with the addition of +// the ability to pass a context and additional request options. +// +// See DeletePublicAccessBlock for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeletePublicAccessBlockWithContext(ctx aws.Context, input *DeletePublicAccessBlockInput, opts ...request.Option) (*DeletePublicAccessBlockOutput, error) { + req, out := c.DeletePublicAccessBlockRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opGetBucketAccelerateConfiguration = "GetBucketAccelerateConfiguration" // GetBucketAccelerateConfigurationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketAccelerateConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1556,8 +1642,8 @@ const opGetBucketAcl = "GetBucketAcl" // GetBucketAclRequest generates a "aws/request.Request" representing the // client's request for the GetBucketAcl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1630,8 +1716,8 @@ const opGetBucketAnalyticsConfiguration = "GetBucketAnalyticsConfiguration" // GetBucketAnalyticsConfigurationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketAnalyticsConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1705,8 +1791,8 @@ const opGetBucketCors = "GetBucketCors" // GetBucketCorsRequest generates a "aws/request.Request" representing the // client's request for the GetBucketCors operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1745,7 +1831,7 @@ func (c *S3) GetBucketCorsRequest(input *GetBucketCorsInput) (req *request.Reque // GetBucketCors API operation for Amazon Simple Storage Service. // -// Returns the cors configuration for the bucket. +// Returns the CORS configuration for the bucket. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1779,8 +1865,8 @@ const opGetBucketEncryption = "GetBucketEncryption" // GetBucketEncryptionRequest generates a "aws/request.Request" representing the // client's request for the GetBucketEncryption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
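The DeletePublicAccessBlock operation introduced above follows the same request/send pattern as the other bucket-level operations. A minimal usage sketch, assuming default credentials from the environment and an illustrative bucket name:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	// Remove the bucket's Public Access Block configuration.
	_, err := svc.DeletePublicAccessBlock(&s3.DeletePublicAccessBlockInput{
		Bucket: aws.String("example-bucket"), // illustrative name
	})
	if err != nil {
		log.Fatal(err)
	}
}
```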
@@ -1853,8 +1939,8 @@ const opGetBucketInventoryConfiguration = "GetBucketInventoryConfiguration" // GetBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketInventoryConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1928,8 +2014,8 @@ const opGetBucketLifecycle = "GetBucketLifecycle" // GetBucketLifecycleRequest generates a "aws/request.Request" representing the // client's request for the GetBucketLifecycle operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1950,6 +2036,8 @@ const opGetBucketLifecycle = "GetBucketLifecycle" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycle +// +// Deprecated: GetBucketLifecycle has been deprecated func (c *S3) GetBucketLifecycleRequest(input *GetBucketLifecycleInput) (req *request.Request, output *GetBucketLifecycleOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, GetBucketLifecycle, has been deprecated") @@ -1980,6 +2068,8 @@ func (c *S3) GetBucketLifecycleRequest(input *GetBucketLifecycleInput) (req *req // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketLifecycle for usage and error information. // See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycle +// +// Deprecated: GetBucketLifecycle has been deprecated func (c *S3) GetBucketLifecycle(input *GetBucketLifecycleInput) (*GetBucketLifecycleOutput, error) { req, out := c.GetBucketLifecycleRequest(input) return out, req.Send() @@ -1994,6 +2084,8 @@ func (c *S3) GetBucketLifecycle(input *GetBucketLifecycleInput) (*GetBucketLifec // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: GetBucketLifecycleWithContext has been deprecated func (c *S3) GetBucketLifecycleWithContext(ctx aws.Context, input *GetBucketLifecycleInput, opts ...request.Option) (*GetBucketLifecycleOutput, error) { req, out := c.GetBucketLifecycleRequest(input) req.SetContext(ctx) @@ -2005,8 +2097,8 @@ const opGetBucketLifecycleConfiguration = "GetBucketLifecycleConfiguration" // GetBucketLifecycleConfigurationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketLifecycleConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
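With GetBucketLifecycle now annotated as Deprecated (the same annotation is applied to the notification and lifecycle Put operations later in this diff), callers should prefer the configuration-suffixed variants. A sketch of reading lifecycle rules through GetBucketLifecycleConfiguration, with an illustrative bucket name:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	// GetBucketLifecycleConfiguration supersedes the deprecated GetBucketLifecycle.
	out, err := svc.GetBucketLifecycleConfiguration(&s3.GetBucketLifecycleConfigurationInput{
		Bucket: aws.String("example-bucket"), // illustrative name
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, rule := range out.Rules {
		fmt.Println(aws.StringValue(rule.ID), aws.StringValue(rule.Status))
	}
}
```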
@@ -2079,8 +2171,8 @@ const opGetBucketLocation = "GetBucketLocation" // GetBucketLocationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketLocation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2153,8 +2245,8 @@ const opGetBucketLogging = "GetBucketLogging" // GetBucketLoggingRequest generates a "aws/request.Request" representing the // client's request for the GetBucketLogging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2228,8 +2320,8 @@ const opGetBucketMetricsConfiguration = "GetBucketMetricsConfiguration" // GetBucketMetricsConfigurationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketMetricsConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2303,8 +2395,8 @@ const opGetBucketNotification = "GetBucketNotification" // GetBucketNotificationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketNotification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2325,6 +2417,8 @@ const opGetBucketNotification = "GetBucketNotification" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotification +// +// Deprecated: GetBucketNotification has been deprecated func (c *S3) GetBucketNotificationRequest(input *GetBucketNotificationConfigurationRequest) (req *request.Request, output *NotificationConfigurationDeprecated) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, GetBucketNotification, has been deprecated") @@ -2355,6 +2449,8 @@ func (c *S3) GetBucketNotificationRequest(input *GetBucketNotificationConfigurat // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketNotification for usage and error information. 
// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotification +// +// Deprecated: GetBucketNotification has been deprecated func (c *S3) GetBucketNotification(input *GetBucketNotificationConfigurationRequest) (*NotificationConfigurationDeprecated, error) { req, out := c.GetBucketNotificationRequest(input) return out, req.Send() @@ -2369,6 +2465,8 @@ func (c *S3) GetBucketNotification(input *GetBucketNotificationConfigurationRequ // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: GetBucketNotificationWithContext has been deprecated func (c *S3) GetBucketNotificationWithContext(ctx aws.Context, input *GetBucketNotificationConfigurationRequest, opts ...request.Option) (*NotificationConfigurationDeprecated, error) { req, out := c.GetBucketNotificationRequest(input) req.SetContext(ctx) @@ -2380,8 +2478,8 @@ const opGetBucketNotificationConfiguration = "GetBucketNotificationConfiguration // GetBucketNotificationConfigurationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketNotificationConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2454,8 +2552,8 @@ const opGetBucketPolicy = "GetBucketPolicy" // GetBucketPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetBucketPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2524,12 +2622,87 @@ func (c *S3) GetBucketPolicyWithContext(ctx aws.Context, input *GetBucketPolicyI return out, req.Send() } +const opGetBucketPolicyStatus = "GetBucketPolicyStatus" + +// GetBucketPolicyStatusRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketPolicyStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketPolicyStatus for more information on using the GetBucketPolicyStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketPolicyStatusRequest method. 
+// req, resp := client.GetBucketPolicyStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicyStatus +func (c *S3) GetBucketPolicyStatusRequest(input *GetBucketPolicyStatusInput) (req *request.Request, output *GetBucketPolicyStatusOutput) { + op := &request.Operation{ + Name: opGetBucketPolicyStatus, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?policyStatus", + } + + if input == nil { + input = &GetBucketPolicyStatusInput{} + } + + output = &GetBucketPolicyStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketPolicyStatus API operation for Amazon Simple Storage Service. +// +// Retrieves the policy status for an Amazon S3 bucket, indicating whether the +// bucket is public. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketPolicyStatus for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicyStatus +func (c *S3) GetBucketPolicyStatus(input *GetBucketPolicyStatusInput) (*GetBucketPolicyStatusOutput, error) { + req, out := c.GetBucketPolicyStatusRequest(input) + return out, req.Send() +} + +// GetBucketPolicyStatusWithContext is the same as GetBucketPolicyStatus with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketPolicyStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketPolicyStatusWithContext(ctx aws.Context, input *GetBucketPolicyStatusInput, opts ...request.Option) (*GetBucketPolicyStatusOutput, error) { + req, out := c.GetBucketPolicyStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetBucketReplication = "GetBucketReplication" // GetBucketReplicationRequest generates a "aws/request.Request" representing the // client's request for the GetBucketReplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2570,6 +2743,10 @@ func (c *S3) GetBucketReplicationRequest(input *GetBucketReplicationInput) (req // // Returns the replication configuration of a bucket. // +// It can take a while to propagate the put or delete a replication configuration +// to all Amazon S3 systems. Therefore, a get request soon after put or delete +// can return a wrong result. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
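GetBucketPolicyStatus, added above, reports whether the bucket policy makes the bucket public. A usage sketch follows; the IsPublic field on the returned PolicyStatus and the bucket name are assumptions based on the upstream S3 API rather than anything shown in this hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	out, err := svc.GetBucketPolicyStatus(&s3.GetBucketPolicyStatusInput{
		Bucket: aws.String("example-bucket"), // illustrative name
	})
	if err != nil {
		log.Fatal(err)
	}
	// PolicyStatus.IsPublic indicates whether the bucket policy grants public access.
	fmt.Println("bucket is public:", aws.BoolValue(out.PolicyStatus.IsPublic))
}
```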
@@ -2602,8 +2779,8 @@ const opGetBucketRequestPayment = "GetBucketRequestPayment" // GetBucketRequestPaymentRequest generates a "aws/request.Request" representing the // client's request for the GetBucketRequestPayment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2676,8 +2853,8 @@ const opGetBucketTagging = "GetBucketTagging" // GetBucketTaggingRequest generates a "aws/request.Request" representing the // client's request for the GetBucketTagging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2750,8 +2927,8 @@ const opGetBucketVersioning = "GetBucketVersioning" // GetBucketVersioningRequest generates a "aws/request.Request" representing the // client's request for the GetBucketVersioning operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2824,8 +3001,8 @@ const opGetBucketWebsite = "GetBucketWebsite" // GetBucketWebsiteRequest generates a "aws/request.Request" representing the // client's request for the GetBucketWebsite operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2898,8 +3075,8 @@ const opGetObject = "GetObject" // GetObjectRequest generates a "aws/request.Request" representing the // client's request for the GetObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2977,8 +3154,8 @@ const opGetObjectAcl = "GetObjectAcl" // GetObjectAclRequest generates a "aws/request.Request" representing the // client's request for the GetObjectAcl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
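The caveat added to GetBucketReplication above (a get issued right after a put or delete of the replication configuration can return a stale result) suggests polling with a short delay when the caller needs to observe the change. A sketch, with the retry count and sleep interval chosen arbitrarily:

```go
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))
	bucket := aws.String("example-bucket") // illustrative name

	// After PutBucketReplication, the configuration may take a moment to
	// propagate, so poll GetBucketReplication a few times before giving up.
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := svc.GetBucketReplication(&s3.GetBucketReplicationInput{Bucket: bucket})
		if err == nil && out.ReplicationConfiguration != nil {
			log.Printf("replication configuration visible after %d attempt(s)", attempt)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("replication configuration still not visible")
}
```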
@@ -3056,8 +3233,8 @@ const opGetObjectTagging = "GetObjectTagging" // GetObjectTaggingRequest generates a "aws/request.Request" representing the // client's request for the GetObjectTagging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3130,8 +3307,8 @@ const opGetObjectTorrent = "GetObjectTorrent" // GetObjectTorrentRequest generates a "aws/request.Request" representing the // client's request for the GetObjectTorrent operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3200,12 +3377,86 @@ func (c *S3) GetObjectTorrentWithContext(ctx aws.Context, input *GetObjectTorren return out, req.Send() } +const opGetPublicAccessBlock = "GetPublicAccessBlock" + +// GetPublicAccessBlockRequest generates a "aws/request.Request" representing the +// client's request for the GetPublicAccessBlock operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetPublicAccessBlock for more information on using the GetPublicAccessBlock +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetPublicAccessBlockRequest method. +// req, resp := client.GetPublicAccessBlockRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetPublicAccessBlock +func (c *S3) GetPublicAccessBlockRequest(input *GetPublicAccessBlockInput) (req *request.Request, output *GetPublicAccessBlockOutput) { + op := &request.Operation{ + Name: opGetPublicAccessBlock, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?publicAccessBlock", + } + + if input == nil { + input = &GetPublicAccessBlockInput{} + } + + output = &GetPublicAccessBlockOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetPublicAccessBlock API operation for Amazon Simple Storage Service. +// +// Retrieves the Public Access Block configuration for an Amazon S3 bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetPublicAccessBlock for usage and error information. 
+// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetPublicAccessBlock +func (c *S3) GetPublicAccessBlock(input *GetPublicAccessBlockInput) (*GetPublicAccessBlockOutput, error) { + req, out := c.GetPublicAccessBlockRequest(input) + return out, req.Send() +} + +// GetPublicAccessBlockWithContext is the same as GetPublicAccessBlock with the addition of +// the ability to pass a context and additional request options. +// +// See GetPublicAccessBlock for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetPublicAccessBlockWithContext(ctx aws.Context, input *GetPublicAccessBlockInput, opts ...request.Option) (*GetPublicAccessBlockOutput, error) { + req, out := c.GetPublicAccessBlockRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opHeadBucket = "HeadBucket" // HeadBucketRequest generates a "aws/request.Request" representing the // client's request for the HeadBucket operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3286,8 +3537,8 @@ const opHeadObject = "HeadObject" // HeadObjectRequest generates a "aws/request.Request" representing the // client's request for the HeadObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3365,8 +3616,8 @@ const opListBucketAnalyticsConfigurations = "ListBucketAnalyticsConfigurations" // ListBucketAnalyticsConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the ListBucketAnalyticsConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3439,8 +3690,8 @@ const opListBucketInventoryConfigurations = "ListBucketInventoryConfigurations" // ListBucketInventoryConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the ListBucketInventoryConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
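GetPublicAccessBlock, completed above, pairs with DeletePublicAccessBlock (and a corresponding Put operation elsewhere in the SDK) for managing a bucket's Block Public Access settings. A read sketch; the PublicAccessBlockConfiguration field names are assumptions based on the upstream S3 API rather than this hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	out, err := svc.GetPublicAccessBlock(&s3.GetPublicAccessBlockInput{
		Bucket: aws.String("example-bucket"), // illustrative name
	})
	if err != nil {
		log.Fatal(err)
	}
	cfg := out.PublicAccessBlockConfiguration
	fmt.Println("BlockPublicAcls:      ", aws.BoolValue(cfg.BlockPublicAcls))
	fmt.Println("IgnorePublicAcls:     ", aws.BoolValue(cfg.IgnorePublicAcls))
	fmt.Println("BlockPublicPolicy:    ", aws.BoolValue(cfg.BlockPublicPolicy))
	fmt.Println("RestrictPublicBuckets:", aws.BoolValue(cfg.RestrictPublicBuckets))
}
```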
@@ -3513,8 +3764,8 @@ const opListBucketMetricsConfigurations = "ListBucketMetricsConfigurations" // ListBucketMetricsConfigurationsRequest generates a "aws/request.Request" representing the // client's request for the ListBucketMetricsConfigurations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3587,8 +3838,8 @@ const opListBuckets = "ListBuckets" // ListBucketsRequest generates a "aws/request.Request" representing the // client's request for the ListBuckets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3661,8 +3912,8 @@ const opListMultipartUploads = "ListMultipartUploads" // ListMultipartUploadsRequest generates a "aws/request.Request" representing the // client's request for the ListMultipartUploads operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3791,8 +4042,8 @@ const opListObjectVersions = "ListObjectVersions" // ListObjectVersionsRequest generates a "aws/request.Request" representing the // client's request for the ListObjectVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3921,8 +4172,8 @@ const opListObjects = "ListObjects" // ListObjectsRequest generates a "aws/request.Request" representing the // client's request for the ListObjects operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4058,8 +4309,8 @@ const opListObjectsV2 = "ListObjectsV2" // ListObjectsV2Request generates a "aws/request.Request" representing the // client's request for the ListObjectsV2 operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -4196,8 +4447,8 @@ const opListParts = "ListParts" // ListPartsRequest generates a "aws/request.Request" representing the // client's request for the ListParts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4326,8 +4577,8 @@ const opPutBucketAccelerateConfiguration = "PutBucketAccelerateConfiguration" // PutBucketAccelerateConfigurationRequest generates a "aws/request.Request" representing the // client's request for the PutBucketAccelerateConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4402,8 +4653,8 @@ const opPutBucketAcl = "PutBucketAcl" // PutBucketAclRequest generates a "aws/request.Request" representing the // client's request for the PutBucketAcl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4478,8 +4729,8 @@ const opPutBucketAnalyticsConfiguration = "PutBucketAnalyticsConfiguration" // PutBucketAnalyticsConfigurationRequest generates a "aws/request.Request" representing the // client's request for the PutBucketAnalyticsConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4555,8 +4806,8 @@ const opPutBucketCors = "PutBucketCors" // PutBucketCorsRequest generates a "aws/request.Request" representing the // client's request for the PutBucketCors operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4597,7 +4848,7 @@ func (c *S3) PutBucketCorsRequest(input *PutBucketCorsInput) (req *request.Reque // PutBucketCors API operation for Amazon Simple Storage Service. // -// Sets the cors configuration for a bucket. +// Sets the CORS configuration for a bucket. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4631,8 +4882,8 @@ const opPutBucketEncryption = "PutBucketEncryption" // PutBucketEncryptionRequest generates a "aws/request.Request" representing the // client's request for the PutBucketEncryption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4708,8 +4959,8 @@ const opPutBucketInventoryConfiguration = "PutBucketInventoryConfiguration" // PutBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the // client's request for the PutBucketInventoryConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4785,8 +5036,8 @@ const opPutBucketLifecycle = "PutBucketLifecycle" // PutBucketLifecycleRequest generates a "aws/request.Request" representing the // client's request for the PutBucketLifecycle operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4807,6 +5058,8 @@ const opPutBucketLifecycle = "PutBucketLifecycle" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycle +// +// Deprecated: PutBucketLifecycle has been deprecated func (c *S3) PutBucketLifecycleRequest(input *PutBucketLifecycleInput) (req *request.Request, output *PutBucketLifecycleOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, PutBucketLifecycle, has been deprecated") @@ -4839,6 +5092,8 @@ func (c *S3) PutBucketLifecycleRequest(input *PutBucketLifecycleInput) (req *req // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketLifecycle for usage and error information. // See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycle +// +// Deprecated: PutBucketLifecycle has been deprecated func (c *S3) PutBucketLifecycle(input *PutBucketLifecycleInput) (*PutBucketLifecycleOutput, error) { req, out := c.PutBucketLifecycleRequest(input) return out, req.Send() @@ -4853,6 +5108,8 @@ func (c *S3) PutBucketLifecycle(input *PutBucketLifecycleInput) (*PutBucketLifec // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
+// +// Deprecated: PutBucketLifecycleWithContext has been deprecated func (c *S3) PutBucketLifecycleWithContext(ctx aws.Context, input *PutBucketLifecycleInput, opts ...request.Option) (*PutBucketLifecycleOutput, error) { req, out := c.PutBucketLifecycleRequest(input) req.SetContext(ctx) @@ -4864,8 +5121,8 @@ const opPutBucketLifecycleConfiguration = "PutBucketLifecycleConfiguration" // PutBucketLifecycleConfigurationRequest generates a "aws/request.Request" representing the // client's request for the PutBucketLifecycleConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4941,8 +5198,8 @@ const opPutBucketLogging = "PutBucketLogging" // PutBucketLoggingRequest generates a "aws/request.Request" representing the // client's request for the PutBucketLogging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5019,8 +5276,8 @@ const opPutBucketMetricsConfiguration = "PutBucketMetricsConfiguration" // PutBucketMetricsConfigurationRequest generates a "aws/request.Request" representing the // client's request for the PutBucketMetricsConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5096,8 +5353,8 @@ const opPutBucketNotification = "PutBucketNotification" // PutBucketNotificationRequest generates a "aws/request.Request" representing the // client's request for the PutBucketNotification operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5118,6 +5375,8 @@ const opPutBucketNotification = "PutBucketNotification" // } // // See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotification +// +// Deprecated: PutBucketNotification has been deprecated func (c *S3) PutBucketNotificationRequest(input *PutBucketNotificationInput) (req *request.Request, output *PutBucketNotificationOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, PutBucketNotification, has been deprecated") @@ -5150,6 +5409,8 @@ func (c *S3) PutBucketNotificationRequest(input *PutBucketNotificationInput) (re // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketNotification for usage and error information. 
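Editorial note: the hunks above mark the legacy PutBucketLifecycle calls as deprecated; PutBucketLifecycleConfiguration is the non-deprecated counterpart already present in the SDK. A minimal sketch of the newer call follows, assuming placeholder bucket, rule ID, and prefix values.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Expire objects under a prefix after 30 days; all names are placeholders.
	_, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String("my-example-bucket"),
		LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
			Rules: []*s3.LifecycleRule{{
				ID:         aws.String("expire-logs"),
				Status:     aws.String(s3.ExpirationStatusEnabled),
				Filter:     &s3.LifecycleRuleFilter{Prefix: aws.String("logs/")},
				Expiration: &s3.LifecycleExpiration{Days: aws.Int64(30)},
			}},
		},
	})
	if err != nil {
		log.Fatalf("PutBucketLifecycleConfiguration failed: %v", err)
	}
}
```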
// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotification +// +// Deprecated: PutBucketNotification has been deprecated func (c *S3) PutBucketNotification(input *PutBucketNotificationInput) (*PutBucketNotificationOutput, error) { req, out := c.PutBucketNotificationRequest(input) return out, req.Send() @@ -5164,6 +5425,8 @@ func (c *S3) PutBucketNotification(input *PutBucketNotificationInput) (*PutBucke // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. +// +// Deprecated: PutBucketNotificationWithContext has been deprecated func (c *S3) PutBucketNotificationWithContext(ctx aws.Context, input *PutBucketNotificationInput, opts ...request.Option) (*PutBucketNotificationOutput, error) { req, out := c.PutBucketNotificationRequest(input) req.SetContext(ctx) @@ -5175,8 +5438,8 @@ const opPutBucketNotificationConfiguration = "PutBucketNotificationConfiguration // PutBucketNotificationConfigurationRequest generates a "aws/request.Request" representing the // client's request for the PutBucketNotificationConfiguration operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5251,8 +5514,8 @@ const opPutBucketPolicy = "PutBucketPolicy" // PutBucketPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutBucketPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5328,8 +5591,8 @@ const opPutBucketReplication = "PutBucketReplication" // PutBucketReplicationRequest generates a "aws/request.Request" representing the // client's request for the PutBucketReplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5370,8 +5633,9 @@ func (c *S3) PutBucketReplicationRequest(input *PutBucketReplicationInput) (req // PutBucketReplication API operation for Amazon Simple Storage Service. // -// Creates a new replication configuration (or replaces an existing one, if -// present). +// Creates a replication configuration or replaces an existing one. For more +// information, see Cross-Region Replication (CRR) ( https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html) +// in the Amazon S3 Developer Guide. // // Returns awserr.Error for service API and SDK errors. 
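Editorial note: the PutBucketReplication documentation above now links to the Cross-Region Replication guide. As a rough illustration (not part of this changeset), a replication configuration can be applied as below; the IAM role ARN and both bucket names are placeholders, and both buckets are assumed to already have versioning enabled.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	_, err := svc.PutBucketReplication(&s3.PutBucketReplicationInput{
		Bucket: aws.String("source-bucket"), // placeholder
		ReplicationConfiguration: &s3.ReplicationConfiguration{
			Role: aws.String("arn:aws:iam::123456789012:role/replication-role"), // placeholder
			Rules: []*s3.ReplicationRule{{
				Status: aws.String(s3.ReplicationRuleStatusEnabled),
				Prefix: aws.String(""), // replicate the whole bucket
				Destination: &s3.Destination{
					// All rules in a configuration must point at the same destination bucket.
					Bucket: aws.String("arn:aws:s3:::destination-bucket"),
				},
			}},
		},
	})
	if err != nil {
		log.Fatalf("PutBucketReplication failed: %v", err)
	}
}
```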
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5405,8 +5669,8 @@ const opPutBucketRequestPayment = "PutBucketRequestPayment" // PutBucketRequestPaymentRequest generates a "aws/request.Request" representing the // client's request for the PutBucketRequestPayment operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5485,8 +5749,8 @@ const opPutBucketTagging = "PutBucketTagging" // PutBucketTaggingRequest generates a "aws/request.Request" representing the // client's request for the PutBucketTagging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5561,8 +5825,8 @@ const opPutBucketVersioning = "PutBucketVersioning" // PutBucketVersioningRequest generates a "aws/request.Request" representing the // client's request for the PutBucketVersioning operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5638,8 +5902,8 @@ const opPutBucketWebsite = "PutBucketWebsite" // PutBucketWebsiteRequest generates a "aws/request.Request" representing the // client's request for the PutBucketWebsite operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5714,8 +5978,8 @@ const opPutObject = "PutObject" // PutObjectRequest generates a "aws/request.Request" representing the // client's request for the PutObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5788,8 +6052,8 @@ const opPutObjectAcl = "PutObjectAcl" // PutObjectAclRequest generates a "aws/request.Request" representing the // client's request for the PutObjectAcl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
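Editorial note: the hunks above are doc-comment fixes for PutObject and PutObjectAcl. For context, a basic PutObject call with the vendored v1 client looks roughly like this; the bucket, key, and body are placeholders.

```go
package main

import (
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Body must be an io.ReadSeeker; *strings.Reader satisfies that.
	out, err := svc.PutObject(&s3.PutObjectInput{
		Bucket:               aws.String("my-example-bucket"), // placeholder
		Key:                  aws.String("greeting.txt"),
		Body:                 strings.NewReader("hello world"),
		ContentType:          aws.String("text/plain"),
		ServerSideEncryption: aws.String(s3.ServerSideEncryptionAes256),
	})
	if err != nil {
		log.Fatalf("PutObject failed: %v", err)
	}
	log.Printf("uploaded, ETag=%s", aws.StringValue(out.ETag))
}
```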
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5868,8 +6132,8 @@ const opPutObjectTagging = "PutObjectTagging" // PutObjectTaggingRequest generates a "aws/request.Request" representing the // client's request for the PutObjectTagging operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5938,12 +6202,89 @@ func (c *S3) PutObjectTaggingWithContext(ctx aws.Context, input *PutObjectTaggin return out, req.Send() } +const opPutPublicAccessBlock = "PutPublicAccessBlock" + +// PutPublicAccessBlockRequest generates a "aws/request.Request" representing the +// client's request for the PutPublicAccessBlock operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutPublicAccessBlock for more information on using the PutPublicAccessBlock +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutPublicAccessBlockRequest method. +// req, resp := client.PutPublicAccessBlockRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutPublicAccessBlock +func (c *S3) PutPublicAccessBlockRequest(input *PutPublicAccessBlockInput) (req *request.Request, output *PutPublicAccessBlockOutput) { + op := &request.Operation{ + Name: opPutPublicAccessBlock, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?publicAccessBlock", + } + + if input == nil { + input = &PutPublicAccessBlockInput{} + } + + output = &PutPublicAccessBlockOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutPublicAccessBlock API operation for Amazon Simple Storage Service. +// +// Creates or modifies the Public Access Block configuration for an Amazon S3 +// bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutPublicAccessBlock for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutPublicAccessBlock +func (c *S3) PutPublicAccessBlock(input *PutPublicAccessBlockInput) (*PutPublicAccessBlockOutput, error) { + req, out := c.PutPublicAccessBlockRequest(input) + return out, req.Send() +} + +// PutPublicAccessBlockWithContext is the same as PutPublicAccessBlock with the addition of +// the ability to pass a context and additional request options. 
+// +// See PutPublicAccessBlock for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutPublicAccessBlockWithContext(ctx aws.Context, input *PutPublicAccessBlockInput, opts ...request.Option) (*PutPublicAccessBlockOutput, error) { + req, out := c.PutPublicAccessBlockRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opRestoreObject = "RestoreObject" // RestoreObjectRequest generates a "aws/request.Request" representing the // client's request for the RestoreObject operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6017,97 +6358,179 @@ func (c *S3) RestoreObjectWithContext(ctx aws.Context, input *RestoreObjectInput return out, req.Send() } -const opUploadPart = "UploadPart" +const opSelectObjectContent = "SelectObjectContent" -// UploadPartRequest generates a "aws/request.Request" representing the -// client's request for the UploadPart operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// SelectObjectContentRequest generates a "aws/request.Request" representing the +// client's request for the SelectObjectContent operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UploadPart for more information on using the UploadPart +// See SelectObjectContent for more information on using the SelectObjectContent // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UploadPartRequest method. -// req, resp := client.UploadPartRequest(params) +// // Example sending a request using the SelectObjectContentRequest method. 
+// req, resp := client.SelectObjectContentRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart -func (c *S3) UploadPartRequest(input *UploadPartInput) (req *request.Request, output *UploadPartOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/SelectObjectContent +func (c *S3) SelectObjectContentRequest(input *SelectObjectContentInput) (req *request.Request, output *SelectObjectContentOutput) { op := &request.Operation{ - Name: opUploadPart, - HTTPMethod: "PUT", - HTTPPath: "/{Bucket}/{Key+}", + Name: opSelectObjectContent, + HTTPMethod: "POST", + HTTPPath: "/{Bucket}/{Key+}?select&select-type=2", } if input == nil { - input = &UploadPartInput{} + input = &SelectObjectContentInput{} } - output = &UploadPartOutput{} + output = &SelectObjectContentOutput{} req = c.newRequest(op, input, output) + req.Handlers.Send.Swap(client.LogHTTPResponseHandler.Name, client.LogHTTPResponseHeaderHandler) + req.Handlers.Unmarshal.Swap(restxml.UnmarshalHandler.Name, rest.UnmarshalHandler) + req.Handlers.Unmarshal.PushBack(output.runEventStreamLoop) return } -// UploadPart API operation for Amazon Simple Storage Service. -// -// Uploads a part in a multipart upload. +// SelectObjectContent API operation for Amazon Simple Storage Service. // -// Note: After you initiate multipart upload and upload one or more parts, you -// must either complete or abort multipart upload in order to stop getting charged -// for storage of the uploaded parts. Only after you either complete or abort -// multipart upload, Amazon S3 frees up the parts storage and stops charging -// you for the parts storage. +// This operation filters the contents of an Amazon S3 object based on a simple +// Structured Query Language (SQL) statement. In the request, along with the +// SQL expression, you must also specify a data serialization format (JSON or +// CSV) of the object. Amazon S3 uses this to parse object data into records, +// and returns only records that match the specified SQL expression. You must +// also specify the data serialization format for the response. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Simple Storage Service's -// API operation UploadPart for usage and error information. -// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart -func (c *S3) UploadPart(input *UploadPartInput) (*UploadPartOutput, error) { - req, out := c.UploadPartRequest(input) +// API operation SelectObjectContent for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/SelectObjectContent +func (c *S3) SelectObjectContent(input *SelectObjectContentInput) (*SelectObjectContentOutput, error) { + req, out := c.SelectObjectContentRequest(input) return out, req.Send() } -// UploadPartWithContext is the same as UploadPart with the addition of +// SelectObjectContentWithContext is the same as SelectObjectContent with the addition of // the ability to pass a context and additional request options. // -// See UploadPart for details on how to use this API operation. +// See SelectObjectContent for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. 
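Editorial note: SelectObjectContent, also new in this update, returns its results as an event stream rather than a single response body. The sketch below shows one plausible way to run a SQL expression against a CSV object and drain the stream; the bucket, key, and expression are placeholders, and the event-stream accessors are assumed to match this vendored SDK snapshot.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// The object is assumed to be a CSV file with a header row.
	resp, err := svc.SelectObjectContent(&s3.SelectObjectContentInput{
		Bucket:         aws.String("my-example-bucket"), // placeholder
		Key:            aws.String("data/records.csv"),
		Expression:     aws.String("SELECT s.name FROM S3Object s WHERE s.age > '30'"),
		ExpressionType: aws.String(s3.ExpressionTypeSql),
		InputSerialization: &s3.InputSerialization{
			CSV: &s3.CSVInput{FileHeaderInfo: aws.String(s3.FileHeaderInfoUse)},
		},
		OutputSerialization: &s3.OutputSerialization{
			CSV: &s3.CSVOutput{},
		},
	})
	if err != nil {
		log.Fatalf("SelectObjectContent failed: %v", err)
	}
	defer resp.EventStream.Close()

	// Matching rows arrive as RecordsEvent payloads on the event stream.
	for event := range resp.EventStream.Events() {
		if records, ok := event.(*s3.RecordsEvent); ok {
			fmt.Print(string(records.Payload))
		}
	}
	if err := resp.EventStream.Err(); err != nil {
		log.Fatalf("event stream error: %v", err)
	}
}
```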
If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *S3) UploadPartWithContext(ctx aws.Context, input *UploadPartInput, opts ...request.Option) (*UploadPartOutput, error) { - req, out := c.UploadPartRequest(input) +func (c *S3) SelectObjectContentWithContext(ctx aws.Context, input *SelectObjectContentInput, opts ...request.Option) (*SelectObjectContentOutput, error) { + req, out := c.SelectObjectContentRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUploadPartCopy = "UploadPartCopy" +const opUploadPart = "UploadPart" -// UploadPartCopyRequest generates a "aws/request.Request" representing the -// client's request for the UploadPartCopy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UploadPartRequest generates a "aws/request.Request" representing the +// client's request for the UploadPart operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UploadPartCopy for more information on using the UploadPartCopy +// See UploadPart for more information on using the UploadPart +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UploadPartRequest method. +// req, resp := client.UploadPartRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart +func (c *S3) UploadPartRequest(input *UploadPartInput) (req *request.Request, output *UploadPartOutput) { + op := &request.Operation{ + Name: opUploadPart, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}/{Key+}", + } + + if input == nil { + input = &UploadPartInput{} + } + + output = &UploadPartOutput{} + req = c.newRequest(op, input, output) + return +} + +// UploadPart API operation for Amazon Simple Storage Service. +// +// Uploads a part in a multipart upload. +// +// Note: After you initiate multipart upload and upload one or more parts, you +// must either complete or abort multipart upload in order to stop getting charged +// for storage of the uploaded parts. Only after you either complete or abort +// multipart upload, Amazon S3 frees up the parts storage and stops charging +// you for the parts storage. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation UploadPart for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart +func (c *S3) UploadPart(input *UploadPartInput) (*UploadPartOutput, error) { + req, out := c.UploadPartRequest(input) + return out, req.Send() +} + +// UploadPartWithContext is the same as UploadPart with the addition of +// the ability to pass a context and additional request options. 
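Editorial note: the restored UploadPart documentation above stresses that a multipart upload must be completed or aborted to stop storage charges for uploaded parts. A condensed sketch of that lifecycle (create, upload one part, complete, abort on failure), with placeholder names:

```go
package main

import (
	"bytes"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	bucket, key := aws.String("my-example-bucket"), aws.String("big-object.bin") // placeholders

	create, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{Bucket: bucket, Key: key})
	if err != nil {
		log.Fatalf("CreateMultipartUpload failed: %v", err)
	}

	part, err := svc.UploadPart(&s3.UploadPartInput{
		Bucket:     bucket,
		Key:        key,
		UploadId:   create.UploadId,
		PartNumber: aws.Int64(1),
		// Every part except the last must be at least 5 MB.
		Body: bytes.NewReader(make([]byte, 5*1024*1024)),
	})
	if err != nil {
		// Abort so the already-uploaded parts stop accruing storage charges.
		if _, abortErr := svc.AbortMultipartUpload(&s3.AbortMultipartUploadInput{
			Bucket: bucket, Key: key, UploadId: create.UploadId,
		}); abortErr != nil {
			log.Printf("AbortMultipartUpload failed: %v", abortErr)
		}
		log.Fatalf("UploadPart failed: %v", err)
	}

	_, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
		Bucket:   bucket,
		Key:      key,
		UploadId: create.UploadId,
		MultipartUpload: &s3.CompletedMultipartUpload{
			Parts: []*s3.CompletedPart{{ETag: part.ETag, PartNumber: aws.Int64(1)}},
		},
	})
	if err != nil {
		log.Fatalf("CompleteMultipartUpload failed: %v", err)
	}
}
```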
+// +// See UploadPart for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) UploadPartWithContext(ctx aws.Context, input *UploadPartInput, opts ...request.Option) (*UploadPartOutput, error) { + req, out := c.UploadPartRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUploadPartCopy = "UploadPartCopy" + +// UploadPartCopyRequest generates a "aws/request.Request" representing the +// client's request for the UploadPartCopy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UploadPartCopy for more information on using the UploadPartCopy // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration @@ -6377,7 +6800,7 @@ func (s *AccessControlPolicy) SetOwner(v *Owner) *AccessControlPolicy { return s } -// Container for information regarding the access control for replicas. +// A container for information about access control for replicas. type AccessControlTranslation struct { _ struct{} `type:"structure"` @@ -6730,7 +7153,7 @@ type Bucket struct { _ struct{} `type:"structure"` // Date the bucket was created. - CreationDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreationDate *time.Time `type:"timestamp"` // The name of the bucket. Name *string `type:"string"` @@ -6807,6 +7230,9 @@ func (s *BucketLifecycleConfiguration) SetRules(v []*LifecycleRule) *BucketLifec type BucketLoggingStatus struct { _ struct{} `type:"structure"` + // Container for logging information. Presence of this element indicates that + // logging is enabled. Parameters TargetBucket and TargetPrefix are required + // in this case. LoggingEnabled *LoggingEnabled `type:"structure"` } @@ -6974,11 +7400,16 @@ func (s *CORSRule) SetMaxAgeSeconds(v int64) *CORSRule { type CSVInput struct { _ struct{} `type:"structure"` - // Single character used to indicate a row should be ignored when present at - // the start of a row. + // Specifies that CSV field values may contain quoted record delimiters and + // such records should be allowed. Default value is FALSE. Setting this value + // to TRUE may lower performance. + AllowQuotedRecordDelimiter *bool `type:"boolean"` + + // The single character used to indicate a row should be ignored when present + // at the start of a row. Comments *string `type:"string"` - // Value used to separate individual fields in a record. + // The value used to separate individual fields in a record. FieldDelimiter *string `type:"string"` // Describes the first line of input. Valid values: None, Ignore, Use. @@ -6987,11 +7418,11 @@ type CSVInput struct { // Value used for escaping where the field delimiter is part of the value. QuoteCharacter *string `type:"string"` - // Single character used for escaping the quote character inside an already + // The single character used for escaping the quote character inside an already // escaped value. QuoteEscapeCharacter *string `type:"string"` - // Value used to separate individual records. 
+ // The value used to separate individual records. RecordDelimiter *string `type:"string"` } @@ -7005,6 +7436,12 @@ func (s CSVInput) GoString() string { return s.String() } +// SetAllowQuotedRecordDelimiter sets the AllowQuotedRecordDelimiter field's value. +func (s *CSVInput) SetAllowQuotedRecordDelimiter(v bool) *CSVInput { + s.AllowQuotedRecordDelimiter = &v + return s +} + // SetComments sets the Comments field's value. func (s *CSVInput) SetComments(v string) *CSVInput { s.Comments = &v @@ -7045,20 +7482,20 @@ func (s *CSVInput) SetRecordDelimiter(v string) *CSVInput { type CSVOutput struct { _ struct{} `type:"structure"` - // Value used to separate individual fields in a record. + // The value used to separate individual fields in a record. FieldDelimiter *string `type:"string"` - // Value used for escaping where the field delimiter is part of the value. + // The value used for escaping where the field delimiter is part of the value. QuoteCharacter *string `type:"string"` - // Single character used for escaping the quote character inside an already + // Th single character used for escaping the quote character inside an already // escaped value. QuoteEscapeCharacter *string `type:"string"` // Indicates whether or not all output fields should be quoted. QuoteFields *string `type:"string" enum:"QuoteFields"` - // Value used to separate individual records. + // The value used to separate individual records. RecordDelimiter *string `type:"string"` } @@ -7107,12 +7544,14 @@ type CloudFunctionConfiguration struct { CloudFunction *string `type:"string"` - // Bucket event for which to send notifications. + // The bucket event for which to send notifications. + // + // Deprecated: Event has been deprecated Event *string `deprecated:"true" type:"string" enum:"Event"` Events []*string `locationName:"Event" type:"list" flattened:"true"` - // Optional unique identifier for configurations in a notification configuration. + // An optional unique identifier for configurations in a notification configuration. // If you don't provide one, Amazon S3 will assign an ID. Id *string `type:"string"` @@ -7471,6 +7910,32 @@ func (s *Condition) SetKeyPrefixEquals(v string) *Condition { return s } +type ContinuationEvent struct { + _ struct{} `locationName:"ContinuationEvent" type:"structure"` +} + +// String returns the string representation +func (s ContinuationEvent) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContinuationEvent) GoString() string { + return s.String() +} + +// The ContinuationEvent is and event in the SelectObjectContentEventStream group of events. +func (s *ContinuationEvent) eventSelectObjectContentEventStream() {} + +// UnmarshalEvent unmarshals the EventStream Message into the ContinuationEvent value. +// This method is only used internally within the SDK's EventStream handling. +func (s *ContinuationEvent) UnmarshalEvent( + payloadUnmarshaler protocol.PayloadUnmarshaler, + msg eventstream.Message, +) error { + return nil +} + type CopyObjectInput struct { _ struct{} `type:"structure"` @@ -7507,14 +7972,14 @@ type CopyObjectInput struct { CopySourceIfMatch *string `location:"header" locationName:"x-amz-copy-source-if-match" type:"string"` // Copies the object if it has been modified since the specified time. 
- CopySourceIfModifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-modified-since" type:"timestamp" timestampFormat:"rfc822"` + CopySourceIfModifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-modified-since" type:"timestamp"` // Copies the object if its entity tag (ETag) is different than the specified // ETag. CopySourceIfNoneMatch *string `location:"header" locationName:"x-amz-copy-source-if-none-match" type:"string"` // Copies the object if it hasn't been modified since the specified time. - CopySourceIfUnmodifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-unmodified-since" type:"timestamp" timestampFormat:"rfc822"` + CopySourceIfUnmodifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-unmodified-since" type:"timestamp"` // Specifies the algorithm to use when decrypting the source object (e.g., AES256). CopySourceSSECustomerAlgorithm *string `location:"header" locationName:"x-amz-copy-source-server-side-encryption-customer-algorithm" type:"string"` @@ -7530,7 +7995,7 @@ type CopyObjectInput struct { CopySourceSSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-copy-source-server-side-encryption-customer-key-MD5" type:"string"` // The date and time at which the object is no longer cacheable. - Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp" timestampFormat:"rfc822"` + Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp"` // Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object. GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"` @@ -7959,7 +8424,7 @@ type CopyObjectResult struct { ETag *string `type:"string"` - LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastModified *time.Time `type:"timestamp"` } // String returns the string representation @@ -7991,7 +8456,7 @@ type CopyPartResult struct { ETag *string `type:"string"` // Date and time at which the object was uploaded. - LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastModified *time.Time `type:"timestamp"` } // String returns the string representation @@ -8195,7 +8660,7 @@ type CreateMultipartUploadInput struct { ContentType *string `location:"header" locationName:"Content-Type" type:"string"` // The date and time at which the object is no longer cacheable. - Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp" timestampFormat:"rfc822"` + Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp"` // Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object. GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"` @@ -8443,7 +8908,7 @@ type CreateMultipartUploadOutput struct { _ struct{} `type:"structure"` // Date when multipart upload will become eligible for abort operation by lifecycle. - AbortDate *time.Time `location:"header" locationName:"x-amz-abort-date" type:"timestamp" timestampFormat:"rfc822"` + AbortDate *time.Time `location:"header" locationName:"x-amz-abort-date" type:"timestamp"` // Id of the lifecycle rule that makes a multipart upload eligible for abort // operation. @@ -9124,6 +9589,11 @@ func (s DeleteBucketPolicyOutput) GoString() string { type DeleteBucketReplicationInput struct { _ struct{} `type:"structure"` + // The bucket name. 
+ // + // It can take a while to propagate the deletion of a replication configuration + // to all Amazon S3 systems. + // // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` } @@ -9303,7 +9773,7 @@ type DeleteMarkerEntry struct { Key *string `min:"1" type:"string"` // Date and time the object was last modified. - LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastModified *time.Time `type:"timestamp"` Owner *Owner `type:"structure"` @@ -9351,6 +9821,33 @@ func (s *DeleteMarkerEntry) SetVersionId(v string) *DeleteMarkerEntry { return s } +// Specifies whether Amazon S3 should replicate delete makers. +type DeleteMarkerReplication struct { + _ struct{} `type:"structure"` + + // The status of the delete marker replication. + // + // In the current implementation, Amazon S3 doesn't replicate the delete markers. + // The status must be Disabled. + Status *string `type:"string" enum:"DeleteMarkerReplicationStatus"` +} + +// String returns the string representation +func (s DeleteMarkerReplication) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteMarkerReplication) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *DeleteMarkerReplication) SetStatus(v string) *DeleteMarkerReplication { + s.Status = &v + return s +} + type DeleteObjectInput struct { _ struct{} `type:"structure"` @@ -9696,6 +10193,66 @@ func (s *DeleteObjectsOutput) SetRequestCharged(v string) *DeleteObjectsOutput { return s } +type DeletePublicAccessBlockInput struct { + _ struct{} `type:"structure"` + + // The Amazon S3 bucket whose Public Access Block configuration you want to + // delete. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeletePublicAccessBlockInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePublicAccessBlockInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeletePublicAccessBlockInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeletePublicAccessBlockInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *DeletePublicAccessBlockInput) SetBucket(v string) *DeletePublicAccessBlockInput { + s.Bucket = &v + return s +} + +func (s *DeletePublicAccessBlockInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeletePublicAccessBlockOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeletePublicAccessBlockOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeletePublicAccessBlockOutput) GoString() string { + return s.String() +} + type DeletedObject struct { _ struct{} `type:"structure"` @@ -9742,27 +10299,43 @@ func (s *DeletedObject) SetVersionId(v string) *DeletedObject { return s } -// Container for replication destination information. +// A container for information about the replication destination. 
type Destination struct { _ struct{} `type:"structure"` - // Container for information regarding the access control for replicas. + // A container for information about access control for replicas. + // + // Use this element only in a cross-account scenario where source and destination + // bucket owners are not the same to change replica ownership to the AWS account + // that owns the destination bucket. If you don't add this element to the replication + // configuration, the replicas are owned by same AWS account that owns the source + // object. AccessControlTranslation *AccessControlTranslation `type:"structure"` - // Account ID of the destination bucket. Currently this is only being verified - // if Access Control Translation is enabled + // The account ID of the destination bucket. Currently, Amazon S3 verifies this + // value only if Access Control Translation is enabled. + // + // In a cross-account scenario, if you change replica ownership to the AWS account + // that owns the destination bucket by adding the AccessControlTranslation element, + // this is the account ID of the owner of the destination bucket. Account *string `type:"string"` - // Amazon resource name (ARN) of the bucket where you want Amazon S3 to store - // replicas of the object identified by the rule. + // The Amazon Resource Name (ARN) of the bucket where you want Amazon S3 to + // store replicas of the object identified by the rule. + // + // If there are multiple rules in your replication configuration, all rules + // must specify the same bucket as the destination. A replication configuration + // can replicate objects to only one destination bucket. // // Bucket is a required field Bucket *string `type:"string" required:"true"` - // Container for information regarding encryption based configuration for replicas. + // A container that provides information about encryption. If SourceSelectionCriteria + // is specified, you must specify this element. EncryptionConfiguration *EncryptionConfiguration `type:"structure"` - // The class of storage used to store the object. + // The class of storage used to store the object. By default Amazon S3 uses + // storage class of the source object when creating a replica. StorageClass *string `type:"string" enum:"StorageClass"` } @@ -9892,11 +10465,13 @@ func (s *Encryption) SetKMSKeyId(v string) *Encryption { return s } -// Container for information regarding encryption based configuration for replicas. +// A container for information about the encryption-based configuration for +// replicas. type EncryptionConfiguration struct { _ struct{} `type:"structure"` - // The id of the KMS key used to encrypt the replica object. + // The ID of the AWS KMS key for the AWS Region where the destination bucket + // resides. Amazon S3 uses this key to encrypt the replica object. ReplicaKmsKeyID *string `type:"string"` } @@ -9916,6 +10491,32 @@ func (s *EncryptionConfiguration) SetReplicaKmsKeyID(v string) *EncryptionConfig return s } +type EndEvent struct { + _ struct{} `locationName:"EndEvent" type:"structure"` +} + +// String returns the string representation +func (s EndEvent) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EndEvent) GoString() string { + return s.String() +} + +// The EndEvent is and event in the SelectObjectContentEventStream group of events. +func (s *EndEvent) eventSelectObjectContentEventStream() {} + +// UnmarshalEvent unmarshals the EventStream Message into the EndEvent value. 
+// This method is only used internally within the SDK's EventStream handling. +func (s *EndEvent) UnmarshalEvent( + payloadUnmarshaler protocol.PayloadUnmarshaler, + msg eventstream.Message, +) error { + return nil +} + type Error struct { _ struct{} `type:"structure"` @@ -10003,14 +10604,16 @@ func (s *ErrorDocument) SetKey(v string) *ErrorDocument { return s } -// Container for key value pair that defines the criteria for the filter rule. +// A container for a key value pair that defines the criteria for the filter +// rule. type FilterRule struct { _ struct{} `type:"structure"` - // Object key name prefix or suffix identifying one or more objects to which - // the filtering rule applies. Maximum prefix length can be up to 1,024 characters. + // The object key name prefix or suffix identifying one or more objects to which + // the filtering rule applies. The maximum prefix length is 1,024 characters. // Overlapping prefixes and suffixes are not supported. For more information, - // go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + // see Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + // in the Amazon Simple Storage Service Developer Guide. Name *string `type:"string" enum:"FilterRuleName"` Value *string `type:"string"` @@ -10720,6 +11323,9 @@ func (s *GetBucketLoggingInput) getBucket() (v string) { type GetBucketLoggingOutput struct { _ struct{} `type:"structure"` + // Container for logging information. Presence of this element indicates that + // logging is enabled. Parameters TargetBucket and TargetPrefix are required + // in this case. LoggingEnabled *LoggingEnabled `type:"structure"` } @@ -10932,6 +11538,74 @@ func (s *GetBucketPolicyOutput) SetPolicy(v string) *GetBucketPolicyOutput { return s } +type GetBucketPolicyStatusInput struct { + _ struct{} `type:"structure"` + + // The name of the Amazon S3 bucket whose public-policy status you want to retrieve. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketPolicyStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketPolicyStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetBucketPolicyStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketPolicyStatusInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketPolicyStatusInput) SetBucket(v string) *GetBucketPolicyStatusInput { + s.Bucket = &v + return s +} + +func (s *GetBucketPolicyStatusInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketPolicyStatusOutput struct { + _ struct{} `type:"structure" payload:"PolicyStatus"` + + // The public-policy status for this bucket. 
+ PolicyStatus *PolicyStatus `type:"structure"` +} + +// String returns the string representation +func (s GetBucketPolicyStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketPolicyStatusOutput) GoString() string { + return s.String() +} + +// SetPolicyStatus sets the PolicyStatus field's value. +func (s *GetBucketPolicyStatusOutput) SetPolicyStatus(v *PolicyStatus) *GetBucketPolicyStatusOutput { + s.PolicyStatus = v + return s +} + type GetBucketReplicationInput struct { _ struct{} `type:"structure"` @@ -10978,8 +11652,8 @@ func (s *GetBucketReplicationInput) getBucket() (v string) { type GetBucketReplicationOutput struct { _ struct{} `type:"structure" payload:"ReplicationConfiguration"` - // Container for replication rules. You can add as many as 1,000 rules. Total - // replication configuration size can be up to 2 MB. + // A container for replication rules. You can add up to 1,000 rules. The maximum + // size of a replication configuration is 2 MB. ReplicationConfiguration *ReplicationConfiguration `type:"structure"` } @@ -11429,7 +12103,7 @@ type GetObjectInput struct { // Return the object only if it has been modified since the specified time, // otherwise return a 304 (not modified). - IfModifiedSince *time.Time `location:"header" locationName:"If-Modified-Since" type:"timestamp" timestampFormat:"rfc822"` + IfModifiedSince *time.Time `location:"header" locationName:"If-Modified-Since" type:"timestamp"` // Return the object only if its entity tag (ETag) is different from the one // specified, otherwise return a 304 (not modified). @@ -11437,7 +12111,7 @@ type GetObjectInput struct { // Return the object only if it has not been modified since the specified time, // otherwise return a 412 (precondition failed). - IfUnmodifiedSince *time.Time `location:"header" locationName:"If-Unmodified-Since" type:"timestamp" timestampFormat:"rfc822"` + IfUnmodifiedSince *time.Time `location:"header" locationName:"If-Unmodified-Since" type:"timestamp"` // Key is a required field Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` @@ -11473,7 +12147,7 @@ type GetObjectInput struct { ResponseContentType *string `location:"querystring" locationName:"response-content-type" type:"string"` // Sets the Expires header of the response. - ResponseExpires *time.Time `location:"querystring" locationName:"response-expires" type:"timestamp" timestampFormat:"iso8601"` + ResponseExpires *time.Time `location:"querystring" locationName:"response-expires" type:"timestamp"` // Specifies the algorithm to use to when encrypting the object (e.g., AES256). SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` @@ -11700,7 +12374,7 @@ type GetObjectOutput struct { Expires *string `location:"header" locationName:"Expires" type:"string"` // Last modified date of the object - LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp" timestampFormat:"rfc822"` + LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp"` // A map of metadata to store with the object in S3. 
Metadata map[string]*string `location:"headers" locationName:"x-amz-meta-" type:"map"` @@ -12133,30 +12807,31 @@ func (s *GetObjectTorrentOutput) SetRequestCharged(v string) *GetObjectTorrentOu return s } -type GlacierJobParameters struct { +type GetPublicAccessBlockInput struct { _ struct{} `type:"structure"` - // Glacier retrieval tier at which the restore will be processed. + // The name of the Amazon S3 bucket whose Public Access Block configuration + // you want to retrieve. // - // Tier is a required field - Tier *string `type:"string" required:"true" enum:"Tier"` + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` } // String returns the string representation -func (s GlacierJobParameters) String() string { +func (s GetPublicAccessBlockInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s GlacierJobParameters) GoString() string { +func (s GetPublicAccessBlockInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *GlacierJobParameters) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "GlacierJobParameters"} - if s.Tier == nil { - invalidParams.Add(request.NewErrParamRequired("Tier")) +func (s *GetPublicAccessBlockInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetPublicAccessBlockInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) } if invalidParams.Len() > 0 { @@ -12165,10 +12840,79 @@ func (s *GlacierJobParameters) Validate() error { return nil } -// SetTier sets the Tier field's value. -func (s *GlacierJobParameters) SetTier(v string) *GlacierJobParameters { - s.Tier = &v - return s +// SetBucket sets the Bucket field's value. +func (s *GetPublicAccessBlockInput) SetBucket(v string) *GetPublicAccessBlockInput { + s.Bucket = &v + return s +} + +func (s *GetPublicAccessBlockInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetPublicAccessBlockOutput struct { + _ struct{} `type:"structure" payload:"PublicAccessBlockConfiguration"` + + // The Public Access Block configuration currently in effect for this Amazon + // S3 bucket. + PublicAccessBlockConfiguration *PublicAccessBlockConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetPublicAccessBlockOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetPublicAccessBlockOutput) GoString() string { + return s.String() +} + +// SetPublicAccessBlockConfiguration sets the PublicAccessBlockConfiguration field's value. +func (s *GetPublicAccessBlockOutput) SetPublicAccessBlockConfiguration(v *PublicAccessBlockConfiguration) *GetPublicAccessBlockOutput { + s.PublicAccessBlockConfiguration = v + return s +} + +type GlacierJobParameters struct { + _ struct{} `type:"structure"` + + // Glacier retrieval tier at which the restore will be processed. + // + // Tier is a required field + Tier *string `type:"string" required:"true" enum:"Tier"` +} + +// String returns the string representation +func (s GlacierJobParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GlacierJobParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
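Editorial note: the new GetBucketPolicyStatus and GetPublicAccessBlock input/output types added above have matching read operations. A hedged sketch of querying both for a placeholder bucket:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)
	bucket := aws.String("my-example-bucket") // placeholder

	pab, err := svc.GetPublicAccessBlock(&s3.GetPublicAccessBlockInput{Bucket: bucket})
	if err != nil {
		log.Fatalf("GetPublicAccessBlock failed: %v", err)
	}
	fmt.Printf("BlockPublicAcls: %t\n",
		aws.BoolValue(pab.PublicAccessBlockConfiguration.BlockPublicAcls))

	status, err := svc.GetBucketPolicyStatus(&s3.GetBucketPolicyStatusInput{Bucket: bucket})
	if err != nil {
		log.Fatalf("GetBucketPolicyStatus failed: %v", err)
	}
	fmt.Printf("bucket policy is public: %t\n", aws.BoolValue(status.PolicyStatus.IsPublic))
}
```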
+func (s *GlacierJobParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GlacierJobParameters"} + if s.Tier == nil { + invalidParams.Add(request.NewErrParamRequired("Tier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTier sets the Tier field's value. +func (s *GlacierJobParameters) SetTier(v string) *GlacierJobParameters { + s.Tier = &v + return s } type Grant struct { @@ -12360,7 +13104,7 @@ type HeadObjectInput struct { // Return the object only if it has been modified since the specified time, // otherwise return a 304 (not modified). - IfModifiedSince *time.Time `location:"header" locationName:"If-Modified-Since" type:"timestamp" timestampFormat:"rfc822"` + IfModifiedSince *time.Time `location:"header" locationName:"If-Modified-Since" type:"timestamp"` // Return the object only if its entity tag (ETag) is different from the one // specified, otherwise return a 304 (not modified). @@ -12368,7 +13112,7 @@ type HeadObjectInput struct { // Return the object only if it has not been modified since the specified time, // otherwise return a 412 (precondition failed). - IfUnmodifiedSince *time.Time `location:"header" locationName:"If-Unmodified-Since" type:"timestamp" timestampFormat:"rfc822"` + IfUnmodifiedSince *time.Time `location:"header" locationName:"If-Unmodified-Since" type:"timestamp"` // Key is a required field Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` @@ -12572,7 +13316,7 @@ type HeadObjectOutput struct { Expires *string `location:"header" locationName:"Expires" type:"string"` // Last modified date of the object - LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp" timestampFormat:"rfc822"` + LastModified *time.Time `location:"header" locationName:"Last-Modified" type:"timestamp"` // A map of metadata to store with the object in S3. Metadata map[string]*string `location:"headers" locationName:"x-amz-meta-" type:"map"` @@ -12865,6 +13609,16 @@ type InputSerialization struct { // Describes the serialization of a CSV-encoded object. CSV *CSVInput `type:"structure"` + + // Specifies object's compression format. Valid values: NONE, GZIP, BZIP2. Default + // Value: NONE. + CompressionType *string `type:"string" enum:"CompressionType"` + + // Specifies JSON as object's input serialization format. + JSON *JSONInput `type:"structure"` + + // Specifies Parquet as object's input serialization format. + Parquet *ParquetInput `type:"structure"` } // String returns the string representation @@ -12883,6 +13637,24 @@ func (s *InputSerialization) SetCSV(v *CSVInput) *InputSerialization { return s } +// SetCompressionType sets the CompressionType field's value. +func (s *InputSerialization) SetCompressionType(v string) *InputSerialization { + s.CompressionType = &v + return s +} + +// SetJSON sets the JSON field's value. +func (s *InputSerialization) SetJSON(v *JSONInput) *InputSerialization { + s.JSON = v + return s +} + +// SetParquet sets the Parquet field's value. +func (s *InputSerialization) SetParquet(v *ParquetInput) *InputSerialization { + s.Parquet = v + return s +} + type InventoryConfiguration struct { _ struct{} `type:"structure"` @@ -13060,10 +13832,10 @@ func (s *InventoryDestination) SetS3BucketDestination(v *InventoryS3BucketDestin type InventoryEncryption struct { _ struct{} `type:"structure"` - // Specifies the use of SSE-KMS to encrypt delievered Inventory reports. 
+ // Specifies the use of SSE-KMS to encrypt delivered Inventory reports. SSEKMS *SSEKMS `locationName:"SSE-KMS" type:"structure"` - // Specifies the use of SSE-S3 to encrypt delievered Inventory reports. + // Specifies the use of SSE-S3 to encrypt delivered Inventory reports. SSES3 *SSES3 `locationName:"SSE-S3" type:"structure"` } @@ -13273,12 +14045,58 @@ func (s *InventorySchedule) SetFrequency(v string) *InventorySchedule { return s } -// Container for object key name prefix and suffix filtering rules. +type JSONInput struct { + _ struct{} `type:"structure"` + + // The type of JSON. Valid values: Document, Lines. + Type *string `type:"string" enum:"JSONType"` +} + +// String returns the string representation +func (s JSONInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JSONInput) GoString() string { + return s.String() +} + +// SetType sets the Type field's value. +func (s *JSONInput) SetType(v string) *JSONInput { + s.Type = &v + return s +} + +type JSONOutput struct { + _ struct{} `type:"structure"` + + // The value used to separate individual records in the output. + RecordDelimiter *string `type:"string"` +} + +// String returns the string representation +func (s JSONOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JSONOutput) GoString() string { + return s.String() +} + +// SetRecordDelimiter sets the RecordDelimiter field's value. +func (s *JSONOutput) SetRecordDelimiter(v string) *JSONOutput { + s.RecordDelimiter = &v + return s +} + +// A container for object key name prefix and suffix filtering rules. type KeyFilter struct { _ struct{} `type:"structure"` - // A list of containers for key value pair that defines the criteria for the - // filter rule. + // A list of containers for the key value pair that defines the criteria for + // the filter rule. FilterRules []*FilterRule `locationName:"FilterRule" type:"list" flattened:"true"` } @@ -13298,23 +14116,24 @@ func (s *KeyFilter) SetFilterRules(v []*FilterRule) *KeyFilter { return s } -// Container for specifying the AWS Lambda notification configuration. +// A container for specifying the configuration for AWS Lambda notifications. type LambdaFunctionConfiguration struct { _ struct{} `type:"structure"` // Events is a required field Events []*string `locationName:"Event" type:"list" flattened:"true" required:"true"` - // Container for object key name filtering rules. For information about key - // name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + // A container for object key name filtering rules. For information about key + // name filtering, see Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + // in the Amazon Simple Storage Service Developer Guide. Filter *NotificationConfigurationFilter `type:"structure"` - // Optional unique identifier for configurations in a notification configuration. + // An optional unique identifier for configurations in a notification configuration. // If you don't provide one, Amazon S3 will assign an ID. Id *string `type:"string"` - // Lambda cloud function ARN that Amazon S3 can invoke when it detects events - // of the specified type. + // The Amazon Resource Name (ARN) of the Lambda cloud function that Amazon S3 + // can invoke when it detects events of the specified type. 
// // LambdaFunctionArn is a required field LambdaFunctionArn *string `locationName:"CloudFunction" type:"string" required:"true"` @@ -13489,6 +14308,8 @@ type LifecycleRule struct { // Prefix identifying one or more objects to which the rule applies. This is // deprecated; use Filter instead. + // + // Deprecated: Prefix has been deprecated Prefix *string `deprecated:"true" type:"string"` // If 'Enabled', the rule is currently being applied. If 'Disabled', the rule @@ -15124,7 +15945,7 @@ type ListPartsOutput struct { _ struct{} `type:"structure"` // Date when multipart upload will become eligible for abort operation by lifecycle. - AbortDate *time.Time `location:"header" locationName:"x-amz-abort-date" type:"timestamp" timestampFormat:"rfc822"` + AbortDate *time.Time `location:"header" locationName:"x-amz-abort-date" type:"timestamp"` // Id of the lifecycle rule that makes a multipart upload eligible for abort // operation. @@ -15397,6 +16218,9 @@ func (s *Location) SetUserMetadata(v []*MetadataEntry) *Location { return s } +// Container for logging information. Presence of this element indicates that +// logging is enabled. Parameters TargetBucket and TargetPrefix are required +// in this case. type LoggingEnabled struct { _ struct{} `type:"structure"` @@ -15406,13 +16230,17 @@ type LoggingEnabled struct { // to deliver their logs to the same target bucket. In this case you should // choose a different TargetPrefix for each source bucket so that the delivered // log files can be distinguished by key. - TargetBucket *string `type:"string"` + // + // TargetBucket is a required field + TargetBucket *string `type:"string" required:"true"` TargetGrants []*TargetGrant `locationNameList:"Grant" type:"list"` // This element lets you specify a prefix for the keys that the log files will // be stored under. - TargetPrefix *string `type:"string"` + // + // TargetPrefix is a required field + TargetPrefix *string `type:"string" required:"true"` } // String returns the string representation @@ -15428,6 +16256,12 @@ func (s LoggingEnabled) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *LoggingEnabled) Validate() error { invalidParams := request.ErrInvalidParams{Context: "LoggingEnabled"} + if s.TargetBucket == nil { + invalidParams.Add(request.NewErrParamRequired("TargetBucket")) + } + if s.TargetPrefix == nil { + invalidParams.Add(request.NewErrParamRequired("TargetPrefix")) + } if s.TargetGrants != nil { for i, v := range s.TargetGrants { if v == nil { @@ -15667,7 +16501,7 @@ type MultipartUpload struct { _ struct{} `type:"structure"` // Date and time at which the multipart upload was initiated. - Initiated *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Initiated *time.Time `type:"timestamp"` // Identifies who initiated the multipart upload. Initiator *Initiator `type:"structure"` @@ -15741,7 +16575,8 @@ type NoncurrentVersionExpiration struct { // Specifies the number of days an object is noncurrent before Amazon S3 can // perform the associated action. For information about the noncurrent days // calculations, see How Amazon S3 Calculates When an Object Became Noncurrent - // (http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html) + // (http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html) in + // the Amazon Simple Storage Service Developer Guide. 
NoncurrentDays *int64 `type:"integer"` } @@ -15762,17 +16597,19 @@ func (s *NoncurrentVersionExpiration) SetNoncurrentDays(v int64) *NoncurrentVers } // Container for the transition rule that describes when noncurrent objects -// transition to the STANDARD_IA or GLACIER storage class. If your bucket is -// versioning-enabled (or versioning is suspended), you can set this action -// to request that Amazon S3 transition noncurrent object versions to the STANDARD_IA -// or GLACIER storage class at a specific period in the object's lifetime. +// transition to the STANDARD_IA, ONEZONE_IA, or GLACIER storage class. If your +// bucket is versioning-enabled (or versioning is suspended), you can set this +// action to request that Amazon S3 transition noncurrent object versions to +// the STANDARD_IA, ONEZONE_IA, or GLACIER storage class at a specific period +// in the object's lifetime. type NoncurrentVersionTransition struct { _ struct{} `type:"structure"` // Specifies the number of days an object is noncurrent before Amazon S3 can // perform the associated action. For information about the noncurrent days // calculations, see How Amazon S3 Calculates When an Object Became Noncurrent - // (http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html) + // (http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html) in + // the Amazon Simple Storage Service Developer Guide. NoncurrentDays *int64 `type:"integer"` // The class of storage used to store the object. @@ -15801,8 +16638,8 @@ func (s *NoncurrentVersionTransition) SetStorageClass(v string) *NoncurrentVersi return s } -// Container for specifying the notification configuration of the bucket. If -// this element is empty, notifications are turned off on the bucket. +// A container for specifying the notification configuration of the bucket. +// If this element is empty, notifications are turned off for the bucket. type NotificationConfiguration struct { _ struct{} `type:"structure"` @@ -15919,12 +16756,13 @@ func (s *NotificationConfigurationDeprecated) SetTopicConfiguration(v *TopicConf return s } -// Container for object key name filtering rules. For information about key -// name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) +// A container for object key name filtering rules. For information about key +// name filtering, see Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) +// in the Amazon Simple Storage Service Developer Guide. type NotificationConfigurationFilter struct { _ struct{} `type:"structure"` - // Container for object key name prefix and suffix filtering rules. + // A container for object key name prefix and suffix filtering rules. Key *KeyFilter `locationName:"S3Key" type:"structure"` } @@ -15951,7 +16789,7 @@ type Object struct { Key *string `min:"1" type:"string"` - LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastModified *time.Time `type:"timestamp"` Owner *Owner `type:"structure"` @@ -16070,7 +16908,7 @@ type ObjectVersion struct { Key *string `min:"1" type:"string"` // Date and time the object was last modified. - LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastModified *time.Time `type:"timestamp"` Owner *Owner `type:"structure"` @@ -16187,6 +17025,9 @@ type OutputSerialization struct { // Describes the serialization of CSV-encoded Select results. 
CSV *CSVOutput `type:"structure"` + + // Specifies JSON as request's output serialization format. + JSON *JSONOutput `type:"structure"` } // String returns the string representation @@ -16205,6 +17046,12 @@ func (s *OutputSerialization) SetCSV(v *CSVOutput) *OutputSerialization { return s } +// SetJSON sets the JSON field's value. +func (s *OutputSerialization) SetJSON(v *JSONOutput) *OutputSerialization { + s.JSON = v + return s +} + type Owner struct { _ struct{} `type:"structure"` @@ -16235,6 +17082,20 @@ func (s *Owner) SetID(v string) *Owner { return s } +type ParquetInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ParquetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParquetInput) GoString() string { + return s.String() +} + type Part struct { _ struct{} `type:"structure"` @@ -16242,13 +17103,13 @@ type Part struct { ETag *string `type:"string"` // Date and time at which the part was uploaded. - LastModified *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastModified *time.Time `type:"timestamp"` // Part number identifying the part. This is a positive integer between 1 and // 10,000. PartNumber *int64 `type:"integer"` - // Size of the uploaded part data. + // Size in bytes of the uploaded part data. Size *int64 `type:"integer"` } @@ -16286,6 +17147,218 @@ func (s *Part) SetSize(v int64) *Part { return s } +// The container element for this bucket's public-policy status. +type PolicyStatus struct { + _ struct{} `type:"structure"` + + // The public-policy status for this bucket. TRUE indicates that this bucket + // is public. FALSE indicates that the bucket is not public. + IsPublic *bool `locationName:"IsPublic" type:"boolean"` +} + +// String returns the string representation +func (s PolicyStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PolicyStatus) GoString() string { + return s.String() +} + +// SetIsPublic sets the IsPublic field's value. +func (s *PolicyStatus) SetIsPublic(v bool) *PolicyStatus { + s.IsPublic = &v + return s +} + +type Progress struct { + _ struct{} `type:"structure"` + + // The current number of uncompressed object bytes processed. + BytesProcessed *int64 `type:"long"` + + // The current number of bytes of records payload data returned. + BytesReturned *int64 `type:"long"` + + // The current number of object bytes scanned. + BytesScanned *int64 `type:"long"` +} + +// String returns the string representation +func (s Progress) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Progress) GoString() string { + return s.String() +} + +// SetBytesProcessed sets the BytesProcessed field's value. +func (s *Progress) SetBytesProcessed(v int64) *Progress { + s.BytesProcessed = &v + return s +} + +// SetBytesReturned sets the BytesReturned field's value. +func (s *Progress) SetBytesReturned(v int64) *Progress { + s.BytesReturned = &v + return s +} + +// SetBytesScanned sets the BytesScanned field's value. +func (s *Progress) SetBytesScanned(v int64) *Progress { + s.BytesScanned = &v + return s +} + +type ProgressEvent struct { + _ struct{} `locationName:"ProgressEvent" type:"structure" payload:"Details"` + + // The Progress event details. 
+ Details *Progress `locationName:"Details" type:"structure"`
+}
+
+// String returns the string representation
+func (s ProgressEvent) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ProgressEvent) GoString() string {
+ return s.String()
+}
+
+// SetDetails sets the Details field's value.
+func (s *ProgressEvent) SetDetails(v *Progress) *ProgressEvent {
+ s.Details = v
+ return s
+}
+
+// The ProgressEvent is an event in the SelectObjectContentEventStream group of events.
+func (s *ProgressEvent) eventSelectObjectContentEventStream() {}
+
+// UnmarshalEvent unmarshals the EventStream Message into the ProgressEvent value.
+// This method is only used internally within the SDK's EventStream handling.
+func (s *ProgressEvent) UnmarshalEvent(
+ payloadUnmarshaler protocol.PayloadUnmarshaler,
+ msg eventstream.Message,
+) error {
+ if err := payloadUnmarshaler.UnmarshalPayload(
+ bytes.NewReader(msg.Payload), s,
+ ); err != nil {
+ return err
+ }
+ return nil
+}
+
+// The container element for all Public Access Block configuration options.
+// You can enable the configuration options in any combination.
+//
+// Amazon S3 considers a bucket policy public unless at least one of the following
+// conditions is true:
+//
+// The policy limits access to a set of CIDRs using aws:SourceIp. For more information
+// on CIDR, see http://www.rfc-editor.org/rfc/rfc4632.txt (http://www.rfc-editor.org/rfc/rfc4632.txt)
+//
+// The policy grants permissions, not including any "bad actions," to one of
+// the following:
+//
+// A fixed AWS principal, user, role, or service principal
+//
+// A fixed aws:SourceArn
+//
+// A fixed aws:SourceVpc
+//
+// A fixed aws:SourceVpce
+//
+// A fixed aws:SourceOwner
+//
+// A fixed aws:SourceAccount
+//
+// A fixed value of s3:x-amz-server-side-encryption-aws-kms-key-id
+//
+// A fixed value of aws:userid outside the pattern "AROLEID:*"
+//
+// "Bad actions" are those that could expose the data inside a bucket to reads
+// or writes by the public. These actions are s3:Get*, s3:List*, s3:AbortMultipartUpload,
+// s3:Delete*, s3:Put*, and s3:RestoreObject.
+//
+// The star notation for bad actions indicates that all matching operations
+// are considered bad actions. For example, because s3:Get* is a bad action,
+// s3:GetObject, s3:GetObjectVersion, and s3:GetObjectAcl are all bad actions.
+type PublicAccessBlockConfiguration struct {
+ _ struct{} `type:"structure"`
+
+ // Specifies whether Amazon S3 should block public ACLs for this bucket. Setting
+ // this element to TRUE causes the following behavior:
+ //
+ // * PUT Bucket acl and PUT Object acl calls will fail if the specified ACL
+ // allows public access.
+ //
+ // * PUT Object calls will fail if the request includes an object ACL.
+ BlockPublicAcls *bool `locationName:"BlockPublicAcls" type:"boolean"`
+
+ // Specifies whether Amazon S3 should block public bucket policies for this
+ // bucket. Setting this element to TRUE causes Amazon S3 to reject calls to
+ // PUT Bucket policy if the specified bucket policy allows public access.
+ //
+ // Note that enabling this setting doesn't affect existing bucket policies.
+ BlockPublicPolicy *bool `locationName:"BlockPublicPolicy" type:"boolean"`
+
+ // Specifies whether Amazon S3 should ignore public ACLs for this bucket. Setting
+ // this element to TRUE causes Amazon S3 to ignore all public ACLs on this bucket
+ // and any objects that it contains.
+ // + // Note that enabling this setting doesn't affect the persistence of any existing + // ACLs and doesn't prevent new public ACLs from being set. + IgnorePublicAcls *bool `locationName:"IgnorePublicAcls" type:"boolean"` + + // Specifies whether Amazon S3 should restrict public bucket policies for this + // bucket. If this element is set to TRUE, then only the bucket owner and AWS + // Services can access this bucket if it has a public policy. + // + // Note that enabling this setting doesn't affect previously stored bucket policies, + // except that public and cross-account access within any public bucket policy, + // including non-public delegation to specific accounts, is blocked. + RestrictPublicBuckets *bool `locationName:"RestrictPublicBuckets" type:"boolean"` +} + +// String returns the string representation +func (s PublicAccessBlockConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PublicAccessBlockConfiguration) GoString() string { + return s.String() +} + +// SetBlockPublicAcls sets the BlockPublicAcls field's value. +func (s *PublicAccessBlockConfiguration) SetBlockPublicAcls(v bool) *PublicAccessBlockConfiguration { + s.BlockPublicAcls = &v + return s +} + +// SetBlockPublicPolicy sets the BlockPublicPolicy field's value. +func (s *PublicAccessBlockConfiguration) SetBlockPublicPolicy(v bool) *PublicAccessBlockConfiguration { + s.BlockPublicPolicy = &v + return s +} + +// SetIgnorePublicAcls sets the IgnorePublicAcls field's value. +func (s *PublicAccessBlockConfiguration) SetIgnorePublicAcls(v bool) *PublicAccessBlockConfiguration { + s.IgnorePublicAcls = &v + return s +} + +// SetRestrictPublicBuckets sets the RestrictPublicBuckets field's value. +func (s *PublicAccessBlockConfiguration) SetRestrictPublicBuckets(v bool) *PublicAccessBlockConfiguration { + s.RestrictPublicBuckets = &v + return s +} + type PutBucketAccelerateConfigurationInput struct { _ struct{} `type:"structure" payload:"AccelerateConfiguration"` @@ -17134,8 +18207,8 @@ type PutBucketNotificationConfigurationInput struct { // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` - // Container for specifying the notification configuration of the bucket. If - // this element is empty, notifications are turned off on the bucket. + // A container for specifying the notification configuration of the bucket. + // If this element is empty, notifications are turned off for the bucket. // // NotificationConfiguration is a required field NotificationConfiguration *NotificationConfiguration `locationName:"NotificationConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` @@ -17361,8 +18434,8 @@ type PutBucketReplicationInput struct { // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` - // Container for replication rules. You can add as many as 1,000 rules. Total - // replication configuration size can be up to 2 MB. + // A container for replication rules. You can add up to 1,000 rules. The maximum + // size of a replication configuration is 2 MB. 
// // ReplicationConfiguration is a required field ReplicationConfiguration *ReplicationConfiguration `locationName:"ReplicationConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` @@ -17943,7 +19016,7 @@ type PutObjectInput struct { ContentType *string `location:"header" locationName:"Content-Type" type:"string"` // The date and time at which the object is no longer cacheable. - Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp" timestampFormat:"rfc822"` + Expires *time.Time `location:"header" locationName:"Expires" type:"timestamp"` // Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object. GrantFullControl *string `location:"header" locationName:"x-amz-grant-full-control" type:"string"` @@ -17999,7 +19072,8 @@ type PutObjectInput struct { // The type of storage to use for the object. Defaults to 'STANDARD'. StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` - // The tag-set for the object. The tag-set must be encoded as URL Query parameters + // The tag-set for the object. The tag-set must be encoded as URL Query parameters. + // (For example, "Key1=Value1") Tagging *string `location:"header" locationName:"x-amz-tagging" type:"string"` // If the bucket is configured as a website, redirects requests for this object @@ -18406,47 +19480,40 @@ func (s *PutObjectTaggingOutput) SetVersionId(v string) *PutObjectTaggingOutput return s } -// Container for specifying an configuration when you want Amazon S3 to publish -// events to an Amazon Simple Queue Service (Amazon SQS) queue. -type QueueConfiguration struct { - _ struct{} `type:"structure"` - - // Events is a required field - Events []*string `locationName:"Event" type:"list" flattened:"true" required:"true"` - - // Container for object key name filtering rules. For information about key - // name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) - Filter *NotificationConfigurationFilter `type:"structure"` +type PutPublicAccessBlockInput struct { + _ struct{} `type:"structure" payload:"PublicAccessBlockConfiguration"` - // Optional unique identifier for configurations in a notification configuration. - // If you don't provide one, Amazon S3 will assign an ID. - Id *string `type:"string"` + // The name of the Amazon S3 bucket whose Public Access Block configuration + // you want to set. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` - // Amazon SQS queue ARN to which Amazon S3 will publish a message when it detects - // events of specified type. + // The Public Access Block configuration that you want to apply to this Amazon + // S3 bucket. 
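// Illustrative usage sketch (not part of the generated SDK code): assuming a configured
// *s3.S3 client named svc, that the PutPublicAccessBlock operation method is exposed
// alongside these types, and a hypothetical bucket name, applying a block-all
// configuration could look like this:
//
//	input := &s3.PutPublicAccessBlockInput{
//		Bucket: aws.String("example-bucket"), // hypothetical bucket name
//		PublicAccessBlockConfiguration: &s3.PublicAccessBlockConfiguration{
//			BlockPublicAcls:       aws.Bool(true),
//			BlockPublicPolicy:     aws.Bool(true),
//			IgnorePublicAcls:      aws.Bool(true),
//			RestrictPublicBuckets: aws.Bool(true),
//		},
//	}
//	if _, err := svc.PutPublicAccessBlock(input); err != nil {
//		// handle the error
//	}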
// - // QueueArn is a required field - QueueArn *string `locationName:"Queue" type:"string" required:"true"` + // PublicAccessBlockConfiguration is a required field + PublicAccessBlockConfiguration *PublicAccessBlockConfiguration `locationName:"PublicAccessBlockConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` } // String returns the string representation -func (s QueueConfiguration) String() string { +func (s PutPublicAccessBlockInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s QueueConfiguration) GoString() string { +func (s PutPublicAccessBlockInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *QueueConfiguration) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "QueueConfiguration"} - if s.Events == nil { - invalidParams.Add(request.NewErrParamRequired("Events")) +func (s *PutPublicAccessBlockInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutPublicAccessBlockInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) } - if s.QueueArn == nil { - invalidParams.Add(request.NewErrParamRequired("QueueArn")) + if s.PublicAccessBlockConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("PublicAccessBlockConfiguration")) } if invalidParams.Len() > 0 { @@ -18455,7 +19522,91 @@ func (s *QueueConfiguration) Validate() error { return nil } -// SetEvents sets the Events field's value. +// SetBucket sets the Bucket field's value. +func (s *PutPublicAccessBlockInput) SetBucket(v string) *PutPublicAccessBlockInput { + s.Bucket = &v + return s +} + +func (s *PutPublicAccessBlockInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetPublicAccessBlockConfiguration sets the PublicAccessBlockConfiguration field's value. +func (s *PutPublicAccessBlockInput) SetPublicAccessBlockConfiguration(v *PublicAccessBlockConfiguration) *PutPublicAccessBlockInput { + s.PublicAccessBlockConfiguration = v + return s +} + +type PutPublicAccessBlockOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutPublicAccessBlockOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutPublicAccessBlockOutput) GoString() string { + return s.String() +} + +// A container for specifying the configuration for publication of messages +// to an Amazon Simple Queue Service (Amazon SQS) queue.when Amazon S3 detects +// specified events. +type QueueConfiguration struct { + _ struct{} `type:"structure"` + + // Events is a required field + Events []*string `locationName:"Event" type:"list" flattened:"true" required:"true"` + + // A container for object key name filtering rules. For information about key + // name filtering, see Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) + // in the Amazon Simple Storage Service Developer Guide. + Filter *NotificationConfigurationFilter `type:"structure"` + + // An optional unique identifier for configurations in a notification configuration. + // If you don't provide one, Amazon S3 will assign an ID. + Id *string `type:"string"` + + // The Amazon Resource Name (ARN) of the Amazon SQS queue to which Amazon S3 + // will publish a message when it detects events of the specified type. 
+ //
+ // QueueArn is a required field
+ QueueArn *string `locationName:"Queue" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s QueueConfiguration) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s QueueConfiguration) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *QueueConfiguration) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "QueueConfiguration"}
+ if s.Events == nil {
+ invalidParams.Add(request.NewErrParamRequired("Events"))
+ }
+ if s.QueueArn == nil {
+ invalidParams.Add(request.NewErrParamRequired("QueueArn"))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetEvents sets the Events field's value.
func (s *QueueConfiguration) SetEvents(v []*string) *QueueConfiguration {
s.Events = v
return s
@@ -18482,12 +19633,14 @@ func (s *QueueConfiguration) SetQueueArn(v string) *QueueConfiguration {
type QueueConfigurationDeprecated struct {
_ struct{} `type:"structure"`

- // Bucket event for which to send notifications.
+ // The bucket event for which to send notifications.
+ //
+ // Deprecated: Event has been deprecated
Event *string `deprecated:"true" type:"string" enum:"Event"`

Events []*string `locationName:"Event" type:"list" flattened:"true"`

- // Optional unique identifier for configurations in a notification configuration.
+ // An optional unique identifier for configurations in a notification configuration.
// If you don't provide one, Amazon S3 will assign an ID.
Id *string `type:"string"`

@@ -18528,6 +19681,45 @@ func (s *QueueConfigurationDeprecated) SetQueue(v string) *QueueConfigurationDep
return s
}

+type RecordsEvent struct {
+ _ struct{} `locationName:"RecordsEvent" type:"structure" payload:"Payload"`
+
+ // The byte array of partial, one or more result records.
+ //
+ // Payload is automatically base64 encoded/decoded by the SDK.
+ Payload []byte `type:"blob"`
+}
+
+// String returns the string representation
+func (s RecordsEvent) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s RecordsEvent) GoString() string {
+ return s.String()
+}
+
+// SetPayload sets the Payload field's value.
+func (s *RecordsEvent) SetPayload(v []byte) *RecordsEvent {
+ s.Payload = v
+ return s
+}
+
+// The RecordsEvent is an event in the SelectObjectContentEventStream group of events.
+func (s *RecordsEvent) eventSelectObjectContentEventStream() {}
+
+// UnmarshalEvent unmarshals the EventStream Message into the RecordsEvent value.
+// This method is only used internally within the SDK's EventStream handling.
+func (s *RecordsEvent) UnmarshalEvent(
+ payloadUnmarshaler protocol.PayloadUnmarshaler,
+ msg eventstream.Message,
+) error {
+ s.Payload = make([]byte, len(msg.Payload))
+ copy(s.Payload, msg.Payload)
+ return nil
+}
+
type Redirect struct {
_ struct{} `type:"structure"`

@@ -18644,19 +19836,19 @@ func (s *RedirectAllRequestsTo) SetProtocol(v string) *RedirectAllRequestsTo {
return s
}

-// Container for replication rules. You can add as many as 1,000 rules. Total
-// replication configuration size can be up to 2 MB.
+// A container for replication rules. You can add up to 1,000 rules. The maximum
+// size of a replication configuration is 2 MB.
type ReplicationConfiguration struct {
_ struct{} `type:"structure"`

- // Amazon Resource Name (ARN) of an IAM role for Amazon S3 to assume when replicating
- // the objects.
+ // The Amazon Resource Name (ARN) of the AWS Identity and Access Management
+ // (IAM) role that Amazon S3 can assume when replicating the objects.
//
// Role is a required field
Role *string `type:"string" required:"true"`

- // Container for information about a particular replication rule. Replication
- // configuration must have at least one rule and can contain up to 1,000 rules.
+ // A container for one or more replication rules. A replication configuration
+ // must have at least one rule and can contain a maximum of 1,000 rules.
//
// Rules is a required field
Rules []*ReplicationRule `locationName:"Rule" type:"list" flattened:"true" required:"true"`
@@ -18710,29 +19902,57 @@ func (s *ReplicationConfiguration) SetRules(v []*ReplicationRule) *ReplicationCo
return s
}

-// Container for information about a particular replication rule.
+// A container for information about a specific replication rule.
type ReplicationRule struct {
_ struct{} `type:"structure"`

- // Container for replication destination information.
+ // Specifies whether Amazon S3 should replicate delete markers.
+ DeleteMarkerReplication *DeleteMarkerReplication `type:"structure"`
+
+ // A container for information about the replication destination.
//
// Destination is a required field
Destination *Destination `type:"structure" required:"true"`

- // Unique identifier for the rule. The value cannot be longer than 255 characters.
+ // A filter that identifies the subset of objects to which the replication rule
+ // applies. A Filter must specify exactly one Prefix, Tag, or an And child element.
+ Filter *ReplicationRuleFilter `type:"structure"`
+
+ // A unique identifier for the rule. The maximum value is 255 characters.
ID *string `type:"string"`

- // Object keyname prefix identifying one or more objects to which the rule applies.
- // Maximum prefix length can be up to 1,024 characters. Overlapping prefixes
- // are not supported.
+ // An object keyname prefix that identifies the object or objects to which the
+ // rule applies. The maximum prefix length is 1,024 characters.
//
- // Prefix is a required field
- Prefix *string `type:"string" required:"true"`
+ // Deprecated: Prefix has been deprecated
+ Prefix *string `deprecated:"true" type:"string"`

- // Container for filters that define which source objects should be replicated.
+ // The priority associated with the rule. If you specify multiple rules in a
+ // replication configuration, Amazon S3 prioritizes the rules to prevent conflicts
+ // when filtering. If two or more rules identify the same object based on a
+ // specified filter, the rule with higher priority takes precedence. For example:
+ //
+ // * The same object qualifies based on prefix-based filter criteria, and the
+ // prefixes that you specified in multiple rules overlap
+ //
+ // * The same object qualifies based on tag-based filter criteria specified
+ // in multiple rules
+ //
+ // For more information, see Cross-Region Replication (CRR) (https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html)
+ // in the Amazon S3 Developer Guide.
+ Priority *int64 `type:"integer"`
+
+ // A container that describes additional filters for identifying the source
+ // objects that you want to replicate. You can choose to enable or disable the
+ // replication of these objects.
Currently, Amazon S3 supports only the filter + // that you can specify for objects created with server-side encryption using + // an AWS KMS-Managed Key (SSE-KMS). + // + // If you want Amazon S3 to replicate objects created with server-side encryption + // using AWS KMS-Managed Keys. SourceSelectionCriteria *SourceSelectionCriteria `type:"structure"` - // The rule is ignored if status is not Enabled. + // If status isn't enabled, the rule is ignored. // // Status is a required field Status *string `type:"string" required:"true" enum:"ReplicationRuleStatus"` @@ -18754,9 +19974,6 @@ func (s *ReplicationRule) Validate() error { if s.Destination == nil { invalidParams.Add(request.NewErrParamRequired("Destination")) } - if s.Prefix == nil { - invalidParams.Add(request.NewErrParamRequired("Prefix")) - } if s.Status == nil { invalidParams.Add(request.NewErrParamRequired("Status")) } @@ -18765,6 +19982,11 @@ func (s *ReplicationRule) Validate() error { invalidParams.AddNested("Destination", err.(request.ErrInvalidParams)) } } + if s.Filter != nil { + if err := s.Filter.Validate(); err != nil { + invalidParams.AddNested("Filter", err.(request.ErrInvalidParams)) + } + } if s.SourceSelectionCriteria != nil { if err := s.SourceSelectionCriteria.Validate(); err != nil { invalidParams.AddNested("SourceSelectionCriteria", err.(request.ErrInvalidParams)) @@ -18777,12 +19999,24 @@ func (s *ReplicationRule) Validate() error { return nil } +// SetDeleteMarkerReplication sets the DeleteMarkerReplication field's value. +func (s *ReplicationRule) SetDeleteMarkerReplication(v *DeleteMarkerReplication) *ReplicationRule { + s.DeleteMarkerReplication = v + return s +} + // SetDestination sets the Destination field's value. func (s *ReplicationRule) SetDestination(v *Destination) *ReplicationRule { s.Destination = v return s } +// SetFilter sets the Filter field's value. +func (s *ReplicationRule) SetFilter(v *ReplicationRuleFilter) *ReplicationRule { + s.Filter = v + return s +} + // SetID sets the ID field's value. func (s *ReplicationRule) SetID(v string) *ReplicationRule { s.ID = &v @@ -18795,6 +20029,12 @@ func (s *ReplicationRule) SetPrefix(v string) *ReplicationRule { return s } +// SetPriority sets the Priority field's value. +func (s *ReplicationRule) SetPriority(v int64) *ReplicationRule { + s.Priority = &v + return s +} + // SetSourceSelectionCriteria sets the SourceSelectionCriteria field's value. func (s *ReplicationRule) SetSourceSelectionCriteria(v *SourceSelectionCriteria) *ReplicationRule { s.SourceSelectionCriteria = v @@ -18807,6 +20047,130 @@ func (s *ReplicationRule) SetStatus(v string) *ReplicationRule { return s } +type ReplicationRuleAndOperator struct { + _ struct{} `type:"structure"` + + Prefix *string `type:"string"` + + Tags []*Tag `locationName:"Tag" locationNameList:"Tag" type:"list" flattened:"true"` +} + +// String returns the string representation +func (s ReplicationRuleAndOperator) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicationRuleAndOperator) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
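// An illustrative sketch (not part of the generated code) of a replication rule that uses
// the new Filter and Priority fields described above. The destination bucket ARN, the IAM
// role ARN, and the use of Destination's Bucket field are assumptions made for the example:
//
//	rule := &s3.ReplicationRule{
//		Destination: &s3.Destination{
//			Bucket: aws.String("arn:aws:s3:::example-destination-bucket"), // hypothetical
//		},
//		Filter:   &s3.ReplicationRuleFilter{Prefix: aws.String("logs/")},
//		Priority: aws.Int64(1),
//		Status:   aws.String("Enabled"),
//	}
//	cfg := &s3.ReplicationConfiguration{
//		Role:  aws.String("arn:aws:iam::123456789012:role/example-replication-role"), // hypothetical
//		Rules: []*s3.ReplicationRule{rule},
//	}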
+func (s *ReplicationRuleAndOperator) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicationRuleAndOperator"} + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPrefix sets the Prefix field's value. +func (s *ReplicationRuleAndOperator) SetPrefix(v string) *ReplicationRuleAndOperator { + s.Prefix = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *ReplicationRuleAndOperator) SetTags(v []*Tag) *ReplicationRuleAndOperator { + s.Tags = v + return s +} + +// A filter that identifies the subset of objects to which the replication rule +// applies. A Filter must specify exactly one Prefix, Tag, or an And child element. +type ReplicationRuleFilter struct { + _ struct{} `type:"structure"` + + // A container for specifying rule filters. The filters determine the subset + // of objects to which the rule applies. This element is required only if you + // specify more than one filter. For example: + // + // * If you specify both a Prefix and a Tag filter, wrap these filters in + // an And tag. + // + // * If you specify a filter based on multiple tags, wrap the Tag elements + // in an And tag. + And *ReplicationRuleAndOperator `type:"structure"` + + // An object keyname prefix that identifies the subset of objects to which the + // rule applies. + Prefix *string `type:"string"` + + // A container for specifying a tag key and value. + // + // The rule applies only to objects that have the tag in their tag set. + Tag *Tag `type:"structure"` +} + +// String returns the string representation +func (s ReplicationRuleFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplicationRuleFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ReplicationRuleFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ReplicationRuleFilter"} + if s.And != nil { + if err := s.And.Validate(); err != nil { + invalidParams.AddNested("And", err.(request.ErrInvalidParams)) + } + } + if s.Tag != nil { + if err := s.Tag.Validate(); err != nil { + invalidParams.AddNested("Tag", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAnd sets the And field's value. +func (s *ReplicationRuleFilter) SetAnd(v *ReplicationRuleAndOperator) *ReplicationRuleFilter { + s.And = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *ReplicationRuleFilter) SetPrefix(v string) *ReplicationRuleFilter { + s.Prefix = &v + return s +} + +// SetTag sets the Tag field's value. +func (s *ReplicationRuleFilter) SetTag(v *Tag) *ReplicationRuleFilter { + s.Tag = v + return s +} + type RequestPaymentConfiguration struct { _ struct{} `type:"structure"` @@ -18845,6 +20209,30 @@ func (s *RequestPaymentConfiguration) SetPayer(v string) *RequestPaymentConfigur return s } +type RequestProgress struct { + _ struct{} `type:"structure"` + + // Specifies whether periodic QueryProgress frames should be sent. Valid values: + // TRUE, FALSE. Default value: FALSE. 
+ Enabled *bool `type:"boolean"` +} + +// String returns the string representation +func (s RequestProgress) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RequestProgress) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. +func (s *RequestProgress) SetEnabled(v bool) *RequestProgress { + s.Enabled = &v + return s +} + type RestoreObjectInput struct { _ struct{} `type:"structure" payload:"RestoreRequest"` @@ -19087,7 +20475,7 @@ type RoutingRule struct { // Container for redirect information. You can redirect requests to another // host, to another page, or with another protocol. In the event of an error, - // you can can specify a different error code to return. + // you can specify a different error code to return. // // Redirect is a required field Redirect *Redirect `type:"structure" required:"true"` @@ -19148,10 +20536,11 @@ type Rule struct { NoncurrentVersionExpiration *NoncurrentVersionExpiration `type:"structure"` // Container for the transition rule that describes when noncurrent objects - // transition to the STANDARD_IA or GLACIER storage class. If your bucket is - // versioning-enabled (or versioning is suspended), you can set this action - // to request that Amazon S3 transition noncurrent object versions to the STANDARD_IA - // or GLACIER storage class at a specific period in the object's lifetime. + // transition to the STANDARD_IA, ONEZONE_IA, or GLACIER storage class. If your + // bucket is versioning-enabled (or versioning is suspended), you can set this + // action to request that Amazon S3 transition noncurrent object versions to + // the STANDARD_IA, ONEZONE_IA, or GLACIER storage class at a specific period + // in the object's lifetime. NoncurrentVersionTransition *NoncurrentVersionTransition `type:"structure"` // Prefix identifying one or more objects to which the rule applies. @@ -19242,7 +20631,7 @@ func (s *Rule) SetTransition(v *Transition) *Rule { return s } -// Specifies the use of SSE-KMS to encrypt delievered Inventory reports. +// Specifies the use of SSE-KMS to encrypt delivered Inventory reports. type SSEKMS struct { _ struct{} `locationName:"SSE-KMS" type:"structure"` @@ -19282,7 +20671,7 @@ func (s *SSEKMS) SetKeyId(v string) *SSEKMS { return s } -// Specifies the use of SSE-S3 to encrypt delievered Inventory reports. +// Specifies the use of SSE-S3 to encrypt delivered Inventory reports. type SSES3 struct { _ struct{} `locationName:"SSE-S3" type:"structure"` } @@ -19297,6 +20686,439 @@ func (s SSES3) GoString() string { return s.String() } +// SelectObjectContentEventStream provides handling of EventStreams for +// the SelectObjectContent API. +// +// Use this type to receive SelectObjectContentEventStream events. The events +// can be read from the Events channel member. +// +// The events that can be received are: +// +// * ContinuationEvent +// * EndEvent +// * ProgressEvent +// * RecordsEvent +// * StatsEvent +type SelectObjectContentEventStream struct { + // Reader is the EventStream reader for the SelectObjectContentEventStream + // events. This value is automatically set by the SDK when the API call is made + // Use this member when unit testing your code with the SDK to mock out the + // EventStream Reader. + // + // Must not be nil. + Reader SelectObjectContentEventStreamReader + + // StreamCloser is the io.Closer for the EventStream connection. For HTTP + // EventStream this is the response Body. 
The stream will be closed when + // the Close method of the EventStream is called. + StreamCloser io.Closer +} + +// Close closes the EventStream. This will also cause the Events channel to be +// closed. You can use the closing of the Events channel to terminate your +// application's read from the API's EventStream. +// +// Will close the underlying EventStream reader. For EventStream over HTTP +// connection this will also close the HTTP connection. +// +// Close must be called when done using the EventStream API. Not calling Close +// may result in resource leaks. +func (es *SelectObjectContentEventStream) Close() (err error) { + es.Reader.Close() + return es.Err() +} + +// Err returns any error that occurred while reading EventStream Events from +// the service API's response. Returns nil if there were no errors. +func (es *SelectObjectContentEventStream) Err() error { + if err := es.Reader.Err(); err != nil { + return err + } + es.StreamCloser.Close() + + return nil +} + +// Events returns a channel to read EventStream Events from the +// SelectObjectContent API. +// +// These events are: +// +// * ContinuationEvent +// * EndEvent +// * ProgressEvent +// * RecordsEvent +// * StatsEvent +func (es *SelectObjectContentEventStream) Events() <-chan SelectObjectContentEventStreamEvent { + return es.Reader.Events() +} + +// SelectObjectContentEventStreamEvent groups together all EventStream +// events read from the SelectObjectContent API. +// +// These events are: +// +// * ContinuationEvent +// * EndEvent +// * ProgressEvent +// * RecordsEvent +// * StatsEvent +type SelectObjectContentEventStreamEvent interface { + eventSelectObjectContentEventStream() +} + +// SelectObjectContentEventStreamReader provides the interface for reading EventStream +// Events from the SelectObjectContent API. The +// default implementation for this interface will be SelectObjectContentEventStream. +// +// The reader's Close method must allow multiple concurrent calls. +// +// These events are: +// +// * ContinuationEvent +// * EndEvent +// * ProgressEvent +// * RecordsEvent +// * StatsEvent +type SelectObjectContentEventStreamReader interface { + // Returns a channel of events as they are read from the event stream. + Events() <-chan SelectObjectContentEventStreamEvent + + // Close will close the underlying event stream reader. For event stream over + // HTTP this will also close the HTTP connection. + Close() error + + // Returns any error that has occured while reading from the event stream. + Err() error +} + +type readSelectObjectContentEventStream struct { + eventReader *eventstreamapi.EventReader + stream chan SelectObjectContentEventStreamEvent + errVal atomic.Value + + done chan struct{} + closeOnce sync.Once +} + +func newReadSelectObjectContentEventStream( + reader io.ReadCloser, + unmarshalers request.HandlerList, + logger aws.Logger, + logLevel aws.LogLevelType, +) *readSelectObjectContentEventStream { + r := &readSelectObjectContentEventStream{ + stream: make(chan SelectObjectContentEventStreamEvent), + done: make(chan struct{}), + } + + r.eventReader = eventstreamapi.NewEventReader( + reader, + protocol.HandlerPayloadUnmarshal{ + Unmarshalers: unmarshalers, + }, + r.unmarshalerForEventType, + ) + r.eventReader.UseLogger(logger, logLevel) + + return r +} + +// Close will close the underlying event stream reader. For EventStream over +// HTTP this will also close the HTTP connection. 
+func (r *readSelectObjectContentEventStream) Close() error { + r.closeOnce.Do(r.safeClose) + + return r.Err() +} + +func (r *readSelectObjectContentEventStream) safeClose() { + close(r.done) + err := r.eventReader.Close() + if err != nil { + r.errVal.Store(err) + } +} + +func (r *readSelectObjectContentEventStream) Err() error { + if v := r.errVal.Load(); v != nil { + return v.(error) + } + + return nil +} + +func (r *readSelectObjectContentEventStream) Events() <-chan SelectObjectContentEventStreamEvent { + return r.stream +} + +func (r *readSelectObjectContentEventStream) readEventStream() { + defer close(r.stream) + + for { + event, err := r.eventReader.ReadEvent() + if err != nil { + if err == io.EOF { + return + } + select { + case <-r.done: + // If closed already ignore the error + return + default: + } + r.errVal.Store(err) + return + } + + select { + case r.stream <- event.(SelectObjectContentEventStreamEvent): + case <-r.done: + return + } + } +} + +func (r *readSelectObjectContentEventStream) unmarshalerForEventType( + eventType string, +) (eventstreamapi.Unmarshaler, error) { + switch eventType { + case "Cont": + return &ContinuationEvent{}, nil + + case "End": + return &EndEvent{}, nil + + case "Progress": + return &ProgressEvent{}, nil + + case "Records": + return &RecordsEvent{}, nil + + case "Stats": + return &StatsEvent{}, nil + default: + return nil, awserr.New( + request.ErrCodeSerialization, + fmt.Sprintf("unknown event type name, %s, for SelectObjectContentEventStream", eventType), + nil, + ) + } +} + +// Request to filter the contents of an Amazon S3 object based on a simple Structured +// Query Language (SQL) statement. In the request, along with the SQL expression, +// you must specify a data serialization format (JSON or CSV) of the object. +// Amazon S3 uses this to parse object data into records. It returns only records +// that match the specified SQL expression. You must also specify the data serialization +// format for the response. For more information, see S3Select API Documentation +// (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html). +type SelectObjectContentInput struct { + _ struct{} `locationName:"SelectObjectContentRequest" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` + + // The S3 bucket. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // The expression that is used to query the object. + // + // Expression is a required field + Expression *string `type:"string" required:"true"` + + // The type of the provided expression (for example., SQL). + // + // ExpressionType is a required field + ExpressionType *string `type:"string" required:"true" enum:"ExpressionType"` + + // Describes the format of the data in the object that is being queried. + // + // InputSerialization is a required field + InputSerialization *InputSerialization `type:"structure" required:"true"` + + // The object key. + // + // Key is a required field + Key *string `location:"uri" locationName:"Key" min:"1" type:"string" required:"true"` + + // Describes the format of the data that you want Amazon S3 to return in response. + // + // OutputSerialization is a required field + OutputSerialization *OutputSerialization `type:"structure" required:"true"` + + // Specifies if periodic request progress information should be enabled. + RequestProgress *RequestProgress `type:"structure"` + + // The SSE Algorithm used to encrypt the object. 
For more information, see + // Server-Side Encryption (Using Customer-Provided Encryption Keys (http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html). + SSECustomerAlgorithm *string `location:"header" locationName:"x-amz-server-side-encryption-customer-algorithm" type:"string"` + + // The SSE Customer Key. For more information, see Server-Side Encryption (Using + // Customer-Provided Encryption Keys (http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html). + SSECustomerKey *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key" type:"string"` + + // The SSE Customer Key MD5. For more information, see Server-Side Encryption + // (Using Customer-Provided Encryption Keys (http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html). + SSECustomerKeyMD5 *string `location:"header" locationName:"x-amz-server-side-encryption-customer-key-MD5" type:"string"` +} + +// String returns the string representation +func (s SelectObjectContentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SelectObjectContentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SelectObjectContentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SelectObjectContentInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.Expression == nil { + invalidParams.Add(request.NewErrParamRequired("Expression")) + } + if s.ExpressionType == nil { + invalidParams.Add(request.NewErrParamRequired("ExpressionType")) + } + if s.InputSerialization == nil { + invalidParams.Add(request.NewErrParamRequired("InputSerialization")) + } + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.OutputSerialization == nil { + invalidParams.Add(request.NewErrParamRequired("OutputSerialization")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *SelectObjectContentInput) SetBucket(v string) *SelectObjectContentInput { + s.Bucket = &v + return s +} + +func (s *SelectObjectContentInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetExpression sets the Expression field's value. +func (s *SelectObjectContentInput) SetExpression(v string) *SelectObjectContentInput { + s.Expression = &v + return s +} + +// SetExpressionType sets the ExpressionType field's value. +func (s *SelectObjectContentInput) SetExpressionType(v string) *SelectObjectContentInput { + s.ExpressionType = &v + return s +} + +// SetInputSerialization sets the InputSerialization field's value. +func (s *SelectObjectContentInput) SetInputSerialization(v *InputSerialization) *SelectObjectContentInput { + s.InputSerialization = v + return s +} + +// SetKey sets the Key field's value. +func (s *SelectObjectContentInput) SetKey(v string) *SelectObjectContentInput { + s.Key = &v + return s +} + +// SetOutputSerialization sets the OutputSerialization field's value. +func (s *SelectObjectContentInput) SetOutputSerialization(v *OutputSerialization) *SelectObjectContentInput { + s.OutputSerialization = v + return s +} + +// SetRequestProgress sets the RequestProgress field's value. 
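// A minimal end-to-end sketch (illustrative only): assuming the SelectObjectContent operation
// method on a configured *s3.S3 client named svc and placeholder bucket/key names, querying a
// line-delimited JSON object and draining the resulting event stream could look like this:
//
//	out, err := svc.SelectObjectContent(&s3.SelectObjectContentInput{
//		Bucket:         aws.String("example-bucket"),
//		Key:            aws.String("data.json"),
//		Expression:     aws.String("SELECT * FROM S3Object s"),
//		ExpressionType: aws.String("SQL"),
//		InputSerialization: &s3.InputSerialization{
//			JSON: &s3.JSONInput{Type: aws.String(s3.JSONTypeLines)},
//		},
//		OutputSerialization: &s3.OutputSerialization{
//			JSON: &s3.JSONOutput{},
//		},
//	})
//	if err != nil {
//		// handle the error
//	}
//	defer out.EventStream.Close()
//	for ev := range out.EventStream.Events() {
//		switch e := ev.(type) {
//		case *s3.RecordsEvent:
//			_ = e.Payload // a chunk of result records
//		case *s3.EndEvent:
//			// the stream completed successfully
//		}
//	}
//	// out.EventStream.Err() reports any error encountered while streaming.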
+func (s *SelectObjectContentInput) SetRequestProgress(v *RequestProgress) *SelectObjectContentInput { + s.RequestProgress = v + return s +} + +// SetSSECustomerAlgorithm sets the SSECustomerAlgorithm field's value. +func (s *SelectObjectContentInput) SetSSECustomerAlgorithm(v string) *SelectObjectContentInput { + s.SSECustomerAlgorithm = &v + return s +} + +// SetSSECustomerKey sets the SSECustomerKey field's value. +func (s *SelectObjectContentInput) SetSSECustomerKey(v string) *SelectObjectContentInput { + s.SSECustomerKey = &v + return s +} + +func (s *SelectObjectContentInput) getSSECustomerKey() (v string) { + if s.SSECustomerKey == nil { + return v + } + return *s.SSECustomerKey +} + +// SetSSECustomerKeyMD5 sets the SSECustomerKeyMD5 field's value. +func (s *SelectObjectContentInput) SetSSECustomerKeyMD5(v string) *SelectObjectContentInput { + s.SSECustomerKeyMD5 = &v + return s +} + +type SelectObjectContentOutput struct { + _ struct{} `type:"structure" payload:"Payload"` + + // Use EventStream to use the API's stream. + EventStream *SelectObjectContentEventStream `type:"structure"` +} + +// String returns the string representation +func (s SelectObjectContentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SelectObjectContentOutput) GoString() string { + return s.String() +} + +// SetEventStream sets the EventStream field's value. +func (s *SelectObjectContentOutput) SetEventStream(v *SelectObjectContentEventStream) *SelectObjectContentOutput { + s.EventStream = v + return s +} + +func (s *SelectObjectContentOutput) runEventStreamLoop(r *request.Request) { + if r.Error != nil { + return + } + reader := newReadSelectObjectContentEventStream( + r.HTTPResponse.Body, + r.Handlers.UnmarshalStream, + r.Config.Logger, + r.Config.LogLevel.Value(), + ) + go reader.readEventStream() + + eventStream := &SelectObjectContentEventStream{ + StreamCloser: r.HTTPResponse.Body, + Reader: reader, + } + s.EventStream = eventStream +} + // Describes the parameters for Select job types. type SelectParameters struct { _ struct{} `type:"structure"` @@ -19522,11 +21344,13 @@ func (s *ServerSideEncryptionRule) SetApplyServerSideEncryptionByDefault(v *Serv return s } -// Container for filters that define which source objects should be replicated. +// A container for filters that define which source objects should be replicated. type SourceSelectionCriteria struct { _ struct{} `type:"structure"` - // Container for filter information of selection of KMS Encrypted S3 objects. + // A container for filter information for the selection of S3 objects encrypted + // with AWS KMS. If you include SourceSelectionCriteria in the replication configuration, + // this element is required. SseKmsEncryptedObjects *SseKmsEncryptedObjects `type:"structure"` } @@ -19561,12 +21385,13 @@ func (s *SourceSelectionCriteria) SetSseKmsEncryptedObjects(v *SseKmsEncryptedOb return s } -// Container for filter information of selection of KMS Encrypted S3 objects. +// A container for filter information for the selection of S3 objects encrypted +// with AWS KMS. type SseKmsEncryptedObjects struct { _ struct{} `type:"structure"` - // The replication for KMS encrypted S3 objects is disabled if status is not - // Enabled. + // If the status is not Enabled, replication for S3 objects encrypted with AWS + // KMS is disabled. 
//
// Status is a required field
Status *string `type:"string" required:"true" enum:"SseKmsEncryptedObjectsStatus"`
@@ -19601,6 +21426,87 @@ func (s *SseKmsEncryptedObjects) SetStatus(v string) *SseKmsEncryptedObjects {
return s
}

+type Stats struct {
+ _ struct{} `type:"structure"`
+
+ // The total number of uncompressed object bytes processed.
+ BytesProcessed *int64 `type:"long"`
+
+ // The total number of bytes of records payload data returned.
+ BytesReturned *int64 `type:"long"`
+
+ // The total number of object bytes scanned.
+ BytesScanned *int64 `type:"long"`
+}
+
+// String returns the string representation
+func (s Stats) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Stats) GoString() string {
+ return s.String()
+}
+
+// SetBytesProcessed sets the BytesProcessed field's value.
+func (s *Stats) SetBytesProcessed(v int64) *Stats {
+ s.BytesProcessed = &v
+ return s
+}
+
+// SetBytesReturned sets the BytesReturned field's value.
+func (s *Stats) SetBytesReturned(v int64) *Stats {
+ s.BytesReturned = &v
+ return s
+}
+
+// SetBytesScanned sets the BytesScanned field's value.
+func (s *Stats) SetBytesScanned(v int64) *Stats {
+ s.BytesScanned = &v
+ return s
+}
+
+type StatsEvent struct {
+ _ struct{} `locationName:"StatsEvent" type:"structure" payload:"Details"`
+
+ // The Stats event details.
+ Details *Stats `locationName:"Details" type:"structure"`
+}
+
+// String returns the string representation
+func (s StatsEvent) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StatsEvent) GoString() string {
+ return s.String()
+}
+
+// SetDetails sets the Details field's value.
+func (s *StatsEvent) SetDetails(v *Stats) *StatsEvent {
+ s.Details = v
+ return s
+}
+
+// The StatsEvent is an event in the SelectObjectContentEventStream group of events.
+func (s *StatsEvent) eventSelectObjectContentEventStream() {}
+
+// UnmarshalEvent unmarshals the EventStream Message into the StatsEvent value.
+// This method is only used internally within the SDK's EventStream handling.
+func (s *StatsEvent) UnmarshalEvent(
+ payloadUnmarshaler protocol.PayloadUnmarshaler,
+ msg eventstream.Message,
+) error {
+ if err := payloadUnmarshaler.UnmarshalPayload(
+ bytes.NewReader(msg.Payload), s,
+ ); err != nil {
+ return err
+ }
+ return nil
+}
+
type StorageClassAnalysis struct {
_ struct{} `type:"structure"`

@@ -19844,24 +21750,26 @@ func (s *TargetGrant) SetPermission(v string) *TargetGrant {
return s
}

-// Container for specifying the configuration when you want Amazon S3 to publish
-// events to an Amazon Simple Notification Service (Amazon SNS) topic.
+// A container for specifying the configuration for publication of messages
+// to an Amazon Simple Notification Service (Amazon SNS) topic when Amazon S3
+// detects specified events.
type TopicConfiguration struct {
_ struct{} `type:"structure"`

// Events is a required field
Events []*string `locationName:"Event" type:"list" flattened:"true" required:"true"`

- // Container for object key name filtering rules. For information about key
- // name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html)
+ // A container for object key name filtering rules. For information about key
+ // name filtering, see Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html)
+ // in the Amazon Simple Storage Service Developer Guide.
Filter *NotificationConfigurationFilter `type:"structure"` - // Optional unique identifier for configurations in a notification configuration. + // An optional unique identifier for configurations in a notification configuration. // If you don't provide one, Amazon S3 will assign an ID. Id *string `type:"string"` - // Amazon SNS topic ARN to which Amazon S3 will publish a message when it detects - // events of specified type. + // The Amazon Resource Name (ARN) of the Amazon SNS topic to which Amazon S3 + // will publish a message when it detects events of the specified type. // // TopicArn is a required field TopicArn *string `locationName:"Topic" type:"string" required:"true"` @@ -19921,11 +21829,13 @@ type TopicConfigurationDeprecated struct { _ struct{} `type:"structure"` // Bucket event for which to send notifications. + // + // Deprecated: Event has been deprecated Event *string `deprecated:"true" type:"string" enum:"Event"` Events []*string `locationName:"Event" type:"list" flattened:"true"` - // Optional unique identifier for configurations in a notification configuration. + // An optional unique identifier for configurations in a notification configuration. // If you don't provide one, Amazon S3 will assign an ID. Id *string `type:"string"` @@ -20027,14 +21937,14 @@ type UploadPartCopyInput struct { CopySourceIfMatch *string `location:"header" locationName:"x-amz-copy-source-if-match" type:"string"` // Copies the object if it has been modified since the specified time. - CopySourceIfModifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-modified-since" type:"timestamp" timestampFormat:"rfc822"` + CopySourceIfModifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-modified-since" type:"timestamp"` // Copies the object if its entity tag (ETag) is different than the specified // ETag. CopySourceIfNoneMatch *string `location:"header" locationName:"x-amz-copy-source-if-none-match" type:"string"` // Copies the object if it hasn't been modified since the specified time. - CopySourceIfUnmodifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-unmodified-since" type:"timestamp" timestampFormat:"rfc822"` + CopySourceIfUnmodifiedSince *time.Time `location:"header" locationName:"x-amz-copy-source-if-unmodified-since" type:"timestamp"` // The range of bytes to copy from the source object. The range value must use // the form bytes=first-last, where the first and last are the zero-based byte @@ -20781,6 +22691,25 @@ const ( BucketVersioningStatusSuspended = "Suspended" ) +const ( + // CompressionTypeNone is a CompressionType enum value + CompressionTypeNone = "NONE" + + // CompressionTypeGzip is a CompressionType enum value + CompressionTypeGzip = "GZIP" + + // CompressionTypeBzip2 is a CompressionType enum value + CompressionTypeBzip2 = "BZIP2" +) + +const ( + // DeleteMarkerReplicationStatusEnabled is a DeleteMarkerReplicationStatus enum value + DeleteMarkerReplicationStatusEnabled = "Enabled" + + // DeleteMarkerReplicationStatusDisabled is a DeleteMarkerReplicationStatus enum value + DeleteMarkerReplicationStatusDisabled = "Disabled" +) + // Requests Amazon S3 to encode the object keys in the response and specifies // the encoding method to use. An object key may contain any Unicode character; // however, XML 1.0 parser cannot parse some characters, such as characters @@ -20792,7 +22721,7 @@ const ( EncodingTypeUrl = "url" ) -// Bucket event for which to send notifications. 
+// The bucket event for which to send notifications. const ( // EventS3ReducedRedundancyLostObject is a Event enum value EventS3ReducedRedundancyLostObject = "s3:ReducedRedundancyLostObject" @@ -20901,6 +22830,14 @@ const ( InventoryOptionalFieldEncryptionStatus = "EncryptionStatus" ) +const ( + // JSONTypeDocument is a JSONType enum value + JSONTypeDocument = "DOCUMENT" + + // JSONTypeLines is a JSONType enum value + JSONTypeLines = "LINES" +) + const ( // MFADeleteEnabled is a MFADelete enum value MFADeleteEnabled = "Enabled" @@ -20957,6 +22894,12 @@ const ( // ObjectStorageClassGlacier is a ObjectStorageClass enum value ObjectStorageClassGlacier = "GLACIER" + + // ObjectStorageClassStandardIa is a ObjectStorageClass enum value + ObjectStorageClassStandardIa = "STANDARD_IA" + + // ObjectStorageClassOnezoneIa is a ObjectStorageClass enum value + ObjectStorageClassOnezoneIa = "ONEZONE_IA" ) const ( @@ -21078,6 +23021,9 @@ const ( // StorageClassStandardIa is a StorageClass enum value StorageClassStandardIa = "STANDARD_IA" + + // StorageClassOnezoneIa is a StorageClass enum value + StorageClassOnezoneIa = "ONEZONE_IA" ) const ( @@ -21110,6 +23056,9 @@ const ( // TransitionStorageClassStandardIa is a TransitionStorageClass enum value TransitionStorageClassStandardIa = "STANDARD_IA" + + // TransitionStorageClassOnezoneIa is a TransitionStorageClass enum value + TransitionStorageClassOnezoneIa = "ONEZONE_IA" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/customizations.go b/vendor/github.com/aws/aws-sdk-go/service/s3/customizations.go index 8cfd7c8bbbe..6f560a40975 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/customizations.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/customizations.go @@ -3,6 +3,7 @@ package s3 import ( "github.com/aws/aws-sdk-go/aws/client" "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/internal/s3err" ) func init() { @@ -21,6 +22,7 @@ func defaultInitClientFn(c *client.Client) { // S3 uses custom error unmarshaling logic c.Handlers.UnmarshalError.Clear() c.Handlers.UnmarshalError.PushBack(unmarshalError) + c.Handlers.UnmarshalError.PushBackNamed(s3err.RequestFailureWrapperHandler()) } func defaultInitRequestFn(r *request.Request) { @@ -42,11 +44,13 @@ func defaultInitRequestFn(r *request.Request) { r.Handlers.Validate.PushFront(populateLocationConstraint) case opCopyObject, opUploadPartCopy, opCompleteMultipartUpload: r.Handlers.Unmarshal.PushFront(copyMultipartStatusOKUnmarhsalError) + r.Handlers.Unmarshal.PushBackNamed(s3err.RequestFailureWrapperHandler()) case opPutObject, opUploadPart: r.Handlers.Build.PushBack(computeBodyHashes) - case opGetObject: - r.Handlers.Build.PushBack(askForTxEncodingAppendMD5) - r.Handlers.Unmarshal.PushBack(useMD5ValidationReader) + // Disabled until #1837 root issue is resolved. + // case opGetObject: + // r.Handlers.Build.PushBack(askForTxEncodingAppendMD5) + // r.Handlers.Unmarshal.PushBack(useMD5ValidationReader) } } diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/service.go b/vendor/github.com/aws/aws-sdk-go/service/s3/service.go index 614e477d3bb..d17dcc9dadc 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "s3" // Service endpoint prefix API calls made to. 
- EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "s3" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "S3" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the S3 client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, @@ -65,12 +67,16 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio } // Handlers - svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Sign.PushBackNamed(v4.BuildNamedHandler(v4.SignRequestHandler.Name, func(s *v4.Signer) { + s.DisableURIPathEscaping = true + })) svc.Handlers.Build.PushBackNamed(restxml.BuildHandler) svc.Handlers.Unmarshal.PushBackNamed(restxml.UnmarshalHandler) svc.Handlers.UnmarshalMeta.PushBackNamed(restxml.UnmarshalMetaHandler) svc.Handlers.UnmarshalError.PushBackNamed(restxml.UnmarshalErrorHandler) + svc.Handlers.UnmarshalStream.PushBackNamed(restxml.UnmarshalHandler) + // Run custom client initialization if present if initClient != nil { initClient(svc.Client) diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/statusok_error.go b/vendor/github.com/aws/aws-sdk-go/service/s3/statusok_error.go index 9f33efc6ca8..fde3050f95b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/statusok_error.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/statusok_error.go @@ -13,7 +13,11 @@ import ( func copyMultipartStatusOKUnmarhsalError(r *request.Request) { b, err := ioutil.ReadAll(r.HTTPResponse.Body) if err != nil { - r.Error = awserr.New("SerializationError", "unable to read response body", err) + r.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", "unable to read response body", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) return } body := bytes.NewReader(b) diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/unmarshal_error.go b/vendor/github.com/aws/aws-sdk-go/service/s3/unmarshal_error.go index bcca8627af3..12c0612c8de 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/unmarshal_error.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/unmarshal_error.go @@ -23,22 +23,17 @@ func unmarshalError(r *request.Request) { defer r.HTTPResponse.Body.Close() defer io.Copy(ioutil.Discard, r.HTTPResponse.Body) - hostID := r.HTTPResponse.Header.Get("X-Amz-Id-2") - // Bucket exists in a different region, and request needs // to be made to the correct region. 
if r.HTTPResponse.StatusCode == http.StatusMovedPermanently { - r.Error = requestFailure{ - RequestFailure: awserr.NewRequestFailure( - awserr.New("BucketRegionError", - fmt.Sprintf("incorrect region, the bucket is not in '%s' region", - aws.StringValue(r.Config.Region)), - nil), - r.HTTPResponse.StatusCode, - r.RequestID, - ), - hostID: hostID, - } + r.Error = awserr.NewRequestFailure( + awserr.New("BucketRegionError", + fmt.Sprintf("incorrect region, the bucket is not in '%s' region", + aws.StringValue(r.Config.Region)), + nil), + r.HTTPResponse.StatusCode, + r.RequestID, + ) return } @@ -63,14 +58,11 @@ func unmarshalError(r *request.Request) { errMsg = statusText } - r.Error = requestFailure{ - RequestFailure: awserr.NewRequestFailure( - awserr.New(errCode, errMsg, err), - r.HTTPResponse.StatusCode, - r.RequestID, - ), - hostID: hostID, - } + r.Error = awserr.NewRequestFailure( + awserr.New(errCode, errMsg, err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) } // A RequestFailure provides access to the S3 Request ID and Host ID values @@ -83,21 +75,3 @@ type RequestFailure interface { // Host ID is the S3 Host ID needed for debug, and contacting support HostID() string } - -type requestFailure struct { - awserr.RequestFailure - - hostID string -} - -func (r requestFailure) Error() string { - extra := fmt.Sprintf("status code: %d, request id: %s, host id: %s", - r.StatusCode(), r.RequestID(), r.hostID) - return awserr.SprintError(r.Code(), r.Message(), extra, r.OrigErr()) -} -func (r requestFailure) String() string { - return r.Error() -} -func (r requestFailure) HostID() string { - return r.hostID -} diff --git a/vendor/github.com/aws/aws-sdk-go/service/sagemaker/api.go b/vendor/github.com/aws/aws-sdk-go/service/sagemaker/api.go index fa3d29942a6..3230bcba3f2 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sagemaker/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sagemaker/api.go @@ -17,8 +17,8 @@ const opAddTags = "AddTags" // AddTagsRequest generates a "aws/request.Request" representing the // client's request for the AddTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -58,13 +58,20 @@ func (c *SageMaker) AddTagsRequest(input *AddTagsInput) (req *request.Request, o // AddTags API operation for Amazon SageMaker Service. // // Adds or overwrites one or more tags for the specified Amazon SageMaker resource. -// You can add tags to notebook instances, training jobs, models, endpoint configurations, -// and endpoints. +// You can add tags to notebook instances, training jobs, hyperparameter tuning +// jobs, models, endpoint configurations, and endpoints. // // Each tag consists of a key and an optional value. Tag keys must be unique -// per resource. For more information about tags, see Using Cost Allocation -// Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what) -// in the AWS Billing and Cost Management User Guide. +// per resource. For more information about tags, see For more information, +// see AWS Tagging Strategies (https://aws.amazon.com/answers/account-management/aws-tagging-strategies/). 
+// +// Tags that you add to a hyperparameter tuning job by calling this API are +// also added to any training jobs that the hyperparameter tuning job launches +// after you call this API, but not to training jobs that the hyperparameter +// tuning job launched before you called this API. To make sure that the tags +// associated with a hyperparameter tuning job are also added to all training +// jobs that the hyperparameter tuning job launches, add the tags when you first +// create the tuning job by specifying them in the Tags parameter of CreateHyperParameterTuningJob // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -98,8 +105,8 @@ const opCreateEndpoint = "CreateEndpoint" // CreateEndpointRequest generates a "aws/request.Request" representing the // client's request for the CreateEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -159,6 +166,14 @@ func (c *SageMaker) CreateEndpointRequest(input *CreateEndpointInput) (req *requ // For an example, see Exercise 1: Using the K-Means Algorithm Provided by Amazon // SageMaker (http://docs.aws.amazon.com/sagemaker/latest/dg/ex1.html). // +// If any of the models hosted at this endpoint get model data from an Amazon +// S3 location, Amazon SageMaker uses AWS Security Token Service to download +// model artifacts from the S3 path you provided. AWS STS is activated in your +// IAM user account by default. If you previously deactivated AWS STS for a +// region, you need to reactivate AWS STS for that region. For more information, +// see Activating and Deactivating AWS STS i an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) +// in the AWS Identity and Access Management User Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -197,8 +212,8 @@ const opCreateEndpointConfig = "CreateEndpointConfig" // CreateEndpointConfigRequest generates a "aws/request.Request" representing the // client's request for the CreateEndpointConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -291,12 +306,99 @@ func (c *SageMaker) CreateEndpointConfigWithContext(ctx aws.Context, input *Crea return out, req.Send() } +const opCreateHyperParameterTuningJob = "CreateHyperParameterTuningJob" + +// CreateHyperParameterTuningJobRequest generates a "aws/request.Request" representing the +// client's request for the CreateHyperParameterTuningJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See CreateHyperParameterTuningJob for more information on using the CreateHyperParameterTuningJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateHyperParameterTuningJobRequest method. +// req, resp := client.CreateHyperParameterTuningJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateHyperParameterTuningJob +func (c *SageMaker) CreateHyperParameterTuningJobRequest(input *CreateHyperParameterTuningJobInput) (req *request.Request, output *CreateHyperParameterTuningJobOutput) { + op := &request.Operation{ + Name: opCreateHyperParameterTuningJob, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateHyperParameterTuningJobInput{} + } + + output = &CreateHyperParameterTuningJobOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateHyperParameterTuningJob API operation for Amazon SageMaker Service. +// +// Starts a hyperparameter tuning job. A hyperparameter tuning job finds the +// best version of a model by running many training jobs on your dataset using +// the algorithm you choose and values for hyperparameters within ranges that +// you specify. It then chooses the hyperparameter values that result in a model +// that performs the best, as measured by an objective metric that you choose. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation CreateHyperParameterTuningJob for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUse "ResourceInUse" +// Resource being accessed is in use. +// +// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" +// You have exceeded an Amazon SageMaker resource limit. For example, you might +// have too many training jobs created. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateHyperParameterTuningJob +func (c *SageMaker) CreateHyperParameterTuningJob(input *CreateHyperParameterTuningJobInput) (*CreateHyperParameterTuningJobOutput, error) { + req, out := c.CreateHyperParameterTuningJobRequest(input) + return out, req.Send() +} + +// CreateHyperParameterTuningJobWithContext is the same as CreateHyperParameterTuningJob with the addition of +// the ability to pass a context and additional request options. +// +// See CreateHyperParameterTuningJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) CreateHyperParameterTuningJobWithContext(ctx aws.Context, input *CreateHyperParameterTuningJobInput, opts ...request.Option) (*CreateHyperParameterTuningJobOutput, error) { + req, out := c.CreateHyperParameterTuningJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opCreateModel = "CreateModel" // CreateModelRequest generates a "aws/request.Request" representing the // client's request for the CreateModel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -336,27 +438,32 @@ func (c *SageMaker) CreateModelRequest(input *CreateModelInput) (req *request.Re // CreateModel API operation for Amazon SageMaker Service. // // Creates a model in Amazon SageMaker. In the request, you name the model and -// describe one or more containers. For each container, you specify the docker -// image containing inference code, artifacts (from prior training), and custom -// environment map that the inference code uses when you deploy the model into -// production. -// -// Use this API to create a model only if you want to use Amazon SageMaker hosting -// services. To host your model, you create an endpoint configuration with the -// CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint -// API. +// describe a primary container. For the primary container, you specify the +// docker image containing inference code, artifacts (from prior training), +// and custom environment map that the inference code uses when you deploy the +// model for predictions. +// +// Use this API to create a model if you want to use Amazon SageMaker hosting +// services or run a batch transform job. // -// Amazon SageMaker then deploys all of the containers that you defined for -// the model in the hosting environment. +// To host your model, you create an endpoint configuration with the CreateEndpointConfig +// API, and then create an endpoint with the CreateEndpoint API. Amazon SageMaker +// then deploys all of the containers that you defined for the model in the +// hosting environment. +// +// To run a batch transform using your model, you start a job with the CreateTransformJob +// API. Amazon SageMaker uses your model and your dataset to get inferences +// which are then saved to a specified S3 location. // // In the CreateModel request, you must define a container with the PrimaryContainer // parameter. // // In the request, you also provide an IAM role that Amazon SageMaker can assume // to access model artifacts and docker image for deployment on ML compute hosting -// instances. In addition, you also use the IAM role to manage permissions the -// inference code needs. For example, if the inference code access any other -// AWS resources, you grant necessary permissions via this role. +// instances or for batch transform jobs. In addition, you also use the IAM +// role to manage permissions the inference code needs. For example, if the +// inference code access any other AWS resources, you grant necessary permissions +// via this role. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -396,8 +503,8 @@ const opCreateNotebookInstance = "CreateNotebookInstance" // CreateNotebookInstanceRequest generates a "aws/request.Request" representing the // client's request for the CreateNotebookInstance operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -512,8 +619,8 @@ const opCreateNotebookInstanceLifecycleConfig = "CreateNotebookInstanceLifecycle // CreateNotebookInstanceLifecycleConfigRequest generates a "aws/request.Request" representing the // client's request for the CreateNotebookInstanceLifecycleConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -556,7 +663,20 @@ func (c *SageMaker) CreateNotebookInstanceLifecycleConfigRequest(input *CreateNo // instance. A lifecycle configuration is a collection of shell scripts that // run when you create or start a notebook instance. // -// For information about notebook instance lifestyle configurations, see notebook-lifecycle-config. +// Each lifecycle configuration script has a limit of 16384 characters. +// +// The value of the $PATH environment variable that is available to both scripts +// is /sbin:bin:/usr/sbin:/usr/bin. +// +// View CloudWatch Logs for notebook instance lifecycle configurations in log +// group /aws/sagemaker/NotebookInstances in log stream [notebook-instance-name]/[LifecycleConfigHook]. +// +// Lifecycle configuration scripts cannot run for longer than 5 minutes. If +// a script runs for longer than 5 minutes, it fails and the notebook instance +// is not created or started. +// +// For information about notebook instance lifestyle configurations, see Step +// 2.1: (Optional) Customize a Notebook Instance (http://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -596,8 +716,8 @@ const opCreatePresignedNotebookInstanceUrl = "CreatePresignedNotebookInstanceUrl // CreatePresignedNotebookInstanceUrlRequest generates a "aws/request.Request" representing the // client's request for the CreatePresignedNotebookInstanceUrl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -636,12 +756,21 @@ func (c *SageMaker) CreatePresignedNotebookInstanceUrlRequest(input *CreatePresi // CreatePresignedNotebookInstanceUrl API operation for Amazon SageMaker Service. // -// Returns a URL that you can use to connect to the Juypter server from a notebook +// Returns a URL that you can use to connect to the Jupyter server from a notebook // instance. 
In the Amazon SageMaker console, when you choose Open next to a // notebook instance, Amazon SageMaker opens a new tab showing the Jupyter server // home page from the notebook instance. The console uses this API to get the // URL and show the page. // +// You can restrict access to this API and to the URL that it returns to a list +// of IP addresses that you specify. To restrict access, attach an IAM policy +// that denies access to this API unless the call comes from an IP address in +// the specified list to every AWS Identity and Access Management user, group, +// or role used to access the notebook instance. Use the NotIpAddress condition +// operator and the aws:SourceIP condition context key to specify the list of +// IP addresses that you want to have access to the notebook instance. For more +// information, see Limit Access to a Notebook Instance by IP Address (http://docs.aws.amazon.com/sagemaker/latest/dg/howitworks-access-ws.html#nbi-ip-filter). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -674,8 +803,8 @@ const opCreateTrainingJob = "CreateTrainingJob" // CreateTrainingJobRequest generates a "aws/request.Request" representing the // client's request for the CreateTrainingJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -788,12 +917,120 @@ func (c *SageMaker) CreateTrainingJobWithContext(ctx aws.Context, input *CreateT return out, req.Send() } +const opCreateTransformJob = "CreateTransformJob" + +// CreateTransformJobRequest generates a "aws/request.Request" representing the +// client's request for the CreateTransformJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateTransformJob for more information on using the CreateTransformJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateTransformJobRequest method. +// req, resp := client.CreateTransformJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateTransformJob +func (c *SageMaker) CreateTransformJobRequest(input *CreateTransformJobInput) (req *request.Request, output *CreateTransformJobOutput) { + op := &request.Operation{ + Name: opCreateTransformJob, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateTransformJobInput{} + } + + output = &CreateTransformJobOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateTransformJob API operation for Amazon SageMaker Service. +// +// Starts a transform job. 
A transform job uses a trained model to get inferences +// on a dataset and saves these results to an Amazon S3 location that you specify. +// +// To perform batch transformations, you create a transform job and use the +// data that you have readily available. +// +// In the request body, you provide the following: +// +// * TransformJobName - Identifies the transform job. The name must be unique +// within an AWS Region in an AWS account. +// +// * ModelName - Identifies the model to use. ModelName must be the name +// of an existing Amazon SageMaker model in the same AWS Region and AWS account. +// For information on creating a model, see CreateModel. +// +// * TransformInput - Describes the dataset to be transformed and the Amazon +// S3 location where it is stored. +// +// * TransformOutput - Identifies the Amazon S3 location where you want Amazon +// SageMaker to save the results from the transform job. +// +// * TransformResources - Identifies the ML compute instances for the transform +// job. +// +// For more information about how batch transformation works Amazon SageMaker, +// see How It Works (http://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation CreateTransformJob for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceInUse "ResourceInUse" +// Resource being accessed is in use. +// +// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" +// You have exceeded an Amazon SageMaker resource limit. For example, you might +// have too many training jobs created. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateTransformJob +func (c *SageMaker) CreateTransformJob(input *CreateTransformJobInput) (*CreateTransformJobOutput, error) { + req, out := c.CreateTransformJobRequest(input) + return out, req.Send() +} + +// CreateTransformJobWithContext is the same as CreateTransformJob with the addition of +// the ability to pass a context and additional request options. +// +// See CreateTransformJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) CreateTransformJobWithContext(ctx aws.Context, input *CreateTransformJobInput, opts ...request.Option) (*CreateTransformJobOutput, error) { + req, out := c.CreateTransformJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteEndpoint = "DeleteEndpoint" // DeleteEndpointRequest generates a "aws/request.Request" representing the // client's request for the DeleteEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
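// The CreateTransformJob documentation above lists the request-body components
// (TransformJobName, ModelName, TransformInput, TransformOutput, TransformResources).
// A minimal sketch of wiring them together, assuming the aws, session, and
// sagemaker packages are imported and that the job, model, and bucket names
// below are hypothetical placeholders:
//
//    svc := sagemaker.New(session.Must(session.NewSession()))
//
//    _, err := svc.CreateTransformJob(&sagemaker.CreateTransformJobInput{
//        TransformJobName: aws.String("example-transform-job"), // hypothetical; unique per region/account
//        ModelName:        aws.String("example-model"),         // hypothetical; must already exist
//        TransformInput: &sagemaker.TransformInput{
//            DataSource: &sagemaker.TransformDataSource{
//                S3DataSource: &sagemaker.TransformS3DataSource{
//                    S3DataType: aws.String("S3Prefix"),
//                    S3Uri:      aws.String("s3://example-bucket/input/"), // hypothetical location
//                },
//            },
//        },
//        TransformOutput: &sagemaker.TransformOutput{
//            S3OutputPath: aws.String("s3://example-bucket/output/"), // hypothetical location
//        },
//        TransformResources: &sagemaker.TransformResources{
//            InstanceType:  aws.String("ml.m4.xlarge"),
//            InstanceCount: aws.Int64(1),
//        },
//    })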
@@ -837,6 +1074,10 @@ func (c *SageMaker) DeleteEndpointRequest(input *DeleteEndpointInput) (req *requ // Deletes an endpoint. Amazon SageMaker frees up all of the resources that // were deployed when the endpoint was created. // +// Amazon SageMaker retires any custom KMS key grants associated with the endpoint, +// meaning you don't need to use the RevokeGrant (http://docs.aws.amazon.com/kms/latest/APIReference/API_RevokeGrant.html) +// API call. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -869,8 +1110,8 @@ const opDeleteEndpointConfig = "DeleteEndpointConfig" // DeleteEndpointConfigRequest generates a "aws/request.Request" representing the // client's request for the DeleteEndpointConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -911,7 +1152,7 @@ func (c *SageMaker) DeleteEndpointConfigRequest(input *DeleteEndpointConfigInput // DeleteEndpointConfig API operation for Amazon SageMaker Service. // -// Deletes an endpoint configuration. The DeleteEndpoingConfig API deletes only +// Deletes an endpoint configuration. The DeleteEndpointConfig API deletes only // the specified configuration. It does not delete endpoints created using the // configuration. // @@ -947,8 +1188,8 @@ const opDeleteModel = "DeleteModel" // DeleteModelRequest generates a "aws/request.Request" representing the // client's request for the DeleteModel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1026,8 +1267,8 @@ const opDeleteNotebookInstance = "DeleteNotebookInstance" // DeleteNotebookInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeleteNotebookInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1107,8 +1348,8 @@ const opDeleteNotebookInstanceLifecycleConfig = "DeleteNotebookInstanceLifecycle // DeleteNotebookInstanceLifecycleConfigRequest generates a "aws/request.Request" representing the // client's request for the DeleteNotebookInstanceLifecycleConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1183,8 +1424,8 @@ const opDeleteTags = "DeleteTags" // DeleteTagsRequest generates a "aws/request.Request" representing the // client's request for the DeleteTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1227,6 +1468,10 @@ func (c *SageMaker) DeleteTagsRequest(input *DeleteTagsInput) (req *request.Requ // // To list a resource's tags, use the ListTags API. // +// When you call this API to delete tags from a hyperparameter tuning job, the +// deleted tags are not removed from training jobs that the hyperparameter tuning +// job launched before you called this API. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1259,8 +1504,8 @@ const opDescribeEndpoint = "DescribeEndpoint" // DescribeEndpointRequest generates a "aws/request.Request" representing the // client's request for the DescribeEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1333,8 +1578,8 @@ const opDescribeEndpointConfig = "DescribeEndpointConfig" // DescribeEndpointConfigRequest generates a "aws/request.Request" representing the // client's request for the DescribeEndpointConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1404,12 +1649,91 @@ func (c *SageMaker) DescribeEndpointConfigWithContext(ctx aws.Context, input *De return out, req.Send() } +const opDescribeHyperParameterTuningJob = "DescribeHyperParameterTuningJob" + +// DescribeHyperParameterTuningJobRequest generates a "aws/request.Request" representing the +// client's request for the DescribeHyperParameterTuningJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeHyperParameterTuningJob for more information on using the DescribeHyperParameterTuningJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeHyperParameterTuningJobRequest method. 
+// req, resp := client.DescribeHyperParameterTuningJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeHyperParameterTuningJob +func (c *SageMaker) DescribeHyperParameterTuningJobRequest(input *DescribeHyperParameterTuningJobInput) (req *request.Request, output *DescribeHyperParameterTuningJobOutput) { + op := &request.Operation{ + Name: opDescribeHyperParameterTuningJob, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeHyperParameterTuningJobInput{} + } + + output = &DescribeHyperParameterTuningJobOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeHyperParameterTuningJob API operation for Amazon SageMaker Service. +// +// Gets a description of a hyperparameter tuning job. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation DescribeHyperParameterTuningJob for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFound "ResourceNotFound" +// Resource being access is not found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeHyperParameterTuningJob +func (c *SageMaker) DescribeHyperParameterTuningJob(input *DescribeHyperParameterTuningJobInput) (*DescribeHyperParameterTuningJobOutput, error) { + req, out := c.DescribeHyperParameterTuningJobRequest(input) + return out, req.Send() +} + +// DescribeHyperParameterTuningJobWithContext is the same as DescribeHyperParameterTuningJob with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeHyperParameterTuningJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) DescribeHyperParameterTuningJobWithContext(ctx aws.Context, input *DescribeHyperParameterTuningJobInput, opts ...request.Option) (*DescribeHyperParameterTuningJobOutput, error) { + req, out := c.DescribeHyperParameterTuningJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeModel = "DescribeModel" // DescribeModelRequest generates a "aws/request.Request" representing the // client's request for the DescribeModel operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1482,8 +1806,8 @@ const opDescribeNotebookInstance = "DescribeNotebookInstance" // DescribeNotebookInstanceRequest generates a "aws/request.Request" representing the // client's request for the DescribeNotebookInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1556,8 +1880,8 @@ const opDescribeNotebookInstanceLifecycleConfig = "DescribeNotebookInstanceLifec // DescribeNotebookInstanceLifecycleConfigRequest generates a "aws/request.Request" representing the // client's request for the DescribeNotebookInstanceLifecycleConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1598,7 +1922,8 @@ func (c *SageMaker) DescribeNotebookInstanceLifecycleConfigRequest(input *Descri // // Returns a description of a notebook instance lifecycle configuration. // -// For information about notebook instance lifestyle configurations, see notebook-lifecycle-config. +// For information about notebook instance lifestyle configurations, see Step +// 2.1: (Optional) Customize a Notebook Instance (http://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1632,8 +1957,8 @@ const opDescribeTrainingJob = "DescribeTrainingJob" // DescribeTrainingJobRequest generates a "aws/request.Request" representing the // client's request for the DescribeTrainingJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1707,12 +2032,91 @@ func (c *SageMaker) DescribeTrainingJobWithContext(ctx aws.Context, input *Descr return out, req.Send() } +const opDescribeTransformJob = "DescribeTransformJob" + +// DescribeTransformJobRequest generates a "aws/request.Request" representing the +// client's request for the DescribeTransformJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeTransformJob for more information on using the DescribeTransformJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeTransformJobRequest method. 
+// req, resp := client.DescribeTransformJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeTransformJob +func (c *SageMaker) DescribeTransformJobRequest(input *DescribeTransformJobInput) (req *request.Request, output *DescribeTransformJobOutput) { + op := &request.Operation{ + Name: opDescribeTransformJob, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeTransformJobInput{} + } + + output = &DescribeTransformJobOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeTransformJob API operation for Amazon SageMaker Service. +// +// Returns information about a transform job. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation DescribeTransformJob for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFound "ResourceNotFound" +// Resource being access is not found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeTransformJob +func (c *SageMaker) DescribeTransformJob(input *DescribeTransformJobInput) (*DescribeTransformJobOutput, error) { + req, out := c.DescribeTransformJobRequest(input) + return out, req.Send() +} + +// DescribeTransformJobWithContext is the same as DescribeTransformJob with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeTransformJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) DescribeTransformJobWithContext(ctx aws.Context, input *DescribeTransformJobInput, opts ...request.Option) (*DescribeTransformJobOutput, error) { + req, out := c.DescribeTransformJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListEndpointConfigs = "ListEndpointConfigs" // ListEndpointConfigsRequest generates a "aws/request.Request" representing the // client's request for the ListEndpointConfigs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1841,8 +2245,8 @@ const opListEndpoints = "ListEndpoints" // ListEndpointsRequest generates a "aws/request.Request" representing the // client's request for the ListEndpoints operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
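// The *WithContext variants above (for example, DescribeTransformJobWithContext)
// all follow the same pattern: a non-nil context bounds or cancels the underlying
// HTTP request. A minimal sketch, assuming an existing client `svc`, the standard
// context, time, log, and fmt imports, and a hypothetical transform job name:
//
//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
//    defer cancel()
//
//    out, err := svc.DescribeTransformJobWithContext(ctx, &sagemaker.DescribeTransformJobInput{
//        TransformJobName: aws.String("example-transform-job"), // hypothetical name
//    })
//    if err != nil {
//        log.Fatal(err) // a canceled or timed-out context surfaces here as a request error
//    }
//    fmt.Println(aws.StringValue(out.TransformJobStatus))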
@@ -1967,12 +2371,143 @@ func (c *SageMaker) ListEndpointsPagesWithContext(ctx aws.Context, input *ListEn return p.Err() } -const opListModels = "ListModels" +const opListHyperParameterTuningJobs = "ListHyperParameterTuningJobs" -// ListModelsRequest generates a "aws/request.Request" representing the -// client's request for the ListModels operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListHyperParameterTuningJobsRequest generates a "aws/request.Request" representing the +// client's request for the ListHyperParameterTuningJobs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListHyperParameterTuningJobs for more information on using the ListHyperParameterTuningJobs +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListHyperParameterTuningJobsRequest method. +// req, resp := client.ListHyperParameterTuningJobsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListHyperParameterTuningJobs +func (c *SageMaker) ListHyperParameterTuningJobsRequest(input *ListHyperParameterTuningJobsInput) (req *request.Request, output *ListHyperParameterTuningJobsOutput) { + op := &request.Operation{ + Name: opListHyperParameterTuningJobs, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListHyperParameterTuningJobsInput{} + } + + output = &ListHyperParameterTuningJobsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListHyperParameterTuningJobs API operation for Amazon SageMaker Service. +// +// Gets a list of HyperParameterTuningJobSummary objects that describe the hyperparameter +// tuning jobs launched in your account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation ListHyperParameterTuningJobs for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListHyperParameterTuningJobs +func (c *SageMaker) ListHyperParameterTuningJobs(input *ListHyperParameterTuningJobsInput) (*ListHyperParameterTuningJobsOutput, error) { + req, out := c.ListHyperParameterTuningJobsRequest(input) + return out, req.Send() +} + +// ListHyperParameterTuningJobsWithContext is the same as ListHyperParameterTuningJobs with the addition of +// the ability to pass a context and additional request options. +// +// See ListHyperParameterTuningJobs for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) ListHyperParameterTuningJobsWithContext(ctx aws.Context, input *ListHyperParameterTuningJobsInput, opts ...request.Option) (*ListHyperParameterTuningJobsOutput, error) { + req, out := c.ListHyperParameterTuningJobsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListHyperParameterTuningJobsPages iterates over the pages of a ListHyperParameterTuningJobs operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListHyperParameterTuningJobs method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListHyperParameterTuningJobs operation. +// pageNum := 0 +// err := client.ListHyperParameterTuningJobsPages(params, +// func(page *ListHyperParameterTuningJobsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SageMaker) ListHyperParameterTuningJobsPages(input *ListHyperParameterTuningJobsInput, fn func(*ListHyperParameterTuningJobsOutput, bool) bool) error { + return c.ListHyperParameterTuningJobsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListHyperParameterTuningJobsPagesWithContext same as ListHyperParameterTuningJobsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) ListHyperParameterTuningJobsPagesWithContext(ctx aws.Context, input *ListHyperParameterTuningJobsInput, fn func(*ListHyperParameterTuningJobsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListHyperParameterTuningJobsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListHyperParameterTuningJobsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListHyperParameterTuningJobsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListModels = "ListModels" + +// ListModelsRequest generates a "aws/request.Request" representing the +// client's request for the ListModels operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2102,8 +2637,8 @@ const opListNotebookInstanceLifecycleConfigs = "ListNotebookInstanceLifecycleCon // ListNotebookInstanceLifecycleConfigsRequest generates a "aws/request.Request" representing the // client's request for the ListNotebookInstanceLifecycleConfigs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2148,7 +2683,8 @@ func (c *SageMaker) ListNotebookInstanceLifecycleConfigsRequest(input *ListNoteb // ListNotebookInstanceLifecycleConfigs API operation for Amazon SageMaker Service. // -// Lists notebook instance lifestyle configurations created with the API. +// Lists notebook instance lifestyle configurations created with the CreateNotebookInstanceLifecycleConfig +// API. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2232,8 +2768,8 @@ const opListNotebookInstances = "ListNotebookInstances" // ListNotebookInstancesRequest generates a "aws/request.Request" representing the // client's request for the ListNotebookInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2363,8 +2899,8 @@ const opListTags = "ListTags" // ListTagsRequest generates a "aws/request.Request" representing the // client's request for the ListTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2493,8 +3029,8 @@ const opListTrainingJobs = "ListTrainingJobs" // ListTrainingJobsRequest generates a "aws/request.Request" representing the // client's request for the ListTrainingJobs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2619,647 +3155,3885 @@ func (c *SageMaker) ListTrainingJobsPagesWithContext(ctx aws.Context, input *Lis return p.Err() } -const opStartNotebookInstance = "StartNotebookInstance" +const opListTrainingJobsForHyperParameterTuningJob = "ListTrainingJobsForHyperParameterTuningJob" -// StartNotebookInstanceRequest generates a "aws/request.Request" representing the -// client's request for the StartNotebookInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListTrainingJobsForHyperParameterTuningJobRequest generates a "aws/request.Request" representing the +// client's request for the ListTrainingJobsForHyperParameterTuningJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
// -// See StartNotebookInstance for more information on using the StartNotebookInstance +// See ListTrainingJobsForHyperParameterTuningJob for more information on using the ListTrainingJobsForHyperParameterTuningJob // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartNotebookInstanceRequest method. -// req, resp := client.StartNotebookInstanceRequest(params) +// // Example sending a request using the ListTrainingJobsForHyperParameterTuningJobRequest method. +// req, resp := client.ListTrainingJobsForHyperParameterTuningJobRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StartNotebookInstance -func (c *SageMaker) StartNotebookInstanceRequest(input *StartNotebookInstanceInput) (req *request.Request, output *StartNotebookInstanceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListTrainingJobsForHyperParameterTuningJob +func (c *SageMaker) ListTrainingJobsForHyperParameterTuningJobRequest(input *ListTrainingJobsForHyperParameterTuningJobInput) (req *request.Request, output *ListTrainingJobsForHyperParameterTuningJobOutput) { op := &request.Operation{ - Name: opStartNotebookInstance, + Name: opListTrainingJobsForHyperParameterTuningJob, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { - input = &StartNotebookInstanceInput{} + input = &ListTrainingJobsForHyperParameterTuningJobInput{} } - output = &StartNotebookInstanceOutput{} + output = &ListTrainingJobsForHyperParameterTuningJobOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// StartNotebookInstance API operation for Amazon SageMaker Service. +// ListTrainingJobsForHyperParameterTuningJob API operation for Amazon SageMaker Service. // -// Launches an ML compute instance with the latest version of the libraries -// and attaches your ML storage volume. After configuring the notebook instance, -// Amazon SageMaker sets the notebook instance status to InService. A notebook -// instance's status must be InService before you can connect to your Jupyter -// notebook. +// Gets a list of TrainingJobSummary objects that describe the training jobs +// that a hyperparameter tuning job launched. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon SageMaker Service's -// API operation StartNotebookInstance for usage and error information. +// API operation ListTrainingJobsForHyperParameterTuningJob for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" -// You have exceeded an Amazon SageMaker resource limit. For example, you might -// have too many training jobs created. +// * ErrCodeResourceNotFound "ResourceNotFound" +// Resource being access is not found. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StartNotebookInstance -func (c *SageMaker) StartNotebookInstance(input *StartNotebookInstanceInput) (*StartNotebookInstanceOutput, error) { - req, out := c.StartNotebookInstanceRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListTrainingJobsForHyperParameterTuningJob +func (c *SageMaker) ListTrainingJobsForHyperParameterTuningJob(input *ListTrainingJobsForHyperParameterTuningJobInput) (*ListTrainingJobsForHyperParameterTuningJobOutput, error) { + req, out := c.ListTrainingJobsForHyperParameterTuningJobRequest(input) return out, req.Send() } -// StartNotebookInstanceWithContext is the same as StartNotebookInstance with the addition of +// ListTrainingJobsForHyperParameterTuningJobWithContext is the same as ListTrainingJobsForHyperParameterTuningJob with the addition of // the ability to pass a context and additional request options. // -// See StartNotebookInstance for details on how to use this API operation. +// See ListTrainingJobsForHyperParameterTuningJob for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *SageMaker) StartNotebookInstanceWithContext(ctx aws.Context, input *StartNotebookInstanceInput, opts ...request.Option) (*StartNotebookInstanceOutput, error) { - req, out := c.StartNotebookInstanceRequest(input) +func (c *SageMaker) ListTrainingJobsForHyperParameterTuningJobWithContext(ctx aws.Context, input *ListTrainingJobsForHyperParameterTuningJobInput, opts ...request.Option) (*ListTrainingJobsForHyperParameterTuningJobOutput, error) { + req, out := c.ListTrainingJobsForHyperParameterTuningJobRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStopNotebookInstance = "StopNotebookInstance" +// ListTrainingJobsForHyperParameterTuningJobPages iterates over the pages of a ListTrainingJobsForHyperParameterTuningJob operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListTrainingJobsForHyperParameterTuningJob method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListTrainingJobsForHyperParameterTuningJob operation. +// pageNum := 0 +// err := client.ListTrainingJobsForHyperParameterTuningJobPages(params, +// func(page *ListTrainingJobsForHyperParameterTuningJobOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SageMaker) ListTrainingJobsForHyperParameterTuningJobPages(input *ListTrainingJobsForHyperParameterTuningJobInput, fn func(*ListTrainingJobsForHyperParameterTuningJobOutput, bool) bool) error { + return c.ListTrainingJobsForHyperParameterTuningJobPagesWithContext(aws.BackgroundContext(), input, fn) +} -// StopNotebookInstanceRequest generates a "aws/request.Request" representing the -// client's request for the StopNotebookInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// ListTrainingJobsForHyperParameterTuningJobPagesWithContext same as ListTrainingJobsForHyperParameterTuningJobPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) ListTrainingJobsForHyperParameterTuningJobPagesWithContext(ctx aws.Context, input *ListTrainingJobsForHyperParameterTuningJobInput, fn func(*ListTrainingJobsForHyperParameterTuningJobOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListTrainingJobsForHyperParameterTuningJobInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListTrainingJobsForHyperParameterTuningJobRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListTrainingJobsForHyperParameterTuningJobOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListTransformJobs = "ListTransformJobs" + +// ListTransformJobsRequest generates a "aws/request.Request" representing the +// client's request for the ListTransformJobs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StopNotebookInstance for more information on using the StopNotebookInstance +// See ListTransformJobs for more information on using the ListTransformJobs // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StopNotebookInstanceRequest method. -// req, resp := client.StopNotebookInstanceRequest(params) +// // Example sending a request using the ListTransformJobsRequest method. 
+// req, resp := client.ListTransformJobsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopNotebookInstance -func (c *SageMaker) StopNotebookInstanceRequest(input *StopNotebookInstanceInput) (req *request.Request, output *StopNotebookInstanceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListTransformJobs +func (c *SageMaker) ListTransformJobsRequest(input *ListTransformJobsInput) (req *request.Request, output *ListTransformJobsOutput) { op := &request.Operation{ - Name: opStopNotebookInstance, + Name: opListTransformJobs, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, } if input == nil { - input = &StopNotebookInstanceInput{} + input = &ListTransformJobsInput{} } - output = &StopNotebookInstanceOutput{} + output = &ListTransformJobsOutput{} req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// StopNotebookInstance API operation for Amazon SageMaker Service. -// -// Terminates the ML compute instance. Before terminating the instance, Amazon -// SageMaker disconnects the ML storage volume from it. Amazon SageMaker preserves -// the ML storage volume. +// ListTransformJobs API operation for Amazon SageMaker Service. // -// To access data on the ML storage volume for a notebook instance that has -// been terminated, call the StartNotebookInstance API. StartNotebookInstance -// launches another ML compute instance, configures it, and attaches the preserved -// ML storage volume so you can continue your work. +// Lists transform jobs. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon SageMaker Service's -// API operation StopNotebookInstance for usage and error information. -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopNotebookInstance -func (c *SageMaker) StopNotebookInstance(input *StopNotebookInstanceInput) (*StopNotebookInstanceOutput, error) { - req, out := c.StopNotebookInstanceRequest(input) +// API operation ListTransformJobs for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListTransformJobs +func (c *SageMaker) ListTransformJobs(input *ListTransformJobsInput) (*ListTransformJobsOutput, error) { + req, out := c.ListTransformJobsRequest(input) return out, req.Send() } -// StopNotebookInstanceWithContext is the same as StopNotebookInstance with the addition of +// ListTransformJobsWithContext is the same as ListTransformJobs with the addition of // the ability to pass a context and additional request options. // -// See StopNotebookInstance for details on how to use this API operation. +// See ListTransformJobs for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *SageMaker) StopNotebookInstanceWithContext(ctx aws.Context, input *StopNotebookInstanceInput, opts ...request.Option) (*StopNotebookInstanceOutput, error) { - req, out := c.StopNotebookInstanceRequest(input) +func (c *SageMaker) ListTransformJobsWithContext(ctx aws.Context, input *ListTransformJobsInput, opts ...request.Option) (*ListTransformJobsOutput, error) { + req, out := c.ListTransformJobsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStopTrainingJob = "StopTrainingJob" - -// StopTrainingJobRequest generates a "aws/request.Request" representing the -// client's request for the StopTrainingJob operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. -// -// Use "Send" method on the returned Request to send the API call to the service. -// the "output" return value is not valid until after Send returns without error. -// -// See StopTrainingJob for more information on using the StopTrainingJob -// API call, and error handling. -// -// This method is useful when you want to inject custom logic or configuration -// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// ListTransformJobsPages iterates over the pages of a ListTransformJobs operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. // +// See ListTransformJobs method for more information on how to use this operation. // -// // Example sending a request using the StopTrainingJobRequest method. -// req, resp := client.StopTrainingJobRequest(params) +// Note: This operation can generate multiple requests to a service. // -// err := req.Send() -// if err == nil { // resp is now filled -// fmt.Println(resp) -// } +// // Example iterating over at most 3 pages of a ListTransformJobs operation. +// pageNum := 0 +// err := client.ListTransformJobsPages(params, +// func(page *ListTransformJobsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) // -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopTrainingJob -func (c *SageMaker) StopTrainingJobRequest(input *StopTrainingJobInput) (req *request.Request, output *StopTrainingJobOutput) { - op := &request.Operation{ - Name: opStopTrainingJob, - HTTPMethod: "POST", - HTTPPath: "/", - } - - if input == nil { - input = &StopTrainingJobInput{} - } - - output = &StopTrainingJobOutput{} - req = c.newRequest(op, input, output) - req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) - req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) - return +func (c *SageMaker) ListTransformJobsPages(input *ListTransformJobsInput, fn func(*ListTransformJobsOutput, bool) bool) error { + return c.ListTransformJobsPagesWithContext(aws.BackgroundContext(), input, fn) } -// StopTrainingJob API operation for Amazon SageMaker Service. -// -// Stops a training job. To stop a job, Amazon SageMaker sends the algorithm -// the SIGTERM signal, which delays job termination for 120 seconds. Algorithms -// might use this 120-second window to save the model artifacts, so the results -// of the training is not lost. -// -// Training algorithms provided by Amazon SageMaker save the intermediate results -// of a model training job. This intermediate data is a valid model artifact. -// You can use the model artifacts that are saved when Amazon SageMaker stops -// a training job to create a model. 
-// -// When it receives a StopTrainingJob request, Amazon SageMaker changes the -// status of the job to Stopping. After Amazon SageMaker stops the job, it sets -// the status to Stopped. -// -// Returns awserr.Error for service API and SDK errors. Use runtime type assertions -// with awserr.Error's Code and Message methods to get detailed information about -// the error. -// -// See the AWS API reference guide for Amazon SageMaker Service's -// API operation StopTrainingJob for usage and error information. -// -// Returned Error Codes: -// * ErrCodeResourceNotFound "ResourceNotFound" -// Resource being access is not found. -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopTrainingJob -func (c *SageMaker) StopTrainingJob(input *StopTrainingJobInput) (*StopTrainingJobOutput, error) { - req, out := c.StopTrainingJobRequest(input) - return out, req.Send() -} - -// StopTrainingJobWithContext is the same as StopTrainingJob with the addition of -// the ability to pass a context and additional request options. -// -// See StopTrainingJob for details on how to use this API operation. +// ListTransformJobsPagesWithContext same as ListTransformJobsPages except +// it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *SageMaker) StopTrainingJobWithContext(ctx aws.Context, input *StopTrainingJobInput, opts ...request.Option) (*StopTrainingJobOutput, error) { - req, out := c.StopTrainingJobRequest(input) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return out, req.Send() +func (c *SageMaker) ListTransformJobsPagesWithContext(ctx aws.Context, input *ListTransformJobsInput, fn func(*ListTransformJobsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListTransformJobsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListTransformJobsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListTransformJobsOutput), !p.HasNextPage()) + } + return p.Err() } -const opUpdateEndpoint = "UpdateEndpoint" +const opStartNotebookInstance = "StartNotebookInstance" -// UpdateEndpointRequest generates a "aws/request.Request" representing the -// client's request for the UpdateEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// StartNotebookInstanceRequest generates a "aws/request.Request" representing the +// client's request for the StartNotebookInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateEndpoint for more information on using the UpdateEndpoint +// See StartNotebookInstance for more information on using the StartNotebookInstance // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. 
Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateEndpointRequest method. -// req, resp := client.UpdateEndpointRequest(params) +// // Example sending a request using the StartNotebookInstanceRequest method. +// req, resp := client.StartNotebookInstanceRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateEndpoint -func (c *SageMaker) UpdateEndpointRequest(input *UpdateEndpointInput) (req *request.Request, output *UpdateEndpointOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StartNotebookInstance +func (c *SageMaker) StartNotebookInstanceRequest(input *StartNotebookInstanceInput) (req *request.Request, output *StartNotebookInstanceOutput) { op := &request.Operation{ - Name: opUpdateEndpoint, + Name: opStartNotebookInstance, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateEndpointInput{} + input = &StartNotebookInstanceInput{} } - output = &UpdateEndpointOutput{} + output = &StartNotebookInstanceOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateEndpoint API operation for Amazon SageMaker Service. -// -// Deploys the new EndpointConfig specified in the request, switches to using -// newly created endpoint, and then deletes resources provisioned for the endpoint -// using the previous EndpointConfig (there is no availability loss). +// StartNotebookInstance API operation for Amazon SageMaker Service. // -// When Amazon SageMaker receives the request, it sets the endpoint status to -// Updating. After updating the endpoint, it sets the status to InService. To -// check the status of an endpoint, use the DescribeEndpoint (http://docs.aws.amazon.com/sagemaker/latest/dg/API_DescribeEndpoint.html) -// API. +// Launches an ML compute instance with the latest version of the libraries +// and attaches your ML storage volume. After configuring the notebook instance, +// Amazon SageMaker sets the notebook instance status to InService. A notebook +// instance's status must be InService before you can connect to your Jupyter +// notebook. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon SageMaker Service's -// API operation UpdateEndpoint for usage and error information. +// API operation StartNotebookInstance for usage and error information. // // Returned Error Codes: // * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" // You have exceeded an Amazon SageMaker resource limit. For example, you might // have too many training jobs created. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateEndpoint -func (c *SageMaker) UpdateEndpoint(input *UpdateEndpointInput) (*UpdateEndpointOutput, error) { - req, out := c.UpdateEndpointRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StartNotebookInstance +func (c *SageMaker) StartNotebookInstance(input *StartNotebookInstanceInput) (*StartNotebookInstanceOutput, error) { + req, out := c.StartNotebookInstanceRequest(input) return out, req.Send() } -// UpdateEndpointWithContext is the same as UpdateEndpoint with the addition of +// StartNotebookInstanceWithContext is the same as StartNotebookInstance with the addition of // the ability to pass a context and additional request options. // -// See UpdateEndpoint for details on how to use this API operation. +// See StartNotebookInstance for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *SageMaker) UpdateEndpointWithContext(ctx aws.Context, input *UpdateEndpointInput, opts ...request.Option) (*UpdateEndpointOutput, error) { - req, out := c.UpdateEndpointRequest(input) +func (c *SageMaker) StartNotebookInstanceWithContext(ctx aws.Context, input *StartNotebookInstanceInput, opts ...request.Option) (*StartNotebookInstanceOutput, error) { + req, out := c.StartNotebookInstanceRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateEndpointWeightsAndCapacities = "UpdateEndpointWeightsAndCapacities" +const opStopHyperParameterTuningJob = "StopHyperParameterTuningJob" -// UpdateEndpointWeightsAndCapacitiesRequest generates a "aws/request.Request" representing the -// client's request for the UpdateEndpointWeightsAndCapacities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// StopHyperParameterTuningJobRequest generates a "aws/request.Request" representing the +// client's request for the StopHyperParameterTuningJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateEndpointWeightsAndCapacities for more information on using the UpdateEndpointWeightsAndCapacities +// See StopHyperParameterTuningJob for more information on using the StopHyperParameterTuningJob // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateEndpointWeightsAndCapacitiesRequest method. -// req, resp := client.UpdateEndpointWeightsAndCapacitiesRequest(params) +// // Example sending a request using the StopHyperParameterTuningJobRequest method. 
+// req, resp := client.StopHyperParameterTuningJobRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateEndpointWeightsAndCapacities -func (c *SageMaker) UpdateEndpointWeightsAndCapacitiesRequest(input *UpdateEndpointWeightsAndCapacitiesInput) (req *request.Request, output *UpdateEndpointWeightsAndCapacitiesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopHyperParameterTuningJob +func (c *SageMaker) StopHyperParameterTuningJobRequest(input *StopHyperParameterTuningJobInput) (req *request.Request, output *StopHyperParameterTuningJobOutput) { op := &request.Operation{ - Name: opUpdateEndpointWeightsAndCapacities, + Name: opStopHyperParameterTuningJob, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateEndpointWeightsAndCapacitiesInput{} + input = &StopHyperParameterTuningJobInput{} } - output = &UpdateEndpointWeightsAndCapacitiesOutput{} + output = &StopHyperParameterTuningJobOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateEndpointWeightsAndCapacities API operation for Amazon SageMaker Service. +// StopHyperParameterTuningJob API operation for Amazon SageMaker Service. // -// Updates variant weight of one or more variants associated with an existing -// endpoint, or capacity of one variant associated with an existing endpoint. -// When it receives the request, Amazon SageMaker sets the endpoint status to -// Updating. After updating the endpoint, it sets the status to InService. To -// check the status of an endpoint, use the DescribeEndpoint (http://docs.aws.amazon.com/sagemaker/latest/dg/API_DescribeEndpoint.html) -// API. +// Stops a running hyperparameter tuning job and all running training jobs that +// the tuning job launched. +// +// All model artifacts output from the training jobs are stored in Amazon Simple +// Storage Service (Amazon S3). All data that the training jobs write to Amazon +// CloudWatch Logs are still available in CloudWatch. After the tuning job moves +// to the Stopped state, it releases all reserved resources for the tuning job. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon SageMaker Service's -// API operation UpdateEndpointWeightsAndCapacities for usage and error information. +// API operation StopHyperParameterTuningJob for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" -// You have exceeded an Amazon SageMaker resource limit. For example, you might -// have too many training jobs created. +// * ErrCodeResourceNotFound "ResourceNotFound" +// Resource being access is not found. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateEndpointWeightsAndCapacities -func (c *SageMaker) UpdateEndpointWeightsAndCapacities(input *UpdateEndpointWeightsAndCapacitiesInput) (*UpdateEndpointWeightsAndCapacitiesOutput, error) { - req, out := c.UpdateEndpointWeightsAndCapacitiesRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopHyperParameterTuningJob +func (c *SageMaker) StopHyperParameterTuningJob(input *StopHyperParameterTuningJobInput) (*StopHyperParameterTuningJobOutput, error) { + req, out := c.StopHyperParameterTuningJobRequest(input) return out, req.Send() } -// UpdateEndpointWeightsAndCapacitiesWithContext is the same as UpdateEndpointWeightsAndCapacities with the addition of +// StopHyperParameterTuningJobWithContext is the same as StopHyperParameterTuningJob with the addition of // the ability to pass a context and additional request options. // -// See UpdateEndpointWeightsAndCapacities for details on how to use this API operation. +// See StopHyperParameterTuningJob for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *SageMaker) UpdateEndpointWeightsAndCapacitiesWithContext(ctx aws.Context, input *UpdateEndpointWeightsAndCapacitiesInput, opts ...request.Option) (*UpdateEndpointWeightsAndCapacitiesOutput, error) { - req, out := c.UpdateEndpointWeightsAndCapacitiesRequest(input) +func (c *SageMaker) StopHyperParameterTuningJobWithContext(ctx aws.Context, input *StopHyperParameterTuningJobInput, opts ...request.Option) (*StopHyperParameterTuningJobOutput, error) { + req, out := c.StopHyperParameterTuningJobRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateNotebookInstance = "UpdateNotebookInstance" +const opStopNotebookInstance = "StopNotebookInstance" -// UpdateNotebookInstanceRequest generates a "aws/request.Request" representing the -// client's request for the UpdateNotebookInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// StopNotebookInstanceRequest generates a "aws/request.Request" representing the +// client's request for the StopNotebookInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateNotebookInstance for more information on using the UpdateNotebookInstance +// See StopNotebookInstance for more information on using the StopNotebookInstance // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateNotebookInstanceRequest method. -// req, resp := client.UpdateNotebookInstanceRequest(params) +// // Example sending a request using the StopNotebookInstanceRequest method. 
+// req, resp := client.StopNotebookInstanceRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateNotebookInstance -func (c *SageMaker) UpdateNotebookInstanceRequest(input *UpdateNotebookInstanceInput) (req *request.Request, output *UpdateNotebookInstanceOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopNotebookInstance +func (c *SageMaker) StopNotebookInstanceRequest(input *StopNotebookInstanceInput) (req *request.Request, output *StopNotebookInstanceOutput) { op := &request.Operation{ - Name: opUpdateNotebookInstance, + Name: opStopNotebookInstance, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateNotebookInstanceInput{} + input = &StopNotebookInstanceInput{} } - output = &UpdateNotebookInstanceOutput{} + output = &StopNotebookInstanceOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateNotebookInstance API operation for Amazon SageMaker Service. +// StopNotebookInstance API operation for Amazon SageMaker Service. // -// Updates a notebook instance. NotebookInstance updates include upgrading or -// downgrading the ML compute instance used for your notebook instance to accommodate -// changes in your workload requirements. You can also update the VPC security -// groups. +// Terminates the ML compute instance. Before terminating the instance, Amazon +// SageMaker disconnects the ML storage volume from it. Amazon SageMaker preserves +// the ML storage volume. +// +// To access data on the ML storage volume for a notebook instance that has +// been terminated, call the StartNotebookInstance API. StartNotebookInstance +// launches another ML compute instance, configures it, and attaches the preserved +// ML storage volume so you can continue your work. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon SageMaker Service's -// API operation UpdateNotebookInstance for usage and error information. -// -// Returned Error Codes: -// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" -// You have exceeded an Amazon SageMaker resource limit. For example, you might -// have too many training jobs created. -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateNotebookInstance -func (c *SageMaker) UpdateNotebookInstance(input *UpdateNotebookInstanceInput) (*UpdateNotebookInstanceOutput, error) { - req, out := c.UpdateNotebookInstanceRequest(input) +// API operation StopNotebookInstance for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopNotebookInstance +func (c *SageMaker) StopNotebookInstance(input *StopNotebookInstanceInput) (*StopNotebookInstanceOutput, error) { + req, out := c.StopNotebookInstanceRequest(input) return out, req.Send() } -// UpdateNotebookInstanceWithContext is the same as UpdateNotebookInstance with the addition of +// StopNotebookInstanceWithContext is the same as StopNotebookInstance with the addition of // the ability to pass a context and additional request options. // -// See UpdateNotebookInstance for details on how to use this API operation. 
+// See StopNotebookInstance for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *SageMaker) UpdateNotebookInstanceWithContext(ctx aws.Context, input *UpdateNotebookInstanceInput, opts ...request.Option) (*UpdateNotebookInstanceOutput, error) { - req, out := c.UpdateNotebookInstanceRequest(input) +func (c *SageMaker) StopNotebookInstanceWithContext(ctx aws.Context, input *StopNotebookInstanceInput, opts ...request.Option) (*StopNotebookInstanceOutput, error) { + req, out := c.StopNotebookInstanceRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opUpdateNotebookInstanceLifecycleConfig = "UpdateNotebookInstanceLifecycleConfig" +const opStopTrainingJob = "StopTrainingJob" -// UpdateNotebookInstanceLifecycleConfigRequest generates a "aws/request.Request" representing the -// client's request for the UpdateNotebookInstanceLifecycleConfig operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// StopTrainingJobRequest generates a "aws/request.Request" representing the +// client's request for the StopTrainingJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateNotebookInstanceLifecycleConfig for more information on using the UpdateNotebookInstanceLifecycleConfig +// See StopTrainingJob for more information on using the StopTrainingJob // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateNotebookInstanceLifecycleConfigRequest method. -// req, resp := client.UpdateNotebookInstanceLifecycleConfigRequest(params) +// // Example sending a request using the StopTrainingJobRequest method. 
+// req, resp := client.StopTrainingJobRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateNotebookInstanceLifecycleConfig -func (c *SageMaker) UpdateNotebookInstanceLifecycleConfigRequest(input *UpdateNotebookInstanceLifecycleConfigInput) (req *request.Request, output *UpdateNotebookInstanceLifecycleConfigOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopTrainingJob +func (c *SageMaker) StopTrainingJobRequest(input *StopTrainingJobInput) (req *request.Request, output *StopTrainingJobOutput) { op := &request.Operation{ - Name: opUpdateNotebookInstanceLifecycleConfig, + Name: opStopTrainingJob, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateNotebookInstanceLifecycleConfigInput{} + input = &StopTrainingJobInput{} } - output = &UpdateNotebookInstanceLifecycleConfigOutput{} + output = &StopTrainingJobOutput{} req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) return } -// UpdateNotebookInstanceLifecycleConfig API operation for Amazon SageMaker Service. +// StopTrainingJob API operation for Amazon SageMaker Service. +// +// Stops a training job. To stop a job, Amazon SageMaker sends the algorithm +// the SIGTERM signal, which delays job termination for 120 seconds. Algorithms +// might use this 120-second window to save the model artifacts, so the results +// of the training is not lost. +// +// Training algorithms provided by Amazon SageMaker save the intermediate results +// of a model training job. This intermediate data is a valid model artifact. +// You can use the model artifacts that are saved when Amazon SageMaker stops +// a training job to create a model. // -// Updates a notebook instance lifecycle configuration created with the API. +// When it receives a StopTrainingJob request, Amazon SageMaker changes the +// status of the job to Stopping. After Amazon SageMaker stops the job, it sets +// the status to Stopped. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon SageMaker Service's -// API operation UpdateNotebookInstanceLifecycleConfig for usage and error information. +// API operation StopTrainingJob for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" -// You have exceeded an Amazon SageMaker resource limit. For example, you might -// have too many training jobs created. +// * ErrCodeResourceNotFound "ResourceNotFound" +// Resource being access is not found. 
// -// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateNotebookInstanceLifecycleConfig -func (c *SageMaker) UpdateNotebookInstanceLifecycleConfig(input *UpdateNotebookInstanceLifecycleConfigInput) (*UpdateNotebookInstanceLifecycleConfigOutput, error) { - req, out := c.UpdateNotebookInstanceLifecycleConfigRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopTrainingJob +func (c *SageMaker) StopTrainingJob(input *StopTrainingJobInput) (*StopTrainingJobOutput, error) { + req, out := c.StopTrainingJobRequest(input) + return out, req.Send() +} + +// StopTrainingJobWithContext is the same as StopTrainingJob with the addition of +// the ability to pass a context and additional request options. +// +// See StopTrainingJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) StopTrainingJobWithContext(ctx aws.Context, input *StopTrainingJobInput, opts ...request.Option) (*StopTrainingJobOutput, error) { + req, out := c.StopTrainingJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopTransformJob = "StopTransformJob" + +// StopTransformJobRequest generates a "aws/request.Request" representing the +// client's request for the StopTransformJob operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StopTransformJob for more information on using the StopTransformJob +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopTransformJobRequest method. +// req, resp := client.StopTransformJobRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopTransformJob +func (c *SageMaker) StopTransformJobRequest(input *StopTransformJobInput) (req *request.Request, output *StopTransformJobOutput) { + op := &request.Operation{ + Name: opStopTransformJob, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopTransformJobInput{} + } + + output = &StopTransformJobOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// StopTransformJob API operation for Amazon SageMaker Service. +// +// Stops a transform job. +// +// When Amazon SageMaker receives a StopTransformJob request, the status of +// the job changes to Stopping. After Amazon SageMaker stops the job, the status +// is set to Stopped. When you stop a transform job before it is completed, +// Amazon SageMaker doesn't store the job's output in Amazon S3. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation StopTransformJob for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFound "ResourceNotFound" +// Resource being access is not found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/StopTransformJob +func (c *SageMaker) StopTransformJob(input *StopTransformJobInput) (*StopTransformJobOutput, error) { + req, out := c.StopTransformJobRequest(input) + return out, req.Send() +} + +// StopTransformJobWithContext is the same as StopTransformJob with the addition of +// the ability to pass a context and additional request options. +// +// See StopTransformJob for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) StopTransformJobWithContext(ctx aws.Context, input *StopTransformJobInput, opts ...request.Option) (*StopTransformJobOutput, error) { + req, out := c.StopTransformJobRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateEndpoint = "UpdateEndpoint" + +// UpdateEndpointRequest generates a "aws/request.Request" representing the +// client's request for the UpdateEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateEndpoint for more information on using the UpdateEndpoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateEndpointRequest method. +// req, resp := client.UpdateEndpointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateEndpoint +func (c *SageMaker) UpdateEndpointRequest(input *UpdateEndpointInput) (req *request.Request, output *UpdateEndpointOutput) { + op := &request.Operation{ + Name: opUpdateEndpoint, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateEndpointInput{} + } + + output = &UpdateEndpointOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateEndpoint API operation for Amazon SageMaker Service. +// +// Deploys the new EndpointConfig specified in the request, switches to using +// newly created endpoint, and then deletes resources provisioned for the endpoint +// using the previous EndpointConfig (there is no availability loss). +// +// When Amazon SageMaker receives the request, it sets the endpoint status to +// Updating. After updating the endpoint, it sets the status to InService. To +// check the status of an endpoint, use the DescribeEndpoint (http://docs.aws.amazon.com/sagemaker/latest/dg/API_DescribeEndpoint.html) +// API. 
+// +// You cannot update an endpoint with the current EndpointConfig. To update +// an endpoint, you must create a new EndpointConfig. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation UpdateEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" +// You have exceeded an Amazon SageMaker resource limit. For example, you might +// have too many training jobs created. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateEndpoint +func (c *SageMaker) UpdateEndpoint(input *UpdateEndpointInput) (*UpdateEndpointOutput, error) { + req, out := c.UpdateEndpointRequest(input) + return out, req.Send() +} + +// UpdateEndpointWithContext is the same as UpdateEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) UpdateEndpointWithContext(ctx aws.Context, input *UpdateEndpointInput, opts ...request.Option) (*UpdateEndpointOutput, error) { + req, out := c.UpdateEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateEndpointWeightsAndCapacities = "UpdateEndpointWeightsAndCapacities" + +// UpdateEndpointWeightsAndCapacitiesRequest generates a "aws/request.Request" representing the +// client's request for the UpdateEndpointWeightsAndCapacities operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateEndpointWeightsAndCapacities for more information on using the UpdateEndpointWeightsAndCapacities +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateEndpointWeightsAndCapacitiesRequest method. +// req, resp := client.UpdateEndpointWeightsAndCapacitiesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateEndpointWeightsAndCapacities +func (c *SageMaker) UpdateEndpointWeightsAndCapacitiesRequest(input *UpdateEndpointWeightsAndCapacitiesInput) (req *request.Request, output *UpdateEndpointWeightsAndCapacitiesOutput) { + op := &request.Operation{ + Name: opUpdateEndpointWeightsAndCapacities, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateEndpointWeightsAndCapacitiesInput{} + } + + output = &UpdateEndpointWeightsAndCapacitiesOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateEndpointWeightsAndCapacities API operation for Amazon SageMaker Service. 
+// +// Updates variant weight of one or more variants associated with an existing +// endpoint, or capacity of one variant associated with an existing endpoint. +// When it receives the request, Amazon SageMaker sets the endpoint status to +// Updating. After updating the endpoint, it sets the status to InService. To +// check the status of an endpoint, use the DescribeEndpoint (http://docs.aws.amazon.com/sagemaker/latest/dg/API_DescribeEndpoint.html) +// API. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation UpdateEndpointWeightsAndCapacities for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" +// You have exceeded an Amazon SageMaker resource limit. For example, you might +// have too many training jobs created. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateEndpointWeightsAndCapacities +func (c *SageMaker) UpdateEndpointWeightsAndCapacities(input *UpdateEndpointWeightsAndCapacitiesInput) (*UpdateEndpointWeightsAndCapacitiesOutput, error) { + req, out := c.UpdateEndpointWeightsAndCapacitiesRequest(input) + return out, req.Send() +} + +// UpdateEndpointWeightsAndCapacitiesWithContext is the same as UpdateEndpointWeightsAndCapacities with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateEndpointWeightsAndCapacities for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) UpdateEndpointWeightsAndCapacitiesWithContext(ctx aws.Context, input *UpdateEndpointWeightsAndCapacitiesInput, opts ...request.Option) (*UpdateEndpointWeightsAndCapacitiesOutput, error) { + req, out := c.UpdateEndpointWeightsAndCapacitiesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) return out, req.Send() } -// UpdateNotebookInstanceLifecycleConfigWithContext is the same as UpdateNotebookInstanceLifecycleConfig with the addition of -// the ability to pass a context and additional request options. -// -// See UpdateNotebookInstanceLifecycleConfig for details on how to use this API operation. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *SageMaker) UpdateNotebookInstanceLifecycleConfigWithContext(ctx aws.Context, input *UpdateNotebookInstanceLifecycleConfigInput, opts ...request.Option) (*UpdateNotebookInstanceLifecycleConfigOutput, error) { - req, out := c.UpdateNotebookInstanceLifecycleConfigRequest(input) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return out, req.Send() +const opUpdateNotebookInstance = "UpdateNotebookInstance" + +// UpdateNotebookInstanceRequest generates a "aws/request.Request" representing the +// client's request for the UpdateNotebookInstance operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateNotebookInstance for more information on using the UpdateNotebookInstance +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateNotebookInstanceRequest method. +// req, resp := client.UpdateNotebookInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateNotebookInstance +func (c *SageMaker) UpdateNotebookInstanceRequest(input *UpdateNotebookInstanceInput) (req *request.Request, output *UpdateNotebookInstanceOutput) { + op := &request.Operation{ + Name: opUpdateNotebookInstance, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateNotebookInstanceInput{} + } + + output = &UpdateNotebookInstanceOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateNotebookInstance API operation for Amazon SageMaker Service. +// +// Updates a notebook instance. NotebookInstance updates include upgrading or +// downgrading the ML compute instance used for your notebook instance to accommodate +// changes in your workload requirements. You can also update the VPC security +// groups. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation UpdateNotebookInstance for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" +// You have exceeded an Amazon SageMaker resource limit. For example, you might +// have too many training jobs created. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateNotebookInstance +func (c *SageMaker) UpdateNotebookInstance(input *UpdateNotebookInstanceInput) (*UpdateNotebookInstanceOutput, error) { + req, out := c.UpdateNotebookInstanceRequest(input) + return out, req.Send() +} + +// UpdateNotebookInstanceWithContext is the same as UpdateNotebookInstance with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateNotebookInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) UpdateNotebookInstanceWithContext(ctx aws.Context, input *UpdateNotebookInstanceInput, opts ...request.Option) (*UpdateNotebookInstanceOutput, error) { + req, out := c.UpdateNotebookInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateNotebookInstanceLifecycleConfig = "UpdateNotebookInstanceLifecycleConfig" + +// UpdateNotebookInstanceLifecycleConfigRequest generates a "aws/request.Request" representing the +// client's request for the UpdateNotebookInstanceLifecycleConfig operation. 
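For the UpdateNotebookInstance operation described above, a minimal sketch that changes the ML compute instance type. The input field names (NotebookInstanceName, InstanceType) are assumed from this API version rather than shown in this excerpt, the notebook name is a placeholder, and the instance generally has to be stopped before its type can be changed.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	svc := sagemaker.New(session.Must(session.NewSession()))

	// Move an existing notebook instance to a larger ML compute instance type.
	// The instance should be in the Stopped state for this kind of update.
	_, err := svc.UpdateNotebookInstance(&sagemaker.UpdateNotebookInstanceInput{
		NotebookInstanceName: aws.String("example-notebook"), // placeholder
		InstanceType:         aws.String("ml.m4.xlarge"),
	})
	if err != nil {
		log.Fatalf("UpdateNotebookInstance failed: %v", err)
	}
}
```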
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateNotebookInstanceLifecycleConfig for more information on using the UpdateNotebookInstanceLifecycleConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateNotebookInstanceLifecycleConfigRequest method. +// req, resp := client.UpdateNotebookInstanceLifecycleConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateNotebookInstanceLifecycleConfig +func (c *SageMaker) UpdateNotebookInstanceLifecycleConfigRequest(input *UpdateNotebookInstanceLifecycleConfigInput) (req *request.Request, output *UpdateNotebookInstanceLifecycleConfigOutput) { + op := &request.Operation{ + Name: opUpdateNotebookInstanceLifecycleConfig, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateNotebookInstanceLifecycleConfigInput{} + } + + output = &UpdateNotebookInstanceLifecycleConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateNotebookInstanceLifecycleConfig API operation for Amazon SageMaker Service. +// +// Updates a notebook instance lifecycle configuration created with the CreateNotebookInstanceLifecycleConfig +// API. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation UpdateNotebookInstanceLifecycleConfig for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceLimitExceeded "ResourceLimitExceeded" +// You have exceeded an Amazon SageMaker resource limit. For example, you might +// have too many training jobs created. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateNotebookInstanceLifecycleConfig +func (c *SageMaker) UpdateNotebookInstanceLifecycleConfig(input *UpdateNotebookInstanceLifecycleConfigInput) (*UpdateNotebookInstanceLifecycleConfigOutput, error) { + req, out := c.UpdateNotebookInstanceLifecycleConfigRequest(input) + return out, req.Send() +} + +// UpdateNotebookInstanceLifecycleConfigWithContext is the same as UpdateNotebookInstanceLifecycleConfig with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateNotebookInstanceLifecycleConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SageMaker) UpdateNotebookInstanceLifecycleConfigWithContext(ctx aws.Context, input *UpdateNotebookInstanceLifecycleConfigInput, opts ...request.Option) (*UpdateNotebookInstanceLifecycleConfigOutput, error) { + req, out := c.UpdateNotebookInstanceLifecycleConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AddTagsInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource that you want to tag. + // + // ResourceArn is a required field + ResourceArn *string `type:"string" required:"true"` + + // An array of Tag objects. Each tag is a key-value pair. Only the key parameter + // is required. If you don't specify a value, Amazon SageMaker sets the value + // to an empty string. + // + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` +} + +// String returns the string representation +func (s AddTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddTagsInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *AddTagsInput) SetResourceArn(v string) *AddTagsInput { + s.ResourceArn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *AddTagsInput) SetTags(v []*Tag) *AddTagsInput { + s.Tags = v + return s +} + +type AddTagsOutput struct { + _ struct{} `type:"structure"` + + // A list of tags associated with the Amazon SageMaker resource. + Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s AddTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsOutput) GoString() string { + return s.String() +} + +// SetTags sets the Tags field's value. +func (s *AddTagsOutput) SetTags(v []*Tag) *AddTagsOutput { + s.Tags = v + return s +} + +// Specifies the training algorithm to use in a CreateTrainingJob (http://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) +// request. +// +// For more information about algorithms provided by Amazon SageMaker, see Algorithms +// (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html). For information +// about using your own algorithms, see Using Your Own Algorithms with Amazon +// SageMaker (http://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html). +type AlgorithmSpecification struct { + _ struct{} `type:"structure"` + + // A list of metric definition objects. Each object specifies the metric name + // and regular expressions used to parse algorithm logs. Amazon SageMaker publishes + // each metric to Amazon CloudWatch. + MetricDefinitions []*MetricDefinition `type:"list"` + + // The registry path of the Docker image that contains the training algorithm. 
+ // For information about docker registry paths for built-in algorithms, see + // Algorithms Provided by Amazon SageMaker: Common Parameters (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). + TrainingImage *string `type:"string"` + + // The input mode that the algorithm supports. For the input modes that Amazon + // SageMaker algorithms support, see Algorithms (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html). + // If an algorithm supports the File input mode, Amazon SageMaker downloads + // the training data from S3 to the provisioned ML storage Volume, and mounts + // the directory to docker volume for training container. If an algorithm supports + // the Pipe input mode, Amazon SageMaker streams data directly from S3 to the + // container. + // + // In File mode, make sure you provision ML storage volume with sufficient capacity + // to accommodate the data download from S3. In addition to the training data, + // the ML storage volume also stores the output model. The algorithm container + // use ML storage volume to also store intermediate information, if any. + // + // For distributed algorithms using File mode, training data is distributed + // uniformly, and your training duration is predictable if the input data objects + // size is approximately same. Amazon SageMaker does not split the files any + // further for model training. If the object sizes are skewed, training won't + // be optimal as the data distribution is also skewed where one host in a training + // cluster is overloaded, thus becoming bottleneck in training. + // + // TrainingInputMode is a required field + TrainingInputMode *string `type:"string" required:"true" enum:"TrainingInputMode"` +} + +// String returns the string representation +func (s AlgorithmSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AlgorithmSpecification) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AlgorithmSpecification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AlgorithmSpecification"} + if s.TrainingInputMode == nil { + invalidParams.Add(request.NewErrParamRequired("TrainingInputMode")) + } + if s.MetricDefinitions != nil { + for i, v := range s.MetricDefinitions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "MetricDefinitions", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMetricDefinitions sets the MetricDefinitions field's value. +func (s *AlgorithmSpecification) SetMetricDefinitions(v []*MetricDefinition) *AlgorithmSpecification { + s.MetricDefinitions = v + return s +} + +// SetTrainingImage sets the TrainingImage field's value. +func (s *AlgorithmSpecification) SetTrainingImage(v string) *AlgorithmSpecification { + s.TrainingImage = &v + return s +} + +// SetTrainingInputMode sets the TrainingInputMode field's value. +func (s *AlgorithmSpecification) SetTrainingInputMode(v string) *AlgorithmSpecification { + s.TrainingInputMode = &v + return s +} + +// A list of categorical hyperparameters to tune. +type CategoricalParameterRange struct { + _ struct{} `type:"structure"` + + // The name of the categorical hyperparameter to tune. 
+ // + // Name is a required field + Name *string `type:"string" required:"true"` + + // A list of the categories for the hyperparameter. + // + // Values is a required field + Values []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s CategoricalParameterRange) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CategoricalParameterRange) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CategoricalParameterRange) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CategoricalParameterRange"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Values == nil { + invalidParams.Add(request.NewErrParamRequired("Values")) + } + if s.Values != nil && len(s.Values) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Values", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *CategoricalParameterRange) SetName(v string) *CategoricalParameterRange { + s.Name = &v + return s +} + +// SetValues sets the Values field's value. +func (s *CategoricalParameterRange) SetValues(v []*string) *CategoricalParameterRange { + s.Values = v + return s +} + +// A channel is a named input source that training algorithms can consume. +type Channel struct { + _ struct{} `type:"structure"` + + // The name of the channel. + // + // ChannelName is a required field + ChannelName *string `min:"1" type:"string" required:"true"` + + // If training data is compressed, the compression type. The default value is + // None. CompressionType is used only in Pipe input mode. In File mode, leave + // this field unset or set it to None. + CompressionType *string `type:"string" enum:"CompressionType"` + + // The MIME type of the data. + ContentType *string `type:"string"` + + // The location of the channel data. + // + // DataSource is a required field + DataSource *DataSource `type:"structure" required:"true"` + + // (Optional) The input mode to use for the data channel in a training job. + // If you don't set a value for InputMode, Amazon SageMaker uses the value set + // for TrainingInputMode. Use this parameter to override the TrainingInputMode + // setting in a AlgorithmSpecification request when you have a channel that + // needs a different input mode from the training job's general setting. To + // download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned + // ML storage volume, and mount the directory to a Docker volume, use File input + // mode. To stream data directly from Amazon S3 to the container, choose Pipe + // input mode. + // + // To use a model for incremental training, choose File input model. + InputMode *string `type:"string" enum:"TrainingInputMode"` + + // Specify RecordIO as the value when input data is in raw format but the training + // algorithm requires the RecordIO format, in which case, Amazon SageMaker wraps + // each individual S3 object in a RecordIO record. If the input data is already + // in RecordIO format, you don't need to set this attribute. 
For more information, + // see Create a Dataset Using RecordIO (https://mxnet.incubator.apache.org/architecture/note_data_loading.html#data-format) + RecordWrapperType *string `type:"string" enum:"RecordWrapper"` +} + +// String returns the string representation +func (s Channel) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Channel) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Channel) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Channel"} + if s.ChannelName == nil { + invalidParams.Add(request.NewErrParamRequired("ChannelName")) + } + if s.ChannelName != nil && len(*s.ChannelName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ChannelName", 1)) + } + if s.DataSource == nil { + invalidParams.Add(request.NewErrParamRequired("DataSource")) + } + if s.DataSource != nil { + if err := s.DataSource.Validate(); err != nil { + invalidParams.AddNested("DataSource", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChannelName sets the ChannelName field's value. +func (s *Channel) SetChannelName(v string) *Channel { + s.ChannelName = &v + return s +} + +// SetCompressionType sets the CompressionType field's value. +func (s *Channel) SetCompressionType(v string) *Channel { + s.CompressionType = &v + return s +} + +// SetContentType sets the ContentType field's value. +func (s *Channel) SetContentType(v string) *Channel { + s.ContentType = &v + return s +} + +// SetDataSource sets the DataSource field's value. +func (s *Channel) SetDataSource(v *DataSource) *Channel { + s.DataSource = v + return s +} + +// SetInputMode sets the InputMode field's value. +func (s *Channel) SetInputMode(v string) *Channel { + s.InputMode = &v + return s +} + +// SetRecordWrapperType sets the RecordWrapperType field's value. +func (s *Channel) SetRecordWrapperType(v string) *Channel { + s.RecordWrapperType = &v + return s +} + +// Describes the container, as part of model definition. +type ContainerDefinition struct { + _ struct{} `type:"structure"` + + // The DNS host name for the container after Amazon SageMaker deploys it. + ContainerHostname *string `type:"string"` + + // The environment variables to set in the Docker container. Each key and value + // in the Environment string to string map can have length of up to 1024. We + // support up to 16 entries in the map. + Environment map[string]*string `type:"map"` + + // The Amazon EC2 Container Registry (Amazon ECR) path where inference code + // is stored. If you are using your own custom algorithm instead of an algorithm + // provided by Amazon SageMaker, the inference code must meet Amazon SageMaker + // requirements. Amazon SageMaker supports both registry/repository[:tag] and + // registry/repository[@digest] image path formats. For more information, see + // Using Your Own Algorithms with Amazon SageMaker (http://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html) + // + // Image is a required field + Image *string `type:"string" required:"true"` + + // The S3 path where the model artifacts, which result from model training, + // are stored. This path must point to a single gzip compressed tar archive + // (.tar.gz suffix). + // + // If you provide a value for this parameter, Amazon SageMaker uses AWS Security + // Token Service to download model artifacts from the S3 path you provide. 
AWS + // STS is activated in your IAM user account by default. If you previously deactivated + // AWS STS for a region, you need to reactivate AWS STS for that region. For + // more information, see Activating and Deactivating AWS STS in an AWS Region + // (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) + // in the AWS Identity and Access Management User Guide. + ModelDataUrl *string `type:"string"` +} + +// String returns the string representation +func (s ContainerDefinition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContainerDefinition) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ContainerDefinition) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ContainerDefinition"} + if s.Image == nil { + invalidParams.Add(request.NewErrParamRequired("Image")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetContainerHostname sets the ContainerHostname field's value. +func (s *ContainerDefinition) SetContainerHostname(v string) *ContainerDefinition { + s.ContainerHostname = &v + return s +} + +// SetEnvironment sets the Environment field's value. +func (s *ContainerDefinition) SetEnvironment(v map[string]*string) *ContainerDefinition { + s.Environment = v + return s +} + +// SetImage sets the Image field's value. +func (s *ContainerDefinition) SetImage(v string) *ContainerDefinition { + s.Image = &v + return s +} + +// SetModelDataUrl sets the ModelDataUrl field's value. +func (s *ContainerDefinition) SetModelDataUrl(v string) *ContainerDefinition { + s.ModelDataUrl = &v + return s +} + +// A list of continuous hyperparameters to tune. +type ContinuousParameterRange struct { + _ struct{} `type:"structure"` + + // The maximum value for the hyperparameter. The tuning job uses floating-point + // values between MinValue value and this value for tuning. + // + // MaxValue is a required field + MaxValue *string `type:"string" required:"true"` + + // The minimum value for the hyperparameter. The tuning job uses floating-point + // values between this value and MaxValuefor tuning. + // + // MinValue is a required field + MinValue *string `type:"string" required:"true"` + + // The name of the continuous hyperparameter to tune. + // + // Name is a required field + Name *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ContinuousParameterRange) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ContinuousParameterRange) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ContinuousParameterRange) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ContinuousParameterRange"} + if s.MaxValue == nil { + invalidParams.Add(request.NewErrParamRequired("MaxValue")) + } + if s.MinValue == nil { + invalidParams.Add(request.NewErrParamRequired("MinValue")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxValue sets the MaxValue field's value. +func (s *ContinuousParameterRange) SetMaxValue(v string) *ContinuousParameterRange { + s.MaxValue = &v + return s +} + +// SetMinValue sets the MinValue field's value. 
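The categorical and continuous parameter-range types above feed the hyperparameter tuning configuration. Below is a small sketch that builds one range of each kind with the generated setters; the ParameterRanges container is assumed from the same API version (it is not in this excerpt), and the hyperparameter names and bounds are illustrative.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	// One categorical and one continuous hyperparameter range, built with the
	// setters generated above. MinValue/MaxValue are strings in this API even
	// though they represent floating-point bounds.
	ranges := &sagemaker.ParameterRanges{
		CategoricalParameterRanges: []*sagemaker.CategoricalParameterRange{
			(&sagemaker.CategoricalParameterRange{}).
				SetName("optimizer").
				SetValues([]*string{aws.String("sgd"), aws.String("adam")}),
		},
		ContinuousParameterRanges: []*sagemaker.ContinuousParameterRange{
			(&sagemaker.ContinuousParameterRange{}).
				SetName("learning_rate").
				SetMinValue("0.0001").
				SetMaxValue("0.1"),
		},
	}
	fmt.Println(ranges.String())
}
```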
+func (s *ContinuousParameterRange) SetMinValue(v string) *ContinuousParameterRange { + s.MinValue = &v + return s +} + +// SetName sets the Name field's value. +func (s *ContinuousParameterRange) SetName(v string) *ContinuousParameterRange { + s.Name = &v + return s +} + +type CreateEndpointConfigInput struct { + _ struct{} `type:"structure"` + + // The name of the endpoint configuration. You specify this name in a CreateEndpoint + // (http://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateEndpoint.html) + // request. + // + // EndpointConfigName is a required field + EndpointConfigName *string `type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of a AWS Key Management Service key that Amazon + // SageMaker uses to encrypt data on the storage volume attached to the ML compute + // instance that hosts the endpoint. + KmsKeyId *string `type:"string"` + + // An array of ProductionVariant objects, one for each model that you want to + // host at this endpoint. + // + // ProductionVariants is a required field + ProductionVariants []*ProductionVariant `min:"1" type:"list" required:"true"` + + // An array of key-value pairs. For more information, see Using Cost Allocation + // Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what) + // in the AWS Billing and Cost Management User Guide. + Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s CreateEndpointConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateEndpointConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateEndpointConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateEndpointConfigInput"} + if s.EndpointConfigName == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointConfigName")) + } + if s.ProductionVariants == nil { + invalidParams.Add(request.NewErrParamRequired("ProductionVariants")) + } + if s.ProductionVariants != nil && len(s.ProductionVariants) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProductionVariants", 1)) + } + if s.ProductionVariants != nil { + for i, v := range s.ProductionVariants { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ProductionVariants", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndpointConfigName sets the EndpointConfigName field's value. +func (s *CreateEndpointConfigInput) SetEndpointConfigName(v string) *CreateEndpointConfigInput { + s.EndpointConfigName = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *CreateEndpointConfigInput) SetKmsKeyId(v string) *CreateEndpointConfigInput { + s.KmsKeyId = &v + return s +} + +// SetProductionVariants sets the ProductionVariants field's value. +func (s *CreateEndpointConfigInput) SetProductionVariants(v []*ProductionVariant) *CreateEndpointConfigInput { + s.ProductionVariants = v + return s +} + +// SetTags sets the Tags field's value. 
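A sketch of constructing a CreateEndpointConfigInput with a single production variant. The CreateEndpointConfig operation itself is generated elsewhere in this file, and the ProductionVariant field names (VariantName, ModelName, InstanceType, InitialInstanceCount, InitialVariantWeight) are assumed from this API version; all resource names are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	svc := sagemaker.New(session.Must(session.NewSession()))

	// One production variant serving a previously created model.
	out, err := svc.CreateEndpointConfig(&sagemaker.CreateEndpointConfigInput{
		EndpointConfigName: aws.String("example-endpoint-config"), // placeholder
		ProductionVariants: []*sagemaker.ProductionVariant{
			{
				VariantName:          aws.String("variant-1"),
				ModelName:            aws.String("example-model"), // placeholder
				InstanceType:         aws.String("ml.t2.medium"),
				InitialInstanceCount: aws.Int64(1),
				InitialVariantWeight: aws.Float64(1.0),
			},
		},
	})
	if err != nil {
		log.Fatalf("CreateEndpointConfig failed: %v", err)
	}
	fmt.Println(aws.StringValue(out.EndpointConfigArn))
}
```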
+func (s *CreateEndpointConfigInput) SetTags(v []*Tag) *CreateEndpointConfigInput { + s.Tags = v + return s +} + +type CreateEndpointConfigOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the endpoint configuration. + // + // EndpointConfigArn is a required field + EndpointConfigArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateEndpointConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateEndpointConfigOutput) GoString() string { + return s.String() +} + +// SetEndpointConfigArn sets the EndpointConfigArn field's value. +func (s *CreateEndpointConfigOutput) SetEndpointConfigArn(v string) *CreateEndpointConfigOutput { + s.EndpointConfigArn = &v + return s +} + +type CreateEndpointInput struct { + _ struct{} `type:"structure"` + + // The name of an endpoint configuration. For more information, see CreateEndpointConfig + // (http://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateEndpointConfig.html). + // + // EndpointConfigName is a required field + EndpointConfigName *string `type:"string" required:"true"` + + // The name of the endpoint. The name must be unique within an AWS Region in + // your AWS account. + // + // EndpointName is a required field + EndpointName *string `type:"string" required:"true"` + + // An array of key-value pairs. For more information, see Using Cost Allocation + // Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what)in + // the AWS Billing and Cost Management User Guide. + Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s CreateEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateEndpointInput"} + if s.EndpointConfigName == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointConfigName")) + } + if s.EndpointName == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointName")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndpointConfigName sets the EndpointConfigName field's value. +func (s *CreateEndpointInput) SetEndpointConfigName(v string) *CreateEndpointInput { + s.EndpointConfigName = &v + return s +} + +// SetEndpointName sets the EndpointName field's value. +func (s *CreateEndpointInput) SetEndpointName(v string) *CreateEndpointInput { + s.EndpointName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateEndpointInput) SetTags(v []*Tag) *CreateEndpointInput { + s.Tags = v + return s +} + +type CreateEndpointOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the endpoint. 
+ // + // EndpointArn is a required field + EndpointArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateEndpointOutput) GoString() string { + return s.String() +} + +// SetEndpointArn sets the EndpointArn field's value. +func (s *CreateEndpointOutput) SetEndpointArn(v string) *CreateEndpointOutput { + s.EndpointArn = &v + return s +} + +type CreateHyperParameterTuningJobInput struct { + _ struct{} `type:"structure"` + + // The HyperParameterTuningJobConfig object that describes the tuning job, including + // the search strategy, the objective metric used to evaluate training jobs, + // ranges of parameters to search, and resource limits for the tuning job. For + // more information, see automatic-model-tuning + // + // HyperParameterTuningJobConfig is a required field + HyperParameterTuningJobConfig *HyperParameterTuningJobConfig `type:"structure" required:"true"` + + // The name of the tuning job. This name is the prefix for the names of all + // training jobs that this tuning job launches. The name must be unique within + // the same AWS account and AWS Region. The name must have { } to { } characters. + // Valid characters are a-z, A-Z, 0-9, and : + = @ _ % - (hyphen). The name + // is not case sensitive. + // + // HyperParameterTuningJobName is a required field + HyperParameterTuningJobName *string `min:"1" type:"string" required:"true"` + + // An array of key-value pairs. You can use tags to categorize your AWS resources + // in different ways, for example, by purpose, owner, or environment. For more + // information, see AWS Tagging Strategies (https://aws.amazon.com/answers/account-management/aws-tagging-strategies/). + // + // Tags that you specify for the tuning job are also added to all training jobs + // that the tuning job launches. + Tags []*Tag `type:"list"` + + // The HyperParameterTrainingJobDefinition object that describes the training + // jobs that this tuning job launches, including static hyperparameters, input + // data configuration, output data configuration, resource configuration, and + // stopping condition. + // + // TrainingJobDefinition is a required field + TrainingJobDefinition *HyperParameterTrainingJobDefinition `type:"structure" required:"true"` + + // Specifies configuration for starting the hyperparameter tuning job using + // one or more previous tuning jobs as a starting point. The results of previous + // tuning jobs are used to inform which combinations of hyperparameters to search + // over in the new tuning job. + // + // All training jobs launched by the new hyperparameter tuning job are evaluated + // by using the objective metric. If you specify IDENTICAL_DATA_AND_ALGORITHM + // as the WarmStartType for the warm start configuration, the training job that + // performs the best in the new tuning job is compared to the best training + // jobs from the parent tuning jobs. From these, the training job that performs + // the best as measured by the objective metric is returned as the overall best + // training job. + // + // All training jobs launched by parent hyperparameter tuning jobs and the new + // hyperparameter tuning jobs count against the limit of training jobs for the + // tuning job. 
+ WarmStartConfig *HyperParameterTuningJobWarmStartConfig `type:"structure"` +} + +// String returns the string representation +func (s CreateHyperParameterTuningJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateHyperParameterTuningJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateHyperParameterTuningJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateHyperParameterTuningJobInput"} + if s.HyperParameterTuningJobConfig == nil { + invalidParams.Add(request.NewErrParamRequired("HyperParameterTuningJobConfig")) + } + if s.HyperParameterTuningJobName == nil { + invalidParams.Add(request.NewErrParamRequired("HyperParameterTuningJobName")) + } + if s.HyperParameterTuningJobName != nil && len(*s.HyperParameterTuningJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("HyperParameterTuningJobName", 1)) + } + if s.TrainingJobDefinition == nil { + invalidParams.Add(request.NewErrParamRequired("TrainingJobDefinition")) + } + if s.HyperParameterTuningJobConfig != nil { + if err := s.HyperParameterTuningJobConfig.Validate(); err != nil { + invalidParams.AddNested("HyperParameterTuningJobConfig", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + if s.TrainingJobDefinition != nil { + if err := s.TrainingJobDefinition.Validate(); err != nil { + invalidParams.AddNested("TrainingJobDefinition", err.(request.ErrInvalidParams)) + } + } + if s.WarmStartConfig != nil { + if err := s.WarmStartConfig.Validate(); err != nil { + invalidParams.AddNested("WarmStartConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetHyperParameterTuningJobConfig sets the HyperParameterTuningJobConfig field's value. +func (s *CreateHyperParameterTuningJobInput) SetHyperParameterTuningJobConfig(v *HyperParameterTuningJobConfig) *CreateHyperParameterTuningJobInput { + s.HyperParameterTuningJobConfig = v + return s +} + +// SetHyperParameterTuningJobName sets the HyperParameterTuningJobName field's value. +func (s *CreateHyperParameterTuningJobInput) SetHyperParameterTuningJobName(v string) *CreateHyperParameterTuningJobInput { + s.HyperParameterTuningJobName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateHyperParameterTuningJobInput) SetTags(v []*Tag) *CreateHyperParameterTuningJobInput { + s.Tags = v + return s +} + +// SetTrainingJobDefinition sets the TrainingJobDefinition field's value. +func (s *CreateHyperParameterTuningJobInput) SetTrainingJobDefinition(v *HyperParameterTrainingJobDefinition) *CreateHyperParameterTuningJobInput { + s.TrainingJobDefinition = v + return s +} + +// SetWarmStartConfig sets the WarmStartConfig field's value. +func (s *CreateHyperParameterTuningJobInput) SetWarmStartConfig(v *HyperParameterTuningJobWarmStartConfig) *CreateHyperParameterTuningJobInput { + s.WarmStartConfig = v + return s +} + +type CreateHyperParameterTuningJobOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the tuning job. Amazon SageMaker assigns + // an ARN to a hyperparameter tuning job when you create it. 
+ // + // HyperParameterTuningJobArn is a required field + HyperParameterTuningJobArn *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateHyperParameterTuningJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateHyperParameterTuningJobOutput) GoString() string { + return s.String() +} + +// SetHyperParameterTuningJobArn sets the HyperParameterTuningJobArn field's value. +func (s *CreateHyperParameterTuningJobOutput) SetHyperParameterTuningJobArn(v string) *CreateHyperParameterTuningJobOutput { + s.HyperParameterTuningJobArn = &v + return s +} + +type CreateModelInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can + // assume to access model artifacts and docker image for deployment on ML compute + // instances or for batch transform jobs. Deploying on ML compute instances + // is part of model hosting. For more information, see Amazon SageMaker Roles + // (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). + // + // To be able to pass this role to Amazon SageMaker, the caller of this API + // must have the iam:PassRole permission. + // + // ExecutionRoleArn is a required field + ExecutionRoleArn *string `min:"20" type:"string" required:"true"` + + // The name of the new model. + // + // ModelName is a required field + ModelName *string `type:"string" required:"true"` + + // The location of the primary docker image containing inference code, associated + // artifacts, and custom environment map that the inference code uses when the + // model is deployed for predictions. + // + // PrimaryContainer is a required field + PrimaryContainer *ContainerDefinition `type:"structure" required:"true"` + + // An array of key-value pairs. For more information, see Using Cost Allocation + // Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what) + // in the AWS Billing and Cost Management User Guide. + Tags []*Tag `type:"list"` + + // A VpcConfig object that specifies the VPC that you want your model to connect + // to. Control access to and from your model container by configuring the VPC. + // VpcConfig is used in hosting services and in batch transform. For more information, + // see Protect Endpoints by Using an Amazon Virtual Private Cloud (http://docs.aws.amazon.com/sagemaker/latest/dg/host-vpc.html) + // and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private + // Cloud (http://docs.aws.amazon.com/sagemaker/latest/dg/batch-vpc.html). + VpcConfig *VpcConfig `type:"structure"` +} + +// String returns the string representation +func (s CreateModelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateModelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
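Tying CreateModelInput to the ContainerDefinition type defined earlier, here is a minimal sketch of creating a model from a stored artifact. The CreateModel operation is generated elsewhere in this file; the image URI, S3 path, role ARN, and model name are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	svc := sagemaker.New(session.Must(session.NewSession()))

	// The primary container points at an inference image in ECR and a
	// gzip-compressed model artifact in S3, per the field docs above.
	out, err := svc.CreateModel(&sagemaker.CreateModelInput{
		ModelName:        aws.String("example-model"),
		ExecutionRoleArn: aws.String("arn:aws:iam::123456789012:role/example-sagemaker-role"),
		PrimaryContainer: &sagemaker.ContainerDefinition{
			Image:        aws.String("123456789012.dkr.ecr.us-west-2.amazonaws.com/example:latest"),
			ModelDataUrl: aws.String("s3://example-bucket/model.tar.gz"),
		},
	})
	if err != nil {
		log.Fatalf("CreateModel failed: %v", err)
	}
	fmt.Println(aws.StringValue(out.ModelArn))
}
```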
+func (s *CreateModelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateModelInput"} + if s.ExecutionRoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("ExecutionRoleArn")) + } + if s.ExecutionRoleArn != nil && len(*s.ExecutionRoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("ExecutionRoleArn", 20)) + } + if s.ModelName == nil { + invalidParams.Add(request.NewErrParamRequired("ModelName")) + } + if s.PrimaryContainer == nil { + invalidParams.Add(request.NewErrParamRequired("PrimaryContainer")) + } + if s.PrimaryContainer != nil { + if err := s.PrimaryContainer.Validate(); err != nil { + invalidParams.AddNested("PrimaryContainer", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + if s.VpcConfig != nil { + if err := s.VpcConfig.Validate(); err != nil { + invalidParams.AddNested("VpcConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExecutionRoleArn sets the ExecutionRoleArn field's value. +func (s *CreateModelInput) SetExecutionRoleArn(v string) *CreateModelInput { + s.ExecutionRoleArn = &v + return s +} + +// SetModelName sets the ModelName field's value. +func (s *CreateModelInput) SetModelName(v string) *CreateModelInput { + s.ModelName = &v + return s +} + +// SetPrimaryContainer sets the PrimaryContainer field's value. +func (s *CreateModelInput) SetPrimaryContainer(v *ContainerDefinition) *CreateModelInput { + s.PrimaryContainer = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateModelInput) SetTags(v []*Tag) *CreateModelInput { + s.Tags = v + return s +} + +// SetVpcConfig sets the VpcConfig field's value. +func (s *CreateModelInput) SetVpcConfig(v *VpcConfig) *CreateModelInput { + s.VpcConfig = v + return s +} + +type CreateModelOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the model created in Amazon SageMaker. + // + // ModelArn is a required field + ModelArn *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateModelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateModelOutput) GoString() string { + return s.String() +} + +// SetModelArn sets the ModelArn field's value. +func (s *CreateModelOutput) SetModelArn(v string) *CreateModelOutput { + s.ModelArn = &v + return s +} + +type CreateNotebookInstanceInput struct { + _ struct{} `type:"structure"` + + // Sets whether Amazon SageMaker provides internet access to the notebook instance. + // If you set this to Disabled this notebook instance will be able to access + // resources only in your VPC, and will not be able to connect to Amazon SageMaker + // training and endpoint services unless your configure a NAT Gateway in your + // VPC. + // + // For more information, see Notebook Instances Are Internet-Enabled by Default + // (http://docs.aws.amazon.com/sagemaker/latest/dg/appendix-additional-considerations.html#appendix-notebook-and-internet-access). + // You can set the value of this parameter to Disabled only if you set a value + // for the SubnetId parameter. 
+ DirectInternetAccess *string `type:"string" enum:"DirectInternetAccess"` + + // The type of ML compute instance to launch for the notebook instance. + // + // InstanceType is a required field + InstanceType *string `type:"string" required:"true" enum:"InstanceType"` + + // If you provide a AWS KMS key ID, Amazon SageMaker uses it to encrypt data + // at rest on the ML storage volume that is attached to your notebook instance. + // The KMS key you provide must be enabled. For information, see Enabling and + // Disabling Keys (http://docs.aws.amazon.com/kms/latest/developerguide/enabling-keys.html) + // in the AWS Key Management Service Developer Guide. + KmsKeyId *string `type:"string"` + + // The name of a lifecycle configuration to associate with the notebook instance. + // For information about lifestyle configurations, see Step 2.1: (Optional) + // Customize a Notebook Instance (http://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html). + LifecycleConfigName *string `type:"string"` + + // The name of the new notebook instance. + // + // NotebookInstanceName is a required field + NotebookInstanceName *string `type:"string" required:"true"` + + // When you send any requests to AWS resources from the notebook instance, Amazon + // SageMaker assumes this role to perform tasks on your behalf. You must grant + // this role necessary permissions so Amazon SageMaker can perform these tasks. + // The policy must allow the Amazon SageMaker service principal (sagemaker.amazonaws.com) + // permissions to assume this role. For more information, see Amazon SageMaker + // Roles (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). + // + // To be able to pass this role to Amazon SageMaker, the caller of this API + // must have the iam:PassRole permission. + // + // RoleArn is a required field + RoleArn *string `min:"20" type:"string" required:"true"` + + // The VPC security group IDs, in the form sg-xxxxxxxx. The security groups + // must be for the same VPC as specified in the subnet. + SecurityGroupIds []*string `type:"list"` + + // The ID of the subnet in a VPC to which you would like to have a connectivity + // from your ML compute instance. + SubnetId *string `type:"string"` + + // A list of tags to associate with the notebook instance. You can add tags + // later by using the CreateTags API. + Tags []*Tag `type:"list"` + + // The size, in GB, of the ML storage volume to attach to the notebook instance. + // The default value is 5 GB. + VolumeSizeInGB *int64 `min:"5" type:"integer"` +} + +// String returns the string representation +func (s CreateNotebookInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateNotebookInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
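A minimal sketch of a CreateNotebookInstanceInput using only the required fields plus a volume size. The CreateNotebookInstance operation is generated elsewhere in this file; the notebook name and role ARN are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	svc := sagemaker.New(session.Must(session.NewSession()))

	// Required fields only, plus the minimum 5 GB ML storage volume.
	out, err := svc.CreateNotebookInstance(&sagemaker.CreateNotebookInstanceInput{
		NotebookInstanceName: aws.String("example-notebook"), // placeholder
		InstanceType:         aws.String("ml.t2.medium"),
		RoleArn:              aws.String("arn:aws:iam::123456789012:role/example-sagemaker-role"),
		VolumeSizeInGB:       aws.Int64(5),
	})
	if err != nil {
		log.Fatalf("CreateNotebookInstance failed: %v", err)
	}
	fmt.Println(aws.StringValue(out.NotebookInstanceArn))
}
```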
+func (s *CreateNotebookInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateNotebookInstanceInput"} + if s.InstanceType == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceType")) + } + if s.NotebookInstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.VolumeSizeInGB != nil && *s.VolumeSizeInGB < 5 { + invalidParams.Add(request.NewErrParamMinValue("VolumeSizeInGB", 5)) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDirectInternetAccess sets the DirectInternetAccess field's value. +func (s *CreateNotebookInstanceInput) SetDirectInternetAccess(v string) *CreateNotebookInstanceInput { + s.DirectInternetAccess = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *CreateNotebookInstanceInput) SetInstanceType(v string) *CreateNotebookInstanceInput { + s.InstanceType = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *CreateNotebookInstanceInput) SetKmsKeyId(v string) *CreateNotebookInstanceInput { + s.KmsKeyId = &v + return s +} + +// SetLifecycleConfigName sets the LifecycleConfigName field's value. +func (s *CreateNotebookInstanceInput) SetLifecycleConfigName(v string) *CreateNotebookInstanceInput { + s.LifecycleConfigName = &v + return s +} + +// SetNotebookInstanceName sets the NotebookInstanceName field's value. +func (s *CreateNotebookInstanceInput) SetNotebookInstanceName(v string) *CreateNotebookInstanceInput { + s.NotebookInstanceName = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CreateNotebookInstanceInput) SetRoleArn(v string) *CreateNotebookInstanceInput { + s.RoleArn = &v + return s +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *CreateNotebookInstanceInput) SetSecurityGroupIds(v []*string) *CreateNotebookInstanceInput { + s.SecurityGroupIds = v + return s +} + +// SetSubnetId sets the SubnetId field's value. +func (s *CreateNotebookInstanceInput) SetSubnetId(v string) *CreateNotebookInstanceInput { + s.SubnetId = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateNotebookInstanceInput) SetTags(v []*Tag) *CreateNotebookInstanceInput { + s.Tags = v + return s +} + +// SetVolumeSizeInGB sets the VolumeSizeInGB field's value. +func (s *CreateNotebookInstanceInput) SetVolumeSizeInGB(v int64) *CreateNotebookInstanceInput { + s.VolumeSizeInGB = &v + return s +} + +type CreateNotebookInstanceLifecycleConfigInput struct { + _ struct{} `type:"structure"` + + // The name of the lifecycle configuration. + // + // NotebookInstanceLifecycleConfigName is a required field + NotebookInstanceLifecycleConfigName *string `type:"string" required:"true"` + + // A shell script that runs only once, when you create a notebook instance. + // The shell script must be a base64-encoded string. + OnCreate []*NotebookInstanceLifecycleHook `type:"list"` + + // A shell script that runs every time you start a notebook instance, including + // when you create the notebook instance. 
The shell script must be a base64-encoded + // string. + OnStart []*NotebookInstanceLifecycleHook `type:"list"` +} + +// String returns the string representation +func (s CreateNotebookInstanceLifecycleConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateNotebookInstanceLifecycleConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateNotebookInstanceLifecycleConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateNotebookInstanceLifecycleConfigInput"} + if s.NotebookInstanceLifecycleConfigName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceLifecycleConfigName")) + } + if s.OnCreate != nil { + for i, v := range s.OnCreate { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OnCreate", i), err.(request.ErrInvalidParams)) + } + } + } + if s.OnStart != nil { + for i, v := range s.OnStart { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OnStart", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. +func (s *CreateNotebookInstanceLifecycleConfigInput) SetNotebookInstanceLifecycleConfigName(v string) *CreateNotebookInstanceLifecycleConfigInput { + s.NotebookInstanceLifecycleConfigName = &v + return s +} + +// SetOnCreate sets the OnCreate field's value. +func (s *CreateNotebookInstanceLifecycleConfigInput) SetOnCreate(v []*NotebookInstanceLifecycleHook) *CreateNotebookInstanceLifecycleConfigInput { + s.OnCreate = v + return s +} + +// SetOnStart sets the OnStart field's value. +func (s *CreateNotebookInstanceLifecycleConfigInput) SetOnStart(v []*NotebookInstanceLifecycleHook) *CreateNotebookInstanceLifecycleConfigInput { + s.OnStart = v + return s +} + +type CreateNotebookInstanceLifecycleConfigOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the lifecycle configuration. + NotebookInstanceLifecycleConfigArn *string `type:"string"` +} + +// String returns the string representation +func (s CreateNotebookInstanceLifecycleConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateNotebookInstanceLifecycleConfigOutput) GoString() string { + return s.String() +} + +// SetNotebookInstanceLifecycleConfigArn sets the NotebookInstanceLifecycleConfigArn field's value. +func (s *CreateNotebookInstanceLifecycleConfigOutput) SetNotebookInstanceLifecycleConfigArn(v string) *CreateNotebookInstanceLifecycleConfigOutput { + s.NotebookInstanceLifecycleConfigArn = &v + return s +} + +type CreateNotebookInstanceOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the notebook instance. + NotebookInstanceArn *string `type:"string"` +} + +// String returns the string representation +func (s CreateNotebookInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateNotebookInstanceOutput) GoString() string { + return s.String() +} + +// SetNotebookInstanceArn sets the NotebookInstanceArn field's value. 
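For the lifecycle configuration input above, a sketch that registers a base64-encoded OnStart script. The NotebookInstanceLifecycleHook Content field is assumed from this API version (the hook type is referenced but not defined in this excerpt); the configuration name and script are placeholders.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	svc := sagemaker.New(session.Must(session.NewSession()))

	// Scripts are passed base64-encoded, as the field docs above require.
	onStart := base64.StdEncoding.EncodeToString([]byte("#!/bin/bash\necho 'notebook starting'\n"))

	out, err := svc.CreateNotebookInstanceLifecycleConfig(&sagemaker.CreateNotebookInstanceLifecycleConfigInput{
		NotebookInstanceLifecycleConfigName: aws.String("example-lifecycle-config"), // placeholder
		OnStart: []*sagemaker.NotebookInstanceLifecycleHook{
			{Content: aws.String(onStart)},
		},
	})
	if err != nil {
		log.Fatalf("CreateNotebookInstanceLifecycleConfig failed: %v", err)
	}
	fmt.Println(aws.StringValue(out.NotebookInstanceLifecycleConfigArn))
}
```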
+func (s *CreateNotebookInstanceOutput) SetNotebookInstanceArn(v string) *CreateNotebookInstanceOutput { + s.NotebookInstanceArn = &v + return s +} + +type CreatePresignedNotebookInstanceUrlInput struct { + _ struct{} `type:"structure"` + + // The name of the notebook instance. + // + // NotebookInstanceName is a required field + NotebookInstanceName *string `type:"string" required:"true"` + + // The duration of the session, in seconds. The default is 12 hours. + SessionExpirationDurationInSeconds *int64 `min:"1800" type:"integer"` +} + +// String returns the string representation +func (s CreatePresignedNotebookInstanceUrlInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePresignedNotebookInstanceUrlInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreatePresignedNotebookInstanceUrlInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreatePresignedNotebookInstanceUrlInput"} + if s.NotebookInstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) + } + if s.SessionExpirationDurationInSeconds != nil && *s.SessionExpirationDurationInSeconds < 1800 { + invalidParams.Add(request.NewErrParamMinValue("SessionExpirationDurationInSeconds", 1800)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNotebookInstanceName sets the NotebookInstanceName field's value. +func (s *CreatePresignedNotebookInstanceUrlInput) SetNotebookInstanceName(v string) *CreatePresignedNotebookInstanceUrlInput { + s.NotebookInstanceName = &v + return s +} + +// SetSessionExpirationDurationInSeconds sets the SessionExpirationDurationInSeconds field's value. +func (s *CreatePresignedNotebookInstanceUrlInput) SetSessionExpirationDurationInSeconds(v int64) *CreatePresignedNotebookInstanceUrlInput { + s.SessionExpirationDurationInSeconds = &v + return s +} + +type CreatePresignedNotebookInstanceUrlOutput struct { + _ struct{} `type:"structure"` + + // A JSON object that contains the URL string. + AuthorizedUrl *string `type:"string"` +} + +// String returns the string representation +func (s CreatePresignedNotebookInstanceUrlOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreatePresignedNotebookInstanceUrlOutput) GoString() string { + return s.String() +} + +// SetAuthorizedUrl sets the AuthorizedUrl field's value. +func (s *CreatePresignedNotebookInstanceUrlOutput) SetAuthorizedUrl(v string) *CreatePresignedNotebookInstanceUrlOutput { + s.AuthorizedUrl = &v + return s +} + +type CreateTrainingJobInput struct { + _ struct{} `type:"structure"` + + // The registry path of the Docker image that contains the training algorithm + // and algorithm-specific metadata, including the input mode. For more information + // about algorithms provided by Amazon SageMaker, see Algorithms (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html). + // For information about providing your own algorithms, see Using Your Own Algorithms + // with Amazon SageMaker (http://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html). + // + // AlgorithmSpecification is a required field + AlgorithmSpecification *AlgorithmSpecification `type:"structure" required:"true"` + + // Algorithm-specific parameters that influence the quality of the model. You + // set hyperparameters before you start the learning process. 
For a list of + // hyperparameters for each training algorithm provided by Amazon SageMaker, + // see Algorithms (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html). + // + // You can specify a maximum of 100 hyperparameters. Each hyperparameter is + // a key-value pair. Each key and value is limited to 256 characters, as specified + // by the Length Constraint. + HyperParameters map[string]*string `type:"map"` + + // An array of Channel objects. Each channel is a named input source. InputDataConfig + // describes the input data and its location. + // + // Algorithms can accept input data from one or more channels. For example, + // an algorithm might have two channels of input data, training_data and validation_data. + // The configuration for each channel provides the S3 location where the input + // data is stored. It also provides information about the stored data: the MIME + // type, compression method, and whether the data is wrapped in RecordIO format. + // + // Depending on the input mode that the algorithm supports, Amazon SageMaker + // either copies input data files from an S3 bucket to a local directory in + // the Docker container, or makes it available as input streams. + InputDataConfig []*Channel `min:"1" type:"list"` + + // Specifies the path to the S3 bucket where you want to store model artifacts. + // Amazon SageMaker creates subfolders for the artifacts. + // + // OutputDataConfig is a required field + OutputDataConfig *OutputDataConfig `type:"structure" required:"true"` + + // The resources, including the ML compute instances and ML storage volumes, + // to use for model training. + // + // ML storage volumes store model artifacts and incremental states. Training + // algorithms might also use ML storage volumes for scratch space. If you want + // Amazon SageMaker to use the ML storage volume to store the training data, + // choose File as the TrainingInputMode in the algorithm specification. For + // distributed training algorithms, specify an instance count greater than 1. + // + // ResourceConfig is a required field + ResourceConfig *ResourceConfig `type:"structure" required:"true"` + + // The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume + // to perform tasks on your behalf. + // + // During model training, Amazon SageMaker needs your permission to read input + // data from an S3 bucket, download a Docker image that contains training code, + // write model artifacts to an S3 bucket, write logs to Amazon CloudWatch Logs, + // and publish metrics to Amazon CloudWatch. You grant permissions for all of + // these tasks to an IAM role. For more information, see Amazon SageMaker Roles + // (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). + // + // To be able to pass this role to Amazon SageMaker, the caller of this API + // must have the iam:PassRole permission. + // + // RoleArn is a required field + RoleArn *string `min:"20" type:"string" required:"true"` + + // Sets a duration for training. Use this parameter to cap model training costs. + // To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which + // delays job termination for 120 seconds. Algorithms might use this 120-second + // window to save the model artifacts. + // + // When Amazon SageMaker terminates a job because the stopping condition has + // been met, training algorithms provided by Amazon SageMaker save the intermediate + // results of the job. This intermediate data is a valid model artifact. 
You + // can use it to create a model using the CreateModel API. + // + // StoppingCondition is a required field + StoppingCondition *StoppingCondition `type:"structure" required:"true"` + + // An array of key-value pairs. For more information, see Using Cost Allocation + // Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what) + // in the AWS Billing and Cost Management User Guide. + Tags []*Tag `type:"list"` + + // The name of the training job. The name must be unique within an AWS Region + // in an AWS account. + // + // TrainingJobName is a required field + TrainingJobName *string `min:"1" type:"string" required:"true"` + + // A VpcConfig object that specifies the VPC that you want your training job + // to connect to. Control access to and from your training container by configuring + // the VPC. For more information, see Protect Training Jobs by Using an Amazon + // Virtual Private Cloud (http://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html). + VpcConfig *VpcConfig `type:"structure"` +} + +// String returns the string representation +func (s CreateTrainingJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTrainingJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateTrainingJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateTrainingJobInput"} + if s.AlgorithmSpecification == nil { + invalidParams.Add(request.NewErrParamRequired("AlgorithmSpecification")) + } + if s.InputDataConfig != nil && len(s.InputDataConfig) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InputDataConfig", 1)) + } + if s.OutputDataConfig == nil { + invalidParams.Add(request.NewErrParamRequired("OutputDataConfig")) + } + if s.ResourceConfig == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceConfig")) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.StoppingCondition == nil { + invalidParams.Add(request.NewErrParamRequired("StoppingCondition")) + } + if s.TrainingJobName == nil { + invalidParams.Add(request.NewErrParamRequired("TrainingJobName")) + } + if s.TrainingJobName != nil && len(*s.TrainingJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TrainingJobName", 1)) + } + if s.AlgorithmSpecification != nil { + if err := s.AlgorithmSpecification.Validate(); err != nil { + invalidParams.AddNested("AlgorithmSpecification", err.(request.ErrInvalidParams)) + } + } + if s.InputDataConfig != nil { + for i, v := range s.InputDataConfig { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "InputDataConfig", i), err.(request.ErrInvalidParams)) + } + } + } + if s.OutputDataConfig != nil { + if err := s.OutputDataConfig.Validate(); err != nil { + invalidParams.AddNested("OutputDataConfig", err.(request.ErrInvalidParams)) + } + } + if s.ResourceConfig != nil { + if err := s.ResourceConfig.Validate(); err != nil { + invalidParams.AddNested("ResourceConfig", err.(request.ErrInvalidParams)) + } + } + if s.StoppingCondition != nil { + if err := s.StoppingCondition.Validate(); err != nil { + invalidParams.AddNested("StoppingCondition", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := 
range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + if s.VpcConfig != nil { + if err := s.VpcConfig.Validate(); err != nil { + invalidParams.AddNested("VpcConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAlgorithmSpecification sets the AlgorithmSpecification field's value. +func (s *CreateTrainingJobInput) SetAlgorithmSpecification(v *AlgorithmSpecification) *CreateTrainingJobInput { + s.AlgorithmSpecification = v + return s +} + +// SetHyperParameters sets the HyperParameters field's value. +func (s *CreateTrainingJobInput) SetHyperParameters(v map[string]*string) *CreateTrainingJobInput { + s.HyperParameters = v + return s +} + +// SetInputDataConfig sets the InputDataConfig field's value. +func (s *CreateTrainingJobInput) SetInputDataConfig(v []*Channel) *CreateTrainingJobInput { + s.InputDataConfig = v + return s +} + +// SetOutputDataConfig sets the OutputDataConfig field's value. +func (s *CreateTrainingJobInput) SetOutputDataConfig(v *OutputDataConfig) *CreateTrainingJobInput { + s.OutputDataConfig = v + return s +} + +// SetResourceConfig sets the ResourceConfig field's value. +func (s *CreateTrainingJobInput) SetResourceConfig(v *ResourceConfig) *CreateTrainingJobInput { + s.ResourceConfig = v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CreateTrainingJobInput) SetRoleArn(v string) *CreateTrainingJobInput { + s.RoleArn = &v + return s +} + +// SetStoppingCondition sets the StoppingCondition field's value. +func (s *CreateTrainingJobInput) SetStoppingCondition(v *StoppingCondition) *CreateTrainingJobInput { + s.StoppingCondition = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateTrainingJobInput) SetTags(v []*Tag) *CreateTrainingJobInput { + s.Tags = v + return s +} + +// SetTrainingJobName sets the TrainingJobName field's value. +func (s *CreateTrainingJobInput) SetTrainingJobName(v string) *CreateTrainingJobInput { + s.TrainingJobName = &v + return s +} + +// SetVpcConfig sets the VpcConfig field's value. +func (s *CreateTrainingJobInput) SetVpcConfig(v *VpcConfig) *CreateTrainingJobInput { + s.VpcConfig = v + return s +} + +type CreateTrainingJobOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the training job. + // + // TrainingJobArn is a required field + TrainingJobArn *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateTrainingJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTrainingJobOutput) GoString() string { + return s.String() +} + +// SetTrainingJobArn sets the TrainingJobArn field's value. +func (s *CreateTrainingJobOutput) SetTrainingJobArn(v string) *CreateTrainingJobOutput { + s.TrainingJobArn = &v + return s +} + +type CreateTransformJobInput struct { + _ struct{} `type:"structure"` + + // Determines the number of records included in a single mini-batch. SingleRecord + // means only one record is used per mini-batch. MultiRecord means a mini-batch + // is set to contain as many records that can fit within the MaxPayloadInMB + // limit. + // + // Batch transform will automatically split your input data into whatever payload + // size is specified if you set SplitType to Line and BatchStrategy to MultiRecord. 
+ // There's no need to split the dataset into smaller files or to use larger + // payload sizes unless the records in your dataset are very large. + BatchStrategy *string `type:"string" enum:"BatchStrategy"` + + // The environment variables to set in the Docker container. We support up to + // 16 key and value entries in the map. + Environment map[string]*string `type:"map"` + + // The maximum number of parallel requests that can be sent to each instance + // in a transform job. This is good for algorithms that implement multiple workers + // on larger instances. The default value is 1. To allow Amazon SageMaker to + // determine the appropriate number for MaxConcurrentTransforms, set the value + // to 0. + MaxConcurrentTransforms *int64 `type:"integer"` + + // The maximum payload size allowed, in MB. A payload is the data portion of + // a record (without metadata). The value in MaxPayloadInMB must be greater + // than or equal to the size of a single record. You can approximate the size of + // a record by dividing the size of your dataset by the number of records. Then + // multiply this value by the number of records you want in a mini-batch. It + // is recommended to enter a value slightly larger than this to ensure the records + // fit within the maximum payload size. The default value is 6 MB. For an unlimited + // payload size, set the value to 0. + MaxPayloadInMB *int64 `type:"integer"` + + // The name of the model that you want to use for the transform job. ModelName + // must be the name of an existing Amazon SageMaker model within an AWS Region + // in an AWS account. + // + // ModelName is a required field + ModelName *string `type:"string" required:"true"` + + // An array of key-value pairs. Adding tags is optional. For more information, + // see Using Cost Allocation Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what) + // in the AWS Billing and Cost Management User Guide. + Tags []*Tag `type:"list"` + + // Describes the input source and the way the transform job consumes it. + // + // TransformInput is a required field + TransformInput *TransformInput `type:"structure" required:"true"` + + // The name of the transform job. The name must be unique within an AWS Region + // in an AWS account. + // + // TransformJobName is a required field + TransformJobName *string `min:"1" type:"string" required:"true"` + + // Describes the results of the transform job. + // + // TransformOutput is a required field + TransformOutput *TransformOutput `type:"structure" required:"true"` + + // Describes the resources, including ML instance types and ML instance count, + // to use for the transform job. + // + // TransformResources is a required field + TransformResources *TransformResources `type:"structure" required:"true"` +} + +// String returns the string representation +func (s CreateTransformJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTransformJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid.
+func (s *CreateTransformJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateTransformJobInput"} + if s.ModelName == nil { + invalidParams.Add(request.NewErrParamRequired("ModelName")) + } + if s.TransformInput == nil { + invalidParams.Add(request.NewErrParamRequired("TransformInput")) + } + if s.TransformJobName == nil { + invalidParams.Add(request.NewErrParamRequired("TransformJobName")) + } + if s.TransformJobName != nil && len(*s.TransformJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TransformJobName", 1)) + } + if s.TransformOutput == nil { + invalidParams.Add(request.NewErrParamRequired("TransformOutput")) + } + if s.TransformResources == nil { + invalidParams.Add(request.NewErrParamRequired("TransformResources")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + if s.TransformInput != nil { + if err := s.TransformInput.Validate(); err != nil { + invalidParams.AddNested("TransformInput", err.(request.ErrInvalidParams)) + } + } + if s.TransformOutput != nil { + if err := s.TransformOutput.Validate(); err != nil { + invalidParams.AddNested("TransformOutput", err.(request.ErrInvalidParams)) + } + } + if s.TransformResources != nil { + if err := s.TransformResources.Validate(); err != nil { + invalidParams.AddNested("TransformResources", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBatchStrategy sets the BatchStrategy field's value. +func (s *CreateTransformJobInput) SetBatchStrategy(v string) *CreateTransformJobInput { + s.BatchStrategy = &v + return s +} + +// SetEnvironment sets the Environment field's value. +func (s *CreateTransformJobInput) SetEnvironment(v map[string]*string) *CreateTransformJobInput { + s.Environment = v + return s +} + +// SetMaxConcurrentTransforms sets the MaxConcurrentTransforms field's value. +func (s *CreateTransformJobInput) SetMaxConcurrentTransforms(v int64) *CreateTransformJobInput { + s.MaxConcurrentTransforms = &v + return s +} + +// SetMaxPayloadInMB sets the MaxPayloadInMB field's value. +func (s *CreateTransformJobInput) SetMaxPayloadInMB(v int64) *CreateTransformJobInput { + s.MaxPayloadInMB = &v + return s +} + +// SetModelName sets the ModelName field's value. +func (s *CreateTransformJobInput) SetModelName(v string) *CreateTransformJobInput { + s.ModelName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateTransformJobInput) SetTags(v []*Tag) *CreateTransformJobInput { + s.Tags = v + return s +} + +// SetTransformInput sets the TransformInput field's value. +func (s *CreateTransformJobInput) SetTransformInput(v *TransformInput) *CreateTransformJobInput { + s.TransformInput = v + return s +} + +// SetTransformJobName sets the TransformJobName field's value. +func (s *CreateTransformJobInput) SetTransformJobName(v string) *CreateTransformJobInput { + s.TransformJobName = &v + return s +} + +// SetTransformOutput sets the TransformOutput field's value. +func (s *CreateTransformJobInput) SetTransformOutput(v *TransformOutput) *CreateTransformJobInput { + s.TransformOutput = v + return s +} + +// SetTransformResources sets the TransformResources field's value. 
+func (s *CreateTransformJobInput) SetTransformResources(v *TransformResources) *CreateTransformJobInput { + s.TransformResources = v + return s +} + +type CreateTransformJobOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the transform job. + // + // TransformJobArn is a required field + TransformJobArn *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateTransformJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTransformJobOutput) GoString() string { + return s.String() +} + +// SetTransformJobArn sets the TransformJobArn field's value. +func (s *CreateTransformJobOutput) SetTransformJobArn(v string) *CreateTransformJobOutput { + s.TransformJobArn = &v + return s +} + +// Describes the location of the channel data. +type DataSource struct { + _ struct{} `type:"structure"` + + // The S3 location of the data source that is associated with a channel. + // + // S3DataSource is a required field + S3DataSource *S3DataSource `type:"structure" required:"true"` +} + +// String returns the string representation +func (s DataSource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DataSource) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DataSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DataSource"} + if s.S3DataSource == nil { + invalidParams.Add(request.NewErrParamRequired("S3DataSource")) + } + if s.S3DataSource != nil { + if err := s.S3DataSource.Validate(); err != nil { + invalidParams.AddNested("S3DataSource", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetS3DataSource sets the S3DataSource field's value. +func (s *DataSource) SetS3DataSource(v *S3DataSource) *DataSource { + s.S3DataSource = v + return s +} + +type DeleteEndpointConfigInput struct { + _ struct{} `type:"structure"` + + // The name of the endpoint configuration that you want to delete. + // + // EndpointConfigName is a required field + EndpointConfigName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteEndpointConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEndpointConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteEndpointConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteEndpointConfigInput"} + if s.EndpointConfigName == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointConfigName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndpointConfigName sets the EndpointConfigName field's value. 
+func (s *DeleteEndpointConfigInput) SetEndpointConfigName(v string) *DeleteEndpointConfigInput { + s.EndpointConfigName = &v + return s +} + +type DeleteEndpointConfigOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteEndpointConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEndpointConfigOutput) GoString() string { + return s.String() +} + +type DeleteEndpointInput struct { + _ struct{} `type:"structure"` + + // The name of the endpoint that you want to delete. + // + // EndpointName is a required field + EndpointName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteEndpointInput"} + if s.EndpointName == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndpointName sets the EndpointName field's value. +func (s *DeleteEndpointInput) SetEndpointName(v string) *DeleteEndpointInput { + s.EndpointName = &v + return s +} + +type DeleteEndpointOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteEndpointOutput) GoString() string { + return s.String() +} + +type DeleteModelInput struct { + _ struct{} `type:"structure"` + + // The name of the model to delete. + // + // ModelName is a required field + ModelName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteModelInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteModelInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteModelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteModelInput"} + if s.ModelName == nil { + invalidParams.Add(request.NewErrParamRequired("ModelName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetModelName sets the ModelName field's value. +func (s *DeleteModelInput) SetModelName(v string) *DeleteModelInput { + s.ModelName = &v + return s +} + +type DeleteModelOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteModelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteModelOutput) GoString() string { + return s.String() +} + +type DeleteNotebookInstanceInput struct { + _ struct{} `type:"structure"` + + // The name of the Amazon SageMaker notebook instance to delete. 
+ // + // NotebookInstanceName is a required field + NotebookInstanceName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteNotebookInstanceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteNotebookInstanceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteNotebookInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteNotebookInstanceInput"} + if s.NotebookInstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNotebookInstanceName sets the NotebookInstanceName field's value. +func (s *DeleteNotebookInstanceInput) SetNotebookInstanceName(v string) *DeleteNotebookInstanceInput { + s.NotebookInstanceName = &v + return s +} + +type DeleteNotebookInstanceLifecycleConfigInput struct { + _ struct{} `type:"structure"` + + // The name of the lifecycle configuration to delete. + // + // NotebookInstanceLifecycleConfigName is a required field + NotebookInstanceLifecycleConfigName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteNotebookInstanceLifecycleConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteNotebookInstanceLifecycleConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteNotebookInstanceLifecycleConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteNotebookInstanceLifecycleConfigInput"} + if s.NotebookInstanceLifecycleConfigName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceLifecycleConfigName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. +func (s *DeleteNotebookInstanceLifecycleConfigInput) SetNotebookInstanceLifecycleConfigName(v string) *DeleteNotebookInstanceLifecycleConfigInput { + s.NotebookInstanceLifecycleConfigName = &v + return s +} + +type DeleteNotebookInstanceLifecycleConfigOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteNotebookInstanceLifecycleConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteNotebookInstanceLifecycleConfigOutput) GoString() string { + return s.String() +} + +type DeleteNotebookInstanceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteNotebookInstanceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteNotebookInstanceOutput) GoString() string { + return s.String() +} + +type DeleteTagsInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource whose tags you want to delete. + // + // ResourceArn is a required field + ResourceArn *string `type:"string" required:"true"` + + // An array of one or more tag keys to delete.
+ // + // TagKeys is a required field + TagKeys []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s DeleteTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteTagsInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + if s.TagKeys != nil && len(s.TagKeys) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TagKeys", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *DeleteTagsInput) SetResourceArn(v string) *DeleteTagsInput { + s.ResourceArn = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *DeleteTagsInput) SetTagKeys(v []*string) *DeleteTagsInput { + s.TagKeys = v + return s +} + +type DeleteTagsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTagsOutput) GoString() string { + return s.String() +} + +// Gets the Amazon EC2 Container Registry path of the docker image of the model +// that is hosted in this ProductionVariant. +// +// If you used the registry/repository[:tag] form to specify the image path +// of the primary container when you created the model hosted in this ProductionVariant, +// the path resolves to a path of the form registry/repository[@digest]. A digest +// is a hash value that identifies a specific version of an image. For information +// about Amazon ECR paths, see Pulling an Image (http://docs.aws.amazon.com//AmazonECR/latest/userguide/docker-pull-ecr-image.html) +// in the Amazon ECR User Guide. +type DeployedImage struct { + _ struct{} `type:"structure"` + + // The date and time when the image path for the model resolved to the ResolvedImage + ResolutionTime *time.Time `type:"timestamp"` + + // The specific digest path of the image hosted in this ProductionVariant. + ResolvedImage *string `type:"string"` + + // The image path you specified when you created the model. + SpecifiedImage *string `type:"string"` +} + +// String returns the string representation +func (s DeployedImage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeployedImage) GoString() string { + return s.String() +} + +// SetResolutionTime sets the ResolutionTime field's value. +func (s *DeployedImage) SetResolutionTime(v time.Time) *DeployedImage { + s.ResolutionTime = &v + return s +} + +// SetResolvedImage sets the ResolvedImage field's value. +func (s *DeployedImage) SetResolvedImage(v string) *DeployedImage { + s.ResolvedImage = &v + return s +} + +// SetSpecifiedImage sets the SpecifiedImage field's value. +func (s *DeployedImage) SetSpecifiedImage(v string) *DeployedImage { + s.SpecifiedImage = &v + return s +} + +type DescribeEndpointConfigInput struct { + _ struct{} `type:"structure"` + + // The name of the endpoint configuration. 
+ // + // EndpointConfigName is a required field + EndpointConfigName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeEndpointConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEndpointConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeEndpointConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEndpointConfigInput"} + if s.EndpointConfigName == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointConfigName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndpointConfigName sets the EndpointConfigName field's value. +func (s *DescribeEndpointConfigInput) SetEndpointConfigName(v string) *DescribeEndpointConfigInput { + s.EndpointConfigName = &v + return s +} + +type DescribeEndpointConfigOutput struct { + _ struct{} `type:"structure"` + + // A timestamp that shows when the endpoint configuration was created. + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` + + // The Amazon Resource Name (ARN) of the endpoint configuration. + // + // EndpointConfigArn is a required field + EndpointConfigArn *string `min:"20" type:"string" required:"true"` + + // Name of the Amazon SageMaker endpoint configuration. + // + // EndpointConfigName is a required field + EndpointConfigName *string `type:"string" required:"true"` + + // AWS KMS key ID Amazon SageMaker uses to encrypt data when storing it on the + // ML storage volume attached to the instance. + KmsKeyId *string `type:"string"` + + // An array of ProductionVariant objects, one for each model that you want to + // host at this endpoint. + // + // ProductionVariants is a required field + ProductionVariants []*ProductionVariant `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeEndpointConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEndpointConfigOutput) GoString() string { + return s.String() +} + +// SetCreationTime sets the CreationTime field's value. +func (s *DescribeEndpointConfigOutput) SetCreationTime(v time.Time) *DescribeEndpointConfigOutput { + s.CreationTime = &v + return s +} + +// SetEndpointConfigArn sets the EndpointConfigArn field's value. +func (s *DescribeEndpointConfigOutput) SetEndpointConfigArn(v string) *DescribeEndpointConfigOutput { + s.EndpointConfigArn = &v + return s +} + +// SetEndpointConfigName sets the EndpointConfigName field's value. +func (s *DescribeEndpointConfigOutput) SetEndpointConfigName(v string) *DescribeEndpointConfigOutput { + s.EndpointConfigName = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *DescribeEndpointConfigOutput) SetKmsKeyId(v string) *DescribeEndpointConfigOutput { + s.KmsKeyId = &v + return s +} + +// SetProductionVariants sets the ProductionVariants field's value. +func (s *DescribeEndpointConfigOutput) SetProductionVariants(v []*ProductionVariant) *DescribeEndpointConfigOutput { + s.ProductionVariants = v + return s +} + +type DescribeEndpointInput struct { + _ struct{} `type:"structure"` + + // The name of the endpoint. 
+ // + // EndpointName is a required field + EndpointName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeEndpointInput"} + if s.EndpointName == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndpointName sets the EndpointName field's value. +func (s *DescribeEndpointInput) SetEndpointName(v string) *DescribeEndpointInput { + s.EndpointName = &v + return s +} + +type DescribeEndpointOutput struct { + _ struct{} `type:"structure"` + + // A timestamp that shows when the endpoint was created. + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` + + // The Amazon Resource Name (ARN) of the endpoint. + // + // EndpointArn is a required field + EndpointArn *string `min:"20" type:"string" required:"true"` + + // The name of the endpoint configuration associated with this endpoint. + // + // EndpointConfigName is a required field + EndpointConfigName *string `type:"string" required:"true"` + + // Name of the endpoint. + // + // EndpointName is a required field + EndpointName *string `type:"string" required:"true"` + + // The status of the endpoint. + // + // * OutOfService: Endpoint is not available to take incoming requests. + // + // * Creating: CreateEndpoint is executing. + // + // * Updating: UpdateEndpoint or UpdateEndpointWeightsAndCapacities is executing. + // + // * SystemUpdating: Endpoint is undergoing maintenance and cannot be updated + // or deleted or re-scaled until it has completed. This maintenance operation + // does not change any customer-specified values such as VPC config, KMS + // encryption, model, instance type, or instance count. + // + // * RollingBack: Endpoint fails to scale up or down or change its variant + // weight and is in the process of rolling back to its previous configuration. + // Once the rollback completes, endpoint returns to an InService status. + // This transitional status only applies to an endpoint that has autoscaling + // enabled and is undergoing variant weight or capacity changes as part of + // an UpdateEndpointWeightsAndCapacities call or when the UpdateEndpointWeightsAndCapacities + // operation is called explicitly. + // + // * InService: Endpoint is available to process incoming requests. + // + // * Deleting: DeleteEndpoint is executing. + // + // * Failed: Endpoint could not be created, updated, or re-scaled. Use DescribeEndpointOutput$FailureReason + // for information about the failure. DeleteEndpoint is the only operation + // that can be performed on a failed endpoint. + // + // EndpointStatus is a required field + EndpointStatus *string `type:"string" required:"true" enum:"EndpointStatus"` + + // If the status of the endpoint is Failed, the reason why it failed. + FailureReason *string `type:"string"` + + // A timestamp that shows when the endpoint was last modified. 
+ // + // LastModifiedTime is a required field + LastModifiedTime *time.Time `type:"timestamp" required:"true"` + + // An array of ProductionVariantSummary objects, one for each model hosted behind + // this endpoint. + ProductionVariants []*ProductionVariantSummary `min:"1" type:"list"` +} + +// String returns the string representation +func (s DescribeEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeEndpointOutput) GoString() string { + return s.String() +} + +// SetCreationTime sets the CreationTime field's value. +func (s *DescribeEndpointOutput) SetCreationTime(v time.Time) *DescribeEndpointOutput { + s.CreationTime = &v + return s +} + +// SetEndpointArn sets the EndpointArn field's value. +func (s *DescribeEndpointOutput) SetEndpointArn(v string) *DescribeEndpointOutput { + s.EndpointArn = &v + return s +} + +// SetEndpointConfigName sets the EndpointConfigName field's value. +func (s *DescribeEndpointOutput) SetEndpointConfigName(v string) *DescribeEndpointOutput { + s.EndpointConfigName = &v + return s +} + +// SetEndpointName sets the EndpointName field's value. +func (s *DescribeEndpointOutput) SetEndpointName(v string) *DescribeEndpointOutput { + s.EndpointName = &v + return s +} + +// SetEndpointStatus sets the EndpointStatus field's value. +func (s *DescribeEndpointOutput) SetEndpointStatus(v string) *DescribeEndpointOutput { + s.EndpointStatus = &v + return s +} + +// SetFailureReason sets the FailureReason field's value. +func (s *DescribeEndpointOutput) SetFailureReason(v string) *DescribeEndpointOutput { + s.FailureReason = &v + return s +} + +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *DescribeEndpointOutput) SetLastModifiedTime(v time.Time) *DescribeEndpointOutput { + s.LastModifiedTime = &v + return s +} + +// SetProductionVariants sets the ProductionVariants field's value. +func (s *DescribeEndpointOutput) SetProductionVariants(v []*ProductionVariantSummary) *DescribeEndpointOutput { + s.ProductionVariants = v + return s +} + +type DescribeHyperParameterTuningJobInput struct { + _ struct{} `type:"structure"` + + // The name of the tuning job to describe. + // + // HyperParameterTuningJobName is a required field + HyperParameterTuningJobName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeHyperParameterTuningJobInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeHyperParameterTuningJobInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeHyperParameterTuningJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeHyperParameterTuningJobInput"} + if s.HyperParameterTuningJobName == nil { + invalidParams.Add(request.NewErrParamRequired("HyperParameterTuningJobName")) + } + if s.HyperParameterTuningJobName != nil && len(*s.HyperParameterTuningJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("HyperParameterTuningJobName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetHyperParameterTuningJobName sets the HyperParameterTuningJobName field's value. 
+func (s *DescribeHyperParameterTuningJobInput) SetHyperParameterTuningJobName(v string) *DescribeHyperParameterTuningJobInput { + s.HyperParameterTuningJobName = &v + return s +} + +type DescribeHyperParameterTuningJobOutput struct { + _ struct{} `type:"structure"` + + // A TrainingJobSummary object that describes the training job that completed + // with the best current HyperParameterTuningJobObjective. + BestTrainingJob *HyperParameterTrainingJobSummary `type:"structure"` + + // The date and time that the tuning job started. + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` + + // If the tuning job failed, the reason it failed. + FailureReason *string `type:"string"` + + // The date and time that the tuning job ended. + HyperParameterTuningEndTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the tuning job. + // + // HyperParameterTuningJobArn is a required field + HyperParameterTuningJobArn *string `type:"string" required:"true"` + + // The HyperParameterTuningJobConfig object that specifies the configuration + // of the tuning job. + // + // HyperParameterTuningJobConfig is a required field + HyperParameterTuningJobConfig *HyperParameterTuningJobConfig `type:"structure" required:"true"` + + // The name of the tuning job. + // + // HyperParameterTuningJobName is a required field + HyperParameterTuningJobName *string `min:"1" type:"string" required:"true"` + + // The status of the tuning job: InProgress, Completed, Failed, Stopping, or + // Stopped. + // + // HyperParameterTuningJobStatus is a required field + HyperParameterTuningJobStatus *string `type:"string" required:"true" enum:"HyperParameterTuningJobStatus"` + + // The date and time that the status of the tuning job was modified. + LastModifiedTime *time.Time `type:"timestamp"` + + // The ObjectiveStatusCounters object that specifies the number of training + // jobs, categorized by the status of their final objective metric, that this + // tuning job launched. + // + // ObjectiveStatusCounters is a required field + ObjectiveStatusCounters *ObjectiveStatusCounters `type:"structure" required:"true"` + + // If the hyperparameter tuning job is an incremental tuning job with a WarmStartType + // of IDENTICAL_DATA_AND_ALGORITHM, this is the TrainingJobSummary for the training + // job with the best objective metric value of all training jobs launched by + // this tuning job and all parent jobs specified for the incremental tuning + // job. + OverallBestTrainingJob *HyperParameterTrainingJobSummary `type:"structure"` + + // The HyperParameterTrainingJobDefinition object that specifies the definition + // of the training jobs that this tuning job launches. + // + // TrainingJobDefinition is a required field + TrainingJobDefinition *HyperParameterTrainingJobDefinition `type:"structure" required:"true"` + + // The TrainingJobStatusCounters object that specifies the number of training + // jobs, categorized by status, that this tuning job launched. + // + // TrainingJobStatusCounters is a required field + TrainingJobStatusCounters *TrainingJobStatusCounters `type:"structure" required:"true"` + + // The configuration for starting the hyperparameter tuning job using + // one or more previous tuning jobs as a starting point. The results of previous + // tuning jobs are used to inform which combinations of hyperparameters to search + // over in the new tuning job.
+ WarmStartConfig *HyperParameterTuningJobWarmStartConfig `type:"structure"` +} + +// String returns the string representation +func (s DescribeHyperParameterTuningJobOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeHyperParameterTuningJobOutput) GoString() string { + return s.String() +} + +// SetBestTrainingJob sets the BestTrainingJob field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetBestTrainingJob(v *HyperParameterTrainingJobSummary) *DescribeHyperParameterTuningJobOutput { + s.BestTrainingJob = v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetCreationTime(v time.Time) *DescribeHyperParameterTuningJobOutput { + s.CreationTime = &v + return s +} + +// SetFailureReason sets the FailureReason field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetFailureReason(v string) *DescribeHyperParameterTuningJobOutput { + s.FailureReason = &v + return s +} + +// SetHyperParameterTuningEndTime sets the HyperParameterTuningEndTime field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetHyperParameterTuningEndTime(v time.Time) *DescribeHyperParameterTuningJobOutput { + s.HyperParameterTuningEndTime = &v + return s +} + +// SetHyperParameterTuningJobArn sets the HyperParameterTuningJobArn field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetHyperParameterTuningJobArn(v string) *DescribeHyperParameterTuningJobOutput { + s.HyperParameterTuningJobArn = &v + return s +} + +// SetHyperParameterTuningJobConfig sets the HyperParameterTuningJobConfig field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetHyperParameterTuningJobConfig(v *HyperParameterTuningJobConfig) *DescribeHyperParameterTuningJobOutput { + s.HyperParameterTuningJobConfig = v + return s +} + +// SetHyperParameterTuningJobName sets the HyperParameterTuningJobName field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetHyperParameterTuningJobName(v string) *DescribeHyperParameterTuningJobOutput { + s.HyperParameterTuningJobName = &v + return s +} + +// SetHyperParameterTuningJobStatus sets the HyperParameterTuningJobStatus field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetHyperParameterTuningJobStatus(v string) *DescribeHyperParameterTuningJobOutput { + s.HyperParameterTuningJobStatus = &v + return s +} + +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetLastModifiedTime(v time.Time) *DescribeHyperParameterTuningJobOutput { + s.LastModifiedTime = &v + return s +} + +// SetObjectiveStatusCounters sets the ObjectiveStatusCounters field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetObjectiveStatusCounters(v *ObjectiveStatusCounters) *DescribeHyperParameterTuningJobOutput { + s.ObjectiveStatusCounters = v + return s } -type AddTagsInput struct { - _ struct{} `type:"structure"` +// SetOverallBestTrainingJob sets the OverallBestTrainingJob field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetOverallBestTrainingJob(v *HyperParameterTrainingJobSummary) *DescribeHyperParameterTuningJobOutput { + s.OverallBestTrainingJob = v + return s +} - // The Amazon Resource Name (ARN) of the resource that you want to tag. - // - // ResourceArn is a required field - ResourceArn *string `type:"string" required:"true"` +// SetTrainingJobDefinition sets the TrainingJobDefinition field's value. 
+func (s *DescribeHyperParameterTuningJobOutput) SetTrainingJobDefinition(v *HyperParameterTrainingJobDefinition) *DescribeHyperParameterTuningJobOutput { + s.TrainingJobDefinition = v + return s +} - // An array of Tag objects. Each tag is a key-value pair. Only the key parameter - // is required. If you don't specify a value, Amazon SageMaker sets the value - // to an empty string. +// SetTrainingJobStatusCounters sets the TrainingJobStatusCounters field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetTrainingJobStatusCounters(v *TrainingJobStatusCounters) *DescribeHyperParameterTuningJobOutput { + s.TrainingJobStatusCounters = v + return s +} + +// SetWarmStartConfig sets the WarmStartConfig field's value. +func (s *DescribeHyperParameterTuningJobOutput) SetWarmStartConfig(v *HyperParameterTuningJobWarmStartConfig) *DescribeHyperParameterTuningJobOutput { + s.WarmStartConfig = v + return s +} + +type DescribeModelInput struct { + _ struct{} `type:"structure"` + + // The name of the model. // - // Tags is a required field - Tags []*Tag `type:"list" required:"true"` + // ModelName is a required field + ModelName *string `type:"string" required:"true"` } // String returns the string representation -func (s AddTagsInput) String() string { +func (s DescribeModelInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AddTagsInput) GoString() string { +func (s DescribeModelInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *AddTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AddTagsInput"} - if s.ResourceArn == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceArn")) - } - if s.Tags == nil { - invalidParams.Add(request.NewErrParamRequired("Tags")) - } - if s.Tags != nil { - for i, v := range s.Tags { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) - } - } +func (s *DescribeModelInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeModelInput"} + if s.ModelName == nil { + invalidParams.Add(request.NewErrParamRequired("ModelName")) } if invalidParams.Len() > 0 { @@ -3268,355 +7042,438 @@ func (s *AddTagsInput) Validate() error { return nil } -// SetResourceArn sets the ResourceArn field's value. -func (s *AddTagsInput) SetResourceArn(v string) *AddTagsInput { - s.ResourceArn = &v +// SetModelName sets the ModelName field's value. +func (s *DescribeModelInput) SetModelName(v string) *DescribeModelInput { + s.ModelName = &v return s } -// SetTags sets the Tags field's value. -func (s *AddTagsInput) SetTags(v []*Tag) *AddTagsInput { - s.Tags = v +type DescribeModelOutput struct { + _ struct{} `type:"structure"` + + // A timestamp that shows when the model was created. + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` + + // The Amazon Resource Name (ARN) of the IAM role that you specified for the + // model. + // + // ExecutionRoleArn is a required field + ExecutionRoleArn *string `min:"20" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the model. + // + // ModelArn is a required field + ModelArn *string `min:"20" type:"string" required:"true"` + + // Name of the Amazon SageMaker model. 
+ // + // ModelName is a required field + ModelName *string `type:"string" required:"true"` + + // The location of the primary inference code, associated artifacts, and custom + // environment map that the inference code uses when it is deployed in production. + // + // PrimaryContainer is a required field + PrimaryContainer *ContainerDefinition `type:"structure" required:"true"` + + // A VpcConfig object that specifies the VPC that this model has access to. + // For more information, see Protect Endpoints by Using an Amazon Virtual Private + // Cloud (http://docs.aws.amazon.com/sagemaker/latest/dg/host-vpc.html) + VpcConfig *VpcConfig `type:"structure"` +} + +// String returns the string representation +func (s DescribeModelOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeModelOutput) GoString() string { + return s.String() +} + +// SetCreationTime sets the CreationTime field's value. +func (s *DescribeModelOutput) SetCreationTime(v time.Time) *DescribeModelOutput { + s.CreationTime = &v return s } -type AddTagsOutput struct { +// SetExecutionRoleArn sets the ExecutionRoleArn field's value. +func (s *DescribeModelOutput) SetExecutionRoleArn(v string) *DescribeModelOutput { + s.ExecutionRoleArn = &v + return s +} + +// SetModelArn sets the ModelArn field's value. +func (s *DescribeModelOutput) SetModelArn(v string) *DescribeModelOutput { + s.ModelArn = &v + return s +} + +// SetModelName sets the ModelName field's value. +func (s *DescribeModelOutput) SetModelName(v string) *DescribeModelOutput { + s.ModelName = &v + return s +} + +// SetPrimaryContainer sets the PrimaryContainer field's value. +func (s *DescribeModelOutput) SetPrimaryContainer(v *ContainerDefinition) *DescribeModelOutput { + s.PrimaryContainer = v + return s +} + +// SetVpcConfig sets the VpcConfig field's value. +func (s *DescribeModelOutput) SetVpcConfig(v *VpcConfig) *DescribeModelOutput { + s.VpcConfig = v + return s +} + +type DescribeNotebookInstanceInput struct { _ struct{} `type:"structure"` - // A list of tags associated with the Amazon SageMaker resource. - Tags []*Tag `type:"list"` + // The name of the notebook instance that you want information about. + // + // NotebookInstanceName is a required field + NotebookInstanceName *string `type:"string" required:"true"` } // String returns the string representation -func (s AddTagsOutput) String() string { +func (s DescribeNotebookInstanceInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AddTagsOutput) GoString() string { +func (s DescribeNotebookInstanceInput) GoString() string { return s.String() } -// SetTags sets the Tags field's value. -func (s *AddTagsOutput) SetTags(v []*Tag) *AddTagsOutput { - s.Tags = v +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeNotebookInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeNotebookInstanceInput"} + if s.NotebookInstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNotebookInstanceName sets the NotebookInstanceName field's value. 
+func (s *DescribeNotebookInstanceInput) SetNotebookInstanceName(v string) *DescribeNotebookInstanceInput { + s.NotebookInstanceName = &v return s } -// Specifies the training algorithm to use in a CreateTrainingJob (http://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTrainingJob.html) -// request. -// -// For more information about algorithms provided by Amazon SageMaker, see Algorithms -// (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html). For information -// about using your own algorithms, see Bring Your Own Algorithms (http://docs.aws.amazon.com/sagemaker/latest/dg/adv-topics-own-algo.html). -type AlgorithmSpecification struct { +type DescribeNotebookInstanceLifecycleConfigInput struct { _ struct{} `type:"structure"` - // The registry path of the Docker image that contains the training algorithm. - // For information about using your own algorithms, see Docker Registry Paths - // for Algorithms Provided by Amazon SageMaker (http://docs.aws.amazon.com/sagemaker/latest/dg/algos-docker-registry-paths.html). + // The name of the lifecycle configuration to describe. // - // TrainingImage is a required field - TrainingImage *string `type:"string" required:"true"` + // NotebookInstanceLifecycleConfigName is a required field + NotebookInstanceLifecycleConfigName *string `type:"string" required:"true"` +} - // The input mode that the algorithm supports. For the input modes that Amazon - // SageMaker algorithms support, see Algorithms (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html). - // If an algorithm supports the File input mode, Amazon SageMaker downloads - // the training data from S3 to the provisioned ML storage Volume, and mounts - // the directory to docker volume for training container. If an algorithm supports - // the Pipe input mode, Amazon SageMaker streams data directly from S3 to the - // container. - // - // In File mode, make sure you provision ML storage volume with sufficient capacity - // to accommodate the data download from S3. In addition to the training data, - // the ML storage volume also stores the output model. The algorithm container - // use ML storage volume to also store intermediate information, if any. - // - // For distributed algorithms using File mode, training data is distributed - // uniformly, and your training duration is predictable if the input data objects - // size is approximately same. Amazon SageMaker does not split the files any - // further for model training. If the object sizes are skewed, training won't - // be optimal as the data distribution is also skewed where one host in a training - // cluster is overloaded, thus becoming bottleneck in training. - // - // TrainingInputMode is a required field - TrainingInputMode *string `type:"string" required:"true" enum:"TrainingInputMode"` +// String returns the string representation +func (s DescribeNotebookInstanceLifecycleConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeNotebookInstanceLifecycleConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeNotebookInstanceLifecycleConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeNotebookInstanceLifecycleConfigInput"} + if s.NotebookInstanceLifecycleConfigName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceLifecycleConfigName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. +func (s *DescribeNotebookInstanceLifecycleConfigInput) SetNotebookInstanceLifecycleConfigName(v string) *DescribeNotebookInstanceLifecycleConfigInput { + s.NotebookInstanceLifecycleConfigName = &v + return s +} + +type DescribeNotebookInstanceLifecycleConfigOutput struct { + _ struct{} `type:"structure"` + + // A timestamp that tells when the lifecycle configuration was created. + CreationTime *time.Time `type:"timestamp"` + + // A timestamp that tells when the lifecycle configuration was last modified. + LastModifiedTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the lifecycle configuration. + NotebookInstanceLifecycleConfigArn *string `type:"string"` + + // The name of the lifecycle configuration. + NotebookInstanceLifecycleConfigName *string `type:"string"` + + // The shell script that runs only once, when you create a notebook instance. + OnCreate []*NotebookInstanceLifecycleHook `type:"list"` + + // The shell script that runs every time you start a notebook instance, including + // when you create the notebook instance. + OnStart []*NotebookInstanceLifecycleHook `type:"list"` } // String returns the string representation -func (s AlgorithmSpecification) String() string { +func (s DescribeNotebookInstanceLifecycleConfigOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s AlgorithmSpecification) GoString() string { +func (s DescribeNotebookInstanceLifecycleConfigOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *AlgorithmSpecification) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "AlgorithmSpecification"} - if s.TrainingImage == nil { - invalidParams.Add(request.NewErrParamRequired("TrainingImage")) - } - if s.TrainingInputMode == nil { - invalidParams.Add(request.NewErrParamRequired("TrainingInputMode")) - } +// SetCreationTime sets the CreationTime field's value. +func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetCreationTime(v time.Time) *DescribeNotebookInstanceLifecycleConfigOutput { + s.CreationTime = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetLastModifiedTime(v time.Time) *DescribeNotebookInstanceLifecycleConfigOutput { + s.LastModifiedTime = &v + return s } -// SetTrainingImage sets the TrainingImage field's value. -func (s *AlgorithmSpecification) SetTrainingImage(v string) *AlgorithmSpecification { - s.TrainingImage = &v +// SetNotebookInstanceLifecycleConfigArn sets the NotebookInstanceLifecycleConfigArn field's value. +func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetNotebookInstanceLifecycleConfigArn(v string) *DescribeNotebookInstanceLifecycleConfigOutput { + s.NotebookInstanceLifecycleConfigArn = &v return s } -// SetTrainingInputMode sets the TrainingInputMode field's value. 
-func (s *AlgorithmSpecification) SetTrainingInputMode(v string) *AlgorithmSpecification {
- s.TrainingInputMode = &v
+// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value.
+func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetNotebookInstanceLifecycleConfigName(v string) *DescribeNotebookInstanceLifecycleConfigOutput {
+ s.NotebookInstanceLifecycleConfigName = &v
  return s
 }
 
-// A channel is a named input source that training algorithms can consume.
-type Channel struct {
+// SetOnCreate sets the OnCreate field's value.
+func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetOnCreate(v []*NotebookInstanceLifecycleHook) *DescribeNotebookInstanceLifecycleConfigOutput {
+ s.OnCreate = v
+ return s
+}
+
+// SetOnStart sets the OnStart field's value.
+func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetOnStart(v []*NotebookInstanceLifecycleHook) *DescribeNotebookInstanceLifecycleConfigOutput {
+ s.OnStart = v
+ return s
+}
+
+type DescribeNotebookInstanceOutput struct {
  _ struct{} `type:"structure"`
 
- // The name of the channel.
+ // A timestamp. Use this parameter to return the time when the notebook instance
+ // was created.
+ CreationTime *time.Time `type:"timestamp"`
+
+ // Describes whether Amazon SageMaker provides internet access to the notebook
+ // instance. If this value is set to Disabled, the notebook instance does not
+ // have internet access, and cannot connect to Amazon SageMaker training and
+ // endpoint services.
  //
- // ChannelName is a required field
- ChannelName *string `min:"1" type:"string" required:"true"`
+ // For more information, see Notebook Instances Are Internet-Enabled by Default
+ // (http://docs.aws.amazon.com/sagemaker/latest/dg/appendix-additional-considerations.html#appendix-notebook-and-internet-access).
+ DirectInternetAccess *string `type:"string" enum:"DirectInternetAccess"`
 
- // If training data is compressed, the compression type. The default value is
- // None. CompressionType is used only in PIPE input mode. In FILE mode, leave
- // this field unset or set it to None.
- CompressionType *string `type:"string" enum:"CompressionType"`
+ // If status is failed, the reason it failed.
+ FailureReason *string `type:"string"`
 
- // The MIME type of the data.
- ContentType *string `type:"string"`
+ // The type of ML compute instance running on the notebook instance.
+ InstanceType *string `type:"string" enum:"InstanceType"`
 
- // The location of the channel data.
+ // AWS KMS key ID Amazon SageMaker uses to encrypt data when storing it on the
+ // ML storage volume attached to the instance.
+ KmsKeyId *string `type:"string"`
+
+ // A timestamp. Use this parameter to retrieve the time when the notebook instance
+ // was last modified.
+ LastModifiedTime *time.Time `type:"timestamp"`
+
+ // Network interface IDs that Amazon SageMaker created at the time of creating
+ // the instance.
+ NetworkInterfaceId *string `type:"string"`
+
+ // The Amazon Resource Name (ARN) of the notebook instance.
+ NotebookInstanceArn *string `type:"string"`
+
+ // Returns the name of a notebook instance lifecycle configuration.
 //
- // DataSource is a required field
- DataSource *DataSource `type:"structure" required:"true"`
+ // For information about notebook instance lifecycle configurations, see Step
+ // 2.1: (Optional) Customize a Notebook Instance (http://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html)
+ NotebookInstanceLifecycleConfigName *string `type:"string"`
 
- // Specify RecordIO as the value when input data is in raw format but the training
- // algorithm requires the RecordIO format, in which caseAmazon SageMaker wraps
- // each individual S3 object in a RecordIO record. If the input data is already
- // in RecordIO format, you don't need to set this attribute. For more information,
- // see Create a Dataset Using RecordIO (https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec)
- RecordWrapperType *string `type:"string" enum:"RecordWrapper"`
+ // Name of the Amazon SageMaker notebook instance.
+ NotebookInstanceName *string `type:"string"`
+
+ // The status of the notebook instance.
+ NotebookInstanceStatus *string `type:"string" enum:"NotebookInstanceStatus"`
+
+ // Amazon Resource Name (ARN) of the IAM role associated with the instance.
+ RoleArn *string `min:"20" type:"string"`
+
+ // The IDs of the VPC security groups.
+ SecurityGroups []*string `type:"list"`
+
+ // The ID of the VPC subnet.
+ SubnetId *string `type:"string"`
+
+ // The URL that you use to connect to the Jupyter notebook that is running in
+ // your notebook instance.
+ Url *string `type:"string"`
+
+ // The size, in GB, of the ML storage volume attached to the notebook instance.
+ VolumeSizeInGB *int64 `min:"5" type:"integer"`
 }
 
 // String returns the string representation
-func (s Channel) String() string {
+func (s DescribeNotebookInstanceOutput) String() string {
  return awsutil.Prettify(s)
 }
 
 // GoString returns the string representation
-func (s Channel) GoString() string {
+func (s DescribeNotebookInstanceOutput) GoString() string {
  return s.String()
 }
 
-// Validate inspects the fields of the type to determine if they are valid.
-func (s *Channel) Validate() error {
- invalidParams := request.ErrInvalidParams{Context: "Channel"}
- if s.ChannelName == nil {
-  invalidParams.Add(request.NewErrParamRequired("ChannelName"))
- }
- if s.ChannelName != nil && len(*s.ChannelName) < 1 {
-  invalidParams.Add(request.NewErrParamMinLen("ChannelName", 1))
- }
- if s.DataSource == nil {
-  invalidParams.Add(request.NewErrParamRequired("DataSource"))
- }
- if s.DataSource != nil {
-  if err := s.DataSource.Validate(); err != nil {
-   invalidParams.AddNested("DataSource", err.(request.ErrInvalidParams))
-  }
- }
-
- if invalidParams.Len() > 0 {
-  return invalidParams
- }
- return nil
+// SetCreationTime sets the CreationTime field's value.
+func (s *DescribeNotebookInstanceOutput) SetCreationTime(v time.Time) *DescribeNotebookInstanceOutput {
+ s.CreationTime = &v
+ return s
 }
 
-// SetChannelName sets the ChannelName field's value.
-func (s *Channel) SetChannelName(v string) *Channel {
- s.ChannelName = &v
+// SetDirectInternetAccess sets the DirectInternetAccess field's value.
+func (s *DescribeNotebookInstanceOutput) SetDirectInternetAccess(v string) *DescribeNotebookInstanceOutput {
+ s.DirectInternetAccess = &v
  return s
 }
 
-// SetCompressionType sets the CompressionType field's value.
-func (s *Channel) SetCompressionType(v string) *Channel {
- s.CompressionType = &v
+// SetFailureReason sets the FailureReason field's value.
+func (s *DescribeNotebookInstanceOutput) SetFailureReason(v string) *DescribeNotebookInstanceOutput { + s.FailureReason = &v return s } -// SetContentType sets the ContentType field's value. -func (s *Channel) SetContentType(v string) *Channel { - s.ContentType = &v +// SetInstanceType sets the InstanceType field's value. +func (s *DescribeNotebookInstanceOutput) SetInstanceType(v string) *DescribeNotebookInstanceOutput { + s.InstanceType = &v return s } -// SetDataSource sets the DataSource field's value. -func (s *Channel) SetDataSource(v *DataSource) *Channel { - s.DataSource = v +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *DescribeNotebookInstanceOutput) SetKmsKeyId(v string) *DescribeNotebookInstanceOutput { + s.KmsKeyId = &v return s } -// SetRecordWrapperType sets the RecordWrapperType field's value. -func (s *Channel) SetRecordWrapperType(v string) *Channel { - s.RecordWrapperType = &v +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *DescribeNotebookInstanceOutput) SetLastModifiedTime(v time.Time) *DescribeNotebookInstanceOutput { + s.LastModifiedTime = &v return s } -// Describes the container, as part of model definition. -type ContainerDefinition struct { - _ struct{} `type:"structure"` - - // The DNS host name for the container after Amazon SageMaker deploys it. - ContainerHostname *string `type:"string"` - - // The environment variables to set in the Docker container. Each key and value - // in the Environment string to string map can have length of up to 1024. We - // support up to 16 entries in the map. - Environment map[string]*string `type:"map"` - - // The Amazon EC2 Container Registry (Amazon ECR) path where inference code - // is stored. If you are using your own custom algorithm instead of an algorithm - // provided by Amazon SageMaker, the inference code must meet Amazon SageMaker - // requirements. For more information, see Using Your Own Algorithms with Amazon - // SageMaker (http://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html) - // - // Image is a required field - Image *string `type:"string" required:"true"` +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *DescribeNotebookInstanceOutput) SetNetworkInterfaceId(v string) *DescribeNotebookInstanceOutput { + s.NetworkInterfaceId = &v + return s +} - // The S3 path where the model artifacts, which result from model training, - // are stored. This path must point to a single gzip compressed tar archive - // (.tar.gz suffix). - ModelDataUrl *string `type:"string"` +// SetNotebookInstanceArn sets the NotebookInstanceArn field's value. +func (s *DescribeNotebookInstanceOutput) SetNotebookInstanceArn(v string) *DescribeNotebookInstanceOutput { + s.NotebookInstanceArn = &v + return s } -// String returns the string representation -func (s ContainerDefinition) String() string { - return awsutil.Prettify(s) +// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. +func (s *DescribeNotebookInstanceOutput) SetNotebookInstanceLifecycleConfigName(v string) *DescribeNotebookInstanceOutput { + s.NotebookInstanceLifecycleConfigName = &v + return s } -// GoString returns the string representation -func (s ContainerDefinition) GoString() string { - return s.String() +// SetNotebookInstanceName sets the NotebookInstanceName field's value. 
+func (s *DescribeNotebookInstanceOutput) SetNotebookInstanceName(v string) *DescribeNotebookInstanceOutput { + s.NotebookInstanceName = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ContainerDefinition) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ContainerDefinition"} - if s.Image == nil { - invalidParams.Add(request.NewErrParamRequired("Image")) - } +// SetNotebookInstanceStatus sets the NotebookInstanceStatus field's value. +func (s *DescribeNotebookInstanceOutput) SetNotebookInstanceStatus(v string) *DescribeNotebookInstanceOutput { + s.NotebookInstanceStatus = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetRoleArn sets the RoleArn field's value. +func (s *DescribeNotebookInstanceOutput) SetRoleArn(v string) *DescribeNotebookInstanceOutput { + s.RoleArn = &v + return s } -// SetContainerHostname sets the ContainerHostname field's value. -func (s *ContainerDefinition) SetContainerHostname(v string) *ContainerDefinition { - s.ContainerHostname = &v +// SetSecurityGroups sets the SecurityGroups field's value. +func (s *DescribeNotebookInstanceOutput) SetSecurityGroups(v []*string) *DescribeNotebookInstanceOutput { + s.SecurityGroups = v return s } -// SetEnvironment sets the Environment field's value. -func (s *ContainerDefinition) SetEnvironment(v map[string]*string) *ContainerDefinition { - s.Environment = v +// SetSubnetId sets the SubnetId field's value. +func (s *DescribeNotebookInstanceOutput) SetSubnetId(v string) *DescribeNotebookInstanceOutput { + s.SubnetId = &v return s } -// SetImage sets the Image field's value. -func (s *ContainerDefinition) SetImage(v string) *ContainerDefinition { - s.Image = &v +// SetUrl sets the Url field's value. +func (s *DescribeNotebookInstanceOutput) SetUrl(v string) *DescribeNotebookInstanceOutput { + s.Url = &v return s } -// SetModelDataUrl sets the ModelDataUrl field's value. -func (s *ContainerDefinition) SetModelDataUrl(v string) *ContainerDefinition { - s.ModelDataUrl = &v +// SetVolumeSizeInGB sets the VolumeSizeInGB field's value. +func (s *DescribeNotebookInstanceOutput) SetVolumeSizeInGB(v int64) *DescribeNotebookInstanceOutput { + s.VolumeSizeInGB = &v return s } -type CreateEndpointConfigInput struct { +type DescribeTrainingJobInput struct { _ struct{} `type:"structure"` - // The name of the endpoint configuration. You specify this name in a CreateEndpoint - // (http://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateEndpoint.html) - // request. - // - // EndpointConfigName is a required field - EndpointConfigName *string `type:"string" required:"true"` - - // The Amazon Resource Name (ARN) of a AWS Key Management Service key that Amazon - // SageMaker uses to encrypt data on the storage volume attached to the ML compute - // instance that hosts the endpoint. - KmsKeyId *string `type:"string"` - - // An array of ProductionVariant objects, one for each model that you want to - // host at this endpoint. + // The name of the training job. // - // ProductionVariants is a required field - ProductionVariants []*ProductionVariant `min:"1" type:"list" required:"true"` - - // An array of key-value pairs. For more information, see Using Cost Allocation - // Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what) - // in the AWS Billing and Cost Management User Guide. 
- Tags []*Tag `type:"list"` + // TrainingJobName is a required field + TrainingJobName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s CreateEndpointConfigInput) String() string { +func (s DescribeTrainingJobInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateEndpointConfigInput) GoString() string { +func (s DescribeTrainingJobInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateEndpointConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateEndpointConfigInput"} - if s.EndpointConfigName == nil { - invalidParams.Add(request.NewErrParamRequired("EndpointConfigName")) - } - if s.ProductionVariants == nil { - invalidParams.Add(request.NewErrParamRequired("ProductionVariants")) - } - if s.ProductionVariants != nil && len(s.ProductionVariants) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ProductionVariants", 1)) - } - if s.ProductionVariants != nil { - for i, v := range s.ProductionVariants { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ProductionVariants", i), err.(request.ErrInvalidParams)) - } - } +func (s *DescribeTrainingJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeTrainingJobInput"} + if s.TrainingJobName == nil { + invalidParams.Add(request.NewErrParamRequired("TrainingJobName")) } - if s.Tags != nil { - for i, v := range s.Tags { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) - } - } + if s.TrainingJobName != nil && len(*s.TrainingJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TrainingJobName", 1)) } if invalidParams.Len() > 0 { @@ -3625,482 +7482,550 @@ func (s *CreateEndpointConfigInput) Validate() error { return nil } -// SetEndpointConfigName sets the EndpointConfigName field's value. -func (s *CreateEndpointConfigInput) SetEndpointConfigName(v string) *CreateEndpointConfigInput { - s.EndpointConfigName = &v +// SetTrainingJobName sets the TrainingJobName field's value. +func (s *DescribeTrainingJobInput) SetTrainingJobName(v string) *DescribeTrainingJobInput { + s.TrainingJobName = &v return s } -// SetKmsKeyId sets the KmsKeyId field's value. -func (s *CreateEndpointConfigInput) SetKmsKeyId(v string) *CreateEndpointConfigInput { - s.KmsKeyId = &v - return s -} +type DescribeTrainingJobOutput struct { + _ struct{} `type:"structure"` -// SetProductionVariants sets the ProductionVariants field's value. -func (s *CreateEndpointConfigInput) SetProductionVariants(v []*ProductionVariant) *CreateEndpointConfigInput { - s.ProductionVariants = v - return s -} + // Information about the algorithm used for training, and algorithm metadata. + // + // AlgorithmSpecification is a required field + AlgorithmSpecification *AlgorithmSpecification `type:"structure" required:"true"` -// SetTags sets the Tags field's value. -func (s *CreateEndpointConfigInput) SetTags(v []*Tag) *CreateEndpointConfigInput { - s.Tags = v - return s -} + // A timestamp that indicates when the training job was created. 
+ //
+ // CreationTime is a required field
+ CreationTime *time.Time `type:"timestamp" required:"true"`
 
-type CreateEndpointConfigOutput struct {
- _ struct{} `type:"structure"`
+ // If the training job failed, the reason it failed.
+ FailureReason *string `type:"string"`
 
- // The Amazon Resource Name (ARN) of the endpoint configuration.
+ // A collection of MetricData objects that specify the names, values, and dates
+ // and times that the training algorithm emitted to Amazon CloudWatch.
+ FinalMetricDataList []*MetricData `type:"list"`
+
+ // Algorithm-specific parameters.
+ HyperParameters map[string]*string `type:"map"`
+
+ // An array of Channel objects that describes each data input channel.
+ InputDataConfig []*Channel `min:"1" type:"list"`
+
+ // A timestamp that indicates when the status of the training job was last modified.
+ LastModifiedTime *time.Time `type:"timestamp"`
+
+ // Information about the Amazon S3 location that is configured for storing model
+ // artifacts.
 //
- // EndpointConfigArn is a required field
- EndpointConfigArn *string `min:"20" type:"string" required:"true"`
-}
+ // ModelArtifacts is a required field
+ ModelArtifacts *ModelArtifacts `type:"structure" required:"true"`
 
-// String returns the string representation
-func (s CreateEndpointConfigOutput) String() string {
- return awsutil.Prettify(s)
-}
+ // The S3 path where model artifacts that you configured when creating the job
+ // are stored. Amazon SageMaker creates subfolders for model artifacts.
+ OutputDataConfig *OutputDataConfig `type:"structure"`
 
-// GoString returns the string representation
-func (s CreateEndpointConfigOutput) GoString() string {
- return s.String()
-}
+ // Resources, including ML compute instances and ML storage volumes, that are
+ // configured for model training.
+ //
+ // ResourceConfig is a required field
+ ResourceConfig *ResourceConfig `type:"structure" required:"true"`
 
-// SetEndpointConfigArn sets the EndpointConfigArn field's value.
-func (s *CreateEndpointConfigOutput) SetEndpointConfigArn(v string) *CreateEndpointConfigOutput {
- s.EndpointConfigArn = &v
- return s
-}
+ // The AWS Identity and Access Management (IAM) role configured for the training
+ // job.
+ RoleArn *string `min:"20" type:"string"`
 
-type CreateEndpointInput struct {
- _ struct{} `type:"structure"`
+ // Provides detailed information about the state of the training job. For detailed
+ // information on the secondary status of the training job, see StatusMessage
+ // under SecondaryStatusTransition.
+ //
+ // Amazon SageMaker provides primary statuses and the secondary statuses that apply
+ // to each of them:
+ //
+ // InProgress:
+ //
+ // * Starting - Starting the training job.
+ //
+ // * Downloading - An optional stage for algorithms that support File training
+ // input mode. It indicates that data is being downloaded to the ML storage
+ // volumes.
+ //
+ // * Training - Training is in progress.
+ //
+ // * Uploading - Training is complete and the model artifacts are being uploaded
+ // to the S3 location.
+ //
+ // Completed:
+ //
+ // * Completed - The training job has completed.
+ //
+ // Failed:
+ //
+ // * Failed - The training job has failed. The reason for the failure is
+ // returned in the FailureReason field of DescribeTrainingJobResponse.
+ //
+ // Stopped:
+ //
+ // * MaxRuntimeExceeded - The job stopped because it exceeded the maximum
+ // allowed runtime.
+ //
+ // * Stopped - The training job has stopped.
+ //
+ // Stopping:
+ //
+ // * Stopping - Stopping the training job.
+ //
+ // Valid values for SecondaryStatus are subject to change.
+ // + // We no longer support the following secondary statuses: + // + // * LaunchingMLInstances + // + // * PreparingTrainingStack + // + // * DownloadingTrainingImage + // + // SecondaryStatus is a required field + SecondaryStatus *string `type:"string" required:"true" enum:"SecondaryStatus"` - // The name of an endpoint configuration. For more information, see CreateEndpointConfig - // (http://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateEndpointConfig.html). + // A history of all of the secondary statuses that the training job has transitioned + // through. + SecondaryStatusTransitions []*SecondaryStatusTransition `type:"list"` + + // The condition under which to stop the training job. // - // EndpointConfigName is a required field - EndpointConfigName *string `type:"string" required:"true"` + // StoppingCondition is a required field + StoppingCondition *StoppingCondition `type:"structure" required:"true"` - // The name of the endpoint. The name must be unique within an AWS Region in - // your AWS account. + // Indicates the time when the training job ends on training instances. You + // are billed for the time interval between the value of TrainingStartTime and + // this time. For successful jobs and stopped jobs, this is the time after model + // artifacts are uploaded. For failed jobs, this is the time when Amazon SageMaker + // detects a job failure. + TrainingEndTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the training job. // - // EndpointName is a required field - EndpointName *string `type:"string" required:"true"` + // TrainingJobArn is a required field + TrainingJobArn *string `type:"string" required:"true"` - // An array of key-value pairs. For more information, see Using Cost Allocation - // Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what)in - // the AWS Billing and Cost Management User Guide. - Tags []*Tag `type:"list"` + // Name of the model training job. + // + // TrainingJobName is a required field + TrainingJobName *string `min:"1" type:"string" required:"true"` + + // The status of the training job. + // + // Amazon SageMaker provides the following training job statuses: + // + // * InProgress - The training is in progress. + // + // * Completed - The training job has completed. + // + // * Failed - The training job has failed. To see the reason for the failure, + // see the FailureReason field in the response to a DescribeTrainingJobResponse + // call. + // + // * Stopping - The training job is stopping. + // + // * Stopped - The training job has stopped. + // + // For more detailed information, see SecondaryStatus. + // + // TrainingJobStatus is a required field + TrainingJobStatus *string `type:"string" required:"true" enum:"TrainingJobStatus"` + + // Indicates the time when the training job starts on training instances. You + // are billed for the time interval between this time and the value of TrainingEndTime. + // The start time in CloudWatch Logs might be later than this time. The difference + // is due to the time it takes to download the training data and to the size + // of the training container. + TrainingStartTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the associated hyperparameter tuning job + // if the training job was launched by a hyperparameter tuning job. + TuningJobArn *string `type:"string"` + + // A VpcConfig object that specifies the VPC that this training job has access + // to. 
For more information, see Protect Training Jobs by Using an Amazon Virtual + // Private Cloud (http://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html). + VpcConfig *VpcConfig `type:"structure"` } // String returns the string representation -func (s CreateEndpointInput) String() string { +func (s DescribeTrainingJobOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateEndpointInput) GoString() string { +func (s DescribeTrainingJobOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *CreateEndpointInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateEndpointInput"} - if s.EndpointConfigName == nil { - invalidParams.Add(request.NewErrParamRequired("EndpointConfigName")) - } - if s.EndpointName == nil { - invalidParams.Add(request.NewErrParamRequired("EndpointName")) - } - if s.Tags != nil { - for i, v := range s.Tags { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) - } - } - } +// SetAlgorithmSpecification sets the AlgorithmSpecification field's value. +func (s *DescribeTrainingJobOutput) SetAlgorithmSpecification(v *AlgorithmSpecification) *DescribeTrainingJobOutput { + s.AlgorithmSpecification = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCreationTime sets the CreationTime field's value. +func (s *DescribeTrainingJobOutput) SetCreationTime(v time.Time) *DescribeTrainingJobOutput { + s.CreationTime = &v + return s +} + +// SetFailureReason sets the FailureReason field's value. +func (s *DescribeTrainingJobOutput) SetFailureReason(v string) *DescribeTrainingJobOutput { + s.FailureReason = &v + return s +} + +// SetFinalMetricDataList sets the FinalMetricDataList field's value. +func (s *DescribeTrainingJobOutput) SetFinalMetricDataList(v []*MetricData) *DescribeTrainingJobOutput { + s.FinalMetricDataList = v + return s +} + +// SetHyperParameters sets the HyperParameters field's value. +func (s *DescribeTrainingJobOutput) SetHyperParameters(v map[string]*string) *DescribeTrainingJobOutput { + s.HyperParameters = v + return s } -// SetEndpointConfigName sets the EndpointConfigName field's value. -func (s *CreateEndpointInput) SetEndpointConfigName(v string) *CreateEndpointInput { - s.EndpointConfigName = &v +// SetInputDataConfig sets the InputDataConfig field's value. +func (s *DescribeTrainingJobOutput) SetInputDataConfig(v []*Channel) *DescribeTrainingJobOutput { + s.InputDataConfig = v return s } -// SetEndpointName sets the EndpointName field's value. -func (s *CreateEndpointInput) SetEndpointName(v string) *CreateEndpointInput { - s.EndpointName = &v +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *DescribeTrainingJobOutput) SetLastModifiedTime(v time.Time) *DescribeTrainingJobOutput { + s.LastModifiedTime = &v return s } -// SetTags sets the Tags field's value. -func (s *CreateEndpointInput) SetTags(v []*Tag) *CreateEndpointInput { - s.Tags = v +// SetModelArtifacts sets the ModelArtifacts field's value. +func (s *DescribeTrainingJobOutput) SetModelArtifacts(v *ModelArtifacts) *DescribeTrainingJobOutput { + s.ModelArtifacts = v return s } -type CreateEndpointOutput struct { - _ struct{} `type:"structure"` - - // The Amazon Resource Name (ARN) of the endpoint. 
- // - // EndpointArn is a required field - EndpointArn *string `min:"20" type:"string" required:"true"` +// SetOutputDataConfig sets the OutputDataConfig field's value. +func (s *DescribeTrainingJobOutput) SetOutputDataConfig(v *OutputDataConfig) *DescribeTrainingJobOutput { + s.OutputDataConfig = v + return s } -// String returns the string representation -func (s CreateEndpointOutput) String() string { - return awsutil.Prettify(s) +// SetResourceConfig sets the ResourceConfig field's value. +func (s *DescribeTrainingJobOutput) SetResourceConfig(v *ResourceConfig) *DescribeTrainingJobOutput { + s.ResourceConfig = v + return s } -// GoString returns the string representation -func (s CreateEndpointOutput) GoString() string { - return s.String() +// SetRoleArn sets the RoleArn field's value. +func (s *DescribeTrainingJobOutput) SetRoleArn(v string) *DescribeTrainingJobOutput { + s.RoleArn = &v + return s } -// SetEndpointArn sets the EndpointArn field's value. -func (s *CreateEndpointOutput) SetEndpointArn(v string) *CreateEndpointOutput { - s.EndpointArn = &v +// SetSecondaryStatus sets the SecondaryStatus field's value. +func (s *DescribeTrainingJobOutput) SetSecondaryStatus(v string) *DescribeTrainingJobOutput { + s.SecondaryStatus = &v return s } -type CreateModelInput struct { - _ struct{} `type:"structure"` - - // The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can - // assume to access model artifacts and docker image for deployment on ML compute - // instances. Deploying on ML compute instances is part of model hosting. For - // more information, see Amazon SageMaker Roles (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). - // - // ExecutionRoleArn is a required field - ExecutionRoleArn *string `min:"20" type:"string" required:"true"` - - // The name of the new model. - // - // ModelName is a required field - ModelName *string `type:"string" required:"true"` - - // The location of the primary docker image containing inference code, associated - // artifacts, and custom environment map that the inference code uses when the - // model is deployed into production. - // - // PrimaryContainer is a required field - PrimaryContainer *ContainerDefinition `type:"structure" required:"true"` - - // An array of key-value pairs. For more information, see Using Cost Allocation - // Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what) - // in the AWS Billing and Cost Management User Guide. - Tags []*Tag `type:"list"` +// SetSecondaryStatusTransitions sets the SecondaryStatusTransitions field's value. +func (s *DescribeTrainingJobOutput) SetSecondaryStatusTransitions(v []*SecondaryStatusTransition) *DescribeTrainingJobOutput { + s.SecondaryStatusTransitions = v + return s } -// String returns the string representation -func (s CreateModelInput) String() string { - return awsutil.Prettify(s) +// SetStoppingCondition sets the StoppingCondition field's value. +func (s *DescribeTrainingJobOutput) SetStoppingCondition(v *StoppingCondition) *DescribeTrainingJobOutput { + s.StoppingCondition = v + return s } -// GoString returns the string representation -func (s CreateModelInput) GoString() string { - return s.String() +// SetTrainingEndTime sets the TrainingEndTime field's value. +func (s *DescribeTrainingJobOutput) SetTrainingEndTime(v time.Time) *DescribeTrainingJobOutput { + s.TrainingEndTime = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateModelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateModelInput"} - if s.ExecutionRoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("ExecutionRoleArn")) - } - if s.ExecutionRoleArn != nil && len(*s.ExecutionRoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("ExecutionRoleArn", 20)) - } - if s.ModelName == nil { - invalidParams.Add(request.NewErrParamRequired("ModelName")) - } - if s.PrimaryContainer == nil { - invalidParams.Add(request.NewErrParamRequired("PrimaryContainer")) - } - if s.PrimaryContainer != nil { - if err := s.PrimaryContainer.Validate(); err != nil { - invalidParams.AddNested("PrimaryContainer", err.(request.ErrInvalidParams)) - } - } - if s.Tags != nil { - for i, v := range s.Tags { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) - } - } - } +// SetTrainingJobArn sets the TrainingJobArn field's value. +func (s *DescribeTrainingJobOutput) SetTrainingJobArn(v string) *DescribeTrainingJobOutput { + s.TrainingJobArn = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetTrainingJobName sets the TrainingJobName field's value. +func (s *DescribeTrainingJobOutput) SetTrainingJobName(v string) *DescribeTrainingJobOutput { + s.TrainingJobName = &v + return s } -// SetExecutionRoleArn sets the ExecutionRoleArn field's value. -func (s *CreateModelInput) SetExecutionRoleArn(v string) *CreateModelInput { - s.ExecutionRoleArn = &v +// SetTrainingJobStatus sets the TrainingJobStatus field's value. +func (s *DescribeTrainingJobOutput) SetTrainingJobStatus(v string) *DescribeTrainingJobOutput { + s.TrainingJobStatus = &v return s } -// SetModelName sets the ModelName field's value. -func (s *CreateModelInput) SetModelName(v string) *CreateModelInput { - s.ModelName = &v +// SetTrainingStartTime sets the TrainingStartTime field's value. +func (s *DescribeTrainingJobOutput) SetTrainingStartTime(v time.Time) *DescribeTrainingJobOutput { + s.TrainingStartTime = &v return s } -// SetPrimaryContainer sets the PrimaryContainer field's value. -func (s *CreateModelInput) SetPrimaryContainer(v *ContainerDefinition) *CreateModelInput { - s.PrimaryContainer = v +// SetTuningJobArn sets the TuningJobArn field's value. +func (s *DescribeTrainingJobOutput) SetTuningJobArn(v string) *DescribeTrainingJobOutput { + s.TuningJobArn = &v return s } -// SetTags sets the Tags field's value. -func (s *CreateModelInput) SetTags(v []*Tag) *CreateModelInput { - s.Tags = v +// SetVpcConfig sets the VpcConfig field's value. +func (s *DescribeTrainingJobOutput) SetVpcConfig(v *VpcConfig) *DescribeTrainingJobOutput { + s.VpcConfig = v return s } -type CreateModelOutput struct { +type DescribeTransformJobInput struct { _ struct{} `type:"structure"` - // The ARN of the model created in Amazon SageMaker. + // The name of the transform job that you want to view details of. 
 //
- // ModelArn is a required field
- ModelArn *string `min:"20" type:"string" required:"true"`
+ // TransformJobName is a required field
+ TransformJobName *string `min:"1" type:"string" required:"true"`
 }
 
 // String returns the string representation
-func (s CreateModelOutput) String() string {
+func (s DescribeTransformJobInput) String() string {
  return awsutil.Prettify(s)
 }
 
 // GoString returns the string representation
-func (s CreateModelOutput) GoString() string {
+func (s DescribeTransformJobInput) GoString() string {
  return s.String()
 }
 
-// SetModelArn sets the ModelArn field's value.
-func (s *CreateModelOutput) SetModelArn(v string) *CreateModelOutput {
- s.ModelArn = &v
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *DescribeTransformJobInput) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "DescribeTransformJobInput"}
+ if s.TransformJobName == nil {
+  invalidParams.Add(request.NewErrParamRequired("TransformJobName"))
+ }
+ if s.TransformJobName != nil && len(*s.TransformJobName) < 1 {
+  invalidParams.Add(request.NewErrParamMinLen("TransformJobName", 1))
+ }
+
+ if invalidParams.Len() > 0 {
+  return invalidParams
+ }
+ return nil
+}
+
+// SetTransformJobName sets the TransformJobName field's value.
+func (s *DescribeTransformJobInput) SetTransformJobName(v string) *DescribeTransformJobInput {
+ s.TransformJobName = &v
  return s
 }
 
-type CreateNotebookInstanceInput struct {
+type DescribeTransformJobOutput struct {
  _ struct{} `type:"structure"`
 
- // Sets whether Amazon SageMaker provides internet access to the notebook instance.
- // If you set this to Disabled this notebook instance will be able to access
- // resources only in your VPC, and will not be able to connect to Amazon SageMaker
- // training and endpoint services unless your configure a NAT Gateway in your
- // VPC.
+ // SingleRecord means only one record was used per batch. MultiRecord means
+ // batches contained as many records as could possibly fit within the MaxPayloadInMB
+ // limit.
+ BatchStrategy *string `type:"string" enum:"BatchStrategy"`
+
+ // A timestamp that shows when the transform job was created.
 //
- // For more information, see appendix-notebook-and-internet-access. You can
- // set the value of this parameter to Disabled only if you set a value for the
- // SubnetId parameter.
- DirectInternetAccess *string `type:"string" enum:"DirectInternetAccess"`
+ // CreationTime is a required field
+ CreationTime *time.Time `type:"timestamp" required:"true"`
 
- // The type of ML compute instance to launch for the notebook instance.
+ Environment map[string]*string `type:"map"`
+
+ // If the transform job failed, the reason that it failed.
+ FailureReason *string `type:"string"`
+
+ // The maximum number of parallel requests on each instance node that can be
+ // launched in a transform job. The default value is 1.
+ MaxConcurrentTransforms *int64 `type:"integer"`
+
+ // The maximum payload size, in MB, used in the transform job.
+ MaxPayloadInMB *int64 `type:"integer"`
+
+ // The name of the model used in the transform job.
 //
- // InstanceType is a required field
- InstanceType *string `type:"string" required:"true" enum:"InstanceType"`
+ // ModelName is a required field
+ ModelName *string `type:"string" required:"true"`
 
- // If you provide a AWS KMS key ID, Amazon SageMaker uses it to encrypt data
- // at rest on the ML storage volume that is attached to your notebook instance.
- KmsKeyId *string `type:"string"` + // Indicates when the transform job is Completed, Stopped, or Failed. You are + // billed for the time interval between this time and the value of TransformStartTime. + TransformEndTime *time.Time `type:"timestamp"` - // The name of a lifecycle configuration to associate with the notebook instance. - // For information about lifestyle configurations, see notebook-lifecycle-config. - LifecycleConfigName *string `type:"string"` + // Describes the dataset to be transformed and the Amazon S3 location where + // it is stored. + // + // TransformInput is a required field + TransformInput *TransformInput `type:"structure" required:"true"` - // The name of the new notebook instance. + // The Amazon Resource Name (ARN) of the transform job. // - // NotebookInstanceName is a required field - NotebookInstanceName *string `type:"string" required:"true"` + // TransformJobArn is a required field + TransformJobArn *string `type:"string" required:"true"` - // When you send any requests to AWS resources from the notebook instance, Amazon - // SageMaker assumes this role to perform tasks on your behalf. You must grant - // this role necessary permissions so Amazon SageMaker can perform these tasks. - // The policy must allow the Amazon SageMaker service principal (sagemaker.amazonaws.com) - // permissions to assume this role. For more information, see Amazon SageMaker - // Roles (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). + // The name of the transform job. // - // RoleArn is a required field - RoleArn *string `min:"20" type:"string" required:"true"` + // TransformJobName is a required field + TransformJobName *string `min:"1" type:"string" required:"true"` - // The VPC security group IDs, in the form sg-xxxxxxxx. The security groups - // must be for the same VPC as specified in the subnet. - SecurityGroupIds []*string `type:"list"` + // The status of the transform job. If the transform job failed, the reason + // is returned in the FailureReason field. + // + // TransformJobStatus is a required field + TransformJobStatus *string `type:"string" required:"true" enum:"TransformJobStatus"` - // The ID of the subnet in a VPC to which you would like to have a connectivity - // from your ML compute instance. - SubnetId *string `type:"string"` + // Identifies the Amazon S3 location where you want Amazon SageMaker to save + // the results from the transform job. + TransformOutput *TransformOutput `type:"structure"` - // A list of tags to associate with the notebook instance. You can add tags - // later by using the CreateTags API. - Tags []*Tag `type:"list"` + // Describes the resources, including ML instance types and ML instance count, + // to use for the transform job. + // + // TransformResources is a required field + TransformResources *TransformResources `type:"structure" required:"true"` + + // Indicates when the transform job starts on ML instances. You are billed for + // the time interval between this time and the value of TransformEndTime. + TransformStartTime *time.Time `type:"timestamp"` } // String returns the string representation -func (s CreateNotebookInstanceInput) String() string { +func (s DescribeTransformJobOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateNotebookInstanceInput) GoString() string { +func (s DescribeTransformJobOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateNotebookInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateNotebookInstanceInput"} - if s.InstanceType == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceType")) - } - if s.NotebookInstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) - } - if s.RoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("RoleArn")) - } - if s.RoleArn != nil && len(*s.RoleArn) < 20 { - invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) - } - if s.Tags != nil { - for i, v := range s.Tags { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) - } - } - } +// SetBatchStrategy sets the BatchStrategy field's value. +func (s *DescribeTransformJobOutput) SetBatchStrategy(v string) *DescribeTransformJobOutput { + s.BatchStrategy = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *DescribeTransformJobOutput) SetCreationTime(v time.Time) *DescribeTransformJobOutput { + s.CreationTime = &v + return s +} + +// SetEnvironment sets the Environment field's value. +func (s *DescribeTransformJobOutput) SetEnvironment(v map[string]*string) *DescribeTransformJobOutput { + s.Environment = v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetFailureReason sets the FailureReason field's value. +func (s *DescribeTransformJobOutput) SetFailureReason(v string) *DescribeTransformJobOutput { + s.FailureReason = &v + return s } -// SetDirectInternetAccess sets the DirectInternetAccess field's value. -func (s *CreateNotebookInstanceInput) SetDirectInternetAccess(v string) *CreateNotebookInstanceInput { - s.DirectInternetAccess = &v +// SetMaxConcurrentTransforms sets the MaxConcurrentTransforms field's value. +func (s *DescribeTransformJobOutput) SetMaxConcurrentTransforms(v int64) *DescribeTransformJobOutput { + s.MaxConcurrentTransforms = &v return s } -// SetInstanceType sets the InstanceType field's value. -func (s *CreateNotebookInstanceInput) SetInstanceType(v string) *CreateNotebookInstanceInput { - s.InstanceType = &v +// SetMaxPayloadInMB sets the MaxPayloadInMB field's value. +func (s *DescribeTransformJobOutput) SetMaxPayloadInMB(v int64) *DescribeTransformJobOutput { + s.MaxPayloadInMB = &v return s } -// SetKmsKeyId sets the KmsKeyId field's value. -func (s *CreateNotebookInstanceInput) SetKmsKeyId(v string) *CreateNotebookInstanceInput { - s.KmsKeyId = &v +// SetModelName sets the ModelName field's value. +func (s *DescribeTransformJobOutput) SetModelName(v string) *DescribeTransformJobOutput { + s.ModelName = &v return s } -// SetLifecycleConfigName sets the LifecycleConfigName field's value. -func (s *CreateNotebookInstanceInput) SetLifecycleConfigName(v string) *CreateNotebookInstanceInput { - s.LifecycleConfigName = &v +// SetTransformEndTime sets the TransformEndTime field's value. +func (s *DescribeTransformJobOutput) SetTransformEndTime(v time.Time) *DescribeTransformJobOutput { + s.TransformEndTime = &v return s } -// SetNotebookInstanceName sets the NotebookInstanceName field's value. -func (s *CreateNotebookInstanceInput) SetNotebookInstanceName(v string) *CreateNotebookInstanceInput { - s.NotebookInstanceName = &v +// SetTransformInput sets the TransformInput field's value. 
+func (s *DescribeTransformJobOutput) SetTransformInput(v *TransformInput) *DescribeTransformJobOutput { + s.TransformInput = v return s } -// SetRoleArn sets the RoleArn field's value. -func (s *CreateNotebookInstanceInput) SetRoleArn(v string) *CreateNotebookInstanceInput { - s.RoleArn = &v +// SetTransformJobArn sets the TransformJobArn field's value. +func (s *DescribeTransformJobOutput) SetTransformJobArn(v string) *DescribeTransformJobOutput { + s.TransformJobArn = &v return s } -// SetSecurityGroupIds sets the SecurityGroupIds field's value. -func (s *CreateNotebookInstanceInput) SetSecurityGroupIds(v []*string) *CreateNotebookInstanceInput { - s.SecurityGroupIds = v +// SetTransformJobName sets the TransformJobName field's value. +func (s *DescribeTransformJobOutput) SetTransformJobName(v string) *DescribeTransformJobOutput { + s.TransformJobName = &v return s } -// SetSubnetId sets the SubnetId field's value. -func (s *CreateNotebookInstanceInput) SetSubnetId(v string) *CreateNotebookInstanceInput { - s.SubnetId = &v +// SetTransformJobStatus sets the TransformJobStatus field's value. +func (s *DescribeTransformJobOutput) SetTransformJobStatus(v string) *DescribeTransformJobOutput { + s.TransformJobStatus = &v return s } -// SetTags sets the Tags field's value. -func (s *CreateNotebookInstanceInput) SetTags(v []*Tag) *CreateNotebookInstanceInput { - s.Tags = v +// SetTransformOutput sets the TransformOutput field's value. +func (s *DescribeTransformJobOutput) SetTransformOutput(v *TransformOutput) *DescribeTransformJobOutput { + s.TransformOutput = v return s } -type CreateNotebookInstanceLifecycleConfigInput struct { +// SetTransformResources sets the TransformResources field's value. +func (s *DescribeTransformJobOutput) SetTransformResources(v *TransformResources) *DescribeTransformJobOutput { + s.TransformResources = v + return s +} + +// SetTransformStartTime sets the TransformStartTime field's value. +func (s *DescribeTransformJobOutput) SetTransformStartTime(v time.Time) *DescribeTransformJobOutput { + s.TransformStartTime = &v + return s +} + +// Specifies weight and capacity values for a production variant. +type DesiredWeightAndCapacity struct { _ struct{} `type:"structure"` - // The name of the lifecycle configuration. - // - // NotebookInstanceLifecycleConfigName is a required field - NotebookInstanceLifecycleConfigName *string `type:"string" required:"true"` + // The variant's capacity. + DesiredInstanceCount *int64 `min:"1" type:"integer"` - // A shell script that runs only once, when you create a notebook instance. - OnCreate []*NotebookInstanceLifecycleHook `type:"list"` + // The variant's weight. + DesiredWeight *float64 `type:"float"` - // A shell script that runs every time you start a notebook instance, including - // when you create the notebook instance. - OnStart []*NotebookInstanceLifecycleHook `type:"list"` + // The name of the variant to update. + // + // VariantName is a required field + VariantName *string `type:"string" required:"true"` } // String returns the string representation -func (s CreateNotebookInstanceLifecycleConfigInput) String() string { +func (s DesiredWeightAndCapacity) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateNotebookInstanceLifecycleConfigInput) GoString() string { +func (s DesiredWeightAndCapacity) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateNotebookInstanceLifecycleConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateNotebookInstanceLifecycleConfigInput"} - if s.NotebookInstanceLifecycleConfigName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceLifecycleConfigName")) - } - if s.OnCreate != nil { - for i, v := range s.OnCreate { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OnCreate", i), err.(request.ErrInvalidParams)) - } - } +func (s *DesiredWeightAndCapacity) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DesiredWeightAndCapacity"} + if s.DesiredInstanceCount != nil && *s.DesiredInstanceCount < 1 { + invalidParams.Add(request.NewErrParamMinValue("DesiredInstanceCount", 1)) } - if s.OnStart != nil { - for i, v := range s.OnStart { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "OnStart", i), err.(request.ErrInvalidParams)) - } - } + if s.VariantName == nil { + invalidParams.Add(request.NewErrParamRequired("VariantName")) } if invalidParams.Len() > 0 { @@ -4109,100 +8034,277 @@ func (s *CreateNotebookInstanceLifecycleConfigInput) Validate() error { return nil } -// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. -func (s *CreateNotebookInstanceLifecycleConfigInput) SetNotebookInstanceLifecycleConfigName(v string) *CreateNotebookInstanceLifecycleConfigInput { - s.NotebookInstanceLifecycleConfigName = &v +// SetDesiredInstanceCount sets the DesiredInstanceCount field's value. +func (s *DesiredWeightAndCapacity) SetDesiredInstanceCount(v int64) *DesiredWeightAndCapacity { + s.DesiredInstanceCount = &v return s } -// SetOnCreate sets the OnCreate field's value. -func (s *CreateNotebookInstanceLifecycleConfigInput) SetOnCreate(v []*NotebookInstanceLifecycleHook) *CreateNotebookInstanceLifecycleConfigInput { - s.OnCreate = v +// SetDesiredWeight sets the DesiredWeight field's value. +func (s *DesiredWeightAndCapacity) SetDesiredWeight(v float64) *DesiredWeightAndCapacity { + s.DesiredWeight = &v return s } -// SetOnStart sets the OnStart field's value. -func (s *CreateNotebookInstanceLifecycleConfigInput) SetOnStart(v []*NotebookInstanceLifecycleHook) *CreateNotebookInstanceLifecycleConfigInput { - s.OnStart = v +// SetVariantName sets the VariantName field's value. +func (s *DesiredWeightAndCapacity) SetVariantName(v string) *DesiredWeightAndCapacity { + s.VariantName = &v return s } -type CreateNotebookInstanceLifecycleConfigOutput struct { +// Provides summary information for an endpoint configuration. +type EndpointConfigSummary struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) of the lifecycle configuration. - NotebookInstanceLifecycleConfigArn *string `type:"string"` + // A timestamp that shows when the endpoint configuration was created. + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` + + // The Amazon Resource Name (ARN) of the endpoint configuration. + // + // EndpointConfigArn is a required field + EndpointConfigArn *string `min:"20" type:"string" required:"true"` + + // The name of the endpoint configuration. 
+ //
+ // EndpointConfigName is a required field
+ EndpointConfigName *string `type:"string" required:"true"`
 }
 
 // String returns the string representation
-func (s CreateNotebookInstanceLifecycleConfigOutput) String() string {
+func (s EndpointConfigSummary) String() string {
  return awsutil.Prettify(s)
 }
 
 // GoString returns the string representation
-func (s CreateNotebookInstanceLifecycleConfigOutput) GoString() string {
+func (s EndpointConfigSummary) GoString() string {
  return s.String()
 }
 
-// SetNotebookInstanceLifecycleConfigArn sets the NotebookInstanceLifecycleConfigArn field's value.
-func (s *CreateNotebookInstanceLifecycleConfigOutput) SetNotebookInstanceLifecycleConfigArn(v string) *CreateNotebookInstanceLifecycleConfigOutput {
- s.NotebookInstanceLifecycleConfigArn = &v
+// SetCreationTime sets the CreationTime field's value.
+func (s *EndpointConfigSummary) SetCreationTime(v time.Time) *EndpointConfigSummary {
+ s.CreationTime = &v
  return s
 }
 
-type CreateNotebookInstanceOutput struct {
+// SetEndpointConfigArn sets the EndpointConfigArn field's value.
+func (s *EndpointConfigSummary) SetEndpointConfigArn(v string) *EndpointConfigSummary {
+ s.EndpointConfigArn = &v
+ return s
+}
+
+// SetEndpointConfigName sets the EndpointConfigName field's value.
+func (s *EndpointConfigSummary) SetEndpointConfigName(v string) *EndpointConfigSummary {
+ s.EndpointConfigName = &v
+ return s
+}
+
+// Provides summary information for an endpoint.
+type EndpointSummary struct {
  _ struct{} `type:"structure"`
 
- // The Amazon Resource Name (ARN) of the notebook instance.
- NotebookInstanceArn *string `type:"string"`
+ // A timestamp that shows when the endpoint was created.
+ //
+ // CreationTime is a required field
+ CreationTime *time.Time `type:"timestamp" required:"true"`
+
+ // The Amazon Resource Name (ARN) of the endpoint.
+ //
+ // EndpointArn is a required field
+ EndpointArn *string `min:"20" type:"string" required:"true"`
+
+ // The name of the endpoint.
+ //
+ // EndpointName is a required field
+ EndpointName *string `type:"string" required:"true"`
+
+ // The status of the endpoint.
+ //
+ // * OutOfService: Endpoint is not available to take incoming requests.
+ //
+ // * Creating: CreateEndpoint is executing.
+ //
+ // * Updating: UpdateEndpoint or UpdateEndpointWeightsAndCapacities is executing.
+ //
+ // * SystemUpdating: Endpoint is undergoing maintenance and cannot be updated
+ // or deleted or re-scaled until it has completed. This maintenance operation
+ // does not change any customer-specified values such as VPC config, KMS
+ // encryption, model, instance type, or instance count.
+ //
+ // * RollingBack: Endpoint fails to scale up or down or change its variant
+ // weight and is in the process of rolling back to its previous configuration.
+ // Once the rollback completes, endpoint returns to an InService status.
+ // This transitional status only applies to an endpoint that has autoscaling
+ // enabled and is undergoing variant weight or capacity changes as part of
+ // an UpdateEndpointWeightsAndCapacities call or when the UpdateEndpointWeightsAndCapacities
+ // operation is called explicitly.
+ //
+ // * InService: Endpoint is available to process incoming requests.
+ //
+ // * Deleting: DeleteEndpoint is executing.
+ //
+ // * Failed: Endpoint could not be created, updated, or re-scaled. Use DescribeEndpointOutput$FailureReason
+ // for information about the failure. DeleteEndpoint is the only operation
+ // that can be performed on a failed endpoint.
+ // + // To get a list of endpoints with a specified status, use the ListEndpointsInput$StatusEquals + // filter. + // + // EndpointStatus is a required field + EndpointStatus *string `type:"string" required:"true" enum:"EndpointStatus"` + + // A timestamp that shows when the endpoint was last modified. + // + // LastModifiedTime is a required field + LastModifiedTime *time.Time `type:"timestamp" required:"true"` } // String returns the string representation -func (s CreateNotebookInstanceOutput) String() string { +func (s EndpointSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateNotebookInstanceOutput) GoString() string { +func (s EndpointSummary) GoString() string { return s.String() } -// SetNotebookInstanceArn sets the NotebookInstanceArn field's value. -func (s *CreateNotebookInstanceOutput) SetNotebookInstanceArn(v string) *CreateNotebookInstanceOutput { - s.NotebookInstanceArn = &v +// SetCreationTime sets the CreationTime field's value. +func (s *EndpointSummary) SetCreationTime(v time.Time) *EndpointSummary { + s.CreationTime = &v return s } -type CreatePresignedNotebookInstanceUrlInput struct { +// SetEndpointArn sets the EndpointArn field's value. +func (s *EndpointSummary) SetEndpointArn(v string) *EndpointSummary { + s.EndpointArn = &v + return s +} + +// SetEndpointName sets the EndpointName field's value. +func (s *EndpointSummary) SetEndpointName(v string) *EndpointSummary { + s.EndpointName = &v + return s +} + +// SetEndpointStatus sets the EndpointStatus field's value. +func (s *EndpointSummary) SetEndpointStatus(v string) *EndpointSummary { + s.EndpointStatus = &v + return s +} + +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *EndpointSummary) SetLastModifiedTime(v time.Time) *EndpointSummary { + s.LastModifiedTime = &v + return s +} + +// Shows the final value for the objective metric for a training job that was +// launched by a hyperparameter tuning job. You define the objective metric +// in the HyperParameterTuningJobObjective parameter of HyperParameterTuningJobConfig. +type FinalHyperParameterTuningJobObjectiveMetric struct { + _ struct{} `type:"structure"` + + // The name of the objective metric. + // + // MetricName is a required field + MetricName *string `min:"1" type:"string" required:"true"` + + // Whether to minimize or maximize the objective metric. Valid values are Minimize + // and Maximize. + Type *string `type:"string" enum:"HyperParameterTuningJobObjectiveType"` + + // The value of the objective metric. + // + // Value is a required field + Value *float64 `type:"float" required:"true"` +} + +// String returns the string representation +func (s FinalHyperParameterTuningJobObjectiveMetric) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FinalHyperParameterTuningJobObjectiveMetric) GoString() string { + return s.String() +} + +// SetMetricName sets the MetricName field's value. +func (s *FinalHyperParameterTuningJobObjectiveMetric) SetMetricName(v string) *FinalHyperParameterTuningJobObjectiveMetric { + s.MetricName = &v + return s +} + +// SetType sets the Type field's value. +func (s *FinalHyperParameterTuningJobObjectiveMetric) SetType(v string) *FinalHyperParameterTuningJobObjectiveMetric { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. 
+func (s *FinalHyperParameterTuningJobObjectiveMetric) SetValue(v float64) *FinalHyperParameterTuningJobObjectiveMetric { + s.Value = &v + return s +} + +// Specifies which training algorithm to use for training jobs that a hyperparameter +// tuning job launches and the metrics to monitor. +type HyperParameterAlgorithmSpecification struct { _ struct{} `type:"structure"` - // The name of the notebook instance. + // An array of MetricDefinition objects that specify the metrics that the algorithm + // emits. + MetricDefinitions []*MetricDefinition `type:"list"` + + // The registry path of the Docker image that contains the training algorithm. + // For information about Docker registry paths for built-in algorithms, see + // Algorithms Provided by Amazon SageMaker: Common Parameters (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). + TrainingImage *string `type:"string"` + + // The input mode that the algorithm supports: File or Pipe. In File input mode, + // Amazon SageMaker downloads the training data from Amazon S3 to the storage + // volume that is attached to the training instance and mounts the directory + // to the Docker volume for the training container. In Pipe input mode, Amazon + // SageMaker streams data directly from Amazon S3 to the container. // - // NotebookInstanceName is a required field - NotebookInstanceName *string `type:"string" required:"true"` - - // The duration of the session, in seconds. The default is 12 hours. - SessionExpirationDurationInSeconds *int64 `min:"1800" type:"integer"` + // If you specify File mode, make sure that you provision the storage volume + // that is attached to the training instance with enough capacity to accommodate + // the training data downloaded from Amazon S3, the model artifacts, and intermediate + // information. + // + // For more information about input modes, see Algorithms (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html) + // + // TrainingInputMode is a required field + TrainingInputMode *string `type:"string" required:"true" enum:"TrainingInputMode"` } // String returns the string representation -func (s CreatePresignedNotebookInstanceUrlInput) String() string { +func (s HyperParameterAlgorithmSpecification) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreatePresignedNotebookInstanceUrlInput) GoString() string { +func (s HyperParameterAlgorithmSpecification) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreatePresignedNotebookInstanceUrlInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreatePresignedNotebookInstanceUrlInput"} - if s.NotebookInstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) +func (s *HyperParameterAlgorithmSpecification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HyperParameterAlgorithmSpecification"} + if s.TrainingInputMode == nil { + invalidParams.Add(request.NewErrParamRequired("TrainingInputMode")) } - if s.SessionExpirationDurationInSeconds != nil && *s.SessionExpirationDurationInSeconds < 1800 { - invalidParams.Add(request.NewErrParamMinValue("SessionExpirationDurationInSeconds", 1800)) + if s.MetricDefinitions != nil { + for i, v := range s.MetricDefinitions { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "MetricDefinitions", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -4211,154 +8313,103 @@ func (s *CreatePresignedNotebookInstanceUrlInput) Validate() error { return nil } -// SetNotebookInstanceName sets the NotebookInstanceName field's value. -func (s *CreatePresignedNotebookInstanceUrlInput) SetNotebookInstanceName(v string) *CreatePresignedNotebookInstanceUrlInput { - s.NotebookInstanceName = &v +// SetMetricDefinitions sets the MetricDefinitions field's value. +func (s *HyperParameterAlgorithmSpecification) SetMetricDefinitions(v []*MetricDefinition) *HyperParameterAlgorithmSpecification { + s.MetricDefinitions = v return s } -// SetSessionExpirationDurationInSeconds sets the SessionExpirationDurationInSeconds field's value. -func (s *CreatePresignedNotebookInstanceUrlInput) SetSessionExpirationDurationInSeconds(v int64) *CreatePresignedNotebookInstanceUrlInput { - s.SessionExpirationDurationInSeconds = &v +// SetTrainingImage sets the TrainingImage field's value. +func (s *HyperParameterAlgorithmSpecification) SetTrainingImage(v string) *HyperParameterAlgorithmSpecification { + s.TrainingImage = &v return s } -type CreatePresignedNotebookInstanceUrlOutput struct { - _ struct{} `type:"structure"` - - // A JSON object that contains the URL string. - AuthorizedUrl *string `type:"string"` -} - -// String returns the string representation -func (s CreatePresignedNotebookInstanceUrlOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s CreatePresignedNotebookInstanceUrlOutput) GoString() string { - return s.String() -} - -// SetAuthorizedUrl sets the AuthorizedUrl field's value. -func (s *CreatePresignedNotebookInstanceUrlOutput) SetAuthorizedUrl(v string) *CreatePresignedNotebookInstanceUrlOutput { - s.AuthorizedUrl = &v +// SetTrainingInputMode sets the TrainingInputMode field's value. +func (s *HyperParameterAlgorithmSpecification) SetTrainingInputMode(v string) *HyperParameterAlgorithmSpecification { + s.TrainingInputMode = &v return s } -type CreateTrainingJobInput struct { +// Defines the training jobs launched by a hyperparameter tuning job. +type HyperParameterTrainingJobDefinition struct { _ struct{} `type:"structure"` - // The registry path of the Docker image that contains the training algorithm - // and algorithm-specific metadata, including the input mode. For more information - // about algorithms provided by Amazon SageMaker, see Algorithms (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html). 
- // For information about providing your own algorithms, see Bring Your Own Algorithms - // (http://docs.aws.amazon.com/sagemaker/latest/dg/adv-topics-own-algo.html). + // The HyperParameterAlgorithmSpecification object that specifies the algorithm + // to use for the training jobs that the tuning job launches. // // AlgorithmSpecification is a required field - AlgorithmSpecification *AlgorithmSpecification `type:"structure" required:"true"` - - // Algorithm-specific parameters. You set hyperparameters before you start the - // learning process. Hyperparameters influence the quality of the model. For - // a list of hyperparameters for each training algorithm provided by Amazon - // SageMaker, see Algorithms (http://docs.aws.amazon.com/sagemaker/latest/dg/algos.html). - // - // You can specify a maximum of 100 hyperparameters. Each hyperparameter is - // a key-value pair. Each key and value is limited to 256 characters, as specified - // by the Length Constraint. - HyperParameters map[string]*string `type:"map"` + AlgorithmSpecification *HyperParameterAlgorithmSpecification `type:"structure" required:"true"` - // An array of Channel objects. Each channel is a named input source. InputDataConfig - // describes the input data and its location. - // - // Algorithms can accept input data from one or more channels. For example, - // an algorithm might have two channels of input data, training_data and validation_data. - // The configuration for each channel provides the S3 location where the input - // data is stored. It also provides information about the stored data: the MIME - // type, compression method, and whether the data is wrapped in RecordIO format. - // - // Depending on the input mode that the algorithm supports, Amazon SageMaker - // either copies input data files from an S3 bucket to a local directory in - // the Docker container, or makes it available as input streams. - // - // InputDataConfig is a required field - InputDataConfig []*Channel `min:"1" type:"list" required:"true"` + // An array of Channel objects that specify the input for the training jobs + // that the tuning job launches. + InputDataConfig []*Channel `min:"1" type:"list"` - // Specifies the path to the S3 bucket where you want to store model artifacts. - // Amazon SageMaker creates subfolders for the artifacts. + // Specifies the path to the Amazon S3 bucket where you store model artifacts + // from the training jobs that the tuning job launches. // // OutputDataConfig is a required field OutputDataConfig *OutputDataConfig `type:"structure" required:"true"` - // The resources, including the ML compute instances and ML storage volumes, - // to use for model training. + // The resources, including the compute instances and storage volumes, to use + // for the training jobs that the tuning job launches. // - // ML storage volumes store model artifacts and incremental states. Training - // algorithms might also use ML storage volumes for scratch space. If you want - // Amazon SageMaker to use the ML storage volume to store the training data, - // choose File as the TrainingInputMode in the algorithm specification. For - // distributed training algorithms, specify an instance count greater than 1. + // Storage volumes store model artifacts and incremental states. Training algorithms + // might also use storage volumes for scratch space. If you want Amazon SageMaker + // to use the storage volume to store the training data, choose File as the + // TrainingInputMode in the algorithm specification. 
For distributed training + // algorithms, specify an instance count greater than 1. // // ResourceConfig is a required field ResourceConfig *ResourceConfig `type:"structure" required:"true"` - // The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume - // to perform tasks on your behalf. - // - // During model training, Amazon SageMaker needs your permission to read input - // data from an S3 bucket, download a Docker image that contains training code, - // write model artifacts to an S3 bucket, write logs to Amazon CloudWatch Logs, - // and publish metrics to Amazon CloudWatch. You grant permissions for all of - // these tasks to an IAM role. For more information, see Amazon SageMaker Roles - // (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). + // The Amazon Resource Name (ARN) of the IAM role associated with the training + // jobs that the tuning job launches. // // RoleArn is a required field RoleArn *string `min:"20" type:"string" required:"true"` - // Sets a duration for training. Use this parameter to cap model training costs. - // To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which + // Specifies the values of hyperparameters that do not change for the tuning + // job. + StaticHyperParameters map[string]*string `type:"map"` + + // Sets a maximum duration for the training jobs that the tuning job launches. + // Use this parameter to limit model training costs. + // + // To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal. This // delays job termination for 120 seconds. Algorithms might use this 120-second // window to save the model artifacts. // // When Amazon SageMaker terminates a job because the stopping condition has // been met, training algorithms provided by Amazon SageMaker save the intermediate - // results of the job. This intermediate data is a valid model artifact. You - // can use it to create a model using the CreateModel API. + // results of the job. // // StoppingCondition is a required field StoppingCondition *StoppingCondition `type:"structure" required:"true"` - // An array of key-value pairs. For more information, see Using Cost Allocation - // Tags (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what) - // in the AWS Billing and Cost Management User Guide. - Tags []*Tag `type:"list"` - - // The name of the training job. The name must be unique within an AWS Region - // in an AWS account. It appears in the Amazon SageMaker console. - // - // TrainingJobName is a required field - TrainingJobName *string `min:"1" type:"string" required:"true"` + // The VpcConfig object that specifies the VPC that you want the training jobs + // that this hyperparameter tuning job launches to connect to. Control access + // to and from your training container by configuring the VPC. For more information, + // see Protect Training Jobs by Using an Amazon Virtual Private Cloud (http://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html). + VpcConfig *VpcConfig `type:"structure"` } // String returns the string representation -func (s CreateTrainingJobInput) String() string { +func (s HyperParameterTrainingJobDefinition) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateTrainingJobInput) GoString() string { +func (s HyperParameterTrainingJobDefinition) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateTrainingJobInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateTrainingJobInput"} +func (s *HyperParameterTrainingJobDefinition) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HyperParameterTrainingJobDefinition"} if s.AlgorithmSpecification == nil { invalidParams.Add(request.NewErrParamRequired("AlgorithmSpecification")) } - if s.InputDataConfig == nil { - invalidParams.Add(request.NewErrParamRequired("InputDataConfig")) - } if s.InputDataConfig != nil && len(s.InputDataConfig) < 1 { invalidParams.Add(request.NewErrParamMinLen("InputDataConfig", 1)) } @@ -4377,12 +8428,6 @@ func (s *CreateTrainingJobInput) Validate() error { if s.StoppingCondition == nil { invalidParams.Add(request.NewErrParamRequired("StoppingCondition")) } - if s.TrainingJobName == nil { - invalidParams.Add(request.NewErrParamRequired("TrainingJobName")) - } - if s.TrainingJobName != nil && len(*s.TrainingJobName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TrainingJobName", 1)) - } if s.AlgorithmSpecification != nil { if err := s.AlgorithmSpecification.Validate(); err != nil { invalidParams.AddNested("AlgorithmSpecification", err.(request.ErrInvalidParams)) @@ -4413,14 +8458,9 @@ func (s *CreateTrainingJobInput) Validate() error { invalidParams.AddNested("StoppingCondition", err.(request.ErrInvalidParams)) } } - if s.Tags != nil { - for i, v := range s.Tags { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) - } + if s.VpcConfig != nil { + if err := s.VpcConfig.Validate(); err != nil { + invalidParams.AddNested("VpcConfig", err.(request.ErrInvalidParams)) } } @@ -4431,152 +8471,257 @@ func (s *CreateTrainingJobInput) Validate() error { } // SetAlgorithmSpecification sets the AlgorithmSpecification field's value. -func (s *CreateTrainingJobInput) SetAlgorithmSpecification(v *AlgorithmSpecification) *CreateTrainingJobInput { +func (s *HyperParameterTrainingJobDefinition) SetAlgorithmSpecification(v *HyperParameterAlgorithmSpecification) *HyperParameterTrainingJobDefinition { s.AlgorithmSpecification = v return s } -// SetHyperParameters sets the HyperParameters field's value. -func (s *CreateTrainingJobInput) SetHyperParameters(v map[string]*string) *CreateTrainingJobInput { - s.HyperParameters = v - return s -} - // SetInputDataConfig sets the InputDataConfig field's value. -func (s *CreateTrainingJobInput) SetInputDataConfig(v []*Channel) *CreateTrainingJobInput { +func (s *HyperParameterTrainingJobDefinition) SetInputDataConfig(v []*Channel) *HyperParameterTrainingJobDefinition { s.InputDataConfig = v return s } // SetOutputDataConfig sets the OutputDataConfig field's value. -func (s *CreateTrainingJobInput) SetOutputDataConfig(v *OutputDataConfig) *CreateTrainingJobInput { +func (s *HyperParameterTrainingJobDefinition) SetOutputDataConfig(v *OutputDataConfig) *HyperParameterTrainingJobDefinition { s.OutputDataConfig = v return s } // SetResourceConfig sets the ResourceConfig field's value. -func (s *CreateTrainingJobInput) SetResourceConfig(v *ResourceConfig) *CreateTrainingJobInput { +func (s *HyperParameterTrainingJobDefinition) SetResourceConfig(v *ResourceConfig) *HyperParameterTrainingJobDefinition { s.ResourceConfig = v return s } // SetRoleArn sets the RoleArn field's value. 
-func (s *CreateTrainingJobInput) SetRoleArn(v string) *CreateTrainingJobInput { +func (s *HyperParameterTrainingJobDefinition) SetRoleArn(v string) *HyperParameterTrainingJobDefinition { s.RoleArn = &v return s } -// SetStoppingCondition sets the StoppingCondition field's value. -func (s *CreateTrainingJobInput) SetStoppingCondition(v *StoppingCondition) *CreateTrainingJobInput { - s.StoppingCondition = v +// SetStaticHyperParameters sets the StaticHyperParameters field's value. +func (s *HyperParameterTrainingJobDefinition) SetStaticHyperParameters(v map[string]*string) *HyperParameterTrainingJobDefinition { + s.StaticHyperParameters = v return s } -// SetTags sets the Tags field's value. -func (s *CreateTrainingJobInput) SetTags(v []*Tag) *CreateTrainingJobInput { - s.Tags = v +// SetStoppingCondition sets the StoppingCondition field's value. +func (s *HyperParameterTrainingJobDefinition) SetStoppingCondition(v *StoppingCondition) *HyperParameterTrainingJobDefinition { + s.StoppingCondition = v return s } -// SetTrainingJobName sets the TrainingJobName field's value. -func (s *CreateTrainingJobInput) SetTrainingJobName(v string) *CreateTrainingJobInput { - s.TrainingJobName = &v +// SetVpcConfig sets the VpcConfig field's value. +func (s *HyperParameterTrainingJobDefinition) SetVpcConfig(v *VpcConfig) *HyperParameterTrainingJobDefinition { + s.VpcConfig = v return s } -type CreateTrainingJobOutput struct { +// Specifies summary information about a training job. +type HyperParameterTrainingJobSummary struct { _ struct{} `type:"structure"` + // The date and time that the training job was created. + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` + + // The reason that the training job failed. + FailureReason *string `type:"string"` + + // The FinalHyperParameterTuningJobObjectiveMetric object that specifies the + // value of the objective metric of the tuning job that launched this training + // job. + FinalHyperParameterTuningJobObjectiveMetric *FinalHyperParameterTuningJobObjectiveMetric `type:"structure"` + + // The status of the objective metric for the training job: + // + // * Succeeded: The final objective metric for the training job was evaluated + // by the hyperparameter tuning job and used in the hyperparameter tuning + // process. + // + // * Pending: The training job is in progress and evaluation of its final + // objective metric is pending. + // + // * Failed: The final objective metric for the training job was not evaluated, + // and was not used in the hyperparameter tuning process. This typically + // occurs when the training job failed or did not emit an objective metric. + ObjectiveStatus *string `type:"string" enum:"ObjectiveStatus"` + + // The date and time that the training job ended. + TrainingEndTime *time.Time `type:"timestamp"` + // The Amazon Resource Name (ARN) of the training job. // // TrainingJobArn is a required field TrainingJobArn *string `type:"string" required:"true"` + + // The name of the training job. + // + // TrainingJobName is a required field + TrainingJobName *string `min:"1" type:"string" required:"true"` + + // The status of the training job. + // + // TrainingJobStatus is a required field + TrainingJobStatus *string `type:"string" required:"true" enum:"TrainingJobStatus"` + + // The date and time that the training job started. + TrainingStartTime *time.Time `type:"timestamp"` + + // A list of the hyperparameters for which you specified ranges to search. 
+ // + // TunedHyperParameters is a required field + TunedHyperParameters map[string]*string `type:"map" required:"true"` + + TuningJobName *string `min:"1" type:"string"` } // String returns the string representation -func (s CreateTrainingJobOutput) String() string { +func (s HyperParameterTrainingJobSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateTrainingJobOutput) GoString() string { +func (s HyperParameterTrainingJobSummary) GoString() string { return s.String() } -// SetTrainingJobArn sets the TrainingJobArn field's value. -func (s *CreateTrainingJobOutput) SetTrainingJobArn(v string) *CreateTrainingJobOutput { - s.TrainingJobArn = &v +// SetCreationTime sets the CreationTime field's value. +func (s *HyperParameterTrainingJobSummary) SetCreationTime(v time.Time) *HyperParameterTrainingJobSummary { + s.CreationTime = &v return s } -// Describes the location of the channel data. -type DataSource struct { - _ struct{} `type:"structure"` +// SetFailureReason sets the FailureReason field's value. +func (s *HyperParameterTrainingJobSummary) SetFailureReason(v string) *HyperParameterTrainingJobSummary { + s.FailureReason = &v + return s +} - // The S3 location of the data source that is associated with a channel. - // - // S3DataSource is a required field - S3DataSource *S3DataSource `type:"structure" required:"true"` +// SetFinalHyperParameterTuningJobObjectiveMetric sets the FinalHyperParameterTuningJobObjectiveMetric field's value. +func (s *HyperParameterTrainingJobSummary) SetFinalHyperParameterTuningJobObjectiveMetric(v *FinalHyperParameterTuningJobObjectiveMetric) *HyperParameterTrainingJobSummary { + s.FinalHyperParameterTuningJobObjectiveMetric = v + return s } -// String returns the string representation -func (s DataSource) String() string { - return awsutil.Prettify(s) +// SetObjectiveStatus sets the ObjectiveStatus field's value. +func (s *HyperParameterTrainingJobSummary) SetObjectiveStatus(v string) *HyperParameterTrainingJobSummary { + s.ObjectiveStatus = &v + return s } -// GoString returns the string representation -func (s DataSource) GoString() string { - return s.String() +// SetTrainingEndTime sets the TrainingEndTime field's value. +func (s *HyperParameterTrainingJobSummary) SetTrainingEndTime(v time.Time) *HyperParameterTrainingJobSummary { + s.TrainingEndTime = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DataSource) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DataSource"} - if s.S3DataSource == nil { - invalidParams.Add(request.NewErrParamRequired("S3DataSource")) - } - if s.S3DataSource != nil { - if err := s.S3DataSource.Validate(); err != nil { - invalidParams.AddNested("S3DataSource", err.(request.ErrInvalidParams)) - } - } +// SetTrainingJobArn sets the TrainingJobArn field's value. +func (s *HyperParameterTrainingJobSummary) SetTrainingJobArn(v string) *HyperParameterTrainingJobSummary { + s.TrainingJobArn = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetTrainingJobName sets the TrainingJobName field's value. +func (s *HyperParameterTrainingJobSummary) SetTrainingJobName(v string) *HyperParameterTrainingJobSummary { + s.TrainingJobName = &v + return s } -// SetS3DataSource sets the S3DataSource field's value. 
-func (s *DataSource) SetS3DataSource(v *S3DataSource) *DataSource { - s.S3DataSource = v +// SetTrainingJobStatus sets the TrainingJobStatus field's value. +func (s *HyperParameterTrainingJobSummary) SetTrainingJobStatus(v string) *HyperParameterTrainingJobSummary { + s.TrainingJobStatus = &v return s } -type DeleteEndpointConfigInput struct { +// SetTrainingStartTime sets the TrainingStartTime field's value. +func (s *HyperParameterTrainingJobSummary) SetTrainingStartTime(v time.Time) *HyperParameterTrainingJobSummary { + s.TrainingStartTime = &v + return s +} + +// SetTunedHyperParameters sets the TunedHyperParameters field's value. +func (s *HyperParameterTrainingJobSummary) SetTunedHyperParameters(v map[string]*string) *HyperParameterTrainingJobSummary { + s.TunedHyperParameters = v + return s +} + +// SetTuningJobName sets the TuningJobName field's value. +func (s *HyperParameterTrainingJobSummary) SetTuningJobName(v string) *HyperParameterTrainingJobSummary { + s.TuningJobName = &v + return s +} + +// Configures a hyperparameter tuning job. +type HyperParameterTuningJobConfig struct { _ struct{} `type:"structure"` - // The name of the endpoint configuration that you want to delete. + // The HyperParameterTuningJobObjective object that specifies the objective + // metric for this tuning job. + // + // HyperParameterTuningJobObjective is a required field + HyperParameterTuningJobObjective *HyperParameterTuningJobObjective `type:"structure" required:"true"` + + // The ParameterRanges object that specifies the ranges of hyperparameters that + // this tuning job searches. + // + // ParameterRanges is a required field + ParameterRanges *ParameterRanges `type:"structure" required:"true"` + + // The ResourceLimits object that specifies the maximum number of training jobs + // and parallel training jobs for this tuning job. + // + // ResourceLimits is a required field + ResourceLimits *ResourceLimits `type:"structure" required:"true"` + + // Specifies the search strategy for hyperparameters. Currently, the only valid + // value is Bayesian. // - // EndpointConfigName is a required field - EndpointConfigName *string `type:"string" required:"true"` + // Strategy is a required field + Strategy *string `type:"string" required:"true" enum:"HyperParameterTuningJobStrategyType"` } // String returns the string representation -func (s DeleteEndpointConfigInput) String() string { +func (s HyperParameterTuningJobConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteEndpointConfigInput) GoString() string { +func (s HyperParameterTuningJobConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DeleteEndpointConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteEndpointConfigInput"} - if s.EndpointConfigName == nil { - invalidParams.Add(request.NewErrParamRequired("EndpointConfigName")) +func (s *HyperParameterTuningJobConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HyperParameterTuningJobConfig"} + if s.HyperParameterTuningJobObjective == nil { + invalidParams.Add(request.NewErrParamRequired("HyperParameterTuningJobObjective")) + } + if s.ParameterRanges == nil { + invalidParams.Add(request.NewErrParamRequired("ParameterRanges")) + } + if s.ResourceLimits == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceLimits")) + } + if s.Strategy == nil { + invalidParams.Add(request.NewErrParamRequired("Strategy")) + } + if s.HyperParameterTuningJobObjective != nil { + if err := s.HyperParameterTuningJobObjective.Validate(); err != nil { + invalidParams.AddNested("HyperParameterTuningJobObjective", err.(request.ErrInvalidParams)) + } + } + if s.ParameterRanges != nil { + if err := s.ParameterRanges.Validate(); err != nil { + invalidParams.AddNested("ParameterRanges", err.(request.ErrInvalidParams)) + } + } + if s.ResourceLimits != nil { + if err := s.ResourceLimits.Validate(); err != nil { + invalidParams.AddNested("ResourceLimits", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -4585,50 +8730,69 @@ func (s *DeleteEndpointConfigInput) Validate() error { return nil } -// SetEndpointConfigName sets the EndpointConfigName field's value. -func (s *DeleteEndpointConfigInput) SetEndpointConfigName(v string) *DeleteEndpointConfigInput { - s.EndpointConfigName = &v +// SetHyperParameterTuningJobObjective sets the HyperParameterTuningJobObjective field's value. +func (s *HyperParameterTuningJobConfig) SetHyperParameterTuningJobObjective(v *HyperParameterTuningJobObjective) *HyperParameterTuningJobConfig { + s.HyperParameterTuningJobObjective = v return s } -type DeleteEndpointConfigOutput struct { - _ struct{} `type:"structure"` +// SetParameterRanges sets the ParameterRanges field's value. +func (s *HyperParameterTuningJobConfig) SetParameterRanges(v *ParameterRanges) *HyperParameterTuningJobConfig { + s.ParameterRanges = v + return s } -// String returns the string representation -func (s DeleteEndpointConfigOutput) String() string { - return awsutil.Prettify(s) +// SetResourceLimits sets the ResourceLimits field's value. +func (s *HyperParameterTuningJobConfig) SetResourceLimits(v *ResourceLimits) *HyperParameterTuningJobConfig { + s.ResourceLimits = v + return s } -// GoString returns the string representation -func (s DeleteEndpointConfigOutput) GoString() string { - return s.String() +// SetStrategy sets the Strategy field's value. +func (s *HyperParameterTuningJobConfig) SetStrategy(v string) *HyperParameterTuningJobConfig { + s.Strategy = &v + return s } -type DeleteEndpointInput struct { +// Defines the objective metric for a hyperparameter tuning job. Hyperparameter +// tuning uses the value of this metric to evaluate the training jobs it launches, +// and returns the training job that results in either the highest or lowest +// value for this metric, depending on the value you specify for the Type parameter. +type HyperParameterTuningJobObjective struct { _ struct{} `type:"structure"` - // The name of the endpoint that you want to delete. + // The name of the metric to use for the objective metric. 
// - // EndpointName is a required field - EndpointName *string `type:"string" required:"true"` + // MetricName is a required field + MetricName *string `min:"1" type:"string" required:"true"` + + // Whether to minimize or maximize the objective metric. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"HyperParameterTuningJobObjectiveType"` } // String returns the string representation -func (s DeleteEndpointInput) String() string { +func (s HyperParameterTuningJobObjective) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteEndpointInput) GoString() string { +func (s HyperParameterTuningJobObjective) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteEndpointInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteEndpointInput"} - if s.EndpointName == nil { - invalidParams.Add(request.NewErrParamRequired("EndpointName")) +func (s *HyperParameterTuningJobObjective) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HyperParameterTuningJobObjective"} + if s.MetricName == nil { + invalidParams.Add(request.NewErrParamRequired("MetricName")) + } + if s.MetricName != nil && len(*s.MetricName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MetricName", 1)) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) } if invalidParams.Len() > 0 { @@ -4637,140 +8801,223 @@ func (s *DeleteEndpointInput) Validate() error { return nil } -// SetEndpointName sets the EndpointName field's value. -func (s *DeleteEndpointInput) SetEndpointName(v string) *DeleteEndpointInput { - s.EndpointName = &v +// SetMetricName sets the MetricName field's value. +func (s *HyperParameterTuningJobObjective) SetMetricName(v string) *HyperParameterTuningJobObjective { + s.MetricName = &v return s } -type DeleteEndpointOutput struct { - _ struct{} `type:"structure"` +// SetType sets the Type field's value. +func (s *HyperParameterTuningJobObjective) SetType(v string) *HyperParameterTuningJobObjective { + s.Type = &v + return s } -// String returns the string representation -func (s DeleteEndpointOutput) String() string { - return awsutil.Prettify(s) -} +// Provides summary information about a hyperparameter tuning job. +type HyperParameterTuningJobSummary struct { + _ struct{} `type:"structure"` -// GoString returns the string representation -func (s DeleteEndpointOutput) GoString() string { - return s.String() -} + // The date and time that the tuning job was created. + // + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` -type DeleteModelInput struct { - _ struct{} `type:"structure"` + // The date and time that the tuning job ended. + HyperParameterTuningEndTime *time.Time `type:"timestamp"` - // The name of the model to delete. + // The Amazon Resource Name (ARN) of the tuning job. // - // ModelName is a required field - ModelName *string `type:"string" required:"true"` + // HyperParameterTuningJobArn is a required field + HyperParameterTuningJobArn *string `type:"string" required:"true"` + + // The name of the tuning job. + // + // HyperParameterTuningJobName is a required field + HyperParameterTuningJobName *string `min:"1" type:"string" required:"true"` + + // The status of the tuning job. 
+ // + // HyperParameterTuningJobStatus is a required field + HyperParameterTuningJobStatus *string `type:"string" required:"true" enum:"HyperParameterTuningJobStatus"` + + // The date and time that the tuning job was modified. + LastModifiedTime *time.Time `type:"timestamp"` + + // The ObjectiveStatusCounters object that specifies the numbers of training + // jobs, categorized by objective metric status, that this tuning job launched. + // + // ObjectiveStatusCounters is a required field + ObjectiveStatusCounters *ObjectiveStatusCounters `type:"structure" required:"true"` + + // The ResourceLimits object that specifies the maximum number of training jobs + // and parallel training jobs allowed for this tuning job. + ResourceLimits *ResourceLimits `type:"structure"` + + // Specifies the search strategy hyperparameter tuning uses to choose which + // hyperparameters to use for each iteration. Currently, the only valid value + // is Bayesian. + // + // Strategy is a required field + Strategy *string `type:"string" required:"true" enum:"HyperParameterTuningJobStrategyType"` + + // The TrainingJobStatusCounters object that specifies the numbers of training + // jobs, categorized by status, that this tuning job launched. + // + // TrainingJobStatusCounters is a required field + TrainingJobStatusCounters *TrainingJobStatusCounters `type:"structure" required:"true"` } // String returns the string representation -func (s DeleteModelInput) String() string { +func (s HyperParameterTuningJobSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteModelInput) GoString() string { +func (s HyperParameterTuningJobSummary) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteModelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteModelInput"} - if s.ModelName == nil { - invalidParams.Add(request.NewErrParamRequired("ModelName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCreationTime sets the CreationTime field's value. +func (s *HyperParameterTuningJobSummary) SetCreationTime(v time.Time) *HyperParameterTuningJobSummary { + s.CreationTime = &v + return s } -// SetModelName sets the ModelName field's value. -func (s *DeleteModelInput) SetModelName(v string) *DeleteModelInput { - s.ModelName = &v +// SetHyperParameterTuningEndTime sets the HyperParameterTuningEndTime field's value. +func (s *HyperParameterTuningJobSummary) SetHyperParameterTuningEndTime(v time.Time) *HyperParameterTuningJobSummary { + s.HyperParameterTuningEndTime = &v return s } -type DeleteModelOutput struct { - _ struct{} `type:"structure"` +// SetHyperParameterTuningJobArn sets the HyperParameterTuningJobArn field's value. +func (s *HyperParameterTuningJobSummary) SetHyperParameterTuningJobArn(v string) *HyperParameterTuningJobSummary { + s.HyperParameterTuningJobArn = &v + return s } -// String returns the string representation -func (s DeleteModelOutput) String() string { - return awsutil.Prettify(s) +// SetHyperParameterTuningJobName sets the HyperParameterTuningJobName field's value. 
+func (s *HyperParameterTuningJobSummary) SetHyperParameterTuningJobName(v string) *HyperParameterTuningJobSummary { + s.HyperParameterTuningJobName = &v + return s } -// GoString returns the string representation -func (s DeleteModelOutput) GoString() string { - return s.String() +// SetHyperParameterTuningJobStatus sets the HyperParameterTuningJobStatus field's value. +func (s *HyperParameterTuningJobSummary) SetHyperParameterTuningJobStatus(v string) *HyperParameterTuningJobSummary { + s.HyperParameterTuningJobStatus = &v + return s } -type DeleteNotebookInstanceInput struct { - _ struct{} `type:"structure"` - - // The name of the Amazon SageMaker notebook instance to delete. - // - // NotebookInstanceName is a required field - NotebookInstanceName *string `type:"string" required:"true"` +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *HyperParameterTuningJobSummary) SetLastModifiedTime(v time.Time) *HyperParameterTuningJobSummary { + s.LastModifiedTime = &v + return s } -// String returns the string representation -func (s DeleteNotebookInstanceInput) String() string { - return awsutil.Prettify(s) +// SetObjectiveStatusCounters sets the ObjectiveStatusCounters field's value. +func (s *HyperParameterTuningJobSummary) SetObjectiveStatusCounters(v *ObjectiveStatusCounters) *HyperParameterTuningJobSummary { + s.ObjectiveStatusCounters = v + return s } -// GoString returns the string representation -func (s DeleteNotebookInstanceInput) GoString() string { - return s.String() +// SetResourceLimits sets the ResourceLimits field's value. +func (s *HyperParameterTuningJobSummary) SetResourceLimits(v *ResourceLimits) *HyperParameterTuningJobSummary { + s.ResourceLimits = v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteNotebookInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteNotebookInstanceInput"} - if s.NotebookInstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetStrategy sets the Strategy field's value. +func (s *HyperParameterTuningJobSummary) SetStrategy(v string) *HyperParameterTuningJobSummary { + s.Strategy = &v + return s } -// SetNotebookInstanceName sets the NotebookInstanceName field's value. -func (s *DeleteNotebookInstanceInput) SetNotebookInstanceName(v string) *DeleteNotebookInstanceInput { - s.NotebookInstanceName = &v +// SetTrainingJobStatusCounters sets the TrainingJobStatusCounters field's value. +func (s *HyperParameterTuningJobSummary) SetTrainingJobStatusCounters(v *TrainingJobStatusCounters) *HyperParameterTuningJobSummary { + s.TrainingJobStatusCounters = v return s } -type DeleteNotebookInstanceLifecycleConfigInput struct { +// Specifies the configuration for a hyperparameter tuning job that uses one +// or more previous hyperparameter tuning jobs as a starting point. The results +// of previous tuning jobs are used to inform which combinations of hyperparameters +// to search over in the new tuning job. +// +// All training jobs launched by the new hyperparameter tuning job are evaluated +// by using the objective metric, and the training job that performs the best +// is compared to the best training jobs from the parent tuning jobs. From these, +// the training job that performs the best as measured by the objective metric +// is returned as the overall best training job. 
+//
+// All training jobs launched by parent hyperparameter tuning jobs and the new
+// hyperparameter tuning jobs count against the limit of training jobs for the
+// tuning job.
+type HyperParameterTuningJobWarmStartConfig struct {
_ struct{} `type:"structure"`
- // The name of the lifecycle configuration to delete.
+ // An array of hyperparameter tuning jobs that are used as the starting point
+ // for the new hyperparameter tuning job. For more information about warm starting
+ // a hyperparameter tuning job, see Using a Previous Hyperparameter Tuning Job
+ // as a Starting Point (http://docs.aws.amazon.com/automatic-model-tuning-incremental).
//
- // NotebookInstanceLifecycleConfigName is a required field
- NotebookInstanceLifecycleConfigName *string `type:"string" required:"true"`
+ // ParentHyperParameterTuningJobs is a required field
+ ParentHyperParameterTuningJobs []*ParentHyperParameterTuningJob `min:"1" type:"list" required:"true"`
+
+ // Specifies one of the following:
+ //
+ // IDENTICAL_DATA_AND_ALGORITHM: The new hyperparameter tuning job uses the same
+ // input data and training image as the parent tuning jobs. You can change the
+ // hyperparameter ranges to search and the maximum number of training jobs that
+ // the hyperparameter tuning job launches. You cannot use a new version of the
+ // training algorithm, unless the changes in the new version do not affect the
+ // algorithm itself. For example, changes that improve logging or adding support
+ // for a different data format are allowed. The objective metric for the new
+ // tuning job must be the same as for all parent jobs.
+ //
+ // TRANSFER_LEARNING: The new hyperparameter tuning job can include input data,
+ // hyperparameter ranges, maximum number of concurrent training jobs, and maximum
+ // number of training jobs that are different than those of its parent hyperparameter
+ // tuning jobs. The training image can also be a different version from the version
+ // used in the parent hyperparameter tuning job. You can also change hyperparameters
+ // from tunable to static, and from static to tunable, but the total number
+ // of static plus tunable hyperparameters must remain the same as it is in all
+ // parent jobs. The objective metric for the new tuning job must be the same
+ // as for all parent jobs.
+ //
+ // WarmStartType is a required field
+ WarmStartType *string `type:"string" required:"true" enum:"HyperParameterTuningJobWarmStartType"`
}
// String returns the string representation
-func (s DeleteNotebookInstanceLifecycleConfigInput) String() string {
+func (s HyperParameterTuningJobWarmStartConfig) String() string {
return awsutil.Prettify(s)
}
// GoString returns the string representation
-func (s DeleteNotebookInstanceLifecycleConfigInput) GoString() string {
+func (s HyperParameterTuningJobWarmStartConfig) GoString() string {
return s.String()
}
// Validate inspects the fields of the type to determine if they are valid.
-func (s *DeleteNotebookInstanceLifecycleConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteNotebookInstanceLifecycleConfigInput"} - if s.NotebookInstanceLifecycleConfigName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceLifecycleConfigName")) +func (s *HyperParameterTuningJobWarmStartConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "HyperParameterTuningJobWarmStartConfig"} + if s.ParentHyperParameterTuningJobs == nil { + invalidParams.Add(request.NewErrParamRequired("ParentHyperParameterTuningJobs")) + } + if s.ParentHyperParameterTuningJobs != nil && len(s.ParentHyperParameterTuningJobs) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ParentHyperParameterTuningJobs", 1)) + } + if s.WarmStartType == nil { + invalidParams.Add(request.NewErrParamRequired("WarmStartType")) + } + if s.ParentHyperParameterTuningJobs != nil { + for i, v := range s.ParentHyperParameterTuningJobs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParentHyperParameterTuningJobs", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -4779,75 +9026,60 @@ func (s *DeleteNotebookInstanceLifecycleConfigInput) Validate() error { return nil } -// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. -func (s *DeleteNotebookInstanceLifecycleConfigInput) SetNotebookInstanceLifecycleConfigName(v string) *DeleteNotebookInstanceLifecycleConfigInput { - s.NotebookInstanceLifecycleConfigName = &v +// SetParentHyperParameterTuningJobs sets the ParentHyperParameterTuningJobs field's value. +func (s *HyperParameterTuningJobWarmStartConfig) SetParentHyperParameterTuningJobs(v []*ParentHyperParameterTuningJob) *HyperParameterTuningJobWarmStartConfig { + s.ParentHyperParameterTuningJobs = v return s } -type DeleteNotebookInstanceLifecycleConfigOutput struct { - _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteNotebookInstanceLifecycleConfigOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DeleteNotebookInstanceLifecycleConfigOutput) GoString() string { - return s.String() +// SetWarmStartType sets the WarmStartType field's value. +func (s *HyperParameterTuningJobWarmStartConfig) SetWarmStartType(v string) *HyperParameterTuningJobWarmStartConfig { + s.WarmStartType = &v + return s } -type DeleteNotebookInstanceOutput struct { +// For a hyperparameter of the integer type, specifies the range that a hyperparameter +// tuning job searches. +type IntegerParameterRange struct { _ struct{} `type:"structure"` -} - -// String returns the string representation -func (s DeleteNotebookInstanceOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DeleteNotebookInstanceOutput) GoString() string { - return s.String() -} -type DeleteTagsInput struct { - _ struct{} `type:"structure"` + // The maximum value of the hyperparameter to search. + // + // MaxValue is a required field + MaxValue *string `type:"string" required:"true"` - // The Amazon Resource Name (ARN) of the resource whose tags you want to delete. + // The minimum value of the hyperparameter to search. 
// - // ResourceArn is a required field - ResourceArn *string `type:"string" required:"true"` + // MinValue is a required field + MinValue *string `type:"string" required:"true"` - // An array or one or more tag keys to delete. + // The name of the hyperparameter to search. // - // TagKeys is a required field - TagKeys []*string `min:"1" type:"list" required:"true"` + // Name is a required field + Name *string `type:"string" required:"true"` } // String returns the string representation -func (s DeleteTagsInput) String() string { +func (s IntegerParameterRange) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteTagsInput) GoString() string { +func (s IntegerParameterRange) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteTagsInput"} - if s.ResourceArn == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceArn")) +func (s *IntegerParameterRange) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "IntegerParameterRange"} + if s.MaxValue == nil { + invalidParams.Add(request.NewErrParamRequired("MaxValue")) } - if s.TagKeys == nil { - invalidParams.Add(request.NewErrParamRequired("TagKeys")) + if s.MinValue == nil { + invalidParams.Add(request.NewErrParamRequired("MinValue")) } - if s.TagKeys != nil && len(s.TagKeys) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TagKeys", 1)) + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) } if invalidParams.Len() > 0 { @@ -4856,56 +9088,69 @@ func (s *DeleteTagsInput) Validate() error { return nil } -// SetResourceArn sets the ResourceArn field's value. -func (s *DeleteTagsInput) SetResourceArn(v string) *DeleteTagsInput { - s.ResourceArn = &v +// SetMaxValue sets the MaxValue field's value. +func (s *IntegerParameterRange) SetMaxValue(v string) *IntegerParameterRange { + s.MaxValue = &v return s } -// SetTagKeys sets the TagKeys field's value. -func (s *DeleteTagsInput) SetTagKeys(v []*string) *DeleteTagsInput { - s.TagKeys = v +// SetMinValue sets the MinValue field's value. +func (s *IntegerParameterRange) SetMinValue(v string) *IntegerParameterRange { + s.MinValue = &v return s } -type DeleteTagsOutput struct { - _ struct{} `type:"structure"` +// SetName sets the Name field's value. +func (s *IntegerParameterRange) SetName(v string) *IntegerParameterRange { + s.Name = &v + return s } -// String returns the string representation -func (s DeleteTagsOutput) String() string { - return awsutil.Prettify(s) -} +type ListEndpointConfigsInput struct { + _ struct{} `type:"structure"` -// GoString returns the string representation -func (s DeleteTagsOutput) GoString() string { - return s.String() -} + // A filter that returns only endpoint configurations created after the specified + // time (timestamp). + CreationTimeAfter *time.Time `type:"timestamp"` -type DescribeEndpointConfigInput struct { - _ struct{} `type:"structure"` + // A filter that returns only endpoint configurations created before the specified + // time (timestamp). + CreationTimeBefore *time.Time `type:"timestamp"` - // The name of the endpoint configuration. - // - // EndpointConfigName is a required field - EndpointConfigName *string `type:"string" required:"true"` + // The maximum number of training jobs to return in the response. 
+ MaxResults *int64 `min:"1" type:"integer"` + + // A string in the endpoint configuration name. This filter returns only endpoint + // configurations whose name contains the specified string. + NameContains *string `type:"string"` + + // If the result of the previous ListEndpointConfig request was truncated, the + // response includes a NextToken. To retrieve the next set of endpoint configurations, + // use the token in the next request. + NextToken *string `type:"string"` + + // The field to sort results by. The default is CreationTime. + SortBy *string `type:"string" enum:"EndpointConfigSortKey"` + + // The sort order for results. The default is Ascending. + SortOrder *string `type:"string" enum:"OrderKey"` } // String returns the string representation -func (s DescribeEndpointConfigInput) String() string { +func (s ListEndpointConfigsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeEndpointConfigInput) GoString() string { +func (s ListEndpointConfigsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeEndpointConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeEndpointConfigInput"} - if s.EndpointConfigName == nil { - invalidParams.Add(request.NewErrParamRequired("EndpointConfigName")) +func (s *ListEndpointConfigsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListEndpointConfigsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -4914,657 +9159,746 @@ func (s *DescribeEndpointConfigInput) Validate() error { return nil } -// SetEndpointConfigName sets the EndpointConfigName field's value. -func (s *DescribeEndpointConfigInput) SetEndpointConfigName(v string) *DescribeEndpointConfigInput { - s.EndpointConfigName = &v +// SetCreationTimeAfter sets the CreationTimeAfter field's value. +func (s *ListEndpointConfigsInput) SetCreationTimeAfter(v time.Time) *ListEndpointConfigsInput { + s.CreationTimeAfter = &v return s } -type DescribeEndpointConfigOutput struct { - _ struct{} `type:"structure"` - - // A timestamp that shows when the endpoint configuration was created. - // - // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` - - // The Amazon Resource Name (ARN) of the endpoint configuration. - // - // EndpointConfigArn is a required field - EndpointConfigArn *string `min:"20" type:"string" required:"true"` - - // Name of the Amazon SageMaker endpoint configuration. - // - // EndpointConfigName is a required field - EndpointConfigName *string `type:"string" required:"true"` - - // AWS KMS key ID Amazon SageMaker uses to encrypt data when storing it on the - // ML storage volume attached to the instance. - KmsKeyId *string `type:"string"` - - // An array of ProductionVariant objects, one for each model that you want to - // host at this endpoint. 
- // - // ProductionVariants is a required field - ProductionVariants []*ProductionVariant `min:"1" type:"list" required:"true"` -} - -// String returns the string representation -func (s DescribeEndpointConfigOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DescribeEndpointConfigOutput) GoString() string { - return s.String() +// SetCreationTimeBefore sets the CreationTimeBefore field's value. +func (s *ListEndpointConfigsInput) SetCreationTimeBefore(v time.Time) *ListEndpointConfigsInput { + s.CreationTimeBefore = &v + return s } -// SetCreationTime sets the CreationTime field's value. -func (s *DescribeEndpointConfigOutput) SetCreationTime(v time.Time) *DescribeEndpointConfigOutput { - s.CreationTime = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListEndpointConfigsInput) SetMaxResults(v int64) *ListEndpointConfigsInput { + s.MaxResults = &v return s } -// SetEndpointConfigArn sets the EndpointConfigArn field's value. -func (s *DescribeEndpointConfigOutput) SetEndpointConfigArn(v string) *DescribeEndpointConfigOutput { - s.EndpointConfigArn = &v +// SetNameContains sets the NameContains field's value. +func (s *ListEndpointConfigsInput) SetNameContains(v string) *ListEndpointConfigsInput { + s.NameContains = &v return s } -// SetEndpointConfigName sets the EndpointConfigName field's value. -func (s *DescribeEndpointConfigOutput) SetEndpointConfigName(v string) *DescribeEndpointConfigOutput { - s.EndpointConfigName = &v +// SetNextToken sets the NextToken field's value. +func (s *ListEndpointConfigsInput) SetNextToken(v string) *ListEndpointConfigsInput { + s.NextToken = &v return s } -// SetKmsKeyId sets the KmsKeyId field's value. -func (s *DescribeEndpointConfigOutput) SetKmsKeyId(v string) *DescribeEndpointConfigOutput { - s.KmsKeyId = &v +// SetSortBy sets the SortBy field's value. +func (s *ListEndpointConfigsInput) SetSortBy(v string) *ListEndpointConfigsInput { + s.SortBy = &v return s } -// SetProductionVariants sets the ProductionVariants field's value. -func (s *DescribeEndpointConfigOutput) SetProductionVariants(v []*ProductionVariant) *DescribeEndpointConfigOutput { - s.ProductionVariants = v +// SetSortOrder sets the SortOrder field's value. +func (s *ListEndpointConfigsInput) SetSortOrder(v string) *ListEndpointConfigsInput { + s.SortOrder = &v return s } -type DescribeEndpointInput struct { +type ListEndpointConfigsOutput struct { _ struct{} `type:"structure"` - // The name of the endpoint. + // An array of endpoint configurations. // - // EndpointName is a required field - EndpointName *string `type:"string" required:"true"` + // EndpointConfigs is a required field + EndpointConfigs []*EndpointConfigSummary `type:"list" required:"true"` + + // If the response is truncated, Amazon SageMaker returns this token. To retrieve + // the next set of endpoint configurations, use it in the subsequent request + NextToken *string `type:"string"` } // String returns the string representation -func (s DescribeEndpointInput) String() string { +func (s ListEndpointConfigsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeEndpointInput) GoString() string { +func (s ListEndpointConfigsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. 
-func (s *DescribeEndpointInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeEndpointInput"} - if s.EndpointName == nil { - invalidParams.Add(request.NewErrParamRequired("EndpointName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetEndpointConfigs sets the EndpointConfigs field's value. +func (s *ListEndpointConfigsOutput) SetEndpointConfigs(v []*EndpointConfigSummary) *ListEndpointConfigsOutput { + s.EndpointConfigs = v + return s } -// SetEndpointName sets the EndpointName field's value. -func (s *DescribeEndpointInput) SetEndpointName(v string) *DescribeEndpointInput { - s.EndpointName = &v +// SetNextToken sets the NextToken field's value. +func (s *ListEndpointConfigsOutput) SetNextToken(v string) *ListEndpointConfigsOutput { + s.NextToken = &v return s } -type DescribeEndpointOutput struct { +type ListEndpointsInput struct { _ struct{} `type:"structure"` - // A timestamp that shows when the endpoint was created. - // - // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + // A filter that returns only endpoints that were created after the specified + // time (timestamp). + CreationTimeAfter *time.Time `type:"timestamp"` - // The Amazon Resource Name (ARN) of the endpoint. - // - // EndpointArn is a required field - EndpointArn *string `min:"20" type:"string" required:"true"` + // A filter that returns only endpoints that were created before the specified + // time (timestamp). + CreationTimeBefore *time.Time `type:"timestamp"` - // The name of the endpoint configuration associated with this endpoint. - // - // EndpointConfigName is a required field - EndpointConfigName *string `type:"string" required:"true"` + // A filter that returns only endpoints that were modified after the specified + // timestamp. + LastModifiedTimeAfter *time.Time `type:"timestamp"` - // Name of the endpoint. - // - // EndpointName is a required field - EndpointName *string `type:"string" required:"true"` + // A filter that returns only endpoints that were modified before the specified + // timestamp. + LastModifiedTimeBefore *time.Time `type:"timestamp"` - // The status of the endpoint. - // - // EndpointStatus is a required field - EndpointStatus *string `type:"string" required:"true" enum:"EndpointStatus"` + // The maximum number of endpoints to return in the response. + MaxResults *int64 `min:"1" type:"integer"` - // If the status of the endpoint is Failed, the reason why it failed. - FailureReason *string `type:"string"` + // A string in endpoint names. This filter returns only endpoints whose name + // contains the specified string. + NameContains *string `type:"string"` - // A timestamp that shows when the endpoint was last modified. - // - // LastModifiedTime is a required field - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + // If the result of a ListEndpoints request was truncated, the response includes + // a NextToken. To retrieve the next set of endpoints, use the token in the + // next request. + NextToken *string `type:"string"` - // An array of ProductionVariant objects, one for each model hosted behind this - // endpoint. - ProductionVariants []*ProductionVariantSummary `min:"1" type:"list"` + // Sorts the list of results. The default is CreationTime. + SortBy *string `type:"string" enum:"EndpointSortKey"` + + // The sort order for results. The default is Ascending. 
+ SortOrder *string `type:"string" enum:"OrderKey"` + + // A filter that returns only endpoints with the specified status. + StatusEquals *string `type:"string" enum:"EndpointStatus"` } // String returns the string representation -func (s DescribeEndpointOutput) String() string { +func (s ListEndpointsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeEndpointOutput) GoString() string { +func (s ListEndpointsInput) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. -func (s *DescribeEndpointOutput) SetCreationTime(v time.Time) *DescribeEndpointOutput { - s.CreationTime = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListEndpointsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListEndpointsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCreationTimeAfter sets the CreationTimeAfter field's value. +func (s *ListEndpointsInput) SetCreationTimeAfter(v time.Time) *ListEndpointsInput { + s.CreationTimeAfter = &v return s } -// SetEndpointArn sets the EndpointArn field's value. -func (s *DescribeEndpointOutput) SetEndpointArn(v string) *DescribeEndpointOutput { - s.EndpointArn = &v +// SetCreationTimeBefore sets the CreationTimeBefore field's value. +func (s *ListEndpointsInput) SetCreationTimeBefore(v time.Time) *ListEndpointsInput { + s.CreationTimeBefore = &v return s } -// SetEndpointConfigName sets the EndpointConfigName field's value. -func (s *DescribeEndpointOutput) SetEndpointConfigName(v string) *DescribeEndpointOutput { - s.EndpointConfigName = &v +// SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. +func (s *ListEndpointsInput) SetLastModifiedTimeAfter(v time.Time) *ListEndpointsInput { + s.LastModifiedTimeAfter = &v return s } -// SetEndpointName sets the EndpointName field's value. -func (s *DescribeEndpointOutput) SetEndpointName(v string) *DescribeEndpointOutput { - s.EndpointName = &v +// SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. +func (s *ListEndpointsInput) SetLastModifiedTimeBefore(v time.Time) *ListEndpointsInput { + s.LastModifiedTimeBefore = &v return s } -// SetEndpointStatus sets the EndpointStatus field's value. -func (s *DescribeEndpointOutput) SetEndpointStatus(v string) *DescribeEndpointOutput { - s.EndpointStatus = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListEndpointsInput) SetMaxResults(v int64) *ListEndpointsInput { + s.MaxResults = &v return s } -// SetFailureReason sets the FailureReason field's value. -func (s *DescribeEndpointOutput) SetFailureReason(v string) *DescribeEndpointOutput { - s.FailureReason = &v +// SetNameContains sets the NameContains field's value. +func (s *ListEndpointsInput) SetNameContains(v string) *ListEndpointsInput { + s.NameContains = &v return s } -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *DescribeEndpointOutput) SetLastModifiedTime(v time.Time) *DescribeEndpointOutput { - s.LastModifiedTime = &v +// SetNextToken sets the NextToken field's value. +func (s *ListEndpointsInput) SetNextToken(v string) *ListEndpointsInput { + s.NextToken = &v return s } -// SetProductionVariants sets the ProductionVariants field's value. 
-func (s *DescribeEndpointOutput) SetProductionVariants(v []*ProductionVariantSummary) *DescribeEndpointOutput { - s.ProductionVariants = v +// SetSortBy sets the SortBy field's value. +func (s *ListEndpointsInput) SetSortBy(v string) *ListEndpointsInput { + s.SortBy = &v + return s +} + +// SetSortOrder sets the SortOrder field's value. +func (s *ListEndpointsInput) SetSortOrder(v string) *ListEndpointsInput { + s.SortOrder = &v + return s +} + +// SetStatusEquals sets the StatusEquals field's value. +func (s *ListEndpointsInput) SetStatusEquals(v string) *ListEndpointsInput { + s.StatusEquals = &v return s } -type DescribeModelInput struct { +type ListEndpointsOutput struct { _ struct{} `type:"structure"` - // The name of the model. + // An array or endpoint objects. // - // ModelName is a required field - ModelName *string `type:"string" required:"true"` + // Endpoints is a required field + Endpoints []*EndpointSummary `type:"list" required:"true"` + + // If the response is truncated, Amazon SageMaker returns this token. To retrieve + // the next set of training jobs, use it in the subsequent request. + NextToken *string `type:"string"` } // String returns the string representation -func (s DescribeModelInput) String() string { +func (s ListEndpointsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeModelInput) GoString() string { +func (s ListEndpointsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeModelInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeModelInput"} - if s.ModelName == nil { - invalidParams.Add(request.NewErrParamRequired("ModelName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetEndpoints sets the Endpoints field's value. +func (s *ListEndpointsOutput) SetEndpoints(v []*EndpointSummary) *ListEndpointsOutput { + s.Endpoints = v + return s } -// SetModelName sets the ModelName field's value. -func (s *DescribeModelInput) SetModelName(v string) *DescribeModelInput { - s.ModelName = &v +// SetNextToken sets the NextToken field's value. +func (s *ListEndpointsOutput) SetNextToken(v string) *ListEndpointsOutput { + s.NextToken = &v return s } -type DescribeModelOutput struct { +type ListHyperParameterTuningJobsInput struct { _ struct{} `type:"structure"` - // A timestamp that shows when the model was created. - // - // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + // A filter that returns only tuning jobs that were created after the specified + // time. + CreationTimeAfter *time.Time `type:"timestamp"` - // The Amazon Resource Name (ARN) of the IAM role that you specified for the - // model. - // - // ExecutionRoleArn is a required field - ExecutionRoleArn *string `min:"20" type:"string" required:"true"` + // A filter that returns only tuning jobs that were created before the specified + // time. + CreationTimeBefore *time.Time `type:"timestamp"` - // The Amazon Resource Name (ARN) of the model. - // - // ModelArn is a required field - ModelArn *string `min:"20" type:"string" required:"true"` + // A filter that returns only tuning jobs that were modified after the specified + // time. + LastModifiedTimeAfter *time.Time `type:"timestamp"` - // Name of the Amazon SageMaker model. 
- // - // ModelName is a required field - ModelName *string `type:"string" required:"true"` + // A filter that returns only tuning jobs that were modified before the specified + // time. + LastModifiedTimeBefore *time.Time `type:"timestamp"` - // The location of the primary inference code, associated artifacts, and custom - // environment map that the inference code uses when it is deployed in production. - // - // PrimaryContainer is a required field - PrimaryContainer *ContainerDefinition `type:"structure" required:"true"` + // The maximum number of tuning jobs to return. The default value is 10. + MaxResults *int64 `min:"1" type:"integer"` + + // A string in the tuning job name. This filter returns only tuning jobs whose + // name contains the specified string. + NameContains *string `type:"string"` + + // If the result of the previous ListHyperParameterTuningJobs request was truncated, + // the response includes a NextToken. To retrieve the next set of tuning jobs, + // use the token in the next request. + NextToken *string `type:"string"` + + // The field to sort results by. The default is Name. + SortBy *string `type:"string" enum:"HyperParameterTuningJobSortByOptions"` + + // The sort order for results. The default is Ascending. + SortOrder *string `type:"string" enum:"SortOrder"` + + // A filter that returns only tuning jobs with the specified status. + StatusEquals *string `type:"string" enum:"HyperParameterTuningJobStatus"` } // String returns the string representation -func (s DescribeModelOutput) String() string { +func (s ListHyperParameterTuningJobsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeModelOutput) GoString() string { +func (s ListHyperParameterTuningJobsInput) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. -func (s *DescribeModelOutput) SetCreationTime(v time.Time) *DescribeModelOutput { - s.CreationTime = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListHyperParameterTuningJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListHyperParameterTuningJobsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetExecutionRoleArn sets the ExecutionRoleArn field's value. -func (s *DescribeModelOutput) SetExecutionRoleArn(v string) *DescribeModelOutput { - s.ExecutionRoleArn = &v +// SetCreationTimeAfter sets the CreationTimeAfter field's value. +func (s *ListHyperParameterTuningJobsInput) SetCreationTimeAfter(v time.Time) *ListHyperParameterTuningJobsInput { + s.CreationTimeAfter = &v return s } -// SetModelArn sets the ModelArn field's value. -func (s *DescribeModelOutput) SetModelArn(v string) *DescribeModelOutput { - s.ModelArn = &v +// SetCreationTimeBefore sets the CreationTimeBefore field's value. +func (s *ListHyperParameterTuningJobsInput) SetCreationTimeBefore(v time.Time) *ListHyperParameterTuningJobsInput { + s.CreationTimeBefore = &v return s } -// SetModelName sets the ModelName field's value. -func (s *DescribeModelOutput) SetModelName(v string) *DescribeModelOutput { - s.ModelName = &v +// SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. 
+func (s *ListHyperParameterTuningJobsInput) SetLastModifiedTimeAfter(v time.Time) *ListHyperParameterTuningJobsInput { + s.LastModifiedTimeAfter = &v return s } -// SetPrimaryContainer sets the PrimaryContainer field's value. -func (s *DescribeModelOutput) SetPrimaryContainer(v *ContainerDefinition) *DescribeModelOutput { - s.PrimaryContainer = v +// SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. +func (s *ListHyperParameterTuningJobsInput) SetLastModifiedTimeBefore(v time.Time) *ListHyperParameterTuningJobsInput { + s.LastModifiedTimeBefore = &v return s } -type DescribeNotebookInstanceInput struct { - _ struct{} `type:"structure"` - - // The name of the notebook instance that you want information about. - // - // NotebookInstanceName is a required field - NotebookInstanceName *string `type:"string" required:"true"` +// SetMaxResults sets the MaxResults field's value. +func (s *ListHyperParameterTuningJobsInput) SetMaxResults(v int64) *ListHyperParameterTuningJobsInput { + s.MaxResults = &v + return s } -// String returns the string representation -func (s DescribeNotebookInstanceInput) String() string { - return awsutil.Prettify(s) +// SetNameContains sets the NameContains field's value. +func (s *ListHyperParameterTuningJobsInput) SetNameContains(v string) *ListHyperParameterTuningJobsInput { + s.NameContains = &v + return s } -// GoString returns the string representation -func (s DescribeNotebookInstanceInput) GoString() string { - return s.String() +// SetNextToken sets the NextToken field's value. +func (s *ListHyperParameterTuningJobsInput) SetNextToken(v string) *ListHyperParameterTuningJobsInput { + s.NextToken = &v + return s } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeNotebookInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeNotebookInstanceInput"} - if s.NotebookInstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) - } +// SetSortBy sets the SortBy field's value. +func (s *ListHyperParameterTuningJobsInput) SetSortBy(v string) *ListHyperParameterTuningJobsInput { + s.SortBy = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetSortOrder sets the SortOrder field's value. +func (s *ListHyperParameterTuningJobsInput) SetSortOrder(v string) *ListHyperParameterTuningJobsInput { + s.SortOrder = &v + return s } -// SetNotebookInstanceName sets the NotebookInstanceName field's value. -func (s *DescribeNotebookInstanceInput) SetNotebookInstanceName(v string) *DescribeNotebookInstanceInput { - s.NotebookInstanceName = &v +// SetStatusEquals sets the StatusEquals field's value. +func (s *ListHyperParameterTuningJobsInput) SetStatusEquals(v string) *ListHyperParameterTuningJobsInput { + s.StatusEquals = &v return s } -type DescribeNotebookInstanceLifecycleConfigInput struct { +type ListHyperParameterTuningJobsOutput struct { _ struct{} `type:"structure"` - // The name of the lifecycle configuration to describe. + // A list of HyperParameterTuningJobSummary objects that describe the tuning + // jobs that the ListHyperParameterTuningJobs request returned. 
// - // NotebookInstanceLifecycleConfigName is a required field - NotebookInstanceLifecycleConfigName *string `type:"string" required:"true"` + // HyperParameterTuningJobSummaries is a required field + HyperParameterTuningJobSummaries []*HyperParameterTuningJobSummary `type:"list" required:"true"` + + // If the result of this ListHyperParameterTuningJobs request was truncated, + // the response includes a NextToken. To retrieve the next set of tuning jobs, + // use the token in the next request. + NextToken *string `type:"string"` } // String returns the string representation -func (s DescribeNotebookInstanceLifecycleConfigInput) String() string { +func (s ListHyperParameterTuningJobsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeNotebookInstanceLifecycleConfigInput) GoString() string { +func (s ListHyperParameterTuningJobsOutput) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeNotebookInstanceLifecycleConfigInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeNotebookInstanceLifecycleConfigInput"} - if s.NotebookInstanceLifecycleConfigName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceLifecycleConfigName")) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetHyperParameterTuningJobSummaries sets the HyperParameterTuningJobSummaries field's value. +func (s *ListHyperParameterTuningJobsOutput) SetHyperParameterTuningJobSummaries(v []*HyperParameterTuningJobSummary) *ListHyperParameterTuningJobsOutput { + s.HyperParameterTuningJobSummaries = v + return s } -// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. -func (s *DescribeNotebookInstanceLifecycleConfigInput) SetNotebookInstanceLifecycleConfigName(v string) *DescribeNotebookInstanceLifecycleConfigInput { - s.NotebookInstanceLifecycleConfigName = &v +// SetNextToken sets the NextToken field's value. +func (s *ListHyperParameterTuningJobsOutput) SetNextToken(v string) *ListHyperParameterTuningJobsOutput { + s.NextToken = &v return s } -type DescribeNotebookInstanceLifecycleConfigOutput struct { +type ListModelsInput struct { _ struct{} `type:"structure"` - // A timestamp that tells when the lifecycle configuration was created. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // A filter that returns only models created after the specified time (timestamp). + CreationTimeAfter *time.Time `type:"timestamp"` - // A timestamp that tells when the lifecycle configuration was last modified. - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // A filter that returns only models created before the specified time (timestamp). + CreationTimeBefore *time.Time `type:"timestamp"` - // The Amazon Resource Name (ARN) of the lifecycle configuration. - NotebookInstanceLifecycleConfigArn *string `type:"string"` + // The maximum number of models to return in the response. + MaxResults *int64 `min:"1" type:"integer"` - // The name of the lifecycle configuration. - NotebookInstanceLifecycleConfigName *string `type:"string"` + // A string in the training job name. This filter returns only models in the + // training job whose name contains the specified string. + NameContains *string `type:"string"` - // The shell script that runs only once, when you create a notebook instance. 
- OnCreate []*NotebookInstanceLifecycleHook `type:"list"` + // If the response to a previous ListModels request was truncated, the response + // includes a NextToken. To retrieve the next set of models, use the token in + // the next request. + NextToken *string `type:"string"` - // The shell script that runs every time you start a notebook instance, including - // when you create the notebook instance. - OnStart []*NotebookInstanceLifecycleHook `type:"list"` + // Sorts the list of results. The default is CreationTime. + SortBy *string `type:"string" enum:"ModelSortKey"` + + // The sort order for results. The default is Ascending. + SortOrder *string `type:"string" enum:"OrderKey"` } // String returns the string representation -func (s DescribeNotebookInstanceLifecycleConfigOutput) String() string { +func (s ListModelsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeNotebookInstanceLifecycleConfigOutput) GoString() string { +func (s ListModelsInput) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. -func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetCreationTime(v time.Time) *DescribeNotebookInstanceLifecycleConfigOutput { - s.CreationTime = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListModelsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListModelsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCreationTimeAfter sets the CreationTimeAfter field's value. +func (s *ListModelsInput) SetCreationTimeAfter(v time.Time) *ListModelsInput { + s.CreationTimeAfter = &v return s } -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetLastModifiedTime(v time.Time) *DescribeNotebookInstanceLifecycleConfigOutput { - s.LastModifiedTime = &v +// SetCreationTimeBefore sets the CreationTimeBefore field's value. +func (s *ListModelsInput) SetCreationTimeBefore(v time.Time) *ListModelsInput { + s.CreationTimeBefore = &v return s } -// SetNotebookInstanceLifecycleConfigArn sets the NotebookInstanceLifecycleConfigArn field's value. -func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetNotebookInstanceLifecycleConfigArn(v string) *DescribeNotebookInstanceLifecycleConfigOutput { - s.NotebookInstanceLifecycleConfigArn = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListModelsInput) SetMaxResults(v int64) *ListModelsInput { + s.MaxResults = &v return s } -// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. -func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetNotebookInstanceLifecycleConfigName(v string) *DescribeNotebookInstanceLifecycleConfigOutput { - s.NotebookInstanceLifecycleConfigName = &v +// SetNameContains sets the NameContains field's value. +func (s *ListModelsInput) SetNameContains(v string) *ListModelsInput { + s.NameContains = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListModelsInput) SetNextToken(v string) *ListModelsInput { + s.NextToken = &v return s } -// SetOnCreate sets the OnCreate field's value. 
-func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetOnCreate(v []*NotebookInstanceLifecycleHook) *DescribeNotebookInstanceLifecycleConfigOutput { - s.OnCreate = v +// SetSortBy sets the SortBy field's value. +func (s *ListModelsInput) SetSortBy(v string) *ListModelsInput { + s.SortBy = &v return s } -// SetOnStart sets the OnStart field's value. -func (s *DescribeNotebookInstanceLifecycleConfigOutput) SetOnStart(v []*NotebookInstanceLifecycleHook) *DescribeNotebookInstanceLifecycleConfigOutput { - s.OnStart = v +// SetSortOrder sets the SortOrder field's value. +func (s *ListModelsInput) SetSortOrder(v string) *ListModelsInput { + s.SortOrder = &v return s } -type DescribeNotebookInstanceOutput struct { +type ListModelsOutput struct { _ struct{} `type:"structure"` - // A timestamp. Use this parameter to return the time when the notebook instance - // was created - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` - - // Describes whether the notebook instance has internet access. + // An array of ModelSummary objects, each of which lists a model. // - // For more information, see appendix-notebook-and-internet-access. - DirectInternetAccess *string `type:"string" enum:"DirectInternetAccess"` + // Models is a required field + Models []*ModelSummary `type:"list" required:"true"` - // If status is failed, the reason it failed. - FailureReason *string `type:"string"` + // If the response is truncated, Amazon SageMaker returns this token. To retrieve + // the next set of models, use it in the subsequent request. + NextToken *string `type:"string"` +} - // The type of ML compute instance running on the notebook instance. - InstanceType *string `type:"string" enum:"InstanceType"` +// String returns the string representation +func (s ListModelsOutput) String() string { + return awsutil.Prettify(s) +} - // AWS KMS key ID Amazon SageMaker uses to encrypt data when storing it on the - // ML storage volume attached to the instance. - KmsKeyId *string `type:"string"` +// GoString returns the string representation +func (s ListModelsOutput) GoString() string { + return s.String() +} - // A timestamp. Use this parameter to retrieve the time when the notebook instance - // was last modified. - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"unix"` +// SetModels sets the Models field's value. +func (s *ListModelsOutput) SetModels(v []*ModelSummary) *ListModelsOutput { + s.Models = v + return s +} - // Network interface IDs that Amazon SageMaker created at the time of creating - // the instance. - NetworkInterfaceId *string `type:"string"` +// SetNextToken sets the NextToken field's value. +func (s *ListModelsOutput) SetNextToken(v string) *ListModelsOutput { + s.NextToken = &v + return s +} - // The Amazon Resource Name (ARN) of the notebook instance. - NotebookInstanceArn *string `type:"string"` +type ListNotebookInstanceLifecycleConfigsInput struct { + _ struct{} `type:"structure"` - // Returns the name of a notebook instance lifecycle configuration. - // - // For information about notebook instance lifestyle configurations, see notebook-lifecycle-config. - NotebookInstanceLifecycleConfigName *string `type:"string"` + // A filter that returns only lifecycle configurations that were created after + // the specified time (timestamp). + CreationTimeAfter *time.Time `type:"timestamp"` - // Name of the Amazon SageMaker notebook instance. 
- NotebookInstanceName *string `type:"string"` + // A filter that returns only lifecycle configurations that were created before + // the specified time (timestamp). + CreationTimeBefore *time.Time `type:"timestamp"` - // The status of the notebook instance. - NotebookInstanceStatus *string `type:"string" enum:"NotebookInstanceStatus"` + // A filter that returns only lifecycle configurations that were modified after + // the specified time (timestamp). + LastModifiedTimeAfter *time.Time `type:"timestamp"` - // Amazon Resource Name (ARN) of the IAM role associated with the instance. - RoleArn *string `min:"20" type:"string"` + // A filter that returns only lifecycle configurations that were modified before + // the specified time (timestamp). + LastModifiedTimeBefore *time.Time `type:"timestamp"` - // The IDs of the VPC security groups. - SecurityGroups []*string `type:"list"` + // The maximum number of lifecycle configurations to return in the response. + MaxResults *int64 `min:"1" type:"integer"` - // The ID of the VPC subnet. - SubnetId *string `type:"string"` + // A string in the lifecycle configuration name. This filter returns only lifecycle + // configurations whose name contains the specified string. + NameContains *string `type:"string"` - // The URL that you use to connect to the Jupyter notebook that is running in - // your notebook instance. - Url *string `type:"string"` + // If the result of a ListNotebookInstanceLifecycleConfigs request was truncated, + // the response includes a NextToken. To get the next set of lifecycle configurations, + // use the token in the next request. + NextToken *string `type:"string"` + + // Sorts the list of results. The default is CreationTime. + SortBy *string `type:"string" enum:"NotebookInstanceLifecycleConfigSortKey"` + + // The sort order for results. + SortOrder *string `type:"string" enum:"NotebookInstanceLifecycleConfigSortOrder"` } // String returns the string representation -func (s DescribeNotebookInstanceOutput) String() string { +func (s ListNotebookInstanceLifecycleConfigsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeNotebookInstanceOutput) GoString() string { +func (s ListNotebookInstanceLifecycleConfigsInput) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. -func (s *DescribeNotebookInstanceOutput) SetCreationTime(v time.Time) *DescribeNotebookInstanceOutput { - s.CreationTime = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListNotebookInstanceLifecycleConfigsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListNotebookInstanceLifecycleConfigsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetDirectInternetAccess sets the DirectInternetAccess field's value. -func (s *DescribeNotebookInstanceOutput) SetDirectInternetAccess(v string) *DescribeNotebookInstanceOutput { - s.DirectInternetAccess = &v +// SetCreationTimeAfter sets the CreationTimeAfter field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetCreationTimeAfter(v time.Time) *ListNotebookInstanceLifecycleConfigsInput { + s.CreationTimeAfter = &v return s } -// SetFailureReason sets the FailureReason field's value. 
-func (s *DescribeNotebookInstanceOutput) SetFailureReason(v string) *DescribeNotebookInstanceOutput { - s.FailureReason = &v +// SetCreationTimeBefore sets the CreationTimeBefore field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetCreationTimeBefore(v time.Time) *ListNotebookInstanceLifecycleConfigsInput { + s.CreationTimeBefore = &v return s } -// SetInstanceType sets the InstanceType field's value. -func (s *DescribeNotebookInstanceOutput) SetInstanceType(v string) *DescribeNotebookInstanceOutput { - s.InstanceType = &v +// SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetLastModifiedTimeAfter(v time.Time) *ListNotebookInstanceLifecycleConfigsInput { + s.LastModifiedTimeAfter = &v return s } -// SetKmsKeyId sets the KmsKeyId field's value. -func (s *DescribeNotebookInstanceOutput) SetKmsKeyId(v string) *DescribeNotebookInstanceOutput { - s.KmsKeyId = &v +// SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetLastModifiedTimeBefore(v time.Time) *ListNotebookInstanceLifecycleConfigsInput { + s.LastModifiedTimeBefore = &v return s } -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *DescribeNotebookInstanceOutput) SetLastModifiedTime(v time.Time) *DescribeNotebookInstanceOutput { - s.LastModifiedTime = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetMaxResults(v int64) *ListNotebookInstanceLifecycleConfigsInput { + s.MaxResults = &v return s } -// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. -func (s *DescribeNotebookInstanceOutput) SetNetworkInterfaceId(v string) *DescribeNotebookInstanceOutput { - s.NetworkInterfaceId = &v +// SetNameContains sets the NameContains field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetNameContains(v string) *ListNotebookInstanceLifecycleConfigsInput { + s.NameContains = &v return s } -// SetNotebookInstanceArn sets the NotebookInstanceArn field's value. -func (s *DescribeNotebookInstanceOutput) SetNotebookInstanceArn(v string) *DescribeNotebookInstanceOutput { - s.NotebookInstanceArn = &v +// SetNextToken sets the NextToken field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetNextToken(v string) *ListNotebookInstanceLifecycleConfigsInput { + s.NextToken = &v return s } -// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. -func (s *DescribeNotebookInstanceOutput) SetNotebookInstanceLifecycleConfigName(v string) *DescribeNotebookInstanceOutput { - s.NotebookInstanceLifecycleConfigName = &v +// SetSortBy sets the SortBy field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetSortBy(v string) *ListNotebookInstanceLifecycleConfigsInput { + s.SortBy = &v return s } -// SetNotebookInstanceName sets the NotebookInstanceName field's value. -func (s *DescribeNotebookInstanceOutput) SetNotebookInstanceName(v string) *DescribeNotebookInstanceOutput { - s.NotebookInstanceName = &v +// SetSortOrder sets the SortOrder field's value. +func (s *ListNotebookInstanceLifecycleConfigsInput) SetSortOrder(v string) *ListNotebookInstanceLifecycleConfigsInput { + s.SortOrder = &v return s } -// SetNotebookInstanceStatus sets the NotebookInstanceStatus field's value. 
-func (s *DescribeNotebookInstanceOutput) SetNotebookInstanceStatus(v string) *DescribeNotebookInstanceOutput { - s.NotebookInstanceStatus = &v - return s +type ListNotebookInstanceLifecycleConfigsOutput struct { + _ struct{} `type:"structure"` + + // If the response is truncated, Amazon SageMaker returns this token. To get + // the next set of lifecycle configurations, use it in the next request. + NextToken *string `type:"string"` + + // An array of NotebookInstanceLifecycleConfiguration objects, each listing + // a lifecycle configuration. + NotebookInstanceLifecycleConfigs []*NotebookInstanceLifecycleConfigSummary `type:"list"` } -// SetRoleArn sets the RoleArn field's value. -func (s *DescribeNotebookInstanceOutput) SetRoleArn(v string) *DescribeNotebookInstanceOutput { - s.RoleArn = &v - return s +// String returns the string representation +func (s ListNotebookInstanceLifecycleConfigsOutput) String() string { + return awsutil.Prettify(s) } -// SetSecurityGroups sets the SecurityGroups field's value. -func (s *DescribeNotebookInstanceOutput) SetSecurityGroups(v []*string) *DescribeNotebookInstanceOutput { - s.SecurityGroups = v - return s +// GoString returns the string representation +func (s ListNotebookInstanceLifecycleConfigsOutput) GoString() string { + return s.String() } -// SetSubnetId sets the SubnetId field's value. -func (s *DescribeNotebookInstanceOutput) SetSubnetId(v string) *DescribeNotebookInstanceOutput { - s.SubnetId = &v +// SetNextToken sets the NextToken field's value. +func (s *ListNotebookInstanceLifecycleConfigsOutput) SetNextToken(v string) *ListNotebookInstanceLifecycleConfigsOutput { + s.NextToken = &v return s } -// SetUrl sets the Url field's value. -func (s *DescribeNotebookInstanceOutput) SetUrl(v string) *DescribeNotebookInstanceOutput { - s.Url = &v +// SetNotebookInstanceLifecycleConfigs sets the NotebookInstanceLifecycleConfigs field's value. +func (s *ListNotebookInstanceLifecycleConfigsOutput) SetNotebookInstanceLifecycleConfigs(v []*NotebookInstanceLifecycleConfigSummary) *ListNotebookInstanceLifecycleConfigsOutput { + s.NotebookInstanceLifecycleConfigs = v return s } -type DescribeTrainingJobInput struct { +type ListNotebookInstancesInput struct { _ struct{} `type:"structure"` - // The name of the training job. + // A filter that returns only notebook instances that were created after the + // specified time (timestamp). + CreationTimeAfter *time.Time `type:"timestamp"` + + // A filter that returns only notebook instances that were created before the + // specified time (timestamp). + CreationTimeBefore *time.Time `type:"timestamp"` + + // A filter that returns only notebook instances that were modified after the + // specified time (timestamp). + LastModifiedTimeAfter *time.Time `type:"timestamp"` + + // A filter that returns only notebook instances that were modified before the + // specified time (timestamp). + LastModifiedTimeBefore *time.Time `type:"timestamp"` + + // The maximum number of notebook instances to return. + MaxResults *int64 `min:"1" type:"integer"` + + // A string in the notebook instances' name. This filter returns only notebook + // instances whose name contains the specified string. + NameContains *string `type:"string"` + + // If the previous call to the ListNotebookInstances is truncated, the response + // includes a NextToken. You can use this token in your subsequent ListNotebookInstances + // request to fetch the next set of notebook instances. 
// - // TrainingJobName is a required field - TrainingJobName *string `min:"1" type:"string" required:"true"` + // You might specify a filter or a sort order in your request. When response + // is truncated, you must use the same values for the filer and sort order in + // the next request. + NextToken *string `type:"string"` + + // A string in the name of a notebook instances lifecycle configuration associated + // with this notebook instance. This filter returns only notebook instances + // associated with a lifecycle configuration with a name that contains the specified + // string. + NotebookInstanceLifecycleConfigNameContains *string `type:"string"` + + // The field to sort results by. The default is Name. + SortBy *string `type:"string" enum:"NotebookInstanceSortKey"` + + // The sort order for results. + SortOrder *string `type:"string" enum:"NotebookInstanceSortOrder"` + + // A filter that returns only notebook instances with the specified status. + StatusEquals *string `type:"string" enum:"NotebookInstanceStatus"` } // String returns the string representation -func (s DescribeTrainingJobInput) String() string { +func (s ListNotebookInstancesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeTrainingJobInput) GoString() string { +func (s ListNotebookInstancesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeTrainingJobInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeTrainingJobInput"} - if s.TrainingJobName == nil { - invalidParams.Add(request.NewErrParamRequired("TrainingJobName")) - } - if s.TrainingJobName != nil && len(*s.TrainingJobName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TrainingJobName", 1)) +func (s *ListNotebookInstancesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListNotebookInstancesInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -5573,255 +9907,250 @@ func (s *DescribeTrainingJobInput) Validate() error { return nil } -// SetTrainingJobName sets the TrainingJobName field's value. -func (s *DescribeTrainingJobInput) SetTrainingJobName(v string) *DescribeTrainingJobInput { - s.TrainingJobName = &v +// SetCreationTimeAfter sets the CreationTimeAfter field's value. +func (s *ListNotebookInstancesInput) SetCreationTimeAfter(v time.Time) *ListNotebookInstancesInput { + s.CreationTimeAfter = &v return s } -type DescribeTrainingJobOutput struct { - _ struct{} `type:"structure"` - - // Information about the algorithm used for training, and algorithm metadata. - // - // AlgorithmSpecification is a required field - AlgorithmSpecification *AlgorithmSpecification `type:"structure" required:"true"` - - // A timestamp that indicates when the training job was created. - // - // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` - - // If the training job failed, the reason it failed. - FailureReason *string `type:"string"` - - // Algorithm-specific parameters. - HyperParameters map[string]*string `type:"map"` - - // An array of Channel objects that describes each data input channel. - // - // InputDataConfig is a required field - InputDataConfig []*Channel `min:"1" type:"list" required:"true"` +// SetCreationTimeBefore sets the CreationTimeBefore field's value. 
+func (s *ListNotebookInstancesInput) SetCreationTimeBefore(v time.Time) *ListNotebookInstancesInput { + s.CreationTimeBefore = &v + return s +} - // A timestamp that indicates when the status of the training job was last modified. - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"unix"` +// SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. +func (s *ListNotebookInstancesInput) SetLastModifiedTimeAfter(v time.Time) *ListNotebookInstancesInput { + s.LastModifiedTimeAfter = &v + return s +} - // Information about the Amazon S3 location that is configured for storing model - // artifacts. - // - // ModelArtifacts is a required field - ModelArtifacts *ModelArtifacts `type:"structure" required:"true"` +// SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. +func (s *ListNotebookInstancesInput) SetLastModifiedTimeBefore(v time.Time) *ListNotebookInstancesInput { + s.LastModifiedTimeBefore = &v + return s +} - // The S3 path where model artifacts that you configured when creating the job - // are stored. Amazon SageMaker creates subfolders for model artifacts. - OutputDataConfig *OutputDataConfig `type:"structure"` +// SetMaxResults sets the MaxResults field's value. +func (s *ListNotebookInstancesInput) SetMaxResults(v int64) *ListNotebookInstancesInput { + s.MaxResults = &v + return s +} - // Resources, including ML compute instances and ML storage volumes, that are - // configured for model training. - // - // ResourceConfig is a required field - ResourceConfig *ResourceConfig `type:"structure" required:"true"` +// SetNameContains sets the NameContains field's value. +func (s *ListNotebookInstancesInput) SetNameContains(v string) *ListNotebookInstancesInput { + s.NameContains = &v + return s +} - // The AWS Identity and Access Management (IAM) role configured for the training - // job. - RoleArn *string `min:"20" type:"string"` +// SetNextToken sets the NextToken field's value. +func (s *ListNotebookInstancesInput) SetNextToken(v string) *ListNotebookInstancesInput { + s.NextToken = &v + return s +} - // Provides granular information about the system state. For more information, - // see TrainingJobStatus. - // - // SecondaryStatus is a required field - SecondaryStatus *string `type:"string" required:"true" enum:"SecondaryStatus"` +// SetNotebookInstanceLifecycleConfigNameContains sets the NotebookInstanceLifecycleConfigNameContains field's value. +func (s *ListNotebookInstancesInput) SetNotebookInstanceLifecycleConfigNameContains(v string) *ListNotebookInstancesInput { + s.NotebookInstanceLifecycleConfigNameContains = &v + return s +} - // The condition under which to stop the training job. - // - // StoppingCondition is a required field - StoppingCondition *StoppingCondition `type:"structure" required:"true"` +// SetSortBy sets the SortBy field's value. +func (s *ListNotebookInstancesInput) SetSortBy(v string) *ListNotebookInstancesInput { + s.SortBy = &v + return s +} - // A timestamp that indicates when model training ended. - TrainingEndTime *time.Time `type:"timestamp" timestampFormat:"unix"` +// SetSortOrder sets the SortOrder field's value. +func (s *ListNotebookInstancesInput) SetSortOrder(v string) *ListNotebookInstancesInput { + s.SortOrder = &v + return s +} - // The Amazon Resource Name (ARN) of the training job. - // - // TrainingJobArn is a required field - TrainingJobArn *string `type:"string" required:"true"` +// SetStatusEquals sets the StatusEquals field's value. 
+func (s *ListNotebookInstancesInput) SetStatusEquals(v string) *ListNotebookInstancesInput { + s.StatusEquals = &v + return s +} - // Name of the model training job. - // - // TrainingJobName is a required field - TrainingJobName *string `min:"1" type:"string" required:"true"` +type ListNotebookInstancesOutput struct { + _ struct{} `type:"structure"` - // The status of the training job. - // - // For the InProgress status, Amazon SageMaker can return these secondary statuses: - // - // * Starting - Preparing for training. - // - // * Downloading - Optional stage for algorithms that support File training - // input mode. It indicates data is being downloaded to ML storage volumes. - // - // * Training - Training is in progress. - // - // * Uploading - Training is complete and model upload is in progress. - // - // For the Stopped training status, Amazon SageMaker can return these secondary - // statuses: - // - // * MaxRuntimeExceeded - Job stopped as a result of maximum allowed runtime - // exceeded. - // - // TrainingJobStatus is a required field - TrainingJobStatus *string `type:"string" required:"true" enum:"TrainingJobStatus"` + // If the response to the previous ListNotebookInstances request was truncated, + // Amazon SageMaker returns this token. To retrieve the next set of notebook + // instances, use the token in the next request. + NextToken *string `type:"string"` - // A timestamp that indicates when training started. - TrainingStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + // An array of NotebookInstanceSummary objects, one for each notebook instance. + NotebookInstances []*NotebookInstanceSummary `type:"list"` } // String returns the string representation -func (s DescribeTrainingJobOutput) String() string { +func (s ListNotebookInstancesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeTrainingJobOutput) GoString() string { +func (s ListNotebookInstancesOutput) GoString() string { return s.String() } -// SetAlgorithmSpecification sets the AlgorithmSpecification field's value. -func (s *DescribeTrainingJobOutput) SetAlgorithmSpecification(v *AlgorithmSpecification) *DescribeTrainingJobOutput { - s.AlgorithmSpecification = v +// SetNextToken sets the NextToken field's value. +func (s *ListNotebookInstancesOutput) SetNextToken(v string) *ListNotebookInstancesOutput { + s.NextToken = &v return s } -// SetCreationTime sets the CreationTime field's value. -func (s *DescribeTrainingJobOutput) SetCreationTime(v time.Time) *DescribeTrainingJobOutput { - s.CreationTime = &v +// SetNotebookInstances sets the NotebookInstances field's value. +func (s *ListNotebookInstancesOutput) SetNotebookInstances(v []*NotebookInstanceSummary) *ListNotebookInstancesOutput { + s.NotebookInstances = v return s } -// SetFailureReason sets the FailureReason field's value. -func (s *DescribeTrainingJobOutput) SetFailureReason(v string) *DescribeTrainingJobOutput { - s.FailureReason = &v - return s -} +type ListTagsInput struct { + _ struct{} `type:"structure"` -// SetHyperParameters sets the HyperParameters field's value. -func (s *DescribeTrainingJobOutput) SetHyperParameters(v map[string]*string) *DescribeTrainingJobOutput { - s.HyperParameters = v - return s -} + // Maximum number of tags to return. + MaxResults *int64 `min:"50" type:"integer"` -// SetInputDataConfig sets the InputDataConfig field's value. 
-func (s *DescribeTrainingJobOutput) SetInputDataConfig(v []*Channel) *DescribeTrainingJobOutput { - s.InputDataConfig = v - return s -} + // If the response to the previous ListTags request is truncated, Amazon SageMaker + // returns this token. To retrieve the next set of tags, use it in the subsequent + // request. + NextToken *string `type:"string"` -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *DescribeTrainingJobOutput) SetLastModifiedTime(v time.Time) *DescribeTrainingJobOutput { - s.LastModifiedTime = &v - return s + // The Amazon Resource Name (ARN) of the resource whose tags you want to retrieve. + // + // ResourceArn is a required field + ResourceArn *string `type:"string" required:"true"` } -// SetModelArtifacts sets the ModelArtifacts field's value. -func (s *DescribeTrainingJobOutput) SetModelArtifacts(v *ModelArtifacts) *DescribeTrainingJobOutput { - s.ModelArtifacts = v - return s +// String returns the string representation +func (s ListTagsInput) String() string { + return awsutil.Prettify(s) } -// SetOutputDataConfig sets the OutputDataConfig field's value. -func (s *DescribeTrainingJobOutput) SetOutputDataConfig(v *OutputDataConfig) *DescribeTrainingJobOutput { - s.OutputDataConfig = v - return s +// GoString returns the string representation +func (s ListTagsInput) GoString() string { + return s.String() } -// SetResourceConfig sets the ResourceConfig field's value. -func (s *DescribeTrainingJobOutput) SetResourceConfig(v *ResourceConfig) *DescribeTrainingJobOutput { - s.ResourceConfig = v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsInput"} + if s.MaxResults != nil && *s.MaxResults < 50 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 50)) + } + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } -// SetRoleArn sets the RoleArn field's value. -func (s *DescribeTrainingJobOutput) SetRoleArn(v string) *DescribeTrainingJobOutput { - s.RoleArn = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetSecondaryStatus sets the SecondaryStatus field's value. -func (s *DescribeTrainingJobOutput) SetSecondaryStatus(v string) *DescribeTrainingJobOutput { - s.SecondaryStatus = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListTagsInput) SetMaxResults(v int64) *ListTagsInput { + s.MaxResults = &v return s } -// SetStoppingCondition sets the StoppingCondition field's value. -func (s *DescribeTrainingJobOutput) SetStoppingCondition(v *StoppingCondition) *DescribeTrainingJobOutput { - s.StoppingCondition = v +// SetNextToken sets the NextToken field's value. +func (s *ListTagsInput) SetNextToken(v string) *ListTagsInput { + s.NextToken = &v return s } -// SetTrainingEndTime sets the TrainingEndTime field's value. -func (s *DescribeTrainingJobOutput) SetTrainingEndTime(v time.Time) *DescribeTrainingJobOutput { - s.TrainingEndTime = &v +// SetResourceArn sets the ResourceArn field's value. +func (s *ListTagsInput) SetResourceArn(v string) *ListTagsInput { + s.ResourceArn = &v return s } -// SetTrainingJobArn sets the TrainingJobArn field's value. 
-func (s *DescribeTrainingJobOutput) SetTrainingJobArn(v string) *DescribeTrainingJobOutput { - s.TrainingJobArn = &v - return s +type ListTagsOutput struct { + _ struct{} `type:"structure"` + + // If response is truncated, Amazon SageMaker includes a token in the response. + // You can use this token in your subsequent request to fetch next set of tokens. + NextToken *string `type:"string"` + + // An array of Tag objects, each with a tag key and a value. + Tags []*Tag `type:"list"` } -// SetTrainingJobName sets the TrainingJobName field's value. -func (s *DescribeTrainingJobOutput) SetTrainingJobName(v string) *DescribeTrainingJobOutput { - s.TrainingJobName = &v - return s +// String returns the string representation +func (s ListTagsOutput) String() string { + return awsutil.Prettify(s) } -// SetTrainingJobStatus sets the TrainingJobStatus field's value. -func (s *DescribeTrainingJobOutput) SetTrainingJobStatus(v string) *DescribeTrainingJobOutput { - s.TrainingJobStatus = &v +// GoString returns the string representation +func (s ListTagsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListTagsOutput) SetNextToken(v string) *ListTagsOutput { + s.NextToken = &v return s } -// SetTrainingStartTime sets the TrainingStartTime field's value. -func (s *DescribeTrainingJobOutput) SetTrainingStartTime(v time.Time) *DescribeTrainingJobOutput { - s.TrainingStartTime = &v +// SetTags sets the Tags field's value. +func (s *ListTagsOutput) SetTags(v []*Tag) *ListTagsOutput { + s.Tags = v return s } -// Specifies weight and capacity values for a production variant. -type DesiredWeightAndCapacity struct { +type ListTrainingJobsForHyperParameterTuningJobInput struct { _ struct{} `type:"structure"` - // The variant's capacity. - DesiredInstanceCount *int64 `min:"1" type:"integer"` + // The name of the tuning job whose training jobs you want to list. + // + // HyperParameterTuningJobName is a required field + HyperParameterTuningJobName *string `min:"1" type:"string" required:"true"` - // The variant's weight. - DesiredWeight *float64 `type:"float"` + // The maximum number of training jobs to return. The default value is 10. + MaxResults *int64 `min:"1" type:"integer"` - // The name of the variant to update. + // If the result of the previous ListTrainingJobsForHyperParameterTuningJob + // request was truncated, the response includes a NextToken. To retrieve the + // next set of training jobs, use the token in the next request. + NextToken *string `type:"string"` + + // The field to sort results by. The default is Name. // - // VariantName is a required field - VariantName *string `type:"string" required:"true"` + // If the value of this field is FinalObjectiveMetricValue, any training jobs + // that did not return an objective metric are not listed. + SortBy *string `type:"string" enum:"TrainingJobSortByOptions"` + + // The sort order for results. The default is Ascending. + SortOrder *string `type:"string" enum:"SortOrder"` + + // A filter that returns only training jobs with the specified status. 
+ StatusEquals *string `type:"string" enum:"TrainingJobStatus"` } // String returns the string representation -func (s DesiredWeightAndCapacity) String() string { +func (s ListTrainingJobsForHyperParameterTuningJobInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DesiredWeightAndCapacity) GoString() string { +func (s ListTrainingJobsForHyperParameterTuningJobInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DesiredWeightAndCapacity) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DesiredWeightAndCapacity"} - if s.DesiredInstanceCount != nil && *s.DesiredInstanceCount < 1 { - invalidParams.Add(request.NewErrParamMinValue("DesiredInstanceCount", 1)) +func (s *ListTrainingJobsForHyperParameterTuningJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTrainingJobsForHyperParameterTuningJobInput"} + if s.HyperParameterTuningJobName == nil { + invalidParams.Add(request.NewErrParamRequired("HyperParameterTuningJobName")) } - if s.VariantName == nil { - invalidParams.Add(request.NewErrParamRequired("VariantName")) + if s.HyperParameterTuningJobName != nil && len(*s.HyperParameterTuningJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("HyperParameterTuningJobName", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if invalidParams.Len() > 0 { @@ -5830,185 +10159,133 @@ func (s *DesiredWeightAndCapacity) Validate() error { return nil } -// SetDesiredInstanceCount sets the DesiredInstanceCount field's value. -func (s *DesiredWeightAndCapacity) SetDesiredInstanceCount(v int64) *DesiredWeightAndCapacity { - s.DesiredInstanceCount = &v +// SetHyperParameterTuningJobName sets the HyperParameterTuningJobName field's value. +func (s *ListTrainingJobsForHyperParameterTuningJobInput) SetHyperParameterTuningJobName(v string) *ListTrainingJobsForHyperParameterTuningJobInput { + s.HyperParameterTuningJobName = &v return s } -// SetDesiredWeight sets the DesiredWeight field's value. -func (s *DesiredWeightAndCapacity) SetDesiredWeight(v float64) *DesiredWeightAndCapacity { - s.DesiredWeight = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListTrainingJobsForHyperParameterTuningJobInput) SetMaxResults(v int64) *ListTrainingJobsForHyperParameterTuningJobInput { + s.MaxResults = &v return s } -// SetVariantName sets the VariantName field's value. -func (s *DesiredWeightAndCapacity) SetVariantName(v string) *DesiredWeightAndCapacity { - s.VariantName = &v +// SetNextToken sets the NextToken field's value. +func (s *ListTrainingJobsForHyperParameterTuningJobInput) SetNextToken(v string) *ListTrainingJobsForHyperParameterTuningJobInput { + s.NextToken = &v return s } -// Provides summary information for an endpoint configuration. -type EndpointConfigSummary struct { - _ struct{} `type:"structure"` - - // A timestamp that shows when the endpoint configuration was created. - // - // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` - - // The Amazon Resource Name (ARN) of the endpoint configuration. - // - // EndpointConfigArn is a required field - EndpointConfigArn *string `min:"20" type:"string" required:"true"` - - // The name of the endpoint configuration. 
- // - // EndpointConfigName is a required field - EndpointConfigName *string `type:"string" required:"true"` -} - -// String returns the string representation -func (s EndpointConfigSummary) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s EndpointConfigSummary) GoString() string { - return s.String() -} - -// SetCreationTime sets the CreationTime field's value. -func (s *EndpointConfigSummary) SetCreationTime(v time.Time) *EndpointConfigSummary { - s.CreationTime = &v +// SetSortBy sets the SortBy field's value. +func (s *ListTrainingJobsForHyperParameterTuningJobInput) SetSortBy(v string) *ListTrainingJobsForHyperParameterTuningJobInput { + s.SortBy = &v return s } -// SetEndpointConfigArn sets the EndpointConfigArn field's value. -func (s *EndpointConfigSummary) SetEndpointConfigArn(v string) *EndpointConfigSummary { - s.EndpointConfigArn = &v +// SetSortOrder sets the SortOrder field's value. +func (s *ListTrainingJobsForHyperParameterTuningJobInput) SetSortOrder(v string) *ListTrainingJobsForHyperParameterTuningJobInput { + s.SortOrder = &v return s } -// SetEndpointConfigName sets the EndpointConfigName field's value. -func (s *EndpointConfigSummary) SetEndpointConfigName(v string) *EndpointConfigSummary { - s.EndpointConfigName = &v +// SetStatusEquals sets the StatusEquals field's value. +func (s *ListTrainingJobsForHyperParameterTuningJobInput) SetStatusEquals(v string) *ListTrainingJobsForHyperParameterTuningJobInput { + s.StatusEquals = &v return s } -// Provides summary information for an endpoint. -type EndpointSummary struct { +type ListTrainingJobsForHyperParameterTuningJobOutput struct { _ struct{} `type:"structure"` - // A timestamp that shows when the endpoint was created. - // - // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` - - // The Amazon Resource Name (ARN) of the endpoint. - // - // EndpointArn is a required field - EndpointArn *string `min:"20" type:"string" required:"true"` - - // The name of the endpoint. - // - // EndpointName is a required field - EndpointName *string `type:"string" required:"true"` - - // The status of the endpoint. - // - // EndpointStatus is a required field - EndpointStatus *string `type:"string" required:"true" enum:"EndpointStatus"` + // If the result of this ListTrainingJobsForHyperParameterTuningJob request + // was truncated, the response includes a NextToken. To retrieve the next set + // of training jobs, use the token in the next request. + NextToken *string `type:"string"` - // A timestamp that shows when the endpoint was last modified. + // A list of TrainingJobSummary objects that describe the training jobs that + // the ListTrainingJobsForHyperParameterTuningJob request returned. // - // LastModifiedTime is a required field - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + // TrainingJobSummaries is a required field + TrainingJobSummaries []*HyperParameterTrainingJobSummary `type:"list" required:"true"` } // String returns the string representation -func (s EndpointSummary) String() string { +func (s ListTrainingJobsForHyperParameterTuningJobOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s EndpointSummary) GoString() string { +func (s ListTrainingJobsForHyperParameterTuningJobOutput) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. 
-func (s *EndpointSummary) SetCreationTime(v time.Time) *EndpointSummary { - s.CreationTime = &v - return s -} - -// SetEndpointArn sets the EndpointArn field's value. -func (s *EndpointSummary) SetEndpointArn(v string) *EndpointSummary { - s.EndpointArn = &v +// SetNextToken sets the NextToken field's value. +func (s *ListTrainingJobsForHyperParameterTuningJobOutput) SetNextToken(v string) *ListTrainingJobsForHyperParameterTuningJobOutput { + s.NextToken = &v return s } -// SetEndpointName sets the EndpointName field's value. -func (s *EndpointSummary) SetEndpointName(v string) *EndpointSummary { - s.EndpointName = &v +// SetTrainingJobSummaries sets the TrainingJobSummaries field's value. +func (s *ListTrainingJobsForHyperParameterTuningJobOutput) SetTrainingJobSummaries(v []*HyperParameterTrainingJobSummary) *ListTrainingJobsForHyperParameterTuningJobOutput { + s.TrainingJobSummaries = v return s } -// SetEndpointStatus sets the EndpointStatus field's value. -func (s *EndpointSummary) SetEndpointStatus(v string) *EndpointSummary { - s.EndpointStatus = &v - return s -} +type ListTrainingJobsInput struct { + _ struct{} `type:"structure"` -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *EndpointSummary) SetLastModifiedTime(v time.Time) *EndpointSummary { - s.LastModifiedTime = &v - return s -} + // A filter that returns only training jobs created after the specified time + // (timestamp). + CreationTimeAfter *time.Time `type:"timestamp"` -type ListEndpointConfigsInput struct { - _ struct{} `type:"structure"` + // A filter that returns only training jobs created before the specified time + // (timestamp). + CreationTimeBefore *time.Time `type:"timestamp"` - // A filter that returns only endpoint configurations created after the specified - // time (timestamp). - CreationTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + // A filter that returns only training jobs modified after the specified time + // (timestamp). + LastModifiedTimeAfter *time.Time `type:"timestamp"` - // A filter that returns only endpoint configurations created before the specified - // time (timestamp). - CreationTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + // A filter that returns only training jobs modified before the specified time + // (timestamp). + LastModifiedTimeBefore *time.Time `type:"timestamp"` // The maximum number of training jobs to return in the response. MaxResults *int64 `min:"1" type:"integer"` - // A string in the endpoint configuration name. This filter returns only endpoint - // configurations whose name contains the specified string. + // A string in the training job name. This filter returns only training jobs + // whose name contains the specified string. NameContains *string `type:"string"` - // If the result of the previous ListEndpointConfig request was truncated, the - // response includes a NextToken. To retrieve the next set of endpoint configurations, + // If the result of the previous ListTrainingJobs request was truncated, the + // response includes a NextToken. To retrieve the next set of training jobs, // use the token in the next request. NextToken *string `type:"string"` // The field to sort results by. The default is CreationTime. - SortBy *string `type:"string" enum:"EndpointConfigSortKey"` + SortBy *string `type:"string" enum:"SortBy"` // The sort order for results. The default is Ascending. 
- SortOrder *string `type:"string" enum:"OrderKey"` + SortOrder *string `type:"string" enum:"SortOrder"` + + // A filter that retrieves only training jobs with a specific status. + StatusEquals *string `type:"string" enum:"TrainingJobStatus"` } // String returns the string representation -func (s ListEndpointConfigsInput) String() string { +func (s ListTrainingJobsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListEndpointConfigsInput) GoString() string { +func (s ListTrainingJobsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListEndpointConfigsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListEndpointConfigsInput"} +func (s *ListTrainingJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTrainingJobsInput"} if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } @@ -6020,136 +10297,151 @@ func (s *ListEndpointConfigsInput) Validate() error { } // SetCreationTimeAfter sets the CreationTimeAfter field's value. -func (s *ListEndpointConfigsInput) SetCreationTimeAfter(v time.Time) *ListEndpointConfigsInput { +func (s *ListTrainingJobsInput) SetCreationTimeAfter(v time.Time) *ListTrainingJobsInput { s.CreationTimeAfter = &v return s } // SetCreationTimeBefore sets the CreationTimeBefore field's value. -func (s *ListEndpointConfigsInput) SetCreationTimeBefore(v time.Time) *ListEndpointConfigsInput { +func (s *ListTrainingJobsInput) SetCreationTimeBefore(v time.Time) *ListTrainingJobsInput { s.CreationTimeBefore = &v return s } +// SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. +func (s *ListTrainingJobsInput) SetLastModifiedTimeAfter(v time.Time) *ListTrainingJobsInput { + s.LastModifiedTimeAfter = &v + return s +} + +// SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. +func (s *ListTrainingJobsInput) SetLastModifiedTimeBefore(v time.Time) *ListTrainingJobsInput { + s.LastModifiedTimeBefore = &v + return s +} + // SetMaxResults sets the MaxResults field's value. -func (s *ListEndpointConfigsInput) SetMaxResults(v int64) *ListEndpointConfigsInput { +func (s *ListTrainingJobsInput) SetMaxResults(v int64) *ListTrainingJobsInput { s.MaxResults = &v return s } // SetNameContains sets the NameContains field's value. -func (s *ListEndpointConfigsInput) SetNameContains(v string) *ListEndpointConfigsInput { +func (s *ListTrainingJobsInput) SetNameContains(v string) *ListTrainingJobsInput { s.NameContains = &v return s } // SetNextToken sets the NextToken field's value. -func (s *ListEndpointConfigsInput) SetNextToken(v string) *ListEndpointConfigsInput { +func (s *ListTrainingJobsInput) SetNextToken(v string) *ListTrainingJobsInput { s.NextToken = &v return s } // SetSortBy sets the SortBy field's value. -func (s *ListEndpointConfigsInput) SetSortBy(v string) *ListEndpointConfigsInput { +func (s *ListTrainingJobsInput) SetSortBy(v string) *ListTrainingJobsInput { s.SortBy = &v return s } // SetSortOrder sets the SortOrder field's value. -func (s *ListEndpointConfigsInput) SetSortOrder(v string) *ListEndpointConfigsInput { +func (s *ListTrainingJobsInput) SetSortOrder(v string) *ListTrainingJobsInput { s.SortOrder = &v return s } -type ListEndpointConfigsOutput struct { - _ struct{} `type:"structure"` +// SetStatusEquals sets the StatusEquals field's value. 
+func (s *ListTrainingJobsInput) SetStatusEquals(v string) *ListTrainingJobsInput { + s.StatusEquals = &v + return s +} - // An array of endpoint configurations. - // - // EndpointConfigs is a required field - EndpointConfigs []*EndpointConfigSummary `type:"list" required:"true"` +type ListTrainingJobsOutput struct { + _ struct{} `type:"structure"` // If the response is truncated, Amazon SageMaker returns this token. To retrieve - // the next set of endpoint configurations, use it in the subsequent request + // the next set of training jobs, use it in the subsequent request. NextToken *string `type:"string"` + + // An array of TrainingJobSummary objects, each listing a training job. + // + // TrainingJobSummaries is a required field + TrainingJobSummaries []*TrainingJobSummary `type:"list" required:"true"` } // String returns the string representation -func (s ListEndpointConfigsOutput) String() string { +func (s ListTrainingJobsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListEndpointConfigsOutput) GoString() string { +func (s ListTrainingJobsOutput) GoString() string { return s.String() } -// SetEndpointConfigs sets the EndpointConfigs field's value. -func (s *ListEndpointConfigsOutput) SetEndpointConfigs(v []*EndpointConfigSummary) *ListEndpointConfigsOutput { - s.EndpointConfigs = v - return s -} - // SetNextToken sets the NextToken field's value. -func (s *ListEndpointConfigsOutput) SetNextToken(v string) *ListEndpointConfigsOutput { +func (s *ListTrainingJobsOutput) SetNextToken(v string) *ListTrainingJobsOutput { s.NextToken = &v return s } -type ListEndpointsInput struct { +// SetTrainingJobSummaries sets the TrainingJobSummaries field's value. +func (s *ListTrainingJobsOutput) SetTrainingJobSummaries(v []*TrainingJobSummary) *ListTrainingJobsOutput { + s.TrainingJobSummaries = v + return s +} + +type ListTransformJobsInput struct { _ struct{} `type:"structure"` - // A filter that returns only endpoints that were created after the specified - // time (timestamp). - CreationTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + // A filter that returns only transform jobs created after the specified time. + CreationTimeAfter *time.Time `type:"timestamp"` - // A filter that returns only endpoints that were created before the specified - // time (timestamp). - CreationTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + // A filter that returns only transform jobs created before the specified time. + CreationTimeBefore *time.Time `type:"timestamp"` - // A filter that returns only endpoints that were modified after the specified - // timestamp. - LastModifiedTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + // A filter that returns only transform jobs modified after the specified time. + LastModifiedTimeAfter *time.Time `type:"timestamp"` - // A filter that returns only endpoints that were modified before the specified - // timestamp. - LastModifiedTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + // A filter that returns only transform jobs modified before the specified time. + LastModifiedTimeBefore *time.Time `type:"timestamp"` - // The maximum number of endpoints to return in the response. + // The maximum number of transform jobs to return in the response. The default + // value is 10. MaxResults *int64 `min:"1" type:"integer"` - // A string in endpoint names. This filter returns only endpoints whose name - // contains the specified string. 
+ // A string in the transform job name. This filter returns only transform jobs + // whose name contains the specified string. NameContains *string `type:"string"` - // If the result of a ListEndpoints request was truncated, the response includes - // a NextToken. To retrieve the next set of endpoints, use the token in the - // next request. + // If the result of the previous ListTransformJobs request was truncated, the + // response includes a NextToken. To retrieve the next set of transform jobs, + // use the token in the next request. NextToken *string `type:"string"` - // Sorts the list of results. The default is CreationTime. - SortBy *string `type:"string" enum:"EndpointSortKey"` + // The field to sort results by. The default is CreationTime. + SortBy *string `type:"string" enum:"SortBy"` - // The sort order for results. The default is Ascending. - SortOrder *string `type:"string" enum:"OrderKey"` + // The sort order for results. The default is Descending. + SortOrder *string `type:"string" enum:"SortOrder"` - // A filter that returns only endpoints with the specified status. - StatusEquals *string `type:"string" enum:"EndpointStatus"` + // A filter that retrieves only transform jobs with a specific status. + StatusEquals *string `type:"string" enum:"TransformJobStatus"` } // String returns the string representation -func (s ListEndpointsInput) String() string { +func (s ListTransformJobsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListEndpointsInput) GoString() string { +func (s ListTransformJobsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListEndpointsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListEndpointsInput"} +func (s *ListTransformJobsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTransformJobsInput"} if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } @@ -6161,143 +10453,187 @@ func (s *ListEndpointsInput) Validate() error { } // SetCreationTimeAfter sets the CreationTimeAfter field's value. -func (s *ListEndpointsInput) SetCreationTimeAfter(v time.Time) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetCreationTimeAfter(v time.Time) *ListTransformJobsInput { s.CreationTimeAfter = &v return s } // SetCreationTimeBefore sets the CreationTimeBefore field's value. -func (s *ListEndpointsInput) SetCreationTimeBefore(v time.Time) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetCreationTimeBefore(v time.Time) *ListTransformJobsInput { s.CreationTimeBefore = &v return s } // SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. -func (s *ListEndpointsInput) SetLastModifiedTimeAfter(v time.Time) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetLastModifiedTimeAfter(v time.Time) *ListTransformJobsInput { s.LastModifiedTimeAfter = &v return s } // SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. -func (s *ListEndpointsInput) SetLastModifiedTimeBefore(v time.Time) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetLastModifiedTimeBefore(v time.Time) *ListTransformJobsInput { s.LastModifiedTimeBefore = &v return s } // SetMaxResults sets the MaxResults field's value. 
-func (s *ListEndpointsInput) SetMaxResults(v int64) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetMaxResults(v int64) *ListTransformJobsInput { s.MaxResults = &v return s } // SetNameContains sets the NameContains field's value. -func (s *ListEndpointsInput) SetNameContains(v string) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetNameContains(v string) *ListTransformJobsInput { s.NameContains = &v return s } // SetNextToken sets the NextToken field's value. -func (s *ListEndpointsInput) SetNextToken(v string) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetNextToken(v string) *ListTransformJobsInput { s.NextToken = &v return s } // SetSortBy sets the SortBy field's value. -func (s *ListEndpointsInput) SetSortBy(v string) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetSortBy(v string) *ListTransformJobsInput { s.SortBy = &v return s } // SetSortOrder sets the SortOrder field's value. -func (s *ListEndpointsInput) SetSortOrder(v string) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetSortOrder(v string) *ListTransformJobsInput { s.SortOrder = &v return s } // SetStatusEquals sets the StatusEquals field's value. -func (s *ListEndpointsInput) SetStatusEquals(v string) *ListEndpointsInput { +func (s *ListTransformJobsInput) SetStatusEquals(v string) *ListTransformJobsInput { s.StatusEquals = &v return s } -type ListEndpointsOutput struct { +type ListTransformJobsOutput struct { _ struct{} `type:"structure"` - // An array or endpoint objects. - // - // Endpoints is a required field - Endpoints []*EndpointSummary `type:"list" required:"true"` - // If the response is truncated, Amazon SageMaker returns this token. To retrieve - // the next set of training jobs, use it in the subsequent request. + // the next set of transform jobs, use it in the next request. NextToken *string `type:"string"` + + // An array of TransformJobSummary objects. + // + // TransformJobSummaries is a required field + TransformJobSummaries []*TransformJobSummary `type:"list" required:"true"` } // String returns the string representation -func (s ListEndpointsOutput) String() string { +func (s ListTransformJobsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListEndpointsOutput) GoString() string { +func (s ListTransformJobsOutput) GoString() string { return s.String() } -// SetEndpoints sets the Endpoints field's value. -func (s *ListEndpointsOutput) SetEndpoints(v []*EndpointSummary) *ListEndpointsOutput { - s.Endpoints = v +// SetNextToken sets the NextToken field's value. +func (s *ListTransformJobsOutput) SetNextToken(v string) *ListTransformJobsOutput { + s.NextToken = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListEndpointsOutput) SetNextToken(v string) *ListEndpointsOutput { - s.NextToken = &v +// SetTransformJobSummaries sets the TransformJobSummaries field's value. +func (s *ListTransformJobsOutput) SetTransformJobSummaries(v []*TransformJobSummary) *ListTransformJobsOutput { + s.TransformJobSummaries = v return s } -type ListModelsInput struct { +// The name, value, and date and time of a metric that was emitted to Amazon +// CloudWatch. +type MetricData struct { _ struct{} `type:"structure"` - // A filter that returns only models created after the specified time (timestamp). - CreationTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + // The name of the metric. 
+ MetricName *string `min:"1" type:"string"` - // A filter that returns only models created before the specified time (timestamp). - CreationTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + // The date and time that the algorithm emitted the metric. + Timestamp *time.Time `type:"timestamp"` - // The maximum number of models to return in the response. - MaxResults *int64 `min:"1" type:"integer"` + // The value of the metric. + Value *float64 `type:"float"` +} - // A string in the training job name. This filter returns only models in the - // training job whose name contains the specified string. - NameContains *string `type:"string"` +// String returns the string representation +func (s MetricData) String() string { + return awsutil.Prettify(s) +} - // If the response to a previous ListModels request was truncated, the response - // includes a NextToken. To retrieve the next set of models, use the token in - // the next request. - NextToken *string `type:"string"` +// GoString returns the string representation +func (s MetricData) GoString() string { + return s.String() +} - // Sorts the list of results. The default is CreationTime. - SortBy *string `type:"string" enum:"ModelSortKey"` +// SetMetricName sets the MetricName field's value. +func (s *MetricData) SetMetricName(v string) *MetricData { + s.MetricName = &v + return s +} - // The sort order for results. The default is Ascending. - SortOrder *string `type:"string" enum:"OrderKey"` +// SetTimestamp sets the Timestamp field's value. +func (s *MetricData) SetTimestamp(v time.Time) *MetricData { + s.Timestamp = &v + return s +} + +// SetValue sets the Value field's value. +func (s *MetricData) SetValue(v float64) *MetricData { + s.Value = &v + return s +} + +// Specifies a metric that the training algorithm writes to stderr or stdout. +// Amazon SageMakerhyperparameter tuning captures all defined metrics. You specify +// one metric that a hyperparameter tuning job uses as its objective metric +// to choose the best training job. +type MetricDefinition struct { + _ struct{} `type:"structure"` + + // The name of the metric. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // A regular expression that searches the output of a training job and gets + // the value of the metric. For more information about using regular expressions + // to define metrics, see Defining Objective Metrics (http://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-define-metrics.html). + // + // Regex is a required field + Regex *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ListModelsInput) String() string { +func (s MetricDefinition) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListModelsInput) GoString() string { +func (s MetricDefinition) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListModelsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListModelsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *MetricDefinition) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MetricDefinition"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.Regex == nil { + invalidParams.Add(request.NewErrParamRequired("Regex")) + } + if s.Regex != nil && len(*s.Regex) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Regex", 1)) } if invalidParams.Len() > 0 { @@ -6306,136 +10642,188 @@ func (s *ListModelsInput) Validate() error { return nil } -// SetCreationTimeAfter sets the CreationTimeAfter field's value. -func (s *ListModelsInput) SetCreationTimeAfter(v time.Time) *ListModelsInput { - s.CreationTimeAfter = &v +// SetName sets the Name field's value. +func (s *MetricDefinition) SetName(v string) *MetricDefinition { + s.Name = &v return s } -// SetCreationTimeBefore sets the CreationTimeBefore field's value. -func (s *ListModelsInput) SetCreationTimeBefore(v time.Time) *ListModelsInput { - s.CreationTimeBefore = &v +// SetRegex sets the Regex field's value. +func (s *MetricDefinition) SetRegex(v string) *MetricDefinition { + s.Regex = &v return s } -// SetMaxResults sets the MaxResults field's value. -func (s *ListModelsInput) SetMaxResults(v int64) *ListModelsInput { - s.MaxResults = &v - return s -} +// Provides information about the location that is configured for storing model +// artifacts. +type ModelArtifacts struct { + _ struct{} `type:"structure"` -// SetNameContains sets the NameContains field's value. -func (s *ListModelsInput) SetNameContains(v string) *ListModelsInput { - s.NameContains = &v - return s + // The path of the S3 object that contains the model artifacts. For example, + // s3://bucket-name/keynameprefix/model.tar.gz. + // + // S3ModelArtifacts is a required field + S3ModelArtifacts *string `type:"string" required:"true"` } -// SetNextToken sets the NextToken field's value. -func (s *ListModelsInput) SetNextToken(v string) *ListModelsInput { - s.NextToken = &v - return s +// String returns the string representation +func (s ModelArtifacts) String() string { + return awsutil.Prettify(s) } -// SetSortBy sets the SortBy field's value. -func (s *ListModelsInput) SetSortBy(v string) *ListModelsInput { - s.SortBy = &v - return s +// GoString returns the string representation +func (s ModelArtifacts) GoString() string { + return s.String() } -// SetSortOrder sets the SortOrder field's value. -func (s *ListModelsInput) SetSortOrder(v string) *ListModelsInput { - s.SortOrder = &v +// SetS3ModelArtifacts sets the S3ModelArtifacts field's value. +func (s *ModelArtifacts) SetS3ModelArtifacts(v string) *ModelArtifacts { + s.S3ModelArtifacts = &v return s } -type ListModelsOutput struct { +// Provides summary information about a model. +type ModelSummary struct { _ struct{} `type:"structure"` - // An array of ModelSummary objects, each of which lists a model. + // A timestamp that indicates when the model was created. // - // Models is a required field - Models []*ModelSummary `type:"list" required:"true"` + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` - // If the response is truncated, Amazon SageMaker returns this token. 
To retrieve - // the next set of models, use it in the subsequent request. - NextToken *string `type:"string"` + // The Amazon Resource Name (ARN) of the model. + // + // ModelArn is a required field + ModelArn *string `min:"20" type:"string" required:"true"` + + // The name of the model that you want a summary for. + // + // ModelName is a required field + ModelName *string `type:"string" required:"true"` } // String returns the string representation -func (s ListModelsOutput) String() string { +func (s ModelSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListModelsOutput) GoString() string { +func (s ModelSummary) GoString() string { return s.String() } -// SetModels sets the Models field's value. -func (s *ListModelsOutput) SetModels(v []*ModelSummary) *ListModelsOutput { - s.Models = v +// SetCreationTime sets the CreationTime field's value. +func (s *ModelSummary) SetCreationTime(v time.Time) *ModelSummary { + s.CreationTime = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListModelsOutput) SetNextToken(v string) *ListModelsOutput { - s.NextToken = &v +// SetModelArn sets the ModelArn field's value. +func (s *ModelSummary) SetModelArn(v string) *ModelSummary { + s.ModelArn = &v return s } -type ListNotebookInstanceLifecycleConfigsInput struct { +// SetModelName sets the ModelName field's value. +func (s *ModelSummary) SetModelName(v string) *ModelSummary { + s.ModelName = &v + return s +} + +// Provides a summary of a notebook instance lifecycle configuration. +type NotebookInstanceLifecycleConfigSummary struct { _ struct{} `type:"structure"` - // A filter that returns only lifecycle configurations that were created after - // the specified time (timestamp). - CreationTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + // A timestamp that tells when the lifecycle configuration was created. + CreationTime *time.Time `type:"timestamp"` - // A filter that returns only lifecycle configurations that were created before - // the specified time (timestamp). - CreationTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + // A timestamp that tells when the lifecycle configuration was last modified. + LastModifiedTime *time.Time `type:"timestamp"` - // A filter that returns only lifecycle configurations that were modified after - // the specified time (timestamp). - LastModifiedTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + // The Amazon Resource Name (ARN) of the lifecycle configuration. + // + // NotebookInstanceLifecycleConfigArn is a required field + NotebookInstanceLifecycleConfigArn *string `type:"string" required:"true"` - // A filter that returns only lifecycle configurations that were modified before - // the specified time (timestamp). - LastModifiedTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` + // The name of the lifecycle configuration. + // + // NotebookInstanceLifecycleConfigName is a required field + NotebookInstanceLifecycleConfigName *string `type:"string" required:"true"` +} - // The maximum number of lifecycle configurations to return in the response. - MaxResults *int64 `min:"1" type:"integer"` +// String returns the string representation +func (s NotebookInstanceLifecycleConfigSummary) String() string { + return awsutil.Prettify(s) +} - // A string in the lifecycle configuration name. This filter returns only lifecycle - // configurations whose name contains the specified string. 
- NameContains *string `type:"string"` +// GoString returns the string representation +func (s NotebookInstanceLifecycleConfigSummary) GoString() string { + return s.String() +} - // If the result of a ListNotebookInstanceLifecycleConfigs request was truncated, - // the response includes a NextToken. To get the next set of lifecycle configurations, - // use the token in the next request. - NextToken *string `type:"string"` +// SetCreationTime sets the CreationTime field's value. +func (s *NotebookInstanceLifecycleConfigSummary) SetCreationTime(v time.Time) *NotebookInstanceLifecycleConfigSummary { + s.CreationTime = &v + return s +} + +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *NotebookInstanceLifecycleConfigSummary) SetLastModifiedTime(v time.Time) *NotebookInstanceLifecycleConfigSummary { + s.LastModifiedTime = &v + return s +} + +// SetNotebookInstanceLifecycleConfigArn sets the NotebookInstanceLifecycleConfigArn field's value. +func (s *NotebookInstanceLifecycleConfigSummary) SetNotebookInstanceLifecycleConfigArn(v string) *NotebookInstanceLifecycleConfigSummary { + s.NotebookInstanceLifecycleConfigArn = &v + return s +} + +// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. +func (s *NotebookInstanceLifecycleConfigSummary) SetNotebookInstanceLifecycleConfigName(v string) *NotebookInstanceLifecycleConfigSummary { + s.NotebookInstanceLifecycleConfigName = &v + return s +} - // Sorts the list of results. The default is CreationTime. - SortBy *string `type:"string" enum:"NotebookInstanceLifecycleConfigSortKey"` +// Contains the notebook instance lifecycle configuration script. +// +// Each lifecycle configuration script has a limit of 16384 characters. +// +// The value of the $PATH environment variable that is available to both scripts +// is /sbin:bin:/usr/sbin:/usr/bin. +// +// View CloudWatch Logs for notebook instance lifecycle configurations in log +// group /aws/sagemaker/NotebookInstances in log stream [notebook-instance-name]/[LifecycleConfigHook]. +// +// Lifecycle configuration scripts cannot run for longer than 5 minutes. If +// a script runs for longer than 5 minutes, it fails and the notebook instance +// is not created or started. +// +// For information about notebook instance lifestyle configurations, see Step +// 2.1: (Optional) Customize a Notebook Instance (http://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html). +type NotebookInstanceLifecycleHook struct { + _ struct{} `type:"structure"` - // The sort order for results. - SortOrder *string `type:"string" enum:"NotebookInstanceLifecycleConfigSortOrder"` + // A base64-encoded string that contains a shell script for a notebook instance + // lifecycle configuration. + Content *string `min:"1" type:"string"` } // String returns the string representation -func (s ListNotebookInstanceLifecycleConfigsInput) String() string { +func (s NotebookInstanceLifecycleHook) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListNotebookInstanceLifecycleConfigsInput) GoString() string { +func (s NotebookInstanceLifecycleHook) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListNotebookInstanceLifecycleConfigsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListNotebookInstanceLifecycleConfigsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *NotebookInstanceLifecycleHook) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NotebookInstanceLifecycleHook"} + if s.Content != nil && len(*s.Content) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Content", 1)) } if invalidParams.Len() > 0 { @@ -6444,160 +10832,214 @@ func (s *ListNotebookInstanceLifecycleConfigsInput) Validate() error { return nil } -// SetCreationTimeAfter sets the CreationTimeAfter field's value. -func (s *ListNotebookInstanceLifecycleConfigsInput) SetCreationTimeAfter(v time.Time) *ListNotebookInstanceLifecycleConfigsInput { - s.CreationTimeAfter = &v +// SetContent sets the Content field's value. +func (s *NotebookInstanceLifecycleHook) SetContent(v string) *NotebookInstanceLifecycleHook { + s.Content = &v return s } -// SetCreationTimeBefore sets the CreationTimeBefore field's value. -func (s *ListNotebookInstanceLifecycleConfigsInput) SetCreationTimeBefore(v time.Time) *ListNotebookInstanceLifecycleConfigsInput { - s.CreationTimeBefore = &v +// Provides summary information for an Amazon SageMaker notebook instance. +type NotebookInstanceSummary struct { + _ struct{} `type:"structure"` + + // A timestamp that shows when the notebook instance was created. + CreationTime *time.Time `type:"timestamp"` + + // The type of ML compute instance that the notebook instance is running on. + InstanceType *string `type:"string" enum:"InstanceType"` + + // A timestamp that shows when the notebook instance was last modified. + LastModifiedTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the notebook instance. + // + // NotebookInstanceArn is a required field + NotebookInstanceArn *string `type:"string" required:"true"` + + // The name of a notebook instance lifecycle configuration associated with this + // notebook instance. + // + // For information about notebook instance lifestyle configurations, see Step + // 2.1: (Optional) Customize a Notebook Instance (http://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html). + NotebookInstanceLifecycleConfigName *string `type:"string"` + + // The name of the notebook instance that you want a summary for. + // + // NotebookInstanceName is a required field + NotebookInstanceName *string `type:"string" required:"true"` + + // The status of the notebook instance. + NotebookInstanceStatus *string `type:"string" enum:"NotebookInstanceStatus"` + + // The URL that you use to connect to the Jupyter instance running in your notebook + // instance. + Url *string `type:"string"` +} + +// String returns the string representation +func (s NotebookInstanceSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NotebookInstanceSummary) GoString() string { + return s.String() +} + +// SetCreationTime sets the CreationTime field's value. +func (s *NotebookInstanceSummary) SetCreationTime(v time.Time) *NotebookInstanceSummary { + s.CreationTime = &v return s } -// SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. 
-func (s *ListNotebookInstanceLifecycleConfigsInput) SetLastModifiedTimeAfter(v time.Time) *ListNotebookInstanceLifecycleConfigsInput { - s.LastModifiedTimeAfter = &v +// SetInstanceType sets the InstanceType field's value. +func (s *NotebookInstanceSummary) SetInstanceType(v string) *NotebookInstanceSummary { + s.InstanceType = &v return s } -// SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. -func (s *ListNotebookInstanceLifecycleConfigsInput) SetLastModifiedTimeBefore(v time.Time) *ListNotebookInstanceLifecycleConfigsInput { - s.LastModifiedTimeBefore = &v +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *NotebookInstanceSummary) SetLastModifiedTime(v time.Time) *NotebookInstanceSummary { + s.LastModifiedTime = &v return s } -// SetMaxResults sets the MaxResults field's value. -func (s *ListNotebookInstanceLifecycleConfigsInput) SetMaxResults(v int64) *ListNotebookInstanceLifecycleConfigsInput { - s.MaxResults = &v +// SetNotebookInstanceArn sets the NotebookInstanceArn field's value. +func (s *NotebookInstanceSummary) SetNotebookInstanceArn(v string) *NotebookInstanceSummary { + s.NotebookInstanceArn = &v return s } -// SetNameContains sets the NameContains field's value. -func (s *ListNotebookInstanceLifecycleConfigsInput) SetNameContains(v string) *ListNotebookInstanceLifecycleConfigsInput { - s.NameContains = &v +// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. +func (s *NotebookInstanceSummary) SetNotebookInstanceLifecycleConfigName(v string) *NotebookInstanceSummary { + s.NotebookInstanceLifecycleConfigName = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListNotebookInstanceLifecycleConfigsInput) SetNextToken(v string) *ListNotebookInstanceLifecycleConfigsInput { - s.NextToken = &v +// SetNotebookInstanceName sets the NotebookInstanceName field's value. +func (s *NotebookInstanceSummary) SetNotebookInstanceName(v string) *NotebookInstanceSummary { + s.NotebookInstanceName = &v return s } -// SetSortBy sets the SortBy field's value. -func (s *ListNotebookInstanceLifecycleConfigsInput) SetSortBy(v string) *ListNotebookInstanceLifecycleConfigsInput { - s.SortBy = &v +// SetNotebookInstanceStatus sets the NotebookInstanceStatus field's value. +func (s *NotebookInstanceSummary) SetNotebookInstanceStatus(v string) *NotebookInstanceSummary { + s.NotebookInstanceStatus = &v return s } -// SetSortOrder sets the SortOrder field's value. -func (s *ListNotebookInstanceLifecycleConfigsInput) SetSortOrder(v string) *ListNotebookInstanceLifecycleConfigsInput { - s.SortOrder = &v +// SetUrl sets the Url field's value. +func (s *NotebookInstanceSummary) SetUrl(v string) *NotebookInstanceSummary { + s.Url = &v return s } -type ListNotebookInstanceLifecycleConfigsOutput struct { +// Specifies the number of training jobs that this hyperparameter tuning job +// launched, categorized by the status of their objective metric. The objective +// metric status shows whether the final objective metric for the training job +// has been evaluated by the tuning job and used in the hyperparameter tuning +// process. +type ObjectiveStatusCounters struct { _ struct{} `type:"structure"` - // If the response is truncated, Amazon SageMaker returns this token. To get - // the next set of lifecycle configurations, use it in the next request. 
- NextToken *string `type:"string"` + // The number of training jobs whose final objective metric was not evaluated + // and used in the hyperparameter tuning process. This typically occurs when + // the training job failed or did not emit an objective metric. + Failed *int64 `type:"integer"` - // An array of NotebookInstanceLifecycleConfiguration objects, each listing - // a lifecycle configuration. - NotebookInstanceLifecycleConfigs []*NotebookInstanceLifecycleConfigSummary `type:"list"` + // The number of training jobs that are in progress and pending evaluation of + // their final objective metric. + Pending *int64 `type:"integer"` + + // The number of training jobs whose final objective metric was evaluated by + // the hyperparameter tuning job and used in the hyperparameter tuning process. + Succeeded *int64 `type:"integer"` } // String returns the string representation -func (s ListNotebookInstanceLifecycleConfigsOutput) String() string { +func (s ObjectiveStatusCounters) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListNotebookInstanceLifecycleConfigsOutput) GoString() string { +func (s ObjectiveStatusCounters) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListNotebookInstanceLifecycleConfigsOutput) SetNextToken(v string) *ListNotebookInstanceLifecycleConfigsOutput { - s.NextToken = &v +// SetFailed sets the Failed field's value. +func (s *ObjectiveStatusCounters) SetFailed(v int64) *ObjectiveStatusCounters { + s.Failed = &v return s } -// SetNotebookInstanceLifecycleConfigs sets the NotebookInstanceLifecycleConfigs field's value. -func (s *ListNotebookInstanceLifecycleConfigsOutput) SetNotebookInstanceLifecycleConfigs(v []*NotebookInstanceLifecycleConfigSummary) *ListNotebookInstanceLifecycleConfigsOutput { - s.NotebookInstanceLifecycleConfigs = v +// SetPending sets the Pending field's value. +func (s *ObjectiveStatusCounters) SetPending(v int64) *ObjectiveStatusCounters { + s.Pending = &v return s } -type ListNotebookInstancesInput struct { - _ struct{} `type:"structure"` - - // A filter that returns only notebook instances that were created after the - // specified time (timestamp). - CreationTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` - - // A filter that returns only notebook instances that were created before the - // specified time (timestamp). - CreationTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` - - // A filter that returns only notebook instances that were modified after the - // specified time (timestamp). - LastModifiedTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` - - // A filter that returns only notebook instances that were modified before the - // specified time (timestamp). - LastModifiedTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` - - // The maximum number of notebook instances to return. - MaxResults *int64 `min:"1" type:"integer"` +// SetSucceeded sets the Succeeded field's value. +func (s *ObjectiveStatusCounters) SetSucceeded(v int64) *ObjectiveStatusCounters { + s.Succeeded = &v + return s +} - // A string in the notebook instances' name. This filter returns only notebook - // instances whose name contains the specified string. - NameContains *string `type:"string"` +// Provides information about how to store model training results (model artifacts). 
+type OutputDataConfig struct { + _ struct{} `type:"structure"` - // If the previous call to the ListNotebookInstances is truncated, the response - // includes a NextToken. You can use this token in your subsequent ListNotebookInstances - // request to fetch the next set of notebook instances. + // The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to + // encrypt the model artifacts at rest using Amazon S3 server-side encryption. + // The KmsKeyId can be any of the following formats: // - // You might specify a filter or a sort order in your request. When response - // is truncated, you must use the same values for the filer and sort order in - // the next request. - NextToken *string `type:"string"` - - // A string in the name of a notebook instances lifecycle configuration associated - // with this notebook instance. This filter returns only notebook instances - // associated with a lifecycle configuration with a name that contains the specified - // string. - NotebookInstanceLifecycleConfigNameContains *string `type:"string"` - - // The field to sort results by. The default is Name. - SortBy *string `type:"string" enum:"NotebookInstanceSortKey"` - - // The sort order for results. - SortOrder *string `type:"string" enum:"NotebookInstanceSortOrder"` + // * // KMS Key ID + // + // "1234abcd-12ab-34cd-56ef-1234567890ab" + // + // * // Amazon Resource Name (ARN) of a KMS Key + // + // "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" + // + // * // KMS Key Alias + // + // "alias/ExampleAlias" + // + // * // Amazon Resource Name (ARN) of a KMS Key Alias + // + // "arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias" + // + // If you don't provide the KMS key ID, Amazon SageMaker uses the default KMS + // key for Amazon S3 for your role's account. For more information, see KMS-Managed + // Encryption Keys (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html) + // in Amazon Simple Storage Service Developer Guide. + // + // The KMS key policy must grant permission to the IAM role that you specify + // in your CreateTrainingJob request. Using Key Policies in AWS KMS (http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) + // in the AWS Key Management Service Developer Guide. + KmsKeyId *string `type:"string"` - // A filter that returns only notebook instances with the specified status. - StatusEquals *string `type:"string" enum:"NotebookInstanceStatus"` + // Identifies the S3 path where you want Amazon SageMaker to store the model + // artifacts. For example, s3://bucket-name/key-name-prefix. + // + // S3OutputPath is a required field + S3OutputPath *string `type:"string" required:"true"` } // String returns the string representation -func (s ListNotebookInstancesInput) String() string { +func (s OutputDataConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListNotebookInstancesInput) GoString() string { +func (s OutputDataConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListNotebookInstancesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListNotebookInstancesInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *OutputDataConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputDataConfig"} + if s.S3OutputPath == nil { + invalidParams.Add(request.NewErrParamRequired("S3OutputPath")) } if invalidParams.Len() > 0 { @@ -6606,141 +11048,212 @@ func (s *ListNotebookInstancesInput) Validate() error { return nil } -// SetCreationTimeAfter sets the CreationTimeAfter field's value. -func (s *ListNotebookInstancesInput) SetCreationTimeAfter(v time.Time) *ListNotebookInstancesInput { - s.CreationTimeAfter = &v +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *OutputDataConfig) SetKmsKeyId(v string) *OutputDataConfig { + s.KmsKeyId = &v return s } -// SetCreationTimeBefore sets the CreationTimeBefore field's value. -func (s *ListNotebookInstancesInput) SetCreationTimeBefore(v time.Time) *ListNotebookInstancesInput { - s.CreationTimeBefore = &v +// SetS3OutputPath sets the S3OutputPath field's value. +func (s *OutputDataConfig) SetS3OutputPath(v string) *OutputDataConfig { + s.S3OutputPath = &v return s } -// SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. -func (s *ListNotebookInstancesInput) SetLastModifiedTimeAfter(v time.Time) *ListNotebookInstancesInput { - s.LastModifiedTimeAfter = &v - return s -} +// Specifies ranges of integer, continuous, and categorical hyperparameters +// that a hyperparameter tuning job searches. The hyperparameter tuning job +// launches training jobs with hyperparameter values within these ranges to +// find the combination of values that result in the training job with the best +// performance as measured by the objective metric of the hyperparameter tuning +// job. +// +// You can specify a maximum of 20 hyperparameters that a hyperparameter tuning +// job can search over. Every possible value of a categorical parameter range +// counts against this limit. +type ParameterRanges struct { + _ struct{} `type:"structure"` -// SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. -func (s *ListNotebookInstancesInput) SetLastModifiedTimeBefore(v time.Time) *ListNotebookInstancesInput { - s.LastModifiedTimeBefore = &v - return s -} + // The array of CategoricalParameterRange objects that specify ranges of categorical + // hyperparameters that a hyperparameter tuning job searches. + CategoricalParameterRanges []*CategoricalParameterRange `type:"list"` -// SetMaxResults sets the MaxResults field's value. -func (s *ListNotebookInstancesInput) SetMaxResults(v int64) *ListNotebookInstancesInput { - s.MaxResults = &v - return s -} + // The array of ContinuousParameterRange objects that specify ranges of continuous + // hyperparameters that a hyperparameter tuning job searches. + ContinuousParameterRanges []*ContinuousParameterRange `type:"list"` -// SetNameContains sets the NameContains field's value. -func (s *ListNotebookInstancesInput) SetNameContains(v string) *ListNotebookInstancesInput { - s.NameContains = &v - return s + // The array of IntegerParameterRange objects that specify ranges of integer + // hyperparameters that a hyperparameter tuning job searches. 
+ IntegerParameterRanges []*IntegerParameterRange `type:"list"` +} + +// String returns the string representation +func (s ParameterRanges) String() string { + return awsutil.Prettify(s) } -// SetNextToken sets the NextToken field's value. -func (s *ListNotebookInstancesInput) SetNextToken(v string) *ListNotebookInstancesInput { - s.NextToken = &v - return s +// GoString returns the string representation +func (s ParameterRanges) GoString() string { + return s.String() } -// SetNotebookInstanceLifecycleConfigNameContains sets the NotebookInstanceLifecycleConfigNameContains field's value. -func (s *ListNotebookInstancesInput) SetNotebookInstanceLifecycleConfigNameContains(v string) *ListNotebookInstancesInput { - s.NotebookInstanceLifecycleConfigNameContains = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParameterRanges) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParameterRanges"} + if s.CategoricalParameterRanges != nil { + for i, v := range s.CategoricalParameterRanges { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CategoricalParameterRanges", i), err.(request.ErrInvalidParams)) + } + } + } + if s.ContinuousParameterRanges != nil { + for i, v := range s.ContinuousParameterRanges { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ContinuousParameterRanges", i), err.(request.ErrInvalidParams)) + } + } + } + if s.IntegerParameterRanges != nil { + for i, v := range s.IntegerParameterRanges { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "IntegerParameterRanges", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetSortBy sets the SortBy field's value. -func (s *ListNotebookInstancesInput) SetSortBy(v string) *ListNotebookInstancesInput { - s.SortBy = &v +// SetCategoricalParameterRanges sets the CategoricalParameterRanges field's value. +func (s *ParameterRanges) SetCategoricalParameterRanges(v []*CategoricalParameterRange) *ParameterRanges { + s.CategoricalParameterRanges = v return s } -// SetSortOrder sets the SortOrder field's value. -func (s *ListNotebookInstancesInput) SetSortOrder(v string) *ListNotebookInstancesInput { - s.SortOrder = &v +// SetContinuousParameterRanges sets the ContinuousParameterRanges field's value. +func (s *ParameterRanges) SetContinuousParameterRanges(v []*ContinuousParameterRange) *ParameterRanges { + s.ContinuousParameterRanges = v return s } -// SetStatusEquals sets the StatusEquals field's value. -func (s *ListNotebookInstancesInput) SetStatusEquals(v string) *ListNotebookInstancesInput { - s.StatusEquals = &v +// SetIntegerParameterRanges sets the IntegerParameterRanges field's value. +func (s *ParameterRanges) SetIntegerParameterRanges(v []*IntegerParameterRange) *ParameterRanges { + s.IntegerParameterRanges = v return s } -type ListNotebookInstancesOutput struct { +// A previously completed or stopped hyperparameter tuning job to be used as +// a starting point for a new hyperparameter tuning job. +type ParentHyperParameterTuningJob struct { _ struct{} `type:"structure"` - // If the response to the previous ListNotebookInstances request was truncated, - // Amazon SageMaker returns this token. To retrieve the next set of notebook - // instances, use the token in the next request. 
- NextToken *string `type:"string"` - - // An array of NotebookInstanceSummary objects, one for each notebook instance. - NotebookInstances []*NotebookInstanceSummary `type:"list"` + // The name of the hyperparameter tuning job to be used as a starting point + // for a new hyperparameter tuning job. + HyperParameterTuningJobName *string `min:"1" type:"string"` } // String returns the string representation -func (s ListNotebookInstancesOutput) String() string { +func (s ParentHyperParameterTuningJob) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListNotebookInstancesOutput) GoString() string { +func (s ParentHyperParameterTuningJob) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListNotebookInstancesOutput) SetNextToken(v string) *ListNotebookInstancesOutput { - s.NextToken = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParentHyperParameterTuningJob) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParentHyperParameterTuningJob"} + if s.HyperParameterTuningJobName != nil && len(*s.HyperParameterTuningJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("HyperParameterTuningJobName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetNotebookInstances sets the NotebookInstances field's value. -func (s *ListNotebookInstancesOutput) SetNotebookInstances(v []*NotebookInstanceSummary) *ListNotebookInstancesOutput { - s.NotebookInstances = v +// SetHyperParameterTuningJobName sets the HyperParameterTuningJobName field's value. +func (s *ParentHyperParameterTuningJob) SetHyperParameterTuningJobName(v string) *ParentHyperParameterTuningJob { + s.HyperParameterTuningJobName = &v return s } -type ListTagsInput struct { +// Identifies a model that you want to host and the resources to deploy for +// hosting it. If you are deploying multiple models, tell Amazon SageMaker how +// to distribute traffic among the models by specifying variant weights. +type ProductionVariant struct { _ struct{} `type:"structure"` - // Maximum number of tags to return. - MaxResults *int64 `min:"50" type:"integer"` + // Number of instances to launch initially. + // + // InitialInstanceCount is a required field + InitialInstanceCount *int64 `min:"1" type:"integer" required:"true"` - // If the response to the previous ListTags request is truncated, Amazon SageMaker - // returns this token. To retrieve the next set of tags, use it in the subsequent - // request. - NextToken *string `type:"string"` + // Determines initial traffic distribution among all of the models that you + // specify in the endpoint configuration. The traffic to a production variant + // is determined by the ratio of the VariantWeight to the sum of all VariantWeight + // values across all ProductionVariants. If unspecified, it defaults to 1.0. + InitialVariantWeight *float64 `type:"float"` - // The Amazon Resource Name (ARN) of the resource whose tags you want to retrieve. + // The ML compute instance type. // - // ResourceArn is a required field - ResourceArn *string `type:"string" required:"true"` + // InstanceType is a required field + InstanceType *string `type:"string" required:"true" enum:"ProductionVariantInstanceType"` + + // The name of the model that you want to host. This is the name that you specified + // when creating the model. 
+ // + // ModelName is a required field + ModelName *string `type:"string" required:"true"` + + // The name of the production variant. + // + // VariantName is a required field + VariantName *string `type:"string" required:"true"` } // String returns the string representation -func (s ListTagsInput) String() string { +func (s ProductionVariant) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTagsInput) GoString() string { +func (s ProductionVariant) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ListTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListTagsInput"} - if s.MaxResults != nil && *s.MaxResults < 50 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 50)) +func (s *ProductionVariant) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ProductionVariant"} + if s.InitialInstanceCount == nil { + invalidParams.Add(request.NewErrParamRequired("InitialInstanceCount")) } - if s.ResourceArn == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + if s.InitialInstanceCount != nil && *s.InitialInstanceCount < 1 { + invalidParams.Add(request.NewErrParamMinValue("InitialInstanceCount", 1)) + } + if s.InstanceType == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceType")) + } + if s.ModelName == nil { + invalidParams.Add(request.NewErrParamRequired("ModelName")) + } + if s.VariantName == nil { + invalidParams.Add(request.NewErrParamRequired("VariantName")) } if invalidParams.Len() > 0 { @@ -6749,112 +11262,186 @@ func (s *ListTagsInput) Validate() error { return nil } -// SetMaxResults sets the MaxResults field's value. -func (s *ListTagsInput) SetMaxResults(v int64) *ListTagsInput { - s.MaxResults = &v +// SetInitialInstanceCount sets the InitialInstanceCount field's value. +func (s *ProductionVariant) SetInitialInstanceCount(v int64) *ProductionVariant { + s.InitialInstanceCount = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *ListTagsInput) SetNextToken(v string) *ListTagsInput { - s.NextToken = &v +// SetInitialVariantWeight sets the InitialVariantWeight field's value. +func (s *ProductionVariant) SetInitialVariantWeight(v float64) *ProductionVariant { + s.InitialVariantWeight = &v return s } -// SetResourceArn sets the ResourceArn field's value. -func (s *ListTagsInput) SetResourceArn(v string) *ListTagsInput { - s.ResourceArn = &v +// SetInstanceType sets the InstanceType field's value. +func (s *ProductionVariant) SetInstanceType(v string) *ProductionVariant { + s.InstanceType = &v return s } -type ListTagsOutput struct { +// SetModelName sets the ModelName field's value. +func (s *ProductionVariant) SetModelName(v string) *ProductionVariant { + s.ModelName = &v + return s +} + +// SetVariantName sets the VariantName field's value. +func (s *ProductionVariant) SetVariantName(v string) *ProductionVariant { + s.VariantName = &v + return s +} + +// Describes weight and capacities for a production variant associated with +// an endpoint. If you sent a request to the UpdateEndpointWeightsAndCapacities +// API and the endpoint status is Updating, you get different desired and current +// values. +type ProductionVariantSummary struct { _ struct{} `type:"structure"` - // If response is truncated, Amazon SageMaker includes a token in the response. - // You can use this token in your subsequent request to fetch next set of tokens. 
- NextToken *string `type:"string"` + // The number of instances associated with the variant. + CurrentInstanceCount *int64 `min:"1" type:"integer"` - // An array of Tag objects, each with a tag key and a value. - Tags []*Tag `type:"list"` + // The weight associated with the variant. + CurrentWeight *float64 `type:"float"` + + // An array of DeployedImage objects that specify the Amazon EC2 Container Registry + // paths of the inference images deployed on instances of this ProductionVariant. + DeployedImages []*DeployedImage `type:"list"` + + // The number of instances requested in the UpdateEndpointWeightsAndCapacities + // request. + DesiredInstanceCount *int64 `min:"1" type:"integer"` + + // The requested weight, as specified in the UpdateEndpointWeightsAndCapacities + // request. + DesiredWeight *float64 `type:"float"` + + // The name of the variant. + // + // VariantName is a required field + VariantName *string `type:"string" required:"true"` } // String returns the string representation -func (s ListTagsOutput) String() string { +func (s ProductionVariantSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTagsOutput) GoString() string { +func (s ProductionVariantSummary) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListTagsOutput) SetNextToken(v string) *ListTagsOutput { - s.NextToken = &v +// SetCurrentInstanceCount sets the CurrentInstanceCount field's value. +func (s *ProductionVariantSummary) SetCurrentInstanceCount(v int64) *ProductionVariantSummary { + s.CurrentInstanceCount = &v return s } -// SetTags sets the Tags field's value. -func (s *ListTagsOutput) SetTags(v []*Tag) *ListTagsOutput { - s.Tags = v +// SetCurrentWeight sets the CurrentWeight field's value. +func (s *ProductionVariantSummary) SetCurrentWeight(v float64) *ProductionVariantSummary { + s.CurrentWeight = &v return s } -type ListTrainingJobsInput struct { - _ struct{} `type:"structure"` - - // A filter that only training jobs created after the specified time (timestamp). - CreationTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` - - // A filter that returns only training jobs created before the specified time - // (timestamp). - CreationTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` +// SetDeployedImages sets the DeployedImages field's value. +func (s *ProductionVariantSummary) SetDeployedImages(v []*DeployedImage) *ProductionVariantSummary { + s.DeployedImages = v + return s +} - // A filter that returns only training jobs modified after the specified time - // (timestamp). - LastModifiedTimeAfter *time.Time `type:"timestamp" timestampFormat:"unix"` +// SetDesiredInstanceCount sets the DesiredInstanceCount field's value. +func (s *ProductionVariantSummary) SetDesiredInstanceCount(v int64) *ProductionVariantSummary { + s.DesiredInstanceCount = &v + return s +} - // A filter that returns only training jobs modified before the specified time - // (timestamp). - LastModifiedTimeBefore *time.Time `type:"timestamp" timestampFormat:"unix"` +// SetDesiredWeight sets the DesiredWeight field's value. +func (s *ProductionVariantSummary) SetDesiredWeight(v float64) *ProductionVariantSummary { + s.DesiredWeight = &v + return s +} - // The maximum number of training jobs to return in the response. - MaxResults *int64 `min:"1" type:"integer"` +// SetVariantName sets the VariantName field's value. 
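To make the `InitialVariantWeight` semantics above concrete: traffic is split by the ratio of each weight to the sum of all weights, so two variants weighted 3.0 and 1.0 receive 75% and 25% of requests. A short sketch (variant, model, and instance type names are illustrative):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	a := &sagemaker.ProductionVariant{}
	a.SetVariantName("variant-a")
	a.SetModelName("example-model-a") // hypothetical model name
	a.SetInstanceType("ml.m4.xlarge") // illustrative instance type
	a.SetInitialInstanceCount(1)
	a.SetInitialVariantWeight(3.0)

	b := &sagemaker.ProductionVariant{}
	b.SetVariantName("variant-b")
	b.SetModelName("example-model-b")
	b.SetInstanceType("ml.m4.xlarge")
	b.SetInitialInstanceCount(1)
	b.SetInitialVariantWeight(1.0)

	// Traffic share = weight / sum of all weights.
	wa, wb := *a.InitialVariantWeight, *b.InitialVariantWeight
	total := wa + wb
	fmt.Printf("variant-a: %.0f%% of traffic\n", 100*wa/total) // 75%
	fmt.Printf("variant-b: %.0f%% of traffic\n", 100*wb/total) // 25%
}
```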
+func (s *ProductionVariantSummary) SetVariantName(v string) *ProductionVariantSummary { + s.VariantName = &v + return s +} - // A string in the training job name. This filter returns only models whose - // name contains the specified string. - NameContains *string `type:"string"` +// Describes the resources, including ML compute instances and ML storage volumes, +// to use for model training. +type ResourceConfig struct { + _ struct{} `type:"structure"` - // If the result of the previous ListTrainingJobs request was truncated, the - // response includes a NextToken. To retrieve the next set of training jobs, - // use the token in the next request. - NextToken *string `type:"string"` + // The number of ML compute instances to use. For distributed training, provide + // a value greater than 1. + // + // InstanceCount is a required field + InstanceCount *int64 `min:"1" type:"integer" required:"true"` - // The field to sort results by. The default is CreationTime. - SortBy *string `type:"string" enum:"SortBy"` + // The ML compute instance type. + // + // InstanceType is a required field + InstanceType *string `type:"string" required:"true" enum:"TrainingInstanceType"` - // The sort order for results. The default is Ascending. - SortOrder *string `type:"string" enum:"SortOrder"` + // The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to + // encrypt data on the storage volume attached to the ML compute instance(s) + // that run the training job. The VolumeKmsKeyId can be any of the following + // formats: + // + // * // KMS Key ID + // + // "1234abcd-12ab-34cd-56ef-1234567890ab" + // + // * // Amazon Resource Name (ARN) of a KMS Key + // + // "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" + VolumeKmsKeyId *string `type:"string"` - // A filter that retrieves only training jobs with a specific status. - StatusEquals *string `type:"string" enum:"TrainingJobStatus"` + // The size of the ML storage volume that you want to provision. + // + // ML storage volumes store model artifacts and incremental states. Training + // algorithms might also use the ML storage volume for scratch space. If you + // want to store the training data in the ML storage volume, choose File as + // the TrainingInputMode in the algorithm specification. + // + // You must specify sufficient ML storage for your scenario. + // + // Amazon SageMaker supports only the General Purpose SSD (gp2) ML storage volume + // type. + // + // VolumeSizeInGB is a required field + VolumeSizeInGB *int64 `min:"1" type:"integer" required:"true"` } // String returns the string representation -func (s ListTrainingJobsInput) String() string { +func (s ResourceConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTrainingJobsInput) GoString() string { +func (s ResourceConfig) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *ListTrainingJobsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ListTrainingJobsInput"} - if s.MaxResults != nil && *s.MaxResults < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) +func (s *ResourceConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceConfig"} + if s.InstanceCount == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceCount")) + } + if s.InstanceCount != nil && *s.InstanceCount < 1 { + invalidParams.Add(request.NewErrParamMinValue("InstanceCount", 1)) + } + if s.InstanceType == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceType")) + } + if s.VolumeSizeInGB == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeSizeInGB")) + } + if s.VolumeSizeInGB != nil && *s.VolumeSizeInGB < 1 { + invalidParams.Add(request.NewErrParamMinValue("VolumeSizeInGB", 1)) } if invalidParams.Len() > 0 { @@ -6863,260 +11450,469 @@ func (s *ListTrainingJobsInput) Validate() error { return nil } -// SetCreationTimeAfter sets the CreationTimeAfter field's value. -func (s *ListTrainingJobsInput) SetCreationTimeAfter(v time.Time) *ListTrainingJobsInput { - s.CreationTimeAfter = &v +// SetInstanceCount sets the InstanceCount field's value. +func (s *ResourceConfig) SetInstanceCount(v int64) *ResourceConfig { + s.InstanceCount = &v return s } -// SetCreationTimeBefore sets the CreationTimeBefore field's value. -func (s *ListTrainingJobsInput) SetCreationTimeBefore(v time.Time) *ListTrainingJobsInput { - s.CreationTimeBefore = &v +// SetInstanceType sets the InstanceType field's value. +func (s *ResourceConfig) SetInstanceType(v string) *ResourceConfig { + s.InstanceType = &v return s } -// SetLastModifiedTimeAfter sets the LastModifiedTimeAfter field's value. -func (s *ListTrainingJobsInput) SetLastModifiedTimeAfter(v time.Time) *ListTrainingJobsInput { - s.LastModifiedTimeAfter = &v +// SetVolumeKmsKeyId sets the VolumeKmsKeyId field's value. +func (s *ResourceConfig) SetVolumeKmsKeyId(v string) *ResourceConfig { + s.VolumeKmsKeyId = &v return s } -// SetLastModifiedTimeBefore sets the LastModifiedTimeBefore field's value. -func (s *ListTrainingJobsInput) SetLastModifiedTimeBefore(v time.Time) *ListTrainingJobsInput { - s.LastModifiedTimeBefore = &v +// SetVolumeSizeInGB sets the VolumeSizeInGB field's value. +func (s *ResourceConfig) SetVolumeSizeInGB(v int64) *ResourceConfig { + s.VolumeSizeInGB = &v return s } -// SetMaxResults sets the MaxResults field's value. -func (s *ListTrainingJobsInput) SetMaxResults(v int64) *ListTrainingJobsInput { - s.MaxResults = &v - return s +// Specifies the maximum number of training jobs and parallel training jobs +// that a hyperparameter tuning job can launch. +type ResourceLimits struct { + _ struct{} `type:"structure"` + + // The maximum number of training jobs that a hyperparameter tuning job can + // launch. + // + // MaxNumberOfTrainingJobs is a required field + MaxNumberOfTrainingJobs *int64 `min:"1" type:"integer" required:"true"` + + // The maximum number of concurrent training jobs that a hyperparameter tuning + // job can launch. + // + // MaxParallelTrainingJobs is a required field + MaxParallelTrainingJobs *int64 `min:"1" type:"integer" required:"true"` } -// SetNameContains sets the NameContains field's value. 
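A quick sketch of the `ResourceConfig` fields above, using placeholder values; the instance type is illustrative, and the KMS key ID is the example ID from the field documentation:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	cfg := &sagemaker.ResourceConfig{}
	cfg.SetInstanceCount(2)             // distributed training uses more than one instance
	cfg.SetInstanceType("ml.p2.xlarge") // illustrative training instance type
	cfg.SetVolumeSizeInGB(50)           // gp2 is the only supported volume type
	// Optional: a bare key ID or a full KMS key ARN.
	cfg.SetVolumeKmsKeyId("1234abcd-12ab-34cd-56ef-1234567890ab")

	if err := cfg.Validate(); err != nil {
		fmt.Println("invalid resource config:", err)
		return
	}
	fmt.Println(cfg)
}
```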
-func (s *ListTrainingJobsInput) SetNameContains(v string) *ListTrainingJobsInput { - s.NameContains = &v - return s +// String returns the string representation +func (s ResourceLimits) String() string { + return awsutil.Prettify(s) } -// SetNextToken sets the NextToken field's value. -func (s *ListTrainingJobsInput) SetNextToken(v string) *ListTrainingJobsInput { - s.NextToken = &v - return s +// GoString returns the string representation +func (s ResourceLimits) GoString() string { + return s.String() } -// SetSortBy sets the SortBy field's value. -func (s *ListTrainingJobsInput) SetSortBy(v string) *ListTrainingJobsInput { - s.SortBy = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResourceLimits) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResourceLimits"} + if s.MaxNumberOfTrainingJobs == nil { + invalidParams.Add(request.NewErrParamRequired("MaxNumberOfTrainingJobs")) + } + if s.MaxNumberOfTrainingJobs != nil && *s.MaxNumberOfTrainingJobs < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxNumberOfTrainingJobs", 1)) + } + if s.MaxParallelTrainingJobs == nil { + invalidParams.Add(request.NewErrParamRequired("MaxParallelTrainingJobs")) + } + if s.MaxParallelTrainingJobs != nil && *s.MaxParallelTrainingJobs < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxParallelTrainingJobs", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetSortOrder sets the SortOrder field's value. -func (s *ListTrainingJobsInput) SetSortOrder(v string) *ListTrainingJobsInput { - s.SortOrder = &v +// SetMaxNumberOfTrainingJobs sets the MaxNumberOfTrainingJobs field's value. +func (s *ResourceLimits) SetMaxNumberOfTrainingJobs(v int64) *ResourceLimits { + s.MaxNumberOfTrainingJobs = &v return s } -// SetStatusEquals sets the StatusEquals field's value. -func (s *ListTrainingJobsInput) SetStatusEquals(v string) *ListTrainingJobsInput { - s.StatusEquals = &v +// SetMaxParallelTrainingJobs sets the MaxParallelTrainingJobs field's value. +func (s *ResourceLimits) SetMaxParallelTrainingJobs(v int64) *ResourceLimits { + s.MaxParallelTrainingJobs = &v return s } -type ListTrainingJobsOutput struct { +// Describes the S3 data source. +type S3DataSource struct { _ struct{} `type:"structure"` - // If the response is truncated, Amazon SageMaker returns this token. To retrieve - // the next set of training jobs, use it in the subsequent request. - NextToken *string `type:"string"` + // If you want Amazon SageMaker to replicate the entire dataset on each ML compute + // instance that is launched for model training, specify FullyReplicated. + // + // If you want Amazon SageMaker to replicate a subset of data on each ML compute + // instance that is launched for model training, specify ShardedByS3Key. If + // there are n ML compute instances launched for a training job, each instance + // gets approximately 1/n of the number of S3 objects. In this case, model training + // on each machine uses only the subset of training data. + // + // Don't choose more ML compute instances for training than available S3 objects. + // If you do, some nodes won't get any data and you will pay for nodes that + // aren't getting any training data. This applies in both FILE and PIPE modes. + // Keep this in mind when developing algorithms. + // + // In distributed training, where you use multiple ML compute EC2 instances, + // you might choose ShardedByS3Key. 
If the algorithm requires copying training + // data to the ML storage volume (when TrainingInputMode is set to File), this + // copies 1/n of the number of objects. + S3DataDistributionType *string `type:"string" enum:"S3DataDistribution"` - // An array of TrainingJobSummary objects, each listing a training job. + // If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker + // uses all objects with the specified key name prefix for model training. // - // TrainingJobSummaries is a required field - TrainingJobSummaries []*TrainingJobSummary `type:"list" required:"true"` + // If you choose ManifestFile, S3Uri identifies an object that is a manifest + // file containing a list of object keys that you want Amazon SageMaker to use + // for model training. + // + // S3DataType is a required field + S3DataType *string `type:"string" required:"true" enum:"S3DataType"` + + // Depending on the value specified for the S3DataType, identifies either a + // key name prefix or a manifest. For example: + // + // * A key name prefix might look like this: s3://bucketname/exampleprefix. + // + // + // * A manifest might look like this: s3://bucketname/example.manifest + // + // The manifest is an S3 object which is a JSON file with the following format: + // + // + // [ + // + // {"prefix": "s3://customer_bucket/some/prefix/"}, + // + // "relative/path/to/custdata-1", + // + // "relative/path/custdata-2", + // + // ... + // + // ] + // + // The preceding JSON matches the following s3Uris: + // + // s3://customer_bucket/some/prefix/relative/path/to/custdata-1 + // + // s3://customer_bucket/some/prefix/relative/path/custdata-2 + // + // ... + // + // The complete set of s3uris in this manifest constitutes the input data for + // the channel for this datasource. The object that each s3uris points to + // must readable by the IAM role that Amazon SageMaker uses to perform tasks + // on your behalf. + // + // S3Uri is a required field + S3Uri *string `type:"string" required:"true"` } // String returns the string representation -func (s ListTrainingJobsOutput) String() string { +func (s S3DataSource) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ListTrainingJobsOutput) GoString() string { +func (s S3DataSource) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *ListTrainingJobsOutput) SetNextToken(v string) *ListTrainingJobsOutput { - s.NextToken = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *S3DataSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "S3DataSource"} + if s.S3DataType == nil { + invalidParams.Add(request.NewErrParamRequired("S3DataType")) + } + if s.S3Uri == nil { + invalidParams.Add(request.NewErrParamRequired("S3Uri")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetS3DataDistributionType sets the S3DataDistributionType field's value. +func (s *S3DataSource) SetS3DataDistributionType(v string) *S3DataSource { + s.S3DataDistributionType = &v return s } -// SetTrainingJobSummaries sets the TrainingJobSummaries field's value. -func (s *ListTrainingJobsOutput) SetTrainingJobSummaries(v []*TrainingJobSummary) *ListTrainingJobsOutput { - s.TrainingJobSummaries = v +// SetS3DataType sets the S3DataType field's value. 
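To put numbers on the sharding description above: with `ShardedByS3Key` and n training instances, each instance receives roughly 1/n of the S3 objects under the prefix, whereas `FullyReplicated` copies every object to every instance. A hedged sketch, reusing the bucket and prefix from the examples in the documentation above:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	src := &sagemaker.S3DataSource{}
	src.SetS3DataType("S3Prefix") // use every object under the key name prefix
	src.SetS3Uri("s3://bucketname/exampleprefix")
	src.SetS3DataDistributionType("ShardedByS3Key")

	if err := src.Validate(); err != nil {
		fmt.Println("invalid data source:", err)
		return
	}

	// With ShardedByS3Key, 4 instances and 1000 objects means roughly
	// 1000/4 = 250 objects per instance; FullyReplicated would download
	// all 1000 objects to each instance.
	fmt.Println(src)
}
```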
+func (s *S3DataSource) SetS3DataType(v string) *S3DataSource { + s.S3DataType = &v return s } -// Provides information about the location that is configured for storing model -// artifacts. -type ModelArtifacts struct { +// SetS3Uri sets the S3Uri field's value. +func (s *S3DataSource) SetS3Uri(v string) *S3DataSource { + s.S3Uri = &v + return s +} + +// An array element of DescribeTrainingJobResponse$SecondaryStatusTransitions. +// It provides additional details about a status that the training job has transitioned +// through. A training job can be in one of several states, for example, starting, +// downloading, training, or uploading. Within each state, there are a number +// of intermediate states. For example, within the starting state, Amazon SageMaker +// could be starting the training job or launching the ML instances. These transitional +// states are referred to as the job's secondary status. +type SecondaryStatusTransition struct { _ struct{} `type:"structure"` - // The path of the S3 object that contains the model artifacts. For example, - // s3://bucket-name/keynameprefix/model.tar.gz. + // A timestamp that shows when the training job transitioned out of this secondary + // status state into another secondary status state or when the training job + // has ended. + EndTime *time.Time `type:"timestamp"` + + // A timestamp that shows when the training job transitioned to the current + // secondary status state. + // + // StartTime is a required field + StartTime *time.Time `type:"timestamp" required:"true"` + + // Contains a secondary status information from a training job. + // + // Status might be one of the following secondary statuses: + // + // InProgressStarting - Starting the training job. + // + // Downloading - An optional stage for algorithms that support File training + // input mode. It indicates that data is being downloaded to the ML storage + // volumes. + // + // Training - Training is in progress. + // + // Uploading - Training is complete and the model artifacts are being uploaded + // to the S3 location. + // + // CompletedCompleted - The training job has completed. + // + // FailedFailed - The training job has failed. The reason for the failure is + // returned in the FailureReason field of DescribeTrainingJobResponse. + // + // StoppedMaxRuntimeExceeded - The job stopped because it exceeded the maximum + // allowed runtime. + // + // Stopped - The training job has stopped. + // + // StoppingStopping - Stopping the training job. + // + // We no longer support the following secondary statuses: + // + // * LaunchingMLInstances + // + // * PreparingTrainingStack + // + // * DownloadingTrainingImage + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"SecondaryStatus"` + + // A detailed description of the progress within a secondary status. + // + // Amazon SageMaker provides secondary statuses and status messages that apply + // to each of them: + // + // StartingStarting the training job. + // + // Launching requested ML instances. // - // S3ModelArtifacts is a required field - S3ModelArtifacts *string `type:"string" required:"true"` + // Insufficient capacity error from EC2 while launching instances, retrying! + // + // Launched instance was unhealthy, replacing it! + // + // Preparing the instances for training. + // + // TrainingDownloading the training image. + // + // Training image download completed. Training in progress. + // + // Status messages are subject to change. 
Therefore, we recommend not including + // them in code that programmatically initiates actions. For examples, don't + // use status messages in if statements. + // + // To have an overview of your training job's progress, view TrainingJobStatus + // and SecondaryStatus in DescribeTrainingJobResponse, and StatusMessage together. + // For example, at the start of a training job, you might see the following: + // + // * TrainingJobStatus - InProgress + // + // * SecondaryStatus - Training + // + // * StatusMessage - Downloading the training image + StatusMessage *string `type:"string"` } // String returns the string representation -func (s ModelArtifacts) String() string { +func (s SecondaryStatusTransition) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ModelArtifacts) GoString() string { +func (s SecondaryStatusTransition) GoString() string { return s.String() } -// SetS3ModelArtifacts sets the S3ModelArtifacts field's value. -func (s *ModelArtifacts) SetS3ModelArtifacts(v string) *ModelArtifacts { - s.S3ModelArtifacts = &v +// SetEndTime sets the EndTime field's value. +func (s *SecondaryStatusTransition) SetEndTime(v time.Time) *SecondaryStatusTransition { + s.EndTime = &v return s } -// Provides summary information about a model. -type ModelSummary struct { - _ struct{} `type:"structure"` +// SetStartTime sets the StartTime field's value. +func (s *SecondaryStatusTransition) SetStartTime(v time.Time) *SecondaryStatusTransition { + s.StartTime = &v + return s +} - // A timestamp that indicates when the model was created. - // - // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` +// SetStatus sets the Status field's value. +func (s *SecondaryStatusTransition) SetStatus(v string) *SecondaryStatusTransition { + s.Status = &v + return s +} - // The Amazon Resource Name (ARN) of the model. - // - // ModelArn is a required field - ModelArn *string `min:"20" type:"string" required:"true"` +// SetStatusMessage sets the StatusMessage field's value. +func (s *SecondaryStatusTransition) SetStatusMessage(v string) *SecondaryStatusTransition { + s.StatusMessage = &v + return s +} - // The name of the model that you want a summary for. +type StartNotebookInstanceInput struct { + _ struct{} `type:"structure"` + + // The name of the notebook instance to start. // - // ModelName is a required field - ModelName *string `type:"string" required:"true"` + // NotebookInstanceName is a required field + NotebookInstanceName *string `type:"string" required:"true"` } // String returns the string representation -func (s ModelSummary) String() string { +func (s StartNotebookInstanceInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ModelSummary) GoString() string { +func (s StartNotebookInstanceInput) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. -func (s *ModelSummary) SetCreationTime(v time.Time) *ModelSummary { - s.CreationTime = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *StartNotebookInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartNotebookInstanceInput"} + if s.NotebookInstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) + } -// SetModelArn sets the ModelArn field's value. 
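Because the documentation above warns against branching on the free-form `StatusMessage`, a safer pattern is to read only `Status` and the timestamps, for example when printing a training job's secondary-status timeline. The `DescribeTrainingJob` call and its `TrainingJobName` field are assumed from elsewhere in this package:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	sess := session.Must(session.NewSession())
	client := sagemaker.New(sess)

	out, err := client.DescribeTrainingJob(&sagemaker.DescribeTrainingJobInput{
		TrainingJobName: aws.String("example-training-job"), // hypothetical job name
	})
	if err != nil {
		fmt.Println("describe failed:", err)
		return
	}

	// Rely on Status and the timestamps, not StatusMessage.
	for _, t := range out.SecondaryStatusTransitions {
		fmt.Printf("%s: started %s, ended %s\n",
			aws.StringValue(t.Status),
			aws.TimeValue(t.StartTime),
			aws.TimeValue(t.EndTime)) // EndTime is unset for the current status
	}
}
```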
-func (s *ModelSummary) SetModelArn(v string) *ModelSummary { - s.ModelArn = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetModelName sets the ModelName field's value. -func (s *ModelSummary) SetModelName(v string) *ModelSummary { - s.ModelName = &v +// SetNotebookInstanceName sets the NotebookInstanceName field's value. +func (s *StartNotebookInstanceInput) SetNotebookInstanceName(v string) *StartNotebookInstanceInput { + s.NotebookInstanceName = &v return s } -// Provides a summary of a notebook instance lifecycle configuration. -type NotebookInstanceLifecycleConfigSummary struct { +type StartNotebookInstanceOutput struct { _ struct{} `type:"structure"` +} - // A timestamp that tells when the lifecycle configuration was created. - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` +// String returns the string representation +func (s StartNotebookInstanceOutput) String() string { + return awsutil.Prettify(s) +} - // A timestamp that tells when the lifecycle configuration was last modified. - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"unix"` +// GoString returns the string representation +func (s StartNotebookInstanceOutput) GoString() string { + return s.String() +} - // The Amazon Resource Name (ARN) of the lifecycle configuration. - // - // NotebookInstanceLifecycleConfigArn is a required field - NotebookInstanceLifecycleConfigArn *string `type:"string" required:"true"` +type StopHyperParameterTuningJobInput struct { + _ struct{} `type:"structure"` - // The name of the lifecycle configuration. + // The name of the tuning job to stop. // - // NotebookInstanceLifecycleConfigName is a required field - NotebookInstanceLifecycleConfigName *string `type:"string" required:"true"` + // HyperParameterTuningJobName is a required field + HyperParameterTuningJobName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s NotebookInstanceLifecycleConfigSummary) String() string { +func (s StopHyperParameterTuningJobInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s NotebookInstanceLifecycleConfigSummary) GoString() string { +func (s StopHyperParameterTuningJobInput) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. -func (s *NotebookInstanceLifecycleConfigSummary) SetCreationTime(v time.Time) *NotebookInstanceLifecycleConfigSummary { - s.CreationTime = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *StopHyperParameterTuningJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopHyperParameterTuningJobInput"} + if s.HyperParameterTuningJobName == nil { + invalidParams.Add(request.NewErrParamRequired("HyperParameterTuningJobName")) + } + if s.HyperParameterTuningJobName != nil && len(*s.HyperParameterTuningJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("HyperParameterTuningJobName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *NotebookInstanceLifecycleConfigSummary) SetLastModifiedTime(v time.Time) *NotebookInstanceLifecycleConfigSummary { - s.LastModifiedTime = &v +// SetHyperParameterTuningJobName sets the HyperParameterTuningJobName field's value. 
+func (s *StopHyperParameterTuningJobInput) SetHyperParameterTuningJobName(v string) *StopHyperParameterTuningJobInput { + s.HyperParameterTuningJobName = &v return s } -// SetNotebookInstanceLifecycleConfigArn sets the NotebookInstanceLifecycleConfigArn field's value. -func (s *NotebookInstanceLifecycleConfigSummary) SetNotebookInstanceLifecycleConfigArn(v string) *NotebookInstanceLifecycleConfigSummary { - s.NotebookInstanceLifecycleConfigArn = &v - return s +type StopHyperParameterTuningJobOutput struct { + _ struct{} `type:"structure"` } -// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. -func (s *NotebookInstanceLifecycleConfigSummary) SetNotebookInstanceLifecycleConfigName(v string) *NotebookInstanceLifecycleConfigSummary { - s.NotebookInstanceLifecycleConfigName = &v - return s +// String returns the string representation +func (s StopHyperParameterTuningJobOutput) String() string { + return awsutil.Prettify(s) } -// Contains the notebook instance lifecycle configuration script. -// -// This script runs in the path /sbin:bin:/usr/sbin:/usr/bin. -// -// For information about notebook instance lifestyle configurations, see notebook-lifecycle-config. -type NotebookInstanceLifecycleHook struct { +// GoString returns the string representation +func (s StopHyperParameterTuningJobOutput) GoString() string { + return s.String() +} + +type StopNotebookInstanceInput struct { _ struct{} `type:"structure"` - // A base64-encoded string that contains a shell script for a notebook instance - // lifecycle configuration. - Content *string `min:"1" type:"string"` + // The name of the notebook instance to terminate. + // + // NotebookInstanceName is a required field + NotebookInstanceName *string `type:"string" required:"true"` } // String returns the string representation -func (s NotebookInstanceLifecycleHook) String() string { +func (s StopNotebookInstanceInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s NotebookInstanceLifecycleHook) GoString() string { +func (s StopNotebookInstanceInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *NotebookInstanceLifecycleHook) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "NotebookInstanceLifecycleHook"} - if s.Content != nil && len(*s.Content) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Content", 1)) +func (s *StopNotebookInstanceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopNotebookInstanceInput"} + if s.NotebookInstanceName == nil { + invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) } if invalidParams.Len() > 0 { @@ -7125,147 +11921,108 @@ func (s *NotebookInstanceLifecycleHook) Validate() error { return nil } -// SetContent sets the Content field's value. -func (s *NotebookInstanceLifecycleHook) SetContent(v string) *NotebookInstanceLifecycleHook { - s.Content = &v +// SetNotebookInstanceName sets the NotebookInstanceName field's value. +func (s *StopNotebookInstanceInput) SetNotebookInstanceName(v string) *StopNotebookInstanceInput { + s.NotebookInstanceName = &v return s } -// Provides summary information for an Amazon SageMaker notebook instance. -type NotebookInstanceSummary struct { +type StopNotebookInstanceOutput struct { _ struct{} `type:"structure"` - - // A timestamp that shows when the notebook instance was created. 
- CreationTime *time.Time `type:"timestamp" timestampFormat:"unix"` - - // The type of ML compute instance that the notebook instance is running on. - InstanceType *string `type:"string" enum:"InstanceType"` - - // A timestamp that shows when the notebook instance was last modified. - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"unix"` - - // The Amazon Resource Name (ARN) of the notebook instance. - // - // NotebookInstanceArn is a required field - NotebookInstanceArn *string `type:"string" required:"true"` - - // The name of a notebook instance lifecycle configuration associated with this - // notebook instance. - // - // For information about notebook instance lifestyle configurations, see notebook-lifecycle-config. - NotebookInstanceLifecycleConfigName *string `type:"string"` - - // The name of the notebook instance that you want a summary for. - // - // NotebookInstanceName is a required field - NotebookInstanceName *string `type:"string" required:"true"` - - // The status of the notebook instance. - NotebookInstanceStatus *string `type:"string" enum:"NotebookInstanceStatus"` - - // The URL that you use to connect to the Jupyter instance running in your notebook - // instance. - Url *string `type:"string"` } // String returns the string representation -func (s NotebookInstanceSummary) String() string { +func (s StopNotebookInstanceOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s NotebookInstanceSummary) GoString() string { +func (s StopNotebookInstanceOutput) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. -func (s *NotebookInstanceSummary) SetCreationTime(v time.Time) *NotebookInstanceSummary { - s.CreationTime = &v - return s -} +type StopTrainingJobInput struct { + _ struct{} `type:"structure"` -// SetInstanceType sets the InstanceType field's value. -func (s *NotebookInstanceSummary) SetInstanceType(v string) *NotebookInstanceSummary { - s.InstanceType = &v - return s + // The name of the training job to stop. + // + // TrainingJobName is a required field + TrainingJobName *string `min:"1" type:"string" required:"true"` } -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *NotebookInstanceSummary) SetLastModifiedTime(v time.Time) *NotebookInstanceSummary { - s.LastModifiedTime = &v - return s +// String returns the string representation +func (s StopTrainingJobInput) String() string { + return awsutil.Prettify(s) } -// SetNotebookInstanceArn sets the NotebookInstanceArn field's value. -func (s *NotebookInstanceSummary) SetNotebookInstanceArn(v string) *NotebookInstanceSummary { - s.NotebookInstanceArn = &v - return s +// GoString returns the string representation +func (s StopTrainingJobInput) GoString() string { + return s.String() } -// SetNotebookInstanceLifecycleConfigName sets the NotebookInstanceLifecycleConfigName field's value. -func (s *NotebookInstanceSummary) SetNotebookInstanceLifecycleConfigName(v string) *NotebookInstanceSummary { - s.NotebookInstanceLifecycleConfigName = &v - return s +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *StopTrainingJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopTrainingJobInput"} + if s.TrainingJobName == nil { + invalidParams.Add(request.NewErrParamRequired("TrainingJobName")) + } + if s.TrainingJobName != nil && len(*s.TrainingJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TrainingJobName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetNotebookInstanceName sets the NotebookInstanceName field's value. -func (s *NotebookInstanceSummary) SetNotebookInstanceName(v string) *NotebookInstanceSummary { - s.NotebookInstanceName = &v +// SetTrainingJobName sets the TrainingJobName field's value. +func (s *StopTrainingJobInput) SetTrainingJobName(v string) *StopTrainingJobInput { + s.TrainingJobName = &v return s } -// SetNotebookInstanceStatus sets the NotebookInstanceStatus field's value. -func (s *NotebookInstanceSummary) SetNotebookInstanceStatus(v string) *NotebookInstanceSummary { - s.NotebookInstanceStatus = &v - return s +type StopTrainingJobOutput struct { + _ struct{} `type:"structure"` } -// SetUrl sets the Url field's value. -func (s *NotebookInstanceSummary) SetUrl(v string) *NotebookInstanceSummary { - s.Url = &v - return s +// String returns the string representation +func (s StopTrainingJobOutput) String() string { + return awsutil.Prettify(s) } -// Provides information about how to store model training results (model artifacts). -type OutputDataConfig struct { - _ struct{} `type:"structure"` +// GoString returns the string representation +func (s StopTrainingJobOutput) GoString() string { + return s.String() +} - // The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to - // encrypt the model artifacts at rest using Amazon S3 server-side encryption. - // - // If the configuration of the output S3 bucket requires server-side encryption - // for objects, and you don't provide the KMS key ID, Amazon SageMaker uses - // the default service key. For more information, see KMS-Managed Encryption - // Keys (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html) - // in Amazon Simple Storage Service developer guide. - // - // The KMS key policy must grant permission to the IAM role you specify in your - // CreateTrainingJob request. Using Key Policies in AWS KMS (http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) - // in the AWS Key Management Service Developer Guide. - KmsKeyId *string `type:"string"` +type StopTransformJobInput struct { + _ struct{} `type:"structure"` - // Identifies the S3 path where you want Amazon SageMaker to store the model - // artifacts. For example, s3://bucket-name/key-name-prefix. + // The name of the transform job to stop. // - // S3OutputPath is a required field - S3OutputPath *string `type:"string" required:"true"` + // TransformJobName is a required field + TransformJobName *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s OutputDataConfig) String() string { +func (s StopTransformJobInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s OutputDataConfig) GoString() string { +func (s StopTransformJobInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
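The Stop*Input shapes above all follow the same pattern: a single required name whose minimum length Validate enforces before any request is sent. A small sketch calling StopTrainingJob, assuming the generated operation method defined earlier in this file:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	sess := session.Must(session.NewSession())
	client := sagemaker.New(sess)

	input := &sagemaker.StopTrainingJobInput{
		TrainingJobName: aws.String("example-training-job"), // hypothetical name
	}

	// Catch a missing or too-short name locally; the generated operation
	// performs the same validation before sending the request.
	if err := input.Validate(); err != nil {
		fmt.Println("bad input:", err)
		return
	}

	if _, err := client.StopTrainingJob(input); err != nil {
		fmt.Println("stop failed:", err)
	}
}
```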
-func (s *OutputDataConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "OutputDataConfig"} - if s.S3OutputPath == nil { - invalidParams.Add(request.NewErrParamRequired("S3OutputPath")) +func (s *StopTransformJobInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StopTransformJobInput"} + if s.TransformJobName == nil { + invalidParams.Add(request.NewErrParamRequired("TransformJobName")) + } + if s.TransformJobName != nil && len(*s.TransformJobName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TransformJobName", 1)) } if invalidParams.Len() > 0 { @@ -7274,79 +12031,64 @@ func (s *OutputDataConfig) Validate() error { return nil } -// SetKmsKeyId sets the KmsKeyId field's value. -func (s *OutputDataConfig) SetKmsKeyId(v string) *OutputDataConfig { - s.KmsKeyId = &v - return s -} - -// SetS3OutputPath sets the S3OutputPath field's value. -func (s *OutputDataConfig) SetS3OutputPath(v string) *OutputDataConfig { - s.S3OutputPath = &v +// SetTransformJobName sets the TransformJobName field's value. +func (s *StopTransformJobInput) SetTransformJobName(v string) *StopTransformJobInput { + s.TransformJobName = &v return s } -// Identifies a model that you want to host and the resources to deploy for -// hosting it. If you are deploying multiple models, tell Amazon SageMaker how -// to distribute traffic among the models by specifying variant weights. -type ProductionVariant struct { +type StopTransformJobOutput struct { _ struct{} `type:"structure"` +} - // Number of instances to launch initially. - // - // InitialInstanceCount is a required field - InitialInstanceCount *int64 `min:"1" type:"integer" required:"true"` - - // Determines initial traffic distribution among all of the models that you - // specify in the endpoint configuration. The traffic to a production variant - // is determined by the ratio of the VariantWeight to the sum of all VariantWeight - // values across all ProductionVariants. If unspecified, it defaults to 1.0. - InitialVariantWeight *float64 `type:"float"` +// String returns the string representation +func (s StopTransformJobOutput) String() string { + return awsutil.Prettify(s) +} - // The ML compute instance type. - // - // InstanceType is a required field - InstanceType *string `type:"string" required:"true" enum:"ProductionVariantInstanceType"` +// GoString returns the string representation +func (s StopTransformJobOutput) GoString() string { + return s.String() +} - // The name of the model that you want to host. This is the name that you specified - // when creating the model. - // - // ModelName is a required field - ModelName *string `type:"string" required:"true"` +// Specifies how long model training can run. When model training reaches the +// limit, Amazon SageMaker ends the training job. Use this API to cap model +// training cost. +// +// To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which +// delays job termination for120 seconds. Algorithms might use this 120-second +// window to save the model artifacts, so the results of training is not lost. +// +// Training algorithms provided by Amazon SageMaker automatically saves the +// intermediate results of a model training job (it is best effort case, as +// model might not be ready to save as some stages, for example training just +// started). This intermediate data is a valid model artifact. You can use it +// to create a model (CreateModel). 
+type StoppingCondition struct { + _ struct{} `type:"structure"` - // The name of the production variant. - // - // VariantName is a required field - VariantName *string `type:"string" required:"true"` + // The maximum length of time, in seconds, that the training job can run. If + // model training does not complete during this time, Amazon SageMaker ends + // the job. If value is not specified, default value is 1 day. Maximum value + // is 5 days. + MaxRuntimeInSeconds *int64 `min:"1" type:"integer"` } // String returns the string representation -func (s ProductionVariant) String() string { +func (s StoppingCondition) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ProductionVariant) GoString() string { +func (s StoppingCondition) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *ProductionVariant) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ProductionVariant"} - if s.InitialInstanceCount == nil { - invalidParams.Add(request.NewErrParamRequired("InitialInstanceCount")) - } - if s.InitialInstanceCount != nil && *s.InitialInstanceCount < 1 { - invalidParams.Add(request.NewErrParamMinValue("InitialInstanceCount", 1)) - } - if s.InstanceType == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceType")) - } - if s.ModelName == nil { - invalidParams.Add(request.NewErrParamRequired("ModelName")) - } - if s.VariantName == nil { - invalidParams.Add(request.NewErrParamRequired("VariantName")) +func (s *StoppingCondition) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StoppingCondition"} + if s.MaxRuntimeInSeconds != nil && *s.MaxRuntimeInSeconds < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxRuntimeInSeconds", 1)) } if invalidParams.Len() > 0 { @@ -7355,341 +12097,243 @@ func (s *ProductionVariant) Validate() error { return nil } -// SetInitialInstanceCount sets the InitialInstanceCount field's value. -func (s *ProductionVariant) SetInitialInstanceCount(v int64) *ProductionVariant { - s.InitialInstanceCount = &v - return s -} - -// SetInitialVariantWeight sets the InitialVariantWeight field's value. -func (s *ProductionVariant) SetInitialVariantWeight(v float64) *ProductionVariant { - s.InitialVariantWeight = &v - return s -} - -// SetInstanceType sets the InstanceType field's value. -func (s *ProductionVariant) SetInstanceType(v string) *ProductionVariant { - s.InstanceType = &v - return s -} - -// SetModelName sets the ModelName field's value. -func (s *ProductionVariant) SetModelName(v string) *ProductionVariant { - s.ModelName = &v - return s -} - -// SetVariantName sets the VariantName field's value. -func (s *ProductionVariant) SetVariantName(v string) *ProductionVariant { - s.VariantName = &v +// SetMaxRuntimeInSeconds sets the MaxRuntimeInSeconds field's value. +func (s *StoppingCondition) SetMaxRuntimeInSeconds(v int64) *StoppingCondition { + s.MaxRuntimeInSeconds = &v return s } -// Describes weight and capacities for a production variant associated with -// an endpoint. If you sent a request to the UpdateEndpointWeightsAndCapacities -// API and the endpoint status is Updating, you get different desired and current -// values. -type ProductionVariantSummary struct { +// Describes a tag. +type Tag struct { _ struct{} `type:"structure"` - // The number of instances associated with the variant. 
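Putting numbers on the `StoppingCondition` documentation above: left unset, `MaxRuntimeInSeconds` defaults to one day (86,400 seconds), and the documented maximum is five days (432,000 seconds). A sketch that caps training at two hours:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	// 2 hours = 2 * 60 * 60 = 7200 seconds.
	sc := &sagemaker.StoppingCondition{}
	sc.SetMaxRuntimeInSeconds(2 * 60 * 60)

	if err := sc.Validate(); err != nil { // rejects values below 1
		fmt.Println("invalid stopping condition:", err)
		return
	}
	fmt.Println(sc)
}
```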
- CurrentInstanceCount *int64 `min:"1" type:"integer"` - - // The weight associated with the variant. - CurrentWeight *float64 `type:"float"` - - // The number of instances requested in the UpdateEndpointWeightsAndCapacities - // request. - DesiredInstanceCount *int64 `min:"1" type:"integer"` - - // The requested weight, as specified in the UpdateEndpointWeightsAndCapacities - // request. - DesiredWeight *float64 `type:"float"` + // The tag key. + // + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` - // The name of the variant. + // The tag value. // - // VariantName is a required field - VariantName *string `type:"string" required:"true"` + // Value is a required field + Value *string `type:"string" required:"true"` } // String returns the string representation -func (s ProductionVariantSummary) String() string { +func (s Tag) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ProductionVariantSummary) GoString() string { +func (s Tag) GoString() string { return s.String() } -// SetCurrentInstanceCount sets the CurrentInstanceCount field's value. -func (s *ProductionVariantSummary) SetCurrentInstanceCount(v int64) *ProductionVariantSummary { - s.CurrentInstanceCount = &v - return s -} - -// SetCurrentWeight sets the CurrentWeight field's value. -func (s *ProductionVariantSummary) SetCurrentWeight(v float64) *ProductionVariantSummary { - s.CurrentWeight = &v - return s -} - -// SetDesiredInstanceCount sets the DesiredInstanceCount field's value. -func (s *ProductionVariantSummary) SetDesiredInstanceCount(v int64) *ProductionVariantSummary { - s.DesiredInstanceCount = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } -// SetDesiredWeight sets the DesiredWeight field's value. -func (s *ProductionVariantSummary) SetDesiredWeight(v float64) *ProductionVariantSummary { - s.DesiredWeight = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetVariantName sets the VariantName field's value. -func (s *ProductionVariantSummary) SetVariantName(v string) *ProductionVariantSummary { - s.VariantName = &v +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v return s } -// Describes the resources, including ML compute instances and ML storage volumes, -// to use for model training. -type ResourceConfig struct { - _ struct{} `type:"structure"` - - // The number of ML compute instances to use. For distributed training, provide - // a value greater than 1. - // - // InstanceCount is a required field - InstanceCount *int64 `min:"1" type:"integer" required:"true"` - - // The ML compute instance type. - // - // InstanceType is a required field - InstanceType *string `type:"string" required:"true" enum:"TrainingInstanceType"` - - // The Amazon Resource Name (ARN) of a AWS Key Management Service key that Amazon - // SageMaker uses to encrypt data on the storage volume attached to the ML compute - // instance(s) that run the training job. - VolumeKmsKeyId *string `type:"string"` - - // The size of the ML storage volume that you want to provision. 
- // - // ML storage volumes store model artifacts and incremental states. Training - // algorithms might also use the ML storage volume for scratch space. If you - // want to store the training data in the ML storage volume, choose File as - // the TrainingInputMode in the algorithm specification. - // - // You must specify sufficient ML storage for your scenario. - // - // Amazon SageMaker supports only the General Purpose SSD (gp2) ML storage volume - // type. - // - // VolumeSizeInGB is a required field - VolumeSizeInGB *int64 `min:"1" type:"integer" required:"true"` +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +// The numbers of training jobs launched by a hyperparameter tuning job, categorized +// by status. +type TrainingJobStatusCounters struct { + _ struct{} `type:"structure"` + + // The number of completed training jobs launched by the hyperparameter tuning + // job. + Completed *int64 `type:"integer"` + + // The number of in-progress training jobs launched by a hyperparameter tuning + // job. + InProgress *int64 `type:"integer"` + + // The number of training jobs that failed and can't be retried. A failed training + // job can't be retried if it failed because a client error occurred. + NonRetryableError *int64 `type:"integer"` + + // The number of training jobs that failed, but can be retried. A failed training + // job can be retried only if it failed because an internal service error occurred. + RetryableError *int64 `type:"integer"` + + // The number of training jobs launched by a hyperparameter tuning job that + // were manually stopped. + Stopped *int64 `type:"integer"` } // String returns the string representation -func (s ResourceConfig) String() string { +func (s TrainingJobStatusCounters) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ResourceConfig) GoString() string { +func (s TrainingJobStatusCounters) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ResourceConfig) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ResourceConfig"} - if s.InstanceCount == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceCount")) - } - if s.InstanceCount != nil && *s.InstanceCount < 1 { - invalidParams.Add(request.NewErrParamMinValue("InstanceCount", 1)) - } - if s.InstanceType == nil { - invalidParams.Add(request.NewErrParamRequired("InstanceType")) - } - if s.VolumeSizeInGB == nil { - invalidParams.Add(request.NewErrParamRequired("VolumeSizeInGB")) - } - if s.VolumeSizeInGB != nil && *s.VolumeSizeInGB < 1 { - invalidParams.Add(request.NewErrParamMinValue("VolumeSizeInGB", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetCompleted sets the Completed field's value. +func (s *TrainingJobStatusCounters) SetCompleted(v int64) *TrainingJobStatusCounters { + s.Completed = &v + return s } -// SetInstanceCount sets the InstanceCount field's value. -func (s *ResourceConfig) SetInstanceCount(v int64) *ResourceConfig { - s.InstanceCount = &v +// SetInProgress sets the InProgress field's value. +func (s *TrainingJobStatusCounters) SetInProgress(v int64) *TrainingJobStatusCounters { + s.InProgress = &v return s } -// SetInstanceType sets the InstanceType field's value. 
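The distinction drawn above between `RetryableError` (internal service failures that can be retried) and `NonRetryableError` (client errors that cannot) is what matters when deciding whether a tuning job is still worth waiting on. A hedged sketch that prints the counters; the `DescribeHyperParameterTuningJob` call and its `TrainingJobStatusCounters` output field are assumptions based on the rest of this package:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	sess := session.Must(session.NewSession())
	client := sagemaker.New(sess)

	out, err := client.DescribeHyperParameterTuningJob(&sagemaker.DescribeHyperParameterTuningJobInput{
		HyperParameterTuningJobName: aws.String("example-tuning-job"), // hypothetical name
	})
	if err != nil {
		fmt.Println("describe failed:", err)
		return
	}

	c := out.TrainingJobStatusCounters
	fmt.Printf("completed=%d in-progress=%d stopped=%d retryable=%d non-retryable=%d\n",
		aws.Int64Value(c.Completed),
		aws.Int64Value(c.InProgress),
		aws.Int64Value(c.Stopped),
		aws.Int64Value(c.RetryableError),
		aws.Int64Value(c.NonRetryableError))
}
```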
-func (s *ResourceConfig) SetInstanceType(v string) *ResourceConfig { - s.InstanceType = &v +// SetNonRetryableError sets the NonRetryableError field's value. +func (s *TrainingJobStatusCounters) SetNonRetryableError(v int64) *TrainingJobStatusCounters { + s.NonRetryableError = &v return s } -// SetVolumeKmsKeyId sets the VolumeKmsKeyId field's value. -func (s *ResourceConfig) SetVolumeKmsKeyId(v string) *ResourceConfig { - s.VolumeKmsKeyId = &v +// SetRetryableError sets the RetryableError field's value. +func (s *TrainingJobStatusCounters) SetRetryableError(v int64) *TrainingJobStatusCounters { + s.RetryableError = &v return s } -// SetVolumeSizeInGB sets the VolumeSizeInGB field's value. -func (s *ResourceConfig) SetVolumeSizeInGB(v int64) *ResourceConfig { - s.VolumeSizeInGB = &v +// SetStopped sets the Stopped field's value. +func (s *TrainingJobStatusCounters) SetStopped(v int64) *TrainingJobStatusCounters { + s.Stopped = &v return s } -// Describes the S3 data source. -type S3DataSource struct { +// Provides summary information about a training job. +type TrainingJobSummary struct { _ struct{} `type:"structure"` - // If you want Amazon SageMaker to replicate the entire dataset on each ML compute - // instance that is launched for model training, specify FullyReplicated. - // - // If you want Amazon SageMaker to replicate a subset of data on each ML compute - // instance that is launched for model training, specify ShardedByS3Key. If - // there are n ML compute instances launched for a training job, each instance - // gets approximately 1/n of the number of S3 objects. In this case, model training - // on each machine uses only the subset of training data. - // - // Don't choose more ML compute instances for training than available S3 objects. - // If you do, some nodes won't get any data and you will pay for nodes that - // aren't getting any training data. This applies in both FILE and PIPE modes. - // Keep this in mind when developing algorithms. + // A timestamp that shows when the training job was created. // - // In distributed training, where you use multiple ML compute EC2 instances, - // you might choose ShardedByS3Key. If the algorithm requires copying training - // data to the ML storage volume (when TrainingInputMode is set to File), this - // copies 1/n of the number of objects. - S3DataDistributionType *string `type:"string" enum:"S3DataDistribution"` + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` - // If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker - // uses all objects with the specified key name prefix for model training. - // - // If you choose ManifestFile, S3Uri identifies an object that is a manifest - // file containing a list of object keys that you want Amazon SageMaker to use - // for model training. - // - // S3DataType is a required field - S3DataType *string `type:"string" required:"true" enum:"S3DataType"` + // Timestamp when the training job was last modified. + LastModifiedTime *time.Time `type:"timestamp"` - // Depending on the value specified for the S3DataType, identifies either a - // key name prefix or a manifest. For example: - // - // * A key name prefix might look like this: s3://bucketname/exampleprefix. 
- // - // - // * A manifest might look like this: s3://bucketname/example.manifest - // - // The manifest is an S3 object which is a JSON file with the following format: - // - // - // [ - // - // {"prefix": "s3://customer_bucket/some/prefix/"}, - // - // "relative/path/to/custdata-1", - // - // "relative/path/custdata-2", - // - // ... - // - // ] - // - // The preceding JSON matches the following s3Uris: - // - // s3://customer_bucket/some/prefix/relative/path/to/custdata-1 - // - // s3://customer_bucket/some/prefix/relative/path/custdata-1 + // A timestamp that shows when the training job ended. This field is set only + // if the training job has one of the terminal statuses (Completed, Failed, + // or Stopped). + TrainingEndTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the training job. // - // ... + // TrainingJobArn is a required field + TrainingJobArn *string `type:"string" required:"true"` + + // The name of the training job that you want a summary for. // - // The complete set of s3uris in this manifest constitutes the input data for - // the channel for this datasource. The object that each s3uris points to - // must readable by the IAM role that Amazon SageMaker uses to perform tasks - // on your behalf. + // TrainingJobName is a required field + TrainingJobName *string `min:"1" type:"string" required:"true"` + + // The status of the training job. // - // S3Uri is a required field - S3Uri *string `type:"string" required:"true"` + // TrainingJobStatus is a required field + TrainingJobStatus *string `type:"string" required:"true" enum:"TrainingJobStatus"` } // String returns the string representation -func (s S3DataSource) String() string { +func (s TrainingJobSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s S3DataSource) GoString() string { +func (s TrainingJobSummary) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *S3DataSource) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "S3DataSource"} - if s.S3DataType == nil { - invalidParams.Add(request.NewErrParamRequired("S3DataType")) - } - if s.S3Uri == nil { - invalidParams.Add(request.NewErrParamRequired("S3Uri")) - } +// SetCreationTime sets the CreationTime field's value. +func (s *TrainingJobSummary) SetCreationTime(v time.Time) *TrainingJobSummary { + s.CreationTime = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *TrainingJobSummary) SetLastModifiedTime(v time.Time) *TrainingJobSummary { + s.LastModifiedTime = &v + return s } -// SetS3DataDistributionType sets the S3DataDistributionType field's value. -func (s *S3DataSource) SetS3DataDistributionType(v string) *S3DataSource { - s.S3DataDistributionType = &v +// SetTrainingEndTime sets the TrainingEndTime field's value. +func (s *TrainingJobSummary) SetTrainingEndTime(v time.Time) *TrainingJobSummary { + s.TrainingEndTime = &v return s } -// SetS3DataType sets the S3DataType field's value. -func (s *S3DataSource) SetS3DataType(v string) *S3DataSource { - s.S3DataType = &v +// SetTrainingJobArn sets the TrainingJobArn field's value. +func (s *TrainingJobSummary) SetTrainingJobArn(v string) *TrainingJobSummary { + s.TrainingJobArn = &v return s } -// SetS3Uri sets the S3Uri field's value. 
-func (s *S3DataSource) SetS3Uri(v string) *S3DataSource { - s.S3Uri = &v +// SetTrainingJobName sets the TrainingJobName field's value. +func (s *TrainingJobSummary) SetTrainingJobName(v string) *TrainingJobSummary { + s.TrainingJobName = &v return s } -type StartNotebookInstanceInput struct { +// SetTrainingJobStatus sets the TrainingJobStatus field's value. +func (s *TrainingJobSummary) SetTrainingJobStatus(v string) *TrainingJobSummary { + s.TrainingJobStatus = &v + return s +} + +// Describes the location of the channel data. +type TransformDataSource struct { _ struct{} `type:"structure"` - // The name of the notebook instance to start. + // The S3 location of the data source that is associated with a channel. // - // NotebookInstanceName is a required field - NotebookInstanceName *string `type:"string" required:"true"` + // S3DataSource is a required field + S3DataSource *TransformS3DataSource `type:"structure" required:"true"` } // String returns the string representation -func (s StartNotebookInstanceInput) String() string { +func (s TransformDataSource) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StartNotebookInstanceInput) GoString() string { +func (s TransformDataSource) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *StartNotebookInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "StartNotebookInstanceInput"} - if s.NotebookInstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) +func (s *TransformDataSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TransformDataSource"} + if s.S3DataSource == nil { + invalidParams.Add(request.NewErrParamRequired("S3DataSource")) + } + if s.S3DataSource != nil { + if err := s.S3DataSource.Validate(); err != nil { + invalidParams.AddNested("S3DataSource", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -7698,50 +12342,66 @@ func (s *StartNotebookInstanceInput) Validate() error { return nil } -// SetNotebookInstanceName sets the NotebookInstanceName field's value. -func (s *StartNotebookInstanceInput) SetNotebookInstanceName(v string) *StartNotebookInstanceInput { - s.NotebookInstanceName = &v +// SetS3DataSource sets the S3DataSource field's value. +func (s *TransformDataSource) SetS3DataSource(v *TransformS3DataSource) *TransformDataSource { + s.S3DataSource = v return s } -type StartNotebookInstanceOutput struct { +// Describes the input source of a transform job and the way the transform job +// consumes it. +type TransformInput struct { _ struct{} `type:"structure"` -} -// String returns the string representation -func (s StartNotebookInstanceOutput) String() string { - return awsutil.Prettify(s) -} + // Compressing data helps save on storage space. If your transform data is compressed, + // specify the compression type. Amazon SageMaker automatically decompresses + // the data for the transform job accordingly. The default value is None. + CompressionType *string `type:"string" enum:"CompressionType"` -// GoString returns the string representation -func (s StartNotebookInstanceOutput) GoString() string { - return s.String() -} + // The multipurpose internet mail extension (MIME) type of the data. Amazon + // SageMaker uses the MIME type with each http call to transfer data to the + // transform job. 
+ ContentType *string `type:"string"` -type StopNotebookInstanceInput struct { - _ struct{} `type:"structure"` + // Describes the location of the channel data, meaning the S3 location of the + // input data that the model can consume. + // + // DataSource is a required field + DataSource *TransformDataSource `type:"structure" required:"true"` - // The name of the notebook instance to terminate. + // The method to use to split the transform job's data into smaller batches. + // The default value is None. If you don't want to split the data, specify None. + // If you want to split records on a newline character boundary, specify Line. + // To split records according to the RecordIO format, specify RecordIO. // - // NotebookInstanceName is a required field - NotebookInstanceName *string `type:"string" required:"true"` + // Amazon SageMaker will send maximum number of records per batch in each request + // up to the MaxPayloadInMB limit. For more information, see RecordIO data format + // (http://mxnet.io/architecture/note_data_loading.html#data-format). + // + // For information about the RecordIO format, see Data Format (http://mxnet.io/architecture/note_data_loading.html#data-format). + SplitType *string `type:"string" enum:"SplitType"` } // String returns the string representation -func (s StopNotebookInstanceInput) String() string { +func (s TransformInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StopNotebookInstanceInput) GoString() string { +func (s TransformInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *StopNotebookInstanceInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "StopNotebookInstanceInput"} - if s.NotebookInstanceName == nil { - invalidParams.Add(request.NewErrParamRequired("NotebookInstanceName")) +func (s *TransformInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TransformInput"} + if s.DataSource == nil { + invalidParams.Add(request.NewErrParamRequired("DataSource")) + } + if s.DataSource != nil { + if err := s.DataSource.Validate(); err != nil { + invalidParams.AddNested("DataSource", err.(request.ErrInvalidParams)) + } } if invalidParams.Len() > 0 { @@ -7750,119 +12410,194 @@ func (s *StopNotebookInstanceInput) Validate() error { return nil } -// SetNotebookInstanceName sets the NotebookInstanceName field's value. -func (s *StopNotebookInstanceInput) SetNotebookInstanceName(v string) *StopNotebookInstanceInput { - s.NotebookInstanceName = &v +// SetCompressionType sets the CompressionType field's value. +func (s *TransformInput) SetCompressionType(v string) *TransformInput { + s.CompressionType = &v return s } -type StopNotebookInstanceOutput struct { - _ struct{} `type:"structure"` +// SetContentType sets the ContentType field's value. +func (s *TransformInput) SetContentType(v string) *TransformInput { + s.ContentType = &v + return s } -// String returns the string representation -func (s StopNotebookInstanceOutput) String() string { - return awsutil.Prettify(s) +// SetDataSource sets the DataSource field's value. +func (s *TransformInput) SetDataSource(v *TransformDataSource) *TransformInput { + s.DataSource = v + return s } -// GoString returns the string representation -func (s StopNotebookInstanceOutput) GoString() string { - return s.String() +// SetSplitType sets the SplitType field's value. 
+func (s *TransformInput) SetSplitType(v string) *TransformInput { + s.SplitType = &v + return s } -type StopTrainingJobInput struct { +// Provides a summary of a transform job. Multiple TransformJobSummary objects +// are returned as a list after calling ListTransformJobs. +type TransformJobSummary struct { _ struct{} `type:"structure"` - // The name of the training job to stop. + // A timestamp that shows when the transform Job was created. // - // TrainingJobName is a required field - TrainingJobName *string `min:"1" type:"string" required:"true"` + // CreationTime is a required field + CreationTime *time.Time `type:"timestamp" required:"true"` + + // If the transform job failed, the reason it failed. + FailureReason *string `type:"string"` + + // Indicates when the transform job was last modified. + LastModifiedTime *time.Time `type:"timestamp"` + + // Indicates when the transform job ends on compute instances. For successful + // jobs and stopped jobs, this is the exact time recorded after the results + // are uploaded. For failed jobs, this is when Amazon SageMaker detected that + // the job failed. + TransformEndTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the transform job. + // + // TransformJobArn is a required field + TransformJobArn *string `type:"string" required:"true"` + + // The name of the transform job. + // + // TransformJobName is a required field + TransformJobName *string `min:"1" type:"string" required:"true"` + + // The status of the transform job. + // + // TransformJobStatus is a required field + TransformJobStatus *string `type:"string" required:"true" enum:"TransformJobStatus"` } // String returns the string representation -func (s StopTrainingJobInput) String() string { +func (s TransformJobSummary) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StopTrainingJobInput) GoString() string { +func (s TransformJobSummary) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *StopTrainingJobInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "StopTrainingJobInput"} - if s.TrainingJobName == nil { - invalidParams.Add(request.NewErrParamRequired("TrainingJobName")) - } - if s.TrainingJobName != nil && len(*s.TrainingJobName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("TrainingJobName", 1)) - } +// SetCreationTime sets the CreationTime field's value. +func (s *TransformJobSummary) SetCreationTime(v time.Time) *TransformJobSummary { + s.CreationTime = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetFailureReason sets the FailureReason field's value. +func (s *TransformJobSummary) SetFailureReason(v string) *TransformJobSummary { + s.FailureReason = &v + return s } -// SetTrainingJobName sets the TrainingJobName field's value. -func (s *StopTrainingJobInput) SetTrainingJobName(v string) *StopTrainingJobInput { - s.TrainingJobName = &v +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *TransformJobSummary) SetLastModifiedTime(v time.Time) *TransformJobSummary { + s.LastModifiedTime = &v return s } -type StopTrainingJobOutput struct { - _ struct{} `type:"structure"` +// SetTransformEndTime sets the TransformEndTime field's value. 
+func (s *TransformJobSummary) SetTransformEndTime(v time.Time) *TransformJobSummary { + s.TransformEndTime = &v + return s } -// String returns the string representation -func (s StopTrainingJobOutput) String() string { - return awsutil.Prettify(s) +// SetTransformJobArn sets the TransformJobArn field's value. +func (s *TransformJobSummary) SetTransformJobArn(v string) *TransformJobSummary { + s.TransformJobArn = &v + return s } -// GoString returns the string representation -func (s StopTrainingJobOutput) GoString() string { - return s.String() +// SetTransformJobName sets the TransformJobName field's value. +func (s *TransformJobSummary) SetTransformJobName(v string) *TransformJobSummary { + s.TransformJobName = &v + return s } -// Specifies how long model training can run. When model training reaches the -// limit, Amazon SageMaker ends the training job. Use this API to cap model -// training cost. -// -// To stop a job, Amazon SageMaker sends the algorithm the SIGTERM signal, which -// delays job termination for120 seconds. Algorithms might use this 120-second -// window to save the model artifacts, so the results of training is not lost. -// -// Training algorithms provided by Amazon SageMaker automatically saves the -// intermediate results of a model training job (it is best effort case, as -// model might not be ready to save as some stages, for example training just -// started). This intermediate data is a valid model artifact. You can use it -// to create a model (CreateModel). -type StoppingCondition struct { +// SetTransformJobStatus sets the TransformJobStatus field's value. +func (s *TransformJobSummary) SetTransformJobStatus(v string) *TransformJobSummary { + s.TransformJobStatus = &v + return s +} + +// Describes the results of a transform job output. +type TransformOutput struct { _ struct{} `type:"structure"` - // The maximum length of time, in seconds, that the training job can run. If - // model training does not complete during this time, Amazon SageMaker ends - // the job. If value is not specified, default value is 1 day. Maximum value - // is 5 days. - MaxRuntimeInSeconds *int64 `min:"1" type:"integer"` + // The MIME type used to specify the output data. Amazon SageMaker uses the + // MIME type with each http call to transfer data from the transform job. + Accept *string `type:"string"` + + // Defines how to assemble the results of the transform job as a single S3 object. + // You should select a format that is most convenient to you. To concatenate + // the results in binary format, specify None. To add a newline character at + // the end of every transformed record, specify Line. + AssembleWith *string `type:"string" enum:"AssemblyType"` + + // The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to + // encrypt the model artifacts at rest using Amazon S3 server-side encryption. + // The KmsKeyId can be any of the following formats: + // + // * // KMS Key ID + // + // "1234abcd-12ab-34cd-56ef-1234567890ab" + // + // * // Amazon Resource Name (ARN) of a KMS Key + // + // "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" + // + // * // KMS Key Alias + // + // "alias/ExampleAlias" + // + // * // Amazon Resource Name (ARN) of a KMS Key Alias + // + // "arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias" + // + // If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS + // key for Amazon S3 for your role's account. 
For more information, see KMS-Managed + // Encryption Keys (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html) + // in the Amazon Simple Storage Service Developer Guide. + // + // The KMS key policy must grant permission to the IAM role that you specify + // in your CreateTramsformJob request. For more information, see Using Key Policies + // in AWS KMS (http://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) + // in the AWS Key Management Service Developer Guide. + KmsKeyId *string `type:"string"` + + // The Amazon S3 path where you want Amazon SageMaker to store the results of + // the transform job. For example, s3://bucket-name/key-name-prefix. + // + // For every S3 object used as input for the transform job, the transformed + // data is stored in a corresponding subfolder in the location under the output + // prefix. For example, the input data s3://bucket-name/input-name-prefix/dataset01/data.csv + // will have the transformed data stored at s3://bucket-name/key-name-prefix/dataset01/, + // based on the original name, as a series of .part files (.part0001, part0002, + // etc). + // + // S3OutputPath is a required field + S3OutputPath *string `type:"string" required:"true"` } // String returns the string representation -func (s StoppingCondition) String() string { +func (s TransformOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s StoppingCondition) GoString() string { +func (s TransformOutput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *StoppingCondition) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "StoppingCondition"} - if s.MaxRuntimeInSeconds != nil && *s.MaxRuntimeInSeconds < 1 { - invalidParams.Add(request.NewErrParamMinValue("MaxRuntimeInSeconds", 1)) +func (s *TransformOutput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TransformOutput"} + if s.S3OutputPath == nil { + invalidParams.Add(request.NewErrParamRequired("S3OutputPath")) } if invalidParams.Len() > 0 { @@ -7871,48 +12606,84 @@ func (s *StoppingCondition) Validate() error { return nil } -// SetMaxRuntimeInSeconds sets the MaxRuntimeInSeconds field's value. -func (s *StoppingCondition) SetMaxRuntimeInSeconds(v int64) *StoppingCondition { - s.MaxRuntimeInSeconds = &v +// SetAccept sets the Accept field's value. +func (s *TransformOutput) SetAccept(v string) *TransformOutput { + s.Accept = &v return s } -// Describes a tag. -type Tag struct { +// SetAssembleWith sets the AssembleWith field's value. +func (s *TransformOutput) SetAssembleWith(v string) *TransformOutput { + s.AssembleWith = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *TransformOutput) SetKmsKeyId(v string) *TransformOutput { + s.KmsKeyId = &v + return s +} + +// SetS3OutputPath sets the S3OutputPath field's value. +func (s *TransformOutput) SetS3OutputPath(v string) *TransformOutput { + s.S3OutputPath = &v + return s +} + +// Describes the resources, including ML instance types and ML instance count, +// to use for transform job. +type TransformResources struct { _ struct{} `type:"structure"` - // The tag key. + // The number of ML compute instances to use in the transform job. For distributed + // transform, provide a value greater than 1. The default value is 1. 
// - // Key is a required field - Key *string `min:"1" type:"string" required:"true"` + // InstanceCount is a required field + InstanceCount *int64 `min:"1" type:"integer" required:"true"` - // The tag value. + // The ML compute instance type for the transform job. For using built-in algorithms + // to transform moderately sized datasets, ml.m4.xlarge or ml.m5.large should + // suffice. There is no default value for InstanceType. // - // Value is a required field - Value *string `type:"string" required:"true"` + // InstanceType is a required field + InstanceType *string `type:"string" required:"true" enum:"TransformInstanceType"` + + // The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to + // encrypt data on the storage volume attached to the ML compute instance(s) + // that run the batch transform job. The VolumeKmsKeyId can be any of the following + // formats: + // + // * // KMS Key ID + // + // "1234abcd-12ab-34cd-56ef-1234567890ab" + // + // * // Amazon Resource Name (ARN) of a KMS Key + // + // "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" + VolumeKmsKeyId *string `type:"string"` } // String returns the string representation -func (s Tag) String() string { +func (s TransformResources) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s Tag) GoString() string { +func (s TransformResources) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *Tag) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "Tag"} - if s.Key == nil { - invalidParams.Add(request.NewErrParamRequired("Key")) +func (s *TransformResources) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TransformResources"} + if s.InstanceCount == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceCount")) } - if s.Key != nil && len(*s.Key) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + if s.InstanceCount != nil && *s.InstanceCount < 1 { + invalidParams.Add(request.NewErrParamMinValue("InstanceCount", 1)) } - if s.Value == nil { - invalidParams.Add(request.NewErrParamRequired("Value")) + if s.InstanceType == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceType")) } if invalidParams.Len() > 0 { @@ -7921,94 +12692,113 @@ func (s *Tag) Validate() error { return nil } -// SetKey sets the Key field's value. -func (s *Tag) SetKey(v string) *Tag { - s.Key = &v +// SetInstanceCount sets the InstanceCount field's value. +func (s *TransformResources) SetInstanceCount(v int64) *TransformResources { + s.InstanceCount = &v return s } -// SetValue sets the Value field's value. -func (s *Tag) SetValue(v string) *Tag { - s.Value = &v +// SetInstanceType sets the InstanceType field's value. +func (s *TransformResources) SetInstanceType(v string) *TransformResources { + s.InstanceType = &v return s } -// Provides summary information about a training job. -type TrainingJobSummary struct { +// SetVolumeKmsKeyId sets the VolumeKmsKeyId field's value. +func (s *TransformResources) SetVolumeKmsKeyId(v string) *TransformResources { + s.VolumeKmsKeyId = &v + return s +} + +// Describes the S3 data source. +type TransformS3DataSource struct { _ struct{} `type:"structure"` - // A timestamp that shows when the training job was created. + // If you choose S3Prefix, S3Uri identifies a key name prefix. Amazon SageMaker + // uses all objects with the specified key name prefix for batch transform. 
// - // CreationTime is a required field - CreationTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` - - // Timestamp when the training job was last modified. - LastModifiedTime *time.Time `type:"timestamp" timestampFormat:"unix"` - - // A timestamp that shows when the training job ended. This field is set only - // if the training job has one of the terminal statuses (Completed, Failed, - // or Stopped). - TrainingEndTime *time.Time `type:"timestamp" timestampFormat:"unix"` - - // The Amazon Resource Name (ARN) of the training job. + // If you choose ManifestFile, S3Uri identifies an object that is a manifest + // file containing a list of object keys that you want Amazon SageMaker to use + // for batch transform. // - // TrainingJobArn is a required field - TrainingJobArn *string `type:"string" required:"true"` + // S3DataType is a required field + S3DataType *string `type:"string" required:"true" enum:"S3DataType"` - // The name of the training job that you want a summary for. + // Depending on the value specified for the S3DataType, identifies either a + // key name prefix or a manifest. For example: // - // TrainingJobName is a required field - TrainingJobName *string `min:"1" type:"string" required:"true"` - - // The status of the training job. + // * A key name prefix might look like this: s3://bucketname/exampleprefix. // - // TrainingJobStatus is a required field - TrainingJobStatus *string `type:"string" required:"true" enum:"TrainingJobStatus"` + // + // * A manifest might look like this: s3://bucketname/example.manifest + // + // The manifest is an S3 object which is a JSON file with the following format: + // + // + // [ + // + // {"prefix": "s3://customer_bucket/some/prefix/"}, + // + // "relative/path/to/custdata-1", + // + // "relative/path/custdata-2", + // + // ... + // + // ] + // + // The preceding JSON matches the following S3Uris: + // + // s3://customer_bucket/some/prefix/relative/path/to/custdata-1 + // + // s3://customer_bucket/some/prefix/relative/path/custdata-1 + // + // ... + // + // The complete set of S3Uris in this manifest constitutes the input data for + // the channel for this datasource. The object that each S3Uris points to + // must be readable by the IAM role that Amazon SageMaker uses to perform + // tasks on your behalf. + // + // S3Uri is a required field + S3Uri *string `type:"string" required:"true"` } // String returns the string representation -func (s TrainingJobSummary) String() string { +func (s TransformS3DataSource) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s TrainingJobSummary) GoString() string { +func (s TransformS3DataSource) GoString() string { return s.String() } -// SetCreationTime sets the CreationTime field's value. -func (s *TrainingJobSummary) SetCreationTime(v time.Time) *TrainingJobSummary { - s.CreationTime = &v - return s -} - -// SetLastModifiedTime sets the LastModifiedTime field's value. -func (s *TrainingJobSummary) SetLastModifiedTime(v time.Time) *TrainingJobSummary { - s.LastModifiedTime = &v - return s -} - -// SetTrainingEndTime sets the TrainingEndTime field's value. -func (s *TrainingJobSummary) SetTrainingEndTime(v time.Time) *TrainingJobSummary { - s.TrainingEndTime = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *TransformS3DataSource) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TransformS3DataSource"} + if s.S3DataType == nil { + invalidParams.Add(request.NewErrParamRequired("S3DataType")) + } + if s.S3Uri == nil { + invalidParams.Add(request.NewErrParamRequired("S3Uri")) + } -// SetTrainingJobArn sets the TrainingJobArn field's value. -func (s *TrainingJobSummary) SetTrainingJobArn(v string) *TrainingJobSummary { - s.TrainingJobArn = &v - return s + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetTrainingJobName sets the TrainingJobName field's value. -func (s *TrainingJobSummary) SetTrainingJobName(v string) *TrainingJobSummary { - s.TrainingJobName = &v +// SetS3DataType sets the S3DataType field's value. +func (s *TransformS3DataSource) SetS3DataType(v string) *TransformS3DataSource { + s.S3DataType = &v return s } -// SetTrainingJobStatus sets the TrainingJobStatus field's value. -func (s *TrainingJobSummary) SetTrainingJobStatus(v string) *TrainingJobSummary { - s.TrainingJobStatus = &v +// SetS3Uri sets the S3Uri field's value. +func (s *TransformS3DataSource) SetS3Uri(v string) *TransformS3DataSource { + s.S3Uri = &v return s } @@ -8182,16 +12972,34 @@ func (s *UpdateEndpointWeightsAndCapacitiesOutput) SetEndpointArn(v string) *Upd type UpdateNotebookInstanceInput struct { _ struct{} `type:"structure"` + // Set to true to remove the notebook instance lifecycle configuration currently + // associated with the notebook instance. + DisassociateLifecycleConfig *bool `type:"boolean"` + // The Amazon ML compute instance type. InstanceType *string `type:"string" enum:"InstanceType"` + // The name of a lifecycle configuration to associate with the notebook instance. + // For information about lifestyle configurations, see Step 2.1: (Optional) + // Customize a Notebook Instance (http://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html). + LifecycleConfigName *string `type:"string"` + // The name of the notebook instance to update. // // NotebookInstanceName is a required field NotebookInstanceName *string `type:"string" required:"true"` - // Amazon Resource Name (ARN) of the IAM role to associate with the instance. + // The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can + // assume to access the notebook instance. For more information, see Amazon + // SageMaker Roles (http://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). + // + // To be able to pass this role to Amazon SageMaker, the caller of this API + // must have the iam:PassRole permission. RoleArn *string `min:"20" type:"string"` + + // The size, in GB, of the ML storage volume to attach to the notebook instance. + // The default value is 5 GB. + VolumeSizeInGB *int64 `min:"5" type:"integer"` } // String returns the string representation @@ -8213,6 +13021,9 @@ func (s *UpdateNotebookInstanceInput) Validate() error { if s.RoleArn != nil && len(*s.RoleArn) < 20 { invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) } + if s.VolumeSizeInGB != nil && *s.VolumeSizeInGB < 5 { + invalidParams.Add(request.NewErrParamMinValue("VolumeSizeInGB", 5)) + } if invalidParams.Len() > 0 { return invalidParams @@ -8220,12 +13031,24 @@ func (s *UpdateNotebookInstanceInput) Validate() error { return nil } +// SetDisassociateLifecycleConfig sets the DisassociateLifecycleConfig field's value. 
+func (s *UpdateNotebookInstanceInput) SetDisassociateLifecycleConfig(v bool) *UpdateNotebookInstanceInput { + s.DisassociateLifecycleConfig = &v + return s +} + // SetInstanceType sets the InstanceType field's value. func (s *UpdateNotebookInstanceInput) SetInstanceType(v string) *UpdateNotebookInstanceInput { s.InstanceType = &v return s } +// SetLifecycleConfigName sets the LifecycleConfigName field's value. +func (s *UpdateNotebookInstanceInput) SetLifecycleConfigName(v string) *UpdateNotebookInstanceInput { + s.LifecycleConfigName = &v + return s +} + // SetNotebookInstanceName sets the NotebookInstanceName field's value. func (s *UpdateNotebookInstanceInput) SetNotebookInstanceName(v string) *UpdateNotebookInstanceInput { s.NotebookInstanceName = &v @@ -8238,6 +13061,12 @@ func (s *UpdateNotebookInstanceInput) SetRoleArn(v string) *UpdateNotebookInstan return s } +// SetVolumeSizeInGB sets the VolumeSizeInGB field's value. +func (s *UpdateNotebookInstanceInput) SetVolumeSizeInGB(v int64) *UpdateNotebookInstanceInput { + s.VolumeSizeInGB = &v + return s +} + type UpdateNotebookInstanceLifecycleConfigInput struct { _ struct{} `type:"structure"` @@ -8343,6 +13172,87 @@ func (s UpdateNotebookInstanceOutput) GoString() string { return s.String() } +// Specifies a VPC that your training jobs and hosted models have access to. +// Control access to and from your training and model containers by configuring +// the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual +// Private Cloud (http://docs.aws.amazon.com/sagemaker/latest/dg/host-vpc.html) +// and Protect Training Jobs by Using an Amazon Virtual Private Cloud (http://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html). +type VpcConfig struct { + _ struct{} `type:"structure"` + + // The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security + // groups for the VPC that is specified in the Subnets field. + // + // SecurityGroupIds is a required field + SecurityGroupIds []*string `min:"1" type:"list" required:"true"` + + // The ID of the subnets in the VPC to which you want to connect your training + // job or model. + // + // Subnets is a required field + Subnets []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s VpcConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VpcConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *VpcConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "VpcConfig"} + if s.SecurityGroupIds == nil { + invalidParams.Add(request.NewErrParamRequired("SecurityGroupIds")) + } + if s.SecurityGroupIds != nil && len(s.SecurityGroupIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecurityGroupIds", 1)) + } + if s.Subnets == nil { + invalidParams.Add(request.NewErrParamRequired("Subnets")) + } + if s.Subnets != nil && len(s.Subnets) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Subnets", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *VpcConfig) SetSecurityGroupIds(v []*string) *VpcConfig { + s.SecurityGroupIds = v + return s +} + +// SetSubnets sets the Subnets field's value. 
+func (s *VpcConfig) SetSubnets(v []*string) *VpcConfig { + s.Subnets = v + return s +} + +const ( + // AssemblyTypeNone is a AssemblyType enum value + AssemblyTypeNone = "None" + + // AssemblyTypeLine is a AssemblyType enum value + AssemblyTypeLine = "Line" +) + +const ( + // BatchStrategyMultiRecord is a BatchStrategy enum value + BatchStrategyMultiRecord = "MultiRecord" + + // BatchStrategySingleRecord is a BatchStrategy enum value + BatchStrategySingleRecord = "SingleRecord" +) + const ( // CompressionTypeNone is a CompressionType enum value CompressionTypeNone = "None" @@ -8388,6 +13298,9 @@ const ( // EndpointStatusUpdating is a EndpointStatus enum value EndpointStatusUpdating = "Updating" + // EndpointStatusSystemUpdating is a EndpointStatus enum value + EndpointStatusSystemUpdating = "SystemUpdating" + // EndpointStatusRollingBack is a EndpointStatus enum value EndpointStatusRollingBack = "RollingBack" @@ -8401,18 +13314,171 @@ const ( EndpointStatusFailed = "Failed" ) +const ( + // HyperParameterTuningJobObjectiveTypeMaximize is a HyperParameterTuningJobObjectiveType enum value + HyperParameterTuningJobObjectiveTypeMaximize = "Maximize" + + // HyperParameterTuningJobObjectiveTypeMinimize is a HyperParameterTuningJobObjectiveType enum value + HyperParameterTuningJobObjectiveTypeMinimize = "Minimize" +) + +const ( + // HyperParameterTuningJobSortByOptionsName is a HyperParameterTuningJobSortByOptions enum value + HyperParameterTuningJobSortByOptionsName = "Name" + + // HyperParameterTuningJobSortByOptionsStatus is a HyperParameterTuningJobSortByOptions enum value + HyperParameterTuningJobSortByOptionsStatus = "Status" + + // HyperParameterTuningJobSortByOptionsCreationTime is a HyperParameterTuningJobSortByOptions enum value + HyperParameterTuningJobSortByOptionsCreationTime = "CreationTime" +) + +const ( + // HyperParameterTuningJobStatusCompleted is a HyperParameterTuningJobStatus enum value + HyperParameterTuningJobStatusCompleted = "Completed" + + // HyperParameterTuningJobStatusInProgress is a HyperParameterTuningJobStatus enum value + HyperParameterTuningJobStatusInProgress = "InProgress" + + // HyperParameterTuningJobStatusFailed is a HyperParameterTuningJobStatus enum value + HyperParameterTuningJobStatusFailed = "Failed" + + // HyperParameterTuningJobStatusStopped is a HyperParameterTuningJobStatus enum value + HyperParameterTuningJobStatusStopped = "Stopped" + + // HyperParameterTuningJobStatusStopping is a HyperParameterTuningJobStatus enum value + HyperParameterTuningJobStatusStopping = "Stopping" +) + +// The strategy hyperparameter tuning uses to find the best combination of hyperparameters +// for your model. Currently, the only supported value is Bayesian. 
+const ( + // HyperParameterTuningJobStrategyTypeBayesian is a HyperParameterTuningJobStrategyType enum value + HyperParameterTuningJobStrategyTypeBayesian = "Bayesian" +) + +const ( + // HyperParameterTuningJobWarmStartTypeIdenticalDataAndAlgorithm is a HyperParameterTuningJobWarmStartType enum value + HyperParameterTuningJobWarmStartTypeIdenticalDataAndAlgorithm = "IdenticalDataAndAlgorithm" + + // HyperParameterTuningJobWarmStartTypeTransferLearning is a HyperParameterTuningJobWarmStartType enum value + HyperParameterTuningJobWarmStartTypeTransferLearning = "TransferLearning" +) + const ( // InstanceTypeMlT2Medium is a InstanceType enum value InstanceTypeMlT2Medium = "ml.t2.medium" + // InstanceTypeMlT2Large is a InstanceType enum value + InstanceTypeMlT2Large = "ml.t2.large" + + // InstanceTypeMlT2Xlarge is a InstanceType enum value + InstanceTypeMlT2Xlarge = "ml.t2.xlarge" + + // InstanceTypeMlT22xlarge is a InstanceType enum value + InstanceTypeMlT22xlarge = "ml.t2.2xlarge" + + // InstanceTypeMlT3Medium is a InstanceType enum value + InstanceTypeMlT3Medium = "ml.t3.medium" + + // InstanceTypeMlT3Large is a InstanceType enum value + InstanceTypeMlT3Large = "ml.t3.large" + + // InstanceTypeMlT3Xlarge is a InstanceType enum value + InstanceTypeMlT3Xlarge = "ml.t3.xlarge" + + // InstanceTypeMlT32xlarge is a InstanceType enum value + InstanceTypeMlT32xlarge = "ml.t3.2xlarge" + // InstanceTypeMlM4Xlarge is a InstanceType enum value InstanceTypeMlM4Xlarge = "ml.m4.xlarge" + // InstanceTypeMlM42xlarge is a InstanceType enum value + InstanceTypeMlM42xlarge = "ml.m4.2xlarge" + + // InstanceTypeMlM44xlarge is a InstanceType enum value + InstanceTypeMlM44xlarge = "ml.m4.4xlarge" + + // InstanceTypeMlM410xlarge is a InstanceType enum value + InstanceTypeMlM410xlarge = "ml.m4.10xlarge" + + // InstanceTypeMlM416xlarge is a InstanceType enum value + InstanceTypeMlM416xlarge = "ml.m4.16xlarge" + + // InstanceTypeMlM5Xlarge is a InstanceType enum value + InstanceTypeMlM5Xlarge = "ml.m5.xlarge" + + // InstanceTypeMlM52xlarge is a InstanceType enum value + InstanceTypeMlM52xlarge = "ml.m5.2xlarge" + + // InstanceTypeMlM54xlarge is a InstanceType enum value + InstanceTypeMlM54xlarge = "ml.m5.4xlarge" + + // InstanceTypeMlM512xlarge is a InstanceType enum value + InstanceTypeMlM512xlarge = "ml.m5.12xlarge" + + // InstanceTypeMlM524xlarge is a InstanceType enum value + InstanceTypeMlM524xlarge = "ml.m5.24xlarge" + + // InstanceTypeMlC4Xlarge is a InstanceType enum value + InstanceTypeMlC4Xlarge = "ml.c4.xlarge" + + // InstanceTypeMlC42xlarge is a InstanceType enum value + InstanceTypeMlC42xlarge = "ml.c4.2xlarge" + + // InstanceTypeMlC44xlarge is a InstanceType enum value + InstanceTypeMlC44xlarge = "ml.c4.4xlarge" + + // InstanceTypeMlC48xlarge is a InstanceType enum value + InstanceTypeMlC48xlarge = "ml.c4.8xlarge" + + // InstanceTypeMlC5Xlarge is a InstanceType enum value + InstanceTypeMlC5Xlarge = "ml.c5.xlarge" + + // InstanceTypeMlC52xlarge is a InstanceType enum value + InstanceTypeMlC52xlarge = "ml.c5.2xlarge" + + // InstanceTypeMlC54xlarge is a InstanceType enum value + InstanceTypeMlC54xlarge = "ml.c5.4xlarge" + + // InstanceTypeMlC59xlarge is a InstanceType enum value + InstanceTypeMlC59xlarge = "ml.c5.9xlarge" + + // InstanceTypeMlC518xlarge is a InstanceType enum value + InstanceTypeMlC518xlarge = "ml.c5.18xlarge" + + // InstanceTypeMlC5dXlarge is a InstanceType enum value + InstanceTypeMlC5dXlarge = "ml.c5d.xlarge" + + // InstanceTypeMlC5d2xlarge is a InstanceType enum value + 
InstanceTypeMlC5d2xlarge = "ml.c5d.2xlarge" + + // InstanceTypeMlC5d4xlarge is a InstanceType enum value + InstanceTypeMlC5d4xlarge = "ml.c5d.4xlarge" + + // InstanceTypeMlC5d9xlarge is a InstanceType enum value + InstanceTypeMlC5d9xlarge = "ml.c5d.9xlarge" + + // InstanceTypeMlC5d18xlarge is a InstanceType enum value + InstanceTypeMlC5d18xlarge = "ml.c5d.18xlarge" + // InstanceTypeMlP2Xlarge is a InstanceType enum value InstanceTypeMlP2Xlarge = "ml.p2.xlarge" + // InstanceTypeMlP28xlarge is a InstanceType enum value + InstanceTypeMlP28xlarge = "ml.p2.8xlarge" + + // InstanceTypeMlP216xlarge is a InstanceType enum value + InstanceTypeMlP216xlarge = "ml.p2.16xlarge" + // InstanceTypeMlP32xlarge is a InstanceType enum value InstanceTypeMlP32xlarge = "ml.p3.2xlarge" + + // InstanceTypeMlP38xlarge is a InstanceType enum value + InstanceTypeMlP38xlarge = "ml.p3.8xlarge" + + // InstanceTypeMlP316xlarge is a InstanceType enum value + InstanceTypeMlP316xlarge = "ml.p3.16xlarge" ) const ( @@ -8479,6 +13545,20 @@ const ( // NotebookInstanceStatusDeleting is a NotebookInstanceStatus enum value NotebookInstanceStatusDeleting = "Deleting" + + // NotebookInstanceStatusUpdating is a NotebookInstanceStatus enum value + NotebookInstanceStatusUpdating = "Updating" +) + +const ( + // ObjectiveStatusSucceeded is a ObjectiveStatus enum value + ObjectiveStatusSucceeded = "Succeeded" + + // ObjectiveStatusPending is a ObjectiveStatus enum value + ObjectiveStatusPending = "Pending" + + // ObjectiveStatusFailed is a ObjectiveStatus enum value + ObjectiveStatusFailed = "Failed" ) const ( @@ -8490,35 +13570,101 @@ const ( ) const ( + // ProductionVariantInstanceTypeMlT2Medium is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlT2Medium = "ml.t2.medium" + + // ProductionVariantInstanceTypeMlT2Large is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlT2Large = "ml.t2.large" + + // ProductionVariantInstanceTypeMlT2Xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlT2Xlarge = "ml.t2.xlarge" + + // ProductionVariantInstanceTypeMlT22xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlT22xlarge = "ml.t2.2xlarge" + + // ProductionVariantInstanceTypeMlM4Xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM4Xlarge = "ml.m4.xlarge" + + // ProductionVariantInstanceTypeMlM42xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM42xlarge = "ml.m4.2xlarge" + + // ProductionVariantInstanceTypeMlM44xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM44xlarge = "ml.m4.4xlarge" + + // ProductionVariantInstanceTypeMlM410xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM410xlarge = "ml.m4.10xlarge" + + // ProductionVariantInstanceTypeMlM416xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM416xlarge = "ml.m4.16xlarge" + + // ProductionVariantInstanceTypeMlM5Large is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM5Large = "ml.m5.large" + + // ProductionVariantInstanceTypeMlM5Xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM5Xlarge = "ml.m5.xlarge" + + // ProductionVariantInstanceTypeMlM52xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM52xlarge = "ml.m5.2xlarge" + + // ProductionVariantInstanceTypeMlM54xlarge is a 
ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM54xlarge = "ml.m5.4xlarge" + + // ProductionVariantInstanceTypeMlM512xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM512xlarge = "ml.m5.12xlarge" + + // ProductionVariantInstanceTypeMlM524xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlM524xlarge = "ml.m5.24xlarge" + + // ProductionVariantInstanceTypeMlC4Large is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlC4Large = "ml.c4.large" + + // ProductionVariantInstanceTypeMlC4Xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlC4Xlarge = "ml.c4.xlarge" + // ProductionVariantInstanceTypeMlC42xlarge is a ProductionVariantInstanceType enum value ProductionVariantInstanceTypeMlC42xlarge = "ml.c4.2xlarge" + // ProductionVariantInstanceTypeMlC44xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlC44xlarge = "ml.c4.4xlarge" + // ProductionVariantInstanceTypeMlC48xlarge is a ProductionVariantInstanceType enum value ProductionVariantInstanceTypeMlC48xlarge = "ml.c4.8xlarge" - // ProductionVariantInstanceTypeMlC4Xlarge is a ProductionVariantInstanceType enum value - ProductionVariantInstanceTypeMlC4Xlarge = "ml.c4.xlarge" + // ProductionVariantInstanceTypeMlP2Xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlP2Xlarge = "ml.p2.xlarge" - // ProductionVariantInstanceTypeMlC52xlarge is a ProductionVariantInstanceType enum value - ProductionVariantInstanceTypeMlC52xlarge = "ml.c5.2xlarge" + // ProductionVariantInstanceTypeMlP28xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlP28xlarge = "ml.p2.8xlarge" - // ProductionVariantInstanceTypeMlC59xlarge is a ProductionVariantInstanceType enum value - ProductionVariantInstanceTypeMlC59xlarge = "ml.c5.9xlarge" + // ProductionVariantInstanceTypeMlP216xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlP216xlarge = "ml.p2.16xlarge" + + // ProductionVariantInstanceTypeMlP32xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlP32xlarge = "ml.p3.2xlarge" + + // ProductionVariantInstanceTypeMlP38xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlP38xlarge = "ml.p3.8xlarge" + + // ProductionVariantInstanceTypeMlP316xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlP316xlarge = "ml.p3.16xlarge" + + // ProductionVariantInstanceTypeMlC5Large is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlC5Large = "ml.c5.large" // ProductionVariantInstanceTypeMlC5Xlarge is a ProductionVariantInstanceType enum value ProductionVariantInstanceTypeMlC5Xlarge = "ml.c5.xlarge" - // ProductionVariantInstanceTypeMlM4Xlarge is a ProductionVariantInstanceType enum value - ProductionVariantInstanceTypeMlM4Xlarge = "ml.m4.xlarge" + // ProductionVariantInstanceTypeMlC52xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlC52xlarge = "ml.c5.2xlarge" - // ProductionVariantInstanceTypeMlP2Xlarge is a ProductionVariantInstanceType enum value - ProductionVariantInstanceTypeMlP2Xlarge = "ml.p2.xlarge" + // ProductionVariantInstanceTypeMlC54xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlC54xlarge = "ml.c5.4xlarge" - // ProductionVariantInstanceTypeMlP32xlarge is a ProductionVariantInstanceType enum value - 
ProductionVariantInstanceTypeMlP32xlarge = "ml.p3.2xlarge" + // ProductionVariantInstanceTypeMlC59xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlC59xlarge = "ml.c5.9xlarge" - // ProductionVariantInstanceTypeMlT2Medium is a ProductionVariantInstanceType enum value - ProductionVariantInstanceTypeMlT2Medium = "ml.t2.medium" + // ProductionVariantInstanceTypeMlC518xlarge is a ProductionVariantInstanceType enum value + ProductionVariantInstanceTypeMlC518xlarge = "ml.c5.18xlarge" ) const ( @@ -8549,9 +13695,18 @@ const ( // SecondaryStatusStarting is a SecondaryStatus enum value SecondaryStatusStarting = "Starting" + // SecondaryStatusLaunchingMlinstances is a SecondaryStatus enum value + SecondaryStatusLaunchingMlinstances = "LaunchingMLInstances" + + // SecondaryStatusPreparingTrainingStack is a SecondaryStatus enum value + SecondaryStatusPreparingTrainingStack = "PreparingTrainingStack" + // SecondaryStatusDownloading is a SecondaryStatus enum value SecondaryStatusDownloading = "Downloading" + // SecondaryStatusDownloadingTrainingImage is a SecondaryStatus enum value + SecondaryStatusDownloadingTrainingImage = "DownloadingTrainingImage" + // SecondaryStatusTraining is a SecondaryStatus enum value SecondaryStatusTraining = "Training" @@ -8593,6 +13748,17 @@ const ( SortOrderDescending = "Descending" ) +const ( + // SplitTypeNone is a SplitType enum value + SplitTypeNone = "None" + + // SplitTypeLine is a SplitType enum value + SplitTypeLine = "Line" + + // SplitTypeRecordIo is a SplitType enum value + SplitTypeRecordIo = "RecordIO" +) + const ( // TrainingInputModePipe is a TrainingInputMode enum value TrainingInputModePipe = "Pipe" @@ -8605,18 +13771,45 @@ const ( // TrainingInstanceTypeMlM4Xlarge is a TrainingInstanceType enum value TrainingInstanceTypeMlM4Xlarge = "ml.m4.xlarge" + // TrainingInstanceTypeMlM42xlarge is a TrainingInstanceType enum value + TrainingInstanceTypeMlM42xlarge = "ml.m4.2xlarge" + // TrainingInstanceTypeMlM44xlarge is a TrainingInstanceType enum value TrainingInstanceTypeMlM44xlarge = "ml.m4.4xlarge" // TrainingInstanceTypeMlM410xlarge is a TrainingInstanceType enum value TrainingInstanceTypeMlM410xlarge = "ml.m4.10xlarge" + // TrainingInstanceTypeMlM416xlarge is a TrainingInstanceType enum value + TrainingInstanceTypeMlM416xlarge = "ml.m4.16xlarge" + + // TrainingInstanceTypeMlM5Large is a TrainingInstanceType enum value + TrainingInstanceTypeMlM5Large = "ml.m5.large" + + // TrainingInstanceTypeMlM5Xlarge is a TrainingInstanceType enum value + TrainingInstanceTypeMlM5Xlarge = "ml.m5.xlarge" + + // TrainingInstanceTypeMlM52xlarge is a TrainingInstanceType enum value + TrainingInstanceTypeMlM52xlarge = "ml.m5.2xlarge" + + // TrainingInstanceTypeMlM54xlarge is a TrainingInstanceType enum value + TrainingInstanceTypeMlM54xlarge = "ml.m5.4xlarge" + + // TrainingInstanceTypeMlM512xlarge is a TrainingInstanceType enum value + TrainingInstanceTypeMlM512xlarge = "ml.m5.12xlarge" + + // TrainingInstanceTypeMlM524xlarge is a TrainingInstanceType enum value + TrainingInstanceTypeMlM524xlarge = "ml.m5.24xlarge" + // TrainingInstanceTypeMlC4Xlarge is a TrainingInstanceType enum value TrainingInstanceTypeMlC4Xlarge = "ml.c4.xlarge" // TrainingInstanceTypeMlC42xlarge is a TrainingInstanceType enum value TrainingInstanceTypeMlC42xlarge = "ml.c4.2xlarge" + // TrainingInstanceTypeMlC44xlarge is a TrainingInstanceType enum value + TrainingInstanceTypeMlC44xlarge = "ml.c4.4xlarge" + // TrainingInstanceTypeMlC48xlarge is a TrainingInstanceType enum 
value TrainingInstanceTypeMlC48xlarge = "ml.c4.8xlarge" @@ -8654,6 +13847,20 @@ const ( TrainingInstanceTypeMlC518xlarge = "ml.c5.18xlarge" ) +const ( + // TrainingJobSortByOptionsName is a TrainingJobSortByOptions enum value + TrainingJobSortByOptionsName = "Name" + + // TrainingJobSortByOptionsCreationTime is a TrainingJobSortByOptions enum value + TrainingJobSortByOptionsCreationTime = "CreationTime" + + // TrainingJobSortByOptionsStatus is a TrainingJobSortByOptions enum value + TrainingJobSortByOptionsStatus = "Status" + + // TrainingJobSortByOptionsFinalObjectiveMetricValue is a TrainingJobSortByOptions enum value + TrainingJobSortByOptionsFinalObjectiveMetricValue = "FinalObjectiveMetricValue" +) + const ( // TrainingJobStatusInProgress is a TrainingJobStatus enum value TrainingJobStatusInProgress = "InProgress" @@ -8670,3 +13877,100 @@ const ( // TrainingJobStatusStopped is a TrainingJobStatus enum value TrainingJobStatusStopped = "Stopped" ) + +const ( + // TransformInstanceTypeMlM4Xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM4Xlarge = "ml.m4.xlarge" + + // TransformInstanceTypeMlM42xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM42xlarge = "ml.m4.2xlarge" + + // TransformInstanceTypeMlM44xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM44xlarge = "ml.m4.4xlarge" + + // TransformInstanceTypeMlM410xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM410xlarge = "ml.m4.10xlarge" + + // TransformInstanceTypeMlM416xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM416xlarge = "ml.m4.16xlarge" + + // TransformInstanceTypeMlC4Xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlC4Xlarge = "ml.c4.xlarge" + + // TransformInstanceTypeMlC42xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlC42xlarge = "ml.c4.2xlarge" + + // TransformInstanceTypeMlC44xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlC44xlarge = "ml.c4.4xlarge" + + // TransformInstanceTypeMlC48xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlC48xlarge = "ml.c4.8xlarge" + + // TransformInstanceTypeMlP2Xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlP2Xlarge = "ml.p2.xlarge" + + // TransformInstanceTypeMlP28xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlP28xlarge = "ml.p2.8xlarge" + + // TransformInstanceTypeMlP216xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlP216xlarge = "ml.p2.16xlarge" + + // TransformInstanceTypeMlP32xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlP32xlarge = "ml.p3.2xlarge" + + // TransformInstanceTypeMlP38xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlP38xlarge = "ml.p3.8xlarge" + + // TransformInstanceTypeMlP316xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlP316xlarge = "ml.p3.16xlarge" + + // TransformInstanceTypeMlC5Xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlC5Xlarge = "ml.c5.xlarge" + + // TransformInstanceTypeMlC52xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlC52xlarge = "ml.c5.2xlarge" + + // TransformInstanceTypeMlC54xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlC54xlarge = "ml.c5.4xlarge" + + // TransformInstanceTypeMlC59xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlC59xlarge = "ml.c5.9xlarge" + + // TransformInstanceTypeMlC518xlarge is a TransformInstanceType enum value + 
TransformInstanceTypeMlC518xlarge = "ml.c5.18xlarge" + + // TransformInstanceTypeMlM5Large is a TransformInstanceType enum value + TransformInstanceTypeMlM5Large = "ml.m5.large" + + // TransformInstanceTypeMlM5Xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM5Xlarge = "ml.m5.xlarge" + + // TransformInstanceTypeMlM52xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM52xlarge = "ml.m5.2xlarge" + + // TransformInstanceTypeMlM54xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM54xlarge = "ml.m5.4xlarge" + + // TransformInstanceTypeMlM512xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM512xlarge = "ml.m5.12xlarge" + + // TransformInstanceTypeMlM524xlarge is a TransformInstanceType enum value + TransformInstanceTypeMlM524xlarge = "ml.m5.24xlarge" +) + +const ( + // TransformJobStatusInProgress is a TransformJobStatus enum value + TransformJobStatusInProgress = "InProgress" + + // TransformJobStatusCompleted is a TransformJobStatus enum value + TransformJobStatusCompleted = "Completed" + + // TransformJobStatusFailed is a TransformJobStatus enum value + TransformJobStatusFailed = "Failed" + + // TransformJobStatusStopping is a TransformJobStatus enum value + TransformJobStatusStopping = "Stopping" + + // TransformJobStatusStopped is a TransformJobStatus enum value + TransformJobStatusStopped = "Stopped" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/sagemaker/service.go b/vendor/github.com/aws/aws-sdk-go/service/sagemaker/service.go index fac6d92bae3..7ae1df73414 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sagemaker/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sagemaker/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "sagemaker" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "sagemaker" // Name of service. + EndpointsID = "api.sagemaker" // ID to lookup a service endpoint with. + ServiceID = "SageMaker" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the SageMaker client with a session. @@ -45,19 +46,20 @@ const ( // svc := sagemaker.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *SageMaker { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "sagemaker" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. 
func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *SageMaker { - if len(signingName) == 0 { - signingName = "sagemaker" - } svc := &SageMaker{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/sagemaker/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/sagemaker/waiters.go index c8515cc633d..e4054f0ffaf 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sagemaker/waiters.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sagemaker/waiters.go @@ -329,3 +329,64 @@ func (c *SageMaker) WaitUntilTrainingJobCompletedOrStoppedWithContext(ctx aws.Co return w.WaitWithContext(ctx) } + +// WaitUntilTransformJobCompletedOrStopped uses the SageMaker API operation +// DescribeTransformJob to wait for a condition to be met before returning. +// If the condition is not met within the max attempt window, an error will +// be returned. +func (c *SageMaker) WaitUntilTransformJobCompletedOrStopped(input *DescribeTransformJobInput) error { + return c.WaitUntilTransformJobCompletedOrStoppedWithContext(aws.BackgroundContext(), input) +} + +// WaitUntilTransformJobCompletedOrStoppedWithContext is an extended version of WaitUntilTransformJobCompletedOrStopped. +// With the support for passing in a context and options to configure the +// Waiter and the underlying request options. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) WaitUntilTransformJobCompletedOrStoppedWithContext(ctx aws.Context, input *DescribeTransformJobInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "WaitUntilTransformJobCompletedOrStopped", + MaxAttempts: 60, + Delay: request.ConstantWaiterDelay(60 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathWaiterMatch, Argument: "TransformJobStatus", + Expected: "Completed", + }, + { + State: request.SuccessWaiterState, + Matcher: request.PathWaiterMatch, Argument: "TransformJobStatus", + Expected: "Stopped", + }, + { + State: request.FailureWaiterState, + Matcher: request.PathWaiterMatch, Argument: "TransformJobStatus", + Expected: "Failed", + }, + { + State: request.FailureWaiterState, + Matcher: request.ErrorWaiterMatch, + Expected: "ValidationException", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *DescribeTransformJobInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeTransformJobRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + w.ApplyOptions(opts...) + + return w.WaitWithContext(ctx) +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/api.go b/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/api.go new file mode 100644 index 00000000000..6ba9f3e8395 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/api.go @@ -0,0 +1,5411 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
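// [Editor's note - illustrative only, not part of the generated file]
// A minimal sketch of how this newly vendored Secrets Manager client is
// typically constructed and used, assuming valid AWS credentials; the secret
// name "example/secret" and its value are hypothetical. Assumes the usual
// aws, session, secretsmanager and log imports.
//
//	sess := session.Must(session.NewSession())
//	svc := secretsmanager.New(sess)
//
//	// Create a secret holding a string value.
//	_, err := svc.CreateSecret(&secretsmanager.CreateSecretInput{
//		Name:         aws.String("example/secret"),
//		SecretString: aws.String(`{"username":"admin","password":"hunter2"}`),
//	})
//	if err != nil {
//		log.Fatal(err)
//	}
//
//	// Read it back; SecretString carries the decrypted value.
//	out, err := svc.GetSecretValue(&secretsmanager.GetSecretValueInput{
//		SecretId: aws.String("example/secret"),
//	})
//	if err != nil {
//		log.Fatal(err)
//	}
//	log.Println(aws.StringValue(out.SecretString))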
+ +package secretsmanager + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +const opCancelRotateSecret = "CancelRotateSecret" + +// CancelRotateSecretRequest generates a "aws/request.Request" representing the +// client's request for the CancelRotateSecret operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelRotateSecret for more information on using the CancelRotateSecret +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelRotateSecretRequest method. +// req, resp := client.CancelRotateSecretRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/CancelRotateSecret +func (c *SecretsManager) CancelRotateSecretRequest(input *CancelRotateSecretInput) (req *request.Request, output *CancelRotateSecretOutput) { + op := &request.Operation{ + Name: opCancelRotateSecret, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelRotateSecretInput{} + } + + output = &CancelRotateSecretOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelRotateSecret API operation for AWS Secrets Manager. +// +// Disables automatic scheduled rotation and cancels the rotation of a secret +// if one is currently in progress. +// +// To re-enable scheduled rotation, call RotateSecret with AutomaticallyRotateAfterDays +// set to a value greater than 0. This will immediately rotate your secret and +// then enable the automatic schedule. +// +// If you cancel a rotation that is in progress, it can leave the VersionStage +// labels in an unexpected state. Depending on what step of the rotation was +// in progress, you might need to remove the staging label AWSPENDING from the +// partially created version, specified by the VersionId response value. You +// should also evaluate the partially rotated new version to see if it should +// be deleted, which you can do by removing all staging labels from the new +// version's VersionStage field. +// +// To successfully start a rotation, the staging label AWSPENDING must be in +// one of the following states: +// +// * Not be attached to any version at all +// +// * Attached to the same version as the staging label AWSCURRENT +// +// If the staging label AWSPENDING is attached to a different version than the +// version with AWSCURRENT then the attempt to rotate fails. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:CancelRotateSecret +// +// Related operations +// +// * To configure rotation for a secret or to manually trigger a rotation, +// use RotateSecret. +// +// * To get the rotation configuration details for a secret, use DescribeSecret. +// +// * To list all of the currently available secrets, use ListSecrets. 
+// +// * To list all of the versions currently associated with a secret, use +// ListSecretVersionIds. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation CancelRotateSecret for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/CancelRotateSecret +func (c *SecretsManager) CancelRotateSecret(input *CancelRotateSecretInput) (*CancelRotateSecretOutput, error) { + req, out := c.CancelRotateSecretRequest(input) + return out, req.Send() +} + +// CancelRotateSecretWithContext is the same as CancelRotateSecret with the addition of +// the ability to pass a context and additional request options. +// +// See CancelRotateSecret for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) CancelRotateSecretWithContext(ctx aws.Context, input *CancelRotateSecretInput, opts ...request.Option) (*CancelRotateSecretOutput, error) { + req, out := c.CancelRotateSecretRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateSecret = "CreateSecret" + +// CreateSecretRequest generates a "aws/request.Request" representing the +// client's request for the CreateSecret operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSecret for more information on using the CreateSecret +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSecretRequest method. 
+// req, resp := client.CreateSecretRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/CreateSecret +func (c *SecretsManager) CreateSecretRequest(input *CreateSecretInput) (req *request.Request, output *CreateSecretOutput) { + op := &request.Operation{ + Name: opCreateSecret, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateSecretInput{} + } + + output = &CreateSecretOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSecret API operation for AWS Secrets Manager. +// +// Creates a new secret. A secret in Secrets Manager consists of both the protected +// secret data and the important information needed to manage the secret. +// +// Secrets Manager stores the encrypted secret data in one of a collection of +// "versions" associated with the secret. Each version contains a copy of the +// encrypted secret data. Each version is associated with one or more "staging +// labels" that identify where the version is in the rotation cycle. The SecretVersionsToStages +// field of the secret contains the mapping of staging labels to the active +// versions of the secret. Versions without a staging label are considered deprecated +// and are not included in the list. +// +// You provide the secret data to be encrypted by putting text in either the +// SecretString parameter or binary data in the SecretBinary parameter, but +// not both. If you include SecretString or SecretBinary then Secrets Manager +// also creates an initial secret version and automatically attaches the staging +// label AWSCURRENT to the new version. +// +// If you call an operation that needs to encrypt or decrypt the SecretString +// or SecretBinary for a secret in the same account as the calling user and +// that secret doesn't specify a AWS KMS encryption key, Secrets Manager uses +// the account's default AWS managed customer master key (CMK) with the alias +// aws/secretsmanager. If this key doesn't already exist in your account then +// Secrets Manager creates it for you automatically. All users and roles in +// the same AWS account automatically have access to use the default CMK. Note +// that if an Secrets Manager API call results in AWS having to create the account's +// AWS-managed CMK, it can result in a one-time significant delay in returning +// the result. +// +// If the secret is in a different AWS account from the credentials calling +// an API that requires encryption or decryption of the secret value then you +// must create and use a custom AWS KMS CMK because you can't access the default +// CMK for the account using credentials from a different AWS account. Store +// the ARN of the CMK in the secret when you create the secret or when you update +// it by including it in the KMSKeyId. If you call an API that must encrypt +// or decrypt SecretString or SecretBinary using credentials from a different +// account then the AWS KMS key policy must grant cross-account access to that +// other account's user or role for both the kms:GenerateDataKey and kms:Decrypt +// operations. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:CreateSecret +// +// * kms:GenerateDataKey - needed only if you use a customer-managed AWS +// KMS key to encrypt the secret. 
You do not need this permission to use +// the account's default AWS managed CMK for Secrets Manager. +// +// * kms:Decrypt - needed only if you use a customer-managed AWS KMS key +// to encrypt the secret. You do not need this permission to use the account's +// default AWS managed CMK for Secrets Manager. +// +// * secretsmanager:TagResource - needed only if you include the Tags parameter. +// +// +// Related operations +// +// * To delete a secret, use DeleteSecret. +// +// * To modify an existing secret, use UpdateSecret. +// +// * To create a new version of a secret, use PutSecretValue. +// +// * To retrieve the encrypted secure string and secure binary values, use +// GetSecretValue. +// +// * To retrieve all other details for a secret, use DescribeSecret. This +// does not include the encrypted secure string and secure binary values. +// +// * To retrieve the list of secret versions associated with the current +// secret, use DescribeSecret and examine the SecretVersionsToStages response +// value. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation CreateSecret for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because it would exceed one of the Secrets Manager internal +// limits. +// +// * ErrCodeEncryptionFailure "EncryptionFailure" +// Secrets Manager can't encrypt the protected secret text using the provided +// KMS key. Check that the customer master key (CMK) is available, enabled, +// and not in an invalid state. For more information, see How Key State Affects +// Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html). +// +// * ErrCodeResourceExistsException "ResourceExistsException" +// A resource with the ID you requested already exists. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocumentException" +// The policy document that you provided isn't valid. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// * ErrCodePreconditionNotMetException "PreconditionNotMetException" +// The request failed because you did not complete all the prerequisite steps. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/CreateSecret +func (c *SecretsManager) CreateSecret(input *CreateSecretInput) (*CreateSecretOutput, error) { + req, out := c.CreateSecretRequest(input) + return out, req.Send() +} + +// CreateSecretWithContext is the same as CreateSecret with the addition of +// the ability to pass a context and additional request options. +// +// See CreateSecret for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) CreateSecretWithContext(ctx aws.Context, input *CreateSecretInput, opts ...request.Option) (*CreateSecretOutput, error) { + req, out := c.CreateSecretRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteResourcePolicy = "DeleteResourcePolicy" + +// DeleteResourcePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteResourcePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteResourcePolicy for more information on using the DeleteResourcePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteResourcePolicyRequest method. +// req, resp := client.DeleteResourcePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/DeleteResourcePolicy +func (c *SecretsManager) DeleteResourcePolicyRequest(input *DeleteResourcePolicyInput) (req *request.Request, output *DeleteResourcePolicyOutput) { + op := &request.Operation{ + Name: opDeleteResourcePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteResourcePolicyInput{} + } + + output = &DeleteResourcePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteResourcePolicy API operation for AWS Secrets Manager. +// +// Deletes the resource-based permission policy that's attached to the secret. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:DeleteResourcePolicy +// +// Related operations +// +// * To attach a resource policy to a secret, use PutResourcePolicy. +// +// * To retrieve the current resource-based policy that's attached to a secret, +// use GetResourcePolicy. +// +// * To list all of the currently available secrets, use ListSecrets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation DeleteResourcePolicy for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/DeleteResourcePolicy +func (c *SecretsManager) DeleteResourcePolicy(input *DeleteResourcePolicyInput) (*DeleteResourcePolicyOutput, error) { + req, out := c.DeleteResourcePolicyRequest(input) + return out, req.Send() +} + +// DeleteResourcePolicyWithContext is the same as DeleteResourcePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteResourcePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) DeleteResourcePolicyWithContext(ctx aws.Context, input *DeleteResourcePolicyInput, opts ...request.Option) (*DeleteResourcePolicyOutput, error) { + req, out := c.DeleteResourcePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteSecret = "DeleteSecret" + +// DeleteSecretRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSecret operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteSecret for more information on using the DeleteSecret +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteSecretRequest method. +// req, resp := client.DeleteSecretRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/DeleteSecret +func (c *SecretsManager) DeleteSecretRequest(input *DeleteSecretInput) (req *request.Request, output *DeleteSecretOutput) { + op := &request.Operation{ + Name: opDeleteSecret, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteSecretInput{} + } + + output = &DeleteSecretOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteSecret API operation for AWS Secrets Manager. +// +// Deletes an entire secret and all of its versions. You can optionally include +// a recovery window during which you can restore the secret. If you don't specify +// a recovery window value, the operation defaults to 30 days. 
Secrets Manager +// attaches a DeletionDate stamp to the secret that specifies the end of the +// recovery window. At the end of the recovery window, Secrets Manager deletes +// the secret permanently. +// +// At any time before recovery window ends, you can use RestoreSecret to remove +// the DeletionDate and cancel the deletion of the secret. +// +// You cannot access the encrypted secret information in any secret that is +// scheduled for deletion. If you need to access that information, you must +// cancel the deletion with RestoreSecret and then retrieve the information. +// +// There is no explicit operation to delete a version of a secret. Instead, +// remove all staging labels from the VersionStage field of a version. That +// marks the version as deprecated and allows Secrets Manager to delete it as +// needed. Versions that do not have any staging labels do not show up in ListSecretVersionIds +// unless you specify IncludeDeprecated. +// +// The permanent secret deletion at the end of the waiting period is performed +// as a background task with low priority. There is no guarantee of a specific +// time after the recovery window for the actual delete operation to occur. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:DeleteSecret +// +// Related operations +// +// * To create a secret, use CreateSecret. +// +// * To cancel deletion of a version of a secret before the recovery window +// has expired, use RestoreSecret. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation DeleteSecret for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/DeleteSecret +func (c *SecretsManager) DeleteSecret(input *DeleteSecretInput) (*DeleteSecretOutput, error) { + req, out := c.DeleteSecretRequest(input) + return out, req.Send() +} + +// DeleteSecretWithContext is the same as DeleteSecret with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSecret for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
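// [Editor's note - illustrative only, not part of the generated file]
// A brief sketch of scheduling deletion with the recovery window described
// above; the secret ID is hypothetical and svc is an already constructed
// *secretsmanager.SecretsManager client.
//
//	out, err := svc.DeleteSecret(&secretsmanager.DeleteSecretInput{
//		SecretId:             aws.String("example/secret"),
//		RecoveryWindowInDays: aws.Int64(7), // defaults to 30 days when omitted
//	})
//	if err != nil {
//		log.Fatal(err)
//	}
//	// DeletionDate marks the end of the recovery window; RestoreSecret can
//	// cancel the deletion at any time before then.
//	log.Println(aws.TimeValue(out.DeletionDate))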
+func (c *SecretsManager) DeleteSecretWithContext(ctx aws.Context, input *DeleteSecretInput, opts ...request.Option) (*DeleteSecretOutput, error) { + req, out := c.DeleteSecretRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeSecret = "DescribeSecret" + +// DescribeSecretRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSecret operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeSecret for more information on using the DescribeSecret +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeSecretRequest method. +// req, resp := client.DescribeSecretRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/DescribeSecret +func (c *SecretsManager) DescribeSecretRequest(input *DescribeSecretInput) (req *request.Request, output *DescribeSecretOutput) { + op := &request.Operation{ + Name: opDescribeSecret, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeSecretInput{} + } + + output = &DescribeSecretOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeSecret API operation for AWS Secrets Manager. +// +// Retrieves the details of a secret. It does not include the encrypted fields. +// Only those fields that are populated with a value are returned in the response. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:DescribeSecret +// +// Related operations +// +// * To create a secret, use CreateSecret. +// +// * To modify a secret, use UpdateSecret. +// +// * To retrieve the encrypted secret information in a version of the secret, +// use GetSecretValue. +// +// * To list all of the secrets in the AWS account, use ListSecrets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation DescribeSecret for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/DescribeSecret +func (c *SecretsManager) DescribeSecret(input *DescribeSecretInput) (*DescribeSecretOutput, error) { + req, out := c.DescribeSecretRequest(input) + return out, req.Send() +} + +// DescribeSecretWithContext is the same as DescribeSecret with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSecret for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) DescribeSecretWithContext(ctx aws.Context, input *DescribeSecretInput, opts ...request.Option) (*DescribeSecretOutput, error) { + req, out := c.DescribeSecretRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetRandomPassword = "GetRandomPassword" + +// GetRandomPasswordRequest generates a "aws/request.Request" representing the +// client's request for the GetRandomPassword operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetRandomPassword for more information on using the GetRandomPassword +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetRandomPasswordRequest method. +// req, resp := client.GetRandomPasswordRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/GetRandomPassword +func (c *SecretsManager) GetRandomPasswordRequest(input *GetRandomPasswordInput) (req *request.Request, output *GetRandomPasswordOutput) { + op := &request.Operation{ + Name: opGetRandomPassword, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetRandomPasswordInput{} + } + + output = &GetRandomPasswordOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetRandomPassword API operation for AWS Secrets Manager. +// +// Generates a random password of the specified complexity. This operation is +// intended for use in the Lambda rotation function. Per best practice, we recommend +// that you specify the maximum length and include every character type that +// the system you are generating a password for can support. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:GetRandomPassword +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation GetRandomPassword for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. 
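// [Editor's note - illustrative only, not part of the generated file]
// A short sketch of requesting a password with the complexity controls
// described above; the field values are arbitrary examples and svc is an
// already constructed *secretsmanager.SecretsManager client.
//
//	out, err := svc.GetRandomPassword(&secretsmanager.GetRandomPasswordInput{
//		PasswordLength:     aws.Int64(32),
//		ExcludePunctuation: aws.Bool(true),
//	})
//	if err != nil {
//		log.Fatal(err)
//	}
//	log.Println(aws.StringValue(out.RandomPassword))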
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/GetRandomPassword +func (c *SecretsManager) GetRandomPassword(input *GetRandomPasswordInput) (*GetRandomPasswordOutput, error) { + req, out := c.GetRandomPasswordRequest(input) + return out, req.Send() +} + +// GetRandomPasswordWithContext is the same as GetRandomPassword with the addition of +// the ability to pass a context and additional request options. +// +// See GetRandomPassword for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) GetRandomPasswordWithContext(ctx aws.Context, input *GetRandomPasswordInput, opts ...request.Option) (*GetRandomPasswordOutput, error) { + req, out := c.GetRandomPasswordRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetResourcePolicy = "GetResourcePolicy" + +// GetResourcePolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetResourcePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetResourcePolicy for more information on using the GetResourcePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetResourcePolicyRequest method. +// req, resp := client.GetResourcePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/GetResourcePolicy +func (c *SecretsManager) GetResourcePolicyRequest(input *GetResourcePolicyInput) (req *request.Request, output *GetResourcePolicyOutput) { + op := &request.Operation{ + Name: opGetResourcePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetResourcePolicyInput{} + } + + output = &GetResourcePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetResourcePolicy API operation for AWS Secrets Manager. +// +// Retrieves the JSON text of the resource-based policy document that's attached +// to the specified secret. The JSON request string input and response output +// are shown formatted with white space and line breaks for better readability. +// Submit your input as a single line JSON string. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:GetResourcePolicy +// +// Related operations +// +// * To attach a resource policy to a secret, use PutResourcePolicy. +// +// * To delete the resource-based policy that's attached to a secret, use +// DeleteResourcePolicy. +// +// * To list all of the currently available secrets, use ListSecrets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation GetResourcePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/GetResourcePolicy +func (c *SecretsManager) GetResourcePolicy(input *GetResourcePolicyInput) (*GetResourcePolicyOutput, error) { + req, out := c.GetResourcePolicyRequest(input) + return out, req.Send() +} + +// GetResourcePolicyWithContext is the same as GetResourcePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetResourcePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) GetResourcePolicyWithContext(ctx aws.Context, input *GetResourcePolicyInput, opts ...request.Option) (*GetResourcePolicyOutput, error) { + req, out := c.GetResourcePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetSecretValue = "GetSecretValue" + +// GetSecretValueRequest generates a "aws/request.Request" representing the +// client's request for the GetSecretValue operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetSecretValue for more information on using the GetSecretValue +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetSecretValueRequest method. +// req, resp := client.GetSecretValueRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/GetSecretValue +func (c *SecretsManager) GetSecretValueRequest(input *GetSecretValueInput) (req *request.Request, output *GetSecretValueOutput) { + op := &request.Operation{ + Name: opGetSecretValue, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetSecretValueInput{} + } + + output = &GetSecretValueOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetSecretValue API operation for AWS Secrets Manager. 
+// +// Retrieves the contents of the encrypted fields SecretString or SecretBinary +// from the specified version of a secret, whichever contains content. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:GetSecretValue +// +// * kms:Decrypt - required only if you use a customer-managed AWS KMS key +// to encrypt the secret. You do not need this permission to use the account's +// default AWS managed CMK for Secrets Manager. +// +// Related operations +// +// * To create a new version of the secret with different encrypted information, +// use PutSecretValue. +// +// * To retrieve the non-encrypted details for the secret, use DescribeSecret. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation GetSecretValue for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeDecryptionFailure "DecryptionFailure" +// Secrets Manager can't decrypt the protected secret text using the provided +// KMS key. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/GetSecretValue +func (c *SecretsManager) GetSecretValue(input *GetSecretValueInput) (*GetSecretValueOutput, error) { + req, out := c.GetSecretValueRequest(input) + return out, req.Send() +} + +// GetSecretValueWithContext is the same as GetSecretValue with the addition of +// the ability to pass a context and additional request options. +// +// See GetSecretValue for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) GetSecretValueWithContext(ctx aws.Context, input *GetSecretValueInput, opts ...request.Option) (*GetSecretValueOutput, error) { + req, out := c.GetSecretValueRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListSecretVersionIds = "ListSecretVersionIds" + +// ListSecretVersionIdsRequest generates a "aws/request.Request" representing the +// client's request for the ListSecretVersionIds operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See ListSecretVersionIds for more information on using the ListSecretVersionIds +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListSecretVersionIdsRequest method. +// req, resp := client.ListSecretVersionIdsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/ListSecretVersionIds +func (c *SecretsManager) ListSecretVersionIdsRequest(input *ListSecretVersionIdsInput) (req *request.Request, output *ListSecretVersionIdsOutput) { + op := &request.Operation{ + Name: opListSecretVersionIds, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListSecretVersionIdsInput{} + } + + output = &ListSecretVersionIdsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListSecretVersionIds API operation for AWS Secrets Manager. +// +// Lists all of the versions attached to the specified secret. The output does +// not include the SecretString or SecretBinary fields. By default, the list +// includes only versions that have at least one staging label in VersionStage +// attached. +// +// Always check the NextToken response parameter when calling any of the List* +// operations. These operations can occasionally return an empty or shorter +// than expected list of results even when there are more results available. +// When this happens, the NextToken response parameter contains a value to pass +// to the next call to the same API to request the next part of the list. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:ListSecretVersionIds +// +// Related operations +// +// * To list the secrets in an account, use ListSecrets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation ListSecretVersionIds for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// You provided an invalid NextToken value. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/ListSecretVersionIds +func (c *SecretsManager) ListSecretVersionIds(input *ListSecretVersionIdsInput) (*ListSecretVersionIdsOutput, error) { + req, out := c.ListSecretVersionIdsRequest(input) + return out, req.Send() +} + +// ListSecretVersionIdsWithContext is the same as ListSecretVersionIds with the addition of +// the ability to pass a context and additional request options. +// +// See ListSecretVersionIds for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) ListSecretVersionIdsWithContext(ctx aws.Context, input *ListSecretVersionIdsInput, opts ...request.Option) (*ListSecretVersionIdsOutput, error) { + req, out := c.ListSecretVersionIdsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListSecretVersionIdsPages iterates over the pages of a ListSecretVersionIds operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListSecretVersionIds method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListSecretVersionIds operation. +// pageNum := 0 +// err := client.ListSecretVersionIdsPages(params, +// func(page *ListSecretVersionIdsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SecretsManager) ListSecretVersionIdsPages(input *ListSecretVersionIdsInput, fn func(*ListSecretVersionIdsOutput, bool) bool) error { + return c.ListSecretVersionIdsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListSecretVersionIdsPagesWithContext same as ListSecretVersionIdsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) ListSecretVersionIdsPagesWithContext(ctx aws.Context, input *ListSecretVersionIdsInput, fn func(*ListSecretVersionIdsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListSecretVersionIdsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListSecretVersionIdsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListSecretVersionIdsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListSecrets = "ListSecrets" + +// ListSecretsRequest generates a "aws/request.Request" representing the +// client's request for the ListSecrets operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListSecrets for more information on using the ListSecrets +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListSecretsRequest method. 
+// req, resp := client.ListSecretsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/ListSecrets +func (c *SecretsManager) ListSecretsRequest(input *ListSecretsInput) (req *request.Request, output *ListSecretsOutput) { + op := &request.Operation{ + Name: opListSecrets, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListSecretsInput{} + } + + output = &ListSecretsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListSecrets API operation for AWS Secrets Manager. +// +// Lists all of the secrets that are stored by Secrets Manager in the AWS account. +// To list the versions currently stored for a specific secret, use ListSecretVersionIds. +// The encrypted fields SecretString and SecretBinary are not included in the +// output. To get that information, call the GetSecretValue operation. +// +// Always check the NextToken response parameter when calling any of the List* +// operations. These operations can occasionally return an empty or shorter +// than expected list of results even when there are more results available. +// When this happens, the NextToken response parameter contains a value to pass +// to the next call to the same API to request the next part of the list. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:ListSecrets +// +// Related operations +// +// * To list the versions attached to a secret, use ListSecretVersionIds. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation ListSecrets for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInvalidNextTokenException "InvalidNextTokenException" +// You provided an invalid NextToken value. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/ListSecrets +func (c *SecretsManager) ListSecrets(input *ListSecretsInput) (*ListSecretsOutput, error) { + req, out := c.ListSecretsRequest(input) + return out, req.Send() +} + +// ListSecretsWithContext is the same as ListSecrets with the addition of +// the ability to pass a context and additional request options. +// +// See ListSecrets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) ListSecretsWithContext(ctx aws.Context, input *ListSecretsInput, opts ...request.Option) (*ListSecretsOutput, error) { + req, out := c.ListSecretsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +// ListSecretsPages iterates over the pages of a ListSecrets operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListSecrets method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListSecrets operation. +// pageNum := 0 +// err := client.ListSecretsPages(params, +// func(page *ListSecretsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SecretsManager) ListSecretsPages(input *ListSecretsInput, fn func(*ListSecretsOutput, bool) bool) error { + return c.ListSecretsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListSecretsPagesWithContext same as ListSecretsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) ListSecretsPagesWithContext(ctx aws.Context, input *ListSecretsInput, fn func(*ListSecretsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListSecretsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListSecretsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListSecretsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opPutResourcePolicy = "PutResourcePolicy" + +// PutResourcePolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutResourcePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutResourcePolicy for more information on using the PutResourcePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutResourcePolicyRequest method. +// req, resp := client.PutResourcePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/PutResourcePolicy +func (c *SecretsManager) PutResourcePolicyRequest(input *PutResourcePolicyInput) (req *request.Request, output *PutResourcePolicyOutput) { + op := &request.Operation{ + Name: opPutResourcePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutResourcePolicyInput{} + } + + output = &PutResourcePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutResourcePolicy API operation for AWS Secrets Manager. +// +// Attaches the contents of the specified resource-based permission policy to +// a secret. A resource-based policy is optional. 
Alternatively, you can use +// IAM identity-based policies that specify the secret's Amazon Resource Name +// (ARN) in the policy statement's Resources element. You can also use a combination +// of both identity-based and resource-based policies. The affected users and +// roles receive the permissions that are permitted by all of the relevant policies. +// For more information, see Using Resource-Based Policies for AWS Secrets Manager +// (http://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-based-policies.html). +// For the complete description of the AWS policy syntax and grammar, see IAM +// JSON Policy Reference (http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies.html) +// in the IAM User Guide. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:PutResourcePolicy +// +// Related operations +// +// * To retrieve the resource policy that's attached to a secret, use GetResourcePolicy. +// +// * To delete the resource-based policy that's attached to a secret, use +// DeleteResourcePolicy. +// +// * To list all of the currently available secrets, use ListSecrets. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation PutResourcePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocumentException" +// The policy document that you provided isn't valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/PutResourcePolicy +func (c *SecretsManager) PutResourcePolicy(input *PutResourcePolicyInput) (*PutResourcePolicyOutput, error) { + req, out := c.PutResourcePolicyRequest(input) + return out, req.Send() +} + +// PutResourcePolicyWithContext is the same as PutResourcePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutResourcePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) PutResourcePolicyWithContext(ctx aws.Context, input *PutResourcePolicyInput, opts ...request.Option) (*PutResourcePolicyOutput, error) { + req, out := c.PutResourcePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opPutSecretValue = "PutSecretValue" + +// PutSecretValueRequest generates a "aws/request.Request" representing the +// client's request for the PutSecretValue operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutSecretValue for more information on using the PutSecretValue +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutSecretValueRequest method. +// req, resp := client.PutSecretValueRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/PutSecretValue +func (c *SecretsManager) PutSecretValueRequest(input *PutSecretValueInput) (req *request.Request, output *PutSecretValueOutput) { + op := &request.Operation{ + Name: opPutSecretValue, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutSecretValueInput{} + } + + output = &PutSecretValueOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutSecretValue API operation for AWS Secrets Manager. +// +// Stores a new encrypted secret value in the specified secret. To do this, +// the operation creates a new version and attaches it to the secret. The version +// can contain a new SecretString value or a new SecretBinary value. You can +// also specify the staging labels that are initially attached to the new version. +// +// The Secrets Manager console uses only the SecretString field. To add binary +// data to a secret with the SecretBinary field you must use the AWS CLI or +// one of the AWS SDKs. +// +// * If this operation creates the first version for the secret then Secrets +// Manager automatically attaches the staging label AWSCURRENT to the new +// version. +// +// * If another version of this secret already exists, then this operation +// does not automatically move any staging labels other than those that you +// explicitly specify in the VersionStages parameter. +// +// * If this operation moves the staging label AWSCURRENT from another version +// to this version (because you included it in the StagingLabels parameter) +// then Secrets Manager also automatically moves the staging label AWSPREVIOUS +// to the version that AWSCURRENT was removed from. +// +// * This operation is idempotent. If a version with a VersionId with the +// same value as the ClientRequestToken parameter already exists and you +// specify the same secret data, the operation succeeds but does nothing. +// However, if the secret data is different, then the operation fails because +// you cannot modify an existing version; you can only create new ones. +// +// If you call an operation that needs to encrypt or decrypt the SecretString +// or SecretBinary for a secret in the same account as the calling user and +// that secret doesn't specify a AWS KMS encryption key, Secrets Manager uses +// the account's default AWS managed customer master key (CMK) with the alias +// aws/secretsmanager. If this key doesn't already exist in your account then +// Secrets Manager creates it for you automatically. 
All users and roles in +// the same AWS account automatically have access to use the default CMK. Note +// that if an Secrets Manager API call results in AWS having to create the account's +// AWS-managed CMK, it can result in a one-time significant delay in returning +// the result. +// +// If the secret is in a different AWS account from the credentials calling +// an API that requires encryption or decryption of the secret value then you +// must create and use a custom AWS KMS CMK because you can't access the default +// CMK for the account using credentials from a different AWS account. Store +// the ARN of the CMK in the secret when you create the secret or when you update +// it by including it in the KMSKeyId. If you call an API that must encrypt +// or decrypt SecretString or SecretBinary using credentials from a different +// account then the AWS KMS key policy must grant cross-account access to that +// other account's user or role for both the kms:GenerateDataKey and kms:Decrypt +// operations. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:PutSecretValue +// +// * kms:GenerateDataKey - needed only if you use a customer-managed AWS +// KMS key to encrypt the secret. You do not need this permission to use +// the account's default AWS managed CMK for Secrets Manager. +// +// Related operations +// +// * To retrieve the encrypted value you store in the version of a secret, +// use GetSecretValue. +// +// * To create a secret, use CreateSecret. +// +// * To get the details for a secret, use DescribeSecret. +// +// * To list the versions attached to a secret, use ListSecretVersionIds. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation PutSecretValue for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because it would exceed one of the Secrets Manager internal +// limits. +// +// * ErrCodeEncryptionFailure "EncryptionFailure" +// Secrets Manager can't encrypt the protected secret text using the provided +// KMS key. Check that the customer master key (CMK) is available, enabled, +// and not in an invalid state. For more information, see How Key State Affects +// Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html). +// +// * ErrCodeResourceExistsException "ResourceExistsException" +// A resource with the ID you requested already exists. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. 
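+//
+// A minimal sketch of storing a new SecretString version with this SDK,
+// assuming the aws, session, and secretsmanager packages are imported and
+// using a placeholder secret name and value:
+//
+//    svc := secretsmanager.New(session.Must(session.NewSession()))
+//
+//    out, err := svc.PutSecretValue(&secretsmanager.PutSecretValueInput{
+//        SecretId:     aws.String("my-app/db-credentials"),
+//        SecretString: aws.String(`{"username":"bob","password":"abc123xyz456"}`),
+//    })
+//    if err != nil {
+//        // Inspect err as an awserr.Error using the codes listed above.
+//        return
+//    }
+//    // VersionId identifies the newly created version of the secret.
+//    fmt.Println(aws.StringValue(out.VersionId))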
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/PutSecretValue +func (c *SecretsManager) PutSecretValue(input *PutSecretValueInput) (*PutSecretValueOutput, error) { + req, out := c.PutSecretValueRequest(input) + return out, req.Send() +} + +// PutSecretValueWithContext is the same as PutSecretValue with the addition of +// the ability to pass a context and additional request options. +// +// See PutSecretValue for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) PutSecretValueWithContext(ctx aws.Context, input *PutSecretValueInput, opts ...request.Option) (*PutSecretValueOutput, error) { + req, out := c.PutSecretValueRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRestoreSecret = "RestoreSecret" + +// RestoreSecretRequest generates a "aws/request.Request" representing the +// client's request for the RestoreSecret operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RestoreSecret for more information on using the RestoreSecret +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RestoreSecretRequest method. +// req, resp := client.RestoreSecretRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/RestoreSecret +func (c *SecretsManager) RestoreSecretRequest(input *RestoreSecretInput) (req *request.Request, output *RestoreSecretOutput) { + op := &request.Operation{ + Name: opRestoreSecret, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RestoreSecretInput{} + } + + output = &RestoreSecretOutput{} + req = c.newRequest(op, input, output) + return +} + +// RestoreSecret API operation for AWS Secrets Manager. +// +// Cancels the scheduled deletion of a secret by removing the DeletedDate time +// stamp. This makes the secret accessible to query once again. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:RestoreSecret +// +// Related operations +// +// * To delete a secret, use DeleteSecret. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation RestoreSecret for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. 
+// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/RestoreSecret +func (c *SecretsManager) RestoreSecret(input *RestoreSecretInput) (*RestoreSecretOutput, error) { + req, out := c.RestoreSecretRequest(input) + return out, req.Send() +} + +// RestoreSecretWithContext is the same as RestoreSecret with the addition of +// the ability to pass a context and additional request options. +// +// See RestoreSecret for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) RestoreSecretWithContext(ctx aws.Context, input *RestoreSecretInput, opts ...request.Option) (*RestoreSecretOutput, error) { + req, out := c.RestoreSecretRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRotateSecret = "RotateSecret" + +// RotateSecretRequest generates a "aws/request.Request" representing the +// client's request for the RotateSecret operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RotateSecret for more information on using the RotateSecret +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RotateSecretRequest method. +// req, resp := client.RotateSecretRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/RotateSecret +func (c *SecretsManager) RotateSecretRequest(input *RotateSecretInput) (req *request.Request, output *RotateSecretOutput) { + op := &request.Operation{ + Name: opRotateSecret, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RotateSecretInput{} + } + + output = &RotateSecretOutput{} + req = c.newRequest(op, input, output) + return +} + +// RotateSecret API operation for AWS Secrets Manager. +// +// Configures and starts the asynchronous process of rotating this secret. If +// you include the configuration parameters, the operation sets those values +// for the secret and then immediately starts a rotation. If you do not include +// the configuration parameters, the operation starts a rotation with the values +// already stored in the secret. After the rotation completes, the protected +// service and its clients all use the new version of the secret. 
+// +// This required configuration information includes the ARN of an AWS Lambda +// function and the time between scheduled rotations. The Lambda rotation function +// creates a new version of the secret and creates or updates the credentials +// on the protected service to match. After testing the new credentials, the +// function marks the new secret with the staging label AWSCURRENT so that your +// clients all immediately begin to use the new version. For more information +// about rotating secrets and how to configure a Lambda function to rotate the +// secrets for your protected service, see Rotating Secrets in AWS Secrets Manager +// (http://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) +// in the AWS Secrets Manager User Guide. +// +// Secrets Manager schedules the next rotation when the previous one is complete. +// Secrets Manager schedules the date by adding the rotation interval (number +// of days) to the actual date of the last rotation. The service chooses the +// hour within that 24-hour date window randomly. The minute is also chosen +// somewhat randomly, but weighted towards the top of the hour and influenced +// by a variety of factors that help distribute load. +// +// The rotation function must end with the versions of the secret in one of +// two states: +// +// * The AWSPENDING and AWSCURRENT staging labels are attached to the same +// version of the secret, or +// +// * The AWSPENDING staging label is not attached to any version of the secret. +// +// If instead the AWSPENDING staging label is present but is not attached to +// the same version as AWSCURRENT then any later invocation of RotateSecret +// assumes that a previous rotation request is still in progress and returns +// an error. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:RotateSecret +// +// * lambda:InvokeFunction (on the function specified in the secret's metadata) +// +// Related operations +// +// * To list the secrets in your account, use ListSecrets. +// +// * To get the details for a version of a secret, use DescribeSecret. +// +// * To create a new version of a secret, use CreateSecret. +// +// * To attach staging labels to or remove staging labels from a version +// of a secret, use UpdateSecretVersionStage. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation RotateSecret for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. 
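+//
+// A minimal sketch of starting rotation on a 30-day schedule, assuming the
+// aws, session, and secretsmanager packages are imported and using placeholder
+// secret and Lambda function ARNs:
+//
+//    svc := secretsmanager.New(session.Must(session.NewSession()))
+//
+//    out, err := svc.RotateSecret(&secretsmanager.RotateSecretInput{
+//        SecretId:          aws.String("my-app/db-credentials"),
+//        RotationLambdaARN: aws.String("arn:aws:lambda:us-east-1:123456789012:function:my-rotation-function"),
+//        RotationRules: &secretsmanager.RotationRulesType{
+//            AutomaticallyAfterDays: aws.Int64(30),
+//        },
+//    })
+//    if err == nil {
+//        // Rotation proceeds asynchronously; out.VersionId identifies the
+//        // new version created for this rotation.
+//        fmt.Println(aws.StringValue(out.VersionId))
+//    }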
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/RotateSecret +func (c *SecretsManager) RotateSecret(input *RotateSecretInput) (*RotateSecretOutput, error) { + req, out := c.RotateSecretRequest(input) + return out, req.Send() +} + +// RotateSecretWithContext is the same as RotateSecret with the addition of +// the ability to pass a context and additional request options. +// +// See RotateSecret for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) RotateSecretWithContext(ctx aws.Context, input *RotateSecretInput, opts ...request.Option) (*RotateSecretOutput, error) { + req, out := c.RotateSecretRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTagResource = "TagResource" + +// TagResourceRequest generates a "aws/request.Request" representing the +// client's request for the TagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TagResource for more information on using the TagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TagResourceRequest method. +// req, resp := client.TagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/TagResource +func (c *SecretsManager) TagResourceRequest(input *TagResourceInput) (req *request.Request, output *TagResourceOutput) { + op := &request.Operation{ + Name: opTagResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TagResourceInput{} + } + + output = &TagResourceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// TagResource API operation for AWS Secrets Manager. +// +// Attaches one or more tags, each consisting of a key name and a value, to +// the specified secret. Tags are part of the secret's overall metadata, and +// are not associated with any specific version of the secret. This operation +// only appends tags to the existing list of tags. To remove tags, you must +// use UntagResource. +// +// The following basic restrictions apply to tags: +// +// * Maximum number of tags per secret—50 +// +// * Maximum key length—127 Unicode characters in UTF-8 +// +// * Maximum value length—255 Unicode characters in UTF-8 +// +// * Tag keys and values are case sensitive. +// +// * Do not use the aws: prefix in your tag names or values because it is +// reserved for AWS use. You can't edit or delete tag names or values with +// this prefix. Tags with this prefix do not count against your tags per +// secret limit. 
+// +// * If your tagging schema will be used across multiple services and resources, +// remember that other services might have restrictions on allowed characters. +// Generally allowed characters are: letters, spaces, and numbers representable +// in UTF-8, plus the following special characters: + - = . _ : / @. +// +// If you use tags as part of your security strategy, then adding or removing +// a tag can change permissions. If successfully completing this operation would +// result in you losing your permissions for this secret, then the operation +// is blocked and returns an Access Denied error. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:TagResource +// +// Related operations +// +// * To remove one or more tags from the collection attached to a secret, +// use UntagResource. +// +// * To view the list of tags attached to a secret, use DescribeSecret. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation TagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/TagResource +func (c *SecretsManager) TagResource(input *TagResourceInput) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + return out, req.Send() +} + +// TagResourceWithContext is the same as TagResource with the addition of +// the ability to pass a context and additional request options. +// +// See TagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) TagResourceWithContext(ctx aws.Context, input *TagResourceInput, opts ...request.Option) (*TagResourceOutput, error) { + req, out := c.TagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUntagResource = "UntagResource" + +// UntagResourceRequest generates a "aws/request.Request" representing the +// client's request for the UntagResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See UntagResource for more information on using the UntagResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UntagResourceRequest method. +// req, resp := client.UntagResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/UntagResource +func (c *SecretsManager) UntagResourceRequest(input *UntagResourceInput) (req *request.Request, output *UntagResourceOutput) { + op := &request.Operation{ + Name: opUntagResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UntagResourceInput{} + } + + output = &UntagResourceOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UntagResource API operation for AWS Secrets Manager. +// +// Removes one or more tags from the specified secret. +// +// This operation is idempotent. If a requested tag is not attached to the secret, +// no error is returned and the secret metadata is unchanged. +// +// If you use tags as part of your security strategy, then removing a tag can +// change permissions. If successfully completing this operation would result +// in you losing your permissions for this secret, then the operation is blocked +// and returns an Access Denied error. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:UntagResource +// +// Related operations +// +// * To add one or more tags to the collection attached to a secret, use +// TagResource. +// +// * To view the list of tags attached to a secret, use DescribeSecret. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation UntagResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. 
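+//
+// A minimal sketch of removing two tags by key, assuming the aws, session,
+// and secretsmanager packages are imported and using a placeholder secret
+// name and tag keys:
+//
+//    svc := secretsmanager.New(session.Must(session.NewSession()))
+//
+//    _, err := svc.UntagResource(&secretsmanager.UntagResourceInput{
+//        SecretId: aws.String("my-app/db-credentials"),
+//        TagKeys:  aws.StringSlice([]string{"CostCenter", "environment"}),
+//    })
+//    if err != nil {
+//        // Inspect err as an awserr.Error using the codes listed above.
+//    }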
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/UntagResource +func (c *SecretsManager) UntagResource(input *UntagResourceInput) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + return out, req.Send() +} + +// UntagResourceWithContext is the same as UntagResource with the addition of +// the ability to pass a context and additional request options. +// +// See UntagResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) UntagResourceWithContext(ctx aws.Context, input *UntagResourceInput, opts ...request.Option) (*UntagResourceOutput, error) { + req, out := c.UntagResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSecret = "UpdateSecret" + +// UpdateSecretRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSecret operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSecret for more information on using the UpdateSecret +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSecretRequest method. +// req, resp := client.UpdateSecretRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/UpdateSecret +func (c *SecretsManager) UpdateSecretRequest(input *UpdateSecretInput) (req *request.Request, output *UpdateSecretOutput) { + op := &request.Operation{ + Name: opUpdateSecret, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSecretInput{} + } + + output = &UpdateSecretOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSecret API operation for AWS Secrets Manager. +// +// Modifies many of the details of the specified secret. If you include a ClientRequestToken +// and eitherSecretString or SecretBinary then it also creates a new version +// attached to the secret. +// +// To modify the rotation configuration of a secret, use RotateSecret instead. +// +// The Secrets Manager console uses only the SecretString parameter and therefore +// limits you to encrypting and storing only a text string. To encrypt and store +// binary data as part of the version of a secret, you must use either the AWS +// CLI or one of the AWS SDKs. +// +// * If a version with a VersionId with the same value as the ClientRequestToken +// parameter already exists, the operation results in an error. You cannot +// modify an existing version, you can only create a new version. +// +// * If you include SecretString or SecretBinary to create a new secret version, +// Secrets Manager automatically attaches the staging label AWSCURRENT to +// the new version. 
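+//
+// A minimal sketch of updating the description and storing a new SecretString
+// version in a single call, assuming the aws, session, and secretsmanager
+// packages are imported and using placeholder values:
+//
+//    svc := secretsmanager.New(session.Must(session.NewSession()))
+//
+//    _, err := svc.UpdateSecret(&secretsmanager.UpdateSecretInput{
+//        SecretId:     aws.String("my-app/db-credentials"),
+//        Description:  aws.String("Production database credentials"),
+//        SecretString: aws.String(`{"username":"bob","password":"abc123xyz456"}`),
+//    })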
+// +// If you call an operation that needs to encrypt or decrypt the SecretString +// or SecretBinary for a secret in the same account as the calling user and +// that secret doesn't specify a AWS KMS encryption key, Secrets Manager uses +// the account's default AWS managed customer master key (CMK) with the alias +// aws/secretsmanager. If this key doesn't already exist in your account then +// Secrets Manager creates it for you automatically. All users and roles in +// the same AWS account automatically have access to use the default CMK. Note +// that if an Secrets Manager API call results in AWS having to create the account's +// AWS-managed CMK, it can result in a one-time significant delay in returning +// the result. +// +// If the secret is in a different AWS account from the credentials calling +// an API that requires encryption or decryption of the secret value then you +// must create and use a custom AWS KMS CMK because you can't access the default +// CMK for the account using credentials from a different AWS account. Store +// the ARN of the CMK in the secret when you create the secret or when you update +// it by including it in the KMSKeyId. If you call an API that must encrypt +// or decrypt SecretString or SecretBinary using credentials from a different +// account then the AWS KMS key policy must grant cross-account access to that +// other account's user or role for both the kms:GenerateDataKey and kms:Decrypt +// operations. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:UpdateSecret +// +// * kms:GenerateDataKey - needed only if you use a custom AWS KMS key to +// encrypt the secret. You do not need this permission to use the account's +// AWS managed CMK for Secrets Manager. +// +// * kms:Decrypt - needed only if you use a custom AWS KMS key to encrypt +// the secret. You do not need this permission to use the account's AWS managed +// CMK for Secrets Manager. +// +// Related operations +// +// * To create a new secret, use CreateSecret. +// +// * To add only a new version to an existing secret, use PutSecretValue. +// +// * To get the details for a secret, use DescribeSecret. +// +// * To list the versions contained in a secret, use ListSecretVersionIds. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation UpdateSecret for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because it would exceed one of the Secrets Manager internal +// limits. +// +// * ErrCodeEncryptionFailure "EncryptionFailure" +// Secrets Manager can't encrypt the protected secret text using the provided +// KMS key. 
Check that the customer master key (CMK) is available, enabled, +// and not in an invalid state. For more information, see How Key State Affects +// Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html). +// +// * ErrCodeResourceExistsException "ResourceExistsException" +// A resource with the ID you requested already exists. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeMalformedPolicyDocumentException "MalformedPolicyDocumentException" +// The policy document that you provided isn't valid. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// * ErrCodePreconditionNotMetException "PreconditionNotMetException" +// The request failed because you did not complete all the prerequisite steps. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/UpdateSecret +func (c *SecretsManager) UpdateSecret(input *UpdateSecretInput) (*UpdateSecretOutput, error) { + req, out := c.UpdateSecretRequest(input) + return out, req.Send() +} + +// UpdateSecretWithContext is the same as UpdateSecret with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSecret for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) UpdateSecretWithContext(ctx aws.Context, input *UpdateSecretInput, opts ...request.Option) (*UpdateSecretOutput, error) { + req, out := c.UpdateSecretRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSecretVersionStage = "UpdateSecretVersionStage" + +// UpdateSecretVersionStageRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSecretVersionStage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSecretVersionStage for more information on using the UpdateSecretVersionStage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSecretVersionStageRequest method. 
+// req, resp := client.UpdateSecretVersionStageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/UpdateSecretVersionStage +func (c *SecretsManager) UpdateSecretVersionStageRequest(input *UpdateSecretVersionStageInput) (req *request.Request, output *UpdateSecretVersionStageOutput) { + op := &request.Operation{ + Name: opUpdateSecretVersionStage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSecretVersionStageInput{} + } + + output = &UpdateSecretVersionStageOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSecretVersionStage API operation for AWS Secrets Manager. +// +// Modifies the staging labels attached to a version of a secret. Staging labels +// are used to track a version as it progresses through the secret rotation +// process. You can attach a staging label to only one version of a secret at +// a time. If a staging label to be added is already attached to another version, +// then it is moved--removed from the other version first and then attached +// to this one. For more information about staging labels, see Staging Labels +// (http://docs.aws.amazon.com/secretsmanager/latest/userguide/terms-concepts.html#term_staging-label) +// in the AWS Secrets Manager User Guide. +// +// The staging labels that you specify in the VersionStage parameter are added +// to the existing list of staging labels--they don't replace it. +// +// You can move the AWSCURRENT staging label to this version by including it +// in this call. +// +// Whenever you move AWSCURRENT, Secrets Manager automatically moves the label +// AWSPREVIOUS to the version that AWSCURRENT was removed from. +// +// If this action results in the last label being removed from a version, then +// the version is considered to be 'deprecated' and can be deleted by Secrets +// Manager. +// +// Minimum permissions +// +// To run this command, you must have the following permissions: +// +// * secretsmanager:UpdateSecretVersionStage +// +// Related operations +// +// * To get the list of staging labels that are currently associated with +// a version of a secret, use DescribeSecret and examine the SecretVersionsToStages +// response value. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Secrets Manager's +// API operation UpdateSecretVersionStage for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// We can't find the resource that you asked for. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// You provided an invalid value for a parameter. +// +// * ErrCodeInvalidRequestException "InvalidRequestException" +// You provided a parameter value that is not valid for the current state of +// the resource. +// +// Possible causes: +// +// * You tried to perform the operation on a secret that's currently marked +// deleted. +// +// * You tried to enable rotation on a secret that doesn't already have a +// Lambda function ARN configured and you didn't include such an ARN as a +// parameter in this call. 
+// +// * ErrCodeLimitExceededException "LimitExceededException" +// The request failed because it would exceed one of the Secrets Manager internal +// limits. +// +// * ErrCodeInternalServiceError "InternalServiceError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17/UpdateSecretVersionStage +func (c *SecretsManager) UpdateSecretVersionStage(input *UpdateSecretVersionStageInput) (*UpdateSecretVersionStageOutput, error) { + req, out := c.UpdateSecretVersionStageRequest(input) + return out, req.Send() +} + +// UpdateSecretVersionStageWithContext is the same as UpdateSecretVersionStage with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSecretVersionStage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SecretsManager) UpdateSecretVersionStageWithContext(ctx aws.Context, input *UpdateSecretVersionStageInput, opts ...request.Option) (*UpdateSecretVersionStageOutput, error) { + req, out := c.UpdateSecretVersionStageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type CancelRotateSecretInput struct { + _ struct{} `type:"structure"` + + // Specifies the secret for which you want to cancel a rotation request. You + // can specify either the Amazon Resource Name (ARN) or the friendly name of + // the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelRotateSecretInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelRotateSecretInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelRotateSecretInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelRotateSecretInput"} + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecretId sets the SecretId field's value. 
+func (s *CancelRotateSecretInput) SetSecretId(v string) *CancelRotateSecretInput { + s.SecretId = &v + return s +} + +type CancelRotateSecretOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret for which rotation was canceled. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret for which rotation was canceled. + Name *string `min:"1" type:"string"` + + // The unique identifier of the version of the secret that was created during + // the rotation. This version might not be complete, and should be evaluated + // for possible deletion. At the very least, you should remove the VersionStage + // value AWSPENDING to enable this version to be deleted. Failing to clean up + // a cancelled rotation can block you from successfully starting future rotations. + VersionId *string `min:"32" type:"string"` +} + +// String returns the string representation +func (s CancelRotateSecretOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelRotateSecretOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *CancelRotateSecretOutput) SetARN(v string) *CancelRotateSecretOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *CancelRotateSecretOutput) SetName(v string) *CancelRotateSecretOutput { + s.Name = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *CancelRotateSecretOutput) SetVersionId(v string) *CancelRotateSecretOutput { + s.VersionId = &v + return s +} + +type CreateSecretInput struct { + _ struct{} `type:"structure"` + + // (Optional) If you include SecretString or SecretBinary, then an initial version + // is created as part of the secret, and this parameter specifies a unique identifier + // for the new version. + // + // If you use the AWS CLI or one of the AWS SDK to call this operation, then + // you can leave this parameter empty. The CLI or SDK generates a random UUID + // for you and includes it as the value for this parameter in the request. If + // you don't use the SDK and instead generate a raw HTTP request to the Secrets + // Manager service endpoint, then you must generate a ClientRequestToken yourself + // for the new version and include that value in the request. + // + // This value helps ensure idempotency. Secrets Manager uses this value to prevent + // the accidental creation of duplicate versions if there are failures and retries + // during a rotation. We recommend that you generate a UUID-type (https://wikipedia.org/wiki/Universally_unique_identifier) + // value to ensure uniqueness of your versions within the specified secret. + // + // * If the ClientRequestToken value isn't already associated with a version + // of the secret then a new version of the secret is created. + // + // * If a version with this value already exists and that version's SecretString + // and SecretBinary values are the same as those in the request, then the + // request is ignored (the operation is idempotent). + // + // * If a version with this value already exists and that version's SecretString + // and SecretBinary values are different from those in the request then the + // request fails because you cannot modify an existing version. Instead, + // use PutSecretValue to create a new version. + // + // This value becomes the VersionId of the new version. 
+ ClientRequestToken *string `min:"32" type:"string" idempotencyToken:"true"` + + // (Optional) Specifies a user-provided description of the secret. + Description *string `type:"string"` + + // (Optional) Specifies the ARN, Key ID, or alias of the AWS KMS customer master + // key (CMK) to be used to encrypt the SecretString or SecretBinary values in + // the versions stored in this secret. + // + // You can specify any of the supported ways to identify a AWS KMS key ID. If + // you need to reference a CMK in a different account, you can use only the + // key ARN or the alias ARN. + // + // If you don't specify this value, then Secrets Manager defaults to using the + // AWS account's default CMK (the one named aws/secretsmanager). If a AWS KMS + // CMK with that name doesn't yet exist, then Secrets Manager creates it for + // you automatically the first time it needs to encrypt a version's SecretString + // or SecretBinary fields. + // + // You can use the account's default CMK to encrypt and decrypt only if you + // call this operation using credentials from the same account that owns the + // secret. If the secret is in a different account, then you must create a custom + // CMK and specify the ARN in this field. + KmsKeyId *string `type:"string"` + + // Specifies the friendly name of the new secret. + // + // The secret name must be ASCII letters, digits, or the following characters + // : /_+=.@- + // + // Don't end your secret name with a hyphen followed by six characters. If you + // do so, you risk confusion and unexpected results when searching for a secret + // by partial ARN. This is because Secrets Manager automatically adds a hyphen + // and six random characters at the end of the ARN. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // (Optional) Specifies binary data that you want to encrypt and store in the + // new version of the secret. To use this parameter in the command-line tools, + // we recommend that you store your binary data in a file and then use the appropriate + // technique for your tool to pass the contents of the file as a parameter. + // + // Either SecretString or SecretBinary must have a value, but not both. They + // cannot both be empty. + // + // This parameter is not available using the Secrets Manager console. It can + // be accessed only by using the AWS CLI or one of the AWS SDKs. + // + // SecretBinary is automatically base64 encoded/decoded by the SDK. + SecretBinary []byte `type:"blob"` + + // (Optional) Specifies text data that you want to encrypt and store in this + // new version of the secret. + // + // Either SecretString or SecretBinary must have a value, but not both. They + // cannot both be empty. + // + // If you create a secret by using the Secrets Manager console then Secrets + // Manager puts the protected secret text in only the SecretString parameter. + // The Secrets Manager console stores the information as a JSON structure of + // key/value pairs that the Lambda rotation function knows how to parse. + // + // For storing multiple values, we recommend that you use a JSON text string + // argument and specify key/value pairs. For information on how to format a + // JSON parameter for the various command line tool environments, see Using + // JSON for Parameters (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json) + // in the AWS CLI User Guide. 
For example: + // + // [{"username":"bob"},{"password":"abc123xyz456"}] + // + // If your command-line tool or SDK requires quotation marks around the parameter, + // you should use single quotes to avoid confusion with the double quotes required + // in the JSON text. + SecretString *string `type:"string"` + + // (Optional) Specifies a list of user-defined tags that are attached to the + // secret. Each tag is a "Key" and "Value" pair of strings. This operation only + // appends tags to the existing list of tags. To remove tags, you must use UntagResource. + // + // Secrets Manager tag key names are case sensitive. A tag with the key "ABC" + // is a different tag from one with key "abc". + // + // If you check tags in IAM policy Condition elements as part of your security + // strategy, then adding or removing a tag can change permissions. If the successful + // completion of this operation would result in you losing your permissions + // for this secret, then this operation is blocked and returns an Access Denied + // error. + // + // This parameter requires a JSON text string argument. For information on how + // to format a JSON parameter for the various command line tool environments, + // see Using JSON for Parameters (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json) + // in the AWS CLI User Guide. For example: + // + // [{"Key":"CostCenter","Value":"12345"},{"Key":"environment","Value":"production"}] + // + // If your command-line tool or SDK requires quotation marks around the parameter, + // you should use single quotes to avoid confusion with the double quotes required + // in the JSON text. + // + // The following basic restrictions apply to tags: + // + // * Maximum number of tags per secret—50 + // + // * Maximum key length—127 Unicode characters in UTF-8 + // + // * Maximum value length—255 Unicode characters in UTF-8 + // + // * Tag keys and values are case sensitive. + // + // * Do not use the aws: prefix in your tag names or values because it is + // reserved for AWS use. You can't edit or delete tag names or values with + // this prefix. Tags with this prefix do not count against your tags per + // secret limit. + // + // * If your tagging schema will be used across multiple services and resources, + // remember that other services might have restrictions on allowed characters. + // Generally allowed characters are: letters, spaces, and numbers representable + // in UTF-8, plus the following special characters: + - = . _ : / @. + Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s CreateSecretInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSecretInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateSecretInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateSecretInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 32)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *CreateSecretInput) SetClientRequestToken(v string) *CreateSecretInput { + s.ClientRequestToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateSecretInput) SetDescription(v string) *CreateSecretInput { + s.Description = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *CreateSecretInput) SetKmsKeyId(v string) *CreateSecretInput { + s.KmsKeyId = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateSecretInput) SetName(v string) *CreateSecretInput { + s.Name = &v + return s +} + +// SetSecretBinary sets the SecretBinary field's value. +func (s *CreateSecretInput) SetSecretBinary(v []byte) *CreateSecretInput { + s.SecretBinary = v + return s +} + +// SetSecretString sets the SecretString field's value. +func (s *CreateSecretInput) SetSecretString(v string) *CreateSecretInput { + s.SecretString = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateSecretInput) SetTags(v []*Tag) *CreateSecretInput { + s.Tags = v + return s +} + +type CreateSecretOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the secret that you just created. + // + // Secrets Manager automatically adds several random characters to the name + // at the end of the ARN when you initially create a secret. This affects only + // the ARN and not the actual friendly name. This ensures that if you create + // a new secret with the same name as an old secret that you previously deleted, + // then users with access to the old secret don't automatically get access to + // the new secret because the ARNs are different. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret that you just created. + Name *string `min:"1" type:"string"` + + // The unique identifier that's associated with the version of the secret you + // just created. + VersionId *string `min:"32" type:"string"` +} + +// String returns the string representation +func (s CreateSecretOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSecretOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *CreateSecretOutput) SetARN(v string) *CreateSecretOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateSecretOutput) SetName(v string) *CreateSecretOutput { + s.Name = &v + return s +} + +// SetVersionId sets the VersionId field's value. 
+func (s *CreateSecretOutput) SetVersionId(v string) *CreateSecretOutput { + s.VersionId = &v + return s +} + +type DeleteResourcePolicyInput struct { + _ struct{} `type:"structure"` + + // Specifies the secret that you want to delete the attached resource-based + // policy for. You can specify either the Amazon Resource Name (ARN) or the + // friendly name of the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteResourcePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteResourcePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteResourcePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteResourcePolicyInput"} + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecretId sets the SecretId field's value. +func (s *DeleteResourcePolicyInput) SetSecretId(v string) *DeleteResourcePolicyInput { + s.SecretId = &v + return s +} + +type DeleteResourcePolicyOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret that the resource-based policy was deleted for. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret that the resource-based policy was deleted + // for. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteResourcePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteResourcePolicyOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *DeleteResourcePolicyOutput) SetARN(v string) *DeleteResourcePolicyOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *DeleteResourcePolicyOutput) SetName(v string) *DeleteResourcePolicyOutput { + s.Name = &v + return s +} + +type DeleteSecretInput struct { + _ struct{} `type:"structure"` + + // (Optional) Specifies that the secret is to be deleted without any recovery + // window. You can't use both this parameter and the RecoveryWindowInDays parameter + // in the same API call. 
+	//
+	// An asynchronous background process performs the actual deletion, so there
+	// can be a short delay before the operation completes. If you write code to
+	// delete and then immediately recreate a secret with the same name, ensure
+	// that your code includes appropriate backoff and retry logic.
+	//
+	// Use this parameter with caution. This parameter causes the operation to skip
+	// the normal waiting period before the permanent deletion that AWS would normally
+	// impose with the RecoveryWindowInDays parameter. If you delete a secret with
+	// the ForceDeleteWithoutRecovery parameter, then you have no opportunity to
+	// recover the secret. It is permanently lost.
+	ForceDeleteWithoutRecovery *bool `type:"boolean"`
+
+	// (Optional) Specifies the number of days that Secrets Manager waits before
+	// it can delete the secret. You can't use both this parameter and the ForceDeleteWithoutRecovery
+	// parameter in the same API call.
+	//
+	// This value can range from 7 to 30 days. The default value is 30.
+	RecoveryWindowInDays *int64 `type:"long"`
+
+	// Specifies the secret that you want to delete. You can specify either the
+	// Amazon Resource Name (ARN) or the friendly name of the secret.
+	//
+	// If you specify an ARN, we generally recommend that you specify a complete
+	// ARN. You can specify a partial ARN too—for example, if you don’t include
+	// the final hyphen and six random characters that Secrets Manager adds at the
+	// end of the ARN when you created the secret. A partial ARN match can work
+	// as long as it uniquely matches only one secret. However, if your secret has
+	// a name that ends in a hyphen followed by six characters (before Secrets Manager
+	// adds the hyphen and six characters to the ARN) and you try to use that as
+	// a partial ARN, then those characters cause Secrets Manager to assume that
+	// you’re specifying a complete ARN. This confusion can cause unexpected results.
+	// To avoid this situation, we recommend that you don’t create secret names
+	// that end with a hyphen followed by six characters.
+	//
+	// SecretId is a required field
+	SecretId *string `min:"1" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s DeleteSecretInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DeleteSecretInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *DeleteSecretInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "DeleteSecretInput"}
+	if s.SecretId == nil {
+		invalidParams.Add(request.NewErrParamRequired("SecretId"))
+	}
+	if s.SecretId != nil && len(*s.SecretId) < 1 {
+		invalidParams.Add(request.NewErrParamMinLen("SecretId", 1))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetForceDeleteWithoutRecovery sets the ForceDeleteWithoutRecovery field's value.
+func (s *DeleteSecretInput) SetForceDeleteWithoutRecovery(v bool) *DeleteSecretInput {
+	s.ForceDeleteWithoutRecovery = &v
+	return s
+}
+
+// SetRecoveryWindowInDays sets the RecoveryWindowInDays field's value.
+func (s *DeleteSecretInput) SetRecoveryWindowInDays(v int64) *DeleteSecretInput {
+	s.RecoveryWindowInDays = &v
+	return s
+}
+
+// SetSecretId sets the SecretId field's value.
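+// For example (the identifier and recovery window shown are only illustrative),
+// a deletion request with a shortened recovery window can be composed as:
+//
+//	input := (&DeleteSecretInput{}).
+//		SetSecretId("example-app/db-credentials").
+//		SetRecoveryWindowInDays(7)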
+func (s *DeleteSecretInput) SetSecretId(v string) *DeleteSecretInput { + s.SecretId = &v + return s +} + +type DeleteSecretOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret that is now scheduled for deletion. + ARN *string `min:"20" type:"string"` + + // The date and time after which this secret can be deleted by Secrets Manager + // and can no longer be restored. This value is the date and time of the delete + // request plus the number of days specified in RecoveryWindowInDays. + DeletionDate *time.Time `type:"timestamp"` + + // The friendly name of the secret that is now scheduled for deletion. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteSecretOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSecretOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *DeleteSecretOutput) SetARN(v string) *DeleteSecretOutput { + s.ARN = &v + return s +} + +// SetDeletionDate sets the DeletionDate field's value. +func (s *DeleteSecretOutput) SetDeletionDate(v time.Time) *DeleteSecretOutput { + s.DeletionDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *DeleteSecretOutput) SetName(v string) *DeleteSecretOutput { + s.Name = &v + return s +} + +type DescribeSecretInput struct { + _ struct{} `type:"structure"` + + // The identifier of the secret whose details you want to retrieve. You can + // specify either the Amazon Resource Name (ARN) or the friendly name of the + // secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeSecretInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSecretInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeSecretInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeSecretInput"} + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecretId sets the SecretId field's value. 
+func (s *DescribeSecretInput) SetSecretId(v string) *DescribeSecretInput {
+	s.SecretId = &v
+	return s
+}
+
+type DescribeSecretOutput struct {
+	_ struct{} `type:"structure"`
+
+	// The ARN of the secret.
+	ARN *string `min:"20" type:"string"`
+
+	// This value exists if the secret is scheduled for deletion. Some time after
+	// the specified date and time, Secrets Manager deletes the secret and all of
+	// its versions.
+	//
+	// If a secret is scheduled for deletion, then its details, including the encrypted
+	// secret information, are not accessible. To cancel a scheduled deletion and
+	// restore access, use RestoreSecret.
+	DeletedDate *time.Time `type:"timestamp"`
+
+	// The user-provided description of the secret.
+	Description *string `type:"string"`
+
+	// The ARN or alias of the AWS KMS customer master key (CMK) that's used to
+	// encrypt the SecretString or SecretBinary fields in each version of the secret.
+	// If you don't provide a key, then Secrets Manager defaults to encrypting the
+	// secret fields with the default AWS KMS CMK (the one named aws/secretsmanager)
+	// for this account.
+	KmsKeyId *string `type:"string"`
+
+	// The last date that this secret was accessed. This value is truncated to midnight
+	// of the date and therefore shows only the date, not the time.
+	LastAccessedDate *time.Time `type:"timestamp"`
+
+	// The last date and time that this secret was modified in any way.
+	LastChangedDate *time.Time `type:"timestamp"`
+
+	// The most recent date and time that the Secrets Manager rotation process was
+	// successfully completed. This value is null if the secret has never rotated.
+	LastRotatedDate *time.Time `type:"timestamp"`
+
+	// The user-provided friendly name of the secret.
+	Name *string `min:"1" type:"string"`
+
+	// Specifies whether automatic rotation is enabled for this secret.
+	//
+	// To enable rotation, use RotateSecret with AutomaticallyRotateAfterDays set
+	// to a value greater than 0. To disable rotation, use CancelRotateSecret.
+	RotationEnabled *bool `type:"boolean"`
+
+	// The ARN of a Lambda function that's invoked by Secrets Manager to rotate
+	// the secret either automatically per the schedule or manually by a call to
+	// RotateSecret.
+	RotationLambdaARN *string `type:"string"`
+
+	// A structure that contains the rotation configuration for this secret.
+	RotationRules *RotationRulesType `type:"structure"`
+
+	// The list of user-defined tags that are associated with the secret. To add
+	// tags to a secret, use TagResource. To remove tags, use UntagResource.
+	Tags []*Tag `type:"list"`
+
+	// A list of all of the currently assigned VersionStage staging labels and the
+	// VersionId that each is attached to. Staging labels are used to keep track
+	// of the different versions during the rotation process.
+	//
+	// A version that does not have any staging labels attached is considered deprecated
+	// and subject to deletion. Such versions are not included in this list.
+	VersionIdsToStages map[string][]*string `type:"map"`
+}
+
+// String returns the string representation
+func (s DescribeSecretOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeSecretOutput) GoString() string {
+	return s.String()
+}
+
+// SetARN sets the ARN field's value.
+func (s *DescribeSecretOutput) SetARN(v string) *DescribeSecretOutput {
+	s.ARN = &v
+	return s
+}
+
+// SetDeletedDate sets the DeletedDate field's value.
+func (s *DescribeSecretOutput) SetDeletedDate(v time.Time) *DescribeSecretOutput { + s.DeletedDate = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *DescribeSecretOutput) SetDescription(v string) *DescribeSecretOutput { + s.Description = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *DescribeSecretOutput) SetKmsKeyId(v string) *DescribeSecretOutput { + s.KmsKeyId = &v + return s +} + +// SetLastAccessedDate sets the LastAccessedDate field's value. +func (s *DescribeSecretOutput) SetLastAccessedDate(v time.Time) *DescribeSecretOutput { + s.LastAccessedDate = &v + return s +} + +// SetLastChangedDate sets the LastChangedDate field's value. +func (s *DescribeSecretOutput) SetLastChangedDate(v time.Time) *DescribeSecretOutput { + s.LastChangedDate = &v + return s +} + +// SetLastRotatedDate sets the LastRotatedDate field's value. +func (s *DescribeSecretOutput) SetLastRotatedDate(v time.Time) *DescribeSecretOutput { + s.LastRotatedDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *DescribeSecretOutput) SetName(v string) *DescribeSecretOutput { + s.Name = &v + return s +} + +// SetRotationEnabled sets the RotationEnabled field's value. +func (s *DescribeSecretOutput) SetRotationEnabled(v bool) *DescribeSecretOutput { + s.RotationEnabled = &v + return s +} + +// SetRotationLambdaARN sets the RotationLambdaARN field's value. +func (s *DescribeSecretOutput) SetRotationLambdaARN(v string) *DescribeSecretOutput { + s.RotationLambdaARN = &v + return s +} + +// SetRotationRules sets the RotationRules field's value. +func (s *DescribeSecretOutput) SetRotationRules(v *RotationRulesType) *DescribeSecretOutput { + s.RotationRules = v + return s +} + +// SetTags sets the Tags field's value. +func (s *DescribeSecretOutput) SetTags(v []*Tag) *DescribeSecretOutput { + s.Tags = v + return s +} + +// SetVersionIdsToStages sets the VersionIdsToStages field's value. +func (s *DescribeSecretOutput) SetVersionIdsToStages(v map[string][]*string) *DescribeSecretOutput { + s.VersionIdsToStages = v + return s +} + +type GetRandomPasswordInput struct { + _ struct{} `type:"structure"` + + // A string that includes characters that should not be included in the generated + // password. The default is that all characters from the included sets can be + // used. + ExcludeCharacters *string `type:"string"` + + // Specifies that the generated password should not include lowercase letters. + // The default if you do not include this switch parameter is that lowercase + // letters can be included. + ExcludeLowercase *bool `type:"boolean"` + + // Specifies that the generated password should not include digits. The default + // if you do not include this switch parameter is that digits can be included. + ExcludeNumbers *bool `type:"boolean"` + + // Specifies that the generated password should not include punctuation characters. + // The default if you do not include this switch parameter is that punctuation + // characters can be included. + // + // The following are the punctuation characters that can be included in the + // generated password if you don't explicitly exclude them with ExcludeCharacters + // or ExcludePunctuation: + // + // ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~ + ExcludePunctuation *bool `type:"boolean"` + + // Specifies that the generated password should not include uppercase letters. + // The default if you do not include this switch parameter is that uppercase + // letters can be included. 
+ ExcludeUppercase *bool `type:"boolean"` + + // Specifies that the generated password can include the space character. The + // default if you do not include this switch parameter is that the space character + // is not included. + IncludeSpace *bool `type:"boolean"` + + // The desired length of the generated password. The default value if you do + // not include this parameter is 32 characters. + PasswordLength *int64 `min:"1" type:"long"` + + // A boolean value that specifies whether the generated password must include + // at least one of every allowed character type. The default value is True and + // the operation requires at least one of every character type. + RequireEachIncludedType *bool `type:"boolean"` +} + +// String returns the string representation +func (s GetRandomPasswordInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRandomPasswordInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetRandomPasswordInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetRandomPasswordInput"} + if s.PasswordLength != nil && *s.PasswordLength < 1 { + invalidParams.Add(request.NewErrParamMinValue("PasswordLength", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExcludeCharacters sets the ExcludeCharacters field's value. +func (s *GetRandomPasswordInput) SetExcludeCharacters(v string) *GetRandomPasswordInput { + s.ExcludeCharacters = &v + return s +} + +// SetExcludeLowercase sets the ExcludeLowercase field's value. +func (s *GetRandomPasswordInput) SetExcludeLowercase(v bool) *GetRandomPasswordInput { + s.ExcludeLowercase = &v + return s +} + +// SetExcludeNumbers sets the ExcludeNumbers field's value. +func (s *GetRandomPasswordInput) SetExcludeNumbers(v bool) *GetRandomPasswordInput { + s.ExcludeNumbers = &v + return s +} + +// SetExcludePunctuation sets the ExcludePunctuation field's value. +func (s *GetRandomPasswordInput) SetExcludePunctuation(v bool) *GetRandomPasswordInput { + s.ExcludePunctuation = &v + return s +} + +// SetExcludeUppercase sets the ExcludeUppercase field's value. +func (s *GetRandomPasswordInput) SetExcludeUppercase(v bool) *GetRandomPasswordInput { + s.ExcludeUppercase = &v + return s +} + +// SetIncludeSpace sets the IncludeSpace field's value. +func (s *GetRandomPasswordInput) SetIncludeSpace(v bool) *GetRandomPasswordInput { + s.IncludeSpace = &v + return s +} + +// SetPasswordLength sets the PasswordLength field's value. +func (s *GetRandomPasswordInput) SetPasswordLength(v int64) *GetRandomPasswordInput { + s.PasswordLength = &v + return s +} + +// SetRequireEachIncludedType sets the RequireEachIncludedType field's value. +func (s *GetRandomPasswordInput) SetRequireEachIncludedType(v bool) *GetRandomPasswordInput { + s.RequireEachIncludedType = &v + return s +} + +type GetRandomPasswordOutput struct { + _ struct{} `type:"structure"` + + // A string with the generated password. + RandomPassword *string `type:"string"` +} + +// String returns the string representation +func (s GetRandomPasswordOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetRandomPasswordOutput) GoString() string { + return s.String() +} + +// SetRandomPassword sets the RandomPassword field's value. 
+func (s *GetRandomPasswordOutput) SetRandomPassword(v string) *GetRandomPasswordOutput { + s.RandomPassword = &v + return s +} + +type GetResourcePolicyInput struct { + _ struct{} `type:"structure"` + + // Specifies the secret that you want to retrieve the attached resource-based + // policy for. You can specify either the Amazon Resource Name (ARN) or the + // friendly name of the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetResourcePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetResourcePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetResourcePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetResourcePolicyInput"} + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecretId sets the SecretId field's value. +func (s *GetResourcePolicyInput) SetSecretId(v string) *GetResourcePolicyInput { + s.SecretId = &v + return s +} + +type GetResourcePolicyOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret that the resource-based policy was retrieved for. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret that the resource-based policy was retrieved + // for. + Name *string `min:"1" type:"string"` + + // A JSON-formatted string that describes the permissions that are associated + // with the attached secret. These permissions are combined with any permissions + // that are associated with the user or role that attempts to access this secret. + // The combined permissions specify who can access the secret and what actions + // they can perform. For more information, see Authentication and Access Control + // for AWS Secrets Manager (http://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access.html) + // in the AWS Secrets Manager User Guide. + ResourcePolicy *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetResourcePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetResourcePolicyOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. 
+func (s *GetResourcePolicyOutput) SetARN(v string) *GetResourcePolicyOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetResourcePolicyOutput) SetName(v string) *GetResourcePolicyOutput { + s.Name = &v + return s +} + +// SetResourcePolicy sets the ResourcePolicy field's value. +func (s *GetResourcePolicyOutput) SetResourcePolicy(v string) *GetResourcePolicyOutput { + s.ResourcePolicy = &v + return s +} + +type GetSecretValueInput struct { + _ struct{} `type:"structure"` + + // Specifies the secret containing the version that you want to retrieve. You + // can specify either the Amazon Resource Name (ARN) or the friendly name of + // the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` + + // Specifies the unique identifier of the version of the secret that you want + // to retrieve. If you specify this parameter then don't specify VersionStage. + // If you don't specify either a VersionStage or VersionId then the default + // is to perform the operation on the version with the VersionStage value of + // AWSCURRENT. + // + // This value is typically a UUID-type (https://wikipedia.org/wiki/Universally_unique_identifier) + // value with 32 hexadecimal digits. + VersionId *string `min:"32" type:"string"` + + // Specifies the secret version that you want to retrieve by the staging label + // attached to the version. + // + // Staging labels are used to keep track of different versions during the rotation + // process. If you use this parameter then don't specify VersionId. If you don't + // specify either a VersionStage or VersionId, then the default is to perform + // the operation on the version with the VersionStage value of AWSCURRENT. + VersionStage *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetSecretValueInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSecretValueInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
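+// For example (the identifier shown is only illustrative), retrieving the
+// version currently labeled AWSCURRENT requires nothing more than the secret
+// identifier:
+//
+//	input := (&GetSecretValueInput{}).
+//		SetSecretId("example-app/db-credentials")
+//	if err := input.Validate(); err != nil {
+//		// handle the invalid parameters before calling GetSecretValue
+//	}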
+func (s *GetSecretValueInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetSecretValueInput"} + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + if s.VersionId != nil && len(*s.VersionId) < 32 { + invalidParams.Add(request.NewErrParamMinLen("VersionId", 32)) + } + if s.VersionStage != nil && len(*s.VersionStage) < 1 { + invalidParams.Add(request.NewErrParamMinLen("VersionStage", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecretId sets the SecretId field's value. +func (s *GetSecretValueInput) SetSecretId(v string) *GetSecretValueInput { + s.SecretId = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *GetSecretValueInput) SetVersionId(v string) *GetSecretValueInput { + s.VersionId = &v + return s +} + +// SetVersionStage sets the VersionStage field's value. +func (s *GetSecretValueInput) SetVersionStage(v string) *GetSecretValueInput { + s.VersionStage = &v + return s +} + +type GetSecretValueOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret. + ARN *string `min:"20" type:"string"` + + // The date and time that this version of the secret was created. + CreatedDate *time.Time `type:"timestamp"` + + // The friendly name of the secret. + Name *string `min:"1" type:"string"` + + // The decrypted part of the protected secret information that was originally + // provided as binary data in the form of a byte array. The response parameter + // represents the binary data as a base64-encoded (https://tools.ietf.org/html/rfc4648#section-4) + // string. + // + // This parameter is not used if the secret is created by the Secrets Manager + // console. + // + // If you store custom information in this field of the secret, then you must + // code your Lambda rotation function to parse and interpret whatever you store + // in the SecretString or SecretBinary fields. + // + // SecretBinary is automatically base64 encoded/decoded by the SDK. + SecretBinary []byte `type:"blob"` + + // The decrypted part of the protected secret information that was originally + // provided as a string. + // + // If you create this secret by using the Secrets Manager console then only + // the SecretString parameter contains data. Secrets Manager stores the information + // as a JSON structure of key/value pairs that the Lambda rotation function + // knows how to parse. + // + // If you store custom information in the secret by using the CreateSecret, + // UpdateSecret, or PutSecretValue API operations instead of the Secrets Manager + // console, or by using the Other secret type in the console, then you must + // code your Lambda rotation function to parse and interpret those values. + SecretString *string `type:"string"` + + // The unique identifier of this version of the secret. + VersionId *string `min:"32" type:"string"` + + // A list of all of the staging labels currently attached to this version of + // the secret. + VersionStages []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s GetSecretValueOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetSecretValueOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. 
+func (s *GetSecretValueOutput) SetARN(v string) *GetSecretValueOutput { + s.ARN = &v + return s +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *GetSecretValueOutput) SetCreatedDate(v time.Time) *GetSecretValueOutput { + s.CreatedDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetSecretValueOutput) SetName(v string) *GetSecretValueOutput { + s.Name = &v + return s +} + +// SetSecretBinary sets the SecretBinary field's value. +func (s *GetSecretValueOutput) SetSecretBinary(v []byte) *GetSecretValueOutput { + s.SecretBinary = v + return s +} + +// SetSecretString sets the SecretString field's value. +func (s *GetSecretValueOutput) SetSecretString(v string) *GetSecretValueOutput { + s.SecretString = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *GetSecretValueOutput) SetVersionId(v string) *GetSecretValueOutput { + s.VersionId = &v + return s +} + +// SetVersionStages sets the VersionStages field's value. +func (s *GetSecretValueOutput) SetVersionStages(v []*string) *GetSecretValueOutput { + s.VersionStages = v + return s +} + +type ListSecretVersionIdsInput struct { + _ struct{} `type:"structure"` + + // (Optional) Specifies that you want the results to include versions that do + // not have any staging labels attached to them. Such versions are considered + // deprecated and are subject to deletion by Secrets Manager as needed. + IncludeDeprecated *bool `type:"boolean"` + + // (Optional) Limits the number of results that you want to include in the response. + // If you don't include this parameter, it defaults to a value that's specific + // to the operation. If additional items exist beyond the maximum you specify, + // the NextToken response element is present and has a value (isn't null). Include + // that value as the NextToken request parameter in the next call to the operation + // to get the next part of the results. Note that Secrets Manager might return + // fewer results than the maximum even when there are more results available. + // You should check NextToken after every operation to ensure that you receive + // all of the results. + MaxResults *int64 `min:"1" type:"integer"` + + // (Optional) Use this parameter in a request if you receive a NextToken response + // in a previous request that indicates that there's more output available. + // In a subsequent call, set it to the value of the previous call's NextToken + // response to indicate where the output should continue from. + NextToken *string `min:"1" type:"string"` + + // The identifier for the secret containing the versions you want to list. You + // can specify either the Amazon Resource Name (ARN) or the friendly name of + // the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. 
+ // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListSecretVersionIdsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSecretVersionIdsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListSecretVersionIdsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListSecretVersionIdsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetIncludeDeprecated sets the IncludeDeprecated field's value. +func (s *ListSecretVersionIdsInput) SetIncludeDeprecated(v bool) *ListSecretVersionIdsInput { + s.IncludeDeprecated = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListSecretVersionIdsInput) SetMaxResults(v int64) *ListSecretVersionIdsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListSecretVersionIdsInput) SetNextToken(v string) *ListSecretVersionIdsInput { + s.NextToken = &v + return s +} + +// SetSecretId sets the SecretId field's value. +func (s *ListSecretVersionIdsInput) SetSecretId(v string) *ListSecretVersionIdsInput { + s.SecretId = &v + return s +} + +type ListSecretVersionIdsOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) for the secret. + // + // Secrets Manager automatically adds several random characters to the name + // at the end of the ARN when you initially create a secret. This affects only + // the ARN and not the actual friendly name. This ensures that if you create + // a new secret with the same name as an old secret that you previously deleted, + // then users with access to the old secret don't automatically get access to + // the new secret because the ARNs are different. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret. + Name *string `min:"1" type:"string"` + + // If present in the response, this value indicates that there's more output + // available than what's included in the current response. This can occur even + // when the response includes no values at all, such as when you ask for a filtered + // view of a very long list. Use this value in the NextToken request parameter + // in a subsequent call to the operation to continue processing and get the + // next part of the output. You should repeat this until the NextToken response + // element comes back empty (as null). + NextToken *string `min:"1" type:"string"` + + // The list of the currently available versions of the specified secret. 
+ Versions []*SecretVersionsListEntry `type:"list"` +} + +// String returns the string representation +func (s ListSecretVersionIdsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSecretVersionIdsOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *ListSecretVersionIdsOutput) SetARN(v string) *ListSecretVersionIdsOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *ListSecretVersionIdsOutput) SetName(v string) *ListSecretVersionIdsOutput { + s.Name = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListSecretVersionIdsOutput) SetNextToken(v string) *ListSecretVersionIdsOutput { + s.NextToken = &v + return s +} + +// SetVersions sets the Versions field's value. +func (s *ListSecretVersionIdsOutput) SetVersions(v []*SecretVersionsListEntry) *ListSecretVersionIdsOutput { + s.Versions = v + return s +} + +type ListSecretsInput struct { + _ struct{} `type:"structure"` + + // (Optional) Limits the number of results that you want to include in the response. + // If you don't include this parameter, it defaults to a value that's specific + // to the operation. If additional items exist beyond the maximum you specify, + // the NextToken response element is present and has a value (isn't null). Include + // that value as the NextToken request parameter in the next call to the operation + // to get the next part of the results. Note that Secrets Manager might return + // fewer results than the maximum even when there are more results available. + // You should check NextToken after every operation to ensure that you receive + // all of the results. + MaxResults *int64 `min:"1" type:"integer"` + + // (Optional) Use this parameter in a request if you receive a NextToken response + // in a previous request that indicates that there's more output available. + // In a subsequent call, set it to the value of the previous call's NextToken + // response to indicate where the output should continue from. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListSecretsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSecretsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListSecretsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListSecretsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxResults sets the MaxResults field's value. +func (s *ListSecretsInput) SetMaxResults(v int64) *ListSecretsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListSecretsInput) SetNextToken(v string) *ListSecretsInput { + s.NextToken = &v + return s +} + +type ListSecretsOutput struct { + _ struct{} `type:"structure"` + + // If present in the response, this value indicates that there's more output + // available than what's included in the current response. 
This can occur even + // when the response includes no values at all, such as when you ask for a filtered + // view of a very long list. Use this value in the NextToken request parameter + // in a subsequent call to the operation to continue processing and get the + // next part of the output. You should repeat this until the NextToken response + // element comes back empty (as null). + NextToken *string `min:"1" type:"string"` + + // A list of the secrets in the account. + SecretList []*SecretListEntry `type:"list"` +} + +// String returns the string representation +func (s ListSecretsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListSecretsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListSecretsOutput) SetNextToken(v string) *ListSecretsOutput { + s.NextToken = &v + return s +} + +// SetSecretList sets the SecretList field's value. +func (s *ListSecretsOutput) SetSecretList(v []*SecretListEntry) *ListSecretsOutput { + s.SecretList = v + return s +} + +type PutResourcePolicyInput struct { + _ struct{} `type:"structure"` + + // A JSON-formatted string that's constructed according to the grammar and syntax + // for an AWS resource-based policy. The policy in the string identifies who + // can access or manage this secret and its versions. For information on how + // to format a JSON parameter for the various command line tool environments, + // see Using JSON for Parameters (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json) + // in the AWS CLI User Guide. + // + // ResourcePolicy is a required field + ResourcePolicy *string `min:"1" type:"string" required:"true"` + + // Specifies the secret that you want to attach the resource-based policy to. + // You can specify either the ARN or the friendly name of the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutResourcePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutResourcePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
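+// For example (the identifier and policy document shown are only illustrative),
+// both required fields can be populated with the generated setters:
+//
+//	policy := `{"Version": "2012-10-17", "Statement": [{
+//		"Effect": "Allow",
+//		"Principal": {"AWS": "arn:aws:iam::123456789012:root"},
+//		"Action": "secretsmanager:GetSecretValue",
+//		"Resource": "*"}]}`
+//	input := (&PutResourcePolicyInput{}).
+//		SetSecretId("example-app/db-credentials").
+//		SetResourcePolicy(policy)
+//	if err := input.Validate(); err != nil {
+//		// handle the invalid parameters before calling PutResourcePolicy
+//	}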
+func (s *PutResourcePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutResourcePolicyInput"} + if s.ResourcePolicy == nil { + invalidParams.Add(request.NewErrParamRequired("ResourcePolicy")) + } + if s.ResourcePolicy != nil && len(*s.ResourcePolicy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourcePolicy", 1)) + } + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourcePolicy sets the ResourcePolicy field's value. +func (s *PutResourcePolicyInput) SetResourcePolicy(v string) *PutResourcePolicyInput { + s.ResourcePolicy = &v + return s +} + +// SetSecretId sets the SecretId field's value. +func (s *PutResourcePolicyInput) SetSecretId(v string) *PutResourcePolicyInput { + s.SecretId = &v + return s +} + +type PutResourcePolicyOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret that the resource-based policy was retrieved for. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret that the resource-based policy was retrieved + // for. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s PutResourcePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutResourcePolicyOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *PutResourcePolicyOutput) SetARN(v string) *PutResourcePolicyOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *PutResourcePolicyOutput) SetName(v string) *PutResourcePolicyOutput { + s.Name = &v + return s +} + +type PutSecretValueInput struct { + _ struct{} `type:"structure"` + + // (Optional) Specifies a unique identifier for the new version of the secret. + // + // If you use the AWS CLI or one of the AWS SDK to call this operation, then + // you can leave this parameter empty. The CLI or SDK generates a random UUID + // for you and includes that in the request. If you don't use the SDK and instead + // generate a raw HTTP request to the Secrets Manager service endpoint, then + // you must generate a ClientRequestToken yourself for new versions and include + // that value in the request. + // + // This value helps ensure idempotency. Secrets Manager uses this value to prevent + // the accidental creation of duplicate versions if there are failures and retries + // during the Lambda rotation function's processing. We recommend that you generate + // a UUID-type (https://wikipedia.org/wiki/Universally_unique_identifier) value + // to ensure uniqueness within the specified secret. + // + // * If the ClientRequestToken value isn't already associated with a version + // of the secret then a new version of the secret is created. + // + // * If a version with this value already exists and that version's SecretString + // or SecretBinary values are the same as those in the request then the request + // is ignored (the operation is idempotent). + // + // * If a version with this value already exists and that version's SecretString + // and SecretBinary values are different from those in the request then the + // request fails because you cannot modify an existing secret version. You + // can only create new versions to store new secret values. 
+	//
+	// This value becomes the VersionId of the new version.
+	ClientRequestToken *string `min:"32" type:"string" idempotencyToken:"true"`
+
+	// (Optional) Specifies binary data that you want to encrypt and store in the
+	// new version of the secret. To use this parameter in the command-line tools,
+	// we recommend that you store your binary data in a file and then use the appropriate
+	// technique for your tool to pass the contents of the file as a parameter.
+	// Either SecretBinary or SecretString must have a value, but not both. They
+	// cannot both be empty.
+	//
+	// This parameter is not available using the Secrets Manager console. It can
+	// be accessed only by using the AWS CLI or one of the AWS SDKs.
+	//
+	// SecretBinary is automatically base64 encoded/decoded by the SDK.
+	SecretBinary []byte `type:"blob"`
+
+	// Specifies the secret to which you want to add a new version. You can specify
+	// either the Amazon Resource Name (ARN) or the friendly name of the secret.
+	// The secret must already exist.
+	//
+	// If you specify an ARN, we generally recommend that you specify a complete
+	// ARN. You can specify a partial ARN too—for example, if you don’t include
+	// the final hyphen and six random characters that Secrets Manager adds at the
+	// end of the ARN when you created the secret. A partial ARN match can work
+	// as long as it uniquely matches only one secret. However, if your secret has
+	// a name that ends in a hyphen followed by six characters (before Secrets Manager
+	// adds the hyphen and six characters to the ARN) and you try to use that as
+	// a partial ARN, then those characters cause Secrets Manager to assume that
+	// you’re specifying a complete ARN. This confusion can cause unexpected results.
+	// To avoid this situation, we recommend that you don’t create secret names
+	// that end with a hyphen followed by six characters.
+	//
+	// SecretId is a required field
+	SecretId *string `min:"1" type:"string" required:"true"`
+
+	// (Optional) Specifies text data that you want to encrypt and store in this
+	// new version of the secret. Either SecretString or SecretBinary must have
+	// a value, but not both. They cannot both be empty.
+	//
+	// If you create this secret by using the Secrets Manager console then Secrets
+	// Manager puts the protected secret text in only the SecretString parameter.
+	// The Secrets Manager console stores the information as a JSON structure of
+	// key/value pairs that the default Lambda rotation function knows how to parse.
+	//
+	// For storing multiple values, we recommend that you use a JSON text string
+	// argument and specify key/value pairs. For information on how to format a
+	// JSON parameter for the various command line tool environments, see Using
+	// JSON for Parameters (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json)
+	// in the AWS CLI User Guide.
+	//
+	// For example:
+	//
+	// [{"username":"bob"},{"password":"abc123xyz456"}]
+	//
+	// If your command-line tool or SDK requires quotation marks around the parameter,
+	// you should use single quotes to avoid confusion with the double quotes required
+	// in the JSON text.
+	SecretString *string `type:"string"`
+
+	// (Optional) Specifies a list of staging labels that are attached to this version
+	// of the secret. These staging labels are used to track the versions through
+	// the rotation process by the Lambda rotation function.
+	//
+	// A staging label must be unique to a single version of the secret.
If you + // specify a staging label that's already associated with a different version + // of the same secret then that staging label is automatically removed from + // the other version and attached to this version. + // + // If you do not specify a value for VersionStages then Secrets Manager automatically + // moves the staging label AWSCURRENT to this new version. + VersionStages []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s PutSecretValueInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutSecretValueInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutSecretValueInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutSecretValueInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 32)) + } + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + if s.VersionStages != nil && len(s.VersionStages) < 1 { + invalidParams.Add(request.NewErrParamMinLen("VersionStages", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *PutSecretValueInput) SetClientRequestToken(v string) *PutSecretValueInput { + s.ClientRequestToken = &v + return s +} + +// SetSecretBinary sets the SecretBinary field's value. +func (s *PutSecretValueInput) SetSecretBinary(v []byte) *PutSecretValueInput { + s.SecretBinary = v + return s +} + +// SetSecretId sets the SecretId field's value. +func (s *PutSecretValueInput) SetSecretId(v string) *PutSecretValueInput { + s.SecretId = &v + return s +} + +// SetSecretString sets the SecretString field's value. +func (s *PutSecretValueInput) SetSecretString(v string) *PutSecretValueInput { + s.SecretString = &v + return s +} + +// SetVersionStages sets the VersionStages field's value. +func (s *PutSecretValueInput) SetVersionStages(v []*string) *PutSecretValueInput { + s.VersionStages = v + return s +} + +type PutSecretValueOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) for the secret for which you just created + // a version. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret for which you just created or updated a version. + Name *string `min:"1" type:"string"` + + // The unique identifier of the version of the secret you just created or updated. + VersionId *string `min:"32" type:"string"` + + // The list of staging labels that are currently attached to this version of + // the secret. Staging labels are used to track a version as it progresses through + // the secret rotation process. + VersionStages []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s PutSecretValueOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutSecretValueOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *PutSecretValueOutput) SetARN(v string) *PutSecretValueOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. 
+func (s *PutSecretValueOutput) SetName(v string) *PutSecretValueOutput { + s.Name = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *PutSecretValueOutput) SetVersionId(v string) *PutSecretValueOutput { + s.VersionId = &v + return s +} + +// SetVersionStages sets the VersionStages field's value. +func (s *PutSecretValueOutput) SetVersionStages(v []*string) *PutSecretValueOutput { + s.VersionStages = v + return s +} + +type RestoreSecretInput struct { + _ struct{} `type:"structure"` + + // Specifies the secret that you want to restore from a previously scheduled + // deletion. You can specify either the Amazon Resource Name (ARN) or the friendly + // name of the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s RestoreSecretInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreSecretInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RestoreSecretInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RestoreSecretInput"} + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecretId sets the SecretId field's value. +func (s *RestoreSecretInput) SetSecretId(v string) *RestoreSecretInput { + s.SecretId = &v + return s +} + +type RestoreSecretOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret that was restored. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret that was restored. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s RestoreSecretOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RestoreSecretOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *RestoreSecretOutput) SetARN(v string) *RestoreSecretOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *RestoreSecretOutput) SetName(v string) *RestoreSecretOutput { + s.Name = &v + return s +} + +type RotateSecretInput struct { + _ struct{} `type:"structure"` + + // (Optional) Specifies a unique identifier for the new version of the secret + // that helps ensure idempotency. 
+ // + // If you use the AWS CLI or one of the AWS SDK to call this operation, then + // you can leave this parameter empty. The CLI or SDK generates a random UUID + // for you and includes that in the request for this parameter. If you don't + // use the SDK and instead generate a raw HTTP request to the Secrets Manager + // service endpoint, then you must generate a ClientRequestToken yourself for + // new versions and include that value in the request. + // + // You only need to specify your own value if you are implementing your own + // retry logic and want to ensure that a given secret is not created twice. + // We recommend that you generate a UUID-type (https://wikipedia.org/wiki/Universally_unique_identifier) + // value to ensure uniqueness within the specified secret. + // + // Secrets Manager uses this value to prevent the accidental creation of duplicate + // versions if there are failures and retries during the function's processing. + // This value becomes the VersionId of the new version. + ClientRequestToken *string `min:"32" type:"string" idempotencyToken:"true"` + + // (Optional) Specifies the ARN of the Lambda function that can rotate the secret. + RotationLambdaARN *string `type:"string"` + + // A structure that defines the rotation configuration for this secret. + RotationRules *RotationRulesType `type:"structure"` + + // Specifies the secret that you want to rotate. You can specify either the + // Amazon Resource Name (ARN) or the friendly name of the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s RotateSecretInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RotateSecretInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RotateSecretInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RotateSecretInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 32)) + } + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + if s.RotationRules != nil { + if err := s.RotationRules.Validate(); err != nil { + invalidParams.AddNested("RotationRules", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *RotateSecretInput) SetClientRequestToken(v string) *RotateSecretInput { + s.ClientRequestToken = &v + return s +} + +// SetRotationLambdaARN sets the RotationLambdaARN field's value. +func (s *RotateSecretInput) SetRotationLambdaARN(v string) *RotateSecretInput { + s.RotationLambdaARN = &v + return s +} + +// SetRotationRules sets the RotationRules field's value. +func (s *RotateSecretInput) SetRotationRules(v *RotationRulesType) *RotateSecretInput { + s.RotationRules = v + return s +} + +// SetSecretId sets the SecretId field's value. +func (s *RotateSecretInput) SetSecretId(v string) *RotateSecretInput { + s.SecretId = &v + return s +} + +type RotateSecretOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret. + Name *string `min:"1" type:"string"` + + // The ID of the new version of the secret created by the rotation started by + // this request. + VersionId *string `min:"32" type:"string"` +} + +// String returns the string representation +func (s RotateSecretOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RotateSecretOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *RotateSecretOutput) SetARN(v string) *RotateSecretOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *RotateSecretOutput) SetName(v string) *RotateSecretOutput { + s.Name = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *RotateSecretOutput) SetVersionId(v string) *RotateSecretOutput { + s.VersionId = &v + return s +} + +// A structure that defines the rotation configuration for the secret. +type RotationRulesType struct { + _ struct{} `type:"structure"` + + // Specifies the number of days between automatic scheduled rotations of the + // secret. + // + // Secrets Manager schedules the next rotation when the previous one is complete. + // Secrets Manager schedules the date by adding the rotation interval (number + // of days) to the actual date of the last rotation. The service chooses the + // hour within that 24-hour date window randomly. The minute is also chosen + // somewhat randomly, but weighted towards the top of the hour and influenced + // by a variety of factors that help distribute load. 
+ AutomaticallyAfterDays *int64 `min:"1" type:"long"`
+}
+
+// String returns the string representation
+func (s RotationRulesType) String() string {
+ return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s RotationRulesType) GoString() string {
+ return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *RotationRulesType) Validate() error {
+ invalidParams := request.ErrInvalidParams{Context: "RotationRulesType"}
+ if s.AutomaticallyAfterDays != nil && *s.AutomaticallyAfterDays < 1 {
+ invalidParams.Add(request.NewErrParamMinValue("AutomaticallyAfterDays", 1))
+ }
+
+ if invalidParams.Len() > 0 {
+ return invalidParams
+ }
+ return nil
+}
+
+// SetAutomaticallyAfterDays sets the AutomaticallyAfterDays field's value.
+func (s *RotationRulesType) SetAutomaticallyAfterDays(v int64) *RotationRulesType {
+ s.AutomaticallyAfterDays = &v
+ return s
+}
+
+// A structure that contains the details about a secret. It does not include
+// the encrypted SecretString and SecretBinary values. To get those values,
+// use the GetSecretValue operation.
+type SecretListEntry struct {
+ _ struct{} `type:"structure"`
+
+ // The Amazon Resource Name (ARN) of the secret.
+ //
+ // For more information about ARNs in Secrets Manager, see Policy Resources
+ // (http://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_iam-permissions.html#iam-resources)
+ // in the AWS Secrets Manager User Guide.
+ ARN *string `min:"20" type:"string"`
+
+ // The date and time on which this secret was deleted. Not present on active
+ // secrets. The secret can be recovered until the number of days in the recovery
+ // window has passed, as specified in the RecoveryWindowInDays parameter of
+ // the DeleteSecret operation.
+ DeletedDate *time.Time `type:"timestamp"`
+
+ // The user-provided description of the secret.
+ Description *string `type:"string"`
+
+ // The ARN or alias of the AWS KMS customer master key (CMK) that's used to
+ // encrypt the SecretString and SecretBinary fields in each version of the secret.
+ // If you don't provide a key, then Secrets Manager defaults to encrypting the
+ // secret fields with the default KMS CMK (the one named awssecretsmanager)
+ // for this account.
+ KmsKeyId *string `type:"string"`
+
+ // The last date that this secret was accessed. This value is truncated to midnight
+ // of the date and therefore shows only the date, not the time.
+ LastAccessedDate *time.Time `type:"timestamp"`
+
+ // The last date and time that this secret was modified in any way.
+ LastChangedDate *time.Time `type:"timestamp"`
+
+ // The last date and time that the rotation process for this secret was invoked.
+ LastRotatedDate *time.Time `type:"timestamp"`
+
+ // The friendly name of the secret. You can use forward slashes in the name
+ // to represent a path hierarchy. For example, /prod/databases/dbserver1 could
+ // represent the secret for a server named dbserver1 in the folder databases
+ // in the folder prod.
+ Name *string `min:"1" type:"string"`
+
+ // Indicates whether automatic, scheduled rotation is enabled for this secret.
+ RotationEnabled *bool `type:"boolean"`
+
+ // The ARN of an AWS Lambda function that's invoked by Secrets Manager to rotate
+ // and expire the secret either automatically per the schedule or manually by
+ // a call to RotateSecret.
+ RotationLambdaARN *string `type:"string"`
+
+ // A structure that defines the rotation configuration for the secret.
+ RotationRules *RotationRulesType `type:"structure"` + + // A list of all of the currently assigned SecretVersionStage staging labels + // and the SecretVersionId that each is attached to. Staging labels are used + // to keep track of the different versions during the rotation process. + // + // A version that does not have any SecretVersionStage is considered deprecated + // and subject to deletion. Such versions are not included in this list. + SecretVersionsToStages map[string][]*string `type:"map"` + + // The list of user-defined tags that are associated with the secret. To add + // tags to a secret, use TagResource. To remove tags, use UntagResource. + Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s SecretListEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SecretListEntry) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *SecretListEntry) SetARN(v string) *SecretListEntry { + s.ARN = &v + return s +} + +// SetDeletedDate sets the DeletedDate field's value. +func (s *SecretListEntry) SetDeletedDate(v time.Time) *SecretListEntry { + s.DeletedDate = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *SecretListEntry) SetDescription(v string) *SecretListEntry { + s.Description = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *SecretListEntry) SetKmsKeyId(v string) *SecretListEntry { + s.KmsKeyId = &v + return s +} + +// SetLastAccessedDate sets the LastAccessedDate field's value. +func (s *SecretListEntry) SetLastAccessedDate(v time.Time) *SecretListEntry { + s.LastAccessedDate = &v + return s +} + +// SetLastChangedDate sets the LastChangedDate field's value. +func (s *SecretListEntry) SetLastChangedDate(v time.Time) *SecretListEntry { + s.LastChangedDate = &v + return s +} + +// SetLastRotatedDate sets the LastRotatedDate field's value. +func (s *SecretListEntry) SetLastRotatedDate(v time.Time) *SecretListEntry { + s.LastRotatedDate = &v + return s +} + +// SetName sets the Name field's value. +func (s *SecretListEntry) SetName(v string) *SecretListEntry { + s.Name = &v + return s +} + +// SetRotationEnabled sets the RotationEnabled field's value. +func (s *SecretListEntry) SetRotationEnabled(v bool) *SecretListEntry { + s.RotationEnabled = &v + return s +} + +// SetRotationLambdaARN sets the RotationLambdaARN field's value. +func (s *SecretListEntry) SetRotationLambdaARN(v string) *SecretListEntry { + s.RotationLambdaARN = &v + return s +} + +// SetRotationRules sets the RotationRules field's value. +func (s *SecretListEntry) SetRotationRules(v *RotationRulesType) *SecretListEntry { + s.RotationRules = v + return s +} + +// SetSecretVersionsToStages sets the SecretVersionsToStages field's value. +func (s *SecretListEntry) SetSecretVersionsToStages(v map[string][]*string) *SecretListEntry { + s.SecretVersionsToStages = v + return s +} + +// SetTags sets the Tags field's value. +func (s *SecretListEntry) SetTags(v []*Tag) *SecretListEntry { + s.Tags = v + return s +} + +// A structure that contains information about one version of a secret. +type SecretVersionsListEntry struct { + _ struct{} `type:"structure"` + + // The date and time this version of the secret was created. + CreatedDate *time.Time `type:"timestamp"` + + // The date that this version of the secret was last accessed. Note that the + // resolution of this field is at the date level and does not include the time. 
+ LastAccessedDate *time.Time `type:"timestamp"` + + // The unique version identifier of this version of the secret. + VersionId *string `min:"32" type:"string"` + + // An array of staging labels that are currently associated with this version + // of the secret. + VersionStages []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s SecretVersionsListEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SecretVersionsListEntry) GoString() string { + return s.String() +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *SecretVersionsListEntry) SetCreatedDate(v time.Time) *SecretVersionsListEntry { + s.CreatedDate = &v + return s +} + +// SetLastAccessedDate sets the LastAccessedDate field's value. +func (s *SecretVersionsListEntry) SetLastAccessedDate(v time.Time) *SecretVersionsListEntry { + s.LastAccessedDate = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *SecretVersionsListEntry) SetVersionId(v string) *SecretVersionsListEntry { + s.VersionId = &v + return s +} + +// SetVersionStages sets the VersionStages field's value. +func (s *SecretVersionsListEntry) SetVersionStages(v []*string) *SecretVersionsListEntry { + s.VersionStages = v + return s +} + +// A structure that contains information about a tag. +type Tag struct { + _ struct{} `type:"structure"` + + // The key identifier, or name, of the tag. + Key *string `min:"1" type:"string"` + + // The string value that's associated with the key of the tag. + Value *string `type:"string"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type TagResourceInput struct { + _ struct{} `type:"structure"` + + // The identifier for the secret that you want to attach tags to. You can specify + // either the Amazon Resource Name (ARN) or the friendly name of the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. 
+ // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` + + // The tags to attach to the secret. Each element in the list consists of a + // Key and a Value. + // + // This parameter to the API requires a JSON text string argument. For information + // on how to format a JSON parameter for the various command line tool environments, + // see Using JSON for Parameters (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json) + // in the AWS CLI User Guide. For the AWS CLI, you can also use the syntax: + // --Tags Key="Key1",Value="Value1",Key="Key2",Value="Value2"[,…] + // + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` +} + +// String returns the string representation +func (s TagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagResourceInput"} + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecretId sets the SecretId field's value. +func (s *TagResourceInput) SetSecretId(v string) *TagResourceInput { + s.SecretId = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *TagResourceInput) SetTags(v []*Tag) *TagResourceInput { + s.Tags = v + return s +} + +type TagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s TagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceOutput) GoString() string { + return s.String() +} + +type UntagResourceInput struct { + _ struct{} `type:"structure"` + + // The identifier for the secret that you want to remove tags from. You can + // specify either the Amazon Resource Name (ARN) or the friendly name of the + // secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. 
+ // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` + + // A list of tag key names to remove from the secret. You don't specify the + // value. Both the key and its associated value are removed. + // + // This parameter to the API requires a JSON text string argument. For information + // on how to format a JSON parameter for the various command line tool environments, + // see Using JSON for Parameters (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json) + // in the AWS CLI User Guide. + // + // TagKeys is a required field + TagKeys []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s UntagResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagResourceInput"} + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSecretId sets the SecretId field's value. +func (s *UntagResourceInput) SetSecretId(v string) *UntagResourceInput { + s.SecretId = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *UntagResourceInput) SetTagKeys(v []*string) *UntagResourceInput { + s.TagKeys = v + return s +} + +type UntagResourceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UntagResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagResourceOutput) GoString() string { + return s.String() +} + +type UpdateSecretInput struct { + _ struct{} `type:"structure"` + + // (Optional) If you want to add a new version to the secret, this parameter + // specifies a unique identifier for the new version that helps ensure idempotency. + // + // If you use the AWS CLI or one of the AWS SDK to call this operation, then + // you can leave this parameter empty. The CLI or SDK generates a random UUID + // for you and includes that in the request. If you don't use the SDK and instead + // generate a raw HTTP request to the Secrets Manager service endpoint, then + // you must generate a ClientRequestToken yourself for new versions and include + // that value in the request. + // + // You typically only need to interact with this value if you implement your + // own retry logic and want to ensure that a given secret is not created twice. + // We recommend that you generate a UUID-type (https://wikipedia.org/wiki/Universally_unique_identifier) + // value to ensure uniqueness within the specified secret. + // + // Secrets Manager uses this value to prevent the accidental creation of duplicate + // versions if there are failures and retries during the Lambda rotation function's + // processing. + // + // * If the ClientRequestToken value isn't already associated with a version + // of the secret then a new version of the secret is created. 
+ // + // * If a version with this value already exists and that version's SecretString + // and SecretBinary values are the same as those in the request then the + // request is ignored (the operation is idempotent). + // + // * If a version with this value already exists and that version's SecretString + // and SecretBinary values are different from the request then an error occurs + // because you cannot modify an existing secret value. + // + // This value becomes the VersionId of the new version. + ClientRequestToken *string `min:"32" type:"string" idempotencyToken:"true"` + + // (Optional) Specifies an updated user-provided description of the secret. + Description *string `type:"string"` + + // (Optional) Specifies an updated ARN or alias of the AWS KMS customer master + // key (CMK) to be used to encrypt the protected text in new versions of this + // secret. + // + // You can only use the account's default CMK to encrypt and decrypt if you + // call this operation using credentials from the same account that owns the + // secret. If the secret is in a different account, then you must create a custom + // CMK and provide the ARN of that CMK in this field. The user making the call + // must have permissions to both the secret and the CMK in their respective + // accounts. + KmsKeyId *string `type:"string"` + + // (Optional) Specifies updated binary data that you want to encrypt and store + // in the new version of the secret. To use this parameter in the command-line + // tools, we recommend that you store your binary data in a file and then use + // the appropriate technique for your tool to pass the contents of the file + // as a parameter. Either SecretBinary or SecretString must have a value, but + // not both. They cannot both be empty. + // + // This parameter is not accessible using the Secrets Manager console. + // + // SecretBinary is automatically base64 encoded/decoded by the SDK. + SecretBinary []byte `type:"blob"` + + // Specifies the secret that you want to modify or to which you want to add + // a new version. You can specify either the Amazon Resource Name (ARN) or the + // friendly name of the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` + + // (Optional) Specifies updated text data that you want to encrypt and store + // in this new version of the secret. Either SecretBinary or SecretString must + // have a value, but not both. They cannot both be empty. + // + // If you create this secret by using the Secrets Manager console then Secrets + // Manager puts the protected secret text in only the SecretString parameter. 
+ // The Secrets Manager console stores the information as a JSON structure of + // key/value pairs that the default Lambda rotation function knows how to parse. + // + // For storing multiple values, we recommend that you use a JSON text string + // argument and specify key/value pairs. For information on how to format a + // JSON parameter for the various command line tool environments, see Using + // JSON for Parameters (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-json) + // in the AWS CLI User Guide. For example: + // + // [{"username":"bob"},{"password":"abc123xyz456"}] + // + // If your command-line tool or SDK requires quotation marks around the parameter, + // you should use single quotes to avoid confusion with the double quotes required + // in the JSON text. You can also 'escape' the double quote character in the + // embedded JSON text by prefacing each with a backslash. For example, the following + // string is surrounded by double-quotes. All of the embedded double quotes + // are escaped: + // + // "[{\"username\":\"bob\"},{\"password\":\"abc123xyz456\"}]" + SecretString *string `type:"string"` +} + +// String returns the string representation +func (s UpdateSecretInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSecretInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateSecretInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSecretInput"} + if s.ClientRequestToken != nil && len(*s.ClientRequestToken) < 32 { + invalidParams.Add(request.NewErrParamMinLen("ClientRequestToken", 32)) + } + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientRequestToken sets the ClientRequestToken field's value. +func (s *UpdateSecretInput) SetClientRequestToken(v string) *UpdateSecretInput { + s.ClientRequestToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateSecretInput) SetDescription(v string) *UpdateSecretInput { + s.Description = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *UpdateSecretInput) SetKmsKeyId(v string) *UpdateSecretInput { + s.KmsKeyId = &v + return s +} + +// SetSecretBinary sets the SecretBinary field's value. +func (s *UpdateSecretInput) SetSecretBinary(v []byte) *UpdateSecretInput { + s.SecretBinary = v + return s +} + +// SetSecretId sets the SecretId field's value. +func (s *UpdateSecretInput) SetSecretId(v string) *UpdateSecretInput { + s.SecretId = &v + return s +} + +// SetSecretString sets the SecretString field's value. +func (s *UpdateSecretInput) SetSecretString(v string) *UpdateSecretInput { + s.SecretString = &v + return s +} + +type UpdateSecretOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret that was updated. + // + // Secrets Manager automatically adds several random characters to the name + // at the end of the ARN when you initially create a secret. This affects only + // the ARN and not the actual friendly name. 
This ensures that if you create + // a new secret with the same name as an old secret that you previously deleted, + // then users with access to the old secret don't automatically get access to + // the new secret because the ARNs are different. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret that was updated. + Name *string `min:"1" type:"string"` + + // If a new version of the secret was created by this operation, then VersionId + // contains the unique identifier of the new version. + VersionId *string `min:"32" type:"string"` +} + +// String returns the string representation +func (s UpdateSecretOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSecretOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *UpdateSecretOutput) SetARN(v string) *UpdateSecretOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateSecretOutput) SetName(v string) *UpdateSecretOutput { + s.Name = &v + return s +} + +// SetVersionId sets the VersionId field's value. +func (s *UpdateSecretOutput) SetVersionId(v string) *UpdateSecretOutput { + s.VersionId = &v + return s +} + +type UpdateSecretVersionStageInput struct { + _ struct{} `type:"structure"` + + // (Optional) The secret version ID that you want to add the staging label to. + // If you want to remove a label from a version, then do not specify this parameter. + // + // If the staging label is already attached to a different version of the secret, + // then you must also specify the RemoveFromVersionId parameter. + MoveToVersionId *string `min:"32" type:"string"` + + // Specifies the secret version ID of the version that the staging label is + // to be removed from. If the staging label you are trying to attach to one + // version is already attached to a different version, then you must include + // this parameter and specify the version that the label is to be removed from. + // If the label is attached and you either do not specify this parameter, or + // the version ID does not match, then the operation fails. + RemoveFromVersionId *string `min:"32" type:"string"` + + // Specifies the secret with the version whose list of staging labels you want + // to modify. You can specify either the Amazon Resource Name (ARN) or the friendly + // name of the secret. + // + // If you specify an ARN, we generally recommend that you specify a complete + // ARN. You can specify a partial ARN too—for example, if you don’t include + // the final hyphen and six random characters that Secrets Manager adds at the + // end of the ARN when you created the secret. A partial ARN match can work + // as long as it uniquely matches only one secret. However, if your secret has + // a name that ends in a hyphen followed by six characters (before Secrets Manager + // adds the hyphen and six characters to the ARN) and you try to use that as + // a partial ARN, then those characters cause Secrets Manager to assume that + // you’re specifying a complete ARN. This confusion can cause unexpected results. + // To avoid this situation, we recommend that you don’t create secret names + // that end with a hyphen followed by six characters. + // + // SecretId is a required field + SecretId *string `min:"1" type:"string" required:"true"` + + // The staging label to add to this version. 
+ // + // VersionStage is a required field + VersionStage *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateSecretVersionStageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSecretVersionStageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateSecretVersionStageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSecretVersionStageInput"} + if s.MoveToVersionId != nil && len(*s.MoveToVersionId) < 32 { + invalidParams.Add(request.NewErrParamMinLen("MoveToVersionId", 32)) + } + if s.RemoveFromVersionId != nil && len(*s.RemoveFromVersionId) < 32 { + invalidParams.Add(request.NewErrParamMinLen("RemoveFromVersionId", 32)) + } + if s.SecretId == nil { + invalidParams.Add(request.NewErrParamRequired("SecretId")) + } + if s.SecretId != nil && len(*s.SecretId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretId", 1)) + } + if s.VersionStage == nil { + invalidParams.Add(request.NewErrParamRequired("VersionStage")) + } + if s.VersionStage != nil && len(*s.VersionStage) < 1 { + invalidParams.Add(request.NewErrParamMinLen("VersionStage", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMoveToVersionId sets the MoveToVersionId field's value. +func (s *UpdateSecretVersionStageInput) SetMoveToVersionId(v string) *UpdateSecretVersionStageInput { + s.MoveToVersionId = &v + return s +} + +// SetRemoveFromVersionId sets the RemoveFromVersionId field's value. +func (s *UpdateSecretVersionStageInput) SetRemoveFromVersionId(v string) *UpdateSecretVersionStageInput { + s.RemoveFromVersionId = &v + return s +} + +// SetSecretId sets the SecretId field's value. +func (s *UpdateSecretVersionStageInput) SetSecretId(v string) *UpdateSecretVersionStageInput { + s.SecretId = &v + return s +} + +// SetVersionStage sets the VersionStage field's value. +func (s *UpdateSecretVersionStageInput) SetVersionStage(v string) *UpdateSecretVersionStageInput { + s.VersionStage = &v + return s +} + +type UpdateSecretVersionStageOutput struct { + _ struct{} `type:"structure"` + + // The ARN of the secret with the staging label that was modified. + ARN *string `min:"20" type:"string"` + + // The friendly name of the secret with the staging label that was modified. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UpdateSecretVersionStageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSecretVersionStageOutput) GoString() string { + return s.String() +} + +// SetARN sets the ARN field's value. +func (s *UpdateSecretVersionStageOutput) SetARN(v string) *UpdateSecretVersionStageOutput { + s.ARN = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateSecretVersionStageOutput) SetName(v string) *UpdateSecretVersionStageOutput { + s.Name = &v + return s +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/doc.go b/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/doc.go new file mode 100644 index 00000000000..0379c436595 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/doc.go @@ -0,0 +1,87 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+
+// Package secretsmanager provides the client and types for making API
+// requests to AWS Secrets Manager.
+//
+// AWS Secrets Manager is a web service that enables you to store, manage, and
+// retrieve secrets.
+//
+// This guide provides descriptions of the Secrets Manager API. For more information
+// about using this service, see the AWS Secrets Manager User Guide (http://docs.aws.amazon.com/secretsmanager/latest/userguide/introduction.html).
+//
+// API Version
+//
+// This version of the Secrets Manager API Reference documents the Secrets Manager
+// API version 2017-10-17.
+//
+// As an alternative to using the API directly, you can use one of the AWS SDKs,
+// which consist of libraries and sample code for various programming languages
+// and platforms (such as Java, Ruby, .NET, iOS, and Android). The SDKs provide
+// a convenient way to create programmatic access to AWS Secrets Manager. For
+// example, the SDKs take care of cryptographically signing requests, managing
+// errors, and retrying requests automatically. For more information about the
+// AWS SDKs, including how to download and install them, see Tools for Amazon
+// Web Services (http://aws.amazon.com/tools/).
+//
+// We recommend that you use the AWS SDKs to make programmatic API calls to
+// Secrets Manager. However, you also can use the Secrets Manager HTTP Query
+// API to make direct calls to the Secrets Manager web service. To learn more
+// about the Secrets Manager HTTP Query API, see Making Query Requests (http://docs.aws.amazon.com/secretsmanager/latest/userguide/query-requests.html)
+// in the AWS Secrets Manager User Guide.
+//
+// Secrets Manager supports GET and POST requests for all actions. That is,
+// the API doesn't require you to use GET for some actions and POST for others.
+// However, GET requests are subject to the size limitation of a URL. Therefore,
+// for operations that require larger sizes, use a POST request.
+//
+// Support and Feedback for AWS Secrets Manager
+//
+// We welcome your feedback. Send your comments to awssecretsmanager-feedback@amazon.com
+// (mailto:awssecretsmanager-feedback@amazon.com), or post your feedback and
+// questions in the AWS Secrets Manager Discussion Forum (http://forums.aws.amazon.com/forum.jspa?forumID=296).
+// For more information about the AWS Discussion Forums, see Forums Help (http://forums.aws.amazon.com/help.jspa).
+//
+// How examples are presented
+//
+// The JSON that AWS Secrets Manager expects as your request parameters and
+// that the service returns as a response to HTTP query requests are single,
+// long strings without line breaks or white space formatting. The JSON shown
+// in the examples is formatted with both line breaks and white space to improve
+// readability. When example input parameters would also result in long strings
+// that extend beyond the screen, we insert line breaks to enhance readability.
+// You should always submit the input as a single JSON text string.
+//
+// Logging API Requests
+//
+// AWS Secrets Manager supports AWS CloudTrail, a service that records AWS API
+// calls for your AWS account and delivers log files to an Amazon S3 bucket.
+// By using information that's collected by AWS CloudTrail, you can determine
+// which requests were successfully made to Secrets Manager, who made the request,
+// when it was made, and so on.
For more about AWS Secrets Manager and its support +// for AWS CloudTrail, see Logging AWS Secrets Manager Events with AWS CloudTrail +// (http://docs.aws.amazon.com/secretsmanager/latest/userguide/monitoring.html#monitoring_cloudtrail) +// in the AWS Secrets Manager User Guide. To learn more about CloudTrail, including +// how to turn it on and find your log files, see the AWS CloudTrail User Guide +// (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/what_is_cloud_trail_top_level.html). +// +// See https://docs.aws.amazon.com/goto/WebAPI/secretsmanager-2017-10-17 for more information on this service. +// +// See secretsmanager package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/secretsmanager/ +// +// Using the Client +// +// To contact AWS Secrets Manager with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWS Secrets Manager client SecretsManager for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/secretsmanager/#New +package secretsmanager diff --git a/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/errors.go b/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/errors.go new file mode 100644 index 00000000000..b3c0c48fdcc --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/errors.go @@ -0,0 +1,87 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package secretsmanager + +const ( + + // ErrCodeDecryptionFailure for service response error code + // "DecryptionFailure". + // + // Secrets Manager can't decrypt the protected secret text using the provided + // KMS key. + ErrCodeDecryptionFailure = "DecryptionFailure" + + // ErrCodeEncryptionFailure for service response error code + // "EncryptionFailure". + // + // Secrets Manager can't encrypt the protected secret text using the provided + // KMS key. Check that the customer master key (CMK) is available, enabled, + // and not in an invalid state. For more information, see How Key State Affects + // Use of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html). + ErrCodeEncryptionFailure = "EncryptionFailure" + + // ErrCodeInternalServiceError for service response error code + // "InternalServiceError". + // + // An error occurred on the server side. + ErrCodeInternalServiceError = "InternalServiceError" + + // ErrCodeInvalidNextTokenException for service response error code + // "InvalidNextTokenException". + // + // You provided an invalid NextToken value. + ErrCodeInvalidNextTokenException = "InvalidNextTokenException" + + // ErrCodeInvalidParameterException for service response error code + // "InvalidParameterException". + // + // You provided an invalid value for a parameter. + ErrCodeInvalidParameterException = "InvalidParameterException" + + // ErrCodeInvalidRequestException for service response error code + // "InvalidRequestException". + // + // You provided a parameter value that is not valid for the current state of + // the resource. 
+ //
+ // Possible causes:
+ //
+ // * You tried to perform the operation on a secret that's currently marked
+ // deleted.
+ //
+ // * You tried to enable rotation on a secret that doesn't already have a
+ // Lambda function ARN configured and you didn't include such an ARN as a
+ // parameter in this call.
+ ErrCodeInvalidRequestException = "InvalidRequestException"
+
+ // ErrCodeLimitExceededException for service response error code
+ // "LimitExceededException".
+ //
+ // The request failed because it would exceed one of the Secrets Manager internal
+ // limits.
+ ErrCodeLimitExceededException = "LimitExceededException"
+
+ // ErrCodeMalformedPolicyDocumentException for service response error code
+ // "MalformedPolicyDocumentException".
+ //
+ // The policy document that you provided isn't valid.
+ ErrCodeMalformedPolicyDocumentException = "MalformedPolicyDocumentException"
+
+ // ErrCodePreconditionNotMetException for service response error code
+ // "PreconditionNotMetException".
+ //
+ // The request failed because you did not complete all the prerequisite steps.
+ ErrCodePreconditionNotMetException = "PreconditionNotMetException"
+
+ // ErrCodeResourceExistsException for service response error code
+ // "ResourceExistsException".
+ //
+ // A resource with the ID you requested already exists.
+ ErrCodeResourceExistsException = "ResourceExistsException"
+
+ // ErrCodeResourceNotFoundException for service response error code
+ // "ResourceNotFoundException".
+ //
+ // We can't find the resource that you asked for.
+ ErrCodeResourceNotFoundException = "ResourceNotFoundException"
+)
diff --git a/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/service.go b/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/service.go
new file mode 100644
index 00000000000..c4758e96dac
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/secretsmanager/service.go
@@ -0,0 +1,100 @@
+// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+package secretsmanager
+
+import (
+ "github.com/aws/aws-sdk-go/aws"
+ "github.com/aws/aws-sdk-go/aws/client"
+ "github.com/aws/aws-sdk-go/aws/client/metadata"
+ "github.com/aws/aws-sdk-go/aws/request"
+ "github.com/aws/aws-sdk-go/aws/signer/v4"
+ "github.com/aws/aws-sdk-go/private/protocol/jsonrpc"
+)
+
+// SecretsManager provides the API operation methods for making requests to
+// AWS Secrets Manager. See this package's package overview docs
+// for details on the service.
+//
+// SecretsManager methods are safe to use concurrently. It is not safe to
+// mutate any of the struct's properties though.
+type SecretsManager struct {
+ *client.Client
+}
+
+// Used for custom client initialization logic
+var initClient func(*client.Client)
+
+// Used for custom request initialization logic
+var initRequest func(*request.Request)
+
+// Service information constants
+const (
+ ServiceName = "secretsmanager" // Name of service.
+ EndpointsID = ServiceName // ID to lookup a service endpoint with.
+ ServiceID = "Secrets Manager" // ServiceID is a unique identifier of a specific service.
+)
+
+// New creates a new instance of the SecretsManager client with a session.
+// If additional configuration is needed for the client instance use the optional
+// aws.Config parameter to add your extra config.
+//
+// Example:
+// // Create a SecretsManager client from just a session.
+// svc := secretsmanager.New(mySession) +// +// // Create a SecretsManager client with additional configuration +// svc := secretsmanager.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *SecretsManager { + c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "secretsmanager" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *SecretsManager { + svc := &SecretsManager{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2017-10-17", + JSONVersion: "1.1", + TargetPrefix: "secretsmanager", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a SecretsManager operation and runs any +// custom request initialization. +func (c *SecretsManager) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/api.go b/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/api.go new file mode 100644 index 00000000000..61a18db459a --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/api.go @@ -0,0 +1,3576 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package serverlessapplicationrepository + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/restjson" +) + +const opCreateApplication = "CreateApplication" + +// CreateApplicationRequest generates a "aws/request.Request" representing the +// client's request for the CreateApplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateApplication for more information on using the CreateApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateApplicationRequest method. 
+// req, resp := client.CreateApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/CreateApplication +func (c *ServerlessApplicationRepository) CreateApplicationRequest(input *CreateApplicationRequest) (req *request.Request, output *CreateApplicationOutput) { + op := &request.Operation{ + Name: opCreateApplication, + HTTPMethod: "POST", + HTTPPath: "/applications", + } + + if input == nil { + input = &CreateApplicationRequest{} + } + + output = &CreateApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateApplication API operation for AWSServerlessApplicationRepository. +// +// Creates an application, optionally including an AWS SAM file to create the +// first application version in the same call. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation CreateApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeConflictException "ConflictException" +// The resource already exists. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/CreateApplication +func (c *ServerlessApplicationRepository) CreateApplication(input *CreateApplicationRequest) (*CreateApplicationOutput, error) { + req, out := c.CreateApplicationRequest(input) + return out, req.Send() +} + +// CreateApplicationWithContext is the same as CreateApplication with the addition of +// the ability to pass a context and additional request options. +// +// See CreateApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) CreateApplicationWithContext(ctx aws.Context, input *CreateApplicationRequest, opts ...request.Option) (*CreateApplicationOutput, error) { + req, out := c.CreateApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateApplicationVersion = "CreateApplicationVersion" + +// CreateApplicationVersionRequest generates a "aws/request.Request" representing the +// client's request for the CreateApplicationVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See CreateApplicationVersion for more information on using the CreateApplicationVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateApplicationVersionRequest method. +// req, resp := client.CreateApplicationVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/CreateApplicationVersion +func (c *ServerlessApplicationRepository) CreateApplicationVersionRequest(input *CreateApplicationVersionRequest) (req *request.Request, output *CreateApplicationVersionOutput) { + op := &request.Operation{ + Name: opCreateApplicationVersion, + HTTPMethod: "PUT", + HTTPPath: "/applications/{applicationId}/versions/{semanticVersion}", + } + + if input == nil { + input = &CreateApplicationVersionRequest{} + } + + output = &CreateApplicationVersionOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateApplicationVersion API operation for AWSServerlessApplicationRepository. +// +// Creates an application version. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation CreateApplicationVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeConflictException "ConflictException" +// The resource already exists. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/CreateApplicationVersion +func (c *ServerlessApplicationRepository) CreateApplicationVersion(input *CreateApplicationVersionRequest) (*CreateApplicationVersionOutput, error) { + req, out := c.CreateApplicationVersionRequest(input) + return out, req.Send() +} + +// CreateApplicationVersionWithContext is the same as CreateApplicationVersion with the addition of +// the ability to pass a context and additional request options. +// +// See CreateApplicationVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) CreateApplicationVersionWithContext(ctx aws.Context, input *CreateApplicationVersionRequest, opts ...request.Option) (*CreateApplicationVersionOutput, error) { + req, out := c.CreateApplicationVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opCreateCloudFormationChangeSet = "CreateCloudFormationChangeSet" + +// CreateCloudFormationChangeSetRequest generates a "aws/request.Request" representing the +// client's request for the CreateCloudFormationChangeSet operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCloudFormationChangeSet for more information on using the CreateCloudFormationChangeSet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateCloudFormationChangeSetRequest method. +// req, resp := client.CreateCloudFormationChangeSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/CreateCloudFormationChangeSet +func (c *ServerlessApplicationRepository) CreateCloudFormationChangeSetRequest(input *CreateCloudFormationChangeSetRequest) (req *request.Request, output *CreateCloudFormationChangeSetOutput) { + op := &request.Operation{ + Name: opCreateCloudFormationChangeSet, + HTTPMethod: "POST", + HTTPPath: "/applications/{applicationId}/changesets", + } + + if input == nil { + input = &CreateCloudFormationChangeSetRequest{} + } + + output = &CreateCloudFormationChangeSetOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCloudFormationChangeSet API operation for AWSServerlessApplicationRepository. +// +// Creates an AWS CloudFormation change set for the given application. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation CreateCloudFormationChangeSet for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/CreateCloudFormationChangeSet +func (c *ServerlessApplicationRepository) CreateCloudFormationChangeSet(input *CreateCloudFormationChangeSetRequest) (*CreateCloudFormationChangeSetOutput, error) { + req, out := c.CreateCloudFormationChangeSetRequest(input) + return out, req.Send() +} + +// CreateCloudFormationChangeSetWithContext is the same as CreateCloudFormationChangeSet with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCloudFormationChangeSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) CreateCloudFormationChangeSetWithContext(ctx aws.Context, input *CreateCloudFormationChangeSetRequest, opts ...request.Option) (*CreateCloudFormationChangeSetOutput, error) { + req, out := c.CreateCloudFormationChangeSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateCloudFormationTemplate = "CreateCloudFormationTemplate" + +// CreateCloudFormationTemplateRequest generates a "aws/request.Request" representing the +// client's request for the CreateCloudFormationTemplate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCloudFormationTemplate for more information on using the CreateCloudFormationTemplate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateCloudFormationTemplateRequest method. +// req, resp := client.CreateCloudFormationTemplateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/CreateCloudFormationTemplate +func (c *ServerlessApplicationRepository) CreateCloudFormationTemplateRequest(input *CreateCloudFormationTemplateInput) (req *request.Request, output *CreateCloudFormationTemplateOutput) { + op := &request.Operation{ + Name: opCreateCloudFormationTemplate, + HTTPMethod: "POST", + HTTPPath: "/applications/{applicationId}/templates", + } + + if input == nil { + input = &CreateCloudFormationTemplateInput{} + } + + output = &CreateCloudFormationTemplateOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCloudFormationTemplate API operation for AWSServerlessApplicationRepository. +// +// Creates an AWS CloudFormation template. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation CreateCloudFormationTemplate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/CreateCloudFormationTemplate +func (c *ServerlessApplicationRepository) CreateCloudFormationTemplate(input *CreateCloudFormationTemplateInput) (*CreateCloudFormationTemplateOutput, error) { + req, out := c.CreateCloudFormationTemplateRequest(input) + return out, req.Send() +} + +// CreateCloudFormationTemplateWithContext is the same as CreateCloudFormationTemplate with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCloudFormationTemplate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) CreateCloudFormationTemplateWithContext(ctx aws.Context, input *CreateCloudFormationTemplateInput, opts ...request.Option) (*CreateCloudFormationTemplateOutput, error) { + req, out := c.CreateCloudFormationTemplateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteApplication = "DeleteApplication" + +// DeleteApplicationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteApplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteApplication for more information on using the DeleteApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteApplicationRequest method. +// req, resp := client.DeleteApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/DeleteApplication +func (c *ServerlessApplicationRepository) DeleteApplicationRequest(input *DeleteApplicationInput) (req *request.Request, output *DeleteApplicationOutput) { + op := &request.Operation{ + Name: opDeleteApplication, + HTTPMethod: "DELETE", + HTTPPath: "/applications/{applicationId}", + } + + if input == nil { + input = &DeleteApplicationInput{} + } + + output = &DeleteApplicationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restjson.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteApplication API operation for AWSServerlessApplicationRepository. +// +// Deletes the specified application. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation DeleteApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. 
+// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeConflictException "ConflictException" +// The resource already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/DeleteApplication +func (c *ServerlessApplicationRepository) DeleteApplication(input *DeleteApplicationInput) (*DeleteApplicationOutput, error) { + req, out := c.DeleteApplicationRequest(input) + return out, req.Send() +} + +// DeleteApplicationWithContext is the same as DeleteApplication with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) DeleteApplicationWithContext(ctx aws.Context, input *DeleteApplicationInput, opts ...request.Option) (*DeleteApplicationOutput, error) { + req, out := c.DeleteApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApplication = "GetApplication" + +// GetApplicationRequest generates a "aws/request.Request" representing the +// client's request for the GetApplication operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApplication for more information on using the GetApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetApplicationRequest method. +// req, resp := client.GetApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/GetApplication +func (c *ServerlessApplicationRepository) GetApplicationRequest(input *GetApplicationInput) (req *request.Request, output *GetApplicationOutput) { + op := &request.Operation{ + Name: opGetApplication, + HTTPMethod: "GET", + HTTPPath: "/applications/{applicationId}", + } + + if input == nil { + input = &GetApplicationInput{} + } + + output = &GetApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApplication API operation for AWSServerlessApplicationRepository. +// +// Gets the specified application. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation GetApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/GetApplication +func (c *ServerlessApplicationRepository) GetApplication(input *GetApplicationInput) (*GetApplicationOutput, error) { + req, out := c.GetApplicationRequest(input) + return out, req.Send() +} + +// GetApplicationWithContext is the same as GetApplication with the addition of +// the ability to pass a context and additional request options. +// +// See GetApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) GetApplicationWithContext(ctx aws.Context, input *GetApplicationInput, opts ...request.Option) (*GetApplicationOutput, error) { + req, out := c.GetApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetApplicationPolicy = "GetApplicationPolicy" + +// GetApplicationPolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetApplicationPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetApplicationPolicy for more information on using the GetApplicationPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetApplicationPolicyRequest method. 
+// req, resp := client.GetApplicationPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/GetApplicationPolicy +func (c *ServerlessApplicationRepository) GetApplicationPolicyRequest(input *GetApplicationPolicyInput) (req *request.Request, output *GetApplicationPolicyOutput) { + op := &request.Operation{ + Name: opGetApplicationPolicy, + HTTPMethod: "GET", + HTTPPath: "/applications/{applicationId}/policy", + } + + if input == nil { + input = &GetApplicationPolicyInput{} + } + + output = &GetApplicationPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetApplicationPolicy API operation for AWSServerlessApplicationRepository. +// +// Retrieves the policy for the application. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation GetApplicationPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/GetApplicationPolicy +func (c *ServerlessApplicationRepository) GetApplicationPolicy(input *GetApplicationPolicyInput) (*GetApplicationPolicyOutput, error) { + req, out := c.GetApplicationPolicyRequest(input) + return out, req.Send() +} + +// GetApplicationPolicyWithContext is the same as GetApplicationPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetApplicationPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) GetApplicationPolicyWithContext(ctx aws.Context, input *GetApplicationPolicyInput, opts ...request.Option) (*GetApplicationPolicyOutput, error) { + req, out := c.GetApplicationPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetCloudFormationTemplate = "GetCloudFormationTemplate" + +// GetCloudFormationTemplateRequest generates a "aws/request.Request" representing the +// client's request for the GetCloudFormationTemplate operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See GetCloudFormationTemplate for more information on using the GetCloudFormationTemplate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetCloudFormationTemplateRequest method. +// req, resp := client.GetCloudFormationTemplateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/GetCloudFormationTemplate +func (c *ServerlessApplicationRepository) GetCloudFormationTemplateRequest(input *GetCloudFormationTemplateInput) (req *request.Request, output *GetCloudFormationTemplateOutput) { + op := &request.Operation{ + Name: opGetCloudFormationTemplate, + HTTPMethod: "GET", + HTTPPath: "/applications/{applicationId}/templates/{templateId}", + } + + if input == nil { + input = &GetCloudFormationTemplateInput{} + } + + output = &GetCloudFormationTemplateOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetCloudFormationTemplate API operation for AWSServerlessApplicationRepository. +// +// Gets the specified AWS CloudFormation template. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation GetCloudFormationTemplate for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/GetCloudFormationTemplate +func (c *ServerlessApplicationRepository) GetCloudFormationTemplate(input *GetCloudFormationTemplateInput) (*GetCloudFormationTemplateOutput, error) { + req, out := c.GetCloudFormationTemplateRequest(input) + return out, req.Send() +} + +// GetCloudFormationTemplateWithContext is the same as GetCloudFormationTemplate with the addition of +// the ability to pass a context and additional request options. +// +// See GetCloudFormationTemplate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ServerlessApplicationRepository) GetCloudFormationTemplateWithContext(ctx aws.Context, input *GetCloudFormationTemplateInput, opts ...request.Option) (*GetCloudFormationTemplateOutput, error) { + req, out := c.GetCloudFormationTemplateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListApplicationVersions = "ListApplicationVersions" + +// ListApplicationVersionsRequest generates a "aws/request.Request" representing the +// client's request for the ListApplicationVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListApplicationVersions for more information on using the ListApplicationVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListApplicationVersionsRequest method. +// req, resp := client.ListApplicationVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/ListApplicationVersions +func (c *ServerlessApplicationRepository) ListApplicationVersionsRequest(input *ListApplicationVersionsInput) (req *request.Request, output *ListApplicationVersionsOutput) { + op := &request.Operation{ + Name: opListApplicationVersions, + HTTPMethod: "GET", + HTTPPath: "/applications/{applicationId}/versions", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxItems", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListApplicationVersionsInput{} + } + + output = &ListApplicationVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListApplicationVersions API operation for AWSServerlessApplicationRepository. +// +// Lists versions for the specified application. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation ListApplicationVersions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/ListApplicationVersions +func (c *ServerlessApplicationRepository) ListApplicationVersions(input *ListApplicationVersionsInput) (*ListApplicationVersionsOutput, error) { + req, out := c.ListApplicationVersionsRequest(input) + return out, req.Send() +} + +// ListApplicationVersionsWithContext is the same as ListApplicationVersions with the addition of +// the ability to pass a context and additional request options. +// +// See ListApplicationVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) ListApplicationVersionsWithContext(ctx aws.Context, input *ListApplicationVersionsInput, opts ...request.Option) (*ListApplicationVersionsOutput, error) { + req, out := c.ListApplicationVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListApplicationVersionsPages iterates over the pages of a ListApplicationVersions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListApplicationVersions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListApplicationVersions operation. +// pageNum := 0 +// err := client.ListApplicationVersionsPages(params, +// func(page *ListApplicationVersionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ServerlessApplicationRepository) ListApplicationVersionsPages(input *ListApplicationVersionsInput, fn func(*ListApplicationVersionsOutput, bool) bool) error { + return c.ListApplicationVersionsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListApplicationVersionsPagesWithContext same as ListApplicationVersionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) ListApplicationVersionsPagesWithContext(ctx aws.Context, input *ListApplicationVersionsInput, fn func(*ListApplicationVersionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListApplicationVersionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListApplicationVersionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListApplicationVersionsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListApplications = "ListApplications" + +// ListApplicationsRequest generates a "aws/request.Request" representing the +// client's request for the ListApplications operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListApplications for more information on using the ListApplications +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListApplicationsRequest method. +// req, resp := client.ListApplicationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/ListApplications +func (c *ServerlessApplicationRepository) ListApplicationsRequest(input *ListApplicationsInput) (req *request.Request, output *ListApplicationsOutput) { + op := &request.Operation{ + Name: opListApplications, + HTTPMethod: "GET", + HTTPPath: "/applications", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxItems", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListApplicationsInput{} + } + + output = &ListApplicationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListApplications API operation for AWSServerlessApplicationRepository. +// +// Lists applications owned by the requester. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation ListApplications for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/ListApplications +func (c *ServerlessApplicationRepository) ListApplications(input *ListApplicationsInput) (*ListApplicationsOutput, error) { + req, out := c.ListApplicationsRequest(input) + return out, req.Send() +} + +// ListApplicationsWithContext is the same as ListApplications with the addition of +// the ability to pass a context and additional request options. +// +// See ListApplications for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) ListApplicationsWithContext(ctx aws.Context, input *ListApplicationsInput, opts ...request.Option) (*ListApplicationsOutput, error) { + req, out := c.ListApplicationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +// ListApplicationsPages iterates over the pages of a ListApplications operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListApplications method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListApplications operation. +// pageNum := 0 +// err := client.ListApplicationsPages(params, +// func(page *ListApplicationsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ServerlessApplicationRepository) ListApplicationsPages(input *ListApplicationsInput, fn func(*ListApplicationsOutput, bool) bool) error { + return c.ListApplicationsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListApplicationsPagesWithContext same as ListApplicationsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) ListApplicationsPagesWithContext(ctx aws.Context, input *ListApplicationsInput, fn func(*ListApplicationsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListApplicationsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListApplicationsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListApplicationsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opPutApplicationPolicy = "PutApplicationPolicy" + +// PutApplicationPolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutApplicationPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutApplicationPolicy for more information on using the PutApplicationPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutApplicationPolicyRequest method. 
+// req, resp := client.PutApplicationPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/PutApplicationPolicy +func (c *ServerlessApplicationRepository) PutApplicationPolicyRequest(input *PutApplicationPolicyInput) (req *request.Request, output *PutApplicationPolicyOutput) { + op := &request.Operation{ + Name: opPutApplicationPolicy, + HTTPMethod: "PUT", + HTTPPath: "/applications/{applicationId}/policy", + } + + if input == nil { + input = &PutApplicationPolicyInput{} + } + + output = &PutApplicationPolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutApplicationPolicy API operation for AWSServerlessApplicationRepository. +// +// Sets the permission policy for an application. For the list of actions supported +// for this operation, see Application Permissions (https://docs.aws.amazon.com/serverlessrepo/latest/devguide/access-control-resource-based.html#application-permissions) +// . +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation PutApplicationPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/PutApplicationPolicy +func (c *ServerlessApplicationRepository) PutApplicationPolicy(input *PutApplicationPolicyInput) (*PutApplicationPolicyOutput, error) { + req, out := c.PutApplicationPolicyRequest(input) + return out, req.Send() +} + +// PutApplicationPolicyWithContext is the same as PutApplicationPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutApplicationPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServerlessApplicationRepository) PutApplicationPolicyWithContext(ctx aws.Context, input *PutApplicationPolicyInput, opts ...request.Option) (*PutApplicationPolicyOutput, error) { + req, out := c.PutApplicationPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateApplication = "UpdateApplication" + +// UpdateApplicationRequest generates a "aws/request.Request" representing the +// client's request for the UpdateApplication operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateApplication for more information on using the UpdateApplication +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateApplicationRequest method. +// req, resp := client.UpdateApplicationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/UpdateApplication +func (c *ServerlessApplicationRepository) UpdateApplicationRequest(input *UpdateApplicationRequest) (req *request.Request, output *UpdateApplicationOutput) { + op := &request.Operation{ + Name: opUpdateApplication, + HTTPMethod: "PATCH", + HTTPPath: "/applications/{applicationId}", + } + + if input == nil { + input = &UpdateApplicationRequest{} + } + + output = &UpdateApplicationOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateApplication API operation for AWSServerlessApplicationRepository. +// +// Updates the specified application. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWSServerlessApplicationRepository's +// API operation UpdateApplication for usage and error information. +// +// Returned Error Codes: +// * ErrCodeBadRequestException "BadRequestException" +// One of the parameters in the request is invalid. +// +// * ErrCodeInternalServerErrorException "InternalServerErrorException" +// The AWS Serverless Application Repository service encountered an internal +// error. +// +// * ErrCodeForbiddenException "ForbiddenException" +// The client is not authenticated. +// +// * ErrCodeNotFoundException "NotFoundException" +// The resource (for example, an access policy statement) specified in the request +// doesn't exist. +// +// * ErrCodeTooManyRequestsException "TooManyRequestsException" +// The client is sending more than the allowed number of requests per unit of +// time. +// +// * ErrCodeConflictException "ConflictException" +// The resource already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08/UpdateApplication +func (c *ServerlessApplicationRepository) UpdateApplication(input *UpdateApplicationRequest) (*UpdateApplicationOutput, error) { + req, out := c.UpdateApplicationRequest(input) + return out, req.Send() +} + +// UpdateApplicationWithContext is the same as UpdateApplication with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateApplication for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ServerlessApplicationRepository) UpdateApplicationWithContext(ctx aws.Context, input *UpdateApplicationRequest, opts ...request.Option) (*UpdateApplicationOutput, error) { + req, out := c.UpdateApplicationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Policy statement applied to the application. +type ApplicationPolicyStatement struct { + _ struct{} `type:"structure"` + + // For the list of actions supported for this operation, see Application Permissions + // (https://docs.aws.amazon.com/serverlessrepo/latest/devguide/access-control-resource-based.html#application-permissions). + // + // Actions is a required field + Actions []*string `locationName:"actions" type:"list" required:"true"` + + // An AWS account ID, or * to make the application public. + // + // Principals is a required field + Principals []*string `locationName:"principals" type:"list" required:"true"` + + // A unique ID for the statement. + StatementId *string `locationName:"statementId" type:"string"` +} + +// String returns the string representation +func (s ApplicationPolicyStatement) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ApplicationPolicyStatement) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ApplicationPolicyStatement) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ApplicationPolicyStatement"} + if s.Actions == nil { + invalidParams.Add(request.NewErrParamRequired("Actions")) + } + if s.Principals == nil { + invalidParams.Add(request.NewErrParamRequired("Principals")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActions sets the Actions field's value. +func (s *ApplicationPolicyStatement) SetActions(v []*string) *ApplicationPolicyStatement { + s.Actions = v + return s +} + +// SetPrincipals sets the Principals field's value. +func (s *ApplicationPolicyStatement) SetPrincipals(v []*string) *ApplicationPolicyStatement { + s.Principals = v + return s +} + +// SetStatementId sets the StatementId field's value. +func (s *ApplicationPolicyStatement) SetStatementId(v string) *ApplicationPolicyStatement { + s.StatementId = &v + return s +} + +// Summary of details about the application. +type ApplicationSummary struct { + _ struct{} `type:"structure"` + + // The application Amazon Resource Name (ARN). + // + // ApplicationId is a required field + ApplicationId *string `locationName:"applicationId" type:"string" required:"true"` + + // The name of the author publishing the app. + // + // Minimum length=1. Maximum length=127. + // + // Pattern "^[a-z0-9](([a-z0-9]|-(?!-))*[a-z0-9])?$"; + // + // Author is a required field + Author *string `locationName:"author" type:"string" required:"true"` + + // The date and time this resource was created. + CreationTime *string `locationName:"creationTime" type:"string"` + + // The description of the application. + // + // Minimum length=1. Maximum length=256 + // + // Description is a required field + Description *string `locationName:"description" type:"string" required:"true"` + + // A URL with more information about the application, for example the location + // of your GitHub repository for the application. + HomePageUrl *string `locationName:"homePageUrl" type:"string"` + + // Labels to improve discovery of apps in search results. + // + // Minimum length=1. Maximum length=127. 
Maximum number of labels: 10 + // + // Pattern: "^[a-zA-Z0-9+\\-_:\\/@]+$"; + Labels []*string `locationName:"labels" type:"list"` + + // The name of the application. + // + // Minimum length=1. Maximum length=140 + // + // Pattern: "[a-zA-Z0-9\\-]+"; + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // A valid identifier from https://spdx.org/licenses/ (https://spdx.org/licenses/). + SpdxLicenseId *string `locationName:"spdxLicenseId" type:"string"` +} + +// String returns the string representation +func (s ApplicationSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ApplicationSummary) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *ApplicationSummary) SetApplicationId(v string) *ApplicationSummary { + s.ApplicationId = &v + return s +} + +// SetAuthor sets the Author field's value. +func (s *ApplicationSummary) SetAuthor(v string) *ApplicationSummary { + s.Author = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *ApplicationSummary) SetCreationTime(v string) *ApplicationSummary { + s.CreationTime = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ApplicationSummary) SetDescription(v string) *ApplicationSummary { + s.Description = &v + return s +} + +// SetHomePageUrl sets the HomePageUrl field's value. +func (s *ApplicationSummary) SetHomePageUrl(v string) *ApplicationSummary { + s.HomePageUrl = &v + return s +} + +// SetLabels sets the Labels field's value. +func (s *ApplicationSummary) SetLabels(v []*string) *ApplicationSummary { + s.Labels = v + return s +} + +// SetName sets the Name field's value. +func (s *ApplicationSummary) SetName(v string) *ApplicationSummary { + s.Name = &v + return s +} + +// SetSpdxLicenseId sets the SpdxLicenseId field's value. +func (s *ApplicationSummary) SetSpdxLicenseId(v string) *ApplicationSummary { + s.SpdxLicenseId = &v + return s +} + +type CreateApplicationOutput struct { + _ struct{} `type:"structure"` + + ApplicationId *string `locationName:"applicationId" type:"string"` + + Author *string `locationName:"author" type:"string"` + + CreationTime *string `locationName:"creationTime" type:"string"` + + Description *string `locationName:"description" type:"string"` + + HomePageUrl *string `locationName:"homePageUrl" type:"string"` + + Labels []*string `locationName:"labels" type:"list"` + + LicenseUrl *string `locationName:"licenseUrl" type:"string"` + + Name *string `locationName:"name" type:"string"` + + ReadmeUrl *string `locationName:"readmeUrl" type:"string"` + + SpdxLicenseId *string `locationName:"spdxLicenseId" type:"string"` + + // Application version details. + Version *Version `locationName:"version" type:"structure"` +} + +// String returns the string representation +func (s CreateApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateApplicationOutput) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateApplicationOutput) SetApplicationId(v string) *CreateApplicationOutput { + s.ApplicationId = &v + return s +} + +// SetAuthor sets the Author field's value. +func (s *CreateApplicationOutput) SetAuthor(v string) *CreateApplicationOutput { + s.Author = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. 
+func (s *CreateApplicationOutput) SetCreationTime(v string) *CreateApplicationOutput { + s.CreationTime = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateApplicationOutput) SetDescription(v string) *CreateApplicationOutput { + s.Description = &v + return s +} + +// SetHomePageUrl sets the HomePageUrl field's value. +func (s *CreateApplicationOutput) SetHomePageUrl(v string) *CreateApplicationOutput { + s.HomePageUrl = &v + return s +} + +// SetLabels sets the Labels field's value. +func (s *CreateApplicationOutput) SetLabels(v []*string) *CreateApplicationOutput { + s.Labels = v + return s +} + +// SetLicenseUrl sets the LicenseUrl field's value. +func (s *CreateApplicationOutput) SetLicenseUrl(v string) *CreateApplicationOutput { + s.LicenseUrl = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateApplicationOutput) SetName(v string) *CreateApplicationOutput { + s.Name = &v + return s +} + +// SetReadmeUrl sets the ReadmeUrl field's value. +func (s *CreateApplicationOutput) SetReadmeUrl(v string) *CreateApplicationOutput { + s.ReadmeUrl = &v + return s +} + +// SetSpdxLicenseId sets the SpdxLicenseId field's value. +func (s *CreateApplicationOutput) SetSpdxLicenseId(v string) *CreateApplicationOutput { + s.SpdxLicenseId = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *CreateApplicationOutput) SetVersion(v *Version) *CreateApplicationOutput { + s.Version = v + return s +} + +type CreateApplicationRequest struct { + _ struct{} `type:"structure"` + + // Author is a required field + Author *string `locationName:"author" type:"string" required:"true"` + + // Description is a required field + Description *string `locationName:"description" type:"string" required:"true"` + + HomePageUrl *string `locationName:"homePageUrl" type:"string"` + + Labels []*string `locationName:"labels" type:"list"` + + LicenseBody *string `locationName:"licenseBody" type:"string"` + + LicenseUrl *string `locationName:"licenseUrl" type:"string"` + + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + ReadmeBody *string `locationName:"readmeBody" type:"string"` + + ReadmeUrl *string `locationName:"readmeUrl" type:"string"` + + SemanticVersion *string `locationName:"semanticVersion" type:"string"` + + SourceCodeUrl *string `locationName:"sourceCodeUrl" type:"string"` + + SpdxLicenseId *string `locationName:"spdxLicenseId" type:"string"` + + TemplateBody *string `locationName:"templateBody" type:"string"` + + TemplateUrl *string `locationName:"templateUrl" type:"string"` +} + +// String returns the string representation +func (s CreateApplicationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateApplicationRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateApplicationRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateApplicationRequest"} + if s.Author == nil { + invalidParams.Add(request.NewErrParamRequired("Author")) + } + if s.Description == nil { + invalidParams.Add(request.NewErrParamRequired("Description")) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthor sets the Author field's value. 
+func (s *CreateApplicationRequest) SetAuthor(v string) *CreateApplicationRequest { + s.Author = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateApplicationRequest) SetDescription(v string) *CreateApplicationRequest { + s.Description = &v + return s +} + +// SetHomePageUrl sets the HomePageUrl field's value. +func (s *CreateApplicationRequest) SetHomePageUrl(v string) *CreateApplicationRequest { + s.HomePageUrl = &v + return s +} + +// SetLabels sets the Labels field's value. +func (s *CreateApplicationRequest) SetLabels(v []*string) *CreateApplicationRequest { + s.Labels = v + return s +} + +// SetLicenseBody sets the LicenseBody field's value. +func (s *CreateApplicationRequest) SetLicenseBody(v string) *CreateApplicationRequest { + s.LicenseBody = &v + return s +} + +// SetLicenseUrl sets the LicenseUrl field's value. +func (s *CreateApplicationRequest) SetLicenseUrl(v string) *CreateApplicationRequest { + s.LicenseUrl = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateApplicationRequest) SetName(v string) *CreateApplicationRequest { + s.Name = &v + return s +} + +// SetReadmeBody sets the ReadmeBody field's value. +func (s *CreateApplicationRequest) SetReadmeBody(v string) *CreateApplicationRequest { + s.ReadmeBody = &v + return s +} + +// SetReadmeUrl sets the ReadmeUrl field's value. +func (s *CreateApplicationRequest) SetReadmeUrl(v string) *CreateApplicationRequest { + s.ReadmeUrl = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *CreateApplicationRequest) SetSemanticVersion(v string) *CreateApplicationRequest { + s.SemanticVersion = &v + return s +} + +// SetSourceCodeUrl sets the SourceCodeUrl field's value. +func (s *CreateApplicationRequest) SetSourceCodeUrl(v string) *CreateApplicationRequest { + s.SourceCodeUrl = &v + return s +} + +// SetSpdxLicenseId sets the SpdxLicenseId field's value. +func (s *CreateApplicationRequest) SetSpdxLicenseId(v string) *CreateApplicationRequest { + s.SpdxLicenseId = &v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *CreateApplicationRequest) SetTemplateBody(v string) *CreateApplicationRequest { + s.TemplateBody = &v + return s +} + +// SetTemplateUrl sets the TemplateUrl field's value. +func (s *CreateApplicationRequest) SetTemplateUrl(v string) *CreateApplicationRequest { + s.TemplateUrl = &v + return s +} + +type CreateApplicationVersionOutput struct { + _ struct{} `type:"structure"` + + ApplicationId *string `locationName:"applicationId" type:"string"` + + CreationTime *string `locationName:"creationTime" type:"string"` + + ParameterDefinitions []*ParameterDefinition `locationName:"parameterDefinitions" type:"list"` + + RequiredCapabilities []*string `locationName:"requiredCapabilities" type:"list"` + + ResourcesSupported *bool `locationName:"resourcesSupported" type:"boolean"` + + SemanticVersion *string `locationName:"semanticVersion" type:"string"` + + SourceCodeUrl *string `locationName:"sourceCodeUrl" type:"string"` + + TemplateUrl *string `locationName:"templateUrl" type:"string"` +} + +// String returns the string representation +func (s CreateApplicationVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateApplicationVersionOutput) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. 
+func (s *CreateApplicationVersionOutput) SetApplicationId(v string) *CreateApplicationVersionOutput { + s.ApplicationId = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *CreateApplicationVersionOutput) SetCreationTime(v string) *CreateApplicationVersionOutput { + s.CreationTime = &v + return s +} + +// SetParameterDefinitions sets the ParameterDefinitions field's value. +func (s *CreateApplicationVersionOutput) SetParameterDefinitions(v []*ParameterDefinition) *CreateApplicationVersionOutput { + s.ParameterDefinitions = v + return s +} + +// SetRequiredCapabilities sets the RequiredCapabilities field's value. +func (s *CreateApplicationVersionOutput) SetRequiredCapabilities(v []*string) *CreateApplicationVersionOutput { + s.RequiredCapabilities = v + return s +} + +// SetResourcesSupported sets the ResourcesSupported field's value. +func (s *CreateApplicationVersionOutput) SetResourcesSupported(v bool) *CreateApplicationVersionOutput { + s.ResourcesSupported = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *CreateApplicationVersionOutput) SetSemanticVersion(v string) *CreateApplicationVersionOutput { + s.SemanticVersion = &v + return s +} + +// SetSourceCodeUrl sets the SourceCodeUrl field's value. +func (s *CreateApplicationVersionOutput) SetSourceCodeUrl(v string) *CreateApplicationVersionOutput { + s.SourceCodeUrl = &v + return s +} + +// SetTemplateUrl sets the TemplateUrl field's value. +func (s *CreateApplicationVersionOutput) SetTemplateUrl(v string) *CreateApplicationVersionOutput { + s.TemplateUrl = &v + return s +} + +type CreateApplicationVersionRequest struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` + + // SemanticVersion is a required field + SemanticVersion *string `location:"uri" locationName:"semanticVersion" type:"string" required:"true"` + + SourceCodeUrl *string `locationName:"sourceCodeUrl" type:"string"` + + TemplateBody *string `locationName:"templateBody" type:"string"` + + TemplateUrl *string `locationName:"templateUrl" type:"string"` +} + +// String returns the string representation +func (s CreateApplicationVersionRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateApplicationVersionRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateApplicationVersionRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateApplicationVersionRequest"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.SemanticVersion == nil { + invalidParams.Add(request.NewErrParamRequired("SemanticVersion")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateApplicationVersionRequest) SetApplicationId(v string) *CreateApplicationVersionRequest { + s.ApplicationId = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *CreateApplicationVersionRequest) SetSemanticVersion(v string) *CreateApplicationVersionRequest { + s.SemanticVersion = &v + return s +} + +// SetSourceCodeUrl sets the SourceCodeUrl field's value. 
+func (s *CreateApplicationVersionRequest) SetSourceCodeUrl(v string) *CreateApplicationVersionRequest { + s.SourceCodeUrl = &v + return s +} + +// SetTemplateBody sets the TemplateBody field's value. +func (s *CreateApplicationVersionRequest) SetTemplateBody(v string) *CreateApplicationVersionRequest { + s.TemplateBody = &v + return s +} + +// SetTemplateUrl sets the TemplateUrl field's value. +func (s *CreateApplicationVersionRequest) SetTemplateUrl(v string) *CreateApplicationVersionRequest { + s.TemplateUrl = &v + return s +} + +type CreateCloudFormationChangeSetOutput struct { + _ struct{} `type:"structure"` + + ApplicationId *string `locationName:"applicationId" type:"string"` + + ChangeSetId *string `locationName:"changeSetId" type:"string"` + + SemanticVersion *string `locationName:"semanticVersion" type:"string"` + + StackId *string `locationName:"stackId" type:"string"` +} + +// String returns the string representation +func (s CreateCloudFormationChangeSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCloudFormationChangeSetOutput) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateCloudFormationChangeSetOutput) SetApplicationId(v string) *CreateCloudFormationChangeSetOutput { + s.ApplicationId = &v + return s +} + +// SetChangeSetId sets the ChangeSetId field's value. +func (s *CreateCloudFormationChangeSetOutput) SetChangeSetId(v string) *CreateCloudFormationChangeSetOutput { + s.ChangeSetId = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *CreateCloudFormationChangeSetOutput) SetSemanticVersion(v string) *CreateCloudFormationChangeSetOutput { + s.SemanticVersion = &v + return s +} + +// SetStackId sets the StackId field's value. +func (s *CreateCloudFormationChangeSetOutput) SetStackId(v string) *CreateCloudFormationChangeSetOutput { + s.StackId = &v + return s +} + +type CreateCloudFormationChangeSetRequest struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` + + Capabilities []*string `locationName:"capabilities" type:"list"` + + ChangeSetName *string `locationName:"changeSetName" type:"string"` + + ClientToken *string `locationName:"clientToken" type:"string"` + + Description *string `locationName:"description" type:"string"` + + NotificationArns []*string `locationName:"notificationArns" type:"list"` + + ParameterOverrides []*ParameterValue `locationName:"parameterOverrides" type:"list"` + + ResourceTypes []*string `locationName:"resourceTypes" type:"list"` + + // This property corresponds to the AWS CloudFormation RollbackConfiguration + // (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/RollbackConfiguration) + // Data Type. 
+ RollbackConfiguration *RollbackConfiguration `locationName:"rollbackConfiguration" type:"structure"` + + SemanticVersion *string `locationName:"semanticVersion" type:"string"` + + // StackName is a required field + StackName *string `locationName:"stackName" type:"string" required:"true"` + + Tags []*Tag `locationName:"tags" type:"list"` + + TemplateId *string `locationName:"templateId" type:"string"` +} + +// String returns the string representation +func (s CreateCloudFormationChangeSetRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCloudFormationChangeSetRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateCloudFormationChangeSetRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCloudFormationChangeSetRequest"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.StackName == nil { + invalidParams.Add(request.NewErrParamRequired("StackName")) + } + if s.ParameterOverrides != nil { + for i, v := range s.ParameterOverrides { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ParameterOverrides", i), err.(request.ErrInvalidParams)) + } + } + } + if s.RollbackConfiguration != nil { + if err := s.RollbackConfiguration.Validate(); err != nil { + invalidParams.AddNested("RollbackConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateCloudFormationChangeSetRequest) SetApplicationId(v string) *CreateCloudFormationChangeSetRequest { + s.ApplicationId = &v + return s +} + +// SetCapabilities sets the Capabilities field's value. +func (s *CreateCloudFormationChangeSetRequest) SetCapabilities(v []*string) *CreateCloudFormationChangeSetRequest { + s.Capabilities = v + return s +} + +// SetChangeSetName sets the ChangeSetName field's value. +func (s *CreateCloudFormationChangeSetRequest) SetChangeSetName(v string) *CreateCloudFormationChangeSetRequest { + s.ChangeSetName = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateCloudFormationChangeSetRequest) SetClientToken(v string) *CreateCloudFormationChangeSetRequest { + s.ClientToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateCloudFormationChangeSetRequest) SetDescription(v string) *CreateCloudFormationChangeSetRequest { + s.Description = &v + return s +} + +// SetNotificationArns sets the NotificationArns field's value. +func (s *CreateCloudFormationChangeSetRequest) SetNotificationArns(v []*string) *CreateCloudFormationChangeSetRequest { + s.NotificationArns = v + return s +} + +// SetParameterOverrides sets the ParameterOverrides field's value. +func (s *CreateCloudFormationChangeSetRequest) SetParameterOverrides(v []*ParameterValue) *CreateCloudFormationChangeSetRequest { + s.ParameterOverrides = v + return s +} + +// SetResourceTypes sets the ResourceTypes field's value. 
+func (s *CreateCloudFormationChangeSetRequest) SetResourceTypes(v []*string) *CreateCloudFormationChangeSetRequest { + s.ResourceTypes = v + return s +} + +// SetRollbackConfiguration sets the RollbackConfiguration field's value. +func (s *CreateCloudFormationChangeSetRequest) SetRollbackConfiguration(v *RollbackConfiguration) *CreateCloudFormationChangeSetRequest { + s.RollbackConfiguration = v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *CreateCloudFormationChangeSetRequest) SetSemanticVersion(v string) *CreateCloudFormationChangeSetRequest { + s.SemanticVersion = &v + return s +} + +// SetStackName sets the StackName field's value. +func (s *CreateCloudFormationChangeSetRequest) SetStackName(v string) *CreateCloudFormationChangeSetRequest { + s.StackName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateCloudFormationChangeSetRequest) SetTags(v []*Tag) *CreateCloudFormationChangeSetRequest { + s.Tags = v + return s +} + +// SetTemplateId sets the TemplateId field's value. +func (s *CreateCloudFormationChangeSetRequest) SetTemplateId(v string) *CreateCloudFormationChangeSetRequest { + s.TemplateId = &v + return s +} + +type CreateCloudFormationTemplateInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` + + SemanticVersion *string `locationName:"semanticVersion" type:"string"` +} + +// String returns the string representation +func (s CreateCloudFormationTemplateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCloudFormationTemplateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateCloudFormationTemplateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCloudFormationTemplateInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *CreateCloudFormationTemplateInput) SetApplicationId(v string) *CreateCloudFormationTemplateInput { + s.ApplicationId = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *CreateCloudFormationTemplateInput) SetSemanticVersion(v string) *CreateCloudFormationTemplateInput { + s.SemanticVersion = &v + return s +} + +type CreateCloudFormationTemplateOutput struct { + _ struct{} `type:"structure"` + + ApplicationId *string `locationName:"applicationId" type:"string"` + + CreationTime *string `locationName:"creationTime" type:"string"` + + ExpirationTime *string `locationName:"expirationTime" type:"string"` + + SemanticVersion *string `locationName:"semanticVersion" type:"string"` + + Status *string `locationName:"status" type:"string" enum:"Status"` + + TemplateId *string `locationName:"templateId" type:"string"` + + TemplateUrl *string `locationName:"templateUrl" type:"string"` +} + +// String returns the string representation +func (s CreateCloudFormationTemplateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCloudFormationTemplateOutput) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. 
+func (s *CreateCloudFormationTemplateOutput) SetApplicationId(v string) *CreateCloudFormationTemplateOutput { + s.ApplicationId = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *CreateCloudFormationTemplateOutput) SetCreationTime(v string) *CreateCloudFormationTemplateOutput { + s.CreationTime = &v + return s +} + +// SetExpirationTime sets the ExpirationTime field's value. +func (s *CreateCloudFormationTemplateOutput) SetExpirationTime(v string) *CreateCloudFormationTemplateOutput { + s.ExpirationTime = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *CreateCloudFormationTemplateOutput) SetSemanticVersion(v string) *CreateCloudFormationTemplateOutput { + s.SemanticVersion = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *CreateCloudFormationTemplateOutput) SetStatus(v string) *CreateCloudFormationTemplateOutput { + s.Status = &v + return s +} + +// SetTemplateId sets the TemplateId field's value. +func (s *CreateCloudFormationTemplateOutput) SetTemplateId(v string) *CreateCloudFormationTemplateOutput { + s.TemplateId = &v + return s +} + +// SetTemplateUrl sets the TemplateUrl field's value. +func (s *CreateCloudFormationTemplateOutput) SetTemplateUrl(v string) *CreateCloudFormationTemplateOutput { + s.TemplateUrl = &v + return s +} + +type DeleteApplicationInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteApplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteApplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteApplicationInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *DeleteApplicationInput) SetApplicationId(v string) *DeleteApplicationInput { + s.ApplicationId = &v + return s +} + +type DeleteApplicationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteApplicationOutput) GoString() string { + return s.String() +} + +type GetApplicationInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` + + SemanticVersion *string `location:"querystring" locationName:"semanticVersion" type:"string"` +} + +// String returns the string representation +func (s GetApplicationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApplicationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetApplicationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetApplicationInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetApplicationInput) SetApplicationId(v string) *GetApplicationInput { + s.ApplicationId = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *GetApplicationInput) SetSemanticVersion(v string) *GetApplicationInput { + s.SemanticVersion = &v + return s +} + +type GetApplicationOutput struct { + _ struct{} `type:"structure"` + + ApplicationId *string `locationName:"applicationId" type:"string"` + + Author *string `locationName:"author" type:"string"` + + CreationTime *string `locationName:"creationTime" type:"string"` + + Description *string `locationName:"description" type:"string"` + + HomePageUrl *string `locationName:"homePageUrl" type:"string"` + + Labels []*string `locationName:"labels" type:"list"` + + LicenseUrl *string `locationName:"licenseUrl" type:"string"` + + Name *string `locationName:"name" type:"string"` + + ReadmeUrl *string `locationName:"readmeUrl" type:"string"` + + SpdxLicenseId *string `locationName:"spdxLicenseId" type:"string"` + + // Application version details. + Version *Version `locationName:"version" type:"structure"` +} + +// String returns the string representation +func (s GetApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApplicationOutput) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetApplicationOutput) SetApplicationId(v string) *GetApplicationOutput { + s.ApplicationId = &v + return s +} + +// SetAuthor sets the Author field's value. +func (s *GetApplicationOutput) SetAuthor(v string) *GetApplicationOutput { + s.Author = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *GetApplicationOutput) SetCreationTime(v string) *GetApplicationOutput { + s.CreationTime = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *GetApplicationOutput) SetDescription(v string) *GetApplicationOutput { + s.Description = &v + return s +} + +// SetHomePageUrl sets the HomePageUrl field's value. +func (s *GetApplicationOutput) SetHomePageUrl(v string) *GetApplicationOutput { + s.HomePageUrl = &v + return s +} + +// SetLabels sets the Labels field's value. +func (s *GetApplicationOutput) SetLabels(v []*string) *GetApplicationOutput { + s.Labels = v + return s +} + +// SetLicenseUrl sets the LicenseUrl field's value. +func (s *GetApplicationOutput) SetLicenseUrl(v string) *GetApplicationOutput { + s.LicenseUrl = &v + return s +} + +// SetName sets the Name field's value. +func (s *GetApplicationOutput) SetName(v string) *GetApplicationOutput { + s.Name = &v + return s +} + +// SetReadmeUrl sets the ReadmeUrl field's value. +func (s *GetApplicationOutput) SetReadmeUrl(v string) *GetApplicationOutput { + s.ReadmeUrl = &v + return s +} + +// SetSpdxLicenseId sets the SpdxLicenseId field's value. +func (s *GetApplicationOutput) SetSpdxLicenseId(v string) *GetApplicationOutput { + s.SpdxLicenseId = &v + return s +} + +// SetVersion sets the Version field's value. 
+func (s *GetApplicationOutput) SetVersion(v *Version) *GetApplicationOutput { + s.Version = v + return s +} + +type GetApplicationPolicyInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetApplicationPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApplicationPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetApplicationPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetApplicationPolicyInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetApplicationPolicyInput) SetApplicationId(v string) *GetApplicationPolicyInput { + s.ApplicationId = &v + return s +} + +type GetApplicationPolicyOutput struct { + _ struct{} `type:"structure"` + + Statements []*ApplicationPolicyStatement `locationName:"statements" type:"list"` +} + +// String returns the string representation +func (s GetApplicationPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetApplicationPolicyOutput) GoString() string { + return s.String() +} + +// SetStatements sets the Statements field's value. +func (s *GetApplicationPolicyOutput) SetStatements(v []*ApplicationPolicyStatement) *GetApplicationPolicyOutput { + s.Statements = v + return s +} + +type GetCloudFormationTemplateInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` + + // TemplateId is a required field + TemplateId *string `location:"uri" locationName:"templateId" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetCloudFormationTemplateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCloudFormationTemplateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetCloudFormationTemplateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetCloudFormationTemplateInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.TemplateId == nil { + invalidParams.Add(request.NewErrParamRequired("TemplateId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetCloudFormationTemplateInput) SetApplicationId(v string) *GetCloudFormationTemplateInput { + s.ApplicationId = &v + return s +} + +// SetTemplateId sets the TemplateId field's value. 
+func (s *GetCloudFormationTemplateInput) SetTemplateId(v string) *GetCloudFormationTemplateInput { + s.TemplateId = &v + return s +} + +type GetCloudFormationTemplateOutput struct { + _ struct{} `type:"structure"` + + ApplicationId *string `locationName:"applicationId" type:"string"` + + CreationTime *string `locationName:"creationTime" type:"string"` + + ExpirationTime *string `locationName:"expirationTime" type:"string"` + + SemanticVersion *string `locationName:"semanticVersion" type:"string"` + + Status *string `locationName:"status" type:"string" enum:"Status"` + + TemplateId *string `locationName:"templateId" type:"string"` + + TemplateUrl *string `locationName:"templateUrl" type:"string"` +} + +// String returns the string representation +func (s GetCloudFormationTemplateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetCloudFormationTemplateOutput) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *GetCloudFormationTemplateOutput) SetApplicationId(v string) *GetCloudFormationTemplateOutput { + s.ApplicationId = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *GetCloudFormationTemplateOutput) SetCreationTime(v string) *GetCloudFormationTemplateOutput { + s.CreationTime = &v + return s +} + +// SetExpirationTime sets the ExpirationTime field's value. +func (s *GetCloudFormationTemplateOutput) SetExpirationTime(v string) *GetCloudFormationTemplateOutput { + s.ExpirationTime = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *GetCloudFormationTemplateOutput) SetSemanticVersion(v string) *GetCloudFormationTemplateOutput { + s.SemanticVersion = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetCloudFormationTemplateOutput) SetStatus(v string) *GetCloudFormationTemplateOutput { + s.Status = &v + return s +} + +// SetTemplateId sets the TemplateId field's value. +func (s *GetCloudFormationTemplateOutput) SetTemplateId(v string) *GetCloudFormationTemplateOutput { + s.TemplateId = &v + return s +} + +// SetTemplateUrl sets the TemplateUrl field's value. +func (s *GetCloudFormationTemplateOutput) SetTemplateUrl(v string) *GetCloudFormationTemplateOutput { + s.TemplateUrl = &v + return s +} + +type ListApplicationVersionsInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` + + MaxItems *int64 `location:"querystring" locationName:"maxItems" min:"1" type:"integer"` + + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListApplicationVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListApplicationVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListApplicationVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListApplicationVersionsInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *ListApplicationVersionsInput) SetApplicationId(v string) *ListApplicationVersionsInput { + s.ApplicationId = &v + return s +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListApplicationVersionsInput) SetMaxItems(v int64) *ListApplicationVersionsInput { + s.MaxItems = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListApplicationVersionsInput) SetNextToken(v string) *ListApplicationVersionsInput { + s.NextToken = &v + return s +} + +type ListApplicationVersionsOutput struct { + _ struct{} `type:"structure"` + + NextToken *string `locationName:"nextToken" type:"string"` + + Versions []*VersionSummary `locationName:"versions" type:"list"` +} + +// String returns the string representation +func (s ListApplicationVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListApplicationVersionsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *ListApplicationVersionsOutput) SetNextToken(v string) *ListApplicationVersionsOutput { + s.NextToken = &v + return s +} + +// SetVersions sets the Versions field's value. +func (s *ListApplicationVersionsOutput) SetVersions(v []*VersionSummary) *ListApplicationVersionsOutput { + s.Versions = v + return s +} + +type ListApplicationsInput struct { + _ struct{} `type:"structure"` + + MaxItems *int64 `location:"querystring" locationName:"maxItems" min:"1" type:"integer"` + + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListApplicationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListApplicationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListApplicationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListApplicationsInput"} + if s.MaxItems != nil && *s.MaxItems < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxItems", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMaxItems sets the MaxItems field's value. +func (s *ListApplicationsInput) SetMaxItems(v int64) *ListApplicationsInput { + s.MaxItems = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *ListApplicationsInput) SetNextToken(v string) *ListApplicationsInput { + s.NextToken = &v + return s +} + +type ListApplicationsOutput struct { + _ struct{} `type:"structure"` + + Applications []*ApplicationSummary `locationName:"applications" type:"list"` + + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListApplicationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListApplicationsOutput) GoString() string { + return s.String() +} + +// SetApplications sets the Applications field's value. +func (s *ListApplicationsOutput) SetApplications(v []*ApplicationSummary) *ListApplicationsOutput { + s.Applications = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *ListApplicationsOutput) SetNextToken(v string) *ListApplicationsOutput { + s.NextToken = &v + return s +} + +// Parameters supported by the application. +type ParameterDefinition struct { + _ struct{} `type:"structure"` + + // A regular expression that represents the patterns to allow for String types. + AllowedPattern *string `locationName:"allowedPattern" type:"string"` + + // An array containing the list of values allowed for the parameter. + AllowedValues []*string `locationName:"allowedValues" type:"list"` + + // A string that explains a constraint when the constraint is violated. For + // example, without a constraint description, a parameter that has an allowed + // pattern of [A-Za-z0-9]+ displays the following error message when the user + // specifies an invalid value: + // + // Malformed input-Parameter MyParameter must match pattern [A-Za-z0-9]+ + // + // By adding a constraint description, such as "must contain only uppercase + // and lowercase letters and numbers," you can display the following customized + // error message: + // + // Malformed input-Parameter MyParameter must contain only uppercase and lowercase + // letters and numbers. + ConstraintDescription *string `locationName:"constraintDescription" type:"string"` + + // A value of the appropriate type for the template to use if no value is specified + // when a stack is created. If you define constraints for the parameter, you + // must specify a value that adheres to those constraints. + DefaultValue *string `locationName:"defaultValue" type:"string"` + + // A string of up to 4,000 characters that describes the parameter. + Description *string `locationName:"description" type:"string"` + + // An integer value that determines the largest number of characters that you + // want to allow for String types. + MaxLength *int64 `locationName:"maxLength" type:"integer"` + + // A numeric value that determines the largest numeric value that you want to + // allow for Number types. + MaxValue *int64 `locationName:"maxValue" type:"integer"` + + // An integer value that determines the smallest number of characters that you + // want to allow for String types. + MinLength *int64 `locationName:"minLength" type:"integer"` + + // A numeric value that determines the smallest numeric value that you want + // to allow for Number types. + MinValue *int64 `locationName:"minValue" type:"integer"` + + // The name of the parameter. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // Whether to mask the parameter value whenever anyone makes a call that describes + // the stack. 
If you set the value to true, the parameter value is masked with + // asterisks (*****). + NoEcho *bool `locationName:"noEcho" type:"boolean"` + + // A list of AWS SAM resources that use this parameter. + // + // ReferencedByResources is a required field + ReferencedByResources []*string `locationName:"referencedByResources" type:"list" required:"true"` + + // The type of the parameter. + // + // Valid values: String | Number | List | CommaDelimitedList + // + // String: A literal string. + // + // For example, users can specify "MyUserName". + // + // Number: An integer or float. AWS CloudFormation validates the parameter value + // as a number. However, when you use the parameter elsewhere in your template + // (for example, by using the Ref intrinsic function), the parameter value becomes + // a string. + // + // For example, users might specify "8888". + // + // List: An array of integers or floats that are separated by commas. + // AWS CloudFormation validates the parameter value as numbers. However, when + // you use the parameter elsewhere in your template (for example, by using the + // Ref intrinsic function), the parameter value becomes a list of strings. + // + // For example, users might specify "80,20", and then Ref results in ["80","20"]. + // + // CommaDelimitedList: An array of literal strings that are separated by commas. + // The total number of strings should be one more than the total number of commas. + // Also, each member string is space-trimmed. + // + // For example, users might specify "test,dev,prod", and then Ref results in + // ["test","dev","prod"]. + Type *string `locationName:"type" type:"string"` +} + +// String returns the string representation +func (s ParameterDefinition) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterDefinition) GoString() string { + return s.String() +} + +// SetAllowedPattern sets the AllowedPattern field's value. +func (s *ParameterDefinition) SetAllowedPattern(v string) *ParameterDefinition { + s.AllowedPattern = &v + return s +} + +// SetAllowedValues sets the AllowedValues field's value. +func (s *ParameterDefinition) SetAllowedValues(v []*string) *ParameterDefinition { + s.AllowedValues = v + return s +} + +// SetConstraintDescription sets the ConstraintDescription field's value. +func (s *ParameterDefinition) SetConstraintDescription(v string) *ParameterDefinition { + s.ConstraintDescription = &v + return s +} + +// SetDefaultValue sets the DefaultValue field's value. +func (s *ParameterDefinition) SetDefaultValue(v string) *ParameterDefinition { + s.DefaultValue = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ParameterDefinition) SetDescription(v string) *ParameterDefinition { + s.Description = &v + return s +} + +// SetMaxLength sets the MaxLength field's value. +func (s *ParameterDefinition) SetMaxLength(v int64) *ParameterDefinition { + s.MaxLength = &v + return s +} + +// SetMaxValue sets the MaxValue field's value. +func (s *ParameterDefinition) SetMaxValue(v int64) *ParameterDefinition { + s.MaxValue = &v + return s +} + +// SetMinLength sets the MinLength field's value. +func (s *ParameterDefinition) SetMinLength(v int64) *ParameterDefinition { + s.MinLength = &v + return s +} + +// SetMinValue sets the MinValue field's value. +func (s *ParameterDefinition) SetMinValue(v int64) *ParameterDefinition { + s.MinValue = &v + return s +} + +// SetName sets the Name field's value. 
+func (s *ParameterDefinition) SetName(v string) *ParameterDefinition { + s.Name = &v + return s +} + +// SetNoEcho sets the NoEcho field's value. +func (s *ParameterDefinition) SetNoEcho(v bool) *ParameterDefinition { + s.NoEcho = &v + return s +} + +// SetReferencedByResources sets the ReferencedByResources field's value. +func (s *ParameterDefinition) SetReferencedByResources(v []*string) *ParameterDefinition { + s.ReferencedByResources = v + return s +} + +// SetType sets the Type field's value. +func (s *ParameterDefinition) SetType(v string) *ParameterDefinition { + s.Type = &v + return s +} + +// Parameter value of the application. +type ParameterValue struct { + _ struct{} `type:"structure"` + + // The key associated with the parameter. If you don't specify a key and value + // for a particular parameter, AWS CloudFormation uses the default value that + // is specified in your template. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` + + // The input value associated with the parameter. + // + // Value is a required field + Value *string `locationName:"value" type:"string" required:"true"` +} + +// String returns the string representation +func (s ParameterValue) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ParameterValue) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ParameterValue) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ParameterValue"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *ParameterValue) SetName(v string) *ParameterValue { + s.Name = &v + return s +} + +// SetValue sets the Value field's value. +func (s *ParameterValue) SetValue(v string) *ParameterValue { + s.Value = &v + return s +} + +type PutApplicationPolicyInput struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` + + // Statements is a required field + Statements []*ApplicationPolicyStatement `locationName:"statements" type:"list" required:"true"` +} + +// String returns the string representation +func (s PutApplicationPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutApplicationPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutApplicationPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutApplicationPolicyInput"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + if s.Statements == nil { + invalidParams.Add(request.NewErrParamRequired("Statements")) + } + if s.Statements != nil { + for i, v := range s.Statements { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Statements", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. 
+func (s *PutApplicationPolicyInput) SetApplicationId(v string) *PutApplicationPolicyInput { + s.ApplicationId = &v + return s +} + +// SetStatements sets the Statements field's value. +func (s *PutApplicationPolicyInput) SetStatements(v []*ApplicationPolicyStatement) *PutApplicationPolicyInput { + s.Statements = v + return s +} + +type PutApplicationPolicyOutput struct { + _ struct{} `type:"structure"` + + Statements []*ApplicationPolicyStatement `locationName:"statements" type:"list"` +} + +// String returns the string representation +func (s PutApplicationPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutApplicationPolicyOutput) GoString() string { + return s.String() +} + +// SetStatements sets the Statements field's value. +func (s *PutApplicationPolicyOutput) SetStatements(v []*ApplicationPolicyStatement) *PutApplicationPolicyOutput { + s.Statements = v + return s +} + +// This property corresponds to the AWS CloudFormation RollbackConfiguration +// (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/RollbackConfiguration) +// Data Type. +type RollbackConfiguration struct { + _ struct{} `type:"structure"` + + // This property corresponds to the content of the same name for the AWS CloudFormation + // RollbackConfiguration (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/RollbackConfiguration) + // Data Type. + MonitoringTimeInMinutes *int64 `locationName:"monitoringTimeInMinutes" type:"integer"` + + // This property corresponds to the content of the same name for the AWS CloudFormation + // RollbackConfiguration (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/RollbackConfiguration) + // Data Type. + RollbackTriggers []*RollbackTrigger `locationName:"rollbackTriggers" type:"list"` +} + +// String returns the string representation +func (s RollbackConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RollbackConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RollbackConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RollbackConfiguration"} + if s.RollbackTriggers != nil { + for i, v := range s.RollbackTriggers { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RollbackTriggers", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMonitoringTimeInMinutes sets the MonitoringTimeInMinutes field's value. +func (s *RollbackConfiguration) SetMonitoringTimeInMinutes(v int64) *RollbackConfiguration { + s.MonitoringTimeInMinutes = &v + return s +} + +// SetRollbackTriggers sets the RollbackTriggers field's value. +func (s *RollbackConfiguration) SetRollbackTriggers(v []*RollbackTrigger) *RollbackConfiguration { + s.RollbackTriggers = v + return s +} + +// This property corresponds to the AWS CloudFormation RollbackTrigger (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/RollbackTrigger) +// Data Type. +type RollbackTrigger struct { + _ struct{} `type:"structure"` + + // This property corresponds to the content of the same name for the AWS CloudFormation + // RollbackTrigger (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/RollbackTrigger) + // Data Type. 
+ // + // Arn is a required field + Arn *string `locationName:"arn" type:"string" required:"true"` + + // This property corresponds to the content of the same name for the AWS CloudFormation + // RollbackTrigger (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/RollbackTrigger) + // Data Type. + // + // Type is a required field + Type *string `locationName:"type" type:"string" required:"true"` +} + +// String returns the string representation +func (s RollbackTrigger) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RollbackTrigger) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RollbackTrigger) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RollbackTrigger"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *RollbackTrigger) SetArn(v string) *RollbackTrigger { + s.Arn = &v + return s +} + +// SetType sets the Type field's value. +func (s *RollbackTrigger) SetType(v string) *RollbackTrigger { + s.Type = &v + return s +} + +// This property corresponds to the AWS CloudFormation Tag (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/Tag) +// Data Type. +type Tag struct { + _ struct{} `type:"structure"` + + // This property corresponds to the content of the same name for the AWS CloudFormation + // Tag (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/Tag) + // Data Type. + // + // Key is a required field + Key *string `locationName:"key" type:"string" required:"true"` + + // This property corresponds to the content of the same name for the AWS CloudFormation + // Tag (https://docs.aws.amazon.com/goto/WebAPI/cloudformation-2010-05-15/Tag) + // Data Type. + // + // Value is a required field + Value *string `locationName:"value" type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. 
+func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +type UpdateApplicationOutput struct { + _ struct{} `type:"structure"` + + ApplicationId *string `locationName:"applicationId" type:"string"` + + Author *string `locationName:"author" type:"string"` + + CreationTime *string `locationName:"creationTime" type:"string"` + + Description *string `locationName:"description" type:"string"` + + HomePageUrl *string `locationName:"homePageUrl" type:"string"` + + Labels []*string `locationName:"labels" type:"list"` + + LicenseUrl *string `locationName:"licenseUrl" type:"string"` + + Name *string `locationName:"name" type:"string"` + + ReadmeUrl *string `locationName:"readmeUrl" type:"string"` + + SpdxLicenseId *string `locationName:"spdxLicenseId" type:"string"` + + // Application version details. + Version *Version `locationName:"version" type:"structure"` +} + +// String returns the string representation +func (s UpdateApplicationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApplicationOutput) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateApplicationOutput) SetApplicationId(v string) *UpdateApplicationOutput { + s.ApplicationId = &v + return s +} + +// SetAuthor sets the Author field's value. +func (s *UpdateApplicationOutput) SetAuthor(v string) *UpdateApplicationOutput { + s.Author = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *UpdateApplicationOutput) SetCreationTime(v string) *UpdateApplicationOutput { + s.CreationTime = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateApplicationOutput) SetDescription(v string) *UpdateApplicationOutput { + s.Description = &v + return s +} + +// SetHomePageUrl sets the HomePageUrl field's value. +func (s *UpdateApplicationOutput) SetHomePageUrl(v string) *UpdateApplicationOutput { + s.HomePageUrl = &v + return s +} + +// SetLabels sets the Labels field's value. +func (s *UpdateApplicationOutput) SetLabels(v []*string) *UpdateApplicationOutput { + s.Labels = v + return s +} + +// SetLicenseUrl sets the LicenseUrl field's value. +func (s *UpdateApplicationOutput) SetLicenseUrl(v string) *UpdateApplicationOutput { + s.LicenseUrl = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateApplicationOutput) SetName(v string) *UpdateApplicationOutput { + s.Name = &v + return s +} + +// SetReadmeUrl sets the ReadmeUrl field's value. +func (s *UpdateApplicationOutput) SetReadmeUrl(v string) *UpdateApplicationOutput { + s.ReadmeUrl = &v + return s +} + +// SetSpdxLicenseId sets the SpdxLicenseId field's value. +func (s *UpdateApplicationOutput) SetSpdxLicenseId(v string) *UpdateApplicationOutput { + s.SpdxLicenseId = &v + return s +} + +// SetVersion sets the Version field's value. 
+func (s *UpdateApplicationOutput) SetVersion(v *Version) *UpdateApplicationOutput { + s.Version = v + return s +} + +type UpdateApplicationRequest struct { + _ struct{} `type:"structure"` + + // ApplicationId is a required field + ApplicationId *string `location:"uri" locationName:"applicationId" type:"string" required:"true"` + + Author *string `locationName:"author" type:"string"` + + Description *string `locationName:"description" type:"string"` + + HomePageUrl *string `locationName:"homePageUrl" type:"string"` + + Labels []*string `locationName:"labels" type:"list"` + + ReadmeBody *string `locationName:"readmeBody" type:"string"` + + ReadmeUrl *string `locationName:"readmeUrl" type:"string"` +} + +// String returns the string representation +func (s UpdateApplicationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateApplicationRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateApplicationRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateApplicationRequest"} + if s.ApplicationId == nil { + invalidParams.Add(request.NewErrParamRequired("ApplicationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *UpdateApplicationRequest) SetApplicationId(v string) *UpdateApplicationRequest { + s.ApplicationId = &v + return s +} + +// SetAuthor sets the Author field's value. +func (s *UpdateApplicationRequest) SetAuthor(v string) *UpdateApplicationRequest { + s.Author = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateApplicationRequest) SetDescription(v string) *UpdateApplicationRequest { + s.Description = &v + return s +} + +// SetHomePageUrl sets the HomePageUrl field's value. +func (s *UpdateApplicationRequest) SetHomePageUrl(v string) *UpdateApplicationRequest { + s.HomePageUrl = &v + return s +} + +// SetLabels sets the Labels field's value. +func (s *UpdateApplicationRequest) SetLabels(v []*string) *UpdateApplicationRequest { + s.Labels = v + return s +} + +// SetReadmeBody sets the ReadmeBody field's value. +func (s *UpdateApplicationRequest) SetReadmeBody(v string) *UpdateApplicationRequest { + s.ReadmeBody = &v + return s +} + +// SetReadmeUrl sets the ReadmeUrl field's value. +func (s *UpdateApplicationRequest) SetReadmeUrl(v string) *UpdateApplicationRequest { + s.ReadmeUrl = &v + return s +} + +// Application version details. +type Version struct { + _ struct{} `type:"structure"` + + // The application Amazon Resource Name (ARN). + // + // ApplicationId is a required field + ApplicationId *string `locationName:"applicationId" type:"string" required:"true"` + + // The date and time this resource was created. + // + // CreationTime is a required field + CreationTime *string `locationName:"creationTime" type:"string" required:"true"` + + // An array of parameter types supported by the application. + // + // ParameterDefinitions is a required field + ParameterDefinitions []*ParameterDefinition `locationName:"parameterDefinitions" type:"list" required:"true"` + + // A list of values that you must specify before you can deploy certain applications. + // Some applications might include resources that can affect permissions in + // your AWS account, for example, by creating new AWS Identity and Access Management + // (IAM) users. 
For those applications, you must explicitly acknowledge their + // capabilities by specifying this parameter. + // + // The only valid values are CAPABILITY_IAM, CAPABILITY_NAMED_IAM, and CAPABILITY_RESOURCE_POLICY. + // + // The following resources require you to specify CAPABILITY_IAM or CAPABILITY_NAMED_IAM: + // AWS::IAM::Group (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html), + // AWS::IAM::InstanceProfile (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html), + // AWS::IAM::Policy (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html), + // and AWS::IAM::Role (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html). + // If the application contains IAM resources, you can specify either CAPABILITY_IAM + // or CAPABILITY_NAMED_IAM. If the application contains IAM resources with custom + // names, you must specify CAPABILITY_NAMED_IAM. + // + // The following resources require you to specify CAPABILITY_RESOURCE_POLICY: + // AWS::ApplicationAutoScaling::ScalingPolicy (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalingpolicy.html), + // AWS::S3::BucketPolicy (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-policy.html), + // AWS::SQS::QueuePolicy (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-policy.html), + // and AWS::SNS:TopicPolicy (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sns-policy.html). + // + // If your application template contains any of the above resources, we recommend + // that you review all permissions associated with the application before deploying. + // If you don't specify this parameter for an application that requires capabilities, + // the call will fail. + // + // Valid values: CAPABILITY_IAM | CAPABILITY_NAMED_IAM | CAPABILITY_RESOURCE_POLICY + // + // RequiredCapabilities is a required field + RequiredCapabilities []*string `locationName:"requiredCapabilities" type:"list" required:"true"` + + // Whether all of the AWS resources contained in this application are supported + // in the region in which it is being retrieved. + // + // ResourcesSupported is a required field + ResourcesSupported *bool `locationName:"resourcesSupported" type:"boolean" required:"true"` + + // The semantic version of the application: + // + // https://semver.org/ (https://semver.org/) + // + // SemanticVersion is a required field + SemanticVersion *string `locationName:"semanticVersion" type:"string" required:"true"` + + // A link to a public repository for the source code of your application. + SourceCodeUrl *string `locationName:"sourceCodeUrl" type:"string"` + + // A link to the packaged AWS SAM template of your application. + // + // TemplateUrl is a required field + TemplateUrl *string `locationName:"templateUrl" type:"string" required:"true"` +} + +// String returns the string representation +func (s Version) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Version) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *Version) SetApplicationId(v string) *Version { + s.ApplicationId = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. 
+func (s *Version) SetCreationTime(v string) *Version { + s.CreationTime = &v + return s +} + +// SetParameterDefinitions sets the ParameterDefinitions field's value. +func (s *Version) SetParameterDefinitions(v []*ParameterDefinition) *Version { + s.ParameterDefinitions = v + return s +} + +// SetRequiredCapabilities sets the RequiredCapabilities field's value. +func (s *Version) SetRequiredCapabilities(v []*string) *Version { + s.RequiredCapabilities = v + return s +} + +// SetResourcesSupported sets the ResourcesSupported field's value. +func (s *Version) SetResourcesSupported(v bool) *Version { + s.ResourcesSupported = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *Version) SetSemanticVersion(v string) *Version { + s.SemanticVersion = &v + return s +} + +// SetSourceCodeUrl sets the SourceCodeUrl field's value. +func (s *Version) SetSourceCodeUrl(v string) *Version { + s.SourceCodeUrl = &v + return s +} + +// SetTemplateUrl sets the TemplateUrl field's value. +func (s *Version) SetTemplateUrl(v string) *Version { + s.TemplateUrl = &v + return s +} + +// An application version summary. +type VersionSummary struct { + _ struct{} `type:"structure"` + + // The application Amazon Resource Name (ARN). + // + // ApplicationId is a required field + ApplicationId *string `locationName:"applicationId" type:"string" required:"true"` + + // The date and time this resource was created. + // + // CreationTime is a required field + CreationTime *string `locationName:"creationTime" type:"string" required:"true"` + + // The semantic version of the application: + // + // https://semver.org/ (https://semver.org/) + // + // SemanticVersion is a required field + SemanticVersion *string `locationName:"semanticVersion" type:"string" required:"true"` + + // A link to a public repository for the source code of your application. + SourceCodeUrl *string `locationName:"sourceCodeUrl" type:"string"` +} + +// String returns the string representation +func (s VersionSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VersionSummary) GoString() string { + return s.String() +} + +// SetApplicationId sets the ApplicationId field's value. +func (s *VersionSummary) SetApplicationId(v string) *VersionSummary { + s.ApplicationId = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *VersionSummary) SetCreationTime(v string) *VersionSummary { + s.CreationTime = &v + return s +} + +// SetSemanticVersion sets the SemanticVersion field's value. +func (s *VersionSummary) SetSemanticVersion(v string) *VersionSummary { + s.SemanticVersion = &v + return s +} + +// SetSourceCodeUrl sets the SourceCodeUrl field's value. 
+func (s *VersionSummary) SetSourceCodeUrl(v string) *VersionSummary { + s.SourceCodeUrl = &v + return s +} + +const ( + // CapabilityCapabilityIam is a Capability enum value + CapabilityCapabilityIam = "CAPABILITY_IAM" + + // CapabilityCapabilityNamedIam is a Capability enum value + CapabilityCapabilityNamedIam = "CAPABILITY_NAMED_IAM" + + // CapabilityCapabilityResourcePolicy is a Capability enum value + CapabilityCapabilityResourcePolicy = "CAPABILITY_RESOURCE_POLICY" +) + +const ( + // StatusPreparing is a Status enum value + StatusPreparing = "PREPARING" + + // StatusActive is a Status enum value + StatusActive = "ACTIVE" + + // StatusExpired is a Status enum value + StatusExpired = "EXPIRED" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/doc.go b/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/doc.go new file mode 100644 index 00000000000..dde60865eee --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/doc.go @@ -0,0 +1,58 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package serverlessapplicationrepository provides the client and types for making API +// requests to AWSServerlessApplicationRepository. +// +// The AWS Serverless Application Repository makes it easy for developers and +// enterprises to quickly find and deploy serverless applications in the AWS +// Cloud. For more information about serverless applications, see Serverless +// Computing and Applications on the AWS website. +// +// The AWS Serverless Application Repository is deeply integrated with the AWS +// Lambda console, so that developers of all levels can get started with serverless +// computing without needing to learn anything new. You can use category keywords +// to browse for applications such as web and mobile backends, data processing +// applications, or chatbots. You can also search for applications by name, +// publisher, or event source. To use an application, you simply choose it, +// configure any required fields, and deploy it with a few clicks. +// +// You can also easily publish applications, sharing them publicly with the +// community at large, or privately within your team or across your organization. +// To publish a serverless application (or app), you can use the AWS Management +// Console, AWS Command Line Interface (AWS CLI), or AWS SDKs to upload the +// code. Along with the code, you upload a simple manifest file, also known +// as the AWS Serverless Application Model (AWS SAM) template. For more information +// about AWS SAM, see AWS Serverless Application Model (AWS SAM) on the AWS +// Labs GitHub repository. +// +// The AWS Serverless Application Repository Developer Guide contains more information +// about the two developer experiences available: +// +// * Consuming Applications – Browse for applications and view information +// about them, including source code and readme files. Also install, configure, +// and deploy applications of your choosing. +// +// Publishing Applications – Configure and upload applications to make them +// available to other developers, and publish new versions of applications. +// +// See https://docs.aws.amazon.com/goto/WebAPI/serverlessrepo-2017-09-08 for more information on this service. +// +// See serverlessapplicationrepository package documentation for more information. 
+// https://docs.aws.amazon.com/sdk-for-go/api/service/serverlessapplicationrepository/ +// +// Using the Client +// +// To contact AWSServerlessApplicationRepository with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the AWSServerlessApplicationRepository client ServerlessApplicationRepository for more +// information on creating client for this service. +// https://docs.aws.amazon.com/sdk-for-go/api/service/serverlessapplicationrepository/#New +package serverlessapplicationrepository diff --git a/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/errors.go b/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/errors.go new file mode 100644 index 00000000000..2855c3aec7c --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/errors.go @@ -0,0 +1,45 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package serverlessapplicationrepository + +const ( + + // ErrCodeBadRequestException for service response error code + // "BadRequestException". + // + // One of the parameters in the request is invalid. + ErrCodeBadRequestException = "BadRequestException" + + // ErrCodeConflictException for service response error code + // "ConflictException". + // + // The resource already exists. + ErrCodeConflictException = "ConflictException" + + // ErrCodeForbiddenException for service response error code + // "ForbiddenException". + // + // The client is not authenticated. + ErrCodeForbiddenException = "ForbiddenException" + + // ErrCodeInternalServerErrorException for service response error code + // "InternalServerErrorException". + // + // The AWS Serverless Application Repository service encountered an internal + // error. + ErrCodeInternalServerErrorException = "InternalServerErrorException" + + // ErrCodeNotFoundException for service response error code + // "NotFoundException". + // + // The resource (for example, an access policy statement) specified in the request + // doesn't exist. + ErrCodeNotFoundException = "NotFoundException" + + // ErrCodeTooManyRequestsException for service response error code + // "TooManyRequestsException". + // + // The client is sending more than the allowed number of requests per unit of + // time. + ErrCodeTooManyRequestsException = "TooManyRequestsException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/service.go b/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/service.go new file mode 100644 index 00000000000..78f8f47ce53 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/serverlessapplicationrepository/service.go @@ -0,0 +1,99 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package serverlessapplicationrepository + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/restjson" +) + +// ServerlessApplicationRepository provides the API operation methods for making requests to +// AWSServerlessApplicationRepository. See this package's package overview docs +// for details on the service. +// +// ServerlessApplicationRepository methods are safe to use concurrently. It is not safe to +// modify mutate any of the struct's properties though. +type ServerlessApplicationRepository struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "serverlessrepo" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "ServerlessApplicationRepository" // ServiceID is a unique identifer of a specific service. +) + +// New creates a new instance of the ServerlessApplicationRepository client with a session. +// If additional configuration is needed for the client instance use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a ServerlessApplicationRepository client from just a session. +// svc := serverlessapplicationrepository.New(mySession) +// +// // Create a ServerlessApplicationRepository client with additional configuration +// svc := serverlessapplicationrepository.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *ServerlessApplicationRepository { + c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "serverlessrepo" + } + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance. +func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *ServerlessApplicationRepository { + svc := &ServerlessApplicationRepository{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2017-09-08", + JSONVersion: "1.1", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(restjson.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(restjson.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(restjson.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(restjson.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a ServerlessApplicationRepository operation and runs any +// custom request initialization. 
+func (c *ServerlessApplicationRepository) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/api.go b/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/api.go index 61b40be773c..d12e3ee671f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/api.go @@ -15,8 +15,8 @@ const opAcceptPortfolioShare = "AcceptPortfolioShare" // AcceptPortfolioShareRequest generates a "aws/request.Request" representing the // client's request for the AcceptPortfolioShare operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -102,8 +102,8 @@ const opAssociatePrincipalWithPortfolio = "AssociatePrincipalWithPortfolio" // AssociatePrincipalWithPortfolioRequest generates a "aws/request.Request" representing the // client's request for the AssociatePrincipalWithPortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -189,8 +189,8 @@ const opAssociateProductWithPortfolio = "AssociateProductWithPortfolio" // AssociateProductWithPortfolioRequest generates a "aws/request.Request" representing the // client's request for the AssociateProductWithPortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -272,12 +272,99 @@ func (c *ServiceCatalog) AssociateProductWithPortfolioWithContext(ctx aws.Contex return out, req.Send() } +const opAssociateServiceActionWithProvisioningArtifact = "AssociateServiceActionWithProvisioningArtifact" + +// AssociateServiceActionWithProvisioningArtifactRequest generates a "aws/request.Request" representing the +// client's request for the AssociateServiceActionWithProvisioningArtifact operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AssociateServiceActionWithProvisioningArtifact for more information on using the AssociateServiceActionWithProvisioningArtifact +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. 
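For orientation only (the files above are vendored, generated SDK code, not provider changes): a minimal usage sketch of the serverlessapplicationrepository client that the doc.go comments above describe. The `UpdateApplication` method and the canonical `github.com` import paths are assumed from the AWS SDK for Go and are not shown in this hunk; the region and application ARN are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/serverlessapplicationrepository"
)

func main() {
	// Credentials and region resolve through the usual SDK chain;
	// the region here is only a placeholder.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))

	// New is the constructor vendored in service.go above.
	svc := serverlessapplicationrepository.New(sess)

	// UpdateApplicationRequest is the struct vendored in api.go above;
	// only ApplicationId is required. The ARN is a placeholder, and the
	// UpdateApplication method is assumed from the generated API surface.
	out, err := svc.UpdateApplication(&serverlessapplicationrepository.UpdateApplicationRequest{
		ApplicationId: aws.String("arn:aws:serverlessrepo:us-east-1:123456789012:applications/example"),
		Description:   aws.String("Updated via the vendored SDK client"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// The output carries the Version struct shown above.
	if out.Version != nil {
		fmt.Println(aws.StringValue(out.Version.SemanticVersion))
	}
}
```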
Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssociateServiceActionWithProvisioningArtifactRequest method. +// req, resp := client.AssociateServiceActionWithProvisioningArtifactRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/AssociateServiceActionWithProvisioningArtifact +func (c *ServiceCatalog) AssociateServiceActionWithProvisioningArtifactRequest(input *AssociateServiceActionWithProvisioningArtifactInput) (req *request.Request, output *AssociateServiceActionWithProvisioningArtifactOutput) { + op := &request.Operation{ + Name: opAssociateServiceActionWithProvisioningArtifact, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AssociateServiceActionWithProvisioningArtifactInput{} + } + + output = &AssociateServiceActionWithProvisioningArtifactOutput{} + req = c.newRequest(op, input, output) + return +} + +// AssociateServiceActionWithProvisioningArtifact API operation for AWS Service Catalog. +// +// Associates a self-service action with a provisioning artifact. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation AssociateServiceActionWithProvisioningArtifact for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeDuplicateResourceException "DuplicateResourceException" +// The specified resource is a duplicate. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The current limits of the service would have been exceeded by this operation. +// Decrease your resource use or increase your service limits and retry the +// operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/AssociateServiceActionWithProvisioningArtifact +func (c *ServiceCatalog) AssociateServiceActionWithProvisioningArtifact(input *AssociateServiceActionWithProvisioningArtifactInput) (*AssociateServiceActionWithProvisioningArtifactOutput, error) { + req, out := c.AssociateServiceActionWithProvisioningArtifactRequest(input) + return out, req.Send() +} + +// AssociateServiceActionWithProvisioningArtifactWithContext is the same as AssociateServiceActionWithProvisioningArtifact with the addition of +// the ability to pass a context and additional request options. +// +// See AssociateServiceActionWithProvisioningArtifact for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) AssociateServiceActionWithProvisioningArtifactWithContext(ctx aws.Context, input *AssociateServiceActionWithProvisioningArtifactInput, opts ...request.Option) (*AssociateServiceActionWithProvisioningArtifactOutput, error) { + req, out := c.AssociateServiceActionWithProvisioningArtifactRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opAssociateTagOptionWithResource = "AssociateTagOptionWithResource" // AssociateTagOptionWithResourceRequest generates a "aws/request.Request" representing the // client's request for the AssociateTagOptionWithResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -372,12 +459,171 @@ func (c *ServiceCatalog) AssociateTagOptionWithResourceWithContext(ctx aws.Conte return out, req.Send() } +const opBatchAssociateServiceActionWithProvisioningArtifact = "BatchAssociateServiceActionWithProvisioningArtifact" + +// BatchAssociateServiceActionWithProvisioningArtifactRequest generates a "aws/request.Request" representing the +// client's request for the BatchAssociateServiceActionWithProvisioningArtifact operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BatchAssociateServiceActionWithProvisioningArtifact for more information on using the BatchAssociateServiceActionWithProvisioningArtifact +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchAssociateServiceActionWithProvisioningArtifactRequest method. +// req, resp := client.BatchAssociateServiceActionWithProvisioningArtifactRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/BatchAssociateServiceActionWithProvisioningArtifact +func (c *ServiceCatalog) BatchAssociateServiceActionWithProvisioningArtifactRequest(input *BatchAssociateServiceActionWithProvisioningArtifactInput) (req *request.Request, output *BatchAssociateServiceActionWithProvisioningArtifactOutput) { + op := &request.Operation{ + Name: opBatchAssociateServiceActionWithProvisioningArtifact, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchAssociateServiceActionWithProvisioningArtifactInput{} + } + + output = &BatchAssociateServiceActionWithProvisioningArtifactOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchAssociateServiceActionWithProvisioningArtifact API operation for AWS Service Catalog. +// +// Associates multiple self-service actions with provisioning artifacts. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation BatchAssociateServiceActionWithProvisioningArtifact for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/BatchAssociateServiceActionWithProvisioningArtifact +func (c *ServiceCatalog) BatchAssociateServiceActionWithProvisioningArtifact(input *BatchAssociateServiceActionWithProvisioningArtifactInput) (*BatchAssociateServiceActionWithProvisioningArtifactOutput, error) { + req, out := c.BatchAssociateServiceActionWithProvisioningArtifactRequest(input) + return out, req.Send() +} + +// BatchAssociateServiceActionWithProvisioningArtifactWithContext is the same as BatchAssociateServiceActionWithProvisioningArtifact with the addition of +// the ability to pass a context and additional request options. +// +// See BatchAssociateServiceActionWithProvisioningArtifact for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) BatchAssociateServiceActionWithProvisioningArtifactWithContext(ctx aws.Context, input *BatchAssociateServiceActionWithProvisioningArtifactInput, opts ...request.Option) (*BatchAssociateServiceActionWithProvisioningArtifactOutput, error) { + req, out := c.BatchAssociateServiceActionWithProvisioningArtifactRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opBatchDisassociateServiceActionFromProvisioningArtifact = "BatchDisassociateServiceActionFromProvisioningArtifact" + +// BatchDisassociateServiceActionFromProvisioningArtifactRequest generates a "aws/request.Request" representing the +// client's request for the BatchDisassociateServiceActionFromProvisioningArtifact operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See BatchDisassociateServiceActionFromProvisioningArtifact for more information on using the BatchDisassociateServiceActionFromProvisioningArtifact +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the BatchDisassociateServiceActionFromProvisioningArtifactRequest method. 
+// req, resp := client.BatchDisassociateServiceActionFromProvisioningArtifactRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/BatchDisassociateServiceActionFromProvisioningArtifact +func (c *ServiceCatalog) BatchDisassociateServiceActionFromProvisioningArtifactRequest(input *BatchDisassociateServiceActionFromProvisioningArtifactInput) (req *request.Request, output *BatchDisassociateServiceActionFromProvisioningArtifactOutput) { + op := &request.Operation{ + Name: opBatchDisassociateServiceActionFromProvisioningArtifact, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &BatchDisassociateServiceActionFromProvisioningArtifactInput{} + } + + output = &BatchDisassociateServiceActionFromProvisioningArtifactOutput{} + req = c.newRequest(op, input, output) + return +} + +// BatchDisassociateServiceActionFromProvisioningArtifact API operation for AWS Service Catalog. +// +// Disassociates a batch of self-service actions from the specified provisioning +// artifact. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation BatchDisassociateServiceActionFromProvisioningArtifact for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/BatchDisassociateServiceActionFromProvisioningArtifact +func (c *ServiceCatalog) BatchDisassociateServiceActionFromProvisioningArtifact(input *BatchDisassociateServiceActionFromProvisioningArtifactInput) (*BatchDisassociateServiceActionFromProvisioningArtifactOutput, error) { + req, out := c.BatchDisassociateServiceActionFromProvisioningArtifactRequest(input) + return out, req.Send() +} + +// BatchDisassociateServiceActionFromProvisioningArtifactWithContext is the same as BatchDisassociateServiceActionFromProvisioningArtifact with the addition of +// the ability to pass a context and additional request options. +// +// See BatchDisassociateServiceActionFromProvisioningArtifact for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) BatchDisassociateServiceActionFromProvisioningArtifactWithContext(ctx aws.Context, input *BatchDisassociateServiceActionFromProvisioningArtifactInput, opts ...request.Option) (*BatchDisassociateServiceActionFromProvisioningArtifactOutput, error) { + req, out := c.BatchDisassociateServiceActionFromProvisioningArtifactRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCopyProduct = "CopyProduct" // CopyProductRequest generates a "aws/request.Request" representing the // client's request for the CopyProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -465,8 +711,8 @@ const opCreateConstraint = "CreateConstraint" // CreateConstraintRequest generates a "aws/request.Request" representing the // client's request for the CreateConstraint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -555,8 +801,8 @@ const opCreatePortfolio = "CreatePortfolio" // CreatePortfolioRequest generates a "aws/request.Request" representing the // client's request for the CreatePortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -644,8 +890,8 @@ const opCreatePortfolioShare = "CreatePortfolioShare" // CreatePortfolioShareRequest generates a "aws/request.Request" representing the // client's request for the CreatePortfolioShare operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -684,7 +930,10 @@ func (c *ServiceCatalog) CreatePortfolioShareRequest(input *CreatePortfolioShare // CreatePortfolioShare API operation for AWS Service Catalog. // -// Shares the specified portfolio with the specified account. +// Shares the specified portfolio with the specified account or organization +// node. Shares to an organization node can only be created by the master account +// of an Organization. AWSOrganizationsAccess must be enabled in order to create +// a portfolio share to an organization node. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -705,6 +954,9 @@ func (c *ServiceCatalog) CreatePortfolioShareRequest(input *CreatePortfolioShare // * ErrCodeInvalidParametersException "InvalidParametersException" // One or more parameters provided to the operation are not valid. // +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// The operation is not supported. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/CreatePortfolioShare func (c *ServiceCatalog) CreatePortfolioShare(input *CreatePortfolioShareInput) (*CreatePortfolioShareOutput, error) { req, out := c.CreatePortfolioShareRequest(input) @@ -731,8 +983,8 @@ const opCreateProduct = "CreateProduct" // CreateProductRequest generates a "aws/request.Request" representing the // client's request for the CreateProduct operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -820,8 +1072,8 @@ const opCreateProvisionedProductPlan = "CreateProvisionedProductPlan" // CreateProvisionedProductPlanRequest generates a "aws/request.Request" representing the // client's request for the CreateProvisionedProductPlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -915,8 +1167,8 @@ const opCreateProvisioningArtifact = "CreateProvisioningArtifact" // CreateProvisioningArtifactRequest generates a "aws/request.Request" representing the // client's request for the CreateProvisioningArtifact operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1002,12 +1254,96 @@ func (c *ServiceCatalog) CreateProvisioningArtifactWithContext(ctx aws.Context, return out, req.Send() } +const opCreateServiceAction = "CreateServiceAction" + +// CreateServiceActionRequest generates a "aws/request.Request" representing the +// client's request for the CreateServiceAction operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateServiceAction for more information on using the CreateServiceAction +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateServiceActionRequest method. +// req, resp := client.CreateServiceActionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/CreateServiceAction +func (c *ServiceCatalog) CreateServiceActionRequest(input *CreateServiceActionInput) (req *request.Request, output *CreateServiceActionOutput) { + op := &request.Operation{ + Name: opCreateServiceAction, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateServiceActionInput{} + } + + output = &CreateServiceActionOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateServiceAction API operation for AWS Service Catalog. +// +// Creates a self-service action. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation CreateServiceAction for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// The current limits of the service would have been exceeded by this operation. +// Decrease your resource use or increase your service limits and retry the +// operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/CreateServiceAction +func (c *ServiceCatalog) CreateServiceAction(input *CreateServiceActionInput) (*CreateServiceActionOutput, error) { + req, out := c.CreateServiceActionRequest(input) + return out, req.Send() +} + +// CreateServiceActionWithContext is the same as CreateServiceAction with the addition of +// the ability to pass a context and additional request options. +// +// See CreateServiceAction for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) CreateServiceActionWithContext(ctx aws.Context, input *CreateServiceActionInput, opts ...request.Option) (*CreateServiceActionOutput, error) { + req, out := c.CreateServiceActionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateTagOption = "CreateTagOption" // CreateTagOptionRequest generates a "aws/request.Request" representing the // client's request for the CreateTagOption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1095,8 +1431,8 @@ const opDeleteConstraint = "DeleteConstraint" // DeleteConstraintRequest generates a "aws/request.Request" representing the // client's request for the DeleteConstraint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1177,8 +1513,8 @@ const opDeletePortfolio = "DeletePortfolio" // DeletePortfolioRequest generates a "aws/request.Request" representing the // client's request for the DeletePortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
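As a quick illustration of the two invocation patterns the generated comments above describe (building a request and calling `Send`, versus the `WithContext` variant that takes a non-nil context for cancellation), here is a hedged sketch against the vendored Service Catalog client. The input field names (`ProductId`, `ProvisioningArtifactId`, `ServiceActionId`) are assumed from the AWS Service Catalog API and do not appear in this hunk; all identifiers are placeholders, and in practice you would use one form or the other, not both.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := servicecatalog.New(sess)

	// Assumed field names; the IDs are placeholders.
	input := &servicecatalog.AssociateServiceActionWithProvisioningArtifactInput{
		ProductId:              aws.String("prod-examplexample"),
		ProvisioningArtifactId: aws.String("pa-examplexample"),
		ServiceActionId:        aws.String("act-examplexample"),
	}

	// Pattern 1: build the request, then Send it. The output value is not
	// valid until Send returns without error.
	req, out := svc.AssociateServiceActionWithProvisioningArtifactRequest(input)
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)

	// Pattern 2: the WithContext variant, which requires a non-nil context
	// and uses it for request cancellation (illustrative only here).
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if _, err := svc.AssociateServiceActionWithProvisioningArtifactWithContext(ctx, input); err != nil {
		log.Fatal(err)
	}
}
```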
@@ -1271,8 +1607,8 @@ const opDeletePortfolioShare = "DeletePortfolioShare" // DeletePortfolioShareRequest generates a "aws/request.Request" representing the // client's request for the DeletePortfolioShare operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1311,7 +1647,9 @@ func (c *ServiceCatalog) DeletePortfolioShareRequest(input *DeletePortfolioShare // DeletePortfolioShare API operation for AWS Service Catalog. // -// Stops sharing the specified portfolio with the specified account. +// Stops sharing the specified portfolio with the specified account or organization +// node. Shares to an organization node can only be deleted by the master account +// of an Organization. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1324,6 +1662,12 @@ func (c *ServiceCatalog) DeletePortfolioShareRequest(input *DeletePortfolioShare // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource was not found. // +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// The operation is not supported. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DeletePortfolioShare func (c *ServiceCatalog) DeletePortfolioShare(input *DeletePortfolioShareInput) (*DeletePortfolioShareOutput, error) { req, out := c.DeletePortfolioShareRequest(input) @@ -1350,8 +1694,8 @@ const opDeleteProduct = "DeleteProduct" // DeleteProductRequest generates a "aws/request.Request" representing the // client's request for the DeleteProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1444,8 +1788,8 @@ const opDeleteProvisionedProductPlan = "DeleteProvisionedProductPlan" // DeleteProvisionedProductPlanRequest generates a "aws/request.Request" representing the // client's request for the DeleteProvisionedProductPlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1526,8 +1870,8 @@ const opDeleteProvisioningArtifact = "DeleteProvisioningArtifact" // DeleteProvisioningArtifactRequest generates a "aws/request.Request" representing the // client's request for the DeleteProvisioningArtifact operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1613,53 +1957,136 @@ func (c *ServiceCatalog) DeleteProvisioningArtifactWithContext(ctx aws.Context, return out, req.Send() } -const opDeleteTagOption = "DeleteTagOption" +const opDeleteServiceAction = "DeleteServiceAction" -// DeleteTagOptionRequest generates a "aws/request.Request" representing the -// client's request for the DeleteTagOption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteServiceActionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteServiceAction operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteTagOption for more information on using the DeleteTagOption +// See DeleteServiceAction for more information on using the DeleteServiceAction // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteTagOptionRequest method. -// req, resp := client.DeleteTagOptionRequest(params) +// // Example sending a request using the DeleteServiceActionRequest method. +// req, resp := client.DeleteServiceActionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DeleteTagOption -func (c *ServiceCatalog) DeleteTagOptionRequest(input *DeleteTagOptionInput) (req *request.Request, output *DeleteTagOptionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DeleteServiceAction +func (c *ServiceCatalog) DeleteServiceActionRequest(input *DeleteServiceActionInput) (req *request.Request, output *DeleteServiceActionOutput) { op := &request.Operation{ - Name: opDeleteTagOption, + Name: opDeleteServiceAction, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DeleteTagOptionInput{} + input = &DeleteServiceActionInput{} } - output = &DeleteTagOptionOutput{} + output = &DeleteServiceActionOutput{} req = c.newRequest(op, input, output) return } -// DeleteTagOption API operation for AWS Service Catalog. +// DeleteServiceAction API operation for AWS Service Catalog. // -// Deletes the specified TagOption. +// Deletes a self-service action. // -// You cannot delete a TagOption if it is associated with a product or portfolio. +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation DeleteServiceAction for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. 
+// +// * ErrCodeResourceInUseException "ResourceInUseException" +// A resource that is currently in use. Ensure that the resource is not in use +// and retry the operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DeleteServiceAction +func (c *ServiceCatalog) DeleteServiceAction(input *DeleteServiceActionInput) (*DeleteServiceActionOutput, error) { + req, out := c.DeleteServiceActionRequest(input) + return out, req.Send() +} + +// DeleteServiceActionWithContext is the same as DeleteServiceAction with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteServiceAction for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) DeleteServiceActionWithContext(ctx aws.Context, input *DeleteServiceActionInput, opts ...request.Option) (*DeleteServiceActionOutput, error) { + req, out := c.DeleteServiceActionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteTagOption = "DeleteTagOption" + +// DeleteTagOptionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteTagOption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteTagOption for more information on using the DeleteTagOption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteTagOptionRequest method. +// req, resp := client.DeleteTagOptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DeleteTagOption +func (c *ServiceCatalog) DeleteTagOptionRequest(input *DeleteTagOptionInput) (req *request.Request, output *DeleteTagOptionOutput) { + op := &request.Operation{ + Name: opDeleteTagOption, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteTagOptionInput{} + } + + output = &DeleteTagOptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteTagOption API operation for AWS Service Catalog. +// +// Deletes the specified TagOption. +// +// You cannot delete a TagOption if it is associated with a product or portfolio. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1707,8 +2134,8 @@ const opDescribeConstraint = "DescribeConstraint" // DescribeConstraintRequest generates a "aws/request.Request" representing the // client's request for the DescribeConstraint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1786,8 +2213,8 @@ const opDescribeCopyProductStatus = "DescribeCopyProductStatus" // DescribeCopyProductStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeCopyProductStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1865,8 +2292,8 @@ const opDescribePortfolio = "DescribePortfolio" // DescribePortfolioRequest generates a "aws/request.Request" representing the // client's request for the DescribePortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1940,12 +2367,98 @@ func (c *ServiceCatalog) DescribePortfolioWithContext(ctx aws.Context, input *De return out, req.Send() } +const opDescribePortfolioShareStatus = "DescribePortfolioShareStatus" + +// DescribePortfolioShareStatusRequest generates a "aws/request.Request" representing the +// client's request for the DescribePortfolioShareStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribePortfolioShareStatus for more information on using the DescribePortfolioShareStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribePortfolioShareStatusRequest method. +// req, resp := client.DescribePortfolioShareStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DescribePortfolioShareStatus +func (c *ServiceCatalog) DescribePortfolioShareStatusRequest(input *DescribePortfolioShareStatusInput) (req *request.Request, output *DescribePortfolioShareStatusOutput) { + op := &request.Operation{ + Name: opDescribePortfolioShareStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribePortfolioShareStatusInput{} + } + + output = &DescribePortfolioShareStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribePortfolioShareStatus API operation for AWS Service Catalog. +// +// Gets the status of the specified portfolio share operation. This API can +// only be called by the master account in the organization. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation DescribePortfolioShareStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// The operation is not supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DescribePortfolioShareStatus +func (c *ServiceCatalog) DescribePortfolioShareStatus(input *DescribePortfolioShareStatusInput) (*DescribePortfolioShareStatusOutput, error) { + req, out := c.DescribePortfolioShareStatusRequest(input) + return out, req.Send() +} + +// DescribePortfolioShareStatusWithContext is the same as DescribePortfolioShareStatus with the addition of +// the ability to pass a context and additional request options. +// +// See DescribePortfolioShareStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) DescribePortfolioShareStatusWithContext(ctx aws.Context, input *DescribePortfolioShareStatusInput, opts ...request.Option) (*DescribePortfolioShareStatusOutput, error) { + req, out := c.DescribePortfolioShareStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeProduct = "DescribeProduct" // DescribeProductRequest generates a "aws/request.Request" representing the // client's request for the DescribeProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2026,8 +2539,8 @@ const opDescribeProductAsAdmin = "DescribeProductAsAdmin" // DescribeProductAsAdminRequest generates a "aws/request.Request" representing the // client's request for the DescribeProductAsAdmin operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2106,8 +2619,8 @@ const opDescribeProductView = "DescribeProductView" // DescribeProductViewRequest generates a "aws/request.Request" representing the // client's request for the DescribeProductView operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2188,8 +2701,8 @@ const opDescribeProvisionedProduct = "DescribeProvisionedProduct" // DescribeProvisionedProductRequest generates a "aws/request.Request" representing the // client's request for the DescribeProvisionedProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2267,8 +2780,8 @@ const opDescribeProvisionedProductPlan = "DescribeProvisionedProductPlan" // DescribeProvisionedProductPlanRequest generates a "aws/request.Request" representing the // client's request for the DescribeProvisionedProductPlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2349,8 +2862,8 @@ const opDescribeProvisioningArtifact = "DescribeProvisioningArtifact" // DescribeProvisioningArtifactRequest generates a "aws/request.Request" representing the // client's request for the DescribeProvisioningArtifact operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2429,8 +2942,8 @@ const opDescribeProvisioningParameters = "DescribeProvisioningParameters" // DescribeProvisioningParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeProvisioningParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2519,8 +3032,8 @@ const opDescribeRecord = "DescribeRecord" // DescribeRecordRequest generates a "aws/request.Request" representing the // client's request for the DescribeRecord operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -2597,12 +3110,91 @@ func (c *ServiceCatalog) DescribeRecordWithContext(ctx aws.Context, input *Descr return out, req.Send() } +const opDescribeServiceAction = "DescribeServiceAction" + +// DescribeServiceActionRequest generates a "aws/request.Request" representing the +// client's request for the DescribeServiceAction operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeServiceAction for more information on using the DescribeServiceAction +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeServiceActionRequest method. +// req, resp := client.DescribeServiceActionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DescribeServiceAction +func (c *ServiceCatalog) DescribeServiceActionRequest(input *DescribeServiceActionInput) (req *request.Request, output *DescribeServiceActionOutput) { + op := &request.Operation{ + Name: opDescribeServiceAction, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeServiceActionInput{} + } + + output = &DescribeServiceActionOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeServiceAction API operation for AWS Service Catalog. +// +// Describes a self-service action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation DescribeServiceAction for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DescribeServiceAction +func (c *ServiceCatalog) DescribeServiceAction(input *DescribeServiceActionInput) (*DescribeServiceActionOutput, error) { + req, out := c.DescribeServiceActionRequest(input) + return out, req.Send() +} + +// DescribeServiceActionWithContext is the same as DescribeServiceAction with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeServiceAction for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) DescribeServiceActionWithContext(ctx aws.Context, input *DescribeServiceActionInput, opts ...request.Option) (*DescribeServiceActionOutput, error) { + req, out := c.DescribeServiceActionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDescribeTagOption = "DescribeTagOption" // DescribeTagOptionRequest generates a "aws/request.Request" representing the // client's request for the DescribeTagOption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2681,12 +3273,103 @@ func (c *ServiceCatalog) DescribeTagOptionWithContext(ctx aws.Context, input *De return out, req.Send() } +const opDisableAWSOrganizationsAccess = "DisableAWSOrganizationsAccess" + +// DisableAWSOrganizationsAccessRequest generates a "aws/request.Request" representing the +// client's request for the DisableAWSOrganizationsAccess operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DisableAWSOrganizationsAccess for more information on using the DisableAWSOrganizationsAccess +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisableAWSOrganizationsAccessRequest method. +// req, resp := client.DisableAWSOrganizationsAccessRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DisableAWSOrganizationsAccess +func (c *ServiceCatalog) DisableAWSOrganizationsAccessRequest(input *DisableAWSOrganizationsAccessInput) (req *request.Request, output *DisableAWSOrganizationsAccessOutput) { + op := &request.Operation{ + Name: opDisableAWSOrganizationsAccess, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DisableAWSOrganizationsAccessInput{} + } + + output = &DisableAWSOrganizationsAccessOutput{} + req = c.newRequest(op, input, output) + return +} + +// DisableAWSOrganizationsAccess API operation for AWS Service Catalog. +// +// Disable portfolio sharing through AWS Organizations feature. This feature +// will not delete your current shares but it will prevent you from creating +// new shares throughout your organization. Current shares will not be in sync +// with your organization structure if it changes after calling this API. This +// API can only be called by the master account in the organization. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation DisableAWSOrganizationsAccess for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// An attempt was made to modify a resource that is in a state that is not valid. 
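To illustrate the self-service action read path introduced above (DescribeServiceAction), a hedged sketch; the `Id` field name, the example identifier, and the region are assumptions rather than anything confirmed by this hunk:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := servicecatalog.New(sess)

	// Describe a single self-service action. The Id field name and the example
	// identifier are assumptions; the input struct is defined later in this file.
	out, err := svc.DescribeServiceAction(&servicecatalog.DescribeServiceActionInput{
		Id: aws.String("act-examplexample"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```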
+// Check your resources to ensure that they are in valid states before retrying +// the operation. +// +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// The operation is not supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DisableAWSOrganizationsAccess +func (c *ServiceCatalog) DisableAWSOrganizationsAccess(input *DisableAWSOrganizationsAccessInput) (*DisableAWSOrganizationsAccessOutput, error) { + req, out := c.DisableAWSOrganizationsAccessRequest(input) + return out, req.Send() +} + +// DisableAWSOrganizationsAccessWithContext is the same as DisableAWSOrganizationsAccess with the addition of +// the ability to pass a context and additional request options. +// +// See DisableAWSOrganizationsAccess for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) DisableAWSOrganizationsAccessWithContext(ctx aws.Context, input *DisableAWSOrganizationsAccessInput, opts ...request.Option) (*DisableAWSOrganizationsAccessOutput, error) { + req, out := c.DisableAWSOrganizationsAccessRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDisassociatePrincipalFromPortfolio = "DisassociatePrincipalFromPortfolio" // DisassociatePrincipalFromPortfolioRequest generates a "aws/request.Request" representing the // client's request for the DisassociatePrincipalFromPortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2767,8 +3450,8 @@ const opDisassociateProductFromPortfolio = "DisassociateProductFromPortfolio" // DisassociateProductFromPortfolioRequest generates a "aws/request.Request" representing the // client's request for the DisassociateProductFromPortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2849,17 +3532,97 @@ func (c *ServiceCatalog) DisassociateProductFromPortfolioWithContext(ctx aws.Con return out, req.Send() } -const opDisassociateTagOptionFromResource = "DisassociateTagOptionFromResource" +const opDisassociateServiceActionFromProvisioningArtifact = "DisassociateServiceActionFromProvisioningArtifact" -// DisassociateTagOptionFromResourceRequest generates a "aws/request.Request" representing the -// client's request for the DisassociateTagOptionFromResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DisassociateServiceActionFromProvisioningArtifactRequest generates a "aws/request.Request" representing the +// client's request for the DisassociateServiceActionFromProvisioningArtifact operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DisassociateTagOptionFromResource for more information on using the DisassociateTagOptionFromResource +// See DisassociateServiceActionFromProvisioningArtifact for more information on using the DisassociateServiceActionFromProvisioningArtifact +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisassociateServiceActionFromProvisioningArtifactRequest method. +// req, resp := client.DisassociateServiceActionFromProvisioningArtifactRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DisassociateServiceActionFromProvisioningArtifact +func (c *ServiceCatalog) DisassociateServiceActionFromProvisioningArtifactRequest(input *DisassociateServiceActionFromProvisioningArtifactInput) (req *request.Request, output *DisassociateServiceActionFromProvisioningArtifactOutput) { + op := &request.Operation{ + Name: opDisassociateServiceActionFromProvisioningArtifact, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DisassociateServiceActionFromProvisioningArtifactInput{} + } + + output = &DisassociateServiceActionFromProvisioningArtifactOutput{} + req = c.newRequest(op, input, output) + return +} + +// DisassociateServiceActionFromProvisioningArtifact API operation for AWS Service Catalog. +// +// Disassociates the specified self-service action association from the specified +// provisioning artifact. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation DisassociateServiceActionFromProvisioningArtifact for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/DisassociateServiceActionFromProvisioningArtifact +func (c *ServiceCatalog) DisassociateServiceActionFromProvisioningArtifact(input *DisassociateServiceActionFromProvisioningArtifactInput) (*DisassociateServiceActionFromProvisioningArtifactOutput, error) { + req, out := c.DisassociateServiceActionFromProvisioningArtifactRequest(input) + return out, req.Send() +} + +// DisassociateServiceActionFromProvisioningArtifactWithContext is the same as DisassociateServiceActionFromProvisioningArtifact with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateServiceActionFromProvisioningArtifact for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ServiceCatalog) DisassociateServiceActionFromProvisioningArtifactWithContext(ctx aws.Context, input *DisassociateServiceActionFromProvisioningArtifactInput, opts ...request.Option) (*DisassociateServiceActionFromProvisioningArtifactOutput, error) { + req, out := c.DisassociateServiceActionFromProvisioningArtifactRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDisassociateTagOptionFromResource = "DisassociateTagOptionFromResource" + +// DisassociateTagOptionFromResourceRequest generates a "aws/request.Request" representing the +// client's request for the DisassociateTagOptionFromResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DisassociateTagOptionFromResource for more information on using the DisassociateTagOptionFromResource // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration @@ -2933,12 +3696,105 @@ func (c *ServiceCatalog) DisassociateTagOptionFromResourceWithContext(ctx aws.Co return out, req.Send() } +const opEnableAWSOrganizationsAccess = "EnableAWSOrganizationsAccess" + +// EnableAWSOrganizationsAccessRequest generates a "aws/request.Request" representing the +// client's request for the EnableAWSOrganizationsAccess operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See EnableAWSOrganizationsAccess for more information on using the EnableAWSOrganizationsAccess +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the EnableAWSOrganizationsAccessRequest method. +// req, resp := client.EnableAWSOrganizationsAccessRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/EnableAWSOrganizationsAccess +func (c *ServiceCatalog) EnableAWSOrganizationsAccessRequest(input *EnableAWSOrganizationsAccessInput) (req *request.Request, output *EnableAWSOrganizationsAccessOutput) { + op := &request.Operation{ + Name: opEnableAWSOrganizationsAccess, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &EnableAWSOrganizationsAccessInput{} + } + + output = &EnableAWSOrganizationsAccessOutput{} + req = c.newRequest(op, input, output) + return +} + +// EnableAWSOrganizationsAccess API operation for AWS Service Catalog. +// +// Enable portfolio sharing feature through AWS Organizations. This API will +// allow Service Catalog to receive updates on your organization in order to +// sync your shares with the current structure. This API can only be called +// by the master account in the organization. +// +// By calling this API Service Catalog will use FAS credentials to call organizations:EnableAWSServiceAccess +// so that your shares can be in sync with any changes in your AWS Organizations. 
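Given the description above (master-account-only, FAS call to organizations:EnableAWSServiceAccess), a hedged sketch of toggling the feature from Go; the empty input structs and the us-east-1 region are assumptions, while the error code checked is the OperationNotSupportedException documented for these operations in this diff:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	// Region is an assumption; credentials must belong to the organization's master account.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := servicecatalog.New(sess)

	// Turn on Organizations-based portfolio sharing. The empty input is an
	// assumption made for this sketch.
	if _, err := svc.EnableAWSOrganizationsAccess(&servicecatalog.EnableAWSOrganizationsAccessInput{}); err != nil {
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == servicecatalog.ErrCodeOperationNotSupportedException {
			log.Fatalf("organizations access is not supported for this account: %s", aerr.Message())
		}
		log.Fatal(err)
	}
	fmt.Println("AWS Organizations access enabled")

	// DisableAWSOrganizationsAccess is the inverse call; per the docs above it
	// keeps existing shares but stops syncing them with the organization:
	// _, err := svc.DisableAWSOrganizationsAccess(&servicecatalog.DisableAWSOrganizationsAccessInput{})
}
```

The companion GetAWSOrganizationsAccessStatus operation added further down in this diff reports whether the feature is currently enabled.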
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation EnableAWSOrganizationsAccess for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// An attempt was made to modify a resource that is in a state that is not valid. +// Check your resources to ensure that they are in valid states before retrying +// the operation. +// +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// The operation is not supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/EnableAWSOrganizationsAccess +func (c *ServiceCatalog) EnableAWSOrganizationsAccess(input *EnableAWSOrganizationsAccessInput) (*EnableAWSOrganizationsAccessOutput, error) { + req, out := c.EnableAWSOrganizationsAccessRequest(input) + return out, req.Send() +} + +// EnableAWSOrganizationsAccessWithContext is the same as EnableAWSOrganizationsAccess with the addition of +// the ability to pass a context and additional request options. +// +// See EnableAWSOrganizationsAccess for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) EnableAWSOrganizationsAccessWithContext(ctx aws.Context, input *EnableAWSOrganizationsAccessInput, opts ...request.Option) (*EnableAWSOrganizationsAccessOutput, error) { + req, out := c.EnableAWSOrganizationsAccessRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opExecuteProvisionedProductPlan = "ExecuteProvisionedProductPlan" // ExecuteProvisionedProductPlanRequest generates a "aws/request.Request" representing the // client's request for the ExecuteProvisionedProductPlan operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3021,12 +3877,182 @@ func (c *ServiceCatalog) ExecuteProvisionedProductPlanWithContext(ctx aws.Contex return out, req.Send() } +const opExecuteProvisionedProductServiceAction = "ExecuteProvisionedProductServiceAction" + +// ExecuteProvisionedProductServiceActionRequest generates a "aws/request.Request" representing the +// client's request for the ExecuteProvisionedProductServiceAction operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ExecuteProvisionedProductServiceAction for more information on using the ExecuteProvisionedProductServiceAction +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ExecuteProvisionedProductServiceActionRequest method. +// req, resp := client.ExecuteProvisionedProductServiceActionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ExecuteProvisionedProductServiceAction +func (c *ServiceCatalog) ExecuteProvisionedProductServiceActionRequest(input *ExecuteProvisionedProductServiceActionInput) (req *request.Request, output *ExecuteProvisionedProductServiceActionOutput) { + op := &request.Operation{ + Name: opExecuteProvisionedProductServiceAction, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ExecuteProvisionedProductServiceActionInput{} + } + + output = &ExecuteProvisionedProductServiceActionOutput{} + req = c.newRequest(op, input, output) + return +} + +// ExecuteProvisionedProductServiceAction API operation for AWS Service Catalog. +// +// Executes a self-service action against a provisioned product. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation ExecuteProvisionedProductServiceAction for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidStateException "InvalidStateException" +// An attempt was made to modify a resource that is in a state that is not valid. +// Check your resources to ensure that they are in valid states before retrying +// the operation. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ExecuteProvisionedProductServiceAction +func (c *ServiceCatalog) ExecuteProvisionedProductServiceAction(input *ExecuteProvisionedProductServiceActionInput) (*ExecuteProvisionedProductServiceActionOutput, error) { + req, out := c.ExecuteProvisionedProductServiceActionRequest(input) + return out, req.Send() +} + +// ExecuteProvisionedProductServiceActionWithContext is the same as ExecuteProvisionedProductServiceAction with the addition of +// the ability to pass a context and additional request options. +// +// See ExecuteProvisionedProductServiceAction for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ExecuteProvisionedProductServiceActionWithContext(ctx aws.Context, input *ExecuteProvisionedProductServiceActionInput, opts ...request.Option) (*ExecuteProvisionedProductServiceActionOutput, error) { + req, out := c.ExecuteProvisionedProductServiceActionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opGetAWSOrganizationsAccessStatus = "GetAWSOrganizationsAccessStatus" + +// GetAWSOrganizationsAccessStatusRequest generates a "aws/request.Request" representing the +// client's request for the GetAWSOrganizationsAccessStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetAWSOrganizationsAccessStatus for more information on using the GetAWSOrganizationsAccessStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetAWSOrganizationsAccessStatusRequest method. +// req, resp := client.GetAWSOrganizationsAccessStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/GetAWSOrganizationsAccessStatus +func (c *ServiceCatalog) GetAWSOrganizationsAccessStatusRequest(input *GetAWSOrganizationsAccessStatusInput) (req *request.Request, output *GetAWSOrganizationsAccessStatusOutput) { + op := &request.Operation{ + Name: opGetAWSOrganizationsAccessStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetAWSOrganizationsAccessStatusInput{} + } + + output = &GetAWSOrganizationsAccessStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetAWSOrganizationsAccessStatus API operation for AWS Service Catalog. +// +// Get the Access Status for AWS Organization portfolio share feature. This +// API can only be called by the master account in the organization. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation GetAWSOrganizationsAccessStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// The operation is not supported. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/GetAWSOrganizationsAccessStatus +func (c *ServiceCatalog) GetAWSOrganizationsAccessStatus(input *GetAWSOrganizationsAccessStatusInput) (*GetAWSOrganizationsAccessStatusOutput, error) { + req, out := c.GetAWSOrganizationsAccessStatusRequest(input) + return out, req.Send() +} + +// GetAWSOrganizationsAccessStatusWithContext is the same as GetAWSOrganizationsAccessStatus with the addition of +// the ability to pass a context and additional request options. +// +// See GetAWSOrganizationsAccessStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
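A sketch of invoking ExecuteProvisionedProductServiceAction, which the hunk above documents as running a self-service action against a provisioned product; the `ProvisionedProductId` and `ServiceActionId` field names, the example IDs, and the region are assumptions for illustration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := servicecatalog.New(sess)

	// Kick off a self-service action run against an existing provisioned product.
	// Both field names and the example IDs are assumptions for this sketch.
	out, err := svc.ExecuteProvisionedProductServiceAction(&servicecatalog.ExecuteProvisionedProductServiceActionInput{
		ProvisionedProductId: aws.String("pp-examplexample"),
		ServiceActionId:      aws.String("act-examplexample"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // prints the operation's response
}
```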
+func (c *ServiceCatalog) GetAWSOrganizationsAccessStatusWithContext(ctx aws.Context, input *GetAWSOrganizationsAccessStatusInput, opts ...request.Option) (*GetAWSOrganizationsAccessStatusOutput, error) { + req, out := c.GetAWSOrganizationsAccessStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListAcceptedPortfolioShares = "ListAcceptedPortfolioShares" // ListAcceptedPortfolioSharesRequest generates a "aws/request.Request" representing the // client's request for the ListAcceptedPortfolioShares operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3084,6 +4110,9 @@ func (c *ServiceCatalog) ListAcceptedPortfolioSharesRequest(input *ListAcceptedP // * ErrCodeInvalidParametersException "InvalidParametersException" // One or more parameters provided to the operation are not valid. // +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// The operation is not supported. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListAcceptedPortfolioShares func (c *ServiceCatalog) ListAcceptedPortfolioShares(input *ListAcceptedPortfolioSharesInput) (*ListAcceptedPortfolioSharesOutput, error) { req, out := c.ListAcceptedPortfolioSharesRequest(input) @@ -3160,8 +4189,8 @@ const opListConstraintsForPortfolio = "ListConstraintsForPortfolio" // ListConstraintsForPortfolioRequest generates a "aws/request.Request" representing the // client's request for the ListConstraintsForPortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3298,8 +4327,8 @@ const opListLaunchPaths = "ListLaunchPaths" // ListLaunchPathsRequest generates a "aws/request.Request" representing the // client's request for the ListLaunchPaths operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3434,39 +4463,181 @@ func (c *ServiceCatalog) ListLaunchPathsPagesWithContext(ctx aws.Context, input return p.Err() } -const opListPortfolioAccess = "ListPortfolioAccess" +const opListOrganizationPortfolioAccess = "ListOrganizationPortfolioAccess" -// ListPortfolioAccessRequest generates a "aws/request.Request" representing the -// client's request for the ListPortfolioAccess operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListOrganizationPortfolioAccessRequest generates a "aws/request.Request" representing the +// client's request for the ListOrganizationPortfolioAccess operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListPortfolioAccess for more information on using the ListPortfolioAccess +// See ListOrganizationPortfolioAccess for more information on using the ListOrganizationPortfolioAccess // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListPortfolioAccessRequest method. -// req, resp := client.ListPortfolioAccessRequest(params) +// // Example sending a request using the ListOrganizationPortfolioAccessRequest method. +// req, resp := client.ListOrganizationPortfolioAccessRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListPortfolioAccess -func (c *ServiceCatalog) ListPortfolioAccessRequest(input *ListPortfolioAccessInput) (req *request.Request, output *ListPortfolioAccessOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListOrganizationPortfolioAccess +func (c *ServiceCatalog) ListOrganizationPortfolioAccessRequest(input *ListOrganizationPortfolioAccessInput) (req *request.Request, output *ListOrganizationPortfolioAccessOutput) { op := &request.Operation{ - Name: opListPortfolioAccess, + Name: opListOrganizationPortfolioAccess, HTTPMethod: "POST", HTTPPath: "/", - } - + Paginator: &request.Paginator{ + InputTokens: []string{"PageToken"}, + OutputTokens: []string{"NextPageToken"}, + LimitToken: "PageSize", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListOrganizationPortfolioAccessInput{} + } + + output = &ListOrganizationPortfolioAccessOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListOrganizationPortfolioAccess API operation for AWS Service Catalog. +// +// Lists the organization nodes that have access to the specified portfolio. +// This API can only be called by the master account in the organization. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation ListOrganizationPortfolioAccess for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// The operation is not supported. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListOrganizationPortfolioAccess +func (c *ServiceCatalog) ListOrganizationPortfolioAccess(input *ListOrganizationPortfolioAccessInput) (*ListOrganizationPortfolioAccessOutput, error) { + req, out := c.ListOrganizationPortfolioAccessRequest(input) + return out, req.Send() +} + +// ListOrganizationPortfolioAccessWithContext is the same as ListOrganizationPortfolioAccess with the addition of +// the ability to pass a context and additional request options. +// +// See ListOrganizationPortfolioAccess for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ListOrganizationPortfolioAccessWithContext(ctx aws.Context, input *ListOrganizationPortfolioAccessInput, opts ...request.Option) (*ListOrganizationPortfolioAccessOutput, error) { + req, out := c.ListOrganizationPortfolioAccessRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListOrganizationPortfolioAccessPages iterates over the pages of a ListOrganizationPortfolioAccess operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListOrganizationPortfolioAccess method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListOrganizationPortfolioAccess operation. +// pageNum := 0 +// err := client.ListOrganizationPortfolioAccessPages(params, +// func(page *ListOrganizationPortfolioAccessOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ServiceCatalog) ListOrganizationPortfolioAccessPages(input *ListOrganizationPortfolioAccessInput, fn func(*ListOrganizationPortfolioAccessOutput, bool) bool) error { + return c.ListOrganizationPortfolioAccessPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListOrganizationPortfolioAccessPagesWithContext same as ListOrganizationPortfolioAccessPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ListOrganizationPortfolioAccessPagesWithContext(ctx aws.Context, input *ListOrganizationPortfolioAccessInput, fn func(*ListOrganizationPortfolioAccessOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListOrganizationPortfolioAccessInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListOrganizationPortfolioAccessRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListOrganizationPortfolioAccessOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListPortfolioAccess = "ListPortfolioAccess" + +// ListPortfolioAccessRequest generates a "aws/request.Request" representing the +// client's request for the ListPortfolioAccess operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListPortfolioAccess for more information on using the ListPortfolioAccess +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListPortfolioAccessRequest method. +// req, resp := client.ListPortfolioAccessRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListPortfolioAccess +func (c *ServiceCatalog) ListPortfolioAccessRequest(input *ListPortfolioAccessInput) (req *request.Request, output *ListPortfolioAccessOutput) { + op := &request.Operation{ + Name: opListPortfolioAccess, + HTTPMethod: "POST", + HTTPPath: "/", + } + if input == nil { input = &ListPortfolioAccessInput{} } @@ -3517,8 +4688,8 @@ const opListPortfolios = "ListPortfolios" // ListPortfoliosRequest generates a "aws/request.Request" representing the // client's request for the ListPortfolios operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3652,8 +4823,8 @@ const opListPortfoliosForProduct = "ListPortfoliosForProduct" // ListPortfoliosForProductRequest generates a "aws/request.Request" representing the // client's request for the ListPortfoliosForProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3790,8 +4961,8 @@ const opListPrincipalsForPortfolio = "ListPrincipalsForPortfolio" // ListPrincipalsForPortfolioRequest generates a "aws/request.Request" representing the // client's request for the ListPrincipalsForPortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
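Because ListOrganizationPortfolioAccess is wired with a PageToken/NextPageToken paginator above, the generated Pages helper can be used instead of manual token handling. A hedged sketch; the `PortfolioId` and `OrganizationNodeType` field names and their values are assumptions, and the same pattern applies to the other paginated helpers added further down (ListProvisioningArtifactsForServiceAction, ListServiceActions):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := servicecatalog.New(sess)

	// Field names and values here are assumptions for the sketch; the input
	// struct is defined later in this generated file.
	input := &servicecatalog.ListOrganizationPortfolioAccessInput{
		PortfolioId:          aws.String("port-examplexample"),
		OrganizationNodeType: aws.String("ORGANIZATION"),
	}

	// Returning false from the callback stops iteration early; here we keep
	// going until the last page has been delivered.
	err := svc.ListOrganizationPortfolioAccessPages(input,
		func(page *servicecatalog.ListOrganizationPortfolioAccessOutput, lastPage bool) bool {
			fmt.Println(page)
			return !lastPage
		})
	if err != nil {
		log.Fatal(err)
	}
}
```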
@@ -3928,8 +5099,8 @@ const opListProvisionedProductPlans = "ListProvisionedProductPlans" // ListProvisionedProductPlansRequest generates a "aws/request.Request" representing the // client's request for the ListProvisionedProductPlans operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4011,8 +5182,8 @@ const opListProvisioningArtifacts = "ListProvisioningArtifacts" // ListProvisioningArtifactsRequest generates a "aws/request.Request" representing the // client's request for the ListProvisioningArtifacts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4090,12 +5261,151 @@ func (c *ServiceCatalog) ListProvisioningArtifactsWithContext(ctx aws.Context, i return out, req.Send() } +const opListProvisioningArtifactsForServiceAction = "ListProvisioningArtifactsForServiceAction" + +// ListProvisioningArtifactsForServiceActionRequest generates a "aws/request.Request" representing the +// client's request for the ListProvisioningArtifactsForServiceAction operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListProvisioningArtifactsForServiceAction for more information on using the ListProvisioningArtifactsForServiceAction +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListProvisioningArtifactsForServiceActionRequest method. +// req, resp := client.ListProvisioningArtifactsForServiceActionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListProvisioningArtifactsForServiceAction +func (c *ServiceCatalog) ListProvisioningArtifactsForServiceActionRequest(input *ListProvisioningArtifactsForServiceActionInput) (req *request.Request, output *ListProvisioningArtifactsForServiceActionOutput) { + op := &request.Operation{ + Name: opListProvisioningArtifactsForServiceAction, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"PageToken"}, + OutputTokens: []string{"NextPageToken"}, + LimitToken: "PageSize", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListProvisioningArtifactsForServiceActionInput{} + } + + output = &ListProvisioningArtifactsForServiceActionOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListProvisioningArtifactsForServiceAction API operation for AWS Service Catalog. 
+// +// Lists all provisioning artifacts (also known as versions) for the specified +// self-service action. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation ListProvisioningArtifactsForServiceAction for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListProvisioningArtifactsForServiceAction +func (c *ServiceCatalog) ListProvisioningArtifactsForServiceAction(input *ListProvisioningArtifactsForServiceActionInput) (*ListProvisioningArtifactsForServiceActionOutput, error) { + req, out := c.ListProvisioningArtifactsForServiceActionRequest(input) + return out, req.Send() +} + +// ListProvisioningArtifactsForServiceActionWithContext is the same as ListProvisioningArtifactsForServiceAction with the addition of +// the ability to pass a context and additional request options. +// +// See ListProvisioningArtifactsForServiceAction for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ListProvisioningArtifactsForServiceActionWithContext(ctx aws.Context, input *ListProvisioningArtifactsForServiceActionInput, opts ...request.Option) (*ListProvisioningArtifactsForServiceActionOutput, error) { + req, out := c.ListProvisioningArtifactsForServiceActionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListProvisioningArtifactsForServiceActionPages iterates over the pages of a ListProvisioningArtifactsForServiceAction operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListProvisioningArtifactsForServiceAction method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListProvisioningArtifactsForServiceAction operation. +// pageNum := 0 +// err := client.ListProvisioningArtifactsForServiceActionPages(params, +// func(page *ListProvisioningArtifactsForServiceActionOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ServiceCatalog) ListProvisioningArtifactsForServiceActionPages(input *ListProvisioningArtifactsForServiceActionInput, fn func(*ListProvisioningArtifactsForServiceActionOutput, bool) bool) error { + return c.ListProvisioningArtifactsForServiceActionPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListProvisioningArtifactsForServiceActionPagesWithContext same as ListProvisioningArtifactsForServiceActionPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ListProvisioningArtifactsForServiceActionPagesWithContext(ctx aws.Context, input *ListProvisioningArtifactsForServiceActionInput, fn func(*ListProvisioningArtifactsForServiceActionOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListProvisioningArtifactsForServiceActionInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListProvisioningArtifactsForServiceActionRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListProvisioningArtifactsForServiceActionOutput), !p.HasNextPage()) + } + return p.Err() +} + const opListRecordHistory = "ListRecordHistory" // ListRecordHistoryRequest generates a "aws/request.Request" representing the // client's request for the ListRecordHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4173,8 +5483,8 @@ const opListResourcesForTagOption = "ListResourcesForTagOption" // ListResourcesForTagOptionRequest generates a "aws/request.Request" representing the // client's request for the ListResourcesForTagOption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4312,125 +5622,399 @@ func (c *ServiceCatalog) ListResourcesForTagOptionPagesWithContext(ctx aws.Conte return p.Err() } -const opListTagOptions = "ListTagOptions" +const opListServiceActions = "ListServiceActions" -// ListTagOptionsRequest generates a "aws/request.Request" representing the -// client's request for the ListTagOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// ListServiceActionsRequest generates a "aws/request.Request" representing the +// client's request for the ListServiceActions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ListTagOptions for more information on using the ListTagOptions +// See ListServiceActions for more information on using the ListServiceActions // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ListTagOptionsRequest method. -// req, resp := client.ListTagOptionsRequest(params) +// // Example sending a request using the ListServiceActionsRequest method. 
+// req, resp := client.ListServiceActionsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListTagOptions -func (c *ServiceCatalog) ListTagOptionsRequest(input *ListTagOptionsInput) (req *request.Request, output *ListTagOptionsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListServiceActions +func (c *ServiceCatalog) ListServiceActionsRequest(input *ListServiceActionsInput) (req *request.Request, output *ListServiceActionsOutput) { op := &request.Operation{ - Name: opListTagOptions, + Name: opListServiceActions, HTTPMethod: "POST", HTTPPath: "/", Paginator: &request.Paginator{ InputTokens: []string{"PageToken"}, - OutputTokens: []string{"PageToken"}, + OutputTokens: []string{"NextPageToken"}, LimitToken: "PageSize", TruncationToken: "", }, } if input == nil { - input = &ListTagOptionsInput{} + input = &ListServiceActionsInput{} } - output = &ListTagOptionsOutput{} + output = &ListServiceActionsOutput{} req = c.newRequest(op, input, output) return } -// ListTagOptions API operation for AWS Service Catalog. +// ListServiceActions API operation for AWS Service Catalog. // -// Lists the specified TagOptions or all TagOptions. +// Lists all self-service actions. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Service Catalog's -// API operation ListTagOptions for usage and error information. +// API operation ListServiceActions for usage and error information. // // Returned Error Codes: -// * ErrCodeTagOptionNotMigratedException "TagOptionNotMigratedException" -// An operation requiring TagOptions failed because the TagOptions migration -// process has not been performed for this account. Please use the AWS console -// to perform the migration process before retrying the operation. -// // * ErrCodeInvalidParametersException "InvalidParametersException" // One or more parameters provided to the operation are not valid. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListTagOptions -func (c *ServiceCatalog) ListTagOptions(input *ListTagOptionsInput) (*ListTagOptionsOutput, error) { - req, out := c.ListTagOptionsRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListServiceActions +func (c *ServiceCatalog) ListServiceActions(input *ListServiceActionsInput) (*ListServiceActionsOutput, error) { + req, out := c.ListServiceActionsRequest(input) return out, req.Send() } -// ListTagOptionsWithContext is the same as ListTagOptions with the addition of +// ListServiceActionsWithContext is the same as ListServiceActions with the addition of // the ability to pass a context and additional request options. // -// See ListTagOptions for details on how to use this API operation. +// See ListServiceActions for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
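The paginator declared for ListServiceActions above also produces Pages helpers (defined just below). A minimal sketch, assuming the input needs no filters beyond the paging parameters and that us-east-1 is an acceptable region:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := servicecatalog.New(sess)

	// Walk every page of self-service actions; the empty input is an assumption
	// based on the paginator only declaring PageToken/PageSize parameters.
	err := svc.ListServiceActionsPages(&servicecatalog.ListServiceActionsInput{},
		func(page *servicecatalog.ListServiceActionsOutput, lastPage bool) bool {
			fmt.Println(page)
			return true // continue until the SDK reports no NextPageToken
		})
	if err != nil {
		log.Fatal(err)
	}
}
```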
-func (c *ServiceCatalog) ListTagOptionsWithContext(ctx aws.Context, input *ListTagOptionsInput, opts ...request.Option) (*ListTagOptionsOutput, error) { - req, out := c.ListTagOptionsRequest(input) +func (c *ServiceCatalog) ListServiceActionsWithContext(ctx aws.Context, input *ListServiceActionsInput, opts ...request.Option) (*ListServiceActionsOutput, error) { + req, out := c.ListServiceActionsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// ListTagOptionsPages iterates over the pages of a ListTagOptions operation, +// ListServiceActionsPages iterates over the pages of a ListServiceActions operation, // calling the "fn" function with the response data for each page. To stop // iterating, return false from the fn function. // -// See ListTagOptions method for more information on how to use this operation. +// See ListServiceActions method for more information on how to use this operation. // // Note: This operation can generate multiple requests to a service. // -// // Example iterating over at most 3 pages of a ListTagOptions operation. +// // Example iterating over at most 3 pages of a ListServiceActions operation. // pageNum := 0 -// err := client.ListTagOptionsPages(params, -// func(page *ListTagOptionsOutput, lastPage bool) bool { +// err := client.ListServiceActionsPages(params, +// func(page *ListServiceActionsOutput, lastPage bool) bool { // pageNum++ // fmt.Println(page) // return pageNum <= 3 // }) // -func (c *ServiceCatalog) ListTagOptionsPages(input *ListTagOptionsInput, fn func(*ListTagOptionsOutput, bool) bool) error { - return c.ListTagOptionsPagesWithContext(aws.BackgroundContext(), input, fn) +func (c *ServiceCatalog) ListServiceActionsPages(input *ListServiceActionsInput, fn func(*ListServiceActionsOutput, bool) bool) error { + return c.ListServiceActionsPagesWithContext(aws.BackgroundContext(), input, fn) } -// ListTagOptionsPagesWithContext same as ListTagOptionsPages except +// ListServiceActionsPagesWithContext same as ListServiceActionsPages except // it takes a Context and allows setting request options on the pages. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *ServiceCatalog) ListTagOptionsPagesWithContext(ctx aws.Context, input *ListTagOptionsInput, fn func(*ListTagOptionsOutput, bool) bool, opts ...request.Option) error { +func (c *ServiceCatalog) ListServiceActionsPagesWithContext(ctx aws.Context, input *ListServiceActionsInput, fn func(*ListServiceActionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListServiceActionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListServiceActionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListServiceActionsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListServiceActionsForProvisioningArtifact = "ListServiceActionsForProvisioningArtifact" + +// ListServiceActionsForProvisioningArtifactRequest generates a "aws/request.Request" representing the +// client's request for the ListServiceActionsForProvisioningArtifact operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListServiceActionsForProvisioningArtifact for more information on using the ListServiceActionsForProvisioningArtifact +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListServiceActionsForProvisioningArtifactRequest method. +// req, resp := client.ListServiceActionsForProvisioningArtifactRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListServiceActionsForProvisioningArtifact +func (c *ServiceCatalog) ListServiceActionsForProvisioningArtifactRequest(input *ListServiceActionsForProvisioningArtifactInput) (req *request.Request, output *ListServiceActionsForProvisioningArtifactOutput) { + op := &request.Operation{ + Name: opListServiceActionsForProvisioningArtifact, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"PageToken"}, + OutputTokens: []string{"NextPageToken"}, + LimitToken: "PageSize", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListServiceActionsForProvisioningArtifactInput{} + } + + output = &ListServiceActionsForProvisioningArtifactOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListServiceActionsForProvisioningArtifact API operation for AWS Service Catalog. +// +// Returns a paginated list of self-service actions associated with the specified +// Product ID and Provisioning Artifact ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation ListServiceActionsForProvisioningArtifact for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListServiceActionsForProvisioningArtifact +func (c *ServiceCatalog) ListServiceActionsForProvisioningArtifact(input *ListServiceActionsForProvisioningArtifactInput) (*ListServiceActionsForProvisioningArtifactOutput, error) { + req, out := c.ListServiceActionsForProvisioningArtifactRequest(input) + return out, req.Send() +} + +// ListServiceActionsForProvisioningArtifactWithContext is the same as ListServiceActionsForProvisioningArtifact with the addition of +// the ability to pass a context and additional request options. +// +// See ListServiceActionsForProvisioningArtifact for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ListServiceActionsForProvisioningArtifactWithContext(ctx aws.Context, input *ListServiceActionsForProvisioningArtifactInput, opts ...request.Option) (*ListServiceActionsForProvisioningArtifactOutput, error) { + req, out := c.ListServiceActionsForProvisioningArtifactRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListServiceActionsForProvisioningArtifactPages iterates over the pages of a ListServiceActionsForProvisioningArtifact operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListServiceActionsForProvisioningArtifact method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListServiceActionsForProvisioningArtifact operation. +// pageNum := 0 +// err := client.ListServiceActionsForProvisioningArtifactPages(params, +// func(page *ListServiceActionsForProvisioningArtifactOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ServiceCatalog) ListServiceActionsForProvisioningArtifactPages(input *ListServiceActionsForProvisioningArtifactInput, fn func(*ListServiceActionsForProvisioningArtifactOutput, bool) bool) error { + return c.ListServiceActionsForProvisioningArtifactPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListServiceActionsForProvisioningArtifactPagesWithContext same as ListServiceActionsForProvisioningArtifactPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ListServiceActionsForProvisioningArtifactPagesWithContext(ctx aws.Context, input *ListServiceActionsForProvisioningArtifactInput, fn func(*ListServiceActionsForProvisioningArtifactOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListServiceActionsForProvisioningArtifactInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListServiceActionsForProvisioningArtifactRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListServiceActionsForProvisioningArtifactOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListTagOptions = "ListTagOptions" + +// ListTagOptionsRequest generates a "aws/request.Request" representing the +// client's request for the ListTagOptions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTagOptions for more information on using the ListTagOptions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the ListTagOptionsRequest method. +// req, resp := client.ListTagOptionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListTagOptions +func (c *ServiceCatalog) ListTagOptionsRequest(input *ListTagOptionsInput) (req *request.Request, output *ListTagOptionsOutput) { + op := &request.Operation{ + Name: opListTagOptions, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"PageToken"}, + OutputTokens: []string{"PageToken"}, + LimitToken: "PageSize", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListTagOptionsInput{} + } + + output = &ListTagOptionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTagOptions API operation for AWS Service Catalog. +// +// Lists the specified TagOptions or all TagOptions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation ListTagOptions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTagOptionNotMigratedException "TagOptionNotMigratedException" +// An operation requiring TagOptions failed because the TagOptions migration +// process has not been performed for this account. Please use the AWS console +// to perform the migration process before retrying the operation. +// +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/ListTagOptions +func (c *ServiceCatalog) ListTagOptions(input *ListTagOptionsInput) (*ListTagOptionsOutput, error) { + req, out := c.ListTagOptionsRequest(input) + return out, req.Send() +} + +// ListTagOptionsWithContext is the same as ListTagOptions with the addition of +// the ability to pass a context and additional request options. +// +// See ListTagOptions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ListTagOptionsWithContext(ctx aws.Context, input *ListTagOptionsInput, opts ...request.Option) (*ListTagOptionsOutput, error) { + req, out := c.ListTagOptionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListTagOptionsPages iterates over the pages of a ListTagOptions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListTagOptions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListTagOptions operation. 
+// pageNum := 0 +// err := client.ListTagOptionsPages(params, +// func(page *ListTagOptionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *ServiceCatalog) ListTagOptionsPages(input *ListTagOptionsInput, fn func(*ListTagOptionsOutput, bool) bool) error { + return c.ListTagOptionsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListTagOptionsPagesWithContext same as ListTagOptionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) ListTagOptionsPagesWithContext(ctx aws.Context, input *ListTagOptionsInput, fn func(*ListTagOptionsOutput, bool) bool, opts ...request.Option) error { p := request.Pagination{ NewRequest: func() (*request.Request, error) { var inCpy *ListTagOptionsInput @@ -4456,8 +6040,8 @@ const opProvisionProduct = "ProvisionProduct" // ProvisionProductRequest generates a "aws/request.Request" representing the // client's request for the ProvisionProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4551,8 +6135,8 @@ const opRejectPortfolioShare = "RejectPortfolioShare" // RejectPortfolioShareRequest generates a "aws/request.Request" representing the // client's request for the RejectPortfolioShare operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4630,8 +6214,8 @@ const opScanProvisionedProducts = "ScanProvisionedProducts" // ScanProvisionedProductsRequest generates a "aws/request.Request" representing the // client's request for the ScanProvisionedProducts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4711,8 +6295,8 @@ const opSearchProducts = "SearchProducts" // SearchProductsRequest generates a "aws/request.Request" representing the // client's request for the SearchProducts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
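The pagination helpers above (each `Pages`/`PagesWithContext` pair built on `request.Pagination`) are driven by a per-page callback: returning `false` stops iteration early, otherwise paging continues until the service no longer returns a `NextPageToken`. A minimal usage sketch, assuming the usual session setup and the `PageSize`, `ServiceActionSummaries`, and `Name` fields of the list types (those struct definitions are not shown in this hunk):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	svc := servicecatalog.New(session.Must(session.NewSession()))

	input := &servicecatalog.ListServiceActionsInput{
		PageSize: aws.Int64(20), // PageSize is the paginator's LimitToken (assumed field)
	}

	// The callback is invoked once per page of results.
	err := svc.ListServiceActionsPagesWithContext(aws.BackgroundContext(), input,
		func(page *servicecatalog.ListServiceActionsOutput, lastPage bool) bool {
			for _, summary := range page.ServiceActionSummaries { // assumed field name
				fmt.Println(aws.StringValue(summary.Name))
			}
			return true // keep requesting pages until NextPageToken is exhausted
		})
	if err != nil {
		fmt.Println("listing service actions failed:", err)
	}
}
```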
@@ -4846,8 +6430,8 @@ const opSearchProductsAsAdmin = "SearchProductsAsAdmin" // SearchProductsAsAdminRequest generates a "aws/request.Request" representing the // client's request for the SearchProductsAsAdmin operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4984,8 +6568,8 @@ const opSearchProvisionedProducts = "SearchProvisionedProducts" // SearchProvisionedProductsRequest generates a "aws/request.Request" representing the // client's request for the SearchProvisionedProducts operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5119,8 +6703,8 @@ const opTerminateProvisionedProduct = "TerminateProvisionedProduct" // TerminateProvisionedProductRequest generates a "aws/request.Request" representing the // client's request for the TerminateProvisionedProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5203,8 +6787,8 @@ const opUpdateConstraint = "UpdateConstraint" // UpdateConstraintRequest generates a "aws/request.Request" representing the // client's request for the UpdateConstraint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5285,8 +6869,8 @@ const opUpdatePortfolio = "UpdatePortfolio" // UpdatePortfolioRequest generates a "aws/request.Request" representing the // client's request for the UpdatePortfolio operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5379,8 +6963,8 @@ const opUpdateProduct = "UpdateProduct" // UpdateProductRequest generates a "aws/request.Request" representing the // client's request for the UpdateProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
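The generated comments in these hunks describe the two-step `Request`/`Send` pattern as the hook for injecting custom headers or retry logic before the call is made. A minimal sketch of that pattern, assuming the usual session setup; the header name is purely illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	svc := servicecatalog.New(session.Must(session.NewSession()))

	// Build the request first so it can be customized before it is sent.
	req, resp := svc.SearchProductsAsAdminRequest(&servicecatalog.SearchProductsAsAdminInput{})
	req.HTTPRequest.Header.Set("X-Example-Debug", "true") // illustrative custom header

	// resp is not valid until Send returns without error.
	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```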
// the "output" return value is not valid until after Send returns without error. @@ -5466,8 +7050,8 @@ const opUpdateProvisionedProduct = "UpdateProvisionedProduct" // UpdateProvisionedProductRequest generates a "aws/request.Request" representing the // client's request for the UpdateProvisionedProduct operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5555,8 +7139,8 @@ const opUpdateProvisioningArtifact = "UpdateProvisioningArtifact" // UpdateProvisioningArtifactRequest generates a "aws/request.Request" representing the // client's request for the UpdateProvisioningArtifact operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5637,64 +7221,146 @@ func (c *ServiceCatalog) UpdateProvisioningArtifactWithContext(ctx aws.Context, return out, req.Send() } -const opUpdateTagOption = "UpdateTagOption" +const opUpdateServiceAction = "UpdateServiceAction" -// UpdateTagOptionRequest generates a "aws/request.Request" representing the -// client's request for the UpdateTagOption operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// UpdateServiceActionRequest generates a "aws/request.Request" representing the +// client's request for the UpdateServiceAction operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See UpdateTagOption for more information on using the UpdateTagOption +// See UpdateServiceAction for more information on using the UpdateServiceAction // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the UpdateTagOptionRequest method. -// req, resp := client.UpdateTagOptionRequest(params) +// // Example sending a request using the UpdateServiceActionRequest method. 
+// req, resp := client.UpdateServiceActionRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/UpdateTagOption -func (c *ServiceCatalog) UpdateTagOptionRequest(input *UpdateTagOptionInput) (req *request.Request, output *UpdateTagOptionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/UpdateServiceAction +func (c *ServiceCatalog) UpdateServiceActionRequest(input *UpdateServiceActionInput) (req *request.Request, output *UpdateServiceActionOutput) { op := &request.Operation{ - Name: opUpdateTagOption, + Name: opUpdateServiceAction, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &UpdateTagOptionInput{} + input = &UpdateServiceActionInput{} } - output = &UpdateTagOptionOutput{} + output = &UpdateServiceActionOutput{} req = c.newRequest(op, input, output) return } -// UpdateTagOption API operation for AWS Service Catalog. +// UpdateServiceAction API operation for AWS Service Catalog. // -// Updates the specified TagOption. +// Updates a self-service action. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for AWS Service Catalog's -// API operation UpdateTagOption for usage and error information. +// API operation UpdateServiceAction for usage and error information. // // Returned Error Codes: -// * ErrCodeTagOptionNotMigratedException "TagOptionNotMigratedException" -// An operation requiring TagOptions failed because the TagOptions migration -// process has not been performed for this account. Please use the AWS console -// to perform the migration process before retrying the operation. +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource was not found. +// +// * ErrCodeInvalidParametersException "InvalidParametersException" +// One or more parameters provided to the operation are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/UpdateServiceAction +func (c *ServiceCatalog) UpdateServiceAction(input *UpdateServiceActionInput) (*UpdateServiceActionOutput, error) { + req, out := c.UpdateServiceActionRequest(input) + return out, req.Send() +} + +// UpdateServiceActionWithContext is the same as UpdateServiceAction with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateServiceAction for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ServiceCatalog) UpdateServiceActionWithContext(ctx aws.Context, input *UpdateServiceActionInput, opts ...request.Option) (*UpdateServiceActionOutput, error) { + req, out := c.UpdateServiceActionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateTagOption = "UpdateTagOption" + +// UpdateTagOptionRequest generates a "aws/request.Request" representing the +// client's request for the UpdateTagOption operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateTagOption for more information on using the UpdateTagOption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateTagOptionRequest method. +// req, resp := client.UpdateTagOptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/servicecatalog-2015-12-10/UpdateTagOption +func (c *ServiceCatalog) UpdateTagOptionRequest(input *UpdateTagOptionInput) (req *request.Request, output *UpdateTagOptionOutput) { + op := &request.Operation{ + Name: opUpdateTagOption, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateTagOptionInput{} + } + + output = &UpdateTagOptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateTagOption API operation for AWS Service Catalog. +// +// Updates the specified TagOption. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Service Catalog's +// API operation UpdateTagOption for usage and error information. +// +// Returned Error Codes: +// * ErrCodeTagOptionNotMigratedException "TagOptionNotMigratedException" +// An operation requiring TagOptions failed because the TagOptions migration +// process has not been performed for this account. Please use the AWS console +// to perform the migration process before retrying the operation. // // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The specified resource was not found. @@ -5743,6 +7409,20 @@ type AcceptPortfolioShareInput struct { // // PortfolioId is a required field PortfolioId *string `min:"1" type:"string" required:"true"` + + // The type of shared portfolios to accept. The default is to accept imported + // portfolios. + // + // * AWS_ORGANIZATIONS - Accept portfolios shared by the master account of + // your organization. + // + // * IMPORTED - Accept imported portfolios. + // + // * AWS_SERVICECATALOG - Not supported. (Throws ResourceNotFoundException.) + // + // For example, aws servicecatalog accept-portfolio-share --portfolio-id "port-2qwzkwxt3y5fk" + // --portfolio-share-type AWS_ORGANIZATIONS + PortfolioShareType *string `type:"string" enum:"PortfolioShareType"` } // String returns the string representation @@ -5783,6 +7463,12 @@ func (s *AcceptPortfolioShareInput) SetPortfolioId(v string) *AcceptPortfolioSha return s } +// SetPortfolioShareType sets the PortfolioShareType field's value. +func (s *AcceptPortfolioShareInput) SetPortfolioShareType(v string) *AcceptPortfolioShareInput { + s.PortfolioShareType = &v + return s +} + type AcceptPortfolioShareOutput struct { _ struct{} `type:"structure"` } @@ -6036,6 +7722,110 @@ func (s AssociateProductWithPortfolioOutput) GoString() string { return s.String() } +type AssociateServiceActionWithProvisioningArtifactInput struct { + _ struct{} `type:"structure"` + + // The language code. 
+ // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The product identifier. For example, prod-abcdzk7xy33qa. + // + // ProductId is a required field + ProductId *string `min:"1" type:"string" required:"true"` + + // The identifier of the provisioning artifact. For example, pa-4abcdjnxjj6ne. + // + // ProvisioningArtifactId is a required field + ProvisioningArtifactId *string `min:"1" type:"string" required:"true"` + + // The self-service action identifier. For example, act-fs7abcd89wxyz. + // + // ServiceActionId is a required field + ServiceActionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociateServiceActionWithProvisioningArtifactInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateServiceActionWithProvisioningArtifactInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateServiceActionWithProvisioningArtifactInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociateServiceActionWithProvisioningArtifactInput"} + if s.ProductId == nil { + invalidParams.Add(request.NewErrParamRequired("ProductId")) + } + if s.ProductId != nil && len(*s.ProductId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProductId", 1)) + } + if s.ProvisioningArtifactId == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisioningArtifactId")) + } + if s.ProvisioningArtifactId != nil && len(*s.ProvisioningArtifactId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProvisioningArtifactId", 1)) + } + if s.ServiceActionId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceActionId")) + } + if s.ServiceActionId != nil && len(*s.ServiceActionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceActionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *AssociateServiceActionWithProvisioningArtifactInput) SetAcceptLanguage(v string) *AssociateServiceActionWithProvisioningArtifactInput { + s.AcceptLanguage = &v + return s +} + +// SetProductId sets the ProductId field's value. +func (s *AssociateServiceActionWithProvisioningArtifactInput) SetProductId(v string) *AssociateServiceActionWithProvisioningArtifactInput { + s.ProductId = &v + return s +} + +// SetProvisioningArtifactId sets the ProvisioningArtifactId field's value. +func (s *AssociateServiceActionWithProvisioningArtifactInput) SetProvisioningArtifactId(v string) *AssociateServiceActionWithProvisioningArtifactInput { + s.ProvisioningArtifactId = &v + return s +} + +// SetServiceActionId sets the ServiceActionId field's value. 
+func (s *AssociateServiceActionWithProvisioningArtifactInput) SetServiceActionId(v string) *AssociateServiceActionWithProvisioningArtifactInput { + s.ServiceActionId = &v + return s +} + +type AssociateServiceActionWithProvisioningArtifactOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AssociateServiceActionWithProvisioningArtifactOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateServiceActionWithProvisioningArtifactOutput) GoString() string { + return s.String() +} + type AssociateTagOptionWithResourceInput struct { _ struct{} `type:"structure"` @@ -6105,88 +7895,270 @@ func (s AssociateTagOptionWithResourceOutput) GoString() string { return s.String() } -// Information about a CloudWatch dashboard. -type CloudWatchDashboard struct { +type BatchAssociateServiceActionWithProvisioningArtifactInput struct { _ struct{} `type:"structure"` - // The name of the CloudWatch dashboard. - Name *string `type:"string"` + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // One or more associations, each consisting of the Action ID, the Product ID, + // and the Provisioning Artifact ID. + // + // ServiceActionAssociations is a required field + ServiceActionAssociations []*ServiceActionAssociation `min:"1" type:"list" required:"true"` } // String returns the string representation -func (s CloudWatchDashboard) String() string { +func (s BatchAssociateServiceActionWithProvisioningArtifactInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CloudWatchDashboard) GoString() string { +func (s BatchAssociateServiceActionWithProvisioningArtifactInput) GoString() string { return s.String() } -// SetName sets the Name field's value. -func (s *CloudWatchDashboard) SetName(v string) *CloudWatchDashboard { - s.Name = &v - return s -} +// Validate inspects the fields of the type to determine if they are valid. +func (s *BatchAssociateServiceActionWithProvisioningArtifactInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchAssociateServiceActionWithProvisioningArtifactInput"} + if s.ServiceActionAssociations == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceActionAssociations")) + } + if s.ServiceActionAssociations != nil && len(s.ServiceActionAssociations) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceActionAssociations", 1)) + } + if s.ServiceActionAssociations != nil { + for i, v := range s.ServiceActionAssociations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ServiceActionAssociations", i), err.(request.ErrInvalidParams)) + } + } + } -// Information about a constraint. -type ConstraintDetail struct { - _ struct{} `type:"structure"` + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} - // The identifier of the constraint. - ConstraintId *string `min:"1" type:"string"` +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *BatchAssociateServiceActionWithProvisioningArtifactInput) SetAcceptLanguage(v string) *BatchAssociateServiceActionWithProvisioningArtifactInput { + s.AcceptLanguage = &v + return s +} - // The description of the constraint. 
- Description *string `type:"string"` +// SetServiceActionAssociations sets the ServiceActionAssociations field's value. +func (s *BatchAssociateServiceActionWithProvisioningArtifactInput) SetServiceActionAssociations(v []*ServiceActionAssociation) *BatchAssociateServiceActionWithProvisioningArtifactInput { + s.ServiceActionAssociations = v + return s +} - // The owner of the constraint. - Owner *string `type:"string"` +type BatchAssociateServiceActionWithProvisioningArtifactOutput struct { + _ struct{} `type:"structure"` - // The type of constraint. - // - // * LAUNCH - // - // * NOTIFICATION - // - // * TEMPLATE - Type *string `min:"1" type:"string"` + // An object that contains a list of errors, along with information to help + // you identify the self-service action. + FailedServiceActionAssociations []*FailedServiceActionAssociation `type:"list"` } // String returns the string representation -func (s ConstraintDetail) String() string { +func (s BatchAssociateServiceActionWithProvisioningArtifactOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ConstraintDetail) GoString() string { +func (s BatchAssociateServiceActionWithProvisioningArtifactOutput) GoString() string { return s.String() } -// SetConstraintId sets the ConstraintId field's value. -func (s *ConstraintDetail) SetConstraintId(v string) *ConstraintDetail { - s.ConstraintId = &v +// SetFailedServiceActionAssociations sets the FailedServiceActionAssociations field's value. +func (s *BatchAssociateServiceActionWithProvisioningArtifactOutput) SetFailedServiceActionAssociations(v []*FailedServiceActionAssociation) *BatchAssociateServiceActionWithProvisioningArtifactOutput { + s.FailedServiceActionAssociations = v return s } -// SetDescription sets the Description field's value. -func (s *ConstraintDetail) SetDescription(v string) *ConstraintDetail { - s.Description = &v - return s -} +type BatchDisassociateServiceActionFromProvisioningArtifactInput struct { + _ struct{} `type:"structure"` -// SetOwner sets the Owner field's value. -func (s *ConstraintDetail) SetOwner(v string) *ConstraintDetail { - s.Owner = &v - return s -} + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` -// SetType sets the Type field's value. -func (s *ConstraintDetail) SetType(v string) *ConstraintDetail { - s.Type = &v - return s + // One or more associations, each consisting of the Action ID, the Product ID, + // and the Provisioning Artifact ID. + // + // ServiceActionAssociations is a required field + ServiceActionAssociations []*ServiceActionAssociation `min:"1" type:"list" required:"true"` } -// Summary information about a constraint. +// String returns the string representation +func (s BatchDisassociateServiceActionFromProvisioningArtifactInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchDisassociateServiceActionFromProvisioningArtifactInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *BatchDisassociateServiceActionFromProvisioningArtifactInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "BatchDisassociateServiceActionFromProvisioningArtifactInput"} + if s.ServiceActionAssociations == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceActionAssociations")) + } + if s.ServiceActionAssociations != nil && len(s.ServiceActionAssociations) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceActionAssociations", 1)) + } + if s.ServiceActionAssociations != nil { + for i, v := range s.ServiceActionAssociations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ServiceActionAssociations", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *BatchDisassociateServiceActionFromProvisioningArtifactInput) SetAcceptLanguage(v string) *BatchDisassociateServiceActionFromProvisioningArtifactInput { + s.AcceptLanguage = &v + return s +} + +// SetServiceActionAssociations sets the ServiceActionAssociations field's value. +func (s *BatchDisassociateServiceActionFromProvisioningArtifactInput) SetServiceActionAssociations(v []*ServiceActionAssociation) *BatchDisassociateServiceActionFromProvisioningArtifactInput { + s.ServiceActionAssociations = v + return s +} + +type BatchDisassociateServiceActionFromProvisioningArtifactOutput struct { + _ struct{} `type:"structure"` + + // An object that contains a list of errors, along with information to help + // you identify the self-service action. + FailedServiceActionAssociations []*FailedServiceActionAssociation `type:"list"` +} + +// String returns the string representation +func (s BatchDisassociateServiceActionFromProvisioningArtifactOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s BatchDisassociateServiceActionFromProvisioningArtifactOutput) GoString() string { + return s.String() +} + +// SetFailedServiceActionAssociations sets the FailedServiceActionAssociations field's value. +func (s *BatchDisassociateServiceActionFromProvisioningArtifactOutput) SetFailedServiceActionAssociations(v []*FailedServiceActionAssociation) *BatchDisassociateServiceActionFromProvisioningArtifactOutput { + s.FailedServiceActionAssociations = v + return s +} + +// Information about a CloudWatch dashboard. +type CloudWatchDashboard struct { + _ struct{} `type:"structure"` + + // The name of the CloudWatch dashboard. + Name *string `type:"string"` +} + +// String returns the string representation +func (s CloudWatchDashboard) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CloudWatchDashboard) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *CloudWatchDashboard) SetName(v string) *CloudWatchDashboard { + s.Name = &v + return s +} + +// Information about a constraint. +type ConstraintDetail struct { + _ struct{} `type:"structure"` + + // The identifier of the constraint. + ConstraintId *string `min:"1" type:"string"` + + // The description of the constraint. + Description *string `type:"string"` + + // The owner of the constraint. + Owner *string `type:"string"` + + // The type of constraint. 
+ // + // * LAUNCH + // + // * NOTIFICATION + // + // * TEMPLATE + Type *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ConstraintDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConstraintDetail) GoString() string { + return s.String() +} + +// SetConstraintId sets the ConstraintId field's value. +func (s *ConstraintDetail) SetConstraintId(v string) *ConstraintDetail { + s.ConstraintId = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ConstraintDetail) SetDescription(v string) *ConstraintDetail { + s.Description = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *ConstraintDetail) SetOwner(v string) *ConstraintDetail { + s.Owner = &v + return s +} + +// SetType sets the Type field's value. +func (s *ConstraintDetail) SetType(v string) *ConstraintDetail { + s.Type = &v + return s +} + +// Summary information about a constraint. type ConstraintSummary struct { _ struct{} `type:"structure"` @@ -6391,11 +8363,27 @@ type CreateConstraintInput struct { // // LAUNCHSpecify the RoleArn property as follows: // - // \"RoleArn\" : \"arn:aws:iam::123456789012:role/LaunchRole\" + // {"RoleArn" : "arn:aws:iam::123456789012:role/LaunchRole"} + // + // You cannot have both a LAUNCH and a STACKSET constraint. + // + // You also cannot have more than one LAUNCH constraint on a product and portfolio. // // NOTIFICATIONSpecify the NotificationArns property as follows: // - // \"NotificationArns\" : [\"arn:aws:sns:us-east-1:123456789012:Topic\"] + // {"NotificationArns" : ["arn:aws:sns:us-east-1:123456789012:Topic"]} + // + // STACKSETSpecify the Parameters property as follows: + // + // {"Version": "String", "Properties": {"AccountList": [ "String" ], "RegionList": + // [ "String" ], "AdminRole": "String", "ExecutionRole": "String"}} + // + // You cannot have both a LAUNCH and a STACKSET constraint. + // + // You also cannot have more than one STACKSET constraint on a product and portfolio. + // + // Products with a STACKSET constraint will launch an AWS CloudFormation stack + // set. // // TEMPLATESpecify the Rules property. For more information, see Template Constraint // Rules (http://docs.aws.amazon.com/servicecatalog/latest/adminguide/reference-template_constraint_rules.html). @@ -6419,6 +8407,8 @@ type CreateConstraintInput struct { // // * NOTIFICATION // + // * STACKSET + // // * TEMPLATE // // Type is a required field @@ -6719,10 +8709,15 @@ type CreatePortfolioShareInput struct { // * zh - Chinese AcceptLanguage *string `type:"string"` - // The AWS account ID. - // - // AccountId is a required field - AccountId *string `type:"string" required:"true"` + // The AWS account ID. For example, 123456789012. + AccountId *string `type:"string"` + + // The organization node to whom you are going to share. If OrganizationNode + // is passed in, PortfolioShare will be created for the node and its children + // (when applies), and a PortfolioShareToken will be returned in the output + // in order for the administrator to monitor the status of the PortfolioShare + // creation process. + OrganizationNode *OrganizationNode `type:"structure"` // The portfolio identifier. // @@ -6743,9 +8738,6 @@ func (s CreatePortfolioShareInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
func (s *CreatePortfolioShareInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreatePortfolioShareInput"} - if s.AccountId == nil { - invalidParams.Add(request.NewErrParamRequired("AccountId")) - } if s.PortfolioId == nil { invalidParams.Add(request.NewErrParamRequired("PortfolioId")) } @@ -6771,6 +8763,12 @@ func (s *CreatePortfolioShareInput) SetAccountId(v string) *CreatePortfolioShare return s } +// SetOrganizationNode sets the OrganizationNode field's value. +func (s *CreatePortfolioShareInput) SetOrganizationNode(v *OrganizationNode) *CreatePortfolioShareInput { + s.OrganizationNode = v + return s +} + // SetPortfolioId sets the PortfolioId field's value. func (s *CreatePortfolioShareInput) SetPortfolioId(v string) *CreatePortfolioShareInput { s.PortfolioId = &v @@ -6779,6 +8777,10 @@ func (s *CreatePortfolioShareInput) SetPortfolioId(v string) *CreatePortfolioSha type CreatePortfolioShareOutput struct { _ struct{} `type:"structure"` + + // The portfolio share unique identifier. This will only be returned if portfolio + // is shared to an organization node. + PortfolioShareToken *string `type:"string"` } // String returns the string representation @@ -6791,6 +8793,12 @@ func (s CreatePortfolioShareOutput) GoString() string { return s.String() } +// SetPortfolioShareToken sets the PortfolioShareToken field's value. +func (s *CreatePortfolioShareOutput) SetPortfolioShareToken(v string) *CreatePortfolioShareOutput { + s.PortfolioShareToken = &v + return s +} + type CreateProductInput struct { _ struct{} `type:"structure"` @@ -7410,6 +9418,158 @@ func (s *CreateProvisioningArtifactOutput) SetStatus(v string) *CreateProvisioni return s } +type CreateServiceActionInput struct { + _ struct{} `type:"structure"` + + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The self-service action definition. Can be one of the following: + // + // NameThe name of the AWS Systems Manager Document. For example, AWS-RestartEC2Instance. + // + // VersionThe AWS Systems Manager automation document version. For example, + // "Version": "1" + // + // AssumeRoleThe Amazon Resource Name (ARN) of the role that performs the self-service + // actions on your behalf. For example, "AssumeRole": "arn:aws:iam::12345678910:role/ActionRole". + // + // To reuse the provisioned product launch role, set to "AssumeRole": "LAUNCH_ROLE". + // + // ParametersThe list of parameters in JSON format. + // + // For example: [{\"Name\":\"InstanceId\",\"Type\":\"TARGET\"}]. + // + // Definition is a required field + Definition map[string]*string `min:"1" type:"map" required:"true"` + + // The service action definition type. For example, SSM_AUTOMATION. + // + // DefinitionType is a required field + DefinitionType *string `type:"string" required:"true" enum:"ServiceActionDefinitionType"` + + // The self-service action description. + Description *string `type:"string"` + + // A unique identifier that you provide to ensure idempotency. If multiple requests + // differ only by the idempotency token, the same response is returned for each + // repeated request. + // + // IdempotencyToken is a required field + IdempotencyToken *string `min:"1" type:"string" required:"true" idempotencyToken:"true"` + + // The self-service action name. 
+ // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateServiceActionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateServiceActionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateServiceActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateServiceActionInput"} + if s.Definition == nil { + invalidParams.Add(request.NewErrParamRequired("Definition")) + } + if s.Definition != nil && len(s.Definition) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Definition", 1)) + } + if s.DefinitionType == nil { + invalidParams.Add(request.NewErrParamRequired("DefinitionType")) + } + if s.IdempotencyToken == nil { + invalidParams.Add(request.NewErrParamRequired("IdempotencyToken")) + } + if s.IdempotencyToken != nil && len(*s.IdempotencyToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IdempotencyToken", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *CreateServiceActionInput) SetAcceptLanguage(v string) *CreateServiceActionInput { + s.AcceptLanguage = &v + return s +} + +// SetDefinition sets the Definition field's value. +func (s *CreateServiceActionInput) SetDefinition(v map[string]*string) *CreateServiceActionInput { + s.Definition = v + return s +} + +// SetDefinitionType sets the DefinitionType field's value. +func (s *CreateServiceActionInput) SetDefinitionType(v string) *CreateServiceActionInput { + s.DefinitionType = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CreateServiceActionInput) SetDescription(v string) *CreateServiceActionInput { + s.Description = &v + return s +} + +// SetIdempotencyToken sets the IdempotencyToken field's value. +func (s *CreateServiceActionInput) SetIdempotencyToken(v string) *CreateServiceActionInput { + s.IdempotencyToken = &v + return s +} + +// SetName sets the Name field's value. +func (s *CreateServiceActionInput) SetName(v string) *CreateServiceActionInput { + s.Name = &v + return s +} + +type CreateServiceActionOutput struct { + _ struct{} `type:"structure"` + + // An object containing information about the self-service action. + ServiceActionDetail *ServiceActionDetail `type:"structure"` +} + +// String returns the string representation +func (s CreateServiceActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateServiceActionOutput) GoString() string { + return s.String() +} + +// SetServiceActionDetail sets the ServiceActionDetail field's value. +func (s *CreateServiceActionOutput) SetServiceActionDetail(v *ServiceActionDetail) *CreateServiceActionOutput { + s.ServiceActionDetail = v + return s +} + type CreateTagOptionInput struct { _ struct{} `type:"structure"` @@ -7644,9 +9804,10 @@ type DeletePortfolioShareInput struct { AcceptLanguage *string `type:"string"` // The AWS account ID. 
- // - // AccountId is a required field - AccountId *string `type:"string" required:"true"` + AccountId *string `type:"string"` + + // The organization node to whom you are going to stop sharing. + OrganizationNode *OrganizationNode `type:"structure"` // The portfolio identifier. // @@ -7667,9 +9828,6 @@ func (s DeletePortfolioShareInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *DeletePortfolioShareInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "DeletePortfolioShareInput"} - if s.AccountId == nil { - invalidParams.Add(request.NewErrParamRequired("AccountId")) - } if s.PortfolioId == nil { invalidParams.Add(request.NewErrParamRequired("PortfolioId")) } @@ -7695,6 +9853,12 @@ func (s *DeletePortfolioShareInput) SetAccountId(v string) *DeletePortfolioShare return s } +// SetOrganizationNode sets the OrganizationNode field's value. +func (s *DeletePortfolioShareInput) SetOrganizationNode(v *OrganizationNode) *DeletePortfolioShareInput { + s.OrganizationNode = v + return s +} + // SetPortfolioId sets the PortfolioId field's value. func (s *DeletePortfolioShareInput) SetPortfolioId(v string) *DeletePortfolioShareInput { s.PortfolioId = &v @@ -7703,6 +9867,10 @@ func (s *DeletePortfolioShareInput) SetPortfolioId(v string) *DeletePortfolioSha type DeletePortfolioShareOutput struct { _ struct{} `type:"structure"` + + // The portfolio share unique identifier. This will only be returned if delete + // is made to an organization node. + PortfolioShareToken *string `type:"string"` } // String returns the string representation @@ -7715,6 +9883,12 @@ func (s DeletePortfolioShareOutput) GoString() string { return s.String() } +// SetPortfolioShareToken sets the PortfolioShareToken field's value. +func (s *DeletePortfolioShareOutput) SetPortfolioShareToken(v string) *DeletePortfolioShareOutput { + s.PortfolioShareToken = &v + return s +} + type DeleteProductInput struct { _ struct{} `type:"structure"` @@ -7952,6 +10126,76 @@ func (s DeleteProvisioningArtifactOutput) GoString() string { return s.String() } +type DeleteServiceActionInput struct { + _ struct{} `type:"structure"` + + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The self-service action identifier. For example, act-fs7abcd89wxyz. + // + // Id is a required field + Id *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteServiceActionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteServiceActionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteServiceActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteServiceActionInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *DeleteServiceActionInput) SetAcceptLanguage(v string) *DeleteServiceActionInput { + s.AcceptLanguage = &v + return s +} + +// SetId sets the Id field's value. 
+func (s *DeleteServiceActionInput) SetId(v string) *DeleteServiceActionInput { + s.Id = &v + return s +} + +type DeleteServiceActionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteServiceActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteServiceActionOutput) GoString() string { + return s.String() +} + type DeleteTagOptionInput struct { _ struct{} `type:"structure"` @@ -8298,6 +10542,105 @@ func (s *DescribePortfolioOutput) SetTags(v []*Tag) *DescribePortfolioOutput { return s } +type DescribePortfolioShareStatusInput struct { + _ struct{} `type:"structure"` + + // The token for the portfolio share operation. This token is returned either + // by CreatePortfolioShare or by DeletePortfolioShare. + // + // PortfolioShareToken is a required field + PortfolioShareToken *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribePortfolioShareStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePortfolioShareStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribePortfolioShareStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribePortfolioShareStatusInput"} + if s.PortfolioShareToken == nil { + invalidParams.Add(request.NewErrParamRequired("PortfolioShareToken")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPortfolioShareToken sets the PortfolioShareToken field's value. +func (s *DescribePortfolioShareStatusInput) SetPortfolioShareToken(v string) *DescribePortfolioShareStatusInput { + s.PortfolioShareToken = &v + return s +} + +type DescribePortfolioShareStatusOutput struct { + _ struct{} `type:"structure"` + + // Organization node identifier. It can be either account id, organizational + // unit id or organization id. + OrganizationNodeValue *string `type:"string"` + + // The portfolio identifier. + PortfolioId *string `min:"1" type:"string"` + + // The token for the portfolio share operation. For example, share-6v24abcdefghi. + PortfolioShareToken *string `type:"string"` + + // Information about the portfolio share operation. + ShareDetails *ShareDetails `type:"structure"` + + // Status of the portfolio share operation. + Status *string `type:"string" enum:"ShareStatus"` +} + +// String returns the string representation +func (s DescribePortfolioShareStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribePortfolioShareStatusOutput) GoString() string { + return s.String() +} + +// SetOrganizationNodeValue sets the OrganizationNodeValue field's value. +func (s *DescribePortfolioShareStatusOutput) SetOrganizationNodeValue(v string) *DescribePortfolioShareStatusOutput { + s.OrganizationNodeValue = &v + return s +} + +// SetPortfolioId sets the PortfolioId field's value. +func (s *DescribePortfolioShareStatusOutput) SetPortfolioId(v string) *DescribePortfolioShareStatusOutput { + s.PortfolioId = &v + return s +} + +// SetPortfolioShareToken sets the PortfolioShareToken field's value. 
+func (s *DescribePortfolioShareStatusOutput) SetPortfolioShareToken(v string) *DescribePortfolioShareStatusOutput { + s.PortfolioShareToken = &v + return s +} + +// SetShareDetails sets the ShareDetails field's value. +func (s *DescribePortfolioShareStatusOutput) SetShareDetails(v *ShareDetails) *DescribePortfolioShareStatusOutput { + s.ShareDetails = v + return s +} + +// SetStatus sets the Status field's value. +func (s *DescribePortfolioShareStatusOutput) SetStatus(v string) *DescribePortfolioShareStatusOutput { + s.Status = &v + return s +} + type DescribeProductAsAdminInput struct { _ struct{} `type:"structure"` @@ -9005,6 +11348,10 @@ type DescribeProvisioningParametersOutput struct { // Information about the parameters used to provision the product. ProvisioningArtifactParameters []*ProvisioningArtifactParameter `type:"list"` + // An object that contains information about preferences, such as regions and + // accounts, for the provisioning artifact. + ProvisioningArtifactPreferences *ProvisioningArtifactPreferences `type:"structure"` + // Information about the TagOptions associated with the resource. TagOptions []*TagOptionSummary `type:"list"` @@ -9035,6 +11382,12 @@ func (s *DescribeProvisioningParametersOutput) SetProvisioningArtifactParameters return s } +// SetProvisioningArtifactPreferences sets the ProvisioningArtifactPreferences field's value. +func (s *DescribeProvisioningParametersOutput) SetProvisioningArtifactPreferences(v *ProvisioningArtifactPreferences) *DescribeProvisioningParametersOutput { + s.ProvisioningArtifactPreferences = v + return s +} + // SetTagOptions sets the TagOptions field's value. func (s *DescribeProvisioningParametersOutput) SetTagOptions(v []*TagOptionSummary) *DescribeProvisioningParametersOutput { s.TagOptions = v @@ -9167,6 +11520,85 @@ func (s *DescribeRecordOutput) SetRecordOutputs(v []*RecordOutput) *DescribeReco return s } +type DescribeServiceActionInput struct { + _ struct{} `type:"structure"` + + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The self-service action identifier. + // + // Id is a required field + Id *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeServiceActionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeServiceActionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeServiceActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeServiceActionInput"} + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *DescribeServiceActionInput) SetAcceptLanguage(v string) *DescribeServiceActionInput { + s.AcceptLanguage = &v + return s +} + +// SetId sets the Id field's value. +func (s *DescribeServiceActionInput) SetId(v string) *DescribeServiceActionInput { + s.Id = &v + return s +} + +type DescribeServiceActionOutput struct { + _ struct{} `type:"structure"` + + // Detailed information about the self-service action. 
+ ServiceActionDetail *ServiceActionDetail `type:"structure"` +} + +// String returns the string representation +func (s DescribeServiceActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeServiceActionOutput) GoString() string { + return s.String() +} + +// SetServiceActionDetail sets the ServiceActionDetail field's value. +func (s *DescribeServiceActionOutput) SetServiceActionDetail(v *ServiceActionDetail) *DescribeServiceActionOutput { + s.ServiceActionDetail = v + return s +} + type DescribeTagOptionInput struct { _ struct{} `type:"structure"` @@ -9231,6 +11663,34 @@ func (s *DescribeTagOptionOutput) SetTagOptionDetail(v *TagOptionDetail) *Descri return s } +type DisableAWSOrganizationsAccessInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisableAWSOrganizationsAccessInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableAWSOrganizationsAccessInput) GoString() string { + return s.String() +} + +type DisableAWSOrganizationsAccessOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisableAWSOrganizationsAccessOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableAWSOrganizationsAccessOutput) GoString() string { + return s.String() +} + type DisassociatePrincipalFromPortfolioInput struct { _ struct{} `type:"structure"` @@ -9405,6 +11865,110 @@ func (s DisassociateProductFromPortfolioOutput) GoString() string { return s.String() } +type DisassociateServiceActionFromProvisioningArtifactInput struct { + _ struct{} `type:"structure"` + + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The product identifier. For example, prod-abcdzk7xy33qa. + // + // ProductId is a required field + ProductId *string `min:"1" type:"string" required:"true"` + + // The identifier of the provisioning artifact. For example, pa-4abcdjnxjj6ne. + // + // ProvisioningArtifactId is a required field + ProvisioningArtifactId *string `min:"1" type:"string" required:"true"` + + // The self-service action identifier. For example, act-fs7abcd89wxyz. + // + // ServiceActionId is a required field + ServiceActionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DisassociateServiceActionFromProvisioningArtifactInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateServiceActionFromProvisioningArtifactInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DisassociateServiceActionFromProvisioningArtifactInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisassociateServiceActionFromProvisioningArtifactInput"} + if s.ProductId == nil { + invalidParams.Add(request.NewErrParamRequired("ProductId")) + } + if s.ProductId != nil && len(*s.ProductId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProductId", 1)) + } + if s.ProvisioningArtifactId == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisioningArtifactId")) + } + if s.ProvisioningArtifactId != nil && len(*s.ProvisioningArtifactId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProvisioningArtifactId", 1)) + } + if s.ServiceActionId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceActionId")) + } + if s.ServiceActionId != nil && len(*s.ServiceActionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceActionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *DisassociateServiceActionFromProvisioningArtifactInput) SetAcceptLanguage(v string) *DisassociateServiceActionFromProvisioningArtifactInput { + s.AcceptLanguage = &v + return s +} + +// SetProductId sets the ProductId field's value. +func (s *DisassociateServiceActionFromProvisioningArtifactInput) SetProductId(v string) *DisassociateServiceActionFromProvisioningArtifactInput { + s.ProductId = &v + return s +} + +// SetProvisioningArtifactId sets the ProvisioningArtifactId field's value. +func (s *DisassociateServiceActionFromProvisioningArtifactInput) SetProvisioningArtifactId(v string) *DisassociateServiceActionFromProvisioningArtifactInput { + s.ProvisioningArtifactId = &v + return s +} + +// SetServiceActionId sets the ServiceActionId field's value. 
+func (s *DisassociateServiceActionFromProvisioningArtifactInput) SetServiceActionId(v string) *DisassociateServiceActionFromProvisioningArtifactInput { + s.ServiceActionId = &v + return s +} + +type DisassociateServiceActionFromProvisioningArtifactOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisassociateServiceActionFromProvisioningArtifactOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateServiceActionFromProvisioningArtifactOutput) GoString() string { + return s.String() +} + type DisassociateTagOptionFromResourceInput struct { _ struct{} `type:"structure"` @@ -9474,6 +12038,34 @@ func (s DisassociateTagOptionFromResourceOutput) GoString() string { return s.String() } +type EnableAWSOrganizationsAccessInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s EnableAWSOrganizationsAccessInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableAWSOrganizationsAccessInput) GoString() string { + return s.String() +} + +type EnableAWSOrganizationsAccessOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s EnableAWSOrganizationsAccessOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableAWSOrganizationsAccessOutput) GoString() string { + return s.String() +} + type ExecuteProvisionedProductPlanInput struct { _ struct{} `type:"structure"` @@ -9486,89 +12078,301 @@ type ExecuteProvisionedProductPlanInput struct { // * zh - Chinese AcceptLanguage *string `type:"string"` - // A unique identifier that you provide to ensure idempotency. If multiple requests - // differ only by the idempotency token, the same response is returned for each - // repeated request. - // - // IdempotencyToken is a required field - IdempotencyToken *string `min:"1" type:"string" required:"true" idempotencyToken:"true"` + // A unique identifier that you provide to ensure idempotency. If multiple requests + // differ only by the idempotency token, the same response is returned for each + // repeated request. + // + // IdempotencyToken is a required field + IdempotencyToken *string `min:"1" type:"string" required:"true" idempotencyToken:"true"` + + // The plan identifier. + // + // PlanId is a required field + PlanId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ExecuteProvisionedProductPlanInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExecuteProvisionedProductPlanInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ExecuteProvisionedProductPlanInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ExecuteProvisionedProductPlanInput"} + if s.IdempotencyToken == nil { + invalidParams.Add(request.NewErrParamRequired("IdempotencyToken")) + } + if s.IdempotencyToken != nil && len(*s.IdempotencyToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IdempotencyToken", 1)) + } + if s.PlanId == nil { + invalidParams.Add(request.NewErrParamRequired("PlanId")) + } + if s.PlanId != nil && len(*s.PlanId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PlanId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *ExecuteProvisionedProductPlanInput) SetAcceptLanguage(v string) *ExecuteProvisionedProductPlanInput { + s.AcceptLanguage = &v + return s +} + +// SetIdempotencyToken sets the IdempotencyToken field's value. +func (s *ExecuteProvisionedProductPlanInput) SetIdempotencyToken(v string) *ExecuteProvisionedProductPlanInput { + s.IdempotencyToken = &v + return s +} + +// SetPlanId sets the PlanId field's value. +func (s *ExecuteProvisionedProductPlanInput) SetPlanId(v string) *ExecuteProvisionedProductPlanInput { + s.PlanId = &v + return s +} + +type ExecuteProvisionedProductPlanOutput struct { + _ struct{} `type:"structure"` + + // Information about the result of provisioning the product. + RecordDetail *RecordDetail `type:"structure"` +} + +// String returns the string representation +func (s ExecuteProvisionedProductPlanOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExecuteProvisionedProductPlanOutput) GoString() string { + return s.String() +} + +// SetRecordDetail sets the RecordDetail field's value. +func (s *ExecuteProvisionedProductPlanOutput) SetRecordDetail(v *RecordDetail) *ExecuteProvisionedProductPlanOutput { + s.RecordDetail = v + return s +} + +type ExecuteProvisionedProductServiceActionInput struct { + _ struct{} `type:"structure"` + + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // An idempotency token that uniquely identifies the execute request. + // + // ExecuteToken is a required field + ExecuteToken *string `min:"1" type:"string" required:"true" idempotencyToken:"true"` + + // The identifier of the provisioned product. + // + // ProvisionedProductId is a required field + ProvisionedProductId *string `min:"1" type:"string" required:"true"` + + // The self-service action identifier. For example, act-fs7abcd89wxyz. + // + // ServiceActionId is a required field + ServiceActionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ExecuteProvisionedProductServiceActionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExecuteProvisionedProductServiceActionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ExecuteProvisionedProductServiceActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ExecuteProvisionedProductServiceActionInput"} + if s.ExecuteToken == nil { + invalidParams.Add(request.NewErrParamRequired("ExecuteToken")) + } + if s.ExecuteToken != nil && len(*s.ExecuteToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExecuteToken", 1)) + } + if s.ProvisionedProductId == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisionedProductId")) + } + if s.ProvisionedProductId != nil && len(*s.ProvisionedProductId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProvisionedProductId", 1)) + } + if s.ServiceActionId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceActionId")) + } + if s.ServiceActionId != nil && len(*s.ServiceActionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceActionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *ExecuteProvisionedProductServiceActionInput) SetAcceptLanguage(v string) *ExecuteProvisionedProductServiceActionInput { + s.AcceptLanguage = &v + return s +} + +// SetExecuteToken sets the ExecuteToken field's value. +func (s *ExecuteProvisionedProductServiceActionInput) SetExecuteToken(v string) *ExecuteProvisionedProductServiceActionInput { + s.ExecuteToken = &v + return s +} + +// SetProvisionedProductId sets the ProvisionedProductId field's value. +func (s *ExecuteProvisionedProductServiceActionInput) SetProvisionedProductId(v string) *ExecuteProvisionedProductServiceActionInput { + s.ProvisionedProductId = &v + return s +} + +// SetServiceActionId sets the ServiceActionId field's value. +func (s *ExecuteProvisionedProductServiceActionInput) SetServiceActionId(v string) *ExecuteProvisionedProductServiceActionInput { + s.ServiceActionId = &v + return s +} + +type ExecuteProvisionedProductServiceActionOutput struct { + _ struct{} `type:"structure"` + + // An object containing detailed information about the result of provisioning + // the product. + RecordDetail *RecordDetail `type:"structure"` +} + +// String returns the string representation +func (s ExecuteProvisionedProductServiceActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExecuteProvisionedProductServiceActionOutput) GoString() string { + return s.String() +} + +// SetRecordDetail sets the RecordDetail field's value. +func (s *ExecuteProvisionedProductServiceActionOutput) SetRecordDetail(v *RecordDetail) *ExecuteProvisionedProductServiceActionOutput { + s.RecordDetail = v + return s +} + +// An object containing information about the error, along with identifying +// information about the self-service action and its associations. +type FailedServiceActionAssociation struct { + _ struct{} `type:"structure"` + + // The error code. Valid values are listed below. + ErrorCode *string `type:"string" enum:"ServiceActionAssociationErrorCode"` + + // A text description of the error. + ErrorMessage *string `min:"1" type:"string"` - // The plan identifier. - // - // PlanId is a required field - PlanId *string `min:"1" type:"string" required:"true"` + // The product identifier. For example, prod-abcdzk7xy33qa. + ProductId *string `min:"1" type:"string"` + + // The identifier of the provisioning artifact. For example, pa-4abcdjnxjj6ne. + ProvisioningArtifactId *string `min:"1" type:"string"` + + // The self-service action identifier. 
For example, act-fs7abcd89wxyz. + ServiceActionId *string `min:"1" type:"string"` } // String returns the string representation -func (s ExecuteProvisionedProductPlanInput) String() string { +func (s FailedServiceActionAssociation) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ExecuteProvisionedProductPlanInput) GoString() string { +func (s FailedServiceActionAssociation) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *ExecuteProvisionedProductPlanInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "ExecuteProvisionedProductPlanInput"} - if s.IdempotencyToken == nil { - invalidParams.Add(request.NewErrParamRequired("IdempotencyToken")) - } - if s.IdempotencyToken != nil && len(*s.IdempotencyToken) < 1 { - invalidParams.Add(request.NewErrParamMinLen("IdempotencyToken", 1)) - } - if s.PlanId == nil { - invalidParams.Add(request.NewErrParamRequired("PlanId")) - } - if s.PlanId != nil && len(*s.PlanId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("PlanId", 1)) - } +// SetErrorCode sets the ErrorCode field's value. +func (s *FailedServiceActionAssociation) SetErrorCode(v string) *FailedServiceActionAssociation { + s.ErrorCode = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetErrorMessage sets the ErrorMessage field's value. +func (s *FailedServiceActionAssociation) SetErrorMessage(v string) *FailedServiceActionAssociation { + s.ErrorMessage = &v + return s } -// SetAcceptLanguage sets the AcceptLanguage field's value. -func (s *ExecuteProvisionedProductPlanInput) SetAcceptLanguage(v string) *ExecuteProvisionedProductPlanInput { - s.AcceptLanguage = &v +// SetProductId sets the ProductId field's value. +func (s *FailedServiceActionAssociation) SetProductId(v string) *FailedServiceActionAssociation { + s.ProductId = &v return s } -// SetIdempotencyToken sets the IdempotencyToken field's value. -func (s *ExecuteProvisionedProductPlanInput) SetIdempotencyToken(v string) *ExecuteProvisionedProductPlanInput { - s.IdempotencyToken = &v +// SetProvisioningArtifactId sets the ProvisioningArtifactId field's value. +func (s *FailedServiceActionAssociation) SetProvisioningArtifactId(v string) *FailedServiceActionAssociation { + s.ProvisioningArtifactId = &v return s } -// SetPlanId sets the PlanId field's value. -func (s *ExecuteProvisionedProductPlanInput) SetPlanId(v string) *ExecuteProvisionedProductPlanInput { - s.PlanId = &v +// SetServiceActionId sets the ServiceActionId field's value. +func (s *FailedServiceActionAssociation) SetServiceActionId(v string) *FailedServiceActionAssociation { + s.ServiceActionId = &v return s } -type ExecuteProvisionedProductPlanOutput struct { +type GetAWSOrganizationsAccessStatusInput struct { _ struct{} `type:"structure"` +} - // Information about the result of provisioning the product. - RecordDetail *RecordDetail `type:"structure"` +// String returns the string representation +func (s GetAWSOrganizationsAccessStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetAWSOrganizationsAccessStatusInput) GoString() string { + return s.String() +} + +type GetAWSOrganizationsAccessStatusOutput struct { + _ struct{} `type:"structure"` + + // The status of the portfolio share feature. 
+ AccessStatus *string `type:"string" enum:"AccessStatus"` } // String returns the string representation -func (s ExecuteProvisionedProductPlanOutput) String() string { +func (s GetAWSOrganizationsAccessStatusOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ExecuteProvisionedProductPlanOutput) GoString() string { +func (s GetAWSOrganizationsAccessStatusOutput) GoString() string { return s.String() } -// SetRecordDetail sets the RecordDetail field's value. -func (s *ExecuteProvisionedProductPlanOutput) SetRecordDetail(v *RecordDetail) *ExecuteProvisionedProductPlanOutput { - s.RecordDetail = v +// SetAccessStatus sets the AccessStatus field's value. +func (s *GetAWSOrganizationsAccessStatusOutput) SetAccessStatus(v string) *GetAWSOrganizationsAccessStatusOutput { + s.AccessStatus = &v return s } @@ -9641,6 +12445,16 @@ type ListAcceptedPortfolioSharesInput struct { // The page token for the next set of results. To retrieve the first set of // results, use null. PageToken *string `type:"string"` + + // The type of shared portfolios to list. The default is to list imported portfolios. + // + // * AWS_ORGANIZATIONS - List portfolios shared by the master account of + // your organization + // + // * AWS_SERVICECATALOG - List default portfolios + // + // * IMPORTED - List imported portfolios + PortfolioShareType *string `type:"string" enum:"PortfolioShareType"` } // String returns the string representation @@ -9671,6 +12485,12 @@ func (s *ListAcceptedPortfolioSharesInput) SetPageToken(v string) *ListAcceptedP return s } +// SetPortfolioShareType sets the PortfolioShareType field's value. +func (s *ListAcceptedPortfolioSharesInput) SetPortfolioShareType(v string) *ListAcceptedPortfolioSharesInput { + s.PortfolioShareType = &v + return s +} + type ListAcceptedPortfolioSharesOutput struct { _ struct{} `type:"structure"` @@ -9932,6 +12752,135 @@ func (s *ListLaunchPathsOutput) SetNextPageToken(v string) *ListLaunchPathsOutpu return s } +type ListOrganizationPortfolioAccessInput struct { + _ struct{} `type:"structure"` + + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The organization node type that will be returned in the output. + // + // * ORGANIZATION - Organization that has access to the portfolio. + // + // * ORGANIZATIONAL_UNIT - Organizational unit that has access to the portfolio + // within your organization. + // + // * ACCOUNT - Account that has access to the portfolio within your organization. + // + // OrganizationNodeType is a required field + OrganizationNodeType *string `type:"string" required:"true" enum:"OrganizationNodeType"` + + // The maximum number of items to return with this call. + PageSize *int64 `type:"integer"` + + // The page token for the next set of results. To retrieve the first set of + // results, use null. + PageToken *string `type:"string"` + + // The portfolio identifier. For example, port-2abcdext3y5fk. + // + // PortfolioId is a required field + PortfolioId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListOrganizationPortfolioAccessInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListOrganizationPortfolioAccessInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListOrganizationPortfolioAccessInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListOrganizationPortfolioAccessInput"} + if s.OrganizationNodeType == nil { + invalidParams.Add(request.NewErrParamRequired("OrganizationNodeType")) + } + if s.PortfolioId == nil { + invalidParams.Add(request.NewErrParamRequired("PortfolioId")) + } + if s.PortfolioId != nil && len(*s.PortfolioId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PortfolioId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *ListOrganizationPortfolioAccessInput) SetAcceptLanguage(v string) *ListOrganizationPortfolioAccessInput { + s.AcceptLanguage = &v + return s +} + +// SetOrganizationNodeType sets the OrganizationNodeType field's value. +func (s *ListOrganizationPortfolioAccessInput) SetOrganizationNodeType(v string) *ListOrganizationPortfolioAccessInput { + s.OrganizationNodeType = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *ListOrganizationPortfolioAccessInput) SetPageSize(v int64) *ListOrganizationPortfolioAccessInput { + s.PageSize = &v + return s +} + +// SetPageToken sets the PageToken field's value. +func (s *ListOrganizationPortfolioAccessInput) SetPageToken(v string) *ListOrganizationPortfolioAccessInput { + s.PageToken = &v + return s +} + +// SetPortfolioId sets the PortfolioId field's value. +func (s *ListOrganizationPortfolioAccessInput) SetPortfolioId(v string) *ListOrganizationPortfolioAccessInput { + s.PortfolioId = &v + return s +} + +type ListOrganizationPortfolioAccessOutput struct { + _ struct{} `type:"structure"` + + // The page token to use to retrieve the next set of results. If there are no + // additional results, this value is null. + NextPageToken *string `type:"string"` + + // Displays information about the organization nodes. + OrganizationNodes []*OrganizationNode `type:"list"` +} + +// String returns the string representation +func (s ListOrganizationPortfolioAccessOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListOrganizationPortfolioAccessOutput) GoString() string { + return s.String() +} + +// SetNextPageToken sets the NextPageToken field's value. +func (s *ListOrganizationPortfolioAccessOutput) SetNextPageToken(v string) *ListOrganizationPortfolioAccessOutput { + s.NextPageToken = &v + return s +} + +// SetOrganizationNodes sets the OrganizationNodes field's value. +func (s *ListOrganizationPortfolioAccessOutput) SetOrganizationNodes(v []*OrganizationNode) *ListOrganizationPortfolioAccessOutput { + s.OrganizationNodes = v + return s +} + type ListPortfolioAccessInput struct { _ struct{} `type:"structure"` @@ -10430,6 +13379,115 @@ func (s *ListProvisionedProductPlansOutput) SetProvisionedProductPlans(v []*Prov return s } +type ListProvisioningArtifactsForServiceActionInput struct { + _ struct{} `type:"structure"` + + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The maximum number of items to return with this call. + PageSize *int64 `type:"integer"` + + // The page token for the next set of results. To retrieve the first set of + // results, use null. + PageToken *string `type:"string"` + + // The self-service action identifier. For example, act-fs7abcd89wxyz. 
+ // + // ServiceActionId is a required field + ServiceActionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListProvisioningArtifactsForServiceActionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListProvisioningArtifactsForServiceActionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListProvisioningArtifactsForServiceActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListProvisioningArtifactsForServiceActionInput"} + if s.ServiceActionId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceActionId")) + } + if s.ServiceActionId != nil && len(*s.ServiceActionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceActionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *ListProvisioningArtifactsForServiceActionInput) SetAcceptLanguage(v string) *ListProvisioningArtifactsForServiceActionInput { + s.AcceptLanguage = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *ListProvisioningArtifactsForServiceActionInput) SetPageSize(v int64) *ListProvisioningArtifactsForServiceActionInput { + s.PageSize = &v + return s +} + +// SetPageToken sets the PageToken field's value. +func (s *ListProvisioningArtifactsForServiceActionInput) SetPageToken(v string) *ListProvisioningArtifactsForServiceActionInput { + s.PageToken = &v + return s +} + +// SetServiceActionId sets the ServiceActionId field's value. +func (s *ListProvisioningArtifactsForServiceActionInput) SetServiceActionId(v string) *ListProvisioningArtifactsForServiceActionInput { + s.ServiceActionId = &v + return s +} + +type ListProvisioningArtifactsForServiceActionOutput struct { + _ struct{} `type:"structure"` + + // The page token to use to retrieve the next set of results. If there are no + // additional results, this value is null. + NextPageToken *string `type:"string"` + + // An array of objects with information about product views and provisioning + // artifacts. + ProvisioningArtifactViews []*ProvisioningArtifactView `type:"list"` +} + +// String returns the string representation +func (s ListProvisioningArtifactsForServiceActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListProvisioningArtifactsForServiceActionOutput) GoString() string { + return s.String() +} + +// SetNextPageToken sets the NextPageToken field's value. +func (s *ListProvisioningArtifactsForServiceActionOutput) SetNextPageToken(v string) *ListProvisioningArtifactsForServiceActionOutput { + s.NextPageToken = &v + return s +} + +// SetProvisioningArtifactViews sets the ProvisioningArtifactViews field's value. +func (s *ListProvisioningArtifactsForServiceActionOutput) SetProvisioningArtifactViews(v []*ProvisioningArtifactView) *ListProvisioningArtifactsForServiceActionOutput { + s.ProvisioningArtifactViews = v + return s +} + type ListProvisioningArtifactsInput struct { _ struct{} `type:"structure"` @@ -10762,6 +13820,214 @@ func (s *ListResourcesForTagOptionOutput) SetResourceDetails(v []*ResourceDetail return s } +type ListServiceActionsForProvisioningArtifactInput struct { + _ struct{} `type:"structure"` + + // The language code. 
+ // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The maximum number of items to return with this call. + PageSize *int64 `type:"integer"` + + // The page token for the next set of results. To retrieve the first set of + // results, use null. + PageToken *string `type:"string"` + + // The product identifier. For example, prod-abcdzk7xy33qa. + // + // ProductId is a required field + ProductId *string `min:"1" type:"string" required:"true"` + + // The identifier of the provisioning artifact. For example, pa-4abcdjnxjj6ne. + // + // ProvisioningArtifactId is a required field + ProvisioningArtifactId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListServiceActionsForProvisioningArtifactInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListServiceActionsForProvisioningArtifactInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListServiceActionsForProvisioningArtifactInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListServiceActionsForProvisioningArtifactInput"} + if s.ProductId == nil { + invalidParams.Add(request.NewErrParamRequired("ProductId")) + } + if s.ProductId != nil && len(*s.ProductId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProductId", 1)) + } + if s.ProvisioningArtifactId == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisioningArtifactId")) + } + if s.ProvisioningArtifactId != nil && len(*s.ProvisioningArtifactId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProvisioningArtifactId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *ListServiceActionsForProvisioningArtifactInput) SetAcceptLanguage(v string) *ListServiceActionsForProvisioningArtifactInput { + s.AcceptLanguage = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *ListServiceActionsForProvisioningArtifactInput) SetPageSize(v int64) *ListServiceActionsForProvisioningArtifactInput { + s.PageSize = &v + return s +} + +// SetPageToken sets the PageToken field's value. +func (s *ListServiceActionsForProvisioningArtifactInput) SetPageToken(v string) *ListServiceActionsForProvisioningArtifactInput { + s.PageToken = &v + return s +} + +// SetProductId sets the ProductId field's value. +func (s *ListServiceActionsForProvisioningArtifactInput) SetProductId(v string) *ListServiceActionsForProvisioningArtifactInput { + s.ProductId = &v + return s +} + +// SetProvisioningArtifactId sets the ProvisioningArtifactId field's value. +func (s *ListServiceActionsForProvisioningArtifactInput) SetProvisioningArtifactId(v string) *ListServiceActionsForProvisioningArtifactInput { + s.ProvisioningArtifactId = &v + return s +} + +type ListServiceActionsForProvisioningArtifactOutput struct { + _ struct{} `type:"structure"` + + // The page token to use to retrieve the next set of results. If there are no + // additional results, this value is null. + NextPageToken *string `type:"string"` + + // An object containing information about the self-service actions associated + // with the provisioning artifact. 
+ ServiceActionSummaries []*ServiceActionSummary `type:"list"` +} + +// String returns the string representation +func (s ListServiceActionsForProvisioningArtifactOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListServiceActionsForProvisioningArtifactOutput) GoString() string { + return s.String() +} + +// SetNextPageToken sets the NextPageToken field's value. +func (s *ListServiceActionsForProvisioningArtifactOutput) SetNextPageToken(v string) *ListServiceActionsForProvisioningArtifactOutput { + s.NextPageToken = &v + return s +} + +// SetServiceActionSummaries sets the ServiceActionSummaries field's value. +func (s *ListServiceActionsForProvisioningArtifactOutput) SetServiceActionSummaries(v []*ServiceActionSummary) *ListServiceActionsForProvisioningArtifactOutput { + s.ServiceActionSummaries = v + return s +} + +type ListServiceActionsInput struct { + _ struct{} `type:"structure"` + + // The language code. + // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // The maximum number of items to return with this call. + PageSize *int64 `type:"integer"` + + // The page token for the next set of results. To retrieve the first set of + // results, use null. + PageToken *string `type:"string"` +} + +// String returns the string representation +func (s ListServiceActionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListServiceActionsInput) GoString() string { + return s.String() +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *ListServiceActionsInput) SetAcceptLanguage(v string) *ListServiceActionsInput { + s.AcceptLanguage = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *ListServiceActionsInput) SetPageSize(v int64) *ListServiceActionsInput { + s.PageSize = &v + return s +} + +// SetPageToken sets the PageToken field's value. +func (s *ListServiceActionsInput) SetPageToken(v string) *ListServiceActionsInput { + s.PageToken = &v + return s +} + +type ListServiceActionsOutput struct { + _ struct{} `type:"structure"` + + // The page token to use to retrieve the next set of results. If there are no + // additional results, this value is null. + NextPageToken *string `type:"string"` + + // An object containing information about the service actions associated with + // the provisioning artifact. + ServiceActionSummaries []*ServiceActionSummary `type:"list"` +} + +// String returns the string representation +func (s ListServiceActionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListServiceActionsOutput) GoString() string { + return s.String() +} + +// SetNextPageToken sets the NextPageToken field's value. +func (s *ListServiceActionsOutput) SetNextPageToken(v string) *ListServiceActionsOutput { + s.NextPageToken = &v + return s +} + +// SetServiceActionSummaries sets the ServiceActionSummaries field's value. +func (s *ListServiceActionsOutput) SetServiceActionSummaries(v []*ServiceActionSummary) *ListServiceActionsOutput { + s.ServiceActionSummaries = v + return s +} + // Filters to use when listing TagOptions. 
type ListTagOptionsFilters struct { _ struct{} `type:"structure"` @@ -10911,6 +14177,36 @@ func (s *ListTagOptionsOutput) SetTagOptionDetails(v []*TagOptionDetail) *ListTa return s } +type OrganizationNode struct { + _ struct{} `type:"structure"` + + Type *string `type:"string" enum:"OrganizationNodeType"` + + Value *string `type:"string"` +} + +// String returns the string representation +func (s OrganizationNode) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OrganizationNode) GoString() string { + return s.String() +} + +// SetType sets the Type field's value. +func (s *OrganizationNode) SetType(v string) *OrganizationNode { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. +func (s *OrganizationNode) SetValue(v string) *OrganizationNode { + s.Value = &v + return s +} + // The constraints that the administrator has put on the parameter. type ParameterConstraints struct { _ struct{} `type:"structure"` @@ -10943,7 +14239,7 @@ type PortfolioDetail struct { ARN *string `min:"1" type:"string"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The description of the portfolio. Description *string `type:"string"` @@ -11076,7 +14372,7 @@ type ProductViewDetail struct { _ struct{} `type:"structure"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The ARN of the product. ProductARN *string `min:"1" type:"string"` @@ -11296,6 +14592,10 @@ type ProvisionProductInput struct { // the product. ProvisioningParameters []*ProvisioningParameter `type:"list"` + // An object that contains information about the provisioning preferences for + // a stack set. + ProvisioningPreferences *ProvisioningPreferences `type:"structure"` + // One or more tags. Tags []*Tag `type:"list"` } @@ -11350,6 +14650,11 @@ func (s *ProvisionProductInput) Validate() error { } } } + if s.ProvisioningPreferences != nil { + if err := s.ProvisioningPreferences.Validate(); err != nil { + invalidParams.AddNested("ProvisioningPreferences", err.(request.ErrInvalidParams)) + } + } if s.Tags != nil { for i, v := range s.Tags { if v == nil { @@ -11415,6 +14720,12 @@ func (s *ProvisionProductInput) SetProvisioningParameters(v []*ProvisioningParam return s } +// SetProvisioningPreferences sets the ProvisioningPreferences field's value. +func (s *ProvisionProductInput) SetProvisioningPreferences(v *ProvisioningPreferences) *ProvisionProductInput { + s.ProvisioningPreferences = v + return s +} + // SetTags sets the Tags field's value. func (s *ProvisionProductInput) SetTags(v []*Tag) *ProvisionProductInput { s.Tags = v @@ -11452,7 +14763,7 @@ type ProvisionedProductAttribute struct { Arn *string `min:"1" type:"string"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The identifier of the provisioned product. Id *string `min:"1" type:"string"` @@ -11502,7 +14813,7 @@ type ProvisionedProductAttribute struct { // One or more tags. Tags []*Tag `type:"list"` - // The type of provisioned product. The supported value is CFN_STACK. + // The type of provisioned product. The supported values are CFN_STACK and CFN_STACKSET. Type *string `type:"string"` // The Amazon Resource Name (ARN) of the IAM user. 
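
For reviewers unfamiliar with the self-service action APIs this vendored update introduces, here is a minimal, hypothetical usage sketch (not part of the vendored SDK or of this diff) showing how a caller might page through the new `ListServiceActions` operation using the `ListServiceActionsInput`/`ListServiceActionsOutput` types added above. The client constructor and pointer helpers (`servicecatalog.New`, `aws.Int64`, `aws.StringValue`) come from the standard aws-sdk-go packages; the summary fields printed are assumptions based on the `ServiceActionSummary` type referenced in this file.

```go
// Illustrative sketch only — not part of the vendored SDK. It assumes the
// generated ListServiceActions client method that accompanies the
// ListServiceActionsInput/Output types added in this file.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/servicecatalog"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := servicecatalog.New(sess)

	// Page through all self-service actions, 20 at a time.
	input := &servicecatalog.ListServiceActionsInput{PageSize: aws.Int64(20)}
	for {
		out, err := svc.ListServiceActions(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, sa := range out.ServiceActionSummaries {
			fmt.Println(aws.StringValue(sa.Id), aws.StringValue(sa.Name))
		}
		if aws.StringValue(out.NextPageToken) == "" {
			break
		}
		input.PageToken = out.NextPageToken
	}
}
```
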
@@ -11621,7 +14932,7 @@ type ProvisionedProductDetail struct { Arn *string `min:"1" type:"string"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The identifier of the provisioned product. Id *string `type:"string"` @@ -11637,6 +14948,12 @@ type ProvisionedProductDetail struct { // The user-friendly name of the provisioned product. Name *string `min:"1" type:"string"` + // The product identifier. For example, prod-abcdzk7xy33qa. + ProductId *string `min:"1" type:"string"` + + // The identifier of the provisioning artifact. For example, pa-4abcdjnxjj6ne. + ProvisioningArtifactId *string `min:"1" type:"string"` + // The current status of the provisioned product. // // * AVAILABLE - Stable state, ready to perform any operation. The most recent @@ -11658,7 +14975,7 @@ type ProvisionedProductDetail struct { // The current status message of the provisioned product. StatusMessage *string `type:"string"` - // The type of provisioned product. The supported value is CFN_STACK. + // The type of provisioned product. The supported values are CFN_STACK and CFN_STACKSET. Type *string `type:"string"` } @@ -11708,6 +15025,18 @@ func (s *ProvisionedProductDetail) SetName(v string) *ProvisionedProductDetail { return s } +// SetProductId sets the ProductId field's value. +func (s *ProvisionedProductDetail) SetProductId(v string) *ProvisionedProductDetail { + s.ProductId = &v + return s +} + +// SetProvisioningArtifactId sets the ProvisioningArtifactId field's value. +func (s *ProvisionedProductDetail) SetProvisioningArtifactId(v string) *ProvisionedProductDetail { + s.ProvisioningArtifactId = &v + return s +} + // SetStatus sets the Status field's value. func (s *ProvisionedProductDetail) SetStatus(v string) *ProvisionedProductDetail { s.Status = &v @@ -11731,7 +15060,7 @@ type ProvisionedProductPlanDetails struct { _ struct{} `type:"structure"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // Passed to CloudFormation. The SNS topic ARNs to which to publish stack-related // events. @@ -11777,7 +15106,7 @@ type ProvisionedProductPlanDetails struct { Tags []*Tag `type:"list"` // The time when the plan was last updated. - UpdatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + UpdatedTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -11955,7 +15284,7 @@ type ProvisioningArtifact struct { _ struct{} `type:"structure"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The description of the provisioning artifact. Description *string `type:"string"` @@ -12010,7 +15339,7 @@ type ProvisioningArtifactDetail struct { Active *bool `type:"boolean"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The description of the provisioning artifact. Description *string `type:"string"` @@ -12148,6 +15477,52 @@ func (s *ProvisioningArtifactParameter) SetParameterType(v string) *Provisioning return s } +// The user-defined preferences that will be applied during product provisioning, +// unless overridden by ProvisioningPreferences or UpdateProvisioningPreferences. 
+// +// For more information on maximum concurrent accounts and failure tolerance, +// see Stack set operation options (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html#stackset-ops-options) +// in the AWS CloudFormation User Guide. +type ProvisioningArtifactPreferences struct { + _ struct{} `type:"structure"` + + // One or more AWS accounts where stack instances are deployed from the stack + // set. These accounts can be scoped in ProvisioningPreferences$StackSetAccounts + // and UpdateProvisioningPreferences$StackSetAccounts. + // + // Applicable only to a CFN_STACKSET provisioned product type. + StackSetAccounts []*string `type:"list"` + + // One or more AWS Regions where stack instances are deployed from the stack + // set. These regions can be scoped in ProvisioningPreferences$StackSetRegions + // and UpdateProvisioningPreferences$StackSetRegions. + // + // Applicable only to a CFN_STACKSET provisioned product type. + StackSetRegions []*string `type:"list"` +} + +// String returns the string representation +func (s ProvisioningArtifactPreferences) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProvisioningArtifactPreferences) GoString() string { + return s.String() +} + +// SetStackSetAccounts sets the StackSetAccounts field's value. +func (s *ProvisioningArtifactPreferences) SetStackSetAccounts(v []*string) *ProvisioningArtifactPreferences { + s.StackSetAccounts = v + return s +} + +// SetStackSetRegions sets the StackSetRegions field's value. +func (s *ProvisioningArtifactPreferences) SetStackSetRegions(v []*string) *ProvisioningArtifactPreferences { + s.StackSetRegions = v + return s +} + // Information about a provisioning artifact (also known as a version) for a // product. type ProvisioningArtifactProperties struct { @@ -12235,7 +15610,7 @@ type ProvisioningArtifactSummary struct { _ struct{} `type:"structure"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The description of the provisioning artifact. Description *string `type:"string"` @@ -12291,6 +15666,41 @@ func (s *ProvisioningArtifactSummary) SetProvisioningArtifactMetadata(v map[stri return s } +// An object that contains summary information about a product view and a provisioning +// artifact. +type ProvisioningArtifactView struct { + _ struct{} `type:"structure"` + + // Summary information about a product view. + ProductViewSummary *ProductViewSummary `type:"structure"` + + // Information about a provisioning artifact. A provisioning artifact is also + // known as a product version. + ProvisioningArtifact *ProvisioningArtifact `type:"structure"` +} + +// String returns the string representation +func (s ProvisioningArtifactView) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProvisioningArtifactView) GoString() string { + return s.String() +} + +// SetProductViewSummary sets the ProductViewSummary field's value. +func (s *ProvisioningArtifactView) SetProductViewSummary(v *ProductViewSummary) *ProvisioningArtifactView { + s.ProductViewSummary = v + return s +} + +// SetProvisioningArtifact sets the ProvisioningArtifact field's value. +func (s *ProvisioningArtifactView) SetProvisioningArtifact(v *ProvisioningArtifact) *ProvisioningArtifactView { + s.ProvisioningArtifact = v + return s +} + // Information about a parameter used to provision a product. 
type ProvisioningParameter struct { _ struct{} `type:"structure"` @@ -12337,12 +15747,163 @@ func (s *ProvisioningParameter) SetValue(v string) *ProvisioningParameter { return s } +// The user-defined preferences that will be applied when updating a provisioned +// product. Not all preferences are applicable to all provisioned product types. +type ProvisioningPreferences struct { + _ struct{} `type:"structure"` + + // One or more AWS accounts that will have access to the provisioned product. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // The AWS accounts specified should be within the list of accounts in the STACKSET + // constraint. To get the list of accounts in the STACKSET constraint, use the + // DescribeProvisioningParameters operation. + // + // If no values are specified, the default value is all accounts from the STACKSET + // constraint. + StackSetAccounts []*string `type:"list"` + + // The number of accounts, per region, for which this operation can fail before + // AWS Service Catalog stops the operation in that region. If the operation + // is stopped in a region, AWS Service Catalog doesn't attempt the operation + // in any subsequent regions. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // Conditional: You must specify either StackSetFailureToleranceCount or StackSetFailureTolerancePercentage, + // but not both. + // + // The default value is 0 if no value is specified. + StackSetFailureToleranceCount *int64 `type:"integer"` + + // The percentage of accounts, per region, for which this stack operation can + // fail before AWS Service Catalog stops the operation in that region. If the + // operation is stopped in a region, AWS Service Catalog doesn't attempt the + // operation in any subsequent regions. + // + // When calculating the number of accounts based on the specified percentage, + // AWS Service Catalog rounds down to the next whole number. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // Conditional: You must specify either StackSetFailureToleranceCount or StackSetFailureTolerancePercentage, + // but not both. + StackSetFailureTolerancePercentage *int64 `type:"integer"` + + // The maximum number of accounts in which to perform this operation at one + // time. This is dependent on the value of StackSetFailureToleranceCount. StackSetMaxConcurrentCount + // is at most one more than the StackSetFailureToleranceCount. + // + // Note that this setting lets you specify the maximum for operations. For large + // deployments, under certain circumstances the actual number of accounts acted + // upon concurrently may be lower due to service throttling. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // Conditional: You must specify either StackSetMaxConcurrentCount or StackSetMaxConcurrentPercentage, + // but not both. + StackSetMaxConcurrencyCount *int64 `min:"1" type:"integer"` + + // The maximum percentage of accounts in which to perform this operation at + // one time. + // + // When calculating the number of accounts based on the specified percentage, + // AWS Service Catalog rounds down to the next whole number. This is true except + // in cases where rounding down would result is zero. In this case, AWS Service + // Catalog sets the number as 1 instead. + // + // Note that this setting lets you specify the maximum for operations. 
For large + // deployments, under certain circumstances the actual number of accounts acted + // upon concurrently may be lower due to service throttling. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // Conditional: You must specify either StackSetMaxConcurrentCount or StackSetMaxConcurrentPercentage, + // but not both. + StackSetMaxConcurrencyPercentage *int64 `min:"1" type:"integer"` + + // One or more AWS Regions where the provisioned product will be available. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // The specified regions should be within the list of regions from the STACKSET + // constraint. To get the list of regions in the STACKSET constraint, use the + // DescribeProvisioningParameters operation. + // + // If no values are specified, the default value is all regions from the STACKSET + // constraint. + StackSetRegions []*string `type:"list"` +} + +// String returns the string representation +func (s ProvisioningPreferences) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProvisioningPreferences) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ProvisioningPreferences) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ProvisioningPreferences"} + if s.StackSetMaxConcurrencyCount != nil && *s.StackSetMaxConcurrencyCount < 1 { + invalidParams.Add(request.NewErrParamMinValue("StackSetMaxConcurrencyCount", 1)) + } + if s.StackSetMaxConcurrencyPercentage != nil && *s.StackSetMaxConcurrencyPercentage < 1 { + invalidParams.Add(request.NewErrParamMinValue("StackSetMaxConcurrencyPercentage", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackSetAccounts sets the StackSetAccounts field's value. +func (s *ProvisioningPreferences) SetStackSetAccounts(v []*string) *ProvisioningPreferences { + s.StackSetAccounts = v + return s +} + +// SetStackSetFailureToleranceCount sets the StackSetFailureToleranceCount field's value. +func (s *ProvisioningPreferences) SetStackSetFailureToleranceCount(v int64) *ProvisioningPreferences { + s.StackSetFailureToleranceCount = &v + return s +} + +// SetStackSetFailureTolerancePercentage sets the StackSetFailureTolerancePercentage field's value. +func (s *ProvisioningPreferences) SetStackSetFailureTolerancePercentage(v int64) *ProvisioningPreferences { + s.StackSetFailureTolerancePercentage = &v + return s +} + +// SetStackSetMaxConcurrencyCount sets the StackSetMaxConcurrencyCount field's value. +func (s *ProvisioningPreferences) SetStackSetMaxConcurrencyCount(v int64) *ProvisioningPreferences { + s.StackSetMaxConcurrencyCount = &v + return s +} + +// SetStackSetMaxConcurrencyPercentage sets the StackSetMaxConcurrencyPercentage field's value. +func (s *ProvisioningPreferences) SetStackSetMaxConcurrencyPercentage(v int64) *ProvisioningPreferences { + s.StackSetMaxConcurrencyPercentage = &v + return s +} + +// SetStackSetRegions sets the StackSetRegions field's value. +func (s *ProvisioningPreferences) SetStackSetRegions(v []*string) *ProvisioningPreferences { + s.StackSetRegions = v + return s +} + // Information about a request operation. type RecordDetail struct { _ struct{} `type:"structure"` // The UTC time stamp of the creation time. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The path identifier. 
PathId *string `min:"1" type:"string"` @@ -12356,7 +15917,7 @@ type RecordDetail struct { // The user-friendly name of the provisioned product. ProvisionedProductName *string `min:"1" type:"string"` - // The type of provisioned product. The supported value is CFN_STACK. + // The type of provisioned product. The supported values are CFN_STACK and CFN_STACKSET. ProvisionedProductType *string `type:"string"` // The identifier of the provisioning artifact. @@ -12397,7 +15958,7 @@ type RecordDetail struct { Status *string `type:"string" enum:"RecordStatus"` // The time when the record was last updated. - UpdatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + UpdatedTime *time.Time `type:"timestamp"` } // String returns the string representation @@ -12614,6 +16175,20 @@ type RejectPortfolioShareInput struct { // // PortfolioId is a required field PortfolioId *string `min:"1" type:"string" required:"true"` + + // The type of shared portfolios to reject. The default is to reject imported + // portfolios. + // + // * AWS_ORGANIZATIONS - Reject portfolios shared by the master account of + // your organization. + // + // * IMPORTED - Reject imported portfolios. + // + // * AWS_SERVICECATALOG - Not supported. (Throws ResourceNotFoundException.) + // + // For example, aws servicecatalog reject-portfolio-share --portfolio-id "port-2qwzkwxt3y5fk" + // --portfolio-share-type AWS_ORGANIZATIONS + PortfolioShareType *string `type:"string" enum:"PortfolioShareType"` } // String returns the string representation @@ -12654,6 +16229,12 @@ func (s *RejectPortfolioShareInput) SetPortfolioId(v string) *RejectPortfolioSha return s } +// SetPortfolioShareType sets the PortfolioShareType field's value. +func (s *RejectPortfolioShareInput) SetPortfolioShareType(v string) *RejectPortfolioShareInput { + s.PortfolioShareType = &v + return s +} + type RejectPortfolioShareOutput struct { _ struct{} `type:"structure"` } @@ -12799,7 +16380,7 @@ type ResourceDetail struct { ARN *string `type:"string"` // The creation time of the resource. - CreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedTime *time.Time `type:"timestamp"` // The description of the resource. Description *string `type:"string"` @@ -13298,81 +16879,317 @@ func (s *SearchProvisionedProductsInput) SetAcceptLanguage(v string) *SearchProv return s } -// SetAccessLevelFilter sets the AccessLevelFilter field's value. -func (s *SearchProvisionedProductsInput) SetAccessLevelFilter(v *AccessLevelFilter) *SearchProvisionedProductsInput { - s.AccessLevelFilter = v +// SetAccessLevelFilter sets the AccessLevelFilter field's value. +func (s *SearchProvisionedProductsInput) SetAccessLevelFilter(v *AccessLevelFilter) *SearchProvisionedProductsInput { + s.AccessLevelFilter = v + return s +} + +// SetFilters sets the Filters field's value. +func (s *SearchProvisionedProductsInput) SetFilters(v map[string][]*string) *SearchProvisionedProductsInput { + s.Filters = v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *SearchProvisionedProductsInput) SetPageSize(v int64) *SearchProvisionedProductsInput { + s.PageSize = &v + return s +} + +// SetPageToken sets the PageToken field's value. +func (s *SearchProvisionedProductsInput) SetPageToken(v string) *SearchProvisionedProductsInput { + s.PageToken = &v + return s +} + +// SetSortBy sets the SortBy field's value. 
+func (s *SearchProvisionedProductsInput) SetSortBy(v string) *SearchProvisionedProductsInput { + s.SortBy = &v + return s +} + +// SetSortOrder sets the SortOrder field's value. +func (s *SearchProvisionedProductsInput) SetSortOrder(v string) *SearchProvisionedProductsInput { + s.SortOrder = &v + return s +} + +type SearchProvisionedProductsOutput struct { + _ struct{} `type:"structure"` + + // The page token to use to retrieve the next set of results. If there are no + // additional results, this value is null. + NextPageToken *string `type:"string"` + + // Information about the provisioned products. + ProvisionedProducts []*ProvisionedProductAttribute `type:"list"` + + // The number of provisioned products found. + TotalResultsCount *int64 `type:"integer"` +} + +// String returns the string representation +func (s SearchProvisionedProductsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SearchProvisionedProductsOutput) GoString() string { + return s.String() +} + +// SetNextPageToken sets the NextPageToken field's value. +func (s *SearchProvisionedProductsOutput) SetNextPageToken(v string) *SearchProvisionedProductsOutput { + s.NextPageToken = &v + return s +} + +// SetProvisionedProducts sets the ProvisionedProducts field's value. +func (s *SearchProvisionedProductsOutput) SetProvisionedProducts(v []*ProvisionedProductAttribute) *SearchProvisionedProductsOutput { + s.ProvisionedProducts = v + return s +} + +// SetTotalResultsCount sets the TotalResultsCount field's value. +func (s *SearchProvisionedProductsOutput) SetTotalResultsCount(v int64) *SearchProvisionedProductsOutput { + s.TotalResultsCount = &v + return s +} + +// A self-service action association consisting of the Action ID, the Product +// ID, and the Provisioning Artifact ID. +type ServiceActionAssociation struct { + _ struct{} `type:"structure"` + + // The product identifier. For example, prod-abcdzk7xy33qa. + // + // ProductId is a required field + ProductId *string `min:"1" type:"string" required:"true"` + + // The identifier of the provisioning artifact. For example, pa-4abcdjnxjj6ne. + // + // ProvisioningArtifactId is a required field + ProvisioningArtifactId *string `min:"1" type:"string" required:"true"` + + // The self-service action identifier. For example, act-fs7abcd89wxyz. + // + // ServiceActionId is a required field + ServiceActionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ServiceActionAssociation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceActionAssociation) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ServiceActionAssociation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ServiceActionAssociation"} + if s.ProductId == nil { + invalidParams.Add(request.NewErrParamRequired("ProductId")) + } + if s.ProductId != nil && len(*s.ProductId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProductId", 1)) + } + if s.ProvisioningArtifactId == nil { + invalidParams.Add(request.NewErrParamRequired("ProvisioningArtifactId")) + } + if s.ProvisioningArtifactId != nil && len(*s.ProvisioningArtifactId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ProvisioningArtifactId", 1)) + } + if s.ServiceActionId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceActionId")) + } + if s.ServiceActionId != nil && len(*s.ServiceActionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServiceActionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetProductId sets the ProductId field's value. +func (s *ServiceActionAssociation) SetProductId(v string) *ServiceActionAssociation { + s.ProductId = &v + return s +} + +// SetProvisioningArtifactId sets the ProvisioningArtifactId field's value. +func (s *ServiceActionAssociation) SetProvisioningArtifactId(v string) *ServiceActionAssociation { + s.ProvisioningArtifactId = &v + return s +} + +// SetServiceActionId sets the ServiceActionId field's value. +func (s *ServiceActionAssociation) SetServiceActionId(v string) *ServiceActionAssociation { + s.ServiceActionId = &v + return s +} + +// An object containing detailed information about the self-service action. +type ServiceActionDetail struct { + _ struct{} `type:"structure"` + + // A map that defines the self-service action. + Definition map[string]*string `min:"1" type:"map"` + + // Summary information about the self-service action. + ServiceActionSummary *ServiceActionSummary `type:"structure"` +} + +// String returns the string representation +func (s ServiceActionDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceActionDetail) GoString() string { + return s.String() +} + +// SetDefinition sets the Definition field's value. +func (s *ServiceActionDetail) SetDefinition(v map[string]*string) *ServiceActionDetail { + s.Definition = v + return s +} + +// SetServiceActionSummary sets the ServiceActionSummary field's value. +func (s *ServiceActionDetail) SetServiceActionSummary(v *ServiceActionSummary) *ServiceActionDetail { + s.ServiceActionSummary = v + return s +} + +// Detailed information about the self-service action. +type ServiceActionSummary struct { + _ struct{} `type:"structure"` + + // The self-service action definition type. For example, SSM_AUTOMATION. + DefinitionType *string `type:"string" enum:"ServiceActionDefinitionType"` + + // The self-service action description. + Description *string `type:"string"` + + // The self-service action identifier. + Id *string `min:"1" type:"string"` + + // The self-service action name. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ServiceActionSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceActionSummary) GoString() string { + return s.String() +} + +// SetDefinitionType sets the DefinitionType field's value. 
+func (s *ServiceActionSummary) SetDefinitionType(v string) *ServiceActionSummary { + s.DefinitionType = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ServiceActionSummary) SetDescription(v string) *ServiceActionSummary { + s.Description = &v + return s +} + +// SetId sets the Id field's value. +func (s *ServiceActionSummary) SetId(v string) *ServiceActionSummary { + s.Id = &v + return s +} + +// SetName sets the Name field's value. +func (s *ServiceActionSummary) SetName(v string) *ServiceActionSummary { + s.Name = &v return s } -// SetFilters sets the Filters field's value. -func (s *SearchProvisionedProductsInput) SetFilters(v map[string][]*string) *SearchProvisionedProductsInput { - s.Filters = v - return s +// Information about the portfolio share operation. +type ShareDetails struct { + _ struct{} `type:"structure"` + + // List of errors. + ShareErrors []*ShareError `type:"list"` + + // List of accounts for whom the operation succeeded. + SuccessfulShares []*string `type:"list"` } -// SetPageSize sets the PageSize field's value. -func (s *SearchProvisionedProductsInput) SetPageSize(v int64) *SearchProvisionedProductsInput { - s.PageSize = &v - return s +// String returns the string representation +func (s ShareDetails) String() string { + return awsutil.Prettify(s) } -// SetPageToken sets the PageToken field's value. -func (s *SearchProvisionedProductsInput) SetPageToken(v string) *SearchProvisionedProductsInput { - s.PageToken = &v - return s +// GoString returns the string representation +func (s ShareDetails) GoString() string { + return s.String() } -// SetSortBy sets the SortBy field's value. -func (s *SearchProvisionedProductsInput) SetSortBy(v string) *SearchProvisionedProductsInput { - s.SortBy = &v +// SetShareErrors sets the ShareErrors field's value. +func (s *ShareDetails) SetShareErrors(v []*ShareError) *ShareDetails { + s.ShareErrors = v return s } -// SetSortOrder sets the SortOrder field's value. -func (s *SearchProvisionedProductsInput) SetSortOrder(v string) *SearchProvisionedProductsInput { - s.SortOrder = &v +// SetSuccessfulShares sets the SuccessfulShares field's value. +func (s *ShareDetails) SetSuccessfulShares(v []*string) *ShareDetails { + s.SuccessfulShares = v return s } -type SearchProvisionedProductsOutput struct { +// Errors that occurred during the portfolio share operation. +type ShareError struct { _ struct{} `type:"structure"` - // The page token to use to retrieve the next set of results. If there are no - // additional results, this value is null. - NextPageToken *string `type:"string"` + // List of accounts impacted by the error. + Accounts []*string `type:"list"` - // Information about the provisioned products. - ProvisionedProducts []*ProvisionedProductAttribute `type:"list"` + // Error type that happened when processing the operation. + Error *string `type:"string"` - // The number of provisioned products found. - TotalResultsCount *int64 `type:"integer"` + // Information about the error. + Message *string `type:"string"` } // String returns the string representation -func (s SearchProvisionedProductsOutput) String() string { +func (s ShareError) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SearchProvisionedProductsOutput) GoString() string { +func (s ShareError) GoString() string { return s.String() } -// SetNextPageToken sets the NextPageToken field's value. 
-func (s *SearchProvisionedProductsOutput) SetNextPageToken(v string) *SearchProvisionedProductsOutput { - s.NextPageToken = &v +// SetAccounts sets the Accounts field's value. +func (s *ShareError) SetAccounts(v []*string) *ShareError { + s.Accounts = v return s } -// SetProvisionedProducts sets the ProvisionedProducts field's value. -func (s *SearchProvisionedProductsOutput) SetProvisionedProducts(v []*ProvisionedProductAttribute) *SearchProvisionedProductsOutput { - s.ProvisionedProducts = v +// SetError sets the Error field's value. +func (s *ShareError) SetError(v string) *ShareError { + s.Error = &v return s } -// SetTotalResultsCount sets the TotalResultsCount field's value. -func (s *SearchProvisionedProductsOutput) SetTotalResultsCount(v int64) *SearchProvisionedProductsOutput { - s.TotalResultsCount = &v +// SetMessage sets the Message field's value. +func (s *ShareError) SetMessage(v string) *ShareError { + s.Message = &v return s } @@ -14088,7 +17905,7 @@ type UpdateProvisionedProductInput struct { // path, and required if the product has more than one path. PathId *string `min:"1" type:"string"` - // The identifier of the provisioned product. + // The identifier of the product. ProductId *string `min:"1" type:"string"` // The identifier of the provisioned product. You cannot specify both ProvisionedProductName @@ -14105,6 +17922,10 @@ type UpdateProvisionedProductInput struct { // The new parameters. ProvisioningParameters []*UpdateProvisioningParameter `type:"list"` + // An object that contains information about the provisioning preferences for + // a stack set. + ProvisioningPreferences *UpdateProvisioningPreferences `type:"structure"` + // The idempotency token that uniquely identifies the provisioning update request. // // UpdateToken is a required field @@ -14155,6 +17976,11 @@ func (s *UpdateProvisionedProductInput) Validate() error { } } } + if s.ProvisioningPreferences != nil { + if err := s.ProvisioningPreferences.Validate(); err != nil { + invalidParams.AddNested("ProvisioningPreferences", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -14204,6 +18030,12 @@ func (s *UpdateProvisionedProductInput) SetProvisioningParameters(v []*UpdatePro return s } +// SetProvisioningPreferences sets the ProvisioningPreferences field's value. +func (s *UpdateProvisionedProductInput) SetProvisioningPreferences(v *UpdateProvisioningPreferences) *UpdateProvisionedProductInput { + s.ProvisioningPreferences = v + return s +} + // SetUpdateToken sets the UpdateToken field's value. func (s *UpdateProvisionedProductInput) SetUpdateToken(v string) *UpdateProvisionedProductInput { s.UpdateToken = &v @@ -14429,6 +18261,293 @@ func (s *UpdateProvisioningParameter) SetValue(v string) *UpdateProvisioningPara return s } +// The user-defined preferences that will be applied when updating a provisioned +// product. Not all preferences are applicable to all provisioned product types. +type UpdateProvisioningPreferences struct { + _ struct{} `type:"structure"` + + // One or more AWS accounts that will have access to the provisioned product. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // The AWS accounts specified should be within the list of accounts in the STACKSET + // constraint. To get the list of accounts in the STACKSET constraint, use the + // DescribeProvisioningParameters operation. + // + // If no values are specified, the default value is all accounts from the STACKSET + // constraint. 
+ StackSetAccounts []*string `type:"list"` + + // The number of accounts, per region, for which this operation can fail before + // AWS Service Catalog stops the operation in that region. If the operation + // is stopped in a region, AWS Service Catalog doesn't attempt the operation + // in any subsequent regions. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // Conditional: You must specify either StackSetFailureToleranceCount or StackSetFailureTolerancePercentage, + // but not both. + // + // The default value is 0 if no value is specified. + StackSetFailureToleranceCount *int64 `type:"integer"` + + // The percentage of accounts, per region, for which this stack operation can + // fail before AWS Service Catalog stops the operation in that region. If the + // operation is stopped in a region, AWS Service Catalog doesn't attempt the + // operation in any subsequent regions. + // + // When calculating the number of accounts based on the specified percentage, + // AWS Service Catalog rounds down to the next whole number. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // Conditional: You must specify either StackSetFailureToleranceCount or StackSetFailureTolerancePercentage, + // but not both. + StackSetFailureTolerancePercentage *int64 `type:"integer"` + + // The maximum number of accounts in which to perform this operation at one + // time. This is dependent on the value of StackSetFailureToleranceCount. StackSetMaxConcurrentCount + // is at most one more than the StackSetFailureToleranceCount. + // + // Note that this setting lets you specify the maximum for operations. For large + // deployments, under certain circumstances the actual number of accounts acted + // upon concurrently may be lower due to service throttling. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // Conditional: You must specify either StackSetMaxConcurrentCount or StackSetMaxConcurrentPercentage, + // but not both. + StackSetMaxConcurrencyCount *int64 `min:"1" type:"integer"` + + // The maximum percentage of accounts in which to perform this operation at + // one time. + // + // When calculating the number of accounts based on the specified percentage, + // AWS Service Catalog rounds down to the next whole number. This is true except + // in cases where rounding down would result is zero. In this case, AWS Service + // Catalog sets the number as 1 instead. + // + // Note that this setting lets you specify the maximum for operations. For large + // deployments, under certain circumstances the actual number of accounts acted + // upon concurrently may be lower due to service throttling. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // Conditional: You must specify either StackSetMaxConcurrentCount or StackSetMaxConcurrentPercentage, + // but not both. + StackSetMaxConcurrencyPercentage *int64 `min:"1" type:"integer"` + + // Determines what action AWS Service Catalog performs to a stack set or a stack + // instance represented by the provisioned product. The default value is UPDATE + // if nothing is specified. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // CREATECreates a new stack instance in the stack set represented by the provisioned + // product. In this case, only new stack instances are created based on accounts + // and regions; if new ProductId or ProvisioningArtifactID are passed, they + // will be ignored. 
+ // + // UPDATEUpdates the stack set represented by the provisioned product and also + // its stack instances. + // + // DELETEDeletes a stack instance in the stack set represented by the provisioned + // product. + StackSetOperationType *string `type:"string" enum:"StackSetOperationType"` + + // One or more AWS Regions where the provisioned product will be available. + // + // Applicable only to a CFN_STACKSET provisioned product type. + // + // The specified regions should be within the list of regions from the STACKSET + // constraint. To get the list of regions in the STACKSET constraint, use the + // DescribeProvisioningParameters operation. + // + // If no values are specified, the default value is all regions from the STACKSET + // constraint. + StackSetRegions []*string `type:"list"` +} + +// String returns the string representation +func (s UpdateProvisioningPreferences) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateProvisioningPreferences) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateProvisioningPreferences) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateProvisioningPreferences"} + if s.StackSetMaxConcurrencyCount != nil && *s.StackSetMaxConcurrencyCount < 1 { + invalidParams.Add(request.NewErrParamMinValue("StackSetMaxConcurrencyCount", 1)) + } + if s.StackSetMaxConcurrencyPercentage != nil && *s.StackSetMaxConcurrencyPercentage < 1 { + invalidParams.Add(request.NewErrParamMinValue("StackSetMaxConcurrencyPercentage", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStackSetAccounts sets the StackSetAccounts field's value. +func (s *UpdateProvisioningPreferences) SetStackSetAccounts(v []*string) *UpdateProvisioningPreferences { + s.StackSetAccounts = v + return s +} + +// SetStackSetFailureToleranceCount sets the StackSetFailureToleranceCount field's value. +func (s *UpdateProvisioningPreferences) SetStackSetFailureToleranceCount(v int64) *UpdateProvisioningPreferences { + s.StackSetFailureToleranceCount = &v + return s +} + +// SetStackSetFailureTolerancePercentage sets the StackSetFailureTolerancePercentage field's value. +func (s *UpdateProvisioningPreferences) SetStackSetFailureTolerancePercentage(v int64) *UpdateProvisioningPreferences { + s.StackSetFailureTolerancePercentage = &v + return s +} + +// SetStackSetMaxConcurrencyCount sets the StackSetMaxConcurrencyCount field's value. +func (s *UpdateProvisioningPreferences) SetStackSetMaxConcurrencyCount(v int64) *UpdateProvisioningPreferences { + s.StackSetMaxConcurrencyCount = &v + return s +} + +// SetStackSetMaxConcurrencyPercentage sets the StackSetMaxConcurrencyPercentage field's value. +func (s *UpdateProvisioningPreferences) SetStackSetMaxConcurrencyPercentage(v int64) *UpdateProvisioningPreferences { + s.StackSetMaxConcurrencyPercentage = &v + return s +} + +// SetStackSetOperationType sets the StackSetOperationType field's value. +func (s *UpdateProvisioningPreferences) SetStackSetOperationType(v string) *UpdateProvisioningPreferences { + s.StackSetOperationType = &v + return s +} + +// SetStackSetRegions sets the StackSetRegions field's value. +func (s *UpdateProvisioningPreferences) SetStackSetRegions(v []*string) *UpdateProvisioningPreferences { + s.StackSetRegions = v + return s +} + +type UpdateServiceActionInput struct { + _ struct{} `type:"structure"` + + // The language code. 
+ // + // * en - English (default) + // + // * jp - Japanese + // + // * zh - Chinese + AcceptLanguage *string `type:"string"` + + // A map that defines the self-service action. + Definition map[string]*string `min:"1" type:"map"` + + // The self-service action description. + Description *string `type:"string"` + + // The self-service action identifier. + // + // Id is a required field + Id *string `min:"1" type:"string" required:"true"` + + // The self-service action name. + Name *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s UpdateServiceActionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateServiceActionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateServiceActionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateServiceActionInput"} + if s.Definition != nil && len(s.Definition) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Definition", 1)) + } + if s.Id == nil { + invalidParams.Add(request.NewErrParamRequired("Id")) + } + if s.Id != nil && len(*s.Id) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Id", 1)) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptLanguage sets the AcceptLanguage field's value. +func (s *UpdateServiceActionInput) SetAcceptLanguage(v string) *UpdateServiceActionInput { + s.AcceptLanguage = &v + return s +} + +// SetDefinition sets the Definition field's value. +func (s *UpdateServiceActionInput) SetDefinition(v map[string]*string) *UpdateServiceActionInput { + s.Definition = v + return s +} + +// SetDescription sets the Description field's value. +func (s *UpdateServiceActionInput) SetDescription(v string) *UpdateServiceActionInput { + s.Description = &v + return s +} + +// SetId sets the Id field's value. +func (s *UpdateServiceActionInput) SetId(v string) *UpdateServiceActionInput { + s.Id = &v + return s +} + +// SetName sets the Name field's value. +func (s *UpdateServiceActionInput) SetName(v string) *UpdateServiceActionInput { + s.Name = &v + return s +} + +type UpdateServiceActionOutput struct { + _ struct{} `type:"structure"` + + // Detailed information about the self-service action. + ServiceActionDetail *ServiceActionDetail `type:"structure"` +} + +// String returns the string representation +func (s UpdateServiceActionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateServiceActionOutput) GoString() string { + return s.String() +} + +// SetServiceActionDetail sets the ServiceActionDetail field's value. 
+func (s *UpdateServiceActionOutput) SetServiceActionDetail(v *ServiceActionDetail) *UpdateServiceActionOutput { + s.ServiceActionDetail = v + return s +} + type UpdateTagOptionInput struct { _ struct{} `type:"structure"` @@ -14558,6 +18677,17 @@ const ( AccessLevelFilterKeyUser = "User" ) +const ( + // AccessStatusEnabled is a AccessStatus enum value + AccessStatusEnabled = "ENABLED" + + // AccessStatusUnderChange is a AccessStatus enum value + AccessStatusUnderChange = "UNDER_CHANGE" + + // AccessStatusDisabled is a AccessStatus enum value + AccessStatusDisabled = "DISABLED" +) + const ( // ChangeActionAdd is a ChangeAction enum value ChangeActionAdd = "ADD" @@ -14593,6 +18723,28 @@ const ( EvaluationTypeDynamic = "DYNAMIC" ) +const ( + // OrganizationNodeTypeOrganization is a OrganizationNodeType enum value + OrganizationNodeTypeOrganization = "ORGANIZATION" + + // OrganizationNodeTypeOrganizationalUnit is a OrganizationNodeType enum value + OrganizationNodeTypeOrganizationalUnit = "ORGANIZATIONAL_UNIT" + + // OrganizationNodeTypeAccount is a OrganizationNodeType enum value + OrganizationNodeTypeAccount = "ACCOUNT" +) + +const ( + // PortfolioShareTypeImported is a PortfolioShareType enum value + PortfolioShareTypeImported = "IMPORTED" + + // PortfolioShareTypeAwsServicecatalog is a PortfolioShareType enum value + PortfolioShareTypeAwsServicecatalog = "AWS_SERVICECATALOG" + + // PortfolioShareTypeAwsOrganizations is a PortfolioShareType enum value + PortfolioShareTypeAwsOrganizations = "AWS_ORGANIZATIONS" +) + const ( // PrincipalTypeIam is a PrincipalType enum value PrincipalTypeIam = "IAM" @@ -14758,6 +18910,59 @@ const ( ResourceAttributeTags = "TAGS" ) +const ( + // ServiceActionAssociationErrorCodeDuplicateResource is a ServiceActionAssociationErrorCode enum value + ServiceActionAssociationErrorCodeDuplicateResource = "DUPLICATE_RESOURCE" + + // ServiceActionAssociationErrorCodeInternalFailure is a ServiceActionAssociationErrorCode enum value + ServiceActionAssociationErrorCodeInternalFailure = "INTERNAL_FAILURE" + + // ServiceActionAssociationErrorCodeLimitExceeded is a ServiceActionAssociationErrorCode enum value + ServiceActionAssociationErrorCodeLimitExceeded = "LIMIT_EXCEEDED" + + // ServiceActionAssociationErrorCodeResourceNotFound is a ServiceActionAssociationErrorCode enum value + ServiceActionAssociationErrorCodeResourceNotFound = "RESOURCE_NOT_FOUND" + + // ServiceActionAssociationErrorCodeThrottling is a ServiceActionAssociationErrorCode enum value + ServiceActionAssociationErrorCodeThrottling = "THROTTLING" +) + +const ( + // ServiceActionDefinitionKeyName is a ServiceActionDefinitionKey enum value + ServiceActionDefinitionKeyName = "Name" + + // ServiceActionDefinitionKeyVersion is a ServiceActionDefinitionKey enum value + ServiceActionDefinitionKeyVersion = "Version" + + // ServiceActionDefinitionKeyAssumeRole is a ServiceActionDefinitionKey enum value + ServiceActionDefinitionKeyAssumeRole = "AssumeRole" + + // ServiceActionDefinitionKeyParameters is a ServiceActionDefinitionKey enum value + ServiceActionDefinitionKeyParameters = "Parameters" +) + +const ( + // ServiceActionDefinitionTypeSsmAutomation is a ServiceActionDefinitionType enum value + ServiceActionDefinitionTypeSsmAutomation = "SSM_AUTOMATION" +) + +const ( + // ShareStatusNotStarted is a ShareStatus enum value + ShareStatusNotStarted = "NOT_STARTED" + + // ShareStatusInProgress is a ShareStatus enum value + ShareStatusInProgress = "IN_PROGRESS" + + // ShareStatusCompleted is a ShareStatus enum value + 
ShareStatusCompleted = "COMPLETED" + + // ShareStatusCompletedWithErrors is a ShareStatus enum value + ShareStatusCompletedWithErrors = "COMPLETED_WITH_ERRORS" + + // ShareStatusError is a ShareStatus enum value + ShareStatusError = "ERROR" +) + const ( // SortOrderAscending is a SortOrder enum value SortOrderAscending = "ASCENDING" @@ -14766,6 +18971,17 @@ const ( SortOrderDescending = "DESCENDING" ) +const ( + // StackSetOperationTypeCreate is a StackSetOperationType enum value + StackSetOperationTypeCreate = "CREATE" + + // StackSetOperationTypeUpdate is a StackSetOperationType enum value + StackSetOperationTypeUpdate = "UPDATE" + + // StackSetOperationTypeDelete is a StackSetOperationType enum value + StackSetOperationTypeDelete = "DELETE" +) + const ( // StatusAvailable is a Status enum value StatusAvailable = "AVAILABLE" diff --git a/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/errors.go b/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/errors.go index 4142808247a..357d9e52f5d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/errors.go @@ -32,6 +32,12 @@ const ( // operation. ErrCodeLimitExceededException = "LimitExceededException" + // ErrCodeOperationNotSupportedException for service response error code + // "OperationNotSupportedException". + // + // The operation is not supported. + ErrCodeOperationNotSupportedException = "OperationNotSupportedException" + // ErrCodeResourceInUseException for service response error code // "ResourceInUseException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/service.go b/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/service.go index 05ced0ca302..f15a5b8024d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/servicecatalog/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "servicecatalog" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "servicecatalog" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "Service Catalog" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ServiceCatalog client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/servicediscovery/api.go b/vendor/github.com/aws/aws-sdk-go/service/servicediscovery/api.go index 747378cf36a..c9d272a6615 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/servicediscovery/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/servicediscovery/api.go @@ -17,8 +17,8 @@ const opCreatePrivateDnsNamespace = "CreatePrivateDnsNamespace" // CreatePrivateDnsNamespaceRequest generates a "aws/request.Request" representing the // client's request for the CreatePrivateDnsNamespace operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -113,8 +113,8 @@ const opCreatePublicDnsNamespace = "CreatePublicDnsNamespace" // CreatePublicDnsNamespaceRequest generates a "aws/request.Request" representing the // client's request for the CreatePublicDnsNamespace operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -209,8 +209,8 @@ const opCreateService = "CreateService" // CreateServiceRequest generates a "aws/request.Request" representing the // client's request for the CreateService operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -313,8 +313,8 @@ const opDeleteNamespace = "DeleteNamespace" // DeleteNamespaceRequest generates a "aws/request.Request" representing the // client's request for the DeleteNamespace operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -404,8 +404,8 @@ const opDeleteService = "DeleteService" // DeleteServiceRequest generates a "aws/request.Request" representing the // client's request for the DeleteService operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -492,8 +492,8 @@ const opDeregisterInstance = "DeregisterInstance" // DeregisterInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeregisterInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -587,8 +587,8 @@ const opGetInstance = "GetInstance" // GetInstanceRequest generates a "aws/request.Request" representing the // client's request for the GetInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -674,8 +674,8 @@ const opGetInstancesHealthStatus = "GetInstancesHealthStatus" // GetInstancesHealthStatusRequest generates a "aws/request.Request" representing the // client's request for the GetInstancesHealthStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -821,8 +821,8 @@ const opGetNamespace = "GetNamespace" // GetNamespaceRequest generates a "aws/request.Request" representing the // client's request for the GetNamespace operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -904,8 +904,8 @@ const opGetOperation = "GetOperation" // GetOperationRequest generates a "aws/request.Request" representing the // client's request for the GetOperation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -986,8 +986,8 @@ const opGetService = "GetService" // GetServiceRequest generates a "aws/request.Request" representing the // client's request for the GetService operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1069,8 +1069,8 @@ const opListInstances = "ListInstances" // ListInstancesRequest generates a "aws/request.Request" representing the // client's request for the ListInstances operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1209,8 +1209,8 @@ const opListNamespaces = "ListNamespaces" // ListNamespacesRequest generates a "aws/request.Request" representing the // client's request for the ListNamespaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1346,8 +1346,8 @@ const opListOperations = "ListOperations" // ListOperationsRequest generates a "aws/request.Request" representing the // client's request for the ListOperations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1482,8 +1482,8 @@ const opListServices = "ListServices" // ListServicesRequest generates a "aws/request.Request" representing the // client's request for the ListServices operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1619,8 +1619,8 @@ const opRegisterInstance = "RegisterInstance" // RegisterInstanceRequest generates a "aws/request.Request" representing the // client's request for the RegisterInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1745,8 +1745,8 @@ const opUpdateInstanceCustomHealthStatus = "UpdateInstanceCustomHealthStatus" // UpdateInstanceCustomHealthStatusRequest generates a "aws/request.Request" representing the // client's request for the UpdateInstanceCustomHealthStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1834,8 +1834,8 @@ const opUpdateService = "UpdateService" // UpdateServiceRequest generates a "aws/request.Request" representing the // client's request for the UpdateService operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3963,7 +3963,7 @@ type Namespace struct { // Universal Time (UTC). The value of CreateDate is accurate to milliseconds. // For example, the value 1516925490.087 represents Friday, January 26, 2018 // 12:11:30.087 AM. 
- CreateDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreateDate *time.Time `type:"timestamp"` // A unique string that identifies the request and that allows failed requests // to be retried without the risk of executing an operation twice. @@ -4216,7 +4216,7 @@ type Operation struct { // and Coordinated Universal Time (UTC). The value of CreateDate is accurate // to milliseconds. For example, the value 1516925490.087 represents Friday, // January 26, 2018 12:11:30.087 AM. - CreateDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreateDate *time.Time `type:"timestamp"` // The code associated with ErrorMessage. Values for ErrorCode include the following: // @@ -4269,7 +4269,7 @@ type Operation struct { // in Unix date/time format and Coordinated Universal Time (UTC). The value // of UpdateDate is accurate to milliseconds. For example, the value 1516925490.087 // represents Friday, January 26, 2018 12:11:30.087 AM. - UpdateDate *time.Time `type:"timestamp" timestampFormat:"unix"` + UpdateDate *time.Time `type:"timestamp"` } // String returns the string representation @@ -4670,7 +4670,7 @@ type Service struct { // Universal Time (UTC). The value of CreateDate is accurate to milliseconds. // For example, the value 1516925490.087 represents Friday, January 26, 2018 // 12:11:30.087 AM. - CreateDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreateDate *time.Time `type:"timestamp"` // A unique string that identifies the request and that allows failed requests // to be retried without the risk of executing the operation twice. CreatorRequestId diff --git a/vendor/github.com/aws/aws-sdk-go/service/servicediscovery/service.go b/vendor/github.com/aws/aws-sdk-go/service/servicediscovery/service.go index 3a0c82b22a6..b5c177a62d4 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/servicediscovery/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/servicediscovery/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "servicediscovery" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "servicediscovery" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "ServiceDiscovery" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the ServiceDiscovery client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/ses/api.go b/vendor/github.com/aws/aws-sdk-go/service/ses/api.go index 5325ba8fd1b..70f02210530 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ses/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ses/api.go @@ -17,8 +17,8 @@ const opCloneReceiptRuleSet = "CloneReceiptRuleSet" // CloneReceiptRuleSetRequest generates a "aws/request.Request" representing the // client's request for the CloneReceiptRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -110,8 +110,8 @@ const opCreateConfigurationSet = "CreateConfigurationSet" // CreateConfigurationSetRequest generates a "aws/request.Request" representing the // client's request for the CreateConfigurationSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -203,8 +203,8 @@ const opCreateConfigurationSetEventDestination = "CreateConfigurationSetEventDes // CreateConfigurationSetEventDestinationRequest generates a "aws/request.Request" representing the // client's request for the CreateConfigurationSetEventDestination operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -246,7 +246,7 @@ func (c *SES) CreateConfigurationSetEventDestinationRequest(input *CreateConfigu // Creates a configuration set event destination. // // When you create or update an event destination, you must provide one, and -// only one, destination. The destination can be Amazon CloudWatch, Amazon Kinesis +// only one, destination. The destination can be CloudWatch, Amazon Kinesis // Firehose, or Amazon Simple Notification Service (Amazon SNS). // // An event destination is the AWS service to which Amazon SES publishes the @@ -312,8 +312,8 @@ const opCreateConfigurationSetTrackingOptions = "CreateConfigurationSetTrackingO // CreateConfigurationSetTrackingOptionsRequest generates a "aws/request.Request" representing the // client's request for the CreateConfigurationSetTrackingOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -357,9 +357,8 @@ func (c *SES) CreateConfigurationSetTrackingOptionsRequest(input *CreateConfigur // // By default, images and links used for tracking open and click events are // hosted on domains operated by Amazon SES. You can configure a subdomain of -// your own to handle these events. For information about using configuration -// sets, see Configuring Custom Domains to Handle Open and Click Tracking (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) -// in the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html). +// your own to handle these events. For information about using custom domains, +// see the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html). // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -410,8 +409,8 @@ const opCreateCustomVerificationEmailTemplate = "CreateCustomVerificationEmailTe // CreateCustomVerificationEmailTemplateRequest generates a "aws/request.Request" representing the // client's request for the CreateCustomVerificationEmailTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -455,7 +454,7 @@ func (c *SES) CreateCustomVerificationEmailTemplateRequest(input *CreateCustomVe // Creates a new custom verification email template. // // For more information about custom verification email templates, see Using -// Custom Verification Email Templates (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) +// Custom Verification Email Templates (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) // in the Amazon SES Developer Guide. // // You can execute this operation no more than once per second. @@ -510,8 +509,8 @@ const opCreateReceiptFilter = "CreateReceiptFilter" // CreateReceiptFilterRequest generates a "aws/request.Request" representing the // client's request for the CreateReceiptFilter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -598,8 +597,8 @@ const opCreateReceiptRule = "CreateReceiptRule" // CreateReceiptRuleRequest generates a "aws/request.Request" representing the // client's request for the CreateReceiptRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -709,8 +708,8 @@ const opCreateReceiptRuleSet = "CreateReceiptRuleSet" // CreateReceiptRuleSetRequest generates a "aws/request.Request" representing the // client's request for the CreateReceiptRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -797,8 +796,8 @@ const opCreateTemplate = "CreateTemplate" // CreateTemplateRequest generates a "aws/request.Request" representing the // client's request for the CreateTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -855,8 +854,8 @@ func (c *SES) CreateTemplateRequest(input *CreateTemplateInput) (req *request.Re // Indicates that a resource could not be created because of a naming conflict. // // * ErrCodeInvalidTemplateException "InvalidTemplate" -// Indicates that a template could not be created because it contained invalid -// JSON. +// Indicates that the template that you specified could not be rendered. This +// issue may occur when a template refers to a partial that does not exist. // // * ErrCodeLimitExceededException "LimitExceeded" // Indicates that a resource could not be created because of service limits. @@ -888,8 +887,8 @@ const opDeleteConfigurationSet = "DeleteConfigurationSet" // DeleteConfigurationSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteConfigurationSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -971,8 +970,8 @@ const opDeleteConfigurationSetEventDestination = "DeleteConfigurationSetEventDes // DeleteConfigurationSetEventDestinationRequest generates a "aws/request.Request" representing the // client's request for the DeleteConfigurationSetEventDestination operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1058,8 +1057,8 @@ const opDeleteConfigurationSetTrackingOptions = "DeleteConfigurationSetTrackingO // DeleteConfigurationSetTrackingOptionsRequest generates a "aws/request.Request" representing the // client's request for the DeleteConfigurationSetTrackingOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1103,9 +1102,8 @@ func (c *SES) DeleteConfigurationSetTrackingOptionsRequest(input *DeleteConfigur // // By default, images and links used for tracking open and click events are // hosted on domains operated by Amazon SES. You can configure a subdomain of -// your own to handle these events. For information about using configuration -// sets, see Configuring Custom Domains to Handle Open and Click Tracking (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) -// in the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html). +// your own to handle these events. 
For information about using custom domains, +// see the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html). // // Deleting this kind of association will result in emails sent using the specified // configuration set to capture open and click events using the standard, Amazon @@ -1151,8 +1149,8 @@ const opDeleteCustomVerificationEmailTemplate = "DeleteCustomVerificationEmailTe // DeleteCustomVerificationEmailTemplateRequest generates a "aws/request.Request" representing the // client's request for the DeleteCustomVerificationEmailTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1196,7 +1194,7 @@ func (c *SES) DeleteCustomVerificationEmailTemplateRequest(input *DeleteCustomVe // Deletes an existing custom verification email template. // // For more information about custom verification email templates, see Using -// Custom Verification Email Templates (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) +// Custom Verification Email Templates (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) // in the Amazon SES Developer Guide. // // You can execute this operation no more than once per second. @@ -1233,8 +1231,8 @@ const opDeleteIdentity = "DeleteIdentity" // DeleteIdentityRequest generates a "aws/request.Request" representing the // client's request for the DeleteIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1310,8 +1308,8 @@ const opDeleteIdentityPolicy = "DeleteIdentityPolicy" // DeleteIdentityPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeleteIdentityPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1395,8 +1393,8 @@ const opDeleteReceiptFilter = "DeleteReceiptFilter" // DeleteReceiptFilterRequest generates a "aws/request.Request" representing the // client's request for the DeleteReceiptFilter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1474,8 +1472,8 @@ const opDeleteReceiptRule = "DeleteReceiptRule" // DeleteReceiptRuleRequest generates a "aws/request.Request" representing the // client's request for the DeleteReceiptRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1558,8 +1556,8 @@ const opDeleteReceiptRuleSet = "DeleteReceiptRuleSet" // DeleteReceiptRuleSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteReceiptRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1644,8 +1642,8 @@ const opDeleteTemplate = "DeleteTemplate" // DeleteTemplateRequest generates a "aws/request.Request" representing the // client's request for the DeleteTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1720,8 +1718,8 @@ const opDeleteVerifiedEmailAddress = "DeleteVerifiedEmailAddress" // DeleteVerifiedEmailAddressRequest generates a "aws/request.Request" representing the // client's request for the DeleteVerifiedEmailAddress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1797,8 +1795,8 @@ const opDescribeActiveReceiptRuleSet = "DescribeActiveReceiptRuleSet" // DescribeActiveReceiptRuleSetRequest generates a "aws/request.Request" representing the // client's request for the DescribeActiveReceiptRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1877,8 +1875,8 @@ const opDescribeConfigurationSet = "DescribeConfigurationSet" // DescribeConfigurationSetRequest generates a "aws/request.Request" representing the // client's request for the DescribeConfigurationSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1959,8 +1957,8 @@ const opDescribeReceiptRule = "DescribeReceiptRule" // DescribeReceiptRuleRequest generates a "aws/request.Request" representing the // client's request for the DescribeReceiptRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2046,8 +2044,8 @@ const opDescribeReceiptRuleSet = "DescribeReceiptRuleSet" // DescribeReceiptRuleSetRequest generates a "aws/request.Request" representing the // client's request for the DescribeReceiptRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2130,8 +2128,8 @@ const opGetAccountSendingEnabled = "GetAccountSendingEnabled" // GetAccountSendingEnabledRequest generates a "aws/request.Request" representing the // client's request for the GetAccountSendingEnabled operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2170,7 +2168,8 @@ func (c *SES) GetAccountSendingEnabledRequest(input *GetAccountSendingEnabledInp // GetAccountSendingEnabled API operation for Amazon Simple Email Service. // -// Returns the email sending status of the Amazon SES account. +// Returns the email sending status of the Amazon SES account for the current +// region. // // You can execute this operation no more than once per second. // @@ -2206,8 +2205,8 @@ const opGetCustomVerificationEmailTemplate = "GetCustomVerificationEmailTemplate // GetCustomVerificationEmailTemplateRequest generates a "aws/request.Request" representing the // client's request for the GetCustomVerificationEmailTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2250,7 +2249,7 @@ func (c *SES) GetCustomVerificationEmailTemplateRequest(input *GetCustomVerifica // specify. 
// // For more information about custom verification email templates, see Using -// Custom Verification Email Templates (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) +// Custom Verification Email Templates (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) // in the Amazon SES Developer Guide. // // You can execute this operation no more than once per second. @@ -2293,8 +2292,8 @@ const opGetIdentityDkimAttributes = "GetIdentityDkimAttributes" // GetIdentityDkimAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetIdentityDkimAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2388,8 +2387,8 @@ const opGetIdentityMailFromDomainAttributes = "GetIdentityMailFromDomainAttribut // GetIdentityMailFromDomainAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetIdentityMailFromDomainAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2466,8 +2465,8 @@ const opGetIdentityNotificationAttributes = "GetIdentityNotificationAttributes" // GetIdentityNotificationAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetIdentityNotificationAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2547,8 +2546,8 @@ const opGetIdentityPolicies = "GetIdentityPolicies" // GetIdentityPoliciesRequest generates a "aws/request.Request" representing the // client's request for the GetIdentityPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2633,8 +2632,8 @@ const opGetIdentityVerificationAttributes = "GetIdentityVerificationAttributes" // GetIdentityVerificationAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetIdentityVerificationAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2728,8 +2727,8 @@ const opGetSendQuota = "GetSendQuota" // GetSendQuotaRequest generates a "aws/request.Request" representing the // client's request for the GetSendQuota operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2804,8 +2803,8 @@ const opGetSendStatistics = "GetSendStatistics" // GetSendStatisticsRequest generates a "aws/request.Request" representing the // client's request for the GetSendStatistics operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2844,7 +2843,7 @@ func (c *SES) GetSendStatisticsRequest(input *GetSendStatisticsInput) (req *requ // GetSendStatistics API operation for Amazon Simple Email Service. // -// Provides sending statistics for the Amazon SES account. The result is a list +// Provides sending statistics for the current AWS Region. The result is a list // of data points, representing the last two weeks of sending activity. Each // data point in the list contains statistics for a 15-minute period of time. // @@ -2882,8 +2881,8 @@ const opGetTemplate = "GetTemplate" // GetTemplateRequest generates a "aws/request.Request" representing the // client's request for the GetTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2965,8 +2964,8 @@ const opListConfigurationSets = "ListConfigurationSets" // ListConfigurationSetsRequest generates a "aws/request.Request" representing the // client's request for the ListConfigurationSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3006,8 +3005,8 @@ func (c *SES) ListConfigurationSetsRequest(input *ListConfigurationSetsInput) (r // ListConfigurationSets API operation for Amazon Simple Email Service. // // Provides a list of the configuration sets associated with your Amazon SES -// account. For information about using configuration sets, see Monitoring Your -// Amazon SES Sending Activity (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/monitor-sending-activity.html) +// account in the current AWS Region. 
For information about using configuration +// sets, see Monitoring Your Amazon SES Sending Activity (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/monitor-sending-activity.html) // in the Amazon SES Developer Guide. // // You can execute this operation no more than once per second. This operation @@ -3049,8 +3048,8 @@ const opListCustomVerificationEmailTemplates = "ListCustomVerificationEmailTempl // ListCustomVerificationEmailTemplatesRequest generates a "aws/request.Request" representing the // client's request for the ListCustomVerificationEmailTemplates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3095,10 +3094,11 @@ func (c *SES) ListCustomVerificationEmailTemplatesRequest(input *ListCustomVerif // ListCustomVerificationEmailTemplates API operation for Amazon Simple Email Service. // -// Lists the existing custom verification email templates for your account. +// Lists the existing custom verification email templates for your account in +// the current AWS Region. // // For more information about custom verification email templates, see Using -// Custom Verification Email Templates (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) +// Custom Verification Email Templates (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) // in the Amazon SES Developer Guide. // // You can execute this operation no more than once per second. @@ -3185,8 +3185,8 @@ const opListIdentities = "ListIdentities" // ListIdentitiesRequest generates a "aws/request.Request" representing the // client's request for the ListIdentities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3232,7 +3232,8 @@ func (c *SES) ListIdentitiesRequest(input *ListIdentitiesInput) (req *request.Re // ListIdentities API operation for Amazon Simple Email Service. // // Returns a list containing all of the identities (email addresses and domains) -// for your AWS account, regardless of verification status. +// for your AWS account in the current AWS Region, regardless of verification +// status. // // You can execute this operation no more than once per second. // @@ -3318,8 +3319,8 @@ const opListIdentityPolicies = "ListIdentityPolicies" // ListIdentityPoliciesRequest generates a "aws/request.Request" representing the // client's request for the ListIdentityPolicies operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3403,8 +3404,8 @@ const opListReceiptFilters = "ListReceiptFilters" // ListReceiptFiltersRequest generates a "aws/request.Request" representing the // client's request for the ListReceiptFilters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3443,7 +3444,8 @@ func (c *SES) ListReceiptFiltersRequest(input *ListReceiptFiltersInput) (req *re // ListReceiptFilters API operation for Amazon Simple Email Service. // -// Lists the IP address filters associated with your AWS account. +// Lists the IP address filters associated with your AWS account in the current +// AWS Region. // // For information about managing IP address filters, see the Amazon SES Developer // Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-managing-ip-filters.html). @@ -3482,8 +3484,8 @@ const opListReceiptRuleSets = "ListReceiptRuleSets" // ListReceiptRuleSetsRequest generates a "aws/request.Request" representing the // client's request for the ListReceiptRuleSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3522,10 +3524,10 @@ func (c *SES) ListReceiptRuleSetsRequest(input *ListReceiptRuleSetsInput) (req * // ListReceiptRuleSets API operation for Amazon Simple Email Service. // -// Lists the receipt rule sets that exist under your AWS account. If there are -// additional receipt rule sets to be retrieved, you will receive a NextToken -// that you can provide to the next call to ListReceiptRuleSets to retrieve -// the additional entries. +// Lists the receipt rule sets that exist under your AWS account in the current +// AWS Region. If there are additional receipt rule sets to be retrieved, you +// will receive a NextToken that you can provide to the next call to ListReceiptRuleSets +// to retrieve the additional entries. // // For information about managing receipt rule sets, see the Amazon SES Developer // Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-managing-receipt-rule-sets.html). @@ -3564,8 +3566,8 @@ const opListTemplates = "ListTemplates" // ListTemplatesRequest generates a "aws/request.Request" representing the // client's request for the ListTemplates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3604,7 +3606,8 @@ func (c *SES) ListTemplatesRequest(input *ListTemplatesInput) (req *request.Requ // ListTemplates API operation for Amazon Simple Email Service. // -// Lists the email templates present in your Amazon SES account. 
+// Lists the email templates present in your Amazon SES account in the current +// AWS Region. // // You can execute this operation no more than once per second. // @@ -3640,8 +3643,8 @@ const opListVerifiedEmailAddresses = "ListVerifiedEmailAddresses" // ListVerifiedEmailAddressesRequest generates a "aws/request.Request" representing the // client's request for the ListVerifiedEmailAddresses operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3715,8 +3718,8 @@ const opPutIdentityPolicy = "PutIdentityPolicy" // PutIdentityPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutIdentityPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3805,8 +3808,8 @@ const opReorderReceiptRuleSet = "ReorderReceiptRuleSet" // ReorderReceiptRuleSetRequest generates a "aws/request.Request" representing the // client's request for the ReorderReceiptRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3896,8 +3899,8 @@ const opSendBounce = "SendBounce" // SendBounceRequest generates a "aws/request.Request" representing the // client's request for the SendBounce operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3986,8 +3989,8 @@ const opSendBulkTemplatedEmail = "SendBulkTemplatedEmail" // SendBulkTemplatedEmailRequest generates a "aws/request.Request" representing the // client's request for the SendBulkTemplatedEmail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4043,8 +4046,7 @@ func (c *SES) SendBulkTemplatedEmailRequest(input *SendBulkTemplatedEmailInput) // Email Addresses and Domains (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html) // in the Amazon SES Developer Guide. // -// * The total size of the message, including attachments, must be less than -// 10 MB. 
+// * The maximum message size is 10 MB. // // * Each Destination parameter must include at least one recipient email // address. The recipient address can be a To: address, a CC: address, or @@ -4053,6 +4055,15 @@ func (c *SES) SendBulkTemplatedEmailRequest(input *SendBulkTemplatedEmailInput) // message will be rejected, even if the message contains other recipients // that are valid. // +// * The message may not include more than 50 recipients, across the To:, +// CC: and BCC: fields. If you need to send an email message to a larger +// audience, you can divide your recipient list into groups of 50 or fewer, +// and then call the SendBulkTemplatedEmail operation several times to send +// the message to each group. +// +// * The number of destinations you can contact in a single call to the API +// may be limited by your account's maximum sending rate. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -4115,8 +4126,8 @@ const opSendCustomVerificationEmail = "SendCustomVerificationEmail" // SendCustomVerificationEmailRequest generates a "aws/request.Request" representing the // client's request for the SendCustomVerificationEmail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4156,12 +4167,13 @@ func (c *SES) SendCustomVerificationEmailRequest(input *SendCustomVerificationEm // SendCustomVerificationEmail API operation for Amazon Simple Email Service. // // Adds an email address to the list of identities for your Amazon SES account -// and attempts to verify it. As a result of executing this operation, a customized -// verification email is sent to the specified address. +// in the current AWS Region and attempts to verify it. As a result of executing +// this operation, a customized verification email is sent to the specified +// address. // // To use this operation, you must first create a custom verification email // template. For more information about creating and using custom verification -// email templates, see Using Custom Verification Email Templates (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) +// email templates, see Using Custom Verification Email Templates (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) // in the Amazon SES Developer Guide. // // You can execute this operation no more than once per second. @@ -4219,8 +4231,8 @@ const opSendEmail = "SendEmail" // SendEmailRequest generates a "aws/request.Request" representing the // client's request for the SendEmail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
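[Editorial aside: the new bullets above cap each SendBulkTemplatedEmail call at 50 recipients and suggest splitting larger audiences into groups. A minimal batching sketch under those assumptions — the sender address, template name, and JSON template data are placeholders, and the `*ses.SES` client is assumed to be configured as in the earlier sketch.]

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ses"
)

// sendToAll sends one templated message per recipient, at most 50 destinations
// per SendBulkTemplatedEmail call, as the comments above require.
func sendToAll(svc *ses.SES, recipients []string) error {
	var dests []*ses.BulkEmailDestination
	for _, addr := range recipients {
		dests = append(dests, &ses.BulkEmailDestination{
			Destination: &ses.Destination{ToAddresses: []*string{aws.String(addr)}},
			// Per-recipient template variables; placeholder JSON.
			ReplacementTemplateData: aws.String(`{"name":"friend"}`),
		})
	}

	const maxPerCall = 50
	for start := 0; start < len(dests); start += maxPerCall {
		end := start + maxPerCall
		if end > len(dests) {
			end = len(dests)
		}
		_, err := svc.SendBulkTemplatedEmail(&ses.SendBulkTemplatedEmailInput{
			Source:              aws.String("sender@example.com"), // must be verified
			Template:            aws.String("MyTemplate"),         // placeholder
			DefaultTemplateData: aws.String(`{"name":"friend"}`),
			Destinations:        dests[start:end],
		})
		if err != nil {
			return err
		}
		log.Printf("sent batch %d-%d", start, end-1)
	}
	return nil
}
```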
@@ -4273,8 +4285,7 @@ func (c *SES) SendEmailRequest(input *SendEmailInput) (req *request.Request, out // Email Addresses and Domains (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html) // in the Amazon SES Developer Guide. // -// * The total size of the message, including attachments, must be smaller -// than 10 MB. +// * The maximum message size is 10 MB. // // * The message must include at least one recipient email address. The recipient // address can be a To: address, a CC: address, or a BCC: address. If a recipient @@ -4353,8 +4364,8 @@ const opSendRawEmail = "SendRawEmail" // SendRawEmailRequest generates a "aws/request.Request" representing the // client's request for the SendRawEmail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4393,45 +4404,49 @@ func (c *SES) SendRawEmailRequest(input *SendRawEmailInput) (req *request.Reques // SendRawEmail API operation for Amazon Simple Email Service. // -// Composes an email message and immediately queues it for sending. When calling -// this operation, you may specify the message headers as well as the content. -// The SendRawEmail operation is particularly useful for sending multipart MIME -// emails (such as those that contain both a plain-text and an HTML version). +// Composes an email message and immediately queues it for sending. // -// In order to send email using the SendRawEmail operation, your message must -// meet the following requirements: +// This operation is more flexible than the SendEmail API operation. When you +// use the SendRawEmail operation, you can specify the headers of the message +// as well as its content. This flexibility is useful, for example, when you +// want to send a multipart MIME email (such a message that contains both a +// text and an HTML version). You can also use this operation to send messages +// that include attachments. // -// * The message must be sent from a verified email address or domain. If -// you attempt to send email using a non-verified address or domain, the -// operation will result in an "Email address not verified" error. +// The SendRawEmail operation has the following requirements: // -// * If your account is still in the Amazon SES sandbox, you may only send -// to verified addresses or domains, or to email addresses associated with -// the Amazon SES Mailbox Simulator. For more information, see Verifying -// Email Addresses and Domains (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html) -// in the Amazon SES Developer Guide. +// * You can only send email from verified email addresses or domains (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html). +// If you try to send email from an address that isn't verified, the operation +// results in an "Email address not verified" error. // -// * The total size of the message, including attachments, must be smaller -// than 10 MB. 
+// * If your account is still in the Amazon SES sandbox (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/request-production-access.html), +// you can only send email to other verified addresses in your account, or +// to addresses that are associated with the Amazon SES mailbox simulator +// (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/mailbox-simulator.html). // -// * The message must include at least one recipient email address. The recipient -// address can be a To: address, a CC: address, or a BCC: address. If a recipient -// email address is invalid (that is, it is not in the format UserName@[SubDomain.]Domain.TopLevelDomain), -// the entire message will be rejected, even if the message contains other -// recipients that are valid. +// * The maximum message size, including attachments, is 10 MB. // -// * The message may not include more than 50 recipients, across the To:, -// CC: and BCC: fields. If you need to send an email message to a larger -// audience, you can divide your recipient list into groups of 50 or fewer, -// and then call the SendRawEmail operation several times to send the message -// to each group. +// * Each message has to include at least one recipient address. A recipient +// address includes any address on the To:, CC:, or BCC: lines. // -// For every message that you send, the total number of recipients (including -// each recipient in the To:, CC: and BCC: fields) is counted against the maximum -// number of emails you can send in a 24-hour period (your sending quota). For -// more information about sending quotas in Amazon SES, see Managing Your Amazon -// SES Sending Limits (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/manage-sending-limits.html) -// in the Amazon SES Developer Guide. +// * If you send a single message to more than one recipient address, and +// one of the recipient addresses isn't in a valid format (that is, it's +// not in the format UserName@[SubDomain.]Domain.TopLevelDomain), Amazon +// SES rejects the entire message, even if the other addresses are valid. +// +// * Each message can include up to 50 recipient addresses across the To:, +// CC:, or BCC: lines. If you need to send a single message to more than +// 50 recipients, you have to split the list of recipient addresses into +// groups of less than 50 recipients, and send separate messages to each +// group. +// +// * Amazon SES allows you to specify 8-bit Content-Transfer-Encoding for +// MIME message parts. However, if Amazon SES has to modify the contents +// of your message (for example, if you use open and click tracking), 8-bit +// content isn't preserved. For this reason, we highly recommend that you +// encode all content that isn't 7-bit ASCII. For more information, see MIME +// Encoding (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-raw.html#send-email-mime-encoding) +// in the Amazon SES Developer Guide. // // Additionally, keep the following considerations in mind when using the SendRawEmail // operation: @@ -4465,6 +4480,13 @@ func (c *SES) SendRawEmailRequest(input *SendRawEmailInput) (req *request.Reques // see the Using Sending Authorization with Amazon SES (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/sending-authorization.html) // in the Amazon SES Developer Guide. // +// * For every message that you send, the total number of recipients (including +// each recipient in the To:, CC: and BCC: fields) is counted against the +// maximum number of emails you can send in a 24-hour period (your sending +// quota). 
For more information about sending quotas in Amazon SES, see Managing +// Your Amazon SES Sending Limits (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/manage-sending-limits.html) +// in the Amazon SES Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -4523,8 +4545,8 @@ const opSendTemplatedEmail = "SendTemplatedEmail" // SendTemplatedEmailRequest generates a "aws/request.Request" representing the // client's request for the SendTemplatedEmail operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4580,8 +4602,7 @@ func (c *SES) SendTemplatedEmailRequest(input *SendTemplatedEmailInput) (req *re // Email Addresses and Domains (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html) // in the Amazon SES Developer Guide. // -// * The total size of the message, including attachments, must be less than -// 10 MB. +// * The maximum message size is 10 MB. // // * Calls to the SendTemplatedEmail operation may only include one Destination // parameter. A destination is a set of recipients who will receive the same @@ -4595,6 +4616,17 @@ func (c *SES) SendTemplatedEmailRequest(input *SendTemplatedEmailInput) (req *re // message will be rejected, even if the message contains other recipients // that are valid. // +// If your call to the SendTemplatedEmail operation includes all of the required +// parameters, Amazon SES accepts it and returns a Message ID. However, if Amazon +// SES can't render the email because the template contains errors, it doesn't +// send the email. Additionally, because it already accepted the message, Amazon +// SES doesn't return a message stating that it was unable to send the email. +// +// For these reasons, we highly recommend that you set up Amazon SES to send +// you notifications when Rendering Failure events occur. For more information, +// see Sending Personalized Email Using the Amazon SES API (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-personalized-email-api.html) +// in the Amazon Simple Email Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -4657,8 +4689,8 @@ const opSetActiveReceiptRuleSet = "SetActiveReceiptRuleSet" // SetActiveReceiptRuleSetRequest generates a "aws/request.Request" representing the // client's request for the SetActiveReceiptRuleSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
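[Editorial aside: the paragraphs added above note that SendTemplatedEmail returns a Message ID even when the template later fails to render, so the only failure signal is a Rendering Failure event. A hedged sketch of a single templated send; the addresses and template name are placeholders, and the configuration set name is an assumed set wired to Rendering Failure notifications.]

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ses"
)

// sendWelcome sends one templated email. A returned MessageId only means SES
// accepted the request; rendering failures surface later as events on the
// configuration set, so monitoring those events matters.
func sendWelcome(svc *ses.SES, to string) {
	out, err := svc.SendTemplatedEmail(&ses.SendTemplatedEmailInput{
		Source:               aws.String("sender@example.com"), // must be verified
		Destination:          &ses.Destination{ToAddresses: []*string{aws.String(to)}},
		Template:             aws.String("MyTemplate"),                // placeholder
		TemplateData:         aws.String(`{"name":"Alex"}`),           // placeholder
		ConfigurationSetName: aws.String("rendering-failure-alerts"),  // assumed config set
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("accepted as", aws.StringValue(out.MessageId))
}
```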
@@ -4744,8 +4776,8 @@ const opSetIdentityDkimEnabled = "SetIdentityDkimEnabled" // SetIdentityDkimEnabledRequest generates a "aws/request.Request" representing the // client's request for the SetIdentityDkimEnabled operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4835,8 +4867,8 @@ const opSetIdentityFeedbackForwardingEnabled = "SetIdentityFeedbackForwardingEna // SetIdentityFeedbackForwardingEnabledRequest generates a "aws/request.Request" representing the // client's request for the SetIdentityFeedbackForwardingEnabled operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4920,8 +4952,8 @@ const opSetIdentityHeadersInNotificationsEnabled = "SetIdentityHeadersInNotifica // SetIdentityHeadersInNotificationsEnabledRequest generates a "aws/request.Request" representing the // client's request for the SetIdentityHeadersInNotificationsEnabled operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5001,8 +5033,8 @@ const opSetIdentityMailFromDomain = "SetIdentityMailFromDomain" // SetIdentityMailFromDomainRequest generates a "aws/request.Request" representing the // client's request for the SetIdentityMailFromDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5083,8 +5115,8 @@ const opSetIdentityNotificationTopic = "SetIdentityNotificationTopic" // SetIdentityNotificationTopicRequest generates a "aws/request.Request" representing the // client's request for the SetIdentityNotificationTopic operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5123,13 +5155,12 @@ func (c *SES) SetIdentityNotificationTopicRequest(input *SetIdentityNotification // SetIdentityNotificationTopic API operation for Amazon Simple Email Service. 
// -// Given an identity (an email address or a domain), sets the Amazon Simple -// Notification Service (Amazon SNS) topic to which Amazon SES will publish -// bounce, complaint, and/or delivery notifications for emails sent with that -// identity as the Source. -// -// Unless feedback forwarding is enabled, you must specify Amazon SNS topics -// for bounce and complaint notifications. For more information, see SetIdentityFeedbackForwardingEnabled. +// Sets an Amazon Simple Notification Service (Amazon SNS) topic to use when +// delivering notifications. When you use this operation, you specify a verified +// identity, such as an email address or domain. When you send an email that +// uses the chosen identity in the Source field, Amazon SES sends notifications +// to the topic you specified. You can send bounce, complaint, or delivery notifications +// (or any combination of the three) to the Amazon SNS topic that you specify. // // You can execute this operation no more than once per second. // @@ -5168,8 +5199,8 @@ const opSetReceiptRulePosition = "SetReceiptRulePosition" // SetReceiptRulePositionRequest generates a "aws/request.Request" representing the // client's request for the SetReceiptRulePosition operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5255,8 +5286,8 @@ const opTestRenderTemplate = "TestRenderTemplate" // TestRenderTemplateRequest generates a "aws/request.Request" representing the // client's request for the TestRenderTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5347,8 +5378,8 @@ const opUpdateAccountSendingEnabled = "UpdateAccountSendingEnabled" // UpdateAccountSendingEnabledRequest generates a "aws/request.Request" representing the // client's request for the UpdateAccountSendingEnabled operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5389,10 +5420,11 @@ func (c *SES) UpdateAccountSendingEnabledRequest(input *UpdateAccountSendingEnab // UpdateAccountSendingEnabled API operation for Amazon Simple Email Service. // -// Enables or disables email sending across your entire Amazon SES account. -// You can use this operation in conjunction with Amazon CloudWatch alarms to -// temporarily pause email sending across your Amazon SES account when reputation -// metrics (such as your bounce on complaint rate) reach certain thresholds. +// Enables or disables email sending across your entire Amazon SES account in +// the current AWS Region. 
You can use this operation in conjunction with Amazon +// CloudWatch alarms to temporarily pause email sending across your Amazon SES +// account in a given AWS Region when reputation metrics (such as your bounce +// or complaint rates) reach certain thresholds. // // You can execute this operation no more than once per second. // @@ -5428,8 +5460,8 @@ const opUpdateConfigurationSetEventDestination = "UpdateConfigurationSetEventDes // UpdateConfigurationSetEventDestinationRequest generates a "aws/request.Request" representing the // client's request for the UpdateConfigurationSetEventDestination operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5533,8 +5565,8 @@ const opUpdateConfigurationSetReputationMetricsEnabled = "UpdateConfigurationSet // UpdateConfigurationSetReputationMetricsEnabledRequest generates a "aws/request.Request" representing the // client's request for the UpdateConfigurationSetReputationMetricsEnabled operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5576,10 +5608,10 @@ func (c *SES) UpdateConfigurationSetReputationMetricsEnabledRequest(input *Updat // UpdateConfigurationSetReputationMetricsEnabled API operation for Amazon Simple Email Service. // // Enables or disables the publishing of reputation metrics for emails sent -// using a specific configuration set. Reputation metrics include bounce and -// complaint rates. These metrics are published to Amazon CloudWatch. By using -// Amazon CloudWatch, you can create alarms when bounce or complaint rates exceed -// a certain threshold. +// using a specific configuration set in a given AWS Region. Reputation metrics +// include bounce and complaint rates. These metrics are published to Amazon +// CloudWatch. By using CloudWatch, you can create alarms when bounce or complaint +// rates exceed certain thresholds. // // You can execute this operation no more than once per second. // @@ -5620,8 +5652,8 @@ const opUpdateConfigurationSetSendingEnabled = "UpdateConfigurationSetSendingEna // UpdateConfigurationSetSendingEnabledRequest generates a "aws/request.Request" representing the // client's request for the UpdateConfigurationSetSendingEnabled operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5663,10 +5695,10 @@ func (c *SES) UpdateConfigurationSetSendingEnabledRequest(input *UpdateConfigura // UpdateConfigurationSetSendingEnabled API operation for Amazon Simple Email Service. // // Enables or disables email sending for messages sent using a specific configuration -// set. 
You can use this operation in conjunction with Amazon CloudWatch alarms -// to temporarily pause email sending for a configuration set when the reputation -// metrics for that configuration set (such as your bounce on complaint rate) -// reach certain thresholds. +// set in a given AWS Region. You can use this operation in conjunction with +// Amazon CloudWatch alarms to temporarily pause email sending for a configuration +// set when the reputation metrics for that configuration set (such as your +// bounce on complaint rate) exceed certain thresholds. // // You can execute this operation no more than once per second. // @@ -5707,8 +5739,8 @@ const opUpdateConfigurationSetTrackingOptions = "UpdateConfigurationSetTrackingO // UpdateConfigurationSetTrackingOptionsRequest generates a "aws/request.Request" representing the // client's request for the UpdateConfigurationSetTrackingOptions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5752,9 +5784,8 @@ func (c *SES) UpdateConfigurationSetTrackingOptionsRequest(input *UpdateConfigur // // By default, images and links used for tracking open and click events are // hosted on domains operated by Amazon SES. You can configure a subdomain of -// your own to handle these events. For information about using configuration -// sets, see Configuring Custom Domains to Handle Open and Click Tracking (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) -// in the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html). +// your own to handle these events. For information about using custom domains, +// see the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5804,8 +5835,8 @@ const opUpdateCustomVerificationEmailTemplate = "UpdateCustomVerificationEmailTe // UpdateCustomVerificationEmailTemplateRequest generates a "aws/request.Request" representing the // client's request for the UpdateCustomVerificationEmailTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5849,7 +5880,7 @@ func (c *SES) UpdateCustomVerificationEmailTemplateRequest(input *UpdateCustomVe // Updates an existing custom verification email template. // // For more information about custom verification email templates, see Using -// Custom Verification Email Templates (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) +// Custom Verification Email Templates (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) // in the Amazon SES Developer Guide. 
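[Editorial aside: the rewritten descriptions above frame UpdateAccountSendingEnabled and UpdateConfigurationSetSendingEnabled as region-scoped pause switches, typically driven by CloudWatch reputation alarms. A short sketch of both calls; the configuration set name is a placeholder and the client is assumed to be configured as in the first sketch.]

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ses"
)

// pauseSending turns off sending for a single configuration set and then for
// the whole account in the current region, e.g. from an alarm handler.
func pauseSending(svc *ses.SES) error {
	if _, err := svc.UpdateConfigurationSetSendingEnabled(&ses.UpdateConfigurationSetSendingEnabledInput{
		ConfigurationSetName: aws.String("my-config-set"), // placeholder
		Enabled:              aws.Bool(false),
	}); err != nil {
		return err
	}
	_, err := svc.UpdateAccountSendingEnabled(&ses.UpdateAccountSendingEnabledInput{
		Enabled: aws.Bool(false),
	})
	return err
}
```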
// // You can execute this operation no more than once per second. @@ -5900,8 +5931,8 @@ const opUpdateReceiptRule = "UpdateReceiptRule" // UpdateReceiptRuleRequest generates a "aws/request.Request" representing the // client's request for the UpdateReceiptRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6008,8 +6039,8 @@ const opUpdateTemplate = "UpdateTemplate" // UpdateTemplateRequest generates a "aws/request.Request" representing the // client's request for the UpdateTemplate operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6067,8 +6098,8 @@ func (c *SES) UpdateTemplateRequest(input *UpdateTemplateInput) (req *request.Re // SES account. // // * ErrCodeInvalidTemplateException "InvalidTemplate" -// Indicates that a template could not be created because it contained invalid -// JSON. +// Indicates that the template that you specified could not be rendered. This +// issue may occur when a template refers to a partial that does not exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/email-2010-12-01/UpdateTemplate func (c *SES) UpdateTemplate(input *UpdateTemplateInput) (*UpdateTemplateOutput, error) { @@ -6096,8 +6127,8 @@ const opVerifyDomainDkim = "VerifyDomainDkim" // VerifyDomainDkimRequest generates a "aws/request.Request" representing the // client's request for the VerifyDomainDkim operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6184,8 +6215,8 @@ const opVerifyDomainIdentity = "VerifyDomainIdentity" // VerifyDomainIdentityRequest generates a "aws/request.Request" representing the // client's request for the VerifyDomainIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6224,9 +6255,9 @@ func (c *SES) VerifyDomainIdentityRequest(input *VerifyDomainIdentityInput) (req // VerifyDomainIdentity API operation for Amazon Simple Email Service. // -// Adds a domain to the list of identities for your Amazon SES account and attempts -// to verify it. 
For more information about verifying domains, see Verifying -// Email Addresses and Domains (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html) +// Adds a domain to the list of identities for your Amazon SES account in the +// current AWS Region and attempts to verify it. For more information about +// verifying domains, see Verifying Email Addresses and Domains (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-addresses-and-domains.html) // in the Amazon SES Developer Guide. // // You can execute this operation no more than once per second. @@ -6263,8 +6294,8 @@ const opVerifyEmailAddress = "VerifyEmailAddress" // VerifyEmailAddressRequest generates a "aws/request.Request" representing the // client's request for the VerifyEmailAddress operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6339,8 +6370,8 @@ const opVerifyEmailIdentity = "VerifyEmailIdentity" // VerifyEmailIdentityRequest generates a "aws/request.Request" representing the // client's request for the VerifyEmailIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6380,8 +6411,8 @@ func (c *SES) VerifyEmailIdentityRequest(input *VerifyEmailIdentityInput) (req * // VerifyEmailIdentity API operation for Amazon Simple Email Service. // // Adds an email address to the list of identities for your Amazon SES account -// and attempts to verify it. As a result of executing this operation, a verification -// email is sent to the specified address. +// in the current AWS region and attempts to verify it. As a result of executing +// this operation, a verification email is sent to the specified address. // // You can execute this operation no more than once per second. // @@ -7348,8 +7379,8 @@ type CreateConfigurationSetTrackingOptionsInput struct { // emails. // // For more information, see Configuring Custom Domains to Handle Open and Click - // Tracking (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) - // in the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html). + // Tracking (ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) + // in the Amazon SES Developer Guide. // // TrackingOptions is a required field TrackingOptions *TrackingOptions `type:"structure" required:"true"` @@ -9094,12 +9125,12 @@ func (s GetAccountSendingEnabledInput) GoString() string { } // Represents a request to return the email sending status for your Amazon SES -// account. +// account in the current AWS Region. type GetAccountSendingEnabledOutput struct { _ struct{} `type:"structure"` // Describes whether email sending is enabled or disabled for your Amazon SES - // account. + // account in the current AWS Region. 
Enabled *bool `type:"boolean"` } @@ -10237,7 +10268,7 @@ func (s *ListConfigurationSetsOutput) SetNextToken(v string) *ListConfigurationS // for your account. // // For more information about custom verification email templates, see Using -// Custom Verification Email Templates (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/custom-verification-emails.html) +// Custom Verification Email Templates (ses/latest/DeveloperGuide/custom-verification-emails.html) // in the Amazon SES Developer Guide. type ListCustomVerificationEmailTemplatesInput struct { _ struct{} `type:"structure"` @@ -10765,7 +10796,7 @@ type MessageDsn struct { // When the message was received by the reporting mail transfer agent (MTA), // in RFC 822 (https://www.ietf.org/rfc/rfc0822.txt) date-time format. - ArrivalDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + ArrivalDate *time.Time `type:"timestamp"` // Additional X-headers to include in the DSN. ExtensionFields []*ExtensionField `type:"list"` @@ -11088,7 +11119,7 @@ type ReceiptAction struct { StopAction *StopAction `type:"structure"` // Calls Amazon WorkMail and, optionally, publishes a notification to Amazon - // SNS. + // Amazon SNS. WorkmailAction *WorkmailAction `type:"structure"` } @@ -11448,7 +11479,7 @@ type ReceiptRuleSetMetadata struct { _ struct{} `type:"structure"` // The date and time the receipt rule set was created. - CreatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreatedTimestamp *time.Time `type:"timestamp"` // The name of the receipt rule set. The name must: // @@ -11518,7 +11549,7 @@ type RecipientDsnFields struct { // The time the final delivery attempt was made, in RFC 822 (https://www.ietf.org/rfc/rfc0822.txt) // date-time format. - LastAttemptDate *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastAttemptDate *time.Time `type:"timestamp"` // The MTA to which the remote MTA attempted to deliver the message, formatted // as specified in RFC 3464 (https://tools.ietf.org/html/rfc3464) (mta-name-type; @@ -11695,7 +11726,7 @@ type ReputationOptions struct { // // If email sending for the configuration set has never been disabled and later // re-enabled, the value of this attribute is null. - LastFreshStart *time.Time `type:"timestamp" timestampFormat:"iso8601"` + LastFreshStart *time.Time `type:"timestamp"` // Describes whether or not Amazon SES publishes reputation metrics for the // configuration set, such as bounce and complaint rates, to Amazon CloudWatch. @@ -11787,10 +11818,10 @@ type S3Action struct { // using Amazon S3 server-side encryption. This means that you must use the // Amazon S3 encryption client to decrypt the email after retrieving it from // Amazon S3, as the service has no access to use your AWS KMS keys for decryption. - // This encryption client is currently available with the AWS Java SDK (http://aws.amazon.com/sdk-for-java/) - // and AWS Ruby SDK (http://aws.amazon.com/sdk-for-ruby/) only. For more information - // about client-side encryption using AWS KMS master keys, see the Amazon S3 - // Developer Guide (AmazonS3/latest/dev/UsingClientSideEncryption.html). + // This encryption client is currently available with the AWS SDK for Java (http://aws.amazon.com/sdk-for-java/) + // and AWS SDK for Ruby (http://aws.amazon.com/sdk-for-ruby/) only. For more + // information about client-side encryption using AWS KMS master keys, see the + // Amazon S3 Developer Guide (http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html). 
KmsKeyArn *string `type:"string"` // The key prefix of the Amazon S3 bucket. The key prefix is similar to a directory @@ -12458,7 +12489,7 @@ type SendDataPoint struct { Rejects *int64 `type:"long"` // Time of the data point. - Timestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + Timestamp *time.Time `type:"timestamp"` } // String returns the string representation @@ -12736,18 +12767,26 @@ type SendRawEmailInput struct { // SendRawEmail in this guide, or see the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/sending-authorization-delegate-sender-tasks-email.html). FromArn *string `type:"string"` - // The raw text of the message. The client is responsible for ensuring the following: + // The raw email message itself. The message has to meet the following criteria: // - // * Message must contain a header and a body, separated by a blank line. + // * The message has to contain a header and a body, separated by a blank + // line. // - // * All required header fields must be present. + // * All of the required header fields must be present in the message. // // * Each part of a multipart MIME message must be formatted properly. // - // * MIME content types must be among those supported by Amazon SES. For - // more information, go to the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/mime-types.html). + // * Attachments must be of a content type that Amazon SES supports. For + // a list on unsupported content types, see Unsupported Attachment Types + // (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/mime-types.html) + // in the Amazon SES Developer Guide. + // + // * The entire message must be base64-encoded. // - // * Must be base64-encoded. + // * If any of the MIME parts in your message contain content that is outside + // of the 7-bit ASCII character range, we highly recommend that you encode + // that content. For more information, see Sending Raw Email (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-raw.html) + // in the Amazon SES Developer Guide. // // * Per RFC 5321 (https://tools.ietf.org/html/rfc5321#section-4.5.3.1.6), // the maximum length of each line of text, including the , must not @@ -13537,9 +13576,14 @@ func (s SetIdentityMailFromDomainOutput) GoString() string { type SetIdentityNotificationTopicInput struct { _ struct{} `type:"structure"` - // The identity for which the Amazon SNS topic will be set. You can specify - // an identity by using its name or by using its Amazon Resource Name (ARN). - // Examples: user@example.com, example.com, arn:aws:ses:us-east-1:123456789012:identity/example.com. + // The identity (email address or domain) that you want to set the Amazon SNS + // topic for. + // + // You can only specify a verified identity for this parameter. + // + // You can specify an identity by using its name or by using its Amazon Resource + // Name (ARN). The following examples are all valid identities: sender@example.com, + // example.com, arn:aws:ses:us-east-1:123456789012:identity/example.com. // // Identity is a required field Identity *string `type:"string" required:"true"` @@ -13824,7 +13868,7 @@ type TemplateMetadata struct { _ struct{} `type:"structure"` // The time and date the template was created. - CreatedTimestamp *time.Time `type:"timestamp" timestampFormat:"iso8601"` + CreatedTimestamp *time.Time `type:"timestamp"` // The name of the template. 
Name *string `type:"string"` @@ -13935,8 +13979,8 @@ func (s *TestRenderTemplateOutput) SetRenderedTemplate(v string) *TestRenderTemp // emails. // // For more information, see Configuring Custom Domains to Handle Open and Click -// Tracking (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) -// in the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html). +// Tracking (ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) +// in the Amazon SES Developer Guide. type TrackingOptions struct { _ struct{} `type:"structure"` @@ -13967,7 +14011,7 @@ type UpdateAccountSendingEnabledInput struct { _ struct{} `type:"structure"` // Describes whether email sending is enabled or disabled for your Amazon SES - // account. + // account in the current AWS Region. Enabled *bool `type:"boolean"` } @@ -14231,8 +14275,8 @@ type UpdateConfigurationSetTrackingOptionsInput struct { // emails. // // For more information, see Configuring Custom Domains to Handle Open and Click - // Tracking (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) - // in the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html). + // Tracking (ses/latest/DeveloperGuide/configure-custom-open-click-domains.html) + // in the Amazon SES Developer Guide. // // TrackingOptions is a required field TrackingOptions *TrackingOptions `type:"structure" required:"true"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/ses/doc.go b/vendor/github.com/aws/aws-sdk-go/service/ses/doc.go index 0050868b2ed..6ba270a7578 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ses/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ses/doc.go @@ -3,9 +3,10 @@ // Package ses provides the client and types for making API // requests to Amazon Simple Email Service. // -// This is the API Reference for Amazon Simple Email Service (https://aws.amazon.com/ses/) -// (Amazon SES). This documentation is intended to be used in conjunction with -// the Amazon SES Developer Guide (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html). +// This document contains reference information for the Amazon Simple Email +// Service (https://aws.amazon.com/ses/) (Amazon SES) API, version 2010-12-01. +// This document is best used in conjunction with the Amazon SES Developer Guide +// (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html). // // For a list of Amazon SES endpoints to use in service requests, see Regions // and Amazon SES (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/regions.html) diff --git a/vendor/github.com/aws/aws-sdk-go/service/ses/errors.go b/vendor/github.com/aws/aws-sdk-go/service/ses/errors.go index 2d03c244004..dd94d63518b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ses/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ses/errors.go @@ -158,8 +158,8 @@ const ( // ErrCodeInvalidTemplateException for service response error code // "InvalidTemplate". // - // Indicates that a template could not be created because it contained invalid - // JSON. + // Indicates that the template that you specified could not be rendered. This + // issue may occur when a template refers to a partial that does not exist. 
ErrCodeInvalidTemplateException = "InvalidTemplate" // ErrCodeInvalidTrackingOptionsException for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/ses/service.go b/vendor/github.com/aws/aws-sdk-go/service/ses/service.go index b078893079f..0e33b771f53 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ses/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ses/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "email" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "email" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "SES" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the SES client with a session. @@ -45,19 +46,20 @@ const ( // svc := ses.New(mySession, aws.NewConfig().WithRegion("us-west-2")) func New(p client.ConfigProvider, cfgs ...*aws.Config) *SES { c := p.ClientConfig(EndpointsID, cfgs...) + if c.SigningNameDerived || len(c.SigningName) == 0 { + c.SigningName = "ses" + } return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) } // newClient creates, initializes and returns a new service client instance. func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *SES { - if len(signingName) == 0 { - signingName = "ses" - } svc := &SES{ Client: client.New( cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/sfn/api.go b/vendor/github.com/aws/aws-sdk-go/service/sfn/api.go index f6119da2bdf..52da5bab516 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sfn/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sfn/api.go @@ -14,8 +14,8 @@ const opCreateActivity = "CreateActivity" // CreateActivityRequest generates a "aws/request.Request" representing the // client's request for the CreateActivity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -102,8 +102,8 @@ const opCreateStateMachine = "CreateStateMachine" // CreateStateMachineRequest generates a "aws/request.Request" representing the // client's request for the CreateStateMachine operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -201,8 +201,8 @@ const opDeleteActivity = "DeleteActivity" // DeleteActivityRequest generates a "aws/request.Request" representing the // client's request for the DeleteActivity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -280,8 +280,8 @@ const opDeleteStateMachine = "DeleteStateMachine" // DeleteStateMachineRequest generates a "aws/request.Request" representing the // client's request for the DeleteStateMachine operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -364,8 +364,8 @@ const opDescribeActivity = "DescribeActivity" // DescribeActivityRequest generates a "aws/request.Request" representing the // client's request for the DescribeActivity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -446,8 +446,8 @@ const opDescribeExecution = "DescribeExecution" // DescribeExecutionRequest generates a "aws/request.Request" representing the // client's request for the DescribeExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -528,8 +528,8 @@ const opDescribeStateMachine = "DescribeStateMachine" // DescribeStateMachineRequest generates a "aws/request.Request" representing the // client's request for the DescribeStateMachine operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -610,8 +610,8 @@ const opDescribeStateMachineForExecution = "DescribeStateMachineForExecution" // DescribeStateMachineForExecutionRequest generates a "aws/request.Request" representing the // client's request for the DescribeStateMachineForExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -692,8 +692,8 @@ const opGetActivityTask = "GetActivityTask" // GetActivityTaskRequest generates a "aws/request.Request" representing the // client's request for the GetActivityTask operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -787,8 +787,8 @@ const opGetExecutionHistory = "GetExecutionHistory" // GetExecutionHistoryRequest generates a "aws/request.Request" representing the // client's request for the GetExecutionHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -934,8 +934,8 @@ const opListActivities = "ListActivities" // ListActivitiesRequest generates a "aws/request.Request" representing the // client's request for the ListActivities operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1073,8 +1073,8 @@ const opListExecutions = "ListExecutions" // ListExecutionsRequest generates a "aws/request.Request" representing the // client's request for the ListExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1218,8 +1218,8 @@ const opListStateMachines = "ListStateMachines" // ListStateMachinesRequest generates a "aws/request.Request" representing the // client's request for the ListStateMachines operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1357,8 +1357,8 @@ const opSendTaskFailure = "SendTaskFailure" // SendTaskFailureRequest generates a "aws/request.Request" representing the // client's request for the SendTaskFailure operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1440,8 +1440,8 @@ const opSendTaskHeartbeat = "SendTaskHeartbeat" // SendTaskHeartbeatRequest generates a "aws/request.Request" representing the // client's request for the SendTaskHeartbeat operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1535,8 +1535,8 @@ const opSendTaskSuccess = "SendTaskSuccess" // SendTaskSuccessRequest generates a "aws/request.Request" representing the // client's request for the SendTaskSuccess operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1622,8 +1622,8 @@ const opStartExecution = "StartExecution" // StartExecutionRequest generates a "aws/request.Request" representing the // client's request for the StartExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1722,8 +1722,8 @@ const opStopExecution = "StopExecution" // StopExecutionRequest generates a "aws/request.Request" representing the // client's request for the StopExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1804,8 +1804,8 @@ const opUpdateStateMachine = "UpdateStateMachine" // UpdateStateMachineRequest generates a "aws/request.Request" representing the // client's request for the UpdateStateMachine operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1944,7 +1944,7 @@ type ActivityListItem struct { // The date the activity is created. // // CreationDate is a required field - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp" required:"true"` // The name of the activity. // @@ -2229,7 +2229,7 @@ type CreateActivityOutput struct { // The date the activity is created. 
// // CreationDate is a required field - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp" required:"true"` } // String returns the string representation @@ -2350,7 +2350,7 @@ type CreateStateMachineOutput struct { // The date the state machine is created. // // CreationDate is a required field - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp" required:"true"` // The Amazon Resource Name (ARN) that identifies the created state machine. // @@ -2542,7 +2542,7 @@ type DescribeActivityOutput struct { // The date the activity is created. // // CreationDate is a required field - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp" required:"true"` // The name of the activity. // @@ -2668,7 +2668,7 @@ type DescribeExecutionOutput struct { // The date the execution is started. // // StartDate is a required field - StartDate *time.Time `locationName:"startDate" type:"timestamp" timestampFormat:"unix" required:"true"` + StartDate *time.Time `locationName:"startDate" type:"timestamp" required:"true"` // The Amazon Resource Name (ARN) of the executed stated machine. // @@ -2681,7 +2681,7 @@ type DescribeExecutionOutput struct { Status *string `locationName:"status" type:"string" required:"true" enum:"ExecutionStatus"` // If the execution has already ended, the date the execution stopped. - StopDate *time.Time `locationName:"stopDate" type:"timestamp" timestampFormat:"unix"` + StopDate *time.Time `locationName:"stopDate" type:"timestamp"` } // String returns the string representation @@ -2812,7 +2812,7 @@ type DescribeStateMachineForExecutionOutput struct { // For a newly created state machine, this is the creation date. // // UpdateDate is a required field - UpdateDate *time.Time `locationName:"updateDate" type:"timestamp" timestampFormat:"unix" required:"true"` + UpdateDate *time.Time `locationName:"updateDate" type:"timestamp" required:"true"` } // String returns the string representation @@ -2902,7 +2902,7 @@ type DescribeStateMachineOutput struct { // The date the state machine is created. // // CreationDate is a required field - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp" required:"true"` // The Amazon States Language definition of the state machine. // @@ -3083,7 +3083,7 @@ type ExecutionListItem struct { // The date the execution started. // // StartDate is a required field - StartDate *time.Time `locationName:"startDate" type:"timestamp" timestampFormat:"unix" required:"true"` + StartDate *time.Time `locationName:"startDate" type:"timestamp" required:"true"` // The Amazon Resource Name (ARN) of the executed state machine. // @@ -3096,7 +3096,7 @@ type ExecutionListItem struct { Status *string `locationName:"status" type:"string" required:"true" enum:"ExecutionStatus"` // If the execution already ended, the date the execution stopped. 
- StopDate *time.Time `locationName:"stopDate" type:"timestamp" timestampFormat:"unix"` + StopDate *time.Time `locationName:"stopDate" type:"timestamp"` } // String returns the string representation @@ -3524,7 +3524,7 @@ type HistoryEvent struct { // The date the event occurred. // // Timestamp is a required field - Timestamp *time.Time `locationName:"timestamp" type:"timestamp" timestampFormat:"unix" required:"true"` + Timestamp *time.Time `locationName:"timestamp" type:"timestamp" required:"true"` // The type of the event. // @@ -4498,7 +4498,7 @@ type StartExecutionOutput struct { // The date the execution is started. // // StartDate is a required field - StartDate *time.Time `locationName:"startDate" type:"timestamp" timestampFormat:"unix" required:"true"` + StartDate *time.Time `locationName:"startDate" type:"timestamp" required:"true"` } // String returns the string representation @@ -4612,7 +4612,7 @@ type StateMachineListItem struct { // The date the state machine is created. // // CreationDate is a required field - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp" required:"true"` // The name of the state machine. // @@ -4730,7 +4730,7 @@ type StopExecutionOutput struct { // The date the execution is stopped. // // StopDate is a required field - StopDate *time.Time `locationName:"stopDate" type:"timestamp" timestampFormat:"unix" required:"true"` + StopDate *time.Time `locationName:"stopDate" type:"timestamp" required:"true"` } // String returns the string representation @@ -4820,7 +4820,7 @@ type UpdateStateMachineOutput struct { // The date and time the state machine was updated. // // UpdateDate is a required field - UpdateDate *time.Time `locationName:"updateDate" type:"timestamp" timestampFormat:"unix" required:"true"` + UpdateDate *time.Time `locationName:"updateDate" type:"timestamp" required:"true"` } // String returns the string representation diff --git a/vendor/github.com/aws/aws-sdk-go/service/sfn/service.go b/vendor/github.com/aws/aws-sdk-go/service/sfn/service.go index f9d12a58f79..2436268f075 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sfn/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sfn/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "states" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "states" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "SFN" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the SFN client with a session. 
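The sfn changes above are consumed through the usual aws-sdk-go client pattern: `New` builds the client from a session, and the relaxed `timestampFormat` tags only change how the SDK decodes the wire format, so callers still receive plain `*time.Time` values such as `CreationDate`. A minimal usage sketch, assuming a placeholder region and state machine ARN:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sfn"
)

func main() {
	// Placeholder region; substitute whatever your provider configuration uses.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := sfn.New(sess)

	// Placeholder ARN for illustration only.
	out, err := svc.DescribeStateMachine(&sfn.DescribeStateMachineInput{
		StateMachineArn: aws.String("arn:aws:states:us-east-1:123456789012:stateMachine:example"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// CreationDate is decoded from whichever wire format the SDK selects;
	// callers see a *time.Time regardless of the timestampFormat tag.
	fmt.Println(aws.StringValue(out.Name), aws.TimeValue(out.CreationDate))
}
```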
@@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/simpledb/api.go b/vendor/github.com/aws/aws-sdk-go/service/simpledb/api.go index 24dc68744dc..372e87e3ef7 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/simpledb/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/simpledb/api.go @@ -16,8 +16,8 @@ const opBatchDeleteAttributes = "BatchDeleteAttributes" // BatchDeleteAttributesRequest generates a "aws/request.Request" representing the // client's request for the BatchDeleteAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -112,8 +112,8 @@ const opBatchPutAttributes = "BatchPutAttributes" // BatchPutAttributesRequest generates a "aws/request.Request" representing the // client's request for the BatchPutAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -256,8 +256,8 @@ const opCreateDomain = "CreateDomain" // CreateDomainRequest generates a "aws/request.Request" representing the // client's request for the CreateDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -348,8 +348,8 @@ const opDeleteAttributes = "DeleteAttributes" // DeleteAttributesRequest generates a "aws/request.Request" representing the // client's request for the DeleteAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -445,8 +445,8 @@ const opDeleteDomain = "DeleteDomain" // DeleteDomainRequest generates a "aws/request.Request" representing the // client's request for the DeleteDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -528,8 +528,8 @@ const opDomainMetadata = "DomainMetadata" // DomainMetadataRequest generates a "aws/request.Request" representing the // client's request for the DomainMetadata operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -609,8 +609,8 @@ const opGetAttributes = "GetAttributes" // GetAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -700,8 +700,8 @@ const opListDomains = "ListDomains" // ListDomainsRequest generates a "aws/request.Request" representing the // client's request for the ListDomains operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -840,8 +840,8 @@ const opPutAttributes = "PutAttributes" // PutAttributesRequest generates a "aws/request.Request" representing the // client's request for the PutAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -966,8 +966,8 @@ const opSelect = "Select" // SelectRequest generates a "aws/request.Request" representing the // client's request for the Select operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. diff --git a/vendor/github.com/aws/aws-sdk-go/service/simpledb/service.go b/vendor/github.com/aws/aws-sdk-go/service/simpledb/service.go index fb3a0e3cdc7..d4de27413cb 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/simpledb/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/simpledb/service.go @@ -30,8 +30,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "sdb" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "sdb" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. 
+ ServiceID = "SimpleDB" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the SimpleDB client with a session. @@ -56,6 +57,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/sns/api.go b/vendor/github.com/aws/aws-sdk-go/service/sns/api.go index 8f658a43034..f6cf35b3287 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sns/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sns/api.go @@ -16,8 +16,8 @@ const opAddPermission = "AddPermission" // AddPermissionRequest generates a "aws/request.Request" representing the // client's request for the AddPermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -107,8 +107,8 @@ const opCheckIfPhoneNumberIsOptedOut = "CheckIfPhoneNumberIsOptedOut" // CheckIfPhoneNumberIsOptedOutRequest generates a "aws/request.Request" representing the // client's request for the CheckIfPhoneNumberIsOptedOut operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -201,8 +201,8 @@ const opConfirmSubscription = "ConfirmSubscription" // ConfirmSubscriptionRequest generates a "aws/request.Request" representing the // client's request for the ConfirmSubscription operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -296,8 +296,8 @@ const opCreatePlatformApplication = "CreatePlatformApplication" // CreatePlatformApplicationRequest generates a "aws/request.Request" representing the // client's request for the CreatePlatformApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -406,8 +406,8 @@ const opCreatePlatformEndpoint = "CreatePlatformEndpoint" // CreatePlatformEndpointRequest generates a "aws/request.Request" representing the // client's request for the CreatePlatformEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -507,8 +507,8 @@ const opCreateTopic = "CreateTopic" // CreateTopicRequest generates a "aws/request.Request" representing the // client's request for the CreateTopic operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -573,6 +573,10 @@ func (c *SNS) CreateTopicRequest(input *CreateTopicInput) (req *request.Request, // * ErrCodeAuthorizationErrorException "AuthorizationError" // Indicates that the user has been denied access to the requested resource. // +// * ErrCodeInvalidSecurityException "InvalidSecurity" +// The credential signature isn't valid. You must use an HTTPS endpoint and +// sign your request using Signature Version 4. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/sns-2010-03-31/CreateTopic func (c *SNS) CreateTopic(input *CreateTopicInput) (*CreateTopicOutput, error) { req, out := c.CreateTopicRequest(input) @@ -599,8 +603,8 @@ const opDeleteEndpoint = "DeleteEndpoint" // DeleteEndpointRequest generates a "aws/request.Request" representing the // client's request for the DeleteEndpoint operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -691,8 +695,8 @@ const opDeletePlatformApplication = "DeletePlatformApplication" // DeletePlatformApplicationRequest generates a "aws/request.Request" representing the // client's request for the DeletePlatformApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -780,8 +784,8 @@ const opDeleteTopic = "DeleteTopic" // DeleteTopicRequest generates a "aws/request.Request" representing the // client's request for the DeleteTopic operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -873,8 +877,8 @@ const opGetEndpointAttributes = "GetEndpointAttributes" // GetEndpointAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetEndpointAttributes operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -963,8 +967,8 @@ const opGetPlatformApplicationAttributes = "GetPlatformApplicationAttributes" // GetPlatformApplicationAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetPlatformApplicationAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1053,8 +1057,8 @@ const opGetSMSAttributes = "GetSMSAttributes" // GetSMSAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetSMSAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1144,8 +1148,8 @@ const opGetSubscriptionAttributes = "GetSubscriptionAttributes" // GetSubscriptionAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetSubscriptionAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1232,8 +1236,8 @@ const opGetTopicAttributes = "GetTopicAttributes" // GetTopicAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetTopicAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1295,6 +1299,10 @@ func (c *SNS) GetTopicAttributesRequest(input *GetTopicAttributesInput) (req *re // * ErrCodeAuthorizationErrorException "AuthorizationError" // Indicates that the user has been denied access to the requested resource. // +// * ErrCodeInvalidSecurityException "InvalidSecurity" +// The credential signature isn't valid. You must use an HTTPS endpoint and +// sign your request using Signature Version 4. 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/sns-2010-03-31/GetTopicAttributes func (c *SNS) GetTopicAttributes(input *GetTopicAttributesInput) (*GetTopicAttributesOutput, error) { req, out := c.GetTopicAttributesRequest(input) @@ -1321,8 +1329,8 @@ const opListEndpointsByPlatformApplication = "ListEndpointsByPlatformApplication // ListEndpointsByPlatformApplicationRequest generates a "aws/request.Request" representing the // client's request for the ListEndpointsByPlatformApplication operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1376,6 +1384,8 @@ func (c *SNS) ListEndpointsByPlatformApplicationRequest(input *ListEndpointsByPl // are no more records to return, NextToken will be null. For more information, // see Using Amazon SNS Mobile Push Notifications (http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html). // +// This action is throttled at 30 transactions per second (TPS). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1472,8 +1482,8 @@ const opListPhoneNumbersOptedOut = "ListPhoneNumbersOptedOut" // ListPhoneNumbersOptedOutRequest generates a "aws/request.Request" representing the // client's request for the ListPhoneNumbersOptedOut operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1569,8 +1579,8 @@ const opListPlatformApplications = "ListPlatformApplications" // ListPlatformApplicationsRequest generates a "aws/request.Request" representing the // client's request for the ListPlatformApplications operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1624,6 +1634,8 @@ func (c *SNS) ListPlatformApplicationsRequest(input *ListPlatformApplicationsInp // no more records to return, NextToken will be null. For more information, // see Using Amazon SNS Mobile Push Notifications (http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html). // +// This action is throttled at 15 transactions per second (TPS). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1717,8 +1729,8 @@ const opListSubscriptions = "ListSubscriptions" // ListSubscriptionsRequest generates a "aws/request.Request" representing the // client's request for the ListSubscriptions operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1768,6 +1780,8 @@ func (c *SNS) ListSubscriptionsRequest(input *ListSubscriptionsInput) (req *requ // is also returned. Use the NextToken parameter in a new ListSubscriptions // call to get further results. // +// This action is throttled at 30 transactions per second (TPS). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1861,8 +1875,8 @@ const opListSubscriptionsByTopic = "ListSubscriptionsByTopic" // ListSubscriptionsByTopicRequest generates a "aws/request.Request" representing the // client's request for the ListSubscriptionsByTopic operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1912,6 +1926,8 @@ func (c *SNS) ListSubscriptionsByTopicRequest(input *ListSubscriptionsByTopicInp // a NextToken is also returned. Use the NextToken parameter in a new ListSubscriptionsByTopic // call to get further results. // +// This action is throttled at 30 transactions per second (TPS). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -2008,8 +2024,8 @@ const opListTopics = "ListTopics" // ListTopicsRequest generates a "aws/request.Request" representing the // client's request for the ListTopics operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2058,6 +2074,8 @@ func (c *SNS) ListTopicsRequest(input *ListTopicsInput) (req *request.Request, o // of topics, up to 100. If there are more topics, a NextToken is also returned. // Use the NextToken parameter in a new ListTopics call to get further results. // +// This action is throttled at 30 transactions per second (TPS). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -2151,8 +2169,8 @@ const opOptInPhoneNumber = "OptInPhoneNumber" // OptInPhoneNumberRequest generates a "aws/request.Request" representing the // client's request for the OptInPhoneNumber operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2243,8 +2261,8 @@ const opPublish = "Publish" // PublishRequest generates a "aws/request.Request" representing the // client's request for the Publish operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2283,10 +2301,15 @@ func (c *SNS) PublishRequest(input *PublishInput) (req *request.Request, output // Publish API operation for Amazon Simple Notification Service. // -// Sends a message to all of a topic's subscribed endpoints. When a messageId -// is returned, the message has been saved and Amazon SNS will attempt to deliver -// it to the topic's subscribers shortly. The format of the outgoing message -// to each subscribed endpoint depends on the notification protocol. +// Sends a message to an Amazon SNS topic or sends a text message (SMS message) +// directly to a phone number. +// +// If you send a message to a topic, Amazon SNS delivers the message to each +// endpoint that is subscribed to the topic. The format of the message depends +// on the notification protocol for each subscribed endpoint. +// +// When a messageId is returned, the message has been saved and Amazon SNS will +// attempt to deliver it shortly. // // To use the Publish action for sending a message to a mobile endpoint, such // as an app on a Kindle device or mobile phone, you must specify the EndpointArn @@ -2325,6 +2348,36 @@ func (c *SNS) PublishRequest(input *PublishInput) (req *request.Request, output // * ErrCodeAuthorizationErrorException "AuthorizationError" // Indicates that the user has been denied access to the requested resource. // +// * ErrCodeKMSDisabledException "KMSDisabled" +// The request was rejected because the specified customer master key (CMK) +// isn't enabled. +// +// * ErrCodeKMSInvalidStateException "KMSInvalidState" +// The request was rejected because the state of the specified resource isn't +// valid for this request. For more information, see How Key State Affects Use +// of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) +// in the AWS Key Management Service Developer Guide. +// +// * ErrCodeKMSNotFoundException "KMSNotFound" +// The request was rejected because the specified entity or resource can't be +// found. +// +// * ErrCodeKMSOptInRequired "KMSOptInRequired" +// The AWS access key ID needs a subscription for the service. +// +// * ErrCodeKMSThrottlingException "KMSThrottling" +// The request was denied due to request throttling. For more information about +// throttling, see Limits (http://docs.aws.amazon.com/kms/latest/developerguide/limits.html#requests-per-second) +// in the AWS Key Management Service Developer Guide. +// +// * ErrCodeKMSAccessDeniedException "KMSAccessDenied" +// The ciphertext references a key that doesn't exist or that you don't have +// access to. +// +// * ErrCodeInvalidSecurityException "InvalidSecurity" +// The credential signature isn't valid. You must use an HTTPS endpoint and +// sign your request using Signature Version 4. 
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/sns-2010-03-31/Publish func (c *SNS) Publish(input *PublishInput) (*PublishOutput, error) { req, out := c.PublishRequest(input) @@ -2351,8 +2404,8 @@ const opRemovePermission = "RemovePermission" // RemovePermissionRequest generates a "aws/request.Request" representing the // client's request for the RemovePermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2441,8 +2494,8 @@ const opSetEndpointAttributes = "SetEndpointAttributes" // SetEndpointAttributesRequest generates a "aws/request.Request" representing the // client's request for the SetEndpointAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2533,8 +2586,8 @@ const opSetPlatformApplicationAttributes = "SetPlatformApplicationAttributes" // SetPlatformApplicationAttributesRequest generates a "aws/request.Request" representing the // client's request for the SetPlatformApplicationAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2627,8 +2680,8 @@ const opSetSMSAttributes = "SetSMSAttributes" // SetSMSAttributesRequest generates a "aws/request.Request" representing the // client's request for the SetSMSAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2722,8 +2775,8 @@ const opSetSubscriptionAttributes = "SetSubscriptionAttributes" // SetSubscriptionAttributesRequest generates a "aws/request.Request" representing the // client's request for the SetSubscriptionAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2764,7 +2817,8 @@ func (c *SNS) SetSubscriptionAttributesRequest(input *SetSubscriptionAttributesI // SetSubscriptionAttributes API operation for Amazon Simple Notification Service. // -// Allows a subscription owner to set an attribute of the topic to a new value. 
+// Allows a subscription owner to set an attribute of the subscription to a +// new value. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2777,6 +2831,11 @@ func (c *SNS) SetSubscriptionAttributesRequest(input *SetSubscriptionAttributesI // * ErrCodeInvalidParameterException "InvalidParameter" // Indicates that a request parameter does not comply with the associated constraints. // +// * ErrCodeFilterPolicyLimitExceededException "FilterPolicyLimitExceeded" +// Indicates that the number of filter polices in your AWS account exceeds the +// limit. To add more filter polices, submit an SNS Limit Increase case in the +// AWS Support Center. +// // * ErrCodeInternalErrorException "InternalError" // Indicates an internal service error. // @@ -2812,8 +2871,8 @@ const opSetTopicAttributes = "SetTopicAttributes" // SetTopicAttributesRequest generates a "aws/request.Request" representing the // client's request for the SetTopicAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2876,6 +2935,10 @@ func (c *SNS) SetTopicAttributesRequest(input *SetTopicAttributesInput) (req *re // * ErrCodeAuthorizationErrorException "AuthorizationError" // Indicates that the user has been denied access to the requested resource. // +// * ErrCodeInvalidSecurityException "InvalidSecurity" +// The credential signature isn't valid. You must use an HTTPS endpoint and +// sign your request using Signature Version 4. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/sns-2010-03-31/SetTopicAttributes func (c *SNS) SetTopicAttributes(input *SetTopicAttributesInput) (*SetTopicAttributesOutput, error) { req, out := c.SetTopicAttributesRequest(input) @@ -2902,8 +2965,8 @@ const opSubscribe = "Subscribe" // SubscribeRequest generates a "aws/request.Request" representing the // client's request for the Subscribe operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2947,6 +3010,8 @@ func (c *SNS) SubscribeRequest(input *SubscribeInput) (req *request.Request, out // the ConfirmSubscription action with the token from the confirmation message. // Confirmation tokens are valid for three days. // +// This action is throttled at 100 transactions per second (TPS). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -2958,6 +3023,11 @@ func (c *SNS) SubscribeRequest(input *SubscribeInput) (req *request.Request, out // * ErrCodeSubscriptionLimitExceededException "SubscriptionLimitExceeded" // Indicates that the customer already owns the maximum allowed number of subscriptions. 
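SetSubscriptionAttributes above acts on the subscription rather than the topic, and the new FilterPolicyLimitExceeded error is tied to the FilterPolicy attribute. A minimal sketch of attaching a filter policy; the subscription ARN is a placeholder:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sns"
)

func main() {
	svc := sns.New(session.Must(session.NewSession()))

	// Only messages whose MessageAttributes include event=order_placed
	// are delivered to this subscription.
	_, err := svc.SetSubscriptionAttributes(&sns.SetSubscriptionAttributesInput{
		SubscriptionArn: aws.String("arn:aws:sns:us-east-1:123456789012:example-topic:11111111-2222-3333-4444-555555555555"), // placeholder
		AttributeName:   aws.String("FilterPolicy"),
		AttributeValue:  aws.String(`{"event": ["order_placed"]}`),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```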
// +// * ErrCodeFilterPolicyLimitExceededException "FilterPolicyLimitExceeded" +// Indicates that the number of filter polices in your AWS account exceeds the +// limit. To add more filter polices, submit an SNS Limit Increase case in the +// AWS Support Center. +// // * ErrCodeInvalidParameterException "InvalidParameter" // Indicates that a request parameter does not comply with the associated constraints. // @@ -2970,6 +3040,10 @@ func (c *SNS) SubscribeRequest(input *SubscribeInput) (req *request.Request, out // * ErrCodeAuthorizationErrorException "AuthorizationError" // Indicates that the user has been denied access to the requested resource. // +// * ErrCodeInvalidSecurityException "InvalidSecurity" +// The credential signature isn't valid. You must use an HTTPS endpoint and +// sign your request using Signature Version 4. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/sns-2010-03-31/Subscribe func (c *SNS) Subscribe(input *SubscribeInput) (*SubscribeOutput, error) { req, out := c.SubscribeRequest(input) @@ -2996,8 +3070,8 @@ const opUnsubscribe = "Unsubscribe" // UnsubscribeRequest generates a "aws/request.Request" representing the // client's request for the Unsubscribe operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3045,6 +3119,8 @@ func (c *SNS) UnsubscribeRequest(input *UnsubscribeInput) (req *request.Request, // message is delivered to the endpoint, so that the endpoint owner can easily // resubscribe to the topic if the Unsubscribe request was unintended. // +// This action is throttled at 100 transactions per second (TPS). +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -3065,6 +3141,10 @@ func (c *SNS) UnsubscribeRequest(input *UnsubscribeInput) (req *request.Request, // * ErrCodeNotFoundException "NotFound" // Indicates that the requested resource does not exist. // +// * ErrCodeInvalidSecurityException "InvalidSecurity" +// The credential signature isn't valid. You must use an HTTPS endpoint and +// sign your request using Signature Version 4. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/sns-2010-03-31/Unsubscribe func (c *SNS) Unsubscribe(input *UnsubscribeInput) (*UnsubscribeOutput, error) { req, out := c.UnsubscribeRequest(input) @@ -3542,6 +3622,20 @@ func (s *CreatePlatformEndpointOutput) SetEndpointArn(v string) *CreatePlatformE type CreateTopicInput struct { _ struct{} `type:"structure"` + // A map of attributes with their corresponding values. + // + // The following lists the names, descriptions, and values of the special request + // parameters that the CreateTopic action uses: + // + // * DeliveryPolicy – The policy that defines how Amazon SNS retries failed + // deliveries to HTTP/S endpoints. + // + // * DisplayName – The display name to use for a topic with SMS subscriptions. + // + // * Policy – The policy that defines who can access your topic. By default, + // only the topic owner can publish or subscribe to the topic. + Attributes map[string]*string `type:"map"` + // The name of the topic you want to create. 
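CreateTopicInput above now takes an Attributes map alongside Name. A sketch that sets only DisplayName, with a hypothetical topic name:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sns"
)

func main() {
	svc := sns.New(session.Must(session.NewSession()))

	out, err := svc.CreateTopic(&sns.CreateTopicInput{
		Name: aws.String("deploy-notifications"), // hypothetical topic name
		Attributes: map[string]*string{
			// The display name used for a topic with SMS subscriptions.
			"DisplayName": aws.String("Deploys"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(*out.TopicArn)
}
```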
// // Constraints: Topic names must be made up of only uppercase and lowercase @@ -3575,6 +3669,12 @@ func (s *CreateTopicInput) Validate() error { return nil } +// SetAttributes sets the Attributes field's value. +func (s *CreateTopicInput) SetAttributes(v map[string]*string) *CreateTopicInput { + s.Attributes = v + return s +} + // SetName sets the Name field's value. func (s *CreateTopicInput) SetName(v string) *CreateTopicInput { s.Name = &v @@ -3841,16 +3941,16 @@ type GetEndpointAttributesOutput struct { // Attributes include the following: // - // * CustomUserData -- arbitrary user data to associate with the endpoint. + // * CustomUserData – arbitrary user data to associate with the endpoint. // Amazon SNS does not use this data. The data must be in UTF-8 format and // less than 2KB. // - // * Enabled -- flag that enables/disables delivery to the endpoint. Amazon + // * Enabled – flag that enables/disables delivery to the endpoint. Amazon // SNS will set this to false when a notification service indicates to Amazon // SNS that the endpoint is invalid. Users can set it back to true, typically // after updating Token. // - // * Token -- device token, also referred to as a registration id, for an + // * Token – device token, also referred to as a registration id, for an // app and mobile device. This is returned from the notification service // when an app and mobile device are registered with the notification service. Attributes map[string]*string `type:"map"` @@ -3917,16 +4017,16 @@ type GetPlatformApplicationAttributesOutput struct { // Attributes include the following: // - // * EventEndpointCreated -- Topic ARN to which EndpointCreated event notifications + // * EventEndpointCreated – Topic ARN to which EndpointCreated event notifications // should be sent. // - // * EventEndpointDeleted -- Topic ARN to which EndpointDeleted event notifications + // * EventEndpointDeleted – Topic ARN to which EndpointDeleted event notifications // should be sent. // - // * EventEndpointUpdated -- Topic ARN to which EndpointUpdate event notifications + // * EventEndpointUpdated – Topic ARN to which EndpointUpdate event notifications // should be sent. // - // * EventDeliveryFailure -- Topic ARN to which DeliveryFailure event notifications + // * EventDeliveryFailure – Topic ARN to which DeliveryFailure event notifications // should be sent upon Direct Publish delivery failure (permanent) to one // of the application's endpoints. Attributes map[string]*string `type:"map"` @@ -4047,21 +4147,31 @@ type GetSubscriptionAttributesOutput struct { // A map of the subscription's attributes. Attributes in this map include the // following: // - // * SubscriptionArn -- the subscription's ARN + // * ConfirmationWasAuthenticated – true if the subscription confirmation + // request was authenticated. // - // * TopicArn -- the topic ARN that the subscription is associated with + // * DeliveryPolicy – The JSON serialization of the subscription's delivery + // policy. // - // * Owner -- the AWS account ID of the subscription's owner + // * EffectiveDeliveryPolicy – The JSON serialization of the effective delivery + // policy that takes into account the topic delivery policy and account system + // defaults. // - // * ConfirmationWasAuthenticated -- true if the subscription confirmation - // request was authenticated + // * FilterPolicy – The filter policy JSON that is assigned to the subscription. 
// - // * DeliveryPolicy -- the JSON serialization of the subscription's delivery - // policy + // * Owner – The AWS account ID of the subscription's owner. // - // * EffectiveDeliveryPolicy -- the JSON serialization of the effective delivery - // policy that takes into account the topic delivery policy and account system - // defaults + // * PendingConfirmation – true if the subscription hasn't been confirmed. + // To confirm a pending subscription, call the ConfirmSubscription action + // with a confirmation token. + // + // * RawMessageDelivery – true if raw message delivery is enabled for the + // subscription. Raw messages are free of JSON formatting and can be sent + // to HTTP/S and Amazon SQS endpoints. + // + // * SubscriptionArn – The subscription's ARN. + // + // * TopicArn – The topic ARN that the subscription is associated with. Attributes map[string]*string `type:"map"` } @@ -4126,27 +4236,26 @@ type GetTopicAttributesOutput struct { // A map of the topic's attributes. Attributes in this map include the following: // - // * TopicArn -- the topic's ARN + // * TopicArn – the topic's ARN // - // * Owner -- the AWS account ID of the topic's owner + // * Owner – the AWS account ID of the topic's owner // - // * Policy -- the JSON serialization of the topic's access control policy + // * Policy – the JSON serialization of the topic's access control policy // - // * DisplayName -- the human-readable name used in the "From" field for - // notifications to email and email-json endpoints + // * DisplayName – the human-readable name used in the "From" field for notifications + // to email and email-json endpoints // - // * SubscriptionsPending -- the number of subscriptions pending confirmation + // * SubscriptionsPending – the number of subscriptions pending confirmation // on this topic // - // * SubscriptionsConfirmed -- the number of confirmed subscriptions on this + // * SubscriptionsConfirmed – the number of confirmed subscriptions on this // topic // - // * SubscriptionsDeleted -- the number of deleted subscriptions on this - // topic + // * SubscriptionsDeleted – the number of deleted subscriptions on this topic // - // * DeliveryPolicy -- the JSON serialization of the topic's delivery policy + // * DeliveryPolicy – the JSON serialization of the topic's delivery policy // - // * EffectiveDeliveryPolicy -- the JSON serialization of the effective delivery + // * EffectiveDeliveryPolicy – the JSON serialization of the effective delivery // policy that takes into account system defaults Attributes map[string]*string `type:"map"` } @@ -4586,8 +4695,9 @@ type MessageAttributeValue struct { // BinaryValue is automatically base64 encoded/decoded by the SDK. BinaryValue []byte `type:"blob"` - // Amazon SNS supports the following logical data types: String, Number, and - // Binary. For more information, see Message Attribute Data Types (http://docs.aws.amazon.com/sns/latest/dg/SNSMessageAttributes.html#SNSMessageAttributes.DataTypes). + // Amazon SNS supports the following logical data types: String, String.Array, + // Number, and Binary. For more information, see Message Attribute Data Types + // (http://docs.aws.amazon.com/sns/latest/dg/SNSMessageAttributes.html#SNSMessageAttributes.DataTypes). // // DataType is a required field DataType *string `type:"string" required:"true"` @@ -4729,17 +4839,31 @@ func (s *PlatformApplication) SetPlatformApplicationArn(v string) *PlatformAppli type PublishInput struct { _ struct{} `type:"structure"` - // The message you want to send to the topic. 
+ // The message you want to send. // - // If you want to send the same message to all transport protocols, include - // the text of the message as a String value. + // The Message parameter is always a string. If you set MessageStructure to + // json, you must string-encode the Message parameter. // + // If you are publishing to a topic and you want to send the same message to + // all transport protocols, include the text of the message as a String value. // If you want to send different messages for each transport protocol, set the // value of the MessageStructure parameter to json and use a JSON object for // the Message parameter. // - // Constraints: Messages must be UTF-8 encoded strings at most 256 KB in size - // (262144 bytes, not 262144 characters). + // Constraints: + // + // With the exception of SMS, messages must be UTF-8 encoded strings and at + // most 256 KB in size (262,144 bytes, not 262,144 characters). + // + // * For SMS, each message can contain up to 140 characters. This character + // limit depends on the encoding schema. For example, an SMS message can + // contain 160 GSM characters, 140 ASCII characters, or 70 UCS-2 characters. + // + // * If you publish a message that exceeds this size limit, Amazon SNS sends + // the message as multiple messages, each fitting within the size limit. + // Messages aren't truncated mid-word but are cut off at whole-word boundaries. + // + // * The total size limit for a single SMS Publish action is 1,600 characters. // // JSON-specific constraints: // @@ -4995,16 +5119,16 @@ type SetEndpointAttributesInput struct { // A map of the endpoint attributes. Attributes in this map include the following: // - // * CustomUserData -- arbitrary user data to associate with the endpoint. + // * CustomUserData – arbitrary user data to associate with the endpoint. // Amazon SNS does not use this data. The data must be in UTF-8 format and // less than 2KB. // - // * Enabled -- flag that enables/disables delivery to the endpoint. Amazon + // * Enabled – flag that enables/disables delivery to the endpoint. Amazon // SNS will set this to false when a notification service indicates to Amazon // SNS that the endpoint is invalid. Users can set it back to true, typically // after updating Token. // - // * Token -- device token, also referred to as a registration id, for an + // * Token – device token, also referred to as a registration id, for an // app and mobile device. This is returned from the notification service // when an app and mobile device are registered with the notification service. // @@ -5076,36 +5200,35 @@ type SetPlatformApplicationAttributesInput struct { // A map of the platform application attributes. Attributes in this map include // the following: // - // * PlatformCredential -- The credential received from the notification - // service. For APNS/APNS_SANDBOX, PlatformCredential is private key. For - // GCM, PlatformCredential is "API key". For ADM, PlatformCredential is "client - // secret". + // * PlatformCredential – The credential received from the notification service. + // For APNS/APNS_SANDBOX, PlatformCredential is private key. For GCM, PlatformCredential + // is "API key". For ADM, PlatformCredential is "client secret". // - // * PlatformPrincipal -- The principal received from the notification service. + // * PlatformPrincipal – The principal received from the notification service. // For APNS/APNS_SANDBOX, PlatformPrincipal is SSL certificate. For GCM, // PlatformPrincipal is not applicable. 
For ADM, PlatformPrincipal is "client // id". // - // * EventEndpointCreated -- Topic ARN to which EndpointCreated event notifications + // * EventEndpointCreated – Topic ARN to which EndpointCreated event notifications // should be sent. // - // * EventEndpointDeleted -- Topic ARN to which EndpointDeleted event notifications + // * EventEndpointDeleted – Topic ARN to which EndpointDeleted event notifications // should be sent. // - // * EventEndpointUpdated -- Topic ARN to which EndpointUpdate event notifications + // * EventEndpointUpdated – Topic ARN to which EndpointUpdate event notifications // should be sent. // - // * EventDeliveryFailure -- Topic ARN to which DeliveryFailure event notifications + // * EventDeliveryFailure – Topic ARN to which DeliveryFailure event notifications // should be sent upon Direct Publish delivery failure (permanent) to one // of the application's endpoints. // - // * SuccessFeedbackRoleArn -- IAM role ARN used to give Amazon SNS write + // * SuccessFeedbackRoleArn – IAM role ARN used to give Amazon SNS write // access to use CloudWatch Logs on your behalf. // - // * FailureFeedbackRoleArn -- IAM role ARN used to give Amazon SNS write + // * FailureFeedbackRoleArn – IAM role ARN used to give Amazon SNS write // access to use CloudWatch Logs on your behalf. // - // * SuccessFeedbackSampleRate -- Sample rate percentage (0-100) of successfully + // * SuccessFeedbackSampleRate – Sample rate percentage (0-100) of successfully // delivered messages. // // Attributes is a required field @@ -5186,8 +5309,10 @@ type SetSMSAttributesInput struct { // costs that exceed your limit. // // By default, the spend limit is set to the maximum allowed by Amazon SNS. - // If you want to exceed the maximum, contact AWS Support (https://aws.amazon.com/premiumsupport/) - // or your AWS sales representative for a service limit increase. + // If you want to raise the limit, submit an SNS Limit Increase case (https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase&limitType=service-code-sns). + // For New limit value, enter your desired monthly spend limit. In the Use Case + // Description field, explain that you are requesting an SMS monthly spend limit + // increase. // // DeliveryStatusIAMRole – The ARN of the IAM role that allows Amazon SNS to // write logs about SMS deliveries in CloudWatch Logs. For each SMS message @@ -5298,10 +5423,22 @@ func (s SetSMSAttributesOutput) GoString() string { type SetSubscriptionAttributesInput struct { _ struct{} `type:"structure"` - // The name of the attribute you want to set. Only a subset of the subscriptions - // attributes are mutable. + // A map of attributes with their corresponding values. + // + // The following lists the names, descriptions, and values of the special request + // parameters that the SetTopicAttributes action uses: + // + // * DeliveryPolicy – The policy that defines how Amazon SNS retries failed + // deliveries to HTTP/S endpoints. + // + // * FilterPolicy – The simple JSON object that lets your subscriber receive + // only a subset of messages, rather than receiving every message published + // to the topic. // - // Valid values: DeliveryPolicy | RawMessageDelivery + // * RawMessageDelivery – When set to true, enables raw message delivery + // to Amazon SQS or HTTP/S endpoints. This eliminates the need for the endpoints + // to process JSON formatting, which is otherwise created for Amazon SNS + // metadata. 
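The SetSMSAttributes documentation above covers the monthly SMS spend limit and the delivery-status IAM role. A hedged sketch of setting both; the attribute names follow the descriptions above and the role ARN is a placeholder:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sns"
)

func main() {
	svc := sns.New(session.Must(session.NewSession()))

	_, err := svc.SetSMSAttributes(&sns.SetSMSAttributesInput{
		Attributes: map[string]*string{
			// Monthly SMS spend limit in USD; raising it beyond the default
			// requires an SNS limit increase case.
			"MonthlySpendLimit": aws.String("5"),
			// Role that lets SNS write SMS delivery logs to CloudWatch Logs (placeholder ARN).
			"DeliveryStatusIAMRole": aws.String("arn:aws:iam::123456789012:role/sns-sms-delivery-status"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```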
// // AttributeName is a required field AttributeName *string `type:"string" required:"true"` @@ -5377,10 +5514,18 @@ func (s SetSubscriptionAttributesOutput) GoString() string { type SetTopicAttributesInput struct { _ struct{} `type:"structure"` - // The name of the attribute you want to set. Only a subset of the topic's attributes - // are mutable. + // A map of attributes with their corresponding values. // - // Valid values: Policy | DisplayName | DeliveryPolicy + // The following lists the names, descriptions, and values of the special request + // parameters that the SetTopicAttributes action uses: + // + // * DeliveryPolicy – The policy that defines how Amazon SNS retries failed + // deliveries to HTTP/S endpoints. + // + // * DisplayName – The display name to use for a topic with SMS subscriptions. + // + // * Policy – The policy that defines who can access your topic. By default, + // only the topic owner can publish or subscribe to the topic. // // AttributeName is a required field AttributeName *string `type:"string" required:"true"` @@ -5456,6 +5601,24 @@ func (s SetTopicAttributesOutput) GoString() string { type SubscribeInput struct { _ struct{} `type:"structure"` + // A map of attributes with their corresponding values. + // + // The following lists the names, descriptions, and values of the special request + // parameters that the SetTopicAttributes action uses: + // + // * DeliveryPolicy – The policy that defines how Amazon SNS retries failed + // deliveries to HTTP/S endpoints. + // + // * FilterPolicy – The simple JSON object that lets your subscriber receive + // only a subset of messages, rather than receiving every message published + // to the topic. + // + // * RawMessageDelivery – When set to true, enables raw message delivery + // to Amazon SQS or HTTP/S endpoints. This eliminates the need for the endpoints + // to process JSON formatting, which is otherwise created for Amazon SNS + // metadata. + Attributes map[string]*string `type:"map"` + // The endpoint that you want to receive notifications. Endpoints vary by protocol: // // * For the http protocol, the endpoint is an URL beginning with "http://" @@ -5479,26 +5642,41 @@ type SubscribeInput struct { // The protocol you want to use. Supported protocols include: // - // * http -- delivery of JSON-encoded message via HTTP POST + // * http – delivery of JSON-encoded message via HTTP POST // - // * https -- delivery of JSON-encoded message via HTTPS POST + // * https – delivery of JSON-encoded message via HTTPS POST // - // * email -- delivery of message via SMTP + // * email – delivery of message via SMTP // - // * email-json -- delivery of JSON-encoded message via SMTP + // * email-json – delivery of JSON-encoded message via SMTP // - // * sms -- delivery of message via SMS + // * sms – delivery of message via SMS // - // * sqs -- delivery of JSON-encoded message to an Amazon SQS queue + // * sqs – delivery of JSON-encoded message to an Amazon SQS queue // - // * application -- delivery of JSON-encoded message to an EndpointArn for + // * application – delivery of JSON-encoded message to an EndpointArn for // a mobile app and device. // - // * lambda -- delivery of JSON-encoded message to an AWS Lambda function. + // * lambda – delivery of JSON-encoded message to an AWS Lambda function. // // Protocol is a required field Protocol *string `type:"string" required:"true"` + // Sets whether the response from the Subscribe request includes the subscription + // ARN, even if the subscription is not yet confirmed. 
+ // + // If you set this parameter to false, the response includes the ARN for confirmed + // subscriptions, but it includes an ARN value of "pending subscription" for + // subscriptions that are not yet confirmed. A subscription becomes confirmed + // when the subscriber calls the ConfirmSubscription action with a confirmation + // token. + // + // If you set this parameter to true, the response includes the ARN in all cases, + // even if the subscription is not yet confirmed. + // + // The default value is false. + ReturnSubscriptionArn *bool `type:"boolean"` + // The ARN of the topic you want to subscribe to. // // TopicArn is a required field @@ -5531,6 +5709,12 @@ func (s *SubscribeInput) Validate() error { return nil } +// SetAttributes sets the Attributes field's value. +func (s *SubscribeInput) SetAttributes(v map[string]*string) *SubscribeInput { + s.Attributes = v + return s +} + // SetEndpoint sets the Endpoint field's value. func (s *SubscribeInput) SetEndpoint(v string) *SubscribeInput { s.Endpoint = &v @@ -5543,6 +5727,12 @@ func (s *SubscribeInput) SetProtocol(v string) *SubscribeInput { return s } +// SetReturnSubscriptionArn sets the ReturnSubscriptionArn field's value. +func (s *SubscribeInput) SetReturnSubscriptionArn(v bool) *SubscribeInput { + s.ReturnSubscriptionArn = &v + return s +} + // SetTopicArn sets the TopicArn field's value. func (s *SubscribeInput) SetTopicArn(v string) *SubscribeInput { s.TopicArn = &v @@ -5553,8 +5743,10 @@ func (s *SubscribeInput) SetTopicArn(v string) *SubscribeInput { type SubscribeOutput struct { _ struct{} `type:"structure"` - // The ARN of the subscription, if the service was able to create a subscription - // immediately (without requiring endpoint owner confirmation). + // The ARN of the subscription if it is confirmed, or the string "pending confirmation" + // if the subscription requires confirmation. However, if the API request parameter + // ReturnSubscriptionArn is true, then the value is always the subscription + // ARN, even if the subscription requires confirmation. SubscriptionArn *string `type:"string"` } diff --git a/vendor/github.com/aws/aws-sdk-go/service/sns/errors.go b/vendor/github.com/aws/aws-sdk-go/service/sns/errors.go index 60e503f256a..9a16c083fa3 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sns/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sns/errors.go @@ -16,6 +16,14 @@ const ( // Exception error indicating endpoint disabled. ErrCodeEndpointDisabledException = "EndpointDisabled" + // ErrCodeFilterPolicyLimitExceededException for service response error code + // "FilterPolicyLimitExceeded". + // + // Indicates that the number of filter polices in your AWS account exceeds the + // limit. To add more filter polices, submit an SNS Limit Increase case in the + // AWS Support Center. + ErrCodeFilterPolicyLimitExceededException = "FilterPolicyLimitExceeded" + // ErrCodeInternalErrorException for service response error code // "InternalError". // @@ -34,6 +42,57 @@ const ( // Indicates that a request parameter does not comply with the associated constraints. ErrCodeInvalidParameterValueException = "ParameterValueInvalid" + // ErrCodeInvalidSecurityException for service response error code + // "InvalidSecurity". + // + // The credential signature isn't valid. You must use an HTTPS endpoint and + // sign your request using Signature Version 4. 
+ ErrCodeInvalidSecurityException = "InvalidSecurity" + + // ErrCodeKMSAccessDeniedException for service response error code + // "KMSAccessDenied". + // + // The ciphertext references a key that doesn't exist or that you don't have + // access to. + ErrCodeKMSAccessDeniedException = "KMSAccessDenied" + + // ErrCodeKMSDisabledException for service response error code + // "KMSDisabled". + // + // The request was rejected because the specified customer master key (CMK) + // isn't enabled. + ErrCodeKMSDisabledException = "KMSDisabled" + + // ErrCodeKMSInvalidStateException for service response error code + // "KMSInvalidState". + // + // The request was rejected because the state of the specified resource isn't + // valid for this request. For more information, see How Key State Affects Use + // of a Customer Master Key (http://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) + // in the AWS Key Management Service Developer Guide. + ErrCodeKMSInvalidStateException = "KMSInvalidState" + + // ErrCodeKMSNotFoundException for service response error code + // "KMSNotFound". + // + // The request was rejected because the specified entity or resource can't be + // found. + ErrCodeKMSNotFoundException = "KMSNotFound" + + // ErrCodeKMSOptInRequired for service response error code + // "KMSOptInRequired". + // + // The AWS access key ID needs a subscription for the service. + ErrCodeKMSOptInRequired = "KMSOptInRequired" + + // ErrCodeKMSThrottlingException for service response error code + // "KMSThrottling". + // + // The request was denied due to request throttling. For more information about + // throttling, see Limits (http://docs.aws.amazon.com/kms/latest/developerguide/limits.html#requests-per-second) + // in the AWS Key Management Service Developer Guide. + ErrCodeKMSThrottlingException = "KMSThrottling" + // ErrCodeNotFoundException for service response error code // "NotFound". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/sns/service.go b/vendor/github.com/aws/aws-sdk-go/service/sns/service.go index fb75ea60a5d..96d7c8ba05c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sns/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sns/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "sns" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "sns" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "SNS" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the SNS client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/sqs/api.go b/vendor/github.com/aws/aws-sdk-go/service/sqs/api.go index c6997da8711..df375a7026a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sqs/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sqs/api.go @@ -16,8 +16,8 @@ const opAddPermission = "AddPermission" // AddPermissionRequest generates a "aws/request.Request" representing the // client's request for the AddPermission operation. 
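SubscribeInput above gains an Attributes map and a ReturnSubscriptionArn flag, and SubscribeOutput reports either the subscription ARN or "pending confirmation". A sketch that subscribes an SQS queue with raw message delivery; both ARNs are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sns"
)

func main() {
	svc := sns.New(session.Must(session.NewSession()))

	out, err := svc.Subscribe(&sns.SubscribeInput{
		TopicArn: aws.String("arn:aws:sns:us-east-1:123456789012:example-topic"), // placeholder
		Protocol: aws.String("sqs"),
		Endpoint: aws.String("arn:aws:sqs:us-east-1:123456789012:example-queue"), // placeholder
		Attributes: map[string]*string{
			// Deliver the raw message body without the SNS JSON envelope.
			"RawMessageDelivery": aws.String("true"),
		},
		// Ask for the subscription ARN even while the subscription is unconfirmed.
		ReturnSubscriptionArn: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
	// Without ReturnSubscriptionArn, this can be the literal string "pending confirmation".
	fmt.Println(*out.SubscriptionArn)
}
```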
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -63,21 +63,29 @@ func (c *SQS) AddPermissionRequest(input *AddPermissionInput) (req *request.Requ // // When you create a queue, you have full control access rights for the queue. // Only you, the owner of the queue, can grant or deny permissions to the queue. -// For more information about these permissions, see Shared Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/acp-overview.html) +// For more information about these permissions, see Allow Developers to Write +// Messages to a Shared Queue (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-writing-an-sqs-policy.html#write-messages-to-shared-queue) // in the Amazon Simple Queue Service Developer Guide. // // AddPermission writes an Amazon-SQS-generated policy. If you want to write // your own policy, use SetQueueAttributes to upload your policy. For more information -// about writing your own policy, see Using The Access Policy Language (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AccessPolicyLanguage.html) +// about writing your own policy, see Using Custom Policies with the Amazon +// SQS Access Policy Language (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-creating-custom-policies.html) // in the Amazon Simple Queue Service Developer Guide. // +// An Amazon SQS policy can have a maximum of 7 actions. +// // Some actions take lists of parameters. These lists are specified using the // param.n notation. Values of n are integers starting from 1. For example, // a parameter list with two elements looks like this: // -// &Attribute.1=this +// &Attribute.1=first +// +// &Attribute.2=second // -// &Attribute.2=that +// Cross-account permissions don't apply to this action. For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) +// in the Amazon Simple Queue Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -88,10 +96,10 @@ func (c *SQS) AddPermissionRequest(input *AddPermissionInput) (req *request.Requ // // Returned Error Codes: // * ErrCodeOverLimit "OverLimit" -// The action that you requested would violate a limit. For example, ReceiveMessage -// returns this error if the maximum number of inflight messages is reached. -// AddPermission returns this error if the maximum number of permissions for -// the queue is reached. +// The specified action violates a limit. For example, ReceiveMessage returns +// this error if the maximum number of inflight messages is reached and AddPermission +// returns this error if the maximum number of permissions for the queue is +// reached. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/sqs-2012-11-05/AddPermission func (c *SQS) AddPermission(input *AddPermissionInput) (*AddPermissionOutput, error) { @@ -119,8 +127,8 @@ const opChangeMessageVisibility = "ChangeMessageVisibility" // ChangeMessageVisibilityRequest generates a "aws/request.Request" representing the // client's request for the ChangeMessageVisibility operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -162,18 +170,15 @@ func (c *SQS) ChangeMessageVisibilityRequest(input *ChangeMessageVisibilityInput // ChangeMessageVisibility API operation for Amazon Simple Queue Service. // // Changes the visibility timeout of a specified message in a queue to a new -// value. The maximum allowed timeout value is 12 hours. Thus, you can't extend -// the timeout of a message in an existing queue to more than a total visibility -// timeout of 12 hours. For more information, see Visibility Timeout (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) +// value. The maximum allowed timeout value is 12 hours. For more information, +// see Visibility Timeout (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) // in the Amazon Simple Queue Service Developer Guide. // // For example, you have a message with a visibility timeout of 5 minutes. After -// 3 minutes, you call ChangeMessageVisiblity with a timeout of 10 minutes. -// At that time, the timeout for the message is extended by 10 minutes beyond -// the time of the ChangeMessageVisibility action. This results in a total visibility -// timeout of 13 minutes. You can continue to call the ChangeMessageVisibility -// to extend the visibility timeout to a maximum of 12 hours. If you try to -// extend the visibility timeout beyond 12 hours, your request is rejected. +// 3 minutes, you call ChangeMessageVisibility with a timeout of 10 minutes. +// You can continue to call ChangeMessageVisibility to extend the visibility +// timeout to a maximum of 12 hours. If you try to extend the visibility timeout +// beyond 12 hours, your request is rejected. // // A message is considered to be in flight after it's received from a queue // by a consumer, but not yet deleted from the queue. @@ -207,10 +212,10 @@ func (c *SQS) ChangeMessageVisibilityRequest(input *ChangeMessageVisibilityInput // // Returned Error Codes: // * ErrCodeMessageNotInflight "AWS.SimpleQueueService.MessageNotInflight" -// The message referred to isn't in flight. +// The specified message isn't in flight. // // * ErrCodeReceiptHandleIsInvalid "ReceiptHandleIsInvalid" -// The receipt handle provided isn't valid. +// The specified receipt handle isn't valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/sqs-2012-11-05/ChangeMessageVisibility func (c *SQS) ChangeMessageVisibility(input *ChangeMessageVisibilityInput) (*ChangeMessageVisibilityOutput, error) { @@ -238,8 +243,8 @@ const opChangeMessageVisibilityBatch = "ChangeMessageVisibilityBatch" // ChangeMessageVisibilityBatchRequest generates a "aws/request.Request" representing the // client's request for the ChangeMessageVisibilityBatch operation. 
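ChangeMessageVisibility above sets a received message's remaining visibility timeout, up to the 12-hour maximum. A sketch that extends a message to 10 minutes; the queue URL and receipt handle are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	svc := sqs.New(session.Must(session.NewSession()))

	_, err := svc.ChangeMessageVisibility(&sqs.ChangeMessageVisibilityInput{
		QueueUrl:          aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"), // placeholder
		ReceiptHandle:     aws.String("AQEB...example-receipt-handle"), // placeholder from a prior ReceiveMessage
		VisibilityTimeout: aws.Int64(600),                              // 10 minutes; the maximum is 12 hours
	})
	if err != nil {
		log.Fatal(err)
	}
}
```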
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -291,9 +296,9 @@ func (c *SQS) ChangeMessageVisibilityBatchRequest(input *ChangeMessageVisibility // param.n notation. Values of n are integers starting from 1. For example, // a parameter list with two elements looks like this: // -// &Attribute.1=this +// &Attribute.1=first // -// &Attribute.2=that +// &Attribute.2=second // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -341,8 +346,8 @@ const opCreateQueue = "CreateQueue" // CreateQueueRequest generates a "aws/request.Request" representing the // client's request for the CreateQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -390,7 +395,7 @@ func (c *SQS) CreateQueueRequest(input *CreateQueueInput) (req *request.Request, // You can't change the queue type after you create it and you can't convert // an existing standard queue into a FIFO queue. You must either create a // new FIFO queue for your application or delete your existing standard queue -// and recreate it as a FIFO queue. For more information, see Moving From +// and recreate it as a FIFO queue. For more information, see Moving From // a Standard Queue to a FIFO Queue (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html#FIFO-queues-moving) // in the Amazon Simple Queue Service Developer Guide. // @@ -418,9 +423,13 @@ func (c *SQS) CreateQueueRequest(input *CreateQueueInput) (req *request.Request, // param.n notation. Values of n are integers starting from 1. For example, // a parameter list with two elements looks like this: // -// &Attribute.1=this +// &Attribute.1=first // -// &Attribute.2=that +// &Attribute.2=second +// +// Cross-account permissions don't apply to this action. For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) +// in the Amazon Simple Queue Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -432,10 +441,10 @@ func (c *SQS) CreateQueueRequest(input *CreateQueueInput) (req *request.Request, // Returned Error Codes: // * ErrCodeQueueDeletedRecently "AWS.SimpleQueueService.QueueDeletedRecently" // You must wait 60 seconds after deleting a queue before you can create another -// one with the same name. +// queue with the same name. // // * ErrCodeQueueNameExists "QueueAlreadyExists" -// A queue already exists with this name. Amazon SQS returns this error only +// A queue with this name already exists. 
Amazon SQS returns this error only // if the request includes attributes whose values differ from those of the // existing queue. // @@ -465,8 +474,8 @@ const opDeleteMessage = "DeleteMessage" // DeleteMessageRequest generates a "aws/request.Request" representing the // client's request for the DeleteMessage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -507,26 +516,26 @@ func (c *SQS) DeleteMessageRequest(input *DeleteMessageInput) (req *request.Requ // DeleteMessage API operation for Amazon Simple Queue Service. // -// Deletes the specified message from the specified queue. You specify the message -// by using the message's receipt handle and not the MessageId you receive when -// you send the message. Even if the message is locked by another reader due -// to the visibility timeout setting, it is still deleted from the queue. If -// you leave a message in the queue for longer than the queue's configured retention -// period, Amazon SQS automatically deletes the message. +// Deletes the specified message from the specified queue. To select the message +// to delete, use the ReceiptHandle of the message (not the MessageId which +// you receive when you send the message). Amazon SQS can delete a message from +// a queue even if a visibility timeout setting causes the message to be locked +// by another consumer. Amazon SQS automatically deletes messages left in a +// queue longer than the retention period configured for the queue. // -// The receipt handle is associated with a specific instance of receiving the -// message. If you receive a message more than once, the receipt handle you -// get each time you receive the message is different. If you don't provide -// the most recently received receipt handle for the message when you use the -// DeleteMessage action, the request succeeds, but the message might not be -// deleted. +// The ReceiptHandle is associated with a specific instance of receiving a message. +// If you receive a message more than once, the ReceiptHandle is different each +// time you receive a message. When you use the DeleteMessage action, you must +// provide the most recently received ReceiptHandle for the message (otherwise, +// the request succeeds, but the message might not be deleted). // // For standard queues, it is possible to receive a message even after you delete -// it. This might happen on rare occasions if one of the servers storing a copy -// of the message is unavailable when you send the request to delete the message. -// The copy remains on the server and might be returned to you on a subsequent -// receive request. You should ensure that your application is idempotent, so -// that receiving a message more than once does not cause issues. +// it. This might happen on rare occasions if one of the servers which stores +// a copy of the message is unavailable when you send the request to delete +// the message. The copy remains on the server and might be returned to you +// during a subsequent receive request. You should ensure that your application +// is idempotent, so that receiving a message more than once does not cause +// issues. // // Returns awserr.Error for service API and SDK errors. 
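DeleteMessage above requires the ReceiptHandle from the most recent receive, not the MessageId. A sketch of the receive-then-delete flow with a placeholder queue URL:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	svc := sqs.New(session.Must(session.NewSession()))
	queueURL := aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/example-queue") // placeholder

	out, err := svc.ReceiveMessage(&sqs.ReceiveMessageInput{
		QueueUrl:            queueURL,
		MaxNumberOfMessages: aws.Int64(1),
		WaitTimeSeconds:     aws.Int64(10), // long polling
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, m := range out.Messages {
		// Process *m.Body here.

		// Delete using the ReceiptHandle from this receive, not the MessageId.
		if _, err := svc.DeleteMessage(&sqs.DeleteMessageInput{
			QueueUrl:      queueURL,
			ReceiptHandle: m.ReceiptHandle,
		}); err != nil {
			log.Fatal(err)
		}
	}
}
```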
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -537,10 +546,10 @@ func (c *SQS) DeleteMessageRequest(input *DeleteMessageInput) (req *request.Requ // // Returned Error Codes: // * ErrCodeInvalidIdFormat "InvalidIdFormat" -// The receipt handle isn't valid for the current version. +// The specified receipt handle isn't valid for the current version. // // * ErrCodeReceiptHandleIsInvalid "ReceiptHandleIsInvalid" -// The receipt handle provided isn't valid. +// The specified receipt handle isn't valid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/sqs-2012-11-05/DeleteMessage func (c *SQS) DeleteMessage(input *DeleteMessageInput) (*DeleteMessageOutput, error) { @@ -568,8 +577,8 @@ const opDeleteMessageBatch = "DeleteMessageBatch" // DeleteMessageBatchRequest generates a "aws/request.Request" representing the // client's request for the DeleteMessageBatch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -620,9 +629,9 @@ func (c *SQS) DeleteMessageBatchRequest(input *DeleteMessageBatchInput) (req *re // param.n notation. Values of n are integers starting from 1. For example, // a parameter list with two elements looks like this: // -// &Attribute.1=this +// &Attribute.1=first // -// &Attribute.2=that +// &Attribute.2=second // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -670,8 +679,8 @@ const opDeleteQueue = "DeleteQueue" // DeleteQueueRequest generates a "aws/request.Request" representing the // client's request for the DeleteQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -726,6 +735,10 @@ func (c *SQS) DeleteQueueRequest(input *DeleteQueueInput) (req *request.Request, // When you delete a queue, you must wait at least 60 seconds before creating // a queue with the same name. // +// Cross-account permissions don't apply to this action. For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) +// in the Amazon Simple Queue Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -758,8 +771,8 @@ const opGetQueueAttributes = "GetQueueAttributes" // GetQueueAttributesRequest generates a "aws/request.Request" representing the // client's request for the GetQueueAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -807,9 +820,9 @@ func (c *SQS) GetQueueAttributesRequest(input *GetQueueAttributesInput) (req *re // param.n notation. Values of n are integers starting from 1. For example, // a parameter list with two elements looks like this: // -// &Attribute.1=this +// &Attribute.1=first // -// &Attribute.2=that +// &Attribute.2=second // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -820,7 +833,7 @@ func (c *SQS) GetQueueAttributesRequest(input *GetQueueAttributesInput) (req *re // // Returned Error Codes: // * ErrCodeInvalidAttributeName "InvalidAttributeName" -// The attribute referred to doesn't exist. +// The specified attribute doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/sqs-2012-11-05/GetQueueAttributes func (c *SQS) GetQueueAttributes(input *GetQueueAttributesInput) (*GetQueueAttributesOutput, error) { @@ -848,8 +861,8 @@ const opGetQueueUrl = "GetQueueUrl" // GetQueueUrlRequest generates a "aws/request.Request" representing the // client's request for the GetQueueUrl operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -888,13 +901,13 @@ func (c *SQS) GetQueueUrlRequest(input *GetQueueUrlInput) (req *request.Request, // GetQueueUrl API operation for Amazon Simple Queue Service. // -// Returns the URL of an existing queue. This action provides a simple way to -// retrieve the URL of an Amazon SQS queue. +// Returns the URL of an existing Amazon SQS queue. // // To access a queue that belongs to another AWS account, use the QueueOwnerAWSAccountId // parameter to specify the account ID of the queue's owner. The queue's owner // must grant you permission to access the queue. For more information about -// shared queue access, see AddPermission or see Shared Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/acp-overview.html) +// shared queue access, see AddPermission or see Allow Developers to Write Messages +// to a Shared Queue (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-writing-an-sqs-policy.html#write-messages-to-shared-queue) // in the Amazon Simple Queue Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -906,7 +919,7 @@ func (c *SQS) GetQueueUrlRequest(input *GetQueueUrlInput) (req *request.Request, // // Returned Error Codes: // * ErrCodeQueueDoesNotExist "AWS.SimpleQueueService.NonExistentQueue" -// The queue referred to doesn't exist. +// The specified queue doesn't exist. 
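GetQueueUrl above resolves a queue name to its URL, optionally across accounts via QueueOwnerAWSAccountId. A sketch with placeholder values:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	svc := sqs.New(session.Must(session.NewSession()))

	out, err := svc.GetQueueUrl(&sqs.GetQueueUrlInput{
		QueueName: aws.String("shared-queue"), // placeholder
		// Only needed when the queue belongs to another account that has granted access.
		QueueOwnerAWSAccountId: aws.String("210987654321"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(*out.QueueUrl)
}
```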
// // See also, https://docs.aws.amazon.com/goto/WebAPI/sqs-2012-11-05/GetQueueUrl func (c *SQS) GetQueueUrl(input *GetQueueUrlInput) (*GetQueueUrlOutput, error) { @@ -934,8 +947,8 @@ const opListDeadLetterSourceQueues = "ListDeadLetterSourceQueues" // ListDeadLetterSourceQueuesRequest generates a "aws/request.Request" representing the // client's request for the ListDeadLetterSourceQueues operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -990,7 +1003,7 @@ func (c *SQS) ListDeadLetterSourceQueuesRequest(input *ListDeadLetterSourceQueue // // Returned Error Codes: // * ErrCodeQueueDoesNotExist "AWS.SimpleQueueService.NonExistentQueue" -// The queue referred to doesn't exist. +// The specified queue doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/sqs-2012-11-05/ListDeadLetterSourceQueues func (c *SQS) ListDeadLetterSourceQueues(input *ListDeadLetterSourceQueuesInput) (*ListDeadLetterSourceQueuesOutput, error) { @@ -1018,8 +1031,8 @@ const opListQueueTags = "ListQueueTags" // ListQueueTagsRequest generates a "aws/request.Request" representing the // client's request for the ListQueueTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1059,7 +1072,7 @@ func (c *SQS) ListQueueTagsRequest(input *ListQueueTagsInput) (req *request.Requ // ListQueueTags API operation for Amazon Simple Queue Service. // // List all cost allocation tags added to the specified Amazon SQS queue. For -// an overview, see Tagging Amazon SQS Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-tagging-queues.html) +// an overview, see Tagging Your Amazon SQS Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-queue-tags.html) // in the Amazon Simple Queue Service Developer Guide. // // When you use queue tags, keep the following guidelines in mind: @@ -1074,10 +1087,14 @@ func (c *SQS) ListQueueTagsRequest(input *ListQueueTagsInput) (req *request.Requ // * A new tag with a key identical to that of an existing tag overwrites // the existing tag. // -// * Tagging API actions are limited to 5 TPS per AWS account. If your application +// * Tagging actions are limited to 5 TPS per AWS account. If your application // requires a higher throughput, file a technical support request (https://console.aws.amazon.com/support/home#/case/create?issueType=technical). // -// For a full list of tag restrictions, see Limits Related to Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-queues.html) +// For a full list of tag restrictions, see Limits Related to Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-limits.html#limits-queues) +// in the Amazon Simple Queue Service Developer Guide. +// +// Cross-account permissions don't apply to this action. 
For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) // in the Amazon Simple Queue Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1112,8 +1129,8 @@ const opListQueues = "ListQueues" // ListQueuesRequest generates a "aws/request.Request" representing the // client's request for the ListQueues operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1156,6 +1173,10 @@ func (c *SQS) ListQueuesRequest(input *ListQueuesInput) (req *request.Request, o // is 1,000. If you specify a value for the optional QueueNamePrefix parameter, // only queues with a name that begins with the specified value are returned. // +// Cross-account permissions don't apply to this action. For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) +// in the Amazon Simple Queue Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1188,8 +1209,8 @@ const opPurgeQueue = "PurgeQueue" // PurgeQueueRequest generates a "aws/request.Request" representing the // client's request for the PurgeQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1232,14 +1253,17 @@ func (c *SQS) PurgeQueueRequest(input *PurgeQueueInput) (req *request.Request, o // // Deletes the messages in a queue specified by the QueueURL parameter. // -// When you use the PurgeQueue action, you can't retrieve a message deleted +// When you use the PurgeQueue action, you can't retrieve any messages deleted // from a queue. // -// When you purge a queue, the message deletion process takes up to 60 seconds. -// All messages sent to the queue before calling the PurgeQueue action are deleted. -// Messages sent to the queue while it is being purged might be deleted. While -// the queue is being purged, messages sent to the queue before PurgeQueue is -// called might be received, but are deleted within the next minute. +// The message deletion process takes up to 60 seconds. We recommend waiting +// for 60 seconds regardless of your queue's size. +// +// Messages sent to the queue before you call PurgeQueue might be received but +// are deleted within the next minute. +// +// Messages sent to the queue after you call PurgeQueue might be deleted while +// the queue is being purged. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1250,7 +1274,7 @@ func (c *SQS) PurgeQueueRequest(input *PurgeQueueInput) (req *request.Request, o // // Returned Error Codes: // * ErrCodeQueueDoesNotExist "AWS.SimpleQueueService.NonExistentQueue" -// The queue referred to doesn't exist. +// The specified queue doesn't exist. // // * ErrCodePurgeQueueInProgress "AWS.SimpleQueueService.PurgeQueueInProgress" // Indicates that the specified queue previously received a PurgeQueue request @@ -1283,8 +1307,8 @@ const opReceiveMessage = "ReceiveMessage" // ReceiveMessageRequest generates a "aws/request.Request" representing the // client's request for the ReceiveMessage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1380,10 +1404,10 @@ func (c *SQS) ReceiveMessageRequest(input *ReceiveMessageInput) (req *request.Re // // Returned Error Codes: // * ErrCodeOverLimit "OverLimit" -// The action that you requested would violate a limit. For example, ReceiveMessage -// returns this error if the maximum number of inflight messages is reached. -// AddPermission returns this error if the maximum number of permissions for -// the queue is reached. +// The specified action violates a limit. For example, ReceiveMessage returns +// this error if the maximum number of inflight messages is reached and AddPermission +// returns this error if the maximum number of permissions for the queue is +// reached. // // See also, https://docs.aws.amazon.com/goto/WebAPI/sqs-2012-11-05/ReceiveMessage func (c *SQS) ReceiveMessage(input *ReceiveMessageInput) (*ReceiveMessageOutput, error) { @@ -1411,8 +1435,8 @@ const opRemovePermission = "RemovePermission" // RemovePermissionRequest generates a "aws/request.Request" representing the // client's request for the RemovePermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1454,7 +1478,13 @@ func (c *SQS) RemovePermissionRequest(input *RemovePermissionInput) (req *reques // RemovePermission API operation for Amazon Simple Queue Service. // // Revokes any permissions in the queue policy that matches the specified Label -// parameter. Only the owner of the queue can remove permissions. +// parameter. +// +// Only the owner of a queue can remove permissions from it. +// +// Cross-account permissions don't apply to this action. For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) +// in the Amazon Simple Queue Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1488,8 +1518,8 @@ const opSendMessage = "SendMessage" // SendMessageRequest generates a "aws/request.Request" representing the // client's request for the SendMessage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1578,8 +1608,8 @@ const opSendMessageBatch = "SendMessageBatch" // SendMessageBatchRequest generates a "aws/request.Request" representing the // client's request for the SendMessageBatch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1646,9 +1676,9 @@ func (c *SQS) SendMessageBatchRequest(input *SendMessageBatchInput) (req *reques // param.n notation. Values of n are integers starting from 1. For example, // a parameter list with two elements looks like this: // -// &Attribute.1=this +// &Attribute.1=first // -// &Attribute.2=that +// &Attribute.2=second // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1702,8 +1732,8 @@ const opSetQueueAttributes = "SetQueueAttributes" // SetQueueAttributesRequest generates a "aws/request.Request" representing the // client's request for the SetQueueAttributes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1753,6 +1783,10 @@ func (c *SQS) SetQueueAttributesRequest(input *SetQueueAttributesInput) (req *re // this action, we recommend that you structure your code so that it can handle // new attributes gracefully. // +// Cross-account permissions don't apply to this action. For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) +// in the Amazon Simple Queue Service Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1762,7 +1796,7 @@ func (c *SQS) SetQueueAttributesRequest(input *SetQueueAttributesInput) (req *re // // Returned Error Codes: // * ErrCodeInvalidAttributeName "InvalidAttributeName" -// The attribute referred to doesn't exist. +// The specified attribute doesn't exist. 
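(Illustrative aside, not part of the vendored file: a minimal sketch of the `SetQueueAttributes` operation documented above, assuming a hypothetical queue URL and plain string attribute names; an `InvalidAttributeName` error surfaces through the same `awserr.Error` assertion pattern shown earlier.)

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/sqs"
)

// configureQueue tunes a couple of the attributes listed above on an
// existing queue. Attribute values are always passed as strings.
func configureQueue(svc *sqs.SQS, queueURL string) error {
	_, err := svc.SetQueueAttributes(&sqs.SetQueueAttributesInput{
		QueueUrl: aws.String(queueURL),
		Attributes: map[string]*string{
			"VisibilityTimeout":             aws.String("60"), // seconds, 0 to 43,200
			"ReceiveMessageWaitTimeSeconds": aws.String("20"), // enable long polling
		},
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == sqs.ErrCodeInvalidAttributeName {
		// The service rejected one of the attribute names.
		return aerr
	}
	return err
}
```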
// // See also, https://docs.aws.amazon.com/goto/WebAPI/sqs-2012-11-05/SetQueueAttributes func (c *SQS) SetQueueAttributes(input *SetQueueAttributesInput) (*SetQueueAttributesOutput, error) { @@ -1790,8 +1824,8 @@ const opTagQueue = "TagQueue" // TagQueueRequest generates a "aws/request.Request" representing the // client's request for the TagQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1833,7 +1867,7 @@ func (c *SQS) TagQueueRequest(input *TagQueueInput) (req *request.Request, outpu // TagQueue API operation for Amazon Simple Queue Service. // // Add cost allocation tags to the specified Amazon SQS queue. For an overview, -// see Tagging Amazon SQS Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-tagging-queues.html) +// see Tagging Your Amazon SQS Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-queue-tags.html) // in the Amazon Simple Queue Service Developer Guide. // // When you use queue tags, keep the following guidelines in mind: @@ -1848,10 +1882,14 @@ func (c *SQS) TagQueueRequest(input *TagQueueInput) (req *request.Request, outpu // * A new tag with a key identical to that of an existing tag overwrites // the existing tag. // -// * Tagging API actions are limited to 5 TPS per AWS account. If your application +// * Tagging actions are limited to 5 TPS per AWS account. If your application // requires a higher throughput, file a technical support request (https://console.aws.amazon.com/support/home#/case/create?issueType=technical). // -// For a full list of tag restrictions, see Limits Related to Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-queues.html) +// For a full list of tag restrictions, see Limits Related to Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-limits.html#limits-queues) +// in the Amazon Simple Queue Service Developer Guide. +// +// Cross-account permissions don't apply to this action. For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) // in the Amazon Simple Queue Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1886,8 +1924,8 @@ const opUntagQueue = "UntagQueue" // UntagQueueRequest generates a "aws/request.Request" representing the // client's request for the UntagQueue operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1929,7 +1967,7 @@ func (c *SQS) UntagQueueRequest(input *UntagQueueInput) (req *request.Request, o // UntagQueue API operation for Amazon Simple Queue Service. 
// // Remove cost allocation tags from the specified Amazon SQS queue. For an overview, -// see Tagging Amazon SQS Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-tagging-queues.html) +// see Tagging Your Amazon SQS Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-queue-tags.html) // in the Amazon Simple Queue Service Developer Guide. // // When you use queue tags, keep the following guidelines in mind: @@ -1944,10 +1982,14 @@ func (c *SQS) UntagQueueRequest(input *UntagQueueInput) (req *request.Request, o // * A new tag with a key identical to that of an existing tag overwrites // the existing tag. // -// * Tagging API actions are limited to 5 TPS per AWS account. If your application +// * Tagging actions are limited to 5 TPS per AWS account. If your application // requires a higher throughput, file a technical support request (https://console.aws.amazon.com/support/home#/case/create?issueType=technical). // -// For a full list of tag restrictions, see Limits Related to Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/limits-queues.html) +// For a full list of tag restrictions, see Limits Related to Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-limits.html#limits-queues) +// in the Amazon Simple Queue Service Developer Guide. +// +// Cross-account permissions don't apply to this action. For more information, +// see see Grant Cross-Account Permissions to a Role and a User Name (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-customer-managed-policy-examples.html#grant-cross-account-permissions-to-role-and-user-name) // in the Amazon Simple Queue Service Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1984,30 +2026,17 @@ type AddPermissionInput struct { // The AWS account number of the principal (http://docs.aws.amazon.com/general/latest/gr/glos-chap.html#P) // who is given permission. The principal must have an AWS account, but does // not need to be signed up for Amazon SQS. For information about locating the - // AWS account identification, see Your AWS Identifiers (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AWSCredentials.html) + // AWS account identification, see Your AWS Identifiers (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-making-api-requests.html#sqs-api-request-authentication) // in the Amazon Simple Queue Service Developer Guide. // // AWSAccountIds is a required field AWSAccountIds []*string `locationNameList:"AWSAccountId" type:"list" flattened:"true" required:"true"` - // The action the client wants to allow for the specified principal. The following - // values are valid: - // - // * * - // - // * ChangeMessageVisibility - // - // * DeleteMessage + // The action the client wants to allow for the specified principal. Valid values: + // the name of any action or *. 
// - // * GetQueueAttributes - // - // * GetQueueUrl - // - // * ReceiveMessage - // - // * SendMessage - // - // For more information about these actions, see Understanding Permissions (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/acp-overview.html#PermissionTypes) + // For more information about these actions, see Overview of Managing Access + // Permissions to Your Amazon Simple Queue Service Resource (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-overview-of-managing-access.html) // in the Amazon Simple Queue Service Developer Guide. // // Specifying SendMessage, DeleteMessage, or ChangeMessageVisibility for ActionName.n @@ -2026,7 +2055,7 @@ type AddPermissionInput struct { // The URL of the Amazon SQS queue to which permissions are added. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -2102,8 +2131,8 @@ func (s AddPermissionOutput) GoString() string { return s.String() } -// This is used in the responses of batch API to give a detailed description -// of the result of an action on each entry in the request. +// Gives a detailed description of the result of an action on each entry in +// the request. type BatchResultErrorEntry struct { _ struct{} `type:"structure"` @@ -2120,7 +2149,7 @@ type BatchResultErrorEntry struct { // A message explaining why the action failed on this entry. Message *string `type:"string"` - // Specifies whether the error happened due to the sender's fault. + // Specifies whether the error happened due to the producer. // // SenderFault is a required field SenderFault *bool `type:"boolean" required:"true"` @@ -2171,7 +2200,7 @@ type ChangeMessageVisibilityBatchInput struct { // The URL of the Amazon SQS queue whose messages' visibility is changed. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -2272,7 +2301,7 @@ func (s *ChangeMessageVisibilityBatchOutput) SetSuccessful(v []*ChangeMessageVis // // &ChangeMessageVisibilityBatchRequestEntry.1.Id=change_visibility_msg_2 // -// &ChangeMessageVisibilityBatchRequestEntry.1.ReceiptHandle=Your_Receipt_Handle +// &ChangeMessageVisibilityBatchRequestEntry.1.ReceiptHandle=your_receipt_handle // // &ChangeMessageVisibilityBatchRequestEntry.1.VisibilityTimeout=45 type ChangeMessageVisibilityBatchRequestEntry struct { @@ -2370,7 +2399,7 @@ type ChangeMessageVisibilityInput struct { // The URL of the Amazon SQS queue whose message's visibility is changed. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -2459,16 +2488,15 @@ type CreateQueueInput struct { // // * DelaySeconds - The length of time, in seconds, for which the delivery // of all messages in the queue is delayed. Valid values: An integer from - // 0 to 900 seconds (15 minutes). The default is 0 (zero). + // 0 to 900 seconds (15 minutes). Default: 0. // // * MaximumMessageSize - The limit of how many bytes a message can contain // before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes - // (1 KiB) to 262,144 bytes (256 KiB). The default is 262,144 (256 KiB). - // + // (1 KiB) to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB). 
// // * MessageRetentionPeriod - The length of time, in seconds, for which Amazon // SQS retains a message. Valid values: An integer from 60 seconds (1 minute) - // to 1,209,600 seconds (14 days). The default is 345,600 (4 days). + // to 1,209,600 seconds (14 days). Default: 345,600 (4 days). // // * Policy - The queue's policy. A valid AWS policy. For more information // about policy structure, see Overview of AWS IAM Policies (http://docs.aws.amazon.com/IAM/latest/UserGuide/PoliciesOverview.html) @@ -2476,7 +2504,7 @@ type CreateQueueInput struct { // // * ReceiveMessageWaitTimeSeconds - The length of time, in seconds, for // which a ReceiveMessage action waits for a message to arrive. Valid values: - // An integer from 0 to 20 (seconds). The default is 0 (zero). + // An integer from 0 to 20 (seconds). Default: 0. // // * RedrivePolicy - The string that includes the parameters for the dead-letter // queue functionality of the source queue. For more information about the @@ -2489,14 +2517,17 @@ type CreateQueueInput struct { // is exceeded. // // maxReceiveCount - The number of times a message is delivered to the source - // queue before being moved to the dead-letter queue. + // queue before being moved to the dead-letter queue. When the ReceiveCount + // for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves + // the message to the dead-letter-queue. // // The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, // the dead-letter queue of a standard queue must also be a standard queue. // - // * VisibilityTimeout - The visibility timeout for the queue. Valid values: - // An integer from 0 to 43,200 (12 hours). The default is 30. For more information - // about the visibility timeout, see Visibility Timeout (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) + // * VisibilityTimeout - The visibility timeout for the queue, in seconds. + // Valid values: An integer from 0 to 43,200 (12 hours). Default: 30. For + // more information about the visibility timeout, see Visibility Timeout + // (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) // in the Amazon Simple Queue Service Developer Guide. // // The following attributes apply only to server-side-encryption (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html): @@ -2512,10 +2543,10 @@ type CreateQueueInput struct { // Amazon SQS can reuse a data key (http://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#data-keys) // to encrypt or decrypt messages before calling AWS KMS again. An integer // representing seconds, between 60 seconds (1 minute) and 86,400 seconds - // (24 hours). The default is 300 (5 minutes). A shorter time period provides - // better security but results in more calls to KMS which might incur charges - // after Free Tier. For more information, see How Does the Data Key Reuse - // Period Work? (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html#sqs-how-does-the-data-key-reuse-period-work). + // (24 hours). Default: 300 (5 minutes). A shorter time period provides better + // security but results in more calls to KMS which might incur charges after + // Free Tier. For more information, see How Does the Data Key Reuse Period + // Work? 
(http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html#sqs-how-does-the-data-key-reuse-period-work). // // // The following attributes apply only to FIFO (first-in-first-out) queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html): @@ -2556,20 +2587,6 @@ type CreateQueueInput struct { // message with a MessageDeduplicationId that is the same as the one generated // for the first MessageDeduplicationId, the two messages are treated as // duplicates and only one copy of the message is delivered. - // - // Any other valid special request parameters (such as the following) are ignored: - // - // * ApproximateNumberOfMessages - // - // * ApproximateNumberOfMessagesDelayed - // - // * ApproximateNumberOfMessagesNotVisible - // - // * CreatedTimestamp - // - // * LastModifiedTimestamp - // - // * QueueArn Attributes map[string]*string `locationName:"Attribute" locationNameKey:"Name" locationNameValue:"Value" type:"map" flattened:"true"` // The name of the new queue. The following limits apply to this name: @@ -2581,7 +2598,7 @@ type CreateQueueInput struct { // // * A FIFO queue name must end with the .fifo suffix. // - // Queue names are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueName is a required field QueueName *string `type:"string" required:"true"` @@ -2656,7 +2673,7 @@ type DeleteMessageBatchInput struct { // The URL of the Amazon SQS queue from which messages are deleted. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -2836,7 +2853,7 @@ type DeleteMessageInput struct { // The URL of the Amazon SQS queue from which messages are deleted. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -2904,7 +2921,7 @@ type DeleteQueueInput struct { // The URL of the Amazon SQS queue to delete. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -2966,18 +2983,18 @@ type GetQueueAttributesInput struct { // // * All - Returns all values. // - // * ApproximateNumberOfMessages - Returns the approximate number of visible - // messages in a queue. For more information, see Resources Required to Process - // Messages (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-resources-required-process-messages.html) - // in the Amazon Simple Queue Service Developer Guide. + // * ApproximateNumberOfMessages - Returns the approximate number of messages + // available for retrieval from the queue. // // * ApproximateNumberOfMessagesDelayed - Returns the approximate number - // of messages that are waiting to be added to the queue. + // of messages in the queue that are delayed and not available for reading + // immediately. This can happen when the queue is configured as a delay queue + // or when a message has been sent with a delay parameter. // // * ApproximateNumberOfMessagesNotVisible - Returns the approximate number - // of messages that have not timed-out and aren't deleted. 
For more information, - // see Resources Required to Process Messages (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-resources-required-process-messages.html) - // in the Amazon Simple Queue Service Developer Guide. + // of messages that are in flight. Messages are considered to be in flight + // if they have been sent to a client but have not yet been deleted or have + // not yet reached the end of their visibility window. // // * CreatedTimestamp - Returns the time when the queue was created in seconds // (epoch time (http://en.wikipedia.org/wiki/Unix_time)). @@ -3011,7 +3028,9 @@ type GetQueueAttributesInput struct { // is exceeded. // // maxReceiveCount - The number of times a message is delivered to the source - // queue before being moved to the dead-letter queue. + // queue before being moved to the dead-letter queue. When the ReceiveCount + // for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves + // the message to the dead-letter-queue. // // * VisibilityTimeout - Returns the visibility timeout for the queue. For // more information about the visibility timeout, see Visibility Timeout @@ -3048,7 +3067,7 @@ type GetQueueAttributesInput struct { // The URL of the Amazon SQS queue whose attribute information is retrieved. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -3119,7 +3138,7 @@ type GetQueueUrlInput struct { // The name of the queue whose URL must be fetched. Maximum 80 characters. Valid // values: alphanumeric characters, hyphens (-), and underscores (_). // - // Queue names are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueName is a required field QueueName *string `type:"string" required:"true"` @@ -3163,7 +3182,7 @@ func (s *GetQueueUrlInput) SetQueueOwnerAWSAccountId(v string) *GetQueueUrlInput return s } -// For more information, see Responses (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/UnderstandingResponses.html) +// For more information, see Interpreting Responses (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-api-responses.html) // in the Amazon Simple Queue Service Developer Guide. type GetQueueUrlOutput struct { _ struct{} `type:"structure"` @@ -3193,7 +3212,7 @@ type ListDeadLetterSourceQueuesInput struct { // The URL of a dead-letter queue. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -3322,7 +3341,7 @@ type ListQueuesInput struct { // A string to use for filtering the list results. Only those queues whose name // begins with the specified string are returned. // - // Queue names are case-sensitive. + // Queue URLs and names are case-sensitive. QueueNamePrefix *string `type:"string"` } @@ -3370,8 +3389,24 @@ func (s *ListQueuesOutput) SetQueueUrls(v []*string) *ListQueuesOutput { type Message struct { _ struct{} `type:"structure"` - // SenderId, SentTimestamp, ApproximateReceiveCount, and/or ApproximateFirstReceiveTimestamp. - // SentTimestamp and ApproximateFirstReceiveTimestamp are each returned as an + // A map of the attributes requested in ReceiveMessage to their respective values. 
+ // Supported attributes: + // + // * ApproximateReceiveCount + // + // * ApproximateFirstReceiveTimestamp + // + // * MessageDeduplicationId + // + // * MessageGroupId + // + // * SenderId + // + // * SentTimestamp + // + // * SequenceNumber + // + // ApproximateFirstReceiveTimestamp and SentTimestamp are each returned as an // integer representing the epoch time (http://en.wikipedia.org/wiki/Unix_time) // in milliseconds. Attributes map[string]*string `locationName:"Attribute" locationNameKey:"Name" locationNameValue:"Value" type:"map" flattened:"true"` @@ -3389,7 +3424,7 @@ type Message struct { MD5OfMessageAttributes *string `type:"string"` // Each message attribute consists of a Name, Type, and Value. For more information, - // see Message Attribute Items and Validation (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html#message-attributes-items-validation) + // see Amazon SQS Message Attributes (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html) // in the Amazon Simple Queue Service Developer Guide. MessageAttributes map[string]*MessageAttributeValue `locationName:"MessageAttribute" locationNameKey:"Name" locationNameValue:"Value" type:"map" flattened:"true"` @@ -3477,8 +3512,8 @@ type MessageAttributeValue struct { // Amazon SQS supports the following logical data types: String, Number, and // Binary. For the Number data type, you must use StringValue. // - // You can also append custom labels. For more information, see Message Attribute - // Data Types and Validation (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html#message-attributes-data-types-validation) + // You can also append custom labels. For more information, see Amazon SQS Message + // Attributes (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html) // in the Amazon Simple Queue Service Developer Guide. // // DataType is a required field @@ -3550,7 +3585,7 @@ type PurgeQueueInput struct { // The URL of the queue from which the PurgeQueue action deletes messages. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -3602,8 +3637,8 @@ func (s PurgeQueueOutput) GoString() string { type ReceiveMessageInput struct { _ struct{} `type:"structure"` - // A list of attributes that need to be returned along with each message. These - // attributes include: + // A list of s that need to be returned along with each message. These attributes + // include: // // * All - Returns all values. // @@ -3623,51 +3658,19 @@ type ReceiveMessageInput struct { // * SentTimestamp - Returns the time the message was sent to the queue (epoch // time (http://en.wikipedia.org/wiki/Unix_time) in milliseconds). // - // * MessageDeduplicationId - Returns the value provided by the sender that - // calls the SendMessage action. + // * MessageDeduplicationId - Returns the value provided by the producer + // that calls the SendMessage action. // - // * MessageGroupId - Returns the value provided by the sender that calls + // * MessageGroupId - Returns the value provided by the producer that calls // the SendMessage action. Messages with the same MessageGroupId are returned // in sequence. // // * SequenceNumber - Returns the value provided by Amazon SQS. 
- // - // Any other valid special request parameters (such as the following) are ignored: - // - // * ApproximateNumberOfMessages - // - // * ApproximateNumberOfMessagesDelayed - // - // * ApproximateNumberOfMessagesNotVisible - // - // * CreatedTimestamp - // - // * ContentBasedDeduplication - // - // * DelaySeconds - // - // * FifoQueue - // - // * LastModifiedTimestamp - // - // * MaximumMessageSize - // - // * MessageRetentionPeriod - // - // * Policy - // - // * QueueArn, - // - // * ReceiveMessageWaitTimeSeconds - // - // * RedrivePolicy - // - // * VisibilityTimeout AttributeNames []*string `locationNameList:"AttributeName" type:"list" flattened:"true"` // The maximum number of messages to return. Amazon SQS never returns more messages - // than this value (however, fewer messages might be returned). Valid values - // are 1 to 10. Default is 1. + // than this value (however, fewer messages might be returned). Valid values: + // 1 to 10. Default: 1. MaxNumberOfMessages *int64 `type:"integer"` // The name of the message attribute, where N is the index. @@ -3694,7 +3697,7 @@ type ReceiveMessageInput struct { // The URL of the Amazon SQS queue from which messages are received. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -3726,11 +3729,11 @@ type ReceiveMessageInput struct { // information, see Visibility Timeout (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) // in the Amazon Simple Queue Service Developer Guide. // - // If a caller of the ReceiveMessage action is still processing messages when - // the visibility timeout expires and messages become visible, another worker - // reading from the same queue can receive the same messages and therefore - // process duplicates. Also, if a reader whose message processing time is - // longer than the visibility timeout tries to delete the processed messages, + // If a caller of the ReceiveMessage action still processes messages when the + // visibility timeout expires and messages become visible, another worker + // consuming from the same queue can receive the same messages and therefore + // process duplicates. Also, if a consumer whose message processing time + // is longer than the visibility timeout tries to delete the processed messages, // the action fails with an error. // // To mitigate this effect, ensure that your application observes a safe threshold @@ -3750,7 +3753,7 @@ type ReceiveMessageInput struct { // can contain alphanumeric characters (a-z, A-Z, 0-9) and punctuation (!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~). // // For best practices of using ReceiveRequestAttemptId, see Using the ReceiveRequestAttemptId - // Request Parameter (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queue-recommendations.html#using-receiverequestattemptid-request-parameter) + // Request Parameter (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-receiverequestattemptid-request-parameter.html) // in the Amazon Simple Queue Service Developer Guide. ReceiveRequestAttemptId *string `type:"string"` @@ -3865,7 +3868,7 @@ type RemovePermissionInput struct { // The URL of the Amazon SQS queue from which permissions are removed. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. 
// // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -3933,7 +3936,7 @@ type SendMessageBatchInput struct { // The URL of the Amazon SQS queue to which batched messages are sent. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -4044,11 +4047,14 @@ type SendMessageBatchRequestEntry struct { // // The Ids of a batch request need to be unique within a request // + // This identifier can have up to 80 characters. The following characters are + // accepted: alphanumeric characters, hyphens(-), and underscores (_). + // // Id is a required field Id *string `type:"string" required:"true"` // Each message attribute consists of a Name, Type, and Value. For more information, - // see Message Attribute Items and Validation (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html#message-attributes-items-validation) + // see Amazon SQS Message Attributes (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html) // in the Amazon Simple Queue Service Developer Guide. MessageAttributes map[string]*MessageAttributeValue `locationName:"MessageAttribute" locationNameKey:"Name" locationNameValue:"Value" type:"map" flattened:"true"` @@ -4090,18 +4096,21 @@ type SendMessageBatchRequestEntry struct { // one generated for the first MessageDeduplicationId, the two messages are // treated as duplicates and only one copy of the message is delivered. // - // The MessageDeduplicationId is available to the recipient of the message (this + // The MessageDeduplicationId is available to the consumer of the message (this // can be useful for troubleshooting delivery issues). // // If a message is sent successfully but the acknowledgement is lost and the // message is resent with the same MessageDeduplicationId after the deduplication // interval, Amazon SQS can't detect duplicate messages. // + // Amazon SQS continues to keep track of the message deduplication ID even after + // the message is received and deleted. + // // The length of MessageDeduplicationId is 128 characters. MessageDeduplicationId // can contain alphanumeric characters (a-z, A-Z, 0-9) and punctuation (!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~). // // For best practices of using MessageDeduplicationId, see Using the MessageDeduplicationId - // Property (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queue-recommendations.html#using-messagededuplicationid-property) + // Property (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagededuplicationid-property.html) // in the Amazon Simple Queue Service Developer Guide. MessageDeduplicationId *string `type:"string"` @@ -4112,8 +4121,8 @@ type SendMessageBatchRequestEntry struct { // (however, messages in different message groups might be processed out of // order). To interleave multiple ordered streams within a single queue, use // MessageGroupId values (for example, session data for multiple users). In - // this scenario, multiple readers can process the queue, but the session data - // of each user is processed in a FIFO fashion. + // this scenario, multiple consumers can process the queue, but the session + // data of each user is processed in a FIFO fashion. // // * You must associate a non-empty MessageGroupId with a message. If you // don't provide a MessageGroupId, the action fails. 
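(Illustrative aside, not part of the vendored file: a minimal sketch of `SendMessageBatch` against a FIFO queue using the `Id`, `MessageGroupId`, and `MessageDeduplicationId` fields described above; the queue URL is hypothetical. Note that per-entry failures are reported in the `Failed` list even when the HTTP call itself succeeds.)

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	svc := sqs.New(session.Must(session.NewSession()))

	out, err := svc.SendMessageBatch(&sqs.SendMessageBatchInput{
		QueueUrl: aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/example-queue.fifo"),
		Entries: []*sqs.SendMessageBatchRequestEntry{
			{
				Id:                     aws.String("msg-1"), // unique within the request
				MessageBody:            aws.String("first payload"),
				MessageGroupId:         aws.String("group-a"),  // ordering is per group
				MessageDeduplicationId: aws.String("dedup-1"),  // duplicates in the window are dropped
			},
			{
				Id:                     aws.String("msg-2"),
				MessageBody:            aws.String("second payload"),
				MessageGroupId:         aws.String("group-a"),
				MessageDeduplicationId: aws.String("dedup-2"),
			},
		},
	})
	if err != nil {
		log.Fatal(err) // transport-level or whole-request failure
	}
	// A 200 response can still contain per-entry failures, so always check Failed.
	for _, f := range out.Failed {
		fmt.Printf("entry %s failed: %s (%s), sender fault: %t\n",
			aws.StringValue(f.Id), aws.StringValue(f.Message), aws.StringValue(f.Code), aws.BoolValue(f.SenderFault))
	}
	for _, s := range out.Successful {
		fmt.Printf("entry %s sent as message %s\n", aws.StringValue(s.Id), aws.StringValue(s.MessageId))
	}
}
```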
@@ -4122,11 +4131,11 @@ type SendMessageBatchRequestEntry struct { // For each MessageGroupId, the messages are sorted by time sent. The caller // can't specify a MessageGroupId. // - // The length of MessageGroupId is 128 characters. Valid values are alphanumeric + // The length of MessageGroupId is 128 characters. Valid values: alphanumeric // characters and punctuation (!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~). // // For best practices of using MessageGroupId, see Using the MessageGroupId - // Property (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queue-recommendations.html#using-messagegroupid-property) + // Property (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html) // in the Amazon Simple Queue Service Developer Guide. // // MessageGroupId is required for FIFO queues. You can't use it for Standard @@ -4296,7 +4305,7 @@ type SendMessageInput struct { DelaySeconds *int64 `type:"integer"` // Each message attribute consists of a Name, Type, and Value. For more information, - // see Message Attribute Items and Validation (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html#message-attributes-items-validation) + // see Amazon SQS Message Attributes (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html) // in the Amazon Simple Queue Service Developer Guide. MessageAttributes map[string]*MessageAttributeValue `locationName:"MessageAttribute" locationNameKey:"Name" locationNameValue:"Value" type:"map" flattened:"true"` @@ -4346,18 +4355,21 @@ type SendMessageInput struct { // one generated for the first MessageDeduplicationId, the two messages are // treated as duplicates and only one copy of the message is delivered. // - // The MessageDeduplicationId is available to the recipient of the message (this + // The MessageDeduplicationId is available to the consumer of the message (this // can be useful for troubleshooting delivery issues). // // If a message is sent successfully but the acknowledgement is lost and the // message is resent with the same MessageDeduplicationId after the deduplication // interval, Amazon SQS can't detect duplicate messages. // + // Amazon SQS continues to keep track of the message deduplication ID even after + // the message is received and deleted. + // // The length of MessageDeduplicationId is 128 characters. MessageDeduplicationId // can contain alphanumeric characters (a-z, A-Z, 0-9) and punctuation (!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~). // // For best practices of using MessageDeduplicationId, see Using the MessageDeduplicationId - // Property (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queue-recommendations.html#using-messagededuplicationid-property) + // Property (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagededuplicationid-property.html) // in the Amazon Simple Queue Service Developer Guide. MessageDeduplicationId *string `type:"string"` @@ -4368,8 +4380,8 @@ type SendMessageInput struct { // (however, messages in different message groups might be processed out of // order). To interleave multiple ordered streams within a single queue, use // MessageGroupId values (for example, session data for multiple users). In - // this scenario, multiple readers can process the queue, but the session data - // of each user is processed in a FIFO fashion. 
+ // this scenario, multiple consumers can process the queue, but the session + // data of each user is processed in a FIFO fashion. // // * You must associate a non-empty MessageGroupId with a message. If you // don't provide a MessageGroupId, the action fails. @@ -4378,11 +4390,11 @@ type SendMessageInput struct { // For each MessageGroupId, the messages are sorted by time sent. The caller // can't specify a MessageGroupId. // - // The length of MessageGroupId is 128 characters. Valid values are alphanumeric + // The length of MessageGroupId is 128 characters. Valid values: alphanumeric // characters and punctuation (!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~). // // For best practices of using MessageGroupId, see Using the MessageGroupId - // Property (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queue-recommendations.html#using-messagegroupid-property) + // Property (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html) // in the Amazon Simple Queue Service Developer Guide. // // MessageGroupId is required for FIFO queues. You can't use it for Standard @@ -4391,7 +4403,7 @@ type SendMessageInput struct { // The URL of the Amazon SQS queue to which a message is sent. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. // // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` @@ -4543,16 +4555,15 @@ type SetQueueAttributesInput struct { // // * DelaySeconds - The length of time, in seconds, for which the delivery // of all messages in the queue is delayed. Valid values: An integer from - // 0 to 900 (15 minutes). The default is 0 (zero). + // 0 to 900 (15 minutes). Default: 0. // // * MaximumMessageSize - The limit of how many bytes a message can contain // before Amazon SQS rejects it. Valid values: An integer from 1,024 bytes - // (1 KiB) up to 262,144 bytes (256 KiB). The default is 262,144 (256 KiB). - // + // (1 KiB) up to 262,144 bytes (256 KiB). Default: 262,144 (256 KiB). // // * MessageRetentionPeriod - The length of time, in seconds, for which Amazon // SQS retains a message. Valid values: An integer representing seconds, - // from 60 (1 minute) to 1,209,600 (14 days). The default is 345,600 (4 days). + // from 60 (1 minute) to 1,209,600 (14 days). Default: 345,600 (4 days). // // // * Policy - The queue's policy. A valid AWS policy. For more information @@ -4561,7 +4572,7 @@ type SetQueueAttributesInput struct { // // * ReceiveMessageWaitTimeSeconds - The length of time, in seconds, for // which a ReceiveMessage action waits for a message to arrive. Valid values: - // an integer from 0 to 20 (seconds). The default is 0. + // an integer from 0 to 20 (seconds). Default: 0. // // * RedrivePolicy - The string that includes the parameters for the dead-letter // queue functionality of the source queue. For more information about the @@ -4574,14 +4585,17 @@ type SetQueueAttributesInput struct { // is exceeded. // // maxReceiveCount - The number of times a message is delivered to the source - // queue before being moved to the dead-letter queue. + // queue before being moved to the dead-letter queue. When the ReceiveCount + // for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves + // the message to the dead-letter-queue. // // The dead-letter queue of a FIFO queue must also be a FIFO queue. Similarly, // the dead-letter queue of a standard queue must also be a standard queue. 
// - // * VisibilityTimeout - The visibility timeout for the queue. Valid values: - // an integer from 0 to 43,200 (12 hours). The default is 30. For more information - // about the visibility timeout, see Visibility Timeout (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) + // * VisibilityTimeout - The visibility timeout for the queue, in seconds. + // Valid values: an integer from 0 to 43,200 (12 hours). Default: 30. For + // more information about the visibility timeout, see Visibility Timeout + // (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) // in the Amazon Simple Queue Service Developer Guide. // // The following attributes apply only to server-side-encryption (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html): @@ -4597,10 +4611,10 @@ type SetQueueAttributesInput struct { // Amazon SQS can reuse a data key (http://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#data-keys) // to encrypt or decrypt messages before calling AWS KMS again. An integer // representing seconds, between 60 seconds (1 minute) and 86,400 seconds - // (24 hours). The default is 300 (5 minutes). A shorter time period provides - // better security but results in more calls to KMS which might incur charges - // after Free Tier. For more information, see How Does the Data Key Reuse - // Period Work? (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html#sqs-how-does-the-data-key-reuse-period-work). + // (24 hours). Default: 300 (5 minutes). A shorter time period provides better + // security but results in more calls to KMS which might incur charges after + // Free Tier. For more information, see How Does the Data Key Reuse Period + // Work? (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html#sqs-how-does-the-data-key-reuse-period-work). // // // The following attribute applies only to FIFO (first-in-first-out) queues @@ -4634,26 +4648,12 @@ type SetQueueAttributesInput struct { // for the first MessageDeduplicationId, the two messages are treated as // duplicates and only one copy of the message is delivered. // - // Any other valid special request parameters (such as the following) are ignored: - // - // * ApproximateNumberOfMessages - // - // * ApproximateNumberOfMessagesDelayed - // - // * ApproximateNumberOfMessagesNotVisible - // - // * CreatedTimestamp - // - // * LastModifiedTimestamp - // - // * QueueArn - // // Attributes is a required field Attributes map[string]*string `locationName:"Attribute" locationNameKey:"Name" locationNameValue:"Value" type:"map" flattened:"true" required:"true"` // The URL of the Amazon SQS queue whose attributes are set. // - // Queue URLs are case-sensitive. + // Queue URLs and names are case-sensitive. 
// // QueueUrl is a required field QueueUrl *string `type:"string" required:"true"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/sqs/doc.go b/vendor/github.com/aws/aws-sdk-go/service/sqs/doc.go index 7f0c5799fb3..6f338035969 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sqs/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sqs/doc.go @@ -31,11 +31,14 @@ // // * Amazon Simple Queue Service Developer Guide // -// Making API Requests (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MakingRequestsArticle.html) +// Making API Requests (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-making-api-requests.html) // -// Using Amazon SQS Message Attributes (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html) +// Amazon SQS Message Attributes (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html) // -// Using Amazon SQS Dead-Letter Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html) +// Amazon SQS Dead-Letter Queues (http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html) +// +// * Amazon SQS in the (http://docs.aws.amazon.com/cli/latest/reference/sqs/index.html)AWS +// CLI Command Reference // // * Amazon Web Services General Reference // diff --git a/vendor/github.com/aws/aws-sdk-go/service/sqs/errors.go b/vendor/github.com/aws/aws-sdk-go/service/sqs/errors.go index 722867d3293..89eb40d7fd2 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sqs/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sqs/errors.go @@ -25,7 +25,7 @@ const ( // ErrCodeInvalidAttributeName for service response error code // "InvalidAttributeName". // - // The attribute referred to doesn't exist. + // The specified attribute doesn't exist. ErrCodeInvalidAttributeName = "InvalidAttributeName" // ErrCodeInvalidBatchEntryId for service response error code @@ -37,7 +37,7 @@ const ( // ErrCodeInvalidIdFormat for service response error code // "InvalidIdFormat". // - // The receipt handle isn't valid for the current version. + // The specified receipt handle isn't valid for the current version. ErrCodeInvalidIdFormat = "InvalidIdFormat" // ErrCodeInvalidMessageContents for service response error code @@ -49,16 +49,16 @@ const ( // ErrCodeMessageNotInflight for service response error code // "AWS.SimpleQueueService.MessageNotInflight". // - // The message referred to isn't in flight. + // The specified message isn't in flight. ErrCodeMessageNotInflight = "AWS.SimpleQueueService.MessageNotInflight" // ErrCodeOverLimit for service response error code // "OverLimit". // - // The action that you requested would violate a limit. For example, ReceiveMessage - // returns this error if the maximum number of inflight messages is reached. - // AddPermission returns this error if the maximum number of permissions for - // the queue is reached. + // The specified action violates a limit. For example, ReceiveMessage returns + // this error if the maximum number of inflight messages is reached and AddPermission + // returns this error if the maximum number of permissions for the queue is + // reached. ErrCodeOverLimit = "OverLimit" // ErrCodePurgeQueueInProgress for service response error code @@ -73,19 +73,19 @@ const ( // "AWS.SimpleQueueService.QueueDeletedRecently". 
// // You must wait 60 seconds after deleting a queue before you can create another - // one with the same name. + // queue with the same name. ErrCodeQueueDeletedRecently = "AWS.SimpleQueueService.QueueDeletedRecently" // ErrCodeQueueDoesNotExist for service response error code // "AWS.SimpleQueueService.NonExistentQueue". // - // The queue referred to doesn't exist. + // The specified queue doesn't exist. ErrCodeQueueDoesNotExist = "AWS.SimpleQueueService.NonExistentQueue" // ErrCodeQueueNameExists for service response error code // "QueueAlreadyExists". // - // A queue already exists with this name. Amazon SQS returns this error only + // A queue with this name already exists. Amazon SQS returns this error only // if the request includes attributes whose values differ from those of the // existing queue. ErrCodeQueueNameExists = "QueueAlreadyExists" @@ -93,7 +93,7 @@ const ( // ErrCodeReceiptHandleIsInvalid for service response error code // "ReceiptHandleIsInvalid". // - // The receipt handle provided isn't valid. + // The specified receipt handle isn't valid. ErrCodeReceiptHandleIsInvalid = "ReceiptHandleIsInvalid" // ErrCodeTooManyEntriesInBatchRequest for service response error code diff --git a/vendor/github.com/aws/aws-sdk-go/service/sqs/service.go b/vendor/github.com/aws/aws-sdk-go/service/sqs/service.go index 50b5f4b80c0..d463ecf0ddb 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sqs/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sqs/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "sqs" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "sqs" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "SQS" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the SQS client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/ssm/api.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/api.go index daf3b9f2f8c..ae39b49803e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ssm/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/api.go @@ -15,8 +15,8 @@ const opAddTagsToResource = "AddTagsToResource" // AddTagsToResourceRequest generates a "aws/request.Request" representing the // client's request for the AddTagsToResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -99,6 +99,10 @@ func (c *SSM) AddTagsToResourceRequest(input *AddTagsToResourceInput) (req *requ // The Targets parameter includes too many tags. Remove one or more tags and // try the command again. // +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. 
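The error constants above are intended to be compared against `awserr.Error` codes rather than matched on message text, as the surrounding comments note. A short illustrative sketch using the `ErrCodeQueueDoesNotExist` constant (the queue name is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	svc := sqs.New(session.Must(session.NewSession()))

	// Look up a queue by name; "example-queue" is a placeholder.
	out, err := svc.GetQueueUrl(&sqs.GetQueueUrlInput{
		QueueName: aws.String("example-queue"),
	})
	if err != nil {
		// Service failures surface as awserr.Error; compare Code() against the
		// generated constants instead of parsing the message string.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == sqs.ErrCodeQueueDoesNotExist {
			fmt.Println("queue does not exist:", aerr.Message())
			return
		}
		log.Fatalf("GetQueueUrl failed: %v", err)
	}
	fmt.Println("queue URL:", aws.StringValue(out.QueueUrl))
}
```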
+// // See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/AddTagsToResource func (c *SSM) AddTagsToResource(input *AddTagsToResourceInput) (*AddTagsToResourceOutput, error) { req, out := c.AddTagsToResourceRequest(input) @@ -125,8 +129,8 @@ const opCancelCommand = "CancelCommand" // CancelCommandRequest generates a "aws/request.Request" representing the // client's request for the CancelCommand operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -186,12 +190,12 @@ func (c *SSM) CancelCommandRequest(input *CancelCommandInput) (req *request.Requ // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -221,12 +225,100 @@ func (c *SSM) CancelCommandWithContext(ctx aws.Context, input *CancelCommandInpu return out, req.Send() } +const opCancelMaintenanceWindowExecution = "CancelMaintenanceWindowExecution" + +// CancelMaintenanceWindowExecutionRequest generates a "aws/request.Request" representing the +// client's request for the CancelMaintenanceWindowExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelMaintenanceWindowExecution for more information on using the CancelMaintenanceWindowExecution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelMaintenanceWindowExecutionRequest method. 
+// req, resp := client.CancelMaintenanceWindowExecutionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CancelMaintenanceWindowExecution +func (c *SSM) CancelMaintenanceWindowExecutionRequest(input *CancelMaintenanceWindowExecutionInput) (req *request.Request, output *CancelMaintenanceWindowExecutionOutput) { + op := &request.Operation{ + Name: opCancelMaintenanceWindowExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelMaintenanceWindowExecutionInput{} + } + + output = &CancelMaintenanceWindowExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelMaintenanceWindowExecution API operation for Amazon Simple Systems Manager (SSM). +// +// Stops a Maintenance Window execution that is already in progress and cancels +// any tasks in the window that have not already starting running. (Tasks already +// in progress will continue to completion.) +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation CancelMaintenanceWindowExecution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/CancelMaintenanceWindowExecution +func (c *SSM) CancelMaintenanceWindowExecution(input *CancelMaintenanceWindowExecutionInput) (*CancelMaintenanceWindowExecutionOutput, error) { + req, out := c.CancelMaintenanceWindowExecutionRequest(input) + return out, req.Send() +} + +// CancelMaintenanceWindowExecutionWithContext is the same as CancelMaintenanceWindowExecution with the addition of +// the ability to pass a context and additional request options. +// +// See CancelMaintenanceWindowExecution for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) CancelMaintenanceWindowExecutionWithContext(ctx aws.Context, input *CancelMaintenanceWindowExecutionInput, opts ...request.Option) (*CancelMaintenanceWindowExecutionOutput, error) { + req, out := c.CancelMaintenanceWindowExecutionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateActivation = "CreateActivation" // CreateActivationRequest generates a "aws/request.Request" representing the // client's request for the CreateActivation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
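As a rough illustration of the two-step Request/Send flow these doc comments describe, the sketch below builds the request, customizes it, and only then sends it. The window execution ID and the `X-Example-Trace` header are made up, and `WindowExecutionId` is assumed to be the input field identifying the running execution:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	// Build the request without sending it. resp is not populated until
	// Send returns without error, as the comments above state.
	req, resp := svc.CancelMaintenanceWindowExecutionRequest(&ssm.CancelMaintenanceWindowExecutionInput{
		WindowExecutionId: aws.String("12345678-1234-1234-1234-123456789012"), // placeholder
	})

	// The returned request can be adjusted (headers, retry behavior, etc.)
	// before the call is made.
	req.HTTPRequest.Header.Set("X-Example-Trace", "demo")

	if err := req.Send(); err != nil {
		log.Fatalf("CancelMaintenanceWindowExecution failed: %v", err)
	}
	fmt.Println(resp)
}
```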
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -308,8 +400,8 @@ const opCreateAssociation = "CreateAssociation" // CreateAssociationRequest generates a "aws/request.Request" representing the // client's request for the CreateAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -352,11 +444,11 @@ func (c *SSM) CreateAssociationRequest(input *CreateAssociationInput) (req *requ // or targets. // // When you associate a document with one or more instances using instance IDs -// or tags, the SSM Agent running on the instance processes the document and -// configures the instance as specified. +// or tags, SSM Agent running on the instance processes the document and configures +// the instance as specified. // // If you associate a document with an instance that already has an associated -// document, the system throws the AssociationAlreadyExists exception. +// document, the system returns the AssociationAlreadyExists exception. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -386,12 +478,12 @@ func (c *SSM) CreateAssociationRequest(input *CreateAssociationInput) (req *requ // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -441,8 +533,8 @@ const opCreateAssociationBatch = "CreateAssociationBatch" // CreateAssociationBatchRequest generates a "aws/request.Request" representing the // client's request for the CreateAssociationBatch operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -485,11 +577,11 @@ func (c *SSM) CreateAssociationBatchRequest(input *CreateAssociationBatchInput) // or targets. // // When you associate a document with one or more instances using instance IDs -// or tags, the SSM Agent running on the instance processes the document and -// configures the instance as specified. +// or tags, SSM Agent running on the instance processes the document and configures +// the instance as specified. 
// // If you associate a document with an instance that already has an associated -// document, the system throws the AssociationAlreadyExists exception. +// document, the system returns the AssociationAlreadyExists exception. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -513,12 +605,12 @@ func (c *SSM) CreateAssociationBatchRequest(input *CreateAssociationBatchInput) // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -574,8 +666,8 @@ const opCreateDocument = "CreateDocument" // CreateDocumentRequest generates a "aws/request.Request" representing the // client's request for the CreateDocument operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -671,8 +763,8 @@ const opCreateMaintenanceWindow = "CreateMaintenanceWindow" // CreateMaintenanceWindowRequest generates a "aws/request.Request" representing the // client's request for the CreateMaintenanceWindow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -761,8 +853,8 @@ const opCreatePatchBaseline = "CreatePatchBaseline" // CreatePatchBaselineRequest generates a "aws/request.Request" representing the // client's request for the CreatePatchBaseline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -854,8 +946,8 @@ const opCreateResourceDataSync = "CreateResourceDataSync" // CreateResourceDataSyncRequest generates a "aws/request.Request" representing the // client's request for the CreateResourceDataSync operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -903,8 +995,8 @@ func (c *SSM) CreateResourceDataSyncRequest(input *CreateResourceDataSyncInput) // you enable encryption in Amazon S3 to ensure secure data storage. We also // recommend that you secure access to the Amazon S3 bucket by creating a restrictive // bucket policy. To view an example of a restrictive Amazon S3 bucket policy -// for Resource Data Sync, see Configuring Resource Data Sync for Inventory -// (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-configuring.html#sysman-inventory-datasync). +// for Resource Data Sync, see Create a Resource Data Sync for Inventory (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-datasync-create.html) +// in the AWS Systems Manager User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -952,8 +1044,8 @@ const opDeleteActivation = "DeleteActivation" // DeleteActivationRequest generates a "aws/request.Request" representing the // client's request for the DeleteActivation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1046,8 +1138,8 @@ const opDeleteAssociation = "DeleteAssociation" // DeleteAssociationRequest generates a "aws/request.Request" representing the // client's request for the DeleteAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1115,12 +1207,12 @@ func (c *SSM) DeleteAssociationRequest(input *DeleteAssociationInput) (req *requ // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -1155,8 +1247,8 @@ const opDeleteDocument = "DeleteDocument" // DeleteDocumentRequest generates a "aws/request.Request" representing the // client's request for the DeleteDocument operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1245,12 +1337,107 @@ func (c *SSM) DeleteDocumentWithContext(ctx aws.Context, input *DeleteDocumentIn return out, req.Send() } +const opDeleteInventory = "DeleteInventory" + +// DeleteInventoryRequest generates a "aws/request.Request" representing the +// client's request for the DeleteInventory operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteInventory for more information on using the DeleteInventory +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteInventoryRequest method. +// req, resp := client.DeleteInventoryRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteInventory +func (c *SSM) DeleteInventoryRequest(input *DeleteInventoryInput) (req *request.Request, output *DeleteInventoryOutput) { + op := &request.Operation{ + Name: opDeleteInventory, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteInventoryInput{} + } + + output = &DeleteInventoryOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteInventory API operation for Amazon Simple Systems Manager (SSM). +// +// Delete a custom inventory type, or the data associated with a custom Inventory +// type. Deleting a custom inventory type is also referred to as deleting a +// custom inventory schema. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DeleteInventory for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidTypeNameException "InvalidTypeNameException" +// The parameter type name is not valid. +// +// * ErrCodeInvalidOptionException "InvalidOptionException" +// The delete inventory option specified is not valid. Verify the option and +// try again. +// +// * ErrCodeInvalidDeleteInventoryParametersException "InvalidDeleteInventoryParametersException" +// One or more of the parameters specified for the delete operation is not valid. +// Verify all parameters and try again. +// +// * ErrCodeInvalidInventoryRequestException "InvalidInventoryRequestException" +// The request is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DeleteInventory +func (c *SSM) DeleteInventory(input *DeleteInventoryInput) (*DeleteInventoryOutput, error) { + req, out := c.DeleteInventoryRequest(input) + return out, req.Send() +} + +// DeleteInventoryWithContext is the same as DeleteInventory with the addition of +// the ability to pass a context and additional request options. 
+// +// See DeleteInventory for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DeleteInventoryWithContext(ctx aws.Context, input *DeleteInventoryInput, opts ...request.Option) (*DeleteInventoryOutput, error) { + req, out := c.DeleteInventoryRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteMaintenanceWindow = "DeleteMaintenanceWindow" // DeleteMaintenanceWindowRequest generates a "aws/request.Request" representing the // client's request for the DeleteMaintenanceWindow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1328,8 +1515,8 @@ const opDeleteParameter = "DeleteParameter" // DeleteParameterRequest generates a "aws/request.Request" representing the // client's request for the DeleteParameter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1410,8 +1597,8 @@ const opDeleteParameters = "DeleteParameters" // DeleteParametersRequest generates a "aws/request.Request" representing the // client's request for the DeleteParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1490,8 +1677,8 @@ const opDeletePatchBaseline = "DeletePatchBaseline" // DeletePatchBaselineRequest generates a "aws/request.Request" representing the // client's request for the DeletePatchBaseline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1573,8 +1760,8 @@ const opDeleteResourceDataSync = "DeleteResourceDataSync" // DeleteResourceDataSyncRequest generates a "aws/request.Request" representing the // client's request for the DeleteResourceDataSync operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
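The `*WithContext` variants above take a non-nil `aws.Context` and use it for request cancellation. A sketch of bounding the new DeleteInventory operation with a timeout, assuming `TypeName` is the field naming the custom inventory type to remove (the type name is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	// A standard context.Context satisfies aws.Context; here the call is
	// bounded to 30 seconds and cancelled on return.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	out, err := svc.DeleteInventoryWithContext(ctx, &ssm.DeleteInventoryInput{
		TypeName: aws.String("Custom:ExampleApp"), // placeholder custom inventory type
	})
	if err != nil {
		log.Fatalf("DeleteInventory failed: %v", err)
	}
	fmt.Println(out)
}
```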
// the "output" return value is not valid until after Send returns without error. @@ -1658,8 +1845,8 @@ const opDeregisterManagedInstance = "DeregisterManagedInstance" // DeregisterManagedInstanceRequest generates a "aws/request.Request" representing the // client's request for the DeregisterManagedInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1700,7 +1887,7 @@ func (c *SSM) DeregisterManagedInstanceRequest(input *DeregisterManagedInstanceI // // Removes the server or virtual machine from the list of registered servers. // You can reregister the instance again at any time. If you don't plan to use -// Run Command on the server, we suggest uninstalling the SSM Agent first. +// Run Command on the server, we suggest uninstalling SSM Agent first. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1715,12 +1902,12 @@ func (c *SSM) DeregisterManagedInstanceRequest(input *DeregisterManagedInstanceI // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -1754,8 +1941,8 @@ const opDeregisterPatchBaselineForPatchGroup = "DeregisterPatchBaselineForPatchG // DeregisterPatchBaselineForPatchGroupRequest generates a "aws/request.Request" representing the // client's request for the DeregisterPatchBaselineForPatchGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1837,8 +2024,8 @@ const opDeregisterTargetFromMaintenanceWindow = "DeregisterTargetFromMaintenance // DeregisterTargetFromMaintenanceWindowRequest generates a "aws/request.Request" representing the // client's request for the DeregisterTargetFromMaintenanceWindow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1927,8 +2114,8 @@ const opDeregisterTaskFromMaintenanceWindow = "DeregisterTaskFromMaintenanceWind // DeregisterTaskFromMaintenanceWindowRequest generates a "aws/request.Request" representing the // client's request for the DeregisterTaskFromMaintenanceWindow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2013,8 +2200,8 @@ const opDescribeActivations = "DescribeActivations" // DescribeActivationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeActivations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2157,8 +2344,8 @@ const opDescribeAssociation = "DescribeAssociation" // DescribeAssociationRequest generates a "aws/request.Request" representing the // client's request for the DescribeAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2230,12 +2417,12 @@ func (c *SSM) DescribeAssociationRequest(input *DescribeAssociationInput) (req * // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -2262,12 +2449,186 @@ func (c *SSM) DescribeAssociationWithContext(ctx aws.Context, input *DescribeAss return out, req.Send() } +const opDescribeAssociationExecutionTargets = "DescribeAssociationExecutionTargets" + +// DescribeAssociationExecutionTargetsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAssociationExecutionTargets operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAssociationExecutionTargets for more information on using the DescribeAssociationExecutionTargets +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAssociationExecutionTargetsRequest method. +// req, resp := client.DescribeAssociationExecutionTargetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAssociationExecutionTargets +func (c *SSM) DescribeAssociationExecutionTargetsRequest(input *DescribeAssociationExecutionTargetsInput) (req *request.Request, output *DescribeAssociationExecutionTargetsOutput) { + op := &request.Operation{ + Name: opDescribeAssociationExecutionTargets, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAssociationExecutionTargetsInput{} + } + + output = &DescribeAssociationExecutionTargetsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAssociationExecutionTargets API operation for Amazon Simple Systems Manager (SSM). +// +// Use this API action to view information about a specific execution of a specific +// association. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeAssociationExecutionTargets for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeAssociationDoesNotExist "AssociationDoesNotExist" +// The specified association does not exist. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// * ErrCodeAssociationExecutionDoesNotExist "AssociationExecutionDoesNotExist" +// The specified execution ID does not exist. Verify the ID number and try again. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAssociationExecutionTargets +func (c *SSM) DescribeAssociationExecutionTargets(input *DescribeAssociationExecutionTargetsInput) (*DescribeAssociationExecutionTargetsOutput, error) { + req, out := c.DescribeAssociationExecutionTargetsRequest(input) + return out, req.Send() +} + +// DescribeAssociationExecutionTargetsWithContext is the same as DescribeAssociationExecutionTargets with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAssociationExecutionTargets for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeAssociationExecutionTargetsWithContext(ctx aws.Context, input *DescribeAssociationExecutionTargetsInput, opts ...request.Option) (*DescribeAssociationExecutionTargetsOutput, error) { + req, out := c.DescribeAssociationExecutionTargetsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeAssociationExecutions = "DescribeAssociationExecutions" + +// DescribeAssociationExecutionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAssociationExecutions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeAssociationExecutions for more information on using the DescribeAssociationExecutions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeAssociationExecutionsRequest method. +// req, resp := client.DescribeAssociationExecutionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAssociationExecutions +func (c *SSM) DescribeAssociationExecutionsRequest(input *DescribeAssociationExecutionsInput) (req *request.Request, output *DescribeAssociationExecutionsOutput) { + op := &request.Operation{ + Name: opDescribeAssociationExecutions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeAssociationExecutionsInput{} + } + + output = &DescribeAssociationExecutionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeAssociationExecutions API operation for Amazon Simple Systems Manager (SSM). +// +// Use this API action to view all executions for a specific association ID. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeAssociationExecutions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeAssociationDoesNotExist "AssociationDoesNotExist" +// The specified association does not exist. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeAssociationExecutions +func (c *SSM) DescribeAssociationExecutions(input *DescribeAssociationExecutionsInput) (*DescribeAssociationExecutionsOutput, error) { + req, out := c.DescribeAssociationExecutionsRequest(input) + return out, req.Send() +} + +// DescribeAssociationExecutionsWithContext is the same as DescribeAssociationExecutions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAssociationExecutions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SSM) DescribeAssociationExecutionsWithContext(ctx aws.Context, input *DescribeAssociationExecutionsInput, opts ...request.Option) (*DescribeAssociationExecutionsOutput, error) { + req, out := c.DescribeAssociationExecutionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeAutomationExecutions = "DescribeAutomationExecutions" // DescribeAutomationExecutionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAutomationExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2354,8 +2715,8 @@ const opDescribeAutomationStepExecutions = "DescribeAutomationStepExecutions" // DescribeAutomationStepExecutionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeAutomationStepExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2447,8 +2808,8 @@ const opDescribeAvailablePatches = "DescribeAvailablePatches" // DescribeAvailablePatchesRequest generates a "aws/request.Request" representing the // client's request for the DescribeAvailablePatches operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2526,8 +2887,8 @@ const opDescribeDocument = "DescribeDocument" // DescribeDocumentRequest generates a "aws/request.Request" representing the // client's request for the DescribeDocument operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2611,8 +2972,8 @@ const opDescribeDocumentPermission = "DescribeDocumentPermission" // DescribeDocumentPermissionRequest generates a "aws/request.Request" representing the // client's request for the DescribeDocumentPermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
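DescribeAssociationExecutions, added above, lists all executions for an association ID and pages with a token. An illustrative sketch, assuming `AssociationId` and `NextToken` on the input and `AssociationExecutions` on the output (the association ID is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	input := &ssm.DescribeAssociationExecutionsInput{
		AssociationId: aws.String("11111111-2222-3333-4444-555555555555"), // placeholder
	}

	// Page through results until NextToken comes back empty.
	for {
		out, err := svc.DescribeAssociationExecutions(input)
		if err != nil {
			log.Fatalf("DescribeAssociationExecutions failed: %v", err)
		}
		for _, exec := range out.AssociationExecutions {
			fmt.Println(aws.StringValue(exec.ExecutionId), aws.StringValue(exec.Status))
		}
		if aws.StringValue(out.NextToken) == "" {
			break
		}
		input.NextToken = out.NextToken
	}
}
```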
@@ -2699,8 +3060,8 @@ const opDescribeEffectiveInstanceAssociations = "DescribeEffectiveInstanceAssoci // DescribeEffectiveInstanceAssociationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeEffectiveInstanceAssociations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2757,12 +3118,12 @@ func (c *SSM) DescribeEffectiveInstanceAssociationsRequest(input *DescribeEffect // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -2796,8 +3157,8 @@ const opDescribeEffectivePatchesForPatchBaseline = "DescribeEffectivePatchesForP // DescribeEffectivePatchesForPatchBaselineRequest generates a "aws/request.Request" representing the // client's request for the DescribeEffectivePatchesForPatchBaseline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2893,8 +3254,8 @@ const opDescribeInstanceAssociationsStatus = "DescribeInstanceAssociationsStatus // DescribeInstanceAssociationsStatusRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstanceAssociationsStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2951,12 +3312,12 @@ func (c *SSM) DescribeInstanceAssociationsStatusRequest(input *DescribeInstanceA // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. 
// // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -2990,8 +3351,8 @@ const opDescribeInstanceInformation = "DescribeInstanceInformation" // DescribeInstanceInformationRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstanceInformation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3043,6 +3404,10 @@ func (c *SSM) DescribeInstanceInformationRequest(input *DescribeInstanceInformat // information for all your instances. If you specify an instance ID that is // not valid or an instance that you do not own, you receive an error. // +// The IamRole field for this API action is the Amazon Identity and Access Management +// (IAM) role assigned to on-premises instances. This call does not return the +// IAM role for Amazon EC2 instances. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -3059,12 +3424,12 @@ func (c *SSM) DescribeInstanceInformationRequest(input *DescribeInstanceInformat // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -3154,8 +3519,8 @@ const opDescribeInstancePatchStates = "DescribeInstancePatchStates" // DescribeInstancePatchStatesRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstancePatchStates operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3236,8 +3601,8 @@ const opDescribeInstancePatchStatesForPatchGroup = "DescribeInstancePatchStatesF // DescribeInstancePatchStatesForPatchGroupRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstancePatchStatesForPatchGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
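The new note on DescribeInstanceInformation above says `IamRole` is populated only for on-premises managed instances, not for EC2 instances. A sketch that lists registered instances and prints that field, assuming the usual `InstanceInformationList`, `InstanceId`, `PingStatus`, and `IamRole` output names:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	// With no filters, the call returns information for all instances the
	// caller owns. IamRole stays empty for EC2 instances, per the note above.
	out, err := svc.DescribeInstanceInformation(&ssm.DescribeInstanceInformationInput{})
	if err != nil {
		log.Fatalf("DescribeInstanceInformation failed: %v", err)
	}
	for _, info := range out.InstanceInformationList {
		fmt.Printf("%s\t%s\t%s\n",
			aws.StringValue(info.InstanceId),
			aws.StringValue(info.PingStatus),
			aws.StringValue(info.IamRole))
	}
}
```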
// the "output" return value is not valid until after Send returns without error. @@ -3323,8 +3688,8 @@ const opDescribeInstancePatches = "DescribeInstancePatches" // DescribeInstancePatchesRequest generates a "aws/request.Request" representing the // client's request for the DescribeInstancePatches operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3382,12 +3747,12 @@ func (c *SSM) DescribeInstancePatchesRequest(input *DescribeInstancePatchesInput // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -3421,12 +3786,98 @@ func (c *SSM) DescribeInstancePatchesWithContext(ctx aws.Context, input *Describ return out, req.Send() } +const opDescribeInventoryDeletions = "DescribeInventoryDeletions" + +// DescribeInventoryDeletionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeInventoryDeletions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeInventoryDeletions for more information on using the DescribeInventoryDeletions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeInventoryDeletionsRequest method. +// req, resp := client.DescribeInventoryDeletionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInventoryDeletions +func (c *SSM) DescribeInventoryDeletionsRequest(input *DescribeInventoryDeletionsInput) (req *request.Request, output *DescribeInventoryDeletionsOutput) { + op := &request.Operation{ + Name: opDescribeInventoryDeletions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInventoryDeletionsInput{} + } + + output = &DescribeInventoryDeletionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeInventoryDeletions API operation for Amazon Simple Systems Manager (SSM). +// +// Describes a specific delete inventory operation. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeInventoryDeletions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidDeletionIdException "InvalidDeletionIdException" +// The ID specified for the delete operation does not exist or is not valide. +// Verify the ID and try again. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeInventoryDeletions +func (c *SSM) DescribeInventoryDeletions(input *DescribeInventoryDeletionsInput) (*DescribeInventoryDeletionsOutput, error) { + req, out := c.DescribeInventoryDeletionsRequest(input) + return out, req.Send() +} + +// DescribeInventoryDeletionsWithContext is the same as DescribeInventoryDeletions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInventoryDeletions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeInventoryDeletionsWithContext(ctx aws.Context, input *DescribeInventoryDeletionsInput, opts ...request.Option) (*DescribeInventoryDeletionsOutput, error) { + req, out := c.DescribeInventoryDeletionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeMaintenanceWindowExecutionTaskInvocations = "DescribeMaintenanceWindowExecutionTaskInvocations" // DescribeMaintenanceWindowExecutionTaskInvocationsRequest generates a "aws/request.Request" representing the // client's request for the DescribeMaintenanceWindowExecutionTaskInvocations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3512,8 +3963,8 @@ const opDescribeMaintenanceWindowExecutionTasks = "DescribeMaintenanceWindowExec // DescribeMaintenanceWindowExecutionTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeMaintenanceWindowExecutionTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3598,8 +4049,8 @@ const opDescribeMaintenanceWindowExecutions = "DescribeMaintenanceWindowExecutio // DescribeMaintenanceWindowExecutionsRequest generates a "aws/request.Request" representing the // client's request for the DescribeMaintenanceWindowExecutions operation. 
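DescribeInventoryDeletions, added above, reports on a delete operation started earlier with DeleteInventory. A hedged sketch, assuming `DeletionId` on the input and `InventoryDeletions` items with `TypeName`/`LastStatus` on the output (the deletion ID is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	// Check the status of a specific delete-inventory operation.
	out, err := svc.DescribeInventoryDeletions(&ssm.DescribeInventoryDeletionsInput{
		DeletionId: aws.String("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"), // placeholder
	})
	if err != nil {
		log.Fatalf("DescribeInventoryDeletions failed: %v", err)
	}
	for _, d := range out.InventoryDeletions {
		fmt.Println(aws.StringValue(d.TypeName), aws.StringValue(d.LastStatus))
	}
}
```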
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3675,12 +4126,98 @@ func (c *SSM) DescribeMaintenanceWindowExecutionsWithContext(ctx aws.Context, in return out, req.Send() } +const opDescribeMaintenanceWindowSchedule = "DescribeMaintenanceWindowSchedule" + +// DescribeMaintenanceWindowScheduleRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceWindowSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceWindowSchedule for more information on using the DescribeMaintenanceWindowSchedule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceWindowScheduleRequest method. +// req, resp := client.DescribeMaintenanceWindowScheduleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowSchedule +func (c *SSM) DescribeMaintenanceWindowScheduleRequest(input *DescribeMaintenanceWindowScheduleInput) (req *request.Request, output *DescribeMaintenanceWindowScheduleOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceWindowSchedule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceWindowScheduleInput{} + } + + output = &DescribeMaintenanceWindowScheduleOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceWindowSchedule API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves information about upcoming executions of a Maintenance Window. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeMaintenanceWindowSchedule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowSchedule +func (c *SSM) DescribeMaintenanceWindowSchedule(input *DescribeMaintenanceWindowScheduleInput) (*DescribeMaintenanceWindowScheduleOutput, error) { + req, out := c.DescribeMaintenanceWindowScheduleRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceWindowScheduleWithContext is the same as DescribeMaintenanceWindowSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceWindowSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeMaintenanceWindowScheduleWithContext(ctx aws.Context, input *DescribeMaintenanceWindowScheduleInput, opts ...request.Option) (*DescribeMaintenanceWindowScheduleOutput, error) { + req, out := c.DescribeMaintenanceWindowScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeMaintenanceWindowTargets = "DescribeMaintenanceWindowTargets" // DescribeMaintenanceWindowTargetsRequest generates a "aws/request.Request" representing the // client's request for the DescribeMaintenanceWindowTargets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3765,8 +4302,8 @@ const opDescribeMaintenanceWindowTasks = "DescribeMaintenanceWindowTasks" // DescribeMaintenanceWindowTasksRequest generates a "aws/request.Request" representing the // client's request for the DescribeMaintenanceWindowTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3851,8 +4388,8 @@ const opDescribeMaintenanceWindows = "DescribeMaintenanceWindows" // DescribeMaintenanceWindowsRequest generates a "aws/request.Request" representing the // client's request for the DescribeMaintenanceWindows operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3926,12 +4463,92 @@ func (c *SSM) DescribeMaintenanceWindowsWithContext(ctx aws.Context, input *Desc return out, req.Send() } +const opDescribeMaintenanceWindowsForTarget = "DescribeMaintenanceWindowsForTarget" + +// DescribeMaintenanceWindowsForTargetRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceWindowsForTarget operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceWindowsForTarget for more information on using the DescribeMaintenanceWindowsForTarget +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceWindowsForTargetRequest method. +// req, resp := client.DescribeMaintenanceWindowsForTargetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowsForTarget +func (c *SSM) DescribeMaintenanceWindowsForTargetRequest(input *DescribeMaintenanceWindowsForTargetInput) (req *request.Request, output *DescribeMaintenanceWindowsForTargetOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceWindowsForTarget, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceWindowsForTargetInput{} + } + + output = &DescribeMaintenanceWindowsForTargetOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceWindowsForTarget API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves information about the Maintenance Windows targets or tasks that +// an instance is associated with. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeMaintenanceWindowsForTarget for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeMaintenanceWindowsForTarget +func (c *SSM) DescribeMaintenanceWindowsForTarget(input *DescribeMaintenanceWindowsForTargetInput) (*DescribeMaintenanceWindowsForTargetOutput, error) { + req, out := c.DescribeMaintenanceWindowsForTargetRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceWindowsForTargetWithContext is the same as DescribeMaintenanceWindowsForTarget with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceWindowsForTarget for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeMaintenanceWindowsForTargetWithContext(ctx aws.Context, input *DescribeMaintenanceWindowsForTargetInput, opts ...request.Option) (*DescribeMaintenanceWindowsForTargetOutput, error) { + req, out := c.DescribeMaintenanceWindowsForTargetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDescribeParameters = "DescribeParameters" // DescribeParametersRequest generates a "aws/request.Request" representing the // client's request for the DescribeParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4086,8 +4703,8 @@ const opDescribePatchBaselines = "DescribePatchBaselines" // DescribePatchBaselinesRequest generates a "aws/request.Request" representing the // client's request for the DescribePatchBaselines operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4165,8 +4782,8 @@ const opDescribePatchGroupState = "DescribePatchGroupState" // DescribePatchGroupStateRequest generates a "aws/request.Request" representing the // client's request for the DescribePatchGroupState operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4247,8 +4864,8 @@ const opDescribePatchGroups = "DescribePatchGroups" // DescribePatchGroupsRequest generates a "aws/request.Request" representing the // client's request for the DescribePatchGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4322,12 +4939,98 @@ func (c *SSM) DescribePatchGroupsWithContext(ctx aws.Context, input *DescribePat return out, req.Send() } +const opDescribeSessions = "DescribeSessions" + +// DescribeSessionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSessions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeSessions for more information on using the DescribeSessions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeSessionsRequest method. 
+// req, resp := client.DescribeSessionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeSessions +func (c *SSM) DescribeSessionsRequest(input *DescribeSessionsInput) (req *request.Request, output *DescribeSessionsOutput) { + op := &request.Operation{ + Name: opDescribeSessions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeSessionsInput{} + } + + output = &DescribeSessionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeSessions API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves a list of all active sessions (both connected and disconnected) +// or terminated sessions from the past 30 days. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation DescribeSessions for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeInvalidFilterKey "InvalidFilterKey" +// The specified key is not valid. +// +// * ErrCodeInvalidNextToken "InvalidNextToken" +// The specified token is not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/DescribeSessions +func (c *SSM) DescribeSessions(input *DescribeSessionsInput) (*DescribeSessionsOutput, error) { + req, out := c.DescribeSessionsRequest(input) + return out, req.Send() +} + +// DescribeSessionsWithContext is the same as DescribeSessions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSessions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) DescribeSessionsWithContext(ctx aws.Context, input *DescribeSessionsInput, opts ...request.Option) (*DescribeSessionsOutput, error) { + req, out := c.DescribeSessionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetAutomationExecution = "GetAutomationExecution" // GetAutomationExecutionRequest generates a "aws/request.Request" representing the // client's request for the GetAutomationExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4409,8 +5112,8 @@ const opGetCommandInvocation = "GetCommandInvocation" // GetCommandInvocationRequest generates a "aws/request.Request" representing the // client's request for the GetCommandInvocation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4470,12 +5173,12 @@ func (c *SSM) GetCommandInvocationRequest(input *GetCommandInvocationInput) (req // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -4509,53 +5212,136 @@ func (c *SSM) GetCommandInvocationWithContext(ctx aws.Context, input *GetCommand return out, req.Send() } -const opGetDefaultPatchBaseline = "GetDefaultPatchBaseline" +const opGetConnectionStatus = "GetConnectionStatus" -// GetDefaultPatchBaselineRequest generates a "aws/request.Request" representing the -// client's request for the GetDefaultPatchBaseline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// GetConnectionStatusRequest generates a "aws/request.Request" representing the +// client's request for the GetConnectionStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See GetDefaultPatchBaseline for more information on using the GetDefaultPatchBaseline +// See GetConnectionStatus for more information on using the GetConnectionStatus // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the GetDefaultPatchBaselineRequest method. -// req, resp := client.GetDefaultPatchBaselineRequest(params) +// // Example sending a request using the GetConnectionStatusRequest method. 
+// req, resp := client.GetConnectionStatusRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetDefaultPatchBaseline -func (c *SSM) GetDefaultPatchBaselineRequest(input *GetDefaultPatchBaselineInput) (req *request.Request, output *GetDefaultPatchBaselineOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetConnectionStatus +func (c *SSM) GetConnectionStatusRequest(input *GetConnectionStatusInput) (req *request.Request, output *GetConnectionStatusOutput) { op := &request.Operation{ - Name: opGetDefaultPatchBaseline, + Name: opGetConnectionStatus, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &GetDefaultPatchBaselineInput{} + input = &GetConnectionStatusInput{} } - output = &GetDefaultPatchBaselineOutput{} + output = &GetConnectionStatusOutput{} req = c.newRequest(op, input, output) return } -// GetDefaultPatchBaseline API operation for Amazon Simple Systems Manager (SSM). +// GetConnectionStatus API operation for Amazon Simple Systems Manager (SSM). // -// Retrieves the default patch baseline. Note that Systems Manager supports -// creating multiple default patch baselines. For example, you can create a -// default patch baseline for each operating system. +// Retrieves the Session Manager connection status for an instance to determine +// whether it is connected and ready to receive Session Manager connections. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation GetConnectionStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetConnectionStatus +func (c *SSM) GetConnectionStatus(input *GetConnectionStatusInput) (*GetConnectionStatusOutput, error) { + req, out := c.GetConnectionStatusRequest(input) + return out, req.Send() +} + +// GetConnectionStatusWithContext is the same as GetConnectionStatus with the addition of +// the ability to pass a context and additional request options. +// +// See GetConnectionStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) GetConnectionStatusWithContext(ctx aws.Context, input *GetConnectionStatusInput, opts ...request.Option) (*GetConnectionStatusOutput, error) { + req, out := c.GetConnectionStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetDefaultPatchBaseline = "GetDefaultPatchBaseline" + +// GetDefaultPatchBaselineRequest generates a "aws/request.Request" representing the +// client's request for the GetDefaultPatchBaseline operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See GetDefaultPatchBaseline for more information on using the GetDefaultPatchBaseline +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetDefaultPatchBaselineRequest method. +// req, resp := client.GetDefaultPatchBaselineRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/GetDefaultPatchBaseline +func (c *SSM) GetDefaultPatchBaselineRequest(input *GetDefaultPatchBaselineInput) (req *request.Request, output *GetDefaultPatchBaselineOutput) { + op := &request.Operation{ + Name: opGetDefaultPatchBaseline, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetDefaultPatchBaselineInput{} + } + + output = &GetDefaultPatchBaselineOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetDefaultPatchBaseline API operation for Amazon Simple Systems Manager (SSM). +// +// Retrieves the default patch baseline. Note that Systems Manager supports +// creating multiple default patch baselines. For example, you can create a +// default patch baseline for each operating system. +// +// If you do not specify an operating system value, the default patch baseline +// for Windows is returned. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4594,8 +5380,8 @@ const opGetDeployablePatchSnapshotForInstance = "GetDeployablePatchSnapshotForIn // GetDeployablePatchSnapshotForInstanceRequest generates a "aws/request.Request" representing the // client's request for the GetDeployablePatchSnapshotForInstance operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4679,8 +5465,8 @@ const opGetDocument = "GetDocument" // GetDocumentRequest generates a "aws/request.Request" representing the // client's request for the GetDocument operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4764,8 +5550,8 @@ const opGetInventory = "GetInventory" // GetInventoryRequest generates a "aws/request.Request" representing the // client's request for the GetInventory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -4821,12 +5607,19 @@ func (c *SSM) GetInventoryRequest(input *GetInventoryInput) (req *request.Reques // The filter name is not valid. Verify the you entered the correct name and // try again. // +// * ErrCodeInvalidInventoryGroupException "InvalidInventoryGroupException" +// The specified inventory group is not valid. +// // * ErrCodeInvalidNextToken "InvalidNextToken" // The specified token is not valid. // // * ErrCodeInvalidTypeNameException "InvalidTypeNameException" // The parameter type name is not valid. // +// * ErrCodeInvalidAggregatorException "InvalidAggregatorException" +// The specified aggregator is not valid for inventory groups. Verify that the +// aggregator uses a valid inventory type such as AWS:Application or AWS:InstanceInformation. +// // * ErrCodeInvalidResultAttributeException "InvalidResultAttributeException" // The specified inventory item result attribute is not valid. // @@ -4856,8 +5649,8 @@ const opGetInventorySchema = "GetInventorySchema" // GetInventorySchemaRequest generates a "aws/request.Request" representing the // client's request for the GetInventorySchema operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4942,8 +5735,8 @@ const opGetMaintenanceWindow = "GetMaintenanceWindow" // GetMaintenanceWindowRequest generates a "aws/request.Request" representing the // client's request for the GetMaintenanceWindow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5028,8 +5821,8 @@ const opGetMaintenanceWindowExecution = "GetMaintenanceWindowExecution" // GetMaintenanceWindowExecutionRequest generates a "aws/request.Request" representing the // client's request for the GetMaintenanceWindowExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5115,8 +5908,8 @@ const opGetMaintenanceWindowExecutionTask = "GetMaintenanceWindowExecutionTask" // GetMaintenanceWindowExecutionTaskRequest generates a "aws/request.Request" representing the // client's request for the GetMaintenanceWindowExecutionTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5202,8 +5995,8 @@ const opGetMaintenanceWindowExecutionTaskInvocation = "GetMaintenanceWindowExecu // GetMaintenanceWindowExecutionTaskInvocationRequest generates a "aws/request.Request" representing the // client's request for the GetMaintenanceWindowExecutionTaskInvocation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5289,8 +6082,8 @@ const opGetMaintenanceWindowTask = "GetMaintenanceWindowTask" // GetMaintenanceWindowTaskRequest generates a "aws/request.Request" representing the // client's request for the GetMaintenanceWindowTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5375,8 +6168,8 @@ const opGetParameter = "GetParameter" // GetParameterRequest generates a "aws/request.Request" representing the // client's request for the GetParameter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5415,7 +6208,8 @@ func (c *SSM) GetParameterRequest(input *GetParameterInput) (req *request.Reques // GetParameter API operation for Amazon Simple Systems Manager (SSM). // -// Get information about a parameter by using the parameter name. +// Get information about a parameter by using the parameter name. Don't confuse +// this API action with the GetParameters API action. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5464,8 +6258,8 @@ const opGetParameterHistory = "GetParameterHistory" // GetParameterHistoryRequest generates a "aws/request.Request" representing the // client's request for the GetParameterHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5608,8 +6402,8 @@ const opGetParameters = "GetParameters" // GetParametersRequest generates a "aws/request.Request" representing the // client's request for the GetParameters operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -5648,7 +6442,8 @@ func (c *SSM) GetParametersRequest(input *GetParametersInput) (req *request.Requ // GetParameters API operation for Amazon Simple Systems Manager (SSM). // -// Get details of a parameter. +// Get details of a parameter. Don't confuse this API action with the GetParameter +// API action. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5690,8 +6485,8 @@ const opGetParametersByPath = "GetParametersByPath" // GetParametersByPathRequest generates a "aws/request.Request" representing the // client's request for the GetParametersByPath operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5737,7 +6532,8 @@ func (c *SSM) GetParametersByPathRequest(input *GetParametersByPathInput) (req * // GetParametersByPath API operation for Amazon Simple Systems Manager (SSM). // // Retrieve parameters in a specific hierarchy. For more information, see Working -// with Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-working.html). +// with Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-working.html) +// in the AWS Systems Manager User Guide. // // Request results are returned on a best-effort basis. If you specify MaxResults // in the request, the response includes information up to the limit specified. @@ -5852,8 +6648,8 @@ const opGetPatchBaseline = "GetPatchBaseline" // GetPatchBaselineRequest generates a "aws/request.Request" representing the // client's request for the GetPatchBaseline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5942,8 +6738,8 @@ const opGetPatchBaselineForPatchGroup = "GetPatchBaselineForPatchGroup" // GetPatchBaselineForPatchGroupRequest generates a "aws/request.Request" representing the // client's request for the GetPatchBaselineForPatchGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6018,12 +6814,133 @@ func (c *SSM) GetPatchBaselineForPatchGroupWithContext(ctx aws.Context, input *G return out, req.Send() } +const opLabelParameterVersion = "LabelParameterVersion" + +// LabelParameterVersionRequest generates a "aws/request.Request" representing the +// client's request for the LabelParameterVersion operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See LabelParameterVersion for more information on using the LabelParameterVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the LabelParameterVersionRequest method. +// req, resp := client.LabelParameterVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/LabelParameterVersion +func (c *SSM) LabelParameterVersionRequest(input *LabelParameterVersionInput) (req *request.Request, output *LabelParameterVersionOutput) { + op := &request.Operation{ + Name: opLabelParameterVersion, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &LabelParameterVersionInput{} + } + + output = &LabelParameterVersionOutput{} + req = c.newRequest(op, input, output) + return +} + +// LabelParameterVersion API operation for Amazon Simple Systems Manager (SSM). +// +// A parameter label is a user-defined alias to help you manage different versions +// of a parameter. When you modify a parameter, Systems Manager automatically +// saves a new version and increments the version number by one. A label can +// help you remember the purpose of a parameter when there are multiple versions. +// +// Parameter labels have the following requirements and restrictions. +// +// * A version of a parameter can have a maximum of 10 labels. +// +// * You can't attach the same label to different versions of the same parameter. +// For example, if version 1 has the label Production, then you can't attach +// Production to version 2. +// +// * You can move a label from one version of a parameter to another. +// +// * You can't create a label when you create a new parameter. You must attach +// a label to a specific version of a parameter. +// +// * You can't delete a parameter label. If you no longer want to use a parameter +// label, then you must move it to a different version of a parameter. +// +// * A label can have a maximum of 100 characters. +// +// * Labels can contain letters (case sensitive), numbers, periods (.), hyphens +// (-), or underscores (_). +// +// * Labels can't begin with a number, "aws," or "ssm" (not case sensitive). +// If a label fails to meet these requirements, then the label is not associated +// with a parameter and the system displays it in the list of InvalidLabels. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation LabelParameterVersion for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// +// * ErrCodeParameterNotFound "ParameterNotFound" +// The parameter could not be found. 
Verify the name and try again. +// +// * ErrCodeParameterVersionNotFound "ParameterVersionNotFound" +// The specified parameter version was not found. Verify the parameter name +// and version, and try again. +// +// * ErrCodeParameterVersionLabelLimitExceeded "ParameterVersionLabelLimitExceeded" +// A parameter version can have a maximum of ten labels. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/LabelParameterVersion +func (c *SSM) LabelParameterVersion(input *LabelParameterVersionInput) (*LabelParameterVersionOutput, error) { + req, out := c.LabelParameterVersionRequest(input) + return out, req.Send() +} + +// LabelParameterVersionWithContext is the same as LabelParameterVersion with the addition of +// the ability to pass a context and additional request options. +// +// See LabelParameterVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) LabelParameterVersionWithContext(ctx aws.Context, input *LabelParameterVersionInput, opts ...request.Option) (*LabelParameterVersionOutput, error) { + req, out := c.LabelParameterVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListAssociationVersions = "ListAssociationVersions" // ListAssociationVersionsRequest generates a "aws/request.Request" representing the // client's request for the ListAssociationVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6107,8 +7024,8 @@ const opListAssociations = "ListAssociations" // ListAssociationsRequest generates a "aws/request.Request" representing the // client's request for the ListAssociations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6245,8 +7162,8 @@ const opListCommandInvocations = "ListCommandInvocations" // ListCommandInvocationsRequest generates a "aws/request.Request" representing the // client's request for the ListCommandInvocations operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6315,12 +7232,12 @@ func (c *SSM) ListCommandInvocationsRequest(input *ListCommandInvocationsInput) // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. 
On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -6407,8 +7324,8 @@ const opListCommands = "ListCommands" // ListCommandsRequest generates a "aws/request.Request" representing the // client's request for the ListCommands operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6473,12 +7390,12 @@ func (c *SSM) ListCommandsRequest(input *ListCommandsInput) (req *request.Reques // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -6565,8 +7482,8 @@ const opListComplianceItems = "ListComplianceItems" // ListComplianceItemsRequest generates a "aws/request.Request" representing the // client's request for the ListComplianceItems operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6662,8 +7579,8 @@ const opListComplianceSummaries = "ListComplianceSummaries" // ListComplianceSummariesRequest generates a "aws/request.Request" representing the // client's request for the ListComplianceSummaries operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6750,8 +7667,8 @@ const opListDocumentVersions = "ListDocumentVersions" // ListDocumentVersionsRequest generates a "aws/request.Request" representing the // client's request for the ListDocumentVersions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6835,8 +7752,8 @@ const opListDocuments = "ListDocuments" // ListDocumentsRequest generates a "aws/request.Request" representing the // client's request for the ListDocuments operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6976,8 +7893,8 @@ const opListInventoryEntries = "ListInventoryEntries" // ListInventoryEntriesRequest generates a "aws/request.Request" representing the // client's request for the ListInventoryEntries operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7034,12 +7951,12 @@ func (c *SSM) ListInventoryEntriesRequest(input *ListInventoryEntriesInput) (req // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -7080,8 +7997,8 @@ const opListResourceComplianceSummaries = "ListResourceComplianceSummaries" // ListResourceComplianceSummariesRequest generates a "aws/request.Request" representing the // client's request for the ListResourceComplianceSummaries operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7168,8 +8085,8 @@ const opListResourceDataSync = "ListResourceDataSync" // ListResourceDataSyncRequest generates a "aws/request.Request" representing the // client's request for the ListResourceDataSync operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -7259,8 +8176,8 @@ const opListTagsForResource = "ListTagsForResource" // ListTagsForResourceRequest generates a "aws/request.Request" representing the // client's request for the ListTagsForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7346,8 +8263,8 @@ const opModifyDocumentPermission = "ModifyDocumentPermission" // ModifyDocumentPermissionRequest generates a "aws/request.Request" representing the // client's request for the ModifyDocumentPermission operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7443,8 +8360,8 @@ const opPutComplianceItems = "PutComplianceItems" // PutComplianceItemsRequest generates a "aws/request.Request" representing the // client's request for the PutComplianceItems operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7584,8 +8501,8 @@ const opPutInventory = "PutInventory" // PutInventoryRequest generates a "aws/request.Request" representing the // client's request for the PutInventory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7644,12 +8561,12 @@ func (c *SSM) PutInventoryRequest(input *PutInventoryInput) (req *request.Reques // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -7716,8 +8633,8 @@ const opPutParameter = "PutParameter" // PutParameterRequest generates a "aws/request.Request" representing the // client's request for the PutParameter operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7756,7 +8673,7 @@ func (c *SSM) PutParameterRequest(input *PutParameterInput) (req *request.Reques // PutParameter API operation for Amazon Simple Systems Manager (SSM). // -// Add one or more parameters to the system. +// Add a parameter to the system. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -7784,8 +8701,9 @@ func (c *SSM) PutParameterRequest(input *PutParameterInput) (req *request.Reques // The parameter already exists. You can't create duplicate parameters. // // * ErrCodeHierarchyLevelLimitExceededException "HierarchyLevelLimitExceededException" -// A hierarchy can have a maximum of 15 levels. For more information, see Working -// with Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-working.html). +// A hierarchy can have a maximum of 15 levels. For more information, see Requirements +// and Constraints for Parameter Names (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-parameter-name-constraints.html) +// in the AWS Systems Manager User Guide. // // * ErrCodeHierarchyTypeMismatchException "HierarchyTypeMismatchException" // Parameter Store does not support changing a parameter type in a hierarchy. @@ -7830,8 +8748,8 @@ const opRegisterDefaultPatchBaseline = "RegisterDefaultPatchBaseline" // RegisterDefaultPatchBaselineRequest generates a "aws/request.Request" representing the // client's request for the RegisterDefaultPatchBaseline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7920,8 +8838,8 @@ const opRegisterPatchBaselineForPatchGroup = "RegisterPatchBaselineForPatchGroup // RegisterPatchBaselineForPatchGroupRequest generates a "aws/request.Request" representing the // client's request for the RegisterPatchBaselineForPatchGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8021,8 +8939,8 @@ const opRegisterTargetWithMaintenanceWindow = "RegisterTargetWithMaintenanceWind // RegisterTargetWithMaintenanceWindowRequest generates a "aws/request.Request" representing the // client's request for the RegisterTargetWithMaintenanceWindow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -8118,8 +9036,8 @@ const opRegisterTaskWithMaintenanceWindow = "RegisterTaskWithMaintenanceWindow" // RegisterTaskWithMaintenanceWindowRequest generates a "aws/request.Request" representing the // client's request for the RegisterTaskWithMaintenanceWindow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8219,8 +9137,8 @@ const opRemoveTagsFromResource = "RemoveTagsFromResource" // RemoveTagsFromResourceRequest generates a "aws/request.Request" representing the // client's request for the RemoveTagsFromResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8280,6 +9198,10 @@ func (c *SSM) RemoveTagsFromResourceRequest(input *RemoveTagsFromResourceInput) // * ErrCodeInternalServerError "InternalServerError" // An error occurred on the server side. // +// * ErrCodeTooManyUpdates "TooManyUpdates" +// There are concurrent updates for a resource that supports one update at a +// time. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/RemoveTagsFromResource func (c *SSM) RemoveTagsFromResource(input *RemoveTagsFromResourceInput) (*RemoveTagsFromResourceOutput, error) { req, out := c.RemoveTagsFromResourceRequest(input) @@ -8302,12 +9224,102 @@ func (c *SSM) RemoveTagsFromResourceWithContext(ctx aws.Context, input *RemoveTa return out, req.Send() } +const opResumeSession = "ResumeSession" + +// ResumeSessionRequest generates a "aws/request.Request" representing the +// client's request for the ResumeSession operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ResumeSession for more information on using the ResumeSession +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ResumeSessionRequest method. 
+// req, resp := client.ResumeSessionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ResumeSession +func (c *SSM) ResumeSessionRequest(input *ResumeSessionInput) (req *request.Request, output *ResumeSessionOutput) { + op := &request.Operation{ + Name: opResumeSession, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ResumeSessionInput{} + } + + output = &ResumeSessionOutput{} + req = c.newRequest(op, input, output) + return +} + +// ResumeSession API operation for Amazon Simple Systems Manager (SSM). +// +// Reconnects a session to an instance after it has been disconnected. Connections +// can be resumed for disconnected sessions, but not terminated sessions. +// +// This command is primarily for use by client machines to automatically reconnect +// during intermittent network issues. It is not intended for any other use. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation ResumeSession for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/ResumeSession +func (c *SSM) ResumeSession(input *ResumeSessionInput) (*ResumeSessionOutput, error) { + req, out := c.ResumeSessionRequest(input) + return out, req.Send() +} + +// ResumeSessionWithContext is the same as ResumeSession with the addition of +// the ability to pass a context and additional request options. +// +// See ResumeSession for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) ResumeSessionWithContext(ctx aws.Context, input *ResumeSessionInput, opts ...request.Option) (*ResumeSessionOutput, error) { + req, out := c.ResumeSessionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opSendAutomationSignal = "SendAutomationSignal" // SendAutomationSignalRequest generates a "aws/request.Request" representing the // client's request for the SendAutomationSignal operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -8397,8 +9409,8 @@ const opSendCommand = "SendCommand" // SendCommandRequest generates a "aws/request.Request" representing the // client's request for the SendCommand operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8458,12 +9470,12 @@ func (c *SSM) SendCommandRequest(input *SendCommandInput) (req *request.Request, // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -8471,6 +9483,9 @@ func (c *SSM) SendCommandRequest(input *SendCommandInput) (req *request.Request, // * ErrCodeInvalidDocument "InvalidDocument" // The specified document does not exist. // +// * ErrCodeInvalidDocumentVersion "InvalidDocumentVersion" +// The document version is not valid or does not exist. +// // * ErrCodeInvalidOutputFolder "InvalidOutputFolder" // The S3 bucket does not exist. // @@ -8519,77 +9534,160 @@ func (c *SSM) SendCommandWithContext(ctx aws.Context, input *SendCommandInput, o return out, req.Send() } -const opStartAutomationExecution = "StartAutomationExecution" +const opStartAssociationsOnce = "StartAssociationsOnce" -// StartAutomationExecutionRequest generates a "aws/request.Request" representing the -// client's request for the StartAutomationExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// StartAssociationsOnceRequest generates a "aws/request.Request" representing the +// client's request for the StartAssociationsOnce operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StartAutomationExecution for more information on using the StartAutomationExecution +// See StartAssociationsOnce for more information on using the StartAssociationsOnce // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartAutomationExecutionRequest method. -// req, resp := client.StartAutomationExecutionRequest(params) +// // Example sending a request using the StartAssociationsOnceRequest method. 
+// req, resp := client.StartAssociationsOnceRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StartAutomationExecution -func (c *SSM) StartAutomationExecutionRequest(input *StartAutomationExecutionInput) (req *request.Request, output *StartAutomationExecutionOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StartAssociationsOnce +func (c *SSM) StartAssociationsOnceRequest(input *StartAssociationsOnceInput) (req *request.Request, output *StartAssociationsOnceOutput) { op := &request.Operation{ - Name: opStartAutomationExecution, + Name: opStartAssociationsOnce, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StartAutomationExecutionInput{} + input = &StartAssociationsOnceInput{} } - output = &StartAutomationExecutionOutput{} + output = &StartAssociationsOnceOutput{} req = c.newRequest(op, input, output) return } -// StartAutomationExecution API operation for Amazon Simple Systems Manager (SSM). +// StartAssociationsOnce API operation for Amazon Simple Systems Manager (SSM). // -// Initiates execution of an Automation document. +// Use this API action to execute an association immediately and only one time. +// This action can be helpful when troubleshooting associations. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s -// API operation StartAutomationExecution for usage and error information. +// API operation StartAssociationsOnce for usage and error information. // // Returned Error Codes: -// * ErrCodeAutomationDefinitionNotFoundException "AutomationDefinitionNotFoundException" -// An Automation document with the specified name could not be found. +// * ErrCodeInvalidAssociation "InvalidAssociation" +// The association is not valid or does not exist. // -// * ErrCodeInvalidAutomationExecutionParametersException "InvalidAutomationExecutionParametersException" -// The supplied parameters for invoking the specified Automation document are -// incorrect. For example, they may not match the set of parameters permitted -// for the specified Automation document. +// * ErrCodeAssociationDoesNotExist "AssociationDoesNotExist" +// The specified association does not exist. // -// * ErrCodeAutomationExecutionLimitExceededException "AutomationExecutionLimitExceededException" -// The number of simultaneously running Automation executions exceeded the allowable -// limit. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StartAssociationsOnce +func (c *SSM) StartAssociationsOnce(input *StartAssociationsOnceInput) (*StartAssociationsOnceOutput, error) { + req, out := c.StartAssociationsOnceRequest(input) + return out, req.Send() +} + +// StartAssociationsOnceWithContext is the same as StartAssociationsOnce with the addition of +// the ability to pass a context and additional request options. // -// * ErrCodeAutomationDefinitionVersionNotFoundException "AutomationDefinitionVersionNotFoundException" -// An Automation document with the specified name and version could not be found. +// See StartAssociationsOnce for details on how to use this API operation. 
// -// * ErrCodeIdempotentParameterMismatch "IdempotentParameterMismatch" -// Error returned when an idempotent operation is retried and the parameters +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) StartAssociationsOnceWithContext(ctx aws.Context, input *StartAssociationsOnceInput, opts ...request.Option) (*StartAssociationsOnceOutput, error) { + req, out := c.StartAssociationsOnceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartAutomationExecution = "StartAutomationExecution" + +// StartAutomationExecutionRequest generates a "aws/request.Request" representing the +// client's request for the StartAutomationExecution operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartAutomationExecution for more information on using the StartAutomationExecution +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartAutomationExecutionRequest method. +// req, resp := client.StartAutomationExecutionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StartAutomationExecution +func (c *SSM) StartAutomationExecutionRequest(input *StartAutomationExecutionInput) (req *request.Request, output *StartAutomationExecutionOutput) { + op := &request.Operation{ + Name: opStartAutomationExecution, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartAutomationExecutionInput{} + } + + output = &StartAutomationExecutionOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartAutomationExecution API operation for Amazon Simple Systems Manager (SSM). +// +// Initiates execution of an Automation document. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation StartAutomationExecution for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAutomationDefinitionNotFoundException "AutomationDefinitionNotFoundException" +// An Automation document with the specified name could not be found. +// +// * ErrCodeInvalidAutomationExecutionParametersException "InvalidAutomationExecutionParametersException" +// The supplied parameters for invoking the specified Automation document are +// incorrect. For example, they may not match the set of parameters permitted +// for the specified Automation document. +// +// * ErrCodeAutomationExecutionLimitExceededException "AutomationExecutionLimitExceededException" +// The number of simultaneously running Automation executions exceeded the allowable +// limit. 
+// +// * ErrCodeAutomationDefinitionVersionNotFoundException "AutomationDefinitionVersionNotFoundException" +// An Automation document with the specified name and version could not be found. +// +// * ErrCodeIdempotentParameterMismatch "IdempotentParameterMismatch" +// Error returned when an idempotent operation is retried and the parameters // don't match the original call to the API with the same idempotency token. // // * ErrCodeInvalidTarget "InvalidTarget" @@ -8621,12 +9719,108 @@ func (c *SSM) StartAutomationExecutionWithContext(ctx aws.Context, input *StartA return out, req.Send() } +const opStartSession = "StartSession" + +// StartSessionRequest generates a "aws/request.Request" representing the +// client's request for the StartSession operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartSession for more information on using the StartSession +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartSessionRequest method. +// req, resp := client.StartSessionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StartSession +func (c *SSM) StartSessionRequest(input *StartSessionInput) (req *request.Request, output *StartSessionOutput) { + op := &request.Operation{ + Name: opStartSession, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartSessionInput{} + } + + output = &StartSessionOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartSession API operation for Amazon Simple Systems Manager (SSM). +// +// Initiates a connection to a target (for example, an instance) for a Session +// Manager session. Returns a URL and token that can be used to open a WebSocket +// connection for sending input and receiving outputs. +// +// AWS CLI usage: start-session is an interactive command that requires the +// Session Manager plugin to be installed on the client machine making the call. +// For information, see Install the Session Manager Plugin for the AWS CLI +// (http://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html) +// in the AWS Systems Manager User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation StartSession for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidDocument "InvalidDocument" +// The specified document does not exist. +// +// * ErrCodeTargetNotConnected "TargetNotConnected" +// The specified target instance for the session is not fully configured for +// use with Session Manager. For more information, see Getting Started with +// Session Manager (http://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html) +// in the AWS Systems Manager User Guide. 
+// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/StartSession +func (c *SSM) StartSession(input *StartSessionInput) (*StartSessionOutput, error) { + req, out := c.StartSessionRequest(input) + return out, req.Send() +} + +// StartSessionWithContext is the same as StartSession with the addition of +// the ability to pass a context and additional request options. +// +// See StartSession for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) StartSessionWithContext(ctx aws.Context, input *StartSessionInput, opts ...request.Option) (*StartSessionOutput, error) { + req, out := c.StartSessionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opStopAutomationExecution = "StopAutomationExecution" // StopAutomationExecutionRequest generates a "aws/request.Request" representing the // client's request for the StopAutomationExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8707,12 +9901,100 @@ func (c *SSM) StopAutomationExecutionWithContext(ctx aws.Context, input *StopAut return out, req.Send() } +const opTerminateSession = "TerminateSession" + +// TerminateSessionRequest generates a "aws/request.Request" representing the +// client's request for the TerminateSession operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TerminateSession for more information on using the TerminateSession +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TerminateSessionRequest method. +// req, resp := client.TerminateSessionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/TerminateSession +func (c *SSM) TerminateSessionRequest(input *TerminateSessionInput) (req *request.Request, output *TerminateSessionOutput) { + op := &request.Operation{ + Name: opTerminateSession, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TerminateSessionInput{} + } + + output = &TerminateSessionOutput{} + req = c.newRequest(op, input, output) + return +} + +// TerminateSession API operation for Amazon Simple Systems Manager (SSM). +// +// Permanently ends a session and closes the data connection between the Session +// Manager client and SSM Agent on the instance. A terminated session cannot +// be resumed. 
+// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Systems Manager (SSM)'s +// API operation TerminateSession for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDoesNotExistException "DoesNotExistException" +// Error returned when the ID specified for a resource, such as a Maintenance +// Window or Patch baseline, doesn't exist. +// +// For information about resource limits in Systems Manager, see AWS Systems +// Manager Limits (http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ssm). +// +// * ErrCodeInternalServerError "InternalServerError" +// An error occurred on the server side. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ssm-2014-11-06/TerminateSession +func (c *SSM) TerminateSession(input *TerminateSessionInput) (*TerminateSessionOutput, error) { + req, out := c.TerminateSessionRequest(input) + return out, req.Send() +} + +// TerminateSessionWithContext is the same as TerminateSession with the addition of +// the ability to pass a context and additional request options. +// +// See TerminateSession for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SSM) TerminateSessionWithContext(ctx aws.Context, input *TerminateSessionInput, opts ...request.Option) (*TerminateSessionOutput, error) { + req, out := c.TerminateSessionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUpdateAssociation = "UpdateAssociation" // UpdateAssociationRequest generates a "aws/request.Request" representing the // client's request for the UpdateAssociation operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8831,8 +10113,8 @@ const opUpdateAssociationStatus = "UpdateAssociationStatus" // UpdateAssociationStatusRequest generates a "aws/request.Request" representing the // client's request for the UpdateAssociationStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8890,12 +10172,12 @@ func (c *SSM) UpdateAssociationStatusRequest(input *UpdateAssociationStatusInput // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. 
// -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -8939,8 +10221,8 @@ const opUpdateDocument = "UpdateDocument" // UpdateDocumentRequest generates a "aws/request.Request" representing the // client's request for the UpdateDocument operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9041,8 +10323,8 @@ const opUpdateDocumentDefaultVersion = "UpdateDocumentDefaultVersion" // UpdateDocumentDefaultVersionRequest generates a "aws/request.Request" representing the // client's request for the UpdateDocumentDefaultVersion operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9129,8 +10411,8 @@ const opUpdateMaintenanceWindow = "UpdateMaintenanceWindow" // UpdateMaintenanceWindowRequest generates a "aws/request.Request" representing the // client's request for the UpdateMaintenanceWindow operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9215,8 +10497,8 @@ const opUpdateMaintenanceWindowTarget = "UpdateMaintenanceWindowTarget" // UpdateMaintenanceWindowTargetRequest generates a "aws/request.Request" representing the // client's request for the UpdateMaintenanceWindowTarget operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9317,8 +10599,8 @@ const opUpdateMaintenanceWindowTask = "UpdateMaintenanceWindowTask" // UpdateMaintenanceWindowTaskRequest generates a "aws/request.Request" representing the // client's request for the UpdateMaintenanceWindowTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -9360,18 +10642,18 @@ func (c *SSM) UpdateMaintenanceWindowTaskRequest(input *UpdateMaintenanceWindowT // Modifies a task assigned to a Maintenance Window. You can't change the task // type, but you can change the following values: // -// Task ARN. For example, you can change a RUN_COMMAND task from AWS-RunPowerShellScript -// to AWS-RunShellScript. +// * TaskARN. For example, you can change a RUN_COMMAND task from AWS-RunPowerShellScript +// to AWS-RunShellScript. // -// Service role ARN. +// * ServiceRoleArn // -// Task parameters. +// * TaskInvocationParameters // -// Task priority. +// * Priority // -// Task MaxConcurrency and MaxErrors. +// * MaxConcurrency // -// Log location. +// * MaxErrors // // If a parameter is null, then the corresponding field is not modified. Also, // if you set Replace to true, then all fields required by the RegisterTaskWithMaintenanceWindow @@ -9422,8 +10704,8 @@ const opUpdateManagedInstanceRole = "UpdateManagedInstanceRole" // UpdateManagedInstanceRoleRequest generates a "aws/request.Request" representing the // client's request for the UpdateManagedInstanceRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9478,12 +10760,12 @@ func (c *SSM) UpdateManagedInstanceRoleRequest(input *UpdateManagedInstanceRoleI // // You do not have permission to access the instance. // -// The SSM Agent is not running. On managed instances and Linux instances, verify +// SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // -// The SSM Agent or EC2Config service is not registered to the SSM endpoint. -// Try reinstalling the SSM Agent or EC2Config service. +// SSM Agent or EC2Config service is not registered to the SSM endpoint. Try +// reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -9517,8 +10799,8 @@ const opUpdatePatchBaseline = "UpdatePatchBaseline" // UpdatePatchBaselineRequest generates a "aws/request.Request" representing the // client's request for the UpdatePatchBaseline operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -9613,7 +10895,7 @@ type Activation struct { ActivationId *string `type:"string"` // The date the activation was created. - CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `type:"timestamp"` // A name for the managed instance when it is created. DefaultInstanceName *string `type:"string"` @@ -9622,7 +10904,7 @@ type Activation struct { Description *string `type:"string"` // The date when this activation can no longer be used to register managed instances. 
- ExpirationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ExpirationDate *time.Time `type:"timestamp"` // Whether or not the activation is expired. Expired *bool `type:"boolean"` @@ -9708,16 +10990,29 @@ type AddTagsToResourceInput struct { // The resource ID you want to tag. // - // For the ManagedInstance, MaintenanceWindow, and PatchBaseline values, use - // the ID of the resource, such as mw-01234361858c9b57b for a Maintenance Window. + // Use the ID of the resource. Here are some examples: + // + // ManagedInstance: mi-012345abcde + // + // MaintenanceWindow: mw-012345abcde + // + // PatchBaseline: pb-012345abcde // // For the Document and Parameter values, use the name of the resource. // + // The ManagedInstance type for this API action is only for on-premises managed + // instances. You must specify the the name of the managed instance in the following + // format: mi-ID_number. For example, mi-1a2b3c4d5e6f. + // // ResourceId is a required field ResourceId *string `type:"string" required:"true"` // Specifies the type of resource you are tagging. // + // The ManagedInstance type for this API action is for on-premises managed instances. + // You must specify the the name of the managed instance in the following format: + // mi-ID_number. For example, mi-1a2b3c4d5e6f. + // // ResourceType is a required field ResourceType *string `type:"string" required:"true" enum:"ResourceTypeForTagging"` @@ -9725,6 +11020,8 @@ type AddTagsToResourceInput struct { // the tag to have a value, specify the parameter with no value, and we set // the value to an empty string. // + // Do not enter personally identifiable information in this field. + // // Tags is a required field Tags []*Tag `type:"list" required:"true"` } @@ -9821,7 +11118,7 @@ type Association struct { InstanceId *string `type:"string"` // The date on which the association was last run. - LastExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastExecutionDate *time.Time `type:"timestamp"` // The name of the Systems Manager document. Name *string `type:"string"` @@ -9919,8 +11216,11 @@ type AssociationDescription struct { // The association version. AssociationVersion *string `type:"string"` + // The severity level that is assigned to the association. + ComplianceSeverity *string `type:"string" enum:"AssociationComplianceSeverity"` + // The date when the association was made. - Date *time.Time `type:"timestamp" timestampFormat:"unix"` + Date *time.Time `type:"timestamp"` // The document version. DocumentVersion *string `type:"string"` @@ -9929,13 +11229,39 @@ type AssociationDescription struct { InstanceId *string `type:"string"` // The date on which the association was last run. - LastExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastExecutionDate *time.Time `type:"timestamp"` // The last date on which the association was successfully run. - LastSuccessfulExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastSuccessfulExecutionDate *time.Time `type:"timestamp"` // The date when the association was last updated. - LastUpdateAssociationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastUpdateAssociationDate *time.Time `type:"timestamp"` + + // The maximum number of targets allowed to run the association at the same + // time. You can specify a number, for example 10, or a percentage of the target + // set, for example 10%. The default value is 100%, which means all targets + // run the association at the same time. 
+ // + // If a new instance starts and attempts to execute an association while Systems + // Manager is executing MaxConcurrency associations, the association is allowed + // to run. During the next association interval, the new instance will process + // its association within the limit specified for MaxConcurrency. + MaxConcurrency *string `min:"1" type:"string"` + + // The number of errors that are allowed before the system stops sending requests + // to run the association on additional targets. You can specify either an absolute + // number of errors, for example 10, or a percentage of the target set, for + // example 10%. If you specify 3, for example, the system stops sending requests + // when the fourth error is received. If you specify 0, then the system stops + // sending requests after the first error is returned. If you run an association + // on 50 instances and set MaxError to 10%, then the system stops sending the + // request when the sixth error is received. + // + // Executions that are already running an association when MaxErrors is reached + // are allowed to complete, but some of these executions may fail as well. If + // you need to ensure that there won't be more than max-errors failed executions, + // set MaxConcurrency to 1 so that executions proceed one at a time. + MaxErrors *string `min:"1" type:"string"` // The name of the Systems Manager document. Name *string `type:"string"` @@ -9987,6 +11313,12 @@ func (s *AssociationDescription) SetAssociationVersion(v string) *AssociationDes return s } +// SetComplianceSeverity sets the ComplianceSeverity field's value. +func (s *AssociationDescription) SetComplianceSeverity(v string) *AssociationDescription { + s.ComplianceSeverity = &v + return s +} + // SetDate sets the Date field's value. func (s *AssociationDescription) SetDate(v time.Time) *AssociationDescription { s.Date = &v @@ -10023,6 +11355,18 @@ func (s *AssociationDescription) SetLastUpdateAssociationDate(v time.Time) *Asso return s } +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *AssociationDescription) SetMaxConcurrency(v string) *AssociationDescription { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *AssociationDescription) SetMaxErrors(v string) *AssociationDescription { + s.MaxErrors = &v + return s +} + // SetName sets the Name field's value. func (s *AssociationDescription) SetName(v string) *AssociationDescription { s.Name = &v @@ -10065,30 +11409,343 @@ func (s *AssociationDescription) SetTargets(v []*Target) *AssociationDescription return s } -// Describes a filter. -type AssociationFilter struct { +// Includes information about the specified association. +type AssociationExecution struct { _ struct{} `type:"structure"` - // The name of the filter. - // - // Key is a required field - Key *string `locationName:"key" type:"string" required:"true" enum:"AssociationFilterKey"` + // The association ID. + AssociationId *string `type:"string"` - // The filter value. - // - // Value is a required field - Value *string `locationName:"value" min:"1" type:"string" required:"true"` -} + // The association version. + AssociationVersion *string `type:"string"` -// String returns the string representation -func (s AssociationFilter) String() string { - return awsutil.Prettify(s) -} + // The time the execution started. 
+ CreatedTime *time.Time `type:"timestamp"` -// GoString returns the string representation -func (s AssociationFilter) GoString() string { - return s.String() -} + // Detailed status information about the execution. + DetailedStatus *string `type:"string"` + + // The execution ID for the association. If the association does not run at + // intervals or according to a schedule, then the ExecutionID is the same as + // the AssociationID. + ExecutionId *string `type:"string"` + + // The date of the last execution. + LastExecutionDate *time.Time `type:"timestamp"` + + // An aggregate status of the resources in the execution based on the status + // type. + ResourceCountByStatus *string `type:"string"` + + // The status of the association execution. + Status *string `type:"string"` +} + +// String returns the string representation +func (s AssociationExecution) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationExecution) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *AssociationExecution) SetAssociationId(v string) *AssociationExecution { + s.AssociationId = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. +func (s *AssociationExecution) SetAssociationVersion(v string) *AssociationExecution { + s.AssociationVersion = &v + return s +} + +// SetCreatedTime sets the CreatedTime field's value. +func (s *AssociationExecution) SetCreatedTime(v time.Time) *AssociationExecution { + s.CreatedTime = &v + return s +} + +// SetDetailedStatus sets the DetailedStatus field's value. +func (s *AssociationExecution) SetDetailedStatus(v string) *AssociationExecution { + s.DetailedStatus = &v + return s +} + +// SetExecutionId sets the ExecutionId field's value. +func (s *AssociationExecution) SetExecutionId(v string) *AssociationExecution { + s.ExecutionId = &v + return s +} + +// SetLastExecutionDate sets the LastExecutionDate field's value. +func (s *AssociationExecution) SetLastExecutionDate(v time.Time) *AssociationExecution { + s.LastExecutionDate = &v + return s +} + +// SetResourceCountByStatus sets the ResourceCountByStatus field's value. +func (s *AssociationExecution) SetResourceCountByStatus(v string) *AssociationExecution { + s.ResourceCountByStatus = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *AssociationExecution) SetStatus(v string) *AssociationExecution { + s.Status = &v + return s +} + +// Filters used in the request. +type AssociationExecutionFilter struct { + _ struct{} `type:"structure"` + + // The key value used in the request. + // + // Key is a required field + Key *string `type:"string" required:"true" enum:"AssociationExecutionFilterKey"` + + // The filter type specified in the request. + // + // Type is a required field + Type *string `type:"string" required:"true" enum:"AssociationFilterOperatorType"` + + // The value specified for the key. + // + // Value is a required field + Value *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociationExecutionFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationExecutionFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AssociationExecutionFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociationExecutionFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Type == nil { + invalidParams.Add(request.NewErrParamRequired("Type")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && len(*s.Value) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *AssociationExecutionFilter) SetKey(v string) *AssociationExecutionFilter { + s.Key = &v + return s +} + +// SetType sets the Type field's value. +func (s *AssociationExecutionFilter) SetType(v string) *AssociationExecutionFilter { + s.Type = &v + return s +} + +// SetValue sets the Value field's value. +func (s *AssociationExecutionFilter) SetValue(v string) *AssociationExecutionFilter { + s.Value = &v + return s +} + +// Includes information about the specified association execution. +type AssociationExecutionTarget struct { + _ struct{} `type:"structure"` + + // The association ID. + AssociationId *string `type:"string"` + + // The association version. + AssociationVersion *string `type:"string"` + + // Detailed information about the execution status. + DetailedStatus *string `type:"string"` + + // The execution ID. If the association does not run at intervals or according + // to a schedule, then the ExecutionID is the same as the AssociationID. + ExecutionId *string `type:"string"` + + // The date of the last execution. + LastExecutionDate *time.Time `type:"timestamp"` + + // The location where the association details are saved. + OutputSource *OutputSource `type:"structure"` + + // The resource ID, for example, the instance ID where the association ran. + ResourceId *string `min:"1" type:"string"` + + // The resource type, for example, instance. + ResourceType *string `min:"1" type:"string"` + + // The association execution status. + Status *string `type:"string"` +} + +// String returns the string representation +func (s AssociationExecutionTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationExecutionTarget) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *AssociationExecutionTarget) SetAssociationId(v string) *AssociationExecutionTarget { + s.AssociationId = &v + return s +} + +// SetAssociationVersion sets the AssociationVersion field's value. +func (s *AssociationExecutionTarget) SetAssociationVersion(v string) *AssociationExecutionTarget { + s.AssociationVersion = &v + return s +} + +// SetDetailedStatus sets the DetailedStatus field's value. +func (s *AssociationExecutionTarget) SetDetailedStatus(v string) *AssociationExecutionTarget { + s.DetailedStatus = &v + return s +} + +// SetExecutionId sets the ExecutionId field's value. +func (s *AssociationExecutionTarget) SetExecutionId(v string) *AssociationExecutionTarget { + s.ExecutionId = &v + return s +} + +// SetLastExecutionDate sets the LastExecutionDate field's value. +func (s *AssociationExecutionTarget) SetLastExecutionDate(v time.Time) *AssociationExecutionTarget { + s.LastExecutionDate = &v + return s +} + +// SetOutputSource sets the OutputSource field's value. 
+func (s *AssociationExecutionTarget) SetOutputSource(v *OutputSource) *AssociationExecutionTarget { + s.OutputSource = v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *AssociationExecutionTarget) SetResourceId(v string) *AssociationExecutionTarget { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *AssociationExecutionTarget) SetResourceType(v string) *AssociationExecutionTarget { + s.ResourceType = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *AssociationExecutionTarget) SetStatus(v string) *AssociationExecutionTarget { + s.Status = &v + return s +} + +// Filters for the association execution. +type AssociationExecutionTargetsFilter struct { + _ struct{} `type:"structure"` + + // The key value used in the request. + // + // Key is a required field + Key *string `type:"string" required:"true" enum:"AssociationExecutionTargetsFilterKey"` + + // The value specified for the key. + // + // Value is a required field + Value *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociationExecutionTargetsFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationExecutionTargetsFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociationExecutionTargetsFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociationExecutionTargetsFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && len(*s.Value) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *AssociationExecutionTargetsFilter) SetKey(v string) *AssociationExecutionTargetsFilter { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *AssociationExecutionTargetsFilter) SetValue(v string) *AssociationExecutionTargetsFilter { + s.Value = &v + return s +} + +// Describes a filter. +type AssociationFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + // + // Key is a required field + Key *string `locationName:"key" type:"string" required:"true" enum:"AssociationFilterKey"` + + // The filter value. + // + // Value is a required field + Value *string `locationName:"value" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociationFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociationFilter) GoString() string { + return s.String() +} // Validate inspects the fields of the type to determine if they are valid. func (s *AssociationFilter) Validate() error { @@ -10175,7 +11832,7 @@ type AssociationStatus struct { // The date when the status changed. // // Date is a required field - Date *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + Date *time.Time `type:"timestamp" required:"true"` // The reason for the status. // @@ -10258,13 +11915,42 @@ type AssociationVersionInfo struct { // The association version. AssociationVersion *string `type:"string"` + // The severity level that is assigned to the association. 
+ ComplianceSeverity *string `type:"string" enum:"AssociationComplianceSeverity"` + // The date the association version was created. - CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `type:"timestamp"` // The version of a Systems Manager document used when the association version // was created. DocumentVersion *string `type:"string"` + // The maximum number of targets allowed to run the association at the same + // time. You can specify a number, for example 10, or a percentage of the target + // set, for example 10%. The default value is 100%, which means all targets + // run the association at the same time. + // + // If a new instance starts and attempts to execute an association while Systems + // Manager is executing MaxConcurrency associations, the association is allowed + // to run. During the next association interval, the new instance will process + // its association within the limit specified for MaxConcurrency. + MaxConcurrency *string `min:"1" type:"string"` + + // The number of errors that are allowed before the system stops sending requests + // to run the association on additional targets. You can specify either an absolute + // number of errors, for example 10, or a percentage of the target set, for + // example 10%. If you specify 3, for example, the system stops sending requests + // when the fourth error is received. If you specify 0, then the system stops + // sending requests after the first error is returned. If you run an association + // on 50 instances and set MaxError to 10%, then the system stops sending the + // request when the sixth error is received. + // + // Executions that are already running an association when MaxErrors is reached + // are allowed to complete, but some of these executions may fail as well. If + // you need to ensure that there won't be more than max-errors failed executions, + // set MaxConcurrency to 1 so that executions proceed one at a time. + MaxErrors *string `min:"1" type:"string"` + // The name specified when the association was created. Name *string `type:"string"` @@ -10312,6 +11998,12 @@ func (s *AssociationVersionInfo) SetAssociationVersion(v string) *AssociationVer return s } +// SetComplianceSeverity sets the ComplianceSeverity field's value. +func (s *AssociationVersionInfo) SetComplianceSeverity(v string) *AssociationVersionInfo { + s.ComplianceSeverity = &v + return s +} + // SetCreatedDate sets the CreatedDate field's value. func (s *AssociationVersionInfo) SetCreatedDate(v time.Time) *AssociationVersionInfo { s.CreatedDate = &v @@ -10324,6 +12016,18 @@ func (s *AssociationVersionInfo) SetDocumentVersion(v string) *AssociationVersio return s } +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *AssociationVersionInfo) SetMaxConcurrency(v string) *AssociationVersionInfo { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *AssociationVersionInfo) SetMaxErrors(v string) *AssociationVersionInfo { + s.MaxErrors = &v + return s +} + // SetName sets the Name field's value. func (s *AssociationVersionInfo) SetName(v string) *AssociationVersionInfo { s.Name = &v @@ -10381,10 +12085,10 @@ type AutomationExecution struct { ExecutedBy *string `type:"string"` // The time the execution finished. - ExecutionEndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ExecutionEndTime *time.Time `type:"timestamp"` // The time the execution started. 
- ExecutionStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ExecutionStartTime *time.Time `type:"timestamp"` // A message describing why an execution has failed, if the status is set to // Failed. @@ -10409,6 +12113,10 @@ type AutomationExecution struct { // The AutomationExecutionId of the parent automation. ParentAutomationExecutionId *string `min:"36" type:"string"` + // An aggregate of step execution statuses displayed in the AWS Console for + // a multi-Region and multi-account Automation execution. + ProgressCounters *ProgressCounters `type:"structure"` + // A list of resolved targets in the rate control execution. ResolvedTargets *ResolvedTargets `type:"structure"` @@ -10424,6 +12132,13 @@ type AutomationExecution struct { // The target of the execution. Target *string `type:"string"` + // The combination of AWS Regions and/or AWS accounts where you want to execute + // the Automation. + TargetLocations []*TargetLocation `min:"1" type:"list"` + + // The specified key-value mapping of document parameters to target resources. + TargetMaps []map[string][]*string `type:"list"` + // The parameter name. TargetParameterName *string `min:"1" type:"string"` @@ -10537,6 +12252,12 @@ func (s *AutomationExecution) SetParentAutomationExecutionId(v string) *Automati return s } +// SetProgressCounters sets the ProgressCounters field's value. +func (s *AutomationExecution) SetProgressCounters(v *ProgressCounters) *AutomationExecution { + s.ProgressCounters = v + return s +} + // SetResolvedTargets sets the ResolvedTargets field's value. func (s *AutomationExecution) SetResolvedTargets(v *ResolvedTargets) *AutomationExecution { s.ResolvedTargets = v @@ -10561,6 +12282,18 @@ func (s *AutomationExecution) SetTarget(v string) *AutomationExecution { return s } +// SetTargetLocations sets the TargetLocations field's value. +func (s *AutomationExecution) SetTargetLocations(v []*TargetLocation) *AutomationExecution { + s.TargetLocations = v + return s +} + +// SetTargetMaps sets the TargetMaps field's value. +func (s *AutomationExecution) SetTargetMaps(v []map[string][]*string) *AutomationExecution { + s.TargetMaps = v + return s +} + // SetTargetParameterName sets the TargetParameterName field's value. func (s *AutomationExecution) SetTargetParameterName(v string) *AutomationExecution { s.TargetParameterName = &v @@ -10644,6 +12377,13 @@ type AutomationExecutionMetadata struct { // Timed out, or Cancelled. AutomationExecutionStatus *string `type:"string" enum:"AutomationExecutionStatus"` + // Use this filter with DescribeAutomationExecution. Specify either Local of + // CrossAccount. CrossAccount is an Automation that executes in multiple AWS + // Regions and accounts. For more information, see Concurrently Executing Automations + // in Multiple AWS Regions and Accounts (http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation-multiple-accounts-and-regions.html) + // in the AWS Systems Manager User Guide. + AutomationType *string `type:"string" enum:"AutomationType"` + // The action of the currently executing step. CurrentAction *string `type:"string"` @@ -10661,10 +12401,10 @@ type AutomationExecutionMetadata struct { // The time the execution finished. This is not populated if the execution is // still in progress. 
- ExecutionEndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ExecutionEndTime *time.Time `type:"timestamp"` // The time the execution started.> - ExecutionStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ExecutionStartTime *time.Time `type:"timestamp"` // The list of execution outputs as defined in the Automation document. FailureMessage *string `type:"string"` @@ -10693,6 +12433,9 @@ type AutomationExecutionMetadata struct { // The list of execution outputs as defined in the Automation document. Target *string `type:"string"` + // The specified key-value mapping of document parameters to target resources. + TargetMaps []map[string][]*string `type:"list"` + // The list of execution outputs as defined in the Automation document. TargetParameterName *string `min:"1" type:"string"` @@ -10722,6 +12465,12 @@ func (s *AutomationExecutionMetadata) SetAutomationExecutionStatus(v string) *Au return s } +// SetAutomationType sets the AutomationType field's value. +func (s *AutomationExecutionMetadata) SetAutomationType(v string) *AutomationExecutionMetadata { + s.AutomationType = &v + return s +} + // SetCurrentAction sets the CurrentAction field's value. func (s *AutomationExecutionMetadata) SetCurrentAction(v string) *AutomationExecutionMetadata { s.CurrentAction = &v @@ -10818,6 +12567,12 @@ func (s *AutomationExecutionMetadata) SetTarget(v string) *AutomationExecutionMe return s } +// SetTargetMaps sets the TargetMaps field's value. +func (s *AutomationExecutionMetadata) SetTargetMaps(v []map[string][]*string) *AutomationExecutionMetadata { + s.TargetMaps = v + return s +} + // SetTargetParameterName sets the TargetParameterName field's value. func (s *AutomationExecutionMetadata) SetTargetParameterName(v string) *AutomationExecutionMetadata { s.TargetParameterName = &v @@ -10830,38 +12585,101 @@ func (s *AutomationExecutionMetadata) SetTargets(v []*Target) *AutomationExecuti return s } -type CancelCommandInput struct { +type CancelCommandInput struct { + _ struct{} `type:"structure"` + + // The ID of the command you want to cancel. + // + // CommandId is a required field + CommandId *string `min:"36" type:"string" required:"true"` + + // (Optional) A list of instance IDs on which you want to cancel the command. + // If not provided, the command is canceled on every instance on which it was + // requested. + InstanceIds []*string `type:"list"` +} + +// String returns the string representation +func (s CancelCommandInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelCommandInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelCommandInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelCommandInput"} + if s.CommandId == nil { + invalidParams.Add(request.NewErrParamRequired("CommandId")) + } + if s.CommandId != nil && len(*s.CommandId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("CommandId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCommandId sets the CommandId field's value. +func (s *CancelCommandInput) SetCommandId(v string) *CancelCommandInput { + s.CommandId = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *CancelCommandInput) SetInstanceIds(v []*string) *CancelCommandInput { + s.InstanceIds = v + return s +} + +// Whether or not the command was successfully canceled. 
There is no guarantee +// that a request can be canceled. +type CancelCommandOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CancelCommandOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelCommandOutput) GoString() string { + return s.String() +} + +type CancelMaintenanceWindowExecutionInput struct { _ struct{} `type:"structure"` - // The ID of the command you want to cancel. + // The ID of the Maintenance Window execution to stop. // - // CommandId is a required field - CommandId *string `min:"36" type:"string" required:"true"` - - // (Optional) A list of instance IDs on which you want to cancel the command. - // If not provided, the command is canceled on every instance on which it was - // requested. - InstanceIds []*string `type:"list"` + // WindowExecutionId is a required field + WindowExecutionId *string `min:"36" type:"string" required:"true"` } // String returns the string representation -func (s CancelCommandInput) String() string { +func (s CancelMaintenanceWindowExecutionInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CancelCommandInput) GoString() string { +func (s CancelMaintenanceWindowExecutionInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CancelCommandInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CancelCommandInput"} - if s.CommandId == nil { - invalidParams.Add(request.NewErrParamRequired("CommandId")) +func (s *CancelMaintenanceWindowExecutionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelMaintenanceWindowExecutionInput"} + if s.WindowExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowExecutionId")) } - if s.CommandId != nil && len(*s.CommandId) < 36 { - invalidParams.Add(request.NewErrParamMinLen("CommandId", 36)) + if s.WindowExecutionId != nil && len(*s.WindowExecutionId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowExecutionId", 36)) } if invalidParams.Len() > 0 { @@ -10870,38 +12688,91 @@ func (s *CancelCommandInput) Validate() error { return nil } -// SetCommandId sets the CommandId field's value. -func (s *CancelCommandInput) SetCommandId(v string) *CancelCommandInput { - s.CommandId = &v +// SetWindowExecutionId sets the WindowExecutionId field's value. +func (s *CancelMaintenanceWindowExecutionInput) SetWindowExecutionId(v string) *CancelMaintenanceWindowExecutionInput { + s.WindowExecutionId = &v return s } -// SetInstanceIds sets the InstanceIds field's value. -func (s *CancelCommandInput) SetInstanceIds(v []*string) *CancelCommandInput { - s.InstanceIds = v +type CancelMaintenanceWindowExecutionOutput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window execution that has been stopped. + WindowExecutionId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s CancelMaintenanceWindowExecutionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelMaintenanceWindowExecutionOutput) GoString() string { + return s.String() +} + +// SetWindowExecutionId sets the WindowExecutionId field's value. 
+func (s *CancelMaintenanceWindowExecutionOutput) SetWindowExecutionId(v string) *CancelMaintenanceWindowExecutionOutput { + s.WindowExecutionId = &v return s } -// Whether or not the command was successfully canceled. There is no guarantee -// that a request can be canceled. -type CancelCommandOutput struct { +// Configuration options for sending command output to CloudWatch Logs. +type CloudWatchOutputConfig struct { _ struct{} `type:"structure"` + + // The name of the CloudWatch log group where you want to send command output. + // If you don't specify a group name, Systems Manager automatically creates + // a log group for you. The log group uses the following naming format: aws/ssm/SystemsManagerDocumentName. + CloudWatchLogGroupName *string `min:"1" type:"string"` + + // Enables Systems Manager to send command output to CloudWatch Logs. + CloudWatchOutputEnabled *bool `type:"boolean"` } // String returns the string representation -func (s CancelCommandOutput) String() string { +func (s CloudWatchOutputConfig) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CancelCommandOutput) GoString() string { +func (s CloudWatchOutputConfig) GoString() string { return s.String() } +// Validate inspects the fields of the type to determine if they are valid. +func (s *CloudWatchOutputConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CloudWatchOutputConfig"} + if s.CloudWatchLogGroupName != nil && len(*s.CloudWatchLogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("CloudWatchLogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCloudWatchLogGroupName sets the CloudWatchLogGroupName field's value. +func (s *CloudWatchOutputConfig) SetCloudWatchLogGroupName(v string) *CloudWatchOutputConfig { + s.CloudWatchLogGroupName = &v + return s +} + +// SetCloudWatchOutputEnabled sets the CloudWatchOutputEnabled field's value. +func (s *CloudWatchOutputConfig) SetCloudWatchOutputEnabled(v bool) *CloudWatchOutputConfig { + s.CloudWatchOutputEnabled = &v + return s +} + // Describes a command request. type Command struct { _ struct{} `type:"structure"` + // CloudWatch Logs information where you want Systems Manager to send the command + // output. + CloudWatchOutputConfig *CloudWatchOutputConfig `type:"structure"` + // A unique identifier for this command. CommandId *string `min:"36" type:"string"` @@ -10914,16 +12785,22 @@ type Command struct { // Timed Out, Delivery Timed Out, Canceled, Terminated, or Undeliverable. CompletedCount *int64 `type:"integer"` + // The number of targets for which the status is Delivery Timed Out. + DeliveryTimedOutCount *int64 `type:"integer"` + // The name of the document requested for execution. DocumentName *string `type:"string"` + // The SSM document version. + DocumentVersion *string `type:"string"` + // The number of targets for which the status is Failed or Execution Timed Out. ErrorCount *int64 `type:"integer"` // If this time is reached and the command has not already started executing, - // it will not execute. Calculated based on the ExpiresAfter user input provided + // it will not run. Calculated based on the ExpiresAfter user input provided // as part of the SendCommand API. - ExpiresAfter *time.Time `type:"timestamp" timestampFormat:"unix"` + ExpiresAfter *time.Time `type:"timestamp"` // The instance IDs against which this command was requested. 
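// A minimal sketch of the CloudWatchOutputConfig type defined above, built with the
// setters and Validate method shown in this file. The log group name is a placeholder;
// which request types accept this struct is determined elsewhere in this SDK version.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	cfg := (&ssm.CloudWatchOutputConfig{}).
		SetCloudWatchLogGroupName("/example/run-command-output"). // placeholder log group
		SetCloudWatchOutputEnabled(true)

	// Validate enforces the minimum length of 1 on CloudWatchLogGroupName.
	if err := cfg.Validate(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg)
}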
InstanceIds []*string `type:"list"` @@ -10931,15 +12808,17 @@ type Command struct { // The maximum number of instances that are allowed to execute the command at // the same time. You can specify a number of instances, such as 10, or a percentage // of instances, such as 10%. The default value is 50. For more information - // about how to use MaxConcurrency, see Executing a Command Using Systems Manager - // Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html). + // about how to use MaxConcurrency, see Executing Commands Using Systems Manager + // Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html) + // in the AWS Systems Manager User Guide. MaxConcurrency *string `min:"1" type:"string"` // The maximum number of errors allowed before the system stops sending the // command to additional targets. You can specify a number of errors, such as // 10, or a percentage or errors, such as 10%. The default value is 0. For more - // information about how to use MaxErrors, see Executing a Command Using Systems - // Manager Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html). + // information about how to use MaxErrors, see Executing Commands Using Systems + // Manager Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html) + // in the AWS Systems Manager User Guide. MaxErrors *string `min:"1" type:"string"` // Configurations for sending notifications about command status changes. @@ -10962,7 +12841,7 @@ type Command struct { Parameters map[string][]*string `type:"map"` // The date and time the command was requested. - RequestedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + RequestedDateTime *time.Time `type:"timestamp"` // The IAM service role that Run Command uses to act on your behalf when sending // notifications about command status changes. @@ -10974,8 +12853,10 @@ type Command struct { // A detailed status of the command execution. StatusDetails includes more information // than Status because it includes states resulting from error and concurrency // control parameters. StatusDetails can show different results than Status. - // For more information about these statuses, see Run Command Status (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-about-status.html). - // StatusDetails can be one of the following values: + // For more information about these statuses, see Understanding Command Statuses + // (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html) + // in the AWS Systems Manager User Guide. StatusDetails can be one of the following + // values: // // * Pending: The command has not been sent to any instances. // @@ -11025,6 +12906,12 @@ func (s Command) GoString() string { return s.String() } +// SetCloudWatchOutputConfig sets the CloudWatchOutputConfig field's value. +func (s *Command) SetCloudWatchOutputConfig(v *CloudWatchOutputConfig) *Command { + s.CloudWatchOutputConfig = v + return s +} + // SetCommandId sets the CommandId field's value. func (s *Command) SetCommandId(v string) *Command { s.CommandId = &v @@ -11043,12 +12930,24 @@ func (s *Command) SetCompletedCount(v int64) *Command { return s } +// SetDeliveryTimedOutCount sets the DeliveryTimedOutCount field's value. +func (s *Command) SetDeliveryTimedOutCount(v int64) *Command { + s.DeliveryTimedOutCount = &v + return s +} + // SetDocumentName sets the DocumentName field's value. 
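// A sketch, assuming the existing ListCommands operation, of the two CommandFilter keys
// documented a little further below (DocumentName and ExecutionStage). The document name
// value reuses the AWS-RunPatchBaseline example from that documentation; credentials and
// region come from the default session.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.ListCommands(&ssm.ListCommandsInput{
		Filters: []*ssm.CommandFilter{
			{Key: aws.String("DocumentName"), Value: aws.String("AWS-RunPatchBaseline")},
			{Key: aws.String("ExecutionStage"), Value: aws.String("Complete")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range out.Commands {
		fmt.Println(aws.StringValue(c.CommandId), aws.StringValue(c.Status))
	}
}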
func (s *Command) SetDocumentName(v string) *Command { s.DocumentName = &v return s } +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *Command) SetDocumentVersion(v string) *Command { + s.DocumentVersion = &v + return s +} + // SetErrorCount sets the ErrorCount field's value. func (s *Command) SetErrorCount(v int64) *Command { s.ErrorCount = &v @@ -11154,7 +13053,44 @@ type CommandFilter struct { // Key is a required field Key *string `locationName:"key" type:"string" required:"true" enum:"CommandFilterKey"` - // The filter value. + // The filter value. Valid values for each filter key are as follows: + // + // * InvokedAfter: Specify a timestamp to limit your results. For example, + // specify 2018-07-07T00:00:00Z to see a list of command executions occurring + // July 7, 2018, and later. + // + // * InvokedBefore: Specify a timestamp to limit your results. For example, + // specify 2018-07-07T00:00:00Z to see a list of command executions from + // before July 7, 2018. + // + // * Status: Specify a valid command status to see a list of all command + // executions with that status. Status values you can specify include: + // + // Pending + // + // InProgress + // + // Success + // + // Cancelled + // + // Failed + // + // TimedOut + // + // Cancelling + // + // * DocumentName: Specify name of the SSM document for which you want to + // see command execution results. For example, specify AWS-RunPatchBaseline + // to see command executions that used this SSM document to perform security + // patching operations on instances. + // + // * ExecutionStage: Specify one of the following values: + // + // Executing: Returns a list of command executions that are currently still + // running. + // + // Complete: Returns a list of command executions that have already completed. // // Value is a required field Value *string `locationName:"value" min:"1" type:"string" required:"true"` @@ -11209,6 +13145,10 @@ func (s *CommandFilter) SetValue(v string) *CommandFilter { type CommandInvocation struct { _ struct{} `type:"structure"` + // CloudWatch Logs information where you want Systems Manager to send the command + // output. + CloudWatchOutputConfig *CloudWatchOutputConfig `type:"structure"` + // The command against which this invocation was requested. CommandId *string `min:"36" type:"string"` @@ -11221,6 +13161,9 @@ type CommandInvocation struct { // The document name that was requested for execution. DocumentName *string `type:"string"` + // The SSM document version. + DocumentVersion *string `type:"string"` + // The instance ID in which this invocation was requested. InstanceId *string `type:"string"` @@ -11234,7 +13177,7 @@ type CommandInvocation struct { NotificationConfig *NotificationConfig `type:"structure"` // The time and date the request was sent to this instance. - RequestedDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + RequestedDateTime *time.Time `type:"timestamp"` // The IAM service role that Run Command uses to act on your behalf when sending // notifications about command status changes on a per instance basis. @@ -11259,8 +13202,9 @@ type CommandInvocation struct { // targeted by the command). StatusDetails includes more information than Status // because it includes states resulting from error and concurrency control parameters. // StatusDetails can show different results than Status. For more information - // about these statuses, see Run Command Status (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-about-status.html). 
- // StatusDetails can be one of the following values: + // about these statuses, see Understanding Command Statuses (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html) + // in the AWS Systems Manager User Guide. StatusDetails can be one of the following + // values: // // * Pending: The command has not been sent to the instance. // @@ -11313,6 +13257,12 @@ func (s CommandInvocation) GoString() string { return s.String() } +// SetCloudWatchOutputConfig sets the CloudWatchOutputConfig field's value. +func (s *CommandInvocation) SetCloudWatchOutputConfig(v *CloudWatchOutputConfig) *CommandInvocation { + s.CloudWatchOutputConfig = v + return s +} + // SetCommandId sets the CommandId field's value. func (s *CommandInvocation) SetCommandId(v string) *CommandInvocation { s.CommandId = &v @@ -11337,6 +13287,12 @@ func (s *CommandInvocation) SetDocumentName(v string) *CommandInvocation { return s } +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *CommandInvocation) SetDocumentVersion(v string) *CommandInvocation { + s.DocumentVersion = &v + return s +} + // SetInstanceId sets the InstanceId field's value. func (s *CommandInvocation) SetInstanceId(v string) *CommandInvocation { s.InstanceId = &v @@ -11449,10 +13405,10 @@ type CommandPlugin struct { // The time the plugin stopped executing. Could stop prematurely if, for example, // a cancel command was sent. - ResponseFinishDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ResponseFinishDateTime *time.Time `type:"timestamp"` // The time the plugin started executing. - ResponseStartDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ResponseStartDateTime *time.Time `type:"timestamp"` // The URL for the complete text written by the plugin to stderr. If execution // is not yet complete, then this string is empty. @@ -11469,8 +13425,10 @@ type CommandPlugin struct { // A detailed status of the plugin execution. StatusDetails includes more information // than Status because it includes states resulting from error and concurrency // control parameters. StatusDetails can show different results than Status. - // For more information about these statuses, see Run Command Status (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-about-status.html). - // StatusDetails can be one of the following values: + // For more information about these statuses, see Understanding Command Statuses + // (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html) + // in the AWS Systems Manager User Guide. StatusDetails can be one of the following + // values: // // * Pending: The command has not been sent to the instance. // @@ -11606,7 +13564,7 @@ type ComplianceExecutionSummary struct { // format: yyyy-MM-dd'T'HH:mm:ss'Z'. // // ExecutionTime is a required field - ExecutionTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + ExecutionTime *time.Time `type:"timestamp" required:"true"` // The type of execution. For example, Command is a valid execution type. ExecutionType *string `type:"string"` @@ -11671,7 +13629,7 @@ type ComplianceItem struct { ExecutionSummary *ComplianceExecutionSummary `type:"structure"` // An ID for the compliance item. For example, if the compliance item is a Windows - // patch, the ID could be the number of the KB article. Here's an example: KB4010320. + // patch, the ID could be the number of the KB article; for example: KB4010320. Id *string `min:"1" type:"string"` // An ID for the resource. 
For a managed instance, this is the instance ID. @@ -11689,8 +13647,8 @@ type ComplianceItem struct { Status *string `type:"string" enum:"ComplianceStatus"` // A title for the compliance item. For example, if the compliance item is a - // Windows patch, the title could be the title of the KB article for the patch. - // Here's an example: Security Update for Active Directory Federation Services. + // Windows patch, the title could be the title of the KB article for the patch; + // for example: Security Update for Active Directory Federation Services. Title *string `type:"string"` } @@ -11781,8 +13739,8 @@ type ComplianceItemEntry struct { Status *string `type:"string" required:"true" enum:"ComplianceStatus"` // The title of the compliance item. For example, if the compliance item is - // a Windows patch, the title could be the title of the KB article for the patch. - // Here's an example: Security Update for Active Directory Federation Services. + // a Windows patch, the title could be the title of the KB article for the patch; + // for example: Security Update for Active Directory Federation Services. Title *string `type:"string"` } @@ -11986,15 +13944,19 @@ type CreateActivationInput struct { // The name of the registered, managed instance as it will appear in the Amazon // EC2 console or when you use the AWS command line tools to list EC2 resources. + // + // Do not enter personally identifiable information in this field. DefaultInstanceName *string `type:"string"` - // A userdefined description of the resource that you want to register with + // A user-defined description of the resource that you want to register with // Amazon EC2. + // + // Do not enter personally identifiable information in this field. Description *string `type:"string"` // The date by which this activation request should expire. The default value // is 24 hours. - ExpirationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ExpirationDate *time.Time `type:"timestamp"` // The Amazon Identity and Access Management (IAM) role that you want to assign // to the managed instance. @@ -12180,19 +14142,48 @@ func (s *CreateAssociationBatchOutput) SetSuccessful(v []*AssociationDescription return s } -// Describes the association of a Systems Manager document and an instance. +// Describes the association of a Systems Manager SSM document and an instance. type CreateAssociationBatchRequestEntry struct { _ struct{} `type:"structure"` // Specify a descriptive name for the association. AssociationName *string `type:"string"` + // The severity level to assign to the association. + ComplianceSeverity *string `type:"string" enum:"AssociationComplianceSeverity"` + // The document version. DocumentVersion *string `type:"string"` // The ID of the instance. InstanceId *string `type:"string"` + // The maximum number of targets allowed to run the association at the same + // time. You can specify a number, for example 10, or a percentage of the target + // set, for example 10%. The default value is 100%, which means all targets + // run the association at the same time. + // + // If a new instance starts and attempts to execute an association while Systems + // Manager is executing MaxConcurrency associations, the association is allowed + // to run. During the next association interval, the new instance will process + // its association within the limit specified for MaxConcurrency. 
+ MaxConcurrency *string `min:"1" type:"string"` + + // The number of errors that are allowed before the system stops sending requests + // to run the association on additional targets. You can specify either an absolute + // number of errors, for example 10, or a percentage of the target set, for + // example 10%. If you specify 3, for example, the system stops sending requests + // when the fourth error is received. If you specify 0, then the system stops + // sending requests after the first error is returned. If you run an association + // on 50 instances and set MaxError to 10%, then the system stops sending the + // request when the sixth error is received. + // + // Executions that are already running an association when MaxErrors is reached + // are allowed to complete, but some of these executions may fail as well. If + // you need to ensure that there won't be more than max-errors failed executions, + // set MaxConcurrency to 1 so that executions proceed one at a time. + MaxErrors *string `min:"1" type:"string"` + // The name of the configuration document. // // Name is a required field @@ -12224,6 +14215,12 @@ func (s CreateAssociationBatchRequestEntry) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *CreateAssociationBatchRequestEntry) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreateAssociationBatchRequestEntry"} + if s.MaxConcurrency != nil && len(*s.MaxConcurrency) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxConcurrency", 1)) + } + if s.MaxErrors != nil && len(*s.MaxErrors) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxErrors", 1)) + } if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } @@ -12258,6 +14255,12 @@ func (s *CreateAssociationBatchRequestEntry) SetAssociationName(v string) *Creat return s } +// SetComplianceSeverity sets the ComplianceSeverity field's value. +func (s *CreateAssociationBatchRequestEntry) SetComplianceSeverity(v string) *CreateAssociationBatchRequestEntry { + s.ComplianceSeverity = &v + return s +} + // SetDocumentVersion sets the DocumentVersion field's value. func (s *CreateAssociationBatchRequestEntry) SetDocumentVersion(v string) *CreateAssociationBatchRequestEntry { s.DocumentVersion = &v @@ -12270,6 +14273,18 @@ func (s *CreateAssociationBatchRequestEntry) SetInstanceId(v string) *CreateAsso return s } +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *CreateAssociationBatchRequestEntry) SetMaxConcurrency(v string) *CreateAssociationBatchRequestEntry { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *CreateAssociationBatchRequestEntry) SetMaxErrors(v string) *CreateAssociationBatchRequestEntry { + s.MaxErrors = &v + return s +} + // SetName sets the Name field's value. func (s *CreateAssociationBatchRequestEntry) SetName(v string) *CreateAssociationBatchRequestEntry { s.Name = &v @@ -12306,6 +14321,9 @@ type CreateAssociationInput struct { // Specify a descriptive name for the association. AssociationName *string `type:"string"` + // The severity level to assign to the association. + ComplianceSeverity *string `type:"string" enum:"AssociationComplianceSeverity"` + // The document version you want to associate with the target(s). Can be a specific // version or the default version. DocumentVersion *string `type:"string"` @@ -12313,6 +14331,32 @@ type CreateAssociationInput struct { // The instance ID. 
InstanceId *string `type:"string"` + // The maximum number of targets allowed to run the association at the same + // time. You can specify a number, for example 10, or a percentage of the target + // set, for example 10%. The default value is 100%, which means all targets + // run the association at the same time. + // + // If a new instance starts and attempts to execute an association while Systems + // Manager is executing MaxConcurrency associations, the association is allowed + // to run. During the next association interval, the new instance will process + // its association within the limit specified for MaxConcurrency. + MaxConcurrency *string `min:"1" type:"string"` + + // The number of errors that are allowed before the system stops sending requests + // to run the association on additional targets. You can specify either an absolute + // number of errors, for example 10, or a percentage of the target set, for + // example 10%. If you specify 3, for example, the system stops sending requests + // when the fourth error is received. If you specify 0, then the system stops + // sending requests after the first error is returned. If you run an association + // on 50 instances and set MaxError to 10%, then the system stops sending the + // request when the sixth error is received. + // + // Executions that are already running an association when MaxErrors is reached + // are allowed to complete, but some of these executions may fail as well. If + // you need to ensure that there won't be more than max-errors failed executions, + // set MaxConcurrency to 1 so that executions proceed one at a time. + MaxErrors *string `min:"1" type:"string"` + // The name of the Systems Manager document. // // Name is a required field @@ -12344,6 +14388,12 @@ func (s CreateAssociationInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *CreateAssociationInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "CreateAssociationInput"} + if s.MaxConcurrency != nil && len(*s.MaxConcurrency) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxConcurrency", 1)) + } + if s.MaxErrors != nil && len(*s.MaxErrors) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxErrors", 1)) + } if s.Name == nil { invalidParams.Add(request.NewErrParamRequired("Name")) } @@ -12378,6 +14428,12 @@ func (s *CreateAssociationInput) SetAssociationName(v string) *CreateAssociation return s } +// SetComplianceSeverity sets the ComplianceSeverity field's value. +func (s *CreateAssociationInput) SetComplianceSeverity(v string) *CreateAssociationInput { + s.ComplianceSeverity = &v + return s +} + // SetDocumentVersion sets the DocumentVersion field's value. func (s *CreateAssociationInput) SetDocumentVersion(v string) *CreateAssociationInput { s.DocumentVersion = &v @@ -12390,6 +14446,18 @@ func (s *CreateAssociationInput) SetInstanceId(v string) *CreateAssociationInput return s } +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *CreateAssociationInput) SetMaxConcurrency(v string) *CreateAssociationInput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *CreateAssociationInput) SetMaxErrors(v string) *CreateAssociationInput { + s.MaxErrors = &v + return s +} + // SetName sets the Name field's value. 
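// A sketch of the rate-control and compliance fields added to CreateAssociationInput
// above (MaxConcurrency, MaxErrors, ComplianceSeverity). The Targets field and the
// ssm.Target type come from elsewhere in this SDK, and the document and tag names are
// placeholders chosen for illustration.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.CreateAssociation(&ssm.CreateAssociationInput{
		Name: aws.String("AWS-UpdateSSMAgent"), // example Systems Manager document
		Targets: []*ssm.Target{
			{Key: aws.String("tag:Environment"), Values: []*string{aws.String("staging")}},
		},
		MaxConcurrency:     aws.String("10%"),  // run on at most 10% of targets at a time
		MaxErrors:          aws.String("0"),    // stop sending requests after the first error
		ComplianceSeverity: aws.String("HIGH"), // severity reported for compliance violations
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}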
func (s *CreateAssociationInput) SetName(v string) *CreateAssociationInput { s.Name = &v @@ -12597,6 +14665,11 @@ type CreateMaintenanceWindowInput struct { // Duration is a required field Duration *int64 `min:"1" type:"integer" required:"true"` + // The date and time, in ISO-8601 Extended format, for when you want the Maintenance + // Window to become inactive. EndDate allows you to set a date and time in the + // future when the Maintenance Window will no longer run. + EndDate *string `type:"string"` + // The name of the Maintenance Window. // // Name is a required field @@ -12606,6 +14679,17 @@ type CreateMaintenanceWindowInput struct { // // Schedule is a required field Schedule *string `min:"1" type:"string" required:"true"` + + // The time zone that the scheduled Maintenance Window executions are based + // on, in Internet Assigned Numbers Authority (IANA) format. For example: "America/Los_Angeles", + // "etc/UTC", or "Asia/Seoul". For more information, see the Time Zone Database + // (https://www.iana.org/time-zones) on the IANA website. + ScheduleTimezone *string `type:"string"` + + // The date and time, in ISO-8601 Extended format, for when you want the Maintenance + // Window to become active. StartDate allows you to delay activation of the + // Maintenance Window until the specified future date. + StartDate *string `type:"string"` } // String returns the string representation @@ -12688,6 +14772,12 @@ func (s *CreateMaintenanceWindowInput) SetDuration(v int64) *CreateMaintenanceWi return s } +// SetEndDate sets the EndDate field's value. +func (s *CreateMaintenanceWindowInput) SetEndDate(v string) *CreateMaintenanceWindowInput { + s.EndDate = &v + return s +} + // SetName sets the Name field's value. func (s *CreateMaintenanceWindowInput) SetName(v string) *CreateMaintenanceWindowInput { s.Name = &v @@ -12700,6 +14790,18 @@ func (s *CreateMaintenanceWindowInput) SetSchedule(v string) *CreateMaintenanceW return s } +// SetScheduleTimezone sets the ScheduleTimezone field's value. +func (s *CreateMaintenanceWindowInput) SetScheduleTimezone(v string) *CreateMaintenanceWindowInput { + s.ScheduleTimezone = &v + return s +} + +// SetStartDate sets the StartDate field's value. +func (s *CreateMaintenanceWindowInput) SetStartDate(v string) *CreateMaintenanceWindowInput { + s.StartDate = &v + return s +} + type CreateMaintenanceWindowOutput struct { _ struct{} `type:"structure"` @@ -12730,12 +14832,16 @@ type CreatePatchBaselineInput struct { ApprovalRules *PatchRuleGroup `type:"structure"` // A list of explicitly approved patches for the baseline. + // + // For information about accepted formats for lists of approved patches and + // rejected patches, see Package Name Formats for Approved and Rejected Patch + // Lists (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-approved-rejected-package-name-formats.html) + // in the AWS Systems Manager User Guide. ApprovedPatches []*string `type:"list"` // Defines the compliance level for approved patches. This means that if an // approved patch is reported as missing, this is the severity of the compliance - // violation. Valid compliance severity levels include the following: CRITICAL, - // HIGH, MEDIUM, LOW, INFORMATIONAL, UNSPECIFIED. The default value is UNSPECIFIED. + // violation. The default value is UNSPECIFIED. 
ApprovedPatchesComplianceLevel *string `type:"string" enum:"PatchComplianceLevel"` // Indicates whether the list of approved patches includes non-security updates @@ -12762,8 +14868,28 @@ type CreatePatchBaselineInput struct { OperatingSystem *string `type:"string" enum:"OperatingSystem"` // A list of explicitly rejected patches for the baseline. + // + // For information about accepted formats for lists of approved patches and + // rejected patches, see Package Name Formats for Approved and Rejected Patch + // Lists (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-approved-rejected-package-name-formats.html) + // in the AWS Systems Manager User Guide. RejectedPatches []*string `type:"list"` + // The action for Patch Manager to take on patches included in the RejectedPackages + // list. + // + // * ALLOW_AS_DEPENDENCY: A package in the Rejected patches list is installed + // only if it is a dependency of another package. It is considered compliant + // with the patch baseline, and its status is reported as InstalledOther. + // This is the default action if no option is specified. + // + // * BLOCK: Packages in the RejectedPatches list, and packages that include + // them as dependencies, are not installed under any circumstances. If a + // package was installed before it was added to the Rejected patches list, + // it is considered non-compliant with the patch baseline, and its status + // is reported as InstalledRejected. + RejectedPatchesAction *string `type:"string" enum:"PatchAction"` + // Information about the patches to use to update the instances, including target // operating systems and source repositories. Applies to Linux instances only. Sources []*PatchSource `type:"list"` @@ -12881,6 +15007,12 @@ func (s *CreatePatchBaselineInput) SetRejectedPatches(v []*string) *CreatePatchB return s } +// SetRejectedPatchesAction sets the RejectedPatchesAction field's value. +func (s *CreatePatchBaselineInput) SetRejectedPatchesAction(v string) *CreatePatchBaselineInput { + s.RejectedPatchesAction = &v + return s +} + // SetSources sets the Sources field's value. func (s *CreatePatchBaselineInput) SetSources(v []*PatchSource) *CreatePatchBaselineInput { s.Sources = v @@ -13101,20 +15233,103 @@ type DeleteDocumentInput struct { } // String returns the string representation -func (s DeleteDocumentInput) String() string { +func (s DeleteDocumentInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDocumentInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDocumentInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDocumentInput"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *DeleteDocumentInput) SetName(v string) *DeleteDocumentInput { + s.Name = &v + return s +} + +type DeleteDocumentOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDocumentOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDocumentOutput) GoString() string { + return s.String() +} + +type DeleteInventoryInput struct { + _ struct{} `type:"structure"` + + // User-provided idempotency token. 
+ ClientToken *string `min:"1" type:"string" idempotencyToken:"true"` + + // Use this option to view a summary of the deletion request without deleting + // any data or the data type. This option is useful when you only want to understand + // what will be deleted. Once you validate that the data to be deleted is what + // you intend to delete, you can run the same command without specifying the + // DryRun option. + DryRun *bool `type:"boolean"` + + // Use the SchemaDeleteOption to delete a custom inventory type (schema). If + // you don't choose this option, the system only deletes existing inventory + // data associated with the custom inventory type. Choose one of the following + // options: + // + // DisableSchema: If you choose this option, the system ignores all inventory + // data for the specified version, and any earlier versions. To enable this + // schema again, you must call the PutInventory action for a version greater + // than the disbled version. + // + // DeleteSchema: This option deletes the specified custom type from the Inventory + // service. You can recreate the schema later, if you want. + SchemaDeleteOption *string `type:"string" enum:"InventorySchemaDeleteOption"` + + // The name of the custom inventory type for which you want to delete either + // all previously collected data, or the inventory type itself. + // + // TypeName is a required field + TypeName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteInventoryInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDocumentInput) GoString() string { +func (s DeleteInventoryInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DeleteDocumentInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteDocumentInput"} - if s.Name == nil { - invalidParams.Add(request.NewErrParamRequired("Name")) +func (s *DeleteInventoryInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteInventoryInput"} + if s.ClientToken != nil && len(*s.ClientToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 1)) + } + if s.TypeName == nil { + invalidParams.Add(request.NewErrParamRequired("TypeName")) + } + if s.TypeName != nil && len(*s.TypeName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TypeName", 1)) } if invalidParams.Len() > 0 { @@ -13123,26 +15338,76 @@ func (s *DeleteDocumentInput) Validate() error { return nil } -// SetName sets the Name field's value. -func (s *DeleteDocumentInput) SetName(v string) *DeleteDocumentInput { - s.Name = &v +// SetClientToken sets the ClientToken field's value. +func (s *DeleteInventoryInput) SetClientToken(v string) *DeleteInventoryInput { + s.ClientToken = &v return s } -type DeleteDocumentOutput struct { +// SetDryRun sets the DryRun field's value. +func (s *DeleteInventoryInput) SetDryRun(v bool) *DeleteInventoryInput { + s.DryRun = &v + return s +} + +// SetSchemaDeleteOption sets the SchemaDeleteOption field's value. +func (s *DeleteInventoryInput) SetSchemaDeleteOption(v string) *DeleteInventoryInput { + s.SchemaDeleteOption = &v + return s +} + +// SetTypeName sets the TypeName field's value. 
+func (s *DeleteInventoryInput) SetTypeName(v string) *DeleteInventoryInput { + s.TypeName = &v + return s +} + +type DeleteInventoryOutput struct { _ struct{} `type:"structure"` + + // Every DeleteInventory action is assigned a unique ID. This option returns + // a unique ID. You can use this ID to query the status of a delete operation. + // This option is useful for ensuring that a delete operation has completed + // before you begin other actions. + DeletionId *string `type:"string"` + + // A summary of the delete operation. For more information about this summary, + // see Understanding the Delete Inventory Summary (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-delete.html#sysman-inventory-delete-summary) + // in the AWS Systems Manager User Guide. + DeletionSummary *InventoryDeletionSummary `type:"structure"` + + // The name of the inventory data type specified in the request. + TypeName *string `min:"1" type:"string"` } // String returns the string representation -func (s DeleteDocumentOutput) String() string { +func (s DeleteInventoryOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteDocumentOutput) GoString() string { +func (s DeleteInventoryOutput) GoString() string { return s.String() } +// SetDeletionId sets the DeletionId field's value. +func (s *DeleteInventoryOutput) SetDeletionId(v string) *DeleteInventoryOutput { + s.DeletionId = &v + return s +} + +// SetDeletionSummary sets the DeletionSummary field's value. +func (s *DeleteInventoryOutput) SetDeletionSummary(v *InventoryDeletionSummary) *DeleteInventoryOutput { + s.DeletionSummary = v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *DeleteInventoryOutput) SetTypeName(v string) *DeleteInventoryOutput { + s.TypeName = &v + return s +} + type DeleteMaintenanceWindowInput struct { _ struct{} `type:"structure"` @@ -13661,82 +15926,322 @@ func (s *DeregisterTargetFromMaintenanceWindowInput) SetWindowId(v string) *Dere return s } -// SetWindowTargetId sets the WindowTargetId field's value. -func (s *DeregisterTargetFromMaintenanceWindowInput) SetWindowTargetId(v string) *DeregisterTargetFromMaintenanceWindowInput { - s.WindowTargetId = &v +// SetWindowTargetId sets the WindowTargetId field's value. +func (s *DeregisterTargetFromMaintenanceWindowInput) SetWindowTargetId(v string) *DeregisterTargetFromMaintenanceWindowInput { + s.WindowTargetId = &v + return s +} + +type DeregisterTargetFromMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window the target was removed from. + WindowId *string `min:"20" type:"string"` + + // The ID of the removed target definition. + WindowTargetId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s DeregisterTargetFromMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterTargetFromMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetWindowId sets the WindowId field's value. +func (s *DeregisterTargetFromMaintenanceWindowOutput) SetWindowId(v string) *DeregisterTargetFromMaintenanceWindowOutput { + s.WindowId = &v + return s +} + +// SetWindowTargetId sets the WindowTargetId field's value. 
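// A sketch, assuming the DeleteInventory operation that accompanies the
// DeleteInventoryInput and DeleteInventoryOutput types above: a dry run against a
// custom inventory type so the deletion summary can be reviewed before any data is
// removed. The Custom:RackInformation type name is a placeholder.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	out, err := svc.DeleteInventory(&ssm.DeleteInventoryInput{
		TypeName: aws.String("Custom:RackInformation"), // placeholder custom inventory type
		DryRun:   aws.Bool(true),                       // report what would be deleted without deleting it
	})
	if err != nil {
		log.Fatal(err)
	}
	// DeletionId identifies the request and DeletionSummary describes the affected data.
	fmt.Println(aws.StringValue(out.DeletionId))
	fmt.Println(out.DeletionSummary)
}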
+func (s *DeregisterTargetFromMaintenanceWindowOutput) SetWindowTargetId(v string) *DeregisterTargetFromMaintenanceWindowOutput { + s.WindowTargetId = &v + return s +} + +type DeregisterTaskFromMaintenanceWindowInput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window the task should be removed from. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` + + // The ID of the task to remove from the Maintenance Window. + // + // WindowTaskId is a required field + WindowTaskId *string `min:"36" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeregisterTaskFromMaintenanceWindowInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterTaskFromMaintenanceWindowInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeregisterTaskFromMaintenanceWindowInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeregisterTaskFromMaintenanceWindowInput"} + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.WindowTaskId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowTaskId")) + } + if s.WindowTaskId != nil && len(*s.WindowTaskId) < 36 { + invalidParams.Add(request.NewErrParamMinLen("WindowTaskId", 36)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWindowId sets the WindowId field's value. +func (s *DeregisterTaskFromMaintenanceWindowInput) SetWindowId(v string) *DeregisterTaskFromMaintenanceWindowInput { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *DeregisterTaskFromMaintenanceWindowInput) SetWindowTaskId(v string) *DeregisterTaskFromMaintenanceWindowInput { + s.WindowTaskId = &v + return s +} + +type DeregisterTaskFromMaintenanceWindowOutput struct { + _ struct{} `type:"structure"` + + // The ID of the Maintenance Window the task was removed from. + WindowId *string `min:"20" type:"string"` + + // The ID of the task removed from the Maintenance Window. + WindowTaskId *string `min:"36" type:"string"` +} + +// String returns the string representation +func (s DeregisterTaskFromMaintenanceWindowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeregisterTaskFromMaintenanceWindowOutput) GoString() string { + return s.String() +} + +// SetWindowId sets the WindowId field's value. +func (s *DeregisterTaskFromMaintenanceWindowOutput) SetWindowId(v string) *DeregisterTaskFromMaintenanceWindowOutput { + s.WindowId = &v + return s +} + +// SetWindowTaskId sets the WindowTaskId field's value. +func (s *DeregisterTaskFromMaintenanceWindowOutput) SetWindowTaskId(v string) *DeregisterTaskFromMaintenanceWindowOutput { + s.WindowTaskId = &v + return s +} + +// Filter for the DescribeActivation API. +type DescribeActivationsFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + FilterKey *string `type:"string" enum:"DescribeActivationsFilterKeys"` + + // The filter values. 
+ FilterValues []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeActivationsFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeActivationsFilter) GoString() string { + return s.String() +} + +// SetFilterKey sets the FilterKey field's value. +func (s *DescribeActivationsFilter) SetFilterKey(v string) *DescribeActivationsFilter { + s.FilterKey = &v + return s +} + +// SetFilterValues sets the FilterValues field's value. +func (s *DescribeActivationsFilter) SetFilterValues(v []*string) *DescribeActivationsFilter { + s.FilterValues = v + return s +} + +type DescribeActivationsInput struct { + _ struct{} `type:"structure"` + + // A filter to view information about your activations. + Filters []*DescribeActivationsFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeActivationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeActivationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeActivationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeActivationsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeActivationsInput) SetFilters(v []*DescribeActivationsFilter) *DescribeActivationsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeActivationsInput) SetMaxResults(v int64) *DescribeActivationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeActivationsInput) SetNextToken(v string) *DescribeActivationsInput { + s.NextToken = &v return s } -type DeregisterTargetFromMaintenanceWindowOutput struct { +type DescribeActivationsOutput struct { _ struct{} `type:"structure"` - // The ID of the Maintenance Window the target was removed from. - WindowId *string `min:"20" type:"string"` + // A list of activations for your AWS account. + ActivationList []*Activation `type:"list"` - // The ID of the removed target definition. - WindowTargetId *string `min:"36" type:"string"` + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` } // String returns the string representation -func (s DeregisterTargetFromMaintenanceWindowOutput) String() string { +func (s DescribeActivationsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeregisterTargetFromMaintenanceWindowOutput) GoString() string { +func (s DescribeActivationsOutput) GoString() string { return s.String() } -// SetWindowId sets the WindowId field's value. 
-func (s *DeregisterTargetFromMaintenanceWindowOutput) SetWindowId(v string) *DeregisterTargetFromMaintenanceWindowOutput { - s.WindowId = &v +// SetActivationList sets the ActivationList field's value. +func (s *DescribeActivationsOutput) SetActivationList(v []*Activation) *DescribeActivationsOutput { + s.ActivationList = v return s } -// SetWindowTargetId sets the WindowTargetId field's value. -func (s *DeregisterTargetFromMaintenanceWindowOutput) SetWindowTargetId(v string) *DeregisterTargetFromMaintenanceWindowOutput { - s.WindowTargetId = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeActivationsOutput) SetNextToken(v string) *DescribeActivationsOutput { + s.NextToken = &v return s } -type DeregisterTaskFromMaintenanceWindowInput struct { +type DescribeAssociationExecutionTargetsInput struct { _ struct{} `type:"structure"` - // The ID of the Maintenance Window the task should be removed from. + // The association ID that includes the execution for which you want to view + // details. // - // WindowId is a required field - WindowId *string `min:"20" type:"string" required:"true"` + // AssociationId is a required field + AssociationId *string `type:"string" required:"true"` - // The ID of the task to remove from the Maintenance Window. + // The execution ID for which you want to view details. // - // WindowTaskId is a required field - WindowTaskId *string `min:"36" type:"string" required:"true"` + // ExecutionId is a required field + ExecutionId *string `type:"string" required:"true"` + + // Filters for the request. You can specify the following filters and values. + // + // Status (EQUAL) + // + // ResourceId (EQUAL) + // + // ResourceType (EQUAL) + Filters []*AssociationExecutionTargetsFilter `min:"1" type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` } // String returns the string representation -func (s DeregisterTaskFromMaintenanceWindowInput) String() string { +func (s DescribeAssociationExecutionTargetsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeregisterTaskFromMaintenanceWindowInput) GoString() string { +func (s DescribeAssociationExecutionTargetsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
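// A sketch of paging through DescribeActivations with the MaxResults and NextToken
// fields on the DescribeActivationsInput type above. Filters are omitted to keep the
// example to fields visible here; the loop follows NextToken until no results remain.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	svc := ssm.New(session.Must(session.NewSession()))

	input := &ssm.DescribeActivationsInput{MaxResults: aws.Int64(10)}
	for {
		out, err := svc.DescribeActivations(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, a := range out.ActivationList {
			fmt.Println(aws.StringValue(a.ActivationId))
		}
		if aws.StringValue(out.NextToken) == "" {
			break
		}
		input.NextToken = out.NextToken
	}
}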
-func (s *DeregisterTaskFromMaintenanceWindowInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeregisterTaskFromMaintenanceWindowInput"} - if s.WindowId == nil { - invalidParams.Add(request.NewErrParamRequired("WindowId")) +func (s *DescribeAssociationExecutionTargetsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAssociationExecutionTargetsInput"} + if s.AssociationId == nil { + invalidParams.Add(request.NewErrParamRequired("AssociationId")) } - if s.WindowId != nil && len(*s.WindowId) < 20 { - invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + if s.ExecutionId == nil { + invalidParams.Add(request.NewErrParamRequired("ExecutionId")) } - if s.WindowTaskId == nil { - invalidParams.Add(request.NewErrParamRequired("WindowTaskId")) + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) } - if s.WindowTaskId != nil && len(*s.WindowTaskId) < 36 { - invalidParams.Add(request.NewErrParamMinLen("WindowTaskId", 36)) + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } } if invalidParams.Len() > 0 { @@ -13745,88 +16250,85 @@ func (s *DeregisterTaskFromMaintenanceWindowInput) Validate() error { return nil } -// SetWindowId sets the WindowId field's value. -func (s *DeregisterTaskFromMaintenanceWindowInput) SetWindowId(v string) *DeregisterTaskFromMaintenanceWindowInput { - s.WindowId = &v +// SetAssociationId sets the AssociationId field's value. +func (s *DescribeAssociationExecutionTargetsInput) SetAssociationId(v string) *DescribeAssociationExecutionTargetsInput { + s.AssociationId = &v return s } -// SetWindowTaskId sets the WindowTaskId field's value. -func (s *DeregisterTaskFromMaintenanceWindowInput) SetWindowTaskId(v string) *DeregisterTaskFromMaintenanceWindowInput { - s.WindowTaskId = &v +// SetExecutionId sets the ExecutionId field's value. +func (s *DescribeAssociationExecutionTargetsInput) SetExecutionId(v string) *DescribeAssociationExecutionTargetsInput { + s.ExecutionId = &v return s } -type DeregisterTaskFromMaintenanceWindowOutput struct { - _ struct{} `type:"structure"` - - // The ID of the Maintenance Window the task was removed from. - WindowId *string `min:"20" type:"string"` - - // The ID of the task removed from the Maintenance Window. - WindowTaskId *string `min:"36" type:"string"` -} - -// String returns the string representation -func (s DeregisterTaskFromMaintenanceWindowOutput) String() string { - return awsutil.Prettify(s) -} - -// GoString returns the string representation -func (s DeregisterTaskFromMaintenanceWindowOutput) GoString() string { - return s.String() +// SetFilters sets the Filters field's value. +func (s *DescribeAssociationExecutionTargetsInput) SetFilters(v []*AssociationExecutionTargetsFilter) *DescribeAssociationExecutionTargetsInput { + s.Filters = v + return s } -// SetWindowId sets the WindowId field's value. -func (s *DeregisterTaskFromMaintenanceWindowOutput) SetWindowId(v string) *DeregisterTaskFromMaintenanceWindowOutput { - s.WindowId = &v +// SetMaxResults sets the MaxResults field's value. 
+func (s *DescribeAssociationExecutionTargetsInput) SetMaxResults(v int64) *DescribeAssociationExecutionTargetsInput { + s.MaxResults = &v return s } -// SetWindowTaskId sets the WindowTaskId field's value. -func (s *DeregisterTaskFromMaintenanceWindowOutput) SetWindowTaskId(v string) *DeregisterTaskFromMaintenanceWindowOutput { - s.WindowTaskId = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeAssociationExecutionTargetsInput) SetNextToken(v string) *DescribeAssociationExecutionTargetsInput { + s.NextToken = &v return s } -// Filter for the DescribeActivation API. -type DescribeActivationsFilter struct { +type DescribeAssociationExecutionTargetsOutput struct { _ struct{} `type:"structure"` - // The name of the filter. - FilterKey *string `type:"string" enum:"DescribeActivationsFilterKeys"` + // Information about the execution. + AssociationExecutionTargets []*AssociationExecutionTarget `type:"list"` - // The filter values. - FilterValues []*string `type:"list"` + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` } // String returns the string representation -func (s DescribeActivationsFilter) String() string { +func (s DescribeAssociationExecutionTargetsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeActivationsFilter) GoString() string { +func (s DescribeAssociationExecutionTargetsOutput) GoString() string { return s.String() } -// SetFilterKey sets the FilterKey field's value. -func (s *DescribeActivationsFilter) SetFilterKey(v string) *DescribeActivationsFilter { - s.FilterKey = &v +// SetAssociationExecutionTargets sets the AssociationExecutionTargets field's value. +func (s *DescribeAssociationExecutionTargetsOutput) SetAssociationExecutionTargets(v []*AssociationExecutionTarget) *DescribeAssociationExecutionTargetsOutput { + s.AssociationExecutionTargets = v return s } -// SetFilterValues sets the FilterValues field's value. -func (s *DescribeActivationsFilter) SetFilterValues(v []*string) *DescribeActivationsFilter { - s.FilterValues = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeAssociationExecutionTargetsOutput) SetNextToken(v string) *DescribeAssociationExecutionTargetsOutput { + s.NextToken = &v return s } -type DescribeActivationsInput struct { +type DescribeAssociationExecutionsInput struct { _ struct{} `type:"structure"` - // A filter to view information about your activations. - Filters []*DescribeActivationsFilter `type:"list"` + // The association ID for which you want to view execution history details. + // + // AssociationId is a required field + AssociationId *string `type:"string" required:"true"` + + // Filters for the request. You can specify the following filters and values. + // + // ExecutionId (EQUAL) + // + // Status (EQUAL) + // + // CreatedTime (EQUAL, GREATER_THAN, LESS_THAN) + Filters []*AssociationExecutionFilter `min:"1" type:"list"` // The maximum number of items to return for this call. 
The call also returns // a token that you can specify in a subsequent call to get the next set of @@ -13838,21 +16340,37 @@ type DescribeActivationsInput struct { } // String returns the string representation -func (s DescribeActivationsInput) String() string { +func (s DescribeAssociationExecutionsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeActivationsInput) GoString() string { +func (s DescribeAssociationExecutionsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeActivationsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeActivationsInput"} +func (s *DescribeAssociationExecutionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAssociationExecutionsInput"} + if s.AssociationId == nil { + invalidParams.Add(request.NewErrParamRequired("AssociationId")) + } + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } if s.MaxResults != nil && *s.MaxResults < 1 { invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -13860,29 +16378,35 @@ func (s *DescribeActivationsInput) Validate() error { return nil } +// SetAssociationId sets the AssociationId field's value. +func (s *DescribeAssociationExecutionsInput) SetAssociationId(v string) *DescribeAssociationExecutionsInput { + s.AssociationId = &v + return s +} + // SetFilters sets the Filters field's value. -func (s *DescribeActivationsInput) SetFilters(v []*DescribeActivationsFilter) *DescribeActivationsInput { +func (s *DescribeAssociationExecutionsInput) SetFilters(v []*AssociationExecutionFilter) *DescribeAssociationExecutionsInput { s.Filters = v return s } // SetMaxResults sets the MaxResults field's value. -func (s *DescribeActivationsInput) SetMaxResults(v int64) *DescribeActivationsInput { +func (s *DescribeAssociationExecutionsInput) SetMaxResults(v int64) *DescribeAssociationExecutionsInput { s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. -func (s *DescribeActivationsInput) SetNextToken(v string) *DescribeActivationsInput { +func (s *DescribeAssociationExecutionsInput) SetNextToken(v string) *DescribeAssociationExecutionsInput { s.NextToken = &v return s } -type DescribeActivationsOutput struct { +type DescribeAssociationExecutionsOutput struct { _ struct{} `type:"structure"` - // A list of activations for your AWS account. - ActivationList []*Activation `type:"list"` + // A list of the executions for the specified association ID. + AssociationExecutions []*AssociationExecution `type:"list"` // The token for the next set of items to return. Use this token to get the // next set of results. 
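The `MaxResults`/`NextToken` fields documented above follow the same pagination contract used throughout this API: pass the `NextToken` from one response into the next request until it comes back empty. A minimal sketch of that loop for `DescribeAssociationExecutions`, assuming an `*ssm.SSM` client built as in the previous sketch; the package and helper names here are hypothetical:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ssm"
)

// listAssociationExecutions pages through every execution of one association.
func listAssociationExecutions(svc *ssm.SSM, associationID string) error {
	input := new(ssm.DescribeAssociationExecutionsInput).
		SetAssociationId(associationID).
		SetMaxResults(50)

	for {
		out, err := svc.DescribeAssociationExecutions(input)
		if err != nil {
			return err
		}
		for _, exec := range out.AssociationExecutions {
			fmt.Println(exec)
		}
		// An empty NextToken means there are no further pages.
		if aws.StringValue(out.NextToken) == "" {
			return nil
		}
		input.SetNextToken(aws.StringValue(out.NextToken))
	}
}
```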
@@ -13890,23 +16414,23 @@ type DescribeActivationsOutput struct { } // String returns the string representation -func (s DescribeActivationsOutput) String() string { +func (s DescribeAssociationExecutionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeActivationsOutput) GoString() string { +func (s DescribeAssociationExecutionsOutput) GoString() string { return s.String() } -// SetActivationList sets the ActivationList field's value. -func (s *DescribeActivationsOutput) SetActivationList(v []*Activation) *DescribeActivationsOutput { - s.ActivationList = v +// SetAssociationExecutions sets the AssociationExecutions field's value. +func (s *DescribeAssociationExecutionsOutput) SetAssociationExecutions(v []*AssociationExecution) *DescribeAssociationExecutionsOutput { + s.AssociationExecutions = v return s } // SetNextToken sets the NextToken field's value. -func (s *DescribeActivationsOutput) SetNextToken(v string) *DescribeActivationsOutput { +func (s *DescribeAssociationExecutionsOutput) SetNextToken(v string) *DescribeAssociationExecutionsOutput { s.NextToken = &v return s } @@ -14757,9 +17281,17 @@ type DescribeInstanceInformationInput struct { _ struct{} `type:"structure"` // One or more filters. Use a filter to return a more specific list of instances. + // You can filter on Amazon EC2 tag. Specify tags by using a key-value mapping. Filters []*InstanceInformationStringFilter `type:"list"` - // One or more filters. Use a filter to return a more specific list of instances. + // This is a legacy method. We recommend that you don't use this method. Instead, + // use the InstanceInformationFilter action. The InstanceInformationFilter action + // enables you to return instance information by using tags that are specified + // as a key-value mapping. + // + // If you do use this method, then you can't use the InstanceInformationFilter + // action. Using this method and the InstanceInformationFilter action causes + // an exception error. InstanceInformationFilterList []*InstanceInformationFilter `type:"list"` // The maximum number of items to return for this call. The call also returns @@ -15187,8 +17719,7 @@ type DescribeInstancePatchesOutput struct { // // Severity (string) // - // State (string: "INSTALLED", "INSTALLED OTHER", "MISSING", "NOT APPLICABLE", - // "FAILED") + // State (string, such as "INSTALLED" or "FAILED") // // InstalledTime (DateTime) // @@ -15197,24 +17728,114 @@ type DescribeInstancePatchesOutput struct { } // String returns the string representation -func (s DescribeInstancePatchesOutput) String() string { +func (s DescribeInstancePatchesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancePatchesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancePatchesOutput) SetNextToken(v string) *DescribeInstancePatchesOutput { + s.NextToken = &v + return s +} + +// SetPatches sets the Patches field's value. +func (s *DescribeInstancePatchesOutput) SetPatches(v []*PatchComplianceData) *DescribeInstancePatchesOutput { + s.Patches = v + return s +} + +type DescribeInventoryDeletionsInput struct { + _ struct{} `type:"structure"` + + // Specify the delete inventory ID for which you want information. This ID was + // returned by the DeleteInventory action. + DeletionId *string `type:"string"` + + // The maximum number of items to return for this call. 
The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // A token to start the list. Use this token to get the next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInventoryDeletionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInventoryDeletionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeInventoryDeletionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeInventoryDeletionsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeletionId sets the DeletionId field's value. +func (s *DescribeInventoryDeletionsInput) SetDeletionId(v string) *DescribeInventoryDeletionsInput { + s.DeletionId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInventoryDeletionsInput) SetMaxResults(v int64) *DescribeInventoryDeletionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInventoryDeletionsInput) SetNextToken(v string) *DescribeInventoryDeletionsInput { + s.NextToken = &v + return s +} + +type DescribeInventoryDeletionsOutput struct { + _ struct{} `type:"structure"` + + // A list of status items for deleted inventory. + InventoryDeletions []*InventoryDeletionStatusItem `type:"list"` + + // The token for the next set of items to return. Use this token to get the + // next set of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInventoryDeletionsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeInstancePatchesOutput) GoString() string { +func (s DescribeInventoryDeletionsOutput) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *DescribeInstancePatchesOutput) SetNextToken(v string) *DescribeInstancePatchesOutput { - s.NextToken = &v +// SetInventoryDeletions sets the InventoryDeletions field's value. +func (s *DescribeInventoryDeletionsOutput) SetInventoryDeletions(v []*InventoryDeletionStatusItem) *DescribeInventoryDeletionsOutput { + s.InventoryDeletions = v return s } -// SetPatches sets the Patches field's value. -func (s *DescribeInstancePatchesOutput) SetPatches(v []*PatchComplianceData) *DescribeInstancePatchesOutput { - s.Patches = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeInventoryDeletionsOutput) SetNextToken(v string) *DescribeInventoryDeletionsOutput { + s.NextToken = &v return s } @@ -15599,6 +18220,150 @@ func (s *DescribeMaintenanceWindowExecutionsOutput) SetWindowExecutions(v []*Mai return s } +type DescribeMaintenanceWindowScheduleInput struct { + _ struct{} `type:"structure"` + + // Filters used to limit the range of results. For example, you can limit Maintenance + // Window executions to only those scheduled before or after a certain date + // and time. + Filters []*PatchOrchestratorFilter `type:"list"` + + // The maximum number of items to return for this call. 
The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The type of resource you want to retrieve information about. For example, + // "INSTANCE". + ResourceType *string `type:"string" enum:"MaintenanceWindowResourceType"` + + // The instance ID or key/value pair to retrieve information about. + Targets []*Target `type:"list"` + + // The ID of the Maintenance Window to retrieve information about. + WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowScheduleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowScheduleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeMaintenanceWindowScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowScheduleInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Targets != nil { + for i, v := range s.Targets { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeMaintenanceWindowScheduleInput) SetFilters(v []*PatchOrchestratorFilter) *DescribeMaintenanceWindowScheduleInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeMaintenanceWindowScheduleInput) SetMaxResults(v int64) *DescribeMaintenanceWindowScheduleInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowScheduleInput) SetNextToken(v string) *DescribeMaintenanceWindowScheduleInput { + s.NextToken = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *DescribeMaintenanceWindowScheduleInput) SetResourceType(v string) *DescribeMaintenanceWindowScheduleInput { + s.ResourceType = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *DescribeMaintenanceWindowScheduleInput) SetTargets(v []*Target) *DescribeMaintenanceWindowScheduleInput { + s.Targets = v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *DescribeMaintenanceWindowScheduleInput) SetWindowId(v string) *DescribeMaintenanceWindowScheduleInput { + s.WindowId = &v + return s +} + +type DescribeMaintenanceWindowScheduleOutput struct { + _ struct{} `type:"structure"` + + // The token for the next set of items to return. (You use this token in the + // next call.) + NextToken *string `type:"string"` + + // Information about Maintenance Window executions scheduled for the specified + // time range. 
+ ScheduledWindowExecutions []*ScheduledWindowExecution `type:"list"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowScheduleOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowScheduleOutput) SetNextToken(v string) *DescribeMaintenanceWindowScheduleOutput { + s.NextToken = &v + return s +} + +// SetScheduledWindowExecutions sets the ScheduledWindowExecutions field's value. +func (s *DescribeMaintenanceWindowScheduleOutput) SetScheduledWindowExecutions(v []*ScheduledWindowExecution) *DescribeMaintenanceWindowScheduleOutput { + s.ScheduledWindowExecutions = v + return s +} + type DescribeMaintenanceWindowTargetsInput struct { _ struct{} `type:"structure"` @@ -15691,83 +18456,203 @@ type DescribeMaintenanceWindowTargetsOutput struct { // items to return, the string is empty. NextToken *string `type:"string"` - // Information about the targets in the Maintenance Window. - Targets []*MaintenanceWindowTarget `type:"list"` + // Information about the targets in the Maintenance Window. + Targets []*MaintenanceWindowTarget `type:"list"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowTargetsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowTargetsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowTargetsOutput) SetNextToken(v string) *DescribeMaintenanceWindowTargetsOutput { + s.NextToken = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *DescribeMaintenanceWindowTargetsOutput) SetTargets(v []*MaintenanceWindowTarget) *DescribeMaintenanceWindowTargetsOutput { + s.Targets = v + return s +} + +type DescribeMaintenanceWindowTasksInput struct { + _ struct{} `type:"structure"` + + // Optional filters used to narrow down the scope of the returned tasks. The + // supported filter keys are WindowTaskId, TaskArn, Priority, and TaskType. + Filters []*MaintenanceWindowFilter `type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"10" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The ID of the Maintenance Window whose tasks should be retrieved. + // + // WindowId is a required field + WindowId *string `min:"20" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeMaintenanceWindowTasksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceWindowTasksInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
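For the new `DescribeMaintenanceWindowSchedule` shapes above, a call typically scopes the query by window ID or by target. A rough sketch with an assumed client and a hypothetical window ID; `"INSTANCE"` is the resource-type value the field documentation itself gives as an example:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/ssm"
)

// upcomingWindowExecutions prints the scheduled executions for one Maintenance Window.
func upcomingWindowExecutions(svc *ssm.SSM, windowID string) error {
	input := new(ssm.DescribeMaintenanceWindowScheduleInput).
		SetWindowId(windowID). // e.g. "mw-0123456789abcdef0" (hypothetical)
		SetResourceType("INSTANCE").
		SetMaxResults(10)

	out, err := svc.DescribeMaintenanceWindowSchedule(input)
	if err != nil {
		return err
	}
	for _, exec := range out.ScheduledWindowExecutions {
		fmt.Println(exec)
	}
	return nil
}
```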
+func (s *DescribeMaintenanceWindowTasksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowTasksInput"} + if s.MaxResults != nil && *s.MaxResults < 10 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) + } + if s.WindowId == nil { + invalidParams.Add(request.NewErrParamRequired("WindowId")) + } + if s.WindowId != nil && len(*s.WindowId) < 20 { + invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeMaintenanceWindowTasksInput) SetFilters(v []*MaintenanceWindowFilter) *DescribeMaintenanceWindowTasksInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeMaintenanceWindowTasksInput) SetMaxResults(v int64) *DescribeMaintenanceWindowTasksInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMaintenanceWindowTasksInput) SetNextToken(v string) *DescribeMaintenanceWindowTasksInput { + s.NextToken = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *DescribeMaintenanceWindowTasksInput) SetWindowId(v string) *DescribeMaintenanceWindowTasksInput { + s.WindowId = &v + return s +} + +type DescribeMaintenanceWindowTasksOutput struct { + _ struct{} `type:"structure"` + + // The token to use when requesting the next set of items. If there are no additional + // items to return, the string is empty. + NextToken *string `type:"string"` + + // Information about the tasks in the Maintenance Window. + Tasks []*MaintenanceWindowTask `type:"list"` } // String returns the string representation -func (s DescribeMaintenanceWindowTargetsOutput) String() string { +func (s DescribeMaintenanceWindowTasksOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeMaintenanceWindowTargetsOutput) GoString() string { +func (s DescribeMaintenanceWindowTasksOutput) GoString() string { return s.String() } // SetNextToken sets the NextToken field's value. -func (s *DescribeMaintenanceWindowTargetsOutput) SetNextToken(v string) *DescribeMaintenanceWindowTargetsOutput { +func (s *DescribeMaintenanceWindowTasksOutput) SetNextToken(v string) *DescribeMaintenanceWindowTasksOutput { s.NextToken = &v return s } -// SetTargets sets the Targets field's value. -func (s *DescribeMaintenanceWindowTargetsOutput) SetTargets(v []*MaintenanceWindowTarget) *DescribeMaintenanceWindowTargetsOutput { - s.Targets = v +// SetTasks sets the Tasks field's value. +func (s *DescribeMaintenanceWindowTasksOutput) SetTasks(v []*MaintenanceWindowTask) *DescribeMaintenanceWindowTasksOutput { + s.Tasks = v return s } -type DescribeMaintenanceWindowTasksInput struct { +type DescribeMaintenanceWindowsForTargetInput struct { _ struct{} `type:"structure"` - // Optional filters used to narrow down the scope of the returned tasks. The - // supported filter keys are WindowTaskId, TaskArn, Priority, and TaskType. - Filters []*MaintenanceWindowFilter `type:"list"` - // The maximum number of items to return for this call. 
The call also returns // a token that you can specify in a subsequent call to get the next set of // results. - MaxResults *int64 `min:"10" type:"integer"` + MaxResults *int64 `min:"1" type:"integer"` // The token for the next set of items to return. (You received this token from // a previous call.) NextToken *string `type:"string"` - // The ID of the Maintenance Window whose tasks should be retrieved. + // The type of resource you want to retrieve information about. For example, + // "INSTANCE". // - // WindowId is a required field - WindowId *string `min:"20" type:"string" required:"true"` + // ResourceType is a required field + ResourceType *string `type:"string" required:"true" enum:"MaintenanceWindowResourceType"` + + // The instance ID or key/value pair to retrieve information about. + // + // Targets is a required field + Targets []*Target `type:"list" required:"true"` } // String returns the string representation -func (s DescribeMaintenanceWindowTasksInput) String() string { +func (s DescribeMaintenanceWindowsForTargetInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeMaintenanceWindowTasksInput) GoString() string { +func (s DescribeMaintenanceWindowsForTargetInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeMaintenanceWindowTasksInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowTasksInput"} - if s.MaxResults != nil && *s.MaxResults < 10 { - invalidParams.Add(request.NewErrParamMinValue("MaxResults", 10)) +func (s *DescribeMaintenanceWindowsForTargetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceWindowsForTargetInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.WindowId == nil { - invalidParams.Add(request.NewErrParamRequired("WindowId")) + if s.ResourceType == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceType")) } - if s.WindowId != nil && len(*s.WindowId) < 20 { - invalidParams.Add(request.NewErrParamMinLen("WindowId", 20)) + if s.Targets == nil { + invalidParams.Add(request.NewErrParamRequired("Targets")) } - if s.Filters != nil { - for i, v := range s.Filters { + if s.Targets != nil { + for i, v := range s.Targets { if v == nil { continue } if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Targets", i), err.(request.ErrInvalidParams)) } } } @@ -15778,60 +18663,61 @@ func (s *DescribeMaintenanceWindowTasksInput) Validate() error { return nil } -// SetFilters sets the Filters field's value. -func (s *DescribeMaintenanceWindowTasksInput) SetFilters(v []*MaintenanceWindowFilter) *DescribeMaintenanceWindowTasksInput { - s.Filters = v - return s -} - // SetMaxResults sets the MaxResults field's value. -func (s *DescribeMaintenanceWindowTasksInput) SetMaxResults(v int64) *DescribeMaintenanceWindowTasksInput { +func (s *DescribeMaintenanceWindowsForTargetInput) SetMaxResults(v int64) *DescribeMaintenanceWindowsForTargetInput { s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. 
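The `DescribeMaintenanceWindowsForTarget` input defined above requires both a resource type and a target. A sketch of how that might look for a single instance, assuming the standard `ssm.Target` key/values shape with an `InstanceIds` key and a placeholder instance ID:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ssm"
)

// windowsForInstance lists the Maintenance Windows an instance is registered with.
func windowsForInstance(svc *ssm.SSM, instanceID string) error {
	input := new(ssm.DescribeMaintenanceWindowsForTargetInput).
		SetResourceType("INSTANCE").
		SetTargets([]*ssm.Target{{
			Key:    aws.String("InstanceIds"),
			Values: []*string{aws.String(instanceID)}, // e.g. "i-1234567890abcdef0" (placeholder)
		}})

	out, err := svc.DescribeMaintenanceWindowsForTarget(input)
	if err != nil {
		return err
	}
	for _, identity := range out.WindowIdentities {
		fmt.Println(identity)
	}
	return nil
}
```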
-func (s *DescribeMaintenanceWindowTasksInput) SetNextToken(v string) *DescribeMaintenanceWindowTasksInput { +func (s *DescribeMaintenanceWindowsForTargetInput) SetNextToken(v string) *DescribeMaintenanceWindowsForTargetInput { s.NextToken = &v return s } -// SetWindowId sets the WindowId field's value. -func (s *DescribeMaintenanceWindowTasksInput) SetWindowId(v string) *DescribeMaintenanceWindowTasksInput { - s.WindowId = &v +// SetResourceType sets the ResourceType field's value. +func (s *DescribeMaintenanceWindowsForTargetInput) SetResourceType(v string) *DescribeMaintenanceWindowsForTargetInput { + s.ResourceType = &v return s } -type DescribeMaintenanceWindowTasksOutput struct { +// SetTargets sets the Targets field's value. +func (s *DescribeMaintenanceWindowsForTargetInput) SetTargets(v []*Target) *DescribeMaintenanceWindowsForTargetInput { + s.Targets = v + return s +} + +type DescribeMaintenanceWindowsForTargetOutput struct { _ struct{} `type:"structure"` - // The token to use when requesting the next set of items. If there are no additional - // items to return, the string is empty. + // The token for the next set of items to return. (You use this token in the + // next call.) NextToken *string `type:"string"` - // Information about the tasks in the Maintenance Window. - Tasks []*MaintenanceWindowTask `type:"list"` + // Information about the Maintenance Window targets and tasks an instance is + // associated with. + WindowIdentities []*MaintenanceWindowIdentityForTarget `type:"list"` } // String returns the string representation -func (s DescribeMaintenanceWindowTasksOutput) String() string { +func (s DescribeMaintenanceWindowsForTargetOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeMaintenanceWindowTasksOutput) GoString() string { +func (s DescribeMaintenanceWindowsForTargetOutput) GoString() string { return s.String() } // SetNextToken sets the NextToken field's value. -func (s *DescribeMaintenanceWindowTasksOutput) SetNextToken(v string) *DescribeMaintenanceWindowTasksOutput { +func (s *DescribeMaintenanceWindowsForTargetOutput) SetNextToken(v string) *DescribeMaintenanceWindowsForTargetOutput { s.NextToken = &v return s } -// SetTasks sets the Tasks field's value. -func (s *DescribeMaintenanceWindowTasksOutput) SetTasks(v []*MaintenanceWindowTask) *DescribeMaintenanceWindowTasksOutput { - s.Tasks = v +// SetWindowIdentities sets the WindowIdentities field's value. +func (s *DescribeMaintenanceWindowsForTargetOutput) SetWindowIdentities(v []*MaintenanceWindowIdentityForTarget) *DescribeMaintenanceWindowsForTargetOutput { + s.WindowIdentities = v return s } @@ -16215,6 +19101,14 @@ type DescribePatchGroupStateOutput struct { // The number of instances with installed patches. InstancesWithInstalledPatches *int64 `type:"integer"` + // The number of instances with patches installed that are specified in a RejectedPatches + // list. Patches with a status of INSTALLED_REJECTED were typically installed + // before they were added to a RejectedPatches list. + // + // If ALLOW_AS_DEPENDENCY is the specified option for RejectedPatchesAction, + // the value of InstancesWithInstalledRejectedPatches will always be 0 (zero). + InstancesWithInstalledRejectedPatches *int64 `type:"integer"` + // The number of instances with missing patches from the patch baseline. 
InstancesWithMissingPatches *int64 `type:"integer"` @@ -16256,6 +19150,12 @@ func (s *DescribePatchGroupStateOutput) SetInstancesWithInstalledPatches(v int64 return s } +// SetInstancesWithInstalledRejectedPatches sets the InstancesWithInstalledRejectedPatches field's value. +func (s *DescribePatchGroupStateOutput) SetInstancesWithInstalledRejectedPatches(v int64) *DescribePatchGroupStateOutput { + s.InstancesWithInstalledRejectedPatches = &v + return s +} + // SetInstancesWithMissingPatches sets the InstancesWithMissingPatches field's value. func (s *DescribePatchGroupStateOutput) SetInstancesWithMissingPatches(v int64) *DescribePatchGroupStateOutput { s.InstancesWithMissingPatches = &v @@ -16370,6 +19270,123 @@ func (s *DescribePatchGroupsOutput) SetNextToken(v string) *DescribePatchGroupsO return s } +type DescribeSessionsInput struct { + _ struct{} `type:"structure"` + + // One or more filters to limit the type of sessions returned by the request. + Filters []*SessionFilter `min:"1" type:"list"` + + // The maximum number of items to return for this call. The call also returns + // a token that you can specify in a subsequent call to get the next set of + // results. + MaxResults *int64 `min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `type:"string"` + + // The session status to retrieve a list of sessions for. For example, "Active". + // + // State is a required field + State *string `type:"string" required:"true" enum:"SessionState"` +} + +// String returns the string representation +func (s DescribeSessionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSessionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeSessionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeSessionsInput"} + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.State == nil { + invalidParams.Add(request.NewErrParamRequired("State")) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *DescribeSessionsInput) SetFilters(v []*SessionFilter) *DescribeSessionsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeSessionsInput) SetMaxResults(v int64) *DescribeSessionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSessionsInput) SetNextToken(v string) *DescribeSessionsInput { + s.NextToken = &v + return s +} + +// SetState sets the State field's value. +func (s *DescribeSessionsInput) SetState(v string) *DescribeSessionsInput { + s.State = &v + return s +} + +type DescribeSessionsOutput struct { + _ struct{} `type:"structure"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) 
+ NextToken *string `type:"string"` + + // A list of sessions meeting the request parameters. + Sessions []*Session `type:"list"` +} + +// String returns the string representation +func (s DescribeSessionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSessionsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSessionsOutput) SetNextToken(v string) *DescribeSessionsOutput { + s.NextToken = &v + return s +} + +// SetSessions sets the Sessions field's value. +func (s *DescribeSessionsOutput) SetSessions(v []*Session) *DescribeSessionsOutput { + s.Sessions = v + return s +} + // A default version of a document. type DocumentDefaultVersionDescription struct { _ struct{} `type:"structure"` @@ -16408,7 +19425,7 @@ type DocumentDescription struct { _ struct{} `type:"structure"` // The date when the document was created. - CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `type:"timestamp"` // The default version. DefaultVersion *string `type:"string"` @@ -16871,7 +19888,7 @@ type DocumentVersionInfo struct { _ struct{} `type:"structure"` // The date the document was created. - CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `type:"timestamp"` // The document format, either JSON or YAML. DocumentFormat *string `type:"string" enum:"DocumentFormat"` @@ -17193,6 +20210,9 @@ func (s *GetCommandInvocationInput) SetPluginName(v string) *GetCommandInvocatio type GetCommandInvocationOutput struct { _ struct{} `type:"structure"` + // CloudWatch Logs information where Systems Manager sent the command output. + CloudWatchOutputConfig *CloudWatchOutputConfig `type:"structure"` + // The parent command ID of the invocation plugin. CommandId *string `min:"36" type:"string"` @@ -17202,6 +20222,9 @@ type GetCommandInvocationOutput struct { // The name of the document that was executed. For example, AWS-RunShellScript. DocumentName *string `type:"string"` + // The SSM document version used in the request. + DocumentVersion *string `type:"string"` + // Duration since ExecutionStartDateTime. ExecutionElapsedTime *string `type:"string"` @@ -17254,16 +20277,16 @@ type GetCommandInvocationOutput struct { // If an Amazon S3 bucket was not specified, then this string is empty. StandardOutputUrl *string `type:"string"` - // The status of the parent command for this invocation. This status can be - // different than StatusDetails. + // The status of this invocation plugin. This status can be different than StatusDetails. Status *string `type:"string" enum:"CommandInvocationStatus"` // A detailed status of the command execution for an invocation. StatusDetails // includes more information than Status because it includes states resulting // from error and concurrency control parameters. StatusDetails can show different - // results than Status. For more information about these statuses, see Run Command - // Status (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-about-status.html). - // StatusDetails can be one of the following values: + // results than Status. For more information about these statuses, see Understanding + // Command Statuses (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html) + // in the AWS Systems Manager User Guide. 
StatusDetails can be one of the following + // values: // // * Pending: The command has not been sent to the instance. // @@ -17318,6 +20341,12 @@ func (s GetCommandInvocationOutput) GoString() string { return s.String() } +// SetCloudWatchOutputConfig sets the CloudWatchOutputConfig field's value. +func (s *GetCommandInvocationOutput) SetCloudWatchOutputConfig(v *CloudWatchOutputConfig) *GetCommandInvocationOutput { + s.CloudWatchOutputConfig = v + return s +} + // SetCommandId sets the CommandId field's value. func (s *GetCommandInvocationOutput) SetCommandId(v string) *GetCommandInvocationOutput { s.CommandId = &v @@ -17336,6 +20365,12 @@ func (s *GetCommandInvocationOutput) SetDocumentName(v string) *GetCommandInvoca return s } +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *GetCommandInvocationOutput) SetDocumentVersion(v string) *GetCommandInvocationOutput { + s.DocumentVersion = &v + return s +} + // SetExecutionElapsedTime sets the ExecutionElapsedTime field's value. func (s *GetCommandInvocationOutput) SetExecutionElapsedTime(v string) *GetCommandInvocationOutput { s.ExecutionElapsedTime = &v @@ -17408,6 +20443,80 @@ func (s *GetCommandInvocationOutput) SetStatusDetails(v string) *GetCommandInvoc return s } +type GetConnectionStatusInput struct { + _ struct{} `type:"structure"` + + // The ID of the instance. + // + // Target is a required field + Target *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetConnectionStatusInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetConnectionStatusInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetConnectionStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetConnectionStatusInput"} + if s.Target == nil { + invalidParams.Add(request.NewErrParamRequired("Target")) + } + if s.Target != nil && len(*s.Target) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Target", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTarget sets the Target field's value. +func (s *GetConnectionStatusInput) SetTarget(v string) *GetConnectionStatusInput { + s.Target = &v + return s +} + +type GetConnectionStatusOutput struct { + _ struct{} `type:"structure"` + + // The status of the connection to the instance. For example, 'Connected' or + // 'Not Connected'. + Status *string `type:"string" enum:"ConnectionStatus"` + + // The ID of the instance to check connection status. + Target *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s GetConnectionStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetConnectionStatusOutput) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *GetConnectionStatusOutput) SetStatus(v string) *GetConnectionStatusOutput { + s.Status = &v + return s +} + +// SetTarget sets the Target field's value. +func (s *GetConnectionStatusOutput) SetTarget(v string) *GetConnectionStatusOutput { + s.Target = &v + return s +} + type GetDefaultPatchBaselineInput struct { _ struct{} `type:"structure"` @@ -17991,10 +21100,10 @@ type GetMaintenanceWindowExecutionOutput struct { _ struct{} `type:"structure"` // The time the Maintenance Window finished executing. 
- EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The time the Maintenance Window started executing. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // The status of the Maintenance Window execution. Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` @@ -18194,7 +21303,7 @@ type GetMaintenanceWindowExecutionTaskInvocationOutput struct { _ struct{} `type:"structure"` // The time that the task finished executing on the target. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The execution ID. ExecutionId *string `type:"string"` @@ -18210,7 +21319,7 @@ type GetMaintenanceWindowExecutionTaskInvocationOutput struct { Parameters *string `type:"string"` // The time that the task started executing on the target. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // The task status for an invocation. Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` @@ -18319,7 +21428,7 @@ type GetMaintenanceWindowExecutionTaskOutput struct { _ struct{} `type:"structure"` // The time the task execution completed. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The defined maximum number of task executions that could be run in parallel. MaxConcurrency *string `min:"1" type:"string"` @@ -18335,7 +21444,7 @@ type GetMaintenanceWindowExecutionTaskOutput struct { ServiceRole *string `type:"string"` // The time the task execution started. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // The status of the task. Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` @@ -18350,8 +21459,14 @@ type GetMaintenanceWindowExecutionTaskOutput struct { // was retrieved. TaskExecutionId *string `min:"36" type:"string"` - // The parameters passed to the task when it was executed. The map has the following - // format: + // The parameters passed to the task when it was executed. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + // + // The map has the following format: // // Key: string, between 1 and 255 characters // @@ -18502,7 +21617,7 @@ type GetMaintenanceWindowOutput struct { AllowUnassociatedTargets *bool `type:"boolean"` // The date the Maintenance Window was created. - CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `type:"timestamp"` // The number of hours before the end of the Maintenance Window that Systems // Manager stops scheduling new tasks for execution. @@ -18517,15 +21632,35 @@ type GetMaintenanceWindowOutput struct { // Whether the Maintenance Windows is enabled. Enabled *bool `type:"boolean"` + // The date and time, in ISO-8601 Extended format, for when the Maintenance + // Window is scheduled to become inactive. The Maintenance Window will not run + // after this specified time. + EndDate *string `type:"string"` + // The date the Maintenance Window was last modified. 
- ModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ModifiedDate *time.Time `type:"timestamp"` // The name of the Maintenance Window. Name *string `min:"3" type:"string"` + // The next time the Maintenance Window will actually run, taking into account + // any specified times for the Maintenance Window to become active or inactive. + NextExecutionTime *string `type:"string"` + // The schedule of the Maintenance Window in the form of a cron or rate expression. Schedule *string `min:"1" type:"string"` + // The time zone that the scheduled Maintenance Window executions are based + // on, in Internet Assigned Numbers Authority (IANA) format. For example: "America/Los_Angeles", + // "etc/UTC", or "Asia/Seoul". For more information, see the Time Zone Database + // (https://www.iana.org/time-zones) on the IANA website. + ScheduleTimezone *string `type:"string"` + + // The date and time, in ISO-8601 Extended format, for when the Maintenance + // Window is scheduled to become active. The Maintenance Window will not run + // before this specified time. + StartDate *string `type:"string"` + // The ID of the created Maintenance Window. WindowId *string `min:"20" type:"string"` } @@ -18576,6 +21711,12 @@ func (s *GetMaintenanceWindowOutput) SetEnabled(v bool) *GetMaintenanceWindowOut return s } +// SetEndDate sets the EndDate field's value. +func (s *GetMaintenanceWindowOutput) SetEndDate(v string) *GetMaintenanceWindowOutput { + s.EndDate = &v + return s +} + // SetModifiedDate sets the ModifiedDate field's value. func (s *GetMaintenanceWindowOutput) SetModifiedDate(v time.Time) *GetMaintenanceWindowOutput { s.ModifiedDate = &v @@ -18588,12 +21729,30 @@ func (s *GetMaintenanceWindowOutput) SetName(v string) *GetMaintenanceWindowOutp return s } +// SetNextExecutionTime sets the NextExecutionTime field's value. +func (s *GetMaintenanceWindowOutput) SetNextExecutionTime(v string) *GetMaintenanceWindowOutput { + s.NextExecutionTime = &v + return s +} + // SetSchedule sets the Schedule field's value. func (s *GetMaintenanceWindowOutput) SetSchedule(v string) *GetMaintenanceWindowOutput { s.Schedule = &v return s } +// SetScheduleTimezone sets the ScheduleTimezone field's value. +func (s *GetMaintenanceWindowOutput) SetScheduleTimezone(v string) *GetMaintenanceWindowOutput { + s.ScheduleTimezone = &v + return s +} + +// SetStartDate sets the StartDate field's value. +func (s *GetMaintenanceWindowOutput) SetStartDate(v string) *GetMaintenanceWindowOutput { + s.StartDate = &v + return s +} + // SetWindowId sets the WindowId field's value. func (s *GetMaintenanceWindowOutput) SetWindowId(v string) *GetMaintenanceWindowOutput { s.WindowId = &v @@ -18665,6 +21824,11 @@ type GetMaintenanceWindowTaskOutput struct { Description *string `min:"1" type:"string"` // The location in Amazon S3 where the task results are logged. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. LoggingInfo *LoggingInfo `type:"structure"` // The maximum number of targets allowed to run this task in parallel. 
@@ -18696,6 +21860,11 @@ type GetMaintenanceWindowTaskOutput struct { TaskInvocationParameters *MaintenanceWindowTaskInvocationParameters `type:"structure"` // The parameters to pass to the task when it executes. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"` // The type of task to execute. @@ -18996,16 +22165,24 @@ type GetParametersByPathInput struct { NextToken *string `type:"string"` // Filters to limit the request results. + // + // You can't filter using the parameter name. ParameterFilters []*ParameterStringFilter `type:"list"` // The hierarchy for the parameter. Hierarchies start with a forward slash (/) - // and end with the parameter name. A hierarchy can have a maximum of 15 levels. - // Here is an example of a hierarchy: /Finance/Prod/IAD/WinServ2016/license33 + // and end with the parameter name. A parameter name hierarchy can have a maximum + // of 15 levels. Here is an example of a hierarchy: /Finance/Prod/IAD/WinServ2016/license33 // // Path is a required field Path *string `min:"1" type:"string" required:"true"` // Retrieve all parameters within a hierarchy. + // + // If a user has access to a path, then the user can access all levels of that + // path. For example, if a user has permission to access path /a, then the user + // can also access /a/b. Even if a user has explicitly been denied access in + // IAM for parameter /a, they can still call the GetParametersByPath API action + // recursively and view /a/b. Recursive *bool `type:"boolean"` // Retrieve all parameters in a hierarchy with their value decrypted. @@ -19360,7 +22537,7 @@ type GetPatchBaselineOutput struct { BaselineId *string `min:"20" type:"string"` // The date the patch baseline was created. - CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `type:"timestamp"` // A description of the patch baseline. Description *string `min:"1" type:"string"` @@ -19369,7 +22546,7 @@ type GetPatchBaselineOutput struct { GlobalFilters *PatchFilterGroup `type:"structure"` // The date the patch baseline was last modified. - ModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ModifiedDate *time.Time `type:"timestamp"` // The name of the patch baseline. Name *string `min:"3" type:"string"` @@ -19383,6 +22560,11 @@ type GetPatchBaselineOutput struct { // A list of explicitly rejected patches for the baseline. RejectedPatches []*string `type:"list"` + // The action specified to take on patches included in the RejectedPatches list. + // A patch can be allowed only if it is a dependency of another package, or + // blocked entirely along with packages that include it as a dependency. + RejectedPatchesAction *string `type:"string" enum:"PatchAction"` + // Information about the patches to use to update the instances, including target // operating systems and source repositories. Applies to Linux instances only. Sources []*PatchSource `type:"list"` @@ -19476,6 +22658,12 @@ func (s *GetPatchBaselineOutput) SetRejectedPatches(v []*string) *GetPatchBaseli return s } +// SetRejectedPatchesAction sets the RejectedPatchesAction field's value. 
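The `GetParametersByPath` notes above (hierarchical paths, the `Recursive` flag, and the IAM caveat about nested paths) translate into a fairly simple call. A sketch with a hypothetical parameter hierarchy; the `WithDecryption` input field and the `Parameters`/`Name`/`Value` output fields assumed here are the SDK's usual parameter shapes rather than anything shown in this hunk:

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ssm"
)

// printParametersUnder returns every parameter under a path, one page at a time.
func printParametersUnder(svc *ssm.SSM, path string) error {
	input := &ssm.GetParametersByPathInput{
		Path:           aws.String(path), // e.g. "/Finance/Prod" (hypothetical hierarchy)
		Recursive:      aws.Bool(true),   // also return /Finance/Prod/IAD/... and deeper levels
		WithDecryption: aws.Bool(true),   // decrypt SecureString values in the response
	}

	for {
		out, err := svc.GetParametersByPath(input)
		if err != nil {
			return err
		}
		for _, p := range out.Parameters {
			fmt.Println(aws.StringValue(p.Name), "=", aws.StringValue(p.Value))
		}
		if aws.StringValue(out.NextToken) == "" {
			return nil
		}
		input.NextToken = out.NextToken
	}
}
```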
+func (s *GetPatchBaselineOutput) SetRejectedPatchesAction(v string) *GetPatchBaselineOutput { + s.RejectedPatchesAction = &v + return s +} + // SetSources sets the Sources field's value. func (s *GetPatchBaselineOutput) SetSources(v []*PatchSource) *GetPatchBaselineOutput { s.Sources = v @@ -19652,7 +22840,7 @@ type InstanceAssociationStatusInfo struct { ErrorCode *string `type:"string"` // The date the instance association executed. - ExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ExecutionDate *time.Time `type:"timestamp"` // Summary information about association execution. ExecutionSummary *string `min:"1" type:"string"` @@ -19760,7 +22948,7 @@ type InstanceInformation struct { // The activation ID created by Systems Manager when the server or VM was registered. ActivationId *string `type:"string"` - // The version of the SSM Agent running on your Linux instance. + // The version of SSM Agent running on your Linux instance. AgentVersion *string `type:"string"` // Information about the association. @@ -19775,32 +22963,33 @@ type InstanceInformation struct { // The IP address of the managed instance. IPAddress *string `min:"1" type:"string"` - // The Amazon Identity and Access Management (IAM) role assigned to EC2 instances - // or managed instances. + // The Amazon Identity and Access Management (IAM) role assigned to the on-premises + // Systems Manager managed instances. This call does not return the IAM role + // for Amazon EC2 instances. IamRole *string `type:"string"` // The instance ID. InstanceId *string `type:"string"` - // Indicates whether latest version of the SSM Agent is running on your instance. + // Indicates whether latest version of SSM Agent is running on your instance. // Some older versions of Windows Server use the EC2Config service to process // SSM requests. For this reason, this field does not indicate whether or not // the latest version is installed on Windows managed instances. IsLatestVersion *bool `type:"boolean"` // The date the association was last executed. - LastAssociationExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastAssociationExecutionDate *time.Time `type:"timestamp"` // The date and time when agent last pinged Systems Manager service. - LastPingDateTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastPingDateTime *time.Time `type:"timestamp"` // The last date the association was successfully run. - LastSuccessfulAssociationExecutionDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastSuccessfulAssociationExecutionDate *time.Time `type:"timestamp"` // The name of the managed instance. Name *string `type:"string"` - // Connection status of the SSM Agent. + // Connection status of SSM Agent. PingStatus *string `type:"string" enum:"PingStatus"` // The name of the operating system platform running on your instance. @@ -19813,7 +23002,7 @@ type InstanceInformation struct { PlatformVersion *string `type:"string"` // The date the server or VM was registered with AWS as a managed instance. - RegistrationDate *time.Time `type:"timestamp" timestampFormat:"unix"` + RegistrationDate *time.Time `type:"timestamp"` // The type of instance. Instances are either EC2 instances or managed instances. ResourceType *string `type:"string" enum:"ResourceType"` @@ -19943,7 +23132,12 @@ func (s *InstanceInformation) SetResourceType(v string) *InstanceInformation { return s } -// Describes a filter for a specific list of instances. +// Describes a filter for a specific list of instances. 
You can filter instances +// information by using tags. You specify tags by using a key-value mapping. +// +// Use this action instead of the DescribeInstanceInformationRequest$InstanceInformationFilterList +// method. The InstanceInformationFilterList method is a legacy method and does +// not support tags. type InstanceInformationFilter struct { _ struct{} `type:"structure"` @@ -20077,6 +23271,16 @@ type InstancePatchState struct { // during the last patching operation, but failed to install. FailedCount *int64 `type:"integer"` + // An https URL or an Amazon S3 path-style URL to a list of patches to be installed. + // This patch installation list, which you maintain in an Amazon S3 bucket in + // YAML format and specify in the SSM document AWS-RunPatchBaseline, overrides + // the patches specified by the default patch baseline. + // + // For more information about the InstallOverrideList parameter, see About the + // SSM Document AWS-RunPatchBaseline (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-about-aws-runpatchbaseline.html) + // in the AWS Systems Manager User Guide. + InstallOverrideList *string `min:"1" type:"string"` + // The number of patches from the patch baseline that are installed on the instance. InstalledCount *int64 `type:"integer"` @@ -20084,6 +23288,14 @@ type InstancePatchState struct { // on the instance. InstalledOtherCount *int64 `type:"integer"` + // The number of instances with patches installed that are specified in a RejectedPatches + // list. Patches with a status of InstalledRejected were typically installed + // before they were added to a RejectedPatches list. + // + // If ALLOW_AS_DEPENDENCY is the specified option for RejectedPatchesAction, + // the value of InstalledRejectedCount will always be 0 (zero). + InstalledRejectedCount *int64 `type:"integer"` + // The ID of the managed instance the high-level patch compliance information // was collected for. // @@ -20107,14 +23319,14 @@ type InstancePatchState struct { // The time the most recent patching operation completed on the instance. // // OperationEndTime is a required field - OperationEndTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + OperationEndTime *time.Time `type:"timestamp" required:"true"` // The time the most recent patching operation was started on the instance. // // OperationStartTime is a required field - OperationStartTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + OperationStartTime *time.Time `type:"timestamp" required:"true"` - // Placeholder information, this field will always be empty in the current release + // Placeholder information. This field will always be empty in the current release // of the service. OwnerInformation *string `min:"1" type:"string"` @@ -20150,6 +23362,12 @@ func (s *InstancePatchState) SetFailedCount(v int64) *InstancePatchState { return s } +// SetInstallOverrideList sets the InstallOverrideList field's value. +func (s *InstancePatchState) SetInstallOverrideList(v string) *InstancePatchState { + s.InstallOverrideList = &v + return s +} + // SetInstalledCount sets the InstalledCount field's value. func (s *InstancePatchState) SetInstalledCount(v int64) *InstancePatchState { s.InstalledCount = &v @@ -20162,6 +23380,12 @@ func (s *InstancePatchState) SetInstalledOtherCount(v int64) *InstancePatchState return s } +// SetInstalledRejectedCount sets the InstalledRejectedCount field's value. 
+func (s *InstancePatchState) SetInstalledRejectedCount(v int64) *InstancePatchState { + s.InstalledRejectedCount = &v + return s +} + // SetInstanceId sets the InstanceId field's value. func (s *InstancePatchState) SetInstanceId(v string) *InstancePatchState { s.InstanceId = &v @@ -20301,6 +23525,11 @@ type InventoryAggregator struct { // The inventory type and attribute name for aggregation. Expression *string `min:"1" type:"string"` + + // A user-defined set of one or more filters on which to aggregate inventory + // data. Groups return a count of resources that match and don't match the specified + // criteria. + Groups []*InventoryGroup `min:"1" type:"list"` } // String returns the string representation @@ -20322,6 +23551,9 @@ func (s *InventoryAggregator) Validate() error { if s.Expression != nil && len(*s.Expression) < 1 { invalidParams.Add(request.NewErrParamMinLen("Expression", 1)) } + if s.Groups != nil && len(s.Groups) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Groups", 1)) + } if s.Aggregators != nil { for i, v := range s.Aggregators { if v == nil { @@ -20332,6 +23564,16 @@ func (s *InventoryAggregator) Validate() error { } } } + if s.Groups != nil { + for i, v := range s.Groups { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Groups", i), err.(request.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -20351,6 +23593,178 @@ func (s *InventoryAggregator) SetExpression(v string) *InventoryAggregator { return s } +// SetGroups sets the Groups field's value. +func (s *InventoryAggregator) SetGroups(v []*InventoryGroup) *InventoryAggregator { + s.Groups = v + return s +} + +// Status information returned by the DeleteInventory action. +type InventoryDeletionStatusItem struct { + _ struct{} `type:"structure"` + + // The deletion ID returned by the DeleteInventory action. + DeletionId *string `type:"string"` + + // The UTC timestamp when the delete operation started. + DeletionStartTime *time.Time `type:"timestamp"` + + // Information about the delete operation. For more information about this summary, + // see Understanding the Delete Inventory Summary (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-delete.html#sysman-inventory-delete-summary) + // in the AWS Systems Manager User Guide. + DeletionSummary *InventoryDeletionSummary `type:"structure"` + + // The status of the operation. Possible values are InProgress and Complete. + LastStatus *string `type:"string" enum:"InventoryDeletionStatus"` + + // Information about the status. + LastStatusMessage *string `type:"string"` + + // The UTC timestamp of when the last status report. + LastStatusUpdateTime *time.Time `type:"timestamp"` + + // The name of the inventory data type. + TypeName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s InventoryDeletionStatusItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryDeletionStatusItem) GoString() string { + return s.String() +} + +// SetDeletionId sets the DeletionId field's value. +func (s *InventoryDeletionStatusItem) SetDeletionId(v string) *InventoryDeletionStatusItem { + s.DeletionId = &v + return s +} + +// SetDeletionStartTime sets the DeletionStartTime field's value. 
+func (s *InventoryDeletionStatusItem) SetDeletionStartTime(v time.Time) *InventoryDeletionStatusItem { + s.DeletionStartTime = &v + return s +} + +// SetDeletionSummary sets the DeletionSummary field's value. +func (s *InventoryDeletionStatusItem) SetDeletionSummary(v *InventoryDeletionSummary) *InventoryDeletionStatusItem { + s.DeletionSummary = v + return s +} + +// SetLastStatus sets the LastStatus field's value. +func (s *InventoryDeletionStatusItem) SetLastStatus(v string) *InventoryDeletionStatusItem { + s.LastStatus = &v + return s +} + +// SetLastStatusMessage sets the LastStatusMessage field's value. +func (s *InventoryDeletionStatusItem) SetLastStatusMessage(v string) *InventoryDeletionStatusItem { + s.LastStatusMessage = &v + return s +} + +// SetLastStatusUpdateTime sets the LastStatusUpdateTime field's value. +func (s *InventoryDeletionStatusItem) SetLastStatusUpdateTime(v time.Time) *InventoryDeletionStatusItem { + s.LastStatusUpdateTime = &v + return s +} + +// SetTypeName sets the TypeName field's value. +func (s *InventoryDeletionStatusItem) SetTypeName(v string) *InventoryDeletionStatusItem { + s.TypeName = &v + return s +} + +// Information about the delete operation. +type InventoryDeletionSummary struct { + _ struct{} `type:"structure"` + + // Remaining number of items to delete. + RemainingCount *int64 `type:"integer"` + + // A list of counts and versions for deleted items. + SummaryItems []*InventoryDeletionSummaryItem `type:"list"` + + // The total number of items to delete. This count does not change during the + // delete operation. + TotalCount *int64 `type:"integer"` +} + +// String returns the string representation +func (s InventoryDeletionSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryDeletionSummary) GoString() string { + return s.String() +} + +// SetRemainingCount sets the RemainingCount field's value. +func (s *InventoryDeletionSummary) SetRemainingCount(v int64) *InventoryDeletionSummary { + s.RemainingCount = &v + return s +} + +// SetSummaryItems sets the SummaryItems field's value. +func (s *InventoryDeletionSummary) SetSummaryItems(v []*InventoryDeletionSummaryItem) *InventoryDeletionSummary { + s.SummaryItems = v + return s +} + +// SetTotalCount sets the TotalCount field's value. +func (s *InventoryDeletionSummary) SetTotalCount(v int64) *InventoryDeletionSummary { + s.TotalCount = &v + return s +} + +// Either a count, remaining count, or a version number in a delete inventory +// summary. +type InventoryDeletionSummaryItem struct { + _ struct{} `type:"structure"` + + // A count of the number of deleted items. + Count *int64 `type:"integer"` + + // The remaining number of items to delete. + RemainingCount *int64 `type:"integer"` + + // The inventory type version. + Version *string `type:"string"` +} + +// String returns the string representation +func (s InventoryDeletionSummaryItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryDeletionSummaryItem) GoString() string { + return s.String() +} + +// SetCount sets the Count field's value. +func (s *InventoryDeletionSummaryItem) SetCount(v int64) *InventoryDeletionSummaryItem { + s.Count = &v + return s +} + +// SetRemainingCount sets the RemainingCount field's value. +func (s *InventoryDeletionSummaryItem) SetRemainingCount(v int64) *InventoryDeletionSummaryItem { + s.RemainingCount = &v + return s +} + +// SetVersion sets the Version field's value. 
+func (s *InventoryDeletionSummaryItem) SetVersion(v string) *InventoryDeletionSummaryItem { + s.Version = &v + return s +} + // One or more filters. Use a filter to return a more specific list of results. type InventoryFilter struct { _ struct{} `type:"structure"` @@ -20421,6 +23835,79 @@ func (s *InventoryFilter) SetValues(v []*string) *InventoryFilter { return s } +// A user-defined set of one or more filters on which to aggregate inventory +// data. Groups return a count of resources that match and don't match the specified +// criteria. +type InventoryGroup struct { + _ struct{} `type:"structure"` + + // Filters define the criteria for the group. The matchingCount field displays + // the number of resources that match the criteria. The notMatchingCount field + // displays the number of resources that don't match the criteria. + // + // Filters is a required field + Filters []*InventoryFilter `min:"1" type:"list" required:"true"` + + // The name of the group. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s InventoryGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryGroup) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InventoryGroup) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventoryGroup"} + if s.Filters == nil { + invalidParams.Add(request.NewErrParamRequired("Filters")) + } + if s.Filters != nil && len(s.Filters) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Filters", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilters sets the Filters field's value. +func (s *InventoryGroup) SetFilters(v []*InventoryFilter) *InventoryGroup { + s.Filters = v + return s +} + +// SetName sets the Name field's value. +func (s *InventoryGroup) SetName(v string) *InventoryGroup { + s.Name = &v + return s +} + // Information collected from managed instances based on your inventory policy // document type InventoryItem struct { @@ -20731,6 +24218,100 @@ func (s *InventoryResultItem) SetTypeName(v string) *InventoryResultItem { return s } +type LabelParameterVersionInput struct { + _ struct{} `type:"structure"` + + // One or more labels to attach to the specified parameter version. + // + // Labels is a required field + Labels []*string `min:"1" type:"list" required:"true"` + + // The parameter name on which you want to attach one or more labels. + // + // Name is a required field + Name *string `min:"1" type:"string" required:"true"` + + // The specific version of the parameter on which you want to attach one or + // more labels. If no version is specified, the system attaches the label to + // the latest version.) 
+ ParameterVersion *int64 `type:"long"` +} + +// String returns the string representation +func (s LabelParameterVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LabelParameterVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LabelParameterVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LabelParameterVersionInput"} + if s.Labels == nil { + invalidParams.Add(request.NewErrParamRequired("Labels")) + } + if s.Labels != nil && len(s.Labels) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Labels", 1)) + } + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + if s.Name != nil && len(*s.Name) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Name", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLabels sets the Labels field's value. +func (s *LabelParameterVersionInput) SetLabels(v []*string) *LabelParameterVersionInput { + s.Labels = v + return s +} + +// SetName sets the Name field's value. +func (s *LabelParameterVersionInput) SetName(v string) *LabelParameterVersionInput { + s.Name = &v + return s +} + +// SetParameterVersion sets the ParameterVersion field's value. +func (s *LabelParameterVersionInput) SetParameterVersion(v int64) *LabelParameterVersionInput { + s.ParameterVersion = &v + return s +} + +type LabelParameterVersionOutput struct { + _ struct{} `type:"structure"` + + // The label does not meet the requirements. For information about parameter + // label requirements, see Labeling Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-labels.html) + // in the AWS Systems Manager User Guide. + InvalidLabels []*string `min:"1" type:"list"` +} + +// String returns the string representation +func (s LabelParameterVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LabelParameterVersionOutput) GoString() string { + return s.String() +} + +// SetInvalidLabels sets the InvalidLabels field's value. +func (s *LabelParameterVersionOutput) SetInvalidLabels(v []*string) *LabelParameterVersionOutput { + s.InvalidLabels = v + return s +} + type ListAssociationVersionsInput struct { _ struct{} `type:"structure"` @@ -22061,6 +25642,11 @@ func (s *ListTagsForResourceOutput) SetTagList(v []*Tag) *ListTagsForResourceOut } // Information about an Amazon S3 bucket to write instance-level logs to. +// +// LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, +// instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. type LoggingInfo struct { _ struct{} `type:"structure"` @@ -22136,6 +25722,22 @@ type MaintenanceWindowAutomationParameters struct { DocumentVersion *string `type:"string"` // The parameters for the AUTOMATION task. + // + // For information about specifying and updating task parameters, see RegisterTaskWithMaintenanceWindow + // and UpdateMaintenanceWindowTask. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. 
For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + // + // For AUTOMATION task types, Systems Manager ignores any values specified for + // these parameters. Parameters map[string][]*string `min:"1" type:"map"` } @@ -22179,10 +25781,10 @@ type MaintenanceWindowExecution struct { _ struct{} `type:"structure"` // The time the execution finished. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The time the execution started. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // The status of the execution. Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` @@ -22249,10 +25851,10 @@ type MaintenanceWindowExecutionTaskIdentity struct { _ struct{} `type:"structure"` // The time the task execution finished. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The time the task execution started. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // The status of the task execution. Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` @@ -22338,7 +25940,7 @@ type MaintenanceWindowExecutionTaskInvocationIdentity struct { _ struct{} `type:"structure"` // The time the invocation finished. - EndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + EndTime *time.Time `type:"timestamp"` // The ID of the action performed in the service that actually handled the task // invocation. If the task type is RUN_COMMAND, this value is the command ID. @@ -22356,7 +25958,7 @@ type MaintenanceWindowExecutionTaskInvocationIdentity struct { Parameters *string `type:"string"` // The time the invocation started. - StartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + StartTime *time.Time `type:"timestamp"` // The status of the task invocation. Status *string `type:"string" enum:"MaintenanceWindowExecutionStatus"` @@ -22461,7 +26063,7 @@ func (s *MaintenanceWindowExecutionTaskInvocationIdentity) SetWindowTargetId(v s return s } -// Filter used in the request. +// Filter used in the request. Supported filter keys are Name and Enabled. type MaintenanceWindowFilter struct { _ struct{} `type:"structure"` @@ -22524,9 +26126,28 @@ type MaintenanceWindowIdentity struct { // Whether the Maintenance Window is enabled. Enabled *bool `type:"boolean"` + // The date and time, in ISO-8601 Extended format, for when the Maintenance + // Window is scheduled to become inactive. + EndDate *string `type:"string"` + // The name of the Maintenance Window. Name *string `min:"3" type:"string"` + // The next time the Maintenance Window will actually run, taking into account + // any specified times for the Maintenance Window to become active or inactive. + NextExecutionTime *string `type:"string"` + + // The schedule of the Maintenance Window in the form of a cron or rate expression. 
+ Schedule *string `min:"1" type:"string"` + + // The time zone that the scheduled Maintenance Window executions are based + // on, in Internet Assigned Numbers Authority (IANA) format. + ScheduleTimezone *string `type:"string"` + + // The date and time, in ISO-8601 Extended format, for when the Maintenance + // Window is scheduled to become active. + StartDate *string `type:"string"` + // The ID of the Maintenance Window. WindowId *string `min:"20" type:"string"` } @@ -22565,19 +26186,98 @@ func (s *MaintenanceWindowIdentity) SetEnabled(v bool) *MaintenanceWindowIdentit return s } +// SetEndDate sets the EndDate field's value. +func (s *MaintenanceWindowIdentity) SetEndDate(v string) *MaintenanceWindowIdentity { + s.EndDate = &v + return s +} + // SetName sets the Name field's value. func (s *MaintenanceWindowIdentity) SetName(v string) *MaintenanceWindowIdentity { s.Name = &v return s } +// SetNextExecutionTime sets the NextExecutionTime field's value. +func (s *MaintenanceWindowIdentity) SetNextExecutionTime(v string) *MaintenanceWindowIdentity { + s.NextExecutionTime = &v + return s +} + +// SetSchedule sets the Schedule field's value. +func (s *MaintenanceWindowIdentity) SetSchedule(v string) *MaintenanceWindowIdentity { + s.Schedule = &v + return s +} + +// SetScheduleTimezone sets the ScheduleTimezone field's value. +func (s *MaintenanceWindowIdentity) SetScheduleTimezone(v string) *MaintenanceWindowIdentity { + s.ScheduleTimezone = &v + return s +} + +// SetStartDate sets the StartDate field's value. +func (s *MaintenanceWindowIdentity) SetStartDate(v string) *MaintenanceWindowIdentity { + s.StartDate = &v + return s +} + // SetWindowId sets the WindowId field's value. func (s *MaintenanceWindowIdentity) SetWindowId(v string) *MaintenanceWindowIdentity { s.WindowId = &v return s } +// The Maintenance Window to which the specified target belongs. +type MaintenanceWindowIdentityForTarget struct { + _ struct{} `type:"structure"` + + // The name of the Maintenance Window. + Name *string `min:"3" type:"string"` + + // The ID of the Maintenance Window. + WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s MaintenanceWindowIdentityForTarget) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MaintenanceWindowIdentityForTarget) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *MaintenanceWindowIdentityForTarget) SetName(v string) *MaintenanceWindowIdentityForTarget { + s.Name = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *MaintenanceWindowIdentityForTarget) SetWindowId(v string) *MaintenanceWindowIdentityForTarget { + s.WindowId = &v + return s +} + // The parameters for a LAMBDA task type. +// +// For information about specifying and updating task parameters, see RegisterTaskWithMaintenanceWindow +// and UpdateMaintenanceWindowTask. +// +// LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, +// instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// TaskParameters has been deprecated. To specify parameters to pass to a task +// when it runs, instead use the Parameters option in the TaskInvocationParameters +// structure. 
For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// For Lambda tasks, Systems Manager ignores any values specified for TaskParameters +// and LoggingInfo. type MaintenanceWindowLambdaParameters struct { _ struct{} `type:"structure"` @@ -22643,6 +26343,22 @@ func (s *MaintenanceWindowLambdaParameters) SetQualifier(v string) *MaintenanceW } // The parameters for a RUN_COMMAND task type. +// +// For information about specifying and updating task parameters, see RegisterTaskWithMaintenanceWindow +// and UpdateMaintenanceWindowTask. +// +// LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, +// instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// TaskParameters has been deprecated. To specify parameters to pass to a task +// when it runs, instead use the Parameters option in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// For Run Command tasks, Systems Manager uses specified values for TaskParameters +// and LoggingInfo only if no values are specified for TaskInvocationParameters. type MaintenanceWindowRunCommandParameters struct { _ struct{} `type:"structure"` @@ -22673,7 +26389,7 @@ type MaintenanceWindowRunCommandParameters struct { ServiceRoleArn *string `type:"string"` // If this time is reached and the command has not already started executing, - // it doesn not execute. + // it doesn't run. TimeoutSeconds *int64 `min:"30" type:"integer"` } @@ -22757,7 +26473,23 @@ func (s *MaintenanceWindowRunCommandParameters) SetTimeoutSeconds(v int64) *Main return s } -// The parameters for the STEP_FUNCTION execution. +// The parameters for a STEP_FUNCTION task. +// +// For information about specifying and updating task parameters, see RegisterTaskWithMaintenanceWindow +// and UpdateMaintenanceWindowTask. +// +// LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, +// instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// TaskParameters has been deprecated. To specify parameters to pass to a task +// when it runs, instead use the Parameters option in the TaskInvocationParameters +// structure. For information about how Systems Manager handles these options +// for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. +// +// For Step Functions tasks, Systems Manager ignores any values specified for +// TaskParameters and LoggingInfo. type MaintenanceWindowStepFunctionsParameters struct { _ struct{} `type:"structure"` @@ -22891,6 +26623,11 @@ type MaintenanceWindowTask struct { Description *string `min:"1" type:"string"` // Information about an Amazon S3 bucket to write task-level logs to. + // + // LoggingInfo has been deprecated. 
To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. LoggingInfo *LoggingInfo `type:"structure"` // The maximum number of targets this task can be run for in parallel. @@ -22921,6 +26658,11 @@ type MaintenanceWindowTask struct { TaskArn *string `min:"1" type:"string"` // The parameters that should be passed to the task when it is executed. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"` // The type of task. The type can be one of the following: RUN_COMMAND, AUTOMATION, @@ -23026,7 +26768,7 @@ func (s *MaintenanceWindowTask) SetWindowTaskId(v string) *MaintenanceWindowTask type MaintenanceWindowTaskInvocationParameters struct { _ struct{} `type:"structure"` - // The parameters for a AUTOMATION task type. + // The parameters for an AUTOMATION task type. Automation *MaintenanceWindowAutomationParameters `type:"structure"` // The parameters for a LAMBDA task type. @@ -23260,8 +27002,8 @@ type NotificationConfig struct { // The different events for which you can receive notifications. These events // include the following: All (events), InProgress, Success, TimedOut, Cancelled, - // Failed. To learn more about these events, see Setting Up Events and Notifications - // (http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html) + // Failed. To learn more about these events, see Configuring Amazon SNS Notifications + // for Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/rc-sns-notifications.html) // in the AWS Systems Manager User Guide. NotificationEvents []*string `type:"list"` @@ -23299,13 +27041,67 @@ func (s *NotificationConfig) SetNotificationType(v string) *NotificationConfig { return s } +// Information about the source where the association execution details are +// stored. +type OutputSource struct { + _ struct{} `type:"structure"` + + // The ID of the output source, for example the URL of an Amazon S3 bucket. + OutputSourceId *string `min:"36" type:"string"` + + // The type of source where the association execution details are stored, for + // example, Amazon S3. + OutputSourceType *string `type:"string"` +} + +// String returns the string representation +func (s OutputSource) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputSource) GoString() string { + return s.String() +} + +// SetOutputSourceId sets the OutputSourceId field's value. +func (s *OutputSource) SetOutputSourceId(v string) *OutputSource { + s.OutputSourceId = &v + return s +} + +// SetOutputSourceType sets the OutputSourceType field's value. +func (s *OutputSource) SetOutputSourceType(v string) *OutputSource { + s.OutputSourceType = &v + return s +} + // An Amazon EC2 Systems Manager parameter in Parameter Store. type Parameter struct { _ struct{} `type:"structure"` + // The Amazon Resource Name (ARN) of the parameter. 
+ ARN *string `type:"string"` + + // Date the parameter was last changed or updated and the parameter version + // was created. + LastModifiedDate *time.Time `type:"timestamp"` + // The name of the parameter. Name *string `min:"1" type:"string"` + // Either the version number or the label used to retrieve the parameter value. + // Specify selectors by using one of the following formats: + // + // parameter_name:version + // + // parameter_name:label + Selector *string `type:"string"` + + // Applies to parameters that reference information in other AWS services. SourceResult + // is the raw result or response from the source. + SourceResult *string `type:"string"` + // The type of parameter. Valid values include the following: String, String // list, Secure string. Type *string `type:"string" enum:"ParameterType"` @@ -23327,12 +27123,36 @@ func (s Parameter) GoString() string { return s.String() } +// SetARN sets the ARN field's value. +func (s *Parameter) SetARN(v string) *Parameter { + s.ARN = &v + return s +} + +// SetLastModifiedDate sets the LastModifiedDate field's value. +func (s *Parameter) SetLastModifiedDate(v time.Time) *Parameter { + s.LastModifiedDate = &v + return s +} + // SetName sets the Name field's value. func (s *Parameter) SetName(v string) *Parameter { s.Name = &v return s } +// SetSelector sets the Selector field's value. +func (s *Parameter) SetSelector(v string) *Parameter { + s.Selector = &v + return s +} + +// SetSourceResult sets the SourceResult field's value. +func (s *Parameter) SetSourceResult(v string) *Parameter { + s.SourceResult = &v + return s +} + // SetType sets the Type field's value. func (s *Parameter) SetType(v string) *Parameter { s.Type = &v @@ -23366,8 +27186,11 @@ type ParameterHistory struct { // The ID of the query key used for this parameter. KeyId *string `min:"1" type:"string"` + // Labels assigned to the parameter version. + Labels []*string `min:"1" type:"list"` + // Date the parameter was last changed or updated. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // Amazon Resource Name (ARN) of the AWS user who last changed the parameter. LastModifiedUser *string `type:"string"` @@ -23413,6 +27236,12 @@ func (s *ParameterHistory) SetKeyId(v string) *ParameterHistory { return s } +// SetLabels sets the Labels field's value. +func (s *ParameterHistory) SetLabels(v []*string) *ParameterHistory { + s.Labels = v + return s +} + // SetLastModifiedDate sets the LastModifiedDate field's value. func (s *ParameterHistory) SetLastModifiedDate(v time.Time) *ParameterHistory { s.LastModifiedDate = &v @@ -23466,7 +27295,7 @@ type ParameterMetadata struct { KeyId *string `min:"1" type:"string"` // Date the parameter was last changed or updated. - LastModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + LastModifiedDate *time.Time `type:"timestamp"` // Amazon Resource Name (ARN) of the AWS user who last changed the parameter. LastModifiedUser *string `type:"string"` @@ -23541,6 +27370,8 @@ func (s *ParameterMetadata) SetVersion(v int64) *ParameterMetadata { } // One or more filters. Use a filter to return a more specific list of results. +// +// The Name field can't be used with the GetParametersByPath API action. type ParameterStringFilter struct { _ struct{} `type:"structure"` @@ -23699,7 +27530,7 @@ type Patch struct { ProductFamily *string `type:"string"` // The date the patch was released. 
- ReleaseDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ReleaseDate *time.Time `type:"timestamp"` // The title of the patch. Title *string `type:"string"` @@ -23873,7 +27704,7 @@ type PatchComplianceData struct { // operating systems provide this level of information. // // InstalledTime is a required field - InstalledTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + InstalledTime *time.Time `type:"timestamp" required:"true"` // The operating system-specific ID of the patch. // @@ -23885,8 +27716,10 @@ type PatchComplianceData struct { // Severity is a required field Severity *string `type:"string" required:"true"` - // The state of the patch on the instance (INSTALLED, INSTALLED_OTHER, MISSING, - // NOT_APPLICABLE or FAILED). + // The state of the patch on the instance, such as INSTALLED or FAILED. + // + // For descriptions of each patch state, see About Patch Compliance (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-about.html#sysman-compliance-monitor-patch) + // in the AWS Systems Manager User Guide. // // State is a required field State *string `type:"string" required:"true" enum:"PatchComplianceDataState"` @@ -23986,6 +27819,10 @@ func (s *PatchComplianceData) SetTitle(v string) *PatchComplianceData { // // * WindowsServer2016 // +// * * +// +// Use a wildcard character (*) to target all supported operating system versions. +// // Supported key:CLASSIFICATION // // Supported values: @@ -24037,6 +27874,10 @@ func (s *PatchComplianceData) SetTitle(v string) *PatchComplianceData { // // * Ubuntu16.04 // +// * * +// +// Use a wildcard character (*) to target all supported operating system versions. +// // Supported key:PRIORITY // // Supported values: @@ -24090,6 +27931,54 @@ func (s *PatchComplianceData) SetTitle(v string) *PatchComplianceData { // // * AmazonLinux2017.09 // +// * * +// +// Use a wildcard character (*) to target all supported operating system versions. +// +// Supported key:CLASSIFICATION +// +// Supported values: +// +// * Security +// +// * Bugfix +// +// * Enhancement +// +// * Recommended +// +// * Newpackage +// +// Supported key:SEVERITY +// +// Supported values: +// +// * Critical +// +// * Important +// +// * Medium +// +// * Low +// +// Amazon Linux 2 Operating Systems +// +// The supported keys for Amazon Linux 2 operating systems are PRODUCT, CLASSIFICATION, +// and SEVERITY. See the following lists for valid values for each of these +// keys. +// +// Supported key:PRODUCT +// +// Supported values: +// +// * AmazonLinux2 +// +// * AmazonLinux2.0 +// +// * * +// +// Use a wildcard character (*) to target all supported operating system versions. +// // Supported key:CLASSIFICATION // // Supported values: @@ -24146,6 +28035,10 @@ func (s *PatchComplianceData) SetTitle(v string) *PatchComplianceData { // // * RedhatEnterpriseLinux7.4 // +// * * +// +// Use a wildcard character (*) to target all supported operating system versions. +// // Supported key:CLASSIFICATION // // Supported values: @@ -24172,9 +28065,9 @@ func (s *PatchComplianceData) SetTitle(v string) *PatchComplianceData { // // * Low // -// SUSE Linux Enterprise Server (SUSE) Operating Systems +// SUSE Linux Enterprise Server (SLES) Operating Systems // -// The supported keys for SUSE operating systems are PRODUCT, CLASSIFICATION, +// The supported keys for SLES operating systems are PRODUCT, CLASSIFICATION, // and SEVERITY. See the following lists for valid values for each of these // keys. 
// @@ -24202,6 +28095,10 @@ func (s *PatchComplianceData) SetTitle(v string) *PatchComplianceData { // // * Suse12.9 // +// * * +// +// Use a wildcard character (*) to target all supported operating system versions. +// // Supported key:CLASSIFICATION // // Supported values: @@ -24229,6 +28126,66 @@ func (s *PatchComplianceData) SetTitle(v string) *PatchComplianceData { // * Moderate // // * Low +// +// CentOS Operating Systems +// +// The supported keys for CentOS operating systems are PRODUCT, CLASSIFICATION, +// and SEVERITY. See the following lists for valid values for each of these +// keys. +// +// Supported key:PRODUCT +// +// Supported values: +// +// * CentOS6.5 +// +// * CentOS6.6 +// +// * CentOS6.7 +// +// * CentOS6.8 +// +// * CentOS6.9 +// +// * CentOS7.0 +// +// * CentOS7.1 +// +// * CentOS7.2 +// +// * CentOS7.3 +// +// * CentOS7.4 +// +// * * +// +// Use a wildcard character (*) to target all supported operating system versions. +// +// Supported key:CLASSIFICATION +// +// Supported values: +// +// * Security +// +// * Bugfix +// +// * Enhancement +// +// * Recommended +// +// * Newpackage +// +// Supported key:SEVERITY +// +// Supported values: +// +// * Critical +// +// * Important +// +// * Medium +// +// * Low type PatchFilter struct { _ struct{} `type:"structure"` @@ -24423,7 +28380,8 @@ type PatchRule struct { _ struct{} `type:"structure"` // The number of days after the release date of each patch matched by the rule - // the patch is marked as approved in the patch baseline. + // that the patch is marked as approved in the patch baseline. For example, + // a value of 7 means that patches are approved seven days after they are released. // // ApproveAfterDays is a required field ApproveAfterDays *int64 `type:"integer" required:"true"` @@ -24561,7 +28519,7 @@ type PatchSource struct { // // keepcache=0 // - // debualevel=2 + // debuglevel=2 // // Configuration is a required field Configuration *string `min:"1" type:"string" required:"true"` @@ -24637,7 +28595,7 @@ type PatchStatus struct { _ struct{} `type:"structure"` // The date the patch was approved (or will be approved if the status is PENDING_APPROVAL). - ApprovalDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ApprovalDate *time.Time `type:"timestamp"` // The compliance severity level for a patch. ComplianceLevel *string `type:"string" enum:"PatchComplianceLevel"` @@ -24675,6 +28633,72 @@ func (s *PatchStatus) SetDeploymentStatus(v string) *PatchStatus { return s } +// An aggregate of step execution statuses displayed in the AWS Console for +// a multi-Region and multi-account Automation execution. +type ProgressCounters struct { + _ struct{} `type:"structure"` + + // The total number of steps that the system cancelled in all specified AWS + // Regions and accounts for the current Automation execution. + CancelledSteps *int64 `type:"integer"` + + // The total number of steps that failed to execute in all specified AWS Regions + // and accounts for the current Automation execution. + FailedSteps *int64 `type:"integer"` + + // The total number of steps that successfully completed in all specified AWS + // Regions and accounts for the current Automation execution. + SuccessSteps *int64 `type:"integer"` + + // The total number of steps that timed out in all specified AWS Regions and + // accounts for the current Automation execution. + TimedOutSteps *int64 `type:"integer"` + + // The total number of steps executed in all specified AWS Regions and accounts + // for the current Automation execution. 
+ TotalSteps *int64 `type:"integer"` +} + +// String returns the string representation +func (s ProgressCounters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProgressCounters) GoString() string { + return s.String() +} + +// SetCancelledSteps sets the CancelledSteps field's value. +func (s *ProgressCounters) SetCancelledSteps(v int64) *ProgressCounters { + s.CancelledSteps = &v + return s +} + +// SetFailedSteps sets the FailedSteps field's value. +func (s *ProgressCounters) SetFailedSteps(v int64) *ProgressCounters { + s.FailedSteps = &v + return s +} + +// SetSuccessSteps sets the SuccessSteps field's value. +func (s *ProgressCounters) SetSuccessSteps(v int64) *ProgressCounters { + s.SuccessSteps = &v + return s +} + +// SetTimedOutSteps sets the TimedOutSteps field's value. +func (s *ProgressCounters) SetTimedOutSteps(v int64) *ProgressCounters { + s.TimedOutSteps = &v + return s +} + +// SetTotalSteps sets the TotalSteps field's value. +func (s *ProgressCounters) SetTotalSteps(v int64) *ProgressCounters { + s.TotalSteps = &v + return s +} + type PutComplianceItemsInput struct { _ struct{} `type:"structure"` @@ -24892,6 +28916,9 @@ func (s *PutInventoryInput) SetItems(v []*InventoryItem) *PutInventoryInput { type PutInventoryOutput struct { _ struct{} `type:"structure"` + + // Information about the request. + Message *string `type:"string"` } // String returns the string representation @@ -24904,6 +28931,12 @@ func (s PutInventoryOutput) GoString() string { return s.String() } +// SetMessage sets the Message field's value. +func (s *PutInventoryOutput) SetMessage(v string) *PutInventoryOutput { + s.Message = &v + return s +} + type PutParameterInput struct { _ struct{} `type:"structure"` @@ -24912,20 +28945,49 @@ type PutParameterInput struct { // AllowedPattern=^\d+$ AllowedPattern *string `type:"string"` - // Information about the parameter that you want to add to the system. + // Information about the parameter that you want to add to the system. Optional + // but recommended. + // + // Do not enter personally identifiable information in this field. Description *string `type:"string"` - // The KMS Key ID that you want to use to encrypt a parameter when you choose - // the SecureString data type. If you don't specify a key ID, the system uses - // the default key associated with your AWS account. + // The KMS Key ID that you want to use to encrypt a parameter. Either the default + // AWS Key Management Service (AWS KMS) key automatically assigned to your AWS + // account or a custom key. Required for parameters that use the SecureString + // data type. + // + // If you don't specify a key ID, the system uses the default key associated + // with your AWS account. + // + // * To use your default AWS KMS key, choose the SecureString data type, + // and do not specify the Key ID when you create the parameter. The system + // automatically populates Key ID with your default KMS key. + // + // * To use a custom KMS key, choose the SecureString data type with the + // Key ID parameter. KeyId *string `min:"1" type:"string"` // The fully qualified name of the parameter that you want to add to the system. // The fully qualified name includes the complete hierarchy of the parameter // path and name. 
For example: /Dev/DBServer/MySQL/db-string13 // - // For information about parameter name requirements and restrictions, see About - // Creating Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-su-create.html#sysman-paramstore-su-create-about) + // Naming Constraints: + // + // * Parameter names are case sensitive. + // + // * A parameter name must be unique within an AWS Region + // + // * A parameter name can't be prefixed with "aws" or "ssm" (case-insensitive). + // + // * Parameter names can include only the following symbols and letters: + // a-zA-Z0-9_.-/ + // + // * A parameter name can't include spaces. + // + // * Parameter hierarchies are limited to a maximum depth of fifteen levels. + // + // For additional information about valid values for parameter names, see Requirements + // and Constraints for Parameter Names (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-parameter-name-constraints.html) // in the AWS Systems Manager User Guide. // // The maximum length constraint listed below includes capacity for additional @@ -24940,6 +29002,13 @@ type PutParameterInput struct { // The type of parameter that you want to add to the system. // + // Items in a StringList must be separated by a comma (,). You can't use other + // punctuation or special character to escape items in the list. If you have + // a parameter value that requires a comma, then use the String data type. + // + // SecureString is not currently supported for AWS CloudFormation templates + // or in the China Regions. + // // Type is a required field Type *string `type:"string" required:"true" enum:"ParameterType"` @@ -25232,8 +29301,17 @@ type RegisterTargetWithMaintenanceWindowInput struct { // ResourceType is a required field ResourceType *string `type:"string" required:"true" enum:"MaintenanceWindowResourceType"` - // The targets (either instances or tags). Instances are specified using Key=instanceids,Values=,. - // Tags are specified using Key=,Values=. + // The targets (either instances or tags). + // + // Specify instances using the following format: + // + // Key=InstanceIds,Values=, + // + // Specify tags using either of the following formats: + // + // Key=tag:,Values=, + // + // Key=tag-key,Values=, // // Targets is a required field Targets []*Target `type:"list" required:"true"` @@ -25374,6 +29452,11 @@ type RegisterTaskWithMaintenanceWindowInput struct { // A structure containing information about an Amazon S3 bucket to write instance-level // logs to. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. LoggingInfo *LoggingInfo `type:"structure"` // The maximum number of targets this task can be run for in parallel. @@ -25394,13 +29477,30 @@ type RegisterTaskWithMaintenanceWindowInput struct { // order with tasks that have the same priority scheduled in parallel. Priority *int64 `type:"integer"` - // The role that should be assumed when executing the task. + // The role to assume when running the Maintenance Window task. 
// - // ServiceRoleArn is a required field - ServiceRoleArn *string `type:"string" required:"true"` + // If you do not specify a service role ARN, Systems Manager will use your account's + // service-linked role for Systems Manager by default. If no service-linked + // role for Systems Manager exists in your account, it will be created when + // you run RegisterTaskWithMaintenanceWindow without specifying a service role + // ARN. + // + // For more information, see Service-Linked Role Permissions for Systems Manager + // (http://docs.aws.amazon.com/systems-manager/latest/userguide/using-service-linked-roles.html#slr-permissions) + // and Should I Use a Service-Linked Role or a Custom Service Role to Run Maintenance + // Window Tasks? (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-permissions.html#maintenance-window-tasks-service-role) + // in the AWS Systems Manager User Guide. + ServiceRoleArn *string `type:"string"` - // The targets (either instances or tags). Instances are specified using Key=instanceids,Values=,. - // Tags are specified using Key=,Values=. + // The targets (either instances or Maintenance Window targets). + // + // Specify instances using the following format: + // + // Key=InstanceIds,Values=, + // + // Specify Maintenance Window targets using the following format: + // + // Key=,Values=, // // Targets is a required field Targets []*Target `type:"list" required:"true"` @@ -25415,6 +29515,11 @@ type RegisterTaskWithMaintenanceWindowInput struct { TaskInvocationParameters *MaintenanceWindowTaskInvocationParameters `type:"structure"` // The parameters that should be passed to the task when it is executed. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"` // The type of task being registered. @@ -25422,7 +29527,7 @@ type RegisterTaskWithMaintenanceWindowInput struct { // TaskType is a required field TaskType *string `type:"string" required:"true" enum:"MaintenanceWindowTaskType"` - // The id of the Maintenance Window the task should be added to. + // The ID of the Maintenance Window the task should be added to. // // WindowId is a required field WindowId *string `min:"20" type:"string" required:"true"` @@ -25462,9 +29567,6 @@ func (s *RegisterTaskWithMaintenanceWindowInput) Validate() error { if s.Name != nil && len(*s.Name) < 3 { invalidParams.Add(request.NewErrParamMinLen("Name", 3)) } - if s.ServiceRoleArn == nil { - invalidParams.Add(request.NewErrParamRequired("ServiceRoleArn")) - } if s.Targets == nil { invalidParams.Add(request.NewErrParamRequired("Targets")) } @@ -25597,7 +29699,7 @@ func (s *RegisterTaskWithMaintenanceWindowInput) SetWindowId(v string) *Register type RegisterTaskWithMaintenanceWindowOutput struct { _ struct{} `type:"structure"` - // The id of the task in the Maintenance Window. + // The ID of the task in the Maintenance Window. WindowTaskId *string `min:"36" type:"string"` } @@ -25620,13 +29722,30 @@ func (s *RegisterTaskWithMaintenanceWindowOutput) SetWindowTaskId(v string) *Reg type RemoveTagsFromResourceInput struct { _ struct{} `type:"structure"` - // The resource ID for which you want to remove tags. 
+ // The resource ID for which you want to remove tags. Use the ID of the resource. + // Here are some examples: + // + // ManagedInstance: mi-012345abcde + // + // MaintenanceWindow: mw-012345abcde + // + // PatchBaseline: pb-012345abcde + // + // For the Document and Parameter values, use the name of the resource. + // + // The ManagedInstance type for this API action is only for on-premises managed + // instances. You must specify the the name of the managed instance in the following + // format: mi-ID_number. For example, mi-1a2b3c4d5e6f. // // ResourceId is a required field ResourceId *string `type:"string" required:"true"` // The type of resource of which you want to remove a tag. // + // The ManagedInstance type for this API action is only for on-premises managed + // instances. You must specify the the name of the managed instance in the following + // format: mi-ID_number. For example, mi-1a2b3c4d5e6f. + // // ResourceType is a required field ResourceType *string `type:"string" required:"true" enum:"ResourceTypeForTagging"` @@ -25828,19 +29947,19 @@ type ResourceDataSyncItem struct { LastStatus *string `type:"string" enum:"LastResourceDataSyncStatus"` // The last time the sync operations returned a status of SUCCESSFUL (UTC). - LastSuccessfulSyncTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastSuccessfulSyncTime *time.Time `type:"timestamp"` // The status message details reported by the last sync. LastSyncStatusMessage *string `type:"string"` // The last time the configuration attempted to sync (UTC). - LastSyncTime *time.Time `type:"timestamp" timestampFormat:"unix"` + LastSyncTime *time.Time `type:"timestamp"` // Configuration information for the target Amazon S3 bucket. S3Destination *ResourceDataSyncS3Destination `type:"structure"` // The date and time the configuration was created (UTC). - SyncCreatedTime *time.Time `type:"timestamp" timestampFormat:"unix"` + SyncCreatedTime *time.Time `type:"timestamp"` // The name of the Resource Data Sync. SyncName *string `min:"1" type:"string"` @@ -26039,6 +30158,98 @@ func (s *ResultAttribute) SetTypeName(v string) *ResultAttribute { return s } +type ResumeSessionInput struct { + _ struct{} `type:"structure"` + + // The ID of the disconnected session to resume. + // + // SessionId is a required field + SessionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ResumeSessionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResumeSessionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResumeSessionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResumeSessionInput"} + if s.SessionId == nil { + invalidParams.Add(request.NewErrParamRequired("SessionId")) + } + if s.SessionId != nil && len(*s.SessionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SessionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSessionId sets the SessionId field's value. +func (s *ResumeSessionInput) SetSessionId(v string) *ResumeSessionInput { + s.SessionId = &v + return s +} + +type ResumeSessionOutput struct { + _ struct{} `type:"structure"` + + // The ID of the session. 
+ SessionId *string `min:"1" type:"string"` + + // A URL back to SSM Agent on the instance that the Session Manager client uses + // to send commands and receive output from the instance. Format: wss://ssm-messages.region.amazonaws.com/v1/data-channel/session-id?stream=(input|output). + // + // region represents the Region identifier for an AWS Region supported by AWS + // Systems Manager, such as us-east-2 for the US East (Ohio) Region. For a list + // of supported region values, see the Region column in the AWS Systems Manager + // table of regions and endpoints (http://docs.aws.amazon.com/general/latest/gr/rande.html#ssm_region) + // in the AWS General Reference. + // + // session-id represents the ID of a Session Manager session, such as 1a2b3c4dEXAMPLE. + StreamUrl *string `type:"string"` + + // An encrypted token value containing session and caller information. Used + // to authenticate the connection to the instance. + TokenValue *string `type:"string"` +} + +// String returns the string representation +func (s ResumeSessionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResumeSessionOutput) GoString() string { + return s.String() +} + +// SetSessionId sets the SessionId field's value. +func (s *ResumeSessionOutput) SetSessionId(v string) *ResumeSessionOutput { + s.SessionId = &v + return s +} + +// SetStreamUrl sets the StreamUrl field's value. +func (s *ResumeSessionOutput) SetStreamUrl(v string) *ResumeSessionOutput { + s.StreamUrl = &v + return s +} + +// SetTokenValue sets the TokenValue field's value. +func (s *ResumeSessionOutput) SetTokenValue(v string) *ResumeSessionOutput { + s.TokenValue = &v + return s +} + // An Amazon S3 bucket where you want to store the results of this request. type S3OutputLocation struct { _ struct{} `type:"structure"` @@ -26125,6 +30336,49 @@ func (s *S3OutputUrl) SetOutputUrl(v string) *S3OutputUrl { return s } +// Information about a scheduled execution for a Maintenance Window. +type ScheduledWindowExecution struct { + _ struct{} `type:"structure"` + + // The time, in ISO-8601 Extended format, that the Maintenance Window is scheduled + // to be run. + ExecutionTime *string `type:"string"` + + // The name of the Maintenance Window to be run. + Name *string `min:"3" type:"string"` + + // The ID of the Maintenance Window to be run. + WindowId *string `min:"20" type:"string"` +} + +// String returns the string representation +func (s ScheduledWindowExecution) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScheduledWindowExecution) GoString() string { + return s.String() +} + +// SetExecutionTime sets the ExecutionTime field's value. +func (s *ScheduledWindowExecution) SetExecutionTime(v string) *ScheduledWindowExecution { + s.ExecutionTime = &v + return s +} + +// SetName sets the Name field's value. +func (s *ScheduledWindowExecution) SetName(v string) *ScheduledWindowExecution { + s.Name = &v + return s +} + +// SetWindowId sets the WindowId field's value. +func (s *ScheduledWindowExecution) SetWindowId(v string) *ScheduledWindowExecution { + s.WindowId = &v + return s +} + type SendAutomationSignalInput struct { _ struct{} `type:"structure"` @@ -26212,6 +30466,9 @@ func (s SendAutomationSignalOutput) GoString() string { type SendCommandInput struct { _ struct{} `type:"structure"` + // Enables Systems Manager to send Run Command output to Amazon CloudWatch Logs. 
+ CloudWatchOutputConfig *CloudWatchOutputConfig `type:"structure"` + // User-specified information about the command, such as a brief description // of what the command should do. Comment *string `type:"string"` @@ -26232,24 +30489,40 @@ type SendCommandInput struct { // DocumentName is a required field DocumentName *string `type:"string" required:"true"` + // The SSM document version to use in the request. You can specify $DEFAULT, + // $LATEST, or a specific version number. If you execute commands by using the + // AWS CLI, then you must escape the first two options by using a backslash. + // If you specify a version number, then you don't need to use the backslash. + // For example: + // + // --document-version "\$DEFAULT" + // + // --document-version "\$LATEST" + // + // --document-version "3" + DocumentVersion *string `type:"string"` + // The instance IDs where the command should execute. You can specify a maximum // of 50 IDs. If you prefer not to list individual instance IDs, you can instead // send commands to a fleet of instances using the Targets parameter, which - // accepts EC2 tags. For more information about how to use Targets, see Sending - // Commands to a Fleet (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html). + // accepts EC2 tags. For more information about how to use targets, see Sending + // Commands to a Fleet (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html) + // in the AWS Systems Manager User Guide. InstanceIds []*string `type:"list"` // (Optional) The maximum number of instances that are allowed to execute the // command at the same time. You can specify a number such as 10 or a percentage // such as 10%. The default value is 50. For more information about how to use - // MaxConcurrency, see Using Concurrency Controls (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-velocity.html). + // MaxConcurrency, see Using Concurrency Controls (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html#send-commands-velocity) + // in the AWS Systems Manager User Guide. MaxConcurrency *string `min:"1" type:"string"` // The maximum number of errors allowed without the command failing. When the // command fails one more time beyond the value of MaxErrors, the systems stops // sending the command to additional targets. You can specify a number like // 10 or a percentage like 10%. The default value is 0. For more information - // about how to use MaxErrors, see Using Error Controls (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-maxerrors.html). + // about how to use MaxErrors, see Using Error Controls (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html#send-commands-maxerrors) + // in the AWS Systems Manager User Guide. MaxErrors *string `min:"1" type:"string"` // Configurations for sending notifications. @@ -26275,12 +30548,13 @@ type SendCommandInput struct { // (Optional) An array of search criteria that targets instances using a Key,Value // combination that you specify. Targets is required if you don't provide one - // or more instance IDs in the call. For more information about how to use Targets, - // see Sending Commands to a Fleet (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html). + // or more instance IDs in the call. 
For more information about how to use targets, + // see Sending Commands to a Fleet (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html) + // in the AWS Systems Manager User Guide. Targets []*Target `type:"list"` // If this time is reached and the command has not already started executing, - // it will not execute. + // it will not run. TimeoutSeconds *int64 `min:"30" type:"integer"` } @@ -26315,6 +30589,11 @@ func (s *SendCommandInput) Validate() error { if s.TimeoutSeconds != nil && *s.TimeoutSeconds < 30 { invalidParams.Add(request.NewErrParamMinValue("TimeoutSeconds", 30)) } + if s.CloudWatchOutputConfig != nil { + if err := s.CloudWatchOutputConfig.Validate(); err != nil { + invalidParams.AddNested("CloudWatchOutputConfig", err.(request.ErrInvalidParams)) + } + } if s.Targets != nil { for i, v := range s.Targets { if v == nil { @@ -26326,123 +30605,349 @@ func (s *SendCommandInput) Validate() error { } } - if invalidParams.Len() > 0 { - return invalidParams - } - return nil + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCloudWatchOutputConfig sets the CloudWatchOutputConfig field's value. +func (s *SendCommandInput) SetCloudWatchOutputConfig(v *CloudWatchOutputConfig) *SendCommandInput { + s.CloudWatchOutputConfig = v + return s +} + +// SetComment sets the Comment field's value. +func (s *SendCommandInput) SetComment(v string) *SendCommandInput { + s.Comment = &v + return s +} + +// SetDocumentHash sets the DocumentHash field's value. +func (s *SendCommandInput) SetDocumentHash(v string) *SendCommandInput { + s.DocumentHash = &v + return s +} + +// SetDocumentHashType sets the DocumentHashType field's value. +func (s *SendCommandInput) SetDocumentHashType(v string) *SendCommandInput { + s.DocumentHashType = &v + return s +} + +// SetDocumentName sets the DocumentName field's value. +func (s *SendCommandInput) SetDocumentName(v string) *SendCommandInput { + s.DocumentName = &v + return s +} + +// SetDocumentVersion sets the DocumentVersion field's value. +func (s *SendCommandInput) SetDocumentVersion(v string) *SendCommandInput { + s.DocumentVersion = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *SendCommandInput) SetInstanceIds(v []*string) *SendCommandInput { + s.InstanceIds = v + return s +} + +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *SendCommandInput) SetMaxConcurrency(v string) *SendCommandInput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *SendCommandInput) SetMaxErrors(v string) *SendCommandInput { + s.MaxErrors = &v + return s +} + +// SetNotificationConfig sets the NotificationConfig field's value. +func (s *SendCommandInput) SetNotificationConfig(v *NotificationConfig) *SendCommandInput { + s.NotificationConfig = v + return s +} + +// SetOutputS3BucketName sets the OutputS3BucketName field's value. +func (s *SendCommandInput) SetOutputS3BucketName(v string) *SendCommandInput { + s.OutputS3BucketName = &v + return s +} + +// SetOutputS3KeyPrefix sets the OutputS3KeyPrefix field's value. +func (s *SendCommandInput) SetOutputS3KeyPrefix(v string) *SendCommandInput { + s.OutputS3KeyPrefix = &v + return s +} + +// SetOutputS3Region sets the OutputS3Region field's value. +func (s *SendCommandInput) SetOutputS3Region(v string) *SendCommandInput { + s.OutputS3Region = &v + return s +} + +// SetParameters sets the Parameters field's value. 
+func (s *SendCommandInput) SetParameters(v map[string][]*string) *SendCommandInput { + s.Parameters = v + return s +} + +// SetServiceRoleArn sets the ServiceRoleArn field's value. +func (s *SendCommandInput) SetServiceRoleArn(v string) *SendCommandInput { + s.ServiceRoleArn = &v + return s +} + +// SetTargets sets the Targets field's value. +func (s *SendCommandInput) SetTargets(v []*Target) *SendCommandInput { + s.Targets = v + return s +} + +// SetTimeoutSeconds sets the TimeoutSeconds field's value. +func (s *SendCommandInput) SetTimeoutSeconds(v int64) *SendCommandInput { + s.TimeoutSeconds = &v + return s +} + +type SendCommandOutput struct { + _ struct{} `type:"structure"` + + // The request as it was received by Systems Manager. Also provides the command + // ID which can be used future references to this request. + Command *Command `type:"structure"` +} + +// String returns the string representation +func (s SendCommandOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SendCommandOutput) GoString() string { + return s.String() +} + +// SetCommand sets the Command field's value. +func (s *SendCommandOutput) SetCommand(v *Command) *SendCommandOutput { + s.Command = v + return s +} + +// Information about a Session Manager connection to an instance. +type Session struct { + _ struct{} `type:"structure"` + + // Reserved for future use. + Details *string `min:"1" type:"string"` + + // The name of the Session Manager SSM document used to define the parameters + // and plugin settings for the session. For example, SSM-SessionManagerRunShell. + DocumentName *string `type:"string"` + + // The date and time, in ISO-8601 Extended format, when the session was terminated. + EndDate *time.Time `type:"timestamp"` + + // Reserved for future use. + OutputUrl *SessionManagerOutputUrl `type:"structure"` + + // The ID of the AWS user account that started the session. + Owner *string `min:"1" type:"string"` + + // The ID of the session. + SessionId *string `min:"1" type:"string"` + + // The date and time, in ISO-8601 Extended format, when the session began. + StartDate *time.Time `type:"timestamp"` + + // The status of the session. For example, "Connected" or "Terminated". + Status *string `type:"string" enum:"SessionStatus"` + + // The instance that the Session Manager session connected to. + Target *string `min:"1" type:"string"` } -// SetComment sets the Comment field's value. -func (s *SendCommandInput) SetComment(v string) *SendCommandInput { - s.Comment = &v - return s +// String returns the string representation +func (s Session) String() string { + return awsutil.Prettify(s) } -// SetDocumentHash sets the DocumentHash field's value. -func (s *SendCommandInput) SetDocumentHash(v string) *SendCommandInput { - s.DocumentHash = &v - return s +// GoString returns the string representation +func (s Session) GoString() string { + return s.String() } -// SetDocumentHashType sets the DocumentHashType field's value. -func (s *SendCommandInput) SetDocumentHashType(v string) *SendCommandInput { - s.DocumentHashType = &v +// SetDetails sets the Details field's value. +func (s *Session) SetDetails(v string) *Session { + s.Details = &v return s } // SetDocumentName sets the DocumentName field's value. -func (s *SendCommandInput) SetDocumentName(v string) *SendCommandInput { +func (s *Session) SetDocumentName(v string) *Session { s.DocumentName = &v return s } -// SetInstanceIds sets the InstanceIds field's value. 
-func (s *SendCommandInput) SetInstanceIds(v []*string) *SendCommandInput { - s.InstanceIds = v +// SetEndDate sets the EndDate field's value. +func (s *Session) SetEndDate(v time.Time) *Session { + s.EndDate = &v return s } -// SetMaxConcurrency sets the MaxConcurrency field's value. -func (s *SendCommandInput) SetMaxConcurrency(v string) *SendCommandInput { - s.MaxConcurrency = &v +// SetOutputUrl sets the OutputUrl field's value. +func (s *Session) SetOutputUrl(v *SessionManagerOutputUrl) *Session { + s.OutputUrl = v return s } -// SetMaxErrors sets the MaxErrors field's value. -func (s *SendCommandInput) SetMaxErrors(v string) *SendCommandInput { - s.MaxErrors = &v +// SetOwner sets the Owner field's value. +func (s *Session) SetOwner(v string) *Session { + s.Owner = &v return s } -// SetNotificationConfig sets the NotificationConfig field's value. -func (s *SendCommandInput) SetNotificationConfig(v *NotificationConfig) *SendCommandInput { - s.NotificationConfig = v +// SetSessionId sets the SessionId field's value. +func (s *Session) SetSessionId(v string) *Session { + s.SessionId = &v return s } -// SetOutputS3BucketName sets the OutputS3BucketName field's value. -func (s *SendCommandInput) SetOutputS3BucketName(v string) *SendCommandInput { - s.OutputS3BucketName = &v +// SetStartDate sets the StartDate field's value. +func (s *Session) SetStartDate(v time.Time) *Session { + s.StartDate = &v return s } -// SetOutputS3KeyPrefix sets the OutputS3KeyPrefix field's value. -func (s *SendCommandInput) SetOutputS3KeyPrefix(v string) *SendCommandInput { - s.OutputS3KeyPrefix = &v +// SetStatus sets the Status field's value. +func (s *Session) SetStatus(v string) *Session { + s.Status = &v return s } -// SetOutputS3Region sets the OutputS3Region field's value. -func (s *SendCommandInput) SetOutputS3Region(v string) *SendCommandInput { - s.OutputS3Region = &v +// SetTarget sets the Target field's value. +func (s *Session) SetTarget(v string) *Session { + s.Target = &v return s } -// SetParameters sets the Parameters field's value. -func (s *SendCommandInput) SetParameters(v map[string][]*string) *SendCommandInput { - s.Parameters = v - return s +// Describes a filter for Session Manager information. +type SessionFilter struct { + _ struct{} `type:"structure"` + + // The name of the filter. + // + // Key is a required field + Key *string `locationName:"key" type:"string" required:"true" enum:"SessionFilterKey"` + + // The filter value. Valid values for each filter key are as follows: + // + // * InvokedAfter: Specify a timestamp to limit your results. For example, + // specify 2018-08-29T00:00:00Z to see sessions that started August 29, 2018, + // and later. + // + // * InvokedBefore: Specify a timestamp to limit your results. For example, + // specify 2018-08-29T00:00:00Z to see sessions that started before August + // 29, 2018. + // + // * Target: Specify an instance to which session connections have been made. + // + // * Owner: Specify an AWS user account to see a list of sessions started + // by that user. + // + // * Status: Specify a valid session status to see a list of all sessions + // with that status. Status values you can specify include: + // + // Connected + // + // Connecting + // + // Disconnected + // + // Terminated + // + // Terminating + // + // Failed + // + // Value is a required field + Value *string `locationName:"value" min:"1" type:"string" required:"true"` } -// SetServiceRoleArn sets the ServiceRoleArn field's value. 
-func (s *SendCommandInput) SetServiceRoleArn(v string) *SendCommandInput { - s.ServiceRoleArn = &v - return s +// String returns the string representation +func (s SessionFilter) String() string { + return awsutil.Prettify(s) } -// SetTargets sets the Targets field's value. -func (s *SendCommandInput) SetTargets(v []*Target) *SendCommandInput { - s.Targets = v +// GoString returns the string representation +func (s SessionFilter) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SessionFilter) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SessionFilter"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + if s.Value != nil && len(*s.Value) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Value", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *SessionFilter) SetKey(v string) *SessionFilter { + s.Key = &v return s } -// SetTimeoutSeconds sets the TimeoutSeconds field's value. -func (s *SendCommandInput) SetTimeoutSeconds(v int64) *SendCommandInput { - s.TimeoutSeconds = &v +// SetValue sets the Value field's value. +func (s *SessionFilter) SetValue(v string) *SessionFilter { + s.Value = &v return s } -type SendCommandOutput struct { +// Reserved for future use. +type SessionManagerOutputUrl struct { _ struct{} `type:"structure"` - // The request as it was received by Systems Manager. Also provides the command - // ID which can be used future references to this request. - Command *Command `type:"structure"` + // Reserved for future use. + CloudWatchOutputUrl *string `min:"1" type:"string"` + + // Reserved for future use. + S3OutputUrl *string `min:"1" type:"string"` } // String returns the string representation -func (s SendCommandOutput) String() string { +func (s SessionManagerOutputUrl) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s SendCommandOutput) GoString() string { +func (s SessionManagerOutputUrl) GoString() string { return s.String() } -// SetCommand sets the Command field's value. -func (s *SendCommandOutput) SetCommand(v *Command) *SendCommandOutput { - s.Command = v +// SetCloudWatchOutputUrl sets the CloudWatchOutputUrl field's value. +func (s *SessionManagerOutputUrl) SetCloudWatchOutputUrl(v string) *SessionManagerOutputUrl { + s.CloudWatchOutputUrl = &v + return s +} + +// SetS3OutputUrl sets the S3OutputUrl field's value. +func (s *SessionManagerOutputUrl) SetS3OutputUrl(v string) *SessionManagerOutputUrl { + s.S3OutputUrl = &v return s } @@ -26528,6 +31033,61 @@ func (s *SeveritySummary) SetUnspecifiedCount(v int64) *SeveritySummary { return s } +type StartAssociationsOnceInput struct { + _ struct{} `type:"structure"` + + // The association IDs that you want to execute immediately and only one time. + // + // AssociationIds is a required field + AssociationIds []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s StartAssociationsOnceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartAssociationsOnceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *StartAssociationsOnceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartAssociationsOnceInput"} + if s.AssociationIds == nil { + invalidParams.Add(request.NewErrParamRequired("AssociationIds")) + } + if s.AssociationIds != nil && len(s.AssociationIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AssociationIds", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociationIds sets the AssociationIds field's value. +func (s *StartAssociationsOnceInput) SetAssociationIds(v []*string) *StartAssociationsOnceInput { + s.AssociationIds = v + return s +} + +type StartAssociationsOnceOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s StartAssociationsOnceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartAssociationsOnceOutput) GoString() string { + return s.String() +} + type StartAutomationExecutionInput struct { _ struct{} `type:"structure"` @@ -26572,8 +31132,19 @@ type StartAutomationExecutionInput struct { // in the Automation document. Parameters map[string][]*string `min:"1" type:"map"` + // A location is a combination of AWS Regions and/or AWS accounts where you + // want to execute the Automation. Use this action to start an Automation in + // multiple Regions and multiple accounts. For more information, see Concurrently + // Executing Automations in Multiple AWS Regions and Accounts (http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation-multiple-accounts-and-regions.html) + // in the AWS Systems Manager User Guide. + TargetLocations []*TargetLocation `min:"1" type:"list"` + + // A key-value mapping of document parameters to target resources. Both Targets + // and TargetMaps cannot be specified together. + TargetMaps []map[string][]*string `type:"list"` + // The name of the parameter used as the target resource for the rate-controlled - // execution. Required if you specify Targets. + // execution. Required if you specify targets. TargetParameterName *string `min:"1" type:"string"` // A key-value mapping to target resources. Required if you specify TargetParameterName. @@ -26608,9 +31179,22 @@ func (s *StartAutomationExecutionInput) Validate() error { if s.Parameters != nil && len(s.Parameters) < 1 { invalidParams.Add(request.NewErrParamMinLen("Parameters", 1)) } + if s.TargetLocations != nil && len(s.TargetLocations) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetLocations", 1)) + } if s.TargetParameterName != nil && len(*s.TargetParameterName) < 1 { invalidParams.Add(request.NewErrParamMinLen("TargetParameterName", 1)) } + if s.TargetLocations != nil { + for i, v := range s.TargetLocations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TargetLocations", i), err.(request.ErrInvalidParams)) + } + } + } if s.Targets != nil { for i, v := range s.Targets { if v == nil { @@ -26670,6 +31254,18 @@ func (s *StartAutomationExecutionInput) SetParameters(v map[string][]*string) *S return s } +// SetTargetLocations sets the TargetLocations field's value. +func (s *StartAutomationExecutionInput) SetTargetLocations(v []*TargetLocation) *StartAutomationExecutionInput { + s.TargetLocations = v + return s +} + +// SetTargetMaps sets the TargetMaps field's value. 
+func (s *StartAutomationExecutionInput) SetTargetMaps(v []map[string][]*string) *StartAutomationExecutionInput { + s.TargetMaps = v + return s +} + // SetTargetParameterName sets the TargetParameterName field's value. func (s *StartAutomationExecutionInput) SetTargetParameterName(v string) *StartAutomationExecutionInput { s.TargetParameterName = &v @@ -26705,6 +31301,118 @@ func (s *StartAutomationExecutionOutput) SetAutomationExecutionId(v string) *Sta return s } +type StartSessionInput struct { + _ struct{} `type:"structure"` + + // The name of the SSM document to define the parameters and plugin settings + // for the session. For example, SSM-SessionManagerRunShell. If no document + // name is provided, a shell to the instance is launched by default. + DocumentName *string `type:"string"` + + // Reserved for future use. + Parameters map[string][]*string `type:"map"` + + // The instance to connect to for the session. + // + // Target is a required field + Target *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s StartSessionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartSessionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *StartSessionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartSessionInput"} + if s.Target == nil { + invalidParams.Add(request.NewErrParamRequired("Target")) + } + if s.Target != nil && len(*s.Target) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Target", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDocumentName sets the DocumentName field's value. +func (s *StartSessionInput) SetDocumentName(v string) *StartSessionInput { + s.DocumentName = &v + return s +} + +// SetParameters sets the Parameters field's value. +func (s *StartSessionInput) SetParameters(v map[string][]*string) *StartSessionInput { + s.Parameters = v + return s +} + +// SetTarget sets the Target field's value. +func (s *StartSessionInput) SetTarget(v string) *StartSessionInput { + s.Target = &v + return s +} + +type StartSessionOutput struct { + _ struct{} `type:"structure"` + + // The ID of the session. + SessionId *string `min:"1" type:"string"` + + // A URL back to SSM Agent on the instance that the Session Manager client uses + // to send commands and receive output from the instance. Format: wss://ssm-messages.region.amazonaws.com/v1/data-channel/session-id?stream=(input|output) + // + // region represents the Region identifier for an AWS Region supported by AWS + // Systems Manager, such as us-east-2 for the US East (Ohio) Region. For a list + // of supported region values, see the Region column in the AWS Systems Manager + // table of regions and endpoints (http://docs.aws.amazon.com/general/latest/gr/rande.html#ssm_region) + // in the AWS General Reference. + // + // session-id represents the ID of a Session Manager session, such as 1a2b3c4dEXAMPLE. + StreamUrl *string `type:"string"` + + // An encrypted token value containing session and caller information. Used + // to authenticate the connection to the instance. 
+ TokenValue *string `type:"string"` +} + +// String returns the string representation +func (s StartSessionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartSessionOutput) GoString() string { + return s.String() +} + +// SetSessionId sets the SessionId field's value. +func (s *StartSessionOutput) SetSessionId(v string) *StartSessionOutput { + s.SessionId = &v + return s +} + +// SetStreamUrl sets the StreamUrl field's value. +func (s *StartSessionOutput) SetStreamUrl(v string) *StartSessionOutput { + s.StreamUrl = &v + return s +} + +// SetTokenValue sets the TokenValue field's value. +func (s *StartSessionOutput) SetTokenValue(v string) *StartSessionOutput { + s.TokenValue = &v + return s +} + // Detailed information about an the execution state of an Automation step. type StepExecution struct { _ struct{} `type:"structure"` @@ -26715,11 +31423,11 @@ type StepExecution struct { // If a step has finished execution, this contains the time the execution ended. // If the step has not yet concluded, this field is not populated. - ExecutionEndTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ExecutionEndTime *time.Time `type:"timestamp"` // If a step has begun execution, this contains the time the step started. If // the step is in Pending status, this field is not populated. - ExecutionStartTime *time.Time `type:"timestamp" timestampFormat:"unix"` + ExecutionStartTime *time.Time `type:"timestamp"` // Information about the Automation failure. FailureDetails *FailureDetails `type:"structure"` @@ -26730,10 +31438,21 @@ type StepExecution struct { // Fully-resolved values passed into the step before execution. Inputs map[string]*string `type:"map"` + // The flag which can be used to help decide whether the failure of current + // step leads to the Automation failure. + IsCritical *bool `type:"boolean"` + + // The flag which can be used to end automation no matter whether the step succeeds + // or fails. + IsEnd *bool `type:"boolean"` + // The maximum number of tries to run the action of the step. The default value // is 1. MaxAttempts *int64 `type:"integer"` + // The next step after the step succeeds. + NextStep *string `type:"string"` + // The action to take if the step fails. The default value is Abort. OnFailure *string `type:"string"` @@ -26759,8 +31478,22 @@ type StepExecution struct { // Success, Cancelled, Failed, and TimedOut. StepStatus *string `type:"string" enum:"AutomationExecutionStatus"` + // The combination of AWS Regions and accounts targeted by the current Automation + // execution. + TargetLocation *TargetLocation `type:"structure"` + + // The targets for the step execution. + Targets []*Target `type:"list"` + // The timeout seconds of the step. TimeoutSeconds *int64 `type:"long"` + + // Strategies used when step fails, we support Continue and Abort. Abort will + // fail the automation when the step fails. Continue will ignore the failure + // of current step and allow automation to execute the next step. With conditional + // branching, we add step:stepName to support the automation to go to another + // specific step. + ValidNextSteps []*string `type:"list"` } // String returns the string representation @@ -26809,12 +31542,30 @@ func (s *StepExecution) SetInputs(v map[string]*string) *StepExecution { return s } +// SetIsCritical sets the IsCritical field's value. 
+func (s *StepExecution) SetIsCritical(v bool) *StepExecution { + s.IsCritical = &v + return s +} + +// SetIsEnd sets the IsEnd field's value. +func (s *StepExecution) SetIsEnd(v bool) *StepExecution { + s.IsEnd = &v + return s +} + // SetMaxAttempts sets the MaxAttempts field's value. func (s *StepExecution) SetMaxAttempts(v int64) *StepExecution { s.MaxAttempts = &v return s } +// SetNextStep sets the NextStep field's value. +func (s *StepExecution) SetNextStep(v string) *StepExecution { + s.NextStep = &v + return s +} + // SetOnFailure sets the OnFailure field's value. func (s *StepExecution) SetOnFailure(v string) *StepExecution { s.OnFailure = &v @@ -26863,12 +31614,30 @@ func (s *StepExecution) SetStepStatus(v string) *StepExecution { return s } +// SetTargetLocation sets the TargetLocation field's value. +func (s *StepExecution) SetTargetLocation(v *TargetLocation) *StepExecution { + s.TargetLocation = v + return s +} + +// SetTargets sets the Targets field's value. +func (s *StepExecution) SetTargets(v []*Target) *StepExecution { + s.Targets = v + return s +} + // SetTimeoutSeconds sets the TimeoutSeconds field's value. func (s *StepExecution) SetTimeoutSeconds(v int64) *StepExecution { s.TimeoutSeconds = &v return s } +// SetValidNextSteps sets the ValidNextSteps field's value. +func (s *StepExecution) SetValidNextSteps(v []*string) *StepExecution { + s.ValidNextSteps = v + return s +} + // A filter to limit the amount of step execution information returned by the // call. type StepExecutionFilter struct { @@ -27064,14 +31833,16 @@ type Target struct { // User-defined criteria for sending commands that target instances that meet // the criteria. Key can be tag: or InstanceIds. For more information // about how to send commands that target instances using Key,Value parameters, - // see Executing a Command Using Systems Manager Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html). + // see Targeting Multiple Instances (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html#send-commands-targeting) + // in the AWS Systems Manager User Guide. Key *string `min:"1" type:"string"` // User-defined criteria that maps to Key. For example, if you specified tag:ServerRole, // you could specify value:WebServer to execute a command on instances that // include Amazon EC2 tags of ServerRole,WebServer. For more information about // how to send commands that target instances using Key,Value parameters, see - // Executing a Command Using Systems Manager Run Command (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html). + // Sending Commands to a Fleet (http://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html) + // in the AWS Systems Manager User Guide. Values []*string `type:"list"` } @@ -27110,6 +31881,158 @@ func (s *Target) SetValues(v []*string) *Target { return s } +// The combination of AWS Regions and accounts targeted by the current Automation +// execution. +type TargetLocation struct { + _ struct{} `type:"structure"` + + // The AWS accounts targeted by the current Automation execution. + Accounts []*string `min:"1" type:"list"` + + // The Automation execution role used by the currently executing Automation. + ExecutionRoleName *string `min:"1" type:"string"` + + // The AWS Regions targeted by the current Automation execution. 
+ Regions []*string `min:"1" type:"list"` + + // The maxium number of AWS accounts and AWS regions allowed to run the Automation + // concurrently + TargetLocationMaxConcurrency *string `min:"1" type:"string"` + + // The maxium number of errors allowed before the system stops queueing additional + // Automation executions for the currently executing Automation. + TargetLocationMaxErrors *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s TargetLocation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TargetLocation) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TargetLocation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TargetLocation"} + if s.Accounts != nil && len(s.Accounts) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Accounts", 1)) + } + if s.ExecutionRoleName != nil && len(*s.ExecutionRoleName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExecutionRoleName", 1)) + } + if s.Regions != nil && len(s.Regions) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Regions", 1)) + } + if s.TargetLocationMaxConcurrency != nil && len(*s.TargetLocationMaxConcurrency) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetLocationMaxConcurrency", 1)) + } + if s.TargetLocationMaxErrors != nil && len(*s.TargetLocationMaxErrors) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetLocationMaxErrors", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccounts sets the Accounts field's value. +func (s *TargetLocation) SetAccounts(v []*string) *TargetLocation { + s.Accounts = v + return s +} + +// SetExecutionRoleName sets the ExecutionRoleName field's value. +func (s *TargetLocation) SetExecutionRoleName(v string) *TargetLocation { + s.ExecutionRoleName = &v + return s +} + +// SetRegions sets the Regions field's value. +func (s *TargetLocation) SetRegions(v []*string) *TargetLocation { + s.Regions = v + return s +} + +// SetTargetLocationMaxConcurrency sets the TargetLocationMaxConcurrency field's value. +func (s *TargetLocation) SetTargetLocationMaxConcurrency(v string) *TargetLocation { + s.TargetLocationMaxConcurrency = &v + return s +} + +// SetTargetLocationMaxErrors sets the TargetLocationMaxErrors field's value. +func (s *TargetLocation) SetTargetLocationMaxErrors(v string) *TargetLocation { + s.TargetLocationMaxErrors = &v + return s +} + +type TerminateSessionInput struct { + _ struct{} `type:"structure"` + + // The ID of the session to terminate. + // + // SessionId is a required field + SessionId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s TerminateSessionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TerminateSessionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *TerminateSessionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TerminateSessionInput"} + if s.SessionId == nil { + invalidParams.Add(request.NewErrParamRequired("SessionId")) + } + if s.SessionId != nil && len(*s.SessionId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SessionId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSessionId sets the SessionId field's value. +func (s *TerminateSessionInput) SetSessionId(v string) *TerminateSessionInput { + s.SessionId = &v + return s +} + +type TerminateSessionOutput struct { + _ struct{} `type:"structure"` + + // The ID of the session that has been terminated. + SessionId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s TerminateSessionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TerminateSessionOutput) GoString() string { + return s.String() +} + +// SetSessionId sets the SessionId field's value. +func (s *TerminateSessionOutput) SetSessionId(v string) *TerminateSessionOutput { + s.SessionId = &v + return s +} + type UpdateAssociationInput struct { _ struct{} `type:"structure"` @@ -27126,9 +32049,38 @@ type UpdateAssociationInput struct { // this request succeeds, either specify $LATEST, or omit this parameter. AssociationVersion *string `type:"string"` + // The severity level to assign to the association. + ComplianceSeverity *string `type:"string" enum:"AssociationComplianceSeverity"` + // The document version you want update for the association. DocumentVersion *string `type:"string"` + // The maximum number of targets allowed to run the association at the same + // time. You can specify a number, for example 10, or a percentage of the target + // set, for example 10%. The default value is 100%, which means all targets + // run the association at the same time. + // + // If a new instance starts and attempts to execute an association while Systems + // Manager is executing MaxConcurrency associations, the association is allowed + // to run. During the next association interval, the new instance will process + // its association within the limit specified for MaxConcurrency. + MaxConcurrency *string `min:"1" type:"string"` + + // The number of errors that are allowed before the system stops sending requests + // to run the association on additional targets. You can specify either an absolute + // number of errors, for example 10, or a percentage of the target set, for + // example 10%. If you specify 3, for example, the system stops sending requests + // when the fourth error is received. If you specify 0, then the system stops + // sending requests after the first error is returned. If you run an association + // on 50 instances and set MaxError to 10%, then the system stops sending the + // request when the sixth error is received. + // + // Executions that are already running an association when MaxErrors is reached + // are allowed to complete, but some of these executions may fail as well. If + // you need to ensure that there won't be more than max-errors failed executions, + // set MaxConcurrency to 1 so that executions proceed one at a time. + MaxErrors *string `min:"1" type:"string"` + // The name of the association document. 
Name *string `type:"string"` @@ -27162,6 +32114,12 @@ func (s *UpdateAssociationInput) Validate() error { if s.AssociationId == nil { invalidParams.Add(request.NewErrParamRequired("AssociationId")) } + if s.MaxConcurrency != nil && len(*s.MaxConcurrency) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxConcurrency", 1)) + } + if s.MaxErrors != nil && len(*s.MaxErrors) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MaxErrors", 1)) + } if s.ScheduleExpression != nil && len(*s.ScheduleExpression) < 1 { invalidParams.Add(request.NewErrParamMinLen("ScheduleExpression", 1)) } @@ -27205,12 +32163,30 @@ func (s *UpdateAssociationInput) SetAssociationVersion(v string) *UpdateAssociat return s } +// SetComplianceSeverity sets the ComplianceSeverity field's value. +func (s *UpdateAssociationInput) SetComplianceSeverity(v string) *UpdateAssociationInput { + s.ComplianceSeverity = &v + return s +} + // SetDocumentVersion sets the DocumentVersion field's value. func (s *UpdateAssociationInput) SetDocumentVersion(v string) *UpdateAssociationInput { s.DocumentVersion = &v return s } +// SetMaxConcurrency sets the MaxConcurrency field's value. +func (s *UpdateAssociationInput) SetMaxConcurrency(v string) *UpdateAssociationInput { + s.MaxConcurrency = &v + return s +} + +// SetMaxErrors sets the MaxErrors field's value. +func (s *UpdateAssociationInput) SetMaxErrors(v string) *UpdateAssociationInput { + s.MaxErrors = &v + return s +} + // SetName sets the Name field's value. func (s *UpdateAssociationInput) SetName(v string) *UpdateAssociationInput { s.Name = &v @@ -27560,6 +32536,11 @@ type UpdateMaintenanceWindowInput struct { // Whether the Maintenance Window is enabled. Enabled *bool `type:"boolean"` + // The date and time, in ISO-8601 Extended format, for when you want the Maintenance + // Window to become inactive. EndDate allows you to set a date and time in the + // future when the Maintenance Window will no longer run. + EndDate *string `type:"string"` + // The name of the Maintenance Window. Name *string `min:"3" type:"string"` @@ -27571,6 +32552,18 @@ type UpdateMaintenanceWindowInput struct { // The schedule of the Maintenance Window in the form of a cron or rate expression. Schedule *string `min:"1" type:"string"` + // The time zone that the scheduled Maintenance Window executions are based + // on, in Internet Assigned Numbers Authority (IANA) format. For example: "America/Los_Angeles", + // "etc/UTC", or "Asia/Seoul". For more information, see the Time Zone Database + // (https://www.iana.org/time-zones) on the IANA website. + ScheduleTimezone *string `type:"string"` + + // The time zone that the scheduled Maintenance Window executions are based + // on, in Internet Assigned Numbers Authority (IANA) format. For example: "America/Los_Angeles", + // "etc/UTC", or "Asia/Seoul". For more information, see the Time Zone Database + // (https://www.iana.org/time-zones) on the IANA website. + StartDate *string `type:"string"` + // The ID of the Maintenance Window to update. // // WindowId is a required field @@ -27645,6 +32638,12 @@ func (s *UpdateMaintenanceWindowInput) SetEnabled(v bool) *UpdateMaintenanceWind return s } +// SetEndDate sets the EndDate field's value. +func (s *UpdateMaintenanceWindowInput) SetEndDate(v string) *UpdateMaintenanceWindowInput { + s.EndDate = &v + return s +} + // SetName sets the Name field's value. 
func (s *UpdateMaintenanceWindowInput) SetName(v string) *UpdateMaintenanceWindowInput { s.Name = &v @@ -27663,6 +32662,18 @@ func (s *UpdateMaintenanceWindowInput) SetSchedule(v string) *UpdateMaintenanceW return s } +// SetScheduleTimezone sets the ScheduleTimezone field's value. +func (s *UpdateMaintenanceWindowInput) SetScheduleTimezone(v string) *UpdateMaintenanceWindowInput { + s.ScheduleTimezone = &v + return s +} + +// SetStartDate sets the StartDate field's value. +func (s *UpdateMaintenanceWindowInput) SetStartDate(v string) *UpdateMaintenanceWindowInput { + s.StartDate = &v + return s +} + // SetWindowId sets the WindowId field's value. func (s *UpdateMaintenanceWindowInput) SetWindowId(v string) *UpdateMaintenanceWindowInput { s.WindowId = &v @@ -27689,12 +32700,28 @@ type UpdateMaintenanceWindowOutput struct { // Whether the Maintenance Window is enabled. Enabled *bool `type:"boolean"` + // The date and time, in ISO-8601 Extended format, for when the Maintenance + // Window is scheduled to become inactive. The Maintenance Window will not run + // after this specified time. + EndDate *string `type:"string"` + // The name of the Maintenance Window. Name *string `min:"3" type:"string"` // The schedule of the Maintenance Window in the form of a cron or rate expression. Schedule *string `min:"1" type:"string"` + // The time zone that the scheduled Maintenance Window executions are based + // on, in Internet Assigned Numbers Authority (IANA) format. For example: "America/Los_Angeles", + // "etc/UTC", or "Asia/Seoul". For more information, see the Time Zone Database + // (https://www.iana.org/time-zones) on the IANA website. + ScheduleTimezone *string `type:"string"` + + // The date and time, in ISO-8601 Extended format, for when the Maintenance + // Window is scheduled to become active. The Maintenance Window will not run + // before this specified time. + StartDate *string `type:"string"` + // The ID of the created Maintenance Window. WindowId *string `min:"20" type:"string"` } @@ -27739,6 +32766,12 @@ func (s *UpdateMaintenanceWindowOutput) SetEnabled(v bool) *UpdateMaintenanceWin return s } +// SetEndDate sets the EndDate field's value. +func (s *UpdateMaintenanceWindowOutput) SetEndDate(v string) *UpdateMaintenanceWindowOutput { + s.EndDate = &v + return s +} + // SetName sets the Name field's value. func (s *UpdateMaintenanceWindowOutput) SetName(v string) *UpdateMaintenanceWindowOutput { s.Name = &v @@ -27751,6 +32784,18 @@ func (s *UpdateMaintenanceWindowOutput) SetSchedule(v string) *UpdateMaintenance return s } +// SetScheduleTimezone sets the ScheduleTimezone field's value. +func (s *UpdateMaintenanceWindowOutput) SetScheduleTimezone(v string) *UpdateMaintenanceWindowOutput { + s.ScheduleTimezone = &v + return s +} + +// SetStartDate sets the StartDate field's value. +func (s *UpdateMaintenanceWindowOutput) SetStartDate(v string) *UpdateMaintenanceWindowOutput { + s.StartDate = &v + return s +} + // SetWindowId sets the WindowId field's value. func (s *UpdateMaintenanceWindowOutput) SetWindowId(v string) *UpdateMaintenanceWindowOutput { s.WindowId = &v @@ -27957,6 +33002,11 @@ type UpdateMaintenanceWindowTaskInput struct { Description *string `min:"1" type:"string"` // The new logging location in Amazon S3 to specify. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. 
For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. LoggingInfo *LoggingInfo `type:"structure"` // The new MaxConcurrency value you want to specify. MaxConcurrency is the number @@ -27981,6 +33031,18 @@ type UpdateMaintenanceWindowTaskInput struct { // The IAM service role ARN to modify. The system assumes this role during task // execution. + // + // If you do not specify a service role ARN, Systems Manager will use your account's + // service-linked role for Systems Manager by default. If no service-linked + // role for Systems Manager exists in your account, it will be created when + // you run RegisterTaskWithMaintenanceWindow without specifying a service role + // ARN. + // + // For more information, see Service-Linked Role Permissions for Systems Manager + // (http://docs.aws.amazon.com/systems-manager/latest/userguide/using-service-linked-roles.html#slr-permissions) + // and Should I Use a Service-Linked Role or a Custom Service Role to Run Maintenance + // Window Tasks? (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-permissions.html#maintenance-window-tasks-service-role) + // in the AWS Systems Manager User Guide. ServiceRoleArn *string `type:"string"` // The targets (either instances or tags) to modify. Instances are specified @@ -27995,7 +33057,14 @@ type UpdateMaintenanceWindowTaskInput struct { // fields that match the task type. All other fields should be empty. TaskInvocationParameters *MaintenanceWindowTaskInvocationParameters `type:"structure"` - // The parameters to modify. The map has the following format: + // The parameters to modify. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. + // + // The map has the following format: // // Key: string, between 1 and 255 characters // @@ -28171,6 +33240,11 @@ type UpdateMaintenanceWindowTaskOutput struct { Description *string `min:"1" type:"string"` // The updated logging information in Amazon S3. + // + // LoggingInfo has been deprecated. To specify an S3 bucket to contain logs, + // instead use the OutputS3BucketName and OutputS3KeyPrefix options in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. LoggingInfo *LoggingInfo `type:"structure"` // The updated MaxConcurrency value. @@ -28198,6 +33272,11 @@ type UpdateMaintenanceWindowTaskOutput struct { TaskInvocationParameters *MaintenanceWindowTaskInvocationParameters `type:"structure"` // The updated parameter values. + // + // TaskParameters has been deprecated. To specify parameters to pass to a task + // when it runs, instead use the Parameters option in the TaskInvocationParameters + // structure. For information about how Systems Manager handles these options + // for the supported Maintenance Window task types, see MaintenanceWindowTaskInvocationParameters. TaskParameters map[string]*MaintenanceWindowTaskParameterValueExpression `type:"map"` // The ID of the Maintenance Window that was updated. 
@@ -28368,6 +33447,11 @@ type UpdatePatchBaselineInput struct { ApprovalRules *PatchRuleGroup `type:"structure"` // A list of explicitly approved patches for the baseline. + // + // For information about accepted formats for lists of approved patches and + // rejected patches, see Package Name Formats for Approved and Rejected Patch + // Lists (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-approved-rejected-package-name-formats.html) + // in the AWS Systems Manager User Guide. ApprovedPatches []*string `type:"list"` // Assigns a new compliance severity level to an existing patch baseline. @@ -28393,8 +33477,28 @@ type UpdatePatchBaselineInput struct { Name *string `min:"3" type:"string"` // A list of explicitly rejected patches for the baseline. + // + // For information about accepted formats for lists of approved patches and + // rejected patches, see Package Name Formats for Approved and Rejected Patch + // Lists (http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-approved-rejected-package-name-formats.html) + // in the AWS Systems Manager User Guide. RejectedPatches []*string `type:"list"` + // The action for Patch Manager to take on patches included in the RejectedPackages + // list. + // + // * ALLOW_AS_DEPENDENCY: A package in the Rejected patches list is installed + // only if it is a dependency of another package. It is considered compliant + // with the patch baseline, and its status is reported as InstalledOther. + // This is the default action if no option is specified. + // + // * BLOCK: Packages in the RejectedPatches list, and packages that include + // them as dependencies, are not installed under any circumstances. If a + // package was installed before it was added to the Rejected patches list, + // it is considered non-compliant with the patch baseline, and its status + // is reported as InstalledRejected. + RejectedPatchesAction *string `type:"string" enum:"PatchAction"` + // If True, then all fields that are required by the CreatePatchBaseline action // are also required for this API request. Optional fields that are not specified // are set to null. @@ -28511,6 +33615,12 @@ func (s *UpdatePatchBaselineInput) SetRejectedPatches(v []*string) *UpdatePatchB return s } +// SetRejectedPatchesAction sets the RejectedPatchesAction field's value. +func (s *UpdatePatchBaselineInput) SetRejectedPatchesAction(v string) *UpdatePatchBaselineInput { + s.RejectedPatchesAction = &v + return s +} + // SetReplace sets the Replace field's value. func (s *UpdatePatchBaselineInput) SetReplace(v bool) *UpdatePatchBaselineInput { s.Replace = &v @@ -28545,7 +33655,7 @@ type UpdatePatchBaselineOutput struct { BaselineId *string `min:"20" type:"string"` // The date when the patch baseline was created. - CreatedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + CreatedDate *time.Time `type:"timestamp"` // A description of the Patch Baseline. Description *string `min:"1" type:"string"` @@ -28554,7 +33664,7 @@ type UpdatePatchBaselineOutput struct { GlobalFilters *PatchFilterGroup `type:"structure"` // The date when the patch baseline was last modified. - ModifiedDate *time.Time `type:"timestamp" timestampFormat:"unix"` + ModifiedDate *time.Time `type:"timestamp"` // The name of the patch baseline. Name *string `min:"3" type:"string"` @@ -28565,6 +33675,11 @@ type UpdatePatchBaselineOutput struct { // A list of explicitly rejected patches for the baseline. 
RejectedPatches []*string `type:"list"` + // The action specified to take on patches included in the RejectedPatches list. + // A patch can be allowed only if it is a dependency of another package, or + // blocked entirely along with packages that include it as a dependency. + RejectedPatchesAction *string `type:"string" enum:"PatchAction"` + // Information about the patches to use to update the instances, including target // operating systems and source repositories. Applies to Linux instances only. Sources []*PatchSource `type:"list"` @@ -28652,12 +33767,57 @@ func (s *UpdatePatchBaselineOutput) SetRejectedPatches(v []*string) *UpdatePatch return s } +// SetRejectedPatchesAction sets the RejectedPatchesAction field's value. +func (s *UpdatePatchBaselineOutput) SetRejectedPatchesAction(v string) *UpdatePatchBaselineOutput { + s.RejectedPatchesAction = &v + return s +} + // SetSources sets the Sources field's value. func (s *UpdatePatchBaselineOutput) SetSources(v []*PatchSource) *UpdatePatchBaselineOutput { s.Sources = v return s } +const ( + // AssociationComplianceSeverityCritical is a AssociationComplianceSeverity enum value + AssociationComplianceSeverityCritical = "CRITICAL" + + // AssociationComplianceSeverityHigh is a AssociationComplianceSeverity enum value + AssociationComplianceSeverityHigh = "HIGH" + + // AssociationComplianceSeverityMedium is a AssociationComplianceSeverity enum value + AssociationComplianceSeverityMedium = "MEDIUM" + + // AssociationComplianceSeverityLow is a AssociationComplianceSeverity enum value + AssociationComplianceSeverityLow = "LOW" + + // AssociationComplianceSeverityUnspecified is a AssociationComplianceSeverity enum value + AssociationComplianceSeverityUnspecified = "UNSPECIFIED" +) + +const ( + // AssociationExecutionFilterKeyExecutionId is a AssociationExecutionFilterKey enum value + AssociationExecutionFilterKeyExecutionId = "ExecutionId" + + // AssociationExecutionFilterKeyStatus is a AssociationExecutionFilterKey enum value + AssociationExecutionFilterKeyStatus = "Status" + + // AssociationExecutionFilterKeyCreatedTime is a AssociationExecutionFilterKey enum value + AssociationExecutionFilterKeyCreatedTime = "CreatedTime" +) + +const ( + // AssociationExecutionTargetsFilterKeyStatus is a AssociationExecutionTargetsFilterKey enum value + AssociationExecutionTargetsFilterKeyStatus = "Status" + + // AssociationExecutionTargetsFilterKeyResourceId is a AssociationExecutionTargetsFilterKey enum value + AssociationExecutionTargetsFilterKeyResourceId = "ResourceId" + + // AssociationExecutionTargetsFilterKeyResourceType is a AssociationExecutionTargetsFilterKey enum value + AssociationExecutionTargetsFilterKeyResourceType = "ResourceType" +) + const ( // AssociationFilterKeyInstanceId is a AssociationFilterKey enum value AssociationFilterKeyInstanceId = "InstanceId" @@ -28681,6 +33841,17 @@ const ( AssociationFilterKeyAssociationName = "AssociationName" ) +const ( + // AssociationFilterOperatorTypeEqual is a AssociationFilterOperatorType enum value + AssociationFilterOperatorTypeEqual = "EQUAL" + + // AssociationFilterOperatorTypeLessThan is a AssociationFilterOperatorType enum value + AssociationFilterOperatorTypeLessThan = "LESS_THAN" + + // AssociationFilterOperatorTypeGreaterThan is a AssociationFilterOperatorType enum value + AssociationFilterOperatorTypeGreaterThan = "GREATER_THAN" +) + const ( // AssociationStatusNamePending is a AssociationStatusName enum value AssociationStatusNamePending = "Pending" @@ -28713,6 +33884,9 @@ const ( // 
AutomationExecutionFilterKeyStartTimeAfter is a AutomationExecutionFilterKey enum value AutomationExecutionFilterKeyStartTimeAfter = "StartTimeAfter" + + // AutomationExecutionFilterKeyAutomationType is a AutomationExecutionFilterKey enum value + AutomationExecutionFilterKeyAutomationType = "AutomationType" ) const ( @@ -28741,6 +33915,14 @@ const ( AutomationExecutionStatusFailed = "Failed" ) +const ( + // AutomationTypeCrossAccount is a AutomationType enum value + AutomationTypeCrossAccount = "CrossAccount" + + // AutomationTypeLocal is a AutomationType enum value + AutomationTypeLocal = "Local" +) + const ( // CommandFilterKeyInvokedAfter is a CommandFilterKey enum value CommandFilterKeyInvokedAfter = "InvokedAfter" @@ -28750,6 +33932,12 @@ const ( // CommandFilterKeyStatus is a CommandFilterKey enum value CommandFilterKeyStatus = "Status" + + // CommandFilterKeyExecutionStage is a CommandFilterKey enum value + CommandFilterKeyExecutionStage = "ExecutionStage" + + // CommandFilterKeyDocumentName is a CommandFilterKey enum value + CommandFilterKeyDocumentName = "DocumentName" ) const ( @@ -28866,6 +34054,14 @@ const ( ComplianceStatusNonCompliant = "NON_COMPLIANT" ) +const ( + // ConnectionStatusConnected is a ConnectionStatus enum value + ConnectionStatusConnected = "Connected" + + // ConnectionStatusNotConnected is a ConnectionStatus enum value + ConnectionStatusNotConnected = "NotConnected" +) + const ( // DescribeActivationsFilterKeysActivationIds is a DescribeActivationsFilterKeys enum value DescribeActivationsFilterKeysActivationIds = "ActivationIds" @@ -28943,6 +34139,9 @@ const ( // DocumentTypeAutomation is a DocumentType enum value DocumentTypeAutomation = "Automation" + + // DocumentTypeSession is a DocumentType enum value + DocumentTypeSession = "Session" ) const ( @@ -29012,6 +34211,14 @@ const ( InventoryAttributeDataTypeNumber = "number" ) +const ( + // InventoryDeletionStatusInProgress is a InventoryDeletionStatus enum value + InventoryDeletionStatusInProgress = "InProgress" + + // InventoryDeletionStatusComplete is a InventoryDeletionStatus enum value + InventoryDeletionStatusComplete = "Complete" +) + const ( // InventoryQueryOperatorTypeEqual is a InventoryQueryOperatorType enum value InventoryQueryOperatorTypeEqual = "Equal" @@ -29027,6 +34234,17 @@ const ( // InventoryQueryOperatorTypeGreaterThan is a InventoryQueryOperatorType enum value InventoryQueryOperatorTypeGreaterThan = "GreaterThan" + + // InventoryQueryOperatorTypeExists is a InventoryQueryOperatorType enum value + InventoryQueryOperatorTypeExists = "Exists" +) + +const ( + // InventorySchemaDeleteOptionDisableSchema is a InventorySchemaDeleteOption enum value + InventorySchemaDeleteOptionDisableSchema = "DisableSchema" + + // InventorySchemaDeleteOptionDeleteSchema is a InventorySchemaDeleteOption enum value + InventorySchemaDeleteOptionDeleteSchema = "DeleteSchema" ) const ( @@ -29120,6 +34338,9 @@ const ( // OperatingSystemAmazonLinux is a OperatingSystem enum value OperatingSystemAmazonLinux = "AMAZON_LINUX" + // OperatingSystemAmazonLinux2 is a OperatingSystem enum value + OperatingSystemAmazonLinux2 = "AMAZON_LINUX_2" + // OperatingSystemUbuntu is a OperatingSystem enum value OperatingSystemUbuntu = "UBUNTU" @@ -29128,6 +34349,9 @@ const ( // OperatingSystemSuse is a OperatingSystem enum value OperatingSystemSuse = "SUSE" + + // OperatingSystemCentos is a OperatingSystem enum value + OperatingSystemCentos = "CENTOS" ) const ( @@ -29152,6 +34376,14 @@ const ( ParametersFilterKeyKeyId = "KeyId" ) +const 
( + // PatchActionAllowAsDependency is a PatchAction enum value + PatchActionAllowAsDependency = "ALLOW_AS_DEPENDENCY" + + // PatchActionBlock is a PatchAction enum value + PatchActionBlock = "BLOCK" +) + const ( // PatchComplianceDataStateInstalled is a PatchComplianceDataState enum value PatchComplianceDataStateInstalled = "INSTALLED" @@ -29159,6 +34391,9 @@ const ( // PatchComplianceDataStateInstalledOther is a PatchComplianceDataState enum value PatchComplianceDataStateInstalledOther = "INSTALLED_OTHER" + // PatchComplianceDataStateInstalledRejected is a PatchComplianceDataState enum value + PatchComplianceDataStateInstalledRejected = "INSTALLED_REJECTED" + // PatchComplianceDataStateMissing is a PatchComplianceDataState enum value PatchComplianceDataStateMissing = "MISSING" @@ -29286,6 +34521,51 @@ const ( ResourceTypeForTaggingPatchBaseline = "PatchBaseline" ) +const ( + // SessionFilterKeyInvokedAfter is a SessionFilterKey enum value + SessionFilterKeyInvokedAfter = "InvokedAfter" + + // SessionFilterKeyInvokedBefore is a SessionFilterKey enum value + SessionFilterKeyInvokedBefore = "InvokedBefore" + + // SessionFilterKeyTarget is a SessionFilterKey enum value + SessionFilterKeyTarget = "Target" + + // SessionFilterKeyOwner is a SessionFilterKey enum value + SessionFilterKeyOwner = "Owner" + + // SessionFilterKeyStatus is a SessionFilterKey enum value + SessionFilterKeyStatus = "Status" +) + +const ( + // SessionStateActive is a SessionState enum value + SessionStateActive = "Active" + + // SessionStateHistory is a SessionState enum value + SessionStateHistory = "History" +) + +const ( + // SessionStatusConnected is a SessionStatus enum value + SessionStatusConnected = "Connected" + + // SessionStatusConnecting is a SessionStatus enum value + SessionStatusConnecting = "Connecting" + + // SessionStatusDisconnected is a SessionStatus enum value + SessionStatusDisconnected = "Disconnected" + + // SessionStatusTerminated is a SessionStatus enum value + SessionStatusTerminated = "Terminated" + + // SessionStatusTerminating is a SessionStatus enum value + SessionStatusTerminating = "Terminating" + + // SessionStatusFailed is a SessionStatus enum value + SessionStatusFailed = "Failed" +) + const ( // SignalTypeApprove is a SignalType enum value SignalTypeApprove = "Approve" diff --git a/vendor/github.com/aws/aws-sdk-go/service/ssm/doc.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/doc.go index 4f18dadcb2c..6964adba01b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ssm/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/doc.go @@ -15,7 +15,8 @@ // (http://docs.aws.amazon.com/systems-manager/latest/userguide/). // // To get started, verify prerequisites and configure managed instances. For -// more information, see Systems Manager Prerequisites (http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up.html). +// more information, see Systems Manager Prerequisites (http://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up.html) +// in the AWS Systems Manager User Guide. // // For information about other API actions you can perform on Amazon EC2 instances, // see the Amazon EC2 API Reference (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/). 
diff --git a/vendor/github.com/aws/aws-sdk-go/service/ssm/errors.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/errors.go index 8c932a093c3..da7ff9e647c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ssm/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/errors.go @@ -30,6 +30,12 @@ const ( // The specified association does not exist. ErrCodeAssociationDoesNotExist = "AssociationDoesNotExist" + // ErrCodeAssociationExecutionDoesNotExist for service response error code + // "AssociationExecutionDoesNotExist". + // + // The specified execution ID does not exist. Verify the ID number and try again. + ErrCodeAssociationExecutionDoesNotExist = "AssociationExecutionDoesNotExist" + // ErrCodeAssociationLimitExceeded for service response error code // "AssociationLimitExceeded". // @@ -150,8 +156,9 @@ const ( // ErrCodeHierarchyLevelLimitExceededException for service response error code // "HierarchyLevelLimitExceededException". // - // A hierarchy can have a maximum of 15 levels. For more information, see Working - // with Systems Manager Parameters (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-working.html). + // A hierarchy can have a maximum of 15 levels. For more information, see Requirements + // and Constraints for Parameter Names (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-parameter-name-constraints.html) + // in the AWS Systems Manager User Guide. ErrCodeHierarchyLevelLimitExceededException = "HierarchyLevelLimitExceededException" // ErrCodeHierarchyTypeMismatchException for service response error code @@ -189,12 +196,25 @@ const ( // or ActivationCode and try again. ErrCodeInvalidActivationId = "InvalidActivationId" + // ErrCodeInvalidAggregatorException for service response error code + // "InvalidAggregatorException". + // + // The specified aggregator is not valid for inventory groups. Verify that the + // aggregator uses a valid inventory type such as AWS:Application or AWS:InstanceInformation. + ErrCodeInvalidAggregatorException = "InvalidAggregatorException" + // ErrCodeInvalidAllowedPatternException for service response error code // "InvalidAllowedPatternException". // // The request does not meet the regular expression requirement. ErrCodeInvalidAllowedPatternException = "InvalidAllowedPatternException" + // ErrCodeInvalidAssociation for service response error code + // "InvalidAssociation". + // + // The association is not valid or does not exist. + ErrCodeInvalidAssociation = "InvalidAssociation" + // ErrCodeInvalidAssociationVersion for service response error code // "InvalidAssociationVersion". // @@ -227,6 +247,20 @@ const ( // "InvalidCommandId". ErrCodeInvalidCommandId = "InvalidCommandId" + // ErrCodeInvalidDeleteInventoryParametersException for service response error code + // "InvalidDeleteInventoryParametersException". + // + // One or more of the parameters specified for the delete operation is not valid. + // Verify all parameters and try again. + ErrCodeInvalidDeleteInventoryParametersException = "InvalidDeleteInventoryParametersException" + + // ErrCodeInvalidDeletionIdException for service response error code + // "InvalidDeletionIdException". + // + // The ID specified for the delete operation does not exist or is not valid. + // Verify the ID and try again. + ErrCodeInvalidDeletionIdException = "InvalidDeletionIdException" + // ErrCodeInvalidDocument for service response error code + // "InvalidDocument".
// @@ -291,12 +325,12 @@ const ( // // You do not have permission to access the instance. // - // The SSM Agent is not running. On managed instances and Linux instances, verify + // SSM Agent is not running. On managed instances and Linux instances, verify // that the SSM Agent is running. On EC2 Windows instances, verify that the // EC2Config service is running. // - // The SSM Agent or EC2Config service is not registered to the SSM endpoint. - // Try reinstalling the SSM Agent or EC2Config service. + // SSM Agent or EC2Config service is not registered to the SSM endpoint. Try + // reinstalling SSM Agent or EC2Config service. // // The instance is not in valid state. Valid states are: Running, Pending, Stopped, // Stopping. Invalid states are: Shutting-down and Terminated. @@ -308,6 +342,12 @@ const ( // The specified filter value is not valid. ErrCodeInvalidInstanceInformationFilterValue = "InvalidInstanceInformationFilterValue" + // ErrCodeInvalidInventoryGroupException for service response error code + // "InvalidInventoryGroupException". + // + // The specified inventory group is not valid. + ErrCodeInvalidInventoryGroupException = "InvalidInventoryGroupException" + // ErrCodeInvalidInventoryItemContextException for service response error code // "InvalidInventoryItemContextException". // @@ -315,6 +355,12 @@ const ( // Verify the keys and values, and try again. ErrCodeInvalidInventoryItemContextException = "InvalidInventoryItemContextException" + // ErrCodeInvalidInventoryRequestException for service response error code + // "InvalidInventoryRequestException". + // + // The request is not valid. + ErrCodeInvalidInventoryRequestException = "InvalidInventoryRequestException" + // ErrCodeInvalidItemContentException for service response error code // "InvalidItemContentException". // @@ -340,6 +386,13 @@ const ( // Resource Name (ARN) was provided for an Amazon SNS topic. ErrCodeInvalidNotificationConfig = "InvalidNotificationConfig" + // ErrCodeInvalidOptionException for service response error code + // "InvalidOptionException". + // + // The delete inventory option specified is not valid. Verify the option and + // try again. + ErrCodeInvalidOptionException = "InvalidOptionException" + // ErrCodeInvalidOutputFolder for service response error code // "InvalidOutputFolder". // @@ -484,6 +537,12 @@ const ( // The parameter name is not valid. ErrCodeParameterPatternMismatchException = "ParameterPatternMismatchException" + // ErrCodeParameterVersionLabelLimitExceeded for service response error code + // "ParameterVersionLabelLimitExceeded". + // + // A parameter version can have a maximum of ten labels. + ErrCodeParameterVersionLabelLimitExceeded = "ParameterVersionLabelLimitExceeded" + // ErrCodeParameterVersionNotFound for service response error code // "ParameterVersionNotFound". // @@ -551,6 +610,15 @@ const ( // operation, but the target is still referenced in a task. ErrCodeTargetInUseException = "TargetInUseException" + // ErrCodeTargetNotConnected for service response error code + // "TargetNotConnected". + // + // The specified target instance for the session is not fully configured for + // use with Session Manager. For more information, see Getting Started with + // Session Manager (http://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html) + // in the AWS Systems Manager User Guide. + ErrCodeTargetNotConnected = "TargetNotConnected" + // ErrCodeTooManyTagsError for service response error code // "TooManyTagsError". 
// diff --git a/vendor/github.com/aws/aws-sdk-go/service/ssm/service.go b/vendor/github.com/aws/aws-sdk-go/service/ssm/service.go index d414fb7d88c..9a6b8f71c22 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ssm/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ssm/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "ssm" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "ssm" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "SSM" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the SSM client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/storagegateway/api.go b/vendor/github.com/aws/aws-sdk-go/service/storagegateway/api.go new file mode 100644 index 00000000000..8556f852a71 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/storagegateway/api.go @@ -0,0 +1,15741 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package storagegateway + +import ( + "fmt" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" +) + +const opActivateGateway = "ActivateGateway" + +// ActivateGatewayRequest generates a "aws/request.Request" representing the +// client's request for the ActivateGateway operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ActivateGateway for more information on using the ActivateGateway +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ActivateGatewayRequest method. +// req, resp := client.ActivateGatewayRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ActivateGateway +func (c *StorageGateway) ActivateGatewayRequest(input *ActivateGatewayInput) (req *request.Request, output *ActivateGatewayOutput) { + op := &request.Operation{ + Name: opActivateGateway, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ActivateGatewayInput{} + } + + output = &ActivateGatewayOutput{} + req = c.newRequest(op, input, output) + return +} + +// ActivateGateway API operation for AWS Storage Gateway. +// +// Activates the gateway you previously deployed on your host. In the activation +// process, you specify information such as the region you want to use for storing +// snapshots or tapes, the time zone for scheduled snapshots the gateway snapshot +// schedule window, an activation key, and a name for your gateway. 
The activation +// process also associates your gateway with your account; for more information, +// see UpdateGatewayInformation. +// +// You must turn on the gateway VM before you can activate your gateway. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ActivateGateway for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ActivateGateway +func (c *StorageGateway) ActivateGateway(input *ActivateGatewayInput) (*ActivateGatewayOutput, error) { + req, out := c.ActivateGatewayRequest(input) + return out, req.Send() +} + +// ActivateGatewayWithContext is the same as ActivateGateway with the addition of +// the ability to pass a context and additional request options. +// +// See ActivateGateway for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ActivateGatewayWithContext(ctx aws.Context, input *ActivateGatewayInput, opts ...request.Option) (*ActivateGatewayOutput, error) { + req, out := c.ActivateGatewayRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAddCache = "AddCache" + +// AddCacheRequest generates a "aws/request.Request" representing the +// client's request for the AddCache operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddCache for more information on using the AddCache +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddCacheRequest method. +// req, resp := client.AddCacheRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/AddCache +func (c *StorageGateway) AddCacheRequest(input *AddCacheInput) (req *request.Request, output *AddCacheOutput) { + op := &request.Operation{ + Name: opAddCache, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddCacheInput{} + } + + output = &AddCacheOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddCache API operation for AWS Storage Gateway. +// +// Configures one or more gateway local disks as cache for a gateway. 
This operation +// is only supported in the cached volume, tape and file gateway type (see Storage +// Gateway Concepts (http://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html)). +// +// In the request, you specify the gateway Amazon Resource Name (ARN) to which +// you want to add cache, and one or more disk IDs that you want to configure +// as cache. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation AddCache for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/AddCache +func (c *StorageGateway) AddCache(input *AddCacheInput) (*AddCacheOutput, error) { + req, out := c.AddCacheRequest(input) + return out, req.Send() +} + +// AddCacheWithContext is the same as AddCache with the addition of +// the ability to pass a context and additional request options. +// +// See AddCache for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) AddCacheWithContext(ctx aws.Context, input *AddCacheInput, opts ...request.Option) (*AddCacheOutput, error) { + req, out := c.AddCacheRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAddTagsToResource = "AddTagsToResource" + +// AddTagsToResourceRequest generates a "aws/request.Request" representing the +// client's request for the AddTagsToResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddTagsToResource for more information on using the AddTagsToResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddTagsToResourceRequest method. 
+// req, resp := client.AddTagsToResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/AddTagsToResource +func (c *StorageGateway) AddTagsToResourceRequest(input *AddTagsToResourceInput) (req *request.Request, output *AddTagsToResourceOutput) { + op := &request.Operation{ + Name: opAddTagsToResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddTagsToResourceInput{} + } + + output = &AddTagsToResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddTagsToResource API operation for AWS Storage Gateway. +// +// Adds one or more tags to the specified resource. You use tags to add metadata +// to resources, which you can use to categorize these resources. For example, +// you can categorize resources by purpose, owner, environment, or team. Each +// tag consists of a key and a value, which you define. You can add tags to +// the following AWS Storage Gateway resources: +// +// * Storage gateways of all types +// +// * Storage Volumes +// +// * Virtual Tapes +// +// You can create a maximum of 10 tags for each resource. Virtual tapes and +// storage volumes that are recovered to a new gateway maintain their tags. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation AddTagsToResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/AddTagsToResource +func (c *StorageGateway) AddTagsToResource(input *AddTagsToResourceInput) (*AddTagsToResourceOutput, error) { + req, out := c.AddTagsToResourceRequest(input) + return out, req.Send() +} + +// AddTagsToResourceWithContext is the same as AddTagsToResource with the addition of +// the ability to pass a context and additional request options. +// +// See AddTagsToResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) AddTagsToResourceWithContext(ctx aws.Context, input *AddTagsToResourceInput, opts ...request.Option) (*AddTagsToResourceOutput, error) { + req, out := c.AddTagsToResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAddUploadBuffer = "AddUploadBuffer" + +// AddUploadBufferRequest generates a "aws/request.Request" representing the +// client's request for the AddUploadBuffer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddUploadBuffer for more information on using the AddUploadBuffer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddUploadBufferRequest method. +// req, resp := client.AddUploadBufferRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/AddUploadBuffer +func (c *StorageGateway) AddUploadBufferRequest(input *AddUploadBufferInput) (req *request.Request, output *AddUploadBufferOutput) { + op := &request.Operation{ + Name: opAddUploadBuffer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddUploadBufferInput{} + } + + output = &AddUploadBufferOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddUploadBuffer API operation for AWS Storage Gateway. +// +// Configures one or more gateway local disks as upload buffer for a specified +// gateway. This operation is supported for the stored volume, cached volume +// and tape gateway types. +// +// In the request, you specify the gateway Amazon Resource Name (ARN) to which +// you want to add upload buffer, and one or more disk IDs that you want to +// configure as upload buffer. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation AddUploadBuffer for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/AddUploadBuffer +func (c *StorageGateway) AddUploadBuffer(input *AddUploadBufferInput) (*AddUploadBufferOutput, error) { + req, out := c.AddUploadBufferRequest(input) + return out, req.Send() +} + +// AddUploadBufferWithContext is the same as AddUploadBuffer with the addition of +// the ability to pass a context and additional request options. +// +// See AddUploadBuffer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) AddUploadBufferWithContext(ctx aws.Context, input *AddUploadBufferInput, opts ...request.Option) (*AddUploadBufferOutput, error) { + req, out := c.AddUploadBufferRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opAddWorkingStorage = "AddWorkingStorage" + +// AddWorkingStorageRequest generates a "aws/request.Request" representing the +// client's request for the AddWorkingStorage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AddWorkingStorage for more information on using the AddWorkingStorage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AddWorkingStorageRequest method. +// req, resp := client.AddWorkingStorageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/AddWorkingStorage +func (c *StorageGateway) AddWorkingStorageRequest(input *AddWorkingStorageInput) (req *request.Request, output *AddWorkingStorageOutput) { + op := &request.Operation{ + Name: opAddWorkingStorage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AddWorkingStorageInput{} + } + + output = &AddWorkingStorageOutput{} + req = c.newRequest(op, input, output) + return +} + +// AddWorkingStorage API operation for AWS Storage Gateway. +// +// Configures one or more gateway local disks as working storage for a gateway. +// This operation is only supported in the stored volume gateway type. This +// operation is deprecated in cached volume API version 20120630. Use AddUploadBuffer +// instead. +// +// Working storage is also referred to as upload buffer. You can also use the +// AddUploadBuffer operation to add upload buffer to a stored volume gateway. +// +// In the request, you specify the gateway Amazon Resource Name (ARN) to which +// you want to add working storage, and one or more disk IDs that you want to +// configure as working storage. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation AddWorkingStorage for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/AddWorkingStorage +func (c *StorageGateway) AddWorkingStorage(input *AddWorkingStorageInput) (*AddWorkingStorageOutput, error) { + req, out := c.AddWorkingStorageRequest(input) + return out, req.Send() +} + +// AddWorkingStorageWithContext is the same as AddWorkingStorage with the addition of +// the ability to pass a context and additional request options. +// +// See AddWorkingStorage for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) AddWorkingStorageWithContext(ctx aws.Context, input *AddWorkingStorageInput, opts ...request.Option) (*AddWorkingStorageOutput, error) { + req, out := c.AddWorkingStorageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCancelArchival = "CancelArchival" + +// CancelArchivalRequest generates a "aws/request.Request" representing the +// client's request for the CancelArchival operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelArchival for more information on using the CancelArchival +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelArchivalRequest method. +// req, resp := client.CancelArchivalRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CancelArchival +func (c *StorageGateway) CancelArchivalRequest(input *CancelArchivalInput) (req *request.Request, output *CancelArchivalOutput) { + op := &request.Operation{ + Name: opCancelArchival, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelArchivalInput{} + } + + output = &CancelArchivalOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelArchival API operation for AWS Storage Gateway. +// +// Cancels archiving of a virtual tape to the virtual tape shelf (VTS) after +// the archiving process is initiated. This operation is only supported in the +// tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CancelArchival for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CancelArchival +func (c *StorageGateway) CancelArchival(input *CancelArchivalInput) (*CancelArchivalOutput, error) { + req, out := c.CancelArchivalRequest(input) + return out, req.Send() +} + +// CancelArchivalWithContext is the same as CancelArchival with the addition of +// the ability to pass a context and additional request options. +// +// See CancelArchival for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CancelArchivalWithContext(ctx aws.Context, input *CancelArchivalInput, opts ...request.Option) (*CancelArchivalOutput, error) { + req, out := c.CancelArchivalRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCancelRetrieval = "CancelRetrieval" + +// CancelRetrievalRequest generates a "aws/request.Request" representing the +// client's request for the CancelRetrieval operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CancelRetrieval for more information on using the CancelRetrieval +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelRetrievalRequest method. +// req, resp := client.CancelRetrievalRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CancelRetrieval +func (c *StorageGateway) CancelRetrievalRequest(input *CancelRetrievalInput) (req *request.Request, output *CancelRetrievalOutput) { + op := &request.Operation{ + Name: opCancelRetrieval, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelRetrievalInput{} + } + + output = &CancelRetrievalOutput{} + req = c.newRequest(op, input, output) + return +} + +// CancelRetrieval API operation for AWS Storage Gateway. +// +// Cancels retrieval of a virtual tape from the virtual tape shelf (VTS) to +// a gateway after the retrieval process is initiated. The virtual tape is returned +// to the VTS. This operation is only supported in the tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CancelRetrieval for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CancelRetrieval +func (c *StorageGateway) CancelRetrieval(input *CancelRetrievalInput) (*CancelRetrievalOutput, error) { + req, out := c.CancelRetrievalRequest(input) + return out, req.Send() +} + +// CancelRetrievalWithContext is the same as CancelRetrieval with the addition of +// the ability to pass a context and additional request options. 
+// +// See CancelRetrieval for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CancelRetrievalWithContext(ctx aws.Context, input *CancelRetrievalInput, opts ...request.Option) (*CancelRetrievalOutput, error) { + req, out := c.CancelRetrievalRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateCachediSCSIVolume = "CreateCachediSCSIVolume" + +// CreateCachediSCSIVolumeRequest generates a "aws/request.Request" representing the +// client's request for the CreateCachediSCSIVolume operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateCachediSCSIVolume for more information on using the CreateCachediSCSIVolume +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateCachediSCSIVolumeRequest method. +// req, resp := client.CreateCachediSCSIVolumeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateCachediSCSIVolume +func (c *StorageGateway) CreateCachediSCSIVolumeRequest(input *CreateCachediSCSIVolumeInput) (req *request.Request, output *CreateCachediSCSIVolumeOutput) { + op := &request.Operation{ + Name: opCreateCachediSCSIVolume, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateCachediSCSIVolumeInput{} + } + + output = &CreateCachediSCSIVolumeOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateCachediSCSIVolume API operation for AWS Storage Gateway. +// +// Creates a cached volume on a specified cached volume gateway. This operation +// is only supported in the cached volume gateway type. +// +// Cache storage must be allocated to the gateway before you can create a cached +// volume. Use the AddCache operation to add cache storage to a gateway. +// +// In the request, you must specify the gateway, size of the volume in bytes, +// the iSCSI target name, an IP address on which to expose the target, and a +// unique client token. In response, the gateway creates the volume and returns +// information about it. This information includes the volume Amazon Resource +// Name (ARN), its size, and the iSCSI target ARN that initiators can use to +// connect to the volume target. +// +// Optionally, you can provide the ARN for an existing volume as the SourceVolumeARN +// for this cached volume, which creates an exact copy of the existing volume’s +// latest recovery point. The VolumeSizeInBytes value must be equal to or larger +// than the size of the copied volume, in bytes. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CreateCachediSCSIVolume for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateCachediSCSIVolume +func (c *StorageGateway) CreateCachediSCSIVolume(input *CreateCachediSCSIVolumeInput) (*CreateCachediSCSIVolumeOutput, error) { + req, out := c.CreateCachediSCSIVolumeRequest(input) + return out, req.Send() +} + +// CreateCachediSCSIVolumeWithContext is the same as CreateCachediSCSIVolume with the addition of +// the ability to pass a context and additional request options. +// +// See CreateCachediSCSIVolume for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CreateCachediSCSIVolumeWithContext(ctx aws.Context, input *CreateCachediSCSIVolumeInput, opts ...request.Option) (*CreateCachediSCSIVolumeOutput, error) { + req, out := c.CreateCachediSCSIVolumeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateNFSFileShare = "CreateNFSFileShare" + +// CreateNFSFileShareRequest generates a "aws/request.Request" representing the +// client's request for the CreateNFSFileShare operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateNFSFileShare for more information on using the CreateNFSFileShare +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateNFSFileShareRequest method. +// req, resp := client.CreateNFSFileShareRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateNFSFileShare +func (c *StorageGateway) CreateNFSFileShareRequest(input *CreateNFSFileShareInput) (req *request.Request, output *CreateNFSFileShareOutput) { + op := &request.Operation{ + Name: opCreateNFSFileShare, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateNFSFileShareInput{} + } + + output = &CreateNFSFileShareOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateNFSFileShare API operation for AWS Storage Gateway. +// +// Creates a Network File System (NFS) file share on an existing file gateway. +// In Storage Gateway, a file share is a file system mount point backed by Amazon +// S3 cloud storage. Storage Gateway exposes file shares using a NFS interface. 
+// This operation is only supported for file gateways. +// +// File gateway requires AWS Security Token Service (AWS STS) to be activated +// to enable you to create a file share. Make sure AWS STS is activated in the +// region you are creating your file gateway in. If AWS STS is not activated +// in the region, activate it. For information about how to activate AWS STS, +// see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity +// and Access Management User Guide. +// +// File gateway does not support creating hard or symbolic links on a file share. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CreateNFSFileShare for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateNFSFileShare +func (c *StorageGateway) CreateNFSFileShare(input *CreateNFSFileShareInput) (*CreateNFSFileShareOutput, error) { + req, out := c.CreateNFSFileShareRequest(input) + return out, req.Send() +} + +// CreateNFSFileShareWithContext is the same as CreateNFSFileShare with the addition of +// the ability to pass a context and additional request options. +// +// See CreateNFSFileShare for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CreateNFSFileShareWithContext(ctx aws.Context, input *CreateNFSFileShareInput, opts ...request.Option) (*CreateNFSFileShareOutput, error) { + req, out := c.CreateNFSFileShareRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateSMBFileShare = "CreateSMBFileShare" + +// CreateSMBFileShareRequest generates a "aws/request.Request" representing the +// client's request for the CreateSMBFileShare operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSMBFileShare for more information on using the CreateSMBFileShare +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSMBFileShareRequest method.
+// req, resp := client.CreateSMBFileShareRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateSMBFileShare +func (c *StorageGateway) CreateSMBFileShareRequest(input *CreateSMBFileShareInput) (req *request.Request, output *CreateSMBFileShareOutput) { + op := &request.Operation{ + Name: opCreateSMBFileShare, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateSMBFileShareInput{} + } + + output = &CreateSMBFileShareOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSMBFileShare API operation for AWS Storage Gateway. +// +// Creates a Server Message Block (SMB) file share on an existing file gateway. +// In Storage Gateway, a file share is a file system mount point backed by Amazon +// S3 cloud storage. Storage Gateway exposes file shares using a SMB interface. +// This operation is only supported for file gateways. +// +// File gateways require AWS Security Token Service (AWS STS) to be activated +// to enable you to create a file share. Make sure that AWS STS is activated +// in the AWS Region you are creating your file gateway in. If AWS STS is not +// activated in this AWS Region, activate it. For information about how to activate +// AWS STS, see Activating and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) +// in the AWS Identity and Access Management User Guide. +// +// File gateways don't support creating hard or symbolic links on a file share. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CreateSMBFileShare for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateSMBFileShare +func (c *StorageGateway) CreateSMBFileShare(input *CreateSMBFileShareInput) (*CreateSMBFileShareOutput, error) { + req, out := c.CreateSMBFileShareRequest(input) + return out, req.Send() +} + +// CreateSMBFileShareWithContext is the same as CreateSMBFileShare with the addition of +// the ability to pass a context and additional request options. +// +// See CreateSMBFileShare for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CreateSMBFileShareWithContext(ctx aws.Context, input *CreateSMBFileShareInput, opts ...request.Option) (*CreateSMBFileShareOutput, error) { + req, out := c.CreateSMBFileShareRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...)
+ return out, req.Send() +} + +const opCreateSnapshot = "CreateSnapshot" + +// CreateSnapshotRequest generates a "aws/request.Request" representing the +// client's request for the CreateSnapshot operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSnapshot for more information on using the CreateSnapshot +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSnapshotRequest method. +// req, resp := client.CreateSnapshotRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateSnapshot +func (c *StorageGateway) CreateSnapshotRequest(input *CreateSnapshotInput) (req *request.Request, output *CreateSnapshotOutput) { + op := &request.Operation{ + Name: opCreateSnapshot, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateSnapshotInput{} + } + + output = &CreateSnapshotOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSnapshot API operation for AWS Storage Gateway. +// +// Initiates a snapshot of a volume. +// +// AWS Storage Gateway provides the ability to back up point-in-time snapshots +// of your data to Amazon Simple Storage (S3) for durable off-site recovery, +// as well as import the data to an Amazon Elastic Block Store (EBS) volume +// in Amazon Elastic Compute Cloud (EC2). You can take snapshots of your gateway +// volume on a scheduled or ad-hoc basis. This API enables you to take ad-hoc +// snapshot. For more information, see Editing a Snapshot Schedule (http://docs.aws.amazon.com/storagegateway/latest/userguide/managing-volumes.html#SchedulingSnapshot). +// +// In the CreateSnapshot request you identify the volume by providing its Amazon +// Resource Name (ARN). You must also provide description for the snapshot. +// When AWS Storage Gateway takes the snapshot of specified volume, the snapshot +// and description appears in the AWS Storage Gateway Console. In response, +// AWS Storage Gateway returns you a snapshot ID. You can use this snapshot +// ID to check the snapshot progress or later use it when you want to create +// a volume from a snapshot. This operation is only supported in stored and +// cached volume gateway type. +// +// To list or delete a snapshot, you must use the Amazon EC2 API. For more information, +// see DescribeSnapshots or DeleteSnapshot in the EC2 API reference (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Operations.html). +// +// Volume and snapshot IDs are changing to a longer length ID format. For more +// information, see the important note on the Welcome (http://docs.aws.amazon.com/storagegateway/latest/APIReference/Welcome.html) +// page. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CreateSnapshot for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// * ErrCodeServiceUnavailableError "ServiceUnavailableError" +// An internal server error has occurred because the service is unavailable. +// For more information, see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateSnapshot +func (c *StorageGateway) CreateSnapshot(input *CreateSnapshotInput) (*CreateSnapshotOutput, error) { + req, out := c.CreateSnapshotRequest(input) + return out, req.Send() +} + +// CreateSnapshotWithContext is the same as CreateSnapshot with the addition of +// the ability to pass a context and additional request options. +// +// See CreateSnapshot for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CreateSnapshotWithContext(ctx aws.Context, input *CreateSnapshotInput, opts ...request.Option) (*CreateSnapshotOutput, error) { + req, out := c.CreateSnapshotRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateSnapshotFromVolumeRecoveryPoint = "CreateSnapshotFromVolumeRecoveryPoint" + +// CreateSnapshotFromVolumeRecoveryPointRequest generates a "aws/request.Request" representing the +// client's request for the CreateSnapshotFromVolumeRecoveryPoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateSnapshotFromVolumeRecoveryPoint for more information on using the CreateSnapshotFromVolumeRecoveryPoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateSnapshotFromVolumeRecoveryPointRequest method. 
+// req, resp := client.CreateSnapshotFromVolumeRecoveryPointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateSnapshotFromVolumeRecoveryPoint +func (c *StorageGateway) CreateSnapshotFromVolumeRecoveryPointRequest(input *CreateSnapshotFromVolumeRecoveryPointInput) (req *request.Request, output *CreateSnapshotFromVolumeRecoveryPointOutput) { + op := &request.Operation{ + Name: opCreateSnapshotFromVolumeRecoveryPoint, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateSnapshotFromVolumeRecoveryPointInput{} + } + + output = &CreateSnapshotFromVolumeRecoveryPointOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateSnapshotFromVolumeRecoveryPoint API operation for AWS Storage Gateway. +// +// Initiates a snapshot of a gateway from a volume recovery point. This operation +// is only supported in the cached volume gateway type. +// +// A volume recovery point is a point in time at which all data of the volume +// is consistent and from which you can create a snapshot. To get a list of +// volume recovery point for cached volume gateway, use ListVolumeRecoveryPoints. +// +// In the CreateSnapshotFromVolumeRecoveryPoint request, you identify the volume +// by providing its Amazon Resource Name (ARN). You must also provide a description +// for the snapshot. When the gateway takes a snapshot of the specified volume, +// the snapshot and its description appear in the AWS Storage Gateway console. +// In response, the gateway returns you a snapshot ID. You can use this snapshot +// ID to check the snapshot progress or later use it when you want to create +// a volume from a snapshot. +// +// To list or delete a snapshot, you must use the Amazon EC2 API. For more information, +// in Amazon Elastic Compute Cloud API Reference. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CreateSnapshotFromVolumeRecoveryPoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// * ErrCodeServiceUnavailableError "ServiceUnavailableError" +// An internal server error has occurred because the service is unavailable. +// For more information, see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateSnapshotFromVolumeRecoveryPoint +func (c *StorageGateway) CreateSnapshotFromVolumeRecoveryPoint(input *CreateSnapshotFromVolumeRecoveryPointInput) (*CreateSnapshotFromVolumeRecoveryPointOutput, error) { + req, out := c.CreateSnapshotFromVolumeRecoveryPointRequest(input) + return out, req.Send() +} + +// CreateSnapshotFromVolumeRecoveryPointWithContext is the same as CreateSnapshotFromVolumeRecoveryPoint with the addition of +// the ability to pass a context and additional request options. 
+// +// See CreateSnapshotFromVolumeRecoveryPoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CreateSnapshotFromVolumeRecoveryPointWithContext(ctx aws.Context, input *CreateSnapshotFromVolumeRecoveryPointInput, opts ...request.Option) (*CreateSnapshotFromVolumeRecoveryPointOutput, error) { + req, out := c.CreateSnapshotFromVolumeRecoveryPointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateStorediSCSIVolume = "CreateStorediSCSIVolume" + +// CreateStorediSCSIVolumeRequest generates a "aws/request.Request" representing the +// client's request for the CreateStorediSCSIVolume operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateStorediSCSIVolume for more information on using the CreateStorediSCSIVolume +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateStorediSCSIVolumeRequest method. +// req, resp := client.CreateStorediSCSIVolumeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateStorediSCSIVolume +func (c *StorageGateway) CreateStorediSCSIVolumeRequest(input *CreateStorediSCSIVolumeInput) (req *request.Request, output *CreateStorediSCSIVolumeOutput) { + op := &request.Operation{ + Name: opCreateStorediSCSIVolume, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateStorediSCSIVolumeInput{} + } + + output = &CreateStorediSCSIVolumeOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateStorediSCSIVolume API operation for AWS Storage Gateway. +// +// Creates a volume on a specified gateway. This operation is only supported +// in the stored volume gateway type. +// +// The size of the volume to create is inferred from the disk size. You can +// choose to preserve existing data on the disk, create volume from an existing +// snapshot, or create an empty volume. If you choose to create an empty gateway +// volume, then any existing data on the disk is erased. +// +// In the request you must specify the gateway and the disk information on which +// you are creating the volume. In response, the gateway creates the volume +// and returns volume information such as the volume Amazon Resource Name (ARN), +// its size, and the iSCSI target ARN that initiators can use to connect to +// the volume target. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CreateStorediSCSIVolume for usage and error information. 
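+//
+// A minimal usage sketch follows; the session setup and every input value below
+// (gateway ARN, disk ID, target name, network interface) are illustrative
+// assumptions, not values taken from this documentation:
+//
+//    sess := session.Must(session.NewSession())
+//    svc := storagegateway.New(sess)
+//    out, err := svc.CreateStorediSCSIVolume(&storagegateway.CreateStorediSCSIVolumeInput{
+//        GatewayARN:           aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B"),
+//        DiskId:               aws.String("pci-0000:03:00.0-scsi-0:0:0:0"),
+//        TargetName:           aws.String("my-volume"),
+//        PreserveExistingData: aws.Bool(false),
+//        NetworkInterfaceId:   aws.String("10.1.2.3"),
+//    })
+//    if err != nil {
+//        // Inspect err (an awserr.Error) for one of the codes listed below.
+//    }
+//    _ = out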
+//
+// Returned Error Codes:
+// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException"
+// An exception occurred because an invalid gateway request was issued to the
+// service. For more information, see the error and message fields.
+//
+// * ErrCodeInternalServerError "InternalServerError"
+// An internal server error has occurred during the request. For more information,
+// see the error and message fields.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateStorediSCSIVolume
+func (c *StorageGateway) CreateStorediSCSIVolume(input *CreateStorediSCSIVolumeInput) (*CreateStorediSCSIVolumeOutput, error) {
+ req, out := c.CreateStorediSCSIVolumeRequest(input)
+ return out, req.Send()
+}
+
+// CreateStorediSCSIVolumeWithContext is the same as CreateStorediSCSIVolume with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateStorediSCSIVolume for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *StorageGateway) CreateStorediSCSIVolumeWithContext(ctx aws.Context, input *CreateStorediSCSIVolumeInput, opts ...request.Option) (*CreateStorediSCSIVolumeOutput, error) {
+ req, out := c.CreateStorediSCSIVolumeRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opCreateTapeWithBarcode = "CreateTapeWithBarcode"
+
+// CreateTapeWithBarcodeRequest generates a "aws/request.Request" representing the
+// client's request for the CreateTapeWithBarcode operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See CreateTapeWithBarcode for more information on using the CreateTapeWithBarcode
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the CreateTapeWithBarcodeRequest method.
+// req, resp := client.CreateTapeWithBarcodeRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateTapeWithBarcode
+func (c *StorageGateway) CreateTapeWithBarcodeRequest(input *CreateTapeWithBarcodeInput) (req *request.Request, output *CreateTapeWithBarcodeOutput) {
+ op := &request.Operation{
+ Name: opCreateTapeWithBarcode,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &CreateTapeWithBarcodeInput{}
+ }
+
+ output = &CreateTapeWithBarcodeOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// CreateTapeWithBarcode API operation for AWS Storage Gateway.
+//
+// Creates a virtual tape by using your own barcode. You write data to the virtual
+// tape and then archive the tape. A barcode is unique and cannot be reused
+// if it has already been used on a tape. This applies to barcodes used on
+// deleted tapes. This operation is only supported in the tape gateway type.
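+//
+// A minimal sketch of creating a tape with a caller-supplied barcode; the
+// client ("svc"), gateway ARN, barcode, and size below are illustrative
+// assumptions rather than values from this documentation:
+//
+//    out, err := svc.CreateTapeWithBarcode(&storagegateway.CreateTapeWithBarcodeInput{
+//        GatewayARN:      aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B"),
+//        TapeBarcode:     aws.String("TEST12345"),
+//        TapeSizeInBytes: aws.Int64(107374182400), // 100 GiB
+//    })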
+// +// Cache storage must be allocated to the gateway before you can create a virtual +// tape. Use the AddCache operation to add cache storage to a gateway. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CreateTapeWithBarcode for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateTapeWithBarcode +func (c *StorageGateway) CreateTapeWithBarcode(input *CreateTapeWithBarcodeInput) (*CreateTapeWithBarcodeOutput, error) { + req, out := c.CreateTapeWithBarcodeRequest(input) + return out, req.Send() +} + +// CreateTapeWithBarcodeWithContext is the same as CreateTapeWithBarcode with the addition of +// the ability to pass a context and additional request options. +// +// See CreateTapeWithBarcode for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CreateTapeWithBarcodeWithContext(ctx aws.Context, input *CreateTapeWithBarcodeInput, opts ...request.Option) (*CreateTapeWithBarcodeOutput, error) { + req, out := c.CreateTapeWithBarcodeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateTapes = "CreateTapes" + +// CreateTapesRequest generates a "aws/request.Request" representing the +// client's request for the CreateTapes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateTapes for more information on using the CreateTapes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateTapesRequest method. +// req, resp := client.CreateTapesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateTapes +func (c *StorageGateway) CreateTapesRequest(input *CreateTapesInput) (req *request.Request, output *CreateTapesOutput) { + op := &request.Operation{ + Name: opCreateTapes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateTapesInput{} + } + + output = &CreateTapesOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateTapes API operation for AWS Storage Gateway. +// +// Creates one or more virtual tapes. 
You write data to the virtual tapes and +// then archive the tapes. This operation is only supported in the tape gateway +// type. +// +// Cache storage must be allocated to the gateway before you can create virtual +// tapes. Use the AddCache operation to add cache storage to a gateway. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation CreateTapes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/CreateTapes +func (c *StorageGateway) CreateTapes(input *CreateTapesInput) (*CreateTapesOutput, error) { + req, out := c.CreateTapesRequest(input) + return out, req.Send() +} + +// CreateTapesWithContext is the same as CreateTapes with the addition of +// the ability to pass a context and additional request options. +// +// See CreateTapes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) CreateTapesWithContext(ctx aws.Context, input *CreateTapesInput, opts ...request.Option) (*CreateTapesOutput, error) { + req, out := c.CreateTapesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteBandwidthRateLimit = "DeleteBandwidthRateLimit" + +// DeleteBandwidthRateLimitRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBandwidthRateLimit operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBandwidthRateLimit for more information on using the DeleteBandwidthRateLimit +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteBandwidthRateLimitRequest method. 
+// req, resp := client.DeleteBandwidthRateLimitRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteBandwidthRateLimit +func (c *StorageGateway) DeleteBandwidthRateLimitRequest(input *DeleteBandwidthRateLimitInput) (req *request.Request, output *DeleteBandwidthRateLimitOutput) { + op := &request.Operation{ + Name: opDeleteBandwidthRateLimit, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteBandwidthRateLimitInput{} + } + + output = &DeleteBandwidthRateLimitOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteBandwidthRateLimit API operation for AWS Storage Gateway. +// +// Deletes the bandwidth rate limits of a gateway. You can delete either the +// upload and download bandwidth rate limit, or you can delete both. If you +// delete only one of the limits, the other limit remains unchanged. To specify +// which gateway to work with, use the Amazon Resource Name (ARN) of the gateway +// in your request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DeleteBandwidthRateLimit for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteBandwidthRateLimit +func (c *StorageGateway) DeleteBandwidthRateLimit(input *DeleteBandwidthRateLimitInput) (*DeleteBandwidthRateLimitOutput, error) { + req, out := c.DeleteBandwidthRateLimitRequest(input) + return out, req.Send() +} + +// DeleteBandwidthRateLimitWithContext is the same as DeleteBandwidthRateLimit with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBandwidthRateLimit for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DeleteBandwidthRateLimitWithContext(ctx aws.Context, input *DeleteBandwidthRateLimitInput, opts ...request.Option) (*DeleteBandwidthRateLimitOutput, error) { + req, out := c.DeleteBandwidthRateLimitRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteChapCredentials = "DeleteChapCredentials" + +// DeleteChapCredentialsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteChapCredentials operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See DeleteChapCredentials for more information on using the DeleteChapCredentials +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteChapCredentialsRequest method. +// req, resp := client.DeleteChapCredentialsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteChapCredentials +func (c *StorageGateway) DeleteChapCredentialsRequest(input *DeleteChapCredentialsInput) (req *request.Request, output *DeleteChapCredentialsOutput) { + op := &request.Operation{ + Name: opDeleteChapCredentials, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteChapCredentialsInput{} + } + + output = &DeleteChapCredentialsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteChapCredentials API operation for AWS Storage Gateway. +// +// Deletes Challenge-Handshake Authentication Protocol (CHAP) credentials for +// a specified iSCSI target and initiator pair. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DeleteChapCredentials for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteChapCredentials +func (c *StorageGateway) DeleteChapCredentials(input *DeleteChapCredentialsInput) (*DeleteChapCredentialsOutput, error) { + req, out := c.DeleteChapCredentialsRequest(input) + return out, req.Send() +} + +// DeleteChapCredentialsWithContext is the same as DeleteChapCredentials with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteChapCredentials for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DeleteChapCredentialsWithContext(ctx aws.Context, input *DeleteChapCredentialsInput, opts ...request.Option) (*DeleteChapCredentialsOutput, error) { + req, out := c.DeleteChapCredentialsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteFileShare = "DeleteFileShare" + +// DeleteFileShareRequest generates a "aws/request.Request" representing the +// client's request for the DeleteFileShare operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DeleteFileShare for more information on using the DeleteFileShare +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteFileShareRequest method. +// req, resp := client.DeleteFileShareRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteFileShare +func (c *StorageGateway) DeleteFileShareRequest(input *DeleteFileShareInput) (req *request.Request, output *DeleteFileShareOutput) { + op := &request.Operation{ + Name: opDeleteFileShare, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteFileShareInput{} + } + + output = &DeleteFileShareOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteFileShare API operation for AWS Storage Gateway. +// +// Deletes a file share from a file gateway. This operation is only supported +// for file gateways. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DeleteFileShare for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteFileShare +func (c *StorageGateway) DeleteFileShare(input *DeleteFileShareInput) (*DeleteFileShareOutput, error) { + req, out := c.DeleteFileShareRequest(input) + return out, req.Send() +} + +// DeleteFileShareWithContext is the same as DeleteFileShare with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteFileShare for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DeleteFileShareWithContext(ctx aws.Context, input *DeleteFileShareInput, opts ...request.Option) (*DeleteFileShareOutput, error) { + req, out := c.DeleteFileShareRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteGateway = "DeleteGateway" + +// DeleteGatewayRequest generates a "aws/request.Request" representing the +// client's request for the DeleteGateway operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See DeleteGateway for more information on using the DeleteGateway +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteGatewayRequest method. +// req, resp := client.DeleteGatewayRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteGateway +func (c *StorageGateway) DeleteGatewayRequest(input *DeleteGatewayInput) (req *request.Request, output *DeleteGatewayOutput) { + op := &request.Operation{ + Name: opDeleteGateway, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteGatewayInput{} + } + + output = &DeleteGatewayOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteGateway API operation for AWS Storage Gateway. +// +// Deletes a gateway. To specify which gateway to delete, use the Amazon Resource +// Name (ARN) of the gateway in your request. The operation deletes the gateway; +// however, it does not delete the gateway virtual machine (VM) from your host +// computer. +// +// After you delete a gateway, you cannot reactivate it. Completed snapshots +// of the gateway volumes are not deleted upon deleting the gateway, however, +// pending snapshots will not complete. After you delete a gateway, your next +// step is to remove it from your environment. +// +// You no longer pay software charges after the gateway is deleted; however, +// your existing Amazon EBS snapshots persist and you will continue to be billed +// for these snapshots. You can choose to remove all remaining Amazon EBS snapshots +// by canceling your Amazon EC2 subscription.  If you prefer not to cancel your +// Amazon EC2 subscription, you can delete your snapshots using the Amazon EC2 +// console. For more information, see the AWS Storage Gateway Detail Page (http://aws.amazon.com/storagegateway). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DeleteGateway for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteGateway +func (c *StorageGateway) DeleteGateway(input *DeleteGatewayInput) (*DeleteGatewayOutput, error) { + req, out := c.DeleteGatewayRequest(input) + return out, req.Send() +} + +// DeleteGatewayWithContext is the same as DeleteGateway with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteGateway for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *StorageGateway) DeleteGatewayWithContext(ctx aws.Context, input *DeleteGatewayInput, opts ...request.Option) (*DeleteGatewayOutput, error) {
+ req, out := c.DeleteGatewayRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDeleteSnapshotSchedule = "DeleteSnapshotSchedule"
+
+// DeleteSnapshotScheduleRequest generates a "aws/request.Request" representing the
+// client's request for the DeleteSnapshotSchedule operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See DeleteSnapshotSchedule for more information on using the DeleteSnapshotSchedule
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the DeleteSnapshotScheduleRequest method.
+// req, resp := client.DeleteSnapshotScheduleRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteSnapshotSchedule
+func (c *StorageGateway) DeleteSnapshotScheduleRequest(input *DeleteSnapshotScheduleInput) (req *request.Request, output *DeleteSnapshotScheduleOutput) {
+ op := &request.Operation{
+ Name: opDeleteSnapshotSchedule,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteSnapshotScheduleInput{}
+ }
+
+ output = &DeleteSnapshotScheduleOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// DeleteSnapshotSchedule API operation for AWS Storage Gateway.
+//
+// Deletes the snapshot schedule of a volume.
+//
+// You can take snapshots of your gateway volumes on a scheduled or ad hoc basis.
+// This API action enables you to delete a snapshot schedule for a volume. For
+// more information, see Working with Snapshots (http://docs.aws.amazon.com/storagegateway/latest/userguide/WorkingWithSnapshots.html).
+// In the DeleteSnapshotSchedule request, you identify the volume by providing
+// its Amazon Resource Name (ARN). This operation is only supported in stored
+// and cached volume gateway types.
+//
+// To list or delete a snapshot, you must use the Amazon EC2 API. For more information,
+// see the Amazon Elastic Compute Cloud API Reference.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for AWS Storage Gateway's
+// API operation DeleteSnapshotSchedule for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException"
+// An exception occurred because an invalid gateway request was issued to the
+// service. For more information, see the error and message fields.
+//
+// * ErrCodeInternalServerError "InternalServerError"
+// An internal server error has occurred during the request. For more information,
+// see the error and message fields.
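+//
+// A minimal sketch of removing the snapshot schedule from a volume; the client
+// ("svc") and volume ARN below are illustrative assumptions:
+//
+//    _, err := svc.DeleteSnapshotSchedule(&storagegateway.DeleteSnapshotScheduleInput{
+//        VolumeARN: aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABB"),
+//    })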
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteSnapshotSchedule +func (c *StorageGateway) DeleteSnapshotSchedule(input *DeleteSnapshotScheduleInput) (*DeleteSnapshotScheduleOutput, error) { + req, out := c.DeleteSnapshotScheduleRequest(input) + return out, req.Send() +} + +// DeleteSnapshotScheduleWithContext is the same as DeleteSnapshotSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSnapshotSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DeleteSnapshotScheduleWithContext(ctx aws.Context, input *DeleteSnapshotScheduleInput, opts ...request.Option) (*DeleteSnapshotScheduleOutput, error) { + req, out := c.DeleteSnapshotScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteTape = "DeleteTape" + +// DeleteTapeRequest generates a "aws/request.Request" representing the +// client's request for the DeleteTape operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteTape for more information on using the DeleteTape +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteTapeRequest method. +// req, resp := client.DeleteTapeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteTape +func (c *StorageGateway) DeleteTapeRequest(input *DeleteTapeInput) (req *request.Request, output *DeleteTapeOutput) { + op := &request.Operation{ + Name: opDeleteTape, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteTapeInput{} + } + + output = &DeleteTapeOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteTape API operation for AWS Storage Gateway. +// +// Deletes the specified virtual tape. This operation is only supported in the +// tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DeleteTape for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteTape +func (c *StorageGateway) DeleteTape(input *DeleteTapeInput) (*DeleteTapeOutput, error) { + req, out := c.DeleteTapeRequest(input) + return out, req.Send() +} + +// DeleteTapeWithContext is the same as DeleteTape with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteTape for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DeleteTapeWithContext(ctx aws.Context, input *DeleteTapeInput, opts ...request.Option) (*DeleteTapeOutput, error) { + req, out := c.DeleteTapeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteTapeArchive = "DeleteTapeArchive" + +// DeleteTapeArchiveRequest generates a "aws/request.Request" representing the +// client's request for the DeleteTapeArchive operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteTapeArchive for more information on using the DeleteTapeArchive +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteTapeArchiveRequest method. +// req, resp := client.DeleteTapeArchiveRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteTapeArchive +func (c *StorageGateway) DeleteTapeArchiveRequest(input *DeleteTapeArchiveInput) (req *request.Request, output *DeleteTapeArchiveOutput) { + op := &request.Operation{ + Name: opDeleteTapeArchive, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteTapeArchiveInput{} + } + + output = &DeleteTapeArchiveOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteTapeArchive API operation for AWS Storage Gateway. +// +// Deletes the specified virtual tape from the virtual tape shelf (VTS). This +// operation is only supported in the tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DeleteTapeArchive for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteTapeArchive +func (c *StorageGateway) DeleteTapeArchive(input *DeleteTapeArchiveInput) (*DeleteTapeArchiveOutput, error) { + req, out := c.DeleteTapeArchiveRequest(input) + return out, req.Send() +} + +// DeleteTapeArchiveWithContext is the same as DeleteTapeArchive with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteTapeArchive for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DeleteTapeArchiveWithContext(ctx aws.Context, input *DeleteTapeArchiveInput, opts ...request.Option) (*DeleteTapeArchiveOutput, error) { + req, out := c.DeleteTapeArchiveRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteVolume = "DeleteVolume" + +// DeleteVolumeRequest generates a "aws/request.Request" representing the +// client's request for the DeleteVolume operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteVolume for more information on using the DeleteVolume +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteVolumeRequest method. +// req, resp := client.DeleteVolumeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteVolume +func (c *StorageGateway) DeleteVolumeRequest(input *DeleteVolumeInput) (req *request.Request, output *DeleteVolumeOutput) { + op := &request.Operation{ + Name: opDeleteVolume, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteVolumeInput{} + } + + output = &DeleteVolumeOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteVolume API operation for AWS Storage Gateway. +// +// Deletes the specified storage volume that you previously created using the +// CreateCachediSCSIVolume or CreateStorediSCSIVolume API. This operation is +// only supported in the cached volume and stored volume types. For stored volume +// gateways, the local disk that was configured as the storage volume is not +// deleted. You can reuse the local disk to create another storage volume. +// +// Before you delete a volume, make sure there are no iSCSI connections to the +// volume you are deleting. You should also make sure there is no snapshot in +// progress. You can use the Amazon Elastic Compute Cloud (Amazon EC2) API to +// query snapshots on the volume you are deleting and check the snapshot status. +// For more information, go to DescribeSnapshots (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeSnapshots.html) +// in the Amazon Elastic Compute Cloud API Reference. 
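+//
+// For example (a hedged sketch only; the client ("svc") and volume ARN are
+// assumed values), once there are no initiator connections or in-progress
+// snapshots:
+//
+//    _, err := svc.DeleteVolume(&storagegateway.DeleteVolumeInput{
+//        VolumeARN: aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABB"),
+//    })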
+// +// In the request, you must provide the Amazon Resource Name (ARN) of the storage +// volume you want to delete. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DeleteVolume for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DeleteVolume +func (c *StorageGateway) DeleteVolume(input *DeleteVolumeInput) (*DeleteVolumeOutput, error) { + req, out := c.DeleteVolumeRequest(input) + return out, req.Send() +} + +// DeleteVolumeWithContext is the same as DeleteVolume with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteVolume for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DeleteVolumeWithContext(ctx aws.Context, input *DeleteVolumeInput, opts ...request.Option) (*DeleteVolumeOutput, error) { + req, out := c.DeleteVolumeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeBandwidthRateLimit = "DescribeBandwidthRateLimit" + +// DescribeBandwidthRateLimitRequest generates a "aws/request.Request" representing the +// client's request for the DescribeBandwidthRateLimit operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeBandwidthRateLimit for more information on using the DescribeBandwidthRateLimit +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeBandwidthRateLimitRequest method. 
+// req, resp := client.DescribeBandwidthRateLimitRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeBandwidthRateLimit
+func (c *StorageGateway) DescribeBandwidthRateLimitRequest(input *DescribeBandwidthRateLimitInput) (req *request.Request, output *DescribeBandwidthRateLimitOutput) {
+ op := &request.Operation{
+ Name: opDescribeBandwidthRateLimit,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeBandwidthRateLimitInput{}
+ }
+
+ output = &DescribeBandwidthRateLimitOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// DescribeBandwidthRateLimit API operation for AWS Storage Gateway.
+//
+// Returns the bandwidth rate limits of a gateway. By default, these limits
+// are not set, which means no bandwidth rate limiting is in effect.
+//
+// This operation returns a value for a bandwidth rate limit only if the
+// limit is set. If no limits are set for the gateway, then this operation returns
+// only the gateway ARN in the response body. To specify which gateway to describe,
+// use the Amazon Resource Name (ARN) of the gateway in your request.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for AWS Storage Gateway's
+// API operation DescribeBandwidthRateLimit for usage and error information.
+//
+// Returned Error Codes:
+// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException"
+// An exception occurred because an invalid gateway request was issued to the
+// service. For more information, see the error and message fields.
+//
+// * ErrCodeInternalServerError "InternalServerError"
+// An internal server error has occurred during the request. For more information,
+// see the error and message fields.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeBandwidthRateLimit
+func (c *StorageGateway) DescribeBandwidthRateLimit(input *DescribeBandwidthRateLimitInput) (*DescribeBandwidthRateLimitOutput, error) {
+ req, out := c.DescribeBandwidthRateLimitRequest(input)
+ return out, req.Send()
+}
+
+// DescribeBandwidthRateLimitWithContext is the same as DescribeBandwidthRateLimit with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeBandwidthRateLimit for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *StorageGateway) DescribeBandwidthRateLimitWithContext(ctx aws.Context, input *DescribeBandwidthRateLimitInput, opts ...request.Option) (*DescribeBandwidthRateLimitOutput, error) {
+ req, out := c.DescribeBandwidthRateLimitRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDescribeCache = "DescribeCache"
+
+// DescribeCacheRequest generates a "aws/request.Request" representing the
+// client's request for the DescribeCache operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeCache for more information on using the DescribeCache +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeCacheRequest method. +// req, resp := client.DescribeCacheRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeCache +func (c *StorageGateway) DescribeCacheRequest(input *DescribeCacheInput) (req *request.Request, output *DescribeCacheOutput) { + op := &request.Operation{ + Name: opDescribeCache, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeCacheInput{} + } + + output = &DescribeCacheOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeCache API operation for AWS Storage Gateway. +// +// Returns information about the cache of a gateway. This operation is only +// supported in the cached volume, tape and file gateway types. +// +// The response includes disk IDs that are configured as cache, and it includes +// the amount of cache allocated and used. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeCache for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeCache +func (c *StorageGateway) DescribeCache(input *DescribeCacheInput) (*DescribeCacheOutput, error) { + req, out := c.DescribeCacheRequest(input) + return out, req.Send() +} + +// DescribeCacheWithContext is the same as DescribeCache with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeCache for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeCacheWithContext(ctx aws.Context, input *DescribeCacheInput, opts ...request.Option) (*DescribeCacheOutput, error) { + req, out := c.DescribeCacheRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeCachediSCSIVolumes = "DescribeCachediSCSIVolumes" + +// DescribeCachediSCSIVolumesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeCachediSCSIVolumes operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeCachediSCSIVolumes for more information on using the DescribeCachediSCSIVolumes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeCachediSCSIVolumesRequest method. +// req, resp := client.DescribeCachediSCSIVolumesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeCachediSCSIVolumes +func (c *StorageGateway) DescribeCachediSCSIVolumesRequest(input *DescribeCachediSCSIVolumesInput) (req *request.Request, output *DescribeCachediSCSIVolumesOutput) { + op := &request.Operation{ + Name: opDescribeCachediSCSIVolumes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeCachediSCSIVolumesInput{} + } + + output = &DescribeCachediSCSIVolumesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeCachediSCSIVolumes API operation for AWS Storage Gateway. +// +// Returns a description of the gateway volumes specified in the request. This +// operation is only supported in the cached volume gateway types. +// +// The list of gateway volumes in the request must be from one gateway. In the +// response Amazon Storage Gateway returns volume information sorted by volume +// Amazon Resource Name (ARN). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeCachediSCSIVolumes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeCachediSCSIVolumes +func (c *StorageGateway) DescribeCachediSCSIVolumes(input *DescribeCachediSCSIVolumesInput) (*DescribeCachediSCSIVolumesOutput, error) { + req, out := c.DescribeCachediSCSIVolumesRequest(input) + return out, req.Send() +} + +// DescribeCachediSCSIVolumesWithContext is the same as DescribeCachediSCSIVolumes with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeCachediSCSIVolumes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
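+//
+// A minimal sketch of calling this context-aware variant with a timeout; the
+// client ("svc"), context setup, and volume ARN below are illustrative
+// assumptions (the standard library "context" and "time" packages are required):
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    out, err := svc.DescribeCachediSCSIVolumesWithContext(ctx, &storagegateway.DescribeCachediSCSIVolumesInput{
+//        VolumeARNs: []*string{aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABB")},
+//    })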
+func (c *StorageGateway) DescribeCachediSCSIVolumesWithContext(ctx aws.Context, input *DescribeCachediSCSIVolumesInput, opts ...request.Option) (*DescribeCachediSCSIVolumesOutput, error) { + req, out := c.DescribeCachediSCSIVolumesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeChapCredentials = "DescribeChapCredentials" + +// DescribeChapCredentialsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeChapCredentials operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeChapCredentials for more information on using the DescribeChapCredentials +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeChapCredentialsRequest method. +// req, resp := client.DescribeChapCredentialsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeChapCredentials +func (c *StorageGateway) DescribeChapCredentialsRequest(input *DescribeChapCredentialsInput) (req *request.Request, output *DescribeChapCredentialsOutput) { + op := &request.Operation{ + Name: opDescribeChapCredentials, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeChapCredentialsInput{} + } + + output = &DescribeChapCredentialsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeChapCredentials API operation for AWS Storage Gateway. +// +// Returns an array of Challenge-Handshake Authentication Protocol (CHAP) credentials +// information for a specified iSCSI target, one for each target-initiator pair. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeChapCredentials for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeChapCredentials +func (c *StorageGateway) DescribeChapCredentials(input *DescribeChapCredentialsInput) (*DescribeChapCredentialsOutput, error) { + req, out := c.DescribeChapCredentialsRequest(input) + return out, req.Send() +} + +// DescribeChapCredentialsWithContext is the same as DescribeChapCredentials with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeChapCredentials for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeChapCredentialsWithContext(ctx aws.Context, input *DescribeChapCredentialsInput, opts ...request.Option) (*DescribeChapCredentialsOutput, error) { + req, out := c.DescribeChapCredentialsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeGatewayInformation = "DescribeGatewayInformation" + +// DescribeGatewayInformationRequest generates a "aws/request.Request" representing the +// client's request for the DescribeGatewayInformation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeGatewayInformation for more information on using the DescribeGatewayInformation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeGatewayInformationRequest method. +// req, resp := client.DescribeGatewayInformationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeGatewayInformation +func (c *StorageGateway) DescribeGatewayInformationRequest(input *DescribeGatewayInformationInput) (req *request.Request, output *DescribeGatewayInformationOutput) { + op := &request.Operation{ + Name: opDescribeGatewayInformation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeGatewayInformationInput{} + } + + output = &DescribeGatewayInformationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeGatewayInformation API operation for AWS Storage Gateway. +// +// Returns metadata about a gateway such as its name, network interfaces, configured +// time zone, and the state (whether the gateway is running or not). To specify +// which gateway to describe, use the Amazon Resource Name (ARN) of the gateway +// in your request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeGatewayInformation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
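+//
+//    // Illustrative sketch, assuming "svc" is an already-constructed
+//    // *storagegateway.StorageGateway client and the gateway ARN is a placeholder:
+//    out, err := svc.DescribeGatewayInformation(&storagegateway.DescribeGatewayInformationInput{
+//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
+//    })
+//    if err != nil {
+//        if aerr, ok := err.(awserr.Error); ok {
+//            fmt.Println(aerr.Code(), aerr.Message())
+//        }
+//        return
+//    }
+//    fmt.Println(aws.StringValue(out.GatewayState), aws.StringValue(out.GatewayTimezone))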
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeGatewayInformation +func (c *StorageGateway) DescribeGatewayInformation(input *DescribeGatewayInformationInput) (*DescribeGatewayInformationOutput, error) { + req, out := c.DescribeGatewayInformationRequest(input) + return out, req.Send() +} + +// DescribeGatewayInformationWithContext is the same as DescribeGatewayInformation with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeGatewayInformation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeGatewayInformationWithContext(ctx aws.Context, input *DescribeGatewayInformationInput, opts ...request.Option) (*DescribeGatewayInformationOutput, error) { + req, out := c.DescribeGatewayInformationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeMaintenanceStartTime = "DescribeMaintenanceStartTime" + +// DescribeMaintenanceStartTimeRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMaintenanceStartTime operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeMaintenanceStartTime for more information on using the DescribeMaintenanceStartTime +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMaintenanceStartTimeRequest method. +// req, resp := client.DescribeMaintenanceStartTimeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeMaintenanceStartTime +func (c *StorageGateway) DescribeMaintenanceStartTimeRequest(input *DescribeMaintenanceStartTimeInput) (req *request.Request, output *DescribeMaintenanceStartTimeOutput) { + op := &request.Operation{ + Name: opDescribeMaintenanceStartTime, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeMaintenanceStartTimeInput{} + } + + output = &DescribeMaintenanceStartTimeOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMaintenanceStartTime API operation for AWS Storage Gateway. +// +// Returns your gateway's weekly maintenance start time including the day and +// time of the week. Note that values are in terms of the gateway's time zone. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeMaintenanceStartTime for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeMaintenanceStartTime +func (c *StorageGateway) DescribeMaintenanceStartTime(input *DescribeMaintenanceStartTimeInput) (*DescribeMaintenanceStartTimeOutput, error) { + req, out := c.DescribeMaintenanceStartTimeRequest(input) + return out, req.Send() +} + +// DescribeMaintenanceStartTimeWithContext is the same as DescribeMaintenanceStartTime with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMaintenanceStartTime for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeMaintenanceStartTimeWithContext(ctx aws.Context, input *DescribeMaintenanceStartTimeInput, opts ...request.Option) (*DescribeMaintenanceStartTimeOutput, error) { + req, out := c.DescribeMaintenanceStartTimeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeNFSFileShares = "DescribeNFSFileShares" + +// DescribeNFSFileSharesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeNFSFileShares operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeNFSFileShares for more information on using the DescribeNFSFileShares +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeNFSFileSharesRequest method. +// req, resp := client.DescribeNFSFileSharesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeNFSFileShares +func (c *StorageGateway) DescribeNFSFileSharesRequest(input *DescribeNFSFileSharesInput) (req *request.Request, output *DescribeNFSFileSharesOutput) { + op := &request.Operation{ + Name: opDescribeNFSFileShares, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeNFSFileSharesInput{} + } + + output = &DescribeNFSFileSharesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeNFSFileShares API operation for AWS Storage Gateway. +// +// Gets a description for one or more Network File System (NFS) file shares +// from a file gateway. This operation is only supported for file gateways. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeNFSFileShares for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeNFSFileShares +func (c *StorageGateway) DescribeNFSFileShares(input *DescribeNFSFileSharesInput) (*DescribeNFSFileSharesOutput, error) { + req, out := c.DescribeNFSFileSharesRequest(input) + return out, req.Send() +} + +// DescribeNFSFileSharesWithContext is the same as DescribeNFSFileShares with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeNFSFileShares for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeNFSFileSharesWithContext(ctx aws.Context, input *DescribeNFSFileSharesInput, opts ...request.Option) (*DescribeNFSFileSharesOutput, error) { + req, out := c.DescribeNFSFileSharesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeSMBFileShares = "DescribeSMBFileShares" + +// DescribeSMBFileSharesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSMBFileShares operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeSMBFileShares for more information on using the DescribeSMBFileShares +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeSMBFileSharesRequest method. +// req, resp := client.DescribeSMBFileSharesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeSMBFileShares +func (c *StorageGateway) DescribeSMBFileSharesRequest(input *DescribeSMBFileSharesInput) (req *request.Request, output *DescribeSMBFileSharesOutput) { + op := &request.Operation{ + Name: opDescribeSMBFileShares, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeSMBFileSharesInput{} + } + + output = &DescribeSMBFileSharesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeSMBFileShares API operation for AWS Storage Gateway. 
+// +// Gets a description for one or more Server Message Block (SMB) file shares +// from a file gateway. This operation is only supported for file gateways. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeSMBFileShares for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeSMBFileShares +func (c *StorageGateway) DescribeSMBFileShares(input *DescribeSMBFileSharesInput) (*DescribeSMBFileSharesOutput, error) { + req, out := c.DescribeSMBFileSharesRequest(input) + return out, req.Send() +} + +// DescribeSMBFileSharesWithContext is the same as DescribeSMBFileShares with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSMBFileShares for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeSMBFileSharesWithContext(ctx aws.Context, input *DescribeSMBFileSharesInput, opts ...request.Option) (*DescribeSMBFileSharesOutput, error) { + req, out := c.DescribeSMBFileSharesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeSMBSettings = "DescribeSMBSettings" + +// DescribeSMBSettingsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSMBSettings operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeSMBSettings for more information on using the DescribeSMBSettings +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeSMBSettingsRequest method. 
+// req, resp := client.DescribeSMBSettingsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeSMBSettings +func (c *StorageGateway) DescribeSMBSettingsRequest(input *DescribeSMBSettingsInput) (req *request.Request, output *DescribeSMBSettingsOutput) { + op := &request.Operation{ + Name: opDescribeSMBSettings, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeSMBSettingsInput{} + } + + output = &DescribeSMBSettingsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeSMBSettings API operation for AWS Storage Gateway. +// +// Gets a description of a Server Message Block (SMB) file share settings from +// a file gateway. This operation is only supported for file gateways. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeSMBSettings for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeSMBSettings +func (c *StorageGateway) DescribeSMBSettings(input *DescribeSMBSettingsInput) (*DescribeSMBSettingsOutput, error) { + req, out := c.DescribeSMBSettingsRequest(input) + return out, req.Send() +} + +// DescribeSMBSettingsWithContext is the same as DescribeSMBSettings with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSMBSettings for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeSMBSettingsWithContext(ctx aws.Context, input *DescribeSMBSettingsInput, opts ...request.Option) (*DescribeSMBSettingsOutput, error) { + req, out := c.DescribeSMBSettingsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeSnapshotSchedule = "DescribeSnapshotSchedule" + +// DescribeSnapshotScheduleRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSnapshotSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeSnapshotSchedule for more information on using the DescribeSnapshotSchedule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
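+//
+//    // Illustrative sketch of injecting a custom header before Send, assuming "svc"
+//    // is an already-constructed *storagegateway.StorageGateway client; the header
+//    // name and the volume ARN are placeholders:
+//    req, resp := svc.DescribeSnapshotScheduleRequest(&storagegateway.DescribeSnapshotScheduleInput{
+//        VolumeARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678/volume/vol-12345678"),
+//    })
+//    req.HTTPRequest.Header.Set("X-Example-Trace-Id", "demo")
+//    if err := req.Send(); err == nil {
+//        fmt.Println(resp)
+//    }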
+// +// +// // Example sending a request using the DescribeSnapshotScheduleRequest method. +// req, resp := client.DescribeSnapshotScheduleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeSnapshotSchedule +func (c *StorageGateway) DescribeSnapshotScheduleRequest(input *DescribeSnapshotScheduleInput) (req *request.Request, output *DescribeSnapshotScheduleOutput) { + op := &request.Operation{ + Name: opDescribeSnapshotSchedule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeSnapshotScheduleInput{} + } + + output = &DescribeSnapshotScheduleOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeSnapshotSchedule API operation for AWS Storage Gateway. +// +// Describes the snapshot schedule for the specified gateway volume. The snapshot +// schedule information includes intervals at which snapshots are automatically +// initiated on the volume. This operation is only supported in the cached volume +// and stored volume types. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeSnapshotSchedule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeSnapshotSchedule +func (c *StorageGateway) DescribeSnapshotSchedule(input *DescribeSnapshotScheduleInput) (*DescribeSnapshotScheduleOutput, error) { + req, out := c.DescribeSnapshotScheduleRequest(input) + return out, req.Send() +} + +// DescribeSnapshotScheduleWithContext is the same as DescribeSnapshotSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSnapshotSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeSnapshotScheduleWithContext(ctx aws.Context, input *DescribeSnapshotScheduleInput, opts ...request.Option) (*DescribeSnapshotScheduleOutput, error) { + req, out := c.DescribeSnapshotScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeStorediSCSIVolumes = "DescribeStorediSCSIVolumes" + +// DescribeStorediSCSIVolumesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeStorediSCSIVolumes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DescribeStorediSCSIVolumes for more information on using the DescribeStorediSCSIVolumes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeStorediSCSIVolumesRequest method. +// req, resp := client.DescribeStorediSCSIVolumesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeStorediSCSIVolumes +func (c *StorageGateway) DescribeStorediSCSIVolumesRequest(input *DescribeStorediSCSIVolumesInput) (req *request.Request, output *DescribeStorediSCSIVolumesOutput) { + op := &request.Operation{ + Name: opDescribeStorediSCSIVolumes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeStorediSCSIVolumesInput{} + } + + output = &DescribeStorediSCSIVolumesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeStorediSCSIVolumes API operation for AWS Storage Gateway. +// +// Returns the description of the gateway volumes specified in the request. +// The list of gateway volumes in the request must be from one gateway. In the +// response Amazon Storage Gateway returns volume information sorted by volume +// ARNs. This operation is only supported in stored volume gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeStorediSCSIVolumes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeStorediSCSIVolumes +func (c *StorageGateway) DescribeStorediSCSIVolumes(input *DescribeStorediSCSIVolumesInput) (*DescribeStorediSCSIVolumesOutput, error) { + req, out := c.DescribeStorediSCSIVolumesRequest(input) + return out, req.Send() +} + +// DescribeStorediSCSIVolumesWithContext is the same as DescribeStorediSCSIVolumes with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeStorediSCSIVolumes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeStorediSCSIVolumesWithContext(ctx aws.Context, input *DescribeStorediSCSIVolumesInput, opts ...request.Option) (*DescribeStorediSCSIVolumesOutput, error) { + req, out := c.DescribeStorediSCSIVolumesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeTapeArchives = "DescribeTapeArchives" + +// DescribeTapeArchivesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeTapeArchives operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeTapeArchives for more information on using the DescribeTapeArchives +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeTapeArchivesRequest method. +// req, resp := client.DescribeTapeArchivesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapeArchives +func (c *StorageGateway) DescribeTapeArchivesRequest(input *DescribeTapeArchivesInput) (req *request.Request, output *DescribeTapeArchivesOutput) { + op := &request.Operation{ + Name: opDescribeTapeArchives, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeTapeArchivesInput{} + } + + output = &DescribeTapeArchivesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeTapeArchives API operation for AWS Storage Gateway. +// +// Returns a description of specified virtual tapes in the virtual tape shelf +// (VTS). This operation is only supported in the tape gateway type. +// +// If a specific TapeARN is not specified, AWS Storage Gateway returns a description +// of all virtual tapes found in the VTS associated with your account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeTapeArchives for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapeArchives +func (c *StorageGateway) DescribeTapeArchives(input *DescribeTapeArchivesInput) (*DescribeTapeArchivesOutput, error) { + req, out := c.DescribeTapeArchivesRequest(input) + return out, req.Send() +} + +// DescribeTapeArchivesWithContext is the same as DescribeTapeArchives with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeTapeArchives for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeTapeArchivesWithContext(ctx aws.Context, input *DescribeTapeArchivesInput, opts ...request.Option) (*DescribeTapeArchivesOutput, error) { + req, out := c.DescribeTapeArchivesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeTapeArchivesPages iterates over the pages of a DescribeTapeArchives operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeTapeArchives method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeTapeArchives operation. +// pageNum := 0 +// err := client.DescribeTapeArchivesPages(params, +// func(page *DescribeTapeArchivesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *StorageGateway) DescribeTapeArchivesPages(input *DescribeTapeArchivesInput, fn func(*DescribeTapeArchivesOutput, bool) bool) error { + return c.DescribeTapeArchivesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeTapeArchivesPagesWithContext same as DescribeTapeArchivesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeTapeArchivesPagesWithContext(ctx aws.Context, input *DescribeTapeArchivesInput, fn func(*DescribeTapeArchivesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeTapeArchivesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeTapeArchivesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeTapeArchivesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeTapeRecoveryPoints = "DescribeTapeRecoveryPoints" + +// DescribeTapeRecoveryPointsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeTapeRecoveryPoints operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeTapeRecoveryPoints for more information on using the DescribeTapeRecoveryPoints +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeTapeRecoveryPointsRequest method. 
+// req, resp := client.DescribeTapeRecoveryPointsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapeRecoveryPoints +func (c *StorageGateway) DescribeTapeRecoveryPointsRequest(input *DescribeTapeRecoveryPointsInput) (req *request.Request, output *DescribeTapeRecoveryPointsOutput) { + op := &request.Operation{ + Name: opDescribeTapeRecoveryPoints, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeTapeRecoveryPointsInput{} + } + + output = &DescribeTapeRecoveryPointsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeTapeRecoveryPoints API operation for AWS Storage Gateway. +// +// Returns a list of virtual tape recovery points that are available for the +// specified tape gateway. +// +// A recovery point is a point-in-time view of a virtual tape at which all the +// data on the virtual tape is consistent. If your gateway crashes, virtual +// tapes that have recovery points can be recovered to a new gateway. This operation +// is only supported in the tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeTapeRecoveryPoints for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapeRecoveryPoints +func (c *StorageGateway) DescribeTapeRecoveryPoints(input *DescribeTapeRecoveryPointsInput) (*DescribeTapeRecoveryPointsOutput, error) { + req, out := c.DescribeTapeRecoveryPointsRequest(input) + return out, req.Send() +} + +// DescribeTapeRecoveryPointsWithContext is the same as DescribeTapeRecoveryPoints with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeTapeRecoveryPoints for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeTapeRecoveryPointsWithContext(ctx aws.Context, input *DescribeTapeRecoveryPointsInput, opts ...request.Option) (*DescribeTapeRecoveryPointsOutput, error) { + req, out := c.DescribeTapeRecoveryPointsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeTapeRecoveryPointsPages iterates over the pages of a DescribeTapeRecoveryPoints operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. 
+// +// See DescribeTapeRecoveryPoints method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeTapeRecoveryPoints operation. +// pageNum := 0 +// err := client.DescribeTapeRecoveryPointsPages(params, +// func(page *DescribeTapeRecoveryPointsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *StorageGateway) DescribeTapeRecoveryPointsPages(input *DescribeTapeRecoveryPointsInput, fn func(*DescribeTapeRecoveryPointsOutput, bool) bool) error { + return c.DescribeTapeRecoveryPointsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeTapeRecoveryPointsPagesWithContext same as DescribeTapeRecoveryPointsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeTapeRecoveryPointsPagesWithContext(ctx aws.Context, input *DescribeTapeRecoveryPointsInput, fn func(*DescribeTapeRecoveryPointsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeTapeRecoveryPointsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeTapeRecoveryPointsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeTapeRecoveryPointsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeTapes = "DescribeTapes" + +// DescribeTapesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeTapes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeTapes for more information on using the DescribeTapes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeTapesRequest method. +// req, resp := client.DescribeTapesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapes +func (c *StorageGateway) DescribeTapesRequest(input *DescribeTapesInput) (req *request.Request, output *DescribeTapesOutput) { + op := &request.Operation{ + Name: opDescribeTapes, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeTapesInput{} + } + + output = &DescribeTapesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeTapes API operation for AWS Storage Gateway. 
+// +// Returns a description of the specified Amazon Resource Name (ARN) of virtual +// tapes. If a TapeARN is not specified, returns a description of all virtual +// tapes associated with the specified gateway. This operation is only supported +// in the tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeTapes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeTapes +func (c *StorageGateway) DescribeTapes(input *DescribeTapesInput) (*DescribeTapesOutput, error) { + req, out := c.DescribeTapesRequest(input) + return out, req.Send() +} + +// DescribeTapesWithContext is the same as DescribeTapes with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeTapes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeTapesWithContext(ctx aws.Context, input *DescribeTapesInput, opts ...request.Option) (*DescribeTapesOutput, error) { + req, out := c.DescribeTapesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeTapesPages iterates over the pages of a DescribeTapes operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeTapes method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeTapes operation. +// pageNum := 0 +// err := client.DescribeTapesPages(params, +// func(page *DescribeTapesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *StorageGateway) DescribeTapesPages(input *DescribeTapesInput, fn func(*DescribeTapesOutput, bool) bool) error { + return c.DescribeTapesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeTapesPagesWithContext same as DescribeTapesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
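+//
+//    // Illustrative sketch of collecting all tape ARNs with the paginated,
+//    // context-aware variant, assuming "svc" is an already-constructed
+//    // *storagegateway.StorageGateway client and the gateway ARN is a placeholder:
+//    var tapeARNs []*string
+//    err := svc.DescribeTapesPagesWithContext(aws.BackgroundContext(),
+//        &storagegateway.DescribeTapesInput{
+//            GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
+//        },
+//        func(page *storagegateway.DescribeTapesOutput, lastPage bool) bool {
+//            for _, tape := range page.Tapes {
+//                tapeARNs = append(tapeARNs, tape.TapeARN)
+//            }
+//            return true // keep paging until the last page
+//        })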
+func (c *StorageGateway) DescribeTapesPagesWithContext(ctx aws.Context, input *DescribeTapesInput, fn func(*DescribeTapesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeTapesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeTapesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeTapesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeUploadBuffer = "DescribeUploadBuffer" + +// DescribeUploadBufferRequest generates a "aws/request.Request" representing the +// client's request for the DescribeUploadBuffer operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeUploadBuffer for more information on using the DescribeUploadBuffer +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeUploadBufferRequest method. +// req, resp := client.DescribeUploadBufferRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeUploadBuffer +func (c *StorageGateway) DescribeUploadBufferRequest(input *DescribeUploadBufferInput) (req *request.Request, output *DescribeUploadBufferOutput) { + op := &request.Operation{ + Name: opDescribeUploadBuffer, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeUploadBufferInput{} + } + + output = &DescribeUploadBufferOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeUploadBuffer API operation for AWS Storage Gateway. +// +// Returns information about the upload buffer of a gateway. This operation +// is supported for the stored volume, cached volume and tape gateway types. +// +// The response includes disk IDs that are configured as upload buffer space, +// and it includes the amount of upload buffer space allocated and used. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeUploadBuffer for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
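+//
+//    // Illustrative sketch of reporting upload buffer usage, assuming "svc" is an
+//    // already-constructed *storagegateway.StorageGateway client and the gateway ARN
+//    // is a placeholder:
+//    out, err := svc.DescribeUploadBuffer(&storagegateway.DescribeUploadBufferInput{
+//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
+//    })
+//    if err == nil {
+//        fmt.Printf("upload buffer: %d of %d bytes used\n",
+//            aws.Int64Value(out.UploadBufferUsedInBytes),
+//            aws.Int64Value(out.UploadBufferAllocatedInBytes))
+//    }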
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeUploadBuffer +func (c *StorageGateway) DescribeUploadBuffer(input *DescribeUploadBufferInput) (*DescribeUploadBufferOutput, error) { + req, out := c.DescribeUploadBufferRequest(input) + return out, req.Send() +} + +// DescribeUploadBufferWithContext is the same as DescribeUploadBuffer with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeUploadBuffer for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeUploadBufferWithContext(ctx aws.Context, input *DescribeUploadBufferInput, opts ...request.Option) (*DescribeUploadBufferOutput, error) { + req, out := c.DescribeUploadBufferRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeVTLDevices = "DescribeVTLDevices" + +// DescribeVTLDevicesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeVTLDevices operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeVTLDevices for more information on using the DescribeVTLDevices +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeVTLDevicesRequest method. +// req, resp := client.DescribeVTLDevicesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeVTLDevices +func (c *StorageGateway) DescribeVTLDevicesRequest(input *DescribeVTLDevicesInput) (req *request.Request, output *DescribeVTLDevicesOutput) { + op := &request.Operation{ + Name: opDescribeVTLDevices, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeVTLDevicesInput{} + } + + output = &DescribeVTLDevicesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeVTLDevices API operation for AWS Storage Gateway. +// +// Returns a description of virtual tape library (VTL) devices for the specified +// tape gateway. In the response, AWS Storage Gateway returns VTL device information. +// +// This operation is only supported in the tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeVTLDevices for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeVTLDevices +func (c *StorageGateway) DescribeVTLDevices(input *DescribeVTLDevicesInput) (*DescribeVTLDevicesOutput, error) { + req, out := c.DescribeVTLDevicesRequest(input) + return out, req.Send() +} + +// DescribeVTLDevicesWithContext is the same as DescribeVTLDevices with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeVTLDevices for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeVTLDevicesWithContext(ctx aws.Context, input *DescribeVTLDevicesInput, opts ...request.Option) (*DescribeVTLDevicesOutput, error) { + req, out := c.DescribeVTLDevicesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeVTLDevicesPages iterates over the pages of a DescribeVTLDevices operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeVTLDevices method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeVTLDevices operation. +// pageNum := 0 +// err := client.DescribeVTLDevicesPages(params, +// func(page *DescribeVTLDevicesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *StorageGateway) DescribeVTLDevicesPages(input *DescribeVTLDevicesInput, fn func(*DescribeVTLDevicesOutput, bool) bool) error { + return c.DescribeVTLDevicesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeVTLDevicesPagesWithContext same as DescribeVTLDevicesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeVTLDevicesPagesWithContext(ctx aws.Context, input *DescribeVTLDevicesInput, fn func(*DescribeVTLDevicesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeVTLDevicesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeVTLDevicesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeVTLDevicesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeWorkingStorage = "DescribeWorkingStorage" + +// DescribeWorkingStorageRequest generates a "aws/request.Request" representing the +// client's request for the DescribeWorkingStorage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeWorkingStorage for more information on using the DescribeWorkingStorage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeWorkingStorageRequest method. +// req, resp := client.DescribeWorkingStorageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeWorkingStorage +func (c *StorageGateway) DescribeWorkingStorageRequest(input *DescribeWorkingStorageInput) (req *request.Request, output *DescribeWorkingStorageOutput) { + op := &request.Operation{ + Name: opDescribeWorkingStorage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeWorkingStorageInput{} + } + + output = &DescribeWorkingStorageOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeWorkingStorage API operation for AWS Storage Gateway. +// +// Returns information about the working storage of a gateway. This operation +// is only supported in the stored volumes gateway type. This operation is deprecated +// in cached volumes API version (20120630). Use DescribeUploadBuffer instead. +// +// Working storage is also referred to as upload buffer. You can also use the +// DescribeUploadBuffer operation to add upload buffer to a stored volume gateway. +// +// The response includes disk IDs that are configured as working storage, and +// it includes the amount of working storage allocated and used. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DescribeWorkingStorage for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
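+//
+//    // Illustrative sketch, assuming "svc" is an already-constructed
+//    // *storagegateway.StorageGateway client and the gateway ARN is a placeholder
+//    // (as noted above, DescribeUploadBuffer is the preferred operation):
+//    out, err := svc.DescribeWorkingStorage(&storagegateway.DescribeWorkingStorageInput{
+//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
+//    })
+//    if err == nil {
+//        fmt.Printf("working storage: %d of %d bytes used\n",
+//            aws.Int64Value(out.WorkingStorageUsedInBytes),
+//            aws.Int64Value(out.WorkingStorageAllocatedInBytes))
+//    }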
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DescribeWorkingStorage +func (c *StorageGateway) DescribeWorkingStorage(input *DescribeWorkingStorageInput) (*DescribeWorkingStorageOutput, error) { + req, out := c.DescribeWorkingStorageRequest(input) + return out, req.Send() +} + +// DescribeWorkingStorageWithContext is the same as DescribeWorkingStorage with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeWorkingStorage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DescribeWorkingStorageWithContext(ctx aws.Context, input *DescribeWorkingStorageInput, opts ...request.Option) (*DescribeWorkingStorageOutput, error) { + req, out := c.DescribeWorkingStorageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDisableGateway = "DisableGateway" + +// DisableGatewayRequest generates a "aws/request.Request" representing the +// client's request for the DisableGateway operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DisableGateway for more information on using the DisableGateway +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisableGatewayRequest method. +// req, resp := client.DisableGatewayRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DisableGateway +func (c *StorageGateway) DisableGatewayRequest(input *DisableGatewayInput) (req *request.Request, output *DisableGatewayOutput) { + op := &request.Operation{ + Name: opDisableGateway, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DisableGatewayInput{} + } + + output = &DisableGatewayOutput{} + req = c.newRequest(op, input, output) + return +} + +// DisableGateway API operation for AWS Storage Gateway. +// +// Disables a tape gateway when the gateway is no longer functioning. For example, +// if your gateway VM is damaged, you can disable the gateway so you can recover +// virtual tapes. +// +// Use this operation for a tape gateway that is not reachable or not functioning. +// This operation is only supported in the tape gateway type. +// +// Once a gateway is disabled it cannot be enabled. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation DisableGateway for usage and error information. 
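// A minimal sketch of disabling an unreachable tape gateway with the client
// assumed above; the ARN is a placeholder. Because a disabled gateway cannot
// be re-enabled, this is typically done only as part of tape recovery.
//
//    _, err := svc.DisableGateway(&storagegateway.DisableGatewayInput{
//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
//    })
//    if err != nil {
//        fmt.Println(err)
//    }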
+// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/DisableGateway +func (c *StorageGateway) DisableGateway(input *DisableGatewayInput) (*DisableGatewayOutput, error) { + req, out := c.DisableGatewayRequest(input) + return out, req.Send() +} + +// DisableGatewayWithContext is the same as DisableGateway with the addition of +// the ability to pass a context and additional request options. +// +// See DisableGateway for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) DisableGatewayWithContext(ctx aws.Context, input *DisableGatewayInput, opts ...request.Option) (*DisableGatewayOutput, error) { + req, out := c.DisableGatewayRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opJoinDomain = "JoinDomain" + +// JoinDomainRequest generates a "aws/request.Request" representing the +// client's request for the JoinDomain operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See JoinDomain for more information on using the JoinDomain +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the JoinDomainRequest method. +// req, resp := client.JoinDomainRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/JoinDomain +func (c *StorageGateway) JoinDomainRequest(input *JoinDomainInput) (req *request.Request, output *JoinDomainOutput) { + op := &request.Operation{ + Name: opJoinDomain, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &JoinDomainInput{} + } + + output = &JoinDomainOutput{} + req = c.newRequest(op, input, output) + return +} + +// JoinDomain API operation for AWS Storage Gateway. +// +// Adds a file gateway to an Active Directory domain. This operation is only +// supported for file gateways that support the SMB file protocol. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation JoinDomain for usage and error information. 
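// A minimal sketch of joining an SMB file gateway to an Active Directory
// domain with the client assumed above. The values are placeholders, and the
// field names shown (GatewayARN, DomainName, UserName, Password) should be
// checked against the generated JoinDomainInput type.
//
//    out, err := svc.JoinDomain(&storagegateway.JoinDomainInput{
//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
//        DomainName: aws.String("corp.example.com"),
//        UserName:   aws.String("admin"),
//        Password:   aws.String("placeholder-password"),
//    })
//    if err != nil {
//        fmt.Println(err)
//        return
//    }
//    fmt.Println(out)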
+// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/JoinDomain +func (c *StorageGateway) JoinDomain(input *JoinDomainInput) (*JoinDomainOutput, error) { + req, out := c.JoinDomainRequest(input) + return out, req.Send() +} + +// JoinDomainWithContext is the same as JoinDomain with the addition of +// the ability to pass a context and additional request options. +// +// See JoinDomain for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) JoinDomainWithContext(ctx aws.Context, input *JoinDomainInput, opts ...request.Option) (*JoinDomainOutput, error) { + req, out := c.JoinDomainRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListFileShares = "ListFileShares" + +// ListFileSharesRequest generates a "aws/request.Request" representing the +// client's request for the ListFileShares operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListFileShares for more information on using the ListFileShares +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListFileSharesRequest method. +// req, resp := client.ListFileSharesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListFileShares +func (c *StorageGateway) ListFileSharesRequest(input *ListFileSharesInput) (req *request.Request, output *ListFileSharesOutput) { + op := &request.Operation{ + Name: opListFileShares, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListFileSharesInput{} + } + + output = &ListFileSharesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListFileShares API operation for AWS Storage Gateway. +// +// Gets a list of the file shares for a specific file gateway, or the list of +// file shares that belong to the calling user account. This operation is only +// supported for file gateways. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ListFileShares for usage and error information. 
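// A minimal sketch of listing the file shares for one gateway with the client
// assumed above. GatewayARN is optional (omit it to list shares across the
// account), and the output field name used here (FileShareInfoList) should be
// checked against the generated ListFileSharesOutput type.
//
//    out, err := svc.ListFileShares(&storagegateway.ListFileSharesInput{
//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
//        Limit:      aws.Int64(25),
//    })
//    if err != nil {
//        fmt.Println(err)
//        return
//    }
//    for _, share := range out.FileShareInfoList {
//        fmt.Println(share)
//    }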
+// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListFileShares +func (c *StorageGateway) ListFileShares(input *ListFileSharesInput) (*ListFileSharesOutput, error) { + req, out := c.ListFileSharesRequest(input) + return out, req.Send() +} + +// ListFileSharesWithContext is the same as ListFileShares with the addition of +// the ability to pass a context and additional request options. +// +// See ListFileShares for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListFileSharesWithContext(ctx aws.Context, input *ListFileSharesInput, opts ...request.Option) (*ListFileSharesOutput, error) { + req, out := c.ListFileSharesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListGateways = "ListGateways" + +// ListGatewaysRequest generates a "aws/request.Request" representing the +// client's request for the ListGateways operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListGateways for more information on using the ListGateways +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListGatewaysRequest method. +// req, resp := client.ListGatewaysRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListGateways +func (c *StorageGateway) ListGatewaysRequest(input *ListGatewaysInput) (req *request.Request, output *ListGatewaysOutput) { + op := &request.Operation{ + Name: opListGateways, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListGatewaysInput{} + } + + output = &ListGatewaysOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListGateways API operation for AWS Storage Gateway. +// +// Lists gateways owned by an AWS account in a region specified in the request. +// The returned list is ordered by gateway Amazon Resource Name (ARN). +// +// By default, the operation returns a maximum of 100 gateways. This operation +// supports pagination that allows you to optionally reduce the number of gateways +// returned in a response. 
+// +// If you have more gateways than are returned in a response (that is, the response +// returns only a truncated list of your gateways), the response contains a +// marker that you can specify in your next request to fetch the next page of +// gateways. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ListGateways for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListGateways +func (c *StorageGateway) ListGateways(input *ListGatewaysInput) (*ListGatewaysOutput, error) { + req, out := c.ListGatewaysRequest(input) + return out, req.Send() +} + +// ListGatewaysWithContext is the same as ListGateways with the addition of +// the ability to pass a context and additional request options. +// +// See ListGateways for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListGatewaysWithContext(ctx aws.Context, input *ListGatewaysInput, opts ...request.Option) (*ListGatewaysOutput, error) { + req, out := c.ListGatewaysRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListGatewaysPages iterates over the pages of a ListGateways operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListGateways method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListGateways operation. +// pageNum := 0 +// err := client.ListGatewaysPages(params, +// func(page *ListGatewaysOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *StorageGateway) ListGatewaysPages(input *ListGatewaysInput, fn func(*ListGatewaysOutput, bool) bool) error { + return c.ListGatewaysPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListGatewaysPagesWithContext same as ListGatewaysPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
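// A minimal sketch of paging through every gateway in a region with a
// deadline, using the client assumed above plus the standard library
// "context" and "time" packages. The callback returns true to keep iterating
// and false to stop early, exactly as described for ListGatewaysPages; the
// output field names (Gateways, GatewayARN) are assumptions based on the API
// shape.
//
//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
//    defer cancel()
//
//    err := svc.ListGatewaysPagesWithContext(ctx, &storagegateway.ListGatewaysInput{},
//        func(page *storagegateway.ListGatewaysOutput, lastPage bool) bool {
//            for _, gw := range page.Gateways {
//                fmt.Println(aws.StringValue(gw.GatewayARN))
//            }
//            return true // continue until the final page
//        })
//    if err != nil {
//        fmt.Println(err)
//    }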
+func (c *StorageGateway) ListGatewaysPagesWithContext(ctx aws.Context, input *ListGatewaysInput, fn func(*ListGatewaysOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListGatewaysInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListGatewaysRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListGatewaysOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListLocalDisks = "ListLocalDisks" + +// ListLocalDisksRequest generates a "aws/request.Request" representing the +// client's request for the ListLocalDisks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListLocalDisks for more information on using the ListLocalDisks +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListLocalDisksRequest method. +// req, resp := client.ListLocalDisksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListLocalDisks +func (c *StorageGateway) ListLocalDisksRequest(input *ListLocalDisksInput) (req *request.Request, output *ListLocalDisksOutput) { + op := &request.Operation{ + Name: opListLocalDisks, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListLocalDisksInput{} + } + + output = &ListLocalDisksOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListLocalDisks API operation for AWS Storage Gateway. +// +// Returns a list of the gateway's local disks. To specify which gateway to +// describe, you use the Amazon Resource Name (ARN) of the gateway in the body +// of the request. +// +// The request returns a list of all disks, specifying which are configured +// as working storage, cache storage, or stored volume or not configured at +// all. The response includes a DiskStatus field. This field can have a value +// of present (the disk is available to use), missing (the disk is no longer +// connected to the gateway), or mismatch (the disk node is occupied by a disk +// that has incorrect metadata or the disk content is corrupted). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ListLocalDisks for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
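// A minimal sketch of inspecting the DiskStatus values described above with
// the client assumed earlier; the Disks, DiskId and DiskStatus field names
// should be checked against the generated types.
//
//    out, err := svc.ListLocalDisks(&storagegateway.ListLocalDisksInput{
//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
//    })
//    if err != nil {
//        fmt.Println(err)
//        return
//    }
//    for _, d := range out.Disks {
//        // "present", "missing" or "mismatch", as documented above.
//        fmt.Println(aws.StringValue(d.DiskId), aws.StringValue(d.DiskStatus))
//    }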
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListLocalDisks +func (c *StorageGateway) ListLocalDisks(input *ListLocalDisksInput) (*ListLocalDisksOutput, error) { + req, out := c.ListLocalDisksRequest(input) + return out, req.Send() +} + +// ListLocalDisksWithContext is the same as ListLocalDisks with the addition of +// the ability to pass a context and additional request options. +// +// See ListLocalDisks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListLocalDisksWithContext(ctx aws.Context, input *ListLocalDisksInput, opts ...request.Option) (*ListLocalDisksOutput, error) { + req, out := c.ListLocalDisksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListTagsForResource = "ListTagsForResource" + +// ListTagsForResourceRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsForResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTagsForResource for more information on using the ListTagsForResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTagsForResourceRequest method. +// req, resp := client.ListTagsForResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListTagsForResource +func (c *StorageGateway) ListTagsForResourceRequest(input *ListTagsForResourceInput) (req *request.Request, output *ListTagsForResourceOutput) { + op := &request.Operation{ + Name: opListTagsForResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListTagsForResourceInput{} + } + + output = &ListTagsForResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTagsForResource API operation for AWS Storage Gateway. +// +// Lists the tags that have been added to the specified resource. This operation +// is only supported in the cached volume, stored volume and tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ListTagsForResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. 
For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListTagsForResource +func (c *StorageGateway) ListTagsForResource(input *ListTagsForResourceInput) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) + return out, req.Send() +} + +// ListTagsForResourceWithContext is the same as ListTagsForResource with the addition of +// the ability to pass a context and additional request options. +// +// See ListTagsForResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListTagsForResourceWithContext(ctx aws.Context, input *ListTagsForResourceInput, opts ...request.Option) (*ListTagsForResourceOutput, error) { + req, out := c.ListTagsForResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListTapes = "ListTapes" + +// ListTapesRequest generates a "aws/request.Request" representing the +// client's request for the ListTapes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListTapes for more information on using the ListTapes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListTapesRequest method. +// req, resp := client.ListTapesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListTapes +func (c *StorageGateway) ListTapesRequest(input *ListTapesInput) (req *request.Request, output *ListTapesOutput) { + op := &request.Operation{ + Name: opListTapes, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListTapesInput{} + } + + output = &ListTapesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTapes API operation for AWS Storage Gateway. +// +// Lists virtual tapes in your virtual tape library (VTL) and your virtual tape +// shelf (VTS). You specify the tapes to list by specifying one or more tape +// Amazon Resource Names (ARNs). If you don't specify a tape ARN, the operation +// lists all virtual tapes in both your VTL and VTS. +// +// This operation supports pagination. By default, the operation returns a maximum +// of up to 100 tapes. You can optionally specify the Limit parameter in the +// body to limit the number of tapes in the response. If the number of tapes +// returned in the response is truncated, the response includes a Marker element +// that you can use in your subsequent request to retrieve the next set of tapes. +// This operation is only supported in the tape gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
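// ListTapes exposes the Marker/Limit pagination described above directly on
// its input and output. A minimal sketch of draining all pages with the client
// assumed earlier; TapeInfos and Marker are the expected output field names
// and should be checked against the generated ListTapesOutput type.
//
//    input := &storagegateway.ListTapesInput{Limit: aws.Int64(100)}
//    for {
//        out, err := svc.ListTapes(input)
//        if err != nil {
//            fmt.Println(err)
//            return
//        }
//        for _, tape := range out.TapeInfos {
//            fmt.Println(tape)
//        }
//        if out.Marker == nil {
//            break // no more pages
//        }
//        input.Marker = out.Marker
//    }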
+// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ListTapes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListTapes +func (c *StorageGateway) ListTapes(input *ListTapesInput) (*ListTapesOutput, error) { + req, out := c.ListTapesRequest(input) + return out, req.Send() +} + +// ListTapesWithContext is the same as ListTapes with the addition of +// the ability to pass a context and additional request options. +// +// See ListTapes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListTapesWithContext(ctx aws.Context, input *ListTapesInput, opts ...request.Option) (*ListTapesOutput, error) { + req, out := c.ListTapesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListVolumeInitiators = "ListVolumeInitiators" + +// ListVolumeInitiatorsRequest generates a "aws/request.Request" representing the +// client's request for the ListVolumeInitiators operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListVolumeInitiators for more information on using the ListVolumeInitiators +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListVolumeInitiatorsRequest method. +// req, resp := client.ListVolumeInitiatorsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListVolumeInitiators +func (c *StorageGateway) ListVolumeInitiatorsRequest(input *ListVolumeInitiatorsInput) (req *request.Request, output *ListVolumeInitiatorsOutput) { + op := &request.Operation{ + Name: opListVolumeInitiators, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListVolumeInitiatorsInput{} + } + + output = &ListVolumeInitiatorsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListVolumeInitiators API operation for AWS Storage Gateway. +// +// Lists iSCSI initiators that are connected to a volume. You can use this operation +// to determine whether a volume is being used or not. This operation is only +// supported in the cached volume and stored volume gateway types. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ListVolumeInitiators for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListVolumeInitiators +func (c *StorageGateway) ListVolumeInitiators(input *ListVolumeInitiatorsInput) (*ListVolumeInitiatorsOutput, error) { + req, out := c.ListVolumeInitiatorsRequest(input) + return out, req.Send() +} + +// ListVolumeInitiatorsWithContext is the same as ListVolumeInitiators with the addition of +// the ability to pass a context and additional request options. +// +// See ListVolumeInitiators for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListVolumeInitiatorsWithContext(ctx aws.Context, input *ListVolumeInitiatorsInput, opts ...request.Option) (*ListVolumeInitiatorsOutput, error) { + req, out := c.ListVolumeInitiatorsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListVolumeRecoveryPoints = "ListVolumeRecoveryPoints" + +// ListVolumeRecoveryPointsRequest generates a "aws/request.Request" representing the +// client's request for the ListVolumeRecoveryPoints operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListVolumeRecoveryPoints for more information on using the ListVolumeRecoveryPoints +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListVolumeRecoveryPointsRequest method. +// req, resp := client.ListVolumeRecoveryPointsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListVolumeRecoveryPoints +func (c *StorageGateway) ListVolumeRecoveryPointsRequest(input *ListVolumeRecoveryPointsInput) (req *request.Request, output *ListVolumeRecoveryPointsOutput) { + op := &request.Operation{ + Name: opListVolumeRecoveryPoints, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListVolumeRecoveryPointsInput{} + } + + output = &ListVolumeRecoveryPointsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListVolumeRecoveryPoints API operation for AWS Storage Gateway. +// +// Lists the recovery points for a specified gateway. 
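// A minimal sketch of using ListVolumeInitiators (documented above) to check
// whether a volume is currently in use, with the client assumed earlier; the
// Initiators output field name should be checked against the generated type,
// and the volume ARN is a placeholder.
//
//    out, err := svc.ListVolumeInitiators(&storagegateway.ListVolumeInitiatorsInput{
//        VolumeARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678/volume/vol-12345678"),
//    })
//    if err != nil {
//        fmt.Println(err)
//        return
//    }
//    if len(out.Initiators) == 0 {
//        fmt.Println("volume has no connected iSCSI initiators")
//    }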
This operation is only +// supported in the cached volume gateway type. +// +// Each cache volume has one recovery point. A volume recovery point is a point +// in time at which all data of the volume is consistent and from which you +// can create a snapshot or clone a new cached volume from a source volume. +// To create a snapshot from a volume recovery point use the CreateSnapshotFromVolumeRecoveryPoint +// operation. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ListVolumeRecoveryPoints for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListVolumeRecoveryPoints +func (c *StorageGateway) ListVolumeRecoveryPoints(input *ListVolumeRecoveryPointsInput) (*ListVolumeRecoveryPointsOutput, error) { + req, out := c.ListVolumeRecoveryPointsRequest(input) + return out, req.Send() +} + +// ListVolumeRecoveryPointsWithContext is the same as ListVolumeRecoveryPoints with the addition of +// the ability to pass a context and additional request options. +// +// See ListVolumeRecoveryPoints for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListVolumeRecoveryPointsWithContext(ctx aws.Context, input *ListVolumeRecoveryPointsInput, opts ...request.Option) (*ListVolumeRecoveryPointsOutput, error) { + req, out := c.ListVolumeRecoveryPointsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListVolumes = "ListVolumes" + +// ListVolumesRequest generates a "aws/request.Request" representing the +// client's request for the ListVolumes operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListVolumes for more information on using the ListVolumes +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListVolumesRequest method. 
+// req, resp := client.ListVolumesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListVolumes +func (c *StorageGateway) ListVolumesRequest(input *ListVolumesInput) (req *request.Request, output *ListVolumesOutput) { + op := &request.Operation{ + Name: opListVolumes, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListVolumesInput{} + } + + output = &ListVolumesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListVolumes API operation for AWS Storage Gateway. +// +// Lists the iSCSI stored volumes of a gateway. Results are sorted by volume +// ARN. The response includes only the volume ARNs. If you want additional volume +// information, use the DescribeStorediSCSIVolumes or the DescribeCachediSCSIVolumes +// API. +// +// The operation supports pagination. By default, the operation returns a maximum +// of up to 100 volumes. You can optionally specify the Limit field in the body +// to limit the number of volumes in the response. If the number of volumes +// returned in the response is truncated, the response includes a Marker field. +// You can use this Marker value in your subsequent request to retrieve the +// next set of volumes. This operation is only supported in the cached volume +// and stored volume gateway types. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ListVolumes for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ListVolumes +func (c *StorageGateway) ListVolumes(input *ListVolumesInput) (*ListVolumesOutput, error) { + req, out := c.ListVolumesRequest(input) + return out, req.Send() +} + +// ListVolumesWithContext is the same as ListVolumes with the addition of +// the ability to pass a context and additional request options. +// +// See ListVolumes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListVolumesWithContext(ctx aws.Context, input *ListVolumesInput, opts ...request.Option) (*ListVolumesOutput, error) { + req, out := c.ListVolumesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListVolumesPages iterates over the pages of a ListVolumes operation, +// calling the "fn" function with the response data for each page. 
To stop +// iterating, return false from the fn function. +// +// See ListVolumes method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListVolumes operation. +// pageNum := 0 +// err := client.ListVolumesPages(params, +// func(page *ListVolumesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *StorageGateway) ListVolumesPages(input *ListVolumesInput, fn func(*ListVolumesOutput, bool) bool) error { + return c.ListVolumesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListVolumesPagesWithContext same as ListVolumesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) ListVolumesPagesWithContext(ctx aws.Context, input *ListVolumesInput, fn func(*ListVolumesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListVolumesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListVolumesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*ListVolumesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opNotifyWhenUploaded = "NotifyWhenUploaded" + +// NotifyWhenUploadedRequest generates a "aws/request.Request" representing the +// client's request for the NotifyWhenUploaded operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See NotifyWhenUploaded for more information on using the NotifyWhenUploaded +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the NotifyWhenUploadedRequest method. +// req, resp := client.NotifyWhenUploadedRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/NotifyWhenUploaded +func (c *StorageGateway) NotifyWhenUploadedRequest(input *NotifyWhenUploadedInput) (req *request.Request, output *NotifyWhenUploadedOutput) { + op := &request.Operation{ + Name: opNotifyWhenUploaded, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &NotifyWhenUploadedInput{} + } + + output = &NotifyWhenUploadedOutput{} + req = c.newRequest(op, input, output) + return +} + +// NotifyWhenUploaded API operation for AWS Storage Gateway. +// +// Sends you notification through CloudWatch Events when all files written to +// your NFS file share have been uploaded to Amazon S3. 
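// A minimal sketch of requesting an upload-completion notification for a file
// share with the client assumed earlier; the share ARN is a placeholder and
// the NotificationId output field name should be checked against the
// generated NotifyWhenUploadedOutput type.
//
//    out, err := svc.NotifyWhenUploaded(&storagegateway.NotifyWhenUploadedInput{
//        FileShareARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:share/share-12345678"),
//    })
//    if err != nil {
//        fmt.Println(err)
//        return
//    }
//    fmt.Println(aws.StringValue(out.NotificationId))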
+// +// AWS Storage Gateway can send a notification through Amazon CloudWatch Events +// when all files written to your file share up to that point in time have been +// uploaded to Amazon S3. These files include files written to the NFS file +// share up to the time that you make a request for notification. When the upload +// is done, Storage Gateway sends you notification through an Amazon CloudWatch +// Event. You can configure CloudWatch Events to send the notification through +// event targets such as Amazon SNS or AWS Lambda function. This operation is +// only supported for file gateways. +// +// For more information, see Getting File Upload Notification in the Storage +// Gateway User Guide (https://docs.aws.amazon.com/storagegateway/latest/userguide/monitoring-file-gateway.html#get-upload-notification). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation NotifyWhenUploaded for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/NotifyWhenUploaded +func (c *StorageGateway) NotifyWhenUploaded(input *NotifyWhenUploadedInput) (*NotifyWhenUploadedOutput, error) { + req, out := c.NotifyWhenUploadedRequest(input) + return out, req.Send() +} + +// NotifyWhenUploadedWithContext is the same as NotifyWhenUploaded with the addition of +// the ability to pass a context and additional request options. +// +// See NotifyWhenUploaded for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) NotifyWhenUploadedWithContext(ctx aws.Context, input *NotifyWhenUploadedInput, opts ...request.Option) (*NotifyWhenUploadedOutput, error) { + req, out := c.NotifyWhenUploadedRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRefreshCache = "RefreshCache" + +// RefreshCacheRequest generates a "aws/request.Request" representing the +// client's request for the RefreshCache operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RefreshCache for more information on using the RefreshCache +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RefreshCacheRequest method. 
+// req, resp := client.RefreshCacheRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/RefreshCache +func (c *StorageGateway) RefreshCacheRequest(input *RefreshCacheInput) (req *request.Request, output *RefreshCacheOutput) { + op := &request.Operation{ + Name: opRefreshCache, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RefreshCacheInput{} + } + + output = &RefreshCacheOutput{} + req = c.newRequest(op, input, output) + return +} + +// RefreshCache API operation for AWS Storage Gateway. +// +// Refreshes the cache for the specified file share. This operation finds objects +// in the Amazon S3 bucket that were added, removed or replaced since the gateway +// last listed the bucket's contents and cached the results. This operation +// is only supported in the file gateway type. You can subscribe to be notified +// through an Amazon CloudWatch event when your RefreshCache operation completes. +// For more information, see Getting Notified About File Operations (https://docs.aws.amazon.com/storagegateway/latest/userguide/monitoring-file-gateway.html#get-notification). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation RefreshCache for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/RefreshCache +func (c *StorageGateway) RefreshCache(input *RefreshCacheInput) (*RefreshCacheOutput, error) { + req, out := c.RefreshCacheRequest(input) + return out, req.Send() +} + +// RefreshCacheWithContext is the same as RefreshCache with the addition of +// the ability to pass a context and additional request options. +// +// See RefreshCache for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) RefreshCacheWithContext(ctx aws.Context, input *RefreshCacheInput, opts ...request.Option) (*RefreshCacheOutput, error) { + req, out := c.RefreshCacheRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRemoveTagsFromResource = "RemoveTagsFromResource" + +// RemoveTagsFromResourceRequest generates a "aws/request.Request" representing the +// client's request for the RemoveTagsFromResource operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
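// A minimal sketch of the RefreshCache operation documented above, using the
// client assumed earlier to re-list a file share's backing S3 bucket.
// FileShareARN is the required request field; any additional optional fields
// should be checked against the generated RefreshCacheInput type.
//
//    out, err := svc.RefreshCache(&storagegateway.RefreshCacheInput{
//        FileShareARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:share/share-12345678"),
//    })
//    if err != nil {
//        fmt.Println(err)
//        return
//    }
//    fmt.Println(out)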
+// +// See RemoveTagsFromResource for more information on using the RemoveTagsFromResource +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RemoveTagsFromResourceRequest method. +// req, resp := client.RemoveTagsFromResourceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/RemoveTagsFromResource +func (c *StorageGateway) RemoveTagsFromResourceRequest(input *RemoveTagsFromResourceInput) (req *request.Request, output *RemoveTagsFromResourceOutput) { + op := &request.Operation{ + Name: opRemoveTagsFromResource, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RemoveTagsFromResourceInput{} + } + + output = &RemoveTagsFromResourceOutput{} + req = c.newRequest(op, input, output) + return +} + +// RemoveTagsFromResource API operation for AWS Storage Gateway. +// +// Removes one or more tags from the specified resource. This operation is only +// supported in the cached volume, stored volume and tape gateway types. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation RemoveTagsFromResource for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/RemoveTagsFromResource +func (c *StorageGateway) RemoveTagsFromResource(input *RemoveTagsFromResourceInput) (*RemoveTagsFromResourceOutput, error) { + req, out := c.RemoveTagsFromResourceRequest(input) + return out, req.Send() +} + +// RemoveTagsFromResourceWithContext is the same as RemoveTagsFromResource with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveTagsFromResource for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) RemoveTagsFromResourceWithContext(ctx aws.Context, input *RemoveTagsFromResourceInput, opts ...request.Option) (*RemoveTagsFromResourceOutput, error) { + req, out := c.RemoveTagsFromResourceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opResetCache = "ResetCache" + +// ResetCacheRequest generates a "aws/request.Request" representing the +// client's request for the ResetCache operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ResetCache for more information on using the ResetCache +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ResetCacheRequest method. +// req, resp := client.ResetCacheRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ResetCache +func (c *StorageGateway) ResetCacheRequest(input *ResetCacheInput) (req *request.Request, output *ResetCacheOutput) { + op := &request.Operation{ + Name: opResetCache, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ResetCacheInput{} + } + + output = &ResetCacheOutput{} + req = c.newRequest(op, input, output) + return +} + +// ResetCache API operation for AWS Storage Gateway. +// +// Resets all cache disks that have encountered a error and makes the disks +// available for reconfiguration as cache storage. If your cache disk encounters +// a error, the gateway prevents read and write operations on virtual tapes +// in the gateway. For example, an error can occur when a disk is corrupted +// or removed from the gateway. When a cache is reset, the gateway loses its +// cache storage. At this point you can reconfigure the disks as cache disks. +// This operation is only supported in the cached volume and tape types. +// +// If the cache disk you are resetting contains data that has not been uploaded +// to Amazon S3 yet, that data can be lost. After you reset cache disks, there +// will be no configured cache disks left in the gateway, so you must configure +// at least one new cache disk for your gateway to function properly. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation ResetCache for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ResetCache +func (c *StorageGateway) ResetCache(input *ResetCacheInput) (*ResetCacheOutput, error) { + req, out := c.ResetCacheRequest(input) + return out, req.Send() +} + +// ResetCacheWithContext is the same as ResetCache with the addition of +// the ability to pass a context and additional request options. +// +// See ResetCache for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
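// A minimal sketch of ResetCache with the client assumed earlier. As the
// warning above notes, any cached data not yet uploaded to Amazon S3 can be
// lost and at least one new cache disk must be configured afterwards, so this
// is strictly a recovery step.
//
//    _, err := svc.ResetCache(&storagegateway.ResetCacheInput{
//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"),
//    })
//    if err != nil {
//        fmt.Println(err)
//    }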
+func (c *StorageGateway) ResetCacheWithContext(ctx aws.Context, input *ResetCacheInput, opts ...request.Option) (*ResetCacheOutput, error) { + req, out := c.ResetCacheRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRetrieveTapeArchive = "RetrieveTapeArchive" + +// RetrieveTapeArchiveRequest generates a "aws/request.Request" representing the +// client's request for the RetrieveTapeArchive operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RetrieveTapeArchive for more information on using the RetrieveTapeArchive +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RetrieveTapeArchiveRequest method. +// req, resp := client.RetrieveTapeArchiveRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/RetrieveTapeArchive +func (c *StorageGateway) RetrieveTapeArchiveRequest(input *RetrieveTapeArchiveInput) (req *request.Request, output *RetrieveTapeArchiveOutput) { + op := &request.Operation{ + Name: opRetrieveTapeArchive, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RetrieveTapeArchiveInput{} + } + + output = &RetrieveTapeArchiveOutput{} + req = c.newRequest(op, input, output) + return +} + +// RetrieveTapeArchive API operation for AWS Storage Gateway. +// +// Retrieves an archived virtual tape from the virtual tape shelf (VTS) to a +// tape gateway. Virtual tapes archived in the VTS are not associated with any +// gateway. However after a tape is retrieved, it is associated with a gateway, +// even though it is also listed in the VTS, that is, archive. This operation +// is only supported in the tape gateway type. +// +// Once a tape is successfully retrieved to a gateway, it cannot be retrieved +// again to another gateway. You must archive the tape again before you can +// retrieve it to another gateway. This operation is only supported in the tape +// gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation RetrieveTapeArchive for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/RetrieveTapeArchive +func (c *StorageGateway) RetrieveTapeArchive(input *RetrieveTapeArchiveInput) (*RetrieveTapeArchiveOutput, error) { + req, out := c.RetrieveTapeArchiveRequest(input) + return out, req.Send() +} + +// RetrieveTapeArchiveWithContext is the same as RetrieveTapeArchive with the addition of +// the ability to pass a context and additional request options. +// +// See RetrieveTapeArchive for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) RetrieveTapeArchiveWithContext(ctx aws.Context, input *RetrieveTapeArchiveInput, opts ...request.Option) (*RetrieveTapeArchiveOutput, error) { + req, out := c.RetrieveTapeArchiveRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRetrieveTapeRecoveryPoint = "RetrieveTapeRecoveryPoint" + +// RetrieveTapeRecoveryPointRequest generates a "aws/request.Request" representing the +// client's request for the RetrieveTapeRecoveryPoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RetrieveTapeRecoveryPoint for more information on using the RetrieveTapeRecoveryPoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RetrieveTapeRecoveryPointRequest method. +// req, resp := client.RetrieveTapeRecoveryPointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/RetrieveTapeRecoveryPoint +func (c *StorageGateway) RetrieveTapeRecoveryPointRequest(input *RetrieveTapeRecoveryPointInput) (req *request.Request, output *RetrieveTapeRecoveryPointOutput) { + op := &request.Operation{ + Name: opRetrieveTapeRecoveryPoint, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RetrieveTapeRecoveryPointInput{} + } + + output = &RetrieveTapeRecoveryPointOutput{} + req = c.newRequest(op, input, output) + return +} + +// RetrieveTapeRecoveryPoint API operation for AWS Storage Gateway. +// +// Retrieves the recovery point for the specified virtual tape. This operation +// is only supported in the tape gateway type. +// +// A recovery point is a point in time view of a virtual tape at which all the +// data on the tape is consistent. If your gateway crashes, virtual tapes that +// have recovery points can be recovered to a new gateway. +// +// The virtual tape can be retrieved to only one gateway. The retrieved tape +// is read-only. The virtual tape can be retrieved to only a tape gateway. There +// is no charge for retrieving recovery points. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation RetrieveTapeRecoveryPoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/RetrieveTapeRecoveryPoint +func (c *StorageGateway) RetrieveTapeRecoveryPoint(input *RetrieveTapeRecoveryPointInput) (*RetrieveTapeRecoveryPointOutput, error) { + req, out := c.RetrieveTapeRecoveryPointRequest(input) + return out, req.Send() +} + +// RetrieveTapeRecoveryPointWithContext is the same as RetrieveTapeRecoveryPoint with the addition of +// the ability to pass a context and additional request options. +// +// See RetrieveTapeRecoveryPoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) RetrieveTapeRecoveryPointWithContext(ctx aws.Context, input *RetrieveTapeRecoveryPointInput, opts ...request.Option) (*RetrieveTapeRecoveryPointOutput, error) { + req, out := c.RetrieveTapeRecoveryPointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetLocalConsolePassword = "SetLocalConsolePassword" + +// SetLocalConsolePasswordRequest generates a "aws/request.Request" representing the +// client's request for the SetLocalConsolePassword operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetLocalConsolePassword for more information on using the SetLocalConsolePassword +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetLocalConsolePasswordRequest method. +// req, resp := client.SetLocalConsolePasswordRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/SetLocalConsolePassword +func (c *StorageGateway) SetLocalConsolePasswordRequest(input *SetLocalConsolePasswordInput) (req *request.Request, output *SetLocalConsolePasswordOutput) { + op := &request.Operation{ + Name: opSetLocalConsolePassword, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &SetLocalConsolePasswordInput{} + } + + output = &SetLocalConsolePasswordOutput{} + req = c.newRequest(op, input, output) + return +} + +// SetLocalConsolePassword API operation for AWS Storage Gateway. +// +// Sets the password for your VM local console. 
When you log in to the local +// console for the first time, you log in to the VM with the default credentials. +// We recommend that you set a new password. You don't need to know the default +// password to set a new password. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation SetLocalConsolePassword for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/SetLocalConsolePassword +func (c *StorageGateway) SetLocalConsolePassword(input *SetLocalConsolePasswordInput) (*SetLocalConsolePasswordOutput, error) { + req, out := c.SetLocalConsolePasswordRequest(input) + return out, req.Send() +} + +// SetLocalConsolePasswordWithContext is the same as SetLocalConsolePassword with the addition of +// the ability to pass a context and additional request options. +// +// See SetLocalConsolePassword for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) SetLocalConsolePasswordWithContext(ctx aws.Context, input *SetLocalConsolePasswordInput, opts ...request.Option) (*SetLocalConsolePasswordOutput, error) { + req, out := c.SetLocalConsolePasswordRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opSetSMBGuestPassword = "SetSMBGuestPassword" + +// SetSMBGuestPasswordRequest generates a "aws/request.Request" representing the +// client's request for the SetSMBGuestPassword operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See SetSMBGuestPassword for more information on using the SetSMBGuestPassword +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the SetSMBGuestPasswordRequest method. 
+// req, resp := client.SetSMBGuestPasswordRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/SetSMBGuestPassword +func (c *StorageGateway) SetSMBGuestPasswordRequest(input *SetSMBGuestPasswordInput) (req *request.Request, output *SetSMBGuestPasswordOutput) { + op := &request.Operation{ + Name: opSetSMBGuestPassword, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &SetSMBGuestPasswordInput{} + } + + output = &SetSMBGuestPasswordOutput{} + req = c.newRequest(op, input, output) + return +} + +// SetSMBGuestPassword API operation for AWS Storage Gateway. +// +// Sets the password for the guest user smbguest. The smbguest user is the user +// when the authentication method for the file share is set to GuestAccess. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation SetSMBGuestPassword for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/SetSMBGuestPassword +func (c *StorageGateway) SetSMBGuestPassword(input *SetSMBGuestPasswordInput) (*SetSMBGuestPasswordOutput, error) { + req, out := c.SetSMBGuestPasswordRequest(input) + return out, req.Send() +} + +// SetSMBGuestPasswordWithContext is the same as SetSMBGuestPassword with the addition of +// the ability to pass a context and additional request options. +// +// See SetSMBGuestPassword for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) SetSMBGuestPasswordWithContext(ctx aws.Context, input *SetSMBGuestPasswordInput, opts ...request.Option) (*SetSMBGuestPasswordOutput, error) { + req, out := c.SetSMBGuestPasswordRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opShutdownGateway = "ShutdownGateway" + +// ShutdownGatewayRequest generates a "aws/request.Request" representing the +// client's request for the ShutdownGateway operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ShutdownGateway for more information on using the ShutdownGateway +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+//
+//
+//    // Example sending a request using the ShutdownGatewayRequest method.
+//    req, resp := client.ShutdownGatewayRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ShutdownGateway
+func (c *StorageGateway) ShutdownGatewayRequest(input *ShutdownGatewayInput) (req *request.Request, output *ShutdownGatewayOutput) {
+	op := &request.Operation{
+		Name:       opShutdownGateway,
+		HTTPMethod: "POST",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &ShutdownGatewayInput{}
+	}
+
+	output = &ShutdownGatewayOutput{}
+	req = c.newRequest(op, input, output)
+	return
+}
+
+// ShutdownGateway API operation for AWS Storage Gateway.
+//
+// Shuts down a gateway. To specify which gateway to shut down, use the Amazon
+// Resource Name (ARN) of the gateway in the body of your request.
+//
+// The operation shuts down the gateway service component running in the gateway's
+// virtual machine (VM) and not the host VM.
+//
+// If you want to shut down the VM, it is recommended that you first shut down
+// the gateway component in the VM to avoid unpredictable conditions.
+//
+// After the gateway is shut down, you cannot call any other API except StartGateway,
+// DescribeGatewayInformation, and ListGateways. For more information, see ActivateGateway.
+// Your applications cannot read from or write to the gateway's storage volumes,
+// and there are no snapshots taken.
+//
+// When you make a shutdown request, you will get a 200 OK success response
+// immediately. However, it might take some time for the gateway to shut down.
+// You can call the DescribeGatewayInformation API to check the status. For
+// more information, see ActivateGateway.
+//
+// If you do not intend to use the gateway again, you must delete the gateway
+// (using DeleteGateway) to no longer pay software charges associated with the
+// gateway.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for AWS Storage Gateway's
+// API operation ShutdownGateway for usage and error information.
+//
+// Returned Error Codes:
+//   * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException"
+//   An exception occurred because an invalid gateway request was issued to the
+//   service. For more information, see the error and message fields.
+//
+//   * ErrCodeInternalServerError "InternalServerError"
+//   An internal server error has occurred during the request. For more information,
+//   see the error and message fields.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/ShutdownGateway
+func (c *StorageGateway) ShutdownGateway(input *ShutdownGatewayInput) (*ShutdownGatewayOutput, error) {
+	req, out := c.ShutdownGatewayRequest(input)
+	return out, req.Send()
+}
+
+// ShutdownGatewayWithContext is the same as ShutdownGateway with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ShutdownGateway for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
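+//
+// A minimal usage sketch of the shutdown-then-poll flow described above
+// (illustration only, not generated code): "svc" is assumed to be an existing
+// *storagegateway.StorageGateway client and the ARN is a placeholder.
+//
+//    gatewayARN := aws.String("arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE") // placeholder
+//
+//    if _, err := svc.ShutdownGateway(&storagegateway.ShutdownGatewayInput{
+//        GatewayARN: gatewayARN,
+//    }); err != nil {
+//        // handle the error
+//    }
+//
+//    // The call above returns 200 OK before the gateway has finished shutting
+//    // down, so poll DescribeGatewayInformation to observe the gateway state.
+//    info, err := svc.DescribeGatewayInformation(&storagegateway.DescribeGatewayInformationInput{
+//        GatewayARN: gatewayARN,
+//    })
+//    if err == nil {
+//        fmt.Println(aws.StringValue(info.GatewayState))
+//    }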
+func (c *StorageGateway) ShutdownGatewayWithContext(ctx aws.Context, input *ShutdownGatewayInput, opts ...request.Option) (*ShutdownGatewayOutput, error) { + req, out := c.ShutdownGatewayRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartGateway = "StartGateway" + +// StartGatewayRequest generates a "aws/request.Request" representing the +// client's request for the StartGateway operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartGateway for more information on using the StartGateway +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartGatewayRequest method. +// req, resp := client.StartGatewayRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/StartGateway +func (c *StorageGateway) StartGatewayRequest(input *StartGatewayInput) (req *request.Request, output *StartGatewayOutput) { + op := &request.Operation{ + Name: opStartGateway, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartGatewayInput{} + } + + output = &StartGatewayOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartGateway API operation for AWS Storage Gateway. +// +// Starts a gateway that you previously shut down (see ShutdownGateway). After +// the gateway starts, you can then make other API calls, your applications +// can read from or write to the gateway's storage volumes and you will be able +// to take snapshot backups. +// +// When you make a request, you will get a 200 OK success response immediately. +// However, it might take some time for the gateway to be ready. You should +// call DescribeGatewayInformation and check the status before making any additional +// API calls. For more information, see ActivateGateway. +// +// To specify which gateway to start, use the Amazon Resource Name (ARN) of +// the gateway in your request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation StartGateway for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/StartGateway +func (c *StorageGateway) StartGateway(input *StartGatewayInput) (*StartGatewayOutput, error) { + req, out := c.StartGatewayRequest(input) + return out, req.Send() +} + +// StartGatewayWithContext is the same as StartGateway with the addition of +// the ability to pass a context and additional request options. +// +// See StartGateway for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) StartGatewayWithContext(ctx aws.Context, input *StartGatewayInput, opts ...request.Option) (*StartGatewayOutput, error) { + req, out := c.StartGatewayRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateBandwidthRateLimit = "UpdateBandwidthRateLimit" + +// UpdateBandwidthRateLimitRequest generates a "aws/request.Request" representing the +// client's request for the UpdateBandwidthRateLimit operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateBandwidthRateLimit for more information on using the UpdateBandwidthRateLimit +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateBandwidthRateLimitRequest method. +// req, resp := client.UpdateBandwidthRateLimitRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateBandwidthRateLimit +func (c *StorageGateway) UpdateBandwidthRateLimitRequest(input *UpdateBandwidthRateLimitInput) (req *request.Request, output *UpdateBandwidthRateLimitOutput) { + op := &request.Operation{ + Name: opUpdateBandwidthRateLimit, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateBandwidthRateLimitInput{} + } + + output = &UpdateBandwidthRateLimitOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateBandwidthRateLimit API operation for AWS Storage Gateway. +// +// Updates the bandwidth rate limits of a gateway. You can update both the upload +// and download bandwidth rate limit or specify only one of the two. If you +// don't set a bandwidth rate limit, the existing rate limit remains. +// +// By default, a gateway's bandwidth rate limits are not set. If you don't set +// any limit, the gateway does not have any limitations on its bandwidth usage +// and could potentially use the maximum available bandwidth. +// +// To specify which gateway to update, use the Amazon Resource Name (ARN) of +// the gateway in your request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateBandwidthRateLimit for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateBandwidthRateLimit +func (c *StorageGateway) UpdateBandwidthRateLimit(input *UpdateBandwidthRateLimitInput) (*UpdateBandwidthRateLimitOutput, error) { + req, out := c.UpdateBandwidthRateLimitRequest(input) + return out, req.Send() +} + +// UpdateBandwidthRateLimitWithContext is the same as UpdateBandwidthRateLimit with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateBandwidthRateLimit for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) UpdateBandwidthRateLimitWithContext(ctx aws.Context, input *UpdateBandwidthRateLimitInput, opts ...request.Option) (*UpdateBandwidthRateLimitOutput, error) { + req, out := c.UpdateBandwidthRateLimitRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateChapCredentials = "UpdateChapCredentials" + +// UpdateChapCredentialsRequest generates a "aws/request.Request" representing the +// client's request for the UpdateChapCredentials operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateChapCredentials for more information on using the UpdateChapCredentials +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateChapCredentialsRequest method. +// req, resp := client.UpdateChapCredentialsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateChapCredentials +func (c *StorageGateway) UpdateChapCredentialsRequest(input *UpdateChapCredentialsInput) (req *request.Request, output *UpdateChapCredentialsOutput) { + op := &request.Operation{ + Name: opUpdateChapCredentials, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateChapCredentialsInput{} + } + + output = &UpdateChapCredentialsOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateChapCredentials API operation for AWS Storage Gateway. +// +// Updates the Challenge-Handshake Authentication Protocol (CHAP) credentials +// for a specified iSCSI target. 
By default, a gateway does not have CHAP enabled; +// however, for added security, you might use it. +// +// When you update CHAP credentials, all existing connections on the target +// are closed and initiators must reconnect with the new credentials. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateChapCredentials for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateChapCredentials +func (c *StorageGateway) UpdateChapCredentials(input *UpdateChapCredentialsInput) (*UpdateChapCredentialsOutput, error) { + req, out := c.UpdateChapCredentialsRequest(input) + return out, req.Send() +} + +// UpdateChapCredentialsWithContext is the same as UpdateChapCredentials with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateChapCredentials for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) UpdateChapCredentialsWithContext(ctx aws.Context, input *UpdateChapCredentialsInput, opts ...request.Option) (*UpdateChapCredentialsOutput, error) { + req, out := c.UpdateChapCredentialsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateGatewayInformation = "UpdateGatewayInformation" + +// UpdateGatewayInformationRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGatewayInformation operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateGatewayInformation for more information on using the UpdateGatewayInformation +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateGatewayInformationRequest method. 
+// req, resp := client.UpdateGatewayInformationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateGatewayInformation +func (c *StorageGateway) UpdateGatewayInformationRequest(input *UpdateGatewayInformationInput) (req *request.Request, output *UpdateGatewayInformationOutput) { + op := &request.Operation{ + Name: opUpdateGatewayInformation, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateGatewayInformationInput{} + } + + output = &UpdateGatewayInformationOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateGatewayInformation API operation for AWS Storage Gateway. +// +// Updates a gateway's metadata, which includes the gateway's name and time +// zone. To specify which gateway to update, use the Amazon Resource Name (ARN) +// of the gateway in your request. +// +// For Gateways activated after September 2, 2015, the gateway's ARN contains +// the gateway ID rather than the gateway name. However, changing the name of +// the gateway has no effect on the gateway's ARN. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateGatewayInformation for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateGatewayInformation +func (c *StorageGateway) UpdateGatewayInformation(input *UpdateGatewayInformationInput) (*UpdateGatewayInformationOutput, error) { + req, out := c.UpdateGatewayInformationRequest(input) + return out, req.Send() +} + +// UpdateGatewayInformationWithContext is the same as UpdateGatewayInformation with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateGatewayInformation for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) UpdateGatewayInformationWithContext(ctx aws.Context, input *UpdateGatewayInformationInput, opts ...request.Option) (*UpdateGatewayInformationOutput, error) { + req, out := c.UpdateGatewayInformationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateGatewaySoftwareNow = "UpdateGatewaySoftwareNow" + +// UpdateGatewaySoftwareNowRequest generates a "aws/request.Request" representing the +// client's request for the UpdateGatewaySoftwareNow operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See UpdateGatewaySoftwareNow for more information on using the UpdateGatewaySoftwareNow +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateGatewaySoftwareNowRequest method. +// req, resp := client.UpdateGatewaySoftwareNowRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateGatewaySoftwareNow +func (c *StorageGateway) UpdateGatewaySoftwareNowRequest(input *UpdateGatewaySoftwareNowInput) (req *request.Request, output *UpdateGatewaySoftwareNowOutput) { + op := &request.Operation{ + Name: opUpdateGatewaySoftwareNow, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateGatewaySoftwareNowInput{} + } + + output = &UpdateGatewaySoftwareNowOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateGatewaySoftwareNow API operation for AWS Storage Gateway. +// +// Updates the gateway virtual machine (VM) software. The request immediately +// triggers the software update. +// +// When you make this request, you get a 200 OK success response immediately. +// However, it might take some time for the update to complete. You can call +// DescribeGatewayInformation to verify the gateway is in the STATE_RUNNING +// state. +// +// A software update forces a system restart of your gateway. You can minimize +// the chance of any disruption to your applications by increasing your iSCSI +// Initiators' timeouts. For more information about increasing iSCSI Initiator +// timeouts for Windows and Linux, see Customizing Your Windows iSCSI Settings +// (http://docs.aws.amazon.com/storagegateway/latest/userguide/ConfiguringiSCSIClientInitiatorWindowsClient.html#CustomizeWindowsiSCSISettings) +// and Customizing Your Linux iSCSI Settings (http://docs.aws.amazon.com/storagegateway/latest/userguide/ConfiguringiSCSIClientInitiatorRedHatClient.html#CustomizeLinuxiSCSISettings), +// respectively. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateGatewaySoftwareNow for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateGatewaySoftwareNow +func (c *StorageGateway) UpdateGatewaySoftwareNow(input *UpdateGatewaySoftwareNowInput) (*UpdateGatewaySoftwareNowOutput, error) { + req, out := c.UpdateGatewaySoftwareNowRequest(input) + return out, req.Send() +} + +// UpdateGatewaySoftwareNowWithContext is the same as UpdateGatewaySoftwareNow with the addition of +// the ability to pass a context and additional request options. 
+// +// See UpdateGatewaySoftwareNow for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) UpdateGatewaySoftwareNowWithContext(ctx aws.Context, input *UpdateGatewaySoftwareNowInput, opts ...request.Option) (*UpdateGatewaySoftwareNowOutput, error) { + req, out := c.UpdateGatewaySoftwareNowRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateMaintenanceStartTime = "UpdateMaintenanceStartTime" + +// UpdateMaintenanceStartTimeRequest generates a "aws/request.Request" representing the +// client's request for the UpdateMaintenanceStartTime operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateMaintenanceStartTime for more information on using the UpdateMaintenanceStartTime +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateMaintenanceStartTimeRequest method. +// req, resp := client.UpdateMaintenanceStartTimeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateMaintenanceStartTime +func (c *StorageGateway) UpdateMaintenanceStartTimeRequest(input *UpdateMaintenanceStartTimeInput) (req *request.Request, output *UpdateMaintenanceStartTimeOutput) { + op := &request.Operation{ + Name: opUpdateMaintenanceStartTime, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateMaintenanceStartTimeInput{} + } + + output = &UpdateMaintenanceStartTimeOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateMaintenanceStartTime API operation for AWS Storage Gateway. +// +// Updates a gateway's weekly maintenance start time information, including +// day and time of the week. The maintenance time is the time in your gateway's +// time zone. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateMaintenanceStartTime for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. 
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateMaintenanceStartTime
+func (c *StorageGateway) UpdateMaintenanceStartTime(input *UpdateMaintenanceStartTimeInput) (*UpdateMaintenanceStartTimeOutput, error) {
+	req, out := c.UpdateMaintenanceStartTimeRequest(input)
+	return out, req.Send()
+}
+
+// UpdateMaintenanceStartTimeWithContext is the same as UpdateMaintenanceStartTime with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateMaintenanceStartTime for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *StorageGateway) UpdateMaintenanceStartTimeWithContext(ctx aws.Context, input *UpdateMaintenanceStartTimeInput, opts ...request.Option) (*UpdateMaintenanceStartTimeOutput, error) {
+	req, out := c.UpdateMaintenanceStartTimeRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opUpdateNFSFileShare = "UpdateNFSFileShare"
+
+// UpdateNFSFileShareRequest generates a "aws/request.Request" representing the
+// client's request for the UpdateNFSFileShare operation. The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See UpdateNFSFileShare for more information on using the UpdateNFSFileShare
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+//    // Example sending a request using the UpdateNFSFileShareRequest method.
+//    req, resp := client.UpdateNFSFileShareRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateNFSFileShare
+func (c *StorageGateway) UpdateNFSFileShareRequest(input *UpdateNFSFileShareInput) (req *request.Request, output *UpdateNFSFileShareOutput) {
+	op := &request.Operation{
+		Name:       opUpdateNFSFileShare,
+		HTTPMethod: "POST",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &UpdateNFSFileShareInput{}
+	}
+
+	output = &UpdateNFSFileShareOutput{}
+	req = c.newRequest(op, input, output)
+	return
+}
+
+// UpdateNFSFileShare API operation for AWS Storage Gateway.
+//
+// Updates a Network File System (NFS) file share. This operation is only supported
+// in the file gateway type.
+//
+// To leave a file share field unchanged, set the corresponding input field
+// to null.
+//
+// Updates the following file share settings:
+//
+//    * Default storage class for your S3 bucket
+//
+//    * Metadata defaults for your S3 bucket
+//
+//    * Allowed NFS clients for your file share
+//
+//    * Squash settings
+//
+//    * Write status of your file share
+//
+// Returns awserr.Error for service API and SDK errors.
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateNFSFileShare for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateNFSFileShare +func (c *StorageGateway) UpdateNFSFileShare(input *UpdateNFSFileShareInput) (*UpdateNFSFileShareOutput, error) { + req, out := c.UpdateNFSFileShareRequest(input) + return out, req.Send() +} + +// UpdateNFSFileShareWithContext is the same as UpdateNFSFileShare with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateNFSFileShare for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) UpdateNFSFileShareWithContext(ctx aws.Context, input *UpdateNFSFileShareInput, opts ...request.Option) (*UpdateNFSFileShareOutput, error) { + req, out := c.UpdateNFSFileShareRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSMBFileShare = "UpdateSMBFileShare" + +// UpdateSMBFileShareRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSMBFileShare operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSMBFileShare for more information on using the UpdateSMBFileShare +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSMBFileShareRequest method. +// req, resp := client.UpdateSMBFileShareRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateSMBFileShare +func (c *StorageGateway) UpdateSMBFileShareRequest(input *UpdateSMBFileShareInput) (req *request.Request, output *UpdateSMBFileShareOutput) { + op := &request.Operation{ + Name: opUpdateSMBFileShare, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSMBFileShareInput{} + } + + output = &UpdateSMBFileShareOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSMBFileShare API operation for AWS Storage Gateway. +// +// Updates a Server Message Block (SMB) file share. +// +// To leave a file share field unchanged, set the corresponding input field +// to null. 
This operation is only supported for file gateways. +// +// File gateways require AWS Security Token Service (AWS STS) to be activated +// to enable you to create a file share. Make sure that AWS STS is activated +// in the AWS Region you are creating your file gateway in. If AWS STS is not +// activated in this AWS Region, activate it. For information about how to activate +// AWS STS, see Activating and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) +// in the AWS Identity and Access Management User Guide. +// +// File gateways don't support creating hard or symbolic links on a file share. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateSMBFileShare for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateSMBFileShare +func (c *StorageGateway) UpdateSMBFileShare(input *UpdateSMBFileShareInput) (*UpdateSMBFileShareOutput, error) { + req, out := c.UpdateSMBFileShareRequest(input) + return out, req.Send() +} + +// UpdateSMBFileShareWithContext is the same as UpdateSMBFileShare with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSMBFileShare for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) UpdateSMBFileShareWithContext(ctx aws.Context, input *UpdateSMBFileShareInput, opts ...request.Option) (*UpdateSMBFileShareOutput, error) { + req, out := c.UpdateSMBFileShareRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateSnapshotSchedule = "UpdateSnapshotSchedule" + +// UpdateSnapshotScheduleRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSnapshotSchedule operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSnapshotSchedule for more information on using the UpdateSnapshotSchedule +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSnapshotScheduleRequest method. 
+// req, resp := client.UpdateSnapshotScheduleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateSnapshotSchedule +func (c *StorageGateway) UpdateSnapshotScheduleRequest(input *UpdateSnapshotScheduleInput) (req *request.Request, output *UpdateSnapshotScheduleOutput) { + op := &request.Operation{ + Name: opUpdateSnapshotSchedule, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSnapshotScheduleInput{} + } + + output = &UpdateSnapshotScheduleOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSnapshotSchedule API operation for AWS Storage Gateway. +// +// Updates a snapshot schedule configured for a gateway volume. This operation +// is only supported in the cached volume and stored volume gateway types. +// +// The default snapshot schedule for volume is once every 24 hours, starting +// at the creation time of the volume. You can use this API to change the snapshot +// schedule configured for the volume. +// +// In the request you must identify the gateway volume whose snapshot schedule +// you want to update, and the schedule information, including when you want +// the snapshot to begin on a day and the frequency (in hours) of snapshots. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateSnapshotSchedule for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateSnapshotSchedule +func (c *StorageGateway) UpdateSnapshotSchedule(input *UpdateSnapshotScheduleInput) (*UpdateSnapshotScheduleOutput, error) { + req, out := c.UpdateSnapshotScheduleRequest(input) + return out, req.Send() +} + +// UpdateSnapshotScheduleWithContext is the same as UpdateSnapshotSchedule with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSnapshotSchedule for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *StorageGateway) UpdateSnapshotScheduleWithContext(ctx aws.Context, input *UpdateSnapshotScheduleInput, opts ...request.Option) (*UpdateSnapshotScheduleOutput, error) { + req, out := c.UpdateSnapshotScheduleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateVTLDeviceType = "UpdateVTLDeviceType" + +// UpdateVTLDeviceTypeRequest generates a "aws/request.Request" representing the +// client's request for the UpdateVTLDeviceType operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateVTLDeviceType for more information on using the UpdateVTLDeviceType +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateVTLDeviceTypeRequest method. +// req, resp := client.UpdateVTLDeviceTypeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateVTLDeviceType +func (c *StorageGateway) UpdateVTLDeviceTypeRequest(input *UpdateVTLDeviceTypeInput) (req *request.Request, output *UpdateVTLDeviceTypeOutput) { + op := &request.Operation{ + Name: opUpdateVTLDeviceType, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateVTLDeviceTypeInput{} + } + + output = &UpdateVTLDeviceTypeOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateVTLDeviceType API operation for AWS Storage Gateway. +// +// Updates the type of medium changer in a tape gateway. When you activate a +// tape gateway, you select a medium changer type for the tape gateway. This +// operation enables you to select a different type of medium changer after +// a tape gateway is activated. This operation is only supported in the tape +// gateway type. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Storage Gateway's +// API operation UpdateVTLDeviceType for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidGatewayRequestException "InvalidGatewayRequestException" +// An exception occurred because an invalid gateway request was issued to the +// service. For more information, see the error and message fields. +// +// * ErrCodeInternalServerError "InternalServerError" +// An internal server error has occurred during the request. For more information, +// see the error and message fields. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30/UpdateVTLDeviceType +func (c *StorageGateway) UpdateVTLDeviceType(input *UpdateVTLDeviceTypeInput) (*UpdateVTLDeviceTypeOutput, error) { + req, out := c.UpdateVTLDeviceTypeRequest(input) + return out, req.Send() +} + +// UpdateVTLDeviceTypeWithContext is the same as UpdateVTLDeviceType with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateVTLDeviceType for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *StorageGateway) UpdateVTLDeviceTypeWithContext(ctx aws.Context, input *UpdateVTLDeviceTypeInput, opts ...request.Option) (*UpdateVTLDeviceTypeOutput, error) { + req, out := c.UpdateVTLDeviceTypeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// A JSON object containing one or more of the following fields: +// +// * ActivateGatewayInput$ActivationKey +// +// * ActivateGatewayInput$GatewayName +// +// * ActivateGatewayInput$GatewayRegion +// +// * ActivateGatewayInput$GatewayTimezone +// +// * ActivateGatewayInput$GatewayType +// +// * ActivateGatewayInput$TapeDriveType +// +// * ActivateGatewayInput$MediumChangerType +type ActivateGatewayInput struct { + _ struct{} `type:"structure"` + + // Your gateway activation key. You can obtain the activation key by sending + // an HTTP GET request with redirects enabled to the gateway IP address (port + // 80). The redirect URL returned in the response provides you the activation + // key for your gateway in the query string parameter activationKey. It may + // also include other activation-related parameters, however, these are merely + // defaults -- the arguments you pass to the ActivateGateway API call determine + // the actual configuration of your gateway. + // + // For more information, see https://docs.aws.amazon.com/storagegateway/latest/userguide/get-activation-key.html + // in the Storage Gateway User Guide. + // + // ActivationKey is a required field + ActivationKey *string `min:"1" type:"string" required:"true"` + + // The name you configured for your gateway. + // + // GatewayName is a required field + GatewayName *string `min:"2" type:"string" required:"true"` + + // A value that indicates the region where you want to store your data. The + // gateway region specified must be the same region as the region in your Host + // header in the request. For more information about available regions and endpoints + // for AWS Storage Gateway, see Regions and Endpoints (http://docs.aws.amazon.com/general/latest/gr/rande.html#sg_region) + // in the Amazon Web Services Glossary. + // + // Valid Values: "us-east-1", "us-east-2", "us-west-1", "us-west-2", "ca-central-1", + // "eu-west-1", "eu-central-1", "eu-west-2", "eu-west-3", "ap-northeast-1", + // "ap-northeast-2", "ap-southeast-1", "ap-southeast-2", "ap-south-1", "sa-east-1" + // + // GatewayRegion is a required field + GatewayRegion *string `min:"1" type:"string" required:"true"` + + // A value that indicates the time zone you want to set for the gateway. The + // time zone is of the format "GMT-hr:mm" or "GMT+hr:mm". For example, GMT-4:00 + // indicates the time is 4 hours behind GMT. GMT+2:00 indicates the time is + // 2 hours ahead of GMT. The time zone is used, for example, for scheduling + // snapshots and your gateway's maintenance schedule. + // + // GatewayTimezone is a required field + GatewayTimezone *string `min:"3" type:"string" required:"true"` + + // A value that defines the type of gateway to activate. The type specified + // is critical to all later functions of the gateway and cannot be changed after + // activation. The default value is CACHED. + // + // Valid Values: "STORED", "CACHED", "VTL", "FILE_S3" + GatewayType *string `min:"2" type:"string"` + + // The value that indicates the type of medium changer to use for tape gateway. + // This field is optional. 
+ // + // Valid Values: "STK-L700", "AWS-Gateway-VTL" + MediumChangerType *string `min:"2" type:"string"` + + // The value that indicates the type of tape drive to use for tape gateway. + // This field is optional. + // + // Valid Values: "IBM-ULT3580-TD5" + TapeDriveType *string `min:"2" type:"string"` +} + +// String returns the string representation +func (s ActivateGatewayInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ActivateGatewayInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ActivateGatewayInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ActivateGatewayInput"} + if s.ActivationKey == nil { + invalidParams.Add(request.NewErrParamRequired("ActivationKey")) + } + if s.ActivationKey != nil && len(*s.ActivationKey) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ActivationKey", 1)) + } + if s.GatewayName == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayName")) + } + if s.GatewayName != nil && len(*s.GatewayName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("GatewayName", 2)) + } + if s.GatewayRegion == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayRegion")) + } + if s.GatewayRegion != nil && len(*s.GatewayRegion) < 1 { + invalidParams.Add(request.NewErrParamMinLen("GatewayRegion", 1)) + } + if s.GatewayTimezone == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayTimezone")) + } + if s.GatewayTimezone != nil && len(*s.GatewayTimezone) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GatewayTimezone", 3)) + } + if s.GatewayType != nil && len(*s.GatewayType) < 2 { + invalidParams.Add(request.NewErrParamMinLen("GatewayType", 2)) + } + if s.MediumChangerType != nil && len(*s.MediumChangerType) < 2 { + invalidParams.Add(request.NewErrParamMinLen("MediumChangerType", 2)) + } + if s.TapeDriveType != nil && len(*s.TapeDriveType) < 2 { + invalidParams.Add(request.NewErrParamMinLen("TapeDriveType", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetActivationKey sets the ActivationKey field's value. +func (s *ActivateGatewayInput) SetActivationKey(v string) *ActivateGatewayInput { + s.ActivationKey = &v + return s +} + +// SetGatewayName sets the GatewayName field's value. +func (s *ActivateGatewayInput) SetGatewayName(v string) *ActivateGatewayInput { + s.GatewayName = &v + return s +} + +// SetGatewayRegion sets the GatewayRegion field's value. +func (s *ActivateGatewayInput) SetGatewayRegion(v string) *ActivateGatewayInput { + s.GatewayRegion = &v + return s +} + +// SetGatewayTimezone sets the GatewayTimezone field's value. +func (s *ActivateGatewayInput) SetGatewayTimezone(v string) *ActivateGatewayInput { + s.GatewayTimezone = &v + return s +} + +// SetGatewayType sets the GatewayType field's value. +func (s *ActivateGatewayInput) SetGatewayType(v string) *ActivateGatewayInput { + s.GatewayType = &v + return s +} + +// SetMediumChangerType sets the MediumChangerType field's value. +func (s *ActivateGatewayInput) SetMediumChangerType(v string) *ActivateGatewayInput { + s.MediumChangerType = &v + return s +} + +// SetTapeDriveType sets the TapeDriveType field's value. +func (s *ActivateGatewayInput) SetTapeDriveType(v string) *ActivateGatewayInput { + s.TapeDriveType = &v + return s +} + +// AWS Storage Gateway returns the Amazon Resource Name (ARN) of the activated +// gateway. 
It is a string made of information such as your account, gateway +// name, and region. This ARN is used to reference the gateway in other API +// operations as well as resource-based authorization. +// +// For gateways activated prior to September 02, 2015, the gateway ARN contains +// the gateway name rather than the gateway ID. Changing the name of the gateway +// has no effect on the gateway ARN. +type ActivateGatewayOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s ActivateGatewayOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ActivateGatewayOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *ActivateGatewayOutput) SetGatewayARN(v string) *ActivateGatewayOutput { + s.GatewayARN = &v + return s +} + +type AddCacheInput struct { + _ struct{} `type:"structure"` + + // DiskIds is a required field + DiskIds []*string `type:"list" required:"true"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s AddCacheInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddCacheInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddCacheInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddCacheInput"} + if s.DiskIds == nil { + invalidParams.Add(request.NewErrParamRequired("DiskIds")) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDiskIds sets the DiskIds field's value. +func (s *AddCacheInput) SetDiskIds(v []*string) *AddCacheInput { + s.DiskIds = v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *AddCacheInput) SetGatewayARN(v string) *AddCacheInput { + s.GatewayARN = &v + return s +} + +type AddCacheOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s AddCacheOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddCacheOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *AddCacheOutput) SetGatewayARN(v string) *AddCacheOutput { + s.GatewayARN = &v + return s +} + +// AddTagsToResourceInput +type AddTagsToResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource you want to add tags to. 
+ // + // ResourceARN is a required field + ResourceARN *string `min:"50" type:"string" required:"true"` + + // The key-value pair that represents the tag you want to add to the resource. + // The value can be an empty string. + // + // Valid characters for key and value are letters, spaces, and numbers representable + // in UTF-8 format, and the following special characters: + - = . _ : / @. + // + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` +} + +// String returns the string representation +func (s AddTagsToResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsToResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddTagsToResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddTagsToResourceInput"} + if s.ResourceARN == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceARN")) + } + if s.ResourceARN != nil && len(*s.ResourceARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 50)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *AddTagsToResourceInput) SetResourceARN(v string) *AddTagsToResourceInput { + s.ResourceARN = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *AddTagsToResourceInput) SetTags(v []*Tag) *AddTagsToResourceInput { + s.Tags = v + return s +} + +// AddTagsToResourceOutput +type AddTagsToResourceOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource you want to add tags to. + ResourceARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s AddTagsToResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddTagsToResourceOutput) GoString() string { + return s.String() +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *AddTagsToResourceOutput) SetResourceARN(v string) *AddTagsToResourceOutput { + s.ResourceARN = &v + return s +} + +type AddUploadBufferInput struct { + _ struct{} `type:"structure"` + + // DiskIds is a required field + DiskIds []*string `type:"list" required:"true"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s AddUploadBufferInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddUploadBufferInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
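+//
+// Editorial sketch (illustrative only): Validate can be called explicitly
+// before sending the request; the SDK also runs this validation when the
+// request is sent. The disk ID and gateway ARN below are placeholders.
+//
+//    input := &storagegateway.AddUploadBufferInput{
+//        DiskIds:    aws.StringSlice([]string{"pci-0000:03:00.0-scsi-0:0:0:0"}),
+//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B"),
+//    }
+//    if err := input.Validate(); err != nil {
+//        // err is a request.ErrInvalidParams listing the offending fields.
+//    }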
+func (s *AddUploadBufferInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddUploadBufferInput"} + if s.DiskIds == nil { + invalidParams.Add(request.NewErrParamRequired("DiskIds")) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDiskIds sets the DiskIds field's value. +func (s *AddUploadBufferInput) SetDiskIds(v []*string) *AddUploadBufferInput { + s.DiskIds = v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *AddUploadBufferInput) SetGatewayARN(v string) *AddUploadBufferInput { + s.GatewayARN = &v + return s +} + +type AddUploadBufferOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s AddUploadBufferOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddUploadBufferOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *AddUploadBufferOutput) SetGatewayARN(v string) *AddUploadBufferOutput { + s.GatewayARN = &v + return s +} + +// A JSON object containing one or more of the following fields: +// +// * AddWorkingStorageInput$DiskIds +type AddWorkingStorageInput struct { + _ struct{} `type:"structure"` + + // An array of strings that identify disks that are to be configured as working + // storage. Each string have a minimum length of 1 and maximum length of 300. + // You can get the disk IDs from the ListLocalDisks API. + // + // DiskIds is a required field + DiskIds []*string `type:"list" required:"true"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s AddWorkingStorageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddWorkingStorageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AddWorkingStorageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AddWorkingStorageInput"} + if s.DiskIds == nil { + invalidParams.Add(request.NewErrParamRequired("DiskIds")) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDiskIds sets the DiskIds field's value. +func (s *AddWorkingStorageInput) SetDiskIds(v []*string) *AddWorkingStorageInput { + s.DiskIds = v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *AddWorkingStorageInput) SetGatewayARN(v string) *AddWorkingStorageInput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the of the gateway for which working storage was +// configured. 
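+//
+// Editorial sketch (illustrative only): working-storage disks are typically
+// discovered with ListLocalDisks and then passed to AddWorkingStorage. The
+// client value svc and the gateway ARN below are placeholders.
+//
+//    disks, err := svc.ListLocalDisks(&storagegateway.ListLocalDisksInput{
+//        GatewayARN: aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B"),
+//    })
+//    if err == nil && len(disks.Disks) > 0 {
+//        _, err = svc.AddWorkingStorage(&storagegateway.AddWorkingStorageInput{
+//            GatewayARN: disks.GatewayARN,
+//            DiskIds:    []*string{disks.Disks[0].DiskId},
+//        })
+//    }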
+type AddWorkingStorageOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s AddWorkingStorageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AddWorkingStorageOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *AddWorkingStorageOutput) SetGatewayARN(v string) *AddWorkingStorageOutput { + s.GatewayARN = &v + return s +} + +// Describes an iSCSI cached volume. +type CachediSCSIVolume struct { + _ struct{} `type:"structure"` + + // The date the volume was created. Volumes created prior to March 28, 2017 + // don’t have this time stamp. + CreatedDate *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // If the cached volume was created from a snapshot, this field contains the + // snapshot ID used, e.g. snap-78e22663. Otherwise, this field is not included. + SourceSnapshotId *string `type:"string"` + + // The Amazon Resource Name (ARN) of the storage volume. + VolumeARN *string `min:"50" type:"string"` + + // The unique identifier of the volume, e.g. vol-AE4B946D. + VolumeId *string `min:"12" type:"string"` + + // Represents the percentage complete if the volume is restoring or bootstrapping + // that represents the percent of data transferred. This field does not appear + // in the response if the cached volume is not restoring or bootstrapping. + VolumeProgress *float64 `type:"double"` + + // The size, in bytes, of the volume capacity. + VolumeSizeInBytes *int64 `type:"long"` + + // One of the VolumeStatus values that indicates the state of the storage volume. + VolumeStatus *string `min:"3" type:"string"` + + // One of the VolumeType enumeration values that describes the type of the volume. + VolumeType *string `min:"3" type:"string"` + + // The size of the data stored on the volume in bytes. + // + // This value is not available for volumes created prior to May 13, 2015, until + // you store data on the volume. + VolumeUsedInBytes *int64 `type:"long"` + + // An VolumeiSCSIAttributes object that represents a collection of iSCSI attributes + // for one stored volume. + VolumeiSCSIAttributes *VolumeiSCSIAttributes `type:"structure"` +} + +// String returns the string representation +func (s CachediSCSIVolume) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CachediSCSIVolume) GoString() string { + return s.String() +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *CachediSCSIVolume) SetCreatedDate(v time.Time) *CachediSCSIVolume { + s.CreatedDate = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *CachediSCSIVolume) SetKMSKey(v string) *CachediSCSIVolume { + s.KMSKey = &v + return s +} + +// SetSourceSnapshotId sets the SourceSnapshotId field's value. +func (s *CachediSCSIVolume) SetSourceSnapshotId(v string) *CachediSCSIVolume { + s.SourceSnapshotId = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. 
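+//
+// Editorial sketch (illustrative only): CachediSCSIVolume values are normally
+// read from a DescribeCachediSCSIVolumes response rather than constructed by
+// hand. The client value svc and the volume ARN below are placeholders.
+//
+//    out, err := svc.DescribeCachediSCSIVolumes(&storagegateway.DescribeCachediSCSIVolumesInput{
+//        VolumeARNs: aws.StringSlice([]string{"arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B/volume/vol-AE4B946D"}),
+//    })
+//    if err == nil {
+//        for _, v := range out.CachediSCSIVolumes {
+//            fmt.Println(aws.StringValue(v.VolumeARN), aws.Int64Value(v.VolumeSizeInBytes))
+//        }
+//    }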
+func (s *CachediSCSIVolume) SetVolumeARN(v string) *CachediSCSIVolume { + s.VolumeARN = &v + return s +} + +// SetVolumeId sets the VolumeId field's value. +func (s *CachediSCSIVolume) SetVolumeId(v string) *CachediSCSIVolume { + s.VolumeId = &v + return s +} + +// SetVolumeProgress sets the VolumeProgress field's value. +func (s *CachediSCSIVolume) SetVolumeProgress(v float64) *CachediSCSIVolume { + s.VolumeProgress = &v + return s +} + +// SetVolumeSizeInBytes sets the VolumeSizeInBytes field's value. +func (s *CachediSCSIVolume) SetVolumeSizeInBytes(v int64) *CachediSCSIVolume { + s.VolumeSizeInBytes = &v + return s +} + +// SetVolumeStatus sets the VolumeStatus field's value. +func (s *CachediSCSIVolume) SetVolumeStatus(v string) *CachediSCSIVolume { + s.VolumeStatus = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. +func (s *CachediSCSIVolume) SetVolumeType(v string) *CachediSCSIVolume { + s.VolumeType = &v + return s +} + +// SetVolumeUsedInBytes sets the VolumeUsedInBytes field's value. +func (s *CachediSCSIVolume) SetVolumeUsedInBytes(v int64) *CachediSCSIVolume { + s.VolumeUsedInBytes = &v + return s +} + +// SetVolumeiSCSIAttributes sets the VolumeiSCSIAttributes field's value. +func (s *CachediSCSIVolume) SetVolumeiSCSIAttributes(v *VolumeiSCSIAttributes) *CachediSCSIVolume { + s.VolumeiSCSIAttributes = v + return s +} + +// CancelArchivalInput +type CancelArchivalInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the virtual tape you want to cancel archiving + // for. + // + // TapeARN is a required field + TapeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelArchivalInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelArchivalInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelArchivalInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelArchivalInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.TapeARN == nil { + invalidParams.Add(request.NewErrParamRequired("TapeARN")) + } + if s.TapeARN != nil && len(*s.TapeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TapeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *CancelArchivalInput) SetGatewayARN(v string) *CancelArchivalInput { + s.GatewayARN = &v + return s +} + +// SetTapeARN sets the TapeARN field's value. +func (s *CancelArchivalInput) SetTapeARN(v string) *CancelArchivalInput { + s.TapeARN = &v + return s +} + +// CancelArchivalOutput +type CancelArchivalOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the virtual tape for which archiving was + // canceled. 
+ TapeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s CancelArchivalOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelArchivalOutput) GoString() string { + return s.String() +} + +// SetTapeARN sets the TapeARN field's value. +func (s *CancelArchivalOutput) SetTapeARN(v string) *CancelArchivalOutput { + s.TapeARN = &v + return s +} + +// CancelRetrievalInput +type CancelRetrievalInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the virtual tape you want to cancel retrieval + // for. + // + // TapeARN is a required field + TapeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelRetrievalInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelRetrievalInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelRetrievalInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelRetrievalInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.TapeARN == nil { + invalidParams.Add(request.NewErrParamRequired("TapeARN")) + } + if s.TapeARN != nil && len(*s.TapeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TapeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *CancelRetrievalInput) SetGatewayARN(v string) *CancelRetrievalInput { + s.GatewayARN = &v + return s +} + +// SetTapeARN sets the TapeARN field's value. +func (s *CancelRetrievalInput) SetTapeARN(v string) *CancelRetrievalInput { + s.TapeARN = &v + return s +} + +// CancelRetrievalOutput +type CancelRetrievalOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the virtual tape for which retrieval was + // canceled. + TapeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s CancelRetrievalOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelRetrievalOutput) GoString() string { + return s.String() +} + +// SetTapeARN sets the TapeARN field's value. +func (s *CancelRetrievalOutput) SetTapeARN(v string) *CancelRetrievalOutput { + s.TapeARN = &v + return s +} + +// Describes Challenge-Handshake Authentication Protocol (CHAP) information +// that supports authentication between your gateway and iSCSI initiators. +type ChapInfo struct { + _ struct{} `type:"structure"` + + // The iSCSI initiator that connects to the target. + InitiatorName *string `min:"1" type:"string"` + + // The secret key that the initiator (for example, the Windows client) must + // provide to participate in mutual CHAP with the target. 
+ SecretToAuthenticateInitiator *string `min:"1" type:"string"` + + // The secret key that the target must provide to participate in mutual CHAP + // with the initiator (e.g. Windows client). + SecretToAuthenticateTarget *string `min:"1" type:"string"` + + // The Amazon Resource Name (ARN) of the volume. + // + // Valid Values: 50 to 500 lowercase letters, numbers, periods (.), and hyphens + // (-). + TargetARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s ChapInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ChapInfo) GoString() string { + return s.String() +} + +// SetInitiatorName sets the InitiatorName field's value. +func (s *ChapInfo) SetInitiatorName(v string) *ChapInfo { + s.InitiatorName = &v + return s +} + +// SetSecretToAuthenticateInitiator sets the SecretToAuthenticateInitiator field's value. +func (s *ChapInfo) SetSecretToAuthenticateInitiator(v string) *ChapInfo { + s.SecretToAuthenticateInitiator = &v + return s +} + +// SetSecretToAuthenticateTarget sets the SecretToAuthenticateTarget field's value. +func (s *ChapInfo) SetSecretToAuthenticateTarget(v string) *ChapInfo { + s.SecretToAuthenticateTarget = &v + return s +} + +// SetTargetARN sets the TargetARN field's value. +func (s *ChapInfo) SetTargetARN(v string) *ChapInfo { + s.TargetARN = &v + return s +} + +type CreateCachediSCSIVolumeInput struct { + _ struct{} `type:"structure"` + + // A unique identifier that you use to retry a request. If you retry a request, + // use the same ClientToken you specified in the initial request. + // + // ClientToken is a required field + ClientToken *string `min:"5" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The network interface of the gateway on which to expose the iSCSI target. + // Only IPv4 addresses are accepted. Use DescribeGatewayInformation to get a + // list of the network interfaces available on a gateway. + // + // Valid Values: A valid IP address. + // + // NetworkInterfaceId is a required field + NetworkInterfaceId *string `type:"string" required:"true"` + + // The snapshot ID (e.g. "snap-1122aabb") of the snapshot to restore as the + // new cached volume. Specify this field if you want to create the iSCSI storage + // volume from a snapshot otherwise do not include this field. To list snapshots + // for your account use DescribeSnapshots (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeSnapshots.html) + // in the Amazon Elastic Compute Cloud API Reference. + SnapshotId *string `type:"string"` + + // The ARN for an existing volume. Specifying this ARN makes the new volume + // into an exact copy of the specified existing volume's latest recovery point. + // The VolumeSizeInBytes value for this new volume must be equal to or larger + // than the size of the existing volume, in bytes. 
+ SourceVolumeARN *string `min:"50" type:"string"` + + // The name of the iSCSI target used by initiators to connect to the target + // and as a suffix for the target ARN. For example, specifying TargetName as + // myvolume results in the target ARN of arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:myvolume. + // The target name must be unique across all volumes of a gateway. + // + // TargetName is a required field + TargetName *string `min:"1" type:"string" required:"true"` + + // The size of the volume in bytes. + // + // VolumeSizeInBytes is a required field + VolumeSizeInBytes *int64 `type:"long" required:"true"` +} + +// String returns the string representation +func (s CreateCachediSCSIVolumeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCachediSCSIVolumeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateCachediSCSIVolumeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateCachediSCSIVolumeInput"} + if s.ClientToken == nil { + invalidParams.Add(request.NewErrParamRequired("ClientToken")) + } + if s.ClientToken != nil && len(*s.ClientToken) < 5 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 5)) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.KMSKey != nil && len(*s.KMSKey) < 20 { + invalidParams.Add(request.NewErrParamMinLen("KMSKey", 20)) + } + if s.NetworkInterfaceId == nil { + invalidParams.Add(request.NewErrParamRequired("NetworkInterfaceId")) + } + if s.SourceVolumeARN != nil && len(*s.SourceVolumeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("SourceVolumeARN", 50)) + } + if s.TargetName == nil { + invalidParams.Add(request.NewErrParamRequired("TargetName")) + } + if s.TargetName != nil && len(*s.TargetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetName", 1)) + } + if s.VolumeSizeInBytes == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeSizeInBytes")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateCachediSCSIVolumeInput) SetClientToken(v string) *CreateCachediSCSIVolumeInput { + s.ClientToken = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *CreateCachediSCSIVolumeInput) SetGatewayARN(v string) *CreateCachediSCSIVolumeInput { + s.GatewayARN = &v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *CreateCachediSCSIVolumeInput) SetKMSEncrypted(v bool) *CreateCachediSCSIVolumeInput { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *CreateCachediSCSIVolumeInput) SetKMSKey(v string) *CreateCachediSCSIVolumeInput { + s.KMSKey = &v + return s +} + +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *CreateCachediSCSIVolumeInput) SetNetworkInterfaceId(v string) *CreateCachediSCSIVolumeInput { + s.NetworkInterfaceId = &v + return s +} + +// SetSnapshotId sets the SnapshotId field's value. 
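+//
+// Editorial sketch (illustrative only): the setters on this type return the
+// receiver, so an input can be built fluently. The client value svc and all
+// literal values below are placeholders.
+//
+//    input := (&storagegateway.CreateCachediSCSIVolumeInput{}).
+//        SetGatewayARN("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B").
+//        SetClientToken("token-0123456789").
+//        SetNetworkInterfaceId("10.0.0.12").
+//        SetTargetName("myvolume").
+//        SetVolumeSizeInBytes(150 * 1024 * 1024 * 1024)
+//
+//    resp, err := svc.CreateCachediSCSIVolume(input)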
+func (s *CreateCachediSCSIVolumeInput) SetSnapshotId(v string) *CreateCachediSCSIVolumeInput { + s.SnapshotId = &v + return s +} + +// SetSourceVolumeARN sets the SourceVolumeARN field's value. +func (s *CreateCachediSCSIVolumeInput) SetSourceVolumeARN(v string) *CreateCachediSCSIVolumeInput { + s.SourceVolumeARN = &v + return s +} + +// SetTargetName sets the TargetName field's value. +func (s *CreateCachediSCSIVolumeInput) SetTargetName(v string) *CreateCachediSCSIVolumeInput { + s.TargetName = &v + return s +} + +// SetVolumeSizeInBytes sets the VolumeSizeInBytes field's value. +func (s *CreateCachediSCSIVolumeInput) SetVolumeSizeInBytes(v int64) *CreateCachediSCSIVolumeInput { + s.VolumeSizeInBytes = &v + return s +} + +type CreateCachediSCSIVolumeOutput struct { + _ struct{} `type:"structure"` + + // he Amazon Resource Name (ARN) of the volume target that includes the iSCSI + // name that initiators can use to connect to the target. + TargetARN *string `min:"50" type:"string"` + + // The Amazon Resource Name (ARN) of the configured volume. + VolumeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s CreateCachediSCSIVolumeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateCachediSCSIVolumeOutput) GoString() string { + return s.String() +} + +// SetTargetARN sets the TargetARN field's value. +func (s *CreateCachediSCSIVolumeOutput) SetTargetARN(v string) *CreateCachediSCSIVolumeOutput { + s.TargetARN = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *CreateCachediSCSIVolumeOutput) SetVolumeARN(v string) *CreateCachediSCSIVolumeOutput { + s.VolumeARN = &v + return s +} + +// CreateNFSFileShareInput +type CreateNFSFileShareInput struct { + _ struct{} `type:"structure"` + + // The list of clients that are allowed to access the file gateway. The list + // must contain either valid IP addresses or valid CIDR blocks. + ClientList []*string `min:"1" type:"list"` + + // A unique string value that you supply that is used by file gateway to ensure + // idempotent file share creation. + // + // ClientToken is a required field + ClientToken *string `min:"5" type:"string" required:"true"` + + // The default storage class for objects put into an Amazon S3 bucket by the + // file gateway. Possible values are S3_STANDARD, S3_STANDARD_IA, or S3_ONEZONE_IA. + // If this field is not populated, the default value S3_STANDARD is used. Optional. + DefaultStorageClass *string `min:"5" type:"string"` + + // The Amazon Resource Name (ARN) of the file gateway on which you want to create + // a file share. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // A value that enables guessing of the MIME type for uploaded objects based + // on file extensions. Set this value to true to enable MIME type guessing, + // and otherwise to false. The default value is true. + GuessMIMETypeEnabled *bool `type:"boolean"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) AWS KMS key used for Amazon S3 server side + // encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The ARN of the backed storage used for storing file data. 
+ // + // LocationARN is a required field + LocationARN *string `min:"16" type:"string" required:"true"` + + // File share default values. Optional. + NFSFileShareDefaults *NFSFileShareDefaults `type:"structure"` + + // A value that sets the access control list permission for objects in the S3 + // bucket that a file gateway puts objects into. The default value is "private". + ObjectACL *string `type:"string" enum:"ObjectACL"` + + // A value that sets the write status of a file share. This value is true if + // the write status is read-only, and otherwise false. + ReadOnly *bool `type:"boolean"` + + // A value that sets the access control list permission for objects in the Amazon + // S3 bucket that a file gateway puts objects into. The default value is private. + RequesterPays *bool `type:"boolean"` + + // The ARN of the AWS Identity and Access Management (IAM) role that a file + // gateway assumes when it accesses the underlying storage. + // + // Role is a required field + Role *string `min:"20" type:"string" required:"true"` + + // Maps a user to anonymous user. Valid options are the following: + // + // * RootSquash - Only root is mapped to anonymous user. + // + // * NoSquash - No one is mapped to anonymous user + // + // * AllSquash - Everyone is mapped to anonymous user. + Squash *string `min:"5" type:"string"` +} + +// String returns the string representation +func (s CreateNFSFileShareInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateNFSFileShareInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateNFSFileShareInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateNFSFileShareInput"} + if s.ClientList != nil && len(s.ClientList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientList", 1)) + } + if s.ClientToken == nil { + invalidParams.Add(request.NewErrParamRequired("ClientToken")) + } + if s.ClientToken != nil && len(*s.ClientToken) < 5 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 5)) + } + if s.DefaultStorageClass != nil && len(*s.DefaultStorageClass) < 5 { + invalidParams.Add(request.NewErrParamMinLen("DefaultStorageClass", 5)) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.KMSKey != nil && len(*s.KMSKey) < 20 { + invalidParams.Add(request.NewErrParamMinLen("KMSKey", 20)) + } + if s.LocationARN == nil { + invalidParams.Add(request.NewErrParamRequired("LocationARN")) + } + if s.LocationARN != nil && len(*s.LocationARN) < 16 { + invalidParams.Add(request.NewErrParamMinLen("LocationARN", 16)) + } + if s.Role == nil { + invalidParams.Add(request.NewErrParamRequired("Role")) + } + if s.Role != nil && len(*s.Role) < 20 { + invalidParams.Add(request.NewErrParamMinLen("Role", 20)) + } + if s.Squash != nil && len(*s.Squash) < 5 { + invalidParams.Add(request.NewErrParamMinLen("Squash", 5)) + } + if s.NFSFileShareDefaults != nil { + if err := s.NFSFileShareDefaults.Validate(); err != nil { + invalidParams.AddNested("NFSFileShareDefaults", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientList sets the ClientList field's value. 
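+//
+// Editorial sketch (illustrative only): a minimal NFS file share request built
+// with a composite literal. The client value svc, the ARNs, the role, and the
+// CIDR block are placeholders.
+//
+//    share, err := svc.CreateNFSFileShare(&storagegateway.CreateNFSFileShareInput{
+//        ClientToken: aws.String("token-0123456789"),
+//        GatewayARN:  aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B"),
+//        LocationARN: aws.String("arn:aws:s3:::my-example-bucket"),
+//        Role:        aws.String("arn:aws:iam::111122223333:role/StorageGatewayAccess"),
+//        ClientList:  aws.StringSlice([]string{"10.0.0.0/16"}),
+//        Squash:      aws.String("RootSquash"),
+//    })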
+func (s *CreateNFSFileShareInput) SetClientList(v []*string) *CreateNFSFileShareInput { + s.ClientList = v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateNFSFileShareInput) SetClientToken(v string) *CreateNFSFileShareInput { + s.ClientToken = &v + return s +} + +// SetDefaultStorageClass sets the DefaultStorageClass field's value. +func (s *CreateNFSFileShareInput) SetDefaultStorageClass(v string) *CreateNFSFileShareInput { + s.DefaultStorageClass = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *CreateNFSFileShareInput) SetGatewayARN(v string) *CreateNFSFileShareInput { + s.GatewayARN = &v + return s +} + +// SetGuessMIMETypeEnabled sets the GuessMIMETypeEnabled field's value. +func (s *CreateNFSFileShareInput) SetGuessMIMETypeEnabled(v bool) *CreateNFSFileShareInput { + s.GuessMIMETypeEnabled = &v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *CreateNFSFileShareInput) SetKMSEncrypted(v bool) *CreateNFSFileShareInput { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *CreateNFSFileShareInput) SetKMSKey(v string) *CreateNFSFileShareInput { + s.KMSKey = &v + return s +} + +// SetLocationARN sets the LocationARN field's value. +func (s *CreateNFSFileShareInput) SetLocationARN(v string) *CreateNFSFileShareInput { + s.LocationARN = &v + return s +} + +// SetNFSFileShareDefaults sets the NFSFileShareDefaults field's value. +func (s *CreateNFSFileShareInput) SetNFSFileShareDefaults(v *NFSFileShareDefaults) *CreateNFSFileShareInput { + s.NFSFileShareDefaults = v + return s +} + +// SetObjectACL sets the ObjectACL field's value. +func (s *CreateNFSFileShareInput) SetObjectACL(v string) *CreateNFSFileShareInput { + s.ObjectACL = &v + return s +} + +// SetReadOnly sets the ReadOnly field's value. +func (s *CreateNFSFileShareInput) SetReadOnly(v bool) *CreateNFSFileShareInput { + s.ReadOnly = &v + return s +} + +// SetRequesterPays sets the RequesterPays field's value. +func (s *CreateNFSFileShareInput) SetRequesterPays(v bool) *CreateNFSFileShareInput { + s.RequesterPays = &v + return s +} + +// SetRole sets the Role field's value. +func (s *CreateNFSFileShareInput) SetRole(v string) *CreateNFSFileShareInput { + s.Role = &v + return s +} + +// SetSquash sets the Squash field's value. +func (s *CreateNFSFileShareInput) SetSquash(v string) *CreateNFSFileShareInput { + s.Squash = &v + return s +} + +// CreateNFSFileShareOutput +type CreateNFSFileShareOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the newly created file share. + FileShareARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s CreateNFSFileShareOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateNFSFileShareOutput) GoString() string { + return s.String() +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *CreateNFSFileShareOutput) SetFileShareARN(v string) *CreateNFSFileShareOutput { + s.FileShareARN = &v + return s +} + +// CreateSMBFileShareInput +type CreateSMBFileShareInput struct { + _ struct{} `type:"structure"` + + // The authentication method that users use to access the file share. + // + // Valid values are ActiveDirectory or GuestAccess. The default is ActiveDirectory. 
+ Authentication *string `min:"5" type:"string"` + + // A unique string value that you supply that is used by file gateway to ensure + // idempotent file share creation. + // + // ClientToken is a required field + ClientToken *string `min:"5" type:"string" required:"true"` + + // The default storage class for objects put into an Amazon S3 bucket by the + // file gateway. Possible values are S3_STANDARD, S3_STANDARD_IA, or S3_ONEZONE_IA. + // If this field is not populated, the default value S3_STANDARD is used. Optional. + DefaultStorageClass *string `min:"5" type:"string"` + + // The Amazon Resource Name (ARN) of the file gateway on which you want to create + // a file share. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // A value that enables guessing of the MIME type for uploaded objects based + // on file extensions. Set this value to true to enable MIME type guessing, + // and otherwise to false. The default value is true. + GuessMIMETypeEnabled *bool `type:"boolean"` + + // A list of users or groups in the Active Directory that are not allowed to + // access the file share. A group must be prefixed with the @ character. For + // example @group1. Can only be set if Authentication is set to ActiveDirectory. + InvalidUserList []*string `type:"list"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The ARN of the backed storage used for storing file data. + // + // LocationARN is a required field + LocationARN *string `min:"16" type:"string" required:"true"` + + // A value that sets the access control list permission for objects in the S3 + // bucket that a file gateway puts objects into. The default value is "private". + ObjectACL *string `type:"string" enum:"ObjectACL"` + + // A value that sets the write status of a file share. This value is true if + // the write status is read-only, and otherwise false. + ReadOnly *bool `type:"boolean"` + + // A value that sets the access control list permission for objects in the Amazon + // S3 bucket that a file gateway puts objects into. The default value is private. + RequesterPays *bool `type:"boolean"` + + // The ARN of the AWS Identity and Access Management (IAM) role that a file + // gateway assumes when it accesses the underlying storage. + // + // Role is a required field + Role *string `min:"20" type:"string" required:"true"` + + // A list of users or groups in the Active Directory that are allowed to access + // the file share. A group must be prefixed with the @ character. For example + // @group1. Can only be set if Authentication is set to ActiveDirectory. + ValidUserList []*string `type:"list"` +} + +// String returns the string representation +func (s CreateSMBFileShareInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSMBFileShareInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateSMBFileShareInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateSMBFileShareInput"} + if s.Authentication != nil && len(*s.Authentication) < 5 { + invalidParams.Add(request.NewErrParamMinLen("Authentication", 5)) + } + if s.ClientToken == nil { + invalidParams.Add(request.NewErrParamRequired("ClientToken")) + } + if s.ClientToken != nil && len(*s.ClientToken) < 5 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 5)) + } + if s.DefaultStorageClass != nil && len(*s.DefaultStorageClass) < 5 { + invalidParams.Add(request.NewErrParamMinLen("DefaultStorageClass", 5)) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.KMSKey != nil && len(*s.KMSKey) < 20 { + invalidParams.Add(request.NewErrParamMinLen("KMSKey", 20)) + } + if s.LocationARN == nil { + invalidParams.Add(request.NewErrParamRequired("LocationARN")) + } + if s.LocationARN != nil && len(*s.LocationARN) < 16 { + invalidParams.Add(request.NewErrParamMinLen("LocationARN", 16)) + } + if s.Role == nil { + invalidParams.Add(request.NewErrParamRequired("Role")) + } + if s.Role != nil && len(*s.Role) < 20 { + invalidParams.Add(request.NewErrParamMinLen("Role", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAuthentication sets the Authentication field's value. +func (s *CreateSMBFileShareInput) SetAuthentication(v string) *CreateSMBFileShareInput { + s.Authentication = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateSMBFileShareInput) SetClientToken(v string) *CreateSMBFileShareInput { + s.ClientToken = &v + return s +} + +// SetDefaultStorageClass sets the DefaultStorageClass field's value. +func (s *CreateSMBFileShareInput) SetDefaultStorageClass(v string) *CreateSMBFileShareInput { + s.DefaultStorageClass = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *CreateSMBFileShareInput) SetGatewayARN(v string) *CreateSMBFileShareInput { + s.GatewayARN = &v + return s +} + +// SetGuessMIMETypeEnabled sets the GuessMIMETypeEnabled field's value. +func (s *CreateSMBFileShareInput) SetGuessMIMETypeEnabled(v bool) *CreateSMBFileShareInput { + s.GuessMIMETypeEnabled = &v + return s +} + +// SetInvalidUserList sets the InvalidUserList field's value. +func (s *CreateSMBFileShareInput) SetInvalidUserList(v []*string) *CreateSMBFileShareInput { + s.InvalidUserList = v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *CreateSMBFileShareInput) SetKMSEncrypted(v bool) *CreateSMBFileShareInput { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *CreateSMBFileShareInput) SetKMSKey(v string) *CreateSMBFileShareInput { + s.KMSKey = &v + return s +} + +// SetLocationARN sets the LocationARN field's value. +func (s *CreateSMBFileShareInput) SetLocationARN(v string) *CreateSMBFileShareInput { + s.LocationARN = &v + return s +} + +// SetObjectACL sets the ObjectACL field's value. +func (s *CreateSMBFileShareInput) SetObjectACL(v string) *CreateSMBFileShareInput { + s.ObjectACL = &v + return s +} + +// SetReadOnly sets the ReadOnly field's value. +func (s *CreateSMBFileShareInput) SetReadOnly(v bool) *CreateSMBFileShareInput { + s.ReadOnly = &v + return s +} + +// SetRequesterPays sets the RequesterPays field's value. 
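+//
+// Editorial sketch (illustrative only): an SMB file share request using guest
+// access; with ActiveDirectory authentication, ValidUserList and
+// InvalidUserList could be supplied as well. The client value svc and all
+// literal values below are placeholders.
+//
+//    share, err := svc.CreateSMBFileShare(&storagegateway.CreateSMBFileShareInput{
+//        Authentication: aws.String("GuestAccess"),
+//        ClientToken:    aws.String("token-0123456789"),
+//        GatewayARN:     aws.String("arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B"),
+//        LocationARN:    aws.String("arn:aws:s3:::my-example-bucket"),
+//        Role:           aws.String("arn:aws:iam::111122223333:role/StorageGatewayAccess"),
+//    })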
+func (s *CreateSMBFileShareInput) SetRequesterPays(v bool) *CreateSMBFileShareInput { + s.RequesterPays = &v + return s +} + +// SetRole sets the Role field's value. +func (s *CreateSMBFileShareInput) SetRole(v string) *CreateSMBFileShareInput { + s.Role = &v + return s +} + +// SetValidUserList sets the ValidUserList field's value. +func (s *CreateSMBFileShareInput) SetValidUserList(v []*string) *CreateSMBFileShareInput { + s.ValidUserList = v + return s +} + +// CreateSMBFileShareOutput +type CreateSMBFileShareOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the newly created file share. + FileShareARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s CreateSMBFileShareOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSMBFileShareOutput) GoString() string { + return s.String() +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *CreateSMBFileShareOutput) SetFileShareARN(v string) *CreateSMBFileShareOutput { + s.FileShareARN = &v + return s +} + +type CreateSnapshotFromVolumeRecoveryPointInput struct { + _ struct{} `type:"structure"` + + // SnapshotDescription is a required field + SnapshotDescription *string `min:"1" type:"string" required:"true"` + + // VolumeARN is a required field + VolumeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateSnapshotFromVolumeRecoveryPointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSnapshotFromVolumeRecoveryPointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateSnapshotFromVolumeRecoveryPointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateSnapshotFromVolumeRecoveryPointInput"} + if s.SnapshotDescription == nil { + invalidParams.Add(request.NewErrParamRequired("SnapshotDescription")) + } + if s.SnapshotDescription != nil && len(*s.SnapshotDescription) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SnapshotDescription", 1)) + } + if s.VolumeARN == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARN")) + } + if s.VolumeARN != nil && len(*s.VolumeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("VolumeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSnapshotDescription sets the SnapshotDescription field's value. +func (s *CreateSnapshotFromVolumeRecoveryPointInput) SetSnapshotDescription(v string) *CreateSnapshotFromVolumeRecoveryPointInput { + s.SnapshotDescription = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. 
+func (s *CreateSnapshotFromVolumeRecoveryPointInput) SetVolumeARN(v string) *CreateSnapshotFromVolumeRecoveryPointInput { + s.VolumeARN = &v + return s +} + +type CreateSnapshotFromVolumeRecoveryPointOutput struct { + _ struct{} `type:"structure"` + + SnapshotId *string `type:"string"` + + VolumeARN *string `min:"50" type:"string"` + + VolumeRecoveryPointTime *string `type:"string"` +} + +// String returns the string representation +func (s CreateSnapshotFromVolumeRecoveryPointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSnapshotFromVolumeRecoveryPointOutput) GoString() string { + return s.String() +} + +// SetSnapshotId sets the SnapshotId field's value. +func (s *CreateSnapshotFromVolumeRecoveryPointOutput) SetSnapshotId(v string) *CreateSnapshotFromVolumeRecoveryPointOutput { + s.SnapshotId = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *CreateSnapshotFromVolumeRecoveryPointOutput) SetVolumeARN(v string) *CreateSnapshotFromVolumeRecoveryPointOutput { + s.VolumeARN = &v + return s +} + +// SetVolumeRecoveryPointTime sets the VolumeRecoveryPointTime field's value. +func (s *CreateSnapshotFromVolumeRecoveryPointOutput) SetVolumeRecoveryPointTime(v string) *CreateSnapshotFromVolumeRecoveryPointOutput { + s.VolumeRecoveryPointTime = &v + return s +} + +// A JSON object containing one or more of the following fields: +// +// * CreateSnapshotInput$SnapshotDescription +// +// * CreateSnapshotInput$VolumeARN +type CreateSnapshotInput struct { + _ struct{} `type:"structure"` + + // Textual description of the snapshot that appears in the Amazon EC2 console, + // Elastic Block Store snapshots panel in the Description field, and in the + // AWS Storage Gateway snapshot Details pane, Description field + // + // SnapshotDescription is a required field + SnapshotDescription *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the volume. Use the ListVolumes operation + // to return a list of gateway volumes. + // + // VolumeARN is a required field + VolumeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateSnapshotInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSnapshotInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateSnapshotInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateSnapshotInput"} + if s.SnapshotDescription == nil { + invalidParams.Add(request.NewErrParamRequired("SnapshotDescription")) + } + if s.SnapshotDescription != nil && len(*s.SnapshotDescription) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SnapshotDescription", 1)) + } + if s.VolumeARN == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARN")) + } + if s.VolumeARN != nil && len(*s.VolumeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("VolumeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSnapshotDescription sets the SnapshotDescription field's value. +func (s *CreateSnapshotInput) SetSnapshotDescription(v string) *CreateSnapshotInput { + s.SnapshotDescription = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. 
+func (s *CreateSnapshotInput) SetVolumeARN(v string) *CreateSnapshotInput { + s.VolumeARN = &v + return s +} + +// A JSON object containing the following fields: +type CreateSnapshotOutput struct { + _ struct{} `type:"structure"` + + // The snapshot ID that is used to refer to the snapshot in future operations + // such as describing snapshots (Amazon Elastic Compute Cloud API DescribeSnapshots) + // or creating a volume from a snapshot (CreateStorediSCSIVolume). + SnapshotId *string `type:"string"` + + // The Amazon Resource Name (ARN) of the volume of which the snapshot was taken. + VolumeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s CreateSnapshotOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateSnapshotOutput) GoString() string { + return s.String() +} + +// SetSnapshotId sets the SnapshotId field's value. +func (s *CreateSnapshotOutput) SetSnapshotId(v string) *CreateSnapshotOutput { + s.SnapshotId = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *CreateSnapshotOutput) SetVolumeARN(v string) *CreateSnapshotOutput { + s.VolumeARN = &v + return s +} + +// A JSON object containing one or more of the following fields: +// +// * CreateStorediSCSIVolumeInput$DiskId +// +// * CreateStorediSCSIVolumeInput$NetworkInterfaceId +// +// * CreateStorediSCSIVolumeInput$PreserveExistingData +// +// * CreateStorediSCSIVolumeInput$SnapshotId +// +// * CreateStorediSCSIVolumeInput$TargetName +type CreateStorediSCSIVolumeInput struct { + _ struct{} `type:"structure"` + + // The unique identifier for the gateway local disk that is configured as a + // stored volume. Use ListLocalDisks (http://docs.aws.amazon.com/storagegateway/latest/userguide/API_ListLocalDisks.html) + // to list disk IDs for a gateway. + // + // DiskId is a required field + DiskId *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the KMS key used for Amazon S3 server side + // encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The network interface of the gateway on which to expose the iSCSI target. + // Only IPv4 addresses are accepted. Use DescribeGatewayInformation to get a + // list of the network interfaces available on a gateway. + // + // Valid Values: A valid IP address. + // + // NetworkInterfaceId is a required field + NetworkInterfaceId *string `type:"string" required:"true"` + + // Specify this field as true if you want to preserve the data on the local + // disk. Otherwise, specifying this field as false creates an empty volume. + // + // Valid Values: true, false + // + // PreserveExistingData is a required field + PreserveExistingData *bool `type:"boolean" required:"true"` + + // The snapshot ID (e.g. "snap-1122aabb") of the snapshot to restore as the + // new stored volume. Specify this field if you want to create the iSCSI storage + // volume from a snapshot otherwise do not include this field. 
To list snapshots + // for your account use DescribeSnapshots (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeSnapshots.html) + // in the Amazon Elastic Compute Cloud API Reference. + SnapshotId *string `type:"string"` + + // The name of the iSCSI target used by initiators to connect to the target + // and as a suffix for the target ARN. For example, specifying TargetName as + // myvolume results in the target ARN of arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:myvolume. + // The target name must be unique across all volumes of a gateway. + // + // TargetName is a required field + TargetName *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateStorediSCSIVolumeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStorediSCSIVolumeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateStorediSCSIVolumeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateStorediSCSIVolumeInput"} + if s.DiskId == nil { + invalidParams.Add(request.NewErrParamRequired("DiskId")) + } + if s.DiskId != nil && len(*s.DiskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DiskId", 1)) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.KMSKey != nil && len(*s.KMSKey) < 20 { + invalidParams.Add(request.NewErrParamMinLen("KMSKey", 20)) + } + if s.NetworkInterfaceId == nil { + invalidParams.Add(request.NewErrParamRequired("NetworkInterfaceId")) + } + if s.PreserveExistingData == nil { + invalidParams.Add(request.NewErrParamRequired("PreserveExistingData")) + } + if s.TargetName == nil { + invalidParams.Add(request.NewErrParamRequired("TargetName")) + } + if s.TargetName != nil && len(*s.TargetName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDiskId sets the DiskId field's value. +func (s *CreateStorediSCSIVolumeInput) SetDiskId(v string) *CreateStorediSCSIVolumeInput { + s.DiskId = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *CreateStorediSCSIVolumeInput) SetGatewayARN(v string) *CreateStorediSCSIVolumeInput { + s.GatewayARN = &v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *CreateStorediSCSIVolumeInput) SetKMSEncrypted(v bool) *CreateStorediSCSIVolumeInput { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *CreateStorediSCSIVolumeInput) SetKMSKey(v string) *CreateStorediSCSIVolumeInput { + s.KMSKey = &v + return s +} + +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *CreateStorediSCSIVolumeInput) SetNetworkInterfaceId(v string) *CreateStorediSCSIVolumeInput { + s.NetworkInterfaceId = &v + return s +} + +// SetPreserveExistingData sets the PreserveExistingData field's value. +func (s *CreateStorediSCSIVolumeInput) SetPreserveExistingData(v bool) *CreateStorediSCSIVolumeInput { + s.PreserveExistingData = &v + return s +} + +// SetSnapshotId sets the SnapshotId field's value. 
+func (s *CreateStorediSCSIVolumeInput) SetSnapshotId(v string) *CreateStorediSCSIVolumeInput { + s.SnapshotId = &v + return s +} + +// SetTargetName sets the TargetName field's value. +func (s *CreateStorediSCSIVolumeInput) SetTargetName(v string) *CreateStorediSCSIVolumeInput { + s.TargetName = &v + return s +} + +// A JSON object containing the following fields: +type CreateStorediSCSIVolumeOutput struct { + _ struct{} `type:"structure"` + + // he Amazon Resource Name (ARN) of the volume target that includes the iSCSI + // name that initiators can use to connect to the target. + TargetARN *string `min:"50" type:"string"` + + // The Amazon Resource Name (ARN) of the configured volume. + VolumeARN *string `min:"50" type:"string"` + + // The size of the volume in bytes. + VolumeSizeInBytes *int64 `type:"long"` +} + +// String returns the string representation +func (s CreateStorediSCSIVolumeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateStorediSCSIVolumeOutput) GoString() string { + return s.String() +} + +// SetTargetARN sets the TargetARN field's value. +func (s *CreateStorediSCSIVolumeOutput) SetTargetARN(v string) *CreateStorediSCSIVolumeOutput { + s.TargetARN = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *CreateStorediSCSIVolumeOutput) SetVolumeARN(v string) *CreateStorediSCSIVolumeOutput { + s.VolumeARN = &v + return s +} + +// SetVolumeSizeInBytes sets the VolumeSizeInBytes field's value. +func (s *CreateStorediSCSIVolumeOutput) SetVolumeSizeInBytes(v int64) *CreateStorediSCSIVolumeOutput { + s.VolumeSizeInBytes = &v + return s +} + +// CreateTapeWithBarcodeInput +type CreateTapeWithBarcodeInput struct { + _ struct{} `type:"structure"` + + // The unique Amazon Resource Name (ARN) that represents the gateway to associate + // the virtual tape with. Use the ListGateways operation to return a list of + // gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the AWS KMS Key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The barcode that you want to assign to the tape. + // + // Barcodes cannot be reused. This includes barcodes used for tapes that have + // been deleted. + // + // TapeBarcode is a required field + TapeBarcode *string `min:"7" type:"string" required:"true"` + + // The size, in bytes, of the virtual tape that you want to create. + // + // The size must be aligned by gigabyte (1024*1024*1024 byte). + // + // TapeSizeInBytes is a required field + TapeSizeInBytes *int64 `type:"long" required:"true"` +} + +// String returns the string representation +func (s CreateTapeWithBarcodeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTapeWithBarcodeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateTapeWithBarcodeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateTapeWithBarcodeInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.KMSKey != nil && len(*s.KMSKey) < 20 { + invalidParams.Add(request.NewErrParamMinLen("KMSKey", 20)) + } + if s.TapeBarcode == nil { + invalidParams.Add(request.NewErrParamRequired("TapeBarcode")) + } + if s.TapeBarcode != nil && len(*s.TapeBarcode) < 7 { + invalidParams.Add(request.NewErrParamMinLen("TapeBarcode", 7)) + } + if s.TapeSizeInBytes == nil { + invalidParams.Add(request.NewErrParamRequired("TapeSizeInBytes")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *CreateTapeWithBarcodeInput) SetGatewayARN(v string) *CreateTapeWithBarcodeInput { + s.GatewayARN = &v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *CreateTapeWithBarcodeInput) SetKMSEncrypted(v bool) *CreateTapeWithBarcodeInput { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *CreateTapeWithBarcodeInput) SetKMSKey(v string) *CreateTapeWithBarcodeInput { + s.KMSKey = &v + return s +} + +// SetTapeBarcode sets the TapeBarcode field's value. +func (s *CreateTapeWithBarcodeInput) SetTapeBarcode(v string) *CreateTapeWithBarcodeInput { + s.TapeBarcode = &v + return s +} + +// SetTapeSizeInBytes sets the TapeSizeInBytes field's value. +func (s *CreateTapeWithBarcodeInput) SetTapeSizeInBytes(v int64) *CreateTapeWithBarcodeInput { + s.TapeSizeInBytes = &v + return s +} + +// CreateTapeOutput +type CreateTapeWithBarcodeOutput struct { + _ struct{} `type:"structure"` + + // A unique Amazon Resource Name (ARN) that represents the virtual tape that + // was created. + TapeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s CreateTapeWithBarcodeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTapeWithBarcodeOutput) GoString() string { + return s.String() +} + +// SetTapeARN sets the TapeARN field's value. +func (s *CreateTapeWithBarcodeOutput) SetTapeARN(v string) *CreateTapeWithBarcodeOutput { + s.TapeARN = &v + return s +} + +// CreateTapesInput +type CreateTapesInput struct { + _ struct{} `type:"structure"` + + // A unique identifier that you use to retry a request. If you retry a request, + // use the same ClientToken you specified in the initial request. + // + // Using the same ClientToken prevents creating the tape multiple times. + // + // ClientToken is a required field + ClientToken *string `min:"5" type:"string" required:"true"` + + // The unique Amazon Resource Name (ARN) that represents the gateway to associate + // the virtual tapes with. Use the ListGateways operation to return a list of + // gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. 
+ KMSKey *string `min:"20" type:"string"` + + // The number of virtual tapes that you want to create. + // + // NumTapesToCreate is a required field + NumTapesToCreate *int64 `min:"1" type:"integer" required:"true"` + + // A prefix that you append to the barcode of the virtual tape you are creating. + // This prefix makes the barcode unique. + // + // The prefix must be 1 to 4 characters in length and must be one of the uppercase + // letters from A to Z. + // + // TapeBarcodePrefix is a required field + TapeBarcodePrefix *string `min:"1" type:"string" required:"true"` + + // The size, in bytes, of the virtual tapes that you want to create. + // + // The size must be aligned by gigabyte (1024*1024*1024 byte). + // + // TapeSizeInBytes is a required field + TapeSizeInBytes *int64 `type:"long" required:"true"` +} + +// String returns the string representation +func (s CreateTapesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTapesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateTapesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateTapesInput"} + if s.ClientToken == nil { + invalidParams.Add(request.NewErrParamRequired("ClientToken")) + } + if s.ClientToken != nil && len(*s.ClientToken) < 5 { + invalidParams.Add(request.NewErrParamMinLen("ClientToken", 5)) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.KMSKey != nil && len(*s.KMSKey) < 20 { + invalidParams.Add(request.NewErrParamMinLen("KMSKey", 20)) + } + if s.NumTapesToCreate == nil { + invalidParams.Add(request.NewErrParamRequired("NumTapesToCreate")) + } + if s.NumTapesToCreate != nil && *s.NumTapesToCreate < 1 { + invalidParams.Add(request.NewErrParamMinValue("NumTapesToCreate", 1)) + } + if s.TapeBarcodePrefix == nil { + invalidParams.Add(request.NewErrParamRequired("TapeBarcodePrefix")) + } + if s.TapeBarcodePrefix != nil && len(*s.TapeBarcodePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TapeBarcodePrefix", 1)) + } + if s.TapeSizeInBytes == nil { + invalidParams.Add(request.NewErrParamRequired("TapeSizeInBytes")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateTapesInput) SetClientToken(v string) *CreateTapesInput { + s.ClientToken = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *CreateTapesInput) SetGatewayARN(v string) *CreateTapesInput { + s.GatewayARN = &v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *CreateTapesInput) SetKMSEncrypted(v bool) *CreateTapesInput { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *CreateTapesInput) SetKMSKey(v string) *CreateTapesInput { + s.KMSKey = &v + return s +} + +// SetNumTapesToCreate sets the NumTapesToCreate field's value. +func (s *CreateTapesInput) SetNumTapesToCreate(v int64) *CreateTapesInput { + s.NumTapesToCreate = &v + return s +} + +// SetTapeBarcodePrefix sets the TapeBarcodePrefix field's value. 
+func (s *CreateTapesInput) SetTapeBarcodePrefix(v string) *CreateTapesInput { + s.TapeBarcodePrefix = &v + return s +} + +// SetTapeSizeInBytes sets the TapeSizeInBytes field's value. +func (s *CreateTapesInput) SetTapeSizeInBytes(v int64) *CreateTapesInput { + s.TapeSizeInBytes = &v + return s +} + +// CreateTapeOutput +type CreateTapesOutput struct { + _ struct{} `type:"structure"` + + // A list of unique Amazon Resource Names (ARNs) that represents the virtual + // tapes that were created. + TapeARNs []*string `type:"list"` +} + +// String returns the string representation +func (s CreateTapesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTapesOutput) GoString() string { + return s.String() +} + +// SetTapeARNs sets the TapeARNs field's value. +func (s *CreateTapesOutput) SetTapeARNs(v []*string) *CreateTapesOutput { + s.TapeARNs = v + return s +} + +// A JSON object containing the following fields: +// +// * DeleteBandwidthRateLimitInput$BandwidthType +type DeleteBandwidthRateLimitInput struct { + _ struct{} `type:"structure"` + + // One of the BandwidthType values that indicates the gateway bandwidth rate + // limit to delete. + // + // Valid Values: Upload, Download, All. + // + // BandwidthType is a required field + BandwidthType *string `min:"3" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBandwidthRateLimitInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBandwidthRateLimitInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBandwidthRateLimitInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBandwidthRateLimitInput"} + if s.BandwidthType == nil { + invalidParams.Add(request.NewErrParamRequired("BandwidthType")) + } + if s.BandwidthType != nil && len(*s.BandwidthType) < 3 { + invalidParams.Add(request.NewErrParamMinLen("BandwidthType", 3)) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBandwidthType sets the BandwidthType field's value. +func (s *DeleteBandwidthRateLimitInput) SetBandwidthType(v string) *DeleteBandwidthRateLimitInput { + s.BandwidthType = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DeleteBandwidthRateLimitInput) SetGatewayARN(v string) *DeleteBandwidthRateLimitInput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the of the gateway whose bandwidth rate information +// was deleted. +type DeleteBandwidthRateLimitOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. 
+ GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeleteBandwidthRateLimitOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBandwidthRateLimitOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DeleteBandwidthRateLimitOutput) SetGatewayARN(v string) *DeleteBandwidthRateLimitOutput { + s.GatewayARN = &v + return s +} + +// A JSON object containing one or more of the following fields: +// +// * DeleteChapCredentialsInput$InitiatorName +// +// * DeleteChapCredentialsInput$TargetARN +type DeleteChapCredentialsInput struct { + _ struct{} `type:"structure"` + + // The iSCSI initiator that connects to the target. + // + // InitiatorName is a required field + InitiatorName *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the iSCSI volume target. Use the DescribeStorediSCSIVolumes + // operation to return to retrieve the TargetARN for specified VolumeARN. + // + // TargetARN is a required field + TargetARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteChapCredentialsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteChapCredentialsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteChapCredentialsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteChapCredentialsInput"} + if s.InitiatorName == nil { + invalidParams.Add(request.NewErrParamRequired("InitiatorName")) + } + if s.InitiatorName != nil && len(*s.InitiatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InitiatorName", 1)) + } + if s.TargetARN == nil { + invalidParams.Add(request.NewErrParamRequired("TargetARN")) + } + if s.TargetARN != nil && len(*s.TargetARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TargetARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInitiatorName sets the InitiatorName field's value. +func (s *DeleteChapCredentialsInput) SetInitiatorName(v string) *DeleteChapCredentialsInput { + s.InitiatorName = &v + return s +} + +// SetTargetARN sets the TargetARN field's value. +func (s *DeleteChapCredentialsInput) SetTargetARN(v string) *DeleteChapCredentialsInput { + s.TargetARN = &v + return s +} + +// A JSON object containing the following fields: +type DeleteChapCredentialsOutput struct { + _ struct{} `type:"structure"` + + // The iSCSI initiator that connects to the target. + InitiatorName *string `min:"1" type:"string"` + + // The Amazon Resource Name (ARN) of the target. + TargetARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeleteChapCredentialsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteChapCredentialsOutput) GoString() string { + return s.String() +} + +// SetInitiatorName sets the InitiatorName field's value. +func (s *DeleteChapCredentialsOutput) SetInitiatorName(v string) *DeleteChapCredentialsOutput { + s.InitiatorName = &v + return s +} + +// SetTargetARN sets the TargetARN field's value. 
+func (s *DeleteChapCredentialsOutput) SetTargetARN(v string) *DeleteChapCredentialsOutput { + s.TargetARN = &v + return s +} + +// DeleteFileShareInput +type DeleteFileShareInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the file share to be deleted. + // + // FileShareARN is a required field + FileShareARN *string `min:"50" type:"string" required:"true"` + + // If this value is set to true, the operation deletes a file share immediately + // and aborts all data uploads to AWS. Otherwise, the file share is not deleted + // until all data is uploaded to AWS. This process aborts the data upload process, + // and the file share enters the FORCE_DELETING status. + ForceDelete *bool `type:"boolean"` +} + +// String returns the string representation +func (s DeleteFileShareInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFileShareInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteFileShareInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteFileShareInput"} + if s.FileShareARN == nil { + invalidParams.Add(request.NewErrParamRequired("FileShareARN")) + } + if s.FileShareARN != nil && len(*s.FileShareARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("FileShareARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *DeleteFileShareInput) SetFileShareARN(v string) *DeleteFileShareInput { + s.FileShareARN = &v + return s +} + +// SetForceDelete sets the ForceDelete field's value. +func (s *DeleteFileShareInput) SetForceDelete(v bool) *DeleteFileShareInput { + s.ForceDelete = &v + return s +} + +// DeleteFileShareOutput +type DeleteFileShareOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the deleted file share. + FileShareARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeleteFileShareOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFileShareOutput) GoString() string { + return s.String() +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *DeleteFileShareOutput) SetFileShareARN(v string) *DeleteFileShareOutput { + s.FileShareARN = &v + return s +} + +// A JSON object containing the ID of the gateway to delete. +type DeleteGatewayInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteGatewayInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGatewayInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteGatewayInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteGatewayInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DeleteGatewayInput) SetGatewayARN(v string) *DeleteGatewayInput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the ID of the deleted gateway. +type DeleteGatewayOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeleteGatewayOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteGatewayOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DeleteGatewayOutput) SetGatewayARN(v string) *DeleteGatewayOutput { + s.GatewayARN = &v + return s +} + +type DeleteSnapshotScheduleInput struct { + _ struct{} `type:"structure"` + + // VolumeARN is a required field + VolumeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteSnapshotScheduleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSnapshotScheduleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSnapshotScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSnapshotScheduleInput"} + if s.VolumeARN == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARN")) + } + if s.VolumeARN != nil && len(*s.VolumeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("VolumeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *DeleteSnapshotScheduleInput) SetVolumeARN(v string) *DeleteSnapshotScheduleInput { + s.VolumeARN = &v + return s +} + +type DeleteSnapshotScheduleOutput struct { + _ struct{} `type:"structure"` + + VolumeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeleteSnapshotScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSnapshotScheduleOutput) GoString() string { + return s.String() +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *DeleteSnapshotScheduleOutput) SetVolumeARN(v string) *DeleteSnapshotScheduleOutput { + s.VolumeARN = &v + return s +} + +// DeleteTapeArchiveInput +type DeleteTapeArchiveInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the virtual tape to delete from the virtual + // tape shelf (VTS). 
+ // + // TapeARN is a required field + TapeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteTapeArchiveInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTapeArchiveInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteTapeArchiveInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteTapeArchiveInput"} + if s.TapeARN == nil { + invalidParams.Add(request.NewErrParamRequired("TapeARN")) + } + if s.TapeARN != nil && len(*s.TapeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TapeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTapeARN sets the TapeARN field's value. +func (s *DeleteTapeArchiveInput) SetTapeARN(v string) *DeleteTapeArchiveInput { + s.TapeARN = &v + return s +} + +// DeleteTapeArchiveOutput +type DeleteTapeArchiveOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the virtual tape that was deleted from + // the virtual tape shelf (VTS). + TapeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeleteTapeArchiveOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTapeArchiveOutput) GoString() string { + return s.String() +} + +// SetTapeARN sets the TapeARN field's value. +func (s *DeleteTapeArchiveOutput) SetTapeARN(v string) *DeleteTapeArchiveOutput { + s.TapeARN = &v + return s +} + +// DeleteTapeInput +type DeleteTapeInput struct { + _ struct{} `type:"structure"` + + // The unique Amazon Resource Name (ARN) of the gateway that the virtual tape + // to delete is associated with. Use the ListGateways operation to return a + // list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the virtual tape to delete. + // + // TapeARN is a required field + TapeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteTapeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTapeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteTapeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteTapeInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.TapeARN == nil { + invalidParams.Add(request.NewErrParamRequired("TapeARN")) + } + if s.TapeARN != nil && len(*s.TapeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TapeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DeleteTapeInput) SetGatewayARN(v string) *DeleteTapeInput { + s.GatewayARN = &v + return s +} + +// SetTapeARN sets the TapeARN field's value. 
+func (s *DeleteTapeInput) SetTapeARN(v string) *DeleteTapeInput { + s.TapeARN = &v + return s +} + +// DeleteTapeOutput +type DeleteTapeOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the deleted virtual tape. + TapeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeleteTapeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTapeOutput) GoString() string { + return s.String() +} + +// SetTapeARN sets the TapeARN field's value. +func (s *DeleteTapeOutput) SetTapeARN(v string) *DeleteTapeOutput { + s.TapeARN = &v + return s +} + +// A JSON object containing the DeleteVolumeInput$VolumeARN to delete. +type DeleteVolumeInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the volume. Use the ListVolumes operation + // to return a list of gateway volumes. + // + // VolumeARN is a required field + VolumeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteVolumeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVolumeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteVolumeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteVolumeInput"} + if s.VolumeARN == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARN")) + } + if s.VolumeARN != nil && len(*s.VolumeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("VolumeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *DeleteVolumeInput) SetVolumeARN(v string) *DeleteVolumeInput { + s.VolumeARN = &v + return s +} + +// A JSON object containing the of the storage volume that was deleted +type DeleteVolumeOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the storage volume that was deleted. It + // is the same ARN you provided in the request. + VolumeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeleteVolumeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVolumeOutput) GoString() string { + return s.String() +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *DeleteVolumeOutput) SetVolumeARN(v string) *DeleteVolumeOutput { + s.VolumeARN = &v + return s +} + +// A JSON object containing the of the gateway. +type DescribeBandwidthRateLimitInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeBandwidthRateLimitInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeBandwidthRateLimitInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeBandwidthRateLimitInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeBandwidthRateLimitInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeBandwidthRateLimitInput) SetGatewayARN(v string) *DescribeBandwidthRateLimitInput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the following fields: +type DescribeBandwidthRateLimitOutput struct { + _ struct{} `type:"structure"` + + // The average download bandwidth rate limit in bits per second. This field + // does not appear in the response if the download rate limit is not set. + AverageDownloadRateLimitInBitsPerSec *int64 `min:"102400" type:"long"` + + // The average upload bandwidth rate limit in bits per second. This field does + // not appear in the response if the upload rate limit is not set. + AverageUploadRateLimitInBitsPerSec *int64 `min:"51200" type:"long"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DescribeBandwidthRateLimitOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeBandwidthRateLimitOutput) GoString() string { + return s.String() +} + +// SetAverageDownloadRateLimitInBitsPerSec sets the AverageDownloadRateLimitInBitsPerSec field's value. +func (s *DescribeBandwidthRateLimitOutput) SetAverageDownloadRateLimitInBitsPerSec(v int64) *DescribeBandwidthRateLimitOutput { + s.AverageDownloadRateLimitInBitsPerSec = &v + return s +} + +// SetAverageUploadRateLimitInBitsPerSec sets the AverageUploadRateLimitInBitsPerSec field's value. +func (s *DescribeBandwidthRateLimitOutput) SetAverageUploadRateLimitInBitsPerSec(v int64) *DescribeBandwidthRateLimitOutput { + s.AverageUploadRateLimitInBitsPerSec = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeBandwidthRateLimitOutput) SetGatewayARN(v string) *DescribeBandwidthRateLimitOutput { + s.GatewayARN = &v + return s +} + +type DescribeCacheInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeCacheInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCacheInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeCacheInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeCacheInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeCacheInput) SetGatewayARN(v string) *DescribeCacheInput { + s.GatewayARN = &v + return s +} + +type DescribeCacheOutput struct { + _ struct{} `type:"structure"` + + CacheAllocatedInBytes *int64 `type:"long"` + + CacheDirtyPercentage *float64 `type:"double"` + + CacheHitPercentage *float64 `type:"double"` + + CacheMissPercentage *float64 `type:"double"` + + CacheUsedPercentage *float64 `type:"double"` + + DiskIds []*string `type:"list"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DescribeCacheOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCacheOutput) GoString() string { + return s.String() +} + +// SetCacheAllocatedInBytes sets the CacheAllocatedInBytes field's value. +func (s *DescribeCacheOutput) SetCacheAllocatedInBytes(v int64) *DescribeCacheOutput { + s.CacheAllocatedInBytes = &v + return s +} + +// SetCacheDirtyPercentage sets the CacheDirtyPercentage field's value. +func (s *DescribeCacheOutput) SetCacheDirtyPercentage(v float64) *DescribeCacheOutput { + s.CacheDirtyPercentage = &v + return s +} + +// SetCacheHitPercentage sets the CacheHitPercentage field's value. +func (s *DescribeCacheOutput) SetCacheHitPercentage(v float64) *DescribeCacheOutput { + s.CacheHitPercentage = &v + return s +} + +// SetCacheMissPercentage sets the CacheMissPercentage field's value. +func (s *DescribeCacheOutput) SetCacheMissPercentage(v float64) *DescribeCacheOutput { + s.CacheMissPercentage = &v + return s +} + +// SetCacheUsedPercentage sets the CacheUsedPercentage field's value. +func (s *DescribeCacheOutput) SetCacheUsedPercentage(v float64) *DescribeCacheOutput { + s.CacheUsedPercentage = &v + return s +} + +// SetDiskIds sets the DiskIds field's value. +func (s *DescribeCacheOutput) SetDiskIds(v []*string) *DescribeCacheOutput { + s.DiskIds = v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeCacheOutput) SetGatewayARN(v string) *DescribeCacheOutput { + s.GatewayARN = &v + return s +} + +type DescribeCachediSCSIVolumesInput struct { + _ struct{} `type:"structure"` + + // VolumeARNs is a required field + VolumeARNs []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeCachediSCSIVolumesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCachediSCSIVolumesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeCachediSCSIVolumesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeCachediSCSIVolumesInput"} + if s.VolumeARNs == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARNs")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetVolumeARNs sets the VolumeARNs field's value. +func (s *DescribeCachediSCSIVolumesInput) SetVolumeARNs(v []*string) *DescribeCachediSCSIVolumesInput { + s.VolumeARNs = v + return s +} + +// A JSON object containing the following fields: +type DescribeCachediSCSIVolumesOutput struct { + _ struct{} `type:"structure"` + + // An array of objects where each object contains metadata about one cached + // volume. + CachediSCSIVolumes []*CachediSCSIVolume `type:"list"` +} + +// String returns the string representation +func (s DescribeCachediSCSIVolumesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeCachediSCSIVolumesOutput) GoString() string { + return s.String() +} + +// SetCachediSCSIVolumes sets the CachediSCSIVolumes field's value. +func (s *DescribeCachediSCSIVolumesOutput) SetCachediSCSIVolumes(v []*CachediSCSIVolume) *DescribeCachediSCSIVolumesOutput { + s.CachediSCSIVolumes = v + return s +} + +// A JSON object containing the Amazon Resource Name (ARN) of the iSCSI volume +// target. +type DescribeChapCredentialsInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the iSCSI volume target. Use the DescribeStorediSCSIVolumes + // operation to return to retrieve the TargetARN for specified VolumeARN. + // + // TargetARN is a required field + TargetARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeChapCredentialsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeChapCredentialsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeChapCredentialsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeChapCredentialsInput"} + if s.TargetARN == nil { + invalidParams.Add(request.NewErrParamRequired("TargetARN")) + } + if s.TargetARN != nil && len(*s.TargetARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TargetARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTargetARN sets the TargetARN field's value. +func (s *DescribeChapCredentialsInput) SetTargetARN(v string) *DescribeChapCredentialsInput { + s.TargetARN = &v + return s +} + +// A JSON object containing a . +type DescribeChapCredentialsOutput struct { + _ struct{} `type:"structure"` + + // An array of ChapInfo objects that represent CHAP credentials. Each object + // in the array contains CHAP credential information for one target-initiator + // pair. If no CHAP credentials are set, an empty array is returned. CHAP credential + // information is provided in a JSON object with the following fields: + // + // * InitiatorName: The iSCSI initiator that connects to the target. + // + // * SecretToAuthenticateInitiator: The secret key that the initiator (for + // example, the Windows client) must provide to participate in mutual CHAP + // with the target. + // + // * SecretToAuthenticateTarget: The secret key that the target must provide + // to participate in mutual CHAP with the initiator (e.g. 
Windows client). + // + // * TargetARN: The Amazon Resource Name (ARN) of the storage volume. + ChapCredentials []*ChapInfo `type:"list"` +} + +// String returns the string representation +func (s DescribeChapCredentialsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeChapCredentialsOutput) GoString() string { + return s.String() +} + +// SetChapCredentials sets the ChapCredentials field's value. +func (s *DescribeChapCredentialsOutput) SetChapCredentials(v []*ChapInfo) *DescribeChapCredentialsOutput { + s.ChapCredentials = v + return s +} + +// A JSON object containing the ID of the gateway. +type DescribeGatewayInformationInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeGatewayInformationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeGatewayInformationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeGatewayInformationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeGatewayInformationInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeGatewayInformationInput) SetGatewayARN(v string) *DescribeGatewayInformationInput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the following fields: +type DescribeGatewayInformationOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // The unique identifier assigned to your gateway during activation. This ID + // becomes part of the gateway Amazon Resource Name (ARN), which you use as + // input for other operations. + GatewayId *string `min:"12" type:"string"` + + // The name you configured for your gateway. + GatewayName *string `type:"string"` + + // A NetworkInterface array that contains descriptions of the gateway network + // interfaces. + GatewayNetworkInterfaces []*NetworkInterface `type:"list"` + + // A value that indicates the operating state of the gateway. + GatewayState *string `min:"2" type:"string"` + + // A value that indicates the time zone configured for the gateway. + GatewayTimezone *string `min:"3" type:"string"` + + // The type of the gateway. + GatewayType *string `min:"2" type:"string"` + + // The date on which the last software update was applied to the gateway. If + // the gateway has never been updated, this field does not return a value in + // the response. + LastSoftwareUpdate *string `min:"1" type:"string"` + + // The date on which an update to the gateway is available. This date is in + // the time zone of the gateway. 
If the gateway is not available for an update + // this field is not returned in the response. + NextUpdateAvailabilityDate *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeGatewayInformationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeGatewayInformationOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeGatewayInformationOutput) SetGatewayARN(v string) *DescribeGatewayInformationOutput { + s.GatewayARN = &v + return s +} + +// SetGatewayId sets the GatewayId field's value. +func (s *DescribeGatewayInformationOutput) SetGatewayId(v string) *DescribeGatewayInformationOutput { + s.GatewayId = &v + return s +} + +// SetGatewayName sets the GatewayName field's value. +func (s *DescribeGatewayInformationOutput) SetGatewayName(v string) *DescribeGatewayInformationOutput { + s.GatewayName = &v + return s +} + +// SetGatewayNetworkInterfaces sets the GatewayNetworkInterfaces field's value. +func (s *DescribeGatewayInformationOutput) SetGatewayNetworkInterfaces(v []*NetworkInterface) *DescribeGatewayInformationOutput { + s.GatewayNetworkInterfaces = v + return s +} + +// SetGatewayState sets the GatewayState field's value. +func (s *DescribeGatewayInformationOutput) SetGatewayState(v string) *DescribeGatewayInformationOutput { + s.GatewayState = &v + return s +} + +// SetGatewayTimezone sets the GatewayTimezone field's value. +func (s *DescribeGatewayInformationOutput) SetGatewayTimezone(v string) *DescribeGatewayInformationOutput { + s.GatewayTimezone = &v + return s +} + +// SetGatewayType sets the GatewayType field's value. +func (s *DescribeGatewayInformationOutput) SetGatewayType(v string) *DescribeGatewayInformationOutput { + s.GatewayType = &v + return s +} + +// SetLastSoftwareUpdate sets the LastSoftwareUpdate field's value. +func (s *DescribeGatewayInformationOutput) SetLastSoftwareUpdate(v string) *DescribeGatewayInformationOutput { + s.LastSoftwareUpdate = &v + return s +} + +// SetNextUpdateAvailabilityDate sets the NextUpdateAvailabilityDate field's value. +func (s *DescribeGatewayInformationOutput) SetNextUpdateAvailabilityDate(v string) *DescribeGatewayInformationOutput { + s.NextUpdateAvailabilityDate = &v + return s +} + +// A JSON object containing the of the gateway. +type DescribeMaintenanceStartTimeInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeMaintenanceStartTimeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceStartTimeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeMaintenanceStartTimeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMaintenanceStartTimeInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeMaintenanceStartTimeInput) SetGatewayARN(v string) *DescribeMaintenanceStartTimeInput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the following fields: +// +// * DescribeMaintenanceStartTimeOutput$DayOfWeek +// +// * DescribeMaintenanceStartTimeOutput$HourOfDay +// +// * DescribeMaintenanceStartTimeOutput$MinuteOfHour +// +// * DescribeMaintenanceStartTimeOutput$Timezone +type DescribeMaintenanceStartTimeOutput struct { + _ struct{} `type:"structure"` + + // An ordinal number between 0 and 6 that represents the day of the week, where + // 0 represents Sunday and 6 represents Saturday. The day of week is in the + // time zone of the gateway. + DayOfWeek *int64 `type:"integer"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // The hour component of the maintenance start time represented as hh, where + // hh is the hour (0 to 23). The hour of the day is in the time zone of the + // gateway. + HourOfDay *int64 `type:"integer"` + + // The minute component of the maintenance start time represented as mm, where + // mm is the minute (0 to 59). The minute of the hour is in the time zone of + // the gateway. + MinuteOfHour *int64 `type:"integer"` + + Timezone *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s DescribeMaintenanceStartTimeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMaintenanceStartTimeOutput) GoString() string { + return s.String() +} + +// SetDayOfWeek sets the DayOfWeek field's value. +func (s *DescribeMaintenanceStartTimeOutput) SetDayOfWeek(v int64) *DescribeMaintenanceStartTimeOutput { + s.DayOfWeek = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeMaintenanceStartTimeOutput) SetGatewayARN(v string) *DescribeMaintenanceStartTimeOutput { + s.GatewayARN = &v + return s +} + +// SetHourOfDay sets the HourOfDay field's value. +func (s *DescribeMaintenanceStartTimeOutput) SetHourOfDay(v int64) *DescribeMaintenanceStartTimeOutput { + s.HourOfDay = &v + return s +} + +// SetMinuteOfHour sets the MinuteOfHour field's value. +func (s *DescribeMaintenanceStartTimeOutput) SetMinuteOfHour(v int64) *DescribeMaintenanceStartTimeOutput { + s.MinuteOfHour = &v + return s +} + +// SetTimezone sets the Timezone field's value. +func (s *DescribeMaintenanceStartTimeOutput) SetTimezone(v string) *DescribeMaintenanceStartTimeOutput { + s.Timezone = &v + return s +} + +// DescribeNFSFileSharesInput +type DescribeNFSFileSharesInput struct { + _ struct{} `type:"structure"` + + // An array containing the Amazon Resource Name (ARN) of each file share to + // be described. 
+ // + // FileShareARNList is a required field + FileShareARNList []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeNFSFileSharesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeNFSFileSharesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeNFSFileSharesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeNFSFileSharesInput"} + if s.FileShareARNList == nil { + invalidParams.Add(request.NewErrParamRequired("FileShareARNList")) + } + if s.FileShareARNList != nil && len(s.FileShareARNList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FileShareARNList", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFileShareARNList sets the FileShareARNList field's value. +func (s *DescribeNFSFileSharesInput) SetFileShareARNList(v []*string) *DescribeNFSFileSharesInput { + s.FileShareARNList = v + return s +} + +// DescribeNFSFileSharesOutput +type DescribeNFSFileSharesOutput struct { + _ struct{} `type:"structure"` + + // An array containing a description for each requested file share. + NFSFileShareInfoList []*NFSFileShareInfo `type:"list"` +} + +// String returns the string representation +func (s DescribeNFSFileSharesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeNFSFileSharesOutput) GoString() string { + return s.String() +} + +// SetNFSFileShareInfoList sets the NFSFileShareInfoList field's value. +func (s *DescribeNFSFileSharesOutput) SetNFSFileShareInfoList(v []*NFSFileShareInfo) *DescribeNFSFileSharesOutput { + s.NFSFileShareInfoList = v + return s +} + +// DescribeSMBFileSharesInput +type DescribeSMBFileSharesInput struct { + _ struct{} `type:"structure"` + + // An array containing the Amazon Resource Name (ARN) of each file share to + // be described. + // + // FileShareARNList is a required field + FileShareARNList []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeSMBFileSharesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSMBFileSharesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeSMBFileSharesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeSMBFileSharesInput"} + if s.FileShareARNList == nil { + invalidParams.Add(request.NewErrParamRequired("FileShareARNList")) + } + if s.FileShareARNList != nil && len(s.FileShareARNList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FileShareARNList", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFileShareARNList sets the FileShareARNList field's value. +func (s *DescribeSMBFileSharesInput) SetFileShareARNList(v []*string) *DescribeSMBFileSharesInput { + s.FileShareARNList = v + return s +} + +// DescribeSMBFileSharesOutput +type DescribeSMBFileSharesOutput struct { + _ struct{} `type:"structure"` + + // An array containing a description for each requested file share. 
+ SMBFileShareInfoList []*SMBFileShareInfo `type:"list"` +} + +// String returns the string representation +func (s DescribeSMBFileSharesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSMBFileSharesOutput) GoString() string { + return s.String() +} + +// SetSMBFileShareInfoList sets the SMBFileShareInfoList field's value. +func (s *DescribeSMBFileSharesOutput) SetSMBFileShareInfoList(v []*SMBFileShareInfo) *DescribeSMBFileSharesOutput { + s.SMBFileShareInfoList = v + return s +} + +type DescribeSMBSettingsInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeSMBSettingsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSMBSettingsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeSMBSettingsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeSMBSettingsInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeSMBSettingsInput) SetGatewayARN(v string) *DescribeSMBSettingsInput { + s.GatewayARN = &v + return s +} + +type DescribeSMBSettingsOutput struct { + _ struct{} `type:"structure"` + + // The name of the domain that the gateway is joined to. + DomainName *string `type:"string"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // This value is true if a password for the guest user “smbguest” is set, and + // otherwise false. + SMBGuestPasswordSet *bool `type:"boolean"` +} + +// String returns the string representation +func (s DescribeSMBSettingsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSMBSettingsOutput) GoString() string { + return s.String() +} + +// SetDomainName sets the DomainName field's value. +func (s *DescribeSMBSettingsOutput) SetDomainName(v string) *DescribeSMBSettingsOutput { + s.DomainName = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeSMBSettingsOutput) SetGatewayARN(v string) *DescribeSMBSettingsOutput { + s.GatewayARN = &v + return s +} + +// SetSMBGuestPasswordSet sets the SMBGuestPasswordSet field's value. +func (s *DescribeSMBSettingsOutput) SetSMBGuestPasswordSet(v bool) *DescribeSMBSettingsOutput { + s.SMBGuestPasswordSet = &v + return s +} + +// A JSON object containing the DescribeSnapshotScheduleInput$VolumeARN of the +// volume. +type DescribeSnapshotScheduleInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the volume. Use the ListVolumes operation + // to return a list of gateway volumes. 
+ // + // VolumeARN is a required field + VolumeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeSnapshotScheduleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSnapshotScheduleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeSnapshotScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeSnapshotScheduleInput"} + if s.VolumeARN == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARN")) + } + if s.VolumeARN != nil && len(*s.VolumeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("VolumeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *DescribeSnapshotScheduleInput) SetVolumeARN(v string) *DescribeSnapshotScheduleInput { + s.VolumeARN = &v + return s +} + +type DescribeSnapshotScheduleOutput struct { + _ struct{} `type:"structure"` + + Description *string `min:"1" type:"string"` + + RecurrenceInHours *int64 `min:"1" type:"integer"` + + StartAt *int64 `type:"integer"` + + Timezone *string `min:"3" type:"string"` + + VolumeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DescribeSnapshotScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSnapshotScheduleOutput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *DescribeSnapshotScheduleOutput) SetDescription(v string) *DescribeSnapshotScheduleOutput { + s.Description = &v + return s +} + +// SetRecurrenceInHours sets the RecurrenceInHours field's value. +func (s *DescribeSnapshotScheduleOutput) SetRecurrenceInHours(v int64) *DescribeSnapshotScheduleOutput { + s.RecurrenceInHours = &v + return s +} + +// SetStartAt sets the StartAt field's value. +func (s *DescribeSnapshotScheduleOutput) SetStartAt(v int64) *DescribeSnapshotScheduleOutput { + s.StartAt = &v + return s +} + +// SetTimezone sets the Timezone field's value. +func (s *DescribeSnapshotScheduleOutput) SetTimezone(v string) *DescribeSnapshotScheduleOutput { + s.Timezone = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *DescribeSnapshotScheduleOutput) SetVolumeARN(v string) *DescribeSnapshotScheduleOutput { + s.VolumeARN = &v + return s +} + +// A JSON object containing a list of DescribeStorediSCSIVolumesInput$VolumeARNs. +type DescribeStorediSCSIVolumesInput struct { + _ struct{} `type:"structure"` + + // An array of strings where each string represents the Amazon Resource Name + // (ARN) of a stored volume. All of the specified stored volumes must from the + // same gateway. Use ListVolumes to get volume ARNs for a gateway. + // + // VolumeARNs is a required field + VolumeARNs []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeStorediSCSIVolumesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStorediSCSIVolumesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeStorediSCSIVolumesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeStorediSCSIVolumesInput"} + if s.VolumeARNs == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARNs")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetVolumeARNs sets the VolumeARNs field's value. +func (s *DescribeStorediSCSIVolumesInput) SetVolumeARNs(v []*string) *DescribeStorediSCSIVolumesInput { + s.VolumeARNs = v + return s +} + +type DescribeStorediSCSIVolumesOutput struct { + _ struct{} `type:"structure"` + + StorediSCSIVolumes []*StorediSCSIVolume `type:"list"` +} + +// String returns the string representation +func (s DescribeStorediSCSIVolumesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeStorediSCSIVolumesOutput) GoString() string { + return s.String() +} + +// SetStorediSCSIVolumes sets the StorediSCSIVolumes field's value. +func (s *DescribeStorediSCSIVolumesOutput) SetStorediSCSIVolumes(v []*StorediSCSIVolume) *DescribeStorediSCSIVolumesOutput { + s.StorediSCSIVolumes = v + return s +} + +// DescribeTapeArchivesInput +type DescribeTapeArchivesInput struct { + _ struct{} `type:"structure"` + + // Specifies that the number of virtual tapes descried be limited to the specified + // number. + Limit *int64 `min:"1" type:"integer"` + + // An opaque string that indicates the position at which to begin describing + // virtual tapes. + Marker *string `min:"1" type:"string"` + + // Specifies one or more unique Amazon Resource Names (ARNs) that represent + // the virtual tapes you want to describe. + TapeARNs []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeTapeArchivesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTapeArchivesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeTapeArchivesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeTapeArchivesInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLimit sets the Limit field's value. +func (s *DescribeTapeArchivesInput) SetLimit(v int64) *DescribeTapeArchivesInput { + s.Limit = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeTapeArchivesInput) SetMarker(v string) *DescribeTapeArchivesInput { + s.Marker = &v + return s +} + +// SetTapeARNs sets the TapeARNs field's value. +func (s *DescribeTapeArchivesInput) SetTapeARNs(v []*string) *DescribeTapeArchivesInput { + s.TapeARNs = v + return s +} + +// DescribeTapeArchivesOutput +type DescribeTapeArchivesOutput struct { + _ struct{} `type:"structure"` + + // An opaque string that indicates the position at which the virtual tapes that + // were fetched for description ended. Use this marker in your next request + // to fetch the next set of virtual tapes in the virtual tape shelf (VTS). If + // there are no more virtual tapes to describe, this field does not appear in + // the response. + Marker *string `min:"1" type:"string"` + + // An array of virtual tape objects in the virtual tape shelf (VTS). 
The description + // includes of the Amazon Resource Name (ARN) of the virtual tapes. The information + // returned includes the Amazon Resource Names (ARNs) of the tapes, size of + // the tapes, status of the tapes, progress of the description and tape barcode. + TapeArchives []*TapeArchive `type:"list"` +} + +// String returns the string representation +func (s DescribeTapeArchivesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTapeArchivesOutput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *DescribeTapeArchivesOutput) SetMarker(v string) *DescribeTapeArchivesOutput { + s.Marker = &v + return s +} + +// SetTapeArchives sets the TapeArchives field's value. +func (s *DescribeTapeArchivesOutput) SetTapeArchives(v []*TapeArchive) *DescribeTapeArchivesOutput { + s.TapeArchives = v + return s +} + +// DescribeTapeRecoveryPointsInput +type DescribeTapeRecoveryPointsInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // Specifies that the number of virtual tape recovery points that are described + // be limited to the specified number. + Limit *int64 `min:"1" type:"integer"` + + // An opaque string that indicates the position at which to begin describing + // the virtual tape recovery points. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeTapeRecoveryPointsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTapeRecoveryPointsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeTapeRecoveryPointsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeTapeRecoveryPointsInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeTapeRecoveryPointsInput) SetGatewayARN(v string) *DescribeTapeRecoveryPointsInput { + s.GatewayARN = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *DescribeTapeRecoveryPointsInput) SetLimit(v int64) *DescribeTapeRecoveryPointsInput { + s.Limit = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeTapeRecoveryPointsInput) SetMarker(v string) *DescribeTapeRecoveryPointsInput { + s.Marker = &v + return s +} + +// DescribeTapeRecoveryPointsOutput +type DescribeTapeRecoveryPointsOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. 
+ GatewayARN *string `min:"50" type:"string"` + + // An opaque string that indicates the position at which the virtual tape recovery + // points that were listed for description ended. + // + // Use this marker in your next request to list the next set of virtual tape + // recovery points in the list. If there are no more recovery points to describe, + // this field does not appear in the response. + Marker *string `min:"1" type:"string"` + + // An array of TapeRecoveryPointInfos that are available for the specified gateway. + TapeRecoveryPointInfos []*TapeRecoveryPointInfo `type:"list"` +} + +// String returns the string representation +func (s DescribeTapeRecoveryPointsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTapeRecoveryPointsOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeTapeRecoveryPointsOutput) SetGatewayARN(v string) *DescribeTapeRecoveryPointsOutput { + s.GatewayARN = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeTapeRecoveryPointsOutput) SetMarker(v string) *DescribeTapeRecoveryPointsOutput { + s.Marker = &v + return s +} + +// SetTapeRecoveryPointInfos sets the TapeRecoveryPointInfos field's value. +func (s *DescribeTapeRecoveryPointsOutput) SetTapeRecoveryPointInfos(v []*TapeRecoveryPointInfo) *DescribeTapeRecoveryPointsOutput { + s.TapeRecoveryPointInfos = v + return s +} + +// DescribeTapesInput +type DescribeTapesInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // Specifies that the number of virtual tapes described be limited to the specified + // number. + // + // Amazon Web Services may impose its own limit, if this field is not set. + Limit *int64 `min:"1" type:"integer"` + + // A marker value, obtained in a previous call to DescribeTapes. This marker + // indicates which page of results to retrieve. + // + // If not specified, the first page of results is retrieved. + Marker *string `min:"1" type:"string"` + + // Specifies one or more unique Amazon Resource Names (ARNs) that represent + // the virtual tapes you want to describe. If this parameter is not specified, + // Tape gateway returns a description of all virtual tapes associated with the + // specified gateway. + TapeARNs []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeTapesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTapesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeTapesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeTapesInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeTapesInput) SetGatewayARN(v string) *DescribeTapesInput { + s.GatewayARN = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *DescribeTapesInput) SetLimit(v int64) *DescribeTapesInput { + s.Limit = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeTapesInput) SetMarker(v string) *DescribeTapesInput { + s.Marker = &v + return s +} + +// SetTapeARNs sets the TapeARNs field's value. +func (s *DescribeTapesInput) SetTapeARNs(v []*string) *DescribeTapesInput { + s.TapeARNs = v + return s +} + +// DescribeTapesOutput +type DescribeTapesOutput struct { + _ struct{} `type:"structure"` + + // An opaque string which can be used as part of a subsequent DescribeTapes + // call to retrieve the next page of results. + // + // If a response does not contain a marker, then there are no more results to + // be retrieved. + Marker *string `min:"1" type:"string"` + + // An array of virtual tape descriptions. + Tapes []*Tape `type:"list"` +} + +// String returns the string representation +func (s DescribeTapesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTapesOutput) GoString() string { + return s.String() +} + +// SetMarker sets the Marker field's value. +func (s *DescribeTapesOutput) SetMarker(v string) *DescribeTapesOutput { + s.Marker = &v + return s +} + +// SetTapes sets the Tapes field's value. +func (s *DescribeTapesOutput) SetTapes(v []*Tape) *DescribeTapesOutput { + s.Tapes = v + return s +} + +type DescribeUploadBufferInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeUploadBufferInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeUploadBufferInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeUploadBufferInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeUploadBufferInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. 
+func (s *DescribeUploadBufferInput) SetGatewayARN(v string) *DescribeUploadBufferInput { + s.GatewayARN = &v + return s +} + +type DescribeUploadBufferOutput struct { + _ struct{} `type:"structure"` + + DiskIds []*string `type:"list"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + UploadBufferAllocatedInBytes *int64 `type:"long"` + + UploadBufferUsedInBytes *int64 `type:"long"` +} + +// String returns the string representation +func (s DescribeUploadBufferOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeUploadBufferOutput) GoString() string { + return s.String() +} + +// SetDiskIds sets the DiskIds field's value. +func (s *DescribeUploadBufferOutput) SetDiskIds(v []*string) *DescribeUploadBufferOutput { + s.DiskIds = v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeUploadBufferOutput) SetGatewayARN(v string) *DescribeUploadBufferOutput { + s.GatewayARN = &v + return s +} + +// SetUploadBufferAllocatedInBytes sets the UploadBufferAllocatedInBytes field's value. +func (s *DescribeUploadBufferOutput) SetUploadBufferAllocatedInBytes(v int64) *DescribeUploadBufferOutput { + s.UploadBufferAllocatedInBytes = &v + return s +} + +// SetUploadBufferUsedInBytes sets the UploadBufferUsedInBytes field's value. +func (s *DescribeUploadBufferOutput) SetUploadBufferUsedInBytes(v int64) *DescribeUploadBufferOutput { + s.UploadBufferUsedInBytes = &v + return s +} + +// DescribeVTLDevicesInput +type DescribeVTLDevicesInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // Specifies that the number of VTL devices described be limited to the specified + // number. + Limit *int64 `min:"1" type:"integer"` + + // An opaque string that indicates the position at which to begin describing + // the VTL devices. + Marker *string `min:"1" type:"string"` + + // An array of strings, where each string represents the Amazon Resource Name + // (ARN) of a VTL device. + // + // All of the specified VTL devices must be from the same gateway. If no VTL + // devices are specified, the result will contain all devices on the specified + // gateway. + VTLDeviceARNs []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeVTLDevicesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVTLDevicesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeVTLDevicesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeVTLDevicesInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeVTLDevicesInput) SetGatewayARN(v string) *DescribeVTLDevicesInput { + s.GatewayARN = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *DescribeVTLDevicesInput) SetLimit(v int64) *DescribeVTLDevicesInput { + s.Limit = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeVTLDevicesInput) SetMarker(v string) *DescribeVTLDevicesInput { + s.Marker = &v + return s +} + +// SetVTLDeviceARNs sets the VTLDeviceARNs field's value. +func (s *DescribeVTLDevicesInput) SetVTLDeviceARNs(v []*string) *DescribeVTLDevicesInput { + s.VTLDeviceARNs = v + return s +} + +// DescribeVTLDevicesOutput +type DescribeVTLDevicesOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // An opaque string that indicates the position at which the VTL devices that + // were fetched for description ended. Use the marker in your next request to + // fetch the next set of VTL devices in the list. If there are no more VTL devices + // to describe, this field does not appear in the response. + Marker *string `min:"1" type:"string"` + + // An array of VTL device objects composed of the Amazon Resource Name(ARN) + // of the VTL devices. + VTLDevices []*VTLDevice `type:"list"` +} + +// String returns the string representation +func (s DescribeVTLDevicesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVTLDevicesOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeVTLDevicesOutput) SetGatewayARN(v string) *DescribeVTLDevicesOutput { + s.GatewayARN = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeVTLDevicesOutput) SetMarker(v string) *DescribeVTLDevicesOutput { + s.Marker = &v + return s +} + +// SetVTLDevices sets the VTLDevices field's value. +func (s *DescribeVTLDevicesOutput) SetVTLDevices(v []*VTLDevice) *DescribeVTLDevicesOutput { + s.VTLDevices = v + return s +} + +// A JSON object containing the of the gateway. +type DescribeWorkingStorageInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. 
+ // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeWorkingStorageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeWorkingStorageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeWorkingStorageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeWorkingStorageInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeWorkingStorageInput) SetGatewayARN(v string) *DescribeWorkingStorageInput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the following fields: +type DescribeWorkingStorageOutput struct { + _ struct{} `type:"structure"` + + // An array of the gateway's local disk IDs that are configured as working storage. + // Each local disk ID is specified as a string (minimum length of 1 and maximum + // length of 300). If no local disks are configured as working storage, then + // the DiskIds array is empty. + DiskIds []*string `type:"list"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // The total working storage in bytes allocated for the gateway. If no working + // storage is configured for the gateway, this field returns 0. + WorkingStorageAllocatedInBytes *int64 `type:"long"` + + // The total working storage in bytes in use by the gateway. If no working storage + // is configured for the gateway, this field returns 0. + WorkingStorageUsedInBytes *int64 `type:"long"` +} + +// String returns the string representation +func (s DescribeWorkingStorageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeWorkingStorageOutput) GoString() string { + return s.String() +} + +// SetDiskIds sets the DiskIds field's value. +func (s *DescribeWorkingStorageOutput) SetDiskIds(v []*string) *DescribeWorkingStorageOutput { + s.DiskIds = v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DescribeWorkingStorageOutput) SetGatewayARN(v string) *DescribeWorkingStorageOutput { + s.GatewayARN = &v + return s +} + +// SetWorkingStorageAllocatedInBytes sets the WorkingStorageAllocatedInBytes field's value. +func (s *DescribeWorkingStorageOutput) SetWorkingStorageAllocatedInBytes(v int64) *DescribeWorkingStorageOutput { + s.WorkingStorageAllocatedInBytes = &v + return s +} + +// SetWorkingStorageUsedInBytes sets the WorkingStorageUsedInBytes field's value. +func (s *DescribeWorkingStorageOutput) SetWorkingStorageUsedInBytes(v int64) *DescribeWorkingStorageOutput { + s.WorkingStorageUsedInBytes = &v + return s +} + +// Lists iSCSI information about a VTL device. +type DeviceiSCSIAttributes struct { + _ struct{} `type:"structure"` + + // Indicates whether mutual CHAP is enabled for the iSCSI target. + ChapEnabled *bool `type:"boolean"` + + // The network interface identifier of the VTL device. 
+ NetworkInterfaceId *string `type:"string"` + + // The port used to communicate with iSCSI VTL device targets. + NetworkInterfacePort *int64 `type:"integer"` + + // Specifies the unique Amazon Resource Name (ARN) that encodes the iSCSI qualified + // name(iqn) of a tape drive or media changer target. + TargetARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DeviceiSCSIAttributes) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeviceiSCSIAttributes) GoString() string { + return s.String() +} + +// SetChapEnabled sets the ChapEnabled field's value. +func (s *DeviceiSCSIAttributes) SetChapEnabled(v bool) *DeviceiSCSIAttributes { + s.ChapEnabled = &v + return s +} + +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *DeviceiSCSIAttributes) SetNetworkInterfaceId(v string) *DeviceiSCSIAttributes { + s.NetworkInterfaceId = &v + return s +} + +// SetNetworkInterfacePort sets the NetworkInterfacePort field's value. +func (s *DeviceiSCSIAttributes) SetNetworkInterfacePort(v int64) *DeviceiSCSIAttributes { + s.NetworkInterfacePort = &v + return s +} + +// SetTargetARN sets the TargetARN field's value. +func (s *DeviceiSCSIAttributes) SetTargetARN(v string) *DeviceiSCSIAttributes { + s.TargetARN = &v + return s +} + +// DisableGatewayInput +type DisableGatewayInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s DisableGatewayInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableGatewayInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DisableGatewayInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisableGatewayInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *DisableGatewayInput) SetGatewayARN(v string) *DisableGatewayInput { + s.GatewayARN = &v + return s +} + +// DisableGatewayOutput +type DisableGatewayOutput struct { + _ struct{} `type:"structure"` + + // The unique Amazon Resource Name (ARN) of the disabled gateway. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s DisableGatewayOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableGatewayOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. 
+func (s *DisableGatewayOutput) SetGatewayARN(v string) *DisableGatewayOutput {
+	s.GatewayARN = &v
+	return s
+}
+
+type Disk struct {
+	_ struct{} `type:"structure"`
+
+	DiskAllocationResource *string `type:"string"`
+
+	DiskAllocationType *string `min:"3" type:"string"`
+
+	DiskId *string `min:"1" type:"string"`
+
+	DiskNode *string `type:"string"`
+
+	DiskPath *string `type:"string"`
+
+	DiskSizeInBytes *int64 `type:"long"`
+
+	DiskStatus *string `type:"string"`
+}
+
+// String returns the string representation
+func (s Disk) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Disk) GoString() string {
+	return s.String()
+}
+
+// SetDiskAllocationResource sets the DiskAllocationResource field's value.
+func (s *Disk) SetDiskAllocationResource(v string) *Disk {
+	s.DiskAllocationResource = &v
+	return s
+}
+
+// SetDiskAllocationType sets the DiskAllocationType field's value.
+func (s *Disk) SetDiskAllocationType(v string) *Disk {
+	s.DiskAllocationType = &v
+	return s
+}
+
+// SetDiskId sets the DiskId field's value.
+func (s *Disk) SetDiskId(v string) *Disk {
+	s.DiskId = &v
+	return s
+}
+
+// SetDiskNode sets the DiskNode field's value.
+func (s *Disk) SetDiskNode(v string) *Disk {
+	s.DiskNode = &v
+	return s
+}
+
+// SetDiskPath sets the DiskPath field's value.
+func (s *Disk) SetDiskPath(v string) *Disk {
+	s.DiskPath = &v
+	return s
+}
+
+// SetDiskSizeInBytes sets the DiskSizeInBytes field's value.
+func (s *Disk) SetDiskSizeInBytes(v int64) *Disk {
+	s.DiskSizeInBytes = &v
+	return s
+}
+
+// SetDiskStatus sets the DiskStatus field's value.
+func (s *Disk) SetDiskStatus(v string) *Disk {
+	s.DiskStatus = &v
+	return s
+}
+
+// Provides additional information about an error that was returned by the
+// service. See the errorCode and errorDetails members for more information
+// about the error.
+type Error struct {
+	_ struct{} `type:"structure"`
+
+	// Additional information about the error.
+	ErrorCode *string `locationName:"errorCode" type:"string" enum:"ErrorCode"`
+
+	// Human-readable text that provides detail about the error that occurred.
+	ErrorDetails map[string]*string `locationName:"errorDetails" type:"map"`
+}
+
+// String returns the string representation
+func (s Error) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s Error) GoString() string {
+	return s.String()
+}
+
+// SetErrorCode sets the ErrorCode field's value.
+func (s *Error) SetErrorCode(v string) *Error {
+	s.ErrorCode = &v
+	return s
+}
+
+// SetErrorDetails sets the ErrorDetails field's value.
+func (s *Error) SetErrorDetails(v map[string]*string) *Error {
+	s.ErrorDetails = v
+	return s
+}
+
+// Describes a file share.
+type FileShareInfo struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the file share.
+	FileShareARN *string `min:"50" type:"string"`
+
+	// The ID of the file share.
+	FileShareId *string `min:"12" type:"string"`
+
+	// The status of the file share. Possible values are CREATING, UPDATING, AVAILABLE
+	// and DELETING.
+	FileShareStatus *string `min:"3" type:"string"`
+
+	// The type of the file share.
+	FileShareType *string `type:"string" enum:"FileShareType"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+ GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s FileShareInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FileShareInfo) GoString() string { + return s.String() +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *FileShareInfo) SetFileShareARN(v string) *FileShareInfo { + s.FileShareARN = &v + return s +} + +// SetFileShareId sets the FileShareId field's value. +func (s *FileShareInfo) SetFileShareId(v string) *FileShareInfo { + s.FileShareId = &v + return s +} + +// SetFileShareStatus sets the FileShareStatus field's value. +func (s *FileShareInfo) SetFileShareStatus(v string) *FileShareInfo { + s.FileShareStatus = &v + return s +} + +// SetFileShareType sets the FileShareType field's value. +func (s *FileShareInfo) SetFileShareType(v string) *FileShareInfo { + s.FileShareType = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *FileShareInfo) SetGatewayARN(v string) *FileShareInfo { + s.GatewayARN = &v + return s +} + +// Describes a gateway object. +type GatewayInfo struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // The unique identifier assigned to your gateway during activation. This ID + // becomes part of the gateway Amazon Resource Name (ARN), which you use as + // input for other operations. + GatewayId *string `min:"12" type:"string"` + + // The name of the gateway. + GatewayName *string `type:"string"` + + // The state of the gateway. + // + // Valid Values: DISABLED or ACTIVE + GatewayOperationalState *string `min:"2" type:"string"` + + // The type of the gateway. + GatewayType *string `min:"2" type:"string"` +} + +// String returns the string representation +func (s GatewayInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GatewayInfo) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *GatewayInfo) SetGatewayARN(v string) *GatewayInfo { + s.GatewayARN = &v + return s +} + +// SetGatewayId sets the GatewayId field's value. +func (s *GatewayInfo) SetGatewayId(v string) *GatewayInfo { + s.GatewayId = &v + return s +} + +// SetGatewayName sets the GatewayName field's value. +func (s *GatewayInfo) SetGatewayName(v string) *GatewayInfo { + s.GatewayName = &v + return s +} + +// SetGatewayOperationalState sets the GatewayOperationalState field's value. +func (s *GatewayInfo) SetGatewayOperationalState(v string) *GatewayInfo { + s.GatewayOperationalState = &v + return s +} + +// SetGatewayType sets the GatewayType field's value. +func (s *GatewayInfo) SetGatewayType(v string) *GatewayInfo { + s.GatewayType = &v + return s +} + +// JoinDomainInput +type JoinDomainInput struct { + _ struct{} `type:"structure"` + + // The name of the domain that you want the gateway to join. + // + // DomainName is a required field + DomainName *string `type:"string" required:"true"` + + // The unique Amazon Resource Name (ARN) of the file gateway you want to add + // to the Active Directory domain. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // Sets the password of the user who has permission to add the gateway to the + // Active Directory domain. 
+ // + // Password is a required field + Password *string `type:"string" required:"true"` + + // Sets the user name of user who has permission to add the gateway to the Active + // Directory domain. + // + // UserName is a required field + UserName *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s JoinDomainInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JoinDomainInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *JoinDomainInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "JoinDomainInput"} + if s.DomainName == nil { + invalidParams.Add(request.NewErrParamRequired("DomainName")) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.Password == nil { + invalidParams.Add(request.NewErrParamRequired("Password")) + } + if s.UserName == nil { + invalidParams.Add(request.NewErrParamRequired("UserName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDomainName sets the DomainName field's value. +func (s *JoinDomainInput) SetDomainName(v string) *JoinDomainInput { + s.DomainName = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *JoinDomainInput) SetGatewayARN(v string) *JoinDomainInput { + s.GatewayARN = &v + return s +} + +// SetPassword sets the Password field's value. +func (s *JoinDomainInput) SetPassword(v string) *JoinDomainInput { + s.Password = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *JoinDomainInput) SetUserName(v string) *JoinDomainInput { + s.UserName = &v + return s +} + +// JoinDomainOutput +type JoinDomainOutput struct { + _ struct{} `type:"structure"` + + // The unique Amazon Resource Name (ARN) of the gateway that joined the domain. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s JoinDomainOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s JoinDomainOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *JoinDomainOutput) SetGatewayARN(v string) *JoinDomainOutput { + s.GatewayARN = &v + return s +} + +// ListFileShareInput +type ListFileSharesInput struct { + _ struct{} `type:"structure"` + + // The Amazon resource Name (ARN) of the gateway whose file shares you want + // to list. If this field is not present, all file shares under your account + // are listed. + GatewayARN *string `min:"50" type:"string"` + + // The maximum number of file shares to return in the response. The value must + // be an integer with a value greater than zero. Optional. + Limit *int64 `min:"1" type:"integer"` + + // Opaque pagination token returned from a previous ListFileShares operation. + // If present, Marker specifies where to continue the list from after a previous + // call to ListFileShares. Optional. 
+ Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListFileSharesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListFileSharesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListFileSharesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListFileSharesInput"} + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *ListFileSharesInput) SetGatewayARN(v string) *ListFileSharesInput { + s.GatewayARN = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *ListFileSharesInput) SetLimit(v int64) *ListFileSharesInput { + s.Limit = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListFileSharesInput) SetMarker(v string) *ListFileSharesInput { + s.Marker = &v + return s +} + +// ListFileShareOutput +type ListFileSharesOutput struct { + _ struct{} `type:"structure"` + + // An array of information about the file gateway's file shares. + FileShareInfoList []*FileShareInfo `type:"list"` + + // If the request includes Marker, the response returns that value in this field. + Marker *string `min:"1" type:"string"` + + // If a value is present, there are more file shares to return. In a subsequent + // request, use NextMarker as the value for Marker to retrieve the next set + // of file shares. + NextMarker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListFileSharesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListFileSharesOutput) GoString() string { + return s.String() +} + +// SetFileShareInfoList sets the FileShareInfoList field's value. +func (s *ListFileSharesOutput) SetFileShareInfoList(v []*FileShareInfo) *ListFileSharesOutput { + s.FileShareInfoList = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListFileSharesOutput) SetMarker(v string) *ListFileSharesOutput { + s.Marker = &v + return s +} + +// SetNextMarker sets the NextMarker field's value. +func (s *ListFileSharesOutput) SetNextMarker(v string) *ListFileSharesOutput { + s.NextMarker = &v + return s +} + +// A JSON object containing zero or more of the following fields: +// +// * ListGatewaysInput$Limit +// +// * ListGatewaysInput$Marker +type ListGatewaysInput struct { + _ struct{} `type:"structure"` + + // Specifies that the list of gateways returned be limited to the specified + // number of items. + Limit *int64 `min:"1" type:"integer"` + + // An opaque string that indicates the position at which to begin the returned + // list of gateways. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListGatewaysInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGatewaysInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ListGatewaysInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListGatewaysInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLimit sets the Limit field's value. +func (s *ListGatewaysInput) SetLimit(v int64) *ListGatewaysInput { + s.Limit = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListGatewaysInput) SetMarker(v string) *ListGatewaysInput { + s.Marker = &v + return s +} + +type ListGatewaysOutput struct { + _ struct{} `type:"structure"` + + Gateways []*GatewayInfo `type:"list"` + + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListGatewaysOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListGatewaysOutput) GoString() string { + return s.String() +} + +// SetGateways sets the Gateways field's value. +func (s *ListGatewaysOutput) SetGateways(v []*GatewayInfo) *ListGatewaysOutput { + s.Gateways = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListGatewaysOutput) SetMarker(v string) *ListGatewaysOutput { + s.Marker = &v + return s +} + +// A JSON object containing the of the gateway. +type ListLocalDisksInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListLocalDisksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListLocalDisksInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListLocalDisksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListLocalDisksInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *ListLocalDisksInput) SetGatewayARN(v string) *ListLocalDisksInput { + s.GatewayARN = &v + return s +} + +type ListLocalDisksOutput struct { + _ struct{} `type:"structure"` + + Disks []*Disk `type:"list"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s ListLocalDisksOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListLocalDisksOutput) GoString() string { + return s.String() +} + +// SetDisks sets the Disks field's value. +func (s *ListLocalDisksOutput) SetDisks(v []*Disk) *ListLocalDisksOutput { + s.Disks = v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. 
+func (s *ListLocalDisksOutput) SetGatewayARN(v string) *ListLocalDisksOutput {
+	s.GatewayARN = &v
+	return s
+}
+
+// ListTagsForResourceInput
+type ListTagsForResourceInput struct {
+	_ struct{} `type:"structure"`
+
+	// Specifies that the list of tags returned be limited to the specified number
+	// of items.
+	Limit *int64 `min:"1" type:"integer"`
+
+	// An opaque string that indicates the position at which to begin returning
+	// the list of tags.
+	Marker *string `min:"1" type:"string"`
+
+	// The Amazon Resource Name (ARN) of the resource for which you want to list
+	// tags.
+	//
+	// ResourceARN is a required field
+	ResourceARN *string `min:"50" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s ListTagsForResourceInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListTagsForResourceInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *ListTagsForResourceInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "ListTagsForResourceInput"}
+	if s.Limit != nil && *s.Limit < 1 {
+		invalidParams.Add(request.NewErrParamMinValue("Limit", 1))
+	}
+	if s.Marker != nil && len(*s.Marker) < 1 {
+		invalidParams.Add(request.NewErrParamMinLen("Marker", 1))
+	}
+	if s.ResourceARN == nil {
+		invalidParams.Add(request.NewErrParamRequired("ResourceARN"))
+	}
+	if s.ResourceARN != nil && len(*s.ResourceARN) < 50 {
+		invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 50))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetLimit sets the Limit field's value.
+func (s *ListTagsForResourceInput) SetLimit(v int64) *ListTagsForResourceInput {
+	s.Limit = &v
+	return s
+}
+
+// SetMarker sets the Marker field's value.
+func (s *ListTagsForResourceInput) SetMarker(v string) *ListTagsForResourceInput {
+	s.Marker = &v
+	return s
+}
+
+// SetResourceARN sets the ResourceARN field's value.
+func (s *ListTagsForResourceInput) SetResourceARN(v string) *ListTagsForResourceInput {
+	s.ResourceARN = &v
+	return s
+}
+
+// ListTagsForResourceOutput
+type ListTagsForResourceOutput struct {
+	_ struct{} `type:"structure"`
+
+	// An opaque string that indicates the position at which to stop returning the
+	// list of tags.
+	Marker *string `min:"1" type:"string"`
+
+	// The Amazon Resource Name (ARN) of the resource for which you want to list
+	// tags.
+	ResourceARN *string `min:"50" type:"string"`
+
+	// An array that contains the tags for the specified resource.
+	Tags []*Tag `type:"list"`
+}
+
+// String returns the string representation
+func (s ListTagsForResourceOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListTagsForResourceOutput) GoString() string {
+	return s.String()
+}
+
+// SetMarker sets the Marker field's value.
+func (s *ListTagsForResourceOutput) SetMarker(v string) *ListTagsForResourceOutput {
+	s.Marker = &v
+	return s
+}
+
+// SetResourceARN sets the ResourceARN field's value.
+func (s *ListTagsForResourceOutput) SetResourceARN(v string) *ListTagsForResourceOutput {
+	s.ResourceARN = &v
+	return s
+}
+
+// SetTags sets the Tags field's value.
+func (s *ListTagsForResourceOutput) SetTags(v []*Tag) *ListTagsForResourceOutput {
+	s.Tags = v
+	return s
+}
+
+// A JSON object that contains one or more of the following fields:
+//
+//    * ListTapesInput$Limit
+//
+//    * ListTapesInput$Marker
+//
+//    * ListTapesInput$TapeARNs
+type ListTapesInput struct {
+	_ struct{} `type:"structure"`
+
+	// An optional number limit for the tapes in the list returned by this call.
+	Limit *int64 `min:"1" type:"integer"`
+
+	// A string that indicates the position at which to begin the returned list
+	// of tapes.
+	Marker *string `min:"1" type:"string"`
+
+	// The Amazon Resource Name (ARN) of each of the tapes you want to list. If
+	// you don't specify a tape ARN, the response lists all tapes in both your VTL
+	// and VTS.
+	TapeARNs []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s ListTapesInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListTapesInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *ListTapesInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "ListTapesInput"}
+	if s.Limit != nil && *s.Limit < 1 {
+		invalidParams.Add(request.NewErrParamMinValue("Limit", 1))
+	}
+	if s.Marker != nil && len(*s.Marker) < 1 {
+		invalidParams.Add(request.NewErrParamMinLen("Marker", 1))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetLimit sets the Limit field's value.
+func (s *ListTapesInput) SetLimit(v int64) *ListTapesInput {
+	s.Limit = &v
+	return s
+}
+
+// SetMarker sets the Marker field's value.
+func (s *ListTapesInput) SetMarker(v string) *ListTapesInput {
+	s.Marker = &v
+	return s
+}
+
+// SetTapeARNs sets the TapeARNs field's value.
+func (s *ListTapesInput) SetTapeARNs(v []*string) *ListTapesInput {
+	s.TapeARNs = v
+	return s
+}
+
+// A JSON object containing the following fields:
+//
+//    * ListTapesOutput$Marker
+//
+//    * ListTapesOutput$VolumeInfos
+type ListTapesOutput struct {
+	_ struct{} `type:"structure"`
+
+	// A string that indicates the position at which to begin returning the next
+	// list of tapes. Use the marker in your next request to continue pagination
+	// of tapes. If there are no more tapes to list, this element does not appear
+	// in the response body.
+	Marker *string `min:"1" type:"string"`
+
+	// An array of TapeInfo objects, where each object describes a single tape.
+	// If there are no tapes in the tape library or VTS, then TapeInfos is an empty
+	// array.
+	TapeInfos []*TapeInfo `type:"list"`
+}
+
+// String returns the string representation
+func (s ListTapesOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ListTapesOutput) GoString() string {
+	return s.String()
+}
+
+// SetMarker sets the Marker field's value.
+func (s *ListTapesOutput) SetMarker(v string) *ListTapesOutput {
+	s.Marker = &v
+	return s
+}
+
+// SetTapeInfos sets the TapeInfos field's value.
+func (s *ListTapesOutput) SetTapeInfos(v []*TapeInfo) *ListTapesOutput {
+	s.TapeInfos = v
+	return s
+}
+
+// ListVolumeInitiatorsInput
+type ListVolumeInitiatorsInput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the volume. Use the ListVolumes operation
+	// to return a list of gateway volumes for the gateway.
+ // + // VolumeARN is a required field + VolumeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListVolumeInitiatorsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVolumeInitiatorsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListVolumeInitiatorsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListVolumeInitiatorsInput"} + if s.VolumeARN == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARN")) + } + if s.VolumeARN != nil && len(*s.VolumeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("VolumeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *ListVolumeInitiatorsInput) SetVolumeARN(v string) *ListVolumeInitiatorsInput { + s.VolumeARN = &v + return s +} + +// ListVolumeInitiatorsOutput +type ListVolumeInitiatorsOutput struct { + _ struct{} `type:"structure"` + + // The host names and port numbers of all iSCSI initiators that are connected + // to the gateway. + Initiators []*string `type:"list"` +} + +// String returns the string representation +func (s ListVolumeInitiatorsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVolumeInitiatorsOutput) GoString() string { + return s.String() +} + +// SetInitiators sets the Initiators field's value. +func (s *ListVolumeInitiatorsOutput) SetInitiators(v []*string) *ListVolumeInitiatorsOutput { + s.Initiators = v + return s +} + +type ListVolumeRecoveryPointsInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListVolumeRecoveryPointsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVolumeRecoveryPointsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListVolumeRecoveryPointsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListVolumeRecoveryPointsInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *ListVolumeRecoveryPointsInput) SetGatewayARN(v string) *ListVolumeRecoveryPointsInput { + s.GatewayARN = &v + return s +} + +type ListVolumeRecoveryPointsOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. 
+ GatewayARN *string `min:"50" type:"string"` + + VolumeRecoveryPointInfos []*VolumeRecoveryPointInfo `type:"list"` +} + +// String returns the string representation +func (s ListVolumeRecoveryPointsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVolumeRecoveryPointsOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *ListVolumeRecoveryPointsOutput) SetGatewayARN(v string) *ListVolumeRecoveryPointsOutput { + s.GatewayARN = &v + return s +} + +// SetVolumeRecoveryPointInfos sets the VolumeRecoveryPointInfos field's value. +func (s *ListVolumeRecoveryPointsOutput) SetVolumeRecoveryPointInfos(v []*VolumeRecoveryPointInfo) *ListVolumeRecoveryPointsOutput { + s.VolumeRecoveryPointInfos = v + return s +} + +// A JSON object that contains one or more of the following fields: +// +// * ListVolumesInput$Limit +// +// * ListVolumesInput$Marker +type ListVolumesInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // Specifies that the list of volumes returned be limited to the specified number + // of items. + Limit *int64 `min:"1" type:"integer"` + + // A string that indicates the position at which to begin the returned list + // of volumes. Obtain the marker from the response of a previous List iSCSI + // Volumes request. + Marker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListVolumesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVolumesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListVolumesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListVolumesInput"} + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.Marker != nil && len(*s.Marker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Marker", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *ListVolumesInput) SetGatewayARN(v string) *ListVolumesInput { + s.GatewayARN = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *ListVolumesInput) SetLimit(v int64) *ListVolumesInput { + s.Limit = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListVolumesInput) SetMarker(v string) *ListVolumesInput { + s.Marker = &v + return s +} + +type ListVolumesOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + Marker *string `min:"1" type:"string"` + + VolumeInfos []*VolumeInfo `type:"list"` +} + +// String returns the string representation +func (s ListVolumesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListVolumesOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. 
+func (s *ListVolumesOutput) SetGatewayARN(v string) *ListVolumesOutput { + s.GatewayARN = &v + return s +} + +// SetMarker sets the Marker field's value. +func (s *ListVolumesOutput) SetMarker(v string) *ListVolumesOutput { + s.Marker = &v + return s +} + +// SetVolumeInfos sets the VolumeInfos field's value. +func (s *ListVolumesOutput) SetVolumeInfos(v []*VolumeInfo) *ListVolumesOutput { + s.VolumeInfos = v + return s +} + +// Describes Network File System (NFS) file share default values. Files and +// folders stored as Amazon S3 objects in S3 buckets don't, by default, have +// Unix file permissions assigned to them. Upon discovery in an S3 bucket by +// Storage Gateway, the S3 objects that represent files and folders are assigned +// these default Unix permissions. This operation is only supported for file +// gateways. +type NFSFileShareDefaults struct { + _ struct{} `type:"structure"` + + // The Unix directory mode in the form "nnnn". For example, "0666" represents + // the default access mode for all directories inside the file share. The default + // value is 0777. + DirectoryMode *string `min:"1" type:"string"` + + // The Unix file mode in the form "nnnn". For example, "0666" represents the + // default file mode inside the file share. The default value is 0666. + FileMode *string `min:"1" type:"string"` + + // The default group ID for the file share (unless the files have another group + // ID specified). The default value is nfsnobody. + GroupId *int64 `type:"long"` + + // The default owner ID for files in the file share (unless the files have another + // owner ID specified). The default value is nfsnobody. + OwnerId *int64 `type:"long"` +} + +// String returns the string representation +func (s NFSFileShareDefaults) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NFSFileShareDefaults) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *NFSFileShareDefaults) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NFSFileShareDefaults"} + if s.DirectoryMode != nil && len(*s.DirectoryMode) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DirectoryMode", 1)) + } + if s.FileMode != nil && len(*s.FileMode) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FileMode", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDirectoryMode sets the DirectoryMode field's value. +func (s *NFSFileShareDefaults) SetDirectoryMode(v string) *NFSFileShareDefaults { + s.DirectoryMode = &v + return s +} + +// SetFileMode sets the FileMode field's value. +func (s *NFSFileShareDefaults) SetFileMode(v string) *NFSFileShareDefaults { + s.FileMode = &v + return s +} + +// SetGroupId sets the GroupId field's value. +func (s *NFSFileShareDefaults) SetGroupId(v int64) *NFSFileShareDefaults { + s.GroupId = &v + return s +} + +// SetOwnerId sets the OwnerId field's value. +func (s *NFSFileShareDefaults) SetOwnerId(v int64) *NFSFileShareDefaults { + s.OwnerId = &v + return s +} + +// The Unix file permissions and ownership information assigned, by default, +// to native S3 objects when file gateway discovers them in S3 buckets. This +// operation is only supported in file gateways. +type NFSFileShareInfo struct { + _ struct{} `type:"structure"` + + // The list of clients that are allowed to access the file gateway. The list + // must contain either valid IP addresses or valid CIDR blocks. 
+ ClientList []*string `min:"1" type:"list"` + + // The default storage class for objects put into an Amazon S3 bucket by the + // file gateway. Possible values are S3_STANDARD, S3_STANDARD_IA, or S3_ONEZONE_IA. + // If this field is not populated, the default value S3_STANDARD is used. Optional. + DefaultStorageClass *string `min:"5" type:"string"` + + // The Amazon Resource Name (ARN) of the file share. + FileShareARN *string `min:"50" type:"string"` + + // The ID of the file share. + FileShareId *string `min:"12" type:"string"` + + // The status of the file share. Possible values are CREATING, UPDATING, AVAILABLE + // and DELETING. + FileShareStatus *string `min:"3" type:"string"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // A value that enables guessing of the MIME type for uploaded objects based + // on file extensions. Set this value to true to enable MIME type guessing, + // and otherwise to false. The default value is true. + GuessMIMETypeEnabled *bool `type:"boolean"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The ARN of the backend storage used for storing file data. + LocationARN *string `min:"16" type:"string"` + + // Describes Network File System (NFS) file share default values. Files and + // folders stored as Amazon S3 objects in S3 buckets don't, by default, have + // Unix file permissions assigned to them. Upon discovery in an S3 bucket by + // Storage Gateway, the S3 objects that represent files and folders are assigned + // these default Unix permissions. This operation is only supported for file + // gateways. + NFSFileShareDefaults *NFSFileShareDefaults `type:"structure"` + + // A value that sets the access control list permission for objects in the S3 + // bucket that a file gateway puts objects into. The default value is "private". + ObjectACL *string `type:"string" enum:"ObjectACL"` + + // The file share path used by the NFS client to identify the mount point. + Path *string `type:"string"` + + // A value that sets the write status of a file share. This value is true if + // the write status is read-only, and otherwise false. + ReadOnly *bool `type:"boolean"` + + // A value that sets the access control list permission for objects in the Amazon + // S3 bucket that a file gateway puts objects into. The default value is private. + RequesterPays *bool `type:"boolean"` + + // The ARN of the IAM role that file gateway assumes when it accesses the underlying + // storage. + Role *string `min:"20" type:"string"` + + // The user mapped to anonymous user. Valid options are the following: + // + // * RootSquash - Only root is mapped to anonymous user. + // + // * NoSquash - No one is mapped to anonymous user + // + // * AllSquash - Everyone is mapped to anonymous user. 
+ Squash *string `min:"5" type:"string"` +} + +// String returns the string representation +func (s NFSFileShareInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NFSFileShareInfo) GoString() string { + return s.String() +} + +// SetClientList sets the ClientList field's value. +func (s *NFSFileShareInfo) SetClientList(v []*string) *NFSFileShareInfo { + s.ClientList = v + return s +} + +// SetDefaultStorageClass sets the DefaultStorageClass field's value. +func (s *NFSFileShareInfo) SetDefaultStorageClass(v string) *NFSFileShareInfo { + s.DefaultStorageClass = &v + return s +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *NFSFileShareInfo) SetFileShareARN(v string) *NFSFileShareInfo { + s.FileShareARN = &v + return s +} + +// SetFileShareId sets the FileShareId field's value. +func (s *NFSFileShareInfo) SetFileShareId(v string) *NFSFileShareInfo { + s.FileShareId = &v + return s +} + +// SetFileShareStatus sets the FileShareStatus field's value. +func (s *NFSFileShareInfo) SetFileShareStatus(v string) *NFSFileShareInfo { + s.FileShareStatus = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *NFSFileShareInfo) SetGatewayARN(v string) *NFSFileShareInfo { + s.GatewayARN = &v + return s +} + +// SetGuessMIMETypeEnabled sets the GuessMIMETypeEnabled field's value. +func (s *NFSFileShareInfo) SetGuessMIMETypeEnabled(v bool) *NFSFileShareInfo { + s.GuessMIMETypeEnabled = &v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *NFSFileShareInfo) SetKMSEncrypted(v bool) *NFSFileShareInfo { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *NFSFileShareInfo) SetKMSKey(v string) *NFSFileShareInfo { + s.KMSKey = &v + return s +} + +// SetLocationARN sets the LocationARN field's value. +func (s *NFSFileShareInfo) SetLocationARN(v string) *NFSFileShareInfo { + s.LocationARN = &v + return s +} + +// SetNFSFileShareDefaults sets the NFSFileShareDefaults field's value. +func (s *NFSFileShareInfo) SetNFSFileShareDefaults(v *NFSFileShareDefaults) *NFSFileShareInfo { + s.NFSFileShareDefaults = v + return s +} + +// SetObjectACL sets the ObjectACL field's value. +func (s *NFSFileShareInfo) SetObjectACL(v string) *NFSFileShareInfo { + s.ObjectACL = &v + return s +} + +// SetPath sets the Path field's value. +func (s *NFSFileShareInfo) SetPath(v string) *NFSFileShareInfo { + s.Path = &v + return s +} + +// SetReadOnly sets the ReadOnly field's value. +func (s *NFSFileShareInfo) SetReadOnly(v bool) *NFSFileShareInfo { + s.ReadOnly = &v + return s +} + +// SetRequesterPays sets the RequesterPays field's value. +func (s *NFSFileShareInfo) SetRequesterPays(v bool) *NFSFileShareInfo { + s.RequesterPays = &v + return s +} + +// SetRole sets the Role field's value. +func (s *NFSFileShareInfo) SetRole(v string) *NFSFileShareInfo { + s.Role = &v + return s +} + +// SetSquash sets the Squash field's value. +func (s *NFSFileShareInfo) SetSquash(v string) *NFSFileShareInfo { + s.Squash = &v + return s +} + +// Describes a gateway's network interface. +type NetworkInterface struct { + _ struct{} `type:"structure"` + + // The Internet Protocol version 4 (IPv4) address of the interface. + Ipv4Address *string `type:"string"` + + // The Internet Protocol version 6 (IPv6) address of the interface. Currently + // not supported. + Ipv6Address *string `type:"string"` + + // The Media Access Control (MAC) address of the interface. 
+ // + // This is currently unsupported and will not be returned in output. + MacAddress *string `type:"string"` +} + +// String returns the string representation +func (s NetworkInterface) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NetworkInterface) GoString() string { + return s.String() +} + +// SetIpv4Address sets the Ipv4Address field's value. +func (s *NetworkInterface) SetIpv4Address(v string) *NetworkInterface { + s.Ipv4Address = &v + return s +} + +// SetIpv6Address sets the Ipv6Address field's value. +func (s *NetworkInterface) SetIpv6Address(v string) *NetworkInterface { + s.Ipv6Address = &v + return s +} + +// SetMacAddress sets the MacAddress field's value. +func (s *NetworkInterface) SetMacAddress(v string) *NetworkInterface { + s.MacAddress = &v + return s +} + +type NotifyWhenUploadedInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the file share. + // + // FileShareARN is a required field + FileShareARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s NotifyWhenUploadedInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NotifyWhenUploadedInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *NotifyWhenUploadedInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "NotifyWhenUploadedInput"} + if s.FileShareARN == nil { + invalidParams.Add(request.NewErrParamRequired("FileShareARN")) + } + if s.FileShareARN != nil && len(*s.FileShareARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("FileShareARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *NotifyWhenUploadedInput) SetFileShareARN(v string) *NotifyWhenUploadedInput { + s.FileShareARN = &v + return s +} + +type NotifyWhenUploadedOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the file share. + FileShareARN *string `min:"50" type:"string"` + + // The randomly generated ID of the notification that was sent. This ID is in + // UUID format. + NotificationId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s NotifyWhenUploadedOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s NotifyWhenUploadedOutput) GoString() string { + return s.String() +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *NotifyWhenUploadedOutput) SetFileShareARN(v string) *NotifyWhenUploadedOutput { + s.FileShareARN = &v + return s +} + +// SetNotificationId sets the NotificationId field's value. +func (s *NotifyWhenUploadedOutput) SetNotificationId(v string) *NotifyWhenUploadedOutput { + s.NotificationId = &v + return s +} + +type RefreshCacheInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the file share. + // + // FileShareARN is a required field + FileShareARN *string `min:"50" type:"string" required:"true"` + + // A comma-separated list of the paths of folders to refresh in the cache. The + // default is ["/"]. The default refreshes objects and folders at the root of + // the Amazon S3 bucket. If Recursive is set to "true", the entire S3 bucket + // that the file share has access to is refreshed. 
+ FolderList []*string `min:"1" type:"list"` + + // A value that specifies whether to recursively refresh folders in the cache. + // The refresh includes folders that were in the cache the last time the gateway + // listed the folder's contents. If this value set to "true", each folder that + // is listed in FolderList is recursively updated. Otherwise, subfolders listed + // in FolderList are not refreshed. Only objects that are in folders listed + // directly under FolderList are found and used for the update. The default + // is "true". + Recursive *bool `type:"boolean"` +} + +// String returns the string representation +func (s RefreshCacheInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RefreshCacheInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RefreshCacheInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RefreshCacheInput"} + if s.FileShareARN == nil { + invalidParams.Add(request.NewErrParamRequired("FileShareARN")) + } + if s.FileShareARN != nil && len(*s.FileShareARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("FileShareARN", 50)) + } + if s.FolderList != nil && len(s.FolderList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FolderList", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *RefreshCacheInput) SetFileShareARN(v string) *RefreshCacheInput { + s.FileShareARN = &v + return s +} + +// SetFolderList sets the FolderList field's value. +func (s *RefreshCacheInput) SetFolderList(v []*string) *RefreshCacheInput { + s.FolderList = v + return s +} + +// SetRecursive sets the Recursive field's value. +func (s *RefreshCacheInput) SetRecursive(v bool) *RefreshCacheInput { + s.Recursive = &v + return s +} + +// RefreshCacheOutput +type RefreshCacheOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the file share. + FileShareARN *string `min:"50" type:"string"` + + // The randomly generated ID of the notification that was sent. This ID is in + // UUID format. + NotificationId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s RefreshCacheOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RefreshCacheOutput) GoString() string { + return s.String() +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *RefreshCacheOutput) SetFileShareARN(v string) *RefreshCacheOutput { + s.FileShareARN = &v + return s +} + +// SetNotificationId sets the NotificationId field's value. +func (s *RefreshCacheOutput) SetNotificationId(v string) *RefreshCacheOutput { + s.NotificationId = &v + return s +} + +// RemoveTagsFromResourceInput +type RemoveTagsFromResourceInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource you want to remove the tags + // from. + // + // ResourceARN is a required field + ResourceARN *string `min:"50" type:"string" required:"true"` + + // The keys of the tags you want to remove from the specified resource. A tag + // is composed of a key/value pair. 
+ // + // TagKeys is a required field + TagKeys []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s RemoveTagsFromResourceInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveTagsFromResourceInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RemoveTagsFromResourceInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RemoveTagsFromResourceInput"} + if s.ResourceARN == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceARN")) + } + if s.ResourceARN != nil && len(*s.ResourceARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("ResourceARN", 50)) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *RemoveTagsFromResourceInput) SetResourceARN(v string) *RemoveTagsFromResourceInput { + s.ResourceARN = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *RemoveTagsFromResourceInput) SetTagKeys(v []*string) *RemoveTagsFromResourceInput { + s.TagKeys = v + return s +} + +// RemoveTagsFromResourceOutput +type RemoveTagsFromResourceOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the resource that the tags were removed + // from. + ResourceARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s RemoveTagsFromResourceOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveTagsFromResourceOutput) GoString() string { + return s.String() +} + +// SetResourceARN sets the ResourceARN field's value. +func (s *RemoveTagsFromResourceOutput) SetResourceARN(v string) *RemoveTagsFromResourceOutput { + s.ResourceARN = &v + return s +} + +type ResetCacheInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s ResetCacheInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetCacheInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ResetCacheInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResetCacheInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *ResetCacheInput) SetGatewayARN(v string) *ResetCacheInput { + s.GatewayARN = &v + return s +} + +type ResetCacheOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. 
+ GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s ResetCacheOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetCacheOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *ResetCacheOutput) SetGatewayARN(v string) *ResetCacheOutput { + s.GatewayARN = &v + return s +} + +// RetrieveTapeArchiveInput +type RetrieveTapeArchiveInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway you want to retrieve the virtual + // tape to. Use the ListGateways operation to return a list of gateways for + // your account and region. + // + // You retrieve archived virtual tapes to only one gateway and the gateway must + // be a tape gateway. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the virtual tape you want to retrieve from + // the virtual tape shelf (VTS). + // + // TapeARN is a required field + TapeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s RetrieveTapeArchiveInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RetrieveTapeArchiveInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RetrieveTapeArchiveInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RetrieveTapeArchiveInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.TapeARN == nil { + invalidParams.Add(request.NewErrParamRequired("TapeARN")) + } + if s.TapeARN != nil && len(*s.TapeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TapeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *RetrieveTapeArchiveInput) SetGatewayARN(v string) *RetrieveTapeArchiveInput { + s.GatewayARN = &v + return s +} + +// SetTapeARN sets the TapeARN field's value. +func (s *RetrieveTapeArchiveInput) SetTapeARN(v string) *RetrieveTapeArchiveInput { + s.TapeARN = &v + return s +} + +// RetrieveTapeArchiveOutput +type RetrieveTapeArchiveOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the retrieved virtual tape. + TapeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s RetrieveTapeArchiveOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RetrieveTapeArchiveOutput) GoString() string { + return s.String() +} + +// SetTapeARN sets the TapeARN field's value. +func (s *RetrieveTapeArchiveOutput) SetTapeARN(v string) *RetrieveTapeArchiveOutput { + s.TapeARN = &v + return s +} + +// RetrieveTapeRecoveryPointInput +type RetrieveTapeRecoveryPointInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. 
+ // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the virtual tape for which you want to + // retrieve the recovery point. + // + // TapeARN is a required field + TapeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s RetrieveTapeRecoveryPointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RetrieveTapeRecoveryPointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RetrieveTapeRecoveryPointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RetrieveTapeRecoveryPointInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.TapeARN == nil { + invalidParams.Add(request.NewErrParamRequired("TapeARN")) + } + if s.TapeARN != nil && len(*s.TapeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TapeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *RetrieveTapeRecoveryPointInput) SetGatewayARN(v string) *RetrieveTapeRecoveryPointInput { + s.GatewayARN = &v + return s +} + +// SetTapeARN sets the TapeARN field's value. +func (s *RetrieveTapeRecoveryPointInput) SetTapeARN(v string) *RetrieveTapeRecoveryPointInput { + s.TapeARN = &v + return s +} + +// RetrieveTapeRecoveryPointOutput +type RetrieveTapeRecoveryPointOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the virtual tape for which the recovery + // point was retrieved. + TapeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s RetrieveTapeRecoveryPointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RetrieveTapeRecoveryPointOutput) GoString() string { + return s.String() +} + +// SetTapeARN sets the TapeARN field's value. +func (s *RetrieveTapeRecoveryPointOutput) SetTapeARN(v string) *RetrieveTapeRecoveryPointOutput { + s.TapeARN = &v + return s +} + +// The Windows file permissions and ownership information assigned, by default, +// to native S3 objects when file gateway discovers them in S3 buckets. This +// operation is only supported for file gateways. +type SMBFileShareInfo struct { + _ struct{} `type:"structure"` + + // The authentication method of the file share. + // + // Valid values are ActiveDirectory or GuestAccess. The default is ActiveDirectory. + Authentication *string `min:"5" type:"string"` + + // The default storage class for objects put into an Amazon S3 bucket by the + // file gateway. Possible values are S3_STANDARD, S3_STANDARD_IA, or S3_ONEZONE_IA. + // If this field is not populated, the default value S3_STANDARD is used. Optional. + DefaultStorageClass *string `min:"5" type:"string"` + + // The Amazon Resource Name (ARN) of the file share. + FileShareARN *string `min:"50" type:"string"` + + // The ID of the file share. + FileShareId *string `min:"12" type:"string"` + + // The status of the file share. Possible values are CREATING, UPDATING, AVAILABLE + // and DELETING. 
+ FileShareStatus *string `min:"3" type:"string"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // A value that enables guessing of the MIME type for uploaded objects based + // on file extensions. Set this value to true to enable MIME type guessing, + // and otherwise to false. The default value is true. + GuessMIMETypeEnabled *bool `type:"boolean"` + + // A list of users or groups in the Active Directory that are not allowed to + // access the file share. A group must be prefixed with the @ character. For + // example @group1. Can only be set if Authentication is set to ActiveDirectory. + InvalidUserList []*string `type:"list"` + + // True to use Amazon S3 server-side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The ARN of the backend storage used for storing file data. + LocationARN *string `min:"16" type:"string"` + + // A value that sets the access control list permission for objects in the S3 + // bucket that a file gateway puts objects into. The default value is "private". + ObjectACL *string `type:"string" enum:"ObjectACL"` + + // The file share path used by the SMB client to identify the mount point. + Path *string `type:"string"` + + // A value that sets the write status of a file share. This value is true if + // the write status is read-only, and otherwise false. + ReadOnly *bool `type:"boolean"` + + // A value that sets the access control list permission for objects in the Amazon + // S3 bucket that a file gateway puts objects into. The default value is private. + RequesterPays *bool `type:"boolean"` + + // The ARN of the IAM role that file gateway assumes when it accesses the underlying + // storage. + Role *string `min:"20" type:"string"` + + // A list of users or groups in the Active Directory that are allowed to access + // the file share. A group must be prefixed with the @ character. For example + // @group1. Can only be set if Authentication is set to ActiveDirectory. + ValidUserList []*string `type:"list"` +} + +// String returns the string representation +func (s SMBFileShareInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SMBFileShareInfo) GoString() string { + return s.String() +} + +// SetAuthentication sets the Authentication field's value. +func (s *SMBFileShareInfo) SetAuthentication(v string) *SMBFileShareInfo { + s.Authentication = &v + return s +} + +// SetDefaultStorageClass sets the DefaultStorageClass field's value. +func (s *SMBFileShareInfo) SetDefaultStorageClass(v string) *SMBFileShareInfo { + s.DefaultStorageClass = &v + return s +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *SMBFileShareInfo) SetFileShareARN(v string) *SMBFileShareInfo { + s.FileShareARN = &v + return s +} + +// SetFileShareId sets the FileShareId field's value. +func (s *SMBFileShareInfo) SetFileShareId(v string) *SMBFileShareInfo { + s.FileShareId = &v + return s +} + +// SetFileShareStatus sets the FileShareStatus field's value. 
+func (s *SMBFileShareInfo) SetFileShareStatus(v string) *SMBFileShareInfo { + s.FileShareStatus = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *SMBFileShareInfo) SetGatewayARN(v string) *SMBFileShareInfo { + s.GatewayARN = &v + return s +} + +// SetGuessMIMETypeEnabled sets the GuessMIMETypeEnabled field's value. +func (s *SMBFileShareInfo) SetGuessMIMETypeEnabled(v bool) *SMBFileShareInfo { + s.GuessMIMETypeEnabled = &v + return s +} + +// SetInvalidUserList sets the InvalidUserList field's value. +func (s *SMBFileShareInfo) SetInvalidUserList(v []*string) *SMBFileShareInfo { + s.InvalidUserList = v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *SMBFileShareInfo) SetKMSEncrypted(v bool) *SMBFileShareInfo { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *SMBFileShareInfo) SetKMSKey(v string) *SMBFileShareInfo { + s.KMSKey = &v + return s +} + +// SetLocationARN sets the LocationARN field's value. +func (s *SMBFileShareInfo) SetLocationARN(v string) *SMBFileShareInfo { + s.LocationARN = &v + return s +} + +// SetObjectACL sets the ObjectACL field's value. +func (s *SMBFileShareInfo) SetObjectACL(v string) *SMBFileShareInfo { + s.ObjectACL = &v + return s +} + +// SetPath sets the Path field's value. +func (s *SMBFileShareInfo) SetPath(v string) *SMBFileShareInfo { + s.Path = &v + return s +} + +// SetReadOnly sets the ReadOnly field's value. +func (s *SMBFileShareInfo) SetReadOnly(v bool) *SMBFileShareInfo { + s.ReadOnly = &v + return s +} + +// SetRequesterPays sets the RequesterPays field's value. +func (s *SMBFileShareInfo) SetRequesterPays(v bool) *SMBFileShareInfo { + s.RequesterPays = &v + return s +} + +// SetRole sets the Role field's value. +func (s *SMBFileShareInfo) SetRole(v string) *SMBFileShareInfo { + s.Role = &v + return s +} + +// SetValidUserList sets the ValidUserList field's value. +func (s *SMBFileShareInfo) SetValidUserList(v []*string) *SMBFileShareInfo { + s.ValidUserList = v + return s +} + +// SetLocalConsolePasswordInput +type SetLocalConsolePasswordInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The password you want to set for your VM local console. + // + // LocalConsolePassword is a required field + LocalConsolePassword *string `min:"6" type:"string" required:"true"` +} + +// String returns the string representation +func (s SetLocalConsolePasswordInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetLocalConsolePasswordInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *SetLocalConsolePasswordInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetLocalConsolePasswordInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.LocalConsolePassword == nil { + invalidParams.Add(request.NewErrParamRequired("LocalConsolePassword")) + } + if s.LocalConsolePassword != nil && len(*s.LocalConsolePassword) < 6 { + invalidParams.Add(request.NewErrParamMinLen("LocalConsolePassword", 6)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *SetLocalConsolePasswordInput) SetGatewayARN(v string) *SetLocalConsolePasswordInput { + s.GatewayARN = &v + return s +} + +// SetLocalConsolePassword sets the LocalConsolePassword field's value. +func (s *SetLocalConsolePasswordInput) SetLocalConsolePassword(v string) *SetLocalConsolePasswordInput { + s.LocalConsolePassword = &v + return s +} + +type SetLocalConsolePasswordOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s SetLocalConsolePasswordOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetLocalConsolePasswordOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *SetLocalConsolePasswordOutput) SetGatewayARN(v string) *SetLocalConsolePasswordOutput { + s.GatewayARN = &v + return s +} + +// SetSMBGuestPasswordInput +type SetSMBGuestPasswordInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the file gateway the SMB file share is + // associated with. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The password that you want to set for your SMB Server. + // + // Password is a required field + Password *string `min:"6" type:"string" required:"true"` +} + +// String returns the string representation +func (s SetSMBGuestPasswordInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetSMBGuestPasswordInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SetSMBGuestPasswordInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SetSMBGuestPasswordInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.Password == nil { + invalidParams.Add(request.NewErrParamRequired("Password")) + } + if s.Password != nil && len(*s.Password) < 6 { + invalidParams.Add(request.NewErrParamMinLen("Password", 6)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *SetSMBGuestPasswordInput) SetGatewayARN(v string) *SetSMBGuestPasswordInput { + s.GatewayARN = &v + return s +} + +// SetPassword sets the Password field's value. 
+func (s *SetSMBGuestPasswordInput) SetPassword(v string) *SetSMBGuestPasswordInput {
+	s.Password = &v
+	return s
+}
+
+type SetSMBGuestPasswordOutput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+	GatewayARN *string `min:"50" type:"string"`
+}
+
+// String returns the string representation
+func (s SetSMBGuestPasswordOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s SetSMBGuestPasswordOutput) GoString() string {
+	return s.String()
+}
+
+// SetGatewayARN sets the GatewayARN field's value.
+func (s *SetSMBGuestPasswordOutput) SetGatewayARN(v string) *SetSMBGuestPasswordOutput {
+	s.GatewayARN = &v
+	return s
+}
+
+// A JSON object containing the Amazon Resource Name (ARN) of the gateway to
+// shut down.
+type ShutdownGatewayInput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+	//
+	// GatewayARN is a required field
+	GatewayARN *string `min:"50" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s ShutdownGatewayInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ShutdownGatewayInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *ShutdownGatewayInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "ShutdownGatewayInput"}
+	if s.GatewayARN == nil {
+		invalidParams.Add(request.NewErrParamRequired("GatewayARN"))
+	}
+	if s.GatewayARN != nil && len(*s.GatewayARN) < 50 {
+		invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetGatewayARN sets the GatewayARN field's value.
+func (s *ShutdownGatewayInput) SetGatewayARN(v string) *ShutdownGatewayInput {
+	s.GatewayARN = &v
+	return s
+}
+
+// A JSON object containing the Amazon Resource Name (ARN) of the gateway that
+// was shut down.
+type ShutdownGatewayOutput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+	GatewayARN *string `min:"50" type:"string"`
+}
+
+// String returns the string representation
+func (s ShutdownGatewayOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ShutdownGatewayOutput) GoString() string {
+	return s.String()
+}
+
+// SetGatewayARN sets the GatewayARN field's value.
+func (s *ShutdownGatewayOutput) SetGatewayARN(v string) *ShutdownGatewayOutput {
+	s.GatewayARN = &v
+	return s
+}
+
+// A JSON object containing the Amazon Resource Name (ARN) of the gateway to
+// start.
+type StartGatewayInput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+	//
+	// GatewayARN is a required field
+	GatewayARN *string `min:"50" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s StartGatewayInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StartGatewayInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *StartGatewayInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "StartGatewayInput"}
+	if s.GatewayARN == nil {
+		invalidParams.Add(request.NewErrParamRequired("GatewayARN"))
+	}
+	if s.GatewayARN != nil && len(*s.GatewayARN) < 50 {
+		invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetGatewayARN sets the GatewayARN field's value.
+func (s *StartGatewayInput) SetGatewayARN(v string) *StartGatewayInput {
+	s.GatewayARN = &v
+	return s
+}
+
+// A JSON object containing the Amazon Resource Name (ARN) of the gateway that
+// was restarted.
+type StartGatewayOutput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+	GatewayARN *string `min:"50" type:"string"`
+}
+
+// String returns the string representation
+func (s StartGatewayOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StartGatewayOutput) GoString() string {
+	return s.String()
+}
+
+// SetGatewayARN sets the GatewayARN field's value.
+func (s *StartGatewayOutput) SetGatewayARN(v string) *StartGatewayOutput {
+	s.GatewayARN = &v
+	return s
+}
+
+// Describes an iSCSI stored volume.
+type StorediSCSIVolume struct {
+	_ struct{} `type:"structure"`
+
+	// The date the volume was created. Volumes created prior to March 28, 2017
+	// don’t have this time stamp.
+	CreatedDate *time.Time `type:"timestamp"`
+
+	// The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server
+	// side encryption. This value can only be set when KMSEncrypted is true. Optional.
+	KMSKey *string `min:"20" type:"string"`
+
+	// Indicates whether existing data on the underlying local disk was preserved
+	// when the stored volume was created.
+	//
+	// Valid Values: true, false
+	PreservedExistingData *bool `type:"boolean"`
+
+	// If the stored volume was created from a snapshot, this field contains the
+	// snapshot ID used, e.g. snap-78e22663. Otherwise, this field is not included.
+	SourceSnapshotId *string `type:"string"`
+
+	// The Amazon Resource Name (ARN) of the storage volume.
+	VolumeARN *string `min:"50" type:"string"`
+
+	// The ID of the local disk that was specified in the CreateStorediSCSIVolume
+	// operation.
+	VolumeDiskId *string `min:"1" type:"string"`
+
+	// The unique identifier of the volume, e.g. vol-AE4B946D.
+	VolumeId *string `min:"12" type:"string"`
+
+	// Represents the percentage complete if the volume is restoring or bootstrapping,
+	// and represents the percent of data transferred. This field does not appear
+	// in the response if the stored volume is not restoring or bootstrapping.
+	VolumeProgress *float64 `type:"double"`
+
+	// The size of the volume in bytes.
+	VolumeSizeInBytes *int64 `type:"long"`
+
+	// One of the VolumeStatus values that indicates the state of the storage volume.
+ VolumeStatus *string `min:"3" type:"string"` + + // One of the VolumeType enumeration values describing the type of the volume. + VolumeType *string `min:"3" type:"string"` + + // The size of the data stored on the volume in bytes. + // + // This value is not available for volumes created prior to May 13, 2015, until + // you store data on the volume. + VolumeUsedInBytes *int64 `type:"long"` + + // An VolumeiSCSIAttributes object that represents a collection of iSCSI attributes + // for one stored volume. + VolumeiSCSIAttributes *VolumeiSCSIAttributes `type:"structure"` +} + +// String returns the string representation +func (s StorediSCSIVolume) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StorediSCSIVolume) GoString() string { + return s.String() +} + +// SetCreatedDate sets the CreatedDate field's value. +func (s *StorediSCSIVolume) SetCreatedDate(v time.Time) *StorediSCSIVolume { + s.CreatedDate = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *StorediSCSIVolume) SetKMSKey(v string) *StorediSCSIVolume { + s.KMSKey = &v + return s +} + +// SetPreservedExistingData sets the PreservedExistingData field's value. +func (s *StorediSCSIVolume) SetPreservedExistingData(v bool) *StorediSCSIVolume { + s.PreservedExistingData = &v + return s +} + +// SetSourceSnapshotId sets the SourceSnapshotId field's value. +func (s *StorediSCSIVolume) SetSourceSnapshotId(v string) *StorediSCSIVolume { + s.SourceSnapshotId = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *StorediSCSIVolume) SetVolumeARN(v string) *StorediSCSIVolume { + s.VolumeARN = &v + return s +} + +// SetVolumeDiskId sets the VolumeDiskId field's value. +func (s *StorediSCSIVolume) SetVolumeDiskId(v string) *StorediSCSIVolume { + s.VolumeDiskId = &v + return s +} + +// SetVolumeId sets the VolumeId field's value. +func (s *StorediSCSIVolume) SetVolumeId(v string) *StorediSCSIVolume { + s.VolumeId = &v + return s +} + +// SetVolumeProgress sets the VolumeProgress field's value. +func (s *StorediSCSIVolume) SetVolumeProgress(v float64) *StorediSCSIVolume { + s.VolumeProgress = &v + return s +} + +// SetVolumeSizeInBytes sets the VolumeSizeInBytes field's value. +func (s *StorediSCSIVolume) SetVolumeSizeInBytes(v int64) *StorediSCSIVolume { + s.VolumeSizeInBytes = &v + return s +} + +// SetVolumeStatus sets the VolumeStatus field's value. +func (s *StorediSCSIVolume) SetVolumeStatus(v string) *StorediSCSIVolume { + s.VolumeStatus = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. +func (s *StorediSCSIVolume) SetVolumeType(v string) *StorediSCSIVolume { + s.VolumeType = &v + return s +} + +// SetVolumeUsedInBytes sets the VolumeUsedInBytes field's value. +func (s *StorediSCSIVolume) SetVolumeUsedInBytes(v int64) *StorediSCSIVolume { + s.VolumeUsedInBytes = &v + return s +} + +// SetVolumeiSCSIAttributes sets the VolumeiSCSIAttributes field's value. 
+func (s *StorediSCSIVolume) SetVolumeiSCSIAttributes(v *VolumeiSCSIAttributes) *StorediSCSIVolume { + s.VolumeiSCSIAttributes = v + return s +} + +type Tag struct { + _ struct{} `type:"structure"` + + // Key is a required field + Key *string `min:"1" type:"string" required:"true"` + + // Value is a required field + Value *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s Tag) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tag) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Tag) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Tag"} + if s.Key == nil { + invalidParams.Add(request.NewErrParamRequired("Key")) + } + if s.Key != nil && len(*s.Key) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Key", 1)) + } + if s.Value == nil { + invalidParams.Add(request.NewErrParamRequired("Value")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKey sets the Key field's value. +func (s *Tag) SetKey(v string) *Tag { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *Tag) SetValue(v string) *Tag { + s.Value = &v + return s +} + +// Describes a virtual tape object. +type Tape struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // For archiving virtual tapes, indicates how much data remains to be uploaded + // before archiving is complete. + // + // Range: 0 (not started) to 100 (complete). + Progress *float64 `type:"double"` + + // The Amazon Resource Name (ARN) of the virtual tape. + TapeARN *string `min:"50" type:"string"` + + // The barcode that identifies a specific virtual tape. + TapeBarcode *string `min:"7" type:"string"` + + // The date the virtual tape was created. + TapeCreatedDate *time.Time `type:"timestamp"` + + // The size, in bytes, of the virtual tape capacity. + TapeSizeInBytes *int64 `type:"long"` + + // The current state of the virtual tape. + TapeStatus *string `type:"string"` + + // The size, in bytes, of data stored on the virtual tape. + // + // This value is not available for tapes created prior to May 13, 2015. + TapeUsedInBytes *int64 `type:"long"` + + // The virtual tape library (VTL) device that the virtual tape is associated + // with. + VTLDevice *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s Tape) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Tape) GoString() string { + return s.String() +} + +// SetKMSKey sets the KMSKey field's value. +func (s *Tape) SetKMSKey(v string) *Tape { + s.KMSKey = &v + return s +} + +// SetProgress sets the Progress field's value. +func (s *Tape) SetProgress(v float64) *Tape { + s.Progress = &v + return s +} + +// SetTapeARN sets the TapeARN field's value. +func (s *Tape) SetTapeARN(v string) *Tape { + s.TapeARN = &v + return s +} + +// SetTapeBarcode sets the TapeBarcode field's value. +func (s *Tape) SetTapeBarcode(v string) *Tape { + s.TapeBarcode = &v + return s +} + +// SetTapeCreatedDate sets the TapeCreatedDate field's value. 
+func (s *Tape) SetTapeCreatedDate(v time.Time) *Tape { + s.TapeCreatedDate = &v + return s +} + +// SetTapeSizeInBytes sets the TapeSizeInBytes field's value. +func (s *Tape) SetTapeSizeInBytes(v int64) *Tape { + s.TapeSizeInBytes = &v + return s +} + +// SetTapeStatus sets the TapeStatus field's value. +func (s *Tape) SetTapeStatus(v string) *Tape { + s.TapeStatus = &v + return s +} + +// SetTapeUsedInBytes sets the TapeUsedInBytes field's value. +func (s *Tape) SetTapeUsedInBytes(v int64) *Tape { + s.TapeUsedInBytes = &v + return s +} + +// SetVTLDevice sets the VTLDevice field's value. +func (s *Tape) SetVTLDevice(v string) *Tape { + s.VTLDevice = &v + return s +} + +// Represents a virtual tape that is archived in the virtual tape shelf (VTS). +type TapeArchive struct { + _ struct{} `type:"structure"` + + // The time that the archiving of the virtual tape was completed. + // + // The default time stamp format is in the ISO8601 extended YYYY-MM-DD'T'HH:MM:SS'Z' + // format. + CompletionTime *time.Time `type:"timestamp"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The Amazon Resource Name (ARN) of the tape gateway that the virtual tape + // is being retrieved to. + // + // The virtual tape is retrieved from the virtual tape shelf (VTS). + RetrievedTo *string `min:"50" type:"string"` + + // The Amazon Resource Name (ARN) of an archived virtual tape. + TapeARN *string `min:"50" type:"string"` + + // The barcode that identifies the archived virtual tape. + TapeBarcode *string `min:"7" type:"string"` + + // The date the virtual tape was created. + TapeCreatedDate *time.Time `type:"timestamp"` + + // The size, in bytes, of the archived virtual tape. + TapeSizeInBytes *int64 `type:"long"` + + // The current state of the archived virtual tape. + TapeStatus *string `type:"string"` + + // The size, in bytes, of data stored on the virtual tape. + // + // This value is not available for tapes created prior to May 13, 2015. + TapeUsedInBytes *int64 `type:"long"` +} + +// String returns the string representation +func (s TapeArchive) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TapeArchive) GoString() string { + return s.String() +} + +// SetCompletionTime sets the CompletionTime field's value. +func (s *TapeArchive) SetCompletionTime(v time.Time) *TapeArchive { + s.CompletionTime = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *TapeArchive) SetKMSKey(v string) *TapeArchive { + s.KMSKey = &v + return s +} + +// SetRetrievedTo sets the RetrievedTo field's value. +func (s *TapeArchive) SetRetrievedTo(v string) *TapeArchive { + s.RetrievedTo = &v + return s +} + +// SetTapeARN sets the TapeARN field's value. +func (s *TapeArchive) SetTapeARN(v string) *TapeArchive { + s.TapeARN = &v + return s +} + +// SetTapeBarcode sets the TapeBarcode field's value. +func (s *TapeArchive) SetTapeBarcode(v string) *TapeArchive { + s.TapeBarcode = &v + return s +} + +// SetTapeCreatedDate sets the TapeCreatedDate field's value. +func (s *TapeArchive) SetTapeCreatedDate(v time.Time) *TapeArchive { + s.TapeCreatedDate = &v + return s +} + +// SetTapeSizeInBytes sets the TapeSizeInBytes field's value. +func (s *TapeArchive) SetTapeSizeInBytes(v int64) *TapeArchive { + s.TapeSizeInBytes = &v + return s +} + +// SetTapeStatus sets the TapeStatus field's value. 
+func (s *TapeArchive) SetTapeStatus(v string) *TapeArchive { + s.TapeStatus = &v + return s +} + +// SetTapeUsedInBytes sets the TapeUsedInBytes field's value. +func (s *TapeArchive) SetTapeUsedInBytes(v int64) *TapeArchive { + s.TapeUsedInBytes = &v + return s +} + +// Describes a virtual tape. +type TapeInfo struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // The Amazon Resource Name (ARN) of a virtual tape. + TapeARN *string `min:"50" type:"string"` + + // The barcode that identifies a specific virtual tape. + TapeBarcode *string `min:"7" type:"string"` + + // The size, in bytes, of a virtual tape. + TapeSizeInBytes *int64 `type:"long"` + + // The status of the tape. + TapeStatus *string `type:"string"` +} + +// String returns the string representation +func (s TapeInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TapeInfo) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *TapeInfo) SetGatewayARN(v string) *TapeInfo { + s.GatewayARN = &v + return s +} + +// SetTapeARN sets the TapeARN field's value. +func (s *TapeInfo) SetTapeARN(v string) *TapeInfo { + s.TapeARN = &v + return s +} + +// SetTapeBarcode sets the TapeBarcode field's value. +func (s *TapeInfo) SetTapeBarcode(v string) *TapeInfo { + s.TapeBarcode = &v + return s +} + +// SetTapeSizeInBytes sets the TapeSizeInBytes field's value. +func (s *TapeInfo) SetTapeSizeInBytes(v int64) *TapeInfo { + s.TapeSizeInBytes = &v + return s +} + +// SetTapeStatus sets the TapeStatus field's value. +func (s *TapeInfo) SetTapeStatus(v string) *TapeInfo { + s.TapeStatus = &v + return s +} + +// Describes a recovery point. +type TapeRecoveryPointInfo struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the virtual tape. + TapeARN *string `min:"50" type:"string"` + + // The time when the point-in-time view of the virtual tape was replicated for + // later recovery. + // + // The default time stamp format of the tape recovery point time is in the ISO8601 + // extended YYYY-MM-DD'T'HH:MM:SS'Z' format. + TapeRecoveryPointTime *time.Time `type:"timestamp"` + + // The size, in bytes, of the virtual tapes to recover. + TapeSizeInBytes *int64 `type:"long"` + + TapeStatus *string `type:"string"` +} + +// String returns the string representation +func (s TapeRecoveryPointInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TapeRecoveryPointInfo) GoString() string { + return s.String() +} + +// SetTapeARN sets the TapeARN field's value. +func (s *TapeRecoveryPointInfo) SetTapeARN(v string) *TapeRecoveryPointInfo { + s.TapeARN = &v + return s +} + +// SetTapeRecoveryPointTime sets the TapeRecoveryPointTime field's value. +func (s *TapeRecoveryPointInfo) SetTapeRecoveryPointTime(v time.Time) *TapeRecoveryPointInfo { + s.TapeRecoveryPointTime = &v + return s +} + +// SetTapeSizeInBytes sets the TapeSizeInBytes field's value. +func (s *TapeRecoveryPointInfo) SetTapeSizeInBytes(v int64) *TapeRecoveryPointInfo { + s.TapeSizeInBytes = &v + return s +} + +// SetTapeStatus sets the TapeStatus field's value. 
+func (s *TapeRecoveryPointInfo) SetTapeStatus(v string) *TapeRecoveryPointInfo { + s.TapeStatus = &v + return s +} + +// A JSON object containing one or more of the following fields: +// +// * UpdateBandwidthRateLimitInput$AverageDownloadRateLimitInBitsPerSec +// +// * UpdateBandwidthRateLimitInput$AverageUploadRateLimitInBitsPerSec +type UpdateBandwidthRateLimitInput struct { + _ struct{} `type:"structure"` + + // The average download bandwidth rate limit in bits per second. + AverageDownloadRateLimitInBitsPerSec *int64 `min:"102400" type:"long"` + + // The average upload bandwidth rate limit in bits per second. + AverageUploadRateLimitInBitsPerSec *int64 `min:"51200" type:"long"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateBandwidthRateLimitInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateBandwidthRateLimitInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateBandwidthRateLimitInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateBandwidthRateLimitInput"} + if s.AverageDownloadRateLimitInBitsPerSec != nil && *s.AverageDownloadRateLimitInBitsPerSec < 102400 { + invalidParams.Add(request.NewErrParamMinValue("AverageDownloadRateLimitInBitsPerSec", 102400)) + } + if s.AverageUploadRateLimitInBitsPerSec != nil && *s.AverageUploadRateLimitInBitsPerSec < 51200 { + invalidParams.Add(request.NewErrParamMinValue("AverageUploadRateLimitInBitsPerSec", 51200)) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAverageDownloadRateLimitInBitsPerSec sets the AverageDownloadRateLimitInBitsPerSec field's value. +func (s *UpdateBandwidthRateLimitInput) SetAverageDownloadRateLimitInBitsPerSec(v int64) *UpdateBandwidthRateLimitInput { + s.AverageDownloadRateLimitInBitsPerSec = &v + return s +} + +// SetAverageUploadRateLimitInBitsPerSec sets the AverageUploadRateLimitInBitsPerSec field's value. +func (s *UpdateBandwidthRateLimitInput) SetAverageUploadRateLimitInBitsPerSec(v int64) *UpdateBandwidthRateLimitInput { + s.AverageUploadRateLimitInBitsPerSec = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *UpdateBandwidthRateLimitInput) SetGatewayARN(v string) *UpdateBandwidthRateLimitInput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the of the gateway whose throttle information was +// updated. +type UpdateBandwidthRateLimitOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. 
+ GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s UpdateBandwidthRateLimitOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateBandwidthRateLimitOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *UpdateBandwidthRateLimitOutput) SetGatewayARN(v string) *UpdateBandwidthRateLimitOutput { + s.GatewayARN = &v + return s +} + +// A JSON object containing one or more of the following fields: +// +// * UpdateChapCredentialsInput$InitiatorName +// +// * UpdateChapCredentialsInput$SecretToAuthenticateInitiator +// +// * UpdateChapCredentialsInput$SecretToAuthenticateTarget +// +// * UpdateChapCredentialsInput$TargetARN +type UpdateChapCredentialsInput struct { + _ struct{} `type:"structure"` + + // The iSCSI initiator that connects to the target. + // + // InitiatorName is a required field + InitiatorName *string `min:"1" type:"string" required:"true"` + + // The secret key that the initiator (for example, the Windows client) must + // provide to participate in mutual CHAP with the target. + // + // The secret key must be between 12 and 16 bytes when encoded in UTF-8. + // + // SecretToAuthenticateInitiator is a required field + SecretToAuthenticateInitiator *string `min:"1" type:"string" required:"true"` + + // The secret key that the target must provide to participate in mutual CHAP + // with the initiator (e.g. Windows client). + // + // Byte constraints: Minimum bytes of 12. Maximum bytes of 16. + // + // The secret key must be between 12 and 16 bytes when encoded in UTF-8. + SecretToAuthenticateTarget *string `min:"1" type:"string"` + + // The Amazon Resource Name (ARN) of the iSCSI volume target. Use the DescribeStorediSCSIVolumes + // operation to return the TargetARN for specified VolumeARN. + // + // TargetARN is a required field + TargetARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateChapCredentialsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateChapCredentialsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateChapCredentialsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateChapCredentialsInput"} + if s.InitiatorName == nil { + invalidParams.Add(request.NewErrParamRequired("InitiatorName")) + } + if s.InitiatorName != nil && len(*s.InitiatorName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("InitiatorName", 1)) + } + if s.SecretToAuthenticateInitiator == nil { + invalidParams.Add(request.NewErrParamRequired("SecretToAuthenticateInitiator")) + } + if s.SecretToAuthenticateInitiator != nil && len(*s.SecretToAuthenticateInitiator) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretToAuthenticateInitiator", 1)) + } + if s.SecretToAuthenticateTarget != nil && len(*s.SecretToAuthenticateTarget) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretToAuthenticateTarget", 1)) + } + if s.TargetARN == nil { + invalidParams.Add(request.NewErrParamRequired("TargetARN")) + } + if s.TargetARN != nil && len(*s.TargetARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("TargetARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetInitiatorName sets the InitiatorName field's value. 
+func (s *UpdateChapCredentialsInput) SetInitiatorName(v string) *UpdateChapCredentialsInput { + s.InitiatorName = &v + return s +} + +// SetSecretToAuthenticateInitiator sets the SecretToAuthenticateInitiator field's value. +func (s *UpdateChapCredentialsInput) SetSecretToAuthenticateInitiator(v string) *UpdateChapCredentialsInput { + s.SecretToAuthenticateInitiator = &v + return s +} + +// SetSecretToAuthenticateTarget sets the SecretToAuthenticateTarget field's value. +func (s *UpdateChapCredentialsInput) SetSecretToAuthenticateTarget(v string) *UpdateChapCredentialsInput { + s.SecretToAuthenticateTarget = &v + return s +} + +// SetTargetARN sets the TargetARN field's value. +func (s *UpdateChapCredentialsInput) SetTargetARN(v string) *UpdateChapCredentialsInput { + s.TargetARN = &v + return s +} + +// A JSON object containing the following fields: +type UpdateChapCredentialsOutput struct { + _ struct{} `type:"structure"` + + // The iSCSI initiator that connects to the target. This is the same initiator + // name specified in the request. + InitiatorName *string `min:"1" type:"string"` + + // The Amazon Resource Name (ARN) of the target. This is the same target specified + // in the request. + TargetARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s UpdateChapCredentialsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateChapCredentialsOutput) GoString() string { + return s.String() +} + +// SetInitiatorName sets the InitiatorName field's value. +func (s *UpdateChapCredentialsOutput) SetInitiatorName(v string) *UpdateChapCredentialsOutput { + s.InitiatorName = &v + return s +} + +// SetTargetARN sets the TargetARN field's value. +func (s *UpdateChapCredentialsOutput) SetTargetARN(v string) *UpdateChapCredentialsOutput { + s.TargetARN = &v + return s +} + +type UpdateGatewayInformationInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The name you configured for your gateway. + GatewayName *string `min:"2" type:"string"` + + GatewayTimezone *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s UpdateGatewayInformationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGatewayInformationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateGatewayInformationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateGatewayInformationInput"} + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.GatewayName != nil && len(*s.GatewayName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("GatewayName", 2)) + } + if s.GatewayTimezone != nil && len(*s.GatewayTimezone) < 3 { + invalidParams.Add(request.NewErrParamMinLen("GatewayTimezone", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGatewayARN sets the GatewayARN field's value. 
+func (s *UpdateGatewayInformationInput) SetGatewayARN(v string) *UpdateGatewayInformationInput {
+	s.GatewayARN = &v
+	return s
+}
+
+// SetGatewayName sets the GatewayName field's value.
+func (s *UpdateGatewayInformationInput) SetGatewayName(v string) *UpdateGatewayInformationInput {
+	s.GatewayName = &v
+	return s
+}
+
+// SetGatewayTimezone sets the GatewayTimezone field's value.
+func (s *UpdateGatewayInformationInput) SetGatewayTimezone(v string) *UpdateGatewayInformationInput {
+	s.GatewayTimezone = &v
+	return s
+}
+
+// A JSON object containing the ARN of the gateway that was updated.
+type UpdateGatewayInformationOutput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+	GatewayARN *string `min:"50" type:"string"`
+
+	GatewayName *string `type:"string"`
+}
+
+// String returns the string representation
+func (s UpdateGatewayInformationOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateGatewayInformationOutput) GoString() string {
+	return s.String()
+}
+
+// SetGatewayARN sets the GatewayARN field's value.
+func (s *UpdateGatewayInformationOutput) SetGatewayARN(v string) *UpdateGatewayInformationOutput {
+	s.GatewayARN = &v
+	return s
+}
+
+// SetGatewayName sets the GatewayName field's value.
+func (s *UpdateGatewayInformationOutput) SetGatewayName(v string) *UpdateGatewayInformationOutput {
+	s.GatewayName = &v
+	return s
+}
+
+// A JSON object containing the Amazon Resource Name (ARN) of the gateway to
+// update.
+type UpdateGatewaySoftwareNowInput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+	//
+	// GatewayARN is a required field
+	GatewayARN *string `min:"50" type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s UpdateGatewaySoftwareNowInput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateGatewaySoftwareNowInput) GoString() string {
+	return s.String()
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *UpdateGatewaySoftwareNowInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "UpdateGatewaySoftwareNowInput"}
+	if s.GatewayARN == nil {
+		invalidParams.Add(request.NewErrParamRequired("GatewayARN"))
+	}
+	if s.GatewayARN != nil && len(*s.GatewayARN) < 50 {
+		invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetGatewayARN sets the GatewayARN field's value.
+func (s *UpdateGatewaySoftwareNowInput) SetGatewayARN(v string) *UpdateGatewaySoftwareNowInput {
+	s.GatewayARN = &v
+	return s
+}
+
+// A JSON object containing the Amazon Resource Name (ARN) of the gateway that
+// was updated.
+type UpdateGatewaySoftwareNowOutput struct {
+	_ struct{} `type:"structure"`
+
+	// The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation
+	// to return a list of gateways for your account and region.
+ GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s UpdateGatewaySoftwareNowOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateGatewaySoftwareNowOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *UpdateGatewaySoftwareNowOutput) SetGatewayARN(v string) *UpdateGatewaySoftwareNowOutput { + s.GatewayARN = &v + return s +} + +// A JSON object containing the following fields: +// +// * UpdateMaintenanceStartTimeInput$DayOfWeek +// +// * UpdateMaintenanceStartTimeInput$HourOfDay +// +// * UpdateMaintenanceStartTimeInput$MinuteOfHour +type UpdateMaintenanceStartTimeInput struct { + _ struct{} `type:"structure"` + + // The maintenance start time day of the week represented as an ordinal number + // from 0 to 6, where 0 represents Sunday and 6 Saturday. + // + // DayOfWeek is a required field + DayOfWeek *int64 `type:"integer" required:"true"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + // + // GatewayARN is a required field + GatewayARN *string `min:"50" type:"string" required:"true"` + + // The hour component of the maintenance start time represented as hh, where + // hh is the hour (00 to 23). The hour of the day is in the time zone of the + // gateway. + // + // HourOfDay is a required field + HourOfDay *int64 `type:"integer" required:"true"` + + // The minute component of the maintenance start time represented as mm, where + // mm is the minute (00 to 59). The minute of the hour is in the time zone of + // the gateway. + // + // MinuteOfHour is a required field + MinuteOfHour *int64 `type:"integer" required:"true"` +} + +// String returns the string representation +func (s UpdateMaintenanceStartTimeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateMaintenanceStartTimeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateMaintenanceStartTimeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateMaintenanceStartTimeInput"} + if s.DayOfWeek == nil { + invalidParams.Add(request.NewErrParamRequired("DayOfWeek")) + } + if s.GatewayARN == nil { + invalidParams.Add(request.NewErrParamRequired("GatewayARN")) + } + if s.GatewayARN != nil && len(*s.GatewayARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("GatewayARN", 50)) + } + if s.HourOfDay == nil { + invalidParams.Add(request.NewErrParamRequired("HourOfDay")) + } + if s.MinuteOfHour == nil { + invalidParams.Add(request.NewErrParamRequired("MinuteOfHour")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDayOfWeek sets the DayOfWeek field's value. +func (s *UpdateMaintenanceStartTimeInput) SetDayOfWeek(v int64) *UpdateMaintenanceStartTimeInput { + s.DayOfWeek = &v + return s +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *UpdateMaintenanceStartTimeInput) SetGatewayARN(v string) *UpdateMaintenanceStartTimeInput { + s.GatewayARN = &v + return s +} + +// SetHourOfDay sets the HourOfDay field's value. +func (s *UpdateMaintenanceStartTimeInput) SetHourOfDay(v int64) *UpdateMaintenanceStartTimeInput { + s.HourOfDay = &v + return s +} + +// SetMinuteOfHour sets the MinuteOfHour field's value. 
+func (s *UpdateMaintenanceStartTimeInput) SetMinuteOfHour(v int64) *UpdateMaintenanceStartTimeInput { + s.MinuteOfHour = &v + return s +} + +// A JSON object containing the of the gateway whose maintenance start time +// is updated. +type UpdateMaintenanceStartTimeOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s UpdateMaintenanceStartTimeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateMaintenanceStartTimeOutput) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *UpdateMaintenanceStartTimeOutput) SetGatewayARN(v string) *UpdateMaintenanceStartTimeOutput { + s.GatewayARN = &v + return s +} + +// UpdateNFSFileShareInput +type UpdateNFSFileShareInput struct { + _ struct{} `type:"structure"` + + // The list of clients that are allowed to access the file gateway. The list + // must contain either valid IP addresses or valid CIDR blocks. + ClientList []*string `min:"1" type:"list"` + + // The default storage class for objects put into an Amazon S3 bucket by the + // file gateway. Possible values are S3_STANDARD, S3_STANDARD_IA, or S3_ONEZONE_IA. + // If this field is not populated, the default value S3_STANDARD is used. Optional. + DefaultStorageClass *string `min:"5" type:"string"` + + // The Amazon Resource Name (ARN) of the file share to be updated. + // + // FileShareARN is a required field + FileShareARN *string `min:"50" type:"string" required:"true"` + + // A value that enables guessing of the MIME type for uploaded objects based + // on file extensions. Set this value to true to enable MIME type guessing, + // and otherwise to false. The default value is true. + GuessMIMETypeEnabled *bool `type:"boolean"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // The default values for the file share. Optional. + NFSFileShareDefaults *NFSFileShareDefaults `type:"structure"` + + // A value that sets the access control list permission for objects in the S3 + // bucket that a file gateway puts objects into. The default value is "private". + ObjectACL *string `type:"string" enum:"ObjectACL"` + + // A value that sets the write status of a file share. This value is true if + // the write status is read-only, and otherwise false. + ReadOnly *bool `type:"boolean"` + + // A value that sets the access control list permission for objects in the Amazon + // S3 bucket that a file gateway puts objects into. The default value is private. + RequesterPays *bool `type:"boolean"` + + // The user mapped to anonymous user. Valid options are the following: + // + // * RootSquash - Only root is mapped to anonymous user. + // + // * NoSquash - No one is mapped to anonymous user + // + // * AllSquash - Everyone is mapped to anonymous user. 
+ Squash *string `min:"5" type:"string"` +} + +// String returns the string representation +func (s UpdateNFSFileShareInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateNFSFileShareInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateNFSFileShareInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateNFSFileShareInput"} + if s.ClientList != nil && len(s.ClientList) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClientList", 1)) + } + if s.DefaultStorageClass != nil && len(*s.DefaultStorageClass) < 5 { + invalidParams.Add(request.NewErrParamMinLen("DefaultStorageClass", 5)) + } + if s.FileShareARN == nil { + invalidParams.Add(request.NewErrParamRequired("FileShareARN")) + } + if s.FileShareARN != nil && len(*s.FileShareARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("FileShareARN", 50)) + } + if s.KMSKey != nil && len(*s.KMSKey) < 20 { + invalidParams.Add(request.NewErrParamMinLen("KMSKey", 20)) + } + if s.Squash != nil && len(*s.Squash) < 5 { + invalidParams.Add(request.NewErrParamMinLen("Squash", 5)) + } + if s.NFSFileShareDefaults != nil { + if err := s.NFSFileShareDefaults.Validate(); err != nil { + invalidParams.AddNested("NFSFileShareDefaults", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientList sets the ClientList field's value. +func (s *UpdateNFSFileShareInput) SetClientList(v []*string) *UpdateNFSFileShareInput { + s.ClientList = v + return s +} + +// SetDefaultStorageClass sets the DefaultStorageClass field's value. +func (s *UpdateNFSFileShareInput) SetDefaultStorageClass(v string) *UpdateNFSFileShareInput { + s.DefaultStorageClass = &v + return s +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *UpdateNFSFileShareInput) SetFileShareARN(v string) *UpdateNFSFileShareInput { + s.FileShareARN = &v + return s +} + +// SetGuessMIMETypeEnabled sets the GuessMIMETypeEnabled field's value. +func (s *UpdateNFSFileShareInput) SetGuessMIMETypeEnabled(v bool) *UpdateNFSFileShareInput { + s.GuessMIMETypeEnabled = &v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *UpdateNFSFileShareInput) SetKMSEncrypted(v bool) *UpdateNFSFileShareInput { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *UpdateNFSFileShareInput) SetKMSKey(v string) *UpdateNFSFileShareInput { + s.KMSKey = &v + return s +} + +// SetNFSFileShareDefaults sets the NFSFileShareDefaults field's value. +func (s *UpdateNFSFileShareInput) SetNFSFileShareDefaults(v *NFSFileShareDefaults) *UpdateNFSFileShareInput { + s.NFSFileShareDefaults = v + return s +} + +// SetObjectACL sets the ObjectACL field's value. +func (s *UpdateNFSFileShareInput) SetObjectACL(v string) *UpdateNFSFileShareInput { + s.ObjectACL = &v + return s +} + +// SetReadOnly sets the ReadOnly field's value. +func (s *UpdateNFSFileShareInput) SetReadOnly(v bool) *UpdateNFSFileShareInput { + s.ReadOnly = &v + return s +} + +// SetRequesterPays sets the RequesterPays field's value. +func (s *UpdateNFSFileShareInput) SetRequesterPays(v bool) *UpdateNFSFileShareInput { + s.RequesterPays = &v + return s +} + +// SetSquash sets the Squash field's value. 
+func (s *UpdateNFSFileShareInput) SetSquash(v string) *UpdateNFSFileShareInput { + s.Squash = &v + return s +} + +// UpdateNFSFileShareOutput +type UpdateNFSFileShareOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the updated file share. + FileShareARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s UpdateNFSFileShareOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateNFSFileShareOutput) GoString() string { + return s.String() +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *UpdateNFSFileShareOutput) SetFileShareARN(v string) *UpdateNFSFileShareOutput { + s.FileShareARN = &v + return s +} + +// UpdateSMBFileShareInput +type UpdateSMBFileShareInput struct { + _ struct{} `type:"structure"` + + // The default storage class for objects put into an Amazon S3 bucket by the + // file gateway. Possible values are S3_STANDARD, S3_STANDARD_IA, or S3_ONEZONE_IA. + // If this field is not populated, the default value S3_STANDARD is used. Optional. + DefaultStorageClass *string `min:"5" type:"string"` + + // The Amazon Resource Name (ARN) of the SMB file share that you want to update. + // + // FileShareARN is a required field + FileShareARN *string `min:"50" type:"string" required:"true"` + + // A value that enables guessing of the MIME type for uploaded objects based + // on file extensions. Set this value to true to enable MIME type guessing, + // and otherwise to false. The default value is true. + GuessMIMETypeEnabled *bool `type:"boolean"` + + // A list of users or groups in the Active Directory that are not allowed to + // access the file share. A group must be prefixed with the @ character. For + // example @group1. Can only be set if Authentication is set to ActiveDirectory. + InvalidUserList []*string `type:"list"` + + // True to use Amazon S3 server side encryption with your own AWS KMS key, or + // false to use a key managed by Amazon S3. Optional. + KMSEncrypted *bool `type:"boolean"` + + // The Amazon Resource Name (ARN) of the AWS KMS key used for Amazon S3 server + // side encryption. This value can only be set when KMSEncrypted is true. Optional. + KMSKey *string `min:"20" type:"string"` + + // A value that sets the access control list permission for objects in the S3 + // bucket that a file gateway puts objects into. The default value is "private". + ObjectACL *string `type:"string" enum:"ObjectACL"` + + // A value that sets the write status of a file share. This value is true if + // the write status is read-only, and otherwise false. + ReadOnly *bool `type:"boolean"` + + // A value that sets the access control list permission for objects in the Amazon + // S3 bucket that a file gateway puts objects into. The default value is private. + RequesterPays *bool `type:"boolean"` + + // A list of users or groups in the Active Directory that are allowed to access + // the file share. A group must be prefixed with the @ character. For example + // @group1. Can only be set if Authentication is set to ActiveDirectory. + ValidUserList []*string `type:"list"` +} + +// String returns the string representation +func (s UpdateSMBFileShareInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSMBFileShareInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateSMBFileShareInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSMBFileShareInput"} + if s.DefaultStorageClass != nil && len(*s.DefaultStorageClass) < 5 { + invalidParams.Add(request.NewErrParamMinLen("DefaultStorageClass", 5)) + } + if s.FileShareARN == nil { + invalidParams.Add(request.NewErrParamRequired("FileShareARN")) + } + if s.FileShareARN != nil && len(*s.FileShareARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("FileShareARN", 50)) + } + if s.KMSKey != nil && len(*s.KMSKey) < 20 { + invalidParams.Add(request.NewErrParamMinLen("KMSKey", 20)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDefaultStorageClass sets the DefaultStorageClass field's value. +func (s *UpdateSMBFileShareInput) SetDefaultStorageClass(v string) *UpdateSMBFileShareInput { + s.DefaultStorageClass = &v + return s +} + +// SetFileShareARN sets the FileShareARN field's value. +func (s *UpdateSMBFileShareInput) SetFileShareARN(v string) *UpdateSMBFileShareInput { + s.FileShareARN = &v + return s +} + +// SetGuessMIMETypeEnabled sets the GuessMIMETypeEnabled field's value. +func (s *UpdateSMBFileShareInput) SetGuessMIMETypeEnabled(v bool) *UpdateSMBFileShareInput { + s.GuessMIMETypeEnabled = &v + return s +} + +// SetInvalidUserList sets the InvalidUserList field's value. +func (s *UpdateSMBFileShareInput) SetInvalidUserList(v []*string) *UpdateSMBFileShareInput { + s.InvalidUserList = v + return s +} + +// SetKMSEncrypted sets the KMSEncrypted field's value. +func (s *UpdateSMBFileShareInput) SetKMSEncrypted(v bool) *UpdateSMBFileShareInput { + s.KMSEncrypted = &v + return s +} + +// SetKMSKey sets the KMSKey field's value. +func (s *UpdateSMBFileShareInput) SetKMSKey(v string) *UpdateSMBFileShareInput { + s.KMSKey = &v + return s +} + +// SetObjectACL sets the ObjectACL field's value. +func (s *UpdateSMBFileShareInput) SetObjectACL(v string) *UpdateSMBFileShareInput { + s.ObjectACL = &v + return s +} + +// SetReadOnly sets the ReadOnly field's value. +func (s *UpdateSMBFileShareInput) SetReadOnly(v bool) *UpdateSMBFileShareInput { + s.ReadOnly = &v + return s +} + +// SetRequesterPays sets the RequesterPays field's value. +func (s *UpdateSMBFileShareInput) SetRequesterPays(v bool) *UpdateSMBFileShareInput { + s.RequesterPays = &v + return s +} + +// SetValidUserList sets the ValidUserList field's value. +func (s *UpdateSMBFileShareInput) SetValidUserList(v []*string) *UpdateSMBFileShareInput { + s.ValidUserList = v + return s +} + +// UpdateSMBFileShareOutput +type UpdateSMBFileShareOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the updated SMB file share. + FileShareARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s UpdateSMBFileShareOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSMBFileShareOutput) GoString() string { + return s.String() +} + +// SetFileShareARN sets the FileShareARN field's value. 
+func (s *UpdateSMBFileShareOutput) SetFileShareARN(v string) *UpdateSMBFileShareOutput { + s.FileShareARN = &v + return s +} + +// A JSON object containing one or more of the following fields: +// +// * UpdateSnapshotScheduleInput$Description +// +// * UpdateSnapshotScheduleInput$RecurrenceInHours +// +// * UpdateSnapshotScheduleInput$StartAt +// +// * UpdateSnapshotScheduleInput$VolumeARN +type UpdateSnapshotScheduleInput struct { + _ struct{} `type:"structure"` + + // Optional description of the snapshot that overwrites the existing description. + Description *string `min:"1" type:"string"` + + // Frequency of snapshots. Specify the number of hours between snapshots. + // + // RecurrenceInHours is a required field + RecurrenceInHours *int64 `min:"1" type:"integer" required:"true"` + + // The hour of the day at which the snapshot schedule begins represented as + // hh, where hh is the hour (0 to 23). The hour of the day is in the time zone + // of the gateway. + // + // StartAt is a required field + StartAt *int64 `type:"integer" required:"true"` + + // The Amazon Resource Name (ARN) of the volume. Use the ListVolumes operation + // to return a list of gateway volumes. + // + // VolumeARN is a required field + VolumeARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateSnapshotScheduleInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSnapshotScheduleInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateSnapshotScheduleInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSnapshotScheduleInput"} + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.RecurrenceInHours == nil { + invalidParams.Add(request.NewErrParamRequired("RecurrenceInHours")) + } + if s.RecurrenceInHours != nil && *s.RecurrenceInHours < 1 { + invalidParams.Add(request.NewErrParamMinValue("RecurrenceInHours", 1)) + } + if s.StartAt == nil { + invalidParams.Add(request.NewErrParamRequired("StartAt")) + } + if s.VolumeARN == nil { + invalidParams.Add(request.NewErrParamRequired("VolumeARN")) + } + if s.VolumeARN != nil && len(*s.VolumeARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("VolumeARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *UpdateSnapshotScheduleInput) SetDescription(v string) *UpdateSnapshotScheduleInput { + s.Description = &v + return s +} + +// SetRecurrenceInHours sets the RecurrenceInHours field's value. +func (s *UpdateSnapshotScheduleInput) SetRecurrenceInHours(v int64) *UpdateSnapshotScheduleInput { + s.RecurrenceInHours = &v + return s +} + +// SetStartAt sets the StartAt field's value. +func (s *UpdateSnapshotScheduleInput) SetStartAt(v int64) *UpdateSnapshotScheduleInput { + s.StartAt = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *UpdateSnapshotScheduleInput) SetVolumeARN(v string) *UpdateSnapshotScheduleInput { + s.VolumeARN = &v + return s +} + +// A JSON object containing the of the updated storage volume. 
+type UpdateSnapshotScheduleOutput struct { + _ struct{} `type:"structure"` + + VolumeARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s UpdateSnapshotScheduleOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSnapshotScheduleOutput) GoString() string { + return s.String() +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *UpdateSnapshotScheduleOutput) SetVolumeARN(v string) *UpdateSnapshotScheduleOutput { + s.VolumeARN = &v + return s +} + +type UpdateVTLDeviceTypeInput struct { + _ struct{} `type:"structure"` + + // The type of medium changer you want to select. + // + // Valid Values: "STK-L700", "AWS-Gateway-VTL" + // + // DeviceType is a required field + DeviceType *string `min:"2" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of the medium changer you want to select. + // + // VTLDeviceARN is a required field + VTLDeviceARN *string `min:"50" type:"string" required:"true"` +} + +// String returns the string representation +func (s UpdateVTLDeviceTypeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateVTLDeviceTypeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateVTLDeviceTypeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateVTLDeviceTypeInput"} + if s.DeviceType == nil { + invalidParams.Add(request.NewErrParamRequired("DeviceType")) + } + if s.DeviceType != nil && len(*s.DeviceType) < 2 { + invalidParams.Add(request.NewErrParamMinLen("DeviceType", 2)) + } + if s.VTLDeviceARN == nil { + invalidParams.Add(request.NewErrParamRequired("VTLDeviceARN")) + } + if s.VTLDeviceARN != nil && len(*s.VTLDeviceARN) < 50 { + invalidParams.Add(request.NewErrParamMinLen("VTLDeviceARN", 50)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDeviceType sets the DeviceType field's value. +func (s *UpdateVTLDeviceTypeInput) SetDeviceType(v string) *UpdateVTLDeviceTypeInput { + s.DeviceType = &v + return s +} + +// SetVTLDeviceARN sets the VTLDeviceARN field's value. +func (s *UpdateVTLDeviceTypeInput) SetVTLDeviceARN(v string) *UpdateVTLDeviceTypeInput { + s.VTLDeviceARN = &v + return s +} + +// UpdateVTLDeviceTypeOutput +type UpdateVTLDeviceTypeOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the medium changer you have selected. + VTLDeviceARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s UpdateVTLDeviceTypeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateVTLDeviceTypeOutput) GoString() string { + return s.String() +} + +// SetVTLDeviceARN sets the VTLDeviceARN field's value. +func (s *UpdateVTLDeviceTypeOutput) SetVTLDeviceARN(v string) *UpdateVTLDeviceTypeOutput { + s.VTLDeviceARN = &v + return s +} + +// Represents a device object associated with a tape gateway. +type VTLDevice struct { + _ struct{} `type:"structure"` + + // A list of iSCSI information about a VTL device. + DeviceiSCSIAttributes *DeviceiSCSIAttributes `type:"structure"` + + // Specifies the unique Amazon Resource Name (ARN) of the device (tape drive + // or media changer). 
+ VTLDeviceARN *string `min:"50" type:"string"` + + VTLDeviceProductIdentifier *string `type:"string"` + + VTLDeviceType *string `type:"string"` + + VTLDeviceVendor *string `type:"string"` +} + +// String returns the string representation +func (s VTLDevice) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VTLDevice) GoString() string { + return s.String() +} + +// SetDeviceiSCSIAttributes sets the DeviceiSCSIAttributes field's value. +func (s *VTLDevice) SetDeviceiSCSIAttributes(v *DeviceiSCSIAttributes) *VTLDevice { + s.DeviceiSCSIAttributes = v + return s +} + +// SetVTLDeviceARN sets the VTLDeviceARN field's value. +func (s *VTLDevice) SetVTLDeviceARN(v string) *VTLDevice { + s.VTLDeviceARN = &v + return s +} + +// SetVTLDeviceProductIdentifier sets the VTLDeviceProductIdentifier field's value. +func (s *VTLDevice) SetVTLDeviceProductIdentifier(v string) *VTLDevice { + s.VTLDeviceProductIdentifier = &v + return s +} + +// SetVTLDeviceType sets the VTLDeviceType field's value. +func (s *VTLDevice) SetVTLDeviceType(v string) *VTLDevice { + s.VTLDeviceType = &v + return s +} + +// SetVTLDeviceVendor sets the VTLDeviceVendor field's value. +func (s *VTLDevice) SetVTLDeviceVendor(v string) *VTLDevice { + s.VTLDeviceVendor = &v + return s +} + +// Describes a storage volume object. +type VolumeInfo struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation + // to return a list of gateways for your account and region. + GatewayARN *string `min:"50" type:"string"` + + // The unique identifier assigned to your gateway during activation. This ID + // becomes part of the gateway Amazon Resource Name (ARN), which you use as + // input for other operations. + // + // Valid Values: 50 to 500 lowercase letters, numbers, periods (.), and hyphens + // (-). + GatewayId *string `min:"12" type:"string"` + + // The Amazon Resource Name (ARN) for the storage volume. For example, the following + // is a valid ARN: + // + // arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABB + // + // Valid Values: 50 to 500 lowercase letters, numbers, periods (.), and hyphens + // (-). + VolumeARN *string `min:"50" type:"string"` + + // The unique identifier assigned to the volume. This ID becomes part of the + // volume Amazon Resource Name (ARN), which you use as input for other operations. + // + // Valid Values: 50 to 500 lowercase letters, numbers, periods (.), and hyphens + // (-). + VolumeId *string `min:"12" type:"string"` + + // The size of the volume in bytes. + // + // Valid Values: 50 to 500 lowercase letters, numbers, periods (.), and hyphens + // (-). + VolumeSizeInBytes *int64 `type:"long"` + + VolumeType *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s VolumeInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VolumeInfo) GoString() string { + return s.String() +} + +// SetGatewayARN sets the GatewayARN field's value. +func (s *VolumeInfo) SetGatewayARN(v string) *VolumeInfo { + s.GatewayARN = &v + return s +} + +// SetGatewayId sets the GatewayId field's value. +func (s *VolumeInfo) SetGatewayId(v string) *VolumeInfo { + s.GatewayId = &v + return s +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *VolumeInfo) SetVolumeARN(v string) *VolumeInfo { + s.VolumeARN = &v + return s +} + +// SetVolumeId sets the VolumeId field's value. 
+func (s *VolumeInfo) SetVolumeId(v string) *VolumeInfo { + s.VolumeId = &v + return s +} + +// SetVolumeSizeInBytes sets the VolumeSizeInBytes field's value. +func (s *VolumeInfo) SetVolumeSizeInBytes(v int64) *VolumeInfo { + s.VolumeSizeInBytes = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. +func (s *VolumeInfo) SetVolumeType(v string) *VolumeInfo { + s.VolumeType = &v + return s +} + +type VolumeRecoveryPointInfo struct { + _ struct{} `type:"structure"` + + VolumeARN *string `min:"50" type:"string"` + + VolumeRecoveryPointTime *string `type:"string"` + + VolumeSizeInBytes *int64 `type:"long"` + + VolumeUsageInBytes *int64 `type:"long"` +} + +// String returns the string representation +func (s VolumeRecoveryPointInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VolumeRecoveryPointInfo) GoString() string { + return s.String() +} + +// SetVolumeARN sets the VolumeARN field's value. +func (s *VolumeRecoveryPointInfo) SetVolumeARN(v string) *VolumeRecoveryPointInfo { + s.VolumeARN = &v + return s +} + +// SetVolumeRecoveryPointTime sets the VolumeRecoveryPointTime field's value. +func (s *VolumeRecoveryPointInfo) SetVolumeRecoveryPointTime(v string) *VolumeRecoveryPointInfo { + s.VolumeRecoveryPointTime = &v + return s +} + +// SetVolumeSizeInBytes sets the VolumeSizeInBytes field's value. +func (s *VolumeRecoveryPointInfo) SetVolumeSizeInBytes(v int64) *VolumeRecoveryPointInfo { + s.VolumeSizeInBytes = &v + return s +} + +// SetVolumeUsageInBytes sets the VolumeUsageInBytes field's value. +func (s *VolumeRecoveryPointInfo) SetVolumeUsageInBytes(v int64) *VolumeRecoveryPointInfo { + s.VolumeUsageInBytes = &v + return s +} + +// Lists iSCSI information about a volume. +type VolumeiSCSIAttributes struct { + _ struct{} `type:"structure"` + + // Indicates whether mutual CHAP is enabled for the iSCSI target. + ChapEnabled *bool `type:"boolean"` + + // The logical disk number. + LunNumber *int64 `min:"1" type:"integer"` + + // The network interface identifier. + NetworkInterfaceId *string `type:"string"` + + // The port used to communicate with iSCSI targets. + NetworkInterfacePort *int64 `type:"integer"` + + // The Amazon Resource Name (ARN) of the volume target. + TargetARN *string `min:"50" type:"string"` +} + +// String returns the string representation +func (s VolumeiSCSIAttributes) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VolumeiSCSIAttributes) GoString() string { + return s.String() +} + +// SetChapEnabled sets the ChapEnabled field's value. +func (s *VolumeiSCSIAttributes) SetChapEnabled(v bool) *VolumeiSCSIAttributes { + s.ChapEnabled = &v + return s +} + +// SetLunNumber sets the LunNumber field's value. +func (s *VolumeiSCSIAttributes) SetLunNumber(v int64) *VolumeiSCSIAttributes { + s.LunNumber = &v + return s +} + +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *VolumeiSCSIAttributes) SetNetworkInterfaceId(v string) *VolumeiSCSIAttributes { + s.NetworkInterfaceId = &v + return s +} + +// SetNetworkInterfacePort sets the NetworkInterfacePort field's value. +func (s *VolumeiSCSIAttributes) SetNetworkInterfacePort(v int64) *VolumeiSCSIAttributes { + s.NetworkInterfacePort = &v + return s +} + +// SetTargetARN sets the TargetARN field's value. 
+func (s *VolumeiSCSIAttributes) SetTargetARN(v string) *VolumeiSCSIAttributes { + s.TargetARN = &v + return s +} + +const ( + // ErrorCodeActivationKeyExpired is a ErrorCode enum value + ErrorCodeActivationKeyExpired = "ActivationKeyExpired" + + // ErrorCodeActivationKeyInvalid is a ErrorCode enum value + ErrorCodeActivationKeyInvalid = "ActivationKeyInvalid" + + // ErrorCodeActivationKeyNotFound is a ErrorCode enum value + ErrorCodeActivationKeyNotFound = "ActivationKeyNotFound" + + // ErrorCodeGatewayInternalError is a ErrorCode enum value + ErrorCodeGatewayInternalError = "GatewayInternalError" + + // ErrorCodeGatewayNotConnected is a ErrorCode enum value + ErrorCodeGatewayNotConnected = "GatewayNotConnected" + + // ErrorCodeGatewayNotFound is a ErrorCode enum value + ErrorCodeGatewayNotFound = "GatewayNotFound" + + // ErrorCodeGatewayProxyNetworkConnectionBusy is a ErrorCode enum value + ErrorCodeGatewayProxyNetworkConnectionBusy = "GatewayProxyNetworkConnectionBusy" + + // ErrorCodeAuthenticationFailure is a ErrorCode enum value + ErrorCodeAuthenticationFailure = "AuthenticationFailure" + + // ErrorCodeBandwidthThrottleScheduleNotFound is a ErrorCode enum value + ErrorCodeBandwidthThrottleScheduleNotFound = "BandwidthThrottleScheduleNotFound" + + // ErrorCodeBlocked is a ErrorCode enum value + ErrorCodeBlocked = "Blocked" + + // ErrorCodeCannotExportSnapshot is a ErrorCode enum value + ErrorCodeCannotExportSnapshot = "CannotExportSnapshot" + + // ErrorCodeChapCredentialNotFound is a ErrorCode enum value + ErrorCodeChapCredentialNotFound = "ChapCredentialNotFound" + + // ErrorCodeDiskAlreadyAllocated is a ErrorCode enum value + ErrorCodeDiskAlreadyAllocated = "DiskAlreadyAllocated" + + // ErrorCodeDiskDoesNotExist is a ErrorCode enum value + ErrorCodeDiskDoesNotExist = "DiskDoesNotExist" + + // ErrorCodeDiskSizeGreaterThanVolumeMaxSize is a ErrorCode enum value + ErrorCodeDiskSizeGreaterThanVolumeMaxSize = "DiskSizeGreaterThanVolumeMaxSize" + + // ErrorCodeDiskSizeLessThanVolumeSize is a ErrorCode enum value + ErrorCodeDiskSizeLessThanVolumeSize = "DiskSizeLessThanVolumeSize" + + // ErrorCodeDiskSizeNotGigAligned is a ErrorCode enum value + ErrorCodeDiskSizeNotGigAligned = "DiskSizeNotGigAligned" + + // ErrorCodeDuplicateCertificateInfo is a ErrorCode enum value + ErrorCodeDuplicateCertificateInfo = "DuplicateCertificateInfo" + + // ErrorCodeDuplicateSchedule is a ErrorCode enum value + ErrorCodeDuplicateSchedule = "DuplicateSchedule" + + // ErrorCodeEndpointNotFound is a ErrorCode enum value + ErrorCodeEndpointNotFound = "EndpointNotFound" + + // ErrorCodeIamnotSupported is a ErrorCode enum value + ErrorCodeIamnotSupported = "IAMNotSupported" + + // ErrorCodeInitiatorInvalid is a ErrorCode enum value + ErrorCodeInitiatorInvalid = "InitiatorInvalid" + + // ErrorCodeInitiatorNotFound is a ErrorCode enum value + ErrorCodeInitiatorNotFound = "InitiatorNotFound" + + // ErrorCodeInternalError is a ErrorCode enum value + ErrorCodeInternalError = "InternalError" + + // ErrorCodeInvalidGateway is a ErrorCode enum value + ErrorCodeInvalidGateway = "InvalidGateway" + + // ErrorCodeInvalidEndpoint is a ErrorCode enum value + ErrorCodeInvalidEndpoint = "InvalidEndpoint" + + // ErrorCodeInvalidParameters is a ErrorCode enum value + ErrorCodeInvalidParameters = "InvalidParameters" + + // ErrorCodeInvalidSchedule is a ErrorCode enum value + ErrorCodeInvalidSchedule = "InvalidSchedule" + + // ErrorCodeLocalStorageLimitExceeded is a ErrorCode enum value + ErrorCodeLocalStorageLimitExceeded = 
"LocalStorageLimitExceeded" + + // ErrorCodeLunAlreadyAllocated is a ErrorCode enum value + ErrorCodeLunAlreadyAllocated = "LunAlreadyAllocated " + + // ErrorCodeLunInvalid is a ErrorCode enum value + ErrorCodeLunInvalid = "LunInvalid" + + // ErrorCodeMaximumContentLengthExceeded is a ErrorCode enum value + ErrorCodeMaximumContentLengthExceeded = "MaximumContentLengthExceeded" + + // ErrorCodeMaximumTapeCartridgeCountExceeded is a ErrorCode enum value + ErrorCodeMaximumTapeCartridgeCountExceeded = "MaximumTapeCartridgeCountExceeded" + + // ErrorCodeMaximumVolumeCountExceeded is a ErrorCode enum value + ErrorCodeMaximumVolumeCountExceeded = "MaximumVolumeCountExceeded" + + // ErrorCodeNetworkConfigurationChanged is a ErrorCode enum value + ErrorCodeNetworkConfigurationChanged = "NetworkConfigurationChanged" + + // ErrorCodeNoDisksAvailable is a ErrorCode enum value + ErrorCodeNoDisksAvailable = "NoDisksAvailable" + + // ErrorCodeNotImplemented is a ErrorCode enum value + ErrorCodeNotImplemented = "NotImplemented" + + // ErrorCodeNotSupported is a ErrorCode enum value + ErrorCodeNotSupported = "NotSupported" + + // ErrorCodeOperationAborted is a ErrorCode enum value + ErrorCodeOperationAborted = "OperationAborted" + + // ErrorCodeOutdatedGateway is a ErrorCode enum value + ErrorCodeOutdatedGateway = "OutdatedGateway" + + // ErrorCodeParametersNotImplemented is a ErrorCode enum value + ErrorCodeParametersNotImplemented = "ParametersNotImplemented" + + // ErrorCodeRegionInvalid is a ErrorCode enum value + ErrorCodeRegionInvalid = "RegionInvalid" + + // ErrorCodeRequestTimeout is a ErrorCode enum value + ErrorCodeRequestTimeout = "RequestTimeout" + + // ErrorCodeServiceUnavailable is a ErrorCode enum value + ErrorCodeServiceUnavailable = "ServiceUnavailable" + + // ErrorCodeSnapshotDeleted is a ErrorCode enum value + ErrorCodeSnapshotDeleted = "SnapshotDeleted" + + // ErrorCodeSnapshotIdInvalid is a ErrorCode enum value + ErrorCodeSnapshotIdInvalid = "SnapshotIdInvalid" + + // ErrorCodeSnapshotInProgress is a ErrorCode enum value + ErrorCodeSnapshotInProgress = "SnapshotInProgress" + + // ErrorCodeSnapshotNotFound is a ErrorCode enum value + ErrorCodeSnapshotNotFound = "SnapshotNotFound" + + // ErrorCodeSnapshotScheduleNotFound is a ErrorCode enum value + ErrorCodeSnapshotScheduleNotFound = "SnapshotScheduleNotFound" + + // ErrorCodeStagingAreaFull is a ErrorCode enum value + ErrorCodeStagingAreaFull = "StagingAreaFull" + + // ErrorCodeStorageFailure is a ErrorCode enum value + ErrorCodeStorageFailure = "StorageFailure" + + // ErrorCodeTapeCartridgeNotFound is a ErrorCode enum value + ErrorCodeTapeCartridgeNotFound = "TapeCartridgeNotFound" + + // ErrorCodeTargetAlreadyExists is a ErrorCode enum value + ErrorCodeTargetAlreadyExists = "TargetAlreadyExists" + + // ErrorCodeTargetInvalid is a ErrorCode enum value + ErrorCodeTargetInvalid = "TargetInvalid" + + // ErrorCodeTargetNotFound is a ErrorCode enum value + ErrorCodeTargetNotFound = "TargetNotFound" + + // ErrorCodeUnauthorizedOperation is a ErrorCode enum value + ErrorCodeUnauthorizedOperation = "UnauthorizedOperation" + + // ErrorCodeVolumeAlreadyExists is a ErrorCode enum value + ErrorCodeVolumeAlreadyExists = "VolumeAlreadyExists" + + // ErrorCodeVolumeIdInvalid is a ErrorCode enum value + ErrorCodeVolumeIdInvalid = "VolumeIdInvalid" + + // ErrorCodeVolumeInUse is a ErrorCode enum value + ErrorCodeVolumeInUse = "VolumeInUse" + + // ErrorCodeVolumeNotFound is a ErrorCode enum value + ErrorCodeVolumeNotFound = "VolumeNotFound" + + // 
ErrorCodeVolumeNotReady is a ErrorCode enum value
+	ErrorCodeVolumeNotReady = "VolumeNotReady"
+)
+
+// The type of the file share.
+const (
+	// FileShareTypeNfs is a FileShareType enum value
+	FileShareTypeNfs = "NFS"
+
+	// FileShareTypeSmb is a FileShareType enum value
+	FileShareTypeSmb = "SMB"
+)
+
+// A value that sets the access control list permission for objects in the S3
+// bucket that a file gateway puts objects into. The default value is "private".
+const (
+	// ObjectACLPrivate is a ObjectACL enum value
+	ObjectACLPrivate = "private"
+
+	// ObjectACLPublicRead is a ObjectACL enum value
+	ObjectACLPublicRead = "public-read"
+
+	// ObjectACLPublicReadWrite is a ObjectACL enum value
+	ObjectACLPublicReadWrite = "public-read-write"
+
+	// ObjectACLAuthenticatedRead is a ObjectACL enum value
+	ObjectACLAuthenticatedRead = "authenticated-read"
+
+	// ObjectACLBucketOwnerRead is a ObjectACL enum value
+	ObjectACLBucketOwnerRead = "bucket-owner-read"
+
+	// ObjectACLBucketOwnerFullControl is a ObjectACL enum value
+	ObjectACLBucketOwnerFullControl = "bucket-owner-full-control"
+
+	// ObjectACLAwsExecRead is a ObjectACL enum value
+	ObjectACLAwsExecRead = "aws-exec-read"
+)
diff --git a/vendor/github.com/aws/aws-sdk-go/service/storagegateway/doc.go b/vendor/github.com/aws/aws-sdk-go/service/storagegateway/doc.go
new file mode 100644
index 00000000000..b53d442235a
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/storagegateway/doc.go
@@ -0,0 +1,79 @@
+// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+// Package storagegateway provides the client and types for making API
+// requests to AWS Storage Gateway.
+//
+// AWS Storage Gateway is the service that connects an on-premises software
+// appliance with cloud-based storage to provide seamless and secure integration
+// between an organization's on-premises IT environment and AWS's storage infrastructure.
+// The service enables you to securely upload data to the AWS cloud for cost
+// effective backup and rapid disaster recovery.
+//
+// Use the following links to get started using the AWS Storage Gateway Service
+// API Reference:
+//
+// * AWS Storage Gateway Required Request Headers (http://docs.aws.amazon.com/storagegateway/latest/userguide/AWSStorageGatewayAPI.html#AWSStorageGatewayHTTPRequestsHeaders):
+// Describes the required headers that you must send with every POST request
+// to AWS Storage Gateway.
+//
+// * Signing Requests (http://docs.aws.amazon.com/storagegateway/latest/userguide/AWSStorageGatewayAPI.html#AWSStorageGatewaySigningRequests):
+// AWS Storage Gateway requires that you authenticate every request you send;
+// this topic describes how to sign such a request.
+//
+// * Error Responses (http://docs.aws.amazon.com/storagegateway/latest/userguide/AWSStorageGatewayAPI.html#APIErrorResponses):
+// Provides reference information about AWS Storage Gateway errors.
+//
+// * Operations in AWS Storage Gateway (http://docs.aws.amazon.com/storagegateway/latest/APIReference/API_Operations.html):
+// Contains detailed descriptions of all AWS Storage Gateway operations,
+// their request parameters, response elements, possible errors, and examples
+// of requests and responses.
+//
+// * AWS Storage Gateway Regions and Endpoints: (http://docs.aws.amazon.com/general/latest/gr/rande.html#sg_region)
+// Provides a list of each AWS region and endpoints available for use with
+// AWS Storage Gateway.
+//
+// AWS Storage Gateway resource IDs are in uppercase. When you use these resource
+// IDs with the Amazon EC2 API, EC2 expects resource IDs in lowercase. You must
+// change your resource ID to lowercase to use it with the EC2 API. For example,
+// in Storage Gateway the ID for a volume might be vol-AA22BB012345DAF670. When
+// you use this ID with the EC2 API, you must change it to vol-aa22bb012345daf670.
+// Otherwise, the EC2 API might not behave as expected.
+//
+// IDs for Storage Gateway volumes and Amazon EBS snapshots created from gateway
+// volumes are changing to a longer format. Starting in December 2016, all new
+// volumes and snapshots will be created with a 17-character string. Starting
+// in April 2016, you will be able to use these longer IDs so you can test your
+// systems with the new format. For more information, see Longer EC2 and EBS
+// Resource IDs (https://aws.amazon.com/ec2/faqs/#longer-ids).
+//
+// For example, a volume Amazon Resource Name (ARN) with the longer volume
+// ID format looks like the following:
+//
+// arn:aws:storagegateway:us-west-2:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABBCCDDEEFFG.
+//
+// A snapshot ID with the longer ID format looks like the following: snap-78e226633445566ee.
+//
+// For more information, see Announcement: Heads-up – Longer AWS Storage Gateway
+// volume and snapshot IDs coming in 2016 (https://forums.aws.amazon.com/ann.jspa?annID=3557).
+//
+// See https://docs.aws.amazon.com/goto/WebAPI/storagegateway-2013-06-30 for more information on this service.
+//
+// See storagegateway package documentation for more information.
+// https://docs.aws.amazon.com/sdk-for-go/api/service/storagegateway/
+//
+// Using the Client
+//
+// To contact AWS Storage Gateway with the SDK use the New function to create
+// a new service client. With that client you can make API requests to the service.
+// These clients are safe to use concurrently.
+//
+// See the SDK's documentation for more information on how to use the SDK.
+// https://docs.aws.amazon.com/sdk-for-go/api/
+//
+// See aws.Config documentation for more information on configuring SDK clients.
+// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config
+//
+// See the AWS Storage Gateway client StorageGateway for more
+// information on creating client for this service.
+// https://docs.aws.amazon.com/sdk-for-go/api/service/storagegateway/#New
+package storagegateway
diff --git a/vendor/github.com/aws/aws-sdk-go/service/storagegateway/errors.go b/vendor/github.com/aws/aws-sdk-go/service/storagegateway/errors.go
new file mode 100644
index 00000000000..01b9816e3aa
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/storagegateway/errors.go
@@ -0,0 +1,27 @@
+// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+package storagegateway
+
+const (
+
+	// ErrCodeInternalServerError for service response error code
+	// "InternalServerError".
+	//
+	// An internal server error has occurred during the request. For more information,
+	// see the error and message fields.
+	ErrCodeInternalServerError = "InternalServerError"
+
+	// ErrCodeInvalidGatewayRequestException for service response error code
+	// "InvalidGatewayRequestException".
+	//
+	// An exception occurred because an invalid gateway request was issued to the
+	// service. For more information, see the error and message fields.
+	ErrCodeInvalidGatewayRequestException = "InvalidGatewayRequestException"
+
+	// ErrCodeServiceUnavailableError for service response error code
+	// "ServiceUnavailableError".
+	//
+	// An internal server error has occurred because the service is unavailable.
+	// For more information, see the error and message fields.
+	ErrCodeServiceUnavailableError = "ServiceUnavailableError"
+)
diff --git a/vendor/github.com/aws/aws-sdk-go/service/storagegateway/service.go b/vendor/github.com/aws/aws-sdk-go/service/storagegateway/service.go
new file mode 100644
index 00000000000..9a0c08f6962
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/service/storagegateway/service.go
@@ -0,0 +1,97 @@
+// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+package storagegateway
+
+import (
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/client"
+	"github.com/aws/aws-sdk-go/aws/client/metadata"
+	"github.com/aws/aws-sdk-go/aws/request"
+	"github.com/aws/aws-sdk-go/aws/signer/v4"
+	"github.com/aws/aws-sdk-go/private/protocol/jsonrpc"
+)
+
+// StorageGateway provides the API operation methods for making requests to
+// AWS Storage Gateway. See this package's package overview docs
+// for details on the service.
+//
+// StorageGateway methods are safe to use concurrently. It is not safe to
+// modify any of the struct's properties though.
+type StorageGateway struct {
+	*client.Client
+}
+
+// Used for custom client initialization logic
+var initClient func(*client.Client)
+
+// Used for custom request initialization logic
+var initRequest func(*request.Request)
+
+// Service information constants
+const (
+	ServiceName = "storagegateway"  // Name of service.
+	EndpointsID = ServiceName       // ID to lookup a service endpoint with.
+	ServiceID   = "Storage Gateway" // ServiceID is a unique identifier of a specific service.
+)
+
+// New creates a new instance of the StorageGateway client with a session.
+// If additional configuration is needed for the client instance use the optional
+// aws.Config parameter to add your extra config.
+//
+// Example:
+// // Create a StorageGateway client from just a session.
+// svc := storagegateway.New(mySession)
+//
+// // Create a StorageGateway client with additional configuration
+// svc := storagegateway.New(mySession, aws.NewConfig().WithRegion("us-west-2"))
+func New(p client.ConfigProvider, cfgs ...*aws.Config) *StorageGateway {
+	c := p.ClientConfig(EndpointsID, cfgs...)
+	return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName)
+}
+
+// newClient creates, initializes and returns a new service client instance.
+func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *StorageGateway { + svc := &StorageGateway{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2013-06-30", + JSONVersion: "1.1", + TargetPrefix: "StorageGateway_20130630", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a StorageGateway operation and runs any +// custom request initialization. +func (c *StorageGateway) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/api.go b/vendor/github.com/aws/aws-sdk-go/service/sts/api.go index 2b17d06670b..ee908f9167b 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sts/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/api.go @@ -14,8 +14,8 @@ const opAssumeRole = "AssumeRole" // AssumeRoleRequest generates a "aws/request.Request" representing the // client's request for the AssumeRole operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -88,9 +88,18 @@ func (c *STS) AssumeRoleRequest(input *AssumeRoleInput) (req *request.Request, o // Scenarios for Temporary Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html#sts-introduction) // in the IAM User Guide. // -// The temporary security credentials are valid for the duration that you specified -// when calling AssumeRole, which can be from 900 seconds (15 minutes) to a -// maximum of 3600 seconds (1 hour). The default is 1 hour. +// By default, the temporary security credentials created by AssumeRole last +// for one hour. However, you can use the optional DurationSeconds parameter +// to specify the duration of your session. You can provide a value from 900 +// seconds (15 minutes) up to the maximum session duration setting for the role. +// This setting can have a value from 1 hour to 12 hours. To learn how to view +// the maximum value for your role, see View the Maximum Session Duration Setting +// for a Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) +// in the IAM User Guide. The maximum session duration limit applies when you +// use the AssumeRole* API operations or the assume-role* CLI operations but +// does not apply when you use those operations to create a console URL. 
For +// more information, see Using IAM Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) +// in the IAM User Guide. // // The temporary security credentials created by AssumeRole can be used to make // API calls to any AWS service with the following exception: you cannot call @@ -199,8 +208,8 @@ const opAssumeRoleWithSAML = "AssumeRoleWithSAML" // AssumeRoleWithSAMLRequest generates a "aws/request.Request" representing the // client's request for the AssumeRoleWithSAML operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -252,11 +261,20 @@ func (c *STS) AssumeRoleWithSAMLRequest(input *AssumeRoleWithSAMLInput) (req *re // an access key ID, a secret access key, and a security token. Applications // can use these temporary security credentials to sign calls to AWS services. // -// The temporary security credentials are valid for the duration that you specified -// when calling AssumeRole, or until the time specified in the SAML authentication -// response's SessionNotOnOrAfter value, whichever is shorter. The duration -// can be from 900 seconds (15 minutes) to a maximum of 3600 seconds (1 hour). -// The default is 1 hour. +// By default, the temporary security credentials created by AssumeRoleWithSAML +// last for one hour. However, you can use the optional DurationSeconds parameter +// to specify the duration of your session. Your role session lasts for the +// duration that you specify, or until the time specified in the SAML authentication +// response's SessionNotOnOrAfter value, whichever is shorter. You can provide +// a DurationSeconds value from 900 seconds (15 minutes) up to the maximum session +// duration setting for the role. This setting can have a value from 1 hour +// to 12 hours. To learn how to view the maximum value for your role, see View +// the Maximum Session Duration Setting for a Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) +// in the IAM User Guide. The maximum session duration limit applies when you +// use the AssumeRole* API operations or the assume-role* CLI operations but +// does not apply when you use those operations to create a console URL. For +// more information, see Using IAM Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) +// in the IAM User Guide. // // The temporary security credentials created by AssumeRoleWithSAML can be used // to make API calls to any AWS service with the following exception: you cannot @@ -372,8 +390,8 @@ const opAssumeRoleWithWebIdentity = "AssumeRoleWithWebIdentity" // AssumeRoleWithWebIdentityRequest generates a "aws/request.Request" representing the // client's request for the AssumeRoleWithWebIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -443,9 +461,18 @@ func (c *STS) AssumeRoleWithWebIdentityRequest(input *AssumeRoleWithWebIdentityI // key ID, a secret access key, and a security token. Applications can use these // temporary security credentials to sign calls to AWS service APIs. // -// The credentials are valid for the duration that you specified when calling -// AssumeRoleWithWebIdentity, which can be from 900 seconds (15 minutes) to -// a maximum of 3600 seconds (1 hour). The default is 1 hour. +// By default, the temporary security credentials created by AssumeRoleWithWebIdentity +// last for one hour. However, you can use the optional DurationSeconds parameter +// to specify the duration of your session. You can provide a value from 900 +// seconds (15 minutes) up to the maximum session duration setting for the role. +// This setting can have a value from 1 hour to 12 hours. To learn how to view +// the maximum value for your role, see View the Maximum Session Duration Setting +// for a Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) +// in the IAM User Guide. The maximum session duration limit applies when you +// use the AssumeRole* API operations or the assume-role* CLI operations but +// does not apply when you use those operations to create a console URL. For +// more information, see Using IAM Roles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) +// in the IAM User Guide. // // The temporary security credentials created by AssumeRoleWithWebIdentity can // be used to make API calls to any AWS service with the following exception: @@ -574,8 +601,8 @@ const opDecodeAuthorizationMessage = "DecodeAuthorizationMessage" // DecodeAuthorizationMessageRequest generates a "aws/request.Request" representing the // client's request for the DecodeAuthorizationMessage operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -686,8 +713,8 @@ const opGetCallerIdentity = "GetCallerIdentity" // GetCallerIdentityRequest generates a "aws/request.Request" representing the // client's request for the GetCallerIdentity operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -761,8 +788,8 @@ const opGetFederationToken = "GetFederationToken" // GetFederationTokenRequest generates a "aws/request.Request" representing the // client's request for the GetFederationToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -930,8 +957,8 @@ const opGetSessionToken = "GetSessionToken" // GetSessionTokenRequest generates a "aws/request.Request" representing the // client's request for the GetSessionToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1058,15 +1085,23 @@ type AssumeRoleInput struct { _ struct{} `type:"structure"` // The duration, in seconds, of the role session. The value can range from 900 - // seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set - // to 3600 seconds. + // seconds (15 minutes) up to the maximum session duration setting for the role. + // This setting can have a value from 1 hour to 12 hours. If you specify a value + // higher than this setting, the operation fails. For example, if you specify + // a session duration of 12 hours, but your administrator set the maximum session + // duration to 6 hours, your operation fails. To learn how to view the maximum + // value for your role, see View the Maximum Session Duration Setting for a + // Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) + // in the IAM User Guide. + // + // By default, the value is set to 3600 seconds. // - // This is separate from the duration of a console session that you might request - // using the returned credentials. The request to the federation endpoint for - // a console sign-in token takes a SessionDuration parameter that specifies - // the maximum length of the console session, separately from the DurationSeconds - // parameter on this API. For more information, see Creating a URL that Enables - // Federated Users to Access the AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html) + // The DurationSeconds parameter is separate from the duration of a console + // session that you might request using the returned credentials. The request + // to the federation endpoint for a console sign-in token takes a SessionDuration + // parameter that specifies the maximum length of the console session. For more + // information, see Creating a URL that Enables Federated Users to Access the + // AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html) // in the IAM User Guide. DurationSeconds *int64 `min:"900" type:"integer"` @@ -1301,18 +1336,27 @@ func (s *AssumeRoleOutput) SetPackedPolicySize(v int64) *AssumeRoleOutput { type AssumeRoleWithSAMLInput struct { _ struct{} `type:"structure"` - // The duration, in seconds, of the role session. The value can range from 900 - // seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set - // to 3600 seconds. An expiration can also be specified in the SAML authentication - // response's SessionNotOnOrAfter value. The actual expiration time is whichever - // value is shorter. + // The duration, in seconds, of the role session. Your role session lasts for + // the duration that you specify for the DurationSeconds parameter, or until + // the time specified in the SAML authentication response's SessionNotOnOrAfter + // value, whichever is shorter. 
You can provide a DurationSeconds value from + // 900 seconds (15 minutes) up to the maximum session duration setting for the + // role. This setting can have a value from 1 hour to 12 hours. If you specify + // a value higher than this setting, the operation fails. For example, if you + // specify a session duration of 12 hours, but your administrator set the maximum + // session duration to 6 hours, your operation fails. To learn how to view the + // maximum value for your role, see View the Maximum Session Duration Setting + // for a Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) + // in the IAM User Guide. + // + // By default, the value is set to 3600 seconds. // - // This is separate from the duration of a console session that you might request - // using the returned credentials. The request to the federation endpoint for - // a console sign-in token takes a SessionDuration parameter that specifies - // the maximum length of the console session, separately from the DurationSeconds - // parameter on this API. For more information, see Enabling SAML 2.0 Federated - // Users to Access the AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html) + // The DurationSeconds parameter is separate from the duration of a console + // session that you might request using the returned credentials. The request + // to the federation endpoint for a console sign-in token takes a SessionDuration + // parameter that specifies the maximum length of the console session. For more + // information, see Creating a URL that Enables Federated Users to Access the + // AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html) // in the IAM User Guide. DurationSeconds *int64 `min:"900" type:"integer"` @@ -1553,15 +1597,23 @@ type AssumeRoleWithWebIdentityInput struct { _ struct{} `type:"structure"` // The duration, in seconds, of the role session. The value can range from 900 - // seconds (15 minutes) to 3600 seconds (1 hour). By default, the value is set - // to 3600 seconds. + // seconds (15 minutes) up to the maximum session duration setting for the role. + // This setting can have a value from 1 hour to 12 hours. If you specify a value + // higher than this setting, the operation fails. For example, if you specify + // a session duration of 12 hours, but your administrator set the maximum session + // duration to 6 hours, your operation fails. To learn how to view the maximum + // value for your role, see View the Maximum Session Duration Setting for a + // Role (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session) + // in the IAM User Guide. + // + // By default, the value is set to 3600 seconds. // - // This is separate from the duration of a console session that you might request - // using the returned credentials. The request to the federation endpoint for - // a console sign-in token takes a SessionDuration parameter that specifies - // the maximum length of the console session, separately from the DurationSeconds - // parameter on this API. 
For more information, see Creating a URL that Enables - // Federated Users to Access the AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html) + // The DurationSeconds parameter is separate from the duration of a console + // session that you might request using the returned credentials. The request + // to the federation endpoint for a console sign-in token takes a SessionDuration + // parameter that specifies the maximum length of the console session. For more + // information, see Creating a URL that Enables Federated Users to Access the + // AWS Management Console (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html) // in the IAM User Guide. DurationSeconds *int64 `min:"900" type:"integer"` @@ -1856,7 +1908,7 @@ type Credentials struct { // The date on which the current credentials expire. // // Expiration is a required field - Expiration *time.Time `type:"timestamp" timestampFormat:"iso8601" required:"true"` + Expiration *time.Time `type:"timestamp" required:"true"` // The secret access key that can be used to sign requests. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/service.go b/vendor/github.com/aws/aws-sdk-go/service/sts/service.go index 1ee5839e046..185c914d1b3 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sts/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "sts" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "sts" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "STS" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the STS client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/swf/api.go b/vendor/github.com/aws/aws-sdk-go/service/swf/api.go index a69f7b2887c..aba3cc7303e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/swf/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/swf/api.go @@ -17,8 +17,8 @@ const opCountClosedWorkflowExecutions = "CountClosedWorkflowExecutions" // CountClosedWorkflowExecutionsRequest generates a "aws/request.Request" representing the // client's request for the CountClosedWorkflowExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -129,8 +129,8 @@ const opCountOpenWorkflowExecutions = "CountOpenWorkflowExecutions" // CountOpenWorkflowExecutionsRequest generates a "aws/request.Request" representing the // client's request for the CountOpenWorkflowExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -241,8 +241,8 @@ const opCountPendingActivityTasks = "CountPendingActivityTasks" // CountPendingActivityTasksRequest generates a "aws/request.Request" representing the // client's request for the CountPendingActivityTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -347,8 +347,8 @@ const opCountPendingDecisionTasks = "CountPendingDecisionTasks" // CountPendingDecisionTasksRequest generates a "aws/request.Request" representing the // client's request for the CountPendingDecisionTasks operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -453,8 +453,8 @@ const opDeprecateActivityType = "DeprecateActivityType" // DeprecateActivityTypeRequest generates a "aws/request.Request" representing the // client's request for the DeprecateActivityType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -569,8 +569,8 @@ const opDeprecateDomain = "DeprecateDomain" // DeprecateDomainRequest generates a "aws/request.Request" representing the // client's request for the DeprecateDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -683,8 +683,8 @@ const opDeprecateWorkflowType = "DeprecateWorkflowType" // DeprecateWorkflowTypeRequest generates a "aws/request.Request" representing the // client's request for the DeprecateWorkflowType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -800,8 +800,8 @@ const opDescribeActivityType = "DescribeActivityType" // DescribeActivityTypeRequest generates a "aws/request.Request" representing the // client's request for the DescribeActivityType operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -908,8 +908,8 @@ const opDescribeDomain = "DescribeDomain" // DescribeDomainRequest generates a "aws/request.Request" representing the // client's request for the DescribeDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1010,8 +1010,8 @@ const opDescribeWorkflowExecution = "DescribeWorkflowExecution" // DescribeWorkflowExecutionRequest generates a "aws/request.Request" representing the // client's request for the DescribeWorkflowExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1115,8 +1115,8 @@ const opDescribeWorkflowType = "DescribeWorkflowType" // DescribeWorkflowTypeRequest generates a "aws/request.Request" representing the // client's request for the DescribeWorkflowType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1223,8 +1223,8 @@ const opGetWorkflowExecutionHistory = "GetWorkflowExecutionHistory" // GetWorkflowExecutionHistoryRequest generates a "aws/request.Request" representing the // client's request for the GetWorkflowExecutionHistory operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1385,8 +1385,8 @@ const opListActivityTypes = "ListActivityTypes" // ListActivityTypesRequest generates a "aws/request.Request" representing the // client's request for the ListActivityTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -1546,8 +1546,8 @@ const opListClosedWorkflowExecutions = "ListClosedWorkflowExecutions" // ListClosedWorkflowExecutionsRequest generates a "aws/request.Request" representing the // client's request for the ListClosedWorkflowExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1716,8 +1716,8 @@ const opListDomains = "ListDomains" // ListDomainsRequest generates a "aws/request.Request" representing the // client's request for the ListDomains operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1874,8 +1874,8 @@ const opListOpenWorkflowExecutions = "ListOpenWorkflowExecutions" // ListOpenWorkflowExecutionsRequest generates a "aws/request.Request" representing the // client's request for the ListOpenWorkflowExecutions operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2044,8 +2044,8 @@ const opListWorkflowTypes = "ListWorkflowTypes" // ListWorkflowTypesRequest generates a "aws/request.Request" representing the // client's request for the ListWorkflowTypes operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2203,8 +2203,8 @@ const opPollForActivityTask = "PollForActivityTask" // PollForActivityTaskRequest generates a "aws/request.Request" representing the // client's request for the PollForActivityTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2321,8 +2321,8 @@ const opPollForDecisionTask = "PollForDecisionTask" // PollForDecisionTaskRequest generates a "aws/request.Request" representing the // client's request for the PollForDecisionTask operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2505,8 +2505,8 @@ const opRecordActivityTaskHeartbeat = "RecordActivityTaskHeartbeat" // RecordActivityTaskHeartbeatRequest generates a "aws/request.Request" representing the // client's request for the RecordActivityTaskHeartbeat operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2631,8 +2631,8 @@ const opRegisterActivityType = "RegisterActivityType" // RegisterActivityTypeRequest generates a "aws/request.Request" representing the // client's request for the RegisterActivityType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2757,8 +2757,8 @@ const opRegisterDomain = "RegisterDomain" // RegisterDomainRequest generates a "aws/request.Request" representing the // client's request for the RegisterDomain operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2865,8 +2865,8 @@ const opRegisterWorkflowType = "RegisterWorkflowType" // RegisterWorkflowTypeRequest generates a "aws/request.Request" representing the // client's request for the RegisterWorkflowType operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2994,8 +2994,8 @@ const opRequestCancelWorkflowExecution = "RequestCancelWorkflowExecution" // RequestCancelWorkflowExecutionRequest generates a "aws/request.Request" representing the // client's request for the RequestCancelWorkflowExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3108,8 +3108,8 @@ const opRespondActivityTaskCanceled = "RespondActivityTaskCanceled" // RespondActivityTaskCanceledRequest generates a "aws/request.Request" representing the // client's request for the RespondActivityTaskCanceled operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3225,8 +3225,8 @@ const opRespondActivityTaskCompleted = "RespondActivityTaskCompleted" // RespondActivityTaskCompletedRequest generates a "aws/request.Request" representing the // client's request for the RespondActivityTaskCompleted operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3341,8 +3341,8 @@ const opRespondActivityTaskFailed = "RespondActivityTaskFailed" // RespondActivityTaskFailedRequest generates a "aws/request.Request" representing the // client's request for the RespondActivityTaskFailed operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3452,8 +3452,8 @@ const opRespondDecisionTaskCompleted = "RespondDecisionTaskCompleted" // RespondDecisionTaskCompletedRequest generates a "aws/request.Request" representing the // client's request for the RespondDecisionTaskCompleted operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3552,8 +3552,8 @@ const opSignalWorkflowExecution = "SignalWorkflowExecution" // SignalWorkflowExecutionRequest generates a "aws/request.Request" representing the // client's request for the SignalWorkflowExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3664,8 +3664,8 @@ const opStartWorkflowExecution = "StartWorkflowExecution" // StartWorkflowExecutionRequest generates a "aws/request.Request" representing the // client's request for the StartWorkflowExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -3810,8 +3810,8 @@ const opTerminateWorkflowExecution = "TerminateWorkflowExecution" // TerminateWorkflowExecutionRequest generates a "aws/request.Request" representing the // client's request for the TerminateWorkflowExecution operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4555,10 +4555,10 @@ type ActivityTypeInfo struct { // The date and time this activity type was created through RegisterActivityType. // // CreationDate is a required field - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp" required:"true"` // If DEPRECATED, the date and time DeprecateActivityType was called. - DeprecationDate *time.Time `locationName:"deprecationDate" type:"timestamp" timestampFormat:"unix"` + DeprecationDate *time.Time `locationName:"deprecationDate" type:"timestamp"` // The description of the activity type provided in RegisterActivityType. Description *string `locationName:"description" type:"string"` @@ -7034,7 +7034,7 @@ type DescribeWorkflowExecutionOutput struct { // The time when the last activity task was scheduled for this workflow execution. // You can use this information to determine if the workflow has not made progress // for an unusually long period of time and might require a corrective action. - LatestActivityTaskTimestamp *time.Time `locationName:"latestActivityTaskTimestamp" type:"timestamp" timestampFormat:"unix"` + LatestActivityTaskTimestamp *time.Time `locationName:"latestActivityTaskTimestamp" type:"timestamp"` // The latest executionContext provided by the decider for this workflow execution. // A decider can provide an executionContext (a free-form string) when closing @@ -7283,12 +7283,12 @@ type ExecutionTimeFilter struct { _ struct{} `type:"structure"` // Specifies the latest start or close date and time to return. - LatestDate *time.Time `locationName:"latestDate" type:"timestamp" timestampFormat:"unix"` + LatestDate *time.Time `locationName:"latestDate" type:"timestamp"` // Specifies the oldest start or close date and time to return. // // OldestDate is a required field - OldestDate *time.Time `locationName:"oldestDate" type:"timestamp" timestampFormat:"unix" required:"true"` + OldestDate *time.Time `locationName:"oldestDate" type:"timestamp" required:"true"` } // String returns the string representation @@ -7897,7 +7897,7 @@ type HistoryEvent struct { // The date and time when the event occurred. // // EventTimestamp is a required field - EventTimestamp *time.Time `locationName:"eventTimestamp" type:"timestamp" timestampFormat:"unix" required:"true"` + EventTimestamp *time.Time `locationName:"eventTimestamp" type:"timestamp" required:"true"` // The type of the history event. // @@ -14108,7 +14108,7 @@ type WorkflowExecutionInfo struct { // The time when the workflow execution was closed. Set only if the execution // status is CLOSED. - CloseTimestamp *time.Time `locationName:"closeTimestamp" type:"timestamp" timestampFormat:"unix"` + CloseTimestamp *time.Time `locationName:"closeTimestamp" type:"timestamp"` // The workflow execution this information is about. 
// @@ -14127,7 +14127,7 @@ type WorkflowExecutionInfo struct { // The time when the execution was started. // // StartTimestamp is a required field - StartTimestamp *time.Time `locationName:"startTimestamp" type:"timestamp" timestampFormat:"unix" required:"true"` + StartTimestamp *time.Time `locationName:"startTimestamp" type:"timestamp" required:"true"` // The list of tags associated with the workflow execution. Tags can be used // to identify and list workflow executions of interest through the visibility @@ -14889,11 +14889,11 @@ type WorkflowTypeInfo struct { // The date when this type was registered. // // CreationDate is a required field - CreationDate *time.Time `locationName:"creationDate" type:"timestamp" timestampFormat:"unix" required:"true"` + CreationDate *time.Time `locationName:"creationDate" type:"timestamp" required:"true"` // If the type is in deprecated state, then it is set to the date when the type // was deprecated. - DeprecationDate *time.Time `locationName:"deprecationDate" type:"timestamp" timestampFormat:"unix"` + DeprecationDate *time.Time `locationName:"deprecationDate" type:"timestamp"` // The description of the type registered through RegisterWorkflowType. Description *string `locationName:"description" type:"string"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/swf/service.go b/vendor/github.com/aws/aws-sdk-go/service/swf/service.go index ac85f412c7a..014d89a5241 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/swf/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/swf/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "swf" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "swf" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "SWF" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the SWF client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/waf/api.go b/vendor/github.com/aws/aws-sdk-go/service/waf/api.go index f54737269ef..56c16743cba 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/waf/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/waf/api.go @@ -15,8 +15,8 @@ const opCreateByteMatchSet = "CreateByteMatchSet" // CreateByteMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateByteMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -86,18 +86,18 @@ func (c *WAF) CreateByteMatchSetRequest(input *CreateByteMatchSetInput) (req *re // API operation CreateByteMatchSet for usage and error information. 
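The WAF hunks above change the documented wire codes to the `WAF`-prefixed form (for example `WAFDisallowedNameException`). Presumably the exported `waf.ErrCode*` constants track the same strings, so comparing against the constants rather than hard-coded literals keeps calling code unaffected by the rename. A hedged sketch of the `awserr` type assertion the generated comments describe, assuming `err` came from a WAF `Create*` or `Delete*` call:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/waf"
)

// classifyWAFError shows the runtime type assertion on awserr.Error and
// dispatches on a few of the error codes listed in the hunks above.
func classifyWAFError(err error) {
	aerr, ok := err.(awserr.Error)
	if !ok {
		log.Printf("non-AWS error: %v", err)
		return
	}
	// Compare against the exported constants, not literals such as
	// "DisallowedNameException", since the documented codes now carry
	// the WAF prefix.
	switch aerr.Code() {
	case waf.ErrCodeDisallowedNameException:
		log.Printf("invalid name: %s", aerr.Message())
	case waf.ErrCodeStaleDataException:
		log.Printf("change token already used, fetch a new one: %s", aerr.Message())
	case waf.ErrCodeLimitsExceededException:
		log.Printf("account limit reached: %s", aerr.Message())
	default:
		log.Printf("unhandled WAF error %s: %s", aerr.Code(), aerr.Message())
	}
}
```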
// // Returned Error Codes: -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -126,11 +126,11 @@ func (c *WAF) CreateByteMatchSetRequest(input *CreateByteMatchSetInput) (req *re // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -162,8 +162,8 @@ const opCreateGeoMatchSet = "CreateGeoMatchSet" // CreateGeoMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateGeoMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -232,22 +232,22 @@ func (c *WAF) CreateGeoMatchSetRequest(input *CreateGeoMatchSetInput) (req *requ // API operation CreateGeoMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. 
// -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -276,7 +276,7 @@ func (c *WAF) CreateGeoMatchSetRequest(input *CreateGeoMatchSetInput) (req *requ // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -308,8 +308,8 @@ const opCreateIPSet = "CreateIPSet" // CreateIPSetRequest generates a "aws/request.Request" representing the // client's request for the CreateIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -379,22 +379,22 @@ func (c *WAF) CreateIPSetRequest(input *CreateIPSetInput) (req *request.Request, // API operation CreateIPSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -423,7 +423,7 @@ func (c *WAF) CreateIPSetRequest(input *CreateIPSetInput) (req *request.Request, // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. 
For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -455,8 +455,8 @@ const opCreateRateBasedRule = "CreateRateBasedRule" // CreateRateBasedRuleRequest generates a "aws/request.Request" representing the // client's request for the CreateRateBasedRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -565,18 +565,18 @@ func (c *WAF) CreateRateBasedRuleRequest(input *CreateRateBasedRuleInput) (req * // API operation CreateRateBasedRule for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -605,7 +605,7 @@ func (c *WAF) CreateRateBasedRuleRequest(input *CreateRateBasedRuleInput) (req * // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -637,8 +637,8 @@ const opCreateRegexMatchSet = "CreateRegexMatchSet" // CreateRegexMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateRegexMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -709,18 +709,18 @@ func (c *WAF) CreateRegexMatchSetRequest(input *CreateRegexMatchSetInput) (req * // API operation CreateRegexMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. 
// -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -752,8 +752,8 @@ const opCreateRegexPatternSet = "CreateRegexPatternSet" // CreateRegexPatternSetRequest generates a "aws/request.Request" representing the // client's request for the CreateRegexPatternSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -820,18 +820,18 @@ func (c *WAF) CreateRegexPatternSetRequest(input *CreateRegexPatternSetInput) (r // API operation CreateRegexPatternSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -863,8 +863,8 @@ const opCreateRule = "CreateRule" // CreateRuleRequest generates a "aws/request.Request" representing the // client's request for the CreateRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -948,18 +948,18 @@ func (c *WAF) CreateRuleRequest(input *CreateRuleInput) (req *request.Request, o // API operation CreateRule for usage and error information. 
// // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -988,7 +988,7 @@ func (c *WAF) CreateRuleRequest(input *CreateRuleInput) (req *request.Request, o // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -1020,8 +1020,8 @@ const opCreateRuleGroup = "CreateRuleGroup" // CreateRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1083,18 +1083,18 @@ func (c *WAF) CreateRuleGroupRequest(input *CreateRuleGroupInput) (req *request. // API operation CreateRuleGroup for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. 
For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -1126,8 +1126,8 @@ const opCreateSizeConstraintSet = "CreateSizeConstraintSet" // CreateSizeConstraintSetRequest generates a "aws/request.Request" representing the // client's request for the CreateSizeConstraintSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1198,22 +1198,22 @@ func (c *WAF) CreateSizeConstraintSetRequest(input *CreateSizeConstraintSetInput // API operation CreateSizeConstraintSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -1242,7 +1242,7 @@ func (c *WAF) CreateSizeConstraintSetRequest(input *CreateSizeConstraintSetInput // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -1274,8 +1274,8 @@ const opCreateSqlInjectionMatchSet = "CreateSqlInjectionMatchSet" // CreateSqlInjectionMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateSqlInjectionMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1342,18 +1342,18 @@ func (c *WAF) CreateSqlInjectionMatchSetRequest(input *CreateSqlInjectionMatchSe // API operation CreateSqlInjectionMatchSet for usage and error information. 
// // Returned Error Codes: -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -1382,11 +1382,11 @@ func (c *WAF) CreateSqlInjectionMatchSetRequest(input *CreateSqlInjectionMatchSe // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -1418,8 +1418,8 @@ const opCreateWebACL = "CreateWebACL" // CreateWebACLRequest generates a "aws/request.Request" representing the // client's request for the CreateWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1498,22 +1498,22 @@ func (c *WAF) CreateWebACLRequest(input *CreateWebACLInput) (req *request.Reques // API operation CreateWebACL for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. 
// -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -1542,7 +1542,7 @@ func (c *WAF) CreateWebACLRequest(input *CreateWebACLInput) (req *request.Reques // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -1574,8 +1574,8 @@ const opCreateXssMatchSet = "CreateXssMatchSet" // CreateXssMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateXssMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1643,18 +1643,18 @@ func (c *WAF) CreateXssMatchSetRequest(input *CreateXssMatchSetInput) (req *requ // API operation CreateXssMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -1683,11 +1683,11 @@ func (c *WAF) CreateXssMatchSetRequest(input *CreateXssMatchSetInput) (req *requ // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. 
For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -1719,8 +1719,8 @@ const opDeleteByteMatchSet = "DeleteByteMatchSet" // DeleteByteMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteByteMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1783,18 +1783,18 @@ func (c *WAF) DeleteByteMatchSetRequest(input *DeleteByteMatchSetInput) (req *re // API operation DeleteByteMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -1802,11 +1802,11 @@ func (c *WAF) DeleteByteMatchSetRequest(input *DeleteByteMatchSetInput) (req *re // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -1846,8 +1846,8 @@ const opDeleteGeoMatchSet = "DeleteGeoMatchSet" // DeleteGeoMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteGeoMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1909,22 +1909,22 @@ func (c *WAF) DeleteGeoMatchSetRequest(input *DeleteGeoMatchSetInput) (req *requ // API operation DeleteGeoMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. 
// -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -1932,7 +1932,7 @@ func (c *WAF) DeleteGeoMatchSetRequest(input *DeleteGeoMatchSetInput) (req *requ // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -1972,8 +1972,8 @@ const opDeleteIPSet = "DeleteIPSet" // DeleteIPSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2035,22 +2035,22 @@ func (c *WAF) DeleteIPSetRequest(input *DeleteIPSetInput) (req *request.Request, // API operation DeleteIPSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -2058,7 +2058,7 @@ func (c *WAF) DeleteIPSetRequest(input *DeleteIPSetInput) (req *request.Request, // // * You tried to delete a Rule that is still referenced by a WebACL. 
// -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -2094,12 +2094,99 @@ func (c *WAF) DeleteIPSetWithContext(ctx aws.Context, input *DeleteIPSetInput, o return out, req.Send() } +const opDeleteLoggingConfiguration = "DeleteLoggingConfiguration" + +// DeleteLoggingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLoggingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLoggingConfiguration for more information on using the DeleteLoggingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLoggingConfigurationRequest method. +// req, resp := client.DeleteLoggingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/DeleteLoggingConfiguration +func (c *WAF) DeleteLoggingConfigurationRequest(input *DeleteLoggingConfigurationInput) (req *request.Request, output *DeleteLoggingConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteLoggingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLoggingConfigurationInput{} + } + + output = &DeleteLoggingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteLoggingConfiguration API operation for AWS WAF. +// +// Permanently deletes the LoggingConfiguration from the specified web ACL. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS WAF's +// API operation DeleteLoggingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalErrorException "WAFInternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" +// The operation failed because the referenced object doesn't exist. +// +// * ErrCodeStaleDataException "WAFStaleDataException" +// The operation failed because you tried to create, update, or delete an object +// by using a change token that has already been used. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/DeleteLoggingConfiguration +func (c *WAF) DeleteLoggingConfiguration(input *DeleteLoggingConfigurationInput) (*DeleteLoggingConfigurationOutput, error) { + req, out := c.DeleteLoggingConfigurationRequest(input) + return out, req.Send() +} + +// DeleteLoggingConfigurationWithContext is the same as DeleteLoggingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLoggingConfiguration for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WAF) DeleteLoggingConfigurationWithContext(ctx aws.Context, input *DeleteLoggingConfigurationInput, opts ...request.Option) (*DeleteLoggingConfigurationOutput, error) { + req, out := c.DeleteLoggingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeletePermissionPolicy = "DeletePermissionPolicy" // DeletePermissionPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeletePermissionPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2150,15 +2237,15 @@ func (c *WAF) DeletePermissionPolicyRequest(input *DeletePermissionPolicyInput) // API operation DeletePermissionPolicy for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/DeletePermissionPolicy @@ -2187,8 +2274,8 @@ const opDeleteRateBasedRule = "DeleteRateBasedRule" // DeleteRateBasedRuleRequest generates a "aws/request.Request" representing the // client's request for the DeleteRateBasedRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2252,22 +2339,22 @@ func (c *WAF) DeleteRateBasedRuleRequest(input *DeleteRateBasedRuleInput) (req * // API operation DeleteRateBasedRule for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. 
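The new `DeleteLoggingConfiguration` operation above also gains a `WithContext` variant. A sketch of calling it with a timeout; the `ResourceArn` field and the web ACL ARN are assumptions, since the input struct is not shown in this hunk:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
)

func main() {
	sess := session.Must(session.NewSession())
	client := waf.New(sess)

	// The context must be non-nil, per the generated comment above;
	// here it bounds the call to ten seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// ResourceArn and its value are placeholders for the target web ACL.
	_, err := client.DeleteLoggingConfigurationWithContext(ctx, &waf.DeleteLoggingConfigurationInput{
		ResourceArn: aws.String("arn:aws:waf::123456789012:webacl/example"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```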
// -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -2275,7 +2362,7 @@ func (c *WAF) DeleteRateBasedRuleRequest(input *DeleteRateBasedRuleInput) (req * // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -2315,8 +2402,8 @@ const opDeleteRegexMatchSet = "DeleteRegexMatchSet" // DeleteRegexMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteRegexMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2379,18 +2466,18 @@ func (c *WAF) DeleteRegexMatchSetRequest(input *DeleteRegexMatchSetInput) (req * // API operation DeleteRegexMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -2398,11 +2485,11 @@ func (c *WAF) DeleteRegexMatchSetRequest(input *DeleteRegexMatchSetInput) (req * // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. 
// For example: // @@ -2442,8 +2529,8 @@ const opDeleteRegexPatternSet = "DeleteRegexPatternSet" // DeleteRegexPatternSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteRegexPatternSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2494,18 +2581,18 @@ func (c *WAF) DeleteRegexPatternSetRequest(input *DeleteRegexPatternSetInput) (r // API operation DeleteRegexPatternSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -2513,11 +2600,11 @@ func (c *WAF) DeleteRegexPatternSetRequest(input *DeleteRegexPatternSetInput) (r // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -2557,8 +2644,8 @@ const opDeleteRule = "DeleteRule" // DeleteRuleRequest generates a "aws/request.Request" representing the // client's request for the DeleteRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2620,22 +2707,22 @@ func (c *WAF) DeleteRuleRequest(input *DeleteRuleInput) (req *request.Request, o // API operation DeleteRule for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. 
// -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -2643,7 +2730,7 @@ func (c *WAF) DeleteRuleRequest(input *DeleteRuleInput) (req *request.Request, o // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -2683,8 +2770,8 @@ const opDeleteRuleGroup = "DeleteRuleGroup" // DeleteRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2745,18 +2832,18 @@ func (c *WAF) DeleteRuleGroupRequest(input *DeleteRuleGroupInput) (req *request. // API operation DeleteRuleGroup for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -2764,7 +2851,7 @@ func (c *WAF) DeleteRuleGroupRequest(input *DeleteRuleGroupInput) (req *request. // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. 
// For example: // @@ -2778,6 +2865,24 @@ func (c *WAF) DeleteRuleGroupRequest(input *DeleteRuleGroupInput) (req *request. // // * You tried to delete an IPSet that references one or more IP addresses. // +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" +// The operation failed because there was nothing to do. For example: +// +// * You tried to remove a Rule from a WebACL, but the Rule isn't in the +// specified WebACL. +// +// * You tried to remove an IP address from an IPSet, but the IP address +// isn't in the specified IPSet. +// +// * You tried to remove a ByteMatchTuple from a ByteMatchSet, but the ByteMatchTuple +// isn't in the specified WebACL. +// +// * You tried to add a Rule to a WebACL, but the Rule already exists in +// the specified WebACL. +// +// * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple +// already exists in the specified WebACL. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/DeleteRuleGroup func (c *WAF) DeleteRuleGroup(input *DeleteRuleGroupInput) (*DeleteRuleGroupOutput, error) { req, out := c.DeleteRuleGroupRequest(input) @@ -2804,8 +2909,8 @@ const opDeleteSizeConstraintSet = "DeleteSizeConstraintSet" // DeleteSizeConstraintSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteSizeConstraintSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2868,22 +2973,22 @@ func (c *WAF) DeleteSizeConstraintSetRequest(input *DeleteSizeConstraintSetInput // API operation DeleteSizeConstraintSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -2891,7 +2996,7 @@ func (c *WAF) DeleteSizeConstraintSetRequest(input *DeleteSizeConstraintSetInput // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. 
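The hunk above also documents `ErrCodeInvalidOperationException` (`WAFInvalidOperationException`, "there was nothing to do") as a possible `DeleteRuleGroup` error. One way a caller might tolerate it during teardown; a sketch only, with placeholder rule group ID and change token (normally obtained via `GetChangeToken`):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/waf"
)

// deleteRuleGroup deletes a rule group and treats the newly documented
// "nothing to do" error as a benign no-op rather than a failure.
func deleteRuleGroup(client *waf.WAF, ruleGroupID, changeToken string) error {
	_, err := client.DeleteRuleGroup(&waf.DeleteRuleGroupInput{
		RuleGroupId: aws.String(ruleGroupID),
		ChangeToken: aws.String(changeToken),
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == waf.ErrCodeInvalidOperationException {
		log.Printf("delete was a no-op: %s", aerr.Message())
		return nil
	}
	return err
}
```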
// For example: // @@ -2931,8 +3036,8 @@ const opDeleteSqlInjectionMatchSet = "DeleteSqlInjectionMatchSet" // DeleteSqlInjectionMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteSqlInjectionMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2996,18 +3101,18 @@ func (c *WAF) DeleteSqlInjectionMatchSetRequest(input *DeleteSqlInjectionMatchSe // API operation DeleteSqlInjectionMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -3015,11 +3120,11 @@ func (c *WAF) DeleteSqlInjectionMatchSetRequest(input *DeleteSqlInjectionMatchSe // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -3059,8 +3164,8 @@ const opDeleteWebACL = "DeleteWebACL" // DeleteWebACLRequest generates a "aws/request.Request" representing the // client's request for the DeleteWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3119,22 +3224,22 @@ func (c *WAF) DeleteWebACLRequest(input *DeleteWebACLInput) (req *request.Reques // API operation DeleteWebACL for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. 
// -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -3142,7 +3247,7 @@ func (c *WAF) DeleteWebACLRequest(input *DeleteWebACLInput) (req *request.Reques // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -3182,8 +3287,8 @@ const opDeleteXssMatchSet = "DeleteXssMatchSet" // DeleteXssMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteXssMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3246,18 +3351,18 @@ func (c *WAF) DeleteXssMatchSetRequest(input *DeleteXssMatchSetInput) (req *requ // API operation DeleteXssMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -3265,11 +3370,11 @@ func (c *WAF) DeleteXssMatchSetRequest(input *DeleteXssMatchSetInput) (req *requ // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. 
// -// * ErrCodeNonEmptyEntityException "NonEmptyEntityException" +// * ErrCodeNonEmptyEntityException "WAFNonEmptyEntityException" // The operation failed because you tried to delete an object that isn't empty. // For example: // @@ -3309,8 +3414,8 @@ const opGetByteMatchSet = "GetByteMatchSet" // GetByteMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetByteMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3359,15 +3464,15 @@ func (c *WAF) GetByteMatchSetRequest(input *GetByteMatchSetInput) (req *request. // API operation GetByteMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetByteMatchSet @@ -3396,8 +3501,8 @@ const opGetChangeToken = "GetChangeToken" // GetChangeTokenRequest generates a "aws/request.Request" representing the // client's request for the GetChangeToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3460,7 +3565,7 @@ func (c *WAF) GetChangeTokenRequest(input *GetChangeTokenInput) (req *request.Re // API operation GetChangeToken for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // @@ -3490,8 +3595,8 @@ const opGetChangeTokenStatus = "GetChangeTokenStatus" // GetChangeTokenStatusRequest generates a "aws/request.Request" representing the // client's request for the GetChangeTokenStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
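The WAFStaleDataException entries above all describe the same change-token contract: every mutating WAF call consumes a one-shot token obtained from GetChangeToken, and GetChangeTokenStatus reports when that change has propagated. A minimal sketch of the flow with the generated client is below; the region, rule ID, and error handling are illustrative assumptions, not part of this file.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
)

func main() {
	// Shared session; credentials come from the environment, region is a placeholder.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	client := waf.New(sess)

	// Every mutating WAF call needs a fresh change token; reusing one
	// results in WAFStaleDataException.
	tokenOut, err := client.GetChangeToken(&waf.GetChangeTokenInput{})
	if err != nil {
		log.Fatal(err)
	}

	// Spend the token on a mutating call (the rule ID is a placeholder).
	_, err = client.DeleteRule(&waf.DeleteRuleInput{
		ChangeToken: tokenOut.ChangeToken,
		RuleId:      aws.String("example-rule-id"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Optionally check whether the change has propagated (PENDING vs. INSYNC).
	statusOut, err := client.GetChangeTokenStatus(&waf.GetChangeTokenStatusInput{
		ChangeToken: tokenOut.ChangeToken,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("change token status:", aws.StringValue(statusOut.ChangeTokenStatus))
}
```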
@@ -3550,10 +3655,10 @@ func (c *WAF) GetChangeTokenStatusRequest(input *GetChangeTokenStatusInput) (req // API operation GetChangeTokenStatus for usage and error information. // // Returned Error Codes: -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // @@ -3583,8 +3688,8 @@ const opGetGeoMatchSet = "GetGeoMatchSet" // GetGeoMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetGeoMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3633,15 +3738,15 @@ func (c *WAF) GetGeoMatchSetRequest(input *GetGeoMatchSetInput) (req *request.Re // API operation GetGeoMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetGeoMatchSet @@ -3670,8 +3775,8 @@ const opGetIPSet = "GetIPSet" // GetIPSetRequest generates a "aws/request.Request" representing the // client's request for the GetIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3720,15 +3825,15 @@ func (c *WAF) GetIPSetRequest(input *GetIPSetInput) (req *request.Request, outpu // API operation GetIPSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. 
// -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetIPSet @@ -3753,12 +3858,95 @@ func (c *WAF) GetIPSetWithContext(ctx aws.Context, input *GetIPSetInput, opts .. return out, req.Send() } +const opGetLoggingConfiguration = "GetLoggingConfiguration" + +// GetLoggingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetLoggingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetLoggingConfiguration for more information on using the GetLoggingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetLoggingConfigurationRequest method. +// req, resp := client.GetLoggingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetLoggingConfiguration +func (c *WAF) GetLoggingConfigurationRequest(input *GetLoggingConfigurationInput) (req *request.Request, output *GetLoggingConfigurationOutput) { + op := &request.Operation{ + Name: opGetLoggingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetLoggingConfigurationInput{} + } + + output = &GetLoggingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLoggingConfiguration API operation for AWS WAF. +// +// Returns the LoggingConfiguration for the specified web ACL. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS WAF's +// API operation GetLoggingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalErrorException "WAFInternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" +// The operation failed because the referenced object doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetLoggingConfiguration +func (c *WAF) GetLoggingConfiguration(input *GetLoggingConfigurationInput) (*GetLoggingConfigurationOutput, error) { + req, out := c.GetLoggingConfigurationRequest(input) + return out, req.Send() +} + +// GetLoggingConfigurationWithContext is the same as GetLoggingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetLoggingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WAF) GetLoggingConfigurationWithContext(ctx aws.Context, input *GetLoggingConfigurationInput, opts ...request.Option) (*GetLoggingConfigurationOutput, error) { + req, out := c.GetLoggingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetPermissionPolicy = "GetPermissionPolicy" // GetPermissionPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetPermissionPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3807,11 +3995,11 @@ func (c *WAF) GetPermissionPolicyRequest(input *GetPermissionPolicyInput) (req * // API operation GetPermissionPolicy for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetPermissionPolicy @@ -3840,8 +4028,8 @@ const opGetRateBasedRule = "GetRateBasedRule" // GetRateBasedRuleRequest generates a "aws/request.Request" representing the // client's request for the GetRateBasedRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3891,15 +4079,15 @@ func (c *WAF) GetRateBasedRuleRequest(input *GetRateBasedRuleInput) (req *reques // API operation GetRateBasedRule for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetRateBasedRule @@ -3928,8 +4116,8 @@ const opGetRateBasedRuleManagedKeys = "GetRateBasedRuleManagedKeys" // GetRateBasedRuleManagedKeysRequest generates a "aws/request.Request" representing the // client's request for the GetRateBasedRuleManagedKeys operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3981,18 +4169,18 @@ func (c *WAF) GetRateBasedRuleManagedKeysRequest(input *GetRateBasedRuleManagedK // API operation GetRateBasedRuleManagedKeys for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -4047,8 +4235,8 @@ const opGetRegexMatchSet = "GetRegexMatchSet" // GetRegexMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetRegexMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4097,15 +4285,15 @@ func (c *WAF) GetRegexMatchSetRequest(input *GetRegexMatchSetInput) (req *reques // API operation GetRegexMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetRegexMatchSet @@ -4134,8 +4322,8 @@ const opGetRegexPatternSet = "GetRegexPatternSet" // GetRegexPatternSetRequest generates a "aws/request.Request" representing the // client's request for the GetRegexPatternSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4184,15 +4372,15 @@ func (c *WAF) GetRegexPatternSetRequest(input *GetRegexPatternSetInput) (req *re // API operation GetRegexPatternSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetRegexPatternSet @@ -4221,8 +4409,8 @@ const opGetRule = "GetRule" // GetRuleRequest generates a "aws/request.Request" representing the // client's request for the GetRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4272,15 +4460,15 @@ func (c *WAF) GetRuleRequest(input *GetRuleInput) (req *request.Request, output // API operation GetRule for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetRule @@ -4309,8 +4497,8 @@ const opGetRuleGroup = "GetRuleGroup" // GetRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the GetRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4362,11 +4550,11 @@ func (c *WAF) GetRuleGroupRequest(input *GetRuleGroupInput) (req *request.Reques // API operation GetRuleGroup for usage and error information. 
// // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetRuleGroup @@ -4395,8 +4583,8 @@ const opGetSampledRequests = "GetSampledRequests" // GetSampledRequestsRequest generates a "aws/request.Request" representing the // client's request for the GetSampledRequests operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4455,10 +4643,10 @@ func (c *WAF) GetSampledRequestsRequest(input *GetSampledRequestsInput) (req *re // API operation GetSampledRequests for usage and error information. // // Returned Error Codes: -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // @@ -4488,8 +4676,8 @@ const opGetSizeConstraintSet = "GetSizeConstraintSet" // GetSizeConstraintSetRequest generates a "aws/request.Request" representing the // client's request for the GetSizeConstraintSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4538,15 +4726,15 @@ func (c *WAF) GetSizeConstraintSetRequest(input *GetSizeConstraintSetInput) (req // API operation GetSizeConstraintSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. 
// // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetSizeConstraintSet @@ -4575,8 +4763,8 @@ const opGetSqlInjectionMatchSet = "GetSqlInjectionMatchSet" // GetSqlInjectionMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetSqlInjectionMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4625,15 +4813,15 @@ func (c *WAF) GetSqlInjectionMatchSetRequest(input *GetSqlInjectionMatchSetInput // API operation GetSqlInjectionMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetSqlInjectionMatchSet @@ -4662,8 +4850,8 @@ const opGetWebACL = "GetWebACL" // GetWebACLRequest generates a "aws/request.Request" representing the // client's request for the GetWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4712,15 +4900,15 @@ func (c *WAF) GetWebACLRequest(input *GetWebACLInput) (req *request.Request, out // API operation GetWebACL for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetWebACL @@ -4749,8 +4937,8 @@ const opGetXssMatchSet = "GetXssMatchSet" // GetXssMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetXssMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4799,15 +4987,15 @@ func (c *WAF) GetXssMatchSetRequest(input *GetXssMatchSetInput) (req *request.Re // API operation GetXssMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/GetXssMatchSet @@ -4836,8 +5024,8 @@ const opListActivatedRulesInRuleGroup = "ListActivatedRulesInRuleGroup" // ListActivatedRulesInRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the ListActivatedRulesInRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4886,14 +5074,14 @@ func (c *WAF) ListActivatedRulesInRuleGroupRequest(input *ListActivatedRulesInRu // API operation ListActivatedRulesInRuleGroup for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -4948,8 +5136,8 @@ const opListByteMatchSets = "ListByteMatchSets" // ListByteMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListByteMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4998,11 +5186,11 @@ func (c *WAF) ListByteMatchSetsRequest(input *ListByteMatchSetsInput) (req *requ // API operation ListByteMatchSets for usage and error information. 
// // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5032,8 +5220,8 @@ const opListGeoMatchSets = "ListGeoMatchSets" // ListGeoMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListGeoMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5082,11 +5270,11 @@ func (c *WAF) ListGeoMatchSetsRequest(input *ListGeoMatchSetsInput) (req *reques // API operation ListGeoMatchSets for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5116,8 +5304,8 @@ const opListIPSets = "ListIPSets" // ListIPSetsRequest generates a "aws/request.Request" representing the // client's request for the ListIPSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5166,11 +5354,11 @@ func (c *WAF) ListIPSetsRequest(input *ListIPSetsInput) (req *request.Request, o // API operation ListIPSets for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5196,12 +5384,124 @@ func (c *WAF) ListIPSetsWithContext(ctx aws.Context, input *ListIPSetsInput, opt return out, req.Send() } +const opListLoggingConfigurations = "ListLoggingConfigurations" + +// ListLoggingConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the ListLoggingConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListLoggingConfigurations for more information on using the ListLoggingConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListLoggingConfigurationsRequest method. +// req, resp := client.ListLoggingConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/ListLoggingConfigurations +func (c *WAF) ListLoggingConfigurationsRequest(input *ListLoggingConfigurationsInput) (req *request.Request, output *ListLoggingConfigurationsOutput) { + op := &request.Operation{ + Name: opListLoggingConfigurations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListLoggingConfigurationsInput{} + } + + output = &ListLoggingConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListLoggingConfigurations API operation for AWS WAF. +// +// Returns an array of LoggingConfiguration objects. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS WAF's +// API operation ListLoggingConfigurations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalErrorException "WAFInternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" +// The operation failed because the referenced object doesn't exist. +// +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" +// The operation failed because AWS WAF didn't recognize a parameter in the +// request. For example: +// +// * You specified an invalid parameter name. +// +// * You specified an invalid value. +// +// * You tried to update an object (ByteMatchSet, IPSet, Rule, or WebACL) +// using an action other than INSERT or DELETE. +// +// * You tried to create a WebACL with a DefaultActionType other than ALLOW, +// BLOCK, or COUNT. +// +// * You tried to create a RateBasedRule with a RateKey value other than +// IP. +// +// * You tried to update a WebACL with a WafActionType other than ALLOW, +// BLOCK, or COUNT. +// +// * You tried to update a ByteMatchSet with a FieldToMatchType other than +// HEADER, METHOD, QUERY_STRING, URI, or BODY. +// +// * You tried to update a ByteMatchSet with a Field of HEADER but no value +// for Data. +// +// * Your request references an ARN that is malformed, or corresponds to +// a resource with which a web ACL cannot be associated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/ListLoggingConfigurations +func (c *WAF) ListLoggingConfigurations(input *ListLoggingConfigurationsInput) (*ListLoggingConfigurationsOutput, error) { + req, out := c.ListLoggingConfigurationsRequest(input) + return out, req.Send() +} + +// ListLoggingConfigurationsWithContext is the same as ListLoggingConfigurations with the addition of +// the ability to pass a context and additional request options. 
+// +// See ListLoggingConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WAF) ListLoggingConfigurationsWithContext(ctx aws.Context, input *ListLoggingConfigurationsInput, opts ...request.Option) (*ListLoggingConfigurationsOutput, error) { + req, out := c.ListLoggingConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListRateBasedRules = "ListRateBasedRules" // ListRateBasedRulesRequest generates a "aws/request.Request" representing the // client's request for the ListRateBasedRules operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5250,11 +5550,11 @@ func (c *WAF) ListRateBasedRulesRequest(input *ListRateBasedRulesInput) (req *re // API operation ListRateBasedRules for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5284,8 +5584,8 @@ const opListRegexMatchSets = "ListRegexMatchSets" // ListRegexMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListRegexMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5334,11 +5634,11 @@ func (c *WAF) ListRegexMatchSetsRequest(input *ListRegexMatchSetsInput) (req *re // API operation ListRegexMatchSets for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5368,8 +5668,8 @@ const opListRegexPatternSets = "ListRegexPatternSets" // ListRegexPatternSetsRequest generates a "aws/request.Request" representing the // client's request for the ListRegexPatternSets operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5418,11 +5718,11 @@ func (c *WAF) ListRegexPatternSetsRequest(input *ListRegexPatternSetsInput) (req // API operation ListRegexPatternSets for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5452,8 +5752,8 @@ const opListRuleGroups = "ListRuleGroups" // ListRuleGroupsRequest generates a "aws/request.Request" representing the // client's request for the ListRuleGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5502,7 +5802,7 @@ func (c *WAF) ListRuleGroupsRequest(input *ListRuleGroupsInput) (req *request.Re // API operation ListRuleGroups for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // @@ -5532,8 +5832,8 @@ const opListRules = "ListRules" // ListRulesRequest generates a "aws/request.Request" representing the // client's request for the ListRules operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5582,11 +5882,11 @@ func (c *WAF) ListRulesRequest(input *ListRulesInput) (req *request.Request, out // API operation ListRules for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. 
// @@ -5616,8 +5916,8 @@ const opListSizeConstraintSets = "ListSizeConstraintSets" // ListSizeConstraintSetsRequest generates a "aws/request.Request" representing the // client's request for the ListSizeConstraintSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5666,11 +5966,11 @@ func (c *WAF) ListSizeConstraintSetsRequest(input *ListSizeConstraintSetsInput) // API operation ListSizeConstraintSets for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5700,8 +6000,8 @@ const opListSqlInjectionMatchSets = "ListSqlInjectionMatchSets" // ListSqlInjectionMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListSqlInjectionMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5750,11 +6050,11 @@ func (c *WAF) ListSqlInjectionMatchSetsRequest(input *ListSqlInjectionMatchSetsI // API operation ListSqlInjectionMatchSets for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5784,8 +6084,8 @@ const opListSubscribedRuleGroups = "ListSubscribedRuleGroups" // ListSubscribedRuleGroupsRequest generates a "aws/request.Request" representing the // client's request for the ListSubscribedRuleGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5834,10 +6134,10 @@ func (c *WAF) ListSubscribedRuleGroupsRequest(input *ListSubscribedRuleGroupsInp // API operation ListSubscribedRuleGroups for usage and error information. 
// // Returned Error Codes: -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // @@ -5867,8 +6167,8 @@ const opListWebACLs = "ListWebACLs" // ListWebACLsRequest generates a "aws/request.Request" representing the // client's request for the ListWebACLs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5917,11 +6217,11 @@ func (c *WAF) ListWebACLsRequest(input *ListWebACLsInput) (req *request.Request, // API operation ListWebACLs for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -5951,8 +6251,8 @@ const opListXssMatchSets = "ListXssMatchSets" // ListXssMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListXssMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6001,11 +6301,11 @@ func (c *WAF) ListXssMatchSetsRequest(input *ListXssMatchSetsInput) (req *reques // API operation ListXssMatchSets for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -6031,12 +6331,114 @@ func (c *WAF) ListXssMatchSetsWithContext(ctx aws.Context, input *ListXssMatchSe return out, req.Send() } +const opPutLoggingConfiguration = "PutLoggingConfiguration" + +// PutLoggingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutLoggingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutLoggingConfiguration for more information on using the PutLoggingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutLoggingConfigurationRequest method. +// req, resp := client.PutLoggingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/PutLoggingConfiguration +func (c *WAF) PutLoggingConfigurationRequest(input *PutLoggingConfigurationInput) (req *request.Request, output *PutLoggingConfigurationOutput) { + op := &request.Operation{ + Name: opPutLoggingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutLoggingConfigurationInput{} + } + + output = &PutLoggingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutLoggingConfiguration API operation for AWS WAF. +// +// Associates a LoggingConfiguration with a specified web ACL. +// +// You can access information about all traffic that AWS WAF inspects using +// the following steps: +// +// Create an Amazon Kinesis Data Firehose delivery stream. For more information, +// see Creating an Amazon Kinesis Data Firehose Delivery Stream (https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html). +// +// Associate that delivery stream to your web ACL using a PutLoggingConfiguration +// request. +// +// When you successfully enable logging using a PutLoggingConfiguration request, +// AWS WAF will create a service linked role with the necessary permissions +// to write logs to the Amazon Kinesis Data Firehose delivery stream. For more +// information, see Logging Web ACL Traffic Information (http://docs.aws.amazon.com/waf/latest/developerguide/logging.html) +// in the AWS WAF Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS WAF's +// API operation PutLoggingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInternalErrorException "WAFInternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" +// The operation failed because the referenced object doesn't exist. +// +// * ErrCodeStaleDataException "WAFStaleDataException" +// The operation failed because you tried to create, update, or delete an object +// by using a change token that has already been used. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/PutLoggingConfiguration +func (c *WAF) PutLoggingConfiguration(input *PutLoggingConfigurationInput) (*PutLoggingConfigurationOutput, error) { + req, out := c.PutLoggingConfigurationRequest(input) + return out, req.Send() +} + +// PutLoggingConfigurationWithContext is the same as PutLoggingConfiguration with the addition of +// the ability to pass a context and additional request options. 
+// +// See PutLoggingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WAF) PutLoggingConfigurationWithContext(ctx aws.Context, input *PutLoggingConfigurationInput, opts ...request.Option) (*PutLoggingConfigurationOutput, error) { + req, out := c.PutLoggingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPutPermissionPolicy = "PutPermissionPolicy" // PutPermissionPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutPermissionPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6086,8 +6488,9 @@ func (c *WAF) PutPermissionPolicyRequest(input *PutPermissionPolicyInput) (req * // // * Effect must specify Allow. // -// * The Action in the policy must be waf:UpdateWebACL and waf-regional:UpdateWebACL. -// Any extra or wildcard actions in the policy will be rejected. +// * The Action in the policy must be waf:UpdateWebACL, waf-regional:UpdateWebACL, +// waf:GetRuleGroup and waf-regional:GetRuleGroup . Any extra or wildcard +// actions in the policy will be rejected. // // * The policy cannot include a Resource parameter. // @@ -6110,18 +6513,18 @@ func (c *WAF) PutPermissionPolicyRequest(input *PutPermissionPolicyInput) (req * // API operation PutPermissionPolicy for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeInvalidPermissionPolicyException "InvalidPermissionPolicyException" +// * ErrCodeInvalidPermissionPolicyException "WAFInvalidPermissionPolicyException" // The operation failed because the specified policy is not in the proper format. // // The policy is subject to the following restrictions: @@ -6132,8 +6535,9 @@ func (c *WAF) PutPermissionPolicyRequest(input *PutPermissionPolicyInput) (req * // // * Effect must specify Allow. // -// * The Action in the policy must be waf:UpdateWebACL or waf-regional:UpdateWebACL. -// Any extra or wildcard actions in the policy will be rejected. +// * The Action in the policy must be waf:UpdateWebACL, waf-regional:UpdateWebACL, +// waf:GetRuleGroup and waf-regional:GetRuleGroup . Any extra or wildcard +// actions in the policy will be rejected. // // * The policy cannot include a Resource parameter. 
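For the newly added PutLoggingConfiguration operation above, a minimal usage sketch against the vendored client, assuming a configured session; the `LoggingConfiguration` field names (`ResourceArn`, `LogDestinationConfigs`) and the ARNs are illustrative assumptions rather than values taken from this hunk:

```go
// Sketch only: associates a hypothetical web ACL with a Kinesis Data Firehose
// delivery stream via the newly vendored PutLoggingConfiguration call.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := waf.New(sess)

	out, err := svc.PutLoggingConfiguration(&waf.PutLoggingConfigurationInput{
		LoggingConfiguration: &waf.LoggingConfiguration{
			// Both ARNs below are hypothetical placeholders.
			ResourceArn: aws.String("arn:aws:waf::123456789012:webacl/example"),
			LogDestinationConfigs: []*string{
				aws.String("arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-example"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```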
// @@ -6170,8 +6574,8 @@ const opUpdateByteMatchSet = "UpdateByteMatchSet" // UpdateByteMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateByteMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6256,15 +6660,15 @@ func (c *WAF) UpdateByteMatchSetRequest(input *UpdateByteMatchSetInput) (req *re // API operation UpdateByteMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -6279,13 +6683,10 @@ func (c *WAF) UpdateByteMatchSetRequest(input *UpdateByteMatchSetInput) (req *re // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -6314,7 +6715,7 @@ func (c *WAF) UpdateByteMatchSetRequest(input *UpdateByteMatchSetInput) (req *re // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -6330,14 +6731,14 @@ func (c *WAF) UpdateByteMatchSetRequest(input *UpdateByteMatchSetInput) (req *re // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. 
// -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -6369,8 +6770,8 @@ const opUpdateGeoMatchSet = "UpdateGeoMatchSet" // UpdateGeoMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateGeoMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6446,19 +6847,19 @@ func (c *WAF) UpdateGeoMatchSetRequest(input *UpdateGeoMatchSetInput) (req *requ // API operation UpdateGeoMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -6473,13 +6874,10 @@ func (c *WAF) UpdateGeoMatchSetRequest(input *UpdateGeoMatchSetInput) (req *requ // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -6508,7 +6906,7 @@ func (c *WAF) UpdateGeoMatchSetRequest(input *UpdateGeoMatchSetInput) (req *requ // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. 
For example: // @@ -6524,10 +6922,10 @@ func (c *WAF) UpdateGeoMatchSetRequest(input *UpdateGeoMatchSetInput) (req *requ // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -6535,7 +6933,7 @@ func (c *WAF) UpdateGeoMatchSetRequest(input *UpdateGeoMatchSetInput) (req *requ // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -6567,8 +6965,8 @@ const opUpdateIPSet = "UpdateIPSet" // UpdateIPSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6620,9 +7018,10 @@ func (c *WAF) UpdateIPSetRequest(input *UpdateIPSetInput) (req *request.Request, // range of IP addresses from 192.0.2.0 to 192.0.2.255) or 192.0.2.44/32 // (for the individual IP address 192.0.2.44). // -// AWS WAF supports /8, /16, /24, and /32 IP address ranges for IPv4, and /24, -// /32, /48, /56, /64 and /128 for IPv6. For more information about CIDR notation, -// see the Wikipedia entry Classless Inter-Domain Routing (https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). +// AWS WAF supports IPv4 address ranges: /8 and any range between /16 through +// /32. AWS WAF supports IPv6 address ranges: /16, /24, /32, /48, /56, /64, +// and /128. For more information about CIDR notation, see the Wikipedia entry +// Classless Inter-Domain Routing (https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). // // IPv6 addresses can be represented using any of the following formats: // @@ -6654,6 +7053,8 @@ func (c *WAF) UpdateIPSetRequest(input *UpdateIPSetInput) (req *request.Request, // and/or the IP addresses that you want to delete. If you want to change an // IP address, you delete the existing IP address and add the new one. // +// You can insert a maximum of 1000 addresses in a single request. +// // For more information about how to use the AWS WAF API to allow or block HTTP // requests, see the AWS WAF Developer Guide (http://docs.aws.amazon.com/waf/latest/developerguide/). // @@ -6665,19 +7066,19 @@ func (c *WAF) UpdateIPSetRequest(input *UpdateIPSetInput) (req *request.Request, // API operation UpdateIPSet for usage and error information. 
// // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -6692,13 +7093,10 @@ func (c *WAF) UpdateIPSetRequest(input *UpdateIPSetInput) (req *request.Request, // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -6727,7 +7125,7 @@ func (c *WAF) UpdateIPSetRequest(input *UpdateIPSetInput) (req *request.Request, // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -6743,10 +7141,10 @@ func (c *WAF) UpdateIPSetRequest(input *UpdateIPSetInput) (req *request.Request, // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -6754,7 +7152,7 @@ func (c *WAF) UpdateIPSetRequest(input *UpdateIPSetInput) (req *request.Request, // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. 
For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -6786,8 +7184,8 @@ const opUpdateRateBasedRule = "UpdateRateBasedRule" // UpdateRateBasedRuleRequest generates a "aws/request.Request" representing the // client's request for the UpdateRateBasedRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6873,19 +7271,19 @@ func (c *WAF) UpdateRateBasedRuleRequest(input *UpdateRateBasedRuleInput) (req * // API operation UpdateRateBasedRule for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -6900,13 +7298,10 @@ func (c *WAF) UpdateRateBasedRuleRequest(input *UpdateRateBasedRuleInput) (req * // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -6935,7 +7330,7 @@ func (c *WAF) UpdateRateBasedRuleRequest(input *UpdateRateBasedRuleInput) (req * // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -6951,10 +7346,10 @@ func (c *WAF) UpdateRateBasedRuleRequest(input *UpdateRateBasedRuleInput) (req * // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. 
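A minimal sketch of the GetChangeToken/UpdateIPSet flow described in the UpdateIPSet documentation above, assuming a configured client; the IPSetId is a hypothetical placeholder and the struct field names follow the standard `waf` package shapes:

```go
// Sketch only: inserts a single IPv4 CIDR into an existing IPSet using a
// fresh change token, mirroring the documented UpdateIPSet workflow.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := waf.New(sess)

	// Every mutating WAF call needs a change token.
	token, err := svc.GetChangeToken(&waf.GetChangeTokenInput{})
	if err != nil {
		log.Fatal(err)
	}

	_, err = svc.UpdateIPSet(&waf.UpdateIPSetInput{
		ChangeToken: token.ChangeToken,
		IPSetId:     aws.String("example-ipset-id"), // hypothetical placeholder
		Updates: []*waf.IPSetUpdate{{
			Action: aws.String("INSERT"),
			IPSetDescriptor: &waf.IPSetDescriptor{
				Type:  aws.String("IPV4"),
				Value: aws.String("192.0.2.0/24"),
			},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```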
// -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -6962,7 +7357,7 @@ func (c *WAF) UpdateRateBasedRuleRequest(input *UpdateRateBasedRuleInput) (req * // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -6994,8 +7389,8 @@ const opUpdateRegexMatchSet = "UpdateRegexMatchSet" // UpdateRegexMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateRegexMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7077,27 +7472,27 @@ func (c *WAF) UpdateRegexMatchSetRequest(input *UpdateRegexMatchSetInput) (req * // API operation UpdateRegexMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeDisallowedNameException "DisallowedNameException" +// * ErrCodeDisallowedNameException "WAFDisallowedNameException" // The name specified is invalid. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) // in the AWS WAF Developer Guide. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. 
For example: // @@ -7113,7 +7508,7 @@ func (c *WAF) UpdateRegexMatchSetRequest(input *UpdateRegexMatchSetInput) (req * // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -7128,13 +7523,10 @@ func (c *WAF) UpdateRegexMatchSetRequest(input *UpdateRegexMatchSetInput) (req * // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // @@ -7164,8 +7556,8 @@ const opUpdateRegexPatternSet = "UpdateRegexPatternSet" // UpdateRegexPatternSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateRegexPatternSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7244,24 +7636,24 @@ func (c *WAF) UpdateRegexPatternSetRequest(input *UpdateRegexPatternSetInput) (r // API operation UpdateRegexPatternSet for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) // in the AWS WAF Developer Guide. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. 
For example: // @@ -7277,7 +7669,7 @@ func (c *WAF) UpdateRegexPatternSetRequest(input *UpdateRegexPatternSetInput) (r // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -7292,17 +7684,14 @@ func (c *WAF) UpdateRegexPatternSetRequest(input *UpdateRegexPatternSetInput) (r // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidRegexPatternException "InvalidRegexPatternException" +// * ErrCodeInvalidRegexPatternException "WAFInvalidRegexPatternException" // The regular expression (regex) you specified in RegexPatternString is invalid. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/UpdateRegexPatternSet @@ -7331,8 +7720,8 @@ const opUpdateRule = "UpdateRule" // UpdateRuleRequest generates a "aws/request.Request" representing the // client's request for the UpdateRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7413,19 +7802,19 @@ func (c *WAF) UpdateRuleRequest(input *UpdateRuleInput) (req *request.Request, o // API operation UpdateRule for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -7440,13 +7829,10 @@ func (c *WAF) UpdateRuleRequest(input *UpdateRuleInput) (req *request.Request, o // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. 
// -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -7475,7 +7861,7 @@ func (c *WAF) UpdateRuleRequest(input *UpdateRuleInput) (req *request.Request, o // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -7491,10 +7877,10 @@ func (c *WAF) UpdateRuleRequest(input *UpdateRuleInput) (req *request.Request, o // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -7502,7 +7888,7 @@ func (c *WAF) UpdateRuleRequest(input *UpdateRuleInput) (req *request.Request, o // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -7534,8 +7920,8 @@ const opUpdateRuleGroup = "UpdateRuleGroup" // UpdateRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the UpdateRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7606,15 +7992,15 @@ func (c *WAF) UpdateRuleGroupRequest(input *UpdateRuleGroupInput) (req *request. // API operation UpdateRuleGroup for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. 
// -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -7630,10 +8016,10 @@ func (c *WAF) UpdateRuleGroupRequest(input *UpdateRuleGroupInput) (req *request. // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -7648,19 +8034,16 @@ func (c *WAF) UpdateRuleGroupRequest(input *UpdateRuleGroupInput) (req *request. // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) // in the AWS WAF Developer Guide. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -7715,8 +8098,8 @@ const opUpdateSizeConstraintSet = "UpdateSizeConstraintSet" // UpdateSizeConstraintSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateSizeConstraintSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7770,6 +8153,8 @@ func (c *WAF) UpdateSizeConstraintSetRequest(input *UpdateSizeConstraintSetInput // of the request body are not supported because the AWS resource forwards // only the first 8192 bytes of your request to AWS WAF. // +// You can only specify a single type of TextTransformation. +// // * A ComparisonOperator used for evaluating the selected part of the request // against the specified Size, such as equals, greater than, less than, and // so on. @@ -7803,19 +8188,19 @@ func (c *WAF) UpdateSizeConstraintSetRequest(input *UpdateSizeConstraintSetInput // API operation UpdateSizeConstraintSet for usage and error information. 
// // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -7830,13 +8215,10 @@ func (c *WAF) UpdateSizeConstraintSetRequest(input *UpdateSizeConstraintSetInput // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -7865,7 +8247,7 @@ func (c *WAF) UpdateSizeConstraintSetRequest(input *UpdateSizeConstraintSetInput // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -7881,10 +8263,10 @@ func (c *WAF) UpdateSizeConstraintSetRequest(input *UpdateSizeConstraintSetInput // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -7892,7 +8274,7 @@ func (c *WAF) UpdateSizeConstraintSetRequest(input *UpdateSizeConstraintSetInput // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. 
For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -7924,8 +8306,8 @@ const opUpdateSqlInjectionMatchSet = "UpdateSqlInjectionMatchSet" // UpdateSqlInjectionMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateSqlInjectionMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7972,12 +8354,15 @@ func (c *WAF) UpdateSqlInjectionMatchSetRequest(input *UpdateSqlInjectionMatchSe // object and add a new one. // // * FieldToMatch: The part of web requests that you want AWS WAF to inspect -// and, if you want AWS WAF to inspect a header, the name of the header. +// and, if you want AWS WAF to inspect a header or custom query parameter, +// the name of the header or parameter. // // * TextTransformation: Which text transformation, if any, to perform on // the web request before inspecting the request for snippets of malicious // SQL code. // +// You can only specify a single type of TextTransformation. +// // You use SqlInjectionMatchSet objects to specify which CloudFront requests // you want to allow, block, or count. For example, if you're receiving requests // that contain snippets of SQL code in the query string and you want to block @@ -8005,15 +8390,15 @@ func (c *WAF) UpdateSqlInjectionMatchSetRequest(input *UpdateSqlInjectionMatchSe // API operation UpdateSqlInjectionMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -8028,13 +8413,10 @@ func (c *WAF) UpdateSqlInjectionMatchSetRequest(input *UpdateSqlInjectionMatchSe // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -8063,7 +8445,7 @@ func (c *WAF) UpdateSqlInjectionMatchSetRequest(input *UpdateSqlInjectionMatchSe // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. 
// -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -8079,14 +8461,14 @@ func (c *WAF) UpdateSqlInjectionMatchSetRequest(input *UpdateSqlInjectionMatchSe // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -8118,8 +8500,8 @@ const opUpdateWebACL = "UpdateWebACL" // UpdateWebACLRequest generates a "aws/request.Request" representing the // client's request for the UpdateWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8215,19 +8597,19 @@ func (c *WAF) UpdateWebACLRequest(input *UpdateWebACLInput) (req *request.Reques // API operation UpdateWebACL for usage and error information. // // Returned Error Codes: -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. // -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -8242,13 +8624,10 @@ func (c *WAF) UpdateWebACLRequest(input *UpdateWebACLInput) (req *request.Reques // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. 
-// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -8277,7 +8656,7 @@ func (c *WAF) UpdateWebACLRequest(input *UpdateWebACLInput) (req *request.Reques // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -8293,10 +8672,10 @@ func (c *WAF) UpdateWebACLRequest(input *UpdateWebACLInput) (req *request.Reques // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeReferencedItemException "ReferencedItemException" +// * ErrCodeReferencedItemException "WAFReferencedItemException" // The operation failed because you tried to delete an object that is still // in use. For example: // @@ -8304,13 +8683,13 @@ func (c *WAF) UpdateWebACLRequest(input *UpdateWebACLInput) (req *request.Reques // // * You tried to delete a Rule that is still referenced by a WebACL. // -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) // in the AWS WAF Developer Guide. // -// * ErrCodeSubscriptionNotFoundException "SubscriptionNotFoundException" +// * ErrCodeSubscriptionNotFoundException "WAFSubscriptionNotFoundException" // The specified subscription does not exist. // // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-2015-08-24/UpdateWebACL @@ -8339,8 +8718,8 @@ const opUpdateXssMatchSet = "UpdateXssMatchSet" // UpdateXssMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateXssMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8387,12 +8766,15 @@ func (c *WAF) UpdateXssMatchSetRequest(input *UpdateXssMatchSetInput) (req *requ // add a new one. // // * FieldToMatch: The part of web requests that you want AWS WAF to inspect -// and, if you want AWS WAF to inspect a header, the name of the header. +// and, if you want AWS WAF to inspect a header or custom query parameter, +// the name of the header or parameter. 
// // * TextTransformation: Which text transformation, if any, to perform on // the web request before inspecting the request for cross-site scripting // attacks. // +// You can only specify a single type of TextTransformation. +// // You use XssMatchSet objects to specify which CloudFront requests you want // to allow, block, or count. For example, if you're receiving requests that // contain cross-site scripting attacks in the request body and you want to @@ -8420,15 +8802,15 @@ func (c *WAF) UpdateXssMatchSetRequest(input *UpdateXssMatchSetInput) (req *requ // API operation UpdateXssMatchSet for usage and error information. // // Returned Error Codes: -// * ErrCodeInternalErrorException "InternalErrorException" +// * ErrCodeInternalErrorException "WAFInternalErrorException" // The operation failed because of a system problem, even though the request // was valid. Retry your request. // -// * ErrCodeInvalidAccountException "InvalidAccountException" +// * ErrCodeInvalidAccountException "WAFInvalidAccountException" // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. // -// * ErrCodeInvalidOperationException "InvalidOperationException" +// * ErrCodeInvalidOperationException "WAFInvalidOperationException" // The operation failed because there was nothing to do. For example: // // * You tried to remove a Rule from a WebACL, but the Rule isn't in the @@ -8443,13 +8825,10 @@ func (c *WAF) UpdateXssMatchSetRequest(input *UpdateXssMatchSetInput) (req *requ // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // -// * ErrCodeInvalidParameterException "InvalidParameterException" +// * ErrCodeInvalidParameterException "WAFInvalidParameterException" // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: // @@ -8478,7 +8857,7 @@ func (c *WAF) UpdateXssMatchSetRequest(input *UpdateXssMatchSetInput) (req *requ // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. // -// * ErrCodeNonexistentContainerException "NonexistentContainerException" +// * ErrCodeNonexistentContainerException "WAFNonexistentContainerException" // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: // @@ -8494,14 +8873,14 @@ func (c *WAF) UpdateXssMatchSetRequest(input *UpdateXssMatchSetInput) (req *requ // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. // -// * ErrCodeNonexistentItemException "NonexistentItemException" +// * ErrCodeNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // -// * ErrCodeStaleDataException "StaleDataException" +// * ErrCodeStaleDataException "WAFStaleDataException" // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. 
// -// * ErrCodeLimitsExceededException "LimitsExceededException" +// * ErrCodeLimitsExceededException "WAFLimitsExceededException" // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) @@ -8923,6 +9302,14 @@ type ByteMatchTuple struct { // of the body, you can create a size constraint set. For more information, // see CreateSizeConstraintSet. // + // * SINGLE_QUERY_ARG: The parameter in the query string that you will inspect, + // such as UserName or SalesRegion. The maximum length for SINGLE_QUERY_ARG + // is 30 characters. + // + // * ALL_QUERY_ARGS: Similar to SINGLE_QUERY_ARG, but instead of inspecting + // a single parameter, AWS WAF inspects all parameters within the query string + // for the value or regex pattern that you specify in TargetString. + // // If TargetString includes alphabetic characters A-Z and a-z, note that the // value is case sensitive. // @@ -8951,11 +9338,13 @@ type ByteMatchTuple struct { // AWS WAF performs the transformation on TargetString before inspecting a request // for a match. // + // You can only specify a single type of TextTransformation. + // // CMD_LINE // - // When you're concerned that attackers are injecting an operating system commandline - // command and using unusual formatting to disguise some or all of the command, - // use this option to perform the following transformations: + // When you're concerned that attackers are injecting an operating system command + // line command and using unusual formatting to disguise some or all of the + // command, use this option to perform the following transformations: // // * Delete the following characters: \ " ' ^ // @@ -10536,8 +10925,81 @@ func (s *DeleteIPSetInput) Validate() error { if s.IPSetId == nil { invalidParams.Add(request.NewErrParamRequired("IPSetId")) } - if s.IPSetId != nil && len(*s.IPSetId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("IPSetId", 1)) + if s.IPSetId != nil && len(*s.IPSetId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IPSetId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetChangeToken sets the ChangeToken field's value. +func (s *DeleteIPSetInput) SetChangeToken(v string) *DeleteIPSetInput { + s.ChangeToken = &v + return s +} + +// SetIPSetId sets the IPSetId field's value. +func (s *DeleteIPSetInput) SetIPSetId(v string) *DeleteIPSetInput { + s.IPSetId = &v + return s +} + +type DeleteIPSetOutput struct { + _ struct{} `type:"structure"` + + // The ChangeToken that you used to submit the DeleteIPSet request. You can + // also use this value to query the status of the request. For more information, + // see GetChangeTokenStatus. + ChangeToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DeleteIPSetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteIPSetOutput) GoString() string { + return s.String() +} + +// SetChangeToken sets the ChangeToken field's value. +func (s *DeleteIPSetOutput) SetChangeToken(v string) *DeleteIPSetOutput { + s.ChangeToken = &v + return s +} + +type DeleteLoggingConfigurationInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the web ACL from which you want to delete + // the LoggingConfiguration. 
+ // + // ResourceArn is a required field + ResourceArn *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteLoggingConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLoggingConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLoggingConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLoggingConfigurationInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.ResourceArn != nil && len(*s.ResourceArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceArn", 1)) } if invalidParams.Len() > 0 { @@ -10546,43 +11008,26 @@ func (s *DeleteIPSetInput) Validate() error { return nil } -// SetChangeToken sets the ChangeToken field's value. -func (s *DeleteIPSetInput) SetChangeToken(v string) *DeleteIPSetInput { - s.ChangeToken = &v - return s -} - -// SetIPSetId sets the IPSetId field's value. -func (s *DeleteIPSetInput) SetIPSetId(v string) *DeleteIPSetInput { - s.IPSetId = &v +// SetResourceArn sets the ResourceArn field's value. +func (s *DeleteLoggingConfigurationInput) SetResourceArn(v string) *DeleteLoggingConfigurationInput { + s.ResourceArn = &v return s } -type DeleteIPSetOutput struct { +type DeleteLoggingConfigurationOutput struct { _ struct{} `type:"structure"` - - // The ChangeToken that you used to submit the DeleteIPSet request. You can - // also use this value to query the status of the request. For more information, - // see GetChangeTokenStatus. - ChangeToken *string `min:"1" type:"string"` } // String returns the string representation -func (s DeleteIPSetOutput) String() string { +func (s DeleteLoggingConfigurationOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteIPSetOutput) GoString() string { +func (s DeleteLoggingConfigurationOutput) GoString() string { return s.String() } -// SetChangeToken sets the ChangeToken field's value. -func (s *DeleteIPSetOutput) SetChangeToken(v string) *DeleteIPSetOutput { - s.ChangeToken = &v - return s -} - type DeletePermissionPolicyInput struct { _ struct{} `type:"structure"` @@ -11406,10 +11851,14 @@ type FieldToMatch struct { _ struct{} `type:"structure"` // When the value of Type is HEADER, enter the name of the header that you want - // AWS WAF to search, for example, User-Agent or Referer. If the value of Type - // is any other value, omit Data. + // AWS WAF to search, for example, User-Agent or Referer. The name of the header + // is not case sensitive. // - // The name of the header is not case sensitive. + // When the value of Type is SINGLE_QUERY_ARG, enter the name of the parameter + // that you want AWS WAF to search, for example, UserName or SalesRegion. The + // parameter name is not case sensitive. + // + // If the value of Type is any other value, omit Data. Data *string `type:"string"` // The part of the web request that you want AWS WAF to search for a specified @@ -11437,6 +11886,14 @@ type FieldToMatch struct { // of the body, you can create a size constraint set. For more information, // see CreateSizeConstraintSet. // + // * SINGLE_QUERY_ARG: The parameter in the query string that you will inspect, + // such as UserName or SalesRegion. The maximum length for SINGLE_QUERY_ARG + // is 30 characters. 
+ // + // * ALL_QUERY_ARGS: Similar to SINGLE_QUERY_ARG, but rather than inspecting + // a single parameter, AWS WAF will inspect all parameters within the query + // for the value or regex pattern that you specify in TargetString. + // // Type is a required field Type *string `type:"string" required:"true" enum:"MatchFieldType"` } @@ -11997,6 +12454,71 @@ func (s *GetIPSetOutput) SetIPSet(v *IPSet) *GetIPSetOutput { return s } +type GetLoggingConfigurationInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the web ACL for which you want to get the + // LoggingConfiguration. + // + // ResourceArn is a required field + ResourceArn *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetLoggingConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLoggingConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLoggingConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLoggingConfigurationInput"} + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.ResourceArn != nil && len(*s.ResourceArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceArn sets the ResourceArn field's value. +func (s *GetLoggingConfigurationInput) SetResourceArn(v string) *GetLoggingConfigurationInput { + s.ResourceArn = &v + return s +} + +type GetLoggingConfigurationOutput struct { + _ struct{} `type:"structure"` + + // The LoggingConfiguration for the specified web ACL. + LoggingConfiguration *LoggingConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetLoggingConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLoggingConfigurationOutput) GoString() string { + return s.String() +} + +// SetLoggingConfiguration sets the LoggingConfiguration field's value. +func (s *GetLoggingConfigurationOutput) SetLoggingConfiguration(v *LoggingConfiguration) *GetLoggingConfigurationOutput { + s.LoggingConfiguration = v + return s +} + type GetPermissionPolicyInput struct { _ struct{} `type:"structure"` @@ -13059,15 +13581,15 @@ func (s *HTTPRequest) SetURI(v string) *HTTPRequest { } // Contains one or more IP addresses or blocks of IP addresses specified in -// Classless Inter-Domain Routing (CIDR) notation. AWS WAF supports /8, /16, -// /24, and /32 IP address ranges for IPv4, and /24, /32, /48, /56, /64 and -// /128 for IPv6. +// Classless Inter-Domain Routing (CIDR) notation. AWS WAF supports IPv4 address +// ranges: /8 and any range between /16 through /32. AWS WAF supports IPv6 address +// ranges: /16, /24, /32, /48, /56, /64, and /128. // // To specify an individual IP address, you specify the four-part IP address // followed by a /32, for example, 192.0.2.0/31. To block a range of IP addresses, -// you can specify a /128, /64, /56, /48, /32, /24, /16, or /8 CIDR. For more -// information about CIDR notation, see the Wikipedia entry Classless Inter-Domain -// Routing (https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). 
+// you can specify /8 or any range between /16 through /32 (for IPv4) or /16, +// /24, /32, /48, /56, /64, or /128 (for IPv6). For more information about CIDR +// notation, see the Wikipedia entry Classless Inter-Domain Routing (https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). type IPSet struct { _ struct{} `type:"structure"` @@ -13656,6 +14178,94 @@ func (s *ListIPSetsOutput) SetNextMarker(v string) *ListIPSetsOutput { return s } +type ListLoggingConfigurationsInput struct { + _ struct{} `type:"structure"` + + // Specifies the number of LoggingConfigurations that you want AWS WAF to return + // for this request. If you have more LoggingConfigurations than the number + // that you specify for Limit, the response includes a NextMarker value that + // you can use to get another batch of LoggingConfigurations. + Limit *int64 `type:"integer"` + + // If you specify a value for Limit and you have more LoggingConfigurations + // than the value of Limit, AWS WAF returns a NextMarker value in the response + // that allows you to list another group of LoggingConfigurations. For the second + // and subsequent ListLoggingConfigurations requests, specify the value of NextMarker + // from the previous response to get information about another batch of ListLoggingConfigurations. + NextMarker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListLoggingConfigurationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListLoggingConfigurationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListLoggingConfigurationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListLoggingConfigurationsInput"} + if s.NextMarker != nil && len(*s.NextMarker) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextMarker", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLimit sets the Limit field's value. +func (s *ListLoggingConfigurationsInput) SetLimit(v int64) *ListLoggingConfigurationsInput { + s.Limit = &v + return s +} + +// SetNextMarker sets the NextMarker field's value. +func (s *ListLoggingConfigurationsInput) SetNextMarker(v string) *ListLoggingConfigurationsInput { + s.NextMarker = &v + return s +} + +type ListLoggingConfigurationsOutput struct { + _ struct{} `type:"structure"` + + // An array of LoggingConfiguration objects. + LoggingConfigurations []*LoggingConfiguration `type:"list"` + + // If you have more LoggingConfigurations than the number that you specified + // for Limit in the request, the response includes a NextMarker value. To list + // more LoggingConfigurations, submit another ListLoggingConfigurations request, + // and specify the NextMarker value from the response in the NextMarker value + // in the next request. + NextMarker *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ListLoggingConfigurationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListLoggingConfigurationsOutput) GoString() string { + return s.String() +} + +// SetLoggingConfigurations sets the LoggingConfigurations field's value. 
+func (s *ListLoggingConfigurationsOutput) SetLoggingConfigurations(v []*LoggingConfiguration) *ListLoggingConfigurationsOutput { + s.LoggingConfigurations = v + return s +} + +// SetNextMarker sets the NextMarker field's value. +func (s *ListLoggingConfigurationsOutput) SetNextMarker(v string) *ListLoggingConfigurationsOutput { + s.NextMarker = &v + return s +} + type ListRateBasedRulesInput struct { _ struct{} `type:"structure"` @@ -14538,6 +15148,88 @@ func (s *ListXssMatchSetsOutput) SetXssMatchSets(v []*XssMatchSetSummary) *ListX return s } +// The Amazon Kinesis Data Firehose delivery streams, RedactedFields information, +// and the web ACL Amazon Resource Name (ARN). +type LoggingConfiguration struct { + _ struct{} `type:"structure"` + + // An array of Amazon Kinesis Data Firehose delivery stream ARNs. + // + // LogDestinationConfigs is a required field + LogDestinationConfigs []*string `min:"1" type:"list" required:"true"` + + // The parts of the request that you want redacted from the logs. For example, + // if you redact the cookie field, the cookie field in the delivery stream will + // be xxx. + RedactedFields []*FieldToMatch `type:"list"` + + // The Amazon Resource Name (ARN) of the web ACL that you want to associate + // with LogDestinationConfigs. + // + // ResourceArn is a required field + ResourceArn *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s LoggingConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoggingConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LoggingConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LoggingConfiguration"} + if s.LogDestinationConfigs == nil { + invalidParams.Add(request.NewErrParamRequired("LogDestinationConfigs")) + } + if s.LogDestinationConfigs != nil && len(s.LogDestinationConfigs) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogDestinationConfigs", 1)) + } + if s.ResourceArn == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceArn")) + } + if s.ResourceArn != nil && len(*s.ResourceArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceArn", 1)) + } + if s.RedactedFields != nil { + for i, v := range s.RedactedFields { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "RedactedFields", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogDestinationConfigs sets the LogDestinationConfigs field's value. +func (s *LoggingConfiguration) SetLogDestinationConfigs(v []*string) *LoggingConfiguration { + s.LogDestinationConfigs = v + return s +} + +// SetRedactedFields sets the RedactedFields field's value. +func (s *LoggingConfiguration) SetRedactedFields(v []*FieldToMatch) *LoggingConfiguration { + s.RedactedFields = v + return s +} + +// SetResourceArn sets the ResourceArn field's value. 
+func (s *LoggingConfiguration) SetResourceArn(v string) *LoggingConfiguration { + s.ResourceArn = &v + return s +} + // Specifies the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, RegexMatchSet, // GeoMatchSet, and SizeConstraintSet objects that you want to add to a Rule // and, for each object, indicates whether you want to negate the settings, @@ -14566,7 +15258,7 @@ type Predicate struct { // Negated is a required field Negated *bool `type:"boolean" required:"true"` - // The type of predicate in a Rule, such as ByteMatchSet or IPSet. + // The type of predicate in a Rule, such as ByteMatch or IPSet. // // Type is a required field Type *string `type:"string" required:"true" enum:"PredicateType"` @@ -14622,6 +15314,74 @@ func (s *Predicate) SetType(v string) *Predicate { return s } +type PutLoggingConfigurationInput struct { + _ struct{} `type:"structure"` + + // The Amazon Kinesis Data Firehose delivery streams that contains the inspected + // traffic information, the redacted fields details, and the Amazon Resource + // Name (ARN) of the web ACL to monitor. + // + // LoggingConfiguration is a required field + LoggingConfiguration *LoggingConfiguration `type:"structure" required:"true"` +} + +// String returns the string representation +func (s PutLoggingConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutLoggingConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutLoggingConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutLoggingConfigurationInput"} + if s.LoggingConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("LoggingConfiguration")) + } + if s.LoggingConfiguration != nil { + if err := s.LoggingConfiguration.Validate(); err != nil { + invalidParams.AddNested("LoggingConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLoggingConfiguration sets the LoggingConfiguration field's value. +func (s *PutLoggingConfigurationInput) SetLoggingConfiguration(v *LoggingConfiguration) *PutLoggingConfigurationInput { + s.LoggingConfiguration = v + return s +} + +type PutLoggingConfigurationOutput struct { + _ struct{} `type:"structure"` + + // The LoggingConfiguration that you submitted in the request. + LoggingConfiguration *LoggingConfiguration `type:"structure"` +} + +// String returns the string representation +func (s PutLoggingConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutLoggingConfigurationOutput) GoString() string { + return s.String() +} + +// SetLoggingConfiguration sets the LoggingConfiguration field's value. +func (s *PutLoggingConfigurationOutput) SetLoggingConfiguration(v *LoggingConfiguration) *PutLoggingConfigurationOutput { + s.LoggingConfiguration = v + return s +} + type PutPermissionPolicyInput struct { _ struct{} `type:"structure"` @@ -15010,6 +15770,8 @@ type RegexMatchTuple struct { // AWS WAF performs the transformation on RegexPatternSet before inspecting // a request for a match. // + // You can only specify a single type of TextTransformation. 
+ // // CMD_LINE // // When you're concerned that attackers are injecting an operating system commandline @@ -15660,7 +16422,7 @@ type SampledHTTPRequest struct { // The time at which AWS WAF received the request from your AWS resource, in // Unix time format (in seconds). - Timestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + Timestamp *time.Time `type:"timestamp"` // A value that indicates how one result in the response relates proportionally // to other results in the response. A result that has a weight of 2 represents @@ -15763,6 +16525,8 @@ type SizeConstraint struct { // AWS WAF performs the transformation on FieldToMatch before inspecting a request // for a match. // + // You can only specify a single type of TextTransformation. + // // Note that if you choose BODY for the value of Type, you must choose NONE // for TextTransformation because CloudFront forwards only the first 8192 bytes // for inspection. @@ -16240,11 +17004,13 @@ type SqlInjectionMatchTuple struct { // AWS WAF performs the transformation on FieldToMatch before inspecting a request // for a match. // + // You can only specify a single type of TextTransformation. + // // CMD_LINE // - // When you're concerned that attackers are injecting an operating system commandline - // command and using unusual formatting to disguise some or all of the command, - // use this option to perform the following transformations: + // When you're concerned that attackers are injecting an operating system command + // line command and using unusual formatting to disguise some or all of the + // command, use this option to perform the following transformations: // // * Delete the following characters: \ " ' ^ // @@ -16424,7 +17190,7 @@ type TimeWindow struct { // time range in the previous three hours. // // EndTime is a required field - EndTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + EndTime *time.Time `type:"timestamp" required:"true"` // The beginning of the time range from which you want GetSampledRequests to // return a sample of the requests that your AWS resource received. Specify @@ -16432,7 +17198,7 @@ type TimeWindow struct { // any time range in the previous three hours. // // StartTime is a required field - StartTime *time.Time `type:"timestamp" timestampFormat:"unix" required:"true"` + StartTime *time.Time `type:"timestamp" required:"true"` } // String returns the string representation @@ -16732,6 +17498,8 @@ type UpdateIPSetInput struct { // // * IPSetDescriptor: Contains Type and Value // + // You can insert a maximum of 1000 addresses in a single request. + // // Updates is a required field Updates []*IPSetUpdate `min:"1" type:"list" required:"true"` } @@ -18363,11 +19131,13 @@ type XssMatchTuple struct { // AWS WAF performs the transformation on FieldToMatch before inspecting a request // for a match. // + // You can only specify a single type of TextTransformation. 
+ // // CMD_LINE // - // When you're concerned that attackers are injecting an operating system commandline - // command and using unusual formatting to disguise some or all of the command, - // use this option to perform the following transformations: + // When you're concerned that attackers are injecting an operating system command + // line command and using unusual formatting to disguise some or all of the + // command, use this option to perform the following transformations: // // * Delete the following characters: \ " ' ^ // @@ -19292,6 +20062,12 @@ const ( // MatchFieldTypeBody is a MatchFieldType enum value MatchFieldTypeBody = "BODY" + + // MatchFieldTypeSingleQueryArg is a MatchFieldType enum value + MatchFieldTypeSingleQueryArg = "SINGLE_QUERY_ARG" + + // MatchFieldTypeAllQueryArgs is a MatchFieldType enum value + MatchFieldTypeAllQueryArgs = "ALL_QUERY_ARGS" ) const ( diff --git a/vendor/github.com/aws/aws-sdk-go/service/waf/errors.go b/vendor/github.com/aws/aws-sdk-go/service/waf/errors.go index 1e85bed52a4..02c9752b0bc 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/waf/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/waf/errors.go @@ -5,27 +5,27 @@ package waf const ( // ErrCodeDisallowedNameException for service response error code - // "DisallowedNameException". + // "WAFDisallowedNameException". // // The name specified is invalid. - ErrCodeDisallowedNameException = "DisallowedNameException" + ErrCodeDisallowedNameException = "WAFDisallowedNameException" // ErrCodeInternalErrorException for service response error code - // "InternalErrorException". + // "WAFInternalErrorException". // // The operation failed because of a system problem, even though the request // was valid. Retry your request. - ErrCodeInternalErrorException = "InternalErrorException" + ErrCodeInternalErrorException = "WAFInternalErrorException" // ErrCodeInvalidAccountException for service response error code - // "InvalidAccountException". + // "WAFInvalidAccountException". // // The operation failed because you tried to create, update, or delete an object // by using an invalid account identifier. - ErrCodeInvalidAccountException = "InvalidAccountException" + ErrCodeInvalidAccountException = "WAFInvalidAccountException" // ErrCodeInvalidOperationException for service response error code - // "InvalidOperationException". + // "WAFInvalidOperationException". // // The operation failed because there was nothing to do. For example: // @@ -41,15 +41,12 @@ const ( // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // - // * You tried to add an IP address to an IPSet, but the IP address already - // exists in the specified IPSet. - // // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. - ErrCodeInvalidOperationException = "InvalidOperationException" + ErrCodeInvalidOperationException = "WAFInvalidOperationException" // ErrCodeInvalidParameterException for service response error code - // "InvalidParameterException". + // "WAFInvalidParameterException". // // The operation failed because AWS WAF didn't recognize a parameter in the // request. For example: @@ -78,10 +75,10 @@ const ( // // * Your request references an ARN that is malformed, or corresponds to // a resource with which a web ACL cannot be associated. 
- ErrCodeInvalidParameterException = "InvalidParameterException" + ErrCodeInvalidParameterException = "WAFInvalidParameterException" // ErrCodeInvalidPermissionPolicyException for service response error code - // "InvalidPermissionPolicyException". + // "WAFInvalidPermissionPolicyException". // // The operation failed because the specified policy is not in the proper format. // @@ -93,8 +90,9 @@ const ( // // * Effect must specify Allow. // - // * The Action in the policy must be waf:UpdateWebACL or waf-regional:UpdateWebACL. - // Any extra or wildcard actions in the policy will be rejected. + // * The Action in the policy must be waf:UpdateWebACL, waf-regional:UpdateWebACL, + // waf:GetRuleGroup and waf-regional:GetRuleGroup . Any extra or wildcard + // actions in the policy will be rejected. // // * The policy cannot include a Resource parameter. // @@ -104,25 +102,25 @@ const ( // * The user making the request must be the owner of the RuleGroup. // // * Your policy must be composed using IAM Policy version 2012-10-17. - ErrCodeInvalidPermissionPolicyException = "InvalidPermissionPolicyException" + ErrCodeInvalidPermissionPolicyException = "WAFInvalidPermissionPolicyException" // ErrCodeInvalidRegexPatternException for service response error code - // "InvalidRegexPatternException". + // "WAFInvalidRegexPatternException". // // The regular expression (regex) you specified in RegexPatternString is invalid. - ErrCodeInvalidRegexPatternException = "InvalidRegexPatternException" + ErrCodeInvalidRegexPatternException = "WAFInvalidRegexPatternException" // ErrCodeLimitsExceededException for service response error code - // "LimitsExceededException". + // "WAFLimitsExceededException". // // The operation exceeds a resource limit, for example, the maximum number of // WebACL objects that you can create for an AWS account. For more information, // see Limits (http://docs.aws.amazon.com/waf/latest/developerguide/limits.html) // in the AWS WAF Developer Guide. - ErrCodeLimitsExceededException = "LimitsExceededException" + ErrCodeLimitsExceededException = "WAFLimitsExceededException" // ErrCodeNonEmptyEntityException for service response error code - // "NonEmptyEntityException". + // "WAFNonEmptyEntityException". // // The operation failed because you tried to delete an object that isn't empty. // For example: @@ -136,10 +134,10 @@ const ( // objects. // // * You tried to delete an IPSet that references one or more IP addresses. - ErrCodeNonEmptyEntityException = "NonEmptyEntityException" + ErrCodeNonEmptyEntityException = "WAFNonEmptyEntityException" // ErrCodeNonexistentContainerException for service response error code - // "NonexistentContainerException". + // "WAFNonexistentContainerException". // // The operation failed because you tried to add an object to or delete an object // from another object that doesn't exist. For example: @@ -155,16 +153,16 @@ const ( // // * You tried to add a ByteMatchTuple to or delete a ByteMatchTuple from // a ByteMatchSet that doesn't exist. - ErrCodeNonexistentContainerException = "NonexistentContainerException" + ErrCodeNonexistentContainerException = "WAFNonexistentContainerException" // ErrCodeNonexistentItemException for service response error code - // "NonexistentItemException". + // "WAFNonexistentItemException". // // The operation failed because the referenced object doesn't exist. 
- ErrCodeNonexistentItemException = "NonexistentItemException" + ErrCodeNonexistentItemException = "WAFNonexistentItemException" // ErrCodeReferencedItemException for service response error code - // "ReferencedItemException". + // "WAFReferencedItemException". // // The operation failed because you tried to delete an object that is still // in use. For example: @@ -172,18 +170,18 @@ const ( // * You tried to delete a ByteMatchSet that is still referenced by a Rule. // // * You tried to delete a Rule that is still referenced by a WebACL. - ErrCodeReferencedItemException = "ReferencedItemException" + ErrCodeReferencedItemException = "WAFReferencedItemException" // ErrCodeStaleDataException for service response error code - // "StaleDataException". + // "WAFStaleDataException". // // The operation failed because you tried to create, update, or delete an object // by using a change token that has already been used. - ErrCodeStaleDataException = "StaleDataException" + ErrCodeStaleDataException = "WAFStaleDataException" // ErrCodeSubscriptionNotFoundException for service response error code - // "SubscriptionNotFoundException". + // "WAFSubscriptionNotFoundException". // // The specified subscription does not exist. - ErrCodeSubscriptionNotFoundException = "SubscriptionNotFoundException" + ErrCodeSubscriptionNotFoundException = "WAFSubscriptionNotFoundException" ) diff --git a/vendor/github.com/aws/aws-sdk-go/service/waf/service.go b/vendor/github.com/aws/aws-sdk-go/service/waf/service.go index 91648a19f40..09bf43d9eeb 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/waf/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/waf/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "waf" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "waf" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "WAF" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the WAF client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/wafregional/api.go b/vendor/github.com/aws/aws-sdk-go/service/wafregional/api.go index 0a7d006f15a..a9415a5b904 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/wafregional/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/wafregional/api.go @@ -13,8 +13,8 @@ const opAssociateWebACL = "AssociateWebACL" // AssociateWebACLRequest generates a "aws/request.Request" representing the // client's request for the AssociateWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
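The error-code constants above now carry the service's `WAF` prefix in their string values (for example `WAFNonexistentItemException`). Callers that compare against the exported `ErrCode*` constants rather than hard-coded strings are unaffected by the rename. Below is a minimal sketch of that pattern; it assumes the upstream `github.com/aws/aws-sdk-go` import paths (rather than the vendored `github.com` mirror paths shown in this diff), default credentials/region from the environment, and a placeholder IPSet ID used purely for illustration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
)

func main() {
	// Session picks up credentials and region from the environment.
	sess := session.Must(session.NewSession())
	conn := waf.New(sess)

	// "example-ipset-id" is a placeholder; substitute a real IPSetId.
	_, err := conn.GetIPSet(&waf.GetIPSetInput{
		IPSetId: aws.String("example-ipset-id"),
	})
	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == waf.ErrCodeNonexistentItemException {
			// waf.ErrCodeNonexistentItemException now evaluates to
			// "WAFNonexistentItemException", so this branch keeps matching
			// across the rename without any change to calling code.
			fmt.Println("IP set does not exist")
			return
		}
		log.Fatal(err)
	}
}
```

Comparing on `awsErr.Code()` against the exported constants is the same approach the provider's resource code can rely on, which is why the string-value change in errors.go does not ripple further than the SDK itself.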
@@ -133,8 +133,8 @@ const opCreateByteMatchSet = "CreateByteMatchSet" // CreateByteMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateByteMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -280,8 +280,8 @@ const opCreateGeoMatchSet = "CreateGeoMatchSet" // CreateGeoMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateGeoMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -426,8 +426,8 @@ const opCreateIPSet = "CreateIPSet" // CreateIPSetRequest generates a "aws/request.Request" representing the // client's request for the CreateIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -573,8 +573,8 @@ const opCreateRateBasedRule = "CreateRateBasedRule" // CreateRateBasedRuleRequest generates a "aws/request.Request" representing the // client's request for the CreateRateBasedRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -755,8 +755,8 @@ const opCreateRegexMatchSet = "CreateRegexMatchSet" // CreateRegexMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateRegexMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -870,8 +870,8 @@ const opCreateRegexPatternSet = "CreateRegexPatternSet" // CreateRegexPatternSetRequest generates a "aws/request.Request" representing the // client's request for the CreateRegexPatternSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -981,8 +981,8 @@ const opCreateRule = "CreateRule" // CreateRuleRequest generates a "aws/request.Request" representing the // client's request for the CreateRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1138,8 +1138,8 @@ const opCreateRuleGroup = "CreateRuleGroup" // CreateRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the CreateRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1244,8 +1244,8 @@ const opCreateSizeConstraintSet = "CreateSizeConstraintSet" // CreateSizeConstraintSetRequest generates a "aws/request.Request" representing the // client's request for the CreateSizeConstraintSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1392,8 +1392,8 @@ const opCreateSqlInjectionMatchSet = "CreateSqlInjectionMatchSet" // CreateSqlInjectionMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateSqlInjectionMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1536,8 +1536,8 @@ const opCreateWebACL = "CreateWebACL" // CreateWebACLRequest generates a "aws/request.Request" representing the // client's request for the CreateWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1692,8 +1692,8 @@ const opCreateXssMatchSet = "CreateXssMatchSet" // CreateXssMatchSetRequest generates a "aws/request.Request" representing the // client's request for the CreateXssMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1837,8 +1837,8 @@ const opDeleteByteMatchSet = "DeleteByteMatchSet" // DeleteByteMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteByteMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -1964,8 +1964,8 @@ const opDeleteGeoMatchSet = "DeleteGeoMatchSet" // DeleteGeoMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteGeoMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2090,8 +2090,8 @@ const opDeleteIPSet = "DeleteIPSet" // DeleteIPSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2212,12 +2212,99 @@ func (c *WAFRegional) DeleteIPSetWithContext(ctx aws.Context, input *waf.DeleteI return out, req.Send() } +const opDeleteLoggingConfiguration = "DeleteLoggingConfiguration" + +// DeleteLoggingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLoggingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLoggingConfiguration for more information on using the DeleteLoggingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLoggingConfigurationRequest method. 
+// req, resp := client.DeleteLoggingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/DeleteLoggingConfiguration +func (c *WAFRegional) DeleteLoggingConfigurationRequest(input *waf.DeleteLoggingConfigurationInput) (req *request.Request, output *waf.DeleteLoggingConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteLoggingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &waf.DeleteLoggingConfigurationInput{} + } + + output = &waf.DeleteLoggingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteLoggingConfiguration API operation for AWS WAF Regional. +// +// Permanently deletes the LoggingConfiguration from the specified web ACL. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS WAF Regional's +// API operation DeleteLoggingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeWAFInternalErrorException "WAFInternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeWAFNonexistentItemException "WAFNonexistentItemException" +// The operation failed because the referenced object doesn't exist. +// +// * ErrCodeWAFStaleDataException "WAFStaleDataException" +// The operation failed because you tried to create, update, or delete an object +// by using a change token that has already been used. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/DeleteLoggingConfiguration +func (c *WAFRegional) DeleteLoggingConfiguration(input *waf.DeleteLoggingConfigurationInput) (*waf.DeleteLoggingConfigurationOutput, error) { + req, out := c.DeleteLoggingConfigurationRequest(input) + return out, req.Send() +} + +// DeleteLoggingConfigurationWithContext is the same as DeleteLoggingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLoggingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WAFRegional) DeleteLoggingConfigurationWithContext(ctx aws.Context, input *waf.DeleteLoggingConfigurationInput, opts ...request.Option) (*waf.DeleteLoggingConfigurationOutput, error) { + req, out := c.DeleteLoggingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeletePermissionPolicy = "DeletePermissionPolicy" // DeletePermissionPolicyRequest generates a "aws/request.Request" representing the // client's request for the DeletePermissionPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. @@ -2305,8 +2392,8 @@ const opDeleteRateBasedRule = "DeleteRateBasedRule" // DeleteRateBasedRuleRequest generates a "aws/request.Request" representing the // client's request for the DeleteRateBasedRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2433,8 +2520,8 @@ const opDeleteRegexMatchSet = "DeleteRegexMatchSet" // DeleteRegexMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteRegexMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2560,8 +2647,8 @@ const opDeleteRegexPatternSet = "DeleteRegexPatternSet" // DeleteRegexPatternSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteRegexPatternSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2675,8 +2762,8 @@ const opDeleteRule = "DeleteRule" // DeleteRuleRequest generates a "aws/request.Request" representing the // client's request for the DeleteRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2801,8 +2888,8 @@ const opDeleteRuleGroup = "DeleteRuleGroup" // DeleteRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the DeleteRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -2896,6 +2983,24 @@ func (c *WAFRegional) DeleteRuleGroupRequest(input *waf.DeleteRuleGroupInput) (r // // * You tried to delete an IPSet that references one or more IP addresses. // +// * ErrCodeWAFInvalidOperationException "WAFInvalidOperationException" +// The operation failed because there was nothing to do. For example: +// +// * You tried to remove a Rule from a WebACL, but the Rule isn't in the +// specified WebACL. 
+// +// * You tried to remove an IP address from an IPSet, but the IP address +// isn't in the specified IPSet. +// +// * You tried to remove a ByteMatchTuple from a ByteMatchSet, but the ByteMatchTuple +// isn't in the specified WebACL. +// +// * You tried to add a Rule to a WebACL, but the Rule already exists in +// the specified WebACL. +// +// * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple +// already exists in the specified WebACL. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/DeleteRuleGroup func (c *WAFRegional) DeleteRuleGroup(input *waf.DeleteRuleGroupInput) (*waf.DeleteRuleGroupOutput, error) { req, out := c.DeleteRuleGroupRequest(input) @@ -2922,8 +3027,8 @@ const opDeleteSizeConstraintSet = "DeleteSizeConstraintSet" // DeleteSizeConstraintSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteSizeConstraintSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3049,8 +3154,8 @@ const opDeleteSqlInjectionMatchSet = "DeleteSqlInjectionMatchSet" // DeleteSqlInjectionMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteSqlInjectionMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3177,8 +3282,8 @@ const opDeleteWebACL = "DeleteWebACL" // DeleteWebACLRequest generates a "aws/request.Request" representing the // client's request for the DeleteWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3300,8 +3405,8 @@ const opDeleteXssMatchSet = "DeleteXssMatchSet" // DeleteXssMatchSetRequest generates a "aws/request.Request" representing the // client's request for the DeleteXssMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3427,8 +3532,8 @@ const opDisassociateWebACL = "DisassociateWebACL" // DisassociateWebACLRequest generates a "aws/request.Request" representing the // client's request for the DisassociateWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3543,8 +3648,8 @@ const opGetByteMatchSet = "GetByteMatchSet" // GetByteMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetByteMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3630,8 +3735,8 @@ const opGetChangeToken = "GetChangeToken" // GetChangeTokenRequest generates a "aws/request.Request" representing the // client's request for the GetChangeToken operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3724,8 +3829,8 @@ const opGetChangeTokenStatus = "GetChangeTokenStatus" // GetChangeTokenStatusRequest generates a "aws/request.Request" representing the // client's request for the GetChangeTokenStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3817,8 +3922,8 @@ const opGetGeoMatchSet = "GetGeoMatchSet" // GetGeoMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetGeoMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3904,8 +4009,8 @@ const opGetIPSet = "GetIPSet" // GetIPSetRequest generates a "aws/request.Request" representing the // client's request for the GetIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -3987,12 +4092,95 @@ func (c *WAFRegional) GetIPSetWithContext(ctx aws.Context, input *waf.GetIPSetIn return out, req.Send() } +const opGetLoggingConfiguration = "GetLoggingConfiguration" + +// GetLoggingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the GetLoggingConfiguration operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetLoggingConfiguration for more information on using the GetLoggingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetLoggingConfigurationRequest method. +// req, resp := client.GetLoggingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/GetLoggingConfiguration +func (c *WAFRegional) GetLoggingConfigurationRequest(input *waf.GetLoggingConfigurationInput) (req *request.Request, output *waf.GetLoggingConfigurationOutput) { + op := &request.Operation{ + Name: opGetLoggingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &waf.GetLoggingConfigurationInput{} + } + + output = &waf.GetLoggingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLoggingConfiguration API operation for AWS WAF Regional. +// +// Returns the LoggingConfiguration for the specified web ACL. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS WAF Regional's +// API operation GetLoggingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeWAFInternalErrorException "WAFInternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeWAFNonexistentItemException "WAFNonexistentItemException" +// The operation failed because the referenced object doesn't exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/GetLoggingConfiguration +func (c *WAFRegional) GetLoggingConfiguration(input *waf.GetLoggingConfigurationInput) (*waf.GetLoggingConfigurationOutput, error) { + req, out := c.GetLoggingConfigurationRequest(input) + return out, req.Send() +} + +// GetLoggingConfigurationWithContext is the same as GetLoggingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See GetLoggingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WAFRegional) GetLoggingConfigurationWithContext(ctx aws.Context, input *waf.GetLoggingConfigurationInput, opts ...request.Option) (*waf.GetLoggingConfigurationOutput, error) { + req, out := c.GetLoggingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetPermissionPolicy = "GetPermissionPolicy" // GetPermissionPolicyRequest generates a "aws/request.Request" representing the // client's request for the GetPermissionPolicy operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4074,8 +4262,8 @@ const opGetRateBasedRule = "GetRateBasedRule" // GetRateBasedRuleRequest generates a "aws/request.Request" representing the // client's request for the GetRateBasedRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4162,8 +4350,8 @@ const opGetRateBasedRuleManagedKeys = "GetRateBasedRuleManagedKeys" // GetRateBasedRuleManagedKeysRequest generates a "aws/request.Request" representing the // client's request for the GetRateBasedRuleManagedKeys operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4281,8 +4469,8 @@ const opGetRegexMatchSet = "GetRegexMatchSet" // GetRegexMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetRegexMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4368,8 +4556,8 @@ const opGetRegexPatternSet = "GetRegexPatternSet" // GetRegexPatternSetRequest generates a "aws/request.Request" representing the // client's request for the GetRegexPatternSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4455,8 +4643,8 @@ const opGetRule = "GetRule" // GetRuleRequest generates a "aws/request.Request" representing the // client's request for the GetRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4543,8 +4731,8 @@ const opGetRuleGroup = "GetRuleGroup" // GetRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the GetRuleGroup operation. 
The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4629,8 +4817,8 @@ const opGetSampledRequests = "GetSampledRequests" // GetSampledRequestsRequest generates a "aws/request.Request" representing the // client's request for the GetSampledRequests operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4722,8 +4910,8 @@ const opGetSizeConstraintSet = "GetSizeConstraintSet" // GetSizeConstraintSetRequest generates a "aws/request.Request" representing the // client's request for the GetSizeConstraintSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4809,8 +4997,8 @@ const opGetSqlInjectionMatchSet = "GetSqlInjectionMatchSet" // GetSqlInjectionMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetSqlInjectionMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4896,8 +5084,8 @@ const opGetWebACL = "GetWebACL" // GetWebACLRequest generates a "aws/request.Request" representing the // client's request for the GetWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -4983,8 +5171,8 @@ const opGetWebACLForResource = "GetWebACLForResource" // GetWebACLForResourceRequest generates a "aws/request.Request" representing the // client's request for the GetWebACLForResource operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -5103,8 +5291,8 @@ const opGetXssMatchSet = "GetXssMatchSet" // GetXssMatchSetRequest generates a "aws/request.Request" representing the // client's request for the GetXssMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5190,8 +5378,8 @@ const opListActivatedRulesInRuleGroup = "ListActivatedRulesInRuleGroup" // ListActivatedRulesInRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the ListActivatedRulesInRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5302,8 +5490,8 @@ const opListByteMatchSets = "ListByteMatchSets" // ListByteMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListByteMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5386,8 +5574,8 @@ const opListGeoMatchSets = "ListGeoMatchSets" // ListGeoMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListGeoMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5470,8 +5658,8 @@ const opListIPSets = "ListIPSets" // ListIPSetsRequest generates a "aws/request.Request" representing the // client's request for the ListIPSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5550,12 +5738,124 @@ func (c *WAFRegional) ListIPSetsWithContext(ctx aws.Context, input *waf.ListIPSe return out, req.Send() } +const opListLoggingConfigurations = "ListLoggingConfigurations" + +// ListLoggingConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the ListLoggingConfigurations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See ListLoggingConfigurations for more information on using the ListLoggingConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListLoggingConfigurationsRequest method. +// req, resp := client.ListLoggingConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/ListLoggingConfigurations +func (c *WAFRegional) ListLoggingConfigurationsRequest(input *waf.ListLoggingConfigurationsInput) (req *request.Request, output *waf.ListLoggingConfigurationsOutput) { + op := &request.Operation{ + Name: opListLoggingConfigurations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &waf.ListLoggingConfigurationsInput{} + } + + output = &waf.ListLoggingConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListLoggingConfigurations API operation for AWS WAF Regional. +// +// Returns an array of LoggingConfiguration objects. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS WAF Regional's +// API operation ListLoggingConfigurations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeWAFInternalErrorException "WAFInternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeWAFNonexistentItemException "WAFNonexistentItemException" +// The operation failed because the referenced object doesn't exist. +// +// * ErrCodeWAFInvalidParameterException "WAFInvalidParameterException" +// The operation failed because AWS WAF didn't recognize a parameter in the +// request. For example: +// +// * You specified an invalid parameter name. +// +// * You specified an invalid value. +// +// * You tried to update an object (ByteMatchSet, IPSet, Rule, or WebACL) +// using an action other than INSERT or DELETE. +// +// * You tried to create a WebACL with a DefaultActionType other than ALLOW, +// BLOCK, or COUNT. +// +// * You tried to create a RateBasedRule with a RateKey value other than +// IP. +// +// * You tried to update a WebACL with a WafActionType other than ALLOW, +// BLOCK, or COUNT. +// +// * You tried to update a ByteMatchSet with a FieldToMatchType other than +// HEADER, METHOD, QUERY_STRING, URI, or BODY. +// +// * You tried to update a ByteMatchSet with a Field of HEADER but no value +// for Data. +// +// * Your request references an ARN that is malformed, or corresponds to +// a resource with which a web ACL cannot be associated. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/ListLoggingConfigurations +func (c *WAFRegional) ListLoggingConfigurations(input *waf.ListLoggingConfigurationsInput) (*waf.ListLoggingConfigurationsOutput, error) { + req, out := c.ListLoggingConfigurationsRequest(input) + return out, req.Send() +} + +// ListLoggingConfigurationsWithContext is the same as ListLoggingConfigurations with the addition of +// the ability to pass a context and additional request options. 
+// +// See ListLoggingConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WAFRegional) ListLoggingConfigurationsWithContext(ctx aws.Context, input *waf.ListLoggingConfigurationsInput, opts ...request.Option) (*waf.ListLoggingConfigurationsOutput, error) { + req, out := c.ListLoggingConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListRateBasedRules = "ListRateBasedRules" // ListRateBasedRulesRequest generates a "aws/request.Request" representing the // client's request for the ListRateBasedRules operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5638,8 +5938,8 @@ const opListRegexMatchSets = "ListRegexMatchSets" // ListRegexMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListRegexMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5722,8 +6022,8 @@ const opListRegexPatternSets = "ListRegexPatternSets" // ListRegexPatternSetsRequest generates a "aws/request.Request" representing the // client's request for the ListRegexPatternSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5806,8 +6106,8 @@ const opListResourcesForWebACL = "ListResourcesForWebACL" // ListResourcesForWebACLRequest generates a "aws/request.Request" representing the // client's request for the ListResourcesForWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5867,6 +6167,35 @@ func (c *WAFRegional) ListResourcesForWebACLRequest(input *ListResourcesForWebAC // * ErrCodeWAFNonexistentItemException "WAFNonexistentItemException" // The operation failed because the referenced object doesn't exist. // +// * ErrCodeWAFInvalidParameterException "WAFInvalidParameterException" +// The operation failed because AWS WAF didn't recognize a parameter in the +// request. For example: +// +// * You specified an invalid parameter name. 
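For the new `ListLoggingConfigurations` operation, a sketch of manual pagination; the `Limit` and `NextMarker` field names follow the usual waf list-input shape and should be treated as assumptions, as should the client setup:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
	"github.com/aws/aws-sdk-go/service/wafregional"
)

func main() {
	svc := wafregional.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	// Page through all logging configurations, 50 at a time.
	input := &waf.ListLoggingConfigurationsInput{Limit: aws.Int64(50)}
	for {
		page, err := svc.ListLoggingConfigurations(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, lc := range page.LoggingConfigurations {
			fmt.Println(aws.StringValue(lc.ResourceArn))
		}
		if aws.StringValue(page.NextMarker) == "" {
			break // no more pages
		}
		input.NextMarker = page.NextMarker
	}
}
```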
+// +// * You specified an invalid value. +// +// * You tried to update an object (ByteMatchSet, IPSet, Rule, or WebACL) +// using an action other than INSERT or DELETE. +// +// * You tried to create a WebACL with a DefaultActionType other than ALLOW, +// BLOCK, or COUNT. +// +// * You tried to create a RateBasedRule with a RateKey value other than +// IP. +// +// * You tried to update a WebACL with a WafActionType other than ALLOW, +// BLOCK, or COUNT. +// +// * You tried to update a ByteMatchSet with a FieldToMatchType other than +// HEADER, METHOD, QUERY_STRING, URI, or BODY. +// +// * You tried to update a ByteMatchSet with a Field of HEADER but no value +// for Data. +// +// * Your request references an ARN that is malformed, or corresponds to +// a resource with which a web ACL cannot be associated. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/ListResourcesForWebACL func (c *WAFRegional) ListResourcesForWebACL(input *ListResourcesForWebACLInput) (*ListResourcesForWebACLOutput, error) { req, out := c.ListResourcesForWebACLRequest(input) @@ -5893,8 +6222,8 @@ const opListRuleGroups = "ListRuleGroups" // ListRuleGroupsRequest generates a "aws/request.Request" representing the // client's request for the ListRuleGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -5973,8 +6302,8 @@ const opListRules = "ListRules" // ListRulesRequest generates a "aws/request.Request" representing the // client's request for the ListRules operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6057,8 +6386,8 @@ const opListSizeConstraintSets = "ListSizeConstraintSets" // ListSizeConstraintSetsRequest generates a "aws/request.Request" representing the // client's request for the ListSizeConstraintSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6141,8 +6470,8 @@ const opListSqlInjectionMatchSets = "ListSqlInjectionMatchSets" // ListSqlInjectionMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListSqlInjectionMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
@@ -6225,8 +6554,8 @@ const opListSubscribedRuleGroups = "ListSubscribedRuleGroups" // ListSubscribedRuleGroupsRequest generates a "aws/request.Request" representing the // client's request for the ListSubscribedRuleGroups operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6308,8 +6637,8 @@ const opListWebACLs = "ListWebACLs" // ListWebACLsRequest generates a "aws/request.Request" representing the // client's request for the ListWebACLs operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6392,8 +6721,8 @@ const opListXssMatchSets = "ListXssMatchSets" // ListXssMatchSetsRequest generates a "aws/request.Request" representing the // client's request for the ListXssMatchSets operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6472,12 +6801,122 @@ func (c *WAFRegional) ListXssMatchSetsWithContext(ctx aws.Context, input *waf.Li return out, req.Send() } +const opPutLoggingConfiguration = "PutLoggingConfiguration" + +// PutLoggingConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the PutLoggingConfiguration operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutLoggingConfiguration for more information on using the PutLoggingConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutLoggingConfigurationRequest method. 
+// req, resp := client.PutLoggingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/PutLoggingConfiguration +func (c *WAFRegional) PutLoggingConfigurationRequest(input *waf.PutLoggingConfigurationInput) (req *request.Request, output *waf.PutLoggingConfigurationOutput) { + op := &request.Operation{ + Name: opPutLoggingConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &waf.PutLoggingConfigurationInput{} + } + + output = &waf.PutLoggingConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutLoggingConfiguration API operation for AWS WAF Regional. +// +// Associates a LoggingConfiguration with a specified web ACL. +// +// You can access information about all traffic that AWS WAF inspects using +// the following steps: +// +// Create an Amazon Kinesis Data Firehose . +// +// Associate that firehose to your web ACL using a PutLoggingConfiguration request. +// +// When you successfully enable logging using a PutLoggingConfiguration request, +// AWS WAF will create a service linked role with the necessary permissions +// to write logs to the Amazon Kinesis Data Firehose. For more information, +// see Logging Web ACL Traffic Information (http://docs.aws.amazon.com/waf/latest/developerguide/logging.html) +// in the AWS WAF Developer Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS WAF Regional's +// API operation PutLoggingConfiguration for usage and error information. +// +// Returned Error Codes: +// * ErrCodeWAFInternalErrorException "WAFInternalErrorException" +// The operation failed because of a system problem, even though the request +// was valid. Retry your request. +// +// * ErrCodeWAFNonexistentItemException "WAFNonexistentItemException" +// The operation failed because the referenced object doesn't exist. +// +// * ErrCodeWAFStaleDataException "WAFStaleDataException" +// The operation failed because you tried to create, update, or delete an object +// by using a change token that has already been used. +// +// * ErrCodeWAFServiceLinkedRoleErrorException "WAFServiceLinkedRoleErrorException" +// AWS WAF is not able to access the service linked role. This can be caused +// by a previous PutLoggingConfiguration request, which can lock the service +// linked role for about 20 seconds. Please try your request again. The service +// linked role can also be locked by a previous DeleteServiceLinkedRole request, +// which can lock the role for 15 minutes or more. If you recently made a DeleteServiceLinkedRole, +// wait at least 15 minutes and try the request again. If you receive this same +// exception again, you will have to wait additional time until the role is +// unlocked. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/waf-regional-2016-11-28/PutLoggingConfiguration +func (c *WAFRegional) PutLoggingConfiguration(input *waf.PutLoggingConfigurationInput) (*waf.PutLoggingConfigurationOutput, error) { + req, out := c.PutLoggingConfigurationRequest(input) + return out, req.Send() +} + +// PutLoggingConfigurationWithContext is the same as PutLoggingConfiguration with the addition of +// the ability to pass a context and additional request options. 
+// +// See PutLoggingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WAFRegional) PutLoggingConfigurationWithContext(ctx aws.Context, input *waf.PutLoggingConfigurationInput, opts ...request.Option) (*waf.PutLoggingConfigurationOutput, error) { + req, out := c.PutLoggingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPutPermissionPolicy = "PutPermissionPolicy" // PutPermissionPolicyRequest generates a "aws/request.Request" representing the // client's request for the PutPermissionPolicy operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6527,8 +6966,9 @@ func (c *WAFRegional) PutPermissionPolicyRequest(input *waf.PutPermissionPolicyI // // * Effect must specify Allow. // -// * The Action in the policy must be waf:UpdateWebACL and waf-regional:UpdateWebACL. -// Any extra or wildcard actions in the policy will be rejected. +// * The Action in the policy must be waf:UpdateWebACL, waf-regional:UpdateWebACL, +// waf:GetRuleGroup and waf-regional:GetRuleGroup . Any extra or wildcard +// actions in the policy will be rejected. // // * The policy cannot include a Resource parameter. // @@ -6573,8 +7013,9 @@ func (c *WAFRegional) PutPermissionPolicyRequest(input *waf.PutPermissionPolicyI // // * Effect must specify Allow. // -// * The Action in the policy must be waf:UpdateWebACL or waf-regional:UpdateWebACL. -// Any extra or wildcard actions in the policy will be rejected. +// * The Action in the policy must be waf:UpdateWebACL, waf-regional:UpdateWebACL, +// waf:GetRuleGroup and waf-regional:GetRuleGroup . Any extra or wildcard +// actions in the policy will be rejected. // // * The policy cannot include a Resource parameter. // @@ -6611,8 +7052,8 @@ const opUpdateByteMatchSet = "UpdateByteMatchSet" // UpdateByteMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateByteMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6720,9 +7161,6 @@ func (c *WAFRegional) UpdateByteMatchSetRequest(input *waf.UpdateByteMatchSetInp // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. 
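The `PutLoggingConfiguration` description above boils down to two steps: create a Kinesis Data Firehose delivery stream, then associate its ARN with the web ACL. A sketch of the second step, including handling of the new `WAFServiceLinkedRoleErrorException` retry case; the ARNs are placeholders and the `LoggingConfiguration` field names (`ResourceArn`, `LogDestinationConfigs`) are assumptions:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
	"github.com/aws/aws-sdk-go/service/wafregional"
)

func main() {
	svc := wafregional.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	_, err := svc.PutLoggingConfiguration(&waf.PutLoggingConfigurationInput{
		LoggingConfiguration: &waf.LoggingConfiguration{
			// Web ACL to log for, and the Firehose delivery stream to write to
			// (both ARNs are placeholders).
			ResourceArn:           aws.String("arn:aws:waf-regional:us-west-2:123456789012:webacl/EXAMPLE-ID"),
			LogDestinationConfigs: []*string{aws.String("arn:aws:firehose:us-west-2:123456789012:deliverystream/aws-waf-logs-example")},
		},
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == wafregional.ErrCodeWAFServiceLinkedRoleErrorException {
		// The service-linked role can be briefly locked by a previous
		// PutLoggingConfiguration or DeleteServiceLinkedRole call; retry later.
		log.Println("service-linked role is locked, retry later:", aerr.Message())
		return
	}
	if err != nil {
		log.Fatal(err)
	}
}
```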
// @@ -6810,8 +7248,8 @@ const opUpdateGeoMatchSet = "UpdateGeoMatchSet" // UpdateGeoMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateGeoMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -6914,9 +7352,6 @@ func (c *WAFRegional) UpdateGeoMatchSetRequest(input *waf.UpdateGeoMatchSetInput // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -7008,8 +7443,8 @@ const opUpdateIPSet = "UpdateIPSet" // UpdateIPSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateIPSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7061,9 +7496,10 @@ func (c *WAFRegional) UpdateIPSetRequest(input *waf.UpdateIPSetInput) (req *requ // range of IP addresses from 192.0.2.0 to 192.0.2.255) or 192.0.2.44/32 // (for the individual IP address 192.0.2.44). // -// AWS WAF supports /8, /16, /24, and /32 IP address ranges for IPv4, and /24, -// /32, /48, /56, /64 and /128 for IPv6. For more information about CIDR notation, -// see the Wikipedia entry Classless Inter-Domain Routing (https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). +// AWS WAF supports IPv4 address ranges: /8 and any range between /16 through +// /32. AWS WAF supports IPv6 address ranges: /16, /24, /32, /48, /56, /64, +// and /128. For more information about CIDR notation, see the Wikipedia entry +// Classless Inter-Domain Routing (https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). // // IPv6 addresses can be represented using any of the following formats: // @@ -7095,6 +7531,8 @@ func (c *WAFRegional) UpdateIPSetRequest(input *waf.UpdateIPSetInput) (req *requ // and/or the IP addresses that you want to delete. If you want to change an // IP address, you delete the existing IP address and add the new one. // +// You can insert a maximum of 1000 addresses in a single request. +// // For more information about how to use the AWS WAF API to allow or block HTTP // requests, see the AWS WAF Developer Guide (http://docs.aws.amazon.com/waf/latest/developerguide/). // @@ -7133,9 +7571,6 @@ func (c *WAFRegional) UpdateIPSetRequest(input *waf.UpdateIPSetInput) (req *requ // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. 
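The `UpdateIPSet` documentation above describes the usual change-token flow: call `GetChangeToken`, then submit the insertions or deletions together with that token. A sketch of inserting a single IPv4 /24 range; the IPSet ID is a placeholder, and the enum constants come from the waf package:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/waf"
	"github.com/aws/aws-sdk-go/service/wafregional"
)

func main() {
	svc := wafregional.New(session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")})))

	// Every mutating WAF call needs a fresh change token.
	token, err := svc.GetChangeToken(&waf.GetChangeTokenInput{})
	if err != nil {
		log.Fatal(err)
	}

	// Insert 192.0.2.0/24 into an existing IPSet (the ID is a placeholder).
	_, err = svc.UpdateIPSet(&waf.UpdateIPSetInput{
		ChangeToken: token.ChangeToken,
		IPSetId:     aws.String("example-ipset-id"),
		Updates: []*waf.IPSetUpdate{{
			Action: aws.String(waf.ChangeActionInsert),
			IPSetDescriptor: &waf.IPSetDescriptor{
				Type:  aws.String(waf.IPSetDescriptorTypeIpv4),
				Value: aws.String("192.0.2.0/24"),
			},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```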
// @@ -7227,8 +7662,8 @@ const opUpdateRateBasedRule = "UpdateRateBasedRule" // UpdateRateBasedRuleRequest generates a "aws/request.Request" representing the // client's request for the UpdateRateBasedRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7341,9 +7776,6 @@ func (c *WAFRegional) UpdateRateBasedRuleRequest(input *waf.UpdateRateBasedRuleI // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -7435,8 +7867,8 @@ const opUpdateRegexMatchSet = "UpdateRegexMatchSet" // UpdateRegexMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateRegexMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7569,9 +8001,6 @@ func (c *WAFRegional) UpdateRegexMatchSetRequest(input *waf.UpdateRegexMatchSetI // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -7605,8 +8034,8 @@ const opUpdateRegexPatternSet = "UpdateRegexPatternSet" // UpdateRegexPatternSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateRegexPatternSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7733,9 +8162,6 @@ func (c *WAFRegional) UpdateRegexPatternSetRequest(input *waf.UpdateRegexPattern // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -7772,8 +8198,8 @@ const opUpdateRule = "UpdateRule" // UpdateRuleRequest generates a "aws/request.Request" representing the // client's request for the UpdateRule operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. 
// // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -7881,9 +8307,6 @@ func (c *WAFRegional) UpdateRuleRequest(input *waf.UpdateRuleInput) (req *reques // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -7975,8 +8398,8 @@ const opUpdateRuleGroup = "UpdateRuleGroup" // UpdateRuleGroupRequest generates a "aws/request.Request" representing the // client's request for the UpdateRuleGroup operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8089,9 +8512,6 @@ func (c *WAFRegional) UpdateRuleGroupRequest(input *waf.UpdateRuleGroupInput) (r // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -8156,8 +8576,8 @@ const opUpdateSizeConstraintSet = "UpdateSizeConstraintSet" // UpdateSizeConstraintSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateSizeConstraintSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8211,6 +8631,8 @@ func (c *WAFRegional) UpdateSizeConstraintSetRequest(input *waf.UpdateSizeConstr // of the request body are not supported because the AWS resource forwards // only the first 8192 bytes of your request to AWS WAF. // +// You can only specify a single type of TextTransformation. +// // * A ComparisonOperator used for evaluating the selected part of the request // against the specified Size, such as equals, greater than, less than, and // so on. @@ -8271,9 +8693,6 @@ func (c *WAFRegional) UpdateSizeConstraintSetRequest(input *waf.UpdateSizeConstr // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -8365,8 +8784,8 @@ const opUpdateSqlInjectionMatchSet = "UpdateSqlInjectionMatchSet" // UpdateSqlInjectionMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateSqlInjectionMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. 
+// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8413,12 +8832,15 @@ func (c *WAFRegional) UpdateSqlInjectionMatchSetRequest(input *waf.UpdateSqlInje // object and add a new one. // // * FieldToMatch: The part of web requests that you want AWS WAF to inspect -// and, if you want AWS WAF to inspect a header, the name of the header. +// and, if you want AWS WAF to inspect a header or custom query parameter, +// the name of the header or parameter. // // * TextTransformation: Which text transformation, if any, to perform on // the web request before inspecting the request for snippets of malicious // SQL code. // +// You can only specify a single type of TextTransformation. +// // You use SqlInjectionMatchSet objects to specify which CloudFront requests // you want to allow, block, or count. For example, if you're receiving requests // that contain snippets of SQL code in the query string and you want to block @@ -8469,9 +8891,6 @@ func (c *WAFRegional) UpdateSqlInjectionMatchSetRequest(input *waf.UpdateSqlInje // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -8559,8 +8978,8 @@ const opUpdateWebACL = "UpdateWebACL" // UpdateWebACLRequest generates a "aws/request.Request" representing the // client's request for the UpdateWebACL operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8683,9 +9102,6 @@ func (c *WAFRegional) UpdateWebACLRequest(input *waf.UpdateWebACLInput) (req *re // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -8780,8 +9196,8 @@ const opUpdateXssMatchSet = "UpdateXssMatchSet" // UpdateXssMatchSetRequest generates a "aws/request.Request" representing the // client's request for the UpdateXssMatchSet operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. @@ -8828,12 +9244,15 @@ func (c *WAFRegional) UpdateXssMatchSetRequest(input *waf.UpdateXssMatchSetInput // add a new one. // // * FieldToMatch: The part of web requests that you want AWS WAF to inspect -// and, if you want AWS WAF to inspect a header, the name of the header. 
+// and, if you want AWS WAF to inspect a header or custom query parameter, +// the name of the header or parameter. // // * TextTransformation: Which text transformation, if any, to perform on // the web request before inspecting the request for cross-site scripting // attacks. // +// You can only specify a single type of TextTransformation. +// // You use XssMatchSet objects to specify which CloudFront requests you want // to allow, block, or count. For example, if you're receiving requests that // contain cross-site scripting attacks in the request body and you want to @@ -8884,9 +9303,6 @@ func (c *WAFRegional) UpdateXssMatchSetRequest(input *waf.UpdateXssMatchSetInput // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // -// * You tried to add an IP address to an IPSet, but the IP address already -// exists in the specified IPSet. -// // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. // @@ -9166,6 +9582,10 @@ func (s *GetWebACLForResourceOutput) SetWebACLSummary(v *waf.WebACLSummary) *Get type ListResourcesForWebACLInput struct { _ struct{} `type:"structure"` + // The type of resource to list, either and application load balancer or Amazon + // API Gateway. + ResourceType *string `type:"string" enum:"ResourceType"` + // The unique identifier (ID) of the web ACL for which to list the associated // resources. // @@ -9199,6 +9619,12 @@ func (s *ListResourcesForWebACLInput) Validate() error { return nil } +// SetResourceType sets the ResourceType field's value. +func (s *ListResourcesForWebACLInput) SetResourceType(v string) *ListResourcesForWebACLInput { + s.ResourceType = &v + return s +} + // SetWebACLId sets the WebACLId field's value. func (s *ListResourcesForWebACLInput) SetWebACLId(v string) *ListResourcesForWebACLInput { s.WebACLId = &v @@ -10046,6 +10472,12 @@ const ( // MatchFieldTypeBody is a MatchFieldType enum value MatchFieldTypeBody = "BODY" + + // MatchFieldTypeSingleQueryArg is a MatchFieldType enum value + MatchFieldTypeSingleQueryArg = "SINGLE_QUERY_ARG" + + // MatchFieldTypeAllQueryArgs is a MatchFieldType enum value + MatchFieldTypeAllQueryArgs = "ALL_QUERY_ARGS" ) const ( @@ -10148,6 +10580,14 @@ const ( RateKeyIp = "IP" ) +const ( + // ResourceTypeApplicationLoadBalancer is a ResourceType enum value + ResourceTypeApplicationLoadBalancer = "APPLICATION_LOAD_BALANCER" + + // ResourceTypeApiGateway is a ResourceType enum value + ResourceTypeApiGateway = "API_GATEWAY" +) + const ( // TextTransformationNone is a TextTransformation enum value TextTransformationNone = "NONE" diff --git a/vendor/github.com/aws/aws-sdk-go/service/wafregional/errors.go b/vendor/github.com/aws/aws-sdk-go/service/wafregional/errors.go index 793dbc2fcae..b82c80fc6f2 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/wafregional/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/wafregional/errors.go @@ -41,9 +41,6 @@ const ( // * You tried to add a Rule to a WebACL, but the Rule already exists in // the specified WebACL. // - // * You tried to add an IP address to an IPSet, but the IP address already - // exists in the specified IPSet. - // // * You tried to add a ByteMatchTuple to a ByteMatchSet, but the ByteMatchTuple // already exists in the specified WebACL. ErrCodeWAFInvalidOperationException = "WAFInvalidOperationException" @@ -93,8 +90,9 @@ const ( // // * Effect must specify Allow. 
// - // * The Action in the policy must be waf:UpdateWebACL or waf-regional:UpdateWebACL. - // Any extra or wildcard actions in the policy will be rejected. + // * The Action in the policy must be waf:UpdateWebACL, waf-regional:UpdateWebACL, + // waf:GetRuleGroup and waf-regional:GetRuleGroup . Any extra or wildcard + // actions in the policy will be rejected. // // * The policy cannot include a Resource parameter. // @@ -174,6 +172,19 @@ const ( // * You tried to delete a Rule that is still referenced by a WebACL. ErrCodeWAFReferencedItemException = "WAFReferencedItemException" + // ErrCodeWAFServiceLinkedRoleErrorException for service response error code + // "WAFServiceLinkedRoleErrorException". + // + // AWS WAF is not able to access the service linked role. This can be caused + // by a previous PutLoggingConfiguration request, which can lock the service + // linked role for about 20 seconds. Please try your request again. The service + // linked role can also be locked by a previous DeleteServiceLinkedRole request, + // which can lock the role for 15 minutes or more. If you recently made a DeleteServiceLinkedRole, + // wait at least 15 minutes and try the request again. If you receive this same + // exception again, you will have to wait additional time until the role is + // unlocked. + ErrCodeWAFServiceLinkedRoleErrorException = "WAFServiceLinkedRoleErrorException" + // ErrCodeWAFStaleDataException for service response error code // "WAFStaleDataException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/wafregional/service.go b/vendor/github.com/aws/aws-sdk-go/service/wafregional/service.go index 8f0e70947bb..3a267ae6360 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/wafregional/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/wafregional/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "waf-regional" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "waf-regional" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "WAF Regional" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the WAFRegional client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/aws/aws-sdk-go/service/workspaces/api.go b/vendor/github.com/aws/aws-sdk-go/service/workspaces/api.go index 5dfb336c19e..02ba82fa210 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/workspaces/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/workspaces/api.go @@ -11,1395 +11,4150 @@ import ( "github.com/aws/aws-sdk-go/aws/request" ) -const opCreateTags = "CreateTags" +const opAssociateIpGroups = "AssociateIpGroups" -// CreateTagsRequest generates a "aws/request.Request" representing the -// client's request for the CreateTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// AssociateIpGroupsRequest generates a "aws/request.Request" representing the +// client's request for the AssociateIpGroups operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See CreateTags for more information on using the CreateTags +// See AssociateIpGroups for more information on using the AssociateIpGroups // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the CreateTagsRequest method. -// req, resp := client.CreateTagsRequest(params) +// // Example sending a request using the AssociateIpGroupsRequest method. +// req, resp := client.AssociateIpGroupsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateTags -func (c *WorkSpaces) CreateTagsRequest(input *CreateTagsInput) (req *request.Request, output *CreateTagsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/AssociateIpGroups +func (c *WorkSpaces) AssociateIpGroupsRequest(input *AssociateIpGroupsInput) (req *request.Request, output *AssociateIpGroupsOutput) { op := &request.Operation{ - Name: opCreateTags, + Name: opAssociateIpGroups, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &CreateTagsInput{} + input = &AssociateIpGroupsInput{} } - output = &CreateTagsOutput{} + output = &AssociateIpGroupsOutput{} req = c.newRequest(op, input, output) return } -// CreateTags API operation for Amazon WorkSpaces. +// AssociateIpGroups API operation for Amazon WorkSpaces. // -// Creates tags for the specified WorkSpace. +// Associates the specified IP access control group with the specified directory. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation CreateTags for usage and error information. +// API operation AssociateIpGroups for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The resource could not be found. -// // * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" // One or more parameter values are not valid. // +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// // * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" // Your resource limits have been exceeded. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateTags -func (c *WorkSpaces) CreateTags(input *CreateTagsInput) (*CreateTagsOutput, error) { - req, out := c.CreateTagsRequest(input) +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// This operation is not supported. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/AssociateIpGroups +func (c *WorkSpaces) AssociateIpGroups(input *AssociateIpGroupsInput) (*AssociateIpGroupsOutput, error) { + req, out := c.AssociateIpGroupsRequest(input) return out, req.Send() } -// CreateTagsWithContext is the same as CreateTags with the addition of +// AssociateIpGroupsWithContext is the same as AssociateIpGroups with the addition of // the ability to pass a context and additional request options. // -// See CreateTags for details on how to use this API operation. +// See AssociateIpGroups for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) CreateTagsWithContext(ctx aws.Context, input *CreateTagsInput, opts ...request.Option) (*CreateTagsOutput, error) { - req, out := c.CreateTagsRequest(input) +func (c *WorkSpaces) AssociateIpGroupsWithContext(ctx aws.Context, input *AssociateIpGroupsInput, opts ...request.Option) (*AssociateIpGroupsOutput, error) { + req, out := c.AssociateIpGroupsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opCreateWorkspaces = "CreateWorkspaces" +const opAuthorizeIpRules = "AuthorizeIpRules" -// CreateWorkspacesRequest generates a "aws/request.Request" representing the -// client's request for the CreateWorkspaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// AuthorizeIpRulesRequest generates a "aws/request.Request" representing the +// client's request for the AuthorizeIpRules operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See CreateWorkspaces for more information on using the CreateWorkspaces +// See AuthorizeIpRules for more information on using the AuthorizeIpRules // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the CreateWorkspacesRequest method. -// req, resp := client.CreateWorkspacesRequest(params) +// // Example sending a request using the AuthorizeIpRulesRequest method. 
+// req, resp := client.AuthorizeIpRulesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateWorkspaces -func (c *WorkSpaces) CreateWorkspacesRequest(input *CreateWorkspacesInput) (req *request.Request, output *CreateWorkspacesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/AuthorizeIpRules +func (c *WorkSpaces) AuthorizeIpRulesRequest(input *AuthorizeIpRulesInput) (req *request.Request, output *AuthorizeIpRulesOutput) { op := &request.Operation{ - Name: opCreateWorkspaces, + Name: opAuthorizeIpRules, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &CreateWorkspacesInput{} + input = &AuthorizeIpRulesInput{} } - output = &CreateWorkspacesOutput{} + output = &AuthorizeIpRulesOutput{} req = c.newRequest(op, input, output) return } -// CreateWorkspaces API operation for Amazon WorkSpaces. +// AuthorizeIpRules API operation for Amazon WorkSpaces. // -// Creates one or more WorkSpaces. +// Adds one or more rules to the specified IP access control group. // -// This operation is asynchronous and returns before the WorkSpaces are created. +// This action gives users permission to access their WorkSpaces from the CIDR +// address ranges specified in the rules. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation CreateWorkspaces for usage and error information. +// API operation AuthorizeIpRules for usage and error information. // // Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// // * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" // Your resource limits have been exceeded. // -// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" -// One or more parameter values are not valid. +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateWorkspaces -func (c *WorkSpaces) CreateWorkspaces(input *CreateWorkspacesInput) (*CreateWorkspacesOutput, error) { - req, out := c.CreateWorkspacesRequest(input) +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/AuthorizeIpRules +func (c *WorkSpaces) AuthorizeIpRules(input *AuthorizeIpRulesInput) (*AuthorizeIpRulesOutput, error) { + req, out := c.AuthorizeIpRulesRequest(input) return out, req.Send() } -// CreateWorkspacesWithContext is the same as CreateWorkspaces with the addition of +// AuthorizeIpRulesWithContext is the same as AuthorizeIpRules with the addition of // the ability to pass a context and additional request options. // -// See CreateWorkspaces for details on how to use this API operation. +// See AuthorizeIpRules for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) CreateWorkspacesWithContext(ctx aws.Context, input *CreateWorkspacesInput, opts ...request.Option) (*CreateWorkspacesOutput, error) { - req, out := c.CreateWorkspacesRequest(input) +func (c *WorkSpaces) AuthorizeIpRulesWithContext(ctx aws.Context, input *AuthorizeIpRulesInput, opts ...request.Option) (*AuthorizeIpRulesOutput, error) { + req, out := c.AuthorizeIpRulesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDeleteTags = "DeleteTags" +const opCreateIpGroup = "CreateIpGroup" -// DeleteTagsRequest generates a "aws/request.Request" representing the -// client's request for the DeleteTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// CreateIpGroupRequest generates a "aws/request.Request" representing the +// client's request for the CreateIpGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DeleteTags for more information on using the DeleteTags +// See CreateIpGroup for more information on using the CreateIpGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DeleteTagsRequest method. -// req, resp := client.DeleteTagsRequest(params) +// // Example sending a request using the CreateIpGroupRequest method. +// req, resp := client.CreateIpGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DeleteTags -func (c *WorkSpaces) DeleteTagsRequest(input *DeleteTagsInput) (req *request.Request, output *DeleteTagsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateIpGroup +func (c *WorkSpaces) CreateIpGroupRequest(input *CreateIpGroupInput) (req *request.Request, output *CreateIpGroupOutput) { op := &request.Operation{ - Name: opDeleteTags, + Name: opCreateIpGroup, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DeleteTagsInput{} + input = &CreateIpGroupInput{} } - output = &DeleteTagsOutput{} + output = &CreateIpGroupOutput{} req = c.newRequest(op, input, output) return } -// DeleteTags API operation for Amazon WorkSpaces. +// CreateIpGroup API operation for Amazon WorkSpaces. +// +// Creates an IP access control group. +// +// An IP access control group provides you with the ability to control the IP +// addresses from which users are allowed to access their WorkSpaces. To specify +// the CIDR address ranges, add rules to your IP access control group and then +// associate the group with your directory. You can add rules when you create +// the group or at any time using AuthorizeIpRules. // -// Deletes the specified tags from a WorkSpace. +// There is a default IP access control group associated with your directory. +// If you don't associate an IP access control group with your directory, the +// default group is used. 
The default group includes a default rule that allows +// users to access their WorkSpaces from anywhere. You cannot modify the default +// IP access control group for your directory. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation DeleteTags for usage and error information. +// API operation CreateIpGroup for usage and error information. // // Returned Error Codes: -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The resource could not be found. -// // * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" // One or more parameter values are not valid. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DeleteTags -func (c *WorkSpaces) DeleteTags(input *DeleteTagsInput) (*DeleteTagsOutput, error) { - req, out := c.DeleteTagsRequest(input) +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Your resource limits have been exceeded. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The specified resource already exists. +// +// * ErrCodeResourceCreationFailedException "ResourceCreationFailedException" +// The resource could not be created. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateIpGroup +func (c *WorkSpaces) CreateIpGroup(input *CreateIpGroupInput) (*CreateIpGroupOutput, error) { + req, out := c.CreateIpGroupRequest(input) return out, req.Send() } -// DeleteTagsWithContext is the same as DeleteTags with the addition of +// CreateIpGroupWithContext is the same as CreateIpGroup with the addition of // the ability to pass a context and additional request options. // -// See DeleteTags for details on how to use this API operation. +// See CreateIpGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) DeleteTagsWithContext(ctx aws.Context, input *DeleteTagsInput, opts ...request.Option) (*DeleteTagsOutput, error) { - req, out := c.DeleteTagsRequest(input) +func (c *WorkSpaces) CreateIpGroupWithContext(ctx aws.Context, input *CreateIpGroupInput, opts ...request.Option) (*CreateIpGroupOutput, error) { + req, out := c.CreateIpGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeTags = "DescribeTags" +const opCreateTags = "CreateTags" -// DescribeTagsRequest generates a "aws/request.Request" representing the -// client's request for the DescribeTags operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// CreateTagsRequest generates a "aws/request.Request" representing the +// client's request for the CreateTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. 
// the "output" return value is not valid until after Send returns without error. // -// See DescribeTags for more information on using the DescribeTags +// See CreateTags for more information on using the CreateTags // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeTagsRequest method. -// req, resp := client.DescribeTagsRequest(params) +// // Example sending a request using the CreateTagsRequest method. +// req, resp := client.CreateTagsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeTags -func (c *WorkSpaces) DescribeTagsRequest(input *DescribeTagsInput) (req *request.Request, output *DescribeTagsOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateTags +func (c *WorkSpaces) CreateTagsRequest(input *CreateTagsInput) (req *request.Request, output *CreateTagsOutput) { op := &request.Operation{ - Name: opDescribeTags, + Name: opCreateTags, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeTagsInput{} + input = &CreateTagsInput{} } - output = &DescribeTagsOutput{} + output = &CreateTagsOutput{} req = c.newRequest(op, input, output) return } -// DescribeTags API operation for Amazon WorkSpaces. +// CreateTags API operation for Amazon WorkSpaces. // -// Describes the tags for the specified WorkSpace. +// Creates the specified tags for the specified WorkSpace. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation DescribeTags for usage and error information. +// API operation CreateTags for usage and error information. // // Returned Error Codes: // * ErrCodeResourceNotFoundException "ResourceNotFoundException" // The resource could not be found. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeTags -func (c *WorkSpaces) DescribeTags(input *DescribeTagsInput) (*DescribeTagsOutput, error) { - req, out := c.DescribeTagsRequest(input) +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Your resource limits have been exceeded. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateTags +func (c *WorkSpaces) CreateTags(input *CreateTagsInput) (*CreateTagsOutput, error) { + req, out := c.CreateTagsRequest(input) return out, req.Send() } -// DescribeTagsWithContext is the same as DescribeTags with the addition of +// CreateTagsWithContext is the same as CreateTags with the addition of // the ability to pass a context and additional request options. // -// See DescribeTags for details on how to use this API operation. +// See CreateTags for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *WorkSpaces) DescribeTagsWithContext(ctx aws.Context, input *DescribeTagsInput, opts ...request.Option) (*DescribeTagsOutput, error) { - req, out := c.DescribeTagsRequest(input) +func (c *WorkSpaces) CreateTagsWithContext(ctx aws.Context, input *CreateTagsInput, opts ...request.Option) (*CreateTagsOutput, error) { + req, out := c.CreateTagsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opDescribeWorkspaceBundles = "DescribeWorkspaceBundles" +const opCreateWorkspaces = "CreateWorkspaces" -// DescribeWorkspaceBundlesRequest generates a "aws/request.Request" representing the -// client's request for the DescribeWorkspaceBundles operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// CreateWorkspacesRequest generates a "aws/request.Request" representing the +// client's request for the CreateWorkspaces operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeWorkspaceBundles for more information on using the DescribeWorkspaceBundles +// See CreateWorkspaces for more information on using the CreateWorkspaces // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeWorkspaceBundlesRequest method. -// req, resp := client.DescribeWorkspaceBundlesRequest(params) +// // Example sending a request using the CreateWorkspacesRequest method. +// req, resp := client.CreateWorkspacesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceBundles -func (c *WorkSpaces) DescribeWorkspaceBundlesRequest(input *DescribeWorkspaceBundlesInput) (req *request.Request, output *DescribeWorkspaceBundlesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateWorkspaces +func (c *WorkSpaces) CreateWorkspacesRequest(input *CreateWorkspacesInput) (req *request.Request, output *CreateWorkspacesOutput) { op := &request.Operation{ - Name: opDescribeWorkspaceBundles, + Name: opCreateWorkspaces, HTTPMethod: "POST", HTTPPath: "/", - Paginator: &request.Paginator{ - InputTokens: []string{"NextToken"}, - OutputTokens: []string{"NextToken"}, - LimitToken: "", - TruncationToken: "", - }, } if input == nil { - input = &DescribeWorkspaceBundlesInput{} + input = &CreateWorkspacesInput{} } - output = &DescribeWorkspaceBundlesOutput{} + output = &CreateWorkspacesOutput{} req = c.newRequest(op, input, output) return } -// DescribeWorkspaceBundles API operation for Amazon WorkSpaces. +// CreateWorkspaces API operation for Amazon WorkSpaces. // -// Describes the available WorkSpace bundles. +// Creates one or more WorkSpaces. // -// You can filter the results using either bundle ID or owner, but not both. +// This operation is asynchronous and returns before the WorkSpaces are created. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. 
// // See the AWS API reference guide for Amazon WorkSpaces's -// API operation DescribeWorkspaceBundles for usage and error information. +// API operation CreateWorkspaces for usage and error information. // // Returned Error Codes: +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Your resource limits have been exceeded. +// // * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" // One or more parameter values are not valid. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceBundles -func (c *WorkSpaces) DescribeWorkspaceBundles(input *DescribeWorkspaceBundlesInput) (*DescribeWorkspaceBundlesOutput, error) { - req, out := c.DescribeWorkspaceBundlesRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/CreateWorkspaces +func (c *WorkSpaces) CreateWorkspaces(input *CreateWorkspacesInput) (*CreateWorkspacesOutput, error) { + req, out := c.CreateWorkspacesRequest(input) return out, req.Send() } -// DescribeWorkspaceBundlesWithContext is the same as DescribeWorkspaceBundles with the addition of +// CreateWorkspacesWithContext is the same as CreateWorkspaces with the addition of // the ability to pass a context and additional request options. // -// See DescribeWorkspaceBundles for details on how to use this API operation. +// See CreateWorkspaces for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) DescribeWorkspaceBundlesWithContext(ctx aws.Context, input *DescribeWorkspaceBundlesInput, opts ...request.Option) (*DescribeWorkspaceBundlesOutput, error) { - req, out := c.DescribeWorkspaceBundlesRequest(input) +func (c *WorkSpaces) CreateWorkspacesWithContext(ctx aws.Context, input *CreateWorkspacesInput, opts ...request.Option) (*CreateWorkspacesOutput, error) { + req, out := c.CreateWorkspacesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// DescribeWorkspaceBundlesPages iterates over the pages of a DescribeWorkspaceBundles operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. -// -// See DescribeWorkspaceBundles method for more information on how to use this operation. -// -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a DescribeWorkspaceBundles operation. -// pageNum := 0 -// err := client.DescribeWorkspaceBundlesPages(params, -// func(page *DescribeWorkspaceBundlesOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *WorkSpaces) DescribeWorkspaceBundlesPages(input *DescribeWorkspaceBundlesInput, fn func(*DescribeWorkspaceBundlesOutput, bool) bool) error { - return c.DescribeWorkspaceBundlesPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// DescribeWorkspaceBundlesPagesWithContext same as DescribeWorkspaceBundlesPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *WorkSpaces) DescribeWorkspaceBundlesPagesWithContext(ctx aws.Context, input *DescribeWorkspaceBundlesInput, fn func(*DescribeWorkspaceBundlesOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *DescribeWorkspaceBundlesInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.DescribeWorkspaceBundlesRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*DescribeWorkspaceBundlesOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opDescribeWorkspaceDirectories = "DescribeWorkspaceDirectories" +const opDeleteIpGroup = "DeleteIpGroup" -// DescribeWorkspaceDirectoriesRequest generates a "aws/request.Request" representing the -// client's request for the DescribeWorkspaceDirectories operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteIpGroupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteIpGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeWorkspaceDirectories for more information on using the DescribeWorkspaceDirectories +// See DeleteIpGroup for more information on using the DeleteIpGroup // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeWorkspaceDirectoriesRequest method. -// req, resp := client.DescribeWorkspaceDirectoriesRequest(params) +// // Example sending a request using the DeleteIpGroupRequest method. +// req, resp := client.DeleteIpGroupRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceDirectories -func (c *WorkSpaces) DescribeWorkspaceDirectoriesRequest(input *DescribeWorkspaceDirectoriesInput) (req *request.Request, output *DescribeWorkspaceDirectoriesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DeleteIpGroup +func (c *WorkSpaces) DeleteIpGroupRequest(input *DeleteIpGroupInput) (req *request.Request, output *DeleteIpGroupOutput) { op := &request.Operation{ - Name: opDescribeWorkspaceDirectories, + Name: opDeleteIpGroup, HTTPMethod: "POST", HTTPPath: "/", - Paginator: &request.Paginator{ - InputTokens: []string{"NextToken"}, - OutputTokens: []string{"NextToken"}, - LimitToken: "", - TruncationToken: "", - }, } if input == nil { - input = &DescribeWorkspaceDirectoriesInput{} + input = &DeleteIpGroupInput{} } - output = &DescribeWorkspaceDirectoriesOutput{} + output = &DeleteIpGroupOutput{} req = c.newRequest(op, input, output) return } -// DescribeWorkspaceDirectories API operation for Amazon WorkSpaces. +// DeleteIpGroup API operation for Amazon WorkSpaces. // -// Describes the available AWS Directory Service directories that are registered -// with Amazon WorkSpaces. 
+// Deletes the specified IP access control group. +// +// You cannot delete an IP access control group that is associated with a directory. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation DescribeWorkspaceDirectories for usage and error information. +// API operation DeleteIpGroup for usage and error information. // // Returned Error Codes: // * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" // One or more parameter values are not valid. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceDirectories -func (c *WorkSpaces) DescribeWorkspaceDirectories(input *DescribeWorkspaceDirectoriesInput) (*DescribeWorkspaceDirectoriesOutput, error) { - req, out := c.DescribeWorkspaceDirectoriesRequest(input) +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// * ErrCodeResourceAssociatedException "ResourceAssociatedException" +// The resource is associated with a directory. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DeleteIpGroup +func (c *WorkSpaces) DeleteIpGroup(input *DeleteIpGroupInput) (*DeleteIpGroupOutput, error) { + req, out := c.DeleteIpGroupRequest(input) return out, req.Send() } -// DescribeWorkspaceDirectoriesWithContext is the same as DescribeWorkspaceDirectories with the addition of +// DeleteIpGroupWithContext is the same as DeleteIpGroup with the addition of // the ability to pass a context and additional request options. // -// See DescribeWorkspaceDirectories for details on how to use this API operation. +// See DeleteIpGroup for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) DescribeWorkspaceDirectoriesWithContext(ctx aws.Context, input *DescribeWorkspaceDirectoriesInput, opts ...request.Option) (*DescribeWorkspaceDirectoriesOutput, error) { - req, out := c.DescribeWorkspaceDirectoriesRequest(input) +func (c *WorkSpaces) DeleteIpGroupWithContext(ctx aws.Context, input *DeleteIpGroupInput, opts ...request.Option) (*DeleteIpGroupOutput, error) { + req, out := c.DeleteIpGroupRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// DescribeWorkspaceDirectoriesPages iterates over the pages of a DescribeWorkspaceDirectories operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. -// -// See DescribeWorkspaceDirectories method for more information on how to use this operation. -// -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a DescribeWorkspaceDirectories operation. 
-// pageNum := 0 -// err := client.DescribeWorkspaceDirectoriesPages(params, -// func(page *DescribeWorkspaceDirectoriesOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *WorkSpaces) DescribeWorkspaceDirectoriesPages(input *DescribeWorkspaceDirectoriesInput, fn func(*DescribeWorkspaceDirectoriesOutput, bool) bool) error { - return c.DescribeWorkspaceDirectoriesPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// DescribeWorkspaceDirectoriesPagesWithContext same as DescribeWorkspaceDirectoriesPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *WorkSpaces) DescribeWorkspaceDirectoriesPagesWithContext(ctx aws.Context, input *DescribeWorkspaceDirectoriesInput, fn func(*DescribeWorkspaceDirectoriesOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *DescribeWorkspaceDirectoriesInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.DescribeWorkspaceDirectoriesRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*DescribeWorkspaceDirectoriesOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opDescribeWorkspaces = "DescribeWorkspaces" +const opDeleteTags = "DeleteTags" -// DescribeWorkspacesRequest generates a "aws/request.Request" representing the -// client's request for the DescribeWorkspaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteTagsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeWorkspaces for more information on using the DescribeWorkspaces +// See DeleteTags for more information on using the DeleteTags // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeWorkspacesRequest method. -// req, resp := client.DescribeWorkspacesRequest(params) +// // Example sending a request using the DeleteTagsRequest method. 
+// req, resp := client.DeleteTagsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaces -func (c *WorkSpaces) DescribeWorkspacesRequest(input *DescribeWorkspacesInput) (req *request.Request, output *DescribeWorkspacesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DeleteTags +func (c *WorkSpaces) DeleteTagsRequest(input *DeleteTagsInput) (req *request.Request, output *DeleteTagsOutput) { op := &request.Operation{ - Name: opDescribeWorkspaces, + Name: opDeleteTags, HTTPMethod: "POST", HTTPPath: "/", - Paginator: &request.Paginator{ - InputTokens: []string{"NextToken"}, - OutputTokens: []string{"NextToken"}, - LimitToken: "Limit", - TruncationToken: "", - }, } if input == nil { - input = &DescribeWorkspacesInput{} + input = &DeleteTagsInput{} } - output = &DescribeWorkspacesOutput{} + output = &DeleteTagsOutput{} req = c.newRequest(op, input, output) return } -// DescribeWorkspaces API operation for Amazon WorkSpaces. -// -// Describes the specified WorkSpaces. +// DeleteTags API operation for Amazon WorkSpaces. // -// You can filter the results using bundle ID, directory ID, or owner, but you -// can specify only one filter at a time. +// Deletes the specified tags from the specified WorkSpace. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation DescribeWorkspaces for usage and error information. +// API operation DeleteTags for usage and error information. // // Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// // * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" // One or more parameter values are not valid. // -// * ErrCodeResourceUnavailableException "ResourceUnavailableException" -// The specified resource is not available. -// -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaces -func (c *WorkSpaces) DescribeWorkspaces(input *DescribeWorkspacesInput) (*DescribeWorkspacesOutput, error) { - req, out := c.DescribeWorkspacesRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DeleteTags +func (c *WorkSpaces) DeleteTags(input *DeleteTagsInput) (*DeleteTagsOutput, error) { + req, out := c.DeleteTagsRequest(input) return out, req.Send() } -// DescribeWorkspacesWithContext is the same as DescribeWorkspaces with the addition of +// DeleteTagsWithContext is the same as DeleteTags with the addition of // the ability to pass a context and additional request options. // -// See DescribeWorkspaces for details on how to use this API operation. +// See DeleteTags for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *WorkSpaces) DescribeWorkspacesWithContext(ctx aws.Context, input *DescribeWorkspacesInput, opts ...request.Option) (*DescribeWorkspacesOutput, error) { - req, out := c.DescribeWorkspacesRequest(input) +func (c *WorkSpaces) DeleteTagsWithContext(ctx aws.Context, input *DeleteTagsInput, opts ...request.Option) (*DeleteTagsOutput, error) { + req, out := c.DeleteTagsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// DescribeWorkspacesPages iterates over the pages of a DescribeWorkspaces operation, -// calling the "fn" function with the response data for each page. To stop -// iterating, return false from the fn function. -// -// See DescribeWorkspaces method for more information on how to use this operation. -// -// Note: This operation can generate multiple requests to a service. -// -// // Example iterating over at most 3 pages of a DescribeWorkspaces operation. -// pageNum := 0 -// err := client.DescribeWorkspacesPages(params, -// func(page *DescribeWorkspacesOutput, lastPage bool) bool { -// pageNum++ -// fmt.Println(page) -// return pageNum <= 3 -// }) -// -func (c *WorkSpaces) DescribeWorkspacesPages(input *DescribeWorkspacesInput, fn func(*DescribeWorkspacesOutput, bool) bool) error { - return c.DescribeWorkspacesPagesWithContext(aws.BackgroundContext(), input, fn) -} - -// DescribeWorkspacesPagesWithContext same as DescribeWorkspacesPages except -// it takes a Context and allows setting request options on the pages. -// -// The context must be non-nil and will be used for request cancellation. If -// the context is nil a panic will occur. In the future the SDK may create -// sub-contexts for http.Requests. See https://golang.org/pkg/context/ -// for more information on using Contexts. -func (c *WorkSpaces) DescribeWorkspacesPagesWithContext(ctx aws.Context, input *DescribeWorkspacesInput, fn func(*DescribeWorkspacesOutput, bool) bool, opts ...request.Option) error { - p := request.Pagination{ - NewRequest: func() (*request.Request, error) { - var inCpy *DescribeWorkspacesInput - if input != nil { - tmp := *input - inCpy = &tmp - } - req, _ := c.DescribeWorkspacesRequest(inCpy) - req.SetContext(ctx) - req.ApplyOptions(opts...) - return req, nil - }, - } - - cont := true - for p.Next() && cont { - cont = fn(p.Page().(*DescribeWorkspacesOutput), !p.HasNextPage()) - } - return p.Err() -} - -const opDescribeWorkspacesConnectionStatus = "DescribeWorkspacesConnectionStatus" +const opDeleteWorkspaceImage = "DeleteWorkspaceImage" -// DescribeWorkspacesConnectionStatusRequest generates a "aws/request.Request" representing the -// client's request for the DescribeWorkspacesConnectionStatus operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DeleteWorkspaceImageRequest generates a "aws/request.Request" representing the +// client's request for the DeleteWorkspaceImage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See DescribeWorkspacesConnectionStatus for more information on using the DescribeWorkspacesConnectionStatus +// See DeleteWorkspaceImage for more information on using the DeleteWorkspaceImage // API call, and error handling. 
// // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the DescribeWorkspacesConnectionStatusRequest method. -// req, resp := client.DescribeWorkspacesConnectionStatusRequest(params) +// // Example sending a request using the DeleteWorkspaceImageRequest method. +// req, resp := client.DeleteWorkspaceImageRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspacesConnectionStatus -func (c *WorkSpaces) DescribeWorkspacesConnectionStatusRequest(input *DescribeWorkspacesConnectionStatusInput) (req *request.Request, output *DescribeWorkspacesConnectionStatusOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DeleteWorkspaceImage +func (c *WorkSpaces) DeleteWorkspaceImageRequest(input *DeleteWorkspaceImageInput) (req *request.Request, output *DeleteWorkspaceImageOutput) { op := &request.Operation{ - Name: opDescribeWorkspacesConnectionStatus, + Name: opDeleteWorkspaceImage, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &DescribeWorkspacesConnectionStatusInput{} + input = &DeleteWorkspaceImageInput{} } - output = &DescribeWorkspacesConnectionStatusOutput{} + output = &DeleteWorkspaceImageOutput{} req = c.newRequest(op, input, output) return } -// DescribeWorkspacesConnectionStatus API operation for Amazon WorkSpaces. +// DeleteWorkspaceImage API operation for Amazon WorkSpaces. // -// Describes the connection status of the specified WorkSpaces. +// Deletes the specified image from your account. To delete an image, you must +// first delete any bundles that are associated with the image. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation DescribeWorkspacesConnectionStatus for usage and error information. +// API operation DeleteWorkspaceImage for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" -// One or more parameter values are not valid. +// * ErrCodeResourceAssociatedException "ResourceAssociatedException" +// The resource is associated with a directory. // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspacesConnectionStatus -func (c *WorkSpaces) DescribeWorkspacesConnectionStatus(input *DescribeWorkspacesConnectionStatusInput) (*DescribeWorkspacesConnectionStatusOutput, error) { - req, out := c.DescribeWorkspacesConnectionStatusRequest(input) +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DeleteWorkspaceImage +func (c *WorkSpaces) DeleteWorkspaceImage(input *DeleteWorkspaceImageInput) (*DeleteWorkspaceImageOutput, error) { + req, out := c.DeleteWorkspaceImageRequest(input) return out, req.Send() } -// DescribeWorkspacesConnectionStatusWithContext is the same as DescribeWorkspacesConnectionStatus with the addition of +// DeleteWorkspaceImageWithContext is the same as DeleteWorkspaceImage with the addition of // the ability to pass a context and additional request options. // -// See DescribeWorkspacesConnectionStatus for details on how to use this API operation. +// See DeleteWorkspaceImage for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) DescribeWorkspacesConnectionStatusWithContext(ctx aws.Context, input *DescribeWorkspacesConnectionStatusInput, opts ...request.Option) (*DescribeWorkspacesConnectionStatusOutput, error) { - req, out := c.DescribeWorkspacesConnectionStatusRequest(input) +func (c *WorkSpaces) DeleteWorkspaceImageWithContext(ctx aws.Context, input *DeleteWorkspaceImageInput, opts ...request.Option) (*DeleteWorkspaceImageOutput, error) { + req, out := c.DeleteWorkspaceImageRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opModifyWorkspaceProperties = "ModifyWorkspaceProperties" +const opDescribeAccount = "DescribeAccount" -// ModifyWorkspacePropertiesRequest generates a "aws/request.Request" representing the -// client's request for the ModifyWorkspaceProperties operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeAccountRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAccount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ModifyWorkspaceProperties for more information on using the ModifyWorkspaceProperties +// See DescribeAccount for more information on using the DescribeAccount // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ModifyWorkspacePropertiesRequest method. -// req, resp := client.ModifyWorkspacePropertiesRequest(params) +// // Example sending a request using the DescribeAccountRequest method. 
+// req, resp := client.DescribeAccountRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyWorkspaceProperties -func (c *WorkSpaces) ModifyWorkspacePropertiesRequest(input *ModifyWorkspacePropertiesInput) (req *request.Request, output *ModifyWorkspacePropertiesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeAccount +func (c *WorkSpaces) DescribeAccountRequest(input *DescribeAccountInput) (req *request.Request, output *DescribeAccountOutput) { op := &request.Operation{ - Name: opModifyWorkspaceProperties, + Name: opDescribeAccount, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ModifyWorkspacePropertiesInput{} + input = &DescribeAccountInput{} } - output = &ModifyWorkspacePropertiesOutput{} + output = &DescribeAccountOutput{} req = c.newRequest(op, input, output) return } -// ModifyWorkspaceProperties API operation for Amazon WorkSpaces. +// DescribeAccount API operation for Amazon WorkSpaces. // -// Modifies the specified WorkSpace properties. +// Retrieves a list that describes the configuration of bring your own license +// (BYOL) for the specified account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation ModifyWorkspaceProperties for usage and error information. +// API operation DescribeAccount for usage and error information. // // Returned Error Codes: -// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" -// One or more parameter values are not valid. -// -// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" -// The state of the WorkSpace is not valid for this operation. -// -// * ErrCodeOperationInProgressException "OperationInProgressException" -// The properties of this WorkSpace are currently being modified. Try again -// in a moment. -// -// * ErrCodeUnsupportedWorkspaceConfigurationException "UnsupportedWorkspaceConfigurationException" -// The configuration of this WorkSpace is not supported for this operation. -// For more information, see the Amazon WorkSpaces Administration Guide (http://docs.aws.amazon.com/workspaces/latest/adminguide/). -// -// * ErrCodeResourceNotFoundException "ResourceNotFoundException" -// The resource could not be found. -// // * ErrCodeAccessDeniedException "AccessDeniedException" // The user is not authorized to access a resource. // -// * ErrCodeResourceUnavailableException "ResourceUnavailableException" -// The specified resource is not available. 
-// -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyWorkspaceProperties -func (c *WorkSpaces) ModifyWorkspaceProperties(input *ModifyWorkspacePropertiesInput) (*ModifyWorkspacePropertiesOutput, error) { - req, out := c.ModifyWorkspacePropertiesRequest(input) +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeAccount +func (c *WorkSpaces) DescribeAccount(input *DescribeAccountInput) (*DescribeAccountOutput, error) { + req, out := c.DescribeAccountRequest(input) return out, req.Send() } -// ModifyWorkspacePropertiesWithContext is the same as ModifyWorkspaceProperties with the addition of +// DescribeAccountWithContext is the same as DescribeAccount with the addition of // the ability to pass a context and additional request options. // -// See ModifyWorkspaceProperties for details on how to use this API operation. +// See DescribeAccount for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) ModifyWorkspacePropertiesWithContext(ctx aws.Context, input *ModifyWorkspacePropertiesInput, opts ...request.Option) (*ModifyWorkspacePropertiesOutput, error) { - req, out := c.ModifyWorkspacePropertiesRequest(input) +func (c *WorkSpaces) DescribeAccountWithContext(ctx aws.Context, input *DescribeAccountInput, opts ...request.Option) (*DescribeAccountOutput, error) { + req, out := c.DescribeAccountRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRebootWorkspaces = "RebootWorkspaces" +const opDescribeAccountModifications = "DescribeAccountModifications" -// RebootWorkspacesRequest generates a "aws/request.Request" representing the -// client's request for the RebootWorkspaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeAccountModificationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAccountModifications operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RebootWorkspaces for more information on using the RebootWorkspaces +// See DescribeAccountModifications for more information on using the DescribeAccountModifications // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RebootWorkspacesRequest method. -// req, resp := client.RebootWorkspacesRequest(params) +// // Example sending a request using the DescribeAccountModificationsRequest method. 
+// req, resp := client.DescribeAccountModificationsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RebootWorkspaces -func (c *WorkSpaces) RebootWorkspacesRequest(input *RebootWorkspacesInput) (req *request.Request, output *RebootWorkspacesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeAccountModifications +func (c *WorkSpaces) DescribeAccountModificationsRequest(input *DescribeAccountModificationsInput) (req *request.Request, output *DescribeAccountModificationsOutput) { op := &request.Operation{ - Name: opRebootWorkspaces, + Name: opDescribeAccountModifications, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &RebootWorkspacesInput{} + input = &DescribeAccountModificationsInput{} } - output = &RebootWorkspacesOutput{} + output = &DescribeAccountModificationsOutput{} req = c.newRequest(op, input, output) return } -// RebootWorkspaces API operation for Amazon WorkSpaces. -// -// Reboots the specified WorkSpaces. -// -// You cannot reboot a WorkSpace unless its state is AVAILABLE, IMPAIRED, or -// INOPERABLE. +// DescribeAccountModifications API operation for Amazon WorkSpaces. // -// This operation is asynchronous and returns before the WorkSpaces have rebooted. +// Retrieves a list that describes modifications to the configuration of bring +// your own license (BYOL) for the specified account. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation RebootWorkspaces for usage and error information. -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RebootWorkspaces -func (c *WorkSpaces) RebootWorkspaces(input *RebootWorkspacesInput) (*RebootWorkspacesOutput, error) { - req, out := c.RebootWorkspacesRequest(input) +// API operation DescribeAccountModifications for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeAccountModifications +func (c *WorkSpaces) DescribeAccountModifications(input *DescribeAccountModificationsInput) (*DescribeAccountModificationsOutput, error) { + req, out := c.DescribeAccountModificationsRequest(input) return out, req.Send() } -// RebootWorkspacesWithContext is the same as RebootWorkspaces with the addition of +// DescribeAccountModificationsWithContext is the same as DescribeAccountModifications with the addition of // the ability to pass a context and additional request options. // -// See RebootWorkspaces for details on how to use this API operation. +// See DescribeAccountModifications for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. 
-func (c *WorkSpaces) RebootWorkspacesWithContext(ctx aws.Context, input *RebootWorkspacesInput, opts ...request.Option) (*RebootWorkspacesOutput, error) { - req, out := c.RebootWorkspacesRequest(input) +func (c *WorkSpaces) DescribeAccountModificationsWithContext(ctx aws.Context, input *DescribeAccountModificationsInput, opts ...request.Option) (*DescribeAccountModificationsOutput, error) { + req, out := c.DescribeAccountModificationsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opRebuildWorkspaces = "RebuildWorkspaces" +const opDescribeClientProperties = "DescribeClientProperties" -// RebuildWorkspacesRequest generates a "aws/request.Request" representing the -// client's request for the RebuildWorkspaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeClientPropertiesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeClientProperties operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See RebuildWorkspaces for more information on using the RebuildWorkspaces +// See DescribeClientProperties for more information on using the DescribeClientProperties // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the RebuildWorkspacesRequest method. -// req, resp := client.RebuildWorkspacesRequest(params) +// // Example sending a request using the DescribeClientPropertiesRequest method. +// req, resp := client.DescribeClientPropertiesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RebuildWorkspaces -func (c *WorkSpaces) RebuildWorkspacesRequest(input *RebuildWorkspacesInput) (req *request.Request, output *RebuildWorkspacesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeClientProperties +func (c *WorkSpaces) DescribeClientPropertiesRequest(input *DescribeClientPropertiesInput) (req *request.Request, output *DescribeClientPropertiesOutput) { op := &request.Operation{ - Name: opRebuildWorkspaces, + Name: opDescribeClientProperties, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &RebuildWorkspacesInput{} + input = &DescribeClientPropertiesInput{} } - output = &RebuildWorkspacesOutput{} + output = &DescribeClientPropertiesOutput{} req = c.newRequest(op, input, output) return } -// RebuildWorkspaces API operation for Amazon WorkSpaces. -// -// Rebuilds the specified WorkSpaces. -// -// You cannot rebuild a WorkSpace unless its state is AVAILABLE or ERROR. +// DescribeClientProperties API operation for Amazon WorkSpaces. // -// Rebuilding a WorkSpace is a potentially destructive action that can result -// in the loss of data. For more information, see Rebuild a WorkSpace (http://docs.aws.amazon.com/workspaces/latest/adminguide/reset-workspace.html). -// -// This operation is asynchronous and returns before the WorkSpaces have been -// completely rebuilt. 
+// Retrieves a list that describes one or more specified Amazon WorkSpaces clients. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation RebuildWorkspaces for usage and error information. -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RebuildWorkspaces -func (c *WorkSpaces) RebuildWorkspaces(input *RebuildWorkspacesInput) (*RebuildWorkspacesOutput, error) { - req, out := c.RebuildWorkspacesRequest(input) +// API operation DescribeClientProperties for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeClientProperties +func (c *WorkSpaces) DescribeClientProperties(input *DescribeClientPropertiesInput) (*DescribeClientPropertiesOutput, error) { + req, out := c.DescribeClientPropertiesRequest(input) return out, req.Send() } -// RebuildWorkspacesWithContext is the same as RebuildWorkspaces with the addition of +// DescribeClientPropertiesWithContext is the same as DescribeClientProperties with the addition of // the ability to pass a context and additional request options. // -// See RebuildWorkspaces for details on how to use this API operation. +// See DescribeClientProperties for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) RebuildWorkspacesWithContext(ctx aws.Context, input *RebuildWorkspacesInput, opts ...request.Option) (*RebuildWorkspacesOutput, error) { - req, out := c.RebuildWorkspacesRequest(input) +func (c *WorkSpaces) DescribeClientPropertiesWithContext(ctx aws.Context, input *DescribeClientPropertiesInput, opts ...request.Option) (*DescribeClientPropertiesOutput, error) { + req, out := c.DescribeClientPropertiesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStartWorkspaces = "StartWorkspaces" +const opDescribeIpGroups = "DescribeIpGroups" -// StartWorkspacesRequest generates a "aws/request.Request" representing the -// client's request for the StartWorkspaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeIpGroupsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeIpGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. 
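> Reviewer note (illustrative only): a sketch of the `DescribeClientPropertiesWithContext` variant documented above, using a cancellable context. The directory ID and timeout are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// Bound the call with a context so it can be cancelled; a context.Context satisfies aws.Context.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// "d-1234567890" is a placeholder directory ID.
	resp, err := svc.DescribeClientPropertiesWithContext(ctx, &workspaces.DescribeClientPropertiesInput{
		ResourceIds: []*string{aws.String("d-1234567890")},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```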
// -// See StartWorkspaces for more information on using the StartWorkspaces +// See DescribeIpGroups for more information on using the DescribeIpGroups // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StartWorkspacesRequest method. -// req, resp := client.StartWorkspacesRequest(params) +// // Example sending a request using the DescribeIpGroupsRequest method. +// req, resp := client.DescribeIpGroupsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/StartWorkspaces -func (c *WorkSpaces) StartWorkspacesRequest(input *StartWorkspacesInput) (req *request.Request, output *StartWorkspacesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeIpGroups +func (c *WorkSpaces) DescribeIpGroupsRequest(input *DescribeIpGroupsInput) (req *request.Request, output *DescribeIpGroupsOutput) { op := &request.Operation{ - Name: opStartWorkspaces, + Name: opDescribeIpGroups, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StartWorkspacesInput{} + input = &DescribeIpGroupsInput{} } - output = &StartWorkspacesOutput{} + output = &DescribeIpGroupsOutput{} req = c.newRequest(op, input, output) return } -// StartWorkspaces API operation for Amazon WorkSpaces. +// DescribeIpGroups API operation for Amazon WorkSpaces. // -// Starts the specified WorkSpaces. -// -// You cannot start a WorkSpace unless it has a running mode of AutoStop and -// a state of STOPPED. +// Describes one or more of your IP access control groups. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation StartWorkspaces for usage and error information. -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/StartWorkspaces -func (c *WorkSpaces) StartWorkspaces(input *StartWorkspacesInput) (*StartWorkspacesOutput, error) { - req, out := c.StartWorkspacesRequest(input) +// API operation DescribeIpGroups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeIpGroups +func (c *WorkSpaces) DescribeIpGroups(input *DescribeIpGroupsInput) (*DescribeIpGroupsOutput, error) { + req, out := c.DescribeIpGroupsRequest(input) return out, req.Send() } -// StartWorkspacesWithContext is the same as StartWorkspaces with the addition of +// DescribeIpGroupsWithContext is the same as DescribeIpGroups with the addition of // the ability to pass a context and additional request options. // -// See StartWorkspaces for details on how to use this API operation. +// See DescribeIpGroups for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) StartWorkspacesWithContext(ctx aws.Context, input *StartWorkspacesInput, opts ...request.Option) (*StartWorkspacesOutput, error) { - req, out := c.StartWorkspacesRequest(input) +func (c *WorkSpaces) DescribeIpGroupsWithContext(ctx aws.Context, input *DescribeIpGroupsInput, opts ...request.Option) (*DescribeIpGroupsOutput, error) { + req, out := c.DescribeIpGroupsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opStopWorkspaces = "StopWorkspaces" +const opDescribeTags = "DescribeTags" -// StopWorkspacesRequest generates a "aws/request.Request" representing the -// client's request for the StopWorkspaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeTagsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeTags operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See StopWorkspaces for more information on using the StopWorkspaces +// See DescribeTags for more information on using the DescribeTags // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the StopWorkspacesRequest method. -// req, resp := client.StopWorkspacesRequest(params) +// // Example sending a request using the DescribeTagsRequest method. +// req, resp := client.DescribeTagsRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/StopWorkspaces -func (c *WorkSpaces) StopWorkspacesRequest(input *StopWorkspacesInput) (req *request.Request, output *StopWorkspacesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeTags +func (c *WorkSpaces) DescribeTagsRequest(input *DescribeTagsInput) (req *request.Request, output *DescribeTagsOutput) { op := &request.Operation{ - Name: opStopWorkspaces, + Name: opDescribeTags, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &StopWorkspacesInput{} + input = &DescribeTagsInput{} } - output = &StopWorkspacesOutput{} + output = &DescribeTagsOutput{} req = c.newRequest(op, input, output) return } -// StopWorkspaces API operation for Amazon WorkSpaces. -// -// Stops the specified WorkSpaces. +// DescribeTags API operation for Amazon WorkSpaces. // -// You cannot stop a WorkSpace unless it has a running mode of AutoStop and -// a state of AVAILABLE, IMPAIRED, UNHEALTHY, or ERROR. +// Describes the specified tags for the specified WorkSpace. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation StopWorkspaces for usage and error information. 
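> Reviewer note (illustrative only): a sketch of calling the `DescribeIpGroups` operation added above, including the `awserr.Error` type assertion that the generated doc comments describe.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// With no GroupIds set, all IP access control groups in the account are returned.
	resp, err := svc.DescribeIpGroups(&workspaces.DescribeIpGroupsInput{})
	if err != nil {
		// Service errors can be inspected via awserr.Error for the code and message.
		if aerr, ok := err.(awserr.Error); ok {
			log.Fatalf("%s: %s", aerr.Code(), aerr.Message())
		}
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```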
-// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/StopWorkspaces -func (c *WorkSpaces) StopWorkspaces(input *StopWorkspacesInput) (*StopWorkspacesOutput, error) { - req, out := c.StopWorkspacesRequest(input) +// API operation DescribeTags for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeTags +func (c *WorkSpaces) DescribeTags(input *DescribeTagsInput) (*DescribeTagsOutput, error) { + req, out := c.DescribeTagsRequest(input) return out, req.Send() } -// StopWorkspacesWithContext is the same as StopWorkspaces with the addition of +// DescribeTagsWithContext is the same as DescribeTags with the addition of // the ability to pass a context and additional request options. // -// See StopWorkspaces for details on how to use this API operation. +// See DescribeTags for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) StopWorkspacesWithContext(ctx aws.Context, input *StopWorkspacesInput, opts ...request.Option) (*StopWorkspacesOutput, error) { - req, out := c.StopWorkspacesRequest(input) +func (c *WorkSpaces) DescribeTagsWithContext(ctx aws.Context, input *DescribeTagsInput, opts ...request.Option) (*DescribeTagsOutput, error) { + req, out := c.DescribeTagsRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -const opTerminateWorkspaces = "TerminateWorkspaces" +const opDescribeWorkspaceBundles = "DescribeWorkspaceBundles" -// TerminateWorkspacesRequest generates a "aws/request.Request" representing the -// client's request for the TerminateWorkspaces operation. The "output" return -// value will be populated with the request's response once the request complets -// successfuly. +// DescribeWorkspaceBundlesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeWorkspaceBundles operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See TerminateWorkspaces for more information on using the TerminateWorkspaces +// See DescribeWorkspaceBundles for more information on using the DescribeWorkspaceBundles // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the TerminateWorkspacesRequest method. -// req, resp := client.TerminateWorkspacesRequest(params) +// // Example sending a request using the DescribeWorkspaceBundlesRequest method. 
+// req, resp := client.DescribeWorkspaceBundlesRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/TerminateWorkspaces -func (c *WorkSpaces) TerminateWorkspacesRequest(input *TerminateWorkspacesInput) (req *request.Request, output *TerminateWorkspacesOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceBundles +func (c *WorkSpaces) DescribeWorkspaceBundlesRequest(input *DescribeWorkspaceBundlesInput) (req *request.Request, output *DescribeWorkspaceBundlesOutput) { op := &request.Operation{ - Name: opTerminateWorkspaces, + Name: opDescribeWorkspaceBundles, HTTPMethod: "POST", HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "", + TruncationToken: "", + }, } if input == nil { - input = &TerminateWorkspacesInput{} + input = &DescribeWorkspaceBundlesInput{} } - output = &TerminateWorkspacesOutput{} + output = &DescribeWorkspaceBundlesOutput{} req = c.newRequest(op, input, output) return } -// TerminateWorkspaces API operation for Amazon WorkSpaces. -// -// Terminates the specified WorkSpaces. -// -// Terminating a WorkSpace is a permanent action and cannot be undone. The user's -// data is destroyed. If you need to archive any user data, contact Amazon Web -// Services before terminating the WorkSpace. +// DescribeWorkspaceBundles API operation for Amazon WorkSpaces. // -// You can terminate a WorkSpace that is in any state except SUSPENDED. +// Retrieves a list that describes the available WorkSpace bundles. // -// This operation is asynchronous and returns before the WorkSpaces have been -// completely terminated. +// You can filter the results using either bundle ID or owner, but not both. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon WorkSpaces's -// API operation TerminateWorkspaces for usage and error information. -// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/TerminateWorkspaces -func (c *WorkSpaces) TerminateWorkspaces(input *TerminateWorkspacesInput) (*TerminateWorkspacesOutput, error) { - req, out := c.TerminateWorkspacesRequest(input) +// API operation DescribeWorkspaceBundles for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceBundles +func (c *WorkSpaces) DescribeWorkspaceBundles(input *DescribeWorkspaceBundlesInput) (*DescribeWorkspaceBundlesOutput, error) { + req, out := c.DescribeWorkspaceBundlesRequest(input) return out, req.Send() } -// TerminateWorkspacesWithContext is the same as TerminateWorkspaces with the addition of +// DescribeWorkspaceBundlesWithContext is the same as DescribeWorkspaceBundles with the addition of // the ability to pass a context and additional request options. // -// See TerminateWorkspaces for details on how to use this API operation. +// See DescribeWorkspaceBundles for details on how to use this API operation. // // The context must be non-nil and will be used for request cancellation. If // the context is nil a panic will occur. 
In the future the SDK may create // sub-contexts for http.Requests. See https://golang.org/pkg/context/ // for more information on using Contexts. -func (c *WorkSpaces) TerminateWorkspacesWithContext(ctx aws.Context, input *TerminateWorkspacesInput, opts ...request.Option) (*TerminateWorkspacesOutput, error) { - req, out := c.TerminateWorkspacesRequest(input) +func (c *WorkSpaces) DescribeWorkspaceBundlesWithContext(ctx aws.Context, input *DescribeWorkspaceBundlesInput, opts ...request.Option) (*DescribeWorkspaceBundlesOutput, error) { + req, out := c.DescribeWorkspaceBundlesRequest(input) req.SetContext(ctx) req.ApplyOptions(opts...) return out, req.Send() } -// Information about the compute type. -type ComputeType struct { - _ struct{} `type:"structure"` - - // The compute type. - Name *string `type:"string" enum:"Compute"` +// DescribeWorkspaceBundlesPages iterates over the pages of a DescribeWorkspaceBundles operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeWorkspaceBundles method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeWorkspaceBundles operation. +// pageNum := 0 +// err := client.DescribeWorkspaceBundlesPages(params, +// func(page *DescribeWorkspaceBundlesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *WorkSpaces) DescribeWorkspaceBundlesPages(input *DescribeWorkspaceBundlesInput, fn func(*DescribeWorkspaceBundlesOutput, bool) bool) error { + return c.DescribeWorkspaceBundlesPagesWithContext(aws.BackgroundContext(), input, fn) } -// String returns the string representation -func (s ComputeType) String() string { - return awsutil.Prettify(s) +// DescribeWorkspaceBundlesPagesWithContext same as DescribeWorkspaceBundlesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) DescribeWorkspaceBundlesPagesWithContext(ctx aws.Context, input *DescribeWorkspaceBundlesInput, fn func(*DescribeWorkspaceBundlesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeWorkspaceBundlesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeWorkspaceBundlesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeWorkspaceBundlesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeWorkspaceDirectories = "DescribeWorkspaceDirectories" + +// DescribeWorkspaceDirectoriesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeWorkspaceDirectories operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
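> Reviewer note (illustrative only): a sketch of the new `DescribeWorkspaceBundlesPages` paginator added above. Filtering by `Owner: "AMAZON"` and the field access on each page are assumptions for illustration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// Page through the Amazon-owned bundles; returning true keeps iterating.
	input := &workspaces.DescribeWorkspaceBundlesInput{Owner: aws.String("AMAZON")}
	err := svc.DescribeWorkspaceBundlesPages(input,
		func(page *workspaces.DescribeWorkspaceBundlesOutput, lastPage bool) bool {
			for _, b := range page.Bundles {
				fmt.Printf("%s\t%s\n", aws.StringValue(b.BundleId), aws.StringValue(b.Name))
			}
			return true
		})
	if err != nil {
		log.Fatal(err)
	}
}
```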
+// +// See DescribeWorkspaceDirectories for more information on using the DescribeWorkspaceDirectories +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeWorkspaceDirectoriesRequest method. +// req, resp := client.DescribeWorkspaceDirectoriesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceDirectories +func (c *WorkSpaces) DescribeWorkspaceDirectoriesRequest(input *DescribeWorkspaceDirectoriesInput) (req *request.Request, output *DescribeWorkspaceDirectoriesOutput) { + op := &request.Operation{ + Name: opDescribeWorkspaceDirectories, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeWorkspaceDirectoriesInput{} + } + + output = &DescribeWorkspaceDirectoriesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeWorkspaceDirectories API operation for Amazon WorkSpaces. +// +// Describes the available AWS Directory Service directories that are registered +// with Amazon WorkSpaces. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation DescribeWorkspaceDirectories for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceDirectories +func (c *WorkSpaces) DescribeWorkspaceDirectories(input *DescribeWorkspaceDirectoriesInput) (*DescribeWorkspaceDirectoriesOutput, error) { + req, out := c.DescribeWorkspaceDirectoriesRequest(input) + return out, req.Send() +} + +// DescribeWorkspaceDirectoriesWithContext is the same as DescribeWorkspaceDirectories with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeWorkspaceDirectories for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) DescribeWorkspaceDirectoriesWithContext(ctx aws.Context, input *DescribeWorkspaceDirectoriesInput, opts ...request.Option) (*DescribeWorkspaceDirectoriesOutput, error) { + req, out := c.DescribeWorkspaceDirectoriesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeWorkspaceDirectoriesPages iterates over the pages of a DescribeWorkspaceDirectories operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeWorkspaceDirectories method for more information on how to use this operation. 
+// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeWorkspaceDirectories operation. +// pageNum := 0 +// err := client.DescribeWorkspaceDirectoriesPages(params, +// func(page *DescribeWorkspaceDirectoriesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *WorkSpaces) DescribeWorkspaceDirectoriesPages(input *DescribeWorkspaceDirectoriesInput, fn func(*DescribeWorkspaceDirectoriesOutput, bool) bool) error { + return c.DescribeWorkspaceDirectoriesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeWorkspaceDirectoriesPagesWithContext same as DescribeWorkspaceDirectoriesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) DescribeWorkspaceDirectoriesPagesWithContext(ctx aws.Context, input *DescribeWorkspaceDirectoriesInput, fn func(*DescribeWorkspaceDirectoriesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeWorkspaceDirectoriesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeWorkspaceDirectoriesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeWorkspaceDirectoriesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeWorkspaceImages = "DescribeWorkspaceImages" + +// DescribeWorkspaceImagesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeWorkspaceImages operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeWorkspaceImages for more information on using the DescribeWorkspaceImages +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeWorkspaceImagesRequest method. +// req, resp := client.DescribeWorkspaceImagesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceImages +func (c *WorkSpaces) DescribeWorkspaceImagesRequest(input *DescribeWorkspaceImagesInput) (req *request.Request, output *DescribeWorkspaceImagesOutput) { + op := &request.Operation{ + Name: opDescribeWorkspaceImages, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeWorkspaceImagesInput{} + } + + output = &DescribeWorkspaceImagesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeWorkspaceImages API operation for Amazon WorkSpaces. +// +// Retrieves a list that describes one or more specified images, if the image +// identifiers are provided. 
Otherwise, all images in the account are described. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation DescribeWorkspaceImages for usage and error information. +// +// Returned Error Codes: +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaceImages +func (c *WorkSpaces) DescribeWorkspaceImages(input *DescribeWorkspaceImagesInput) (*DescribeWorkspaceImagesOutput, error) { + req, out := c.DescribeWorkspaceImagesRequest(input) + return out, req.Send() +} + +// DescribeWorkspaceImagesWithContext is the same as DescribeWorkspaceImages with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeWorkspaceImages for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) DescribeWorkspaceImagesWithContext(ctx aws.Context, input *DescribeWorkspaceImagesInput, opts ...request.Option) (*DescribeWorkspaceImagesOutput, error) { + req, out := c.DescribeWorkspaceImagesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeWorkspaces = "DescribeWorkspaces" + +// DescribeWorkspacesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeWorkspaces operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeWorkspaces for more information on using the DescribeWorkspaces +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeWorkspacesRequest method. +// req, resp := client.DescribeWorkspacesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaces +func (c *WorkSpaces) DescribeWorkspacesRequest(input *DescribeWorkspacesInput) (req *request.Request, output *DescribeWorkspacesOutput) { + op := &request.Operation{ + Name: opDescribeWorkspaces, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "Limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeWorkspacesInput{} + } + + output = &DescribeWorkspacesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeWorkspaces API operation for Amazon WorkSpaces. +// +// Describes the specified WorkSpaces. 
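> Reviewer note (illustrative only): a sketch of the `DescribeWorkspaceImages` operation added above, listing every image in the account by omitting `ImageIds`.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// With no image identifiers provided, all images in the account are described.
	resp, err := svc.DescribeWorkspaceImages(&workspaces.DescribeWorkspaceImagesInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```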
+// +// You can filter the results by using the bundle identifier, directory identifier, +// or owner, but you can specify only one filter at a time. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation DescribeWorkspaces for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeResourceUnavailableException "ResourceUnavailableException" +// The specified resource is not available. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspaces +func (c *WorkSpaces) DescribeWorkspaces(input *DescribeWorkspacesInput) (*DescribeWorkspacesOutput, error) { + req, out := c.DescribeWorkspacesRequest(input) + return out, req.Send() +} + +// DescribeWorkspacesWithContext is the same as DescribeWorkspaces with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeWorkspaces for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) DescribeWorkspacesWithContext(ctx aws.Context, input *DescribeWorkspacesInput, opts ...request.Option) (*DescribeWorkspacesOutput, error) { + req, out := c.DescribeWorkspacesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeWorkspacesPages iterates over the pages of a DescribeWorkspaces operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeWorkspaces method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeWorkspaces operation. +// pageNum := 0 +// err := client.DescribeWorkspacesPages(params, +// func(page *DescribeWorkspacesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *WorkSpaces) DescribeWorkspacesPages(input *DescribeWorkspacesInput, fn func(*DescribeWorkspacesOutput, bool) bool) error { + return c.DescribeWorkspacesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeWorkspacesPagesWithContext same as DescribeWorkspacesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *WorkSpaces) DescribeWorkspacesPagesWithContext(ctx aws.Context, input *DescribeWorkspacesInput, fn func(*DescribeWorkspacesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeWorkspacesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeWorkspacesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeWorkspacesOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeWorkspacesConnectionStatus = "DescribeWorkspacesConnectionStatus" + +// DescribeWorkspacesConnectionStatusRequest generates a "aws/request.Request" representing the +// client's request for the DescribeWorkspacesConnectionStatus operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeWorkspacesConnectionStatus for more information on using the DescribeWorkspacesConnectionStatus +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeWorkspacesConnectionStatusRequest method. +// req, resp := client.DescribeWorkspacesConnectionStatusRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspacesConnectionStatus +func (c *WorkSpaces) DescribeWorkspacesConnectionStatusRequest(input *DescribeWorkspacesConnectionStatusInput) (req *request.Request, output *DescribeWorkspacesConnectionStatusOutput) { + op := &request.Operation{ + Name: opDescribeWorkspacesConnectionStatus, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeWorkspacesConnectionStatusInput{} + } + + output = &DescribeWorkspacesConnectionStatusOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeWorkspacesConnectionStatus API operation for Amazon WorkSpaces. +// +// Describes the connection status of the specified WorkSpaces. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation DescribeWorkspacesConnectionStatus for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DescribeWorkspacesConnectionStatus +func (c *WorkSpaces) DescribeWorkspacesConnectionStatus(input *DescribeWorkspacesConnectionStatusInput) (*DescribeWorkspacesConnectionStatusOutput, error) { + req, out := c.DescribeWorkspacesConnectionStatusRequest(input) + return out, req.Send() +} + +// DescribeWorkspacesConnectionStatusWithContext is the same as DescribeWorkspacesConnectionStatus with the addition of +// the ability to pass a context and additional request options. 
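> Reviewer note (illustrative only): a sketch of the `DescribeWorkspacesPages` paginator added above, filtered by a placeholder directory ID (only one filter may be used at a time).

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// "d-1234567890" is a placeholder; the callback runs once per page of results.
	input := &workspaces.DescribeWorkspacesInput{DirectoryId: aws.String("d-1234567890")}
	err := svc.DescribeWorkspacesPages(input,
		func(page *workspaces.DescribeWorkspacesOutput, lastPage bool) bool {
			for _, ws := range page.Workspaces {
				fmt.Printf("%s\t%s\n", aws.StringValue(ws.WorkspaceId), aws.StringValue(ws.State))
			}
			return !lastPage
		})
	if err != nil {
		log.Fatal(err)
	}
}
```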
+// +// See DescribeWorkspacesConnectionStatus for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) DescribeWorkspacesConnectionStatusWithContext(ctx aws.Context, input *DescribeWorkspacesConnectionStatusInput, opts ...request.Option) (*DescribeWorkspacesConnectionStatusOutput, error) { + req, out := c.DescribeWorkspacesConnectionStatusRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDisassociateIpGroups = "DisassociateIpGroups" + +// DisassociateIpGroupsRequest generates a "aws/request.Request" representing the +// client's request for the DisassociateIpGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DisassociateIpGroups for more information on using the DisassociateIpGroups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisassociateIpGroupsRequest method. +// req, resp := client.DisassociateIpGroupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DisassociateIpGroups +func (c *WorkSpaces) DisassociateIpGroupsRequest(input *DisassociateIpGroupsInput) (req *request.Request, output *DisassociateIpGroupsOutput) { + op := &request.Operation{ + Name: opDisassociateIpGroups, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DisassociateIpGroupsInput{} + } + + output = &DisassociateIpGroupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DisassociateIpGroups API operation for Amazon WorkSpaces. +// +// Disassociates the specified IP access control group from the specified directory. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation DisassociateIpGroups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. 
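> Reviewer note (illustrative only): a sketch of the `DescribeWorkspacesConnectionStatus` operation added earlier in this chunk, using a placeholder WorkSpace ID.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// "ws-abcdefghi" is a placeholder WorkSpace ID.
	resp, err := svc.DescribeWorkspacesConnectionStatus(&workspaces.DescribeWorkspacesConnectionStatusInput{
		WorkspaceIds: []*string{aws.String("ws-abcdefghi")},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```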
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/DisassociateIpGroups +func (c *WorkSpaces) DisassociateIpGroups(input *DisassociateIpGroupsInput) (*DisassociateIpGroupsOutput, error) { + req, out := c.DisassociateIpGroupsRequest(input) + return out, req.Send() +} + +// DisassociateIpGroupsWithContext is the same as DisassociateIpGroups with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateIpGroups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) DisassociateIpGroupsWithContext(ctx aws.Context, input *DisassociateIpGroupsInput, opts ...request.Option) (*DisassociateIpGroupsOutput, error) { + req, out := c.DisassociateIpGroupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opImportWorkspaceImage = "ImportWorkspaceImage" + +// ImportWorkspaceImageRequest generates a "aws/request.Request" representing the +// client's request for the ImportWorkspaceImage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ImportWorkspaceImage for more information on using the ImportWorkspaceImage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ImportWorkspaceImageRequest method. +// req, resp := client.ImportWorkspaceImageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ImportWorkspaceImage +func (c *WorkSpaces) ImportWorkspaceImageRequest(input *ImportWorkspaceImageInput) (req *request.Request, output *ImportWorkspaceImageOutput) { + op := &request.Operation{ + Name: opImportWorkspaceImage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ImportWorkspaceImageInput{} + } + + output = &ImportWorkspaceImageOutput{} + req = c.newRequest(op, input, output) + return +} + +// ImportWorkspaceImage API operation for Amazon WorkSpaces. +// +// Imports the specified Windows 7 or Windows 10 bring your own license (BYOL) +// image into Amazon WorkSpaces. The image must be an already licensed EC2 image +// that is in your AWS account, and you must own the image. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation ImportWorkspaceImage for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Your resource limits have been exceeded. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The specified resource already exists. 
+// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// * ErrCodeOperationNotSupportedException "OperationNotSupportedException" +// This operation is not supported. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ImportWorkspaceImage +func (c *WorkSpaces) ImportWorkspaceImage(input *ImportWorkspaceImageInput) (*ImportWorkspaceImageOutput, error) { + req, out := c.ImportWorkspaceImageRequest(input) + return out, req.Send() +} + +// ImportWorkspaceImageWithContext is the same as ImportWorkspaceImage with the addition of +// the ability to pass a context and additional request options. +// +// See ImportWorkspaceImage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) ImportWorkspaceImageWithContext(ctx aws.Context, input *ImportWorkspaceImageInput, opts ...request.Option) (*ImportWorkspaceImageOutput, error) { + req, out := c.ImportWorkspaceImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opListAvailableManagementCidrRanges = "ListAvailableManagementCidrRanges" + +// ListAvailableManagementCidrRangesRequest generates a "aws/request.Request" representing the +// client's request for the ListAvailableManagementCidrRanges operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAvailableManagementCidrRanges for more information on using the ListAvailableManagementCidrRanges +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAvailableManagementCidrRangesRequest method. +// req, resp := client.ListAvailableManagementCidrRangesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ListAvailableManagementCidrRanges +func (c *WorkSpaces) ListAvailableManagementCidrRangesRequest(input *ListAvailableManagementCidrRangesInput) (req *request.Request, output *ListAvailableManagementCidrRangesOutput) { + op := &request.Operation{ + Name: opListAvailableManagementCidrRanges, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListAvailableManagementCidrRangesInput{} + } + + output = &ListAvailableManagementCidrRangesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAvailableManagementCidrRanges API operation for Amazon WorkSpaces. +// +// Retrieves a list of IP address ranges, specified as IPv4 CIDR blocks, that +// you can use for the network management interface when you enable bring your +// own license (BYOL). +// +// The management network interface is connected to a secure Amazon WorkSpaces +// management network. 
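> Reviewer note (illustrative only): a sketch of the `ImportWorkspaceImage` operation added above. Every value below is a placeholder, and the ingestion process is passed as a string literal rather than a generated enum constant.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// The AMI must be an already licensed BYOL image that you own; all IDs are placeholders.
	resp, err := svc.ImportWorkspaceImage(&workspaces.ImportWorkspaceImageInput{
		Ec2ImageId:       aws.String("ami-0123456789abcdef0"),
		ImageName:        aws.String("example-byol-image"),
		ImageDescription: aws.String("Example BYOL import"),
		IngestionProcess: aws.String("BYOL_REGULAR"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```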
It is used for interactive streaming of the WorkSpace +// desktop to Amazon WorkSpaces clients, and to allow Amazon WorkSpaces to manage +// the WorkSpace. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation ListAvailableManagementCidrRanges for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ListAvailableManagementCidrRanges +func (c *WorkSpaces) ListAvailableManagementCidrRanges(input *ListAvailableManagementCidrRangesInput) (*ListAvailableManagementCidrRangesOutput, error) { + req, out := c.ListAvailableManagementCidrRangesRequest(input) + return out, req.Send() +} + +// ListAvailableManagementCidrRangesWithContext is the same as ListAvailableManagementCidrRanges with the addition of +// the ability to pass a context and additional request options. +// +// See ListAvailableManagementCidrRanges for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) ListAvailableManagementCidrRangesWithContext(ctx aws.Context, input *ListAvailableManagementCidrRangesInput, opts ...request.Option) (*ListAvailableManagementCidrRangesOutput, error) { + req, out := c.ListAvailableManagementCidrRangesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyAccount = "ModifyAccount" + +// ModifyAccountRequest generates a "aws/request.Request" representing the +// client's request for the ModifyAccount operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyAccount for more information on using the ModifyAccount +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyAccountRequest method. +// req, resp := client.ModifyAccountRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyAccount +func (c *WorkSpaces) ModifyAccountRequest(input *ModifyAccountInput) (req *request.Request, output *ModifyAccountOutput) { + op := &request.Operation{ + Name: opModifyAccount, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyAccountInput{} + } + + output = &ModifyAccountOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyAccount API operation for Amazon WorkSpaces. 
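> Reviewer note (illustrative only): a sketch of the `ListAvailableManagementCidrRanges` operation documented above. The CIDR constraint shown is a placeholder value.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// List management-interface CIDR blocks that do not conflict with the given range (placeholder).
	resp, err := svc.ListAvailableManagementCidrRanges(&workspaces.ListAvailableManagementCidrRangesInput{
		ManagementCidrRangeConstraint: aws.String("10.0.0.0/16"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```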
+// +// Modifies the configuration of bring your own license (BYOL) for the specified +// account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation ModifyAccount for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. +// +// * ErrCodeResourceUnavailableException "ResourceUnavailableException" +// The specified resource is not available. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyAccount +func (c *WorkSpaces) ModifyAccount(input *ModifyAccountInput) (*ModifyAccountOutput, error) { + req, out := c.ModifyAccountRequest(input) + return out, req.Send() +} + +// ModifyAccountWithContext is the same as ModifyAccount with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyAccount for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) ModifyAccountWithContext(ctx aws.Context, input *ModifyAccountInput, opts ...request.Option) (*ModifyAccountOutput, error) { + req, out := c.ModifyAccountRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyClientProperties = "ModifyClientProperties" + +// ModifyClientPropertiesRequest generates a "aws/request.Request" representing the +// client's request for the ModifyClientProperties operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyClientProperties for more information on using the ModifyClientProperties +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyClientPropertiesRequest method. 
+// req, resp := client.ModifyClientPropertiesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyClientProperties +func (c *WorkSpaces) ModifyClientPropertiesRequest(input *ModifyClientPropertiesInput) (req *request.Request, output *ModifyClientPropertiesOutput) { + op := &request.Operation{ + Name: opModifyClientProperties, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyClientPropertiesInput{} + } + + output = &ModifyClientPropertiesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyClientProperties API operation for Amazon WorkSpaces. +// +// Modifies the properties of the specified Amazon WorkSpaces client. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation ModifyClientProperties for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyClientProperties +func (c *WorkSpaces) ModifyClientProperties(input *ModifyClientPropertiesInput) (*ModifyClientPropertiesOutput, error) { + req, out := c.ModifyClientPropertiesRequest(input) + return out, req.Send() +} + +// ModifyClientPropertiesWithContext is the same as ModifyClientProperties with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyClientProperties for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) ModifyClientPropertiesWithContext(ctx aws.Context, input *ModifyClientPropertiesInput, opts ...request.Option) (*ModifyClientPropertiesOutput, error) { + req, out := c.ModifyClientPropertiesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyWorkspaceProperties = "ModifyWorkspaceProperties" + +// ModifyWorkspacePropertiesRequest generates a "aws/request.Request" representing the +// client's request for the ModifyWorkspaceProperties operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyWorkspaceProperties for more information on using the ModifyWorkspaceProperties +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
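> Reviewer note (illustrative only): a sketch of the `ModifyClientProperties` operation added above, enabling client reconnect for a placeholder directory ID. The "ENABLED" value is passed as a string literal for brevity.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := workspaces.New(sess)

	// Enable reconnect for the WorkSpaces clients of a placeholder directory.
	resp, err := svc.ModifyClientProperties(&workspaces.ModifyClientPropertiesInput{
		ResourceId: aws.String("d-1234567890"),
		ClientProperties: &workspaces.ClientProperties{
			ReconnectEnabled: aws.String("ENABLED"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```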
+// +// +// // Example sending a request using the ModifyWorkspacePropertiesRequest method. +// req, resp := client.ModifyWorkspacePropertiesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyWorkspaceProperties +func (c *WorkSpaces) ModifyWorkspacePropertiesRequest(input *ModifyWorkspacePropertiesInput) (req *request.Request, output *ModifyWorkspacePropertiesOutput) { + op := &request.Operation{ + Name: opModifyWorkspaceProperties, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyWorkspacePropertiesInput{} + } + + output = &ModifyWorkspacePropertiesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyWorkspaceProperties API operation for Amazon WorkSpaces. +// +// Modifies the specified WorkSpace properties. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation ModifyWorkspaceProperties for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. +// +// * ErrCodeOperationInProgressException "OperationInProgressException" +// The properties of this WorkSpace are currently being modified. Try again +// in a moment. +// +// * ErrCodeUnsupportedWorkspaceConfigurationException "UnsupportedWorkspaceConfigurationException" +// The configuration of this WorkSpace is not supported for this operation. +// For more information, see the Amazon WorkSpaces Administration Guide (http://docs.aws.amazon.com/workspaces/latest/adminguide/). +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// * ErrCodeResourceUnavailableException "ResourceUnavailableException" +// The specified resource is not available. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyWorkspaceProperties +func (c *WorkSpaces) ModifyWorkspaceProperties(input *ModifyWorkspacePropertiesInput) (*ModifyWorkspacePropertiesOutput, error) { + req, out := c.ModifyWorkspacePropertiesRequest(input) + return out, req.Send() +} + +// ModifyWorkspacePropertiesWithContext is the same as ModifyWorkspaceProperties with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyWorkspaceProperties for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) ModifyWorkspacePropertiesWithContext(ctx aws.Context, input *ModifyWorkspacePropertiesInput, opts ...request.Option) (*ModifyWorkspacePropertiesOutput, error) { + req, out := c.ModifyWorkspacePropertiesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opModifyWorkspaceState = "ModifyWorkspaceState" + +// ModifyWorkspaceStateRequest generates a "aws/request.Request" representing the +// client's request for the ModifyWorkspaceState operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyWorkspaceState for more information on using the ModifyWorkspaceState +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyWorkspaceStateRequest method. +// req, resp := client.ModifyWorkspaceStateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyWorkspaceState +func (c *WorkSpaces) ModifyWorkspaceStateRequest(input *ModifyWorkspaceStateInput) (req *request.Request, output *ModifyWorkspaceStateOutput) { + op := &request.Operation{ + Name: opModifyWorkspaceState, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyWorkspaceStateInput{} + } + + output = &ModifyWorkspaceStateOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyWorkspaceState API operation for Amazon WorkSpaces. +// +// Sets the state of the specified WorkSpace. +// +// To maintain a WorkSpace without being interrupted, set the WorkSpace state +// to ADMIN_MAINTENANCE. WorkSpaces in this state do not respond to requests +// to reboot, stop, start, or rebuild. An AutoStop WorkSpace in this state is +// not stopped. Users can log into a WorkSpace in the ADMIN_MAINTENANCE state. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation ModifyWorkspaceState for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/ModifyWorkspaceState +func (c *WorkSpaces) ModifyWorkspaceState(input *ModifyWorkspaceStateInput) (*ModifyWorkspaceStateOutput, error) { + req, out := c.ModifyWorkspaceStateRequest(input) + return out, req.Send() +} + +// ModifyWorkspaceStateWithContext is the same as ModifyWorkspaceState with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyWorkspaceState for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
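+//
+// Illustrative sketch only (not generated code): it assumes a *WorkSpaces client
+// named svc and that ModifyWorkspaceStateInput exposes WorkspaceId and
+// WorkspaceState fields (not shown in this hunk); the identifier is a placeholder.
+//
+//    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+//    defer cancel()
+//    _, err := svc.ModifyWorkspaceStateWithContext(ctx, &ModifyWorkspaceStateInput{
+//        WorkspaceId:    aws.String("ws-0123456789"),
+//        WorkspaceState: aws.String("ADMIN_MAINTENANCE"),
+//    })
+//    if err != nil {
+//        // the call was cancelled, timed out, or returned a service error
+//    }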
+func (c *WorkSpaces) ModifyWorkspaceStateWithContext(ctx aws.Context, input *ModifyWorkspaceStateInput, opts ...request.Option) (*ModifyWorkspaceStateOutput, error) { + req, out := c.ModifyWorkspaceStateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opRebootWorkspaces = "RebootWorkspaces" + +// RebootWorkspacesRequest generates a "aws/request.Request" representing the +// client's request for the RebootWorkspaces operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RebootWorkspaces for more information on using the RebootWorkspaces +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RebootWorkspacesRequest method. +// req, resp := client.RebootWorkspacesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RebootWorkspaces +func (c *WorkSpaces) RebootWorkspacesRequest(input *RebootWorkspacesInput) (req *request.Request, output *RebootWorkspacesOutput) { + op := &request.Operation{ + Name: opRebootWorkspaces, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RebootWorkspacesInput{} + } + + output = &RebootWorkspacesOutput{} + req = c.newRequest(op, input, output) + return +} + +// RebootWorkspaces API operation for Amazon WorkSpaces. +// +// Reboots the specified WorkSpaces. +// +// You cannot reboot a WorkSpace unless its state is AVAILABLE or UNHEALTHY. +// +// This operation is asynchronous and returns before the WorkSpaces have rebooted. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation RebootWorkspaces for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RebootWorkspaces +func (c *WorkSpaces) RebootWorkspaces(input *RebootWorkspacesInput) (*RebootWorkspacesOutput, error) { + req, out := c.RebootWorkspacesRequest(input) + return out, req.Send() +} + +// RebootWorkspacesWithContext is the same as RebootWorkspaces with the addition of +// the ability to pass a context and additional request options. +// +// See RebootWorkspaces for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) RebootWorkspacesWithContext(ctx aws.Context, input *RebootWorkspacesInput, opts ...request.Option) (*RebootWorkspacesOutput, error) { + req, out := c.RebootWorkspacesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opRebuildWorkspaces = "RebuildWorkspaces" + +// RebuildWorkspacesRequest generates a "aws/request.Request" representing the +// client's request for the RebuildWorkspaces operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RebuildWorkspaces for more information on using the RebuildWorkspaces +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RebuildWorkspacesRequest method. +// req, resp := client.RebuildWorkspacesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RebuildWorkspaces +func (c *WorkSpaces) RebuildWorkspacesRequest(input *RebuildWorkspacesInput) (req *request.Request, output *RebuildWorkspacesOutput) { + op := &request.Operation{ + Name: opRebuildWorkspaces, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RebuildWorkspacesInput{} + } + + output = &RebuildWorkspacesOutput{} + req = c.newRequest(op, input, output) + return +} + +// RebuildWorkspaces API operation for Amazon WorkSpaces. +// +// Rebuilds the specified WorkSpace. +// +// You cannot rebuild a WorkSpace unless its state is AVAILABLE, ERROR, or UNHEALTHY. +// +// Rebuilding a WorkSpace is a potentially destructive action that can result +// in the loss of data. For more information, see Rebuild a WorkSpace (http://docs.aws.amazon.com/workspaces/latest/adminguide/reset-workspace.html). +// +// This operation is asynchronous and returns before the WorkSpaces have been +// completely rebuilt. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation RebuildWorkspaces for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RebuildWorkspaces +func (c *WorkSpaces) RebuildWorkspaces(input *RebuildWorkspacesInput) (*RebuildWorkspacesOutput, error) { + req, out := c.RebuildWorkspacesRequest(input) + return out, req.Send() +} + +// RebuildWorkspacesWithContext is the same as RebuildWorkspaces with the addition of +// the ability to pass a context and additional request options. +// +// See RebuildWorkspaces for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) RebuildWorkspacesWithContext(ctx aws.Context, input *RebuildWorkspacesInput, opts ...request.Option) (*RebuildWorkspacesOutput, error) { + req, out := c.RebuildWorkspacesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opRevokeIpRules = "RevokeIpRules" + +// RevokeIpRulesRequest generates a "aws/request.Request" representing the +// client's request for the RevokeIpRules operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RevokeIpRules for more information on using the RevokeIpRules +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RevokeIpRulesRequest method. +// req, resp := client.RevokeIpRulesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RevokeIpRules +func (c *WorkSpaces) RevokeIpRulesRequest(input *RevokeIpRulesInput) (req *request.Request, output *RevokeIpRulesOutput) { + op := &request.Operation{ + Name: opRevokeIpRules, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RevokeIpRulesInput{} + } + + output = &RevokeIpRulesOutput{} + req = c.newRequest(op, input, output) + return +} + +// RevokeIpRules API operation for Amazon WorkSpaces. +// +// Removes one or more rules from the specified IP access control group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation RevokeIpRules for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/RevokeIpRules +func (c *WorkSpaces) RevokeIpRules(input *RevokeIpRulesInput) (*RevokeIpRulesOutput, error) { + req, out := c.RevokeIpRulesRequest(input) + return out, req.Send() +} + +// RevokeIpRulesWithContext is the same as RevokeIpRules with the addition of +// the ability to pass a context and additional request options. +// +// See RevokeIpRules for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) RevokeIpRulesWithContext(ctx aws.Context, input *RevokeIpRulesInput, opts ...request.Option) (*RevokeIpRulesOutput, error) { + req, out := c.RevokeIpRulesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opStartWorkspaces = "StartWorkspaces" + +// StartWorkspacesRequest generates a "aws/request.Request" representing the +// client's request for the StartWorkspaces operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartWorkspaces for more information on using the StartWorkspaces +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartWorkspacesRequest method. +// req, resp := client.StartWorkspacesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/StartWorkspaces +func (c *WorkSpaces) StartWorkspacesRequest(input *StartWorkspacesInput) (req *request.Request, output *StartWorkspacesOutput) { + op := &request.Operation{ + Name: opStartWorkspaces, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartWorkspacesInput{} + } + + output = &StartWorkspacesOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartWorkspaces API operation for Amazon WorkSpaces. +// +// Starts the specified WorkSpaces. +// +// You cannot start a WorkSpace unless it has a running mode of AutoStop and +// a state of STOPPED. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation StartWorkspaces for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/StartWorkspaces +func (c *WorkSpaces) StartWorkspaces(input *StartWorkspacesInput) (*StartWorkspacesOutput, error) { + req, out := c.StartWorkspacesRequest(input) + return out, req.Send() +} + +// StartWorkspacesWithContext is the same as StartWorkspaces with the addition of +// the ability to pass a context and additional request options. +// +// See StartWorkspaces for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) StartWorkspacesWithContext(ctx aws.Context, input *StartWorkspacesInput, opts ...request.Option) (*StartWorkspacesOutput, error) { + req, out := c.StartWorkspacesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopWorkspaces = "StopWorkspaces" + +// StopWorkspacesRequest generates a "aws/request.Request" representing the +// client's request for the StopWorkspaces operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See StopWorkspaces for more information on using the StopWorkspaces +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StopWorkspacesRequest method. +// req, resp := client.StopWorkspacesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/StopWorkspaces +func (c *WorkSpaces) StopWorkspacesRequest(input *StopWorkspacesInput) (req *request.Request, output *StopWorkspacesOutput) { + op := &request.Operation{ + Name: opStopWorkspaces, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StopWorkspacesInput{} + } + + output = &StopWorkspacesOutput{} + req = c.newRequest(op, input, output) + return +} + +// StopWorkspaces API operation for Amazon WorkSpaces. +// +// Stops the specified WorkSpaces. +// +// You cannot stop a WorkSpace unless it has a running mode of AutoStop and +// a state of AVAILABLE, IMPAIRED, UNHEALTHY, or ERROR. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation StopWorkspaces for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/StopWorkspaces +func (c *WorkSpaces) StopWorkspaces(input *StopWorkspacesInput) (*StopWorkspacesOutput, error) { + req, out := c.StopWorkspacesRequest(input) + return out, req.Send() +} + +// StopWorkspacesWithContext is the same as StopWorkspaces with the addition of +// the ability to pass a context and additional request options. +// +// See StopWorkspaces for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) StopWorkspacesWithContext(ctx aws.Context, input *StopWorkspacesInput, opts ...request.Option) (*StopWorkspacesOutput, error) { + req, out := c.StopWorkspacesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTerminateWorkspaces = "TerminateWorkspaces" + +// TerminateWorkspacesRequest generates a "aws/request.Request" representing the +// client's request for the TerminateWorkspaces operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See TerminateWorkspaces for more information on using the TerminateWorkspaces +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the TerminateWorkspacesRequest method. 
+// req, resp := client.TerminateWorkspacesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/TerminateWorkspaces +func (c *WorkSpaces) TerminateWorkspacesRequest(input *TerminateWorkspacesInput) (req *request.Request, output *TerminateWorkspacesOutput) { + op := &request.Operation{ + Name: opTerminateWorkspaces, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TerminateWorkspacesInput{} + } + + output = &TerminateWorkspacesOutput{} + req = c.newRequest(op, input, output) + return +} + +// TerminateWorkspaces API operation for Amazon WorkSpaces. +// +// Terminates the specified WorkSpaces. +// +// Terminating a WorkSpace is a permanent action and cannot be undone. The user's +// data is destroyed. If you need to archive any user data, contact Amazon Web +// Services before terminating the WorkSpace. +// +// You can terminate a WorkSpace that is in any state except SUSPENDED. +// +// This operation is asynchronous and returns before the WorkSpaces have been +// completely terminated. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation TerminateWorkspaces for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/TerminateWorkspaces +func (c *WorkSpaces) TerminateWorkspaces(input *TerminateWorkspacesInput) (*TerminateWorkspacesOutput, error) { + req, out := c.TerminateWorkspacesRequest(input) + return out, req.Send() +} + +// TerminateWorkspacesWithContext is the same as TerminateWorkspaces with the addition of +// the ability to pass a context and additional request options. +// +// See TerminateWorkspaces for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) TerminateWorkspacesWithContext(ctx aws.Context, input *TerminateWorkspacesInput, opts ...request.Option) (*TerminateWorkspacesOutput, error) { + req, out := c.TerminateWorkspacesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUpdateRulesOfIpGroup = "UpdateRulesOfIpGroup" + +// UpdateRulesOfIpGroupRequest generates a "aws/request.Request" representing the +// client's request for the UpdateRulesOfIpGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateRulesOfIpGroup for more information on using the UpdateRulesOfIpGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateRulesOfIpGroupRequest method. 
+// req, resp := client.UpdateRulesOfIpGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/UpdateRulesOfIpGroup +func (c *WorkSpaces) UpdateRulesOfIpGroupRequest(input *UpdateRulesOfIpGroupInput) (req *request.Request, output *UpdateRulesOfIpGroupOutput) { + op := &request.Operation{ + Name: opUpdateRulesOfIpGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateRulesOfIpGroupInput{} + } + + output = &UpdateRulesOfIpGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateRulesOfIpGroup API operation for Amazon WorkSpaces. +// +// Replaces the current rules of the specified IP access control group with +// the specified rules. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon WorkSpaces's +// API operation UpdateRulesOfIpGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterValuesException "InvalidParameterValuesException" +// One or more parameter values are not valid. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The resource could not be found. +// +// * ErrCodeResourceLimitExceededException "ResourceLimitExceededException" +// Your resource limits have been exceeded. +// +// * ErrCodeInvalidResourceStateException "InvalidResourceStateException" +// The state of the resource is not valid for this operation. +// +// * ErrCodeAccessDeniedException "AccessDeniedException" +// The user is not authorized to access a resource. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08/UpdateRulesOfIpGroup +func (c *WorkSpaces) UpdateRulesOfIpGroup(input *UpdateRulesOfIpGroupInput) (*UpdateRulesOfIpGroupOutput, error) { + req, out := c.UpdateRulesOfIpGroupRequest(input) + return out, req.Send() +} + +// UpdateRulesOfIpGroupWithContext is the same as UpdateRulesOfIpGroup with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateRulesOfIpGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *WorkSpaces) UpdateRulesOfIpGroupWithContext(ctx aws.Context, input *UpdateRulesOfIpGroupInput, opts ...request.Option) (*UpdateRulesOfIpGroupOutput, error) { + req, out := c.UpdateRulesOfIpGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// Describes a modification to the configuration of bring your own license (BYOL) +// for the specified account. +type AccountModification struct { + _ struct{} `type:"structure"` + + // The IP address range, specified as an IPv4 CIDR block, for the management + // network interface used for the account. + DedicatedTenancyManagementCidrRange *string `type:"string"` + + // The status of BYOL (whether BYOL is being enabled or disabled). + DedicatedTenancySupport *string `type:"string" enum:"DedicatedTenancySupportResultEnum"` + + // The error code that is returned if the configuration of BYOL cannot be modified. 
+ ErrorCode *string `type:"string"` + + // The text of the error message that is returned if the configuration of BYOL + // cannot be modified. + ErrorMessage *string `type:"string"` + + // The state of the modification to the configuration of BYOL. + ModificationState *string `type:"string" enum:"DedicatedTenancyModificationStateEnum"` + + // The timestamp when the modification of the BYOL configuration was started. + StartTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s AccountModification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccountModification) GoString() string { + return s.String() +} + +// SetDedicatedTenancyManagementCidrRange sets the DedicatedTenancyManagementCidrRange field's value. +func (s *AccountModification) SetDedicatedTenancyManagementCidrRange(v string) *AccountModification { + s.DedicatedTenancyManagementCidrRange = &v + return s +} + +// SetDedicatedTenancySupport sets the DedicatedTenancySupport field's value. +func (s *AccountModification) SetDedicatedTenancySupport(v string) *AccountModification { + s.DedicatedTenancySupport = &v + return s +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *AccountModification) SetErrorCode(v string) *AccountModification { + s.ErrorCode = &v + return s +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *AccountModification) SetErrorMessage(v string) *AccountModification { + s.ErrorMessage = &v + return s +} + +// SetModificationState sets the ModificationState field's value. +func (s *AccountModification) SetModificationState(v string) *AccountModification { + s.ModificationState = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *AccountModification) SetStartTime(v time.Time) *AccountModification { + s.StartTime = &v + return s +} + +type AssociateIpGroupsInput struct { + _ struct{} `type:"structure"` + + // The identifier of the directory. + // + // DirectoryId is a required field + DirectoryId *string `type:"string" required:"true"` + + // The identifiers of one or more IP access control groups. + // + // GroupIds is a required field + GroupIds []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s AssociateIpGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateIpGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateIpGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociateIpGroupsInput"} + if s.DirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("DirectoryId")) + } + if s.GroupIds == nil { + invalidParams.Add(request.NewErrParamRequired("GroupIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDirectoryId sets the DirectoryId field's value. +func (s *AssociateIpGroupsInput) SetDirectoryId(v string) *AssociateIpGroupsInput { + s.DirectoryId = &v + return s +} + +// SetGroupIds sets the GroupIds field's value. 
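+//
+// Illustrative sketch only: the setters return the receiver, so an input can be
+// built fluently and validated before it is sent (the identifiers are placeholders).
+//
+//    input := new(AssociateIpGroupsInput).
+//        SetDirectoryId("d-0123456789").
+//        SetGroupIds([]*string{aws.String("wsipg-0123456789")})
+//    if err := input.Validate(); err != nil {
+//        // both DirectoryId and GroupIds are required; a missing value is reported here
+//    }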
+func (s *AssociateIpGroupsInput) SetGroupIds(v []*string) *AssociateIpGroupsInput { + s.GroupIds = v + return s +} + +type AssociateIpGroupsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AssociateIpGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateIpGroupsOutput) GoString() string { + return s.String() +} + +type AuthorizeIpRulesInput struct { + _ struct{} `type:"structure"` + + // The identifier of the group. + // + // GroupId is a required field + GroupId *string `type:"string" required:"true"` + + // The rules to add to the group. + // + // UserRules is a required field + UserRules []*IpRuleItem `type:"list" required:"true"` +} + +// String returns the string representation +func (s AuthorizeIpRulesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthorizeIpRulesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AuthorizeIpRulesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AuthorizeIpRulesInput"} + if s.GroupId == nil { + invalidParams.Add(request.NewErrParamRequired("GroupId")) + } + if s.UserRules == nil { + invalidParams.Add(request.NewErrParamRequired("UserRules")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupId sets the GroupId field's value. +func (s *AuthorizeIpRulesInput) SetGroupId(v string) *AuthorizeIpRulesInput { + s.GroupId = &v + return s +} + +// SetUserRules sets the UserRules field's value. +func (s *AuthorizeIpRulesInput) SetUserRules(v []*IpRuleItem) *AuthorizeIpRulesInput { + s.UserRules = v + return s +} + +type AuthorizeIpRulesOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AuthorizeIpRulesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AuthorizeIpRulesOutput) GoString() string { + return s.String() +} + +// Describes an Amazon WorkSpaces client. +type ClientProperties struct { + _ struct{} `type:"structure"` + + // Specifies whether users can cache their credentials on the Amazon WorkSpaces + // client. When enabled, users can choose to reconnect to their WorkSpaces without + // re-entering their credentials. + ReconnectEnabled *string `type:"string" enum:"ReconnectEnum"` +} + +// String returns the string representation +func (s ClientProperties) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClientProperties) GoString() string { + return s.String() +} + +// SetReconnectEnabled sets the ReconnectEnabled field's value. +func (s *ClientProperties) SetReconnectEnabled(v string) *ClientProperties { + s.ReconnectEnabled = &v + return s +} + +// Information about the Amazon WorkSpaces client. +type ClientPropertiesResult struct { + _ struct{} `type:"structure"` + + // Information about the Amazon WorkSpaces client. + ClientProperties *ClientProperties `type:"structure"` + + // The resource identifier, in the form of a directory ID. 
+ ResourceId *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ClientPropertiesResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClientPropertiesResult) GoString() string { + return s.String() +} + +// SetClientProperties sets the ClientProperties field's value. +func (s *ClientPropertiesResult) SetClientProperties(v *ClientProperties) *ClientPropertiesResult { + s.ClientProperties = v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *ClientPropertiesResult) SetResourceId(v string) *ClientPropertiesResult { + s.ResourceId = &v + return s +} + +// Describes the compute type. +type ComputeType struct { + _ struct{} `type:"structure"` + + // The compute type. + Name *string `type:"string" enum:"Compute"` +} + +// String returns the string representation +func (s ComputeType) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ComputeType) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *ComputeType) SetName(v string) *ComputeType { + s.Name = &v + return s +} + +type CreateIpGroupInput struct { + _ struct{} `type:"structure"` + + // The description of the group. + GroupDesc *string `type:"string"` + + // The name of the group. + // + // GroupName is a required field + GroupName *string `type:"string" required:"true"` + + // The rules to add to the group. + UserRules []*IpRuleItem `type:"list"` +} + +// String returns the string representation +func (s CreateIpGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateIpGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateIpGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateIpGroupInput"} + if s.GroupName == nil { + invalidParams.Add(request.NewErrParamRequired("GroupName")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupDesc sets the GroupDesc field's value. +func (s *CreateIpGroupInput) SetGroupDesc(v string) *CreateIpGroupInput { + s.GroupDesc = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *CreateIpGroupInput) SetGroupName(v string) *CreateIpGroupInput { + s.GroupName = &v + return s +} + +// SetUserRules sets the UserRules field's value. +func (s *CreateIpGroupInput) SetUserRules(v []*IpRuleItem) *CreateIpGroupInput { + s.UserRules = v + return s +} + +type CreateIpGroupOutput struct { + _ struct{} `type:"structure"` + + // The identifier of the group. + GroupId *string `type:"string"` +} + +// String returns the string representation +func (s CreateIpGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateIpGroupOutput) GoString() string { + return s.String() +} + +// SetGroupId sets the GroupId field's value. +func (s *CreateIpGroupOutput) SetGroupId(v string) *CreateIpGroupOutput { + s.GroupId = &v + return s +} + +type CreateTagsInput struct { + _ struct{} `type:"structure"` + + // The identifier of the WorkSpace. To find this ID, use DescribeWorkspaces. + // + // ResourceId is a required field + ResourceId *string `min:"1" type:"string" required:"true"` + + // The tags. Each WorkSpace can have a maximum of 50 tags. 
+ // + // Tags is a required field + Tags []*Tag `type:"list" required:"true"` +} + +// String returns the string representation +func (s CreateTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateTagsInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. +func (s *CreateTagsInput) SetResourceId(v string) *CreateTagsInput { + s.ResourceId = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateTagsInput) SetTags(v []*Tag) *CreateTagsInput { + s.Tags = v + return s +} + +type CreateTagsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CreateTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTagsOutput) GoString() string { + return s.String() +} + +type CreateWorkspacesInput struct { + _ struct{} `type:"structure"` + + // The WorkSpaces to create. You can specify up to 25 WorkSpaces. + // + // Workspaces is a required field + Workspaces []*WorkspaceRequest `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s CreateWorkspacesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateWorkspacesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateWorkspacesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateWorkspacesInput"} + if s.Workspaces == nil { + invalidParams.Add(request.NewErrParamRequired("Workspaces")) + } + if s.Workspaces != nil && len(s.Workspaces) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Workspaces", 1)) + } + if s.Workspaces != nil { + for i, v := range s.Workspaces { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Workspaces", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWorkspaces sets the Workspaces field's value. +func (s *CreateWorkspacesInput) SetWorkspaces(v []*WorkspaceRequest) *CreateWorkspacesInput { + s.Workspaces = v + return s +} + +type CreateWorkspacesOutput struct { + _ struct{} `type:"structure"` + + // Information about the WorkSpaces that could not be created. + FailedRequests []*FailedCreateWorkspaceRequest `type:"list"` + + // Information about the WorkSpaces that were created. + // + // Because this operation is asynchronous, the identifier returned is not immediately + // available for use with other operations. 
For example, if you call DescribeWorkspaces + // before the WorkSpace is created, the information returned can be incomplete. + PendingRequests []*Workspace `type:"list"` +} + +// String returns the string representation +func (s CreateWorkspacesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateWorkspacesOutput) GoString() string { + return s.String() +} + +// SetFailedRequests sets the FailedRequests field's value. +func (s *CreateWorkspacesOutput) SetFailedRequests(v []*FailedCreateWorkspaceRequest) *CreateWorkspacesOutput { + s.FailedRequests = v + return s +} + +// SetPendingRequests sets the PendingRequests field's value. +func (s *CreateWorkspacesOutput) SetPendingRequests(v []*Workspace) *CreateWorkspacesOutput { + s.PendingRequests = v + return s +} + +// Describes the default values used to create a WorkSpace. +type DefaultWorkspaceCreationProperties struct { + _ struct{} `type:"structure"` + + // The identifier of any security groups to apply to WorkSpaces when they are + // created. + CustomSecurityGroupId *string `type:"string"` + + // The organizational unit (OU) in the directory for the WorkSpace machine accounts. + DefaultOu *string `type:"string"` + + // The public IP address to attach to all WorkSpaces that are created or rebuilt. + EnableInternetAccess *bool `type:"boolean"` + + // Specifies whether the directory is enabled for Amazon WorkDocs. + EnableWorkDocs *bool `type:"boolean"` + + // Specifies whether the WorkSpace user is an administrator on the WorkSpace. + UserEnabledAsLocalAdministrator *bool `type:"boolean"` +} + +// String returns the string representation +func (s DefaultWorkspaceCreationProperties) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DefaultWorkspaceCreationProperties) GoString() string { + return s.String() +} + +// SetCustomSecurityGroupId sets the CustomSecurityGroupId field's value. +func (s *DefaultWorkspaceCreationProperties) SetCustomSecurityGroupId(v string) *DefaultWorkspaceCreationProperties { + s.CustomSecurityGroupId = &v + return s +} + +// SetDefaultOu sets the DefaultOu field's value. +func (s *DefaultWorkspaceCreationProperties) SetDefaultOu(v string) *DefaultWorkspaceCreationProperties { + s.DefaultOu = &v + return s +} + +// SetEnableInternetAccess sets the EnableInternetAccess field's value. +func (s *DefaultWorkspaceCreationProperties) SetEnableInternetAccess(v bool) *DefaultWorkspaceCreationProperties { + s.EnableInternetAccess = &v + return s +} + +// SetEnableWorkDocs sets the EnableWorkDocs field's value. +func (s *DefaultWorkspaceCreationProperties) SetEnableWorkDocs(v bool) *DefaultWorkspaceCreationProperties { + s.EnableWorkDocs = &v + return s +} + +// SetUserEnabledAsLocalAdministrator sets the UserEnabledAsLocalAdministrator field's value. +func (s *DefaultWorkspaceCreationProperties) SetUserEnabledAsLocalAdministrator(v bool) *DefaultWorkspaceCreationProperties { + s.UserEnabledAsLocalAdministrator = &v + return s +} + +type DeleteIpGroupInput struct { + _ struct{} `type:"structure"` + + // The identifier of the IP access control group. 
+ // + // GroupId is a required field + GroupId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteIpGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteIpGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteIpGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteIpGroupInput"} + if s.GroupId == nil { + invalidParams.Add(request.NewErrParamRequired("GroupId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupId sets the GroupId field's value. +func (s *DeleteIpGroupInput) SetGroupId(v string) *DeleteIpGroupInput { + s.GroupId = &v + return s +} + +type DeleteIpGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteIpGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteIpGroupOutput) GoString() string { + return s.String() +} + +type DeleteTagsInput struct { + _ struct{} `type:"structure"` + + // The identifier of the WorkSpace. To find this ID, use DescribeWorkspaces. + // + // ResourceId is a required field + ResourceId *string `min:"1" type:"string" required:"true"` + + // The tag keys. + // + // TagKeys is a required field + TagKeys []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s DeleteTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteTagsInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + if s.TagKeys == nil { + invalidParams.Add(request.NewErrParamRequired("TagKeys")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. +func (s *DeleteTagsInput) SetResourceId(v string) *DeleteTagsInput { + s.ResourceId = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *DeleteTagsInput) SetTagKeys(v []*string) *DeleteTagsInput { + s.TagKeys = v + return s +} + +type DeleteTagsOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTagsOutput) GoString() string { + return s.String() +} + +type DeleteWorkspaceImageInput struct { + _ struct{} `type:"structure"` + + // The identifier of the image. + // + // ImageId is a required field + ImageId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteWorkspaceImageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteWorkspaceImageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
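+//
+// Illustrative sketch only: Validate reports missing required fields before any
+// request is sent.
+//
+//    if err := (&DeleteWorkspaceImageInput{}).Validate(); err != nil {
+//        // ImageId is a required field, so the empty input is rejected here
+//    }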
+func (s *DeleteWorkspaceImageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteWorkspaceImageInput"} + if s.ImageId == nil { + invalidParams.Add(request.NewErrParamRequired("ImageId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetImageId sets the ImageId field's value. +func (s *DeleteWorkspaceImageInput) SetImageId(v string) *DeleteWorkspaceImageInput { + s.ImageId = &v + return s +} + +type DeleteWorkspaceImageOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteWorkspaceImageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteWorkspaceImageOutput) GoString() string { + return s.String() +} + +type DescribeAccountInput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DescribeAccountInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountInput) GoString() string { + return s.String() +} + +type DescribeAccountModificationsInput struct { + _ struct{} `type:"structure"` + + // If you received a NextToken from a previous call that was paginated, provide + // this token to receive the next set of results. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeAccountModificationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountModificationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeAccountModificationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeAccountModificationsInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeAccountModificationsInput) SetNextToken(v string) *DescribeAccountModificationsInput { + s.NextToken = &v + return s +} + +type DescribeAccountModificationsOutput struct { + _ struct{} `type:"structure"` + + // The list of modifications to the configuration of BYOL. + AccountModifications []*AccountModification `type:"list"` + + // The token to use to retrieve the next set of results, or null if no more + // results are available. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeAccountModificationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountModificationsOutput) GoString() string { + return s.String() +} + +// SetAccountModifications sets the AccountModifications field's value. +func (s *DescribeAccountModificationsOutput) SetAccountModifications(v []*AccountModification) *DescribeAccountModificationsOutput { + s.AccountModifications = v + return s +} + +// SetNextToken sets the NextToken field's value. 
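+//
+// Illustrative pagination sketch only (not generated code): it assumes a
+// *WorkSpaces client named svc.
+//
+//    input := &DescribeAccountModificationsInput{}
+//    for {
+//        out, err := svc.DescribeAccountModifications(input)
+//        if err != nil {
+//            break // handle the error
+//        }
+//        // inspect out.AccountModifications here
+//        if out.NextToken == nil {
+//            break
+//        }
+//        input.SetNextToken(aws.StringValue(out.NextToken))
+//    }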
+func (s *DescribeAccountModificationsOutput) SetNextToken(v string) *DescribeAccountModificationsOutput { + s.NextToken = &v + return s +} + +type DescribeAccountOutput struct { + _ struct{} `type:"structure"` + + // The IP address range, specified as an IPv4 CIDR block, used for the management + // network interface. + // + // The management network interface is connected to a secure Amazon WorkSpaces + // management network. It is used for interactive streaming of the WorkSpace + // desktop to Amazon WorkSpaces clients, and to allow Amazon WorkSpaces to manage + // the WorkSpace. + DedicatedTenancyManagementCidrRange *string `type:"string"` + + // The status of BYOL (whether BYOL is enabled or disabled). + DedicatedTenancySupport *string `type:"string" enum:"DedicatedTenancySupportResultEnum"` +} + +// String returns the string representation +func (s DescribeAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAccountOutput) GoString() string { + return s.String() +} + +// SetDedicatedTenancyManagementCidrRange sets the DedicatedTenancyManagementCidrRange field's value. +func (s *DescribeAccountOutput) SetDedicatedTenancyManagementCidrRange(v string) *DescribeAccountOutput { + s.DedicatedTenancyManagementCidrRange = &v + return s +} + +// SetDedicatedTenancySupport sets the DedicatedTenancySupport field's value. +func (s *DescribeAccountOutput) SetDedicatedTenancySupport(v string) *DescribeAccountOutput { + s.DedicatedTenancySupport = &v + return s +} + +type DescribeClientPropertiesInput struct { + _ struct{} `type:"structure"` + + // The resource identifiers, in the form of directory IDs. + // + // ResourceIds is a required field + ResourceIds []*string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s DescribeClientPropertiesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeClientPropertiesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeClientPropertiesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeClientPropertiesInput"} + if s.ResourceIds == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceIds")) + } + if s.ResourceIds != nil && len(s.ResourceIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceIds", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceIds sets the ResourceIds field's value. +func (s *DescribeClientPropertiesInput) SetResourceIds(v []*string) *DescribeClientPropertiesInput { + s.ResourceIds = v + return s +} + +type DescribeClientPropertiesOutput struct { + _ struct{} `type:"structure"` + + // Information about the specified Amazon WorkSpaces clients. + ClientPropertiesList []*ClientPropertiesResult `type:"list"` +} + +// String returns the string representation +func (s DescribeClientPropertiesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeClientPropertiesOutput) GoString() string { + return s.String() +} + +// SetClientPropertiesList sets the ClientPropertiesList field's value. 
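+//
+// Illustrative sketch only (not generated code): it assumes a *WorkSpaces client
+// named svc; the directory ID is a placeholder.
+//
+//    out, err := svc.DescribeClientProperties(&DescribeClientPropertiesInput{
+//        ResourceIds: []*string{aws.String("d-0123456789")},
+//    })
+//    if err == nil {
+//        for _, r := range out.ClientPropertiesList {
+//            if r.ClientProperties != nil {
+//                fmt.Println(aws.StringValue(r.ResourceId), aws.StringValue(r.ClientProperties.ReconnectEnabled))
+//            }
+//        }
+//    }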
+func (s *DescribeClientPropertiesOutput) SetClientPropertiesList(v []*ClientPropertiesResult) *DescribeClientPropertiesOutput { + s.ClientPropertiesList = v + return s +} + +type DescribeIpGroupsInput struct { + _ struct{} `type:"structure"` + + // The identifiers of one or more IP access control groups. + GroupIds []*string `type:"list"` + + // The maximum number of items to return. + MaxResults *int64 `min:"1" type:"integer"` + + // If you received a NextToken from a previous call that was paginated, provide + // this token to receive the next set of results. + NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeIpGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeIpGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeIpGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeIpGroupsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupIds sets the GroupIds field's value. +func (s *DescribeIpGroupsInput) SetGroupIds(v []*string) *DescribeIpGroupsInput { + s.GroupIds = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeIpGroupsInput) SetMaxResults(v int64) *DescribeIpGroupsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeIpGroupsInput) SetNextToken(v string) *DescribeIpGroupsInput { + s.NextToken = &v + return s +} + +type DescribeIpGroupsOutput struct { + _ struct{} `type:"structure"` + + // The token to use to retrieve the next set of results, or null if no more + // results are available. + NextToken *string `min:"1" type:"string"` + + // Information about the IP access control groups. + Result []*IpGroup `type:"list"` +} + +// String returns the string representation +func (s DescribeIpGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeIpGroupsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeIpGroupsOutput) SetNextToken(v string) *DescribeIpGroupsOutput { + s.NextToken = &v + return s +} + +// SetResult sets the Result field's value. +func (s *DescribeIpGroupsOutput) SetResult(v []*IpGroup) *DescribeIpGroupsOutput { + s.Result = v + return s +} + +type DescribeTagsInput struct { + _ struct{} `type:"structure"` + + // The identifier of the WorkSpace. To find this ID, use DescribeWorkspaces. + // + // ResourceId is a required field + ResourceId *string `min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeTagsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTagsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeTagsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeTagsInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetResourceId sets the ResourceId field's value. +func (s *DescribeTagsInput) SetResourceId(v string) *DescribeTagsInput { + s.ResourceId = &v + return s +} + +type DescribeTagsOutput struct { + _ struct{} `type:"structure"` + + // The tags. + TagList []*Tag `type:"list"` +} + +// String returns the string representation +func (s DescribeTagsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTagsOutput) GoString() string { + return s.String() +} + +// SetTagList sets the TagList field's value. +func (s *DescribeTagsOutput) SetTagList(v []*Tag) *DescribeTagsOutput { + s.TagList = v + return s +} + +type DescribeWorkspaceBundlesInput struct { + _ struct{} `type:"structure"` + + // The identifiers of the bundles. You cannot combine this parameter with any + // other filter. + BundleIds []*string `min:"1" type:"list"` + + // The token for the next set of results. (You received this token from a previous + // call.) + NextToken *string `min:"1" type:"string"` + + // The owner of the bundles. You cannot combine this parameter with any other + // filter. + // + // Specify AMAZON to describe the bundles provided by AWS or null to describe + // the bundles that belong to your account. + Owner *string `type:"string"` +} + +// String returns the string representation +func (s DescribeWorkspaceBundlesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeWorkspaceBundlesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeWorkspaceBundlesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspaceBundlesInput"} + if s.BundleIds != nil && len(s.BundleIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BundleIds", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBundleIds sets the BundleIds field's value. +func (s *DescribeWorkspaceBundlesInput) SetBundleIds(v []*string) *DescribeWorkspaceBundlesInput { + s.BundleIds = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspaceBundlesInput) SetNextToken(v string) *DescribeWorkspaceBundlesInput { + s.NextToken = &v + return s +} + +// SetOwner sets the Owner field's value. +func (s *DescribeWorkspaceBundlesInput) SetOwner(v string) *DescribeWorkspaceBundlesInput { + s.Owner = &v + return s +} + +type DescribeWorkspaceBundlesOutput struct { + _ struct{} `type:"structure"` + + // Information about the bundles. + Bundles []*WorkspaceBundle `type:"list"` + + // The token to use to retrieve the next set of results, or null if there are + // no more results available. This token is valid for one day and must be used + // within that time frame. 
+ NextToken *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeWorkspaceBundlesOutput) String() string { + return awsutil.Prettify(s) } // GoString returns the string representation -func (s ComputeType) GoString() string { +func (s DescribeWorkspaceBundlesOutput) GoString() string { return s.String() } -// SetName sets the Name field's value. -func (s *ComputeType) SetName(v string) *ComputeType { - s.Name = &v +// SetBundles sets the Bundles field's value. +func (s *DescribeWorkspaceBundlesOutput) SetBundles(v []*WorkspaceBundle) *DescribeWorkspaceBundlesOutput { + s.Bundles = v return s } -type CreateTagsInput struct { +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspaceBundlesOutput) SetNextToken(v string) *DescribeWorkspaceBundlesOutput { + s.NextToken = &v + return s +} + +type DescribeWorkspaceDirectoriesInput struct { _ struct{} `type:"structure"` - // The ID of the resource. - // - // ResourceId is a required field - ResourceId *string `min:"1" type:"string" required:"true"` + // The identifiers of the directories. If the value is null, all directories + // are retrieved. + DirectoryIds []*string `min:"1" type:"list"` - // The tags. Each resource can have a maximum of 50 tags. - // - // Tags is a required field - Tags []*Tag `type:"list" required:"true"` + // If you received a NextToken from a previous call that was paginated, provide + // this token to receive the next set of results. + NextToken *string `min:"1" type:"string"` } // String returns the string representation -func (s CreateTagsInput) String() string { +func (s DescribeWorkspaceDirectoriesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateTagsInput) GoString() string { +func (s DescribeWorkspaceDirectoriesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *CreateTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateTagsInput"} - if s.ResourceId == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceId")) - } - if s.ResourceId != nil && len(*s.ResourceId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) - } - if s.Tags == nil { - invalidParams.Add(request.NewErrParamRequired("Tags")) +func (s *DescribeWorkspaceDirectoriesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspaceDirectoriesInput"} + if s.DirectoryIds != nil && len(s.DirectoryIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DirectoryIds", 1)) } - if s.Tags != nil { - for i, v := range s.Tags { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) - } - } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) } if invalidParams.Len() > 0 { @@ -1408,69 +4163,86 @@ func (s *CreateTagsInput) Validate() error { return nil } -// SetResourceId sets the ResourceId field's value. -func (s *CreateTagsInput) SetResourceId(v string) *CreateTagsInput { - s.ResourceId = &v +// SetDirectoryIds sets the DirectoryIds field's value. +func (s *DescribeWorkspaceDirectoriesInput) SetDirectoryIds(v []*string) *DescribeWorkspaceDirectoriesInput { + s.DirectoryIds = v return s } -// SetTags sets the Tags field's value. 
-func (s *CreateTagsInput) SetTags(v []*Tag) *CreateTagsInput { - s.Tags = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspaceDirectoriesInput) SetNextToken(v string) *DescribeWorkspaceDirectoriesInput { + s.NextToken = &v return s } -type CreateTagsOutput struct { +type DescribeWorkspaceDirectoriesOutput struct { _ struct{} `type:"structure"` + + // Information about the directories. + Directories []*WorkspaceDirectory `type:"list"` + + // The token to use to retrieve the next set of results, or null if no more + // results are available. + NextToken *string `min:"1" type:"string"` } // String returns the string representation -func (s CreateTagsOutput) String() string { +func (s DescribeWorkspaceDirectoriesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateTagsOutput) GoString() string { +func (s DescribeWorkspaceDirectoriesOutput) GoString() string { return s.String() } -type CreateWorkspacesInput struct { +// SetDirectories sets the Directories field's value. +func (s *DescribeWorkspaceDirectoriesOutput) SetDirectories(v []*WorkspaceDirectory) *DescribeWorkspaceDirectoriesOutput { + s.Directories = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspaceDirectoriesOutput) SetNextToken(v string) *DescribeWorkspaceDirectoriesOutput { + s.NextToken = &v + return s +} + +type DescribeWorkspaceImagesInput struct { _ struct{} `type:"structure"` - // Information about the WorkSpaces to create. - // - // Workspaces is a required field - Workspaces []*WorkspaceRequest `min:"1" type:"list" required:"true"` + // The identifier of the image. + ImageIds []*string `min:"1" type:"list"` + + // The maximum number of items to return. + MaxResults *int64 `min:"1" type:"integer"` + + // If you received a NextToken from a previous call that was paginated, provide + // this token to receive the next set of results. + NextToken *string `min:"1" type:"string"` } // String returns the string representation -func (s CreateWorkspacesInput) String() string { +func (s DescribeWorkspaceImagesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateWorkspacesInput) GoString() string { +func (s DescribeWorkspaceImagesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *CreateWorkspacesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "CreateWorkspacesInput"} - if s.Workspaces == nil { - invalidParams.Add(request.NewErrParamRequired("Workspaces")) +func (s *DescribeWorkspaceImagesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspaceImagesInput"} + if s.ImageIds != nil && len(s.ImageIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ImageIds", 1)) } - if s.Workspaces != nil && len(s.Workspaces) < 1 { - invalidParams.Add(request.NewErrParamMinLen("Workspaces", 1)) + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } - if s.Workspaces != nil { - for i, v := range s.Workspaces { - if v == nil { - continue - } - if err := v.Validate(); err != nil { - invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Workspaces", i), err.(request.ErrInvalidParams)) - } - } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) } if invalidParams.Len() > 0 { @@ -1479,144 +4251,194 @@ func (s *CreateWorkspacesInput) Validate() error { return nil } -// SetWorkspaces sets the Workspaces field's value. -func (s *CreateWorkspacesInput) SetWorkspaces(v []*WorkspaceRequest) *CreateWorkspacesInput { - s.Workspaces = v +// SetImageIds sets the ImageIds field's value. +func (s *DescribeWorkspaceImagesInput) SetImageIds(v []*string) *DescribeWorkspaceImagesInput { + s.ImageIds = v return s } -type CreateWorkspacesOutput struct { +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeWorkspaceImagesInput) SetMaxResults(v int64) *DescribeWorkspaceImagesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspaceImagesInput) SetNextToken(v string) *DescribeWorkspaceImagesInput { + s.NextToken = &v + return s +} + +type DescribeWorkspaceImagesOutput struct { _ struct{} `type:"structure"` - // Information about the WorkSpaces that could not be created. - FailedRequests []*FailedCreateWorkspaceRequest `type:"list"` + // Information about the images. + Images []*WorkspaceImage `type:"list"` - // Information about the WorkSpaces that were created. - // - // Because this operation is asynchronous, the identifier returned is not immediately - // available for use with other operations. For example, if you call DescribeWorkspaces - // before the WorkSpace is created, the information returned can be incomplete. - PendingRequests []*Workspace `type:"list"` + // The token to use to retrieve the next set of results, or null if no more + // results are available. + NextToken *string `min:"1" type:"string"` } // String returns the string representation -func (s CreateWorkspacesOutput) String() string { +func (s DescribeWorkspaceImagesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s CreateWorkspacesOutput) GoString() string { +func (s DescribeWorkspaceImagesOutput) GoString() string { return s.String() } -// SetFailedRequests sets the FailedRequests field's value. -func (s *CreateWorkspacesOutput) SetFailedRequests(v []*FailedCreateWorkspaceRequest) *CreateWorkspacesOutput { - s.FailedRequests = v +// SetImages sets the Images field's value. +func (s *DescribeWorkspaceImagesOutput) SetImages(v []*WorkspaceImage) *DescribeWorkspaceImagesOutput { + s.Images = v return s } -// SetPendingRequests sets the PendingRequests field's value. 
-func (s *CreateWorkspacesOutput) SetPendingRequests(v []*Workspace) *CreateWorkspacesOutput { - s.PendingRequests = v +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspaceImagesOutput) SetNextToken(v string) *DescribeWorkspaceImagesOutput { + s.NextToken = &v return s } -// Information about defaults used to create a WorkSpace. -type DefaultWorkspaceCreationProperties struct { +type DescribeWorkspacesConnectionStatusInput struct { _ struct{} `type:"structure"` - // The identifier of any security groups to apply to WorkSpaces when they are - // created. - CustomSecurityGroupId *string `type:"string"` - - // The organizational unit (OU) in the directory for the WorkSpace machine accounts. - DefaultOu *string `type:"string"` - - // The public IP address to attach to all WorkSpaces that are created or rebuilt. - EnableInternetAccess *bool `type:"boolean"` - - // Indicates whether the directory is enabled for Amazon WorkDocs. - EnableWorkDocs *bool `type:"boolean"` + // If you received a NextToken from a previous call that was paginated, provide + // this token to receive the next set of results. + NextToken *string `min:"1" type:"string"` - // Indicates whether the WorkSpace user is an administrator on the WorkSpace. - UserEnabledAsLocalAdministrator *bool `type:"boolean"` + // The identifiers of the WorkSpaces. You can specify up to 25 WorkSpaces. + WorkspaceIds []*string `min:"1" type:"list"` } // String returns the string representation -func (s DefaultWorkspaceCreationProperties) String() string { +func (s DescribeWorkspacesConnectionStatusInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DefaultWorkspaceCreationProperties) GoString() string { +func (s DescribeWorkspacesConnectionStatusInput) GoString() string { return s.String() } -// SetCustomSecurityGroupId sets the CustomSecurityGroupId field's value. -func (s *DefaultWorkspaceCreationProperties) SetCustomSecurityGroupId(v string) *DefaultWorkspaceCreationProperties { - s.CustomSecurityGroupId = &v - return s +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeWorkspacesConnectionStatusInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspacesConnectionStatusInput"} + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.WorkspaceIds != nil && len(s.WorkspaceIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("WorkspaceIds", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil } -// SetDefaultOu sets the DefaultOu field's value. -func (s *DefaultWorkspaceCreationProperties) SetDefaultOu(v string) *DefaultWorkspaceCreationProperties { - s.DefaultOu = &v +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspacesConnectionStatusInput) SetNextToken(v string) *DescribeWorkspacesConnectionStatusInput { + s.NextToken = &v return s } -// SetEnableInternetAccess sets the EnableInternetAccess field's value. -func (s *DefaultWorkspaceCreationProperties) SetEnableInternetAccess(v bool) *DefaultWorkspaceCreationProperties { - s.EnableInternetAccess = &v +// SetWorkspaceIds sets the WorkspaceIds field's value. +func (s *DescribeWorkspacesConnectionStatusInput) SetWorkspaceIds(v []*string) *DescribeWorkspacesConnectionStatusInput { + s.WorkspaceIds = v return s } -// SetEnableWorkDocs sets the EnableWorkDocs field's value. 
-func (s *DefaultWorkspaceCreationProperties) SetEnableWorkDocs(v bool) *DefaultWorkspaceCreationProperties { - s.EnableWorkDocs = &v +type DescribeWorkspacesConnectionStatusOutput struct { + _ struct{} `type:"structure"` + + // The token to use to retrieve the next set of results, or null if no more + // results are available. + NextToken *string `min:"1" type:"string"` + + // Information about the connection status of the WorkSpace. + WorkspacesConnectionStatus []*WorkspaceConnectionStatus `type:"list"` +} + +// String returns the string representation +func (s DescribeWorkspacesConnectionStatusOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeWorkspacesConnectionStatusOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspacesConnectionStatusOutput) SetNextToken(v string) *DescribeWorkspacesConnectionStatusOutput { + s.NextToken = &v return s } -// SetUserEnabledAsLocalAdministrator sets the UserEnabledAsLocalAdministrator field's value. -func (s *DefaultWorkspaceCreationProperties) SetUserEnabledAsLocalAdministrator(v bool) *DefaultWorkspaceCreationProperties { - s.UserEnabledAsLocalAdministrator = &v +// SetWorkspacesConnectionStatus sets the WorkspacesConnectionStatus field's value. +func (s *DescribeWorkspacesConnectionStatusOutput) SetWorkspacesConnectionStatus(v []*WorkspaceConnectionStatus) *DescribeWorkspacesConnectionStatusOutput { + s.WorkspacesConnectionStatus = v return s } -type DeleteTagsInput struct { - _ struct{} `type:"structure"` +type DescribeWorkspacesInput struct { + _ struct{} `type:"structure"` + + // The identifier of the bundle. All WorkSpaces that are created from this bundle + // are retrieved. You cannot combine this parameter with any other filter. + BundleId *string `type:"string"` + + // The identifier of the directory. In addition, you can optionally specify + // a specific directory user (see UserName). You cannot combine this parameter + // with any other filter. + DirectoryId *string `type:"string"` + + // The maximum number of items to return. + Limit *int64 `min:"1" type:"integer"` - // The ID of the resource. - // - // ResourceId is a required field - ResourceId *string `min:"1" type:"string" required:"true"` + // If you received a NextToken from a previous call that was paginated, provide + // this token to receive the next set of results. + NextToken *string `min:"1" type:"string"` - // The tag keys. + // The name of the directory user. You must specify this parameter with DirectoryId. + UserName *string `min:"1" type:"string"` + + // The identifiers of the WorkSpaces. You cannot combine this parameter with + // any other filter. // - // TagKeys is a required field - TagKeys []*string `type:"list" required:"true"` + // Because the CreateWorkspaces operation is asynchronous, the identifier it + // returns is not immediately available. If you immediately call DescribeWorkspaces + // with this identifier, no information is returned. + WorkspaceIds []*string `min:"1" type:"list"` } // String returns the string representation -func (s DeleteTagsInput) String() string { +func (s DescribeWorkspacesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteTagsInput) GoString() string { +func (s DescribeWorkspacesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DeleteTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DeleteTagsInput"} - if s.ResourceId == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceId")) +func (s *DescribeWorkspacesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspacesInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) } - if s.ResourceId != nil && len(*s.ResourceId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) } - if s.TagKeys == nil { - invalidParams.Add(request.NewErrParamRequired("TagKeys")) + if s.UserName != nil && len(*s.UserName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) + } + if s.WorkspaceIds != nil && len(s.WorkspaceIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("WorkspaceIds", 1)) } if invalidParams.Len() > 0 { @@ -1625,59 +4447,110 @@ func (s *DeleteTagsInput) Validate() error { return nil } -// SetResourceId sets the ResourceId field's value. -func (s *DeleteTagsInput) SetResourceId(v string) *DeleteTagsInput { - s.ResourceId = &v +// SetBundleId sets the BundleId field's value. +func (s *DescribeWorkspacesInput) SetBundleId(v string) *DescribeWorkspacesInput { + s.BundleId = &v return s } -// SetTagKeys sets the TagKeys field's value. -func (s *DeleteTagsInput) SetTagKeys(v []*string) *DeleteTagsInput { - s.TagKeys = v +// SetDirectoryId sets the DirectoryId field's value. +func (s *DescribeWorkspacesInput) SetDirectoryId(v string) *DescribeWorkspacesInput { + s.DirectoryId = &v return s } -type DeleteTagsOutput struct { +// SetLimit sets the Limit field's value. +func (s *DescribeWorkspacesInput) SetLimit(v int64) *DescribeWorkspacesInput { + s.Limit = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspacesInput) SetNextToken(v string) *DescribeWorkspacesInput { + s.NextToken = &v + return s +} + +// SetUserName sets the UserName field's value. +func (s *DescribeWorkspacesInput) SetUserName(v string) *DescribeWorkspacesInput { + s.UserName = &v + return s +} + +// SetWorkspaceIds sets the WorkspaceIds field's value. +func (s *DescribeWorkspacesInput) SetWorkspaceIds(v []*string) *DescribeWorkspacesInput { + s.WorkspaceIds = v + return s +} + +type DescribeWorkspacesOutput struct { _ struct{} `type:"structure"` + + // The token to use to retrieve the next set of results, or null if no more + // results are available. + NextToken *string `min:"1" type:"string"` + + // Information about the WorkSpaces. + // + // Because CreateWorkspaces is an asynchronous operation, some of the returned + // information could be incomplete. + Workspaces []*Workspace `type:"list"` } // String returns the string representation -func (s DeleteTagsOutput) String() string { +func (s DescribeWorkspacesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DeleteTagsOutput) GoString() string { +func (s DescribeWorkspacesOutput) GoString() string { return s.String() } -type DescribeTagsInput struct { +// SetNextToken sets the NextToken field's value. +func (s *DescribeWorkspacesOutput) SetNextToken(v string) *DescribeWorkspacesOutput { + s.NextToken = &v + return s +} + +// SetWorkspaces sets the Workspaces field's value. 
+func (s *DescribeWorkspacesOutput) SetWorkspaces(v []*Workspace) *DescribeWorkspacesOutput { + s.Workspaces = v + return s +} + +type DisassociateIpGroupsInput struct { _ struct{} `type:"structure"` - // The ID of the resource. + // The identifier of the directory. // - // ResourceId is a required field - ResourceId *string `min:"1" type:"string" required:"true"` + // DirectoryId is a required field + DirectoryId *string `type:"string" required:"true"` + + // The identifiers of one or more IP access control groups. + // + // GroupIds is a required field + GroupIds []*string `type:"list" required:"true"` } // String returns the string representation -func (s DescribeTagsInput) String() string { +func (s DisassociateIpGroupsInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeTagsInput) GoString() string { +func (s DisassociateIpGroupsInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeTagsInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeTagsInput"} - if s.ResourceId == nil { - invalidParams.Add(request.NewErrParamRequired("ResourceId")) +func (s *DisassociateIpGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisassociateIpGroupsInput"} + if s.DirectoryId == nil { + invalidParams.Add(request.NewErrParamRequired("DirectoryId")) } - if s.ResourceId != nil && len(*s.ResourceId) < 1 { - invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + if s.GroupIds == nil { + invalidParams.Add(request.NewErrParamRequired("GroupIds")) } if invalidParams.Len() > 0 { @@ -1686,162 +4559,174 @@ func (s *DescribeTagsInput) Validate() error { return nil } -// SetResourceId sets the ResourceId field's value. -func (s *DescribeTagsInput) SetResourceId(v string) *DescribeTagsInput { - s.ResourceId = &v +// SetDirectoryId sets the DirectoryId field's value. +func (s *DisassociateIpGroupsInput) SetDirectoryId(v string) *DisassociateIpGroupsInput { + s.DirectoryId = &v return s } -type DescribeTagsOutput struct { - _ struct{} `type:"structure"` +// SetGroupIds sets the GroupIds field's value. +func (s *DisassociateIpGroupsInput) SetGroupIds(v []*string) *DisassociateIpGroupsInput { + s.GroupIds = v + return s +} - // The tags. - TagList []*Tag `type:"list"` +type DisassociateIpGroupsOutput struct { + _ struct{} `type:"structure"` } // String returns the string representation -func (s DescribeTagsOutput) String() string { +func (s DisassociateIpGroupsOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeTagsOutput) GoString() string { +func (s DisassociateIpGroupsOutput) GoString() string { return s.String() } -// SetTagList sets the TagList field's value. -func (s *DescribeTagsOutput) SetTagList(v []*Tag) *DescribeTagsOutput { - s.TagList = v - return s -} - -type DescribeWorkspaceBundlesInput struct { +// Describes a WorkSpace that cannot be created. +type FailedCreateWorkspaceRequest struct { _ struct{} `type:"structure"` - // The IDs of the bundles. This parameter cannot be combined with any other - // filter. - BundleIds []*string `min:"1" type:"list"` + // The error code that is returned if the WorkSpace cannot be created. + ErrorCode *string `type:"string"` - // The token for the next set of results. (You received this token from a previous - // call.) 
- NextToken *string `min:"1" type:"string"` + // The text of the error message that is returned if the WorkSpace cannot be + // created. + ErrorMessage *string `type:"string"` - // The owner of the bundles. This parameter cannot be combined with any other - // filter. - // - // Specify AMAZON to describe the bundles provided by AWS or null to describe - // the bundles that belong to your account. - Owner *string `type:"string"` + // Information about the WorkSpace. + WorkspaceRequest *WorkspaceRequest `type:"structure"` } // String returns the string representation -func (s DescribeWorkspaceBundlesInput) String() string { +func (s FailedCreateWorkspaceRequest) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeWorkspaceBundlesInput) GoString() string { +func (s FailedCreateWorkspaceRequest) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeWorkspaceBundlesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspaceBundlesInput"} - if s.BundleIds != nil && len(s.BundleIds) < 1 { - invalidParams.Add(request.NewErrParamMinLen("BundleIds", 1)) - } - if s.NextToken != nil && len(*s.NextToken) < 1 { - invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) - } - - if invalidParams.Len() > 0 { - return invalidParams - } - return nil -} - -// SetBundleIds sets the BundleIds field's value. -func (s *DescribeWorkspaceBundlesInput) SetBundleIds(v []*string) *DescribeWorkspaceBundlesInput { - s.BundleIds = v +// SetErrorCode sets the ErrorCode field's value. +func (s *FailedCreateWorkspaceRequest) SetErrorCode(v string) *FailedCreateWorkspaceRequest { + s.ErrorCode = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeWorkspaceBundlesInput) SetNextToken(v string) *DescribeWorkspaceBundlesInput { - s.NextToken = &v +// SetErrorMessage sets the ErrorMessage field's value. +func (s *FailedCreateWorkspaceRequest) SetErrorMessage(v string) *FailedCreateWorkspaceRequest { + s.ErrorMessage = &v return s } -// SetOwner sets the Owner field's value. -func (s *DescribeWorkspaceBundlesInput) SetOwner(v string) *DescribeWorkspaceBundlesInput { - s.Owner = &v +// SetWorkspaceRequest sets the WorkspaceRequest field's value. +func (s *FailedCreateWorkspaceRequest) SetWorkspaceRequest(v *WorkspaceRequest) *FailedCreateWorkspaceRequest { + s.WorkspaceRequest = v return s } -type DescribeWorkspaceBundlesOutput struct { +// Describes a WorkSpace that could not be rebooted. (RebootWorkspaces), rebuilt +// (RebuildWorkspaces), terminated (TerminateWorkspaces), started (StartWorkspaces), +// or stopped (StopWorkspaces). +type FailedWorkspaceChangeRequest struct { _ struct{} `type:"structure"` - // Information about the bundles. - Bundles []*WorkspaceBundle `type:"list"` + // The error code that is returned if the WorkSpace cannot be rebooted. + ErrorCode *string `type:"string"` - // The token to use to retrieve the next set of results, or null if there are - // no more results available. This token is valid for one day and must be used - // within that time frame. - NextToken *string `min:"1" type:"string"` + // The text of the error message that is returned if the WorkSpace cannot be + // rebooted. + ErrorMessage *string `type:"string"` + + // The identifier of the WorkSpace. 
+ WorkspaceId *string `type:"string"` } // String returns the string representation -func (s DescribeWorkspaceBundlesOutput) String() string { +func (s FailedWorkspaceChangeRequest) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeWorkspaceBundlesOutput) GoString() string { +func (s FailedWorkspaceChangeRequest) GoString() string { return s.String() } -// SetBundles sets the Bundles field's value. -func (s *DescribeWorkspaceBundlesOutput) SetBundles(v []*WorkspaceBundle) *DescribeWorkspaceBundlesOutput { - s.Bundles = v +// SetErrorCode sets the ErrorCode field's value. +func (s *FailedWorkspaceChangeRequest) SetErrorCode(v string) *FailedWorkspaceChangeRequest { + s.ErrorCode = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeWorkspaceBundlesOutput) SetNextToken(v string) *DescribeWorkspaceBundlesOutput { - s.NextToken = &v +// SetErrorMessage sets the ErrorMessage field's value. +func (s *FailedWorkspaceChangeRequest) SetErrorMessage(v string) *FailedWorkspaceChangeRequest { + s.ErrorMessage = &v return s } -type DescribeWorkspaceDirectoriesInput struct { +// SetWorkspaceId sets the WorkspaceId field's value. +func (s *FailedWorkspaceChangeRequest) SetWorkspaceId(v string) *FailedWorkspaceChangeRequest { + s.WorkspaceId = &v + return s +} + +type ImportWorkspaceImageInput struct { _ struct{} `type:"structure"` - // The identifiers of the directories. If the value is null, all directories - // are retrieved. - DirectoryIds []*string `min:"1" type:"list"` + // The identifier of the EC2 image. + // + // Ec2ImageId is a required field + Ec2ImageId *string `type:"string" required:"true"` - // The token for the next set of results. (You received this token from a previous - // call.) - NextToken *string `min:"1" type:"string"` + // The description of the WorkSpace image. + // + // ImageDescription is a required field + ImageDescription *string `min:"1" type:"string" required:"true"` + + // The name of the WorkSpace image. + // + // ImageName is a required field + ImageName *string `min:"1" type:"string" required:"true"` + + // The ingestion process to be used when importing the image. + // + // IngestionProcess is a required field + IngestionProcess *string `type:"string" required:"true" enum:"WorkspaceImageIngestionProcess"` } // String returns the string representation -func (s DescribeWorkspaceDirectoriesInput) String() string { +func (s ImportWorkspaceImageInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeWorkspaceDirectoriesInput) GoString() string { +func (s ImportWorkspaceImageInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. 
-func (s *DescribeWorkspaceDirectoriesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspaceDirectoriesInput"} - if s.DirectoryIds != nil && len(s.DirectoryIds) < 1 { - invalidParams.Add(request.NewErrParamMinLen("DirectoryIds", 1)) +func (s *ImportWorkspaceImageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ImportWorkspaceImageInput"} + if s.Ec2ImageId == nil { + invalidParams.Add(request.NewErrParamRequired("Ec2ImageId")) } - if s.NextToken != nil && len(*s.NextToken) < 1 { - invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + if s.ImageDescription == nil { + invalidParams.Add(request.NewErrParamRequired("ImageDescription")) + } + if s.ImageDescription != nil && len(*s.ImageDescription) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ImageDescription", 1)) + } + if s.ImageName == nil { + invalidParams.Add(request.NewErrParamRequired("ImageName")) + } + if s.ImageName != nil && len(*s.ImageName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ImageName", 1)) + } + if s.IngestionProcess == nil { + invalidParams.Add(request.NewErrParamRequired("IngestionProcess")) } if invalidParams.Len() > 0 { @@ -1850,190 +4735,177 @@ func (s *DescribeWorkspaceDirectoriesInput) Validate() error { return nil } -// SetDirectoryIds sets the DirectoryIds field's value. -func (s *DescribeWorkspaceDirectoriesInput) SetDirectoryIds(v []*string) *DescribeWorkspaceDirectoriesInput { - s.DirectoryIds = v +// SetEc2ImageId sets the Ec2ImageId field's value. +func (s *ImportWorkspaceImageInput) SetEc2ImageId(v string) *ImportWorkspaceImageInput { + s.Ec2ImageId = &v return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeWorkspaceDirectoriesInput) SetNextToken(v string) *DescribeWorkspaceDirectoriesInput { - s.NextToken = &v +// SetImageDescription sets the ImageDescription field's value. +func (s *ImportWorkspaceImageInput) SetImageDescription(v string) *ImportWorkspaceImageInput { + s.ImageDescription = &v return s } -type DescribeWorkspaceDirectoriesOutput struct { - _ struct{} `type:"structure"` +// SetImageName sets the ImageName field's value. +func (s *ImportWorkspaceImageInput) SetImageName(v string) *ImportWorkspaceImageInput { + s.ImageName = &v + return s +} - // Information about the directories. - Directories []*WorkspaceDirectory `type:"list"` +// SetIngestionProcess sets the IngestionProcess field's value. +func (s *ImportWorkspaceImageInput) SetIngestionProcess(v string) *ImportWorkspaceImageInput { + s.IngestionProcess = &v + return s +} - // The token to use to retrieve the next set of results, or null if there are - // no more results available. This token is valid for one day and must be used - // within that time frame. - NextToken *string `min:"1" type:"string"` +type ImportWorkspaceImageOutput struct { + _ struct{} `type:"structure"` + + // The identifier of the WorkSpace image. + ImageId *string `type:"string"` } // String returns the string representation -func (s DescribeWorkspaceDirectoriesOutput) String() string { +func (s ImportWorkspaceImageOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeWorkspaceDirectoriesOutput) GoString() string { +func (s ImportWorkspaceImageOutput) GoString() string { return s.String() } -// SetDirectories sets the Directories field's value. 
-func (s *DescribeWorkspaceDirectoriesOutput) SetDirectories(v []*WorkspaceDirectory) *DescribeWorkspaceDirectoriesOutput { - s.Directories = v - return s -} - -// SetNextToken sets the NextToken field's value. -func (s *DescribeWorkspaceDirectoriesOutput) SetNextToken(v string) *DescribeWorkspaceDirectoriesOutput { - s.NextToken = &v +// SetImageId sets the ImageId field's value. +func (s *ImportWorkspaceImageOutput) SetImageId(v string) *ImportWorkspaceImageOutput { + s.ImageId = &v return s } -type DescribeWorkspacesConnectionStatusInput struct { +// Describes an IP access control group. +type IpGroup struct { _ struct{} `type:"structure"` - // The token for the next set of results. (You received this token from a previous - // call.) - NextToken *string `min:"1" type:"string"` + // The description of the group. + GroupDesc *string `locationName:"groupDesc" type:"string"` - // The identifiers of the WorkSpaces. - WorkspaceIds []*string `min:"1" type:"list"` + // The identifier of the group. + GroupId *string `locationName:"groupId" type:"string"` + + // The name of the group. + GroupName *string `locationName:"groupName" type:"string"` + + // The rules. + UserRules []*IpRuleItem `locationName:"userRules" type:"list"` } // String returns the string representation -func (s DescribeWorkspacesConnectionStatusInput) String() string { +func (s IpGroup) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeWorkspacesConnectionStatusInput) GoString() string { +func (s IpGroup) GoString() string { return s.String() } -// Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeWorkspacesConnectionStatusInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspacesConnectionStatusInput"} - if s.NextToken != nil && len(*s.NextToken) < 1 { - invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) - } - if s.WorkspaceIds != nil && len(s.WorkspaceIds) < 1 { - invalidParams.Add(request.NewErrParamMinLen("WorkspaceIds", 1)) - } +// SetGroupDesc sets the GroupDesc field's value. +func (s *IpGroup) SetGroupDesc(v string) *IpGroup { + s.GroupDesc = &v + return s +} - if invalidParams.Len() > 0 { - return invalidParams - } - return nil +// SetGroupId sets the GroupId field's value. +func (s *IpGroup) SetGroupId(v string) *IpGroup { + s.GroupId = &v + return s } -// SetNextToken sets the NextToken field's value. -func (s *DescribeWorkspacesConnectionStatusInput) SetNextToken(v string) *DescribeWorkspacesConnectionStatusInput { - s.NextToken = &v +// SetGroupName sets the GroupName field's value. +func (s *IpGroup) SetGroupName(v string) *IpGroup { + s.GroupName = &v return s } -// SetWorkspaceIds sets the WorkspaceIds field's value. -func (s *DescribeWorkspacesConnectionStatusInput) SetWorkspaceIds(v []*string) *DescribeWorkspacesConnectionStatusInput { - s.WorkspaceIds = v +// SetUserRules sets the UserRules field's value. +func (s *IpGroup) SetUserRules(v []*IpRuleItem) *IpGroup { + s.UserRules = v return s } -type DescribeWorkspacesConnectionStatusOutput struct { +// Describes a rule for an IP access control group. +type IpRuleItem struct { _ struct{} `type:"structure"` - // The token to use to retrieve the next set of results, or null if there are - // no more results available. - NextToken *string `min:"1" type:"string"` + // The IP address range, in CIDR notation. 
+ IpRule *string `locationName:"ipRule" type:"string"` - // Information about the connection status of the WorkSpace. - WorkspacesConnectionStatus []*WorkspaceConnectionStatus `type:"list"` + // The description. + RuleDesc *string `locationName:"ruleDesc" type:"string"` } // String returns the string representation -func (s DescribeWorkspacesConnectionStatusOutput) String() string { +func (s IpRuleItem) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeWorkspacesConnectionStatusOutput) GoString() string { +func (s IpRuleItem) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *DescribeWorkspacesConnectionStatusOutput) SetNextToken(v string) *DescribeWorkspacesConnectionStatusOutput { - s.NextToken = &v +// SetIpRule sets the IpRule field's value. +func (s *IpRuleItem) SetIpRule(v string) *IpRuleItem { + s.IpRule = &v return s } -// SetWorkspacesConnectionStatus sets the WorkspacesConnectionStatus field's value. -func (s *DescribeWorkspacesConnectionStatusOutput) SetWorkspacesConnectionStatus(v []*WorkspaceConnectionStatus) *DescribeWorkspacesConnectionStatusOutput { - s.WorkspacesConnectionStatus = v +// SetRuleDesc sets the RuleDesc field's value. +func (s *IpRuleItem) SetRuleDesc(v string) *IpRuleItem { + s.RuleDesc = &v return s } -type DescribeWorkspacesInput struct { +type ListAvailableManagementCidrRangesInput struct { _ struct{} `type:"structure"` - // The ID of the bundle. All WorkSpaces that are created from this bundle are - // retrieved. This parameter cannot be combined with any other filter. - BundleId *string `type:"string"` - - // The ID of the directory. In addition, you can optionally specify a specific - // directory user (see UserName). This parameter cannot be combined with any - // other filter. - DirectoryId *string `type:"string"` + // The IP address range to search. Specify an IP address range that is compatible + // with your network and in CIDR notation (that is, specify the range as an + // IPv4 CIDR block). + // + // ManagementCidrRangeConstraint is a required field + ManagementCidrRangeConstraint *string `type:"string" required:"true"` // The maximum number of items to return. - Limit *int64 `min:"1" type:"integer"` + MaxResults *int64 `min:"1" type:"integer"` - // The token for the next set of results. (You received this token from a previous - // call.) + // If you received a NextToken from a previous call that was paginated, provide + // this token to receive the next set of results. NextToken *string `min:"1" type:"string"` - - // The name of the directory user. You must specify this parameter with DirectoryId. - UserName *string `min:"1" type:"string"` - - // The IDs of the WorkSpaces. This parameter cannot be combined with any other - // filter. - // - // Because the CreateWorkspaces operation is asynchronous, the identifier it - // returns is not immediately available. If you immediately call DescribeWorkspaces - // with this identifier, no information is returned. 
- WorkspaceIds []*string `min:"1" type:"list"` } // String returns the string representation -func (s DescribeWorkspacesInput) String() string { +func (s ListAvailableManagementCidrRangesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeWorkspacesInput) GoString() string { +func (s ListAvailableManagementCidrRangesInput) GoString() string { return s.String() } // Validate inspects the fields of the type to determine if they are valid. -func (s *DescribeWorkspacesInput) Validate() error { - invalidParams := request.ErrInvalidParams{Context: "DescribeWorkspacesInput"} - if s.Limit != nil && *s.Limit < 1 { - invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) +func (s *ListAvailableManagementCidrRangesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListAvailableManagementCidrRangesInput"} + if s.ManagementCidrRangeConstraint == nil { + invalidParams.Add(request.NewErrParamRequired("ManagementCidrRangeConstraint")) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) } if s.NextToken != nil && len(*s.NextToken) < 1 { invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) } - if s.UserName != nil && len(*s.UserName) < 1 { - invalidParams.Add(request.NewErrParamMinLen("UserName", 1)) - } - if s.WorkspaceIds != nil && len(s.WorkspaceIds) < 1 { - invalidParams.Add(request.NewErrParamMinLen("WorkspaceIds", 1)) - } if invalidParams.Len() > 0 { return invalidParams @@ -2041,202 +4913,209 @@ func (s *DescribeWorkspacesInput) Validate() error { return nil } -// SetBundleId sets the BundleId field's value. -func (s *DescribeWorkspacesInput) SetBundleId(v string) *DescribeWorkspacesInput { - s.BundleId = &v - return s -} - -// SetDirectoryId sets the DirectoryId field's value. -func (s *DescribeWorkspacesInput) SetDirectoryId(v string) *DescribeWorkspacesInput { - s.DirectoryId = &v +// SetManagementCidrRangeConstraint sets the ManagementCidrRangeConstraint field's value. +func (s *ListAvailableManagementCidrRangesInput) SetManagementCidrRangeConstraint(v string) *ListAvailableManagementCidrRangesInput { + s.ManagementCidrRangeConstraint = &v return s } -// SetLimit sets the Limit field's value. -func (s *DescribeWorkspacesInput) SetLimit(v int64) *DescribeWorkspacesInput { - s.Limit = &v +// SetMaxResults sets the MaxResults field's value. +func (s *ListAvailableManagementCidrRangesInput) SetMaxResults(v int64) *ListAvailableManagementCidrRangesInput { + s.MaxResults = &v return s } // SetNextToken sets the NextToken field's value. -func (s *DescribeWorkspacesInput) SetNextToken(v string) *DescribeWorkspacesInput { +func (s *ListAvailableManagementCidrRangesInput) SetNextToken(v string) *ListAvailableManagementCidrRangesInput { s.NextToken = &v return s } -// SetUserName sets the UserName field's value. -func (s *DescribeWorkspacesInput) SetUserName(v string) *DescribeWorkspacesInput { - s.UserName = &v - return s -} - -// SetWorkspaceIds sets the WorkspaceIds field's value. -func (s *DescribeWorkspacesInput) SetWorkspaceIds(v []*string) *DescribeWorkspacesInput { - s.WorkspaceIds = v - return s -} - -type DescribeWorkspacesOutput struct { +type ListAvailableManagementCidrRangesOutput struct { _ struct{} `type:"structure"` - // The token to use to retrieve the next set of results, or null if there are - // no more results available. This token is valid for one day and must be used - // within that time frame. 
- NextToken *string `min:"1" type:"string"` + // The list of available IP address ranges, specified as IPv4 CIDR blocks. + ManagementCidrRanges []*string `type:"list"` - // Information about the WorkSpaces. - // - // Because CreateWorkspaces is an asynchronous operation, some of the returned - // information could be incomplete. - Workspaces []*Workspace `type:"list"` + // The token to use to retrieve the next set of results, or null if no more + // results are available. + NextToken *string `min:"1" type:"string"` } // String returns the string representation -func (s DescribeWorkspacesOutput) String() string { +func (s ListAvailableManagementCidrRangesOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DescribeWorkspacesOutput) GoString() string { +func (s ListAvailableManagementCidrRangesOutput) GoString() string { return s.String() } -// SetNextToken sets the NextToken field's value. -func (s *DescribeWorkspacesOutput) SetNextToken(v string) *DescribeWorkspacesOutput { - s.NextToken = &v +// SetManagementCidrRanges sets the ManagementCidrRanges field's value. +func (s *ListAvailableManagementCidrRangesOutput) SetManagementCidrRanges(v []*string) *ListAvailableManagementCidrRangesOutput { + s.ManagementCidrRanges = v return s } -// SetWorkspaces sets the Workspaces field's value. -func (s *DescribeWorkspacesOutput) SetWorkspaces(v []*Workspace) *DescribeWorkspacesOutput { - s.Workspaces = v +// SetNextToken sets the NextToken field's value. +func (s *ListAvailableManagementCidrRangesOutput) SetNextToken(v string) *ListAvailableManagementCidrRangesOutput { + s.NextToken = &v return s } -// Information about a WorkSpace that could not be created. -type FailedCreateWorkspaceRequest struct { +// Describes a WorkSpace modification. +type ModificationState struct { _ struct{} `type:"structure"` - // The error code. - ErrorCode *string `type:"string"` - - // The textual error message. - ErrorMessage *string `type:"string"` + // The resource. + Resource *string `type:"string" enum:"ModificationResourceEnum"` - // Information about the WorkSpace. - WorkspaceRequest *WorkspaceRequest `type:"structure"` + // The modification state. + State *string `type:"string" enum:"ModificationStateEnum"` } // String returns the string representation -func (s FailedCreateWorkspaceRequest) String() string { +func (s ModificationState) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s FailedCreateWorkspaceRequest) GoString() string { +func (s ModificationState) GoString() string { return s.String() } -// SetErrorCode sets the ErrorCode field's value. -func (s *FailedCreateWorkspaceRequest) SetErrorCode(v string) *FailedCreateWorkspaceRequest { - s.ErrorCode = &v - return s -} - -// SetErrorMessage sets the ErrorMessage field's value. -func (s *FailedCreateWorkspaceRequest) SetErrorMessage(v string) *FailedCreateWorkspaceRequest { - s.ErrorMessage = &v +// SetResource sets the Resource field's value. +func (s *ModificationState) SetResource(v string) *ModificationState { + s.Resource = &v return s } -// SetWorkspaceRequest sets the WorkspaceRequest field's value. -func (s *FailedCreateWorkspaceRequest) SetWorkspaceRequest(v *WorkspaceRequest) *FailedCreateWorkspaceRequest { - s.WorkspaceRequest = v +// SetState sets the State field's value. 
+func (s *ModificationState) SetState(v string) *ModificationState { + s.State = &v return s } -// Information about a WorkSpace that could not be rebooted (RebootWorkspaces), -// rebuilt (RebuildWorkspaces), terminated (TerminateWorkspaces), started (StartWorkspaces), -// or stopped (StopWorkspaces). -type FailedWorkspaceChangeRequest struct { +type ModifyAccountInput struct { _ struct{} `type:"structure"` - // The error code. - ErrorCode *string `type:"string"` - - // The textual error message. - ErrorMessage *string `type:"string"` + // The IP address range, specified as an IPv4 CIDR block, for the management + // network interface. Specify an IP address range that is compatible with your + // network and in CIDR notation (that is, specify the range as an IPv4 CIDR + // block). The CIDR block size must be /16 (for example, 203.0.113.25/16). It + // must also be specified as available by the ListAvailableManagementCidrRanges + // operation. + DedicatedTenancyManagementCidrRange *string `type:"string"` - // The identifier of the WorkSpace. - WorkspaceId *string `type:"string"` + // The status of BYOL. + DedicatedTenancySupport *string `type:"string" enum:"DedicatedTenancySupportEnum"` } // String returns the string representation -func (s FailedWorkspaceChangeRequest) String() string { +func (s ModifyAccountInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s FailedWorkspaceChangeRequest) GoString() string { +func (s ModifyAccountInput) GoString() string { return s.String() } -// SetErrorCode sets the ErrorCode field's value. -func (s *FailedWorkspaceChangeRequest) SetErrorCode(v string) *FailedWorkspaceChangeRequest { - s.ErrorCode = &v +// SetDedicatedTenancyManagementCidrRange sets the DedicatedTenancyManagementCidrRange field's value. +func (s *ModifyAccountInput) SetDedicatedTenancyManagementCidrRange(v string) *ModifyAccountInput { + s.DedicatedTenancyManagementCidrRange = &v return s } -// SetErrorMessage sets the ErrorMessage field's value. -func (s *FailedWorkspaceChangeRequest) SetErrorMessage(v string) *FailedWorkspaceChangeRequest { - s.ErrorMessage = &v +// SetDedicatedTenancySupport sets the DedicatedTenancySupport field's value. +func (s *ModifyAccountInput) SetDedicatedTenancySupport(v string) *ModifyAccountInput { + s.DedicatedTenancySupport = &v return s } -// SetWorkspaceId sets the WorkspaceId field's value. -func (s *FailedWorkspaceChangeRequest) SetWorkspaceId(v string) *FailedWorkspaceChangeRequest { - s.WorkspaceId = &v - return s +type ModifyAccountOutput struct { + _ struct{} `type:"structure"` } -// Information about a WorkSpace modification. -type ModificationState struct { +// String returns the string representation +func (s ModifyAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyAccountOutput) GoString() string { + return s.String() +} + +type ModifyClientPropertiesInput struct { _ struct{} `type:"structure"` - // The resource. - Resource *string `type:"string" enum:"ModificationResourceEnum"` + // Information about the Amazon WorkSpaces client. + ClientProperties *ClientProperties `type:"structure"` - // The modification state. - State *string `type:"string" enum:"ModificationStateEnum"` + // The resource identifiers, in the form of directory IDs. 
+ // + // ResourceId is a required field + ResourceId *string `min:"1" type:"string" required:"true"` } // String returns the string representation -func (s ModificationState) String() string { +func (s ModifyClientPropertiesInput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s ModificationState) GoString() string { +func (s ModifyClientPropertiesInput) GoString() string { return s.String() } -// SetResource sets the Resource field's value. -func (s *ModificationState) SetResource(v string) *ModificationState { - s.Resource = &v +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyClientPropertiesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyClientPropertiesInput"} + if s.ResourceId == nil { + invalidParams.Add(request.NewErrParamRequired("ResourceId")) + } + if s.ResourceId != nil && len(*s.ResourceId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ResourceId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientProperties sets the ClientProperties field's value. +func (s *ModifyClientPropertiesInput) SetClientProperties(v *ClientProperties) *ModifyClientPropertiesInput { + s.ClientProperties = v return s } -// SetState sets the State field's value. -func (s *ModificationState) SetState(v string) *ModificationState { - s.State = &v +// SetResourceId sets the ResourceId field's value. +func (s *ModifyClientPropertiesInput) SetResourceId(v string) *ModifyClientPropertiesInput { + s.ResourceId = &v return s } +type ModifyClientPropertiesOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ModifyClientPropertiesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyClientPropertiesOutput) GoString() string { + return s.String() +} + type ModifyWorkspacePropertiesInput struct { _ struct{} `type:"structure"` - // The ID of the WorkSpace. + // The identifier of the WorkSpace. // // WorkspaceId is a required field WorkspaceId *string `type:"string" required:"true"` @@ -2299,7 +5178,97 @@ func (s ModifyWorkspacePropertiesOutput) GoString() string { return s.String() } -// Information used to reboot a WorkSpace. +type ModifyWorkspaceStateInput struct { + _ struct{} `type:"structure"` + + // The identifier of the WorkSpace. + // + // WorkspaceId is a required field + WorkspaceId *string `type:"string" required:"true"` + + // The WorkSpace state. + // + // WorkspaceState is a required field + WorkspaceState *string `type:"string" required:"true" enum:"TargetWorkspaceState"` +} + +// String returns the string representation +func (s ModifyWorkspaceStateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyWorkspaceStateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyWorkspaceStateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyWorkspaceStateInput"} + if s.WorkspaceId == nil { + invalidParams.Add(request.NewErrParamRequired("WorkspaceId")) + } + if s.WorkspaceState == nil { + invalidParams.Add(request.NewErrParamRequired("WorkspaceState")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetWorkspaceId sets the WorkspaceId field's value. 
+func (s *ModifyWorkspaceStateInput) SetWorkspaceId(v string) *ModifyWorkspaceStateInput { + s.WorkspaceId = &v + return s +} + +// SetWorkspaceState sets the WorkspaceState field's value. +func (s *ModifyWorkspaceStateInput) SetWorkspaceState(v string) *ModifyWorkspaceStateInput { + s.WorkspaceState = &v + return s +} + +type ModifyWorkspaceStateOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s ModifyWorkspaceStateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyWorkspaceStateOutput) GoString() string { + return s.String() +} + +// The operating system that the image is running. +type OperatingSystem struct { + _ struct{} `type:"structure"` + + // The operating system. + Type *string `type:"string" enum:"OperatingSystemType"` +} + +// String returns the string representation +func (s OperatingSystem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperatingSystem) GoString() string { + return s.String() +} + +// SetType sets the Type field's value. +func (s *OperatingSystem) SetType(v string) *OperatingSystem { + s.Type = &v + return s +} + +// Describes the information used to reboot a WorkSpace. type RebootRequest struct { _ struct{} `type:"structure"` @@ -2341,7 +5310,7 @@ func (s *RebootRequest) SetWorkspaceId(v string) *RebootRequest { type RebootWorkspacesInput struct { _ struct{} `type:"structure"` - // The WorkSpaces to reboot. + // The WorkSpaces to reboot. You can specify up to 25 WorkSpaces. // // RebootWorkspaceRequests is a required field RebootWorkspaceRequests []*RebootRequest `min:"1" type:"list" required:"true"` @@ -2412,7 +5381,7 @@ func (s *RebootWorkspacesOutput) SetFailedRequests(v []*FailedWorkspaceChangeReq return s } -// Information used to rebuild a WorkSpace. +// Describes the information used to rebuild a WorkSpace. type RebuildRequest struct { _ struct{} `type:"structure"` @@ -2454,7 +5423,7 @@ func (s *RebuildRequest) SetWorkspaceId(v string) *RebuildRequest { type RebuildWorkspacesInput struct { _ struct{} `type:"structure"` - // The WorkSpaces to rebuild. + // The WorkSpace to rebuild. You can specify a single WorkSpace. // // RebuildWorkspaceRequests is a required field RebuildWorkspaceRequests []*RebuildRequest `min:"1" type:"list" required:"true"` @@ -2505,7 +5474,7 @@ func (s *RebuildWorkspacesInput) SetRebuildWorkspaceRequests(v []*RebuildRequest type RebuildWorkspacesOutput struct { _ struct{} `type:"structure"` - // Information about the WorkSpaces that could not be rebuilt. + // Information about the WorkSpace that could not be rebuilt. FailedRequests []*FailedWorkspaceChangeRequest `type:"list"` } @@ -2525,7 +5494,73 @@ func (s *RebuildWorkspacesOutput) SetFailedRequests(v []*FailedWorkspaceChangeRe return s } -// Information about the root volume for a WorkSpace bundle. +type RevokeIpRulesInput struct { + _ struct{} `type:"structure"` + + // The identifier of the group. + // + // GroupId is a required field + GroupId *string `type:"string" required:"true"` + + // The rules to remove from the group. 
+ // + // UserRules is a required field + UserRules []*string `type:"list" required:"true"` +} + +// String returns the string representation +func (s RevokeIpRulesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RevokeIpRulesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RevokeIpRulesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RevokeIpRulesInput"} + if s.GroupId == nil { + invalidParams.Add(request.NewErrParamRequired("GroupId")) + } + if s.UserRules == nil { + invalidParams.Add(request.NewErrParamRequired("UserRules")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupId sets the GroupId field's value. +func (s *RevokeIpRulesInput) SetGroupId(v string) *RevokeIpRulesInput { + s.GroupId = &v + return s +} + +// SetUserRules sets the UserRules field's value. +func (s *RevokeIpRulesInput) SetUserRules(v []*string) *RevokeIpRulesInput { + s.UserRules = v + return s +} + +type RevokeIpRulesOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s RevokeIpRulesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RevokeIpRulesOutput) GoString() string { + return s.String() +} + +// Describes the root volume for a WorkSpace bundle. type RootStorage struct { _ struct{} `type:"structure"` @@ -2553,7 +5588,7 @@ func (s *RootStorage) SetCapacity(v string) *RootStorage { type StartRequest struct { _ struct{} `type:"structure"` - // The ID of the WorkSpace. + // The identifier of the WorkSpace. WorkspaceId *string `type:"string"` } @@ -2576,7 +5611,7 @@ func (s *StartRequest) SetWorkspaceId(v string) *StartRequest { type StartWorkspacesInput struct { _ struct{} `type:"structure"` - // The WorkSpaces to start. + // The WorkSpaces to start. You can specify up to 25 WorkSpaces. // // StartWorkspaceRequests is a required field StartWorkspaceRequests []*StartRequest `min:"1" type:"list" required:"true"` @@ -2637,11 +5672,11 @@ func (s *StartWorkspacesOutput) SetFailedRequests(v []*FailedWorkspaceChangeRequ return s } -// Information used to stop a WorkSpace. +// Describes the information used to stop a WorkSpace. type StopRequest struct { _ struct{} `type:"structure"` - // The ID of the WorkSpace. + // The identifier of the WorkSpace. WorkspaceId *string `type:"string"` } @@ -2664,7 +5699,7 @@ func (s *StopRequest) SetWorkspaceId(v string) *StopRequest { type StopWorkspacesInput struct { _ struct{} `type:"structure"` - // The WorkSpaces to stop. + // The WorkSpaces to stop. You can specify up to 25 WorkSpaces. // // StopWorkspaceRequests is a required field StopWorkspaceRequests []*StopRequest `min:"1" type:"list" required:"true"` @@ -2725,7 +5760,7 @@ func (s *StopWorkspacesOutput) SetFailedRequests(v []*FailedWorkspaceChangeReque return s } -// Information about a tag. +// Describes a tag. type Tag struct { _ struct{} `type:"structure"` @@ -2776,7 +5811,7 @@ func (s *Tag) SetValue(v string) *Tag { return s } -// Information used to terminate a WorkSpace. +// Describes the information used to terminate a WorkSpace. 
type TerminateRequest struct { _ struct{} `type:"structure"` @@ -2818,7 +5853,7 @@ func (s *TerminateRequest) SetWorkspaceId(v string) *TerminateRequest { type TerminateWorkspacesInput struct { _ struct{} `type:"structure"` - // The WorkSpaces to terminate. + // The WorkSpaces to terminate. You can specify up to 25 WorkSpaces. // // TerminateWorkspaceRequests is a required field TerminateWorkspaceRequests []*TerminateRequest `min:"1" type:"list" required:"true"` @@ -2889,7 +5924,73 @@ func (s *TerminateWorkspacesOutput) SetFailedRequests(v []*FailedWorkspaceChange return s } -// Information about the user storage for a WorkSpace bundle. +type UpdateRulesOfIpGroupInput struct { + _ struct{} `type:"structure"` + + // The identifier of the group. + // + // GroupId is a required field + GroupId *string `type:"string" required:"true"` + + // One or more rules. + // + // UserRules is a required field + UserRules []*IpRuleItem `type:"list" required:"true"` +} + +// String returns the string representation +func (s UpdateRulesOfIpGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRulesOfIpGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UpdateRulesOfIpGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateRulesOfIpGroupInput"} + if s.GroupId == nil { + invalidParams.Add(request.NewErrParamRequired("GroupId")) + } + if s.UserRules == nil { + invalidParams.Add(request.NewErrParamRequired("UserRules")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroupId sets the GroupId field's value. +func (s *UpdateRulesOfIpGroupInput) SetGroupId(v string) *UpdateRulesOfIpGroupInput { + s.GroupId = &v + return s +} + +// SetUserRules sets the UserRules field's value. +func (s *UpdateRulesOfIpGroupInput) SetUserRules(v []*IpRuleItem) *UpdateRulesOfIpGroupInput { + s.UserRules = v + return s +} + +type UpdateRulesOfIpGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UpdateRulesOfIpGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateRulesOfIpGroupOutput) GoString() string { + return s.String() +} + +// Describes the user storage for a WorkSpace bundle. type UserStorage struct { _ struct{} `type:"structure"` @@ -2913,7 +6014,7 @@ func (s *UserStorage) SetCapacity(v string) *UserStorage { return s } -// Information about a WorkSpace. +// Describes a WorkSpace. type Workspace struct { _ struct{} `type:"structure"` @@ -2926,11 +6027,11 @@ type Workspace struct { // The identifier of the AWS Directory Service directory for the WorkSpace. DirectoryId *string `type:"string"` - // If the WorkSpace could not be created, contains the error code. + // The error code that is returned if the WorkSpace cannot be created. ErrorCode *string `type:"string"` - // If the WorkSpace could not be created, contains a textual error message that - // describes the failure. + // The text of the error message that is returned if the WorkSpace cannot be + // created. ErrorMessage *string `type:"string"` // The IP address of the WorkSpace. @@ -3064,7 +6165,7 @@ func (s *Workspace) SetWorkspaceProperties(v *WorkspaceProperties) *Workspace { return s } -// Information about a WorkSpace bundle. +// Describes a WorkSpace bundle. 
type WorkspaceBundle struct { _ struct{} `type:"structure"` @@ -3151,13 +6252,13 @@ type WorkspaceConnectionStatus struct { // the WorkSpace is stopped. ConnectionState *string `type:"string" enum:"ConnectionState"` - // The timestamp of the connection state check. - ConnectionStateCheckTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + // The timestamp of the connection status check. + ConnectionStateCheckTimestamp *time.Time `type:"timestamp"` // The timestamp of the last known user connection. - LastKnownUserConnectionTimestamp *time.Time `type:"timestamp" timestampFormat:"unix"` + LastKnownUserConnectionTimestamp *time.Time `type:"timestamp"` - // The ID of the WorkSpace. + // The identifier of the WorkSpace. WorkspaceId *string `type:"string"` } @@ -3195,8 +6296,7 @@ func (s *WorkspaceConnectionStatus) SetWorkspaceId(v string) *WorkspaceConnectio return s } -// Contains information about an AWS Directory Service directory for use with -// Amazon WorkSpaces. +// Describes an AWS Directory Service directory that is used with Amazon WorkSpaces. type WorkspaceDirectory struct { _ struct{} `type:"structure"` @@ -3222,6 +6322,9 @@ type WorkspaceDirectory struct { // to make calls to other services, such as Amazon EC2, on your behalf. IamRoleId *string `type:"string"` + // The identifiers of the IP access control groups associated with the directory. + IpGroupIds []*string `locationName:"ipGroupIds" type:"list"` + // The registration code for the directory. This is the code that users enter // in their Amazon WorkSpaces client application to connect to the directory. RegistrationCode *string `min:"1" type:"string"` @@ -3291,6 +6394,12 @@ func (s *WorkspaceDirectory) SetIamRoleId(v string) *WorkspaceDirectory { return s } +// SetIpGroupIds sets the IpGroupIds field's value. +func (s *WorkspaceDirectory) SetIpGroupIds(v []*string) *WorkspaceDirectory { + s.IpGroupIds = v + return s +} + // SetRegistrationCode sets the RegistrationCode field's value. func (s *WorkspaceDirectory) SetRegistrationCode(v string) *WorkspaceDirectory { s.RegistrationCode = &v @@ -3321,7 +6430,95 @@ func (s *WorkspaceDirectory) SetWorkspaceSecurityGroupId(v string) *WorkspaceDir return s } -// Information about a WorkSpace. +// Describes a WorkSpace image. +type WorkspaceImage struct { + _ struct{} `type:"structure"` + + // The description of the image. + Description *string `min:"1" type:"string"` + + // The error code that is returned for the image. + ErrorCode *string `type:"string"` + + // The text of the error message that is returned for the image. + ErrorMessage *string `type:"string"` + + // The identifier of the image. + ImageId *string `type:"string"` + + // The name of the image. + Name *string `min:"1" type:"string"` + + // The operating system that the image is running. + OperatingSystem *OperatingSystem `type:"structure"` + + // Specifies whether the image is running on dedicated hardware. When bring + // your own license (BYOL) is enabled, this value is set to DEDICATED. + RequiredTenancy *string `type:"string" enum:"WorkspaceImageRequiredTenancy"` + + // The status of the image. + State *string `type:"string" enum:"WorkspaceImageState"` +} + +// String returns the string representation +func (s WorkspaceImage) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s WorkspaceImage) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. 
+func (s *WorkspaceImage) SetDescription(v string) *WorkspaceImage { + s.Description = &v + return s +} + +// SetErrorCode sets the ErrorCode field's value. +func (s *WorkspaceImage) SetErrorCode(v string) *WorkspaceImage { + s.ErrorCode = &v + return s +} + +// SetErrorMessage sets the ErrorMessage field's value. +func (s *WorkspaceImage) SetErrorMessage(v string) *WorkspaceImage { + s.ErrorMessage = &v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *WorkspaceImage) SetImageId(v string) *WorkspaceImage { + s.ImageId = &v + return s +} + +// SetName sets the Name field's value. +func (s *WorkspaceImage) SetName(v string) *WorkspaceImage { + s.Name = &v + return s +} + +// SetOperatingSystem sets the OperatingSystem field's value. +func (s *WorkspaceImage) SetOperatingSystem(v *OperatingSystem) *WorkspaceImage { + s.OperatingSystem = v + return s +} + +// SetRequiredTenancy sets the RequiredTenancy field's value. +func (s *WorkspaceImage) SetRequiredTenancy(v string) *WorkspaceImage { + s.RequiredTenancy = &v + return s +} + +// SetState sets the State field's value. +func (s *WorkspaceImage) SetState(v string) *WorkspaceImage { + s.State = &v + return s +} + +// Describes a WorkSpace. type WorkspaceProperties struct { _ struct{} `type:"structure"` @@ -3383,7 +6580,7 @@ func (s *WorkspaceProperties) SetUserVolumeSizeGib(v int64) *WorkspaceProperties return s } -// Information used to create a WorkSpace. +// Describes the information used to create a WorkSpace. type WorkspaceRequest struct { _ struct{} `type:"structure"` @@ -3526,6 +6723,12 @@ const ( // ComputeGraphics is a Compute enum value ComputeGraphics = "GRAPHICS" + + // ComputePowerpro is a Compute enum value + ComputePowerpro = "POWERPRO" + + // ComputeGraphicspro is a Compute enum value + ComputeGraphicspro = "GRAPHICSPRO" ) const ( @@ -3539,6 +6742,30 @@ const ( ConnectionStateUnknown = "UNKNOWN" ) +const ( + // DedicatedTenancyModificationStateEnumPending is a DedicatedTenancyModificationStateEnum enum value + DedicatedTenancyModificationStateEnumPending = "PENDING" + + // DedicatedTenancyModificationStateEnumCompleted is a DedicatedTenancyModificationStateEnum enum value + DedicatedTenancyModificationStateEnumCompleted = "COMPLETED" + + // DedicatedTenancyModificationStateEnumFailed is a DedicatedTenancyModificationStateEnum enum value + DedicatedTenancyModificationStateEnumFailed = "FAILED" +) + +const ( + // DedicatedTenancySupportEnumEnabled is a DedicatedTenancySupportEnum enum value + DedicatedTenancySupportEnumEnabled = "ENABLED" +) + +const ( + // DedicatedTenancySupportResultEnumEnabled is a DedicatedTenancySupportResultEnum enum value + DedicatedTenancySupportResultEnumEnabled = "ENABLED" + + // DedicatedTenancySupportResultEnumDisabled is a DedicatedTenancySupportResultEnum enum value + DedicatedTenancySupportResultEnumDisabled = "DISABLED" +) + const ( // ModificationResourceEnumRootVolume is a ModificationResourceEnum enum value ModificationResourceEnumRootVolume = "ROOT_VOLUME" @@ -3558,6 +6785,22 @@ const ( ModificationStateEnumUpdateInProgress = "UPDATE_IN_PROGRESS" ) +const ( + // OperatingSystemTypeWindows is a OperatingSystemType enum value + OperatingSystemTypeWindows = "WINDOWS" + + // OperatingSystemTypeLinux is a OperatingSystemType enum value + OperatingSystemTypeLinux = "LINUX" +) + +const ( + // ReconnectEnumEnabled is a ReconnectEnum enum value + ReconnectEnumEnabled = "ENABLED" + + // ReconnectEnumDisabled is a ReconnectEnum enum value + ReconnectEnumDisabled = "DISABLED" +) + 
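
The regenerated WorkSpaces client keeps the SDK's fluent-setter pattern for the types added above (for example `ModifyWorkspaceStateInput`). Purely as orientation, and not something this diff itself changes, here is a minimal sketch of building and validating such a request; the WorkSpace identifier is a placeholder, not a real resource:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/workspaces"
)

func main() {
	// Build the request with the generated fluent setters; "ws-0123456789"
	// is a placeholder WorkSpace identifier used only for illustration.
	input := (&workspaces.ModifyWorkspaceStateInput{}).
		SetWorkspaceId("ws-0123456789").
		SetWorkspaceState(workspaces.TargetWorkspaceStateAdminMaintenance)

	// Validate() applies the generated client-side checks (required fields,
	// minimum lengths) before any request is sent to the service.
	if err := input.Validate(); err != nil {
		fmt.Println("invalid ModifyWorkspaceStateInput:", err)
		return
	}

	// String() pretty-prints the request shape via awsutil.Prettify.
	fmt.Println(input)
}
```

As the generated `Validate` methods above show, this only catches missing required parameters locally; the service performs the authoritative validation.
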
const ( // RunningModeAutoStop is a RunningMode enum value RunningModeAutoStop = "AUTO_STOP" @@ -3566,6 +6809,14 @@ const ( RunningModeAlwaysOn = "ALWAYS_ON" ) +const ( + // TargetWorkspaceStateAvailable is a TargetWorkspaceState enum value + TargetWorkspaceStateAvailable = "AVAILABLE" + + // TargetWorkspaceStateAdminMaintenance is a TargetWorkspaceState enum value + TargetWorkspaceStateAdminMaintenance = "ADMIN_MAINTENANCE" +) + const ( // WorkspaceDirectoryStateRegistering is a WorkspaceDirectoryState enum value WorkspaceDirectoryStateRegistering = "REGISTERING" @@ -3591,6 +6842,36 @@ const ( WorkspaceDirectoryTypeAdConnector = "AD_CONNECTOR" ) +const ( + // WorkspaceImageIngestionProcessByolRegular is a WorkspaceImageIngestionProcess enum value + WorkspaceImageIngestionProcessByolRegular = "BYOL_REGULAR" + + // WorkspaceImageIngestionProcessByolGraphics is a WorkspaceImageIngestionProcess enum value + WorkspaceImageIngestionProcessByolGraphics = "BYOL_GRAPHICS" + + // WorkspaceImageIngestionProcessByolGraphicspro is a WorkspaceImageIngestionProcess enum value + WorkspaceImageIngestionProcessByolGraphicspro = "BYOL_GRAPHICSPRO" +) + +const ( + // WorkspaceImageRequiredTenancyDefault is a WorkspaceImageRequiredTenancy enum value + WorkspaceImageRequiredTenancyDefault = "DEFAULT" + + // WorkspaceImageRequiredTenancyDedicated is a WorkspaceImageRequiredTenancy enum value + WorkspaceImageRequiredTenancyDedicated = "DEDICATED" +) + +const ( + // WorkspaceImageStateAvailable is a WorkspaceImageState enum value + WorkspaceImageStateAvailable = "AVAILABLE" + + // WorkspaceImageStatePending is a WorkspaceImageState enum value + WorkspaceImageStatePending = "PENDING" + + // WorkspaceImageStateError is a WorkspaceImageState enum value + WorkspaceImageStateError = "ERROR" +) + const ( // WorkspaceStatePending is a WorkspaceState enum value WorkspaceStatePending = "PENDING" @@ -3616,6 +6897,9 @@ const ( // WorkspaceStateMaintenance is a WorkspaceState enum value WorkspaceStateMaintenance = "MAINTENANCE" + // WorkspaceStateAdminMaintenance is a WorkspaceState enum value + WorkspaceStateAdminMaintenance = "ADMIN_MAINTENANCE" + // WorkspaceStateTerminating is a WorkspaceState enum value WorkspaceStateTerminating = "TERMINATING" diff --git a/vendor/github.com/aws/aws-sdk-go/service/workspaces/doc.go b/vendor/github.com/aws/aws-sdk-go/service/workspaces/doc.go index cae0167d1b3..882a1973f0d 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/workspaces/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/workspaces/doc.go @@ -4,7 +4,7 @@ // requests to Amazon WorkSpaces. // // Amazon WorkSpaces enables you to provision virtual, cloud-based Microsoft -// Windows desktops for your users. +// Windows and Amazon Linux desktops for your users. // // See https://docs.aws.amazon.com/goto/WebAPI/workspaces-2015-04-08 for more information on this service. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/workspaces/errors.go b/vendor/github.com/aws/aws-sdk-go/service/workspaces/errors.go index 78fcbedb2b2..2083335b777 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/workspaces/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/workspaces/errors.go @@ -19,7 +19,7 @@ const ( // ErrCodeInvalidResourceStateException for service response error code // "InvalidResourceStateException". // - // The state of the WorkSpace is not valid for this operation. + // The state of the resource is not valid for this operation. 
ErrCodeInvalidResourceStateException = "InvalidResourceStateException" // ErrCodeOperationInProgressException for service response error code @@ -29,6 +29,30 @@ const ( // in a moment. ErrCodeOperationInProgressException = "OperationInProgressException" + // ErrCodeOperationNotSupportedException for service response error code + // "OperationNotSupportedException". + // + // This operation is not supported. + ErrCodeOperationNotSupportedException = "OperationNotSupportedException" + + // ErrCodeResourceAlreadyExistsException for service response error code + // "ResourceAlreadyExistsException". + // + // The specified resource already exists. + ErrCodeResourceAlreadyExistsException = "ResourceAlreadyExistsException" + + // ErrCodeResourceAssociatedException for service response error code + // "ResourceAssociatedException". + // + // The resource is associated with a directory. + ErrCodeResourceAssociatedException = "ResourceAssociatedException" + + // ErrCodeResourceCreationFailedException for service response error code + // "ResourceCreationFailedException". + // + // The resource could not be created. + ErrCodeResourceCreationFailedException = "ResourceCreationFailedException" + // ErrCodeResourceLimitExceededException for service response error code // "ResourceLimitExceededException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/workspaces/service.go b/vendor/github.com/aws/aws-sdk-go/service/workspaces/service.go index 45241703115..38e1cc2ee1f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/workspaces/service.go +++ b/vendor/github.com/aws/aws-sdk-go/service/workspaces/service.go @@ -29,8 +29,9 @@ var initRequest func(*request.Request) // Service information constants const ( - ServiceName = "workspaces" // Service endpoint prefix API calls made to. - EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. + ServiceName = "workspaces" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "WorkSpaces" // ServiceID is a unique identifer of a specific service. ) // New creates a new instance of the WorkSpaces client with a session. @@ -55,6 +56,7 @@ func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegio cfg, metadata.ClientInfo{ ServiceName: ServiceName, + ServiceID: ServiceID, SigningName: signingName, SigningRegion: signingRegion, Endpoint: endpoint, diff --git a/vendor/github.com/beevik/etree/CONTRIBUTORS b/vendor/github.com/beevik/etree/CONTRIBUTORS index 084662c3a83..45a53954118 100644 --- a/vendor/github.com/beevik/etree/CONTRIBUTORS +++ b/vendor/github.com/beevik/etree/CONTRIBUTORS @@ -6,3 +6,4 @@ Matt Smith (ma314smith) Michal Jemala (michaljemala) Nicolas Piganeau (npiganeau) Chris Brown (ccbrown) +Earncef Sequeira (earncef) \ No newline at end of file diff --git a/vendor/github.com/beevik/etree/RELEASE_NOTES.md b/vendor/github.com/beevik/etree/RELEASE_NOTES.md new file mode 100644 index 00000000000..b3a39bd9270 --- /dev/null +++ b/vendor/github.com/beevik/etree/RELEASE_NOTES.md @@ -0,0 +1,27 @@ +Release v1.0.1 +============== + +**Changes** + +* Added support for absolute etree Path queries. An absolute path begins with + `/` or `//` and begins its search from the element's document root. 
+* Added [`GetPath`](https://godoc.org/github.com/beevik/etree#Element.GetPath) + and [`GetRelativePath`](https://godoc.org/github.com/beevik/etree#Element.GetRelativePath) + functions to the [`Element`](https://godoc.org/github.com/beevik/etree#Element) + type. + +**Breaking changes** + +* A path starting with `//` is now interpreted as an absolute path. + Previously, it was interpreted as a relative path starting from the element + whose + [`FindElement`](https://godoc.org/github.com/beevik/etree#Element.FindElement) + method was called. To remain compatible with this release, all paths + prefixed with `//` should be prefixed with `.//` when called from any + element other than the document's root. + + +Release v1.0.0 +============== + +Initial release. diff --git a/vendor/github.com/beevik/etree/etree.go b/vendor/github.com/beevik/etree/etree.go index 36b279f6003..461a7aa8a86 100644 --- a/vendor/github.com/beevik/etree/etree.go +++ b/vendor/github.com/beevik/etree/etree.go @@ -424,7 +424,8 @@ func (e *Element) readFrom(ri io.Reader, settings ReadSettings) (n int64, err er } // SelectAttr finds an element attribute matching the requested key and -// returns it if found. The key may be prefixed by a namespace and a colon. +// returns it if found. Returns nil if no matching attribute is found. The key +// may be prefixed by a namespace and a colon. func (e *Element) SelectAttr(key string) *Attr { space, skey := spaceDecompose(key) for i, a := range e.Attr { @@ -460,7 +461,8 @@ func (e *Element) ChildElements() []*Element { } // SelectElement returns the first child element with the given tag. The tag -// may be prefixed by a namespace and a colon. +// may be prefixed by a namespace and a colon. Returns nil if no element with +// a matching tag was found. func (e *Element) SelectElement(tag string) *Element { space, stag := spaceDecompose(tag) for _, t := range e.Child { @@ -485,13 +487,14 @@ func (e *Element) SelectElements(tag string) []*Element { } // FindElement returns the first element matched by the XPath-like path -// string. Panics if an invalid path string is supplied. +// string. Returns nil if no element is found using the path. Panics if an +// invalid path string is supplied. func (e *Element) FindElement(path string) *Element { return e.FindElementPath(MustCompilePath(path)) } // FindElementPath returns the first element matched by the XPath-like path -// string. +// string. Returns nil if no element is found using the path. func (e *Element) FindElementPath(path Path) *Element { p := newPather() elements := p.traverse(e, path) @@ -515,6 +518,94 @@ func (e *Element) FindElementsPath(path Path) []*Element { return p.traverse(e, path) } +// GetPath returns the absolute path of the element. +func (e *Element) GetPath() string { + path := []string{} + for seg := e; seg != nil; seg = seg.Parent() { + if seg.Tag != "" { + path = append(path, seg.Tag) + } + } + + // Reverse the path. + for i, j := 0, len(path)-1; i < j; i, j = i+1, j-1 { + path[i], path[j] = path[j], path[i] + } + + return "/" + strings.Join(path, "/") +} + +// GetRelativePath returns the path of the element relative to the source +// element. If the two elements are not part of the same element tree, then +// GetRelativePath returns the empty string. +func (e *Element) GetRelativePath(source *Element) string { + var path []*Element + + if source == nil { + return "" + } + + // Build a reverse path from the element toward the root. 
Stop if the + // source element is encountered. + var seg *Element + for seg = e; seg != nil && seg != source; seg = seg.Parent() { + path = append(path, seg) + } + + // If we found the source element, reverse the path and compose the + // string. + if seg == source { + if len(path) == 0 { + return "." + } + parts := []string{} + for i := len(path) - 1; i >= 0; i-- { + parts = append(parts, path[i].Tag) + } + return "./" + strings.Join(parts, "/") + } + + // The source wasn't encountered, so climb from the source element toward + // the root of the tree until an element in the reversed path is + // encountered. + + findPathIndex := func(e *Element, path []*Element) int { + for i, ee := range path { + if e == ee { + return i + } + } + return -1 + } + + climb := 0 + for seg = source; seg != nil; seg = seg.Parent() { + i := findPathIndex(seg, path) + if i >= 0 { + path = path[:i] // truncate at found segment + break + } + climb++ + } + + // No element in the reversed path was encountered, so the two elements + // must not be part of the same tree. + if seg == nil { + return "" + } + + // Reverse the (possibly truncated) path and prepend ".." segments to + // climb. + parts := []string{} + for i := 0; i < climb; i++ { + parts = append(parts, "..") + } + for i := len(path) - 1; i >= 0; i-- { + parts = append(parts, path[i].Tag) + } + return strings.Join(parts, "/") +} + // indent recursively inserts proper indentation between an // XML element's child tokens. func (e *Element) indent(depth int, indent indentFunc) { diff --git a/vendor/github.com/beevik/etree/path.go b/vendor/github.com/beevik/etree/path.go index 9cf245eb393..a1a59bdc494 100644 --- a/vendor/github.com/beevik/etree/path.go +++ b/vendor/github.com/beevik/etree/path.go @@ -10,46 +10,54 @@ import ( ) /* -A Path is an object that represents an optimized version of an -XPath-like search string. Although path strings are XPath-like, -only the following limited syntax is supported: - - . Selects the current element - .. Selects the parent of the current element - * Selects all child elements - // Selects all descendants of the current element - tag Selects all child elements with the given tag - [#] Selects the element of the given index (1-based, - negative starts from the end) - [@attrib] Selects all elements with the given attribute - [@attrib='val'] Selects all elements with the given attribute set to val - [tag] Selects all elements with a child element named tag - [tag='val'] Selects all elements with a child element named tag - and text matching val - [text()] Selects all elements with non-empty text - [text()='val'] Selects all elements whose text matches val +A Path is an object that represents an optimized version of an XPath-like +search string. A path search string is a slash-separated series of "selectors" +allowing traversal through an XML hierarchy. Although etree path strings are +similar to XPath strings, they have a more limited set of selectors and +filtering options. The following selectors and filters are supported by etree +paths: + + . Select the current element. + .. Select the parent of the current element. + * Select all child elements of the current element. + / Select the root element when used at the start of a path. + // Select all descendants of the current element. If used at + the start of a path, select all descendants of the root. + tag Select all child elements with the given tag. + [#] Select the element of the given index (1-based, + negative starts from the end). 
+ [@attrib] Select all elements with the given attribute. + [@attrib='val'] Select all elements with the given attribute set to val. + [tag] Select all elements with a child element named tag. + [tag='val'] Select all elements with a child element named tag + and text matching val. + [text()] Select all elements with non-empty text. + [text()='val'] Select all elements whose text matches val. Examples: -Select the title elements of all descendant book elements having a -'category' attribute of 'WEB': +Select the bookstore child element of the root element: + /bookstore + +Beginning a search from the root element, select the title elements of all +descendant book elements having a 'category' attribute of 'WEB': //book[@category='WEB']/title -Select the first book element with a title child containing the text -'Great Expectations': +Beginning a search from the current element, select the first descendant book +element with a title child containing the text 'Great Expectations': .//book[title='Great Expectations'][1] -Starting from the current element, select all children of book elements -with an attribute 'language' set to 'english': +Beginning a search from the current element, select all children of book +elements with an attribute 'language' set to 'english': ./book/*[@language='english'] -Starting from the current element, select all children of book elements -containing the text 'special': +Beginning a search from the current element, select all children of book +elements containing the text 'special': ./book/*[text()='special'] -Select all descendant book elements whose title element has an attribute -'language' set to 'french': - //book/title[@language='french']/.. +Beginning a search from the current element, select all descendant book +elements whose title element has an attribute 'language' equal to 'french': + .//book/title[@language='french']/.. */ type Path struct { @@ -180,22 +188,20 @@ type compiler struct { // through an element tree and returns a slice of segment // descriptors. func (c *compiler) parsePath(path string) []segment { - // If path starts or ends with //, fix it - if strings.HasPrefix(path, "//") { - path = "." + path - } + // If path ends with //, fix it if strings.HasSuffix(path, "//") { path = path + "*" } - // Paths cannot be absolute + var segments []segment + + // Check for an absolute path if strings.HasPrefix(path, "/") { - c.err = ErrPath("paths cannot be absolute.") - return nil + segments = append(segments, segment{new(selectRoot), []filter{}}) + path = path[1:] } - // Split path into segment objects - var segments []segment + // Split path into segments for _, s := range splitPath(path) { segments = append(segments, c.parseSegment(s)) if c.err != ErrPath("") { @@ -225,7 +231,7 @@ func (c *compiler) parseSegment(path string) segment { pieces := strings.Split(path, "[") seg := segment{ sel: c.parseSelector(pieces[0]), - filters: make([]filter, 0), + filters: []filter{}, } for i := 1; i < len(pieces); i++ { fpath := pieces[i] @@ -305,6 +311,17 @@ func (s *selectSelf) apply(e *Element, p *pather) { p.candidates = append(p.candidates, e) } +// selectRoot selects the element's root node. +type selectRoot struct{} + +func (s *selectRoot) apply(e *Element, p *pather) { + root := e + for root.parent != nil { + root = root.parent + } + p.candidates = append(p.candidates, root) +} + // selectParent selects the element's parent into the candidate list. 
type selectParent struct{} diff --git a/vendor/github.com/boombuler/barcode/LICENSE b/vendor/github.com/boombuler/barcode/LICENSE new file mode 100644 index 00000000000..862b0ddcd7f --- /dev/null +++ b/vendor/github.com/boombuler/barcode/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2014 Florian Sundermann + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/github.com/boombuler/barcode/README.md b/vendor/github.com/boombuler/barcode/README.md new file mode 100644 index 00000000000..2a988db3999 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/README.md @@ -0,0 +1,53 @@ +[![Join the chat at https://gitter.im/golang-barcode/Lobby](https://badges.gitter.im/golang-barcode/Lobby.svg)](https://gitter.im/golang-barcode/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) + +## Introduction ## + +This is a package for GO which can be used to create different types of barcodes. + +## Supported Barcode Types ## +* 2 of 5 +* Aztec Code +* Codabar +* Code 128 +* Code 39 +* Code 93 +* Datamatrix +* EAN 13 +* EAN 8 +* PDF 417 +* QR Code + +## Example ## + +This is a simple example on how to create a QR-Code and write it to a png-file +```go +package main + +import ( + "image/png" + "os" + + "github.com/boombuler/barcode" + "github.com/boombuler/barcode/qr" +) + +func main() { + // Create the barcode + qrCode, _ := qr.Encode("Hello World", qr.M, qr.Auto) + + // Scale the barcode to 200x200 pixels + qrCode, _ = barcode.Scale(qrCode, 200, 200) + + // create the output file + file, _ := os.Create("qrcode.png") + defer file.Close() + + // encode the barcode as png + png.Encode(file, qrCode) +} +``` + +## Documentation ## +See [GoDoc](https://godoc.org/github.com/boombuler/barcode) + +To create a barcode use the Encode function from one of the subpackages. 
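
The vendored `barcode` and `barcode/qr` packages that follow define the `Barcode` interface, its `Metadata`, and the QR encoder's `Encoding` and `ErrorCorrectionLevel` values. Purely as orientation, and not part of the vendored change itself, here is a minimal sketch of encoding with an explicit mode and inspecting the result; the payload string is arbitrary:

```go
package main

import (
	"fmt"

	"github.com/boombuler/barcode/qr"
)

func main() {
	// Encode an alphanumeric payload with the highest error-correction level.
	// Upper-case letters, digits, and spaces are all within the alphanumeric
	// character set handled by encodeAlphaNumeric.
	code, err := qr.Encode("HELLO WORLD 2018", qr.H, qr.AlphaNumeric)
	if err != nil {
		fmt.Println("encode failed:", err)
		return
	}

	// Every barcode reports its kind, dimensionality (2 for QR), and the
	// originally encoded content.
	meta := code.Metadata()
	fmt.Printf("kind=%q dimensions=%d content=%q\n",
		meta.CodeKind, meta.Dimensions, code.Content())

	// Rendering to an image would follow the vendored README above:
	// barcode.Scale(...) and then png.Encode(...).
}
```
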
diff --git a/vendor/github.com/boombuler/barcode/barcode.go b/vendor/github.com/boombuler/barcode/barcode.go new file mode 100644 index 00000000000..25f4a693db0 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/barcode.go @@ -0,0 +1,42 @@ +package barcode + +import "image" + +const ( + TypeAztec = "Aztec" + TypeCodabar = "Codabar" + TypeCode128 = "Code 128" + TypeCode39 = "Code 39" + TypeCode93 = "Code 93" + TypeDataMatrix = "DataMatrix" + TypeEAN8 = "EAN 8" + TypeEAN13 = "EAN 13" + TypePDF = "PDF417" + TypeQR = "QR Code" + Type2of5 = "2 of 5" + Type2of5Interleaved = "2 of 5 (interleaved)" +) + +// Contains some meta information about a barcode +type Metadata struct { + // the name of the barcode kind + CodeKind string + // contains 1 for 1D barcodes or 2 for 2D barcodes + Dimensions byte +} + +// a rendered and encoded barcode +type Barcode interface { + image.Image + // returns some meta information about the barcode + Metadata() Metadata + // the data that was encoded in this barcode + Content() string +} + +// Additional interface that some barcodes might implement to provide +// the value of its checksum. +type BarcodeIntCS interface { + Barcode + CheckSum() int +} diff --git a/vendor/github.com/boombuler/barcode/go.mod b/vendor/github.com/boombuler/barcode/go.mod new file mode 100644 index 00000000000..ed53593b928 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/go.mod @@ -0,0 +1 @@ +module github.com/boombuler/barcode diff --git a/vendor/github.com/boombuler/barcode/qr/alphanumeric.go b/vendor/github.com/boombuler/barcode/qr/alphanumeric.go new file mode 100644 index 00000000000..4ded7c8e030 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/alphanumeric.go @@ -0,0 +1,66 @@ +package qr + +import ( + "errors" + "fmt" + "strings" + + "github.com/boombuler/barcode/utils" +) + +const charSet string = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:" + +func stringToAlphaIdx(content string) <-chan int { + result := make(chan int) + go func() { + for _, r := range content { + idx := strings.IndexRune(charSet, r) + result <- idx + if idx < 0 { + break + } + } + close(result) + }() + + return result +} + +func encodeAlphaNumeric(content string, ecl ErrorCorrectionLevel) (*utils.BitList, *versionInfo, error) { + + contentLenIsOdd := len(content)%2 == 1 + contentBitCount := (len(content) / 2) * 11 + if contentLenIsOdd { + contentBitCount += 6 + } + vi := findSmallestVersionInfo(ecl, alphaNumericMode, contentBitCount) + if vi == nil { + return nil, nil, errors.New("To much data to encode") + } + + res := new(utils.BitList) + res.AddBits(int(alphaNumericMode), 4) + res.AddBits(len(content), vi.charCountBits(alphaNumericMode)) + + encoder := stringToAlphaIdx(content) + + for idx := 0; idx < len(content)/2; idx++ { + c1 := <-encoder + c2 := <-encoder + if c1 < 0 || c2 < 0 { + return nil, nil, fmt.Errorf("\"%s\" can not be encoded as %s", content, AlphaNumeric) + } + res.AddBits(c1*45+c2, 11) + } + if contentLenIsOdd { + c := <-encoder + if c < 0 { + return nil, nil, fmt.Errorf("\"%s\" can not be encoded as %s", content, AlphaNumeric) + } + res.AddBits(c, 6) + } + + addPaddingAndTerminator(res, vi) + + return res, vi, nil +} diff --git a/vendor/github.com/boombuler/barcode/qr/automatic.go b/vendor/github.com/boombuler/barcode/qr/automatic.go new file mode 100644 index 00000000000..e7c56013f16 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/automatic.go @@ 
-0,0 +1,23 @@ +package qr + +import ( + "fmt" + + "github.com/boombuler/barcode/utils" +) + +func encodeAuto(content string, ecl ErrorCorrectionLevel) (*utils.BitList, *versionInfo, error) { + bits, vi, _ := Numeric.getEncoder()(content, ecl) + if bits != nil && vi != nil { + return bits, vi, nil + } + bits, vi, _ = AlphaNumeric.getEncoder()(content, ecl) + if bits != nil && vi != nil { + return bits, vi, nil + } + bits, vi, _ = Unicode.getEncoder()(content, ecl) + if bits != nil && vi != nil { + return bits, vi, nil + } + return nil, nil, fmt.Errorf("No encoding found to encode \"%s\"", content) +} diff --git a/vendor/github.com/boombuler/barcode/qr/blocks.go b/vendor/github.com/boombuler/barcode/qr/blocks.go new file mode 100644 index 00000000000..d3173787f60 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/blocks.go @@ -0,0 +1,59 @@ +package qr + +type block struct { + data []byte + ecc []byte +} +type blockList []*block + +func splitToBlocks(data <-chan byte, vi *versionInfo) blockList { + result := make(blockList, vi.NumberOfBlocksInGroup1+vi.NumberOfBlocksInGroup2) + + for b := 0; b < int(vi.NumberOfBlocksInGroup1); b++ { + blk := new(block) + blk.data = make([]byte, vi.DataCodeWordsPerBlockInGroup1) + for cw := 0; cw < int(vi.DataCodeWordsPerBlockInGroup1); cw++ { + blk.data[cw] = <-data + } + blk.ecc = ec.calcECC(blk.data, vi.ErrorCorrectionCodewordsPerBlock) + result[b] = blk + } + + for b := 0; b < int(vi.NumberOfBlocksInGroup2); b++ { + blk := new(block) + blk.data = make([]byte, vi.DataCodeWordsPerBlockInGroup2) + for cw := 0; cw < int(vi.DataCodeWordsPerBlockInGroup2); cw++ { + blk.data[cw] = <-data + } + blk.ecc = ec.calcECC(blk.data, vi.ErrorCorrectionCodewordsPerBlock) + result[int(vi.NumberOfBlocksInGroup1)+b] = blk + } + + return result +} + +func (bl blockList) interleave(vi *versionInfo) []byte { + var maxCodewordCount int + if vi.DataCodeWordsPerBlockInGroup1 > vi.DataCodeWordsPerBlockInGroup2 { + maxCodewordCount = int(vi.DataCodeWordsPerBlockInGroup1) + } else { + maxCodewordCount = int(vi.DataCodeWordsPerBlockInGroup2) + } + resultLen := (vi.DataCodeWordsPerBlockInGroup1+vi.ErrorCorrectionCodewordsPerBlock)*vi.NumberOfBlocksInGroup1 + + (vi.DataCodeWordsPerBlockInGroup2+vi.ErrorCorrectionCodewordsPerBlock)*vi.NumberOfBlocksInGroup2 + + result := make([]byte, 0, resultLen) + for i := 0; i < maxCodewordCount; i++ { + for b := 0; b < len(bl); b++ { + if len(bl[b].data) > i { + result = append(result, bl[b].data[i]) + } + } + } + for i := 0; i < int(vi.ErrorCorrectionCodewordsPerBlock); i++ { + for b := 0; b < len(bl); b++ { + result = append(result, bl[b].ecc[i]) + } + } + return result +} diff --git a/vendor/github.com/boombuler/barcode/qr/encoder.go b/vendor/github.com/boombuler/barcode/qr/encoder.go new file mode 100644 index 00000000000..2c6ab2111ad --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/encoder.go @@ -0,0 +1,416 @@ +// Package qr can be used to create QR barcodes. +package qr + +import ( + "image" + + "github.com/boombuler/barcode" + "github.com/boombuler/barcode/utils" +) + +type encodeFn func(content string, eccLevel ErrorCorrectionLevel) (*utils.BitList, *versionInfo, error) + +// Encoding mode for QR Codes. 
+type Encoding byte + +const ( + // Auto will choose ths best matching encoding + Auto Encoding = iota + // Numeric encoding only encodes numbers [0-9] + Numeric + // AlphaNumeric encoding only encodes uppercase letters, numbers and [Space], $, %, *, +, -, ., /, : + AlphaNumeric + // Unicode encoding encodes the string as utf-8 + Unicode + // only for testing purpose + unknownEncoding +) + +func (e Encoding) getEncoder() encodeFn { + switch e { + case Auto: + return encodeAuto + case Numeric: + return encodeNumeric + case AlphaNumeric: + return encodeAlphaNumeric + case Unicode: + return encodeUnicode + } + return nil +} + +func (e Encoding) String() string { + switch e { + case Auto: + return "Auto" + case Numeric: + return "Numeric" + case AlphaNumeric: + return "AlphaNumeric" + case Unicode: + return "Unicode" + } + return "" +} + +// Encode returns a QR barcode with the given content, error correction level and uses the given encoding +func Encode(content string, level ErrorCorrectionLevel, mode Encoding) (barcode.Barcode, error) { + bits, vi, err := mode.getEncoder()(content, level) + if err != nil { + return nil, err + } + + blocks := splitToBlocks(bits.IterateBytes(), vi) + data := blocks.interleave(vi) + result := render(data, vi) + result.content = content + return result, nil +} + +func render(data []byte, vi *versionInfo) *qrcode { + dim := vi.modulWidth() + results := make([]*qrcode, 8) + for i := 0; i < 8; i++ { + results[i] = newBarcode(dim) + } + + occupied := newBarcode(dim) + + setAll := func(x int, y int, val bool) { + occupied.Set(x, y, true) + for i := 0; i < 8; i++ { + results[i].Set(x, y, val) + } + } + + drawFinderPatterns(vi, setAll) + drawAlignmentPatterns(occupied, vi, setAll) + + //Timing Pattern: + var i int + for i = 0; i < dim; i++ { + if !occupied.Get(i, 6) { + setAll(i, 6, i%2 == 0) + } + if !occupied.Get(6, i) { + setAll(6, i, i%2 == 0) + } + } + // Dark Module + setAll(8, dim-8, true) + + drawVersionInfo(vi, setAll) + drawFormatInfo(vi, -1, occupied.Set) + for i := 0; i < 8; i++ { + drawFormatInfo(vi, i, results[i].Set) + } + + // Write the data + var curBitNo int + + for pos := range iterateModules(occupied) { + var curBit bool + if curBitNo < len(data)*8 { + curBit = ((data[curBitNo/8] >> uint(7-(curBitNo%8))) & 1) == 1 + } else { + curBit = false + } + + for i := 0; i < 8; i++ { + setMasked(pos.X, pos.Y, curBit, i, results[i].Set) + } + curBitNo++ + } + + lowestPenalty := ^uint(0) + lowestPenaltyIdx := -1 + for i := 0; i < 8; i++ { + p := results[i].calcPenalty() + if p < lowestPenalty { + lowestPenalty = p + lowestPenaltyIdx = i + } + } + return results[lowestPenaltyIdx] +} + +func setMasked(x, y int, val bool, mask int, set func(int, int, bool)) { + switch mask { + case 0: + val = val != (((y + x) % 2) == 0) + break + case 1: + val = val != ((y % 2) == 0) + break + case 2: + val = val != ((x % 3) == 0) + break + case 3: + val = val != (((y + x) % 3) == 0) + break + case 4: + val = val != (((y/2 + x/3) % 2) == 0) + break + case 5: + val = val != (((y*x)%2)+((y*x)%3) == 0) + break + case 6: + val = val != ((((y*x)%2)+((y*x)%3))%2 == 0) + break + case 7: + val = val != ((((y+x)%2)+((y*x)%3))%2 == 0) + } + set(x, y, val) +} + +func iterateModules(occupied *qrcode) <-chan image.Point { + result := make(chan image.Point) + allPoints := make(chan image.Point) + go func() { + curX := occupied.dimension - 1 + curY := occupied.dimension - 1 + isUpward := true + + for true { + if isUpward { + allPoints <- image.Pt(curX, curY) + allPoints <- image.Pt(curX-1, 
curY) + curY-- + if curY < 0 { + curY = 0 + curX -= 2 + if curX == 6 { + curX-- + } + if curX < 0 { + break + } + isUpward = false + } + } else { + allPoints <- image.Pt(curX, curY) + allPoints <- image.Pt(curX-1, curY) + curY++ + if curY >= occupied.dimension { + curY = occupied.dimension - 1 + curX -= 2 + if curX == 6 { + curX-- + } + isUpward = true + if curX < 0 { + break + } + } + } + } + + close(allPoints) + }() + go func() { + for pt := range allPoints { + if !occupied.Get(pt.X, pt.Y) { + result <- pt + } + } + close(result) + }() + return result +} + +func drawFinderPatterns(vi *versionInfo, set func(int, int, bool)) { + dim := vi.modulWidth() + drawPattern := func(xoff int, yoff int) { + for x := -1; x < 8; x++ { + for y := -1; y < 8; y++ { + val := (x == 0 || x == 6 || y == 0 || y == 6 || (x > 1 && x < 5 && y > 1 && y < 5)) && (x <= 6 && y <= 6 && x >= 0 && y >= 0) + + if x+xoff >= 0 && x+xoff < dim && y+yoff >= 0 && y+yoff < dim { + set(x+xoff, y+yoff, val) + } + } + } + } + drawPattern(0, 0) + drawPattern(0, dim-7) + drawPattern(dim-7, 0) +} + +func drawAlignmentPatterns(occupied *qrcode, vi *versionInfo, set func(int, int, bool)) { + drawPattern := func(xoff int, yoff int) { + for x := -2; x <= 2; x++ { + for y := -2; y <= 2; y++ { + val := x == -2 || x == 2 || y == -2 || y == 2 || (x == 0 && y == 0) + set(x+xoff, y+yoff, val) + } + } + } + positions := vi.alignmentPatternPlacements() + + for _, x := range positions { + for _, y := range positions { + if occupied.Get(x, y) { + continue + } + drawPattern(x, y) + } + } +} + +var formatInfos = map[ErrorCorrectionLevel]map[int][]bool{ + L: { + 0: []bool{true, true, true, false, true, true, true, true, true, false, false, false, true, false, false}, + 1: []bool{true, true, true, false, false, true, false, true, true, true, true, false, false, true, true}, + 2: []bool{true, true, true, true, true, false, true, true, false, true, false, true, false, true, false}, + 3: []bool{true, true, true, true, false, false, false, true, false, false, true, true, true, false, true}, + 4: []bool{true, true, false, false, true, true, false, false, false, true, false, true, true, true, true}, + 5: []bool{true, true, false, false, false, true, true, false, false, false, true, true, false, false, false}, + 6: []bool{true, true, false, true, true, false, false, false, true, false, false, false, false, false, true}, + 7: []bool{true, true, false, true, false, false, true, false, true, true, true, false, true, true, false}, + }, + M: { + 0: []bool{true, false, true, false, true, false, false, false, false, false, true, false, false, true, false}, + 1: []bool{true, false, true, false, false, false, true, false, false, true, false, false, true, false, true}, + 2: []bool{true, false, true, true, true, true, false, false, true, true, true, true, true, false, false}, + 3: []bool{true, false, true, true, false, true, true, false, true, false, false, true, false, true, true}, + 4: []bool{true, false, false, false, true, false, true, true, true, true, true, true, false, false, true}, + 5: []bool{true, false, false, false, false, false, false, true, true, false, false, true, true, true, false}, + 6: []bool{true, false, false, true, true, true, true, true, false, false, true, false, true, true, true}, + 7: []bool{true, false, false, true, false, true, false, true, false, true, false, false, false, false, false}, + }, + Q: { + 0: []bool{false, true, true, false, true, false, true, false, true, false, true, true, true, true, true}, + 1: []bool{false, true, true, 
false, false, false, false, false, true, true, false, true, false, false, false}, + 2: []bool{false, true, true, true, true, true, true, false, false, true, true, false, false, false, true}, + 3: []bool{false, true, true, true, false, true, false, false, false, false, false, false, true, true, false}, + 4: []bool{false, true, false, false, true, false, false, true, false, true, true, false, true, false, false}, + 5: []bool{false, true, false, false, false, false, true, true, false, false, false, false, false, true, true}, + 6: []bool{false, true, false, true, true, true, false, true, true, false, true, true, false, true, false}, + 7: []bool{false, true, false, true, false, true, true, true, true, true, false, true, true, false, true}, + }, + H: { + 0: []bool{false, false, true, false, true, true, false, true, false, false, false, true, false, false, true}, + 1: []bool{false, false, true, false, false, true, true, true, false, true, true, true, true, true, false}, + 2: []bool{false, false, true, true, true, false, false, true, true, true, false, false, true, true, true}, + 3: []bool{false, false, true, true, false, false, true, true, true, false, true, false, false, false, false}, + 4: []bool{false, false, false, false, true, true, true, false, true, true, false, false, false, true, false}, + 5: []bool{false, false, false, false, false, true, false, false, true, false, true, false, true, false, true}, + 6: []bool{false, false, false, true, true, false, true, false, false, false, false, true, true, false, false}, + 7: []bool{false, false, false, true, false, false, false, false, false, true, true, true, false, true, true}, + }, +} + +func drawFormatInfo(vi *versionInfo, usedMask int, set func(int, int, bool)) { + var formatInfo []bool + + if usedMask == -1 { + formatInfo = []bool{true, true, true, true, true, true, true, true, true, true, true, true, true, true, true} // Set all to true cause -1 --> occupied mask. 
+ } else { + formatInfo = formatInfos[vi.Level][usedMask] + } + + if len(formatInfo) == 15 { + dim := vi.modulWidth() + set(0, 8, formatInfo[0]) + set(1, 8, formatInfo[1]) + set(2, 8, formatInfo[2]) + set(3, 8, formatInfo[3]) + set(4, 8, formatInfo[4]) + set(5, 8, formatInfo[5]) + set(7, 8, formatInfo[6]) + set(8, 8, formatInfo[7]) + set(8, 7, formatInfo[8]) + set(8, 5, formatInfo[9]) + set(8, 4, formatInfo[10]) + set(8, 3, formatInfo[11]) + set(8, 2, formatInfo[12]) + set(8, 1, formatInfo[13]) + set(8, 0, formatInfo[14]) + + set(8, dim-1, formatInfo[0]) + set(8, dim-2, formatInfo[1]) + set(8, dim-3, formatInfo[2]) + set(8, dim-4, formatInfo[3]) + set(8, dim-5, formatInfo[4]) + set(8, dim-6, formatInfo[5]) + set(8, dim-7, formatInfo[6]) + set(dim-8, 8, formatInfo[7]) + set(dim-7, 8, formatInfo[8]) + set(dim-6, 8, formatInfo[9]) + set(dim-5, 8, formatInfo[10]) + set(dim-4, 8, formatInfo[11]) + set(dim-3, 8, formatInfo[12]) + set(dim-2, 8, formatInfo[13]) + set(dim-1, 8, formatInfo[14]) + } +} + +var versionInfoBitsByVersion = map[byte][]bool{ + 7: []bool{false, false, false, true, true, true, true, true, false, false, true, false, false, true, false, true, false, false}, + 8: []bool{false, false, true, false, false, false, false, true, false, true, true, false, true, true, true, true, false, false}, + 9: []bool{false, false, true, false, false, true, true, false, true, false, true, false, false, true, true, false, false, true}, + 10: []bool{false, false, true, false, true, false, false, true, false, false, true, true, false, true, false, false, true, true}, + 11: []bool{false, false, true, false, true, true, true, false, true, true, true, true, true, true, false, true, true, false}, + 12: []bool{false, false, true, true, false, false, false, true, true, true, false, true, true, false, false, false, true, false}, + 13: []bool{false, false, true, true, false, true, true, false, false, false, false, true, false, false, false, true, true, true}, + 14: []bool{false, false, true, true, true, false, false, true, true, false, false, false, false, false, true, true, false, true}, + 15: []bool{false, false, true, true, true, true, true, false, false, true, false, false, true, false, true, false, false, false}, + 16: []bool{false, true, false, false, false, false, true, false, true, true, false, true, true, true, true, false, false, false}, + 17: []bool{false, true, false, false, false, true, false, true, false, false, false, true, false, true, true, true, false, true}, + 18: []bool{false, true, false, false, true, false, true, false, true, false, false, false, false, true, false, true, true, true}, + 19: []bool{false, true, false, false, true, true, false, true, false, true, false, false, true, true, false, false, true, false}, + 20: []bool{false, true, false, true, false, false, true, false, false, true, true, false, true, false, false, true, true, false}, + 21: []bool{false, true, false, true, false, true, false, true, true, false, true, false, false, false, false, false, true, true}, + 22: []bool{false, true, false, true, true, false, true, false, false, false, true, true, false, false, true, false, false, true}, + 23: []bool{false, true, false, true, true, true, false, true, true, true, true, true, true, false, true, true, false, false}, + 24: []bool{false, true, true, false, false, false, true, true, true, false, true, true, false, false, false, true, false, false}, + 25: []bool{false, true, true, false, false, true, false, false, false, true, true, true, true, false, false, false, false, true}, 
+ 26: []bool{false, true, true, false, true, false, true, true, true, true, true, false, true, false, true, false, true, true}, + 27: []bool{false, true, true, false, true, true, false, false, false, false, true, false, false, false, true, true, true, false}, + 28: []bool{false, true, true, true, false, false, true, true, false, false, false, false, false, true, true, false, true, false}, + 29: []bool{false, true, true, true, false, true, false, false, true, true, false, false, true, true, true, true, true, true}, + 30: []bool{false, true, true, true, true, false, true, true, false, true, false, true, true, true, false, true, false, true}, + 31: []bool{false, true, true, true, true, true, false, false, true, false, false, true, false, true, false, false, false, false}, + 32: []bool{true, false, false, false, false, false, true, false, false, true, true, true, false, true, false, true, false, true}, + 33: []bool{true, false, false, false, false, true, false, true, true, false, true, true, true, true, false, false, false, false}, + 34: []bool{true, false, false, false, true, false, true, false, false, false, true, false, true, true, true, false, true, false}, + 35: []bool{true, false, false, false, true, true, false, true, true, true, true, false, false, true, true, true, true, true}, + 36: []bool{true, false, false, true, false, false, true, false, true, true, false, false, false, false, true, false, true, true}, + 37: []bool{true, false, false, true, false, true, false, true, false, false, false, false, true, false, true, true, true, false}, + 38: []bool{true, false, false, true, true, false, true, false, true, false, false, true, true, false, false, true, false, false}, + 39: []bool{true, false, false, true, true, true, false, true, false, true, false, true, false, false, false, false, false, true}, + 40: []bool{true, false, true, false, false, false, true, true, false, false, false, true, true, false, true, false, false, true}, +} + +func drawVersionInfo(vi *versionInfo, set func(int, int, bool)) { + versionInfoBits, ok := versionInfoBitsByVersion[vi.Version] + + if ok && len(versionInfoBits) > 0 { + for i := 0; i < len(versionInfoBits); i++ { + x := (vi.modulWidth() - 11) + i%3 + y := i / 3 + set(x, y, versionInfoBits[len(versionInfoBits)-i-1]) + set(y, x, versionInfoBits[len(versionInfoBits)-i-1]) + } + } + +} + +func addPaddingAndTerminator(bl *utils.BitList, vi *versionInfo) { + for i := 0; i < 4 && bl.Len() < vi.totalDataBytes()*8; i++ { + bl.AddBit(false) + } + + for bl.Len()%8 != 0 { + bl.AddBit(false) + } + + for i := 0; bl.Len() < vi.totalDataBytes()*8; i++ { + if i%2 == 0 { + bl.AddByte(236) + } else { + bl.AddByte(17) + } + } +} diff --git a/vendor/github.com/boombuler/barcode/qr/errorcorrection.go b/vendor/github.com/boombuler/barcode/qr/errorcorrection.go new file mode 100644 index 00000000000..08ebf0ce62e --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/errorcorrection.go @@ -0,0 +1,29 @@ +package qr + +import ( + "github.com/boombuler/barcode/utils" +) + +type errorCorrection struct { + rs *utils.ReedSolomonEncoder +} + +var ec = newErrorCorrection() + +func newErrorCorrection() *errorCorrection { + fld := utils.NewGaloisField(285, 256, 0) + return &errorCorrection{utils.NewReedSolomonEncoder(fld)} +} + +func (ec *errorCorrection) calcECC(data []byte, eccCount byte) []byte { + dataInts := make([]int, len(data)) + for i := 0; i < len(data); i++ { + dataInts[i] = int(data[i]) + } + res := ec.rs.Encode(dataInts, int(eccCount)) + 
result := make([]byte, len(res)) + for i := 0; i < len(res); i++ { + result[i] = byte(res[i]) + } + return result +} diff --git a/vendor/github.com/boombuler/barcode/qr/numeric.go b/vendor/github.com/boombuler/barcode/qr/numeric.go new file mode 100644 index 00000000000..49b44cc45d3 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/numeric.go @@ -0,0 +1,56 @@ +package qr + +import ( + "errors" + "fmt" + "strconv" + + "github.com/boombuler/barcode/utils" +) + +func encodeNumeric(content string, ecl ErrorCorrectionLevel) (*utils.BitList, *versionInfo, error) { + contentBitCount := (len(content) / 3) * 10 + switch len(content) % 3 { + case 1: + contentBitCount += 4 + case 2: + contentBitCount += 7 + } + vi := findSmallestVersionInfo(ecl, numericMode, contentBitCount) + if vi == nil { + return nil, nil, errors.New("To much data to encode") + } + res := new(utils.BitList) + res.AddBits(int(numericMode), 4) + res.AddBits(len(content), vi.charCountBits(numericMode)) + + for pos := 0; pos < len(content); pos += 3 { + var curStr string + if pos+3 <= len(content) { + curStr = content[pos : pos+3] + } else { + curStr = content[pos:] + } + + i, err := strconv.Atoi(curStr) + if err != nil || i < 0 { + return nil, nil, fmt.Errorf("\"%s\" can not be encoded as %s", content, Numeric) + } + var bitCnt byte + switch len(curStr) % 3 { + case 0: + bitCnt = 10 + case 1: + bitCnt = 4 + break + case 2: + bitCnt = 7 + break + } + + res.AddBits(i, bitCnt) + } + + addPaddingAndTerminator(res, vi) + return res, vi, nil +} diff --git a/vendor/github.com/boombuler/barcode/qr/qrcode.go b/vendor/github.com/boombuler/barcode/qr/qrcode.go new file mode 100644 index 00000000000..13607604bb8 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/qrcode.go @@ -0,0 +1,166 @@ +package qr + +import ( + "image" + "image/color" + "math" + + "github.com/boombuler/barcode" + "github.com/boombuler/barcode/utils" +) + +type qrcode struct { + dimension int + data *utils.BitList + content string +} + +func (qr *qrcode) Content() string { + return qr.content +} + +func (qr *qrcode) Metadata() barcode.Metadata { + return barcode.Metadata{barcode.TypeQR, 2} +} + +func (qr *qrcode) ColorModel() color.Model { + return color.Gray16Model +} + +func (qr *qrcode) Bounds() image.Rectangle { + return image.Rect(0, 0, qr.dimension, qr.dimension) +} + +func (qr *qrcode) At(x, y int) color.Color { + if qr.Get(x, y) { + return color.Black + } + return color.White +} + +func (qr *qrcode) Get(x, y int) bool { + return qr.data.GetBit(x*qr.dimension + y) +} + +func (qr *qrcode) Set(x, y int, val bool) { + qr.data.SetBit(x*qr.dimension+y, val) +} + +func (qr *qrcode) calcPenalty() uint { + return qr.calcPenaltyRule1() + qr.calcPenaltyRule2() + qr.calcPenaltyRule3() + qr.calcPenaltyRule4() +} + +func (qr *qrcode) calcPenaltyRule1() uint { + var result uint + for x := 0; x < qr.dimension; x++ { + checkForX := false + var cntX uint + checkForY := false + var cntY uint + + for y := 0; y < qr.dimension; y++ { + if qr.Get(x, y) == checkForX { + cntX++ + } else { + checkForX = !checkForX + if cntX >= 5 { + result += cntX - 2 + } + cntX = 1 + } + + if qr.Get(y, x) == checkForY { + cntY++ + } else { + checkForY = !checkForY + if cntY >= 5 { + result += cntY - 2 + } + cntY = 1 + } + } + + if cntX >= 5 { + result += cntX - 2 + } + if cntY >= 5 { + result += cntY - 2 + } + } + + return result +} + +func (qr *qrcode) calcPenaltyRule2() uint { + var result uint + for x := 0; x < 
qr.dimension-1; x++ { + for y := 0; y < qr.dimension-1; y++ { + check := qr.Get(x, y) + if qr.Get(x, y+1) == check && qr.Get(x+1, y) == check && qr.Get(x+1, y+1) == check { + result += 3 + } + } + } + return result +} + +func (qr *qrcode) calcPenaltyRule3() uint { + pattern1 := []bool{true, false, true, true, true, false, true, false, false, false, false} + pattern2 := []bool{false, false, false, false, true, false, true, true, true, false, true} + + var result uint + for x := 0; x <= qr.dimension-len(pattern1); x++ { + for y := 0; y < qr.dimension; y++ { + pattern1XFound := true + pattern2XFound := true + pattern1YFound := true + pattern2YFound := true + + for i := 0; i < len(pattern1); i++ { + iv := qr.Get(x+i, y) + if iv != pattern1[i] { + pattern1XFound = false + } + if iv != pattern2[i] { + pattern2XFound = false + } + iv = qr.Get(y, x+i) + if iv != pattern1[i] { + pattern1YFound = false + } + if iv != pattern2[i] { + pattern2YFound = false + } + } + if pattern1XFound || pattern2XFound { + result += 40 + } + if pattern1YFound || pattern2YFound { + result += 40 + } + } + } + + return result +} + +func (qr *qrcode) calcPenaltyRule4() uint { + totalNum := qr.data.Len() + trueCnt := 0 + for i := 0; i < totalNum; i++ { + if qr.data.GetBit(i) { + trueCnt++ + } + } + percDark := float64(trueCnt) * 100 / float64(totalNum) + floor := math.Abs(math.Floor(percDark/5) - 10) + ceil := math.Abs(math.Ceil(percDark/5) - 10) + return uint(math.Min(floor, ceil) * 10) +} + +func newBarcode(dim int) *qrcode { + res := new(qrcode) + res.dimension = dim + res.data = utils.NewBitList(dim * dim) + return res +} diff --git a/vendor/github.com/boombuler/barcode/qr/unicode.go b/vendor/github.com/boombuler/barcode/qr/unicode.go new file mode 100644 index 00000000000..a9135ab6d96 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/unicode.go @@ -0,0 +1,27 @@ +package qr + +import ( + "errors" + + "github.com/boombuler/barcode/utils" +) + +func encodeUnicode(content string, ecl ErrorCorrectionLevel) (*utils.BitList, *versionInfo, error) { + data := []byte(content) + + vi := findSmallestVersionInfo(ecl, byteMode, len(data)*8) + if vi == nil { + return nil, nil, errors.New("To much data to encode") + } + + // It's not correct to add the unicode bytes to the result directly but most readers can't handle the + // required ECI header... 
+ res := new(utils.BitList) + res.AddBits(int(byteMode), 4) + res.AddBits(len(content), vi.charCountBits(byteMode)) + for _, b := range data { + res.AddByte(b) + } + addPaddingAndTerminator(res, vi) + return res, vi, nil +} diff --git a/vendor/github.com/boombuler/barcode/qr/versioninfo.go b/vendor/github.com/boombuler/barcode/qr/versioninfo.go new file mode 100644 index 00000000000..6852a5766ef --- /dev/null +++ b/vendor/github.com/boombuler/barcode/qr/versioninfo.go @@ -0,0 +1,310 @@ +package qr + +import "math" + +// ErrorCorrectionLevel indicates the amount of "backup data" stored in the QR code +type ErrorCorrectionLevel byte + +const ( + // L recovers 7% of data + L ErrorCorrectionLevel = iota + // M recovers 15% of data + M + // Q recovers 25% of data + Q + // H recovers 30% of data + H +) + +func (ecl ErrorCorrectionLevel) String() string { + switch ecl { + case L: + return "L" + case M: + return "M" + case Q: + return "Q" + case H: + return "H" + } + return "unknown" +} + +type encodingMode byte + +const ( + numericMode encodingMode = 1 + alphaNumericMode encodingMode = 2 + byteMode encodingMode = 4 + kanjiMode encodingMode = 8 +) + +type versionInfo struct { + Version byte + Level ErrorCorrectionLevel + ErrorCorrectionCodewordsPerBlock byte + NumberOfBlocksInGroup1 byte + DataCodeWordsPerBlockInGroup1 byte + NumberOfBlocksInGroup2 byte + DataCodeWordsPerBlockInGroup2 byte +} + +var versionInfos = []*versionInfo{ + &versionInfo{1, L, 7, 1, 19, 0, 0}, + &versionInfo{1, M, 10, 1, 16, 0, 0}, + &versionInfo{1, Q, 13, 1, 13, 0, 0}, + &versionInfo{1, H, 17, 1, 9, 0, 0}, + &versionInfo{2, L, 10, 1, 34, 0, 0}, + &versionInfo{2, M, 16, 1, 28, 0, 0}, + &versionInfo{2, Q, 22, 1, 22, 0, 0}, + &versionInfo{2, H, 28, 1, 16, 0, 0}, + &versionInfo{3, L, 15, 1, 55, 0, 0}, + &versionInfo{3, M, 26, 1, 44, 0, 0}, + &versionInfo{3, Q, 18, 2, 17, 0, 0}, + &versionInfo{3, H, 22, 2, 13, 0, 0}, + &versionInfo{4, L, 20, 1, 80, 0, 0}, + &versionInfo{4, M, 18, 2, 32, 0, 0}, + &versionInfo{4, Q, 26, 2, 24, 0, 0}, + &versionInfo{4, H, 16, 4, 9, 0, 0}, + &versionInfo{5, L, 26, 1, 108, 0, 0}, + &versionInfo{5, M, 24, 2, 43, 0, 0}, + &versionInfo{5, Q, 18, 2, 15, 2, 16}, + &versionInfo{5, H, 22, 2, 11, 2, 12}, + &versionInfo{6, L, 18, 2, 68, 0, 0}, + &versionInfo{6, M, 16, 4, 27, 0, 0}, + &versionInfo{6, Q, 24, 4, 19, 0, 0}, + &versionInfo{6, H, 28, 4, 15, 0, 0}, + &versionInfo{7, L, 20, 2, 78, 0, 0}, + &versionInfo{7, M, 18, 4, 31, 0, 0}, + &versionInfo{7, Q, 18, 2, 14, 4, 15}, + &versionInfo{7, H, 26, 4, 13, 1, 14}, + &versionInfo{8, L, 24, 2, 97, 0, 0}, + &versionInfo{8, M, 22, 2, 38, 2, 39}, + &versionInfo{8, Q, 22, 4, 18, 2, 19}, + &versionInfo{8, H, 26, 4, 14, 2, 15}, + &versionInfo{9, L, 30, 2, 116, 0, 0}, + &versionInfo{9, M, 22, 3, 36, 2, 37}, + &versionInfo{9, Q, 20, 4, 16, 4, 17}, + &versionInfo{9, H, 24, 4, 12, 4, 13}, + &versionInfo{10, L, 18, 2, 68, 2, 69}, + &versionInfo{10, M, 26, 4, 43, 1, 44}, + &versionInfo{10, Q, 24, 6, 19, 2, 20}, + &versionInfo{10, H, 28, 6, 15, 2, 16}, + &versionInfo{11, L, 20, 4, 81, 0, 0}, + &versionInfo{11, M, 30, 1, 50, 4, 51}, + &versionInfo{11, Q, 28, 4, 22, 4, 23}, + &versionInfo{11, H, 24, 3, 12, 8, 13}, + &versionInfo{12, L, 24, 2, 92, 2, 93}, + &versionInfo{12, M, 22, 6, 36, 2, 37}, + &versionInfo{12, Q, 26, 4, 20, 6, 21}, + &versionInfo{12, H, 28, 7, 14, 4, 15}, + &versionInfo{13, L, 26, 4, 107, 0, 0}, + &versionInfo{13, M, 22, 8, 37, 1, 38}, + &versionInfo{13, Q, 24, 8, 20, 4, 21}, + &versionInfo{13, H, 22, 12, 11, 4, 12}, + 
&versionInfo{14, L, 30, 3, 115, 1, 116}, + &versionInfo{14, M, 24, 4, 40, 5, 41}, + &versionInfo{14, Q, 20, 11, 16, 5, 17}, + &versionInfo{14, H, 24, 11, 12, 5, 13}, + &versionInfo{15, L, 22, 5, 87, 1, 88}, + &versionInfo{15, M, 24, 5, 41, 5, 42}, + &versionInfo{15, Q, 30, 5, 24, 7, 25}, + &versionInfo{15, H, 24, 11, 12, 7, 13}, + &versionInfo{16, L, 24, 5, 98, 1, 99}, + &versionInfo{16, M, 28, 7, 45, 3, 46}, + &versionInfo{16, Q, 24, 15, 19, 2, 20}, + &versionInfo{16, H, 30, 3, 15, 13, 16}, + &versionInfo{17, L, 28, 1, 107, 5, 108}, + &versionInfo{17, M, 28, 10, 46, 1, 47}, + &versionInfo{17, Q, 28, 1, 22, 15, 23}, + &versionInfo{17, H, 28, 2, 14, 17, 15}, + &versionInfo{18, L, 30, 5, 120, 1, 121}, + &versionInfo{18, M, 26, 9, 43, 4, 44}, + &versionInfo{18, Q, 28, 17, 22, 1, 23}, + &versionInfo{18, H, 28, 2, 14, 19, 15}, + &versionInfo{19, L, 28, 3, 113, 4, 114}, + &versionInfo{19, M, 26, 3, 44, 11, 45}, + &versionInfo{19, Q, 26, 17, 21, 4, 22}, + &versionInfo{19, H, 26, 9, 13, 16, 14}, + &versionInfo{20, L, 28, 3, 107, 5, 108}, + &versionInfo{20, M, 26, 3, 41, 13, 42}, + &versionInfo{20, Q, 30, 15, 24, 5, 25}, + &versionInfo{20, H, 28, 15, 15, 10, 16}, + &versionInfo{21, L, 28, 4, 116, 4, 117}, + &versionInfo{21, M, 26, 17, 42, 0, 0}, + &versionInfo{21, Q, 28, 17, 22, 6, 23}, + &versionInfo{21, H, 30, 19, 16, 6, 17}, + &versionInfo{22, L, 28, 2, 111, 7, 112}, + &versionInfo{22, M, 28, 17, 46, 0, 0}, + &versionInfo{22, Q, 30, 7, 24, 16, 25}, + &versionInfo{22, H, 24, 34, 13, 0, 0}, + &versionInfo{23, L, 30, 4, 121, 5, 122}, + &versionInfo{23, M, 28, 4, 47, 14, 48}, + &versionInfo{23, Q, 30, 11, 24, 14, 25}, + &versionInfo{23, H, 30, 16, 15, 14, 16}, + &versionInfo{24, L, 30, 6, 117, 4, 118}, + &versionInfo{24, M, 28, 6, 45, 14, 46}, + &versionInfo{24, Q, 30, 11, 24, 16, 25}, + &versionInfo{24, H, 30, 30, 16, 2, 17}, + &versionInfo{25, L, 26, 8, 106, 4, 107}, + &versionInfo{25, M, 28, 8, 47, 13, 48}, + &versionInfo{25, Q, 30, 7, 24, 22, 25}, + &versionInfo{25, H, 30, 22, 15, 13, 16}, + &versionInfo{26, L, 28, 10, 114, 2, 115}, + &versionInfo{26, M, 28, 19, 46, 4, 47}, + &versionInfo{26, Q, 28, 28, 22, 6, 23}, + &versionInfo{26, H, 30, 33, 16, 4, 17}, + &versionInfo{27, L, 30, 8, 122, 4, 123}, + &versionInfo{27, M, 28, 22, 45, 3, 46}, + &versionInfo{27, Q, 30, 8, 23, 26, 24}, + &versionInfo{27, H, 30, 12, 15, 28, 16}, + &versionInfo{28, L, 30, 3, 117, 10, 118}, + &versionInfo{28, M, 28, 3, 45, 23, 46}, + &versionInfo{28, Q, 30, 4, 24, 31, 25}, + &versionInfo{28, H, 30, 11, 15, 31, 16}, + &versionInfo{29, L, 30, 7, 116, 7, 117}, + &versionInfo{29, M, 28, 21, 45, 7, 46}, + &versionInfo{29, Q, 30, 1, 23, 37, 24}, + &versionInfo{29, H, 30, 19, 15, 26, 16}, + &versionInfo{30, L, 30, 5, 115, 10, 116}, + &versionInfo{30, M, 28, 19, 47, 10, 48}, + &versionInfo{30, Q, 30, 15, 24, 25, 25}, + &versionInfo{30, H, 30, 23, 15, 25, 16}, + &versionInfo{31, L, 30, 13, 115, 3, 116}, + &versionInfo{31, M, 28, 2, 46, 29, 47}, + &versionInfo{31, Q, 30, 42, 24, 1, 25}, + &versionInfo{31, H, 30, 23, 15, 28, 16}, + &versionInfo{32, L, 30, 17, 115, 0, 0}, + &versionInfo{32, M, 28, 10, 46, 23, 47}, + &versionInfo{32, Q, 30, 10, 24, 35, 25}, + &versionInfo{32, H, 30, 19, 15, 35, 16}, + &versionInfo{33, L, 30, 17, 115, 1, 116}, + &versionInfo{33, M, 28, 14, 46, 21, 47}, + &versionInfo{33, Q, 30, 29, 24, 19, 25}, + &versionInfo{33, H, 30, 11, 15, 46, 16}, + &versionInfo{34, L, 30, 13, 115, 6, 116}, + &versionInfo{34, M, 28, 14, 46, 23, 47}, + &versionInfo{34, Q, 30, 44, 24, 7, 25}, + &versionInfo{34, H, 30, 59, 16, 1, 
17}, + &versionInfo{35, L, 30, 12, 121, 7, 122}, + &versionInfo{35, M, 28, 12, 47, 26, 48}, + &versionInfo{35, Q, 30, 39, 24, 14, 25}, + &versionInfo{35, H, 30, 22, 15, 41, 16}, + &versionInfo{36, L, 30, 6, 121, 14, 122}, + &versionInfo{36, M, 28, 6, 47, 34, 48}, + &versionInfo{36, Q, 30, 46, 24, 10, 25}, + &versionInfo{36, H, 30, 2, 15, 64, 16}, + &versionInfo{37, L, 30, 17, 122, 4, 123}, + &versionInfo{37, M, 28, 29, 46, 14, 47}, + &versionInfo{37, Q, 30, 49, 24, 10, 25}, + &versionInfo{37, H, 30, 24, 15, 46, 16}, + &versionInfo{38, L, 30, 4, 122, 18, 123}, + &versionInfo{38, M, 28, 13, 46, 32, 47}, + &versionInfo{38, Q, 30, 48, 24, 14, 25}, + &versionInfo{38, H, 30, 42, 15, 32, 16}, + &versionInfo{39, L, 30, 20, 117, 4, 118}, + &versionInfo{39, M, 28, 40, 47, 7, 48}, + &versionInfo{39, Q, 30, 43, 24, 22, 25}, + &versionInfo{39, H, 30, 10, 15, 67, 16}, + &versionInfo{40, L, 30, 19, 118, 6, 119}, + &versionInfo{40, M, 28, 18, 47, 31, 48}, + &versionInfo{40, Q, 30, 34, 24, 34, 25}, + &versionInfo{40, H, 30, 20, 15, 61, 16}, +} + +func (vi *versionInfo) totalDataBytes() int { + g1Data := int(vi.NumberOfBlocksInGroup1) * int(vi.DataCodeWordsPerBlockInGroup1) + g2Data := int(vi.NumberOfBlocksInGroup2) * int(vi.DataCodeWordsPerBlockInGroup2) + return (g1Data + g2Data) +} + +func (vi *versionInfo) charCountBits(m encodingMode) byte { + switch m { + case numericMode: + if vi.Version < 10 { + return 10 + } else if vi.Version < 27 { + return 12 + } + return 14 + + case alphaNumericMode: + if vi.Version < 10 { + return 9 + } else if vi.Version < 27 { + return 11 + } + return 13 + + case byteMode: + if vi.Version < 10 { + return 8 + } + return 16 + + case kanjiMode: + if vi.Version < 10 { + return 8 + } else if vi.Version < 27 { + return 10 + } + return 12 + default: + return 0 + } +} + +func (vi *versionInfo) modulWidth() int { + return ((int(vi.Version) - 1) * 4) + 21 +} + +func (vi *versionInfo) alignmentPatternPlacements() []int { + if vi.Version == 1 { + return make([]int, 0) + } + + first := 6 + last := vi.modulWidth() - 7 + space := float64(last - first) + count := int(math.Ceil(space/28)) + 1 + + result := make([]int, count) + result[0] = first + result[len(result)-1] = last + if count > 2 { + step := int(math.Ceil(float64(last-first) / float64(count-1))) + if step%2 == 1 { + frac := float64(last-first) / float64(count-1) + _, x := math.Modf(frac) + if x >= 0.5 { + frac = math.Ceil(frac) + } else { + frac = math.Floor(frac) + } + + if int(frac)%2 == 0 { + step-- + } else { + step++ + } + } + + for i := 1; i <= count-2; i++ { + result[i] = last - (step * (count - 1 - i)) + } + } + + return result +} + +func findSmallestVersionInfo(ecl ErrorCorrectionLevel, mode encodingMode, dataBits int) *versionInfo { + dataBits = dataBits + 4 // mode indicator + for _, vi := range versionInfos { + if vi.Level == ecl { + if (vi.totalDataBytes() * 8) >= (dataBits + int(vi.charCountBits(mode))) { + return vi + } + } + } + return nil +} diff --git a/vendor/github.com/boombuler/barcode/scaledbarcode.go b/vendor/github.com/boombuler/barcode/scaledbarcode.go new file mode 100644 index 00000000000..152b1801748 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/scaledbarcode.go @@ -0,0 +1,134 @@ +package barcode + +import ( + "errors" + "fmt" + "image" + "image/color" + "math" +) + +type wrapFunc func(x, y int) color.Color + +type scaledBarcode struct { + wrapped Barcode + wrapperFunc wrapFunc + rect image.Rectangle +} + +type intCSscaledBC struct { + scaledBarcode +} + +func (bc 
*scaledBarcode) Content() string { + return bc.wrapped.Content() +} + +func (bc *scaledBarcode) Metadata() Metadata { + return bc.wrapped.Metadata() +} + +func (bc *scaledBarcode) ColorModel() color.Model { + return bc.wrapped.ColorModel() +} + +func (bc *scaledBarcode) Bounds() image.Rectangle { + return bc.rect +} + +func (bc *scaledBarcode) At(x, y int) color.Color { + return bc.wrapperFunc(x, y) +} + +func (bc *intCSscaledBC) CheckSum() int { + if cs, ok := bc.wrapped.(BarcodeIntCS); ok { + return cs.CheckSum() + } + return 0 +} + +// Scale returns a resized barcode with the given width and height. +func Scale(bc Barcode, width, height int) (Barcode, error) { + switch bc.Metadata().Dimensions { + case 1: + return scale1DCode(bc, width, height) + case 2: + return scale2DCode(bc, width, height) + } + + return nil, errors.New("unsupported barcode format") +} + +func newScaledBC(wrapped Barcode, wrapperFunc wrapFunc, rect image.Rectangle) Barcode { + result := &scaledBarcode{ + wrapped: wrapped, + wrapperFunc: wrapperFunc, + rect: rect, + } + + if _, ok := wrapped.(BarcodeIntCS); ok { + return &intCSscaledBC{*result} + } + return result +} + +func scale2DCode(bc Barcode, width, height int) (Barcode, error) { + orgBounds := bc.Bounds() + orgWidth := orgBounds.Max.X - orgBounds.Min.X + orgHeight := orgBounds.Max.Y - orgBounds.Min.Y + + factor := int(math.Min(float64(width)/float64(orgWidth), float64(height)/float64(orgHeight))) + if factor <= 0 { + return nil, fmt.Errorf("can not scale barcode to an image smaller than %dx%d", orgWidth, orgHeight) + } + + offsetX := (width - (orgWidth * factor)) / 2 + offsetY := (height - (orgHeight * factor)) / 2 + + wrap := func(x, y int) color.Color { + if x < offsetX || y < offsetY { + return color.White + } + x = (x - offsetX) / factor + y = (y - offsetY) / factor + if x >= orgWidth || y >= orgHeight { + return color.White + } + return bc.At(x, y) + } + + return newScaledBC( + bc, + wrap, + image.Rect(0, 0, width, height), + ), nil +} + +func scale1DCode(bc Barcode, width, height int) (Barcode, error) { + orgBounds := bc.Bounds() + orgWidth := orgBounds.Max.X - orgBounds.Min.X + factor := int(float64(width) / float64(orgWidth)) + + if factor <= 0 { + return nil, fmt.Errorf("can not scale barcode to an image smaller than %dx1", orgWidth) + } + offsetX := (width - (orgWidth * factor)) / 2 + + wrap := func(x, y int) color.Color { + if x < offsetX { + return color.White + } + x = (x - offsetX) / factor + + if x >= orgWidth { + return color.White + } + return bc.At(x, 0) + } + + return newScaledBC( + bc, + wrap, + image.Rect(0, 0, width, height), + ), nil +} diff --git a/vendor/github.com/boombuler/barcode/utils/base1dcode.go b/vendor/github.com/boombuler/barcode/utils/base1dcode.go new file mode 100644 index 00000000000..75e50048c64 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/utils/base1dcode.go @@ -0,0 +1,57 @@ +// Package utils contain some utilities which are needed to create barcodes +package utils + +import ( + "image" + "image/color" + + "github.com/boombuler/barcode" +) + +type base1DCode struct { + *BitList + kind string + content string +} + +type base1DCodeIntCS struct { + base1DCode + checksum int +} + +func (c *base1DCode) Content() string { + return c.content +} + +func (c *base1DCode) Metadata() barcode.Metadata { + return barcode.Metadata{c.kind, 1} +} + +func (c *base1DCode) ColorModel() color.Model { + return color.Gray16Model +} + +func (c *base1DCode) Bounds() image.Rectangle { + return 
image.Rect(0, 0, c.Len(), 1)
+}
+
+func (c *base1DCode) At(x, y int) color.Color {
+ if c.GetBit(x) {
+ return color.Black
+ }
+ return color.White
+}
+
+func (c *base1DCodeIntCS) CheckSum() int {
+ return c.checksum
+}
+
+// New1DCode creates a new 1D barcode where the bars are represented by the bits in the bars BitList
+func New1DCodeIntCheckSum(codeKind, content string, bars *BitList, checksum int) barcode.BarcodeIntCS {
+ return &base1DCodeIntCS{base1DCode{bars, codeKind, content}, checksum}
+}
+
+// New1DCode creates a new 1D barcode where the bars are represented by the bits in the bars BitList
+func New1DCode(codeKind, content string, bars *BitList) barcode.Barcode {
+ return &base1DCode{bars, codeKind, content}
+}
diff --git a/vendor/github.com/boombuler/barcode/utils/bitlist.go b/vendor/github.com/boombuler/barcode/utils/bitlist.go
new file mode 100644
index 00000000000..bb05e53b5d7
--- /dev/null
+++ b/vendor/github.com/boombuler/barcode/utils/bitlist.go
@@ -0,0 +1,119 @@
+package utils
+
+// BitList is a list that contains bits
+type BitList struct {
+ count int
+ data []int32
+}
+
+// NewBitList returns a new BitList with the given length
+// all bits are initialize with false
+func NewBitList(capacity int) *BitList {
+ bl := new(BitList)
+ bl.count = capacity
+ x := 0
+ if capacity%32 != 0 {
+ x = 1
+ }
+ bl.data = make([]int32, capacity/32+x)
+ return bl
+}
+
+// Len returns the number of contained bits
+func (bl *BitList) Len() int {
+ return bl.count
+}
+
+func (bl *BitList) grow() {
+ growBy := len(bl.data)
+ if growBy < 128 {
+ growBy = 128
+ } else if growBy >= 1024 {
+ growBy = 1024
+ }
+
+ nd := make([]int32, len(bl.data)+growBy)
+ copy(nd, bl.data)
+ bl.data = nd
+}
+
+// AddBit appends the given bits to the end of the list
+func (bl *BitList) AddBit(bits ...bool) {
+ for _, bit := range bits {
+ itmIndex := bl.count / 32
+ for itmIndex >= len(bl.data) {
+ bl.grow()
+ }
+ bl.SetBit(bl.count, bit)
+ bl.count++
+ }
+}
+
+// SetBit sets the bit at the given index to the given value
+func (bl *BitList) SetBit(index int, value bool) {
+ itmIndex := index / 32
+ itmBitShift := 31 - (index % 32)
+ if value {
+ bl.data[itmIndex] = bl.data[itmIndex] | 1<<uint(itmBitShift)
+ } else {
+ bl.data[itmIndex] = bl.data[itmIndex] & ^(1 << uint(itmBitShift))
+ }
+}
+
+// GetBit returns the bit at the given index
+func (bl *BitList) GetBit(index int) bool {
+ itmIndex := index / 32
+ itmBitShift := 31 - (index % 32)
+ return ((bl.data[itmIndex] >> uint(itmBitShift)) & 1) == 1
+}
+
+// AddByte appends all 8 bits of the given byte to the end of the list
+func (bl *BitList) AddByte(b byte) {
+ for i := 7; i >= 0; i-- {
+ bl.AddBit(((b >> uint(i)) & 1) == 1)
+ }
+}
+
+// AddBits appends the last (LSB) 'count' bits of 'b' the the end of the list
+func (bl *BitList) AddBits(b int, count byte) {
+ for i := int(count) - 1; i >= 0; i-- {
+ bl.AddBit(((b >> uint(i)) & 1) == 1)
+ }
+}
+
+// GetBytes returns all bits of the BitList as a []byte
+func (bl *BitList) GetBytes() []byte {
+ len := bl.count >> 3
+ if (bl.count % 8) != 0 {
+ len++
+ }
+ result := make([]byte, len)
+ for i := 0; i < len; i++ {
+ shift := (3 - (i % 4)) * 8
+ result[i] = (byte)((bl.data[i/4] >> uint(shift)) & 0xFF)
+ }
+ return result
+}
+
+// IterateBytes iterates through all bytes contained in the BitList
+func (bl *BitList) IterateBytes() <-chan byte {
+ res := make(chan byte)
+
+ go func() {
+ c := bl.count
+ shift := 24
+ i := 0
+ for c > 0 {
+ res <- byte((bl.data[i] >> uint(shift)) & 0xFF)
+ shift -= 8
+ if shift < 0 {
+ shift = 24
+ i++
+ }
+ c -= 8
+ }
+ close(res)
+ }()
+
+ return res
+}
diff --git a/vendor/github.com/boombuler/barcode/utils/galoisfield.go b/vendor/github.com/boombuler/barcode/utils/galoisfield.go
new file mode 100644
index
00000000000..68726fbfdef --- /dev/null +++ b/vendor/github.com/boombuler/barcode/utils/galoisfield.go @@ -0,0 +1,65 @@ +package utils + +// GaloisField encapsulates galois field arithmetics +type GaloisField struct { + Size int + Base int + ALogTbl []int + LogTbl []int +} + +// NewGaloisField creates a new galois field +func NewGaloisField(pp, fieldSize, b int) *GaloisField { + result := new(GaloisField) + + result.Size = fieldSize + result.Base = b + result.ALogTbl = make([]int, fieldSize) + result.LogTbl = make([]int, fieldSize) + + x := 1 + for i := 0; i < fieldSize; i++ { + result.ALogTbl[i] = x + x = x * 2 + if x >= fieldSize { + x = (x ^ pp) & (fieldSize - 1) + } + } + + for i := 0; i < fieldSize; i++ { + result.LogTbl[result.ALogTbl[i]] = int(i) + } + + return result +} + +func (gf *GaloisField) Zero() *GFPoly { + return NewGFPoly(gf, []int{0}) +} + +// AddOrSub add or substract two numbers +func (gf *GaloisField) AddOrSub(a, b int) int { + return a ^ b +} + +// Multiply multiplys two numbers +func (gf *GaloisField) Multiply(a, b int) int { + if a == 0 || b == 0 { + return 0 + } + return gf.ALogTbl[(gf.LogTbl[a]+gf.LogTbl[b])%(gf.Size-1)] +} + +// Divide divides two numbers +func (gf *GaloisField) Divide(a, b int) int { + if b == 0 { + panic("divide by zero") + } else if a == 0 { + return 0 + } + return gf.ALogTbl[(gf.LogTbl[a]-gf.LogTbl[b])%(gf.Size-1)] +} + +func (gf *GaloisField) Invers(num int) int { + return gf.ALogTbl[(gf.Size-1)-gf.LogTbl[num]] +} diff --git a/vendor/github.com/boombuler/barcode/utils/gfpoly.go b/vendor/github.com/boombuler/barcode/utils/gfpoly.go new file mode 100644 index 00000000000..c56bb40b9a9 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/utils/gfpoly.go @@ -0,0 +1,103 @@ +package utils + +type GFPoly struct { + gf *GaloisField + Coefficients []int +} + +func (gp *GFPoly) Degree() int { + return len(gp.Coefficients) - 1 +} + +func (gp *GFPoly) Zero() bool { + return gp.Coefficients[0] == 0 +} + +// GetCoefficient returns the coefficient of x ^ degree +func (gp *GFPoly) GetCoefficient(degree int) int { + return gp.Coefficients[gp.Degree()-degree] +} + +func (gp *GFPoly) AddOrSubstract(other *GFPoly) *GFPoly { + if gp.Zero() { + return other + } else if other.Zero() { + return gp + } + smallCoeff := gp.Coefficients + largeCoeff := other.Coefficients + if len(smallCoeff) > len(largeCoeff) { + largeCoeff, smallCoeff = smallCoeff, largeCoeff + } + sumDiff := make([]int, len(largeCoeff)) + lenDiff := len(largeCoeff) - len(smallCoeff) + copy(sumDiff, largeCoeff[:lenDiff]) + for i := lenDiff; i < len(largeCoeff); i++ { + sumDiff[i] = int(gp.gf.AddOrSub(int(smallCoeff[i-lenDiff]), int(largeCoeff[i]))) + } + return NewGFPoly(gp.gf, sumDiff) +} + +func (gp *GFPoly) MultByMonominal(degree int, coeff int) *GFPoly { + if coeff == 0 { + return gp.gf.Zero() + } + size := len(gp.Coefficients) + result := make([]int, size+degree) + for i := 0; i < size; i++ { + result[i] = int(gp.gf.Multiply(int(gp.Coefficients[i]), int(coeff))) + } + return NewGFPoly(gp.gf, result) +} + +func (gp *GFPoly) Multiply(other *GFPoly) *GFPoly { + if gp.Zero() || other.Zero() { + return gp.gf.Zero() + } + aCoeff := gp.Coefficients + aLen := len(aCoeff) + bCoeff := other.Coefficients + bLen := len(bCoeff) + product := make([]int, aLen+bLen-1) + for i := 0; i < aLen; i++ { + ac := int(aCoeff[i]) + for j := 0; j < bLen; j++ { + bc := int(bCoeff[j]) + product[i+j] = int(gp.gf.AddOrSub(int(product[i+j]), gp.gf.Multiply(ac, bc))) + } + } + return 
NewGFPoly(gp.gf, product) +} + +func (gp *GFPoly) Divide(other *GFPoly) (quotient *GFPoly, remainder *GFPoly) { + quotient = gp.gf.Zero() + remainder = gp + fld := gp.gf + denomLeadTerm := other.GetCoefficient(other.Degree()) + inversDenomLeadTerm := fld.Invers(int(denomLeadTerm)) + for remainder.Degree() >= other.Degree() && !remainder.Zero() { + degreeDiff := remainder.Degree() - other.Degree() + scale := int(fld.Multiply(int(remainder.GetCoefficient(remainder.Degree())), inversDenomLeadTerm)) + term := other.MultByMonominal(degreeDiff, scale) + itQuot := NewMonominalPoly(fld, degreeDiff, scale) + quotient = quotient.AddOrSubstract(itQuot) + remainder = remainder.AddOrSubstract(term) + } + return +} + +func NewMonominalPoly(field *GaloisField, degree int, coeff int) *GFPoly { + if coeff == 0 { + return field.Zero() + } + result := make([]int, degree+1) + result[0] = coeff + return NewGFPoly(field, result) +} + +func NewGFPoly(field *GaloisField, coefficients []int) *GFPoly { + for len(coefficients) > 1 && coefficients[0] == 0 { + coefficients = coefficients[1:] + } + return &GFPoly{field, coefficients} +} diff --git a/vendor/github.com/boombuler/barcode/utils/reedsolomon.go b/vendor/github.com/boombuler/barcode/utils/reedsolomon.go new file mode 100644 index 00000000000..53af91ad446 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/utils/reedsolomon.go @@ -0,0 +1,44 @@ +package utils + +import ( + "sync" +) + +type ReedSolomonEncoder struct { + gf *GaloisField + polynomes []*GFPoly + m *sync.Mutex +} + +func NewReedSolomonEncoder(gf *GaloisField) *ReedSolomonEncoder { + return &ReedSolomonEncoder{ + gf, []*GFPoly{NewGFPoly(gf, []int{1})}, new(sync.Mutex), + } +} + +func (rs *ReedSolomonEncoder) getPolynomial(degree int) *GFPoly { + rs.m.Lock() + defer rs.m.Unlock() + + if degree >= len(rs.polynomes) { + last := rs.polynomes[len(rs.polynomes)-1] + for d := len(rs.polynomes); d <= degree; d++ { + next := last.Multiply(NewGFPoly(rs.gf, []int{1, rs.gf.ALogTbl[d-1+rs.gf.Base]})) + rs.polynomes = append(rs.polynomes, next) + last = next + } + } + return rs.polynomes[degree] +} + +func (rs *ReedSolomonEncoder) Encode(data []int, eccCount int) []int { + generator := rs.getPolynomial(eccCount) + info := NewGFPoly(rs.gf, data) + info = info.MultByMonominal(eccCount, 1) + _, remainder := info.Divide(generator) + + result := make([]int, eccCount) + numZero := int(eccCount) - len(remainder.Coefficients) + copy(result[numZero:], remainder.Coefficients) + return result +} diff --git a/vendor/github.com/boombuler/barcode/utils/runeint.go b/vendor/github.com/boombuler/barcode/utils/runeint.go new file mode 100644 index 00000000000..d2e5e61e567 --- /dev/null +++ b/vendor/github.com/boombuler/barcode/utils/runeint.go @@ -0,0 +1,19 @@ +package utils + +// RuneToInt converts a rune between '0' and '9' to an integer between 0 and 9 +// If the rune is outside of this range -1 is returned. +func RuneToInt(r rune) int { + if r >= '0' && r <= '9' { + return int(r - '0') + } + return -1 +} + +// IntToRune converts a digit 0 - 9 to the rune '0' - '9'. If the given int is outside +// of this range 'F' is returned! 
+func IntToRune(i int) rune { + if i >= 0 && i <= 9 { + return rune(i + '0') + } + return 'F' +} diff --git a/vendor/github.com/davecgh/go-spew/LICENSE b/vendor/github.com/davecgh/go-spew/LICENSE index c836416192d..bc52e96f2b0 100644 --- a/vendor/github.com/davecgh/go-spew/LICENSE +++ b/vendor/github.com/davecgh/go-spew/LICENSE @@ -2,7 +2,7 @@ ISC License Copyright (c) 2012-2016 Dave Collins -Permission to use, copy, modify, and distribute this software for any +Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. diff --git a/vendor/github.com/davecgh/go-spew/spew/bypass.go b/vendor/github.com/davecgh/go-spew/spew/bypass.go index 8a4a6589a2d..792994785e3 100644 --- a/vendor/github.com/davecgh/go-spew/spew/bypass.go +++ b/vendor/github.com/davecgh/go-spew/spew/bypass.go @@ -16,7 +16,9 @@ // when the code is not running on Google App Engine, compiled by GopherJS, and // "-tags safe" is not added to the go build command line. The "disableunsafe" // tag is deprecated and thus should not be used. -// +build !js,!appengine,!safe,!disableunsafe +// Go versions prior to 1.4 are disabled because they use a different layout +// for interfaces which make the implementation of unsafeReflectValue more complex. +// +build !js,!appengine,!safe,!disableunsafe,go1.4 package spew @@ -34,80 +36,49 @@ const ( ptrSize = unsafe.Sizeof((*byte)(nil)) ) +type flag uintptr + var ( - // offsetPtr, offsetScalar, and offsetFlag are the offsets for the - // internal reflect.Value fields. These values are valid before golang - // commit ecccf07e7f9d which changed the format. The are also valid - // after commit 82f48826c6c7 which changed the format again to mirror - // the original format. Code in the init function updates these offsets - // as necessary. - offsetPtr = uintptr(ptrSize) - offsetScalar = uintptr(0) - offsetFlag = uintptr(ptrSize * 2) - - // flagKindWidth and flagKindShift indicate various bits that the - // reflect package uses internally to track kind information. - // - // flagRO indicates whether or not the value field of a reflect.Value is - // read-only. - // - // flagIndir indicates whether the value field of a reflect.Value is - // the actual data or a pointer to the data. - // - // These values are valid before golang commit 90a7c3c86944 which - // changed their positions. Code in the init function updates these - // flags as necessary. - flagKindWidth = uintptr(5) - flagKindShift = uintptr(flagKindWidth - 1) - flagRO = uintptr(1 << 0) - flagIndir = uintptr(1 << 1) + // flagRO indicates whether the value field of a reflect.Value + // is read-only. + flagRO flag + + // flagAddr indicates whether the address of the reflect.Value's + // value may be taken. + flagAddr flag ) -func init() { - // Older versions of reflect.Value stored small integers directly in the - // ptr field (which is named val in the older versions). Versions - // between commits ecccf07e7f9d and 82f48826c6c7 added a new field named - // scalar for this purpose which unfortunately came before the flag - // field, so the offset of the flag field is different for those - // versions. - // - // This code constructs a new reflect.Value from a known small integer - // and checks if the size of the reflect.Value struct indicates it has - // the scalar field. When it does, the offsets are updated accordingly. 
- vv := reflect.ValueOf(0xf00)
- if unsafe.Sizeof(vv) == (ptrSize * 4) {
- offsetScalar = ptrSize * 2
- offsetFlag = ptrSize * 3
- }
+// flagKindMask holds the bits that make up the kind
+// part of the flags field. In all the supported versions,
+// it is in the lower 5 bits.
+const flagKindMask = flag(0x1f)
- // Commit 90a7c3c86944 changed the flag positions such that the low
- // order bits are the kind. This code extracts the kind from the flags
- // field and ensures it's the correct type. When it's not, the flag
- // order has been changed to the newer format, so the flags are updated
- // accordingly.
- upf := unsafe.Pointer(uintptr(unsafe.Pointer(&vv)) + offsetFlag)
- upfv := *(*uintptr)(upf)
- flagKindMask := uintptr((1<<flagKindWidth - 1) << flagKindShift)
- if (upfv&flagKindMask)>>flagKindShift != uintptr(reflect.Int) {
- flagKindShift = 0
- flagRO = 1 << 5
- flagIndir = 1 << 6
-
- // Commit adf9b30e5594 modified the flags to separate the
- // flagRO flag into two bits which specifies whether or not the
- // field is embedded. This causes flagIndir to move over a bit
- // and means that flagRO is the combination of either of the
- // original flagRO bit and the new bit.
- //
- // This code detects the change by extracting what used to be
- // the indirect bit to ensure it's set. When it's not, the flag
- // order has been changed to the newer format, so the flags are
- // updated accordingly.
- if upfv&flagIndir == 0 {
- flagRO = 3 << 5
- flagIndir = 1 << 7
- }
+// Different versions of Go have used different
+// bit layouts for the flags type. This table
+// records the known combinations.
+var okFlags = []struct {
+ ro, addr flag
+}{{
+ // From Go 1.4 to 1.5
+ ro: 1 << 5,
+ addr: 1 << 7,
+}, {
+ // Up to Go tip.
+ ro: 1<<5 | 1<<6,
+ addr: 1 << 8,
+}}
+
+var flagValOffset = func() uintptr {
+ field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
+ if !ok {
+ panic("reflect.Value has no flag field")
}
+ return field.Offset
+}()
+
+// flagField returns a pointer to the flag field of a reflect.Value.
+func flagField(v *reflect.Value) *flag {
+ return (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))
}
// unsafeReflectValue converts the passed reflect.Value into a one that bypasses
@@ -119,34 +90,56 @@ func init() {
// This allows us to check for implementations of the Stringer and error
// interfaces to be used for pretty printing ordinarily unaddressable and
// inaccessible values such as unexported struct fields.
-func unsafeReflectValue(v reflect.Value) (rv reflect.Value) {
- indirects := 1
- vt := v.Type()
- upv := unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetPtr)
- rvf := *(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetFlag))
- if rvf&flagIndir != 0 {
- vt = reflect.PtrTo(v.Type())
- indirects++
- } else if offsetScalar != 0 {
- // The value is in the scalar field when it's not one of the
- // reference types.
- switch vt.Kind() {
- case reflect.Uintptr:
- case reflect.Chan:
- case reflect.Func:
- case reflect.Map:
- case reflect.Ptr:
- case reflect.UnsafePointer:
- default:
- upv = unsafe.Pointer(uintptr(unsafe.Pointer(&v)) +
- offsetScalar)
- }
+func unsafeReflectValue(v reflect.Value) reflect.Value {
+ if !v.IsValid() || (v.CanInterface() && v.CanAddr()) {
+ return v
}
+ flagFieldPtr := flagField(&v)
+ *flagFieldPtr &^= flagRO
+ *flagFieldPtr |= flagAddr
+ return v
+}
-
- pv := reflect.NewAt(vt, upv)
- rv = pv
- for i := 0; i < indirects; i++ {
- rv = rv.Elem()
+// Sanity checks against future reflect package changes
+// to the type or semantics of the Value.flag field.
+func init() { + field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag") + if !ok { + panic("reflect.Value has no flag field") + } + if field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() { + panic("reflect.Value flag field has changed kind") + } + type t0 int + var t struct { + A t0 + // t0 will have flagEmbedRO set. + t0 + // a will have flagStickyRO set + a t0 + } + vA := reflect.ValueOf(t).FieldByName("A") + va := reflect.ValueOf(t).FieldByName("a") + vt0 := reflect.ValueOf(t).FieldByName("t0") + + // Infer flagRO from the difference between the flags + // for the (otherwise identical) fields in t. + flagPublic := *flagField(&vA) + flagWithRO := *flagField(&va) | *flagField(&vt0) + flagRO = flagPublic ^ flagWithRO + + // Infer flagAddr from the difference between a value + // taken from a pointer and not. + vPtrA := reflect.ValueOf(&t).Elem().FieldByName("A") + flagNoPtr := *flagField(&vA) + flagPtr := *flagField(&vPtrA) + flagAddr = flagNoPtr ^ flagPtr + + // Check that the inferred flags tally with one of the known versions. + for _, f := range okFlags { + if flagRO == f.ro && flagAddr == f.addr { + return + } } - return rv + panic("reflect.Value read-only flag has changed semantics") } diff --git a/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go b/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go index 1fe3cf3d5d1..205c28d68c4 100644 --- a/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go +++ b/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go @@ -16,7 +16,7 @@ // when the code is running on Google App Engine, compiled by GopherJS, or // "-tags safe" is added to the go build command line. The "disableunsafe" // tag is deprecated and thus should not be used. -// +build js appengine safe disableunsafe +// +build js appengine safe disableunsafe !go1.4 package spew diff --git a/vendor/github.com/davecgh/go-spew/spew/common.go b/vendor/github.com/davecgh/go-spew/spew/common.go index 7c519ff47ac..1be8ce94576 100644 --- a/vendor/github.com/davecgh/go-spew/spew/common.go +++ b/vendor/github.com/davecgh/go-spew/spew/common.go @@ -180,7 +180,7 @@ func printComplex(w io.Writer, c complex128, floatPrecision int) { w.Write(closeParenBytes) } -// printHexPtr outputs a uintptr formatted as hexidecimal with a leading '0x' +// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x' // prefix to Writer w. func printHexPtr(w io.Writer, p uintptr) { // Null pointer. diff --git a/vendor/github.com/davecgh/go-spew/spew/dump.go b/vendor/github.com/davecgh/go-spew/spew/dump.go index df1d582a728..f78d89fc1f6 100644 --- a/vendor/github.com/davecgh/go-spew/spew/dump.go +++ b/vendor/github.com/davecgh/go-spew/spew/dump.go @@ -35,16 +35,16 @@ var ( // cCharRE is a regular expression that matches a cgo char. // It is used to detect character arrays to hexdump them. - cCharRE = regexp.MustCompile("^.*\\._Ctype_char$") + cCharRE = regexp.MustCompile(`^.*\._Ctype_char$`) // cUnsignedCharRE is a regular expression that matches a cgo unsigned // char. It is used to detect unsigned character arrays to hexdump // them. - cUnsignedCharRE = regexp.MustCompile("^.*\\._Ctype_unsignedchar$") + cUnsignedCharRE = regexp.MustCompile(`^.*\._Ctype_unsignedchar$`) // cUint8tCharRE is a regular expression that matches a cgo uint8_t. // It is used to detect uint8_t arrays to hexdump them. 
- cUint8tCharRE = regexp.MustCompile("^.*\\._Ctype_uint8_t$") + cUint8tCharRE = regexp.MustCompile(`^.*\._Ctype_uint8_t$`) ) // dumpState contains information about the state of a dump operation. @@ -143,10 +143,10 @@ func (d *dumpState) dumpPtr(v reflect.Value) { // Display dereferenced value. d.w.Write(openParenBytes) switch { - case nilFound == true: + case nilFound: d.w.Write(nilAngleBytes) - case cycleFound == true: + case cycleFound: d.w.Write(circularBytes) default: diff --git a/vendor/github.com/davecgh/go-spew/spew/format.go b/vendor/github.com/davecgh/go-spew/spew/format.go index c49875bacbb..b04edb7d7ac 100644 --- a/vendor/github.com/davecgh/go-spew/spew/format.go +++ b/vendor/github.com/davecgh/go-spew/spew/format.go @@ -182,10 +182,10 @@ func (f *formatState) formatPtr(v reflect.Value) { // Display dereferenced value. switch { - case nilFound == true: + case nilFound: f.fs.Write(nilAngleBytes) - case cycleFound == true: + case cycleFound: f.fs.Write(circularShortBytes) default: diff --git a/vendor/github.com/fsouza/go-dockerclient/AUTHORS b/vendor/github.com/fsouza/go-dockerclient/AUTHORS deleted file mode 100644 index d27226e58b2..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/AUTHORS +++ /dev/null @@ -1,136 +0,0 @@ -# This is the official list of go-dockerclient authors for copyright purposes. - -Abhishek Chanda -Adam Bell-Hanssen -Adrien Kohlbecker -Aldrin Leal -Andreas Jaekle -Andrews Medina -Andrey Sibiryov -Andy Goldstein -Antonio Murdaca -Artem Sidorenko -Ben Marini -Ben McCann -Ben Parees -Benno van den Berg -Bradley Cicenas -Brendan Fosberry -Brian Lalor -Brian P. Hamachek -Brian Palmer -Bryan Boreham -Burke Libbey -Carlos Diaz-Padron -Cesar Wong -Cezar Sa Espinola -Cheah Chu Yeow -cheneydeng -Chris Bednarski -CMGS -Colin Hebert -Craig Jellick -Dan Williams -Daniel, Dao Quang Minh -Daniel Garcia -Daniel Hiltgen -Darren Shepherd -Dave Choi -David Huie -Dawn Chen -Dinesh Subhraveti -Drew Wells -Ed -Elias G. Schneevoigt -Erez Horev -Eric Anderson -Ewout Prangsma -Fabio Rehm -Fatih Arslan -Flavia Missi -Francisco Souza -Frank Groeneveld -George Moura -Grégoire Delattre -Guillermo Álvarez Fernández -Harry Zhang -He Simei -Ivan Mikushin -James Bardin -James Nugent -Jari Kolehmainen -Jason Wilder -Jawher Moussa -Jean-Baptiste Dalido -Jeff Mitchell -Jeffrey Hulten -Jen Andre -Jérôme Laurens -Johan Euphrosine -John Hughes -Kamil Domanski -Karan Misra -Ken Herner -Kim, Hirokuni -Kyle Allan -Liron Levin -Lior Yankovich -Liu Peng -Lorenz Leutgeb -Lucas Clemente -Lucas Weiblen -Lyon Hill -Mantas Matelis -Martin Sweeney -Máximo Cuadros Ortiz -Michael Schmatz -Michal Fojtik -Mike Dillon -Mrunal Patel -Nate Jones -Nguyen Sy Thanh Son -Nicholas Van Wiggeren -Nick Ethier -Omeid Matten -Orivej Desh -Paul Bellamy -Paul Morie -Paul Weil -Peter Edge -Peter Jihoon Kim -Phil Lu -Philippe Lafoucrière -Rafe Colton -Raphaël Pinson -Rob Miller -Robbert Klarenbeek -Robert Williamson -Roman Khlystik -Salvador Gironès -Sam Rijs -Sami Wagiaalla -Samuel Archambault -Samuel Karp -Seth Jennings -Silas Sewell -Simon Eskildsen -Simon Menke -Skolos -Soulou -Sridhar Ratnakumar -Summer Mousa -Sunjin Lee -Tarsis Azevedo -Tim Schindler -Timothy St. 
Clair -Tobi Knaup -Tom Wilkie -Tonic -ttyh061 -Victor Marmol -Vincenzo Prignano -Vlad Alexandru Ionescu -Wiliam Souza -Ye Yin -Yu, Zou -Yuriy Bogdanov diff --git a/vendor/github.com/fsouza/go-dockerclient/DOCKER-LICENSE b/vendor/github.com/fsouza/go-dockerclient/DOCKER-LICENSE deleted file mode 100644 index 70663447487..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/DOCKER-LICENSE +++ /dev/null @@ -1,6 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - -You can find the Docker license at the following link: -https://raw.githubusercontent.com/docker/docker/master/LICENSE diff --git a/vendor/github.com/fsouza/go-dockerclient/LICENSE b/vendor/github.com/fsouza/go-dockerclient/LICENSE deleted file mode 100644 index b1cdd4cd20c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2016, go-dockerclient authors -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - * Redistributions of source code must retain the above copyright notice, -this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above copyright notice, -this list of conditions and the following disclaimer in the documentation -and/or other materials provided with the distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/fsouza/go-dockerclient/Makefile b/vendor/github.com/fsouza/go-dockerclient/Makefile deleted file mode 100644 index 2d6a5cf8998..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/Makefile +++ /dev/null @@ -1,57 +0,0 @@ -.PHONY: \ - all \ - vendor \ - lint \ - vet \ - fmt \ - fmtcheck \ - pretest \ - test \ - integration \ - cov \ - clean - -PKGS = . 
./testing - -all: test - -vendor: - @ go get -v github.com/mjibson/party - party -d external -c -u - -lint: - @ go get -v github.com/golang/lint/golint - @for file in $$(git ls-files '*.go' | grep -v 'external/'); do \ - export output="$$(golint $${file} | grep -v 'type name will be used as docker.DockerInfo')"; \ - [ -n "$${output}" ] && echo "$${output}" && export status=1; \ - done; \ - exit $${status:-0} - -vet: - go vet $(PKGS) - -fmt: - gofmt -s -w $(PKGS) - -fmtcheck: - @ export output=$$(gofmt -s -d $(PKGS)); \ - [ -n "$${output}" ] && echo "$${output}" && export status=1; \ - exit $${status:-0} - -pretest: lint vet fmtcheck - -gotest: - go test $(GO_TEST_FLAGS) $(PKGS) - -test: pretest gotest - -integration: - go test -tags docker_integration -run TestIntegration -v - -cov: - @ go get -v github.com/axw/gocov/gocov - @ go get golang.org/x/tools/cmd/cover - gocov test | gocov report - -clean: - go clean $(PKGS) diff --git a/vendor/github.com/fsouza/go-dockerclient/README.markdown b/vendor/github.com/fsouza/go-dockerclient/README.markdown deleted file mode 100644 index 234b9e49e14..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/README.markdown +++ /dev/null @@ -1,106 +0,0 @@ -# go-dockerclient - -[![Travis](https://img.shields.io/travis/fsouza/go-dockerclient/master.svg?style=flat-square)](https://travis-ci.org/fsouza/go-dockerclient) -[![GoDoc](https://img.shields.io/badge/api-Godoc-blue.svg?style=flat-square)](https://godoc.org/github.com/fsouza/go-dockerclient) - -This package presents a client for the Docker remote API. It also provides -support for the extensions in the [Swarm API](https://docs.docker.com/swarm/swarm-api/). -It currently supports the Docker API up to version 1.23. - -This package also provides support for docker's network API, which is a simple -passthrough to the libnetwork remote API. Note that docker's network API is -only available in docker 1.8 and above, and only enabled in docker if -DOCKER_EXPERIMENTAL is defined during the docker build process. - -For more details, check the [remote API documentation](http://docs.docker.com/engine/reference/api/docker_remote_api/). - -## Vendoring - -If you are having issues with Go 1.5 and have `GO15VENDOREXPERIMENT` set with an application that has go-dockerclient vendored, -please update your vendoring of go-dockerclient :) We recently moved the `vendor` directory to `external` so that go-dockerclient -is compatible with this configuration. See [338](https://github.com/fsouza/go-dockerclient/issues/338) and [339](https://github.com/fsouza/go-dockerclient/pull/339) -for details. - -## Example - -```go -package main - -import ( - "fmt" - - "github.com/fsouza/go-dockerclient" -) - -func main() { - endpoint := "unix:///var/run/docker.sock" - client, _ := docker.NewClient(endpoint) - imgs, _ := client.ListImages(docker.ListImagesOptions{All: false}) - for _, img := range imgs { - fmt.Println("ID: ", img.ID) - fmt.Println("RepoTags: ", img.RepoTags) - fmt.Println("Created: ", img.Created) - fmt.Println("Size: ", img.Size) - fmt.Println("VirtualSize: ", img.VirtualSize) - fmt.Println("ParentId: ", img.ParentID) - } -} -``` - -## Using with TLS - -In order to instantiate the client for a TLS-enabled daemon, you should use NewTLSClient, passing the endpoint and path for key and certificates as parameters. 
- -```go -package main - -import ( - "fmt" - - "github.com/fsouza/go-dockerclient" -) - -func main() { - endpoint := "tcp://[ip]:[port]" - path := os.Getenv("DOCKER_CERT_PATH") - ca := fmt.Sprintf("%s/ca.pem", path) - cert := fmt.Sprintf("%s/cert.pem", path) - key := fmt.Sprintf("%s/key.pem", path) - client, _ := docker.NewTLSClient(endpoint, cert, key, ca) - // use client -} -``` - -If using [docker-machine](https://docs.docker.com/machine/), or another application that exports environment variables -`DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH`, you can use NewClientFromEnv. - - -```go -package main - -import ( - "fmt" - - "github.com/fsouza/go-dockerclient" -) - -func main() { - client, _ := docker.NewClientFromEnv() - // use client -} -``` - -See the documentation for more details. - -## Developing - -All development commands can be seen in the [Makefile](Makefile). - -Commited code must pass: - -* [golint](https://github.com/golang/lint) -* [go vet](https://godoc.org/golang.org/x/tools/cmd/vet) -* [gofmt](https://golang.org/cmd/gofmt) -* [go test](https://golang.org/cmd/go/#hdr-Test_packages) - -Running `make test` will check all of these. If your editor does not automatically call gofmt, `make fmt` will format all go files in this repository. diff --git a/vendor/github.com/fsouza/go-dockerclient/auth.go b/vendor/github.com/fsouza/go-dockerclient/auth.go deleted file mode 100644 index 95596d78272..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/auth.go +++ /dev/null @@ -1,158 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -import ( - "bytes" - "encoding/base64" - "encoding/json" - "errors" - "fmt" - "io" - "io/ioutil" - "os" - "path" - "strings" -) - -// ErrCannotParseDockercfg is the error returned by NewAuthConfigurations when the dockercfg cannot be parsed. -var ErrCannotParseDockercfg = errors.New("Failed to read authentication from dockercfg") - -// AuthConfiguration represents authentication options to use in the PushImage -// method. It represents the authentication in the Docker index server. -type AuthConfiguration struct { - Username string `json:"username,omitempty"` - Password string `json:"password,omitempty"` - Email string `json:"email,omitempty"` - ServerAddress string `json:"serveraddress,omitempty"` -} - -// AuthConfigurations represents authentication options to use for the -// PushImage method accommodating the new X-Registry-Config header -type AuthConfigurations struct { - Configs map[string]AuthConfiguration `json:"configs"` -} - -// AuthConfigurations119 is used to serialize a set of AuthConfigurations -// for Docker API >= 1.19. -type AuthConfigurations119 map[string]AuthConfiguration - -// dockerConfig represents a registry authentation configuration from the -// .dockercfg file. -type dockerConfig struct { - Auth string `json:"auth"` - Email string `json:"email"` -} - -// NewAuthConfigurationsFromDockerCfg returns AuthConfigurations from the -// ~/.dockercfg file. 
-func NewAuthConfigurationsFromDockerCfg() (*AuthConfigurations, error) { - var r io.Reader - var err error - p := path.Join(os.Getenv("HOME"), ".docker", "config.json") - r, err = os.Open(p) - if err != nil { - p := path.Join(os.Getenv("HOME"), ".dockercfg") - r, err = os.Open(p) - if err != nil { - return nil, err - } - } - return NewAuthConfigurations(r) -} - -// NewAuthConfigurations returns AuthConfigurations from a JSON encoded string in the -// same format as the .dockercfg file. -func NewAuthConfigurations(r io.Reader) (*AuthConfigurations, error) { - var auth *AuthConfigurations - confs, err := parseDockerConfig(r) - if err != nil { - return nil, err - } - auth, err = authConfigs(confs) - if err != nil { - return nil, err - } - return auth, nil -} - -func parseDockerConfig(r io.Reader) (map[string]dockerConfig, error) { - buf := new(bytes.Buffer) - buf.ReadFrom(r) - byteData := buf.Bytes() - - confsWrapper := struct { - Auths map[string]dockerConfig `json:"auths"` - }{} - if err := json.Unmarshal(byteData, &confsWrapper); err == nil { - if len(confsWrapper.Auths) > 0 { - return confsWrapper.Auths, nil - } - } - - var confs map[string]dockerConfig - if err := json.Unmarshal(byteData, &confs); err != nil { - return nil, err - } - return confs, nil -} - -// authConfigs converts a dockerConfigs map to a AuthConfigurations object. -func authConfigs(confs map[string]dockerConfig) (*AuthConfigurations, error) { - c := &AuthConfigurations{ - Configs: make(map[string]AuthConfiguration), - } - for reg, conf := range confs { - data, err := base64.StdEncoding.DecodeString(conf.Auth) - if err != nil { - return nil, err - } - userpass := strings.SplitN(string(data), ":", 2) - if len(userpass) != 2 { - return nil, ErrCannotParseDockercfg - } - c.Configs[reg] = AuthConfiguration{ - Email: conf.Email, - Username: userpass[0], - Password: userpass[1], - ServerAddress: reg, - } - } - return c, nil -} - -// AuthStatus returns the authentication status for Docker API versions >= 1.23. -type AuthStatus struct { - Status string `json:"Status,omitempty" yaml:"Status,omitempty"` - IdentityToken string `json:"IdentityToken,omitempty" yaml:"IdentityToken,omitempty"` -} - -// AuthCheck validates the given credentials. It returns nil if successful. -// -// For Docker API versions >= 1.23, the AuthStatus struct will be populated, otherwise it will be empty.` -// -// See https://goo.gl/6nsZkH for more details. -func (c *Client) AuthCheck(conf *AuthConfiguration) (AuthStatus, error) { - var authStatus AuthStatus - if conf == nil { - return authStatus, fmt.Errorf("conf is nil") - } - resp, err := c.do("POST", "/auth", doOptions{data: conf}) - if err != nil { - return authStatus, err - } - defer resp.Body.Close() - data, err := ioutil.ReadAll(resp.Body) - if err != nil { - return authStatus, err - } - if len(data) == 0 { - return authStatus, nil - } - if err := json.Unmarshal(data, &authStatus); err != nil { - return authStatus, err - } - return authStatus, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/cancelable.go b/vendor/github.com/fsouza/go-dockerclient/cancelable.go deleted file mode 100644 index 375fbd15c39..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/cancelable.go +++ /dev/null @@ -1,17 +0,0 @@ -// Copyright 2016 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build go1.5 - -package docker - -import "net/http" - -func cancelable(client *http.Client, req *http.Request) func() { - ch := make(chan struct{}) - req.Cancel = ch - return func() { - close(ch) - } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/cancelable_go14.go b/vendor/github.com/fsouza/go-dockerclient/cancelable_go14.go deleted file mode 100644 index 3c203986fc6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/cancelable_go14.go +++ /dev/null @@ -1,19 +0,0 @@ -// Copyright 2016 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !go1.5 - -package docker - -import "net/http" - -func cancelable(client *http.Client, req *http.Request) func() { - return func() { - if rc, ok := client.Transport.(interface { - CancelRequest(*http.Request) - }); ok { - rc.CancelRequest(req) - } - } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/change.go b/vendor/github.com/fsouza/go-dockerclient/change.go deleted file mode 100644 index d133594d480..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/change.go +++ /dev/null @@ -1,43 +0,0 @@ -// Copyright 2014 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -import "fmt" - -// ChangeType is a type for constants indicating the type of change -// in a container -type ChangeType int - -const ( - // ChangeModify is the ChangeType for container modifications - ChangeModify ChangeType = iota - - // ChangeAdd is the ChangeType for additions to a container - ChangeAdd - - // ChangeDelete is the ChangeType for deletions from a container - ChangeDelete -) - -// Change represents a change in a container. -// -// See https://goo.gl/9GsTIF for more details. -type Change struct { - Path string - Kind ChangeType -} - -func (change *Change) String() string { - var kind string - switch change.Kind { - case ChangeModify: - kind = "C" - case ChangeAdd: - kind = "A" - case ChangeDelete: - kind = "D" - } - return fmt.Sprintf("%s %s", kind, change.Path) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/client.go b/vendor/github.com/fsouza/go-dockerclient/client.go deleted file mode 100644 index a3f09cd8bbc..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/client.go +++ /dev/null @@ -1,995 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Package docker provides a client for the Docker remote API. -// -// See https://goo.gl/G3plxW for more details on the remote API. 
-package docker - -import ( - "bufio" - "bytes" - "crypto/tls" - "crypto/x509" - "encoding/json" - "errors" - "fmt" - "io" - "io/ioutil" - "net" - "net/http" - "net/http/httputil" - "net/url" - "os" - "path/filepath" - "reflect" - "runtime" - "strconv" - "strings" - "sync/atomic" - "time" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy" - "github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp" -) - -const userAgent = "go-dockerclient" - -var ( - // ErrInvalidEndpoint is returned when the endpoint is not a valid HTTP URL. - ErrInvalidEndpoint = errors.New("invalid endpoint") - - // ErrConnectionRefused is returned when the client cannot connect to the given endpoint. - ErrConnectionRefused = errors.New("cannot connect to Docker endpoint") - - // ErrInactivityTimeout is returned when a streamable call has been inactive for some time. - ErrInactivityTimeout = errors.New("inactivity time exceeded timeout") - - apiVersion112, _ = NewAPIVersion("1.12") - - apiVersion119, _ = NewAPIVersion("1.19") -) - -// APIVersion is an internal representation of a version of the Remote API. -type APIVersion []int - -// NewAPIVersion returns an instance of APIVersion for the given string. -// -// The given string must be in the form .., where , -// and are integer numbers. -func NewAPIVersion(input string) (APIVersion, error) { - if !strings.Contains(input, ".") { - return nil, fmt.Errorf("Unable to parse version %q", input) - } - raw := strings.Split(input, "-") - arr := strings.Split(raw[0], ".") - ret := make(APIVersion, len(arr)) - var err error - for i, val := range arr { - ret[i], err = strconv.Atoi(val) - if err != nil { - return nil, fmt.Errorf("Unable to parse version %q: %q is not an integer", input, val) - } - } - return ret, nil -} - -func (version APIVersion) String() string { - var str string - for i, val := range version { - str += strconv.Itoa(val) - if i < len(version)-1 { - str += "." - } - } - return str -} - -// LessThan is a function for comparing APIVersion structs -func (version APIVersion) LessThan(other APIVersion) bool { - return version.compare(other) < 0 -} - -// LessThanOrEqualTo is a function for comparing APIVersion structs -func (version APIVersion) LessThanOrEqualTo(other APIVersion) bool { - return version.compare(other) <= 0 -} - -// GreaterThan is a function for comparing APIVersion structs -func (version APIVersion) GreaterThan(other APIVersion) bool { - return version.compare(other) > 0 -} - -// GreaterThanOrEqualTo is a function for comparing APIVersion structs -func (version APIVersion) GreaterThanOrEqualTo(other APIVersion) bool { - return version.compare(other) >= 0 -} - -func (version APIVersion) compare(other APIVersion) int { - for i, v := range version { - if i <= len(other)-1 { - otherVersion := other[i] - - if v < otherVersion { - return -1 - } else if v > otherVersion { - return 1 - } - } - } - if len(version) > len(other) { - return 1 - } - if len(version) < len(other) { - return -1 - } - return 0 -} - -// Client is the basic type of this package. It provides methods for -// interaction with the API. 
-type Client struct { - SkipServerVersionCheck bool - HTTPClient *http.Client - TLSConfig *tls.Config - Dialer *net.Dialer - - endpoint string - endpointURL *url.URL - eventMonitor *eventMonitoringState - requestedAPIVersion APIVersion - serverAPIVersion APIVersion - expectedAPIVersion APIVersion - unixHTTPClient *http.Client -} - -// NewClient returns a Client instance ready for communication with the given -// server endpoint. It will use the latest remote API version available in the -// server. -func NewClient(endpoint string) (*Client, error) { - client, err := NewVersionedClient(endpoint, "") - if err != nil { - return nil, err - } - client.SkipServerVersionCheck = true - return client, nil -} - -// NewTLSClient returns a Client instance ready for TLS communications with the givens -// server endpoint, key and certificates . It will use the latest remote API version -// available in the server. -func NewTLSClient(endpoint string, cert, key, ca string) (*Client, error) { - client, err := NewVersionedTLSClient(endpoint, cert, key, ca, "") - if err != nil { - return nil, err - } - client.SkipServerVersionCheck = true - return client, nil -} - -// NewTLSClientFromBytes returns a Client instance ready for TLS communications with the givens -// server endpoint, key and certificates (passed inline to the function as opposed to being -// read from a local file). It will use the latest remote API version available in the server. -func NewTLSClientFromBytes(endpoint string, certPEMBlock, keyPEMBlock, caPEMCert []byte) (*Client, error) { - client, err := NewVersionedTLSClientFromBytes(endpoint, certPEMBlock, keyPEMBlock, caPEMCert, "") - if err != nil { - return nil, err - } - client.SkipServerVersionCheck = true - return client, nil -} - -// NewVersionedClient returns a Client instance ready for communication with -// the given server endpoint, using a specific remote API version. -func NewVersionedClient(endpoint string, apiVersionString string) (*Client, error) { - u, err := parseEndpoint(endpoint, false) - if err != nil { - return nil, err - } - var requestedAPIVersion APIVersion - if strings.Contains(apiVersionString, ".") { - requestedAPIVersion, err = NewAPIVersion(apiVersionString) - if err != nil { - return nil, err - } - } - return &Client{ - HTTPClient: cleanhttp.DefaultClient(), - Dialer: &net.Dialer{}, - endpoint: endpoint, - endpointURL: u, - eventMonitor: new(eventMonitoringState), - requestedAPIVersion: requestedAPIVersion, - }, nil -} - -// NewVersionnedTLSClient has been DEPRECATED, please use NewVersionedTLSClient. -func NewVersionnedTLSClient(endpoint string, cert, key, ca, apiVersionString string) (*Client, error) { - return NewVersionedTLSClient(endpoint, cert, key, ca, apiVersionString) -} - -// NewVersionedTLSClient returns a Client instance ready for TLS communications with the givens -// server endpoint, key and certificates, using a specific remote API version. 
-func NewVersionedTLSClient(endpoint string, cert, key, ca, apiVersionString string) (*Client, error) { - certPEMBlock, err := ioutil.ReadFile(cert) - if err != nil { - return nil, err - } - keyPEMBlock, err := ioutil.ReadFile(key) - if err != nil { - return nil, err - } - caPEMCert, err := ioutil.ReadFile(ca) - if err != nil { - return nil, err - } - return NewVersionedTLSClientFromBytes(endpoint, certPEMBlock, keyPEMBlock, caPEMCert, apiVersionString) -} - -// NewClientFromEnv returns a Client instance ready for communication created from -// Docker's default logic for the environment variables DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH. -// -// See https://github.com/docker/docker/blob/1f963af697e8df3a78217f6fdbf67b8123a7db94/docker/docker.go#L68. -// See https://github.com/docker/compose/blob/81707ef1ad94403789166d2fe042c8a718a4c748/compose/cli/docker_client.py#L7. -func NewClientFromEnv() (*Client, error) { - client, err := NewVersionedClientFromEnv("") - if err != nil { - return nil, err - } - client.SkipServerVersionCheck = true - return client, nil -} - -// NewVersionedClientFromEnv returns a Client instance ready for TLS communications created from -// Docker's default logic for the environment variables DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH, -// and using a specific remote API version. -// -// See https://github.com/docker/docker/blob/1f963af697e8df3a78217f6fdbf67b8123a7db94/docker/docker.go#L68. -// See https://github.com/docker/compose/blob/81707ef1ad94403789166d2fe042c8a718a4c748/compose/cli/docker_client.py#L7. -func NewVersionedClientFromEnv(apiVersionString string) (*Client, error) { - dockerEnv, err := getDockerEnv() - if err != nil { - return nil, err - } - dockerHost := dockerEnv.dockerHost - if dockerEnv.dockerTLSVerify { - parts := strings.SplitN(dockerEnv.dockerHost, "://", 2) - if len(parts) != 2 { - return nil, fmt.Errorf("could not split %s into two parts by ://", dockerHost) - } - cert := filepath.Join(dockerEnv.dockerCertPath, "cert.pem") - key := filepath.Join(dockerEnv.dockerCertPath, "key.pem") - ca := filepath.Join(dockerEnv.dockerCertPath, "ca.pem") - return NewVersionedTLSClient(dockerEnv.dockerHost, cert, key, ca, apiVersionString) - } - return NewVersionedClient(dockerEnv.dockerHost, apiVersionString) -} - -// NewVersionedTLSClientFromBytes returns a Client instance ready for TLS communications with the givens -// server endpoint, key and certificates (passed inline to the function as opposed to being -// read from a local file), using a specific remote API version. 
-func NewVersionedTLSClientFromBytes(endpoint string, certPEMBlock, keyPEMBlock, caPEMCert []byte, apiVersionString string) (*Client, error) { - u, err := parseEndpoint(endpoint, true) - if err != nil { - return nil, err - } - var requestedAPIVersion APIVersion - if strings.Contains(apiVersionString, ".") { - requestedAPIVersion, err = NewAPIVersion(apiVersionString) - if err != nil { - return nil, err - } - } - if certPEMBlock == nil || keyPEMBlock == nil { - return nil, errors.New("Both cert and key are required") - } - tlsCert, err := tls.X509KeyPair(certPEMBlock, keyPEMBlock) - if err != nil { - return nil, err - } - tlsConfig := &tls.Config{Certificates: []tls.Certificate{tlsCert}} - if caPEMCert == nil { - tlsConfig.InsecureSkipVerify = true - } else { - caPool := x509.NewCertPool() - if !caPool.AppendCertsFromPEM(caPEMCert) { - return nil, errors.New("Could not add RootCA pem") - } - tlsConfig.RootCAs = caPool - } - tr := cleanhttp.DefaultTransport() - tr.TLSClientConfig = tlsConfig - if err != nil { - return nil, err - } - return &Client{ - HTTPClient: &http.Client{Transport: tr}, - TLSConfig: tlsConfig, - Dialer: &net.Dialer{}, - endpoint: endpoint, - endpointURL: u, - eventMonitor: new(eventMonitoringState), - requestedAPIVersion: requestedAPIVersion, - }, nil -} - -func (c *Client) checkAPIVersion() error { - serverAPIVersionString, err := c.getServerAPIVersionString() - if err != nil { - return err - } - c.serverAPIVersion, err = NewAPIVersion(serverAPIVersionString) - if err != nil { - return err - } - if c.requestedAPIVersion == nil { - c.expectedAPIVersion = c.serverAPIVersion - } else { - c.expectedAPIVersion = c.requestedAPIVersion - } - return nil -} - -// Endpoint returns the current endpoint. It's useful for getting the endpoint -// when using functions that get this data from the environment (like -// NewClientFromEnv. -func (c *Client) Endpoint() string { - return c.endpoint -} - -// Ping pings the docker server -// -// See https://goo.gl/kQCfJj for more details. 
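As a usage sketch, a client built from the environment (see `NewClientFromEnv` above) can use `Ping` as a cheap reachability check. This assumes the `DOCKER_*` variables point at a running daemon.

```go
package main

import (
	"log"

	"github.com/fsouza/go-dockerclient"
)

func main() {
	// Build a client from the DOCKER_* environment variables.
	client, err := docker.NewClientFromEnv()
	if err != nil {
		log.Fatalf("cannot create client: %v", err)
	}

	// Ping issues GET /_ping and returns nil when the daemon answers 200 OK.
	if err := client.Ping(); err != nil {
		log.Fatalf("docker daemon is not reachable: %v", err)
	}
	log.Println("docker daemon is up")
}
```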
-func (c *Client) Ping() error { - path := "/_ping" - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - return err - } - if resp.StatusCode != http.StatusOK { - return newError(resp) - } - resp.Body.Close() - return nil -} - -func (c *Client) getServerAPIVersionString() (version string, err error) { - resp, err := c.do("GET", "/version", doOptions{}) - if err != nil { - return "", err - } - defer resp.Body.Close() - if resp.StatusCode != http.StatusOK { - return "", fmt.Errorf("Received unexpected status %d while trying to retrieve the server version", resp.StatusCode) - } - var versionResponse map[string]interface{} - if err := json.NewDecoder(resp.Body).Decode(&versionResponse); err != nil { - return "", err - } - if version, ok := (versionResponse["ApiVersion"]).(string); ok { - return version, nil - } - return "", nil -} - -type doOptions struct { - data interface{} - forceJSON bool - headers map[string]string -} - -func (c *Client) do(method, path string, doOptions doOptions) (*http.Response, error) { - var params io.Reader - if doOptions.data != nil || doOptions.forceJSON { - buf, err := json.Marshal(doOptions.data) - if err != nil { - return nil, err - } - params = bytes.NewBuffer(buf) - } - if path != "/version" && !c.SkipServerVersionCheck && c.expectedAPIVersion == nil { - err := c.checkAPIVersion() - if err != nil { - return nil, err - } - } - httpClient := c.HTTPClient - protocol := c.endpointURL.Scheme - var u string - if protocol == "unix" { - httpClient = c.unixClient() - u = c.getFakeUnixURL(path) - } else { - u = c.getURL(path) - } - req, err := http.NewRequest(method, u, params) - if err != nil { - return nil, err - } - req.Header.Set("User-Agent", userAgent) - if doOptions.data != nil { - req.Header.Set("Content-Type", "application/json") - } else if method == "POST" { - req.Header.Set("Content-Type", "plain/text") - } - - for k, v := range doOptions.headers { - req.Header.Set(k, v) - } - resp, err := httpClient.Do(req) - if err != nil { - if strings.Contains(err.Error(), "connection refused") { - return nil, ErrConnectionRefused - } - return nil, err - } - if resp.StatusCode < 200 || resp.StatusCode >= 400 { - return nil, newError(resp) - } - return resp, nil -} - -type streamOptions struct { - setRawTerminal bool - rawJSONStream bool - useJSONDecoder bool - headers map[string]string - in io.Reader - stdout io.Writer - stderr io.Writer - // timeout is the initial connection timeout - timeout time.Duration - // Timeout with no data is received, it's reset every time new data - // arrives - inactivityTimeout time.Duration -} - -func (c *Client) stream(method, path string, streamOptions streamOptions) error { - if (method == "POST" || method == "PUT") && streamOptions.in == nil { - streamOptions.in = bytes.NewReader(nil) - } - if path != "/version" && !c.SkipServerVersionCheck && c.expectedAPIVersion == nil { - err := c.checkAPIVersion() - if err != nil { - return err - } - } - req, err := http.NewRequest(method, c.getURL(path), streamOptions.in) - if err != nil { - return err - } - req.Header.Set("User-Agent", userAgent) - if method == "POST" { - req.Header.Set("Content-Type", "plain/text") - } - for key, val := range streamOptions.headers { - req.Header.Set(key, val) - } - var resp *http.Response - protocol := c.endpointURL.Scheme - address := c.endpointURL.Path - if streamOptions.stdout == nil { - streamOptions.stdout = ioutil.Discard - } - if streamOptions.stderr == nil { - streamOptions.stderr = ioutil.Discard - } - cancelRequest := 
cancelable(c.HTTPClient, req) - if protocol == "unix" { - dial, err := c.Dialer.Dial(protocol, address) - if err != nil { - return err - } - cancelRequest = func() { dial.Close() } - defer dial.Close() - breader := bufio.NewReader(dial) - err = req.Write(dial) - if err != nil { - return err - } - - // ReadResponse may hang if server does not replay - if streamOptions.timeout > 0 { - dial.SetDeadline(time.Now().Add(streamOptions.timeout)) - } - - if resp, err = http.ReadResponse(breader, req); err != nil { - // Cancel timeout for future I/O operations - if streamOptions.timeout > 0 { - dial.SetDeadline(time.Time{}) - } - if strings.Contains(err.Error(), "connection refused") { - return ErrConnectionRefused - } - return err - } - } else { - if resp, err = c.HTTPClient.Do(req); err != nil { - if strings.Contains(err.Error(), "connection refused") { - return ErrConnectionRefused - } - return err - } - } - defer resp.Body.Close() - if resp.StatusCode < 200 || resp.StatusCode >= 400 { - return newError(resp) - } - var canceled uint32 - if streamOptions.inactivityTimeout > 0 { - ch := handleInactivityTimeout(&streamOptions, cancelRequest, &canceled) - defer close(ch) - } - err = handleStreamResponse(resp, &streamOptions) - if err != nil { - if atomic.LoadUint32(&canceled) != 0 { - return ErrInactivityTimeout - } - return err - } - return nil -} - -func handleStreamResponse(resp *http.Response, streamOptions *streamOptions) error { - var err error - if !streamOptions.useJSONDecoder && resp.Header.Get("Content-Type") != "application/json" { - if streamOptions.setRawTerminal { - _, err = io.Copy(streamOptions.stdout, resp.Body) - } else { - _, err = stdcopy.StdCopy(streamOptions.stdout, streamOptions.stderr, resp.Body) - } - return err - } - // if we want to get raw json stream, just copy it back to output - // without decoding it - if streamOptions.rawJSONStream { - _, err = io.Copy(streamOptions.stdout, resp.Body) - return err - } - dec := json.NewDecoder(resp.Body) - for { - var m jsonMessage - if err := dec.Decode(&m); err == io.EOF { - break - } else if err != nil { - return err - } - if m.Stream != "" { - fmt.Fprint(streamOptions.stdout, m.Stream) - } else if m.Progress != "" { - fmt.Fprintf(streamOptions.stdout, "%s %s\r", m.Status, m.Progress) - } else if m.Error != "" { - return errors.New(m.Error) - } - if m.Status != "" { - fmt.Fprintln(streamOptions.stdout, m.Status) - } - } - return nil -} - -type proxyWriter struct { - io.Writer - calls uint64 -} - -func (p *proxyWriter) callCount() uint64 { - return atomic.LoadUint64(&p.calls) -} - -func (p *proxyWriter) Write(data []byte) (int, error) { - atomic.AddUint64(&p.calls, 1) - return p.Writer.Write(data) -} - -func handleInactivityTimeout(options *streamOptions, cancelRequest func(), canceled *uint32) chan<- struct{} { - done := make(chan struct{}) - proxyStdout := &proxyWriter{Writer: options.stdout} - proxyStderr := &proxyWriter{Writer: options.stderr} - options.stdout = proxyStdout - options.stderr = proxyStderr - go func() { - var lastCallCount uint64 - for { - select { - case <-time.After(options.inactivityTimeout): - case <-done: - return - } - curCallCount := proxyStdout.callCount() + proxyStderr.callCount() - if curCallCount == lastCallCount { - atomic.AddUint32(canceled, 1) - cancelRequest() - return - } - lastCallCount = curCallCount - } - }() - return done -} - -type hijackOptions struct { - success chan struct{} - setRawTerminal bool - in io.Reader - stdout io.Writer - stderr io.Writer - data interface{} -} - -// CloseWaiter is 
an interface with methods for closing the underlying resource -// and then waiting for it to finish processing. -type CloseWaiter interface { - io.Closer - Wait() error -} - -type waiterFunc func() error - -func (w waiterFunc) Wait() error { return w() } - -type closerFunc func() error - -func (c closerFunc) Close() error { return c() } - -func (c *Client) hijack(method, path string, hijackOptions hijackOptions) (CloseWaiter, error) { - if path != "/version" && !c.SkipServerVersionCheck && c.expectedAPIVersion == nil { - err := c.checkAPIVersion() - if err != nil { - return nil, err - } - } - var params io.Reader - if hijackOptions.data != nil { - buf, err := json.Marshal(hijackOptions.data) - if err != nil { - return nil, err - } - params = bytes.NewBuffer(buf) - } - req, err := http.NewRequest(method, c.getURL(path), params) - if err != nil { - return nil, err - } - req.Header.Set("Content-Type", "application/json") - req.Header.Set("Connection", "Upgrade") - req.Header.Set("Upgrade", "tcp") - protocol := c.endpointURL.Scheme - address := c.endpointURL.Path - if protocol != "unix" { - protocol = "tcp" - address = c.endpointURL.Host - } - var dial net.Conn - if c.TLSConfig != nil && protocol != "unix" { - dial, err = tlsDialWithDialer(c.Dialer, protocol, address, c.TLSConfig) - if err != nil { - return nil, err - } - } else { - dial, err = c.Dialer.Dial(protocol, address) - if err != nil { - return nil, err - } - } - - errs := make(chan error) - quit := make(chan struct{}) - go func() { - clientconn := httputil.NewClientConn(dial, nil) - defer clientconn.Close() - clientconn.Do(req) - if hijackOptions.success != nil { - hijackOptions.success <- struct{}{} - <-hijackOptions.success - } - rwc, br := clientconn.Hijack() - defer rwc.Close() - - errChanOut := make(chan error, 1) - errChanIn := make(chan error, 1) - if hijackOptions.stdout == nil && hijackOptions.stderr == nil { - close(errChanOut) - } else { - // Only copy if hijackOptions.stdout and/or hijackOptions.stderr is actually set. 
- // Otherwise, if the only stream you care about is stdin, your attach session - // will "hang" until the container terminates, even though you're not reading - // stdout/stderr - if hijackOptions.stdout == nil { - hijackOptions.stdout = ioutil.Discard - } - if hijackOptions.stderr == nil { - hijackOptions.stderr = ioutil.Discard - } - - go func() { - defer func() { - if hijackOptions.in != nil { - if closer, ok := hijackOptions.in.(io.Closer); ok { - closer.Close() - } - errChanIn <- nil - } - }() - - var err error - if hijackOptions.setRawTerminal { - _, err = io.Copy(hijackOptions.stdout, br) - } else { - _, err = stdcopy.StdCopy(hijackOptions.stdout, hijackOptions.stderr, br) - } - errChanOut <- err - }() - } - - go func() { - var err error - if hijackOptions.in != nil { - _, err = io.Copy(rwc, hijackOptions.in) - } - errChanIn <- err - rwc.(interface { - CloseWrite() error - }).CloseWrite() - }() - - var errIn error - select { - case errIn = <-errChanIn: - case <-quit: - return - } - - var errOut error - select { - case errOut = <-errChanOut: - case <-quit: - return - } - - if errIn != nil { - errs <- errIn - } else { - errs <- errOut - } - }() - - return struct { - closerFunc - waiterFunc - }{ - closerFunc(func() error { close(quit); return nil }), - waiterFunc(func() error { return <-errs }), - }, nil -} - -func (c *Client) getURL(path string) string { - urlStr := strings.TrimRight(c.endpointURL.String(), "/") - if c.endpointURL.Scheme == "unix" { - urlStr = "" - } - if c.requestedAPIVersion != nil { - return fmt.Sprintf("%s/v%s%s", urlStr, c.requestedAPIVersion, path) - } - return fmt.Sprintf("%s%s", urlStr, path) -} - -// getFakeUnixURL returns the URL needed to make an HTTP request over a UNIX -// domain socket to the given path. -func (c *Client) getFakeUnixURL(path string) string { - u := *c.endpointURL // Copy. - - // Override URL so that net/http will not complain. - u.Scheme = "http" - u.Host = "unix.sock" // Doesn't matter what this is - it's not used. 
- u.Path = "" - urlStr := strings.TrimRight(u.String(), "/") - if c.requestedAPIVersion != nil { - return fmt.Sprintf("%s/v%s%s", urlStr, c.requestedAPIVersion, path) - } - return fmt.Sprintf("%s%s", urlStr, path) -} - -func (c *Client) unixClient() *http.Client { - if c.unixHTTPClient != nil { - return c.unixHTTPClient - } - socketPath := c.endpointURL.Path - tr := &http.Transport{ - Dial: func(network, addr string) (net.Conn, error) { - return c.Dialer.Dial("unix", socketPath) - }, - } - cleanhttp.SetTransportFinalizer(tr) - c.unixHTTPClient = &http.Client{Transport: tr} - return c.unixHTTPClient -} - -type jsonMessage struct { - Status string `json:"status,omitempty"` - Progress string `json:"progress,omitempty"` - Error string `json:"error,omitempty"` - Stream string `json:"stream,omitempty"` -} - -func queryString(opts interface{}) string { - if opts == nil { - return "" - } - value := reflect.ValueOf(opts) - if value.Kind() == reflect.Ptr { - value = value.Elem() - } - if value.Kind() != reflect.Struct { - return "" - } - items := url.Values(map[string][]string{}) - for i := 0; i < value.NumField(); i++ { - field := value.Type().Field(i) - if field.PkgPath != "" { - continue - } - key := field.Tag.Get("qs") - if key == "" { - key = strings.ToLower(field.Name) - } else if key == "-" { - continue - } - addQueryStringValue(items, key, value.Field(i)) - } - return items.Encode() -} - -func addQueryStringValue(items url.Values, key string, v reflect.Value) { - switch v.Kind() { - case reflect.Bool: - if v.Bool() { - items.Add(key, "1") - } - case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: - if v.Int() > 0 { - items.Add(key, strconv.FormatInt(v.Int(), 10)) - } - case reflect.Float32, reflect.Float64: - if v.Float() > 0 { - items.Add(key, strconv.FormatFloat(v.Float(), 'f', -1, 64)) - } - case reflect.String: - if v.String() != "" { - items.Add(key, v.String()) - } - case reflect.Ptr: - if !v.IsNil() { - if b, err := json.Marshal(v.Interface()); err == nil { - items.Add(key, string(b)) - } - } - case reflect.Map: - if len(v.MapKeys()) > 0 { - if b, err := json.Marshal(v.Interface()); err == nil { - items.Add(key, string(b)) - } - } - case reflect.Array, reflect.Slice: - vLen := v.Len() - if vLen > 0 { - for i := 0; i < vLen; i++ { - addQueryStringValue(items, key, v.Index(i)) - } - } - } -} - -// Error represents failures in the API. It represents a failure from the API. 
-type Error struct { - Status int - Message string -} - -func newError(resp *http.Response) *Error { - defer resp.Body.Close() - data, err := ioutil.ReadAll(resp.Body) - if err != nil { - return &Error{Status: resp.StatusCode, Message: fmt.Sprintf("cannot read body, err: %v", err)} - } - return &Error{Status: resp.StatusCode, Message: string(data)} -} - -func (e *Error) Error() string { - return fmt.Sprintf("API error (%d): %s", e.Status, e.Message) -} - -func parseEndpoint(endpoint string, tls bool) (*url.URL, error) { - if endpoint != "" && !strings.Contains(endpoint, "://") { - endpoint = "tcp://" + endpoint - } - u, err := url.Parse(endpoint) - if err != nil { - return nil, ErrInvalidEndpoint - } - if tls { - u.Scheme = "https" - } - switch u.Scheme { - case "unix": - return u, nil - case "http", "https", "tcp": - _, port, err := net.SplitHostPort(u.Host) - if err != nil { - if e, ok := err.(*net.AddrError); ok { - if e.Err == "missing port in address" { - return u, nil - } - } - return nil, ErrInvalidEndpoint - } - number, err := strconv.ParseInt(port, 10, 64) - if err == nil && number > 0 && number < 65536 { - if u.Scheme == "tcp" { - if tls { - u.Scheme = "https" - } else { - u.Scheme = "http" - } - } - return u, nil - } - return nil, ErrInvalidEndpoint - default: - return nil, ErrInvalidEndpoint - } -} - -type dockerEnv struct { - dockerHost string - dockerTLSVerify bool - dockerCertPath string -} - -func getDockerEnv() (*dockerEnv, error) { - dockerHost := os.Getenv("DOCKER_HOST") - var err error - if dockerHost == "" { - dockerHost, err = DefaultDockerHost() - if err != nil { - return nil, err - } - } - dockerTLSVerify := os.Getenv("DOCKER_TLS_VERIFY") != "" - var dockerCertPath string - if dockerTLSVerify { - dockerCertPath = os.Getenv("DOCKER_CERT_PATH") - if dockerCertPath == "" { - home := homedir.Get() - if home == "" { - return nil, errors.New("environment variable HOME must be set if DOCKER_CERT_PATH is not set") - } - dockerCertPath = filepath.Join(home, ".docker") - dockerCertPath, err = filepath.Abs(dockerCertPath) - if err != nil { - return nil, err - } - } - } - return &dockerEnv{ - dockerHost: dockerHost, - dockerTLSVerify: dockerTLSVerify, - dockerCertPath: dockerCertPath, - }, nil -} - -// DefaultDockerHost returns the default docker socket for the current OS -func DefaultDockerHost() (string, error) { - var defaultHost string - if runtime.GOOS == "windows" { - // If we do not have a host, default to TCP socket on Windows - defaultHost = fmt.Sprintf("tcp://%s:%d", opts.DefaultHTTPHost, opts.DefaultHTTPPort) - } else { - // If we do not have a host, default to unix socket - defaultHost = fmt.Sprintf("unix://%s", opts.DefaultUnixSocket) - } - return opts.ValidateHost(defaultHost) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/container.go b/vendor/github.com/fsouza/go-dockerclient/container.go deleted file mode 100644 index f7ed5f5740f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/container.go +++ /dev/null @@ -1,1325 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -package docker - -import ( - "encoding/json" - "errors" - "fmt" - "io" - "net/http" - "net/url" - "strconv" - "strings" - "time" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/go-units" -) - -// ErrContainerAlreadyExists is the error returned by CreateContainer when the -// container already exists. -var ErrContainerAlreadyExists = errors.New("container already exists") - -// ListContainersOptions specify parameters to the ListContainers function. -// -// See https://goo.gl/47a6tO for more details. -type ListContainersOptions struct { - All bool - Size bool - Limit int - Since string - Before string - Filters map[string][]string -} - -// APIPort is a type that represents a port mapping returned by the Docker API -type APIPort struct { - PrivatePort int64 `json:"PrivatePort,omitempty" yaml:"PrivatePort,omitempty"` - PublicPort int64 `json:"PublicPort,omitempty" yaml:"PublicPort,omitempty"` - Type string `json:"Type,omitempty" yaml:"Type,omitempty"` - IP string `json:"IP,omitempty" yaml:"IP,omitempty"` -} - -// APIMount represents a mount point for a container. -type APIMount struct { - Name string `json:"Name,omitempty" yaml:"Name,omitempty"` - Source string `json:"Source,omitempty" yaml:"Source,omitempty"` - Destination string `json:"Destination,omitempty" yaml:"Destination,omitempty"` - Driver string `json:"Driver,omitempty" yaml:"Driver,omitempty"` - Mode string `json:"Mode,omitempty" yaml:"Mode,omitempty"` - RW bool `json:"RW,omitempty" yaml:"RW,omitempty"` - Propogation string `json:"Propogation,omitempty" yaml:"Propogation,omitempty"` -} - -// APIContainers represents each container in the list returned by -// ListContainers. -type APIContainers struct { - ID string `json:"Id" yaml:"Id"` - Image string `json:"Image,omitempty" yaml:"Image,omitempty"` - Command string `json:"Command,omitempty" yaml:"Command,omitempty"` - Created int64 `json:"Created,omitempty" yaml:"Created,omitempty"` - State string `json:"State,omitempty" yaml:"State,omitempty"` - Status string `json:"Status,omitempty" yaml:"Status,omitempty"` - Ports []APIPort `json:"Ports,omitempty" yaml:"Ports,omitempty"` - SizeRw int64 `json:"SizeRw,omitempty" yaml:"SizeRw,omitempty"` - SizeRootFs int64 `json:"SizeRootFs,omitempty" yaml:"SizeRootFs,omitempty"` - Names []string `json:"Names,omitempty" yaml:"Names,omitempty"` - Labels map[string]string `json:"Labels,omitempty" yaml:"Labels,omitempty"` - Networks NetworkList `json:"NetworkSettings,omitempty" yaml:"NetworkSettings,omitempty"` - Mounts []APIMount `json:"Mounts,omitempty" yaml:"Mounts,omitempty"` -} - -// NetworkList encapsulates a map of networks, as returned by the Docker API in -// ListContainers. -type NetworkList struct { - Networks map[string]ContainerNetwork `json:"Networks" yaml:"Networks,omitempty"` -} - -// ListContainers returns a slice of containers matching the given criteria. -// -// See https://goo.gl/47a6tO for more details. -func (c *Client) ListContainers(opts ListContainersOptions) ([]APIContainers, error) { - path := "/containers/json?" + queryString(opts) - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - var containers []APIContainers - if err := json.NewDecoder(resp.Body).Decode(&containers); err != nil { - return nil, err - } - return containers, nil -} - -// Port represents the port number and the protocol, in the form -// /. For example: 80/tcp. -type Port string - -// Port returns the number of the port. 
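A short sketch tying `ListContainers`, `APIContainers`, and `APIPort` together; it assumes a client built from the environment and simply prints each container with its port mappings.

```go
package main

import (
	"fmt"
	"log"

	"github.com/fsouza/go-dockerclient"
)

func main() {
	client, err := docker.NewClientFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// All: true also includes stopped containers, mirroring `docker ps -a`.
	containers, err := client.ListContainers(docker.ListContainersOptions{All: true})
	if err != nil {
		log.Fatal(err)
	}

	for _, c := range containers {
		fmt.Printf("%s %s %s\n", c.ID, c.Image, c.State)
		for _, p := range c.Ports {
			// Each APIPort carries the private/public port pair and protocol.
			fmt.Printf("  %d -> %d/%s\n", p.PrivatePort, p.PublicPort, p.Type)
		}
	}
}
```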
-func (p Port) Port() string { - return strings.Split(string(p), "/")[0] -} - -// Proto returns the name of the protocol. -func (p Port) Proto() string { - parts := strings.Split(string(p), "/") - if len(parts) == 1 { - return "tcp" - } - return parts[1] -} - -// State represents the state of a container. -type State struct { - Status string `json:"Status,omitempty" yaml:"Status,omitempty"` - Running bool `json:"Running,omitempty" yaml:"Running,omitempty"` - Paused bool `json:"Paused,omitempty" yaml:"Paused,omitempty"` - Restarting bool `json:"Restarting,omitempty" yaml:"Restarting,omitempty"` - OOMKilled bool `json:"OOMKilled,omitempty" yaml:"OOMKilled,omitempty"` - RemovalInProgress bool `json:"RemovalInProgress,omitempty" yaml:"RemovalInProgress,omitempty"` - Dead bool `json:"Dead,omitempty" yaml:"Dead,omitempty"` - Pid int `json:"Pid,omitempty" yaml:"Pid,omitempty"` - ExitCode int `json:"ExitCode,omitempty" yaml:"ExitCode,omitempty"` - Error string `json:"Error,omitempty" yaml:"Error,omitempty"` - StartedAt time.Time `json:"StartedAt,omitempty" yaml:"StartedAt,omitempty"` - FinishedAt time.Time `json:"FinishedAt,omitempty" yaml:"FinishedAt,omitempty"` -} - -// String returns a human-readable description of the state -func (s *State) String() string { - if s.Running { - if s.Paused { - return fmt.Sprintf("Up %s (Paused)", units.HumanDuration(time.Now().UTC().Sub(s.StartedAt))) - } - if s.Restarting { - return fmt.Sprintf("Restarting (%d) %s ago", s.ExitCode, units.HumanDuration(time.Now().UTC().Sub(s.FinishedAt))) - } - - return fmt.Sprintf("Up %s", units.HumanDuration(time.Now().UTC().Sub(s.StartedAt))) - } - - if s.RemovalInProgress { - return "Removal In Progress" - } - - if s.Dead { - return "Dead" - } - - if s.StartedAt.IsZero() { - return "Created" - } - - if s.FinishedAt.IsZero() { - return "" - } - - return fmt.Sprintf("Exited (%d) %s ago", s.ExitCode, units.HumanDuration(time.Now().UTC().Sub(s.FinishedAt))) -} - -// StateString returns a single string to describe state -func (s *State) StateString() string { - if s.Running { - if s.Paused { - return "paused" - } - if s.Restarting { - return "restarting" - } - return "running" - } - - if s.Dead { - return "dead" - } - - if s.StartedAt.IsZero() { - return "created" - } - - return "exited" -} - -// PortBinding represents the host/container port mapping as returned in the -// `docker inspect` json -type PortBinding struct { - HostIP string `json:"HostIP,omitempty" yaml:"HostIP,omitempty"` - HostPort string `json:"HostPort,omitempty" yaml:"HostPort,omitempty"` -} - -// PortMapping represents a deprecated field in the `docker inspect` output, -// and its value as found in NetworkSettings should always be nil -type PortMapping map[string]string - -// ContainerNetwork represents the networking settings of a container per network. 
-type ContainerNetwork struct { - MacAddress string `json:"MacAddress,omitempty" yaml:"MacAddress,omitempty"` - GlobalIPv6PrefixLen int `json:"GlobalIPv6PrefixLen,omitempty" yaml:"GlobalIPv6PrefixLen,omitempty"` - GlobalIPv6Address string `json:"GlobalIPv6Address,omitempty" yaml:"GlobalIPv6Address,omitempty"` - IPv6Gateway string `json:"IPv6Gateway,omitempty" yaml:"IPv6Gateway,omitempty"` - IPPrefixLen int `json:"IPPrefixLen,omitempty" yaml:"IPPrefixLen,omitempty"` - IPAddress string `json:"IPAddress,omitempty" yaml:"IPAddress,omitempty"` - Gateway string `json:"Gateway,omitempty" yaml:"Gateway,omitempty"` - EndpointID string `json:"EndpointID,omitempty" yaml:"EndpointID,omitempty"` - NetworkID string `json:"NetworkID,omitempty" yaml:"NetworkID,omitempty"` -} - -// NetworkSettings contains network-related information about a container -type NetworkSettings struct { - Networks map[string]ContainerNetwork `json:"Networks,omitempty" yaml:"Networks,omitempty"` - IPAddress string `json:"IPAddress,omitempty" yaml:"IPAddress,omitempty"` - IPPrefixLen int `json:"IPPrefixLen,omitempty" yaml:"IPPrefixLen,omitempty"` - MacAddress string `json:"MacAddress,omitempty" yaml:"MacAddress,omitempty"` - Gateway string `json:"Gateway,omitempty" yaml:"Gateway,omitempty"` - Bridge string `json:"Bridge,omitempty" yaml:"Bridge,omitempty"` - PortMapping map[string]PortMapping `json:"PortMapping,omitempty" yaml:"PortMapping,omitempty"` - Ports map[Port][]PortBinding `json:"Ports,omitempty" yaml:"Ports,omitempty"` - NetworkID string `json:"NetworkID,omitempty" yaml:"NetworkID,omitempty"` - EndpointID string `json:"EndpointID,omitempty" yaml:"EndpointID,omitempty"` - SandboxKey string `json:"SandboxKey,omitempty" yaml:"SandboxKey,omitempty"` - GlobalIPv6Address string `json:"GlobalIPv6Address,omitempty" yaml:"GlobalIPv6Address,omitempty"` - GlobalIPv6PrefixLen int `json:"GlobalIPv6PrefixLen,omitempty" yaml:"GlobalIPv6PrefixLen,omitempty"` - IPv6Gateway string `json:"IPv6Gateway,omitempty" yaml:"IPv6Gateway,omitempty"` - LinkLocalIPv6Address string `json:"LinkLocalIPv6Address,omitempty" yaml:"LinkLocalIPv6Address,omitempty"` - LinkLocalIPv6PrefixLen int `json:"LinkLocalIPv6PrefixLen,omitempty" yaml:"LinkLocalIPv6PrefixLen,omitempty"` - SecondaryIPAddresses []string `json:"SecondaryIPAddresses,omitempty" yaml:"SecondaryIPAddresses,omitempty"` - SecondaryIPv6Addresses []string `json:"SecondaryIPv6Addresses,omitempty" yaml:"SecondaryIPv6Addresses,omitempty"` -} - -// PortMappingAPI translates the port mappings as contained in NetworkSettings -// into the format in which they would appear when returned by the API -func (settings *NetworkSettings) PortMappingAPI() []APIPort { - var mapping []APIPort - for port, bindings := range settings.Ports { - p, _ := parsePort(port.Port()) - if len(bindings) == 0 { - mapping = append(mapping, APIPort{ - PrivatePort: int64(p), - Type: port.Proto(), - }) - continue - } - for _, binding := range bindings { - p, _ := parsePort(port.Port()) - h, _ := parsePort(binding.HostPort) - mapping = append(mapping, APIPort{ - PrivatePort: int64(p), - PublicPort: int64(h), - Type: port.Proto(), - IP: binding.HostIP, - }) - } - } - return mapping -} - -func parsePort(rawPort string) (int, error) { - port, err := strconv.ParseUint(rawPort, 10, 16) - if err != nil { - return 0, err - } - return int(port), nil -} - -// Config is the list of configuration options used when creating a container. -// Config does not contain the options that are specific to starting a container on a -// given host. 
Those are contained in HostConfig -type Config struct { - Hostname string `json:"Hostname,omitempty" yaml:"Hostname,omitempty"` - Domainname string `json:"Domainname,omitempty" yaml:"Domainname,omitempty"` - User string `json:"User,omitempty" yaml:"User,omitempty"` - Memory int64 `json:"Memory,omitempty" yaml:"Memory,omitempty"` - MemorySwap int64 `json:"MemorySwap,omitempty" yaml:"MemorySwap,omitempty"` - MemoryReservation int64 `json:"MemoryReservation,omitempty" yaml:"MemoryReservation,omitempty"` - KernelMemory int64 `json:"KernelMemory,omitempty" yaml:"KernelMemory,omitempty"` - PidsLimit int64 `json:"PidsLimit,omitempty" yaml:"PidsLimit,omitempty"` - CPUShares int64 `json:"CpuShares,omitempty" yaml:"CpuShares,omitempty"` - CPUSet string `json:"Cpuset,omitempty" yaml:"Cpuset,omitempty"` - AttachStdin bool `json:"AttachStdin,omitempty" yaml:"AttachStdin,omitempty"` - AttachStdout bool `json:"AttachStdout,omitempty" yaml:"AttachStdout,omitempty"` - AttachStderr bool `json:"AttachStderr,omitempty" yaml:"AttachStderr,omitempty"` - PortSpecs []string `json:"PortSpecs,omitempty" yaml:"PortSpecs,omitempty"` - ExposedPorts map[Port]struct{} `json:"ExposedPorts,omitempty" yaml:"ExposedPorts,omitempty"` - StopSignal string `json:"StopSignal,omitempty" yaml:"StopSignal,omitempty"` - Tty bool `json:"Tty,omitempty" yaml:"Tty,omitempty"` - OpenStdin bool `json:"OpenStdin,omitempty" yaml:"OpenStdin,omitempty"` - StdinOnce bool `json:"StdinOnce,omitempty" yaml:"StdinOnce,omitempty"` - Env []string `json:"Env,omitempty" yaml:"Env,omitempty"` - Cmd []string `json:"Cmd" yaml:"Cmd"` - DNS []string `json:"Dns,omitempty" yaml:"Dns,omitempty"` // For Docker API v1.9 and below only - Image string `json:"Image,omitempty" yaml:"Image,omitempty"` - Volumes map[string]struct{} `json:"Volumes,omitempty" yaml:"Volumes,omitempty"` - VolumeDriver string `json:"VolumeDriver,omitempty" yaml:"VolumeDriver,omitempty"` - VolumesFrom string `json:"VolumesFrom,omitempty" yaml:"VolumesFrom,omitempty"` - WorkingDir string `json:"WorkingDir,omitempty" yaml:"WorkingDir,omitempty"` - MacAddress string `json:"MacAddress,omitempty" yaml:"MacAddress,omitempty"` - Entrypoint []string `json:"Entrypoint" yaml:"Entrypoint"` - NetworkDisabled bool `json:"NetworkDisabled,omitempty" yaml:"NetworkDisabled,omitempty"` - SecurityOpts []string `json:"SecurityOpts,omitempty" yaml:"SecurityOpts,omitempty"` - OnBuild []string `json:"OnBuild,omitempty" yaml:"OnBuild,omitempty"` - Mounts []Mount `json:"Mounts,omitempty" yaml:"Mounts,omitempty"` - Labels map[string]string `json:"Labels,omitempty" yaml:"Labels,omitempty"` -} - -// Mount represents a mount point in the container. -// -// It has been added in the version 1.20 of the Docker API, available since -// Docker 1.8. -type Mount struct { - Name string - Source string - Destination string - Driver string - Mode string - RW bool -} - -// LogConfig defines the log driver type and the configuration for it. -type LogConfig struct { - Type string `json:"Type,omitempty" yaml:"Type,omitempty"` - Config map[string]string `json:"Config,omitempty" yaml:"Config,omitempty"` -} - -// ULimit defines system-wide resource limitations -// This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users. 
-type ULimit struct { - Name string `json:"Name,omitempty" yaml:"Name,omitempty"` - Soft int64 `json:"Soft,omitempty" yaml:"Soft,omitempty"` - Hard int64 `json:"Hard,omitempty" yaml:"Hard,omitempty"` -} - -// SwarmNode containers information about which Swarm node the container is on -type SwarmNode struct { - ID string `json:"ID,omitempty" yaml:"ID,omitempty"` - IP string `json:"IP,omitempty" yaml:"IP,omitempty"` - Addr string `json:"Addr,omitempty" yaml:"Addr,omitempty"` - Name string `json:"Name,omitempty" yaml:"Name,omitempty"` - CPUs int64 `json:"CPUs,omitempty" yaml:"CPUs,omitempty"` - Memory int64 `json:"Memory,omitempty" yaml:"Memory,omitempty"` - Labels map[string]string `json:"Labels,omitempty" yaml:"Labels,omitempty"` -} - -// GraphDriver contains information about the GraphDriver used by the container -type GraphDriver struct { - Name string `json:"Name,omitempty" yaml:"Name,omitempty"` - Data map[string]string `json:"Data,omitempty" yaml:"Data,omitempty"` -} - -// Container is the type encompasing everything about a container - its config, -// hostconfig, etc. -type Container struct { - ID string `json:"Id" yaml:"Id"` - - Created time.Time `json:"Created,omitempty" yaml:"Created,omitempty"` - - Path string `json:"Path,omitempty" yaml:"Path,omitempty"` - Args []string `json:"Args,omitempty" yaml:"Args,omitempty"` - - Config *Config `json:"Config,omitempty" yaml:"Config,omitempty"` - State State `json:"State,omitempty" yaml:"State,omitempty"` - Image string `json:"Image,omitempty" yaml:"Image,omitempty"` - - Node *SwarmNode `json:"Node,omitempty" yaml:"Node,omitempty"` - - NetworkSettings *NetworkSettings `json:"NetworkSettings,omitempty" yaml:"NetworkSettings,omitempty"` - - SysInitPath string `json:"SysInitPath,omitempty" yaml:"SysInitPath,omitempty"` - ResolvConfPath string `json:"ResolvConfPath,omitempty" yaml:"ResolvConfPath,omitempty"` - HostnamePath string `json:"HostnamePath,omitempty" yaml:"HostnamePath,omitempty"` - HostsPath string `json:"HostsPath,omitempty" yaml:"HostsPath,omitempty"` - LogPath string `json:"LogPath,omitempty" yaml:"LogPath,omitempty"` - Name string `json:"Name,omitempty" yaml:"Name,omitempty"` - Driver string `json:"Driver,omitempty" yaml:"Driver,omitempty"` - Mounts []Mount `json:"Mounts,omitempty" yaml:"Mounts,omitempty"` - - Volumes map[string]string `json:"Volumes,omitempty" yaml:"Volumes,omitempty"` - VolumesRW map[string]bool `json:"VolumesRW,omitempty" yaml:"VolumesRW,omitempty"` - HostConfig *HostConfig `json:"HostConfig,omitempty" yaml:"HostConfig,omitempty"` - ExecIDs []string `json:"ExecIDs,omitempty" yaml:"ExecIDs,omitempty"` - GraphDriver *GraphDriver `json:"GraphDriver,omitempty" yaml:"GraphDriver,omitempty"` - - RestartCount int `json:"RestartCount,omitempty" yaml:"RestartCount,omitempty"` - - AppArmorProfile string `json:"AppArmorProfile,omitempty" yaml:"AppArmorProfile,omitempty"` -} - -// UpdateContainerOptions specify parameters to the UpdateContainer function. -// -// See https://goo.gl/Y6fXUy for more details. 
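A minimal sketch of calling `UpdateContainer` with a couple of the options defined just below; the container ID is a placeholder and the values are arbitrary.

```go
package main

import (
	"log"

	"github.com/fsouza/go-dockerclient"
)

func main() {
	client, err := docker.NewClientFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder: use the ID of an existing container.
	containerID := "CONTAINER_ID"

	// Raise the memory limit (bytes) and CPU share weight of the container.
	// The values here are illustrative only.
	err = client.UpdateContainer(containerID, docker.UpdateContainerOptions{
		Memory:    512 * 1024 * 1024,
		CPUShares: 512,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```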
-type UpdateContainerOptions struct { - BlkioWeight int `json:"BlkioWeight"` - CPUShares int `json:"CpuShares"` - CPUPeriod int `json:"CpuPeriod"` - CPUQuota int `json:"CpuQuota"` - CpusetCpus string `json:"CpusetCpus"` - CpusetMems string `json:"CpusetMems"` - Memory int `json:"Memory"` - MemorySwap int `json:"MemorySwap"` - MemoryReservation int `json:"MemoryReservation"` - KernelMemory int `json:"KernelMemory"` - RestartPolicy RestartPolicy `json:"RestartPolicy,omitempty"` -} - -// UpdateContainer updates the container at ID with the options -// -// See https://goo.gl/Y6fXUy for more details. -func (c *Client) UpdateContainer(id string, opts UpdateContainerOptions) error { - resp, err := c.do("POST", fmt.Sprintf("/containers/"+id+"/update"), doOptions{data: opts, forceJSON: true}) - if err != nil { - return err - } - defer resp.Body.Close() - return nil -} - -// RenameContainerOptions specify parameters to the RenameContainer function. -// -// See https://goo.gl/laSOIy for more details. -type RenameContainerOptions struct { - // ID of container to rename - ID string `qs:"-"` - - // New name - Name string `json:"name,omitempty" yaml:"name,omitempty"` -} - -// RenameContainer updates and existing containers name -// -// See https://goo.gl/laSOIy for more details. -func (c *Client) RenameContainer(opts RenameContainerOptions) error { - resp, err := c.do("POST", fmt.Sprintf("/containers/"+opts.ID+"/rename?%s", queryString(opts)), doOptions{}) - if err != nil { - return err - } - resp.Body.Close() - return nil -} - -// InspectContainer returns information about a container by its ID. -// -// See https://goo.gl/RdIq0b for more details. -func (c *Client) InspectContainer(id string) (*Container, error) { - path := "/containers/" + id + "/json" - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, &NoSuchContainer{ID: id} - } - return nil, err - } - defer resp.Body.Close() - var container Container - if err := json.NewDecoder(resp.Body).Decode(&container); err != nil { - return nil, err - } - return &container, nil -} - -// ContainerChanges returns changes in the filesystem of the given container. -// -// See https://goo.gl/9GsTIF for more details. -func (c *Client) ContainerChanges(id string) ([]Change, error) { - path := "/containers/" + id + "/changes" - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, &NoSuchContainer{ID: id} - } - return nil, err - } - defer resp.Body.Close() - var changes []Change - if err := json.NewDecoder(resp.Body).Decode(&changes); err != nil { - return nil, err - } - return changes, nil -} - -// CreateContainerOptions specify parameters to the CreateContainer function. -// -// See https://goo.gl/WxQzrr for more details. -type CreateContainerOptions struct { - Name string - Config *Config `qs:"-"` - HostConfig *HostConfig `qs:"-"` -} - -// CreateContainer creates a new container, returning the container instance, -// or an error in case of failure. -// -// See https://goo.gl/WxQzrr for more details. -func (c *Client) CreateContainer(opts CreateContainerOptions) (*Container, error) { - path := "/containers/create?" 
+ queryString(opts) - resp, err := c.do( - "POST", - path, - doOptions{ - data: struct { - *Config - HostConfig *HostConfig `json:"HostConfig,omitempty" yaml:"HostConfig,omitempty"` - }{ - opts.Config, - opts.HostConfig, - }, - }, - ) - - if e, ok := err.(*Error); ok { - if e.Status == http.StatusNotFound { - return nil, ErrNoSuchImage - } - if e.Status == http.StatusConflict { - return nil, ErrContainerAlreadyExists - } - } - - if err != nil { - return nil, err - } - defer resp.Body.Close() - var container Container - if err := json.NewDecoder(resp.Body).Decode(&container); err != nil { - return nil, err - } - - container.Name = opts.Name - - return &container, nil -} - -// KeyValuePair is a type for generic key/value pairs as used in the Lxc -// configuration -type KeyValuePair struct { - Key string `json:"Key,omitempty" yaml:"Key,omitempty"` - Value string `json:"Value,omitempty" yaml:"Value,omitempty"` -} - -// RestartPolicy represents the policy for automatically restarting a container. -// -// Possible values are: -// -// - always: the docker daemon will always restart the container -// - on-failure: the docker daemon will restart the container on failures, at -// most MaximumRetryCount times -// - no: the docker daemon will not restart the container automatically -type RestartPolicy struct { - Name string `json:"Name,omitempty" yaml:"Name,omitempty"` - MaximumRetryCount int `json:"MaximumRetryCount,omitempty" yaml:"MaximumRetryCount,omitempty"` -} - -// AlwaysRestart returns a restart policy that tells the Docker daemon to -// always restart the container. -func AlwaysRestart() RestartPolicy { - return RestartPolicy{Name: "always"} -} - -// RestartOnFailure returns a restart policy that tells the Docker daemon to -// restart the container on failures, trying at most maxRetry times. -func RestartOnFailure(maxRetry int) RestartPolicy { - return RestartPolicy{Name: "on-failure", MaximumRetryCount: maxRetry} -} - -// NeverRestart returns a restart policy that tells the Docker daemon to never -// restart the container on failures. -func NeverRestart() RestartPolicy { - return RestartPolicy{Name: "no"} -} - -// Device represents a device mapping between the Docker host and the -// container. -type Device struct { - PathOnHost string `json:"PathOnHost,omitempty" yaml:"PathOnHost,omitempty"` - PathInContainer string `json:"PathInContainer,omitempty" yaml:"PathInContainer,omitempty"` - CgroupPermissions string `json:"CgroupPermissions,omitempty" yaml:"CgroupPermissions,omitempty"` -} - -// BlockWeight represents a relative device weight for an individual device inside -// of a container -// -// See https://goo.gl/FSdP0H for more details. -type BlockWeight struct { - Path string `json:"Path,omitempty"` - Weight string `json:"Weight,omitempty"` -} - -// BlockLimit represents a read/write limit in IOPS or Bandwidth for a device -// inside of a container -// -// See https://goo.gl/FSdP0H for more details. 
-type BlockLimit struct { - Path string `json:"Path,omitempty"` - Rate string `json:"Rate,omitempty"` -} - -// HostConfig contains the container options related to starting a container on -// a given host -type HostConfig struct { - Binds []string `json:"Binds,omitempty" yaml:"Binds,omitempty"` - CapAdd []string `json:"CapAdd,omitempty" yaml:"CapAdd,omitempty"` - CapDrop []string `json:"CapDrop,omitempty" yaml:"CapDrop,omitempty"` - GroupAdd []string `json:"GroupAdd,omitempty" yaml:"GroupAdd,omitempty"` - ContainerIDFile string `json:"ContainerIDFile,omitempty" yaml:"ContainerIDFile,omitempty"` - LxcConf []KeyValuePair `json:"LxcConf,omitempty" yaml:"LxcConf,omitempty"` - Privileged bool `json:"Privileged,omitempty" yaml:"Privileged,omitempty"` - PortBindings map[Port][]PortBinding `json:"PortBindings,omitempty" yaml:"PortBindings,omitempty"` - Links []string `json:"Links,omitempty" yaml:"Links,omitempty"` - PublishAllPorts bool `json:"PublishAllPorts,omitempty" yaml:"PublishAllPorts,omitempty"` - DNS []string `json:"Dns,omitempty" yaml:"Dns,omitempty"` // For Docker API v1.10 and above only - DNSOptions []string `json:"DnsOptions,omitempty" yaml:"DnsOptions,omitempty"` - DNSSearch []string `json:"DnsSearch,omitempty" yaml:"DnsSearch,omitempty"` - ExtraHosts []string `json:"ExtraHosts,omitempty" yaml:"ExtraHosts,omitempty"` - VolumesFrom []string `json:"VolumesFrom,omitempty" yaml:"VolumesFrom,omitempty"` - UsernsMode string `json:"UsernsMode,omitempty" yaml:"UsernsMode,omitempty"` - NetworkMode string `json:"NetworkMode,omitempty" yaml:"NetworkMode,omitempty"` - IpcMode string `json:"IpcMode,omitempty" yaml:"IpcMode,omitempty"` - PidMode string `json:"PidMode,omitempty" yaml:"PidMode,omitempty"` - UTSMode string `json:"UTSMode,omitempty" yaml:"UTSMode,omitempty"` - RestartPolicy RestartPolicy `json:"RestartPolicy,omitempty" yaml:"RestartPolicy,omitempty"` - Devices []Device `json:"Devices,omitempty" yaml:"Devices,omitempty"` - LogConfig LogConfig `json:"LogConfig,omitempty" yaml:"LogConfig,omitempty"` - ReadonlyRootfs bool `json:"ReadonlyRootfs,omitempty" yaml:"ReadonlyRootfs,omitempty"` - SecurityOpt []string `json:"SecurityOpt,omitempty" yaml:"SecurityOpt,omitempty"` - CgroupParent string `json:"CgroupParent,omitempty" yaml:"CgroupParent,omitempty"` - Memory int64 `json:"Memory,omitempty" yaml:"Memory,omitempty"` - MemorySwap int64 `json:"MemorySwap,omitempty" yaml:"MemorySwap,omitempty"` - MemorySwappiness int64 `json:"MemorySwappiness,omitempty" yaml:"MemorySwappiness,omitempty"` - OOMKillDisable bool `json:"OomKillDisable,omitempty" yaml:"OomKillDisable"` - CPUShares int64 `json:"CpuShares,omitempty" yaml:"CpuShares,omitempty"` - CPUSet string `json:"Cpuset,omitempty" yaml:"Cpuset,omitempty"` - CPUSetCPUs string `json:"CpusetCpus,omitempty" yaml:"CpusetCpus,omitempty"` - CPUSetMEMs string `json:"CpusetMems,omitempty" yaml:"CpusetMems,omitempty"` - CPUQuota int64 `json:"CpuQuota,omitempty" yaml:"CpuQuota,omitempty"` - CPUPeriod int64 `json:"CpuPeriod,omitempty" yaml:"CpuPeriod,omitempty"` - BlkioWeight int64 `json:"BlkioWeight,omitempty" yaml:"BlkioWeight"` - BlkioWeightDevice []BlockWeight `json:"BlkioWeightDevice,omitempty" yaml:"BlkioWeightDevice"` - BlkioDeviceReadBps []BlockLimit `json:"BlkioDeviceReadBps,omitempty" yaml:"BlkioDeviceReadBps"` - BlkioDeviceReadIOps []BlockLimit `json:"BlkioDeviceReadIOps,omitempty" yaml:"BlkioDeviceReadIOps"` - BlkioDeviceWriteBps []BlockLimit `json:"BlkioDeviceWriteBps,omitempty" yaml:"BlkioDeviceWriteBps"` - BlkioDeviceWriteIOps []BlockLimit 
`json:"BlkioDeviceWriteIOps,omitempty" yaml:"BlkioDeviceWriteIOps"` - Ulimits []ULimit `json:"Ulimits,omitempty" yaml:"Ulimits,omitempty"` - VolumeDriver string `json:"VolumeDriver,omitempty" yaml:"VolumeDriver,omitempty"` - OomScoreAdj int `json:"OomScoreAdj,omitempty" yaml:"OomScoreAdj,omitempty"` - ShmSize int64 `json:"ShmSize,omitempty" yaml:"ShmSize,omitempty"` -} - -// StartContainer starts a container, returning an error in case of failure. -// -// See https://goo.gl/MrBAJv for more details. -func (c *Client) StartContainer(id string, hostConfig *HostConfig) error { - path := "/containers/" + id + "/start" - resp, err := c.do("POST", path, doOptions{data: hostConfig, forceJSON: true}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchContainer{ID: id, Err: err} - } - return err - } - if resp.StatusCode == http.StatusNotModified { - return &ContainerAlreadyRunning{ID: id} - } - resp.Body.Close() - return nil -} - -// StopContainer stops a container, killing it after the given timeout (in -// seconds). -// -// See https://goo.gl/USqsFt for more details. -func (c *Client) StopContainer(id string, timeout uint) error { - path := fmt.Sprintf("/containers/%s/stop?t=%d", id, timeout) - resp, err := c.do("POST", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchContainer{ID: id} - } - return err - } - if resp.StatusCode == http.StatusNotModified { - return &ContainerNotRunning{ID: id} - } - resp.Body.Close() - return nil -} - -// RestartContainer stops a container, killing it after the given timeout (in -// seconds), during the stop process. -// -// See https://goo.gl/QzsDnz for more details. -func (c *Client) RestartContainer(id string, timeout uint) error { - path := fmt.Sprintf("/containers/%s/restart?t=%d", id, timeout) - resp, err := c.do("POST", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchContainer{ID: id} - } - return err - } - resp.Body.Close() - return nil -} - -// PauseContainer pauses the given container. -// -// See https://goo.gl/OF7W9X for more details. -func (c *Client) PauseContainer(id string) error { - path := fmt.Sprintf("/containers/%s/pause", id) - resp, err := c.do("POST", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchContainer{ID: id} - } - return err - } - resp.Body.Close() - return nil -} - -// UnpauseContainer unpauses the given container. -// -// See https://goo.gl/7dwyPA for more details. -func (c *Client) UnpauseContainer(id string) error { - path := fmt.Sprintf("/containers/%s/unpause", id) - resp, err := c.do("POST", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchContainer{ID: id} - } - return err - } - resp.Body.Close() - return nil -} - -// TopResult represents the list of processes running in a container, as -// returned by /containers//top. -// -// See https://goo.gl/Rb46aY for more details. -type TopResult struct { - Titles []string - Processes [][]string -} - -// TopContainer returns processes running inside a container -// -// See https://goo.gl/Rb46aY for more details. 
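A usage sketch for `TopContainer`: the ID is a placeholder, the ps arguments are passed through to the daemon, and the result is a table of process titles and rows.

```go
package main

import (
	"fmt"
	"log"

	"github.com/fsouza/go-dockerclient"
)

func main() {
	client, err := docker.NewClientFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder ID of a running container; "aux" is forwarded as ps_args.
	result, err := client.TopContainer("CONTAINER_ID", "aux")
	if err != nil {
		log.Fatal(err)
	}

	// Titles holds the column headers, Processes one row per process.
	fmt.Println(result.Titles)
	for _, proc := range result.Processes {
		fmt.Println(proc)
	}
}
```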
-func (c *Client) TopContainer(id string, psArgs string) (TopResult, error) { - var args string - var result TopResult - if psArgs != "" { - args = fmt.Sprintf("?ps_args=%s", psArgs) - } - path := fmt.Sprintf("/containers/%s/top%s", id, args) - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return result, &NoSuchContainer{ID: id} - } - return result, err - } - defer resp.Body.Close() - if err := json.NewDecoder(resp.Body).Decode(&result); err != nil { - return result, err - } - return result, nil -} - -// Stats represents container statistics, returned by /containers//stats. -// -// See https://goo.gl/GNmLHb for more details. -type Stats struct { - Read time.Time `json:"read,omitempty" yaml:"read,omitempty"` - PidsStats struct { - Current uint64 `json:"current,omitempty" yaml:"current,omitempty"` - } `json:"pids_stats,omitempty" yaml:"pids_stats,omitempty"` - Network NetworkStats `json:"network,omitempty" yaml:"network,omitempty"` - Networks map[string]NetworkStats `json:"networks,omitempty" yaml:"networks,omitempty"` - MemoryStats struct { - Stats struct { - TotalPgmafault uint64 `json:"total_pgmafault,omitempty" yaml:"total_pgmafault,omitempty"` - Cache uint64 `json:"cache,omitempty" yaml:"cache,omitempty"` - MappedFile uint64 `json:"mapped_file,omitempty" yaml:"mapped_file,omitempty"` - TotalInactiveFile uint64 `json:"total_inactive_file,omitempty" yaml:"total_inactive_file,omitempty"` - Pgpgout uint64 `json:"pgpgout,omitempty" yaml:"pgpgout,omitempty"` - Rss uint64 `json:"rss,omitempty" yaml:"rss,omitempty"` - TotalMappedFile uint64 `json:"total_mapped_file,omitempty" yaml:"total_mapped_file,omitempty"` - Writeback uint64 `json:"writeback,omitempty" yaml:"writeback,omitempty"` - Unevictable uint64 `json:"unevictable,omitempty" yaml:"unevictable,omitempty"` - Pgpgin uint64 `json:"pgpgin,omitempty" yaml:"pgpgin,omitempty"` - TotalUnevictable uint64 `json:"total_unevictable,omitempty" yaml:"total_unevictable,omitempty"` - Pgmajfault uint64 `json:"pgmajfault,omitempty" yaml:"pgmajfault,omitempty"` - TotalRss uint64 `json:"total_rss,omitempty" yaml:"total_rss,omitempty"` - TotalRssHuge uint64 `json:"total_rss_huge,omitempty" yaml:"total_rss_huge,omitempty"` - TotalWriteback uint64 `json:"total_writeback,omitempty" yaml:"total_writeback,omitempty"` - TotalInactiveAnon uint64 `json:"total_inactive_anon,omitempty" yaml:"total_inactive_anon,omitempty"` - RssHuge uint64 `json:"rss_huge,omitempty" yaml:"rss_huge,omitempty"` - HierarchicalMemoryLimit uint64 `json:"hierarchical_memory_limit,omitempty" yaml:"hierarchical_memory_limit,omitempty"` - TotalPgfault uint64 `json:"total_pgfault,omitempty" yaml:"total_pgfault,omitempty"` - TotalActiveFile uint64 `json:"total_active_file,omitempty" yaml:"total_active_file,omitempty"` - ActiveAnon uint64 `json:"active_anon,omitempty" yaml:"active_anon,omitempty"` - TotalActiveAnon uint64 `json:"total_active_anon,omitempty" yaml:"total_active_anon,omitempty"` - TotalPgpgout uint64 `json:"total_pgpgout,omitempty" yaml:"total_pgpgout,omitempty"` - TotalCache uint64 `json:"total_cache,omitempty" yaml:"total_cache,omitempty"` - InactiveAnon uint64 `json:"inactive_anon,omitempty" yaml:"inactive_anon,omitempty"` - ActiveFile uint64 `json:"active_file,omitempty" yaml:"active_file,omitempty"` - Pgfault uint64 `json:"pgfault,omitempty" yaml:"pgfault,omitempty"` - InactiveFile uint64 `json:"inactive_file,omitempty" yaml:"inactive_file,omitempty"` - TotalPgpgin uint64 
`json:"total_pgpgin,omitempty" yaml:"total_pgpgin,omitempty"` - HierarchicalMemswLimit uint64 `json:"hierarchical_memsw_limit,omitempty" yaml:"hierarchical_memsw_limit,omitempty"` - Swap uint64 `json:"swap,omitempty" yaml:"swap,omitempty"` - } `json:"stats,omitempty" yaml:"stats,omitempty"` - MaxUsage uint64 `json:"max_usage,omitempty" yaml:"max_usage,omitempty"` - Usage uint64 `json:"usage,omitempty" yaml:"usage,omitempty"` - Failcnt uint64 `json:"failcnt,omitempty" yaml:"failcnt,omitempty"` - Limit uint64 `json:"limit,omitempty" yaml:"limit,omitempty"` - } `json:"memory_stats,omitempty" yaml:"memory_stats,omitempty"` - BlkioStats struct { - IOServiceBytesRecursive []BlkioStatsEntry `json:"io_service_bytes_recursive,omitempty" yaml:"io_service_bytes_recursive,omitempty"` - IOServicedRecursive []BlkioStatsEntry `json:"io_serviced_recursive,omitempty" yaml:"io_serviced_recursive,omitempty"` - IOQueueRecursive []BlkioStatsEntry `json:"io_queue_recursive,omitempty" yaml:"io_queue_recursive,omitempty"` - IOServiceTimeRecursive []BlkioStatsEntry `json:"io_service_time_recursive,omitempty" yaml:"io_service_time_recursive,omitempty"` - IOWaitTimeRecursive []BlkioStatsEntry `json:"io_wait_time_recursive,omitempty" yaml:"io_wait_time_recursive,omitempty"` - IOMergedRecursive []BlkioStatsEntry `json:"io_merged_recursive,omitempty" yaml:"io_merged_recursive,omitempty"` - IOTimeRecursive []BlkioStatsEntry `json:"io_time_recursive,omitempty" yaml:"io_time_recursive,omitempty"` - SectorsRecursive []BlkioStatsEntry `json:"sectors_recursive,omitempty" yaml:"sectors_recursive,omitempty"` - } `json:"blkio_stats,omitempty" yaml:"blkio_stats,omitempty"` - CPUStats CPUStats `json:"cpu_stats,omitempty" yaml:"cpu_stats,omitempty"` - PreCPUStats CPUStats `json:"precpu_stats,omitempty"` -} - -// NetworkStats is a stats entry for network stats -type NetworkStats struct { - RxDropped uint64 `json:"rx_dropped,omitempty" yaml:"rx_dropped,omitempty"` - RxBytes uint64 `json:"rx_bytes,omitempty" yaml:"rx_bytes,omitempty"` - RxErrors uint64 `json:"rx_errors,omitempty" yaml:"rx_errors,omitempty"` - TxPackets uint64 `json:"tx_packets,omitempty" yaml:"tx_packets,omitempty"` - TxDropped uint64 `json:"tx_dropped,omitempty" yaml:"tx_dropped,omitempty"` - RxPackets uint64 `json:"rx_packets,omitempty" yaml:"rx_packets,omitempty"` - TxErrors uint64 `json:"tx_errors,omitempty" yaml:"tx_errors,omitempty"` - TxBytes uint64 `json:"tx_bytes,omitempty" yaml:"tx_bytes,omitempty"` -} - -// CPUStats is a stats entry for cpu stats -type CPUStats struct { - CPUUsage struct { - PercpuUsage []uint64 `json:"percpu_usage,omitempty" yaml:"percpu_usage,omitempty"` - UsageInUsermode uint64 `json:"usage_in_usermode,omitempty" yaml:"usage_in_usermode,omitempty"` - TotalUsage uint64 `json:"total_usage,omitempty" yaml:"total_usage,omitempty"` - UsageInKernelmode uint64 `json:"usage_in_kernelmode,omitempty" yaml:"usage_in_kernelmode,omitempty"` - } `json:"cpu_usage,omitempty" yaml:"cpu_usage,omitempty"` - SystemCPUUsage uint64 `json:"system_cpu_usage,omitempty" yaml:"system_cpu_usage,omitempty"` - ThrottlingData struct { - Periods uint64 `json:"periods,omitempty"` - ThrottledPeriods uint64 `json:"throttled_periods,omitempty"` - ThrottledTime uint64 `json:"throttled_time,omitempty"` - } `json:"throttling_data,omitempty" yaml:"throttling_data,omitempty"` -} - -// BlkioStatsEntry is a stats entry for blkio_stats -type BlkioStatsEntry struct { - Major uint64 `json:"major,omitempty" yaml:"major,omitempty"` - Minor uint64 `json:"minor,omitempty" 
yaml:"minor,omitempty"` - Op string `json:"op,omitempty" yaml:"op,omitempty"` - Value uint64 `json:"value,omitempty" yaml:"value,omitempty"` -} - -// StatsOptions specify parameters to the Stats function. -// -// See https://goo.gl/GNmLHb for more details. -type StatsOptions struct { - ID string - Stats chan<- *Stats - Stream bool - // A flag that enables stopping the stats operation - Done <-chan bool - // Initial connection timeout - Timeout time.Duration - // Timeout with no data is received, it's reset every time new data - // arrives - InactivityTimeout time.Duration `qs:"-"` -} - -// Stats sends container statistics for the given container to the given channel. -// -// This function is blocking, similar to a streaming call for logs, and should be run -// on a separate goroutine from the caller. Note that this function will block until -// the given container is removed, not just exited. When finished, this function -// will close the given channel. Alternatively, function can be stopped by -// signaling on the Done channel. -// -// See https://goo.gl/GNmLHb for more details. -func (c *Client) Stats(opts StatsOptions) (retErr error) { - errC := make(chan error, 1) - readCloser, writeCloser := io.Pipe() - - defer func() { - close(opts.Stats) - - select { - case err := <-errC: - if err != nil && retErr == nil { - retErr = err - } - default: - // No errors - } - - if err := readCloser.Close(); err != nil && retErr == nil { - retErr = err - } - }() - - go func() { - err := c.stream("GET", fmt.Sprintf("/containers/%s/stats?stream=%v", opts.ID, opts.Stream), streamOptions{ - rawJSONStream: true, - useJSONDecoder: true, - stdout: writeCloser, - timeout: opts.Timeout, - inactivityTimeout: opts.InactivityTimeout, - }) - if err != nil { - dockerError, ok := err.(*Error) - if ok { - if dockerError.Status == http.StatusNotFound { - err = &NoSuchContainer{ID: opts.ID} - } - } - } - if closeErr := writeCloser.Close(); closeErr != nil && err == nil { - err = closeErr - } - errC <- err - close(errC) - }() - - quit := make(chan struct{}) - defer close(quit) - go func() { - // block here waiting for the signal to stop function - select { - case <-opts.Done: - readCloser.Close() - case <-quit: - return - } - }() - - decoder := json.NewDecoder(readCloser) - stats := new(Stats) - for err := decoder.Decode(stats); err != io.EOF; err = decoder.Decode(stats) { - if err != nil { - return err - } - opts.Stats <- stats - stats = new(Stats) - } - return nil -} - -// KillContainerOptions represents the set of options that can be used in a -// call to KillContainer. -// -// See https://goo.gl/hkS9i8 for more details. -type KillContainerOptions struct { - // The ID of the container. - ID string `qs:"-"` - - // The signal to send to the container. When omitted, Docker server - // will assume SIGKILL. - Signal Signal -} - -// KillContainer sends a signal to a container, returning an error in case of -// failure. -// -// See https://goo.gl/hkS9i8 for more details. -func (c *Client) KillContainer(opts KillContainerOptions) error { - path := "/containers/" + opts.ID + "/kill" + "?" + queryString(opts) - resp, err := c.do("POST", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchContainer{ID: opts.ID} - } - return err - } - resp.Body.Close() - return nil -} - -// RemoveContainerOptions encapsulates options to remove a container. -// -// See https://goo.gl/RQyX62 for more details. -type RemoveContainerOptions struct { - // The ID of the container. 
- ID string `qs:"-"` - - // A flag that indicates whether Docker should remove the volumes - // associated to the container. - RemoveVolumes bool `qs:"v"` - - // A flag that indicates whether Docker should remove the container - // even if it is currently running. - Force bool -} - -// RemoveContainer removes a container, returning an error in case of failure. -// -// See https://goo.gl/RQyX62 for more details. -func (c *Client) RemoveContainer(opts RemoveContainerOptions) error { - path := "/containers/" + opts.ID + "?" + queryString(opts) - resp, err := c.do("DELETE", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchContainer{ID: opts.ID} - } - return err - } - resp.Body.Close() - return nil -} - -// UploadToContainerOptions is the set of options that can be used when -// uploading an archive into a container. -// -// See https://goo.gl/Ss97HW for more details. -type UploadToContainerOptions struct { - InputStream io.Reader `json:"-" qs:"-"` - Path string `qs:"path"` - NoOverwriteDirNonDir bool `qs:"noOverwriteDirNonDir"` -} - -// UploadToContainer uploads a tar archive to be extracted to a path in the -// filesystem of the container. -// -// See https://goo.gl/Ss97HW for more details. -func (c *Client) UploadToContainer(id string, opts UploadToContainerOptions) error { - url := fmt.Sprintf("/containers/%s/archive?", id) + queryString(opts) - - return c.stream("PUT", url, streamOptions{ - in: opts.InputStream, - }) -} - -// DownloadFromContainerOptions is the set of options that can be used when -// downloading resources from a container. -// -// See https://goo.gl/KnZJDX for more details. -type DownloadFromContainerOptions struct { - OutputStream io.Writer `json:"-" qs:"-"` - Path string `qs:"path"` - InactivityTimeout time.Duration `qs:"-"` -} - -// DownloadFromContainer downloads a tar archive of files or folders in a container. -// -// See https://goo.gl/KnZJDX for more details. -func (c *Client) DownloadFromContainer(id string, opts DownloadFromContainerOptions) error { - url := fmt.Sprintf("/containers/%s/archive?", id) + queryString(opts) - - return c.stream("GET", url, streamOptions{ - setRawTerminal: true, - stdout: opts.OutputStream, - inactivityTimeout: opts.InactivityTimeout, - }) -} - -// CopyFromContainerOptions has been DEPRECATED, please use DownloadFromContainerOptions along with DownloadFromContainer. -// -// See https://goo.gl/R2jevW for more details. -type CopyFromContainerOptions struct { - OutputStream io.Writer `json:"-"` - Container string `json:"-"` - Resource string -} - -// CopyFromContainer has been DEPRECATED, please use DownloadFromContainerOptions along with DownloadFromContainer. -// -// See https://goo.gl/R2jevW for more details. -func (c *Client) CopyFromContainer(opts CopyFromContainerOptions) error { - if opts.Container == "" { - return &NoSuchContainer{ID: opts.Container} - } - url := fmt.Sprintf("/containers/%s/copy", opts.Container) - resp, err := c.do("POST", url, doOptions{data: opts}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchContainer{ID: opts.Container} - } - return err - } - defer resp.Body.Close() - _, err = io.Copy(opts.OutputStream, resp.Body) - return err -} - -// WaitContainer blocks until the given container stops, return the exit code -// of the container status. -// -// See https://goo.gl/Gc1rge for more details. 
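The `Stats` call above is blocking and closes the channel it is given when it returns, so callers normally run it in its own goroutine and range over the channel. A minimal sketch under the same assumptions (local daemon, hypothetical container ID); the `StatsOptions` fields and `Stats` signature are taken from the code above:

```go
package main

import (
	"fmt"
	"log"

	docker "github.com/fsouza/go-dockerclient"
)

func main() {
	client, err := docker.NewClient("unix:///var/run/docker.sock")
	if err != nil {
		log.Fatal(err)
	}

	containerID := "my-container" // hypothetical

	statsC := make(chan *docker.Stats)
	done := make(chan bool)

	// Stats blocks until the container is removed or Done is signalled,
	// and closes statsC when it returns, so it runs in its own goroutine.
	go func() {
		err := client.Stats(docker.StatsOptions{
			ID:     containerID,
			Stats:  statsC,
			Stream: true,
			Done:   done,
		})
		if err != nil {
			log.Println("stats:", err)
		}
	}()

	for s := range statsC {
		fmt.Printf("memory usage: %d bytes\n", s.MemoryStats.Usage)
	}
}
```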
-func (c *Client) WaitContainer(id string) (int, error) { - resp, err := c.do("POST", "/containers/"+id+"/wait", doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return 0, &NoSuchContainer{ID: id} - } - return 0, err - } - defer resp.Body.Close() - var r struct{ StatusCode int } - if err := json.NewDecoder(resp.Body).Decode(&r); err != nil { - return 0, err - } - return r.StatusCode, nil -} - -// CommitContainerOptions aggregates parameters to the CommitContainer method. -// -// See https://goo.gl/mqfoCw for more details. -type CommitContainerOptions struct { - Container string - Repository string `qs:"repo"` - Tag string - Message string `qs:"comment"` - Author string - Run *Config `qs:"-"` -} - -// CommitContainer creates a new image from a container's changes. -// -// See https://goo.gl/mqfoCw for more details. -func (c *Client) CommitContainer(opts CommitContainerOptions) (*Image, error) { - path := "/commit?" + queryString(opts) - resp, err := c.do("POST", path, doOptions{data: opts.Run}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, &NoSuchContainer{ID: opts.Container} - } - return nil, err - } - defer resp.Body.Close() - var image Image - if err := json.NewDecoder(resp.Body).Decode(&image); err != nil { - return nil, err - } - return &image, nil -} - -// AttachToContainerOptions is the set of options that can be used when -// attaching to a container. -// -// See https://goo.gl/NKpkFk for more details. -type AttachToContainerOptions struct { - Container string `qs:"-"` - InputStream io.Reader `qs:"-"` - OutputStream io.Writer `qs:"-"` - ErrorStream io.Writer `qs:"-"` - - // Get container logs, sending it to OutputStream. - Logs bool - - // Stream the response? - Stream bool - - // Attach to stdin, and use InputStream. - Stdin bool - - // Attach to stdout, and use OutputStream. - Stdout bool - - // Attach to stderr, and use ErrorStream. - Stderr bool - - // If set, after a successful connect, a sentinel will be sent and then the - // client will block on receive before continuing. - // - // It must be an unbuffered channel. Using a buffered channel can lead - // to unexpected behavior. - Success chan struct{} - - // Use raw terminal? Usually true when the container contains a TTY. - RawTerminal bool `qs:"-"` -} - -// AttachToContainer attaches to a container, using the given options. -// -// See https://goo.gl/NKpkFk for more details. -func (c *Client) AttachToContainer(opts AttachToContainerOptions) error { - cw, err := c.AttachToContainerNonBlocking(opts) - if err != nil { - return err - } - return cw.Wait() -} - -// AttachToContainerNonBlocking attaches to a container, using the given options. -// This function does not block. -// -// See https://goo.gl/NKpkFk for more details. -func (c *Client) AttachToContainerNonBlocking(opts AttachToContainerOptions) (CloseWaiter, error) { - if opts.Container == "" { - return nil, &NoSuchContainer{ID: opts.Container} - } - path := "/containers/" + opts.Container + "/attach?" + queryString(opts) - return c.hijack("POST", path, hijackOptions{ - success: opts.Success, - setRawTerminal: opts.RawTerminal, - in: opts.InputStream, - stdout: opts.OutputStream, - stderr: opts.ErrorStream, - }) -} - -// LogsOptions represents the set of options used when getting logs from a -// container. -// -// See https://goo.gl/yl8PGm for more details. 
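`AttachToContainer` and `WaitContainer` are commonly combined: the attach streams the container's output until it exits, and `WaitContainer` then reports the exit code. A rough sketch under the same assumptions (local daemon, hypothetical container name), using only the option fields and signatures shown above:

```go
package main

import (
	"log"
	"os"

	docker "github.com/fsouza/go-dockerclient"
)

func main() {
	client, err := docker.NewClient("unix:///var/run/docker.sock")
	if err != nil {
		log.Fatal(err)
	}

	containerID := "my-container" // hypothetical

	// Stream the container's stdout and stderr to this process until it exits.
	err = client.AttachToContainer(docker.AttachToContainerOptions{
		Container:    containerID,
		OutputStream: os.Stdout,
		ErrorStream:  os.Stderr,
		Stream:       true,
		Stdout:       true,
		Stderr:       true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Once the attach returns, WaitContainer reports the exit code.
	code, err := client.WaitContainer(containerID)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("container exited with status %d", code)
}
```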
-type LogsOptions struct { - Container string `qs:"-"` - OutputStream io.Writer `qs:"-"` - ErrorStream io.Writer `qs:"-"` - InactivityTimeout time.Duration `qs:"-"` - Follow bool - Stdout bool - Stderr bool - Since int64 - Timestamps bool - Tail string - - // Use raw terminal? Usually true when the container contains a TTY. - RawTerminal bool `qs:"-"` -} - -// Logs gets stdout and stderr logs from the specified container. -// -// See https://goo.gl/yl8PGm for more details. -func (c *Client) Logs(opts LogsOptions) error { - if opts.Container == "" { - return &NoSuchContainer{ID: opts.Container} - } - if opts.Tail == "" { - opts.Tail = "all" - } - path := "/containers/" + opts.Container + "/logs?" + queryString(opts) - return c.stream("GET", path, streamOptions{ - setRawTerminal: opts.RawTerminal, - stdout: opts.OutputStream, - stderr: opts.ErrorStream, - inactivityTimeout: opts.InactivityTimeout, - }) -} - -// ResizeContainerTTY resizes the terminal to the given height and width. -// -// See https://goo.gl/xERhCc for more details. -func (c *Client) ResizeContainerTTY(id string, height, width int) error { - params := make(url.Values) - params.Set("h", strconv.Itoa(height)) - params.Set("w", strconv.Itoa(width)) - resp, err := c.do("POST", "/containers/"+id+"/resize?"+params.Encode(), doOptions{}) - if err != nil { - return err - } - resp.Body.Close() - return nil -} - -// ExportContainerOptions is the set of parameters to the ExportContainer -// method. -// -// See https://goo.gl/dOkTyk for more details. -type ExportContainerOptions struct { - ID string - OutputStream io.Writer - InactivityTimeout time.Duration `qs:"-"` -} - -// ExportContainer export the contents of container id as tar archive -// and prints the exported contents to stdout. -// -// See https://goo.gl/dOkTyk for more details. -func (c *Client) ExportContainer(opts ExportContainerOptions) error { - if opts.ID == "" { - return &NoSuchContainer{ID: opts.ID} - } - url := fmt.Sprintf("/containers/%s/export", opts.ID) - return c.stream("GET", url, streamOptions{ - setRawTerminal: true, - stdout: opts.OutputStream, - inactivityTimeout: opts.InactivityTimeout, - }) -} - -// NoSuchContainer is the error returned when a given container does not exist. -type NoSuchContainer struct { - ID string - Err error -} - -func (err *NoSuchContainer) Error() string { - if err.Err != nil { - return err.Err.Error() - } - return "No such container: " + err.ID -} - -// ContainerAlreadyRunning is the error returned when a given container is -// already running. -type ContainerAlreadyRunning struct { - ID string -} - -func (err *ContainerAlreadyRunning) Error() string { - return "Container already running: " + err.ID -} - -// ContainerNotRunning is the error returned when a given container is not -// running. -type ContainerNotRunning struct { - ID string -} - -func (err *ContainerNotRunning) Error() string { - return "Container not running: " + err.ID -} diff --git a/vendor/github.com/fsouza/go-dockerclient/env.go b/vendor/github.com/fsouza/go-dockerclient/env.go deleted file mode 100644 index c54b0b0e802..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/env.go +++ /dev/null @@ -1,168 +0,0 @@ -// Copyright 2014 Docker authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the DOCKER-LICENSE file. 
- -package docker - -import ( - "encoding/json" - "fmt" - "io" - "strconv" - "strings" -) - -// Env represents a list of key-pair represented in the form KEY=VALUE. -type Env []string - -// Get returns the string value of the given key. -func (env *Env) Get(key string) (value string) { - return env.Map()[key] -} - -// Exists checks whether the given key is defined in the internal Env -// representation. -func (env *Env) Exists(key string) bool { - _, exists := env.Map()[key] - return exists -} - -// GetBool returns a boolean representation of the given key. The key is false -// whenever its value if 0, no, false, none or an empty string. Any other value -// will be interpreted as true. -func (env *Env) GetBool(key string) (value bool) { - s := strings.ToLower(strings.Trim(env.Get(key), " \t")) - if s == "" || s == "0" || s == "no" || s == "false" || s == "none" { - return false - } - return true -} - -// SetBool defines a boolean value to the given key. -func (env *Env) SetBool(key string, value bool) { - if value { - env.Set(key, "1") - } else { - env.Set(key, "0") - } -} - -// GetInt returns the value of the provided key, converted to int. -// -// It the value cannot be represented as an integer, it returns -1. -func (env *Env) GetInt(key string) int { - return int(env.GetInt64(key)) -} - -// SetInt defines an integer value to the given key. -func (env *Env) SetInt(key string, value int) { - env.Set(key, strconv.Itoa(value)) -} - -// GetInt64 returns the value of the provided key, converted to int64. -// -// It the value cannot be represented as an integer, it returns -1. -func (env *Env) GetInt64(key string) int64 { - s := strings.Trim(env.Get(key), " \t") - val, err := strconv.ParseInt(s, 10, 64) - if err != nil { - return -1 - } - return val -} - -// SetInt64 defines an integer (64-bit wide) value to the given key. -func (env *Env) SetInt64(key string, value int64) { - env.Set(key, strconv.FormatInt(value, 10)) -} - -// GetJSON unmarshals the value of the provided key in the provided iface. -// -// iface is a value that can be provided to the json.Unmarshal function. -func (env *Env) GetJSON(key string, iface interface{}) error { - sval := env.Get(key) - if sval == "" { - return nil - } - return json.Unmarshal([]byte(sval), iface) -} - -// SetJSON marshals the given value to JSON format and stores it using the -// provided key. -func (env *Env) SetJSON(key string, value interface{}) error { - sval, err := json.Marshal(value) - if err != nil { - return err - } - env.Set(key, string(sval)) - return nil -} - -// GetList returns a list of strings matching the provided key. It handles the -// list as a JSON representation of a list of strings. -// -// If the given key matches to a single string, it will return a list -// containing only the value that matches the key. -func (env *Env) GetList(key string) []string { - sval := env.Get(key) - if sval == "" { - return nil - } - var l []string - if err := json.Unmarshal([]byte(sval), &l); err != nil { - l = append(l, sval) - } - return l -} - -// SetList stores the given list in the provided key, after serializing it to -// JSON format. -func (env *Env) SetList(key string, value []string) error { - return env.SetJSON(key, value) -} - -// Set defines the value of a key to the given string. -func (env *Env) Set(key, value string) { - *env = append(*env, key+"="+value) -} - -// Decode decodes `src` as a json dictionary, and adds each decoded key-value -// pair to the environment. 
-// -// If `src` cannot be decoded as a json dictionary, an error is returned. -func (env *Env) Decode(src io.Reader) error { - m := make(map[string]interface{}) - if err := json.NewDecoder(src).Decode(&m); err != nil { - return err - } - for k, v := range m { - env.SetAuto(k, v) - } - return nil -} - -// SetAuto will try to define the Set* method to call based on the given value. -func (env *Env) SetAuto(key string, value interface{}) { - if fval, ok := value.(float64); ok { - env.SetInt64(key, int64(fval)) - } else if sval, ok := value.(string); ok { - env.Set(key, sval) - } else if val, err := json.Marshal(value); err == nil { - env.Set(key, string(val)) - } else { - env.Set(key, fmt.Sprintf("%v", value)) - } -} - -// Map returns the map representation of the env. -func (env *Env) Map() map[string]string { - if len(*env) == 0 { - return nil - } - m := make(map[string]string) - for _, kv := range *env { - parts := strings.SplitN(kv, "=", 2) - m[parts[0]] = parts[1] - } - return m -} diff --git a/vendor/github.com/fsouza/go-dockerclient/event.go b/vendor/github.com/fsouza/go-dockerclient/event.go deleted file mode 100644 index 120cdc9bff2..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/event.go +++ /dev/null @@ -1,379 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -import ( - "encoding/json" - "errors" - "fmt" - "io" - "math" - "net" - "net/http" - "net/http/httputil" - "sync" - "sync/atomic" - "time" -) - -// APIEvents represents events coming from the Docker API -// The fields in the Docker API changed in API version 1.22, and -// events for more than images and containers are now fired off. -// To maintain forward and backward compatibility, go-dockerclient -// replicates the event in both the new and old format as faithfully as possible. -// -// For events that only exist in 1.22 in later, `Status` is filled in as -// `"Type:Action"` instead of just `Action` to allow for older clients to -// differentiate and not break if they rely on the pre-1.22 Status types. -// -// The transformEvent method can be consulted for more information about how -// events are translated from new/old API formats -type APIEvents struct { - // New API Fields in 1.22 - Action string `json:"action,omitempty"` - Type string `json:"type,omitempty"` - Actor APIActor `json:"actor,omitempty"` - - // Old API fields for < 1.22 - Status string `json:"status,omitempty"` - ID string `json:"id,omitempty"` - From string `json:"from,omitempty"` - - // Fields in both - Time int64 `json:"time,omitempty"` - TimeNano int64 `json:"timeNano,omitempty"` -} - -// APIActor represents an actor that accomplishes something for an event -type APIActor struct { - ID string `json:"id,omitempty"` - Attributes map[string]string `json:"attributes,omitempty"` -} - -type eventMonitoringState struct { - sync.RWMutex - sync.WaitGroup - enabled bool - lastSeen int64 - C chan *APIEvents - errC chan error - listeners []chan<- *APIEvents -} - -const ( - maxMonitorConnRetries = 5 - retryInitialWaitTime = 10. -) - -var ( - // ErrNoListeners is the error returned when no listeners are available - // to receive an event. - ErrNoListeners = errors.New("no listeners present to receive event") - - // ErrListenerAlreadyExists is the error returned when the listerner already - // exists. 
- ErrListenerAlreadyExists = errors.New("listener already exists for docker events") - - // EOFEvent is sent when the event listener receives an EOF error. - EOFEvent = &APIEvents{ - Type: "EOF", - Status: "EOF", - } -) - -// AddEventListener adds a new listener to container events in the Docker API. -// -// The parameter is a channel through which events will be sent. -func (c *Client) AddEventListener(listener chan<- *APIEvents) error { - var err error - if !c.eventMonitor.isEnabled() { - err = c.eventMonitor.enableEventMonitoring(c) - if err != nil { - return err - } - } - err = c.eventMonitor.addListener(listener) - if err != nil { - return err - } - return nil -} - -// RemoveEventListener removes a listener from the monitor. -func (c *Client) RemoveEventListener(listener chan *APIEvents) error { - err := c.eventMonitor.removeListener(listener) - if err != nil { - return err - } - if len(c.eventMonitor.listeners) == 0 { - c.eventMonitor.disableEventMonitoring() - } - return nil -} - -func (eventState *eventMonitoringState) addListener(listener chan<- *APIEvents) error { - eventState.Lock() - defer eventState.Unlock() - if listenerExists(listener, &eventState.listeners) { - return ErrListenerAlreadyExists - } - eventState.Add(1) - eventState.listeners = append(eventState.listeners, listener) - return nil -} - -func (eventState *eventMonitoringState) removeListener(listener chan<- *APIEvents) error { - eventState.Lock() - defer eventState.Unlock() - if listenerExists(listener, &eventState.listeners) { - var newListeners []chan<- *APIEvents - for _, l := range eventState.listeners { - if l != listener { - newListeners = append(newListeners, l) - } - } - eventState.listeners = newListeners - eventState.Add(-1) - } - return nil -} - -func (eventState *eventMonitoringState) closeListeners() { - for _, l := range eventState.listeners { - close(l) - eventState.Add(-1) - } - eventState.listeners = nil -} - -func listenerExists(a chan<- *APIEvents, list *[]chan<- *APIEvents) bool { - for _, b := range *list { - if b == a { - return true - } - } - return false -} - -func (eventState *eventMonitoringState) enableEventMonitoring(c *Client) error { - eventState.Lock() - defer eventState.Unlock() - if !eventState.enabled { - eventState.enabled = true - atomic.StoreInt64(&eventState.lastSeen, 0) - eventState.C = make(chan *APIEvents, 100) - eventState.errC = make(chan error, 1) - go eventState.monitorEvents(c) - } - return nil -} - -func (eventState *eventMonitoringState) disableEventMonitoring() error { - eventState.Lock() - defer eventState.Unlock() - - eventState.closeListeners() - - eventState.Wait() - - if eventState.enabled { - eventState.enabled = false - close(eventState.C) - close(eventState.errC) - } - return nil -} - -func (eventState *eventMonitoringState) monitorEvents(c *Client) { - var err error - for eventState.noListeners() { - time.Sleep(10 * time.Millisecond) - } - if err = eventState.connectWithRetry(c); err != nil { - // terminate if connect failed - eventState.disableEventMonitoring() - return - } - for eventState.isEnabled() { - timeout := time.After(100 * time.Millisecond) - select { - case ev, ok := <-eventState.C: - if !ok { - return - } - if ev == EOFEvent { - eventState.disableEventMonitoring() - return - } - eventState.updateLastSeen(ev) - go eventState.sendEvent(ev) - case err = <-eventState.errC: - if err == ErrNoListeners { - eventState.disableEventMonitoring() - return - } else if err != nil { - defer func() { go eventState.monitorEvents(c) }() - return - } - case 
<-timeout: - continue - } - } -} - -func (eventState *eventMonitoringState) connectWithRetry(c *Client) error { - var retries int - eventState.RLock() - eventChan := eventState.C - errChan := eventState.errC - eventState.RUnlock() - err := c.eventHijack(atomic.LoadInt64(&eventState.lastSeen), eventChan, errChan) - for ; err != nil && retries < maxMonitorConnRetries; retries++ { - waitTime := int64(retryInitialWaitTime * math.Pow(2, float64(retries))) - time.Sleep(time.Duration(waitTime) * time.Millisecond) - eventState.RLock() - eventChan = eventState.C - errChan = eventState.errC - eventState.RUnlock() - err = c.eventHijack(atomic.LoadInt64(&eventState.lastSeen), eventChan, errChan) - } - return err -} - -func (eventState *eventMonitoringState) noListeners() bool { - eventState.RLock() - defer eventState.RUnlock() - return len(eventState.listeners) == 0 -} - -func (eventState *eventMonitoringState) isEnabled() bool { - eventState.RLock() - defer eventState.RUnlock() - return eventState.enabled -} - -func (eventState *eventMonitoringState) sendEvent(event *APIEvents) { - eventState.RLock() - defer eventState.RUnlock() - eventState.Add(1) - defer eventState.Done() - if eventState.enabled { - if len(eventState.listeners) == 0 { - eventState.errC <- ErrNoListeners - return - } - - for _, listener := range eventState.listeners { - listener <- event - } - } -} - -func (eventState *eventMonitoringState) updateLastSeen(e *APIEvents) { - eventState.Lock() - defer eventState.Unlock() - if atomic.LoadInt64(&eventState.lastSeen) < e.Time { - atomic.StoreInt64(&eventState.lastSeen, e.Time) - } -} - -func (c *Client) eventHijack(startTime int64, eventChan chan *APIEvents, errChan chan error) error { - uri := "/events" - if startTime != 0 { - uri += fmt.Sprintf("?since=%d", startTime) - } - protocol := c.endpointURL.Scheme - address := c.endpointURL.Path - if protocol != "unix" { - protocol = "tcp" - address = c.endpointURL.Host - } - var dial net.Conn - var err error - if c.TLSConfig == nil { - dial, err = c.Dialer.Dial(protocol, address) - } else { - dial, err = tlsDialWithDialer(c.Dialer, protocol, address, c.TLSConfig) - } - if err != nil { - return err - } - conn := httputil.NewClientConn(dial, nil) - req, err := http.NewRequest("GET", uri, nil) - if err != nil { - return err - } - res, err := conn.Do(req) - if err != nil { - return err - } - go func(res *http.Response, conn *httputil.ClientConn) { - defer conn.Close() - defer res.Body.Close() - decoder := json.NewDecoder(res.Body) - for { - var event APIEvents - if err = decoder.Decode(&event); err != nil { - if err == io.EOF || err == io.ErrUnexpectedEOF { - c.eventMonitor.RLock() - if c.eventMonitor.enabled && c.eventMonitor.C == eventChan { - // Signal that we're exiting. 
- eventChan <- EOFEvent - } - c.eventMonitor.RUnlock() - break - } - errChan <- err - } - if event.Time == 0 { - continue - } - if !c.eventMonitor.isEnabled() || c.eventMonitor.C != eventChan { - return - } - transformEvent(&event) - eventChan <- &event - } - }(res, conn) - return nil -} - -// transformEvent takes an event and determines what version it is from -// then populates both versions of the event -func transformEvent(event *APIEvents) { - // if event version is <= 1.21 there will be no Action and no Type - if event.Action == "" && event.Type == "" { - event.Action = event.Status - event.Actor.ID = event.ID - event.Actor.Attributes = map[string]string{} - switch event.Status { - case "delete", "import", "pull", "push", "tag", "untag": - event.Type = "image" - default: - event.Type = "container" - if event.From != "" { - event.Actor.Attributes["image"] = event.From - } - } - } else { - if event.Status == "" { - if event.Type == "image" || event.Type == "container" { - event.Status = event.Action - } else { - // Because just the Status has been overloaded with different Types - // if an event is not for an image or a container, we prepend the type - // to avoid problems for people relying on actions being only for - // images and containers - event.Status = event.Type + ":" + event.Action - } - } - if event.ID == "" { - event.ID = event.Actor.ID - } - if event.From == "" { - event.From = event.Actor.Attributes["image"] - } - } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/exec.go b/vendor/github.com/fsouza/go-dockerclient/exec.go deleted file mode 100644 index 1a16da9d621..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/exec.go +++ /dev/null @@ -1,202 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -import ( - "encoding/json" - "fmt" - "io" - "net/http" - "net/url" - "strconv" -) - -// Exec is the type representing a `docker exec` instance and containing the -// instance ID -type Exec struct { - ID string `json:"Id,omitempty" yaml:"Id,omitempty"` -} - -// CreateExecOptions specify parameters to the CreateExecContainer function. -// -// See https://goo.gl/1KSIb7 for more details -type CreateExecOptions struct { - AttachStdin bool `json:"AttachStdin,omitempty" yaml:"AttachStdin,omitempty"` - AttachStdout bool `json:"AttachStdout,omitempty" yaml:"AttachStdout,omitempty"` - AttachStderr bool `json:"AttachStderr,omitempty" yaml:"AttachStderr,omitempty"` - Tty bool `json:"Tty,omitempty" yaml:"Tty,omitempty"` - Cmd []string `json:"Cmd,omitempty" yaml:"Cmd,omitempty"` - Container string `json:"Container,omitempty" yaml:"Container,omitempty"` - User string `json:"User,omitempty" yaml:"User,omitempty"` -} - -// CreateExec sets up an exec instance in a running container `id`, returning the exec -// instance, or an error in case of failure. 
-// -// See https://goo.gl/1KSIb7 for more details -func (c *Client) CreateExec(opts CreateExecOptions) (*Exec, error) { - path := fmt.Sprintf("/containers/%s/exec", opts.Container) - resp, err := c.do("POST", path, doOptions{data: opts}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, &NoSuchContainer{ID: opts.Container} - } - return nil, err - } - defer resp.Body.Close() - var exec Exec - if err := json.NewDecoder(resp.Body).Decode(&exec); err != nil { - return nil, err - } - - return &exec, nil -} - -// StartExecOptions specify parameters to the StartExecContainer function. -// -// See https://goo.gl/iQCnto for more details -type StartExecOptions struct { - Detach bool `json:"Detach,omitempty" yaml:"Detach,omitempty"` - - Tty bool `json:"Tty,omitempty" yaml:"Tty,omitempty"` - - InputStream io.Reader `qs:"-"` - OutputStream io.Writer `qs:"-"` - ErrorStream io.Writer `qs:"-"` - - // Use raw terminal? Usually true when the container contains a TTY. - RawTerminal bool `qs:"-"` - - // If set, after a successful connect, a sentinel will be sent and then the - // client will block on receive before continuing. - // - // It must be an unbuffered channel. Using a buffered channel can lead - // to unexpected behavior. - Success chan struct{} `json:"-"` -} - -// StartExec starts a previously set up exec instance id. If opts.Detach is -// true, it returns after starting the exec command. Otherwise, it sets up an -// interactive session with the exec command. -// -// See https://goo.gl/iQCnto for more details -func (c *Client) StartExec(id string, opts StartExecOptions) error { - cw, err := c.StartExecNonBlocking(id, opts) - if err != nil { - return err - } - if cw != nil { - return cw.Wait() - } - return nil -} - -// StartExecNonBlocking starts a previously set up exec instance id. If opts.Detach is -// true, it returns after starting the exec command. Otherwise, it sets up an -// interactive session with the exec command. -// -// See https://goo.gl/iQCnto for more details -func (c *Client) StartExecNonBlocking(id string, opts StartExecOptions) (CloseWaiter, error) { - if id == "" { - return nil, &NoSuchExec{ID: id} - } - - path := fmt.Sprintf("/exec/%s/start", id) - - if opts.Detach { - resp, err := c.do("POST", path, doOptions{data: opts}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, &NoSuchExec{ID: id} - } - return nil, err - } - defer resp.Body.Close() - return nil, nil - } - - return c.hijack("POST", path, hijackOptions{ - success: opts.Success, - setRawTerminal: opts.RawTerminal, - in: opts.InputStream, - stdout: opts.OutputStream, - stderr: opts.ErrorStream, - data: opts, - }) -} - -// ResizeExecTTY resizes the tty session used by the exec command id. This API -// is valid only if Tty was specified as part of creating and starting the exec -// command. -// -// See https://goo.gl/e1JpsA for more details -func (c *Client) ResizeExecTTY(id string, height, width int) error { - params := make(url.Values) - params.Set("h", strconv.Itoa(height)) - params.Set("w", strconv.Itoa(width)) - - path := fmt.Sprintf("/exec/%s/resize?%s", id, params.Encode()) - resp, err := c.do("POST", path, doOptions{}) - if err != nil { - return err - } - resp.Body.Close() - return nil -} - -// ExecProcessConfig is a type describing the command associated to a Exec -// instance. It's used in the ExecInspect type. 
-type ExecProcessConfig struct { - Privileged bool `json:"privileged,omitempty" yaml:"privileged,omitempty"` - User string `json:"user,omitempty" yaml:"user,omitempty"` - Tty bool `json:"tty,omitempty" yaml:"tty,omitempty"` - EntryPoint string `json:"entrypoint,omitempty" yaml:"entrypoint,omitempty"` - Arguments []string `json:"arguments,omitempty" yaml:"arguments,omitempty"` -} - -// ExecInspect is a type with details about a exec instance, including the -// exit code if the command has finished running. It's returned by a api -// call to /exec/(id)/json -// -// See https://goo.gl/gPtX9R for more details -type ExecInspect struct { - ID string `json:"ID,omitempty" yaml:"ID,omitempty"` - Running bool `json:"Running,omitempty" yaml:"Running,omitempty"` - ExitCode int `json:"ExitCode,omitempty" yaml:"ExitCode,omitempty"` - OpenStdin bool `json:"OpenStdin,omitempty" yaml:"OpenStdin,omitempty"` - OpenStderr bool `json:"OpenStderr,omitempty" yaml:"OpenStderr,omitempty"` - OpenStdout bool `json:"OpenStdout,omitempty" yaml:"OpenStdout,omitempty"` - ProcessConfig ExecProcessConfig `json:"ProcessConfig,omitempty" yaml:"ProcessConfig,omitempty"` - Container Container `json:"Container,omitempty" yaml:"Container,omitempty"` -} - -// InspectExec returns low-level information about the exec command id. -// -// See https://goo.gl/gPtX9R for more details -func (c *Client) InspectExec(id string) (*ExecInspect, error) { - path := fmt.Sprintf("/exec/%s/json", id) - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, &NoSuchExec{ID: id} - } - return nil, err - } - defer resp.Body.Close() - var exec ExecInspect - if err := json.NewDecoder(resp.Body).Decode(&exec); err != nil { - return nil, err - } - return &exec, nil -} - -// NoSuchExec is the error returned when a given exec instance does not exist. 
-type NoSuchExec struct { - ID string -} - -func (err *NoSuchExec) Error() string { - return "No such exec instance: " + err.ID -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/CHANGELOG.md b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/CHANGELOG.md deleted file mode 100644 index ecc843272b0..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/CHANGELOG.md +++ /dev/null @@ -1,55 +0,0 @@ -# 0.9.0 (Unreleased) - -* logrus/text_formatter: don't emit empty msg -* logrus/hooks/airbrake: move out of main repository -* logrus/hooks/sentry: move out of main repository -* logrus/hooks/papertrail: move out of main repository -* logrus/hooks/bugsnag: move out of main repository - -# 0.8.7 - -* logrus/core: fix possible race (#216) -* logrus/doc: small typo fixes and doc improvements - - -# 0.8.6 - -* hooks/raven: allow passing an initialized client - -# 0.8.5 - -* logrus/core: revert #208 - -# 0.8.4 - -* formatter/text: fix data race (#218) - -# 0.8.3 - -* logrus/core: fix entry log level (#208) -* logrus/core: improve performance of text formatter by 40% -* logrus/core: expose `LevelHooks` type -* logrus/core: add support for DragonflyBSD and NetBSD -* formatter/text: print structs more verbosely - -# 0.8.2 - -* logrus: fix more Fatal family functions - -# 0.8.1 - -* logrus: fix not exiting on `Fatalf` and `Fatalln` - -# 0.8.0 - -* logrus: defaults to stderr instead of stdout -* hooks/sentry: add special field for `*http.Request` -* formatter/text: ignore Windows for colors - -# 0.7.3 - -* formatter/\*: allow configuration of timestamp layout - -# 0.7.2 - -* formatter/text: Add configuration option for time format (#158) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/LICENSE b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/LICENSE deleted file mode 100644 index f090cb42f37..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -The MIT License (MIT) - -Copyright (c) 2014 Simon Eskildsen - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE. 
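The exec API above is used in three steps: `CreateExec` registers the command against a running container, `StartExec` runs it and streams its output, and `InspectExec` reads the exit code once it has finished. A minimal sketch, again assuming `NewClient` against the local daemon and a hypothetical container name; the option fields come from the definitions above:

```go
package main

import (
	"log"
	"os"

	docker "github.com/fsouza/go-dockerclient"
)

func main() {
	client, err := docker.NewClient("unix:///var/run/docker.sock")
	if err != nil {
		log.Fatal(err)
	}

	// Set up an exec instance that runs `ls -l /` inside a running container.
	exec, err := client.CreateExec(docker.CreateExecOptions{
		Container:    "my-container", // hypothetical
		Cmd:          []string{"ls", "-l", "/"},
		AttachStdout: true,
		AttachStderr: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Run it, streaming its output to this process.
	err = client.StartExec(exec.ID, docker.StartExecOptions{
		OutputStream: os.Stdout,
		ErrorStream:  os.Stderr,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Inspect the finished command for its exit code.
	inspect, err := client.InspectExec(exec.ID)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("exec exited with status %d", inspect.ExitCode)
}
```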
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/README.md b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/README.md deleted file mode 100644 index 55d3a8d5f65..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/README.md +++ /dev/null @@ -1,365 +0,0 @@ -# Logrus :walrus: [![Build Status](https://travis-ci.org/Sirupsen/logrus.svg?branch=master)](https://travis-ci.org/Sirupsen/logrus) [![godoc reference](https://godoc.org/github.com/Sirupsen/logrus?status.png)][godoc] - -Logrus is a structured logger for Go (golang), completely API compatible with -the standard library logger. [Godoc][godoc]. **Please note the Logrus API is not -yet stable (pre 1.0). Logrus itself is completely stable and has been used in -many large deployments. The core API is unlikely to change much but please -version control your Logrus to make sure you aren't fetching latest `master` on -every build.** - -Nicely color-coded in development (when a TTY is attached, otherwise just -plain text): - -![Colored](http://i.imgur.com/PY7qMwd.png) - -With `log.Formatter = new(logrus.JSONFormatter)`, for easy parsing by logstash -or Splunk: - -```json -{"animal":"walrus","level":"info","msg":"A group of walrus emerges from the -ocean","size":10,"time":"2014-03-10 19:57:38.562264131 -0400 EDT"} - -{"level":"warning","msg":"The group's number increased tremendously!", -"number":122,"omg":true,"time":"2014-03-10 19:57:38.562471297 -0400 EDT"} - -{"animal":"walrus","level":"info","msg":"A giant walrus appears!", -"size":10,"time":"2014-03-10 19:57:38.562500591 -0400 EDT"} - -{"animal":"walrus","level":"info","msg":"Tremendously sized cow enters the ocean.", -"size":9,"time":"2014-03-10 19:57:38.562527896 -0400 EDT"} - -{"level":"fatal","msg":"The ice breaks!","number":100,"omg":true, -"time":"2014-03-10 19:57:38.562543128 -0400 EDT"} -``` - -With the default `log.Formatter = new(&log.TextFormatter{})` when a TTY is not -attached, the output is compatible with the -[logfmt](http://godoc.org/github.com/kr/logfmt) format: - -```text -time="2015-03-26T01:27:38-04:00" level=debug msg="Started observing beach" animal=walrus number=8 -time="2015-03-26T01:27:38-04:00" level=info msg="A group of walrus emerges from the ocean" animal=walrus size=10 -time="2015-03-26T01:27:38-04:00" level=warning msg="The group's number increased tremendously!" number=122 omg=true -time="2015-03-26T01:27:38-04:00" level=debug msg="Temperature changes" temperature=-4 -time="2015-03-26T01:27:38-04:00" level=panic msg="It's over 9000!" animal=orca size=9009 -time="2015-03-26T01:27:38-04:00" level=fatal msg="The ice breaks!" err=&{0x2082280c0 map[animal:orca size:9009] 2015-03-26 01:27:38.441574009 -0400 EDT panic It's over 9000!} number=100 omg=true -exit status 1 -``` - -#### Example - -The simplest way to use Logrus is simply the package-level exported logger: - -```go -package main - -import ( - log "github.com/Sirupsen/logrus" -) - -func main() { - log.WithFields(log.Fields{ - "animal": "walrus", - }).Info("A walrus appears") -} -``` - -Note that it's completely api-compatible with the stdlib logger, so you can -replace your `log` imports everywhere with `log "github.com/Sirupsen/logrus"` -and you'll now have the flexibility of Logrus. 
You can customize it all you -want: - -```go -package main - -import ( - "os" - log "github.com/Sirupsen/logrus" -) - -func init() { - // Log as JSON instead of the default ASCII formatter. - log.SetFormatter(&log.JSONFormatter{}) - - // Output to stderr instead of stdout, could also be a file. - log.SetOutput(os.Stderr) - - // Only log the warning severity or above. - log.SetLevel(log.WarnLevel) -} - -func main() { - log.WithFields(log.Fields{ - "animal": "walrus", - "size": 10, - }).Info("A group of walrus emerges from the ocean") - - log.WithFields(log.Fields{ - "omg": true, - "number": 122, - }).Warn("The group's number increased tremendously!") - - log.WithFields(log.Fields{ - "omg": true, - "number": 100, - }).Fatal("The ice breaks!") - - // A common pattern is to re-use fields between logging statements by re-using - // the logrus.Entry returned from WithFields() - contextLogger := log.WithFields(log.Fields{ - "common": "this is a common field", - "other": "I also should be logged always", - }) - - contextLogger.Info("I'll be logged with common and other field") - contextLogger.Info("Me too") -} -``` - -For more advanced usage such as logging to multiple locations from the same -application, you can also create an instance of the `logrus` Logger: - -```go -package main - -import ( - "github.com/Sirupsen/logrus" -) - -// Create a new instance of the logger. You can have any number of instances. -var log = logrus.New() - -func main() { - // The API for setting attributes is a little different than the package level - // exported logger. See Godoc. - log.Out = os.Stderr - - log.WithFields(logrus.Fields{ - "animal": "walrus", - "size": 10, - }).Info("A group of walrus emerges from the ocean") -} -``` - -#### Fields - -Logrus encourages careful, structured logging though logging fields instead of -long, unparseable error messages. For example, instead of: `log.Fatalf("Failed -to send event %s to topic %s with key %d")`, you should log the much more -discoverable: - -```go -log.WithFields(log.Fields{ - "event": event, - "topic": topic, - "key": key, -}).Fatal("Failed to send event") -``` - -We've found this API forces you to think about logging in a way that produces -much more useful logging messages. We've been in countless situations where just -a single added field to a log statement that was already there would've saved us -hours. The `WithFields` call is optional. - -In general, with Logrus using any of the `printf`-family functions should be -seen as a hint you should add a field, however, you can still use the -`printf`-family functions with Logrus. - -#### Hooks - -You can add hooks for logging levels. For example to send errors to an exception -tracking service on `Error`, `Fatal` and `Panic`, info to StatsD or log to -multiple places simultaneously, e.g. syslog. - -Logrus comes with [built-in hooks](hooks/). Add those, or your custom hook, in -`init`: - -```go -import ( - log "github.com/Sirupsen/logrus" - "gopkg.in/gemnasium/logrus-airbrake-hook.v2" // the package is named "aibrake" - logrus_syslog "github.com/Sirupsen/logrus/hooks/syslog" - "log/syslog" -) - -func init() { - - // Use the Airbrake hook to report errors that have Error severity or above to - // an exception tracker. You can create custom hooks, see the Hooks section. 
- log.AddHook(airbrake.NewHook(123, "xyz", "production")) - - hook, err := logrus_syslog.NewSyslogHook("udp", "localhost:514", syslog.LOG_INFO, "") - if err != nil { - log.Error("Unable to connect to local syslog daemon") - } else { - log.AddHook(hook) - } -} -``` -Note: Syslog hook also support connecting to local syslog (Ex. "/dev/log" or "/var/run/syslog" or "/var/run/log"). For the detail, please check the [syslog hook README](hooks/syslog/README.md). - -| Hook | Description | -| ----- | ----------- | -| [Airbrake](https://github.com/gemnasium/logrus-airbrake-hook) | Send errors to the Airbrake API V3. Uses the official [`gobrake`](https://github.com/airbrake/gobrake) behind the scenes. | -| [Airbrake "legacy"](https://github.com/gemnasium/logrus-airbrake-legacy-hook) | Send errors to an exception tracking service compatible with the Airbrake API V2. Uses [`airbrake-go`](https://github.com/tobi/airbrake-go) behind the scenes. | -| [Papertrail](https://github.com/polds/logrus-papertrail-hook) | Send errors to the [Papertrail](https://papertrailapp.com) hosted logging service via UDP. | -| [Syslog](https://github.com/Sirupsen/logrus/blob/master/hooks/syslog/syslog.go) | Send errors to remote syslog server. Uses standard library `log/syslog` behind the scenes. | -| [Bugsnag](https://github.com/Shopify/logrus-bugsnag/blob/master/bugsnag.go) | Send errors to the Bugsnag exception tracking service. | -| [Sentry](https://github.com/evalphobia/logrus_sentry) | Send errors to the Sentry error logging and aggregation service. | -| [Hiprus](https://github.com/nubo/hiprus) | Send errors to a channel in hipchat. | -| [Logrusly](https://github.com/sebest/logrusly) | Send logs to [Loggly](https://www.loggly.com/) | -| [Slackrus](https://github.com/johntdyer/slackrus) | Hook for Slack chat. | -| [Journalhook](https://github.com/wercker/journalhook) | Hook for logging to `systemd-journald` | -| [Graylog](https://github.com/gemnasium/logrus-graylog-hook) | Hook for logging to [Graylog](http://graylog2.org/) | -| [Raygun](https://github.com/squirkle/logrus-raygun-hook) | Hook for logging to [Raygun.io](http://raygun.io/) | -| [LFShook](https://github.com/rifflock/lfshook) | Hook for logging to the local filesystem | -| [Honeybadger](https://github.com/agonzalezro/logrus_honeybadger) | Hook for sending exceptions to Honeybadger | -| [Mail](https://github.com/zbindenren/logrus_mail) | Hook for sending exceptions via mail | -| [Rollrus](https://github.com/heroku/rollrus) | Hook for sending errors to rollbar | -| [Fluentd](https://github.com/evalphobia/logrus_fluent) | Hook for logging to fluentd | -| [Mongodb](https://github.com/weekface/mgorus) | Hook for logging to mongodb | -| [InfluxDB](https://github.com/Abramovic/logrus_influxdb) | Hook for logging to influxdb | -| [Octokit](https://github.com/dorajistyle/logrus-octokit-hook) | Hook for logging to github via octokit | -| [DeferPanic](https://github.com/deferpanic/dp-logrus) | Hook for logging to DeferPanic | - -#### Level logging - -Logrus has six logging levels: Debug, Info, Warning, Error, Fatal and Panic. 
- -```go -log.Debug("Useful debugging information.") -log.Info("Something noteworthy happened!") -log.Warn("You should probably take a look at this.") -log.Error("Something failed but I'm not quitting.") -// Calls os.Exit(1) after logging -log.Fatal("Bye.") -// Calls panic() after logging -log.Panic("I'm bailing.") -``` - -You can set the logging level on a `Logger`, then it will only log entries with -that severity or anything above it: - -```go -// Will log anything that is info or above (warn, error, fatal, panic). Default. -log.SetLevel(log.InfoLevel) -``` - -It may be useful to set `log.Level = logrus.DebugLevel` in a debug or verbose -environment if your application has that. - -#### Entries - -Besides the fields added with `WithField` or `WithFields` some fields are -automatically added to all logging events: - -1. `time`. The timestamp when the entry was created. -2. `msg`. The logging message passed to `{Info,Warn,Error,Fatal,Panic}` after - the `AddFields` call. E.g. `Failed to send event.` -3. `level`. The logging level. E.g. `info`. - -#### Environments - -Logrus has no notion of environment. - -If you wish for hooks and formatters to only be used in specific environments, -you should handle that yourself. For example, if your application has a global -variable `Environment`, which is a string representation of the environment you -could do: - -```go -import ( - log "github.com/Sirupsen/logrus" -) - -init() { - // do something here to set environment depending on an environment variable - // or command-line flag - if Environment == "production" { - log.SetFormatter(&log.JSONFormatter{}) - } else { - // The TextFormatter is default, you don't actually have to do this. - log.SetFormatter(&log.TextFormatter{}) - } -} -``` - -This configuration is how `logrus` was intended to be used, but JSON in -production is mostly only useful if you do log aggregation with tools like -Splunk or Logstash. - -#### Formatters - -The built-in logging formatters are: - -* `logrus.TextFormatter`. Logs the event in colors if stdout is a tty, otherwise - without colors. - * *Note:* to force colored output when there is no TTY, set the `ForceColors` - field to `true`. To force no colored output even if there is a TTY set the - `DisableColors` field to `true` -* `logrus.JSONFormatter`. Logs fields as JSON. -* `logrus/formatters/logstash.LogstashFormatter`. Logs fields as [Logstash](http://logstash.net) Events. - - ```go - logrus.SetFormatter(&logstash.LogstashFormatter{Type: "application_name"}) - ``` - -Third party logging formatters: - -* [`prefixed`](https://github.com/x-cray/logrus-prefixed-formatter). Displays log entry source along with alternative layout. -* [`zalgo`](https://github.com/aybabtme/logzalgo). Invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦. - -You can define your formatter by implementing the `Formatter` interface, -requiring a `Format` method. `Format` takes an `*Entry`. `entry.Data` is a -`Fields` type (`map[string]interface{}`) with all your fields as well as the -default ones (see Entries section above): - -```go -type MyJSONFormatter struct { -} - -log.SetFormatter(new(MyJSONFormatter)) - -func (f *MyJSONFormatter) Format(entry *Entry) ([]byte, error) { - // Note this doesn't include Time, Level and Message which are available on - // the Entry. Consult `godoc` on information about those fields or read the - // source of the official loggers. 
- serialized, err := json.Marshal(entry.Data) - if err != nil { - return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err) - } - return append(serialized, '\n'), nil -} -``` - -#### Logger as an `io.Writer` - -Logrus can be transformed into an `io.Writer`. That writer is the end of an `io.Pipe` and it is your responsibility to close it. - -```go -w := logger.Writer() -defer w.Close() - -srv := http.Server{ - // create a stdlib log.Logger that writes to - // logrus.Logger. - ErrorLog: log.New(w, "", 0), -} -``` - -Each line written to that writer will be printed the usual way, using formatters -and hooks. The level for those entries is `info`. - -#### Rotation - -Log rotation is not provided with Logrus. Log rotation should be done by an -external program (like `logrotate(8)`) that can compress and delete old log -entries. It should not be a feature of the application-level logger. - -#### Tools - -| Tool | Description | -| ---- | ----------- | -|[Logrus Mate](https://github.com/gogap/logrus_mate)|Logrus mate is a tool for Logrus to manage loggers, you can initial logger's level, hook and formatter by config file, the logger will generated with different config at different environment.| - -[godoc]: https://godoc.org/github.com/Sirupsen/logrus diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/doc.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/doc.go deleted file mode 100644 index dddd5f877bf..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/doc.go +++ /dev/null @@ -1,26 +0,0 @@ -/* -Package logrus is a structured logger for Go, completely API compatible with the standard library logger. - - -The simplest way to use Logrus is simply the package-level exported logger: - - package main - - import ( - log "github.com/Sirupsen/logrus" - ) - - func main() { - log.WithFields(log.Fields{ - "animal": "walrus", - "number": 1, - "size": 10, - }).Info("A walrus appears") - } - -Output: - time="2015-09-07T08:48:33Z" level=info msg="A walrus appears" animal=walrus number=1 size=10 - -For a full guide visit https://github.com/Sirupsen/logrus -*/ -package logrus diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/entry.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/entry.go deleted file mode 100644 index 9ae900bc5ef..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/entry.go +++ /dev/null @@ -1,264 +0,0 @@ -package logrus - -import ( - "bytes" - "fmt" - "io" - "os" - "time" -) - -// Defines the key when adding errors using WithError. -var ErrorKey = "error" - -// An entry is the final or intermediate Logrus logging entry. It contains all -// the fields passed with WithField{,s}. It's finally logged when Debug, Info, -// Warn, Error, Fatal or Panic is called on it. These objects can be reused and -// passed around as much as you wish to avoid field duplication. -type Entry struct { - Logger *Logger - - // Contains all the fields set by the user. 
- Data Fields - - // Time at which the log entry was created - Time time.Time - - // Level the log entry was logged at: Debug, Info, Warn, Error, Fatal or Panic - Level Level - - // Message passed to Debug, Info, Warn, Error, Fatal or Panic - Message string -} - -func NewEntry(logger *Logger) *Entry { - return &Entry{ - Logger: logger, - // Default is three fields, give a little extra room - Data: make(Fields, 5), - } -} - -// Returns a reader for the entry, which is a proxy to the formatter. -func (entry *Entry) Reader() (*bytes.Buffer, error) { - serialized, err := entry.Logger.Formatter.Format(entry) - return bytes.NewBuffer(serialized), err -} - -// Returns the string representation from the reader and ultimately the -// formatter. -func (entry *Entry) String() (string, error) { - reader, err := entry.Reader() - if err != nil { - return "", err - } - - return reader.String(), err -} - -// Add an error as single field (using the key defined in ErrorKey) to the Entry. -func (entry *Entry) WithError(err error) *Entry { - return entry.WithField(ErrorKey, err) -} - -// Add a single field to the Entry. -func (entry *Entry) WithField(key string, value interface{}) *Entry { - return entry.WithFields(Fields{key: value}) -} - -// Add a map of fields to the Entry. -func (entry *Entry) WithFields(fields Fields) *Entry { - data := Fields{} - for k, v := range entry.Data { - data[k] = v - } - for k, v := range fields { - data[k] = v - } - return &Entry{Logger: entry.Logger, Data: data} -} - -// This function is not declared with a pointer value because otherwise -// race conditions will occur when using multiple goroutines -func (entry Entry) log(level Level, msg string) { - entry.Time = time.Now() - entry.Level = level - entry.Message = msg - - if err := entry.Logger.Hooks.Fire(level, &entry); err != nil { - entry.Logger.mu.Lock() - fmt.Fprintf(os.Stderr, "Failed to fire hook: %v\n", err) - entry.Logger.mu.Unlock() - } - - reader, err := entry.Reader() - if err != nil { - entry.Logger.mu.Lock() - fmt.Fprintf(os.Stderr, "Failed to obtain reader, %v\n", err) - entry.Logger.mu.Unlock() - } - - entry.Logger.mu.Lock() - defer entry.Logger.mu.Unlock() - - _, err = io.Copy(entry.Logger.Out, reader) - if err != nil { - fmt.Fprintf(os.Stderr, "Failed to write to log, %v\n", err) - } - - // To avoid Entry#log() returning a value that only would make sense for - // panic() to use in Entry#Panic(), we avoid the allocation by checking - // directly here. - if level <= PanicLevel { - panic(&entry) - } -} - -func (entry *Entry) Debug(args ...interface{}) { - if entry.Logger.Level >= DebugLevel { - entry.log(DebugLevel, fmt.Sprint(args...)) - } -} - -func (entry *Entry) Print(args ...interface{}) { - entry.Info(args...) -} - -func (entry *Entry) Info(args ...interface{}) { - if entry.Logger.Level >= InfoLevel { - entry.log(InfoLevel, fmt.Sprint(args...)) - } -} - -func (entry *Entry) Warn(args ...interface{}) { - if entry.Logger.Level >= WarnLevel { - entry.log(WarnLevel, fmt.Sprint(args...)) - } -} - -func (entry *Entry) Warning(args ...interface{}) { - entry.Warn(args...) 
-} - -func (entry *Entry) Error(args ...interface{}) { - if entry.Logger.Level >= ErrorLevel { - entry.log(ErrorLevel, fmt.Sprint(args...)) - } -} - -func (entry *Entry) Fatal(args ...interface{}) { - if entry.Logger.Level >= FatalLevel { - entry.log(FatalLevel, fmt.Sprint(args...)) - } - os.Exit(1) -} - -func (entry *Entry) Panic(args ...interface{}) { - if entry.Logger.Level >= PanicLevel { - entry.log(PanicLevel, fmt.Sprint(args...)) - } - panic(fmt.Sprint(args...)) -} - -// Entry Printf family functions - -func (entry *Entry) Debugf(format string, args ...interface{}) { - if entry.Logger.Level >= DebugLevel { - entry.Debug(fmt.Sprintf(format, args...)) - } -} - -func (entry *Entry) Infof(format string, args ...interface{}) { - if entry.Logger.Level >= InfoLevel { - entry.Info(fmt.Sprintf(format, args...)) - } -} - -func (entry *Entry) Printf(format string, args ...interface{}) { - entry.Infof(format, args...) -} - -func (entry *Entry) Warnf(format string, args ...interface{}) { - if entry.Logger.Level >= WarnLevel { - entry.Warn(fmt.Sprintf(format, args...)) - } -} - -func (entry *Entry) Warningf(format string, args ...interface{}) { - entry.Warnf(format, args...) -} - -func (entry *Entry) Errorf(format string, args ...interface{}) { - if entry.Logger.Level >= ErrorLevel { - entry.Error(fmt.Sprintf(format, args...)) - } -} - -func (entry *Entry) Fatalf(format string, args ...interface{}) { - if entry.Logger.Level >= FatalLevel { - entry.Fatal(fmt.Sprintf(format, args...)) - } - os.Exit(1) -} - -func (entry *Entry) Panicf(format string, args ...interface{}) { - if entry.Logger.Level >= PanicLevel { - entry.Panic(fmt.Sprintf(format, args...)) - } -} - -// Entry Println family functions - -func (entry *Entry) Debugln(args ...interface{}) { - if entry.Logger.Level >= DebugLevel { - entry.Debug(entry.sprintlnn(args...)) - } -} - -func (entry *Entry) Infoln(args ...interface{}) { - if entry.Logger.Level >= InfoLevel { - entry.Info(entry.sprintlnn(args...)) - } -} - -func (entry *Entry) Println(args ...interface{}) { - entry.Infoln(args...) -} - -func (entry *Entry) Warnln(args ...interface{}) { - if entry.Logger.Level >= WarnLevel { - entry.Warn(entry.sprintlnn(args...)) - } -} - -func (entry *Entry) Warningln(args ...interface{}) { - entry.Warnln(args...) -} - -func (entry *Entry) Errorln(args ...interface{}) { - if entry.Logger.Level >= ErrorLevel { - entry.Error(entry.sprintlnn(args...)) - } -} - -func (entry *Entry) Fatalln(args ...interface{}) { - if entry.Logger.Level >= FatalLevel { - entry.Fatal(entry.sprintlnn(args...)) - } - os.Exit(1) -} - -func (entry *Entry) Panicln(args ...interface{}) { - if entry.Logger.Level >= PanicLevel { - entry.Panic(entry.sprintlnn(args...)) - } -} - -// Sprintlnn => Sprint no newline. This is to get the behavior of how -// fmt.Sprintln where spaces are always added between operands, regardless of -// their type. Instead of vendoring the Sprintln implementation to spare a -// string allocation, we do the simplest thing. -func (entry *Entry) sprintlnn(args ...interface{}) string { - msg := fmt.Sprintln(args...) 
- return msg[:len(msg)-1] -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/exported.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/exported.go deleted file mode 100644 index 9a0120ac1dd..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/exported.go +++ /dev/null @@ -1,193 +0,0 @@ -package logrus - -import ( - "io" -) - -var ( - // std is the name of the standard logger in stdlib `log` - std = New() -) - -func StandardLogger() *Logger { - return std -} - -// SetOutput sets the standard logger output. -func SetOutput(out io.Writer) { - std.mu.Lock() - defer std.mu.Unlock() - std.Out = out -} - -// SetFormatter sets the standard logger formatter. -func SetFormatter(formatter Formatter) { - std.mu.Lock() - defer std.mu.Unlock() - std.Formatter = formatter -} - -// SetLevel sets the standard logger level. -func SetLevel(level Level) { - std.mu.Lock() - defer std.mu.Unlock() - std.Level = level -} - -// GetLevel returns the standard logger level. -func GetLevel() Level { - std.mu.Lock() - defer std.mu.Unlock() - return std.Level -} - -// AddHook adds a hook to the standard logger hooks. -func AddHook(hook Hook) { - std.mu.Lock() - defer std.mu.Unlock() - std.Hooks.Add(hook) -} - -// WithError creates an entry from the standard logger and adds an error to it, using the value defined in ErrorKey as key. -func WithError(err error) *Entry { - return std.WithField(ErrorKey, err) -} - -// WithField creates an entry from the standard logger and adds a field to -// it. If you want multiple fields, use `WithFields`. -// -// Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal -// or Panic on the Entry it returns. -func WithField(key string, value interface{}) *Entry { - return std.WithField(key, value) -} - -// WithFields creates an entry from the standard logger and adds multiple -// fields to it. This is simply a helper for `WithField`, invoking it -// once for each field. -// -// Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal -// or Panic on the Entry it returns. -func WithFields(fields Fields) *Entry { - return std.WithFields(fields) -} - -// Debug logs a message at level Debug on the standard logger. -func Debug(args ...interface{}) { - std.Debug(args...) -} - -// Print logs a message at level Info on the standard logger. -func Print(args ...interface{}) { - std.Print(args...) -} - -// Info logs a message at level Info on the standard logger. -func Info(args ...interface{}) { - std.Info(args...) -} - -// Warn logs a message at level Warn on the standard logger. -func Warn(args ...interface{}) { - std.Warn(args...) -} - -// Warning logs a message at level Warn on the standard logger. -func Warning(args ...interface{}) { - std.Warning(args...) -} - -// Error logs a message at level Error on the standard logger. -func Error(args ...interface{}) { - std.Error(args...) -} - -// Panic logs a message at level Panic on the standard logger. -func Panic(args ...interface{}) { - std.Panic(args...) -} - -// Fatal logs a message at level Fatal on the standard logger. -func Fatal(args ...interface{}) { - std.Fatal(args...) -} - -// Debugf logs a message at level Debug on the standard logger. -func Debugf(format string, args ...interface{}) { - std.Debugf(format, args...) -} - -// Printf logs a message at level Info on the standard logger. 
-func Printf(format string, args ...interface{}) { - std.Printf(format, args...) -} - -// Infof logs a message at level Info on the standard logger. -func Infof(format string, args ...interface{}) { - std.Infof(format, args...) -} - -// Warnf logs a message at level Warn on the standard logger. -func Warnf(format string, args ...interface{}) { - std.Warnf(format, args...) -} - -// Warningf logs a message at level Warn on the standard logger. -func Warningf(format string, args ...interface{}) { - std.Warningf(format, args...) -} - -// Errorf logs a message at level Error on the standard logger. -func Errorf(format string, args ...interface{}) { - std.Errorf(format, args...) -} - -// Panicf logs a message at level Panic on the standard logger. -func Panicf(format string, args ...interface{}) { - std.Panicf(format, args...) -} - -// Fatalf logs a message at level Fatal on the standard logger. -func Fatalf(format string, args ...interface{}) { - std.Fatalf(format, args...) -} - -// Debugln logs a message at level Debug on the standard logger. -func Debugln(args ...interface{}) { - std.Debugln(args...) -} - -// Println logs a message at level Info on the standard logger. -func Println(args ...interface{}) { - std.Println(args...) -} - -// Infoln logs a message at level Info on the standard logger. -func Infoln(args ...interface{}) { - std.Infoln(args...) -} - -// Warnln logs a message at level Warn on the standard logger. -func Warnln(args ...interface{}) { - std.Warnln(args...) -} - -// Warningln logs a message at level Warn on the standard logger. -func Warningln(args ...interface{}) { - std.Warningln(args...) -} - -// Errorln logs a message at level Error on the standard logger. -func Errorln(args ...interface{}) { - std.Errorln(args...) -} - -// Panicln logs a message at level Panic on the standard logger. -func Panicln(args ...interface{}) { - std.Panicln(args...) -} - -// Fatalln logs a message at level Fatal on the standard logger. -func Fatalln(args ...interface{}) { - std.Fatalln(args...) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/formatter.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/formatter.go deleted file mode 100644 index 104d689f187..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/formatter.go +++ /dev/null @@ -1,48 +0,0 @@ -package logrus - -import "time" - -const DefaultTimestampFormat = time.RFC3339 - -// The Formatter interface is used to implement a custom Formatter. It takes an -// `Entry`. It exposes all the fields, including the default ones: -// -// * `entry.Data["msg"]`. The message passed from Info, Warn, Error .. -// * `entry.Data["time"]`. The timestamp. -// * `entry.Data["level"]. The level the entry was logged at. -// -// Any additional fields added with `WithField` or `WithFields` are also in -// `entry.Data`. Format is expected to return an array of bytes which are then -// logged to `logger.Out`. -type Formatter interface { - Format(*Entry) ([]byte, error) -} - -// This is to not silently overwrite `time`, `msg` and `level` fields when -// dumping it. If this code wasn't there doing: -// -// logrus.WithField("level", 1).Info("hello") -// -// Would just silently drop the user provided level. 
Instead with this code -// it'll logged as: -// -// {"level": "info", "fields.level": 1, "msg": "hello", "time": "..."} -// -// It's not exported because it's still using Data in an opinionated way. It's to -// avoid code duplication between the two default formatters. -func prefixFieldClashes(data Fields) { - _, ok := data["time"] - if ok { - data["fields.time"] = data["time"] - } - - _, ok = data["msg"] - if ok { - data["fields.msg"] = data["msg"] - } - - _, ok = data["level"] - if ok { - data["fields.level"] = data["level"] - } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/hooks.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/hooks.go deleted file mode 100644 index 3f151cdc392..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/hooks.go +++ /dev/null @@ -1,34 +0,0 @@ -package logrus - -// A hook to be fired when logging on the logging levels returned from -// `Levels()` on your implementation of the interface. Note that this is not -// fired in a goroutine or a channel with workers, you should handle such -// functionality yourself if your call is non-blocking and you don't wish for -// the logging calls for levels returned from `Levels()` to block. -type Hook interface { - Levels() []Level - Fire(*Entry) error -} - -// Internal type for storing the hooks on a logger instance. -type LevelHooks map[Level][]Hook - -// Add a hook to an instance of logger. This is called with -// `log.Hooks.Add(new(MyHook))` where `MyHook` implements the `Hook` interface. -func (hooks LevelHooks) Add(hook Hook) { - for _, level := range hook.Levels() { - hooks[level] = append(hooks[level], hook) - } -} - -// Fire all the hooks for the passed level. Used by `entry.log` to fire -// appropriate hooks for a log entry. -func (hooks LevelHooks) Fire(level Level, entry *Entry) error { - for _, hook := range hooks[level] { - if err := hook.Fire(entry); err != nil { - return err - } - } - - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/json_formatter.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/json_formatter.go deleted file mode 100644 index 2ad6dc5cf4f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/json_formatter.go +++ /dev/null @@ -1,41 +0,0 @@ -package logrus - -import ( - "encoding/json" - "fmt" -) - -type JSONFormatter struct { - // TimestampFormat sets the format used for marshaling timestamps. 
- TimestampFormat string -} - -func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) { - data := make(Fields, len(entry.Data)+3) - for k, v := range entry.Data { - switch v := v.(type) { - case error: - // Otherwise errors are ignored by `encoding/json` - // https://github.com/Sirupsen/logrus/issues/137 - data[k] = v.Error() - default: - data[k] = v - } - } - prefixFieldClashes(data) - - timestampFormat := f.TimestampFormat - if timestampFormat == "" { - timestampFormat = DefaultTimestampFormat - } - - data["time"] = entry.Time.Format(timestampFormat) - data["msg"] = entry.Message - data["level"] = entry.Level.String() - - serialized, err := json.Marshal(data) - if err != nil { - return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err) - } - return append(serialized, '\n'), nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/logger.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/logger.go deleted file mode 100644 index 2fdb2317612..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/logger.go +++ /dev/null @@ -1,212 +0,0 @@ -package logrus - -import ( - "io" - "os" - "sync" -) - -type Logger struct { - // The logs are `io.Copy`'d to this in a mutex. It's common to set this to a - // file, or leave it default which is `os.Stderr`. You can also set this to - // something more adventorous, such as logging to Kafka. - Out io.Writer - // Hooks for the logger instance. These allow firing events based on logging - // levels and log entries. For example, to send errors to an error tracking - // service, log to StatsD or dump the core on fatal errors. - Hooks LevelHooks - // All log entries pass through the formatter before logged to Out. The - // included formatters are `TextFormatter` and `JSONFormatter` for which - // TextFormatter is the default. In development (when a TTY is attached) it - // logs with colors, but to a file it wouldn't. You can easily implement your - // own that implements the `Formatter` interface, see the `README` or included - // formatters for examples. - Formatter Formatter - // The logging level the logger should log at. This is typically (and defaults - // to) `logrus.Info`, which allows Info(), Warn(), Error() and Fatal() to be - // logged. `logrus.Debug` is useful in - Level Level - // Used to sync writing to the log. - mu sync.Mutex -} - -// Creates a new logger. Configuration should be set by changing `Formatter`, -// `Out` and `Hooks` directly on the default logger instance. You can also just -// instantiate your own: -// -// var log = &Logger{ -// Out: os.Stderr, -// Formatter: new(JSONFormatter), -// Hooks: make(LevelHooks), -// Level: logrus.DebugLevel, -// } -// -// It's recommended to make this a global instance called `log`. -func New() *Logger { - return &Logger{ - Out: os.Stderr, - Formatter: new(TextFormatter), - Hooks: make(LevelHooks), - Level: InfoLevel, - } -} - -// Adds a field to the log entry, note that you it doesn't log until you call -// Debug, Print, Info, Warn, Fatal or Panic. It only creates a log entry. -// If you want multiple fields, use `WithFields`. -func (logger *Logger) WithField(key string, value interface{}) *Entry { - return NewEntry(logger).WithField(key, value) -} - -// Adds a struct of fields to the log entry. All it does is call `WithField` for -// each `Field`. 
-func (logger *Logger) WithFields(fields Fields) *Entry { - return NewEntry(logger).WithFields(fields) -} - -// Add an error as single field to the log entry. All it does is call -// `WithError` for the given `error`. -func (logger *Logger) WithError(err error) *Entry { - return NewEntry(logger).WithError(err) -} - -func (logger *Logger) Debugf(format string, args ...interface{}) { - if logger.Level >= DebugLevel { - NewEntry(logger).Debugf(format, args...) - } -} - -func (logger *Logger) Infof(format string, args ...interface{}) { - if logger.Level >= InfoLevel { - NewEntry(logger).Infof(format, args...) - } -} - -func (logger *Logger) Printf(format string, args ...interface{}) { - NewEntry(logger).Printf(format, args...) -} - -func (logger *Logger) Warnf(format string, args ...interface{}) { - if logger.Level >= WarnLevel { - NewEntry(logger).Warnf(format, args...) - } -} - -func (logger *Logger) Warningf(format string, args ...interface{}) { - if logger.Level >= WarnLevel { - NewEntry(logger).Warnf(format, args...) - } -} - -func (logger *Logger) Errorf(format string, args ...interface{}) { - if logger.Level >= ErrorLevel { - NewEntry(logger).Errorf(format, args...) - } -} - -func (logger *Logger) Fatalf(format string, args ...interface{}) { - if logger.Level >= FatalLevel { - NewEntry(logger).Fatalf(format, args...) - } - os.Exit(1) -} - -func (logger *Logger) Panicf(format string, args ...interface{}) { - if logger.Level >= PanicLevel { - NewEntry(logger).Panicf(format, args...) - } -} - -func (logger *Logger) Debug(args ...interface{}) { - if logger.Level >= DebugLevel { - NewEntry(logger).Debug(args...) - } -} - -func (logger *Logger) Info(args ...interface{}) { - if logger.Level >= InfoLevel { - NewEntry(logger).Info(args...) - } -} - -func (logger *Logger) Print(args ...interface{}) { - NewEntry(logger).Info(args...) -} - -func (logger *Logger) Warn(args ...interface{}) { - if logger.Level >= WarnLevel { - NewEntry(logger).Warn(args...) - } -} - -func (logger *Logger) Warning(args ...interface{}) { - if logger.Level >= WarnLevel { - NewEntry(logger).Warn(args...) - } -} - -func (logger *Logger) Error(args ...interface{}) { - if logger.Level >= ErrorLevel { - NewEntry(logger).Error(args...) - } -} - -func (logger *Logger) Fatal(args ...interface{}) { - if logger.Level >= FatalLevel { - NewEntry(logger).Fatal(args...) - } - os.Exit(1) -} - -func (logger *Logger) Panic(args ...interface{}) { - if logger.Level >= PanicLevel { - NewEntry(logger).Panic(args...) - } -} - -func (logger *Logger) Debugln(args ...interface{}) { - if logger.Level >= DebugLevel { - NewEntry(logger).Debugln(args...) - } -} - -func (logger *Logger) Infoln(args ...interface{}) { - if logger.Level >= InfoLevel { - NewEntry(logger).Infoln(args...) - } -} - -func (logger *Logger) Println(args ...interface{}) { - NewEntry(logger).Println(args...) -} - -func (logger *Logger) Warnln(args ...interface{}) { - if logger.Level >= WarnLevel { - NewEntry(logger).Warnln(args...) - } -} - -func (logger *Logger) Warningln(args ...interface{}) { - if logger.Level >= WarnLevel { - NewEntry(logger).Warnln(args...) - } -} - -func (logger *Logger) Errorln(args ...interface{}) { - if logger.Level >= ErrorLevel { - NewEntry(logger).Errorln(args...) - } -} - -func (logger *Logger) Fatalln(args ...interface{}) { - if logger.Level >= FatalLevel { - NewEntry(logger).Fatalln(args...) - } - os.Exit(1) -} - -func (logger *Logger) Panicln(args ...interface{}) { - if logger.Level >= PanicLevel { - NewEntry(logger).Panicln(args...) 
- } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/logrus.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/logrus.go deleted file mode 100644 index 0c09fbc2643..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/logrus.go +++ /dev/null @@ -1,98 +0,0 @@ -package logrus - -import ( - "fmt" - "log" -) - -// Fields type, used to pass to `WithFields`. -type Fields map[string]interface{} - -// Level type -type Level uint8 - -// Convert the Level to a string. E.g. PanicLevel becomes "panic". -func (level Level) String() string { - switch level { - case DebugLevel: - return "debug" - case InfoLevel: - return "info" - case WarnLevel: - return "warning" - case ErrorLevel: - return "error" - case FatalLevel: - return "fatal" - case PanicLevel: - return "panic" - } - - return "unknown" -} - -// ParseLevel takes a string level and returns the Logrus log level constant. -func ParseLevel(lvl string) (Level, error) { - switch lvl { - case "panic": - return PanicLevel, nil - case "fatal": - return FatalLevel, nil - case "error": - return ErrorLevel, nil - case "warn", "warning": - return WarnLevel, nil - case "info": - return InfoLevel, nil - case "debug": - return DebugLevel, nil - } - - var l Level - return l, fmt.Errorf("not a valid logrus Level: %q", lvl) -} - -// These are the different logging levels. You can set the logging level to log -// on your instance of logger, obtained with `logrus.New()`. -const ( - // PanicLevel level, highest level of severity. Logs and then calls panic with the - // message passed to Debug, Info, ... - PanicLevel Level = iota - // FatalLevel level. Logs and then calls `os.Exit(1)`. It will exit even if the - // logging level is set to Panic. - FatalLevel - // ErrorLevel level. Logs. Used for errors that should definitely be noted. - // Commonly used for hooks to send errors to an error tracking service. - ErrorLevel - // WarnLevel level. Non-critical entries that deserve eyes. - WarnLevel - // InfoLevel level. General operational entries about what's going on inside the - // application. - InfoLevel - // DebugLevel level. Usually only enabled when debugging. Very verbose logging. - DebugLevel -) - -// Won't compile if StdLogger can't be realized by a log.Logger -var ( - _ StdLogger = &log.Logger{} - _ StdLogger = &Entry{} - _ StdLogger = &Logger{} -) - -// StdLogger is what your logrus-enabled library should take, that way -// it'll accept a stdlib logger and a logrus logger. There's no standard -// interface, this is the closest we get, unfortunately. 
-type StdLogger interface { - Print(...interface{}) - Printf(string, ...interface{}) - Println(...interface{}) - - Fatal(...interface{}) - Fatalf(string, ...interface{}) - Fatalln(...interface{}) - - Panic(...interface{}) - Panicf(string, ...interface{}) - Panicln(...interface{}) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_bsd.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_bsd.go deleted file mode 100644 index 71f8d67a55d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_bsd.go +++ /dev/null @@ -1,9 +0,0 @@ -// +build darwin freebsd openbsd netbsd dragonfly - -package logrus - -import "syscall" - -const ioctlReadTermios = syscall.TIOCGETA - -type Termios syscall.Termios diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_linux.go deleted file mode 100644 index a2c0b40db61..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_linux.go +++ /dev/null @@ -1,12 +0,0 @@ -// Based on ssh/terminal: -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package logrus - -import "syscall" - -const ioctlReadTermios = syscall.TCGETS - -type Termios syscall.Termios diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_notwindows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_notwindows.go deleted file mode 100644 index b343b3a3755..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_notwindows.go +++ /dev/null @@ -1,21 +0,0 @@ -// Based on ssh/terminal: -// Copyright 2011 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build linux darwin freebsd openbsd netbsd dragonfly - -package logrus - -import ( - "syscall" - "unsafe" -) - -// IsTerminal returns true if stderr's file descriptor is a terminal. -func IsTerminal() bool { - fd := syscall.Stderr - var termios Termios - _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0) - return err == 0 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_solaris.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_solaris.go deleted file mode 100644 index 743df457f4d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_solaris.go +++ /dev/null @@ -1,15 +0,0 @@ -// +build solaris - -package logrus - -import ( - "os" - - "github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix" -) - -// IsTerminal returns true if the given file descriptor is a terminal. 
-func IsTerminal() bool { - _, err := unix.IoctlGetTermios(int(os.Stdout.Fd()), unix.TCGETA) - return err == nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_windows.go deleted file mode 100644 index 0146845d16c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/terminal_windows.go +++ /dev/null @@ -1,27 +0,0 @@ -// Based on ssh/terminal: -// Copyright 2011 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build windows - -package logrus - -import ( - "syscall" - "unsafe" -) - -var kernel32 = syscall.NewLazyDLL("kernel32.dll") - -var ( - procGetConsoleMode = kernel32.NewProc("GetConsoleMode") -) - -// IsTerminal returns true if stderr's file descriptor is a terminal. -func IsTerminal() bool { - fd := syscall.Stderr - var st uint32 - r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0) - return r != 0 && e == 0 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/text_formatter.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/text_formatter.go deleted file mode 100644 index 06ef2023374..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/text_formatter.go +++ /dev/null @@ -1,161 +0,0 @@ -package logrus - -import ( - "bytes" - "fmt" - "runtime" - "sort" - "strings" - "time" -) - -const ( - nocolor = 0 - red = 31 - green = 32 - yellow = 33 - blue = 34 - gray = 37 -) - -var ( - baseTimestamp time.Time - isTerminal bool -) - -func init() { - baseTimestamp = time.Now() - isTerminal = IsTerminal() -} - -func miniTS() int { - return int(time.Since(baseTimestamp) / time.Second) -} - -type TextFormatter struct { - // Set to true to bypass checking for a TTY before outputting colors. - ForceColors bool - - // Force disabling colors. - DisableColors bool - - // Disable timestamp logging. useful when output is redirected to logging - // system that already adds timestamps. - DisableTimestamp bool - - // Enable logging the full timestamp when a TTY is attached instead of just - // the time passed since beginning of execution. - FullTimestamp bool - - // TimestampFormat to use for display when a full timestamp is printed - TimestampFormat string - - // The fields are sorted by default for a consistent output. For applications - // that log extremely frequently and don't use the JSON formatter this may not - // be desired. 
- DisableSorting bool -} - -func (f *TextFormatter) Format(entry *Entry) ([]byte, error) { - var keys []string = make([]string, 0, len(entry.Data)) - for k := range entry.Data { - keys = append(keys, k) - } - - if !f.DisableSorting { - sort.Strings(keys) - } - - b := &bytes.Buffer{} - - prefixFieldClashes(entry.Data) - - isColorTerminal := isTerminal && (runtime.GOOS != "windows") - isColored := (f.ForceColors || isColorTerminal) && !f.DisableColors - - timestampFormat := f.TimestampFormat - if timestampFormat == "" { - timestampFormat = DefaultTimestampFormat - } - if isColored { - f.printColored(b, entry, keys, timestampFormat) - } else { - if !f.DisableTimestamp { - f.appendKeyValue(b, "time", entry.Time.Format(timestampFormat)) - } - f.appendKeyValue(b, "level", entry.Level.String()) - if entry.Message != "" { - f.appendKeyValue(b, "msg", entry.Message) - } - for _, key := range keys { - f.appendKeyValue(b, key, entry.Data[key]) - } - } - - b.WriteByte('\n') - return b.Bytes(), nil -} - -func (f *TextFormatter) printColored(b *bytes.Buffer, entry *Entry, keys []string, timestampFormat string) { - var levelColor int - switch entry.Level { - case DebugLevel: - levelColor = gray - case WarnLevel: - levelColor = yellow - case ErrorLevel, FatalLevel, PanicLevel: - levelColor = red - default: - levelColor = blue - } - - levelText := strings.ToUpper(entry.Level.String())[0:4] - - if !f.FullTimestamp { - fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%04d] %-44s ", levelColor, levelText, miniTS(), entry.Message) - } else { - fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%s] %-44s ", levelColor, levelText, entry.Time.Format(timestampFormat), entry.Message) - } - for _, k := range keys { - v := entry.Data[k] - fmt.Fprintf(b, " \x1b[%dm%s\x1b[0m=%+v", levelColor, k, v) - } -} - -func needsQuoting(text string) bool { - for _, ch := range text { - if !((ch >= 'a' && ch <= 'z') || - (ch >= 'A' && ch <= 'Z') || - (ch >= '0' && ch <= '9') || - ch == '-' || ch == '.') { - return false - } - } - return true -} - -func (f *TextFormatter) appendKeyValue(b *bytes.Buffer, key string, value interface{}) { - - b.WriteString(key) - b.WriteByte('=') - - switch value := value.(type) { - case string: - if needsQuoting(value) { - b.WriteString(value) - } else { - fmt.Fprintf(b, "%q", value) - } - case error: - errmsg := value.Error() - if needsQuoting(errmsg) { - b.WriteString(errmsg) - } else { - fmt.Fprintf(b, "%q", value) - } - default: - fmt.Fprint(b, value) - } - - b.WriteByte(' ') -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/writer.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/writer.go deleted file mode 100644 index 1e30b1c753a..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus/writer.go +++ /dev/null @@ -1,31 +0,0 @@ -package logrus - -import ( - "bufio" - "io" - "runtime" -) - -func (logger *Logger) Writer() *io.PipeWriter { - reader, writer := io.Pipe() - - go logger.writerScanner(reader) - runtime.SetFinalizer(writer, writerFinalizer) - - return writer -} - -func (logger *Logger) writerScanner(reader *io.PipeReader) { - scanner := bufio.NewScanner(reader) - for scanner.Scan() { - logger.Print(scanner.Text()) - } - if err := scanner.Err(); err != nil { - logger.Errorf("Error while reading from Writer: %s", err) - } - reader.Close() -} - -func writerFinalizer(writer *io.PipeWriter) { - writer.Close() -} diff --git 
a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/envfile.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/envfile.go deleted file mode 100644 index ba8b4f20165..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/envfile.go +++ /dev/null @@ -1,67 +0,0 @@ -package opts - -import ( - "bufio" - "fmt" - "os" - "strings" -) - -// ParseEnvFile reads a file with environment variables enumerated by lines -// -// ``Environment variable names used by the utilities in the Shell and -// Utilities volume of IEEE Std 1003.1-2001 consist solely of uppercase -// letters, digits, and the '_' (underscore) from the characters defined in -// Portable Character Set and do not begin with a digit. *But*, other -// characters may be permitted by an implementation; applications shall -// tolerate the presence of such names.'' -// -- http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap08.html -// -// As of #16585, it's up to application inside docker to validate or not -// environment variables, that's why we just strip leading whitespace and -// nothing more. -func ParseEnvFile(filename string) ([]string, error) { - fh, err := os.Open(filename) - if err != nil { - return []string{}, err - } - defer fh.Close() - - lines := []string{} - scanner := bufio.NewScanner(fh) - for scanner.Scan() { - // trim the line from all leading whitespace first - line := strings.TrimLeft(scanner.Text(), whiteSpaces) - // line is not empty, and not starting with '#' - if len(line) > 0 && !strings.HasPrefix(line, "#") { - data := strings.SplitN(line, "=", 2) - - // trim the front of a variable, but nothing else - variable := strings.TrimLeft(data[0], whiteSpaces) - if strings.ContainsAny(variable, whiteSpaces) { - return []string{}, ErrBadEnvVariable{fmt.Sprintf("variable '%s' has white spaces", variable)} - } - - if len(data) > 1 { - - // pass the value through, no trimming - lines = append(lines, fmt.Sprintf("%s=%s", variable, data[1])) - } else { - // if only a pass-through variable is given, clean it up. - lines = append(lines, fmt.Sprintf("%s=%s", strings.TrimSpace(line), os.Getenv(line))) - } - } - } - return lines, scanner.Err() -} - -var whiteSpaces = " \t" - -// ErrBadEnvVariable typed error for bad environment variable -type ErrBadEnvVariable struct { - msg string -} - -func (e ErrBadEnvVariable) Error() string { - return fmt.Sprintf("poorly formatted environment: %s", e.msg) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts.go deleted file mode 100644 index d1b69854152..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts.go +++ /dev/null @@ -1,146 +0,0 @@ -package opts - -import ( - "fmt" - "net" - "net/url" - "runtime" - "strconv" - "strings" -) - -var ( - // DefaultHTTPPort Default HTTP Port used if only the protocol is provided to -H flag e.g. docker daemon -H tcp:// - // TODO Windows. DefaultHTTPPort is only used on Windows if a -H parameter - // is not supplied. A better longer term solution would be to use a named - // pipe as the default on the Windows daemon. 
- // These are the IANA registered port numbers for use with Docker - // see http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=docker - DefaultHTTPPort = 2375 // Default HTTP Port - // DefaultTLSHTTPPort Default HTTP Port used when TLS enabled - DefaultTLSHTTPPort = 2376 // Default TLS encrypted HTTP Port - // DefaultUnixSocket Path for the unix socket. - // Docker daemon by default always listens on the default unix socket - DefaultUnixSocket = "/var/run/docker.sock" - // DefaultTCPHost constant defines the default host string used by docker on Windows - DefaultTCPHost = fmt.Sprintf("tcp://%s:%d", DefaultHTTPHost, DefaultHTTPPort) - // DefaultTLSHost constant defines the default host string used by docker for TLS sockets - DefaultTLSHost = fmt.Sprintf("tcp://%s:%d", DefaultHTTPHost, DefaultTLSHTTPPort) -) - -// ValidateHost validates that the specified string is a valid host and returns it. -func ValidateHost(val string) (string, error) { - _, err := parseDockerDaemonHost(DefaultTCPHost, DefaultTLSHost, DefaultUnixSocket, "", val) - if err != nil { - return val, err - } - // Note: unlike most flag validators, we don't return the mutated value here - // we need to know what the user entered later (using ParseHost) to adjust for tls - return val, nil -} - -// ParseHost and set defaults for a Daemon host string -func ParseHost(defaultHost, val string) (string, error) { - host, err := parseDockerDaemonHost(DefaultTCPHost, DefaultTLSHost, DefaultUnixSocket, defaultHost, val) - if err != nil { - return val, err - } - return host, nil -} - -// parseDockerDaemonHost parses the specified address and returns an address that will be used as the host. -// Depending of the address specified, will use the defaultTCPAddr or defaultUnixAddr -// defaultUnixAddr must be a absolute file path (no `unix://` prefix) -// defaultTCPAddr must be the full `tcp://host:port` form -func parseDockerDaemonHost(defaultTCPAddr, defaultTLSHost, defaultUnixAddr, defaultAddr, addr string) (string, error) { - addr = strings.TrimSpace(addr) - if addr == "" { - if defaultAddr == defaultTLSHost { - return defaultTLSHost, nil - } - if runtime.GOOS != "windows" { - return fmt.Sprintf("unix://%s", defaultUnixAddr), nil - } - return defaultTCPAddr, nil - } - addrParts := strings.Split(addr, "://") - if len(addrParts) == 1 { - addrParts = []string{"tcp", addrParts[0]} - } - - switch addrParts[0] { - case "tcp": - return parseTCPAddr(addrParts[1], defaultTCPAddr) - case "unix": - return parseUnixAddr(addrParts[1], defaultUnixAddr) - case "fd": - return addr, nil - default: - return "", fmt.Errorf("Invalid bind address format: %s", addr) - } -} - -// parseUnixAddr parses and validates that the specified address is a valid UNIX -// socket address. It returns a formatted UNIX socket address, either using the -// address parsed from addr, or the contents of defaultAddr if addr is a blank -// string. -func parseUnixAddr(addr string, defaultAddr string) (string, error) { - addr = strings.TrimPrefix(addr, "unix://") - if strings.Contains(addr, "://") { - return "", fmt.Errorf("Invalid proto, expected unix: %s", addr) - } - if addr == "" { - addr = defaultAddr - } - return fmt.Sprintf("unix://%s", addr), nil -} - -// parseTCPAddr parses and validates that the specified address is a valid TCP -// address. It returns a formatted TCP address, either using the address parsed -// from tryAddr, or the contents of defaultAddr if tryAddr is a blank string. 
-// tryAddr is expected to have already been Trim()'d -// defaultAddr must be in the full `tcp://host:port` form -func parseTCPAddr(tryAddr string, defaultAddr string) (string, error) { - if tryAddr == "" || tryAddr == "tcp://" { - return defaultAddr, nil - } - addr := strings.TrimPrefix(tryAddr, "tcp://") - if strings.Contains(addr, "://") || addr == "" { - return "", fmt.Errorf("Invalid proto, expected tcp: %s", tryAddr) - } - - defaultAddr = strings.TrimPrefix(defaultAddr, "tcp://") - defaultHost, defaultPort, err := net.SplitHostPort(defaultAddr) - if err != nil { - return "", err - } - // url.Parse fails for trailing colon on IPv6 brackets on Go 1.5, but - // not 1.4. See https://github.com/golang/go/issues/12200 and - // https://github.com/golang/go/issues/6530. - if strings.HasSuffix(addr, "]:") { - addr += defaultPort - } - - u, err := url.Parse("tcp://" + addr) - if err != nil { - return "", err - } - - host, port, err := net.SplitHostPort(u.Host) - if err != nil { - return "", fmt.Errorf("Invalid bind address format: %s", tryAddr) - } - - if host == "" { - host = defaultHost - } - if port == "" { - port = defaultPort - } - p, err := strconv.Atoi(port) - if err != nil && p == 0 { - return "", fmt.Errorf("Invalid bind address format: %s", tryAddr) - } - - return fmt.Sprintf("tcp://%s%s", net.JoinHostPort(host, port), u.Path), nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts_unix.go deleted file mode 100644 index 611407a9d94..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts_unix.go +++ /dev/null @@ -1,8 +0,0 @@ -// +build !windows - -package opts - -import "fmt" - -// DefaultHost constant defines the default host string used by docker on other hosts than Windows -var DefaultHost = fmt.Sprintf("unix://%s", DefaultUnixSocket) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts_windows.go deleted file mode 100644 index ec52e9a70ae..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/hosts_windows.go +++ /dev/null @@ -1,6 +0,0 @@ -// +build windows - -package opts - -// DefaultHost constant defines the default host string used by docker on Windows -var DefaultHost = DefaultTCPHost diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/ip.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/ip.go deleted file mode 100644 index c7b0dc99473..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/ip.go +++ /dev/null @@ -1,42 +0,0 @@ -package opts - -import ( - "fmt" - "net" -) - -// IPOpt holds an IP. It is used to store values from CLI flags. -type IPOpt struct { - *net.IP -} - -// NewIPOpt creates a new IPOpt from a reference net.IP and a -// string representation of an IP. If the string is not a valid -// IP it will fallback to the specified reference. 
-func NewIPOpt(ref *net.IP, defaultVal string) *IPOpt { - o := &IPOpt{ - IP: ref, - } - o.Set(defaultVal) - return o -} - -// Set sets an IPv4 or IPv6 address from a given string. If the given -// string is not parseable as an IP address it returns an error. -func (o *IPOpt) Set(val string) error { - ip := net.ParseIP(val) - if ip == nil { - return fmt.Errorf("%s is not an ip address", val) - } - *o.IP = ip - return nil -} - -// String returns the IP address stored in the IPOpt. If stored IP is a -// nil pointer, it returns an empty string. -func (o *IPOpt) String() string { - if *o.IP == nil { - return "" - } - return o.IP.String() -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts.go deleted file mode 100644 index b244f5a3a9c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts.go +++ /dev/null @@ -1,252 +0,0 @@ -package opts - -import ( - "fmt" - "net" - "os" - "regexp" - "strings" -) - -var ( - alphaRegexp = regexp.MustCompile(`[a-zA-Z]`) - domainRegexp = regexp.MustCompile(`^(:?(:?[a-zA-Z0-9]|(:?[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9]))(:?\.(:?[a-zA-Z0-9]|(:?[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])))*)\.?\s*$`) -) - -// ListOpts holds a list of values and a validation function. -type ListOpts struct { - values *[]string - validator ValidatorFctType -} - -// NewListOpts creates a new ListOpts with the specified validator. -func NewListOpts(validator ValidatorFctType) ListOpts { - var values []string - return *NewListOptsRef(&values, validator) -} - -// NewListOptsRef creates a new ListOpts with the specified values and validator. -func NewListOptsRef(values *[]string, validator ValidatorFctType) *ListOpts { - return &ListOpts{ - values: values, - validator: validator, - } -} - -func (opts *ListOpts) String() string { - return fmt.Sprintf("%v", []string((*opts.values))) -} - -// Set validates if needed the input value and add it to the -// internal slice. -func (opts *ListOpts) Set(value string) error { - if opts.validator != nil { - v, err := opts.validator(value) - if err != nil { - return err - } - value = v - } - (*opts.values) = append((*opts.values), value) - return nil -} - -// Delete removes the specified element from the slice. -func (opts *ListOpts) Delete(key string) { - for i, k := range *opts.values { - if k == key { - (*opts.values) = append((*opts.values)[:i], (*opts.values)[i+1:]...) - return - } - } -} - -// GetMap returns the content of values in a map in order to avoid -// duplicates. -func (opts *ListOpts) GetMap() map[string]struct{} { - ret := make(map[string]struct{}) - for _, k := range *opts.values { - ret[k] = struct{}{} - } - return ret -} - -// GetAll returns the values of slice. -func (opts *ListOpts) GetAll() []string { - return (*opts.values) -} - -// GetAllOrEmpty returns the values of the slice -// or an empty slice when there are no values. -func (opts *ListOpts) GetAllOrEmpty() []string { - v := *opts.values - if v == nil { - return make([]string, 0) - } - return v -} - -// Get checks the existence of the specified key. -func (opts *ListOpts) Get(key string) bool { - for _, k := range *opts.values { - if k == key { - return true - } - } - return false -} - -// Len returns the amount of element in the slice. 
-func (opts *ListOpts) Len() int { - return len((*opts.values)) -} - -//MapOpts holds a map of values and a validation function. -type MapOpts struct { - values map[string]string - validator ValidatorFctType -} - -// Set validates if needed the input value and add it to the -// internal map, by splitting on '='. -func (opts *MapOpts) Set(value string) error { - if opts.validator != nil { - v, err := opts.validator(value) - if err != nil { - return err - } - value = v - } - vals := strings.SplitN(value, "=", 2) - if len(vals) == 1 { - (opts.values)[vals[0]] = "" - } else { - (opts.values)[vals[0]] = vals[1] - } - return nil -} - -// GetAll returns the values of MapOpts as a map. -func (opts *MapOpts) GetAll() map[string]string { - return opts.values -} - -func (opts *MapOpts) String() string { - return fmt.Sprintf("%v", map[string]string((opts.values))) -} - -// NewMapOpts creates a new MapOpts with the specified map of values and a validator. -func NewMapOpts(values map[string]string, validator ValidatorFctType) *MapOpts { - if values == nil { - values = make(map[string]string) - } - return &MapOpts{ - values: values, - validator: validator, - } -} - -// ValidatorFctType defines a validator function that returns a validated string and/or an error. -type ValidatorFctType func(val string) (string, error) - -// ValidatorFctListType defines a validator function that returns a validated list of string and/or an error -type ValidatorFctListType func(val string) ([]string, error) - -// ValidateAttach validates that the specified string is a valid attach option. -func ValidateAttach(val string) (string, error) { - s := strings.ToLower(val) - for _, str := range []string{"stdin", "stdout", "stderr"} { - if s == str { - return s, nil - } - } - return val, fmt.Errorf("valid streams are STDIN, STDOUT and STDERR") -} - -// ValidateEnv validates an environment variable and returns it. -// If no value is specified, it returns the current value using os.Getenv. -// -// As on ParseEnvFile and related to #16585, environment variable names -// are not validate what so ever, it's up to application inside docker -// to validate them or not. -func ValidateEnv(val string) (string, error) { - arr := strings.Split(val, "=") - if len(arr) > 1 { - return val, nil - } - if !doesEnvExist(val) { - return val, nil - } - return fmt.Sprintf("%s=%s", val, os.Getenv(val)), nil -} - -// ValidateIPAddress validates an Ip address. -func ValidateIPAddress(val string) (string, error) { - var ip = net.ParseIP(strings.TrimSpace(val)) - if ip != nil { - return ip.String(), nil - } - return "", fmt.Errorf("%s is not an ip address", val) -} - -// ValidateMACAddress validates a MAC address. -func ValidateMACAddress(val string) (string, error) { - _, err := net.ParseMAC(strings.TrimSpace(val)) - if err != nil { - return "", err - } - return val, nil -} - -// ValidateDNSSearch validates domain for resolvconf search configuration. -// A zero length domain is represented by a dot (.). -func ValidateDNSSearch(val string) (string, error) { - if val = strings.Trim(val, " "); val == "." 
{ - return val, nil - } - return validateDomain(val) -} - -func validateDomain(val string) (string, error) { - if alphaRegexp.FindString(val) == "" { - return "", fmt.Errorf("%s is not a valid domain", val) - } - ns := domainRegexp.FindSubmatch([]byte(val)) - if len(ns) > 0 && len(ns[1]) < 255 { - return string(ns[1]), nil - } - return "", fmt.Errorf("%s is not a valid domain", val) -} - -// ValidateExtraHost validates that the specified string is a valid extrahost and returns it. -// ExtraHost are in the form of name:ip where the ip has to be a valid ip (ipv4 or ipv6). -func ValidateExtraHost(val string) (string, error) { - // allow for IPv6 addresses in extra hosts by only splitting on first ":" - arr := strings.SplitN(val, ":", 2) - if len(arr) != 2 || len(arr[0]) == 0 { - return "", fmt.Errorf("bad format for add-host: %q", val) - } - if _, err := ValidateIPAddress(arr[1]); err != nil { - return "", fmt.Errorf("invalid IP address in add-host: %q", arr[1]) - } - return val, nil -} - -// ValidateLabel validates that the specified string is a valid label, and returns it. -// Labels are in the form on key=value. -func ValidateLabel(val string) (string, error) { - if strings.Count(val, "=") < 1 { - return "", fmt.Errorf("bad attribute format: %s", val) - } - return val, nil -} - -func doesEnvExist(name string) bool { - for _, entry := range os.Environ() { - parts := strings.SplitN(entry, "=", 2) - if parts[0] == name { - return true - } - } - return false -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts_unix.go deleted file mode 100644 index f1ce844a8f6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts_unix.go +++ /dev/null @@ -1,6 +0,0 @@ -// +build !windows - -package opts - -// DefaultHTTPHost Default HTTP Host used if only port is provided to -H flag e.g. docker daemon -H tcp://:8080 -const DefaultHTTPHost = "localhost" diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts_windows.go deleted file mode 100644 index 2a9e2be7447..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts/opts_windows.go +++ /dev/null @@ -1,56 +0,0 @@ -package opts - -// TODO Windows. Identify bug in GOLang 1.5.1 and/or Windows Server 2016 TP4. -// @jhowardmsft, @swernli. -// -// On Windows, this mitigates a problem with the default options of running -// a docker client against a local docker daemon on TP4. -// -// What was found that if the default host is "localhost", even if the client -// (and daemon as this is local) is not physically on a network, and the DNS -// cache is flushed (ipconfig /flushdns), then the client will pause for -// exactly one second when connecting to the daemon for calls. For example -// using docker run windowsservercore cmd, the CLI will send a create followed -// by an attach. You see the delay between the attach finishing and the attach -// being seen by the daemon. -// -// Here's some daemon debug logs with additional debug spew put in. The -// AfterWriteJSON log is the very last thing the daemon does as part of the -// create call. The POST /attach is the second CLI call. Notice the second -// time gap. 
-// -// time="2015-11-06T13:38:37.259627400-08:00" level=debug msg="After createRootfs" -// time="2015-11-06T13:38:37.263626300-08:00" level=debug msg="After setHostConfig" -// time="2015-11-06T13:38:37.267631200-08:00" level=debug msg="before createContainerPl...." -// time="2015-11-06T13:38:37.271629500-08:00" level=debug msg=ToDiskLocking.... -// time="2015-11-06T13:38:37.275643200-08:00" level=debug msg="loggin event...." -// time="2015-11-06T13:38:37.277627600-08:00" level=debug msg="logged event...." -// time="2015-11-06T13:38:37.279631800-08:00" level=debug msg="In defer func" -// time="2015-11-06T13:38:37.282628100-08:00" level=debug msg="After daemon.create" -// time="2015-11-06T13:38:37.286651700-08:00" level=debug msg="return 2" -// time="2015-11-06T13:38:37.289629500-08:00" level=debug msg="Returned from daemon.ContainerCreate" -// time="2015-11-06T13:38:37.311629100-08:00" level=debug msg="After WriteJSON" -// ... 1 second gap here.... -// time="2015-11-06T13:38:38.317866200-08:00" level=debug msg="Calling POST /v1.22/containers/984758282b842f779e805664b2c95d563adc9a979c8a3973e68c807843ee4757/attach" -// time="2015-11-06T13:38:38.326882500-08:00" level=info msg="POST /v1.22/containers/984758282b842f779e805664b2c95d563adc9a979c8a3973e68c807843ee4757/attach?stderr=1&stdin=1&stdout=1&stream=1" -// -// We suspect this is either a bug introduced in GOLang 1.5.1, or that a change -// in GOLang 1.5.1 (from 1.4.3) is exposing a bug in Windows TP4. In theory, -// the Windows networking stack is supposed to resolve "localhost" internally, -// without hitting DNS, or even reading the hosts file (which is why localhost -// is commented out in the hosts file on Windows). -// -// We have validated that working around this using the actual IPv4 localhost -// address does not cause the delay. -// -// This does not occur with the docker client built with 1.4.3 on the same -// Windows TP4 build, regardless of whether the daemon is built using 1.5.1 -// or 1.4.3. It does not occur on Linux. We also verified we see the same thing -// on a cross-compiled Windows binary (from Linux). -// -// Final note: This is a mitigation, not a 'real' fix. It is still susceptible -// to the delay in TP4 if a user were to do 'docker run -H=tcp://localhost:2375...' -// explicitly. - -// DefaultHTTPHost Default HTTP Host used if only port is provided to -H flag e.g. docker daemon -H tcp://:8080 -const DefaultHTTPHost = "127.0.0.1" diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/README.md b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/README.md deleted file mode 100644 index 7307d9694f6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/README.md +++ /dev/null @@ -1 +0,0 @@ -This code provides helper functions for dealing with archive files. 
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive.go deleted file mode 100644 index ce84347d301..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive.go +++ /dev/null @@ -1,1049 +0,0 @@ -package archive - -import ( - "archive/tar" - "bufio" - "bytes" - "compress/bzip2" - "compress/gzip" - "errors" - "fmt" - "io" - "io/ioutil" - "os" - "os/exec" - "path/filepath" - "runtime" - "strings" - "syscall" - - "github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -type ( - // Archive is a type of io.ReadCloser which has two interfaces Read and Closer. - Archive io.ReadCloser - // Reader is a type of io.Reader. - Reader io.Reader - // Compression is the state represents if compressed or not. - Compression int - // TarChownOptions wraps the chown options UID and GID. - TarChownOptions struct { - UID, GID int - } - // TarOptions wraps the tar options. - TarOptions struct { - IncludeFiles []string - ExcludePatterns []string - Compression Compression - NoLchown bool - UIDMaps []idtools.IDMap - GIDMaps []idtools.IDMap - ChownOpts *TarChownOptions - IncludeSourceDir bool - // When unpacking, specifies whether overwriting a directory with a - // non-directory is allowed and vice versa. - NoOverwriteDirNonDir bool - // For each include when creating an archive, the included name will be - // replaced with the matching name from this map. - RebaseNames map[string]string - } - - // Archiver allows the reuse of most utility functions of this package - // with a pluggable Untar function. Also, to facilitate the passing of - // specific id mappings for untar, an archiver can be created with maps - // which will then be passed to Untar operations - Archiver struct { - Untar func(io.Reader, string, *TarOptions) error - UIDMaps []idtools.IDMap - GIDMaps []idtools.IDMap - } - - // breakoutError is used to differentiate errors related to breaking out - // When testing archive breakout in the unit tests, this error is expected - // in order for the test to pass. - breakoutError error -) - -var ( - // ErrNotImplemented is the error message of function not implemented. - ErrNotImplemented = errors.New("Function not implemented") - defaultArchiver = &Archiver{Untar: Untar, UIDMaps: nil, GIDMaps: nil} -) - -const ( - // HeaderSize is the size in bytes of a tar header - HeaderSize = 512 -) - -const ( - // Uncompressed represents the uncompressed. - Uncompressed Compression = iota - // Bzip2 is bzip2 compression algorithm. - Bzip2 - // Gzip is gzip compression algorithm. - Gzip - // Xz is xz compression algorithm. - Xz -) - -// IsArchive checks for the magic bytes of a tar or any supported compression -// algorithm. 
-func IsArchive(header []byte) bool { - compression := DetectCompression(header) - if compression != Uncompressed { - return true - } - r := tar.NewReader(bytes.NewBuffer(header)) - _, err := r.Next() - return err == nil -} - -// IsArchivePath checks if the (possibly compressed) file at the given path -// starts with a tar file header. -func IsArchivePath(path string) bool { - file, err := os.Open(path) - if err != nil { - return false - } - defer file.Close() - rdr, err := DecompressStream(file) - if err != nil { - return false - } - r := tar.NewReader(rdr) - _, err = r.Next() - return err == nil -} - -// DetectCompression detects the compression algorithm of the source. -func DetectCompression(source []byte) Compression { - for compression, m := range map[Compression][]byte{ - Bzip2: {0x42, 0x5A, 0x68}, - Gzip: {0x1F, 0x8B, 0x08}, - Xz: {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00}, - } { - if len(source) < len(m) { - logrus.Debugf("Len too short") - continue - } - if bytes.Compare(m, source[:len(m)]) == 0 { - return compression - } - } - return Uncompressed -} - -func xzDecompress(archive io.Reader) (io.ReadCloser, <-chan struct{}, error) { - args := []string{"xz", "-d", "-c", "-q"} - - return cmdStream(exec.Command(args[0], args[1:]...), archive) -} - -// DecompressStream decompress the archive and returns a ReaderCloser with the decompressed archive. -func DecompressStream(archive io.Reader) (io.ReadCloser, error) { - p := pools.BufioReader32KPool - buf := p.Get(archive) - bs, err := buf.Peek(10) - if err != nil && err != io.EOF { - // Note: we'll ignore any io.EOF error because there are some odd - // cases where the layer.tar file will be empty (zero bytes) and - // that results in an io.EOF from the Peek() call. So, in those - // cases we'll just treat it as a non-compressed stream and - // that means just create an empty layer. - // See Issue 18170 - return nil, err - } - - compression := DetectCompression(bs) - switch compression { - case Uncompressed: - readBufWrapper := p.NewReadCloserWrapper(buf, buf) - return readBufWrapper, nil - case Gzip: - gzReader, err := gzip.NewReader(buf) - if err != nil { - return nil, err - } - readBufWrapper := p.NewReadCloserWrapper(buf, gzReader) - return readBufWrapper, nil - case Bzip2: - bz2Reader := bzip2.NewReader(buf) - readBufWrapper := p.NewReadCloserWrapper(buf, bz2Reader) - return readBufWrapper, nil - case Xz: - xzReader, chdone, err := xzDecompress(buf) - if err != nil { - return nil, err - } - readBufWrapper := p.NewReadCloserWrapper(buf, xzReader) - return ioutils.NewReadCloserWrapper(readBufWrapper, func() error { - <-chdone - return readBufWrapper.Close() - }), nil - default: - return nil, fmt.Errorf("Unsupported compression format %s", (&compression).Extension()) - } -} - -// CompressStream compresses the dest with specified compression algorithm. 
-func CompressStream(dest io.WriteCloser, compression Compression) (io.WriteCloser, error) { - p := pools.BufioWriter32KPool - buf := p.Get(dest) - switch compression { - case Uncompressed: - writeBufWrapper := p.NewWriteCloserWrapper(buf, buf) - return writeBufWrapper, nil - case Gzip: - gzWriter := gzip.NewWriter(dest) - writeBufWrapper := p.NewWriteCloserWrapper(buf, gzWriter) - return writeBufWrapper, nil - case Bzip2, Xz: - // archive/bzip2 does not support writing, and there is no xz support at all - // However, this is not a problem as docker only currently generates gzipped tars - return nil, fmt.Errorf("Unsupported compression format %s", (&compression).Extension()) - default: - return nil, fmt.Errorf("Unsupported compression format %s", (&compression).Extension()) - } -} - -// Extension returns the extension of a file that uses the specified compression algorithm. -func (compression *Compression) Extension() string { - switch *compression { - case Uncompressed: - return "tar" - case Bzip2: - return "tar.bz2" - case Gzip: - return "tar.gz" - case Xz: - return "tar.xz" - } - return "" -} - -type tarAppender struct { - TarWriter *tar.Writer - Buffer *bufio.Writer - - // for hardlink mapping - SeenFiles map[uint64]string - UIDMaps []idtools.IDMap - GIDMaps []idtools.IDMap -} - -// canonicalTarName provides a platform-independent and consistent posix-style -//path for files and directories to be archived regardless of the platform. -func canonicalTarName(name string, isDir bool) (string, error) { - name, err := CanonicalTarNameForPath(name) - if err != nil { - return "", err - } - - // suffix with '/' for directories - if isDir && !strings.HasSuffix(name, "/") { - name += "/" - } - return name, nil -} - -func (ta *tarAppender) addTarFile(path, name string) error { - fi, err := os.Lstat(path) - if err != nil { - return err - } - - link := "" - if fi.Mode()&os.ModeSymlink != 0 { - if link, err = os.Readlink(path); err != nil { - return err - } - } - - hdr, err := tar.FileInfoHeader(fi, link) - if err != nil { - return err - } - hdr.Mode = int64(chmodTarEntry(os.FileMode(hdr.Mode))) - - name, err = canonicalTarName(name, fi.IsDir()) - if err != nil { - return fmt.Errorf("tar: cannot canonicalize path: %v", err) - } - hdr.Name = name - - inode, err := setHeaderForSpecialDevice(hdr, ta, name, fi.Sys()) - if err != nil { - return err - } - - // if it's not a directory and has more than 1 link, - // it's hardlinked, so set the type flag accordingly - if !fi.IsDir() && hasHardlinks(fi) { - // a link should have a name that it links too - // and that linked name should be first in the tar archive - if oldpath, ok := ta.SeenFiles[inode]; ok { - hdr.Typeflag = tar.TypeLink - hdr.Linkname = oldpath - hdr.Size = 0 // This Must be here for the writer math to add up! - } else { - ta.SeenFiles[inode] = name - } - } - - capability, _ := system.Lgetxattr(path, "security.capability") - if capability != nil { - hdr.Xattrs = make(map[string]string) - hdr.Xattrs["security.capability"] = string(capability) - } - - //handle re-mapping container ID mappings back to host ID mappings before - //writing tar headers/files. 
We skip whiteout files because they were written - //by the kernel and already have proper ownership relative to the host - if !strings.HasPrefix(filepath.Base(hdr.Name), WhiteoutPrefix) && (ta.UIDMaps != nil || ta.GIDMaps != nil) { - uid, gid, err := getFileUIDGID(fi.Sys()) - if err != nil { - return err - } - xUID, err := idtools.ToContainer(uid, ta.UIDMaps) - if err != nil { - return err - } - xGID, err := idtools.ToContainer(gid, ta.GIDMaps) - if err != nil { - return err - } - hdr.Uid = xUID - hdr.Gid = xGID - } - - if err := ta.TarWriter.WriteHeader(hdr); err != nil { - return err - } - - if hdr.Typeflag == tar.TypeReg { - file, err := os.Open(path) - if err != nil { - return err - } - - ta.Buffer.Reset(ta.TarWriter) - defer ta.Buffer.Reset(nil) - _, err = io.Copy(ta.Buffer, file) - file.Close() - if err != nil { - return err - } - err = ta.Buffer.Flush() - if err != nil { - return err - } - } - - return nil -} - -func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader, Lchown bool, chownOpts *TarChownOptions) error { - // hdr.Mode is in linux format, which we can use for sycalls, - // but for os.Foo() calls we need the mode converted to os.FileMode, - // so use hdrInfo.Mode() (they differ for e.g. setuid bits) - hdrInfo := hdr.FileInfo() - - switch hdr.Typeflag { - case tar.TypeDir: - // Create directory unless it exists as a directory already. - // In that case we just want to merge the two - if fi, err := os.Lstat(path); !(err == nil && fi.IsDir()) { - if err := os.Mkdir(path, hdrInfo.Mode()); err != nil { - return err - } - } - - case tar.TypeReg, tar.TypeRegA: - // Source is regular file - file, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, hdrInfo.Mode()) - if err != nil { - return err - } - if _, err := io.Copy(file, reader); err != nil { - file.Close() - return err - } - file.Close() - - case tar.TypeBlock, tar.TypeChar, tar.TypeFifo: - // Handle this is an OS-specific way - if err := handleTarTypeBlockCharFifo(hdr, path); err != nil { - return err - } - - case tar.TypeLink: - targetPath := filepath.Join(extractDir, hdr.Linkname) - // check for hardlink breakout - if !strings.HasPrefix(targetPath, extractDir) { - return breakoutError(fmt.Errorf("invalid hardlink %q -> %q", targetPath, hdr.Linkname)) - } - if err := os.Link(targetPath, path); err != nil { - return err - } - - case tar.TypeSymlink: - // path -> hdr.Linkname = targetPath - // e.g. /extractDir/path/to/symlink -> ../2/file = /extractDir/path/2/file - targetPath := filepath.Join(filepath.Dir(path), hdr.Linkname) - - // the reason we don't need to check symlinks in the path (with FollowSymlinkInScope) is because - // that symlink would first have to be created, which would be caught earlier, at this very check: - if !strings.HasPrefix(targetPath, extractDir) { - return breakoutError(fmt.Errorf("invalid symlink %q -> %q", path, hdr.Linkname)) - } - if err := os.Symlink(hdr.Linkname, path); err != nil { - return err - } - - case tar.TypeXGlobalHeader: - logrus.Debugf("PAX Global Extended Headers found and ignored") - return nil - - default: - return fmt.Errorf("Unhandled tar header type %d\n", hdr.Typeflag) - } - - // Lchown is not supported on Windows. 
- if Lchown && runtime.GOOS != "windows" { - if chownOpts == nil { - chownOpts = &TarChownOptions{UID: hdr.Uid, GID: hdr.Gid} - } - if err := os.Lchown(path, chownOpts.UID, chownOpts.GID); err != nil { - return err - } - } - - for key, value := range hdr.Xattrs { - if err := system.Lsetxattr(path, key, []byte(value), 0); err != nil { - return err - } - } - - // There is no LChmod, so ignore mode for symlink. Also, this - // must happen after chown, as that can modify the file mode - if err := handleLChmod(hdr, path, hdrInfo); err != nil { - return err - } - - aTime := hdr.AccessTime - if aTime.Before(hdr.ModTime) { - // Last access time should never be before last modified time. - aTime = hdr.ModTime - } - - // system.Chtimes doesn't support a NOFOLLOW flag atm - if hdr.Typeflag == tar.TypeLink { - if fi, err := os.Lstat(hdr.Linkname); err == nil && (fi.Mode()&os.ModeSymlink == 0) { - if err := system.Chtimes(path, aTime, hdr.ModTime); err != nil { - return err - } - } - } else if hdr.Typeflag != tar.TypeSymlink { - if err := system.Chtimes(path, aTime, hdr.ModTime); err != nil { - return err - } - } else { - ts := []syscall.Timespec{timeToTimespec(aTime), timeToTimespec(hdr.ModTime)} - if err := system.LUtimesNano(path, ts); err != nil && err != system.ErrNotSupportedPlatform { - return err - } - } - return nil -} - -// Tar creates an archive from the directory at `path`, and returns it as a -// stream of bytes. -func Tar(path string, compression Compression) (io.ReadCloser, error) { - return TarWithOptions(path, &TarOptions{Compression: compression}) -} - -// TarWithOptions creates an archive from the directory at `path`, only including files whose relative -// paths are included in `options.IncludeFiles` (if non-nil) or not in `options.ExcludePatterns`. -func TarWithOptions(srcPath string, options *TarOptions) (io.ReadCloser, error) { - - // Fix the source path to work with long path names. This is a no-op - // on platforms other than Windows. - srcPath = fixVolumePathPrefix(srcPath) - - patterns, patDirs, exceptions, err := fileutils.CleanPatterns(options.ExcludePatterns) - - if err != nil { - return nil, err - } - - pipeReader, pipeWriter := io.Pipe() - - compressWriter, err := CompressStream(pipeWriter, options.Compression) - if err != nil { - return nil, err - } - - go func() { - ta := &tarAppender{ - TarWriter: tar.NewWriter(compressWriter), - Buffer: pools.BufioWriter32KPool.Get(nil), - SeenFiles: make(map[uint64]string), - UIDMaps: options.UIDMaps, - GIDMaps: options.GIDMaps, - } - - defer func() { - // Make sure to check the error on Close. - if err := ta.TarWriter.Close(); err != nil { - logrus.Debugf("Can't close tar writer: %s", err) - } - if err := compressWriter.Close(); err != nil { - logrus.Debugf("Can't close compress writer: %s", err) - } - if err := pipeWriter.Close(); err != nil { - logrus.Debugf("Can't close pipe writer: %s", err) - } - }() - - // this buffer is needed for the duration of this piped stream - defer pools.BufioWriter32KPool.Put(ta.Buffer) - - // In general we log errors here but ignore them because - // during e.g. a diff operation the container can continue - // mutating the filesystem and we can see transient errors - // from this - - stat, err := os.Lstat(srcPath) - if err != nil { - return - } - - if !stat.IsDir() { - // We can't later join a non-dir with any includes because the - // 'walk' will error if "file/." is stat-ed and "file" is not a - // directory. So, we must split the source path and use the - // basename as the include. 
- if len(options.IncludeFiles) > 0 { - logrus.Warn("Tar: Can't archive a file with includes") - } - - dir, base := SplitPathDirEntry(srcPath) - srcPath = dir - options.IncludeFiles = []string{base} - } - - if len(options.IncludeFiles) == 0 { - options.IncludeFiles = []string{"."} - } - - seen := make(map[string]bool) - - for _, include := range options.IncludeFiles { - rebaseName := options.RebaseNames[include] - - walkRoot := getWalkRoot(srcPath, include) - filepath.Walk(walkRoot, func(filePath string, f os.FileInfo, err error) error { - if err != nil { - logrus.Debugf("Tar: Can't stat file %s to tar: %s", srcPath, err) - return nil - } - - relFilePath, err := filepath.Rel(srcPath, filePath) - if err != nil || (!options.IncludeSourceDir && relFilePath == "." && f.IsDir()) { - // Error getting relative path OR we are looking - // at the source directory path. Skip in both situations. - return nil - } - - if options.IncludeSourceDir && include == "." && relFilePath != "." { - relFilePath = strings.Join([]string{".", relFilePath}, string(filepath.Separator)) - } - - skip := false - - // If "include" is an exact match for the current file - // then even if there's an "excludePatterns" pattern that - // matches it, don't skip it. IOW, assume an explicit 'include' - // is asking for that file no matter what - which is true - // for some files, like .dockerignore and Dockerfile (sometimes) - if include != relFilePath { - skip, err = fileutils.OptimizedMatches(relFilePath, patterns, patDirs) - if err != nil { - logrus.Debugf("Error matching %s: %v", relFilePath, err) - return err - } - } - - if skip { - if !exceptions && f.IsDir() { - return filepath.SkipDir - } - return nil - } - - if seen[relFilePath] { - return nil - } - seen[relFilePath] = true - - // Rename the base resource. - if rebaseName != "" { - var replacement string - if rebaseName != string(filepath.Separator) { - // Special case the root directory to replace with an - // empty string instead so that we don't end up with - // double slashes in the paths. - replacement = rebaseName - } - - relFilePath = strings.Replace(relFilePath, include, replacement, 1) - } - - if err := ta.addTarFile(filePath, relFilePath); err != nil { - logrus.Debugf("Can't add file %s to tar: %s", filePath, err) - } - return nil - }) - } - }() - - return pipeReader, nil -} - -// Unpack unpacks the decompressedArchive to dest with options. -func Unpack(decompressedArchive io.Reader, dest string, options *TarOptions) error { - tr := tar.NewReader(decompressedArchive) - trBuf := pools.BufioReader32KPool.Get(nil) - defer pools.BufioReader32KPool.Put(trBuf) - - var dirs []*tar.Header - remappedRootUID, remappedRootGID, err := idtools.GetRootUIDGID(options.UIDMaps, options.GIDMaps) - if err != nil { - return err - } - - // Iterate through the files in the archive. -loop: - for { - hdr, err := tr.Next() - if err == io.EOF { - // end of tar archive - break - } - if err != nil { - return err - } - - // Normalize name, for safety and for a simple is-root check - // This keeps "../" as-is, but normalizes "/../" to "/". Or Windows: - // This keeps "..\" as-is, but normalizes "\..\" to "\". - hdr.Name = filepath.Clean(hdr.Name) - - for _, exclude := range options.ExcludePatterns { - if strings.HasPrefix(hdr.Name, exclude) { - continue loop - } - } - - // After calling filepath.Clean(hdr.Name) above, hdr.Name will now be in - // the filepath format for the OS on which the daemon is running. Hence - // the check for a slash-suffix MUST be done in an OS-agnostic way. 
- if !strings.HasSuffix(hdr.Name, string(os.PathSeparator)) { - // Not the root directory, ensure that the parent directory exists - parent := filepath.Dir(hdr.Name) - parentPath := filepath.Join(dest, parent) - if _, err := os.Lstat(parentPath); err != nil && os.IsNotExist(err) { - err = system.MkdirAll(parentPath, 0777) - if err != nil { - return err - } - } - } - - path := filepath.Join(dest, hdr.Name) - rel, err := filepath.Rel(dest, path) - if err != nil { - return err - } - if strings.HasPrefix(rel, ".."+string(os.PathSeparator)) { - return breakoutError(fmt.Errorf("%q is outside of %q", hdr.Name, dest)) - } - - // If path exits we almost always just want to remove and replace it - // The only exception is when it is a directory *and* the file from - // the layer is also a directory. Then we want to merge them (i.e. - // just apply the metadata from the layer). - if fi, err := os.Lstat(path); err == nil { - if options.NoOverwriteDirNonDir && fi.IsDir() && hdr.Typeflag != tar.TypeDir { - // If NoOverwriteDirNonDir is true then we cannot replace - // an existing directory with a non-directory from the archive. - return fmt.Errorf("cannot overwrite directory %q with non-directory %q", path, dest) - } - - if options.NoOverwriteDirNonDir && !fi.IsDir() && hdr.Typeflag == tar.TypeDir { - // If NoOverwriteDirNonDir is true then we cannot replace - // an existing non-directory with a directory from the archive. - return fmt.Errorf("cannot overwrite non-directory %q with directory %q", path, dest) - } - - if fi.IsDir() && hdr.Name == "." { - continue - } - - if !(fi.IsDir() && hdr.Typeflag == tar.TypeDir) { - if err := os.RemoveAll(path); err != nil { - return err - } - } - } - trBuf.Reset(tr) - - // if the options contain a uid & gid maps, convert header uid/gid - // entries using the maps such that lchown sets the proper mapped - // uid/gid after writing the file. We only perform this mapping if - // the file isn't already owned by the remapped root UID or GID, as - // that specific uid/gid has no mapping from container -> host, and - // those files already have the proper ownership for inside the - // container. - if hdr.Uid != remappedRootUID { - xUID, err := idtools.ToHost(hdr.Uid, options.UIDMaps) - if err != nil { - return err - } - hdr.Uid = xUID - } - if hdr.Gid != remappedRootGID { - xGID, err := idtools.ToHost(hdr.Gid, options.GIDMaps) - if err != nil { - return err - } - hdr.Gid = xGID - } - - if err := createTarFile(path, dest, hdr, trBuf, !options.NoLchown, options.ChownOpts); err != nil { - return err - } - - // Directory mtimes must be handled at the end to avoid further - // file creation in them to modify the directory mtime - if hdr.Typeflag == tar.TypeDir { - dirs = append(dirs, hdr) - } - } - - for _, hdr := range dirs { - path := filepath.Join(dest, hdr.Name) - - if err := system.Chtimes(path, hdr.AccessTime, hdr.ModTime); err != nil { - return err - } - } - return nil -} - -// Untar reads a stream of bytes from `archive`, parses it as a tar archive, -// and unpacks it into the directory at `dest`. -// The archive may be compressed with one of the following algorithms: -// identity (uncompressed), gzip, bzip2, xz. -// FIXME: specify behavior when target path exists vs. doesn't exist. -func Untar(tarArchive io.Reader, dest string, options *TarOptions) error { - return untarHandler(tarArchive, dest, options, true) -} - -// UntarUncompressed reads a stream of bytes from `archive`, parses it as a tar archive, -// and unpacks it into the directory at `dest`. 
-// The archive must be an uncompressed stream. -func UntarUncompressed(tarArchive io.Reader, dest string, options *TarOptions) error { - return untarHandler(tarArchive, dest, options, false) -} - -// Handler for teasing out the automatic decompression -func untarHandler(tarArchive io.Reader, dest string, options *TarOptions, decompress bool) error { - if tarArchive == nil { - return fmt.Errorf("Empty archive") - } - dest = filepath.Clean(dest) - if options == nil { - options = &TarOptions{} - } - if options.ExcludePatterns == nil { - options.ExcludePatterns = []string{} - } - - r := tarArchive - if decompress { - decompressedArchive, err := DecompressStream(tarArchive) - if err != nil { - return err - } - defer decompressedArchive.Close() - r = decompressedArchive - } - - return Unpack(r, dest, options) -} - -// TarUntar is a convenience function which calls Tar and Untar, with the output of one piped into the other. -// If either Tar or Untar fails, TarUntar aborts and returns the error. -func (archiver *Archiver) TarUntar(src, dst string) error { - logrus.Debugf("TarUntar(%s %s)", src, dst) - archive, err := TarWithOptions(src, &TarOptions{Compression: Uncompressed}) - if err != nil { - return err - } - defer archive.Close() - - var options *TarOptions - if archiver.UIDMaps != nil || archiver.GIDMaps != nil { - options = &TarOptions{ - UIDMaps: archiver.UIDMaps, - GIDMaps: archiver.GIDMaps, - } - } - return archiver.Untar(archive, dst, options) -} - -// TarUntar is a convenience function which calls Tar and Untar, with the output of one piped into the other. -// If either Tar or Untar fails, TarUntar aborts and returns the error. -func TarUntar(src, dst string) error { - return defaultArchiver.TarUntar(src, dst) -} - -// UntarPath untar a file from path to a destination, src is the source tar file path. -func (archiver *Archiver) UntarPath(src, dst string) error { - archive, err := os.Open(src) - if err != nil { - return err - } - defer archive.Close() - var options *TarOptions - if archiver.UIDMaps != nil || archiver.GIDMaps != nil { - options = &TarOptions{ - UIDMaps: archiver.UIDMaps, - GIDMaps: archiver.GIDMaps, - } - } - return archiver.Untar(archive, dst, options) -} - -// UntarPath is a convenience function which looks for an archive -// at filesystem path `src`, and unpacks it at `dst`. -func UntarPath(src, dst string) error { - return defaultArchiver.UntarPath(src, dst) -} - -// CopyWithTar creates a tar archive of filesystem path `src`, and -// unpacks it at filesystem path `dst`. -// The archive is streamed directly with fixed buffering and no -// intermediary disk IO. -func (archiver *Archiver) CopyWithTar(src, dst string) error { - srcSt, err := os.Stat(src) - if err != nil { - return err - } - if !srcSt.IsDir() { - return archiver.CopyFileWithTar(src, dst) - } - // Create dst, copy src's content into it - logrus.Debugf("Creating dest directory: %s", dst) - if err := system.MkdirAll(dst, 0755); err != nil { - return err - } - logrus.Debugf("Calling TarUntar(%s, %s)", src, dst) - return archiver.TarUntar(src, dst) -} - -// CopyWithTar creates a tar archive of filesystem path `src`, and -// unpacks it at filesystem path `dst`. -// The archive is streamed directly with fixed buffering and no -// intermediary disk IO. -func CopyWithTar(src, dst string) error { - return defaultArchiver.CopyWithTar(src, dst) -} - -// CopyFileWithTar emulates the behavior of the 'cp' command-line -// for a single file. 
It copies a regular file from path `src` to -// path `dst`, and preserves all its metadata. -func (archiver *Archiver) CopyFileWithTar(src, dst string) (err error) { - logrus.Debugf("CopyFileWithTar(%s, %s)", src, dst) - srcSt, err := os.Stat(src) - if err != nil { - return err - } - - if srcSt.IsDir() { - return fmt.Errorf("Can't copy a directory") - } - - // Clean up the trailing slash. This must be done in an operating - // system specific manner. - if dst[len(dst)-1] == os.PathSeparator { - dst = filepath.Join(dst, filepath.Base(src)) - } - // Create the holding directory if necessary - if err := system.MkdirAll(filepath.Dir(dst), 0700); err != nil { - return err - } - - r, w := io.Pipe() - errC := promise.Go(func() error { - defer w.Close() - - srcF, err := os.Open(src) - if err != nil { - return err - } - defer srcF.Close() - - hdr, err := tar.FileInfoHeader(srcSt, "") - if err != nil { - return err - } - hdr.Name = filepath.Base(dst) - hdr.Mode = int64(chmodTarEntry(os.FileMode(hdr.Mode))) - - remappedRootUID, remappedRootGID, err := idtools.GetRootUIDGID(archiver.UIDMaps, archiver.GIDMaps) - if err != nil { - return err - } - - // only perform mapping if the file being copied isn't already owned by the - // uid or gid of the remapped root in the container - if remappedRootUID != hdr.Uid { - xUID, err := idtools.ToHost(hdr.Uid, archiver.UIDMaps) - if err != nil { - return err - } - hdr.Uid = xUID - } - if remappedRootGID != hdr.Gid { - xGID, err := idtools.ToHost(hdr.Gid, archiver.GIDMaps) - if err != nil { - return err - } - hdr.Gid = xGID - } - - tw := tar.NewWriter(w) - defer tw.Close() - if err := tw.WriteHeader(hdr); err != nil { - return err - } - if _, err := io.Copy(tw, srcF); err != nil { - return err - } - return nil - }) - defer func() { - if er := <-errC; err != nil { - err = er - } - }() - - err = archiver.Untar(r, filepath.Dir(dst), nil) - if err != nil { - r.CloseWithError(err) - } - return err -} - -// CopyFileWithTar emulates the behavior of the 'cp' command-line -// for a single file. It copies a regular file from path `src` to -// path `dst`, and preserves all its metadata. -// -// Destination handling is in an operating specific manner depending -// where the daemon is running. If `dst` ends with a trailing slash -// the final destination path will be `dst/base(src)` (Linux) or -// `dst\base(src)` (Windows). -func CopyFileWithTar(src, dst string) (err error) { - return defaultArchiver.CopyFileWithTar(src, dst) -} - -// cmdStream executes a command, and returns its stdout as a stream. -// If the command fails to run or doesn't complete successfully, an error -// will be returned, including anything written on stderr. -func cmdStream(cmd *exec.Cmd, input io.Reader) (io.ReadCloser, <-chan struct{}, error) { - chdone := make(chan struct{}) - cmd.Stdin = input - pipeR, pipeW := io.Pipe() - cmd.Stdout = pipeW - var errBuf bytes.Buffer - cmd.Stderr = &errBuf - - // Run the command and return the pipe - if err := cmd.Start(); err != nil { - return nil, nil, err - } - - // Copy stdout to the returned pipe - go func() { - if err := cmd.Wait(); err != nil { - pipeW.CloseWithError(fmt.Errorf("%s: %s", err, errBuf.String())) - } else { - pipeW.Close() - } - close(chdone) - }() - - return pipeR, chdone, nil -} - -// NewTempArchive reads the content of src into a temporary file, and returns the contents -// of that file as an archive. The archive can only be read once - as soon as reading completes, -// the file will be deleted. 
-func NewTempArchive(src Archive, dir string) (*TempArchive, error) { - f, err := ioutil.TempFile(dir, "") - if err != nil { - return nil, err - } - if _, err := io.Copy(f, src); err != nil { - return nil, err - } - if _, err := f.Seek(0, 0); err != nil { - return nil, err - } - st, err := f.Stat() - if err != nil { - return nil, err - } - size := st.Size() - return &TempArchive{File: f, Size: size}, nil -} - -// TempArchive is a temporary archive. The archive can only be read once - as soon as reading completes, -// the file will be deleted. -type TempArchive struct { - *os.File - Size int64 // Pre-computed from Stat().Size() as a convenience - read int64 - closed bool -} - -// Close closes the underlying file if it's still open, or does a no-op -// to allow callers to try to close the TempArchive multiple times safely. -func (archive *TempArchive) Close() error { - if archive.closed { - return nil - } - - archive.closed = true - - return archive.File.Close() -} - -func (archive *TempArchive) Read(data []byte) (int, error) { - n, err := archive.File.Read(data) - archive.read += int64(n) - if err != nil || archive.read == archive.Size { - archive.Close() - os.Remove(archive.File.Name()) - } - return n, err -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive_unix.go deleted file mode 100644 index 86c68882537..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive_unix.go +++ /dev/null @@ -1,112 +0,0 @@ -// +build !windows - -package archive - -import ( - "archive/tar" - "errors" - "os" - "path/filepath" - "syscall" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -// fixVolumePathPrefix does platform specific processing to ensure that if -// the path being passed in is not in a volume path format, convert it to one. -func fixVolumePathPrefix(srcPath string) string { - return srcPath -} - -// getWalkRoot calculates the root path when performing a TarWithOptions. -// We use a separate function as this is platform specific. On Linux, we -// can't use filepath.Join(srcPath,include) because this will clean away -// a trailing "." or "/" which may be important. -func getWalkRoot(srcPath string, include string) string { - return srcPath + string(filepath.Separator) + include -} - -// CanonicalTarNameForPath returns platform-specific filepath -// to canonical posix-style path for tar archival. p is relative -// path. -func CanonicalTarNameForPath(p string) (string, error) { - return p, nil // already unix-style -} - -// chmodTarEntry is used to adjust the file permissions used in tar header based -// on the platform the archival is done. 
- -func chmodTarEntry(perm os.FileMode) os.FileMode { - return perm // noop for unix as golang APIs provide perm bits correctly -} - -func setHeaderForSpecialDevice(hdr *tar.Header, ta *tarAppender, name string, stat interface{}) (inode uint64, err error) { - s, ok := stat.(*syscall.Stat_t) - - if !ok { - err = errors.New("cannot convert stat value to syscall.Stat_t") - return - } - - inode = uint64(s.Ino) - - // Currently go does not fill in the major/minors - if s.Mode&syscall.S_IFBLK != 0 || - s.Mode&syscall.S_IFCHR != 0 { - hdr.Devmajor = int64(major(uint64(s.Rdev))) - hdr.Devminor = int64(minor(uint64(s.Rdev))) - } - - return -} - -func getFileUIDGID(stat interface{}) (int, int, error) { - s, ok := stat.(*syscall.Stat_t) - - if !ok { - return -1, -1, errors.New("cannot convert stat value to syscall.Stat_t") - } - return int(s.Uid), int(s.Gid), nil -} - -func major(device uint64) uint64 { - return (device >> 8) & 0xfff -} - -func minor(device uint64) uint64 { - return (device & 0xff) | ((device >> 12) & 0xfff00) -} - -// handleTarTypeBlockCharFifo is an OS-specific helper function used by -// createTarFile to handle the following types of header: Block; Char; Fifo -func handleTarTypeBlockCharFifo(hdr *tar.Header, path string) error { - mode := uint32(hdr.Mode & 07777) - switch hdr.Typeflag { - case tar.TypeBlock: - mode |= syscall.S_IFBLK - case tar.TypeChar: - mode |= syscall.S_IFCHR - case tar.TypeFifo: - mode |= syscall.S_IFIFO - } - - if err := system.Mknod(path, mode, int(system.Mkdev(hdr.Devmajor, hdr.Devminor))); err != nil { - return err - } - return nil -} - -func handleLChmod(hdr *tar.Header, path string, hdrInfo os.FileInfo) error { - if hdr.Typeflag == tar.TypeLink { - if fi, err := os.Lstat(hdr.Linkname); err == nil && (fi.Mode()&os.ModeSymlink == 0) { - if err := os.Chmod(path, hdrInfo.Mode()); err != nil { - return err - } - } - } else if hdr.Typeflag != tar.TypeSymlink { - if err := os.Chmod(path, hdrInfo.Mode()); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive_windows.go deleted file mode 100644 index 23d60aa41af..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/archive_windows.go +++ /dev/null @@ -1,70 +0,0 @@ -// +build windows - -package archive - -import ( - "archive/tar" - "fmt" - "os" - "path/filepath" - "strings" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/longpath" -) - -// fixVolumePathPrefix does platform specific processing to ensure that if -// the path being passed in is not in a volume path format, convert it to one. -func fixVolumePathPrefix(srcPath string) string { - return longpath.AddPrefix(srcPath) -} - -// getWalkRoot calculates the root path when performing a TarWithOptions. -// We use a separate function as this is platform specific. -func getWalkRoot(srcPath string, include string) string { - return filepath.Join(srcPath, include) -} - -// CanonicalTarNameForPath returns platform-specific filepath -// to canonical posix-style path for tar archival. p is relative -// path. -func CanonicalTarNameForPath(p string) (string, error) { - // windows: convert windows style relative path with backslashes - // into forward slashes. 
Since windows does not allow '/' or '\' - // in file names, it is mostly safe to replace however we must - // check just in case - if strings.Contains(p, "/") { - return "", fmt.Errorf("Windows path contains forward slash: %s", p) - } - return strings.Replace(p, string(os.PathSeparator), "/", -1), nil - -} - -// chmodTarEntry is used to adjust the file permissions used in tar header based -// on the platform the archival is done. -func chmodTarEntry(perm os.FileMode) os.FileMode { - perm &= 0755 - // Add the x bit: make everything +x from windows - perm |= 0111 - - return perm -} - -func setHeaderForSpecialDevice(hdr *tar.Header, ta *tarAppender, name string, stat interface{}) (inode uint64, err error) { - // do nothing. no notion of Rdev, Inode, Nlink in stat on Windows - return -} - -// handleTarTypeBlockCharFifo is an OS-specific helper function used by -// createTarFile to handle the following types of header: Block; Char; Fifo -func handleTarTypeBlockCharFifo(hdr *tar.Header, path string) error { - return nil -} - -func handleLChmod(hdr *tar.Header, path string, hdrInfo os.FileInfo) error { - return nil -} - -func getFileUIDGID(stat interface{}) (int, int, error) { - // no notion of file ownership mapping yet on Windows - return 0, 0, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes.go deleted file mode 100644 index a2a1dc36e40..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes.go +++ /dev/null @@ -1,416 +0,0 @@ -package archive - -import ( - "archive/tar" - "bytes" - "fmt" - "io" - "io/ioutil" - "os" - "path/filepath" - "sort" - "strings" - "syscall" - "time" - - "github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -// ChangeType represents the change type. -type ChangeType int - -const ( - // ChangeModify represents the modify operation. - ChangeModify = iota - // ChangeAdd represents the add operation. - ChangeAdd - // ChangeDelete represents the delete operation. - ChangeDelete -) - -func (c ChangeType) String() string { - switch c { - case ChangeModify: - return "C" - case ChangeAdd: - return "A" - case ChangeDelete: - return "D" - } - return "" -} - -// Change represents a change, it wraps the change type and path. -// It describes changes of the files in the path respect to the -// parent layers. The change could be modify, add, delete. -// This is used for layer diff. 
-type Change struct { - Path string - Kind ChangeType -} - -func (change *Change) String() string { - return fmt.Sprintf("%s %s", change.Kind, change.Path) -} - -// for sort.Sort -type changesByPath []Change - -func (c changesByPath) Less(i, j int) bool { return c[i].Path < c[j].Path } -func (c changesByPath) Len() int { return len(c) } -func (c changesByPath) Swap(i, j int) { c[j], c[i] = c[i], c[j] } - -// Gnu tar and the go tar writer don't have sub-second mtime -// precision, which is problematic when we apply changes via tar -// files, we handle this by comparing for exact times, *or* same -// second count and either a or b having exactly 0 nanoseconds -func sameFsTime(a, b time.Time) bool { - return a == b || - (a.Unix() == b.Unix() && - (a.Nanosecond() == 0 || b.Nanosecond() == 0)) -} - -func sameFsTimeSpec(a, b syscall.Timespec) bool { - return a.Sec == b.Sec && - (a.Nsec == b.Nsec || a.Nsec == 0 || b.Nsec == 0) -} - -// Changes walks the path rw and determines changes for the files in the path, -// with respect to the parent layers -func Changes(layers []string, rw string) ([]Change, error) { - var ( - changes []Change - changedDirs = make(map[string]struct{}) - ) - - err := filepath.Walk(rw, func(path string, f os.FileInfo, err error) error { - if err != nil { - return err - } - - // Rebase path - path, err = filepath.Rel(rw, path) - if err != nil { - return err - } - - // As this runs on the daemon side, file paths are OS specific. - path = filepath.Join(string(os.PathSeparator), path) - - // Skip root - if path == string(os.PathSeparator) { - return nil - } - - // Skip AUFS metadata - if matched, err := filepath.Match(string(os.PathSeparator)+WhiteoutMetaPrefix+"*", path); err != nil || matched { - return err - } - - change := Change{ - Path: path, - } - - // Find out what kind of modification happened - file := filepath.Base(path) - // If there is a whiteout, then the file was removed - if strings.HasPrefix(file, WhiteoutPrefix) { - originalFile := file[len(WhiteoutPrefix):] - change.Path = filepath.Join(filepath.Dir(path), originalFile) - change.Kind = ChangeDelete - } else { - // Otherwise, the file was added - change.Kind = ChangeAdd - - // ...Unless it already existed in a top layer, in which case, it's a modification - for _, layer := range layers { - stat, err := os.Stat(filepath.Join(layer, path)) - if err != nil && !os.IsNotExist(err) { - return err - } - if err == nil { - // The file existed in the top layer, so that's a modification - - // However, if it's a directory, maybe it wasn't actually modified. - // If you modify /foo/bar/baz, then /foo will be part of the changed files only because it's the parent of bar - if stat.IsDir() && f.IsDir() { - if f.Size() == stat.Size() && f.Mode() == stat.Mode() && sameFsTime(f.ModTime(), stat.ModTime()) { - // Both directories are the same, don't record the change - return nil - } - } - change.Kind = ChangeModify - break - } - } - } - - // If /foo/bar/file.txt is modified, then /foo/bar must be part of the changed files. - // This block is here to ensure the change is recorded even if the - // modify time, mode and size of the parent directory in the rw and ro layers are all equal. - // Check https://github.com/docker/docker/pull/13590 for details. 
- if f.IsDir() { - changedDirs[path] = struct{}{} - } - if change.Kind == ChangeAdd || change.Kind == ChangeDelete { - parent := filepath.Dir(path) - if _, ok := changedDirs[parent]; !ok && parent != "/" { - changes = append(changes, Change{Path: parent, Kind: ChangeModify}) - changedDirs[parent] = struct{}{} - } - } - - // Record change - changes = append(changes, change) - return nil - }) - if err != nil && !os.IsNotExist(err) { - return nil, err - } - return changes, nil -} - -// FileInfo describes the information of a file. -type FileInfo struct { - parent *FileInfo - name string - stat *system.StatT - children map[string]*FileInfo - capability []byte - added bool -} - -// LookUp looks up the file information of a file. -func (info *FileInfo) LookUp(path string) *FileInfo { - // As this runs on the daemon side, file paths are OS specific. - parent := info - if path == string(os.PathSeparator) { - return info - } - - pathElements := strings.Split(path, string(os.PathSeparator)) - for _, elem := range pathElements { - if elem != "" { - child := parent.children[elem] - if child == nil { - return nil - } - parent = child - } - } - return parent -} - -func (info *FileInfo) path() string { - if info.parent == nil { - // As this runs on the daemon side, file paths are OS specific. - return string(os.PathSeparator) - } - return filepath.Join(info.parent.path(), info.name) -} - -func (info *FileInfo) addChanges(oldInfo *FileInfo, changes *[]Change) { - - sizeAtEntry := len(*changes) - - if oldInfo == nil { - // add - change := Change{ - Path: info.path(), - Kind: ChangeAdd, - } - *changes = append(*changes, change) - info.added = true - } - - // We make a copy so we can modify it to detect additions - // also, we only recurse on the old dir if the new info is a directory - // otherwise any previous delete/change is considered recursive - oldChildren := make(map[string]*FileInfo) - if oldInfo != nil && info.isDir() { - for k, v := range oldInfo.children { - oldChildren[k] = v - } - } - - for name, newChild := range info.children { - oldChild, _ := oldChildren[name] - if oldChild != nil { - // change? - oldStat := oldChild.stat - newStat := newChild.stat - // Note: We can't compare inode or ctime or blocksize here, because these change - // when copying a file into a container. However, that is not generally a problem - // because any content change will change mtime, and any status change should - // be visible when actually comparing the stat fields. The only time this - // breaks down is if some code intentionally hides a change by setting - // back mtime - if statDifferent(oldStat, newStat) || - bytes.Compare(oldChild.capability, newChild.capability) != 0 { - change := Change{ - Path: newChild.path(), - Kind: ChangeModify, - } - *changes = append(*changes, change) - newChild.added = true - } - - // Remove from copy so we can detect deletions - delete(oldChildren, name) - } - - newChild.addChanges(oldChild, changes) - } - for _, oldChild := range oldChildren { - // delete - change := Change{ - Path: oldChild.path(), - Kind: ChangeDelete, - } - *changes = append(*changes, change) - } - - // If there were changes inside this directory, we need to add it, even if the directory - // itself wasn't changed. This is needed to properly save and restore filesystem permissions. - // As this runs on the daemon side, file paths are OS specific. 
- if len(*changes) > sizeAtEntry && info.isDir() && !info.added && info.path() != string(os.PathSeparator) { - change := Change{ - Path: info.path(), - Kind: ChangeModify, - } - // Let's insert the directory entry before the recently added entries located inside this dir - *changes = append(*changes, change) // just to resize the slice, will be overwritten - copy((*changes)[sizeAtEntry+1:], (*changes)[sizeAtEntry:]) - (*changes)[sizeAtEntry] = change - } - -} - -// Changes add changes to file information. -func (info *FileInfo) Changes(oldInfo *FileInfo) []Change { - var changes []Change - - info.addChanges(oldInfo, &changes) - - return changes -} - -func newRootFileInfo() *FileInfo { - // As this runs on the daemon side, file paths are OS specific. - root := &FileInfo{ - name: string(os.PathSeparator), - children: make(map[string]*FileInfo), - } - return root -} - -// ChangesDirs compares two directories and generates an array of Change objects describing the changes. -// If oldDir is "", then all files in newDir will be Add-Changes. -func ChangesDirs(newDir, oldDir string) ([]Change, error) { - var ( - oldRoot, newRoot *FileInfo - ) - if oldDir == "" { - emptyDir, err := ioutil.TempDir("", "empty") - if err != nil { - return nil, err - } - defer os.Remove(emptyDir) - oldDir = emptyDir - } - oldRoot, newRoot, err := collectFileInfoForChanges(oldDir, newDir) - if err != nil { - return nil, err - } - - return newRoot.Changes(oldRoot), nil -} - -// ChangesSize calculates the size in bytes of the provided changes, based on newDir. -func ChangesSize(newDir string, changes []Change) int64 { - var ( - size int64 - sf = make(map[uint64]struct{}) - ) - for _, change := range changes { - if change.Kind == ChangeModify || change.Kind == ChangeAdd { - file := filepath.Join(newDir, change.Path) - fileInfo, err := os.Lstat(file) - if err != nil { - logrus.Errorf("Can not stat %q: %s", file, err) - continue - } - - if fileInfo != nil && !fileInfo.IsDir() { - if hasHardlinks(fileInfo) { - inode := getIno(fileInfo) - if _, ok := sf[inode]; !ok { - size += fileInfo.Size() - sf[inode] = struct{}{} - } - } else { - size += fileInfo.Size() - } - } - } - } - return size -} - -// ExportChanges produces an Archive from the provided changes, relative to dir. -func ExportChanges(dir string, changes []Change, uidMaps, gidMaps []idtools.IDMap) (Archive, error) { - reader, writer := io.Pipe() - go func() { - ta := &tarAppender{ - TarWriter: tar.NewWriter(writer), - Buffer: pools.BufioWriter32KPool.Get(nil), - SeenFiles: make(map[uint64]string), - UIDMaps: uidMaps, - GIDMaps: gidMaps, - } - // this buffer is needed for the duration of this piped stream - defer pools.BufioWriter32KPool.Put(ta.Buffer) - - sort.Sort(changesByPath(changes)) - - // In general we log errors here but ignore them because - // during e.g. 
a diff operation the container can continue - // mutating the filesystem and we can see transient errors - // from this - for _, change := range changes { - if change.Kind == ChangeDelete { - whiteOutDir := filepath.Dir(change.Path) - whiteOutBase := filepath.Base(change.Path) - whiteOut := filepath.Join(whiteOutDir, WhiteoutPrefix+whiteOutBase) - timestamp := time.Now() - hdr := &tar.Header{ - Name: whiteOut[1:], - Size: 0, - ModTime: timestamp, - AccessTime: timestamp, - ChangeTime: timestamp, - } - if err := ta.TarWriter.WriteHeader(hdr); err != nil { - logrus.Debugf("Can't write whiteout header: %s", err) - } - } else { - path := filepath.Join(dir, change.Path) - if err := ta.addTarFile(path, change.Path[1:]); err != nil { - logrus.Debugf("Can't add file %s to tar: %s", path, err) - } - } - } - - // Make sure to check the error on Close. - if err := ta.TarWriter.Close(); err != nil { - logrus.Debugf("Can't close layer: %s", err) - } - if err := writer.Close(); err != nil { - logrus.Debugf("failed close Changes writer: %s", err) - } - }() - return reader, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_linux.go deleted file mode 100644 index 378cc09c859..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_linux.go +++ /dev/null @@ -1,285 +0,0 @@ -package archive - -import ( - "bytes" - "fmt" - "os" - "path/filepath" - "sort" - "syscall" - "unsafe" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -// walker is used to implement collectFileInfoForChanges on linux. Where this -// method in general returns the entire contents of two directory trees, we -// optimize some FS calls out on linux. In particular, we take advantage of the -// fact that getdents(2) returns the inode of each file in the directory being -// walked, which, when walking two trees in parallel to generate a list of -// changes, can be used to prune subtrees without ever having to lstat(2) them -// directly. Eliminating stat calls in this way can save up to seconds on large -// images. -type walker struct { - dir1 string - dir2 string - root1 *FileInfo - root2 *FileInfo -} - -// collectFileInfoForChanges returns a complete representation of the trees -// rooted at dir1 and dir2, with one important exception: any subtree or -// leaf where the inode and device numbers are an exact match between dir1 -// and dir2 will be pruned from the results. This method is *only* to be used -// to generating a list of changes between the two directories, as it does not -// reflect the full contents. -func collectFileInfoForChanges(dir1, dir2 string) (*FileInfo, *FileInfo, error) { - w := &walker{ - dir1: dir1, - dir2: dir2, - root1: newRootFileInfo(), - root2: newRootFileInfo(), - } - - i1, err := os.Lstat(w.dir1) - if err != nil { - return nil, nil, err - } - i2, err := os.Lstat(w.dir2) - if err != nil { - return nil, nil, err - } - - if err := w.walk("/", i1, i2); err != nil { - return nil, nil, err - } - - return w.root1, w.root2, nil -} - -// Given a FileInfo, its path info, and a reference to the root of the tree -// being constructed, register this file with the tree. 
-func walkchunk(path string, fi os.FileInfo, dir string, root *FileInfo) error { - if fi == nil { - return nil - } - parent := root.LookUp(filepath.Dir(path)) - if parent == nil { - return fmt.Errorf("collectFileInfoForChanges: Unexpectedly no parent for %s", path) - } - info := &FileInfo{ - name: filepath.Base(path), - children: make(map[string]*FileInfo), - parent: parent, - } - cpath := filepath.Join(dir, path) - stat, err := system.FromStatT(fi.Sys().(*syscall.Stat_t)) - if err != nil { - return err - } - info.stat = stat - info.capability, _ = system.Lgetxattr(cpath, "security.capability") // lgetxattr(2): fs access - parent.children[info.name] = info - return nil -} - -// Walk a subtree rooted at the same path in both trees being iterated. For -// example, /docker/overlay/1234/a/b/c/d and /docker/overlay/8888/a/b/c/d -func (w *walker) walk(path string, i1, i2 os.FileInfo) (err error) { - // Register these nodes with the return trees, unless we're still at the - // (already-created) roots: - if path != "/" { - if err := walkchunk(path, i1, w.dir1, w.root1); err != nil { - return err - } - if err := walkchunk(path, i2, w.dir2, w.root2); err != nil { - return err - } - } - - is1Dir := i1 != nil && i1.IsDir() - is2Dir := i2 != nil && i2.IsDir() - - sameDevice := false - if i1 != nil && i2 != nil { - si1 := i1.Sys().(*syscall.Stat_t) - si2 := i2.Sys().(*syscall.Stat_t) - if si1.Dev == si2.Dev { - sameDevice = true - } - } - - // If these files are both non-existent, or leaves (non-dirs), we are done. - if !is1Dir && !is2Dir { - return nil - } - - // Fetch the names of all the files contained in both directories being walked: - var names1, names2 []nameIno - if is1Dir { - names1, err = readdirnames(filepath.Join(w.dir1, path)) // getdents(2): fs access - if err != nil { - return err - } - } - if is2Dir { - names2, err = readdirnames(filepath.Join(w.dir2, path)) // getdents(2): fs access - if err != nil { - return err - } - } - - // We have lists of the files contained in both parallel directories, sorted - // in the same order. Walk them in parallel, generating a unique merged list - // of all items present in either or both directories. 
- var names []string - ix1 := 0 - ix2 := 0 - - for { - if ix1 >= len(names1) { - break - } - if ix2 >= len(names2) { - break - } - - ni1 := names1[ix1] - ni2 := names2[ix2] - - switch bytes.Compare([]byte(ni1.name), []byte(ni2.name)) { - case -1: // ni1 < ni2 -- advance ni1 - // we will not encounter ni1 in names2 - names = append(names, ni1.name) - ix1++ - case 0: // ni1 == ni2 - if ni1.ino != ni2.ino || !sameDevice { - names = append(names, ni1.name) - } - ix1++ - ix2++ - case 1: // ni1 > ni2 -- advance ni2 - // we will not encounter ni2 in names1 - names = append(names, ni2.name) - ix2++ - } - } - for ix1 < len(names1) { - names = append(names, names1[ix1].name) - ix1++ - } - for ix2 < len(names2) { - names = append(names, names2[ix2].name) - ix2++ - } - - // For each of the names present in either or both of the directories being - // iterated, stat the name under each root, and recurse the pair of them: - for _, name := range names { - fname := filepath.Join(path, name) - var cInfo1, cInfo2 os.FileInfo - if is1Dir { - cInfo1, err = os.Lstat(filepath.Join(w.dir1, fname)) // lstat(2): fs access - if err != nil && !os.IsNotExist(err) { - return err - } - } - if is2Dir { - cInfo2, err = os.Lstat(filepath.Join(w.dir2, fname)) // lstat(2): fs access - if err != nil && !os.IsNotExist(err) { - return err - } - } - if err = w.walk(fname, cInfo1, cInfo2); err != nil { - return err - } - } - return nil -} - -// {name,inode} pairs used to support the early-pruning logic of the walker type -type nameIno struct { - name string - ino uint64 -} - -type nameInoSlice []nameIno - -func (s nameInoSlice) Len() int { return len(s) } -func (s nameInoSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] } -func (s nameInoSlice) Less(i, j int) bool { return s[i].name < s[j].name } - -// readdirnames is a hacked-apart version of the Go stdlib code, exposing inode -// numbers further up the stack when reading directory contents. Unlike -// os.Readdirnames, which returns a list of filenames, this function returns a -// list of {filename,inode} pairs. -func readdirnames(dirname string) (names []nameIno, err error) { - var ( - size = 100 - buf = make([]byte, 4096) - nbuf int - bufp int - nb int - ) - - f, err := os.Open(dirname) - if err != nil { - return nil, err - } - defer f.Close() - - names = make([]nameIno, 0, size) // Empty with room to grow. - for { - // Refill the buffer if necessary - if bufp >= nbuf { - bufp = 0 - nbuf, err = syscall.ReadDirent(int(f.Fd()), buf) // getdents on linux - if nbuf < 0 { - nbuf = 0 - } - if err != nil { - return nil, os.NewSyscallError("readdirent", err) - } - if nbuf <= 0 { - break // EOF - } - } - - // Drain the buffer - nb, names = parseDirent(buf[bufp:nbuf], names) - bufp += nb - } - - sl := nameInoSlice(names) - sort.Sort(sl) - return sl, nil -} - -// parseDirent is a minor modification of syscall.ParseDirent (linux version) -// which returns {name,inode} pairs instead of just names. -func parseDirent(buf []byte, names []nameIno) (consumed int, newnames []nameIno) { - origlen := len(buf) - for len(buf) > 0 { - dirent := (*syscall.Dirent)(unsafe.Pointer(&buf[0])) - buf = buf[dirent.Reclen:] - if dirent.Ino == 0 { // File absent in directory. - continue - } - bytes := (*[10000]byte)(unsafe.Pointer(&dirent.Name[0])) - var name = string(bytes[0:clen(bytes[:])]) - if name == "." || name == ".." 
{ // Useless names - continue - } - names = append(names, nameIno{name, dirent.Ino}) - } - return origlen - len(buf), names -} - -func clen(n []byte) int { - for i := 0; i < len(n); i++ { - if n[i] == 0 { - return i - } - } - return len(n) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_other.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_other.go deleted file mode 100644 index 35832f087d0..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_other.go +++ /dev/null @@ -1,97 +0,0 @@ -// +build !linux - -package archive - -import ( - "fmt" - "os" - "path/filepath" - "runtime" - "strings" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -func collectFileInfoForChanges(oldDir, newDir string) (*FileInfo, *FileInfo, error) { - var ( - oldRoot, newRoot *FileInfo - err1, err2 error - errs = make(chan error, 2) - ) - go func() { - oldRoot, err1 = collectFileInfo(oldDir) - errs <- err1 - }() - go func() { - newRoot, err2 = collectFileInfo(newDir) - errs <- err2 - }() - - // block until both routines have returned - for i := 0; i < 2; i++ { - if err := <-errs; err != nil { - return nil, nil, err - } - } - - return oldRoot, newRoot, nil -} - -func collectFileInfo(sourceDir string) (*FileInfo, error) { - root := newRootFileInfo() - - err := filepath.Walk(sourceDir, func(path string, f os.FileInfo, err error) error { - if err != nil { - return err - } - - // Rebase path - relPath, err := filepath.Rel(sourceDir, path) - if err != nil { - return err - } - - // As this runs on the daemon side, file paths are OS specific. - relPath = filepath.Join(string(os.PathSeparator), relPath) - - // See https://github.com/golang/go/issues/9168 - bug in filepath.Join. - // Temporary workaround. If the returned path starts with two backslashes, - // trim it down to a single backslash. Only relevant on Windows. 
- if runtime.GOOS == "windows" { - if strings.HasPrefix(relPath, `\\`) { - relPath = relPath[1:] - } - } - - if relPath == string(os.PathSeparator) { - return nil - } - - parent := root.LookUp(filepath.Dir(relPath)) - if parent == nil { - return fmt.Errorf("collectFileInfo: Unexpectedly no parent for %s", relPath) - } - - info := &FileInfo{ - name: filepath.Base(relPath), - children: make(map[string]*FileInfo), - parent: parent, - } - - s, err := system.Lstat(path) - if err != nil { - return err - } - info.stat = s - - info.capability, _ = system.Lgetxattr(path, "security.capability") - - parent.children[info.name] = info - - return nil - }) - if err != nil { - return nil, err - } - return root, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_unix.go deleted file mode 100644 index 6646b4dfda7..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_unix.go +++ /dev/null @@ -1,36 +0,0 @@ -// +build !windows - -package archive - -import ( - "os" - "syscall" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -func statDifferent(oldStat *system.StatT, newStat *system.StatT) bool { - // Don't look at size for dirs, its not a good measure of change - if oldStat.Mode() != newStat.Mode() || - oldStat.UID() != newStat.UID() || - oldStat.GID() != newStat.GID() || - oldStat.Rdev() != newStat.Rdev() || - // Don't look at size for dirs, its not a good measure of change - (oldStat.Mode()&syscall.S_IFDIR != syscall.S_IFDIR && - (!sameFsTimeSpec(oldStat.Mtim(), newStat.Mtim()) || (oldStat.Size() != newStat.Size()))) { - return true - } - return false -} - -func (info *FileInfo) isDir() bool { - return info.parent == nil || info.stat.Mode()&syscall.S_IFDIR != 0 -} - -func getIno(fi os.FileInfo) uint64 { - return uint64(fi.Sys().(*syscall.Stat_t).Ino) -} - -func hasHardlinks(fi os.FileInfo) bool { - return fi.Sys().(*syscall.Stat_t).Nlink > 1 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_windows.go deleted file mode 100644 index 2d8708d0aef..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/changes_windows.go +++ /dev/null @@ -1,30 +0,0 @@ -package archive - -import ( - "os" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -func statDifferent(oldStat *system.StatT, newStat *system.StatT) bool { - - // Don't look at size for dirs, its not a good measure of change - if oldStat.ModTime() != newStat.ModTime() || - oldStat.Mode() != newStat.Mode() || - oldStat.Size() != newStat.Size() && !oldStat.IsDir() { - return true - } - return false -} - -func (info *FileInfo) isDir() bool { - return info.parent == nil || info.stat.IsDir() -} - -func getIno(fi os.FileInfo) (inode uint64) { - return -} - -func hasHardlinks(fi os.FileInfo) bool { - return false -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy.go 
b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy.go deleted file mode 100644 index e9509126438..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy.go +++ /dev/null @@ -1,458 +0,0 @@ -package archive - -import ( - "archive/tar" - "errors" - "io" - "io/ioutil" - "os" - "path/filepath" - "strings" - - "github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -// Errors used or returned by this file. -var ( - ErrNotDirectory = errors.New("not a directory") - ErrDirNotExists = errors.New("no such directory") - ErrCannotCopyDir = errors.New("cannot copy directory") - ErrInvalidCopySource = errors.New("invalid copy source content") -) - -// PreserveTrailingDotOrSeparator returns the given cleaned path (after -// processing using any utility functions from the path or filepath stdlib -// packages) and appends a trailing `/.` or `/` if its corresponding original -// path (from before being processed by utility functions from the path or -// filepath stdlib packages) ends with a trailing `/.` or `/`. If the cleaned -// path already ends in a `.` path segment, then another is not added. If the -// clean path already ends in a path separator, then another is not added. -func PreserveTrailingDotOrSeparator(cleanedPath, originalPath string) string { - // Ensure paths are in platform semantics - cleanedPath = normalizePath(cleanedPath) - originalPath = normalizePath(originalPath) - - if !specifiesCurrentDir(cleanedPath) && specifiesCurrentDir(originalPath) { - if !hasTrailingPathSeparator(cleanedPath) { - // Add a separator if it doesn't already end with one (a cleaned - // path would only end in a separator if it is the root). - cleanedPath += string(filepath.Separator) - } - cleanedPath += "." - } - - if !hasTrailingPathSeparator(cleanedPath) && hasTrailingPathSeparator(originalPath) { - cleanedPath += string(filepath.Separator) - } - - return cleanedPath -} - -// assertsDirectory returns whether the given path is -// asserted to be a directory, i.e., the path ends with -// a trailing '/' or `/.`, assuming a path separator of `/`. -func assertsDirectory(path string) bool { - return hasTrailingPathSeparator(path) || specifiesCurrentDir(path) -} - -// hasTrailingPathSeparator returns whether the given -// path ends with the system's path separator character. -func hasTrailingPathSeparator(path string) bool { - return len(path) > 0 && os.IsPathSeparator(path[len(path)-1]) -} - -// specifiesCurrentDir returns whether the given path specifies -// a "current directory", i.e., the last path segment is `.`. -func specifiesCurrentDir(path string) bool { - return filepath.Base(path) == "." -} - -// SplitPathDirEntry splits the given path between its directory name and its -// basename by first cleaning the path but preserves a trailing "." if the -// original path specified the current directory. -func SplitPathDirEntry(path string) (dir, base string) { - cleanedPath := filepath.Clean(normalizePath(path)) - - if specifiesCurrentDir(path) { - cleanedPath += string(filepath.Separator) + "." - } - - return filepath.Dir(cleanedPath), filepath.Base(cleanedPath) -} - -// TarResource archives the resource described by the given CopyInfo to a Tar -// archive. 
A non-nil error is returned if sourcePath does not exist or is -// asserted to be a directory but exists as another type of file. -// -// This function acts as a convenient wrapper around TarWithOptions, which -// requires a directory as the source path. TarResource accepts either a -// directory or a file path and correctly sets the Tar options. -func TarResource(sourceInfo CopyInfo) (content Archive, err error) { - return TarResourceRebase(sourceInfo.Path, sourceInfo.RebaseName) -} - -// TarResourceRebase is like TarResource but renames the first path element of -// items in the resulting tar archive to match the given rebaseName if not "". -func TarResourceRebase(sourcePath, rebaseName string) (content Archive, err error) { - sourcePath = normalizePath(sourcePath) - if _, err = os.Lstat(sourcePath); err != nil { - // Catches the case where the source does not exist or is not a - // directory if asserted to be a directory, as this also causes an - // error. - return - } - - // Separate the source path between it's directory and - // the entry in that directory which we are archiving. - sourceDir, sourceBase := SplitPathDirEntry(sourcePath) - - filter := []string{sourceBase} - - logrus.Debugf("copying %q from %q", sourceBase, sourceDir) - - return TarWithOptions(sourceDir, &TarOptions{ - Compression: Uncompressed, - IncludeFiles: filter, - IncludeSourceDir: true, - RebaseNames: map[string]string{ - sourceBase: rebaseName, - }, - }) -} - -// CopyInfo holds basic info about the source -// or destination path of a copy operation. -type CopyInfo struct { - Path string - Exists bool - IsDir bool - RebaseName string -} - -// CopyInfoSourcePath stats the given path to create a CopyInfo -// struct representing that resource for the source of an archive copy -// operation. The given path should be an absolute local path. A source path -// has all symlinks evaluated that appear before the last path separator ("/" -// on Unix). As it is to be a copy source, the path must exist. -func CopyInfoSourcePath(path string, followLink bool) (CopyInfo, error) { - // normalize the file path and then evaluate the symbol link - // we will use the target file instead of the symbol link if - // followLink is set - path = normalizePath(path) - - resolvedPath, rebaseName, err := ResolveHostSourcePath(path, followLink) - if err != nil { - return CopyInfo{}, err - } - - stat, err := os.Lstat(resolvedPath) - if err != nil { - return CopyInfo{}, err - } - - return CopyInfo{ - Path: resolvedPath, - Exists: true, - IsDir: stat.IsDir(), - RebaseName: rebaseName, - }, nil -} - -// CopyInfoDestinationPath stats the given path to create a CopyInfo -// struct representing that resource for the destination of an archive copy -// operation. The given path should be an absolute local path. -func CopyInfoDestinationPath(path string) (info CopyInfo, err error) { - maxSymlinkIter := 10 // filepath.EvalSymlinks uses 255, but 10 already seems like a lot. - path = normalizePath(path) - originalPath := path - - stat, err := os.Lstat(path) - - if err == nil && stat.Mode()&os.ModeSymlink == 0 { - // The path exists and is not a symlink. - return CopyInfo{ - Path: path, - Exists: true, - IsDir: stat.IsDir(), - }, nil - } - - // While the path is a symlink. - for n := 0; err == nil && stat.Mode()&os.ModeSymlink != 0; n++ { - if n > maxSymlinkIter { - // Don't follow symlinks more than this arbitrary number of times. - return CopyInfo{}, errors.New("too many symlinks in " + originalPath) - } - - // The path is a symbolic link. 
We need to evaluate it so that the - // destination of the copy operation is the link target and not the - // link itself. This is notably different than CopyInfoSourcePath which - // only evaluates symlinks before the last appearing path separator. - // Also note that it is okay if the last path element is a broken - // symlink as the copy operation should create the target. - var linkTarget string - - linkTarget, err = os.Readlink(path) - if err != nil { - return CopyInfo{}, err - } - - if !system.IsAbs(linkTarget) { - // Join with the parent directory. - dstParent, _ := SplitPathDirEntry(path) - linkTarget = filepath.Join(dstParent, linkTarget) - } - - path = linkTarget - stat, err = os.Lstat(path) - } - - if err != nil { - // It's okay if the destination path doesn't exist. We can still - // continue the copy operation if the parent directory exists. - if !os.IsNotExist(err) { - return CopyInfo{}, err - } - - // Ensure destination parent dir exists. - dstParent, _ := SplitPathDirEntry(path) - - parentDirStat, err := os.Lstat(dstParent) - if err != nil { - return CopyInfo{}, err - } - if !parentDirStat.IsDir() { - return CopyInfo{}, ErrNotDirectory - } - - return CopyInfo{Path: path}, nil - } - - // The path exists after resolving symlinks. - return CopyInfo{ - Path: path, - Exists: true, - IsDir: stat.IsDir(), - }, nil -} - -// PrepareArchiveCopy prepares the given srcContent archive, which should -// contain the archived resource described by srcInfo, to the destination -// described by dstInfo. Returns the possibly modified content archive along -// with the path to the destination directory which it should be extracted to. -func PrepareArchiveCopy(srcContent Reader, srcInfo, dstInfo CopyInfo) (dstDir string, content Archive, err error) { - // Ensure in platform semantics - srcInfo.Path = normalizePath(srcInfo.Path) - dstInfo.Path = normalizePath(dstInfo.Path) - - // Separate the destination path between its directory and base - // components in case the source archive contents need to be rebased. - dstDir, dstBase := SplitPathDirEntry(dstInfo.Path) - _, srcBase := SplitPathDirEntry(srcInfo.Path) - - switch { - case dstInfo.Exists && dstInfo.IsDir: - // The destination exists as a directory. No alteration - // to srcContent is needed as its contents can be - // simply extracted to the destination directory. - return dstInfo.Path, ioutil.NopCloser(srcContent), nil - case dstInfo.Exists && srcInfo.IsDir: - // The destination exists as some type of file and the source - // content is a directory. This is an error condition since - // you cannot copy a directory to an existing file location. - return "", nil, ErrCannotCopyDir - case dstInfo.Exists: - // The destination exists as some type of file and the source content - // is also a file. The source content entry will have to be renamed to - // have a basename which matches the destination path's basename. - if len(srcInfo.RebaseName) != 0 { - srcBase = srcInfo.RebaseName - } - return dstDir, RebaseArchiveEntries(srcContent, srcBase, dstBase), nil - case srcInfo.IsDir: - // The destination does not exist and the source content is an archive - // of a directory. The archive should be extracted to the parent of - // the destination path instead, and when it is, the directory that is - // created as a result should take the name of the destination path. - // The source content entries will have to be renamed to have a - // basename which matches the destination path's basename. 
- if len(srcInfo.RebaseName) != 0 { - srcBase = srcInfo.RebaseName - } - return dstDir, RebaseArchiveEntries(srcContent, srcBase, dstBase), nil - case assertsDirectory(dstInfo.Path): - // The destination does not exist and is asserted to be created as a - // directory, but the source content is not a directory. This is an - // error condition since you cannot create a directory from a file - // source. - return "", nil, ErrDirNotExists - default: - // The last remaining case is when the destination does not exist, is - // not asserted to be a directory, and the source content is not an - // archive of a directory. It this case, the destination file will need - // to be created when the archive is extracted and the source content - // entry will have to be renamed to have a basename which matches the - // destination path's basename. - if len(srcInfo.RebaseName) != 0 { - srcBase = srcInfo.RebaseName - } - return dstDir, RebaseArchiveEntries(srcContent, srcBase, dstBase), nil - } - -} - -// RebaseArchiveEntries rewrites the given srcContent archive replacing -// an occurrence of oldBase with newBase at the beginning of entry names. -func RebaseArchiveEntries(srcContent Reader, oldBase, newBase string) Archive { - if oldBase == string(os.PathSeparator) { - // If oldBase specifies the root directory, use an empty string as - // oldBase instead so that newBase doesn't replace the path separator - // that all paths will start with. - oldBase = "" - } - - rebased, w := io.Pipe() - - go func() { - srcTar := tar.NewReader(srcContent) - rebasedTar := tar.NewWriter(w) - - for { - hdr, err := srcTar.Next() - if err == io.EOF { - // Signals end of archive. - rebasedTar.Close() - w.Close() - return - } - if err != nil { - w.CloseWithError(err) - return - } - - hdr.Name = strings.Replace(hdr.Name, oldBase, newBase, 1) - - if err = rebasedTar.WriteHeader(hdr); err != nil { - w.CloseWithError(err) - return - } - - if _, err = io.Copy(rebasedTar, srcTar); err != nil { - w.CloseWithError(err) - return - } - } - }() - - return rebased -} - -// CopyResource performs an archive copy from the given source path to the -// given destination path. The source path MUST exist and the destination -// path's parent directory must exist. -func CopyResource(srcPath, dstPath string, followLink bool) error { - var ( - srcInfo CopyInfo - err error - ) - - // Ensure in platform semantics - srcPath = normalizePath(srcPath) - dstPath = normalizePath(dstPath) - - // Clean the source and destination paths. - srcPath = PreserveTrailingDotOrSeparator(filepath.Clean(srcPath), srcPath) - dstPath = PreserveTrailingDotOrSeparator(filepath.Clean(dstPath), dstPath) - - if srcInfo, err = CopyInfoSourcePath(srcPath, followLink); err != nil { - return err - } - - content, err := TarResource(srcInfo) - if err != nil { - return err - } - defer content.Close() - - return CopyTo(content, srcInfo, dstPath) -} - -// CopyTo handles extracting the given content whose -// entries should be sourced from srcInfo to dstPath. -func CopyTo(content Reader, srcInfo CopyInfo, dstPath string) error { - // The destination path need not exist, but CopyInfoDestinationPath will - // ensure that at least the parent directory exists. 
- dstInfo, err := CopyInfoDestinationPath(normalizePath(dstPath)) - if err != nil { - return err - } - - dstDir, copyArchive, err := PrepareArchiveCopy(content, srcInfo, dstInfo) - if err != nil { - return err - } - defer copyArchive.Close() - - options := &TarOptions{ - NoLchown: true, - NoOverwriteDirNonDir: true, - } - - return Untar(copyArchive, dstDir, options) -} - -// ResolveHostSourcePath decides real path need to be copied with parameters such as -// whether to follow symbol link or not, if followLink is true, resolvedPath will return -// link target of any symbol link file, else it will only resolve symlink of directory -// but return symbol link file itself without resolving. -func ResolveHostSourcePath(path string, followLink bool) (resolvedPath, rebaseName string, err error) { - if followLink { - resolvedPath, err = filepath.EvalSymlinks(path) - if err != nil { - return - } - - resolvedPath, rebaseName = GetRebaseName(path, resolvedPath) - } else { - dirPath, basePath := filepath.Split(path) - - // if not follow symbol link, then resolve symbol link of parent dir - var resolvedDirPath string - resolvedDirPath, err = filepath.EvalSymlinks(dirPath) - if err != nil { - return - } - // resolvedDirPath will have been cleaned (no trailing path separators) so - // we can manually join it with the base path element. - resolvedPath = resolvedDirPath + string(filepath.Separator) + basePath - if hasTrailingPathSeparator(path) && filepath.Base(path) != filepath.Base(resolvedPath) { - rebaseName = filepath.Base(path) - } - } - return resolvedPath, rebaseName, nil -} - -// GetRebaseName normalizes and compares path and resolvedPath, -// return completed resolved path and rebased file name -func GetRebaseName(path, resolvedPath string) (string, string) { - // linkTarget will have been cleaned (no trailing path separators and dot) so - // we can manually join it with them - var rebaseName string - if specifiesCurrentDir(path) && !specifiesCurrentDir(resolvedPath) { - resolvedPath += string(filepath.Separator) + "." - } - - if hasTrailingPathSeparator(path) && !hasTrailingPathSeparator(resolvedPath) { - resolvedPath += string(filepath.Separator) - } - - if filepath.Base(path) != filepath.Base(resolvedPath) { - // In the case where the path had a trailing separator and a symlink - // evaluation has changed the last path component, we will need to - // rebase the name in the archive that is being copied to match the - // originally requested name. 
- rebaseName = filepath.Base(path) - } - return resolvedPath, rebaseName -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy_unix.go deleted file mode 100644 index e305b5e4af9..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy_unix.go +++ /dev/null @@ -1,11 +0,0 @@ -// +build !windows - -package archive - -import ( - "path/filepath" -) - -func normalizePath(path string) string { - return filepath.ToSlash(path) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy_windows.go deleted file mode 100644 index 2b775b45c4f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/copy_windows.go +++ /dev/null @@ -1,9 +0,0 @@ -package archive - -import ( - "path/filepath" -) - -func normalizePath(path string) string { - return filepath.FromSlash(path) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/diff.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/diff.go deleted file mode 100644 index 887dd54ccf6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/diff.go +++ /dev/null @@ -1,279 +0,0 @@ -package archive - -import ( - "archive/tar" - "fmt" - "io" - "io/ioutil" - "os" - "path/filepath" - "runtime" - "strings" - - "github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -// UnpackLayer unpack `layer` to a `dest`. The stream `layer` can be -// compressed or uncompressed. -// Returns the size in bytes of the contents of the layer. -func UnpackLayer(dest string, layer Reader, options *TarOptions) (size int64, err error) { - tr := tar.NewReader(layer) - trBuf := pools.BufioReader32KPool.Get(tr) - defer pools.BufioReader32KPool.Put(trBuf) - - var dirs []*tar.Header - unpackedPaths := make(map[string]struct{}) - - if options == nil { - options = &TarOptions{} - } - if options.ExcludePatterns == nil { - options.ExcludePatterns = []string{} - } - remappedRootUID, remappedRootGID, err := idtools.GetRootUIDGID(options.UIDMaps, options.GIDMaps) - if err != nil { - return 0, err - } - - aufsTempdir := "" - aufsHardlinks := make(map[string]*tar.Header) - - if options == nil { - options = &TarOptions{} - } - // Iterate through the files in the archive. - for { - hdr, err := tr.Next() - if err == io.EOF { - // end of tar archive - break - } - if err != nil { - return 0, err - } - - size += hdr.Size - - // Normalize name, for safety and for a simple is-root check - hdr.Name = filepath.Clean(hdr.Name) - - // Windows does not support filenames with colons in them. Ignore - // these files. This is not a problem though (although it might - // appear that it is). 
Let's suppose a client is running docker pull. - // The daemon it points to is Windows. Would it make sense for the - // client to be doing a docker pull Ubuntu for example (which has files - // with colons in the name under /usr/share/man/man3)? No, absolutely - // not as it would really only make sense that they were pulling a - // Windows image. However, for development, it is necessary to be able - // to pull Linux images which are in the repository. - // - // TODO Windows. Once the registry is aware of what images are Windows- - // specific or Linux-specific, this warning should be changed to an error - // to cater for the situation where someone does manage to upload a Linux - // image but have it tagged as Windows inadvertently. - if runtime.GOOS == "windows" { - if strings.Contains(hdr.Name, ":") { - logrus.Warnf("Windows: Ignoring %s (is this a Linux image?)", hdr.Name) - continue - } - } - - // Note as these operations are platform specific, so must the slash be. - if !strings.HasSuffix(hdr.Name, string(os.PathSeparator)) { - // Not the root directory, ensure that the parent directory exists. - // This happened in some tests where an image had a tarfile without any - // parent directories. - parent := filepath.Dir(hdr.Name) - parentPath := filepath.Join(dest, parent) - - if _, err := os.Lstat(parentPath); err != nil && os.IsNotExist(err) { - err = system.MkdirAll(parentPath, 0600) - if err != nil { - return 0, err - } - } - } - - // Skip AUFS metadata dirs - if strings.HasPrefix(hdr.Name, WhiteoutMetaPrefix) { - // Regular files inside /.wh..wh.plnk can be used as hardlink targets - // We don't want this directory, but we need the files in them so that - // such hardlinks can be resolved. - if strings.HasPrefix(hdr.Name, WhiteoutLinkDir) && hdr.Typeflag == tar.TypeReg { - basename := filepath.Base(hdr.Name) - aufsHardlinks[basename] = hdr - if aufsTempdir == "" { - if aufsTempdir, err = ioutil.TempDir("", "dockerplnk"); err != nil { - return 0, err - } - defer os.RemoveAll(aufsTempdir) - } - if err := createTarFile(filepath.Join(aufsTempdir, basename), dest, hdr, tr, true, nil); err != nil { - return 0, err - } - } - - if hdr.Name != WhiteoutOpaqueDir { - continue - } - } - path := filepath.Join(dest, hdr.Name) - rel, err := filepath.Rel(dest, path) - if err != nil { - return 0, err - } - - // Note as these operations are platform specific, so must the slash be. - if strings.HasPrefix(rel, ".."+string(os.PathSeparator)) { - return 0, breakoutError(fmt.Errorf("%q is outside of %q", hdr.Name, dest)) - } - base := filepath.Base(path) - - if strings.HasPrefix(base, WhiteoutPrefix) { - dir := filepath.Dir(path) - if base == WhiteoutOpaqueDir { - _, err := os.Lstat(dir) - if err != nil { - return 0, err - } - err = filepath.Walk(dir, func(path string, info os.FileInfo, err error) error { - if err != nil { - if os.IsNotExist(err) { - err = nil // parent was deleted - } - return err - } - if path == dir { - return nil - } - if _, exists := unpackedPaths[path]; !exists { - err := os.RemoveAll(path) - return err - } - return nil - }) - if err != nil { - return 0, err - } - } else { - originalBase := base[len(WhiteoutPrefix):] - originalPath := filepath.Join(dir, originalBase) - if err := os.RemoveAll(originalPath); err != nil { - return 0, err - } - } - } else { - // If path exits we almost always just want to remove and replace it. - // The only exception is when it is a directory *and* the file from - // the layer is also a directory. Then we want to merge them (i.e. 
- // just apply the metadata from the layer). - if fi, err := os.Lstat(path); err == nil { - if !(fi.IsDir() && hdr.Typeflag == tar.TypeDir) { - if err := os.RemoveAll(path); err != nil { - return 0, err - } - } - } - - trBuf.Reset(tr) - srcData := io.Reader(trBuf) - srcHdr := hdr - - // Hard links into /.wh..wh.plnk don't work, as we don't extract that directory, so - // we manually retarget these into the temporary files we extracted them into - if hdr.Typeflag == tar.TypeLink && strings.HasPrefix(filepath.Clean(hdr.Linkname), WhiteoutLinkDir) { - linkBasename := filepath.Base(hdr.Linkname) - srcHdr = aufsHardlinks[linkBasename] - if srcHdr == nil { - return 0, fmt.Errorf("Invalid aufs hardlink") - } - tmpFile, err := os.Open(filepath.Join(aufsTempdir, linkBasename)) - if err != nil { - return 0, err - } - defer tmpFile.Close() - srcData = tmpFile - } - - // if the options contain a uid & gid maps, convert header uid/gid - // entries using the maps such that lchown sets the proper mapped - // uid/gid after writing the file. We only perform this mapping if - // the file isn't already owned by the remapped root UID or GID, as - // that specific uid/gid has no mapping from container -> host, and - // those files already have the proper ownership for inside the - // container. - if srcHdr.Uid != remappedRootUID { - xUID, err := idtools.ToHost(srcHdr.Uid, options.UIDMaps) - if err != nil { - return 0, err - } - srcHdr.Uid = xUID - } - if srcHdr.Gid != remappedRootGID { - xGID, err := idtools.ToHost(srcHdr.Gid, options.GIDMaps) - if err != nil { - return 0, err - } - srcHdr.Gid = xGID - } - if err := createTarFile(path, dest, srcHdr, srcData, true, nil); err != nil { - return 0, err - } - - // Directory mtimes must be handled at the end to avoid further - // file creation in them to modify the directory mtime - if hdr.Typeflag == tar.TypeDir { - dirs = append(dirs, hdr) - } - unpackedPaths[path] = struct{}{} - } - } - - for _, hdr := range dirs { - path := filepath.Join(dest, hdr.Name) - if err := system.Chtimes(path, hdr.AccessTime, hdr.ModTime); err != nil { - return 0, err - } - } - - return size, nil -} - -// ApplyLayer parses a diff in the standard layer format from `layer`, -// and applies it to the directory `dest`. The stream `layer` can be -// compressed or uncompressed. -// Returns the size in bytes of the contents of the layer. -func ApplyLayer(dest string, layer Reader) (int64, error) { - return applyLayerHandler(dest, layer, &TarOptions{}, true) -} - -// ApplyUncompressedLayer parses a diff in the standard layer format from -// `layer`, and applies it to the directory `dest`. The stream `layer` -// can only be uncompressed. -// Returns the size in bytes of the contents of the layer. 
-func ApplyUncompressedLayer(dest string, layer Reader, options *TarOptions) (int64, error) { - return applyLayerHandler(dest, layer, options, false) -} - -// do the bulk load of ApplyLayer, but allow for not calling DecompressStream -func applyLayerHandler(dest string, layer Reader, options *TarOptions, decompress bool) (int64, error) { - dest = filepath.Clean(dest) - - // We need to be able to set any perms - oldmask, err := system.Umask(0) - if err != nil { - return 0, err - } - defer system.Umask(oldmask) // ignore err, ErrNotSupportedPlatform - - if decompress { - layer, err = DecompressStream(layer) - if err != nil { - return 0, err - } - } - return UnpackLayer(dest, layer, options) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/time_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/time_linux.go deleted file mode 100644 index 3448569b1eb..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/time_linux.go +++ /dev/null @@ -1,16 +0,0 @@ -package archive - -import ( - "syscall" - "time" -) - -func timeToTimespec(time time.Time) (ts syscall.Timespec) { - if time.IsZero() { - // Return UTIME_OMIT special value - ts.Sec = 0 - ts.Nsec = ((1 << 30) - 2) - return - } - return syscall.NsecToTimespec(time.UnixNano()) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/time_unsupported.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/time_unsupported.go deleted file mode 100644 index e85aac05408..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/time_unsupported.go +++ /dev/null @@ -1,16 +0,0 @@ -// +build !linux - -package archive - -import ( - "syscall" - "time" -) - -func timeToTimespec(time time.Time) (ts syscall.Timespec) { - nsec := int64(0) - if !time.IsZero() { - nsec = time.UnixNano() - } - return syscall.NsecToTimespec(nsec) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/whiteouts.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/whiteouts.go deleted file mode 100644 index d20478a10dc..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/whiteouts.go +++ /dev/null @@ -1,23 +0,0 @@ -package archive - -// Whiteouts are files with a special meaning for the layered filesystem. -// Docker uses AUFS whiteout files inside exported archives. In other -// filesystems these files are generated/handled on tar creation/extraction. - -// WhiteoutPrefix prefix means file is a whiteout. If this is followed by a -// filename this means that file has been removed from the base layer. -const WhiteoutPrefix = ".wh." - -// WhiteoutMetaPrefix prefix means whiteout has a special meaning and is not -// for removing an actual file. Normally these files are excluded from exported -// archives. -const WhiteoutMetaPrefix = WhiteoutPrefix + WhiteoutPrefix - -// WhiteoutLinkDir is a directory AUFS uses for storing hardlink links to other -// layers. Normally these should not go into exported archives and all changed -// hardlinks should be copied to the top layer. 
-const WhiteoutLinkDir = WhiteoutMetaPrefix + "plnk" - -// WhiteoutOpaqueDir file means directory has been made opaque - meaning -// readdir calls to this directory do not follow to lower layers. -const WhiteoutOpaqueDir = WhiteoutMetaPrefix + ".opq" diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/wrap.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/wrap.go deleted file mode 100644 index dfb335c0b6c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive/wrap.go +++ /dev/null @@ -1,59 +0,0 @@ -package archive - -import ( - "archive/tar" - "bytes" - "io/ioutil" -) - -// Generate generates a new archive from the content provided -// as input. -// -// `files` is a sequence of path/content pairs. A new file is -// added to the archive for each pair. -// If the last pair is incomplete, the file is created with an -// empty content. For example: -// -// Generate("foo.txt", "hello world", "emptyfile") -// -// The above call will return an archive with 2 files: -// * ./foo.txt with content "hello world" -// * ./empty with empty content -// -// FIXME: stream content instead of buffering -// FIXME: specify permissions and other archive metadata -func Generate(input ...string) (Archive, error) { - files := parseStringPairs(input...) - buf := new(bytes.Buffer) - tw := tar.NewWriter(buf) - for _, file := range files { - name, content := file[0], file[1] - hdr := &tar.Header{ - Name: name, - Size: int64(len(content)), - } - if err := tw.WriteHeader(hdr); err != nil { - return nil, err - } - if _, err := tw.Write([]byte(content)); err != nil { - return nil, err - } - } - if err := tw.Close(); err != nil { - return nil, err - } - return ioutil.NopCloser(buf), nil -} - -func parseStringPairs(input ...string) (output [][2]string) { - output = make([][2]string, 0, len(input)/2+1) - for i := 0; i < len(input); i += 2 { - var pair [2]string - pair[0] = input[i] - if i+1 < len(input) { - pair[1] = input[i+1] - } - output = append(output, pair) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils.go deleted file mode 100644 index a15cf4bc5ec..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils.go +++ /dev/null @@ -1,279 +0,0 @@ -package fileutils - -import ( - "errors" - "fmt" - "io" - "os" - "path/filepath" - "regexp" - "strings" - "text/scanner" - - "github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus" -) - -// exclusion return true if the specified pattern is an exclusion -func exclusion(pattern string) bool { - return pattern[0] == '!' -} - -// empty return true if the specified pattern is empty -func empty(pattern string) bool { - return pattern == "" -} - -// CleanPatterns takes a slice of patterns returns a new -// slice of patterns cleaned with filepath.Clean, stripped -// of any empty patterns and lets the caller know whether the -// slice contains any exception patterns (prefixed with !). -func CleanPatterns(patterns []string) ([]string, [][]string, bool, error) { - // Loop over exclusion patterns and: - // 1. Clean them up. - // 2. Indicate whether we are dealing with any exception rules. 
- // 3. Error if we see a single exclusion marker on it's own (!). - cleanedPatterns := []string{} - patternDirs := [][]string{} - exceptions := false - for _, pattern := range patterns { - // Eliminate leading and trailing whitespace. - pattern = strings.TrimSpace(pattern) - if empty(pattern) { - continue - } - if exclusion(pattern) { - if len(pattern) == 1 { - return nil, nil, false, errors.New("Illegal exclusion pattern: !") - } - exceptions = true - } - pattern = filepath.Clean(pattern) - cleanedPatterns = append(cleanedPatterns, pattern) - if exclusion(pattern) { - pattern = pattern[1:] - } - patternDirs = append(patternDirs, strings.Split(pattern, "/")) - } - - return cleanedPatterns, patternDirs, exceptions, nil -} - -// Matches returns true if file matches any of the patterns -// and isn't excluded by any of the subsequent patterns. -func Matches(file string, patterns []string) (bool, error) { - file = filepath.Clean(file) - - if file == "." { - // Don't let them exclude everything, kind of silly. - return false, nil - } - - patterns, patDirs, _, err := CleanPatterns(patterns) - if err != nil { - return false, err - } - - return OptimizedMatches(file, patterns, patDirs) -} - -// OptimizedMatches is basically the same as fileutils.Matches() but optimized for archive.go. -// It will assume that the inputs have been preprocessed and therefore the function -// doesn't need to do as much error checking and clean-up. This was done to avoid -// repeating these steps on each file being checked during the archive process. -// The more generic fileutils.Matches() can't make these assumptions. -func OptimizedMatches(file string, patterns []string, patDirs [][]string) (bool, error) { - matched := false - parentPath := filepath.Dir(file) - parentPathDirs := strings.Split(parentPath, "/") - - for i, pattern := range patterns { - negative := false - - if exclusion(pattern) { - negative = true - pattern = pattern[1:] - } - - match, err := regexpMatch(pattern, file) - if err != nil { - return false, fmt.Errorf("Error in pattern (%s): %s", pattern, err) - } - - if !match && parentPath != "." { - // Check to see if the pattern matches one of our parent dirs. - if len(patDirs[i]) <= len(parentPathDirs) { - match, _ = regexpMatch(strings.Join(patDirs[i], "/"), - strings.Join(parentPathDirs[:len(patDirs[i])], "/")) - } - } - - if match { - matched = !negative - } - } - - if matched { - logrus.Debugf("Skipping excluded path: %s", file) - } - - return matched, nil -} - -// regexpMatch tries to match the logic of filepath.Match but -// does so using regexp logic. We do this so that we can expand the -// wildcard set to include other things, like "**" to mean any number -// of directories. This means that we should be backwards compatible -// with filepath.Match(). We'll end up supporting more stuff, due to -// the fact that we're using regexp, but that's ok - it does no harm. -func regexpMatch(pattern, path string) (bool, error) { - regStr := "^" - - // Do some syntax checking on the pattern. - // filepath's Match() has some really weird rules that are inconsistent - // so instead of trying to dup their logic, just call Match() for its - // error state and if there is an error in the pattern return it. - // If this becomes an issue we can remove this since its really only - // needed in the error (syntax) case - which isn't really critical. - if _, err := filepath.Match(pattern, path); err != nil { - return false, err - } - - // Go through the pattern and convert it to a regexp. 
- // We use a scanner so we can support utf-8 chars. - var scan scanner.Scanner - scan.Init(strings.NewReader(pattern)) - - sl := string(os.PathSeparator) - escSL := sl - if sl == `\` { - escSL += `\` - } - - for scan.Peek() != scanner.EOF { - ch := scan.Next() - - if ch == '*' { - if scan.Peek() == '*' { - // is some flavor of "**" - scan.Next() - - if scan.Peek() == scanner.EOF { - // is "**EOF" - to align with .gitignore just accept all - regStr += ".*" - } else { - // is "**" - regStr += "((.*" + escSL + ")|([^" + escSL + "]*))" - } - - // Treat **/ as ** so eat the "/" - if string(scan.Peek()) == sl { - scan.Next() - } - } else { - // is "*" so map it to anything but "/" - regStr += "[^" + escSL + "]*" - } - } else if ch == '?' { - // "?" is any char except "/" - regStr += "[^" + escSL + "]" - } else if strings.Index(".$", string(ch)) != -1 { - // Escape some regexp special chars that have no meaning - // in golang's filepath.Match - regStr += `\` + string(ch) - } else if ch == '\\' { - // escape next char. Note that a trailing \ in the pattern - // will be left alone (but need to escape it) - if sl == `\` { - // On windows map "\" to "\\", meaning an escaped backslash, - // and then just continue because filepath.Match on - // Windows doesn't allow escaping at all - regStr += escSL - continue - } - if scan.Peek() != scanner.EOF { - regStr += `\` + string(scan.Next()) - } else { - regStr += `\` - } - } else { - regStr += string(ch) - } - } - - regStr += "$" - - res, err := regexp.MatchString(regStr, path) - - // Map regexp's error to filepath's so no one knows we're not using filepath - if err != nil { - err = filepath.ErrBadPattern - } - - return res, err -} - -// CopyFile copies from src to dst until either EOF is reached -// on src or an error occurs. It verifies src exists and remove -// the dst if it exists. -func CopyFile(src, dst string) (int64, error) { - cleanSrc := filepath.Clean(src) - cleanDst := filepath.Clean(dst) - if cleanSrc == cleanDst { - return 0, nil - } - sf, err := os.Open(cleanSrc) - if err != nil { - return 0, err - } - defer sf.Close() - if err := os.Remove(cleanDst); err != nil && !os.IsNotExist(err) { - return 0, err - } - df, err := os.Create(cleanDst) - if err != nil { - return 0, err - } - defer df.Close() - return io.Copy(df, sf) -} - -// ReadSymlinkedDirectory returns the target directory of a symlink. -// The target of the symbolic link may not be a file. -func ReadSymlinkedDirectory(path string) (string, error) { - var realPath string - var err error - if realPath, err = filepath.Abs(path); err != nil { - return "", fmt.Errorf("unable to get absolute path for %s: %s", path, err) - } - if realPath, err = filepath.EvalSymlinks(realPath); err != nil { - return "", fmt.Errorf("failed to canonicalise path for %s: %s", path, err) - } - realPathInfo, err := os.Stat(realPath) - if err != nil { - return "", fmt.Errorf("failed to stat target '%s' of '%s': %s", realPath, path, err) - } - if !realPathInfo.Mode().IsDir() { - return "", fmt.Errorf("canonical path points to a file '%s'", realPath) - } - return realPath, nil -} - -// CreateIfNotExists creates a file or a directory only if it does not already exist. 
-func CreateIfNotExists(path string, isDir bool) error { - if _, err := os.Stat(path); err != nil { - if os.IsNotExist(err) { - if isDir { - return os.MkdirAll(path, 0755) - } - if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { - return err - } - f, err := os.OpenFile(path, os.O_CREATE, 0755) - if err != nil { - return err - } - f.Close() - } - } - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils_unix.go deleted file mode 100644 index 7e00802c124..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils_unix.go +++ /dev/null @@ -1,22 +0,0 @@ -// +build linux freebsd - -package fileutils - -import ( - "fmt" - "io/ioutil" - "os" - - "github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus" -) - -// GetTotalUsedFds Returns the number of used File Descriptors by -// reading it via /proc filesystem. -func GetTotalUsedFds() int { - if fds, err := ioutil.ReadDir(fmt.Sprintf("/proc/%d/fd", os.Getpid())); err != nil { - logrus.Errorf("Error opening /proc/%d/fd: %s", os.Getpid(), err) - } else { - return len(fds) - } - return -1 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils_windows.go deleted file mode 100644 index 5ec21cace52..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils/fileutils_windows.go +++ /dev/null @@ -1,7 +0,0 @@ -package fileutils - -// GetTotalUsedFds Returns the number of used File Descriptors. Not supported -// on Windows. -func GetTotalUsedFds() int { - return -1 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir/homedir.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir/homedir.go deleted file mode 100644 index dcae1788245..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir/homedir.go +++ /dev/null @@ -1,39 +0,0 @@ -package homedir - -import ( - "os" - "runtime" - - "github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user" -) - -// Key returns the env var name for the user's home dir based on -// the platform being run on -func Key() string { - if runtime.GOOS == "windows" { - return "USERPROFILE" - } - return "HOME" -} - -// Get returns the home directory of the current user with the help of -// environment variables depending on the target operating system. -// Returned path should be used with "path/filepath" to form new paths. -func Get() string { - home := os.Getenv(Key()) - if home == "" && runtime.GOOS != "windows" { - if u, err := user.CurrentUser(); err == nil { - return u.Home - } - } - return home -} - -// GetShortcutString returns the string that is shortcut to user's home directory -// in the native shell of the platform running on. 
-func GetShortcutString() string { - if runtime.GOOS == "windows" { - return "%USERPROFILE%" // be careful while using in format functions - } - return "~" -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools.go deleted file mode 100644 index a1301ee976b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools.go +++ /dev/null @@ -1,195 +0,0 @@ -package idtools - -import ( - "bufio" - "fmt" - "os" - "sort" - "strconv" - "strings" -) - -// IDMap contains a single entry for user namespace range remapping. An array -// of IDMap entries represents the structure that will be provided to the Linux -// kernel for creating a user namespace. -type IDMap struct { - ContainerID int `json:"container_id"` - HostID int `json:"host_id"` - Size int `json:"size"` -} - -type subIDRange struct { - Start int - Length int -} - -type ranges []subIDRange - -func (e ranges) Len() int { return len(e) } -func (e ranges) Swap(i, j int) { e[i], e[j] = e[j], e[i] } -func (e ranges) Less(i, j int) bool { return e[i].Start < e[j].Start } - -const ( - subuidFileName string = "/etc/subuid" - subgidFileName string = "/etc/subgid" -) - -// MkdirAllAs creates a directory (include any along the path) and then modifies -// ownership to the requested uid/gid. If the directory already exists, this -// function will still change ownership to the requested uid/gid pair. -func MkdirAllAs(path string, mode os.FileMode, ownerUID, ownerGID int) error { - return mkdirAs(path, mode, ownerUID, ownerGID, true, true) -} - -// MkdirAllNewAs creates a directory (include any along the path) and then modifies -// ownership ONLY of newly created directories to the requested uid/gid. If the -// directories along the path exist, no change of ownership will be performed -func MkdirAllNewAs(path string, mode os.FileMode, ownerUID, ownerGID int) error { - return mkdirAs(path, mode, ownerUID, ownerGID, true, false) -} - -// MkdirAs creates a directory and then modifies ownership to the requested uid/gid. -// If the directory already exists, this function still changes ownership -func MkdirAs(path string, mode os.FileMode, ownerUID, ownerGID int) error { - return mkdirAs(path, mode, ownerUID, ownerGID, false, true) -} - -// GetRootUIDGID retrieves the remapped root uid/gid pair from the set of maps. -// If the maps are empty, then the root uid/gid will default to "real" 0/0 -func GetRootUIDGID(uidMap, gidMap []IDMap) (int, int, error) { - var uid, gid int - - if uidMap != nil { - xUID, err := ToHost(0, uidMap) - if err != nil { - return -1, -1, err - } - uid = xUID - } - if gidMap != nil { - xGID, err := ToHost(0, gidMap) - if err != nil { - return -1, -1, err - } - gid = xGID - } - return uid, gid, nil -} - -// ToContainer takes an id mapping, and uses it to translate a -// host ID to the remapped ID. 
If no map is provided, then the translation -// assumes a 1-to-1 mapping and returns the passed in id -func ToContainer(hostID int, idMap []IDMap) (int, error) { - if idMap == nil { - return hostID, nil - } - for _, m := range idMap { - if (hostID >= m.HostID) && (hostID <= (m.HostID + m.Size - 1)) { - contID := m.ContainerID + (hostID - m.HostID) - return contID, nil - } - } - return -1, fmt.Errorf("Host ID %d cannot be mapped to a container ID", hostID) -} - -// ToHost takes an id mapping and a remapped ID, and translates the -// ID to the mapped host ID. If no map is provided, then the translation -// assumes a 1-to-1 mapping and returns the passed in id # -func ToHost(contID int, idMap []IDMap) (int, error) { - if idMap == nil { - return contID, nil - } - for _, m := range idMap { - if (contID >= m.ContainerID) && (contID <= (m.ContainerID + m.Size - 1)) { - hostID := m.HostID + (contID - m.ContainerID) - return hostID, nil - } - } - return -1, fmt.Errorf("Container ID %d cannot be mapped to a host ID", contID) -} - -// CreateIDMappings takes a requested user and group name and -// using the data from /etc/sub{uid,gid} ranges, creates the -// proper uid and gid remapping ranges for that user/group pair -func CreateIDMappings(username, groupname string) ([]IDMap, []IDMap, error) { - subuidRanges, err := parseSubuid(username) - if err != nil { - return nil, nil, err - } - subgidRanges, err := parseSubgid(groupname) - if err != nil { - return nil, nil, err - } - if len(subuidRanges) == 0 { - return nil, nil, fmt.Errorf("No subuid ranges found for user %q", username) - } - if len(subgidRanges) == 0 { - return nil, nil, fmt.Errorf("No subgid ranges found for group %q", groupname) - } - - return createIDMap(subuidRanges), createIDMap(subgidRanges), nil -} - -func createIDMap(subidRanges ranges) []IDMap { - idMap := []IDMap{} - - // sort the ranges by lowest ID first - sort.Sort(subidRanges) - containerID := 0 - for _, idrange := range subidRanges { - idMap = append(idMap, IDMap{ - ContainerID: containerID, - HostID: idrange.Start, - Size: idrange.Length, - }) - containerID = containerID + idrange.Length - } - return idMap -} - -func parseSubuid(username string) (ranges, error) { - return parseSubidFile(subuidFileName, username) -} - -func parseSubgid(username string) (ranges, error) { - return parseSubidFile(subgidFileName, username) -} - -func parseSubidFile(path, username string) (ranges, error) { - var rangeList ranges - - subidFile, err := os.Open(path) - if err != nil { - return rangeList, err - } - defer subidFile.Close() - - s := bufio.NewScanner(subidFile) - for s.Scan() { - if err := s.Err(); err != nil { - return rangeList, err - } - - text := strings.TrimSpace(s.Text()) - if text == "" { - continue - } - parts := strings.Split(text, ":") - if len(parts) != 3 { - return rangeList, fmt.Errorf("Cannot parse subuid/gid information: Format not correct for %s file", path) - } - if parts[0] == username { - // return the first entry for a user; ignores potential for multiple ranges per user - startid, err := strconv.Atoi(parts[1]) - if err != nil { - return rangeList, fmt.Errorf("String to int conversion failed during subuid/gid parsing of %s: %v", path, err) - } - length, err := strconv.Atoi(parts[2]) - if err != nil { - return rangeList, fmt.Errorf("String to int conversion failed during subuid/gid parsing of %s: %v", path, err) - } - rangeList = append(rangeList, subIDRange{startid, length}) - } - } - return rangeList, nil -} diff --git 
a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools_unix.go deleted file mode 100644 index 0444307d225..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools_unix.go +++ /dev/null @@ -1,60 +0,0 @@ -// +build !windows - -package idtools - -import ( - "os" - "path/filepath" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -func mkdirAs(path string, mode os.FileMode, ownerUID, ownerGID int, mkAll, chownExisting bool) error { - // make an array containing the original path asked for, plus (for mkAll == true) - // all path components leading up to the complete path that don't exist before we MkdirAll - // so that we can chown all of them properly at the end. If chownExisting is false, we won't - // chown the full directory path if it exists - var paths []string - if _, err := os.Stat(path); err != nil && os.IsNotExist(err) { - paths = []string{path} - } else if err == nil && chownExisting { - if err := os.Chown(path, ownerUID, ownerGID); err != nil { - return err - } - // short-circuit--we were called with an existing directory and chown was requested - return nil - } else if err == nil { - // nothing to do; directory path fully exists already and chown was NOT requested - return nil - } - - if mkAll { - // walk back to "/" looking for directories which do not exist - // and add them to the paths array for chown after creation - dirPath := path - for { - dirPath = filepath.Dir(dirPath) - if dirPath == "/" { - break - } - if _, err := os.Stat(dirPath); err != nil && os.IsNotExist(err) { - paths = append(paths, dirPath) - } - } - if err := system.MkdirAll(path, mode); err != nil && !os.IsExist(err) { - return err - } - } else { - if err := os.Mkdir(path, mode); err != nil && !os.IsExist(err) { - return err - } - } - // even if it existed, we will chown the requested path + any subpaths that - // didn't exist when we called MkdirAll - for _, pathComponent := range paths { - if err := os.Chown(pathComponent, ownerUID, ownerGID); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools_windows.go deleted file mode 100644 index d5ec992db76..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/idtools_windows.go +++ /dev/null @@ -1,18 +0,0 @@ -// +build windows - -package idtools - -import ( - "os" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system" -) - -// Platforms such as Windows do not support the UID/GID concept. So make this -// just a wrapper around system.MkdirAll. 
-func mkdirAs(path string, mode os.FileMode, ownerUID, ownerGID int, mkAll, chownExisting bool) error { - if err := system.MkdirAll(path, mode); err != nil && !os.IsExist(err) { - return err - } - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/usergroupadd_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/usergroupadd_linux.go deleted file mode 100644 index c1eedff1043..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/usergroupadd_linux.go +++ /dev/null @@ -1,155 +0,0 @@ -package idtools - -import ( - "fmt" - "os/exec" - "path/filepath" - "strings" - "syscall" -) - -// add a user and/or group to Linux /etc/passwd, /etc/group using standard -// Linux distribution commands: -// adduser --uid --shell /bin/login --no-create-home --disabled-login --ingroup -// useradd -M -u -s /bin/nologin -N -g -// addgroup --gid -// groupadd -g - -const baseUID int = 10000 -const baseGID int = 10000 -const idMAX int = 65534 - -var ( - userCommand string - groupCommand string - - cmdTemplates = map[string]string{ - "adduser": "--uid %d --shell /bin/false --no-create-home --disabled-login --ingroup %s %s", - "useradd": "-M -u %d -s /bin/false -N -g %s %s", - "addgroup": "--gid %d %s", - "groupadd": "-g %d %s", - } -) - -func init() { - // set up which commands are used for adding users/groups dependent on distro - if _, err := resolveBinary("adduser"); err == nil { - userCommand = "adduser" - } else if _, err := resolveBinary("useradd"); err == nil { - userCommand = "useradd" - } - if _, err := resolveBinary("addgroup"); err == nil { - groupCommand = "addgroup" - } else if _, err := resolveBinary("groupadd"); err == nil { - groupCommand = "groupadd" - } -} - -func resolveBinary(binname string) (string, error) { - binaryPath, err := exec.LookPath(binname) - if err != nil { - return "", err - } - resolvedPath, err := filepath.EvalSymlinks(binaryPath) - if err != nil { - return "", err - } - //only return no error if the final resolved binary basename - //matches what was searched for - if filepath.Base(resolvedPath) == binname { - return resolvedPath, nil - } - return "", fmt.Errorf("Binary %q does not resolve to a binary of that name in $PATH (%q)", binname, resolvedPath) -} - -// AddNamespaceRangesUser takes a name and finds an unused uid, gid pair -// and calls the appropriate helper function to add the group and then -// the user to the group in /etc/group and /etc/passwd respectively. -// This new user's /etc/sub{uid,gid} ranges will be used for user namespace -// mapping ranges in containers. 
-func AddNamespaceRangesUser(name string) (int, int, error) { - // Find unused uid, gid pair - uid, err := findUnusedUID(baseUID) - if err != nil { - return -1, -1, fmt.Errorf("Unable to find unused UID: %v", err) - } - gid, err := findUnusedGID(baseGID) - if err != nil { - return -1, -1, fmt.Errorf("Unable to find unused GID: %v", err) - } - - // First add the group that we will use - if err := addGroup(name, gid); err != nil { - return -1, -1, fmt.Errorf("Error adding group %q: %v", name, err) - } - // Add the user as a member of the group - if err := addUser(name, uid, name); err != nil { - return -1, -1, fmt.Errorf("Error adding user %q: %v", name, err) - } - return uid, gid, nil -} - -func addUser(userName string, uid int, groupName string) error { - - if userCommand == "" { - return fmt.Errorf("Cannot add user; no useradd/adduser binary found") - } - args := fmt.Sprintf(cmdTemplates[userCommand], uid, groupName, userName) - return execAddCmd(userCommand, args) -} - -func addGroup(groupName string, gid int) error { - - if groupCommand == "" { - return fmt.Errorf("Cannot add group; no groupadd/addgroup binary found") - } - args := fmt.Sprintf(cmdTemplates[groupCommand], gid, groupName) - // only error out if the error isn't that the group already exists - // if the group exists then our needs are already met - if err := execAddCmd(groupCommand, args); err != nil && !strings.Contains(err.Error(), "already exists") { - return err - } - return nil -} - -func execAddCmd(cmd, args string) error { - execCmd := exec.Command(cmd, strings.Split(args, " ")...) - out, err := execCmd.CombinedOutput() - if err != nil { - return fmt.Errorf("Failed to add user/group with error: %v; output: %q", err, string(out)) - } - return nil -} - -func findUnusedUID(startUID int) (int, error) { - return findUnused("passwd", startUID) -} - -func findUnusedGID(startGID int) (int, error) { - return findUnused("group", startGID) -} - -func findUnused(file string, id int) (int, error) { - for { - cmdStr := fmt.Sprintf("cat /etc/%s | cut -d: -f3 | grep '^%d$'", file, id) - cmd := exec.Command("sh", "-c", cmdStr) - if err := cmd.Run(); err != nil { - // if a non-zero return code occurs, then we know the ID was not found - // and is usable - if exiterr, ok := err.(*exec.ExitError); ok { - // The program has exited with an exit code != 0 - if status, ok := exiterr.Sys().(syscall.WaitStatus); ok { - if status.ExitStatus() == 1 { - //no match, we can use this ID - return id, nil - } - } - } - return -1, fmt.Errorf("Error looking in /etc/%s for unused ID: %v", file, err) - } - id++ - if id > idMAX { - return -1, fmt.Errorf("Maximum id in %q reached with finding unused numeric ID", file) - } - } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/usergroupadd_unsupported.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/usergroupadd_unsupported.go deleted file mode 100644 index d98b354cbd8..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools/usergroupadd_unsupported.go +++ /dev/null @@ -1,12 +0,0 @@ -// +build !linux - -package idtools - -import "fmt" - -// AddNamespaceRangesUser takes a name and finds an unused uid, gid pair -// and calls the appropriate helper function to add the group and then -// the user to the group in /etc/group and /etc/passwd respectively. 
-func AddNamespaceRangesUser(name string) (int, int, error) { - return -1, -1, fmt.Errorf("No support for adding users or groups on this OS") -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/bytespipe.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/bytespipe.go deleted file mode 100644 index e263c284f09..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/bytespipe.go +++ /dev/null @@ -1,152 +0,0 @@ -package ioutils - -import ( - "errors" - "io" - "sync" -) - -// maxCap is the highest capacity to use in byte slices that buffer data. -const maxCap = 1e6 - -// blockThreshold is the minimum number of bytes in the buffer which will cause -// a write to BytesPipe to block when allocating a new slice. -const blockThreshold = 1e6 - -// ErrClosed is returned when Write is called on a closed BytesPipe. -var ErrClosed = errors.New("write to closed BytesPipe") - -// BytesPipe is io.ReadWriteCloser which works similarly to pipe(queue). -// All written data may be read at most once. Also, BytesPipe allocates -// and releases new byte slices to adjust to current needs, so the buffer -// won't be overgrown after peak loads. -type BytesPipe struct { - mu sync.Mutex - wait *sync.Cond - buf [][]byte // slice of byte-slices of buffered data - lastRead int // index in the first slice to a read point - bufLen int // length of data buffered over the slices - closeErr error // error to return from next Read. set to nil if not closed. -} - -// NewBytesPipe creates new BytesPipe, initialized by specified slice. -// If buf is nil, then it will be initialized with slice which cap is 64. -// buf will be adjusted in a way that len(buf) == 0, cap(buf) == cap(buf). -func NewBytesPipe(buf []byte) *BytesPipe { - if cap(buf) == 0 { - buf = make([]byte, 0, 64) - } - bp := &BytesPipe{ - buf: [][]byte{buf[:0]}, - } - bp.wait = sync.NewCond(&bp.mu) - return bp -} - -// Write writes p to BytesPipe. -// It can allocate new []byte slices in a process of writing. -func (bp *BytesPipe) Write(p []byte) (int, error) { - bp.mu.Lock() - defer bp.mu.Unlock() - written := 0 - for { - if bp.closeErr != nil { - return written, ErrClosed - } - // write data to the last buffer - b := bp.buf[len(bp.buf)-1] - // copy data to the current empty allocated area - n := copy(b[len(b):cap(b)], p) - // increment buffered data length - bp.bufLen += n - // include written data in last buffer - bp.buf[len(bp.buf)-1] = b[:len(b)+n] - - written += n - - // if there was enough room to write all then break - if len(p) == n { - break - } - - // more data: write to the next slice - p = p[n:] - - // block if too much data is still in the buffer - for bp.bufLen >= blockThreshold { - bp.wait.Wait() - } - - // allocate slice that has twice the size of the last unless maximum reached - nextCap := 2 * cap(bp.buf[len(bp.buf)-1]) - if nextCap > maxCap { - nextCap = maxCap - } - // add new byte slice to the buffers slice and continue writing - bp.buf = append(bp.buf, make([]byte, 0, nextCap)) - } - bp.wait.Broadcast() - return written, nil -} - -// CloseWithError causes further reads from a BytesPipe to return immediately. 
-func (bp *BytesPipe) CloseWithError(err error) error { - bp.mu.Lock() - if err != nil { - bp.closeErr = err - } else { - bp.closeErr = io.EOF - } - bp.wait.Broadcast() - bp.mu.Unlock() - return nil -} - -// Close causes further reads from a BytesPipe to return immediately. -func (bp *BytesPipe) Close() error { - return bp.CloseWithError(nil) -} - -func (bp *BytesPipe) len() int { - return bp.bufLen - bp.lastRead -} - -// Read reads bytes from BytesPipe. -// Data could be read only once. -func (bp *BytesPipe) Read(p []byte) (n int, err error) { - bp.mu.Lock() - defer bp.mu.Unlock() - if bp.len() == 0 { - if bp.closeErr != nil { - return 0, bp.closeErr - } - bp.wait.Wait() - if bp.len() == 0 && bp.closeErr != nil { - return 0, bp.closeErr - } - } - for { - read := copy(p, bp.buf[0][bp.lastRead:]) - n += read - bp.lastRead += read - if bp.len() == 0 { - // we have read everything. reset to the beginning. - bp.lastRead = 0 - bp.bufLen -= len(bp.buf[0]) - bp.buf[0] = bp.buf[0][:0] - break - } - // break if everything was read - if len(p) == read { - break - } - // more buffered data and more asked. read from next slice. - p = p[read:] - bp.lastRead = 0 - bp.bufLen -= len(bp.buf[0]) - bp.buf[0] = nil // throw away old slice - bp.buf = bp.buf[1:] // switch to next - } - bp.wait.Broadcast() - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/fmt.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/fmt.go deleted file mode 100644 index 0b04b0ba3e6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/fmt.go +++ /dev/null @@ -1,22 +0,0 @@ -package ioutils - -import ( - "fmt" - "io" -) - -// FprintfIfNotEmpty prints the string value if it's not empty -func FprintfIfNotEmpty(w io.Writer, format, value string) (int, error) { - if value != "" { - return fmt.Fprintf(w, format, value) - } - return 0, nil -} - -// FprintfIfTrue prints the boolean value if it's true -func FprintfIfTrue(w io.Writer, format string, ok bool) (int, error) { - if ok { - return fmt.Fprintf(w, format, ok) - } - return 0, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/multireader.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/multireader.go deleted file mode 100644 index 0d2d76b4797..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/multireader.go +++ /dev/null @@ -1,226 +0,0 @@ -package ioutils - -import ( - "bytes" - "fmt" - "io" - "os" -) - -type pos struct { - idx int - offset int64 -} - -type multiReadSeeker struct { - readers []io.ReadSeeker - pos *pos - posIdx map[io.ReadSeeker]int -} - -func (r *multiReadSeeker) Seek(offset int64, whence int) (int64, error) { - var tmpOffset int64 - switch whence { - case os.SEEK_SET: - for i, rdr := range r.readers { - // get size of the current reader - s, err := rdr.Seek(0, os.SEEK_END) - if err != nil { - return -1, err - } - - if offset > tmpOffset+s { - if i == len(r.readers)-1 { - rdrOffset := s + (offset - tmpOffset) - if _, err := rdr.Seek(rdrOffset, os.SEEK_SET); err != nil { - return -1, err - } - r.pos = &pos{i, rdrOffset} - return offset, nil - } - - tmpOffset += s - continue - } - - rdrOffset := offset - tmpOffset - idx := i - - rdr.Seek(rdrOffset, os.SEEK_SET) - // make sure 
all following readers are at 0 - for _, rdr := range r.readers[i+1:] { - rdr.Seek(0, os.SEEK_SET) - } - - if rdrOffset == s && i != len(r.readers)-1 { - idx++ - rdrOffset = 0 - } - r.pos = &pos{idx, rdrOffset} - return offset, nil - } - case os.SEEK_END: - for _, rdr := range r.readers { - s, err := rdr.Seek(0, os.SEEK_END) - if err != nil { - return -1, err - } - tmpOffset += s - } - r.Seek(tmpOffset+offset, os.SEEK_SET) - return tmpOffset + offset, nil - case os.SEEK_CUR: - if r.pos == nil { - return r.Seek(offset, os.SEEK_SET) - } - // Just return the current offset - if offset == 0 { - return r.getCurOffset() - } - - curOffset, err := r.getCurOffset() - if err != nil { - return -1, err - } - rdr, rdrOffset, err := r.getReaderForOffset(curOffset + offset) - if err != nil { - return -1, err - } - - r.pos = &pos{r.posIdx[rdr], rdrOffset} - return curOffset + offset, nil - default: - return -1, fmt.Errorf("Invalid whence: %d", whence) - } - - return -1, fmt.Errorf("Error seeking for whence: %d, offset: %d", whence, offset) -} - -func (r *multiReadSeeker) getReaderForOffset(offset int64) (io.ReadSeeker, int64, error) { - var rdr io.ReadSeeker - var rdrOffset int64 - - for i, rdr := range r.readers { - offsetTo, err := r.getOffsetToReader(rdr) - if err != nil { - return nil, -1, err - } - if offsetTo > offset { - rdr = r.readers[i-1] - rdrOffset = offsetTo - offset - break - } - - if rdr == r.readers[len(r.readers)-1] { - rdrOffset = offsetTo + offset - break - } - } - - return rdr, rdrOffset, nil -} - -func (r *multiReadSeeker) getCurOffset() (int64, error) { - var totalSize int64 - for _, rdr := range r.readers[:r.pos.idx+1] { - if r.posIdx[rdr] == r.pos.idx { - totalSize += r.pos.offset - break - } - - size, err := getReadSeekerSize(rdr) - if err != nil { - return -1, fmt.Errorf("error getting seeker size: %v", err) - } - totalSize += size - } - return totalSize, nil -} - -func (r *multiReadSeeker) getOffsetToReader(rdr io.ReadSeeker) (int64, error) { - var offset int64 - for _, r := range r.readers { - if r == rdr { - break - } - - size, err := getReadSeekerSize(rdr) - if err != nil { - return -1, err - } - offset += size - } - return offset, nil -} - -func (r *multiReadSeeker) Read(b []byte) (int, error) { - if r.pos == nil { - r.pos = &pos{0, 0} - } - - bCap := int64(cap(b)) - buf := bytes.NewBuffer(nil) - var rdr io.ReadSeeker - - for _, rdr = range r.readers[r.pos.idx:] { - readBytes, err := io.CopyN(buf, rdr, bCap) - if err != nil && err != io.EOF { - return -1, err - } - bCap -= readBytes - - if bCap == 0 { - break - } - } - - rdrPos, err := rdr.Seek(0, os.SEEK_CUR) - if err != nil { - return -1, err - } - r.pos = &pos{r.posIdx[rdr], rdrPos} - return buf.Read(b) -} - -func getReadSeekerSize(rdr io.ReadSeeker) (int64, error) { - // save the current position - pos, err := rdr.Seek(0, os.SEEK_CUR) - if err != nil { - return -1, err - } - - // get the size - size, err := rdr.Seek(0, os.SEEK_END) - if err != nil { - return -1, err - } - - // reset the position - if _, err := rdr.Seek(pos, os.SEEK_SET); err != nil { - return -1, err - } - return size, nil -} - -// MultiReadSeeker returns a ReadSeeker that's the logical concatenation of the provided -// input readseekers. After calling this method the initial position is set to the -// beginning of the first ReadSeeker. At the end of a ReadSeeker, Read always advances -// to the beginning of the next ReadSeeker and returns EOF at the end of the last ReadSeeker. -// Seek can be used over the sum of lengths of all readseekers. 
-// -// When a MultiReadSeeker is used, no Read and Seek operations should be made on -// its ReadSeeker components. Also, users should make no assumption on the state -// of individual readseekers while the MultiReadSeeker is used. -func MultiReadSeeker(readers ...io.ReadSeeker) io.ReadSeeker { - if len(readers) == 1 { - return readers[0] - } - idx := make(map[io.ReadSeeker]int) - for i, rdr := range readers { - idx[rdr] = i - } - return &multiReadSeeker{ - readers: readers, - posIdx: idx, - } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/readers.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/readers.go deleted file mode 100644 index a891955ace7..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/readers.go +++ /dev/null @@ -1,154 +0,0 @@ -package ioutils - -import ( - "crypto/sha256" - "encoding/hex" - "io" - - "github.com/fsouza/go-dockerclient/external/golang.org/x/net/context" -) - -type readCloserWrapper struct { - io.Reader - closer func() error -} - -func (r *readCloserWrapper) Close() error { - return r.closer() -} - -// NewReadCloserWrapper returns a new io.ReadCloser. -func NewReadCloserWrapper(r io.Reader, closer func() error) io.ReadCloser { - return &readCloserWrapper{ - Reader: r, - closer: closer, - } -} - -type readerErrWrapper struct { - reader io.Reader - closer func() -} - -func (r *readerErrWrapper) Read(p []byte) (int, error) { - n, err := r.reader.Read(p) - if err != nil { - r.closer() - } - return n, err -} - -// NewReaderErrWrapper returns a new io.Reader. -func NewReaderErrWrapper(r io.Reader, closer func()) io.Reader { - return &readerErrWrapper{ - reader: r, - closer: closer, - } -} - -// HashData returns the sha256 sum of src. -func HashData(src io.Reader) (string, error) { - h := sha256.New() - if _, err := io.Copy(h, src); err != nil { - return "", err - } - return "sha256:" + hex.EncodeToString(h.Sum(nil)), nil -} - -// OnEOFReader wraps a io.ReadCloser and a function -// the function will run at the end of file or close the file. -type OnEOFReader struct { - Rc io.ReadCloser - Fn func() -} - -func (r *OnEOFReader) Read(p []byte) (n int, err error) { - n, err = r.Rc.Read(p) - if err == io.EOF { - r.runFunc() - } - return -} - -// Close closes the file and run the function. -func (r *OnEOFReader) Close() error { - err := r.Rc.Close() - r.runFunc() - return err -} - -func (r *OnEOFReader) runFunc() { - if fn := r.Fn; fn != nil { - fn() - r.Fn = nil - } -} - -// cancelReadCloser wraps an io.ReadCloser with a context for cancelling read -// operations. -type cancelReadCloser struct { - cancel func() - pR *io.PipeReader // Stream to read from - pW *io.PipeWriter -} - -// NewCancelReadCloser creates a wrapper that closes the ReadCloser when the -// context is cancelled. The returned io.ReadCloser must be closed when it is -// no longer needed. -func NewCancelReadCloser(ctx context.Context, in io.ReadCloser) io.ReadCloser { - pR, pW := io.Pipe() - - // Create a context used to signal when the pipe is closed - doneCtx, cancel := context.WithCancel(context.Background()) - - p := &cancelReadCloser{ - cancel: cancel, - pR: pR, - pW: pW, - } - - go func() { - _, err := io.Copy(pW, in) - select { - case <-ctx.Done(): - // If the context was closed, p.closeWithError - // was already called. Calling it again would - // change the error that Read returns. 
- default: - p.closeWithError(err) - } - in.Close() - }() - go func() { - for { - select { - case <-ctx.Done(): - p.closeWithError(ctx.Err()) - case <-doneCtx.Done(): - return - } - } - }() - - return p -} - -// Read wraps the Read method of the pipe that provides data from the wrapped -// ReadCloser. -func (p *cancelReadCloser) Read(buf []byte) (n int, err error) { - return p.pR.Read(buf) -} - -// closeWithError closes the wrapper and its underlying reader. It will -// cause future calls to Read to return err. -func (p *cancelReadCloser) closeWithError(err error) { - p.pW.CloseWithError(err) - p.cancel() -} - -// Close closes the wrapper its underlying reader. It will cause -// future calls to Read to return io.EOF. -func (p *cancelReadCloser) Close() error { - p.closeWithError(io.EOF) - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/scheduler.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/scheduler.go deleted file mode 100644 index 3c88f29e355..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/scheduler.go +++ /dev/null @@ -1,6 +0,0 @@ -// +build !gccgo - -package ioutils - -func callSchedulerIfNecessary() { -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/scheduler_gccgo.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/scheduler_gccgo.go deleted file mode 100644 index c11d02b9476..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/scheduler_gccgo.go +++ /dev/null @@ -1,13 +0,0 @@ -// +build gccgo - -package ioutils - -import ( - "runtime" -) - -func callSchedulerIfNecessary() { - //allow or force Go scheduler to switch context, without explicitly - //forcing this will make it hang when using gccgo implementation - runtime.Gosched() -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/temp_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/temp_unix.go deleted file mode 100644 index 1539ad21b57..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/temp_unix.go +++ /dev/null @@ -1,10 +0,0 @@ -// +build !windows - -package ioutils - -import "io/ioutil" - -// TempDir on Unix systems is equivalent to ioutil.TempDir. -func TempDir(dir, prefix string) (string, error) { - return ioutil.TempDir(dir, prefix) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/temp_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/temp_windows.go deleted file mode 100644 index 72c0bc59748..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/temp_windows.go +++ /dev/null @@ -1,18 +0,0 @@ -// +build windows - -package ioutils - -import ( - "io/ioutil" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/longpath" -) - -// TempDir is the equivalent of ioutil.TempDir, except that the result is in Windows longpath format. 
-func TempDir(dir, prefix string) (string, error) { - tempDir, err := ioutil.TempDir(dir, prefix) - if err != nil { - return "", err - } - return longpath.AddPrefix(tempDir), nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/writeflusher.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/writeflusher.go deleted file mode 100644 index 2b35a266620..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/writeflusher.go +++ /dev/null @@ -1,92 +0,0 @@ -package ioutils - -import ( - "errors" - "io" - "net/http" - "sync" -) - -// WriteFlusher wraps the Write and Flush operation ensuring that every write -// is a flush. In addition, the Close method can be called to intercept -// Read/Write calls if the targets lifecycle has already ended. -type WriteFlusher struct { - mu sync.Mutex - w io.Writer - flusher http.Flusher - flushed bool - closed error - - // TODO(stevvooe): Use channel for closed instead, remove mutex. Using a - // channel will allow one to properly order the operations. -} - -var errWriteFlusherClosed = errors.New("writeflusher: closed") - -func (wf *WriteFlusher) Write(b []byte) (n int, err error) { - wf.mu.Lock() - defer wf.mu.Unlock() - if wf.closed != nil { - return 0, wf.closed - } - - n, err = wf.w.Write(b) - wf.flush() // every write is a flush. - return n, err -} - -// Flush the stream immediately. -func (wf *WriteFlusher) Flush() { - wf.mu.Lock() - defer wf.mu.Unlock() - - wf.flush() -} - -// flush the stream immediately without taking a lock. Used internally. -func (wf *WriteFlusher) flush() { - if wf.closed != nil { - return - } - - wf.flushed = true - wf.flusher.Flush() -} - -// Flushed returns the state of flushed. -// If it's flushed, return true, or else it return false. -func (wf *WriteFlusher) Flushed() bool { - // BUG(stevvooe): Remove this method. Its use is inherently racy. Seems to - // be used to detect whether or a response code has been issued or not. - // Another hook should be used instead. - wf.mu.Lock() - defer wf.mu.Unlock() - - return wf.flushed -} - -// Close closes the write flusher, disallowing any further writes to the -// target. After the flusher is closed, all calls to write or flush will -// result in an error. -func (wf *WriteFlusher) Close() error { - wf.mu.Lock() - defer wf.mu.Unlock() - - if wf.closed != nil { - return wf.closed - } - - wf.closed = errWriteFlusherClosed - return nil -} - -// NewWriteFlusher returns a new WriteFlusher. -func NewWriteFlusher(w io.Writer) *WriteFlusher { - var flusher http.Flusher - if f, ok := w.(http.Flusher); ok { - flusher = f - } else { - flusher = &NopFlusher{} - } - return &WriteFlusher{w: w, flusher: flusher} -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/writers.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/writers.go deleted file mode 100644 index ccc7f9c23e0..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils/writers.go +++ /dev/null @@ -1,66 +0,0 @@ -package ioutils - -import "io" - -// NopWriter represents a type which write operation is nop. 
-type NopWriter struct{} - -func (*NopWriter) Write(buf []byte) (int, error) { - return len(buf), nil -} - -type nopWriteCloser struct { - io.Writer -} - -func (w *nopWriteCloser) Close() error { return nil } - -// NopWriteCloser returns a nopWriteCloser. -func NopWriteCloser(w io.Writer) io.WriteCloser { - return &nopWriteCloser{w} -} - -// NopFlusher represents a type which flush operation is nop. -type NopFlusher struct{} - -// Flush is a nop operation. -func (f *NopFlusher) Flush() {} - -type writeCloserWrapper struct { - io.Writer - closer func() error -} - -func (r *writeCloserWrapper) Close() error { - return r.closer() -} - -// NewWriteCloserWrapper returns a new io.WriteCloser. -func NewWriteCloserWrapper(r io.Writer, closer func() error) io.WriteCloser { - return &writeCloserWrapper{ - Writer: r, - closer: closer, - } -} - -// WriteCounter wraps a concrete io.Writer and hold a count of the number -// of bytes written to the writer during a "session". -// This can be convenient when write return is masked -// (e.g., json.Encoder.Encode()) -type WriteCounter struct { - Count int64 - Writer io.Writer -} - -// NewWriteCounter returns a new WriteCounter. -func NewWriteCounter(w io.Writer) *WriteCounter { - return &WriteCounter{ - Writer: w, - } -} - -func (wc *WriteCounter) Write(p []byte) (count int, err error) { - count, err = wc.Writer.Write(p) - wc.Count += int64(count) - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/longpath/longpath.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/longpath/longpath.go deleted file mode 100644 index 9b15bfff4c9..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/longpath/longpath.go +++ /dev/null @@ -1,26 +0,0 @@ -// longpath introduces some constants and helper functions for handling long paths -// in Windows, which are expected to be prepended with `\\?\` and followed by either -// a drive letter, a UNC server\share, or a volume identifier. - -package longpath - -import ( - "strings" -) - -// Prefix is the longpath prefix for Windows file paths. -const Prefix = `\\?\` - -// AddPrefix will add the Windows long path prefix to the path provided if -// it does not already have it. -func AddPrefix(path string) string { - if !strings.HasPrefix(path, Prefix) { - if strings.HasPrefix(path, `\\`) { - // This is a UNC path, so we need to add 'UNC' to the path as well. - path = Prefix + `UNC` + path[1:] - } else { - path = Prefix + path - } - } - return path -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools/pools.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools/pools.go deleted file mode 100644 index 515fb4d0508..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools/pools.go +++ /dev/null @@ -1,119 +0,0 @@ -// Package pools provides a collection of pools which provide various -// data types with buffers. These can be used to lower the number of -// memory allocations and reuse buffers. -// -// New pools should be added to this package to allow them to be -// shared across packages. -// -// Utility functions which operate on pools should be added to this -// package to allow them to be reused. 
-package pools - -import ( - "bufio" - "io" - "sync" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils" -) - -var ( - // BufioReader32KPool is a pool which returns bufio.Reader with a 32K buffer. - BufioReader32KPool *BufioReaderPool - // BufioWriter32KPool is a pool which returns bufio.Writer with a 32K buffer. - BufioWriter32KPool *BufioWriterPool -) - -const buffer32K = 32 * 1024 - -// BufioReaderPool is a bufio reader that uses sync.Pool. -type BufioReaderPool struct { - pool sync.Pool -} - -func init() { - BufioReader32KPool = newBufioReaderPoolWithSize(buffer32K) - BufioWriter32KPool = newBufioWriterPoolWithSize(buffer32K) -} - -// newBufioReaderPoolWithSize is unexported because new pools should be -// added here to be shared where required. -func newBufioReaderPoolWithSize(size int) *BufioReaderPool { - pool := sync.Pool{ - New: func() interface{} { return bufio.NewReaderSize(nil, size) }, - } - return &BufioReaderPool{pool: pool} -} - -// Get returns a bufio.Reader which reads from r. The buffer size is that of the pool. -func (bufPool *BufioReaderPool) Get(r io.Reader) *bufio.Reader { - buf := bufPool.pool.Get().(*bufio.Reader) - buf.Reset(r) - return buf -} - -// Put puts the bufio.Reader back into the pool. -func (bufPool *BufioReaderPool) Put(b *bufio.Reader) { - b.Reset(nil) - bufPool.pool.Put(b) -} - -// Copy is a convenience wrapper which uses a buffer to avoid allocation in io.Copy. -func Copy(dst io.Writer, src io.Reader) (written int64, err error) { - buf := BufioReader32KPool.Get(src) - written, err = io.Copy(dst, buf) - BufioReader32KPool.Put(buf) - return -} - -// NewReadCloserWrapper returns a wrapper which puts the bufio.Reader back -// into the pool and closes the reader if it's an io.ReadCloser. -func (bufPool *BufioReaderPool) NewReadCloserWrapper(buf *bufio.Reader, r io.Reader) io.ReadCloser { - return ioutils.NewReadCloserWrapper(r, func() error { - if readCloser, ok := r.(io.ReadCloser); ok { - readCloser.Close() - } - bufPool.Put(buf) - return nil - }) -} - -// BufioWriterPool is a bufio writer that uses sync.Pool. -type BufioWriterPool struct { - pool sync.Pool -} - -// newBufioWriterPoolWithSize is unexported because new pools should be -// added here to be shared where required. -func newBufioWriterPoolWithSize(size int) *BufioWriterPool { - pool := sync.Pool{ - New: func() interface{} { return bufio.NewWriterSize(nil, size) }, - } - return &BufioWriterPool{pool: pool} -} - -// Get returns a bufio.Writer which writes to w. The buffer size is that of the pool. -func (bufPool *BufioWriterPool) Get(w io.Writer) *bufio.Writer { - buf := bufPool.pool.Get().(*bufio.Writer) - buf.Reset(w) - return buf -} - -// Put puts the bufio.Writer back into the pool. -func (bufPool *BufioWriterPool) Put(b *bufio.Writer) { - b.Reset(nil) - bufPool.pool.Put(b) -} - -// NewWriteCloserWrapper returns a wrapper which puts the bufio.Writer back -// into the pool and closes the writer if it's an io.Writecloser. 
-func (bufPool *BufioWriterPool) NewWriteCloserWrapper(buf *bufio.Writer, w io.Writer) io.WriteCloser { - return ioutils.NewWriteCloserWrapper(w, func() error { - buf.Flush() - if writeCloser, ok := w.(io.WriteCloser); ok { - writeCloser.Close() - } - bufPool.Put(buf) - return nil - }) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise/promise.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise/promise.go deleted file mode 100644 index dd52b9082f7..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise/promise.go +++ /dev/null @@ -1,11 +0,0 @@ -package promise - -// Go is a basic promise implementation: it wraps calls a function in a goroutine, -// and returns a channel which will later return the function's return value. -func Go(f func() error) chan error { - ch := make(chan error, 1) - go func() { - ch <- f() - }() - return ch -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy/stdcopy.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy/stdcopy.go deleted file mode 100644 index b2c60046ada..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy/stdcopy.go +++ /dev/null @@ -1,175 +0,0 @@ -package stdcopy - -import ( - "encoding/binary" - "errors" - "io" - - "github.com/fsouza/go-dockerclient/external/github.com/Sirupsen/logrus" -) - -const ( - stdWriterPrefixLen = 8 - stdWriterFdIndex = 0 - stdWriterSizeIndex = 4 - - startingBufLen = 32*1024 + stdWriterPrefixLen + 1 -) - -// StdType prefixes type and length to standard stream. -type StdType [stdWriterPrefixLen]byte - -var ( - // Stdin represents standard input stream type. - Stdin = StdType{0: 0} - // Stdout represents standard output stream type. - Stdout = StdType{0: 1} - // Stderr represents standard error steam type. - Stderr = StdType{0: 2} -) - -// StdWriter is wrapper of io.Writer with extra customized info. -type StdWriter struct { - io.Writer - prefix StdType - sizeBuf []byte -} - -func (w *StdWriter) Write(buf []byte) (n int, err error) { - var n1, n2 int - if w == nil || w.Writer == nil { - return 0, errors.New("Writer not instantiated") - } - binary.BigEndian.PutUint32(w.prefix[4:], uint32(len(buf))) - n1, err = w.Writer.Write(w.prefix[:]) - if err != nil { - n = n1 - stdWriterPrefixLen - } else { - n2, err = w.Writer.Write(buf) - n = n1 + n2 - stdWriterPrefixLen - } - if n < 0 { - n = 0 - } - return -} - -// NewStdWriter instantiates a new Writer. -// Everything written to it will be encapsulated using a custom format, -// and written to the underlying `w` stream. -// This allows multiple write streams (e.g. stdout and stderr) to be muxed into a single connection. -// `t` indicates the id of the stream to encapsulate. -// It can be stdcopy.Stdin, stdcopy.Stdout, stdcopy.Stderr. -func NewStdWriter(w io.Writer, t StdType) *StdWriter { - return &StdWriter{ - Writer: w, - prefix: t, - sizeBuf: make([]byte, 4), - } -} - -var errInvalidStdHeader = errors.New("Unrecognized input header") - -// StdCopy is a modified version of io.Copy. -// -// StdCopy will demultiplex `src`, assuming that it contains two streams, -// previously multiplexed together using a StdWriter instance. 
-// As it reads from `src`, StdCopy will write to `dstout` and `dsterr`. -// -// StdCopy will read until it hits EOF on `src`. It will then return a nil error. -// In other words: if `err` is non nil, it indicates a real underlying error. -// -// `written` will hold the total number of bytes written to `dstout` and `dsterr`. -func StdCopy(dstout, dsterr io.Writer, src io.Reader) (written int64, err error) { - var ( - buf = make([]byte, startingBufLen) - bufLen = len(buf) - nr, nw int - er, ew error - out io.Writer - frameSize int - ) - - for { - // Make sure we have at least a full header - for nr < stdWriterPrefixLen { - var nr2 int - nr2, er = src.Read(buf[nr:]) - nr += nr2 - if er == io.EOF { - if nr < stdWriterPrefixLen { - logrus.Debugf("Corrupted prefix: %v", buf[:nr]) - return written, nil - } - break - } - if er != nil { - logrus.Debugf("Error reading header: %s", er) - return 0, er - } - } - - // Check the first byte to know where to write - switch buf[stdWriterFdIndex] { - case 0: - fallthrough - case 1: - // Write on stdout - out = dstout - case 2: - // Write on stderr - out = dsterr - default: - logrus.Debugf("Error selecting output fd: (%d)", buf[stdWriterFdIndex]) - return 0, errInvalidStdHeader - } - - // Retrieve the size of the frame - frameSize = int(binary.BigEndian.Uint32(buf[stdWriterSizeIndex : stdWriterSizeIndex+4])) - logrus.Debugf("framesize: %d", frameSize) - - // Check if the buffer is big enough to read the frame. - // Extend it if necessary. - if frameSize+stdWriterPrefixLen > bufLen { - logrus.Debugf("Extending buffer cap by %d (was %d)", frameSize+stdWriterPrefixLen-bufLen+1, len(buf)) - buf = append(buf, make([]byte, frameSize+stdWriterPrefixLen-bufLen+1)...) - bufLen = len(buf) - } - - // While the amount of bytes read is less than the size of the frame + header, we keep reading - for nr < frameSize+stdWriterPrefixLen { - var nr2 int - nr2, er = src.Read(buf[nr:]) - nr += nr2 - if er == io.EOF { - if nr < frameSize+stdWriterPrefixLen { - logrus.Debugf("Corrupted frame: %v", buf[stdWriterPrefixLen:nr]) - return written, nil - } - break - } - if er != nil { - logrus.Debugf("Error reading frame: %s", er) - return 0, er - } - } - - // Write the retrieved frame (without header) - nw, ew = out.Write(buf[stdWriterPrefixLen : frameSize+stdWriterPrefixLen]) - if ew != nil { - logrus.Debugf("Error writing frame: %s", ew) - return 0, ew - } - // If the frame has not been fully written: error - if nw != frameSize { - logrus.Debugf("Error Short Write: (%d on %d)", nw, frameSize) - return 0, io.ErrShortWrite - } - written += int64(nw) - - // Move the rest of the buffer to the beginning - copy(buf, buf[frameSize+stdWriterPrefixLen:]) - // Move the index - nr -= frameSize + stdWriterPrefixLen - } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes.go deleted file mode 100644 index acf3f566f74..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes.go +++ /dev/null @@ -1,47 +0,0 @@ -package system - -import ( - "os" - "syscall" - "time" - "unsafe" -) - -var ( - maxTime time.Time -) - -func init() { - if unsafe.Sizeof(syscall.Timespec{}.Nsec) == 8 { - // This is a 64 bit timespec - // os.Chtimes limits time to the following - maxTime = time.Unix(0, 1<<63-1) - } else { - // This is a 32 bit timespec - maxTime = 
time.Unix(1<<31-1, 0) - } -} - -// Chtimes changes the access time and modified time of a file at the given path -func Chtimes(name string, atime time.Time, mtime time.Time) error { - unixMinTime := time.Unix(0, 0) - unixMaxTime := maxTime - - // If the modified time is prior to the Unix Epoch, or after the - // end of Unix Time, os.Chtimes has undefined behavior - // default to Unix Epoch in this case, just in case - - if atime.Before(unixMinTime) || atime.After(unixMaxTime) { - atime = unixMinTime - } - - if mtime.Before(unixMinTime) || mtime.After(unixMaxTime) { - mtime = unixMinTime - } - - if err := os.Chtimes(name, atime, mtime); err != nil { - return err - } - - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes_unix.go deleted file mode 100644 index 09d58bcbfdd..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes_unix.go +++ /dev/null @@ -1,14 +0,0 @@ -// +build !windows - -package system - -import ( - "time" -) - -//setCTime will set the create time on a file. On Unix, the create -//time is updated as a side effect of setting the modified time, so -//no action is required. -func setCTime(path string, ctime time.Time) error { - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes_windows.go deleted file mode 100644 index 29458684659..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/chtimes_windows.go +++ /dev/null @@ -1,27 +0,0 @@ -// +build windows - -package system - -import ( - "syscall" - "time" -) - -//setCTime will set the create time on a file. On Windows, this requires -//calling SetFileTime and explicitly including the create time. -func setCTime(path string, ctime time.Time) error { - ctimespec := syscall.NsecToTimespec(ctime.UnixNano()) - pathp, e := syscall.UTF16PtrFromString(path) - if e != nil { - return e - } - h, e := syscall.CreateFile(pathp, - syscall.FILE_WRITE_ATTRIBUTES, syscall.FILE_SHARE_WRITE, nil, - syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0) - if e != nil { - return e - } - defer syscall.Close(h) - c := syscall.NsecToFiletime(syscall.TimespecToNsec(ctimespec)) - return syscall.SetFileTime(h, &c, nil, nil) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/errors.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/errors.go deleted file mode 100644 index 288318985e3..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/errors.go +++ /dev/null @@ -1,10 +0,0 @@ -package system - -import ( - "errors" -) - -var ( - // ErrNotSupportedPlatform means the platform is not supported. 
- ErrNotSupportedPlatform = errors.New("platform and architecture is not supported") -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/events_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/events_windows.go deleted file mode 100644 index 04e2de78714..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/events_windows.go +++ /dev/null @@ -1,83 +0,0 @@ -package system - -// This file implements syscalls for Win32 events which are not implemented -// in golang. - -import ( - "syscall" - "unsafe" -) - -var ( - procCreateEvent = modkernel32.NewProc("CreateEventW") - procOpenEvent = modkernel32.NewProc("OpenEventW") - procSetEvent = modkernel32.NewProc("SetEvent") - procResetEvent = modkernel32.NewProc("ResetEvent") - procPulseEvent = modkernel32.NewProc("PulseEvent") -) - -// CreateEvent implements win32 CreateEventW func in golang. It will create an event object. -func CreateEvent(eventAttributes *syscall.SecurityAttributes, manualReset bool, initialState bool, name string) (handle syscall.Handle, err error) { - namep, _ := syscall.UTF16PtrFromString(name) - var _p1 uint32 - if manualReset { - _p1 = 1 - } - var _p2 uint32 - if initialState { - _p2 = 1 - } - r0, _, e1 := procCreateEvent.Call(uintptr(unsafe.Pointer(eventAttributes)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(namep))) - use(unsafe.Pointer(namep)) - handle = syscall.Handle(r0) - if handle == syscall.InvalidHandle { - err = e1 - } - return -} - -// OpenEvent implements win32 OpenEventW func in golang. It opens an event object. -func OpenEvent(desiredAccess uint32, inheritHandle bool, name string) (handle syscall.Handle, err error) { - namep, _ := syscall.UTF16PtrFromString(name) - var _p1 uint32 - if inheritHandle { - _p1 = 1 - } - r0, _, e1 := procOpenEvent.Call(uintptr(desiredAccess), uintptr(_p1), uintptr(unsafe.Pointer(namep))) - use(unsafe.Pointer(namep)) - handle = syscall.Handle(r0) - if handle == syscall.InvalidHandle { - err = e1 - } - return -} - -// SetEvent implements win32 SetEvent func in golang. -func SetEvent(handle syscall.Handle) (err error) { - return setResetPulse(handle, procSetEvent) -} - -// ResetEvent implements win32 ResetEvent func in golang. -func ResetEvent(handle syscall.Handle) (err error) { - return setResetPulse(handle, procResetEvent) -} - -// PulseEvent implements win32 PulseEvent func in golang. 
-func PulseEvent(handle syscall.Handle) (err error) { - return setResetPulse(handle, procPulseEvent) -} - -func setResetPulse(handle syscall.Handle, proc *syscall.LazyProc) (err error) { - r0, _, _ := proc.Call(uintptr(handle)) - if r0 != 0 { - err = syscall.Errno(r0) - } - return -} - -var temp unsafe.Pointer - -// use ensures a variable is kept alive without the GC freeing while still needed -func use(p unsafe.Pointer) { - temp = p -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/filesys.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/filesys.go deleted file mode 100644 index c14feb84965..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/filesys.go +++ /dev/null @@ -1,19 +0,0 @@ -// +build !windows - -package system - -import ( - "os" - "path/filepath" -) - -// MkdirAll creates a directory named path along with any necessary parents, -// with permission specified by attribute perm for all dir created. -func MkdirAll(path string, perm os.FileMode) error { - return os.MkdirAll(path, perm) -} - -// IsAbs is a platform-specific wrapper for filepath.IsAbs. -func IsAbs(path string) bool { - return filepath.IsAbs(path) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/filesys_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/filesys_windows.go deleted file mode 100644 index 16823d5517c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/filesys_windows.go +++ /dev/null @@ -1,82 +0,0 @@ -// +build windows - -package system - -import ( - "os" - "path/filepath" - "regexp" - "strings" - "syscall" -) - -// MkdirAll implementation that is volume path aware for Windows. -func MkdirAll(path string, perm os.FileMode) error { - if re := regexp.MustCompile(`^\\\\\?\\Volume{[a-z0-9-]+}$`); re.MatchString(path) { - return nil - } - - // The rest of this method is copied from os.MkdirAll and should be kept - // as-is to ensure compatibility. - - // Fast path: if we can tell whether path is a directory or file, stop with success or error. - dir, err := os.Stat(path) - if err == nil { - if dir.IsDir() { - return nil - } - return &os.PathError{ - Op: "mkdir", - Path: path, - Err: syscall.ENOTDIR, - } - } - - // Slow path: make sure parent exists and then call Mkdir for path. - i := len(path) - for i > 0 && os.IsPathSeparator(path[i-1]) { // Skip trailing path separator. - i-- - } - - j := i - for j > 0 && !os.IsPathSeparator(path[j-1]) { // Scan backward over element. - j-- - } - - if j > 1 { - // Create parent - err = MkdirAll(path[0:j-1], perm) - if err != nil { - return err - } - } - - // Parent now exists; invoke Mkdir and use its result. - err = os.Mkdir(path, perm) - if err != nil { - // Handle arguments like "foo/." by - // double-checking that directory doesn't exist. - dir, err1 := os.Lstat(path) - if err1 == nil && dir.IsDir() { - return nil - } - return err - } - return nil -} - -// IsAbs is a platform-specific wrapper for filepath.IsAbs. On Windows, -// golang filepath.IsAbs does not consider a path \windows\system32 as absolute -// as it doesn't start with a drive-letter/colon combination. 
However, in -// docker we need to verify things such as WORKDIR /windows/system32 in -// a Dockerfile (which gets translated to \windows\system32 when being processed -// by the daemon. This SHOULD be treated as absolute from a docker processing -// perspective. -func IsAbs(path string) bool { - if !filepath.IsAbs(path) { - if !strings.HasPrefix(path, string(os.PathSeparator)) { - return false - } - } - return true -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/lstat.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/lstat.go deleted file mode 100644 index bd23c4d50b2..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/lstat.go +++ /dev/null @@ -1,19 +0,0 @@ -// +build !windows - -package system - -import ( - "syscall" -) - -// Lstat takes a path to a file and returns -// a system.StatT type pertaining to that file. -// -// Throws an error if the file does not exist -func Lstat(path string) (*StatT, error) { - s := &syscall.Stat_t{} - if err := syscall.Lstat(path, s); err != nil { - return nil, err - } - return fromStatT(s) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/lstat_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/lstat_windows.go deleted file mode 100644 index 49e87eb40ba..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/lstat_windows.go +++ /dev/null @@ -1,25 +0,0 @@ -// +build windows - -package system - -import ( - "os" -) - -// Lstat calls os.Lstat to get a fileinfo interface back. -// This is then copied into our own locally defined structure. -// Note the Linux version uses fromStatT to do the copy back, -// but that not strictly necessary when already in an OS specific module. -func Lstat(path string) (*StatT, error) { - fi, err := os.Lstat(path) - if err != nil { - return nil, err - } - - return &StatT{ - name: fi.Name(), - size: fi.Size(), - mode: fi.Mode(), - modTime: fi.ModTime(), - isDir: fi.IsDir()}, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo.go deleted file mode 100644 index 3b6e947e675..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo.go +++ /dev/null @@ -1,17 +0,0 @@ -package system - -// MemInfo contains memory statistics of the host system. -type MemInfo struct { - // Total usable RAM (i.e. physical RAM minus a few reserved bits and the - // kernel binary code). - MemTotal int64 - - // Amount of free memory. - MemFree int64 - - // Total amount of swap space available. - SwapTotal int64 - - // Amount of swap space that is currently unused. 
- SwapFree int64 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_linux.go deleted file mode 100644 index c14dbf37644..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_linux.go +++ /dev/null @@ -1,66 +0,0 @@ -package system - -import ( - "bufio" - "io" - "os" - "strconv" - "strings" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/go-units" -) - -// ReadMemInfo retrieves memory statistics of the host system and returns a -// MemInfo type. -func ReadMemInfo() (*MemInfo, error) { - file, err := os.Open("/proc/meminfo") - if err != nil { - return nil, err - } - defer file.Close() - return parseMemInfo(file) -} - -// parseMemInfo parses the /proc/meminfo file into -// a MemInfo object given a io.Reader to the file. -// -// Throws error if there are problems reading from the file -func parseMemInfo(reader io.Reader) (*MemInfo, error) { - meminfo := &MemInfo{} - scanner := bufio.NewScanner(reader) - for scanner.Scan() { - // Expected format: ["MemTotal:", "1234", "kB"] - parts := strings.Fields(scanner.Text()) - - // Sanity checks: Skip malformed entries. - if len(parts) < 3 || parts[2] != "kB" { - continue - } - - // Convert to bytes. - size, err := strconv.Atoi(parts[1]) - if err != nil { - continue - } - bytes := int64(size) * units.KiB - - switch parts[0] { - case "MemTotal:": - meminfo.MemTotal = bytes - case "MemFree:": - meminfo.MemFree = bytes - case "SwapTotal:": - meminfo.SwapTotal = bytes - case "SwapFree:": - meminfo.SwapFree = bytes - } - - } - - // Handle errors that may have occurred during the reading of the file. - if err := scanner.Err(); err != nil { - return nil, err - } - - return meminfo, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_unsupported.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_unsupported.go deleted file mode 100644 index 82ddd30c1b0..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_unsupported.go +++ /dev/null @@ -1,8 +0,0 @@ -// +build !linux,!windows - -package system - -// ReadMemInfo is not supported on platforms other than linux and windows. 
-func ReadMemInfo() (*MemInfo, error) { - return nil, ErrNotSupportedPlatform -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_windows.go deleted file mode 100644 index d46642598cf..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/meminfo_windows.go +++ /dev/null @@ -1,44 +0,0 @@ -package system - -import ( - "syscall" - "unsafe" -) - -var ( - modkernel32 = syscall.NewLazyDLL("kernel32.dll") - - procGlobalMemoryStatusEx = modkernel32.NewProc("GlobalMemoryStatusEx") -) - -// https://msdn.microsoft.com/en-us/library/windows/desktop/aa366589(v=vs.85).aspx -// https://msdn.microsoft.com/en-us/library/windows/desktop/aa366770(v=vs.85).aspx -type memorystatusex struct { - dwLength uint32 - dwMemoryLoad uint32 - ullTotalPhys uint64 - ullAvailPhys uint64 - ullTotalPageFile uint64 - ullAvailPageFile uint64 - ullTotalVirtual uint64 - ullAvailVirtual uint64 - ullAvailExtendedVirtual uint64 -} - -// ReadMemInfo retrieves memory statistics of the host system and returns a -// MemInfo type. -func ReadMemInfo() (*MemInfo, error) { - msi := &memorystatusex{ - dwLength: 64, - } - r1, _, _ := procGlobalMemoryStatusEx.Call(uintptr(unsafe.Pointer(msi))) - if r1 == 0 { - return &MemInfo{}, nil - } - return &MemInfo{ - MemTotal: int64(msi.ullTotalPhys), - MemFree: int64(msi.ullAvailPhys), - SwapTotal: int64(msi.ullTotalPageFile), - SwapFree: int64(msi.ullAvailPageFile), - }, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/mknod.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/mknod.go deleted file mode 100644 index 73958182b4e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/mknod.go +++ /dev/null @@ -1,22 +0,0 @@ -// +build !windows - -package system - -import ( - "syscall" -) - -// Mknod creates a filesystem node (file, device special file or named pipe) named path -// with attributes specified by mode and dev. -func Mknod(path string, mode uint32, dev int) error { - return syscall.Mknod(path, mode, dev) -} - -// Mkdev is used to build the value of linux devices (in /dev/) which specifies major -// and minor number of the newly created device special file. -// Linux device nodes are a bit weird due to backwards compat with 16 bit device nodes. -// They are, from low to high: the lower 8 bits of the minor, then 12 bits of the major, -// then the top 12 bits of the minor. -func Mkdev(major int64, minor int64) uint32 { - return uint32(((minor & 0xfff00) << 12) | ((major & 0xfff) << 8) | (minor & 0xff)) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/mknod_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/mknod_windows.go deleted file mode 100644 index 2e863c0215b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/mknod_windows.go +++ /dev/null @@ -1,13 +0,0 @@ -// +build windows - -package system - -// Mknod is not implemented on Windows. 
-func Mknod(path string, mode uint32, dev int) error { - return ErrNotSupportedPlatform -} - -// Mkdev is not implemented on Windows. -func Mkdev(major int64, minor int64) uint32 { - panic("Mkdev not implemented on Windows.") -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/path_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/path_unix.go deleted file mode 100644 index 1b6cc9cbd9f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/path_unix.go +++ /dev/null @@ -1,8 +0,0 @@ -// +build !windows - -package system - -// DefaultPathEnv is unix style list of directories to search for -// executables. Each directory is separated from the next by a colon -// ':' character . -const DefaultPathEnv = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/path_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/path_windows.go deleted file mode 100644 index 09e7f89fed3..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/path_windows.go +++ /dev/null @@ -1,7 +0,0 @@ -// +build windows - -package system - -// DefaultPathEnv is deliberately empty on Windows as the default path will be set by -// the container. Docker has no context of what the default path should be. -const DefaultPathEnv = "" diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat.go deleted file mode 100644 index 087034c5ec5..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat.go +++ /dev/null @@ -1,53 +0,0 @@ -// +build !windows - -package system - -import ( - "syscall" -) - -// StatT type contains status of a file. It contains metadata -// like permission, owner, group, size, etc about a file. -type StatT struct { - mode uint32 - uid uint32 - gid uint32 - rdev uint64 - size int64 - mtim syscall.Timespec -} - -// Mode returns file's permission mode. -func (s StatT) Mode() uint32 { - return s.mode -} - -// UID returns file's user id of owner. -func (s StatT) UID() uint32 { - return s.uid -} - -// GID returns file's group id of owner. -func (s StatT) GID() uint32 { - return s.gid -} - -// Rdev returns file's device ID (if it's special file). -func (s StatT) Rdev() uint64 { - return s.rdev -} - -// Size returns file's size. -func (s StatT) Size() int64 { - return s.size -} - -// Mtim returns file's last modification time. -func (s StatT) Mtim() syscall.Timespec { - return s.mtim -} - -// GetLastModification returns file's last modification time. 
-func (s StatT) GetLastModification() syscall.Timespec { - return s.Mtim() -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_freebsd.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_freebsd.go deleted file mode 100644 index d0fb6f15190..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_freebsd.go +++ /dev/null @@ -1,27 +0,0 @@ -package system - -import ( - "syscall" -) - -// fromStatT converts a syscall.Stat_t type to a system.Stat_t type -func fromStatT(s *syscall.Stat_t) (*StatT, error) { - return &StatT{size: s.Size, - mode: uint32(s.Mode), - uid: s.Uid, - gid: s.Gid, - rdev: uint64(s.Rdev), - mtim: s.Mtimespec}, nil -} - -// Stat takes a path to a file and returns -// a system.Stat_t type pertaining to that file. -// -// Throws an error if the file does not exist -func Stat(path string) (*StatT, error) { - s := &syscall.Stat_t{} - if err := syscall.Stat(path, s); err != nil { - return nil, err - } - return fromStatT(s) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_linux.go deleted file mode 100644 index 8b1eded1387..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_linux.go +++ /dev/null @@ -1,33 +0,0 @@ -package system - -import ( - "syscall" -) - -// fromStatT converts a syscall.Stat_t type to a system.Stat_t type -func fromStatT(s *syscall.Stat_t) (*StatT, error) { - return &StatT{size: s.Size, - mode: s.Mode, - uid: s.Uid, - gid: s.Gid, - rdev: s.Rdev, - mtim: s.Mtim}, nil -} - -// FromStatT exists only on linux, and loads a system.StatT from a -// syscal.Stat_t. -func FromStatT(s *syscall.Stat_t) (*StatT, error) { - return fromStatT(s) -} - -// Stat takes a path to a file and returns -// a system.StatT type pertaining to that file. 
-// -// Throws an error if the file does not exist -func Stat(path string) (*StatT, error) { - s := &syscall.Stat_t{} - if err := syscall.Stat(path, s); err != nil { - return nil, err - } - return fromStatT(s) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_openbsd.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_openbsd.go deleted file mode 100644 index 3c3b71fb219..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_openbsd.go +++ /dev/null @@ -1,15 +0,0 @@ -package system - -import ( - "syscall" -) - -// fromStatT creates a system.StatT type from a syscall.Stat_t type -func fromStatT(s *syscall.Stat_t) (*StatT, error) { - return &StatT{size: s.Size, - mode: uint32(s.Mode), - uid: s.Uid, - gid: s.Gid, - rdev: uint64(s.Rdev), - mtim: s.Mtim}, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_solaris.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_solaris.go deleted file mode 100644 index b01d08acfee..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_solaris.go +++ /dev/null @@ -1,17 +0,0 @@ -// +build solaris - -package system - -import ( - "syscall" -) - -// fromStatT creates a system.StatT type from a syscall.Stat_t type -func fromStatT(s *syscall.Stat_t) (*StatT, error) { - return &StatT{size: s.Size, - mode: uint32(s.Mode), - uid: s.Uid, - gid: s.Gid, - rdev: uint64(s.Rdev), - mtim: s.Mtim}, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_unsupported.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_unsupported.go deleted file mode 100644 index f53e9de4d1a..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_unsupported.go +++ /dev/null @@ -1,17 +0,0 @@ -// +build !linux,!windows,!freebsd,!solaris,!openbsd - -package system - -import ( - "syscall" -) - -// fromStatT creates a system.StatT type from a syscall.Stat_t type -func fromStatT(s *syscall.Stat_t) (*StatT, error) { - return &StatT{size: s.Size, - mode: uint32(s.Mode), - uid: s.Uid, - gid: s.Gid, - rdev: uint64(s.Rdev), - mtim: s.Mtimespec}, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_windows.go deleted file mode 100644 index 39490c625c0..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/stat_windows.go +++ /dev/null @@ -1,43 +0,0 @@ -// +build windows - -package system - -import ( - "os" - "time" -) - -// StatT type contains status of a file. It contains metadata -// like name, permission, size, etc about a file. -type StatT struct { - name string - size int64 - mode os.FileMode - modTime time.Time - isDir bool -} - -// Name returns file's name. -func (s StatT) Name() string { - return s.name -} - -// Size returns file's size. -func (s StatT) Size() int64 { - return s.size -} - -// Mode returns file's permission mode. 
-func (s StatT) Mode() os.FileMode { - return s.mode -} - -// ModTime returns file's last modification time. -func (s StatT) ModTime() time.Time { - return s.modTime -} - -// IsDir returns whether file is actually a directory. -func (s StatT) IsDir() bool { - return s.isDir -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/syscall_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/syscall_unix.go deleted file mode 100644 index f1497c587ef..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/syscall_unix.go +++ /dev/null @@ -1,11 +0,0 @@ -// +build linux freebsd - -package system - -import "syscall" - -// Unmount is a platform-specific helper function to call -// the unmount syscall. -func Unmount(dest string) error { - return syscall.Unmount(dest, 0) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/syscall_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/syscall_windows.go deleted file mode 100644 index 273aa234bbd..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/syscall_windows.go +++ /dev/null @@ -1,36 +0,0 @@ -package system - -import ( - "fmt" - "syscall" -) - -// OSVersion is a wrapper for Windows version information -// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724439(v=vs.85).aspx -type OSVersion struct { - Version uint32 - MajorVersion uint8 - MinorVersion uint8 - Build uint16 -} - -// GetOSVersion gets the operating system version on Windows. Note that -// docker.exe must be manifested to get the correct version information. -func GetOSVersion() (OSVersion, error) { - var err error - osv := OSVersion{} - osv.Version, err = syscall.GetVersion() - if err != nil { - return osv, fmt.Errorf("Failed to call GetVersion()") - } - osv.MajorVersion = uint8(osv.Version & 0xFF) - osv.MinorVersion = uint8(osv.Version >> 8 & 0xFF) - osv.Build = uint16(osv.Version >> 16) - return osv, nil -} - -// Unmount is a platform-specific helper function to call -// the unmount syscall. Not supported on Windows -func Unmount(dest string) error { - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/umask.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/umask.go deleted file mode 100644 index c670fcd7586..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/umask.go +++ /dev/null @@ -1,13 +0,0 @@ -// +build !windows - -package system - -import ( - "syscall" -) - -// Umask sets current process's file mode creation mask to newmask -// and return oldmask. 
-func Umask(newmask int) (oldmask int, err error) { - return syscall.Umask(newmask), nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/umask_windows.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/umask_windows.go deleted file mode 100644 index 13f1de1769c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/umask_windows.go +++ /dev/null @@ -1,9 +0,0 @@ -// +build windows - -package system - -// Umask is not supported on the windows platform. -func Umask(newmask int) (oldmask int, err error) { - // should not be called on cli code path - return 0, ErrNotSupportedPlatform -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_darwin.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_darwin.go deleted file mode 100644 index 0a16197544d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_darwin.go +++ /dev/null @@ -1,8 +0,0 @@ -package system - -import "syscall" - -// LUtimesNano is not supported by darwin platform. -func LUtimesNano(path string, ts []syscall.Timespec) error { - return ErrNotSupportedPlatform -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_freebsd.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_freebsd.go deleted file mode 100644 index e2eac3b553e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_freebsd.go +++ /dev/null @@ -1,22 +0,0 @@ -package system - -import ( - "syscall" - "unsafe" -) - -// LUtimesNano is used to change access and modification time of the specified path. -// It's used for symbol link file because syscall.UtimesNano doesn't support a NOFOLLOW flag atm. -func LUtimesNano(path string, ts []syscall.Timespec) error { - var _path *byte - _path, err := syscall.BytePtrFromString(path) - if err != nil { - return err - } - - if _, _, err := syscall.Syscall(syscall.SYS_LUTIMES, uintptr(unsafe.Pointer(_path)), uintptr(unsafe.Pointer(&ts[0])), 0); err != 0 && err != syscall.ENOSYS { - return err - } - - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_linux.go deleted file mode 100644 index fc8a1aba95c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_linux.go +++ /dev/null @@ -1,26 +0,0 @@ -package system - -import ( - "syscall" - "unsafe" -) - -// LUtimesNano is used to change access and modification time of the specified path. -// It's used for symbol link file because syscall.UtimesNano doesn't support a NOFOLLOW flag atm. 
-func LUtimesNano(path string, ts []syscall.Timespec) error { - // These are not currently available in syscall - atFdCwd := -100 - atSymLinkNoFollow := 0x100 - - var _path *byte - _path, err := syscall.BytePtrFromString(path) - if err != nil { - return err - } - - if _, _, err := syscall.Syscall6(syscall.SYS_UTIMENSAT, uintptr(atFdCwd), uintptr(unsafe.Pointer(_path)), uintptr(unsafe.Pointer(&ts[0])), uintptr(atSymLinkNoFollow), 0, 0); err != 0 && err != syscall.ENOSYS { - return err - } - - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_unsupported.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_unsupported.go deleted file mode 100644 index 50c3a04364d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/utimes_unsupported.go +++ /dev/null @@ -1,10 +0,0 @@ -// +build !linux,!freebsd,!darwin - -package system - -import "syscall" - -// LUtimesNano is not supported on platforms other than linux, freebsd and darwin. -func LUtimesNano(path string, ts []syscall.Timespec) error { - return ErrNotSupportedPlatform -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/xattrs_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/xattrs_linux.go deleted file mode 100644 index d2e2c057998..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/xattrs_linux.go +++ /dev/null @@ -1,63 +0,0 @@ -package system - -import ( - "syscall" - "unsafe" -) - -// Lgetxattr retrieves the value of the extended attribute identified by attr -// and associated with the given path in the file system. -// It will returns a nil slice and nil error if the xattr is not set. -func Lgetxattr(path string, attr string) ([]byte, error) { - pathBytes, err := syscall.BytePtrFromString(path) - if err != nil { - return nil, err - } - attrBytes, err := syscall.BytePtrFromString(attr) - if err != nil { - return nil, err - } - - dest := make([]byte, 128) - destBytes := unsafe.Pointer(&dest[0]) - sz, _, errno := syscall.Syscall6(syscall.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(destBytes), uintptr(len(dest)), 0, 0) - if errno == syscall.ENODATA { - return nil, nil - } - if errno == syscall.ERANGE { - dest = make([]byte, sz) - destBytes := unsafe.Pointer(&dest[0]) - sz, _, errno = syscall.Syscall6(syscall.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(destBytes), uintptr(len(dest)), 0, 0) - } - if errno != 0 { - return nil, errno - } - - return dest[:sz], nil -} - -var _zero uintptr - -// Lsetxattr sets the value of the extended attribute identified by attr -// and associated with the given path in the file system. 
-func Lsetxattr(path string, attr string, data []byte, flags int) error { - pathBytes, err := syscall.BytePtrFromString(path) - if err != nil { - return err - } - attrBytes, err := syscall.BytePtrFromString(attr) - if err != nil { - return err - } - var dataBytes unsafe.Pointer - if len(data) > 0 { - dataBytes = unsafe.Pointer(&data[0]) - } else { - dataBytes = unsafe.Pointer(&_zero) - } - _, _, errno := syscall.Syscall6(syscall.SYS_LSETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(dataBytes), uintptr(len(data)), uintptr(flags), 0) - if errno != 0 { - return errno - } - return nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/xattrs_unsupported.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/xattrs_unsupported.go deleted file mode 100644 index 0114f2227cf..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system/xattrs_unsupported.go +++ /dev/null @@ -1,13 +0,0 @@ -// +build !linux - -package system - -// Lgetxattr is not supported on platforms other than linux. -func Lgetxattr(path string, attr string) ([]byte, error) { - return nil, ErrNotSupportedPlatform -} - -// Lsetxattr is not supported on platforms other than linux. -func Lsetxattr(path string, attr string, data []byte, flags int) error { - return ErrNotSupportedPlatform -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/CONTRIBUTING.md b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/CONTRIBUTING.md deleted file mode 100644 index 9ea86d784ec..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/CONTRIBUTING.md +++ /dev/null @@ -1,67 +0,0 @@ -# Contributing to go-units - -Want to hack on go-units? Awesome! Here are instructions to get you started. - -go-units is a part of the [Docker](https://www.docker.com) project, and follows -the same rules and principles. If you're already familiar with the way -Docker does things, you'll feel right at home. - -Otherwise, go read Docker's -[contributions guidelines](https://github.com/docker/docker/blob/master/CONTRIBUTING.md), -[issue triaging](https://github.com/docker/docker/blob/master/project/ISSUE-TRIAGE.md), -[review process](https://github.com/docker/docker/blob/master/project/REVIEWING.md) and -[branches and tags](https://github.com/docker/docker/blob/master/project/BRANCHES-AND-TAGS.md). - -### Sign your work - -The sign-off is a simple line at the end of the explanation for the patch. Your -signature certifies that you wrote the patch or otherwise have the right to pass -it on as an open-source patch. The rules are pretty simple: if you can certify -the below (from [developercertificate.org](http://developercertificate.org/)): - -``` -Developer Certificate of Origin -Version 1.1 - -Copyright (C) 2004, 2006 The Linux Foundation and its contributors. -660 York Street, Suite 102, -San Francisco, CA 94110 USA - -Everyone is permitted to copy and distribute verbatim copies of this -license document, but changing it is not allowed. 
- -Developer's Certificate of Origin 1.1 - -By making a contribution to this project, I certify that: - -(a) The contribution was created in whole or in part by me and I - have the right to submit it under the open source license - indicated in the file; or - -(b) The contribution is based upon previous work that, to the best - of my knowledge, is covered under an appropriate open source - license and I have the right under that license to submit that - work with modifications, whether created in whole or in part - by me, under the same open source license (unless I am - permitted to submit under a different license), as indicated - in the file; or - -(c) The contribution was provided directly to me by some other - person who certified (a), (b) or (c) and I have not modified - it. - -(d) I understand and agree that this project and the contribution - are public and that a record of the contribution (including all - personal information I submit with it, including my sign-off) is - maintained indefinitely and may be redistributed consistent with - this project or the open source license(s) involved. -``` - -Then you just add a line to every git commit message: - - Signed-off-by: Joe Smith - -Use your real name (sorry, no pseudonyms or anonymous contributions.) - -If you set your `user.name` and `user.email` git configs, you can sign your -commit automatically with `git commit -s`. diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/LICENSE.code b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/LICENSE.code deleted file mode 100644 index b55b37bc316..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/LICENSE.code +++ /dev/null @@ -1,191 +0,0 @@ - - Apache License - Version 2.0, January 2004 - https://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). 
- - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - Copyright 2015 Docker, Inc. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - https://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/LICENSE.docs b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/LICENSE.docs deleted file mode 100644 index e26cd4fc8ed..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/LICENSE.docs +++ /dev/null @@ -1,425 +0,0 @@ -Attribution-ShareAlike 4.0 International - -======================================================================= - -Creative Commons Corporation ("Creative Commons") is not a law firm and -does not provide legal services or legal advice. Distribution of -Creative Commons public licenses does not create a lawyer-client or -other relationship. Creative Commons makes its licenses and related -information available on an "as-is" basis. Creative Commons gives no -warranties regarding its licenses, any material licensed under their -terms and conditions, or any related information. Creative Commons -disclaims all liability for damages resulting from their use to the -fullest extent possible. - -Using Creative Commons Public Licenses - -Creative Commons public licenses provide a standard set of terms and -conditions that creators and other rights holders may use to share -original works of authorship and other material subject to copyright -and certain other rights specified in the public license below. The -following considerations are for informational purposes only, are not -exhaustive, and do not form part of our licenses. 
- - Considerations for licensors: Our public licenses are - intended for use by those authorized to give the public - permission to use material in ways otherwise restricted by - copyright and certain other rights. Our licenses are - irrevocable. Licensors should read and understand the terms - and conditions of the license they choose before applying it. - Licensors should also secure all rights necessary before - applying our licenses so that the public can reuse the - material as expected. Licensors should clearly mark any - material not subject to the license. This includes other CC- - licensed material, or material used under an exception or - limitation to copyright. More considerations for licensors: - wiki.creativecommons.org/Considerations_for_licensors - - Considerations for the public: By using one of our public - licenses, a licensor grants the public permission to use the - licensed material under specified terms and conditions. If - the licensor's permission is not necessary for any reason--for - example, because of any applicable exception or limitation to - copyright--then that use is not regulated by the license. Our - licenses grant only permissions under copyright and certain - other rights that a licensor has authority to grant. Use of - the licensed material may still be restricted for other - reasons, including because others have copyright or other - rights in the material. A licensor may make special requests, - such as asking that all changes be marked or described. - Although not required by our licenses, you are encouraged to - respect those requests where reasonable. More_considerations - for the public: - wiki.creativecommons.org/Considerations_for_licensees - -======================================================================= - -Creative Commons Attribution-ShareAlike 4.0 International Public -License - -By exercising the Licensed Rights (defined below), You accept and agree -to be bound by the terms and conditions of this Creative Commons -Attribution-ShareAlike 4.0 International Public License ("Public -License"). To the extent this Public License may be interpreted as a -contract, You are granted the Licensed Rights in consideration of Your -acceptance of these terms and conditions, and the Licensor grants You -such rights in consideration of benefits the Licensor receives from -making the Licensed Material available under these terms and -conditions. - - -Section 1 -- Definitions. - - a. Adapted Material means material subject to Copyright and Similar - Rights that is derived from or based upon the Licensed Material - and in which the Licensed Material is translated, altered, - arranged, transformed, or otherwise modified in a manner requiring - permission under the Copyright and Similar Rights held by the - Licensor. For purposes of this Public License, where the Licensed - Material is a musical work, performance, or sound recording, - Adapted Material is always produced where the Licensed Material is - synched in timed relation with a moving image. - - b. Adapter's License means the license You apply to Your Copyright - and Similar Rights in Your contributions to Adapted Material in - accordance with the terms and conditions of this Public License. - - c. BY-SA Compatible License means a license listed at - creativecommons.org/compatiblelicenses, approved by Creative - Commons as essentially the equivalent of this Public License. - - d. 
Copyright and Similar Rights means copyright and/or similar rights - closely related to copyright including, without limitation, - performance, broadcast, sound recording, and Sui Generis Database - Rights, without regard to how the rights are labeled or - categorized. For purposes of this Public License, the rights - specified in Section 2(b)(1)-(2) are not Copyright and Similar - Rights. - - e. Effective Technological Measures means those measures that, in the - absence of proper authority, may not be circumvented under laws - fulfilling obligations under Article 11 of the WIPO Copyright - Treaty adopted on December 20, 1996, and/or similar international - agreements. - - f. Exceptions and Limitations means fair use, fair dealing, and/or - any other exception or limitation to Copyright and Similar Rights - that applies to Your use of the Licensed Material. - - g. License Elements means the license attributes listed in the name - of a Creative Commons Public License. The License Elements of this - Public License are Attribution and ShareAlike. - - h. Licensed Material means the artistic or literary work, database, - or other material to which the Licensor applied this Public - License. - - i. Licensed Rights means the rights granted to You subject to the - terms and conditions of this Public License, which are limited to - all Copyright and Similar Rights that apply to Your use of the - Licensed Material and that the Licensor has authority to license. - - j. Licensor means the individual(s) or entity(ies) granting rights - under this Public License. - - k. Share means to provide material to the public by any means or - process that requires permission under the Licensed Rights, such - as reproduction, public display, public performance, distribution, - dissemination, communication, or importation, and to make material - available to the public including in ways that members of the - public may access the material from a place and at a time - individually chosen by them. - - l. Sui Generis Database Rights means rights other than copyright - resulting from Directive 96/9/EC of the European Parliament and of - the Council of 11 March 1996 on the legal protection of databases, - as amended and/or succeeded, as well as other essentially - equivalent rights anywhere in the world. - - m. You means the individual or entity exercising the Licensed Rights - under this Public License. Your has a corresponding meaning. - - -Section 2 -- Scope. - - a. License grant. - - 1. Subject to the terms and conditions of this Public License, - the Licensor hereby grants You a worldwide, royalty-free, - non-sublicensable, non-exclusive, irrevocable license to - exercise the Licensed Rights in the Licensed Material to: - - a. reproduce and Share the Licensed Material, in whole or - in part; and - - b. produce, reproduce, and Share Adapted Material. - - 2. Exceptions and Limitations. For the avoidance of doubt, where - Exceptions and Limitations apply to Your use, this Public - License does not apply, and You do not need to comply with - its terms and conditions. - - 3. Term. The term of this Public License is specified in Section - 6(a). - - 4. Media and formats; technical modifications allowed. The - Licensor authorizes You to exercise the Licensed Rights in - all media and formats whether now known or hereafter created, - and to make technical modifications necessary to do so. 
The - Licensor waives and/or agrees not to assert any right or - authority to forbid You from making technical modifications - necessary to exercise the Licensed Rights, including - technical modifications necessary to circumvent Effective - Technological Measures. For purposes of this Public License, - simply making modifications authorized by this Section 2(a) - (4) never produces Adapted Material. - - 5. Downstream recipients. - - a. Offer from the Licensor -- Licensed Material. Every - recipient of the Licensed Material automatically - receives an offer from the Licensor to exercise the - Licensed Rights under the terms and conditions of this - Public License. - - b. Additional offer from the Licensor -- Adapted Material. - Every recipient of Adapted Material from You - automatically receives an offer from the Licensor to - exercise the Licensed Rights in the Adapted Material - under the conditions of the Adapter's License You apply. - - c. No downstream restrictions. You may not offer or impose - any additional or different terms or conditions on, or - apply any Effective Technological Measures to, the - Licensed Material if doing so restricts exercise of the - Licensed Rights by any recipient of the Licensed - Material. - - 6. No endorsement. Nothing in this Public License constitutes or - may be construed as permission to assert or imply that You - are, or that Your use of the Licensed Material is, connected - with, or sponsored, endorsed, or granted official status by, - the Licensor or others designated to receive attribution as - provided in Section 3(a)(1)(A)(i). - - b. Other rights. - - 1. Moral rights, such as the right of integrity, are not - licensed under this Public License, nor are publicity, - privacy, and/or other similar personality rights; however, to - the extent possible, the Licensor waives and/or agrees not to - assert any such rights held by the Licensor to the limited - extent necessary to allow You to exercise the Licensed - Rights, but not otherwise. - - 2. Patent and trademark rights are not licensed under this - Public License. - - 3. To the extent possible, the Licensor waives any right to - collect royalties from You for the exercise of the Licensed - Rights, whether directly or through a collecting society - under any voluntary or waivable statutory or compulsory - licensing scheme. In all other cases the Licensor expressly - reserves any right to collect such royalties. - - -Section 3 -- License Conditions. - -Your exercise of the Licensed Rights is expressly made subject to the -following conditions. - - a. Attribution. - - 1. If You Share the Licensed Material (including in modified - form), You must: - - a. retain the following if it is supplied by the Licensor - with the Licensed Material: - - i. identification of the creator(s) of the Licensed - Material and any others designated to receive - attribution, in any reasonable manner requested by - the Licensor (including by pseudonym if - designated); - - ii. a copyright notice; - - iii. a notice that refers to this Public License; - - iv. a notice that refers to the disclaimer of - warranties; - - v. a URI or hyperlink to the Licensed Material to the - extent reasonably practicable; - - b. indicate if You modified the Licensed Material and - retain an indication of any previous modifications; and - - c. indicate the Licensed Material is licensed under this - Public License, and include the text of, or the URI or - hyperlink to, this Public License. - - 2. 
You may satisfy the conditions in Section 3(a)(1) in any - reasonable manner based on the medium, means, and context in - which You Share the Licensed Material. For example, it may be - reasonable to satisfy the conditions by providing a URI or - hyperlink to a resource that includes the required - information. - - 3. If requested by the Licensor, You must remove any of the - information required by Section 3(a)(1)(A) to the extent - reasonably practicable. - - b. ShareAlike. - - In addition to the conditions in Section 3(a), if You Share - Adapted Material You produce, the following conditions also apply. - - 1. The Adapter's License You apply must be a Creative Commons - license with the same License Elements, this version or - later, or a BY-SA Compatible License. - - 2. You must include the text of, or the URI or hyperlink to, the - Adapter's License You apply. You may satisfy this condition - in any reasonable manner based on the medium, means, and - context in which You Share Adapted Material. - - 3. You may not offer or impose any additional or different terms - or conditions on, or apply any Effective Technological - Measures to, Adapted Material that restrict exercise of the - rights granted under the Adapter's License You apply. - - -Section 4 -- Sui Generis Database Rights. - -Where the Licensed Rights include Sui Generis Database Rights that -apply to Your use of the Licensed Material: - - a. for the avoidance of doubt, Section 2(a)(1) grants You the right - to extract, reuse, reproduce, and Share all or a substantial - portion of the contents of the database; - - b. if You include all or a substantial portion of the database - contents in a database in which You have Sui Generis Database - Rights, then the database in which You have Sui Generis Database - Rights (but not its individual contents) is Adapted Material, - - including for purposes of Section 3(b); and - c. You must comply with the conditions in Section 3(a) if You Share - all or a substantial portion of the contents of the database. - -For the avoidance of doubt, this Section 4 supplements and does not -replace Your obligations under this Public License where the Licensed -Rights include other Copyright and Similar Rights. - - -Section 5 -- Disclaimer of Warranties and Limitation of Liability. - - a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE - EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS - AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF - ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, - IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, - WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR - PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, - ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT - KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT - ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. - - b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE - TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, - NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, - INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, - COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR - USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN - ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR - DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR - IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. - - c. 
The disclaimer of warranties and limitation of liability provided - above shall be interpreted in a manner that, to the extent - possible, most closely approximates an absolute disclaimer and - waiver of all liability. - - -Section 6 -- Term and Termination. - - a. This Public License applies for the term of the Copyright and - Similar Rights licensed here. However, if You fail to comply with - this Public License, then Your rights under this Public License - terminate automatically. - - b. Where Your right to use the Licensed Material has terminated under - Section 6(a), it reinstates: - - 1. automatically as of the date the violation is cured, provided - it is cured within 30 days of Your discovery of the - violation; or - - 2. upon express reinstatement by the Licensor. - - For the avoidance of doubt, this Section 6(b) does not affect any - right the Licensor may have to seek remedies for Your violations - of this Public License. - - c. For the avoidance of doubt, the Licensor may also offer the - Licensed Material under separate terms or conditions or stop - distributing the Licensed Material at any time; however, doing so - will not terminate this Public License. - - d. Sections 1, 5, 6, 7, and 8 survive termination of this Public - License. - - -Section 7 -- Other Terms and Conditions. - - a. The Licensor shall not be bound by any additional or different - terms or conditions communicated by You unless expressly agreed. - - b. Any arrangements, understandings, or agreements regarding the - Licensed Material not stated herein are separate from and - independent of the terms and conditions of this Public License. - - -Section 8 -- Interpretation. - - a. For the avoidance of doubt, this Public License does not, and - shall not be interpreted to, reduce, limit, restrict, or impose - conditions on any use of the Licensed Material that could lawfully - be made without permission under this Public License. - - b. To the extent possible, if any provision of this Public License is - deemed unenforceable, it shall be automatically reformed to the - minimum extent necessary to make it enforceable. If the provision - cannot be reformed, it shall be severed from this Public License - without affecting the enforceability of the remaining terms and - conditions. - - c. No term or condition of this Public License will be waived and no - failure to comply consented to unless expressly agreed to by the - Licensor. - - d. Nothing in this Public License constitutes or may be interpreted - as a limitation upon, or waiver of, any privileges and immunities - that apply to the Licensor or You, including from the legal - processes of any jurisdiction or authority. - - -======================================================================= - -Creative Commons is not a party to its public licenses. -Notwithstanding, Creative Commons may elect to apply one of its public -licenses to material it publishes and in those instances will be -considered the "Licensor." 
Except for the limited purpose of indicating -that material is shared under a Creative Commons public license or as -otherwise permitted by the Creative Commons policies published at -creativecommons.org/policies, Creative Commons does not authorize the -use of the trademark "Creative Commons" or any other trademark or logo -of Creative Commons without its prior written consent including, -without limitation, in connection with any unauthorized modifications -to any of its public licenses or any other arrangements, -understandings, or agreements concerning use of licensed material. For -the avoidance of doubt, this paragraph does not form part of the public -licenses. - -Creative Commons may be contacted at creativecommons.org. diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/MAINTAINERS b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/MAINTAINERS deleted file mode 100644 index 477be8b214b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/MAINTAINERS +++ /dev/null @@ -1,27 +0,0 @@ -# go-connections maintainers file -# -# This file describes who runs the docker/go-connections project and how. -# This is a living document - if you see something out of date or missing, speak up! -# -# It is structured to be consumable by both humans and programs. -# To extract its contents programmatically, use any TOML-compliant parser. -# -# This file is compiled into the MAINTAINERS file in docker/opensource. -# -[Org] - [Org."Core maintainers"] - people = [ - "calavera", - ] - -[people] - -# A reference list of all people associated with the project. -# All other sections should refer to people by their canonical key -# in the people section. - - # ADD YOURSELF HERE IN ALPHABETICAL ORDER - [people.calavera] - Name = "David Calavera" - Email = "david.calavera@gmail.com" - GitHub = "calavera" diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/README.md b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/README.md deleted file mode 100644 index 3ce4d79daca..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/README.md +++ /dev/null @@ -1,18 +0,0 @@ -[![GoDoc](https://godoc.org/github.com/docker/go-units?status.svg)](https://godoc.org/github.com/docker/go-units) - -# Introduction - -go-units is a library to transform human friendly measurements into machine friendly values. - -## Usage - -See the [docs in godoc](https://godoc.org/github.com/docker/go-units) for examples and documentation. - -## Copyright and license - -Copyright © 2015 Docker, Inc. All rights reserved, except as follows. Code -is released under the Apache 2.0 license. The README.md file, and files in the -"docs" folder are licensed under the Creative Commons Attribution 4.0 -International License under the terms and conditions set forth in the file -"LICENSE.docs". You may obtain a duplicate copy of the same license, titled -CC-BY-SA-4.0, at http://creativecommons.org/licenses/by/4.0/. 
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/circle.yml b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/circle.yml deleted file mode 100644 index 9043b35478c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/circle.yml +++ /dev/null @@ -1,11 +0,0 @@ -dependencies: - post: - # install golint - - go get github.com/golang/lint/golint - -test: - pre: - # run analysis before tests - - go vet ./... - - test -z "$(golint ./... | tee /dev/stderr)" - - test -z "$(gofmt -s -l . | tee /dev/stderr)" diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/duration.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/duration.go deleted file mode 100644 index c219a8a968c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/duration.go +++ /dev/null @@ -1,33 +0,0 @@ -// Package units provides helper function to parse and print size and time units -// in human-readable format. -package units - -import ( - "fmt" - "time" -) - -// HumanDuration returns a human-readable approximation of a duration -// (eg. "About a minute", "4 hours ago", etc.). -func HumanDuration(d time.Duration) string { - if seconds := int(d.Seconds()); seconds < 1 { - return "Less than a second" - } else if seconds < 60 { - return fmt.Sprintf("%d seconds", seconds) - } else if minutes := int(d.Minutes()); minutes == 1 { - return "About a minute" - } else if minutes < 60 { - return fmt.Sprintf("%d minutes", minutes) - } else if hours := int(d.Hours()); hours == 1 { - return "About an hour" - } else if hours < 48 { - return fmt.Sprintf("%d hours", hours) - } else if hours < 24*7*2 { - return fmt.Sprintf("%d days", hours/24) - } else if hours < 24*30*3 { - return fmt.Sprintf("%d weeks", hours/24/7) - } else if hours < 24*365*2 { - return fmt.Sprintf("%d months", hours/24/30) - } - return fmt.Sprintf("%d years", int(d.Hours())/24/365) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/size.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/size.go deleted file mode 100644 index 3b59daff31b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/size.go +++ /dev/null @@ -1,95 +0,0 @@ -package units - -import ( - "fmt" - "regexp" - "strconv" - "strings" -) - -// See: http://en.wikipedia.org/wiki/Binary_prefix -const ( - // Decimal - - KB = 1000 - MB = 1000 * KB - GB = 1000 * MB - TB = 1000 * GB - PB = 1000 * TB - - // Binary - - KiB = 1024 - MiB = 1024 * KiB - GiB = 1024 * MiB - TiB = 1024 * GiB - PiB = 1024 * TiB -) - -type unitMap map[string]int64 - -var ( - decimalMap = unitMap{"k": KB, "m": MB, "g": GB, "t": TB, "p": PB} - binaryMap = unitMap{"k": KiB, "m": MiB, "g": GiB, "t": TiB, "p": PiB} - sizeRegex = regexp.MustCompile(`^(\d+)([kKmMgGtTpP])?[bB]?$`) -) - -var decimapAbbrs = []string{"B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"} -var binaryAbbrs = []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"} - -// CustomSize returns a human-readable approximation of a size -// using custom format. 
-func CustomSize(format string, size float64, base float64, _map []string) string { - i := 0 - for size >= base { - size = size / base - i++ - } - return fmt.Sprintf(format, size, _map[i]) -} - -// HumanSize returns a human-readable approximation of a size -// capped at 4 valid numbers (eg. "2.746 MB", "796 KB"). -func HumanSize(size float64) string { - return CustomSize("%.4g %s", size, 1000.0, decimapAbbrs) -} - -// BytesSize returns a human-readable size in bytes, kibibytes, -// mebibytes, gibibytes, or tebibytes (eg. "44kiB", "17MiB"). -func BytesSize(size float64) string { - return CustomSize("%.4g %s", size, 1024.0, binaryAbbrs) -} - -// FromHumanSize returns an integer from a human-readable specification of a -// size using SI standard (eg. "44kB", "17MB"). -func FromHumanSize(size string) (int64, error) { - return parseSize(size, decimalMap) -} - -// RAMInBytes parses a human-readable string representing an amount of RAM -// in bytes, kibibytes, mebibytes, gibibytes, or tebibytes and -// returns the number of bytes, or -1 if the string is unparseable. -// Units are case-insensitive, and the 'b' suffix is optional. -func RAMInBytes(size string) (int64, error) { - return parseSize(size, binaryMap) -} - -// Parses the human-readable size string into the amount it represents. -func parseSize(sizeStr string, uMap unitMap) (int64, error) { - matches := sizeRegex.FindStringSubmatch(sizeStr) - if len(matches) != 3 { - return -1, fmt.Errorf("invalid size: '%s'", sizeStr) - } - - size, err := strconv.ParseInt(matches[1], 10, 0) - if err != nil { - return -1, err - } - - unitPrefix := strings.ToLower(matches[2]) - if mul, ok := uMap[unitPrefix]; ok { - size *= mul - } - - return size, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/ulimit.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/ulimit.go deleted file mode 100644 index 5ac7fd825fc..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/docker/go-units/ulimit.go +++ /dev/null @@ -1,118 +0,0 @@ -package units - -import ( - "fmt" - "strconv" - "strings" -) - -// Ulimit is a human friendly version of Rlimit. -type Ulimit struct { - Name string - Hard int64 - Soft int64 -} - -// Rlimit specifies the resource limits, such as max open files. -type Rlimit struct { - Type int `json:"type,omitempty"` - Hard uint64 `json:"hard,omitempty"` - Soft uint64 `json:"soft,omitempty"` -} - -const ( - // magic numbers for making the syscall - // some of these are defined in the syscall package, but not all. - // Also since Windows client doesn't get access to the syscall package, need to - // define these here - rlimitAs = 9 - rlimitCore = 4 - rlimitCPU = 0 - rlimitData = 2 - rlimitFsize = 1 - rlimitLocks = 10 - rlimitMemlock = 8 - rlimitMsgqueue = 12 - rlimitNice = 13 - rlimitNofile = 7 - rlimitNproc = 6 - rlimitRss = 5 - rlimitRtprio = 14 - rlimitRttime = 15 - rlimitSigpending = 11 - rlimitStack = 3 -) - -var ulimitNameMapping = map[string]int{ - //"as": rlimitAs, // Disabled since this doesn't seem usable with the way Docker inits a container. 
- "core": rlimitCore, - "cpu": rlimitCPU, - "data": rlimitData, - "fsize": rlimitFsize, - "locks": rlimitLocks, - "memlock": rlimitMemlock, - "msgqueue": rlimitMsgqueue, - "nice": rlimitNice, - "nofile": rlimitNofile, - "nproc": rlimitNproc, - "rss": rlimitRss, - "rtprio": rlimitRtprio, - "rttime": rlimitRttime, - "sigpending": rlimitSigpending, - "stack": rlimitStack, -} - -// ParseUlimit parses and returns a Ulimit from the specified string. -func ParseUlimit(val string) (*Ulimit, error) { - parts := strings.SplitN(val, "=", 2) - if len(parts) != 2 { - return nil, fmt.Errorf("invalid ulimit argument: %s", val) - } - - if _, exists := ulimitNameMapping[parts[0]]; !exists { - return nil, fmt.Errorf("invalid ulimit type: %s", parts[0]) - } - - var ( - soft int64 - hard = &soft // default to soft in case no hard was set - temp int64 - err error - ) - switch limitVals := strings.Split(parts[1], ":"); len(limitVals) { - case 2: - temp, err = strconv.ParseInt(limitVals[1], 10, 64) - if err != nil { - return nil, err - } - hard = &temp - fallthrough - case 1: - soft, err = strconv.ParseInt(limitVals[0], 10, 64) - if err != nil { - return nil, err - } - default: - return nil, fmt.Errorf("too many limit value arguments - %s, can only have up to two, `soft[:hard]`", parts[1]) - } - - if soft > *hard { - return nil, fmt.Errorf("ulimit soft limit must be less than or equal to hard limit: %d > %d", soft, *hard) - } - - return &Ulimit{Name: parts[0], Soft: soft, Hard: *hard}, nil -} - -// GetRlimit returns the RLimit corresponding to Ulimit. -func (u *Ulimit) GetRlimit() (*Rlimit, error) { - t, exists := ulimitNameMapping[u.Name] - if !exists { - return nil, fmt.Errorf("invalid ulimit name %s", u.Name) - } - - return &Rlimit{Type: t, Soft: uint64(u.Soft), Hard: uint64(u.Hard)}, nil -} - -func (u *Ulimit) String() string { - return fmt.Sprintf("%s=%d:%d", u.Name, u.Soft, u.Hard) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/LICENSE b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/LICENSE deleted file mode 100644 index 0e5fb872800..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2012 Rodrigo Moraes. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/README.md b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/README.md deleted file mode 100644 index c60a31b053b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/README.md +++ /dev/null @@ -1,7 +0,0 @@ -context -======= -[![Build Status](https://travis-ci.org/gorilla/context.png?branch=master)](https://travis-ci.org/gorilla/context) - -gorilla/context is a general purpose registry for global request variables. - -Read the full documentation here: http://www.gorillatoolkit.org/pkg/context diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/context.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/context.go deleted file mode 100644 index 81cb128b19c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/context.go +++ /dev/null @@ -1,143 +0,0 @@ -// Copyright 2012 The Gorilla Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package context - -import ( - "net/http" - "sync" - "time" -) - -var ( - mutex sync.RWMutex - data = make(map[*http.Request]map[interface{}]interface{}) - datat = make(map[*http.Request]int64) -) - -// Set stores a value for a given key in a given request. -func Set(r *http.Request, key, val interface{}) { - mutex.Lock() - if data[r] == nil { - data[r] = make(map[interface{}]interface{}) - datat[r] = time.Now().Unix() - } - data[r][key] = val - mutex.Unlock() -} - -// Get returns a value stored for a given key in a given request. -func Get(r *http.Request, key interface{}) interface{} { - mutex.RLock() - if ctx := data[r]; ctx != nil { - value := ctx[key] - mutex.RUnlock() - return value - } - mutex.RUnlock() - return nil -} - -// GetOk returns stored value and presence state like multi-value return of map access. -func GetOk(r *http.Request, key interface{}) (interface{}, bool) { - mutex.RLock() - if _, ok := data[r]; ok { - value, ok := data[r][key] - mutex.RUnlock() - return value, ok - } - mutex.RUnlock() - return nil, false -} - -// GetAll returns all stored values for the request as a map. Nil is returned for invalid requests. -func GetAll(r *http.Request) map[interface{}]interface{} { - mutex.RLock() - if context, ok := data[r]; ok { - result := make(map[interface{}]interface{}, len(context)) - for k, v := range context { - result[k] = v - } - mutex.RUnlock() - return result - } - mutex.RUnlock() - return nil -} - -// GetAllOk returns all stored values for the request as a map and a boolean value that indicates if -// the request was registered. 
-func GetAllOk(r *http.Request) (map[interface{}]interface{}, bool) { - mutex.RLock() - context, ok := data[r] - result := make(map[interface{}]interface{}, len(context)) - for k, v := range context { - result[k] = v - } - mutex.RUnlock() - return result, ok -} - -// Delete removes a value stored for a given key in a given request. -func Delete(r *http.Request, key interface{}) { - mutex.Lock() - if data[r] != nil { - delete(data[r], key) - } - mutex.Unlock() -} - -// Clear removes all values stored for a given request. -// -// This is usually called by a handler wrapper to clean up request -// variables at the end of a request lifetime. See ClearHandler(). -func Clear(r *http.Request) { - mutex.Lock() - clear(r) - mutex.Unlock() -} - -// clear is Clear without the lock. -func clear(r *http.Request) { - delete(data, r) - delete(datat, r) -} - -// Purge removes request data stored for longer than maxAge, in seconds. -// It returns the amount of requests removed. -// -// If maxAge <= 0, all request data is removed. -// -// This is only used for sanity check: in case context cleaning was not -// properly set some request data can be kept forever, consuming an increasing -// amount of memory. In case this is detected, Purge() must be called -// periodically until the problem is fixed. -func Purge(maxAge int) int { - mutex.Lock() - count := 0 - if maxAge <= 0 { - count = len(data) - data = make(map[*http.Request]map[interface{}]interface{}) - datat = make(map[*http.Request]int64) - } else { - min := time.Now().Unix() - int64(maxAge) - for r := range data { - if datat[r] < min { - clear(r) - count++ - } - } - } - mutex.Unlock() - return count -} - -// ClearHandler wraps an http.Handler and clears request values at the end -// of a request lifetime. -func ClearHandler(h http.Handler) http.Handler { - return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - defer Clear(r) - h.ServeHTTP(w, r) - }) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/doc.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/doc.go deleted file mode 100644 index 73c7400311e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/context/doc.go +++ /dev/null @@ -1,82 +0,0 @@ -// Copyright 2012 The Gorilla Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -/* -Package context stores values shared during a request lifetime. - -For example, a router can set variables extracted from the URL and later -application handlers can access those values, or it can be used to store -sessions values to be saved at the end of a request. There are several -others common uses. - -The idea was posted by Brad Fitzpatrick to the go-nuts mailing list: - - http://groups.google.com/group/golang-nuts/msg/e2d679d303aa5d53 - -Here's the basic usage: first define the keys that you will need. The key -type is interface{} so a key can be of any type that supports equality. -Here we define a key using a custom int type to avoid name collisions: - - package foo - - import ( - "github.com/gorilla/context" - ) - - type key int - - const MyKey key = 0 - -Then set a variable. 
Variables are bound to an http.Request object, so you -need a request instance to set a value: - - context.Set(r, MyKey, "bar") - -The application can later access the variable using the same key you provided: - - func MyHandler(w http.ResponseWriter, r *http.Request) { - // val is "bar". - val := context.Get(r, foo.MyKey) - - // returns ("bar", true) - val, ok := context.GetOk(r, foo.MyKey) - // ... - } - -And that's all about the basic usage. We discuss some other ideas below. - -Any type can be stored in the context. To enforce a given type, make the key -private and wrap Get() and Set() to accept and return values of a specific -type: - - type key int - - const mykey key = 0 - - // GetMyKey returns a value for this package from the request values. - func GetMyKey(r *http.Request) SomeType { - if rv := context.Get(r, mykey); rv != nil { - return rv.(SomeType) - } - return nil - } - - // SetMyKey sets a value for this package in the request values. - func SetMyKey(r *http.Request, val SomeType) { - context.Set(r, mykey, val) - } - -Variables must be cleared at the end of a request, to remove all values -that were stored. This can be done in an http.Handler, after a request was -served. Just call Clear() passing the request: - - context.Clear(r) - -...or use ClearHandler(), which conveniently wraps an http.Handler to clear -variables at the end of a request lifetime. - -The Routers from the packages gorilla/mux and gorilla/pat call Clear() -so if you are using either of them you don't need to clear the context manually. -*/ -package context diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/LICENSE b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/LICENSE deleted file mode 100644 index 0e5fb872800..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2012 Rodrigo Moraes. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/README.md b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/README.md deleted file mode 100644 index b987c9e5d10..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/README.md +++ /dev/null @@ -1,240 +0,0 @@ -mux -=== -[![GoDoc](https://godoc.org/github.com/gorilla/mux?status.svg)](https://godoc.org/github.com/gorilla/mux) -[![Build Status](https://travis-ci.org/gorilla/mux.svg?branch=master)](https://travis-ci.org/gorilla/mux) - -Package `gorilla/mux` implements a request router and dispatcher. - -The name mux stands for "HTTP request multiplexer". Like the standard `http.ServeMux`, `mux.Router` matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: - -* Requests can be matched based on URL host, path, path prefix, schemes, header and query values, HTTP methods or using custom matchers. -* URL hosts and paths can have variables with an optional regular expression. -* Registered URLs can be built, or "reversed", which helps maintaining references to resources. -* Routes can be used as subrouters: nested routes are only tested if the parent route matches. This is useful to define groups of routes that share common conditions like a host, a path prefix or other repeated attributes. As a bonus, this optimizes request matching. -* It implements the `http.Handler` interface so it is compatible with the standard `http.ServeMux`. - -Let's start registering a couple of URL paths and handlers: - -```go -func main() { - r := mux.NewRouter() - r.HandleFunc("/", HomeHandler) - r.HandleFunc("/products", ProductsHandler) - r.HandleFunc("/articles", ArticlesHandler) - http.Handle("/", r) -} -``` - -Here we register three routes mapping URL paths to handlers. This is equivalent to how `http.HandleFunc()` works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (`http.ResponseWriter`, `*http.Request`) as parameters. - -Paths can have variables. They are defined using the format `{name}` or `{name:pattern}`. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: - -```go -r := mux.NewRouter() -r.HandleFunc("/products/{key}", ProductHandler) -r.HandleFunc("/articles/{category}/", ArticlesCategoryHandler) -r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler) -``` - -The names are used to create a map of route variables which can be retrieved calling `mux.Vars()`: - -```go -vars := mux.Vars(request) -category := vars["category"] -``` - -And this is all you need to know about the basic usage. More advanced options are explained below. - -Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: - -```go -r := mux.NewRouter() -// Only matches if domain is "www.example.com". -r.Host("www.example.com") -// Matches a dynamic subdomain. -r.Host("{subdomain:[a-z]+}.domain.com") -``` - -There are several other matchers that can be added. 
To match path prefixes: - -```go -r.PathPrefix("/products/") -``` - -...or HTTP methods: - -```go -r.Methods("GET", "POST") -``` - -...or URL schemes: - -```go -r.Schemes("https") -``` - -...or header values: - -```go -r.Headers("X-Requested-With", "XMLHttpRequest") -``` - -...or query values: - -```go -r.Queries("key", "value") -``` - -...or to use a custom matcher function: - -```go -r.MatcherFunc(func(r *http.Request, rm *RouteMatch) bool { - return r.ProtoMajor == 0 -}) -``` - -...and finally, it is possible to combine several matchers in a single route: - -```go -r.HandleFunc("/products", ProductsHandler). - Host("www.example.com"). - Methods("GET"). - Schemes("http") -``` - -Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". - -For example, let's say we have several URLs that should only match when the host is `www.example.com`. Create a route for that host and get a "subrouter" from it: - -```go -r := mux.NewRouter() -s := r.Host("www.example.com").Subrouter() -``` - -Then register routes in the subrouter: - -```go -s.HandleFunc("/products/", ProductsHandler) -s.HandleFunc("/products/{key}", ProductHandler) -s.HandleFunc("/articles/{category}/{id:[0-9]+}"), ArticleHandler) -``` - -The three URL paths we registered above will only be tested if the domain is `www.example.com`, because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. - -Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register its paths relatively to a given subrouter. - -There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as base for their paths: - -```go -r := mux.NewRouter() -s := r.PathPrefix("/products").Subrouter() -// "/products/" -s.HandleFunc("/", ProductsHandler) -// "/products/{key}/" -s.HandleFunc("/{key}/", ProductHandler) -// "/products/{key}/details" -s.HandleFunc("/{key}/details", ProductDetailsHandler) -``` - -Now let's see how to build registered URLs. - -Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling `Name()` on a route. For example: - -```go -r := mux.NewRouter() -r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler). - Name("article") -``` - -To build a URL, get the route and call the `URL()` method, passing a sequence of key/value pairs for the route variables. For the previous route, we would do: - -```go -url, err := r.Get("article").URL("category", "technology", "id", "42") -``` - -...and the result will be a `url.URL` with the following path: - -``` -"/articles/technology/42" -``` - -This also works for host variables: - -```go -r := mux.NewRouter() -r.Host("{subdomain}.domain.com"). - Path("/articles/{category}/{id:[0-9]+}"). - HandlerFunc(ArticleHandler). - Name("article") - -// url.String() will be "http://news.domain.com/articles/technology/42" -url, err := r.Get("article").URL("subdomain", "news", - "category", "technology", - "id", "42") -``` - -All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. 
- -Regex support also exists for matching Headers within a route. For example, we could do: - -```go -r.HeadersRegexp("Content-Type", "application/(text|json)") -``` - -...and the route will match both requests with a Content-Type of `application/json` as well as `application/text` - -There's also a way to build only the URL host or path for a route: use the methods `URLHost()` or `URLPath()` instead. For the previous route, we would do: - -```go -// "http://news.domain.com/" -host, err := r.Get("article").URLHost("subdomain", "news") - -// "/articles/technology/42" -path, err := r.Get("article").URLPath("category", "technology", "id", "42") -``` - -And if you use subrouters, host and path defined separately can be built as well: - -```go -r := mux.NewRouter() -s := r.Host("{subdomain}.domain.com").Subrouter() -s.Path("/articles/{category}/{id:[0-9]+}"). - HandlerFunc(ArticleHandler). - Name("article") - -// "http://news.domain.com/articles/technology/42" -url, err := r.Get("article").URL("subdomain", "news", - "category", "technology", - "id", "42") -``` - -## Full Example - -Here's a complete, runnable example of a small `mux` based server: - -```go -package main - -import ( - "net/http" - - "github.com/gorilla/mux" -) - -func YourHandler(w http.ResponseWriter, r *http.Request) { - w.Write([]byte("Gorilla!\n")) -} - -func main() { - r := mux.NewRouter() - // Routes consist of a path and a handler function. - r.HandleFunc("/", YourHandler) - - // Bind to a port and pass our router in - http.ListenAndServe(":8000", r) -} -``` - -## License - -BSD licensed. See the LICENSE file for details. diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/doc.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/doc.go deleted file mode 100644 index 49798cb5cf5..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/doc.go +++ /dev/null @@ -1,206 +0,0 @@ -// Copyright 2012 The Gorilla Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -/* -Package gorilla/mux implements a request router and dispatcher. - -The name mux stands for "HTTP request multiplexer". Like the standard -http.ServeMux, mux.Router matches incoming requests against a list of -registered routes and calls a handler for the route that matches the URL -or other conditions. The main features are: - - * Requests can be matched based on URL host, path, path prefix, schemes, - header and query values, HTTP methods or using custom matchers. - * URL hosts and paths can have variables with an optional regular - expression. - * Registered URLs can be built, or "reversed", which helps maintaining - references to resources. - * Routes can be used as subrouters: nested routes are only tested if the - parent route matches. This is useful to define groups of routes that - share common conditions like a host, a path prefix or other repeated - attributes. As a bonus, this optimizes request matching. - * It implements the http.Handler interface so it is compatible with the - standard http.ServeMux. - -Let's start registering a couple of URL paths and handlers: - - func main() { - r := mux.NewRouter() - r.HandleFunc("/", HomeHandler) - r.HandleFunc("/products", ProductsHandler) - r.HandleFunc("/articles", ArticlesHandler) - http.Handle("/", r) - } - -Here we register three routes mapping URL paths to handlers. 
This is -equivalent to how http.HandleFunc() works: if an incoming request URL matches -one of the paths, the corresponding handler is called passing -(http.ResponseWriter, *http.Request) as parameters. - -Paths can have variables. They are defined using the format {name} or -{name:pattern}. If a regular expression pattern is not defined, the matched -variable will be anything until the next slash. For example: - - r := mux.NewRouter() - r.HandleFunc("/products/{key}", ProductHandler) - r.HandleFunc("/articles/{category}/", ArticlesCategoryHandler) - r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler) - -The names are used to create a map of route variables which can be retrieved -calling mux.Vars(): - - vars := mux.Vars(request) - category := vars["category"] - -And this is all you need to know about the basic usage. More advanced options -are explained below. - -Routes can also be restricted to a domain or subdomain. Just define a host -pattern to be matched. They can also have variables: - - r := mux.NewRouter() - // Only matches if domain is "www.example.com". - r.Host("www.example.com") - // Matches a dynamic subdomain. - r.Host("{subdomain:[a-z]+}.domain.com") - -There are several other matchers that can be added. To match path prefixes: - - r.PathPrefix("/products/") - -...or HTTP methods: - - r.Methods("GET", "POST") - -...or URL schemes: - - r.Schemes("https") - -...or header values: - - r.Headers("X-Requested-With", "XMLHttpRequest") - -...or query values: - - r.Queries("key", "value") - -...or to use a custom matcher function: - - r.MatcherFunc(func(r *http.Request, rm *RouteMatch) bool { - return r.ProtoMajor == 0 - }) - -...and finally, it is possible to combine several matchers in a single route: - - r.HandleFunc("/products", ProductsHandler). - Host("www.example.com"). - Methods("GET"). - Schemes("http") - -Setting the same matching conditions again and again can be boring, so we have -a way to group several routes that share the same requirements. -We call it "subrouting". - -For example, let's say we have several URLs that should only match when the -host is "www.example.com". Create a route for that host and get a "subrouter" -from it: - - r := mux.NewRouter() - s := r.Host("www.example.com").Subrouter() - -Then register routes in the subrouter: - - s.HandleFunc("/products/", ProductsHandler) - s.HandleFunc("/products/{key}", ProductHandler) - s.HandleFunc("/articles/{category}/{id:[0-9]+}"), ArticleHandler) - -The three URL paths we registered above will only be tested if the domain is -"www.example.com", because the subrouter is tested first. This is not -only convenient, but also optimizes request matching. You can create -subrouters combining any attribute matchers accepted by a route. - -Subrouters can be used to create domain or path "namespaces": you define -subrouters in a central place and then parts of the app can register its -paths relatively to a given subrouter. - -There's one more thing about subroutes. When a subrouter has a path prefix, -the inner routes use it as base for their paths: - - r := mux.NewRouter() - s := r.PathPrefix("/products").Subrouter() - // "/products/" - s.HandleFunc("/", ProductsHandler) - // "/products/{key}/" - s.HandleFunc("/{key}/", ProductHandler) - // "/products/{key}/details" - s.HandleFunc("/{key}/details", ProductDetailsHandler) - -Now let's see how to build registered URLs. - -Routes can be named. All routes that define a name can have their URLs built, -or "reversed". We define a name calling Name() on a route. 
For example: - - r := mux.NewRouter() - r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler). - Name("article") - -To build a URL, get the route and call the URL() method, passing a sequence of -key/value pairs for the route variables. For the previous route, we would do: - - url, err := r.Get("article").URL("category", "technology", "id", "42") - -...and the result will be a url.URL with the following path: - - "/articles/technology/42" - -This also works for host variables: - - r := mux.NewRouter() - r.Host("{subdomain}.domain.com"). - Path("/articles/{category}/{id:[0-9]+}"). - HandlerFunc(ArticleHandler). - Name("article") - - // url.String() will be "http://news.domain.com/articles/technology/42" - url, err := r.Get("article").URL("subdomain", "news", - "category", "technology", - "id", "42") - -All variables defined in the route are required, and their values must -conform to the corresponding patterns. These requirements guarantee that a -generated URL will always match a registered route -- the only exception is -for explicitly defined "build-only" routes which never match. - -Regex support also exists for matching Headers within a route. For example, we could do: - - r.HeadersRegexp("Content-Type", "application/(text|json)") - -...and the route will match both requests with a Content-Type of `application/json` as well as -`application/text` - -There's also a way to build only the URL host or path for a route: -use the methods URLHost() or URLPath() instead. For the previous route, -we would do: - - // "http://news.domain.com/" - host, err := r.Get("article").URLHost("subdomain", "news") - - // "/articles/technology/42" - path, err := r.Get("article").URLPath("category", "technology", "id", "42") - -And if you use subrouters, host and path defined separately can be built -as well: - - r := mux.NewRouter() - s := r.Host("{subdomain}.domain.com").Subrouter() - s.Path("/articles/{category}/{id:[0-9]+}"). - HandlerFunc(ArticleHandler). - Name("article") - - // "http://news.domain.com/articles/technology/42" - url, err := r.Get("article").URL("subdomain", "news", - "category", "technology", - "id", "42") -*/ -package mux diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/mux.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/mux.go deleted file mode 100644 index cb03ddfe533..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/mux.go +++ /dev/null @@ -1,481 +0,0 @@ -// Copyright 2012 The Gorilla Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package mux - -import ( - "errors" - "fmt" - "net/http" - "path" - "regexp" - - "github.com/fsouza/go-dockerclient/external/github.com/gorilla/context" -) - -// NewRouter returns a new router instance. -func NewRouter() *Router { - return &Router{namedRoutes: make(map[string]*Route), KeepContext: false} -} - -// Router registers routes to be matched and dispatches a handler. -// -// It implements the http.Handler interface, so it can be registered to serve -// requests: -// -// var router = mux.NewRouter() -// -// func main() { -// http.Handle("/", router) -// } -// -// Or, for Google App Engine, register it in a init() function: -// -// func init() { -// http.Handle("/", router) -// } -// -// This will send all incoming requests to the router. 
-type Router struct { - // Configurable Handler to be used when no route matches. - NotFoundHandler http.Handler - // Parent route, if this is a subrouter. - parent parentRoute - // Routes to be matched, in order. - routes []*Route - // Routes by name for URL building. - namedRoutes map[string]*Route - // See Router.StrictSlash(). This defines the flag for new routes. - strictSlash bool - // If true, do not clear the request context after handling the request - KeepContext bool -} - -// Match matches registered routes against the request. -func (r *Router) Match(req *http.Request, match *RouteMatch) bool { - for _, route := range r.routes { - if route.Match(req, match) { - return true - } - } - - // Closest match for a router (includes sub-routers) - if r.NotFoundHandler != nil { - match.Handler = r.NotFoundHandler - return true - } - return false -} - -// ServeHTTP dispatches the handler registered in the matched route. -// -// When there is a match, the route variables can be retrieved calling -// mux.Vars(request). -func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) { - // Clean path to canonical form and redirect. - if p := cleanPath(req.URL.Path); p != req.URL.Path { - - // Added 3 lines (Philip Schlump) - It was dropping the query string and #whatever from query. - // This matches with fix in go 1.2 r.c. 4 for same problem. Go Issue: - // http://code.google.com/p/go/issues/detail?id=5252 - url := *req.URL - url.Path = p - p = url.String() - - w.Header().Set("Location", p) - w.WriteHeader(http.StatusMovedPermanently) - return - } - var match RouteMatch - var handler http.Handler - if r.Match(req, &match) { - handler = match.Handler - setVars(req, match.Vars) - setCurrentRoute(req, match.Route) - } - if handler == nil { - handler = http.NotFoundHandler() - } - if !r.KeepContext { - defer context.Clear(req) - } - handler.ServeHTTP(w, req) -} - -// Get returns a route registered with the given name. -func (r *Router) Get(name string) *Route { - return r.getNamedRoutes()[name] -} - -// GetRoute returns a route registered with the given name. This method -// was renamed to Get() and remains here for backwards compatibility. -func (r *Router) GetRoute(name string) *Route { - return r.getNamedRoutes()[name] -} - -// StrictSlash defines the trailing slash behavior for new routes. The initial -// value is false. -// -// When true, if the route path is "/path/", accessing "/path" will redirect -// to the former and vice versa. In other words, your application will always -// see the path as specified in the route. -// -// When false, if the route path is "/path", accessing "/path/" will not match -// this route and vice versa. -// -// Special case: when a route sets a path prefix using the PathPrefix() method, -// strict slash is ignored for that route because the redirect behavior can't -// be determined from a prefix alone. However, any subrouters created from that -// route inherit the original StrictSlash setting. -func (r *Router) StrictSlash(value bool) *Router { - r.strictSlash = value - return r -} - -// ---------------------------------------------------------------------------- -// parentRoute -// ---------------------------------------------------------------------------- - -// getNamedRoutes returns the map where named routes are registered. 
-func (r *Router) getNamedRoutes() map[string]*Route { - if r.namedRoutes == nil { - if r.parent != nil { - r.namedRoutes = r.parent.getNamedRoutes() - } else { - r.namedRoutes = make(map[string]*Route) - } - } - return r.namedRoutes -} - -// getRegexpGroup returns regexp definitions from the parent route, if any. -func (r *Router) getRegexpGroup() *routeRegexpGroup { - if r.parent != nil { - return r.parent.getRegexpGroup() - } - return nil -} - -func (r *Router) buildVars(m map[string]string) map[string]string { - if r.parent != nil { - m = r.parent.buildVars(m) - } - return m -} - -// ---------------------------------------------------------------------------- -// Route factories -// ---------------------------------------------------------------------------- - -// NewRoute registers an empty route. -func (r *Router) NewRoute() *Route { - route := &Route{parent: r, strictSlash: r.strictSlash} - r.routes = append(r.routes, route) - return route -} - -// Handle registers a new route with a matcher for the URL path. -// See Route.Path() and Route.Handler(). -func (r *Router) Handle(path string, handler http.Handler) *Route { - return r.NewRoute().Path(path).Handler(handler) -} - -// HandleFunc registers a new route with a matcher for the URL path. -// See Route.Path() and Route.HandlerFunc(). -func (r *Router) HandleFunc(path string, f func(http.ResponseWriter, - *http.Request)) *Route { - return r.NewRoute().Path(path).HandlerFunc(f) -} - -// Headers registers a new route with a matcher for request header values. -// See Route.Headers(). -func (r *Router) Headers(pairs ...string) *Route { - return r.NewRoute().Headers(pairs...) -} - -// Host registers a new route with a matcher for the URL host. -// See Route.Host(). -func (r *Router) Host(tpl string) *Route { - return r.NewRoute().Host(tpl) -} - -// MatcherFunc registers a new route with a custom matcher function. -// See Route.MatcherFunc(). -func (r *Router) MatcherFunc(f MatcherFunc) *Route { - return r.NewRoute().MatcherFunc(f) -} - -// Methods registers a new route with a matcher for HTTP methods. -// See Route.Methods(). -func (r *Router) Methods(methods ...string) *Route { - return r.NewRoute().Methods(methods...) -} - -// Path registers a new route with a matcher for the URL path. -// See Route.Path(). -func (r *Router) Path(tpl string) *Route { - return r.NewRoute().Path(tpl) -} - -// PathPrefix registers a new route with a matcher for the URL path prefix. -// See Route.PathPrefix(). -func (r *Router) PathPrefix(tpl string) *Route { - return r.NewRoute().PathPrefix(tpl) -} - -// Queries registers a new route with a matcher for URL query values. -// See Route.Queries(). -func (r *Router) Queries(pairs ...string) *Route { - return r.NewRoute().Queries(pairs...) -} - -// Schemes registers a new route with a matcher for URL schemes. -// See Route.Schemes(). -func (r *Router) Schemes(schemes ...string) *Route { - return r.NewRoute().Schemes(schemes...) -} - -// BuildVars registers a new route with a custom function for modifying -// route variables before building a URL. -func (r *Router) BuildVarsFunc(f BuildVarsFunc) *Route { - return r.NewRoute().BuildVarsFunc(f) -} - -// Walk walks the router and all its sub-routers, calling walkFn for each route -// in the tree. The routes are walked in the order they were added. Sub-routers -// are explored depth-first. 
-func (r *Router) Walk(walkFn WalkFunc) error { - return r.walk(walkFn, []*Route{}) -} - -// SkipRouter is used as a return value from WalkFuncs to indicate that the -// router that walk is about to descend down to should be skipped. -var SkipRouter = errors.New("skip this router") - -// WalkFunc is the type of the function called for each route visited by Walk. -// At every invocation, it is given the current route, and the current router, -// and a list of ancestor routes that lead to the current route. -type WalkFunc func(route *Route, router *Router, ancestors []*Route) error - -func (r *Router) walk(walkFn WalkFunc, ancestors []*Route) error { - for _, t := range r.routes { - if t.regexp == nil || t.regexp.path == nil || t.regexp.path.template == "" { - continue - } - - err := walkFn(t, r, ancestors) - if err == SkipRouter { - continue - } - for _, sr := range t.matchers { - if h, ok := sr.(*Router); ok { - err := h.walk(walkFn, ancestors) - if err != nil { - return err - } - } - } - if h, ok := t.handler.(*Router); ok { - ancestors = append(ancestors, t) - err := h.walk(walkFn, ancestors) - if err != nil { - return err - } - ancestors = ancestors[:len(ancestors)-1] - } - } - return nil -} - -// ---------------------------------------------------------------------------- -// Context -// ---------------------------------------------------------------------------- - -// RouteMatch stores information about a matched route. -type RouteMatch struct { - Route *Route - Handler http.Handler - Vars map[string]string -} - -type contextKey int - -const ( - varsKey contextKey = iota - routeKey -) - -// Vars returns the route variables for the current request, if any. -func Vars(r *http.Request) map[string]string { - if rv := context.Get(r, varsKey); rv != nil { - return rv.(map[string]string) - } - return nil -} - -// CurrentRoute returns the matched route for the current request, if any. -// This only works when called inside the handler of the matched route -// because the matched route is stored in the request context which is cleared -// after the handler returns, unless the KeepContext option is set on the -// Router. -func CurrentRoute(r *http.Request) *Route { - if rv := context.Get(r, routeKey); rv != nil { - return rv.(*Route) - } - return nil -} - -func setVars(r *http.Request, val interface{}) { - if val != nil { - context.Set(r, varsKey, val) - } -} - -func setCurrentRoute(r *http.Request, val interface{}) { - if val != nil { - context.Set(r, routeKey, val) - } -} - -// ---------------------------------------------------------------------------- -// Helpers -// ---------------------------------------------------------------------------- - -// cleanPath returns the canonical path for p, eliminating . and .. elements. -// Borrowed from the net/http package. -func cleanPath(p string) string { - if p == "" { - return "/" - } - if p[0] != '/' { - p = "/" + p - } - np := path.Clean(p) - // path.Clean removes trailing slash except for root; - // put the trailing slash back if necessary. - if p[len(p)-1] == '/' && np != "/" { - np += "/" - } - return np -} - -// uniqueVars returns an error if two slices contain duplicated strings. -func uniqueVars(s1, s2 []string) error { - for _, v1 := range s1 { - for _, v2 := range s2 { - if v1 == v2 { - return fmt.Errorf("mux: duplicated route variable %q", v2) - } - } - } - return nil -} - -// checkPairs returns the count of strings passed in, and an error if -// the count is not an even number. 
-func checkPairs(pairs ...string) (int, error) { - length := len(pairs) - if length%2 != 0 { - return length, fmt.Errorf( - "mux: number of parameters must be multiple of 2, got %v", pairs) - } - return length, nil -} - -// mapFromPairsToString converts variadic string parameters to a -// string to string map. -func mapFromPairsToString(pairs ...string) (map[string]string, error) { - length, err := checkPairs(pairs...) - if err != nil { - return nil, err - } - m := make(map[string]string, length/2) - for i := 0; i < length; i += 2 { - m[pairs[i]] = pairs[i+1] - } - return m, nil -} - -// mapFromPairsToRegex converts variadic string paramers to a -// string to regex map. -func mapFromPairsToRegex(pairs ...string) (map[string]*regexp.Regexp, error) { - length, err := checkPairs(pairs...) - if err != nil { - return nil, err - } - m := make(map[string]*regexp.Regexp, length/2) - for i := 0; i < length; i += 2 { - regex, err := regexp.Compile(pairs[i+1]) - if err != nil { - return nil, err - } - m[pairs[i]] = regex - } - return m, nil -} - -// matchInArray returns true if the given string value is in the array. -func matchInArray(arr []string, value string) bool { - for _, v := range arr { - if v == value { - return true - } - } - return false -} - -// matchMapWithString returns true if the given key/value pairs exist in a given map. -func matchMapWithString(toCheck map[string]string, toMatch map[string][]string, canonicalKey bool) bool { - for k, v := range toCheck { - // Check if key exists. - if canonicalKey { - k = http.CanonicalHeaderKey(k) - } - if values := toMatch[k]; values == nil { - return false - } else if v != "" { - // If value was defined as an empty string we only check that the - // key exists. Otherwise we also check for equality. - valueExists := false - for _, value := range values { - if v == value { - valueExists = true - break - } - } - if !valueExists { - return false - } - } - } - return true -} - -// matchMapWithRegex returns true if the given key/value pairs exist in a given map compiled against -// the given regex -func matchMapWithRegex(toCheck map[string]*regexp.Regexp, toMatch map[string][]string, canonicalKey bool) bool { - for k, v := range toCheck { - // Check if key exists. - if canonicalKey { - k = http.CanonicalHeaderKey(k) - } - if values := toMatch[k]; values == nil { - return false - } else if v != nil { - // If value was defined as an empty string we only check that the - // key exists. Otherwise we also check for equality. - valueExists := false - for _, value := range values { - if v.MatchString(value) { - valueExists = true - break - } - } - if !valueExists { - return false - } - } - } - return true -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/regexp.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/regexp.go deleted file mode 100644 index 06728dd545e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/regexp.go +++ /dev/null @@ -1,317 +0,0 @@ -// Copyright 2012 The Gorilla Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package mux - -import ( - "bytes" - "fmt" - "net/http" - "net/url" - "regexp" - "strconv" - "strings" -) - -// newRouteRegexp parses a route template and returns a routeRegexp, -// used to match a host, a path or a query string. 
-// -// It will extract named variables, assemble a regexp to be matched, create -// a "reverse" template to build URLs and compile regexps to validate variable -// values used in URL building. -// -// Previously we accepted only Python-like identifiers for variable -// names ([a-zA-Z_][a-zA-Z0-9_]*), but currently the only restriction is that -// name and pattern can't be empty, and names can't contain a colon. -func newRouteRegexp(tpl string, matchHost, matchPrefix, matchQuery, strictSlash bool) (*routeRegexp, error) { - // Check if it is well-formed. - idxs, errBraces := braceIndices(tpl) - if errBraces != nil { - return nil, errBraces - } - // Backup the original. - template := tpl - // Now let's parse it. - defaultPattern := "[^/]+" - if matchQuery { - defaultPattern = "[^?&]*" - } else if matchHost { - defaultPattern = "[^.]+" - matchPrefix = false - } - // Only match strict slash if not matching - if matchPrefix || matchHost || matchQuery { - strictSlash = false - } - // Set a flag for strictSlash. - endSlash := false - if strictSlash && strings.HasSuffix(tpl, "/") { - tpl = tpl[:len(tpl)-1] - endSlash = true - } - varsN := make([]string, len(idxs)/2) - varsR := make([]*regexp.Regexp, len(idxs)/2) - pattern := bytes.NewBufferString("") - pattern.WriteByte('^') - reverse := bytes.NewBufferString("") - var end int - var err error - for i := 0; i < len(idxs); i += 2 { - // Set all values we are interested in. - raw := tpl[end:idxs[i]] - end = idxs[i+1] - parts := strings.SplitN(tpl[idxs[i]+1:end-1], ":", 2) - name := parts[0] - patt := defaultPattern - if len(parts) == 2 { - patt = parts[1] - } - // Name or pattern can't be empty. - if name == "" || patt == "" { - return nil, fmt.Errorf("mux: missing name or pattern in %q", - tpl[idxs[i]:end]) - } - // Build the regexp pattern. - varIdx := i / 2 - fmt.Fprintf(pattern, "%s(?P<%s>%s)", regexp.QuoteMeta(raw), varGroupName(varIdx), patt) - // Build the reverse template. - fmt.Fprintf(reverse, "%s%%s", raw) - - // Append variable name and compiled pattern. - varsN[varIdx] = name - varsR[varIdx], err = regexp.Compile(fmt.Sprintf("^%s$", patt)) - if err != nil { - return nil, err - } - } - // Add the remaining. - raw := tpl[end:] - pattern.WriteString(regexp.QuoteMeta(raw)) - if strictSlash { - pattern.WriteString("[/]?") - } - if matchQuery { - // Add the default pattern if the query value is empty - if queryVal := strings.SplitN(template, "=", 2)[1]; queryVal == "" { - pattern.WriteString(defaultPattern) - } - } - if !matchPrefix { - pattern.WriteByte('$') - } - reverse.WriteString(raw) - if endSlash { - reverse.WriteByte('/') - } - // Compile full regexp. - reg, errCompile := regexp.Compile(pattern.String()) - if errCompile != nil { - return nil, errCompile - } - // Done! - return &routeRegexp{ - template: template, - matchHost: matchHost, - matchQuery: matchQuery, - strictSlash: strictSlash, - regexp: reg, - reverse: reverse.String(), - varsN: varsN, - varsR: varsR, - }, nil -} - -// routeRegexp stores a regexp to match a host or path and information to -// collect and validate route variables. -type routeRegexp struct { - // The unmodified template. - template string - // True for host match, false for path or query string match. - matchHost bool - // True for query string match, false for path and host match. - matchQuery bool - // The strictSlash value defined on the route, but disabled if PathPrefix was used. - strictSlash bool - // Expanded regexp. - regexp *regexp.Regexp - // Reverse template. - reverse string - // Variable names. 
- varsN []string - // Variable regexps (validators). - varsR []*regexp.Regexp -} - -// Match matches the regexp against the URL host or path. -func (r *routeRegexp) Match(req *http.Request, match *RouteMatch) bool { - if !r.matchHost { - if r.matchQuery { - return r.matchQueryString(req) - } else { - return r.regexp.MatchString(req.URL.Path) - } - } - return r.regexp.MatchString(getHost(req)) -} - -// url builds a URL part using the given values. -func (r *routeRegexp) url(values map[string]string) (string, error) { - urlValues := make([]interface{}, len(r.varsN)) - for k, v := range r.varsN { - value, ok := values[v] - if !ok { - return "", fmt.Errorf("mux: missing route variable %q", v) - } - urlValues[k] = value - } - rv := fmt.Sprintf(r.reverse, urlValues...) - if !r.regexp.MatchString(rv) { - // The URL is checked against the full regexp, instead of checking - // individual variables. This is faster but to provide a good error - // message, we check individual regexps if the URL doesn't match. - for k, v := range r.varsN { - if !r.varsR[k].MatchString(values[v]) { - return "", fmt.Errorf( - "mux: variable %q doesn't match, expected %q", values[v], - r.varsR[k].String()) - } - } - } - return rv, nil -} - -// getUrlQuery returns a single query parameter from a request URL. -// For a URL with foo=bar&baz=ding, we return only the relevant key -// value pair for the routeRegexp. -func (r *routeRegexp) getUrlQuery(req *http.Request) string { - if !r.matchQuery { - return "" - } - templateKey := strings.SplitN(r.template, "=", 2)[0] - for key, vals := range req.URL.Query() { - if key == templateKey && len(vals) > 0 { - return key + "=" + vals[0] - } - } - return "" -} - -func (r *routeRegexp) matchQueryString(req *http.Request) bool { - return r.regexp.MatchString(r.getUrlQuery(req)) -} - -// braceIndices returns the first level curly brace indices from a string. -// It returns an error in case of unbalanced braces. -func braceIndices(s string) ([]int, error) { - var level, idx int - idxs := make([]int, 0) - for i := 0; i < len(s); i++ { - switch s[i] { - case '{': - if level++; level == 1 { - idx = i - } - case '}': - if level--; level == 0 { - idxs = append(idxs, idx, i+1) - } else if level < 0 { - return nil, fmt.Errorf("mux: unbalanced braces in %q", s) - } - } - } - if level != 0 { - return nil, fmt.Errorf("mux: unbalanced braces in %q", s) - } - return idxs, nil -} - -// varGroupName builds a capturing group name for the indexed variable. -func varGroupName(idx int) string { - return "v" + strconv.Itoa(idx) -} - -// ---------------------------------------------------------------------------- -// routeRegexpGroup -// ---------------------------------------------------------------------------- - -// routeRegexpGroup groups the route matchers that carry variables. -type routeRegexpGroup struct { - host *routeRegexp - path *routeRegexp - queries []*routeRegexp -} - -// setMatch extracts the variables from the URL once a route matches. -func (v *routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route) { - // Store host variables. - if v.host != nil { - hostVars := v.host.regexp.FindStringSubmatch(getHost(req)) - if hostVars != nil { - subexpNames := v.host.regexp.SubexpNames() - varName := 0 - for i, name := range subexpNames[1:] { - if name != "" && name == varGroupName(varName) { - m.Vars[v.host.varsN[varName]] = hostVars[i+1] - varName++ - } - } - } - } - // Store path variables. 
- if v.path != nil { - pathVars := v.path.regexp.FindStringSubmatch(req.URL.Path) - if pathVars != nil { - subexpNames := v.path.regexp.SubexpNames() - varName := 0 - for i, name := range subexpNames[1:] { - if name != "" && name == varGroupName(varName) { - m.Vars[v.path.varsN[varName]] = pathVars[i+1] - varName++ - } - } - // Check if we should redirect. - if v.path.strictSlash { - p1 := strings.HasSuffix(req.URL.Path, "/") - p2 := strings.HasSuffix(v.path.template, "/") - if p1 != p2 { - u, _ := url.Parse(req.URL.String()) - if p1 { - u.Path = u.Path[:len(u.Path)-1] - } else { - u.Path += "/" - } - m.Handler = http.RedirectHandler(u.String(), 301) - } - } - } - } - // Store query string variables. - for _, q := range v.queries { - queryVars := q.regexp.FindStringSubmatch(q.getUrlQuery(req)) - if queryVars != nil { - subexpNames := q.regexp.SubexpNames() - varName := 0 - for i, name := range subexpNames[1:] { - if name != "" && name == varGroupName(varName) { - m.Vars[q.varsN[varName]] = queryVars[i+1] - varName++ - } - } - } - } -} - -// getHost tries its best to return the request host. -func getHost(r *http.Request) string { - if r.URL.IsAbs() { - return r.URL.Host - } - host := r.Host - // Slice off any port information. - if i := strings.Index(host, ":"); i != -1 { - host = host[:i] - } - return host - -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/route.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/route.go deleted file mode 100644 index 913432c1c0d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux/route.go +++ /dev/null @@ -1,595 +0,0 @@ -// Copyright 2012 The Gorilla Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package mux - -import ( - "errors" - "fmt" - "net/http" - "net/url" - "regexp" - "strings" -) - -// Route stores information to match a request and build URLs. -type Route struct { - // Parent where the route was registered (a Router). - parent parentRoute - // Request handler for the route. - handler http.Handler - // List of matchers. - matchers []matcher - // Manager for the variables from host and path. - regexp *routeRegexpGroup - // If true, when the path pattern is "/path/", accessing "/path" will - // redirect to the former and vice versa. - strictSlash bool - // If true, this route never matches: it is only used to build URLs. - buildOnly bool - // The name used to build URLs. - name string - // Error resulted from building a route. - err error - - buildVarsFunc BuildVarsFunc -} - -// Match matches the route against the request. -func (r *Route) Match(req *http.Request, match *RouteMatch) bool { - if r.buildOnly || r.err != nil { - return false - } - // Match everything. - for _, m := range r.matchers { - if matched := m.Match(req, match); !matched { - return false - } - } - // Yay, we have a match. Let's collect some info about it. - if match.Route == nil { - match.Route = r - } - if match.Handler == nil { - match.Handler = r.handler - } - if match.Vars == nil { - match.Vars = make(map[string]string) - } - // Set variables. 
- if r.regexp != nil { - r.regexp.setMatch(req, match, r) - } - return true -} - -// ---------------------------------------------------------------------------- -// Route attributes -// ---------------------------------------------------------------------------- - -// GetError returns an error resulted from building the route, if any. -func (r *Route) GetError() error { - return r.err -} - -// BuildOnly sets the route to never match: it is only used to build URLs. -func (r *Route) BuildOnly() *Route { - r.buildOnly = true - return r -} - -// Handler -------------------------------------------------------------------- - -// Handler sets a handler for the route. -func (r *Route) Handler(handler http.Handler) *Route { - if r.err == nil { - r.handler = handler - } - return r -} - -// HandlerFunc sets a handler function for the route. -func (r *Route) HandlerFunc(f func(http.ResponseWriter, *http.Request)) *Route { - return r.Handler(http.HandlerFunc(f)) -} - -// GetHandler returns the handler for the route, if any. -func (r *Route) GetHandler() http.Handler { - return r.handler -} - -// Name ----------------------------------------------------------------------- - -// Name sets the name for the route, used to build URLs. -// If the name was registered already it will be overwritten. -func (r *Route) Name(name string) *Route { - if r.name != "" { - r.err = fmt.Errorf("mux: route already has name %q, can't set %q", - r.name, name) - } - if r.err == nil { - r.name = name - r.getNamedRoutes()[name] = r - } - return r -} - -// GetName returns the name for the route, if any. -func (r *Route) GetName() string { - return r.name -} - -// ---------------------------------------------------------------------------- -// Matchers -// ---------------------------------------------------------------------------- - -// matcher types try to match a request. -type matcher interface { - Match(*http.Request, *RouteMatch) bool -} - -// addMatcher adds a matcher to the route. -func (r *Route) addMatcher(m matcher) *Route { - if r.err == nil { - r.matchers = append(r.matchers, m) - } - return r -} - -// addRegexpMatcher adds a host or path matcher and builder to a route. -func (r *Route) addRegexpMatcher(tpl string, matchHost, matchPrefix, matchQuery bool) error { - if r.err != nil { - return r.err - } - r.regexp = r.getRegexpGroup() - if !matchHost && !matchQuery { - if len(tpl) == 0 || tpl[0] != '/' { - return fmt.Errorf("mux: path must start with a slash, got %q", tpl) - } - if r.regexp.path != nil { - tpl = strings.TrimRight(r.regexp.path.template, "/") + tpl - } - } - rr, err := newRouteRegexp(tpl, matchHost, matchPrefix, matchQuery, r.strictSlash) - if err != nil { - return err - } - for _, q := range r.regexp.queries { - if err = uniqueVars(rr.varsN, q.varsN); err != nil { - return err - } - } - if matchHost { - if r.regexp.path != nil { - if err = uniqueVars(rr.varsN, r.regexp.path.varsN); err != nil { - return err - } - } - r.regexp.host = rr - } else { - if r.regexp.host != nil { - if err = uniqueVars(rr.varsN, r.regexp.host.varsN); err != nil { - return err - } - } - if matchQuery { - r.regexp.queries = append(r.regexp.queries, rr) - } else { - r.regexp.path = rr - } - } - r.addMatcher(rr) - return nil -} - -// Headers -------------------------------------------------------------------- - -// headerMatcher matches the request against header values. 
-type headerMatcher map[string]string - -func (m headerMatcher) Match(r *http.Request, match *RouteMatch) bool { - return matchMapWithString(m, r.Header, true) -} - -// Headers adds a matcher for request header values. -// It accepts a sequence of key/value pairs to be matched. For example: -// -// r := mux.NewRouter() -// r.Headers("Content-Type", "application/json", -// "X-Requested-With", "XMLHttpRequest") -// -// The above route will only match if both request header values match. -// If the value is an empty string, it will match any value if the key is set. -func (r *Route) Headers(pairs ...string) *Route { - if r.err == nil { - var headers map[string]string - headers, r.err = mapFromPairsToString(pairs...) - return r.addMatcher(headerMatcher(headers)) - } - return r -} - -// headerRegexMatcher matches the request against the route given a regex for the header -type headerRegexMatcher map[string]*regexp.Regexp - -func (m headerRegexMatcher) Match(r *http.Request, match *RouteMatch) bool { - return matchMapWithRegex(m, r.Header, true) -} - -// Regular expressions can be used with headers as well. -// It accepts a sequence of key/value pairs, where the value has regex support. For example -// r := mux.NewRouter() -// r.HeadersRegexp("Content-Type", "application/(text|json)", -// "X-Requested-With", "XMLHttpRequest") -// -// The above route will only match if both the request header matches both regular expressions. -// It the value is an empty string, it will match any value if the key is set. -func (r *Route) HeadersRegexp(pairs ...string) *Route { - if r.err == nil { - var headers map[string]*regexp.Regexp - headers, r.err = mapFromPairsToRegex(pairs...) - return r.addMatcher(headerRegexMatcher(headers)) - } - return r -} - -// Host ----------------------------------------------------------------------- - -// Host adds a matcher for the URL host. -// It accepts a template with zero or more URL variables enclosed by {}. -// Variables can define an optional regexp pattern to be matched: -// -// - {name} matches anything until the next dot. -// -// - {name:pattern} matches the given regexp pattern. -// -// For example: -// -// r := mux.NewRouter() -// r.Host("www.example.com") -// r.Host("{subdomain}.domain.com") -// r.Host("{subdomain:[a-z]+}.domain.com") -// -// Variable names must be unique in a given route. They can be retrieved -// calling mux.Vars(request). -func (r *Route) Host(tpl string) *Route { - r.err = r.addRegexpMatcher(tpl, true, false, false) - return r -} - -// MatcherFunc ---------------------------------------------------------------- - -// MatcherFunc is the function signature used by custom matchers. -type MatcherFunc func(*http.Request, *RouteMatch) bool - -func (m MatcherFunc) Match(r *http.Request, match *RouteMatch) bool { - return m(r, match) -} - -// MatcherFunc adds a custom function to be used as request matcher. -func (r *Route) MatcherFunc(f MatcherFunc) *Route { - return r.addMatcher(f) -} - -// Methods -------------------------------------------------------------------- - -// methodMatcher matches the request against HTTP methods. -type methodMatcher []string - -func (m methodMatcher) Match(r *http.Request, match *RouteMatch) bool { - return matchInArray(m, r.Method) -} - -// Methods adds a matcher for HTTP methods. -// It accepts a sequence of one or more methods to be matched, e.g.: -// "GET", "POST", "PUT". 
-func (r *Route) Methods(methods ...string) *Route { - for k, v := range methods { - methods[k] = strings.ToUpper(v) - } - return r.addMatcher(methodMatcher(methods)) -} - -// Path ----------------------------------------------------------------------- - -// Path adds a matcher for the URL path. -// It accepts a template with zero or more URL variables enclosed by {}. The -// template must start with a "/". -// Variables can define an optional regexp pattern to be matched: -// -// - {name} matches anything until the next slash. -// -// - {name:pattern} matches the given regexp pattern. -// -// For example: -// -// r := mux.NewRouter() -// r.Path("/products/").Handler(ProductsHandler) -// r.Path("/products/{key}").Handler(ProductsHandler) -// r.Path("/articles/{category}/{id:[0-9]+}"). -// Handler(ArticleHandler) -// -// Variable names must be unique in a given route. They can be retrieved -// calling mux.Vars(request). -func (r *Route) Path(tpl string) *Route { - r.err = r.addRegexpMatcher(tpl, false, false, false) - return r -} - -// PathPrefix ----------------------------------------------------------------- - -// PathPrefix adds a matcher for the URL path prefix. This matches if the given -// template is a prefix of the full URL path. See Route.Path() for details on -// the tpl argument. -// -// Note that it does not treat slashes specially ("/foobar/" will be matched by -// the prefix "/foo") so you may want to use a trailing slash here. -// -// Also note that the setting of Router.StrictSlash() has no effect on routes -// with a PathPrefix matcher. -func (r *Route) PathPrefix(tpl string) *Route { - r.err = r.addRegexpMatcher(tpl, false, true, false) - return r -} - -// Query ---------------------------------------------------------------------- - -// Queries adds a matcher for URL query values. -// It accepts a sequence of key/value pairs. Values may define variables. -// For example: -// -// r := mux.NewRouter() -// r.Queries("foo", "bar", "id", "{id:[0-9]+}") -// -// The above route will only match if the URL contains the defined queries -// values, e.g.: ?foo=bar&id=42. -// -// It the value is an empty string, it will match any value if the key is set. -// -// Variables can define an optional regexp pattern to be matched: -// -// - {name} matches anything until the next slash. -// -// - {name:pattern} matches the given regexp pattern. -func (r *Route) Queries(pairs ...string) *Route { - length := len(pairs) - if length%2 != 0 { - r.err = fmt.Errorf( - "mux: number of parameters must be multiple of 2, got %v", pairs) - return nil - } - for i := 0; i < length; i += 2 { - if r.err = r.addRegexpMatcher(pairs[i]+"="+pairs[i+1], false, false, true); r.err != nil { - return r - } - } - - return r -} - -// Schemes -------------------------------------------------------------------- - -// schemeMatcher matches the request against URL schemes. -type schemeMatcher []string - -func (m schemeMatcher) Match(r *http.Request, match *RouteMatch) bool { - return matchInArray(m, r.URL.Scheme) -} - -// Schemes adds a matcher for URL schemes. -// It accepts a sequence of schemes to be matched, e.g.: "http", "https". 
-func (r *Route) Schemes(schemes ...string) *Route { - for k, v := range schemes { - schemes[k] = strings.ToLower(v) - } - return r.addMatcher(schemeMatcher(schemes)) -} - -// BuildVarsFunc -------------------------------------------------------------- - -// BuildVarsFunc is the function signature used by custom build variable -// functions (which can modify route variables before a route's URL is built). -type BuildVarsFunc func(map[string]string) map[string]string - -// BuildVarsFunc adds a custom function to be used to modify build variables -// before a route's URL is built. -func (r *Route) BuildVarsFunc(f BuildVarsFunc) *Route { - r.buildVarsFunc = f - return r -} - -// Subrouter ------------------------------------------------------------------ - -// Subrouter creates a subrouter for the route. -// -// It will test the inner routes only if the parent route matched. For example: -// -// r := mux.NewRouter() -// s := r.Host("www.example.com").Subrouter() -// s.HandleFunc("/products/", ProductsHandler) -// s.HandleFunc("/products/{key}", ProductHandler) -// s.HandleFunc("/articles/{category}/{id:[0-9]+}"), ArticleHandler) -// -// Here, the routes registered in the subrouter won't be tested if the host -// doesn't match. -func (r *Route) Subrouter() *Router { - router := &Router{parent: r, strictSlash: r.strictSlash} - r.addMatcher(router) - return router -} - -// ---------------------------------------------------------------------------- -// URL building -// ---------------------------------------------------------------------------- - -// URL builds a URL for the route. -// -// It accepts a sequence of key/value pairs for the route variables. For -// example, given this route: -// -// r := mux.NewRouter() -// r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler). -// Name("article") -// -// ...a URL for it can be built using: -// -// url, err := r.Get("article").URL("category", "technology", "id", "42") -// -// ...which will return an url.URL with the following path: -// -// "/articles/technology/42" -// -// This also works for host variables: -// -// r := mux.NewRouter() -// r.Host("{subdomain}.domain.com"). -// HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler). -// Name("article") -// -// // url.String() will be "http://news.domain.com/articles/technology/42" -// url, err := r.Get("article").URL("subdomain", "news", -// "category", "technology", -// "id", "42") -// -// All variables defined in the route are required, and their values must -// conform to the corresponding patterns. -func (r *Route) URL(pairs ...string) (*url.URL, error) { - if r.err != nil { - return nil, r.err - } - if r.regexp == nil { - return nil, errors.New("mux: route doesn't have a host or path") - } - values, err := r.prepareVars(pairs...) - if err != nil { - return nil, err - } - var scheme, host, path string - if r.regexp.host != nil { - // Set a default scheme. - scheme = "http" - if host, err = r.regexp.host.url(values); err != nil { - return nil, err - } - } - if r.regexp.path != nil { - if path, err = r.regexp.path.url(values); err != nil { - return nil, err - } - } - return &url.URL{ - Scheme: scheme, - Host: host, - Path: path, - }, nil -} - -// URLHost builds the host part of the URL for a route. See Route.URL(). -// -// The route must have a host defined. 
-func (r *Route) URLHost(pairs ...string) (*url.URL, error) { - if r.err != nil { - return nil, r.err - } - if r.regexp == nil || r.regexp.host == nil { - return nil, errors.New("mux: route doesn't have a host") - } - values, err := r.prepareVars(pairs...) - if err != nil { - return nil, err - } - host, err := r.regexp.host.url(values) - if err != nil { - return nil, err - } - return &url.URL{ - Scheme: "http", - Host: host, - }, nil -} - -// URLPath builds the path part of the URL for a route. See Route.URL(). -// -// The route must have a path defined. -func (r *Route) URLPath(pairs ...string) (*url.URL, error) { - if r.err != nil { - return nil, r.err - } - if r.regexp == nil || r.regexp.path == nil { - return nil, errors.New("mux: route doesn't have a path") - } - values, err := r.prepareVars(pairs...) - if err != nil { - return nil, err - } - path, err := r.regexp.path.url(values) - if err != nil { - return nil, err - } - return &url.URL{ - Path: path, - }, nil -} - -// prepareVars converts the route variable pairs into a map. If the route has a -// BuildVarsFunc, it is invoked. -func (r *Route) prepareVars(pairs ...string) (map[string]string, error) { - m, err := mapFromPairsToString(pairs...) - if err != nil { - return nil, err - } - return r.buildVars(m), nil -} - -func (r *Route) buildVars(m map[string]string) map[string]string { - if r.parent != nil { - m = r.parent.buildVars(m) - } - if r.buildVarsFunc != nil { - m = r.buildVarsFunc(m) - } - return m -} - -// ---------------------------------------------------------------------------- -// parentRoute -// ---------------------------------------------------------------------------- - -// parentRoute allows routes to know about parent host and path definitions. -type parentRoute interface { - getNamedRoutes() map[string]*Route - getRegexpGroup() *routeRegexpGroup - buildVars(map[string]string) map[string]string -} - -// getNamedRoutes returns the map where named routes are registered. -func (r *Route) getNamedRoutes() map[string]*Route { - if r.parent == nil { - // During tests router is not always set. - r.parent = NewRouter() - } - return r.parent.getNamedRoutes() -} - -// getRegexpGroup returns regexp definitions from this route. -func (r *Route) getRegexpGroup() *routeRegexpGroup { - if r.regexp == nil { - if r.parent == nil { - // During tests router is not always set. - r.parent = NewRouter() - } - regexp := r.parent.getRegexpGroup() - if regexp == nil { - r.regexp = new(routeRegexpGroup) - } else { - // Copy. - r.regexp = &routeRegexpGroup{ - host: regexp.host, - path: regexp.path, - queries: regexp.queries, - } - } - } - return r.regexp -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/LICENSE b/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/LICENSE deleted file mode 100644 index e87a115e462..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/LICENSE +++ /dev/null @@ -1,363 +0,0 @@ -Mozilla Public License, version 2.0 - -1. Definitions - -1.1. "Contributor" - - means each individual or legal entity that creates, contributes to the - creation of, or owns Covered Software. - -1.2. "Contributor Version" - - means the combination of the Contributions of others (if any) used by a - Contributor and that particular Contributor's Contribution. - -1.3. "Contribution" - - means Covered Software of a particular Contributor. - -1.4. 
"Covered Software" - - means Source Code Form to which the initial Contributor has attached the - notice in Exhibit A, the Executable Form of such Source Code Form, and - Modifications of such Source Code Form, in each case including portions - thereof. - -1.5. "Incompatible With Secondary Licenses" - means - - a. that the initial Contributor has attached the notice described in - Exhibit B to the Covered Software; or - - b. that the Covered Software was made available under the terms of - version 1.1 or earlier of the License, but not also under the terms of - a Secondary License. - -1.6. "Executable Form" - - means any form of the work other than Source Code Form. - -1.7. "Larger Work" - - means a work that combines Covered Software with other material, in a - separate file or files, that is not Covered Software. - -1.8. "License" - - means this document. - -1.9. "Licensable" - - means having the right to grant, to the maximum extent possible, whether - at the time of the initial grant or subsequently, any and all of the - rights conveyed by this License. - -1.10. "Modifications" - - means any of the following: - - a. any file in Source Code Form that results from an addition to, - deletion from, or modification of the contents of Covered Software; or - - b. any new file in Source Code Form that contains any Covered Software. - -1.11. "Patent Claims" of a Contributor - - means any patent claim(s), including without limitation, method, - process, and apparatus claims, in any patent Licensable by such - Contributor that would be infringed, but for the grant of the License, - by the making, using, selling, offering for sale, having made, import, - or transfer of either its Contributions or its Contributor Version. - -1.12. "Secondary License" - - means either the GNU General Public License, Version 2.0, the GNU Lesser - General Public License, Version 2.1, the GNU Affero General Public - License, Version 3.0, or any later versions of those licenses. - -1.13. "Source Code Form" - - means the form of the work preferred for making modifications. - -1.14. "You" (or "Your") - - means an individual or a legal entity exercising rights under this - License. For legal entities, "You" includes any entity that controls, is - controlled by, or is under common control with You. For purposes of this - definition, "control" means (a) the power, direct or indirect, to cause - the direction or management of such entity, whether by contract or - otherwise, or (b) ownership of more than fifty percent (50%) of the - outstanding shares or beneficial ownership of such entity. - - -2. License Grants and Conditions - -2.1. Grants - - Each Contributor hereby grants You a world-wide, royalty-free, - non-exclusive license: - - a. under intellectual property rights (other than patent or trademark) - Licensable by such Contributor to use, reproduce, make available, - modify, display, perform, distribute, and otherwise exploit its - Contributions, either on an unmodified basis, with Modifications, or - as part of a Larger Work; and - - b. under Patent Claims of such Contributor to make, use, sell, offer for - sale, have made, import, and otherwise transfer either its - Contributions or its Contributor Version. - -2.2. Effective Date - - The licenses granted in Section 2.1 with respect to any Contribution - become effective for each Contribution on the date the Contributor first - distributes such Contribution. - -2.3. 
Limitations on Grant Scope - - The licenses granted in this Section 2 are the only rights granted under - this License. No additional rights or licenses will be implied from the - distribution or licensing of Covered Software under this License. - Notwithstanding Section 2.1(b) above, no patent license is granted by a - Contributor: - - a. for any code that a Contributor has removed from Covered Software; or - - b. for infringements caused by: (i) Your and any other third party's - modifications of Covered Software, or (ii) the combination of its - Contributions with other software (except as part of its Contributor - Version); or - - c. under Patent Claims infringed by Covered Software in the absence of - its Contributions. - - This License does not grant any rights in the trademarks, service marks, - or logos of any Contributor (except as may be necessary to comply with - the notice requirements in Section 3.4). - -2.4. Subsequent Licenses - - No Contributor makes additional grants as a result of Your choice to - distribute the Covered Software under a subsequent version of this - License (see Section 10.2) or under the terms of a Secondary License (if - permitted under the terms of Section 3.3). - -2.5. Representation - - Each Contributor represents that the Contributor believes its - Contributions are its original creation(s) or it has sufficient rights to - grant the rights to its Contributions conveyed by this License. - -2.6. Fair Use - - This License is not intended to limit any rights You have under - applicable copyright doctrines of fair use, fair dealing, or other - equivalents. - -2.7. Conditions - - Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in - Section 2.1. - - -3. Responsibilities - -3.1. Distribution of Source Form - - All distribution of Covered Software in Source Code Form, including any - Modifications that You create or to which You contribute, must be under - the terms of this License. You must inform recipients that the Source - Code Form of the Covered Software is governed by the terms of this - License, and how they can obtain a copy of this License. You may not - attempt to alter or restrict the recipients' rights in the Source Code - Form. - -3.2. Distribution of Executable Form - - If You distribute Covered Software in Executable Form then: - - a. such Covered Software must also be made available in Source Code Form, - as described in Section 3.1, and You must inform recipients of the - Executable Form how they can obtain a copy of such Source Code Form by - reasonable means in a timely manner, at a charge no more than the cost - of distribution to the recipient; and - - b. You may distribute such Executable Form under the terms of this - License, or sublicense it under different terms, provided that the - license for the Executable Form does not attempt to limit or alter the - recipients' rights in the Source Code Form under this License. - -3.3. Distribution of a Larger Work - - You may create and distribute a Larger Work under terms of Your choice, - provided that You also comply with the requirements of this License for - the Covered Software. 
If the Larger Work is a combination of Covered - Software with a work governed by one or more Secondary Licenses, and the - Covered Software is not Incompatible With Secondary Licenses, this - License permits You to additionally distribute such Covered Software - under the terms of such Secondary License(s), so that the recipient of - the Larger Work may, at their option, further distribute the Covered - Software under the terms of either this License or such Secondary - License(s). - -3.4. Notices - - You may not remove or alter the substance of any license notices - (including copyright notices, patent notices, disclaimers of warranty, or - limitations of liability) contained within the Source Code Form of the - Covered Software, except that You may alter any license notices to the - extent required to remedy known factual inaccuracies. - -3.5. Application of Additional Terms - - You may choose to offer, and to charge a fee for, warranty, support, - indemnity or liability obligations to one or more recipients of Covered - Software. However, You may do so only on Your own behalf, and not on - behalf of any Contributor. You must make it absolutely clear that any - such warranty, support, indemnity, or liability obligation is offered by - You alone, and You hereby agree to indemnify every Contributor for any - liability incurred by such Contributor as a result of warranty, support, - indemnity or liability terms You offer. You may include additional - disclaimers of warranty and limitations of liability specific to any - jurisdiction. - -4. Inability to Comply Due to Statute or Regulation - - If it is impossible for You to comply with any of the terms of this License - with respect to some or all of the Covered Software due to statute, - judicial order, or regulation then You must: (a) comply with the terms of - this License to the maximum extent possible; and (b) describe the - limitations and the code they affect. Such description must be placed in a - text file included with all distributions of the Covered Software under - this License. Except to the extent prohibited by statute or regulation, - such description must be sufficiently detailed for a recipient of ordinary - skill to be able to understand it. - -5. Termination - -5.1. The rights granted under this License will terminate automatically if You - fail to comply with any of its terms. However, if You become compliant, - then the rights granted under this License from a particular Contributor - are reinstated (a) provisionally, unless and until such Contributor - explicitly and finally terminates Your grants, and (b) on an ongoing - basis, if such Contributor fails to notify You of the non-compliance by - some reasonable means prior to 60 days after You have come back into - compliance. Moreover, Your grants from a particular Contributor are - reinstated on an ongoing basis if such Contributor notifies You of the - non-compliance by some reasonable means, this is the first time You have - received notice of non-compliance with this License from such - Contributor, and You become compliant prior to 30 days after Your receipt - of the notice. - -5.2. If You initiate litigation against any entity by asserting a patent - infringement claim (excluding declaratory judgment actions, - counter-claims, and cross-claims) alleging that a Contributor Version - directly or indirectly infringes any patent, then the rights granted to - You by any and all Contributors for the Covered Software under Section - 2.1 of this License shall terminate. 
- -5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user - license agreements (excluding distributors and resellers) which have been - validly granted by You or Your distributors under this License prior to - termination shall survive termination. - -6. Disclaimer of Warranty - - Covered Software is provided under this License on an "as is" basis, - without warranty of any kind, either expressed, implied, or statutory, - including, without limitation, warranties that the Covered Software is free - of defects, merchantable, fit for a particular purpose or non-infringing. - The entire risk as to the quality and performance of the Covered Software - is with You. Should any Covered Software prove defective in any respect, - You (not any Contributor) assume the cost of any necessary servicing, - repair, or correction. This disclaimer of warranty constitutes an essential - part of this License. No use of any Covered Software is authorized under - this License except under this disclaimer. - -7. Limitation of Liability - - Under no circumstances and under no legal theory, whether tort (including - negligence), contract, or otherwise, shall any Contributor, or anyone who - distributes Covered Software as permitted above, be liable to You for any - direct, indirect, special, incidental, or consequential damages of any - character including, without limitation, damages for lost profits, loss of - goodwill, work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses, even if such party shall have been - informed of the possibility of such damages. This limitation of liability - shall not apply to liability for death or personal injury resulting from - such party's negligence to the extent applicable law prohibits such - limitation. Some jurisdictions do not allow the exclusion or limitation of - incidental or consequential damages, so this exclusion and limitation may - not apply to You. - -8. Litigation - - Any litigation relating to this License may be brought only in the courts - of a jurisdiction where the defendant maintains its principal place of - business and such litigation shall be governed by laws of that - jurisdiction, without reference to its conflict-of-law provisions. Nothing - in this Section shall prevent a party's ability to bring cross-claims or - counter-claims. - -9. Miscellaneous - - This License represents the complete agreement concerning the subject - matter hereof. If any provision of this License is held to be - unenforceable, such provision shall be reformed only to the extent - necessary to make it enforceable. Any law or regulation which provides that - the language of a contract shall be construed against the drafter shall not - be used to construe this License against a Contributor. - - -10. Versions of the License - -10.1. New Versions - - Mozilla Foundation is the license steward. Except as provided in Section - 10.3, no one other than the license steward has the right to modify or - publish new versions of this License. Each version will be given a - distinguishing version number. - -10.2. Effect of New Versions - - You may distribute the Covered Software under the terms of the version - of the License under which You originally received the Covered Software, - or under the terms of any subsequent version published by the license - steward. - -10.3. 
Modified Versions - - If you create software not governed by this License, and you want to - create a new license for such software, you may create and use a - modified version of this License if you rename the license and remove - any references to the name of the license steward (except to note that - such modified license differs from this License). - -10.4. Distributing Source Code Form that is Incompatible With Secondary - Licenses If You choose to distribute Source Code Form that is - Incompatible With Secondary Licenses under the terms of this version of - the License, the notice described in Exhibit B of this License must be - attached. - -Exhibit A - Source Code Form License Notice - - This Source Code Form is subject to the - terms of the Mozilla Public License, v. - 2.0. If a copy of the MPL was not - distributed with this file, You can - obtain one at - http://mozilla.org/MPL/2.0/. - -If it is not possible or desirable to put the notice in a particular file, -then You may include the notice in a location (such as a LICENSE file in a -relevant directory) where a recipient would be likely to look for such a -notice. - -You may add additional accurate notices of copyright ownership. - -Exhibit B - "Incompatible With Secondary Licenses" Notice - - This Source Code Form is "Incompatible - With Secondary Licenses", as defined by - the Mozilla Public License, v. 2.0. - diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/README.md b/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/README.md deleted file mode 100644 index 036e5313fc8..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/README.md +++ /dev/null @@ -1,30 +0,0 @@ -# cleanhttp - -Functions for accessing "clean" Go http.Client values - -------------- - -The Go standard library contains a default `http.Client` called -`http.DefaultClient`. It is a common idiom in Go code to start with -`http.DefaultClient` and tweak it as necessary, and in fact, this is -encouraged; from the `http` package documentation: - -> The Client's Transport typically has internal state (cached TCP connections), -so Clients should be reused instead of created as needed. Clients are safe for -concurrent use by multiple goroutines. - -Unfortunately, this is a shared value, and it is not uncommon for libraries to -assume that they are free to modify it at will. With enough dependencies, it -can be very easy to encounter strange problems and race conditions due to -manipulation of this shared value across libraries and goroutines (clients are -safe for concurrent use, but writing values to the client struct itself is not -protected). - -Making things worse is the fact that a bare `http.Client` will use a default -`http.Transport` called `http.DefaultTransport`, which is another global value -that behaves the same way. So it is not simply enough to replace -`http.DefaultClient` with `&http.Client{}`. - -This repository provides some simple functions to get a "clean" `http.Client` --- one that uses the same default values as the Go standard library, but -returns a client that does not share any state with other clients. 
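The deleted README above explains why callers should build their own client rather than share `http.DefaultClient`. As a minimal sketch of that usage pattern — assuming the upstream import path `github.com/hashicorp/go-cleanhttp`, which exposes the same `DefaultClient`/`DefaultTransport` helpers as the vendored copy removed in this diff; the URL and the `MaxIdleConnsPerHost` tweak are illustrative only:

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	cleanhttp "github.com/hashicorp/go-cleanhttp"
)

func main() {
	// DefaultTransport mirrors the standard library's default transport
	// settings but returns a fresh value, so adjusting it here cannot race
	// with other packages that share http.DefaultTransport.
	transport := cleanhttp.DefaultTransport()
	transport.MaxIdleConnsPerHost = 4 // illustrative tweak, not a recommendation

	client := &http.Client{
		Transport: transport,
		Timeout:   10 * time.Second,
	}

	resp, err := client.Get("https://example.com") // placeholder URL
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Because each call hands back an independent transport, per-caller tuning stays isolated — which is the property the README above argues for.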
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/cleanhttp.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/cleanhttp.go deleted file mode 100644 index c692e23f461..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp/cleanhttp.go +++ /dev/null @@ -1,40 +0,0 @@ -package cleanhttp - -import ( - "net" - "net/http" - "runtime" - "time" -) - -// DefaultTransport returns a new http.Transport with the same default values -// as http.DefaultTransport -func DefaultTransport() *http.Transport { - transport := &http.Transport{ - Proxy: http.ProxyFromEnvironment, - Dial: (&net.Dialer{ - Timeout: 30 * time.Second, - KeepAlive: 30 * time.Second, - }).Dial, - TLSHandshakeTimeout: 10 * time.Second, - } - SetTransportFinalizer(transport) - return transport -} - -// DefaultClient returns a new http.Client with the same default values as -// http.Client, but with a non-shared Transport -func DefaultClient() *http.Client { - return &http.Client{ - Transport: DefaultTransport(), - } -} - -// SetTransportFinalizer sets a finalizer on the transport to ensure that -// idle connections are closed prior to garbage collection; otherwise -// these may leak -func SetTransportFinalizer(transport *http.Transport) { - runtime.SetFinalizer(&transport, func(t **http.Transport) { - (*t).CloseIdleConnections() - }) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/MAINTAINERS b/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/MAINTAINERS deleted file mode 100644 index edbe2006694..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Tianon Gravi (@tianon) -Aleksa Sarai (@cyphar) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup.go deleted file mode 100644 index 6f8a982ff72..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup.go +++ /dev/null @@ -1,108 +0,0 @@ -package user - -import ( - "errors" - "fmt" - "syscall" -) - -var ( - // The current operating system does not provide the required data for user lookups. - ErrUnsupported = errors.New("user lookup: operating system does not provide passwd-formatted data") -) - -func lookupUser(filter func(u User) bool) (User, error) { - // Get operating system-specific passwd reader-closer. - passwd, err := GetPasswd() - if err != nil { - return User{}, err - } - defer passwd.Close() - - // Get the users. - users, err := ParsePasswdFilter(passwd, filter) - if err != nil { - return User{}, err - } - - // No user entries found. - if len(users) == 0 { - return User{}, fmt.Errorf("no matching entries in passwd file") - } - - // Assume the first entry is the "correct" one. - return users[0], nil -} - -// CurrentUser looks up the current user by their user id in /etc/passwd. If the -// user cannot be found (or there is no /etc/passwd file on the filesystem), -// then CurrentUser returns an error. 
-func CurrentUser() (User, error) { - return LookupUid(syscall.Getuid()) -} - -// LookupUser looks up a user by their username in /etc/passwd. If the user -// cannot be found (or there is no /etc/passwd file on the filesystem), then -// LookupUser returns an error. -func LookupUser(username string) (User, error) { - return lookupUser(func(u User) bool { - return u.Name == username - }) -} - -// LookupUid looks up a user by their user id in /etc/passwd. If the user cannot -// be found (or there is no /etc/passwd file on the filesystem), then LookupId -// returns an error. -func LookupUid(uid int) (User, error) { - return lookupUser(func(u User) bool { - return u.Uid == uid - }) -} - -func lookupGroup(filter func(g Group) bool) (Group, error) { - // Get operating system-specific group reader-closer. - group, err := GetGroup() - if err != nil { - return Group{}, err - } - defer group.Close() - - // Get the users. - groups, err := ParseGroupFilter(group, filter) - if err != nil { - return Group{}, err - } - - // No user entries found. - if len(groups) == 0 { - return Group{}, fmt.Errorf("no matching entries in group file") - } - - // Assume the first entry is the "correct" one. - return groups[0], nil -} - -// CurrentGroup looks up the current user's group by their primary group id's -// entry in /etc/passwd. If the group cannot be found (or there is no -// /etc/group file on the filesystem), then CurrentGroup returns an error. -func CurrentGroup() (Group, error) { - return LookupGid(syscall.Getgid()) -} - -// LookupGroup looks up a group by its name in /etc/group. If the group cannot -// be found (or there is no /etc/group file on the filesystem), then LookupGroup -// returns an error. -func LookupGroup(groupname string) (Group, error) { - return lookupGroup(func(g Group) bool { - return g.Name == groupname - }) -} - -// LookupGid looks up a group by its group id in /etc/group. If the group cannot -// be found (or there is no /etc/group file on the filesystem), then LookupGid -// returns an error. -func LookupGid(gid int) (Group, error) { - return lookupGroup(func(g Group) bool { - return g.Gid == gid - }) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup_unix.go deleted file mode 100644 index 758b734c225..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup_unix.go +++ /dev/null @@ -1,30 +0,0 @@ -// +build darwin dragonfly freebsd linux netbsd openbsd solaris - -package user - -import ( - "io" - "os" -) - -// Unix-specific path to the passwd and group formatted files. 
-const ( - unixPasswdPath = "/etc/passwd" - unixGroupPath = "/etc/group" -) - -func GetPasswdPath() (string, error) { - return unixPasswdPath, nil -} - -func GetPasswd() (io.ReadCloser, error) { - return os.Open(unixPasswdPath) -} - -func GetGroupPath() (string, error) { - return unixGroupPath, nil -} - -func GetGroup() (io.ReadCloser, error) { - return os.Open(unixGroupPath) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup_unsupported.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup_unsupported.go deleted file mode 100644 index 7217948870c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/lookup_unsupported.go +++ /dev/null @@ -1,21 +0,0 @@ -// +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris - -package user - -import "io" - -func GetPasswdPath() (string, error) { - return "", ErrUnsupported -} - -func GetPasswd() (io.ReadCloser, error) { - return nil, ErrUnsupported -} - -func GetGroupPath() (string, error) { - return "", ErrUnsupported -} - -func GetGroup() (io.ReadCloser, error) { - return nil, ErrUnsupported -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/user.go b/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/user.go deleted file mode 100644 index e6375ea4dd5..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/github.com/opencontainers/runc/libcontainer/user/user.go +++ /dev/null @@ -1,418 +0,0 @@ -package user - -import ( - "bufio" - "fmt" - "io" - "os" - "strconv" - "strings" -) - -const ( - minId = 0 - maxId = 1<<31 - 1 //for 32-bit systems compatibility -) - -var ( - ErrRange = fmt.Errorf("Uids and gids must be in range %d-%d", minId, maxId) -) - -type User struct { - Name string - Pass string - Uid int - Gid int - Gecos string - Home string - Shell string -} - -type Group struct { - Name string - Pass string - Gid int - List []string -} - -func parseLine(line string, v ...interface{}) { - if line == "" { - return - } - - parts := strings.Split(line, ":") - for i, p := range parts { - if len(v) <= i { - // if we have more "parts" than we have places to put them, bail for great "tolerance" of naughty configuration files - break - } - - switch e := v[i].(type) { - case *string: - // "root", "adm", "/bin/bash" - *e = p - case *int: - // "0", "4", "1000" - // ignore string to int conversion errors, for great "tolerance" of naughty configuration files - *e, _ = strconv.Atoi(p) - case *[]string: - // "", "root", "root,adm,daemon" - if p != "" { - *e = strings.Split(p, ",") - } else { - *e = []string{} - } - default: - // panic, because this is a programming/logic error, not a runtime one - panic("parseLine expects only pointers! 
argument " + strconv.Itoa(i) + " is not a pointer!") - } - } -} - -func ParsePasswdFile(path string) ([]User, error) { - passwd, err := os.Open(path) - if err != nil { - return nil, err - } - defer passwd.Close() - return ParsePasswd(passwd) -} - -func ParsePasswd(passwd io.Reader) ([]User, error) { - return ParsePasswdFilter(passwd, nil) -} - -func ParsePasswdFileFilter(path string, filter func(User) bool) ([]User, error) { - passwd, err := os.Open(path) - if err != nil { - return nil, err - } - defer passwd.Close() - return ParsePasswdFilter(passwd, filter) -} - -func ParsePasswdFilter(r io.Reader, filter func(User) bool) ([]User, error) { - if r == nil { - return nil, fmt.Errorf("nil source for passwd-formatted data") - } - - var ( - s = bufio.NewScanner(r) - out = []User{} - ) - - for s.Scan() { - if err := s.Err(); err != nil { - return nil, err - } - - text := strings.TrimSpace(s.Text()) - if text == "" { - continue - } - - // see: man 5 passwd - // name:password:UID:GID:GECOS:directory:shell - // Name:Pass:Uid:Gid:Gecos:Home:Shell - // root:x:0:0:root:/root:/bin/bash - // adm:x:3:4:adm:/var/adm:/bin/false - p := User{} - parseLine( - text, - &p.Name, &p.Pass, &p.Uid, &p.Gid, &p.Gecos, &p.Home, &p.Shell, - ) - - if filter == nil || filter(p) { - out = append(out, p) - } - } - - return out, nil -} - -func ParseGroupFile(path string) ([]Group, error) { - group, err := os.Open(path) - if err != nil { - return nil, err - } - defer group.Close() - return ParseGroup(group) -} - -func ParseGroup(group io.Reader) ([]Group, error) { - return ParseGroupFilter(group, nil) -} - -func ParseGroupFileFilter(path string, filter func(Group) bool) ([]Group, error) { - group, err := os.Open(path) - if err != nil { - return nil, err - } - defer group.Close() - return ParseGroupFilter(group, filter) -} - -func ParseGroupFilter(r io.Reader, filter func(Group) bool) ([]Group, error) { - if r == nil { - return nil, fmt.Errorf("nil source for group-formatted data") - } - - var ( - s = bufio.NewScanner(r) - out = []Group{} - ) - - for s.Scan() { - if err := s.Err(); err != nil { - return nil, err - } - - text := s.Text() - if text == "" { - continue - } - - // see: man 5 group - // group_name:password:GID:user_list - // Name:Pass:Gid:List - // root:x:0:root - // adm:x:4:root,adm,daemon - p := Group{} - parseLine( - text, - &p.Name, &p.Pass, &p.Gid, &p.List, - ) - - if filter == nil || filter(p) { - out = append(out, p) - } - } - - return out, nil -} - -type ExecUser struct { - Uid, Gid int - Sgids []int - Home string -} - -// GetExecUserPath is a wrapper for GetExecUser. It reads data from each of the -// given file paths and uses that data as the arguments to GetExecUser. If the -// files cannot be opened for any reason, the error is ignored and a nil -// io.Reader is passed instead. -func GetExecUserPath(userSpec string, defaults *ExecUser, passwdPath, groupPath string) (*ExecUser, error) { - passwd, err := os.Open(passwdPath) - if err != nil { - passwd = nil - } else { - defer passwd.Close() - } - - group, err := os.Open(groupPath) - if err != nil { - group = nil - } else { - defer group.Close() - } - - return GetExecUser(userSpec, defaults, passwd, group) -} - -// GetExecUser parses a user specification string (using the passwd and group -// readers as sources for /etc/passwd and /etc/group data, respectively). In -// the case of blank fields or missing data from the sources, the values in -// defaults is used. 
-// -// GetExecUser will return an error if a user or group literal could not be -// found in any entry in passwd and group respectively. -// -// Examples of valid user specifications are: -// * "" -// * "user" -// * "uid" -// * "user:group" -// * "uid:gid -// * "user:gid" -// * "uid:group" -func GetExecUser(userSpec string, defaults *ExecUser, passwd, group io.Reader) (*ExecUser, error) { - var ( - userArg, groupArg string - name string - ) - - if defaults == nil { - defaults = new(ExecUser) - } - - // Copy over defaults. - user := &ExecUser{ - Uid: defaults.Uid, - Gid: defaults.Gid, - Sgids: defaults.Sgids, - Home: defaults.Home, - } - - // Sgids slice *cannot* be nil. - if user.Sgids == nil { - user.Sgids = []int{} - } - - // allow for userArg to have either "user" syntax, or optionally "user:group" syntax - parseLine(userSpec, &userArg, &groupArg) - - users, err := ParsePasswdFilter(passwd, func(u User) bool { - if userArg == "" { - return u.Uid == user.Uid - } - return u.Name == userArg || strconv.Itoa(u.Uid) == userArg - }) - if err != nil && passwd != nil { - if userArg == "" { - userArg = strconv.Itoa(user.Uid) - } - return nil, fmt.Errorf("Unable to find user %v: %v", userArg, err) - } - - haveUser := users != nil && len(users) > 0 - if haveUser { - // if we found any user entries that matched our filter, let's take the first one as "correct" - name = users[0].Name - user.Uid = users[0].Uid - user.Gid = users[0].Gid - user.Home = users[0].Home - } else if userArg != "" { - // we asked for a user but didn't find them... let's check to see if we wanted a numeric user - user.Uid, err = strconv.Atoi(userArg) - if err != nil { - // not numeric - we have to bail - return nil, fmt.Errorf("Unable to find user %v", userArg) - } - - // Must be inside valid uid range. - if user.Uid < minId || user.Uid > maxId { - return nil, ErrRange - } - - // if userArg couldn't be found in /etc/passwd but is numeric, just roll with it - this is legit - } - - if groupArg != "" || name != "" { - groups, err := ParseGroupFilter(group, func(g Group) bool { - // Explicit group format takes precedence. - if groupArg != "" { - return g.Name == groupArg || strconv.Itoa(g.Gid) == groupArg - } - - // Check if user is a member. - for _, u := range g.List { - if u == name { - return true - } - } - - return false - }) - if err != nil && group != nil { - return nil, fmt.Errorf("Unable to find groups for user %v: %v", users[0].Name, err) - } - - haveGroup := groups != nil && len(groups) > 0 - if groupArg != "" { - if haveGroup { - // if we found any group entries that matched our filter, let's take the first one as "correct" - user.Gid = groups[0].Gid - } else { - // we asked for a group but didn't find id... let's check to see if we wanted a numeric group - user.Gid, err = strconv.Atoi(groupArg) - if err != nil { - // not numeric - we have to bail - return nil, fmt.Errorf("Unable to find group %v", groupArg) - } - - // Ensure gid is inside gid range. - if user.Gid < minId || user.Gid > maxId { - return nil, ErrRange - } - - // if groupArg couldn't be found in /etc/group but is numeric, just roll with it - this is legit - } - } else if haveGroup { - // If implicit group format, fill supplementary gids. - user.Sgids = make([]int, len(groups)) - for i, group := range groups { - user.Sgids[i] = group.Gid - } - } - } - - return user, nil -} - -// GetAdditionalGroups looks up a list of groups by name or group id -// against the given /etc/group formatted data. If a group name cannot -// be found, an error will be returned. 
If a group id cannot be found, -// or the given group data is nil, the id will be returned as-is -// provided it is in the legal range. -func GetAdditionalGroups(additionalGroups []string, group io.Reader) ([]int, error) { - var groups = []Group{} - if group != nil { - var err error - groups, err = ParseGroupFilter(group, func(g Group) bool { - for _, ag := range additionalGroups { - if g.Name == ag || strconv.Itoa(g.Gid) == ag { - return true - } - } - return false - }) - if err != nil { - return nil, fmt.Errorf("Unable to find additional groups %v: %v", additionalGroups, err) - } - } - - gidMap := make(map[int]struct{}) - for _, ag := range additionalGroups { - var found bool - for _, g := range groups { - // if we found a matched group either by name or gid, take the - // first matched as correct - if g.Name == ag || strconv.Itoa(g.Gid) == ag { - if _, ok := gidMap[g.Gid]; !ok { - gidMap[g.Gid] = struct{}{} - found = true - break - } - } - } - // we asked for a group but didn't find it. let's check to see - // if we wanted a numeric group - if !found { - gid, err := strconv.Atoi(ag) - if err != nil { - return nil, fmt.Errorf("Unable to find group %s", ag) - } - // Ensure gid is inside gid range. - if gid < minId || gid > maxId { - return nil, ErrRange - } - gidMap[gid] = struct{}{} - } - } - gids := []int{} - for gid := range gidMap { - gids = append(gids, gid) - } - return gids, nil -} - -// GetAdditionalGroupsPath is a wrapper around GetAdditionalGroups -// that opens the groupPath given and gives it as an argument to -// GetAdditionalGroups. -func GetAdditionalGroupsPath(additionalGroups []string, groupPath string) ([]int, error) { - group, err := os.Open(groupPath) - if err == nil { - defer group.Close() - } - return GetAdditionalGroups(additionalGroups, group) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/net/context/context.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/net/context/context.go deleted file mode 100644 index dd138571fa5..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/net/context/context.go +++ /dev/null @@ -1,447 +0,0 @@ -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Package context defines the Context type, which carries deadlines, -// cancelation signals, and other request-scoped values across API boundaries -// and between processes. -// -// Incoming requests to a server should create a Context, and outgoing calls to -// servers should accept a Context. The chain of function calls between must -// propagate the Context, optionally replacing it with a modified copy created -// using WithDeadline, WithTimeout, WithCancel, or WithValue. -// -// Programs that use Contexts should follow these rules to keep interfaces -// consistent across packages and enable static analysis tools to check context -// propagation: -// -// Do not store Contexts inside a struct type; instead, pass a Context -// explicitly to each function that needs it. The Context should be the first -// parameter, typically named ctx: -// -// func DoSomething(ctx context.Context, arg Arg) error { -// // ... use ctx ... -// } -// -// Do not pass a nil Context, even if a function permits it. Pass context.TODO -// if you are unsure about which Context to use. 
-// -// Use context Values only for request-scoped data that transits processes and -// APIs, not for passing optional parameters to functions. -// -// The same Context may be passed to functions running in different goroutines; -// Contexts are safe for simultaneous use by multiple goroutines. -// -// See http://blog.golang.org/context for example code for a server that uses -// Contexts. -package context // import "github.com/fsouza/go-dockerclient/external/golang.org/x/net/context" - -import ( - "errors" - "fmt" - "sync" - "time" -) - -// A Context carries a deadline, a cancelation signal, and other values across -// API boundaries. -// -// Context's methods may be called by multiple goroutines simultaneously. -type Context interface { - // Deadline returns the time when work done on behalf of this context - // should be canceled. Deadline returns ok==false when no deadline is - // set. Successive calls to Deadline return the same results. - Deadline() (deadline time.Time, ok bool) - - // Done returns a channel that's closed when work done on behalf of this - // context should be canceled. Done may return nil if this context can - // never be canceled. Successive calls to Done return the same value. - // - // WithCancel arranges for Done to be closed when cancel is called; - // WithDeadline arranges for Done to be closed when the deadline - // expires; WithTimeout arranges for Done to be closed when the timeout - // elapses. - // - // Done is provided for use in select statements: - // - // // Stream generates values with DoSomething and sends them to out - // // until DoSomething returns an error or ctx.Done is closed. - // func Stream(ctx context.Context, out <-chan Value) error { - // for { - // v, err := DoSomething(ctx) - // if err != nil { - // return err - // } - // select { - // case <-ctx.Done(): - // return ctx.Err() - // case out <- v: - // } - // } - // } - // - // See http://blog.golang.org/pipelines for more examples of how to use - // a Done channel for cancelation. - Done() <-chan struct{} - - // Err returns a non-nil error value after Done is closed. Err returns - // Canceled if the context was canceled or DeadlineExceeded if the - // context's deadline passed. No other values for Err are defined. - // After Done is closed, successive calls to Err return the same value. - Err() error - - // Value returns the value associated with this context for key, or nil - // if no value is associated with key. Successive calls to Value with - // the same key returns the same result. - // - // Use context values only for request-scoped data that transits - // processes and API boundaries, not for passing optional parameters to - // functions. - // - // A key identifies a specific value in a Context. Functions that wish - // to store values in Context typically allocate a key in a global - // variable then use that key as the argument to context.WithValue and - // Context.Value. A key can be any type that supports equality; - // packages should define keys as an unexported type to avoid - // collisions. - // - // Packages that define a Context key should provide type-safe accessors - // for the values stores using that key: - // - // // Package user defines a User type that's stored in Contexts. - // package user - // - // import "golang.org/x/net/context" - // - // // User is the type of value stored in the Contexts. - // type User struct {...} - // - // // key is an unexported type for keys defined in this package. 
- // // This prevents collisions with keys defined in other packages. - // type key int - // - // // userKey is the key for user.User values in Contexts. It is - // // unexported; clients use user.NewContext and user.FromContext - // // instead of using this key directly. - // var userKey key = 0 - // - // // NewContext returns a new Context that carries value u. - // func NewContext(ctx context.Context, u *User) context.Context { - // return context.WithValue(ctx, userKey, u) - // } - // - // // FromContext returns the User value stored in ctx, if any. - // func FromContext(ctx context.Context) (*User, bool) { - // u, ok := ctx.Value(userKey).(*User) - // return u, ok - // } - Value(key interface{}) interface{} -} - -// Canceled is the error returned by Context.Err when the context is canceled. -var Canceled = errors.New("context canceled") - -// DeadlineExceeded is the error returned by Context.Err when the context's -// deadline passes. -var DeadlineExceeded = errors.New("context deadline exceeded") - -// An emptyCtx is never canceled, has no values, and has no deadline. It is not -// struct{}, since vars of this type must have distinct addresses. -type emptyCtx int - -func (*emptyCtx) Deadline() (deadline time.Time, ok bool) { - return -} - -func (*emptyCtx) Done() <-chan struct{} { - return nil -} - -func (*emptyCtx) Err() error { - return nil -} - -func (*emptyCtx) Value(key interface{}) interface{} { - return nil -} - -func (e *emptyCtx) String() string { - switch e { - case background: - return "context.Background" - case todo: - return "context.TODO" - } - return "unknown empty Context" -} - -var ( - background = new(emptyCtx) - todo = new(emptyCtx) -) - -// Background returns a non-nil, empty Context. It is never canceled, has no -// values, and has no deadline. It is typically used by the main function, -// initialization, and tests, and as the top-level Context for incoming -// requests. -func Background() Context { - return background -} - -// TODO returns a non-nil, empty Context. Code should use context.TODO when -// it's unclear which Context to use or it is not yet available (because the -// surrounding function has not yet been extended to accept a Context -// parameter). TODO is recognized by static analysis tools that determine -// whether Contexts are propagated correctly in a program. -func TODO() Context { - return todo -} - -// A CancelFunc tells an operation to abandon its work. -// A CancelFunc does not wait for the work to stop. -// After the first call, subsequent calls to a CancelFunc do nothing. -type CancelFunc func() - -// WithCancel returns a copy of parent with a new Done channel. The returned -// context's Done channel is closed when the returned cancel function is called -// or when the parent context's Done channel is closed, whichever happens first. -// -// Canceling this context releases resources associated with it, so code should -// call cancel as soon as the operations running in this Context complete. -func WithCancel(parent Context) (ctx Context, cancel CancelFunc) { - c := newCancelCtx(parent) - propagateCancel(parent, c) - return c, func() { c.cancel(true, Canceled) } -} - -// newCancelCtx returns an initialized cancelCtx. -func newCancelCtx(parent Context) *cancelCtx { - return &cancelCtx{ - Context: parent, - done: make(chan struct{}), - } -} - -// propagateCancel arranges for child to be canceled when parent is. 
-func propagateCancel(parent Context, child canceler) { - if parent.Done() == nil { - return // parent is never canceled - } - if p, ok := parentCancelCtx(parent); ok { - p.mu.Lock() - if p.err != nil { - // parent has already been canceled - child.cancel(false, p.err) - } else { - if p.children == nil { - p.children = make(map[canceler]bool) - } - p.children[child] = true - } - p.mu.Unlock() - } else { - go func() { - select { - case <-parent.Done(): - child.cancel(false, parent.Err()) - case <-child.Done(): - } - }() - } -} - -// parentCancelCtx follows a chain of parent references until it finds a -// *cancelCtx. This function understands how each of the concrete types in this -// package represents its parent. -func parentCancelCtx(parent Context) (*cancelCtx, bool) { - for { - switch c := parent.(type) { - case *cancelCtx: - return c, true - case *timerCtx: - return c.cancelCtx, true - case *valueCtx: - parent = c.Context - default: - return nil, false - } - } -} - -// removeChild removes a context from its parent. -func removeChild(parent Context, child canceler) { - p, ok := parentCancelCtx(parent) - if !ok { - return - } - p.mu.Lock() - if p.children != nil { - delete(p.children, child) - } - p.mu.Unlock() -} - -// A canceler is a context type that can be canceled directly. The -// implementations are *cancelCtx and *timerCtx. -type canceler interface { - cancel(removeFromParent bool, err error) - Done() <-chan struct{} -} - -// A cancelCtx can be canceled. When canceled, it also cancels any children -// that implement canceler. -type cancelCtx struct { - Context - - done chan struct{} // closed by the first cancel call. - - mu sync.Mutex - children map[canceler]bool // set to nil by the first cancel call - err error // set to non-nil by the first cancel call -} - -func (c *cancelCtx) Done() <-chan struct{} { - return c.done -} - -func (c *cancelCtx) Err() error { - c.mu.Lock() - defer c.mu.Unlock() - return c.err -} - -func (c *cancelCtx) String() string { - return fmt.Sprintf("%v.WithCancel", c.Context) -} - -// cancel closes c.done, cancels each of c's children, and, if -// removeFromParent is true, removes c from its parent's children. -func (c *cancelCtx) cancel(removeFromParent bool, err error) { - if err == nil { - panic("context: internal error: missing cancel error") - } - c.mu.Lock() - if c.err != nil { - c.mu.Unlock() - return // already canceled - } - c.err = err - close(c.done) - for child := range c.children { - // NOTE: acquiring the child's lock while holding parent's lock. - child.cancel(false, err) - } - c.children = nil - c.mu.Unlock() - - if removeFromParent { - removeChild(c.Context, c) - } -} - -// WithDeadline returns a copy of the parent context with the deadline adjusted -// to be no later than d. If the parent's deadline is already earlier than d, -// WithDeadline(parent, d) is semantically equivalent to parent. The returned -// context's Done channel is closed when the deadline expires, when the returned -// cancel function is called, or when the parent context's Done channel is -// closed, whichever happens first. -// -// Canceling this context releases resources associated with it, so code should -// call cancel as soon as the operations running in this Context complete. -func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) { - if cur, ok := parent.Deadline(); ok && cur.Before(deadline) { - // The current deadline is already sooner than the new one. 
- return WithCancel(parent) - } - c := &timerCtx{ - cancelCtx: newCancelCtx(parent), - deadline: deadline, - } - propagateCancel(parent, c) - d := deadline.Sub(time.Now()) - if d <= 0 { - c.cancel(true, DeadlineExceeded) // deadline has already passed - return c, func() { c.cancel(true, Canceled) } - } - c.mu.Lock() - defer c.mu.Unlock() - if c.err == nil { - c.timer = time.AfterFunc(d, func() { - c.cancel(true, DeadlineExceeded) - }) - } - return c, func() { c.cancel(true, Canceled) } -} - -// A timerCtx carries a timer and a deadline. It embeds a cancelCtx to -// implement Done and Err. It implements cancel by stopping its timer then -// delegating to cancelCtx.cancel. -type timerCtx struct { - *cancelCtx - timer *time.Timer // Under cancelCtx.mu. - - deadline time.Time -} - -func (c *timerCtx) Deadline() (deadline time.Time, ok bool) { - return c.deadline, true -} - -func (c *timerCtx) String() string { - return fmt.Sprintf("%v.WithDeadline(%s [%s])", c.cancelCtx.Context, c.deadline, c.deadline.Sub(time.Now())) -} - -func (c *timerCtx) cancel(removeFromParent bool, err error) { - c.cancelCtx.cancel(false, err) - if removeFromParent { - // Remove this timerCtx from its parent cancelCtx's children. - removeChild(c.cancelCtx.Context, c) - } - c.mu.Lock() - if c.timer != nil { - c.timer.Stop() - c.timer = nil - } - c.mu.Unlock() -} - -// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)). -// -// Canceling this context releases resources associated with it, so code should -// call cancel as soon as the operations running in this Context complete: -// -// func slowOperationWithTimeout(ctx context.Context) (Result, error) { -// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond) -// defer cancel() // releases resources if slowOperation completes before timeout elapses -// return slowOperation(ctx) -// } -func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { - return WithDeadline(parent, time.Now().Add(timeout)) -} - -// WithValue returns a copy of parent in which the value associated with key is -// val. -// -// Use context Values only for request-scoped data that transits processes and -// APIs, not for passing optional parameters to functions. -func WithValue(parent Context, key interface{}, val interface{}) Context { - return &valueCtx{parent, key, val} -} - -// A valueCtx carries a key-value pair. It implements Value for that key and -// delegates all other calls to the embedded Context. -type valueCtx struct { - Context - key, val interface{} -} - -func (c *valueCtx) String() string { - return fmt.Sprintf("%v.WithValue(%#v, %#v)", c.Context, c.key, c.val) -} - -func (c *valueCtx) Value(key interface{}) interface{} { - if c.key == key { - return c.val - } - return c.Context.Value(key) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm.s deleted file mode 100644 index 8ed2fdb94b1..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm.s +++ /dev/null @@ -1,10 +0,0 @@ -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build !gccgo - -#include "textflag.h" - -TEXT ·use(SB),NOSPLIT,$0 - RET diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_386.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_386.s deleted file mode 100644 index 8a7278319e3..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_386.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for 386, Darwin -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-28 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-52 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_amd64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_amd64.s deleted file mode 100644 index 6321421f272..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_amd64.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for AMD64, Darwin -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-56 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-80 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-104 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_arm.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_arm.s deleted file mode 100644 index 333242d5061..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_arm.s +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo -// +build arm,darwin - -#include "textflag.h" - -// -// System call support for ARM, Darwin -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. 
- -TEXT ·Syscall(SB),NOSPLIT,$0-28 - B syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - B syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-52 - B syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - B syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - B syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_arm64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_arm64.s deleted file mode 100644 index 97e01743718..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_darwin_arm64.s +++ /dev/null @@ -1,30 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo -// +build arm64,darwin - -#include "textflag.h" - -// -// System call support for AMD64, Darwin -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-56 - B syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-80 - B syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-104 - B syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 - B syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 - B syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_dragonfly_386.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_dragonfly_386.s deleted file mode 100644 index 7e55e0d3175..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_dragonfly_386.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for 386, FreeBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-32 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-44 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-56 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-32 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-44 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_dragonfly_amd64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_dragonfly_amd64.s deleted file mode 100644 index d5ed6726cc1..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_dragonfly_amd64.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for AMD64, DragonFly -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. 
- -TEXT ·Syscall(SB),NOSPLIT,$0-64 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-88 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-112 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-64 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-88 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_386.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_386.s deleted file mode 100644 index c9a0a260156..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_386.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for 386, FreeBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-28 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-52 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_amd64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_amd64.s deleted file mode 100644 index 35172477c86..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_amd64.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for AMD64, FreeBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-56 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-80 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-104 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_arm.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_arm.s deleted file mode 100644 index 9227c875bfe..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_freebsd_arm.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2012 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for ARM, FreeBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. 
- -TEXT ·Syscall(SB),NOSPLIT,$0-28 - B syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - B syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-52 - B syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - B syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - B syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_386.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_386.s deleted file mode 100644 index 4db2909323f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_386.s +++ /dev/null @@ -1,35 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System calls for 386, Linux -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-28 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - JMP syscall·Syscall6(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - JMP syscall·RawSyscall6(SB) - -TEXT ·socketcall(SB),NOSPLIT,$0-36 - JMP syscall·socketcall(SB) - -TEXT ·rawsocketcall(SB),NOSPLIT,$0-36 - JMP syscall·rawsocketcall(SB) - -TEXT ·seek(SB),NOSPLIT,$0-28 - JMP syscall·seek(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_amd64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_amd64.s deleted file mode 100644 index 44e25c62f92..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_amd64.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System calls for AMD64, Linux -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-56 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-80 - JMP syscall·Syscall6(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 - JMP syscall·RawSyscall6(SB) - -TEXT ·gettimeofday(SB),NOSPLIT,$0-16 - JMP syscall·gettimeofday(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_arm.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_arm.s deleted file mode 100644 index cf0b5746582..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_arm.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System calls for arm, Linux -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. 
- -TEXT ·Syscall(SB),NOSPLIT,$0-28 - B syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - B syscall·Syscall6(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - B syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - B syscall·RawSyscall6(SB) - -TEXT ·seek(SB),NOSPLIT,$0-32 - B syscall·seek(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_arm64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_arm64.s deleted file mode 100644 index 4be9bfedeaf..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_arm64.s +++ /dev/null @@ -1,24 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build linux -// +build arm64 -// +build !gccgo - -#include "textflag.h" - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-56 - B syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-80 - B syscall·Syscall6(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 - B syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 - B syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_ppc64x.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_ppc64x.s deleted file mode 100644 index 8d231feb4b9..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_linux_ppc64x.s +++ /dev/null @@ -1,28 +0,0 @@ -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build linux -// +build ppc64 ppc64le -// +build !gccgo - -#include "textflag.h" - -// -// System calls for ppc64, Linux -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-56 - BR syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-80 - BR syscall·Syscall6(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 - BR syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 - BR syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_386.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_386.s deleted file mode 100644 index 48bdcd7632a..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_386.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for 386, NetBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. 
- -TEXT ·Syscall(SB),NOSPLIT,$0-28 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-52 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_amd64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_amd64.s deleted file mode 100644 index 2ede05c72f0..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_amd64.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for AMD64, NetBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-56 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-80 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-104 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_arm.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_arm.s deleted file mode 100644 index e8928571c45..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_netbsd_arm.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for ARM, NetBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-28 - B syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - B syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-52 - B syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - B syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - B syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_openbsd_386.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_openbsd_386.s deleted file mode 100644 index 00576f3c835..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_openbsd_386.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for 386, OpenBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. 
- -TEXT ·Syscall(SB),NOSPLIT,$0-28 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-40 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-52 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_openbsd_amd64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_openbsd_amd64.s deleted file mode 100644 index 790ef77f86e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_openbsd_amd64.s +++ /dev/null @@ -1,29 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System call support for AMD64, OpenBSD -// - -// Just jump to package syscall's implementation for all these functions. -// The runtime may know about them. - -TEXT ·Syscall(SB),NOSPLIT,$0-56 - JMP syscall·Syscall(SB) - -TEXT ·Syscall6(SB),NOSPLIT,$0-80 - JMP syscall·Syscall6(SB) - -TEXT ·Syscall9(SB),NOSPLIT,$0-104 - JMP syscall·Syscall9(SB) - -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 - JMP syscall·RawSyscall(SB) - -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 - JMP syscall·RawSyscall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_solaris_amd64.s b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_solaris_amd64.s deleted file mode 100644 index 43ed17a05f3..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/asm_solaris_amd64.s +++ /dev/null @@ -1,17 +0,0 @@ -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !gccgo - -#include "textflag.h" - -// -// System calls for amd64, Solaris are implemented in runtime/syscall_solaris.go -// - -TEXT ·sysvicall6(SB),NOSPLIT,$0-64 - JMP syscall·sysvicall6(SB) - -TEXT ·rawSysvicall6(SB),NOSPLIT,$0-64 - JMP syscall·rawSysvicall6(SB) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/constants.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/constants.go deleted file mode 100644 index a96f0ebc264..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/constants.go +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build darwin dragonfly freebsd linux netbsd openbsd solaris - -package unix - -const ( - R_OK = 0x4 - W_OK = 0x2 - X_OK = 0x1 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/env_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/env_unix.go deleted file mode 100644 index 45e281a047d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/env_unix.go +++ /dev/null @@ -1,27 +0,0 @@ -// Copyright 2010 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build darwin dragonfly freebsd linux netbsd openbsd solaris - -// Unix environment variables. - -package unix - -import "syscall" - -func Getenv(key string) (value string, found bool) { - return syscall.Getenv(key) -} - -func Setenv(key, value string) error { - return syscall.Setenv(key, value) -} - -func Clearenv() { - syscall.Clearenv() -} - -func Environ() []string { - return syscall.Environ() -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/env_unset.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/env_unset.go deleted file mode 100644 index 9222262559b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/env_unset.go +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build go1.4 - -package unix - -import "syscall" - -func Unsetenv(key string) error { - // This was added in Go 1.4. - return syscall.Unsetenv(key) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/flock.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/flock.go deleted file mode 100644 index ce67a59528a..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/flock.go +++ /dev/null @@ -1,24 +0,0 @@ -// +build linux darwin freebsd openbsd netbsd dragonfly - -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build darwin dragonfly freebsd linux netbsd openbsd - -package unix - -import "unsafe" - -// fcntl64Syscall is usually SYS_FCNTL, but is overridden on 32-bit Linux -// systems by flock_linux_32bit.go to be SYS_FCNTL64. -var fcntl64Syscall uintptr = SYS_FCNTL - -// FcntlFlock performs a fcntl syscall for the F_GETLK, F_SETLK or F_SETLKW command. -func FcntlFlock(fd uintptr, cmd int, lk *Flock_t) error { - _, _, errno := Syscall(fcntl64Syscall, fd, uintptr(cmd), uintptr(unsafe.Pointer(lk))) - if errno == 0 { - return nil - } - return errno -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/flock_linux_32bit.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/flock_linux_32bit.go deleted file mode 100644 index 362831c3f70..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/flock_linux_32bit.go +++ /dev/null @@ -1,13 +0,0 @@ -// +build linux,386 linux,arm - -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package unix - -func init() { - // On 32-bit Linux systems, the fcntl syscall that matches Go's - // Flock_t type is SYS_FCNTL64, not SYS_FCNTL. - fcntl64Syscall = SYS_FCNTL64 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo.go deleted file mode 100644 index 94c82321247..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo.go +++ /dev/null @@ -1,46 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. 
-// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build gccgo - -package unix - -import "syscall" - -// We can't use the gc-syntax .s files for gccgo. On the plus side -// much of the functionality can be written directly in Go. - -//extern gccgoRealSyscall -func realSyscall(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r, errno uintptr) - -func Syscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) { - syscall.Entersyscall() - r, errno := realSyscall(trap, a1, a2, a3, 0, 0, 0, 0, 0, 0) - syscall.Exitsyscall() - return r, 0, syscall.Errno(errno) -} - -func Syscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) { - syscall.Entersyscall() - r, errno := realSyscall(trap, a1, a2, a3, a4, a5, a6, 0, 0, 0) - syscall.Exitsyscall() - return r, 0, syscall.Errno(errno) -} - -func Syscall9(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) { - syscall.Entersyscall() - r, errno := realSyscall(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9) - syscall.Exitsyscall() - return r, 0, syscall.Errno(errno) -} - -func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) { - r, errno := realSyscall(trap, a1, a2, a3, 0, 0, 0, 0, 0, 0) - return r, 0, syscall.Errno(errno) -} - -func RawSyscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) { - r, errno := realSyscall(trap, a1, a2, a3, a4, a5, a6, 0, 0, 0) - return r, 0, syscall.Errno(errno) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo_c.c b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo_c.c deleted file mode 100644 index 07f6be0392e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo_c.c +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build gccgo - -#include -#include -#include - -#define _STRINGIFY2_(x) #x -#define _STRINGIFY_(x) _STRINGIFY2_(x) -#define GOSYM_PREFIX _STRINGIFY_(__USER_LABEL_PREFIX__) - -// Call syscall from C code because the gccgo support for calling from -// Go to C does not support varargs functions. - -struct ret { - uintptr_t r; - uintptr_t err; -}; - -struct ret -gccgoRealSyscall(uintptr_t trap, uintptr_t a1, uintptr_t a2, uintptr_t a3, uintptr_t a4, uintptr_t a5, uintptr_t a6, uintptr_t a7, uintptr_t a8, uintptr_t a9) -{ - struct ret r; - - errno = 0; - r.r = syscall(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9); - r.err = errno; - return r; -} - -// Define the use function in C so that it is not inlined. - -extern void use(void *) __asm__ (GOSYM_PREFIX GOPKGPATH ".use") __attribute__((noinline)); - -void -use(void *p __attribute__ ((unused))) -{ -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo_linux_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo_linux_amd64.go deleted file mode 100644 index bffe1a77db5..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/gccgo_linux_amd64.go +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build gccgo,linux,amd64 - -package unix - -import "syscall" - -//extern gettimeofday -func realGettimeofday(*Timeval, *byte) int32 - -func gettimeofday(tv *Timeval) (err syscall.Errno) { - r := realGettimeofday(tv, nil) - if r < 0 { - return syscall.GetErrno() - } - return 0 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mkall.sh b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mkall.sh deleted file mode 100755 index de95a4bbcf5..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mkall.sh +++ /dev/null @@ -1,274 +0,0 @@ -#!/usr/bin/env bash -# Copyright 2009 The Go Authors. All rights reserved. -# Use of this source code is governed by a BSD-style -# license that can be found in the LICENSE file. - -# The unix package provides access to the raw system call -# interface of the underlying operating system. Porting Go to -# a new architecture/operating system combination requires -# some manual effort, though there are tools that automate -# much of the process. The auto-generated files have names -# beginning with z. -# -# This script runs or (given -n) prints suggested commands to generate z files -# for the current system. Running those commands is not automatic. -# This script is documentation more than anything else. -# -# * asm_${GOOS}_${GOARCH}.s -# -# This hand-written assembly file implements system call dispatch. -# There are three entry points: -# -# func Syscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr); -# func Syscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr); -# func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr); -# -# The first and second are the standard ones; they differ only in -# how many arguments can be passed to the kernel. -# The third is for low-level use by the ForkExec wrapper; -# unlike the first two, it does not call into the scheduler to -# let it know that a system call is running. -# -# * syscall_${GOOS}.go -# -# This hand-written Go file implements system calls that need -# special handling and lists "//sys" comments giving prototypes -# for ones that can be auto-generated. Mksyscall reads those -# comments to generate the stubs. -# -# * syscall_${GOOS}_${GOARCH}.go -# -# Same as syscall_${GOOS}.go except that it contains code specific -# to ${GOOS} on one particular architecture. -# -# * types_${GOOS}.c -# -# This hand-written C file includes standard C headers and then -# creates typedef or enum names beginning with a dollar sign -# (use of $ in variable names is a gcc extension). The hardest -# part about preparing this file is figuring out which headers to -# include and which symbols need to be #defined to get the -# actual data structures that pass through to the kernel system calls. -# Some C libraries present alternate versions for binary compatibility -# and translate them on the way in and out of system calls, but -# there is almost always a #define that can get the real ones. -# See types_darwin.c and types_linux.c for examples. -# -# * zerror_${GOOS}_${GOARCH}.go -# -# This machine-generated file defines the system's error numbers, -# error strings, and signal numbers. The generator is "mkerrors.sh". -# Usually no arguments are needed, but mkerrors.sh will pass its -# arguments on to godefs. -# -# * zsyscall_${GOOS}_${GOARCH}.go -# -# Generated by mksyscall.pl; see syscall_${GOOS}.go above. -# -# * zsysnum_${GOOS}_${GOARCH}.go -# -# Generated by mksysnum_${GOOS}. 
-# -# * ztypes_${GOOS}_${GOARCH}.go -# -# Generated by godefs; see types_${GOOS}.c above. - -GOOSARCH="${GOOS}_${GOARCH}" - -# defaults -mksyscall="./mksyscall.pl" -mkerrors="./mkerrors.sh" -zerrors="zerrors_$GOOSARCH.go" -mksysctl="" -zsysctl="zsysctl_$GOOSARCH.go" -mksysnum= -mktypes= -run="sh" - -case "$1" in --syscalls) - for i in zsyscall*go - do - sed 1q $i | sed 's;^// ;;' | sh > _$i && gofmt < _$i > $i - rm _$i - done - exit 0 - ;; --n) - run="cat" - shift -esac - -case "$#" in -0) - ;; -*) - echo 'usage: mkall.sh [-n]' 1>&2 - exit 2 -esac - -GOOSARCH_in=syscall_$GOOSARCH.go -case "$GOOSARCH" in -_* | *_ | _) - echo 'undefined $GOOS_$GOARCH:' "$GOOSARCH" 1>&2 - exit 1 - ;; -darwin_386) - mkerrors="$mkerrors -m32" - mksyscall="./mksyscall.pl -l32" - mksysnum="./mksysnum_darwin.pl $(xcrun --show-sdk-path --sdk macosx)/usr/include/sys/syscall.h" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -darwin_amd64) - mkerrors="$mkerrors -m64" - mksysnum="./mksysnum_darwin.pl $(xcrun --show-sdk-path --sdk macosx)/usr/include/sys/syscall.h" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -darwin_arm) - mkerrors="$mkerrors" - mksysnum="./mksysnum_darwin.pl /usr/include/sys/syscall.h" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -darwin_arm64) - mkerrors="$mkerrors -m64" - mksysnum="./mksysnum_darwin.pl $(xcrun --show-sdk-path --sdk iphoneos)/usr/include/sys/syscall.h" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -dragonfly_386) - mkerrors="$mkerrors -m32" - mksyscall="./mksyscall.pl -l32 -dragonfly" - mksysnum="curl -s 'http://gitweb.dragonflybsd.org/dragonfly.git/blob_plain/HEAD:/sys/kern/syscalls.master' | ./mksysnum_dragonfly.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -dragonfly_amd64) - mkerrors="$mkerrors -m64" - mksyscall="./mksyscall.pl -dragonfly" - mksysnum="curl -s 'http://gitweb.dragonflybsd.org/dragonfly.git/blob_plain/HEAD:/sys/kern/syscalls.master' | ./mksysnum_dragonfly.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -freebsd_386) - mkerrors="$mkerrors -m32" - mksyscall="./mksyscall.pl -l32" - mksysnum="curl -s 'http://svn.freebsd.org/base/stable/10/sys/kern/syscalls.master' | ./mksysnum_freebsd.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -freebsd_amd64) - mkerrors="$mkerrors -m64" - mksysnum="curl -s 'http://svn.freebsd.org/base/stable/10/sys/kern/syscalls.master' | ./mksysnum_freebsd.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -freebsd_arm) - mkerrors="$mkerrors" - mksyscall="./mksyscall.pl -l32 -arm" - mksysnum="curl -s 'http://svn.freebsd.org/base/stable/10/sys/kern/syscalls.master' | ./mksysnum_freebsd.pl" - # Let the type of C char be singed for making the bare syscall - # API consistent across over platforms. 
- mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" - ;; -linux_386) - mkerrors="$mkerrors -m32" - mksyscall="./mksyscall.pl -l32" - mksysnum="./mksysnum_linux.pl /usr/include/asm/unistd_32.h" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -linux_amd64) - unistd_h=$(ls -1 /usr/include/asm/unistd_64.h /usr/include/x86_64-linux-gnu/asm/unistd_64.h 2>/dev/null | head -1) - if [ "$unistd_h" = "" ]; then - echo >&2 cannot find unistd_64.h - exit 1 - fi - mkerrors="$mkerrors -m64" - mksysnum="./mksysnum_linux.pl $unistd_h" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -linux_arm) - mkerrors="$mkerrors" - mksyscall="./mksyscall.pl -l32 -arm" - mksysnum="curl -s 'http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/arch/arm/include/uapi/asm/unistd.h' | ./mksysnum_linux.pl -" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -linux_arm64) - unistd_h=$(ls -1 /usr/include/asm/unistd.h /usr/include/asm-generic/unistd.h 2>/dev/null | head -1) - if [ "$unistd_h" = "" ]; then - echo >&2 cannot find unistd_64.h - exit 1 - fi - mksysnum="./mksysnum_linux.pl $unistd_h" - # Let the type of C char be singed for making the bare syscall - # API consistent across over platforms. - mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" - ;; -linux_ppc64) - GOOSARCH_in=syscall_linux_ppc64x.go - unistd_h=/usr/include/asm/unistd.h - mkerrors="$mkerrors -m64" - mksysnum="./mksysnum_linux.pl $unistd_h" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -linux_ppc64le) - GOOSARCH_in=syscall_linux_ppc64x.go - unistd_h=/usr/include/powerpc64le-linux-gnu/asm/unistd.h - mkerrors="$mkerrors -m64" - mksysnum="./mksysnum_linux.pl $unistd_h" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -netbsd_386) - mkerrors="$mkerrors -m32" - mksyscall="./mksyscall.pl -l32 -netbsd" - mksysnum="curl -s 'http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_netbsd.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -netbsd_amd64) - mkerrors="$mkerrors -m64" - mksyscall="./mksyscall.pl -netbsd" - mksysnum="curl -s 'http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_netbsd.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -openbsd_386) - mkerrors="$mkerrors -m32" - mksyscall="./mksyscall.pl -l32 -openbsd" - mksysctl="./mksysctl_openbsd.pl" - zsysctl="zsysctl_openbsd.go" - mksysnum="curl -s 'http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_openbsd.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -openbsd_amd64) - mkerrors="$mkerrors -m64" - mksyscall="./mksyscall.pl -openbsd" - mksysctl="./mksysctl_openbsd.pl" - zsysctl="zsysctl_openbsd.go" - mksysnum="curl -s 'http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_openbsd.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -solaris_amd64) - mksyscall="./mksyscall_solaris.pl" - mkerrors="$mkerrors -m64" - mksysnum= - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; -*) - echo 'unrecognized $GOOS_$GOARCH: ' "$GOOSARCH" 1>&2 - exit 1 - ;; -esac - -( - if [ -n "$mkerrors" ]; then echo "$mkerrors |gofmt >$zerrors"; fi - case "$GOOS" in - *) - syscall_goos="syscall_$GOOS.go" - case "$GOOS" in - darwin | dragonfly | freebsd | netbsd | openbsd) - syscall_goos="syscall_bsd.go $syscall_goos" - ;; - esac - if [ -n "$mksyscall" ]; then echo "$mksyscall $syscall_goos $GOOSARCH_in |gofmt >zsyscall_$GOOSARCH.go"; fi - ;; - esac - if [ -n "$mksysctl" ]; then echo "$mksysctl 
|gofmt >$zsysctl"; fi - if [ -n "$mksysnum" ]; then echo "$mksysnum |gofmt >zsysnum_$GOOSARCH.go"; fi - if [ -n "$mktypes" ]; then - echo "echo // +build $GOARCH,$GOOS > ztypes_$GOOSARCH.go"; - echo "$mktypes types_$GOOS.go | gofmt >>ztypes_$GOOSARCH.go"; - fi -) | $run diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mkerrors.sh b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mkerrors.sh deleted file mode 100755 index c40d788c4ab..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mkerrors.sh +++ /dev/null @@ -1,476 +0,0 @@ -#!/usr/bin/env bash -# Copyright 2009 The Go Authors. All rights reserved. -# Use of this source code is governed by a BSD-style -# license that can be found in the LICENSE file. - -# Generate Go code listing errors and other #defined constant -# values (ENAMETOOLONG etc.), by asking the preprocessor -# about the definitions. - -unset LANG -export LC_ALL=C -export LC_CTYPE=C - -if test -z "$GOARCH" -o -z "$GOOS"; then - echo 1>&2 "GOARCH or GOOS not defined in environment" - exit 1 -fi - -CC=${CC:-cc} - -if [[ "$GOOS" -eq "solaris" ]]; then - # Assumes GNU versions of utilities in PATH. - export PATH=/usr/gnu/bin:$PATH -fi - -uname=$(uname) - -includes_Darwin=' -#define _DARWIN_C_SOURCE -#define KERNEL -#define _DARWIN_USE_64_BIT_INODE -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -' - -includes_DragonFly=' -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -' - -includes_FreeBSD=' -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#if __FreeBSD__ >= 10 -#define IFT_CARP 0xf8 // IFT_CARP is deprecated in FreeBSD 10 -#undef SIOCAIFADDR -#define SIOCAIFADDR _IOW(105, 26, struct oifaliasreq) // ifaliasreq contains if_data -#undef SIOCSIFPHYADDR -#define SIOCSIFPHYADDR _IOW(105, 70, struct oifaliasreq) // ifaliasreq contains if_data -#endif -' - -includes_Linux=' -#define _LARGEFILE_SOURCE -#define _LARGEFILE64_SOURCE -#ifndef __LP64__ -#define _FILE_OFFSET_BITS 64 -#endif -#define _GNU_SOURCE - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#ifndef MSG_FASTOPEN -#define MSG_FASTOPEN 0x20000000 -#endif - -#ifndef PTRACE_GETREGS -#define PTRACE_GETREGS 0xc -#endif - -#ifndef PTRACE_SETREGS -#define PTRACE_SETREGS 0xd -#endif -' - -includes_NetBSD=' -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -// Needed since refers to it... -#define schedppq 1 -' - -includes_OpenBSD=' -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -// We keep some constants not supported in OpenBSD 5.5 and beyond for -// the promise of compatibility. 
-#define EMUL_ENABLED 0x1 -#define EMUL_NATIVE 0x2 -#define IPV6_FAITH 0x1d -#define IPV6_OPTIONS 0x1 -#define IPV6_RTHDR_STRICT 0x1 -#define IPV6_SOCKOPT_RESERVED1 0x3 -#define SIOCGIFGENERIC 0xc020693a -#define SIOCSIFGENERIC 0x80206939 -#define WALTSIG 0x4 -' - -includes_SunOS=' -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -' - - -includes=' -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -' -ccflags="$@" - -# Write go tool cgo -godefs input. -( - echo package unix - echo - echo '/*' - indirect="includes_$(uname)" - echo "${!indirect} $includes" - echo '*/' - echo 'import "C"' - echo 'import "syscall"' - echo - echo 'const (' - - # The gcc command line prints all the #defines - # it encounters while processing the input - echo "${!indirect} $includes" | $CC -x c - -E -dM $ccflags | - awk ' - $1 != "#define" || $2 ~ /\(/ || $3 == "" {next} - - $2 ~ /^E([ABCD]X|[BIS]P|[SD]I|S|FL)$/ {next} # 386 registers - $2 ~ /^(SIGEV_|SIGSTKSZ|SIGRT(MIN|MAX))/ {next} - $2 ~ /^(SCM_SRCRT)$/ {next} - $2 ~ /^(MAP_FAILED)$/ {next} - $2 ~ /^ELF_.*$/ {next}# contains ELF_ARCH, etc. - - $2 ~ /^EXTATTR_NAMESPACE_NAMES/ || - $2 ~ /^EXTATTR_NAMESPACE_[A-Z]+_STRING/ {next} - - $2 !~ /^ETH_/ && - $2 !~ /^EPROC_/ && - $2 !~ /^EQUIV_/ && - $2 !~ /^EXPR_/ && - $2 ~ /^E[A-Z0-9_]+$/ || - $2 ~ /^B[0-9_]+$/ || - $2 == "BOTHER" || - $2 ~ /^CI?BAUD(EX)?$/ || - $2 == "IBSHIFT" || - $2 ~ /^V[A-Z0-9]+$/ || - $2 ~ /^CS[A-Z0-9]/ || - $2 ~ /^I(SIG|CANON|CRNL|UCLC|EXTEN|MAXBEL|STRIP|UTF8)$/ || - $2 ~ /^IGN/ || - $2 ~ /^IX(ON|ANY|OFF)$/ || - $2 ~ /^IN(LCR|PCK)$/ || - $2 ~ /(^FLU?SH)|(FLU?SH$)/ || - $2 ~ /^C(LOCAL|READ|MSPAR|RTSCTS)$/ || - $2 == "BRKINT" || - $2 == "HUPCL" || - $2 == "PENDIN" || - $2 == "TOSTOP" || - $2 == "XCASE" || - $2 == "ALTWERASE" || - $2 == "NOKERNINFO" || - $2 ~ /^PAR/ || - $2 ~ /^SIG[^_]/ || - $2 ~ /^O[CNPFPL][A-Z]+[^_][A-Z]+$/ || - $2 ~ /^(NL|CR|TAB|BS|VT|FF)DLY$/ || - $2 ~ /^(NL|CR|TAB|BS|VT|FF)[0-9]$/ || - $2 ~ /^O?XTABS$/ || - $2 ~ /^TC[IO](ON|OFF)$/ || - $2 ~ /^IN_/ || - $2 ~ /^LOCK_(SH|EX|NB|UN)$/ || - $2 ~ /^(AF|SOCK|SO|SOL|IPPROTO|IP|IPV6|ICMP6|TCP|EVFILT|NOTE|EV|SHUT|PROT|MAP|PACKET|MSG|SCM|MCL|DT|MADV|PR)_/ || - $2 == "ICMPV6_FILTER" || - $2 == "SOMAXCONN" || - $2 == "NAME_MAX" || - $2 == "IFNAMSIZ" || - $2 ~ /^CTL_(MAXNAME|NET|QUERY)$/ || - $2 ~ /^SYSCTL_VERS/ || - $2 ~ /^(MS|MNT)_/ || - $2 ~ /^TUN(SET|GET|ATTACH|DETACH)/ || - $2 ~ /^(O|F|FD|NAME|S|PTRACE|PT)_/ || - $2 ~ /^LINUX_REBOOT_CMD_/ || - $2 ~ /^LINUX_REBOOT_MAGIC[12]$/ || - $2 !~ "NLA_TYPE_MASK" && - $2 ~ /^(NETLINK|NLM|NLMSG|NLA|IFA|IFAN|RT|RTCF|RTN|RTPROT|RTNH|ARPHRD|ETH_P)_/ || - $2 ~ /^SIOC/ || - $2 ~ /^TIOC/ || - $2 ~ /^TCGET/ || - $2 ~ /^TCSET/ || - $2 ~ /^TC(FLSH|SBRKP?|XONC)$/ || - $2 !~ "RTF_BITS" && - $2 ~ /^(IFF|IFT|NET_RT|RTM|RTF|RTV|RTA|RTAX)_/ || - $2 ~ /^BIOC/ || - $2 ~ /^RUSAGE_(SELF|CHILDREN|THREAD)/ || - $2 ~ /^RLIMIT_(AS|CORE|CPU|DATA|FSIZE|NOFILE|STACK)|RLIM_INFINITY/ || - $2 ~ /^PRIO_(PROCESS|PGRP|USER)/ || - $2 ~ /^CLONE_[A-Z_]+/ || - $2 !~ /^(BPF_TIMEVAL)$/ && - $2 ~ /^(BPF|DLT)_/ || - $2 ~ /^CLOCK_/ || - $2 !~ "WMESGLEN" && - $2 ~ /^W[A-Z0-9]+$/ {printf("\t%s = C.%s\n", $2, $2)} - $2 ~ /^__WCOREFLAG$/ {next} - $2 ~ /^__W[A-Z0-9]+$/ {printf("\t%s = C.%s\n", substr($2,3), $2)} - - {next} - ' | sort - - echo ')' -) >_const.go - -# Pull out the error names for later. 
-errors=$( - echo '#include ' | $CC -x c - -E -dM $ccflags | - awk '$1=="#define" && $2 ~ /^E[A-Z0-9_]+$/ { print $2 }' | - sort -) - -# Pull out the signal names for later. -signals=$( - echo '#include ' | $CC -x c - -E -dM $ccflags | - awk '$1=="#define" && $2 ~ /^SIG[A-Z0-9]+$/ { print $2 }' | - egrep -v '(SIGSTKSIZE|SIGSTKSZ|SIGRT)' | - sort -) - -# Again, writing regexps to a file. -echo '#include ' | $CC -x c - -E -dM $ccflags | - awk '$1=="#define" && $2 ~ /^E[A-Z0-9_]+$/ { print "^\t" $2 "[ \t]*=" }' | - sort >_error.grep -echo '#include ' | $CC -x c - -E -dM $ccflags | - awk '$1=="#define" && $2 ~ /^SIG[A-Z0-9]+$/ { print "^\t" $2 "[ \t]*=" }' | - egrep -v '(SIGSTKSIZE|SIGSTKSZ|SIGRT)' | - sort >_signal.grep - -echo '// mkerrors.sh' "$@" -echo '// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT' -echo -echo "// +build ${GOARCH},${GOOS}" -echo -go tool cgo -godefs -- "$@" _const.go >_error.out -cat _error.out | grep -vf _error.grep | grep -vf _signal.grep -echo -echo '// Errors' -echo 'const (' -cat _error.out | grep -f _error.grep | sed 's/=\(.*\)/= syscall.Errno(\1)/' -echo ')' - -echo -echo '// Signals' -echo 'const (' -cat _error.out | grep -f _signal.grep | sed 's/=\(.*\)/= syscall.Signal(\1)/' -echo ')' - -# Run C program to print error and syscall strings. -( - echo -E " -#include -#include -#include -#include -#include -#include - -#define nelem(x) (sizeof(x)/sizeof((x)[0])) - -enum { A = 'A', Z = 'Z', a = 'a', z = 'z' }; // avoid need for single quotes below - -int errors[] = { -" - for i in $errors - do - echo -E ' '$i, - done - - echo -E " -}; - -int signals[] = { -" - for i in $signals - do - echo -E ' '$i, - done - - # Use -E because on some systems bash builtin interprets \n itself. - echo -E ' -}; - -static int -intcmp(const void *a, const void *b) -{ - return *(int*)a - *(int*)b; -} - -int -main(void) -{ - int i, j, e; - char buf[1024], *p; - - printf("\n\n// Error table\n"); - printf("var errors = [...]string {\n"); - qsort(errors, nelem(errors), sizeof errors[0], intcmp); - for(i=0; i 0 && errors[i-1] == e) - continue; - strcpy(buf, strerror(e)); - // lowercase first letter: Bad -> bad, but STREAM -> STREAM. - if(A <= buf[0] && buf[0] <= Z && a <= buf[1] && buf[1] <= z) - buf[0] += a - A; - printf("\t%d: \"%s\",\n", e, buf); - } - printf("}\n\n"); - - printf("\n\n// Signal table\n"); - printf("var signals = [...]string {\n"); - qsort(signals, nelem(signals), sizeof signals[0], intcmp); - for(i=0; i 0 && signals[i-1] == e) - continue; - strcpy(buf, strsignal(e)); - // lowercase first letter: Bad -> bad, but STREAM -> STREAM. - if(A <= buf[0] && buf[0] <= Z && a <= buf[1] && buf[1] <= z) - buf[0] += a - A; - // cut trailing : number. - p = strrchr(buf, ":"[0]); - if(p) - *p = '\0'; - printf("\t%d: \"%s\",\n", e, buf); - } - printf("}\n\n"); - - return 0; -} - -' -) >_errors.c - -$CC $ccflags -o _errors _errors.c && $GORUN ./_errors && rm -f _errors.c _errors _const.go _error.grep _signal.grep _error.out diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mksyscall.pl b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mksyscall.pl deleted file mode 100755 index b1e7766daeb..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/mksyscall.pl +++ /dev/null @@ -1,323 +0,0 @@ -#!/usr/bin/env perl -# Copyright 2009 The Go Authors. All rights reserved. -# Use of this source code is governed by a BSD-style -# license that can be found in the LICENSE file. 
- -# This program reads a file containing function prototypes -# (like syscall_darwin.go) and generates system call bodies. -# The prototypes are marked by lines beginning with "//sys" -# and read like func declarations if //sys is replaced by func, but: -# * The parameter lists must give a name for each argument. -# This includes return parameters. -# * The parameter lists must give a type for each argument: -# the (x, y, z int) shorthand is not allowed. -# * If the return parameter is an error number, it must be named errno. - -# A line beginning with //sysnb is like //sys, except that the -# goroutine will not be suspended during the execution of the system -# call. This must only be used for system calls which can never -# block, as otherwise the system call could cause all goroutines to -# hang. - -use strict; - -my $cmdline = "mksyscall.pl " . join(' ', @ARGV); -my $errors = 0; -my $_32bit = ""; -my $plan9 = 0; -my $openbsd = 0; -my $netbsd = 0; -my $dragonfly = 0; -my $arm = 0; # 64-bit value should use (even, odd)-pair - -if($ARGV[0] eq "-b32") { - $_32bit = "big-endian"; - shift; -} elsif($ARGV[0] eq "-l32") { - $_32bit = "little-endian"; - shift; -} -if($ARGV[0] eq "-plan9") { - $plan9 = 1; - shift; -} -if($ARGV[0] eq "-openbsd") { - $openbsd = 1; - shift; -} -if($ARGV[0] eq "-netbsd") { - $netbsd = 1; - shift; -} -if($ARGV[0] eq "-dragonfly") { - $dragonfly = 1; - shift; -} -if($ARGV[0] eq "-arm") { - $arm = 1; - shift; -} - -if($ARGV[0] =~ /^-/) { - print STDERR "usage: mksyscall.pl [-b32 | -l32] [file ...]\n"; - exit 1; -} - -if($ENV{'GOARCH'} eq "" || $ENV{'GOOS'} eq "") { - print STDERR "GOARCH or GOOS not defined in environment\n"; - exit 1; -} - -sub parseparamlist($) { - my ($list) = @_; - $list =~ s/^\s*//; - $list =~ s/\s*$//; - if($list eq "") { - return (); - } - return split(/\s*,\s*/, $list); -} - -sub parseparam($) { - my ($p) = @_; - if($p !~ /^(\S*) (\S*)$/) { - print STDERR "$ARGV:$.: malformed parameter: $p\n"; - $errors = 1; - return ("xx", "int"); - } - return ($1, $2); -} - -my $text = ""; -while(<>) { - chomp; - s/\s+/ /g; - s/^\s+//; - s/\s+$//; - my $nonblock = /^\/\/sysnb /; - next if !/^\/\/sys / && !$nonblock; - - # Line must be of the form - # func Open(path string, mode int, perm int) (fd int, errno error) - # Split into name, in params, out params. - if(!/^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*((?i)SYS_[A-Z0-9_]+))?$/) { - print STDERR "$ARGV:$.: malformed //sys declaration\n"; - $errors = 1; - next; - } - my ($func, $in, $out, $sysname) = ($2, $3, $4, $5); - - # Split argument lists on comma. - my @in = parseparamlist($in); - my @out = parseparamlist($out); - - # Try in vain to keep people from editing this file. - # The theory is that they jump into the middle of the file - # without reading the header. - $text .= "// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT\n\n"; - - # Go function header. - my $out_decl = @out ? sprintf(" (%s)", join(', ', @out)) : ""; - $text .= sprintf "func %s(%s)%s {\n", $func, join(', ', @in), $out_decl; - - # Check if err return available - my $errvar = ""; - foreach my $p (@out) { - my ($name, $type) = parseparam($p); - if($type eq "error") { - $errvar = $name; - last; - } - } - - # Prepare arguments to Syscall. 
- my @args = (); - my @uses = (); - my $n = 0; - foreach my $p (@in) { - my ($name, $type) = parseparam($p); - if($type =~ /^\*/) { - push @args, "uintptr(unsafe.Pointer($name))"; - } elsif($type eq "string" && $errvar ne "") { - $text .= "\tvar _p$n *byte\n"; - $text .= "\t_p$n, $errvar = BytePtrFromString($name)\n"; - $text .= "\tif $errvar != nil {\n\t\treturn\n\t}\n"; - push @args, "uintptr(unsafe.Pointer(_p$n))"; - push @uses, "use(unsafe.Pointer(_p$n))"; - $n++; - } elsif($type eq "string") { - print STDERR "$ARGV:$.: $func uses string arguments, but has no error return\n"; - $text .= "\tvar _p$n *byte\n"; - $text .= "\t_p$n, _ = BytePtrFromString($name)\n"; - push @args, "uintptr(unsafe.Pointer(_p$n))"; - push @uses, "use(unsafe.Pointer(_p$n))"; - $n++; - } elsif($type =~ /^\[\](.*)/) { - # Convert slice into pointer, length. - # Have to be careful not to take address of &a[0] if len == 0: - # pass dummy pointer in that case. - # Used to pass nil, but some OSes or simulators reject write(fd, nil, 0). - $text .= "\tvar _p$n unsafe.Pointer\n"; - $text .= "\tif len($name) > 0 {\n\t\t_p$n = unsafe.Pointer(\&${name}[0])\n\t}"; - $text .= " else {\n\t\t_p$n = unsafe.Pointer(&_zero)\n\t}"; - $text .= "\n"; - push @args, "uintptr(_p$n)", "uintptr(len($name))"; - $n++; - } elsif($type eq "int64" && ($openbsd || $netbsd)) { - push @args, "0"; - if($_32bit eq "big-endian") { - push @args, "uintptr($name>>32)", "uintptr($name)"; - } elsif($_32bit eq "little-endian") { - push @args, "uintptr($name)", "uintptr($name>>32)"; - } else { - push @args, "uintptr($name)"; - } - } elsif($type eq "int64" && $dragonfly) { - if ($func !~ /^extp(read|write)/i) { - push @args, "0"; - } - if($_32bit eq "big-endian") { - push @args, "uintptr($name>>32)", "uintptr($name)"; - } elsif($_32bit eq "little-endian") { - push @args, "uintptr($name)", "uintptr($name>>32)"; - } else { - push @args, "uintptr($name)"; - } - } elsif($type eq "int64" && $_32bit ne "") { - if(@args % 2 && $arm) { - # arm abi specifies 64-bit argument uses - # (even, odd) pair - push @args, "0" - } - if($_32bit eq "big-endian") { - push @args, "uintptr($name>>32)", "uintptr($name)"; - } else { - push @args, "uintptr($name)", "uintptr($name>>32)"; - } - } else { - push @args, "uintptr($name)"; - } - } - - # Determine which form to use; pad args with zeros. - my $asm = "Syscall"; - if ($nonblock) { - $asm = "RawSyscall"; - } - if(@args <= 3) { - while(@args < 3) { - push @args, "0"; - } - } elsif(@args <= 6) { - $asm .= "6"; - while(@args < 6) { - push @args, "0"; - } - } elsif(@args <= 9) { - $asm .= "9"; - while(@args < 9) { - push @args, "0"; - } - } else { - print STDERR "$ARGV:$.: too many arguments to system call\n"; - } - - # System call number. - if($sysname eq "") { - $sysname = "SYS_$func"; - $sysname =~ s/([a-z])([A-Z])/${1}_$2/g; # turn FooBar into Foo_Bar - $sysname =~ y/a-z/A-Z/; - } - - # Actual call. - my $args = join(', ', @args); - my $call = "$asm($sysname, $args)"; - - # Assign return values. - my $body = ""; - my @ret = ("_", "_", "_"); - my $do_errno = 0; - for(my $i=0; $i<@out; $i++) { - my $p = $out[$i]; - my ($name, $type) = parseparam($p); - my $reg = ""; - if($name eq "err" && !$plan9) { - $reg = "e1"; - $ret[2] = $reg; - $do_errno = 1; - } elsif($name eq "err" && $plan9) { - $ret[0] = "r0"; - $ret[2] = "e1"; - next; - } else { - $reg = sprintf("r%d", $i); - $ret[$i] = $reg; - } - if($type eq "bool") { - $reg = "$reg != 0"; - } - if($type eq "int64" && $_32bit ne "") { - # 64-bit number in r1:r0 or r0:r1. 
- if($i+2 > @out) { - print STDERR "$ARGV:$.: not enough registers for int64 return\n"; - } - if($_32bit eq "big-endian") { - $reg = sprintf("int64(r%d)<<32 | int64(r%d)", $i, $i+1); - } else { - $reg = sprintf("int64(r%d)<<32 | int64(r%d)", $i+1, $i); - } - $ret[$i] = sprintf("r%d", $i); - $ret[$i+1] = sprintf("r%d", $i+1); - } - if($reg ne "e1" || $plan9) { - $body .= "\t$name = $type($reg)\n"; - } - } - if ($ret[0] eq "_" && $ret[1] eq "_" && $ret[2] eq "_") { - $text .= "\t$call\n"; - } else { - $text .= "\t$ret[0], $ret[1], $ret[2] := $call\n"; - } - foreach my $use (@uses) { - $text .= "\t$use\n"; - } - $text .= $body; - - if ($plan9 && $ret[2] eq "e1") { - $text .= "\tif int32(r0) == -1 {\n"; - $text .= "\t\terr = e1\n"; - $text .= "\t}\n"; - } elsif ($do_errno) { - $text .= "\tif e1 != 0 {\n"; - $text .= "\t\terr = errnoErr(e1)\n"; - $text .= "\t}\n"; - } - $text .= "\treturn\n"; - $text .= "}\n\n"; -} - -chomp $text; -chomp $text; - -if($errors) { - exit 1; -} - -print <) { - chomp; - s/\s+/ /g; - s/^\s+//; - s/\s+$//; - $package = $1 if !$package && /^package (\S+)$/; - my $nonblock = /^\/\/sysnb /; - next if !/^\/\/sys / && !$nonblock; - - # Line must be of the form - # func Open(path string, mode int, perm int) (fd int, err error) - # Split into name, in params, out params. - if(!/^\/\/sys(nb)? (\w+)\(([^()]*)\)\s*(?:\(([^()]+)\))?\s*(?:=\s*(?:(\w*)\.)?(\w*))?$/) { - print STDERR "$ARGV:$.: malformed //sys declaration\n"; - $errors = 1; - next; - } - my ($nb, $func, $in, $out, $modname, $sysname) = ($1, $2, $3, $4, $5, $6); - - # Split argument lists on comma. - my @in = parseparamlist($in); - my @out = parseparamlist($out); - - # So file name. - if($modname eq "") { - $modname = "libc"; - } - - # System call name. - if($sysname eq "") { - $sysname = "$func"; - } - - # System call pointer variable name. - my $sysvarname = "proc$sysname"; - - my $strconvfunc = "BytePtrFromString"; - my $strconvtype = "*byte"; - - $sysname =~ y/A-Z/a-z/; # All libc functions are lowercase. - - # Runtime import of function to allow cross-platform builds. - $dynimports .= "//go:cgo_import_dynamic libc_${sysname} ${sysname} \"$modname.so\"\n"; - # Link symbol to proc address variable. - $linknames .= "//go:linkname ${sysvarname} libc_${sysname}\n"; - # Library proc address variable. - push @vars, $sysvarname; - - # Go function header. - $out = join(', ', @out); - if($out ne "") { - $out = " ($out)"; - } - if($text ne "") { - $text .= "\n" - } - $text .= sprintf "func %s(%s)%s {\n", $func, join(', ', @in), $out; - - # Check if err return available - my $errvar = ""; - foreach my $p (@out) { - my ($name, $type) = parseparam($p); - if($type eq "error") { - $errvar = $name; - last; - } - } - - # Prepare arguments to Syscall. 
- my @args = (); - my @uses = (); - my $n = 0; - foreach my $p (@in) { - my ($name, $type) = parseparam($p); - if($type =~ /^\*/) { - push @args, "uintptr(unsafe.Pointer($name))"; - } elsif($type eq "string" && $errvar ne "") { - $text .= "\tvar _p$n $strconvtype\n"; - $text .= "\t_p$n, $errvar = $strconvfunc($name)\n"; - $text .= "\tif $errvar != nil {\n\t\treturn\n\t}\n"; - push @args, "uintptr(unsafe.Pointer(_p$n))"; - push @uses, "use(unsafe.Pointer(_p$n))"; - $n++; - } elsif($type eq "string") { - print STDERR "$ARGV:$.: $func uses string arguments, but has no error return\n"; - $text .= "\tvar _p$n $strconvtype\n"; - $text .= "\t_p$n, _ = $strconvfunc($name)\n"; - push @args, "uintptr(unsafe.Pointer(_p$n))"; - push @uses, "use(unsafe.Pointer(_p$n))"; - $n++; - } elsif($type =~ /^\[\](.*)/) { - # Convert slice into pointer, length. - # Have to be careful not to take address of &a[0] if len == 0: - # pass nil in that case. - $text .= "\tvar _p$n *$1\n"; - $text .= "\tif len($name) > 0 {\n\t\t_p$n = \&$name\[0]\n\t}\n"; - push @args, "uintptr(unsafe.Pointer(_p$n))", "uintptr(len($name))"; - $n++; - } elsif($type eq "int64" && $_32bit ne "") { - if($_32bit eq "big-endian") { - push @args, "uintptr($name >> 32)", "uintptr($name)"; - } else { - push @args, "uintptr($name)", "uintptr($name >> 32)"; - } - } elsif($type eq "bool") { - $text .= "\tvar _p$n uint32\n"; - $text .= "\tif $name {\n\t\t_p$n = 1\n\t} else {\n\t\t_p$n = 0\n\t}\n"; - push @args, "uintptr(_p$n)"; - $n++; - } else { - push @args, "uintptr($name)"; - } - } - my $nargs = @args; - - # Determine which form to use; pad args with zeros. - my $asm = "sysvicall6"; - if ($nonblock) { - $asm = "rawSysvicall6"; - } - if(@args <= 6) { - while(@args < 6) { - push @args, "0"; - } - } else { - print STDERR "$ARGV:$.: too many arguments to system call\n"; - } - - # Actual call. - my $args = join(', ', @args); - my $call = "$asm(uintptr(unsafe.Pointer(&$sysvarname)), $nargs, $args)"; - - # Assign return values. - my $body = ""; - my $failexpr = ""; - my @ret = ("_", "_", "_"); - my @pout= (); - my $do_errno = 0; - for(my $i=0; $i<@out; $i++) { - my $p = $out[$i]; - my ($name, $type) = parseparam($p); - my $reg = ""; - if($name eq "err") { - $reg = "e1"; - $ret[2] = $reg; - $do_errno = 1; - } else { - $reg = sprintf("r%d", $i); - $ret[$i] = $reg; - } - if($type eq "bool") { - $reg = "$reg != 0"; - } - if($type eq "int64" && $_32bit ne "") { - # 64-bit number in r1:r0 or r0:r1. 
- if($i+2 > @out) { - print STDERR "$ARGV:$.: not enough registers for int64 return\n"; - } - if($_32bit eq "big-endian") { - $reg = sprintf("int64(r%d)<<32 | int64(r%d)", $i, $i+1); - } else { - $reg = sprintf("int64(r%d)<<32 | int64(r%d)", $i+1, $i); - } - $ret[$i] = sprintf("r%d", $i); - $ret[$i+1] = sprintf("r%d", $i+1); - } - if($reg ne "e1") { - $body .= "\t$name = $type($reg)\n"; - } - } - if ($ret[0] eq "_" && $ret[1] eq "_" && $ret[2] eq "_") { - $text .= "\t$call\n"; - } else { - $text .= "\t$ret[0], $ret[1], $ret[2] := $call\n"; - } - foreach my $use (@uses) { - $text .= "\t$use\n"; - } - $text .= $body; - - if ($do_errno) { - $text .= "\tif e1 != 0 {\n"; - $text .= "\t\terr = e1\n"; - $text .= "\t}\n"; - } - $text .= "\treturn\n"; - $text .= "}\n"; -} - -if($errors) { - exit 1; -} - -print < "net.inet", - "net.inet.ipproto" => "net.inet", - "net.inet6.ipv6proto" => "net.inet6", - "net.inet6.ipv6" => "net.inet6.ip6", - "net.inet.icmpv6" => "net.inet6.icmp6", - "net.inet6.divert6" => "net.inet6.divert", - "net.inet6.tcp6" => "net.inet.tcp", - "net.inet6.udp6" => "net.inet.udp", - "mpls" => "net.mpls", - "swpenc" => "vm.swapencrypt" -); - -# Node mappings -my %node_map = ( - "net.inet.ip.ifq" => "net.ifq", - "net.inet.pfsync" => "net.pfsync", - "net.mpls.ifq" => "net.ifq" -); - -my $ctlname; -my %mib = (); -my %sysctl = (); -my $node; - -sub debug() { - print STDERR "$_[0]\n" if $debug; -} - -# Walk the MIB and build a sysctl name to OID mapping. -sub build_sysctl() { - my ($node, $name, $oid) = @_; - my %node = %{$node}; - my @oid = @{$oid}; - - foreach my $key (sort keys %node) { - my @node = @{$node{$key}}; - my $nodename = $name.($name ne '' ? '.' : '').$key; - my @nodeoid = (@oid, $node[0]); - if ($node[1] eq 'CTLTYPE_NODE') { - if (exists $node_map{$nodename}) { - $node = \%mib; - $ctlname = $node_map{$nodename}; - foreach my $part (split /\./, $ctlname) { - $node = \%{@{$$node{$part}}[2]}; - } - } else { - $node = $node[2]; - } - &build_sysctl($node, $nodename, \@nodeoid); - } elsif ($node[1] ne '') { - $sysctl{$nodename} = \@nodeoid; - } - } -} - -foreach my $ctl (@ctls) { - $ctls{$ctl} = $ctl; -} - -# Build MIB -foreach my $header (@headers) { - &debug("Processing $header..."); - open HEADER, "/usr/include/$header" || - print STDERR "Failed to open $header\n"; - while (
) { - if ($_ =~ /^#define\s+(CTL_NAMES)\s+{/ || - $_ =~ /^#define\s+(CTL_(.*)_NAMES)\s+{/ || - $_ =~ /^#define\s+((.*)CTL_NAMES)\s+{/) { - if ($1 eq 'CTL_NAMES') { - # Top level. - $node = \%mib; - } else { - # Node. - my $nodename = lc($2); - if ($header =~ /^netinet\//) { - $ctlname = "net.inet.$nodename"; - } elsif ($header =~ /^netinet6\//) { - $ctlname = "net.inet6.$nodename"; - } elsif ($header =~ /^net\//) { - $ctlname = "net.$nodename"; - } else { - $ctlname = "$nodename"; - $ctlname =~ s/^(fs|net|kern)_/$1\./; - } - if (exists $ctl_map{$ctlname}) { - $ctlname = $ctl_map{$ctlname}; - } - if (not exists $ctls{$ctlname}) { - &debug("Ignoring $ctlname..."); - next; - } - - # Walk down from the top of the MIB. - $node = \%mib; - foreach my $part (split /\./, $ctlname) { - if (not exists $$node{$part}) { - &debug("Missing node $part"); - $$node{$part} = [ 0, '', {} ]; - } - $node = \%{@{$$node{$part}}[2]}; - } - } - - # Populate current node with entries. - my $i = -1; - while (defined($_) && $_ !~ /^}/) { - $_ =
; - $i++ if $_ =~ /{.*}/; - next if $_ !~ /{\s+"(\w+)",\s+(CTLTYPE_[A-Z]+)\s+}/; - $$node{$1} = [ $i, $2, {} ]; - } - } - } - close HEADER; -} - -&build_sysctl(\%mib, "", []); - -print <){ - if(/^#define\s+SYS_(\w+)\s+([0-9]+)/){ - my $name = $1; - my $num = $2; - $name =~ y/a-z/A-Z/; - print " SYS_$name = $num;" - } -} - -print <){ - if(/^([0-9]+)\s+STD\s+({ \S+\s+(\w+).*)$/){ - my $num = $1; - my $proto = $2; - my $name = "SYS_$3"; - $name =~ y/a-z/A-Z/; - - # There are multiple entries for enosys and nosys, so comment them out. - if($name =~ /^SYS_E?NOSYS$/){ - $name = "// $name"; - } - if($name eq 'SYS_SYS_EXIT'){ - $name = 'SYS_EXIT'; - } - - print " $name = $num; // $proto\n"; - } -} - -print <){ - if(/^([0-9]+)\s+\S+\s+STD\s+({ \S+\s+(\w+).*)$/){ - my $num = $1; - my $proto = $2; - my $name = "SYS_$3"; - $name =~ y/a-z/A-Z/; - - # There are multiple entries for enosys and nosys, so comment them out. - if($name =~ /^SYS_E?NOSYS$/){ - $name = "// $name"; - } - if($name eq 'SYS_SYS_EXIT'){ - $name = 'SYS_EXIT'; - } - if($name =~ /^SYS_CAP_+/ || $name =~ /^SYS___CAP_+/){ - next - } - - print " $name = $num; // $proto\n"; - - # We keep Capsicum syscall numbers for FreeBSD - # 9-STABLE here because we are not sure whether they - # are mature and stable. - if($num == 513){ - print " SYS_CAP_NEW = 514 // { int cap_new(int fd, uint64_t rights); }\n"; - print " SYS_CAP_GETRIGHTS = 515 // { int cap_getrights(int fd, \\\n"; - print " SYS_CAP_ENTER = 516 // { int cap_enter(void); }\n"; - print " SYS_CAP_GETMODE = 517 // { int cap_getmode(u_int *modep); }\n"; - } - } -} - -print < 999){ - # ignore deprecated syscalls that are no longer implemented - # https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/asm-generic/unistd.h?id=refs/heads/master#n716 - return; - } - $name =~ y/a-z/A-Z/; - print " SYS_$name = $num;\n"; -} - -my $prev; -open(GCC, "gcc -E -dD $ARGV[0] |") || die "can't run gcc"; -while(){ - if(/^#define __NR_syscalls\s+/) { - # ignore redefinitions of __NR_syscalls - } - elsif(/^#define __NR_(\w+)\s+([0-9]+)/){ - $prev = $2; - fmt($1, $2); - } - elsif(/^#define __NR3264_(\w+)\s+([0-9]+)/){ - $prev = $2; - fmt($1, $2); - } - elsif(/^#define __NR_(\w+)\s+\(\w+\+\s*([0-9]+)\)/){ - fmt($1, $prev+$2) - } -} - -print <){ - if($line =~ /^(.*)\\$/) { - # Handle continuation - $line = $1; - $_ =~ s/^\s+//; - $line .= $_; - } else { - # New line - $line = $_; - } - next if $line =~ /\\$/; - if($line =~ /^([0-9]+)\s+((STD)|(NOERR))\s+(RUMP\s+)?({\s+\S+\s*\*?\s*\|(\S+)\|(\S*)\|(\w+).*\s+})(\s+(\S+))?$/) { - my $num = $1; - my $proto = $6; - my $compat = $8; - my $name = "$7_$9"; - - $name = "$7_$11" if $11 ne ''; - $name =~ y/a-z/A-Z/; - - if($compat eq '' || $compat eq '30' || $compat eq '50') { - print " $name = $num; // $proto\n"; - } - } -} - -print <){ - if(/^([0-9]+)\s+STD\s+(NOLOCK\s+)?({ \S+\s+\*?(\w+).*)$/){ - my $num = $1; - my $proto = $3; - my $name = $4; - $name =~ y/a-z/A-Z/; - - # There are multiple entries for enosys and nosys, so comment them out. - if($name =~ /^SYS_E?NOSYS$/){ - $name = "// $name"; - } - if($name eq 'SYS_SYS_EXIT'){ - $name = 'SYS_EXIT'; - } - - print " $name = $num; // $proto\n"; - } -} - -print < len(b) { - return nil, nil, EINVAL - } - return h, b[cmsgAlignOf(SizeofCmsghdr):h.Len], nil -} - -// UnixRights encodes a set of open file descriptors into a socket -// control message for sending to another process. 
-func UnixRights(fds ...int) []byte { - datalen := len(fds) * 4 - b := make([]byte, CmsgSpace(datalen)) - h := (*Cmsghdr)(unsafe.Pointer(&b[0])) - h.Level = SOL_SOCKET - h.Type = SCM_RIGHTS - h.SetLen(CmsgLen(datalen)) - data := cmsgData(h) - for _, fd := range fds { - *(*int32)(data) = int32(fd) - data = unsafe.Pointer(uintptr(data) + 4) - } - return b -} - -// ParseUnixRights decodes a socket control message that contains an -// integer array of open file descriptors from another process. -func ParseUnixRights(m *SocketControlMessage) ([]int, error) { - if m.Header.Level != SOL_SOCKET { - return nil, EINVAL - } - if m.Header.Type != SCM_RIGHTS { - return nil, EINVAL - } - fds := make([]int, len(m.Data)>>2) - for i, j := 0, 0; i < len(m.Data); i += 4 { - fds[j] = int(*(*int32)(unsafe.Pointer(&m.Data[i]))) - j++ - } - return fds, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/str.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/str.go deleted file mode 100644 index 35ed6643536..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/str.go +++ /dev/null @@ -1,26 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build darwin dragonfly freebsd linux netbsd openbsd solaris - -package unix - -func itoa(val int) string { // do it here rather than with fmt to avoid dependency - if val < 0 { - return "-" + uitoa(uint(-val)) - } - return uitoa(uint(val)) -} - -func uitoa(val uint) string { - var buf [32]byte // big enough for int64 - i := len(buf) - 1 - for val >= 10 { - buf[i] = byte(val%10 + '0') - i-- - val /= 10 - } - buf[i] = byte(val + '0') - return string(buf[i:]) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall.go deleted file mode 100644 index 012f2d64fd2..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall.go +++ /dev/null @@ -1,74 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build darwin dragonfly freebsd linux netbsd openbsd solaris - -// Package unix contains an interface to the low-level operating system -// primitives. OS details vary depending on the underlying system, and -// by default, godoc will display OS-specific documentation for the current -// system. If you want godoc to display OS documentation for another -// system, set $GOOS and $GOARCH to the desired system. For example, if -// you want to view documentation for freebsd/arm on linux/amd64, set $GOOS -// to freebsd and $GOARCH to arm. -// The primary use of this package is inside other packages that provide a more -// portable interface to the system, such as "os", "time" and "net". Use -// those packages rather than this one if you can. -// For details of the functions and data types in this package consult -// the manuals for the appropriate operating system. -// These calls return err == nil to indicate success; otherwise -// err represents an operating system error describing the failure and -// holds a value of type syscall.Errno. 
-package unix // import "github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix" - -import "unsafe" - -// ByteSliceFromString returns a NUL-terminated slice of bytes -// containing the text of s. If s contains a NUL byte at any -// location, it returns (nil, EINVAL). -func ByteSliceFromString(s string) ([]byte, error) { - for i := 0; i < len(s); i++ { - if s[i] == 0 { - return nil, EINVAL - } - } - a := make([]byte, len(s)+1) - copy(a, s) - return a, nil -} - -// BytePtrFromString returns a pointer to a NUL-terminated array of -// bytes containing the text of s. If s contains a NUL byte at any -// location, it returns (nil, EINVAL). -func BytePtrFromString(s string) (*byte, error) { - a, err := ByteSliceFromString(s) - if err != nil { - return nil, err - } - return &a[0], nil -} - -// Single-word zero for use when we need a valid pointer to 0 bytes. -// See mkunix.pl. -var _zero uintptr - -func (ts *Timespec) Unix() (sec int64, nsec int64) { - return int64(ts.Sec), int64(ts.Nsec) -} - -func (tv *Timeval) Unix() (sec int64, nsec int64) { - return int64(tv.Sec), int64(tv.Usec) * 1000 -} - -func (ts *Timespec) Nano() int64 { - return int64(ts.Sec)*1e9 + int64(ts.Nsec) -} - -func (tv *Timeval) Nano() int64 { - return int64(tv.Sec)*1e9 + int64(tv.Usec)*1000 -} - -// use is a no-op, but the compiler cannot see that it is. -// Calling use(p) ensures that p is kept live until that point. -//go:noescape -func use(p unsafe.Pointer) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_bsd.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_bsd.go deleted file mode 100644 index e9671764ccb..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_bsd.go +++ /dev/null @@ -1,628 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build darwin dragonfly freebsd netbsd openbsd - -// BSD system call wrappers shared by *BSD based systems -// including OS X (Darwin) and FreeBSD. Like the other -// syscall_*.go files it is compiled as Go code but also -// used as input to mksyscall which parses the //sys -// lines and generates system call stubs. - -package unix - -import ( - "runtime" - "syscall" - "unsafe" -) - -/* - * Wrapped - */ - -//sysnb getgroups(ngid int, gid *_Gid_t) (n int, err error) -//sysnb setgroups(ngid int, gid *_Gid_t) (err error) - -func Getgroups() (gids []int, err error) { - n, err := getgroups(0, nil) - if err != nil { - return nil, err - } - if n == 0 { - return nil, nil - } - - // Sanity check group count. Max is 16 on BSD. - if n < 0 || n > 1000 { - return nil, EINVAL - } - - a := make([]_Gid_t, n) - n, err = getgroups(n, &a[0]) - if err != nil { - return nil, err - } - gids = make([]int, n) - for i, v := range a[0:n] { - gids[i] = int(v) - } - return -} - -func Setgroups(gids []int) (err error) { - if len(gids) == 0 { - return setgroups(0, nil) - } - - a := make([]_Gid_t, len(gids)) - for i, v := range gids { - a[i] = _Gid_t(v) - } - return setgroups(len(a), &a[0]) -} - -func ReadDirent(fd int, buf []byte) (n int, err error) { - // Final argument is (basep *uintptr) and the syscall doesn't take nil. - // 64 bits should be enough. (32 bits isn't even on 386). Since the - // actual system call is getdirentries64, 64 is a good guess. - // TODO(rsc): Can we use a single global basep for all calls? 
- var base = (*uintptr)(unsafe.Pointer(new(uint64))) - return Getdirentries(fd, buf, base) -} - -// Wait status is 7 bits at bottom, either 0 (exited), -// 0x7F (stopped), or a signal number that caused an exit. -// The 0x80 bit is whether there was a core dump. -// An extra number (exit code, signal causing a stop) -// is in the high bits. - -type WaitStatus uint32 - -const ( - mask = 0x7F - core = 0x80 - shift = 8 - - exited = 0 - stopped = 0x7F -) - -func (w WaitStatus) Exited() bool { return w&mask == exited } - -func (w WaitStatus) ExitStatus() int { - if w&mask != exited { - return -1 - } - return int(w >> shift) -} - -func (w WaitStatus) Signaled() bool { return w&mask != stopped && w&mask != 0 } - -func (w WaitStatus) Signal() syscall.Signal { - sig := syscall.Signal(w & mask) - if sig == stopped || sig == 0 { - return -1 - } - return sig -} - -func (w WaitStatus) CoreDump() bool { return w.Signaled() && w&core != 0 } - -func (w WaitStatus) Stopped() bool { return w&mask == stopped && syscall.Signal(w>>shift) != SIGSTOP } - -func (w WaitStatus) Continued() bool { return w&mask == stopped && syscall.Signal(w>>shift) == SIGSTOP } - -func (w WaitStatus) StopSignal() syscall.Signal { - if !w.Stopped() { - return -1 - } - return syscall.Signal(w>>shift) & 0xFF -} - -func (w WaitStatus) TrapCause() int { return -1 } - -//sys wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) - -func Wait4(pid int, wstatus *WaitStatus, options int, rusage *Rusage) (wpid int, err error) { - var status _C_int - wpid, err = wait4(pid, &status, options, rusage) - if wstatus != nil { - *wstatus = WaitStatus(status) - } - return -} - -//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) -//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sysnb socket(domain int, typ int, proto int) (fd int, err error) -//sys getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) -//sys setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) -//sysnb getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sysnb getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sys Shutdown(s int, how int) (err error) - -func (sa *SockaddrInet4) sockaddr() (unsafe.Pointer, _Socklen, error) { - if sa.Port < 0 || sa.Port > 0xFFFF { - return nil, 0, EINVAL - } - sa.raw.Len = SizeofSockaddrInet4 - sa.raw.Family = AF_INET - p := (*[2]byte)(unsafe.Pointer(&sa.raw.Port)) - p[0] = byte(sa.Port >> 8) - p[1] = byte(sa.Port) - for i := 0; i < len(sa.Addr); i++ { - sa.raw.Addr[i] = sa.Addr[i] - } - return unsafe.Pointer(&sa.raw), _Socklen(sa.raw.Len), nil -} - -func (sa *SockaddrInet6) sockaddr() (unsafe.Pointer, _Socklen, error) { - if sa.Port < 0 || sa.Port > 0xFFFF { - return nil, 0, EINVAL - } - sa.raw.Len = SizeofSockaddrInet6 - sa.raw.Family = AF_INET6 - p := (*[2]byte)(unsafe.Pointer(&sa.raw.Port)) - p[0] = byte(sa.Port >> 8) - p[1] = byte(sa.Port) - sa.raw.Scope_id = sa.ZoneId - for i := 0; i < len(sa.Addr); i++ { - sa.raw.Addr[i] = sa.Addr[i] - } - return unsafe.Pointer(&sa.raw), _Socklen(sa.raw.Len), nil -} - -func (sa *SockaddrUnix) sockaddr() (unsafe.Pointer, _Socklen, error) { - name := sa.Name - n := len(name) - if n >= len(sa.raw.Path) || n == 0 { - return nil, 0, EINVAL - } - sa.raw.Len = byte(3 + n) // 2 for Family, Len; 1 for NUL - sa.raw.Family = AF_UNIX - for i := 0; i < n; i++ { - 
sa.raw.Path[i] = int8(name[i]) - } - return unsafe.Pointer(&sa.raw), _Socklen(sa.raw.Len), nil -} - -func (sa *SockaddrDatalink) sockaddr() (unsafe.Pointer, _Socklen, error) { - if sa.Index == 0 { - return nil, 0, EINVAL - } - sa.raw.Len = sa.Len - sa.raw.Family = AF_LINK - sa.raw.Index = sa.Index - sa.raw.Type = sa.Type - sa.raw.Nlen = sa.Nlen - sa.raw.Alen = sa.Alen - sa.raw.Slen = sa.Slen - for i := 0; i < len(sa.raw.Data); i++ { - sa.raw.Data[i] = sa.Data[i] - } - return unsafe.Pointer(&sa.raw), SizeofSockaddrDatalink, nil -} - -func anyToSockaddr(rsa *RawSockaddrAny) (Sockaddr, error) { - switch rsa.Addr.Family { - case AF_LINK: - pp := (*RawSockaddrDatalink)(unsafe.Pointer(rsa)) - sa := new(SockaddrDatalink) - sa.Len = pp.Len - sa.Family = pp.Family - sa.Index = pp.Index - sa.Type = pp.Type - sa.Nlen = pp.Nlen - sa.Alen = pp.Alen - sa.Slen = pp.Slen - for i := 0; i < len(sa.Data); i++ { - sa.Data[i] = pp.Data[i] - } - return sa, nil - - case AF_UNIX: - pp := (*RawSockaddrUnix)(unsafe.Pointer(rsa)) - if pp.Len < 2 || pp.Len > SizeofSockaddrUnix { - return nil, EINVAL - } - sa := new(SockaddrUnix) - - // Some BSDs include the trailing NUL in the length, whereas - // others do not. Work around this by subtracting the leading - // family and len. The path is then scanned to see if a NUL - // terminator still exists within the length. - n := int(pp.Len) - 2 // subtract leading Family, Len - for i := 0; i < n; i++ { - if pp.Path[i] == 0 { - // found early NUL; assume Len included the NUL - // or was overestimating. - n = i - break - } - } - bytes := (*[10000]byte)(unsafe.Pointer(&pp.Path[0]))[0:n] - sa.Name = string(bytes) - return sa, nil - - case AF_INET: - pp := (*RawSockaddrInet4)(unsafe.Pointer(rsa)) - sa := new(SockaddrInet4) - p := (*[2]byte)(unsafe.Pointer(&pp.Port)) - sa.Port = int(p[0])<<8 + int(p[1]) - for i := 0; i < len(sa.Addr); i++ { - sa.Addr[i] = pp.Addr[i] - } - return sa, nil - - case AF_INET6: - pp := (*RawSockaddrInet6)(unsafe.Pointer(rsa)) - sa := new(SockaddrInet6) - p := (*[2]byte)(unsafe.Pointer(&pp.Port)) - sa.Port = int(p[0])<<8 + int(p[1]) - sa.ZoneId = pp.Scope_id - for i := 0; i < len(sa.Addr); i++ { - sa.Addr[i] = pp.Addr[i] - } - return sa, nil - } - return nil, EAFNOSUPPORT -} - -func Accept(fd int) (nfd int, sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - nfd, err = accept(fd, &rsa, &len) - if err != nil { - return - } - if runtime.GOOS == "darwin" && len == 0 { - // Accepted socket has no address. - // This is likely due to a bug in xnu kernels, - // where instead of ECONNABORTED error socket - // is accepted, but has no address. - Close(nfd) - return 0, nil, ECONNABORTED - } - sa, err = anyToSockaddr(&rsa) - if err != nil { - Close(nfd) - nfd = 0 - } - return -} - -func Getsockname(fd int) (sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - if err = getsockname(fd, &rsa, &len); err != nil { - return - } - // TODO(jsing): DragonFly has a "bug" (see issue 3349), which should be - // reported upstream. 
- if runtime.GOOS == "dragonfly" && rsa.Addr.Family == AF_UNSPEC && rsa.Addr.Len == 0 { - rsa.Addr.Family = AF_UNIX - rsa.Addr.Len = SizeofSockaddrUnix - } - return anyToSockaddr(&rsa) -} - -//sysnb socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) - -func GetsockoptByte(fd, level, opt int) (value byte, err error) { - var n byte - vallen := _Socklen(1) - err = getsockopt(fd, level, opt, unsafe.Pointer(&n), &vallen) - return n, err -} - -func GetsockoptInet4Addr(fd, level, opt int) (value [4]byte, err error) { - vallen := _Socklen(4) - err = getsockopt(fd, level, opt, unsafe.Pointer(&value[0]), &vallen) - return value, err -} - -func GetsockoptIPMreq(fd, level, opt int) (*IPMreq, error) { - var value IPMreq - vallen := _Socklen(SizeofIPMreq) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func GetsockoptIPv6Mreq(fd, level, opt int) (*IPv6Mreq, error) { - var value IPv6Mreq - vallen := _Socklen(SizeofIPv6Mreq) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func GetsockoptIPv6MTUInfo(fd, level, opt int) (*IPv6MTUInfo, error) { - var value IPv6MTUInfo - vallen := _Socklen(SizeofIPv6MTUInfo) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func GetsockoptICMPv6Filter(fd, level, opt int) (*ICMPv6Filter, error) { - var value ICMPv6Filter - vallen := _Socklen(SizeofICMPv6Filter) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -//sys recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) -//sys sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) -//sys recvmsg(s int, msg *Msghdr, flags int) (n int, err error) - -func Recvmsg(fd int, p, oob []byte, flags int) (n, oobn int, recvflags int, from Sockaddr, err error) { - var msg Msghdr - var rsa RawSockaddrAny - msg.Name = (*byte)(unsafe.Pointer(&rsa)) - msg.Namelen = uint32(SizeofSockaddrAny) - var iov Iovec - if len(p) > 0 { - iov.Base = (*byte)(unsafe.Pointer(&p[0])) - iov.SetLen(len(p)) - } - var dummy byte - if len(oob) > 0 { - // receive at least one normal byte - if len(p) == 0 { - iov.Base = &dummy - iov.SetLen(1) - } - msg.Control = (*byte)(unsafe.Pointer(&oob[0])) - msg.SetControllen(len(oob)) - } - msg.Iov = &iov - msg.Iovlen = 1 - if n, err = recvmsg(fd, &msg, flags); err != nil { - return - } - oobn = int(msg.Controllen) - recvflags = int(msg.Flags) - // source address is only specified if the socket is unconnected - if rsa.Addr.Family != AF_UNSPEC { - from, err = anyToSockaddr(&rsa) - } - return -} - -//sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) - -func Sendmsg(fd int, p, oob []byte, to Sockaddr, flags int) (err error) { - _, err = SendmsgN(fd, p, oob, to, flags) - return -} - -func SendmsgN(fd int, p, oob []byte, to Sockaddr, flags int) (n int, err error) { - var ptr unsafe.Pointer - var salen _Socklen - if to != nil { - ptr, salen, err = to.sockaddr() - if err != nil { - return 0, err - } - } - var msg Msghdr - msg.Name = (*byte)(unsafe.Pointer(ptr)) - msg.Namelen = uint32(salen) - var iov Iovec - if len(p) > 0 { - iov.Base = (*byte)(unsafe.Pointer(&p[0])) - iov.SetLen(len(p)) - } - var dummy byte - if len(oob) > 0 { - // send at least one normal byte - if len(p) == 0 { - iov.Base = &dummy - iov.SetLen(1) - } - msg.Control = (*byte)(unsafe.Pointer(&oob[0])) - msg.SetControllen(len(oob)) - } - msg.Iov = &iov - msg.Iovlen = 1 - if n, err = 
sendmsg(fd, &msg, flags); err != nil { - return 0, err - } - if len(oob) > 0 && len(p) == 0 { - n = 0 - } - return n, nil -} - -//sys kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) - -func Kevent(kq int, changes, events []Kevent_t, timeout *Timespec) (n int, err error) { - var change, event unsafe.Pointer - if len(changes) > 0 { - change = unsafe.Pointer(&changes[0]) - } - if len(events) > 0 { - event = unsafe.Pointer(&events[0]) - } - return kevent(kq, change, len(changes), event, len(events), timeout) -} - -//sys sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) = SYS___SYSCTL - -// sysctlmib translates name to mib number and appends any additional args. -func sysctlmib(name string, args ...int) ([]_C_int, error) { - // Translate name to mib number. - mib, err := nametomib(name) - if err != nil { - return nil, err - } - - for _, a := range args { - mib = append(mib, _C_int(a)) - } - - return mib, nil -} - -func Sysctl(name string) (string, error) { - return SysctlArgs(name) -} - -func SysctlArgs(name string, args ...int) (string, error) { - mib, err := sysctlmib(name, args...) - if err != nil { - return "", err - } - - // Find size. - n := uintptr(0) - if err := sysctl(mib, nil, &n, nil, 0); err != nil { - return "", err - } - if n == 0 { - return "", nil - } - - // Read into buffer of that size. - buf := make([]byte, n) - if err := sysctl(mib, &buf[0], &n, nil, 0); err != nil { - return "", err - } - - // Throw away terminating NUL. - if n > 0 && buf[n-1] == '\x00' { - n-- - } - return string(buf[0:n]), nil -} - -func SysctlUint32(name string) (uint32, error) { - return SysctlUint32Args(name) -} - -func SysctlUint32Args(name string, args ...int) (uint32, error) { - mib, err := sysctlmib(name, args...) - if err != nil { - return 0, err - } - - n := uintptr(4) - buf := make([]byte, 4) - if err := sysctl(mib, &buf[0], &n, nil, 0); err != nil { - return 0, err - } - if n != 4 { - return 0, EIO - } - return *(*uint32)(unsafe.Pointer(&buf[0])), nil -} - -func SysctlUint64(name string, args ...int) (uint64, error) { - mib, err := sysctlmib(name, args...) - if err != nil { - return 0, err - } - - n := uintptr(8) - buf := make([]byte, 8) - if err := sysctl(mib, &buf[0], &n, nil, 0); err != nil { - return 0, err - } - if n != 8 { - return 0, EIO - } - return *(*uint64)(unsafe.Pointer(&buf[0])), nil -} - -func SysctlRaw(name string, args ...int) ([]byte, error) { - mib, err := sysctlmib(name, args...) - if err != nil { - return nil, err - } - - // Find size. - n := uintptr(0) - if err := sysctl(mib, nil, &n, nil, 0); err != nil { - return nil, err - } - if n == 0 { - return nil, nil - } - - // Read into buffer of that size. - buf := make([]byte, n) - if err := sysctl(mib, &buf[0], &n, nil, 0); err != nil { - return nil, err - } - - // The actual call may return less than the original reported required - // size so ensure we deal with that. 
- return buf[:n], nil -} - -//sys utimes(path string, timeval *[2]Timeval) (err error) - -func Utimes(path string, tv []Timeval) error { - if tv == nil { - return utimes(path, nil) - } - if len(tv) != 2 { - return EINVAL - } - return utimes(path, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -func UtimesNano(path string, ts []Timespec) error { - if ts == nil { - return utimes(path, nil) - } - // TODO: The BSDs can do utimensat with SYS_UTIMENSAT but it - // isn't supported by darwin so this uses utimes instead - if len(ts) != 2 { - return EINVAL - } - // Not as efficient as it could be because Timespec and - // Timeval have different types in the different OSes - tv := [2]Timeval{ - NsecToTimeval(TimespecToNsec(ts[0])), - NsecToTimeval(TimespecToNsec(ts[1])), - } - return utimes(path, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -//sys futimes(fd int, timeval *[2]Timeval) (err error) - -func Futimes(fd int, tv []Timeval) error { - if tv == nil { - return futimes(fd, nil) - } - if len(tv) != 2 { - return EINVAL - } - return futimes(fd, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -//sys fcntl(fd int, cmd int, arg int) (val int, err error) - -// TODO: wrap -// Acct(name nil-string) (err error) -// Gethostuuid(uuid *byte, timeout *Timespec) (err error) -// Madvise(addr *byte, len int, behav int) (err error) -// Mprotect(addr *byte, len int, prot int) (err error) -// Msync(addr *byte, len int, flags int) (err error) -// Ptrace(req int, pid int, addr uintptr, data int) (ret uintptr, err error) - -var mapper = &mmapper{ - active: make(map[*byte][]byte), - mmap: mmap, - munmap: munmap, -} - -func Mmap(fd int, offset int64, length int, prot int, flags int) (data []byte, err error) { - return mapper.Mmap(fd, offset, length, prot, flags) -} - -func Munmap(b []byte) (err error) { - return mapper.Munmap(b) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin.go deleted file mode 100644 index 0d1771c3fca..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin.go +++ /dev/null @@ -1,509 +0,0 @@ -// Copyright 2009,2010 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Darwin system calls. -// This file is compiled as ordinary Go code, -// but it is also input to mksyscall, -// which parses the //sys lines and generates system call stubs. -// Note that sometimes we use a lowercase //sys name and wrap -// it in our own nicer implementation, either here or in -// syscall_bsd.go or syscall_unix.go. - -package unix - -import ( - errorspkg "errors" - "syscall" - "unsafe" -) - -const ImplementsGetwd = true - -func Getwd() (string, error) { - buf := make([]byte, 2048) - attrs, err := getAttrList(".", attrList{CommonAttr: attrCmnFullpath}, buf, 0) - if err == nil && len(attrs) == 1 && len(attrs[0]) >= 2 { - wd := string(attrs[0]) - // Sanity check that it's an absolute path and ends - // in a null byte, which we then strip. - if wd[0] == '/' && wd[len(wd)-1] == 0 { - return wd[:len(wd)-1], nil - } - } - // If pkg/os/getwd.go gets ENOTSUP, it will fall back to the - // slow algorithm. 
- return "", ENOTSUP -} - -type SockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 - raw RawSockaddrDatalink -} - -// Translate "kern.hostname" to []_C_int{0,1,2,3}. -func nametomib(name string) (mib []_C_int, err error) { - const siz = unsafe.Sizeof(mib[0]) - - // NOTE(rsc): It seems strange to set the buffer to have - // size CTL_MAXNAME+2 but use only CTL_MAXNAME - // as the size. I don't know why the +2 is here, but the - // kernel uses +2 for its own implementation of this function. - // I am scared that if we don't include the +2 here, the kernel - // will silently write 2 words farther than we specify - // and we'll get memory corruption. - var buf [CTL_MAXNAME + 2]_C_int - n := uintptr(CTL_MAXNAME) * siz - - p := (*byte)(unsafe.Pointer(&buf[0])) - bytes, err := ByteSliceFromString(name) - if err != nil { - return nil, err - } - - // Magic sysctl: "setting" 0.3 to a string name - // lets you read back the array of integers form. - if err = sysctl([]_C_int{0, 3}, p, &n, &bytes[0], uintptr(len(name))); err != nil { - return nil, err - } - return buf[0 : n/siz], nil -} - -// ParseDirent parses up to max directory entries in buf, -// appending the names to names. It returns the number -// bytes consumed from buf, the number of entries added -// to names, and the new names slice. -func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) { - origlen := len(buf) - for max != 0 && len(buf) > 0 { - dirent := (*Dirent)(unsafe.Pointer(&buf[0])) - if dirent.Reclen == 0 { - buf = nil - break - } - buf = buf[dirent.Reclen:] - if dirent.Ino == 0 { // File absent in directory. - continue - } - bytes := (*[10000]byte)(unsafe.Pointer(&dirent.Name[0])) - var name = string(bytes[0:dirent.Namlen]) - if name == "." || name == ".." { // Useless names - continue - } - max-- - count++ - names = append(names, name) - } - return origlen - len(buf), count, names -} - -//sys ptrace(request int, pid int, addr uintptr, data uintptr) (err error) -func PtraceAttach(pid int) (err error) { return ptrace(PT_ATTACH, pid, 0, 0) } -func PtraceDetach(pid int) (err error) { return ptrace(PT_DETACH, pid, 0, 0) } - -const ( - attrBitMapCount = 5 - attrCmnFullpath = 0x08000000 -) - -type attrList struct { - bitmapCount uint16 - _ uint16 - CommonAttr uint32 - VolAttr uint32 - DirAttr uint32 - FileAttr uint32 - Forkattr uint32 -} - -func getAttrList(path string, attrList attrList, attrBuf []byte, options uint) (attrs [][]byte, err error) { - if len(attrBuf) < 4 { - return nil, errorspkg.New("attrBuf too small") - } - attrList.bitmapCount = attrBitMapCount - - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return nil, err - } - - _, _, e1 := Syscall6( - SYS_GETATTRLIST, - uintptr(unsafe.Pointer(_p0)), - uintptr(unsafe.Pointer(&attrList)), - uintptr(unsafe.Pointer(&attrBuf[0])), - uintptr(len(attrBuf)), - uintptr(options), - 0, - ) - if e1 != 0 { - return nil, e1 - } - size := *(*uint32)(unsafe.Pointer(&attrBuf[0])) - - // dat is the section of attrBuf that contains valid data, - // without the 4 byte length header. All attribute offsets - // are relative to dat. 
- dat := attrBuf - if int(size) < len(attrBuf) { - dat = dat[:size] - } - dat = dat[4:] // remove length prefix - - for i := uint32(0); int(i) < len(dat); { - header := dat[i:] - if len(header) < 8 { - return attrs, errorspkg.New("truncated attribute header") - } - datOff := *(*int32)(unsafe.Pointer(&header[0])) - attrLen := *(*uint32)(unsafe.Pointer(&header[4])) - if datOff < 0 || uint32(datOff)+attrLen > uint32(len(dat)) { - return attrs, errorspkg.New("truncated results; attrBuf too small") - } - end := uint32(datOff) + attrLen - attrs = append(attrs, dat[datOff:end]) - i = end - if r := i % 4; r != 0 { - i += (4 - r) - } - } - return -} - -//sysnb pipe() (r int, w int, err error) - -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - p[0], p[1], err = pipe() - return -} - -func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { - var _p0 unsafe.Pointer - var bufsize uintptr - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - bufsize = unsafe.Sizeof(Statfs_t{}) * uintptr(len(buf)) - } - r0, _, e1 := Syscall(SYS_GETFSSTAT64, uintptr(_p0), bufsize, uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -/* - * Wrapped - */ - -//sys kill(pid int, signum int, posix int) (err error) - -func Kill(pid int, signum syscall.Signal) (err error) { return kill(pid, int(signum), 1) } - -/* - * Exposed directly - */ -//sys Access(path string, mode uint32) (err error) -//sys Adjtime(delta *Timeval, olddelta *Timeval) (err error) -//sys Chdir(path string) (err error) -//sys Chflags(path string, flags int) (err error) -//sys Chmod(path string, mode uint32) (err error) -//sys Chown(path string, uid int, gid int) (err error) -//sys Chroot(path string) (err error) -//sys Close(fd int) (err error) -//sys Dup(fd int) (nfd int, err error) -//sys Dup2(from int, to int) (err error) -//sys Exchangedata(path1 string, path2 string, options int) (err error) -//sys Exit(code int) -//sys Fchdir(fd int) (err error) -//sys Fchflags(fd int, flags int) (err error) -//sys Fchmod(fd int, mode uint32) (err error) -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Flock(fd int, how int) (err error) -//sys Fpathconf(fd int, name int) (val int, err error) -//sys Fstat(fd int, stat *Stat_t) (err error) = SYS_FSTAT64 -//sys Fstatfs(fd int, stat *Statfs_t) (err error) = SYS_FSTATFS64 -//sys Fsync(fd int) (err error) -//sys Ftruncate(fd int, length int64) (err error) -//sys Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) = SYS_GETDIRENTRIES64 -//sys Getdtablesize() (size int) -//sysnb Getegid() (egid int) -//sysnb Geteuid() (uid int) -//sysnb Getgid() (gid int) -//sysnb Getpgid(pid int) (pgid int, err error) -//sysnb Getpgrp() (pgrp int) -//sysnb Getpid() (pid int) -//sysnb Getppid() (ppid int) -//sys Getpriority(which int, who int) (prio int, err error) -//sysnb Getrlimit(which int, lim *Rlimit) (err error) -//sysnb Getrusage(who int, rusage *Rusage) (err error) -//sysnb Getsid(pid int) (sid int, err error) -//sysnb Getuid() (uid int) -//sysnb Issetugid() (tainted bool) -//sys Kqueue() (fd int, err error) -//sys Lchown(path string, uid int, gid int) (err error) -//sys Link(path string, link string) (err error) -//sys Listen(s int, backlog int) (err error) -//sys Lstat(path string, stat *Stat_t) (err error) = SYS_LSTAT64 -//sys Mkdir(path string, mode uint32) (err error) -//sys Mkfifo(path string, mode uint32) (err error) -//sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) 
-//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) -//sys Open(path string, mode int, perm uint32) (fd int, err error) -//sys Pathconf(path string, name int) (val int, err error) -//sys Pread(fd int, p []byte, offset int64) (n int, err error) -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) -//sys read(fd int, p []byte) (n int, err error) -//sys Readlink(path string, buf []byte) (n int, err error) -//sys Rename(from string, to string) (err error) -//sys Revoke(path string) (err error) -//sys Rmdir(path string) (err error) -//sys Seek(fd int, offset int64, whence int) (newoffset int64, err error) = SYS_LSEEK -//sys Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) -//sys Setegid(egid int) (err error) -//sysnb Seteuid(euid int) (err error) -//sysnb Setgid(gid int) (err error) -//sys Setlogin(name string) (err error) -//sysnb Setpgid(pid int, pgid int) (err error) -//sys Setpriority(which int, who int, prio int) (err error) -//sys Setprivexec(flag int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sysnb Setrlimit(which int, lim *Rlimit) (err error) -//sysnb Setsid() (pid int, err error) -//sysnb Settimeofday(tp *Timeval) (err error) -//sysnb Setuid(uid int) (err error) -//sys Stat(path string, stat *Stat_t) (err error) = SYS_STAT64 -//sys Statfs(path string, stat *Statfs_t) (err error) = SYS_STATFS64 -//sys Symlink(path string, link string) (err error) -//sys Sync() (err error) -//sys Truncate(path string, length int64) (err error) -//sys Umask(newmask int) (oldmask int) -//sys Undelete(path string) (err error) -//sys Unlink(path string) (err error) -//sys Unmount(path string, flags int) (err error) -//sys write(fd int, p []byte) (n int, err error) -//sys mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) -//sys munmap(addr uintptr, length uintptr) (err error) -//sys readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ -//sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE - -/* - * Unimplemented - */ -// Profil -// Sigaction -// Sigprocmask -// Getlogin -// Sigpending -// Sigaltstack -// Ioctl -// Reboot -// Execve -// Vfork -// Sbrk -// Sstk -// Ovadvise -// Mincore -// Setitimer -// Swapon -// Select -// Sigsuspend -// Readv -// Writev -// Nfssvc -// Getfh -// Quotactl -// Mount -// Csops -// Waitid -// Add_profil -// Kdebug_trace -// Sigreturn -// Mmap -// Mlock -// Munlock -// Atsocket -// Kqueue_from_portset_np -// Kqueue_portset -// Getattrlist -// Setattrlist -// Getdirentriesattr -// Searchfs -// Delete -// Copyfile -// Poll -// Watchevent -// Waitevent -// Modwatch -// Getxattr -// Fgetxattr -// Setxattr -// Fsetxattr -// Removexattr -// Fremovexattr -// Listxattr -// Flistxattr -// Fsctl -// Initgroups -// Posix_spawn -// Nfsclnt -// Fhopen -// Minherit -// Semsys -// Msgsys -// Shmsys -// Semctl -// Semget -// Semop -// Msgctl -// Msgget -// Msgsnd -// Msgrcv -// Shmat -// Shmctl -// Shmdt -// Shmget -// Shm_open -// Shm_unlink -// Sem_open -// Sem_close -// Sem_unlink -// Sem_wait -// Sem_trywait -// Sem_post -// Sem_getvalue -// Sem_init -// Sem_destroy -// Open_extended -// Umask_extended -// Stat_extended -// Lstat_extended -// Fstat_extended -// Chmod_extended -// Fchmod_extended -// Access_extended -// Settid -// Gettid -// Setsgroups -// Getsgroups -// Setwgroups -// Getwgroups -// Mkfifo_extended -// Mkdir_extended -// 
Identitysvc -// Shared_region_check_np -// Shared_region_map_np -// __pthread_mutex_destroy -// __pthread_mutex_init -// __pthread_mutex_lock -// __pthread_mutex_trylock -// __pthread_mutex_unlock -// __pthread_cond_init -// __pthread_cond_destroy -// __pthread_cond_broadcast -// __pthread_cond_signal -// Setsid_with_pid -// __pthread_cond_timedwait -// Aio_fsync -// Aio_return -// Aio_suspend -// Aio_cancel -// Aio_error -// Aio_read -// Aio_write -// Lio_listio -// __pthread_cond_wait -// Iopolicysys -// Mlockall -// Munlockall -// __pthread_kill -// __pthread_sigmask -// __sigwait -// __disable_threadsignal -// __pthread_markcancel -// __pthread_canceled -// __semwait_signal -// Proc_info -// sendfile -// Stat64_extended -// Lstat64_extended -// Fstat64_extended -// __pthread_chdir -// __pthread_fchdir -// Audit -// Auditon -// Getauid -// Setauid -// Getaudit -// Setaudit -// Getaudit_addr -// Setaudit_addr -// Auditctl -// Bsdthread_create -// Bsdthread_terminate -// Stack_snapshot -// Bsdthread_register -// Workq_open -// Workq_ops -// __mac_execve -// __mac_syscall -// __mac_get_file -// __mac_set_file -// __mac_get_link -// __mac_set_link -// __mac_get_proc -// __mac_set_proc -// __mac_get_fd -// __mac_set_fd -// __mac_get_pid -// __mac_get_lcid -// __mac_get_lctx -// __mac_set_lctx -// Setlcid -// Read_nocancel -// Write_nocancel -// Open_nocancel -// Close_nocancel -// Wait4_nocancel -// Recvmsg_nocancel -// Sendmsg_nocancel -// Recvfrom_nocancel -// Accept_nocancel -// Msync_nocancel -// Fcntl_nocancel -// Select_nocancel -// Fsync_nocancel -// Connect_nocancel -// Sigsuspend_nocancel -// Readv_nocancel -// Writev_nocancel -// Sendto_nocancel -// Pread_nocancel -// Pwrite_nocancel -// Waitid_nocancel -// Poll_nocancel -// Msgsnd_nocancel -// Msgrcv_nocancel -// Sem_wait_nocancel -// Aio_suspend_nocancel -// __sigwait_nocancel -// __semwait_signal_nocancel -// __mac_mount -// __mac_get_mount -// __mac_getfsstat diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_386.go deleted file mode 100644 index 3195c8bf5c6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_386.go +++ /dev/null @@ -1,79 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build 386,darwin - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int32(nsec / 1e9) - return -} - -//sysnb gettimeofday(tp *Timeval) (sec int32, usec int32, err error) -func Gettimeofday(tv *Timeval) (err error) { - // The tv passed to gettimeofday must be non-nil - // but is otherwise unused. The answers come back - // in the two registers. 
- sec, usec, err := gettimeofday(tv) - tv.Sec = int32(sec) - tv.Usec = int32(usec) - return err -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint32(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var length = uint64(count) - - _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(*offset>>32), uintptr(unsafe.Pointer(&length)), 0, 0, 0, 0) - - written = int(length) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) - -// SYS___SYSCTL is used by syscall_bsd.go for all BSDs, but in modern versions -// of darwin/386 the syscall is called sysctl instead of __sysctl. -const SYS___SYSCTL = SYS_SYSCTL diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_amd64.go deleted file mode 100644 index 7adb98ded51..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_amd64.go +++ /dev/null @@ -1,81 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build amd64,darwin - -package unix - -import ( - "syscall" - "unsafe" -) - -//sys Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return -} - -//sysnb gettimeofday(tp *Timeval) (sec int64, usec int32, err error) -func Gettimeofday(tv *Timeval) (err error) { - // The tv passed to gettimeofday must be non-nil - // but is otherwise unused. The answers come back - // in the two registers. 
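Each of the deleted per-architecture darwin files repeats the same Timespec/Timeval conversion helpers, differing only in the integer widths of the underlying struct fields. A minimal, platform-neutral sketch of that conversion logic follows; the lowercase names and plain `int64` parameters are illustrative, not the package's API:

```go
package main

import "fmt"

// timevalToNsec mirrors the per-arch TimevalToNsec helpers: both fields
// are widened to int64 before combining, so the arithmetic is identical
// regardless of how wide the platform's Timeval fields are.
func timevalToNsec(sec, usec int64) int64 {
	return sec*1e9 + usec*1e3
}

// nsecToTimeval mirrors NsecToTimeval, including its "+999" adjustment:
// a Timeval cannot hold sub-microsecond precision, so a partial
// microsecond is rounded up rather than silently dropped.
func nsecToTimeval(nsec int64) (sec, usec int64) {
	nsec += 999 // round up to the next whole microsecond
	return nsec / 1e9, nsec % 1e9 / 1e3
}

func main() {
	sec, usec := nsecToTimeval(1500000001) // 1.5 s plus 1 ns
	fmt.Println(sec, usec)                 // 1 500001 (the stray nanosecond rounds up)
	fmt.Println(timevalToNsec(sec, usec))  // 1500001000
}
```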
- sec, usec, err := gettimeofday(tv) - tv.Sec = sec - tv.Usec = usec - return err -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint64(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var length = uint64(count) - - _, _, e1 := Syscall6(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(unsafe.Pointer(&length)), 0, 0) - - written = int(length) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) - -// SYS___SYSCTL is used by syscall_bsd.go for all BSDs, but in modern versions -// of darwin/amd64 the syscall is called sysctl instead of __sysctl. -const SYS___SYSCTL = SYS_SYSCTL diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_arm.go deleted file mode 100644 index e47ffd73967..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_arm.go +++ /dev/null @@ -1,73 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int32(nsec / 1e9) - return -} - -//sysnb gettimeofday(tp *Timeval) (sec int32, usec int32, err error) -func Gettimeofday(tv *Timeval) (err error) { - // The tv passed to gettimeofday must be non-nil - // but is otherwise unused. The answers come back - // in the two registers. 
- sec, usec, err := gettimeofday(tv) - tv.Sec = int32(sec) - tv.Usec = int32(usec) - return err -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint32(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var length = uint64(count) - - _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(*offset>>32), uintptr(unsafe.Pointer(&length)), 0, 0, 0, 0) - - written = int(length) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) // sic diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_arm64.go deleted file mode 100644 index 2560a959983..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_darwin_arm64.go +++ /dev/null @@ -1,79 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build arm64,darwin - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 16384 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return -} - -//sysnb gettimeofday(tp *Timeval) (sec int64, usec int32, err error) -func Gettimeofday(tv *Timeval) (err error) { - // The tv passed to gettimeofday must be non-nil - // but is otherwise unused. The answers come back - // in the two registers. - sec, usec, err := gettimeofday(tv) - tv.Sec = sec - tv.Usec = usec - return err -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint64(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var length = uint64(count) - - _, _, e1 := Syscall6(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(unsafe.Pointer(&length)), 0, 0) - - written = int(length) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) // sic - -// SYS___SYSCTL is used by syscall_bsd.go for all BSDs, but in modern versions -// of darwin/arm64 the syscall is called sysctl instead of __sysctl. 
-const SYS___SYSCTL = SYS_SYSCTL diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly.go deleted file mode 100644 index fbbe0dce255..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly.go +++ /dev/null @@ -1,411 +0,0 @@ -// Copyright 2009,2010 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// FreeBSD system calls. -// This file is compiled as ordinary Go code, -// but it is also input to mksyscall, -// which parses the //sys lines and generates system call stubs. -// Note that sometimes we use a lowercase //sys name and wrap -// it in our own nicer implementation, either here or in -// syscall_bsd.go or syscall_unix.go. - -package unix - -import "unsafe" - -type SockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 - Rcf uint16 - Route [16]uint16 - raw RawSockaddrDatalink -} - -// Translate "kern.hostname" to []_C_int{0,1,2,3}. -func nametomib(name string) (mib []_C_int, err error) { - const siz = unsafe.Sizeof(mib[0]) - - // NOTE(rsc): It seems strange to set the buffer to have - // size CTL_MAXNAME+2 but use only CTL_MAXNAME - // as the size. I don't know why the +2 is here, but the - // kernel uses +2 for its own implementation of this function. - // I am scared that if we don't include the +2 here, the kernel - // will silently write 2 words farther than we specify - // and we'll get memory corruption. - var buf [CTL_MAXNAME + 2]_C_int - n := uintptr(CTL_MAXNAME) * siz - - p := (*byte)(unsafe.Pointer(&buf[0])) - bytes, err := ByteSliceFromString(name) - if err != nil { - return nil, err - } - - // Magic sysctl: "setting" 0.3 to a string name - // lets you read back the array of integers form. - if err = sysctl([]_C_int{0, 3}, p, &n, &bytes[0], uintptr(len(name))); err != nil { - return nil, err - } - return buf[0 : n/siz], nil -} - -// ParseDirent parses up to max directory entries in buf, -// appending the names to names. It returns the number -// bytes consumed from buf, the number of entries added -// to names, and the new names slice. -func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) { - origlen := len(buf) - for max != 0 && len(buf) > 0 { - dirent := (*Dirent)(unsafe.Pointer(&buf[0])) - reclen := int(16+dirent.Namlen+1+7) & ^7 - buf = buf[reclen:] - if dirent.Fileno == 0 { // File absent in directory. - continue - } - bytes := (*[10000]byte)(unsafe.Pointer(&dirent.Name[0])) - var name = string(bytes[0:dirent.Namlen]) - if name == "." || name == ".." 
{ // Useless names - continue - } - max-- - count++ - names = append(names, name) - } - return origlen - len(buf), count, names -} - -//sysnb pipe() (r int, w int, err error) - -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - p[0], p[1], err = pipe() - return -} - -//sys extpread(fd int, p []byte, flags int, offset int64) (n int, err error) -func Pread(fd int, p []byte, offset int64) (n int, err error) { - return extpread(fd, p, 0, offset) -} - -//sys extpwrite(fd int, p []byte, flags int, offset int64) (n int, err error) -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - return extpwrite(fd, p, 0, offset) -} - -func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { - var _p0 unsafe.Pointer - var bufsize uintptr - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - bufsize = unsafe.Sizeof(Statfs_t{}) * uintptr(len(buf)) - } - r0, _, e1 := Syscall(SYS_GETFSSTAT, uintptr(_p0), bufsize, uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -/* - * Exposed directly - */ -//sys Access(path string, mode uint32) (err error) -//sys Adjtime(delta *Timeval, olddelta *Timeval) (err error) -//sys Chdir(path string) (err error) -//sys Chflags(path string, flags int) (err error) -//sys Chmod(path string, mode uint32) (err error) -//sys Chown(path string, uid int, gid int) (err error) -//sys Chroot(path string) (err error) -//sys Close(fd int) (err error) -//sys Dup(fd int) (nfd int, err error) -//sys Dup2(from int, to int) (err error) -//sys Exit(code int) -//sys Fchdir(fd int) (err error) -//sys Fchflags(fd int, flags int) (err error) -//sys Fchmod(fd int, mode uint32) (err error) -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Flock(fd int, how int) (err error) -//sys Fpathconf(fd int, name int) (val int, err error) -//sys Fstat(fd int, stat *Stat_t) (err error) -//sys Fstatfs(fd int, stat *Statfs_t) (err error) -//sys Fsync(fd int) (err error) -//sys Ftruncate(fd int, length int64) (err error) -//sys Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) -//sys Getdtablesize() (size int) -//sysnb Getegid() (egid int) -//sysnb Geteuid() (uid int) -//sysnb Getgid() (gid int) -//sysnb Getpgid(pid int) (pgid int, err error) -//sysnb Getpgrp() (pgrp int) -//sysnb Getpid() (pid int) -//sysnb Getppid() (ppid int) -//sys Getpriority(which int, who int) (prio int, err error) -//sysnb Getrlimit(which int, lim *Rlimit) (err error) -//sysnb Getrusage(who int, rusage *Rusage) (err error) -//sysnb Getsid(pid int) (sid int, err error) -//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Getuid() (uid int) -//sys Issetugid() (tainted bool) -//sys Kill(pid int, signum syscall.Signal) (err error) -//sys Kqueue() (fd int, err error) -//sys Lchown(path string, uid int, gid int) (err error) -//sys Link(path string, link string) (err error) -//sys Listen(s int, backlog int) (err error) -//sys Lstat(path string, stat *Stat_t) (err error) -//sys Mkdir(path string, mode uint32) (err error) -//sys Mkfifo(path string, mode uint32) (err error) -//sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) -//sys Nanosleep(time *Timespec, leftover *Timespec) (err error) -//sys Open(path string, mode int, perm uint32) (fd int, err error) -//sys Pathconf(path string, name int) (val int, err error) -//sys read(fd int, p []byte) (n int, err error) 
-//sys Readlink(path string, buf []byte) (n int, err error) -//sys Rename(from string, to string) (err error) -//sys Revoke(path string) (err error) -//sys Rmdir(path string) (err error) -//sys Seek(fd int, offset int64, whence int) (newoffset int64, err error) = SYS_LSEEK -//sys Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) -//sysnb Setegid(egid int) (err error) -//sysnb Seteuid(euid int) (err error) -//sysnb Setgid(gid int) (err error) -//sys Setlogin(name string) (err error) -//sysnb Setpgid(pid int, pgid int) (err error) -//sys Setpriority(which int, who int, prio int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sysnb Setresgid(rgid int, egid int, sgid int) (err error) -//sysnb Setresuid(ruid int, euid int, suid int) (err error) -//sysnb Setrlimit(which int, lim *Rlimit) (err error) -//sysnb Setsid() (pid int, err error) -//sysnb Settimeofday(tp *Timeval) (err error) -//sysnb Setuid(uid int) (err error) -//sys Stat(path string, stat *Stat_t) (err error) -//sys Statfs(path string, stat *Statfs_t) (err error) -//sys Symlink(path string, link string) (err error) -//sys Sync() (err error) -//sys Truncate(path string, length int64) (err error) -//sys Umask(newmask int) (oldmask int) -//sys Undelete(path string) (err error) -//sys Unlink(path string) (err error) -//sys Unmount(path string, flags int) (err error) -//sys write(fd int, p []byte) (n int, err error) -//sys mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) -//sys munmap(addr uintptr, length uintptr) (err error) -//sys readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ -//sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE - -/* - * Unimplemented - * TODO(jsing): Update this list for DragonFly. 
- */ -// Profil -// Sigaction -// Sigprocmask -// Getlogin -// Sigpending -// Sigaltstack -// Ioctl -// Reboot -// Execve -// Vfork -// Sbrk -// Sstk -// Ovadvise -// Mincore -// Setitimer -// Swapon -// Select -// Sigsuspend -// Readv -// Writev -// Nfssvc -// Getfh -// Quotactl -// Mount -// Csops -// Waitid -// Add_profil -// Kdebug_trace -// Sigreturn -// Mmap -// Atsocket -// Kqueue_from_portset_np -// Kqueue_portset -// Getattrlist -// Setattrlist -// Getdirentriesattr -// Searchfs -// Delete -// Copyfile -// Poll -// Watchevent -// Waitevent -// Modwatch -// Getxattr -// Fgetxattr -// Setxattr -// Fsetxattr -// Removexattr -// Fremovexattr -// Listxattr -// Flistxattr -// Fsctl -// Initgroups -// Posix_spawn -// Nfsclnt -// Fhopen -// Minherit -// Semsys -// Msgsys -// Shmsys -// Semctl -// Semget -// Semop -// Msgctl -// Msgget -// Msgsnd -// Msgrcv -// Shmat -// Shmctl -// Shmdt -// Shmget -// Shm_open -// Shm_unlink -// Sem_open -// Sem_close -// Sem_unlink -// Sem_wait -// Sem_trywait -// Sem_post -// Sem_getvalue -// Sem_init -// Sem_destroy -// Open_extended -// Umask_extended -// Stat_extended -// Lstat_extended -// Fstat_extended -// Chmod_extended -// Fchmod_extended -// Access_extended -// Settid -// Gettid -// Setsgroups -// Getsgroups -// Setwgroups -// Getwgroups -// Mkfifo_extended -// Mkdir_extended -// Identitysvc -// Shared_region_check_np -// Shared_region_map_np -// __pthread_mutex_destroy -// __pthread_mutex_init -// __pthread_mutex_lock -// __pthread_mutex_trylock -// __pthread_mutex_unlock -// __pthread_cond_init -// __pthread_cond_destroy -// __pthread_cond_broadcast -// __pthread_cond_signal -// Setsid_with_pid -// __pthread_cond_timedwait -// Aio_fsync -// Aio_return -// Aio_suspend -// Aio_cancel -// Aio_error -// Aio_read -// Aio_write -// Lio_listio -// __pthread_cond_wait -// Iopolicysys -// __pthread_kill -// __pthread_sigmask -// __sigwait -// __disable_threadsignal -// __pthread_markcancel -// __pthread_canceled -// __semwait_signal -// Proc_info -// Stat64_extended -// Lstat64_extended -// Fstat64_extended -// __pthread_chdir -// __pthread_fchdir -// Audit -// Auditon -// Getauid -// Setauid -// Getaudit -// Setaudit -// Getaudit_addr -// Setaudit_addr -// Auditctl -// Bsdthread_create -// Bsdthread_terminate -// Stack_snapshot -// Bsdthread_register -// Workq_open -// Workq_ops -// __mac_execve -// __mac_syscall -// __mac_get_file -// __mac_set_file -// __mac_get_link -// __mac_set_link -// __mac_get_proc -// __mac_set_proc -// __mac_get_fd -// __mac_set_fd -// __mac_get_pid -// __mac_get_lcid -// __mac_get_lctx -// __mac_set_lctx -// Setlcid -// Read_nocancel -// Write_nocancel -// Open_nocancel -// Close_nocancel -// Wait4_nocancel -// Recvmsg_nocancel -// Sendmsg_nocancel -// Recvfrom_nocancel -// Accept_nocancel -// Msync_nocancel -// Fcntl_nocancel -// Select_nocancel -// Fsync_nocancel -// Connect_nocancel -// Sigsuspend_nocancel -// Readv_nocancel -// Writev_nocancel -// Sendto_nocancel -// Pread_nocancel -// Pwrite_nocancel -// Waitid_nocancel -// Poll_nocancel -// Msgsnd_nocancel -// Msgrcv_nocancel -// Sem_wait_nocancel -// Aio_suspend_nocancel -// __sigwait_nocancel -// __semwait_signal_nocancel -// __mac_mount -// __mac_get_mount -// __mac_getfsstat diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly_386.go deleted file mode 100644 index 41c2e69782a..00000000000 --- 
a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly_386.go +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build 386,dragonfly - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int32(nsec / 1e9) - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint32(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var writtenOut uint64 = 0 - _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr((*offset)>>32), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0) - - written = int(writtenOut) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly_amd64.go deleted file mode 100644 index 2ed92590e27..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_dragonfly_amd64.go +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
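A detail worth noting in the deleted 32-bit sendfile wrappers (darwin/386, dragonfly/386, freebsd/386 and arm) is how a 64-bit file offset is passed to a syscall ABI whose arguments are 32-bit machine words: the offset is split into a low half and a high half (`*offset` and `*offset>>32`). A standalone sketch of that split, with illustrative function names:

```go
package main

import "fmt"

// splitOffset mimics how the 32-bit sendfile wrappers pass a 64-bit
// offset to Syscall9: the low 32 bits and the high 32 bits are handed
// over as two separate word-sized arguments.
func splitOffset(offset int64) (lo, hi uint32) {
	return uint32(offset), uint32(offset >> 32)
}

// joinOffset reverses the split, as the kernel effectively does.
func joinOffset(lo, hi uint32) int64 {
	return int64(hi)<<32 | int64(lo)
}

func main() {
	const offset = int64(5<<32 + 42) // an offset larger than 32 bits
	lo, hi := splitOffset(offset)
	fmt.Printf("lo=%d hi=%d\n", lo, hi)       // lo=42 hi=5
	fmt.Println(joinOffset(lo, hi) == offset) // true
}
```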
- -// +build amd64,dragonfly - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = nsec % 1e9 / 1e3 - tv.Sec = int64(nsec / 1e9) - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint64(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var writtenOut uint64 = 0 - _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0, 0) - - written = int(writtenOut) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd.go deleted file mode 100644 index ec56ed608a3..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd.go +++ /dev/null @@ -1,682 +0,0 @@ -// Copyright 2009,2010 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// FreeBSD system calls. -// This file is compiled as ordinary Go code, -// but it is also input to mksyscall, -// which parses the //sys lines and generates system call stubs. -// Note that sometimes we use a lowercase //sys name and wrap -// it in our own nicer implementation, either here or in -// syscall_bsd.go or syscall_unix.go. - -package unix - -import "unsafe" - -type SockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [46]int8 - raw RawSockaddrDatalink -} - -// Translate "kern.hostname" to []_C_int{0,1,2,3}. -func nametomib(name string) (mib []_C_int, err error) { - const siz = unsafe.Sizeof(mib[0]) - - // NOTE(rsc): It seems strange to set the buffer to have - // size CTL_MAXNAME+2 but use only CTL_MAXNAME - // as the size. I don't know why the +2 is here, but the - // kernel uses +2 for its own implementation of this function. - // I am scared that if we don't include the +2 here, the kernel - // will silently write 2 words farther than we specify - // and we'll get memory corruption. - var buf [CTL_MAXNAME + 2]_C_int - n := uintptr(CTL_MAXNAME) * siz - - p := (*byte)(unsafe.Pointer(&buf[0])) - bytes, err := ByteSliceFromString(name) - if err != nil { - return nil, err - } - - // Magic sysctl: "setting" 0.3 to a string name - // lets you read back the array of integers form. 
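The nametomib helper being removed here relies on the BSD trick of asking the kernel itself to translate a dotted sysctl name into its numeric MIB via the undocumented `{0, 3}` meta-sysctl. Callers normally never see that machinery; assuming a recent `golang.org/x/sys/unix` module (this vendored copy is much older), the same lookup is a one-liner:

```go
//go:build darwin || dragonfly || freebsd

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// unix.Sysctl performs the name-to-MIB translation internally
	// (the same "0.3" meta-sysctl described above) and then reads
	// the string value back.
	host, err := unix.Sysctl("kern.hostname")
	if err != nil {
		panic(err)
	}
	fmt.Println(host)
}
```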
- if err = sysctl([]_C_int{0, 3}, p, &n, &bytes[0], uintptr(len(name))); err != nil { - return nil, err - } - return buf[0 : n/siz], nil -} - -// ParseDirent parses up to max directory entries in buf, -// appending the names to names. It returns the number -// bytes consumed from buf, the number of entries added -// to names, and the new names slice. -func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) { - origlen := len(buf) - for max != 0 && len(buf) > 0 { - dirent := (*Dirent)(unsafe.Pointer(&buf[0])) - if dirent.Reclen == 0 { - buf = nil - break - } - buf = buf[dirent.Reclen:] - if dirent.Fileno == 0 { // File absent in directory. - continue - } - bytes := (*[10000]byte)(unsafe.Pointer(&dirent.Name[0])) - var name = string(bytes[0:dirent.Namlen]) - if name == "." || name == ".." { // Useless names - continue - } - max-- - count++ - names = append(names, name) - } - return origlen - len(buf), count, names -} - -//sysnb pipe() (r int, w int, err error) - -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - p[0], p[1], err = pipe() - return -} - -func GetsockoptIPMreqn(fd, level, opt int) (*IPMreqn, error) { - var value IPMreqn - vallen := _Socklen(SizeofIPMreqn) - errno := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, errno -} - -func SetsockoptIPMreqn(fd, level, opt int, mreq *IPMreqn) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(mreq), unsafe.Sizeof(*mreq)) -} - -func Accept4(fd, flags int) (nfd int, sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - nfd, err = accept4(fd, &rsa, &len, flags) - if err != nil { - return - } - if len > SizeofSockaddrAny { - panic("RawSockaddrAny too small") - } - sa, err = anyToSockaddr(&rsa) - if err != nil { - Close(nfd) - nfd = 0 - } - return -} - -func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { - var _p0 unsafe.Pointer - var bufsize uintptr - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - bufsize = unsafe.Sizeof(Statfs_t{}) * uintptr(len(buf)) - } - r0, _, e1 := Syscall(SYS_GETFSSTAT, uintptr(_p0), bufsize, uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -// Derive extattr namespace and attribute name - -func xattrnamespace(fullattr string) (ns int, attr string, err error) { - s := -1 - for idx, val := range fullattr { - if val == '.' 
{ - s = idx - break - } - } - - if s == -1 { - return -1, "", ENOATTR - } - - namespace := fullattr[0:s] - attr = fullattr[s+1:] - - switch namespace { - case "user": - return EXTATTR_NAMESPACE_USER, attr, nil - case "system": - return EXTATTR_NAMESPACE_SYSTEM, attr, nil - default: - return -1, "", ENOATTR - } -} - -func initxattrdest(dest []byte, idx int) (d unsafe.Pointer) { - if len(dest) > idx { - return unsafe.Pointer(&dest[idx]) - } else { - return unsafe.Pointer(_zero) - } -} - -// FreeBSD implements its own syscalls to handle extended attributes - -func Getxattr(file string, attr string, dest []byte) (sz int, err error) { - d := initxattrdest(dest, 0) - destsize := len(dest) - - nsid, a, err := xattrnamespace(attr) - if err != nil { - return -1, err - } - - return ExtattrGetFile(file, nsid, a, uintptr(d), destsize) -} - -func Fgetxattr(fd int, attr string, dest []byte) (sz int, err error) { - d := initxattrdest(dest, 0) - destsize := len(dest) - - nsid, a, err := xattrnamespace(attr) - if err != nil { - return -1, err - } - - return ExtattrGetFd(fd, nsid, a, uintptr(d), destsize) -} - -func Lgetxattr(link string, attr string, dest []byte) (sz int, err error) { - d := initxattrdest(dest, 0) - destsize := len(dest) - - nsid, a, err := xattrnamespace(attr) - if err != nil { - return -1, err - } - - return ExtattrGetLink(link, nsid, a, uintptr(d), destsize) -} - -// flags are unused on FreeBSD - -func Fsetxattr(fd int, attr string, data []byte, flags int) (err error) { - d := unsafe.Pointer(&data[0]) - datasiz := len(data) - - nsid, a, err := xattrnamespace(attr) - if err != nil { - return - } - - _, err = ExtattrSetFd(fd, nsid, a, uintptr(d), datasiz) - return -} - -func Setxattr(file string, attr string, data []byte, flags int) (err error) { - d := unsafe.Pointer(&data[0]) - datasiz := len(data) - - nsid, a, err := xattrnamespace(attr) - if err != nil { - return - } - - _, err = ExtattrSetFile(file, nsid, a, uintptr(d), datasiz) - return -} - -func Lsetxattr(link string, attr string, data []byte, flags int) (err error) { - d := unsafe.Pointer(&data[0]) - datasiz := len(data) - - nsid, a, err := xattrnamespace(attr) - if err != nil { - return - } - - _, err = ExtattrSetLink(link, nsid, a, uintptr(d), datasiz) - return -} - -func Removexattr(file string, attr string) (err error) { - nsid, a, err := xattrnamespace(attr) - if err != nil { - return - } - - err = ExtattrDeleteFile(file, nsid, a) - return -} - -func Fremovexattr(fd int, attr string) (err error) { - nsid, a, err := xattrnamespace(attr) - if err != nil { - return - } - - err = ExtattrDeleteFd(fd, nsid, a) - return -} - -func Lremovexattr(link string, attr string) (err error) { - nsid, a, err := xattrnamespace(attr) - if err != nil { - return - } - - err = ExtattrDeleteLink(link, nsid, a) - return -} - -func Listxattr(file string, dest []byte) (sz int, err error) { - d := initxattrdest(dest, 0) - destsiz := len(dest) - - // FreeBSD won't allow you to list xattrs from multiple namespaces - s := 0 - var e error - for _, nsid := range [...]int{EXTATTR_NAMESPACE_USER, EXTATTR_NAMESPACE_SYSTEM} { - stmp, e := ExtattrListFile(file, nsid, uintptr(d), destsiz) - - /* Errors accessing system attrs are ignored so that - * we can implement the Linux-like behavior of omitting errors that - * we don't have read permissions on - * - * Linux will still error if we ask for user attributes on a file that - * we don't have read permissions on, so don't ignore those errors - */ - if e != nil && e == EPERM && nsid != EXTATTR_NAMESPACE_USER { - e 
= nil - continue - } else if e != nil { - return s, e - } - - s += stmp - destsiz -= s - if destsiz < 0 { - destsiz = 0 - } - d = initxattrdest(dest, s) - } - - return s, e -} - -func Flistxattr(fd int, dest []byte) (sz int, err error) { - d := initxattrdest(dest, 0) - destsiz := len(dest) - - s := 0 - var e error - for _, nsid := range [...]int{EXTATTR_NAMESPACE_USER, EXTATTR_NAMESPACE_SYSTEM} { - stmp, e := ExtattrListFd(fd, nsid, uintptr(d), destsiz) - if e != nil && e == EPERM && nsid != EXTATTR_NAMESPACE_USER { - e = nil - continue - } else if e != nil { - return s, e - } - - s += stmp - destsiz -= s - if destsiz < 0 { - destsiz = 0 - } - d = initxattrdest(dest, s) - } - - return s, e -} - -func Llistxattr(link string, dest []byte) (sz int, err error) { - d := initxattrdest(dest, 0) - destsiz := len(dest) - - s := 0 - var e error - for _, nsid := range [...]int{EXTATTR_NAMESPACE_USER, EXTATTR_NAMESPACE_SYSTEM} { - stmp, e := ExtattrListLink(link, nsid, uintptr(d), destsiz) - if e != nil && e == EPERM && nsid != EXTATTR_NAMESPACE_USER { - e = nil - continue - } else if e != nil { - return s, e - } - - s += stmp - destsiz -= s - if destsiz < 0 { - destsiz = 0 - } - d = initxattrdest(dest, s) - } - - return s, e -} - -/* - * Exposed directly - */ -//sys Access(path string, mode uint32) (err error) -//sys Adjtime(delta *Timeval, olddelta *Timeval) (err error) -//sys Chdir(path string) (err error) -//sys Chflags(path string, flags int) (err error) -//sys Chmod(path string, mode uint32) (err error) -//sys Chown(path string, uid int, gid int) (err error) -//sys Chroot(path string) (err error) -//sys Close(fd int) (err error) -//sys Dup(fd int) (nfd int, err error) -//sys Dup2(from int, to int) (err error) -//sys Exit(code int) -//sys ExtattrGetFd(fd int, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) -//sys ExtattrSetFd(fd int, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) -//sys ExtattrDeleteFd(fd int, attrnamespace int, attrname string) (err error) -//sys ExtattrListFd(fd int, attrnamespace int, data uintptr, nbytes int) (ret int, err error) -//sys ExtattrGetFile(file string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) -//sys ExtattrSetFile(file string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) -//sys ExtattrDeleteFile(file string, attrnamespace int, attrname string) (err error) -//sys ExtattrListFile(file string, attrnamespace int, data uintptr, nbytes int) (ret int, err error) -//sys ExtattrGetLink(link string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) -//sys ExtattrSetLink(link string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) -//sys ExtattrDeleteLink(link string, attrnamespace int, attrname string) (err error) -//sys ExtattrListLink(link string, attrnamespace int, data uintptr, nbytes int) (ret int, err error) -//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_POSIX_FADVISE -//sys Fchdir(fd int) (err error) -//sys Fchflags(fd int, flags int) (err error) -//sys Fchmod(fd int, mode uint32) (err error) -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Flock(fd int, how int) (err error) -//sys Fpathconf(fd int, name int) (val int, err error) -//sys Fstat(fd int, stat *Stat_t) (err error) -//sys Fstatfs(fd int, stat *Statfs_t) (err error) -//sys Fsync(fd int) (err error) -//sys Ftruncate(fd int, length int64) (err error) 
-//sys Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) -//sys Getdtablesize() (size int) -//sysnb Getegid() (egid int) -//sysnb Geteuid() (uid int) -//sysnb Getgid() (gid int) -//sysnb Getpgid(pid int) (pgid int, err error) -//sysnb Getpgrp() (pgrp int) -//sysnb Getpid() (pid int) -//sysnb Getppid() (ppid int) -//sys Getpriority(which int, who int) (prio int, err error) -//sysnb Getrlimit(which int, lim *Rlimit) (err error) -//sysnb Getrusage(who int, rusage *Rusage) (err error) -//sysnb Getsid(pid int) (sid int, err error) -//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Getuid() (uid int) -//sys Issetugid() (tainted bool) -//sys Kill(pid int, signum syscall.Signal) (err error) -//sys Kqueue() (fd int, err error) -//sys Lchown(path string, uid int, gid int) (err error) -//sys Link(path string, link string) (err error) -//sys Listen(s int, backlog int) (err error) -//sys Lstat(path string, stat *Stat_t) (err error) -//sys Mkdir(path string, mode uint32) (err error) -//sys Mkfifo(path string, mode uint32) (err error) -//sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) -//sys Nanosleep(time *Timespec, leftover *Timespec) (err error) -//sys Open(path string, mode int, perm uint32) (fd int, err error) -//sys Pathconf(path string, name int) (val int, err error) -//sys Pread(fd int, p []byte, offset int64) (n int, err error) -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) -//sys read(fd int, p []byte) (n int, err error) -//sys Readlink(path string, buf []byte) (n int, err error) -//sys Rename(from string, to string) (err error) -//sys Revoke(path string) (err error) -//sys Rmdir(path string) (err error) -//sys Seek(fd int, offset int64, whence int) (newoffset int64, err error) = SYS_LSEEK -//sys Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) -//sysnb Setegid(egid int) (err error) -//sysnb Seteuid(euid int) (err error) -//sysnb Setgid(gid int) (err error) -//sys Setlogin(name string) (err error) -//sysnb Setpgid(pid int, pgid int) (err error) -//sys Setpriority(which int, who int, prio int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sysnb Setresgid(rgid int, egid int, sgid int) (err error) -//sysnb Setresuid(ruid int, euid int, suid int) (err error) -//sysnb Setrlimit(which int, lim *Rlimit) (err error) -//sysnb Setsid() (pid int, err error) -//sysnb Settimeofday(tp *Timeval) (err error) -//sysnb Setuid(uid int) (err error) -//sys Stat(path string, stat *Stat_t) (err error) -//sys Statfs(path string, stat *Statfs_t) (err error) -//sys Symlink(path string, link string) (err error) -//sys Sync() (err error) -//sys Truncate(path string, length int64) (err error) -//sys Umask(newmask int) (oldmask int) -//sys Undelete(path string) (err error) -//sys Unlink(path string) (err error) -//sys Unmount(path string, flags int) (err error) -//sys write(fd int, p []byte) (n int, err error) -//sys mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) -//sys munmap(addr uintptr, length uintptr) (err error) -//sys readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ -//sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE -//sys accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd 
int, err error) - -/* - * Unimplemented - */ -// Profil -// Sigaction -// Sigprocmask -// Getlogin -// Sigpending -// Sigaltstack -// Ioctl -// Reboot -// Execve -// Vfork -// Sbrk -// Sstk -// Ovadvise -// Mincore -// Setitimer -// Swapon -// Select -// Sigsuspend -// Readv -// Writev -// Nfssvc -// Getfh -// Quotactl -// Mount -// Csops -// Waitid -// Add_profil -// Kdebug_trace -// Sigreturn -// Mmap -// Mlock -// Munlock -// Atsocket -// Kqueue_from_portset_np -// Kqueue_portset -// Getattrlist -// Setattrlist -// Getdirentriesattr -// Searchfs -// Delete -// Copyfile -// Poll -// Watchevent -// Waitevent -// Modwatch -// Getxattr -// Fgetxattr -// Setxattr -// Fsetxattr -// Removexattr -// Fremovexattr -// Listxattr -// Flistxattr -// Fsctl -// Initgroups -// Posix_spawn -// Nfsclnt -// Fhopen -// Minherit -// Semsys -// Msgsys -// Shmsys -// Semctl -// Semget -// Semop -// Msgctl -// Msgget -// Msgsnd -// Msgrcv -// Shmat -// Shmctl -// Shmdt -// Shmget -// Shm_open -// Shm_unlink -// Sem_open -// Sem_close -// Sem_unlink -// Sem_wait -// Sem_trywait -// Sem_post -// Sem_getvalue -// Sem_init -// Sem_destroy -// Open_extended -// Umask_extended -// Stat_extended -// Lstat_extended -// Fstat_extended -// Chmod_extended -// Fchmod_extended -// Access_extended -// Settid -// Gettid -// Setsgroups -// Getsgroups -// Setwgroups -// Getwgroups -// Mkfifo_extended -// Mkdir_extended -// Identitysvc -// Shared_region_check_np -// Shared_region_map_np -// __pthread_mutex_destroy -// __pthread_mutex_init -// __pthread_mutex_lock -// __pthread_mutex_trylock -// __pthread_mutex_unlock -// __pthread_cond_init -// __pthread_cond_destroy -// __pthread_cond_broadcast -// __pthread_cond_signal -// Setsid_with_pid -// __pthread_cond_timedwait -// Aio_fsync -// Aio_return -// Aio_suspend -// Aio_cancel -// Aio_error -// Aio_read -// Aio_write -// Lio_listio -// __pthread_cond_wait -// Iopolicysys -// Mlockall -// Munlockall -// __pthread_kill -// __pthread_sigmask -// __sigwait -// __disable_threadsignal -// __pthread_markcancel -// __pthread_canceled -// __semwait_signal -// Proc_info -// Stat64_extended -// Lstat64_extended -// Fstat64_extended -// __pthread_chdir -// __pthread_fchdir -// Audit -// Auditon -// Getauid -// Setauid -// Getaudit -// Setaudit -// Getaudit_addr -// Setaudit_addr -// Auditctl -// Bsdthread_create -// Bsdthread_terminate -// Stack_snapshot -// Bsdthread_register -// Workq_open -// Workq_ops -// __mac_execve -// __mac_syscall -// __mac_get_file -// __mac_set_file -// __mac_get_link -// __mac_set_link -// __mac_get_proc -// __mac_set_proc -// __mac_get_fd -// __mac_set_fd -// __mac_get_pid -// __mac_get_lcid -// __mac_get_lctx -// __mac_set_lctx -// Setlcid -// Read_nocancel -// Write_nocancel -// Open_nocancel -// Close_nocancel -// Wait4_nocancel -// Recvmsg_nocancel -// Sendmsg_nocancel -// Recvfrom_nocancel -// Accept_nocancel -// Msync_nocancel -// Fcntl_nocancel -// Select_nocancel -// Fsync_nocancel -// Connect_nocancel -// Sigsuspend_nocancel -// Readv_nocancel -// Writev_nocancel -// Sendto_nocancel -// Pread_nocancel -// Pwrite_nocancel -// Waitid_nocancel -// Poll_nocancel -// Msgsnd_nocancel -// Msgrcv_nocancel -// Sem_wait_nocancel -// Aio_suspend_nocancel -// __sigwait_nocancel -// __semwait_signal_nocancel -// __mac_mount -// __mac_get_mount -// __mac_getfsstat diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_386.go 
b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_386.go deleted file mode 100644 index 6255d40ff86..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_386.go +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build 386,freebsd - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int32(nsec / 1e9) - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint32(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var writtenOut uint64 = 0 - _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr((*offset)>>32), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0) - - written = int(writtenOut) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_amd64.go deleted file mode 100644 index 8b395d596dc..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_amd64.go +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build amd64,freebsd - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = nsec % 1e9 / 1e3 - tv.Sec = int64(nsec / 1e9) - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint64(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var writtenOut uint64 = 0 - _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0, 0) - - written = int(writtenOut) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_arm.go deleted file mode 100644 index 4e72d46a816..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_freebsd_arm.go +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright 2012 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build arm,freebsd - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return ts.Sec*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return tv.Sec*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = nsec / 1e9 - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint32(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - var writtenOut uint64 = 0 - _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr((*offset)>>32), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0) - - written = int(writtenOut) - - if e1 != 0 { - err = e1 - } - return -} - -func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux.go deleted file mode 100644 index d3ee5d2c2f6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux.go +++ /dev/null @@ -1,1086 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Linux system calls. -// This file is compiled as ordinary Go code, -// but it is also input to mksyscall, -// which parses the //sys lines and generates system call stubs. -// Note that sometimes we use a lowercase //sys name and -// wrap it in our own nicer implementation. 
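The header comment spells out the mksyscall convention used throughout these files: a lowercase `//sys` stub is generated mechanically, and an exported wrapper gives it a friendlier shape (for example, the `Open` defined below is just `openat` against `AT_FDCWD` with `O_LARGEFILE` folded in). A small caller-side sketch of that equivalence, assuming the current `golang.org/x/sys/unix` module and a readable path such as `/etc/hostname`:

```go
//go:build linux

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Opening relative to AT_FDCWD is what the exported Open wrapper
	// does internally, so this call behaves like unix.Open for a
	// cwd-relative or absolute path.
	fd, err := unix.Openat(unix.AT_FDCWD, "/etc/hostname", unix.O_RDONLY, 0)
	if err != nil {
		panic(err)
	}
	defer unix.Close(fd)

	buf := make([]byte, 256)
	n, err := unix.Read(fd, buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", buf[:n])
}
```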
- -package unix - -import ( - "syscall" - "unsafe" -) - -/* - * Wrapped - */ - -func Access(path string, mode uint32) (err error) { - return Faccessat(AT_FDCWD, path, mode, 0) -} - -func Chmod(path string, mode uint32) (err error) { - return Fchmodat(AT_FDCWD, path, mode, 0) -} - -func Chown(path string, uid int, gid int) (err error) { - return Fchownat(AT_FDCWD, path, uid, gid, 0) -} - -func Creat(path string, mode uint32) (fd int, err error) { - return Open(path, O_CREAT|O_WRONLY|O_TRUNC, mode) -} - -//sys linkat(olddirfd int, oldpath string, newdirfd int, newpath string, flags int) (err error) - -func Link(oldpath string, newpath string) (err error) { - return linkat(AT_FDCWD, oldpath, AT_FDCWD, newpath, 0) -} - -func Mkdir(path string, mode uint32) (err error) { - return Mkdirat(AT_FDCWD, path, mode) -} - -func Mknod(path string, mode uint32, dev int) (err error) { - return Mknodat(AT_FDCWD, path, mode, dev) -} - -func Open(path string, mode int, perm uint32) (fd int, err error) { - return openat(AT_FDCWD, path, mode|O_LARGEFILE, perm) -} - -//sys openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) - -func Openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) { - return openat(dirfd, path, flags|O_LARGEFILE, mode) -} - -//sys readlinkat(dirfd int, path string, buf []byte) (n int, err error) - -func Readlink(path string, buf []byte) (n int, err error) { - return readlinkat(AT_FDCWD, path, buf) -} - -func Rename(oldpath string, newpath string) (err error) { - return Renameat(AT_FDCWD, oldpath, AT_FDCWD, newpath) -} - -func Rmdir(path string) error { - return unlinkat(AT_FDCWD, path, AT_REMOVEDIR) -} - -//sys symlinkat(oldpath string, newdirfd int, newpath string) (err error) - -func Symlink(oldpath string, newpath string) (err error) { - return symlinkat(oldpath, AT_FDCWD, newpath) -} - -func Unlink(path string) error { - return unlinkat(AT_FDCWD, path, 0) -} - -//sys unlinkat(dirfd int, path string, flags int) (err error) - -func Unlinkat(dirfd int, path string) error { - return unlinkat(dirfd, path, 0) -} - -//sys utimes(path string, times *[2]Timeval) (err error) - -func Utimes(path string, tv []Timeval) (err error) { - if tv == nil { - return utimes(path, nil) - } - if len(tv) != 2 { - return EINVAL - } - return utimes(path, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -//sys utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) - -func UtimesNano(path string, ts []Timespec) error { - if ts == nil { - err := utimensat(AT_FDCWD, path, nil, 0) - if err != ENOSYS { - return err - } - return utimes(path, nil) - } - if len(ts) != 2 { - return EINVAL - } - err := utimensat(AT_FDCWD, path, (*[2]Timespec)(unsafe.Pointer(&ts[0])), 0) - if err != ENOSYS { - return err - } - // If the utimensat syscall isn't available (utimensat was added to Linux - // in 2.6.22, Released, 8 July 2007) then fall back to utimes - var tv [2]Timeval - for i := 0; i < 2; i++ { - tv[i].Sec = ts[i].Sec - tv[i].Usec = ts[i].Nsec / 1000 - } - return utimes(path, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -func UtimesNanoAt(dirfd int, path string, ts []Timespec, flags int) error { - if ts == nil { - return utimensat(dirfd, path, nil, flags) - } - if len(ts) != 2 { - return EINVAL - } - return utimensat(dirfd, path, (*[2]Timespec)(unsafe.Pointer(&ts[0])), flags) -} - -//sys futimesat(dirfd int, path *byte, times *[2]Timeval) (err error) - -func Futimesat(dirfd int, path string, tv []Timeval) error { - pathp, err := BytePtrFromString(path) - if err != nil 
{ - return err - } - if tv == nil { - return futimesat(dirfd, pathp, nil) - } - if len(tv) != 2 { - return EINVAL - } - return futimesat(dirfd, pathp, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -func Futimes(fd int, tv []Timeval) (err error) { - // Believe it or not, this is the best we can do on Linux - // (and is what glibc does). - return Utimes("/proc/self/fd/"+itoa(fd), tv) -} - -const ImplementsGetwd = true - -//sys Getcwd(buf []byte) (n int, err error) - -func Getwd() (wd string, err error) { - var buf [PathMax]byte - n, err := Getcwd(buf[0:]) - if err != nil { - return "", err - } - // Getcwd returns the number of bytes written to buf, including the NUL. - if n < 1 || n > len(buf) || buf[n-1] != 0 { - return "", EINVAL - } - return string(buf[0 : n-1]), nil -} - -func Getgroups() (gids []int, err error) { - n, err := getgroups(0, nil) - if err != nil { - return nil, err - } - if n == 0 { - return nil, nil - } - - // Sanity check group count. Max is 1<<16 on Linux. - if n < 0 || n > 1<<20 { - return nil, EINVAL - } - - a := make([]_Gid_t, n) - n, err = getgroups(n, &a[0]) - if err != nil { - return nil, err - } - gids = make([]int, n) - for i, v := range a[0:n] { - gids[i] = int(v) - } - return -} - -func Setgroups(gids []int) (err error) { - if len(gids) == 0 { - return setgroups(0, nil) - } - - a := make([]_Gid_t, len(gids)) - for i, v := range gids { - a[i] = _Gid_t(v) - } - return setgroups(len(a), &a[0]) -} - -type WaitStatus uint32 - -// Wait status is 7 bits at bottom, either 0 (exited), -// 0x7F (stopped), or a signal number that caused an exit. -// The 0x80 bit is whether there was a core dump. -// An extra number (exit code, signal causing a stop) -// is in the high bits. At least that's the idea. -// There are various irregularities. For example, the -// "continued" status is 0xFFFF, distinguishing itself -// from stopped via the core dump bit. 
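The comment above describes the packed layout of a Linux wait status word. A self-contained sketch that decodes a few sample words according to that layout (mirroring the constants and methods defined just below; the sample values are illustrative):

```go
package main

import "fmt"

// waitStatus decodes the classic wait status layout: the low 7 bits are
// 0 (exited), 0x7F (stopped), or the terminating signal number; bit 0x80
// flags a core dump; the next byte carries the exit code or the signal
// that caused a stop.
type waitStatus uint32

const (
	mask    = 0x7F
	core    = 0x80
	exited  = 0x00
	stopped = 0x7F
	shift   = 8
)

func (w waitStatus) Exited() bool   { return w&mask == exited }
func (w waitStatus) Signaled() bool { return w&mask != stopped && w&mask != exited }
func (w waitStatus) Stopped() bool  { return w&0xFF == stopped }
func (w waitStatus) CoreDump() bool { return w.Signaled() && w&core != 0 }

func (w waitStatus) ExitStatus() int {
	if !w.Exited() {
		return -1
	}
	return int(w>>shift) & 0xFF
}

func (w waitStatus) Signal() int {
	if !w.Signaled() {
		return -1
	}
	return int(w & mask)
}

func main() {
	fmt.Println(waitStatus(0x0200).Exited(), waitStatus(0x0200).ExitStatus()) // true 2: process called exit(2)
	fmt.Println(waitStatus(0x0009).Signaled(), waitStatus(0x0009).Signal())   // true 9: killed by signal 9 (SIGKILL)
	fmt.Println(waitStatus(0x137F).Stopped())                                 // true: stopped by signal 0x13 (SIGSTOP on Linux)
}
```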
- -const ( - mask = 0x7F - core = 0x80 - exited = 0x00 - stopped = 0x7F - shift = 8 -) - -func (w WaitStatus) Exited() bool { return w&mask == exited } - -func (w WaitStatus) Signaled() bool { return w&mask != stopped && w&mask != exited } - -func (w WaitStatus) Stopped() bool { return w&0xFF == stopped } - -func (w WaitStatus) Continued() bool { return w == 0xFFFF } - -func (w WaitStatus) CoreDump() bool { return w.Signaled() && w&core != 0 } - -func (w WaitStatus) ExitStatus() int { - if !w.Exited() { - return -1 - } - return int(w>>shift) & 0xFF -} - -func (w WaitStatus) Signal() syscall.Signal { - if !w.Signaled() { - return -1 - } - return syscall.Signal(w & mask) -} - -func (w WaitStatus) StopSignal() syscall.Signal { - if !w.Stopped() { - return -1 - } - return syscall.Signal(w>>shift) & 0xFF -} - -func (w WaitStatus) TrapCause() int { - if w.StopSignal() != SIGTRAP { - return -1 - } - return int(w>>shift) >> 8 -} - -//sys wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) - -func Wait4(pid int, wstatus *WaitStatus, options int, rusage *Rusage) (wpid int, err error) { - var status _C_int - wpid, err = wait4(pid, &status, options, rusage) - if wstatus != nil { - *wstatus = WaitStatus(status) - } - return -} - -func Mkfifo(path string, mode uint32) (err error) { - return Mknod(path, mode|S_IFIFO, 0) -} - -func (sa *SockaddrInet4) sockaddr() (unsafe.Pointer, _Socklen, error) { - if sa.Port < 0 || sa.Port > 0xFFFF { - return nil, 0, EINVAL - } - sa.raw.Family = AF_INET - p := (*[2]byte)(unsafe.Pointer(&sa.raw.Port)) - p[0] = byte(sa.Port >> 8) - p[1] = byte(sa.Port) - for i := 0; i < len(sa.Addr); i++ { - sa.raw.Addr[i] = sa.Addr[i] - } - return unsafe.Pointer(&sa.raw), SizeofSockaddrInet4, nil -} - -func (sa *SockaddrInet6) sockaddr() (unsafe.Pointer, _Socklen, error) { - if sa.Port < 0 || sa.Port > 0xFFFF { - return nil, 0, EINVAL - } - sa.raw.Family = AF_INET6 - p := (*[2]byte)(unsafe.Pointer(&sa.raw.Port)) - p[0] = byte(sa.Port >> 8) - p[1] = byte(sa.Port) - sa.raw.Scope_id = sa.ZoneId - for i := 0; i < len(sa.Addr); i++ { - sa.raw.Addr[i] = sa.Addr[i] - } - return unsafe.Pointer(&sa.raw), SizeofSockaddrInet6, nil -} - -func (sa *SockaddrUnix) sockaddr() (unsafe.Pointer, _Socklen, error) { - name := sa.Name - n := len(name) - if n >= len(sa.raw.Path) { - return nil, 0, EINVAL - } - sa.raw.Family = AF_UNIX - for i := 0; i < n; i++ { - sa.raw.Path[i] = int8(name[i]) - } - // length is family (uint16), name, NUL. - sl := _Socklen(2) - if n > 0 { - sl += _Socklen(n) + 1 - } - if sa.raw.Path[0] == '@' { - sa.raw.Path[0] = 0 - // Don't count trailing NUL for abstract address. 
- sl-- - } - - return unsafe.Pointer(&sa.raw), sl, nil -} - -type SockaddrLinklayer struct { - Protocol uint16 - Ifindex int - Hatype uint16 - Pkttype uint8 - Halen uint8 - Addr [8]byte - raw RawSockaddrLinklayer -} - -func (sa *SockaddrLinklayer) sockaddr() (unsafe.Pointer, _Socklen, error) { - if sa.Ifindex < 0 || sa.Ifindex > 0x7fffffff { - return nil, 0, EINVAL - } - sa.raw.Family = AF_PACKET - sa.raw.Protocol = sa.Protocol - sa.raw.Ifindex = int32(sa.Ifindex) - sa.raw.Hatype = sa.Hatype - sa.raw.Pkttype = sa.Pkttype - sa.raw.Halen = sa.Halen - for i := 0; i < len(sa.Addr); i++ { - sa.raw.Addr[i] = sa.Addr[i] - } - return unsafe.Pointer(&sa.raw), SizeofSockaddrLinklayer, nil -} - -type SockaddrNetlink struct { - Family uint16 - Pad uint16 - Pid uint32 - Groups uint32 - raw RawSockaddrNetlink -} - -func (sa *SockaddrNetlink) sockaddr() (unsafe.Pointer, _Socklen, error) { - sa.raw.Family = AF_NETLINK - sa.raw.Pad = sa.Pad - sa.raw.Pid = sa.Pid - sa.raw.Groups = sa.Groups - return unsafe.Pointer(&sa.raw), SizeofSockaddrNetlink, nil -} - -func anyToSockaddr(rsa *RawSockaddrAny) (Sockaddr, error) { - switch rsa.Addr.Family { - case AF_NETLINK: - pp := (*RawSockaddrNetlink)(unsafe.Pointer(rsa)) - sa := new(SockaddrNetlink) - sa.Family = pp.Family - sa.Pad = pp.Pad - sa.Pid = pp.Pid - sa.Groups = pp.Groups - return sa, nil - - case AF_PACKET: - pp := (*RawSockaddrLinklayer)(unsafe.Pointer(rsa)) - sa := new(SockaddrLinklayer) - sa.Protocol = pp.Protocol - sa.Ifindex = int(pp.Ifindex) - sa.Hatype = pp.Hatype - sa.Pkttype = pp.Pkttype - sa.Halen = pp.Halen - for i := 0; i < len(sa.Addr); i++ { - sa.Addr[i] = pp.Addr[i] - } - return sa, nil - - case AF_UNIX: - pp := (*RawSockaddrUnix)(unsafe.Pointer(rsa)) - sa := new(SockaddrUnix) - if pp.Path[0] == 0 { - // "Abstract" Unix domain socket. - // Rewrite leading NUL as @ for textual display. - // (This is the standard convention.) - // Not friendly to overwrite in place, - // but the callers below don't care. - pp.Path[0] = '@' - } - - // Assume path ends at NUL. - // This is not technically the Linux semantics for - // abstract Unix domain sockets--they are supposed - // to be uninterpreted fixed-size binary blobs--but - // everyone uses this convention. 
- n := 0 - for n < len(pp.Path) && pp.Path[n] != 0 { - n++ - } - bytes := (*[10000]byte)(unsafe.Pointer(&pp.Path[0]))[0:n] - sa.Name = string(bytes) - return sa, nil - - case AF_INET: - pp := (*RawSockaddrInet4)(unsafe.Pointer(rsa)) - sa := new(SockaddrInet4) - p := (*[2]byte)(unsafe.Pointer(&pp.Port)) - sa.Port = int(p[0])<<8 + int(p[1]) - for i := 0; i < len(sa.Addr); i++ { - sa.Addr[i] = pp.Addr[i] - } - return sa, nil - - case AF_INET6: - pp := (*RawSockaddrInet6)(unsafe.Pointer(rsa)) - sa := new(SockaddrInet6) - p := (*[2]byte)(unsafe.Pointer(&pp.Port)) - sa.Port = int(p[0])<<8 + int(p[1]) - sa.ZoneId = pp.Scope_id - for i := 0; i < len(sa.Addr); i++ { - sa.Addr[i] = pp.Addr[i] - } - return sa, nil - } - return nil, EAFNOSUPPORT -} - -func Accept(fd int) (nfd int, sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - nfd, err = accept(fd, &rsa, &len) - if err != nil { - return - } - sa, err = anyToSockaddr(&rsa) - if err != nil { - Close(nfd) - nfd = 0 - } - return -} - -func Accept4(fd int, flags int) (nfd int, sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - nfd, err = accept4(fd, &rsa, &len, flags) - if err != nil { - return - } - if len > SizeofSockaddrAny { - panic("RawSockaddrAny too small") - } - sa, err = anyToSockaddr(&rsa) - if err != nil { - Close(nfd) - nfd = 0 - } - return -} - -func Getsockname(fd int) (sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - if err = getsockname(fd, &rsa, &len); err != nil { - return - } - return anyToSockaddr(&rsa) -} - -func GetsockoptInet4Addr(fd, level, opt int) (value [4]byte, err error) { - vallen := _Socklen(4) - err = getsockopt(fd, level, opt, unsafe.Pointer(&value[0]), &vallen) - return value, err -} - -func GetsockoptIPMreq(fd, level, opt int) (*IPMreq, error) { - var value IPMreq - vallen := _Socklen(SizeofIPMreq) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func GetsockoptIPMreqn(fd, level, opt int) (*IPMreqn, error) { - var value IPMreqn - vallen := _Socklen(SizeofIPMreqn) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func GetsockoptIPv6Mreq(fd, level, opt int) (*IPv6Mreq, error) { - var value IPv6Mreq - vallen := _Socklen(SizeofIPv6Mreq) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func GetsockoptIPv6MTUInfo(fd, level, opt int) (*IPv6MTUInfo, error) { - var value IPv6MTUInfo - vallen := _Socklen(SizeofIPv6MTUInfo) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func GetsockoptICMPv6Filter(fd, level, opt int) (*ICMPv6Filter, error) { - var value ICMPv6Filter - vallen := _Socklen(SizeofICMPv6Filter) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func GetsockoptUcred(fd, level, opt int) (*Ucred, error) { - var value Ucred - vallen := _Socklen(SizeofUcred) - err := getsockopt(fd, level, opt, unsafe.Pointer(&value), &vallen) - return &value, err -} - -func SetsockoptIPMreqn(fd, level, opt int, mreq *IPMreqn) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(mreq), unsafe.Sizeof(*mreq)) -} - -func Recvmsg(fd int, p, oob []byte, flags int) (n, oobn int, recvflags int, from Sockaddr, err error) { - var msg Msghdr - var rsa RawSockaddrAny - msg.Name = (*byte)(unsafe.Pointer(&rsa)) - msg.Namelen = uint32(SizeofSockaddrAny) - var iov Iovec - if len(p) > 0 { - 
iov.Base = (*byte)(unsafe.Pointer(&p[0])) - iov.SetLen(len(p)) - } - var dummy byte - if len(oob) > 0 { - // receive at least one normal byte - if len(p) == 0 { - iov.Base = &dummy - iov.SetLen(1) - } - msg.Control = (*byte)(unsafe.Pointer(&oob[0])) - msg.SetControllen(len(oob)) - } - msg.Iov = &iov - msg.Iovlen = 1 - if n, err = recvmsg(fd, &msg, flags); err != nil { - return - } - oobn = int(msg.Controllen) - recvflags = int(msg.Flags) - // source address is only specified if the socket is unconnected - if rsa.Addr.Family != AF_UNSPEC { - from, err = anyToSockaddr(&rsa) - } - return -} - -func Sendmsg(fd int, p, oob []byte, to Sockaddr, flags int) (err error) { - _, err = SendmsgN(fd, p, oob, to, flags) - return -} - -func SendmsgN(fd int, p, oob []byte, to Sockaddr, flags int) (n int, err error) { - var ptr unsafe.Pointer - var salen _Socklen - if to != nil { - var err error - ptr, salen, err = to.sockaddr() - if err != nil { - return 0, err - } - } - var msg Msghdr - msg.Name = (*byte)(unsafe.Pointer(ptr)) - msg.Namelen = uint32(salen) - var iov Iovec - if len(p) > 0 { - iov.Base = (*byte)(unsafe.Pointer(&p[0])) - iov.SetLen(len(p)) - } - var dummy byte - if len(oob) > 0 { - // send at least one normal byte - if len(p) == 0 { - iov.Base = &dummy - iov.SetLen(1) - } - msg.Control = (*byte)(unsafe.Pointer(&oob[0])) - msg.SetControllen(len(oob)) - } - msg.Iov = &iov - msg.Iovlen = 1 - if n, err = sendmsg(fd, &msg, flags); err != nil { - return 0, err - } - if len(oob) > 0 && len(p) == 0 { - n = 0 - } - return n, nil -} - -// BindToDevice binds the socket associated with fd to device. -func BindToDevice(fd int, device string) (err error) { - return SetsockoptString(fd, SOL_SOCKET, SO_BINDTODEVICE, device) -} - -//sys ptrace(request int, pid int, addr uintptr, data uintptr) (err error) - -func ptracePeek(req int, pid int, addr uintptr, out []byte) (count int, err error) { - // The peek requests are machine-size oriented, so we wrap it - // to retrieve arbitrary-length data. - - // The ptrace syscall differs from glibc's ptrace. - // Peeks returns the word in *data, not as the return value. - - var buf [sizeofPtr]byte - - // Leading edge. PEEKTEXT/PEEKDATA don't require aligned - // access (PEEKUSER warns that it might), but if we don't - // align our reads, we might straddle an unmapped page - // boundary and not get the bytes leading up to the page - // boundary. - n := 0 - if addr%sizeofPtr != 0 { - err = ptrace(req, pid, addr-addr%sizeofPtr, uintptr(unsafe.Pointer(&buf[0]))) - if err != nil { - return 0, err - } - n += copy(out, buf[addr%sizeofPtr:]) - out = out[n:] - } - - // Remainder. - for len(out) > 0 { - // We use an internal buffer to guarantee alignment. - // It's not documented if this is necessary, but we're paranoid. - err = ptrace(req, pid, addr+uintptr(n), uintptr(unsafe.Pointer(&buf[0]))) - if err != nil { - return n, err - } - copied := copy(out, buf[0:]) - n += copied - out = out[copied:] - } - - return n, nil -} - -func PtracePeekText(pid int, addr uintptr, out []byte) (count int, err error) { - return ptracePeek(PTRACE_PEEKTEXT, pid, addr, out) -} - -func PtracePeekData(pid int, addr uintptr, out []byte) (count int, err error) { - return ptracePeek(PTRACE_PEEKDATA, pid, addr, out) -} - -func ptracePoke(pokeReq int, peekReq int, pid int, addr uintptr, data []byte) (count int, err error) { - // As for ptracePeek, we need to align our accesses to deal - // with the possibility of straddling an invalid page. - - // Leading edge. 
- n := 0 - if addr%sizeofPtr != 0 { - var buf [sizeofPtr]byte - err = ptrace(peekReq, pid, addr-addr%sizeofPtr, uintptr(unsafe.Pointer(&buf[0]))) - if err != nil { - return 0, err - } - n += copy(buf[addr%sizeofPtr:], data) - word := *((*uintptr)(unsafe.Pointer(&buf[0]))) - err = ptrace(pokeReq, pid, addr-addr%sizeofPtr, word) - if err != nil { - return 0, err - } - data = data[n:] - } - - // Interior. - for len(data) > sizeofPtr { - word := *((*uintptr)(unsafe.Pointer(&data[0]))) - err = ptrace(pokeReq, pid, addr+uintptr(n), word) - if err != nil { - return n, err - } - n += sizeofPtr - data = data[sizeofPtr:] - } - - // Trailing edge. - if len(data) > 0 { - var buf [sizeofPtr]byte - err = ptrace(peekReq, pid, addr+uintptr(n), uintptr(unsafe.Pointer(&buf[0]))) - if err != nil { - return n, err - } - copy(buf[0:], data) - word := *((*uintptr)(unsafe.Pointer(&buf[0]))) - err = ptrace(pokeReq, pid, addr+uintptr(n), word) - if err != nil { - return n, err - } - n += len(data) - } - - return n, nil -} - -func PtracePokeText(pid int, addr uintptr, data []byte) (count int, err error) { - return ptracePoke(PTRACE_POKETEXT, PTRACE_PEEKTEXT, pid, addr, data) -} - -func PtracePokeData(pid int, addr uintptr, data []byte) (count int, err error) { - return ptracePoke(PTRACE_POKEDATA, PTRACE_PEEKDATA, pid, addr, data) -} - -func PtraceGetRegs(pid int, regsout *PtraceRegs) (err error) { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) -} - -func PtraceSetRegs(pid int, regs *PtraceRegs) (err error) { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) -} - -func PtraceSetOptions(pid int, options int) (err error) { - return ptrace(PTRACE_SETOPTIONS, pid, 0, uintptr(options)) -} - -func PtraceGetEventMsg(pid int) (msg uint, err error) { - var data _C_long - err = ptrace(PTRACE_GETEVENTMSG, pid, 0, uintptr(unsafe.Pointer(&data))) - msg = uint(data) - return -} - -func PtraceCont(pid int, signal int) (err error) { - return ptrace(PTRACE_CONT, pid, 0, uintptr(signal)) -} - -func PtraceSyscall(pid int, signal int) (err error) { - return ptrace(PTRACE_SYSCALL, pid, 0, uintptr(signal)) -} - -func PtraceSingleStep(pid int) (err error) { return ptrace(PTRACE_SINGLESTEP, pid, 0, 0) } - -func PtraceAttach(pid int) (err error) { return ptrace(PTRACE_ATTACH, pid, 0, 0) } - -func PtraceDetach(pid int) (err error) { return ptrace(PTRACE_DETACH, pid, 0, 0) } - -//sys reboot(magic1 uint, magic2 uint, cmd int, arg string) (err error) - -func Reboot(cmd int) (err error) { - return reboot(LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2, cmd, "") -} - -func clen(n []byte) int { - for i := 0; i < len(n); i++ { - if n[i] == 0 { - return i - } - } - return len(n) -} - -func ReadDirent(fd int, buf []byte) (n int, err error) { - return Getdents(fd, buf) -} - -func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) { - origlen := len(buf) - count = 0 - for max != 0 && len(buf) > 0 { - dirent := (*Dirent)(unsafe.Pointer(&buf[0])) - buf = buf[dirent.Reclen:] - if dirent.Ino == 0 { // File absent in directory. - continue - } - bytes := (*[10000]byte)(unsafe.Pointer(&dirent.Name[0])) - var name = string(bytes[0:clen(bytes[:])]) - if name == "." || name == ".." 
{ // Useless names - continue - } - max-- - count++ - names = append(names, name) - } - return origlen - len(buf), count, names -} - -//sys mount(source string, target string, fstype string, flags uintptr, data *byte) (err error) - -func Mount(source string, target string, fstype string, flags uintptr, data string) (err error) { - // Certain file systems get rather angry and EINVAL if you give - // them an empty string of data, rather than NULL. - if data == "" { - return mount(source, target, fstype, flags, nil) - } - datap, err := BytePtrFromString(data) - if err != nil { - return err - } - return mount(source, target, fstype, flags, datap) -} - -// Sendto -// Recvfrom -// Socketpair - -/* - * Direct access - */ -//sys Acct(path string) (err error) -//sys Adjtimex(buf *Timex) (state int, err error) -//sys Chdir(path string) (err error) -//sys Chroot(path string) (err error) -//sys ClockGettime(clockid int32, time *Timespec) (err error) -//sys Close(fd int) (err error) -//sys Dup(oldfd int) (fd int, err error) -//sys Dup3(oldfd int, newfd int, flags int) (err error) -//sysnb EpollCreate(size int) (fd int, err error) -//sysnb EpollCreate1(flag int) (fd int, err error) -//sysnb EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) -//sys EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) -//sys Exit(code int) = SYS_EXIT_GROUP -//sys Faccessat(dirfd int, path string, mode uint32, flags int) (err error) -//sys Fallocate(fd int, mode uint32, off int64, len int64) (err error) -//sys Fchdir(fd int) (err error) -//sys Fchmod(fd int, mode uint32) (err error) -//sys Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) -//sys Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) -//sys fcntl(fd int, cmd int, arg int) (val int, err error) -//sys Fdatasync(fd int) (err error) -//sys Flock(fd int, how int) (err error) -//sys Fsync(fd int) (err error) -//sys Getdents(fd int, buf []byte) (n int, err error) = SYS_GETDENTS64 -//sysnb Getpgid(pid int) (pgid int, err error) - -func Getpgrp() (pid int) { - pid, _ = Getpgid(0) - return -} - -//sysnb Getpid() (pid int) -//sysnb Getppid() (ppid int) -//sys Getpriority(which int, who int) (prio int, err error) -//sysnb Getrusage(who int, rusage *Rusage) (err error) -//sysnb Gettid() (tid int) -//sys Getxattr(path string, attr string, dest []byte) (sz int, err error) -//sys InotifyAddWatch(fd int, pathname string, mask uint32) (watchdesc int, err error) -//sysnb InotifyInit1(flags int) (fd int, err error) -//sysnb InotifyRmWatch(fd int, watchdesc uint32) (success int, err error) -//sysnb Kill(pid int, sig syscall.Signal) (err error) -//sys Klogctl(typ int, buf []byte) (n int, err error) = SYS_SYSLOG -//sys Listxattr(path string, dest []byte) (sz int, err error) -//sys Mkdirat(dirfd int, path string, mode uint32) (err error) -//sys Mknodat(dirfd int, path string, mode uint32, dev int) (err error) -//sys Nanosleep(time *Timespec, leftover *Timespec) (err error) -//sys Pause() (err error) -//sys PivotRoot(newroot string, putold string) (err error) = SYS_PIVOT_ROOT -//sysnb prlimit(pid int, resource int, old *Rlimit, newlimit *Rlimit) (err error) = SYS_PRLIMIT64 -//sys Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) (err error) -//sys read(fd int, p []byte) (n int, err error) -//sys Removexattr(path string, attr string) (err error) -//sys Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) -//sys Setdomainname(p []byte) (err error) -//sys 
Sethostname(p []byte) (err error) -//sysnb Setpgid(pid int, pgid int) (err error) -//sysnb Setsid() (pid int, err error) -//sysnb Settimeofday(tv *Timeval) (err error) - -// issue 1435. -// On linux Setuid and Setgid only affects the current thread, not the process. -// This does not match what most callers expect so we must return an error -// here rather than letting the caller think that the call succeeded. - -func Setuid(uid int) (err error) { - return EOPNOTSUPP -} - -func Setgid(uid int) (err error) { - return EOPNOTSUPP -} - -//sys Setpriority(which int, who int, prio int) (err error) -//sys Setxattr(path string, attr string, data []byte, flags int) (err error) -//sys Sync() -//sysnb Sysinfo(info *Sysinfo_t) (err error) -//sys Tee(rfd int, wfd int, len int, flags int) (n int64, err error) -//sysnb Tgkill(tgid int, tid int, sig syscall.Signal) (err error) -//sysnb Times(tms *Tms) (ticks uintptr, err error) -//sysnb Umask(mask int) (oldmask int) -//sysnb Uname(buf *Utsname) (err error) -//sys Unmount(target string, flags int) (err error) = SYS_UMOUNT2 -//sys Unshare(flags int) (err error) -//sys Ustat(dev int, ubuf *Ustat_t) (err error) -//sys Utime(path string, buf *Utimbuf) (err error) -//sys write(fd int, p []byte) (n int, err error) -//sys exitThread(code int) (err error) = SYS_EXIT -//sys readlen(fd int, p *byte, np int) (n int, err error) = SYS_READ -//sys writelen(fd int, p *byte, np int) (n int, err error) = SYS_WRITE - -// mmap varies by architecture; see syscall_linux_*.go. -//sys munmap(addr uintptr, length uintptr) (err error) - -var mapper = &mmapper{ - active: make(map[*byte][]byte), - mmap: mmap, - munmap: munmap, -} - -func Mmap(fd int, offset int64, length int, prot int, flags int) (data []byte, err error) { - return mapper.Mmap(fd, offset, length, prot, flags) -} - -func Munmap(b []byte) (err error) { - return mapper.Munmap(b) -} - -//sys Madvise(b []byte, advice int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Mlock(b []byte) (err error) -//sys Munlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Munlockall() (err error) - -/* - * Unimplemented - */ -// AddKey -// AfsSyscall -// Alarm -// ArchPrctl -// Brk -// Capget -// Capset -// ClockGetres -// ClockNanosleep -// ClockSettime -// Clone -// CreateModule -// DeleteModule -// EpollCtlOld -// EpollPwait -// EpollWaitOld -// Eventfd -// Execve -// Fgetxattr -// Flistxattr -// Fork -// Fremovexattr -// Fsetxattr -// Futex -// GetKernelSyms -// GetMempolicy -// GetRobustList -// GetThreadArea -// Getitimer -// Getpmsg -// IoCancel -// IoDestroy -// IoGetevents -// IoSetup -// IoSubmit -// Ioctl -// IoprioGet -// IoprioSet -// KexecLoad -// Keyctl -// Lgetxattr -// Llistxattr -// LookupDcookie -// Lremovexattr -// Lsetxattr -// Mbind -// MigratePages -// Mincore -// ModifyLdt -// Mount -// MovePages -// Mprotect -// MqGetsetattr -// MqNotify -// MqOpen -// MqTimedreceive -// MqTimedsend -// MqUnlink -// Mremap -// Msgctl -// Msgget -// Msgrcv -// Msgsnd -// Msync -// Newfstatat -// Nfsservctl -// Personality -// Poll -// Ppoll -// Pselect6 -// Ptrace -// Putpmsg -// QueryModule -// Quotactl -// Readahead -// Readv -// RemapFilePages -// RequestKey -// RestartSyscall -// RtSigaction -// RtSigpending -// RtSigprocmask -// RtSigqueueinfo -// RtSigreturn -// RtSigsuspend -// RtSigtimedwait -// SchedGetPriorityMax -// SchedGetPriorityMin -// SchedGetaffinity -// SchedGetparam -// SchedGetscheduler -// SchedRrGetInterval -// SchedSetaffinity -// SchedSetparam -// SchedYield -// 
Security -// Semctl -// Semget -// Semop -// Semtimedop -// SetMempolicy -// SetRobustList -// SetThreadArea -// SetTidAddress -// Shmat -// Shmctl -// Shmdt -// Shmget -// Sigaltstack -// Signalfd -// Swapoff -// Swapon -// Sysfs -// TimerCreate -// TimerDelete -// TimerGetoverrun -// TimerGettime -// TimerSettime -// Timerfd -// Tkill (obsolete) -// Tuxcall -// Umount2 -// Uselib -// Utimensat -// Vfork -// Vhangup -// Vmsplice -// Vserver -// Waitid -// _Sysctl diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_386.go deleted file mode 100644 index 7171219af7f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_386.go +++ /dev/null @@ -1,388 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// TODO(rsc): Rewrite all nn(SP) references into name+(nn-8)(FP) -// so that go vet can check that they are correct. - -// +build 386,linux - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = int32(nsec / 1e9) - tv.Usec = int32(nsec % 1e9 / 1e3) - return -} - -//sysnb pipe(p *[2]_C_int) (err error) - -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe(&pp) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -//sysnb pipe2(p *[2]_C_int, flags int) (err error) - -func Pipe2(p []int, flags int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe2(&pp, flags) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -// 64-bit file system and 32-bit uid calls -// (386 default is 32-bit file system and 16-bit uid). 
-//sys Dup2(oldfd int, newfd int) (err error) -//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64_64 -//sys Fchown(fd int, uid int, gid int) (err error) = SYS_FCHOWN32 -//sys Fstat(fd int, stat *Stat_t) (err error) = SYS_FSTAT64 -//sys Ftruncate(fd int, length int64) (err error) = SYS_FTRUNCATE64 -//sysnb Getegid() (egid int) = SYS_GETEGID32 -//sysnb Geteuid() (euid int) = SYS_GETEUID32 -//sysnb Getgid() (gid int) = SYS_GETGID32 -//sysnb Getuid() (uid int) = SYS_GETUID32 -//sysnb InotifyInit() (fd int, err error) -//sys Ioperm(from int, num int, on int) (err error) -//sys Iopl(level int) (err error) -//sys Lchown(path string, uid int, gid int) (err error) = SYS_LCHOWN32 -//sys Lstat(path string, stat *Stat_t) (err error) = SYS_LSTAT64 -//sys Pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64 -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64 -//sys sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) = SYS_SENDFILE64 -//sys Setfsgid(gid int) (err error) = SYS_SETFSGID32 -//sys Setfsuid(uid int) (err error) = SYS_SETFSUID32 -//sysnb Setregid(rgid int, egid int) (err error) = SYS_SETREGID32 -//sysnb Setresgid(rgid int, egid int, sgid int) (err error) = SYS_SETRESGID32 -//sysnb Setresuid(ruid int, euid int, suid int) (err error) = SYS_SETRESUID32 -//sysnb Setreuid(ruid int, euid int) (err error) = SYS_SETREUID32 -//sys Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int, err error) -//sys Stat(path string, stat *Stat_t) (err error) = SYS_STAT64 -//sys SyncFileRange(fd int, off int64, n int64, flags int) (err error) -//sys Truncate(path string, length int64) (err error) = SYS_TRUNCATE64 -//sysnb getgroups(n int, list *_Gid_t) (nn int, err error) = SYS_GETGROUPS32 -//sysnb setgroups(n int, list *_Gid_t) (err error) = SYS_SETGROUPS32 -//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) = SYS__NEWSELECT - -//sys mmap2(addr uintptr, length uintptr, prot int, flags int, fd int, pageOffset uintptr) (xaddr uintptr, err error) - -func mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) { - page := uintptr(offset / 4096) - if offset != int64(page)*4096 { - return 0, EINVAL - } - return mmap2(addr, length, prot, flags, fd, page) -} - -type rlimit32 struct { - Cur uint32 - Max uint32 -} - -//sysnb getrlimit(resource int, rlim *rlimit32) (err error) = SYS_GETRLIMIT - -const rlimInf32 = ^uint32(0) -const rlimInf64 = ^uint64(0) - -func Getrlimit(resource int, rlim *Rlimit) (err error) { - err = prlimit(0, resource, nil, rlim) - if err != ENOSYS { - return err - } - - rl := rlimit32{} - err = getrlimit(resource, &rl) - if err != nil { - return - } - - if rl.Cur == rlimInf32 { - rlim.Cur = rlimInf64 - } else { - rlim.Cur = uint64(rl.Cur) - } - - if rl.Max == rlimInf32 { - rlim.Max = rlimInf64 - } else { - rlim.Max = uint64(rl.Max) - } - return -} - -//sysnb setrlimit(resource int, rlim *rlimit32) (err error) = SYS_SETRLIMIT - -func Setrlimit(resource int, rlim *Rlimit) (err error) { - err = prlimit(0, resource, rlim, nil) - if err != ENOSYS { - return err - } - - rl := rlimit32{} - if rlim.Cur == rlimInf64 { - rl.Cur = rlimInf32 - } else if rlim.Cur < uint64(rlimInf32) { - rl.Cur = uint32(rlim.Cur) - } else { - return EINVAL - } - if rlim.Max == rlimInf64 { - rl.Max = rlimInf32 - } else if rlim.Max < uint64(rlimInf32) { - rl.Max = uint32(rlim.Max) - } else { - return EINVAL - } - - 
return setrlimit(resource, &rl) -} - -// Underlying system call writes to newoffset via pointer. -// Implemented in assembly to avoid allocation. -func seek(fd int, offset int64, whence int) (newoffset int64, err syscall.Errno) - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - newoffset, errno := seek(fd, offset, whence) - if errno != 0 { - return 0, errno - } - return newoffset, nil -} - -// Vsyscalls on amd64. -//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Time(t *Time_t) (tt Time_t, err error) - -// On x86 Linux, all the socket calls go through an extra indirection, -// I think because the 5-register system call interface can't handle -// the 6-argument calls like sendto and recvfrom. Instead the -// arguments to the underlying system call are the number below -// and a pointer to an array of uintptr. We hide the pointer in the -// socketcall assembly to avoid allocation on every system call. - -const ( - // see linux/net.h - _SOCKET = 1 - _BIND = 2 - _CONNECT = 3 - _LISTEN = 4 - _ACCEPT = 5 - _GETSOCKNAME = 6 - _GETPEERNAME = 7 - _SOCKETPAIR = 8 - _SEND = 9 - _RECV = 10 - _SENDTO = 11 - _RECVFROM = 12 - _SHUTDOWN = 13 - _SETSOCKOPT = 14 - _GETSOCKOPT = 15 - _SENDMSG = 16 - _RECVMSG = 17 - _ACCEPT4 = 18 - _RECVMMSG = 19 - _SENDMMSG = 20 -) - -func socketcall(call int, a0, a1, a2, a3, a4, a5 uintptr) (n int, err syscall.Errno) -func rawsocketcall(call int, a0, a1, a2, a3, a4, a5 uintptr) (n int, err syscall.Errno) - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - fd, e := socketcall(_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) { - fd, e := socketcall(_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0) - if e != 0 { - err = e - } - return -} - -func getsockname(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, e := rawsocketcall(_GETSOCKNAME, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func getpeername(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, e := rawsocketcall(_GETPEERNAME, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func socketpair(domain int, typ int, flags int, fd *[2]int32) (err error) { - _, e := rawsocketcall(_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(flags), uintptr(unsafe.Pointer(fd)), 0, 0) - if e != 0 { - err = e - } - return -} - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, e := socketcall(_BIND, uintptr(s), uintptr(addr), uintptr(addrlen), 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, e := socketcall(_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen), 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func socket(domain int, typ int, proto int) (fd int, err error) { - fd, e := rawsocketcall(_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto), 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, e := socketcall(_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e != 0 { - err = 
e - } - return -} - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, e := socketcall(_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), vallen, 0) - if e != 0 { - err = e - } - return -} - -func recvfrom(s int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var base uintptr - if len(p) > 0 { - base = uintptr(unsafe.Pointer(&p[0])) - } - n, e := socketcall(_RECVFROM, uintptr(s), base, uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - if e != 0 { - err = e - } - return -} - -func sendto(s int, p []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var base uintptr - if len(p) > 0 { - base = uintptr(unsafe.Pointer(&p[0])) - } - _, e := socketcall(_SENDTO, uintptr(s), base, uintptr(len(p)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e != 0 { - err = e - } - return -} - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - n, e := socketcall(_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags), 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - n, e := socketcall(_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags), 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func Listen(s int, n int) (err error) { - _, e := socketcall(_LISTEN, uintptr(s), uintptr(n), 0, 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func Shutdown(s, how int) (err error) { - _, e := socketcall(_SHUTDOWN, uintptr(s), uintptr(how), 0, 0, 0, 0) - if e != 0 { - err = e - } - return -} - -func Fstatfs(fd int, buf *Statfs_t) (err error) { - _, _, e := Syscall(SYS_FSTATFS64, uintptr(fd), unsafe.Sizeof(*buf), uintptr(unsafe.Pointer(buf))) - if e != 0 { - err = e - } - return -} - -func Statfs(path string, buf *Statfs_t) (err error) { - pathp, err := BytePtrFromString(path) - if err != nil { - return err - } - _, _, e := Syscall(SYS_STATFS64, uintptr(unsafe.Pointer(pathp)), unsafe.Sizeof(*buf), uintptr(unsafe.Pointer(buf))) - if e != 0 { - err = e - } - return -} - -func (r *PtraceRegs) PC() uint64 { return uint64(uint32(r.Eip)) } - -func (r *PtraceRegs) SetPC(pc uint64) { r.Eip = int32(pc) } - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_amd64.go deleted file mode 100644 index ae70c2afca0..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_amd64.go +++ /dev/null @@ -1,146 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build amd64,linux - -package unix - -import "syscall" - -//sys Dup2(oldfd int, newfd int) (err error) -//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64 -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Fstat(fd int, stat *Stat_t) (err error) -//sys Fstatfs(fd int, buf *Statfs_t) (err error) -//sys Ftruncate(fd int, length int64) (err error) -//sysnb Getegid() (egid int) -//sysnb Geteuid() (euid int) -//sysnb Getgid() (gid int) -//sysnb Getrlimit(resource int, rlim *Rlimit) (err error) -//sysnb Getuid() (uid int) -//sysnb InotifyInit() (fd int, err error) -//sys Ioperm(from int, num int, on int) (err error) -//sys Iopl(level int) (err error) -//sys Lchown(path string, uid int, gid int) (err error) -//sys Listen(s int, n int) (err error) -//sys Lstat(path string, stat *Stat_t) (err error) -//sys Pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64 -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64 -//sys Seek(fd int, offset int64, whence int) (off int64, err error) = SYS_LSEEK -//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) -//sys sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) -//sys Setfsgid(gid int) (err error) -//sys Setfsuid(uid int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setresgid(rgid int, egid int, sgid int) (err error) -//sysnb Setresuid(ruid int, euid int, suid int) (err error) -//sysnb Setrlimit(resource int, rlim *Rlimit) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sys Shutdown(fd int, how int) (err error) -//sys Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int64, err error) -//sys Stat(path string, stat *Stat_t) (err error) -//sys Statfs(path string, buf *Statfs_t) (err error) -//sys SyncFileRange(fd int, off int64, n int64, flags int) (err error) -//sys Truncate(path string, length int64) (err error) -//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) -//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) -//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sysnb getgroups(n int, list *_Gid_t) (nn int, err error) -//sysnb setgroups(n int, list *_Gid_t) (err error) -//sys getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) -//sys setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) -//sysnb socket(domain int, typ int, proto int) (fd int, err error) -//sysnb socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) -//sysnb getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sysnb getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sys recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) -//sys sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) -//sys recvmsg(s int, msg *Msghdr, flags int) (n int, err error) -//sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) -//sys mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) - -//go:noescape -func gettimeofday(tv *Timeval) (err syscall.Errno) - -func Gettimeofday(tv *Timeval) (err error) { - errno := gettimeofday(tv) - if errno != 0 { - return errno - } - 
return nil -} - -func Getpagesize() int { return 4096 } - -func Time(t *Time_t) (tt Time_t, err error) { - var tv Timeval - errno := gettimeofday(&tv) - if errno != 0 { - return 0, errno - } - if t != nil { - *t = Time_t(tv.Sec) - } - return Time_t(tv.Sec), nil -} - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = nsec % 1e9 / 1e3 - return -} - -//sysnb pipe(p *[2]_C_int) (err error) - -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe(&pp) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -//sysnb pipe2(p *[2]_C_int, flags int) (err error) - -func Pipe2(p []int, flags int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe2(&pp, flags) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -func (r *PtraceRegs) PC() uint64 { return r.Rip } - -func (r *PtraceRegs) SetPC(pc uint64) { r.Rip = pc } - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint64(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint64(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_arm.go deleted file mode 100644 index abc41c3ea5d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_arm.go +++ /dev/null @@ -1,233 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build arm,linux - -package unix - -import ( - "syscall" - "unsafe" -) - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = int32(nsec / 1e9) - tv.Usec = int32(nsec % 1e9 / 1e3) - return -} - -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe2(&pp, 0) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -//sysnb pipe2(p *[2]_C_int, flags int) (err error) - -func Pipe2(p []int, flags int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe2(&pp, flags) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -// Underlying system call writes to newoffset via pointer. -// Implemented in assembly to avoid allocation. 
-func seek(fd int, offset int64, whence int) (newoffset int64, err syscall.Errno) - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - newoffset, errno := seek(fd, offset, whence) - if errno != 0 { - return 0, errno - } - return newoffset, nil -} - -//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) -//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) -//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sysnb getgroups(n int, list *_Gid_t) (nn int, err error) = SYS_GETGROUPS32 -//sysnb setgroups(n int, list *_Gid_t) (err error) = SYS_SETGROUPS32 -//sys getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) -//sys setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) -//sysnb socket(domain int, typ int, proto int) (fd int, err error) -//sysnb getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sysnb getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sys recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) -//sys sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) -//sysnb socketpair(domain int, typ int, flags int, fd *[2]int32) (err error) -//sys recvmsg(s int, msg *Msghdr, flags int) (n int, err error) -//sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) - -// 64-bit file system and 32-bit uid calls -// (16-bit uid calls are not always supported in newer kernels) -//sys Dup2(oldfd int, newfd int) (err error) -//sys Fchown(fd int, uid int, gid int) (err error) = SYS_FCHOWN32 -//sys Fstat(fd int, stat *Stat_t) (err error) = SYS_FSTAT64 -//sysnb Getegid() (egid int) = SYS_GETEGID32 -//sysnb Geteuid() (euid int) = SYS_GETEUID32 -//sysnb Getgid() (gid int) = SYS_GETGID32 -//sysnb Getuid() (uid int) = SYS_GETUID32 -//sysnb InotifyInit() (fd int, err error) -//sys Lchown(path string, uid int, gid int) (err error) = SYS_LCHOWN32 -//sys Listen(s int, n int) (err error) -//sys Lstat(path string, stat *Stat_t) (err error) = SYS_LSTAT64 -//sys sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) = SYS_SENDFILE64 -//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) = SYS__NEWSELECT -//sys Setfsgid(gid int) (err error) = SYS_SETFSGID32 -//sys Setfsuid(uid int) (err error) = SYS_SETFSUID32 -//sysnb Setregid(rgid int, egid int) (err error) = SYS_SETREGID32 -//sysnb Setresgid(rgid int, egid int, sgid int) (err error) = SYS_SETRESGID32 -//sysnb Setresuid(ruid int, euid int, suid int) (err error) = SYS_SETRESUID32 -//sysnb Setreuid(ruid int, euid int) (err error) = SYS_SETREUID32 -//sys Shutdown(fd int, how int) (err error) -//sys Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int, err error) -//sys Stat(path string, stat *Stat_t) (err error) = SYS_STAT64 - -// Vsyscalls on amd64. 
-//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Time(t *Time_t) (tt Time_t, err error) - -//sys Pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64 -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64 -//sys Truncate(path string, length int64) (err error) = SYS_TRUNCATE64 -//sys Ftruncate(fd int, length int64) (err error) = SYS_FTRUNCATE64 - -func Fadvise(fd int, offset int64, length int64, advice int) (err error) { - _, _, e1 := Syscall6(SYS_ARM_FADVISE64_64, uintptr(fd), uintptr(advice), uintptr(offset), uintptr(offset>>32), uintptr(length), uintptr(length>>32)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -//sys mmap2(addr uintptr, length uintptr, prot int, flags int, fd int, pageOffset uintptr) (xaddr uintptr, err error) - -func Fstatfs(fd int, buf *Statfs_t) (err error) { - _, _, e := Syscall(SYS_FSTATFS64, uintptr(fd), unsafe.Sizeof(*buf), uintptr(unsafe.Pointer(buf))) - if e != 0 { - err = e - } - return -} - -func Statfs(path string, buf *Statfs_t) (err error) { - pathp, err := BytePtrFromString(path) - if err != nil { - return err - } - _, _, e := Syscall(SYS_STATFS64, uintptr(unsafe.Pointer(pathp)), unsafe.Sizeof(*buf), uintptr(unsafe.Pointer(buf))) - if e != 0 { - err = e - } - return -} - -func mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) { - page := uintptr(offset / 4096) - if offset != int64(page)*4096 { - return 0, EINVAL - } - return mmap2(addr, length, prot, flags, fd, page) -} - -type rlimit32 struct { - Cur uint32 - Max uint32 -} - -//sysnb getrlimit(resource int, rlim *rlimit32) (err error) = SYS_GETRLIMIT - -const rlimInf32 = ^uint32(0) -const rlimInf64 = ^uint64(0) - -func Getrlimit(resource int, rlim *Rlimit) (err error) { - err = prlimit(0, resource, nil, rlim) - if err != ENOSYS { - return err - } - - rl := rlimit32{} - err = getrlimit(resource, &rl) - if err != nil { - return - } - - if rl.Cur == rlimInf32 { - rlim.Cur = rlimInf64 - } else { - rlim.Cur = uint64(rl.Cur) - } - - if rl.Max == rlimInf32 { - rlim.Max = rlimInf64 - } else { - rlim.Max = uint64(rl.Max) - } - return -} - -//sysnb setrlimit(resource int, rlim *rlimit32) (err error) = SYS_SETRLIMIT - -func Setrlimit(resource int, rlim *Rlimit) (err error) { - err = prlimit(0, resource, rlim, nil) - if err != ENOSYS { - return err - } - - rl := rlimit32{} - if rlim.Cur == rlimInf64 { - rl.Cur = rlimInf32 - } else if rlim.Cur < uint64(rlimInf32) { - rl.Cur = uint32(rlim.Cur) - } else { - return EINVAL - } - if rlim.Max == rlimInf64 { - rl.Max = rlimInf32 - } else if rlim.Max < uint64(rlimInf32) { - rl.Max = uint32(rlim.Max) - } else { - return EINVAL - } - - return setrlimit(resource, &rl) -} - -func (r *PtraceRegs) PC() uint64 { return uint64(r.Uregs[15]) } - -func (r *PtraceRegs) SetPC(pc uint64) { r.Uregs[15] = uint32(pc) } - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_arm64.go deleted file mode 100644 index f3d72dfd301..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_arm64.go +++ /dev/null @@ -1,150 +0,0 @@ -// Copyright 
2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build arm64,linux - -package unix - -const _SYS_dup = SYS_DUP3 - -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Fstat(fd int, stat *Stat_t) (err error) -//sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) -//sys Fstatfs(fd int, buf *Statfs_t) (err error) -//sys Ftruncate(fd int, length int64) (err error) -//sysnb Getegid() (egid int) -//sysnb Geteuid() (euid int) -//sysnb Getgid() (gid int) -//sysnb Getrlimit(resource int, rlim *Rlimit) (err error) -//sysnb Getuid() (uid int) -//sys Listen(s int, n int) (err error) -//sys Pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64 -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64 -//sys Seek(fd int, offset int64, whence int) (off int64, err error) = SYS_LSEEK -//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) = SYS_PSELECT6 -//sys sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) -//sys Setfsgid(gid int) (err error) -//sys Setfsuid(uid int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setresgid(rgid int, egid int, sgid int) (err error) -//sysnb Setresuid(ruid int, euid int, suid int) (err error) -//sysnb Setrlimit(resource int, rlim *Rlimit) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sys Shutdown(fd int, how int) (err error) -//sys Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int64, err error) - -func Stat(path string, stat *Stat_t) (err error) { - return Fstatat(AT_FDCWD, path, stat, 0) -} - -func Lchown(path string, uid int, gid int) (err error) { - return Fchownat(AT_FDCWD, path, uid, gid, AT_SYMLINK_NOFOLLOW) -} - -func Lstat(path string, stat *Stat_t) (err error) { - return Fstatat(AT_FDCWD, path, stat, AT_SYMLINK_NOFOLLOW) -} - -//sys Statfs(path string, buf *Statfs_t) (err error) -//sys SyncFileRange(fd int, off int64, n int64, flags int) (err error) -//sys Truncate(path string, length int64) (err error) -//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) -//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) -//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sysnb getgroups(n int, list *_Gid_t) (nn int, err error) -//sysnb setgroups(n int, list *_Gid_t) (err error) -//sys getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) -//sys setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) -//sysnb socket(domain int, typ int, proto int) (fd int, err error) -//sysnb socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) -//sysnb getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sysnb getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sys recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) -//sys sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) -//sys recvmsg(s int, msg *Msghdr, flags int) (n int, err error) -//sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) -//sys mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) - -func 
Getpagesize() int { return 65536 } - -//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Time(t *Time_t) (tt Time_t, err error) - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = nsec % 1e9 / 1e3 - return -} - -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe2(&pp, 0) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -//sysnb pipe2(p *[2]_C_int, flags int) (err error) - -func Pipe2(p []int, flags int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe2(&pp, flags) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -func (r *PtraceRegs) PC() uint64 { return r.Pc } - -func (r *PtraceRegs) SetPC(pc uint64) { r.Pc = pc } - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint64(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint64(length) -} - -func InotifyInit() (fd int, err error) { - return InotifyInit1(0) -} - -// TODO(dfc): constants that should be in zsysnum_linux_arm64.go, remove -// these when the deprecated syscalls that the syscall package relies on -// are removed. -const ( - SYS_GETPGRP = 1060 - SYS_UTIMES = 1037 - SYS_FUTIMESAT = 1066 - SYS_PAUSE = 1061 - SYS_USTAT = 1070 - SYS_UTIME = 1063 - SYS_LCHOWN = 1032 - SYS_TIME = 1062 - SYS_EPOLL_CREATE = 1042 - SYS_EPOLL_WAIT = 1069 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_ppc64x.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_ppc64x.go deleted file mode 100644 index 67eed6334c4..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_linux_ppc64x.go +++ /dev/null @@ -1,96 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build linux -// +build ppc64 ppc64le - -package unix - -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Fstat(fd int, stat *Stat_t) (err error) -//sys Fstatfs(fd int, buf *Statfs_t) (err error) -//sys Ftruncate(fd int, length int64) (err error) -//sysnb Getegid() (egid int) -//sysnb Geteuid() (euid int) -//sysnb Getgid() (gid int) -//sysnb Getrlimit(resource int, rlim *Rlimit) (err error) = SYS_UGETRLIMIT -//sysnb Getuid() (uid int) -//sys Ioperm(from int, num int, on int) (err error) -//sys Iopl(level int) (err error) -//sys Lchown(path string, uid int, gid int) (err error) -//sys Listen(s int, n int) (err error) -//sys Lstat(path string, stat *Stat_t) (err error) -//sys Pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64 -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64 -//sys Seek(fd int, offset int64, whence int) (off int64, err error) = SYS_LSEEK -//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) -//sys sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) -//sys Setfsgid(gid int) (err error) -//sys Setfsuid(uid int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setresgid(rgid int, egid int, sgid int) (err error) -//sysnb Setresuid(ruid int, euid int, suid int) (err error) -//sysnb Setrlimit(resource int, rlim *Rlimit) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sys Shutdown(fd int, how int) (err error) -//sys Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int64, err error) -//sys Stat(path string, stat *Stat_t) (err error) -//sys Statfs(path string, buf *Statfs_t) (err error) -//sys SyncFileRange(fd int, off int64, n int64, flags int) (err error) = SYS_SYNC_FILE_RANGE2 -//sys Truncate(path string, length int64) (err error) -//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) -//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) -//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) -//sysnb getgroups(n int, list *_Gid_t) (nn int, err error) -//sysnb setgroups(n int, list *_Gid_t) (err error) -//sys getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) -//sys setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) -//sysnb socket(domain int, typ int, proto int) (fd int, err error) -//sysnb socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) -//sysnb getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sysnb getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) -//sys recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) -//sys sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) -//sys recvmsg(s int, msg *Msghdr, flags int) (n int, err error) -//sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) -//sys mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) - -func Getpagesize() int { return 65536 } - -//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Time(t *Time_t) (tt Time_t, err error) - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - 
return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = nsec % 1e9 / 1e3 - return -} - -func (r *PtraceRegs) PC() uint64 { return r.Nip } - -func (r *PtraceRegs) SetPC(pc uint64) { r.Nip = pc } - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint64(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint64(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd.go deleted file mode 100644 index c4e945cd696..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd.go +++ /dev/null @@ -1,492 +0,0 @@ -// Copyright 2009,2010 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// NetBSD system calls. -// This file is compiled as ordinary Go code, -// but it is also input to mksyscall, -// which parses the //sys lines and generates system call stubs. -// Note that sometimes we use a lowercase //sys name and wrap -// it in our own nicer implementation, either here or in -// syscall_bsd.go or syscall_unix.go. - -package unix - -import ( - "syscall" - "unsafe" -) - -type SockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 - raw RawSockaddrDatalink -} - -func Syscall9(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) - -func sysctlNodes(mib []_C_int) (nodes []Sysctlnode, err error) { - var olen uintptr - - // Get a list of all sysctl nodes below the given MIB by performing - // a sysctl for the given MIB with CTL_QUERY appended. - mib = append(mib, CTL_QUERY) - qnode := Sysctlnode{Flags: SYSCTL_VERS_1} - qp := (*byte)(unsafe.Pointer(&qnode)) - sz := unsafe.Sizeof(qnode) - if err = sysctl(mib, nil, &olen, qp, sz); err != nil { - return nil, err - } - - // Now that we know the size, get the actual nodes. - nodes = make([]Sysctlnode, olen/sz) - np := (*byte)(unsafe.Pointer(&nodes[0])) - if err = sysctl(mib, np, &olen, qp, sz); err != nil { - return nil, err - } - - return nodes, nil -} - -func nametomib(name string) (mib []_C_int, err error) { - - // Split name into components. - var parts []string - last := 0 - for i := 0; i < len(name); i++ { - if name[i] == '.' { - parts = append(parts, name[last:i]) - last = i + 1 - } - } - parts = append(parts, name[last:]) - - // Discover the nodes and construct the MIB OID. - for partno, part := range parts { - nodes, err := sysctlNodes(mib) - if err != nil { - return nil, err - } - for _, node := range nodes { - n := make([]byte, 0) - for i := range node.Name { - if node.Name[i] != 0 { - n = append(n, byte(node.Name[i])) - } - } - if string(n) == part { - mib = append(mib, _C_int(node.Num)) - break - } - } - if len(mib) != partno+1 { - return nil, EINVAL - } - } - - return mib, nil -} - -// ParseDirent parses up to max directory entries in buf, -// appending the names to names. It returns the number -// bytes consumed from buf, the number of entries added -// to names, and the new names slice. 
-func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) { - origlen := len(buf) - for max != 0 && len(buf) > 0 { - dirent := (*Dirent)(unsafe.Pointer(&buf[0])) - if dirent.Reclen == 0 { - buf = nil - break - } - buf = buf[dirent.Reclen:] - if dirent.Fileno == 0 { // File absent in directory. - continue - } - bytes := (*[10000]byte)(unsafe.Pointer(&dirent.Name[0])) - var name = string(bytes[0:dirent.Namlen]) - if name == "." || name == ".." { // Useless names - continue - } - max-- - count++ - names = append(names, name) - } - return origlen - len(buf), count, names -} - -//sysnb pipe() (fd1 int, fd2 int, err error) -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - p[0], p[1], err = pipe() - return -} - -//sys getdents(fd int, buf []byte) (n int, err error) -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - return getdents(fd, buf) -} - -// TODO -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - return -1, ENOSYS -} - -/* - * Exposed directly - */ -//sys Access(path string, mode uint32) (err error) -//sys Adjtime(delta *Timeval, olddelta *Timeval) (err error) -//sys Chdir(path string) (err error) -//sys Chflags(path string, flags int) (err error) -//sys Chmod(path string, mode uint32) (err error) -//sys Chown(path string, uid int, gid int) (err error) -//sys Chroot(path string) (err error) -//sys Close(fd int) (err error) -//sys Dup(fd int) (nfd int, err error) -//sys Dup2(from int, to int) (err error) -//sys Exit(code int) -//sys Fchdir(fd int) (err error) -//sys Fchflags(fd int, flags int) (err error) -//sys Fchmod(fd int, mode uint32) (err error) -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Flock(fd int, how int) (err error) -//sys Fpathconf(fd int, name int) (val int, err error) -//sys Fstat(fd int, stat *Stat_t) (err error) -//sys Fsync(fd int) (err error) -//sys Ftruncate(fd int, length int64) (err error) -//sysnb Getegid() (egid int) -//sysnb Geteuid() (uid int) -//sysnb Getgid() (gid int) -//sysnb Getpgid(pid int) (pgid int, err error) -//sysnb Getpgrp() (pgrp int) -//sysnb Getpid() (pid int) -//sysnb Getppid() (ppid int) -//sys Getpriority(which int, who int) (prio int, err error) -//sysnb Getrlimit(which int, lim *Rlimit) (err error) -//sysnb Getrusage(who int, rusage *Rusage) (err error) -//sysnb Getsid(pid int) (sid int, err error) -//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Getuid() (uid int) -//sys Issetugid() (tainted bool) -//sys Kill(pid int, signum syscall.Signal) (err error) -//sys Kqueue() (fd int, err error) -//sys Lchown(path string, uid int, gid int) (err error) -//sys Link(path string, link string) (err error) -//sys Listen(s int, backlog int) (err error) -//sys Lstat(path string, stat *Stat_t) (err error) -//sys Mkdir(path string, mode uint32) (err error) -//sys Mkfifo(path string, mode uint32) (err error) -//sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) -//sys Nanosleep(time *Timespec, leftover *Timespec) (err error) -//sys Open(path string, mode int, perm uint32) (fd int, err error) -//sys Pathconf(path string, name int) (val int, err error) -//sys Pread(fd int, p []byte, offset int64) (n int, err error) -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) -//sys read(fd int, p []byte) (n int, 
err error) -//sys Readlink(path string, buf []byte) (n int, err error) -//sys Rename(from string, to string) (err error) -//sys Revoke(path string) (err error) -//sys Rmdir(path string) (err error) -//sys Seek(fd int, offset int64, whence int) (newoffset int64, err error) = SYS_LSEEK -//sys Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) -//sysnb Setegid(egid int) (err error) -//sysnb Seteuid(euid int) (err error) -//sysnb Setgid(gid int) (err error) -//sysnb Setpgid(pid int, pgid int) (err error) -//sys Setpriority(which int, who int, prio int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sysnb Setrlimit(which int, lim *Rlimit) (err error) -//sysnb Setsid() (pid int, err error) -//sysnb Settimeofday(tp *Timeval) (err error) -//sysnb Setuid(uid int) (err error) -//sys Stat(path string, stat *Stat_t) (err error) -//sys Symlink(path string, link string) (err error) -//sys Sync() (err error) -//sys Truncate(path string, length int64) (err error) -//sys Umask(newmask int) (oldmask int) -//sys Unlink(path string) (err error) -//sys Unmount(path string, flags int) (err error) -//sys write(fd int, p []byte) (n int, err error) -//sys mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) -//sys munmap(addr uintptr, length uintptr) (err error) -//sys readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ -//sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE - -/* - * Unimplemented - */ -// ____semctl13 -// __clone -// __fhopen40 -// __fhstat40 -// __fhstatvfs140 -// __fstat30 -// __getcwd -// __getfh30 -// __getlogin -// __lstat30 -// __mount50 -// __msgctl13 -// __msync13 -// __ntp_gettime30 -// __posix_chown -// __posix_fadvise50 -// __posix_fchown -// __posix_lchown -// __posix_rename -// __setlogin -// __shmctl13 -// __sigaction_sigtramp -// __sigaltstack14 -// __sigpending14 -// __sigprocmask14 -// __sigsuspend14 -// __sigtimedwait -// __stat30 -// __syscall -// __vfork14 -// _ksem_close -// _ksem_destroy -// _ksem_getvalue -// _ksem_init -// _ksem_open -// _ksem_post -// _ksem_trywait -// _ksem_unlink -// _ksem_wait -// _lwp_continue -// _lwp_create -// _lwp_ctl -// _lwp_detach -// _lwp_exit -// _lwp_getname -// _lwp_getprivate -// _lwp_kill -// _lwp_park -// _lwp_self -// _lwp_setname -// _lwp_setprivate -// _lwp_suspend -// _lwp_unpark -// _lwp_unpark_all -// _lwp_wait -// _lwp_wakeup -// _pset_bind -// _sched_getaffinity -// _sched_getparam -// _sched_setaffinity -// _sched_setparam -// acct -// aio_cancel -// aio_error -// aio_fsync -// aio_read -// aio_return -// aio_suspend -// aio_write -// break -// clock_getres -// clock_gettime -// clock_settime -// compat_09_ogetdomainname -// compat_09_osetdomainname -// compat_09_ouname -// compat_10_omsgsys -// compat_10_osemsys -// compat_10_oshmsys -// compat_12_fstat12 -// compat_12_getdirentries -// compat_12_lstat12 -// compat_12_msync -// compat_12_oreboot -// compat_12_oswapon -// compat_12_stat12 -// compat_13_sigaction13 -// compat_13_sigaltstack13 -// compat_13_sigpending13 -// compat_13_sigprocmask13 -// compat_13_sigreturn13 -// compat_13_sigsuspend13 -// compat_14___semctl -// compat_14_msgctl -// compat_14_shmctl -// compat_16___sigaction14 -// compat_16___sigreturn14 -// compat_20_fhstatfs -// compat_20_fstatfs -// compat_20_getfsstat -// compat_20_statfs -// compat_30___fhstat30 -// compat_30___fstat13 -// compat_30___lstat13 -// compat_30___stat13 -// 
compat_30_fhopen -// compat_30_fhstat -// compat_30_fhstatvfs1 -// compat_30_getdents -// compat_30_getfh -// compat_30_ntp_gettime -// compat_30_socket -// compat_40_mount -// compat_43_fstat43 -// compat_43_lstat43 -// compat_43_oaccept -// compat_43_ocreat -// compat_43_oftruncate -// compat_43_ogetdirentries -// compat_43_ogetdtablesize -// compat_43_ogethostid -// compat_43_ogethostname -// compat_43_ogetkerninfo -// compat_43_ogetpagesize -// compat_43_ogetpeername -// compat_43_ogetrlimit -// compat_43_ogetsockname -// compat_43_okillpg -// compat_43_olseek -// compat_43_ommap -// compat_43_oquota -// compat_43_orecv -// compat_43_orecvfrom -// compat_43_orecvmsg -// compat_43_osend -// compat_43_osendmsg -// compat_43_osethostid -// compat_43_osethostname -// compat_43_osetrlimit -// compat_43_osigblock -// compat_43_osigsetmask -// compat_43_osigstack -// compat_43_osigvec -// compat_43_otruncate -// compat_43_owait -// compat_43_stat43 -// execve -// extattr_delete_fd -// extattr_delete_file -// extattr_delete_link -// extattr_get_fd -// extattr_get_file -// extattr_get_link -// extattr_list_fd -// extattr_list_file -// extattr_list_link -// extattr_set_fd -// extattr_set_file -// extattr_set_link -// extattrctl -// fchroot -// fdatasync -// fgetxattr -// fktrace -// flistxattr -// fork -// fremovexattr -// fsetxattr -// fstatvfs1 -// fsync_range -// getcontext -// getitimer -// getvfsstat -// getxattr -// ioctl -// ktrace -// lchflags -// lchmod -// lfs_bmapv -// lfs_markv -// lfs_segclean -// lfs_segwait -// lgetxattr -// lio_listio -// listxattr -// llistxattr -// lremovexattr -// lseek -// lsetxattr -// lutimes -// madvise -// mincore -// minherit -// modctl -// mq_close -// mq_getattr -// mq_notify -// mq_open -// mq_receive -// mq_send -// mq_setattr -// mq_timedreceive -// mq_timedsend -// mq_unlink -// mremap -// msgget -// msgrcv -// msgsnd -// nfssvc -// ntp_adjtime -// pmc_control -// pmc_get_info -// poll -// pollts -// preadv -// profil -// pselect -// pset_assign -// pset_create -// pset_destroy -// ptrace -// pwritev -// quotactl -// rasctl -// readv -// reboot -// removexattr -// sa_enable -// sa_preempt -// sa_register -// sa_setconcurrency -// sa_stacks -// sa_yield -// sbrk -// sched_yield -// semconfig -// semget -// semop -// setcontext -// setitimer -// setxattr -// shmat -// shmdt -// shmget -// sstk -// statvfs1 -// swapctl -// sysarch -// syscall -// timer_create -// timer_delete -// timer_getoverrun -// timer_gettime -// timer_settime -// undelete -// utrace -// uuidgen -// vadvise -// vfork -// writev diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_386.go deleted file mode 100644 index 1b0e1af1257..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_386.go +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build 386,netbsd - -package unix - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint32(fd) - k.Filter = uint32(mode) - k.Flags = uint32(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_amd64.go deleted file mode 100644 index 1b6dcbe35d6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_amd64.go +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build amd64,netbsd - -package unix - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int64(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint64(fd) - k.Filter = uint32(mode) - k.Flags = uint32(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_arm.go deleted file mode 100644 index 87d1d6fed1e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_netbsd_arm.go +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build arm,netbsd - -package unix - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint32(fd) - k.Filter = uint32(mode) - k.Flags = uint32(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_no_getwd.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_no_getwd.go deleted file mode 100644 index 530792ea93b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_no_getwd.go +++ /dev/null @@ -1,11 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build dragonfly freebsd netbsd openbsd - -package unix - -const ImplementsGetwd = false - -func Getwd() (string, error) { return "", ENOTSUP } diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd.go deleted file mode 100644 index 246131d2afc..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd.go +++ /dev/null @@ -1,303 +0,0 @@ -// Copyright 2009,2010 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// OpenBSD system calls. -// This file is compiled as ordinary Go code, -// but it is also input to mksyscall, -// which parses the //sys lines and generates system call stubs. -// Note that sometimes we use a lowercase //sys name and wrap -// it in our own nicer implementation, either here or in -// syscall_bsd.go or syscall_unix.go. - -package unix - -import ( - "syscall" - "unsafe" -) - -type SockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [24]int8 - raw RawSockaddrDatalink -} - -func Syscall9(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) - -func nametomib(name string) (mib []_C_int, err error) { - - // Perform lookup via a binary search - left := 0 - right := len(sysctlMib) - 1 - for { - idx := left + (right-left)/2 - switch { - case name == sysctlMib[idx].ctlname: - return sysctlMib[idx].ctloid, nil - case name > sysctlMib[idx].ctlname: - left = idx + 1 - default: - right = idx - 1 - } - if left > right { - break - } - } - return nil, EINVAL -} - -// ParseDirent parses up to max directory entries in buf, -// appending the names to names. It returns the number -// bytes consumed from buf, the number of entries added -// to names, and the new names slice. 
-func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) { - origlen := len(buf) - for max != 0 && len(buf) > 0 { - dirent := (*Dirent)(unsafe.Pointer(&buf[0])) - if dirent.Reclen == 0 { - buf = nil - break - } - buf = buf[dirent.Reclen:] - if dirent.Fileno == 0 { // File absent in directory. - continue - } - bytes := (*[10000]byte)(unsafe.Pointer(&dirent.Name[0])) - var name = string(bytes[0:dirent.Namlen]) - if name == "." || name == ".." { // Useless names - continue - } - max-- - count++ - names = append(names, name) - } - return origlen - len(buf), count, names -} - -//sysnb pipe(p *[2]_C_int) (err error) -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - var pp [2]_C_int - err = pipe(&pp) - p[0] = int(pp[0]) - p[1] = int(pp[1]) - return -} - -//sys getdents(fd int, buf []byte) (n int, err error) -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - return getdents(fd, buf) -} - -// TODO -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - return -1, ENOSYS -} - -func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { - var _p0 unsafe.Pointer - var bufsize uintptr - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - bufsize = unsafe.Sizeof(Statfs_t{}) * uintptr(len(buf)) - } - r0, _, e1 := Syscall(SYS_GETFSSTAT, uintptr(_p0), bufsize, uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -/* - * Exposed directly - */ -//sys Access(path string, mode uint32) (err error) -//sys Adjtime(delta *Timeval, olddelta *Timeval) (err error) -//sys Chdir(path string) (err error) -//sys Chflags(path string, flags int) (err error) -//sys Chmod(path string, mode uint32) (err error) -//sys Chown(path string, uid int, gid int) (err error) -//sys Chroot(path string) (err error) -//sys Close(fd int) (err error) -//sys Dup(fd int) (nfd int, err error) -//sys Dup2(from int, to int) (err error) -//sys Exit(code int) -//sys Fchdir(fd int) (err error) -//sys Fchflags(fd int, flags int) (err error) -//sys Fchmod(fd int, mode uint32) (err error) -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Flock(fd int, how int) (err error) -//sys Fpathconf(fd int, name int) (val int, err error) -//sys Fstat(fd int, stat *Stat_t) (err error) -//sys Fstatfs(fd int, stat *Statfs_t) (err error) -//sys Fsync(fd int) (err error) -//sys Ftruncate(fd int, length int64) (err error) -//sysnb Getegid() (egid int) -//sysnb Geteuid() (uid int) -//sysnb Getgid() (gid int) -//sysnb Getpgid(pid int) (pgid int, err error) -//sysnb Getpgrp() (pgrp int) -//sysnb Getpid() (pid int) -//sysnb Getppid() (ppid int) -//sys Getpriority(which int, who int) (prio int, err error) -//sysnb Getrlimit(which int, lim *Rlimit) (err error) -//sysnb Getrusage(who int, rusage *Rusage) (err error) -//sysnb Getsid(pid int) (sid int, err error) -//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Getuid() (uid int) -//sys Issetugid() (tainted bool) -//sys Kill(pid int, signum syscall.Signal) (err error) -//sys Kqueue() (fd int, err error) -//sys Lchown(path string, uid int, gid int) (err error) -//sys Link(path string, link string) (err error) -//sys Listen(s int, backlog int) (err error) -//sys Lstat(path string, stat *Stat_t) (err error) -//sys Mkdir(path string, mode uint32) (err error) -//sys Mkfifo(path string, mode uint32) (err error) -//sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b 
[]byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) -//sys Nanosleep(time *Timespec, leftover *Timespec) (err error) -//sys Open(path string, mode int, perm uint32) (fd int, err error) -//sys Pathconf(path string, name int) (val int, err error) -//sys Pread(fd int, p []byte, offset int64) (n int, err error) -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) -//sys read(fd int, p []byte) (n int, err error) -//sys Readlink(path string, buf []byte) (n int, err error) -//sys Rename(from string, to string) (err error) -//sys Revoke(path string) (err error) -//sys Rmdir(path string) (err error) -//sys Seek(fd int, offset int64, whence int) (newoffset int64, err error) = SYS_LSEEK -//sys Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) -//sysnb Setegid(egid int) (err error) -//sysnb Seteuid(euid int) (err error) -//sysnb Setgid(gid int) (err error) -//sys Setlogin(name string) (err error) -//sysnb Setpgid(pid int, pgid int) (err error) -//sys Setpriority(which int, who int, prio int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sysnb Setresgid(rgid int, egid int, sgid int) (err error) -//sysnb Setresuid(ruid int, euid int, suid int) (err error) -//sysnb Setrlimit(which int, lim *Rlimit) (err error) -//sysnb Setsid() (pid int, err error) -//sysnb Settimeofday(tp *Timeval) (err error) -//sysnb Setuid(uid int) (err error) -//sys Stat(path string, stat *Stat_t) (err error) -//sys Statfs(path string, stat *Statfs_t) (err error) -//sys Symlink(path string, link string) (err error) -//sys Sync() (err error) -//sys Truncate(path string, length int64) (err error) -//sys Umask(newmask int) (oldmask int) -//sys Unlink(path string) (err error) -//sys Unmount(path string, flags int) (err error) -//sys write(fd int, p []byte) (n int, err error) -//sys mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) -//sys munmap(addr uintptr, length uintptr) (err error) -//sys readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ -//sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE - -/* - * Unimplemented - */ -// __getcwd -// __semctl -// __syscall -// __sysctl -// adjfreq -// break -// clock_getres -// clock_gettime -// clock_settime -// closefrom -// execve -// faccessat -// fchmodat -// fchownat -// fcntl -// fhopen -// fhstat -// fhstatfs -// fork -// fstatat -// futimens -// getfh -// getgid -// getitimer -// getlogin -// getresgid -// getresuid -// getrtable -// getthrid -// ioctl -// ktrace -// lfs_bmapv -// lfs_markv -// lfs_segclean -// lfs_segwait -// linkat -// mincore -// minherit -// mkdirat -// mkfifoat -// mknodat -// mount -// mquery -// msgctl -// msgget -// msgrcv -// msgsnd -// nfssvc -// nnpfspioctl -// openat -// poll -// preadv -// profil -// pwritev -// quotactl -// readlinkat -// readv -// reboot -// renameat -// rfork -// sched_yield -// semget -// semop -// setgroups -// setitimer -// setrtable -// setsockopt -// shmat -// shmctl -// shmdt -// shmget -// sigaction -// sigaltstack -// sigpending -// sigprocmask -// sigreturn -// sigsuspend -// symlinkat -// sysarch -// syscall -// threxit -// thrsigdivert -// thrsleep -// thrwakeup -// unlinkat -// utimensat -// vfork -// writev diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd_386.go 
b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd_386.go deleted file mode 100644 index 9529b20e82e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd_386.go +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build 386,openbsd - -package unix - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint32(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint32(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd_amd64.go deleted file mode 100644 index fc6402946e3..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_openbsd_amd64.go +++ /dev/null @@ -1,44 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build amd64,openbsd - -package unix - -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = nsec % 1e9 / 1e3 - tv.Sec = nsec / 1e9 - return -} - -func SetKevent(k *Kevent_t, fd, mode, flags int) { - k.Ident = uint64(fd) - k.Filter = int16(mode) - k.Flags = uint16(flags) -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (msghdr *Msghdr) SetControllen(length int) { - msghdr.Controllen = uint32(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_solaris.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_solaris.go deleted file mode 100644 index 00837326481..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_solaris.go +++ /dev/null @@ -1,713 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Solaris system calls. 
-// This file is compiled as ordinary Go code, -// but it is also input to mksyscall, -// which parses the //sys lines and generates system call stubs. -// Note that sometimes we use a lowercase //sys name and wrap -// it in our own nicer implementation, either here or in -// syscall_solaris.go or syscall_unix.go. - -package unix - -import ( - "sync/atomic" - "syscall" - "unsafe" -) - -// Implemented in runtime/syscall_solaris.go. -type syscallFunc uintptr - -func rawSysvicall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) -func sysvicall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) - -type SockaddrDatalink struct { - Family uint16 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [244]int8 - raw RawSockaddrDatalink -} - -func clen(n []byte) int { - for i := 0; i < len(n); i++ { - if n[i] == 0 { - return i - } - } - return len(n) -} - -// ParseDirent parses up to max directory entries in buf, -// appending the names to names. It returns the number -// bytes consumed from buf, the number of entries added -// to names, and the new names slice. -func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) { - origlen := len(buf) - for max != 0 && len(buf) > 0 { - dirent := (*Dirent)(unsafe.Pointer(&buf[0])) - if dirent.Reclen == 0 { - buf = nil - break - } - buf = buf[dirent.Reclen:] - if dirent.Ino == 0 { // File absent in directory. - continue - } - bytes := (*[10000]byte)(unsafe.Pointer(&dirent.Name[0])) - var name = string(bytes[0:clen(bytes[:])]) - if name == "." || name == ".." { // Useless names - continue - } - max-- - count++ - names = append(names, name) - } - return origlen - len(buf), count, names -} - -func pipe() (r uintptr, w uintptr, err uintptr) - -func Pipe(p []int) (err error) { - if len(p) != 2 { - return EINVAL - } - r0, w0, e1 := pipe() - if e1 != 0 { - err = syscall.Errno(e1) - } - p[0], p[1] = int(r0), int(w0) - return -} - -func (sa *SockaddrInet4) sockaddr() (unsafe.Pointer, _Socklen, error) { - if sa.Port < 0 || sa.Port > 0xFFFF { - return nil, 0, EINVAL - } - sa.raw.Family = AF_INET - p := (*[2]byte)(unsafe.Pointer(&sa.raw.Port)) - p[0] = byte(sa.Port >> 8) - p[1] = byte(sa.Port) - for i := 0; i < len(sa.Addr); i++ { - sa.raw.Addr[i] = sa.Addr[i] - } - return unsafe.Pointer(&sa.raw), SizeofSockaddrInet4, nil -} - -func (sa *SockaddrInet6) sockaddr() (unsafe.Pointer, _Socklen, error) { - if sa.Port < 0 || sa.Port > 0xFFFF { - return nil, 0, EINVAL - } - sa.raw.Family = AF_INET6 - p := (*[2]byte)(unsafe.Pointer(&sa.raw.Port)) - p[0] = byte(sa.Port >> 8) - p[1] = byte(sa.Port) - sa.raw.Scope_id = sa.ZoneId - for i := 0; i < len(sa.Addr); i++ { - sa.raw.Addr[i] = sa.Addr[i] - } - return unsafe.Pointer(&sa.raw), SizeofSockaddrInet6, nil -} - -func (sa *SockaddrUnix) sockaddr() (unsafe.Pointer, _Socklen, error) { - name := sa.Name - n := len(name) - if n >= len(sa.raw.Path) { - return nil, 0, EINVAL - } - sa.raw.Family = AF_UNIX - for i := 0; i < n; i++ { - sa.raw.Path[i] = int8(name[i]) - } - // length is family (uint16), name, NUL. - sl := _Socklen(2) - if n > 0 { - sl += _Socklen(n) + 1 - } - if sa.raw.Path[0] == '@' { - sa.raw.Path[0] = 0 - // Don't count trailing NUL for abstract address. 
- sl-- - } - - return unsafe.Pointer(&sa.raw), sl, nil -} - -//sys getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) = libsocket.getsockname - -func Getsockname(fd int) (sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - if err = getsockname(fd, &rsa, &len); err != nil { - return - } - return anyToSockaddr(&rsa) -} - -const ImplementsGetwd = true - -//sys Getcwd(buf []byte) (n int, err error) - -func Getwd() (wd string, err error) { - var buf [PathMax]byte - // Getcwd will return an error if it failed for any reason. - _, err = Getcwd(buf[0:]) - if err != nil { - return "", err - } - n := clen(buf[:]) - if n < 1 { - return "", EINVAL - } - return string(buf[:n]), nil -} - -/* - * Wrapped - */ - -//sysnb getgroups(ngid int, gid *_Gid_t) (n int, err error) -//sysnb setgroups(ngid int, gid *_Gid_t) (err error) - -func Getgroups() (gids []int, err error) { - n, err := getgroups(0, nil) - // Check for error and sanity check group count. Newer versions of - // Solaris allow up to 1024 (NGROUPS_MAX). - if n < 0 || n > 1024 { - if err != nil { - return nil, err - } - return nil, EINVAL - } else if n == 0 { - return nil, nil - } - - a := make([]_Gid_t, n) - n, err = getgroups(n, &a[0]) - if n == -1 { - return nil, err - } - gids = make([]int, n) - for i, v := range a[0:n] { - gids[i] = int(v) - } - return -} - -func Setgroups(gids []int) (err error) { - if len(gids) == 0 { - return setgroups(0, nil) - } - - a := make([]_Gid_t, len(gids)) - for i, v := range gids { - a[i] = _Gid_t(v) - } - return setgroups(len(a), &a[0]) -} - -func ReadDirent(fd int, buf []byte) (n int, err error) { - // Final argument is (basep *uintptr) and the syscall doesn't take nil. - // TODO(rsc): Can we use a single global basep for all calls? - return Getdents(fd, buf, new(uintptr)) -} - -// Wait status is 7 bits at bottom, either 0 (exited), -// 0x7F (stopped), or a signal number that caused an exit. -// The 0x80 bit is whether there was a core dump. -// An extra number (exit code, signal causing a stop) -// is in the high bits. 
- -type WaitStatus uint32 - -const ( - mask = 0x7F - core = 0x80 - shift = 8 - - exited = 0 - stopped = 0x7F -) - -func (w WaitStatus) Exited() bool { return w&mask == exited } - -func (w WaitStatus) ExitStatus() int { - if w&mask != exited { - return -1 - } - return int(w >> shift) -} - -func (w WaitStatus) Signaled() bool { return w&mask != stopped && w&mask != 0 } - -func (w WaitStatus) Signal() syscall.Signal { - sig := syscall.Signal(w & mask) - if sig == stopped || sig == 0 { - return -1 - } - return sig -} - -func (w WaitStatus) CoreDump() bool { return w.Signaled() && w&core != 0 } - -func (w WaitStatus) Stopped() bool { return w&mask == stopped && syscall.Signal(w>>shift) != SIGSTOP } - -func (w WaitStatus) Continued() bool { return w&mask == stopped && syscall.Signal(w>>shift) == SIGSTOP } - -func (w WaitStatus) StopSignal() syscall.Signal { - if !w.Stopped() { - return -1 - } - return syscall.Signal(w>>shift) & 0xFF -} - -func (w WaitStatus) TrapCause() int { return -1 } - -func wait4(pid uintptr, wstatus *WaitStatus, options uintptr, rusage *Rusage) (wpid uintptr, err uintptr) - -func Wait4(pid int, wstatus *WaitStatus, options int, rusage *Rusage) (wpid int, err error) { - r0, e1 := wait4(uintptr(pid), wstatus, uintptr(options), rusage) - if e1 != 0 { - err = syscall.Errno(e1) - } - return int(r0), err -} - -func gethostname() (name string, err uintptr) - -func Gethostname() (name string, err error) { - name, e1 := gethostname() - if e1 != 0 { - err = syscall.Errno(e1) - } - return name, err -} - -//sys utimes(path string, times *[2]Timeval) (err error) - -func Utimes(path string, tv []Timeval) (err error) { - if tv == nil { - return utimes(path, nil) - } - if len(tv) != 2 { - return EINVAL - } - return utimes(path, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -//sys utimensat(fd int, path string, times *[2]Timespec, flag int) (err error) - -func UtimesNano(path string, ts []Timespec) error { - if ts == nil { - return utimensat(AT_FDCWD, path, nil, 0) - } - if len(ts) != 2 { - return EINVAL - } - return utimensat(AT_FDCWD, path, (*[2]Timespec)(unsafe.Pointer(&ts[0])), 0) -} - -func UtimesNanoAt(dirfd int, path string, ts []Timespec, flags int) error { - if ts == nil { - return utimensat(dirfd, path, nil, flags) - } - if len(ts) != 2 { - return EINVAL - } - return utimensat(dirfd, path, (*[2]Timespec)(unsafe.Pointer(&ts[0])), flags) -} - -//sys fcntl(fd int, cmd int, arg int) (val int, err error) - -// FcntlFlock performs a fcntl syscall for the F_GETLK, F_SETLK or F_SETLKW command. -func FcntlFlock(fd uintptr, cmd int, lk *Flock_t) error { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procfcntl)), 3, uintptr(fd), uintptr(cmd), uintptr(unsafe.Pointer(lk)), 0, 0, 0) - if e1 != 0 { - return e1 - } - return nil -} - -//sys futimesat(fildes int, path *byte, times *[2]Timeval) (err error) - -func Futimesat(dirfd int, path string, tv []Timeval) error { - pathp, err := BytePtrFromString(path) - if err != nil { - return err - } - if tv == nil { - return futimesat(dirfd, pathp, nil) - } - if len(tv) != 2 { - return EINVAL - } - return futimesat(dirfd, pathp, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -// Solaris doesn't have an futimes function because it allows NULL to be -// specified as the path for futimesat. However, Go doesn't like -// NULL-style string interfaces, so this simple wrapper is provided. 
-func Futimes(fd int, tv []Timeval) error { - if tv == nil { - return futimesat(fd, nil, nil) - } - if len(tv) != 2 { - return EINVAL - } - return futimesat(fd, nil, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) -} - -func anyToSockaddr(rsa *RawSockaddrAny) (Sockaddr, error) { - switch rsa.Addr.Family { - case AF_UNIX: - pp := (*RawSockaddrUnix)(unsafe.Pointer(rsa)) - sa := new(SockaddrUnix) - // Assume path ends at NUL. - // This is not technically the Solaris semantics for - // abstract Unix domain sockets -- they are supposed - // to be uninterpreted fixed-size binary blobs -- but - // everyone uses this convention. - n := 0 - for n < len(pp.Path) && pp.Path[n] != 0 { - n++ - } - bytes := (*[10000]byte)(unsafe.Pointer(&pp.Path[0]))[0:n] - sa.Name = string(bytes) - return sa, nil - - case AF_INET: - pp := (*RawSockaddrInet4)(unsafe.Pointer(rsa)) - sa := new(SockaddrInet4) - p := (*[2]byte)(unsafe.Pointer(&pp.Port)) - sa.Port = int(p[0])<<8 + int(p[1]) - for i := 0; i < len(sa.Addr); i++ { - sa.Addr[i] = pp.Addr[i] - } - return sa, nil - - case AF_INET6: - pp := (*RawSockaddrInet6)(unsafe.Pointer(rsa)) - sa := new(SockaddrInet6) - p := (*[2]byte)(unsafe.Pointer(&pp.Port)) - sa.Port = int(p[0])<<8 + int(p[1]) - sa.ZoneId = pp.Scope_id - for i := 0; i < len(sa.Addr); i++ { - sa.Addr[i] = pp.Addr[i] - } - return sa, nil - } - return nil, EAFNOSUPPORT -} - -//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) = libsocket.accept - -func Accept(fd int) (nfd int, sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - nfd, err = accept(fd, &rsa, &len) - if nfd == -1 { - return - } - sa, err = anyToSockaddr(&rsa) - if err != nil { - Close(nfd) - nfd = 0 - } - return -} - -//sys recvmsg(s int, msg *Msghdr, flags int) (n int, err error) = libsocket.recvmsg - -func Recvmsg(fd int, p, oob []byte, flags int) (n, oobn int, recvflags int, from Sockaddr, err error) { - var msg Msghdr - var rsa RawSockaddrAny - msg.Name = (*byte)(unsafe.Pointer(&rsa)) - msg.Namelen = uint32(SizeofSockaddrAny) - var iov Iovec - if len(p) > 0 { - iov.Base = (*int8)(unsafe.Pointer(&p[0])) - iov.SetLen(len(p)) - } - var dummy int8 - if len(oob) > 0 { - // receive at least one normal byte - if len(p) == 0 { - iov.Base = &dummy - iov.SetLen(1) - } - msg.Accrights = (*int8)(unsafe.Pointer(&oob[0])) - } - msg.Iov = &iov - msg.Iovlen = 1 - if n, err = recvmsg(fd, &msg, flags); n == -1 { - return - } - oobn = int(msg.Accrightslen) - // source address is only specified if the socket is unconnected - if rsa.Addr.Family != AF_UNSPEC { - from, err = anyToSockaddr(&rsa) - } - return -} - -func Sendmsg(fd int, p, oob []byte, to Sockaddr, flags int) (err error) { - _, err = SendmsgN(fd, p, oob, to, flags) - return -} - -//sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) = libsocket.sendmsg - -func SendmsgN(fd int, p, oob []byte, to Sockaddr, flags int) (n int, err error) { - var ptr unsafe.Pointer - var salen _Socklen - if to != nil { - ptr, salen, err = to.sockaddr() - if err != nil { - return 0, err - } - } - var msg Msghdr - msg.Name = (*byte)(unsafe.Pointer(ptr)) - msg.Namelen = uint32(salen) - var iov Iovec - if len(p) > 0 { - iov.Base = (*int8)(unsafe.Pointer(&p[0])) - iov.SetLen(len(p)) - } - var dummy int8 - if len(oob) > 0 { - // send at least one normal byte - if len(p) == 0 { - iov.Base = &dummy - iov.SetLen(1) - } - msg.Accrights = (*int8)(unsafe.Pointer(&oob[0])) - } - msg.Iov = &iov - msg.Iovlen = 1 - if n, err = sendmsg(fd, &msg, flags); err != nil { - 
return 0, err - } - if len(oob) > 0 && len(p) == 0 { - n = 0 - } - return n, nil -} - -//sys acct(path *byte) (err error) - -func Acct(path string) (err error) { - if len(path) == 0 { - // Assume caller wants to disable accounting. - return acct(nil) - } - - pathp, err := BytePtrFromString(path) - if err != nil { - return err - } - return acct(pathp) -} - -/* - * Expose the ioctl function - */ - -//sys ioctl(fd int, req int, arg uintptr) (err error) - -func IoctlSetInt(fd int, req int, value int) (err error) { - return ioctl(fd, req, uintptr(value)) -} - -func IoctlSetWinsize(fd int, req int, value *Winsize) (err error) { - return ioctl(fd, req, uintptr(unsafe.Pointer(value))) -} - -func IoctlSetTermios(fd int, req int, value *Termios) (err error) { - return ioctl(fd, req, uintptr(unsafe.Pointer(value))) -} - -func IoctlSetTermio(fd int, req int, value *Termio) (err error) { - return ioctl(fd, req, uintptr(unsafe.Pointer(value))) -} - -func IoctlGetInt(fd int, req int) (int, error) { - var value int - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) - return value, err -} - -func IoctlGetWinsize(fd int, req int) (*Winsize, error) { - var value Winsize - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) - return &value, err -} - -func IoctlGetTermios(fd int, req int) (*Termios, error) { - var value Termios - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) - return &value, err -} - -func IoctlGetTermio(fd int, req int) (*Termio, error) { - var value Termio - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) - return &value, err -} - -/* - * Exposed directly - */ -//sys Access(path string, mode uint32) (err error) -//sys Adjtime(delta *Timeval, olddelta *Timeval) (err error) -//sys Chdir(path string) (err error) -//sys Chmod(path string, mode uint32) (err error) -//sys Chown(path string, uid int, gid int) (err error) -//sys Chroot(path string) (err error) -//sys Close(fd int) (err error) -//sys Creat(path string, mode uint32) (fd int, err error) -//sys Dup(fd int) (nfd int, err error) -//sys Dup2(oldfd int, newfd int) (err error) -//sys Exit(code int) -//sys Fchdir(fd int) (err error) -//sys Fchmod(fd int, mode uint32) (err error) -//sys Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) -//sys Fchown(fd int, uid int, gid int) (err error) -//sys Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) -//sys Fdatasync(fd int) (err error) -//sys Fpathconf(fd int, name int) (val int, err error) -//sys Fstat(fd int, stat *Stat_t) (err error) -//sys Getdents(fd int, buf []byte, basep *uintptr) (n int, err error) -//sysnb Getgid() (gid int) -//sysnb Getpid() (pid int) -//sysnb Getpgid(pid int) (pgid int, err error) -//sysnb Getpgrp() (pgid int, err error) -//sys Geteuid() (euid int) -//sys Getegid() (egid int) -//sys Getppid() (ppid int) -//sys Getpriority(which int, who int) (n int, err error) -//sysnb Getrlimit(which int, lim *Rlimit) (err error) -//sysnb Getrusage(who int, rusage *Rusage) (err error) -//sysnb Gettimeofday(tv *Timeval) (err error) -//sysnb Getuid() (uid int) -//sys Kill(pid int, signum syscall.Signal) (err error) -//sys Lchown(path string, uid int, gid int) (err error) -//sys Link(path string, link string) (err error) -//sys Listen(s int, backlog int) (err error) = libsocket.listen -//sys Lstat(path string, stat *Stat_t) (err error) -//sys Madvise(b []byte, advice int) (err error) -//sys Mkdir(path string, mode uint32) (err error) -//sys Mkdirat(dirfd int, path string, mode uint32) (err error) -//sys Mkfifo(path string, 
mode uint32) (err error) -//sys Mkfifoat(dirfd int, path string, mode uint32) (err error) -//sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mknodat(dirfd int, path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) -//sys Nanosleep(time *Timespec, leftover *Timespec) (err error) -//sys Open(path string, mode int, perm uint32) (fd int, err error) -//sys Openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) -//sys Pathconf(path string, name int) (val int, err error) -//sys Pause() (err error) -//sys Pread(fd int, p []byte, offset int64) (n int, err error) -//sys Pwrite(fd int, p []byte, offset int64) (n int, err error) -//sys read(fd int, p []byte) (n int, err error) -//sys Readlink(path string, buf []byte) (n int, err error) -//sys Rename(from string, to string) (err error) -//sys Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) -//sys Rmdir(path string) (err error) -//sys Seek(fd int, offset int64, whence int) (newoffset int64, err error) = lseek -//sysnb Setegid(egid int) (err error) -//sysnb Seteuid(euid int) (err error) -//sysnb Setgid(gid int) (err error) -//sys Sethostname(p []byte) (err error) -//sysnb Setpgid(pid int, pgid int) (err error) -//sys Setpriority(which int, who int, prio int) (err error) -//sysnb Setregid(rgid int, egid int) (err error) -//sysnb Setreuid(ruid int, euid int) (err error) -//sysnb Setrlimit(which int, lim *Rlimit) (err error) -//sysnb Setsid() (pid int, err error) -//sysnb Setuid(uid int) (err error) -//sys Shutdown(s int, how int) (err error) = libsocket.shutdown -//sys Stat(path string, stat *Stat_t) (err error) -//sys Symlink(path string, link string) (err error) -//sys Sync() (err error) -//sysnb Times(tms *Tms) (ticks uintptr, err error) -//sys Truncate(path string, length int64) (err error) -//sys Fsync(fd int) (err error) -//sys Ftruncate(fd int, length int64) (err error) -//sys Umask(mask int) (oldmask int) -//sysnb Uname(buf *Utsname) (err error) -//sys Unmount(target string, flags int) (err error) = libc.umount -//sys Unlink(path string) (err error) -//sys Unlinkat(dirfd int, path string) (err error) -//sys Ustat(dev int, ubuf *Ustat_t) (err error) -//sys Utime(path string, buf *Utimbuf) (err error) -//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) = libsocket.bind -//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) = libsocket.connect -//sys mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) -//sys munmap(addr uintptr, length uintptr) (err error) -//sys sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) = libsocket.sendto -//sys socket(domain int, typ int, proto int) (fd int, err error) = libsocket.socket -//sysnb socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) = libsocket.socketpair -//sys write(fd int, p []byte) (n int, err error) -//sys getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) = libsocket.getsockopt -//sysnb getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) = libsocket.getpeername -//sys setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) = libsocket.setsockopt -//sys recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n 
int, err error) = libsocket.recvfrom - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procread)), 3, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf), 0, 0, 0) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procwrite)), 3, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf), 0, 0, 0) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -var mapper = &mmapper{ - active: make(map[*byte][]byte), - mmap: mmap, - munmap: munmap, -} - -func Mmap(fd int, offset int64, length int, prot int, flags int) (data []byte, err error) { - return mapper.Mmap(fd, offset, length, prot, flags) -} - -func Munmap(b []byte) (err error) { - return mapper.Munmap(b) -} - -//sys sysconf(name int) (n int64, err error) - -// pageSize caches the value of Getpagesize, since it can't change -// once the system is booted. -var pageSize int64 // accessed atomically - -func Getpagesize() int { - n := atomic.LoadInt64(&pageSize) - if n == 0 { - n, _ = sysconf(_SC_PAGESIZE) - atomic.StoreInt64(&pageSize, n) - } - return int(n) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_solaris_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_solaris_amd64.go deleted file mode 100644 index 2e44630cd06..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_solaris_amd64.go +++ /dev/null @@ -1,37 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build amd64,solaris - -package unix - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } - -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = nsec % 1e9 / 1e3 - tv.Sec = int64(nsec / 1e9) - return -} - -func (iov *Iovec) SetLen(length int) { - iov.Len = uint64(length) -} - -func (cmsg *Cmsghdr) SetLen(length int) { - cmsg.Len = uint32(length) -} - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - // TODO(aram): implement this, see issue 5847. - panic("unimplemented") -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_unix.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_unix.go deleted file mode 100644 index b46b25028c9..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/syscall_unix.go +++ /dev/null @@ -1,297 +0,0 @@ -// Copyright 2009 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build darwin dragonfly freebsd linux netbsd openbsd solaris - -package unix - -import ( - "runtime" - "sync" - "syscall" - "unsafe" -) - -var ( - Stdin = 0 - Stdout = 1 - Stderr = 2 -) - -const ( - darwin64Bit = runtime.GOOS == "darwin" && sizeofPtr == 8 - dragonfly64Bit = runtime.GOOS == "dragonfly" && sizeofPtr == 8 - netbsd32Bit = runtime.GOOS == "netbsd" && sizeofPtr == 4 -) - -// Do the interface allocations only once for common -// Errno values. -var ( - errEAGAIN error = syscall.EAGAIN - errEINVAL error = syscall.EINVAL - errENOENT error = syscall.ENOENT -) - -// errnoErr returns common boxed Errno values, to prevent -// allocations at runtime. -func errnoErr(e syscall.Errno) error { - switch e { - case 0: - return nil - case EAGAIN: - return errEAGAIN - case EINVAL: - return errEINVAL - case ENOENT: - return errENOENT - } - return e -} - -func Syscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) -func Syscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) -func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) -func RawSyscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) - -// Mmap manager, for use by operating system-specific implementations. - -type mmapper struct { - sync.Mutex - active map[*byte][]byte // active mappings; key is last byte in mapping - mmap func(addr, length uintptr, prot, flags, fd int, offset int64) (uintptr, error) - munmap func(addr uintptr, length uintptr) error -} - -func (m *mmapper) Mmap(fd int, offset int64, length int, prot int, flags int) (data []byte, err error) { - if length <= 0 { - return nil, EINVAL - } - - // Map the requested memory. - addr, errno := m.mmap(0, uintptr(length), prot, flags, fd, offset) - if errno != nil { - return nil, errno - } - - // Slice memory layout - var sl = struct { - addr uintptr - len int - cap int - }{addr, length, length} - - // Use unsafe to turn sl into a []byte. - b := *(*[]byte)(unsafe.Pointer(&sl)) - - // Register mapping in m and return it. - p := &b[cap(b)-1] - m.Lock() - defer m.Unlock() - m.active[p] = b - return b, nil -} - -func (m *mmapper) Munmap(data []byte) (err error) { - if len(data) == 0 || len(data) != cap(data) { - return EINVAL - } - - // Find the base of the mapping. - p := &data[cap(data)-1] - m.Lock() - defer m.Unlock() - b := m.active[p] - if b == nil || &b[0] != &data[0] { - return EINVAL - } - - // Unmap the memory and update m. - if errno := m.munmap(uintptr(unsafe.Pointer(&b[0])), uintptr(len(b))); errno != nil { - return errno - } - delete(m.active, p) - return nil -} - -func Read(fd int, p []byte) (n int, err error) { - n, err = read(fd, p) - if raceenabled { - if n > 0 { - raceWriteRange(unsafe.Pointer(&p[0]), n) - } - if err == nil { - raceAcquire(unsafe.Pointer(&ioSync)) - } - } - return -} - -func Write(fd int, p []byte) (n int, err error) { - if raceenabled { - raceReleaseMerge(unsafe.Pointer(&ioSync)) - } - n, err = write(fd, p) - if raceenabled && n > 0 { - raceReadRange(unsafe.Pointer(&p[0]), n) - } - return -} - -// For testing: clients can set this flag to force -// creation of IPv6 sockets to return EAFNOSUPPORT. 
-var SocketDisableIPv6 bool - -type Sockaddr interface { - sockaddr() (ptr unsafe.Pointer, len _Socklen, err error) // lowercase; only we can define Sockaddrs -} - -type SockaddrInet4 struct { - Port int - Addr [4]byte - raw RawSockaddrInet4 -} - -type SockaddrInet6 struct { - Port int - ZoneId uint32 - Addr [16]byte - raw RawSockaddrInet6 -} - -type SockaddrUnix struct { - Name string - raw RawSockaddrUnix -} - -func Bind(fd int, sa Sockaddr) (err error) { - ptr, n, err := sa.sockaddr() - if err != nil { - return err - } - return bind(fd, ptr, n) -} - -func Connect(fd int, sa Sockaddr) (err error) { - ptr, n, err := sa.sockaddr() - if err != nil { - return err - } - return connect(fd, ptr, n) -} - -func Getpeername(fd int) (sa Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - if err = getpeername(fd, &rsa, &len); err != nil { - return - } - return anyToSockaddr(&rsa) -} - -func GetsockoptInt(fd, level, opt int) (value int, err error) { - var n int32 - vallen := _Socklen(4) - err = getsockopt(fd, level, opt, unsafe.Pointer(&n), &vallen) - return int(n), err -} - -func Recvfrom(fd int, p []byte, flags int) (n int, from Sockaddr, err error) { - var rsa RawSockaddrAny - var len _Socklen = SizeofSockaddrAny - if n, err = recvfrom(fd, p, flags, &rsa, &len); err != nil { - return - } - if rsa.Addr.Family != AF_UNSPEC { - from, err = anyToSockaddr(&rsa) - } - return -} - -func Sendto(fd int, p []byte, flags int, to Sockaddr) (err error) { - ptr, n, err := to.sockaddr() - if err != nil { - return err - } - return sendto(fd, p, flags, ptr, n) -} - -func SetsockoptByte(fd, level, opt int, value byte) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(&value), 1) -} - -func SetsockoptInt(fd, level, opt int, value int) (err error) { - var n = int32(value) - return setsockopt(fd, level, opt, unsafe.Pointer(&n), 4) -} - -func SetsockoptInet4Addr(fd, level, opt int, value [4]byte) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(&value[0]), 4) -} - -func SetsockoptIPMreq(fd, level, opt int, mreq *IPMreq) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(mreq), SizeofIPMreq) -} - -func SetsockoptIPv6Mreq(fd, level, opt int, mreq *IPv6Mreq) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(mreq), SizeofIPv6Mreq) -} - -func SetsockoptICMPv6Filter(fd, level, opt int, filter *ICMPv6Filter) error { - return setsockopt(fd, level, opt, unsafe.Pointer(filter), SizeofICMPv6Filter) -} - -func SetsockoptLinger(fd, level, opt int, l *Linger) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(l), SizeofLinger) -} - -func SetsockoptString(fd, level, opt int, s string) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(&[]byte(s)[0]), uintptr(len(s))) -} - -func SetsockoptTimeval(fd, level, opt int, tv *Timeval) (err error) { - return setsockopt(fd, level, opt, unsafe.Pointer(tv), unsafe.Sizeof(*tv)) -} - -func Socket(domain, typ, proto int) (fd int, err error) { - if domain == AF_INET6 && SocketDisableIPv6 { - return -1, EAFNOSUPPORT - } - fd, err = socket(domain, typ, proto) - return -} - -func Socketpair(domain, typ, proto int) (fd [2]int, err error) { - var fdx [2]int32 - err = socketpair(domain, typ, proto, &fdx) - if err == nil { - fd[0] = int(fdx[0]) - fd[1] = int(fdx[1]) - } - return -} - -func Sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - if raceenabled { - raceReleaseMerge(unsafe.Pointer(&ioSync)) - } - return sendfile(outfd, infd, offset, count) 
-} - -var ioSync int64 - -func CloseOnExec(fd int) { fcntl(fd, F_SETFD, FD_CLOEXEC) } - -func SetNonblock(fd int, nonblocking bool) (err error) { - flag, err := fcntl(fd, F_GETFL, 0) - if err != nil { - return err - } - if nonblocking { - flag |= O_NONBLOCK - } else { - flag &= ^O_NONBLOCK - } - _, err = fcntl(fd, F_SETFL, flag) - return err -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_386.go deleted file mode 100644 index 8e63888351e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_386.go +++ /dev/null @@ -1,1576 +0,0 @@ -// mkerrors.sh -m32 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,darwin - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m32 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1c - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x25 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1e - AF_IPX = 0x17 - AF_ISDN = 0x1c - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x28 - AF_NATM = 0x1f - AF_NDRV = 0x1b - AF_NETBIOS = 0x21 - AF_NS = 0x6 - AF_OSI = 0x7 - AF_PPP = 0x22 - AF_PUP = 0x4 - AF_RESERVED_36 = 0x24 - AF_ROUTE = 0x11 - AF_SIP = 0x18 - AF_SNA = 0xb - AF_SYSTEM = 0x20 - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_UTUN = 0x26 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B9600 = 0x2580 - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc00c4279 - BIOCGETIF = 0x4020426b - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4008426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044278 - BIOCSETF = 0x80084267 - BIOCSETFNR = 0x8008427e - BIOCSETIF = 0x8020426c - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8008426d - BIOCSSEESENT = 0x80044277 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - 
CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DBUS = 0xe7 - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_DVB_CI = 0xeb - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HHDLC = 0x79 - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NOFCS = 0xe6 - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPFILTER = 0x74 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPOIB = 0xf2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_ATM_CEMIC = 0xee - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FIBRECHANNEL = 0xea - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_SRX_E2E = 0xe9 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_JUNIPER_VS = 0xe8 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_PPP_WITHDIRECTION = 0xa6 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MATCHING_MAX = 0xf5 - DLT_MATCHING_MIN = 0x68 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPEG_2_TS = 0xf3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_MUX27010 = 0xec - DLT_NETANALYZER = 0xf0 - DLT_NETANALYZER_TRANSPARENT = 0xf1 - DLT_NFC_LLCP = 0xf5 - DLT_NFLOG = 0xef - DLT_NG40 = 0xf4 - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PPP_WITH_DIRECTION = 0xa6 - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DLT_STANAG_5066_D_PDU = 0xed - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_USER0 = 0x93 - DLT_USER1 = 0x94 - DLT_USER10 = 0x9d - DLT_USER11 = 0x9e - DLT_USER12 = 0x9f - DLT_USER13 = 0xa0 - DLT_USER14 = 0xa1 - DLT_USER15 = 
0xa2 - DLT_USER2 = 0x95 - DLT_USER3 = 0x96 - DLT_USER4 = 0x97 - DLT_USER5 = 0x98 - DLT_USER6 = 0x99 - DLT_USER7 = 0x9a - DLT_USER8 = 0x9b - DLT_USER9 = 0x9c - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_FS = -0x9 - EVFILT_MACHPORT = -0x8 - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0xe - EVFILT_THREADMARKER = 0xe - EVFILT_TIMER = -0x7 - EVFILT_USER = -0xa - EVFILT_VM = -0xc - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_DISPATCH = 0x80 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG0 = 0x1000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_OOBAND = 0x2000 - EV_POLL = 0x1000 - EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_ADDFILESIGS = 0x3d - F_ADDSIGS = 0x3b - F_ALLOCATEALL = 0x4 - F_ALLOCATECONTIG = 0x2 - F_CHKCLEAN = 0x29 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x43 - F_FINDSIGS = 0x4e - F_FLUSH_DATA = 0x28 - F_FREEZE_FS = 0x35 - F_FULLFSYNC = 0x33 - F_GETCODEDIR = 0x48 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETLKPID = 0x42 - F_GETNOSIGPIPE = 0x4a - F_GETOWN = 0x5 - F_GETPATH = 0x32 - F_GETPATH_MTMINFO = 0x47 - F_GETPROTECTIONCLASS = 0x3f - F_GETPROTECTIONLEVEL = 0x4d - F_GLOBAL_NOCACHE = 0x37 - F_LOG2PHYS = 0x31 - F_LOG2PHYS_EXT = 0x41 - F_NOCACHE = 0x30 - F_NODIRECT = 0x3e - F_OK = 0x0 - F_PATHPKG_CHECK = 0x34 - F_PEOFPOSMODE = 0x3 - F_PREALLOCATE = 0x2a - F_RDADVISE = 0x2c - F_RDAHEAD = 0x2d - F_RDLCK = 0x1 - F_SETBACKINGSTORE = 0x46 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETLKWTIMEOUT = 0xa - F_SETNOSIGPIPE = 0x49 - F_SETOWN = 0x6 - F_SETPROTECTIONCLASS = 0x40 - F_SETSIZE = 0x2b - F_SINGLE_WRITER = 0x4c - F_THAW_FS = 0x36 - F_TRANSCODEKEY = 0x4b - F_UNLCK = 0x2 - F_VOLPOSMODE = 0x4 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFF_ALLMULTI = 0x200 - IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_AAL5 = 0x31 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ATM = 0x25 - IFT_BRIDGE = 0xd1 - IFT_CARP = 0xf8 - IFT_CELLULAR = 0xff - IFT_CEPT = 0x13 - IFT_DS3 = 0x1e - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_ETHER = 0x6 - IFT_FAITH = 0x38 - IFT_FDDI = 0xf - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_GIF = 0x37 - IFT_HDH1822 = 0x3 - IFT_HIPPI = 0x2f - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IEEE1394 = 0x90 - IFT_IEEE8023ADLAG = 0x88 - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88026 = 0xa - IFT_L2VLAN = 0x87 - IFT_LAPB = 0x10 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_NSIP = 0x1b - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PDP = 0xff - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - 
IFT_PKTAP = 0xfe - IFT_PPP = 0x17 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PTPSERIAL = 0x16 - IFT_RS232 = 0x21 - IFT_SDLC = 0x11 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_STARLAN = 0xb - IFT_STF = 0x39 - IFT_T1 = 0x12 - IFT_ULTRA = 0x1d - IFT_V35 = 0x2d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LINKLOCALNETNUM = 0xa9fe0000 - IN_LOOPBACKNET = 0x7f - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0xfe - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEP = 0x21 - IPPROTO_SRPC = 0x5a - IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_2292DSTOPTS = 0x17 - IPV6_2292HOPLIMIT = 0x14 - IPV6_2292HOPOPTS = 0x16 - IPV6_2292NEXTHOP = 0x15 - IPV6_2292PKTINFO = 0x13 - IPV6_2292PKTOPTIONS = 0x19 - IPV6_2292RTHDR = 0x18 - IPV6_BINDV6ONLY 
= 0x1b - IPV6_BOUND_IF = 0x7d - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x3c - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL = 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXOPTHDR = 0x800 - IPV6_MAXPACKET = 0xffff - IPV6_MAX_GROUP_SRC_FILTER = 0x200 - IPV6_MAX_MEMBERSHIPS = 0xfff - IPV6_MAX_SOCK_SRC_FILTER = 0x80 - IPV6_MIN_MEMBERSHIPS = 0x1f - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVTCLASS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x24 - IPV6_UNICAST_HOPS = 0x4 - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_ADD_SOURCE_MEMBERSHIP = 0x46 - IP_BLOCK_SOURCE = 0x48 - IP_BOUND_IF = 0x19 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_DROP_SOURCE_MEMBERSHIP = 0x47 - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW_ADD = 0x28 - IP_FW_DEL = 0x29 - IP_FW_FLUSH = 0x2a - IP_FW_GET = 0x2c - IP_FW_RESETLOG = 0x2d - IP_FW_ZERO = 0x2b - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_GROUP_SRC_FILTER = 0x200 - IP_MAX_MEMBERSHIPS = 0xfff - IP_MAX_SOCK_MUTE_FILTER = 0x80 - IP_MAX_SOCK_SRC_FILTER = 0x80 - IP_MF = 0x2000 - IP_MIN_MEMBERSHIPS = 0x1f - IP_MSFILTER = 0x4a - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_IFINDEX = 0x42 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_MULTICAST_VIF = 0xe - IP_NAT__XXX = 0x37 - IP_OFFMASK = 0x1fff - IP_OLD_FW_ADD = 0x32 - IP_OLD_FW_DEL = 0x33 - IP_OLD_FW_FLUSH = 0x34 - IP_OLD_FW_GET = 0x36 - IP_OLD_FW_RESETLOG = 0x38 - IP_OLD_FW_ZERO = 0x35 - IP_OPTIONS = 0x1 - IP_PKTINFO = 0x1a - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVPKTINFO = 0x1a - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x18 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_STRIPHDR = 0x17 - IP_TOS = 0x3 - IP_TRAFFIC_MGT_BACKGROUND = 0x41 - IP_TTL = 0x4 - IP_UNBLOCK_SOURCE = 0x49 - ISIG = 0x80 - ISTRIP = 0x20 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_CAN_REUSE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_FREE_REUSABLE = 0x7 - MADV_FREE_REUSE = 0x8 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_WILLNEED = 0x3 - MADV_ZERO_WIRED_PAGES = 0x6 - MAP_ANON = 0x1000 - MAP_COPY = 0x2 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_JIT = 0x800 - MAP_NOCACHE = 0x400 - MAP_NOEXTEND = 0x100 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_RESERVED0080 = 0x80 - MAP_SHARED = 0x1 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - MSG_EOR = 0x8 - MSG_FLUSH = 0x400 - 
MSG_HAVEMORE = 0x2000 - MSG_HOLD = 0x800 - MSG_NEEDSA = 0x10000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_RCVMORE = 0x4000 - MSG_SEND = 0x1000 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MSG_WAITSTREAM = 0x200 - MS_ASYNC = 0x1 - MS_DEACTIVATE = 0x8 - MS_INVALIDATE = 0x2 - MS_KILLPAGES = 0x4 - MS_SYNC = 0x10 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_DUMP2 = 0x7 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_IFLIST2 = 0x6 - NET_RT_MAXID = 0xa - NET_RT_STAT = 0x4 - NET_RT_TRASH = 0x5 - NOFLSH = 0x80000000 - NOTE_ABSOLUTE = 0x8 - NOTE_ATTRIB = 0x8 - NOTE_BACKGROUND = 0x40 - NOTE_CHILD = 0x4 - NOTE_CRITICAL = 0x20 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXITSTATUS = 0x4000000 - NOTE_EXIT_CSERROR = 0x40000 - NOTE_EXIT_DECRYPTFAIL = 0x10000 - NOTE_EXIT_DETAIL = 0x2000000 - NOTE_EXIT_DETAIL_MASK = 0x70000 - NOTE_EXIT_MEMORY = 0x20000 - NOTE_EXIT_REPARENTED = 0x80000 - NOTE_EXTEND = 0x4 - NOTE_FFAND = 0x40000000 - NOTE_FFCOPY = 0xc0000000 - NOTE_FFCTRLMASK = 0xc0000000 - NOTE_FFLAGSMASK = 0xffffff - NOTE_FFNOP = 0x0 - NOTE_FFOR = 0x80000000 - NOTE_FORK = 0x40000000 - NOTE_LEEWAY = 0x10 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_NONE = 0x80 - NOTE_NSECONDS = 0x4 - NOTE_PCTRLMASK = -0x100000 - NOTE_PDATAMASK = 0xfffff - NOTE_REAP = 0x10000000 - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_SECONDS = 0x1 - NOTE_SIGNAL = 0x8000000 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRIGGER = 0x1000000 - NOTE_USECONDS = 0x2 - NOTE_VM_ERROR = 0x10000000 - NOTE_VM_PRESSURE = 0x80000000 - NOTE_VM_PRESSURE_SUDDEN_TERMINATE = 0x20000000 - NOTE_VM_PRESSURE_TERMINATE = 0x40000000 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - OFDEL = 0x20000 - OFILL = 0x80 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_ALERT = 0x20000000 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x1000000 - O_CREAT = 0x200 - O_DIRECTORY = 0x100000 - O_DP_GETRAWENCRYPTED = 0x1 - O_DSYNC = 0x400000 - O_EVTONLY = 0x8000 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x20000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_POPUP = 0x80000000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYMLINK = 0x200000 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PT_ATTACH = 0xa - PT_ATTACHEXC = 0xe - PT_CONTINUE = 0x7 - PT_DENY_ATTACH = 0x1f - PT_DETACH = 0xb - PT_FIRSTMACH = 0x20 - PT_FORCEQUOTA = 0x1e - PT_KILL = 0x8 - PT_READ_D = 0x2 - PT_READ_I = 0x1 - PT_READ_U = 0x3 - PT_SIGEXC = 0xc - PT_STEP = 0x9 - PT_THUPDATE = 0xd - PT_TRACE_ME = 0x0 - PT_WRITE_D = 0x5 - PT_WRITE_I = 0x4 - PT_WRITE_U = 0x6 - RLIMIT_AS = 0x5 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_CPU_USAGE_MONITOR = 0x2 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x8 - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_CLONING = 0x100 - RTF_CONDEMNED = 0x2000000 - RTF_DELCLONE = 0x80 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_IFREF = 
0x4000000 - RTF_IFSCOPE = 0x1000000 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 0x20 - RTF_MULTICAST = 0x800000 - RTF_NOIFREF = 0x2000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_PROXY = 0x8000000 - RTF_REJECT = 0x8 - RTF_ROUTER = 0x10000000 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_WASCLONED = 0x20000 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_GET2 = 0x14 - RTM_IFINFO = 0xe - RTM_IFINFO2 = 0x12 - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_NEWMADDR2 = 0x13 - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SCM_TIMESTAMP_MONOTONIC = 0x4 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCAIFADDR = 0x8040691a - SIOCARPIPLL = 0xc0206928 - SIOCATMARK = 0x40047307 - SIOCAUTOADDR = 0xc0206926 - SIOCAUTONETMASK = 0x80206927 - SIOCDELMULTI = 0x80206932 - SIOCDIFADDR = 0x80206919 - SIOCDIFPHYADDR = 0x80206941 - SIOCGDRVSPEC = 0xc01c697b - SIOCGETVLAN = 0xc020697f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFALTMTU = 0xc0206948 - SIOCGIFASYNCMAP = 0xc020697c - SIOCGIFBOND = 0xc0206947 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020695b - SIOCGIFCONF = 0xc0086924 - SIOCGIFDEVMTU = 0xc0206944 - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFKPI = 0xc0206987 - SIOCGIFMAC = 0xc0206982 - SIOCGIFMEDIA = 0xc0286938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206940 - SIOCGIFPHYS = 0xc0206935 - SIOCGIFPSRCADDR = 0xc020693f - SIOCGIFSTATUS = 0xc331693d - SIOCGIFVLAN = 0xc020697f - SIOCGIFWAKEFLAGS = 0xc0206988 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCIFCREATE = 0xc0206978 - SIOCIFCREATE2 = 0xc020697a - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc00c6981 - SIOCRSLVMULTI = 0xc008693b - SIOCSDRVSPEC = 0x801c697b - SIOCSETVLAN = 0x8020697e - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFALTMTU = 0x80206945 - SIOCSIFASYNCMAP = 0x8020697d - SIOCSIFBOND = 0x80206946 - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020695a - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFKPI = 0x80206986 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMAC = 0x80206983 - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x80206934 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x8040693e - SIOCSIFPHYS = 0x80206936 - SIOCSIFVLAN = 0x8020697e - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_DONTTRUNC = 0x2000 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LABEL = 0x1010 - SO_LINGER = 0x80 - SO_LINGER_SEC = 0x1080 - SO_NKE = 0x1021 - SO_NOADDRERR = 0x1023 - SO_NOSIGPIPE = 0x1022 - SO_NOTIFYCONFLICT = 0x1026 - SO_NP_EXTENSIONS = 0x1083 - SO_NREAD = 0x1020 - 
SO_NUMRCVPKT = 0x1112 - SO_NWRITE = 0x1024 - SO_OOBINLINE = 0x100 - SO_PEERLABEL = 0x1011 - SO_RANDOMPORT = 0x1082 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_REUSESHAREUID = 0x1025 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TIMESTAMP_MONOTONIC = 0x800 - SO_TYPE = 0x1008 - SO_UPCALLCLOSEWAIT = 0x1027 - SO_USELOOPBACK = 0x40 - SO_WANTMORE = 0x4000 - SO_WANTOOBFLAG = 0x8000 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IFWHT = 0xe000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISTXT = 0x200 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CONNECTIONTIMEOUT = 0x20 - TCP_ENABLE_ECN = 0x104 - TCP_KEEPALIVE = 0x10 - TCP_KEEPCNT = 0x102 - TCP_KEEPINTVL = 0x101 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x4 - TCP_MAX_WINSHIFT = 0xe - TCP_MINMSS = 0xd8 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_NOTSENT_LOWAT = 0x201 - TCP_RXT_CONNDROPTIME = 0x80 - TCP_RXT_FINDROP = 0x100 - TCP_SENDMOREACKS = 0x103 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x40087458 - TIOCDRAIN = 0x2000745e - TIOCDSIMICROCODE = 0x20007455 - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGWINSZ = 0x40087468 - TIOCIXOFF = 0x20007480 - TIOCIXON = 0x20007481 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 0x4004746a - TIOCMODG = 0x40047403 - TIOCMODS = 0x80047404 - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTYGNAME = 0x40807453 - TIOCPTYGRANT = 0x20007454 - TIOCPTYUNLK = 0x20007452 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCONS = 0x20007463 - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2000745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40087459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VT0 = 0x0 - VT1 = 0x10000 - VTDLY = 0x10000 - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x10 - WCOREFLAG = 0x80 
- WEXITED = 0x4 - WNOHANG = 0x1 - WNOWAIT = 0x20 - WORDSIZE = 0x20 - WSTOPPED = 0x8 - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADARCH = syscall.Errno(0x56) - EBADEXEC = syscall.Errno(0x55) - EBADF = syscall.Errno(0x9) - EBADMACHO = syscall.Errno(0x58) - EBADMSG = syscall.Errno(0x5e) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x59) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDEVERR = syscall.Errno(0x53) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x5a) - EILSEQ = syscall.Errno(0x5c) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x6a) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5f) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x5d) - ENOBUFS = syscall.Errno(0x37) - ENODATA = syscall.Errno(0x60) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x61) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x5b) - ENOPOLICY = syscall.Errno(0x67) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x62) - ENOSTR = syscall.Errno(0x63) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTRECOVERABLE = syscall.Errno(0x68) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x66) - EOVERFLOW = syscall.Errno(0x54) - EOWNERDEAD = syscall.Errno(0x69) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x64) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - EPWROFF = syscall.Errno(0x52) - EQFULL = syscall.Errno(0x6a) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHLIBVERS = syscall.Errno(0x57) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIME = syscall.Errno(0x65) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = 
syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "resource busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "operation timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: 
"RPC prog. not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "device power is off", - 83: "device error", - 84: "value too large to be stored in data type", - 85: "bad executable (or shared library)", - 86: "bad CPU type in executable", - 87: "shared library version mismatch", - 88: "malformed Mach-o file", - 89: "operation canceled", - 90: "identifier removed", - 91: "no message of desired type", - 92: "illegal byte sequence", - 93: "attribute not found", - 94: "bad message", - 95: "EMULTIHOP (Reserved)", - 96: "no message available on STREAM", - 97: "ENOLINK (Reserved)", - 98: "no STREAM resources", - 99: "not a STREAM", - 100: "protocol error", - 101: "STREAM ioctl timeout", - 102: "operation not supported on socket", - 103: "policy not found", - 104: "state not recoverable", - 105: "previous owner died", - 106: "interface output queue is full", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "suspended (signal)", - 18: "suspended", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_amd64.go deleted file mode 100644 index 9594f93817a..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_amd64.go +++ /dev/null @@ -1,1576 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,darwin - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1c - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x25 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1e - AF_IPX = 0x17 - AF_ISDN = 0x1c - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x28 - AF_NATM = 0x1f - AF_NDRV = 0x1b - AF_NETBIOS = 0x21 - AF_NS = 0x6 - AF_OSI = 0x7 - AF_PPP = 0x22 - AF_PUP = 0x4 - AF_RESERVED_36 = 0x24 - AF_ROUTE = 0x11 - AF_SIP = 0x18 - AF_SNA = 0xb - AF_SYSTEM = 0x20 - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_UTUN = 0x26 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B9600 = 0x2580 - BIOCFLUSH = 
0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc00c4279 - BIOCGETIF = 0x4020426b - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4010426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044278 - BIOCSETF = 0x80104267 - BIOCSETFNR = 0x8010427e - BIOCSETIF = 0x8020426c - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8010426d - BIOCSSEESENT = 0x80044277 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DBUS = 0xe7 - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_DVB_CI = 0xeb - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HHDLC = 0x79 - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NOFCS = 0xe6 - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPFILTER = 0x74 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPOIB = 0xf2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_ATM_CEMIC = 0xee - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FIBRECHANNEL = 0xea - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER 
= 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_SRX_E2E = 0xe9 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_JUNIPER_VS = 0xe8 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_PPP_WITHDIRECTION = 0xa6 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MATCHING_MAX = 0xf5 - DLT_MATCHING_MIN = 0x68 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPEG_2_TS = 0xf3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_MUX27010 = 0xec - DLT_NETANALYZER = 0xf0 - DLT_NETANALYZER_TRANSPARENT = 0xf1 - DLT_NFC_LLCP = 0xf5 - DLT_NFLOG = 0xef - DLT_NG40 = 0xf4 - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PPP_WITH_DIRECTION = 0xa6 - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DLT_STANAG_5066_D_PDU = 0xed - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_USER0 = 0x93 - DLT_USER1 = 0x94 - DLT_USER10 = 0x9d - DLT_USER11 = 0x9e - DLT_USER12 = 0x9f - DLT_USER13 = 0xa0 - DLT_USER14 = 0xa1 - DLT_USER15 = 0xa2 - DLT_USER2 = 0x95 - DLT_USER3 = 0x96 - DLT_USER4 = 0x97 - DLT_USER5 = 0x98 - DLT_USER6 = 0x99 - DLT_USER7 = 0x9a - DLT_USER8 = 0x9b - DLT_USER9 = 0x9c - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_FS = -0x9 - EVFILT_MACHPORT = -0x8 - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0xe - EVFILT_THREADMARKER = 0xe - EVFILT_TIMER = -0x7 - EVFILT_USER = -0xa - EVFILT_VM = -0xc - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_DISPATCH = 0x80 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG0 = 0x1000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_OOBAND = 0x2000 - EV_POLL = 0x1000 - EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_ADDFILESIGS = 0x3d - F_ADDSIGS = 0x3b - F_ALLOCATEALL = 0x4 - F_ALLOCATECONTIG = 0x2 - F_CHKCLEAN = 0x29 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x43 - F_FINDSIGS = 0x4e - F_FLUSH_DATA = 0x28 - F_FREEZE_FS = 0x35 - F_FULLFSYNC = 0x33 - F_GETCODEDIR = 0x48 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETLKPID = 0x42 - F_GETNOSIGPIPE = 0x4a - F_GETOWN = 0x5 - F_GETPATH = 0x32 - F_GETPATH_MTMINFO = 0x47 - F_GETPROTECTIONCLASS = 0x3f - F_GETPROTECTIONLEVEL = 0x4d - F_GLOBAL_NOCACHE = 0x37 - F_LOG2PHYS = 0x31 - F_LOG2PHYS_EXT = 0x41 - F_NOCACHE = 0x30 - F_NODIRECT = 0x3e - F_OK = 0x0 - F_PATHPKG_CHECK = 0x34 - F_PEOFPOSMODE = 0x3 - F_PREALLOCATE = 0x2a - F_RDADVISE = 0x2c - F_RDAHEAD = 0x2d - F_RDLCK = 0x1 - F_SETBACKINGSTORE = 0x46 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETLKWTIMEOUT = 0xa - F_SETNOSIGPIPE 
= 0x49 - F_SETOWN = 0x6 - F_SETPROTECTIONCLASS = 0x40 - F_SETSIZE = 0x2b - F_SINGLE_WRITER = 0x4c - F_THAW_FS = 0x36 - F_TRANSCODEKEY = 0x4b - F_UNLCK = 0x2 - F_VOLPOSMODE = 0x4 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFF_ALLMULTI = 0x200 - IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_AAL5 = 0x31 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ATM = 0x25 - IFT_BRIDGE = 0xd1 - IFT_CARP = 0xf8 - IFT_CELLULAR = 0xff - IFT_CEPT = 0x13 - IFT_DS3 = 0x1e - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_ETHER = 0x6 - IFT_FAITH = 0x38 - IFT_FDDI = 0xf - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_GIF = 0x37 - IFT_HDH1822 = 0x3 - IFT_HIPPI = 0x2f - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IEEE1394 = 0x90 - IFT_IEEE8023ADLAG = 0x88 - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88026 = 0xa - IFT_L2VLAN = 0x87 - IFT_LAPB = 0x10 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_NSIP = 0x1b - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PDP = 0xff - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - IFT_PKTAP = 0xfe - IFT_PPP = 0x17 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PTPSERIAL = 0x16 - IFT_RS232 = 0x21 - IFT_SDLC = 0x11 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_STARLAN = 0xb - IFT_STF = 0x39 - IFT_T1 = 0x12 - IFT_ULTRA = 0x1d - IFT_V35 = 0x2d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LINKLOCALNETNUM = 0xa9fe0000 - IN_LOOPBACKNET = 0x7f - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0xfe - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - 
IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEP = 0x21 - IPPROTO_SRPC = 0x5a - IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_2292DSTOPTS = 0x17 - IPV6_2292HOPLIMIT = 0x14 - IPV6_2292HOPOPTS = 0x16 - IPV6_2292NEXTHOP = 0x15 - IPV6_2292PKTINFO = 0x13 - IPV6_2292PKTOPTIONS = 0x19 - IPV6_2292RTHDR = 0x18 - IPV6_BINDV6ONLY = 0x1b - IPV6_BOUND_IF = 0x7d - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x3c - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL = 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXOPTHDR = 0x800 - IPV6_MAXPACKET = 0xffff - IPV6_MAX_GROUP_SRC_FILTER = 0x200 - IPV6_MAX_MEMBERSHIPS = 0xfff - IPV6_MAX_SOCK_SRC_FILTER = 0x80 - IPV6_MIN_MEMBERSHIPS = 0x1f - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVTCLASS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x24 - IPV6_UNICAST_HOPS = 0x4 - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_ADD_SOURCE_MEMBERSHIP = 0x46 - IP_BLOCK_SOURCE = 0x48 - IP_BOUND_IF = 0x19 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_DROP_SOURCE_MEMBERSHIP = 0x47 - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW_ADD = 0x28 - IP_FW_DEL = 0x29 - IP_FW_FLUSH = 0x2a - IP_FW_GET = 0x2c - IP_FW_RESETLOG = 0x2d - IP_FW_ZERO = 0x2b - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_GROUP_SRC_FILTER = 0x200 - IP_MAX_MEMBERSHIPS = 0xfff - IP_MAX_SOCK_MUTE_FILTER = 0x80 - IP_MAX_SOCK_SRC_FILTER = 0x80 - IP_MF = 0x2000 - IP_MIN_MEMBERSHIPS = 0x1f - IP_MSFILTER = 0x4a - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_IFINDEX = 0x42 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 
0xa - IP_MULTICAST_VIF = 0xe - IP_NAT__XXX = 0x37 - IP_OFFMASK = 0x1fff - IP_OLD_FW_ADD = 0x32 - IP_OLD_FW_DEL = 0x33 - IP_OLD_FW_FLUSH = 0x34 - IP_OLD_FW_GET = 0x36 - IP_OLD_FW_RESETLOG = 0x38 - IP_OLD_FW_ZERO = 0x35 - IP_OPTIONS = 0x1 - IP_PKTINFO = 0x1a - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVPKTINFO = 0x1a - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x18 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_STRIPHDR = 0x17 - IP_TOS = 0x3 - IP_TRAFFIC_MGT_BACKGROUND = 0x41 - IP_TTL = 0x4 - IP_UNBLOCK_SOURCE = 0x49 - ISIG = 0x80 - ISTRIP = 0x20 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_CAN_REUSE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_FREE_REUSABLE = 0x7 - MADV_FREE_REUSE = 0x8 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_WILLNEED = 0x3 - MADV_ZERO_WIRED_PAGES = 0x6 - MAP_ANON = 0x1000 - MAP_COPY = 0x2 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_JIT = 0x800 - MAP_NOCACHE = 0x400 - MAP_NOEXTEND = 0x100 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_RESERVED0080 = 0x80 - MAP_SHARED = 0x1 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - MSG_EOR = 0x8 - MSG_FLUSH = 0x400 - MSG_HAVEMORE = 0x2000 - MSG_HOLD = 0x800 - MSG_NEEDSA = 0x10000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_RCVMORE = 0x4000 - MSG_SEND = 0x1000 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MSG_WAITSTREAM = 0x200 - MS_ASYNC = 0x1 - MS_DEACTIVATE = 0x8 - MS_INVALIDATE = 0x2 - MS_KILLPAGES = 0x4 - MS_SYNC = 0x10 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_DUMP2 = 0x7 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_IFLIST2 = 0x6 - NET_RT_MAXID = 0xa - NET_RT_STAT = 0x4 - NET_RT_TRASH = 0x5 - NOFLSH = 0x80000000 - NOTE_ABSOLUTE = 0x8 - NOTE_ATTRIB = 0x8 - NOTE_BACKGROUND = 0x40 - NOTE_CHILD = 0x4 - NOTE_CRITICAL = 0x20 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXITSTATUS = 0x4000000 - NOTE_EXIT_CSERROR = 0x40000 - NOTE_EXIT_DECRYPTFAIL = 0x10000 - NOTE_EXIT_DETAIL = 0x2000000 - NOTE_EXIT_DETAIL_MASK = 0x70000 - NOTE_EXIT_MEMORY = 0x20000 - NOTE_EXIT_REPARENTED = 0x80000 - NOTE_EXTEND = 0x4 - NOTE_FFAND = 0x40000000 - NOTE_FFCOPY = 0xc0000000 - NOTE_FFCTRLMASK = 0xc0000000 - NOTE_FFLAGSMASK = 0xffffff - NOTE_FFNOP = 0x0 - NOTE_FFOR = 0x80000000 - NOTE_FORK = 0x40000000 - NOTE_LEEWAY = 0x10 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_NONE = 0x80 - NOTE_NSECONDS = 0x4 - NOTE_PCTRLMASK = -0x100000 - NOTE_PDATAMASK = 0xfffff - NOTE_REAP = 0x10000000 - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_SECONDS = 0x1 - NOTE_SIGNAL = 0x8000000 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRIGGER = 0x1000000 - NOTE_USECONDS = 0x2 - NOTE_VM_ERROR = 0x10000000 - NOTE_VM_PRESSURE = 0x80000000 - NOTE_VM_PRESSURE_SUDDEN_TERMINATE = 0x20000000 - NOTE_VM_PRESSURE_TERMINATE = 0x40000000 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - OFDEL = 0x20000 - OFILL = 0x80 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_ALERT = 0x20000000 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x1000000 - O_CREAT = 0x200 - O_DIRECTORY = 0x100000 - O_DP_GETRAWENCRYPTED = 0x1 - O_DSYNC = 0x400000 - O_EVTONLY = 0x8000 - O_EXCL = 0x800 - O_EXLOCK = 
0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x20000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_POPUP = 0x80000000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYMLINK = 0x200000 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PT_ATTACH = 0xa - PT_ATTACHEXC = 0xe - PT_CONTINUE = 0x7 - PT_DENY_ATTACH = 0x1f - PT_DETACH = 0xb - PT_FIRSTMACH = 0x20 - PT_FORCEQUOTA = 0x1e - PT_KILL = 0x8 - PT_READ_D = 0x2 - PT_READ_I = 0x1 - PT_READ_U = 0x3 - PT_SIGEXC = 0xc - PT_STEP = 0x9 - PT_THUPDATE = 0xd - PT_TRACE_ME = 0x0 - PT_WRITE_D = 0x5 - PT_WRITE_I = 0x4 - PT_WRITE_U = 0x6 - RLIMIT_AS = 0x5 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_CPU_USAGE_MONITOR = 0x2 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x8 - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_CLONING = 0x100 - RTF_CONDEMNED = 0x2000000 - RTF_DELCLONE = 0x80 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_IFREF = 0x4000000 - RTF_IFSCOPE = 0x1000000 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 0x20 - RTF_MULTICAST = 0x800000 - RTF_NOIFREF = 0x2000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_PROXY = 0x8000000 - RTF_REJECT = 0x8 - RTF_ROUTER = 0x10000000 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_WASCLONED = 0x20000 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_GET2 = 0x14 - RTM_IFINFO = 0xe - RTM_IFINFO2 = 0x12 - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_NEWMADDR2 = 0x13 - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SCM_TIMESTAMP_MONOTONIC = 0x4 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCAIFADDR = 0x8040691a - SIOCARPIPLL = 0xc0206928 - SIOCATMARK = 0x40047307 - SIOCAUTOADDR = 0xc0206926 - SIOCAUTONETMASK = 0x80206927 - SIOCDELMULTI = 0x80206932 - SIOCDIFADDR = 0x80206919 - SIOCDIFPHYADDR = 0x80206941 - SIOCGDRVSPEC = 0xc028697b - SIOCGETVLAN = 0xc020697f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFALTMTU = 0xc0206948 - SIOCGIFASYNCMAP = 0xc020697c - SIOCGIFBOND = 0xc0206947 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020695b - SIOCGIFCONF = 0xc00c6924 - SIOCGIFDEVMTU = 0xc0206944 - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFKPI = 0xc0206987 - SIOCGIFMAC = 0xc0206982 - SIOCGIFMEDIA = 0xc02c6938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206940 - 
SIOCGIFPHYS = 0xc0206935 - SIOCGIFPSRCADDR = 0xc020693f - SIOCGIFSTATUS = 0xc331693d - SIOCGIFVLAN = 0xc020697f - SIOCGIFWAKEFLAGS = 0xc0206988 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCIFCREATE = 0xc0206978 - SIOCIFCREATE2 = 0xc020697a - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc0106981 - SIOCRSLVMULTI = 0xc010693b - SIOCSDRVSPEC = 0x8028697b - SIOCSETVLAN = 0x8020697e - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFALTMTU = 0x80206945 - SIOCSIFASYNCMAP = 0x8020697d - SIOCSIFBOND = 0x80206946 - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020695a - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFKPI = 0x80206986 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMAC = 0x80206983 - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x80206934 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x8040693e - SIOCSIFPHYS = 0x80206936 - SIOCSIFVLAN = 0x8020697e - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_DONTTRUNC = 0x2000 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LABEL = 0x1010 - SO_LINGER = 0x80 - SO_LINGER_SEC = 0x1080 - SO_NKE = 0x1021 - SO_NOADDRERR = 0x1023 - SO_NOSIGPIPE = 0x1022 - SO_NOTIFYCONFLICT = 0x1026 - SO_NP_EXTENSIONS = 0x1083 - SO_NREAD = 0x1020 - SO_NUMRCVPKT = 0x1112 - SO_NWRITE = 0x1024 - SO_OOBINLINE = 0x100 - SO_PEERLABEL = 0x1011 - SO_RANDOMPORT = 0x1082 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_REUSESHAREUID = 0x1025 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TIMESTAMP_MONOTONIC = 0x800 - SO_TYPE = 0x1008 - SO_UPCALLCLOSEWAIT = 0x1027 - SO_USELOOPBACK = 0x40 - SO_WANTMORE = 0x4000 - SO_WANTOOBFLAG = 0x8000 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IFWHT = 0xe000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISTXT = 0x200 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CONNECTIONTIMEOUT = 0x20 - TCP_ENABLE_ECN = 0x104 - TCP_KEEPALIVE = 0x10 - TCP_KEEPCNT = 0x102 - TCP_KEEPINTVL = 0x101 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x4 - TCP_MAX_WINSHIFT = 0xe - TCP_MINMSS = 0xd8 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_NOTSENT_LOWAT = 0x201 - TCP_RXT_CONNDROPTIME = 0x80 - TCP_RXT_FINDROP = 0x100 - TCP_SENDMOREACKS = 0x103 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x40107458 - TIOCDRAIN = 0x2000745e - TIOCDSIMICROCODE = 0x20007455 - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x40487413 - TIOCGETD = 0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGWINSZ = 0x40087468 - TIOCIXOFF = 0x20007480 - TIOCIXON = 0x20007481 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 
0x4004746a - TIOCMODG = 0x40047403 - TIOCMODS = 0x80047404 - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTYGNAME = 0x40807453 - TIOCPTYGRANT = 0x20007454 - TIOCPTYUNLK = 0x20007452 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCONS = 0x20007463 - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x80487414 - TIOCSETAF = 0x80487416 - TIOCSETAW = 0x80487415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2000745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40107459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VT0 = 0x0 - VT1 = 0x10000 - VTDLY = 0x10000 - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x10 - WCOREFLAG = 0x80 - WEXITED = 0x4 - WNOHANG = 0x1 - WNOWAIT = 0x20 - WORDSIZE = 0x40 - WSTOPPED = 0x8 - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADARCH = syscall.Errno(0x56) - EBADEXEC = syscall.Errno(0x55) - EBADF = syscall.Errno(0x9) - EBADMACHO = syscall.Errno(0x58) - EBADMSG = syscall.Errno(0x5e) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x59) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDEVERR = syscall.Errno(0x53) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x5a) - EILSEQ = syscall.Errno(0x5c) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x6a) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5f) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x5d) - ENOBUFS = syscall.Errno(0x37) - ENODATA = syscall.Errno(0x60) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = 
syscall.Errno(0x61) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x5b) - ENOPOLICY = syscall.Errno(0x67) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x62) - ENOSTR = syscall.Errno(0x63) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTRECOVERABLE = syscall.Errno(0x68) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x66) - EOVERFLOW = syscall.Errno(0x54) - EOWNERDEAD = syscall.Errno(0x69) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x64) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - EPWROFF = syscall.Errno(0x52) - EQFULL = syscall.Errno(0x6a) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHLIBVERS = syscall.Errno(0x57) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIME = syscall.Errno(0x65) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "resource busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left 
on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "operation timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "device power is off", - 83: "device error", - 84: "value too large to be stored in data type", - 85: "bad executable (or shared library)", - 86: "bad CPU type in executable", - 87: "shared library version mismatch", - 88: "malformed Mach-o file", - 89: "operation canceled", - 90: "identifier removed", - 91: "no message of desired type", - 92: "illegal byte sequence", - 93: "attribute not found", - 94: "bad message", - 95: "EMULTIHOP (Reserved)", - 96: "no message available on STREAM", - 97: "ENOLINK (Reserved)", - 98: "no STREAM resources", - 99: "not a STREAM", - 100: "protocol error", - 101: "STREAM ioctl timeout", - 102: "operation not supported on socket", - 103: "policy not found", - 104: "state not recoverable", - 105: "previous owner died", - 106: "interface output queue is full", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "suspended (signal)", - 18: "suspended", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_arm.go 
b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_arm.go deleted file mode 100644 index a410e88edde..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_arm.go +++ /dev/null @@ -1,1293 +0,0 @@ -// mkerrors.sh -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- _const.go - -// +build arm,darwin - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1c - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x25 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1e - AF_IPX = 0x17 - AF_ISDN = 0x1c - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x28 - AF_NATM = 0x1f - AF_NDRV = 0x1b - AF_NETBIOS = 0x21 - AF_NS = 0x6 - AF_OSI = 0x7 - AF_PPP = 0x22 - AF_PUP = 0x4 - AF_RESERVED_36 = 0x24 - AF_ROUTE = 0x11 - AF_SIP = 0x18 - AF_SNA = 0xb - AF_SYSTEM = 0x20 - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_UTUN = 0x26 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B9600 = 0x2580 - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc00c4279 - BIOCGETIF = 0x4020426b - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4010426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044278 - BIOCSETF = 0x80104267 - BIOCSETIF = 0x8020426c - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8010426d - BIOCSSEESENT = 0x80044277 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AX25 = 0x3 - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_C_HDLC = 0x68 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_FDDI = 0xa - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_NULL = 0x0 - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPP = 
0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_SERIAL = 0x32 - DLT_PRONET = 0x4 - DLT_RAW = 0xc - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_FS = -0x9 - EVFILT_MACHPORT = -0x8 - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0xe - EVFILT_THREADMARKER = 0xe - EVFILT_TIMER = -0x7 - EVFILT_USER = -0xa - EVFILT_VM = -0xc - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_DISPATCH = 0x80 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG0 = 0x1000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_OOBAND = 0x2000 - EV_POLL = 0x1000 - EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_ADDFILESIGS = 0x3d - F_ADDSIGS = 0x3b - F_ALLOCATEALL = 0x4 - F_ALLOCATECONTIG = 0x2 - F_CHKCLEAN = 0x29 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x43 - F_FINDSIGS = 0x4e - F_FLUSH_DATA = 0x28 - F_FREEZE_FS = 0x35 - F_FULLFSYNC = 0x33 - F_GETCODEDIR = 0x48 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETLKPID = 0x42 - F_GETNOSIGPIPE = 0x4a - F_GETOWN = 0x5 - F_GETPATH = 0x32 - F_GETPATH_MTMINFO = 0x47 - F_GETPROTECTIONCLASS = 0x3f - F_GETPROTECTIONLEVEL = 0x4d - F_GLOBAL_NOCACHE = 0x37 - F_LOG2PHYS = 0x31 - F_LOG2PHYS_EXT = 0x41 - F_NOCACHE = 0x30 - F_NODIRECT = 0x3e - F_OK = 0x0 - F_PATHPKG_CHECK = 0x34 - F_PEOFPOSMODE = 0x3 - F_PREALLOCATE = 0x2a - F_RDADVISE = 0x2c - F_RDAHEAD = 0x2d - F_RDLCK = 0x1 - F_SETBACKINGSTORE = 0x46 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETLKWTIMEOUT = 0xa - F_SETNOSIGPIPE = 0x49 - F_SETOWN = 0x6 - F_SETPROTECTIONCLASS = 0x40 - F_SETSIZE = 0x2b - F_SINGLE_WRITER = 0x4c - F_THAW_FS = 0x36 - F_TRANSCODEKEY = 0x4b - F_UNLCK = 0x2 - F_VOLPOSMODE = 0x4 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFF_ALLMULTI = 0x200 - IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_AAL5 = 0x31 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ATM = 0x25 - IFT_BRIDGE = 0xd1 - IFT_CARP = 0xf8 - IFT_CELLULAR = 0xff - IFT_CEPT = 0x13 - IFT_DS3 = 0x1e - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_ETHER = 0x6 - IFT_FAITH = 0x38 - IFT_FDDI = 0xf - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_GIF = 0x37 - IFT_HDH1822 = 0x3 - IFT_HIPPI = 0x2f - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IEEE1394 = 0x90 - IFT_IEEE8023ADLAG = 0x88 - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88026 = 0xa - IFT_L2VLAN = 0x87 - IFT_LAPB = 0x10 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_NSIP = 0x1b - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PDP = 0xff - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - IFT_PPP = 0x17 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PTPSERIAL = 0x16 - IFT_RS232 = 
0x21 - IFT_SDLC = 0x11 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_STARLAN = 0xb - IFT_STF = 0x39 - IFT_T1 = 0x12 - IFT_ULTRA = 0x1d - IFT_V35 = 0x2d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LINKLOCALNETNUM = 0xa9fe0000 - IN_LOOPBACKNET = 0x7f - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0xfe - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEP = 0x21 - IPPROTO_SRPC = 0x5a - IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_2292DSTOPTS = 0x17 - IPV6_2292HOPLIMIT = 0x14 - IPV6_2292HOPOPTS = 0x16 - IPV6_2292NEXTHOP = 0x15 - IPV6_2292PKTINFO = 0x13 - IPV6_2292PKTOPTIONS = 0x19 - IPV6_2292RTHDR = 0x18 - IPV6_BINDV6ONLY = 0x1b - IPV6_BOUND_IF = 0x7d - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - 
IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL = 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXOPTHDR = 0x800 - IPV6_MAXPACKET = 0xffff - IPV6_MAX_GROUP_SRC_FILTER = 0x200 - IPV6_MAX_MEMBERSHIPS = 0xfff - IPV6_MAX_SOCK_SRC_FILTER = 0x80 - IPV6_MIN_MEMBERSHIPS = 0x1f - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVTCLASS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x24 - IPV6_UNICAST_HOPS = 0x4 - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_ADD_SOURCE_MEMBERSHIP = 0x46 - IP_BLOCK_SOURCE = 0x48 - IP_BOUND_IF = 0x19 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_DROP_SOURCE_MEMBERSHIP = 0x47 - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW_ADD = 0x28 - IP_FW_DEL = 0x29 - IP_FW_FLUSH = 0x2a - IP_FW_GET = 0x2c - IP_FW_RESETLOG = 0x2d - IP_FW_ZERO = 0x2b - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_GROUP_SRC_FILTER = 0x200 - IP_MAX_MEMBERSHIPS = 0xfff - IP_MAX_SOCK_MUTE_FILTER = 0x80 - IP_MAX_SOCK_SRC_FILTER = 0x80 - IP_MF = 0x2000 - IP_MIN_MEMBERSHIPS = 0x1f - IP_MSFILTER = 0x4a - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_IFINDEX = 0x42 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_MULTICAST_VIF = 0xe - IP_NAT__XXX = 0x37 - IP_OFFMASK = 0x1fff - IP_OLD_FW_ADD = 0x32 - IP_OLD_FW_DEL = 0x33 - IP_OLD_FW_FLUSH = 0x34 - IP_OLD_FW_GET = 0x36 - IP_OLD_FW_RESETLOG = 0x38 - IP_OLD_FW_ZERO = 0x35 - IP_OPTIONS = 0x1 - IP_PKTINFO = 0x1a - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVPKTINFO = 0x1a - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x18 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_STRIPHDR = 0x17 - IP_TOS = 0x3 - IP_TRAFFIC_MGT_BACKGROUND = 0x41 - IP_TTL = 0x4 - IP_UNBLOCK_SOURCE = 0x49 - ISIG = 0x80 - ISTRIP = 0x20 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_CAN_REUSE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_FREE_REUSABLE = 0x7 - MADV_FREE_REUSE = 0x8 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_WILLNEED = 0x3 - MADV_ZERO_WIRED_PAGES = 0x6 - MAP_ANON = 0x1000 - MAP_COPY = 0x2 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_JIT = 0x800 - MAP_NOCACHE = 0x400 - MAP_NOEXTEND = 0x100 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_RESERVED0080 = 0x80 - MAP_SHARED = 0x1 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - MSG_EOR = 0x8 - MSG_FLUSH = 0x400 - MSG_HAVEMORE = 0x2000 - MSG_HOLD = 0x800 - MSG_NEEDSA = 0x10000 - MSG_OOB = 0x1 - MSG_PEEK = 
0x2 - MSG_RCVMORE = 0x4000 - MSG_SEND = 0x1000 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MSG_WAITSTREAM = 0x200 - MS_ASYNC = 0x1 - MS_DEACTIVATE = 0x8 - MS_INVALIDATE = 0x2 - MS_KILLPAGES = 0x4 - MS_SYNC = 0x10 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_DUMP2 = 0x7 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_IFLIST2 = 0x6 - NET_RT_MAXID = 0xa - NET_RT_STAT = 0x4 - NET_RT_TRASH = 0x5 - NOFLSH = 0x80000000 - NOTE_ABSOLUTE = 0x8 - NOTE_ATTRIB = 0x8 - NOTE_BACKGROUND = 0x40 - NOTE_CHILD = 0x4 - NOTE_CRITICAL = 0x20 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXITSTATUS = 0x4000000 - NOTE_EXIT_CSERROR = 0x40000 - NOTE_EXIT_DECRYPTFAIL = 0x10000 - NOTE_EXIT_DETAIL = 0x2000000 - NOTE_EXIT_DETAIL_MASK = 0x70000 - NOTE_EXIT_MEMORY = 0x20000 - NOTE_EXIT_REPARENTED = 0x80000 - NOTE_EXTEND = 0x4 - NOTE_FFAND = 0x40000000 - NOTE_FFCOPY = 0xc0000000 - NOTE_FFCTRLMASK = 0xc0000000 - NOTE_FFLAGSMASK = 0xffffff - NOTE_FFNOP = 0x0 - NOTE_FFOR = 0x80000000 - NOTE_FORK = 0x40000000 - NOTE_LEEWAY = 0x10 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_NONE = 0x80 - NOTE_NSECONDS = 0x4 - NOTE_PCTRLMASK = -0x100000 - NOTE_PDATAMASK = 0xfffff - NOTE_REAP = 0x10000000 - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_SECONDS = 0x1 - NOTE_SIGNAL = 0x8000000 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRIGGER = 0x1000000 - NOTE_USECONDS = 0x2 - NOTE_VM_ERROR = 0x10000000 - NOTE_VM_PRESSURE = 0x80000000 - NOTE_VM_PRESSURE_SUDDEN_TERMINATE = 0x20000000 - NOTE_VM_PRESSURE_TERMINATE = 0x40000000 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - OFDEL = 0x20000 - OFILL = 0x80 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_ALERT = 0x20000000 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x1000000 - O_CREAT = 0x200 - O_DIRECTORY = 0x100000 - O_DP_GETRAWENCRYPTED = 0x1 - O_DSYNC = 0x400000 - O_EVTONLY = 0x8000 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x20000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_POPUP = 0x80000000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYMLINK = 0x200000 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PT_ATTACH = 0xa - PT_ATTACHEXC = 0xe - PT_CONTINUE = 0x7 - PT_DENY_ATTACH = 0x1f - PT_DETACH = 0xb - PT_FIRSTMACH = 0x20 - PT_FORCEQUOTA = 0x1e - PT_KILL = 0x8 - PT_READ_D = 0x2 - PT_READ_I = 0x1 - PT_READ_U = 0x3 - PT_SIGEXC = 0xc - PT_STEP = 0x9 - PT_THUPDATE = 0xd - PT_TRACE_ME = 0x0 - PT_WRITE_D = 0x5 - PT_WRITE_I = 0x4 - PT_WRITE_U = 0x6 - RLIMIT_AS = 0x5 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_CPU_USAGE_MONITOR = 0x2 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x8 - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_CLONING = 0x100 - RTF_CONDEMNED = 0x2000000 - RTF_DELCLONE = 0x80 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_IFREF = 0x4000000 - RTF_IFSCOPE = 0x1000000 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 
0x20 - RTF_MULTICAST = 0x800000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_PROXY = 0x8000000 - RTF_REJECT = 0x8 - RTF_ROUTER = 0x10000000 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_WASCLONED = 0x20000 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_GET2 = 0x14 - RTM_IFINFO = 0xe - RTM_IFINFO2 = 0x12 - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_NEWMADDR2 = 0x13 - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SCM_TIMESTAMP_MONOTONIC = 0x4 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCAIFADDR = 0x8040691a - SIOCARPIPLL = 0xc0206928 - SIOCATMARK = 0x40047307 - SIOCAUTOADDR = 0xc0206926 - SIOCAUTONETMASK = 0x80206927 - SIOCDELMULTI = 0x80206932 - SIOCDIFADDR = 0x80206919 - SIOCDIFPHYADDR = 0x80206941 - SIOCGDRVSPEC = 0xc028697b - SIOCGETVLAN = 0xc020697f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFALTMTU = 0xc0206948 - SIOCGIFASYNCMAP = 0xc020697c - SIOCGIFBOND = 0xc0206947 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020695b - SIOCGIFCONF = 0xc00c6924 - SIOCGIFDEVMTU = 0xc0206944 - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFKPI = 0xc0206987 - SIOCGIFMAC = 0xc0206982 - SIOCGIFMEDIA = 0xc02c6938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206940 - SIOCGIFPHYS = 0xc0206935 - SIOCGIFPSRCADDR = 0xc020693f - SIOCGIFSTATUS = 0xc331693d - SIOCGIFVLAN = 0xc020697f - SIOCGIFWAKEFLAGS = 0xc0206988 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCIFCREATE = 0xc0206978 - SIOCIFCREATE2 = 0xc020697a - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc0106981 - SIOCRSLVMULTI = 0xc010693b - SIOCSDRVSPEC = 0x8028697b - SIOCSETVLAN = 0x8020697e - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFALTMTU = 0x80206945 - SIOCSIFASYNCMAP = 0x8020697d - SIOCSIFBOND = 0x80206946 - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020695a - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFKPI = 0x80206986 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMAC = 0x80206983 - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x80206934 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x8040693e - SIOCSIFPHYS = 0x80206936 - SIOCSIFVLAN = 0x8020697e - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_DONTTRUNC = 0x2000 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LABEL = 0x1010 - SO_LINGER = 0x80 - SO_LINGER_SEC = 0x1080 - SO_NKE = 0x1021 - SO_NOADDRERR = 0x1023 - SO_NOSIGPIPE = 0x1022 - SO_NOTIFYCONFLICT = 0x1026 - SO_NP_EXTENSIONS = 0x1083 - SO_NREAD = 0x1020 - SO_NUMRCVPKT = 0x1112 - SO_NWRITE = 0x1024 - SO_OOBINLINE = 0x100 - SO_PEERLABEL = 0x1011 - SO_RANDOMPORT = 0x1082 - SO_RCVBUF = 
0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_REUSESHAREUID = 0x1025 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TIMESTAMP_MONOTONIC = 0x800 - SO_TYPE = 0x1008 - SO_UPCALLCLOSEWAIT = 0x1027 - SO_USELOOPBACK = 0x40 - SO_WANTMORE = 0x4000 - SO_WANTOOBFLAG = 0x8000 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IFWHT = 0xe000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISTXT = 0x200 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CONNECTIONTIMEOUT = 0x20 - TCP_ENABLE_ECN = 0x104 - TCP_KEEPALIVE = 0x10 - TCP_KEEPCNT = 0x102 - TCP_KEEPINTVL = 0x101 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x4 - TCP_MAX_WINSHIFT = 0xe - TCP_MINMSS = 0xd8 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_NOTSENT_LOWAT = 0x201 - TCP_RXT_CONNDROPTIME = 0x80 - TCP_RXT_FINDROP = 0x100 - TCP_SENDMOREACKS = 0x103 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x40107458 - TIOCDRAIN = 0x2000745e - TIOCDSIMICROCODE = 0x20007455 - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x40487413 - TIOCGETD = 0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGWINSZ = 0x40087468 - TIOCIXOFF = 0x20007480 - TIOCIXON = 0x20007481 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 0x4004746a - TIOCMODG = 0x40047403 - TIOCMODS = 0x80047404 - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTYGNAME = 0x40807453 - TIOCPTYGRANT = 0x20007454 - TIOCPTYUNLK = 0x20007452 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCONS = 0x20007463 - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x80487414 - TIOCSETAF = 0x80487416 - TIOCSETAW = 0x80487415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2000745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40107459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VT0 = 0x0 - VT1 = 0x10000 - VTDLY = 0x10000 - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x10 - WCOREFLAG = 0x80 - WEXITED = 0x4 - WNOHANG = 0x1 - WNOWAIT = 0x20 - WORDSIZE = 0x40 - WSTOPPED = 0x8 - WUNTRACED = 0x2 -) - -// Errors -const ( - 
E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADARCH = syscall.Errno(0x56) - EBADEXEC = syscall.Errno(0x55) - EBADF = syscall.Errno(0x9) - EBADMACHO = syscall.Errno(0x58) - EBADMSG = syscall.Errno(0x5e) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x59) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDEVERR = syscall.Errno(0x53) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x5a) - EILSEQ = syscall.Errno(0x5c) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x6a) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5f) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x5d) - ENOBUFS = syscall.Errno(0x37) - ENODATA = syscall.Errno(0x60) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x61) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x5b) - ENOPOLICY = syscall.Errno(0x67) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x62) - ENOSTR = syscall.Errno(0x63) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTRECOVERABLE = syscall.Errno(0x68) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x66) - EOVERFLOW = syscall.Errno(0x54) - EOWNERDEAD = syscall.Errno(0x69) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x64) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - EPWROFF = syscall.Errno(0x52) - EQFULL = syscall.Errno(0x6a) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHLIBVERS = syscall.Errno(0x57) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIME = syscall.Errno(0x65) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - 
SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_arm64.go deleted file mode 100644 index 3189c6b3459..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_darwin_arm64.go +++ /dev/null @@ -1,1576 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm64,darwin - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1c - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x25 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1e - AF_IPX = 0x17 - AF_ISDN = 0x1c - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x28 - AF_NATM = 0x1f - AF_NDRV = 0x1b - AF_NETBIOS = 0x21 - AF_NS = 0x6 - AF_OSI = 0x7 - AF_PPP = 0x22 - AF_PUP = 0x4 - AF_RESERVED_36 = 0x24 - AF_ROUTE = 0x11 - AF_SIP = 0x18 - AF_SNA = 0xb - AF_SYSTEM = 0x20 - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_UTUN = 0x26 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B9600 = 0x2580 - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc00c4279 - BIOCGETIF = 0x4020426b - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4010426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044278 - BIOCSETF = 0x80104267 - BIOCSETFNR = 0x8010427e - BIOCSETIF = 0x8020426c - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8010426d - BIOCSSEESENT = 0x80044277 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 
- BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DBUS = 0xe7 - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_DVB_CI = 0xeb - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HHDLC = 0x79 - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NOFCS = 0xe6 - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPFILTER = 0x74 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPOIB = 0xf2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_ATM_CEMIC = 0xee - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FIBRECHANNEL = 0xea - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_SRX_E2E = 0xe9 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_JUNIPER_VS = 0xe8 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_PPP_WITHDIRECTION = 0xa6 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MATCHING_MAX = 0xf5 - DLT_MATCHING_MIN = 0x68 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPEG_2_TS = 0xf3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_MUX27010 = 0xec - DLT_NETANALYZER = 0xf0 - DLT_NETANALYZER_TRANSPARENT = 0xf1 - DLT_NFC_LLCP = 0xf5 - DLT_NFLOG = 0xef - DLT_NG40 = 0xf4 - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - 
DLT_PFSYNC = 0x12 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PPP_WITH_DIRECTION = 0xa6 - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DLT_STANAG_5066_D_PDU = 0xed - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_USER0 = 0x93 - DLT_USER1 = 0x94 - DLT_USER10 = 0x9d - DLT_USER11 = 0x9e - DLT_USER12 = 0x9f - DLT_USER13 = 0xa0 - DLT_USER14 = 0xa1 - DLT_USER15 = 0xa2 - DLT_USER2 = 0x95 - DLT_USER3 = 0x96 - DLT_USER4 = 0x97 - DLT_USER5 = 0x98 - DLT_USER6 = 0x99 - DLT_USER7 = 0x9a - DLT_USER8 = 0x9b - DLT_USER9 = 0x9c - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_FS = -0x9 - EVFILT_MACHPORT = -0x8 - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0xe - EVFILT_THREADMARKER = 0xe - EVFILT_TIMER = -0x7 - EVFILT_USER = -0xa - EVFILT_VM = -0xc - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_DISPATCH = 0x80 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG0 = 0x1000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_OOBAND = 0x2000 - EV_POLL = 0x1000 - EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_ADDFILESIGS = 0x3d - F_ADDSIGS = 0x3b - F_ALLOCATEALL = 0x4 - F_ALLOCATECONTIG = 0x2 - F_CHKCLEAN = 0x29 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x43 - F_FINDSIGS = 0x4e - F_FLUSH_DATA = 0x28 - F_FREEZE_FS = 0x35 - F_FULLFSYNC = 0x33 - F_GETCODEDIR = 0x48 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETLKPID = 0x42 - F_GETNOSIGPIPE = 0x4a - F_GETOWN = 0x5 - F_GETPATH = 0x32 - F_GETPATH_MTMINFO = 0x47 - F_GETPROTECTIONCLASS = 0x3f - F_GETPROTECTIONLEVEL = 0x4d - F_GLOBAL_NOCACHE = 0x37 - F_LOG2PHYS = 0x31 - F_LOG2PHYS_EXT = 0x41 - F_NOCACHE = 0x30 - F_NODIRECT = 0x3e - F_OK = 0x0 - F_PATHPKG_CHECK = 0x34 - F_PEOFPOSMODE = 0x3 - F_PREALLOCATE = 0x2a - F_RDADVISE = 0x2c - F_RDAHEAD = 0x2d - F_RDLCK = 0x1 - F_SETBACKINGSTORE = 0x46 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETLKWTIMEOUT = 0xa - F_SETNOSIGPIPE = 0x49 - F_SETOWN = 0x6 - F_SETPROTECTIONCLASS = 0x40 - F_SETSIZE = 0x2b - F_SINGLE_WRITER = 0x4c - F_THAW_FS = 0x36 - F_TRANSCODEKEY = 0x4b - F_UNLCK = 0x2 - F_VOLPOSMODE = 0x4 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFF_ALLMULTI = 0x200 - IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_AAL5 = 0x31 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ATM = 0x25 - IFT_BRIDGE = 0xd1 - IFT_CARP = 0xf8 - IFT_CELLULAR = 0xff - IFT_CEPT = 0x13 - 
IFT_DS3 = 0x1e - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_ETHER = 0x6 - IFT_FAITH = 0x38 - IFT_FDDI = 0xf - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_GIF = 0x37 - IFT_HDH1822 = 0x3 - IFT_HIPPI = 0x2f - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IEEE1394 = 0x90 - IFT_IEEE8023ADLAG = 0x88 - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88026 = 0xa - IFT_L2VLAN = 0x87 - IFT_LAPB = 0x10 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_NSIP = 0x1b - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PDP = 0xff - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - IFT_PKTAP = 0xfe - IFT_PPP = 0x17 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PTPSERIAL = 0x16 - IFT_RS232 = 0x21 - IFT_SDLC = 0x11 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_STARLAN = 0xb - IFT_STF = 0x39 - IFT_T1 = 0x12 - IFT_ULTRA = 0x1d - IFT_V35 = 0x2d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LINKLOCALNETNUM = 0xa9fe0000 - IN_LOOPBACKNET = 0x7f - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0xfe - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - 
IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEP = 0x21 - IPPROTO_SRPC = 0x5a - IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_2292DSTOPTS = 0x17 - IPV6_2292HOPLIMIT = 0x14 - IPV6_2292HOPOPTS = 0x16 - IPV6_2292NEXTHOP = 0x15 - IPV6_2292PKTINFO = 0x13 - IPV6_2292PKTOPTIONS = 0x19 - IPV6_2292RTHDR = 0x18 - IPV6_BINDV6ONLY = 0x1b - IPV6_BOUND_IF = 0x7d - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x3c - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL = 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXOPTHDR = 0x800 - IPV6_MAXPACKET = 0xffff - IPV6_MAX_GROUP_SRC_FILTER = 0x200 - IPV6_MAX_MEMBERSHIPS = 0xfff - IPV6_MAX_SOCK_SRC_FILTER = 0x80 - IPV6_MIN_MEMBERSHIPS = 0x1f - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVTCLASS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x24 - IPV6_UNICAST_HOPS = 0x4 - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_ADD_SOURCE_MEMBERSHIP = 0x46 - IP_BLOCK_SOURCE = 0x48 - IP_BOUND_IF = 0x19 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_DROP_SOURCE_MEMBERSHIP = 0x47 - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW_ADD = 0x28 - IP_FW_DEL = 0x29 - IP_FW_FLUSH = 0x2a - IP_FW_GET = 0x2c - IP_FW_RESETLOG = 0x2d - IP_FW_ZERO = 0x2b - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_GROUP_SRC_FILTER = 0x200 - IP_MAX_MEMBERSHIPS = 0xfff - IP_MAX_SOCK_MUTE_FILTER = 0x80 - IP_MAX_SOCK_SRC_FILTER = 0x80 - IP_MF = 0x2000 - IP_MIN_MEMBERSHIPS = 0x1f - IP_MSFILTER = 0x4a - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_IFINDEX = 0x42 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_MULTICAST_VIF = 0xe - IP_NAT__XXX = 0x37 - IP_OFFMASK = 0x1fff - IP_OLD_FW_ADD = 0x32 - IP_OLD_FW_DEL = 0x33 - IP_OLD_FW_FLUSH = 0x34 - IP_OLD_FW_GET = 0x36 - IP_OLD_FW_RESETLOG = 0x38 - IP_OLD_FW_ZERO = 0x35 - IP_OPTIONS = 0x1 - IP_PKTINFO = 0x1a - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVPKTINFO = 0x1a - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x18 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_STRIPHDR = 0x17 - IP_TOS = 0x3 - IP_TRAFFIC_MGT_BACKGROUND = 0x41 - IP_TTL = 0x4 - IP_UNBLOCK_SOURCE = 0x49 - ISIG = 0x80 - ISTRIP = 0x20 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - 
LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_CAN_REUSE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_FREE_REUSABLE = 0x7 - MADV_FREE_REUSE = 0x8 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_WILLNEED = 0x3 - MADV_ZERO_WIRED_PAGES = 0x6 - MAP_ANON = 0x1000 - MAP_COPY = 0x2 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_JIT = 0x800 - MAP_NOCACHE = 0x400 - MAP_NOEXTEND = 0x100 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_RESERVED0080 = 0x80 - MAP_SHARED = 0x1 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - MSG_EOR = 0x8 - MSG_FLUSH = 0x400 - MSG_HAVEMORE = 0x2000 - MSG_HOLD = 0x800 - MSG_NEEDSA = 0x10000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_RCVMORE = 0x4000 - MSG_SEND = 0x1000 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MSG_WAITSTREAM = 0x200 - MS_ASYNC = 0x1 - MS_DEACTIVATE = 0x8 - MS_INVALIDATE = 0x2 - MS_KILLPAGES = 0x4 - MS_SYNC = 0x10 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_DUMP2 = 0x7 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_IFLIST2 = 0x6 - NET_RT_MAXID = 0xa - NET_RT_STAT = 0x4 - NET_RT_TRASH = 0x5 - NOFLSH = 0x80000000 - NOTE_ABSOLUTE = 0x8 - NOTE_ATTRIB = 0x8 - NOTE_BACKGROUND = 0x40 - NOTE_CHILD = 0x4 - NOTE_CRITICAL = 0x20 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXITSTATUS = 0x4000000 - NOTE_EXIT_CSERROR = 0x40000 - NOTE_EXIT_DECRYPTFAIL = 0x10000 - NOTE_EXIT_DETAIL = 0x2000000 - NOTE_EXIT_DETAIL_MASK = 0x70000 - NOTE_EXIT_MEMORY = 0x20000 - NOTE_EXIT_REPARENTED = 0x80000 - NOTE_EXTEND = 0x4 - NOTE_FFAND = 0x40000000 - NOTE_FFCOPY = 0xc0000000 - NOTE_FFCTRLMASK = 0xc0000000 - NOTE_FFLAGSMASK = 0xffffff - NOTE_FFNOP = 0x0 - NOTE_FFOR = 0x80000000 - NOTE_FORK = 0x40000000 - NOTE_LEEWAY = 0x10 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_NONE = 0x80 - NOTE_NSECONDS = 0x4 - NOTE_PCTRLMASK = -0x100000 - NOTE_PDATAMASK = 0xfffff - NOTE_REAP = 0x10000000 - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_SECONDS = 0x1 - NOTE_SIGNAL = 0x8000000 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRIGGER = 0x1000000 - NOTE_USECONDS = 0x2 - NOTE_VM_ERROR = 0x10000000 - NOTE_VM_PRESSURE = 0x80000000 - NOTE_VM_PRESSURE_SUDDEN_TERMINATE = 0x20000000 - NOTE_VM_PRESSURE_TERMINATE = 0x40000000 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - OFDEL = 0x20000 - OFILL = 0x80 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_ALERT = 0x20000000 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x1000000 - O_CREAT = 0x200 - O_DIRECTORY = 0x100000 - O_DP_GETRAWENCRYPTED = 0x1 - O_DSYNC = 0x400000 - O_EVTONLY = 0x8000 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x20000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_POPUP = 0x80000000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYMLINK = 0x200000 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PT_ATTACH = 0xa - PT_ATTACHEXC = 0xe - PT_CONTINUE = 0x7 - PT_DENY_ATTACH = 0x1f - PT_DETACH = 0xb - PT_FIRSTMACH = 0x20 - PT_FORCEQUOTA = 0x1e - PT_KILL = 0x8 - PT_READ_D = 0x2 - PT_READ_I = 0x1 - PT_READ_U = 0x3 - PT_SIGEXC = 0xc - PT_STEP = 0x9 - PT_THUPDATE = 0xd - PT_TRACE_ME = 0x0 - PT_WRITE_D = 0x5 - PT_WRITE_I = 0x4 - PT_WRITE_U = 0x6 - RLIMIT_AS = 0x5 - RLIMIT_CORE = 0x4 - 
RLIMIT_CPU = 0x0 - RLIMIT_CPU_USAGE_MONITOR = 0x2 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x8 - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_CLONING = 0x100 - RTF_CONDEMNED = 0x2000000 - RTF_DELCLONE = 0x80 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_IFREF = 0x4000000 - RTF_IFSCOPE = 0x1000000 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 0x20 - RTF_MULTICAST = 0x800000 - RTF_NOIFREF = 0x2000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_PROXY = 0x8000000 - RTF_REJECT = 0x8 - RTF_ROUTER = 0x10000000 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_WASCLONED = 0x20000 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_GET2 = 0x14 - RTM_IFINFO = 0xe - RTM_IFINFO2 = 0x12 - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_NEWMADDR2 = 0x13 - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SCM_TIMESTAMP_MONOTONIC = 0x4 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCAIFADDR = 0x8040691a - SIOCARPIPLL = 0xc0206928 - SIOCATMARK = 0x40047307 - SIOCAUTOADDR = 0xc0206926 - SIOCAUTONETMASK = 0x80206927 - SIOCDELMULTI = 0x80206932 - SIOCDIFADDR = 0x80206919 - SIOCDIFPHYADDR = 0x80206941 - SIOCGDRVSPEC = 0xc028697b - SIOCGETVLAN = 0xc020697f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFALTMTU = 0xc0206948 - SIOCGIFASYNCMAP = 0xc020697c - SIOCGIFBOND = 0xc0206947 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020695b - SIOCGIFCONF = 0xc00c6924 - SIOCGIFDEVMTU = 0xc0206944 - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFKPI = 0xc0206987 - SIOCGIFMAC = 0xc0206982 - SIOCGIFMEDIA = 0xc02c6938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206940 - SIOCGIFPHYS = 0xc0206935 - SIOCGIFPSRCADDR = 0xc020693f - SIOCGIFSTATUS = 0xc331693d - SIOCGIFVLAN = 0xc020697f - SIOCGIFWAKEFLAGS = 0xc0206988 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCIFCREATE = 0xc0206978 - SIOCIFCREATE2 = 0xc020697a - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc0106981 - SIOCRSLVMULTI = 0xc010693b - SIOCSDRVSPEC = 0x8028697b - SIOCSETVLAN = 0x8020697e - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFALTMTU = 0x80206945 - SIOCSIFASYNCMAP = 0x8020697d - SIOCSIFBOND = 0x80206946 - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020695a - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFKPI = 0x80206986 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMAC = 0x80206983 - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 
0x80206934 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x8040693e - SIOCSIFPHYS = 0x80206936 - SIOCSIFVLAN = 0x8020697e - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_DONTTRUNC = 0x2000 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LABEL = 0x1010 - SO_LINGER = 0x80 - SO_LINGER_SEC = 0x1080 - SO_NKE = 0x1021 - SO_NOADDRERR = 0x1023 - SO_NOSIGPIPE = 0x1022 - SO_NOTIFYCONFLICT = 0x1026 - SO_NP_EXTENSIONS = 0x1083 - SO_NREAD = 0x1020 - SO_NUMRCVPKT = 0x1112 - SO_NWRITE = 0x1024 - SO_OOBINLINE = 0x100 - SO_PEERLABEL = 0x1011 - SO_RANDOMPORT = 0x1082 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_REUSESHAREUID = 0x1025 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TIMESTAMP_MONOTONIC = 0x800 - SO_TYPE = 0x1008 - SO_UPCALLCLOSEWAIT = 0x1027 - SO_USELOOPBACK = 0x40 - SO_WANTMORE = 0x4000 - SO_WANTOOBFLAG = 0x8000 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IFWHT = 0xe000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISTXT = 0x200 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CONNECTIONTIMEOUT = 0x20 - TCP_ENABLE_ECN = 0x104 - TCP_KEEPALIVE = 0x10 - TCP_KEEPCNT = 0x102 - TCP_KEEPINTVL = 0x101 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x4 - TCP_MAX_WINSHIFT = 0xe - TCP_MINMSS = 0xd8 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_NOTSENT_LOWAT = 0x201 - TCP_RXT_CONNDROPTIME = 0x80 - TCP_RXT_FINDROP = 0x100 - TCP_SENDMOREACKS = 0x103 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x40107458 - TIOCDRAIN = 0x2000745e - TIOCDSIMICROCODE = 0x20007455 - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x40487413 - TIOCGETD = 0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGWINSZ = 0x40087468 - TIOCIXOFF = 0x20007480 - TIOCIXON = 0x20007481 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 0x4004746a - TIOCMODG = 0x40047403 - TIOCMODS = 0x80047404 - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTYGNAME = 0x40807453 - TIOCPTYGRANT = 0x20007454 - TIOCPTYUNLK = 0x20007452 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCONS = 0x20007463 - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 
0x20007479 - TIOCSETA = 0x80487414 - TIOCSETAF = 0x80487416 - TIOCSETAW = 0x80487415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2000745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40107459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VT0 = 0x0 - VT1 = 0x10000 - VTDLY = 0x10000 - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x10 - WCOREFLAG = 0x80 - WEXITED = 0x4 - WNOHANG = 0x1 - WNOWAIT = 0x20 - WORDSIZE = 0x40 - WSTOPPED = 0x8 - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADARCH = syscall.Errno(0x56) - EBADEXEC = syscall.Errno(0x55) - EBADF = syscall.Errno(0x9) - EBADMACHO = syscall.Errno(0x58) - EBADMSG = syscall.Errno(0x5e) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x59) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDEVERR = syscall.Errno(0x53) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x5a) - EILSEQ = syscall.Errno(0x5c) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x6a) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5f) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x5d) - ENOBUFS = syscall.Errno(0x37) - ENODATA = syscall.Errno(0x60) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x61) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x5b) - ENOPOLICY = syscall.Errno(0x67) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x62) - ENOSTR = syscall.Errno(0x63) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTRECOVERABLE = syscall.Errno(0x68) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x66) - EOVERFLOW = syscall.Errno(0x54) - EOWNERDEAD = syscall.Errno(0x69) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) 
- EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x64) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - EPWROFF = syscall.Errno(0x52) - EQFULL = syscall.Errno(0x6a) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHLIBVERS = syscall.Errno(0x57) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIME = syscall.Errno(0x65) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "resource busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", 
- 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "operation timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "device power is off", - 83: "device error", - 84: "value too large to be stored in data type", - 85: "bad executable (or shared library)", - 86: "bad CPU type in executable", - 87: "shared library version mismatch", - 88: "malformed Mach-o file", - 89: "operation canceled", - 90: "identifier removed", - 91: "no message of desired type", - 92: "illegal byte sequence", - 93: "attribute not found", - 94: "bad message", - 95: "EMULTIHOP (Reserved)", - 96: "no message available on STREAM", - 97: "ENOLINK (Reserved)", - 98: "no STREAM resources", - 99: "not a STREAM", - 100: "protocol error", - 101: "STREAM ioctl timeout", - 102: "operation not supported on socket", - 103: "policy not found", - 104: "state not recoverable", - 105: "previous owner died", - 106: "interface output queue is full", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "suspended (signal)", - 18: "suspended", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_dragonfly_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_dragonfly_386.go deleted file mode 100644 index 2a329f06e25..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_dragonfly_386.go +++ /dev/null @@ -1,1530 +0,0 @@ -// mkerrors.sh -m32 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,dragonfly - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m32 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_ATM = 0x1e - AF_BLUETOOTH = 0x21 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x23 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1c - AF_IPX = 0x17 - AF_ISDN = 0x1a - 
AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x24 - AF_MPLS = 0x22 - AF_NATM = 0x1d - AF_NETGRAPH = 0x20 - AF_NS = 0x6 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x11 - AF_SIP = 0x18 - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B9600 = 0x2580 - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc0084279 - BIOCGETIF = 0x4020426b - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4008426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCLOCK = 0x2000427a - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044278 - BIOCSETF = 0x80084267 - BIOCSETIF = 0x8020426c - BIOCSETWF = 0x8008427b - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8008426d - BIOCSSEESENT = 0x80044277 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DEFAULTBUFSIZE = 0x1000 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MAX_CLONES = 0x80 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DOCSIS = 0x8f - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_HHDLC = 0x79 - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPFILTER = 0x74 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - 
DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_REDBACK_SMARTEDGE = 0x20 - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DBF = 0xf - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_EXCEPT = -0x8 - EVFILT_MARKER = 0xf - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x8 - EVFILT_TIMER = -0x7 - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_NODATA = 0x1000 - EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTEXIT_LWP = 0x10000 - EXTEXIT_PROC = 0x0 - EXTEXIT_SETINT = 0x1 - EXTEXIT_SIMPLE = 0x0 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_DUP2FD = 0xa - F_DUP2FD_CLOEXEC = 0x12 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x11 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETOWN = 0x5 - F_OK = 0x0 - F_RDLCK = 0x1 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x118e72 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MONITOR = 0x40000 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NPOLLING = 0x100000 - IFF_OACTIVE = 0x400 - IFF_OACTIVE_COMPAT = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_POLLING = 0x10000 - IFF_POLLING_COMPAT = 0x10000 - IFF_PPROMISC = 0x20000 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_SMART = 0x20 - IFF_STATICARP = 0x80000 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 
0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf8 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf2 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - IFT_PLC = 0xae - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf1 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 
- IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_STF = 0xf3 - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VOICEEM = 0x64 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CARP = 0x70 - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0xfe - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MOBILE = 0x37 - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEP = 0x21 - IPPROTO_SKIP = 0x39 - IPPROTO_SRPC = 0x5a - IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TLSP = 0x38 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_UNKNOWN = 0x102 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - 
IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_AUTOFLOWLABEL = 0x3b - IPV6_BINDV6ONLY = 0x1b - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL = 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXPACKET = 0xffff - IPV6_MMTU = 0x500 - IPV6_MSFILTER = 0x4a - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_PATHMTU = 0x2c - IPV6_PKTINFO = 0x2e - IPV6_PKTOPTIONS = 0x34 - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_PREFER_TEMPADDR = 0x3f - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW_ADD = 0x32 - IP_FW_DEL = 0x33 - IP_FW_FLUSH = 0x34 - IP_FW_GET = 0x36 - IP_FW_RESETLOG = 0x37 - IP_FW_ZERO = 0x35 - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINTTL = 0x42 - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_MULTICAST_VIF = 0xe - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x1 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x41 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_TOS = 0x3 - IP_TTL = 0x4 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_AUTOSYNC = 0x7 - MADV_CONTROL_END = 0xb - MADV_CONTROL_START = 0xa - MADV_CORE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_INVAL = 0xa - MADV_NOCORE = 0x8 - MADV_NORMAL = 0x0 - MADV_NOSYNC = 0x6 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_SETMAP = 0xb - MADV_WILLNEED = 0x3 - MAP_ANON = 0x1000 - MAP_COPY = 0x2 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_INHERIT = 0x80 - MAP_NOCORE = 0x20000 - MAP_NOEXTEND = 0x100 - MAP_NORESERVE = 0x40 - MAP_NOSYNC = 0x800 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_SHARED = 0x1 - MAP_SIZEALIGN = 0x40000 - MAP_STACK = 0x400 - MAP_TRYFIXED = 0x10000 - MAP_VPAGETABLE = 0x2000 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - 
MSG_EOR = 0x8 - MSG_FBLOCKING = 0x10000 - MSG_FMASK = 0xffff0000 - MSG_FNONBLOCKING = 0x20000 - MSG_NOSIGNAL = 0x400 - MSG_NOTIFICATION = 0x200 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_SYNC = 0x800 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x2 - MS_SYNC = 0x0 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_MAXID = 0x4 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_OOB = 0x2 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x20000 - O_CREAT = 0x200 - O_DIRECT = 0x10000 - O_DIRECTORY = 0x8000000 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FAPPEND = 0x100000 - O_FASYNCWRITE = 0x800000 - O_FBLOCKING = 0x40000 - O_FBUFFERED = 0x2000000 - O_FMASK = 0x7fc0000 - O_FNONBLOCKING = 0x80000 - O_FOFFSET = 0x200000 - O_FSYNC = 0x80 - O_FSYNCWRITE = 0x400000 - O_FUNBUFFERED = 0x1000000 - O_MAPONREAD = 0x4000000 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_AS = 0xa - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0xb - RTAX_MPLS1 = 0x8 - RTAX_MPLS2 = 0x9 - RTAX_MPLS3 = 0xa - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_MPLS1 = 0x100 - RTA_MPLS2 = 0x200 - RTA_MPLS3 = 0x400 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_CLONING = 0x100 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 0x20 - RTF_MPLSOPS = 0x1000000 - RTF_MULTICAST = 0x800000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_REJECT = 0x8 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_WASCLONED = 0x20000 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_IEEE80211 = 0x12 - RTM_IFANNOUNCE = 0x11 - RTM_IFINFO = 0xe - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x6 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_IWCAPSEGS = 0x400 - RTV_IWMAXSEGS = 0x200 - RTV_MSL = 0x100 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - 
SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCADDRT = 0x8030720a - SIOCAIFADDR = 0x8040691a - SIOCALIFADDR = 0x8118691b - SIOCATMARK = 0x40047307 - SIOCDELMULTI = 0x80206932 - SIOCDELRT = 0x8030720b - SIOCDIFADDR = 0x80206919 - SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8118691d - SIOCGDRVSPEC = 0xc01c697b - SIOCGETSGCNT = 0xc0147210 - SIOCGETVIFCNT = 0xc014720f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020691f - SIOCGIFCONF = 0xc0086924 - SIOCGIFDATA = 0xc0206926 - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFGMEMB = 0xc024698a - SIOCGIFINDEX = 0xc0206920 - SIOCGIFMEDIA = 0xc0286938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 - SIOCGIFPHYS = 0xc0206935 - SIOCGIFPOLLCPU = 0xc020697e - SIOCGIFPSRCADDR = 0xc0206947 - SIOCGIFSTATUS = 0xc331693b - SIOCGIFTSOLEN = 0xc0206980 - SIOCGLIFADDR = 0xc118691c - SIOCGLIFPHYADDR = 0xc118694b - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGPRIVATE_0 = 0xc0206950 - SIOCGPRIVATE_1 = 0xc0206951 - SIOCIFCREATE = 0xc020697a - SIOCIFCREATE2 = 0xc020697c - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc00c6978 - SIOCSDRVSPEC = 0x801c697b - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020691e - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x80206934 - SIOCSIFNAME = 0x80206928 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSIFPHYS = 0x80206936 - SIOCSIFPOLLCPU = 0x8020697d - SIOCSIFTSOLEN = 0x8020697f - SIOCSLIFPHYADDR = 0x8118694a - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ACCEPTFILTER = 0x1000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LINGER = 0x80 - SO_NOSIGPIPE = 0x800 - SO_OOBINLINE = 0x100 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDSPACE = 0x100a - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_FASTKEEP = 0x80 - TCP_KEEPCNT = 0x400 - TCP_KEEPIDLE = 0x100 - TCP_KEEPINIT = 0x20 - TCP_KEEPINTVL = 0x200 - TCP_MAXBURST = 0x4 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MINMSS = 0x100 - TCP_MIN_WINSHIFT = 0x5 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_SIGNATURE_ENABLE = 0x10 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x40087458 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGSID = 0x40047463 - TIOCGSIZE = 0x40087468 - TIOCGWINSZ = 0x40087468 - TIOCISPTMASTER = 0x20007455 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 0x4004746a - TIOCMODG = 0x40047403 - TIOCMODS = 0x80047404 - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 
0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2000745f - TIOCSPGRP = 0x80047476 - TIOCSSIZE = 0x80087467 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40087459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VCHECKPT = 0x13 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VERASE2 = 0x7 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x4 - WCOREFLAG = 0x80 - WLINUXCLONE = 0x80000000 - WNOHANG = 0x1 - WSTOPPED = 0x7f - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EASYNC = syscall.Errno(0x63) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADMSG = syscall.Errno(0x59) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x55) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDOOFUS = syscall.Errno(0x58) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x52) - EILSEQ = syscall.Errno(0x56) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x63) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5a) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x57) - ENOBUFS = syscall.Errno(0x37) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x5b) - ENOMEDIUM = syscall.Errno(0x5d) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x53) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = 
syscall.Errno(0x42) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x54) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x5c) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUNUSED94 = syscall.Errno(0x5e) - EUNUSED95 = syscall.Errno(0x5f) - EUNUSED96 = syscall.Errno(0x60) - EUNUSED97 = syscall.Errno(0x61) - EUNUSED98 = syscall.Errno(0x62) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCKPT = syscall.Signal(0x21) - SIGCKPTEXIT = syscall.Signal(0x22) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTHR = syscall.Signal(0x20) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation 
on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "operation timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "identifier removed", - 83: "no message of desired type", - 84: "value too large to be stored in data type", - 85: "operation canceled", - 86: "illegal byte sequence", - 87: "attribute not found", - 88: "programming error", - 89: "bad message", - 90: "multihop attempted", - 91: "link has been severed", - 92: "protocol error", - 93: "no medium found", - 94: "unknown error: 94", - 95: "unknown error: 95", - 96: "unknown error: 96", - 97: "unknown error: 97", - 98: "unknown error: 98", - 99: "unknown error: 99", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "suspended (signal)", - 18: "suspended", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "thread Scheduler", - 33: "checkPoint", - 34: "checkPointExit", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_dragonfly_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_dragonfly_amd64.go deleted file mode 100644 index 0feceee1516..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_dragonfly_amd64.go +++ /dev/null @@ -1,1530 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,dragonfly - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - 
AF_ATM = 0x1e - AF_BLUETOOTH = 0x21 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x23 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1c - AF_IPX = 0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x24 - AF_MPLS = 0x22 - AF_NATM = 0x1d - AF_NETGRAPH = 0x20 - AF_NS = 0x6 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x11 - AF_SIP = 0x18 - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B9600 = 0x2580 - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc0104279 - BIOCGETIF = 0x4020426b - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4010426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCLOCK = 0x2000427a - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044278 - BIOCSETF = 0x80104267 - BIOCSETIF = 0x8020426c - BIOCSETWF = 0x8010427b - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8010426d - BIOCSSEESENT = 0x80044277 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x8 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DEFAULTBUFSIZE = 0x1000 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MAX_CLONES = 0x80 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DOCSIS = 0x8f - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_HHDLC = 0x79 - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - 
DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPFILTER = 0x74 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_REDBACK_SMARTEDGE = 0x20 - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DBF = 0xf - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_EXCEPT = -0x8 - EVFILT_MARKER = 0xf - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x8 - EVFILT_TIMER = -0x7 - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_NODATA = 0x1000 - EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTEXIT_LWP = 0x10000 - EXTEXIT_PROC = 0x0 - EXTEXIT_SETINT = 0x1 - EXTEXIT_SIMPLE = 0x0 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_DUP2FD = 0xa - F_DUP2FD_CLOEXEC = 0x12 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x11 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETOWN = 0x5 - F_OK = 0x0 - F_RDLCK = 0x1 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x118e72 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MONITOR = 0x40000 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NPOLLING = 0x100000 - IFF_OACTIVE = 0x400 - IFF_OACTIVE_COMPAT = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_POLLING = 0x10000 - IFF_POLLING_COMPAT = 0x10000 - IFF_PPROMISC = 0x20000 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_SMART = 0x20 
- IFF_STATICARP = 0x80000 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf8 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf2 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - IFT_PLC = 0xae - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf1 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - 
IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_STF = 0xf3 - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VOICEEM = 0x64 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CARP = 0x70 - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0xfe - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MOBILE = 0x37 - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEP = 0x21 - IPPROTO_SKIP = 0x39 - IPPROTO_SRPC = 0x5a - 
IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TLSP = 0x38 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_UNKNOWN = 0x102 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_AUTOFLOWLABEL = 0x3b - IPV6_BINDV6ONLY = 0x1b - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL = 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXPACKET = 0xffff - IPV6_MMTU = 0x500 - IPV6_MSFILTER = 0x4a - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_PATHMTU = 0x2c - IPV6_PKTINFO = 0x2e - IPV6_PKTOPTIONS = 0x34 - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_PREFER_TEMPADDR = 0x3f - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW_ADD = 0x32 - IP_FW_DEL = 0x33 - IP_FW_FLUSH = 0x34 - IP_FW_GET = 0x36 - IP_FW_RESETLOG = 0x37 - IP_FW_ZERO = 0x35 - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINTTL = 0x42 - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_MULTICAST_VIF = 0xe - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x1 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x41 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_TOS = 0x3 - IP_TTL = 0x4 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_AUTOSYNC = 0x7 - MADV_CONTROL_END = 0xb - MADV_CONTROL_START = 0xa - MADV_CORE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_INVAL = 0xa - MADV_NOCORE = 0x8 - MADV_NORMAL = 0x0 - MADV_NOSYNC = 0x6 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_SETMAP = 0xb - MADV_WILLNEED = 0x3 - MAP_ANON = 0x1000 - MAP_COPY = 0x2 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_INHERIT = 0x80 - MAP_NOCORE = 0x20000 - MAP_NOEXTEND = 
0x100 - MAP_NORESERVE = 0x40 - MAP_NOSYNC = 0x800 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_SHARED = 0x1 - MAP_SIZEALIGN = 0x40000 - MAP_STACK = 0x400 - MAP_TRYFIXED = 0x10000 - MAP_VPAGETABLE = 0x2000 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - MSG_EOR = 0x8 - MSG_FBLOCKING = 0x10000 - MSG_FMASK = 0xffff0000 - MSG_FNONBLOCKING = 0x20000 - MSG_NOSIGNAL = 0x400 - MSG_NOTIFICATION = 0x200 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_SYNC = 0x800 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x2 - MS_SYNC = 0x0 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_MAXID = 0x4 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_OOB = 0x2 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x20000 - O_CREAT = 0x200 - O_DIRECT = 0x10000 - O_DIRECTORY = 0x8000000 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FAPPEND = 0x100000 - O_FASYNCWRITE = 0x800000 - O_FBLOCKING = 0x40000 - O_FBUFFERED = 0x2000000 - O_FMASK = 0x7fc0000 - O_FNONBLOCKING = 0x80000 - O_FOFFSET = 0x200000 - O_FSYNC = 0x80 - O_FSYNCWRITE = 0x400000 - O_FUNBUFFERED = 0x1000000 - O_MAPONREAD = 0x4000000 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_AS = 0xa - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0xb - RTAX_MPLS1 = 0x8 - RTAX_MPLS2 = 0x9 - RTAX_MPLS3 = 0xa - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_MPLS1 = 0x100 - RTA_MPLS2 = 0x200 - RTA_MPLS3 = 0x400 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_CLONING = 0x100 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 0x20 - RTF_MPLSOPS = 0x1000000 - RTF_MULTICAST = 0x800000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_REJECT = 0x8 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_WASCLONED = 0x20000 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_IEEE80211 = 0x12 - RTM_IFANNOUNCE = 0x11 - RTM_IFINFO = 0xe - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x6 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT 
= 0x2 - RTV_IWCAPSEGS = 0x400 - RTV_IWMAXSEGS = 0x200 - RTV_MSL = 0x100 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCADDRT = 0x8040720a - SIOCAIFADDR = 0x8040691a - SIOCALIFADDR = 0x8118691b - SIOCATMARK = 0x40047307 - SIOCDELMULTI = 0x80206932 - SIOCDELRT = 0x8040720b - SIOCDIFADDR = 0x80206919 - SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8118691d - SIOCGDRVSPEC = 0xc028697b - SIOCGETSGCNT = 0xc0207210 - SIOCGETVIFCNT = 0xc028720f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020691f - SIOCGIFCONF = 0xc0106924 - SIOCGIFDATA = 0xc0206926 - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFGMEMB = 0xc028698a - SIOCGIFINDEX = 0xc0206920 - SIOCGIFMEDIA = 0xc0306938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 - SIOCGIFPHYS = 0xc0206935 - SIOCGIFPOLLCPU = 0xc020697e - SIOCGIFPSRCADDR = 0xc0206947 - SIOCGIFSTATUS = 0xc331693b - SIOCGIFTSOLEN = 0xc0206980 - SIOCGLIFADDR = 0xc118691c - SIOCGLIFPHYADDR = 0xc118694b - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGPRIVATE_0 = 0xc0206950 - SIOCGPRIVATE_1 = 0xc0206951 - SIOCIFCREATE = 0xc020697a - SIOCIFCREATE2 = 0xc020697c - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc0106978 - SIOCSDRVSPEC = 0x8028697b - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020691e - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x80206934 - SIOCSIFNAME = 0x80206928 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSIFPHYS = 0x80206936 - SIOCSIFPOLLCPU = 0x8020697d - SIOCSIFTSOLEN = 0x8020697f - SIOCSLIFPHYADDR = 0x8118694a - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ACCEPTFILTER = 0x1000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LINGER = 0x80 - SO_NOSIGPIPE = 0x800 - SO_OOBINLINE = 0x100 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDSPACE = 0x100a - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_FASTKEEP = 0x80 - TCP_KEEPCNT = 0x400 - TCP_KEEPIDLE = 0x100 - TCP_KEEPINIT = 0x20 - TCP_KEEPINTVL = 0x200 - TCP_MAXBURST = 0x4 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MINMSS = 0x100 - TCP_MIN_WINSHIFT = 0x5 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_SIGNATURE_ENABLE = 0x10 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x40107458 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x402c7413 - TIOCGETD = 
0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGSID = 0x40047463 - TIOCGSIZE = 0x40087468 - TIOCGWINSZ = 0x40087468 - TIOCISPTMASTER = 0x20007455 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 0x4004746a - TIOCMODG = 0x40047403 - TIOCMODS = 0x80047404 - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2000745f - TIOCSPGRP = 0x80047476 - TIOCSSIZE = 0x80087467 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40107459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VCHECKPT = 0x13 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VERASE2 = 0x7 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x4 - WCOREFLAG = 0x80 - WLINUXCLONE = 0x80000000 - WNOHANG = 0x1 - WSTOPPED = 0x7f - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EASYNC = syscall.Errno(0x63) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADMSG = syscall.Errno(0x59) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x55) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDOOFUS = syscall.Errno(0x58) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x52) - EILSEQ = syscall.Errno(0x56) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x63) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5a) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x57) - ENOBUFS = syscall.Errno(0x37) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = 
syscall.Errno(0x5b) - ENOMEDIUM = syscall.Errno(0x5d) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x53) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x54) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x5c) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUNUSED94 = syscall.Errno(0x5e) - EUNUSED95 = syscall.Errno(0x5f) - EUNUSED96 = syscall.Errno(0x60) - EUNUSED97 = syscall.Errno(0x61) - EUNUSED98 = syscall.Errno(0x62) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCKPT = syscall.Signal(0x21) - SIGCKPTEXIT = syscall.Signal(0x22) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTHR = syscall.Signal(0x20) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space 
left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "operation timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "identifier removed", - 83: "no message of desired type", - 84: "value too large to be stored in data type", - 85: "operation canceled", - 86: "illegal byte sequence", - 87: "attribute not found", - 88: "programming error", - 89: "bad message", - 90: "multihop attempted", - 91: "link has been severed", - 92: "protocol error", - 93: "no medium found", - 94: "unknown error: 94", - 95: "unknown error: 95", - 96: "unknown error: 96", - 97: "unknown error: 97", - 98: "unknown error: 98", - 99: "unknown error: 99", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "suspended (signal)", - 18: "suspended", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "thread Scheduler", - 33: "checkPoint", - 34: "checkPointExit", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_386.go deleted file mode 100644 index 7b95751c3db..00000000000 --- 
a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_386.go +++ /dev/null @@ -1,1743 +0,0 @@ -// mkerrors.sh -m32 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,freebsd - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m32 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_ARP = 0x23 - AF_ATM = 0x1e - AF_BLUETOOTH = 0x24 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x25 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1c - AF_INET6_SDP = 0x2a - AF_INET_SDP = 0x28 - AF_IPX = 0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x2a - AF_NATM = 0x1d - AF_NETBIOS = 0x6 - AF_NETGRAPH = 0x20 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x11 - AF_SCLUSTER = 0x22 - AF_SIP = 0x18 - AF_SLOW = 0x21 - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_VENDOR00 = 0x27 - AF_VENDOR01 = 0x29 - AF_VENDOR02 = 0x2b - AF_VENDOR03 = 0x2d - AF_VENDOR04 = 0x2f - AF_VENDOR05 = 0x31 - AF_VENDOR06 = 0x33 - AF_VENDOR07 = 0x35 - AF_VENDOR08 = 0x37 - AF_VENDOR09 = 0x39 - AF_VENDOR10 = 0x3b - AF_VENDOR11 = 0x3d - AF_VENDOR12 = 0x3f - AF_VENDOR13 = 0x41 - AF_VENDOR14 = 0x43 - AF_VENDOR15 = 0x45 - AF_VENDOR16 = 0x47 - AF_VENDOR17 = 0x49 - AF_VENDOR18 = 0x4b - AF_VENDOR19 = 0x4d - AF_VENDOR20 = 0x4f - AF_VENDOR21 = 0x51 - AF_VENDOR22 = 0x53 - AF_VENDOR23 = 0x55 - AF_VENDOR24 = 0x57 - AF_VENDOR25 = 0x59 - AF_VENDOR26 = 0x5b - AF_VENDOR27 = 0x5d - AF_VENDOR28 = 0x5f - AF_VENDOR29 = 0x61 - AF_VENDOR30 = 0x63 - AF_VENDOR31 = 0x65 - AF_VENDOR32 = 0x67 - AF_VENDOR33 = 0x69 - AF_VENDOR34 = 0x6b - AF_VENDOR35 = 0x6d - AF_VENDOR36 = 0x6f - AF_VENDOR37 = 0x71 - AF_VENDOR38 = 0x73 - AF_VENDOR39 = 0x75 - AF_VENDOR40 = 0x77 - AF_VENDOR41 = 0x79 - AF_VENDOR42 = 0x7b - AF_VENDOR43 = 0x7d - AF_VENDOR44 = 0x7f - AF_VENDOR45 = 0x81 - AF_VENDOR46 = 0x83 - AF_VENDOR47 = 0x85 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B460800 = 0x70800 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B921600 = 0xe1000 - B9600 = 0x2580 - BIOCFEEDBACK = 0x8004427c - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDIRECTION = 0x40044276 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc0084279 - BIOCGETBUFMODE = 0x4004427d - BIOCGETIF = 0x4020426b - BIOCGETZMAX = 0x4004427f - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4008426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCGTSTAMP = 0x40044283 - BIOCIMMEDIATE = 0x80044270 - BIOCLOCK = 0x2000427a - BIOCPROMISC = 0x20004269 - BIOCROTZBUF = 0x400c4280 - BIOCSBLEN = 0xc0044266 - BIOCSDIRECTION = 0x80044277 - BIOCSDLT = 0x80044278 - BIOCSETBUFMODE = 0x8004427e - BIOCSETF = 0x80084267 - BIOCSETFNR = 0x80084282 - BIOCSETIF = 0x8020426c - BIOCSETWF = 0x8008427b - BIOCSETZBUF = 0x800c4281 - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8008426d - BIOCSSEESENT = 0x80044277 - BIOCSTSTAMP = 0x80044284 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_BUFMODE_BUFFER = 0x1 - BPF_BUFMODE_ZBUF = 0x2 - 
BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_T_BINTIME = 0x2 - BPF_T_BINTIME_FAST = 0x102 - BPF_T_BINTIME_MONOTONIC = 0x202 - BPF_T_BINTIME_MONOTONIC_FAST = 0x302 - BPF_T_FAST = 0x100 - BPF_T_FLAG_MASK = 0x300 - BPF_T_FORMAT_MASK = 0x3 - BPF_T_MICROTIME = 0x0 - BPF_T_MICROTIME_FAST = 0x100 - BPF_T_MICROTIME_MONOTONIC = 0x200 - BPF_T_MICROTIME_MONOTONIC_FAST = 0x300 - BPF_T_MONOTONIC = 0x200 - BPF_T_MONOTONIC_FAST = 0x300 - BPF_T_NANOTIME = 0x1 - BPF_T_NANOTIME_FAST = 0x101 - BPF_T_NANOTIME_MONOTONIC = 0x201 - BPF_T_NANOTIME_MONOTONIC_FAST = 0x301 - BPF_T_NONE = 0x3 - BPF_T_NORMAL = 0x0 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CLOCK_MONOTONIC = 0x4 - CLOCK_MONOTONIC_FAST = 0xc - CLOCK_MONOTONIC_PRECISE = 0xb - CLOCK_PROCESS_CPUTIME_ID = 0xf - CLOCK_PROF = 0x2 - CLOCK_REALTIME = 0x0 - CLOCK_REALTIME_FAST = 0xa - CLOCK_REALTIME_PRECISE = 0x9 - CLOCK_SECOND = 0xd - CLOCK_THREAD_CPUTIME_ID = 0xe - CLOCK_UPTIME = 0x5 - CLOCK_UPTIME_FAST = 0x8 - CLOCK_UPTIME_PRECISE = 0x7 - CLOCK_VIRTUAL = 0x1 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0x18 - CTL_NET = 0x4 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DBUS = 0xe7 - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_DVB_CI = 0xeb - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HHDLC = 0x79 - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NOFCS = 0xe6 - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPFILTER = 0x74 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPOIB = 0xf2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_ATM_CEMIC = 0xee - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - 
DLT_JUNIPER_FIBRECHANNEL = 0xea - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_SRX_E2E = 0xe9 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_JUNIPER_VS = 0xe8 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_PPP_WITHDIRECTION = 0xa6 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MATCHING_MAX = 0xf6 - DLT_MATCHING_MIN = 0x68 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPEG_2_TS = 0xf3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_MUX27010 = 0xec - DLT_NETANALYZER = 0xf0 - DLT_NETANALYZER_TRANSPARENT = 0xf1 - DLT_NFC_LLCP = 0xf5 - DLT_NFLOG = 0xef - DLT_NG40 = 0xf4 - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x79 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PPP_WITH_DIRECTION = 0xa6 - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DLT_STANAG_5066_D_PDU = 0xed - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_USER0 = 0x93 - DLT_USER1 = 0x94 - DLT_USER10 = 0x9d - DLT_USER11 = 0x9e - DLT_USER12 = 0x9f - DLT_USER13 = 0xa0 - DLT_USER14 = 0xa1 - DLT_USER15 = 0xa2 - DLT_USER2 = 0x95 - DLT_USER3 = 0x96 - DLT_USER4 = 0x97 - DLT_USER5 = 0x98 - DLT_USER6 = 0x99 - DLT_USER7 = 0x9a - DLT_USER8 = 0x9b - DLT_USER9 = 0x9c - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_FS = -0x9 - EVFILT_LIO = -0xa - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0xb - EVFILT_TIMER = -0x7 - EVFILT_USER = -0xb - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_DISPATCH = 0x80 - EV_DROP = 0x1000 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTATTR_NAMESPACE_EMPTY = 0x0 - EXTATTR_NAMESPACE_SYSTEM = 0x2 - EXTATTR_NAMESPACE_USER = 0x1 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_CANCEL = 0x5 - F_DUP2FD = 0xa - F_DUP2FD_CLOEXEC = 0x12 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x11 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0xb - F_GETOWN = 0x5 - F_OGETLK = 0x7 - F_OK = 0x0 - F_OSETLK = 0x8 - F_OSETLKW = 0x9 - F_RDAHEAD = 0x10 - F_RDLCK = 0x1 - F_READAHEAD = 0xf - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0xc - F_SETLKW = 0xd - F_SETLK_REMOTE = 0xe - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_UNLCKSYS = 0x4 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFF_ALLMULTI = 0x200 - 
IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x218f72 - IFF_CANTCONFIG = 0x10000 - IFF_DEBUG = 0x4 - IFF_DRV_OACTIVE = 0x400 - IFF_DRV_RUNNING = 0x40 - IFF_DYING = 0x200000 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MONITOR = 0x40000 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PPROMISC = 0x20000 - IFF_PROMISC = 0x100 - IFF_RENAMING = 0x400000 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_SMART = 0x20 - IFF_STATICARP = 0x80000 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf8 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf2 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INFINIBAND = 0xc7 - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_IPXIP = 0xf9 - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 
- IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf6 - IFT_PFSYNC = 0xf7 - IFT_PLC = 0xae - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf1 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_STF = 0xd7 - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VOICEEM = 0x64 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IN_RFC3021_MASK = 0xfffffffe - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CARP = 0x70 - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0x102 - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HIP = 0x8b - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MH = 0x87 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MOBILE = 0x37 - IPPROTO_MPLS = 
0x89 - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OLD_DIVERT = 0xfe - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_RESERVED_253 = 0xfd - IPPROTO_RESERVED_254 = 0xfe - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEND = 0x103 - IPPROTO_SEP = 0x21 - IPPROTO_SHIM6 = 0x8c - IPPROTO_SKIP = 0x39 - IPPROTO_SPACER = 0x7fff - IPPROTO_SRPC = 0x5a - IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TLSP = 0x38 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_UDPLITE = 0x88 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_AUTOFLOWLABEL = 0x3b - IPV6_BINDANY = 0x40 - IPV6_BINDV6ONLY = 0x1b - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL = 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXOPTHDR = 0x800 - IPV6_MAXPACKET = 0xffff - IPV6_MAX_GROUP_SRC_FILTER = 0x200 - IPV6_MAX_MEMBERSHIPS = 0xfff - IPV6_MAX_SOCK_SRC_FILTER = 0x80 - IPV6_MIN_MEMBERSHIPS = 0x1f - IPV6_MMTU = 0x500 - IPV6_MSFILTER = 0x4a - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_PATHMTU = 0x2c - IPV6_PKTINFO = 0x2e - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_PREFER_TEMPADDR = 0x3f - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_ADD_SOURCE_MEMBERSHIP = 0x46 - IP_BINDANY = 0x18 - IP_BLOCK_SOURCE = 0x48 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DONTFRAG = 0x43 - IP_DROP_MEMBERSHIP = 0xd - IP_DROP_SOURCE_MEMBERSHIP = 0x47 - IP_DUMMYNET3 = 0x31 - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW3 = 0x30 - IP_FW_ADD = 0x32 - IP_FW_DEL = 0x33 - IP_FW_FLUSH = 0x34 - IP_FW_GET = 0x36 - IP_FW_NAT_CFG = 0x38 - IP_FW_NAT_DEL = 0x39 - IP_FW_NAT_GET_CONFIG = 0x3a - IP_FW_NAT_GET_LOG = 0x3b - IP_FW_RESETLOG = 0x37 - IP_FW_TABLE_ADD = 0x28 - 
IP_FW_TABLE_DEL = 0x29 - IP_FW_TABLE_FLUSH = 0x2a - IP_FW_TABLE_GETSIZE = 0x2b - IP_FW_TABLE_LIST = 0x2c - IP_FW_ZERO = 0x35 - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_GROUP_SRC_FILTER = 0x200 - IP_MAX_MEMBERSHIPS = 0xfff - IP_MAX_SOCK_MUTE_FILTER = 0x80 - IP_MAX_SOCK_SRC_FILTER = 0x80 - IP_MAX_SOURCE_FILTER = 0x400 - IP_MF = 0x2000 - IP_MINTTL = 0x42 - IP_MIN_MEMBERSHIPS = 0x1f - IP_MSFILTER = 0x4a - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_MULTICAST_VIF = 0xe - IP_OFFMASK = 0x1fff - IP_ONESBCAST = 0x17 - IP_OPTIONS = 0x1 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVTOS = 0x44 - IP_RECVTTL = 0x41 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_SENDSRCADDR = 0x7 - IP_TOS = 0x3 - IP_TTL = 0x4 - IP_UNBLOCK_SOURCE = 0x49 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_AUTOSYNC = 0x7 - MADV_CORE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_NOCORE = 0x8 - MADV_NORMAL = 0x0 - MADV_NOSYNC = 0x6 - MADV_PROTECT = 0xa - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_WILLNEED = 0x3 - MAP_ALIGNED_SUPER = 0x1000000 - MAP_ALIGNMENT_MASK = -0x1000000 - MAP_ALIGNMENT_SHIFT = 0x18 - MAP_ANON = 0x1000 - MAP_ANONYMOUS = 0x1000 - MAP_COPY = 0x2 - MAP_EXCL = 0x4000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_NOCORE = 0x20000 - MAP_NORESERVE = 0x40 - MAP_NOSYNC = 0x800 - MAP_PREFAULT_READ = 0x40000 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_RESERVED0080 = 0x80 - MAP_RESERVED0100 = 0x100 - MAP_SHARED = 0x1 - MAP_STACK = 0x400 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CMSG_CLOEXEC = 0x40000 - MSG_COMPAT = 0x8000 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - MSG_EOR = 0x8 - MSG_NBIO = 0x4000 - MSG_NOSIGNAL = 0x20000 - MSG_NOTIFICATION = 0x2000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x2 - MS_SYNC = 0x0 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_IFLISTL = 0x5 - NET_RT_IFMALIST = 0x4 - NET_RT_MAXID = 0x6 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FFAND = 0x40000000 - NOTE_FFCOPY = 0xc0000000 - NOTE_FFCTRLMASK = 0xc0000000 - NOTE_FFLAGSMASK = 0xffffff - NOTE_FFNOP = 0x0 - NOTE_FFOR = 0x80000000 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRIGGER = 0x1000000 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x100000 - O_CREAT = 0x200 - O_DIRECT = 0x10000 - O_DIRECTORY = 0x20000 - O_EXCL = 0x800 - O_EXEC = 0x40000 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_TTY_INIT = 0x80000 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - 
PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_AS = 0xa - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x8 - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_FMASK = 0x1004d808 - RTF_GATEWAY = 0x2 - RTF_GWFLAG_COMPAT = 0x80000000 - RTF_HOST = 0x4 - RTF_LLDATA = 0x400 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 0x20 - RTF_MULTICAST = 0x800000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_REJECT = 0x8 - RTF_RNH_LOCKED = 0x40000000 - RTF_STATIC = 0x800 - RTF_STICKY = 0x10000000 - RTF_UP = 0x1 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_IEEE80211 = 0x12 - RTM_IFANNOUNCE = 0x11 - RTM_IFINFO = 0xe - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RTV_WEIGHT = 0x100 - RT_ALL_FIBS = -0x1 - RT_CACHING_CONTEXT = 0x1 - RT_DEFAULT_FIB = 0x0 - RT_NORTREF = 0x2 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_BINTIME = 0x4 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCADDRT = 0x8030720a - SIOCAIFADDR = 0x8040691a - SIOCAIFGROUP = 0x80246987 - SIOCALIFADDR = 0x8118691b - SIOCATMARK = 0x40047307 - SIOCDELMULTI = 0x80206932 - SIOCDELRT = 0x8030720b - SIOCDIFADDR = 0x80206919 - SIOCDIFGROUP = 0x80246989 - SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8118691d - SIOCGDRVSPEC = 0xc01c697b - SIOCGETSGCNT = 0xc0147210 - SIOCGETVIFCNT = 0xc014720f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020691f - SIOCGIFCONF = 0xc0086924 - SIOCGIFDESCR = 0xc020692a - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFIB = 0xc020695c - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFGMEMB = 0xc024698a - SIOCGIFGROUP = 0xc0246988 - SIOCGIFINDEX = 0xc0206920 - SIOCGIFMAC = 0xc0206926 - SIOCGIFMEDIA = 0xc0286938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 - SIOCGIFPHYS = 0xc0206935 - SIOCGIFPSRCADDR = 0xc0206947 - SIOCGIFSTATUS = 0xc331693b - SIOCGLIFADDR = 0xc118691c - SIOCGLIFPHYADDR = 0xc118694b - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGPRIVATE_0 = 0xc0206950 - SIOCGPRIVATE_1 = 0xc0206951 - SIOCIFCREATE = 0xc020697a - SIOCIFCREATE2 = 0xc020697c - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc00c6978 - SIOCSDRVSPEC = 0x801c697b - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020691e - SIOCSIFDESCR = 0x80206929 - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFIB = 
0x8020695d - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMAC = 0x80206927 - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x80206934 - SIOCSIFNAME = 0x80206928 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSIFPHYS = 0x80206936 - SIOCSIFRVNET = 0xc020695b - SIOCSIFVNET = 0xc020695a - SIOCSLIFPHYADDR = 0x8118694a - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_CLOEXEC = 0x10000000 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_NONBLOCK = 0x20000000 - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ACCEPTFILTER = 0x1000 - SO_BINTIME = 0x2000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LABEL = 0x1009 - SO_LINGER = 0x80 - SO_LISTENINCQLEN = 0x1013 - SO_LISTENQLEN = 0x1012 - SO_LISTENQLIMIT = 0x1011 - SO_NOSIGPIPE = 0x800 - SO_NO_DDP = 0x8000 - SO_NO_OFFLOAD = 0x4000 - SO_OOBINLINE = 0x100 - SO_PEERLABEL = 0x1010 - SO_PROTOCOL = 0x1016 - SO_PROTOTYPE = 0x1016 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_SETFIB = 0x1014 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - SO_USER_COOKIE = 0x1015 - SO_VENDOR = 0x80000000 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CA_NAME_MAX = 0x10 - TCP_CONGESTION = 0x40 - TCP_INFO = 0x20 - TCP_KEEPCNT = 0x400 - TCP_KEEPIDLE = 0x100 - TCP_KEEPINIT = 0x80 - TCP_KEEPINTVL = 0x200 - TCP_MAXBURST = 0x4 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x4 - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0x10 - TCP_MINMSS = 0xd8 - TCP_MSS = 0x218 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_VENDOR = 0x80000000 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGPTN = 0x4004740f - TIOCGSID = 0x40047463 - TIOCGWINSZ = 0x40087468 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 0x4004746a - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DCD = 0x40 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTMASTER = 0x2000741c - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2004745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40087459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VERASE2 = 0x7 - VINTR 
= 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x4 - WCOREFLAG = 0x80 - WEXITED = 0x10 - WLINUXCLONE = 0x80000000 - WNOHANG = 0x1 - WNOWAIT = 0x8 - WSTOPPED = 0x2 - WTRAPPED = 0x20 - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADMSG = syscall.Errno(0x59) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x55) - ECAPMODE = syscall.Errno(0x5e) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDOOFUS = syscall.Errno(0x58) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x52) - EILSEQ = syscall.Errno(0x56) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x60) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5a) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x57) - ENOBUFS = syscall.Errno(0x37) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x5b) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x53) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCAPABLE = syscall.Errno(0x5d) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTRECOVERABLE = syscall.Errno(0x5f) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x54) - EOWNERDEAD = syscall.Errno(0x60) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x5c) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = 
syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGLIBRT = syscall.Signal(0x21) - SIGLWP = syscall.Signal(0x20) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTHR = syscall.Signal(0x20) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "operation timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: 
"RPC version wrong", - 74: "RPC prog. not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "identifier removed", - 83: "no message of desired type", - 84: "value too large to be stored in data type", - 85: "operation canceled", - 86: "illegal byte sequence", - 87: "attribute not found", - 88: "programming error", - 89: "bad message", - 90: "multihop attempted", - 91: "link has been severed", - 92: "protocol error", - 93: "capabilities insufficient", - 94: "not permitted in capability mode", - 95: "state not recoverable", - 96: "previous owner died", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "suspended (signal)", - 18: "suspended", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "unknown signal", - 33: "unknown signal", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_amd64.go deleted file mode 100644 index e48e7799a1d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_amd64.go +++ /dev/null @@ -1,1748 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,freebsd - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_ARP = 0x23 - AF_ATM = 0x1e - AF_BLUETOOTH = 0x24 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x25 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1c - AF_INET6_SDP = 0x2a - AF_INET_SDP = 0x28 - AF_IPX = 0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x2a - AF_NATM = 0x1d - AF_NETBIOS = 0x6 - AF_NETGRAPH = 0x20 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x11 - AF_SCLUSTER = 0x22 - AF_SIP = 0x18 - AF_SLOW = 0x21 - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_VENDOR00 = 0x27 - AF_VENDOR01 = 0x29 - AF_VENDOR02 = 0x2b - AF_VENDOR03 = 0x2d - AF_VENDOR04 = 0x2f - AF_VENDOR05 = 0x31 - AF_VENDOR06 = 0x33 - AF_VENDOR07 = 0x35 - AF_VENDOR08 = 0x37 - AF_VENDOR09 = 0x39 - AF_VENDOR10 = 0x3b - AF_VENDOR11 = 0x3d - AF_VENDOR12 = 0x3f - AF_VENDOR13 = 0x41 - AF_VENDOR14 = 0x43 - AF_VENDOR15 = 0x45 - AF_VENDOR16 = 0x47 - AF_VENDOR17 = 0x49 - AF_VENDOR18 = 0x4b - AF_VENDOR19 = 0x4d - AF_VENDOR20 = 0x4f - AF_VENDOR21 = 0x51 - AF_VENDOR22 = 0x53 - AF_VENDOR23 = 0x55 - AF_VENDOR24 = 0x57 - AF_VENDOR25 = 0x59 - AF_VENDOR26 = 0x5b - AF_VENDOR27 = 0x5d - AF_VENDOR28 
= 0x5f - AF_VENDOR29 = 0x61 - AF_VENDOR30 = 0x63 - AF_VENDOR31 = 0x65 - AF_VENDOR32 = 0x67 - AF_VENDOR33 = 0x69 - AF_VENDOR34 = 0x6b - AF_VENDOR35 = 0x6d - AF_VENDOR36 = 0x6f - AF_VENDOR37 = 0x71 - AF_VENDOR38 = 0x73 - AF_VENDOR39 = 0x75 - AF_VENDOR40 = 0x77 - AF_VENDOR41 = 0x79 - AF_VENDOR42 = 0x7b - AF_VENDOR43 = 0x7d - AF_VENDOR44 = 0x7f - AF_VENDOR45 = 0x81 - AF_VENDOR46 = 0x83 - AF_VENDOR47 = 0x85 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B460800 = 0x70800 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B921600 = 0xe1000 - B9600 = 0x2580 - BIOCFEEDBACK = 0x8004427c - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDIRECTION = 0x40044276 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc0104279 - BIOCGETBUFMODE = 0x4004427d - BIOCGETIF = 0x4020426b - BIOCGETZMAX = 0x4008427f - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4010426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCGTSTAMP = 0x40044283 - BIOCIMMEDIATE = 0x80044270 - BIOCLOCK = 0x2000427a - BIOCPROMISC = 0x20004269 - BIOCROTZBUF = 0x40184280 - BIOCSBLEN = 0xc0044266 - BIOCSDIRECTION = 0x80044277 - BIOCSDLT = 0x80044278 - BIOCSETBUFMODE = 0x8004427e - BIOCSETF = 0x80104267 - BIOCSETFNR = 0x80104282 - BIOCSETIF = 0x8020426c - BIOCSETWF = 0x8010427b - BIOCSETZBUF = 0x80184281 - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8010426d - BIOCSSEESENT = 0x80044277 - BIOCSTSTAMP = 0x80044284 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x8 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_BUFMODE_BUFFER = 0x1 - BPF_BUFMODE_ZBUF = 0x2 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_T_BINTIME = 0x2 - BPF_T_BINTIME_FAST = 0x102 - BPF_T_BINTIME_MONOTONIC = 0x202 - BPF_T_BINTIME_MONOTONIC_FAST = 0x302 - BPF_T_FAST = 0x100 - BPF_T_FLAG_MASK = 0x300 - BPF_T_FORMAT_MASK = 0x3 - BPF_T_MICROTIME = 0x0 - BPF_T_MICROTIME_FAST = 0x100 - BPF_T_MICROTIME_MONOTONIC = 0x200 - BPF_T_MICROTIME_MONOTONIC_FAST = 0x300 - BPF_T_MONOTONIC = 0x200 - BPF_T_MONOTONIC_FAST = 0x300 - BPF_T_NANOTIME = 0x1 - BPF_T_NANOTIME_FAST = 0x101 - BPF_T_NANOTIME_MONOTONIC = 0x201 - BPF_T_NANOTIME_MONOTONIC_FAST = 0x301 - BPF_T_NONE = 0x3 - BPF_T_NORMAL = 0x0 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CLOCK_MONOTONIC = 0x4 - CLOCK_MONOTONIC_FAST = 0xc - CLOCK_MONOTONIC_PRECISE = 0xb - CLOCK_PROCESS_CPUTIME_ID = 0xf - CLOCK_PROF = 0x2 - CLOCK_REALTIME = 0x0 - CLOCK_REALTIME_FAST = 0xa - CLOCK_REALTIME_PRECISE = 0x9 - CLOCK_SECOND = 0xd - CLOCK_THREAD_CPUTIME_ID = 0xe - CLOCK_UPTIME = 0x5 - CLOCK_UPTIME_FAST = 0x8 - CLOCK_UPTIME_PRECISE = 0x7 - CLOCK_VIRTUAL = 0x1 - CREAD = 0x800 - CS5 = 
0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0x18 - CTL_NET = 0x4 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DBUS = 0xe7 - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_DVB_CI = 0xeb - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HHDLC = 0x79 - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NOFCS = 0xe6 - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPFILTER = 0x74 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPOIB = 0xf2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_ATM_CEMIC = 0xee - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FIBRECHANNEL = 0xea - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_SRX_E2E = 0xe9 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_JUNIPER_VS = 0xe8 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_PPP_WITHDIRECTION = 0xa6 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MATCHING_MAX = 0xf6 - DLT_MATCHING_MIN = 0x68 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPEG_2_TS = 0xf3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_MUX27010 = 0xec - DLT_NETANALYZER = 0xf0 - DLT_NETANALYZER_TRANSPARENT = 0xf1 - DLT_NFC_LLCP = 0xf5 - DLT_NFLOG = 0xef - DLT_NG40 = 0xf4 - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x79 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PPP_WITH_DIRECTION = 0xa6 - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DLT_STANAG_5066_D_PDU = 0xed - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_USER0 = 0x93 - 
DLT_USER1 = 0x94 - DLT_USER10 = 0x9d - DLT_USER11 = 0x9e - DLT_USER12 = 0x9f - DLT_USER13 = 0xa0 - DLT_USER14 = 0xa1 - DLT_USER15 = 0xa2 - DLT_USER2 = 0x95 - DLT_USER3 = 0x96 - DLT_USER4 = 0x97 - DLT_USER5 = 0x98 - DLT_USER6 = 0x99 - DLT_USER7 = 0x9a - DLT_USER8 = 0x9b - DLT_USER9 = 0x9c - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_FS = -0x9 - EVFILT_LIO = -0xa - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0xb - EVFILT_TIMER = -0x7 - EVFILT_USER = -0xb - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_DISPATCH = 0x80 - EV_DROP = 0x1000 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTATTR_NAMESPACE_EMPTY = 0x0 - EXTATTR_NAMESPACE_SYSTEM = 0x2 - EXTATTR_NAMESPACE_USER = 0x1 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_CANCEL = 0x5 - F_DUP2FD = 0xa - F_DUP2FD_CLOEXEC = 0x12 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x11 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0xb - F_GETOWN = 0x5 - F_OGETLK = 0x7 - F_OK = 0x0 - F_OSETLK = 0x8 - F_OSETLKW = 0x9 - F_RDAHEAD = 0x10 - F_RDLCK = 0x1 - F_READAHEAD = 0xf - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0xc - F_SETLKW = 0xd - F_SETLK_REMOTE = 0xe - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_UNLCKSYS = 0x4 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x218f72 - IFF_CANTCONFIG = 0x10000 - IFF_DEBUG = 0x4 - IFF_DRV_OACTIVE = 0x400 - IFF_DRV_RUNNING = 0x40 - IFF_DYING = 0x200000 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MONITOR = 0x40000 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PPROMISC = 0x20000 - IFF_PROMISC = 0x100 - IFF_RENAMING = 0x400000 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_SMART = 0x20 - IFF_STATICARP = 0x80000 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf8 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM 
= 0x94 - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf2 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INFINIBAND = 0xc7 - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_IPXIP = 0xf9 - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf6 - IFT_PFSYNC = 0xf7 - IFT_PLC = 0xae - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf1 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_STF = 0xd7 - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VOICEEM = 0x64 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - 
IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IN_RFC3021_MASK = 0xfffffffe - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CARP = 0x70 - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0x102 - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HIP = 0x8b - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MH = 0x87 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MOBILE = 0x37 - IPPROTO_MPLS = 0x89 - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OLD_DIVERT = 0xfe - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_RESERVED_253 = 0xfd - IPPROTO_RESERVED_254 = 0xfe - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEND = 0x103 - IPPROTO_SEP = 0x21 - IPPROTO_SHIM6 = 0x8c - IPPROTO_SKIP = 0x39 - IPPROTO_SPACER = 0x7fff - IPPROTO_SRPC = 0x5a - IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TLSP = 0x38 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_UDPLITE = 0x88 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_AUTOFLOWLABEL = 0x3b - IPV6_BINDANY = 0x40 - IPV6_BINDV6ONLY = 0x1b - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL 
= 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXOPTHDR = 0x800 - IPV6_MAXPACKET = 0xffff - IPV6_MAX_GROUP_SRC_FILTER = 0x200 - IPV6_MAX_MEMBERSHIPS = 0xfff - IPV6_MAX_SOCK_SRC_FILTER = 0x80 - IPV6_MIN_MEMBERSHIPS = 0x1f - IPV6_MMTU = 0x500 - IPV6_MSFILTER = 0x4a - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_PATHMTU = 0x2c - IPV6_PKTINFO = 0x2e - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_PREFER_TEMPADDR = 0x3f - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_ADD_SOURCE_MEMBERSHIP = 0x46 - IP_BINDANY = 0x18 - IP_BLOCK_SOURCE = 0x48 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DONTFRAG = 0x43 - IP_DROP_MEMBERSHIP = 0xd - IP_DROP_SOURCE_MEMBERSHIP = 0x47 - IP_DUMMYNET3 = 0x31 - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW3 = 0x30 - IP_FW_ADD = 0x32 - IP_FW_DEL = 0x33 - IP_FW_FLUSH = 0x34 - IP_FW_GET = 0x36 - IP_FW_NAT_CFG = 0x38 - IP_FW_NAT_DEL = 0x39 - IP_FW_NAT_GET_CONFIG = 0x3a - IP_FW_NAT_GET_LOG = 0x3b - IP_FW_RESETLOG = 0x37 - IP_FW_TABLE_ADD = 0x28 - IP_FW_TABLE_DEL = 0x29 - IP_FW_TABLE_FLUSH = 0x2a - IP_FW_TABLE_GETSIZE = 0x2b - IP_FW_TABLE_LIST = 0x2c - IP_FW_ZERO = 0x35 - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_GROUP_SRC_FILTER = 0x200 - IP_MAX_MEMBERSHIPS = 0xfff - IP_MAX_SOCK_MUTE_FILTER = 0x80 - IP_MAX_SOCK_SRC_FILTER = 0x80 - IP_MAX_SOURCE_FILTER = 0x400 - IP_MF = 0x2000 - IP_MINTTL = 0x42 - IP_MIN_MEMBERSHIPS = 0x1f - IP_MSFILTER = 0x4a - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_MULTICAST_VIF = 0xe - IP_OFFMASK = 0x1fff - IP_ONESBCAST = 0x17 - IP_OPTIONS = 0x1 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVTOS = 0x44 - IP_RECVTTL = 0x41 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_SENDSRCADDR = 0x7 - IP_TOS = 0x3 - IP_TTL = 0x4 - IP_UNBLOCK_SOURCE = 0x49 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_AUTOSYNC = 0x7 - MADV_CORE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_NOCORE = 0x8 - MADV_NORMAL = 0x0 - MADV_NOSYNC = 0x6 - MADV_PROTECT = 0xa - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_WILLNEED = 0x3 - MAP_32BIT = 0x80000 - MAP_ALIGNED_SUPER = 0x1000000 - MAP_ALIGNMENT_MASK = -0x1000000 - MAP_ALIGNMENT_SHIFT = 0x18 - MAP_ANON = 0x1000 - MAP_ANONYMOUS = 0x1000 - MAP_COPY = 0x2 - MAP_EXCL = 0x4000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - 
MAP_HASSEMAPHORE = 0x200 - MAP_NOCORE = 0x20000 - MAP_NORESERVE = 0x40 - MAP_NOSYNC = 0x800 - MAP_PREFAULT_READ = 0x40000 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_RESERVED0080 = 0x80 - MAP_RESERVED0100 = 0x100 - MAP_SHARED = 0x1 - MAP_STACK = 0x400 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CMSG_CLOEXEC = 0x40000 - MSG_COMPAT = 0x8000 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - MSG_EOR = 0x8 - MSG_NBIO = 0x4000 - MSG_NOSIGNAL = 0x20000 - MSG_NOTIFICATION = 0x2000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x2 - MS_SYNC = 0x0 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_IFLISTL = 0x5 - NET_RT_IFMALIST = 0x4 - NET_RT_MAXID = 0x6 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FFAND = 0x40000000 - NOTE_FFCOPY = 0xc0000000 - NOTE_FFCTRLMASK = 0xc0000000 - NOTE_FFLAGSMASK = 0xffffff - NOTE_FFNOP = 0x0 - NOTE_FFOR = 0x80000000 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_MSECONDS = 0x2 - NOTE_NSECONDS = 0x8 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_SECONDS = 0x1 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRIGGER = 0x1000000 - NOTE_USECONDS = 0x4 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x100000 - O_CREAT = 0x200 - O_DIRECT = 0x10000 - O_DIRECTORY = 0x20000 - O_EXCL = 0x800 - O_EXEC = 0x40000 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_TTY_INIT = 0x80000 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_AS = 0xa - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x8 - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_FMASK = 0x1004d808 - RTF_GATEWAY = 0x2 - RTF_GWFLAG_COMPAT = 0x80000000 - RTF_HOST = 0x4 - RTF_LLDATA = 0x400 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 0x20 - RTF_MULTICAST = 0x800000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_REJECT = 0x8 - RTF_RNH_LOCKED = 0x40000000 - RTF_STATIC = 0x800 - RTF_STICKY = 0x10000000 - RTF_UP = 0x1 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_IEEE80211 = 0x12 - RTM_IFANNOUNCE = 0x11 - RTM_IFINFO = 0xe - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 
0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RTV_WEIGHT = 0x100 - RT_ALL_FIBS = -0x1 - RT_CACHING_CONTEXT = 0x1 - RT_DEFAULT_FIB = 0x0 - RT_NORTREF = 0x2 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_BINTIME = 0x4 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCADDRT = 0x8040720a - SIOCAIFADDR = 0x8040691a - SIOCAIFGROUP = 0x80286987 - SIOCALIFADDR = 0x8118691b - SIOCATMARK = 0x40047307 - SIOCDELMULTI = 0x80206932 - SIOCDELRT = 0x8040720b - SIOCDIFADDR = 0x80206919 - SIOCDIFGROUP = 0x80286989 - SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8118691d - SIOCGDRVSPEC = 0xc028697b - SIOCGETSGCNT = 0xc0207210 - SIOCGETVIFCNT = 0xc028720f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020691f - SIOCGIFCONF = 0xc0106924 - SIOCGIFDESCR = 0xc020692a - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFIB = 0xc020695c - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFGMEMB = 0xc028698a - SIOCGIFGROUP = 0xc0286988 - SIOCGIFINDEX = 0xc0206920 - SIOCGIFMAC = 0xc0206926 - SIOCGIFMEDIA = 0xc0306938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 - SIOCGIFPHYS = 0xc0206935 - SIOCGIFPSRCADDR = 0xc0206947 - SIOCGIFSTATUS = 0xc331693b - SIOCGLIFADDR = 0xc118691c - SIOCGLIFPHYADDR = 0xc118694b - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGPRIVATE_0 = 0xc0206950 - SIOCGPRIVATE_1 = 0xc0206951 - SIOCIFCREATE = 0xc020697a - SIOCIFCREATE2 = 0xc020697c - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc0106978 - SIOCSDRVSPEC = 0x8028697b - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020691e - SIOCSIFDESCR = 0x80206929 - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFIB = 0x8020695d - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMAC = 0x80206927 - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x80206934 - SIOCSIFNAME = 0x80206928 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSIFPHYS = 0x80206936 - SIOCSIFRVNET = 0xc020695b - SIOCSIFVNET = 0xc020695a - SIOCSLIFPHYADDR = 0x8118694a - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_CLOEXEC = 0x10000000 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_NONBLOCK = 0x20000000 - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ACCEPTFILTER = 0x1000 - SO_BINTIME = 0x2000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LABEL = 0x1009 - SO_LINGER = 0x80 - SO_LISTENINCQLEN = 0x1013 - SO_LISTENQLEN = 0x1012 - SO_LISTENQLIMIT = 0x1011 - SO_NOSIGPIPE = 0x800 - SO_NO_DDP = 0x8000 - SO_NO_OFFLOAD = 0x4000 - SO_OOBINLINE = 0x100 - SO_PEERLABEL = 0x1010 - SO_PROTOCOL = 0x1016 - SO_PROTOTYPE = 0x1016 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_SETFIB = 0x1014 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - SO_USER_COOKIE = 0x1015 - SO_VENDOR = 0x80000000 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CA_NAME_MAX = 0x10 - 
TCP_CONGESTION = 0x40 - TCP_INFO = 0x20 - TCP_KEEPCNT = 0x400 - TCP_KEEPIDLE = 0x100 - TCP_KEEPINIT = 0x80 - TCP_KEEPINTVL = 0x200 - TCP_MAXBURST = 0x4 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x4 - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0x10 - TCP_MINMSS = 0xd8 - TCP_MSS = 0x218 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_VENDOR = 0x80000000 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGPTN = 0x4004740f - TIOCGSID = 0x40047463 - TIOCGWINSZ = 0x40087468 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 0x4004746a - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DCD = 0x40 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTMASTER = 0x2000741c - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2004745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40107459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VERASE2 = 0x7 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x4 - WCOREFLAG = 0x80 - WEXITED = 0x10 - WLINUXCLONE = 0x80000000 - WNOHANG = 0x1 - WNOWAIT = 0x8 - WSTOPPED = 0x2 - WTRAPPED = 0x20 - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADMSG = syscall.Errno(0x59) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x55) - ECAPMODE = syscall.Errno(0x5e) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDOOFUS = syscall.Errno(0x58) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x52) - EILSEQ = syscall.Errno(0x56) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - 
EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x60) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5a) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x57) - ENOBUFS = syscall.Errno(0x37) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x5b) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x53) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCAPABLE = syscall.Errno(0x5d) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTRECOVERABLE = syscall.Errno(0x5f) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x54) - EOWNERDEAD = syscall.Errno(0x60) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x5c) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGLIBRT = syscall.Signal(0x21) - SIGLWP = syscall.Signal(0x20) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTHR = syscall.Signal(0x20) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: 
"no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "operation timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. 
not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "identifier removed", - 83: "no message of desired type", - 84: "value too large to be stored in data type", - 85: "operation canceled", - 86: "illegal byte sequence", - 87: "attribute not found", - 88: "programming error", - 89: "bad message", - 90: "multihop attempted", - 91: "link has been severed", - 92: "protocol error", - 93: "capabilities insufficient", - 94: "not permitted in capability mode", - 95: "state not recoverable", - 96: "previous owner died", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "suspended (signal)", - 18: "suspended", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "unknown signal", - 33: "unknown signal", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_arm.go deleted file mode 100644 index 2afbe2d5ed7..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_freebsd_arm.go +++ /dev/null @@ -1,1729 +0,0 @@ -// mkerrors.sh -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm,freebsd - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_ARP = 0x23 - AF_ATM = 0x1e - AF_BLUETOOTH = 0x24 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x25 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1c - AF_INET6_SDP = 0x2a - AF_INET_SDP = 0x28 - AF_IPX = 0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x2a - AF_NATM = 0x1d - AF_NETBIOS = 0x6 - AF_NETGRAPH = 0x20 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x11 - AF_SCLUSTER = 0x22 - AF_SIP = 0x18 - AF_SLOW = 0x21 - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_VENDOR00 = 0x27 - AF_VENDOR01 = 0x29 - AF_VENDOR02 = 0x2b - AF_VENDOR03 = 0x2d - AF_VENDOR04 = 0x2f - AF_VENDOR05 = 0x31 - AF_VENDOR06 = 0x33 - AF_VENDOR07 = 0x35 - AF_VENDOR08 = 0x37 - AF_VENDOR09 = 0x39 - AF_VENDOR10 = 0x3b - AF_VENDOR11 = 0x3d - AF_VENDOR12 = 0x3f - AF_VENDOR13 = 0x41 - AF_VENDOR14 = 0x43 - AF_VENDOR15 = 0x45 - AF_VENDOR16 = 0x47 - AF_VENDOR17 = 0x49 - AF_VENDOR18 = 0x4b - AF_VENDOR19 = 0x4d - AF_VENDOR20 = 0x4f - AF_VENDOR21 = 0x51 - AF_VENDOR22 = 0x53 - AF_VENDOR23 = 0x55 - AF_VENDOR24 = 0x57 - AF_VENDOR25 = 0x59 - AF_VENDOR26 = 0x5b - AF_VENDOR27 = 0x5d - AF_VENDOR28 = 0x5f - AF_VENDOR29 = 0x61 - AF_VENDOR30 = 0x63 - 
AF_VENDOR31 = 0x65 - AF_VENDOR32 = 0x67 - AF_VENDOR33 = 0x69 - AF_VENDOR34 = 0x6b - AF_VENDOR35 = 0x6d - AF_VENDOR36 = 0x6f - AF_VENDOR37 = 0x71 - AF_VENDOR38 = 0x73 - AF_VENDOR39 = 0x75 - AF_VENDOR40 = 0x77 - AF_VENDOR41 = 0x79 - AF_VENDOR42 = 0x7b - AF_VENDOR43 = 0x7d - AF_VENDOR44 = 0x7f - AF_VENDOR45 = 0x81 - AF_VENDOR46 = 0x83 - AF_VENDOR47 = 0x85 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B460800 = 0x70800 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B921600 = 0xe1000 - B9600 = 0x2580 - BIOCFEEDBACK = 0x8004427c - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDIRECTION = 0x40044276 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc0084279 - BIOCGETBUFMODE = 0x4004427d - BIOCGETIF = 0x4020426b - BIOCGETZMAX = 0x4004427f - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044272 - BIOCGRTIMEOUT = 0x4008426e - BIOCGSEESENT = 0x40044276 - BIOCGSTATS = 0x4008426f - BIOCGTSTAMP = 0x40044283 - BIOCIMMEDIATE = 0x80044270 - BIOCLOCK = 0x2000427a - BIOCPROMISC = 0x20004269 - BIOCROTZBUF = 0x400c4280 - BIOCSBLEN = 0xc0044266 - BIOCSDIRECTION = 0x80044277 - BIOCSDLT = 0x80044278 - BIOCSETBUFMODE = 0x8004427e - BIOCSETF = 0x80084267 - BIOCSETFNR = 0x80084282 - BIOCSETIF = 0x8020426c - BIOCSETWF = 0x8008427b - BIOCSETZBUF = 0x800c4281 - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044273 - BIOCSRTIMEOUT = 0x8008426d - BIOCSSEESENT = 0x80044277 - BIOCSTSTAMP = 0x80044284 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_BUFMODE_BUFFER = 0x1 - BPF_BUFMODE_ZBUF = 0x2 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x80000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_T_BINTIME = 0x2 - BPF_T_BINTIME_FAST = 0x102 - BPF_T_BINTIME_MONOTONIC = 0x202 - BPF_T_BINTIME_MONOTONIC_FAST = 0x302 - BPF_T_FAST = 0x100 - BPF_T_FLAG_MASK = 0x300 - BPF_T_FORMAT_MASK = 0x3 - BPF_T_MICROTIME = 0x0 - BPF_T_MICROTIME_FAST = 0x100 - BPF_T_MICROTIME_MONOTONIC = 0x200 - BPF_T_MICROTIME_MONOTONIC_FAST = 0x300 - BPF_T_MONOTONIC = 0x200 - BPF_T_MONOTONIC_FAST = 0x300 - BPF_T_NANOTIME = 0x1 - BPF_T_NANOTIME_FAST = 0x101 - BPF_T_NANOTIME_MONOTONIC = 0x201 - BPF_T_NANOTIME_MONOTONIC_FAST = 0x301 - BPF_T_NONE = 0x3 - BPF_T_NORMAL = 0x0 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0x18 - CTL_NET = 0x4 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - 
DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CHDLC = 0x68 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DBUS = 0xe7 - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_DVB_CI = 0xeb - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HHDLC = 0x79 - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NOFCS = 0xe6 - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPFILTER = 0x74 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPOIB = 0xf2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_ATM_CEMIC = 0xee - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FIBRECHANNEL = 0xea - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_SRX_E2E = 0xe9 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_JUNIPER_VS = 0xe8 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_PPP_WITHDIRECTION = 0xa6 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MATCHING_MAX = 0xf6 - DLT_MATCHING_MIN = 0x68 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPEG_2_TS = 0xf3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_MUX27010 = 0xec - DLT_NETANALYZER = 0xf0 - DLT_NETANALYZER_TRANSPARENT = 0xf1 - DLT_NFC_LLCP = 0xf5 - DLT_NFLOG = 0xef - DLT_NG40 = 0xf4 - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x79 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PPP_WITH_DIRECTION = 0xa6 - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DLT_STANAG_5066_D_PDU = 0xed - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_USER0 = 0x93 - DLT_USER1 = 0x94 - DLT_USER10 = 0x9d - DLT_USER11 = 0x9e - DLT_USER12 = 0x9f - DLT_USER13 = 0xa0 - DLT_USER14 = 0xa1 - DLT_USER15 = 0xa2 - DLT_USER2 = 0x95 - DLT_USER3 = 0x96 - DLT_USER4 = 0x97 - DLT_USER5 = 0x98 - DLT_USER6 = 0x99 - DLT_USER7 = 0x9a - DLT_USER8 = 0x9b - DLT_USER9 = 0x9c - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 
0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EVFILT_AIO = -0x3 - EVFILT_FS = -0x9 - EVFILT_LIO = -0xa - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0xb - EVFILT_TIMER = -0x7 - EVFILT_USER = -0xb - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_DISPATCH = 0x80 - EV_DROP = 0x1000 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTATTR_NAMESPACE_EMPTY = 0x0 - EXTATTR_NAMESPACE_SYSTEM = 0x2 - EXTATTR_NAMESPACE_USER = 0x1 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_CANCEL = 0x5 - F_DUP2FD = 0xa - F_DUP2FD_CLOEXEC = 0x12 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x11 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0xb - F_GETOWN = 0x5 - F_OGETLK = 0x7 - F_OK = 0x0 - F_OSETLK = 0x8 - F_OSETLKW = 0x9 - F_RDAHEAD = 0x10 - F_RDLCK = 0x1 - F_READAHEAD = 0xf - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0xc - F_SETLKW = 0xd - F_SETLK_REMOTE = 0xe - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_UNLCKSYS = 0x4 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_ALTPHYS = 0x4000 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x218f72 - IFF_CANTCONFIG = 0x10000 - IFF_DEBUG = 0x4 - IFF_DRV_OACTIVE = 0x400 - IFF_DRV_RUNNING = 0x40 - IFF_DYING = 0x200000 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MONITOR = 0x40000 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PPROMISC = 0x20000 - IFF_PROMISC = 0x100 - IFF_RENAMING = 0x400000 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_SMART = 0x20 - IFF_STATICARP = 0x80000 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf8 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf2 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - 
IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INFINIBAND = 0xc7 - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_IPXIP = 0xf9 - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf6 - IFT_PFSYNC = 0xf7 - IFT_PLC = 0xae - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf1 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_STF = 0xd7 - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VOICEEM = 0x64 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IN_RFC3021_MASK = 
0xfffffffe - IPPROTO_3PC = 0x22 - IPPROTO_ADFS = 0x44 - IPPROTO_AH = 0x33 - IPPROTO_AHIP = 0x3d - IPPROTO_APES = 0x63 - IPPROTO_ARGUS = 0xd - IPPROTO_AX25 = 0x5d - IPPROTO_BHA = 0x31 - IPPROTO_BLT = 0x1e - IPPROTO_BRSATMON = 0x4c - IPPROTO_CARP = 0x70 - IPPROTO_CFTP = 0x3e - IPPROTO_CHAOS = 0x10 - IPPROTO_CMTP = 0x26 - IPPROTO_CPHB = 0x49 - IPPROTO_CPNX = 0x48 - IPPROTO_DDP = 0x25 - IPPROTO_DGP = 0x56 - IPPROTO_DIVERT = 0x102 - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_EMCON = 0xe - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GMTP = 0x64 - IPPROTO_GRE = 0x2f - IPPROTO_HELLO = 0x3f - IPPROTO_HIP = 0x8b - IPPROTO_HMP = 0x14 - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IDPR = 0x23 - IPPROTO_IDRP = 0x2d - IPPROTO_IGMP = 0x2 - IPPROTO_IGP = 0x55 - IPPROTO_IGRP = 0x58 - IPPROTO_IL = 0x28 - IPPROTO_INLSP = 0x34 - IPPROTO_INP = 0x20 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPCV = 0x47 - IPPROTO_IPEIP = 0x5e - IPPROTO_IPIP = 0x4 - IPPROTO_IPPC = 0x43 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IRTP = 0x1c - IPPROTO_KRYPTOLAN = 0x41 - IPPROTO_LARP = 0x5b - IPPROTO_LEAF1 = 0x19 - IPPROTO_LEAF2 = 0x1a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MEAS = 0x13 - IPPROTO_MH = 0x87 - IPPROTO_MHRP = 0x30 - IPPROTO_MICP = 0x5f - IPPROTO_MOBILE = 0x37 - IPPROTO_MPLS = 0x89 - IPPROTO_MTP = 0x5c - IPPROTO_MUX = 0x12 - IPPROTO_ND = 0x4d - IPPROTO_NHRP = 0x36 - IPPROTO_NONE = 0x3b - IPPROTO_NSP = 0x1f - IPPROTO_NVPII = 0xb - IPPROTO_OLD_DIVERT = 0xfe - IPPROTO_OSPFIGP = 0x59 - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PGM = 0x71 - IPPROTO_PIGP = 0x9 - IPPROTO_PIM = 0x67 - IPPROTO_PRM = 0x15 - IPPROTO_PUP = 0xc - IPPROTO_PVP = 0x4b - IPPROTO_RAW = 0xff - IPPROTO_RCCMON = 0xa - IPPROTO_RDP = 0x1b - IPPROTO_RESERVED_253 = 0xfd - IPPROTO_RESERVED_254 = 0xfe - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_RVD = 0x42 - IPPROTO_SATEXPAK = 0x40 - IPPROTO_SATMON = 0x45 - IPPROTO_SCCSP = 0x60 - IPPROTO_SCTP = 0x84 - IPPROTO_SDRP = 0x2a - IPPROTO_SEND = 0x103 - IPPROTO_SEP = 0x21 - IPPROTO_SHIM6 = 0x8c - IPPROTO_SKIP = 0x39 - IPPROTO_SPACER = 0x7fff - IPPROTO_SRPC = 0x5a - IPPROTO_ST = 0x7 - IPPROTO_SVMTP = 0x52 - IPPROTO_SWIPE = 0x35 - IPPROTO_TCF = 0x57 - IPPROTO_TCP = 0x6 - IPPROTO_TLSP = 0x38 - IPPROTO_TP = 0x1d - IPPROTO_TPXX = 0x27 - IPPROTO_TRUNK1 = 0x17 - IPPROTO_TRUNK2 = 0x18 - IPPROTO_TTP = 0x54 - IPPROTO_UDP = 0x11 - IPPROTO_UDPLITE = 0x88 - IPPROTO_VINES = 0x53 - IPPROTO_VISA = 0x46 - IPPROTO_VMTP = 0x51 - IPPROTO_WBEXPAK = 0x4f - IPPROTO_WBMON = 0x4e - IPPROTO_WSN = 0x4a - IPPROTO_XNET = 0xf - IPPROTO_XTP = 0x24 - IPV6_AUTOFLOWLABEL = 0x3b - IPV6_BINDANY = 0x40 - IPV6_BINDV6ONLY = 0x1b - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_FW_ADD = 0x1e - IPV6_FW_DEL = 0x1f - IPV6_FW_FLUSH = 0x20 - IPV6_FW_GET = 0x22 - IPV6_FW_ZERO = 0x21 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXOPTHDR = 0x800 - IPV6_MAXPACKET = 0xffff - IPV6_MAX_GROUP_SRC_FILTER = 0x200 - IPV6_MAX_MEMBERSHIPS = 0xfff - IPV6_MAX_SOCK_SRC_FILTER = 0x80 - IPV6_MIN_MEMBERSHIPS = 
0x1f - IPV6_MMTU = 0x500 - IPV6_MSFILTER = 0x4a - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_PATHMTU = 0x2c - IPV6_PKTINFO = 0x2e - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_PREFER_TEMPADDR = 0x3f - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_ADD_SOURCE_MEMBERSHIP = 0x46 - IP_BINDANY = 0x18 - IP_BLOCK_SOURCE = 0x48 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DONTFRAG = 0x43 - IP_DROP_MEMBERSHIP = 0xd - IP_DROP_SOURCE_MEMBERSHIP = 0x47 - IP_DUMMYNET3 = 0x31 - IP_DUMMYNET_CONFIGURE = 0x3c - IP_DUMMYNET_DEL = 0x3d - IP_DUMMYNET_FLUSH = 0x3e - IP_DUMMYNET_GET = 0x40 - IP_FAITH = 0x16 - IP_FW3 = 0x30 - IP_FW_ADD = 0x32 - IP_FW_DEL = 0x33 - IP_FW_FLUSH = 0x34 - IP_FW_GET = 0x36 - IP_FW_NAT_CFG = 0x38 - IP_FW_NAT_DEL = 0x39 - IP_FW_NAT_GET_CONFIG = 0x3a - IP_FW_NAT_GET_LOG = 0x3b - IP_FW_RESETLOG = 0x37 - IP_FW_TABLE_ADD = 0x28 - IP_FW_TABLE_DEL = 0x29 - IP_FW_TABLE_FLUSH = 0x2a - IP_FW_TABLE_GETSIZE = 0x2b - IP_FW_TABLE_LIST = 0x2c - IP_FW_ZERO = 0x35 - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x15 - IP_MAXPACKET = 0xffff - IP_MAX_GROUP_SRC_FILTER = 0x200 - IP_MAX_MEMBERSHIPS = 0xfff - IP_MAX_SOCK_MUTE_FILTER = 0x80 - IP_MAX_SOCK_SRC_FILTER = 0x80 - IP_MAX_SOURCE_FILTER = 0x400 - IP_MF = 0x2000 - IP_MINTTL = 0x42 - IP_MIN_MEMBERSHIPS = 0x1f - IP_MSFILTER = 0x4a - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_MULTICAST_VIF = 0xe - IP_OFFMASK = 0x1fff - IP_ONESBCAST = 0x17 - IP_OPTIONS = 0x1 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVTOS = 0x44 - IP_RECVTTL = 0x41 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RSVP_OFF = 0x10 - IP_RSVP_ON = 0xf - IP_RSVP_VIF_OFF = 0x12 - IP_RSVP_VIF_ON = 0x11 - IP_SENDSRCADDR = 0x7 - IP_TOS = 0x3 - IP_TTL = 0x4 - IP_UNBLOCK_SOURCE = 0x49 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_AUTOSYNC = 0x7 - MADV_CORE = 0x9 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_NOCORE = 0x8 - MADV_NORMAL = 0x0 - MADV_NOSYNC = 0x6 - MADV_PROTECT = 0xa - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_WILLNEED = 0x3 - MAP_ALIGNED_SUPER = 0x1000000 - MAP_ALIGNMENT_MASK = -0x1000000 - MAP_ALIGNMENT_SHIFT = 0x18 - MAP_ANON = 0x1000 - MAP_ANONYMOUS = 0x1000 - MAP_COPY = 0x2 - MAP_EXCL = 0x4000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_NOCORE = 0x20000 - MAP_NORESERVE = 0x40 - MAP_NOSYNC = 0x800 - MAP_PREFAULT_READ = 0x40000 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_RESERVED0080 = 0x80 - MAP_RESERVED0100 = 0x100 - MAP_SHARED = 0x1 - MAP_STACK = 0x400 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CMSG_CLOEXEC = 0x40000 - MSG_COMPAT = 0x8000 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOF = 0x100 - MSG_EOR = 0x8 
- MSG_NBIO = 0x4000 - MSG_NOSIGNAL = 0x20000 - MSG_NOTIFICATION = 0x2000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x2 - MS_SYNC = 0x0 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_IFLISTL = 0x5 - NET_RT_IFMALIST = 0x4 - NET_RT_MAXID = 0x6 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FFAND = 0x40000000 - NOTE_FFCOPY = 0xc0000000 - NOTE_FFCTRLMASK = 0xc0000000 - NOTE_FFLAGSMASK = 0xffffff - NOTE_FFNOP = 0x0 - NOTE_FFOR = 0x80000000 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRIGGER = 0x1000000 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x100000 - O_CREAT = 0x200 - O_DIRECT = 0x10000 - O_DIRECTORY = 0x20000 - O_EXCL = 0x800 - O_EXEC = 0x40000 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_TTY_INIT = 0x80000 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_AS = 0xa - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x8 - RTAX_NETMASK = 0x2 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTF_BLACKHOLE = 0x1000 - RTF_BROADCAST = 0x400000 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_FMASK = 0x1004d808 - RTF_GATEWAY = 0x2 - RTF_GWFLAG_COMPAT = 0x80000000 - RTF_HOST = 0x4 - RTF_LLDATA = 0x400 - RTF_LLINFO = 0x400 - RTF_LOCAL = 0x200000 - RTF_MODIFIED = 0x20 - RTF_MULTICAST = 0x800000 - RTF_PINNED = 0x100000 - RTF_PRCLONING = 0x10000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x40000 - RTF_REJECT = 0x8 - RTF_RNH_LOCKED = 0x40000000 - RTF_STATIC = 0x800 - RTF_STICKY = 0x10000000 - RTF_UP = 0x1 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DELMADDR = 0x10 - RTM_GET = 0x4 - RTM_IEEE80211 = 0x12 - RTM_IFANNOUNCE = 0x11 - RTM_IFINFO = 0xe - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_NEWMADDR = 0xf - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RTV_WEIGHT = 0x100 - RT_ALL_FIBS = -0x1 - RT_CACHING_CONTEXT = 0x1 - RT_DEFAULT_FIB = 0x0 - RT_NORTREF = 0x2 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_BINTIME = 0x4 - SCM_CREDS = 0x3 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x2 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCADDRT = 
0x8030720a - SIOCAIFADDR = 0x8040691a - SIOCAIFGROUP = 0x80246987 - SIOCALIFADDR = 0x8118691b - SIOCATMARK = 0x40047307 - SIOCDELMULTI = 0x80206932 - SIOCDELRT = 0x8030720b - SIOCDIFADDR = 0x80206919 - SIOCDIFGROUP = 0x80246989 - SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8118691d - SIOCGDRVSPEC = 0xc01c697b - SIOCGETSGCNT = 0xc0147210 - SIOCGETVIFCNT = 0xc014720f - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCAP = 0xc020691f - SIOCGIFCONF = 0xc0086924 - SIOCGIFDESCR = 0xc020692a - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFIB = 0xc020695c - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGENERIC = 0xc020693a - SIOCGIFGMEMB = 0xc024698a - SIOCGIFGROUP = 0xc0246988 - SIOCGIFINDEX = 0xc0206920 - SIOCGIFMAC = 0xc0206926 - SIOCGIFMEDIA = 0xc0286938 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc0206933 - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 - SIOCGIFPHYS = 0xc0206935 - SIOCGIFPSRCADDR = 0xc0206947 - SIOCGIFSTATUS = 0xc331693b - SIOCGLIFADDR = 0xc118691c - SIOCGLIFPHYADDR = 0xc118694b - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGPRIVATE_0 = 0xc0206950 - SIOCGPRIVATE_1 = 0xc0206951 - SIOCIFCREATE = 0xc020697a - SIOCIFCREATE2 = 0xc020697c - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc00c6978 - SIOCSDRVSPEC = 0x801c697b - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFCAP = 0x8020691e - SIOCSIFDESCR = 0x80206929 - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFIB = 0x8020695d - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGENERIC = 0x80206939 - SIOCSIFLLADDR = 0x8020693c - SIOCSIFMAC = 0x80206927 - SIOCSIFMEDIA = 0xc0206937 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x80206934 - SIOCSIFNAME = 0x80206928 - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSIFPHYS = 0x80206936 - SIOCSIFRVNET = 0xc020695b - SIOCSIFVNET = 0xc020695a - SIOCSLIFPHYADDR = 0x8118694a - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SOCK_CLOEXEC = 0x10000000 - SOCK_DGRAM = 0x2 - SOCK_MAXADDRLEN = 0xff - SOCK_NONBLOCK = 0x20000000 - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ACCEPTFILTER = 0x1000 - SO_BINTIME = 0x2000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LABEL = 0x1009 - SO_LINGER = 0x80 - SO_LISTENINCQLEN = 0x1013 - SO_LISTENQLEN = 0x1012 - SO_LISTENQLIMIT = 0x1011 - SO_NOSIGPIPE = 0x800 - SO_NO_DDP = 0x8000 - SO_NO_OFFLOAD = 0x4000 - SO_OOBINLINE = 0x100 - SO_PEERLABEL = 0x1010 - SO_PROTOCOL = 0x1016 - SO_PROTOTYPE = 0x1016 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_SETFIB = 0x1014 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_TIMESTAMP = 0x400 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - SO_USER_COOKIE = 0x1015 - SO_VENDOR = 0x80000000 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CA_NAME_MAX = 0x10 - TCP_CONGESTION = 0x40 - TCP_INFO = 0x20 - TCP_KEEPCNT = 0x400 - TCP_KEEPIDLE = 0x100 - TCP_KEEPINIT = 0x80 - TCP_KEEPINTVL = 0x200 - TCP_MAXBURST = 0x4 - TCP_MAXHLEN = 0x3c - TCP_MAXOLEN = 0x28 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x4 - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0x10 - TCP_MINMSS = 0xd8 - TCP_MSS = 0x218 - TCP_NODELAY = 0x1 - TCP_NOOPT = 0x8 - TCP_NOPUSH = 0x4 - TCP_VENDOR = 0x80000000 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDRAIN = 0x2000745e - 
TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLUSH = 0x80047410 - TIOCGDRAINWAIT = 0x40047456 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGPGRP = 0x40047477 - TIOCGPTN = 0x4004740f - TIOCGSID = 0x40047463 - TIOCGWINSZ = 0x40087468 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGDTRWAIT = 0x4004745a - TIOCMGET = 0x4004746a - TIOCMSDTRWAIT = 0x8004745b - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DCD = 0x40 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTMASTER = 0x2000741c - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDRAINWAIT = 0x80047457 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSIG = 0x2004745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCTIMESTAMP = 0x40087459 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VERASE2 = 0x7 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WCONTINUED = 0x4 - WCOREFLAG = 0x80 - WEXITED = 0x10 - WLINUXCLONE = 0x80000000 - WNOHANG = 0x1 - WNOWAIT = 0x8 - WSTOPPED = 0x2 - WTRAPPED = 0x20 - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADMSG = syscall.Errno(0x59) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x55) - ECAPMODE = syscall.Errno(0x5e) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDOOFUS = syscall.Errno(0x58) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x52) - EILSEQ = syscall.Errno(0x56) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x60) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5a) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x57) - ENOBUFS = syscall.Errno(0x37) - ENODEV = syscall.Errno(0x13) - ENOENT = 
syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x5b) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x53) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCAPABLE = syscall.Errno(0x5d) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTRECOVERABLE = syscall.Errno(0x5f) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x2d) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x54) - EOWNERDEAD = syscall.Errno(0x60) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x5c) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGLIBRT = syscall.Signal(0x21) - SIGLWP = syscall.Signal(0x20) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTHR = syscall.Signal(0x20) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left 
on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "operation timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "identifier removed", - 83: "no message of desired type", - 84: "value too large to be stored in data type", - 85: "operation canceled", - 86: "illegal byte sequence", - 87: "attribute not found", - 88: "programming error", - 89: "bad message", - 90: "multihop attempted", - 91: "link has been severed", - 92: "protocol error", - 93: "capabilities insufficient", - 94: "not permitted in capability mode", - 95: "state not recoverable", - 96: "previous owner died", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "suspended (signal)", - 18: "suspended", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "unknown signal", - 33: "unknown signal", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_386.go deleted file mode 100644 index d370be0ecb3..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_386.go +++ /dev/null @@ -1,1817 +0,0 @@ -// mkerrors.sh -m32 -// 
MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,linux - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m32 _const.go - -package unix - -import "syscall" - -const ( - AF_ALG = 0x26 - AF_APPLETALK = 0x5 - AF_ASH = 0x12 - AF_ATMPVC = 0x8 - AF_ATMSVC = 0x14 - AF_AX25 = 0x3 - AF_BLUETOOTH = 0x1f - AF_BRIDGE = 0x7 - AF_CAIF = 0x25 - AF_CAN = 0x1d - AF_DECnet = 0xc - AF_ECONET = 0x13 - AF_FILE = 0x1 - AF_IEEE802154 = 0x24 - AF_INET = 0x2 - AF_INET6 = 0xa - AF_IPX = 0x4 - AF_IRDA = 0x17 - AF_ISDN = 0x22 - AF_IUCV = 0x20 - AF_KEY = 0xf - AF_LLC = 0x1a - AF_LOCAL = 0x1 - AF_MAX = 0x28 - AF_NETBEUI = 0xd - AF_NETLINK = 0x10 - AF_NETROM = 0x6 - AF_NFC = 0x27 - AF_PACKET = 0x11 - AF_PHONET = 0x23 - AF_PPPOX = 0x18 - AF_RDS = 0x15 - AF_ROSE = 0xb - AF_ROUTE = 0x10 - AF_RXRPC = 0x21 - AF_SECURITY = 0xe - AF_SNA = 0x16 - AF_TIPC = 0x1e - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_WANPIPE = 0x19 - AF_X25 = 0x9 - ARPHRD_ADAPT = 0x108 - ARPHRD_APPLETLK = 0x8 - ARPHRD_ARCNET = 0x7 - ARPHRD_ASH = 0x30d - ARPHRD_ATM = 0x13 - ARPHRD_AX25 = 0x3 - ARPHRD_BIF = 0x307 - ARPHRD_CAIF = 0x336 - ARPHRD_CAN = 0x118 - ARPHRD_CHAOS = 0x5 - ARPHRD_CISCO = 0x201 - ARPHRD_CSLIP = 0x101 - ARPHRD_CSLIP6 = 0x103 - ARPHRD_DDCMP = 0x205 - ARPHRD_DLCI = 0xf - ARPHRD_ECONET = 0x30e - ARPHRD_EETHER = 0x2 - ARPHRD_ETHER = 0x1 - ARPHRD_EUI64 = 0x1b - ARPHRD_FCAL = 0x311 - ARPHRD_FCFABRIC = 0x313 - ARPHRD_FCPL = 0x312 - ARPHRD_FCPP = 0x310 - ARPHRD_FDDI = 0x306 - ARPHRD_FRAD = 0x302 - ARPHRD_HDLC = 0x201 - ARPHRD_HIPPI = 0x30c - ARPHRD_HWX25 = 0x110 - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - ARPHRD_IEEE80211 = 0x321 - ARPHRD_IEEE80211_PRISM = 0x322 - ARPHRD_IEEE80211_RADIOTAP = 0x323 - ARPHRD_IEEE802154 = 0x324 - ARPHRD_IEEE802_TR = 0x320 - ARPHRD_INFINIBAND = 0x20 - ARPHRD_IPDDP = 0x309 - ARPHRD_IPGRE = 0x30a - ARPHRD_IRDA = 0x30f - ARPHRD_LAPB = 0x204 - ARPHRD_LOCALTLK = 0x305 - ARPHRD_LOOPBACK = 0x304 - ARPHRD_METRICOM = 0x17 - ARPHRD_NETROM = 0x0 - ARPHRD_NONE = 0xfffe - ARPHRD_PHONET = 0x334 - ARPHRD_PHONET_PIPE = 0x335 - ARPHRD_PIMREG = 0x30b - ARPHRD_PPP = 0x200 - ARPHRD_PRONET = 0x4 - ARPHRD_RAWHDLC = 0x206 - ARPHRD_ROSE = 0x10e - ARPHRD_RSRVD = 0x104 - ARPHRD_SIT = 0x308 - ARPHRD_SKIP = 0x303 - ARPHRD_SLIP = 0x100 - ARPHRD_SLIP6 = 0x102 - ARPHRD_TUNNEL = 0x300 - ARPHRD_TUNNEL6 = 0x301 - ARPHRD_VOID = 0xffff - ARPHRD_X25 = 0x10f - B0 = 0x0 - B1000000 = 0x1008 - B110 = 0x3 - B115200 = 0x1002 - B1152000 = 0x1009 - B1200 = 0x9 - B134 = 0x4 - B150 = 0x5 - B1500000 = 0x100a - B1800 = 0xa - B19200 = 0xe - B200 = 0x6 - B2000000 = 0x100b - B230400 = 0x1003 - B2400 = 0xb - B2500000 = 0x100c - B300 = 0x7 - B3000000 = 0x100d - B3500000 = 0x100e - B38400 = 0xf - B4000000 = 0x100f - B460800 = 0x1004 - B4800 = 0xc - B50 = 0x1 - B500000 = 0x1005 - B57600 = 0x1001 - B576000 = 0x1006 - B600 = 0x8 - B75 = 0x2 - B921600 = 0x1007 - B9600 = 0xd - BOTHER = 0x1000 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXINSNS = 0x1000 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 
0x0 - BPF_X = 0x8 - BRKINT = 0x2 - BS0 = 0x0 - BS1 = 0x2000 - BSDLY = 0x2000 - CBAUD = 0x100f - CBAUDEX = 0x1000 - CFLUSH = 0xf - CIBAUD = 0x100f0000 - CLOCAL = 0x800 - CLOCK_BOOTTIME = 0x7 - CLOCK_BOOTTIME_ALARM = 0x9 - CLOCK_DEFAULT = 0x0 - CLOCK_EXT = 0x1 - CLOCK_INT = 0x2 - CLOCK_MONOTONIC = 0x1 - CLOCK_MONOTONIC_COARSE = 0x6 - CLOCK_MONOTONIC_RAW = 0x4 - CLOCK_PROCESS_CPUTIME_ID = 0x2 - CLOCK_REALTIME = 0x0 - CLOCK_REALTIME_ALARM = 0x8 - CLOCK_REALTIME_COARSE = 0x5 - CLOCK_THREAD_CPUTIME_ID = 0x3 - CLOCK_TXFROMRX = 0x4 - CLOCK_TXINT = 0x3 - CLONE_CHILD_CLEARTID = 0x200000 - CLONE_CHILD_SETTID = 0x1000000 - CLONE_DETACHED = 0x400000 - CLONE_FILES = 0x400 - CLONE_FS = 0x200 - CLONE_IO = 0x80000000 - CLONE_NEWIPC = 0x8000000 - CLONE_NEWNET = 0x40000000 - CLONE_NEWNS = 0x20000 - CLONE_NEWPID = 0x20000000 - CLONE_NEWUSER = 0x10000000 - CLONE_NEWUTS = 0x4000000 - CLONE_PARENT = 0x8000 - CLONE_PARENT_SETTID = 0x100000 - CLONE_PTRACE = 0x2000 - CLONE_SETTLS = 0x80000 - CLONE_SIGHAND = 0x800 - CLONE_SYSVSEM = 0x40000 - CLONE_THREAD = 0x10000 - CLONE_UNTRACED = 0x800000 - CLONE_VFORK = 0x4000 - CLONE_VM = 0x100 - CMSPAR = 0x40000000 - CR0 = 0x0 - CR1 = 0x200 - CR2 = 0x400 - CR3 = 0x600 - CRDLY = 0x600 - CREAD = 0x80 - CRTSCTS = 0x80000000 - CS5 = 0x0 - CS6 = 0x10 - CS7 = 0x20 - CS8 = 0x30 - CSIGNAL = 0xff - CSIZE = 0x30 - CSTART = 0x11 - CSTATUS = 0x0 - CSTOP = 0x13 - CSTOPB = 0x40 - CSUSP = 0x1a - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x200 - ECHOE = 0x10 - ECHOK = 0x20 - ECHOKE = 0x800 - ECHONL = 0x40 - ECHOPRT = 0x400 - ENCODING_DEFAULT = 0x0 - ENCODING_FM_MARK = 0x3 - ENCODING_FM_SPACE = 0x4 - ENCODING_MANCHESTER = 0x5 - ENCODING_NRZ = 0x1 - ENCODING_NRZI = 0x2 - EPOLLERR = 0x8 - EPOLLET = 0x80000000 - EPOLLHUP = 0x10 - EPOLLIN = 0x1 - EPOLLMSG = 0x400 - EPOLLONESHOT = 0x40000000 - EPOLLOUT = 0x4 - EPOLLPRI = 0x2 - EPOLLRDBAND = 0x80 - EPOLLRDHUP = 0x2000 - EPOLLRDNORM = 0x40 - EPOLLWRBAND = 0x200 - EPOLLWRNORM = 0x100 - EPOLL_CLOEXEC = 0x80000 - EPOLL_CTL_ADD = 0x1 - EPOLL_CTL_DEL = 0x2 - EPOLL_CTL_MOD = 0x3 - EPOLL_NONBLOCK = 0x800 - ETH_P_1588 = 0x88f7 - ETH_P_8021AD = 0x88a8 - ETH_P_8021AH = 0x88e7 - ETH_P_8021Q = 0x8100 - ETH_P_802_2 = 0x4 - ETH_P_802_3 = 0x1 - ETH_P_AARP = 0x80f3 - ETH_P_AF_IUCV = 0xfbfb - ETH_P_ALL = 0x3 - ETH_P_AOE = 0x88a2 - ETH_P_ARCNET = 0x1a - ETH_P_ARP = 0x806 - ETH_P_ATALK = 0x809b - ETH_P_ATMFATE = 0x8884 - ETH_P_ATMMPOA = 0x884c - ETH_P_AX25 = 0x2 - ETH_P_BPQ = 0x8ff - ETH_P_CAIF = 0xf7 - ETH_P_CAN = 0xc - ETH_P_CONTROL = 0x16 - ETH_P_CUST = 0x6006 - ETH_P_DDCMP = 0x6 - ETH_P_DEC = 0x6000 - ETH_P_DIAG = 0x6005 - ETH_P_DNA_DL = 0x6001 - ETH_P_DNA_RC = 0x6002 - ETH_P_DNA_RT = 0x6003 - ETH_P_DSA = 0x1b - ETH_P_ECONET = 0x18 - ETH_P_EDSA = 0xdada - ETH_P_FCOE = 0x8906 - ETH_P_FIP = 0x8914 - ETH_P_HDLC = 0x19 - ETH_P_IEEE802154 = 0xf6 - ETH_P_IEEEPUP = 0xa00 - ETH_P_IEEEPUPAT = 0xa01 - ETH_P_IP = 0x800 - ETH_P_IPV6 = 0x86dd - ETH_P_IPX = 0x8137 - ETH_P_IRDA = 0x17 - ETH_P_LAT = 0x6004 - ETH_P_LINK_CTL = 0x886c - ETH_P_LOCALTALK = 0x9 - ETH_P_LOOP = 0x60 - ETH_P_MOBITEX = 0x15 - ETH_P_MPLS_MC = 0x8848 - ETH_P_MPLS_UC = 0x8847 - ETH_P_PAE = 0x888e - ETH_P_PAUSE = 0x8808 - ETH_P_PHONET = 0xf5 - ETH_P_PPPTALK = 0x10 - ETH_P_PPP_DISC = 0x8863 - ETH_P_PPP_MP = 0x8 - ETH_P_PPP_SES = 0x8864 - ETH_P_PUP = 0x200 - ETH_P_PUPAT = 0x201 - ETH_P_QINQ1 = 0x9100 - ETH_P_QINQ2 = 0x9200 - ETH_P_QINQ3 = 0x9300 - ETH_P_RARP = 0x8035 - ETH_P_SCA = 0x6007 - 
ETH_P_SLOW = 0x8809 - ETH_P_SNAP = 0x5 - ETH_P_TDLS = 0x890d - ETH_P_TEB = 0x6558 - ETH_P_TIPC = 0x88ca - ETH_P_TRAILER = 0x1c - ETH_P_TR_802_2 = 0x11 - ETH_P_WAN_PPP = 0x7 - ETH_P_WCCP = 0x883e - ETH_P_X25 = 0x805 - EXTA = 0xe - EXTB = 0xf - EXTPROC = 0x10000 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FF0 = 0x0 - FF1 = 0x8000 - FFDLY = 0x8000 - FLUSHO = 0x1000 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x406 - F_EXLCK = 0x4 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLEASE = 0x401 - F_GETLK = 0xc - F_GETLK64 = 0xc - F_GETOWN = 0x9 - F_GETOWN_EX = 0x10 - F_GETPIPE_SZ = 0x408 - F_GETSIG = 0xb - F_LOCK = 0x1 - F_NOTIFY = 0x402 - F_OK = 0x0 - F_RDLCK = 0x0 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLEASE = 0x400 - F_SETLK = 0xd - F_SETLK64 = 0xd - F_SETLKW = 0xe - F_SETLKW64 = 0xe - F_SETOWN = 0x8 - F_SETOWN_EX = 0xf - F_SETPIPE_SZ = 0x407 - F_SETSIG = 0xa - F_SHLCK = 0x8 - F_TEST = 0x3 - F_TLOCK = 0x2 - F_ULOCK = 0x0 - F_UNLCK = 0x2 - F_WRLCK = 0x1 - HUPCL = 0x400 - IBSHIFT = 0x10 - ICANON = 0x2 - ICMPV6_FILTER = 0x1 - ICRNL = 0x100 - IEXTEN = 0x8000 - IFA_F_DADFAILED = 0x8 - IFA_F_DEPRECATED = 0x20 - IFA_F_HOMEADDRESS = 0x10 - IFA_F_NODAD = 0x2 - IFA_F_OPTIMISTIC = 0x4 - IFA_F_PERMANENT = 0x80 - IFA_F_SECONDARY = 0x1 - IFA_F_TEMPORARY = 0x1 - IFA_F_TENTATIVE = 0x40 - IFA_MAX = 0x7 - IFF_802_1Q_VLAN = 0x1 - IFF_ALLMULTI = 0x200 - IFF_AUTOMEDIA = 0x4000 - IFF_BONDING = 0x20 - IFF_BRIDGE_PORT = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_DISABLE_NETPOLL = 0x1000 - IFF_DONT_BRIDGE = 0x800 - IFF_DORMANT = 0x20000 - IFF_DYNAMIC = 0x8000 - IFF_EBRIDGE = 0x2 - IFF_ECHO = 0x40000 - IFF_ISATAP = 0x80 - IFF_LOOPBACK = 0x8 - IFF_LOWER_UP = 0x10000 - IFF_MACVLAN_PORT = 0x2000 - IFF_MASTER = 0x400 - IFF_MASTER_8023AD = 0x8 - IFF_MASTER_ALB = 0x10 - IFF_MASTER_ARPMON = 0x100 - IFF_MULTICAST = 0x1000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_NO_PI = 0x1000 - IFF_ONE_QUEUE = 0x2000 - IFF_OVS_DATAPATH = 0x8000 - IFF_POINTOPOINT = 0x10 - IFF_PORTSEL = 0x2000 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SLAVE = 0x800 - IFF_SLAVE_INACTIVE = 0x4 - IFF_SLAVE_NEEDARP = 0x40 - IFF_TAP = 0x2 - IFF_TUN = 0x1 - IFF_TUN_EXCL = 0x8000 - IFF_TX_SKB_SHARING = 0x10000 - IFF_UNICAST_FLT = 0x20000 - IFF_UP = 0x1 - IFF_VNET_HDR = 0x4000 - IFF_VOLATILE = 0x70c5a - IFF_WAN_HDLC = 0x200 - IFF_XMIT_DST_RELEASE = 0x400 - IFNAMSIZ = 0x10 - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_ACCESS = 0x1 - IN_ALL_EVENTS = 0xfff - IN_ATTRIB = 0x4 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLOEXEC = 0x80000 - IN_CLOSE = 0x18 - IN_CLOSE_NOWRITE = 0x10 - IN_CLOSE_WRITE = 0x8 - IN_CREATE = 0x100 - IN_DELETE = 0x200 - IN_DELETE_SELF = 0x400 - IN_DONT_FOLLOW = 0x2000000 - IN_EXCL_UNLINK = 0x4000000 - IN_IGNORED = 0x8000 - IN_ISDIR = 0x40000000 - IN_LOOPBACKNET = 0x7f - IN_MASK_ADD = 0x20000000 - IN_MODIFY = 0x2 - IN_MOVE = 0xc0 - IN_MOVED_FROM = 0x40 - IN_MOVED_TO = 0x80 - IN_MOVE_SELF = 0x800 - IN_NONBLOCK = 0x800 - IN_ONESHOT = 0x80000000 - IN_ONLYDIR = 0x1000000 - IN_OPEN = 0x20 - IN_Q_OVERFLOW = 0x4000 - IN_UNMOUNT = 0x2000 - IPPROTO_AH = 0x33 - IPPROTO_COMP = 0x6c - IPPROTO_DCCP = 0x21 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_ESP = 0x32 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 
0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPIP = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_MTP = 0x5c - IPPROTO_NONE = 0x3b - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_SCTP = 0x84 - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPPROTO_UDPLITE = 0x88 - IPV6_2292DSTOPTS = 0x4 - IPV6_2292HOPLIMIT = 0x8 - IPV6_2292HOPOPTS = 0x3 - IPV6_2292PKTINFO = 0x2 - IPV6_2292PKTOPTIONS = 0x6 - IPV6_2292RTHDR = 0x5 - IPV6_ADDRFORM = 0x1 - IPV6_ADD_MEMBERSHIP = 0x14 - IPV6_AUTHHDR = 0xa - IPV6_CHECKSUM = 0x7 - IPV6_DROP_MEMBERSHIP = 0x15 - IPV6_DSTOPTS = 0x3b - IPV6_HOPLIMIT = 0x34 - IPV6_HOPOPTS = 0x36 - IPV6_IPSEC_POLICY = 0x22 - IPV6_JOIN_ANYCAST = 0x1b - IPV6_JOIN_GROUP = 0x14 - IPV6_LEAVE_ANYCAST = 0x1c - IPV6_LEAVE_GROUP = 0x15 - IPV6_MTU = 0x18 - IPV6_MTU_DISCOVER = 0x17 - IPV6_MULTICAST_HOPS = 0x12 - IPV6_MULTICAST_IF = 0x11 - IPV6_MULTICAST_LOOP = 0x13 - IPV6_NEXTHOP = 0x9 - IPV6_PKTINFO = 0x32 - IPV6_PMTUDISC_DO = 0x2 - IPV6_PMTUDISC_DONT = 0x0 - IPV6_PMTUDISC_PROBE = 0x3 - IPV6_PMTUDISC_WANT = 0x1 - IPV6_RECVDSTOPTS = 0x3a - IPV6_RECVERR = 0x19 - IPV6_RECVHOPLIMIT = 0x33 - IPV6_RECVHOPOPTS = 0x35 - IPV6_RECVPKTINFO = 0x31 - IPV6_RECVRTHDR = 0x38 - IPV6_RECVTCLASS = 0x42 - IPV6_ROUTER_ALERT = 0x16 - IPV6_RTHDR = 0x39 - IPV6_RTHDRDSTOPTS = 0x37 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_RXDSTOPTS = 0x3b - IPV6_RXHOPOPTS = 0x36 - IPV6_TCLASS = 0x43 - IPV6_UNICAST_HOPS = 0x10 - IPV6_V6ONLY = 0x1a - IPV6_XFRM_POLICY = 0x23 - IP_ADD_MEMBERSHIP = 0x23 - IP_ADD_SOURCE_MEMBERSHIP = 0x27 - IP_BLOCK_SOURCE = 0x26 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0x24 - IP_DROP_SOURCE_MEMBERSHIP = 0x28 - IP_FREEBIND = 0xf - IP_HDRINCL = 0x3 - IP_IPSEC_POLICY = 0x10 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINTTL = 0x15 - IP_MSFILTER = 0x29 - IP_MSS = 0x240 - IP_MTU = 0xe - IP_MTU_DISCOVER = 0xa - IP_MULTICAST_ALL = 0x31 - IP_MULTICAST_IF = 0x20 - IP_MULTICAST_LOOP = 0x22 - IP_MULTICAST_TTL = 0x21 - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x4 - IP_ORIGDSTADDR = 0x14 - IP_PASSSEC = 0x12 - IP_PKTINFO = 0x8 - IP_PKTOPTIONS = 0x9 - IP_PMTUDISC = 0xa - IP_PMTUDISC_DO = 0x2 - IP_PMTUDISC_DONT = 0x0 - IP_PMTUDISC_PROBE = 0x3 - IP_PMTUDISC_WANT = 0x1 - IP_RECVERR = 0xb - IP_RECVOPTS = 0x6 - IP_RECVORIGDSTADDR = 0x14 - IP_RECVRETOPTS = 0x7 - IP_RECVTOS = 0xd - IP_RECVTTL = 0xc - IP_RETOPTS = 0x7 - IP_RF = 0x8000 - IP_ROUTER_ALERT = 0x5 - IP_TOS = 0x1 - IP_TRANSPARENT = 0x13 - IP_TTL = 0x2 - IP_UNBLOCK_SOURCE = 0x25 - IP_XFRM_POLICY = 0x11 - ISIG = 0x1 - ISTRIP = 0x20 - IUCLC = 0x200 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x1000 - IXON = 0x400 - LINUX_REBOOT_CMD_CAD_OFF = 0x0 - LINUX_REBOOT_CMD_CAD_ON = 0x89abcdef - LINUX_REBOOT_CMD_HALT = 0xcdef0123 - LINUX_REBOOT_CMD_KEXEC = 0x45584543 - LINUX_REBOOT_CMD_POWER_OFF = 0x4321fedc - LINUX_REBOOT_CMD_RESTART = 0x1234567 - LINUX_REBOOT_CMD_RESTART2 = 0xa1b2c3d4 - LINUX_REBOOT_CMD_SW_SUSPEND = 0xd000fce2 - LINUX_REBOOT_MAGIC1 = 0xfee1dead - LINUX_REBOOT_MAGIC2 = 0x28121969 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DOFORK = 0xb - MADV_DONTFORK = 0xa - MADV_DONTNEED = 0x4 - MADV_HUGEPAGE = 0xe - MADV_HWPOISON = 0x64 - MADV_MERGEABLE = 0xc - MADV_NOHUGEPAGE = 0xf - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_REMOVE = 0x9 - MADV_SEQUENTIAL = 0x2 - 
MADV_UNMERGEABLE = 0xd - MADV_WILLNEED = 0x3 - MAP_32BIT = 0x40 - MAP_ANON = 0x20 - MAP_ANONYMOUS = 0x20 - MAP_DENYWRITE = 0x800 - MAP_EXECUTABLE = 0x1000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_GROWSDOWN = 0x100 - MAP_HUGETLB = 0x40000 - MAP_LOCKED = 0x2000 - MAP_NONBLOCK = 0x10000 - MAP_NORESERVE = 0x4000 - MAP_POPULATE = 0x8000 - MAP_PRIVATE = 0x2 - MAP_SHARED = 0x1 - MAP_STACK = 0x20000 - MAP_TYPE = 0xf - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MNT_DETACH = 0x2 - MNT_EXPIRE = 0x4 - MNT_FORCE = 0x1 - MSG_CMSG_CLOEXEC = 0x40000000 - MSG_CONFIRM = 0x800 - MSG_CTRUNC = 0x8 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x40 - MSG_EOR = 0x80 - MSG_ERRQUEUE = 0x2000 - MSG_FASTOPEN = 0x20000000 - MSG_FIN = 0x200 - MSG_MORE = 0x8000 - MSG_NOSIGNAL = 0x4000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_PROXY = 0x10 - MSG_RST = 0x1000 - MSG_SYN = 0x400 - MSG_TRUNC = 0x20 - MSG_TRYHARD = 0x4 - MSG_WAITALL = 0x100 - MSG_WAITFORONE = 0x10000 - MS_ACTIVE = 0x40000000 - MS_ASYNC = 0x1 - MS_BIND = 0x1000 - MS_DIRSYNC = 0x80 - MS_INVALIDATE = 0x2 - MS_I_VERSION = 0x800000 - MS_KERNMOUNT = 0x400000 - MS_MANDLOCK = 0x40 - MS_MGC_MSK = 0xffff0000 - MS_MGC_VAL = 0xc0ed0000 - MS_MOVE = 0x2000 - MS_NOATIME = 0x400 - MS_NODEV = 0x4 - MS_NODIRATIME = 0x800 - MS_NOEXEC = 0x8 - MS_NOSUID = 0x2 - MS_NOUSER = -0x80000000 - MS_POSIXACL = 0x10000 - MS_PRIVATE = 0x40000 - MS_RDONLY = 0x1 - MS_REC = 0x4000 - MS_RELATIME = 0x200000 - MS_REMOUNT = 0x20 - MS_RMT_MASK = 0x800051 - MS_SHARED = 0x100000 - MS_SILENT = 0x8000 - MS_SLAVE = 0x80000 - MS_STRICTATIME = 0x1000000 - MS_SYNC = 0x4 - MS_SYNCHRONOUS = 0x10 - MS_UNBINDABLE = 0x20000 - NAME_MAX = 0xff - NETLINK_ADD_MEMBERSHIP = 0x1 - NETLINK_AUDIT = 0x9 - NETLINK_BROADCAST_ERROR = 0x4 - NETLINK_CONNECTOR = 0xb - NETLINK_CRYPTO = 0x15 - NETLINK_DNRTMSG = 0xe - NETLINK_DROP_MEMBERSHIP = 0x2 - NETLINK_ECRYPTFS = 0x13 - NETLINK_FIB_LOOKUP = 0xa - NETLINK_FIREWALL = 0x3 - NETLINK_GENERIC = 0x10 - NETLINK_INET_DIAG = 0x4 - NETLINK_IP6_FW = 0xd - NETLINK_ISCSI = 0x8 - NETLINK_KOBJECT_UEVENT = 0xf - NETLINK_NETFILTER = 0xc - NETLINK_NFLOG = 0x5 - NETLINK_NO_ENOBUFS = 0x5 - NETLINK_PKTINFO = 0x3 - NETLINK_RDMA = 0x14 - NETLINK_ROUTE = 0x0 - NETLINK_SCSITRANSPORT = 0x12 - NETLINK_SELINUX = 0x7 - NETLINK_UNUSED = 0x1 - NETLINK_USERSOCK = 0x2 - NETLINK_XFRM = 0x6 - NL0 = 0x0 - NL1 = 0x100 - NLA_ALIGNTO = 0x4 - NLA_F_NESTED = 0x8000 - NLA_F_NET_BYTEORDER = 0x4000 - NLA_HDRLEN = 0x4 - NLDLY = 0x100 - NLMSG_ALIGNTO = 0x4 - NLMSG_DONE = 0x3 - NLMSG_ERROR = 0x2 - NLMSG_HDRLEN = 0x10 - NLMSG_MIN_TYPE = 0x10 - NLMSG_NOOP = 0x1 - NLMSG_OVERRUN = 0x4 - NLM_F_ACK = 0x4 - NLM_F_APPEND = 0x800 - NLM_F_ATOMIC = 0x400 - NLM_F_CREATE = 0x400 - NLM_F_DUMP = 0x300 - NLM_F_DUMP_INTR = 0x10 - NLM_F_ECHO = 0x8 - NLM_F_EXCL = 0x200 - NLM_F_MATCH = 0x200 - NLM_F_MULTI = 0x2 - NLM_F_REPLACE = 0x100 - NLM_F_REQUEST = 0x1 - NLM_F_ROOT = 0x100 - NOFLSH = 0x80 - OCRNL = 0x8 - OFDEL = 0x80 - OFILL = 0x40 - OLCUC = 0x2 - ONLCR = 0x4 - ONLRET = 0x20 - ONOCR = 0x10 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x400 - O_ASYNC = 0x2000 - O_CLOEXEC = 0x80000 - O_CREAT = 0x40 - O_DIRECT = 0x4000 - O_DIRECTORY = 0x10000 - O_DSYNC = 0x1000 - O_EXCL = 0x80 - O_FSYNC = 0x101000 - O_LARGEFILE = 0x8000 - O_NDELAY = 0x800 - O_NOATIME = 0x40000 - O_NOCTTY = 0x100 - O_NOFOLLOW = 0x20000 - O_NONBLOCK = 0x800 - O_PATH = 0x200000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x101000 - O_SYNC = 0x101000 - O_TRUNC = 0x200 - O_WRONLY = 0x1 - PACKET_ADD_MEMBERSHIP = 0x1 - PACKET_AUXDATA = 0x8 - PACKET_BROADCAST = 0x1 - PACKET_COPY_THRESH = 0x7 - 
PACKET_DROP_MEMBERSHIP = 0x2 - PACKET_FANOUT = 0x12 - PACKET_FANOUT_CPU = 0x2 - PACKET_FANOUT_FLAG_DEFRAG = 0x8000 - PACKET_FANOUT_HASH = 0x0 - PACKET_FANOUT_LB = 0x1 - PACKET_FASTROUTE = 0x6 - PACKET_HDRLEN = 0xb - PACKET_HOST = 0x0 - PACKET_LOOPBACK = 0x5 - PACKET_LOSS = 0xe - PACKET_MR_ALLMULTI = 0x2 - PACKET_MR_MULTICAST = 0x0 - PACKET_MR_PROMISC = 0x1 - PACKET_MR_UNICAST = 0x3 - PACKET_MULTICAST = 0x2 - PACKET_ORIGDEV = 0x9 - PACKET_OTHERHOST = 0x3 - PACKET_OUTGOING = 0x4 - PACKET_RECV_OUTPUT = 0x3 - PACKET_RESERVE = 0xc - PACKET_RX_RING = 0x5 - PACKET_STATISTICS = 0x6 - PACKET_TIMESTAMP = 0x11 - PACKET_TX_RING = 0xd - PACKET_TX_TIMESTAMP = 0x10 - PACKET_VERSION = 0xa - PACKET_VNET_HDR = 0xf - PARENB = 0x100 - PARITY_CRC16_PR0 = 0x2 - PARITY_CRC16_PR0_CCITT = 0x4 - PARITY_CRC16_PR1 = 0x3 - PARITY_CRC16_PR1_CCITT = 0x5 - PARITY_CRC32_PR0_CCITT = 0x6 - PARITY_CRC32_PR1_CCITT = 0x7 - PARITY_DEFAULT = 0x0 - PARITY_NONE = 0x1 - PARMRK = 0x8 - PARODD = 0x200 - PENDIN = 0x4000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_GROWSDOWN = 0x1000000 - PROT_GROWSUP = 0x2000000 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PR_CAPBSET_DROP = 0x18 - PR_CAPBSET_READ = 0x17 - PR_ENDIAN_BIG = 0x0 - PR_ENDIAN_LITTLE = 0x1 - PR_ENDIAN_PPC_LITTLE = 0x2 - PR_FPEMU_NOPRINT = 0x1 - PR_FPEMU_SIGFPE = 0x2 - PR_FP_EXC_ASYNC = 0x2 - PR_FP_EXC_DISABLED = 0x0 - PR_FP_EXC_DIV = 0x10000 - PR_FP_EXC_INV = 0x100000 - PR_FP_EXC_NONRECOV = 0x1 - PR_FP_EXC_OVF = 0x20000 - PR_FP_EXC_PRECISE = 0x3 - PR_FP_EXC_RES = 0x80000 - PR_FP_EXC_SW_ENABLE = 0x80 - PR_FP_EXC_UND = 0x40000 - PR_GET_DUMPABLE = 0x3 - PR_GET_ENDIAN = 0x13 - PR_GET_FPEMU = 0x9 - PR_GET_FPEXC = 0xb - PR_GET_KEEPCAPS = 0x7 - PR_GET_NAME = 0x10 - PR_GET_NO_NEW_PRIVS = 0x27 - PR_GET_PDEATHSIG = 0x2 - PR_GET_SECCOMP = 0x15 - PR_GET_SECUREBITS = 0x1b - PR_GET_TIMERSLACK = 0x1e - PR_GET_TIMING = 0xd - PR_GET_TSC = 0x19 - PR_GET_UNALIGN = 0x5 - PR_MCE_KILL = 0x21 - PR_MCE_KILL_CLEAR = 0x0 - PR_MCE_KILL_DEFAULT = 0x2 - PR_MCE_KILL_EARLY = 0x1 - PR_MCE_KILL_GET = 0x22 - PR_MCE_KILL_LATE = 0x0 - PR_MCE_KILL_SET = 0x1 - PR_SET_DUMPABLE = 0x4 - PR_SET_ENDIAN = 0x14 - PR_SET_FPEMU = 0xa - PR_SET_FPEXC = 0xc - PR_SET_KEEPCAPS = 0x8 - PR_SET_MM = 0x23 - PR_SET_MM_BRK = 0x7 - PR_SET_MM_END_CODE = 0x2 - PR_SET_MM_END_DATA = 0x4 - PR_SET_MM_START_BRK = 0x6 - PR_SET_MM_START_CODE = 0x1 - PR_SET_MM_START_DATA = 0x3 - PR_SET_MM_START_STACK = 0x5 - PR_SET_NAME = 0xf - PR_SET_NO_NEW_PRIVS = 0x26 - PR_SET_PDEATHSIG = 0x1 - PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = 0xffffffff - PR_SET_SECCOMP = 0x16 - PR_SET_SECUREBITS = 0x1c - PR_SET_TIMERSLACK = 0x1d - PR_SET_TIMING = 0xe - PR_SET_TSC = 0x1a - PR_SET_UNALIGN = 0x6 - PR_TASK_PERF_EVENTS_DISABLE = 0x1f - PR_TASK_PERF_EVENTS_ENABLE = 0x20 - PR_TIMING_STATISTICAL = 0x0 - PR_TIMING_TIMESTAMP = 0x1 - PR_TSC_ENABLE = 0x1 - PR_TSC_SIGSEGV = 0x2 - PR_UNALIGN_NOPRINT = 0x1 - PR_UNALIGN_SIGBUS = 0x2 - PTRACE_ATTACH = 0x10 - PTRACE_CONT = 0x7 - PTRACE_DETACH = 0x11 - PTRACE_EVENT_CLONE = 0x3 - PTRACE_EVENT_EXEC = 0x4 - PTRACE_EVENT_EXIT = 0x6 - PTRACE_EVENT_FORK = 0x1 - PTRACE_EVENT_SECCOMP = 0x7 - PTRACE_EVENT_STOP = 0x80 - PTRACE_EVENT_VFORK = 0x2 - PTRACE_EVENT_VFORK_DONE = 0x5 - PTRACE_GETEVENTMSG = 0x4201 - PTRACE_GETFPREGS = 0xe - PTRACE_GETFPXREGS = 0x12 - PTRACE_GETREGS = 0xc - PTRACE_GETREGSET = 0x4204 - PTRACE_GETSIGINFO = 0x4202 - PTRACE_GET_THREAD_AREA = 0x19 - PTRACE_INTERRUPT = 0x4207 - PTRACE_KILL = 0x8 - PTRACE_LISTEN = 0x4208 - PTRACE_OLDSETOPTIONS = 0x15 - PTRACE_O_MASK 
= 0xff - PTRACE_O_TRACECLONE = 0x8 - PTRACE_O_TRACEEXEC = 0x10 - PTRACE_O_TRACEEXIT = 0x40 - PTRACE_O_TRACEFORK = 0x2 - PTRACE_O_TRACESECCOMP = 0x80 - PTRACE_O_TRACESYSGOOD = 0x1 - PTRACE_O_TRACEVFORK = 0x4 - PTRACE_O_TRACEVFORKDONE = 0x20 - PTRACE_PEEKDATA = 0x2 - PTRACE_PEEKTEXT = 0x1 - PTRACE_PEEKUSR = 0x3 - PTRACE_POKEDATA = 0x5 - PTRACE_POKETEXT = 0x4 - PTRACE_POKEUSR = 0x6 - PTRACE_SEIZE = 0x4206 - PTRACE_SEIZE_DEVEL = 0x80000000 - PTRACE_SETFPREGS = 0xf - PTRACE_SETFPXREGS = 0x13 - PTRACE_SETOPTIONS = 0x4200 - PTRACE_SETREGS = 0xd - PTRACE_SETREGSET = 0x4205 - PTRACE_SETSIGINFO = 0x4203 - PTRACE_SET_THREAD_AREA = 0x1a - PTRACE_SINGLEBLOCK = 0x21 - PTRACE_SINGLESTEP = 0x9 - PTRACE_SYSCALL = 0x18 - PTRACE_SYSEMU = 0x1f - PTRACE_SYSEMU_SINGLESTEP = 0x20 - PTRACE_TRACEME = 0x0 - RLIMIT_AS = 0x9 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x7 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 - RTAX_ADVMSS = 0x8 - RTAX_CWND = 0x7 - RTAX_FEATURES = 0xc - RTAX_FEATURE_ALLFRAG = 0x8 - RTAX_FEATURE_ECN = 0x1 - RTAX_FEATURE_SACK = 0x2 - RTAX_FEATURE_TIMESTAMP = 0x4 - RTAX_HOPLIMIT = 0xa - RTAX_INITCWND = 0xb - RTAX_INITRWND = 0xe - RTAX_LOCK = 0x1 - RTAX_MAX = 0xe - RTAX_MTU = 0x2 - RTAX_REORDERING = 0x9 - RTAX_RTO_MIN = 0xd - RTAX_RTT = 0x4 - RTAX_RTTVAR = 0x5 - RTAX_SSTHRESH = 0x6 - RTAX_UNSPEC = 0x0 - RTAX_WINDOW = 0x3 - RTA_ALIGNTO = 0x4 - RTA_MAX = 0x10 - RTCF_DIRECTSRC = 0x4000000 - RTCF_DOREDIRECT = 0x1000000 - RTCF_LOG = 0x2000000 - RTCF_MASQ = 0x400000 - RTCF_NAT = 0x800000 - RTCF_VALVE = 0x200000 - RTF_ADDRCLASSMASK = 0xf8000000 - RTF_ADDRCONF = 0x40000 - RTF_ALLONLINK = 0x20000 - RTF_BROADCAST = 0x10000000 - RTF_CACHE = 0x1000000 - RTF_DEFAULT = 0x10000 - RTF_DYNAMIC = 0x10 - RTF_FLOW = 0x2000000 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_INTERFACE = 0x40000000 - RTF_IRTT = 0x100 - RTF_LINKRT = 0x100000 - RTF_LOCAL = 0x80000000 - RTF_MODIFIED = 0x20 - RTF_MSS = 0x40 - RTF_MTU = 0x40 - RTF_MULTICAST = 0x20000000 - RTF_NAT = 0x8000000 - RTF_NOFORWARD = 0x1000 - RTF_NONEXTHOP = 0x200000 - RTF_NOPMTUDISC = 0x4000 - RTF_POLICY = 0x4000000 - RTF_REINSTATE = 0x8 - RTF_REJECT = 0x200 - RTF_STATIC = 0x400 - RTF_THROW = 0x2000 - RTF_UP = 0x1 - RTF_WINDOW = 0x80 - RTF_XRESOLVE = 0x800 - RTM_BASE = 0x10 - RTM_DELACTION = 0x31 - RTM_DELADDR = 0x15 - RTM_DELADDRLABEL = 0x49 - RTM_DELLINK = 0x11 - RTM_DELNEIGH = 0x1d - RTM_DELQDISC = 0x25 - RTM_DELROUTE = 0x19 - RTM_DELRULE = 0x21 - RTM_DELTCLASS = 0x29 - RTM_DELTFILTER = 0x2d - RTM_F_CLONED = 0x200 - RTM_F_EQUALIZE = 0x400 - RTM_F_NOTIFY = 0x100 - RTM_F_PREFIX = 0x800 - RTM_GETACTION = 0x32 - RTM_GETADDR = 0x16 - RTM_GETADDRLABEL = 0x4a - RTM_GETANYCAST = 0x3e - RTM_GETDCB = 0x4e - RTM_GETLINK = 0x12 - RTM_GETMULTICAST = 0x3a - RTM_GETNEIGH = 0x1e - RTM_GETNEIGHTBL = 0x42 - RTM_GETQDISC = 0x26 - RTM_GETROUTE = 0x1a - RTM_GETRULE = 0x22 - RTM_GETTCLASS = 0x2a - RTM_GETTFILTER = 0x2e - RTM_MAX = 0x4f - RTM_NEWACTION = 0x30 - RTM_NEWADDR = 0x14 - RTM_NEWADDRLABEL = 0x48 - RTM_NEWLINK = 0x10 - RTM_NEWNDUSEROPT = 0x44 - RTM_NEWNEIGH = 0x1c - RTM_NEWNEIGHTBL = 0x40 - RTM_NEWPREFIX = 0x34 - RTM_NEWQDISC = 0x24 - RTM_NEWROUTE = 0x18 - RTM_NEWRULE = 0x20 - RTM_NEWTCLASS = 0x28 - RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x10 - RTM_NR_MSGTYPES = 0x40 - RTM_SETDCB = 0x4f - RTM_SETLINK = 0x13 - RTM_SETNEIGHTBL = 0x43 - RTNH_ALIGNTO = 0x4 - RTNH_F_DEAD = 0x1 - RTNH_F_ONLINK = 0x4 - RTNH_F_PERVASIVE = 0x2 - RTN_MAX = 0xb - RTPROT_BIRD = 0xc - RTPROT_BOOT = 0x3 - RTPROT_DHCP = 0x10 - RTPROT_DNROUTED = 0xd - 
RTPROT_GATED = 0x8 - RTPROT_KERNEL = 0x2 - RTPROT_MRT = 0xa - RTPROT_NTK = 0xf - RTPROT_RA = 0x9 - RTPROT_REDIRECT = 0x1 - RTPROT_STATIC = 0x4 - RTPROT_UNSPEC = 0x0 - RTPROT_XORP = 0xe - RTPROT_ZEBRA = 0xb - RT_CLASS_DEFAULT = 0xfd - RT_CLASS_LOCAL = 0xff - RT_CLASS_MAIN = 0xfe - RT_CLASS_MAX = 0xff - RT_CLASS_UNSPEC = 0x0 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_CREDENTIALS = 0x2 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x1d - SCM_TIMESTAMPING = 0x25 - SCM_TIMESTAMPNS = 0x23 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDDLCI = 0x8980 - SIOCADDMULTI = 0x8931 - SIOCADDRT = 0x890b - SIOCATMARK = 0x8905 - SIOCDARP = 0x8953 - SIOCDELDLCI = 0x8981 - SIOCDELMULTI = 0x8932 - SIOCDELRT = 0x890c - SIOCDEVPRIVATE = 0x89f0 - SIOCDIFADDR = 0x8936 - SIOCDRARP = 0x8960 - SIOCGARP = 0x8954 - SIOCGIFADDR = 0x8915 - SIOCGIFBR = 0x8940 - SIOCGIFBRDADDR = 0x8919 - SIOCGIFCONF = 0x8912 - SIOCGIFCOUNT = 0x8938 - SIOCGIFDSTADDR = 0x8917 - SIOCGIFENCAP = 0x8925 - SIOCGIFFLAGS = 0x8913 - SIOCGIFHWADDR = 0x8927 - SIOCGIFINDEX = 0x8933 - SIOCGIFMAP = 0x8970 - SIOCGIFMEM = 0x891f - SIOCGIFMETRIC = 0x891d - SIOCGIFMTU = 0x8921 - SIOCGIFNAME = 0x8910 - SIOCGIFNETMASK = 0x891b - SIOCGIFPFLAGS = 0x8935 - SIOCGIFSLAVE = 0x8929 - SIOCGIFTXQLEN = 0x8942 - SIOCGPGRP = 0x8904 - SIOCGRARP = 0x8961 - SIOCGSTAMP = 0x8906 - SIOCGSTAMPNS = 0x8907 - SIOCPROTOPRIVATE = 0x89e0 - SIOCRTMSG = 0x890d - SIOCSARP = 0x8955 - SIOCSIFADDR = 0x8916 - SIOCSIFBR = 0x8941 - SIOCSIFBRDADDR = 0x891a - SIOCSIFDSTADDR = 0x8918 - SIOCSIFENCAP = 0x8926 - SIOCSIFFLAGS = 0x8914 - SIOCSIFHWADDR = 0x8924 - SIOCSIFHWBROADCAST = 0x8937 - SIOCSIFLINK = 0x8911 - SIOCSIFMAP = 0x8971 - SIOCSIFMEM = 0x8920 - SIOCSIFMETRIC = 0x891e - SIOCSIFMTU = 0x8922 - SIOCSIFNAME = 0x8923 - SIOCSIFNETMASK = 0x891c - SIOCSIFPFLAGS = 0x8934 - SIOCSIFSLAVE = 0x8930 - SIOCSIFTXQLEN = 0x8943 - SIOCSPGRP = 0x8902 - SIOCSRARP = 0x8962 - SOCK_CLOEXEC = 0x80000 - SOCK_DCCP = 0x6 - SOCK_DGRAM = 0x2 - SOCK_NONBLOCK = 0x800 - SOCK_PACKET = 0xa - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_AAL = 0x109 - SOL_ATM = 0x108 - SOL_DECNET = 0x105 - SOL_ICMPV6 = 0x3a - SOL_IP = 0x0 - SOL_IPV6 = 0x29 - SOL_IRDA = 0x10a - SOL_PACKET = 0x107 - SOL_RAW = 0xff - SOL_SOCKET = 0x1 - SOL_TCP = 0x6 - SOL_X25 = 0x106 - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x1e - SO_ATTACH_FILTER = 0x1a - SO_BINDTODEVICE = 0x19 - SO_BROADCAST = 0x6 - SO_BSDCOMPAT = 0xe - SO_DEBUG = 0x1 - SO_DETACH_FILTER = 0x1b - SO_DOMAIN = 0x27 - SO_DONTROUTE = 0x5 - SO_ERROR = 0x4 - SO_KEEPALIVE = 0x9 - SO_LINGER = 0xd - SO_MARK = 0x24 - SO_NO_CHECK = 0xb - SO_OOBINLINE = 0xa - SO_PASSCRED = 0x10 - SO_PASSSEC = 0x22 - SO_PEERCRED = 0x11 - SO_PEERNAME = 0x1c - SO_PEERSEC = 0x1f - SO_PRIORITY = 0xc - SO_PROTOCOL = 0x26 - SO_RCVBUF = 0x8 - SO_RCVBUFFORCE = 0x21 - SO_RCVLOWAT = 0x12 - SO_RCVTIMEO = 0x14 - SO_REUSEADDR = 0x2 - SO_RXQ_OVFL = 0x28 - SO_SECURITY_AUTHENTICATION = 0x16 - SO_SECURITY_ENCRYPTION_NETWORK = 0x18 - SO_SECURITY_ENCRYPTION_TRANSPORT = 0x17 - SO_SNDBUF = 0x7 - SO_SNDBUFFORCE = 0x20 - SO_SNDLOWAT = 0x13 - SO_SNDTIMEO = 0x15 - SO_TIMESTAMP = 0x1d - SO_TIMESTAMPING = 0x25 - SO_TIMESTAMPNS = 0x23 - SO_TYPE = 0x3 - S_BLKSIZE = 0x200 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISUID = 0x800 
- S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TAB0 = 0x0 - TAB1 = 0x800 - TAB2 = 0x1000 - TAB3 = 0x1800 - TABDLY = 0x1800 - TCFLSH = 0x540b - TCGETA = 0x5405 - TCGETS = 0x5401 - TCGETS2 = 0x802c542a - TCGETX = 0x5432 - TCIFLUSH = 0x0 - TCIOFF = 0x2 - TCIOFLUSH = 0x2 - TCION = 0x3 - TCOFLUSH = 0x1 - TCOOFF = 0x0 - TCOON = 0x1 - TCP_CONGESTION = 0xd - TCP_CORK = 0x3 - TCP_DEFER_ACCEPT = 0x9 - TCP_INFO = 0xb - TCP_KEEPCNT = 0x6 - TCP_KEEPIDLE = 0x4 - TCP_KEEPINTVL = 0x5 - TCP_LINGER2 = 0x8 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0xe - TCP_MD5SIG_MAXKEYLEN = 0x50 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_QUICKACK = 0xc - TCP_SYNCNT = 0x7 - TCP_WINDOW_CLAMP = 0xa - TCSAFLUSH = 0x2 - TCSBRK = 0x5409 - TCSBRKP = 0x5425 - TCSETA = 0x5406 - TCSETAF = 0x5408 - TCSETAW = 0x5407 - TCSETS = 0x5402 - TCSETS2 = 0x402c542b - TCSETSF = 0x5404 - TCSETSF2 = 0x402c542d - TCSETSW = 0x5403 - TCSETSW2 = 0x402c542c - TCSETX = 0x5433 - TCSETXF = 0x5434 - TCSETXW = 0x5435 - TCXONC = 0x540a - TIOCCBRK = 0x5428 - TIOCCONS = 0x541d - TIOCEXCL = 0x540c - TIOCGDEV = 0x80045432 - TIOCGETD = 0x5424 - TIOCGEXCL = 0x80045440 - TIOCGICOUNT = 0x545d - TIOCGLCKTRMIOS = 0x5456 - TIOCGPGRP = 0x540f - TIOCGPKT = 0x80045438 - TIOCGPTLCK = 0x80045439 - TIOCGPTN = 0x80045430 - TIOCGRS485 = 0x542e - TIOCGSERIAL = 0x541e - TIOCGSID = 0x5429 - TIOCGSOFTCAR = 0x5419 - TIOCGWINSZ = 0x5413 - TIOCINQ = 0x541b - TIOCLINUX = 0x541c - TIOCMBIC = 0x5417 - TIOCMBIS = 0x5416 - TIOCMGET = 0x5415 - TIOCMIWAIT = 0x545c - TIOCMSET = 0x5418 - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x5422 - TIOCNXCL = 0x540d - TIOCOUTQ = 0x5411 - TIOCPKT = 0x5420 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCSBRK = 0x5427 - TIOCSCTTY = 0x540e - TIOCSERCONFIG = 0x5453 - TIOCSERGETLSR = 0x5459 - TIOCSERGETMULTI = 0x545a - TIOCSERGSTRUCT = 0x5458 - TIOCSERGWILD = 0x5454 - TIOCSERSETMULTI = 0x545b - TIOCSERSWILD = 0x5455 - TIOCSER_TEMT = 0x1 - TIOCSETD = 0x5423 - TIOCSIG = 0x40045436 - TIOCSLCKTRMIOS = 0x5457 - TIOCSPGRP = 0x5410 - TIOCSPTLCK = 0x40045431 - TIOCSRS485 = 0x542f - TIOCSSERIAL = 0x541f - TIOCSSOFTCAR = 0x541a - TIOCSTI = 0x5412 - TIOCSWINSZ = 0x5414 - TIOCVHANGUP = 0x5437 - TOSTOP = 0x100 - TUNATTACHFILTER = 0x400854d5 - TUNDETACHFILTER = 0x400854d6 - TUNGETFEATURES = 0x800454cf - TUNGETIFF = 0x800454d2 - TUNGETSNDBUF = 0x800454d3 - TUNGETVNETHDRSZ = 0x800454d7 - TUNSETDEBUG = 0x400454c9 - TUNSETGROUP = 0x400454ce - TUNSETIFF = 0x400454ca - TUNSETLINK = 0x400454cd - TUNSETNOCSUM = 0x400454c8 - TUNSETOFFLOAD = 0x400454d0 - TUNSETOWNER = 0x400454cc - TUNSETPERSIST = 0x400454cb - TUNSETSNDBUF = 0x400454d4 - TUNSETTXFILTER = 0x400454d1 - TUNSETVNETHDRSZ = 0x400454d8 - VDISCARD = 0xd - VEOF = 0x4 - VEOL = 0xb - VEOL2 = 0x10 - VERASE = 0x2 - VINTR = 0x0 - VKILL = 0x3 - VLNEXT = 0xf - VMIN = 0x6 - VQUIT = 0x1 - VREPRINT = 0xc - VSTART = 0x8 - VSTOP = 0x9 - VSUSP = 0xa - VSWTC = 0x7 - VT0 = 0x0 - VT1 = 0x4000 - VTDLY = 0x4000 - VTIME = 0x5 - VWERASE = 0xe - WALL = 0x40000000 - WCLONE = 0x80000000 - WCONTINUED = 0x8 - WEXITED = 0x4 - WNOHANG = 0x1 - WNOTHREAD = 0x20000000 - WNOWAIT = 0x1000000 - WORDSIZE = 0x20 - 
WSTOPPED = 0x2 - WUNTRACED = 0x2 - XCASE = 0x4 - XTABS = 0x1800 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x62) - EADDRNOTAVAIL = syscall.Errno(0x63) - EADV = syscall.Errno(0x44) - EAFNOSUPPORT = syscall.Errno(0x61) - EAGAIN = syscall.Errno(0xb) - EALREADY = syscall.Errno(0x72) - EBADE = syscall.Errno(0x34) - EBADF = syscall.Errno(0x9) - EBADFD = syscall.Errno(0x4d) - EBADMSG = syscall.Errno(0x4a) - EBADR = syscall.Errno(0x35) - EBADRQC = syscall.Errno(0x38) - EBADSLT = syscall.Errno(0x39) - EBFONT = syscall.Errno(0x3b) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x7d) - ECHILD = syscall.Errno(0xa) - ECHRNG = syscall.Errno(0x2c) - ECOMM = syscall.Errno(0x46) - ECONNABORTED = syscall.Errno(0x67) - ECONNREFUSED = syscall.Errno(0x6f) - ECONNRESET = syscall.Errno(0x68) - EDEADLK = syscall.Errno(0x23) - EDEADLOCK = syscall.Errno(0x23) - EDESTADDRREQ = syscall.Errno(0x59) - EDOM = syscall.Errno(0x21) - EDOTDOT = syscall.Errno(0x49) - EDQUOT = syscall.Errno(0x7a) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EHOSTDOWN = syscall.Errno(0x70) - EHOSTUNREACH = syscall.Errno(0x71) - EHWPOISON = syscall.Errno(0x85) - EIDRM = syscall.Errno(0x2b) - EILSEQ = syscall.Errno(0x54) - EINPROGRESS = syscall.Errno(0x73) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x6a) - EISDIR = syscall.Errno(0x15) - EISNAM = syscall.Errno(0x78) - EKEYEXPIRED = syscall.Errno(0x7f) - EKEYREJECTED = syscall.Errno(0x81) - EKEYREVOKED = syscall.Errno(0x80) - EL2HLT = syscall.Errno(0x33) - EL2NSYNC = syscall.Errno(0x2d) - EL3HLT = syscall.Errno(0x2e) - EL3RST = syscall.Errno(0x2f) - ELIBACC = syscall.Errno(0x4f) - ELIBBAD = syscall.Errno(0x50) - ELIBEXEC = syscall.Errno(0x53) - ELIBMAX = syscall.Errno(0x52) - ELIBSCN = syscall.Errno(0x51) - ELNRNG = syscall.Errno(0x30) - ELOOP = syscall.Errno(0x28) - EMEDIUMTYPE = syscall.Errno(0x7c) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x5a) - EMULTIHOP = syscall.Errno(0x48) - ENAMETOOLONG = syscall.Errno(0x24) - ENAVAIL = syscall.Errno(0x77) - ENETDOWN = syscall.Errno(0x64) - ENETRESET = syscall.Errno(0x66) - ENETUNREACH = syscall.Errno(0x65) - ENFILE = syscall.Errno(0x17) - ENOANO = syscall.Errno(0x37) - ENOBUFS = syscall.Errno(0x69) - ENOCSI = syscall.Errno(0x32) - ENODATA = syscall.Errno(0x3d) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOKEY = syscall.Errno(0x7e) - ENOLCK = syscall.Errno(0x25) - ENOLINK = syscall.Errno(0x43) - ENOMEDIUM = syscall.Errno(0x7b) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x2a) - ENONET = syscall.Errno(0x40) - ENOPKG = syscall.Errno(0x41) - ENOPROTOOPT = syscall.Errno(0x5c) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x3f) - ENOSTR = syscall.Errno(0x3c) - ENOSYS = syscall.Errno(0x26) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x6b) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x27) - ENOTNAM = syscall.Errno(0x76) - ENOTRECOVERABLE = syscall.Errno(0x83) - ENOTSOCK = syscall.Errno(0x58) - ENOTSUP = syscall.Errno(0x5f) - ENOTTY = syscall.Errno(0x19) - ENOTUNIQ = syscall.Errno(0x4c) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x5f) - EOVERFLOW = syscall.Errno(0x4b) - EOWNERDEAD = syscall.Errno(0x82) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x60) - EPIPE = syscall.Errno(0x20) - EPROTO = 
syscall.Errno(0x47) - EPROTONOSUPPORT = syscall.Errno(0x5d) - EPROTOTYPE = syscall.Errno(0x5b) - ERANGE = syscall.Errno(0x22) - EREMCHG = syscall.Errno(0x4e) - EREMOTE = syscall.Errno(0x42) - EREMOTEIO = syscall.Errno(0x79) - ERESTART = syscall.Errno(0x55) - ERFKILL = syscall.Errno(0x84) - EROFS = syscall.Errno(0x1e) - ESHUTDOWN = syscall.Errno(0x6c) - ESOCKTNOSUPPORT = syscall.Errno(0x5e) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESRMNT = syscall.Errno(0x45) - ESTALE = syscall.Errno(0x74) - ESTRPIPE = syscall.Errno(0x56) - ETIME = syscall.Errno(0x3e) - ETIMEDOUT = syscall.Errno(0x6e) - ETOOMANYREFS = syscall.Errno(0x6d) - ETXTBSY = syscall.Errno(0x1a) - EUCLEAN = syscall.Errno(0x75) - EUNATCH = syscall.Errno(0x31) - EUSERS = syscall.Errno(0x57) - EWOULDBLOCK = syscall.Errno(0xb) - EXDEV = syscall.Errno(0x12) - EXFULL = syscall.Errno(0x36) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0x7) - SIGCHLD = syscall.Signal(0x11) - SIGCLD = syscall.Signal(0x11) - SIGCONT = syscall.Signal(0x12) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x1d) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPOLL = syscall.Signal(0x1d) - SIGPROF = syscall.Signal(0x1b) - SIGPWR = syscall.Signal(0x1e) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTKFLT = syscall.Signal(0x10) - SIGSTOP = syscall.Signal(0x13) - SIGSYS = syscall.Signal(0x1f) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x14) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) - SIGURG = syscall.Signal(0x17) - SIGUSR1 = syscall.Signal(0xa) - SIGUSR2 = syscall.Signal(0xc) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "no such device or address", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource temporarily unavailable", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device or resource busy", - 17: "file exists", - 18: "invalid cross-device link", - 19: "no such device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "numerical result out of range", - 35: "resource deadlock avoided", - 36: "file name too long", - 37: "no locks available", - 38: "function not implemented", - 39: "directory not empty", - 40: "too many levels of symbolic links", - 42: "no message of desired type", - 43: "identifier removed", - 44: "channel number out of range", - 45: "level 2 not synchronized", - 46: "level 3 halted", - 47: "level 3 reset", - 48: "link number out of range", - 49: "protocol driver not attached", - 50: "no CSI 
structure available", - 51: "level 2 halted", - 52: "invalid exchange", - 53: "invalid request descriptor", - 54: "exchange full", - 55: "no anode", - 56: "invalid request code", - 57: "invalid slot", - 59: "bad font file format", - 60: "device not a stream", - 61: "no data available", - 62: "timer expired", - 63: "out of streams resources", - 64: "machine is not on the network", - 65: "package not installed", - 66: "object is remote", - 67: "link has been severed", - 68: "advertise error", - 69: "srmount error", - 70: "communication error on send", - 71: "protocol error", - 72: "multihop attempted", - 73: "RFS specific error", - 74: "bad message", - 75: "value too large for defined data type", - 76: "name not unique on network", - 77: "file descriptor in bad state", - 78: "remote address changed", - 79: "can not access a needed shared library", - 80: "accessing a corrupted shared library", - 81: ".lib section in a.out corrupted", - 82: "attempting to link in too many shared libraries", - 83: "cannot exec a shared library directly", - 84: "invalid or incomplete multibyte or wide character", - 85: "interrupted system call should be restarted", - 86: "streams pipe error", - 87: "too many users", - 88: "socket operation on non-socket", - 89: "destination address required", - 90: "message too long", - 91: "protocol wrong type for socket", - 92: "protocol not available", - 93: "protocol not supported", - 94: "socket type not supported", - 95: "operation not supported", - 96: "protocol family not supported", - 97: "address family not supported by protocol", - 98: "address already in use", - 99: "cannot assign requested address", - 100: "network is down", - 101: "network is unreachable", - 102: "network dropped connection on reset", - 103: "software caused connection abort", - 104: "connection reset by peer", - 105: "no buffer space available", - 106: "transport endpoint is already connected", - 107: "transport endpoint is not connected", - 108: "cannot send after transport endpoint shutdown", - 109: "too many references: cannot splice", - 110: "connection timed out", - 111: "connection refused", - 112: "host is down", - 113: "no route to host", - 114: "operation already in progress", - 115: "operation now in progress", - 116: "stale NFS file handle", - 117: "structure needs cleaning", - 118: "not a XENIX named type file", - 119: "no XENIX semaphores available", - 120: "is a named type file", - 121: "remote I/O error", - 122: "disk quota exceeded", - 123: "no medium found", - 124: "wrong medium type", - 125: "operation canceled", - 126: "required key not available", - 127: "key has expired", - 128: "key has been revoked", - 129: "key was rejected by service", - 130: "owner died", - 131: "state not recoverable", - 132: "operation not possible due to RF-kill", - 133: "unknown error 133", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/breakpoint trap", - 6: "aborted", - 7: "bus error", - 8: "floating point exception", - 9: "killed", - 10: "user defined signal 1", - 11: "segmentation fault", - 12: "user defined signal 2", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "stack fault", - 17: "child exited", - 18: "continued", - 19: "stopped (signal)", - 20: "stopped", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "urgent I/O condition", - 24: "CPU time limit exceeded", - 25: "file size limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: 
"window changed", - 29: "I/O possible", - 30: "power failure", - 31: "bad system call", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_amd64.go deleted file mode 100644 index b83fb40b395..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_amd64.go +++ /dev/null @@ -1,1818 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,linux - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_ALG = 0x26 - AF_APPLETALK = 0x5 - AF_ASH = 0x12 - AF_ATMPVC = 0x8 - AF_ATMSVC = 0x14 - AF_AX25 = 0x3 - AF_BLUETOOTH = 0x1f - AF_BRIDGE = 0x7 - AF_CAIF = 0x25 - AF_CAN = 0x1d - AF_DECnet = 0xc - AF_ECONET = 0x13 - AF_FILE = 0x1 - AF_IEEE802154 = 0x24 - AF_INET = 0x2 - AF_INET6 = 0xa - AF_IPX = 0x4 - AF_IRDA = 0x17 - AF_ISDN = 0x22 - AF_IUCV = 0x20 - AF_KEY = 0xf - AF_LLC = 0x1a - AF_LOCAL = 0x1 - AF_MAX = 0x28 - AF_NETBEUI = 0xd - AF_NETLINK = 0x10 - AF_NETROM = 0x6 - AF_NFC = 0x27 - AF_PACKET = 0x11 - AF_PHONET = 0x23 - AF_PPPOX = 0x18 - AF_RDS = 0x15 - AF_ROSE = 0xb - AF_ROUTE = 0x10 - AF_RXRPC = 0x21 - AF_SECURITY = 0xe - AF_SNA = 0x16 - AF_TIPC = 0x1e - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_WANPIPE = 0x19 - AF_X25 = 0x9 - ARPHRD_ADAPT = 0x108 - ARPHRD_APPLETLK = 0x8 - ARPHRD_ARCNET = 0x7 - ARPHRD_ASH = 0x30d - ARPHRD_ATM = 0x13 - ARPHRD_AX25 = 0x3 - ARPHRD_BIF = 0x307 - ARPHRD_CAIF = 0x336 - ARPHRD_CAN = 0x118 - ARPHRD_CHAOS = 0x5 - ARPHRD_CISCO = 0x201 - ARPHRD_CSLIP = 0x101 - ARPHRD_CSLIP6 = 0x103 - ARPHRD_DDCMP = 0x205 - ARPHRD_DLCI = 0xf - ARPHRD_ECONET = 0x30e - ARPHRD_EETHER = 0x2 - ARPHRD_ETHER = 0x1 - ARPHRD_EUI64 = 0x1b - ARPHRD_FCAL = 0x311 - ARPHRD_FCFABRIC = 0x313 - ARPHRD_FCPL = 0x312 - ARPHRD_FCPP = 0x310 - ARPHRD_FDDI = 0x306 - ARPHRD_FRAD = 0x302 - ARPHRD_HDLC = 0x201 - ARPHRD_HIPPI = 0x30c - ARPHRD_HWX25 = 0x110 - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - ARPHRD_IEEE80211 = 0x321 - ARPHRD_IEEE80211_PRISM = 0x322 - ARPHRD_IEEE80211_RADIOTAP = 0x323 - ARPHRD_IEEE802154 = 0x324 - ARPHRD_IEEE802_TR = 0x320 - ARPHRD_INFINIBAND = 0x20 - ARPHRD_IPDDP = 0x309 - ARPHRD_IPGRE = 0x30a - ARPHRD_IRDA = 0x30f - ARPHRD_LAPB = 0x204 - ARPHRD_LOCALTLK = 0x305 - ARPHRD_LOOPBACK = 0x304 - ARPHRD_METRICOM = 0x17 - ARPHRD_NETROM = 0x0 - ARPHRD_NONE = 0xfffe - ARPHRD_PHONET = 0x334 - ARPHRD_PHONET_PIPE = 0x335 - ARPHRD_PIMREG = 0x30b - ARPHRD_PPP = 0x200 - ARPHRD_PRONET = 0x4 - ARPHRD_RAWHDLC = 0x206 - ARPHRD_ROSE = 0x10e - ARPHRD_RSRVD = 0x104 - ARPHRD_SIT = 0x308 - ARPHRD_SKIP = 0x303 - ARPHRD_SLIP = 0x100 - ARPHRD_SLIP6 = 0x102 - ARPHRD_TUNNEL = 0x300 - ARPHRD_TUNNEL6 = 0x301 - ARPHRD_VOID = 0xffff - ARPHRD_X25 = 0x10f - B0 = 0x0 - B1000000 = 0x1008 - B110 = 0x3 - B115200 = 0x1002 - B1152000 = 0x1009 - B1200 = 0x9 - B134 = 0x4 - B150 = 0x5 - B1500000 = 0x100a - B1800 = 0xa - B19200 = 0xe - B200 = 0x6 - B2000000 = 0x100b - B230400 = 0x1003 - B2400 = 0xb - B2500000 = 0x100c - B300 = 0x7 - B3000000 = 0x100d - B3500000 = 0x100e - B38400 = 0xf - B4000000 = 0x100f - B460800 = 0x1004 - B4800 = 0xc - B50 = 0x1 - B500000 = 0x1005 - B57600 = 0x1001 - B576000 = 0x1006 - B600 = 0x8 - B75 = 0x2 - B921600 = 0x1007 - B9600 = 0xd - BOTHER = 0x1000 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 
- BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXINSNS = 0x1000 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - BS0 = 0x0 - BS1 = 0x2000 - BSDLY = 0x2000 - CBAUD = 0x100f - CBAUDEX = 0x1000 - CFLUSH = 0xf - CIBAUD = 0x100f0000 - CLOCAL = 0x800 - CLOCK_BOOTTIME = 0x7 - CLOCK_BOOTTIME_ALARM = 0x9 - CLOCK_DEFAULT = 0x0 - CLOCK_EXT = 0x1 - CLOCK_INT = 0x2 - CLOCK_MONOTONIC = 0x1 - CLOCK_MONOTONIC_COARSE = 0x6 - CLOCK_MONOTONIC_RAW = 0x4 - CLOCK_PROCESS_CPUTIME_ID = 0x2 - CLOCK_REALTIME = 0x0 - CLOCK_REALTIME_ALARM = 0x8 - CLOCK_REALTIME_COARSE = 0x5 - CLOCK_THREAD_CPUTIME_ID = 0x3 - CLOCK_TXFROMRX = 0x4 - CLOCK_TXINT = 0x3 - CLONE_CHILD_CLEARTID = 0x200000 - CLONE_CHILD_SETTID = 0x1000000 - CLONE_DETACHED = 0x400000 - CLONE_FILES = 0x400 - CLONE_FS = 0x200 - CLONE_IO = 0x80000000 - CLONE_NEWIPC = 0x8000000 - CLONE_NEWNET = 0x40000000 - CLONE_NEWNS = 0x20000 - CLONE_NEWPID = 0x20000000 - CLONE_NEWUSER = 0x10000000 - CLONE_NEWUTS = 0x4000000 - CLONE_PARENT = 0x8000 - CLONE_PARENT_SETTID = 0x100000 - CLONE_PTRACE = 0x2000 - CLONE_SETTLS = 0x80000 - CLONE_SIGHAND = 0x800 - CLONE_SYSVSEM = 0x40000 - CLONE_THREAD = 0x10000 - CLONE_UNTRACED = 0x800000 - CLONE_VFORK = 0x4000 - CLONE_VM = 0x100 - CMSPAR = 0x40000000 - CR0 = 0x0 - CR1 = 0x200 - CR2 = 0x400 - CR3 = 0x600 - CRDLY = 0x600 - CREAD = 0x80 - CRTSCTS = 0x80000000 - CS5 = 0x0 - CS6 = 0x10 - CS7 = 0x20 - CS8 = 0x30 - CSIGNAL = 0xff - CSIZE = 0x30 - CSTART = 0x11 - CSTATUS = 0x0 - CSTOP = 0x13 - CSTOPB = 0x40 - CSUSP = 0x1a - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x200 - ECHOE = 0x10 - ECHOK = 0x20 - ECHOKE = 0x800 - ECHONL = 0x40 - ECHOPRT = 0x400 - ENCODING_DEFAULT = 0x0 - ENCODING_FM_MARK = 0x3 - ENCODING_FM_SPACE = 0x4 - ENCODING_MANCHESTER = 0x5 - ENCODING_NRZ = 0x1 - ENCODING_NRZI = 0x2 - EPOLLERR = 0x8 - EPOLLET = 0x80000000 - EPOLLHUP = 0x10 - EPOLLIN = 0x1 - EPOLLMSG = 0x400 - EPOLLONESHOT = 0x40000000 - EPOLLOUT = 0x4 - EPOLLPRI = 0x2 - EPOLLRDBAND = 0x80 - EPOLLRDHUP = 0x2000 - EPOLLRDNORM = 0x40 - EPOLLWRBAND = 0x200 - EPOLLWRNORM = 0x100 - EPOLL_CLOEXEC = 0x80000 - EPOLL_CTL_ADD = 0x1 - EPOLL_CTL_DEL = 0x2 - EPOLL_CTL_MOD = 0x3 - EPOLL_NONBLOCK = 0x800 - ETH_P_1588 = 0x88f7 - ETH_P_8021AD = 0x88a8 - ETH_P_8021AH = 0x88e7 - ETH_P_8021Q = 0x8100 - ETH_P_802_2 = 0x4 - ETH_P_802_3 = 0x1 - ETH_P_AARP = 0x80f3 - ETH_P_AF_IUCV = 0xfbfb - ETH_P_ALL = 0x3 - ETH_P_AOE = 0x88a2 - ETH_P_ARCNET = 0x1a - ETH_P_ARP = 0x806 - ETH_P_ATALK = 0x809b - ETH_P_ATMFATE = 0x8884 - ETH_P_ATMMPOA = 0x884c - ETH_P_AX25 = 0x2 - ETH_P_BPQ = 0x8ff - ETH_P_CAIF = 0xf7 - ETH_P_CAN = 0xc - ETH_P_CONTROL = 0x16 - ETH_P_CUST = 0x6006 - ETH_P_DDCMP = 0x6 - ETH_P_DEC = 0x6000 - ETH_P_DIAG = 0x6005 - ETH_P_DNA_DL = 0x6001 - ETH_P_DNA_RC = 0x6002 - ETH_P_DNA_RT = 0x6003 - ETH_P_DSA = 0x1b - ETH_P_ECONET = 0x18 - ETH_P_EDSA = 0xdada - ETH_P_FCOE = 0x8906 - ETH_P_FIP = 0x8914 - ETH_P_HDLC = 0x19 - ETH_P_IEEE802154 = 0xf6 - ETH_P_IEEEPUP = 0xa00 - ETH_P_IEEEPUPAT = 0xa01 - ETH_P_IP = 0x800 - ETH_P_IPV6 
= 0x86dd - ETH_P_IPX = 0x8137 - ETH_P_IRDA = 0x17 - ETH_P_LAT = 0x6004 - ETH_P_LINK_CTL = 0x886c - ETH_P_LOCALTALK = 0x9 - ETH_P_LOOP = 0x60 - ETH_P_MOBITEX = 0x15 - ETH_P_MPLS_MC = 0x8848 - ETH_P_MPLS_UC = 0x8847 - ETH_P_PAE = 0x888e - ETH_P_PAUSE = 0x8808 - ETH_P_PHONET = 0xf5 - ETH_P_PPPTALK = 0x10 - ETH_P_PPP_DISC = 0x8863 - ETH_P_PPP_MP = 0x8 - ETH_P_PPP_SES = 0x8864 - ETH_P_PUP = 0x200 - ETH_P_PUPAT = 0x201 - ETH_P_QINQ1 = 0x9100 - ETH_P_QINQ2 = 0x9200 - ETH_P_QINQ3 = 0x9300 - ETH_P_RARP = 0x8035 - ETH_P_SCA = 0x6007 - ETH_P_SLOW = 0x8809 - ETH_P_SNAP = 0x5 - ETH_P_TDLS = 0x890d - ETH_P_TEB = 0x6558 - ETH_P_TIPC = 0x88ca - ETH_P_TRAILER = 0x1c - ETH_P_TR_802_2 = 0x11 - ETH_P_WAN_PPP = 0x7 - ETH_P_WCCP = 0x883e - ETH_P_X25 = 0x805 - EXTA = 0xe - EXTB = 0xf - EXTPROC = 0x10000 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FF0 = 0x0 - FF1 = 0x8000 - FFDLY = 0x8000 - FLUSHO = 0x1000 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x406 - F_EXLCK = 0x4 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLEASE = 0x401 - F_GETLK = 0x5 - F_GETLK64 = 0x5 - F_GETOWN = 0x9 - F_GETOWN_EX = 0x10 - F_GETPIPE_SZ = 0x408 - F_GETSIG = 0xb - F_LOCK = 0x1 - F_NOTIFY = 0x402 - F_OK = 0x0 - F_RDLCK = 0x0 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLEASE = 0x400 - F_SETLK = 0x6 - F_SETLK64 = 0x6 - F_SETLKW = 0x7 - F_SETLKW64 = 0x7 - F_SETOWN = 0x8 - F_SETOWN_EX = 0xf - F_SETPIPE_SZ = 0x407 - F_SETSIG = 0xa - F_SHLCK = 0x8 - F_TEST = 0x3 - F_TLOCK = 0x2 - F_ULOCK = 0x0 - F_UNLCK = 0x2 - F_WRLCK = 0x1 - HUPCL = 0x400 - IBSHIFT = 0x10 - ICANON = 0x2 - ICMPV6_FILTER = 0x1 - ICRNL = 0x100 - IEXTEN = 0x8000 - IFA_F_DADFAILED = 0x8 - IFA_F_DEPRECATED = 0x20 - IFA_F_HOMEADDRESS = 0x10 - IFA_F_NODAD = 0x2 - IFA_F_OPTIMISTIC = 0x4 - IFA_F_PERMANENT = 0x80 - IFA_F_SECONDARY = 0x1 - IFA_F_TEMPORARY = 0x1 - IFA_F_TENTATIVE = 0x40 - IFA_MAX = 0x7 - IFF_802_1Q_VLAN = 0x1 - IFF_ALLMULTI = 0x200 - IFF_AUTOMEDIA = 0x4000 - IFF_BONDING = 0x20 - IFF_BRIDGE_PORT = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_DISABLE_NETPOLL = 0x1000 - IFF_DONT_BRIDGE = 0x800 - IFF_DORMANT = 0x20000 - IFF_DYNAMIC = 0x8000 - IFF_EBRIDGE = 0x2 - IFF_ECHO = 0x40000 - IFF_ISATAP = 0x80 - IFF_LOOPBACK = 0x8 - IFF_LOWER_UP = 0x10000 - IFF_MACVLAN_PORT = 0x2000 - IFF_MASTER = 0x400 - IFF_MASTER_8023AD = 0x8 - IFF_MASTER_ALB = 0x10 - IFF_MASTER_ARPMON = 0x100 - IFF_MULTICAST = 0x1000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_NO_PI = 0x1000 - IFF_ONE_QUEUE = 0x2000 - IFF_OVS_DATAPATH = 0x8000 - IFF_POINTOPOINT = 0x10 - IFF_PORTSEL = 0x2000 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SLAVE = 0x800 - IFF_SLAVE_INACTIVE = 0x4 - IFF_SLAVE_NEEDARP = 0x40 - IFF_TAP = 0x2 - IFF_TUN = 0x1 - IFF_TUN_EXCL = 0x8000 - IFF_TX_SKB_SHARING = 0x10000 - IFF_UNICAST_FLT = 0x20000 - IFF_UP = 0x1 - IFF_VNET_HDR = 0x4000 - IFF_VOLATILE = 0x70c5a - IFF_WAN_HDLC = 0x200 - IFF_XMIT_DST_RELEASE = 0x400 - IFNAMSIZ = 0x10 - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_ACCESS = 0x1 - IN_ALL_EVENTS = 0xfff - IN_ATTRIB = 0x4 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLOEXEC = 0x80000 - IN_CLOSE = 0x18 - IN_CLOSE_NOWRITE = 0x10 - IN_CLOSE_WRITE = 0x8 - IN_CREATE = 0x100 - IN_DELETE = 0x200 - IN_DELETE_SELF = 0x400 - IN_DONT_FOLLOW = 0x2000000 - IN_EXCL_UNLINK = 0x4000000 - IN_IGNORED = 0x8000 - 
IN_ISDIR = 0x40000000 - IN_LOOPBACKNET = 0x7f - IN_MASK_ADD = 0x20000000 - IN_MODIFY = 0x2 - IN_MOVE = 0xc0 - IN_MOVED_FROM = 0x40 - IN_MOVED_TO = 0x80 - IN_MOVE_SELF = 0x800 - IN_NONBLOCK = 0x800 - IN_ONESHOT = 0x80000000 - IN_ONLYDIR = 0x1000000 - IN_OPEN = 0x20 - IN_Q_OVERFLOW = 0x4000 - IN_UNMOUNT = 0x2000 - IPPROTO_AH = 0x33 - IPPROTO_COMP = 0x6c - IPPROTO_DCCP = 0x21 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_ESP = 0x32 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPIP = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_MTP = 0x5c - IPPROTO_NONE = 0x3b - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_SCTP = 0x84 - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPPROTO_UDPLITE = 0x88 - IPV6_2292DSTOPTS = 0x4 - IPV6_2292HOPLIMIT = 0x8 - IPV6_2292HOPOPTS = 0x3 - IPV6_2292PKTINFO = 0x2 - IPV6_2292PKTOPTIONS = 0x6 - IPV6_2292RTHDR = 0x5 - IPV6_ADDRFORM = 0x1 - IPV6_ADD_MEMBERSHIP = 0x14 - IPV6_AUTHHDR = 0xa - IPV6_CHECKSUM = 0x7 - IPV6_DROP_MEMBERSHIP = 0x15 - IPV6_DSTOPTS = 0x3b - IPV6_HOPLIMIT = 0x34 - IPV6_HOPOPTS = 0x36 - IPV6_IPSEC_POLICY = 0x22 - IPV6_JOIN_ANYCAST = 0x1b - IPV6_JOIN_GROUP = 0x14 - IPV6_LEAVE_ANYCAST = 0x1c - IPV6_LEAVE_GROUP = 0x15 - IPV6_MTU = 0x18 - IPV6_MTU_DISCOVER = 0x17 - IPV6_MULTICAST_HOPS = 0x12 - IPV6_MULTICAST_IF = 0x11 - IPV6_MULTICAST_LOOP = 0x13 - IPV6_NEXTHOP = 0x9 - IPV6_PKTINFO = 0x32 - IPV6_PMTUDISC_DO = 0x2 - IPV6_PMTUDISC_DONT = 0x0 - IPV6_PMTUDISC_PROBE = 0x3 - IPV6_PMTUDISC_WANT = 0x1 - IPV6_RECVDSTOPTS = 0x3a - IPV6_RECVERR = 0x19 - IPV6_RECVHOPLIMIT = 0x33 - IPV6_RECVHOPOPTS = 0x35 - IPV6_RECVPKTINFO = 0x31 - IPV6_RECVRTHDR = 0x38 - IPV6_RECVTCLASS = 0x42 - IPV6_ROUTER_ALERT = 0x16 - IPV6_RTHDR = 0x39 - IPV6_RTHDRDSTOPTS = 0x37 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_RXDSTOPTS = 0x3b - IPV6_RXHOPOPTS = 0x36 - IPV6_TCLASS = 0x43 - IPV6_UNICAST_HOPS = 0x10 - IPV6_V6ONLY = 0x1a - IPV6_XFRM_POLICY = 0x23 - IP_ADD_MEMBERSHIP = 0x23 - IP_ADD_SOURCE_MEMBERSHIP = 0x27 - IP_BLOCK_SOURCE = 0x26 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0x24 - IP_DROP_SOURCE_MEMBERSHIP = 0x28 - IP_FREEBIND = 0xf - IP_HDRINCL = 0x3 - IP_IPSEC_POLICY = 0x10 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINTTL = 0x15 - IP_MSFILTER = 0x29 - IP_MSS = 0x240 - IP_MTU = 0xe - IP_MTU_DISCOVER = 0xa - IP_MULTICAST_ALL = 0x31 - IP_MULTICAST_IF = 0x20 - IP_MULTICAST_LOOP = 0x22 - IP_MULTICAST_TTL = 0x21 - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x4 - IP_ORIGDSTADDR = 0x14 - IP_PASSSEC = 0x12 - IP_PKTINFO = 0x8 - IP_PKTOPTIONS = 0x9 - IP_PMTUDISC = 0xa - IP_PMTUDISC_DO = 0x2 - IP_PMTUDISC_DONT = 0x0 - IP_PMTUDISC_PROBE = 0x3 - IP_PMTUDISC_WANT = 0x1 - IP_RECVERR = 0xb - IP_RECVOPTS = 0x6 - IP_RECVORIGDSTADDR = 0x14 - IP_RECVRETOPTS = 0x7 - IP_RECVTOS = 0xd - IP_RECVTTL = 0xc - IP_RETOPTS = 0x7 - IP_RF = 0x8000 - IP_ROUTER_ALERT = 0x5 - IP_TOS = 0x1 - IP_TRANSPARENT = 0x13 - IP_TTL = 0x2 - IP_UNBLOCK_SOURCE = 0x25 - IP_XFRM_POLICY = 0x11 - ISIG = 0x1 - ISTRIP = 0x20 - IUCLC = 0x200 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x1000 - IXON = 0x400 - LINUX_REBOOT_CMD_CAD_OFF = 0x0 - LINUX_REBOOT_CMD_CAD_ON = 0x89abcdef - LINUX_REBOOT_CMD_HALT = 0xcdef0123 - LINUX_REBOOT_CMD_KEXEC = 0x45584543 - 
LINUX_REBOOT_CMD_POWER_OFF = 0x4321fedc - LINUX_REBOOT_CMD_RESTART = 0x1234567 - LINUX_REBOOT_CMD_RESTART2 = 0xa1b2c3d4 - LINUX_REBOOT_CMD_SW_SUSPEND = 0xd000fce2 - LINUX_REBOOT_MAGIC1 = 0xfee1dead - LINUX_REBOOT_MAGIC2 = 0x28121969 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DOFORK = 0xb - MADV_DONTFORK = 0xa - MADV_DONTNEED = 0x4 - MADV_HUGEPAGE = 0xe - MADV_HWPOISON = 0x64 - MADV_MERGEABLE = 0xc - MADV_NOHUGEPAGE = 0xf - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_REMOVE = 0x9 - MADV_SEQUENTIAL = 0x2 - MADV_UNMERGEABLE = 0xd - MADV_WILLNEED = 0x3 - MAP_32BIT = 0x40 - MAP_ANON = 0x20 - MAP_ANONYMOUS = 0x20 - MAP_DENYWRITE = 0x800 - MAP_EXECUTABLE = 0x1000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_GROWSDOWN = 0x100 - MAP_HUGETLB = 0x40000 - MAP_LOCKED = 0x2000 - MAP_NONBLOCK = 0x10000 - MAP_NORESERVE = 0x4000 - MAP_POPULATE = 0x8000 - MAP_PRIVATE = 0x2 - MAP_SHARED = 0x1 - MAP_STACK = 0x20000 - MAP_TYPE = 0xf - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MNT_DETACH = 0x2 - MNT_EXPIRE = 0x4 - MNT_FORCE = 0x1 - MSG_CMSG_CLOEXEC = 0x40000000 - MSG_CONFIRM = 0x800 - MSG_CTRUNC = 0x8 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x40 - MSG_EOR = 0x80 - MSG_ERRQUEUE = 0x2000 - MSG_FASTOPEN = 0x20000000 - MSG_FIN = 0x200 - MSG_MORE = 0x8000 - MSG_NOSIGNAL = 0x4000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_PROXY = 0x10 - MSG_RST = 0x1000 - MSG_SYN = 0x400 - MSG_TRUNC = 0x20 - MSG_TRYHARD = 0x4 - MSG_WAITALL = 0x100 - MSG_WAITFORONE = 0x10000 - MS_ACTIVE = 0x40000000 - MS_ASYNC = 0x1 - MS_BIND = 0x1000 - MS_DIRSYNC = 0x80 - MS_INVALIDATE = 0x2 - MS_I_VERSION = 0x800000 - MS_KERNMOUNT = 0x400000 - MS_MANDLOCK = 0x40 - MS_MGC_MSK = 0xffff0000 - MS_MGC_VAL = 0xc0ed0000 - MS_MOVE = 0x2000 - MS_NOATIME = 0x400 - MS_NODEV = 0x4 - MS_NODIRATIME = 0x800 - MS_NOEXEC = 0x8 - MS_NOSUID = 0x2 - MS_NOUSER = -0x80000000 - MS_POSIXACL = 0x10000 - MS_PRIVATE = 0x40000 - MS_RDONLY = 0x1 - MS_REC = 0x4000 - MS_RELATIME = 0x200000 - MS_REMOUNT = 0x20 - MS_RMT_MASK = 0x800051 - MS_SHARED = 0x100000 - MS_SILENT = 0x8000 - MS_SLAVE = 0x80000 - MS_STRICTATIME = 0x1000000 - MS_SYNC = 0x4 - MS_SYNCHRONOUS = 0x10 - MS_UNBINDABLE = 0x20000 - NAME_MAX = 0xff - NETLINK_ADD_MEMBERSHIP = 0x1 - NETLINK_AUDIT = 0x9 - NETLINK_BROADCAST_ERROR = 0x4 - NETLINK_CONNECTOR = 0xb - NETLINK_CRYPTO = 0x15 - NETLINK_DNRTMSG = 0xe - NETLINK_DROP_MEMBERSHIP = 0x2 - NETLINK_ECRYPTFS = 0x13 - NETLINK_FIB_LOOKUP = 0xa - NETLINK_FIREWALL = 0x3 - NETLINK_GENERIC = 0x10 - NETLINK_INET_DIAG = 0x4 - NETLINK_IP6_FW = 0xd - NETLINK_ISCSI = 0x8 - NETLINK_KOBJECT_UEVENT = 0xf - NETLINK_NETFILTER = 0xc - NETLINK_NFLOG = 0x5 - NETLINK_NO_ENOBUFS = 0x5 - NETLINK_PKTINFO = 0x3 - NETLINK_RDMA = 0x14 - NETLINK_ROUTE = 0x0 - NETLINK_SCSITRANSPORT = 0x12 - NETLINK_SELINUX = 0x7 - NETLINK_UNUSED = 0x1 - NETLINK_USERSOCK = 0x2 - NETLINK_XFRM = 0x6 - NL0 = 0x0 - NL1 = 0x100 - NLA_ALIGNTO = 0x4 - NLA_F_NESTED = 0x8000 - NLA_F_NET_BYTEORDER = 0x4000 - NLA_HDRLEN = 0x4 - NLDLY = 0x100 - NLMSG_ALIGNTO = 0x4 - NLMSG_DONE = 0x3 - NLMSG_ERROR = 0x2 - NLMSG_HDRLEN = 0x10 - NLMSG_MIN_TYPE = 0x10 - NLMSG_NOOP = 0x1 - NLMSG_OVERRUN = 0x4 - NLM_F_ACK = 0x4 - NLM_F_APPEND = 0x800 - NLM_F_ATOMIC = 0x400 - NLM_F_CREATE = 0x400 - NLM_F_DUMP = 0x300 - NLM_F_DUMP_INTR = 0x10 - NLM_F_ECHO = 0x8 - NLM_F_EXCL = 0x200 - NLM_F_MATCH = 0x200 - NLM_F_MULTI = 0x2 - NLM_F_REPLACE = 0x100 - NLM_F_REQUEST = 0x1 - NLM_F_ROOT = 0x100 - NOFLSH = 0x80 - OCRNL = 0x8 - OFDEL = 0x80 - OFILL = 0x40 - OLCUC = 0x2 - ONLCR = 0x4 - ONLRET = 0x20 - ONOCR = 0x10 - OPOST = 0x1 - O_ACCMODE = 0x3 
- O_APPEND = 0x400 - O_ASYNC = 0x2000 - O_CLOEXEC = 0x80000 - O_CREAT = 0x40 - O_DIRECT = 0x4000 - O_DIRECTORY = 0x10000 - O_DSYNC = 0x1000 - O_EXCL = 0x80 - O_FSYNC = 0x101000 - O_LARGEFILE = 0x0 - O_NDELAY = 0x800 - O_NOATIME = 0x40000 - O_NOCTTY = 0x100 - O_NOFOLLOW = 0x20000 - O_NONBLOCK = 0x800 - O_PATH = 0x200000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x101000 - O_SYNC = 0x101000 - O_TRUNC = 0x200 - O_WRONLY = 0x1 - PACKET_ADD_MEMBERSHIP = 0x1 - PACKET_AUXDATA = 0x8 - PACKET_BROADCAST = 0x1 - PACKET_COPY_THRESH = 0x7 - PACKET_DROP_MEMBERSHIP = 0x2 - PACKET_FANOUT = 0x12 - PACKET_FANOUT_CPU = 0x2 - PACKET_FANOUT_FLAG_DEFRAG = 0x8000 - PACKET_FANOUT_HASH = 0x0 - PACKET_FANOUT_LB = 0x1 - PACKET_FASTROUTE = 0x6 - PACKET_HDRLEN = 0xb - PACKET_HOST = 0x0 - PACKET_LOOPBACK = 0x5 - PACKET_LOSS = 0xe - PACKET_MR_ALLMULTI = 0x2 - PACKET_MR_MULTICAST = 0x0 - PACKET_MR_PROMISC = 0x1 - PACKET_MR_UNICAST = 0x3 - PACKET_MULTICAST = 0x2 - PACKET_ORIGDEV = 0x9 - PACKET_OTHERHOST = 0x3 - PACKET_OUTGOING = 0x4 - PACKET_RECV_OUTPUT = 0x3 - PACKET_RESERVE = 0xc - PACKET_RX_RING = 0x5 - PACKET_STATISTICS = 0x6 - PACKET_TIMESTAMP = 0x11 - PACKET_TX_RING = 0xd - PACKET_TX_TIMESTAMP = 0x10 - PACKET_VERSION = 0xa - PACKET_VNET_HDR = 0xf - PARENB = 0x100 - PARITY_CRC16_PR0 = 0x2 - PARITY_CRC16_PR0_CCITT = 0x4 - PARITY_CRC16_PR1 = 0x3 - PARITY_CRC16_PR1_CCITT = 0x5 - PARITY_CRC32_PR0_CCITT = 0x6 - PARITY_CRC32_PR1_CCITT = 0x7 - PARITY_DEFAULT = 0x0 - PARITY_NONE = 0x1 - PARMRK = 0x8 - PARODD = 0x200 - PENDIN = 0x4000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_GROWSDOWN = 0x1000000 - PROT_GROWSUP = 0x2000000 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PR_CAPBSET_DROP = 0x18 - PR_CAPBSET_READ = 0x17 - PR_ENDIAN_BIG = 0x0 - PR_ENDIAN_LITTLE = 0x1 - PR_ENDIAN_PPC_LITTLE = 0x2 - PR_FPEMU_NOPRINT = 0x1 - PR_FPEMU_SIGFPE = 0x2 - PR_FP_EXC_ASYNC = 0x2 - PR_FP_EXC_DISABLED = 0x0 - PR_FP_EXC_DIV = 0x10000 - PR_FP_EXC_INV = 0x100000 - PR_FP_EXC_NONRECOV = 0x1 - PR_FP_EXC_OVF = 0x20000 - PR_FP_EXC_PRECISE = 0x3 - PR_FP_EXC_RES = 0x80000 - PR_FP_EXC_SW_ENABLE = 0x80 - PR_FP_EXC_UND = 0x40000 - PR_GET_DUMPABLE = 0x3 - PR_GET_ENDIAN = 0x13 - PR_GET_FPEMU = 0x9 - PR_GET_FPEXC = 0xb - PR_GET_KEEPCAPS = 0x7 - PR_GET_NAME = 0x10 - PR_GET_NO_NEW_PRIVS = 0x27 - PR_GET_PDEATHSIG = 0x2 - PR_GET_SECCOMP = 0x15 - PR_GET_SECUREBITS = 0x1b - PR_GET_TIMERSLACK = 0x1e - PR_GET_TIMING = 0xd - PR_GET_TSC = 0x19 - PR_GET_UNALIGN = 0x5 - PR_MCE_KILL = 0x21 - PR_MCE_KILL_CLEAR = 0x0 - PR_MCE_KILL_DEFAULT = 0x2 - PR_MCE_KILL_EARLY = 0x1 - PR_MCE_KILL_GET = 0x22 - PR_MCE_KILL_LATE = 0x0 - PR_MCE_KILL_SET = 0x1 - PR_SET_DUMPABLE = 0x4 - PR_SET_ENDIAN = 0x14 - PR_SET_FPEMU = 0xa - PR_SET_FPEXC = 0xc - PR_SET_KEEPCAPS = 0x8 - PR_SET_MM = 0x23 - PR_SET_MM_BRK = 0x7 - PR_SET_MM_END_CODE = 0x2 - PR_SET_MM_END_DATA = 0x4 - PR_SET_MM_START_BRK = 0x6 - PR_SET_MM_START_CODE = 0x1 - PR_SET_MM_START_DATA = 0x3 - PR_SET_MM_START_STACK = 0x5 - PR_SET_NAME = 0xf - PR_SET_NO_NEW_PRIVS = 0x26 - PR_SET_PDEATHSIG = 0x1 - PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 - PR_SET_SECCOMP = 0x16 - PR_SET_SECUREBITS = 0x1c - PR_SET_TIMERSLACK = 0x1d - PR_SET_TIMING = 0xe - PR_SET_TSC = 0x1a - PR_SET_UNALIGN = 0x6 - PR_TASK_PERF_EVENTS_DISABLE = 0x1f - PR_TASK_PERF_EVENTS_ENABLE = 0x20 - PR_TIMING_STATISTICAL = 0x0 - PR_TIMING_TIMESTAMP = 0x1 - PR_TSC_ENABLE = 0x1 - PR_TSC_SIGSEGV = 0x2 - PR_UNALIGN_NOPRINT = 0x1 - PR_UNALIGN_SIGBUS = 0x2 - PTRACE_ARCH_PRCTL = 0x1e - PTRACE_ATTACH = 0x10 - PTRACE_CONT = 0x7 
- PTRACE_DETACH = 0x11 - PTRACE_EVENT_CLONE = 0x3 - PTRACE_EVENT_EXEC = 0x4 - PTRACE_EVENT_EXIT = 0x6 - PTRACE_EVENT_FORK = 0x1 - PTRACE_EVENT_SECCOMP = 0x7 - PTRACE_EVENT_STOP = 0x80 - PTRACE_EVENT_VFORK = 0x2 - PTRACE_EVENT_VFORK_DONE = 0x5 - PTRACE_GETEVENTMSG = 0x4201 - PTRACE_GETFPREGS = 0xe - PTRACE_GETFPXREGS = 0x12 - PTRACE_GETREGS = 0xc - PTRACE_GETREGSET = 0x4204 - PTRACE_GETSIGINFO = 0x4202 - PTRACE_GET_THREAD_AREA = 0x19 - PTRACE_INTERRUPT = 0x4207 - PTRACE_KILL = 0x8 - PTRACE_LISTEN = 0x4208 - PTRACE_OLDSETOPTIONS = 0x15 - PTRACE_O_MASK = 0xff - PTRACE_O_TRACECLONE = 0x8 - PTRACE_O_TRACEEXEC = 0x10 - PTRACE_O_TRACEEXIT = 0x40 - PTRACE_O_TRACEFORK = 0x2 - PTRACE_O_TRACESECCOMP = 0x80 - PTRACE_O_TRACESYSGOOD = 0x1 - PTRACE_O_TRACEVFORK = 0x4 - PTRACE_O_TRACEVFORKDONE = 0x20 - PTRACE_PEEKDATA = 0x2 - PTRACE_PEEKTEXT = 0x1 - PTRACE_PEEKUSR = 0x3 - PTRACE_POKEDATA = 0x5 - PTRACE_POKETEXT = 0x4 - PTRACE_POKEUSR = 0x6 - PTRACE_SEIZE = 0x4206 - PTRACE_SEIZE_DEVEL = 0x80000000 - PTRACE_SETFPREGS = 0xf - PTRACE_SETFPXREGS = 0x13 - PTRACE_SETOPTIONS = 0x4200 - PTRACE_SETREGS = 0xd - PTRACE_SETREGSET = 0x4205 - PTRACE_SETSIGINFO = 0x4203 - PTRACE_SET_THREAD_AREA = 0x1a - PTRACE_SINGLEBLOCK = 0x21 - PTRACE_SINGLESTEP = 0x9 - PTRACE_SYSCALL = 0x18 - PTRACE_SYSEMU = 0x1f - PTRACE_SYSEMU_SINGLESTEP = 0x20 - PTRACE_TRACEME = 0x0 - RLIMIT_AS = 0x9 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x7 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 - RTAX_ADVMSS = 0x8 - RTAX_CWND = 0x7 - RTAX_FEATURES = 0xc - RTAX_FEATURE_ALLFRAG = 0x8 - RTAX_FEATURE_ECN = 0x1 - RTAX_FEATURE_SACK = 0x2 - RTAX_FEATURE_TIMESTAMP = 0x4 - RTAX_HOPLIMIT = 0xa - RTAX_INITCWND = 0xb - RTAX_INITRWND = 0xe - RTAX_LOCK = 0x1 - RTAX_MAX = 0xe - RTAX_MTU = 0x2 - RTAX_REORDERING = 0x9 - RTAX_RTO_MIN = 0xd - RTAX_RTT = 0x4 - RTAX_RTTVAR = 0x5 - RTAX_SSTHRESH = 0x6 - RTAX_UNSPEC = 0x0 - RTAX_WINDOW = 0x3 - RTA_ALIGNTO = 0x4 - RTA_MAX = 0x10 - RTCF_DIRECTSRC = 0x4000000 - RTCF_DOREDIRECT = 0x1000000 - RTCF_LOG = 0x2000000 - RTCF_MASQ = 0x400000 - RTCF_NAT = 0x800000 - RTCF_VALVE = 0x200000 - RTF_ADDRCLASSMASK = 0xf8000000 - RTF_ADDRCONF = 0x40000 - RTF_ALLONLINK = 0x20000 - RTF_BROADCAST = 0x10000000 - RTF_CACHE = 0x1000000 - RTF_DEFAULT = 0x10000 - RTF_DYNAMIC = 0x10 - RTF_FLOW = 0x2000000 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_INTERFACE = 0x40000000 - RTF_IRTT = 0x100 - RTF_LINKRT = 0x100000 - RTF_LOCAL = 0x80000000 - RTF_MODIFIED = 0x20 - RTF_MSS = 0x40 - RTF_MTU = 0x40 - RTF_MULTICAST = 0x20000000 - RTF_NAT = 0x8000000 - RTF_NOFORWARD = 0x1000 - RTF_NONEXTHOP = 0x200000 - RTF_NOPMTUDISC = 0x4000 - RTF_POLICY = 0x4000000 - RTF_REINSTATE = 0x8 - RTF_REJECT = 0x200 - RTF_STATIC = 0x400 - RTF_THROW = 0x2000 - RTF_UP = 0x1 - RTF_WINDOW = 0x80 - RTF_XRESOLVE = 0x800 - RTM_BASE = 0x10 - RTM_DELACTION = 0x31 - RTM_DELADDR = 0x15 - RTM_DELADDRLABEL = 0x49 - RTM_DELLINK = 0x11 - RTM_DELNEIGH = 0x1d - RTM_DELQDISC = 0x25 - RTM_DELROUTE = 0x19 - RTM_DELRULE = 0x21 - RTM_DELTCLASS = 0x29 - RTM_DELTFILTER = 0x2d - RTM_F_CLONED = 0x200 - RTM_F_EQUALIZE = 0x400 - RTM_F_NOTIFY = 0x100 - RTM_F_PREFIX = 0x800 - RTM_GETACTION = 0x32 - RTM_GETADDR = 0x16 - RTM_GETADDRLABEL = 0x4a - RTM_GETANYCAST = 0x3e - RTM_GETDCB = 0x4e - RTM_GETLINK = 0x12 - RTM_GETMULTICAST = 0x3a - RTM_GETNEIGH = 0x1e - RTM_GETNEIGHTBL = 0x42 - RTM_GETQDISC = 0x26 - RTM_GETROUTE = 0x1a - RTM_GETRULE = 0x22 - RTM_GETTCLASS = 0x2a - RTM_GETTFILTER = 0x2e - RTM_MAX = 0x4f - RTM_NEWACTION = 0x30 - RTM_NEWADDR = 0x14 - 
RTM_NEWADDRLABEL = 0x48 - RTM_NEWLINK = 0x10 - RTM_NEWNDUSEROPT = 0x44 - RTM_NEWNEIGH = 0x1c - RTM_NEWNEIGHTBL = 0x40 - RTM_NEWPREFIX = 0x34 - RTM_NEWQDISC = 0x24 - RTM_NEWROUTE = 0x18 - RTM_NEWRULE = 0x20 - RTM_NEWTCLASS = 0x28 - RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x10 - RTM_NR_MSGTYPES = 0x40 - RTM_SETDCB = 0x4f - RTM_SETLINK = 0x13 - RTM_SETNEIGHTBL = 0x43 - RTNH_ALIGNTO = 0x4 - RTNH_F_DEAD = 0x1 - RTNH_F_ONLINK = 0x4 - RTNH_F_PERVASIVE = 0x2 - RTN_MAX = 0xb - RTPROT_BIRD = 0xc - RTPROT_BOOT = 0x3 - RTPROT_DHCP = 0x10 - RTPROT_DNROUTED = 0xd - RTPROT_GATED = 0x8 - RTPROT_KERNEL = 0x2 - RTPROT_MRT = 0xa - RTPROT_NTK = 0xf - RTPROT_RA = 0x9 - RTPROT_REDIRECT = 0x1 - RTPROT_STATIC = 0x4 - RTPROT_UNSPEC = 0x0 - RTPROT_XORP = 0xe - RTPROT_ZEBRA = 0xb - RT_CLASS_DEFAULT = 0xfd - RT_CLASS_LOCAL = 0xff - RT_CLASS_MAIN = 0xfe - RT_CLASS_MAX = 0xff - RT_CLASS_UNSPEC = 0x0 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_CREDENTIALS = 0x2 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x1d - SCM_TIMESTAMPING = 0x25 - SCM_TIMESTAMPNS = 0x23 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDDLCI = 0x8980 - SIOCADDMULTI = 0x8931 - SIOCADDRT = 0x890b - SIOCATMARK = 0x8905 - SIOCDARP = 0x8953 - SIOCDELDLCI = 0x8981 - SIOCDELMULTI = 0x8932 - SIOCDELRT = 0x890c - SIOCDEVPRIVATE = 0x89f0 - SIOCDIFADDR = 0x8936 - SIOCDRARP = 0x8960 - SIOCGARP = 0x8954 - SIOCGIFADDR = 0x8915 - SIOCGIFBR = 0x8940 - SIOCGIFBRDADDR = 0x8919 - SIOCGIFCONF = 0x8912 - SIOCGIFCOUNT = 0x8938 - SIOCGIFDSTADDR = 0x8917 - SIOCGIFENCAP = 0x8925 - SIOCGIFFLAGS = 0x8913 - SIOCGIFHWADDR = 0x8927 - SIOCGIFINDEX = 0x8933 - SIOCGIFMAP = 0x8970 - SIOCGIFMEM = 0x891f - SIOCGIFMETRIC = 0x891d - SIOCGIFMTU = 0x8921 - SIOCGIFNAME = 0x8910 - SIOCGIFNETMASK = 0x891b - SIOCGIFPFLAGS = 0x8935 - SIOCGIFSLAVE = 0x8929 - SIOCGIFTXQLEN = 0x8942 - SIOCGPGRP = 0x8904 - SIOCGRARP = 0x8961 - SIOCGSTAMP = 0x8906 - SIOCGSTAMPNS = 0x8907 - SIOCPROTOPRIVATE = 0x89e0 - SIOCRTMSG = 0x890d - SIOCSARP = 0x8955 - SIOCSIFADDR = 0x8916 - SIOCSIFBR = 0x8941 - SIOCSIFBRDADDR = 0x891a - SIOCSIFDSTADDR = 0x8918 - SIOCSIFENCAP = 0x8926 - SIOCSIFFLAGS = 0x8914 - SIOCSIFHWADDR = 0x8924 - SIOCSIFHWBROADCAST = 0x8937 - SIOCSIFLINK = 0x8911 - SIOCSIFMAP = 0x8971 - SIOCSIFMEM = 0x8920 - SIOCSIFMETRIC = 0x891e - SIOCSIFMTU = 0x8922 - SIOCSIFNAME = 0x8923 - SIOCSIFNETMASK = 0x891c - SIOCSIFPFLAGS = 0x8934 - SIOCSIFSLAVE = 0x8930 - SIOCSIFTXQLEN = 0x8943 - SIOCSPGRP = 0x8902 - SIOCSRARP = 0x8962 - SOCK_CLOEXEC = 0x80000 - SOCK_DCCP = 0x6 - SOCK_DGRAM = 0x2 - SOCK_NONBLOCK = 0x800 - SOCK_PACKET = 0xa - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_AAL = 0x109 - SOL_ATM = 0x108 - SOL_DECNET = 0x105 - SOL_ICMPV6 = 0x3a - SOL_IP = 0x0 - SOL_IPV6 = 0x29 - SOL_IRDA = 0x10a - SOL_PACKET = 0x107 - SOL_RAW = 0xff - SOL_SOCKET = 0x1 - SOL_TCP = 0x6 - SOL_X25 = 0x106 - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x1e - SO_ATTACH_FILTER = 0x1a - SO_BINDTODEVICE = 0x19 - SO_BROADCAST = 0x6 - SO_BSDCOMPAT = 0xe - SO_DEBUG = 0x1 - SO_DETACH_FILTER = 0x1b - SO_DOMAIN = 0x27 - SO_DONTROUTE = 0x5 - SO_ERROR = 0x4 - SO_KEEPALIVE = 0x9 - SO_LINGER = 0xd - SO_MARK = 0x24 - SO_NO_CHECK = 0xb - SO_OOBINLINE = 0xa - SO_PASSCRED = 0x10 - SO_PASSSEC = 0x22 - SO_PEERCRED = 0x11 - SO_PEERNAME = 0x1c - SO_PEERSEC = 0x1f - SO_PRIORITY = 0xc - SO_PROTOCOL = 0x26 - SO_RCVBUF = 0x8 - SO_RCVBUFFORCE = 0x21 - SO_RCVLOWAT = 0x12 - SO_RCVTIMEO = 0x14 - SO_REUSEADDR = 0x2 - SO_RXQ_OVFL = 0x28 - SO_SECURITY_AUTHENTICATION = 0x16 - SO_SECURITY_ENCRYPTION_NETWORK = 
0x18 - SO_SECURITY_ENCRYPTION_TRANSPORT = 0x17 - SO_SNDBUF = 0x7 - SO_SNDBUFFORCE = 0x20 - SO_SNDLOWAT = 0x13 - SO_SNDTIMEO = 0x15 - SO_TIMESTAMP = 0x1d - SO_TIMESTAMPING = 0x25 - SO_TIMESTAMPNS = 0x23 - SO_TYPE = 0x3 - S_BLKSIZE = 0x200 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TAB0 = 0x0 - TAB1 = 0x800 - TAB2 = 0x1000 - TAB3 = 0x1800 - TABDLY = 0x1800 - TCFLSH = 0x540b - TCGETA = 0x5405 - TCGETS = 0x5401 - TCGETS2 = 0x802c542a - TCGETX = 0x5432 - TCIFLUSH = 0x0 - TCIOFF = 0x2 - TCIOFLUSH = 0x2 - TCION = 0x3 - TCOFLUSH = 0x1 - TCOOFF = 0x0 - TCOON = 0x1 - TCP_CONGESTION = 0xd - TCP_CORK = 0x3 - TCP_DEFER_ACCEPT = 0x9 - TCP_INFO = 0xb - TCP_KEEPCNT = 0x6 - TCP_KEEPIDLE = 0x4 - TCP_KEEPINTVL = 0x5 - TCP_LINGER2 = 0x8 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0xe - TCP_MD5SIG_MAXKEYLEN = 0x50 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_QUICKACK = 0xc - TCP_SYNCNT = 0x7 - TCP_WINDOW_CLAMP = 0xa - TCSAFLUSH = 0x2 - TCSBRK = 0x5409 - TCSBRKP = 0x5425 - TCSETA = 0x5406 - TCSETAF = 0x5408 - TCSETAW = 0x5407 - TCSETS = 0x5402 - TCSETS2 = 0x402c542b - TCSETSF = 0x5404 - TCSETSF2 = 0x402c542d - TCSETSW = 0x5403 - TCSETSW2 = 0x402c542c - TCSETX = 0x5433 - TCSETXF = 0x5434 - TCSETXW = 0x5435 - TCXONC = 0x540a - TIOCCBRK = 0x5428 - TIOCCONS = 0x541d - TIOCEXCL = 0x540c - TIOCGDEV = 0x80045432 - TIOCGETD = 0x5424 - TIOCGEXCL = 0x80045440 - TIOCGICOUNT = 0x545d - TIOCGLCKTRMIOS = 0x5456 - TIOCGPGRP = 0x540f - TIOCGPKT = 0x80045438 - TIOCGPTLCK = 0x80045439 - TIOCGPTN = 0x80045430 - TIOCGRS485 = 0x542e - TIOCGSERIAL = 0x541e - TIOCGSID = 0x5429 - TIOCGSOFTCAR = 0x5419 - TIOCGWINSZ = 0x5413 - TIOCINQ = 0x541b - TIOCLINUX = 0x541c - TIOCMBIC = 0x5417 - TIOCMBIS = 0x5416 - TIOCMGET = 0x5415 - TIOCMIWAIT = 0x545c - TIOCMSET = 0x5418 - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x5422 - TIOCNXCL = 0x540d - TIOCOUTQ = 0x5411 - TIOCPKT = 0x5420 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCSBRK = 0x5427 - TIOCSCTTY = 0x540e - TIOCSERCONFIG = 0x5453 - TIOCSERGETLSR = 0x5459 - TIOCSERGETMULTI = 0x545a - TIOCSERGSTRUCT = 0x5458 - TIOCSERGWILD = 0x5454 - TIOCSERSETMULTI = 0x545b - TIOCSERSWILD = 0x5455 - TIOCSER_TEMT = 0x1 - TIOCSETD = 0x5423 - TIOCSIG = 0x40045436 - TIOCSLCKTRMIOS = 0x5457 - TIOCSPGRP = 0x5410 - TIOCSPTLCK = 0x40045431 - TIOCSRS485 = 0x542f - TIOCSSERIAL = 0x541f - TIOCSSOFTCAR = 0x541a - TIOCSTI = 0x5412 - TIOCSWINSZ = 0x5414 - TIOCVHANGUP = 0x5437 - TOSTOP = 0x100 - TUNATTACHFILTER = 0x401054d5 - TUNDETACHFILTER = 0x401054d6 - TUNGETFEATURES = 0x800454cf - TUNGETIFF = 0x800454d2 - TUNGETSNDBUF = 0x800454d3 - TUNGETVNETHDRSZ = 0x800454d7 - TUNSETDEBUG = 0x400454c9 - TUNSETGROUP = 0x400454ce - TUNSETIFF = 0x400454ca - TUNSETLINK = 0x400454cd - TUNSETNOCSUM = 0x400454c8 - TUNSETOFFLOAD = 0x400454d0 - TUNSETOWNER = 0x400454cc - 
TUNSETPERSIST = 0x400454cb - TUNSETSNDBUF = 0x400454d4 - TUNSETTXFILTER = 0x400454d1 - TUNSETVNETHDRSZ = 0x400454d8 - VDISCARD = 0xd - VEOF = 0x4 - VEOL = 0xb - VEOL2 = 0x10 - VERASE = 0x2 - VINTR = 0x0 - VKILL = 0x3 - VLNEXT = 0xf - VMIN = 0x6 - VQUIT = 0x1 - VREPRINT = 0xc - VSTART = 0x8 - VSTOP = 0x9 - VSUSP = 0xa - VSWTC = 0x7 - VT0 = 0x0 - VT1 = 0x4000 - VTDLY = 0x4000 - VTIME = 0x5 - VWERASE = 0xe - WALL = 0x40000000 - WCLONE = 0x80000000 - WCONTINUED = 0x8 - WEXITED = 0x4 - WNOHANG = 0x1 - WNOTHREAD = 0x20000000 - WNOWAIT = 0x1000000 - WORDSIZE = 0x40 - WSTOPPED = 0x2 - WUNTRACED = 0x2 - XCASE = 0x4 - XTABS = 0x1800 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x62) - EADDRNOTAVAIL = syscall.Errno(0x63) - EADV = syscall.Errno(0x44) - EAFNOSUPPORT = syscall.Errno(0x61) - EAGAIN = syscall.Errno(0xb) - EALREADY = syscall.Errno(0x72) - EBADE = syscall.Errno(0x34) - EBADF = syscall.Errno(0x9) - EBADFD = syscall.Errno(0x4d) - EBADMSG = syscall.Errno(0x4a) - EBADR = syscall.Errno(0x35) - EBADRQC = syscall.Errno(0x38) - EBADSLT = syscall.Errno(0x39) - EBFONT = syscall.Errno(0x3b) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x7d) - ECHILD = syscall.Errno(0xa) - ECHRNG = syscall.Errno(0x2c) - ECOMM = syscall.Errno(0x46) - ECONNABORTED = syscall.Errno(0x67) - ECONNREFUSED = syscall.Errno(0x6f) - ECONNRESET = syscall.Errno(0x68) - EDEADLK = syscall.Errno(0x23) - EDEADLOCK = syscall.Errno(0x23) - EDESTADDRREQ = syscall.Errno(0x59) - EDOM = syscall.Errno(0x21) - EDOTDOT = syscall.Errno(0x49) - EDQUOT = syscall.Errno(0x7a) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EHOSTDOWN = syscall.Errno(0x70) - EHOSTUNREACH = syscall.Errno(0x71) - EHWPOISON = syscall.Errno(0x85) - EIDRM = syscall.Errno(0x2b) - EILSEQ = syscall.Errno(0x54) - EINPROGRESS = syscall.Errno(0x73) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x6a) - EISDIR = syscall.Errno(0x15) - EISNAM = syscall.Errno(0x78) - EKEYEXPIRED = syscall.Errno(0x7f) - EKEYREJECTED = syscall.Errno(0x81) - EKEYREVOKED = syscall.Errno(0x80) - EL2HLT = syscall.Errno(0x33) - EL2NSYNC = syscall.Errno(0x2d) - EL3HLT = syscall.Errno(0x2e) - EL3RST = syscall.Errno(0x2f) - ELIBACC = syscall.Errno(0x4f) - ELIBBAD = syscall.Errno(0x50) - ELIBEXEC = syscall.Errno(0x53) - ELIBMAX = syscall.Errno(0x52) - ELIBSCN = syscall.Errno(0x51) - ELNRNG = syscall.Errno(0x30) - ELOOP = syscall.Errno(0x28) - EMEDIUMTYPE = syscall.Errno(0x7c) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x5a) - EMULTIHOP = syscall.Errno(0x48) - ENAMETOOLONG = syscall.Errno(0x24) - ENAVAIL = syscall.Errno(0x77) - ENETDOWN = syscall.Errno(0x64) - ENETRESET = syscall.Errno(0x66) - ENETUNREACH = syscall.Errno(0x65) - ENFILE = syscall.Errno(0x17) - ENOANO = syscall.Errno(0x37) - ENOBUFS = syscall.Errno(0x69) - ENOCSI = syscall.Errno(0x32) - ENODATA = syscall.Errno(0x3d) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOKEY = syscall.Errno(0x7e) - ENOLCK = syscall.Errno(0x25) - ENOLINK = syscall.Errno(0x43) - ENOMEDIUM = syscall.Errno(0x7b) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x2a) - ENONET = syscall.Errno(0x40) - ENOPKG = syscall.Errno(0x41) - ENOPROTOOPT = syscall.Errno(0x5c) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x3f) - ENOSTR = syscall.Errno(0x3c) - ENOSYS = syscall.Errno(0x26) - 
ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x6b) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x27) - ENOTNAM = syscall.Errno(0x76) - ENOTRECOVERABLE = syscall.Errno(0x83) - ENOTSOCK = syscall.Errno(0x58) - ENOTSUP = syscall.Errno(0x5f) - ENOTTY = syscall.Errno(0x19) - ENOTUNIQ = syscall.Errno(0x4c) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x5f) - EOVERFLOW = syscall.Errno(0x4b) - EOWNERDEAD = syscall.Errno(0x82) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x60) - EPIPE = syscall.Errno(0x20) - EPROTO = syscall.Errno(0x47) - EPROTONOSUPPORT = syscall.Errno(0x5d) - EPROTOTYPE = syscall.Errno(0x5b) - ERANGE = syscall.Errno(0x22) - EREMCHG = syscall.Errno(0x4e) - EREMOTE = syscall.Errno(0x42) - EREMOTEIO = syscall.Errno(0x79) - ERESTART = syscall.Errno(0x55) - ERFKILL = syscall.Errno(0x84) - EROFS = syscall.Errno(0x1e) - ESHUTDOWN = syscall.Errno(0x6c) - ESOCKTNOSUPPORT = syscall.Errno(0x5e) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESRMNT = syscall.Errno(0x45) - ESTALE = syscall.Errno(0x74) - ESTRPIPE = syscall.Errno(0x56) - ETIME = syscall.Errno(0x3e) - ETIMEDOUT = syscall.Errno(0x6e) - ETOOMANYREFS = syscall.Errno(0x6d) - ETXTBSY = syscall.Errno(0x1a) - EUCLEAN = syscall.Errno(0x75) - EUNATCH = syscall.Errno(0x31) - EUSERS = syscall.Errno(0x57) - EWOULDBLOCK = syscall.Errno(0xb) - EXDEV = syscall.Errno(0x12) - EXFULL = syscall.Errno(0x36) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0x7) - SIGCHLD = syscall.Signal(0x11) - SIGCLD = syscall.Signal(0x11) - SIGCONT = syscall.Signal(0x12) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x1d) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPOLL = syscall.Signal(0x1d) - SIGPROF = syscall.Signal(0x1b) - SIGPWR = syscall.Signal(0x1e) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTKFLT = syscall.Signal(0x10) - SIGSTOP = syscall.Signal(0x13) - SIGSYS = syscall.Signal(0x1f) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x14) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) - SIGURG = syscall.Signal(0x17) - SIGUSR1 = syscall.Signal(0xa) - SIGUSR2 = syscall.Signal(0xc) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "no such device or address", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource temporarily unavailable", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device or resource busy", - 17: "file exists", - 18: "invalid cross-device link", - 19: "no such device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 
32: "broken pipe", - 33: "numerical argument out of domain", - 34: "numerical result out of range", - 35: "resource deadlock avoided", - 36: "file name too long", - 37: "no locks available", - 38: "function not implemented", - 39: "directory not empty", - 40: "too many levels of symbolic links", - 42: "no message of desired type", - 43: "identifier removed", - 44: "channel number out of range", - 45: "level 2 not synchronized", - 46: "level 3 halted", - 47: "level 3 reset", - 48: "link number out of range", - 49: "protocol driver not attached", - 50: "no CSI structure available", - 51: "level 2 halted", - 52: "invalid exchange", - 53: "invalid request descriptor", - 54: "exchange full", - 55: "no anode", - 56: "invalid request code", - 57: "invalid slot", - 59: "bad font file format", - 60: "device not a stream", - 61: "no data available", - 62: "timer expired", - 63: "out of streams resources", - 64: "machine is not on the network", - 65: "package not installed", - 66: "object is remote", - 67: "link has been severed", - 68: "advertise error", - 69: "srmount error", - 70: "communication error on send", - 71: "protocol error", - 72: "multihop attempted", - 73: "RFS specific error", - 74: "bad message", - 75: "value too large for defined data type", - 76: "name not unique on network", - 77: "file descriptor in bad state", - 78: "remote address changed", - 79: "can not access a needed shared library", - 80: "accessing a corrupted shared library", - 81: ".lib section in a.out corrupted", - 82: "attempting to link in too many shared libraries", - 83: "cannot exec a shared library directly", - 84: "invalid or incomplete multibyte or wide character", - 85: "interrupted system call should be restarted", - 86: "streams pipe error", - 87: "too many users", - 88: "socket operation on non-socket", - 89: "destination address required", - 90: "message too long", - 91: "protocol wrong type for socket", - 92: "protocol not available", - 93: "protocol not supported", - 94: "socket type not supported", - 95: "operation not supported", - 96: "protocol family not supported", - 97: "address family not supported by protocol", - 98: "address already in use", - 99: "cannot assign requested address", - 100: "network is down", - 101: "network is unreachable", - 102: "network dropped connection on reset", - 103: "software caused connection abort", - 104: "connection reset by peer", - 105: "no buffer space available", - 106: "transport endpoint is already connected", - 107: "transport endpoint is not connected", - 108: "cannot send after transport endpoint shutdown", - 109: "too many references: cannot splice", - 110: "connection timed out", - 111: "connection refused", - 112: "host is down", - 113: "no route to host", - 114: "operation already in progress", - 115: "operation now in progress", - 116: "stale NFS file handle", - 117: "structure needs cleaning", - 118: "not a XENIX named type file", - 119: "no XENIX semaphores available", - 120: "is a named type file", - 121: "remote I/O error", - 122: "disk quota exceeded", - 123: "no medium found", - 124: "wrong medium type", - 125: "operation canceled", - 126: "required key not available", - 127: "key has expired", - 128: "key has been revoked", - 129: "key was rejected by service", - 130: "owner died", - 131: "state not recoverable", - 132: "operation not possible due to RF-kill", - 133: "unknown error 133", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/breakpoint trap", - 6: 
"aborted", - 7: "bus error", - 8: "floating point exception", - 9: "killed", - 10: "user defined signal 1", - 11: "segmentation fault", - 12: "user defined signal 2", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "stack fault", - 17: "child exited", - 18: "continued", - 19: "stopped (signal)", - 20: "stopped", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "urgent I/O condition", - 24: "CPU time limit exceeded", - 25: "file size limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window changed", - 29: "I/O possible", - 30: "power failure", - 31: "bad system call", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_arm.go deleted file mode 100644 index 1cc76a78cf4..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_arm.go +++ /dev/null @@ -1,1742 +0,0 @@ -// mkerrors.sh -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm,linux - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- _const.go - -package unix - -import "syscall" - -const ( - AF_ALG = 0x26 - AF_APPLETALK = 0x5 - AF_ASH = 0x12 - AF_ATMPVC = 0x8 - AF_ATMSVC = 0x14 - AF_AX25 = 0x3 - AF_BLUETOOTH = 0x1f - AF_BRIDGE = 0x7 - AF_CAIF = 0x25 - AF_CAN = 0x1d - AF_DECnet = 0xc - AF_ECONET = 0x13 - AF_FILE = 0x1 - AF_IEEE802154 = 0x24 - AF_INET = 0x2 - AF_INET6 = 0xa - AF_IPX = 0x4 - AF_IRDA = 0x17 - AF_ISDN = 0x22 - AF_IUCV = 0x20 - AF_KEY = 0xf - AF_LLC = 0x1a - AF_LOCAL = 0x1 - AF_MAX = 0x27 - AF_NETBEUI = 0xd - AF_NETLINK = 0x10 - AF_NETROM = 0x6 - AF_PACKET = 0x11 - AF_PHONET = 0x23 - AF_PPPOX = 0x18 - AF_RDS = 0x15 - AF_ROSE = 0xb - AF_ROUTE = 0x10 - AF_RXRPC = 0x21 - AF_SECURITY = 0xe - AF_SNA = 0x16 - AF_TIPC = 0x1e - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_WANPIPE = 0x19 - AF_X25 = 0x9 - ARPHRD_ADAPT = 0x108 - ARPHRD_APPLETLK = 0x8 - ARPHRD_ARCNET = 0x7 - ARPHRD_ASH = 0x30d - ARPHRD_ATM = 0x13 - ARPHRD_AX25 = 0x3 - ARPHRD_BIF = 0x307 - ARPHRD_CHAOS = 0x5 - ARPHRD_CISCO = 0x201 - ARPHRD_CSLIP = 0x101 - ARPHRD_CSLIP6 = 0x103 - ARPHRD_DDCMP = 0x205 - ARPHRD_DLCI = 0xf - ARPHRD_ECONET = 0x30e - ARPHRD_EETHER = 0x2 - ARPHRD_ETHER = 0x1 - ARPHRD_EUI64 = 0x1b - ARPHRD_FCAL = 0x311 - ARPHRD_FCFABRIC = 0x313 - ARPHRD_FCPL = 0x312 - ARPHRD_FCPP = 0x310 - ARPHRD_FDDI = 0x306 - ARPHRD_FRAD = 0x302 - ARPHRD_HDLC = 0x201 - ARPHRD_HIPPI = 0x30c - ARPHRD_HWX25 = 0x110 - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - ARPHRD_IEEE80211 = 0x321 - ARPHRD_IEEE80211_PRISM = 0x322 - ARPHRD_IEEE80211_RADIOTAP = 0x323 - ARPHRD_IEEE802154 = 0x324 - ARPHRD_IEEE802154_PHY = 0x325 - ARPHRD_IEEE802_TR = 0x320 - ARPHRD_INFINIBAND = 0x20 - ARPHRD_IPDDP = 0x309 - ARPHRD_IPGRE = 0x30a - ARPHRD_IRDA = 0x30f - ARPHRD_LAPB = 0x204 - ARPHRD_LOCALTLK = 0x305 - ARPHRD_LOOPBACK = 0x304 - ARPHRD_METRICOM = 0x17 - ARPHRD_NETROM = 0x0 - ARPHRD_NONE = 0xfffe - ARPHRD_PIMREG = 0x30b - ARPHRD_PPP = 0x200 - ARPHRD_PRONET = 0x4 - ARPHRD_RAWHDLC = 0x206 - ARPHRD_ROSE = 0x10e - ARPHRD_RSRVD = 0x104 - ARPHRD_SIT = 0x308 - ARPHRD_SKIP = 0x303 - ARPHRD_SLIP = 0x100 - ARPHRD_SLIP6 = 0x102 - ARPHRD_TUNNEL = 0x300 - ARPHRD_TUNNEL6 = 0x301 - ARPHRD_VOID = 0xffff - ARPHRD_X25 = 0x10f - B0 = 0x0 - B1000000 = 0x1008 - B110 = 0x3 - B115200 = 0x1002 - B1152000 = 0x1009 - B1200 = 0x9 - B134 = 0x4 - B150 = 0x5 - B1500000 = 0x100a - B1800 = 0xa - B19200 = 0xe - B200 = 0x6 - 
B2000000 = 0x100b - B230400 = 0x1003 - B2400 = 0xb - B2500000 = 0x100c - B300 = 0x7 - B3000000 = 0x100d - B3500000 = 0x100e - B38400 = 0xf - B4000000 = 0x100f - B460800 = 0x1004 - B4800 = 0xc - B50 = 0x1 - B500000 = 0x1005 - B57600 = 0x1001 - B576000 = 0x1006 - B600 = 0x8 - B75 = 0x2 - B921600 = 0x1007 - B9600 = 0xd - BOTHER = 0x1000 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXINSNS = 0x1000 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - BS0 = 0x0 - BS1 = 0x2000 - BSDLY = 0x2000 - CBAUD = 0x100f - CBAUDEX = 0x1000 - CFLUSH = 0xf - CIBAUD = 0x100f0000 - CLOCAL = 0x800 - CLOCK_BOOTTIME = 0x7 - CLOCK_BOOTTIME_ALARM = 0x9 - CLOCK_DEFAULT = 0x0 - CLOCK_EXT = 0x1 - CLOCK_INT = 0x2 - CLOCK_MONOTONIC = 0x1 - CLOCK_MONOTONIC_COARSE = 0x6 - CLOCK_MONOTONIC_RAW = 0x4 - CLOCK_PROCESS_CPUTIME_ID = 0x2 - CLOCK_REALTIME = 0x0 - CLOCK_REALTIME_ALARM = 0x8 - CLOCK_REALTIME_COARSE = 0x5 - CLOCK_THREAD_CPUTIME_ID = 0x3 - CLOCK_TXFROMRX = 0x4 - CLOCK_TXINT = 0x3 - CLONE_CHILD_CLEARTID = 0x200000 - CLONE_CHILD_SETTID = 0x1000000 - CLONE_DETACHED = 0x400000 - CLONE_FILES = 0x400 - CLONE_FS = 0x200 - CLONE_IO = 0x80000000 - CLONE_NEWIPC = 0x8000000 - CLONE_NEWNET = 0x40000000 - CLONE_NEWNS = 0x20000 - CLONE_NEWPID = 0x20000000 - CLONE_NEWUSER = 0x10000000 - CLONE_NEWUTS = 0x4000000 - CLONE_PARENT = 0x8000 - CLONE_PARENT_SETTID = 0x100000 - CLONE_PTRACE = 0x2000 - CLONE_SETTLS = 0x80000 - CLONE_SIGHAND = 0x800 - CLONE_SYSVSEM = 0x40000 - CLONE_THREAD = 0x10000 - CLONE_UNTRACED = 0x800000 - CLONE_VFORK = 0x4000 - CLONE_VM = 0x100 - CMSPAR = 0x40000000 - CR0 = 0x0 - CR1 = 0x200 - CR2 = 0x400 - CR3 = 0x600 - CRDLY = 0x600 - CREAD = 0x80 - CRTSCTS = 0x80000000 - CS5 = 0x0 - CS6 = 0x10 - CS7 = 0x20 - CS8 = 0x30 - CSIGNAL = 0xff - CSIZE = 0x30 - CSTART = 0x11 - CSTATUS = 0x0 - CSTOP = 0x13 - CSTOPB = 0x40 - CSUSP = 0x1a - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ELF_NGREG = 0x12 - ELF_PRARGSZ = 0x50 - ECHO = 0x8 - ECHOCTL = 0x200 - ECHOE = 0x10 - ECHOK = 0x20 - ECHOKE = 0x800 - ECHONL = 0x40 - ECHOPRT = 0x400 - EPOLLERR = 0x8 - EPOLLET = -0x80000000 - EPOLLHUP = 0x10 - EPOLLIN = 0x1 - EPOLLMSG = 0x400 - EPOLLONESHOT = 0x40000000 - EPOLLOUT = 0x4 - EPOLLPRI = 0x2 - EPOLLRDBAND = 0x80 - EPOLLRDHUP = 0x2000 - EPOLLRDNORM = 0x40 - EPOLLWRBAND = 0x200 - EPOLLWRNORM = 0x100 - EPOLL_CLOEXEC = 0x80000 - EPOLL_CTL_ADD = 0x1 - EPOLL_CTL_DEL = 0x2 - EPOLL_CTL_MOD = 0x3 - EPOLL_NONBLOCK = 0x800 - ETH_P_1588 = 0x88f7 - ETH_P_8021Q = 0x8100 - ETH_P_802_2 = 0x4 - ETH_P_802_3 = 0x1 - ETH_P_AARP = 0x80f3 - ETH_P_ALL = 0x3 - ETH_P_AOE = 0x88a2 - ETH_P_ARCNET = 0x1a - ETH_P_ARP = 0x806 - ETH_P_ATALK = 0x809b - ETH_P_ATMFATE = 0x8884 - ETH_P_ATMMPOA = 0x884c - ETH_P_AX25 = 0x2 - ETH_P_BPQ = 0x8ff - ETH_P_CAIF = 0xf7 - ETH_P_CAN = 0xc - ETH_P_CONTROL = 0x16 - ETH_P_CUST = 0x6006 - ETH_P_DDCMP = 0x6 - ETH_P_DEC = 0x6000 - ETH_P_DIAG = 0x6005 - ETH_P_DNA_DL = 
0x6001 - ETH_P_DNA_RC = 0x6002 - ETH_P_DNA_RT = 0x6003 - ETH_P_DSA = 0x1b - ETH_P_ECONET = 0x18 - ETH_P_EDSA = 0xdada - ETH_P_FCOE = 0x8906 - ETH_P_FIP = 0x8914 - ETH_P_HDLC = 0x19 - ETH_P_IEEE802154 = 0xf6 - ETH_P_IEEEPUP = 0xa00 - ETH_P_IEEEPUPAT = 0xa01 - ETH_P_IP = 0x800 - ETH_P_IPV6 = 0x86dd - ETH_P_IPX = 0x8137 - ETH_P_IRDA = 0x17 - ETH_P_LAT = 0x6004 - ETH_P_LINK_CTL = 0x886c - ETH_P_LOCALTALK = 0x9 - ETH_P_LOOP = 0x60 - ETH_P_MOBITEX = 0x15 - ETH_P_MPLS_MC = 0x8848 - ETH_P_MPLS_UC = 0x8847 - ETH_P_PAE = 0x888e - ETH_P_PAUSE = 0x8808 - ETH_P_PHONET = 0xf5 - ETH_P_PPPTALK = 0x10 - ETH_P_PPP_DISC = 0x8863 - ETH_P_PPP_MP = 0x8 - ETH_P_PPP_SES = 0x8864 - ETH_P_PUP = 0x200 - ETH_P_PUPAT = 0x201 - ETH_P_RARP = 0x8035 - ETH_P_SCA = 0x6007 - ETH_P_SLOW = 0x8809 - ETH_P_SNAP = 0x5 - ETH_P_TEB = 0x6558 - ETH_P_TIPC = 0x88ca - ETH_P_TRAILER = 0x1c - ETH_P_TR_802_2 = 0x11 - ETH_P_WAN_PPP = 0x7 - ETH_P_WCCP = 0x883e - ETH_P_X25 = 0x805 - EXTA = 0xe - EXTB = 0xf - EXTPROC = 0x10000 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FF0 = 0x0 - FF1 = 0x8000 - FFDLY = 0x8000 - FLUSHO = 0x1000 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x406 - F_EXLCK = 0x4 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLEASE = 0x401 - F_GETLK = 0xc - F_GETLK64 = 0xc - F_GETOWN = 0x9 - F_GETOWN_EX = 0x10 - F_GETPIPE_SZ = 0x408 - F_GETSIG = 0xb - F_LOCK = 0x1 - F_NOTIFY = 0x402 - F_OK = 0x0 - F_RDLCK = 0x0 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLEASE = 0x400 - F_SETLK = 0xd - F_SETLK64 = 0xd - F_SETLKW = 0xe - F_SETLKW64 = 0xe - F_SETOWN = 0x8 - F_SETOWN_EX = 0xf - F_SETPIPE_SZ = 0x407 - F_SETSIG = 0xa - F_SHLCK = 0x8 - F_TEST = 0x3 - F_TLOCK = 0x2 - F_ULOCK = 0x0 - F_UNLCK = 0x2 - F_WRLCK = 0x1 - HUPCL = 0x400 - IBSHIFT = 0x10 - ICANON = 0x2 - ICMPV6_FILTER = 0x1 - ICRNL = 0x100 - IEXTEN = 0x8000 - IFA_F_DADFAILED = 0x8 - IFA_F_DEPRECATED = 0x20 - IFA_F_HOMEADDRESS = 0x10 - IFA_F_NODAD = 0x2 - IFA_F_OPTIMISTIC = 0x4 - IFA_F_PERMANENT = 0x80 - IFA_F_SECONDARY = 0x1 - IFA_F_TEMPORARY = 0x1 - IFA_F_TENTATIVE = 0x40 - IFA_MAX = 0x7 - IFF_ALLMULTI = 0x200 - IFF_AUTOMEDIA = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_DYNAMIC = 0x8000 - IFF_LOOPBACK = 0x8 - IFF_MASTER = 0x400 - IFF_MULTICAST = 0x1000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_NO_PI = 0x1000 - IFF_ONE_QUEUE = 0x2000 - IFF_POINTOPOINT = 0x10 - IFF_PORTSEL = 0x2000 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SLAVE = 0x800 - IFF_TAP = 0x2 - IFF_TUN = 0x1 - IFF_TUN_EXCL = 0x8000 - IFF_UP = 0x1 - IFF_VNET_HDR = 0x4000 - IFNAMSIZ = 0x10 - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_ACCESS = 0x1 - IN_ALL_EVENTS = 0xfff - IN_ATTRIB = 0x4 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLOEXEC = 0x80000 - IN_CLOSE = 0x18 - IN_CLOSE_NOWRITE = 0x10 - IN_CLOSE_WRITE = 0x8 - IN_CREATE = 0x100 - IN_DELETE = 0x200 - IN_DELETE_SELF = 0x400 - IN_DONT_FOLLOW = 0x2000000 - IN_EXCL_UNLINK = 0x4000000 - IN_IGNORED = 0x8000 - IN_ISDIR = 0x40000000 - IN_LOOPBACKNET = 0x7f - IN_MASK_ADD = 0x20000000 - IN_MODIFY = 0x2 - IN_MOVE = 0xc0 - IN_MOVED_FROM = 0x40 - IN_MOVED_TO = 0x80 - IN_MOVE_SELF = 0x800 - IN_NONBLOCK = 0x800 - IN_ONESHOT = 0x80000000 - IN_ONLYDIR = 0x1000000 - IN_OPEN = 0x20 - IN_Q_OVERFLOW = 0x4000 - IN_UNMOUNT = 0x2000 - IPPROTO_AH = 0x33 - IPPROTO_COMP = 0x6c - IPPROTO_DCCP 
= 0x21 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_ESP = 0x32 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPIP = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_MTP = 0x5c - IPPROTO_NONE = 0x3b - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_SCTP = 0x84 - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPPROTO_UDPLITE = 0x88 - IPV6_2292DSTOPTS = 0x4 - IPV6_2292HOPLIMIT = 0x8 - IPV6_2292HOPOPTS = 0x3 - IPV6_2292PKTINFO = 0x2 - IPV6_2292PKTOPTIONS = 0x6 - IPV6_2292RTHDR = 0x5 - IPV6_ADDRFORM = 0x1 - IPV6_ADD_MEMBERSHIP = 0x14 - IPV6_AUTHHDR = 0xa - IPV6_CHECKSUM = 0x7 - IPV6_DROP_MEMBERSHIP = 0x15 - IPV6_DSTOPTS = 0x3b - IPV6_HOPLIMIT = 0x34 - IPV6_HOPOPTS = 0x36 - IPV6_IPSEC_POLICY = 0x22 - IPV6_JOIN_ANYCAST = 0x1b - IPV6_JOIN_GROUP = 0x14 - IPV6_LEAVE_ANYCAST = 0x1c - IPV6_LEAVE_GROUP = 0x15 - IPV6_MTU = 0x18 - IPV6_MTU_DISCOVER = 0x17 - IPV6_MULTICAST_HOPS = 0x12 - IPV6_MULTICAST_IF = 0x11 - IPV6_MULTICAST_LOOP = 0x13 - IPV6_NEXTHOP = 0x9 - IPV6_PKTINFO = 0x32 - IPV6_PMTUDISC_DO = 0x2 - IPV6_PMTUDISC_DONT = 0x0 - IPV6_PMTUDISC_PROBE = 0x3 - IPV6_PMTUDISC_WANT = 0x1 - IPV6_RECVDSTOPTS = 0x3a - IPV6_RECVERR = 0x19 - IPV6_RECVHOPLIMIT = 0x33 - IPV6_RECVHOPOPTS = 0x35 - IPV6_RECVPKTINFO = 0x31 - IPV6_RECVRTHDR = 0x38 - IPV6_RECVTCLASS = 0x42 - IPV6_ROUTER_ALERT = 0x16 - IPV6_RTHDR = 0x39 - IPV6_RTHDRDSTOPTS = 0x37 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_RXDSTOPTS = 0x3b - IPV6_RXHOPOPTS = 0x36 - IPV6_TCLASS = 0x43 - IPV6_UNICAST_HOPS = 0x10 - IPV6_V6ONLY = 0x1a - IPV6_XFRM_POLICY = 0x23 - IP_ADD_MEMBERSHIP = 0x23 - IP_ADD_SOURCE_MEMBERSHIP = 0x27 - IP_BLOCK_SOURCE = 0x26 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0x24 - IP_DROP_SOURCE_MEMBERSHIP = 0x28 - IP_FREEBIND = 0xf - IP_HDRINCL = 0x3 - IP_IPSEC_POLICY = 0x10 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINTTL = 0x15 - IP_MSFILTER = 0x29 - IP_MSS = 0x240 - IP_MTU = 0xe - IP_MTU_DISCOVER = 0xa - IP_MULTICAST_IF = 0x20 - IP_MULTICAST_LOOP = 0x22 - IP_MULTICAST_TTL = 0x21 - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x4 - IP_ORIGDSTADDR = 0x14 - IP_PASSSEC = 0x12 - IP_PKTINFO = 0x8 - IP_PKTOPTIONS = 0x9 - IP_PMTUDISC = 0xa - IP_PMTUDISC_DO = 0x2 - IP_PMTUDISC_DONT = 0x0 - IP_PMTUDISC_PROBE = 0x3 - IP_PMTUDISC_WANT = 0x1 - IP_RECVERR = 0xb - IP_RECVOPTS = 0x6 - IP_RECVORIGDSTADDR = 0x14 - IP_RECVRETOPTS = 0x7 - IP_RECVTOS = 0xd - IP_RECVTTL = 0xc - IP_RETOPTS = 0x7 - IP_RF = 0x8000 - IP_ROUTER_ALERT = 0x5 - IP_TOS = 0x1 - IP_TRANSPARENT = 0x13 - IP_TTL = 0x2 - IP_UNBLOCK_SOURCE = 0x25 - IP_XFRM_POLICY = 0x11 - ISIG = 0x1 - ISTRIP = 0x20 - IUCLC = 0x200 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x1000 - IXON = 0x400 - LINUX_REBOOT_CMD_CAD_OFF = 0x0 - LINUX_REBOOT_CMD_CAD_ON = 0x89abcdef - LINUX_REBOOT_CMD_HALT = 0xcdef0123 - LINUX_REBOOT_CMD_KEXEC = 0x45584543 - LINUX_REBOOT_CMD_POWER_OFF = 0x4321fedc - LINUX_REBOOT_CMD_RESTART = 0x1234567 - LINUX_REBOOT_CMD_RESTART2 = 0xa1b2c3d4 - LINUX_REBOOT_CMD_SW_SUSPEND = 0xd000fce2 - LINUX_REBOOT_MAGIC1 = 0xfee1dead - LINUX_REBOOT_MAGIC2 = 0x28121969 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DOFORK = 0xb - MADV_DONTFORK = 0xa - MADV_DONTNEED = 0x4 - MADV_HUGEPAGE = 0xe - MADV_HWPOISON = 0x64 - 
MADV_MERGEABLE = 0xc - MADV_NOHUGEPAGE = 0xf - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_REMOVE = 0x9 - MADV_SEQUENTIAL = 0x2 - MADV_UNMERGEABLE = 0xd - MADV_WILLNEED = 0x3 - MAP_ANON = 0x20 - MAP_ANONYMOUS = 0x20 - MAP_DENYWRITE = 0x800 - MAP_EXECUTABLE = 0x1000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_GROWSDOWN = 0x100 - MAP_LOCKED = 0x2000 - MAP_NONBLOCK = 0x10000 - MAP_NORESERVE = 0x4000 - MAP_POPULATE = 0x8000 - MAP_PRIVATE = 0x2 - MAP_SHARED = 0x1 - MAP_TYPE = 0xf - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MNT_DETACH = 0x2 - MNT_EXPIRE = 0x4 - MNT_FORCE = 0x1 - MSG_CMSG_CLOEXEC = 0x40000000 - MSG_CONFIRM = 0x800 - MSG_CTRUNC = 0x8 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x40 - MSG_EOR = 0x80 - MSG_ERRQUEUE = 0x2000 - MSG_FASTOPEN = 0x20000000 - MSG_FIN = 0x200 - MSG_MORE = 0x8000 - MSG_NOSIGNAL = 0x4000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_PROXY = 0x10 - MSG_RST = 0x1000 - MSG_SYN = 0x400 - MSG_TRUNC = 0x20 - MSG_TRYHARD = 0x4 - MSG_WAITALL = 0x100 - MSG_WAITFORONE = 0x10000 - MS_ACTIVE = 0x40000000 - MS_ASYNC = 0x1 - MS_BIND = 0x1000 - MS_DIRSYNC = 0x80 - MS_INVALIDATE = 0x2 - MS_I_VERSION = 0x800000 - MS_KERNMOUNT = 0x400000 - MS_MANDLOCK = 0x40 - MS_MGC_MSK = 0xffff0000 - MS_MGC_VAL = 0xc0ed0000 - MS_MOVE = 0x2000 - MS_NOATIME = 0x400 - MS_NODEV = 0x4 - MS_NODIRATIME = 0x800 - MS_NOEXEC = 0x8 - MS_NOSUID = 0x2 - MS_NOUSER = -0x80000000 - MS_POSIXACL = 0x10000 - MS_PRIVATE = 0x40000 - MS_RDONLY = 0x1 - MS_REC = 0x4000 - MS_RELATIME = 0x200000 - MS_REMOUNT = 0x20 - MS_RMT_MASK = 0x800051 - MS_SHARED = 0x100000 - MS_SILENT = 0x8000 - MS_SLAVE = 0x80000 - MS_STRICTATIME = 0x1000000 - MS_SYNC = 0x4 - MS_SYNCHRONOUS = 0x10 - MS_UNBINDABLE = 0x20000 - NAME_MAX = 0xff - NETLINK_ADD_MEMBERSHIP = 0x1 - NETLINK_AUDIT = 0x9 - NETLINK_BROADCAST_ERROR = 0x4 - NETLINK_CONNECTOR = 0xb - NETLINK_DNRTMSG = 0xe - NETLINK_DROP_MEMBERSHIP = 0x2 - NETLINK_ECRYPTFS = 0x13 - NETLINK_FIB_LOOKUP = 0xa - NETLINK_FIREWALL = 0x3 - NETLINK_GENERIC = 0x10 - NETLINK_INET_DIAG = 0x4 - NETLINK_IP6_FW = 0xd - NETLINK_ISCSI = 0x8 - NETLINK_KOBJECT_UEVENT = 0xf - NETLINK_NETFILTER = 0xc - NETLINK_NFLOG = 0x5 - NETLINK_NO_ENOBUFS = 0x5 - NETLINK_PKTINFO = 0x3 - NETLINK_RDMA = 0x14 - NETLINK_ROUTE = 0x0 - NETLINK_SCSITRANSPORT = 0x12 - NETLINK_SELINUX = 0x7 - NETLINK_UNUSED = 0x1 - NETLINK_USERSOCK = 0x2 - NETLINK_XFRM = 0x6 - NL0 = 0x0 - NL1 = 0x100 - NLA_ALIGNTO = 0x4 - NLA_F_NESTED = 0x8000 - NLA_F_NET_BYTEORDER = 0x4000 - NLA_HDRLEN = 0x4 - NLDLY = 0x100 - NLMSG_ALIGNTO = 0x4 - NLMSG_DONE = 0x3 - NLMSG_ERROR = 0x2 - NLMSG_HDRLEN = 0x10 - NLMSG_MIN_TYPE = 0x10 - NLMSG_NOOP = 0x1 - NLMSG_OVERRUN = 0x4 - NLM_F_ACK = 0x4 - NLM_F_APPEND = 0x800 - NLM_F_ATOMIC = 0x400 - NLM_F_CREATE = 0x400 - NLM_F_DUMP = 0x300 - NLM_F_ECHO = 0x8 - NLM_F_EXCL = 0x200 - NLM_F_MATCH = 0x200 - NLM_F_MULTI = 0x2 - NLM_F_REPLACE = 0x100 - NLM_F_REQUEST = 0x1 - NLM_F_ROOT = 0x100 - NOFLSH = 0x80 - OCRNL = 0x8 - OFDEL = 0x80 - OFILL = 0x40 - OLCUC = 0x2 - ONLCR = 0x4 - ONLRET = 0x20 - ONOCR = 0x10 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x400 - O_ASYNC = 0x2000 - O_CLOEXEC = 0x80000 - O_CREAT = 0x40 - O_DIRECT = 0x10000 - O_DIRECTORY = 0x4000 - O_DSYNC = 0x1000 - O_EXCL = 0x80 - O_FSYNC = 0x1000 - O_LARGEFILE = 0x20000 - O_NDELAY = 0x800 - O_NOATIME = 0x40000 - O_NOCTTY = 0x100 - O_NOFOLLOW = 0x8000 - O_NONBLOCK = 0x800 - O_PATH = 0x200000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x1000 - O_SYNC = 0x1000 - O_TRUNC = 0x200 - O_WRONLY = 0x1 - PACKET_ADD_MEMBERSHIP = 0x1 - PACKET_BROADCAST = 0x1 - PACKET_DROP_MEMBERSHIP = 0x2 - 
PACKET_FASTROUTE = 0x6 - PACKET_HOST = 0x0 - PACKET_LOOPBACK = 0x5 - PACKET_MR_ALLMULTI = 0x2 - PACKET_MR_MULTICAST = 0x0 - PACKET_MR_PROMISC = 0x1 - PACKET_MULTICAST = 0x2 - PACKET_OTHERHOST = 0x3 - PACKET_OUTGOING = 0x4 - PACKET_RECV_OUTPUT = 0x3 - PACKET_RX_RING = 0x5 - PACKET_STATISTICS = 0x6 - PARENB = 0x100 - PARMRK = 0x8 - PARODD = 0x200 - PENDIN = 0x4000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_GROWSDOWN = 0x1000000 - PROT_GROWSUP = 0x2000000 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PR_CAPBSET_DROP = 0x18 - PR_CAPBSET_READ = 0x17 - PR_CLEAR_SECCOMP_FILTER = 0x25 - PR_ENDIAN_BIG = 0x0 - PR_ENDIAN_LITTLE = 0x1 - PR_ENDIAN_PPC_LITTLE = 0x2 - PR_FPEMU_NOPRINT = 0x1 - PR_FPEMU_SIGFPE = 0x2 - PR_FP_EXC_ASYNC = 0x2 - PR_FP_EXC_DISABLED = 0x0 - PR_FP_EXC_DIV = 0x10000 - PR_FP_EXC_INV = 0x100000 - PR_FP_EXC_NONRECOV = 0x1 - PR_FP_EXC_OVF = 0x20000 - PR_FP_EXC_PRECISE = 0x3 - PR_FP_EXC_RES = 0x80000 - PR_FP_EXC_SW_ENABLE = 0x80 - PR_FP_EXC_UND = 0x40000 - PR_GET_DUMPABLE = 0x3 - PR_GET_ENDIAN = 0x13 - PR_GET_FPEMU = 0x9 - PR_GET_FPEXC = 0xb - PR_GET_KEEPCAPS = 0x7 - PR_GET_NAME = 0x10 - PR_GET_PDEATHSIG = 0x2 - PR_GET_SECCOMP = 0x15 - PR_GET_SECCOMP_FILTER = 0x23 - PR_GET_SECUREBITS = 0x1b - PR_GET_TIMERSLACK = 0x1e - PR_GET_TIMING = 0xd - PR_GET_TSC = 0x19 - PR_GET_UNALIGN = 0x5 - PR_MCE_KILL = 0x21 - PR_MCE_KILL_CLEAR = 0x0 - PR_MCE_KILL_DEFAULT = 0x2 - PR_MCE_KILL_EARLY = 0x1 - PR_MCE_KILL_GET = 0x22 - PR_MCE_KILL_LATE = 0x0 - PR_MCE_KILL_SET = 0x1 - PR_SECCOMP_FILTER_EVENT = 0x1 - PR_SECCOMP_FILTER_SYSCALL = 0x0 - PR_SET_DUMPABLE = 0x4 - PR_SET_ENDIAN = 0x14 - PR_SET_FPEMU = 0xa - PR_SET_FPEXC = 0xc - PR_SET_KEEPCAPS = 0x8 - PR_SET_NAME = 0xf - PR_SET_PDEATHSIG = 0x1 - PR_SET_PTRACER = 0x59616d61 - PR_SET_SECCOMP = 0x16 - PR_SET_SECCOMP_FILTER = 0x24 - PR_SET_SECUREBITS = 0x1c - PR_SET_TIMERSLACK = 0x1d - PR_SET_TIMING = 0xe - PR_SET_TSC = 0x1a - PR_SET_UNALIGN = 0x6 - PR_TASK_PERF_EVENTS_DISABLE = 0x1f - PR_TASK_PERF_EVENTS_ENABLE = 0x20 - PR_TIMING_STATISTICAL = 0x0 - PR_TIMING_TIMESTAMP = 0x1 - PR_TSC_ENABLE = 0x1 - PR_TSC_SIGSEGV = 0x2 - PR_UNALIGN_NOPRINT = 0x1 - PR_UNALIGN_SIGBUS = 0x2 - PTRACE_ATTACH = 0x10 - PTRACE_CONT = 0x7 - PTRACE_DETACH = 0x11 - PTRACE_EVENT_CLONE = 0x3 - PTRACE_EVENT_EXEC = 0x4 - PTRACE_EVENT_EXIT = 0x6 - PTRACE_EVENT_FORK = 0x1 - PTRACE_EVENT_VFORK = 0x2 - PTRACE_EVENT_VFORK_DONE = 0x5 - PTRACE_GETCRUNCHREGS = 0x19 - PTRACE_GETEVENTMSG = 0x4201 - PTRACE_GETFPREGS = 0xe - PTRACE_GETHBPREGS = 0x1d - PTRACE_GETREGS = 0xc - PTRACE_GETREGSET = 0x4204 - PTRACE_GETSIGINFO = 0x4202 - PTRACE_GETVFPREGS = 0x1b - PTRACE_GETWMMXREGS = 0x12 - PTRACE_GET_THREAD_AREA = 0x16 - PTRACE_KILL = 0x8 - PTRACE_OLDSETOPTIONS = 0x15 - PTRACE_O_MASK = 0x7f - PTRACE_O_TRACECLONE = 0x8 - PTRACE_O_TRACEEXEC = 0x10 - PTRACE_O_TRACEEXIT = 0x40 - PTRACE_O_TRACEFORK = 0x2 - PTRACE_O_TRACESYSGOOD = 0x1 - PTRACE_O_TRACEVFORK = 0x4 - PTRACE_O_TRACEVFORKDONE = 0x20 - PTRACE_PEEKDATA = 0x2 - PTRACE_PEEKTEXT = 0x1 - PTRACE_PEEKUSR = 0x3 - PTRACE_POKEDATA = 0x5 - PTRACE_POKETEXT = 0x4 - PTRACE_POKEUSR = 0x6 - PTRACE_SETCRUNCHREGS = 0x1a - PTRACE_SETFPREGS = 0xf - PTRACE_SETHBPREGS = 0x1e - PTRACE_SETOPTIONS = 0x4200 - PTRACE_SETREGS = 0xd - PTRACE_SETREGSET = 0x4205 - PTRACE_SETSIGINFO = 0x4203 - PTRACE_SETVFPREGS = 0x1c - PTRACE_SETWMMXREGS = 0x13 - PTRACE_SET_SYSCALL = 0x17 - PTRACE_SINGLESTEP = 0x9 - PTRACE_SYSCALL = 0x18 - PTRACE_TRACEME = 0x0 - PT_DATA_ADDR = 0x10004 - PT_TEXT_ADDR = 0x10000 - PT_TEXT_END_ADDR = 0x10008 - RLIMIT_AS = 
0x9 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x7 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 - RTAX_ADVMSS = 0x8 - RTAX_CWND = 0x7 - RTAX_FEATURES = 0xc - RTAX_FEATURE_ALLFRAG = 0x8 - RTAX_FEATURE_ECN = 0x1 - RTAX_FEATURE_SACK = 0x2 - RTAX_FEATURE_TIMESTAMP = 0x4 - RTAX_HOPLIMIT = 0xa - RTAX_INITCWND = 0xb - RTAX_INITRWND = 0xe - RTAX_LOCK = 0x1 - RTAX_MAX = 0xe - RTAX_MTU = 0x2 - RTAX_REORDERING = 0x9 - RTAX_RTO_MIN = 0xd - RTAX_RTT = 0x4 - RTAX_RTTVAR = 0x5 - RTAX_SSTHRESH = 0x6 - RTAX_UNSPEC = 0x0 - RTAX_WINDOW = 0x3 - RTA_ALIGNTO = 0x4 - RTA_MAX = 0x10 - RTCF_DIRECTSRC = 0x4000000 - RTCF_DOREDIRECT = 0x1000000 - RTCF_LOG = 0x2000000 - RTCF_MASQ = 0x400000 - RTCF_NAT = 0x800000 - RTCF_VALVE = 0x200000 - RTF_ADDRCLASSMASK = 0xf8000000 - RTF_ADDRCONF = 0x40000 - RTF_ALLONLINK = 0x20000 - RTF_BROADCAST = 0x10000000 - RTF_CACHE = 0x1000000 - RTF_DEFAULT = 0x10000 - RTF_DYNAMIC = 0x10 - RTF_FLOW = 0x2000000 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_INTERFACE = 0x40000000 - RTF_IRTT = 0x100 - RTF_LINKRT = 0x100000 - RTF_LOCAL = 0x80000000 - RTF_MODIFIED = 0x20 - RTF_MSS = 0x40 - RTF_MTU = 0x40 - RTF_MULTICAST = 0x20000000 - RTF_NAT = 0x8000000 - RTF_NOFORWARD = 0x1000 - RTF_NONEXTHOP = 0x200000 - RTF_NOPMTUDISC = 0x4000 - RTF_POLICY = 0x4000000 - RTF_REINSTATE = 0x8 - RTF_REJECT = 0x200 - RTF_STATIC = 0x400 - RTF_THROW = 0x2000 - RTF_UP = 0x1 - RTF_WINDOW = 0x80 - RTF_XRESOLVE = 0x800 - RTM_BASE = 0x10 - RTM_DELACTION = 0x31 - RTM_DELADDR = 0x15 - RTM_DELADDRLABEL = 0x49 - RTM_DELLINK = 0x11 - RTM_DELNEIGH = 0x1d - RTM_DELQDISC = 0x25 - RTM_DELROUTE = 0x19 - RTM_DELRULE = 0x21 - RTM_DELTCLASS = 0x29 - RTM_DELTFILTER = 0x2d - RTM_F_CLONED = 0x200 - RTM_F_EQUALIZE = 0x400 - RTM_F_NOTIFY = 0x100 - RTM_F_PREFIX = 0x800 - RTM_GETACTION = 0x32 - RTM_GETADDR = 0x16 - RTM_GETADDRLABEL = 0x4a - RTM_GETANYCAST = 0x3e - RTM_GETDCB = 0x4e - RTM_GETLINK = 0x12 - RTM_GETMULTICAST = 0x3a - RTM_GETNEIGH = 0x1e - RTM_GETNEIGHTBL = 0x42 - RTM_GETQDISC = 0x26 - RTM_GETROUTE = 0x1a - RTM_GETRULE = 0x22 - RTM_GETTCLASS = 0x2a - RTM_GETTFILTER = 0x2e - RTM_MAX = 0x4f - RTM_NEWACTION = 0x30 - RTM_NEWADDR = 0x14 - RTM_NEWADDRLABEL = 0x48 - RTM_NEWLINK = 0x10 - RTM_NEWNDUSEROPT = 0x44 - RTM_NEWNEIGH = 0x1c - RTM_NEWNEIGHTBL = 0x40 - RTM_NEWPREFIX = 0x34 - RTM_NEWQDISC = 0x24 - RTM_NEWROUTE = 0x18 - RTM_NEWRULE = 0x20 - RTM_NEWTCLASS = 0x28 - RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x10 - RTM_NR_MSGTYPES = 0x40 - RTM_SETDCB = 0x4f - RTM_SETLINK = 0x13 - RTM_SETNEIGHTBL = 0x43 - RTNH_ALIGNTO = 0x4 - RTNH_F_DEAD = 0x1 - RTNH_F_ONLINK = 0x4 - RTNH_F_PERVASIVE = 0x2 - RTN_MAX = 0xb - RTPROT_BIRD = 0xc - RTPROT_BOOT = 0x3 - RTPROT_DHCP = 0x10 - RTPROT_DNROUTED = 0xd - RTPROT_GATED = 0x8 - RTPROT_KERNEL = 0x2 - RTPROT_MRT = 0xa - RTPROT_NTK = 0xf - RTPROT_RA = 0x9 - RTPROT_REDIRECT = 0x1 - RTPROT_STATIC = 0x4 - RTPROT_UNSPEC = 0x0 - RTPROT_XORP = 0xe - RTPROT_ZEBRA = 0xb - RT_CLASS_DEFAULT = 0xfd - RT_CLASS_LOCAL = 0xff - RT_CLASS_MAIN = 0xfe - RT_CLASS_MAX = 0xff - RT_CLASS_UNSPEC = 0x0 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_CREDENTIALS = 0x2 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x1d - SCM_TIMESTAMPING = 0x25 - SCM_TIMESTAMPNS = 0x23 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDDLCI = 0x8980 - SIOCADDMULTI = 0x8931 - SIOCADDRT = 0x890b - SIOCATMARK = 0x8905 - SIOCDARP = 0x8953 - SIOCDELDLCI = 0x8981 - SIOCDELMULTI = 0x8932 - SIOCDELRT = 0x890c - SIOCDEVPRIVATE = 0x89f0 - SIOCDIFADDR = 0x8936 - SIOCDRARP = 0x8960 - 
SIOCGARP = 0x8954 - SIOCGIFADDR = 0x8915 - SIOCGIFBR = 0x8940 - SIOCGIFBRDADDR = 0x8919 - SIOCGIFCONF = 0x8912 - SIOCGIFCOUNT = 0x8938 - SIOCGIFDSTADDR = 0x8917 - SIOCGIFENCAP = 0x8925 - SIOCGIFFLAGS = 0x8913 - SIOCGIFHWADDR = 0x8927 - SIOCGIFINDEX = 0x8933 - SIOCGIFMAP = 0x8970 - SIOCGIFMEM = 0x891f - SIOCGIFMETRIC = 0x891d - SIOCGIFMTU = 0x8921 - SIOCGIFNAME = 0x8910 - SIOCGIFNETMASK = 0x891b - SIOCGIFPFLAGS = 0x8935 - SIOCGIFSLAVE = 0x8929 - SIOCGIFTXQLEN = 0x8942 - SIOCGPGRP = 0x8904 - SIOCGRARP = 0x8961 - SIOCGSTAMP = 0x8906 - SIOCGSTAMPNS = 0x8907 - SIOCPROTOPRIVATE = 0x89e0 - SIOCRTMSG = 0x890d - SIOCSARP = 0x8955 - SIOCSIFADDR = 0x8916 - SIOCSIFBR = 0x8941 - SIOCSIFBRDADDR = 0x891a - SIOCSIFDSTADDR = 0x8918 - SIOCSIFENCAP = 0x8926 - SIOCSIFFLAGS = 0x8914 - SIOCSIFHWADDR = 0x8924 - SIOCSIFHWBROADCAST = 0x8937 - SIOCSIFLINK = 0x8911 - SIOCSIFMAP = 0x8971 - SIOCSIFMEM = 0x8920 - SIOCSIFMETRIC = 0x891e - SIOCSIFMTU = 0x8922 - SIOCSIFNAME = 0x8923 - SIOCSIFNETMASK = 0x891c - SIOCSIFPFLAGS = 0x8934 - SIOCSIFSLAVE = 0x8930 - SIOCSIFTXQLEN = 0x8943 - SIOCSPGRP = 0x8902 - SIOCSRARP = 0x8962 - SOCK_CLOEXEC = 0x80000 - SOCK_DCCP = 0x6 - SOCK_DGRAM = 0x2 - SOCK_NONBLOCK = 0x800 - SOCK_PACKET = 0xa - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_AAL = 0x109 - SOL_ATM = 0x108 - SOL_DECNET = 0x105 - SOL_ICMPV6 = 0x3a - SOL_IP = 0x0 - SOL_IPV6 = 0x29 - SOL_IRDA = 0x10a - SOL_PACKET = 0x107 - SOL_RAW = 0xff - SOL_SOCKET = 0x1 - SOL_TCP = 0x6 - SOL_X25 = 0x106 - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x1e - SO_ATTACH_FILTER = 0x1a - SO_BINDTODEVICE = 0x19 - SO_BROADCAST = 0x6 - SO_BSDCOMPAT = 0xe - SO_DEBUG = 0x1 - SO_DETACH_FILTER = 0x1b - SO_DOMAIN = 0x27 - SO_DONTROUTE = 0x5 - SO_ERROR = 0x4 - SO_KEEPALIVE = 0x9 - SO_LINGER = 0xd - SO_MARK = 0x24 - SO_NO_CHECK = 0xb - SO_OOBINLINE = 0xa - SO_PASSCRED = 0x10 - SO_PASSSEC = 0x22 - SO_PEERCRED = 0x11 - SO_PEERNAME = 0x1c - SO_PEERSEC = 0x1f - SO_PRIORITY = 0xc - SO_PROTOCOL = 0x26 - SO_RCVBUF = 0x8 - SO_RCVBUFFORCE = 0x21 - SO_RCVLOWAT = 0x12 - SO_RCVTIMEO = 0x14 - SO_REUSEADDR = 0x2 - SO_RXQ_OVFL = 0x28 - SO_SECURITY_AUTHENTICATION = 0x16 - SO_SECURITY_ENCRYPTION_NETWORK = 0x18 - SO_SECURITY_ENCRYPTION_TRANSPORT = 0x17 - SO_SNDBUF = 0x7 - SO_SNDBUFFORCE = 0x20 - SO_SNDLOWAT = 0x13 - SO_SNDTIMEO = 0x15 - SO_TIMESTAMP = 0x1d - SO_TIMESTAMPING = 0x25 - SO_TIMESTAMPNS = 0x23 - SO_TYPE = 0x3 - S_BLKSIZE = 0x200 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TAB0 = 0x0 - TAB1 = 0x800 - TAB2 = 0x1000 - TAB3 = 0x1800 - TABDLY = 0x1800 - TCFLSH = 0x540b - TCGETA = 0x5405 - TCGETS = 0x5401 - TCGETS2 = 0x802c542a - TCGETX = 0x5432 - TCIFLUSH = 0x0 - TCIOFF = 0x2 - TCIOFLUSH = 0x2 - TCION = 0x3 - TCOFLUSH = 0x1 - TCOOFF = 0x0 - TCOON = 0x1 - TCP_CONGESTION = 0xd - TCP_CORK = 0x3 - TCP_DEFER_ACCEPT = 0x9 - TCP_INFO = 0xb - TCP_KEEPCNT = 0x6 - TCP_KEEPIDLE = 0x4 - TCP_KEEPINTVL = 0x5 - TCP_LINGER2 = 0x8 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0xe - TCP_MD5SIG_MAXKEYLEN = 0x50 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_QUICKACK = 0xc - TCP_SYNCNT = 0x7 - TCP_WINDOW_CLAMP = 0xa - 
TCSAFLUSH = 0x2 - TCSBRK = 0x5409 - TCSBRKP = 0x5425 - TCSETA = 0x5406 - TCSETAF = 0x5408 - TCSETAW = 0x5407 - TCSETS = 0x5402 - TCSETS2 = 0x402c542b - TCSETSF = 0x5404 - TCSETSF2 = 0x402c542d - TCSETSW = 0x5403 - TCSETSW2 = 0x402c542c - TCSETX = 0x5433 - TCSETXF = 0x5434 - TCSETXW = 0x5435 - TCXONC = 0x540a - TIOCCBRK = 0x5428 - TIOCCONS = 0x541d - TIOCEXCL = 0x540c - TIOCGDEV = 0x80045432 - TIOCGETD = 0x5424 - TIOCGEXCL = 0x80045440 - TIOCGICOUNT = 0x545d - TIOCGLCKTRMIOS = 0x5456 - TIOCGPGRP = 0x540f - TIOCGPKT = 0x80045438 - TIOCGPTLCK = 0x80045439 - TIOCGPTN = 0x80045430 - TIOCGRS485 = 0x542e - TIOCGSERIAL = 0x541e - TIOCGSID = 0x5429 - TIOCGSOFTCAR = 0x5419 - TIOCGWINSZ = 0x5413 - TIOCINQ = 0x541b - TIOCLINUX = 0x541c - TIOCMBIC = 0x5417 - TIOCMBIS = 0x5416 - TIOCMGET = 0x5415 - TIOCMIWAIT = 0x545c - TIOCMSET = 0x5418 - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x5422 - TIOCNXCL = 0x540d - TIOCOUTQ = 0x5411 - TIOCPKT = 0x5420 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCSBRK = 0x5427 - TIOCSCTTY = 0x540e - TIOCSERCONFIG = 0x5453 - TIOCSERGETLSR = 0x5459 - TIOCSERGETMULTI = 0x545a - TIOCSERGSTRUCT = 0x5458 - TIOCSERGWILD = 0x5454 - TIOCSERSETMULTI = 0x545b - TIOCSERSWILD = 0x5455 - TIOCSER_TEMT = 0x1 - TIOCSETD = 0x5423 - TIOCSIG = 0x40045436 - TIOCSLCKTRMIOS = 0x5457 - TIOCSPGRP = 0x5410 - TIOCSPTLCK = 0x40045431 - TIOCSRS485 = 0x542f - TIOCSSERIAL = 0x541f - TIOCSSOFTCAR = 0x541a - TIOCSTI = 0x5412 - TIOCSWINSZ = 0x5414 - TIOCVHANGUP = 0x5437 - TOSTOP = 0x100 - TUNATTACHFILTER = 0x400854d5 - TUNDETACHFILTER = 0x400854d6 - TUNGETFEATURES = 0x800454cf - TUNGETIFF = 0x800454d2 - TUNGETSNDBUF = 0x800454d3 - TUNGETVNETHDRSZ = 0x800454d7 - TUNSETDEBUG = 0x400454c9 - TUNSETGROUP = 0x400454ce - TUNSETIFF = 0x400454ca - TUNSETLINK = 0x400454cd - TUNSETNOCSUM = 0x400454c8 - TUNSETOFFLOAD = 0x400454d0 - TUNSETOWNER = 0x400454cc - TUNSETPERSIST = 0x400454cb - TUNSETSNDBUF = 0x400454d4 - TUNSETTXFILTER = 0x400454d1 - TUNSETVNETHDRSZ = 0x400454d8 - VDISCARD = 0xd - VEOF = 0x4 - VEOL = 0xb - VEOL2 = 0x10 - VERASE = 0x2 - VINTR = 0x0 - VKILL = 0x3 - VLNEXT = 0xf - VMIN = 0x6 - VQUIT = 0x1 - VREPRINT = 0xc - VSTART = 0x8 - VSTOP = 0x9 - VSUSP = 0xa - VSWTC = 0x7 - VT0 = 0x0 - VT1 = 0x4000 - VTDLY = 0x4000 - VTIME = 0x5 - VWERASE = 0xe - WALL = 0x40000000 - WCLONE = 0x80000000 - WCONTINUED = 0x8 - WEXITED = 0x4 - WNOHANG = 0x1 - WNOTHREAD = 0x20000000 - WNOWAIT = 0x1000000 - WORDSIZE = 0x20 - WSTOPPED = 0x2 - WUNTRACED = 0x2 - XCASE = 0x4 - XTABS = 0x1800 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x62) - EADDRNOTAVAIL = syscall.Errno(0x63) - EADV = syscall.Errno(0x44) - EAFNOSUPPORT = syscall.Errno(0x61) - EAGAIN = syscall.Errno(0xb) - EALREADY = syscall.Errno(0x72) - EBADE = syscall.Errno(0x34) - EBADF = syscall.Errno(0x9) - EBADFD = syscall.Errno(0x4d) - EBADMSG = syscall.Errno(0x4a) - EBADR = syscall.Errno(0x35) - EBADRQC = syscall.Errno(0x38) - EBADSLT = syscall.Errno(0x39) - EBFONT = syscall.Errno(0x3b) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x7d) - ECHILD = syscall.Errno(0xa) - ECHRNG = syscall.Errno(0x2c) - ECOMM = syscall.Errno(0x46) - ECONNABORTED = syscall.Errno(0x67) - ECONNREFUSED = 
syscall.Errno(0x6f) - ECONNRESET = syscall.Errno(0x68) - EDEADLK = syscall.Errno(0x23) - EDEADLOCK = syscall.Errno(0x23) - EDESTADDRREQ = syscall.Errno(0x59) - EDOM = syscall.Errno(0x21) - EDOTDOT = syscall.Errno(0x49) - EDQUOT = syscall.Errno(0x7a) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EHOSTDOWN = syscall.Errno(0x70) - EHOSTUNREACH = syscall.Errno(0x71) - EHWPOISON = syscall.Errno(0x85) - EIDRM = syscall.Errno(0x2b) - EILSEQ = syscall.Errno(0x54) - EINPROGRESS = syscall.Errno(0x73) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x6a) - EISDIR = syscall.Errno(0x15) - EISNAM = syscall.Errno(0x78) - EKEYEXPIRED = syscall.Errno(0x7f) - EKEYREJECTED = syscall.Errno(0x81) - EKEYREVOKED = syscall.Errno(0x80) - EL2HLT = syscall.Errno(0x33) - EL2NSYNC = syscall.Errno(0x2d) - EL3HLT = syscall.Errno(0x2e) - EL3RST = syscall.Errno(0x2f) - ELIBACC = syscall.Errno(0x4f) - ELIBBAD = syscall.Errno(0x50) - ELIBEXEC = syscall.Errno(0x53) - ELIBMAX = syscall.Errno(0x52) - ELIBSCN = syscall.Errno(0x51) - ELNRNG = syscall.Errno(0x30) - ELOOP = syscall.Errno(0x28) - EMEDIUMTYPE = syscall.Errno(0x7c) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x5a) - EMULTIHOP = syscall.Errno(0x48) - ENAMETOOLONG = syscall.Errno(0x24) - ENAVAIL = syscall.Errno(0x77) - ENETDOWN = syscall.Errno(0x64) - ENETRESET = syscall.Errno(0x66) - ENETUNREACH = syscall.Errno(0x65) - ENFILE = syscall.Errno(0x17) - ENOANO = syscall.Errno(0x37) - ENOBUFS = syscall.Errno(0x69) - ENOCSI = syscall.Errno(0x32) - ENODATA = syscall.Errno(0x3d) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOKEY = syscall.Errno(0x7e) - ENOLCK = syscall.Errno(0x25) - ENOLINK = syscall.Errno(0x43) - ENOMEDIUM = syscall.Errno(0x7b) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x2a) - ENONET = syscall.Errno(0x40) - ENOPKG = syscall.Errno(0x41) - ENOPROTOOPT = syscall.Errno(0x5c) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x3f) - ENOSTR = syscall.Errno(0x3c) - ENOSYS = syscall.Errno(0x26) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x6b) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x27) - ENOTNAM = syscall.Errno(0x76) - ENOTRECOVERABLE = syscall.Errno(0x83) - ENOTSOCK = syscall.Errno(0x58) - ENOTSUP = syscall.Errno(0x5f) - ENOTTY = syscall.Errno(0x19) - ENOTUNIQ = syscall.Errno(0x4c) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x5f) - EOVERFLOW = syscall.Errno(0x4b) - EOWNERDEAD = syscall.Errno(0x82) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x60) - EPIPE = syscall.Errno(0x20) - EPROTO = syscall.Errno(0x47) - EPROTONOSUPPORT = syscall.Errno(0x5d) - EPROTOTYPE = syscall.Errno(0x5b) - ERANGE = syscall.Errno(0x22) - EREMCHG = syscall.Errno(0x4e) - EREMOTE = syscall.Errno(0x42) - EREMOTEIO = syscall.Errno(0x79) - ERESTART = syscall.Errno(0x55) - ERFKILL = syscall.Errno(0x84) - EROFS = syscall.Errno(0x1e) - ESHUTDOWN = syscall.Errno(0x6c) - ESOCKTNOSUPPORT = syscall.Errno(0x5e) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESRMNT = syscall.Errno(0x45) - ESTALE = syscall.Errno(0x74) - ESTRPIPE = syscall.Errno(0x56) - ETIME = syscall.Errno(0x3e) - ETIMEDOUT = syscall.Errno(0x6e) - ETOOMANYREFS = syscall.Errno(0x6d) - ETXTBSY = syscall.Errno(0x1a) - EUCLEAN = syscall.Errno(0x75) - EUNATCH = syscall.Errno(0x31) - EUSERS = syscall.Errno(0x57) - EWOULDBLOCK = 
syscall.Errno(0xb) - EXDEV = syscall.Errno(0x12) - EXFULL = syscall.Errno(0x36) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0x7) - SIGCHLD = syscall.Signal(0x11) - SIGCLD = syscall.Signal(0x11) - SIGCONT = syscall.Signal(0x12) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x1d) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPOLL = syscall.Signal(0x1d) - SIGPROF = syscall.Signal(0x1b) - SIGPWR = syscall.Signal(0x1e) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTKFLT = syscall.Signal(0x10) - SIGSTOP = syscall.Signal(0x13) - SIGSYS = syscall.Signal(0x1f) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x14) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) - SIGURG = syscall.Signal(0x17) - SIGUSR1 = syscall.Signal(0xa) - SIGUSR2 = syscall.Signal(0xc) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "no such device or address", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource temporarily unavailable", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device or resource busy", - 17: "file exists", - 18: "invalid cross-device link", - 19: "no such device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "numerical result out of range", - 35: "resource deadlock avoided", - 36: "file name too long", - 37: "no locks available", - 38: "function not implemented", - 39: "directory not empty", - 40: "too many levels of symbolic links", - 42: "no message of desired type", - 43: "identifier removed", - 44: "channel number out of range", - 45: "level 2 not synchronized", - 46: "level 3 halted", - 47: "level 3 reset", - 48: "link number out of range", - 49: "protocol driver not attached", - 50: "no CSI structure available", - 51: "level 2 halted", - 52: "invalid exchange", - 53: "invalid request descriptor", - 54: "exchange full", - 55: "no anode", - 56: "invalid request code", - 57: "invalid slot", - 59: "bad font file format", - 60: "device not a stream", - 61: "no data available", - 62: "timer expired", - 63: "out of streams resources", - 64: "machine is not on the network", - 65: "package not installed", - 66: "object is remote", - 67: "link has been severed", - 68: "advertise error", - 69: "srmount error", - 70: "communication error on send", - 71: "protocol error", - 72: "multihop attempted", - 73: "RFS specific error", - 74: "bad message", - 75: "value too large for defined data type", - 76: "name not unique on network", - 77: "file descriptor in bad state", - 78: "remote 
address changed", - 79: "can not access a needed shared library", - 80: "accessing a corrupted shared library", - 81: ".lib section in a.out corrupted", - 82: "attempting to link in too many shared libraries", - 83: "cannot exec a shared library directly", - 84: "invalid or incomplete multibyte or wide character", - 85: "interrupted system call should be restarted", - 86: "streams pipe error", - 87: "too many users", - 88: "socket operation on non-socket", - 89: "destination address required", - 90: "message too long", - 91: "protocol wrong type for socket", - 92: "protocol not available", - 93: "protocol not supported", - 94: "socket type not supported", - 95: "operation not supported", - 96: "protocol family not supported", - 97: "address family not supported by protocol", - 98: "address already in use", - 99: "cannot assign requested address", - 100: "network is down", - 101: "network is unreachable", - 102: "network dropped connection on reset", - 103: "software caused connection abort", - 104: "connection reset by peer", - 105: "no buffer space available", - 106: "transport endpoint is already connected", - 107: "transport endpoint is not connected", - 108: "cannot send after transport endpoint shutdown", - 109: "too many references: cannot splice", - 110: "connection timed out", - 111: "connection refused", - 112: "host is down", - 113: "no route to host", - 114: "operation already in progress", - 115: "operation now in progress", - 116: "stale NFS file handle", - 117: "structure needs cleaning", - 118: "not a XENIX named type file", - 119: "no XENIX semaphores available", - 120: "is a named type file", - 121: "remote I/O error", - 122: "disk quota exceeded", - 123: "no medium found", - 124: "wrong medium type", - 125: "operation canceled", - 126: "required key not available", - 127: "key has expired", - 128: "key has been revoked", - 129: "key was rejected by service", - 130: "owner died", - 131: "state not recoverable", - 132: "operation not possible due to RF-kill", - 133: "unknown error 133", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/breakpoint trap", - 6: "aborted", - 7: "bus error", - 8: "floating point exception", - 9: "killed", - 10: "user defined signal 1", - 11: "segmentation fault", - 12: "user defined signal 2", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "stack fault", - 17: "child exited", - 18: "continued", - 19: "stopped (signal)", - 20: "stopped", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "urgent I/O condition", - 24: "CPU time limit exceeded", - 25: "file size limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window changed", - 29: "I/O possible", - 30: "power failure", - 31: "bad system call", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_arm64.go deleted file mode 100644 index 47027b79c9d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_arm64.go +++ /dev/null @@ -1,1896 +0,0 @@ -// mkerrors.sh -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm64,linux - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- _const.go - -package unix - -import "syscall" - -const ( - AF_ALG = 0x26 - AF_APPLETALK = 0x5 - AF_ASH = 0x12 - AF_ATMPVC = 0x8 
- AF_ATMSVC = 0x14 - AF_AX25 = 0x3 - AF_BLUETOOTH = 0x1f - AF_BRIDGE = 0x7 - AF_CAIF = 0x25 - AF_CAN = 0x1d - AF_DECnet = 0xc - AF_ECONET = 0x13 - AF_FILE = 0x1 - AF_IEEE802154 = 0x24 - AF_INET = 0x2 - AF_INET6 = 0xa - AF_IPX = 0x4 - AF_IRDA = 0x17 - AF_ISDN = 0x22 - AF_IUCV = 0x20 - AF_KEY = 0xf - AF_LLC = 0x1a - AF_LOCAL = 0x1 - AF_MAX = 0x29 - AF_NETBEUI = 0xd - AF_NETLINK = 0x10 - AF_NETROM = 0x6 - AF_NFC = 0x27 - AF_PACKET = 0x11 - AF_PHONET = 0x23 - AF_PPPOX = 0x18 - AF_RDS = 0x15 - AF_ROSE = 0xb - AF_ROUTE = 0x10 - AF_RXRPC = 0x21 - AF_SECURITY = 0xe - AF_SNA = 0x16 - AF_TIPC = 0x1e - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_VSOCK = 0x28 - AF_WANPIPE = 0x19 - AF_X25 = 0x9 - ARPHRD_ADAPT = 0x108 - ARPHRD_APPLETLK = 0x8 - ARPHRD_ARCNET = 0x7 - ARPHRD_ASH = 0x30d - ARPHRD_ATM = 0x13 - ARPHRD_AX25 = 0x3 - ARPHRD_BIF = 0x307 - ARPHRD_CAIF = 0x336 - ARPHRD_CAN = 0x118 - ARPHRD_CHAOS = 0x5 - ARPHRD_CISCO = 0x201 - ARPHRD_CSLIP = 0x101 - ARPHRD_CSLIP6 = 0x103 - ARPHRD_DDCMP = 0x205 - ARPHRD_DLCI = 0xf - ARPHRD_ECONET = 0x30e - ARPHRD_EETHER = 0x2 - ARPHRD_ETHER = 0x1 - ARPHRD_EUI64 = 0x1b - ARPHRD_FCAL = 0x311 - ARPHRD_FCFABRIC = 0x313 - ARPHRD_FCPL = 0x312 - ARPHRD_FCPP = 0x310 - ARPHRD_FDDI = 0x306 - ARPHRD_FRAD = 0x302 - ARPHRD_HDLC = 0x201 - ARPHRD_HIPPI = 0x30c - ARPHRD_HWX25 = 0x110 - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - ARPHRD_IEEE80211 = 0x321 - ARPHRD_IEEE80211_PRISM = 0x322 - ARPHRD_IEEE80211_RADIOTAP = 0x323 - ARPHRD_IEEE802154 = 0x324 - ARPHRD_IEEE802154_MONITOR = 0x325 - ARPHRD_IEEE802_TR = 0x320 - ARPHRD_INFINIBAND = 0x20 - ARPHRD_IP6GRE = 0x337 - ARPHRD_IPDDP = 0x309 - ARPHRD_IPGRE = 0x30a - ARPHRD_IRDA = 0x30f - ARPHRD_LAPB = 0x204 - ARPHRD_LOCALTLK = 0x305 - ARPHRD_LOOPBACK = 0x304 - ARPHRD_METRICOM = 0x17 - ARPHRD_NETLINK = 0x338 - ARPHRD_NETROM = 0x0 - ARPHRD_NONE = 0xfffe - ARPHRD_PHONET = 0x334 - ARPHRD_PHONET_PIPE = 0x335 - ARPHRD_PIMREG = 0x30b - ARPHRD_PPP = 0x200 - ARPHRD_PRONET = 0x4 - ARPHRD_RAWHDLC = 0x206 - ARPHRD_ROSE = 0x10e - ARPHRD_RSRVD = 0x104 - ARPHRD_SIT = 0x308 - ARPHRD_SKIP = 0x303 - ARPHRD_SLIP = 0x100 - ARPHRD_SLIP6 = 0x102 - ARPHRD_TUNNEL = 0x300 - ARPHRD_TUNNEL6 = 0x301 - ARPHRD_VOID = 0xffff - ARPHRD_X25 = 0x10f - B0 = 0x0 - B1000000 = 0x1008 - B110 = 0x3 - B115200 = 0x1002 - B1152000 = 0x1009 - B1200 = 0x9 - B134 = 0x4 - B150 = 0x5 - B1500000 = 0x100a - B1800 = 0xa - B19200 = 0xe - B200 = 0x6 - B2000000 = 0x100b - B230400 = 0x1003 - B2400 = 0xb - B2500000 = 0x100c - B300 = 0x7 - B3000000 = 0x100d - B3500000 = 0x100e - B38400 = 0xf - B4000000 = 0x100f - B460800 = 0x1004 - B4800 = 0xc - B50 = 0x1 - B500000 = 0x1005 - B57600 = 0x1001 - B576000 = 0x1006 - B600 = 0x8 - B75 = 0x2 - B921600 = 0x1007 - B9600 = 0xd - BOTHER = 0x1000 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXINSNS = 0x1000 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MOD = 0x90 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BPF_XOR = 0xa0 - BRKINT = 0x2 - BS0 = 0x0 - BS1 = 0x2000 - BSDLY = 0x2000 - CBAUD = 0x100f - CBAUDEX = 0x1000 - CFLUSH = 0xf - 
CIBAUD = 0x100f0000 - CLOCAL = 0x800 - CLOCK_BOOTTIME = 0x7 - CLOCK_BOOTTIME_ALARM = 0x9 - CLOCK_DEFAULT = 0x0 - CLOCK_EXT = 0x1 - CLOCK_INT = 0x2 - CLOCK_MONOTONIC = 0x1 - CLOCK_MONOTONIC_COARSE = 0x6 - CLOCK_MONOTONIC_RAW = 0x4 - CLOCK_PROCESS_CPUTIME_ID = 0x2 - CLOCK_REALTIME = 0x0 - CLOCK_REALTIME_ALARM = 0x8 - CLOCK_REALTIME_COARSE = 0x5 - CLOCK_THREAD_CPUTIME_ID = 0x3 - CLOCK_TXFROMRX = 0x4 - CLOCK_TXINT = 0x3 - CLONE_CHILD_CLEARTID = 0x200000 - CLONE_CHILD_SETTID = 0x1000000 - CLONE_DETACHED = 0x400000 - CLONE_FILES = 0x400 - CLONE_FS = 0x200 - CLONE_IO = 0x80000000 - CLONE_NEWIPC = 0x8000000 - CLONE_NEWNET = 0x40000000 - CLONE_NEWNS = 0x20000 - CLONE_NEWPID = 0x20000000 - CLONE_NEWUSER = 0x10000000 - CLONE_NEWUTS = 0x4000000 - CLONE_PARENT = 0x8000 - CLONE_PARENT_SETTID = 0x100000 - CLONE_PTRACE = 0x2000 - CLONE_SETTLS = 0x80000 - CLONE_SIGHAND = 0x800 - CLONE_SYSVSEM = 0x40000 - CLONE_THREAD = 0x10000 - CLONE_UNTRACED = 0x800000 - CLONE_VFORK = 0x4000 - CLONE_VM = 0x100 - CMSPAR = 0x40000000 - CR0 = 0x0 - CR1 = 0x200 - CR2 = 0x400 - CR3 = 0x600 - CRDLY = 0x600 - CREAD = 0x80 - CRTSCTS = 0x80000000 - CS5 = 0x0 - CS6 = 0x10 - CS7 = 0x20 - CS8 = 0x30 - CSIGNAL = 0xff - CSIZE = 0x30 - CSTART = 0x11 - CSTATUS = 0x0 - CSTOP = 0x13 - CSTOPB = 0x40 - CSUSP = 0x1a - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x200 - ECHOE = 0x10 - ECHOK = 0x20 - ECHOKE = 0x800 - ECHONL = 0x40 - ECHOPRT = 0x400 - ELF_NGREG = 0x22 - ELF_PRARGSZ = 0x50 - ENCODING_DEFAULT = 0x0 - ENCODING_FM_MARK = 0x3 - ENCODING_FM_SPACE = 0x4 - ENCODING_MANCHESTER = 0x5 - ENCODING_NRZ = 0x1 - ENCODING_NRZI = 0x2 - EPOLLERR = 0x8 - EPOLLET = 0x80000000 - EPOLLHUP = 0x10 - EPOLLIN = 0x1 - EPOLLMSG = 0x400 - EPOLLONESHOT = 0x40000000 - EPOLLOUT = 0x4 - EPOLLPRI = 0x2 - EPOLLRDBAND = 0x80 - EPOLLRDHUP = 0x2000 - EPOLLRDNORM = 0x40 - EPOLLWAKEUP = 0x20000000 - EPOLLWRBAND = 0x200 - EPOLLWRNORM = 0x100 - EPOLL_CLOEXEC = 0x80000 - EPOLL_CTL_ADD = 0x1 - EPOLL_CTL_DEL = 0x2 - EPOLL_CTL_MOD = 0x3 - ETH_P_1588 = 0x88f7 - ETH_P_8021AD = 0x88a8 - ETH_P_8021AH = 0x88e7 - ETH_P_8021Q = 0x8100 - ETH_P_802_2 = 0x4 - ETH_P_802_3 = 0x1 - ETH_P_802_3_MIN = 0x600 - ETH_P_802_EX1 = 0x88b5 - ETH_P_AARP = 0x80f3 - ETH_P_AF_IUCV = 0xfbfb - ETH_P_ALL = 0x3 - ETH_P_AOE = 0x88a2 - ETH_P_ARCNET = 0x1a - ETH_P_ARP = 0x806 - ETH_P_ATALK = 0x809b - ETH_P_ATMFATE = 0x8884 - ETH_P_ATMMPOA = 0x884c - ETH_P_AX25 = 0x2 - ETH_P_BATMAN = 0x4305 - ETH_P_BPQ = 0x8ff - ETH_P_CAIF = 0xf7 - ETH_P_CAN = 0xc - ETH_P_CANFD = 0xd - ETH_P_CONTROL = 0x16 - ETH_P_CUST = 0x6006 - ETH_P_DDCMP = 0x6 - ETH_P_DEC = 0x6000 - ETH_P_DIAG = 0x6005 - ETH_P_DNA_DL = 0x6001 - ETH_P_DNA_RC = 0x6002 - ETH_P_DNA_RT = 0x6003 - ETH_P_DSA = 0x1b - ETH_P_ECONET = 0x18 - ETH_P_EDSA = 0xdada - ETH_P_FCOE = 0x8906 - ETH_P_FIP = 0x8914 - ETH_P_HDLC = 0x19 - ETH_P_IEEE802154 = 0xf6 - ETH_P_IEEEPUP = 0xa00 - ETH_P_IEEEPUPAT = 0xa01 - ETH_P_IP = 0x800 - ETH_P_IPV6 = 0x86dd - ETH_P_IPX = 0x8137 - ETH_P_IRDA = 0x17 - ETH_P_LAT = 0x6004 - ETH_P_LINK_CTL = 0x886c - ETH_P_LOCALTALK = 0x9 - ETH_P_LOOP = 0x60 - ETH_P_MOBITEX = 0x15 - ETH_P_MPLS_MC = 0x8848 - ETH_P_MPLS_UC = 0x8847 - ETH_P_MVRP = 0x88f5 - ETH_P_PAE = 0x888e - ETH_P_PAUSE = 0x8808 - ETH_P_PHONET = 0xf5 - ETH_P_PPPTALK = 0x10 - ETH_P_PPP_DISC = 0x8863 - ETH_P_PPP_MP = 0x8 - ETH_P_PPP_SES = 0x8864 - ETH_P_PRP = 0x88fb - ETH_P_PUP = 0x200 - ETH_P_PUPAT = 0x201 - ETH_P_QINQ1 = 0x9100 - ETH_P_QINQ2 = 0x9200 - ETH_P_QINQ3 = 
0x9300 - ETH_P_RARP = 0x8035 - ETH_P_SCA = 0x6007 - ETH_P_SLOW = 0x8809 - ETH_P_SNAP = 0x5 - ETH_P_TDLS = 0x890d - ETH_P_TEB = 0x6558 - ETH_P_TIPC = 0x88ca - ETH_P_TRAILER = 0x1c - ETH_P_TR_802_2 = 0x11 - ETH_P_WAN_PPP = 0x7 - ETH_P_WCCP = 0x883e - ETH_P_X25 = 0x805 - EXTA = 0xe - EXTB = 0xf - EXTPROC = 0x10000 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FF0 = 0x0 - FF1 = 0x8000 - FFDLY = 0x8000 - FLUSHO = 0x1000 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x406 - F_EXLCK = 0x4 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLEASE = 0x401 - F_GETLK = 0x5 - F_GETLK64 = 0x5 - F_GETOWN = 0x9 - F_GETOWN_EX = 0x10 - F_GETPIPE_SZ = 0x408 - F_GETSIG = 0xb - F_LOCK = 0x1 - F_NOTIFY = 0x402 - F_OK = 0x0 - F_RDLCK = 0x0 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLEASE = 0x400 - F_SETLK = 0x6 - F_SETLK64 = 0x6 - F_SETLKW = 0x7 - F_SETLKW64 = 0x7 - F_SETOWN = 0x8 - F_SETOWN_EX = 0xf - F_SETPIPE_SZ = 0x407 - F_SETSIG = 0xa - F_SHLCK = 0x8 - F_TEST = 0x3 - F_TLOCK = 0x2 - F_ULOCK = 0x0 - F_UNLCK = 0x2 - F_WRLCK = 0x1 - HUPCL = 0x400 - IBSHIFT = 0x10 - ICANON = 0x2 - ICMPV6_FILTER = 0x1 - ICRNL = 0x100 - IEXTEN = 0x8000 - IFA_F_DADFAILED = 0x8 - IFA_F_DEPRECATED = 0x20 - IFA_F_HOMEADDRESS = 0x10 - IFA_F_NODAD = 0x2 - IFA_F_OPTIMISTIC = 0x4 - IFA_F_PERMANENT = 0x80 - IFA_F_SECONDARY = 0x1 - IFA_F_TEMPORARY = 0x1 - IFA_F_TENTATIVE = 0x40 - IFA_MAX = 0x7 - IFF_802_1Q_VLAN = 0x1 - IFF_ALLMULTI = 0x200 - IFF_ATTACH_QUEUE = 0x200 - IFF_AUTOMEDIA = 0x4000 - IFF_BONDING = 0x20 - IFF_BRIDGE_PORT = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_DETACH_QUEUE = 0x400 - IFF_DISABLE_NETPOLL = 0x1000 - IFF_DONT_BRIDGE = 0x800 - IFF_DORMANT = 0x20000 - IFF_DYNAMIC = 0x8000 - IFF_EBRIDGE = 0x2 - IFF_ECHO = 0x40000 - IFF_ISATAP = 0x80 - IFF_LIVE_ADDR_CHANGE = 0x100000 - IFF_LOOPBACK = 0x8 - IFF_LOWER_UP = 0x10000 - IFF_MACVLAN = 0x200000 - IFF_MACVLAN_PORT = 0x2000 - IFF_MASTER = 0x400 - IFF_MASTER_8023AD = 0x8 - IFF_MASTER_ALB = 0x10 - IFF_MASTER_ARPMON = 0x100 - IFF_MULTICAST = 0x1000 - IFF_MULTI_QUEUE = 0x100 - IFF_NOARP = 0x80 - IFF_NOFILTER = 0x1000 - IFF_NOTRAILERS = 0x20 - IFF_NO_PI = 0x1000 - IFF_ONE_QUEUE = 0x2000 - IFF_OVS_DATAPATH = 0x8000 - IFF_PERSIST = 0x800 - IFF_POINTOPOINT = 0x10 - IFF_PORTSEL = 0x2000 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SLAVE = 0x800 - IFF_SLAVE_INACTIVE = 0x4 - IFF_SLAVE_NEEDARP = 0x40 - IFF_SUPP_NOFCS = 0x80000 - IFF_TAP = 0x2 - IFF_TEAM_PORT = 0x40000 - IFF_TUN = 0x1 - IFF_TUN_EXCL = 0x8000 - IFF_TX_SKB_SHARING = 0x10000 - IFF_UNICAST_FLT = 0x20000 - IFF_UP = 0x1 - IFF_VNET_HDR = 0x4000 - IFF_VOLATILE = 0x70c5a - IFF_WAN_HDLC = 0x200 - IFF_XMIT_DST_RELEASE = 0x400 - IFNAMSIZ = 0x10 - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_ACCESS = 0x1 - IN_ALL_EVENTS = 0xfff - IN_ATTRIB = 0x4 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLOEXEC = 0x80000 - IN_CLOSE = 0x18 - IN_CLOSE_NOWRITE = 0x10 - IN_CLOSE_WRITE = 0x8 - IN_CREATE = 0x100 - IN_DELETE = 0x200 - IN_DELETE_SELF = 0x400 - IN_DONT_FOLLOW = 0x2000000 - IN_EXCL_UNLINK = 0x4000000 - IN_IGNORED = 0x8000 - IN_ISDIR = 0x40000000 - IN_LOOPBACKNET = 0x7f - IN_MASK_ADD = 0x20000000 - IN_MODIFY = 0x2 - IN_MOVE = 0xc0 - IN_MOVED_FROM = 0x40 - IN_MOVED_TO = 0x80 - IN_MOVE_SELF = 0x800 - IN_NONBLOCK = 0x800 - IN_ONESHOT = 0x80000000 - IN_ONLYDIR = 
0x1000000 - IN_OPEN = 0x20 - IN_Q_OVERFLOW = 0x4000 - IN_UNMOUNT = 0x2000 - IPPROTO_AH = 0x33 - IPPROTO_BEETPH = 0x5e - IPPROTO_COMP = 0x6c - IPPROTO_DCCP = 0x21 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_ESP = 0x32 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPIP = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_MH = 0x87 - IPPROTO_MTP = 0x5c - IPPROTO_NONE = 0x3b - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_SCTP = 0x84 - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPPROTO_UDPLITE = 0x88 - IPV6_2292DSTOPTS = 0x4 - IPV6_2292HOPLIMIT = 0x8 - IPV6_2292HOPOPTS = 0x3 - IPV6_2292PKTINFO = 0x2 - IPV6_2292PKTOPTIONS = 0x6 - IPV6_2292RTHDR = 0x5 - IPV6_ADDRFORM = 0x1 - IPV6_ADD_MEMBERSHIP = 0x14 - IPV6_AUTHHDR = 0xa - IPV6_CHECKSUM = 0x7 - IPV6_DROP_MEMBERSHIP = 0x15 - IPV6_DSTOPTS = 0x3b - IPV6_HOPLIMIT = 0x34 - IPV6_HOPOPTS = 0x36 - IPV6_IPSEC_POLICY = 0x22 - IPV6_JOIN_ANYCAST = 0x1b - IPV6_JOIN_GROUP = 0x14 - IPV6_LEAVE_ANYCAST = 0x1c - IPV6_LEAVE_GROUP = 0x15 - IPV6_MTU = 0x18 - IPV6_MTU_DISCOVER = 0x17 - IPV6_MULTICAST_HOPS = 0x12 - IPV6_MULTICAST_IF = 0x11 - IPV6_MULTICAST_LOOP = 0x13 - IPV6_NEXTHOP = 0x9 - IPV6_PKTINFO = 0x32 - IPV6_PMTUDISC_DO = 0x2 - IPV6_PMTUDISC_DONT = 0x0 - IPV6_PMTUDISC_PROBE = 0x3 - IPV6_PMTUDISC_WANT = 0x1 - IPV6_RECVDSTOPTS = 0x3a - IPV6_RECVERR = 0x19 - IPV6_RECVHOPLIMIT = 0x33 - IPV6_RECVHOPOPTS = 0x35 - IPV6_RECVPKTINFO = 0x31 - IPV6_RECVRTHDR = 0x38 - IPV6_RECVTCLASS = 0x42 - IPV6_ROUTER_ALERT = 0x16 - IPV6_RTHDR = 0x39 - IPV6_RTHDRDSTOPTS = 0x37 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_RXDSTOPTS = 0x3b - IPV6_RXHOPOPTS = 0x36 - IPV6_TCLASS = 0x43 - IPV6_UNICAST_HOPS = 0x10 - IPV6_V6ONLY = 0x1a - IPV6_XFRM_POLICY = 0x23 - IP_ADD_MEMBERSHIP = 0x23 - IP_ADD_SOURCE_MEMBERSHIP = 0x27 - IP_BLOCK_SOURCE = 0x26 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0x24 - IP_DROP_SOURCE_MEMBERSHIP = 0x28 - IP_FREEBIND = 0xf - IP_HDRINCL = 0x3 - IP_IPSEC_POLICY = 0x10 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINTTL = 0x15 - IP_MSFILTER = 0x29 - IP_MSS = 0x240 - IP_MTU = 0xe - IP_MTU_DISCOVER = 0xa - IP_MULTICAST_ALL = 0x31 - IP_MULTICAST_IF = 0x20 - IP_MULTICAST_LOOP = 0x22 - IP_MULTICAST_TTL = 0x21 - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x4 - IP_ORIGDSTADDR = 0x14 - IP_PASSSEC = 0x12 - IP_PKTINFO = 0x8 - IP_PKTOPTIONS = 0x9 - IP_PMTUDISC = 0xa - IP_PMTUDISC_DO = 0x2 - IP_PMTUDISC_DONT = 0x0 - IP_PMTUDISC_PROBE = 0x3 - IP_PMTUDISC_WANT = 0x1 - IP_RECVERR = 0xb - IP_RECVOPTS = 0x6 - IP_RECVORIGDSTADDR = 0x14 - IP_RECVRETOPTS = 0x7 - IP_RECVTOS = 0xd - IP_RECVTTL = 0xc - IP_RETOPTS = 0x7 - IP_RF = 0x8000 - IP_ROUTER_ALERT = 0x5 - IP_TOS = 0x1 - IP_TRANSPARENT = 0x13 - IP_TTL = 0x2 - IP_UNBLOCK_SOURCE = 0x25 - IP_UNICAST_IF = 0x32 - IP_XFRM_POLICY = 0x11 - ISIG = 0x1 - ISTRIP = 0x20 - IUCLC = 0x200 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x1000 - IXON = 0x400 - LINUX_REBOOT_CMD_CAD_OFF = 0x0 - LINUX_REBOOT_CMD_CAD_ON = 0x89abcdef - LINUX_REBOOT_CMD_HALT = 0xcdef0123 - LINUX_REBOOT_CMD_KEXEC = 0x45584543 - LINUX_REBOOT_CMD_POWER_OFF = 0x4321fedc - LINUX_REBOOT_CMD_RESTART = 0x1234567 - LINUX_REBOOT_CMD_RESTART2 = 0xa1b2c3d4 - LINUX_REBOOT_CMD_SW_SUSPEND = 0xd000fce2 - LINUX_REBOOT_MAGIC1 = 
0xfee1dead
-	[... remainder of deleted machine-generated zerrors constants, errno constants, signal constants, error table, and signal table elided ...]
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_ppc64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_ppc64.go
deleted file mode 100644
index 5b90d07ed23..00000000000
--- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_ppc64.go
+++ /dev/null
@@ -1,1969 +0,0 @@
-// mkerrors.sh -m64
-// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
-
-// +build ppc64,linux
-
-// Created by cgo -godefs - DO NOT EDIT
-// cgo -godefs -- -m64 _const.go
-
-package unix
-
-import "syscall"
-
-const (
-	[... machine-generated ppc64 syscall, socket, termios, and netlink constants elided ...]
-)
-
-// Errors
-const (
-	[... machine-generated errno constants elided ...]
-)
-
-// Signals
-const (
-	[... machine-generated signal constants elided ...]
-)
-
-// Error table
-var errors = [...]string{
-	[... error strings for entries 1 through 93 elided ...]
-	94: "socket type 
not supported", - 95: "operation not supported", - 96: "protocol family not supported", - 97: "address family not supported by protocol", - 98: "address already in use", - 99: "cannot assign requested address", - 100: "network is down", - 101: "network is unreachable", - 102: "network dropped connection on reset", - 103: "software caused connection abort", - 104: "connection reset by peer", - 105: "no buffer space available", - 106: "transport endpoint is already connected", - 107: "transport endpoint is not connected", - 108: "cannot send after transport endpoint shutdown", - 109: "too many references: cannot splice", - 110: "connection timed out", - 111: "connection refused", - 112: "host is down", - 113: "no route to host", - 114: "operation already in progress", - 115: "operation now in progress", - 116: "stale file handle", - 117: "structure needs cleaning", - 118: "not a XENIX named type file", - 119: "no XENIX semaphores available", - 120: "is a named type file", - 121: "remote I/O error", - 122: "disk quota exceeded", - 123: "no medium found", - 124: "wrong medium type", - 125: "operation canceled", - 126: "required key not available", - 127: "key has expired", - 128: "key has been revoked", - 129: "key was rejected by service", - 130: "owner died", - 131: "state not recoverable", - 132: "operation not possible due to RF-kill", - 133: "memory page has hardware error", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/breakpoint trap", - 6: "aborted", - 7: "bus error", - 8: "floating point exception", - 9: "killed", - 10: "user defined signal 1", - 11: "segmentation fault", - 12: "user defined signal 2", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "stack fault", - 17: "child exited", - 18: "continued", - 19: "stopped (signal)", - 20: "stopped", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "urgent I/O condition", - 24: "CPU time limit exceeded", - 25: "file size limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window changed", - 29: "I/O possible", - 30: "power failure", - 31: "bad system call", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_ppc64le.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_ppc64le.go deleted file mode 100644 index 0861bd5666b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_linux_ppc64le.go +++ /dev/null @@ -1,1968 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build ppc64le,linux - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_ALG = 0x26 - AF_APPLETALK = 0x5 - AF_ASH = 0x12 - AF_ATMPVC = 0x8 - AF_ATMSVC = 0x14 - AF_AX25 = 0x3 - AF_BLUETOOTH = 0x1f - AF_BRIDGE = 0x7 - AF_CAIF = 0x25 - AF_CAN = 0x1d - AF_DECnet = 0xc - AF_ECONET = 0x13 - AF_FILE = 0x1 - AF_IEEE802154 = 0x24 - AF_INET = 0x2 - AF_INET6 = 0xa - AF_IPX = 0x4 - AF_IRDA = 0x17 - AF_ISDN = 0x22 - AF_IUCV = 0x20 - AF_KEY = 0xf - AF_LLC = 0x1a - AF_LOCAL = 0x1 - AF_MAX = 0x29 - AF_NETBEUI = 0xd - AF_NETLINK = 0x10 - AF_NETROM = 0x6 - AF_NFC = 0x27 - AF_PACKET = 0x11 - AF_PHONET = 0x23 - AF_PPPOX = 0x18 - AF_RDS = 0x15 - AF_ROSE = 0xb - AF_ROUTE = 0x10 - AF_RXRPC = 0x21 - AF_SECURITY = 0xe - AF_SNA = 0x16 - AF_TIPC = 0x1e - AF_UNIX = 0x1 - 
AF_UNSPEC = 0x0 - AF_VSOCK = 0x28 - AF_WANPIPE = 0x19 - AF_X25 = 0x9 - ARPHRD_ADAPT = 0x108 - ARPHRD_APPLETLK = 0x8 - ARPHRD_ARCNET = 0x7 - ARPHRD_ASH = 0x30d - ARPHRD_ATM = 0x13 - ARPHRD_AX25 = 0x3 - ARPHRD_BIF = 0x307 - ARPHRD_CAIF = 0x336 - ARPHRD_CAN = 0x118 - ARPHRD_CHAOS = 0x5 - ARPHRD_CISCO = 0x201 - ARPHRD_CSLIP = 0x101 - ARPHRD_CSLIP6 = 0x103 - ARPHRD_DDCMP = 0x205 - ARPHRD_DLCI = 0xf - ARPHRD_ECONET = 0x30e - ARPHRD_EETHER = 0x2 - ARPHRD_ETHER = 0x1 - ARPHRD_EUI64 = 0x1b - ARPHRD_FCAL = 0x311 - ARPHRD_FCFABRIC = 0x313 - ARPHRD_FCPL = 0x312 - ARPHRD_FCPP = 0x310 - ARPHRD_FDDI = 0x306 - ARPHRD_FRAD = 0x302 - ARPHRD_HDLC = 0x201 - ARPHRD_HIPPI = 0x30c - ARPHRD_HWX25 = 0x110 - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - ARPHRD_IEEE80211 = 0x321 - ARPHRD_IEEE80211_PRISM = 0x322 - ARPHRD_IEEE80211_RADIOTAP = 0x323 - ARPHRD_IEEE802154 = 0x324 - ARPHRD_IEEE802154_MONITOR = 0x325 - ARPHRD_IEEE802_TR = 0x320 - ARPHRD_INFINIBAND = 0x20 - ARPHRD_IP6GRE = 0x337 - ARPHRD_IPDDP = 0x309 - ARPHRD_IPGRE = 0x30a - ARPHRD_IRDA = 0x30f - ARPHRD_LAPB = 0x204 - ARPHRD_LOCALTLK = 0x305 - ARPHRD_LOOPBACK = 0x304 - ARPHRD_METRICOM = 0x17 - ARPHRD_NETLINK = 0x338 - ARPHRD_NETROM = 0x0 - ARPHRD_NONE = 0xfffe - ARPHRD_PHONET = 0x334 - ARPHRD_PHONET_PIPE = 0x335 - ARPHRD_PIMREG = 0x30b - ARPHRD_PPP = 0x200 - ARPHRD_PRONET = 0x4 - ARPHRD_RAWHDLC = 0x206 - ARPHRD_ROSE = 0x10e - ARPHRD_RSRVD = 0x104 - ARPHRD_SIT = 0x308 - ARPHRD_SKIP = 0x303 - ARPHRD_SLIP = 0x100 - ARPHRD_SLIP6 = 0x102 - ARPHRD_TUNNEL = 0x300 - ARPHRD_TUNNEL6 = 0x301 - ARPHRD_VOID = 0xffff - ARPHRD_X25 = 0x10f - B0 = 0x0 - B1000000 = 0x17 - B110 = 0x3 - B115200 = 0x11 - B1152000 = 0x18 - B1200 = 0x9 - B134 = 0x4 - B150 = 0x5 - B1500000 = 0x19 - B1800 = 0xa - B19200 = 0xe - B200 = 0x6 - B2000000 = 0x1a - B230400 = 0x12 - B2400 = 0xb - B2500000 = 0x1b - B300 = 0x7 - B3000000 = 0x1c - B3500000 = 0x1d - B38400 = 0xf - B4000000 = 0x1e - B460800 = 0x13 - B4800 = 0xc - B50 = 0x1 - B500000 = 0x14 - B57600 = 0x10 - B576000 = 0x15 - B600 = 0x8 - B75 = 0x2 - B921600 = 0x16 - B9600 = 0xd - BOTHER = 0x1f - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXINSNS = 0x1000 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MOD = 0x90 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BPF_XOR = 0xa0 - BRKINT = 0x2 - BS0 = 0x0 - BS1 = 0x8000 - BSDLY = 0x8000 - CBAUD = 0xff - CBAUDEX = 0x0 - CFLUSH = 0xf - CIBAUD = 0xff0000 - CLOCAL = 0x8000 - CLOCK_BOOTTIME = 0x7 - CLOCK_BOOTTIME_ALARM = 0x9 - CLOCK_DEFAULT = 0x0 - CLOCK_EXT = 0x1 - CLOCK_INT = 0x2 - CLOCK_MONOTONIC = 0x1 - CLOCK_MONOTONIC_COARSE = 0x6 - CLOCK_MONOTONIC_RAW = 0x4 - CLOCK_PROCESS_CPUTIME_ID = 0x2 - CLOCK_REALTIME = 0x0 - CLOCK_REALTIME_ALARM = 0x8 - CLOCK_REALTIME_COARSE = 0x5 - CLOCK_THREAD_CPUTIME_ID = 0x3 - CLOCK_TXFROMRX = 0x4 - CLOCK_TXINT = 0x3 - CLONE_CHILD_CLEARTID = 0x200000 - CLONE_CHILD_SETTID = 0x1000000 - CLONE_DETACHED = 0x400000 - CLONE_FILES = 0x400 - CLONE_FS = 0x200 - CLONE_IO = 0x80000000 - CLONE_NEWIPC = 0x8000000 - CLONE_NEWNET = 0x40000000 - CLONE_NEWNS = 0x20000 
- CLONE_NEWPID = 0x20000000 - CLONE_NEWUSER = 0x10000000 - CLONE_NEWUTS = 0x4000000 - CLONE_PARENT = 0x8000 - CLONE_PARENT_SETTID = 0x100000 - CLONE_PTRACE = 0x2000 - CLONE_SETTLS = 0x80000 - CLONE_SIGHAND = 0x800 - CLONE_SYSVSEM = 0x40000 - CLONE_THREAD = 0x10000 - CLONE_UNTRACED = 0x800000 - CLONE_VFORK = 0x4000 - CLONE_VM = 0x100 - CMSPAR = 0x40000000 - CR0 = 0x0 - CR1 = 0x1000 - CR2 = 0x2000 - CR3 = 0x3000 - CRDLY = 0x3000 - CREAD = 0x800 - CRTSCTS = 0x80000000 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIGNAL = 0xff - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x0 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - ENCODING_DEFAULT = 0x0 - ENCODING_FM_MARK = 0x3 - ENCODING_FM_SPACE = 0x4 - ENCODING_MANCHESTER = 0x5 - ENCODING_NRZ = 0x1 - ENCODING_NRZI = 0x2 - EPOLLERR = 0x8 - EPOLLET = 0x80000000 - EPOLLHUP = 0x10 - EPOLLIN = 0x1 - EPOLLMSG = 0x400 - EPOLLONESHOT = 0x40000000 - EPOLLOUT = 0x4 - EPOLLPRI = 0x2 - EPOLLRDBAND = 0x80 - EPOLLRDHUP = 0x2000 - EPOLLRDNORM = 0x40 - EPOLLWAKEUP = 0x20000000 - EPOLLWRBAND = 0x200 - EPOLLWRNORM = 0x100 - EPOLL_CLOEXEC = 0x80000 - EPOLL_CTL_ADD = 0x1 - EPOLL_CTL_DEL = 0x2 - EPOLL_CTL_MOD = 0x3 - ETH_P_1588 = 0x88f7 - ETH_P_8021AD = 0x88a8 - ETH_P_8021AH = 0x88e7 - ETH_P_8021Q = 0x8100 - ETH_P_802_2 = 0x4 - ETH_P_802_3 = 0x1 - ETH_P_802_3_MIN = 0x600 - ETH_P_802_EX1 = 0x88b5 - ETH_P_AARP = 0x80f3 - ETH_P_AF_IUCV = 0xfbfb - ETH_P_ALL = 0x3 - ETH_P_AOE = 0x88a2 - ETH_P_ARCNET = 0x1a - ETH_P_ARP = 0x806 - ETH_P_ATALK = 0x809b - ETH_P_ATMFATE = 0x8884 - ETH_P_ATMMPOA = 0x884c - ETH_P_AX25 = 0x2 - ETH_P_BATMAN = 0x4305 - ETH_P_BPQ = 0x8ff - ETH_P_CAIF = 0xf7 - ETH_P_CAN = 0xc - ETH_P_CANFD = 0xd - ETH_P_CONTROL = 0x16 - ETH_P_CUST = 0x6006 - ETH_P_DDCMP = 0x6 - ETH_P_DEC = 0x6000 - ETH_P_DIAG = 0x6005 - ETH_P_DNA_DL = 0x6001 - ETH_P_DNA_RC = 0x6002 - ETH_P_DNA_RT = 0x6003 - ETH_P_DSA = 0x1b - ETH_P_ECONET = 0x18 - ETH_P_EDSA = 0xdada - ETH_P_FCOE = 0x8906 - ETH_P_FIP = 0x8914 - ETH_P_HDLC = 0x19 - ETH_P_IEEE802154 = 0xf6 - ETH_P_IEEEPUP = 0xa00 - ETH_P_IEEEPUPAT = 0xa01 - ETH_P_IP = 0x800 - ETH_P_IPV6 = 0x86dd - ETH_P_IPX = 0x8137 - ETH_P_IRDA = 0x17 - ETH_P_LAT = 0x6004 - ETH_P_LINK_CTL = 0x886c - ETH_P_LOCALTALK = 0x9 - ETH_P_LOOP = 0x60 - ETH_P_MOBITEX = 0x15 - ETH_P_MPLS_MC = 0x8848 - ETH_P_MPLS_UC = 0x8847 - ETH_P_MVRP = 0x88f5 - ETH_P_PAE = 0x888e - ETH_P_PAUSE = 0x8808 - ETH_P_PHONET = 0xf5 - ETH_P_PPPTALK = 0x10 - ETH_P_PPP_DISC = 0x8863 - ETH_P_PPP_MP = 0x8 - ETH_P_PPP_SES = 0x8864 - ETH_P_PRP = 0x88fb - ETH_P_PUP = 0x200 - ETH_P_PUPAT = 0x201 - ETH_P_QINQ1 = 0x9100 - ETH_P_QINQ2 = 0x9200 - ETH_P_QINQ3 = 0x9300 - ETH_P_RARP = 0x8035 - ETH_P_SCA = 0x6007 - ETH_P_SLOW = 0x8809 - ETH_P_SNAP = 0x5 - ETH_P_TDLS = 0x890d - ETH_P_TEB = 0x6558 - ETH_P_TIPC = 0x88ca - ETH_P_TRAILER = 0x1c - ETH_P_TR_802_2 = 0x11 - ETH_P_WAN_PPP = 0x7 - ETH_P_WCCP = 0x883e - ETH_P_X25 = 0x805 - EXTA = 0xe - EXTB = 0xf - EXTPROC = 0x10000000 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FF0 = 0x0 - FF1 = 0x4000 - FFDLY = 0x4000 - FLUSHO = 0x800000 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x406 - F_EXLCK = 0x4 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLEASE = 0x401 - F_GETLK = 0x5 - F_GETLK64 = 0xc - F_GETOWN = 0x9 - F_GETOWN_EX = 0x10 - F_GETPIPE_SZ = 0x408 - F_GETSIG = 0xb - F_LOCK = 0x1 - F_NOTIFY = 0x402 - F_OK = 0x0 - F_RDLCK 
= 0x0 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLEASE = 0x400 - F_SETLK = 0x6 - F_SETLK64 = 0xd - F_SETLKW = 0x7 - F_SETLKW64 = 0xe - F_SETOWN = 0x8 - F_SETOWN_EX = 0xf - F_SETPIPE_SZ = 0x407 - F_SETSIG = 0xa - F_SHLCK = 0x8 - F_TEST = 0x3 - F_TLOCK = 0x2 - F_ULOCK = 0x0 - F_UNLCK = 0x2 - F_WRLCK = 0x1 - HUPCL = 0x4000 - IBSHIFT = 0x10 - ICANON = 0x100 - ICMPV6_FILTER = 0x1 - ICRNL = 0x100 - IEXTEN = 0x400 - IFA_F_DADFAILED = 0x8 - IFA_F_DEPRECATED = 0x20 - IFA_F_HOMEADDRESS = 0x10 - IFA_F_NODAD = 0x2 - IFA_F_OPTIMISTIC = 0x4 - IFA_F_PERMANENT = 0x80 - IFA_F_SECONDARY = 0x1 - IFA_F_TEMPORARY = 0x1 - IFA_F_TENTATIVE = 0x40 - IFA_MAX = 0x7 - IFF_802_1Q_VLAN = 0x1 - IFF_ALLMULTI = 0x200 - IFF_ATTACH_QUEUE = 0x200 - IFF_AUTOMEDIA = 0x4000 - IFF_BONDING = 0x20 - IFF_BRIDGE_PORT = 0x4000 - IFF_BROADCAST = 0x2 - IFF_DEBUG = 0x4 - IFF_DETACH_QUEUE = 0x400 - IFF_DISABLE_NETPOLL = 0x1000 - IFF_DONT_BRIDGE = 0x800 - IFF_DORMANT = 0x20000 - IFF_DYNAMIC = 0x8000 - IFF_EBRIDGE = 0x2 - IFF_ECHO = 0x40000 - IFF_ISATAP = 0x80 - IFF_LIVE_ADDR_CHANGE = 0x100000 - IFF_LOOPBACK = 0x8 - IFF_LOWER_UP = 0x10000 - IFF_MACVLAN = 0x200000 - IFF_MACVLAN_PORT = 0x2000 - IFF_MASTER = 0x400 - IFF_MASTER_8023AD = 0x8 - IFF_MASTER_ALB = 0x10 - IFF_MASTER_ARPMON = 0x100 - IFF_MULTICAST = 0x1000 - IFF_MULTI_QUEUE = 0x100 - IFF_NOARP = 0x80 - IFF_NOFILTER = 0x1000 - IFF_NOTRAILERS = 0x20 - IFF_NO_PI = 0x1000 - IFF_ONE_QUEUE = 0x2000 - IFF_OVS_DATAPATH = 0x8000 - IFF_PERSIST = 0x800 - IFF_POINTOPOINT = 0x10 - IFF_PORTSEL = 0x2000 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SLAVE = 0x800 - IFF_SLAVE_INACTIVE = 0x4 - IFF_SLAVE_NEEDARP = 0x40 - IFF_SUPP_NOFCS = 0x80000 - IFF_TAP = 0x2 - IFF_TEAM_PORT = 0x40000 - IFF_TUN = 0x1 - IFF_TUN_EXCL = 0x8000 - IFF_TX_SKB_SHARING = 0x10000 - IFF_UNICAST_FLT = 0x20000 - IFF_UP = 0x1 - IFF_VNET_HDR = 0x4000 - IFF_VOLATILE = 0x70c5a - IFF_WAN_HDLC = 0x200 - IFF_XMIT_DST_RELEASE = 0x400 - IFNAMSIZ = 0x10 - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_ACCESS = 0x1 - IN_ALL_EVENTS = 0xfff - IN_ATTRIB = 0x4 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLOEXEC = 0x80000 - IN_CLOSE = 0x18 - IN_CLOSE_NOWRITE = 0x10 - IN_CLOSE_WRITE = 0x8 - IN_CREATE = 0x100 - IN_DELETE = 0x200 - IN_DELETE_SELF = 0x400 - IN_DONT_FOLLOW = 0x2000000 - IN_EXCL_UNLINK = 0x4000000 - IN_IGNORED = 0x8000 - IN_ISDIR = 0x40000000 - IN_LOOPBACKNET = 0x7f - IN_MASK_ADD = 0x20000000 - IN_MODIFY = 0x2 - IN_MOVE = 0xc0 - IN_MOVED_FROM = 0x40 - IN_MOVED_TO = 0x80 - IN_MOVE_SELF = 0x800 - IN_NONBLOCK = 0x800 - IN_ONESHOT = 0x80000000 - IN_ONLYDIR = 0x1000000 - IN_OPEN = 0x20 - IN_Q_OVERFLOW = 0x4000 - IN_UNMOUNT = 0x2000 - IPPROTO_AH = 0x33 - IPPROTO_BEETPH = 0x5e - IPPROTO_COMP = 0x6c - IPPROTO_DCCP = 0x21 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_ESP = 0x32 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPIP = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_MH = 0x87 - IPPROTO_MTP = 0x5c - IPPROTO_NONE = 0x3b - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_SCTP = 0x84 - IPPROTO_TCP = 0x6 - IPPROTO_TP 
= 0x1d - IPPROTO_UDP = 0x11 - IPPROTO_UDPLITE = 0x88 - IPV6_2292DSTOPTS = 0x4 - IPV6_2292HOPLIMIT = 0x8 - IPV6_2292HOPOPTS = 0x3 - IPV6_2292PKTINFO = 0x2 - IPV6_2292PKTOPTIONS = 0x6 - IPV6_2292RTHDR = 0x5 - IPV6_ADDRFORM = 0x1 - IPV6_ADD_MEMBERSHIP = 0x14 - IPV6_AUTHHDR = 0xa - IPV6_CHECKSUM = 0x7 - IPV6_DROP_MEMBERSHIP = 0x15 - IPV6_DSTOPTS = 0x3b - IPV6_HOPLIMIT = 0x34 - IPV6_HOPOPTS = 0x36 - IPV6_IPSEC_POLICY = 0x22 - IPV6_JOIN_ANYCAST = 0x1b - IPV6_JOIN_GROUP = 0x14 - IPV6_LEAVE_ANYCAST = 0x1c - IPV6_LEAVE_GROUP = 0x15 - IPV6_MTU = 0x18 - IPV6_MTU_DISCOVER = 0x17 - IPV6_MULTICAST_HOPS = 0x12 - IPV6_MULTICAST_IF = 0x11 - IPV6_MULTICAST_LOOP = 0x13 - IPV6_NEXTHOP = 0x9 - IPV6_PKTINFO = 0x32 - IPV6_PMTUDISC_DO = 0x2 - IPV6_PMTUDISC_DONT = 0x0 - IPV6_PMTUDISC_PROBE = 0x3 - IPV6_PMTUDISC_WANT = 0x1 - IPV6_RECVDSTOPTS = 0x3a - IPV6_RECVERR = 0x19 - IPV6_RECVHOPLIMIT = 0x33 - IPV6_RECVHOPOPTS = 0x35 - IPV6_RECVPKTINFO = 0x31 - IPV6_RECVRTHDR = 0x38 - IPV6_RECVTCLASS = 0x42 - IPV6_ROUTER_ALERT = 0x16 - IPV6_RTHDR = 0x39 - IPV6_RTHDRDSTOPTS = 0x37 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_RXDSTOPTS = 0x3b - IPV6_RXHOPOPTS = 0x36 - IPV6_TCLASS = 0x43 - IPV6_UNICAST_HOPS = 0x10 - IPV6_V6ONLY = 0x1a - IPV6_XFRM_POLICY = 0x23 - IP_ADD_MEMBERSHIP = 0x23 - IP_ADD_SOURCE_MEMBERSHIP = 0x27 - IP_BLOCK_SOURCE = 0x26 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0x24 - IP_DROP_SOURCE_MEMBERSHIP = 0x28 - IP_FREEBIND = 0xf - IP_HDRINCL = 0x3 - IP_IPSEC_POLICY = 0x10 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINTTL = 0x15 - IP_MSFILTER = 0x29 - IP_MSS = 0x240 - IP_MTU = 0xe - IP_MTU_DISCOVER = 0xa - IP_MULTICAST_ALL = 0x31 - IP_MULTICAST_IF = 0x20 - IP_MULTICAST_LOOP = 0x22 - IP_MULTICAST_TTL = 0x21 - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x4 - IP_ORIGDSTADDR = 0x14 - IP_PASSSEC = 0x12 - IP_PKTINFO = 0x8 - IP_PKTOPTIONS = 0x9 - IP_PMTUDISC = 0xa - IP_PMTUDISC_DO = 0x2 - IP_PMTUDISC_DONT = 0x0 - IP_PMTUDISC_PROBE = 0x3 - IP_PMTUDISC_WANT = 0x1 - IP_RECVERR = 0xb - IP_RECVOPTS = 0x6 - IP_RECVORIGDSTADDR = 0x14 - IP_RECVRETOPTS = 0x7 - IP_RECVTOS = 0xd - IP_RECVTTL = 0xc - IP_RETOPTS = 0x7 - IP_RF = 0x8000 - IP_ROUTER_ALERT = 0x5 - IP_TOS = 0x1 - IP_TRANSPARENT = 0x13 - IP_TTL = 0x2 - IP_UNBLOCK_SOURCE = 0x25 - IP_UNICAST_IF = 0x32 - IP_XFRM_POLICY = 0x11 - ISIG = 0x80 - ISTRIP = 0x20 - IUCLC = 0x1000 - IUTF8 = 0x4000 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LINUX_REBOOT_CMD_CAD_OFF = 0x0 - LINUX_REBOOT_CMD_CAD_ON = 0x89abcdef - LINUX_REBOOT_CMD_HALT = 0xcdef0123 - LINUX_REBOOT_CMD_KEXEC = 0x45584543 - LINUX_REBOOT_CMD_POWER_OFF = 0x4321fedc - LINUX_REBOOT_CMD_RESTART = 0x1234567 - LINUX_REBOOT_CMD_RESTART2 = 0xa1b2c3d4 - LINUX_REBOOT_CMD_SW_SUSPEND = 0xd000fce2 - LINUX_REBOOT_MAGIC1 = 0xfee1dead - LINUX_REBOOT_MAGIC2 = 0x28121969 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DODUMP = 0x11 - MADV_DOFORK = 0xb - MADV_DONTDUMP = 0x10 - MADV_DONTFORK = 0xa - MADV_DONTNEED = 0x4 - MADV_HUGEPAGE = 0xe - MADV_HWPOISON = 0x64 - MADV_MERGEABLE = 0xc - MADV_NOHUGEPAGE = 0xf - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_REMOVE = 0x9 - MADV_SEQUENTIAL = 0x2 - MADV_UNMERGEABLE = 0xd - MADV_WILLNEED = 0x3 - MAP_ANON = 0x20 - MAP_ANONYMOUS = 0x20 - MAP_DENYWRITE = 0x800 - MAP_EXECUTABLE = 0x1000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_GROWSDOWN = 0x100 - MAP_HUGETLB = 0x40000 - MAP_HUGE_MASK = 0x3f - MAP_HUGE_SHIFT = 0x1a - MAP_LOCKED = 0x80 - MAP_NONBLOCK = 
0x10000 - MAP_NORESERVE = 0x40 - MAP_POPULATE = 0x8000 - MAP_PRIVATE = 0x2 - MAP_SHARED = 0x1 - MAP_STACK = 0x20000 - MAP_TYPE = 0xf - MCL_CURRENT = 0x2000 - MCL_FUTURE = 0x4000 - MNT_DETACH = 0x2 - MNT_EXPIRE = 0x4 - MNT_FORCE = 0x1 - MSG_CMSG_CLOEXEC = 0x40000000 - MSG_CONFIRM = 0x800 - MSG_CTRUNC = 0x8 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x40 - MSG_EOR = 0x80 - MSG_ERRQUEUE = 0x2000 - MSG_FASTOPEN = 0x20000000 - MSG_FIN = 0x200 - MSG_MORE = 0x8000 - MSG_NOSIGNAL = 0x4000 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_PROXY = 0x10 - MSG_RST = 0x1000 - MSG_SYN = 0x400 - MSG_TRUNC = 0x20 - MSG_TRYHARD = 0x4 - MSG_WAITALL = 0x100 - MSG_WAITFORONE = 0x10000 - MS_ACTIVE = 0x40000000 - MS_ASYNC = 0x1 - MS_BIND = 0x1000 - MS_DIRSYNC = 0x80 - MS_INVALIDATE = 0x2 - MS_I_VERSION = 0x800000 - MS_KERNMOUNT = 0x400000 - MS_MANDLOCK = 0x40 - MS_MGC_MSK = 0xffff0000 - MS_MGC_VAL = 0xc0ed0000 - MS_MOVE = 0x2000 - MS_NOATIME = 0x400 - MS_NODEV = 0x4 - MS_NODIRATIME = 0x800 - MS_NOEXEC = 0x8 - MS_NOSUID = 0x2 - MS_NOUSER = -0x80000000 - MS_POSIXACL = 0x10000 - MS_PRIVATE = 0x40000 - MS_RDONLY = 0x1 - MS_REC = 0x4000 - MS_RELATIME = 0x200000 - MS_REMOUNT = 0x20 - MS_RMT_MASK = 0x800051 - MS_SHARED = 0x100000 - MS_SILENT = 0x8000 - MS_SLAVE = 0x80000 - MS_STRICTATIME = 0x1000000 - MS_SYNC = 0x4 - MS_SYNCHRONOUS = 0x10 - MS_UNBINDABLE = 0x20000 - NAME_MAX = 0xff - NETLINK_ADD_MEMBERSHIP = 0x1 - NETLINK_AUDIT = 0x9 - NETLINK_BROADCAST_ERROR = 0x4 - NETLINK_CONNECTOR = 0xb - NETLINK_CRYPTO = 0x15 - NETLINK_DNRTMSG = 0xe - NETLINK_DROP_MEMBERSHIP = 0x2 - NETLINK_ECRYPTFS = 0x13 - NETLINK_FIB_LOOKUP = 0xa - NETLINK_FIREWALL = 0x3 - NETLINK_GENERIC = 0x10 - NETLINK_INET_DIAG = 0x4 - NETLINK_IP6_FW = 0xd - NETLINK_ISCSI = 0x8 - NETLINK_KOBJECT_UEVENT = 0xf - NETLINK_NETFILTER = 0xc - NETLINK_NFLOG = 0x5 - NETLINK_NO_ENOBUFS = 0x5 - NETLINK_PKTINFO = 0x3 - NETLINK_RDMA = 0x14 - NETLINK_ROUTE = 0x0 - NETLINK_RX_RING = 0x6 - NETLINK_SCSITRANSPORT = 0x12 - NETLINK_SELINUX = 0x7 - NETLINK_SOCK_DIAG = 0x4 - NETLINK_TX_RING = 0x7 - NETLINK_UNUSED = 0x1 - NETLINK_USERSOCK = 0x2 - NETLINK_XFRM = 0x6 - NL0 = 0x0 - NL1 = 0x100 - NL2 = 0x200 - NL3 = 0x300 - NLA_ALIGNTO = 0x4 - NLA_F_NESTED = 0x8000 - NLA_F_NET_BYTEORDER = 0x4000 - NLA_HDRLEN = 0x4 - NLDLY = 0x300 - NLMSG_ALIGNTO = 0x4 - NLMSG_DONE = 0x3 - NLMSG_ERROR = 0x2 - NLMSG_HDRLEN = 0x10 - NLMSG_MIN_TYPE = 0x10 - NLMSG_NOOP = 0x1 - NLMSG_OVERRUN = 0x4 - NLM_F_ACK = 0x4 - NLM_F_APPEND = 0x800 - NLM_F_ATOMIC = 0x400 - NLM_F_CREATE = 0x400 - NLM_F_DUMP = 0x300 - NLM_F_DUMP_INTR = 0x10 - NLM_F_ECHO = 0x8 - NLM_F_EXCL = 0x200 - NLM_F_MATCH = 0x200 - NLM_F_MULTI = 0x2 - NLM_F_REPLACE = 0x100 - NLM_F_REQUEST = 0x1 - NLM_F_ROOT = 0x100 - NOFLSH = 0x80000000 - OCRNL = 0x8 - OFDEL = 0x80 - OFILL = 0x40 - OLCUC = 0x4 - ONLCR = 0x2 - ONLRET = 0x20 - ONOCR = 0x10 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x400 - O_ASYNC = 0x2000 - O_CLOEXEC = 0x80000 - O_CREAT = 0x40 - O_DIRECT = 0x20000 - O_DIRECTORY = 0x4000 - O_DSYNC = 0x1000 - O_EXCL = 0x80 - O_FSYNC = 0x101000 - O_LARGEFILE = 0x0 - O_NDELAY = 0x800 - O_NOATIME = 0x40000 - O_NOCTTY = 0x100 - O_NOFOLLOW = 0x8000 - O_NONBLOCK = 0x800 - O_PATH = 0x200000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x101000 - O_SYNC = 0x101000 - O_TMPFILE = 0x410000 - O_TRUNC = 0x200 - O_WRONLY = 0x1 - PACKET_ADD_MEMBERSHIP = 0x1 - PACKET_AUXDATA = 0x8 - PACKET_BROADCAST = 0x1 - PACKET_COPY_THRESH = 0x7 - PACKET_DROP_MEMBERSHIP = 0x2 - PACKET_FANOUT = 0x12 - PACKET_FANOUT_CPU = 0x2 - PACKET_FANOUT_FLAG_DEFRAG = 0x8000 - PACKET_FANOUT_FLAG_ROLLOVER = 
0x1000 - PACKET_FANOUT_HASH = 0x0 - PACKET_FANOUT_LB = 0x1 - PACKET_FANOUT_RND = 0x4 - PACKET_FANOUT_ROLLOVER = 0x3 - PACKET_FASTROUTE = 0x6 - PACKET_HDRLEN = 0xb - PACKET_HOST = 0x0 - PACKET_LOOPBACK = 0x5 - PACKET_LOSS = 0xe - PACKET_MR_ALLMULTI = 0x2 - PACKET_MR_MULTICAST = 0x0 - PACKET_MR_PROMISC = 0x1 - PACKET_MR_UNICAST = 0x3 - PACKET_MULTICAST = 0x2 - PACKET_ORIGDEV = 0x9 - PACKET_OTHERHOST = 0x3 - PACKET_OUTGOING = 0x4 - PACKET_RECV_OUTPUT = 0x3 - PACKET_RESERVE = 0xc - PACKET_RX_RING = 0x5 - PACKET_STATISTICS = 0x6 - PACKET_TIMESTAMP = 0x11 - PACKET_TX_HAS_OFF = 0x13 - PACKET_TX_RING = 0xd - PACKET_TX_TIMESTAMP = 0x10 - PACKET_VERSION = 0xa - PACKET_VNET_HDR = 0xf - PARENB = 0x1000 - PARITY_CRC16_PR0 = 0x2 - PARITY_CRC16_PR0_CCITT = 0x4 - PARITY_CRC16_PR1 = 0x3 - PARITY_CRC16_PR1_CCITT = 0x5 - PARITY_CRC32_PR0_CCITT = 0x6 - PARITY_CRC32_PR1_CCITT = 0x7 - PARITY_DEFAULT = 0x0 - PARITY_NONE = 0x1 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_GROWSDOWN = 0x1000000 - PROT_GROWSUP = 0x2000000 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_SAO = 0x10 - PROT_WRITE = 0x2 - PR_CAPBSET_DROP = 0x18 - PR_CAPBSET_READ = 0x17 - PR_ENDIAN_BIG = 0x0 - PR_ENDIAN_LITTLE = 0x1 - PR_ENDIAN_PPC_LITTLE = 0x2 - PR_FPEMU_NOPRINT = 0x1 - PR_FPEMU_SIGFPE = 0x2 - PR_FP_EXC_ASYNC = 0x2 - PR_FP_EXC_DISABLED = 0x0 - PR_FP_EXC_DIV = 0x10000 - PR_FP_EXC_INV = 0x100000 - PR_FP_EXC_NONRECOV = 0x1 - PR_FP_EXC_OVF = 0x20000 - PR_FP_EXC_PRECISE = 0x3 - PR_FP_EXC_RES = 0x80000 - PR_FP_EXC_SW_ENABLE = 0x80 - PR_FP_EXC_UND = 0x40000 - PR_GET_CHILD_SUBREAPER = 0x25 - PR_GET_DUMPABLE = 0x3 - PR_GET_ENDIAN = 0x13 - PR_GET_FPEMU = 0x9 - PR_GET_FPEXC = 0xb - PR_GET_KEEPCAPS = 0x7 - PR_GET_NAME = 0x10 - PR_GET_NO_NEW_PRIVS = 0x27 - PR_GET_PDEATHSIG = 0x2 - PR_GET_SECCOMP = 0x15 - PR_GET_SECUREBITS = 0x1b - PR_GET_TID_ADDRESS = 0x28 - PR_GET_TIMERSLACK = 0x1e - PR_GET_TIMING = 0xd - PR_GET_TSC = 0x19 - PR_GET_UNALIGN = 0x5 - PR_MCE_KILL = 0x21 - PR_MCE_KILL_CLEAR = 0x0 - PR_MCE_KILL_DEFAULT = 0x2 - PR_MCE_KILL_EARLY = 0x1 - PR_MCE_KILL_GET = 0x22 - PR_MCE_KILL_LATE = 0x0 - PR_MCE_KILL_SET = 0x1 - PR_SET_CHILD_SUBREAPER = 0x24 - PR_SET_DUMPABLE = 0x4 - PR_SET_ENDIAN = 0x14 - PR_SET_FPEMU = 0xa - PR_SET_FPEXC = 0xc - PR_SET_KEEPCAPS = 0x8 - PR_SET_MM = 0x23 - PR_SET_MM_ARG_END = 0x9 - PR_SET_MM_ARG_START = 0x8 - PR_SET_MM_AUXV = 0xc - PR_SET_MM_BRK = 0x7 - PR_SET_MM_END_CODE = 0x2 - PR_SET_MM_END_DATA = 0x4 - PR_SET_MM_ENV_END = 0xb - PR_SET_MM_ENV_START = 0xa - PR_SET_MM_EXE_FILE = 0xd - PR_SET_MM_START_BRK = 0x6 - PR_SET_MM_START_CODE = 0x1 - PR_SET_MM_START_DATA = 0x3 - PR_SET_MM_START_STACK = 0x5 - PR_SET_NAME = 0xf - PR_SET_NO_NEW_PRIVS = 0x26 - PR_SET_PDEATHSIG = 0x1 - PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 - PR_SET_SECCOMP = 0x16 - PR_SET_SECUREBITS = 0x1c - PR_SET_TIMERSLACK = 0x1d - PR_SET_TIMING = 0xe - PR_SET_TSC = 0x1a - PR_SET_UNALIGN = 0x6 - PR_TASK_PERF_EVENTS_DISABLE = 0x1f - PR_TASK_PERF_EVENTS_ENABLE = 0x20 - PR_TIMING_STATISTICAL = 0x0 - PR_TIMING_TIMESTAMP = 0x1 - PR_TSC_ENABLE = 0x1 - PR_TSC_SIGSEGV = 0x2 - PR_UNALIGN_NOPRINT = 0x1 - PR_UNALIGN_SIGBUS = 0x2 - PTRACE_ATTACH = 0x10 - PTRACE_CONT = 0x7 - PTRACE_DETACH = 0x11 - PTRACE_EVENT_CLONE = 0x3 - PTRACE_EVENT_EXEC = 0x4 - PTRACE_EVENT_EXIT = 0x6 - PTRACE_EVENT_FORK = 0x1 - PTRACE_EVENT_SECCOMP = 0x7 - PTRACE_EVENT_STOP = 0x80 - PTRACE_EVENT_VFORK = 0x2 - PTRACE_EVENT_VFORK_DONE = 0x5 - PTRACE_GETEVENTMSG = 0x4201 - PTRACE_GETEVRREGS = 0x14 - 
PTRACE_GETFPREGS = 0xe - PTRACE_GETREGS = 0xc - PTRACE_GETREGS64 = 0x16 - PTRACE_GETREGSET = 0x4204 - PTRACE_GETSIGINFO = 0x4202 - PTRACE_GETSIGMASK = 0x420a - PTRACE_GETVRREGS = 0x12 - PTRACE_GETVSRREGS = 0x1b - PTRACE_GET_DEBUGREG = 0x19 - PTRACE_INTERRUPT = 0x4207 - PTRACE_KILL = 0x8 - PTRACE_LISTEN = 0x4208 - PTRACE_O_EXITKILL = 0x100000 - PTRACE_O_MASK = 0x1000ff - PTRACE_O_TRACECLONE = 0x8 - PTRACE_O_TRACEEXEC = 0x10 - PTRACE_O_TRACEEXIT = 0x40 - PTRACE_O_TRACEFORK = 0x2 - PTRACE_O_TRACESECCOMP = 0x80 - PTRACE_O_TRACESYSGOOD = 0x1 - PTRACE_O_TRACEVFORK = 0x4 - PTRACE_O_TRACEVFORKDONE = 0x20 - PTRACE_PEEKDATA = 0x2 - PTRACE_PEEKSIGINFO = 0x4209 - PTRACE_PEEKSIGINFO_SHARED = 0x1 - PTRACE_PEEKTEXT = 0x1 - PTRACE_PEEKUSR = 0x3 - PTRACE_POKEDATA = 0x5 - PTRACE_POKETEXT = 0x4 - PTRACE_POKEUSR = 0x6 - PTRACE_SEIZE = 0x4206 - PTRACE_SETEVRREGS = 0x15 - PTRACE_SETFPREGS = 0xf - PTRACE_SETOPTIONS = 0x4200 - PTRACE_SETREGS = 0xd - PTRACE_SETREGS64 = 0x17 - PTRACE_SETREGSET = 0x4205 - PTRACE_SETSIGINFO = 0x4203 - PTRACE_SETSIGMASK = 0x420b - PTRACE_SETVRREGS = 0x13 - PTRACE_SETVSRREGS = 0x1c - PTRACE_SET_DEBUGREG = 0x1a - PTRACE_SINGLEBLOCK = 0x100 - PTRACE_SINGLESTEP = 0x9 - PTRACE_SYSCALL = 0x18 - PTRACE_TRACEME = 0x0 - PT_CCR = 0x26 - PT_CTR = 0x23 - PT_DAR = 0x29 - PT_DSCR = 0x2c - PT_DSISR = 0x2a - PT_FPR0 = 0x30 - PT_FPSCR = 0x50 - PT_LNK = 0x24 - PT_MSR = 0x21 - PT_NIP = 0x20 - PT_ORIG_R3 = 0x22 - PT_R0 = 0x0 - PT_R1 = 0x1 - PT_R10 = 0xa - PT_R11 = 0xb - PT_R12 = 0xc - PT_R13 = 0xd - PT_R14 = 0xe - PT_R15 = 0xf - PT_R16 = 0x10 - PT_R17 = 0x11 - PT_R18 = 0x12 - PT_R19 = 0x13 - PT_R2 = 0x2 - PT_R20 = 0x14 - PT_R21 = 0x15 - PT_R22 = 0x16 - PT_R23 = 0x17 - PT_R24 = 0x18 - PT_R25 = 0x19 - PT_R26 = 0x1a - PT_R27 = 0x1b - PT_R28 = 0x1c - PT_R29 = 0x1d - PT_R3 = 0x3 - PT_R30 = 0x1e - PT_R31 = 0x1f - PT_R4 = 0x4 - PT_R5 = 0x5 - PT_R6 = 0x6 - PT_R7 = 0x7 - PT_R8 = 0x8 - PT_R9 = 0x9 - PT_REGS_COUNT = 0x2c - PT_RESULT = 0x2b - PT_SOFTE = 0x27 - PT_TRAP = 0x28 - PT_VR0 = 0x52 - PT_VRSAVE = 0x94 - PT_VSCR = 0x93 - PT_VSR0 = 0x96 - PT_VSR31 = 0xd4 - PT_XER = 0x25 - RLIMIT_AS = 0x9 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x7 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 - RTAX_ADVMSS = 0x8 - RTAX_CWND = 0x7 - RTAX_FEATURES = 0xc - RTAX_FEATURE_ALLFRAG = 0x8 - RTAX_FEATURE_ECN = 0x1 - RTAX_FEATURE_SACK = 0x2 - RTAX_FEATURE_TIMESTAMP = 0x4 - RTAX_HOPLIMIT = 0xa - RTAX_INITCWND = 0xb - RTAX_INITRWND = 0xe - RTAX_LOCK = 0x1 - RTAX_MAX = 0xf - RTAX_MTU = 0x2 - RTAX_QUICKACK = 0xf - RTAX_REORDERING = 0x9 - RTAX_RTO_MIN = 0xd - RTAX_RTT = 0x4 - RTAX_RTTVAR = 0x5 - RTAX_SSTHRESH = 0x6 - RTAX_UNSPEC = 0x0 - RTAX_WINDOW = 0x3 - RTA_ALIGNTO = 0x4 - RTA_MAX = 0x11 - RTCF_DIRECTSRC = 0x4000000 - RTCF_DOREDIRECT = 0x1000000 - RTCF_LOG = 0x2000000 - RTCF_MASQ = 0x400000 - RTCF_NAT = 0x800000 - RTCF_VALVE = 0x200000 - RTF_ADDRCLASSMASK = 0xf8000000 - RTF_ADDRCONF = 0x40000 - RTF_ALLONLINK = 0x20000 - RTF_BROADCAST = 0x10000000 - RTF_CACHE = 0x1000000 - RTF_DEFAULT = 0x10000 - RTF_DYNAMIC = 0x10 - RTF_FLOW = 0x2000000 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_INTERFACE = 0x40000000 - RTF_IRTT = 0x100 - RTF_LINKRT = 0x100000 - RTF_LOCAL = 0x80000000 - RTF_MODIFIED = 0x20 - RTF_MSS = 0x40 - RTF_MTU = 0x40 - RTF_MULTICAST = 0x20000000 - RTF_NAT = 0x8000000 - RTF_NOFORWARD = 0x1000 - RTF_NONEXTHOP = 0x200000 - RTF_NOPMTUDISC = 0x4000 - RTF_POLICY = 0x4000000 - RTF_REINSTATE = 0x8 - RTF_REJECT = 0x200 - RTF_STATIC = 0x400 - RTF_THROW = 0x2000 - RTF_UP = 0x1 - RTF_WINDOW 
= 0x80 - RTF_XRESOLVE = 0x800 - RTM_BASE = 0x10 - RTM_DELACTION = 0x31 - RTM_DELADDR = 0x15 - RTM_DELADDRLABEL = 0x49 - RTM_DELLINK = 0x11 - RTM_DELMDB = 0x55 - RTM_DELNEIGH = 0x1d - RTM_DELQDISC = 0x25 - RTM_DELROUTE = 0x19 - RTM_DELRULE = 0x21 - RTM_DELTCLASS = 0x29 - RTM_DELTFILTER = 0x2d - RTM_F_CLONED = 0x200 - RTM_F_EQUALIZE = 0x400 - RTM_F_NOTIFY = 0x100 - RTM_F_PREFIX = 0x800 - RTM_GETACTION = 0x32 - RTM_GETADDR = 0x16 - RTM_GETADDRLABEL = 0x4a - RTM_GETANYCAST = 0x3e - RTM_GETDCB = 0x4e - RTM_GETLINK = 0x12 - RTM_GETMDB = 0x56 - RTM_GETMULTICAST = 0x3a - RTM_GETNEIGH = 0x1e - RTM_GETNEIGHTBL = 0x42 - RTM_GETNETCONF = 0x52 - RTM_GETQDISC = 0x26 - RTM_GETROUTE = 0x1a - RTM_GETRULE = 0x22 - RTM_GETTCLASS = 0x2a - RTM_GETTFILTER = 0x2e - RTM_MAX = 0x57 - RTM_NEWACTION = 0x30 - RTM_NEWADDR = 0x14 - RTM_NEWADDRLABEL = 0x48 - RTM_NEWLINK = 0x10 - RTM_NEWMDB = 0x54 - RTM_NEWNDUSEROPT = 0x44 - RTM_NEWNEIGH = 0x1c - RTM_NEWNEIGHTBL = 0x40 - RTM_NEWNETCONF = 0x50 - RTM_NEWPREFIX = 0x34 - RTM_NEWQDISC = 0x24 - RTM_NEWROUTE = 0x18 - RTM_NEWRULE = 0x20 - RTM_NEWTCLASS = 0x28 - RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x12 - RTM_NR_MSGTYPES = 0x48 - RTM_SETDCB = 0x4f - RTM_SETLINK = 0x13 - RTM_SETNEIGHTBL = 0x43 - RTNH_ALIGNTO = 0x4 - RTNH_F_DEAD = 0x1 - RTNH_F_ONLINK = 0x4 - RTNH_F_PERVASIVE = 0x2 - RTN_MAX = 0xb - RTPROT_BIRD = 0xc - RTPROT_BOOT = 0x3 - RTPROT_DHCP = 0x10 - RTPROT_DNROUTED = 0xd - RTPROT_GATED = 0x8 - RTPROT_KERNEL = 0x2 - RTPROT_MROUTED = 0x11 - RTPROT_MRT = 0xa - RTPROT_NTK = 0xf - RTPROT_RA = 0x9 - RTPROT_REDIRECT = 0x1 - RTPROT_STATIC = 0x4 - RTPROT_UNSPEC = 0x0 - RTPROT_XORP = 0xe - RTPROT_ZEBRA = 0xb - RT_CLASS_DEFAULT = 0xfd - RT_CLASS_LOCAL = 0xff - RT_CLASS_MAIN = 0xfe - RT_CLASS_MAX = 0xff - RT_CLASS_UNSPEC = 0x0 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_CREDENTIALS = 0x2 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x1d - SCM_TIMESTAMPING = 0x25 - SCM_TIMESTAMPNS = 0x23 - SCM_WIFI_STATUS = 0x29 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDDLCI = 0x8980 - SIOCADDMULTI = 0x8931 - SIOCADDRT = 0x890b - SIOCATMARK = 0x8905 - SIOCDARP = 0x8953 - SIOCDELDLCI = 0x8981 - SIOCDELMULTI = 0x8932 - SIOCDELRT = 0x890c - SIOCDEVPRIVATE = 0x89f0 - SIOCDIFADDR = 0x8936 - SIOCDRARP = 0x8960 - SIOCGARP = 0x8954 - SIOCGIFADDR = 0x8915 - SIOCGIFBR = 0x8940 - SIOCGIFBRDADDR = 0x8919 - SIOCGIFCONF = 0x8912 - SIOCGIFCOUNT = 0x8938 - SIOCGIFDSTADDR = 0x8917 - SIOCGIFENCAP = 0x8925 - SIOCGIFFLAGS = 0x8913 - SIOCGIFHWADDR = 0x8927 - SIOCGIFINDEX = 0x8933 - SIOCGIFMAP = 0x8970 - SIOCGIFMEM = 0x891f - SIOCGIFMETRIC = 0x891d - SIOCGIFMTU = 0x8921 - SIOCGIFNAME = 0x8910 - SIOCGIFNETMASK = 0x891b - SIOCGIFPFLAGS = 0x8935 - SIOCGIFSLAVE = 0x8929 - SIOCGIFTXQLEN = 0x8942 - SIOCGPGRP = 0x8904 - SIOCGRARP = 0x8961 - SIOCGSTAMP = 0x8906 - SIOCGSTAMPNS = 0x8907 - SIOCPROTOPRIVATE = 0x89e0 - SIOCRTMSG = 0x890d - SIOCSARP = 0x8955 - SIOCSIFADDR = 0x8916 - SIOCSIFBR = 0x8941 - SIOCSIFBRDADDR = 0x891a - SIOCSIFDSTADDR = 0x8918 - SIOCSIFENCAP = 0x8926 - SIOCSIFFLAGS = 0x8914 - SIOCSIFHWADDR = 0x8924 - SIOCSIFHWBROADCAST = 0x8937 - SIOCSIFLINK = 0x8911 - SIOCSIFMAP = 0x8971 - SIOCSIFMEM = 0x8920 - SIOCSIFMETRIC = 0x891e - SIOCSIFMTU = 0x8922 - SIOCSIFNAME = 0x8923 - SIOCSIFNETMASK = 0x891c - SIOCSIFPFLAGS = 0x8934 - SIOCSIFSLAVE = 0x8930 - SIOCSIFTXQLEN = 0x8943 - SIOCSPGRP = 0x8902 - SIOCSRARP = 0x8962 - SOCK_CLOEXEC = 0x80000 - SOCK_DCCP = 0x6 - SOCK_DGRAM = 0x2 - SOCK_NONBLOCK = 0x800 - SOCK_PACKET = 0xa - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - 
SOCK_STREAM = 0x1 - SOL_AAL = 0x109 - SOL_ATM = 0x108 - SOL_DECNET = 0x105 - SOL_ICMPV6 = 0x3a - SOL_IP = 0x0 - SOL_IPV6 = 0x29 - SOL_IRDA = 0x10a - SOL_PACKET = 0x107 - SOL_RAW = 0xff - SOL_SOCKET = 0x1 - SOL_TCP = 0x6 - SOL_X25 = 0x106 - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x1e - SO_ATTACH_FILTER = 0x1a - SO_BINDTODEVICE = 0x19 - SO_BROADCAST = 0x6 - SO_BSDCOMPAT = 0xe - SO_BUSY_POLL = 0x2e - SO_DEBUG = 0x1 - SO_DETACH_FILTER = 0x1b - SO_DOMAIN = 0x27 - SO_DONTROUTE = 0x5 - SO_ERROR = 0x4 - SO_GET_FILTER = 0x1a - SO_KEEPALIVE = 0x9 - SO_LINGER = 0xd - SO_LOCK_FILTER = 0x2c - SO_MARK = 0x24 - SO_MAX_PACING_RATE = 0x2f - SO_NOFCS = 0x2b - SO_NO_CHECK = 0xb - SO_OOBINLINE = 0xa - SO_PASSCRED = 0x14 - SO_PASSSEC = 0x22 - SO_PEEK_OFF = 0x2a - SO_PEERCRED = 0x15 - SO_PEERNAME = 0x1c - SO_PEERSEC = 0x1f - SO_PRIORITY = 0xc - SO_PROTOCOL = 0x26 - SO_RCVBUF = 0x8 - SO_RCVBUFFORCE = 0x21 - SO_RCVLOWAT = 0x10 - SO_RCVTIMEO = 0x12 - SO_REUSEADDR = 0x2 - SO_REUSEPORT = 0xf - SO_RXQ_OVFL = 0x28 - SO_SECURITY_AUTHENTICATION = 0x16 - SO_SECURITY_ENCRYPTION_NETWORK = 0x18 - SO_SECURITY_ENCRYPTION_TRANSPORT = 0x17 - SO_SELECT_ERR_QUEUE = 0x2d - SO_SNDBUF = 0x7 - SO_SNDBUFFORCE = 0x20 - SO_SNDLOWAT = 0x11 - SO_SNDTIMEO = 0x13 - SO_TIMESTAMP = 0x1d - SO_TIMESTAMPING = 0x25 - SO_TIMESTAMPNS = 0x23 - SO_TYPE = 0x3 - SO_WIFI_STATUS = 0x29 - S_BLKSIZE = 0x200 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TAB0 = 0x0 - TAB1 = 0x400 - TAB2 = 0x800 - TAB3 = 0xc00 - TABDLY = 0xc00 - TCFLSH = 0x2000741f - TCGETA = 0x40147417 - TCGETS = 0x402c7413 - TCIFLUSH = 0x0 - TCIOFF = 0x2 - TCIOFLUSH = 0x2 - TCION = 0x3 - TCOFLUSH = 0x1 - TCOOFF = 0x0 - TCOON = 0x1 - TCP_CONGESTION = 0xd - TCP_COOKIE_IN_ALWAYS = 0x1 - TCP_COOKIE_MAX = 0x10 - TCP_COOKIE_MIN = 0x8 - TCP_COOKIE_OUT_NEVER = 0x2 - TCP_COOKIE_PAIR_SIZE = 0x20 - TCP_COOKIE_TRANSACTIONS = 0xf - TCP_CORK = 0x3 - TCP_DEFER_ACCEPT = 0x9 - TCP_FASTOPEN = 0x17 - TCP_INFO = 0xb - TCP_KEEPCNT = 0x6 - TCP_KEEPIDLE = 0x4 - TCP_KEEPINTVL = 0x5 - TCP_LINGER2 = 0x8 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0xe - TCP_MD5SIG_MAXKEYLEN = 0x50 - TCP_MSS = 0x200 - TCP_MSS_DEFAULT = 0x218 - TCP_MSS_DESIRED = 0x4c4 - TCP_NODELAY = 0x1 - TCP_QUEUE_SEQ = 0x15 - TCP_QUICKACK = 0xc - TCP_REPAIR = 0x13 - TCP_REPAIR_OPTIONS = 0x16 - TCP_REPAIR_QUEUE = 0x14 - TCP_SYNCNT = 0x7 - TCP_S_DATA_IN = 0x4 - TCP_S_DATA_OUT = 0x8 - TCP_THIN_DUPACK = 0x11 - TCP_THIN_LINEAR_TIMEOUTS = 0x10 - TCP_TIMESTAMP = 0x18 - TCP_USER_TIMEOUT = 0x12 - TCP_WINDOW_CLAMP = 0xa - TCSAFLUSH = 0x2 - TCSBRK = 0x2000741d - TCSBRKP = 0x5425 - TCSETA = 0x80147418 - TCSETAF = 0x8014741c - TCSETAW = 0x80147419 - TCSETS = 0x802c7414 - TCSETSF = 0x802c7416 - TCSETSW = 0x802c7415 - TCXONC = 0x2000741e - TIOCCBRK = 0x5428 - TIOCCONS = 0x541d - TIOCEXCL = 0x540c - TIOCGDEV = 0x40045432 - TIOCGETC = 0x40067412 - TIOCGETD = 0x5424 - TIOCGETP = 0x40067408 - TIOCGEXCL = 0x40045440 - TIOCGICOUNT = 0x545d - TIOCGLCKTRMIOS = 0x5456 - TIOCGLTC = 0x40067474 - TIOCGPGRP = 0x40047477 - TIOCGPKT = 0x40045438 - TIOCGPTLCK = 0x40045439 - TIOCGPTN = 0x40045430 - TIOCGRS485 = 0x542e - TIOCGSERIAL = 0x541e 
- TIOCGSID = 0x5429 - TIOCGSOFTCAR = 0x5419 - TIOCGWINSZ = 0x40087468 - TIOCINQ = 0x4004667f - TIOCLINUX = 0x541c - TIOCMBIC = 0x5417 - TIOCMBIS = 0x5416 - TIOCMGET = 0x5415 - TIOCMIWAIT = 0x545c - TIOCMSET = 0x5418 - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_LOOP = 0x8000 - TIOCM_OUT1 = 0x2000 - TIOCM_OUT2 = 0x4000 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x5422 - TIOCNXCL = 0x540d - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x5420 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCSBRK = 0x5427 - TIOCSCTTY = 0x540e - TIOCSERCONFIG = 0x5453 - TIOCSERGETLSR = 0x5459 - TIOCSERGETMULTI = 0x545a - TIOCSERGSTRUCT = 0x5458 - TIOCSERGWILD = 0x5454 - TIOCSERSETMULTI = 0x545b - TIOCSERSWILD = 0x5455 - TIOCSER_TEMT = 0x1 - TIOCSETC = 0x80067411 - TIOCSETD = 0x5423 - TIOCSETN = 0x8006740a - TIOCSETP = 0x80067409 - TIOCSIG = 0x80045436 - TIOCSLCKTRMIOS = 0x5457 - TIOCSLTC = 0x80067475 - TIOCSPGRP = 0x80047476 - TIOCSPTLCK = 0x80045431 - TIOCSRS485 = 0x542f - TIOCSSERIAL = 0x541f - TIOCSSOFTCAR = 0x541a - TIOCSTART = 0x2000746e - TIOCSTI = 0x5412 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCVHANGUP = 0x5437 - TOSTOP = 0x400000 - TUNATTACHFILTER = 0x801054d5 - TUNDETACHFILTER = 0x801054d6 - TUNGETFEATURES = 0x400454cf - TUNGETFILTER = 0x401054db - TUNGETIFF = 0x400454d2 - TUNGETSNDBUF = 0x400454d3 - TUNGETVNETHDRSZ = 0x400454d7 - TUNSETDEBUG = 0x800454c9 - TUNSETGROUP = 0x800454ce - TUNSETIFF = 0x800454ca - TUNSETIFINDEX = 0x800454da - TUNSETLINK = 0x800454cd - TUNSETNOCSUM = 0x800454c8 - TUNSETOFFLOAD = 0x800454d0 - TUNSETOWNER = 0x800454cc - TUNSETPERSIST = 0x800454cb - TUNSETQUEUE = 0x800454d9 - TUNSETSNDBUF = 0x800454d4 - TUNSETTXFILTER = 0x800454d1 - TUNSETVNETHDRSZ = 0x800454d8 - VDISCARD = 0x10 - VEOF = 0x4 - VEOL = 0x6 - VEOL2 = 0x8 - VERASE = 0x2 - VINTR = 0x0 - VKILL = 0x3 - VLNEXT = 0xf - VMIN = 0x5 - VQUIT = 0x1 - VREPRINT = 0xb - VSTART = 0xd - VSTOP = 0xe - VSUSP = 0xc - VSWTC = 0x9 - VT0 = 0x0 - VT1 = 0x10000 - VTDLY = 0x10000 - VTIME = 0x7 - VWERASE = 0xa - WALL = 0x40000000 - WCLONE = 0x80000000 - WCONTINUED = 0x8 - WEXITED = 0x4 - WNOHANG = 0x1 - WNOTHREAD = 0x20000000 - WNOWAIT = 0x1000000 - WORDSIZE = 0x40 - WSTOPPED = 0x2 - WUNTRACED = 0x2 - XCASE = 0x4000 - XTABS = 0xc00 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x62) - EADDRNOTAVAIL = syscall.Errno(0x63) - EADV = syscall.Errno(0x44) - EAFNOSUPPORT = syscall.Errno(0x61) - EAGAIN = syscall.Errno(0xb) - EALREADY = syscall.Errno(0x72) - EBADE = syscall.Errno(0x34) - EBADF = syscall.Errno(0x9) - EBADFD = syscall.Errno(0x4d) - EBADMSG = syscall.Errno(0x4a) - EBADR = syscall.Errno(0x35) - EBADRQC = syscall.Errno(0x38) - EBADSLT = syscall.Errno(0x39) - EBFONT = syscall.Errno(0x3b) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x7d) - ECHILD = syscall.Errno(0xa) - ECHRNG = syscall.Errno(0x2c) - ECOMM = syscall.Errno(0x46) - ECONNABORTED = syscall.Errno(0x67) - ECONNREFUSED = syscall.Errno(0x6f) - ECONNRESET = syscall.Errno(0x68) - EDEADLK = syscall.Errno(0x23) - EDEADLOCK = syscall.Errno(0x3a) - EDESTADDRREQ = syscall.Errno(0x59) - EDOM = syscall.Errno(0x21) - EDOTDOT = syscall.Errno(0x49) - EDQUOT = syscall.Errno(0x7a) - EEXIST = syscall.Errno(0x11) - EFAULT = 
syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EHOSTDOWN = syscall.Errno(0x70) - EHOSTUNREACH = syscall.Errno(0x71) - EHWPOISON = syscall.Errno(0x85) - EIDRM = syscall.Errno(0x2b) - EILSEQ = syscall.Errno(0x54) - EINPROGRESS = syscall.Errno(0x73) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x6a) - EISDIR = syscall.Errno(0x15) - EISNAM = syscall.Errno(0x78) - EKEYEXPIRED = syscall.Errno(0x7f) - EKEYREJECTED = syscall.Errno(0x81) - EKEYREVOKED = syscall.Errno(0x80) - EL2HLT = syscall.Errno(0x33) - EL2NSYNC = syscall.Errno(0x2d) - EL3HLT = syscall.Errno(0x2e) - EL3RST = syscall.Errno(0x2f) - ELIBACC = syscall.Errno(0x4f) - ELIBBAD = syscall.Errno(0x50) - ELIBEXEC = syscall.Errno(0x53) - ELIBMAX = syscall.Errno(0x52) - ELIBSCN = syscall.Errno(0x51) - ELNRNG = syscall.Errno(0x30) - ELOOP = syscall.Errno(0x28) - EMEDIUMTYPE = syscall.Errno(0x7c) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x5a) - EMULTIHOP = syscall.Errno(0x48) - ENAMETOOLONG = syscall.Errno(0x24) - ENAVAIL = syscall.Errno(0x77) - ENETDOWN = syscall.Errno(0x64) - ENETRESET = syscall.Errno(0x66) - ENETUNREACH = syscall.Errno(0x65) - ENFILE = syscall.Errno(0x17) - ENOANO = syscall.Errno(0x37) - ENOBUFS = syscall.Errno(0x69) - ENOCSI = syscall.Errno(0x32) - ENODATA = syscall.Errno(0x3d) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOKEY = syscall.Errno(0x7e) - ENOLCK = syscall.Errno(0x25) - ENOLINK = syscall.Errno(0x43) - ENOMEDIUM = syscall.Errno(0x7b) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x2a) - ENONET = syscall.Errno(0x40) - ENOPKG = syscall.Errno(0x41) - ENOPROTOOPT = syscall.Errno(0x5c) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x3f) - ENOSTR = syscall.Errno(0x3c) - ENOSYS = syscall.Errno(0x26) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x6b) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x27) - ENOTNAM = syscall.Errno(0x76) - ENOTRECOVERABLE = syscall.Errno(0x83) - ENOTSOCK = syscall.Errno(0x58) - ENOTSUP = syscall.Errno(0x5f) - ENOTTY = syscall.Errno(0x19) - ENOTUNIQ = syscall.Errno(0x4c) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x5f) - EOVERFLOW = syscall.Errno(0x4b) - EOWNERDEAD = syscall.Errno(0x82) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x60) - EPIPE = syscall.Errno(0x20) - EPROTO = syscall.Errno(0x47) - EPROTONOSUPPORT = syscall.Errno(0x5d) - EPROTOTYPE = syscall.Errno(0x5b) - ERANGE = syscall.Errno(0x22) - EREMCHG = syscall.Errno(0x4e) - EREMOTE = syscall.Errno(0x42) - EREMOTEIO = syscall.Errno(0x79) - ERESTART = syscall.Errno(0x55) - ERFKILL = syscall.Errno(0x84) - EROFS = syscall.Errno(0x1e) - ESHUTDOWN = syscall.Errno(0x6c) - ESOCKTNOSUPPORT = syscall.Errno(0x5e) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESRMNT = syscall.Errno(0x45) - ESTALE = syscall.Errno(0x74) - ESTRPIPE = syscall.Errno(0x56) - ETIME = syscall.Errno(0x3e) - ETIMEDOUT = syscall.Errno(0x6e) - ETOOMANYREFS = syscall.Errno(0x6d) - ETXTBSY = syscall.Errno(0x1a) - EUCLEAN = syscall.Errno(0x75) - EUNATCH = syscall.Errno(0x31) - EUSERS = syscall.Errno(0x57) - EWOULDBLOCK = syscall.Errno(0xb) - EXDEV = syscall.Errno(0x12) - EXFULL = syscall.Errno(0x36) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0x7) - SIGCHLD = syscall.Signal(0x11) - SIGCLD = syscall.Signal(0x11) - SIGCONT = syscall.Signal(0x12) - 
SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x1d) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPOLL = syscall.Signal(0x1d) - SIGPROF = syscall.Signal(0x1b) - SIGPWR = syscall.Signal(0x1e) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTKFLT = syscall.Signal(0x10) - SIGSTOP = syscall.Signal(0x13) - SIGSYS = syscall.Signal(0x1f) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x14) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) - SIGURG = syscall.Signal(0x17) - SIGUSR1 = syscall.Signal(0xa) - SIGUSR2 = syscall.Signal(0xc) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "no such device or address", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource temporarily unavailable", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device or resource busy", - 17: "file exists", - 18: "invalid cross-device link", - 19: "no such device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "numerical result out of range", - 35: "resource deadlock avoided", - 36: "file name too long", - 37: "no locks available", - 38: "function not implemented", - 39: "directory not empty", - 40: "too many levels of symbolic links", - 42: "no message of desired type", - 43: "identifier removed", - 44: "channel number out of range", - 45: "level 2 not synchronized", - 46: "level 3 halted", - 47: "level 3 reset", - 48: "link number out of range", - 49: "protocol driver not attached", - 50: "no CSI structure available", - 51: "level 2 halted", - 52: "invalid exchange", - 53: "invalid request descriptor", - 54: "exchange full", - 55: "no anode", - 56: "invalid request code", - 57: "invalid slot", - 58: "file locking deadlock error", - 59: "bad font file format", - 60: "device not a stream", - 61: "no data available", - 62: "timer expired", - 63: "out of streams resources", - 64: "machine is not on the network", - 65: "package not installed", - 66: "object is remote", - 67: "link has been severed", - 68: "advertise error", - 69: "srmount error", - 70: "communication error on send", - 71: "protocol error", - 72: "multihop attempted", - 73: "RFS specific error", - 74: "bad message", - 75: "value too large for defined data type", - 76: "name not unique on network", - 77: "file descriptor in bad state", - 78: "remote address changed", - 79: "can not access a needed shared library", - 80: "accessing a corrupted shared library", - 81: ".lib section in a.out corrupted", - 82: "attempting to link in too many shared libraries", - 83: "cannot exec a shared library directly", - 84: 
"invalid or incomplete multibyte or wide character", - 85: "interrupted system call should be restarted", - 86: "streams pipe error", - 87: "too many users", - 88: "socket operation on non-socket", - 89: "destination address required", - 90: "message too long", - 91: "protocol wrong type for socket", - 92: "protocol not available", - 93: "protocol not supported", - 94: "socket type not supported", - 95: "operation not supported", - 96: "protocol family not supported", - 97: "address family not supported by protocol", - 98: "address already in use", - 99: "cannot assign requested address", - 100: "network is down", - 101: "network is unreachable", - 102: "network dropped connection on reset", - 103: "software caused connection abort", - 104: "connection reset by peer", - 105: "no buffer space available", - 106: "transport endpoint is already connected", - 107: "transport endpoint is not connected", - 108: "cannot send after transport endpoint shutdown", - 109: "too many references: cannot splice", - 110: "connection timed out", - 111: "connection refused", - 112: "host is down", - 113: "no route to host", - 114: "operation already in progress", - 115: "operation now in progress", - 116: "stale file handle", - 117: "structure needs cleaning", - 118: "not a XENIX named type file", - 119: "no XENIX semaphores available", - 120: "is a named type file", - 121: "remote I/O error", - 122: "disk quota exceeded", - 123: "no medium found", - 124: "wrong medium type", - 125: "operation canceled", - 126: "required key not available", - 127: "key has expired", - 128: "key has been revoked", - 129: "key was rejected by service", - 130: "owner died", - 131: "state not recoverable", - 132: "operation not possible due to RF-kill", - 133: "memory page has hardware error", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/breakpoint trap", - 6: "aborted", - 7: "bus error", - 8: "floating point exception", - 9: "killed", - 10: "user defined signal 1", - 11: "segmentation fault", - 12: "user defined signal 2", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "stack fault", - 17: "child exited", - 18: "continued", - 19: "stopped (signal)", - 20: "stopped", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "urgent I/O condition", - 24: "CPU time limit exceeded", - 25: "file size limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window changed", - 29: "I/O possible", - 30: "power failure", - 31: "bad system call", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_386.go deleted file mode 100644 index b4338d5f263..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_386.go +++ /dev/null @@ -1,1712 +0,0 @@ -// mkerrors.sh -m32 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,netbsd - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m32 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_ARP = 0x1c - AF_BLUETOOTH = 0x1f - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x20 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x18 - AF_IPX = 
0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x23 - AF_MPLS = 0x21 - AF_NATM = 0x1b - AF_NS = 0x6 - AF_OROUTE = 0x11 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x22 - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - ARPHRD_ARCNET = 0x7 - ARPHRD_ETHER = 0x1 - ARPHRD_FRELAY = 0xf - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - ARPHRD_STRIP = 0x17 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B460800 = 0x70800 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B921600 = 0xe1000 - B9600 = 0x2580 - BIOCFEEDBACK = 0x8004427d - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc0084277 - BIOCGETIF = 0x4090426b - BIOCGFEEDBACK = 0x4004427c - BIOCGHDRCMPLT = 0x40044274 - BIOCGRTIMEOUT = 0x400c427b - BIOCGSEESENT = 0x40044278 - BIOCGSTATS = 0x4080426f - BIOCGSTATSOLD = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044276 - BIOCSETF = 0x80084267 - BIOCSETIF = 0x8090426c - BIOCSFEEDBACK = 0x8004427d - BIOCSHDRCMPLT = 0x80044275 - BIOCSRTIMEOUT = 0x800c427a - BIOCSSEESENT = 0x80044279 - BIOCSTCPF = 0x80084272 - BIOCSUDPF = 0x80084273 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALIGNMENT32 = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DFLTBUFSIZE = 0x100000 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x1000000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CLONE_CSIGNAL = 0xff - CLONE_FILES = 0x400 - CLONE_FS = 0x200 - CLONE_PID = 0x1000 - CLONE_PTRACE = 0x2000 - CLONE_SIGHAND = 0x800 - CLONE_VFORK = 0x4000 - CLONE_VM = 0x100 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - CTL_QUERY = -0x2 - DIOCBSFLUSH = 0x20006478 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - 
DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HDLC = 0x10 - DLT_HHDLC = 0x79 - DLT_HIPPI = 0xf - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0xe - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RAWAF_MASK = 0x2240000 - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xd - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EMUL_LINUX = 0x1 - EMUL_LINUX32 = 0x5 - EMUL_MAXID = 0x6 - EN_SW_CTL_INF = 0x1000 - EN_SW_CTL_PREC = 0x300 - EN_SW_CTL_ROUND = 0xc00 - EN_SW_DATACHAIN = 0x80 - EN_SW_DENORM = 0x2 - EN_SW_INVOP = 0x1 - EN_SW_OVERFLOW = 0x8 - EN_SW_PRECLOSS = 0x20 - EN_SW_UNDERFLOW = 0x10 - EN_SW_ZERODIV = 0x4 - ETHERCAP_JUMBO_MTU = 0x4 - ETHERCAP_VLAN_HWTAGGING = 0x2 - ETHERCAP_VLAN_MTU = 0x1 - ETHERMIN = 0x2e - ETHERMTU = 0x5dc - ETHERMTU_JUMBO = 0x2328 - ETHERTYPE_8023 = 0x4 - ETHERTYPE_AARP = 0x80f3 - ETHERTYPE_ACCTON = 0x8390 - ETHERTYPE_AEONIC = 0x8036 - ETHERTYPE_ALPHA = 0x814a - ETHERTYPE_AMBER = 0x6008 - ETHERTYPE_AMOEBA = 0x8145 - ETHERTYPE_APOLLO = 0x80f7 - ETHERTYPE_APOLLODOMAIN = 0x8019 - ETHERTYPE_APPLETALK = 0x809b - ETHERTYPE_APPLITEK = 0x80c7 - ETHERTYPE_ARGONAUT = 0x803a - ETHERTYPE_ARP = 0x806 - ETHERTYPE_AT = 0x809b - ETHERTYPE_ATALK = 0x809b - ETHERTYPE_ATOMIC = 0x86df - ETHERTYPE_ATT = 0x8069 - ETHERTYPE_ATTSTANFORD = 0x8008 - ETHERTYPE_AUTOPHON = 0x806a - ETHERTYPE_AXIS = 0x8856 - ETHERTYPE_BCLOOP = 0x9003 - ETHERTYPE_BOFL = 0x8102 - ETHERTYPE_CABLETRON = 0x7034 - ETHERTYPE_CHAOS = 0x804 - ETHERTYPE_COMDESIGN = 0x806c - 
ETHERTYPE_COMPUGRAPHIC = 0x806d - ETHERTYPE_COUNTERPOINT = 0x8062 - ETHERTYPE_CRONUS = 0x8004 - ETHERTYPE_CRONUSVLN = 0x8003 - ETHERTYPE_DCA = 0x1234 - ETHERTYPE_DDE = 0x807b - ETHERTYPE_DEBNI = 0xaaaa - ETHERTYPE_DECAM = 0x8048 - ETHERTYPE_DECCUST = 0x6006 - ETHERTYPE_DECDIAG = 0x6005 - ETHERTYPE_DECDNS = 0x803c - ETHERTYPE_DECDTS = 0x803e - ETHERTYPE_DECEXPER = 0x6000 - ETHERTYPE_DECLAST = 0x8041 - ETHERTYPE_DECLTM = 0x803f - ETHERTYPE_DECMUMPS = 0x6009 - ETHERTYPE_DECNETBIOS = 0x8040 - ETHERTYPE_DELTACON = 0x86de - ETHERTYPE_DIDDLE = 0x4321 - ETHERTYPE_DLOG1 = 0x660 - ETHERTYPE_DLOG2 = 0x661 - ETHERTYPE_DN = 0x6003 - ETHERTYPE_DOGFIGHT = 0x1989 - ETHERTYPE_DSMD = 0x8039 - ETHERTYPE_ECMA = 0x803 - ETHERTYPE_ENCRYPT = 0x803d - ETHERTYPE_ES = 0x805d - ETHERTYPE_EXCELAN = 0x8010 - ETHERTYPE_EXPERDATA = 0x8049 - ETHERTYPE_FLIP = 0x8146 - ETHERTYPE_FLOWCONTROL = 0x8808 - ETHERTYPE_FRARP = 0x808 - ETHERTYPE_GENDYN = 0x8068 - ETHERTYPE_HAYES = 0x8130 - ETHERTYPE_HIPPI_FP = 0x8180 - ETHERTYPE_HITACHI = 0x8820 - ETHERTYPE_HP = 0x8005 - ETHERTYPE_IEEEPUP = 0xa00 - ETHERTYPE_IEEEPUPAT = 0xa01 - ETHERTYPE_IMLBL = 0x4c42 - ETHERTYPE_IMLBLDIAG = 0x424c - ETHERTYPE_IP = 0x800 - ETHERTYPE_IPAS = 0x876c - ETHERTYPE_IPV6 = 0x86dd - ETHERTYPE_IPX = 0x8137 - ETHERTYPE_IPXNEW = 0x8037 - ETHERTYPE_KALPANA = 0x8582 - ETHERTYPE_LANBRIDGE = 0x8038 - ETHERTYPE_LANPROBE = 0x8888 - ETHERTYPE_LAT = 0x6004 - ETHERTYPE_LBACK = 0x9000 - ETHERTYPE_LITTLE = 0x8060 - ETHERTYPE_LOGICRAFT = 0x8148 - ETHERTYPE_LOOPBACK = 0x9000 - ETHERTYPE_MATRA = 0x807a - ETHERTYPE_MAX = 0xffff - ETHERTYPE_MERIT = 0x807c - ETHERTYPE_MICP = 0x873a - ETHERTYPE_MOPDL = 0x6001 - ETHERTYPE_MOPRC = 0x6002 - ETHERTYPE_MOTOROLA = 0x818d - ETHERTYPE_MPLS = 0x8847 - ETHERTYPE_MPLS_MCAST = 0x8848 - ETHERTYPE_MUMPS = 0x813f - ETHERTYPE_NBPCC = 0x3c04 - ETHERTYPE_NBPCLAIM = 0x3c09 - ETHERTYPE_NBPCLREQ = 0x3c05 - ETHERTYPE_NBPCLRSP = 0x3c06 - ETHERTYPE_NBPCREQ = 0x3c02 - ETHERTYPE_NBPCRSP = 0x3c03 - ETHERTYPE_NBPDG = 0x3c07 - ETHERTYPE_NBPDGB = 0x3c08 - ETHERTYPE_NBPDLTE = 0x3c0a - ETHERTYPE_NBPRAR = 0x3c0c - ETHERTYPE_NBPRAS = 0x3c0b - ETHERTYPE_NBPRST = 0x3c0d - ETHERTYPE_NBPSCD = 0x3c01 - ETHERTYPE_NBPVCD = 0x3c00 - ETHERTYPE_NBS = 0x802 - ETHERTYPE_NCD = 0x8149 - ETHERTYPE_NESTAR = 0x8006 - ETHERTYPE_NETBEUI = 0x8191 - ETHERTYPE_NOVELL = 0x8138 - ETHERTYPE_NS = 0x600 - ETHERTYPE_NSAT = 0x601 - ETHERTYPE_NSCOMPAT = 0x807 - ETHERTYPE_NTRAILER = 0x10 - ETHERTYPE_OS9 = 0x7007 - ETHERTYPE_OS9NET = 0x7009 - ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e - ETHERTYPE_PCS = 0x4242 - ETHERTYPE_PLANNING = 0x8044 - ETHERTYPE_PPP = 0x880b - ETHERTYPE_PPPOE = 0x8864 - ETHERTYPE_PPPOEDISC = 0x8863 - ETHERTYPE_PRIMENTS = 0x7031 - ETHERTYPE_PUP = 0x200 - ETHERTYPE_PUPAT = 0x200 - ETHERTYPE_RACAL = 0x7030 - ETHERTYPE_RATIONAL = 0x8150 - ETHERTYPE_RAWFR = 0x6559 - ETHERTYPE_RCL = 0x1995 - ETHERTYPE_RDP = 0x8739 - ETHERTYPE_RETIX = 0x80f2 - ETHERTYPE_REVARP = 0x8035 - ETHERTYPE_SCA = 0x6007 - ETHERTYPE_SECTRA = 0x86db - ETHERTYPE_SECUREDATA = 0x876d - ETHERTYPE_SGITW = 0x817e - ETHERTYPE_SG_BOUNCE = 0x8016 - ETHERTYPE_SG_DIAG = 0x8013 - ETHERTYPE_SG_NETGAMES = 0x8014 - ETHERTYPE_SG_RESV = 0x8015 - ETHERTYPE_SIMNET = 0x5208 - ETHERTYPE_SLOWPROTOCOLS = 0x8809 - ETHERTYPE_SNA = 0x80d5 - ETHERTYPE_SNMP = 0x814c - ETHERTYPE_SONIX = 0xfaf5 - ETHERTYPE_SPIDER = 0x809f - ETHERTYPE_SPRITE = 0x500 - ETHERTYPE_STP = 0x8181 - ETHERTYPE_TALARIS = 0x812b - ETHERTYPE_TALARISMC = 0x852b - ETHERTYPE_TCPCOMP = 0x876b - ETHERTYPE_TCPSM = 0x9002 - ETHERTYPE_TEC = 0x814f - 
ETHERTYPE_TIGAN = 0x802f - ETHERTYPE_TRAIL = 0x1000 - ETHERTYPE_TRANSETHER = 0x6558 - ETHERTYPE_TYMSHARE = 0x802e - ETHERTYPE_UBBST = 0x7005 - ETHERTYPE_UBDEBUG = 0x900 - ETHERTYPE_UBDIAGLOOP = 0x7002 - ETHERTYPE_UBDL = 0x7000 - ETHERTYPE_UBNIU = 0x7001 - ETHERTYPE_UBNMC = 0x7003 - ETHERTYPE_VALID = 0x1600 - ETHERTYPE_VARIAN = 0x80dd - ETHERTYPE_VAXELN = 0x803b - ETHERTYPE_VEECO = 0x8067 - ETHERTYPE_VEXP = 0x805b - ETHERTYPE_VGLAB = 0x8131 - ETHERTYPE_VINES = 0xbad - ETHERTYPE_VINESECHO = 0xbaf - ETHERTYPE_VINESLOOP = 0xbae - ETHERTYPE_VITAL = 0xff00 - ETHERTYPE_VLAN = 0x8100 - ETHERTYPE_VLTLMAN = 0x8080 - ETHERTYPE_VPROD = 0x805c - ETHERTYPE_VURESERVED = 0x8147 - ETHERTYPE_WATERLOO = 0x8130 - ETHERTYPE_WELLFLEET = 0x8103 - ETHERTYPE_X25 = 0x805 - ETHERTYPE_X75 = 0x801 - ETHERTYPE_XNSSM = 0x9001 - ETHERTYPE_XTP = 0x817d - ETHER_ADDR_LEN = 0x6 - ETHER_CRC_LEN = 0x4 - ETHER_CRC_POLY_BE = 0x4c11db6 - ETHER_CRC_POLY_LE = 0xedb88320 - ETHER_HDR_LEN = 0xe - ETHER_MAX_LEN = 0x5ee - ETHER_MAX_LEN_JUMBO = 0x233a - ETHER_MIN_LEN = 0x40 - ETHER_PPPOE_ENCAP_LEN = 0x8 - ETHER_TYPE_LEN = 0x2 - ETHER_VLAN_ENCAP_LEN = 0x4 - EVFILT_AIO = 0x2 - EVFILT_PROC = 0x4 - EVFILT_READ = 0x0 - EVFILT_SIGNAL = 0x5 - EVFILT_SYSCOUNT = 0x7 - EVFILT_TIMER = 0x6 - EVFILT_VNODE = 0x3 - EVFILT_WRITE = 0x1 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x100 - FLUSHO = 0x800000 - F_CLOSEM = 0xa - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0xc - F_FSCTL = -0x80000000 - F_FSDIRMASK = 0x70000000 - F_FSIN = 0x10000000 - F_FSINOUT = 0x30000000 - F_FSOUT = 0x20000000 - F_FSPRIV = 0x8000 - F_FSVOID = 0x40000000 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETNOSIGPIPE = 0xd - F_GETOWN = 0x5 - F_MAXFD = 0xb - F_OK = 0x0 - F_PARAM_MASK = 0xfff - F_PARAM_MAX = 0xfff - F_RDLCK = 0x1 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETNOSIGPIPE = 0xe - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFA_ROUTE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x8f52 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf8 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - 
IFT_DOCSCABLEUPSTREAMCHANNEL = 0xcd - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ECONET = 0xce - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf2 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INFINIBAND = 0xc7 - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LINEGROUP = 0xd2 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - IFT_PLC = 0xae - IFT_PON155 = 0xcf - IFT_PON622 = 0xd0 - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPATM = 0xc5 - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf1 - IFT_Q2931 = 0xc9 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SIPSIG = 0xcc - IFT_SIPTG = 0xcb - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_STF = 0xd7 - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TELINK = 0xc8 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VIRTUALTG = 
0xca - IFT_VOICEDID = 0xd5 - IFT_VOICEEM = 0x64 - IFT_VOICEEMFGD = 0xd3 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFGDEANA = 0xd4 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERCABLE = 0xc6 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IPPROTO_AH = 0x33 - IPPROTO_CARP = 0x70 - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPIP = 0x4 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IPV6_ICMP = 0x3a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MOBILE = 0x37 - IPPROTO_NONE = 0x3b - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPPROTO_VRRP = 0x70 - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXPACKET = 0xffff - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_PATHMTU = 0x2c - IPV6_PKTINFO = 0x2e - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_EF = 0x8000 - IP_ERRORMTU = 0x15 - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x16 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINFRAGSIZE = 0x45 - IP_MINTTL = 0x18 - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x1 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 
- IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x17 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_TOS = 0x3 - IP_TTL = 0x4 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x6 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_SPACEAVAIL = 0x5 - MADV_WILLNEED = 0x3 - MAP_ALIGNMENT_16MB = 0x18000000 - MAP_ALIGNMENT_1TB = 0x28000000 - MAP_ALIGNMENT_256TB = 0x30000000 - MAP_ALIGNMENT_4GB = 0x20000000 - MAP_ALIGNMENT_64KB = 0x10000000 - MAP_ALIGNMENT_64PB = 0x38000000 - MAP_ALIGNMENT_MASK = -0x1000000 - MAP_ALIGNMENT_SHIFT = 0x18 - MAP_ANON = 0x1000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_INHERIT = 0x80 - MAP_INHERIT_COPY = 0x1 - MAP_INHERIT_DEFAULT = 0x1 - MAP_INHERIT_DONATE_COPY = 0x3 - MAP_INHERIT_NONE = 0x2 - MAP_INHERIT_SHARE = 0x0 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_SHARED = 0x1 - MAP_STACK = 0x2000 - MAP_TRYFIXED = 0x400 - MAP_WIRED = 0x800 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_BCAST = 0x100 - MSG_CMSG_CLOEXEC = 0x800 - MSG_CONTROLMBUF = 0x2000000 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOR = 0x8 - MSG_IOVUSRSPACE = 0x4000000 - MSG_LENUSRSPACE = 0x8000000 - MSG_MCAST = 0x200 - MSG_NAMEMBUF = 0x1000000 - MSG_NBIO = 0x1000 - MSG_NOSIGNAL = 0x400 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x10 - MSG_USERFLAGS = 0xffffff - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x2 - MS_SYNC = 0x4 - NAME_MAX = 0x1ff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x5 - NET_RT_MAXID = 0x6 - NET_RT_OIFLIST = 0x4 - NET_RT_OOIFLIST = 0x3 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - OFIOGETBMAP = 0xc004667a - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_ALT_IO = 0x40000 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x400000 - O_CREAT = 0x200 - O_DIRECT = 0x80000 - O_DIRECTORY = 0x200000 - O_DSYNC = 0x10000 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_NOSIGPIPE = 0x1000000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x20000 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PRI_IOFLUSH = 0x7c - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_AS = 0xa - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x9 - RTAX_NETMASK = 0x2 - RTAX_TAG = 0x8 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTA_TAG = 0x100 - RTF_ANNOUNCE = 0x20000 - RTF_BLACKHOLE = 0x1000 - RTF_CLONED = 0x2000 - RTF_CLONING = 0x100 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - 
RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_LLINFO = 0x400 - RTF_MASK = 0x80 - RTF_MODIFIED = 0x20 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_REJECT = 0x8 - RTF_SRC = 0x10000 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_CHGADDR = 0x15 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_GET = 0x4 - RTM_IEEE80211 = 0x11 - RTM_IFANNOUNCE = 0x10 - RTM_IFINFO = 0x14 - RTM_LLINFO_UPD = 0x13 - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_OIFINFO = 0xf - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_OOIFINFO = 0xe - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_SETGATE = 0x12 - RTM_VERSION = 0x4 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x4 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x8 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80906931 - SIOCADDRT = 0x8030720a - SIOCAIFADDR = 0x8040691a - SIOCALIFADDR = 0x8118691c - SIOCATMARK = 0x40047307 - SIOCDELMULTI = 0x80906932 - SIOCDELRT = 0x8030720b - SIOCDIFADDR = 0x80906919 - SIOCDIFPHYADDR = 0x80906949 - SIOCDLIFADDR = 0x8118691e - SIOCGDRVSPEC = 0xc01c697b - SIOCGETPFSYNC = 0xc09069f8 - SIOCGETSGCNT = 0xc0147534 - SIOCGETVIFCNT = 0xc0147533 - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0906921 - SIOCGIFADDRPREF = 0xc0946920 - SIOCGIFALIAS = 0xc040691b - SIOCGIFBRDADDR = 0xc0906923 - SIOCGIFCAP = 0xc0206976 - SIOCGIFCONF = 0xc0086926 - SIOCGIFDATA = 0xc0946985 - SIOCGIFDLT = 0xc0906977 - SIOCGIFDSTADDR = 0xc0906922 - SIOCGIFFLAGS = 0xc0906911 - SIOCGIFGENERIC = 0xc090693a - SIOCGIFMEDIA = 0xc0286936 - SIOCGIFMETRIC = 0xc0906917 - SIOCGIFMTU = 0xc090697e - SIOCGIFNETMASK = 0xc0906925 - SIOCGIFPDSTADDR = 0xc0906948 - SIOCGIFPSRCADDR = 0xc0906947 - SIOCGLIFADDR = 0xc118691d - SIOCGLIFPHYADDR = 0xc118694b - SIOCGLINKSTR = 0xc01c6987 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGVH = 0xc0906983 - SIOCIFCREATE = 0x8090697a - SIOCIFDESTROY = 0x80906979 - SIOCIFGCLONERS = 0xc00c6978 - SIOCINITIFADDR = 0xc0446984 - SIOCSDRVSPEC = 0x801c697b - SIOCSETPFSYNC = 0x809069f7 - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8090690c - SIOCSIFADDRPREF = 0x8094691f - SIOCSIFBRDADDR = 0x80906913 - SIOCSIFCAP = 0x80206975 - SIOCSIFDSTADDR = 0x8090690e - SIOCSIFFLAGS = 0x80906910 - SIOCSIFGENERIC = 0x80906939 - SIOCSIFMEDIA = 0xc0906935 - SIOCSIFMETRIC = 0x80906918 - SIOCSIFMTU = 0x8090697f - SIOCSIFNETMASK = 0x80906916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSLIFPHYADDR = 0x8118694a - SIOCSLINKSTR = 0x801c6988 - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SIOCSVH = 0xc0906982 - SIOCZIFDATA = 0xc0946986 - SOCK_CLOEXEC = 0x10000000 - SOCK_DGRAM = 0x2 - SOCK_FLAGS_MASK = 0xf0000000 - SOCK_NONBLOCK = 0x20000000 - SOCK_NOSIGPIPE = 0x40000000 - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ACCEPTFILTER = 0x1000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LINGER = 0x80 - SO_NOHEADER = 0x100a - SO_NOSIGPIPE = 0x800 - SO_OOBINLINE = 0x100 - SO_OVERFLOWED = 0x1009 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x100c - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x100b - SO_TIMESTAMP = 0x2000 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - SYSCTL_VERSION = 
0x1000000 - SYSCTL_VERS_0 = 0x0 - SYSCTL_VERS_1 = 0x1000000 - SYSCTL_VERS_MASK = 0xff000000 - S_ARCH1 = 0x10000 - S_ARCH2 = 0x20000 - S_BLKSIZE = 0x200 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IFWHT = 0xe000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISTXT = 0x200 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - S_LOGIN_SET = 0x1 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CONGCTL = 0x20 - TCP_KEEPCNT = 0x6 - TCP_KEEPIDLE = 0x3 - TCP_KEEPINIT = 0x7 - TCP_KEEPINTVL = 0x5 - TCP_MAXBURST = 0x4 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0x10 - TCP_MINMSS = 0xd8 - TCP_MSS = 0x218 - TCP_NODELAY = 0x1 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x400c7458 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLAG_CDTRCTS = 0x10 - TIOCFLAG_CLOCAL = 0x2 - TIOCFLAG_CRTSCTS = 0x4 - TIOCFLAG_MDMBUF = 0x8 - TIOCFLAG_SOFTCAR = 0x1 - TIOCFLUSH = 0x80047410 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGFLAGS = 0x4004745d - TIOCGLINED = 0x40207442 - TIOCGPGRP = 0x40047477 - TIOCGQSIZE = 0x40047481 - TIOCGRANTPT = 0x20007447 - TIOCGSID = 0x40047463 - TIOCGSIZE = 0x40087468 - TIOCGWINSZ = 0x40087468 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGET = 0x4004746a - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTMGET = 0x40287446 - TIOCPTSNAME = 0x40287448 - TIOCRCVFRAME = 0x80047445 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSFLAGS = 0x8004745c - TIOCSIG = 0x2000745f - TIOCSLINED = 0x80207443 - TIOCSPGRP = 0x80047476 - TIOCSQSIZE = 0x80047480 - TIOCSSIZE = 0x80087467 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x80047465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCUCNTL = 0x80047466 - TIOCXMTFRAME = 0x80047444 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WALL = 0x8 - WALLSIG = 0x8 - WALTSIG = 0x4 - WCLONE = 0x4 - WCOREFLAG = 0x80 - WNOHANG = 0x1 - WNOWAIT = 0x10000 - WNOZOMBIE = 0x20000 - WOPTSCHECKED = 0x40000 - WSTOPPED = 0x7f - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADF = 
syscall.Errno(0x9) - EBADMSG = syscall.Errno(0x58) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x57) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x52) - EILSEQ = syscall.Errno(0x55) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x60) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5e) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x5d) - ENOBUFS = syscall.Errno(0x37) - ENODATA = syscall.Errno(0x59) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x5f) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x53) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x5a) - ENOSTR = syscall.Errno(0x5b) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x56) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x54) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x60) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIME = syscall.Errno(0x5c) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGPWR = syscall.Signal(0x20) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP 
= syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large or too small", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol option not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "connection timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. 
not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "identifier removed", - 83: "no message of desired type", - 84: "value too large to be stored in data type", - 85: "illegal byte sequence", - 86: "not supported", - 87: "operation Canceled", - 88: "bad or Corrupt message", - 89: "no message available", - 90: "no STREAM resources", - 91: "not a STREAM", - 92: "STREAM ioctl timeout", - 93: "attribute not found", - 94: "multihop attempted", - 95: "link has been severed", - 96: "protocol error", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "stopped (signal)", - 18: "stopped", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "power fail/restart", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_amd64.go deleted file mode 100644 index 4994437b63d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_amd64.go +++ /dev/null @@ -1,1702 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,netbsd - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_ARP = 0x1c - AF_BLUETOOTH = 0x1f - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x20 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x18 - AF_IPX = 0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x23 - AF_MPLS = 0x21 - AF_NATM = 0x1b - AF_NS = 0x6 - AF_OROUTE = 0x11 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x22 - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - ARPHRD_ARCNET = 0x7 - ARPHRD_ETHER = 0x1 - ARPHRD_FRELAY = 0xf - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - ARPHRD_STRIP = 0x17 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B460800 = 0x70800 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B921600 = 0xe1000 - B9600 = 0x2580 - BIOCFEEDBACK = 0x8004427d - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc0104277 - BIOCGETIF = 0x4090426b - BIOCGFEEDBACK = 0x4004427c - BIOCGHDRCMPLT = 0x40044274 - BIOCGRTIMEOUT = 0x4010427b - 
BIOCGSEESENT = 0x40044278 - BIOCGSTATS = 0x4080426f - BIOCGSTATSOLD = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044276 - BIOCSETF = 0x80104267 - BIOCSETIF = 0x8090426c - BIOCSFEEDBACK = 0x8004427d - BIOCSHDRCMPLT = 0x80044275 - BIOCSRTIMEOUT = 0x8010427a - BIOCSSEESENT = 0x80044279 - BIOCSTCPF = 0x80104272 - BIOCSUDPF = 0x80104273 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x8 - BPF_ALIGNMENT32 = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DFLTBUFSIZE = 0x100000 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x1000000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CLONE_CSIGNAL = 0xff - CLONE_FILES = 0x400 - CLONE_FS = 0x200 - CLONE_PID = 0x1000 - CLONE_PTRACE = 0x2000 - CLONE_SIGHAND = 0x800 - CLONE_VFORK = 0x4000 - CLONE_VM = 0x100 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - CTL_QUERY = -0x2 - DIOCBSFLUSH = 0x20006478 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HDLC = 0x10 - DLT_HHDLC = 0x79 - DLT_HIPPI = 0xf - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP 
= 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0xe - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RAWAF_MASK = 0x2240000 - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xd - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EMUL_LINUX = 0x1 - EMUL_LINUX32 = 0x5 - EMUL_MAXID = 0x6 - ETHERCAP_JUMBO_MTU = 0x4 - ETHERCAP_VLAN_HWTAGGING = 0x2 - ETHERCAP_VLAN_MTU = 0x1 - ETHERMIN = 0x2e - ETHERMTU = 0x5dc - ETHERMTU_JUMBO = 0x2328 - ETHERTYPE_8023 = 0x4 - ETHERTYPE_AARP = 0x80f3 - ETHERTYPE_ACCTON = 0x8390 - ETHERTYPE_AEONIC = 0x8036 - ETHERTYPE_ALPHA = 0x814a - ETHERTYPE_AMBER = 0x6008 - ETHERTYPE_AMOEBA = 0x8145 - ETHERTYPE_APOLLO = 0x80f7 - ETHERTYPE_APOLLODOMAIN = 0x8019 - ETHERTYPE_APPLETALK = 0x809b - ETHERTYPE_APPLITEK = 0x80c7 - ETHERTYPE_ARGONAUT = 0x803a - ETHERTYPE_ARP = 0x806 - ETHERTYPE_AT = 0x809b - ETHERTYPE_ATALK = 0x809b - ETHERTYPE_ATOMIC = 0x86df - ETHERTYPE_ATT = 0x8069 - ETHERTYPE_ATTSTANFORD = 0x8008 - ETHERTYPE_AUTOPHON = 0x806a - ETHERTYPE_AXIS = 0x8856 - ETHERTYPE_BCLOOP = 0x9003 - ETHERTYPE_BOFL = 0x8102 - ETHERTYPE_CABLETRON = 0x7034 - ETHERTYPE_CHAOS = 0x804 - ETHERTYPE_COMDESIGN = 0x806c - ETHERTYPE_COMPUGRAPHIC = 0x806d - ETHERTYPE_COUNTERPOINT = 0x8062 - ETHERTYPE_CRONUS = 0x8004 - ETHERTYPE_CRONUSVLN = 0x8003 - ETHERTYPE_DCA = 0x1234 - ETHERTYPE_DDE = 0x807b - ETHERTYPE_DEBNI = 0xaaaa - ETHERTYPE_DECAM = 0x8048 - ETHERTYPE_DECCUST = 0x6006 - ETHERTYPE_DECDIAG = 0x6005 - ETHERTYPE_DECDNS = 0x803c - ETHERTYPE_DECDTS = 0x803e - ETHERTYPE_DECEXPER = 0x6000 - ETHERTYPE_DECLAST = 0x8041 - ETHERTYPE_DECLTM = 0x803f - ETHERTYPE_DECMUMPS = 0x6009 - ETHERTYPE_DECNETBIOS = 0x8040 - ETHERTYPE_DELTACON = 0x86de - ETHERTYPE_DIDDLE = 0x4321 - ETHERTYPE_DLOG1 = 0x660 - ETHERTYPE_DLOG2 = 0x661 - ETHERTYPE_DN = 0x6003 - ETHERTYPE_DOGFIGHT = 0x1989 - ETHERTYPE_DSMD = 0x8039 - ETHERTYPE_ECMA = 0x803 - ETHERTYPE_ENCRYPT = 0x803d - ETHERTYPE_ES = 0x805d - ETHERTYPE_EXCELAN = 0x8010 - ETHERTYPE_EXPERDATA = 0x8049 - ETHERTYPE_FLIP = 0x8146 - ETHERTYPE_FLOWCONTROL = 0x8808 - ETHERTYPE_FRARP = 0x808 - ETHERTYPE_GENDYN = 0x8068 - ETHERTYPE_HAYES = 0x8130 - ETHERTYPE_HIPPI_FP = 0x8180 - ETHERTYPE_HITACHI = 0x8820 - ETHERTYPE_HP = 0x8005 - ETHERTYPE_IEEEPUP = 0xa00 - ETHERTYPE_IEEEPUPAT = 0xa01 - ETHERTYPE_IMLBL = 0x4c42 - ETHERTYPE_IMLBLDIAG = 0x424c - ETHERTYPE_IP = 0x800 - ETHERTYPE_IPAS = 0x876c - ETHERTYPE_IPV6 = 0x86dd - ETHERTYPE_IPX = 0x8137 - ETHERTYPE_IPXNEW = 0x8037 - ETHERTYPE_KALPANA 
= 0x8582 - ETHERTYPE_LANBRIDGE = 0x8038 - ETHERTYPE_LANPROBE = 0x8888 - ETHERTYPE_LAT = 0x6004 - ETHERTYPE_LBACK = 0x9000 - ETHERTYPE_LITTLE = 0x8060 - ETHERTYPE_LOGICRAFT = 0x8148 - ETHERTYPE_LOOPBACK = 0x9000 - ETHERTYPE_MATRA = 0x807a - ETHERTYPE_MAX = 0xffff - ETHERTYPE_MERIT = 0x807c - ETHERTYPE_MICP = 0x873a - ETHERTYPE_MOPDL = 0x6001 - ETHERTYPE_MOPRC = 0x6002 - ETHERTYPE_MOTOROLA = 0x818d - ETHERTYPE_MPLS = 0x8847 - ETHERTYPE_MPLS_MCAST = 0x8848 - ETHERTYPE_MUMPS = 0x813f - ETHERTYPE_NBPCC = 0x3c04 - ETHERTYPE_NBPCLAIM = 0x3c09 - ETHERTYPE_NBPCLREQ = 0x3c05 - ETHERTYPE_NBPCLRSP = 0x3c06 - ETHERTYPE_NBPCREQ = 0x3c02 - ETHERTYPE_NBPCRSP = 0x3c03 - ETHERTYPE_NBPDG = 0x3c07 - ETHERTYPE_NBPDGB = 0x3c08 - ETHERTYPE_NBPDLTE = 0x3c0a - ETHERTYPE_NBPRAR = 0x3c0c - ETHERTYPE_NBPRAS = 0x3c0b - ETHERTYPE_NBPRST = 0x3c0d - ETHERTYPE_NBPSCD = 0x3c01 - ETHERTYPE_NBPVCD = 0x3c00 - ETHERTYPE_NBS = 0x802 - ETHERTYPE_NCD = 0x8149 - ETHERTYPE_NESTAR = 0x8006 - ETHERTYPE_NETBEUI = 0x8191 - ETHERTYPE_NOVELL = 0x8138 - ETHERTYPE_NS = 0x600 - ETHERTYPE_NSAT = 0x601 - ETHERTYPE_NSCOMPAT = 0x807 - ETHERTYPE_NTRAILER = 0x10 - ETHERTYPE_OS9 = 0x7007 - ETHERTYPE_OS9NET = 0x7009 - ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e - ETHERTYPE_PCS = 0x4242 - ETHERTYPE_PLANNING = 0x8044 - ETHERTYPE_PPP = 0x880b - ETHERTYPE_PPPOE = 0x8864 - ETHERTYPE_PPPOEDISC = 0x8863 - ETHERTYPE_PRIMENTS = 0x7031 - ETHERTYPE_PUP = 0x200 - ETHERTYPE_PUPAT = 0x200 - ETHERTYPE_RACAL = 0x7030 - ETHERTYPE_RATIONAL = 0x8150 - ETHERTYPE_RAWFR = 0x6559 - ETHERTYPE_RCL = 0x1995 - ETHERTYPE_RDP = 0x8739 - ETHERTYPE_RETIX = 0x80f2 - ETHERTYPE_REVARP = 0x8035 - ETHERTYPE_SCA = 0x6007 - ETHERTYPE_SECTRA = 0x86db - ETHERTYPE_SECUREDATA = 0x876d - ETHERTYPE_SGITW = 0x817e - ETHERTYPE_SG_BOUNCE = 0x8016 - ETHERTYPE_SG_DIAG = 0x8013 - ETHERTYPE_SG_NETGAMES = 0x8014 - ETHERTYPE_SG_RESV = 0x8015 - ETHERTYPE_SIMNET = 0x5208 - ETHERTYPE_SLOWPROTOCOLS = 0x8809 - ETHERTYPE_SNA = 0x80d5 - ETHERTYPE_SNMP = 0x814c - ETHERTYPE_SONIX = 0xfaf5 - ETHERTYPE_SPIDER = 0x809f - ETHERTYPE_SPRITE = 0x500 - ETHERTYPE_STP = 0x8181 - ETHERTYPE_TALARIS = 0x812b - ETHERTYPE_TALARISMC = 0x852b - ETHERTYPE_TCPCOMP = 0x876b - ETHERTYPE_TCPSM = 0x9002 - ETHERTYPE_TEC = 0x814f - ETHERTYPE_TIGAN = 0x802f - ETHERTYPE_TRAIL = 0x1000 - ETHERTYPE_TRANSETHER = 0x6558 - ETHERTYPE_TYMSHARE = 0x802e - ETHERTYPE_UBBST = 0x7005 - ETHERTYPE_UBDEBUG = 0x900 - ETHERTYPE_UBDIAGLOOP = 0x7002 - ETHERTYPE_UBDL = 0x7000 - ETHERTYPE_UBNIU = 0x7001 - ETHERTYPE_UBNMC = 0x7003 - ETHERTYPE_VALID = 0x1600 - ETHERTYPE_VARIAN = 0x80dd - ETHERTYPE_VAXELN = 0x803b - ETHERTYPE_VEECO = 0x8067 - ETHERTYPE_VEXP = 0x805b - ETHERTYPE_VGLAB = 0x8131 - ETHERTYPE_VINES = 0xbad - ETHERTYPE_VINESECHO = 0xbaf - ETHERTYPE_VINESLOOP = 0xbae - ETHERTYPE_VITAL = 0xff00 - ETHERTYPE_VLAN = 0x8100 - ETHERTYPE_VLTLMAN = 0x8080 - ETHERTYPE_VPROD = 0x805c - ETHERTYPE_VURESERVED = 0x8147 - ETHERTYPE_WATERLOO = 0x8130 - ETHERTYPE_WELLFLEET = 0x8103 - ETHERTYPE_X25 = 0x805 - ETHERTYPE_X75 = 0x801 - ETHERTYPE_XNSSM = 0x9001 - ETHERTYPE_XTP = 0x817d - ETHER_ADDR_LEN = 0x6 - ETHER_CRC_LEN = 0x4 - ETHER_CRC_POLY_BE = 0x4c11db6 - ETHER_CRC_POLY_LE = 0xedb88320 - ETHER_HDR_LEN = 0xe - ETHER_MAX_LEN = 0x5ee - ETHER_MAX_LEN_JUMBO = 0x233a - ETHER_MIN_LEN = 0x40 - ETHER_PPPOE_ENCAP_LEN = 0x8 - ETHER_TYPE_LEN = 0x2 - ETHER_VLAN_ENCAP_LEN = 0x4 - EVFILT_AIO = 0x2 - EVFILT_PROC = 0x4 - EVFILT_READ = 0x0 - EVFILT_SIGNAL = 0x5 - EVFILT_SYSCOUNT = 0x7 - EVFILT_TIMER = 0x6 - EVFILT_VNODE = 0x3 - EVFILT_WRITE = 0x1 - EV_ADD = 0x1 - EV_CLEAR = 
0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x100 - FLUSHO = 0x800000 - F_CLOSEM = 0xa - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0xc - F_FSCTL = -0x80000000 - F_FSDIRMASK = 0x70000000 - F_FSIN = 0x10000000 - F_FSINOUT = 0x30000000 - F_FSOUT = 0x20000000 - F_FSPRIV = 0x8000 - F_FSVOID = 0x40000000 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETNOSIGPIPE = 0xd - F_GETOWN = 0x5 - F_MAXFD = 0xb - F_OK = 0x0 - F_PARAM_MASK = 0xfff - F_PARAM_MAX = 0xfff - F_RDLCK = 0x1 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETNOSIGPIPE = 0xe - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFA_ROUTE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x8f52 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf8 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DOCSCABLEUPSTREAMCHANNEL = 0xcd - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ECONET = 0xce - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf2 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INFINIBAND = 0xc7 - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - 
IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LINEGROUP = 0xd2 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - IFT_PLC = 0xae - IFT_PON155 = 0xcf - IFT_PON622 = 0xd0 - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPATM = 0xc5 - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf1 - IFT_Q2931 = 0xc9 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SIPSIG = 0xcc - IFT_SIPTG = 0xcb - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_STF = 0xd7 - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TELINK = 0xc8 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VIRTUALTG = 0xca - IFT_VOICEDID = 0xd5 - IFT_VOICEEM = 0x64 - IFT_VOICEEMFGD = 0xd3 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFGDEANA = 0xd4 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERCABLE = 0xc6 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IPPROTO_AH = 0x33 - IPPROTO_CARP = 0x70 - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPIP 
= 0x4 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IPV6_ICMP = 0x3a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MOBILE = 0x37 - IPPROTO_NONE = 0x3b - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPPROTO_VRRP = 0x70 - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXPACKET = 0xffff - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_PATHMTU = 0x2c - IPV6_PKTINFO = 0x2e - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_EF = 0x8000 - IP_ERRORMTU = 0x15 - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x16 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINFRAGSIZE = 0x45 - IP_MINTTL = 0x18 - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x1 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x17 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_TOS = 0x3 - IP_TTL = 0x4 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x6 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_SPACEAVAIL = 0x5 - MADV_WILLNEED = 0x3 - MAP_ALIGNMENT_16MB = 0x18000000 - MAP_ALIGNMENT_1TB = 0x28000000 - MAP_ALIGNMENT_256TB = 0x30000000 - MAP_ALIGNMENT_4GB = 0x20000000 - MAP_ALIGNMENT_64KB = 0x10000000 - MAP_ALIGNMENT_64PB = 0x38000000 - MAP_ALIGNMENT_MASK = -0x1000000 - MAP_ALIGNMENT_SHIFT = 0x18 - MAP_ANON = 0x1000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_INHERIT = 0x80 - MAP_INHERIT_COPY = 0x1 - MAP_INHERIT_DEFAULT = 0x1 - MAP_INHERIT_DONATE_COPY = 0x3 - MAP_INHERIT_NONE = 0x2 - MAP_INHERIT_SHARE = 0x0 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_SHARED = 0x1 - MAP_STACK = 0x2000 - MAP_TRYFIXED = 0x400 - MAP_WIRED = 0x800 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_BCAST = 0x100 - MSG_CMSG_CLOEXEC = 0x800 - MSG_CONTROLMBUF = 0x2000000 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOR = 0x8 - MSG_IOVUSRSPACE = 0x4000000 - MSG_LENUSRSPACE = 0x8000000 - MSG_MCAST = 
0x200 - MSG_NAMEMBUF = 0x1000000 - MSG_NBIO = 0x1000 - MSG_NOSIGNAL = 0x400 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x10 - MSG_USERFLAGS = 0xffffff - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x2 - MS_SYNC = 0x4 - NAME_MAX = 0x1ff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x5 - NET_RT_MAXID = 0x6 - NET_RT_OIFLIST = 0x4 - NET_RT_OOIFLIST = 0x3 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - OFIOGETBMAP = 0xc004667a - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_ALT_IO = 0x40000 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x400000 - O_CREAT = 0x200 - O_DIRECT = 0x80000 - O_DIRECTORY = 0x200000 - O_DSYNC = 0x10000 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_NOSIGPIPE = 0x1000000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x20000 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PRI_IOFLUSH = 0x7c - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_AS = 0xa - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x9 - RTAX_NETMASK = 0x2 - RTAX_TAG = 0x8 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTA_TAG = 0x100 - RTF_ANNOUNCE = 0x20000 - RTF_BLACKHOLE = 0x1000 - RTF_CLONED = 0x2000 - RTF_CLONING = 0x100 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_LLINFO = 0x400 - RTF_MASK = 0x80 - RTF_MODIFIED = 0x20 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_REJECT = 0x8 - RTF_SRC = 0x10000 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_CHGADDR = 0x15 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_GET = 0x4 - RTM_IEEE80211 = 0x11 - RTM_IFANNOUNCE = 0x10 - RTM_IFINFO = 0x14 - RTM_LLINFO_UPD = 0x13 - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_OIFINFO = 0xf - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_OOIFINFO = 0xe - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_SETGATE = 0x12 - RTM_VERSION = 0x4 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x4 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x8 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80906931 - SIOCADDRT = 0x8038720a - SIOCAIFADDR = 0x8040691a - SIOCALIFADDR = 0x8118691c - SIOCATMARK = 0x40047307 - SIOCDELMULTI = 0x80906932 - SIOCDELRT = 0x8038720b - SIOCDIFADDR = 0x80906919 - SIOCDIFPHYADDR = 0x80906949 - SIOCDLIFADDR = 0x8118691e - SIOCGDRVSPEC = 0xc028697b - SIOCGETPFSYNC = 0xc09069f8 - 
SIOCGETSGCNT = 0xc0207534 - SIOCGETVIFCNT = 0xc0287533 - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0906921 - SIOCGIFADDRPREF = 0xc0986920 - SIOCGIFALIAS = 0xc040691b - SIOCGIFBRDADDR = 0xc0906923 - SIOCGIFCAP = 0xc0206976 - SIOCGIFCONF = 0xc0106926 - SIOCGIFDATA = 0xc0986985 - SIOCGIFDLT = 0xc0906977 - SIOCGIFDSTADDR = 0xc0906922 - SIOCGIFFLAGS = 0xc0906911 - SIOCGIFGENERIC = 0xc090693a - SIOCGIFMEDIA = 0xc0306936 - SIOCGIFMETRIC = 0xc0906917 - SIOCGIFMTU = 0xc090697e - SIOCGIFNETMASK = 0xc0906925 - SIOCGIFPDSTADDR = 0xc0906948 - SIOCGIFPSRCADDR = 0xc0906947 - SIOCGLIFADDR = 0xc118691d - SIOCGLIFPHYADDR = 0xc118694b - SIOCGLINKSTR = 0xc0286987 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGVH = 0xc0906983 - SIOCIFCREATE = 0x8090697a - SIOCIFDESTROY = 0x80906979 - SIOCIFGCLONERS = 0xc0106978 - SIOCINITIFADDR = 0xc0706984 - SIOCSDRVSPEC = 0x8028697b - SIOCSETPFSYNC = 0x809069f7 - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8090690c - SIOCSIFADDRPREF = 0x8098691f - SIOCSIFBRDADDR = 0x80906913 - SIOCSIFCAP = 0x80206975 - SIOCSIFDSTADDR = 0x8090690e - SIOCSIFFLAGS = 0x80906910 - SIOCSIFGENERIC = 0x80906939 - SIOCSIFMEDIA = 0xc0906935 - SIOCSIFMETRIC = 0x80906918 - SIOCSIFMTU = 0x8090697f - SIOCSIFNETMASK = 0x80906916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSLIFPHYADDR = 0x8118694a - SIOCSLINKSTR = 0x80286988 - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SIOCSVH = 0xc0906982 - SIOCZIFDATA = 0xc0986986 - SOCK_CLOEXEC = 0x10000000 - SOCK_DGRAM = 0x2 - SOCK_FLAGS_MASK = 0xf0000000 - SOCK_NONBLOCK = 0x20000000 - SOCK_NOSIGPIPE = 0x40000000 - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ACCEPTFILTER = 0x1000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LINGER = 0x80 - SO_NOHEADER = 0x100a - SO_NOSIGPIPE = 0x800 - SO_OOBINLINE = 0x100 - SO_OVERFLOWED = 0x1009 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x100c - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x100b - SO_TIMESTAMP = 0x2000 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - SYSCTL_VERSION = 0x1000000 - SYSCTL_VERS_0 = 0x0 - SYSCTL_VERS_1 = 0x1000000 - SYSCTL_VERS_MASK = 0xff000000 - S_ARCH1 = 0x10000 - S_ARCH2 = 0x20000 - S_BLKSIZE = 0x200 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IFWHT = 0xe000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISTXT = 0x200 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - S_LOGIN_SET = 0x1 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CONGCTL = 0x20 - TCP_KEEPCNT = 0x6 - TCP_KEEPIDLE = 0x3 - TCP_KEEPINIT = 0x7 - TCP_KEEPINTVL = 0x5 - TCP_MAXBURST = 0x4 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0x10 - TCP_MINMSS = 0xd8 - TCP_MSS = 0x218 - TCP_NODELAY = 0x1 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x40107458 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLAG_CDTRCTS = 0x10 - TIOCFLAG_CLOCAL = 0x2 - TIOCFLAG_CRTSCTS = 0x4 - TIOCFLAG_MDMBUF = 0x8 - TIOCFLAG_SOFTCAR = 0x1 - TIOCFLUSH = 0x80047410 - 
TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGFLAGS = 0x4004745d - TIOCGLINED = 0x40207442 - TIOCGPGRP = 0x40047477 - TIOCGQSIZE = 0x40047481 - TIOCGRANTPT = 0x20007447 - TIOCGSID = 0x40047463 - TIOCGSIZE = 0x40087468 - TIOCGWINSZ = 0x40087468 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGET = 0x4004746a - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTMGET = 0x40287446 - TIOCPTSNAME = 0x40287448 - TIOCRCVFRAME = 0x80087445 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSFLAGS = 0x8004745c - TIOCSIG = 0x2000745f - TIOCSLINED = 0x80207443 - TIOCSPGRP = 0x80047476 - TIOCSQSIZE = 0x80047480 - TIOCSSIZE = 0x80087467 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x80047465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCUCNTL = 0x80047466 - TIOCXMTFRAME = 0x80087444 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WALL = 0x8 - WALLSIG = 0x8 - WALTSIG = 0x4 - WCLONE = 0x4 - WCOREFLAG = 0x80 - WNOHANG = 0x1 - WNOWAIT = 0x10000 - WNOZOMBIE = 0x20000 - WOPTSCHECKED = 0x40000 - WSTOPPED = 0x7f - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADMSG = syscall.Errno(0x58) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x57) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x52) - EILSEQ = syscall.Errno(0x55) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x60) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5e) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x5d) - ENOBUFS = syscall.Errno(0x37) - ENODATA = 
syscall.Errno(0x59) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x5f) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x53) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x5a) - ENOSTR = syscall.Errno(0x5b) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x56) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x54) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x60) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIME = syscall.Errno(0x5c) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGPWR = syscall.Signal(0x20) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: 
"illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large or too small", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol option not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "connection timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "identifier removed", - 83: "no message of desired type", - 84: "value too large to be stored in data type", - 85: "illegal byte sequence", - 86: "not supported", - 87: "operation Canceled", - 88: "bad or Corrupt message", - 89: "no message available", - 90: "no STREAM resources", - 91: "not a STREAM", - 92: "STREAM ioctl timeout", - 93: "attribute not found", - 94: "multihop attempted", - 95: "link has been severed", - 96: "protocol error", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "stopped (signal)", - 18: "stopped", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "power fail/restart", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_arm.go deleted file mode 100644 index ac85ca64529..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_netbsd_arm.go +++ /dev/null @@ -1,1688 +0,0 @@ -// mkerrors.sh -marm -// MACHINE GENERATED BY THE COMMAND ABOVE; 
DO NOT EDIT - -// +build arm,netbsd - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -marm _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_ARP = 0x1c - AF_BLUETOOTH = 0x1f - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_HYLINK = 0xf - AF_IEEE80211 = 0x20 - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x18 - AF_IPX = 0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x23 - AF_MPLS = 0x21 - AF_NATM = 0x1b - AF_NS = 0x6 - AF_OROUTE = 0x11 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x22 - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - ARPHRD_ARCNET = 0x7 - ARPHRD_ETHER = 0x1 - ARPHRD_FRELAY = 0xf - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - ARPHRD_STRIP = 0x17 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B460800 = 0x70800 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B921600 = 0xe1000 - B9600 = 0x2580 - BIOCFEEDBACK = 0x8004427d - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc0084277 - BIOCGETIF = 0x4090426b - BIOCGFEEDBACK = 0x4004427c - BIOCGHDRCMPLT = 0x40044274 - BIOCGRTIMEOUT = 0x400c427b - BIOCGSEESENT = 0x40044278 - BIOCGSTATS = 0x4080426f - BIOCGSTATSOLD = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDLT = 0x80044276 - BIOCSETF = 0x80084267 - BIOCSETIF = 0x8090426c - BIOCSFEEDBACK = 0x8004427d - BIOCSHDRCMPLT = 0x80044275 - BIOCSRTIMEOUT = 0x800c427a - BIOCSSEESENT = 0x80044279 - BIOCSTCPF = 0x80084272 - BIOCSUDPF = 0x80084273 - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALIGNMENT32 = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DFLTBUFSIZE = 0x100000 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x1000000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - CTL_QUERY = -0x2 - DIOCBSFLUSH = 0x20006478 - DLT_A429 = 0xb8 - DLT_A653_ICM = 0xb9 - DLT_AIRONET_HEADER = 0x78 - DLT_AOS = 0xde - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_AX25_KISS = 0xca - DLT_BACNET_MS_TP = 0xa5 - DLT_BLUETOOTH_HCI_H4 = 0xbb - DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9 - DLT_CAN20B = 0xbe - DLT_CAN_SOCKETCAN = 0xe3 - DLT_CHAOS = 0x5 - DLT_CISCO_IOS = 0x76 - 
DLT_C_HDLC = 0x68 - DLT_C_HDLC_WITH_DIR = 0xcd - DLT_DECT = 0xdd - DLT_DOCSIS = 0x8f - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF = 0xc5 - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FC_2 = 0xe0 - DLT_FC_2_WITH_FRAME_DELIMS = 0xe1 - DLT_FDDI = 0xa - DLT_FLEXRAY = 0xd2 - DLT_FRELAY = 0x6b - DLT_FRELAY_WITH_DIR = 0xce - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_GSMTAP_ABIS = 0xda - DLT_GSMTAP_UM = 0xd9 - DLT_HDLC = 0x10 - DLT_HHDLC = 0x79 - DLT_HIPPI = 0xf - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IEEE802_15_4 = 0xc3 - DLT_IEEE802_15_4_LINUX = 0xbf - DLT_IEEE802_15_4_NONASK_PHY = 0xd7 - DLT_IEEE802_16_MAC_CPS = 0xbc - DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1 - DLT_IPMB = 0xc7 - DLT_IPMB_LINUX = 0xd1 - DLT_IPNET = 0xe2 - DLT_IPV4 = 0xe4 - DLT_IPV6 = 0xe5 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_ISM = 0xc2 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_JUNIPER_ST = 0xc8 - DLT_JUNIPER_VP = 0xb7 - DLT_LAPB_WITH_DIR = 0xcf - DLT_LAPD = 0xcb - DLT_LIN = 0xd4 - DLT_LINUX_EVDEV = 0xd8 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MFR = 0xb6 - DLT_MOST = 0xd3 - DLT_MPLS = 0xdb - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPI = 0xc0 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0xe - DLT_PPP_ETHER = 0x33 - DLT_PPP_PPPD = 0xa6 - DLT_PPP_SERIAL = 0x32 - DLT_PPP_WITH_DIR = 0xcc - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAIF1 = 0xc6 - DLT_RAW = 0xc - DLT_RAWAF_MASK = 0x2240000 - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SITA = 0xc4 - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xd - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - DLT_USB = 0xba - DLT_USB_LINUX = 0xbd - DLT_USB_LINUX_MMAPPED = 0xdc - DLT_WIHART = 0xdf - DLT_X2E_SERIAL = 0xd5 - DLT_X2E_XORAYA = 0xd6 - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - DT_WHT = 0xe - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EMUL_LINUX = 0x1 - EMUL_LINUX32 = 0x5 - EMUL_MAXID = 0x6 - ETHERCAP_JUMBO_MTU = 0x4 - ETHERCAP_VLAN_HWTAGGING = 0x2 - ETHERCAP_VLAN_MTU = 0x1 - ETHERMIN = 0x2e - ETHERMTU = 0x5dc - ETHERMTU_JUMBO = 0x2328 - ETHERTYPE_8023 = 0x4 - ETHERTYPE_AARP = 0x80f3 - ETHERTYPE_ACCTON = 0x8390 - ETHERTYPE_AEONIC = 0x8036 - ETHERTYPE_ALPHA = 0x814a - ETHERTYPE_AMBER = 0x6008 - ETHERTYPE_AMOEBA = 0x8145 - ETHERTYPE_APOLLO = 0x80f7 - ETHERTYPE_APOLLODOMAIN = 0x8019 - ETHERTYPE_APPLETALK = 0x809b - ETHERTYPE_APPLITEK = 0x80c7 - ETHERTYPE_ARGONAUT = 0x803a - ETHERTYPE_ARP = 0x806 - ETHERTYPE_AT = 0x809b - ETHERTYPE_ATALK = 0x809b - ETHERTYPE_ATOMIC = 0x86df - ETHERTYPE_ATT = 0x8069 - ETHERTYPE_ATTSTANFORD = 0x8008 - ETHERTYPE_AUTOPHON = 0x806a - ETHERTYPE_AXIS = 0x8856 - ETHERTYPE_BCLOOP = 0x9003 - ETHERTYPE_BOFL = 0x8102 - ETHERTYPE_CABLETRON = 
0x7034 - ETHERTYPE_CHAOS = 0x804 - ETHERTYPE_COMDESIGN = 0x806c - ETHERTYPE_COMPUGRAPHIC = 0x806d - ETHERTYPE_COUNTERPOINT = 0x8062 - ETHERTYPE_CRONUS = 0x8004 - ETHERTYPE_CRONUSVLN = 0x8003 - ETHERTYPE_DCA = 0x1234 - ETHERTYPE_DDE = 0x807b - ETHERTYPE_DEBNI = 0xaaaa - ETHERTYPE_DECAM = 0x8048 - ETHERTYPE_DECCUST = 0x6006 - ETHERTYPE_DECDIAG = 0x6005 - ETHERTYPE_DECDNS = 0x803c - ETHERTYPE_DECDTS = 0x803e - ETHERTYPE_DECEXPER = 0x6000 - ETHERTYPE_DECLAST = 0x8041 - ETHERTYPE_DECLTM = 0x803f - ETHERTYPE_DECMUMPS = 0x6009 - ETHERTYPE_DECNETBIOS = 0x8040 - ETHERTYPE_DELTACON = 0x86de - ETHERTYPE_DIDDLE = 0x4321 - ETHERTYPE_DLOG1 = 0x660 - ETHERTYPE_DLOG2 = 0x661 - ETHERTYPE_DN = 0x6003 - ETHERTYPE_DOGFIGHT = 0x1989 - ETHERTYPE_DSMD = 0x8039 - ETHERTYPE_ECMA = 0x803 - ETHERTYPE_ENCRYPT = 0x803d - ETHERTYPE_ES = 0x805d - ETHERTYPE_EXCELAN = 0x8010 - ETHERTYPE_EXPERDATA = 0x8049 - ETHERTYPE_FLIP = 0x8146 - ETHERTYPE_FLOWCONTROL = 0x8808 - ETHERTYPE_FRARP = 0x808 - ETHERTYPE_GENDYN = 0x8068 - ETHERTYPE_HAYES = 0x8130 - ETHERTYPE_HIPPI_FP = 0x8180 - ETHERTYPE_HITACHI = 0x8820 - ETHERTYPE_HP = 0x8005 - ETHERTYPE_IEEEPUP = 0xa00 - ETHERTYPE_IEEEPUPAT = 0xa01 - ETHERTYPE_IMLBL = 0x4c42 - ETHERTYPE_IMLBLDIAG = 0x424c - ETHERTYPE_IP = 0x800 - ETHERTYPE_IPAS = 0x876c - ETHERTYPE_IPV6 = 0x86dd - ETHERTYPE_IPX = 0x8137 - ETHERTYPE_IPXNEW = 0x8037 - ETHERTYPE_KALPANA = 0x8582 - ETHERTYPE_LANBRIDGE = 0x8038 - ETHERTYPE_LANPROBE = 0x8888 - ETHERTYPE_LAT = 0x6004 - ETHERTYPE_LBACK = 0x9000 - ETHERTYPE_LITTLE = 0x8060 - ETHERTYPE_LOGICRAFT = 0x8148 - ETHERTYPE_LOOPBACK = 0x9000 - ETHERTYPE_MATRA = 0x807a - ETHERTYPE_MAX = 0xffff - ETHERTYPE_MERIT = 0x807c - ETHERTYPE_MICP = 0x873a - ETHERTYPE_MOPDL = 0x6001 - ETHERTYPE_MOPRC = 0x6002 - ETHERTYPE_MOTOROLA = 0x818d - ETHERTYPE_MPLS = 0x8847 - ETHERTYPE_MPLS_MCAST = 0x8848 - ETHERTYPE_MUMPS = 0x813f - ETHERTYPE_NBPCC = 0x3c04 - ETHERTYPE_NBPCLAIM = 0x3c09 - ETHERTYPE_NBPCLREQ = 0x3c05 - ETHERTYPE_NBPCLRSP = 0x3c06 - ETHERTYPE_NBPCREQ = 0x3c02 - ETHERTYPE_NBPCRSP = 0x3c03 - ETHERTYPE_NBPDG = 0x3c07 - ETHERTYPE_NBPDGB = 0x3c08 - ETHERTYPE_NBPDLTE = 0x3c0a - ETHERTYPE_NBPRAR = 0x3c0c - ETHERTYPE_NBPRAS = 0x3c0b - ETHERTYPE_NBPRST = 0x3c0d - ETHERTYPE_NBPSCD = 0x3c01 - ETHERTYPE_NBPVCD = 0x3c00 - ETHERTYPE_NBS = 0x802 - ETHERTYPE_NCD = 0x8149 - ETHERTYPE_NESTAR = 0x8006 - ETHERTYPE_NETBEUI = 0x8191 - ETHERTYPE_NOVELL = 0x8138 - ETHERTYPE_NS = 0x600 - ETHERTYPE_NSAT = 0x601 - ETHERTYPE_NSCOMPAT = 0x807 - ETHERTYPE_NTRAILER = 0x10 - ETHERTYPE_OS9 = 0x7007 - ETHERTYPE_OS9NET = 0x7009 - ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e - ETHERTYPE_PCS = 0x4242 - ETHERTYPE_PLANNING = 0x8044 - ETHERTYPE_PPP = 0x880b - ETHERTYPE_PPPOE = 0x8864 - ETHERTYPE_PPPOEDISC = 0x8863 - ETHERTYPE_PRIMENTS = 0x7031 - ETHERTYPE_PUP = 0x200 - ETHERTYPE_PUPAT = 0x200 - ETHERTYPE_RACAL = 0x7030 - ETHERTYPE_RATIONAL = 0x8150 - ETHERTYPE_RAWFR = 0x6559 - ETHERTYPE_RCL = 0x1995 - ETHERTYPE_RDP = 0x8739 - ETHERTYPE_RETIX = 0x80f2 - ETHERTYPE_REVARP = 0x8035 - ETHERTYPE_SCA = 0x6007 - ETHERTYPE_SECTRA = 0x86db - ETHERTYPE_SECUREDATA = 0x876d - ETHERTYPE_SGITW = 0x817e - ETHERTYPE_SG_BOUNCE = 0x8016 - ETHERTYPE_SG_DIAG = 0x8013 - ETHERTYPE_SG_NETGAMES = 0x8014 - ETHERTYPE_SG_RESV = 0x8015 - ETHERTYPE_SIMNET = 0x5208 - ETHERTYPE_SLOWPROTOCOLS = 0x8809 - ETHERTYPE_SNA = 0x80d5 - ETHERTYPE_SNMP = 0x814c - ETHERTYPE_SONIX = 0xfaf5 - ETHERTYPE_SPIDER = 0x809f - ETHERTYPE_SPRITE = 0x500 - ETHERTYPE_STP = 0x8181 - ETHERTYPE_TALARIS = 0x812b - ETHERTYPE_TALARISMC = 0x852b - ETHERTYPE_TCPCOMP = 0x876b - 
ETHERTYPE_TCPSM = 0x9002 - ETHERTYPE_TEC = 0x814f - ETHERTYPE_TIGAN = 0x802f - ETHERTYPE_TRAIL = 0x1000 - ETHERTYPE_TRANSETHER = 0x6558 - ETHERTYPE_TYMSHARE = 0x802e - ETHERTYPE_UBBST = 0x7005 - ETHERTYPE_UBDEBUG = 0x900 - ETHERTYPE_UBDIAGLOOP = 0x7002 - ETHERTYPE_UBDL = 0x7000 - ETHERTYPE_UBNIU = 0x7001 - ETHERTYPE_UBNMC = 0x7003 - ETHERTYPE_VALID = 0x1600 - ETHERTYPE_VARIAN = 0x80dd - ETHERTYPE_VAXELN = 0x803b - ETHERTYPE_VEECO = 0x8067 - ETHERTYPE_VEXP = 0x805b - ETHERTYPE_VGLAB = 0x8131 - ETHERTYPE_VINES = 0xbad - ETHERTYPE_VINESECHO = 0xbaf - ETHERTYPE_VINESLOOP = 0xbae - ETHERTYPE_VITAL = 0xff00 - ETHERTYPE_VLAN = 0x8100 - ETHERTYPE_VLTLMAN = 0x8080 - ETHERTYPE_VPROD = 0x805c - ETHERTYPE_VURESERVED = 0x8147 - ETHERTYPE_WATERLOO = 0x8130 - ETHERTYPE_WELLFLEET = 0x8103 - ETHERTYPE_X25 = 0x805 - ETHERTYPE_X75 = 0x801 - ETHERTYPE_XNSSM = 0x9001 - ETHERTYPE_XTP = 0x817d - ETHER_ADDR_LEN = 0x6 - ETHER_CRC_LEN = 0x4 - ETHER_CRC_POLY_BE = 0x4c11db6 - ETHER_CRC_POLY_LE = 0xedb88320 - ETHER_HDR_LEN = 0xe - ETHER_MAX_LEN = 0x5ee - ETHER_MAX_LEN_JUMBO = 0x233a - ETHER_MIN_LEN = 0x40 - ETHER_PPPOE_ENCAP_LEN = 0x8 - ETHER_TYPE_LEN = 0x2 - ETHER_VLAN_ENCAP_LEN = 0x4 - EVFILT_AIO = 0x2 - EVFILT_PROC = 0x4 - EVFILT_READ = 0x0 - EVFILT_SIGNAL = 0x5 - EVFILT_SYSCOUNT = 0x7 - EVFILT_TIMER = 0x6 - EVFILT_VNODE = 0x3 - EVFILT_WRITE = 0x1 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x100 - FLUSHO = 0x800000 - F_CLOSEM = 0xa - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0xc - F_FSCTL = -0x80000000 - F_FSDIRMASK = 0x70000000 - F_FSIN = 0x10000000 - F_FSINOUT = 0x30000000 - F_FSOUT = 0x20000000 - F_FSPRIV = 0x8000 - F_FSVOID = 0x40000000 - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETNOSIGPIPE = 0xd - F_GETOWN = 0x5 - F_MAXFD = 0xb - F_OK = 0x0 - F_PARAM_MASK = 0xfff - F_PARAM_MAX = 0xfff - F_RDLCK = 0x1 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETNOSIGPIPE = 0xe - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFA_ROUTE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x8f52 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf8 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - 
IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DOCSCABLEUPSTREAMCHANNEL = 0xcd - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ECONET = 0xce - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf2 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INFINIBAND = 0xc7 - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LINEGROUP = 0xd2 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf5 - IFT_PFSYNC = 0xf6 - IFT_PLC = 0xae - IFT_PON155 = 0xcf - IFT_PON622 = 0xd0 - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPATM = 0xc5 - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf1 - IFT_Q2931 = 0xc9 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SIPSIG = 0xcc - IFT_SIPTG = 0xcb - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_STF = 0xd7 - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TELINK = 0xc8 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - 
IFT_VIRTUALIPADDRESS = 0x70 - IFT_VIRTUALTG = 0xca - IFT_VOICEDID = 0xd5 - IFT_VOICEEM = 0x64 - IFT_VOICEEMFGD = 0xd3 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFGDEANA = 0xd4 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERCABLE = 0xc6 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IPPROTO_AH = 0x33 - IPPROTO_CARP = 0x70 - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPIP = 0x4 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_IPV6_ICMP = 0x3a - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x34 - IPPROTO_MOBILE = 0x37 - IPPROTO_NONE = 0x3b - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPPROTO_VRRP = 0x70 - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPSEC_POLICY = 0x1c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXPACKET = 0xffff - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_PATHMTU = 0x2c - IPV6_PKTINFO = 0x2e - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DROP_MEMBERSHIP = 0xd - IP_EF = 0x8000 - IP_ERRORMTU = 0x15 - IP_HDRINCL = 0x2 - IP_IPSEC_POLICY = 0x16 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0x14 - IP_MF = 0x2000 - IP_MINFRAGSIZE = 0x45 - IP_MINTTL = 0x18 - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x1 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 
0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x14 - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVTTL = 0x17 - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_TOS = 0x3 - IP_TTL = 0x4 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x6 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_SPACEAVAIL = 0x5 - MADV_WILLNEED = 0x3 - MAP_ALIGNMENT_16MB = 0x18000000 - MAP_ALIGNMENT_1TB = 0x28000000 - MAP_ALIGNMENT_256TB = 0x30000000 - MAP_ALIGNMENT_4GB = 0x20000000 - MAP_ALIGNMENT_64KB = 0x10000000 - MAP_ALIGNMENT_64PB = 0x38000000 - MAP_ALIGNMENT_MASK = -0x1000000 - MAP_ALIGNMENT_SHIFT = 0x18 - MAP_ANON = 0x1000 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_HASSEMAPHORE = 0x200 - MAP_INHERIT = 0x80 - MAP_INHERIT_COPY = 0x1 - MAP_INHERIT_DEFAULT = 0x1 - MAP_INHERIT_DONATE_COPY = 0x3 - MAP_INHERIT_NONE = 0x2 - MAP_INHERIT_SHARE = 0x0 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_SHARED = 0x1 - MAP_STACK = 0x2000 - MAP_TRYFIXED = 0x400 - MAP_WIRED = 0x800 - MSG_BCAST = 0x100 - MSG_CMSG_CLOEXEC = 0x800 - MSG_CONTROLMBUF = 0x2000000 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOR = 0x8 - MSG_IOVUSRSPACE = 0x4000000 - MSG_LENUSRSPACE = 0x8000000 - MSG_MCAST = 0x200 - MSG_NAMEMBUF = 0x1000000 - MSG_NBIO = 0x1000 - MSG_NOSIGNAL = 0x400 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x10 - MSG_USERFLAGS = 0xffffff - MSG_WAITALL = 0x40 - NAME_MAX = 0x1ff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x5 - NET_RT_MAXID = 0x6 - NET_RT_OIFLIST = 0x4 - NET_RT_OOIFLIST = 0x3 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - OFIOGETBMAP = 0xc004667a - ONLCR = 0x2 - ONLRET = 0x40 - ONOCR = 0x20 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_ALT_IO = 0x40000 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x400000 - O_CREAT = 0x200 - O_DIRECT = 0x80000 - O_DIRECTORY = 0x200000 - O_DSYNC = 0x10000 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_NOSIGPIPE = 0x1000000 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x20000 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PRI_IOFLUSH = 0x7c - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - RLIMIT_AS = 0xa - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x9 - RTAX_NETMASK = 0x2 - RTAX_TAG = 0x8 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTA_TAG = 0x100 - RTF_ANNOUNCE = 0x20000 - RTF_BLACKHOLE = 0x1000 - RTF_CLONED = 0x2000 - RTF_CLONING = 0x100 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_LLINFO 
= 0x400 - RTF_MASK = 0x80 - RTF_MODIFIED = 0x20 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_REJECT = 0x8 - RTF_SRC = 0x10000 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_CHGADDR = 0x15 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_GET = 0x4 - RTM_IEEE80211 = 0x11 - RTM_IFANNOUNCE = 0x10 - RTM_IFINFO = 0x14 - RTM_LLINFO_UPD = 0x13 - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_OIFINFO = 0xf - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_OOIFINFO = 0xe - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_SETGATE = 0x12 - RTM_VERSION = 0x4 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_CREDS = 0x4 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x8 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80906931 - SIOCADDRT = 0x8030720a - SIOCAIFADDR = 0x8040691a - SIOCALIFADDR = 0x8118691c - SIOCATMARK = 0x40047307 - SIOCDELMULTI = 0x80906932 - SIOCDELRT = 0x8030720b - SIOCDIFADDR = 0x80906919 - SIOCDIFPHYADDR = 0x80906949 - SIOCDLIFADDR = 0x8118691e - SIOCGDRVSPEC = 0xc01c697b - SIOCGETPFSYNC = 0xc09069f8 - SIOCGETSGCNT = 0xc0147534 - SIOCGETVIFCNT = 0xc0147533 - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0906921 - SIOCGIFADDRPREF = 0xc0946920 - SIOCGIFALIAS = 0xc040691b - SIOCGIFBRDADDR = 0xc0906923 - SIOCGIFCAP = 0xc0206976 - SIOCGIFCONF = 0xc0086926 - SIOCGIFDATA = 0xc0946985 - SIOCGIFDLT = 0xc0906977 - SIOCGIFDSTADDR = 0xc0906922 - SIOCGIFFLAGS = 0xc0906911 - SIOCGIFGENERIC = 0xc090693a - SIOCGIFMEDIA = 0xc0286936 - SIOCGIFMETRIC = 0xc0906917 - SIOCGIFMTU = 0xc090697e - SIOCGIFNETMASK = 0xc0906925 - SIOCGIFPDSTADDR = 0xc0906948 - SIOCGIFPSRCADDR = 0xc0906947 - SIOCGLIFADDR = 0xc118691d - SIOCGLIFPHYADDR = 0xc118694b - SIOCGLINKSTR = 0xc01c6987 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGVH = 0xc0906983 - SIOCIFCREATE = 0x8090697a - SIOCIFDESTROY = 0x80906979 - SIOCIFGCLONERS = 0xc00c6978 - SIOCINITIFADDR = 0xc0446984 - SIOCSDRVSPEC = 0x801c697b - SIOCSETPFSYNC = 0x809069f7 - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8090690c - SIOCSIFADDRPREF = 0x8094691f - SIOCSIFBRDADDR = 0x80906913 - SIOCSIFCAP = 0x80206975 - SIOCSIFDSTADDR = 0x8090690e - SIOCSIFFLAGS = 0x80906910 - SIOCSIFGENERIC = 0x80906939 - SIOCSIFMEDIA = 0xc0906935 - SIOCSIFMETRIC = 0x80906918 - SIOCSIFMTU = 0x8090697f - SIOCSIFNETMASK = 0x80906916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSLIFPHYADDR = 0x8118694a - SIOCSLINKSTR = 0x801c6988 - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SIOCSVH = 0xc0906982 - SIOCZIFDATA = 0xc0946986 - SOCK_CLOEXEC = 0x10000000 - SOCK_DGRAM = 0x2 - SOCK_FLAGS_MASK = 0xf0000000 - SOCK_NONBLOCK = 0x20000000 - SOCK_NOSIGPIPE = 0x40000000 - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ACCEPTFILTER = 0x1000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LINGER = 0x80 - SO_NOHEADER = 0x100a - SO_NOSIGPIPE = 0x800 - SO_OOBINLINE = 0x100 - SO_OVERFLOWED = 0x1009 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x100c - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x100b - SO_TIMESTAMP = 0x2000 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - SYSCTL_VERSION = 0x1000000 - SYSCTL_VERS_0 = 0x0 - SYSCTL_VERS_1 = 
0x1000000 - SYSCTL_VERS_MASK = 0xff000000 - S_ARCH1 = 0x10000 - S_ARCH2 = 0x20000 - S_BLKSIZE = 0x200 - S_IEXEC = 0x40 - S_IFBLK = 0x6000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFIFO = 0x1000 - S_IFLNK = 0xa000 - S_IFMT = 0xf000 - S_IFREG = 0x8000 - S_IFSOCK = 0xc000 - S_IFWHT = 0xe000 - S_IREAD = 0x100 - S_IRGRP = 0x20 - S_IROTH = 0x4 - S_IRUSR = 0x100 - S_IRWXG = 0x38 - S_IRWXO = 0x7 - S_IRWXU = 0x1c0 - S_ISGID = 0x400 - S_ISTXT = 0x200 - S_ISUID = 0x800 - S_ISVTX = 0x200 - S_IWGRP = 0x10 - S_IWOTH = 0x2 - S_IWRITE = 0x80 - S_IWUSR = 0x80 - S_IXGRP = 0x8 - S_IXOTH = 0x1 - S_IXUSR = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_CONGCTL = 0x20 - TCP_KEEPCNT = 0x6 - TCP_KEEPIDLE = 0x3 - TCP_KEEPINIT = 0x7 - TCP_KEEPINTVL = 0x5 - TCP_MAXBURST = 0x4 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0x10 - TCP_MINMSS = 0xd8 - TCP_MSS = 0x218 - TCP_NODELAY = 0x1 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDCDTIMESTAMP = 0x400c7458 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLAG_CDTRCTS = 0x10 - TIOCFLAG_CLOCAL = 0x2 - TIOCFLAG_CRTSCTS = 0x4 - TIOCFLAG_MDMBUF = 0x8 - TIOCFLAG_SOFTCAR = 0x1 - TIOCFLUSH = 0x80047410 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGFLAGS = 0x4004745d - TIOCGLINED = 0x40207442 - TIOCGPGRP = 0x40047477 - TIOCGQSIZE = 0x40047481 - TIOCGRANTPT = 0x20007447 - TIOCGSID = 0x40047463 - TIOCGSIZE = 0x40087468 - TIOCGWINSZ = 0x40087468 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGET = 0x4004746a - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCPTMGET = 0x48087446 - TIOCPTSNAME = 0x48087448 - TIOCRCVFRAME = 0x80047445 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSFLAGS = 0x8004745c - TIOCSIG = 0x2000745f - TIOCSLINED = 0x80207443 - TIOCSPGRP = 0x80047476 - TIOCSQSIZE = 0x80047480 - TIOCSSIZE = 0x80087467 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x80047465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSWINSZ = 0x80087467 - TIOCUCNTL = 0x80047466 - TIOCXMTFRAME = 0x80047444 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WALL = 0x8 - WALLSIG = 0x8 - WALTSIG = 0x4 - WCLONE = 0x4 - WCOREFLAG = 0x80 - WNOHANG = 0x1 - WNOWAIT = 0x10000 - WNOZOMBIE = 0x20000 - WOPTSCHECKED = 0x40000 - WSTOPPED = 0x7f - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADMSG = syscall.Errno(0x58) - EBADRPC = 
syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x57) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x52) - EILSEQ = syscall.Errno(0x55) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x60) - ELOOP = syscall.Errno(0x3e) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - EMULTIHOP = syscall.Errno(0x5e) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x5d) - ENOBUFS = syscall.Errno(0x37) - ENODATA = syscall.Errno(0x59) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOLINK = syscall.Errno(0x5f) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x53) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x5a) - ENOSTR = syscall.Errno(0x5b) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x56) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x54) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTO = syscall.Errno(0x60) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIME = syscall.Errno(0x5c) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGPWR = syscall.Signal(0x20) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - 
SIGTERM = syscall.Signal(0xf) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large or too small", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol option not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "connection timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. 
not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "identifier removed", - 83: "no message of desired type", - 84: "value too large to be stored in data type", - 85: "illegal byte sequence", - 86: "not supported", - 87: "operation Canceled", - 88: "bad or Corrupt message", - 89: "no message available", - 90: "no STREAM resources", - 91: "not a STREAM", - 92: "STREAM ioctl timeout", - 93: "attribute not found", - 94: "multihop attempted", - 95: "link has been severed", - 96: "protocol error", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "stopped (signal)", - 18: "stopped", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "power fail/restart", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_openbsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_openbsd_386.go deleted file mode 100644 index 3322e998d30..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_openbsd_386.go +++ /dev/null @@ -1,1584 +0,0 @@ -// mkerrors.sh -m32 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,openbsd - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m32 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_BLUETOOTH = 0x20 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_ENCAP = 0x1c - AF_HYLINK = 0xf - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x18 - AF_IPX = 0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_KEY = 0x1e - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x24 - AF_MPLS = 0x21 - AF_NATM = 0x1b - AF_NS = 0x6 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x11 - AF_SIP = 0x1d - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - ARPHRD_ETHER = 0x1 - ARPHRD_FRELAY = 0xf - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B9600 = 0x2580 - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDIRFILT = 0x4004427c - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc008427b - BIOCGETIF = 0x4020426b - BIOCGFILDROP = 0x40044278 - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044273 - BIOCGRTIMEOUT = 0x400c426e - BIOCGSTATS = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCLOCK = 
0x20004276 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDIRFILT = 0x8004427d - BIOCSDLT = 0x8004427a - BIOCSETF = 0x80084267 - BIOCSETIF = 0x8020426c - BIOCSETWF = 0x80084277 - BIOCSFILDROP = 0x80044279 - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044272 - BIOCSRTIMEOUT = 0x800c426d - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIRECTION_IN = 0x1 - BPF_DIRECTION_OUT = 0x2 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x200000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0xff - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - DIOCOSFPFLUSH = 0x2000444e - DLT_ARCNET = 0x7 - DLT_ATM_RFC1483 = 0xb - DLT_AX25 = 0x3 - DLT_CHAOS = 0x5 - DLT_C_HDLC = 0x68 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0xd - DLT_FDDI = 0xa - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_LOOP = 0xc - DLT_MPLS = 0xdb - DLT_NULL = 0x0 - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_SERIAL = 0x32 - DLT_PRONET = 0x4 - DLT_RAW = 0xe - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EMT_TAGOVF = 0x1 - EMUL_ENABLED = 0x1 - EMUL_NATIVE = 0x2 - ENDRUNDISC = 0x9 - ETHERMIN = 0x2e - ETHERMTU = 0x5dc - ETHERTYPE_8023 = 0x4 - ETHERTYPE_AARP = 0x80f3 - ETHERTYPE_ACCTON = 0x8390 - ETHERTYPE_AEONIC = 0x8036 - ETHERTYPE_ALPHA = 0x814a - ETHERTYPE_AMBER = 0x6008 - ETHERTYPE_AMOEBA = 0x8145 - ETHERTYPE_AOE = 0x88a2 - ETHERTYPE_APOLLO = 0x80f7 - ETHERTYPE_APOLLODOMAIN = 0x8019 - ETHERTYPE_APPLETALK = 0x809b - ETHERTYPE_APPLITEK = 0x80c7 - ETHERTYPE_ARGONAUT = 0x803a - ETHERTYPE_ARP = 0x806 - ETHERTYPE_AT = 0x809b - ETHERTYPE_ATALK = 0x809b - ETHERTYPE_ATOMIC = 0x86df - ETHERTYPE_ATT = 0x8069 - ETHERTYPE_ATTSTANFORD = 0x8008 - ETHERTYPE_AUTOPHON = 0x806a - ETHERTYPE_AXIS = 0x8856 - ETHERTYPE_BCLOOP = 0x9003 - ETHERTYPE_BOFL = 0x8102 - ETHERTYPE_CABLETRON = 0x7034 - ETHERTYPE_CHAOS = 0x804 - ETHERTYPE_COMDESIGN = 0x806c - ETHERTYPE_COMPUGRAPHIC = 0x806d - ETHERTYPE_COUNTERPOINT = 0x8062 - ETHERTYPE_CRONUS = 0x8004 - ETHERTYPE_CRONUSVLN = 0x8003 - ETHERTYPE_DCA = 0x1234 - ETHERTYPE_DDE = 0x807b - ETHERTYPE_DEBNI = 0xaaaa - ETHERTYPE_DECAM = 0x8048 - ETHERTYPE_DECCUST = 0x6006 - ETHERTYPE_DECDIAG = 0x6005 - ETHERTYPE_DECDNS = 0x803c - ETHERTYPE_DECDTS = 0x803e - ETHERTYPE_DECEXPER = 0x6000 - ETHERTYPE_DECLAST = 0x8041 - ETHERTYPE_DECLTM = 0x803f - ETHERTYPE_DECMUMPS = 0x6009 - ETHERTYPE_DECNETBIOS = 0x8040 - ETHERTYPE_DELTACON = 0x86de - ETHERTYPE_DIDDLE = 0x4321 - 
ETHERTYPE_DLOG1 = 0x660 - ETHERTYPE_DLOG2 = 0x661 - ETHERTYPE_DN = 0x6003 - ETHERTYPE_DOGFIGHT = 0x1989 - ETHERTYPE_DSMD = 0x8039 - ETHERTYPE_ECMA = 0x803 - ETHERTYPE_ENCRYPT = 0x803d - ETHERTYPE_ES = 0x805d - ETHERTYPE_EXCELAN = 0x8010 - ETHERTYPE_EXPERDATA = 0x8049 - ETHERTYPE_FLIP = 0x8146 - ETHERTYPE_FLOWCONTROL = 0x8808 - ETHERTYPE_FRARP = 0x808 - ETHERTYPE_GENDYN = 0x8068 - ETHERTYPE_HAYES = 0x8130 - ETHERTYPE_HIPPI_FP = 0x8180 - ETHERTYPE_HITACHI = 0x8820 - ETHERTYPE_HP = 0x8005 - ETHERTYPE_IEEEPUP = 0xa00 - ETHERTYPE_IEEEPUPAT = 0xa01 - ETHERTYPE_IMLBL = 0x4c42 - ETHERTYPE_IMLBLDIAG = 0x424c - ETHERTYPE_IP = 0x800 - ETHERTYPE_IPAS = 0x876c - ETHERTYPE_IPV6 = 0x86dd - ETHERTYPE_IPX = 0x8137 - ETHERTYPE_IPXNEW = 0x8037 - ETHERTYPE_KALPANA = 0x8582 - ETHERTYPE_LANBRIDGE = 0x8038 - ETHERTYPE_LANPROBE = 0x8888 - ETHERTYPE_LAT = 0x6004 - ETHERTYPE_LBACK = 0x9000 - ETHERTYPE_LITTLE = 0x8060 - ETHERTYPE_LLDP = 0x88cc - ETHERTYPE_LOGICRAFT = 0x8148 - ETHERTYPE_LOOPBACK = 0x9000 - ETHERTYPE_MATRA = 0x807a - ETHERTYPE_MAX = 0xffff - ETHERTYPE_MERIT = 0x807c - ETHERTYPE_MICP = 0x873a - ETHERTYPE_MOPDL = 0x6001 - ETHERTYPE_MOPRC = 0x6002 - ETHERTYPE_MOTOROLA = 0x818d - ETHERTYPE_MPLS = 0x8847 - ETHERTYPE_MPLS_MCAST = 0x8848 - ETHERTYPE_MUMPS = 0x813f - ETHERTYPE_NBPCC = 0x3c04 - ETHERTYPE_NBPCLAIM = 0x3c09 - ETHERTYPE_NBPCLREQ = 0x3c05 - ETHERTYPE_NBPCLRSP = 0x3c06 - ETHERTYPE_NBPCREQ = 0x3c02 - ETHERTYPE_NBPCRSP = 0x3c03 - ETHERTYPE_NBPDG = 0x3c07 - ETHERTYPE_NBPDGB = 0x3c08 - ETHERTYPE_NBPDLTE = 0x3c0a - ETHERTYPE_NBPRAR = 0x3c0c - ETHERTYPE_NBPRAS = 0x3c0b - ETHERTYPE_NBPRST = 0x3c0d - ETHERTYPE_NBPSCD = 0x3c01 - ETHERTYPE_NBPVCD = 0x3c00 - ETHERTYPE_NBS = 0x802 - ETHERTYPE_NCD = 0x8149 - ETHERTYPE_NESTAR = 0x8006 - ETHERTYPE_NETBEUI = 0x8191 - ETHERTYPE_NOVELL = 0x8138 - ETHERTYPE_NS = 0x600 - ETHERTYPE_NSAT = 0x601 - ETHERTYPE_NSCOMPAT = 0x807 - ETHERTYPE_NTRAILER = 0x10 - ETHERTYPE_OS9 = 0x7007 - ETHERTYPE_OS9NET = 0x7009 - ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e - ETHERTYPE_PCS = 0x4242 - ETHERTYPE_PLANNING = 0x8044 - ETHERTYPE_PPP = 0x880b - ETHERTYPE_PPPOE = 0x8864 - ETHERTYPE_PPPOEDISC = 0x8863 - ETHERTYPE_PRIMENTS = 0x7031 - ETHERTYPE_PUP = 0x200 - ETHERTYPE_PUPAT = 0x200 - ETHERTYPE_QINQ = 0x88a8 - ETHERTYPE_RACAL = 0x7030 - ETHERTYPE_RATIONAL = 0x8150 - ETHERTYPE_RAWFR = 0x6559 - ETHERTYPE_RCL = 0x1995 - ETHERTYPE_RDP = 0x8739 - ETHERTYPE_RETIX = 0x80f2 - ETHERTYPE_REVARP = 0x8035 - ETHERTYPE_SCA = 0x6007 - ETHERTYPE_SECTRA = 0x86db - ETHERTYPE_SECUREDATA = 0x876d - ETHERTYPE_SGITW = 0x817e - ETHERTYPE_SG_BOUNCE = 0x8016 - ETHERTYPE_SG_DIAG = 0x8013 - ETHERTYPE_SG_NETGAMES = 0x8014 - ETHERTYPE_SG_RESV = 0x8015 - ETHERTYPE_SIMNET = 0x5208 - ETHERTYPE_SLOW = 0x8809 - ETHERTYPE_SNA = 0x80d5 - ETHERTYPE_SNMP = 0x814c - ETHERTYPE_SONIX = 0xfaf5 - ETHERTYPE_SPIDER = 0x809f - ETHERTYPE_SPRITE = 0x500 - ETHERTYPE_STP = 0x8181 - ETHERTYPE_TALARIS = 0x812b - ETHERTYPE_TALARISMC = 0x852b - ETHERTYPE_TCPCOMP = 0x876b - ETHERTYPE_TCPSM = 0x9002 - ETHERTYPE_TEC = 0x814f - ETHERTYPE_TIGAN = 0x802f - ETHERTYPE_TRAIL = 0x1000 - ETHERTYPE_TRANSETHER = 0x6558 - ETHERTYPE_TYMSHARE = 0x802e - ETHERTYPE_UBBST = 0x7005 - ETHERTYPE_UBDEBUG = 0x900 - ETHERTYPE_UBDIAGLOOP = 0x7002 - ETHERTYPE_UBDL = 0x7000 - ETHERTYPE_UBNIU = 0x7001 - ETHERTYPE_UBNMC = 0x7003 - ETHERTYPE_VALID = 0x1600 - ETHERTYPE_VARIAN = 0x80dd - ETHERTYPE_VAXELN = 0x803b - ETHERTYPE_VEECO = 0x8067 - ETHERTYPE_VEXP = 0x805b - ETHERTYPE_VGLAB = 0x8131 - ETHERTYPE_VINES = 0xbad - ETHERTYPE_VINESECHO = 0xbaf - ETHERTYPE_VINESLOOP = 
0xbae - ETHERTYPE_VITAL = 0xff00 - ETHERTYPE_VLAN = 0x8100 - ETHERTYPE_VLTLMAN = 0x8080 - ETHERTYPE_VPROD = 0x805c - ETHERTYPE_VURESERVED = 0x8147 - ETHERTYPE_WATERLOO = 0x8130 - ETHERTYPE_WELLFLEET = 0x8103 - ETHERTYPE_X25 = 0x805 - ETHERTYPE_X75 = 0x801 - ETHERTYPE_XNSSM = 0x9001 - ETHERTYPE_XTP = 0x817d - ETHER_ADDR_LEN = 0x6 - ETHER_ALIGN = 0x2 - ETHER_CRC_LEN = 0x4 - ETHER_CRC_POLY_BE = 0x4c11db6 - ETHER_CRC_POLY_LE = 0xedb88320 - ETHER_HDR_LEN = 0xe - ETHER_MAX_DIX_LEN = 0x600 - ETHER_MAX_LEN = 0x5ee - ETHER_MIN_LEN = 0x40 - ETHER_TYPE_LEN = 0x2 - ETHER_VLAN_ENCAP_LEN = 0x4 - EVFILT_AIO = -0x3 - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x7 - EVFILT_TIMER = -0x7 - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0xa - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETOWN = 0x5 - F_OK = 0x0 - F_RDLCK = 0x1 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFA_ROUTE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x8e52 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BLUETOOTH = 0xf8 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf7 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DOCSCABLEUPSTREAMCHANNEL = 0xcd - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DUMMY = 0xf1 - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ECONET = 0xce - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf3 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - 
IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INFINIBAND = 0xc7 - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LINEGROUP = 0xd2 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf5 - IFT_PFLOW = 0xf9 - IFT_PFSYNC = 0xf6 - IFT_PLC = 0xae - IFT_PON155 = 0xcf - IFT_PON622 = 0xd0 - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPATM = 0xc5 - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf2 - IFT_Q2931 = 0xc9 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SIPSIG = 0xcc - IFT_SIPTG = 0xcb - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TELINK = 0xc8 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VIRTUALTG = 0xca - IFT_VOICEDID = 0xd5 - IFT_VOICEEM = 0x64 - IFT_VOICEEMFGD = 0xd3 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFGDEANA = 0xd4 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERCABLE = 0xc6 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - 
IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET = 0x7f - IN_RFC3021_HOST = 0x1 - IN_RFC3021_NET = 0xfffffffe - IN_RFC3021_NSHIFT = 0x1f - IPPROTO_AH = 0x33 - IPPROTO_CARP = 0x70 - IPPROTO_DIVERT = 0x102 - IPPROTO_DIVERT_INIT = 0x2 - IPPROTO_DIVERT_RESP = 0x1 - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPIP = 0x4 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x103 - IPPROTO_MOBILE = 0x37 - IPPROTO_MPLS = 0x89 - IPPROTO_NONE = 0x3b - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPV6_AUTH_LEVEL = 0x35 - IPV6_AUTOFLOWLABEL = 0x3b - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_ESP_NETWORK_LEVEL = 0x37 - IPV6_ESP_TRANS_LEVEL = 0x36 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPCOMP_LEVEL = 0x3c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXPACKET = 0xffff - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_OPTIONS = 0x1 - IPV6_PATHMTU = 0x2c - IPV6_PIPEX = 0x3f - IPV6_PKTINFO = 0x2e - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVDSTPORT = 0x40 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTABLE = 0x1021 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_AUTH_LEVEL = 0x14 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DIVERTFL = 0x1022 - IP_DROP_MEMBERSHIP = 0xd - IP_ESP_NETWORK_LEVEL = 0x16 - IP_ESP_TRANS_LEVEL = 0x15 - IP_HDRINCL = 0x2 - IP_IPCOMP_LEVEL = 0x1d - IP_IPSECFLOWINFO = 0x24 - IP_IPSEC_LOCAL_AUTH = 0x1b - IP_IPSEC_LOCAL_CRED = 0x19 - IP_IPSEC_LOCAL_ID = 0x17 - IP_IPSEC_REMOTE_AUTH = 0x1c - IP_IPSEC_REMOTE_CRED = 0x1a - IP_IPSEC_REMOTE_ID = 0x18 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0xfff - IP_MF = 0x2000 - IP_MINTTL = 0x20 - IP_MIN_MEMBERSHIPS = 0xf - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x1 - IP_PIPEX = 0x22 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVDSTPORT = 0x21 - IP_RECVIF = 0x1e - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVRTABLE = 0x23 - IP_RECVTTL = 0x1f - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - 
IP_RTABLE = 0x1021 - IP_TOS = 0x3 - IP_TTL = 0x4 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 - LCNT_OVERLOAD_FLUSH = 0x6 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x6 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_SPACEAVAIL = 0x5 - MADV_WILLNEED = 0x3 - MAP_ANON = 0x1000 - MAP_COPY = 0x4 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_FLAGMASK = 0x1ff7 - MAP_HASSEMAPHORE = 0x200 - MAP_INHERIT = 0x80 - MAP_INHERIT_COPY = 0x1 - MAP_INHERIT_DONATE_COPY = 0x3 - MAP_INHERIT_NONE = 0x2 - MAP_INHERIT_SHARE = 0x0 - MAP_NOEXTEND = 0x100 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_SHARED = 0x1 - MAP_TRYFIXED = 0x400 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_BCAST = 0x100 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOR = 0x8 - MSG_MCAST = 0x200 - MSG_NOSIGNAL = 0x400 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x4 - MS_SYNC = 0x2 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_MAXID = 0x6 - NET_RT_STATS = 0x4 - NET_RT_TABLE = 0x5 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EOF = 0x2 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRUNCATE = 0x80 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - ONLCR = 0x2 - ONLRET = 0x80 - ONOCR = 0x40 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x10000 - O_CREAT = 0x200 - O_DIRECTORY = 0x20000 - O_DSYNC = 0x80 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x80 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PF_FLUSH = 0x1 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - PT_MASK = 0x3ff000 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_LABEL = 0xa - RTAX_MAX = 0xb - RTAX_NETMASK = 0x2 - RTAX_SRC = 0x8 - RTAX_SRCMASK = 0x9 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_LABEL = 0x400 - RTA_NETMASK = 0x4 - RTA_SRC = 0x100 - RTA_SRCMASK = 0x200 - RTF_ANNOUNCE = 0x4000 - RTF_BLACKHOLE = 0x1000 - RTF_CLONED = 0x10000 - RTF_CLONING = 0x100 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_FMASK = 0x10f808 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_LLINFO = 0x400 - RTF_MASK = 0x80 - RTF_MODIFIED = 0x20 - RTF_MPATH = 0x40000 - RTF_MPLS = 0x100000 - RTF_PERMANENT_ARP = 0x2000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x2000 - RTF_REJECT = 0x8 - RTF_SOURCE = 0x20000 - RTF_STATIC = 0x800 - RTF_TUNNEL = 0x100000 - RTF_UP = 0x1 - RTF_USETRAILERS = 0x8000 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DESYNC = 
0x10 - RTM_GET = 0x4 - RTM_IFANNOUNCE = 0xf - RTM_IFINFO = 0xe - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MAXSIZE = 0x800 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RT_TABLEID_MAX = 0xff - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x4 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCAIFADDR = 0x8040691a - SIOCAIFGROUP = 0x80246987 - SIOCALIFADDR = 0x8218691c - SIOCATMARK = 0x40047307 - SIOCBRDGADD = 0x8054693c - SIOCBRDGADDS = 0x80546941 - SIOCBRDGARL = 0x806e694d - SIOCBRDGDADDR = 0x81286947 - SIOCBRDGDEL = 0x8054693d - SIOCBRDGDELS = 0x80546942 - SIOCBRDGFLUSH = 0x80546948 - SIOCBRDGFRL = 0x806e694e - SIOCBRDGGCACHE = 0xc0146941 - SIOCBRDGGFD = 0xc0146952 - SIOCBRDGGHT = 0xc0146951 - SIOCBRDGGIFFLGS = 0xc054693e - SIOCBRDGGMA = 0xc0146953 - SIOCBRDGGPARAM = 0xc03c6958 - SIOCBRDGGPRI = 0xc0146950 - SIOCBRDGGRL = 0xc028694f - SIOCBRDGGSIFS = 0xc054693c - SIOCBRDGGTO = 0xc0146946 - SIOCBRDGIFS = 0xc0546942 - SIOCBRDGRTS = 0xc0186943 - SIOCBRDGSADDR = 0xc1286944 - SIOCBRDGSCACHE = 0x80146940 - SIOCBRDGSFD = 0x80146952 - SIOCBRDGSHT = 0x80146951 - SIOCBRDGSIFCOST = 0x80546955 - SIOCBRDGSIFFLGS = 0x8054693f - SIOCBRDGSIFPRIO = 0x80546954 - SIOCBRDGSMA = 0x80146953 - SIOCBRDGSPRI = 0x80146950 - SIOCBRDGSPROTO = 0x8014695a - SIOCBRDGSTO = 0x80146945 - SIOCBRDGSTXHC = 0x80146959 - SIOCDELMULTI = 0x80206932 - SIOCDIFADDR = 0x80206919 - SIOCDIFGROUP = 0x80246989 - SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8218691e - SIOCGETKALIVE = 0xc01869a4 - SIOCGETLABEL = 0x8020699a - SIOCGETPFLOW = 0xc02069fe - SIOCGETPFSYNC = 0xc02069f8 - SIOCGETSGCNT = 0xc0147534 - SIOCGETVIFCNT = 0xc0147533 - SIOCGETVLAN = 0xc0206990 - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFASYNCMAP = 0xc020697c - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCONF = 0xc0086924 - SIOCGIFDATA = 0xc020691b - SIOCGIFDESCR = 0xc0206981 - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGATTR = 0xc024698b - SIOCGIFGENERIC = 0xc020693a - SIOCGIFGMEMB = 0xc024698a - SIOCGIFGROUP = 0xc0246988 - SIOCGIFHARDMTU = 0xc02069a5 - SIOCGIFMEDIA = 0xc0286936 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc020697e - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 - SIOCGIFPRIORITY = 0xc020699c - SIOCGIFPSRCADDR = 0xc0206947 - SIOCGIFRDOMAIN = 0xc02069a0 - SIOCGIFRTLABEL = 0xc0206983 - SIOCGIFTIMESLOT = 0xc0206986 - SIOCGIFXFLAGS = 0xc020699e - SIOCGLIFADDR = 0xc218691d - SIOCGLIFPHYADDR = 0xc218694b - SIOCGLIFPHYRTABLE = 0xc02069a2 - SIOCGLIFPHYTTL = 0xc02069a9 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGSPPPPARAMS = 0xc0206994 - SIOCGVH = 0xc02069f6 - SIOCGVNETID = 0xc02069a7 - SIOCIFCREATE = 0x8020697a - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc00c6978 - SIOCSETKALIVE = 0x801869a3 - SIOCSETLABEL = 0x80206999 - SIOCSETPFLOW = 0x802069fd - SIOCSETPFSYNC = 0x802069f7 - SIOCSETVLAN = 0x8020698f - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFASYNCMAP = 0x8020697d - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFDESCR = 0x80206980 - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGATTR = 0x8024698c - SIOCSIFGENERIC = 0x80206939 - SIOCSIFLLADDR = 0x8020691f - SIOCSIFMEDIA = 0xc0206935 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x8020697f - SIOCSIFNETMASK = 
0x80206916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSIFPRIORITY = 0x8020699b - SIOCSIFRDOMAIN = 0x8020699f - SIOCSIFRTLABEL = 0x80206982 - SIOCSIFTIMESLOT = 0x80206985 - SIOCSIFXFLAGS = 0x8020699d - SIOCSLIFPHYADDR = 0x8218694a - SIOCSLIFPHYRTABLE = 0x802069a1 - SIOCSLIFPHYTTL = 0x802069a8 - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SIOCSSPPPPARAMS = 0x80206993 - SIOCSVH = 0xc02069f5 - SIOCSVNETID = 0x802069a6 - SOCK_DGRAM = 0x2 - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_BINDANY = 0x1000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LINGER = 0x80 - SO_NETPROC = 0x1020 - SO_OOBINLINE = 0x100 - SO_PEERCRED = 0x1022 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_RTABLE = 0x1021 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_SPLICE = 0x1023 - SO_TIMESTAMP = 0x800 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_MAXBURST = 0x4 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x3 - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0x4 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_NOPUSH = 0x10 - TCP_NSTATES = 0xb - TCP_SACK_ENABLE = 0x8 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLAG_CLOCAL = 0x2 - TIOCFLAG_CRTSCTS = 0x4 - TIOCFLAG_MDMBUF = 0x8 - TIOCFLAG_PPS = 0x10 - TIOCFLAG_SOFTCAR = 0x1 - TIOCFLUSH = 0x80047410 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGFLAGS = 0x4004745d - TIOCGPGRP = 0x40047477 - TIOCGSID = 0x40047463 - TIOCGTSTAMP = 0x400c745b - TIOCGWINSZ = 0x40087468 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGET = 0x4004746a - TIOCMODG = 0x4004746a - TIOCMODS = 0x8004746d - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSFLAGS = 0x8004745c - TIOCSIG = 0x8004745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x80047465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSTSTAMP = 0x8008745a - TIOCSWINSZ = 0x80087467 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WALTSIG = 0x4 - WCONTINUED = 0x8 - WCOREFLAG = 0x80 - WNOHANG = 0x1 - WSTOPPED = 0x7f - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = 
syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x58) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x59) - EILSEQ = syscall.Errno(0x54) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EIPSEC = syscall.Errno(0x52) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x5b) - ELOOP = syscall.Errno(0x3e) - EMEDIUMTYPE = syscall.Errno(0x56) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x53) - ENOBUFS = syscall.Errno(0x37) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOMEDIUM = syscall.Errno(0x55) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x5a) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x5b) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x57) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTHR = syscall.Signal(0x20) - 
SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "connection timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. 
not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "IPsec processing failure", - 83: "attribute not found", - 84: "illegal byte sequence", - 85: "no medium found", - 86: "wrong medium type", - 87: "value too large to be stored in data type", - 88: "operation canceled", - 89: "identifier removed", - 90: "no message of desired type", - 91: "not supported", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "stopped (signal)", - 18: "stopped", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "thread AST", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_openbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_openbsd_amd64.go deleted file mode 100644 index 1758ecca93e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_openbsd_amd64.go +++ /dev/null @@ -1,1583 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,openbsd - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_APPLETALK = 0x10 - AF_BLUETOOTH = 0x20 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_CNT = 0x15 - AF_COIP = 0x14 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_E164 = 0x1a - AF_ECMA = 0x8 - AF_ENCAP = 0x1c - AF_HYLINK = 0xf - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x18 - AF_IPX = 0x17 - AF_ISDN = 0x1a - AF_ISO = 0x7 - AF_KEY = 0x1e - AF_LAT = 0xe - AF_LINK = 0x12 - AF_LOCAL = 0x1 - AF_MAX = 0x24 - AF_MPLS = 0x21 - AF_NATM = 0x1b - AF_NS = 0x6 - AF_OSI = 0x7 - AF_PUP = 0x4 - AF_ROUTE = 0x11 - AF_SIP = 0x1d - AF_SNA = 0xb - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - ARPHRD_ETHER = 0x1 - ARPHRD_FRELAY = 0xf - ARPHRD_IEEE1394 = 0x18 - ARPHRD_IEEE802 = 0x6 - B0 = 0x0 - B110 = 0x6e - B115200 = 0x1c200 - B1200 = 0x4b0 - B134 = 0x86 - B14400 = 0x3840 - B150 = 0x96 - B1800 = 0x708 - B19200 = 0x4b00 - B200 = 0xc8 - B230400 = 0x38400 - B2400 = 0x960 - B28800 = 0x7080 - B300 = 0x12c - B38400 = 0x9600 - B4800 = 0x12c0 - B50 = 0x32 - B57600 = 0xe100 - B600 = 0x258 - B7200 = 0x1c20 - B75 = 0x4b - B76800 = 0x12c00 - B9600 = 0x2580 - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDIRFILT = 0x4004427c - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = 0xc010427b - BIOCGETIF = 0x4020426b - BIOCGFILDROP = 0x40044278 - BIOCGHDRCMPLT = 0x40044274 - BIOCGRSIG = 0x40044273 - BIOCGRTIMEOUT = 0x4010426e - BIOCGSTATS = 0x4008426f - BIOCIMMEDIATE = 0x80044270 - BIOCLOCK = 0x20004276 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = 0xc0044266 - BIOCSDIRFILT = 0x8004427d - BIOCSDLT = 0x8004427a - BIOCSETF = 0x80104267 - 
BIOCSETIF = 0x8020426c - BIOCSETWF = 0x80104277 - BIOCSFILDROP = 0x80044279 - BIOCSHDRCMPLT = 0x80044275 - BIOCSRSIG = 0x80044272 - BIOCSRTIMEOUT = 0x8010426d - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DIRECTION_IN = 0x1 - BPF_DIRECTION_OUT = 0x2 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x200000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x8000 - CREAD = 0x800 - CS5 = 0x0 - CS6 = 0x100 - CS7 = 0x200 - CS8 = 0x300 - CSIZE = 0x300 - CSTART = 0x11 - CSTATUS = 0xff - CSTOP = 0x13 - CSTOPB = 0x400 - CSUSP = 0x1a - CTL_MAXNAME = 0xc - CTL_NET = 0x4 - DIOCOSFPFLUSH = 0x2000444e - DLT_ARCNET = 0x7 - DLT_ATM_RFC1483 = 0xb - DLT_AX25 = 0x3 - DLT_CHAOS = 0x5 - DLT_C_HDLC = 0x68 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0xd - DLT_FDDI = 0xa - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_LOOP = 0xc - DLT_MPLS = 0xdb - DLT_NULL = 0x0 - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0x10 - DLT_PPP_ETHER = 0x33 - DLT_PPP_SERIAL = 0x32 - DLT_PRONET = 0x4 - DLT_RAW = 0xe - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xf - DT_BLK = 0x6 - DT_CHR = 0x2 - DT_DIR = 0x4 - DT_FIFO = 0x1 - DT_LNK = 0xa - DT_REG = 0x8 - DT_SOCK = 0xc - DT_UNKNOWN = 0x0 - ECHO = 0x8 - ECHOCTL = 0x40 - ECHOE = 0x2 - ECHOK = 0x4 - ECHOKE = 0x1 - ECHONL = 0x10 - ECHOPRT = 0x20 - EMT_TAGOVF = 0x1 - EMUL_ENABLED = 0x1 - EMUL_NATIVE = 0x2 - ENDRUNDISC = 0x9 - ETHERMIN = 0x2e - ETHERMTU = 0x5dc - ETHERTYPE_8023 = 0x4 - ETHERTYPE_AARP = 0x80f3 - ETHERTYPE_ACCTON = 0x8390 - ETHERTYPE_AEONIC = 0x8036 - ETHERTYPE_ALPHA = 0x814a - ETHERTYPE_AMBER = 0x6008 - ETHERTYPE_AMOEBA = 0x8145 - ETHERTYPE_AOE = 0x88a2 - ETHERTYPE_APOLLO = 0x80f7 - ETHERTYPE_APOLLODOMAIN = 0x8019 - ETHERTYPE_APPLETALK = 0x809b - ETHERTYPE_APPLITEK = 0x80c7 - ETHERTYPE_ARGONAUT = 0x803a - ETHERTYPE_ARP = 0x806 - ETHERTYPE_AT = 0x809b - ETHERTYPE_ATALK = 0x809b - ETHERTYPE_ATOMIC = 0x86df - ETHERTYPE_ATT = 0x8069 - ETHERTYPE_ATTSTANFORD = 0x8008 - ETHERTYPE_AUTOPHON = 0x806a - ETHERTYPE_AXIS = 0x8856 - ETHERTYPE_BCLOOP = 0x9003 - ETHERTYPE_BOFL = 0x8102 - ETHERTYPE_CABLETRON = 0x7034 - ETHERTYPE_CHAOS = 0x804 - ETHERTYPE_COMDESIGN = 0x806c - ETHERTYPE_COMPUGRAPHIC = 0x806d - ETHERTYPE_COUNTERPOINT = 0x8062 - ETHERTYPE_CRONUS = 0x8004 - ETHERTYPE_CRONUSVLN = 0x8003 - ETHERTYPE_DCA = 0x1234 - ETHERTYPE_DDE = 0x807b - ETHERTYPE_DEBNI = 0xaaaa - ETHERTYPE_DECAM = 0x8048 - ETHERTYPE_DECCUST = 0x6006 - ETHERTYPE_DECDIAG = 0x6005 - ETHERTYPE_DECDNS = 0x803c - ETHERTYPE_DECDTS = 0x803e - ETHERTYPE_DECEXPER = 0x6000 - ETHERTYPE_DECLAST = 0x8041 - ETHERTYPE_DECLTM = 0x803f - ETHERTYPE_DECMUMPS = 0x6009 - ETHERTYPE_DECNETBIOS = 0x8040 - ETHERTYPE_DELTACON = 0x86de - ETHERTYPE_DIDDLE = 0x4321 - ETHERTYPE_DLOG1 = 0x660 - ETHERTYPE_DLOG2 = 0x661 - ETHERTYPE_DN = 0x6003 - ETHERTYPE_DOGFIGHT = 0x1989 - ETHERTYPE_DSMD = 0x8039 - 
ETHERTYPE_ECMA = 0x803 - ETHERTYPE_ENCRYPT = 0x803d - ETHERTYPE_ES = 0x805d - ETHERTYPE_EXCELAN = 0x8010 - ETHERTYPE_EXPERDATA = 0x8049 - ETHERTYPE_FLIP = 0x8146 - ETHERTYPE_FLOWCONTROL = 0x8808 - ETHERTYPE_FRARP = 0x808 - ETHERTYPE_GENDYN = 0x8068 - ETHERTYPE_HAYES = 0x8130 - ETHERTYPE_HIPPI_FP = 0x8180 - ETHERTYPE_HITACHI = 0x8820 - ETHERTYPE_HP = 0x8005 - ETHERTYPE_IEEEPUP = 0xa00 - ETHERTYPE_IEEEPUPAT = 0xa01 - ETHERTYPE_IMLBL = 0x4c42 - ETHERTYPE_IMLBLDIAG = 0x424c - ETHERTYPE_IP = 0x800 - ETHERTYPE_IPAS = 0x876c - ETHERTYPE_IPV6 = 0x86dd - ETHERTYPE_IPX = 0x8137 - ETHERTYPE_IPXNEW = 0x8037 - ETHERTYPE_KALPANA = 0x8582 - ETHERTYPE_LANBRIDGE = 0x8038 - ETHERTYPE_LANPROBE = 0x8888 - ETHERTYPE_LAT = 0x6004 - ETHERTYPE_LBACK = 0x9000 - ETHERTYPE_LITTLE = 0x8060 - ETHERTYPE_LLDP = 0x88cc - ETHERTYPE_LOGICRAFT = 0x8148 - ETHERTYPE_LOOPBACK = 0x9000 - ETHERTYPE_MATRA = 0x807a - ETHERTYPE_MAX = 0xffff - ETHERTYPE_MERIT = 0x807c - ETHERTYPE_MICP = 0x873a - ETHERTYPE_MOPDL = 0x6001 - ETHERTYPE_MOPRC = 0x6002 - ETHERTYPE_MOTOROLA = 0x818d - ETHERTYPE_MPLS = 0x8847 - ETHERTYPE_MPLS_MCAST = 0x8848 - ETHERTYPE_MUMPS = 0x813f - ETHERTYPE_NBPCC = 0x3c04 - ETHERTYPE_NBPCLAIM = 0x3c09 - ETHERTYPE_NBPCLREQ = 0x3c05 - ETHERTYPE_NBPCLRSP = 0x3c06 - ETHERTYPE_NBPCREQ = 0x3c02 - ETHERTYPE_NBPCRSP = 0x3c03 - ETHERTYPE_NBPDG = 0x3c07 - ETHERTYPE_NBPDGB = 0x3c08 - ETHERTYPE_NBPDLTE = 0x3c0a - ETHERTYPE_NBPRAR = 0x3c0c - ETHERTYPE_NBPRAS = 0x3c0b - ETHERTYPE_NBPRST = 0x3c0d - ETHERTYPE_NBPSCD = 0x3c01 - ETHERTYPE_NBPVCD = 0x3c00 - ETHERTYPE_NBS = 0x802 - ETHERTYPE_NCD = 0x8149 - ETHERTYPE_NESTAR = 0x8006 - ETHERTYPE_NETBEUI = 0x8191 - ETHERTYPE_NOVELL = 0x8138 - ETHERTYPE_NS = 0x600 - ETHERTYPE_NSAT = 0x601 - ETHERTYPE_NSCOMPAT = 0x807 - ETHERTYPE_NTRAILER = 0x10 - ETHERTYPE_OS9 = 0x7007 - ETHERTYPE_OS9NET = 0x7009 - ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e - ETHERTYPE_PCS = 0x4242 - ETHERTYPE_PLANNING = 0x8044 - ETHERTYPE_PPP = 0x880b - ETHERTYPE_PPPOE = 0x8864 - ETHERTYPE_PPPOEDISC = 0x8863 - ETHERTYPE_PRIMENTS = 0x7031 - ETHERTYPE_PUP = 0x200 - ETHERTYPE_PUPAT = 0x200 - ETHERTYPE_QINQ = 0x88a8 - ETHERTYPE_RACAL = 0x7030 - ETHERTYPE_RATIONAL = 0x8150 - ETHERTYPE_RAWFR = 0x6559 - ETHERTYPE_RCL = 0x1995 - ETHERTYPE_RDP = 0x8739 - ETHERTYPE_RETIX = 0x80f2 - ETHERTYPE_REVARP = 0x8035 - ETHERTYPE_SCA = 0x6007 - ETHERTYPE_SECTRA = 0x86db - ETHERTYPE_SECUREDATA = 0x876d - ETHERTYPE_SGITW = 0x817e - ETHERTYPE_SG_BOUNCE = 0x8016 - ETHERTYPE_SG_DIAG = 0x8013 - ETHERTYPE_SG_NETGAMES = 0x8014 - ETHERTYPE_SG_RESV = 0x8015 - ETHERTYPE_SIMNET = 0x5208 - ETHERTYPE_SLOW = 0x8809 - ETHERTYPE_SNA = 0x80d5 - ETHERTYPE_SNMP = 0x814c - ETHERTYPE_SONIX = 0xfaf5 - ETHERTYPE_SPIDER = 0x809f - ETHERTYPE_SPRITE = 0x500 - ETHERTYPE_STP = 0x8181 - ETHERTYPE_TALARIS = 0x812b - ETHERTYPE_TALARISMC = 0x852b - ETHERTYPE_TCPCOMP = 0x876b - ETHERTYPE_TCPSM = 0x9002 - ETHERTYPE_TEC = 0x814f - ETHERTYPE_TIGAN = 0x802f - ETHERTYPE_TRAIL = 0x1000 - ETHERTYPE_TRANSETHER = 0x6558 - ETHERTYPE_TYMSHARE = 0x802e - ETHERTYPE_UBBST = 0x7005 - ETHERTYPE_UBDEBUG = 0x900 - ETHERTYPE_UBDIAGLOOP = 0x7002 - ETHERTYPE_UBDL = 0x7000 - ETHERTYPE_UBNIU = 0x7001 - ETHERTYPE_UBNMC = 0x7003 - ETHERTYPE_VALID = 0x1600 - ETHERTYPE_VARIAN = 0x80dd - ETHERTYPE_VAXELN = 0x803b - ETHERTYPE_VEECO = 0x8067 - ETHERTYPE_VEXP = 0x805b - ETHERTYPE_VGLAB = 0x8131 - ETHERTYPE_VINES = 0xbad - ETHERTYPE_VINESECHO = 0xbaf - ETHERTYPE_VINESLOOP = 0xbae - ETHERTYPE_VITAL = 0xff00 - ETHERTYPE_VLAN = 0x8100 - ETHERTYPE_VLTLMAN = 0x8080 - ETHERTYPE_VPROD = 0x805c - 
ETHERTYPE_VURESERVED = 0x8147 - ETHERTYPE_WATERLOO = 0x8130 - ETHERTYPE_WELLFLEET = 0x8103 - ETHERTYPE_X25 = 0x805 - ETHERTYPE_X75 = 0x801 - ETHERTYPE_XNSSM = 0x9001 - ETHERTYPE_XTP = 0x817d - ETHER_ADDR_LEN = 0x6 - ETHER_ALIGN = 0x2 - ETHER_CRC_LEN = 0x4 - ETHER_CRC_POLY_BE = 0x4c11db6 - ETHER_CRC_POLY_LE = 0xedb88320 - ETHER_HDR_LEN = 0xe - ETHER_MAX_DIX_LEN = 0x600 - ETHER_MAX_LEN = 0x5ee - ETHER_MIN_LEN = 0x40 - ETHER_TYPE_LEN = 0x2 - ETHER_VLAN_ENCAP_LEN = 0x4 - EVFILT_AIO = -0x3 - EVFILT_PROC = -0x5 - EVFILT_READ = -0x1 - EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x7 - EVFILT_TIMER = -0x7 - EVFILT_VNODE = -0x4 - EVFILT_WRITE = -0x2 - EV_ADD = 0x1 - EV_CLEAR = 0x20 - EV_DELETE = 0x2 - EV_DISABLE = 0x8 - EV_ENABLE = 0x4 - EV_EOF = 0x8000 - EV_ERROR = 0x4000 - EV_FLAG1 = 0x2000 - EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 - EXTA = 0x4b00 - EXTB = 0x9600 - EXTPROC = 0x800 - FD_CLOEXEC = 0x1 - FD_SETSIZE = 0x400 - FLUSHO = 0x800000 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0xa - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0x7 - F_GETOWN = 0x5 - F_OK = 0x0 - F_RDLCK = 0x1 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x8 - F_SETLKW = 0x9 - F_SETOWN = 0x6 - F_UNLCK = 0x2 - F_WRLCK = 0x3 - HUPCL = 0x4000 - ICANON = 0x100 - ICMP6_FILTER = 0x12 - ICRNL = 0x100 - IEXTEN = 0x400 - IFAN_ARRIVAL = 0x0 - IFAN_DEPARTURE = 0x1 - IFA_ROUTE = 0x1 - IFF_ALLMULTI = 0x200 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x8e52 - IFF_DEBUG = 0x4 - IFF_LINK0 = 0x1000 - IFF_LINK1 = 0x2000 - IFF_LINK2 = 0x4000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x8000 - IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 - IFF_OACTIVE = 0x400 - IFF_POINTOPOINT = 0x10 - IFF_PROMISC = 0x100 - IFF_RUNNING = 0x40 - IFF_SIMPLEX = 0x800 - IFF_UP = 0x1 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_A12MPPSWITCH = 0x82 - IFT_AAL2 = 0xbb - IFT_AAL5 = 0x31 - IFT_ADSL = 0x5e - IFT_AFLANE8023 = 0x3b - IFT_AFLANE8025 = 0x3c - IFT_ARAP = 0x58 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ASYNC = 0x54 - IFT_ATM = 0x25 - IFT_ATMDXI = 0x69 - IFT_ATMFUNI = 0x6a - IFT_ATMIMA = 0x6b - IFT_ATMLOGICAL = 0x50 - IFT_ATMRADIO = 0xbd - IFT_ATMSUBINTERFACE = 0x86 - IFT_ATMVCIENDPT = 0xc2 - IFT_ATMVIRTUAL = 0x95 - IFT_BGPPOLICYACCOUNTING = 0xa2 - IFT_BLUETOOTH = 0xf8 - IFT_BRIDGE = 0xd1 - IFT_BSC = 0x53 - IFT_CARP = 0xf7 - IFT_CCTEMUL = 0x3d - IFT_CEPT = 0x13 - IFT_CES = 0x85 - IFT_CHANNEL = 0x46 - IFT_CNR = 0x55 - IFT_COFFEE = 0x84 - IFT_COMPOSITELINK = 0x9b - IFT_DCN = 0x8d - IFT_DIGITALPOWERLINE = 0x8a - IFT_DIGITALWRAPPEROVERHEADCHANNEL = 0xba - IFT_DLSW = 0x4a - IFT_DOCSCABLEDOWNSTREAM = 0x80 - IFT_DOCSCABLEMACLAYER = 0x7f - IFT_DOCSCABLEUPSTREAM = 0x81 - IFT_DOCSCABLEUPSTREAMCHANNEL = 0xcd - IFT_DS0 = 0x51 - IFT_DS0BUNDLE = 0x52 - IFT_DS1FDL = 0xaa - IFT_DS3 = 0x1e - IFT_DTM = 0x8c - IFT_DUMMY = 0xf1 - IFT_DVBASILN = 0xac - IFT_DVBASIOUT = 0xad - IFT_DVBRCCDOWNSTREAM = 0x93 - IFT_DVBRCCMACLAYER = 0x92 - IFT_DVBRCCUPSTREAM = 0x94 - IFT_ECONET = 0xce - IFT_ENC = 0xf4 - IFT_EON = 0x19 - IFT_EPLRS = 0x57 - IFT_ESCON = 0x49 - IFT_ETHER = 0x6 - IFT_FAITH = 0xf3 - IFT_FAST = 0x7d - IFT_FASTETHER = 0x3e - IFT_FASTETHERFX = 0x45 - IFT_FDDI = 0xf - IFT_FIBRECHANNEL = 0x38 - IFT_FRAMERELAYINTERCONNECT = 0x3a - IFT_FRAMERELAYMPI = 0x5c - IFT_FRDLCIENDPT = 0xc1 - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_FRF16MFRBUNDLE = 0xa3 - IFT_FRFORWARD = 0x9e - IFT_G703AT2MB = 0x43 - IFT_G703AT64K = 0x42 - IFT_GIF = 0xf0 - IFT_GIGABITETHERNET = 0x75 - IFT_GR303IDT = 0xb2 - IFT_GR303RDT = 0xb1 - IFT_H323GATEKEEPER = 0xa4 - IFT_H323PROXY = 0xa5 - IFT_HDH1822 = 0x3 - IFT_HDLC = 0x76 - IFT_HDSL2 = 
0xa8 - IFT_HIPERLAN2 = 0xb7 - IFT_HIPPI = 0x2f - IFT_HIPPIINTERFACE = 0x39 - IFT_HOSTPAD = 0x5a - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IBM370PARCHAN = 0x48 - IFT_IDSL = 0x9a - IFT_IEEE1394 = 0x90 - IFT_IEEE80211 = 0x47 - IFT_IEEE80212 = 0x37 - IFT_IEEE8023ADLAG = 0xa1 - IFT_IFGSN = 0x91 - IFT_IMT = 0xbe - IFT_INFINIBAND = 0xc7 - IFT_INTERLEAVE = 0x7c - IFT_IP = 0x7e - IFT_IPFORWARD = 0x8e - IFT_IPOVERATM = 0x72 - IFT_IPOVERCDLC = 0x6d - IFT_IPOVERCLAW = 0x6e - IFT_IPSWITCH = 0x4e - IFT_ISDN = 0x3f - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISDNS = 0x4b - IFT_ISDNU = 0x4c - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88025CRFPINT = 0x62 - IFT_ISO88025DTR = 0x56 - IFT_ISO88025FIBER = 0x73 - IFT_ISO88026 = 0xa - IFT_ISUP = 0xb3 - IFT_L2VLAN = 0x87 - IFT_L3IPVLAN = 0x88 - IFT_L3IPXVLAN = 0x89 - IFT_LAPB = 0x10 - IFT_LAPD = 0x4d - IFT_LAPF = 0x77 - IFT_LINEGROUP = 0xd2 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MEDIAMAILOVERIP = 0x8b - IFT_MFSIGLINK = 0xa7 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_MPC = 0x71 - IFT_MPLS = 0xa6 - IFT_MPLSTUNNEL = 0x96 - IFT_MSDSL = 0x8f - IFT_MVL = 0xbf - IFT_MYRINET = 0x63 - IFT_NFAS = 0xaf - IFT_NSIP = 0x1b - IFT_OPTICALCHANNEL = 0xc3 - IFT_OPTICALTRANSPORT = 0xc4 - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PFLOG = 0xf5 - IFT_PFLOW = 0xf9 - IFT_PFSYNC = 0xf6 - IFT_PLC = 0xae - IFT_PON155 = 0xcf - IFT_PON622 = 0xd0 - IFT_POS = 0xab - IFT_PPP = 0x17 - IFT_PPPMULTILINKBUNDLE = 0x6c - IFT_PROPATM = 0xc5 - IFT_PROPBWAP2MP = 0xb8 - IFT_PROPCNLS = 0x59 - IFT_PROPDOCSWIRELESSDOWNSTREAM = 0xb5 - IFT_PROPDOCSWIRELESSMACLAYER = 0xb4 - IFT_PROPDOCSWIRELESSUPSTREAM = 0xb6 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PROPWIRELESSP2P = 0x9d - IFT_PTPSERIAL = 0x16 - IFT_PVC = 0xf2 - IFT_Q2931 = 0xc9 - IFT_QLLC = 0x44 - IFT_RADIOMAC = 0xbc - IFT_RADSL = 0x5f - IFT_REACHDSL = 0xc0 - IFT_RFC1483 = 0x9f - IFT_RS232 = 0x21 - IFT_RSRB = 0x4f - IFT_SDLC = 0x11 - IFT_SDSL = 0x60 - IFT_SHDSL = 0xa9 - IFT_SIP = 0x1f - IFT_SIPSIG = 0xcc - IFT_SIPTG = 0xcb - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETOVERHEADCHANNEL = 0xb9 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_SRP = 0x97 - IFT_SS7SIGLINK = 0x9c - IFT_STACKTOSTACK = 0x6f - IFT_STARLAN = 0xb - IFT_T1 = 0x12 - IFT_TDLC = 0x74 - IFT_TELINK = 0xc8 - IFT_TERMPAD = 0x5b - IFT_TR008 = 0xb0 - IFT_TRANSPHDLC = 0x7b - IFT_TUNNEL = 0x83 - IFT_ULTRA = 0x1d - IFT_USB = 0xa0 - IFT_V11 = 0x40 - IFT_V35 = 0x2d - IFT_V36 = 0x41 - IFT_V37 = 0x78 - IFT_VDSL = 0x61 - IFT_VIRTUALIPADDRESS = 0x70 - IFT_VIRTUALTG = 0xca - IFT_VOICEDID = 0xd5 - IFT_VOICEEM = 0x64 - IFT_VOICEEMFGD = 0xd3 - IFT_VOICEENCAP = 0x67 - IFT_VOICEFGDEANA = 0xd4 - IFT_VOICEFXO = 0x65 - IFT_VOICEFXS = 0x66 - IFT_VOICEOVERATM = 0x98 - IFT_VOICEOVERCABLE = 0xc6 - IFT_VOICEOVERFRAMERELAY = 0x99 - IFT_VOICEOVERIP = 0x68 - IFT_X213 = 0x5d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25HUNTGROUP = 0x7a - IFT_X25MLP = 0x79 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_LOOPBACKNET 
= 0x7f - IN_RFC3021_HOST = 0x1 - IN_RFC3021_NET = 0xfffffffe - IN_RFC3021_NSHIFT = 0x1f - IPPROTO_AH = 0x33 - IPPROTO_CARP = 0x70 - IPPROTO_DIVERT = 0x102 - IPPROTO_DIVERT_INIT = 0x2 - IPPROTO_DIVERT_RESP = 0x1 - IPPROTO_DONE = 0x101 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x62 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_ETHERIP = 0x61 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_GRE = 0x2f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPCOMP = 0x6c - IPPROTO_IPIP = 0x4 - IPPROTO_IPV4 = 0x4 - IPPROTO_IPV6 = 0x29 - IPPROTO_MAX = 0x100 - IPPROTO_MAXID = 0x103 - IPPROTO_MOBILE = 0x37 - IPPROTO_MPLS = 0x89 - IPPROTO_NONE = 0x3b - IPPROTO_PFSYNC = 0xf0 - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_TCP = 0x6 - IPPROTO_TP = 0x1d - IPPROTO_UDP = 0x11 - IPV6_AUTH_LEVEL = 0x35 - IPV6_AUTOFLOWLABEL = 0x3b - IPV6_CHECKSUM = 0x1a - IPV6_DEFAULT_MULTICAST_HOPS = 0x1 - IPV6_DEFAULT_MULTICAST_LOOP = 0x1 - IPV6_DEFHLIM = 0x40 - IPV6_DONTFRAG = 0x3e - IPV6_DSTOPTS = 0x32 - IPV6_ESP_NETWORK_LEVEL = 0x37 - IPV6_ESP_TRANS_LEVEL = 0x36 - IPV6_FAITH = 0x1d - IPV6_FLOWINFO_MASK = 0xffffff0f - IPV6_FLOWLABEL_MASK = 0xffff0f00 - IPV6_FRAGTTL = 0x78 - IPV6_HLIMDEC = 0x1 - IPV6_HOPLIMIT = 0x2f - IPV6_HOPOPTS = 0x31 - IPV6_IPCOMP_LEVEL = 0x3c - IPV6_JOIN_GROUP = 0xc - IPV6_LEAVE_GROUP = 0xd - IPV6_MAXHLIM = 0xff - IPV6_MAXPACKET = 0xffff - IPV6_MMTU = 0x500 - IPV6_MULTICAST_HOPS = 0xa - IPV6_MULTICAST_IF = 0x9 - IPV6_MULTICAST_LOOP = 0xb - IPV6_NEXTHOP = 0x30 - IPV6_OPTIONS = 0x1 - IPV6_PATHMTU = 0x2c - IPV6_PIPEX = 0x3f - IPV6_PKTINFO = 0x2e - IPV6_PORTRANGE = 0xe - IPV6_PORTRANGE_DEFAULT = 0x0 - IPV6_PORTRANGE_HIGH = 0x1 - IPV6_PORTRANGE_LOW = 0x2 - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVDSTPORT = 0x40 - IPV6_RECVHOPLIMIT = 0x25 - IPV6_RECVHOPOPTS = 0x27 - IPV6_RECVPATHMTU = 0x2b - IPV6_RECVPKTINFO = 0x24 - IPV6_RECVRTHDR = 0x26 - IPV6_RECVTCLASS = 0x39 - IPV6_RTABLE = 0x1021 - IPV6_RTHDR = 0x33 - IPV6_RTHDRDSTOPTS = 0x23 - IPV6_RTHDR_LOOSE = 0x0 - IPV6_RTHDR_STRICT = 0x1 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SOCKOPT_RESERVED1 = 0x3 - IPV6_TCLASS = 0x3d - IPV6_UNICAST_HOPS = 0x4 - IPV6_USE_MIN_MTU = 0x2a - IPV6_V6ONLY = 0x1b - IPV6_VERSION = 0x60 - IPV6_VERSION_MASK = 0xf0 - IP_ADD_MEMBERSHIP = 0xc - IP_AUTH_LEVEL = 0x14 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DIVERTFL = 0x1022 - IP_DROP_MEMBERSHIP = 0xd - IP_ESP_NETWORK_LEVEL = 0x16 - IP_ESP_TRANS_LEVEL = 0x15 - IP_HDRINCL = 0x2 - IP_IPCOMP_LEVEL = 0x1d - IP_IPSECFLOWINFO = 0x24 - IP_IPSEC_LOCAL_AUTH = 0x1b - IP_IPSEC_LOCAL_CRED = 0x19 - IP_IPSEC_LOCAL_ID = 0x17 - IP_IPSEC_REMOTE_AUTH = 0x1c - IP_IPSEC_REMOTE_CRED = 0x1a - IP_IPSEC_REMOTE_ID = 0x18 - IP_MAXPACKET = 0xffff - IP_MAX_MEMBERSHIPS = 0xfff - IP_MF = 0x2000 - IP_MINTTL = 0x20 - IP_MIN_MEMBERSHIPS = 0xf - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x9 - IP_MULTICAST_LOOP = 0xb - IP_MULTICAST_TTL = 0xa - IP_OFFMASK = 0x1fff - IP_OPTIONS = 0x1 - IP_PIPEX = 0x22 - IP_PORTRANGE = 0x13 - IP_PORTRANGE_DEFAULT = 0x0 - IP_PORTRANGE_HIGH = 0x1 - IP_PORTRANGE_LOW = 0x2 - IP_RECVDSTADDR = 0x7 - IP_RECVDSTPORT = 0x21 - IP_RECVIF = 0x1e - IP_RECVOPTS = 0x5 - IP_RECVRETOPTS = 0x6 - IP_RECVRTABLE = 0x23 - IP_RECVTTL = 0x1f - IP_RETOPTS = 0x8 - IP_RF = 0x8000 - IP_RTABLE = 0x1021 - IP_TOS = 0x3 - IP_TTL = 0x4 - ISIG = 0x80 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x400 - IXON = 0x200 
- LCNT_OVERLOAD_FLUSH = 0x6 - LOCK_EX = 0x2 - LOCK_NB = 0x4 - LOCK_SH = 0x1 - LOCK_UN = 0x8 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x6 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_SPACEAVAIL = 0x5 - MADV_WILLNEED = 0x3 - MAP_ANON = 0x1000 - MAP_COPY = 0x4 - MAP_FILE = 0x0 - MAP_FIXED = 0x10 - MAP_FLAGMASK = 0x1ff7 - MAP_HASSEMAPHORE = 0x200 - MAP_INHERIT = 0x80 - MAP_INHERIT_COPY = 0x1 - MAP_INHERIT_DONATE_COPY = 0x3 - MAP_INHERIT_NONE = 0x2 - MAP_INHERIT_SHARE = 0x0 - MAP_NOEXTEND = 0x100 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_SHARED = 0x1 - MAP_TRYFIXED = 0x400 - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_BCAST = 0x100 - MSG_CTRUNC = 0x20 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_EOR = 0x8 - MSG_MCAST = 0x200 - MSG_NOSIGNAL = 0x400 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x10 - MSG_WAITALL = 0x40 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x4 - MS_SYNC = 0x2 - NAME_MAX = 0xff - NET_RT_DUMP = 0x1 - NET_RT_FLAGS = 0x2 - NET_RT_IFLIST = 0x3 - NET_RT_MAXID = 0x6 - NET_RT_STATS = 0x4 - NET_RT_TABLE = 0x5 - NOFLSH = 0x80000000 - NOTE_ATTRIB = 0x8 - NOTE_CHILD = 0x4 - NOTE_DELETE = 0x1 - NOTE_EOF = 0x2 - NOTE_EXEC = 0x20000000 - NOTE_EXIT = 0x80000000 - NOTE_EXTEND = 0x4 - NOTE_FORK = 0x40000000 - NOTE_LINK = 0x10 - NOTE_LOWAT = 0x1 - NOTE_PCTRLMASK = 0xf0000000 - NOTE_PDATAMASK = 0xfffff - NOTE_RENAME = 0x20 - NOTE_REVOKE = 0x40 - NOTE_TRACK = 0x1 - NOTE_TRACKERR = 0x2 - NOTE_TRUNCATE = 0x80 - NOTE_WRITE = 0x2 - OCRNL = 0x10 - ONLCR = 0x2 - ONLRET = 0x80 - ONOCR = 0x40 - ONOEOT = 0x8 - OPOST = 0x1 - O_ACCMODE = 0x3 - O_APPEND = 0x8 - O_ASYNC = 0x40 - O_CLOEXEC = 0x10000 - O_CREAT = 0x200 - O_DIRECTORY = 0x20000 - O_DSYNC = 0x80 - O_EXCL = 0x800 - O_EXLOCK = 0x20 - O_FSYNC = 0x80 - O_NDELAY = 0x4 - O_NOCTTY = 0x8000 - O_NOFOLLOW = 0x100 - O_NONBLOCK = 0x4 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x80 - O_SHLOCK = 0x10 - O_SYNC = 0x80 - O_TRUNC = 0x400 - O_WRONLY = 0x1 - PARENB = 0x1000 - PARMRK = 0x8 - PARODD = 0x2000 - PENDIN = 0x20000000 - PF_FLUSH = 0x1 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x8 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = 0x7fffffffffffffff - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_LABEL = 0xa - RTAX_MAX = 0xb - RTAX_NETMASK = 0x2 - RTAX_SRC = 0x8 - RTAX_SRCMASK = 0x9 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_LABEL = 0x400 - RTA_NETMASK = 0x4 - RTA_SRC = 0x100 - RTA_SRCMASK = 0x200 - RTF_ANNOUNCE = 0x4000 - RTF_BLACKHOLE = 0x1000 - RTF_CLONED = 0x10000 - RTF_CLONING = 0x100 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_FMASK = 0x10f808 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_LLINFO = 0x400 - RTF_MASK = 0x80 - RTF_MODIFIED = 0x20 - RTF_MPATH = 0x40000 - RTF_MPLS = 0x100000 - RTF_PERMANENT_ARP = 0x2000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_PROTO3 = 0x2000 - RTF_REJECT = 0x8 - RTF_SOURCE = 0x20000 - RTF_STATIC = 0x800 - RTF_TUNNEL = 0x100000 - RTF_UP = 0x1 - RTF_USETRAILERS = 0x8000 - RTF_XRESOLVE = 0x200 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_DESYNC = 0x10 - RTM_GET = 0x4 - RTM_IFANNOUNCE = 0xf - RTM_IFINFO = 0xe - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MAXSIZE = 0x800 - RTM_MISS = 0x7 - 
RTM_NEWADDR = 0xc - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 - RTM_VERSION = 0x5 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RT_TABLEID_MAX = 0xff - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - RUSAGE_THREAD = 0x1 - SCM_RIGHTS = 0x1 - SCM_TIMESTAMP = 0x4 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIOCADDMULTI = 0x80206931 - SIOCAIFADDR = 0x8040691a - SIOCAIFGROUP = 0x80286987 - SIOCALIFADDR = 0x8218691c - SIOCATMARK = 0x40047307 - SIOCBRDGADD = 0x8058693c - SIOCBRDGADDS = 0x80586941 - SIOCBRDGARL = 0x806e694d - SIOCBRDGDADDR = 0x81286947 - SIOCBRDGDEL = 0x8058693d - SIOCBRDGDELS = 0x80586942 - SIOCBRDGFLUSH = 0x80586948 - SIOCBRDGFRL = 0x806e694e - SIOCBRDGGCACHE = 0xc0146941 - SIOCBRDGGFD = 0xc0146952 - SIOCBRDGGHT = 0xc0146951 - SIOCBRDGGIFFLGS = 0xc058693e - SIOCBRDGGMA = 0xc0146953 - SIOCBRDGGPARAM = 0xc0406958 - SIOCBRDGGPRI = 0xc0146950 - SIOCBRDGGRL = 0xc030694f - SIOCBRDGGSIFS = 0xc058693c - SIOCBRDGGTO = 0xc0146946 - SIOCBRDGIFS = 0xc0586942 - SIOCBRDGRTS = 0xc0206943 - SIOCBRDGSADDR = 0xc1286944 - SIOCBRDGSCACHE = 0x80146940 - SIOCBRDGSFD = 0x80146952 - SIOCBRDGSHT = 0x80146951 - SIOCBRDGSIFCOST = 0x80586955 - SIOCBRDGSIFFLGS = 0x8058693f - SIOCBRDGSIFPRIO = 0x80586954 - SIOCBRDGSMA = 0x80146953 - SIOCBRDGSPRI = 0x80146950 - SIOCBRDGSPROTO = 0x8014695a - SIOCBRDGSTO = 0x80146945 - SIOCBRDGSTXHC = 0x80146959 - SIOCDELMULTI = 0x80206932 - SIOCDIFADDR = 0x80206919 - SIOCDIFGROUP = 0x80286989 - SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8218691e - SIOCGETKALIVE = 0xc01869a4 - SIOCGETLABEL = 0x8020699a - SIOCGETPFLOW = 0xc02069fe - SIOCGETPFSYNC = 0xc02069f8 - SIOCGETSGCNT = 0xc0207534 - SIOCGETVIFCNT = 0xc0287533 - SIOCGETVLAN = 0xc0206990 - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = 0xc0206921 - SIOCGIFASYNCMAP = 0xc020697c - SIOCGIFBRDADDR = 0xc0206923 - SIOCGIFCONF = 0xc0106924 - SIOCGIFDATA = 0xc020691b - SIOCGIFDESCR = 0xc0206981 - SIOCGIFDSTADDR = 0xc0206922 - SIOCGIFFLAGS = 0xc0206911 - SIOCGIFGATTR = 0xc028698b - SIOCGIFGENERIC = 0xc020693a - SIOCGIFGMEMB = 0xc028698a - SIOCGIFGROUP = 0xc0286988 - SIOCGIFHARDMTU = 0xc02069a5 - SIOCGIFMEDIA = 0xc0306936 - SIOCGIFMETRIC = 0xc0206917 - SIOCGIFMTU = 0xc020697e - SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 - SIOCGIFPRIORITY = 0xc020699c - SIOCGIFPSRCADDR = 0xc0206947 - SIOCGIFRDOMAIN = 0xc02069a0 - SIOCGIFRTLABEL = 0xc0206983 - SIOCGIFTIMESLOT = 0xc0206986 - SIOCGIFXFLAGS = 0xc020699e - SIOCGLIFADDR = 0xc218691d - SIOCGLIFPHYADDR = 0xc218694b - SIOCGLIFPHYRTABLE = 0xc02069a2 - SIOCGLIFPHYTTL = 0xc02069a9 - SIOCGLOWAT = 0x40047303 - SIOCGPGRP = 0x40047309 - SIOCGSPPPPARAMS = 0xc0206994 - SIOCGVH = 0xc02069f6 - SIOCGVNETID = 0xc02069a7 - SIOCIFCREATE = 0x8020697a - SIOCIFDESTROY = 0x80206979 - SIOCIFGCLONERS = 0xc0106978 - SIOCSETKALIVE = 0x801869a3 - SIOCSETLABEL = 0x80206999 - SIOCSETPFLOW = 0x802069fd - SIOCSETPFSYNC = 0x802069f7 - SIOCSETVLAN = 0x8020698f - SIOCSHIWAT = 0x80047300 - SIOCSIFADDR = 0x8020690c - SIOCSIFASYNCMAP = 0x8020697d - SIOCSIFBRDADDR = 0x80206913 - SIOCSIFDESCR = 0x80206980 - SIOCSIFDSTADDR = 0x8020690e - SIOCSIFFLAGS = 0x80206910 - SIOCSIFGATTR = 0x8028698c - SIOCSIFGENERIC = 0x80206939 - SIOCSIFLLADDR = 0x8020691f - SIOCSIFMEDIA = 0xc0206935 - SIOCSIFMETRIC = 0x80206918 - SIOCSIFMTU = 0x8020697f - SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x80406946 - SIOCSIFPRIORITY = 0x8020699b - SIOCSIFRDOMAIN = 0x8020699f - SIOCSIFRTLABEL = 0x80206982 - 
SIOCSIFTIMESLOT = 0x80206985 - SIOCSIFXFLAGS = 0x8020699d - SIOCSLIFPHYADDR = 0x8218694a - SIOCSLIFPHYRTABLE = 0x802069a1 - SIOCSLIFPHYTTL = 0x802069a8 - SIOCSLOWAT = 0x80047302 - SIOCSPGRP = 0x80047308 - SIOCSSPPPPARAMS = 0x80206993 - SIOCSVH = 0xc02069f5 - SIOCSVNETID = 0x802069a6 - SOCK_DGRAM = 0x2 - SOCK_RAW = 0x3 - SOCK_RDM = 0x4 - SOCK_SEQPACKET = 0x5 - SOCK_STREAM = 0x1 - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_BINDANY = 0x1000 - SO_BROADCAST = 0x20 - SO_DEBUG = 0x1 - SO_DONTROUTE = 0x10 - SO_ERROR = 0x1007 - SO_KEEPALIVE = 0x8 - SO_LINGER = 0x80 - SO_NETPROC = 0x1020 - SO_OOBINLINE = 0x100 - SO_PEERCRED = 0x1022 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVTIMEO = 0x1006 - SO_REUSEADDR = 0x4 - SO_REUSEPORT = 0x200 - SO_RTABLE = 0x1021 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_SPLICE = 0x1023 - SO_TIMESTAMP = 0x800 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - TCIFLUSH = 0x1 - TCIOFLUSH = 0x3 - TCOFLUSH = 0x2 - TCP_MAXBURST = 0x4 - TCP_MAXSEG = 0x2 - TCP_MAXWIN = 0xffff - TCP_MAX_SACK = 0x3 - TCP_MAX_WINSHIFT = 0xe - TCP_MD5SIG = 0x4 - TCP_MSS = 0x200 - TCP_NODELAY = 0x1 - TCP_NOPUSH = 0x10 - TCP_NSTATES = 0xb - TCP_SACK_ENABLE = 0x8 - TCSAFLUSH = 0x2 - TIOCCBRK = 0x2000747a - TIOCCDTR = 0x20007478 - TIOCCONS = 0x80047462 - TIOCDRAIN = 0x2000745e - TIOCEXCL = 0x2000740d - TIOCEXT = 0x80047460 - TIOCFLAG_CLOCAL = 0x2 - TIOCFLAG_CRTSCTS = 0x4 - TIOCFLAG_MDMBUF = 0x8 - TIOCFLAG_PPS = 0x10 - TIOCFLAG_SOFTCAR = 0x1 - TIOCFLUSH = 0x80047410 - TIOCGETA = 0x402c7413 - TIOCGETD = 0x4004741a - TIOCGFLAGS = 0x4004745d - TIOCGPGRP = 0x40047477 - TIOCGSID = 0x40047463 - TIOCGTSTAMP = 0x4010745b - TIOCGWINSZ = 0x40087468 - TIOCMBIC = 0x8004746b - TIOCMBIS = 0x8004746c - TIOCMGET = 0x4004746a - TIOCMODG = 0x4004746a - TIOCMODS = 0x8004746d - TIOCMSET = 0x8004746d - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x20007471 - TIOCNXCL = 0x2000740e - TIOCOUTQ = 0x40047473 - TIOCPKT = 0x80047470 - TIOCPKT_DATA = 0x0 - TIOCPKT_DOSTOP = 0x20 - TIOCPKT_FLUSHREAD = 0x1 - TIOCPKT_FLUSHWRITE = 0x2 - TIOCPKT_IOCTL = 0x40 - TIOCPKT_NOSTOP = 0x10 - TIOCPKT_START = 0x8 - TIOCPKT_STOP = 0x4 - TIOCREMOTE = 0x80047469 - TIOCSBRK = 0x2000747b - TIOCSCTTY = 0x20007461 - TIOCSDTR = 0x20007479 - TIOCSETA = 0x802c7414 - TIOCSETAF = 0x802c7416 - TIOCSETAW = 0x802c7415 - TIOCSETD = 0x8004741b - TIOCSFLAGS = 0x8004745c - TIOCSIG = 0x8004745f - TIOCSPGRP = 0x80047476 - TIOCSTART = 0x2000746e - TIOCSTAT = 0x80047465 - TIOCSTI = 0x80017472 - TIOCSTOP = 0x2000746f - TIOCSTSTAMP = 0x8008745a - TIOCSWINSZ = 0x80087467 - TIOCUCNTL = 0x80047466 - TOSTOP = 0x400000 - VDISCARD = 0xf - VDSUSP = 0xb - VEOF = 0x0 - VEOL = 0x1 - VEOL2 = 0x2 - VERASE = 0x3 - VINTR = 0x8 - VKILL = 0x5 - VLNEXT = 0xe - VMIN = 0x10 - VQUIT = 0x9 - VREPRINT = 0x6 - VSTART = 0xc - VSTATUS = 0x12 - VSTOP = 0xd - VSUSP = 0xa - VTIME = 0x11 - VWERASE = 0x4 - WALTSIG = 0x4 - WCONTINUED = 0x8 - WCOREFLAG = 0x80 - WNOHANG = 0x1 - WSTOPPED = 0x7f - WUNTRACED = 0x2 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x30) - EADDRNOTAVAIL = syscall.Errno(0x31) - EAFNOSUPPORT = syscall.Errno(0x2f) - EAGAIN = syscall.Errno(0x23) - EALREADY = syscall.Errno(0x25) - EAUTH = syscall.Errno(0x50) - EBADF = syscall.Errno(0x9) - EBADRPC = syscall.Errno(0x48) - EBUSY = syscall.Errno(0x10) - 
ECANCELED = syscall.Errno(0x58) - ECHILD = syscall.Errno(0xa) - ECONNABORTED = syscall.Errno(0x35) - ECONNREFUSED = syscall.Errno(0x3d) - ECONNRESET = syscall.Errno(0x36) - EDEADLK = syscall.Errno(0xb) - EDESTADDRREQ = syscall.Errno(0x27) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x45) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EFTYPE = syscall.Errno(0x4f) - EHOSTDOWN = syscall.Errno(0x40) - EHOSTUNREACH = syscall.Errno(0x41) - EIDRM = syscall.Errno(0x59) - EILSEQ = syscall.Errno(0x54) - EINPROGRESS = syscall.Errno(0x24) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EIPSEC = syscall.Errno(0x52) - EISCONN = syscall.Errno(0x38) - EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x5b) - ELOOP = syscall.Errno(0x3e) - EMEDIUMTYPE = syscall.Errno(0x56) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x28) - ENAMETOOLONG = syscall.Errno(0x3f) - ENEEDAUTH = syscall.Errno(0x51) - ENETDOWN = syscall.Errno(0x32) - ENETRESET = syscall.Errno(0x34) - ENETUNREACH = syscall.Errno(0x33) - ENFILE = syscall.Errno(0x17) - ENOATTR = syscall.Errno(0x53) - ENOBUFS = syscall.Errno(0x37) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x4d) - ENOMEDIUM = syscall.Errno(0x55) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x5a) - ENOPROTOOPT = syscall.Errno(0x2a) - ENOSPC = syscall.Errno(0x1c) - ENOSYS = syscall.Errno(0x4e) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x39) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x42) - ENOTSOCK = syscall.Errno(0x26) - ENOTSUP = syscall.Errno(0x5b) - ENOTTY = syscall.Errno(0x19) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x2d) - EOVERFLOW = syscall.Errno(0x57) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x2e) - EPIPE = syscall.Errno(0x20) - EPROCLIM = syscall.Errno(0x43) - EPROCUNAVAIL = syscall.Errno(0x4c) - EPROGMISMATCH = syscall.Errno(0x4b) - EPROGUNAVAIL = syscall.Errno(0x4a) - EPROTONOSUPPORT = syscall.Errno(0x2b) - EPROTOTYPE = syscall.Errno(0x29) - ERANGE = syscall.Errno(0x22) - EREMOTE = syscall.Errno(0x47) - EROFS = syscall.Errno(0x1e) - ERPCMISMATCH = syscall.Errno(0x49) - ESHUTDOWN = syscall.Errno(0x3a) - ESOCKTNOSUPPORT = syscall.Errno(0x2c) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESTALE = syscall.Errno(0x46) - ETIMEDOUT = syscall.Errno(0x3c) - ETOOMANYREFS = syscall.Errno(0x3b) - ETXTBSY = syscall.Errno(0x1a) - EUSERS = syscall.Errno(0x44) - EWOULDBLOCK = syscall.Errno(0x23) - EXDEV = syscall.Errno(0x12) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCHLD = syscall.Signal(0x14) - SIGCONT = syscall.Signal(0x13) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x1d) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x17) - SIGIOT = syscall.Signal(0x6) - SIGKILL = syscall.Signal(0x9) - SIGPIPE = syscall.Signal(0xd) - SIGPROF = syscall.Signal(0x1b) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x11) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTHR = syscall.Signal(0x20) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x12) - SIGTTIN = syscall.Signal(0x15) - SIGTTOU = syscall.Signal(0x16) - SIGURG = 
syscall.Signal(0x10) - SIGUSR1 = syscall.Signal(0x1e) - SIGUSR2 = syscall.Signal(0x1f) - SIGVTALRM = syscall.Signal(0x1a) - SIGWINCH = syscall.Signal(0x1c) - SIGXCPU = syscall.Signal(0x18) - SIGXFSZ = syscall.Signal(0x19) -) - -// Error table -var errors = [...]string{ - 1: "operation not permitted", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "input/output error", - 6: "device not configured", - 7: "argument list too long", - 8: "exec format error", - 9: "bad file descriptor", - 10: "no child processes", - 11: "resource deadlock avoided", - 12: "cannot allocate memory", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "operation not supported by device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "too many open files in system", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "numerical argument out of domain", - 34: "result too large", - 35: "resource temporarily unavailable", - 36: "operation now in progress", - 37: "operation already in progress", - 38: "socket operation on non-socket", - 39: "destination address required", - 40: "message too long", - 41: "protocol wrong type for socket", - 42: "protocol not available", - 43: "protocol not supported", - 44: "socket type not supported", - 45: "operation not supported", - 46: "protocol family not supported", - 47: "address family not supported by protocol family", - 48: "address already in use", - 49: "can't assign requested address", - 50: "network is down", - 51: "network is unreachable", - 52: "network dropped connection on reset", - 53: "software caused connection abort", - 54: "connection reset by peer", - 55: "no buffer space available", - 56: "socket is already connected", - 57: "socket is not connected", - 58: "can't send after socket shutdown", - 59: "too many references: can't splice", - 60: "connection timed out", - 61: "connection refused", - 62: "too many levels of symbolic links", - 63: "file name too long", - 64: "host is down", - 65: "no route to host", - 66: "directory not empty", - 67: "too many processes", - 68: "too many users", - 69: "disc quota exceeded", - 70: "stale NFS file handle", - 71: "too many levels of remote in path", - 72: "RPC struct is bad", - 73: "RPC version wrong", - 74: "RPC prog. 
not avail", - 75: "program version wrong", - 76: "bad procedure for program", - 77: "no locks available", - 78: "function not implemented", - 79: "inappropriate file type or format", - 80: "authentication error", - 81: "need authenticator", - 82: "IPsec processing failure", - 83: "attribute not found", - 84: "illegal byte sequence", - 85: "no medium found", - 86: "wrong medium type", - 87: "value too large to be stored in data type", - 88: "operation canceled", - 89: "identifier removed", - 90: "no message of desired type", - 91: "not supported", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal instruction", - 5: "trace/BPT trap", - 6: "abort trap", - 7: "EMT trap", - 8: "floating point exception", - 9: "killed", - 10: "bus error", - 11: "segmentation fault", - 12: "bad system call", - 13: "broken pipe", - 14: "alarm clock", - 15: "terminated", - 16: "urgent I/O condition", - 17: "stopped (signal)", - 18: "stopped", - 19: "continued", - 20: "child exited", - 21: "stopped (tty input)", - 22: "stopped (tty output)", - 23: "I/O possible", - 24: "cputime limit exceeded", - 25: "filesize limit exceeded", - 26: "virtual timer expired", - 27: "profiling timer expired", - 28: "window size changes", - 29: "information request", - 30: "user defined signal 1", - 31: "user defined signal 2", - 32: "thread AST", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_solaris_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_solaris_amd64.go deleted file mode 100644 index a08922b9818..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zerrors_solaris_amd64.go +++ /dev/null @@ -1,1436 +0,0 @@ -// mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,solaris - -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -m64 _const.go - -package unix - -import "syscall" - -const ( - AF_802 = 0x12 - AF_APPLETALK = 0x10 - AF_CCITT = 0xa - AF_CHAOS = 0x5 - AF_DATAKIT = 0x9 - AF_DECnet = 0xc - AF_DLI = 0xd - AF_ECMA = 0x8 - AF_FILE = 0x1 - AF_GOSIP = 0x16 - AF_HYLINK = 0xf - AF_IMPLINK = 0x3 - AF_INET = 0x2 - AF_INET6 = 0x1a - AF_INET_OFFLOAD = 0x1e - AF_IPX = 0x17 - AF_KEY = 0x1b - AF_LAT = 0xe - AF_LINK = 0x19 - AF_LOCAL = 0x1 - AF_MAX = 0x20 - AF_NBS = 0x7 - AF_NCA = 0x1c - AF_NIT = 0x11 - AF_NS = 0x6 - AF_OSI = 0x13 - AF_OSINET = 0x15 - AF_PACKET = 0x20 - AF_POLICY = 0x1d - AF_PUP = 0x4 - AF_ROUTE = 0x18 - AF_SNA = 0xb - AF_TRILL = 0x1f - AF_UNIX = 0x1 - AF_UNSPEC = 0x0 - AF_X25 = 0x14 - ARPHRD_ARCNET = 0x7 - ARPHRD_ATM = 0x10 - ARPHRD_AX25 = 0x3 - ARPHRD_CHAOS = 0x5 - ARPHRD_EETHER = 0x2 - ARPHRD_ETHER = 0x1 - ARPHRD_FC = 0x12 - ARPHRD_FRAME = 0xf - ARPHRD_HDLC = 0x11 - ARPHRD_IB = 0x20 - ARPHRD_IEEE802 = 0x6 - ARPHRD_IPATM = 0x13 - ARPHRD_METRICOM = 0x17 - ARPHRD_TUNNEL = 0x1f - B0 = 0x0 - B110 = 0x3 - B115200 = 0x12 - B1200 = 0x9 - B134 = 0x4 - B150 = 0x5 - B153600 = 0x13 - B1800 = 0xa - B19200 = 0xe - B200 = 0x6 - B230400 = 0x14 - B2400 = 0xb - B300 = 0x7 - B307200 = 0x15 - B38400 = 0xf - B460800 = 0x16 - B4800 = 0xc - B50 = 0x1 - B57600 = 0x10 - B600 = 0x8 - B75 = 0x2 - B76800 = 0x11 - B921600 = 0x17 - B9600 = 0xd - BIOCFLUSH = 0x20004268 - BIOCGBLEN = 0x40044266 - BIOCGDLT = 0x4004426a - BIOCGDLTLIST = -0x3fefbd89 - BIOCGDLTLIST32 = -0x3ff7bd89 - BIOCGETIF = 0x4020426b - BIOCGETLIF = 0x4078426b - BIOCGHDRCMPLT = 0x40044274 - BIOCGRTIMEOUT = 0x4010427b - 
BIOCGRTIMEOUT32 = 0x4008427b - BIOCGSEESENT = 0x40044278 - BIOCGSTATS = 0x4080426f - BIOCGSTATSOLD = 0x4008426f - BIOCIMMEDIATE = -0x7ffbbd90 - BIOCPROMISC = 0x20004269 - BIOCSBLEN = -0x3ffbbd9a - BIOCSDLT = -0x7ffbbd8a - BIOCSETF = -0x7fefbd99 - BIOCSETF32 = -0x7ff7bd99 - BIOCSETIF = -0x7fdfbd94 - BIOCSETLIF = -0x7f87bd94 - BIOCSHDRCMPLT = -0x7ffbbd8b - BIOCSRTIMEOUT = -0x7fefbd86 - BIOCSRTIMEOUT32 = -0x7ff7bd86 - BIOCSSEESENT = -0x7ffbbd87 - BIOCSTCPF = -0x7fefbd8e - BIOCSUDPF = -0x7fefbd8d - BIOCVERSION = 0x40044271 - BPF_A = 0x10 - BPF_ABS = 0x20 - BPF_ADD = 0x0 - BPF_ALIGNMENT = 0x4 - BPF_ALU = 0x4 - BPF_AND = 0x50 - BPF_B = 0x10 - BPF_DFLTBUFSIZE = 0x100000 - BPF_DIV = 0x30 - BPF_H = 0x8 - BPF_IMM = 0x0 - BPF_IND = 0x40 - BPF_JA = 0x0 - BPF_JEQ = 0x10 - BPF_JGE = 0x30 - BPF_JGT = 0x20 - BPF_JMP = 0x5 - BPF_JSET = 0x40 - BPF_K = 0x0 - BPF_LD = 0x0 - BPF_LDX = 0x1 - BPF_LEN = 0x80 - BPF_LSH = 0x60 - BPF_MAJOR_VERSION = 0x1 - BPF_MAXBUFSIZE = 0x1000000 - BPF_MAXINSNS = 0x200 - BPF_MEM = 0x60 - BPF_MEMWORDS = 0x10 - BPF_MINBUFSIZE = 0x20 - BPF_MINOR_VERSION = 0x1 - BPF_MISC = 0x7 - BPF_MSH = 0xa0 - BPF_MUL = 0x20 - BPF_NEG = 0x80 - BPF_OR = 0x40 - BPF_RELEASE = 0x30bb6 - BPF_RET = 0x6 - BPF_RSH = 0x70 - BPF_ST = 0x2 - BPF_STX = 0x3 - BPF_SUB = 0x10 - BPF_TAX = 0x0 - BPF_TXA = 0x80 - BPF_W = 0x0 - BPF_X = 0x8 - BRKINT = 0x2 - CFLUSH = 0xf - CLOCAL = 0x800 - CLOCK_HIGHRES = 0x4 - CLOCK_LEVEL = 0xa - CLOCK_MONOTONIC = 0x4 - CLOCK_PROCESS_CPUTIME_ID = 0x5 - CLOCK_PROF = 0x2 - CLOCK_REALTIME = 0x3 - CLOCK_THREAD_CPUTIME_ID = 0x2 - CLOCK_VIRTUAL = 0x1 - CREAD = 0x80 - CS5 = 0x0 - CS6 = 0x10 - CS7 = 0x20 - CS8 = 0x30 - CSIZE = 0x30 - CSTART = 0x11 - CSTATUS = 0x14 - CSTOP = 0x13 - CSTOPB = 0x40 - CSUSP = 0x1a - CSWTCH = 0x1a - DLT_AIRONET_HEADER = 0x78 - DLT_APPLE_IP_OVER_IEEE1394 = 0x8a - DLT_ARCNET = 0x7 - DLT_ARCNET_LINUX = 0x81 - DLT_ATM_CLIP = 0x13 - DLT_ATM_RFC1483 = 0xb - DLT_AURORA = 0x7e - DLT_AX25 = 0x3 - DLT_BACNET_MS_TP = 0xa5 - DLT_CHAOS = 0x5 - DLT_CISCO_IOS = 0x76 - DLT_C_HDLC = 0x68 - DLT_DOCSIS = 0x8f - DLT_ECONET = 0x73 - DLT_EN10MB = 0x1 - DLT_EN3MB = 0x2 - DLT_ENC = 0x6d - DLT_ERF_ETH = 0xaf - DLT_ERF_POS = 0xb0 - DLT_FDDI = 0xa - DLT_FRELAY = 0x6b - DLT_GCOM_SERIAL = 0xad - DLT_GCOM_T1E1 = 0xac - DLT_GPF_F = 0xab - DLT_GPF_T = 0xaa - DLT_GPRS_LLC = 0xa9 - DLT_HDLC = 0x10 - DLT_HHDLC = 0x79 - DLT_HIPPI = 0xf - DLT_IBM_SN = 0x92 - DLT_IBM_SP = 0x91 - DLT_IEEE802 = 0x6 - DLT_IEEE802_11 = 0x69 - DLT_IEEE802_11_RADIO = 0x7f - DLT_IEEE802_11_RADIO_AVS = 0xa3 - DLT_IPNET = 0xe2 - DLT_IPOIB = 0xa2 - DLT_IP_OVER_FC = 0x7a - DLT_JUNIPER_ATM1 = 0x89 - DLT_JUNIPER_ATM2 = 0x87 - DLT_JUNIPER_CHDLC = 0xb5 - DLT_JUNIPER_ES = 0x84 - DLT_JUNIPER_ETHER = 0xb2 - DLT_JUNIPER_FRELAY = 0xb4 - DLT_JUNIPER_GGSN = 0x85 - DLT_JUNIPER_MFR = 0x86 - DLT_JUNIPER_MLFR = 0x83 - DLT_JUNIPER_MLPPP = 0x82 - DLT_JUNIPER_MONITOR = 0xa4 - DLT_JUNIPER_PIC_PEER = 0xae - DLT_JUNIPER_PPP = 0xb3 - DLT_JUNIPER_PPPOE = 0xa7 - DLT_JUNIPER_PPPOE_ATM = 0xa8 - DLT_JUNIPER_SERVICES = 0x88 - DLT_LINUX_IRDA = 0x90 - DLT_LINUX_LAPD = 0xb1 - DLT_LINUX_SLL = 0x71 - DLT_LOOP = 0x6c - DLT_LTALK = 0x72 - DLT_MTP2 = 0x8c - DLT_MTP2_WITH_PHDR = 0x8b - DLT_MTP3 = 0x8d - DLT_NULL = 0x0 - DLT_PCI_EXP = 0x7d - DLT_PFLOG = 0x75 - DLT_PFSYNC = 0x12 - DLT_PPP = 0x9 - DLT_PPP_BSDOS = 0xe - DLT_PPP_PPPD = 0xa6 - DLT_PRISM_HEADER = 0x77 - DLT_PRONET = 0x4 - DLT_RAW = 0xc - DLT_RAWAF_MASK = 0x2240000 - DLT_RIO = 0x7c - DLT_SCCP = 0x8e - DLT_SLIP = 0x8 - DLT_SLIP_BSDOS = 0xd - DLT_SUNATM = 0x7b - DLT_SYMANTEC_FIREWALL = 0x63 - DLT_TZSP = 0x80 - 
ECHO = 0x8 - ECHOCTL = 0x200 - ECHOE = 0x10 - ECHOK = 0x20 - ECHOKE = 0x800 - ECHONL = 0x40 - ECHOPRT = 0x400 - EMPTY_SET = 0x0 - EMT_CPCOVF = 0x1 - EQUALITY_CHECK = 0x0 - EXTA = 0xe - EXTB = 0xf - FD_CLOEXEC = 0x1 - FD_NFDBITS = 0x40 - FD_SETSIZE = 0x10000 - FLUSHALL = 0x1 - FLUSHDATA = 0x0 - FLUSHO = 0x2000 - F_ALLOCSP = 0xa - F_ALLOCSP64 = 0xa - F_BADFD = 0x2e - F_BLKSIZE = 0x13 - F_BLOCKS = 0x12 - F_CHKFL = 0x8 - F_COMPAT = 0x8 - F_DUP2FD = 0x9 - F_DUP2FD_CLOEXEC = 0x24 - F_DUPFD = 0x0 - F_DUPFD_CLOEXEC = 0x25 - F_FREESP = 0xb - F_FREESP64 = 0xb - F_GETFD = 0x1 - F_GETFL = 0x3 - F_GETLK = 0xe - F_GETLK64 = 0xe - F_GETOWN = 0x17 - F_GETXFL = 0x2d - F_HASREMOTELOCKS = 0x1a - F_ISSTREAM = 0xd - F_MANDDNY = 0x10 - F_MDACC = 0x20 - F_NODNY = 0x0 - F_NPRIV = 0x10 - F_PRIV = 0xf - F_QUOTACTL = 0x11 - F_RDACC = 0x1 - F_RDDNY = 0x1 - F_RDLCK = 0x1 - F_REVOKE = 0x19 - F_RMACC = 0x4 - F_RMDNY = 0x4 - F_RWACC = 0x3 - F_RWDNY = 0x3 - F_SETFD = 0x2 - F_SETFL = 0x4 - F_SETLK = 0x6 - F_SETLK64 = 0x6 - F_SETLK64_NBMAND = 0x2a - F_SETLKW = 0x7 - F_SETLKW64 = 0x7 - F_SETLK_NBMAND = 0x2a - F_SETOWN = 0x18 - F_SHARE = 0x28 - F_SHARE_NBMAND = 0x2b - F_UNLCK = 0x3 - F_UNLKSYS = 0x4 - F_UNSHARE = 0x29 - F_WRACC = 0x2 - F_WRDNY = 0x2 - F_WRLCK = 0x2 - HUPCL = 0x400 - ICANON = 0x2 - ICRNL = 0x100 - IEXTEN = 0x8000 - IFF_ADDRCONF = 0x80000 - IFF_ALLMULTI = 0x200 - IFF_ANYCAST = 0x400000 - IFF_BROADCAST = 0x2 - IFF_CANTCHANGE = 0x7f203003b5a - IFF_COS_ENABLED = 0x200000000 - IFF_DEBUG = 0x4 - IFF_DEPRECATED = 0x40000 - IFF_DHCPRUNNING = 0x4000 - IFF_DUPLICATE = 0x4000000000 - IFF_FAILED = 0x10000000 - IFF_FIXEDMTU = 0x1000000000 - IFF_INACTIVE = 0x40000000 - IFF_INTELLIGENT = 0x400 - IFF_IPMP = 0x8000000000 - IFF_IPMP_CANTCHANGE = 0x10000000 - IFF_IPMP_INVALID = 0x1ec200080 - IFF_IPV4 = 0x1000000 - IFF_IPV6 = 0x2000000 - IFF_L3PROTECT = 0x40000000000 - IFF_LOOPBACK = 0x8 - IFF_MULTICAST = 0x800 - IFF_MULTI_BCAST = 0x1000 - IFF_NOACCEPT = 0x4000000 - IFF_NOARP = 0x80 - IFF_NOFAILOVER = 0x8000000 - IFF_NOLINKLOCAL = 0x20000000000 - IFF_NOLOCAL = 0x20000 - IFF_NONUD = 0x200000 - IFF_NORTEXCH = 0x800000 - IFF_NOTRAILERS = 0x20 - IFF_NOXMIT = 0x10000 - IFF_OFFLINE = 0x80000000 - IFF_POINTOPOINT = 0x10 - IFF_PREFERRED = 0x400000000 - IFF_PRIVATE = 0x8000 - IFF_PROMISC = 0x100 - IFF_ROUTER = 0x100000 - IFF_RUNNING = 0x40 - IFF_STANDBY = 0x20000000 - IFF_TEMPORARY = 0x800000000 - IFF_UNNUMBERED = 0x2000 - IFF_UP = 0x1 - IFF_VIRTUAL = 0x2000000000 - IFF_VRRP = 0x10000000000 - IFF_XRESOLV = 0x100000000 - IFNAMSIZ = 0x10 - IFT_1822 = 0x2 - IFT_6TO4 = 0xca - IFT_AAL5 = 0x31 - IFT_ARCNET = 0x23 - IFT_ARCNETPLUS = 0x24 - IFT_ATM = 0x25 - IFT_CEPT = 0x13 - IFT_DS3 = 0x1e - IFT_EON = 0x19 - IFT_ETHER = 0x6 - IFT_FDDI = 0xf - IFT_FRELAY = 0x20 - IFT_FRELAYDCE = 0x2c - IFT_HDH1822 = 0x3 - IFT_HIPPI = 0x2f - IFT_HSSI = 0x2e - IFT_HY = 0xe - IFT_IB = 0xc7 - IFT_IPV4 = 0xc8 - IFT_IPV6 = 0xc9 - IFT_ISDNBASIC = 0x14 - IFT_ISDNPRIMARY = 0x15 - IFT_ISO88022LLC = 0x29 - IFT_ISO88023 = 0x7 - IFT_ISO88024 = 0x8 - IFT_ISO88025 = 0x9 - IFT_ISO88026 = 0xa - IFT_LAPB = 0x10 - IFT_LOCALTALK = 0x2a - IFT_LOOP = 0x18 - IFT_MIOX25 = 0x26 - IFT_MODEM = 0x30 - IFT_NSIP = 0x1b - IFT_OTHER = 0x1 - IFT_P10 = 0xc - IFT_P80 = 0xd - IFT_PARA = 0x22 - IFT_PPP = 0x17 - IFT_PROPMUX = 0x36 - IFT_PROPVIRTUAL = 0x35 - IFT_PTPSERIAL = 0x16 - IFT_RS232 = 0x21 - IFT_SDLC = 0x11 - IFT_SIP = 0x1f - IFT_SLIP = 0x1c - IFT_SMDSDXI = 0x2b - IFT_SMDSICIP = 0x34 - IFT_SONET = 0x27 - IFT_SONETPATH = 0x32 - IFT_SONETVT = 0x33 - IFT_STARLAN = 0xb - IFT_T1 = 0x12 - IFT_ULTRA = 
0x1d - IFT_V35 = 0x2d - IFT_X25 = 0x5 - IFT_X25DDN = 0x4 - IFT_X25PLE = 0x28 - IFT_XETHER = 0x1a - IGNBRK = 0x1 - IGNCR = 0x80 - IGNPAR = 0x4 - IMAXBEL = 0x2000 - INLCR = 0x40 - INPCK = 0x10 - IN_AUTOCONF_MASK = 0xffff0000 - IN_AUTOCONF_NET = 0xa9fe0000 - IN_CLASSA_HOST = 0xffffff - IN_CLASSA_MAX = 0x80 - IN_CLASSA_NET = 0xff000000 - IN_CLASSA_NSHIFT = 0x18 - IN_CLASSB_HOST = 0xffff - IN_CLASSB_MAX = 0x10000 - IN_CLASSB_NET = 0xffff0000 - IN_CLASSB_NSHIFT = 0x10 - IN_CLASSC_HOST = 0xff - IN_CLASSC_NET = 0xffffff00 - IN_CLASSC_NSHIFT = 0x8 - IN_CLASSD_HOST = 0xfffffff - IN_CLASSD_NET = 0xf0000000 - IN_CLASSD_NSHIFT = 0x1c - IN_CLASSE_NET = 0xffffffff - IN_LOOPBACKNET = 0x7f - IN_PRIVATE12_MASK = 0xfff00000 - IN_PRIVATE12_NET = 0xac100000 - IN_PRIVATE16_MASK = 0xffff0000 - IN_PRIVATE16_NET = 0xc0a80000 - IN_PRIVATE8_MASK = 0xff000000 - IN_PRIVATE8_NET = 0xa000000 - IPPROTO_AH = 0x33 - IPPROTO_DSTOPTS = 0x3c - IPPROTO_EGP = 0x8 - IPPROTO_ENCAP = 0x4 - IPPROTO_EON = 0x50 - IPPROTO_ESP = 0x32 - IPPROTO_FRAGMENT = 0x2c - IPPROTO_GGP = 0x3 - IPPROTO_HELLO = 0x3f - IPPROTO_HOPOPTS = 0x0 - IPPROTO_ICMP = 0x1 - IPPROTO_ICMPV6 = 0x3a - IPPROTO_IDP = 0x16 - IPPROTO_IGMP = 0x2 - IPPROTO_IP = 0x0 - IPPROTO_IPV6 = 0x29 - IPPROTO_MAX = 0x100 - IPPROTO_ND = 0x4d - IPPROTO_NONE = 0x3b - IPPROTO_OSPF = 0x59 - IPPROTO_PIM = 0x67 - IPPROTO_PUP = 0xc - IPPROTO_RAW = 0xff - IPPROTO_ROUTING = 0x2b - IPPROTO_RSVP = 0x2e - IPPROTO_SCTP = 0x84 - IPPROTO_TCP = 0x6 - IPPROTO_UDP = 0x11 - IPV6_ADD_MEMBERSHIP = 0x9 - IPV6_BOUND_IF = 0x41 - IPV6_CHECKSUM = 0x18 - IPV6_DONTFRAG = 0x21 - IPV6_DROP_MEMBERSHIP = 0xa - IPV6_DSTOPTS = 0xf - IPV6_FLOWINFO_FLOWLABEL = 0xffff0f00 - IPV6_FLOWINFO_TCLASS = 0xf00f - IPV6_HOPLIMIT = 0xc - IPV6_HOPOPTS = 0xe - IPV6_JOIN_GROUP = 0x9 - IPV6_LEAVE_GROUP = 0xa - IPV6_MULTICAST_HOPS = 0x7 - IPV6_MULTICAST_IF = 0x6 - IPV6_MULTICAST_LOOP = 0x8 - IPV6_NEXTHOP = 0xd - IPV6_PAD1_OPT = 0x0 - IPV6_PATHMTU = 0x25 - IPV6_PKTINFO = 0xb - IPV6_PREFER_SRC_CGA = 0x20 - IPV6_PREFER_SRC_CGADEFAULT = 0x10 - IPV6_PREFER_SRC_CGAMASK = 0x30 - IPV6_PREFER_SRC_COA = 0x2 - IPV6_PREFER_SRC_DEFAULT = 0x15 - IPV6_PREFER_SRC_HOME = 0x1 - IPV6_PREFER_SRC_MASK = 0x3f - IPV6_PREFER_SRC_MIPDEFAULT = 0x1 - IPV6_PREFER_SRC_MIPMASK = 0x3 - IPV6_PREFER_SRC_NONCGA = 0x10 - IPV6_PREFER_SRC_PUBLIC = 0x4 - IPV6_PREFER_SRC_TMP = 0x8 - IPV6_PREFER_SRC_TMPDEFAULT = 0x4 - IPV6_PREFER_SRC_TMPMASK = 0xc - IPV6_RECVDSTOPTS = 0x28 - IPV6_RECVHOPLIMIT = 0x13 - IPV6_RECVHOPOPTS = 0x14 - IPV6_RECVPATHMTU = 0x24 - IPV6_RECVPKTINFO = 0x12 - IPV6_RECVRTHDR = 0x16 - IPV6_RECVRTHDRDSTOPTS = 0x17 - IPV6_RECVTCLASS = 0x19 - IPV6_RTHDR = 0x10 - IPV6_RTHDRDSTOPTS = 0x11 - IPV6_RTHDR_TYPE_0 = 0x0 - IPV6_SEC_OPT = 0x22 - IPV6_SRC_PREFERENCES = 0x23 - IPV6_TCLASS = 0x26 - IPV6_UNICAST_HOPS = 0x5 - IPV6_UNSPEC_SRC = 0x42 - IPV6_USE_MIN_MTU = 0x20 - IPV6_V6ONLY = 0x27 - IP_ADD_MEMBERSHIP = 0x13 - IP_ADD_SOURCE_MEMBERSHIP = 0x17 - IP_BLOCK_SOURCE = 0x15 - IP_BOUND_IF = 0x41 - IP_BROADCAST = 0x106 - IP_BROADCAST_TTL = 0x43 - IP_DEFAULT_MULTICAST_LOOP = 0x1 - IP_DEFAULT_MULTICAST_TTL = 0x1 - IP_DF = 0x4000 - IP_DHCPINIT_IF = 0x45 - IP_DONTFRAG = 0x1b - IP_DONTROUTE = 0x105 - IP_DROP_MEMBERSHIP = 0x14 - IP_DROP_SOURCE_MEMBERSHIP = 0x18 - IP_HDRINCL = 0x2 - IP_MAXPACKET = 0xffff - IP_MF = 0x2000 - IP_MSS = 0x240 - IP_MULTICAST_IF = 0x10 - IP_MULTICAST_LOOP = 0x12 - IP_MULTICAST_TTL = 0x11 - IP_NEXTHOP = 0x19 - IP_OPTIONS = 0x1 - IP_PKTINFO = 0x1a - IP_RECVDSTADDR = 0x7 - IP_RECVIF = 0x9 - IP_RECVOPTS = 0x5 - IP_RECVPKTINFO = 0x1a - IP_RECVRETOPTS = 0x6 - 
IP_RECVSLLA = 0xa - IP_RECVTTL = 0xb - IP_RETOPTS = 0x8 - IP_REUSEADDR = 0x104 - IP_SEC_OPT = 0x22 - IP_TOS = 0x3 - IP_TTL = 0x4 - IP_UNBLOCK_SOURCE = 0x16 - IP_UNSPEC_SRC = 0x42 - ISIG = 0x1 - ISTRIP = 0x20 - IXANY = 0x800 - IXOFF = 0x1000 - IXON = 0x400 - MADV_ACCESS_DEFAULT = 0x6 - MADV_ACCESS_LWP = 0x7 - MADV_ACCESS_MANY = 0x8 - MADV_DONTNEED = 0x4 - MADV_FREE = 0x5 - MADV_NORMAL = 0x0 - MADV_RANDOM = 0x1 - MADV_SEQUENTIAL = 0x2 - MADV_WILLNEED = 0x3 - MAP_32BIT = 0x80 - MAP_ALIGN = 0x200 - MAP_ANON = 0x100 - MAP_ANONYMOUS = 0x100 - MAP_FIXED = 0x10 - MAP_INITDATA = 0x800 - MAP_NORESERVE = 0x40 - MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 - MAP_SHARED = 0x1 - MAP_TEXT = 0x400 - MAP_TYPE = 0xf - MCL_CURRENT = 0x1 - MCL_FUTURE = 0x2 - MSG_CTRUNC = 0x10 - MSG_DONTROUTE = 0x4 - MSG_DONTWAIT = 0x80 - MSG_DUPCTRL = 0x800 - MSG_EOR = 0x8 - MSG_MAXIOVLEN = 0x10 - MSG_NOTIFICATION = 0x100 - MSG_OOB = 0x1 - MSG_PEEK = 0x2 - MSG_TRUNC = 0x20 - MSG_WAITALL = 0x40 - MSG_XPG4_2 = 0x8000 - MS_ASYNC = 0x1 - MS_INVALIDATE = 0x2 - MS_OLDSYNC = 0x0 - MS_SYNC = 0x4 - M_FLUSH = 0x86 - NOFLSH = 0x80 - OCRNL = 0x8 - OFDEL = 0x80 - OFILL = 0x40 - ONLCR = 0x4 - ONLRET = 0x20 - ONOCR = 0x10 - OPENFAIL = -0x1 - OPOST = 0x1 - O_ACCMODE = 0x600003 - O_APPEND = 0x8 - O_CLOEXEC = 0x800000 - O_CREAT = 0x100 - O_DSYNC = 0x40 - O_EXCL = 0x400 - O_EXEC = 0x400000 - O_LARGEFILE = 0x2000 - O_NDELAY = 0x4 - O_NOCTTY = 0x800 - O_NOFOLLOW = 0x20000 - O_NOLINKS = 0x40000 - O_NONBLOCK = 0x80 - O_RDONLY = 0x0 - O_RDWR = 0x2 - O_RSYNC = 0x8000 - O_SEARCH = 0x200000 - O_SIOCGIFCONF = -0x3ff796ec - O_SIOCGLIFCONF = -0x3fef9688 - O_SYNC = 0x10 - O_TRUNC = 0x200 - O_WRONLY = 0x1 - O_XATTR = 0x4000 - PARENB = 0x100 - PAREXT = 0x100000 - PARMRK = 0x8 - PARODD = 0x200 - PENDIN = 0x4000 - PRIO_PGRP = 0x1 - PRIO_PROCESS = 0x0 - PRIO_USER = 0x2 - PROT_EXEC = 0x4 - PROT_NONE = 0x0 - PROT_READ = 0x1 - PROT_WRITE = 0x2 - RLIMIT_AS = 0x6 - RLIMIT_CORE = 0x4 - RLIMIT_CPU = 0x0 - RLIMIT_DATA = 0x2 - RLIMIT_FSIZE = 0x1 - RLIMIT_NOFILE = 0x5 - RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x3 - RTAX_AUTHOR = 0x6 - RTAX_BRD = 0x7 - RTAX_DST = 0x0 - RTAX_GATEWAY = 0x1 - RTAX_GENMASK = 0x3 - RTAX_IFA = 0x5 - RTAX_IFP = 0x4 - RTAX_MAX = 0x9 - RTAX_NETMASK = 0x2 - RTAX_SRC = 0x8 - RTA_AUTHOR = 0x40 - RTA_BRD = 0x80 - RTA_DST = 0x1 - RTA_GATEWAY = 0x2 - RTA_GENMASK = 0x8 - RTA_IFA = 0x20 - RTA_IFP = 0x10 - RTA_NETMASK = 0x4 - RTA_NUMBITS = 0x9 - RTA_SRC = 0x100 - RTF_BLACKHOLE = 0x1000 - RTF_CLONING = 0x100 - RTF_DONE = 0x40 - RTF_DYNAMIC = 0x10 - RTF_GATEWAY = 0x2 - RTF_HOST = 0x4 - RTF_INDIRECT = 0x40000 - RTF_KERNEL = 0x80000 - RTF_LLINFO = 0x400 - RTF_MASK = 0x80 - RTF_MODIFIED = 0x20 - RTF_MULTIRT = 0x10000 - RTF_PRIVATE = 0x2000 - RTF_PROTO1 = 0x8000 - RTF_PROTO2 = 0x4000 - RTF_REJECT = 0x8 - RTF_SETSRC = 0x20000 - RTF_STATIC = 0x800 - RTF_UP = 0x1 - RTF_XRESOLVE = 0x200 - RTF_ZONE = 0x100000 - RTM_ADD = 0x1 - RTM_CHANGE = 0x3 - RTM_CHGADDR = 0xf - RTM_DELADDR = 0xd - RTM_DELETE = 0x2 - RTM_FREEADDR = 0x10 - RTM_GET = 0x4 - RTM_IFINFO = 0xe - RTM_LOCK = 0x8 - RTM_LOSING = 0x5 - RTM_MISS = 0x7 - RTM_NEWADDR = 0xc - RTM_OLDADD = 0x9 - RTM_OLDDEL = 0xa - RTM_REDIRECT = 0x6 - RTM_RESOLVE = 0xb - RTM_VERSION = 0x3 - RTV_EXPIRE = 0x4 - RTV_HOPCOUNT = 0x2 - RTV_MTU = 0x1 - RTV_RPIPE = 0x8 - RTV_RTT = 0x40 - RTV_RTTVAR = 0x80 - RTV_SPIPE = 0x10 - RTV_SSTHRESH = 0x20 - RT_AWARE = 0x1 - RUSAGE_CHILDREN = -0x1 - RUSAGE_SELF = 0x0 - SCM_RIGHTS = 0x1010 - SCM_TIMESTAMP = 0x1013 - SCM_UCRED = 0x1012 - SHUT_RD = 0x0 - SHUT_RDWR = 0x2 - SHUT_WR = 0x1 - SIG2STR_MAX = 0x20 - 
SIOCADDMULTI = -0x7fdf96cf - SIOCADDRT = -0x7fcf8df6 - SIOCATMARK = 0x40047307 - SIOCDARP = -0x7fdb96e0 - SIOCDELMULTI = -0x7fdf96ce - SIOCDELRT = -0x7fcf8df5 - SIOCDXARP = -0x7fff9658 - SIOCGARP = -0x3fdb96e1 - SIOCGDSTINFO = -0x3fff965c - SIOCGENADDR = -0x3fdf96ab - SIOCGENPSTATS = -0x3fdf96c7 - SIOCGETLSGCNT = -0x3fef8deb - SIOCGETNAME = 0x40107334 - SIOCGETPEER = 0x40107335 - SIOCGETPROP = -0x3fff8f44 - SIOCGETSGCNT = -0x3feb8deb - SIOCGETSYNC = -0x3fdf96d3 - SIOCGETVIFCNT = -0x3feb8dec - SIOCGHIWAT = 0x40047301 - SIOCGIFADDR = -0x3fdf96f3 - SIOCGIFBRDADDR = -0x3fdf96e9 - SIOCGIFCONF = -0x3ff796a4 - SIOCGIFDSTADDR = -0x3fdf96f1 - SIOCGIFFLAGS = -0x3fdf96ef - SIOCGIFHWADDR = -0x3fdf9647 - SIOCGIFINDEX = -0x3fdf96a6 - SIOCGIFMEM = -0x3fdf96ed - SIOCGIFMETRIC = -0x3fdf96e5 - SIOCGIFMTU = -0x3fdf96ea - SIOCGIFMUXID = -0x3fdf96a8 - SIOCGIFNETMASK = -0x3fdf96e7 - SIOCGIFNUM = 0x40046957 - SIOCGIP6ADDRPOLICY = -0x3fff965e - SIOCGIPMSFILTER = -0x3ffb964c - SIOCGLIFADDR = -0x3f87968f - SIOCGLIFBINDING = -0x3f879666 - SIOCGLIFBRDADDR = -0x3f879685 - SIOCGLIFCONF = -0x3fef965b - SIOCGLIFDADSTATE = -0x3f879642 - SIOCGLIFDSTADDR = -0x3f87968d - SIOCGLIFFLAGS = -0x3f87968b - SIOCGLIFGROUPINFO = -0x3f4b9663 - SIOCGLIFGROUPNAME = -0x3f879664 - SIOCGLIFHWADDR = -0x3f879640 - SIOCGLIFINDEX = -0x3f87967b - SIOCGLIFLNKINFO = -0x3f879674 - SIOCGLIFMETRIC = -0x3f879681 - SIOCGLIFMTU = -0x3f879686 - SIOCGLIFMUXID = -0x3f87967d - SIOCGLIFNETMASK = -0x3f879683 - SIOCGLIFNUM = -0x3ff3967e - SIOCGLIFSRCOF = -0x3fef964f - SIOCGLIFSUBNET = -0x3f879676 - SIOCGLIFTOKEN = -0x3f879678 - SIOCGLIFUSESRC = -0x3f879651 - SIOCGLIFZONE = -0x3f879656 - SIOCGLOWAT = 0x40047303 - SIOCGMSFILTER = -0x3ffb964e - SIOCGPGRP = 0x40047309 - SIOCGSTAMP = -0x3fef9646 - SIOCGXARP = -0x3fff9659 - SIOCIFDETACH = -0x7fdf96c8 - SIOCILB = -0x3ffb9645 - SIOCLIFADDIF = -0x3f879691 - SIOCLIFDELND = -0x7f879673 - SIOCLIFGETND = -0x3f879672 - SIOCLIFREMOVEIF = -0x7f879692 - SIOCLIFSETND = -0x7f879671 - SIOCLOWER = -0x7fdf96d7 - SIOCSARP = -0x7fdb96e2 - SIOCSCTPGOPT = -0x3fef9653 - SIOCSCTPPEELOFF = -0x3ffb9652 - SIOCSCTPSOPT = -0x7fef9654 - SIOCSENABLESDP = -0x3ffb9649 - SIOCSETPROP = -0x7ffb8f43 - SIOCSETSYNC = -0x7fdf96d4 - SIOCSHIWAT = -0x7ffb8d00 - SIOCSIFADDR = -0x7fdf96f4 - SIOCSIFBRDADDR = -0x7fdf96e8 - SIOCSIFDSTADDR = -0x7fdf96f2 - SIOCSIFFLAGS = -0x7fdf96f0 - SIOCSIFINDEX = -0x7fdf96a5 - SIOCSIFMEM = -0x7fdf96ee - SIOCSIFMETRIC = -0x7fdf96e4 - SIOCSIFMTU = -0x7fdf96eb - SIOCSIFMUXID = -0x7fdf96a7 - SIOCSIFNAME = -0x7fdf96b7 - SIOCSIFNETMASK = -0x7fdf96e6 - SIOCSIP6ADDRPOLICY = -0x7fff965d - SIOCSIPMSFILTER = -0x7ffb964b - SIOCSLGETREQ = -0x3fdf96b9 - SIOCSLIFADDR = -0x7f879690 - SIOCSLIFBRDADDR = -0x7f879684 - SIOCSLIFDSTADDR = -0x7f87968e - SIOCSLIFFLAGS = -0x7f87968c - SIOCSLIFGROUPNAME = -0x7f879665 - SIOCSLIFINDEX = -0x7f87967a - SIOCSLIFLNKINFO = -0x7f879675 - SIOCSLIFMETRIC = -0x7f879680 - SIOCSLIFMTU = -0x7f879687 - SIOCSLIFMUXID = -0x7f87967c - SIOCSLIFNAME = -0x3f87967f - SIOCSLIFNETMASK = -0x7f879682 - SIOCSLIFPREFIX = -0x3f879641 - SIOCSLIFSUBNET = -0x7f879677 - SIOCSLIFTOKEN = -0x7f879679 - SIOCSLIFUSESRC = -0x7f879650 - SIOCSLIFZONE = -0x7f879655 - SIOCSLOWAT = -0x7ffb8cfe - SIOCSLSTAT = -0x7fdf96b8 - SIOCSMSFILTER = -0x7ffb964d - SIOCSPGRP = -0x7ffb8cf8 - SIOCSPROMISC = -0x7ffb96d0 - SIOCSQPTR = -0x3ffb9648 - SIOCSSDSTATS = -0x3fdf96d2 - SIOCSSESTATS = -0x3fdf96d1 - SIOCSXARP = -0x7fff965a - SIOCTMYADDR = -0x3ff79670 - SIOCTMYSITE = -0x3ff7966e - SIOCTONLINK = -0x3ff7966f - SIOCUPPER = -0x7fdf96d8 - SIOCX25RCV = 
-0x3fdf96c4 - SIOCX25TBL = -0x3fdf96c3 - SIOCX25XMT = -0x3fdf96c5 - SIOCXPROTO = 0x20007337 - SOCK_CLOEXEC = 0x80000 - SOCK_DGRAM = 0x1 - SOCK_NDELAY = 0x200000 - SOCK_NONBLOCK = 0x100000 - SOCK_RAW = 0x4 - SOCK_RDM = 0x5 - SOCK_SEQPACKET = 0x6 - SOCK_STREAM = 0x2 - SOCK_TYPE_MASK = 0xffff - SOL_FILTER = 0xfffc - SOL_PACKET = 0xfffd - SOL_ROUTE = 0xfffe - SOL_SOCKET = 0xffff - SOMAXCONN = 0x80 - SO_ACCEPTCONN = 0x2 - SO_ALL = 0x3f - SO_ALLZONES = 0x1014 - SO_ANON_MLP = 0x100a - SO_ATTACH_FILTER = 0x40000001 - SO_BAND = 0x4000 - SO_BROADCAST = 0x20 - SO_COPYOPT = 0x80000 - SO_DEBUG = 0x1 - SO_DELIM = 0x8000 - SO_DETACH_FILTER = 0x40000002 - SO_DGRAM_ERRIND = 0x200 - SO_DOMAIN = 0x100c - SO_DONTLINGER = -0x81 - SO_DONTROUTE = 0x10 - SO_ERROPT = 0x40000 - SO_ERROR = 0x1007 - SO_EXCLBIND = 0x1015 - SO_HIWAT = 0x10 - SO_ISNTTY = 0x800 - SO_ISTTY = 0x400 - SO_KEEPALIVE = 0x8 - SO_LINGER = 0x80 - SO_LOWAT = 0x20 - SO_MAC_EXEMPT = 0x100b - SO_MAC_IMPLICIT = 0x1016 - SO_MAXBLK = 0x100000 - SO_MAXPSZ = 0x8 - SO_MINPSZ = 0x4 - SO_MREADOFF = 0x80 - SO_MREADON = 0x40 - SO_NDELOFF = 0x200 - SO_NDELON = 0x100 - SO_NODELIM = 0x10000 - SO_OOBINLINE = 0x100 - SO_PROTOTYPE = 0x1009 - SO_RCVBUF = 0x1002 - SO_RCVLOWAT = 0x1004 - SO_RCVPSH = 0x100d - SO_RCVTIMEO = 0x1006 - SO_READOPT = 0x1 - SO_RECVUCRED = 0x400 - SO_REUSEADDR = 0x4 - SO_SECATTR = 0x1011 - SO_SNDBUF = 0x1001 - SO_SNDLOWAT = 0x1003 - SO_SNDTIMEO = 0x1005 - SO_STRHOLD = 0x20000 - SO_TAIL = 0x200000 - SO_TIMESTAMP = 0x1013 - SO_TONSTOP = 0x2000 - SO_TOSTOP = 0x1000 - SO_TYPE = 0x1008 - SO_USELOOPBACK = 0x40 - SO_VRRP = 0x1017 - SO_WROFF = 0x2 - TCFLSH = 0x5407 - TCGETA = 0x5401 - TCGETS = 0x540d - TCIFLUSH = 0x0 - TCIOFLUSH = 0x2 - TCOFLUSH = 0x1 - TCP_ABORT_THRESHOLD = 0x11 - TCP_ANONPRIVBIND = 0x20 - TCP_CONN_ABORT_THRESHOLD = 0x13 - TCP_CONN_NOTIFY_THRESHOLD = 0x12 - TCP_CORK = 0x18 - TCP_EXCLBIND = 0x21 - TCP_INIT_CWND = 0x15 - TCP_KEEPALIVE = 0x8 - TCP_KEEPALIVE_ABORT_THRESHOLD = 0x17 - TCP_KEEPALIVE_THRESHOLD = 0x16 - TCP_KEEPCNT = 0x23 - TCP_KEEPIDLE = 0x22 - TCP_KEEPINTVL = 0x24 - TCP_LINGER2 = 0x1c - TCP_MAXSEG = 0x2 - TCP_MSS = 0x218 - TCP_NODELAY = 0x1 - TCP_NOTIFY_THRESHOLD = 0x10 - TCP_RECVDSTADDR = 0x14 - TCP_RTO_INITIAL = 0x19 - TCP_RTO_MAX = 0x1b - TCP_RTO_MIN = 0x1a - TCSAFLUSH = 0x5410 - TCSBRK = 0x5405 - TCSETA = 0x5402 - TCSETAF = 0x5404 - TCSETAW = 0x5403 - TCSETS = 0x540e - TCSETSF = 0x5410 - TCSETSW = 0x540f - TCXONC = 0x5406 - TIOC = 0x5400 - TIOCCBRK = 0x747a - TIOCCDTR = 0x7478 - TIOCCILOOP = 0x746c - TIOCEXCL = 0x740d - TIOCFLUSH = 0x7410 - TIOCGETC = 0x7412 - TIOCGETD = 0x7400 - TIOCGETP = 0x7408 - TIOCGLTC = 0x7474 - TIOCGPGRP = 0x7414 - TIOCGPPS = 0x547d - TIOCGPPSEV = 0x547f - TIOCGSID = 0x7416 - TIOCGSOFTCAR = 0x5469 - TIOCGWINSZ = 0x5468 - TIOCHPCL = 0x7402 - TIOCKBOF = 0x5409 - TIOCKBON = 0x5408 - TIOCLBIC = 0x747e - TIOCLBIS = 0x747f - TIOCLGET = 0x747c - TIOCLSET = 0x747d - TIOCMBIC = 0x741c - TIOCMBIS = 0x741b - TIOCMGET = 0x741d - TIOCMSET = 0x741a - TIOCM_CAR = 0x40 - TIOCM_CD = 0x40 - TIOCM_CTS = 0x20 - TIOCM_DSR = 0x100 - TIOCM_DTR = 0x2 - TIOCM_LE = 0x1 - TIOCM_RI = 0x80 - TIOCM_RNG = 0x80 - TIOCM_RTS = 0x4 - TIOCM_SR = 0x10 - TIOCM_ST = 0x8 - TIOCNOTTY = 0x7471 - TIOCNXCL = 0x740e - TIOCOUTQ = 0x7473 - TIOCREMOTE = 0x741e - TIOCSBRK = 0x747b - TIOCSCTTY = 0x7484 - TIOCSDTR = 0x7479 - TIOCSETC = 0x7411 - TIOCSETD = 0x7401 - TIOCSETN = 0x740a - TIOCSETP = 0x7409 - TIOCSIGNAL = 0x741f - TIOCSILOOP = 0x746d - TIOCSLTC = 0x7475 - TIOCSPGRP = 0x7415 - TIOCSPPS = 0x547e - TIOCSSOFTCAR = 0x546a - TIOCSTART = 
0x746e - TIOCSTI = 0x7417 - TIOCSTOP = 0x746f - TIOCSWINSZ = 0x5467 - TOSTOP = 0x100 - VCEOF = 0x8 - VCEOL = 0x9 - VDISCARD = 0xd - VDSUSP = 0xb - VEOF = 0x4 - VEOL = 0x5 - VEOL2 = 0x6 - VERASE = 0x2 - VINTR = 0x0 - VKILL = 0x3 - VLNEXT = 0xf - VMIN = 0x4 - VQUIT = 0x1 - VREPRINT = 0xc - VSTART = 0x8 - VSTATUS = 0x10 - VSTOP = 0x9 - VSUSP = 0xa - VSWTCH = 0x7 - VT0 = 0x0 - VT1 = 0x4000 - VTDLY = 0x4000 - VTIME = 0x5 - VWERASE = 0xe - WCONTFLG = 0xffff - WCONTINUED = 0x8 - WCOREFLG = 0x80 - WEXITED = 0x1 - WNOHANG = 0x40 - WNOWAIT = 0x80 - WOPTMASK = 0xcf - WRAP = 0x20000 - WSIGMASK = 0x7f - WSTOPFLG = 0x7f - WSTOPPED = 0x4 - WTRAPPED = 0x2 - WUNTRACED = 0x4 -) - -// Errors -const ( - E2BIG = syscall.Errno(0x7) - EACCES = syscall.Errno(0xd) - EADDRINUSE = syscall.Errno(0x7d) - EADDRNOTAVAIL = syscall.Errno(0x7e) - EADV = syscall.Errno(0x44) - EAFNOSUPPORT = syscall.Errno(0x7c) - EAGAIN = syscall.Errno(0xb) - EALREADY = syscall.Errno(0x95) - EBADE = syscall.Errno(0x32) - EBADF = syscall.Errno(0x9) - EBADFD = syscall.Errno(0x51) - EBADMSG = syscall.Errno(0x4d) - EBADR = syscall.Errno(0x33) - EBADRQC = syscall.Errno(0x36) - EBADSLT = syscall.Errno(0x37) - EBFONT = syscall.Errno(0x39) - EBUSY = syscall.Errno(0x10) - ECANCELED = syscall.Errno(0x2f) - ECHILD = syscall.Errno(0xa) - ECHRNG = syscall.Errno(0x25) - ECOMM = syscall.Errno(0x46) - ECONNABORTED = syscall.Errno(0x82) - ECONNREFUSED = syscall.Errno(0x92) - ECONNRESET = syscall.Errno(0x83) - EDEADLK = syscall.Errno(0x2d) - EDEADLOCK = syscall.Errno(0x38) - EDESTADDRREQ = syscall.Errno(0x60) - EDOM = syscall.Errno(0x21) - EDQUOT = syscall.Errno(0x31) - EEXIST = syscall.Errno(0x11) - EFAULT = syscall.Errno(0xe) - EFBIG = syscall.Errno(0x1b) - EHOSTDOWN = syscall.Errno(0x93) - EHOSTUNREACH = syscall.Errno(0x94) - EIDRM = syscall.Errno(0x24) - EILSEQ = syscall.Errno(0x58) - EINPROGRESS = syscall.Errno(0x96) - EINTR = syscall.Errno(0x4) - EINVAL = syscall.Errno(0x16) - EIO = syscall.Errno(0x5) - EISCONN = syscall.Errno(0x85) - EISDIR = syscall.Errno(0x15) - EL2HLT = syscall.Errno(0x2c) - EL2NSYNC = syscall.Errno(0x26) - EL3HLT = syscall.Errno(0x27) - EL3RST = syscall.Errno(0x28) - ELIBACC = syscall.Errno(0x53) - ELIBBAD = syscall.Errno(0x54) - ELIBEXEC = syscall.Errno(0x57) - ELIBMAX = syscall.Errno(0x56) - ELIBSCN = syscall.Errno(0x55) - ELNRNG = syscall.Errno(0x29) - ELOCKUNMAPPED = syscall.Errno(0x48) - ELOOP = syscall.Errno(0x5a) - EMFILE = syscall.Errno(0x18) - EMLINK = syscall.Errno(0x1f) - EMSGSIZE = syscall.Errno(0x61) - EMULTIHOP = syscall.Errno(0x4a) - ENAMETOOLONG = syscall.Errno(0x4e) - ENETDOWN = syscall.Errno(0x7f) - ENETRESET = syscall.Errno(0x81) - ENETUNREACH = syscall.Errno(0x80) - ENFILE = syscall.Errno(0x17) - ENOANO = syscall.Errno(0x35) - ENOBUFS = syscall.Errno(0x84) - ENOCSI = syscall.Errno(0x2b) - ENODATA = syscall.Errno(0x3d) - ENODEV = syscall.Errno(0x13) - ENOENT = syscall.Errno(0x2) - ENOEXEC = syscall.Errno(0x8) - ENOLCK = syscall.Errno(0x2e) - ENOLINK = syscall.Errno(0x43) - ENOMEM = syscall.Errno(0xc) - ENOMSG = syscall.Errno(0x23) - ENONET = syscall.Errno(0x40) - ENOPKG = syscall.Errno(0x41) - ENOPROTOOPT = syscall.Errno(0x63) - ENOSPC = syscall.Errno(0x1c) - ENOSR = syscall.Errno(0x3f) - ENOSTR = syscall.Errno(0x3c) - ENOSYS = syscall.Errno(0x59) - ENOTACTIVE = syscall.Errno(0x49) - ENOTBLK = syscall.Errno(0xf) - ENOTCONN = syscall.Errno(0x86) - ENOTDIR = syscall.Errno(0x14) - ENOTEMPTY = syscall.Errno(0x5d) - ENOTRECOVERABLE = syscall.Errno(0x3b) - ENOTSOCK = syscall.Errno(0x5f) - ENOTSUP = syscall.Errno(0x30) 
- ENOTTY = syscall.Errno(0x19) - ENOTUNIQ = syscall.Errno(0x50) - ENXIO = syscall.Errno(0x6) - EOPNOTSUPP = syscall.Errno(0x7a) - EOVERFLOW = syscall.Errno(0x4f) - EOWNERDEAD = syscall.Errno(0x3a) - EPERM = syscall.Errno(0x1) - EPFNOSUPPORT = syscall.Errno(0x7b) - EPIPE = syscall.Errno(0x20) - EPROTO = syscall.Errno(0x47) - EPROTONOSUPPORT = syscall.Errno(0x78) - EPROTOTYPE = syscall.Errno(0x62) - ERANGE = syscall.Errno(0x22) - EREMCHG = syscall.Errno(0x52) - EREMOTE = syscall.Errno(0x42) - ERESTART = syscall.Errno(0x5b) - EROFS = syscall.Errno(0x1e) - ESHUTDOWN = syscall.Errno(0x8f) - ESOCKTNOSUPPORT = syscall.Errno(0x79) - ESPIPE = syscall.Errno(0x1d) - ESRCH = syscall.Errno(0x3) - ESRMNT = syscall.Errno(0x45) - ESTALE = syscall.Errno(0x97) - ESTRPIPE = syscall.Errno(0x5c) - ETIME = syscall.Errno(0x3e) - ETIMEDOUT = syscall.Errno(0x91) - ETOOMANYREFS = syscall.Errno(0x90) - ETXTBSY = syscall.Errno(0x1a) - EUNATCH = syscall.Errno(0x2a) - EUSERS = syscall.Errno(0x5e) - EWOULDBLOCK = syscall.Errno(0xb) - EXDEV = syscall.Errno(0x12) - EXFULL = syscall.Errno(0x34) -) - -// Signals -const ( - SIGABRT = syscall.Signal(0x6) - SIGALRM = syscall.Signal(0xe) - SIGBUS = syscall.Signal(0xa) - SIGCANCEL = syscall.Signal(0x24) - SIGCHLD = syscall.Signal(0x12) - SIGCLD = syscall.Signal(0x12) - SIGCONT = syscall.Signal(0x19) - SIGEMT = syscall.Signal(0x7) - SIGFPE = syscall.Signal(0x8) - SIGFREEZE = syscall.Signal(0x22) - SIGHUP = syscall.Signal(0x1) - SIGILL = syscall.Signal(0x4) - SIGINFO = syscall.Signal(0x29) - SIGINT = syscall.Signal(0x2) - SIGIO = syscall.Signal(0x16) - SIGIOT = syscall.Signal(0x6) - SIGJVM1 = syscall.Signal(0x27) - SIGJVM2 = syscall.Signal(0x28) - SIGKILL = syscall.Signal(0x9) - SIGLOST = syscall.Signal(0x25) - SIGLWP = syscall.Signal(0x21) - SIGPIPE = syscall.Signal(0xd) - SIGPOLL = syscall.Signal(0x16) - SIGPROF = syscall.Signal(0x1d) - SIGPWR = syscall.Signal(0x13) - SIGQUIT = syscall.Signal(0x3) - SIGSEGV = syscall.Signal(0xb) - SIGSTOP = syscall.Signal(0x17) - SIGSYS = syscall.Signal(0xc) - SIGTERM = syscall.Signal(0xf) - SIGTHAW = syscall.Signal(0x23) - SIGTRAP = syscall.Signal(0x5) - SIGTSTP = syscall.Signal(0x18) - SIGTTIN = syscall.Signal(0x1a) - SIGTTOU = syscall.Signal(0x1b) - SIGURG = syscall.Signal(0x15) - SIGUSR1 = syscall.Signal(0x10) - SIGUSR2 = syscall.Signal(0x11) - SIGVTALRM = syscall.Signal(0x1c) - SIGWAITING = syscall.Signal(0x20) - SIGWINCH = syscall.Signal(0x14) - SIGXCPU = syscall.Signal(0x1e) - SIGXFSZ = syscall.Signal(0x1f) - SIGXRES = syscall.Signal(0x26) -) - -// Error table -var errors = [...]string{ - 1: "not owner", - 2: "no such file or directory", - 3: "no such process", - 4: "interrupted system call", - 5: "I/O error", - 6: "no such device or address", - 7: "arg list too long", - 8: "exec format error", - 9: "bad file number", - 10: "no child processes", - 11: "resource temporarily unavailable", - 12: "not enough space", - 13: "permission denied", - 14: "bad address", - 15: "block device required", - 16: "device busy", - 17: "file exists", - 18: "cross-device link", - 19: "no such device", - 20: "not a directory", - 21: "is a directory", - 22: "invalid argument", - 23: "file table overflow", - 24: "too many open files", - 25: "inappropriate ioctl for device", - 26: "text file busy", - 27: "file too large", - 28: "no space left on device", - 29: "illegal seek", - 30: "read-only file system", - 31: "too many links", - 32: "broken pipe", - 33: "argument out of domain", - 34: "result too large", - 35: "no message of desired type", - 36: "identifier 
removed", - 37: "channel number out of range", - 38: "level 2 not synchronized", - 39: "level 3 halted", - 40: "level 3 reset", - 41: "link number out of range", - 42: "protocol driver not attached", - 43: "no CSI structure available", - 44: "level 2 halted", - 45: "deadlock situation detected/avoided", - 46: "no record locks available", - 47: "operation canceled", - 48: "operation not supported", - 49: "disc quota exceeded", - 50: "bad exchange descriptor", - 51: "bad request descriptor", - 52: "message tables full", - 53: "anode table overflow", - 54: "bad request code", - 55: "invalid slot", - 56: "file locking deadlock", - 57: "bad font file format", - 58: "owner of the lock died", - 59: "lock is not recoverable", - 60: "not a stream device", - 61: "no data available", - 62: "timer expired", - 63: "out of stream resources", - 64: "machine is not on the network", - 65: "package not installed", - 66: "object is remote", - 67: "link has been severed", - 68: "advertise error", - 69: "srmount error", - 70: "communication error on send", - 71: "protocol error", - 72: "locked lock was unmapped ", - 73: "facility is not active", - 74: "multihop attempted", - 77: "not a data message", - 78: "file name too long", - 79: "value too large for defined data type", - 80: "name not unique on network", - 81: "file descriptor in bad state", - 82: "remote address changed", - 83: "can not access a needed shared library", - 84: "accessing a corrupted shared library", - 85: ".lib section in a.out corrupted", - 86: "attempting to link in more shared libraries than system limit", - 87: "can not exec a shared library directly", - 88: "illegal byte sequence", - 89: "operation not applicable", - 90: "number of symbolic links encountered during path name traversal exceeds MAXSYMLINKS", - 91: "error 91", - 92: "error 92", - 93: "directory not empty", - 94: "too many users", - 95: "socket operation on non-socket", - 96: "destination address required", - 97: "message too long", - 98: "protocol wrong type for socket", - 99: "option not supported by protocol", - 120: "protocol not supported", - 121: "socket type not supported", - 122: "operation not supported on transport endpoint", - 123: "protocol family not supported", - 124: "address family not supported by protocol family", - 125: "address already in use", - 126: "cannot assign requested address", - 127: "network is down", - 128: "network is unreachable", - 129: "network dropped connection because of reset", - 130: "software caused connection abort", - 131: "connection reset by peer", - 132: "no buffer space available", - 133: "transport endpoint is already connected", - 134: "transport endpoint is not connected", - 143: "cannot send after socket shutdown", - 144: "too many references: cannot splice", - 145: "connection timed out", - 146: "connection refused", - 147: "host is down", - 148: "no route to host", - 149: "operation already in progress", - 150: "operation now in progress", - 151: "stale NFS file handle", -} - -// Signal table -var signals = [...]string{ - 1: "hangup", - 2: "interrupt", - 3: "quit", - 4: "illegal Instruction", - 5: "trace/Breakpoint Trap", - 6: "abort", - 7: "emulation Trap", - 8: "arithmetic Exception", - 9: "killed", - 10: "bus Error", - 11: "segmentation Fault", - 12: "bad System Call", - 13: "broken Pipe", - 14: "alarm Clock", - 15: "terminated", - 16: "user Signal 1", - 17: "user Signal 2", - 18: "child Status Changed", - 19: "power-Fail/Restart", - 20: "window Size Change", - 21: "urgent Socket Condition", - 22: "pollable Event", - 
23: "stopped (signal)", - 24: "stopped (user)", - 25: "continued", - 26: "stopped (tty input)", - 27: "stopped (tty output)", - 28: "virtual Timer Expired", - 29: "profiling Timer Expired", - 30: "cpu Limit Exceeded", - 31: "file Size Limit Exceeded", - 32: "no runnable lwp", - 33: "inter-lwp signal", - 34: "checkpoint Freeze", - 35: "checkpoint Thaw", - 36: "thread Cancellation", - 37: "resource Lost", - 38: "resource Control Exceeded", - 39: "reserved for JVM 1", - 40: "reserved for JVM 2", - 41: "information Request", -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_386.go deleted file mode 100644 index a15aaf120a7..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_386.go +++ /dev/null @@ -1,1426 +0,0 @@ -// mksyscall.pl -l32 syscall_bsd.go syscall_darwin.go syscall_darwin_386.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,darwin - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), 
uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - 
[Remainder of this deleted, machine-generated Darwin syscall wrapper file elided: the same auto-generated pattern repeats for sysctl, utimes, futimes, fcntl, ptrace, pipe, kill and the remaining file, process, socket, memory, and time syscalls through writelen and gettimeofday.]
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_amd64.go
deleted file mode 100644
index 74606b2f499..00000000000
--- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_amd64.go
+++ /dev/null
@@ -1,1442 +0,0 @@
-// mksyscall.pl syscall_bsd.go syscall_darwin.go syscall_darwin_amd64.go
-// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
-
-// +build amd64,darwin
-
-package unix
-
-import (
-	"syscall"
-	"unsafe"
-)
-
-var _ syscall.Errno
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func getgroups(ngid int, gid *_Gid_t) (n int, err error) {
-	r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0)
-	n = int(r0)
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
-
[The rest of this 1,442-line deleted file follows the same machine-generated pattern: wrappers for setgroups, wait4, the socket and sockopt helpers, the file, process, memory, and time syscalls, mmap/munmap, readlen/writelen, Fchmodat, and gettimeofday. Elided here.]
COMMAND AT THE TOP; DO NOT EDIT - -func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func gettimeofday(tp *Timeval) (sec int64, usec int32, err error) { - r0, r1, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - sec = int64(r0) - usec = int32(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_arm.go deleted file mode 100644 index 640e8542692..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_arm.go +++ /dev/null @@ -1,1426 +0,0 @@ -// mksyscall.pl syscall_bsd.go syscall_darwin.go syscall_darwin_arm.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm,darwin - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val 
unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, 
e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (r int, w int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) - r = int(r0) - w = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kill(pid int, signum int, posix int) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), uintptr(posix)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, 
flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exchangedata(path1 string, path2 string, options int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path1) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(path2) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXCHANGEDATA, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(options)) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, 
uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETDIRENTRIES64, uintptr(fd), uintptr(_p0), uintptr(len(buf)), uintptr(unsafe.Pointer(basep)), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdtablesize() (size int) { - r0, _, _ := Syscall(SYS_GETDTABLESIZE, 0, 0, 0) - size = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - 
-func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := RawSyscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 
*byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { 
- _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, _, e1 := Syscall(SYS_LSEEK, uintptr(fd), uintptr(offset), uintptr(whence)) - newoffset = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := Syscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, 
uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setlogin(name string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(name) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SETLOGIN, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setprivexec(flag int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIVEXEC, uintptr(flag), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, stat *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - 
return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Undelete(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNDELETE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), uintptr(pos)) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - 
r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func gettimeofday(tp *Timeval) (sec int32, usec int32, err error) { - r0, r1, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - sec = int32(r0) - usec = int32(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_arm64.go deleted file mode 100644 index 933f67bbf12..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_darwin_arm64.go +++ /dev/null @@ -1,1426 +0,0 @@ -// mksyscall.pl syscall_bsd.go syscall_darwin.go syscall_darwin_arm64.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm64,darwin - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return 
-} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (r int, w int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) - r = int(r0) - w = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kill(pid int, signum int, posix int) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), uintptr(posix)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if 
e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exchangedata(path1 string, path2 string, options int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path1) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(path2) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXCHANGEDATA, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(options)) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := 
Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETDIRENTRIES64, uintptr(fd), uintptr(_p0), uintptr(len(buf)), uintptr(unsafe.Pointer(basep)), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdtablesize() (size int) { - r0, _, _ := Syscall(SYS_GETDTABLESIZE, 0, 0, 0) - size = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := RawSyscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - 
return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, _, e1 := Syscall(SYS_LSEEK, uintptr(fd), uintptr(offset), uintptr(whence)) - newoffset = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := Syscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setlogin(name string) (err error) { - var _p0 *byte - _p0, err = 
BytePtrFromString(name) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SETLOGIN, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setprivexec(flag int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIVEXEC, uintptr(flag), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, stat *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - 
use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Undelete(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNDELETE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), uintptr(pos)) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT - -func gettimeofday(tp *Timeval) (sec int64, usec int32, err error) { - r0, r1, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - sec = int64(r0) - usec = int32(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_dragonfly_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_dragonfly_386.go deleted file mode 100644 index 32e46af601e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_dragonfly_386.go +++ /dev/null @@ -1,1412 +0,0 @@ -// mksyscall.pl -l32 -dragonfly syscall_bsd.go syscall_dragonfly.go syscall_dragonfly_386.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,dragonfly - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - 
_, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer 
- if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (r int, w int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) - r = int(r0) - w = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func extpread(fd int, p []byte, flags int, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EXTPREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func extpwrite(fd int, p []byte, flags int, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EXTPWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err 
error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = 
errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall6(SYS_FTRUNCATE, uintptr(fd), 0, uintptr(length), uintptr(length>>32), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETDIRENTRIES, uintptr(fd), uintptr(_p0), uintptr(len(buf)), uintptr(unsafe.Pointer(basep)), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdtablesize() (size int) { - r0, _, _ := Syscall(SYS_GETDTABLESIZE, 0, 0, 0) - size = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT 
- -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != 
nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, r1, e1 := Syscall6(SYS_LSEEK, uintptr(fd), 0, uintptr(offset), uintptr(offset>>32), uintptr(whence), 0) - newoffset = int64(int64(r1)<<32 | int64(r0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setlogin(name string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(name) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SETLOGIN, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := 
RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, stat *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - 
use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), 0, uintptr(length), uintptr(length>>32), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Undelete(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNDELETE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall9(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), 0, uintptr(pos), uintptr(pos>>32), 0) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git 
a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go deleted file mode 100644 index 3fa6ff796db..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go +++ /dev/null @@ -1,1412 +0,0 @@ -// mksyscall.pl -dragonfly syscall_bsd.go syscall_dragonfly.go syscall_dragonfly_amd64.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,dragonfly - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa 
*RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), 
uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (r int, w int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) - r = int(r0) - w = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func extpread(fd int, p []byte, flags int, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EXTPREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(offset), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func extpwrite(fd int, p []byte, flags int, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EXTPWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(offset), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS 
FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), 0, uintptr(length)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETDIRENTRIES, uintptr(fd), uintptr(_p0), uintptr(len(buf)), uintptr(unsafe.Pointer(basep)), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdtablesize() (size int) { - r0, _, _ := Syscall(SYS_GETDTABLESIZE, 0, 0, 0) - size = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - 
r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - 
var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) 
- } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, _, e1 := Syscall6(SYS_LSEEK, uintptr(fd), 0, uintptr(offset), uintptr(whence), 0, 0) - newoffset = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setlogin(name string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(name) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SETLOGIN, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - 
return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, stat *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err 
error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), 0, uintptr(length)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Undelete(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNDELETE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall9(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), 0, uintptr(pos), 0, 0) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_386.go deleted file mode 100644 index 1a0e528cdac..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_386.go +++ /dev/null @@ -1,1664 +0,0 @@ -// mksyscall.pl 
-l32 syscall_bsd.go syscall_freebsd.go syscall_freebsd_386.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,freebsd - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), 
uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return 
-} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (r int, w int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) - r = int(r0) - w = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { 
- r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetFd(fd int, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetFd(fd int, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteFd(fd int, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0))) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrListFd(fd int, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_FD, uintptr(fd), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetFile(file string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetFile(file string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 
{ - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteFile(file string, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrListFile(file string, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetLink(link string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetLink(link string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteLink(link string, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrListLink(link string, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fadvise(fd int, offset int64, 
length int64, advice int) (err error) { - _, _, e1 := Syscall6(SYS_POSIX_FADVISE, uintptr(fd), uintptr(offset), uintptr(offset>>32), uintptr(length), uintptr(length>>32), uintptr(advice)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), uintptr(length), uintptr(length>>32)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETDIRENTRIES, uintptr(fd), uintptr(_p0), uintptr(len(buf)), uintptr(unsafe.Pointer(basep)), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdtablesize() (size int) { - r0, _, _ := Syscall(SYS_GETDTABLESIZE, 0, 0, 0) - size = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid 
int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func 
Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), uintptr(offset>>32), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), uintptr(offset>>32), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND 
AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, r1, e1 := Syscall6(SYS_LSEEK, uintptr(fd), uintptr(offset), uintptr(offset>>32), uintptr(whence), 0, 0) - newoffset = int64(int64(r1)<<32 | int64(r0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setlogin(name string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(name) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SETLOGIN, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), 
uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, stat *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), uintptr(length), uintptr(length>>32)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Undelete(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := 
Syscall(SYS_UNDELETE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall9(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), uintptr(pos), uintptr(pos>>32), 0, 0) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int, err error) { - r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go deleted file mode 100644 index 6e4cf1455ca..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go +++ /dev/null @@ -1,1664 +0,0 @@ -// mksyscall.pl syscall_bsd.go syscall_freebsd.go syscall_freebsd_amd64.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,freebsd - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO 
NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = 
errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (r int, w int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) - r = int(r0) - w = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), 
uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetFd(fd int, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetFd(fd int, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteFd(fd int, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0))) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrListFd(fd int, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_FD, uintptr(fd), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetFile(file string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetFile(file string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteFile(file string, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - 
} - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrListFile(file string, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetLink(link string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetLink(link string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteLink(link string, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrListLink(link string, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fadvise(fd int, offset int64, length int64, advice int) (err error) { - _, _, e1 := Syscall6(SYS_POSIX_FADVISE, uintptr(fd), uintptr(offset), uintptr(length), uintptr(advice), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - 
-func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETDIRENTRIES, uintptr(fd), uintptr(_p0), uintptr(len(buf)), uintptr(unsafe.Pointer(basep)), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdtablesize() (size int) { - r0, _, _ := Syscall(SYS_GETDTABLESIZE, 0, 0, 0) - size = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func 
Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), 
uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, 
uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, _, e1 := Syscall(SYS_LSEEK, uintptr(fd), uintptr(offset), uintptr(whence)) - newoffset = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setlogin(name string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(name) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SETLOGIN, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT 
THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, stat *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Undelete(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNDELETE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - 
use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), uintptr(pos)) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int, err error) { - r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_arm.go deleted file mode 100644 index 1872d32308f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_freebsd_arm.go +++ /dev/null @@ -1,1664 +0,0 @@ -// mksyscall.pl -l32 -arm syscall_bsd.go syscall_freebsd.go syscall_freebsd_arm.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm,freebsd - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), 
uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE 
TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (r int, w int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 
0, 0) - r = int(r0) - w = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetFd(fd int, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte 
- _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetFd(fd int, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteFd(fd int, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_FD, uintptr(fd), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p0))) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrListFd(fd int, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_FD, uintptr(fd), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetFile(file string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetFile(file string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteFile(file string, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func 
ExtattrListFile(file string, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(file) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_FILE, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrGetLink(link string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_GET_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrSetLink(link string, attrnamespace int, attrname string, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_SET_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1)), uintptr(data), uintptr(nbytes), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrDeleteLink(link string, attrnamespace int, attrname string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attrname) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_EXTATTR_DELETE_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ExtattrListLink(link string, attrnamespace int, data uintptr, nbytes int) (ret int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(link) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_EXTATTR_LIST_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(attrnamespace), uintptr(data), uintptr(nbytes), 0, 0) - use(unsafe.Pointer(_p0)) - ret = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fadvise(fd int, offset int64, length int64, advice int) (err error) { - _, _, e1 := Syscall9(SYS_POSIX_FADVISE, uintptr(fd), 0, uintptr(offset), uintptr(offset>>32), uintptr(length), uintptr(length>>32), uintptr(advice), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) 
- } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall6(SYS_FTRUNCATE, uintptr(fd), 0, uintptr(length), uintptr(length>>32), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETDIRENTRIES, uintptr(fd), uintptr(_p0), uintptr(len(buf)), uintptr(unsafe.Pointer(basep)), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdtablesize() (size int) { - r0, _, _ := Syscall(SYS_GETDTABLESIZE, 0, 0, 0) - size = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - 
return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = 
errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := 
Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, r1, e1 := Syscall6(SYS_LSEEK, uintptr(fd), 0, uintptr(offset), uintptr(offset>>32), uintptr(whence), 0) - newoffset = int64(int64(r1)<<32 | int64(r0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setlogin(name string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(name) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SETLOGIN, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE 
TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, stat *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), 0, uintptr(length), uintptr(length>>32), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Undelete(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNDELETE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) 
-	if err != nil {
-		return
-	}
-	_, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0)
-	use(unsafe.Pointer(_p0))
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func write(fd int, p []byte) (n int, err error) {
-	var _p0 unsafe.Pointer
-	if len(p) > 0 {
-		_p0 = unsafe.Pointer(&p[0])
-	} else {
-		_p0 = unsafe.Pointer(&_zero)
-	}
-	r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)))
-	n = int(r0)
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) {
-	r0, _, e1 := Syscall9(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), 0, uintptr(pos), uintptr(pos>>32), 0)
-	ret = uintptr(r0)
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func munmap(addr uintptr, length uintptr) (err error) {
-	_, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0)
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func readlen(fd int, buf *byte, nbuf int) (n int, err error) {
-	r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf))
-	n = int(r0)
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func writelen(fd int, buf *byte, nbuf int) (n int, err error) {
-	r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf))
-	n = int(r0)
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int, err error) {
-	r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
-	nfd = int(r0)
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_386.go
deleted file mode 100644
index ff6c39dc244..00000000000
--- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_386.go
+++ /dev/null
@@ -1,1628 +0,0 @@
-// mksyscall.pl -l32 syscall_linux.go syscall_linux_386.go
-// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
-
-// +build 386,linux
-
-package unix
-
-import (
-	"syscall"
-	"unsafe"
-)
-
-var _ syscall.Errno
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func linkat(olddirfd int, oldpath string, newdirfd int, newpath string, flags int) (err error) {
-	var _p0 *byte
-	_p0, err = BytePtrFromString(oldpath)
-	if err != nil {
-		return
-	}
-	var _p1 *byte
-	_p1, err = BytePtrFromString(newpath)
-	if err != nil {
-		return
-	}
-	_, _, e1 := Syscall6(SYS_LINKAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), uintptr(flags), 0)
-	use(unsafe.Pointer(_p0))
-	use(unsafe.Pointer(_p1))
-	if e1 != 0 {
-		err = errnoErr(e1)
-	}
-	return
-}
-
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
-
-func openat(dirfd int, path string, flags int, mode uint32) (fd int, err
error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_OPENAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mode), 0, 0) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlinkat(dirfd int, path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_READLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf)), 0, 0) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func symlinkat(oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINKAT, uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func unlinkat(dirfd int, path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, times *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimesat(dirfd int, path *byte, times *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMESAT, uintptr(dirfd), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(times))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getcwd(buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETCWD, uintptr(_p0), uintptr(len(buf)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), 
uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func reboot(magic1 uint, magic2 uint, cmd int, arg string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(arg) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_REBOOT, uintptr(magic1), uintptr(magic2), uintptr(cmd), uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mount(source string, target string, fstype string, flags uintptr, data *byte) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(source) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(target) - if err != nil { - return - } - var _p2 *byte - _p2, err = BytePtrFromString(fstype) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(unsafe.Pointer(_p2)), uintptr(flags), uintptr(unsafe.Pointer(data)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - use(unsafe.Pointer(_p2)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Acct(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtimex(buf *Timex) (state int, err error) { - r0, _, e1 := Syscall(SYS_ADJTIMEX, uintptr(unsafe.Pointer(buf)), 0, 0) - state = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ClockGettime(clockid int32, time *Timespec) (err error) { - _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(oldfd int) (fd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, 
uintptr(oldfd), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup3(oldfd int, newfd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_DUP3, uintptr(oldfd), uintptr(newfd), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate(size int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE, uintptr(size), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate1(flag int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE1, uintptr(flag), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) { - _, _, e1 := RawSyscall6(SYS_EPOLL_CTL, uintptr(epfd), uintptr(op), uintptr(fd), uintptr(unsafe.Pointer(event)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { - var _p0 unsafe.Pointer - if len(events) > 0 { - _p0 = unsafe.Pointer(&events[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EPOLL_WAIT, uintptr(epfd), uintptr(_p0), uintptr(len(events)), uintptr(msec), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FACCESSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fallocate(fd int, mode uint32, off int64, len int64) (err error) { - _, _, e1 := Syscall6(SYS_FALLOCATE, uintptr(fd), uintptr(mode), uintptr(off), uintptr(off>>32), uintptr(len), uintptr(len>>32)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchownat(dirfd int, path string, uid int, gid int, flags int) (err 
error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHOWNAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fdatasync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FDATASYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS64, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) - tid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getxattr(path string, attr string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(dest) > 0 { - _p2 = unsafe.Pointer(&dest[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(dest)), 0, 0) - use(unsafe.Pointer(_p0)) - 
use(unsafe.Pointer(_p1)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyAddWatch(fd int, pathname string, mask uint32) (watchdesc int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(pathname) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_INOTIFY_ADD_WATCH, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(mask)) - use(unsafe.Pointer(_p0)) - watchdesc = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit1(flags int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT1, uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyRmWatch(fd int, watchdesc uint32) (success int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_RM_WATCH, uintptr(fd), uintptr(watchdesc), 0) - success = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_KILL, uintptr(pid), uintptr(sig), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Klogctl(typ int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_SYSLOG, uintptr(typ), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listxattr(path string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(dest) > 0 { - _p1 = unsafe.Pointer(&dest[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_LISTXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(dest))) - use(unsafe.Pointer(_p0)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdirat(dirfd int, path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIRAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MKNODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pause() (err error) { - _, _, e1 := Syscall(SYS_PAUSE, 0, 0, 0) - if e1 != 0 { - err = 
errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func PivotRoot(newroot string, putold string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(newroot) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(putold) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_PIVOT_ROOT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func prlimit(pid int, resource int, old *Rlimit, newlimit *Rlimit) (err error) { - _, _, e1 := RawSyscall6(SYS_PRLIMIT64, uintptr(pid), uintptr(resource), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(newlimit)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PRCTL, uintptr(option), uintptr(arg2), uintptr(arg3), uintptr(arg4), uintptr(arg5), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Removexattr(path string, attr string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REMOVEXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_RENAMEAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setdomainname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETDOMAINNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sethostname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETHOSTNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := 
RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setxattr(path string, attr string, data []byte, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(data) > 0 { - _p2 = unsafe.Pointer(&data[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(data)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sysinfo(info *Sysinfo_t) (err error) { - _, _, e1 := RawSyscall(SYS_SYSINFO, uintptr(unsafe.Pointer(info)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tee(rfd int, wfd int, len int, flags int) (n int64, err error) { - r0, r1, e1 := Syscall6(SYS_TEE, uintptr(rfd), uintptr(wfd), uintptr(len), uintptr(flags), 0, 0) - n = int64(int64(r1)<<32 | int64(r0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tgkill(tgid int, tid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_TGKILL, uintptr(tgid), uintptr(tid), uintptr(sig)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Times(tms *Tms) (ticks uintptr, err error) { - r0, _, e1 := RawSyscall(SYS_TIMES, uintptr(unsafe.Pointer(tms)), 0, 0) - ticks = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Uname(buf *Utsname) (err error) { - _, _, e1 := RawSyscall(SYS_UNAME, uintptr(unsafe.Pointer(buf)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(target string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(target) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UMOUNT2, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = 
errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unshare(flags int) (err error) { - _, _, e1 := Syscall(SYS_UNSHARE, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ustat(dev int, ubuf *Ustat_t) (err error) { - _, _, e1 := Syscall(SYS_USTAT, uintptr(dev), uintptr(unsafe.Pointer(ubuf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Utime(path string, buf *Utimbuf) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func exitThread(code int) (err error) { - _, _, e1 := Syscall(SYS_EXIT, uintptr(code), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Madvise(b []byte, advice int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(advice)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = 
unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe(p *[2]_C_int) (err error) { - _, _, e1 := RawSyscall(SYS_PIPE, uintptr(unsafe.Pointer(p)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe2(p *[2]_C_int, flags int) (err error) { - _, _, e1 := RawSyscall(SYS_PIPE2, uintptr(unsafe.Pointer(p)), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(oldfd int, newfd int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(oldfd), uintptr(newfd), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fadvise(fd int, offset int64, length int64, advice int) (err error) { - _, _, e1 := Syscall6(SYS_FADVISE64_64, uintptr(fd), uintptr(offset), uintptr(offset>>32), uintptr(length), uintptr(length>>32), uintptr(advice)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN32, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE64, uintptr(fd), uintptr(length), uintptr(length>>32)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID32, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID32, 0, 0, 0) - euid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID32, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID32, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit() (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ioperm(from int, num int, on int) (err error) { - _, _, e1 := Syscall(SYS_IOPERM, uintptr(from), 
uintptr(num), uintptr(on)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Iopl(level int) (err error) { - _, _, e1 := Syscall(SYS_IOPL, uintptr(level), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN32, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), uintptr(offset>>32), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), uintptr(offset>>32), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - r0, _, e1 := Syscall6(SYS_SENDFILE64, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) - written = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsgid(gid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSGID32, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsuid(uid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSUID32, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID32, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID32, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID32, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY 
THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID32, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int, err error) { - r0, _, e1 := Syscall6(SYS_SPLICE, uintptr(rfd), uintptr(unsafe.Pointer(roff)), uintptr(wfd), uintptr(unsafe.Pointer(woff)), uintptr(len), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func SyncFileRange(fd int, off int64, n int64, flags int) (err error) { - _, _, e1 := Syscall6(SYS_SYNC_FILE_RANGE, uintptr(fd), uintptr(off), uintptr(off>>32), uintptr(n), uintptr(n>>32), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE64, uintptr(unsafe.Pointer(_p0)), uintptr(length), uintptr(length>>32)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(n int, list *_Gid_t) (nn int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS32, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - nn = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(n int, list *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS32, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS__NEWSELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap2(addr uintptr, length uintptr, prot int, flags int, fd int, pageOffset uintptr) (xaddr uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP2, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flags), uintptr(fd), uintptr(pageOffset)) - xaddr = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getrlimit(resource int, rlim *rlimit32) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setrlimit(resource int, rlim *rlimit32) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - 
if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Time(t *Time_t) (tt Time_t, err error) { - r0, _, e1 := RawSyscall(SYS_TIME, uintptr(unsafe.Pointer(t)), 0, 0) - tt = Time_t(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_amd64.go deleted file mode 100644 index c2438522d34..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_amd64.go +++ /dev/null @@ -1,1822 +0,0 @@ -// mksyscall.pl syscall_linux.go syscall_linux_amd64.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,linux - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func linkat(olddirfd int, oldpath string, newdirfd int, newpath string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_LINKAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_OPENAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mode), 0, 0) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlinkat(dirfd int, path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_READLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf)), 0, 0) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func symlinkat(oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINKAT, uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func unlinkat(dirfd int, path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil 
{ - return - } - _, _, e1 := Syscall(SYS_UNLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, times *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimesat(dirfd int, path *byte, times *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMESAT, uintptr(dirfd), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(times))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getcwd(buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETCWD, uintptr(_p0), uintptr(len(buf)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func reboot(magic1 uint, magic2 uint, cmd int, arg string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(arg) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_REBOOT, uintptr(magic1), uintptr(magic2), uintptr(cmd), uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mount(source string, target string, fstype string, flags uintptr, data *byte) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(source) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(target) - if err != nil { - return - } - var _p2 *byte - _p2, err = BytePtrFromString(fstype) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(unsafe.Pointer(_p2)), uintptr(flags), uintptr(unsafe.Pointer(data)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - use(unsafe.Pointer(_p2)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} 
- -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Acct(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtimex(buf *Timex) (state int, err error) { - r0, _, e1 := Syscall(SYS_ADJTIMEX, uintptr(unsafe.Pointer(buf)), 0, 0) - state = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ClockGettime(clockid int32, time *Timespec) (err error) { - _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(oldfd int) (fd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(oldfd), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup3(oldfd int, newfd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_DUP3, uintptr(oldfd), uintptr(newfd), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate(size int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE, uintptr(size), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate1(flag int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE1, uintptr(flag), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) { - _, _, e1 := RawSyscall6(SYS_EPOLL_CTL, uintptr(epfd), uintptr(op), uintptr(fd), uintptr(unsafe.Pointer(event)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { - var _p0 unsafe.Pointer - if len(events) > 0 { - _p0 = unsafe.Pointer(&events[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EPOLL_WAIT, uintptr(epfd), uintptr(_p0), uintptr(len(events)), uintptr(msec), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// 
THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FACCESSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fallocate(fd int, mode uint32, off int64, len int64) (err error) { - _, _, e1 := Syscall6(SYS_FALLOCATE, uintptr(fd), uintptr(mode), uintptr(off), uintptr(len), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHOWNAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fdatasync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FDATASYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS64, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) - tid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getxattr(path string, attr string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(dest) > 0 { - _p2 = unsafe.Pointer(&dest[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(dest)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyAddWatch(fd int, pathname string, mask uint32) (watchdesc int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(pathname) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_INOTIFY_ADD_WATCH, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(mask)) - use(unsafe.Pointer(_p0)) - watchdesc = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit1(flags int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT1, uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyRmWatch(fd int, watchdesc uint32) (success int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_RM_WATCH, uintptr(fd), uintptr(watchdesc), 0) - success = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_KILL, uintptr(pid), uintptr(sig), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Klogctl(typ int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_SYSLOG, uintptr(typ), uintptr(_p0), uintptr(len(buf))) - n = 
int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listxattr(path string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(dest) > 0 { - _p1 = unsafe.Pointer(&dest[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_LISTXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(dest))) - use(unsafe.Pointer(_p0)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdirat(dirfd int, path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIRAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MKNODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pause() (err error) { - _, _, e1 := Syscall(SYS_PAUSE, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func PivotRoot(newroot string, putold string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(newroot) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(putold) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_PIVOT_ROOT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func prlimit(pid int, resource int, old *Rlimit, newlimit *Rlimit) (err error) { - _, _, e1 := RawSyscall6(SYS_PRLIMIT64, uintptr(pid), uintptr(resource), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(newlimit)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PRCTL, uintptr(option), uintptr(arg2), uintptr(arg3), uintptr(arg4), uintptr(arg5), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func 
Removexattr(path string, attr string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REMOVEXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_RENAMEAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setdomainname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETDOMAINNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sethostname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETHOSTNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setxattr(path string, attr string, data []byte, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(data) > 0 { - _p2 = unsafe.Pointer(&data[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(data)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) - return -} - -// THIS FILE IS GENERATED 
BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sysinfo(info *Sysinfo_t) (err error) { - _, _, e1 := RawSyscall(SYS_SYSINFO, uintptr(unsafe.Pointer(info)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tee(rfd int, wfd int, len int, flags int) (n int64, err error) { - r0, _, e1 := Syscall6(SYS_TEE, uintptr(rfd), uintptr(wfd), uintptr(len), uintptr(flags), 0, 0) - n = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tgkill(tgid int, tid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_TGKILL, uintptr(tgid), uintptr(tid), uintptr(sig)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Times(tms *Tms) (ticks uintptr, err error) { - r0, _, e1 := RawSyscall(SYS_TIMES, uintptr(unsafe.Pointer(tms)), 0, 0) - ticks = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Uname(buf *Utsname) (err error) { - _, _, e1 := RawSyscall(SYS_UNAME, uintptr(unsafe.Pointer(buf)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(target string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(target) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UMOUNT2, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unshare(flags int) (err error) { - _, _, e1 := Syscall(SYS_UNSHARE, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ustat(dev int, ubuf *Ustat_t) (err error) { - _, _, e1 := Syscall(SYS_USTAT, uintptr(dev), uintptr(unsafe.Pointer(ubuf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Utime(path string, buf *Utimbuf) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func exitThread(code int) (err error) { - _, _, e1 := Syscall(SYS_EXIT, uintptr(code), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 
0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Madvise(b []byte, advice int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(advice)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(oldfd int, newfd int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(oldfd), uintptr(newfd), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fadvise(fd int, offset int64, length int64, advice int) (err error) { - _, _, e1 := Syscall6(SYS_FADVISE64, uintptr(fd), uintptr(offset), uintptr(length), uintptr(advice), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func 
Fstatfs(fd int, buf *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - euid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(resource int, rlim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit() (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ioperm(from int, num int, on int) (err error) { - _, _, e1 := Syscall(SYS_IOPERM, uintptr(from), uintptr(num), uintptr(on)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Iopl(level int) (err error) { - _, _, e1 := Syscall(SYS_IOPL, uintptr(level), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, n int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(n), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; 
DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (off int64, err error) { - r0, _, e1 := Syscall(SYS_LSEEK, uintptr(fd), uintptr(offset), uintptr(whence)) - off = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_SELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - r0, _, e1 := Syscall6(SYS_SENDFILE, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) - written = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsgid(gid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsuid(uid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(resource int, rlim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int64, err error) { - r0, _, e1 
:= Syscall6(SYS_SPLICE, uintptr(rfd), uintptr(unsafe.Pointer(roff)), uintptr(wfd), uintptr(unsafe.Pointer(woff)), uintptr(len), uintptr(flags)) - n = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, buf *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func SyncFileRange(fd int, off int64, n int64, flags int) (err error) { - _, _, e1 := Syscall6(SYS_SYNC_FILE_RANGE, uintptr(fd), uintptr(off), uintptr(n), uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) { - r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(n int, list *_Gid_t) (nn int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - nn = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(n int, list *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, 
val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset 
int64) (xaddr uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flags), uintptr(fd), uintptr(offset)) - xaddr = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe(p *[2]_C_int) (err error) { - _, _, e1 := RawSyscall(SYS_PIPE, uintptr(unsafe.Pointer(p)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe2(p *[2]_C_int, flags int) (err error) { - _, _, e1 := RawSyscall(SYS_PIPE2, uintptr(unsafe.Pointer(p)), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_arm.go deleted file mode 100644 index dd66c975879..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_arm.go +++ /dev/null @@ -1,1756 +0,0 @@ -// mksyscall.pl -l32 -arm syscall_linux.go syscall_linux_arm.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm,linux - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func linkat(olddirfd int, oldpath string, newdirfd int, newpath string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_LINKAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_OPENAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mode), 0, 0) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlinkat(dirfd int, path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_READLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf)), 0, 0) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func symlinkat(oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINKAT, uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY 
THE COMMAND AT THE TOP; DO NOT EDIT - -func unlinkat(dirfd int, path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, times *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimesat(dirfd int, path *byte, times *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMESAT, uintptr(dirfd), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(times))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getcwd(buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETCWD, uintptr(_p0), uintptr(len(buf)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func reboot(magic1 uint, magic2 uint, cmd int, arg string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(arg) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_REBOOT, uintptr(magic1), uintptr(magic2), uintptr(cmd), uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mount(source string, target string, fstype string, flags uintptr, data *byte) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(source) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(target) - if err != nil { - return - } - var _p2 *byte - _p2, err = BytePtrFromString(fstype) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(unsafe.Pointer(_p2)), 
uintptr(flags), uintptr(unsafe.Pointer(data)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - use(unsafe.Pointer(_p2)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Acct(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtimex(buf *Timex) (state int, err error) { - r0, _, e1 := Syscall(SYS_ADJTIMEX, uintptr(unsafe.Pointer(buf)), 0, 0) - state = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ClockGettime(clockid int32, time *Timespec) (err error) { - _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(oldfd int) (fd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(oldfd), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup3(oldfd int, newfd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_DUP3, uintptr(oldfd), uintptr(newfd), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate(size int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE, uintptr(size), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate1(flag int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE1, uintptr(flag), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) { - _, _, e1 := RawSyscall6(SYS_EPOLL_CTL, uintptr(epfd), uintptr(op), uintptr(fd), uintptr(unsafe.Pointer(event)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { - var _p0 unsafe.Pointer - if len(events) > 0 { - _p0 = unsafe.Pointer(&events[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - 
r0, _, e1 := Syscall6(SYS_EPOLL_WAIT, uintptr(epfd), uintptr(_p0), uintptr(len(events)), uintptr(msec), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FACCESSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fallocate(fd int, mode uint32, off int64, len int64) (err error) { - _, _, e1 := Syscall6(SYS_FALLOCATE, uintptr(fd), uintptr(mode), uintptr(off), uintptr(off>>32), uintptr(len), uintptr(len>>32)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHOWNAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fdatasync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FDATASYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = 
unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS64, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) - tid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getxattr(path string, attr string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(dest) > 0 { - _p2 = unsafe.Pointer(&dest[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(dest)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyAddWatch(fd int, pathname string, mask uint32) (watchdesc int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(pathname) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_INOTIFY_ADD_WATCH, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(mask)) - use(unsafe.Pointer(_p0)) - watchdesc = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit1(flags int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT1, uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyRmWatch(fd int, watchdesc uint32) (success int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_RM_WATCH, uintptr(fd), uintptr(watchdesc), 0) - success = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_KILL, uintptr(pid), uintptr(sig), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Klogctl(typ int, buf []byte) (n int, err error) { 
- var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_SYSLOG, uintptr(typ), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listxattr(path string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(dest) > 0 { - _p1 = unsafe.Pointer(&dest[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_LISTXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(dest))) - use(unsafe.Pointer(_p0)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdirat(dirfd int, path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIRAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MKNODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pause() (err error) { - _, _, e1 := Syscall(SYS_PAUSE, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func PivotRoot(newroot string, putold string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(newroot) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(putold) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_PIVOT_ROOT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func prlimit(pid int, resource int, old *Rlimit, newlimit *Rlimit) (err error) { - _, _, e1 := RawSyscall6(SYS_PRLIMIT64, uintptr(pid), uintptr(resource), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(newlimit)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PRCTL, uintptr(option), uintptr(arg2), uintptr(arg3), uintptr(arg4), uintptr(arg5), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 
:= Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Removexattr(path string, attr string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REMOVEXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_RENAMEAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setdomainname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETDOMAINNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sethostname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETHOSTNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setxattr(path string, attr string, data []byte, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(data) > 0 { - _p2 = unsafe.Pointer(&data[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(data)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - 
if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sysinfo(info *Sysinfo_t) (err error) { - _, _, e1 := RawSyscall(SYS_SYSINFO, uintptr(unsafe.Pointer(info)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tee(rfd int, wfd int, len int, flags int) (n int64, err error) { - r0, r1, e1 := Syscall6(SYS_TEE, uintptr(rfd), uintptr(wfd), uintptr(len), uintptr(flags), 0, 0) - n = int64(int64(r1)<<32 | int64(r0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tgkill(tgid int, tid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_TGKILL, uintptr(tgid), uintptr(tid), uintptr(sig)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Times(tms *Tms) (ticks uintptr, err error) { - r0, _, e1 := RawSyscall(SYS_TIMES, uintptr(unsafe.Pointer(tms)), 0, 0) - ticks = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Uname(buf *Utsname) (err error) { - _, _, e1 := RawSyscall(SYS_UNAME, uintptr(unsafe.Pointer(buf)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(target string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(target) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UMOUNT2, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unshare(flags int) (err error) { - _, _, e1 := Syscall(SYS_UNSHARE, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ustat(dev int, ubuf *Ustat_t) (err error) { - _, _, e1 := Syscall(SYS_USTAT, uintptr(dev), uintptr(unsafe.Pointer(ubuf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Utime(path string, buf *Utimbuf) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func exitThread(code int) (err error) { - _, _, e1 := Syscall(SYS_EXIT, uintptr(code), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS 
FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Madvise(b []byte, advice int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(advice)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe2(p *[2]_C_int, flags int) (err error) { - _, _, e1 := RawSyscall(SYS_PIPE2, uintptr(unsafe.Pointer(p)), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) { - r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0) - fd 
= int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(n int, list *_Gid_t) (nn int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS32, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - nn = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(n int, list *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS32, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = 
unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, flags int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(flags), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(oldfd int, newfd int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(oldfd), uintptr(newfd), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN32, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID32, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID32, 0, 0, 0) - euid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID32, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID32, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit() (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN32, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, n int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(n), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = 
BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - r0, _, e1 := Syscall6(SYS_SENDFILE64, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) - written = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS__NEWSELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsgid(gid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSGID32, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsuid(uid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSUID32, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID32, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID32, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID32, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID32, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int, err error) { - r0, _, e1 := Syscall6(SYS_SPLICE, uintptr(rfd), uintptr(unsafe.Pointer(roff)), uintptr(wfd), uintptr(unsafe.Pointer(woff)), uintptr(len), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT64, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv 
*Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Time(t *Time_t) (tt Time_t, err error) { - r0, _, e1 := RawSyscall(SYS_TIME, uintptr(unsafe.Pointer(t)), 0, 0) - tt = Time_t(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD64, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE64, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_TRUNCATE64, uintptr(unsafe.Pointer(_p0)), 0, uintptr(length), uintptr(length>>32), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall6(SYS_FTRUNCATE64, uintptr(fd), 0, uintptr(length), uintptr(length>>32), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap2(addr uintptr, length uintptr, prot int, flags int, fd int, pageOffset uintptr) (xaddr uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP2, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flags), uintptr(fd), uintptr(pageOffset)) - xaddr = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getrlimit(resource int, rlim *rlimit32) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setrlimit(resource int, rlim *rlimit32) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_arm64.go deleted file mode 100644 index d0a6ed82927..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_arm64.go +++ /dev/null @@ -1,1750 +0,0 @@ -// mksyscall.pl syscall_linux.go syscall_linux_arm64.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm64,linux - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ 
syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func linkat(olddirfd int, oldpath string, newdirfd int, newpath string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_LINKAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_OPENAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mode), 0, 0) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlinkat(dirfd int, path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_READLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf)), 0, 0) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func symlinkat(oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINKAT, uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func unlinkat(dirfd int, path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, times *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimesat(dirfd int, path *byte, times *[2]Timeval) (err 
error) { - _, _, e1 := Syscall(SYS_FUTIMESAT, uintptr(dirfd), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(times))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getcwd(buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETCWD, uintptr(_p0), uintptr(len(buf)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func reboot(magic1 uint, magic2 uint, cmd int, arg string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(arg) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_REBOOT, uintptr(magic1), uintptr(magic2), uintptr(cmd), uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mount(source string, target string, fstype string, flags uintptr, data *byte) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(source) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(target) - if err != nil { - return - } - var _p2 *byte - _p2, err = BytePtrFromString(fstype) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(unsafe.Pointer(_p2)), uintptr(flags), uintptr(unsafe.Pointer(data)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - use(unsafe.Pointer(_p2)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Acct(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtimex(buf *Timex) (state int, err error) { - r0, _, e1 := Syscall(SYS_ADJTIMEX, uintptr(unsafe.Pointer(buf)), 0, 0) - state = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := 
Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ClockGettime(clockid int32, time *Timespec) (err error) { - _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(oldfd int) (fd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(oldfd), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup3(oldfd int, newfd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_DUP3, uintptr(oldfd), uintptr(newfd), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate(size int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE, uintptr(size), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate1(flag int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE1, uintptr(flag), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) { - _, _, e1 := RawSyscall6(SYS_EPOLL_CTL, uintptr(epfd), uintptr(op), uintptr(fd), uintptr(unsafe.Pointer(event)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { - var _p0 unsafe.Pointer - if len(events) > 0 { - _p0 = unsafe.Pointer(&events[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EPOLL_WAIT, uintptr(epfd), uintptr(_p0), uintptr(len(events)), uintptr(msec), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FACCESSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fallocate(fd int, mode uint32, off int64, len int64) (err error) { - _, _, e1 := Syscall6(SYS_FALLOCATE, uintptr(fd), uintptr(mode), uintptr(off), uintptr(len), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - 
-func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHOWNAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fdatasync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FDATASYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS64, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; 
DO NOT EDIT - -func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) - tid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getxattr(path string, attr string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(dest) > 0 { - _p2 = unsafe.Pointer(&dest[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(dest)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyAddWatch(fd int, pathname string, mask uint32) (watchdesc int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(pathname) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_INOTIFY_ADD_WATCH, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(mask)) - use(unsafe.Pointer(_p0)) - watchdesc = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit1(flags int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT1, uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyRmWatch(fd int, watchdesc uint32) (success int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_RM_WATCH, uintptr(fd), uintptr(watchdesc), 0) - success = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_KILL, uintptr(pid), uintptr(sig), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Klogctl(typ int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_SYSLOG, uintptr(typ), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listxattr(path string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(dest) > 0 { - _p1 = unsafe.Pointer(&dest[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_LISTXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(dest))) - use(unsafe.Pointer(_p0)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdirat(dirfd int, path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIRAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { - var _p0 *byte - 
_p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MKNODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pause() (err error) { - _, _, e1 := Syscall(SYS_PAUSE, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func PivotRoot(newroot string, putold string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(newroot) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(putold) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_PIVOT_ROOT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func prlimit(pid int, resource int, old *Rlimit, newlimit *Rlimit) (err error) { - _, _, e1 := RawSyscall6(SYS_PRLIMIT64, uintptr(pid), uintptr(resource), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(newlimit)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PRCTL, uintptr(option), uintptr(arg2), uintptr(arg3), uintptr(arg4), uintptr(arg5), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Removexattr(path string, attr string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REMOVEXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_RENAMEAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setdomainname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) 
> 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETDOMAINNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sethostname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETHOSTNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setxattr(path string, attr string, data []byte, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(data) > 0 { - _p2 = unsafe.Pointer(&data[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(data)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sysinfo(info *Sysinfo_t) (err error) { - _, _, e1 := RawSyscall(SYS_SYSINFO, uintptr(unsafe.Pointer(info)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tee(rfd int, wfd int, len int, flags int) (n int64, err error) { - r0, _, e1 := Syscall6(SYS_TEE, uintptr(rfd), uintptr(wfd), uintptr(len), uintptr(flags), 0, 0) - n = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tgkill(tgid int, tid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_TGKILL, uintptr(tgid), uintptr(tid), uintptr(sig)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Times(tms *Tms) (ticks uintptr, err error) { - r0, _, e1 := RawSyscall(SYS_TIMES, uintptr(unsafe.Pointer(tms)), 0, 0) - ticks = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(mask int) (oldmask int) 
{ - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Uname(buf *Utsname) (err error) { - _, _, e1 := RawSyscall(SYS_UNAME, uintptr(unsafe.Pointer(buf)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(target string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(target) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UMOUNT2, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unshare(flags int) (err error) { - _, _, e1 := Syscall(SYS_UNSHARE, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ustat(dev int, ubuf *Ustat_t) (err error) { - _, _, e1 := Syscall(SYS_USTAT, uintptr(dev), uintptr(unsafe.Pointer(ubuf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Utime(path string, buf *Utimbuf) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func exitThread(code int) (err error) { - _, _, e1 := Syscall(SYS_EXIT, uintptr(code), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Madvise(b []byte, advice int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(advice)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = 
unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, buf *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - euid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(resource int, rlim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT 
THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, n int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(n), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (off int64, err error) { - r0, _, e1 := Syscall(SYS_LSEEK, uintptr(fd), uintptr(offset), uintptr(whence)) - off = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - r0, _, e1 := Syscall6(SYS_SENDFILE, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) - written = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsgid(gid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsuid(uid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(resource int, 
rlim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int64, err error) { - r0, _, e1 := Syscall6(SYS_SPLICE, uintptr(rfd), uintptr(unsafe.Pointer(roff)), uintptr(wfd), uintptr(unsafe.Pointer(woff)), uintptr(len), uintptr(flags)) - n = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, buf *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func SyncFileRange(fd int, off int64, n int64, flags int) (err error) { - _, _, e1 := Syscall6(SYS_SYNC_FILE_RANGE, uintptr(fd), uintptr(off), uintptr(n), uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) { - r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(n int, list *_Gid_t) (nn int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(n), 
uintptr(unsafe.Pointer(list)), 0) - nn = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(n int, list *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = 
errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flags), uintptr(fd), uintptr(offset)) - xaddr = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Time(t *Time_t) (tt Time_t, err error) { - r0, _, e1 := RawSyscall(SYS_TIME, uintptr(unsafe.Pointer(t)), 0, 0) - tt = Time_t(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe2(p *[2]_C_int, flags int) (err error) { - _, _, e1 := RawSyscall(SYS_PIPE2, uintptr(unsafe.Pointer(p)), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_ppc64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_ppc64.go deleted file mode 100644 index f58a3ff2f92..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_ppc64.go +++ /dev/null @@ -1,1792 +0,0 @@ -// mksyscall.pl syscall_linux.go syscall_linux_ppc64x.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build ppc64,linux - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func linkat(olddirfd int, oldpath string, newdirfd int, newpath string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_LINKAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_OPENAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mode), 0, 0) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlinkat(dirfd int, path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_READLINKAT, 
uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf)), 0, 0) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func symlinkat(oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINKAT, uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func unlinkat(dirfd int, path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, times *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimesat(dirfd int, path *byte, times *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMESAT, uintptr(dirfd), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(times))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getcwd(buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETCWD, uintptr(_p0), uintptr(len(buf)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func reboot(magic1 uint, magic2 uint, cmd int, arg string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(arg) - if err != nil { - return - } - _, _, 
e1 := Syscall6(SYS_REBOOT, uintptr(magic1), uintptr(magic2), uintptr(cmd), uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mount(source string, target string, fstype string, flags uintptr, data *byte) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(source) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(target) - if err != nil { - return - } - var _p2 *byte - _p2, err = BytePtrFromString(fstype) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(unsafe.Pointer(_p2)), uintptr(flags), uintptr(unsafe.Pointer(data)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - use(unsafe.Pointer(_p2)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Acct(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtimex(buf *Timex) (state int, err error) { - r0, _, e1 := Syscall(SYS_ADJTIMEX, uintptr(unsafe.Pointer(buf)), 0, 0) - state = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ClockGettime(clockid int32, time *Timespec) (err error) { - _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(oldfd int) (fd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(oldfd), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup3(oldfd int, newfd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_DUP3, uintptr(oldfd), uintptr(newfd), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate(size int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE, uintptr(size), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate1(flag int) (fd int, 
err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE1, uintptr(flag), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) { - _, _, e1 := RawSyscall6(SYS_EPOLL_CTL, uintptr(epfd), uintptr(op), uintptr(fd), uintptr(unsafe.Pointer(event)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { - var _p0 unsafe.Pointer - if len(events) > 0 { - _p0 = unsafe.Pointer(&events[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EPOLL_WAIT, uintptr(epfd), uintptr(_p0), uintptr(len(events)), uintptr(msec), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FACCESSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fallocate(fd int, mode uint32, off int64, len int64) (err error) { - _, _, e1 := Syscall6(SYS_FALLOCATE, uintptr(fd), uintptr(mode), uintptr(off), uintptr(len), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHOWNAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fdatasync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FDATASYNC, uintptr(fd), 0, 0) 
- if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS64, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) - tid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getxattr(path string, attr string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(dest) > 0 { - _p2 = unsafe.Pointer(&dest[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(dest)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyAddWatch(fd int, pathname string, mask uint32) (watchdesc int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(pathname) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_INOTIFY_ADD_WATCH, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(mask)) - use(unsafe.Pointer(_p0)) - watchdesc = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit1(flags int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT1, uintptr(flags), 0, 0) - fd = int(r0) - if e1 
!= 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyRmWatch(fd int, watchdesc uint32) (success int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_RM_WATCH, uintptr(fd), uintptr(watchdesc), 0) - success = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_KILL, uintptr(pid), uintptr(sig), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Klogctl(typ int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_SYSLOG, uintptr(typ), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listxattr(path string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(dest) > 0 { - _p1 = unsafe.Pointer(&dest[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_LISTXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(dest))) - use(unsafe.Pointer(_p0)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdirat(dirfd int, path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIRAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MKNODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pause() (err error) { - _, _, e1 := Syscall(SYS_PAUSE, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func PivotRoot(newroot string, putold string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(newroot) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(putold) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_PIVOT_ROOT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func prlimit(pid int, resource int, old *Rlimit, newlimit *Rlimit) (err error) { - _, _, e1 := RawSyscall6(SYS_PRLIMIT64, uintptr(pid), 
uintptr(resource), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(newlimit)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PRCTL, uintptr(option), uintptr(arg2), uintptr(arg3), uintptr(arg4), uintptr(arg5), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Removexattr(path string, attr string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REMOVEXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_RENAMEAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setdomainname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETDOMAINNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sethostname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETHOSTNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), 
uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setxattr(path string, attr string, data []byte, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(data) > 0 { - _p2 = unsafe.Pointer(&data[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(data)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sysinfo(info *Sysinfo_t) (err error) { - _, _, e1 := RawSyscall(SYS_SYSINFO, uintptr(unsafe.Pointer(info)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tee(rfd int, wfd int, len int, flags int) (n int64, err error) { - r0, _, e1 := Syscall6(SYS_TEE, uintptr(rfd), uintptr(wfd), uintptr(len), uintptr(flags), 0, 0) - n = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tgkill(tgid int, tid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_TGKILL, uintptr(tgid), uintptr(tid), uintptr(sig)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Times(tms *Tms) (ticks uintptr, err error) { - r0, _, e1 := RawSyscall(SYS_TIMES, uintptr(unsafe.Pointer(tms)), 0, 0) - ticks = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Uname(buf *Utsname) (err error) { - _, _, e1 := RawSyscall(SYS_UNAME, uintptr(unsafe.Pointer(buf)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(target string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(target) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UMOUNT2, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unshare(flags int) (err error) { - _, _, e1 := Syscall(SYS_UNSHARE, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ustat(dev int, ubuf *Ustat_t) (err error) { - _, _, e1 := Syscall(SYS_USTAT, uintptr(dev), uintptr(unsafe.Pointer(ubuf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Utime(path string, buf *Utimbuf) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIME, uintptr(unsafe.Pointer(_p0)), 
uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func exitThread(code int) (err error) { - _, _, e1 := Syscall(SYS_EXIT, uintptr(code), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Madvise(b []byte, advice int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(advice)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, 
uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, buf *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - euid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(resource int, rlim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_UGETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ioperm(from int, num int, on int) (err error) { - _, _, e1 := Syscall(SYS_IOPERM, uintptr(from), uintptr(num), uintptr(on)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Iopl(level int) (err error) { - _, _, e1 := Syscall(SYS_IOPL, uintptr(level), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, n int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(n), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD64, 
uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (off int64, err error) { - r0, _, e1 := Syscall(SYS_LSEEK, uintptr(fd), uintptr(offset), uintptr(whence)) - off = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_SELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - r0, _, e1 := Syscall6(SYS_SENDFILE, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) - written = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsgid(gid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsuid(uid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(resource int, rlim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - 
return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int64, err error) { - r0, _, e1 := Syscall6(SYS_SPLICE, uintptr(rfd), uintptr(unsafe.Pointer(roff)), uintptr(wfd), uintptr(unsafe.Pointer(woff)), uintptr(len), uintptr(flags)) - n = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, buf *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func SyncFileRange(fd int, off int64, n int64, flags int) (err error) { - _, _, e1 := Syscall6(SYS_SYNC_FILE_RANGE2, uintptr(fd), uintptr(off), uintptr(n), uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) { - r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(n int, list *_Gid_t) (nn int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - nn = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(n int, list *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(n), 
uintptr(unsafe.Pointer(list)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) 
- if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flags), uintptr(fd), uintptr(offset)) - xaddr = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Time(t *Time_t) (tt Time_t, err error) { - r0, _, e1 := RawSyscall(SYS_TIME, uintptr(unsafe.Pointer(t)), 0, 0) - tt = Time_t(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_ppc64le.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_ppc64le.go deleted file mode 100644 index 22fc7a45725..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_linux_ppc64le.go +++ /dev/null @@ -1,1792 +0,0 @@ -// mksyscall.pl syscall_linux.go syscall_linux_ppc64x.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build ppc64le,linux - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func linkat(olddirfd int, oldpath string, newdirfd int, newpath string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_LINKAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall6(SYS_OPENAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mode), 0, 0) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlinkat(dirfd int, path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_READLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf)), 0, 0) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func symlinkat(oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := 
Syscall(SYS_SYMLINKAT, uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1))) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func unlinkat(dirfd int, path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, times *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimesat(dirfd int, path *byte, times *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMESAT, uintptr(dirfd), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(times))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getcwd(buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETCWD, uintptr(_p0), uintptr(len(buf)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func reboot(magic1 uint, magic2 uint, cmd int, arg string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(arg) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_REBOOT, uintptr(magic1), uintptr(magic2), uintptr(cmd), uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mount(source string, target string, fstype string, flags uintptr, data *byte) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(source) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(target) - if err != 
nil { - return - } - var _p2 *byte - _p2, err = BytePtrFromString(fstype) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(unsafe.Pointer(_p2)), uintptr(flags), uintptr(unsafe.Pointer(data)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - use(unsafe.Pointer(_p2)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Acct(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtimex(buf *Timex) (state int, err error) { - r0, _, e1 := Syscall(SYS_ADJTIMEX, uintptr(unsafe.Pointer(buf)), 0, 0) - state = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func ClockGettime(clockid int32, time *Timespec) (err error) { - _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(oldfd int) (fd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(oldfd), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup3(oldfd int, newfd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_DUP3, uintptr(oldfd), uintptr(newfd), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate(size int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE, uintptr(size), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCreate1(flag int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_EPOLL_CREATE1, uintptr(flag), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) { - _, _, e1 := RawSyscall6(SYS_EPOLL_CTL, uintptr(epfd), uintptr(op), uintptr(fd), uintptr(unsafe.Pointer(event)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE 
TOP; DO NOT EDIT - -func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { - var _p0 unsafe.Pointer - if len(events) > 0 { - _p0 = unsafe.Pointer(&events[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_EPOLL_WAIT, uintptr(epfd), uintptr(_p0), uintptr(len(events)), uintptr(msec), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FACCESSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fallocate(fd int, mode uint32, off int64, len int64) (err error) { - _, _, e1 := Syscall6(SYS_FALLOCATE, uintptr(fd), uintptr(mode), uintptr(off), uintptr(len), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_FCHOWNAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fdatasync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FDATASYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS64, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) - tid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getxattr(path string, attr string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(dest) > 0 { - _p2 = unsafe.Pointer(&dest[0]) - } else { - _p2 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_GETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(dest)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyAddWatch(fd int, pathname string, mask uint32) (watchdesc int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(pathname) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_INOTIFY_ADD_WATCH, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(mask)) - use(unsafe.Pointer(_p0)) - watchdesc = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyInit1(flags int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_INIT1, uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func InotifyRmWatch(fd int, watchdesc uint32) (success int, err error) { - r0, _, e1 := RawSyscall(SYS_INOTIFY_RM_WATCH, uintptr(fd), uintptr(watchdesc), 0) - success = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_KILL, 
uintptr(pid), uintptr(sig), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Klogctl(typ int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_SYSLOG, uintptr(typ), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listxattr(path string, dest []byte) (sz int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(dest) > 0 { - _p1 = unsafe.Pointer(&dest[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_LISTXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(dest))) - use(unsafe.Pointer(_p0)) - sz = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdirat(dirfd int, path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIRAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_MKNODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pause() (err error) { - _, _, e1 := Syscall(SYS_PAUSE, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func PivotRoot(newroot string, putold string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(newroot) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(putold) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_PIVOT_ROOT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func prlimit(pid int, resource int, old *Rlimit, newlimit *Rlimit) (err error) { - _, _, e1 := RawSyscall6(SYS_PRLIMIT64, uintptr(pid), uintptr(resource), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(newlimit)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) (err error) { - _, _, e1 := Syscall6(SYS_PRCTL, uintptr(option), uintptr(arg2), uintptr(arg3), uintptr(arg4), uintptr(arg5), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Removexattr(path string, attr string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REMOVEXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_RENAMEAT, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setdomainname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETDOMAINNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sethostname(p []byte) (err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_SETHOSTNAME, uintptr(_p0), uintptr(len(p)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setxattr(path string, attr string, data []byte, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(attr) - if err != nil { - return - } - var _p2 unsafe.Pointer - if len(data) > 0 { - _p2 = unsafe.Pointer(&data[0]) - } else { - _p2 = 
unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(data)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sysinfo(info *Sysinfo_t) (err error) { - _, _, e1 := RawSyscall(SYS_SYSINFO, uintptr(unsafe.Pointer(info)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tee(rfd int, wfd int, len int, flags int) (n int64, err error) { - r0, _, e1 := Syscall6(SYS_TEE, uintptr(rfd), uintptr(wfd), uintptr(len), uintptr(flags), 0, 0) - n = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Tgkill(tgid int, tid int, sig syscall.Signal) (err error) { - _, _, e1 := RawSyscall(SYS_TGKILL, uintptr(tgid), uintptr(tid), uintptr(sig)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Times(tms *Tms) (ticks uintptr, err error) { - r0, _, e1 := RawSyscall(SYS_TIMES, uintptr(unsafe.Pointer(tms)), 0, 0) - ticks = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Uname(buf *Utsname) (err error) { - _, _, e1 := RawSyscall(SYS_UNAME, uintptr(unsafe.Pointer(buf)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(target string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(target) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UMOUNT2, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unshare(flags int) (err error) { - _, _, e1 := Syscall(SYS_UNSHARE, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ustat(dev int, ubuf *Ustat_t) (err error) { - _, _, e1 := Syscall(SYS_USTAT, uintptr(dev), uintptr(unsafe.Pointer(ubuf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Utime(path string, buf *Utimbuf) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func exitThread(code int) (err error) { - _, _, e1 := Syscall(SYS_EXIT, uintptr(code), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, p *byte, np int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(p)), uintptr(np)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Madvise(b []byte, advice int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(advice)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, buf *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), 
uintptr(unsafe.Pointer(buf)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - euid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(resource int, rlim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_UGETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ioperm(from int, num int, on int) (err error) { - _, _, e1 := Syscall(SYS_IOPERM, uintptr(from), uintptr(num), uintptr(on)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Iopl(level int) (err error) { - _, _, e1 := Syscall(SYS_IOPL, uintptr(level), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, n int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(n), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE64, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 
!= 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (off int64, err error) { - r0, _, e1 := Syscall(SYS_LSEEK, uintptr(fd), uintptr(offset), uintptr(whence)) - off = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_SELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { - r0, _, e1 := Syscall6(SYS_SENDFILE, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) - written = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsgid(gid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setfsuid(uid int) (err error) { - _, _, e1 := Syscall(SYS_SETFSUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(resource int, rlim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(resource), uintptr(unsafe.Pointer(rlim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Splice(rfd int, roff *int64, wfd int, woff *int64, len int, flags int) (n int64, err error) { - r0, _, e1 := Syscall6(SYS_SPLICE, uintptr(rfd), uintptr(unsafe.Pointer(roff)), uintptr(wfd), uintptr(unsafe.Pointer(woff)), uintptr(len), uintptr(flags)) - n = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err 
error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Statfs(path string, buf *Statfs_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func SyncFileRange(fd int, off int64, n int64, flags int) (err error) { - _, _, e1 := Syscall6(SYS_SYNC_FILE_RANGE2, uintptr(fd), uintptr(off), uintptr(n), uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) { - r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(n int, list *_Gid_t) (nn int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - nn = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(n int, list *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(n), uintptr(unsafe.Pointer(list)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s 
int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) { - r0, _, e1 := Syscall6(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flags), uintptr(fd), uintptr(offset)) - xaddr = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func 
Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Time(t *Time_t) (tt Time_t, err error) { - r0, _, e1 := RawSyscall(SYS_TIME, uintptr(unsafe.Pointer(t)), 0, 0) - tt = Time_t(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_386.go deleted file mode 100644 index 00ca1f9c119..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_386.go +++ /dev/null @@ -1,1326 +0,0 @@ -// mksyscall.pl -l32 -netbsd syscall_bsd.go syscall_netbsd.go syscall_netbsd_386.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,netbsd - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY 
THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; 
DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (fd1 int, fd2 int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) - fd1 = int(r0) - fd2 = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, 
err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := 
Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall6(SYS_FTRUNCATE, uintptr(fd), 0, uintptr(length), uintptr(length>>32), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// 
THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - 
_p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), 
uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, r1, e1 := Syscall6(SYS_LSEEK, uintptr(fd), 0, uintptr(offset), uintptr(offset>>32), uintptr(whence), 0) - newoffset = int64(int64(r1)<<32 | int64(r0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := 
RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), 0, uintptr(length), uintptr(length>>32), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err 
error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall9(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), 0, uintptr(pos), uintptr(pos>>32), 0) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go deleted file mode 100644 index 03f31b973d5..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go +++ /dev/null @@ -1,1326 +0,0 @@ -// mksyscall.pl -netbsd syscall_bsd.go syscall_netbsd.go syscall_netbsd_amd64.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build amd64,netbsd - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { 
- _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - 
return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (fd1 int, fd2 int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) - fd1 = int(r0) - fd2 = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - 
-func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), 0, uintptr(length)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; 
DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - 
-func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = 
unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, _, e1 := Syscall6(SYS_LSEEK, uintptr(fd), 0, uintptr(offset), uintptr(whence), 0, 0) - newoffset = int64(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, 
uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), 0, uintptr(length)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) 
(oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall9(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), 0, uintptr(pos), 0, 0) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_arm.go deleted file mode 100644 index 84dc61cfa98..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_netbsd_arm.go +++ /dev/null @@ -1,1326 +0,0 @@ -// mksyscall.pl -l32 -arm syscall_bsd.go syscall_netbsd.go syscall_netbsd_arm.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build arm,netbsd - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), 
uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE 
TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe() (fd1 int, fd2 int, err error) { - r0, r1, e1 := RawSyscall(SYS_PIPE, 
0, 0, 0) - fd1 = int(r0) - fd2 = int(r1) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), 
uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall6(SYS_FTRUNCATE, uintptr(fd), 0, uintptr(length), uintptr(length>>32), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func 
Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := 
Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func 
Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), uintptr(offset>>32)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func read(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 unsafe.Pointer - if len(buf) > 0 { - _p1 = unsafe.Pointer(&buf[0]) - } else { - _p1 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Revoke(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, r1, e1 := Syscall6(SYS_LSEEK, uintptr(fd), 0, uintptr(offset), uintptr(offset>>32), uintptr(whence), 0) - 
newoffset = int64(int64(r1)<<32 | int64(r0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { - _, _, e1 := Syscall6(SYS_SELECT, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Symlink(path string, link string) 
(err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall6(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), 0, uintptr(length), uintptr(length>>32), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) - oldmask = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Unmount(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UNMOUNT, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func write(fd int, p []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall9(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), 0, uintptr(pos), uintptr(pos>>32), 0) - ret = uintptr(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} diff --git 
a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_openbsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_openbsd_386.go deleted file mode 100644 index 02b3528a69e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_openbsd_386.go +++ /dev/null @@ -1,1386 +0,0 @@ -// mksyscall.pl -l32 -openbsd syscall_bsd.go syscall_openbsd.go syscall_openbsd_386.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT - -// +build 386,openbsd - -package unix - -import ( - "syscall" - "unsafe" -) - -var _ syscall.Errno - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) - wpid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) - fd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen 
*_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 unsafe.Pointer - if len(p) > 0 { - _p0 = unsafe.Pointer(&p[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 
{ - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func utimes(path string, timeval *[2]Timeval) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func fcntl(fd int, cmd int, arg int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FCNTL, uintptr(fd), uintptr(cmd), uintptr(arg)) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func pipe(p *[2]_C_int) (err error) { - _, _, e1 := RawSyscall(SYS_PIPE, uintptr(unsafe.Pointer(p)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func getdents(fd int, buf []byte) (n int, err error) { - var _p0 unsafe.Pointer - if len(buf) > 0 { - _p0 = unsafe.Pointer(&buf[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - r0, _, e1 := Syscall(SYS_GETDENTS, uintptr(fd), uintptr(_p0), uintptr(len(buf))) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Access(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chflags(path string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chmod(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, 
e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Chroot(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) - nfd = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) - val = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Ftruncate(fd int, 
length int64) (err error) { - _, _, e1 := Syscall6(SYS_FTRUNCATE, uintptr(fd), 0, uintptr(length), uintptr(length>>32), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) - egid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) - gid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) - pgid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) - pgrp = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) - pid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) - ppid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) - prio = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) - sid = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) - uid = int(r0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) - tainted = bool(r0 != 0) - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = 
errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = 
[Remainder of the preceding deleted machine-generated syscall-wrapper file (line breaks lost in extraction): wrappers from Munlock through Truncate, Umask, Unlink, Unmount, write, mmap, munmap, readlen, and writelen. Each converts string arguments with BytePtrFromString, issues Syscall/Syscall6/RawSyscall, and maps a non-zero errno to an error via errnoErr; 64-bit offsets are split across two registers (offset, offset>>32).]

diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go
deleted file mode 100644
index 7dc2b7eaf7f..00000000000
--- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go
+++ /dev/null
@@ -1,1386 +0,0 @@
-// mksyscall.pl -openbsd syscall_bsd.go syscall_openbsd.go syscall_openbsd_amd64.go
-// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
-
-// +build amd64,openbsd
-
-package unix

[The remaining deleted lines of this 1,386-line file are the amd64/OpenBSD counterparts of the same generated wrappers: networking calls (getgroups, wait4, accept, bind, connect, socket, get/setsockopt, recvfrom, sendto, recvmsg, sendmsg, kevent, sysctl), file and process management (Access through Unmount), and the low-level read/write/mmap helpers. They follow the identical Syscall-plus-errnoErr pattern, with 64-bit offsets passed in a single register rather than split across two.]
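For readers skimming this vendor churn, a minimal hand-written sketch of the wrapper shape these generated files share may help. It is illustrative only and not part of the diff: it uses the standard library `syscall` package and a Linux syscall constant instead of the vendored `unix` package and its internal `errnoErr` helper.

```go
package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

// rmdir mirrors the generated wrappers' shape: convert string arguments to a
// NUL-terminated *byte, pass pointers and integers as uintptr to the raw
// syscall, and turn a non-zero errno into a Go error.
func rmdir(path string) (err error) {
	p, err := syscall.BytePtrFromString(path)
	if err != nil {
		return err
	}
	_, _, e1 := syscall.Syscall(syscall.SYS_RMDIR, uintptr(unsafe.Pointer(p)), 0, 0)
	if e1 != 0 {
		err = e1 // syscall.Errno satisfies error; the vendored code wraps this as errnoErr(e1)
	}
	return
}

func main() {
	// Expect an ENOENT error for a path that does not exist.
	fmt.Println(rmdir("/tmp/definitely-not-a-real-dir"))
}
```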
diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_solaris_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_solaris_amd64.go
deleted file mode 100644
index 8d2a8365b2f..00000000000
--- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsyscall_solaris_amd64.go
+++ /dev/null
@@ -1,1559 +0,0 @@
-// mksyscall_solaris.pl syscall_solaris.go syscall_solaris_amd64.go
-// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
-
-// +build amd64,solaris
-
-package unix

[This deleted Solaris file takes a different route than the BSD ones: instead of trapping directly into the kernel, it declares //go:cgo_import_dynamic and //go:linkname directives binding each wrapper to a symbol in libc.so or libsocket.so, gathers the resulting proc addresses in a single var block, and implements every wrapper as a call through sysvicall6/rawSysvicall6. The deleted wrappers from getsockname through Getpriority appear here; the rest of the file follows below.]
- -func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procGetrlimit)), 2, uintptr(which), uintptr(unsafe.Pointer(lim)), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procGetrusage)), 2, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Gettimeofday(tv *Timeval) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procGettimeofday)), 1, uintptr(unsafe.Pointer(tv)), 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Getuid() (uid int) { - r0, _, _ := rawSysvicall6(uintptr(unsafe.Pointer(&procGetuid)), 0, 0, 0, 0, 0, 0, 0) - uid = int(r0) - return -} - -func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procKill)), 2, uintptr(pid), uintptr(signum), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Lchown(path string, uid int, gid int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procLchown)), 3, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Link(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procLink)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = e1 - } - return -} - -func Listen(s int, backlog int) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&proclisten)), 2, uintptr(s), uintptr(backlog), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Lstat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procLstat)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Madvise(b []byte, advice int) (err error) { - var _p0 *byte - if len(b) > 0 { - _p0 = &b[0] - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMadvise)), 3, uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(advice), 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Mkdir(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMkdir)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Mkdirat(dirfd int, path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMkdirat)), 3, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Mkfifo(path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMkfifo)), 2, 
uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Mkfifoat(dirfd int, path string, mode uint32) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMkfifoat)), 3, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Mknod(path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMknod)), 3, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMknodat)), 4, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Mlock(b []byte) (err error) { - var _p0 *byte - if len(b) > 0 { - _p0 = &b[0] - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMlock)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Mlockall(flags int) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMlockall)), 1, uintptr(flags), 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Mprotect(b []byte, prot int) (err error) { - var _p0 *byte - if len(b) > 0 { - _p0 = &b[0] - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMprotect)), 3, uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(prot), 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Munlock(b []byte) (err error) { - var _p0 *byte - if len(b) > 0 { - _p0 = &b[0] - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMunlock)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Munlockall() (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMunlockall)), 0, 0, 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procNanosleep)), 2, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Open(path string, mode int, perm uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procOpen)), 3, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm), 0, 0, 0) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Openat(dirfd int, path string, flags int, mode uint32) (fd int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procOpenat)), 4, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mode), 0, 0) - use(unsafe.Pointer(_p0)) - fd = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Pathconf(path string, name int) (val int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - r0, _, e1 := 
sysvicall6(uintptr(unsafe.Pointer(&procPathconf)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - val = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Pause() (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procPause)), 0, 0, 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Pread(fd int, p []byte, offset int64) (n int, err error) { - var _p0 *byte - if len(p) > 0 { - _p0 = &p[0] - } - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procPread)), 4, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Pwrite(fd int, p []byte, offset int64) (n int, err error) { - var _p0 *byte - if len(p) > 0 { - _p0 = &p[0] - } - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procPwrite)), 4, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(len(p)), uintptr(offset), 0, 0) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func read(fd int, p []byte) (n int, err error) { - var _p0 *byte - if len(p) > 0 { - _p0 = &p[0] - } - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procread)), 3, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(len(p)), 0, 0, 0) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Readlink(path string, buf []byte) (n int, err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - if len(buf) > 0 { - _p1 = &buf[0] - } - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procReadlink)), 3, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(len(buf)), 0, 0, 0) - use(unsafe.Pointer(_p0)) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Rename(from string, to string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(from) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(to) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procRename)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = e1 - } - return -} - -func Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(oldpath) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(newpath) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procRenameat)), 4, uintptr(olddirfd), uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1)), 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = e1 - } - return -} - -func Rmdir(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procRmdir)), 1, uintptr(unsafe.Pointer(_p0)), 0, 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&proclseek)), 3, uintptr(fd), uintptr(offset), uintptr(whence), 0, 0, 0) - newoffset = int64(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Setegid(egid int) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetegid)), 1, uintptr(egid), 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Seteuid(euid int) (err error) { - _, _, e1 := 
rawSysvicall6(uintptr(unsafe.Pointer(&procSeteuid)), 1, uintptr(euid), 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Setgid(gid int) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetgid)), 1, uintptr(gid), 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Sethostname(p []byte) (err error) { - var _p0 *byte - if len(p) > 0 { - _p0 = &p[0] - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procSethostname)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(len(p)), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetpgid)), 2, uintptr(pid), uintptr(pgid), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procSetpriority)), 3, uintptr(which), uintptr(who), uintptr(prio), 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Setregid(rgid int, egid int) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetregid)), 2, uintptr(rgid), uintptr(egid), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetreuid)), 2, uintptr(ruid), uintptr(euid), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetrlimit)), 2, uintptr(which), uintptr(unsafe.Pointer(lim)), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Setsid() (pid int, err error) { - r0, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetsid)), 0, 0, 0, 0, 0, 0, 0) - pid = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Setuid(uid int) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetuid)), 1, uintptr(uid), 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Shutdown(s int, how int) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procshutdown)), 2, uintptr(s), uintptr(how), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Stat(path string, stat *Stat_t) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procStat)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Symlink(path string, link string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - var _p1 *byte - _p1, err = BytePtrFromString(link) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procSymlink)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - use(unsafe.Pointer(_p1)) - if e1 != 0 { - err = e1 - } - return -} - -func Sync() (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procSync)), 0, 0, 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Times(tms *Tms) (ticks uintptr, err error) { - r0, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procTimes)), 1, uintptr(unsafe.Pointer(tms)), 0, 0, 0, 0, 0) - ticks = uintptr(r0) - if e1 != 0 { - err = e1 - } - return -} - -func Truncate(path string, length int64) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := 
sysvicall6(uintptr(unsafe.Pointer(&procTruncate)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Fsync(fd int) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procFsync)), 1, uintptr(fd), 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procFtruncate)), 2, uintptr(fd), uintptr(length), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Umask(mask int) (oldmask int) { - r0, _, _ := sysvicall6(uintptr(unsafe.Pointer(&procUmask)), 1, uintptr(mask), 0, 0, 0, 0, 0) - oldmask = int(r0) - return -} - -func Uname(buf *Utsname) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procUname)), 1, uintptr(unsafe.Pointer(buf)), 0, 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Unmount(target string, flags int) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(target) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procumount)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Unlink(path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procUnlink)), 1, uintptr(unsafe.Pointer(_p0)), 0, 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Unlinkat(dirfd int, path string) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procUnlinkat)), 2, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func Ustat(dev int, ubuf *Ustat_t) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procUstat)), 2, uintptr(dev), uintptr(unsafe.Pointer(ubuf)), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func Utime(path string, buf *Utimbuf) (err error) { - var _p0 *byte - _p0, err = BytePtrFromString(path) - if err != nil { - return - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procUtime)), 2, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(buf)), 0, 0, 0, 0) - use(unsafe.Pointer(_p0)) - if e1 != 0 { - err = e1 - } - return -} - -func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procbind)), 3, uintptr(s), uintptr(addr), uintptr(addrlen), 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procconnect)), 3, uintptr(s), uintptr(addr), uintptr(addrlen), 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procmmap)), 6, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), uintptr(pos)) - ret = uintptr(r0) - if e1 != 0 { - err = e1 - } - return -} - -func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procmunmap)), 2, uintptr(addr), uintptr(length), 0, 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err 
error) { - var _p0 *byte - if len(buf) > 0 { - _p0 = &buf[0] - } - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procsendto)), 6, uintptr(s), uintptr(unsafe.Pointer(_p0)), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) - if e1 != 0 { - err = e1 - } - return -} - -func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procsocket)), 3, uintptr(domain), uintptr(typ), uintptr(proto), 0, 0, 0) - fd = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procsocketpair)), 4, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func write(fd int, p []byte) (n int, err error) { - var _p0 *byte - if len(p) > 0 { - _p0 = &p[0] - } - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procwrite)), 3, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(len(p)), 0, 0, 0) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procgetsockopt)), 5, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) - if e1 != 0 { - err = e1 - } - return -} - -func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procgetpeername)), 3, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), 0, 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procsetsockopt)), 5, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) - if e1 != 0 { - err = e1 - } - return -} - -func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { - var _p0 *byte - if len(p) > 0 { - _p0 = &p[0] - } - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procrecvfrom)), 6, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) - n = int(r0) - if e1 != 0 { - err = e1 - } - return -} - -func sysconf(name int) (n int64, err error) { - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procsysconf)), 1, uintptr(name), 0, 0, 0, 0, 0) - n = int64(r0) - if e1 != 0 { - err = e1 - } - return -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysctl_openbsd.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysctl_openbsd.go deleted file mode 100644 index 83bb935b91c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysctl_openbsd.go +++ /dev/null @@ -1,270 +0,0 @@ -// mksysctl_openbsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -package unix - -type mibentry struct { - ctlname string - ctloid []_C_int -} - -var sysctlMib = []mibentry{ - {"ddb.console", []_C_int{9, 6}}, - {"ddb.log", []_C_int{9, 7}}, - {"ddb.max_line", []_C_int{9, 3}}, - {"ddb.max_width", []_C_int{9, 2}}, - {"ddb.panic", []_C_int{9, 5}}, - {"ddb.radix", []_C_int{9, 1}}, - {"ddb.tab_stop_width", []_C_int{9, 4}}, - {"ddb.trigger", []_C_int{9, 8}}, - {"fs.posix.setuid", []_C_int{3, 1, 1}}, - {"hw.allowpowerdown", 
[]_C_int{6, 22}}, - {"hw.byteorder", []_C_int{6, 4}}, - {"hw.cpuspeed", []_C_int{6, 12}}, - {"hw.diskcount", []_C_int{6, 10}}, - {"hw.disknames", []_C_int{6, 8}}, - {"hw.diskstats", []_C_int{6, 9}}, - {"hw.machine", []_C_int{6, 1}}, - {"hw.model", []_C_int{6, 2}}, - {"hw.ncpu", []_C_int{6, 3}}, - {"hw.ncpufound", []_C_int{6, 21}}, - {"hw.pagesize", []_C_int{6, 7}}, - {"hw.physmem", []_C_int{6, 19}}, - {"hw.product", []_C_int{6, 15}}, - {"hw.serialno", []_C_int{6, 17}}, - {"hw.setperf", []_C_int{6, 13}}, - {"hw.usermem", []_C_int{6, 20}}, - {"hw.uuid", []_C_int{6, 18}}, - {"hw.vendor", []_C_int{6, 14}}, - {"hw.version", []_C_int{6, 16}}, - {"kern.arandom", []_C_int{1, 37}}, - {"kern.argmax", []_C_int{1, 8}}, - {"kern.boottime", []_C_int{1, 21}}, - {"kern.bufcachepercent", []_C_int{1, 72}}, - {"kern.ccpu", []_C_int{1, 45}}, - {"kern.clockrate", []_C_int{1, 12}}, - {"kern.consdev", []_C_int{1, 75}}, - {"kern.cp_time", []_C_int{1, 40}}, - {"kern.cp_time2", []_C_int{1, 71}}, - {"kern.cryptodevallowsoft", []_C_int{1, 53}}, - {"kern.domainname", []_C_int{1, 22}}, - {"kern.file", []_C_int{1, 73}}, - {"kern.forkstat", []_C_int{1, 42}}, - {"kern.fscale", []_C_int{1, 46}}, - {"kern.fsync", []_C_int{1, 33}}, - {"kern.hostid", []_C_int{1, 11}}, - {"kern.hostname", []_C_int{1, 10}}, - {"kern.intrcnt.nintrcnt", []_C_int{1, 63, 1}}, - {"kern.job_control", []_C_int{1, 19}}, - {"kern.malloc.buckets", []_C_int{1, 39, 1}}, - {"kern.malloc.kmemnames", []_C_int{1, 39, 3}}, - {"kern.maxclusters", []_C_int{1, 67}}, - {"kern.maxfiles", []_C_int{1, 7}}, - {"kern.maxlocksperuid", []_C_int{1, 70}}, - {"kern.maxpartitions", []_C_int{1, 23}}, - {"kern.maxproc", []_C_int{1, 6}}, - {"kern.maxthread", []_C_int{1, 25}}, - {"kern.maxvnodes", []_C_int{1, 5}}, - {"kern.mbstat", []_C_int{1, 59}}, - {"kern.msgbuf", []_C_int{1, 48}}, - {"kern.msgbufsize", []_C_int{1, 38}}, - {"kern.nchstats", []_C_int{1, 41}}, - {"kern.netlivelocks", []_C_int{1, 76}}, - {"kern.nfiles", []_C_int{1, 56}}, - {"kern.ngroups", []_C_int{1, 18}}, - {"kern.nosuidcoredump", []_C_int{1, 32}}, - {"kern.nprocs", []_C_int{1, 47}}, - {"kern.nselcoll", []_C_int{1, 43}}, - {"kern.nthreads", []_C_int{1, 26}}, - {"kern.numvnodes", []_C_int{1, 58}}, - {"kern.osrelease", []_C_int{1, 2}}, - {"kern.osrevision", []_C_int{1, 3}}, - {"kern.ostype", []_C_int{1, 1}}, - {"kern.osversion", []_C_int{1, 27}}, - {"kern.pool_debug", []_C_int{1, 77}}, - {"kern.posix1version", []_C_int{1, 17}}, - {"kern.proc", []_C_int{1, 66}}, - {"kern.random", []_C_int{1, 31}}, - {"kern.rawpartition", []_C_int{1, 24}}, - {"kern.saved_ids", []_C_int{1, 20}}, - {"kern.securelevel", []_C_int{1, 9}}, - {"kern.seminfo", []_C_int{1, 61}}, - {"kern.shminfo", []_C_int{1, 62}}, - {"kern.somaxconn", []_C_int{1, 28}}, - {"kern.sominconn", []_C_int{1, 29}}, - {"kern.splassert", []_C_int{1, 54}}, - {"kern.stackgap_random", []_C_int{1, 50}}, - {"kern.sysvipc_info", []_C_int{1, 51}}, - {"kern.sysvmsg", []_C_int{1, 34}}, - {"kern.sysvsem", []_C_int{1, 35}}, - {"kern.sysvshm", []_C_int{1, 36}}, - {"kern.timecounter.choice", []_C_int{1, 69, 4}}, - {"kern.timecounter.hardware", []_C_int{1, 69, 3}}, - {"kern.timecounter.tick", []_C_int{1, 69, 1}}, - {"kern.timecounter.timestepwarnings", []_C_int{1, 69, 2}}, - {"kern.tty.maxptys", []_C_int{1, 44, 6}}, - {"kern.tty.nptys", []_C_int{1, 44, 7}}, - {"kern.tty.tk_cancc", []_C_int{1, 44, 4}}, - {"kern.tty.tk_nin", []_C_int{1, 44, 1}}, - {"kern.tty.tk_nout", []_C_int{1, 44, 2}}, - {"kern.tty.tk_rawcc", []_C_int{1, 44, 3}}, - {"kern.tty.ttyinfo", []_C_int{1, 44, 5}}, - 
{"kern.ttycount", []_C_int{1, 57}}, - {"kern.userasymcrypto", []_C_int{1, 60}}, - {"kern.usercrypto", []_C_int{1, 52}}, - {"kern.usermount", []_C_int{1, 30}}, - {"kern.version", []_C_int{1, 4}}, - {"kern.vnode", []_C_int{1, 13}}, - {"kern.watchdog.auto", []_C_int{1, 64, 2}}, - {"kern.watchdog.period", []_C_int{1, 64, 1}}, - {"net.bpf.bufsize", []_C_int{4, 31, 1}}, - {"net.bpf.maxbufsize", []_C_int{4, 31, 2}}, - {"net.inet.ah.enable", []_C_int{4, 2, 51, 1}}, - {"net.inet.ah.stats", []_C_int{4, 2, 51, 2}}, - {"net.inet.carp.allow", []_C_int{4, 2, 112, 1}}, - {"net.inet.carp.log", []_C_int{4, 2, 112, 3}}, - {"net.inet.carp.preempt", []_C_int{4, 2, 112, 2}}, - {"net.inet.carp.stats", []_C_int{4, 2, 112, 4}}, - {"net.inet.divert.recvspace", []_C_int{4, 2, 258, 1}}, - {"net.inet.divert.sendspace", []_C_int{4, 2, 258, 2}}, - {"net.inet.divert.stats", []_C_int{4, 2, 258, 3}}, - {"net.inet.esp.enable", []_C_int{4, 2, 50, 1}}, - {"net.inet.esp.stats", []_C_int{4, 2, 50, 4}}, - {"net.inet.esp.udpencap", []_C_int{4, 2, 50, 2}}, - {"net.inet.esp.udpencap_port", []_C_int{4, 2, 50, 3}}, - {"net.inet.etherip.allow", []_C_int{4, 2, 97, 1}}, - {"net.inet.etherip.stats", []_C_int{4, 2, 97, 2}}, - {"net.inet.gre.allow", []_C_int{4, 2, 47, 1}}, - {"net.inet.gre.wccp", []_C_int{4, 2, 47, 2}}, - {"net.inet.icmp.bmcastecho", []_C_int{4, 2, 1, 2}}, - {"net.inet.icmp.errppslimit", []_C_int{4, 2, 1, 3}}, - {"net.inet.icmp.maskrepl", []_C_int{4, 2, 1, 1}}, - {"net.inet.icmp.rediraccept", []_C_int{4, 2, 1, 4}}, - {"net.inet.icmp.redirtimeout", []_C_int{4, 2, 1, 5}}, - {"net.inet.icmp.stats", []_C_int{4, 2, 1, 7}}, - {"net.inet.icmp.tstamprepl", []_C_int{4, 2, 1, 6}}, - {"net.inet.igmp.stats", []_C_int{4, 2, 2, 1}}, - {"net.inet.ip.arpqueued", []_C_int{4, 2, 0, 36}}, - {"net.inet.ip.encdebug", []_C_int{4, 2, 0, 12}}, - {"net.inet.ip.forwarding", []_C_int{4, 2, 0, 1}}, - {"net.inet.ip.ifq.congestion", []_C_int{4, 2, 0, 30, 4}}, - {"net.inet.ip.ifq.drops", []_C_int{4, 2, 0, 30, 3}}, - {"net.inet.ip.ifq.len", []_C_int{4, 2, 0, 30, 1}}, - {"net.inet.ip.ifq.maxlen", []_C_int{4, 2, 0, 30, 2}}, - {"net.inet.ip.maxqueue", []_C_int{4, 2, 0, 11}}, - {"net.inet.ip.mforwarding", []_C_int{4, 2, 0, 31}}, - {"net.inet.ip.mrtproto", []_C_int{4, 2, 0, 34}}, - {"net.inet.ip.mrtstats", []_C_int{4, 2, 0, 35}}, - {"net.inet.ip.mtu", []_C_int{4, 2, 0, 4}}, - {"net.inet.ip.mtudisc", []_C_int{4, 2, 0, 27}}, - {"net.inet.ip.mtudisctimeout", []_C_int{4, 2, 0, 28}}, - {"net.inet.ip.multipath", []_C_int{4, 2, 0, 32}}, - {"net.inet.ip.portfirst", []_C_int{4, 2, 0, 7}}, - {"net.inet.ip.porthifirst", []_C_int{4, 2, 0, 9}}, - {"net.inet.ip.porthilast", []_C_int{4, 2, 0, 10}}, - {"net.inet.ip.portlast", []_C_int{4, 2, 0, 8}}, - {"net.inet.ip.redirect", []_C_int{4, 2, 0, 2}}, - {"net.inet.ip.sourceroute", []_C_int{4, 2, 0, 5}}, - {"net.inet.ip.stats", []_C_int{4, 2, 0, 33}}, - {"net.inet.ip.ttl", []_C_int{4, 2, 0, 3}}, - {"net.inet.ipcomp.enable", []_C_int{4, 2, 108, 1}}, - {"net.inet.ipcomp.stats", []_C_int{4, 2, 108, 2}}, - {"net.inet.ipip.allow", []_C_int{4, 2, 4, 1}}, - {"net.inet.ipip.stats", []_C_int{4, 2, 4, 2}}, - {"net.inet.mobileip.allow", []_C_int{4, 2, 55, 1}}, - {"net.inet.pfsync.stats", []_C_int{4, 2, 240, 1}}, - {"net.inet.pim.stats", []_C_int{4, 2, 103, 1}}, - {"net.inet.tcp.ackonpush", []_C_int{4, 2, 6, 13}}, - {"net.inet.tcp.always_keepalive", []_C_int{4, 2, 6, 22}}, - {"net.inet.tcp.baddynamic", []_C_int{4, 2, 6, 6}}, - {"net.inet.tcp.drop", []_C_int{4, 2, 6, 19}}, - {"net.inet.tcp.ecn", []_C_int{4, 2, 6, 14}}, - 
{"net.inet.tcp.ident", []_C_int{4, 2, 6, 9}}, - {"net.inet.tcp.keepidle", []_C_int{4, 2, 6, 3}}, - {"net.inet.tcp.keepinittime", []_C_int{4, 2, 6, 2}}, - {"net.inet.tcp.keepintvl", []_C_int{4, 2, 6, 4}}, - {"net.inet.tcp.mssdflt", []_C_int{4, 2, 6, 11}}, - {"net.inet.tcp.reasslimit", []_C_int{4, 2, 6, 18}}, - {"net.inet.tcp.rfc1323", []_C_int{4, 2, 6, 1}}, - {"net.inet.tcp.rfc3390", []_C_int{4, 2, 6, 17}}, - {"net.inet.tcp.rstppslimit", []_C_int{4, 2, 6, 12}}, - {"net.inet.tcp.sack", []_C_int{4, 2, 6, 10}}, - {"net.inet.tcp.sackholelimit", []_C_int{4, 2, 6, 20}}, - {"net.inet.tcp.slowhz", []_C_int{4, 2, 6, 5}}, - {"net.inet.tcp.stats", []_C_int{4, 2, 6, 21}}, - {"net.inet.tcp.synbucketlimit", []_C_int{4, 2, 6, 16}}, - {"net.inet.tcp.syncachelimit", []_C_int{4, 2, 6, 15}}, - {"net.inet.udp.baddynamic", []_C_int{4, 2, 17, 2}}, - {"net.inet.udp.checksum", []_C_int{4, 2, 17, 1}}, - {"net.inet.udp.recvspace", []_C_int{4, 2, 17, 3}}, - {"net.inet.udp.sendspace", []_C_int{4, 2, 17, 4}}, - {"net.inet.udp.stats", []_C_int{4, 2, 17, 5}}, - {"net.inet6.divert.recvspace", []_C_int{4, 24, 86, 1}}, - {"net.inet6.divert.sendspace", []_C_int{4, 24, 86, 2}}, - {"net.inet6.divert.stats", []_C_int{4, 24, 86, 3}}, - {"net.inet6.icmp6.errppslimit", []_C_int{4, 24, 30, 14}}, - {"net.inet6.icmp6.mtudisc_hiwat", []_C_int{4, 24, 30, 16}}, - {"net.inet6.icmp6.mtudisc_lowat", []_C_int{4, 24, 30, 17}}, - {"net.inet6.icmp6.nd6_debug", []_C_int{4, 24, 30, 18}}, - {"net.inet6.icmp6.nd6_delay", []_C_int{4, 24, 30, 8}}, - {"net.inet6.icmp6.nd6_maxnudhint", []_C_int{4, 24, 30, 15}}, - {"net.inet6.icmp6.nd6_mmaxtries", []_C_int{4, 24, 30, 10}}, - {"net.inet6.icmp6.nd6_prune", []_C_int{4, 24, 30, 6}}, - {"net.inet6.icmp6.nd6_umaxtries", []_C_int{4, 24, 30, 9}}, - {"net.inet6.icmp6.nd6_useloopback", []_C_int{4, 24, 30, 11}}, - {"net.inet6.icmp6.nodeinfo", []_C_int{4, 24, 30, 13}}, - {"net.inet6.icmp6.rediraccept", []_C_int{4, 24, 30, 2}}, - {"net.inet6.icmp6.redirtimeout", []_C_int{4, 24, 30, 3}}, - {"net.inet6.ip6.accept_rtadv", []_C_int{4, 24, 17, 12}}, - {"net.inet6.ip6.auto_flowlabel", []_C_int{4, 24, 17, 17}}, - {"net.inet6.ip6.dad_count", []_C_int{4, 24, 17, 16}}, - {"net.inet6.ip6.dad_pending", []_C_int{4, 24, 17, 49}}, - {"net.inet6.ip6.defmcasthlim", []_C_int{4, 24, 17, 18}}, - {"net.inet6.ip6.forwarding", []_C_int{4, 24, 17, 1}}, - {"net.inet6.ip6.forwsrcrt", []_C_int{4, 24, 17, 5}}, - {"net.inet6.ip6.hdrnestlimit", []_C_int{4, 24, 17, 15}}, - {"net.inet6.ip6.hlim", []_C_int{4, 24, 17, 3}}, - {"net.inet6.ip6.log_interval", []_C_int{4, 24, 17, 14}}, - {"net.inet6.ip6.maxdynroutes", []_C_int{4, 24, 17, 48}}, - {"net.inet6.ip6.maxfragpackets", []_C_int{4, 24, 17, 9}}, - {"net.inet6.ip6.maxfrags", []_C_int{4, 24, 17, 41}}, - {"net.inet6.ip6.maxifdefrouters", []_C_int{4, 24, 17, 47}}, - {"net.inet6.ip6.maxifprefixes", []_C_int{4, 24, 17, 46}}, - {"net.inet6.ip6.mforwarding", []_C_int{4, 24, 17, 42}}, - {"net.inet6.ip6.mrtproto", []_C_int{4, 24, 17, 8}}, - {"net.inet6.ip6.mtudisctimeout", []_C_int{4, 24, 17, 50}}, - {"net.inet6.ip6.multicast_mtudisc", []_C_int{4, 24, 17, 44}}, - {"net.inet6.ip6.multipath", []_C_int{4, 24, 17, 43}}, - {"net.inet6.ip6.neighborgcthresh", []_C_int{4, 24, 17, 45}}, - {"net.inet6.ip6.redirect", []_C_int{4, 24, 17, 2}}, - {"net.inet6.ip6.rr_prune", []_C_int{4, 24, 17, 22}}, - {"net.inet6.ip6.sourcecheck", []_C_int{4, 24, 17, 10}}, - {"net.inet6.ip6.sourcecheck_logint", []_C_int{4, 24, 17, 11}}, - {"net.inet6.ip6.use_deprecated", []_C_int{4, 24, 17, 21}}, - {"net.inet6.ip6.v6only", []_C_int{4, 
24, 17, 24}}, - {"net.key.sadb_dump", []_C_int{4, 30, 1}}, - {"net.key.spd_dump", []_C_int{4, 30, 2}}, - {"net.mpls.ifq.congestion", []_C_int{4, 33, 3, 4}}, - {"net.mpls.ifq.drops", []_C_int{4, 33, 3, 3}}, - {"net.mpls.ifq.len", []_C_int{4, 33, 3, 1}}, - {"net.mpls.ifq.maxlen", []_C_int{4, 33, 3, 2}}, - {"net.mpls.mapttl_ip", []_C_int{4, 33, 5}}, - {"net.mpls.mapttl_ip6", []_C_int{4, 33, 6}}, - {"net.mpls.maxloop_inkernel", []_C_int{4, 33, 4}}, - {"net.mpls.ttl", []_C_int{4, 33, 2}}, - {"net.pflow.stats", []_C_int{4, 34, 1}}, - {"net.pipex.enable", []_C_int{4, 35, 1}}, - {"vm.anonmin", []_C_int{2, 7}}, - {"vm.loadavg", []_C_int{2, 2}}, - {"vm.maxslp", []_C_int{2, 10}}, - {"vm.nkmempages", []_C_int{2, 6}}, - {"vm.psstrings", []_C_int{2, 3}}, - {"vm.swapencrypt.enable", []_C_int{2, 5, 0}}, - {"vm.swapencrypt.keyscreated", []_C_int{2, 5, 1}}, - {"vm.swapencrypt.keysdeleted", []_C_int{2, 5, 2}}, - {"vm.uspace", []_C_int{2, 11}}, - {"vm.uvmexp", []_C_int{2, 4}}, - {"vm.vmmeter", []_C_int{2, 1}}, - {"vm.vnodemin", []_C_int{2, 9}}, - {"vm.vtextmin", []_C_int{2, 8}}, -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_386.go deleted file mode 100644 index 2786773ba37..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_386.go +++ /dev/null @@ -1,398 +0,0 @@ -// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/sys/syscall.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build 386,darwin - -package unix - -const ( - SYS_SYSCALL = 0 - SYS_EXIT = 1 - SYS_FORK = 2 - SYS_READ = 3 - SYS_WRITE = 4 - SYS_OPEN = 5 - SYS_CLOSE = 6 - SYS_WAIT4 = 7 - SYS_LINK = 9 - SYS_UNLINK = 10 - SYS_CHDIR = 12 - SYS_FCHDIR = 13 - SYS_MKNOD = 14 - SYS_CHMOD = 15 - SYS_CHOWN = 16 - SYS_GETFSSTAT = 18 - SYS_GETPID = 20 - SYS_SETUID = 23 - SYS_GETUID = 24 - SYS_GETEUID = 25 - SYS_PTRACE = 26 - SYS_RECVMSG = 27 - SYS_SENDMSG = 28 - SYS_RECVFROM = 29 - SYS_ACCEPT = 30 - SYS_GETPEERNAME = 31 - SYS_GETSOCKNAME = 32 - SYS_ACCESS = 33 - SYS_CHFLAGS = 34 - SYS_FCHFLAGS = 35 - SYS_SYNC = 36 - SYS_KILL = 37 - SYS_GETPPID = 39 - SYS_DUP = 41 - SYS_PIPE = 42 - SYS_GETEGID = 43 - SYS_SIGACTION = 46 - SYS_GETGID = 47 - SYS_SIGPROCMASK = 48 - SYS_GETLOGIN = 49 - SYS_SETLOGIN = 50 - SYS_ACCT = 51 - SYS_SIGPENDING = 52 - SYS_SIGALTSTACK = 53 - SYS_IOCTL = 54 - SYS_REBOOT = 55 - SYS_REVOKE = 56 - SYS_SYMLINK = 57 - SYS_READLINK = 58 - SYS_EXECVE = 59 - SYS_UMASK = 60 - SYS_CHROOT = 61 - SYS_MSYNC = 65 - SYS_VFORK = 66 - SYS_MUNMAP = 73 - SYS_MPROTECT = 74 - SYS_MADVISE = 75 - SYS_MINCORE = 78 - SYS_GETGROUPS = 79 - SYS_SETGROUPS = 80 - SYS_GETPGRP = 81 - SYS_SETPGID = 82 - SYS_SETITIMER = 83 - SYS_SWAPON = 85 - SYS_GETITIMER = 86 - SYS_GETDTABLESIZE = 89 - SYS_DUP2 = 90 - SYS_FCNTL = 92 - SYS_SELECT = 93 - SYS_FSYNC = 95 - SYS_SETPRIORITY = 96 - SYS_SOCKET = 97 - SYS_CONNECT = 98 - SYS_GETPRIORITY = 100 - SYS_BIND = 104 - SYS_SETSOCKOPT = 105 - SYS_LISTEN = 106 - SYS_SIGSUSPEND = 111 - SYS_GETTIMEOFDAY = 116 - SYS_GETRUSAGE = 117 - SYS_GETSOCKOPT = 118 - SYS_READV = 120 - SYS_WRITEV = 121 - SYS_SETTIMEOFDAY = 122 - SYS_FCHOWN = 123 - SYS_FCHMOD = 124 - SYS_SETREUID = 126 - SYS_SETREGID = 127 - SYS_RENAME = 128 - SYS_FLOCK = 131 - SYS_MKFIFO = 132 - SYS_SENDTO = 133 - SYS_SHUTDOWN = 134 - SYS_SOCKETPAIR = 135 - SYS_MKDIR = 136 - 
SYS_RMDIR = 137 - SYS_UTIMES = 138 - SYS_FUTIMES = 139 - SYS_ADJTIME = 140 - SYS_GETHOSTUUID = 142 - SYS_SETSID = 147 - SYS_GETPGID = 151 - SYS_SETPRIVEXEC = 152 - SYS_PREAD = 153 - SYS_PWRITE = 154 - SYS_NFSSVC = 155 - SYS_STATFS = 157 - SYS_FSTATFS = 158 - SYS_UNMOUNT = 159 - SYS_GETFH = 161 - SYS_QUOTACTL = 165 - SYS_MOUNT = 167 - SYS_CSOPS = 169 - SYS_CSOPS_AUDITTOKEN = 170 - SYS_WAITID = 173 - SYS_KDEBUG_TRACE64 = 179 - SYS_KDEBUG_TRACE = 180 - SYS_SETGID = 181 - SYS_SETEGID = 182 - SYS_SETEUID = 183 - SYS_SIGRETURN = 184 - SYS_CHUD = 185 - SYS_FDATASYNC = 187 - SYS_STAT = 188 - SYS_FSTAT = 189 - SYS_LSTAT = 190 - SYS_PATHCONF = 191 - SYS_FPATHCONF = 192 - SYS_GETRLIMIT = 194 - SYS_SETRLIMIT = 195 - SYS_GETDIRENTRIES = 196 - SYS_MMAP = 197 - SYS_LSEEK = 199 - SYS_TRUNCATE = 200 - SYS_FTRUNCATE = 201 - SYS_SYSCTL = 202 - SYS_MLOCK = 203 - SYS_MUNLOCK = 204 - SYS_UNDELETE = 205 - SYS_OPEN_DPROTECTED_NP = 216 - SYS_GETATTRLIST = 220 - SYS_SETATTRLIST = 221 - SYS_GETDIRENTRIESATTR = 222 - SYS_EXCHANGEDATA = 223 - SYS_SEARCHFS = 225 - SYS_DELETE = 226 - SYS_COPYFILE = 227 - SYS_FGETATTRLIST = 228 - SYS_FSETATTRLIST = 229 - SYS_POLL = 230 - SYS_WATCHEVENT = 231 - SYS_WAITEVENT = 232 - SYS_MODWATCH = 233 - SYS_GETXATTR = 234 - SYS_FGETXATTR = 235 - SYS_SETXATTR = 236 - SYS_FSETXATTR = 237 - SYS_REMOVEXATTR = 238 - SYS_FREMOVEXATTR = 239 - SYS_LISTXATTR = 240 - SYS_FLISTXATTR = 241 - SYS_FSCTL = 242 - SYS_INITGROUPS = 243 - SYS_POSIX_SPAWN = 244 - SYS_FFSCTL = 245 - SYS_NFSCLNT = 247 - SYS_FHOPEN = 248 - SYS_MINHERIT = 250 - SYS_SEMSYS = 251 - SYS_MSGSYS = 252 - SYS_SHMSYS = 253 - SYS_SEMCTL = 254 - SYS_SEMGET = 255 - SYS_SEMOP = 256 - SYS_MSGCTL = 258 - SYS_MSGGET = 259 - SYS_MSGSND = 260 - SYS_MSGRCV = 261 - SYS_SHMAT = 262 - SYS_SHMCTL = 263 - SYS_SHMDT = 264 - SYS_SHMGET = 265 - SYS_SHM_OPEN = 266 - SYS_SHM_UNLINK = 267 - SYS_SEM_OPEN = 268 - SYS_SEM_CLOSE = 269 - SYS_SEM_UNLINK = 270 - SYS_SEM_WAIT = 271 - SYS_SEM_TRYWAIT = 272 - SYS_SEM_POST = 273 - SYS_SYSCTLBYNAME = 274 - SYS_OPEN_EXTENDED = 277 - SYS_UMASK_EXTENDED = 278 - SYS_STAT_EXTENDED = 279 - SYS_LSTAT_EXTENDED = 280 - SYS_FSTAT_EXTENDED = 281 - SYS_CHMOD_EXTENDED = 282 - SYS_FCHMOD_EXTENDED = 283 - SYS_ACCESS_EXTENDED = 284 - SYS_SETTID = 285 - SYS_GETTID = 286 - SYS_SETSGROUPS = 287 - SYS_GETSGROUPS = 288 - SYS_SETWGROUPS = 289 - SYS_GETWGROUPS = 290 - SYS_MKFIFO_EXTENDED = 291 - SYS_MKDIR_EXTENDED = 292 - SYS_IDENTITYSVC = 293 - SYS_SHARED_REGION_CHECK_NP = 294 - SYS_VM_PRESSURE_MONITOR = 296 - SYS_PSYNCH_RW_LONGRDLOCK = 297 - SYS_PSYNCH_RW_YIELDWRLOCK = 298 - SYS_PSYNCH_RW_DOWNGRADE = 299 - SYS_PSYNCH_RW_UPGRADE = 300 - SYS_PSYNCH_MUTEXWAIT = 301 - SYS_PSYNCH_MUTEXDROP = 302 - SYS_PSYNCH_CVBROAD = 303 - SYS_PSYNCH_CVSIGNAL = 304 - SYS_PSYNCH_CVWAIT = 305 - SYS_PSYNCH_RW_RDLOCK = 306 - SYS_PSYNCH_RW_WRLOCK = 307 - SYS_PSYNCH_RW_UNLOCK = 308 - SYS_PSYNCH_RW_UNLOCK2 = 309 - SYS_GETSID = 310 - SYS_SETTID_WITH_PID = 311 - SYS_PSYNCH_CVCLRPREPOST = 312 - SYS_AIO_FSYNC = 313 - SYS_AIO_RETURN = 314 - SYS_AIO_SUSPEND = 315 - SYS_AIO_CANCEL = 316 - SYS_AIO_ERROR = 317 - SYS_AIO_READ = 318 - SYS_AIO_WRITE = 319 - SYS_LIO_LISTIO = 320 - SYS_IOPOLICYSYS = 322 - SYS_PROCESS_POLICY = 323 - SYS_MLOCKALL = 324 - SYS_MUNLOCKALL = 325 - SYS_ISSETUGID = 327 - SYS___PTHREAD_KILL = 328 - SYS___PTHREAD_SIGMASK = 329 - SYS___SIGWAIT = 330 - SYS___DISABLE_THREADSIGNAL = 331 - SYS___PTHREAD_MARKCANCEL = 332 - SYS___PTHREAD_CANCELED = 333 - SYS___SEMWAIT_SIGNAL = 334 - SYS_PROC_INFO = 336 - SYS_SENDFILE = 337 - SYS_STAT64 = 338 - SYS_FSTAT64 = 339 - 
SYS_LSTAT64 = 340 - SYS_STAT64_EXTENDED = 341 - SYS_LSTAT64_EXTENDED = 342 - SYS_FSTAT64_EXTENDED = 343 - SYS_GETDIRENTRIES64 = 344 - SYS_STATFS64 = 345 - SYS_FSTATFS64 = 346 - SYS_GETFSSTAT64 = 347 - SYS___PTHREAD_CHDIR = 348 - SYS___PTHREAD_FCHDIR = 349 - SYS_AUDIT = 350 - SYS_AUDITON = 351 - SYS_GETAUID = 353 - SYS_SETAUID = 354 - SYS_GETAUDIT_ADDR = 357 - SYS_SETAUDIT_ADDR = 358 - SYS_AUDITCTL = 359 - SYS_BSDTHREAD_CREATE = 360 - SYS_BSDTHREAD_TERMINATE = 361 - SYS_KQUEUE = 362 - SYS_KEVENT = 363 - SYS_LCHOWN = 364 - SYS_STACK_SNAPSHOT = 365 - SYS_BSDTHREAD_REGISTER = 366 - SYS_WORKQ_OPEN = 367 - SYS_WORKQ_KERNRETURN = 368 - SYS_KEVENT64 = 369 - SYS___OLD_SEMWAIT_SIGNAL = 370 - SYS___OLD_SEMWAIT_SIGNAL_NOCANCEL = 371 - SYS_THREAD_SELFID = 372 - SYS_LEDGER = 373 - SYS___MAC_EXECVE = 380 - SYS___MAC_SYSCALL = 381 - SYS___MAC_GET_FILE = 382 - SYS___MAC_SET_FILE = 383 - SYS___MAC_GET_LINK = 384 - SYS___MAC_SET_LINK = 385 - SYS___MAC_GET_PROC = 386 - SYS___MAC_SET_PROC = 387 - SYS___MAC_GET_FD = 388 - SYS___MAC_SET_FD = 389 - SYS___MAC_GET_PID = 390 - SYS___MAC_GET_LCID = 391 - SYS___MAC_GET_LCTX = 392 - SYS___MAC_SET_LCTX = 393 - SYS_SETLCID = 394 - SYS_GETLCID = 395 - SYS_READ_NOCANCEL = 396 - SYS_WRITE_NOCANCEL = 397 - SYS_OPEN_NOCANCEL = 398 - SYS_CLOSE_NOCANCEL = 399 - SYS_WAIT4_NOCANCEL = 400 - SYS_RECVMSG_NOCANCEL = 401 - SYS_SENDMSG_NOCANCEL = 402 - SYS_RECVFROM_NOCANCEL = 403 - SYS_ACCEPT_NOCANCEL = 404 - SYS_MSYNC_NOCANCEL = 405 - SYS_FCNTL_NOCANCEL = 406 - SYS_SELECT_NOCANCEL = 407 - SYS_FSYNC_NOCANCEL = 408 - SYS_CONNECT_NOCANCEL = 409 - SYS_SIGSUSPEND_NOCANCEL = 410 - SYS_READV_NOCANCEL = 411 - SYS_WRITEV_NOCANCEL = 412 - SYS_SENDTO_NOCANCEL = 413 - SYS_PREAD_NOCANCEL = 414 - SYS_PWRITE_NOCANCEL = 415 - SYS_WAITID_NOCANCEL = 416 - SYS_POLL_NOCANCEL = 417 - SYS_MSGSND_NOCANCEL = 418 - SYS_MSGRCV_NOCANCEL = 419 - SYS_SEM_WAIT_NOCANCEL = 420 - SYS_AIO_SUSPEND_NOCANCEL = 421 - SYS___SIGWAIT_NOCANCEL = 422 - SYS___SEMWAIT_SIGNAL_NOCANCEL = 423 - SYS___MAC_MOUNT = 424 - SYS___MAC_GET_MOUNT = 425 - SYS___MAC_GETFSSTAT = 426 - SYS_FSGETPATH = 427 - SYS_AUDIT_SESSION_SELF = 428 - SYS_AUDIT_SESSION_JOIN = 429 - SYS_FILEPORT_MAKEPORT = 430 - SYS_FILEPORT_MAKEFD = 431 - SYS_AUDIT_SESSION_PORT = 432 - SYS_PID_SUSPEND = 433 - SYS_PID_RESUME = 434 - SYS_PID_HIBERNATE = 435 - SYS_PID_SHUTDOWN_SOCKETS = 436 - SYS_SHARED_REGION_MAP_AND_SLIDE_NP = 438 - SYS_KAS_INFO = 439 - SYS_MEMORYSTATUS_CONTROL = 440 - SYS_GUARDED_OPEN_NP = 441 - SYS_GUARDED_CLOSE_NP = 442 - SYS_GUARDED_KQUEUE_NP = 443 - SYS_CHANGE_FDGUARD_NP = 444 - SYS_PROC_RLIMIT_CONTROL = 446 - SYS_CONNECTX = 447 - SYS_DISCONNECTX = 448 - SYS_PEELOFF = 449 - SYS_SOCKET_DELEGATE = 450 - SYS_TELEMETRY = 451 - SYS_PROC_UUID_POLICY = 452 - SYS_MEMORYSTATUS_GET_LEVEL = 453 - SYS_SYSTEM_OVERRIDE = 454 - SYS_VFS_PURGE = 455 - SYS_SFI_CTL = 456 - SYS_SFI_PIDCTL = 457 - SYS_COALITION = 458 - SYS_COALITION_INFO = 459 - SYS_NECP_MATCH_POLICY = 460 - SYS_GETATTRLISTBULK = 461 - SYS_OPENAT = 463 - SYS_OPENAT_NOCANCEL = 464 - SYS_RENAMEAT = 465 - SYS_FACCESSAT = 466 - SYS_FCHMODAT = 467 - SYS_FCHOWNAT = 468 - SYS_FSTATAT = 469 - SYS_FSTATAT64 = 470 - SYS_LINKAT = 471 - SYS_UNLINKAT = 472 - SYS_READLINKAT = 473 - SYS_SYMLINKAT = 474 - SYS_MKDIRAT = 475 - SYS_GETATTRLISTAT = 476 - SYS_PROC_TRACE_LOG = 477 - SYS_BSDTHREAD_CTL = 478 - SYS_OPENBYID_NP = 479 - SYS_RECVMSG_X = 480 - SYS_SENDMSG_X = 481 - SYS_THREAD_SELFUSAGE = 482 - SYS_CSRCTL = 483 - SYS_GUARDED_OPEN_DPROTECTED_NP = 484 - SYS_GUARDED_WRITE_NP = 485 - SYS_GUARDED_PWRITE_NP = 486 - 
SYS_GUARDED_WRITEV_NP = 487 - SYS_RENAME_EXT = 488 - SYS_MREMAP_ENCRYPTED = 489 - SYS_MAXSYSCALL = 490 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_amd64.go deleted file mode 100644 index 09de240c8f8..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_amd64.go +++ /dev/null @@ -1,398 +0,0 @@ -// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/sys/syscall.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build amd64,darwin - -package unix - -const ( - SYS_SYSCALL = 0 - SYS_EXIT = 1 - SYS_FORK = 2 - SYS_READ = 3 - SYS_WRITE = 4 - SYS_OPEN = 5 - SYS_CLOSE = 6 - SYS_WAIT4 = 7 - SYS_LINK = 9 - SYS_UNLINK = 10 - SYS_CHDIR = 12 - SYS_FCHDIR = 13 - SYS_MKNOD = 14 - SYS_CHMOD = 15 - SYS_CHOWN = 16 - SYS_GETFSSTAT = 18 - SYS_GETPID = 20 - SYS_SETUID = 23 - SYS_GETUID = 24 - SYS_GETEUID = 25 - SYS_PTRACE = 26 - SYS_RECVMSG = 27 - SYS_SENDMSG = 28 - SYS_RECVFROM = 29 - SYS_ACCEPT = 30 - SYS_GETPEERNAME = 31 - SYS_GETSOCKNAME = 32 - SYS_ACCESS = 33 - SYS_CHFLAGS = 34 - SYS_FCHFLAGS = 35 - SYS_SYNC = 36 - SYS_KILL = 37 - SYS_GETPPID = 39 - SYS_DUP = 41 - SYS_PIPE = 42 - SYS_GETEGID = 43 - SYS_SIGACTION = 46 - SYS_GETGID = 47 - SYS_SIGPROCMASK = 48 - SYS_GETLOGIN = 49 - SYS_SETLOGIN = 50 - SYS_ACCT = 51 - SYS_SIGPENDING = 52 - SYS_SIGALTSTACK = 53 - SYS_IOCTL = 54 - SYS_REBOOT = 55 - SYS_REVOKE = 56 - SYS_SYMLINK = 57 - SYS_READLINK = 58 - SYS_EXECVE = 59 - SYS_UMASK = 60 - SYS_CHROOT = 61 - SYS_MSYNC = 65 - SYS_VFORK = 66 - SYS_MUNMAP = 73 - SYS_MPROTECT = 74 - SYS_MADVISE = 75 - SYS_MINCORE = 78 - SYS_GETGROUPS = 79 - SYS_SETGROUPS = 80 - SYS_GETPGRP = 81 - SYS_SETPGID = 82 - SYS_SETITIMER = 83 - SYS_SWAPON = 85 - SYS_GETITIMER = 86 - SYS_GETDTABLESIZE = 89 - SYS_DUP2 = 90 - SYS_FCNTL = 92 - SYS_SELECT = 93 - SYS_FSYNC = 95 - SYS_SETPRIORITY = 96 - SYS_SOCKET = 97 - SYS_CONNECT = 98 - SYS_GETPRIORITY = 100 - SYS_BIND = 104 - SYS_SETSOCKOPT = 105 - SYS_LISTEN = 106 - SYS_SIGSUSPEND = 111 - SYS_GETTIMEOFDAY = 116 - SYS_GETRUSAGE = 117 - SYS_GETSOCKOPT = 118 - SYS_READV = 120 - SYS_WRITEV = 121 - SYS_SETTIMEOFDAY = 122 - SYS_FCHOWN = 123 - SYS_FCHMOD = 124 - SYS_SETREUID = 126 - SYS_SETREGID = 127 - SYS_RENAME = 128 - SYS_FLOCK = 131 - SYS_MKFIFO = 132 - SYS_SENDTO = 133 - SYS_SHUTDOWN = 134 - SYS_SOCKETPAIR = 135 - SYS_MKDIR = 136 - SYS_RMDIR = 137 - SYS_UTIMES = 138 - SYS_FUTIMES = 139 - SYS_ADJTIME = 140 - SYS_GETHOSTUUID = 142 - SYS_SETSID = 147 - SYS_GETPGID = 151 - SYS_SETPRIVEXEC = 152 - SYS_PREAD = 153 - SYS_PWRITE = 154 - SYS_NFSSVC = 155 - SYS_STATFS = 157 - SYS_FSTATFS = 158 - SYS_UNMOUNT = 159 - SYS_GETFH = 161 - SYS_QUOTACTL = 165 - SYS_MOUNT = 167 - SYS_CSOPS = 169 - SYS_CSOPS_AUDITTOKEN = 170 - SYS_WAITID = 173 - SYS_KDEBUG_TRACE64 = 179 - SYS_KDEBUG_TRACE = 180 - SYS_SETGID = 181 - SYS_SETEGID = 182 - SYS_SETEUID = 183 - SYS_SIGRETURN = 184 - SYS_CHUD = 185 - SYS_FDATASYNC = 187 - SYS_STAT = 188 - SYS_FSTAT = 189 - SYS_LSTAT = 190 - SYS_PATHCONF = 191 - SYS_FPATHCONF = 192 - SYS_GETRLIMIT = 194 - SYS_SETRLIMIT = 195 - SYS_GETDIRENTRIES = 196 - SYS_MMAP = 197 - SYS_LSEEK = 199 - SYS_TRUNCATE = 200 - SYS_FTRUNCATE = 201 - SYS_SYSCTL = 202 - SYS_MLOCK = 203 - SYS_MUNLOCK = 204 - SYS_UNDELETE = 205 - SYS_OPEN_DPROTECTED_NP = 216 - SYS_GETATTRLIST = 220 - 
SYS_SETATTRLIST = 221 - SYS_GETDIRENTRIESATTR = 222 - SYS_EXCHANGEDATA = 223 - SYS_SEARCHFS = 225 - SYS_DELETE = 226 - SYS_COPYFILE = 227 - SYS_FGETATTRLIST = 228 - SYS_FSETATTRLIST = 229 - SYS_POLL = 230 - SYS_WATCHEVENT = 231 - SYS_WAITEVENT = 232 - SYS_MODWATCH = 233 - SYS_GETXATTR = 234 - SYS_FGETXATTR = 235 - SYS_SETXATTR = 236 - SYS_FSETXATTR = 237 - SYS_REMOVEXATTR = 238 - SYS_FREMOVEXATTR = 239 - SYS_LISTXATTR = 240 - SYS_FLISTXATTR = 241 - SYS_FSCTL = 242 - SYS_INITGROUPS = 243 - SYS_POSIX_SPAWN = 244 - SYS_FFSCTL = 245 - SYS_NFSCLNT = 247 - SYS_FHOPEN = 248 - SYS_MINHERIT = 250 - SYS_SEMSYS = 251 - SYS_MSGSYS = 252 - SYS_SHMSYS = 253 - SYS_SEMCTL = 254 - SYS_SEMGET = 255 - SYS_SEMOP = 256 - SYS_MSGCTL = 258 - SYS_MSGGET = 259 - SYS_MSGSND = 260 - SYS_MSGRCV = 261 - SYS_SHMAT = 262 - SYS_SHMCTL = 263 - SYS_SHMDT = 264 - SYS_SHMGET = 265 - SYS_SHM_OPEN = 266 - SYS_SHM_UNLINK = 267 - SYS_SEM_OPEN = 268 - SYS_SEM_CLOSE = 269 - SYS_SEM_UNLINK = 270 - SYS_SEM_WAIT = 271 - SYS_SEM_TRYWAIT = 272 - SYS_SEM_POST = 273 - SYS_SYSCTLBYNAME = 274 - SYS_OPEN_EXTENDED = 277 - SYS_UMASK_EXTENDED = 278 - SYS_STAT_EXTENDED = 279 - SYS_LSTAT_EXTENDED = 280 - SYS_FSTAT_EXTENDED = 281 - SYS_CHMOD_EXTENDED = 282 - SYS_FCHMOD_EXTENDED = 283 - SYS_ACCESS_EXTENDED = 284 - SYS_SETTID = 285 - SYS_GETTID = 286 - SYS_SETSGROUPS = 287 - SYS_GETSGROUPS = 288 - SYS_SETWGROUPS = 289 - SYS_GETWGROUPS = 290 - SYS_MKFIFO_EXTENDED = 291 - SYS_MKDIR_EXTENDED = 292 - SYS_IDENTITYSVC = 293 - SYS_SHARED_REGION_CHECK_NP = 294 - SYS_VM_PRESSURE_MONITOR = 296 - SYS_PSYNCH_RW_LONGRDLOCK = 297 - SYS_PSYNCH_RW_YIELDWRLOCK = 298 - SYS_PSYNCH_RW_DOWNGRADE = 299 - SYS_PSYNCH_RW_UPGRADE = 300 - SYS_PSYNCH_MUTEXWAIT = 301 - SYS_PSYNCH_MUTEXDROP = 302 - SYS_PSYNCH_CVBROAD = 303 - SYS_PSYNCH_CVSIGNAL = 304 - SYS_PSYNCH_CVWAIT = 305 - SYS_PSYNCH_RW_RDLOCK = 306 - SYS_PSYNCH_RW_WRLOCK = 307 - SYS_PSYNCH_RW_UNLOCK = 308 - SYS_PSYNCH_RW_UNLOCK2 = 309 - SYS_GETSID = 310 - SYS_SETTID_WITH_PID = 311 - SYS_PSYNCH_CVCLRPREPOST = 312 - SYS_AIO_FSYNC = 313 - SYS_AIO_RETURN = 314 - SYS_AIO_SUSPEND = 315 - SYS_AIO_CANCEL = 316 - SYS_AIO_ERROR = 317 - SYS_AIO_READ = 318 - SYS_AIO_WRITE = 319 - SYS_LIO_LISTIO = 320 - SYS_IOPOLICYSYS = 322 - SYS_PROCESS_POLICY = 323 - SYS_MLOCKALL = 324 - SYS_MUNLOCKALL = 325 - SYS_ISSETUGID = 327 - SYS___PTHREAD_KILL = 328 - SYS___PTHREAD_SIGMASK = 329 - SYS___SIGWAIT = 330 - SYS___DISABLE_THREADSIGNAL = 331 - SYS___PTHREAD_MARKCANCEL = 332 - SYS___PTHREAD_CANCELED = 333 - SYS___SEMWAIT_SIGNAL = 334 - SYS_PROC_INFO = 336 - SYS_SENDFILE = 337 - SYS_STAT64 = 338 - SYS_FSTAT64 = 339 - SYS_LSTAT64 = 340 - SYS_STAT64_EXTENDED = 341 - SYS_LSTAT64_EXTENDED = 342 - SYS_FSTAT64_EXTENDED = 343 - SYS_GETDIRENTRIES64 = 344 - SYS_STATFS64 = 345 - SYS_FSTATFS64 = 346 - SYS_GETFSSTAT64 = 347 - SYS___PTHREAD_CHDIR = 348 - SYS___PTHREAD_FCHDIR = 349 - SYS_AUDIT = 350 - SYS_AUDITON = 351 - SYS_GETAUID = 353 - SYS_SETAUID = 354 - SYS_GETAUDIT_ADDR = 357 - SYS_SETAUDIT_ADDR = 358 - SYS_AUDITCTL = 359 - SYS_BSDTHREAD_CREATE = 360 - SYS_BSDTHREAD_TERMINATE = 361 - SYS_KQUEUE = 362 - SYS_KEVENT = 363 - SYS_LCHOWN = 364 - SYS_STACK_SNAPSHOT = 365 - SYS_BSDTHREAD_REGISTER = 366 - SYS_WORKQ_OPEN = 367 - SYS_WORKQ_KERNRETURN = 368 - SYS_KEVENT64 = 369 - SYS___OLD_SEMWAIT_SIGNAL = 370 - SYS___OLD_SEMWAIT_SIGNAL_NOCANCEL = 371 - SYS_THREAD_SELFID = 372 - SYS_LEDGER = 373 - SYS___MAC_EXECVE = 380 - SYS___MAC_SYSCALL = 381 - SYS___MAC_GET_FILE = 382 - SYS___MAC_SET_FILE = 383 - SYS___MAC_GET_LINK = 384 - SYS___MAC_SET_LINK = 385 - 
SYS___MAC_GET_PROC = 386 - SYS___MAC_SET_PROC = 387 - SYS___MAC_GET_FD = 388 - SYS___MAC_SET_FD = 389 - SYS___MAC_GET_PID = 390 - SYS___MAC_GET_LCID = 391 - SYS___MAC_GET_LCTX = 392 - SYS___MAC_SET_LCTX = 393 - SYS_SETLCID = 394 - SYS_GETLCID = 395 - SYS_READ_NOCANCEL = 396 - SYS_WRITE_NOCANCEL = 397 - SYS_OPEN_NOCANCEL = 398 - SYS_CLOSE_NOCANCEL = 399 - SYS_WAIT4_NOCANCEL = 400 - SYS_RECVMSG_NOCANCEL = 401 - SYS_SENDMSG_NOCANCEL = 402 - SYS_RECVFROM_NOCANCEL = 403 - SYS_ACCEPT_NOCANCEL = 404 - SYS_MSYNC_NOCANCEL = 405 - SYS_FCNTL_NOCANCEL = 406 - SYS_SELECT_NOCANCEL = 407 - SYS_FSYNC_NOCANCEL = 408 - SYS_CONNECT_NOCANCEL = 409 - SYS_SIGSUSPEND_NOCANCEL = 410 - SYS_READV_NOCANCEL = 411 - SYS_WRITEV_NOCANCEL = 412 - SYS_SENDTO_NOCANCEL = 413 - SYS_PREAD_NOCANCEL = 414 - SYS_PWRITE_NOCANCEL = 415 - SYS_WAITID_NOCANCEL = 416 - SYS_POLL_NOCANCEL = 417 - SYS_MSGSND_NOCANCEL = 418 - SYS_MSGRCV_NOCANCEL = 419 - SYS_SEM_WAIT_NOCANCEL = 420 - SYS_AIO_SUSPEND_NOCANCEL = 421 - SYS___SIGWAIT_NOCANCEL = 422 - SYS___SEMWAIT_SIGNAL_NOCANCEL = 423 - SYS___MAC_MOUNT = 424 - SYS___MAC_GET_MOUNT = 425 - SYS___MAC_GETFSSTAT = 426 - SYS_FSGETPATH = 427 - SYS_AUDIT_SESSION_SELF = 428 - SYS_AUDIT_SESSION_JOIN = 429 - SYS_FILEPORT_MAKEPORT = 430 - SYS_FILEPORT_MAKEFD = 431 - SYS_AUDIT_SESSION_PORT = 432 - SYS_PID_SUSPEND = 433 - SYS_PID_RESUME = 434 - SYS_PID_HIBERNATE = 435 - SYS_PID_SHUTDOWN_SOCKETS = 436 - SYS_SHARED_REGION_MAP_AND_SLIDE_NP = 438 - SYS_KAS_INFO = 439 - SYS_MEMORYSTATUS_CONTROL = 440 - SYS_GUARDED_OPEN_NP = 441 - SYS_GUARDED_CLOSE_NP = 442 - SYS_GUARDED_KQUEUE_NP = 443 - SYS_CHANGE_FDGUARD_NP = 444 - SYS_PROC_RLIMIT_CONTROL = 446 - SYS_CONNECTX = 447 - SYS_DISCONNECTX = 448 - SYS_PEELOFF = 449 - SYS_SOCKET_DELEGATE = 450 - SYS_TELEMETRY = 451 - SYS_PROC_UUID_POLICY = 452 - SYS_MEMORYSTATUS_GET_LEVEL = 453 - SYS_SYSTEM_OVERRIDE = 454 - SYS_VFS_PURGE = 455 - SYS_SFI_CTL = 456 - SYS_SFI_PIDCTL = 457 - SYS_COALITION = 458 - SYS_COALITION_INFO = 459 - SYS_NECP_MATCH_POLICY = 460 - SYS_GETATTRLISTBULK = 461 - SYS_OPENAT = 463 - SYS_OPENAT_NOCANCEL = 464 - SYS_RENAMEAT = 465 - SYS_FACCESSAT = 466 - SYS_FCHMODAT = 467 - SYS_FCHOWNAT = 468 - SYS_FSTATAT = 469 - SYS_FSTATAT64 = 470 - SYS_LINKAT = 471 - SYS_UNLINKAT = 472 - SYS_READLINKAT = 473 - SYS_SYMLINKAT = 474 - SYS_MKDIRAT = 475 - SYS_GETATTRLISTAT = 476 - SYS_PROC_TRACE_LOG = 477 - SYS_BSDTHREAD_CTL = 478 - SYS_OPENBYID_NP = 479 - SYS_RECVMSG_X = 480 - SYS_SENDMSG_X = 481 - SYS_THREAD_SELFUSAGE = 482 - SYS_CSRCTL = 483 - SYS_GUARDED_OPEN_DPROTECTED_NP = 484 - SYS_GUARDED_WRITE_NP = 485 - SYS_GUARDED_PWRITE_NP = 486 - SYS_GUARDED_WRITEV_NP = 487 - SYS_RENAME_EXT = 488 - SYS_MREMAP_ENCRYPTED = 489 - SYS_MAXSYSCALL = 490 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_arm.go deleted file mode 100644 index b8c9aea852f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_arm.go +++ /dev/null @@ -1,358 +0,0 @@ -// mksysnum_darwin.pl /usr/include/sys/syscall.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build arm,darwin - -package unix - -const ( - SYS_SYSCALL = 0 - SYS_EXIT = 1 - SYS_FORK = 2 - SYS_READ = 3 - SYS_WRITE = 4 - SYS_OPEN = 5 - SYS_CLOSE = 6 - SYS_WAIT4 = 7 - SYS_LINK = 9 - SYS_UNLINK = 10 - SYS_CHDIR = 12 - SYS_FCHDIR = 13 - SYS_MKNOD = 14 - SYS_CHMOD = 15 - SYS_CHOWN = 16 - SYS_GETFSSTAT = 18 - 
SYS_GETPID = 20 - SYS_SETUID = 23 - SYS_GETUID = 24 - SYS_GETEUID = 25 - SYS_PTRACE = 26 - SYS_RECVMSG = 27 - SYS_SENDMSG = 28 - SYS_RECVFROM = 29 - SYS_ACCEPT = 30 - SYS_GETPEERNAME = 31 - SYS_GETSOCKNAME = 32 - SYS_ACCESS = 33 - SYS_CHFLAGS = 34 - SYS_FCHFLAGS = 35 - SYS_SYNC = 36 - SYS_KILL = 37 - SYS_GETPPID = 39 - SYS_DUP = 41 - SYS_PIPE = 42 - SYS_GETEGID = 43 - SYS_SIGACTION = 46 - SYS_GETGID = 47 - SYS_SIGPROCMASK = 48 - SYS_GETLOGIN = 49 - SYS_SETLOGIN = 50 - SYS_ACCT = 51 - SYS_SIGPENDING = 52 - SYS_SIGALTSTACK = 53 - SYS_IOCTL = 54 - SYS_REBOOT = 55 - SYS_REVOKE = 56 - SYS_SYMLINK = 57 - SYS_READLINK = 58 - SYS_EXECVE = 59 - SYS_UMASK = 60 - SYS_CHROOT = 61 - SYS_MSYNC = 65 - SYS_VFORK = 66 - SYS_MUNMAP = 73 - SYS_MPROTECT = 74 - SYS_MADVISE = 75 - SYS_MINCORE = 78 - SYS_GETGROUPS = 79 - SYS_SETGROUPS = 80 - SYS_GETPGRP = 81 - SYS_SETPGID = 82 - SYS_SETITIMER = 83 - SYS_SWAPON = 85 - SYS_GETITIMER = 86 - SYS_GETDTABLESIZE = 89 - SYS_DUP2 = 90 - SYS_FCNTL = 92 - SYS_SELECT = 93 - SYS_FSYNC = 95 - SYS_SETPRIORITY = 96 - SYS_SOCKET = 97 - SYS_CONNECT = 98 - SYS_GETPRIORITY = 100 - SYS_BIND = 104 - SYS_SETSOCKOPT = 105 - SYS_LISTEN = 106 - SYS_SIGSUSPEND = 111 - SYS_GETTIMEOFDAY = 116 - SYS_GETRUSAGE = 117 - SYS_GETSOCKOPT = 118 - SYS_READV = 120 - SYS_WRITEV = 121 - SYS_SETTIMEOFDAY = 122 - SYS_FCHOWN = 123 - SYS_FCHMOD = 124 - SYS_SETREUID = 126 - SYS_SETREGID = 127 - SYS_RENAME = 128 - SYS_FLOCK = 131 - SYS_MKFIFO = 132 - SYS_SENDTO = 133 - SYS_SHUTDOWN = 134 - SYS_SOCKETPAIR = 135 - SYS_MKDIR = 136 - SYS_RMDIR = 137 - SYS_UTIMES = 138 - SYS_FUTIMES = 139 - SYS_ADJTIME = 140 - SYS_GETHOSTUUID = 142 - SYS_SETSID = 147 - SYS_GETPGID = 151 - SYS_SETPRIVEXEC = 152 - SYS_PREAD = 153 - SYS_PWRITE = 154 - SYS_NFSSVC = 155 - SYS_STATFS = 157 - SYS_FSTATFS = 158 - SYS_UNMOUNT = 159 - SYS_GETFH = 161 - SYS_QUOTACTL = 165 - SYS_MOUNT = 167 - SYS_CSOPS = 169 - SYS_CSOPS_AUDITTOKEN = 170 - SYS_WAITID = 173 - SYS_KDEBUG_TRACE = 180 - SYS_SETGID = 181 - SYS_SETEGID = 182 - SYS_SETEUID = 183 - SYS_SIGRETURN = 184 - SYS_CHUD = 185 - SYS_FDATASYNC = 187 - SYS_STAT = 188 - SYS_FSTAT = 189 - SYS_LSTAT = 190 - SYS_PATHCONF = 191 - SYS_FPATHCONF = 192 - SYS_GETRLIMIT = 194 - SYS_SETRLIMIT = 195 - SYS_GETDIRENTRIES = 196 - SYS_MMAP = 197 - SYS_LSEEK = 199 - SYS_TRUNCATE = 200 - SYS_FTRUNCATE = 201 - SYS___SYSCTL = 202 - SYS_MLOCK = 203 - SYS_MUNLOCK = 204 - SYS_UNDELETE = 205 - SYS_ATSOCKET = 206 - SYS_ATGETMSG = 207 - SYS_ATPUTMSG = 208 - SYS_ATPSNDREQ = 209 - SYS_ATPSNDRSP = 210 - SYS_ATPGETREQ = 211 - SYS_ATPGETRSP = 212 - SYS_OPEN_DPROTECTED_NP = 216 - SYS_GETATTRLIST = 220 - SYS_SETATTRLIST = 221 - SYS_GETDIRENTRIESATTR = 222 - SYS_EXCHANGEDATA = 223 - SYS_SEARCHFS = 225 - SYS_DELETE = 226 - SYS_COPYFILE = 227 - SYS_FGETATTRLIST = 228 - SYS_FSETATTRLIST = 229 - SYS_POLL = 230 - SYS_WATCHEVENT = 231 - SYS_WAITEVENT = 232 - SYS_MODWATCH = 233 - SYS_GETXATTR = 234 - SYS_FGETXATTR = 235 - SYS_SETXATTR = 236 - SYS_FSETXATTR = 237 - SYS_REMOVEXATTR = 238 - SYS_FREMOVEXATTR = 239 - SYS_LISTXATTR = 240 - SYS_FLISTXATTR = 241 - SYS_FSCTL = 242 - SYS_INITGROUPS = 243 - SYS_POSIX_SPAWN = 244 - SYS_FFSCTL = 245 - SYS_NFSCLNT = 247 - SYS_FHOPEN = 248 - SYS_MINHERIT = 250 - SYS_SEMSYS = 251 - SYS_MSGSYS = 252 - SYS_SHMSYS = 253 - SYS_SEMCTL = 254 - SYS_SEMGET = 255 - SYS_SEMOP = 256 - SYS_MSGCTL = 258 - SYS_MSGGET = 259 - SYS_MSGSND = 260 - SYS_MSGRCV = 261 - SYS_SHMAT = 262 - SYS_SHMCTL = 263 - SYS_SHMDT = 264 - SYS_SHMGET = 265 - SYS_SHM_OPEN = 266 - SYS_SHM_UNLINK = 267 - SYS_SEM_OPEN = 268 - SYS_SEM_CLOSE = 
269 - SYS_SEM_UNLINK = 270 - SYS_SEM_WAIT = 271 - SYS_SEM_TRYWAIT = 272 - SYS_SEM_POST = 273 - SYS_SEM_GETVALUE = 274 - SYS_SEM_INIT = 275 - SYS_SEM_DESTROY = 276 - SYS_OPEN_EXTENDED = 277 - SYS_UMASK_EXTENDED = 278 - SYS_STAT_EXTENDED = 279 - SYS_LSTAT_EXTENDED = 280 - SYS_FSTAT_EXTENDED = 281 - SYS_CHMOD_EXTENDED = 282 - SYS_FCHMOD_EXTENDED = 283 - SYS_ACCESS_EXTENDED = 284 - SYS_SETTID = 285 - SYS_GETTID = 286 - SYS_SETSGROUPS = 287 - SYS_GETSGROUPS = 288 - SYS_SETWGROUPS = 289 - SYS_GETWGROUPS = 290 - SYS_MKFIFO_EXTENDED = 291 - SYS_MKDIR_EXTENDED = 292 - SYS_IDENTITYSVC = 293 - SYS_SHARED_REGION_CHECK_NP = 294 - SYS_VM_PRESSURE_MONITOR = 296 - SYS_PSYNCH_RW_LONGRDLOCK = 297 - SYS_PSYNCH_RW_YIELDWRLOCK = 298 - SYS_PSYNCH_RW_DOWNGRADE = 299 - SYS_PSYNCH_RW_UPGRADE = 300 - SYS_PSYNCH_MUTEXWAIT = 301 - SYS_PSYNCH_MUTEXDROP = 302 - SYS_PSYNCH_CVBROAD = 303 - SYS_PSYNCH_CVSIGNAL = 304 - SYS_PSYNCH_CVWAIT = 305 - SYS_PSYNCH_RW_RDLOCK = 306 - SYS_PSYNCH_RW_WRLOCK = 307 - SYS_PSYNCH_RW_UNLOCK = 308 - SYS_PSYNCH_RW_UNLOCK2 = 309 - SYS_GETSID = 310 - SYS_SETTID_WITH_PID = 311 - SYS_PSYNCH_CVCLRPREPOST = 312 - SYS_AIO_FSYNC = 313 - SYS_AIO_RETURN = 314 - SYS_AIO_SUSPEND = 315 - SYS_AIO_CANCEL = 316 - SYS_AIO_ERROR = 317 - SYS_AIO_READ = 318 - SYS_AIO_WRITE = 319 - SYS_LIO_LISTIO = 320 - SYS_IOPOLICYSYS = 322 - SYS_PROCESS_POLICY = 323 - SYS_MLOCKALL = 324 - SYS_MUNLOCKALL = 325 - SYS_ISSETUGID = 327 - SYS___PTHREAD_KILL = 328 - SYS___PTHREAD_SIGMASK = 329 - SYS___SIGWAIT = 330 - SYS___DISABLE_THREADSIGNAL = 331 - SYS___PTHREAD_MARKCANCEL = 332 - SYS___PTHREAD_CANCELED = 333 - SYS___SEMWAIT_SIGNAL = 334 - SYS_PROC_INFO = 336 - SYS_SENDFILE = 337 - SYS_STAT64 = 338 - SYS_FSTAT64 = 339 - SYS_LSTAT64 = 340 - SYS_STAT64_EXTENDED = 341 - SYS_LSTAT64_EXTENDED = 342 - SYS_FSTAT64_EXTENDED = 343 - SYS_GETDIRENTRIES64 = 344 - SYS_STATFS64 = 345 - SYS_FSTATFS64 = 346 - SYS_GETFSSTAT64 = 347 - SYS___PTHREAD_CHDIR = 348 - SYS___PTHREAD_FCHDIR = 349 - SYS_AUDIT = 350 - SYS_AUDITON = 351 - SYS_GETAUID = 353 - SYS_SETAUID = 354 - SYS_GETAUDIT_ADDR = 357 - SYS_SETAUDIT_ADDR = 358 - SYS_AUDITCTL = 359 - SYS_BSDTHREAD_CREATE = 360 - SYS_BSDTHREAD_TERMINATE = 361 - SYS_KQUEUE = 362 - SYS_KEVENT = 363 - SYS_LCHOWN = 364 - SYS_STACK_SNAPSHOT = 365 - SYS_BSDTHREAD_REGISTER = 366 - SYS_WORKQ_OPEN = 367 - SYS_WORKQ_KERNRETURN = 368 - SYS_KEVENT64 = 369 - SYS___OLD_SEMWAIT_SIGNAL = 370 - SYS___OLD_SEMWAIT_SIGNAL_NOCANCEL = 371 - SYS_THREAD_SELFID = 372 - SYS_LEDGER = 373 - SYS___MAC_EXECVE = 380 - SYS___MAC_SYSCALL = 381 - SYS___MAC_GET_FILE = 382 - SYS___MAC_SET_FILE = 383 - SYS___MAC_GET_LINK = 384 - SYS___MAC_SET_LINK = 385 - SYS___MAC_GET_PROC = 386 - SYS___MAC_SET_PROC = 387 - SYS___MAC_GET_FD = 388 - SYS___MAC_SET_FD = 389 - SYS___MAC_GET_PID = 390 - SYS___MAC_GET_LCID = 391 - SYS___MAC_GET_LCTX = 392 - SYS___MAC_SET_LCTX = 393 - SYS_SETLCID = 394 - SYS_GETLCID = 395 - SYS_READ_NOCANCEL = 396 - SYS_WRITE_NOCANCEL = 397 - SYS_OPEN_NOCANCEL = 398 - SYS_CLOSE_NOCANCEL = 399 - SYS_WAIT4_NOCANCEL = 400 - SYS_RECVMSG_NOCANCEL = 401 - SYS_SENDMSG_NOCANCEL = 402 - SYS_RECVFROM_NOCANCEL = 403 - SYS_ACCEPT_NOCANCEL = 404 - SYS_MSYNC_NOCANCEL = 405 - SYS_FCNTL_NOCANCEL = 406 - SYS_SELECT_NOCANCEL = 407 - SYS_FSYNC_NOCANCEL = 408 - SYS_CONNECT_NOCANCEL = 409 - SYS_SIGSUSPEND_NOCANCEL = 410 - SYS_READV_NOCANCEL = 411 - SYS_WRITEV_NOCANCEL = 412 - SYS_SENDTO_NOCANCEL = 413 - SYS_PREAD_NOCANCEL = 414 - SYS_PWRITE_NOCANCEL = 415 - SYS_WAITID_NOCANCEL = 416 - SYS_POLL_NOCANCEL = 417 - SYS_MSGSND_NOCANCEL = 418 - SYS_MSGRCV_NOCANCEL 
= 419 - SYS_SEM_WAIT_NOCANCEL = 420 - SYS_AIO_SUSPEND_NOCANCEL = 421 - SYS___SIGWAIT_NOCANCEL = 422 - SYS___SEMWAIT_SIGNAL_NOCANCEL = 423 - SYS___MAC_MOUNT = 424 - SYS___MAC_GET_MOUNT = 425 - SYS___MAC_GETFSSTAT = 426 - SYS_FSGETPATH = 427 - SYS_AUDIT_SESSION_SELF = 428 - SYS_AUDIT_SESSION_JOIN = 429 - SYS_FILEPORT_MAKEPORT = 430 - SYS_FILEPORT_MAKEFD = 431 - SYS_AUDIT_SESSION_PORT = 432 - SYS_PID_SUSPEND = 433 - SYS_PID_RESUME = 434 - SYS_PID_HIBERNATE = 435 - SYS_PID_SHUTDOWN_SOCKETS = 436 - SYS_SHARED_REGION_MAP_AND_SLIDE_NP = 438 - SYS_KAS_INFO = 439 - SYS_MAXSYSCALL = 440 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_arm64.go deleted file mode 100644 index 26677ebbf5b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_darwin_arm64.go +++ /dev/null @@ -1,398 +0,0 @@ -// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS8.4.sdk/usr/include/sys/syscall.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build arm64,darwin - -package unix - -const ( - SYS_SYSCALL = 0 - SYS_EXIT = 1 - SYS_FORK = 2 - SYS_READ = 3 - SYS_WRITE = 4 - SYS_OPEN = 5 - SYS_CLOSE = 6 - SYS_WAIT4 = 7 - SYS_LINK = 9 - SYS_UNLINK = 10 - SYS_CHDIR = 12 - SYS_FCHDIR = 13 - SYS_MKNOD = 14 - SYS_CHMOD = 15 - SYS_CHOWN = 16 - SYS_GETFSSTAT = 18 - SYS_GETPID = 20 - SYS_SETUID = 23 - SYS_GETUID = 24 - SYS_GETEUID = 25 - SYS_PTRACE = 26 - SYS_RECVMSG = 27 - SYS_SENDMSG = 28 - SYS_RECVFROM = 29 - SYS_ACCEPT = 30 - SYS_GETPEERNAME = 31 - SYS_GETSOCKNAME = 32 - SYS_ACCESS = 33 - SYS_CHFLAGS = 34 - SYS_FCHFLAGS = 35 - SYS_SYNC = 36 - SYS_KILL = 37 - SYS_GETPPID = 39 - SYS_DUP = 41 - SYS_PIPE = 42 - SYS_GETEGID = 43 - SYS_SIGACTION = 46 - SYS_GETGID = 47 - SYS_SIGPROCMASK = 48 - SYS_GETLOGIN = 49 - SYS_SETLOGIN = 50 - SYS_ACCT = 51 - SYS_SIGPENDING = 52 - SYS_SIGALTSTACK = 53 - SYS_IOCTL = 54 - SYS_REBOOT = 55 - SYS_REVOKE = 56 - SYS_SYMLINK = 57 - SYS_READLINK = 58 - SYS_EXECVE = 59 - SYS_UMASK = 60 - SYS_CHROOT = 61 - SYS_MSYNC = 65 - SYS_VFORK = 66 - SYS_MUNMAP = 73 - SYS_MPROTECT = 74 - SYS_MADVISE = 75 - SYS_MINCORE = 78 - SYS_GETGROUPS = 79 - SYS_SETGROUPS = 80 - SYS_GETPGRP = 81 - SYS_SETPGID = 82 - SYS_SETITIMER = 83 - SYS_SWAPON = 85 - SYS_GETITIMER = 86 - SYS_GETDTABLESIZE = 89 - SYS_DUP2 = 90 - SYS_FCNTL = 92 - SYS_SELECT = 93 - SYS_FSYNC = 95 - SYS_SETPRIORITY = 96 - SYS_SOCKET = 97 - SYS_CONNECT = 98 - SYS_GETPRIORITY = 100 - SYS_BIND = 104 - SYS_SETSOCKOPT = 105 - SYS_LISTEN = 106 - SYS_SIGSUSPEND = 111 - SYS_GETTIMEOFDAY = 116 - SYS_GETRUSAGE = 117 - SYS_GETSOCKOPT = 118 - SYS_READV = 120 - SYS_WRITEV = 121 - SYS_SETTIMEOFDAY = 122 - SYS_FCHOWN = 123 - SYS_FCHMOD = 124 - SYS_SETREUID = 126 - SYS_SETREGID = 127 - SYS_RENAME = 128 - SYS_FLOCK = 131 - SYS_MKFIFO = 132 - SYS_SENDTO = 133 - SYS_SHUTDOWN = 134 - SYS_SOCKETPAIR = 135 - SYS_MKDIR = 136 - SYS_RMDIR = 137 - SYS_UTIMES = 138 - SYS_FUTIMES = 139 - SYS_ADJTIME = 140 - SYS_GETHOSTUUID = 142 - SYS_SETSID = 147 - SYS_GETPGID = 151 - SYS_SETPRIVEXEC = 152 - SYS_PREAD = 153 - SYS_PWRITE = 154 - SYS_NFSSVC = 155 - SYS_STATFS = 157 - SYS_FSTATFS = 158 - SYS_UNMOUNT = 159 - SYS_GETFH = 161 - SYS_QUOTACTL = 165 - SYS_MOUNT = 167 - SYS_CSOPS = 169 - SYS_CSOPS_AUDITTOKEN = 170 - SYS_WAITID = 173 - SYS_KDEBUG_TRACE64 = 179 - SYS_KDEBUG_TRACE = 180 - SYS_SETGID = 181 - 
SYS_SETEGID = 182 - SYS_SETEUID = 183 - SYS_SIGRETURN = 184 - SYS_CHUD = 185 - SYS_FDATASYNC = 187 - SYS_STAT = 188 - SYS_FSTAT = 189 - SYS_LSTAT = 190 - SYS_PATHCONF = 191 - SYS_FPATHCONF = 192 - SYS_GETRLIMIT = 194 - SYS_SETRLIMIT = 195 - SYS_GETDIRENTRIES = 196 - SYS_MMAP = 197 - SYS_LSEEK = 199 - SYS_TRUNCATE = 200 - SYS_FTRUNCATE = 201 - SYS_SYSCTL = 202 - SYS_MLOCK = 203 - SYS_MUNLOCK = 204 - SYS_UNDELETE = 205 - SYS_OPEN_DPROTECTED_NP = 216 - SYS_GETATTRLIST = 220 - SYS_SETATTRLIST = 221 - SYS_GETDIRENTRIESATTR = 222 - SYS_EXCHANGEDATA = 223 - SYS_SEARCHFS = 225 - SYS_DELETE = 226 - SYS_COPYFILE = 227 - SYS_FGETATTRLIST = 228 - SYS_FSETATTRLIST = 229 - SYS_POLL = 230 - SYS_WATCHEVENT = 231 - SYS_WAITEVENT = 232 - SYS_MODWATCH = 233 - SYS_GETXATTR = 234 - SYS_FGETXATTR = 235 - SYS_SETXATTR = 236 - SYS_FSETXATTR = 237 - SYS_REMOVEXATTR = 238 - SYS_FREMOVEXATTR = 239 - SYS_LISTXATTR = 240 - SYS_FLISTXATTR = 241 - SYS_FSCTL = 242 - SYS_INITGROUPS = 243 - SYS_POSIX_SPAWN = 244 - SYS_FFSCTL = 245 - SYS_NFSCLNT = 247 - SYS_FHOPEN = 248 - SYS_MINHERIT = 250 - SYS_SEMSYS = 251 - SYS_MSGSYS = 252 - SYS_SHMSYS = 253 - SYS_SEMCTL = 254 - SYS_SEMGET = 255 - SYS_SEMOP = 256 - SYS_MSGCTL = 258 - SYS_MSGGET = 259 - SYS_MSGSND = 260 - SYS_MSGRCV = 261 - SYS_SHMAT = 262 - SYS_SHMCTL = 263 - SYS_SHMDT = 264 - SYS_SHMGET = 265 - SYS_SHM_OPEN = 266 - SYS_SHM_UNLINK = 267 - SYS_SEM_OPEN = 268 - SYS_SEM_CLOSE = 269 - SYS_SEM_UNLINK = 270 - SYS_SEM_WAIT = 271 - SYS_SEM_TRYWAIT = 272 - SYS_SEM_POST = 273 - SYS_SYSCTLBYNAME = 274 - SYS_OPEN_EXTENDED = 277 - SYS_UMASK_EXTENDED = 278 - SYS_STAT_EXTENDED = 279 - SYS_LSTAT_EXTENDED = 280 - SYS_FSTAT_EXTENDED = 281 - SYS_CHMOD_EXTENDED = 282 - SYS_FCHMOD_EXTENDED = 283 - SYS_ACCESS_EXTENDED = 284 - SYS_SETTID = 285 - SYS_GETTID = 286 - SYS_SETSGROUPS = 287 - SYS_GETSGROUPS = 288 - SYS_SETWGROUPS = 289 - SYS_GETWGROUPS = 290 - SYS_MKFIFO_EXTENDED = 291 - SYS_MKDIR_EXTENDED = 292 - SYS_IDENTITYSVC = 293 - SYS_SHARED_REGION_CHECK_NP = 294 - SYS_VM_PRESSURE_MONITOR = 296 - SYS_PSYNCH_RW_LONGRDLOCK = 297 - SYS_PSYNCH_RW_YIELDWRLOCK = 298 - SYS_PSYNCH_RW_DOWNGRADE = 299 - SYS_PSYNCH_RW_UPGRADE = 300 - SYS_PSYNCH_MUTEXWAIT = 301 - SYS_PSYNCH_MUTEXDROP = 302 - SYS_PSYNCH_CVBROAD = 303 - SYS_PSYNCH_CVSIGNAL = 304 - SYS_PSYNCH_CVWAIT = 305 - SYS_PSYNCH_RW_RDLOCK = 306 - SYS_PSYNCH_RW_WRLOCK = 307 - SYS_PSYNCH_RW_UNLOCK = 308 - SYS_PSYNCH_RW_UNLOCK2 = 309 - SYS_GETSID = 310 - SYS_SETTID_WITH_PID = 311 - SYS_PSYNCH_CVCLRPREPOST = 312 - SYS_AIO_FSYNC = 313 - SYS_AIO_RETURN = 314 - SYS_AIO_SUSPEND = 315 - SYS_AIO_CANCEL = 316 - SYS_AIO_ERROR = 317 - SYS_AIO_READ = 318 - SYS_AIO_WRITE = 319 - SYS_LIO_LISTIO = 320 - SYS_IOPOLICYSYS = 322 - SYS_PROCESS_POLICY = 323 - SYS_MLOCKALL = 324 - SYS_MUNLOCKALL = 325 - SYS_ISSETUGID = 327 - SYS___PTHREAD_KILL = 328 - SYS___PTHREAD_SIGMASK = 329 - SYS___SIGWAIT = 330 - SYS___DISABLE_THREADSIGNAL = 331 - SYS___PTHREAD_MARKCANCEL = 332 - SYS___PTHREAD_CANCELED = 333 - SYS___SEMWAIT_SIGNAL = 334 - SYS_PROC_INFO = 336 - SYS_SENDFILE = 337 - SYS_STAT64 = 338 - SYS_FSTAT64 = 339 - SYS_LSTAT64 = 340 - SYS_STAT64_EXTENDED = 341 - SYS_LSTAT64_EXTENDED = 342 - SYS_FSTAT64_EXTENDED = 343 - SYS_GETDIRENTRIES64 = 344 - SYS_STATFS64 = 345 - SYS_FSTATFS64 = 346 - SYS_GETFSSTAT64 = 347 - SYS___PTHREAD_CHDIR = 348 - SYS___PTHREAD_FCHDIR = 349 - SYS_AUDIT = 350 - SYS_AUDITON = 351 - SYS_GETAUID = 353 - SYS_SETAUID = 354 - SYS_GETAUDIT_ADDR = 357 - SYS_SETAUDIT_ADDR = 358 - SYS_AUDITCTL = 359 - SYS_BSDTHREAD_CREATE = 360 - SYS_BSDTHREAD_TERMINATE = 361 - 
SYS_KQUEUE = 362 - SYS_KEVENT = 363 - SYS_LCHOWN = 364 - SYS_STACK_SNAPSHOT = 365 - SYS_BSDTHREAD_REGISTER = 366 - SYS_WORKQ_OPEN = 367 - SYS_WORKQ_KERNRETURN = 368 - SYS_KEVENT64 = 369 - SYS___OLD_SEMWAIT_SIGNAL = 370 - SYS___OLD_SEMWAIT_SIGNAL_NOCANCEL = 371 - SYS_THREAD_SELFID = 372 - SYS_LEDGER = 373 - SYS___MAC_EXECVE = 380 - SYS___MAC_SYSCALL = 381 - SYS___MAC_GET_FILE = 382 - SYS___MAC_SET_FILE = 383 - SYS___MAC_GET_LINK = 384 - SYS___MAC_SET_LINK = 385 - SYS___MAC_GET_PROC = 386 - SYS___MAC_SET_PROC = 387 - SYS___MAC_GET_FD = 388 - SYS___MAC_SET_FD = 389 - SYS___MAC_GET_PID = 390 - SYS___MAC_GET_LCID = 391 - SYS___MAC_GET_LCTX = 392 - SYS___MAC_SET_LCTX = 393 - SYS_SETLCID = 394 - SYS_GETLCID = 395 - SYS_READ_NOCANCEL = 396 - SYS_WRITE_NOCANCEL = 397 - SYS_OPEN_NOCANCEL = 398 - SYS_CLOSE_NOCANCEL = 399 - SYS_WAIT4_NOCANCEL = 400 - SYS_RECVMSG_NOCANCEL = 401 - SYS_SENDMSG_NOCANCEL = 402 - SYS_RECVFROM_NOCANCEL = 403 - SYS_ACCEPT_NOCANCEL = 404 - SYS_MSYNC_NOCANCEL = 405 - SYS_FCNTL_NOCANCEL = 406 - SYS_SELECT_NOCANCEL = 407 - SYS_FSYNC_NOCANCEL = 408 - SYS_CONNECT_NOCANCEL = 409 - SYS_SIGSUSPEND_NOCANCEL = 410 - SYS_READV_NOCANCEL = 411 - SYS_WRITEV_NOCANCEL = 412 - SYS_SENDTO_NOCANCEL = 413 - SYS_PREAD_NOCANCEL = 414 - SYS_PWRITE_NOCANCEL = 415 - SYS_WAITID_NOCANCEL = 416 - SYS_POLL_NOCANCEL = 417 - SYS_MSGSND_NOCANCEL = 418 - SYS_MSGRCV_NOCANCEL = 419 - SYS_SEM_WAIT_NOCANCEL = 420 - SYS_AIO_SUSPEND_NOCANCEL = 421 - SYS___SIGWAIT_NOCANCEL = 422 - SYS___SEMWAIT_SIGNAL_NOCANCEL = 423 - SYS___MAC_MOUNT = 424 - SYS___MAC_GET_MOUNT = 425 - SYS___MAC_GETFSSTAT = 426 - SYS_FSGETPATH = 427 - SYS_AUDIT_SESSION_SELF = 428 - SYS_AUDIT_SESSION_JOIN = 429 - SYS_FILEPORT_MAKEPORT = 430 - SYS_FILEPORT_MAKEFD = 431 - SYS_AUDIT_SESSION_PORT = 432 - SYS_PID_SUSPEND = 433 - SYS_PID_RESUME = 434 - SYS_PID_HIBERNATE = 435 - SYS_PID_SHUTDOWN_SOCKETS = 436 - SYS_SHARED_REGION_MAP_AND_SLIDE_NP = 438 - SYS_KAS_INFO = 439 - SYS_MEMORYSTATUS_CONTROL = 440 - SYS_GUARDED_OPEN_NP = 441 - SYS_GUARDED_CLOSE_NP = 442 - SYS_GUARDED_KQUEUE_NP = 443 - SYS_CHANGE_FDGUARD_NP = 444 - SYS_PROC_RLIMIT_CONTROL = 446 - SYS_CONNECTX = 447 - SYS_DISCONNECTX = 448 - SYS_PEELOFF = 449 - SYS_SOCKET_DELEGATE = 450 - SYS_TELEMETRY = 451 - SYS_PROC_UUID_POLICY = 452 - SYS_MEMORYSTATUS_GET_LEVEL = 453 - SYS_SYSTEM_OVERRIDE = 454 - SYS_VFS_PURGE = 455 - SYS_SFI_CTL = 456 - SYS_SFI_PIDCTL = 457 - SYS_COALITION = 458 - SYS_COALITION_INFO = 459 - SYS_NECP_MATCH_POLICY = 460 - SYS_GETATTRLISTBULK = 461 - SYS_OPENAT = 463 - SYS_OPENAT_NOCANCEL = 464 - SYS_RENAMEAT = 465 - SYS_FACCESSAT = 466 - SYS_FCHMODAT = 467 - SYS_FCHOWNAT = 468 - SYS_FSTATAT = 469 - SYS_FSTATAT64 = 470 - SYS_LINKAT = 471 - SYS_UNLINKAT = 472 - SYS_READLINKAT = 473 - SYS_SYMLINKAT = 474 - SYS_MKDIRAT = 475 - SYS_GETATTRLISTAT = 476 - SYS_PROC_TRACE_LOG = 477 - SYS_BSDTHREAD_CTL = 478 - SYS_OPENBYID_NP = 479 - SYS_RECVMSG_X = 480 - SYS_SENDMSG_X = 481 - SYS_THREAD_SELFUSAGE = 482 - SYS_CSRCTL = 483 - SYS_GUARDED_OPEN_DPROTECTED_NP = 484 - SYS_GUARDED_WRITE_NP = 485 - SYS_GUARDED_PWRITE_NP = 486 - SYS_GUARDED_WRITEV_NP = 487 - SYS_RENAME_EXT = 488 - SYS_MREMAP_ENCRYPTED = 489 - SYS_MAXSYSCALL = 490 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_dragonfly_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_dragonfly_386.go deleted file mode 100644 index 785240a75b7..00000000000 --- 
a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_dragonfly_386.go +++ /dev/null @@ -1,304 +0,0 @@ -// mksysnum_dragonfly.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build 386,dragonfly - -package unix - -const ( - // SYS_NOSYS = 0; // { int nosys(void); } syscall nosys_args int - SYS_EXIT = 1 // { void exit(int rval); } - SYS_FORK = 2 // { int fork(void); } - SYS_READ = 3 // { ssize_t read(int fd, void *buf, size_t nbyte); } - SYS_WRITE = 4 // { ssize_t write(int fd, const void *buf, size_t nbyte); } - SYS_OPEN = 5 // { int open(char *path, int flags, int mode); } - SYS_CLOSE = 6 // { int close(int fd); } - SYS_WAIT4 = 7 // { int wait4(int pid, int *status, int options, \ - SYS_LINK = 9 // { int link(char *path, char *link); } - SYS_UNLINK = 10 // { int unlink(char *path); } - SYS_CHDIR = 12 // { int chdir(char *path); } - SYS_FCHDIR = 13 // { int fchdir(int fd); } - SYS_MKNOD = 14 // { int mknod(char *path, int mode, int dev); } - SYS_CHMOD = 15 // { int chmod(char *path, int mode); } - SYS_CHOWN = 16 // { int chown(char *path, int uid, int gid); } - SYS_OBREAK = 17 // { int obreak(char *nsize); } break obreak_args int - SYS_GETFSSTAT = 18 // { int getfsstat(struct statfs *buf, long bufsize, \ - SYS_GETPID = 20 // { pid_t getpid(void); } - SYS_MOUNT = 21 // { int mount(char *type, char *path, int flags, \ - SYS_UNMOUNT = 22 // { int unmount(char *path, int flags); } - SYS_SETUID = 23 // { int setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t getuid(void); } - SYS_GETEUID = 25 // { uid_t geteuid(void); } - SYS_PTRACE = 26 // { int ptrace(int req, pid_t pid, caddr_t addr, \ - SYS_RECVMSG = 27 // { int recvmsg(int s, struct msghdr *msg, int flags); } - SYS_SENDMSG = 28 // { int sendmsg(int s, caddr_t msg, int flags); } - SYS_RECVFROM = 29 // { int recvfrom(int s, caddr_t buf, size_t len, \ - SYS_ACCEPT = 30 // { int accept(int s, caddr_t name, int *anamelen); } - SYS_GETPEERNAME = 31 // { int getpeername(int fdes, caddr_t asa, int *alen); } - SYS_GETSOCKNAME = 32 // { int getsockname(int fdes, caddr_t asa, int *alen); } - SYS_ACCESS = 33 // { int access(char *path, int flags); } - SYS_CHFLAGS = 34 // { int chflags(char *path, int flags); } - SYS_FCHFLAGS = 35 // { int fchflags(int fd, int flags); } - SYS_SYNC = 36 // { int sync(void); } - SYS_KILL = 37 // { int kill(int pid, int signum); } - SYS_GETPPID = 39 // { pid_t getppid(void); } - SYS_DUP = 41 // { int dup(u_int fd); } - SYS_PIPE = 42 // { int pipe(void); } - SYS_GETEGID = 43 // { gid_t getegid(void); } - SYS_PROFIL = 44 // { int profil(caddr_t samples, size_t size, \ - SYS_KTRACE = 45 // { int ktrace(const char *fname, int ops, int facs, \ - SYS_GETGID = 47 // { gid_t getgid(void); } - SYS_GETLOGIN = 49 // { int getlogin(char *namebuf, u_int namelen); } - SYS_SETLOGIN = 50 // { int setlogin(char *namebuf); } - SYS_ACCT = 51 // { int acct(char *path); } - SYS_SIGALTSTACK = 53 // { int sigaltstack(stack_t *ss, stack_t *oss); } - SYS_IOCTL = 54 // { int ioctl(int fd, u_long com, caddr_t data); } - SYS_REBOOT = 55 // { int reboot(int opt); } - SYS_REVOKE = 56 // { int revoke(char *path); } - SYS_SYMLINK = 57 // { int symlink(char *path, char *link); } - SYS_READLINK = 58 // { int readlink(char *path, char *buf, int count); } - SYS_EXECVE = 59 // { int execve(char *fname, char **argv, char **envv); } - SYS_UMASK = 60 // { int umask(int newmask); } umask umask_args int - SYS_CHROOT = 61 // { int chroot(char *path); } - SYS_MSYNC = 65 // { int msync(void *addr, size_t len, int 
flags); } - SYS_VFORK = 66 // { pid_t vfork(void); } - SYS_SBRK = 69 // { int sbrk(int incr); } - SYS_SSTK = 70 // { int sstk(int incr); } - SYS_MUNMAP = 73 // { int munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int mprotect(void *addr, size_t len, int prot); } - SYS_MADVISE = 75 // { int madvise(void *addr, size_t len, int behav); } - SYS_MINCORE = 78 // { int mincore(const void *addr, size_t len, \ - SYS_GETGROUPS = 79 // { int getgroups(u_int gidsetsize, gid_t *gidset); } - SYS_SETGROUPS = 80 // { int setgroups(u_int gidsetsize, gid_t *gidset); } - SYS_GETPGRP = 81 // { int getpgrp(void); } - SYS_SETPGID = 82 // { int setpgid(int pid, int pgid); } - SYS_SETITIMER = 83 // { int setitimer(u_int which, struct itimerval *itv, \ - SYS_SWAPON = 85 // { int swapon(char *name); } - SYS_GETITIMER = 86 // { int getitimer(u_int which, struct itimerval *itv); } - SYS_GETDTABLESIZE = 89 // { int getdtablesize(void); } - SYS_DUP2 = 90 // { int dup2(u_int from, u_int to); } - SYS_FCNTL = 92 // { int fcntl(int fd, int cmd, long arg); } - SYS_SELECT = 93 // { int select(int nd, fd_set *in, fd_set *ou, \ - SYS_FSYNC = 95 // { int fsync(int fd); } - SYS_SETPRIORITY = 96 // { int setpriority(int which, int who, int prio); } - SYS_SOCKET = 97 // { int socket(int domain, int type, int protocol); } - SYS_CONNECT = 98 // { int connect(int s, caddr_t name, int namelen); } - SYS_GETPRIORITY = 100 // { int getpriority(int which, int who); } - SYS_BIND = 104 // { int bind(int s, caddr_t name, int namelen); } - SYS_SETSOCKOPT = 105 // { int setsockopt(int s, int level, int name, \ - SYS_LISTEN = 106 // { int listen(int s, int backlog); } - SYS_GETTIMEOFDAY = 116 // { int gettimeofday(struct timeval *tp, \ - SYS_GETRUSAGE = 117 // { int getrusage(int who, struct rusage *rusage); } - SYS_GETSOCKOPT = 118 // { int getsockopt(int s, int level, int name, \ - SYS_READV = 120 // { int readv(int fd, struct iovec *iovp, u_int iovcnt); } - SYS_WRITEV = 121 // { int writev(int fd, struct iovec *iovp, \ - SYS_SETTIMEOFDAY = 122 // { int settimeofday(struct timeval *tv, \ - SYS_FCHOWN = 123 // { int fchown(int fd, int uid, int gid); } - SYS_FCHMOD = 124 // { int fchmod(int fd, int mode); } - SYS_SETREUID = 126 // { int setreuid(int ruid, int euid); } - SYS_SETREGID = 127 // { int setregid(int rgid, int egid); } - SYS_RENAME = 128 // { int rename(char *from, char *to); } - SYS_FLOCK = 131 // { int flock(int fd, int how); } - SYS_MKFIFO = 132 // { int mkfifo(char *path, int mode); } - SYS_SENDTO = 133 // { int sendto(int s, caddr_t buf, size_t len, \ - SYS_SHUTDOWN = 134 // { int shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int socketpair(int domain, int type, int protocol, \ - SYS_MKDIR = 136 // { int mkdir(char *path, int mode); } - SYS_RMDIR = 137 // { int rmdir(char *path); } - SYS_UTIMES = 138 // { int utimes(char *path, struct timeval *tptr); } - SYS_ADJTIME = 140 // { int adjtime(struct timeval *delta, \ - SYS_SETSID = 147 // { int setsid(void); } - SYS_QUOTACTL = 148 // { int quotactl(char *path, int cmd, int uid, \ - SYS_STATFS = 157 // { int statfs(char *path, struct statfs *buf); } - SYS_FSTATFS = 158 // { int fstatfs(int fd, struct statfs *buf); } - SYS_GETFH = 161 // { int getfh(char *fname, struct fhandle *fhp); } - SYS_GETDOMAINNAME = 162 // { int getdomainname(char *domainname, int len); } - SYS_SETDOMAINNAME = 163 // { int setdomainname(char *domainname, int len); } - SYS_UNAME = 164 // { int uname(struct utsname *name); } - SYS_SYSARCH = 165 // { int sysarch(int op, char *parms); } - 
SYS_RTPRIO = 166 // { int rtprio(int function, pid_t pid, \ - SYS_EXTPREAD = 173 // { ssize_t extpread(int fd, void *buf, \ - SYS_EXTPWRITE = 174 // { ssize_t extpwrite(int fd, const void *buf, \ - SYS_NTP_ADJTIME = 176 // { int ntp_adjtime(struct timex *tp); } - SYS_SETGID = 181 // { int setgid(gid_t gid); } - SYS_SETEGID = 182 // { int setegid(gid_t egid); } - SYS_SETEUID = 183 // { int seteuid(uid_t euid); } - SYS_PATHCONF = 191 // { int pathconf(char *path, int name); } - SYS_FPATHCONF = 192 // { int fpathconf(int fd, int name); } - SYS_GETRLIMIT = 194 // { int getrlimit(u_int which, \ - SYS_SETRLIMIT = 195 // { int setrlimit(u_int which, \ - SYS_MMAP = 197 // { caddr_t mmap(caddr_t addr, size_t len, int prot, \ - // SYS_NOSYS = 198; // { int nosys(void); } __syscall __syscall_args int - SYS_LSEEK = 199 // { off_t lseek(int fd, int pad, off_t offset, \ - SYS_TRUNCATE = 200 // { int truncate(char *path, int pad, off_t length); } - SYS_FTRUNCATE = 201 // { int ftruncate(int fd, int pad, off_t length); } - SYS___SYSCTL = 202 // { int __sysctl(int *name, u_int namelen, void *old, \ - SYS_MLOCK = 203 // { int mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int munlock(const void *addr, size_t len); } - SYS_UNDELETE = 205 // { int undelete(char *path); } - SYS_FUTIMES = 206 // { int futimes(int fd, struct timeval *tptr); } - SYS_GETPGID = 207 // { int getpgid(pid_t pid); } - SYS_POLL = 209 // { int poll(struct pollfd *fds, u_int nfds, \ - SYS___SEMCTL = 220 // { int __semctl(int semid, int semnum, int cmd, \ - SYS_SEMGET = 221 // { int semget(key_t key, int nsems, int semflg); } - SYS_SEMOP = 222 // { int semop(int semid, struct sembuf *sops, \ - SYS_MSGCTL = 224 // { int msgctl(int msqid, int cmd, \ - SYS_MSGGET = 225 // { int msgget(key_t key, int msgflg); } - SYS_MSGSND = 226 // { int msgsnd(int msqid, void *msgp, size_t msgsz, \ - SYS_MSGRCV = 227 // { int msgrcv(int msqid, void *msgp, size_t msgsz, \ - SYS_SHMAT = 228 // { caddr_t shmat(int shmid, const void *shmaddr, \ - SYS_SHMCTL = 229 // { int shmctl(int shmid, int cmd, \ - SYS_SHMDT = 230 // { int shmdt(const void *shmaddr); } - SYS_SHMGET = 231 // { int shmget(key_t key, size_t size, int shmflg); } - SYS_CLOCK_GETTIME = 232 // { int clock_gettime(clockid_t clock_id, \ - SYS_CLOCK_SETTIME = 233 // { int clock_settime(clockid_t clock_id, \ - SYS_CLOCK_GETRES = 234 // { int clock_getres(clockid_t clock_id, \ - SYS_NANOSLEEP = 240 // { int nanosleep(const struct timespec *rqtp, \ - SYS_MINHERIT = 250 // { int minherit(void *addr, size_t len, int inherit); } - SYS_RFORK = 251 // { int rfork(int flags); } - SYS_OPENBSD_POLL = 252 // { int openbsd_poll(struct pollfd *fds, u_int nfds, \ - SYS_ISSETUGID = 253 // { int issetugid(void); } - SYS_LCHOWN = 254 // { int lchown(char *path, int uid, int gid); } - SYS_LCHMOD = 274 // { int lchmod(char *path, mode_t mode); } - SYS_LUTIMES = 276 // { int lutimes(char *path, struct timeval *tptr); } - SYS_EXTPREADV = 289 // { ssize_t extpreadv(int fd, struct iovec *iovp, \ - SYS_EXTPWRITEV = 290 // { ssize_t extpwritev(int fd, struct iovec *iovp,\ - SYS_FHSTATFS = 297 // { int fhstatfs(const struct fhandle *u_fhp, struct statfs *buf); } - SYS_FHOPEN = 298 // { int fhopen(const struct fhandle *u_fhp, int flags); } - SYS_MODNEXT = 300 // { int modnext(int modid); } - SYS_MODSTAT = 301 // { int modstat(int modid, struct module_stat* stat); } - SYS_MODFNEXT = 302 // { int modfnext(int modid); } - SYS_MODFIND = 303 // { int modfind(const char *name); } - SYS_KLDLOAD = 304 // { int 
kldload(const char *file); } - SYS_KLDUNLOAD = 305 // { int kldunload(int fileid); } - SYS_KLDFIND = 306 // { int kldfind(const char *file); } - SYS_KLDNEXT = 307 // { int kldnext(int fileid); } - SYS_KLDSTAT = 308 // { int kldstat(int fileid, struct kld_file_stat* stat); } - SYS_KLDFIRSTMOD = 309 // { int kldfirstmod(int fileid); } - SYS_GETSID = 310 // { int getsid(pid_t pid); } - SYS_SETRESUID = 311 // { int setresuid(uid_t ruid, uid_t euid, uid_t suid); } - SYS_SETRESGID = 312 // { int setresgid(gid_t rgid, gid_t egid, gid_t sgid); } - SYS_AIO_RETURN = 314 // { int aio_return(struct aiocb *aiocbp); } - SYS_AIO_SUSPEND = 315 // { int aio_suspend(struct aiocb * const * aiocbp, int nent, const struct timespec *timeout); } - SYS_AIO_CANCEL = 316 // { int aio_cancel(int fd, struct aiocb *aiocbp); } - SYS_AIO_ERROR = 317 // { int aio_error(struct aiocb *aiocbp); } - SYS_AIO_READ = 318 // { int aio_read(struct aiocb *aiocbp); } - SYS_AIO_WRITE = 319 // { int aio_write(struct aiocb *aiocbp); } - SYS_LIO_LISTIO = 320 // { int lio_listio(int mode, struct aiocb * const *acb_list, int nent, struct sigevent *sig); } - SYS_YIELD = 321 // { int yield(void); } - SYS_MLOCKALL = 324 // { int mlockall(int how); } - SYS_MUNLOCKALL = 325 // { int munlockall(void); } - SYS___GETCWD = 326 // { int __getcwd(u_char *buf, u_int buflen); } - SYS_SCHED_SETPARAM = 327 // { int sched_setparam (pid_t pid, const struct sched_param *param); } - SYS_SCHED_GETPARAM = 328 // { int sched_getparam (pid_t pid, struct sched_param *param); } - SYS_SCHED_SETSCHEDULER = 329 // { int sched_setscheduler (pid_t pid, int policy, const struct sched_param *param); } - SYS_SCHED_GETSCHEDULER = 330 // { int sched_getscheduler (pid_t pid); } - SYS_SCHED_YIELD = 331 // { int sched_yield (void); } - SYS_SCHED_GET_PRIORITY_MAX = 332 // { int sched_get_priority_max (int policy); } - SYS_SCHED_GET_PRIORITY_MIN = 333 // { int sched_get_priority_min (int policy); } - SYS_SCHED_RR_GET_INTERVAL = 334 // { int sched_rr_get_interval (pid_t pid, struct timespec *interval); } - SYS_UTRACE = 335 // { int utrace(const void *addr, size_t len); } - SYS_KLDSYM = 337 // { int kldsym(int fileid, int cmd, void *data); } - SYS_JAIL = 338 // { int jail(struct jail *jail); } - SYS_SIGPROCMASK = 340 // { int sigprocmask(int how, const sigset_t *set, \ - SYS_SIGSUSPEND = 341 // { int sigsuspend(const sigset_t *sigmask); } - SYS_SIGACTION = 342 // { int sigaction(int sig, const struct sigaction *act, \ - SYS_SIGPENDING = 343 // { int sigpending(sigset_t *set); } - SYS_SIGRETURN = 344 // { int sigreturn(ucontext_t *sigcntxp); } - SYS_SIGTIMEDWAIT = 345 // { int sigtimedwait(const sigset_t *set,\ - SYS_SIGWAITINFO = 346 // { int sigwaitinfo(const sigset_t *set,\ - SYS___ACL_GET_FILE = 347 // { int __acl_get_file(const char *path, \ - SYS___ACL_SET_FILE = 348 // { int __acl_set_file(const char *path, \ - SYS___ACL_GET_FD = 349 // { int __acl_get_fd(int filedes, acl_type_t type, \ - SYS___ACL_SET_FD = 350 // { int __acl_set_fd(int filedes, acl_type_t type, \ - SYS___ACL_DELETE_FILE = 351 // { int __acl_delete_file(const char *path, \ - SYS___ACL_DELETE_FD = 352 // { int __acl_delete_fd(int filedes, acl_type_t type); } - SYS___ACL_ACLCHECK_FILE = 353 // { int __acl_aclcheck_file(const char *path, \ - SYS___ACL_ACLCHECK_FD = 354 // { int __acl_aclcheck_fd(int filedes, acl_type_t type, \ - SYS_EXTATTRCTL = 355 // { int extattrctl(const char *path, int cmd, \ - SYS_EXTATTR_SET_FILE = 356 // { int extattr_set_file(const char *path, \ - SYS_EXTATTR_GET_FILE = 357 // { int 
extattr_get_file(const char *path, \ - SYS_EXTATTR_DELETE_FILE = 358 // { int extattr_delete_file(const char *path, \ - SYS_AIO_WAITCOMPLETE = 359 // { int aio_waitcomplete(struct aiocb **aiocbp, struct timespec *timeout); } - SYS_GETRESUID = 360 // { int getresuid(uid_t *ruid, uid_t *euid, uid_t *suid); } - SYS_GETRESGID = 361 // { int getresgid(gid_t *rgid, gid_t *egid, gid_t *sgid); } - SYS_KQUEUE = 362 // { int kqueue(void); } - SYS_KEVENT = 363 // { int kevent(int fd, \ - SYS_SCTP_PEELOFF = 364 // { int sctp_peeloff(int sd, caddr_t name ); } - SYS_LCHFLAGS = 391 // { int lchflags(char *path, int flags); } - SYS_UUIDGEN = 392 // { int uuidgen(struct uuid *store, int count); } - SYS_SENDFILE = 393 // { int sendfile(int fd, int s, off_t offset, size_t nbytes, \ - SYS_VARSYM_SET = 450 // { int varsym_set(int level, const char *name, const char *data); } - SYS_VARSYM_GET = 451 // { int varsym_get(int mask, const char *wild, char *buf, int bufsize); } - SYS_VARSYM_LIST = 452 // { int varsym_list(int level, char *buf, int maxsize, int *marker); } - SYS_EXEC_SYS_REGISTER = 465 // { int exec_sys_register(void *entry); } - SYS_EXEC_SYS_UNREGISTER = 466 // { int exec_sys_unregister(int id); } - SYS_SYS_CHECKPOINT = 467 // { int sys_checkpoint(int type, int fd, pid_t pid, int retval); } - SYS_MOUNTCTL = 468 // { int mountctl(const char *path, int op, int fd, const void *ctl, int ctllen, void *buf, int buflen); } - SYS_UMTX_SLEEP = 469 // { int umtx_sleep(volatile const int *ptr, int value, int timeout); } - SYS_UMTX_WAKEUP = 470 // { int umtx_wakeup(volatile const int *ptr, int count); } - SYS_JAIL_ATTACH = 471 // { int jail_attach(int jid); } - SYS_SET_TLS_AREA = 472 // { int set_tls_area(int which, struct tls_info *info, size_t infosize); } - SYS_GET_TLS_AREA = 473 // { int get_tls_area(int which, struct tls_info *info, size_t infosize); } - SYS_CLOSEFROM = 474 // { int closefrom(int fd); } - SYS_STAT = 475 // { int stat(const char *path, struct stat *ub); } - SYS_FSTAT = 476 // { int fstat(int fd, struct stat *sb); } - SYS_LSTAT = 477 // { int lstat(const char *path, struct stat *ub); } - SYS_FHSTAT = 478 // { int fhstat(const struct fhandle *u_fhp, struct stat *sb); } - SYS_GETDIRENTRIES = 479 // { int getdirentries(int fd, char *buf, u_int count, \ - SYS_GETDENTS = 480 // { int getdents(int fd, char *buf, size_t count); } - SYS_USCHED_SET = 481 // { int usched_set(pid_t pid, int cmd, void *data, \ - SYS_EXTACCEPT = 482 // { int extaccept(int s, int flags, caddr_t name, int *anamelen); } - SYS_EXTCONNECT = 483 // { int extconnect(int s, int flags, caddr_t name, int namelen); } - SYS_MCONTROL = 485 // { int mcontrol(void *addr, size_t len, int behav, off_t value); } - SYS_VMSPACE_CREATE = 486 // { int vmspace_create(void *id, int type, void *data); } - SYS_VMSPACE_DESTROY = 487 // { int vmspace_destroy(void *id); } - SYS_VMSPACE_CTL = 488 // { int vmspace_ctl(void *id, int cmd, \ - SYS_VMSPACE_MMAP = 489 // { int vmspace_mmap(void *id, void *addr, size_t len, \ - SYS_VMSPACE_MUNMAP = 490 // { int vmspace_munmap(void *id, void *addr, \ - SYS_VMSPACE_MCONTROL = 491 // { int vmspace_mcontrol(void *id, void *addr, \ - SYS_VMSPACE_PREAD = 492 // { ssize_t vmspace_pread(void *id, void *buf, \ - SYS_VMSPACE_PWRITE = 493 // { ssize_t vmspace_pwrite(void *id, const void *buf, \ - SYS_EXTEXIT = 494 // { void extexit(int how, int status, void *addr); } - SYS_LWP_CREATE = 495 // { int lwp_create(struct lwp_params *params); } - SYS_LWP_GETTID = 496 // { lwpid_t lwp_gettid(void); } - SYS_LWP_KILL = 497 // { 
int lwp_kill(pid_t pid, lwpid_t tid, int signum); } - SYS_LWP_RTPRIO = 498 // { int lwp_rtprio(int function, pid_t pid, lwpid_t tid, struct rtprio *rtp); } - SYS_PSELECT = 499 // { int pselect(int nd, fd_set *in, fd_set *ou, \ - SYS_STATVFS = 500 // { int statvfs(const char *path, struct statvfs *buf); } - SYS_FSTATVFS = 501 // { int fstatvfs(int fd, struct statvfs *buf); } - SYS_FHSTATVFS = 502 // { int fhstatvfs(const struct fhandle *u_fhp, struct statvfs *buf); } - SYS_GETVFSSTAT = 503 // { int getvfsstat(struct statfs *buf, \ - SYS_OPENAT = 504 // { int openat(int fd, char *path, int flags, int mode); } - SYS_FSTATAT = 505 // { int fstatat(int fd, char *path, \ - SYS_FCHMODAT = 506 // { int fchmodat(int fd, char *path, int mode, \ - SYS_FCHOWNAT = 507 // { int fchownat(int fd, char *path, int uid, int gid, \ - SYS_UNLINKAT = 508 // { int unlinkat(int fd, char *path, int flags); } - SYS_FACCESSAT = 509 // { int faccessat(int fd, char *path, int amode, \ - SYS_MQ_OPEN = 510 // { mqd_t mq_open(const char * name, int oflag, \ - SYS_MQ_CLOSE = 511 // { int mq_close(mqd_t mqdes); } - SYS_MQ_UNLINK = 512 // { int mq_unlink(const char *name); } - SYS_MQ_GETATTR = 513 // { int mq_getattr(mqd_t mqdes, \ - SYS_MQ_SETATTR = 514 // { int mq_setattr(mqd_t mqdes, \ - SYS_MQ_NOTIFY = 515 // { int mq_notify(mqd_t mqdes, \ - SYS_MQ_SEND = 516 // { int mq_send(mqd_t mqdes, const char *msg_ptr, \ - SYS_MQ_RECEIVE = 517 // { ssize_t mq_receive(mqd_t mqdes, char *msg_ptr, \ - SYS_MQ_TIMEDSEND = 518 // { int mq_timedsend(mqd_t mqdes, \ - SYS_MQ_TIMEDRECEIVE = 519 // { ssize_t mq_timedreceive(mqd_t mqdes, \ - SYS_IOPRIO_SET = 520 // { int ioprio_set(int which, int who, int prio); } - SYS_IOPRIO_GET = 521 // { int ioprio_get(int which, int who); } - SYS_CHROOT_KERNEL = 522 // { int chroot_kernel(char *path); } - SYS_RENAMEAT = 523 // { int renameat(int oldfd, char *old, int newfd, \ - SYS_MKDIRAT = 524 // { int mkdirat(int fd, char *path, mode_t mode); } - SYS_MKFIFOAT = 525 // { int mkfifoat(int fd, char *path, mode_t mode); } - SYS_MKNODAT = 526 // { int mknodat(int fd, char *path, mode_t mode, \ - SYS_READLINKAT = 527 // { int readlinkat(int fd, char *path, char *buf, \ - SYS_SYMLINKAT = 528 // { int symlinkat(char *path1, int fd, char *path2); } - SYS_SWAPOFF = 529 // { int swapoff(char *name); } - SYS_VQUOTACTL = 530 // { int vquotactl(const char *path, \ - SYS_LINKAT = 531 // { int linkat(int fd1, char *path1, int fd2, \ - SYS_EACCESS = 532 // { int eaccess(char *path, int flags); } - SYS_LPATHCONF = 533 // { int lpathconf(char *path, int name); } - SYS_VMM_GUEST_CTL = 534 // { int vmm_guest_ctl(int op, struct vmm_guest_options *options); } - SYS_VMM_GUEST_SYNC_ADDR = 535 // { int vmm_guest_sync_addr(long *dstaddr, long *srcaddr); } -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_dragonfly_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_dragonfly_amd64.go deleted file mode 100644 index d6038fa9b08..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_dragonfly_amd64.go +++ /dev/null @@ -1,304 +0,0 @@ -// mksysnum_dragonfly.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build amd64,dragonfly - -package unix - -const ( - // SYS_NOSYS = 0; // { int nosys(void); } syscall nosys_args int - SYS_EXIT = 1 // { void exit(int rval); } - SYS_FORK = 2 // { int fork(void); } - SYS_READ = 3 // { ssize_t read(int fd, void *buf, size_t nbyte); 
} - SYS_WRITE = 4 // { ssize_t write(int fd, const void *buf, size_t nbyte); } - SYS_OPEN = 5 // { int open(char *path, int flags, int mode); } - SYS_CLOSE = 6 // { int close(int fd); } - SYS_WAIT4 = 7 // { int wait4(int pid, int *status, int options, \ - SYS_LINK = 9 // { int link(char *path, char *link); } - SYS_UNLINK = 10 // { int unlink(char *path); } - SYS_CHDIR = 12 // { int chdir(char *path); } - SYS_FCHDIR = 13 // { int fchdir(int fd); } - SYS_MKNOD = 14 // { int mknod(char *path, int mode, int dev); } - SYS_CHMOD = 15 // { int chmod(char *path, int mode); } - SYS_CHOWN = 16 // { int chown(char *path, int uid, int gid); } - SYS_OBREAK = 17 // { int obreak(char *nsize); } break obreak_args int - SYS_GETFSSTAT = 18 // { int getfsstat(struct statfs *buf, long bufsize, \ - SYS_GETPID = 20 // { pid_t getpid(void); } - SYS_MOUNT = 21 // { int mount(char *type, char *path, int flags, \ - SYS_UNMOUNT = 22 // { int unmount(char *path, int flags); } - SYS_SETUID = 23 // { int setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t getuid(void); } - SYS_GETEUID = 25 // { uid_t geteuid(void); } - SYS_PTRACE = 26 // { int ptrace(int req, pid_t pid, caddr_t addr, \ - SYS_RECVMSG = 27 // { int recvmsg(int s, struct msghdr *msg, int flags); } - SYS_SENDMSG = 28 // { int sendmsg(int s, caddr_t msg, int flags); } - SYS_RECVFROM = 29 // { int recvfrom(int s, caddr_t buf, size_t len, \ - SYS_ACCEPT = 30 // { int accept(int s, caddr_t name, int *anamelen); } - SYS_GETPEERNAME = 31 // { int getpeername(int fdes, caddr_t asa, int *alen); } - SYS_GETSOCKNAME = 32 // { int getsockname(int fdes, caddr_t asa, int *alen); } - SYS_ACCESS = 33 // { int access(char *path, int flags); } - SYS_CHFLAGS = 34 // { int chflags(char *path, int flags); } - SYS_FCHFLAGS = 35 // { int fchflags(int fd, int flags); } - SYS_SYNC = 36 // { int sync(void); } - SYS_KILL = 37 // { int kill(int pid, int signum); } - SYS_GETPPID = 39 // { pid_t getppid(void); } - SYS_DUP = 41 // { int dup(u_int fd); } - SYS_PIPE = 42 // { int pipe(void); } - SYS_GETEGID = 43 // { gid_t getegid(void); } - SYS_PROFIL = 44 // { int profil(caddr_t samples, size_t size, \ - SYS_KTRACE = 45 // { int ktrace(const char *fname, int ops, int facs, \ - SYS_GETGID = 47 // { gid_t getgid(void); } - SYS_GETLOGIN = 49 // { int getlogin(char *namebuf, u_int namelen); } - SYS_SETLOGIN = 50 // { int setlogin(char *namebuf); } - SYS_ACCT = 51 // { int acct(char *path); } - SYS_SIGALTSTACK = 53 // { int sigaltstack(stack_t *ss, stack_t *oss); } - SYS_IOCTL = 54 // { int ioctl(int fd, u_long com, caddr_t data); } - SYS_REBOOT = 55 // { int reboot(int opt); } - SYS_REVOKE = 56 // { int revoke(char *path); } - SYS_SYMLINK = 57 // { int symlink(char *path, char *link); } - SYS_READLINK = 58 // { int readlink(char *path, char *buf, int count); } - SYS_EXECVE = 59 // { int execve(char *fname, char **argv, char **envv); } - SYS_UMASK = 60 // { int umask(int newmask); } umask umask_args int - SYS_CHROOT = 61 // { int chroot(char *path); } - SYS_MSYNC = 65 // { int msync(void *addr, size_t len, int flags); } - SYS_VFORK = 66 // { pid_t vfork(void); } - SYS_SBRK = 69 // { int sbrk(int incr); } - SYS_SSTK = 70 // { int sstk(int incr); } - SYS_MUNMAP = 73 // { int munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int mprotect(void *addr, size_t len, int prot); } - SYS_MADVISE = 75 // { int madvise(void *addr, size_t len, int behav); } - SYS_MINCORE = 78 // { int mincore(const void *addr, size_t len, \ - SYS_GETGROUPS = 79 // { int getgroups(u_int gidsetsize, gid_t *gidset); 
} - SYS_SETGROUPS = 80 // { int setgroups(u_int gidsetsize, gid_t *gidset); } - SYS_GETPGRP = 81 // { int getpgrp(void); } - SYS_SETPGID = 82 // { int setpgid(int pid, int pgid); } - SYS_SETITIMER = 83 // { int setitimer(u_int which, struct itimerval *itv, \ - SYS_SWAPON = 85 // { int swapon(char *name); } - SYS_GETITIMER = 86 // { int getitimer(u_int which, struct itimerval *itv); } - SYS_GETDTABLESIZE = 89 // { int getdtablesize(void); } - SYS_DUP2 = 90 // { int dup2(u_int from, u_int to); } - SYS_FCNTL = 92 // { int fcntl(int fd, int cmd, long arg); } - SYS_SELECT = 93 // { int select(int nd, fd_set *in, fd_set *ou, \ - SYS_FSYNC = 95 // { int fsync(int fd); } - SYS_SETPRIORITY = 96 // { int setpriority(int which, int who, int prio); } - SYS_SOCKET = 97 // { int socket(int domain, int type, int protocol); } - SYS_CONNECT = 98 // { int connect(int s, caddr_t name, int namelen); } - SYS_GETPRIORITY = 100 // { int getpriority(int which, int who); } - SYS_BIND = 104 // { int bind(int s, caddr_t name, int namelen); } - SYS_SETSOCKOPT = 105 // { int setsockopt(int s, int level, int name, \ - SYS_LISTEN = 106 // { int listen(int s, int backlog); } - SYS_GETTIMEOFDAY = 116 // { int gettimeofday(struct timeval *tp, \ - SYS_GETRUSAGE = 117 // { int getrusage(int who, struct rusage *rusage); } - SYS_GETSOCKOPT = 118 // { int getsockopt(int s, int level, int name, \ - SYS_READV = 120 // { int readv(int fd, struct iovec *iovp, u_int iovcnt); } - SYS_WRITEV = 121 // { int writev(int fd, struct iovec *iovp, \ - SYS_SETTIMEOFDAY = 122 // { int settimeofday(struct timeval *tv, \ - SYS_FCHOWN = 123 // { int fchown(int fd, int uid, int gid); } - SYS_FCHMOD = 124 // { int fchmod(int fd, int mode); } - SYS_SETREUID = 126 // { int setreuid(int ruid, int euid); } - SYS_SETREGID = 127 // { int setregid(int rgid, int egid); } - SYS_RENAME = 128 // { int rename(char *from, char *to); } - SYS_FLOCK = 131 // { int flock(int fd, int how); } - SYS_MKFIFO = 132 // { int mkfifo(char *path, int mode); } - SYS_SENDTO = 133 // { int sendto(int s, caddr_t buf, size_t len, \ - SYS_SHUTDOWN = 134 // { int shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int socketpair(int domain, int type, int protocol, \ - SYS_MKDIR = 136 // { int mkdir(char *path, int mode); } - SYS_RMDIR = 137 // { int rmdir(char *path); } - SYS_UTIMES = 138 // { int utimes(char *path, struct timeval *tptr); } - SYS_ADJTIME = 140 // { int adjtime(struct timeval *delta, \ - SYS_SETSID = 147 // { int setsid(void); } - SYS_QUOTACTL = 148 // { int quotactl(char *path, int cmd, int uid, \ - SYS_STATFS = 157 // { int statfs(char *path, struct statfs *buf); } - SYS_FSTATFS = 158 // { int fstatfs(int fd, struct statfs *buf); } - SYS_GETFH = 161 // { int getfh(char *fname, struct fhandle *fhp); } - SYS_GETDOMAINNAME = 162 // { int getdomainname(char *domainname, int len); } - SYS_SETDOMAINNAME = 163 // { int setdomainname(char *domainname, int len); } - SYS_UNAME = 164 // { int uname(struct utsname *name); } - SYS_SYSARCH = 165 // { int sysarch(int op, char *parms); } - SYS_RTPRIO = 166 // { int rtprio(int function, pid_t pid, \ - SYS_EXTPREAD = 173 // { ssize_t extpread(int fd, void *buf, \ - SYS_EXTPWRITE = 174 // { ssize_t extpwrite(int fd, const void *buf, \ - SYS_NTP_ADJTIME = 176 // { int ntp_adjtime(struct timex *tp); } - SYS_SETGID = 181 // { int setgid(gid_t gid); } - SYS_SETEGID = 182 // { int setegid(gid_t egid); } - SYS_SETEUID = 183 // { int seteuid(uid_t euid); } - SYS_PATHCONF = 191 // { int pathconf(char *path, int name); } - SYS_FPATHCONF 
= 192 // { int fpathconf(int fd, int name); } - SYS_GETRLIMIT = 194 // { int getrlimit(u_int which, \ - SYS_SETRLIMIT = 195 // { int setrlimit(u_int which, \ - SYS_MMAP = 197 // { caddr_t mmap(caddr_t addr, size_t len, int prot, \ - // SYS_NOSYS = 198; // { int nosys(void); } __syscall __syscall_args int - SYS_LSEEK = 199 // { off_t lseek(int fd, int pad, off_t offset, \ - SYS_TRUNCATE = 200 // { int truncate(char *path, int pad, off_t length); } - SYS_FTRUNCATE = 201 // { int ftruncate(int fd, int pad, off_t length); } - SYS___SYSCTL = 202 // { int __sysctl(int *name, u_int namelen, void *old, \ - SYS_MLOCK = 203 // { int mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int munlock(const void *addr, size_t len); } - SYS_UNDELETE = 205 // { int undelete(char *path); } - SYS_FUTIMES = 206 // { int futimes(int fd, struct timeval *tptr); } - SYS_GETPGID = 207 // { int getpgid(pid_t pid); } - SYS_POLL = 209 // { int poll(struct pollfd *fds, u_int nfds, \ - SYS___SEMCTL = 220 // { int __semctl(int semid, int semnum, int cmd, \ - SYS_SEMGET = 221 // { int semget(key_t key, int nsems, int semflg); } - SYS_SEMOP = 222 // { int semop(int semid, struct sembuf *sops, \ - SYS_MSGCTL = 224 // { int msgctl(int msqid, int cmd, \ - SYS_MSGGET = 225 // { int msgget(key_t key, int msgflg); } - SYS_MSGSND = 226 // { int msgsnd(int msqid, void *msgp, size_t msgsz, \ - SYS_MSGRCV = 227 // { int msgrcv(int msqid, void *msgp, size_t msgsz, \ - SYS_SHMAT = 228 // { caddr_t shmat(int shmid, const void *shmaddr, \ - SYS_SHMCTL = 229 // { int shmctl(int shmid, int cmd, \ - SYS_SHMDT = 230 // { int shmdt(const void *shmaddr); } - SYS_SHMGET = 231 // { int shmget(key_t key, size_t size, int shmflg); } - SYS_CLOCK_GETTIME = 232 // { int clock_gettime(clockid_t clock_id, \ - SYS_CLOCK_SETTIME = 233 // { int clock_settime(clockid_t clock_id, \ - SYS_CLOCK_GETRES = 234 // { int clock_getres(clockid_t clock_id, \ - SYS_NANOSLEEP = 240 // { int nanosleep(const struct timespec *rqtp, \ - SYS_MINHERIT = 250 // { int minherit(void *addr, size_t len, int inherit); } - SYS_RFORK = 251 // { int rfork(int flags); } - SYS_OPENBSD_POLL = 252 // { int openbsd_poll(struct pollfd *fds, u_int nfds, \ - SYS_ISSETUGID = 253 // { int issetugid(void); } - SYS_LCHOWN = 254 // { int lchown(char *path, int uid, int gid); } - SYS_LCHMOD = 274 // { int lchmod(char *path, mode_t mode); } - SYS_LUTIMES = 276 // { int lutimes(char *path, struct timeval *tptr); } - SYS_EXTPREADV = 289 // { ssize_t extpreadv(int fd, struct iovec *iovp, \ - SYS_EXTPWRITEV = 290 // { ssize_t extpwritev(int fd, struct iovec *iovp,\ - SYS_FHSTATFS = 297 // { int fhstatfs(const struct fhandle *u_fhp, struct statfs *buf); } - SYS_FHOPEN = 298 // { int fhopen(const struct fhandle *u_fhp, int flags); } - SYS_MODNEXT = 300 // { int modnext(int modid); } - SYS_MODSTAT = 301 // { int modstat(int modid, struct module_stat* stat); } - SYS_MODFNEXT = 302 // { int modfnext(int modid); } - SYS_MODFIND = 303 // { int modfind(const char *name); } - SYS_KLDLOAD = 304 // { int kldload(const char *file); } - SYS_KLDUNLOAD = 305 // { int kldunload(int fileid); } - SYS_KLDFIND = 306 // { int kldfind(const char *file); } - SYS_KLDNEXT = 307 // { int kldnext(int fileid); } - SYS_KLDSTAT = 308 // { int kldstat(int fileid, struct kld_file_stat* stat); } - SYS_KLDFIRSTMOD = 309 // { int kldfirstmod(int fileid); } - SYS_GETSID = 310 // { int getsid(pid_t pid); } - SYS_SETRESUID = 311 // { int setresuid(uid_t ruid, uid_t euid, uid_t suid); } - SYS_SETRESGID = 312 // { int 
setresgid(gid_t rgid, gid_t egid, gid_t sgid); } - SYS_AIO_RETURN = 314 // { int aio_return(struct aiocb *aiocbp); } - SYS_AIO_SUSPEND = 315 // { int aio_suspend(struct aiocb * const * aiocbp, int nent, const struct timespec *timeout); } - SYS_AIO_CANCEL = 316 // { int aio_cancel(int fd, struct aiocb *aiocbp); } - SYS_AIO_ERROR = 317 // { int aio_error(struct aiocb *aiocbp); } - SYS_AIO_READ = 318 // { int aio_read(struct aiocb *aiocbp); } - SYS_AIO_WRITE = 319 // { int aio_write(struct aiocb *aiocbp); } - SYS_LIO_LISTIO = 320 // { int lio_listio(int mode, struct aiocb * const *acb_list, int nent, struct sigevent *sig); } - SYS_YIELD = 321 // { int yield(void); } - SYS_MLOCKALL = 324 // { int mlockall(int how); } - SYS_MUNLOCKALL = 325 // { int munlockall(void); } - SYS___GETCWD = 326 // { int __getcwd(u_char *buf, u_int buflen); } - SYS_SCHED_SETPARAM = 327 // { int sched_setparam (pid_t pid, const struct sched_param *param); } - SYS_SCHED_GETPARAM = 328 // { int sched_getparam (pid_t pid, struct sched_param *param); } - SYS_SCHED_SETSCHEDULER = 329 // { int sched_setscheduler (pid_t pid, int policy, const struct sched_param *param); } - SYS_SCHED_GETSCHEDULER = 330 // { int sched_getscheduler (pid_t pid); } - SYS_SCHED_YIELD = 331 // { int sched_yield (void); } - SYS_SCHED_GET_PRIORITY_MAX = 332 // { int sched_get_priority_max (int policy); } - SYS_SCHED_GET_PRIORITY_MIN = 333 // { int sched_get_priority_min (int policy); } - SYS_SCHED_RR_GET_INTERVAL = 334 // { int sched_rr_get_interval (pid_t pid, struct timespec *interval); } - SYS_UTRACE = 335 // { int utrace(const void *addr, size_t len); } - SYS_KLDSYM = 337 // { int kldsym(int fileid, int cmd, void *data); } - SYS_JAIL = 338 // { int jail(struct jail *jail); } - SYS_SIGPROCMASK = 340 // { int sigprocmask(int how, const sigset_t *set, \ - SYS_SIGSUSPEND = 341 // { int sigsuspend(const sigset_t *sigmask); } - SYS_SIGACTION = 342 // { int sigaction(int sig, const struct sigaction *act, \ - SYS_SIGPENDING = 343 // { int sigpending(sigset_t *set); } - SYS_SIGRETURN = 344 // { int sigreturn(ucontext_t *sigcntxp); } - SYS_SIGTIMEDWAIT = 345 // { int sigtimedwait(const sigset_t *set,\ - SYS_SIGWAITINFO = 346 // { int sigwaitinfo(const sigset_t *set,\ - SYS___ACL_GET_FILE = 347 // { int __acl_get_file(const char *path, \ - SYS___ACL_SET_FILE = 348 // { int __acl_set_file(const char *path, \ - SYS___ACL_GET_FD = 349 // { int __acl_get_fd(int filedes, acl_type_t type, \ - SYS___ACL_SET_FD = 350 // { int __acl_set_fd(int filedes, acl_type_t type, \ - SYS___ACL_DELETE_FILE = 351 // { int __acl_delete_file(const char *path, \ - SYS___ACL_DELETE_FD = 352 // { int __acl_delete_fd(int filedes, acl_type_t type); } - SYS___ACL_ACLCHECK_FILE = 353 // { int __acl_aclcheck_file(const char *path, \ - SYS___ACL_ACLCHECK_FD = 354 // { int __acl_aclcheck_fd(int filedes, acl_type_t type, \ - SYS_EXTATTRCTL = 355 // { int extattrctl(const char *path, int cmd, \ - SYS_EXTATTR_SET_FILE = 356 // { int extattr_set_file(const char *path, \ - SYS_EXTATTR_GET_FILE = 357 // { int extattr_get_file(const char *path, \ - SYS_EXTATTR_DELETE_FILE = 358 // { int extattr_delete_file(const char *path, \ - SYS_AIO_WAITCOMPLETE = 359 // { int aio_waitcomplete(struct aiocb **aiocbp, struct timespec *timeout); } - SYS_GETRESUID = 360 // { int getresuid(uid_t *ruid, uid_t *euid, uid_t *suid); } - SYS_GETRESGID = 361 // { int getresgid(gid_t *rgid, gid_t *egid, gid_t *sgid); } - SYS_KQUEUE = 362 // { int kqueue(void); } - SYS_KEVENT = 363 // { int kevent(int fd, \ - 
SYS_SCTP_PEELOFF = 364 // { int sctp_peeloff(int sd, caddr_t name ); } - SYS_LCHFLAGS = 391 // { int lchflags(char *path, int flags); } - SYS_UUIDGEN = 392 // { int uuidgen(struct uuid *store, int count); } - SYS_SENDFILE = 393 // { int sendfile(int fd, int s, off_t offset, size_t nbytes, \ - SYS_VARSYM_SET = 450 // { int varsym_set(int level, const char *name, const char *data); } - SYS_VARSYM_GET = 451 // { int varsym_get(int mask, const char *wild, char *buf, int bufsize); } - SYS_VARSYM_LIST = 452 // { int varsym_list(int level, char *buf, int maxsize, int *marker); } - SYS_EXEC_SYS_REGISTER = 465 // { int exec_sys_register(void *entry); } - SYS_EXEC_SYS_UNREGISTER = 466 // { int exec_sys_unregister(int id); } - SYS_SYS_CHECKPOINT = 467 // { int sys_checkpoint(int type, int fd, pid_t pid, int retval); } - SYS_MOUNTCTL = 468 // { int mountctl(const char *path, int op, int fd, const void *ctl, int ctllen, void *buf, int buflen); } - SYS_UMTX_SLEEP = 469 // { int umtx_sleep(volatile const int *ptr, int value, int timeout); } - SYS_UMTX_WAKEUP = 470 // { int umtx_wakeup(volatile const int *ptr, int count); } - SYS_JAIL_ATTACH = 471 // { int jail_attach(int jid); } - SYS_SET_TLS_AREA = 472 // { int set_tls_area(int which, struct tls_info *info, size_t infosize); } - SYS_GET_TLS_AREA = 473 // { int get_tls_area(int which, struct tls_info *info, size_t infosize); } - SYS_CLOSEFROM = 474 // { int closefrom(int fd); } - SYS_STAT = 475 // { int stat(const char *path, struct stat *ub); } - SYS_FSTAT = 476 // { int fstat(int fd, struct stat *sb); } - SYS_LSTAT = 477 // { int lstat(const char *path, struct stat *ub); } - SYS_FHSTAT = 478 // { int fhstat(const struct fhandle *u_fhp, struct stat *sb); } - SYS_GETDIRENTRIES = 479 // { int getdirentries(int fd, char *buf, u_int count, \ - SYS_GETDENTS = 480 // { int getdents(int fd, char *buf, size_t count); } - SYS_USCHED_SET = 481 // { int usched_set(pid_t pid, int cmd, void *data, \ - SYS_EXTACCEPT = 482 // { int extaccept(int s, int flags, caddr_t name, int *anamelen); } - SYS_EXTCONNECT = 483 // { int extconnect(int s, int flags, caddr_t name, int namelen); } - SYS_MCONTROL = 485 // { int mcontrol(void *addr, size_t len, int behav, off_t value); } - SYS_VMSPACE_CREATE = 486 // { int vmspace_create(void *id, int type, void *data); } - SYS_VMSPACE_DESTROY = 487 // { int vmspace_destroy(void *id); } - SYS_VMSPACE_CTL = 488 // { int vmspace_ctl(void *id, int cmd, \ - SYS_VMSPACE_MMAP = 489 // { int vmspace_mmap(void *id, void *addr, size_t len, \ - SYS_VMSPACE_MUNMAP = 490 // { int vmspace_munmap(void *id, void *addr, \ - SYS_VMSPACE_MCONTROL = 491 // { int vmspace_mcontrol(void *id, void *addr, \ - SYS_VMSPACE_PREAD = 492 // { ssize_t vmspace_pread(void *id, void *buf, \ - SYS_VMSPACE_PWRITE = 493 // { ssize_t vmspace_pwrite(void *id, const void *buf, \ - SYS_EXTEXIT = 494 // { void extexit(int how, int status, void *addr); } - SYS_LWP_CREATE = 495 // { int lwp_create(struct lwp_params *params); } - SYS_LWP_GETTID = 496 // { lwpid_t lwp_gettid(void); } - SYS_LWP_KILL = 497 // { int lwp_kill(pid_t pid, lwpid_t tid, int signum); } - SYS_LWP_RTPRIO = 498 // { int lwp_rtprio(int function, pid_t pid, lwpid_t tid, struct rtprio *rtp); } - SYS_PSELECT = 499 // { int pselect(int nd, fd_set *in, fd_set *ou, \ - SYS_STATVFS = 500 // { int statvfs(const char *path, struct statvfs *buf); } - SYS_FSTATVFS = 501 // { int fstatvfs(int fd, struct statvfs *buf); } - SYS_FHSTATVFS = 502 // { int fhstatvfs(const struct fhandle *u_fhp, struct statvfs *buf); } - 
SYS_GETVFSSTAT = 503 // { int getvfsstat(struct statfs *buf, \ - SYS_OPENAT = 504 // { int openat(int fd, char *path, int flags, int mode); } - SYS_FSTATAT = 505 // { int fstatat(int fd, char *path, \ - SYS_FCHMODAT = 506 // { int fchmodat(int fd, char *path, int mode, \ - SYS_FCHOWNAT = 507 // { int fchownat(int fd, char *path, int uid, int gid, \ - SYS_UNLINKAT = 508 // { int unlinkat(int fd, char *path, int flags); } - SYS_FACCESSAT = 509 // { int faccessat(int fd, char *path, int amode, \ - SYS_MQ_OPEN = 510 // { mqd_t mq_open(const char * name, int oflag, \ - SYS_MQ_CLOSE = 511 // { int mq_close(mqd_t mqdes); } - SYS_MQ_UNLINK = 512 // { int mq_unlink(const char *name); } - SYS_MQ_GETATTR = 513 // { int mq_getattr(mqd_t mqdes, \ - SYS_MQ_SETATTR = 514 // { int mq_setattr(mqd_t mqdes, \ - SYS_MQ_NOTIFY = 515 // { int mq_notify(mqd_t mqdes, \ - SYS_MQ_SEND = 516 // { int mq_send(mqd_t mqdes, const char *msg_ptr, \ - SYS_MQ_RECEIVE = 517 // { ssize_t mq_receive(mqd_t mqdes, char *msg_ptr, \ - SYS_MQ_TIMEDSEND = 518 // { int mq_timedsend(mqd_t mqdes, \ - SYS_MQ_TIMEDRECEIVE = 519 // { ssize_t mq_timedreceive(mqd_t mqdes, \ - SYS_IOPRIO_SET = 520 // { int ioprio_set(int which, int who, int prio); } - SYS_IOPRIO_GET = 521 // { int ioprio_get(int which, int who); } - SYS_CHROOT_KERNEL = 522 // { int chroot_kernel(char *path); } - SYS_RENAMEAT = 523 // { int renameat(int oldfd, char *old, int newfd, \ - SYS_MKDIRAT = 524 // { int mkdirat(int fd, char *path, mode_t mode); } - SYS_MKFIFOAT = 525 // { int mkfifoat(int fd, char *path, mode_t mode); } - SYS_MKNODAT = 526 // { int mknodat(int fd, char *path, mode_t mode, \ - SYS_READLINKAT = 527 // { int readlinkat(int fd, char *path, char *buf, \ - SYS_SYMLINKAT = 528 // { int symlinkat(char *path1, int fd, char *path2); } - SYS_SWAPOFF = 529 // { int swapoff(char *name); } - SYS_VQUOTACTL = 530 // { int vquotactl(const char *path, \ - SYS_LINKAT = 531 // { int linkat(int fd1, char *path1, int fd2, \ - SYS_EACCESS = 532 // { int eaccess(char *path, int flags); } - SYS_LPATHCONF = 533 // { int lpathconf(char *path, int name); } - SYS_VMM_GUEST_CTL = 534 // { int vmm_guest_ctl(int op, struct vmm_guest_options *options); } - SYS_VMM_GUEST_SYNC_ADDR = 535 // { int vmm_guest_sync_addr(long *dstaddr, long *srcaddr); } -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_386.go deleted file mode 100644 index 262a84536a7..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_386.go +++ /dev/null @@ -1,351 +0,0 @@ -// mksysnum_freebsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build 386,freebsd - -package unix - -const ( - // SYS_NOSYS = 0; // { int nosys(void); } syscall nosys_args int - SYS_EXIT = 1 // { void sys_exit(int rval); } exit \ - SYS_FORK = 2 // { int fork(void); } - SYS_READ = 3 // { ssize_t read(int fd, void *buf, \ - SYS_WRITE = 4 // { ssize_t write(int fd, const void *buf, \ - SYS_OPEN = 5 // { int open(char *path, int flags, int mode); } - SYS_CLOSE = 6 // { int close(int fd); } - SYS_WAIT4 = 7 // { int wait4(int pid, int *status, \ - SYS_LINK = 9 // { int link(char *path, char *link); } - SYS_UNLINK = 10 // { int unlink(char *path); } - SYS_CHDIR = 12 // { int chdir(char *path); } - SYS_FCHDIR = 13 // { int fchdir(int fd); } - SYS_MKNOD = 14 // { int mknod(char *path, int mode, int dev); } - 
SYS_CHMOD = 15 // { int chmod(char *path, int mode); } - SYS_CHOWN = 16 // { int chown(char *path, int uid, int gid); } - SYS_OBREAK = 17 // { int obreak(char *nsize); } break \ - SYS_GETPID = 20 // { pid_t getpid(void); } - SYS_MOUNT = 21 // { int mount(char *type, char *path, \ - SYS_UNMOUNT = 22 // { int unmount(char *path, int flags); } - SYS_SETUID = 23 // { int setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t getuid(void); } - SYS_GETEUID = 25 // { uid_t geteuid(void); } - SYS_PTRACE = 26 // { int ptrace(int req, pid_t pid, \ - SYS_RECVMSG = 27 // { int recvmsg(int s, struct msghdr *msg, \ - SYS_SENDMSG = 28 // { int sendmsg(int s, struct msghdr *msg, \ - SYS_RECVFROM = 29 // { int recvfrom(int s, caddr_t buf, \ - SYS_ACCEPT = 30 // { int accept(int s, \ - SYS_GETPEERNAME = 31 // { int getpeername(int fdes, \ - SYS_GETSOCKNAME = 32 // { int getsockname(int fdes, \ - SYS_ACCESS = 33 // { int access(char *path, int amode); } - SYS_CHFLAGS = 34 // { int chflags(const char *path, u_long flags); } - SYS_FCHFLAGS = 35 // { int fchflags(int fd, u_long flags); } - SYS_SYNC = 36 // { int sync(void); } - SYS_KILL = 37 // { int kill(int pid, int signum); } - SYS_GETPPID = 39 // { pid_t getppid(void); } - SYS_DUP = 41 // { int dup(u_int fd); } - SYS_PIPE = 42 // { int pipe(void); } - SYS_GETEGID = 43 // { gid_t getegid(void); } - SYS_PROFIL = 44 // { int profil(caddr_t samples, size_t size, \ - SYS_KTRACE = 45 // { int ktrace(const char *fname, int ops, \ - SYS_GETGID = 47 // { gid_t getgid(void); } - SYS_GETLOGIN = 49 // { int getlogin(char *namebuf, u_int \ - SYS_SETLOGIN = 50 // { int setlogin(char *namebuf); } - SYS_ACCT = 51 // { int acct(char *path); } - SYS_SIGALTSTACK = 53 // { int sigaltstack(stack_t *ss, \ - SYS_IOCTL = 54 // { int ioctl(int fd, u_long com, \ - SYS_REBOOT = 55 // { int reboot(int opt); } - SYS_REVOKE = 56 // { int revoke(char *path); } - SYS_SYMLINK = 57 // { int symlink(char *path, char *link); } - SYS_READLINK = 58 // { ssize_t readlink(char *path, char *buf, \ - SYS_EXECVE = 59 // { int execve(char *fname, char **argv, \ - SYS_UMASK = 60 // { int umask(int newmask); } umask umask_args \ - SYS_CHROOT = 61 // { int chroot(char *path); } - SYS_MSYNC = 65 // { int msync(void *addr, size_t len, \ - SYS_VFORK = 66 // { int vfork(void); } - SYS_SBRK = 69 // { int sbrk(int incr); } - SYS_SSTK = 70 // { int sstk(int incr); } - SYS_OVADVISE = 72 // { int ovadvise(int anom); } vadvise \ - SYS_MUNMAP = 73 // { int munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int mprotect(const void *addr, size_t len, \ - SYS_MADVISE = 75 // { int madvise(void *addr, size_t len, \ - SYS_MINCORE = 78 // { int mincore(const void *addr, size_t len, \ - SYS_GETGROUPS = 79 // { int getgroups(u_int gidsetsize, \ - SYS_SETGROUPS = 80 // { int setgroups(u_int gidsetsize, \ - SYS_GETPGRP = 81 // { int getpgrp(void); } - SYS_SETPGID = 82 // { int setpgid(int pid, int pgid); } - SYS_SETITIMER = 83 // { int setitimer(u_int which, struct \ - SYS_SWAPON = 85 // { int swapon(char *name); } - SYS_GETITIMER = 86 // { int getitimer(u_int which, \ - SYS_GETDTABLESIZE = 89 // { int getdtablesize(void); } - SYS_DUP2 = 90 // { int dup2(u_int from, u_int to); } - SYS_FCNTL = 92 // { int fcntl(int fd, int cmd, long arg); } - SYS_SELECT = 93 // { int select(int nd, fd_set *in, fd_set *ou, \ - SYS_FSYNC = 95 // { int fsync(int fd); } - SYS_SETPRIORITY = 96 // { int setpriority(int which, int who, \ - SYS_SOCKET = 97 // { int socket(int domain, int type, \ - SYS_CONNECT = 98 // { int connect(int s, caddr_t 
name, \ - SYS_GETPRIORITY = 100 // { int getpriority(int which, int who); } - SYS_BIND = 104 // { int bind(int s, caddr_t name, \ - SYS_SETSOCKOPT = 105 // { int setsockopt(int s, int level, int name, \ - SYS_LISTEN = 106 // { int listen(int s, int backlog); } - SYS_GETTIMEOFDAY = 116 // { int gettimeofday(struct timeval *tp, \ - SYS_GETRUSAGE = 117 // { int getrusage(int who, \ - SYS_GETSOCKOPT = 118 // { int getsockopt(int s, int level, int name, \ - SYS_READV = 120 // { int readv(int fd, struct iovec *iovp, \ - SYS_WRITEV = 121 // { int writev(int fd, struct iovec *iovp, \ - SYS_SETTIMEOFDAY = 122 // { int settimeofday(struct timeval *tv, \ - SYS_FCHOWN = 123 // { int fchown(int fd, int uid, int gid); } - SYS_FCHMOD = 124 // { int fchmod(int fd, int mode); } - SYS_SETREUID = 126 // { int setreuid(int ruid, int euid); } - SYS_SETREGID = 127 // { int setregid(int rgid, int egid); } - SYS_RENAME = 128 // { int rename(char *from, char *to); } - SYS_FLOCK = 131 // { int flock(int fd, int how); } - SYS_MKFIFO = 132 // { int mkfifo(char *path, int mode); } - SYS_SENDTO = 133 // { int sendto(int s, caddr_t buf, size_t len, \ - SYS_SHUTDOWN = 134 // { int shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int socketpair(int domain, int type, \ - SYS_MKDIR = 136 // { int mkdir(char *path, int mode); } - SYS_RMDIR = 137 // { int rmdir(char *path); } - SYS_UTIMES = 138 // { int utimes(char *path, \ - SYS_ADJTIME = 140 // { int adjtime(struct timeval *delta, \ - SYS_SETSID = 147 // { int setsid(void); } - SYS_QUOTACTL = 148 // { int quotactl(char *path, int cmd, int uid, \ - SYS_LGETFH = 160 // { int lgetfh(char *fname, \ - SYS_GETFH = 161 // { int getfh(char *fname, \ - SYS_SYSARCH = 165 // { int sysarch(int op, char *parms); } - SYS_RTPRIO = 166 // { int rtprio(int function, pid_t pid, \ - SYS_FREEBSD6_PREAD = 173 // { ssize_t freebsd6_pread(int fd, void *buf, \ - SYS_FREEBSD6_PWRITE = 174 // { ssize_t freebsd6_pwrite(int fd, \ - SYS_SETFIB = 175 // { int setfib(int fibnum); } - SYS_NTP_ADJTIME = 176 // { int ntp_adjtime(struct timex *tp); } - SYS_SETGID = 181 // { int setgid(gid_t gid); } - SYS_SETEGID = 182 // { int setegid(gid_t egid); } - SYS_SETEUID = 183 // { int seteuid(uid_t euid); } - SYS_STAT = 188 // { int stat(char *path, struct stat *ub); } - SYS_FSTAT = 189 // { int fstat(int fd, struct stat *sb); } - SYS_LSTAT = 190 // { int lstat(char *path, struct stat *ub); } - SYS_PATHCONF = 191 // { int pathconf(char *path, int name); } - SYS_FPATHCONF = 192 // { int fpathconf(int fd, int name); } - SYS_GETRLIMIT = 194 // { int getrlimit(u_int which, \ - SYS_SETRLIMIT = 195 // { int setrlimit(u_int which, \ - SYS_GETDIRENTRIES = 196 // { int getdirentries(int fd, char *buf, \ - SYS_FREEBSD6_MMAP = 197 // { caddr_t freebsd6_mmap(caddr_t addr, \ - SYS_FREEBSD6_LSEEK = 199 // { off_t freebsd6_lseek(int fd, int pad, \ - SYS_FREEBSD6_TRUNCATE = 200 // { int freebsd6_truncate(char *path, int pad, \ - SYS_FREEBSD6_FTRUNCATE = 201 // { int freebsd6_ftruncate(int fd, int pad, \ - SYS___SYSCTL = 202 // { int __sysctl(int *name, u_int namelen, \ - SYS_MLOCK = 203 // { int mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int munlock(const void *addr, size_t len); } - SYS_UNDELETE = 205 // { int undelete(char *path); } - SYS_FUTIMES = 206 // { int futimes(int fd, struct timeval *tptr); } - SYS_GETPGID = 207 // { int getpgid(pid_t pid); } - SYS_POLL = 209 // { int poll(struct pollfd *fds, u_int nfds, \ - SYS_CLOCK_GETTIME = 232 // { int clock_gettime(clockid_t clock_id, \ - 
SYS_CLOCK_SETTIME = 233 // { int clock_settime( \ - SYS_CLOCK_GETRES = 234 // { int clock_getres(clockid_t clock_id, \ - SYS_KTIMER_CREATE = 235 // { int ktimer_create(clockid_t clock_id, \ - SYS_KTIMER_DELETE = 236 // { int ktimer_delete(int timerid); } - SYS_KTIMER_SETTIME = 237 // { int ktimer_settime(int timerid, int flags, \ - SYS_KTIMER_GETTIME = 238 // { int ktimer_gettime(int timerid, struct \ - SYS_KTIMER_GETOVERRUN = 239 // { int ktimer_getoverrun(int timerid); } - SYS_NANOSLEEP = 240 // { int nanosleep(const struct timespec *rqtp, \ - SYS_FFCLOCK_GETCOUNTER = 241 // { int ffclock_getcounter(ffcounter *ffcount); } - SYS_FFCLOCK_SETESTIMATE = 242 // { int ffclock_setestimate( \ - SYS_FFCLOCK_GETESTIMATE = 243 // { int ffclock_getestimate( \ - SYS_CLOCK_GETCPUCLOCKID2 = 247 // { int clock_getcpuclockid2(id_t id,\ - SYS_NTP_GETTIME = 248 // { int ntp_gettime(struct ntptimeval *ntvp); } - SYS_MINHERIT = 250 // { int minherit(void *addr, size_t len, \ - SYS_RFORK = 251 // { int rfork(int flags); } - SYS_OPENBSD_POLL = 252 // { int openbsd_poll(struct pollfd *fds, \ - SYS_ISSETUGID = 253 // { int issetugid(void); } - SYS_LCHOWN = 254 // { int lchown(char *path, int uid, int gid); } - SYS_GETDENTS = 272 // { int getdents(int fd, char *buf, \ - SYS_LCHMOD = 274 // { int lchmod(char *path, mode_t mode); } - SYS_LUTIMES = 276 // { int lutimes(char *path, \ - SYS_NSTAT = 278 // { int nstat(char *path, struct nstat *ub); } - SYS_NFSTAT = 279 // { int nfstat(int fd, struct nstat *sb); } - SYS_NLSTAT = 280 // { int nlstat(char *path, struct nstat *ub); } - SYS_PREADV = 289 // { ssize_t preadv(int fd, struct iovec *iovp, \ - SYS_PWRITEV = 290 // { ssize_t pwritev(int fd, struct iovec *iovp, \ - SYS_FHOPEN = 298 // { int fhopen(const struct fhandle *u_fhp, \ - SYS_FHSTAT = 299 // { int fhstat(const struct fhandle *u_fhp, \ - SYS_MODNEXT = 300 // { int modnext(int modid); } - SYS_MODSTAT = 301 // { int modstat(int modid, \ - SYS_MODFNEXT = 302 // { int modfnext(int modid); } - SYS_MODFIND = 303 // { int modfind(const char *name); } - SYS_KLDLOAD = 304 // { int kldload(const char *file); } - SYS_KLDUNLOAD = 305 // { int kldunload(int fileid); } - SYS_KLDFIND = 306 // { int kldfind(const char *file); } - SYS_KLDNEXT = 307 // { int kldnext(int fileid); } - SYS_KLDSTAT = 308 // { int kldstat(int fileid, struct \ - SYS_KLDFIRSTMOD = 309 // { int kldfirstmod(int fileid); } - SYS_GETSID = 310 // { int getsid(pid_t pid); } - SYS_SETRESUID = 311 // { int setresuid(uid_t ruid, uid_t euid, \ - SYS_SETRESGID = 312 // { int setresgid(gid_t rgid, gid_t egid, \ - SYS_YIELD = 321 // { int yield(void); } - SYS_MLOCKALL = 324 // { int mlockall(int how); } - SYS_MUNLOCKALL = 325 // { int munlockall(void); } - SYS___GETCWD = 326 // { int __getcwd(char *buf, u_int buflen); } - SYS_SCHED_SETPARAM = 327 // { int sched_setparam (pid_t pid, \ - SYS_SCHED_GETPARAM = 328 // { int sched_getparam (pid_t pid, struct \ - SYS_SCHED_SETSCHEDULER = 329 // { int sched_setscheduler (pid_t pid, int \ - SYS_SCHED_GETSCHEDULER = 330 // { int sched_getscheduler (pid_t pid); } - SYS_SCHED_YIELD = 331 // { int sched_yield (void); } - SYS_SCHED_GET_PRIORITY_MAX = 332 // { int sched_get_priority_max (int policy); } - SYS_SCHED_GET_PRIORITY_MIN = 333 // { int sched_get_priority_min (int policy); } - SYS_SCHED_RR_GET_INTERVAL = 334 // { int sched_rr_get_interval (pid_t pid, \ - SYS_UTRACE = 335 // { int utrace(const void *addr, size_t len); } - SYS_KLDSYM = 337 // { int kldsym(int fileid, int cmd, \ - SYS_JAIL = 338 // { int jail(struct jail 
*jail); } - SYS_SIGPROCMASK = 340 // { int sigprocmask(int how, \ - SYS_SIGSUSPEND = 341 // { int sigsuspend(const sigset_t *sigmask); } - SYS_SIGPENDING = 343 // { int sigpending(sigset_t *set); } - SYS_SIGTIMEDWAIT = 345 // { int sigtimedwait(const sigset_t *set, \ - SYS_SIGWAITINFO = 346 // { int sigwaitinfo(const sigset_t *set, \ - SYS___ACL_GET_FILE = 347 // { int __acl_get_file(const char *path, \ - SYS___ACL_SET_FILE = 348 // { int __acl_set_file(const char *path, \ - SYS___ACL_GET_FD = 349 // { int __acl_get_fd(int filedes, \ - SYS___ACL_SET_FD = 350 // { int __acl_set_fd(int filedes, \ - SYS___ACL_DELETE_FILE = 351 // { int __acl_delete_file(const char *path, \ - SYS___ACL_DELETE_FD = 352 // { int __acl_delete_fd(int filedes, \ - SYS___ACL_ACLCHECK_FILE = 353 // { int __acl_aclcheck_file(const char *path, \ - SYS___ACL_ACLCHECK_FD = 354 // { int __acl_aclcheck_fd(int filedes, \ - SYS_EXTATTRCTL = 355 // { int extattrctl(const char *path, int cmd, \ - SYS_EXTATTR_SET_FILE = 356 // { ssize_t extattr_set_file( \ - SYS_EXTATTR_GET_FILE = 357 // { ssize_t extattr_get_file( \ - SYS_EXTATTR_DELETE_FILE = 358 // { int extattr_delete_file(const char *path, \ - SYS_GETRESUID = 360 // { int getresuid(uid_t *ruid, uid_t *euid, \ - SYS_GETRESGID = 361 // { int getresgid(gid_t *rgid, gid_t *egid, \ - SYS_KQUEUE = 362 // { int kqueue(void); } - SYS_KEVENT = 363 // { int kevent(int fd, \ - SYS_EXTATTR_SET_FD = 371 // { ssize_t extattr_set_fd(int fd, \ - SYS_EXTATTR_GET_FD = 372 // { ssize_t extattr_get_fd(int fd, \ - SYS_EXTATTR_DELETE_FD = 373 // { int extattr_delete_fd(int fd, \ - SYS___SETUGID = 374 // { int __setugid(int flag); } - SYS_EACCESS = 376 // { int eaccess(char *path, int amode); } - SYS_NMOUNT = 378 // { int nmount(struct iovec *iovp, \ - SYS___MAC_GET_PROC = 384 // { int __mac_get_proc(struct mac *mac_p); } - SYS___MAC_SET_PROC = 385 // { int __mac_set_proc(struct mac *mac_p); } - SYS___MAC_GET_FD = 386 // { int __mac_get_fd(int fd, \ - SYS___MAC_GET_FILE = 387 // { int __mac_get_file(const char *path_p, \ - SYS___MAC_SET_FD = 388 // { int __mac_set_fd(int fd, \ - SYS___MAC_SET_FILE = 389 // { int __mac_set_file(const char *path_p, \ - SYS_KENV = 390 // { int kenv(int what, const char *name, \ - SYS_LCHFLAGS = 391 // { int lchflags(const char *path, \ - SYS_UUIDGEN = 392 // { int uuidgen(struct uuid *store, \ - SYS_SENDFILE = 393 // { int sendfile(int fd, int s, off_t offset, \ - SYS_MAC_SYSCALL = 394 // { int mac_syscall(const char *policy, \ - SYS_GETFSSTAT = 395 // { int getfsstat(struct statfs *buf, \ - SYS_STATFS = 396 // { int statfs(char *path, \ - SYS_FSTATFS = 397 // { int fstatfs(int fd, struct statfs *buf); } - SYS_FHSTATFS = 398 // { int fhstatfs(const struct fhandle *u_fhp, \ - SYS___MAC_GET_PID = 409 // { int __mac_get_pid(pid_t pid, \ - SYS___MAC_GET_LINK = 410 // { int __mac_get_link(const char *path_p, \ - SYS___MAC_SET_LINK = 411 // { int __mac_set_link(const char *path_p, \ - SYS_EXTATTR_SET_LINK = 412 // { ssize_t extattr_set_link( \ - SYS_EXTATTR_GET_LINK = 413 // { ssize_t extattr_get_link( \ - SYS_EXTATTR_DELETE_LINK = 414 // { int extattr_delete_link( \ - SYS___MAC_EXECVE = 415 // { int __mac_execve(char *fname, char **argv, \ - SYS_SIGACTION = 416 // { int sigaction(int sig, \ - SYS_SIGRETURN = 417 // { int sigreturn( \ - SYS_GETCONTEXT = 421 // { int getcontext(struct __ucontext *ucp); } - SYS_SETCONTEXT = 422 // { int setcontext( \ - SYS_SWAPCONTEXT = 423 // { int swapcontext(struct __ucontext *oucp, \ - SYS_SWAPOFF = 424 // { int swapoff(const char 
*name); } - SYS___ACL_GET_LINK = 425 // { int __acl_get_link(const char *path, \ - SYS___ACL_SET_LINK = 426 // { int __acl_set_link(const char *path, \ - SYS___ACL_DELETE_LINK = 427 // { int __acl_delete_link(const char *path, \ - SYS___ACL_ACLCHECK_LINK = 428 // { int __acl_aclcheck_link(const char *path, \ - SYS_SIGWAIT = 429 // { int sigwait(const sigset_t *set, \ - SYS_THR_CREATE = 430 // { int thr_create(ucontext_t *ctx, long *id, \ - SYS_THR_EXIT = 431 // { void thr_exit(long *state); } - SYS_THR_SELF = 432 // { int thr_self(long *id); } - SYS_THR_KILL = 433 // { int thr_kill(long id, int sig); } - SYS__UMTX_LOCK = 434 // { int _umtx_lock(struct umtx *umtx); } - SYS__UMTX_UNLOCK = 435 // { int _umtx_unlock(struct umtx *umtx); } - SYS_JAIL_ATTACH = 436 // { int jail_attach(int jid); } - SYS_EXTATTR_LIST_FD = 437 // { ssize_t extattr_list_fd(int fd, \ - SYS_EXTATTR_LIST_FILE = 438 // { ssize_t extattr_list_file( \ - SYS_EXTATTR_LIST_LINK = 439 // { ssize_t extattr_list_link( \ - SYS_THR_SUSPEND = 442 // { int thr_suspend( \ - SYS_THR_WAKE = 443 // { int thr_wake(long id); } - SYS_KLDUNLOADF = 444 // { int kldunloadf(int fileid, int flags); } - SYS_AUDIT = 445 // { int audit(const void *record, \ - SYS_AUDITON = 446 // { int auditon(int cmd, void *data, \ - SYS_GETAUID = 447 // { int getauid(uid_t *auid); } - SYS_SETAUID = 448 // { int setauid(uid_t *auid); } - SYS_GETAUDIT = 449 // { int getaudit(struct auditinfo *auditinfo); } - SYS_SETAUDIT = 450 // { int setaudit(struct auditinfo *auditinfo); } - SYS_GETAUDIT_ADDR = 451 // { int getaudit_addr( \ - SYS_SETAUDIT_ADDR = 452 // { int setaudit_addr( \ - SYS_AUDITCTL = 453 // { int auditctl(char *path); } - SYS__UMTX_OP = 454 // { int _umtx_op(void *obj, int op, \ - SYS_THR_NEW = 455 // { int thr_new(struct thr_param *param, \ - SYS_SIGQUEUE = 456 // { int sigqueue(pid_t pid, int signum, void *value); } - SYS_ABORT2 = 463 // { int abort2(const char *why, int nargs, void **args); } - SYS_THR_SET_NAME = 464 // { int thr_set_name(long id, const char *name); } - SYS_RTPRIO_THREAD = 466 // { int rtprio_thread(int function, \ - SYS_SCTP_PEELOFF = 471 // { int sctp_peeloff(int sd, uint32_t name); } - SYS_SCTP_GENERIC_SENDMSG = 472 // { int sctp_generic_sendmsg(int sd, caddr_t msg, int mlen, \ - SYS_SCTP_GENERIC_SENDMSG_IOV = 473 // { int sctp_generic_sendmsg_iov(int sd, struct iovec *iov, int iovlen, \ - SYS_SCTP_GENERIC_RECVMSG = 474 // { int sctp_generic_recvmsg(int sd, struct iovec *iov, int iovlen, \ - SYS_PREAD = 475 // { ssize_t pread(int fd, void *buf, \ - SYS_PWRITE = 476 // { ssize_t pwrite(int fd, const void *buf, \ - SYS_MMAP = 477 // { caddr_t mmap(caddr_t addr, size_t len, \ - SYS_LSEEK = 478 // { off_t lseek(int fd, off_t offset, \ - SYS_TRUNCATE = 479 // { int truncate(char *path, off_t length); } - SYS_FTRUNCATE = 480 // { int ftruncate(int fd, off_t length); } - SYS_THR_KILL2 = 481 // { int thr_kill2(pid_t pid, long id, int sig); } - SYS_SHM_OPEN = 482 // { int shm_open(const char *path, int flags, \ - SYS_SHM_UNLINK = 483 // { int shm_unlink(const char *path); } - SYS_CPUSET = 484 // { int cpuset(cpusetid_t *setid); } - SYS_CPUSET_SETID = 485 // { int cpuset_setid(cpuwhich_t which, id_t id, \ - SYS_CPUSET_GETID = 486 // { int cpuset_getid(cpulevel_t level, \ - SYS_CPUSET_GETAFFINITY = 487 // { int cpuset_getaffinity(cpulevel_t level, \ - SYS_CPUSET_SETAFFINITY = 488 // { int cpuset_setaffinity(cpulevel_t level, \ - SYS_FACCESSAT = 489 // { int faccessat(int fd, char *path, int amode, \ - SYS_FCHMODAT = 490 // { int fchmodat(int 
fd, char *path, mode_t mode, \ - SYS_FCHOWNAT = 491 // { int fchownat(int fd, char *path, uid_t uid, \ - SYS_FEXECVE = 492 // { int fexecve(int fd, char **argv, \ - SYS_FSTATAT = 493 // { int fstatat(int fd, char *path, \ - SYS_FUTIMESAT = 494 // { int futimesat(int fd, char *path, \ - SYS_LINKAT = 495 // { int linkat(int fd1, char *path1, int fd2, \ - SYS_MKDIRAT = 496 // { int mkdirat(int fd, char *path, mode_t mode); } - SYS_MKFIFOAT = 497 // { int mkfifoat(int fd, char *path, mode_t mode); } - SYS_MKNODAT = 498 // { int mknodat(int fd, char *path, mode_t mode, \ - SYS_OPENAT = 499 // { int openat(int fd, char *path, int flag, \ - SYS_READLINKAT = 500 // { int readlinkat(int fd, char *path, char *buf, \ - SYS_RENAMEAT = 501 // { int renameat(int oldfd, char *old, int newfd, \ - SYS_SYMLINKAT = 502 // { int symlinkat(char *path1, int fd, \ - SYS_UNLINKAT = 503 // { int unlinkat(int fd, char *path, int flag); } - SYS_POSIX_OPENPT = 504 // { int posix_openpt(int flags); } - SYS_JAIL_GET = 506 // { int jail_get(struct iovec *iovp, \ - SYS_JAIL_SET = 507 // { int jail_set(struct iovec *iovp, \ - SYS_JAIL_REMOVE = 508 // { int jail_remove(int jid); } - SYS_CLOSEFROM = 509 // { int closefrom(int lowfd); } - SYS_LPATHCONF = 513 // { int lpathconf(char *path, int name); } - SYS_CAP_NEW = 514 // { int cap_new(int fd, uint64_t rights); } - SYS_CAP_GETRIGHTS = 515 // { int cap_getrights(int fd, \ - SYS_CAP_ENTER = 516 // { int cap_enter(void); } - SYS_CAP_GETMODE = 517 // { int cap_getmode(u_int *modep); } - SYS_PDFORK = 518 // { int pdfork(int *fdp, int flags); } - SYS_PDKILL = 519 // { int pdkill(int fd, int signum); } - SYS_PDGETPID = 520 // { int pdgetpid(int fd, pid_t *pidp); } - SYS_PSELECT = 522 // { int pselect(int nd, fd_set *in, \ - SYS_GETLOGINCLASS = 523 // { int getloginclass(char *namebuf, \ - SYS_SETLOGINCLASS = 524 // { int setloginclass(const char *namebuf); } - SYS_RCTL_GET_RACCT = 525 // { int rctl_get_racct(const void *inbufp, \ - SYS_RCTL_GET_RULES = 526 // { int rctl_get_rules(const void *inbufp, \ - SYS_RCTL_GET_LIMITS = 527 // { int rctl_get_limits(const void *inbufp, \ - SYS_RCTL_ADD_RULE = 528 // { int rctl_add_rule(const void *inbufp, \ - SYS_RCTL_REMOVE_RULE = 529 // { int rctl_remove_rule(const void *inbufp, \ - SYS_POSIX_FALLOCATE = 530 // { int posix_fallocate(int fd, \ - SYS_POSIX_FADVISE = 531 // { int posix_fadvise(int fd, off_t offset, \ - SYS_WAIT6 = 532 // { int wait6(idtype_t idtype, id_t id, \ - SYS_BINDAT = 538 // { int bindat(int fd, int s, caddr_t name, \ - SYS_CONNECTAT = 539 // { int connectat(int fd, int s, caddr_t name, \ - SYS_CHFLAGSAT = 540 // { int chflagsat(int fd, const char *path, \ - SYS_ACCEPT4 = 541 // { int accept4(int s, \ - SYS_PIPE2 = 542 // { int pipe2(int *fildes, int flags); } - SYS_PROCCTL = 544 // { int procctl(idtype_t idtype, id_t id, \ - SYS_PPOLL = 545 // { int ppoll(struct pollfd *fds, u_int nfds, \ -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_amd64.go deleted file mode 100644 index 57a60ea126d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_amd64.go +++ /dev/null @@ -1,351 +0,0 @@ -// mksysnum_freebsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build amd64,freebsd - -package unix - -const ( - // SYS_NOSYS = 0; // { int nosys(void); } syscall nosys_args int - SYS_EXIT = 1 
// { void sys_exit(int rval); } exit \ - SYS_FORK = 2 // { int fork(void); } - SYS_READ = 3 // { ssize_t read(int fd, void *buf, \ - SYS_WRITE = 4 // { ssize_t write(int fd, const void *buf, \ - SYS_OPEN = 5 // { int open(char *path, int flags, int mode); } - SYS_CLOSE = 6 // { int close(int fd); } - SYS_WAIT4 = 7 // { int wait4(int pid, int *status, \ - SYS_LINK = 9 // { int link(char *path, char *link); } - SYS_UNLINK = 10 // { int unlink(char *path); } - SYS_CHDIR = 12 // { int chdir(char *path); } - SYS_FCHDIR = 13 // { int fchdir(int fd); } - SYS_MKNOD = 14 // { int mknod(char *path, int mode, int dev); } - SYS_CHMOD = 15 // { int chmod(char *path, int mode); } - SYS_CHOWN = 16 // { int chown(char *path, int uid, int gid); } - SYS_OBREAK = 17 // { int obreak(char *nsize); } break \ - SYS_GETPID = 20 // { pid_t getpid(void); } - SYS_MOUNT = 21 // { int mount(char *type, char *path, \ - SYS_UNMOUNT = 22 // { int unmount(char *path, int flags); } - SYS_SETUID = 23 // { int setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t getuid(void); } - SYS_GETEUID = 25 // { uid_t geteuid(void); } - SYS_PTRACE = 26 // { int ptrace(int req, pid_t pid, \ - SYS_RECVMSG = 27 // { int recvmsg(int s, struct msghdr *msg, \ - SYS_SENDMSG = 28 // { int sendmsg(int s, struct msghdr *msg, \ - SYS_RECVFROM = 29 // { int recvfrom(int s, caddr_t buf, \ - SYS_ACCEPT = 30 // { int accept(int s, \ - SYS_GETPEERNAME = 31 // { int getpeername(int fdes, \ - SYS_GETSOCKNAME = 32 // { int getsockname(int fdes, \ - SYS_ACCESS = 33 // { int access(char *path, int amode); } - SYS_CHFLAGS = 34 // { int chflags(const char *path, u_long flags); } - SYS_FCHFLAGS = 35 // { int fchflags(int fd, u_long flags); } - SYS_SYNC = 36 // { int sync(void); } - SYS_KILL = 37 // { int kill(int pid, int signum); } - SYS_GETPPID = 39 // { pid_t getppid(void); } - SYS_DUP = 41 // { int dup(u_int fd); } - SYS_PIPE = 42 // { int pipe(void); } - SYS_GETEGID = 43 // { gid_t getegid(void); } - SYS_PROFIL = 44 // { int profil(caddr_t samples, size_t size, \ - SYS_KTRACE = 45 // { int ktrace(const char *fname, int ops, \ - SYS_GETGID = 47 // { gid_t getgid(void); } - SYS_GETLOGIN = 49 // { int getlogin(char *namebuf, u_int \ - SYS_SETLOGIN = 50 // { int setlogin(char *namebuf); } - SYS_ACCT = 51 // { int acct(char *path); } - SYS_SIGALTSTACK = 53 // { int sigaltstack(stack_t *ss, \ - SYS_IOCTL = 54 // { int ioctl(int fd, u_long com, \ - SYS_REBOOT = 55 // { int reboot(int opt); } - SYS_REVOKE = 56 // { int revoke(char *path); } - SYS_SYMLINK = 57 // { int symlink(char *path, char *link); } - SYS_READLINK = 58 // { ssize_t readlink(char *path, char *buf, \ - SYS_EXECVE = 59 // { int execve(char *fname, char **argv, \ - SYS_UMASK = 60 // { int umask(int newmask); } umask umask_args \ - SYS_CHROOT = 61 // { int chroot(char *path); } - SYS_MSYNC = 65 // { int msync(void *addr, size_t len, \ - SYS_VFORK = 66 // { int vfork(void); } - SYS_SBRK = 69 // { int sbrk(int incr); } - SYS_SSTK = 70 // { int sstk(int incr); } - SYS_OVADVISE = 72 // { int ovadvise(int anom); } vadvise \ - SYS_MUNMAP = 73 // { int munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int mprotect(const void *addr, size_t len, \ - SYS_MADVISE = 75 // { int madvise(void *addr, size_t len, \ - SYS_MINCORE = 78 // { int mincore(const void *addr, size_t len, \ - SYS_GETGROUPS = 79 // { int getgroups(u_int gidsetsize, \ - SYS_SETGROUPS = 80 // { int setgroups(u_int gidsetsize, \ - SYS_GETPGRP = 81 // { int getpgrp(void); } - SYS_SETPGID = 82 // { int setpgid(int pid, int pgid); } - 
SYS_SETITIMER = 83 // { int setitimer(u_int which, struct \ - SYS_SWAPON = 85 // { int swapon(char *name); } - SYS_GETITIMER = 86 // { int getitimer(u_int which, \ - SYS_GETDTABLESIZE = 89 // { int getdtablesize(void); } - SYS_DUP2 = 90 // { int dup2(u_int from, u_int to); } - SYS_FCNTL = 92 // { int fcntl(int fd, int cmd, long arg); } - SYS_SELECT = 93 // { int select(int nd, fd_set *in, fd_set *ou, \ - SYS_FSYNC = 95 // { int fsync(int fd); } - SYS_SETPRIORITY = 96 // { int setpriority(int which, int who, \ - SYS_SOCKET = 97 // { int socket(int domain, int type, \ - SYS_CONNECT = 98 // { int connect(int s, caddr_t name, \ - SYS_GETPRIORITY = 100 // { int getpriority(int which, int who); } - SYS_BIND = 104 // { int bind(int s, caddr_t name, \ - SYS_SETSOCKOPT = 105 // { int setsockopt(int s, int level, int name, \ - SYS_LISTEN = 106 // { int listen(int s, int backlog); } - SYS_GETTIMEOFDAY = 116 // { int gettimeofday(struct timeval *tp, \ - SYS_GETRUSAGE = 117 // { int getrusage(int who, \ - SYS_GETSOCKOPT = 118 // { int getsockopt(int s, int level, int name, \ - SYS_READV = 120 // { int readv(int fd, struct iovec *iovp, \ - SYS_WRITEV = 121 // { int writev(int fd, struct iovec *iovp, \ - SYS_SETTIMEOFDAY = 122 // { int settimeofday(struct timeval *tv, \ - SYS_FCHOWN = 123 // { int fchown(int fd, int uid, int gid); } - SYS_FCHMOD = 124 // { int fchmod(int fd, int mode); } - SYS_SETREUID = 126 // { int setreuid(int ruid, int euid); } - SYS_SETREGID = 127 // { int setregid(int rgid, int egid); } - SYS_RENAME = 128 // { int rename(char *from, char *to); } - SYS_FLOCK = 131 // { int flock(int fd, int how); } - SYS_MKFIFO = 132 // { int mkfifo(char *path, int mode); } - SYS_SENDTO = 133 // { int sendto(int s, caddr_t buf, size_t len, \ - SYS_SHUTDOWN = 134 // { int shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int socketpair(int domain, int type, \ - SYS_MKDIR = 136 // { int mkdir(char *path, int mode); } - SYS_RMDIR = 137 // { int rmdir(char *path); } - SYS_UTIMES = 138 // { int utimes(char *path, \ - SYS_ADJTIME = 140 // { int adjtime(struct timeval *delta, \ - SYS_SETSID = 147 // { int setsid(void); } - SYS_QUOTACTL = 148 // { int quotactl(char *path, int cmd, int uid, \ - SYS_LGETFH = 160 // { int lgetfh(char *fname, \ - SYS_GETFH = 161 // { int getfh(char *fname, \ - SYS_SYSARCH = 165 // { int sysarch(int op, char *parms); } - SYS_RTPRIO = 166 // { int rtprio(int function, pid_t pid, \ - SYS_FREEBSD6_PREAD = 173 // { ssize_t freebsd6_pread(int fd, void *buf, \ - SYS_FREEBSD6_PWRITE = 174 // { ssize_t freebsd6_pwrite(int fd, \ - SYS_SETFIB = 175 // { int setfib(int fibnum); } - SYS_NTP_ADJTIME = 176 // { int ntp_adjtime(struct timex *tp); } - SYS_SETGID = 181 // { int setgid(gid_t gid); } - SYS_SETEGID = 182 // { int setegid(gid_t egid); } - SYS_SETEUID = 183 // { int seteuid(uid_t euid); } - SYS_STAT = 188 // { int stat(char *path, struct stat *ub); } - SYS_FSTAT = 189 // { int fstat(int fd, struct stat *sb); } - SYS_LSTAT = 190 // { int lstat(char *path, struct stat *ub); } - SYS_PATHCONF = 191 // { int pathconf(char *path, int name); } - SYS_FPATHCONF = 192 // { int fpathconf(int fd, int name); } - SYS_GETRLIMIT = 194 // { int getrlimit(u_int which, \ - SYS_SETRLIMIT = 195 // { int setrlimit(u_int which, \ - SYS_GETDIRENTRIES = 196 // { int getdirentries(int fd, char *buf, \ - SYS_FREEBSD6_MMAP = 197 // { caddr_t freebsd6_mmap(caddr_t addr, \ - SYS_FREEBSD6_LSEEK = 199 // { off_t freebsd6_lseek(int fd, int pad, \ - SYS_FREEBSD6_TRUNCATE = 200 // { int freebsd6_truncate(char 
*path, int pad, \ - SYS_FREEBSD6_FTRUNCATE = 201 // { int freebsd6_ftruncate(int fd, int pad, \ - SYS___SYSCTL = 202 // { int __sysctl(int *name, u_int namelen, \ - SYS_MLOCK = 203 // { int mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int munlock(const void *addr, size_t len); } - SYS_UNDELETE = 205 // { int undelete(char *path); } - SYS_FUTIMES = 206 // { int futimes(int fd, struct timeval *tptr); } - SYS_GETPGID = 207 // { int getpgid(pid_t pid); } - SYS_POLL = 209 // { int poll(struct pollfd *fds, u_int nfds, \ - SYS_CLOCK_GETTIME = 232 // { int clock_gettime(clockid_t clock_id, \ - SYS_CLOCK_SETTIME = 233 // { int clock_settime( \ - SYS_CLOCK_GETRES = 234 // { int clock_getres(clockid_t clock_id, \ - SYS_KTIMER_CREATE = 235 // { int ktimer_create(clockid_t clock_id, \ - SYS_KTIMER_DELETE = 236 // { int ktimer_delete(int timerid); } - SYS_KTIMER_SETTIME = 237 // { int ktimer_settime(int timerid, int flags, \ - SYS_KTIMER_GETTIME = 238 // { int ktimer_gettime(int timerid, struct \ - SYS_KTIMER_GETOVERRUN = 239 // { int ktimer_getoverrun(int timerid); } - SYS_NANOSLEEP = 240 // { int nanosleep(const struct timespec *rqtp, \ - SYS_FFCLOCK_GETCOUNTER = 241 // { int ffclock_getcounter(ffcounter *ffcount); } - SYS_FFCLOCK_SETESTIMATE = 242 // { int ffclock_setestimate( \ - SYS_FFCLOCK_GETESTIMATE = 243 // { int ffclock_getestimate( \ - SYS_CLOCK_GETCPUCLOCKID2 = 247 // { int clock_getcpuclockid2(id_t id,\ - SYS_NTP_GETTIME = 248 // { int ntp_gettime(struct ntptimeval *ntvp); } - SYS_MINHERIT = 250 // { int minherit(void *addr, size_t len, \ - SYS_RFORK = 251 // { int rfork(int flags); } - SYS_OPENBSD_POLL = 252 // { int openbsd_poll(struct pollfd *fds, \ - SYS_ISSETUGID = 253 // { int issetugid(void); } - SYS_LCHOWN = 254 // { int lchown(char *path, int uid, int gid); } - SYS_GETDENTS = 272 // { int getdents(int fd, char *buf, \ - SYS_LCHMOD = 274 // { int lchmod(char *path, mode_t mode); } - SYS_LUTIMES = 276 // { int lutimes(char *path, \ - SYS_NSTAT = 278 // { int nstat(char *path, struct nstat *ub); } - SYS_NFSTAT = 279 // { int nfstat(int fd, struct nstat *sb); } - SYS_NLSTAT = 280 // { int nlstat(char *path, struct nstat *ub); } - SYS_PREADV = 289 // { ssize_t preadv(int fd, struct iovec *iovp, \ - SYS_PWRITEV = 290 // { ssize_t pwritev(int fd, struct iovec *iovp, \ - SYS_FHOPEN = 298 // { int fhopen(const struct fhandle *u_fhp, \ - SYS_FHSTAT = 299 // { int fhstat(const struct fhandle *u_fhp, \ - SYS_MODNEXT = 300 // { int modnext(int modid); } - SYS_MODSTAT = 301 // { int modstat(int modid, \ - SYS_MODFNEXT = 302 // { int modfnext(int modid); } - SYS_MODFIND = 303 // { int modfind(const char *name); } - SYS_KLDLOAD = 304 // { int kldload(const char *file); } - SYS_KLDUNLOAD = 305 // { int kldunload(int fileid); } - SYS_KLDFIND = 306 // { int kldfind(const char *file); } - SYS_KLDNEXT = 307 // { int kldnext(int fileid); } - SYS_KLDSTAT = 308 // { int kldstat(int fileid, struct \ - SYS_KLDFIRSTMOD = 309 // { int kldfirstmod(int fileid); } - SYS_GETSID = 310 // { int getsid(pid_t pid); } - SYS_SETRESUID = 311 // { int setresuid(uid_t ruid, uid_t euid, \ - SYS_SETRESGID = 312 // { int setresgid(gid_t rgid, gid_t egid, \ - SYS_YIELD = 321 // { int yield(void); } - SYS_MLOCKALL = 324 // { int mlockall(int how); } - SYS_MUNLOCKALL = 325 // { int munlockall(void); } - SYS___GETCWD = 326 // { int __getcwd(char *buf, u_int buflen); } - SYS_SCHED_SETPARAM = 327 // { int sched_setparam (pid_t pid, \ - SYS_SCHED_GETPARAM = 328 // { int sched_getparam (pid_t pid, struct \ - 
SYS_SCHED_SETSCHEDULER = 329 // { int sched_setscheduler (pid_t pid, int \ - SYS_SCHED_GETSCHEDULER = 330 // { int sched_getscheduler (pid_t pid); } - SYS_SCHED_YIELD = 331 // { int sched_yield (void); } - SYS_SCHED_GET_PRIORITY_MAX = 332 // { int sched_get_priority_max (int policy); } - SYS_SCHED_GET_PRIORITY_MIN = 333 // { int sched_get_priority_min (int policy); } - SYS_SCHED_RR_GET_INTERVAL = 334 // { int sched_rr_get_interval (pid_t pid, \ - SYS_UTRACE = 335 // { int utrace(const void *addr, size_t len); } - SYS_KLDSYM = 337 // { int kldsym(int fileid, int cmd, \ - SYS_JAIL = 338 // { int jail(struct jail *jail); } - SYS_SIGPROCMASK = 340 // { int sigprocmask(int how, \ - SYS_SIGSUSPEND = 341 // { int sigsuspend(const sigset_t *sigmask); } - SYS_SIGPENDING = 343 // { int sigpending(sigset_t *set); } - SYS_SIGTIMEDWAIT = 345 // { int sigtimedwait(const sigset_t *set, \ - SYS_SIGWAITINFO = 346 // { int sigwaitinfo(const sigset_t *set, \ - SYS___ACL_GET_FILE = 347 // { int __acl_get_file(const char *path, \ - SYS___ACL_SET_FILE = 348 // { int __acl_set_file(const char *path, \ - SYS___ACL_GET_FD = 349 // { int __acl_get_fd(int filedes, \ - SYS___ACL_SET_FD = 350 // { int __acl_set_fd(int filedes, \ - SYS___ACL_DELETE_FILE = 351 // { int __acl_delete_file(const char *path, \ - SYS___ACL_DELETE_FD = 352 // { int __acl_delete_fd(int filedes, \ - SYS___ACL_ACLCHECK_FILE = 353 // { int __acl_aclcheck_file(const char *path, \ - SYS___ACL_ACLCHECK_FD = 354 // { int __acl_aclcheck_fd(int filedes, \ - SYS_EXTATTRCTL = 355 // { int extattrctl(const char *path, int cmd, \ - SYS_EXTATTR_SET_FILE = 356 // { ssize_t extattr_set_file( \ - SYS_EXTATTR_GET_FILE = 357 // { ssize_t extattr_get_file( \ - SYS_EXTATTR_DELETE_FILE = 358 // { int extattr_delete_file(const char *path, \ - SYS_GETRESUID = 360 // { int getresuid(uid_t *ruid, uid_t *euid, \ - SYS_GETRESGID = 361 // { int getresgid(gid_t *rgid, gid_t *egid, \ - SYS_KQUEUE = 362 // { int kqueue(void); } - SYS_KEVENT = 363 // { int kevent(int fd, \ - SYS_EXTATTR_SET_FD = 371 // { ssize_t extattr_set_fd(int fd, \ - SYS_EXTATTR_GET_FD = 372 // { ssize_t extattr_get_fd(int fd, \ - SYS_EXTATTR_DELETE_FD = 373 // { int extattr_delete_fd(int fd, \ - SYS___SETUGID = 374 // { int __setugid(int flag); } - SYS_EACCESS = 376 // { int eaccess(char *path, int amode); } - SYS_NMOUNT = 378 // { int nmount(struct iovec *iovp, \ - SYS___MAC_GET_PROC = 384 // { int __mac_get_proc(struct mac *mac_p); } - SYS___MAC_SET_PROC = 385 // { int __mac_set_proc(struct mac *mac_p); } - SYS___MAC_GET_FD = 386 // { int __mac_get_fd(int fd, \ - SYS___MAC_GET_FILE = 387 // { int __mac_get_file(const char *path_p, \ - SYS___MAC_SET_FD = 388 // { int __mac_set_fd(int fd, \ - SYS___MAC_SET_FILE = 389 // { int __mac_set_file(const char *path_p, \ - SYS_KENV = 390 // { int kenv(int what, const char *name, \ - SYS_LCHFLAGS = 391 // { int lchflags(const char *path, \ - SYS_UUIDGEN = 392 // { int uuidgen(struct uuid *store, \ - SYS_SENDFILE = 393 // { int sendfile(int fd, int s, off_t offset, \ - SYS_MAC_SYSCALL = 394 // { int mac_syscall(const char *policy, \ - SYS_GETFSSTAT = 395 // { int getfsstat(struct statfs *buf, \ - SYS_STATFS = 396 // { int statfs(char *path, \ - SYS_FSTATFS = 397 // { int fstatfs(int fd, struct statfs *buf); } - SYS_FHSTATFS = 398 // { int fhstatfs(const struct fhandle *u_fhp, \ - SYS___MAC_GET_PID = 409 // { int __mac_get_pid(pid_t pid, \ - SYS___MAC_GET_LINK = 410 // { int __mac_get_link(const char *path_p, \ - SYS___MAC_SET_LINK = 411 // { int 
__mac_set_link(const char *path_p, \ - SYS_EXTATTR_SET_LINK = 412 // { ssize_t extattr_set_link( \ - SYS_EXTATTR_GET_LINK = 413 // { ssize_t extattr_get_link( \ - SYS_EXTATTR_DELETE_LINK = 414 // { int extattr_delete_link( \ - SYS___MAC_EXECVE = 415 // { int __mac_execve(char *fname, char **argv, \ - SYS_SIGACTION = 416 // { int sigaction(int sig, \ - SYS_SIGRETURN = 417 // { int sigreturn( \ - SYS_GETCONTEXT = 421 // { int getcontext(struct __ucontext *ucp); } - SYS_SETCONTEXT = 422 // { int setcontext( \ - SYS_SWAPCONTEXT = 423 // { int swapcontext(struct __ucontext *oucp, \ - SYS_SWAPOFF = 424 // { int swapoff(const char *name); } - SYS___ACL_GET_LINK = 425 // { int __acl_get_link(const char *path, \ - SYS___ACL_SET_LINK = 426 // { int __acl_set_link(const char *path, \ - SYS___ACL_DELETE_LINK = 427 // { int __acl_delete_link(const char *path, \ - SYS___ACL_ACLCHECK_LINK = 428 // { int __acl_aclcheck_link(const char *path, \ - SYS_SIGWAIT = 429 // { int sigwait(const sigset_t *set, \ - SYS_THR_CREATE = 430 // { int thr_create(ucontext_t *ctx, long *id, \ - SYS_THR_EXIT = 431 // { void thr_exit(long *state); } - SYS_THR_SELF = 432 // { int thr_self(long *id); } - SYS_THR_KILL = 433 // { int thr_kill(long id, int sig); } - SYS__UMTX_LOCK = 434 // { int _umtx_lock(struct umtx *umtx); } - SYS__UMTX_UNLOCK = 435 // { int _umtx_unlock(struct umtx *umtx); } - SYS_JAIL_ATTACH = 436 // { int jail_attach(int jid); } - SYS_EXTATTR_LIST_FD = 437 // { ssize_t extattr_list_fd(int fd, \ - SYS_EXTATTR_LIST_FILE = 438 // { ssize_t extattr_list_file( \ - SYS_EXTATTR_LIST_LINK = 439 // { ssize_t extattr_list_link( \ - SYS_THR_SUSPEND = 442 // { int thr_suspend( \ - SYS_THR_WAKE = 443 // { int thr_wake(long id); } - SYS_KLDUNLOADF = 444 // { int kldunloadf(int fileid, int flags); } - SYS_AUDIT = 445 // { int audit(const void *record, \ - SYS_AUDITON = 446 // { int auditon(int cmd, void *data, \ - SYS_GETAUID = 447 // { int getauid(uid_t *auid); } - SYS_SETAUID = 448 // { int setauid(uid_t *auid); } - SYS_GETAUDIT = 449 // { int getaudit(struct auditinfo *auditinfo); } - SYS_SETAUDIT = 450 // { int setaudit(struct auditinfo *auditinfo); } - SYS_GETAUDIT_ADDR = 451 // { int getaudit_addr( \ - SYS_SETAUDIT_ADDR = 452 // { int setaudit_addr( \ - SYS_AUDITCTL = 453 // { int auditctl(char *path); } - SYS__UMTX_OP = 454 // { int _umtx_op(void *obj, int op, \ - SYS_THR_NEW = 455 // { int thr_new(struct thr_param *param, \ - SYS_SIGQUEUE = 456 // { int sigqueue(pid_t pid, int signum, void *value); } - SYS_ABORT2 = 463 // { int abort2(const char *why, int nargs, void **args); } - SYS_THR_SET_NAME = 464 // { int thr_set_name(long id, const char *name); } - SYS_RTPRIO_THREAD = 466 // { int rtprio_thread(int function, \ - SYS_SCTP_PEELOFF = 471 // { int sctp_peeloff(int sd, uint32_t name); } - SYS_SCTP_GENERIC_SENDMSG = 472 // { int sctp_generic_sendmsg(int sd, caddr_t msg, int mlen, \ - SYS_SCTP_GENERIC_SENDMSG_IOV = 473 // { int sctp_generic_sendmsg_iov(int sd, struct iovec *iov, int iovlen, \ - SYS_SCTP_GENERIC_RECVMSG = 474 // { int sctp_generic_recvmsg(int sd, struct iovec *iov, int iovlen, \ - SYS_PREAD = 475 // { ssize_t pread(int fd, void *buf, \ - SYS_PWRITE = 476 // { ssize_t pwrite(int fd, const void *buf, \ - SYS_MMAP = 477 // { caddr_t mmap(caddr_t addr, size_t len, \ - SYS_LSEEK = 478 // { off_t lseek(int fd, off_t offset, \ - SYS_TRUNCATE = 479 // { int truncate(char *path, off_t length); } - SYS_FTRUNCATE = 480 // { int ftruncate(int fd, off_t length); } - SYS_THR_KILL2 = 481 // { int thr_kill2(pid_t 
pid, long id, int sig); } - SYS_SHM_OPEN = 482 // { int shm_open(const char *path, int flags, \ - SYS_SHM_UNLINK = 483 // { int shm_unlink(const char *path); } - SYS_CPUSET = 484 // { int cpuset(cpusetid_t *setid); } - SYS_CPUSET_SETID = 485 // { int cpuset_setid(cpuwhich_t which, id_t id, \ - SYS_CPUSET_GETID = 486 // { int cpuset_getid(cpulevel_t level, \ - SYS_CPUSET_GETAFFINITY = 487 // { int cpuset_getaffinity(cpulevel_t level, \ - SYS_CPUSET_SETAFFINITY = 488 // { int cpuset_setaffinity(cpulevel_t level, \ - SYS_FACCESSAT = 489 // { int faccessat(int fd, char *path, int amode, \ - SYS_FCHMODAT = 490 // { int fchmodat(int fd, char *path, mode_t mode, \ - SYS_FCHOWNAT = 491 // { int fchownat(int fd, char *path, uid_t uid, \ - SYS_FEXECVE = 492 // { int fexecve(int fd, char **argv, \ - SYS_FSTATAT = 493 // { int fstatat(int fd, char *path, \ - SYS_FUTIMESAT = 494 // { int futimesat(int fd, char *path, \ - SYS_LINKAT = 495 // { int linkat(int fd1, char *path1, int fd2, \ - SYS_MKDIRAT = 496 // { int mkdirat(int fd, char *path, mode_t mode); } - SYS_MKFIFOAT = 497 // { int mkfifoat(int fd, char *path, mode_t mode); } - SYS_MKNODAT = 498 // { int mknodat(int fd, char *path, mode_t mode, \ - SYS_OPENAT = 499 // { int openat(int fd, char *path, int flag, \ - SYS_READLINKAT = 500 // { int readlinkat(int fd, char *path, char *buf, \ - SYS_RENAMEAT = 501 // { int renameat(int oldfd, char *old, int newfd, \ - SYS_SYMLINKAT = 502 // { int symlinkat(char *path1, int fd, \ - SYS_UNLINKAT = 503 // { int unlinkat(int fd, char *path, int flag); } - SYS_POSIX_OPENPT = 504 // { int posix_openpt(int flags); } - SYS_JAIL_GET = 506 // { int jail_get(struct iovec *iovp, \ - SYS_JAIL_SET = 507 // { int jail_set(struct iovec *iovp, \ - SYS_JAIL_REMOVE = 508 // { int jail_remove(int jid); } - SYS_CLOSEFROM = 509 // { int closefrom(int lowfd); } - SYS_LPATHCONF = 513 // { int lpathconf(char *path, int name); } - SYS_CAP_NEW = 514 // { int cap_new(int fd, uint64_t rights); } - SYS_CAP_GETRIGHTS = 515 // { int cap_getrights(int fd, \ - SYS_CAP_ENTER = 516 // { int cap_enter(void); } - SYS_CAP_GETMODE = 517 // { int cap_getmode(u_int *modep); } - SYS_PDFORK = 518 // { int pdfork(int *fdp, int flags); } - SYS_PDKILL = 519 // { int pdkill(int fd, int signum); } - SYS_PDGETPID = 520 // { int pdgetpid(int fd, pid_t *pidp); } - SYS_PSELECT = 522 // { int pselect(int nd, fd_set *in, \ - SYS_GETLOGINCLASS = 523 // { int getloginclass(char *namebuf, \ - SYS_SETLOGINCLASS = 524 // { int setloginclass(const char *namebuf); } - SYS_RCTL_GET_RACCT = 525 // { int rctl_get_racct(const void *inbufp, \ - SYS_RCTL_GET_RULES = 526 // { int rctl_get_rules(const void *inbufp, \ - SYS_RCTL_GET_LIMITS = 527 // { int rctl_get_limits(const void *inbufp, \ - SYS_RCTL_ADD_RULE = 528 // { int rctl_add_rule(const void *inbufp, \ - SYS_RCTL_REMOVE_RULE = 529 // { int rctl_remove_rule(const void *inbufp, \ - SYS_POSIX_FALLOCATE = 530 // { int posix_fallocate(int fd, \ - SYS_POSIX_FADVISE = 531 // { int posix_fadvise(int fd, off_t offset, \ - SYS_WAIT6 = 532 // { int wait6(idtype_t idtype, id_t id, \ - SYS_BINDAT = 538 // { int bindat(int fd, int s, caddr_t name, \ - SYS_CONNECTAT = 539 // { int connectat(int fd, int s, caddr_t name, \ - SYS_CHFLAGSAT = 540 // { int chflagsat(int fd, const char *path, \ - SYS_ACCEPT4 = 541 // { int accept4(int s, \ - SYS_PIPE2 = 542 // { int pipe2(int *fildes, int flags); } - SYS_PROCCTL = 544 // { int procctl(idtype_t idtype, id_t id, \ - SYS_PPOLL = 545 // { int ppoll(struct pollfd *fds, u_int nfds, \ -) diff 
--git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_arm.go deleted file mode 100644 index 206b9f612d4..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_freebsd_arm.go +++ /dev/null @@ -1,351 +0,0 @@ -// mksysnum_freebsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build arm,freebsd - -package unix - -const ( - // SYS_NOSYS = 0; // { int nosys(void); } syscall nosys_args int - SYS_EXIT = 1 // { void sys_exit(int rval); } exit \ - SYS_FORK = 2 // { int fork(void); } - SYS_READ = 3 // { ssize_t read(int fd, void *buf, \ - SYS_WRITE = 4 // { ssize_t write(int fd, const void *buf, \ - SYS_OPEN = 5 // { int open(char *path, int flags, int mode); } - SYS_CLOSE = 6 // { int close(int fd); } - SYS_WAIT4 = 7 // { int wait4(int pid, int *status, \ - SYS_LINK = 9 // { int link(char *path, char *link); } - SYS_UNLINK = 10 // { int unlink(char *path); } - SYS_CHDIR = 12 // { int chdir(char *path); } - SYS_FCHDIR = 13 // { int fchdir(int fd); } - SYS_MKNOD = 14 // { int mknod(char *path, int mode, int dev); } - SYS_CHMOD = 15 // { int chmod(char *path, int mode); } - SYS_CHOWN = 16 // { int chown(char *path, int uid, int gid); } - SYS_OBREAK = 17 // { int obreak(char *nsize); } break \ - SYS_GETPID = 20 // { pid_t getpid(void); } - SYS_MOUNT = 21 // { int mount(char *type, char *path, \ - SYS_UNMOUNT = 22 // { int unmount(char *path, int flags); } - SYS_SETUID = 23 // { int setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t getuid(void); } - SYS_GETEUID = 25 // { uid_t geteuid(void); } - SYS_PTRACE = 26 // { int ptrace(int req, pid_t pid, \ - SYS_RECVMSG = 27 // { int recvmsg(int s, struct msghdr *msg, \ - SYS_SENDMSG = 28 // { int sendmsg(int s, struct msghdr *msg, \ - SYS_RECVFROM = 29 // { int recvfrom(int s, caddr_t buf, \ - SYS_ACCEPT = 30 // { int accept(int s, \ - SYS_GETPEERNAME = 31 // { int getpeername(int fdes, \ - SYS_GETSOCKNAME = 32 // { int getsockname(int fdes, \ - SYS_ACCESS = 33 // { int access(char *path, int amode); } - SYS_CHFLAGS = 34 // { int chflags(const char *path, u_long flags); } - SYS_FCHFLAGS = 35 // { int fchflags(int fd, u_long flags); } - SYS_SYNC = 36 // { int sync(void); } - SYS_KILL = 37 // { int kill(int pid, int signum); } - SYS_GETPPID = 39 // { pid_t getppid(void); } - SYS_DUP = 41 // { int dup(u_int fd); } - SYS_PIPE = 42 // { int pipe(void); } - SYS_GETEGID = 43 // { gid_t getegid(void); } - SYS_PROFIL = 44 // { int profil(caddr_t samples, size_t size, \ - SYS_KTRACE = 45 // { int ktrace(const char *fname, int ops, \ - SYS_GETGID = 47 // { gid_t getgid(void); } - SYS_GETLOGIN = 49 // { int getlogin(char *namebuf, u_int \ - SYS_SETLOGIN = 50 // { int setlogin(char *namebuf); } - SYS_ACCT = 51 // { int acct(char *path); } - SYS_SIGALTSTACK = 53 // { int sigaltstack(stack_t *ss, \ - SYS_IOCTL = 54 // { int ioctl(int fd, u_long com, \ - SYS_REBOOT = 55 // { int reboot(int opt); } - SYS_REVOKE = 56 // { int revoke(char *path); } - SYS_SYMLINK = 57 // { int symlink(char *path, char *link); } - SYS_READLINK = 58 // { ssize_t readlink(char *path, char *buf, \ - SYS_EXECVE = 59 // { int execve(char *fname, char **argv, \ - SYS_UMASK = 60 // { int umask(int newmask); } umask umask_args \ - SYS_CHROOT = 61 // { int chroot(char *path); } - SYS_MSYNC = 65 // { int msync(void *addr, size_t len, \ - SYS_VFORK = 66 // { int vfork(void); } - SYS_SBRK 
= 69 // { int sbrk(int incr); } - SYS_SSTK = 70 // { int sstk(int incr); } - SYS_OVADVISE = 72 // { int ovadvise(int anom); } vadvise \ - SYS_MUNMAP = 73 // { int munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int mprotect(const void *addr, size_t len, \ - SYS_MADVISE = 75 // { int madvise(void *addr, size_t len, \ - SYS_MINCORE = 78 // { int mincore(const void *addr, size_t len, \ - SYS_GETGROUPS = 79 // { int getgroups(u_int gidsetsize, \ - SYS_SETGROUPS = 80 // { int setgroups(u_int gidsetsize, \ - SYS_GETPGRP = 81 // { int getpgrp(void); } - SYS_SETPGID = 82 // { int setpgid(int pid, int pgid); } - SYS_SETITIMER = 83 // { int setitimer(u_int which, struct \ - SYS_SWAPON = 85 // { int swapon(char *name); } - SYS_GETITIMER = 86 // { int getitimer(u_int which, \ - SYS_GETDTABLESIZE = 89 // { int getdtablesize(void); } - SYS_DUP2 = 90 // { int dup2(u_int from, u_int to); } - SYS_FCNTL = 92 // { int fcntl(int fd, int cmd, long arg); } - SYS_SELECT = 93 // { int select(int nd, fd_set *in, fd_set *ou, \ - SYS_FSYNC = 95 // { int fsync(int fd); } - SYS_SETPRIORITY = 96 // { int setpriority(int which, int who, \ - SYS_SOCKET = 97 // { int socket(int domain, int type, \ - SYS_CONNECT = 98 // { int connect(int s, caddr_t name, \ - SYS_GETPRIORITY = 100 // { int getpriority(int which, int who); } - SYS_BIND = 104 // { int bind(int s, caddr_t name, \ - SYS_SETSOCKOPT = 105 // { int setsockopt(int s, int level, int name, \ - SYS_LISTEN = 106 // { int listen(int s, int backlog); } - SYS_GETTIMEOFDAY = 116 // { int gettimeofday(struct timeval *tp, \ - SYS_GETRUSAGE = 117 // { int getrusage(int who, \ - SYS_GETSOCKOPT = 118 // { int getsockopt(int s, int level, int name, \ - SYS_READV = 120 // { int readv(int fd, struct iovec *iovp, \ - SYS_WRITEV = 121 // { int writev(int fd, struct iovec *iovp, \ - SYS_SETTIMEOFDAY = 122 // { int settimeofday(struct timeval *tv, \ - SYS_FCHOWN = 123 // { int fchown(int fd, int uid, int gid); } - SYS_FCHMOD = 124 // { int fchmod(int fd, int mode); } - SYS_SETREUID = 126 // { int setreuid(int ruid, int euid); } - SYS_SETREGID = 127 // { int setregid(int rgid, int egid); } - SYS_RENAME = 128 // { int rename(char *from, char *to); } - SYS_FLOCK = 131 // { int flock(int fd, int how); } - SYS_MKFIFO = 132 // { int mkfifo(char *path, int mode); } - SYS_SENDTO = 133 // { int sendto(int s, caddr_t buf, size_t len, \ - SYS_SHUTDOWN = 134 // { int shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int socketpair(int domain, int type, \ - SYS_MKDIR = 136 // { int mkdir(char *path, int mode); } - SYS_RMDIR = 137 // { int rmdir(char *path); } - SYS_UTIMES = 138 // { int utimes(char *path, \ - SYS_ADJTIME = 140 // { int adjtime(struct timeval *delta, \ - SYS_SETSID = 147 // { int setsid(void); } - SYS_QUOTACTL = 148 // { int quotactl(char *path, int cmd, int uid, \ - SYS_LGETFH = 160 // { int lgetfh(char *fname, \ - SYS_GETFH = 161 // { int getfh(char *fname, \ - SYS_SYSARCH = 165 // { int sysarch(int op, char *parms); } - SYS_RTPRIO = 166 // { int rtprio(int function, pid_t pid, \ - SYS_FREEBSD6_PREAD = 173 // { ssize_t freebsd6_pread(int fd, void *buf, \ - SYS_FREEBSD6_PWRITE = 174 // { ssize_t freebsd6_pwrite(int fd, \ - SYS_SETFIB = 175 // { int setfib(int fibnum); } - SYS_NTP_ADJTIME = 176 // { int ntp_adjtime(struct timex *tp); } - SYS_SETGID = 181 // { int setgid(gid_t gid); } - SYS_SETEGID = 182 // { int setegid(gid_t egid); } - SYS_SETEUID = 183 // { int seteuid(uid_t euid); } - SYS_STAT = 188 // { int stat(char *path, struct stat *ub); } - SYS_FSTAT = 189 
// { int fstat(int fd, struct stat *sb); } - SYS_LSTAT = 190 // { int lstat(char *path, struct stat *ub); } - SYS_PATHCONF = 191 // { int pathconf(char *path, int name); } - SYS_FPATHCONF = 192 // { int fpathconf(int fd, int name); } - SYS_GETRLIMIT = 194 // { int getrlimit(u_int which, \ - SYS_SETRLIMIT = 195 // { int setrlimit(u_int which, \ - SYS_GETDIRENTRIES = 196 // { int getdirentries(int fd, char *buf, \ - SYS_FREEBSD6_MMAP = 197 // { caddr_t freebsd6_mmap(caddr_t addr, \ - SYS_FREEBSD6_LSEEK = 199 // { off_t freebsd6_lseek(int fd, int pad, \ - SYS_FREEBSD6_TRUNCATE = 200 // { int freebsd6_truncate(char *path, int pad, \ - SYS_FREEBSD6_FTRUNCATE = 201 // { int freebsd6_ftruncate(int fd, int pad, \ - SYS___SYSCTL = 202 // { int __sysctl(int *name, u_int namelen, \ - SYS_MLOCK = 203 // { int mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int munlock(const void *addr, size_t len); } - SYS_UNDELETE = 205 // { int undelete(char *path); } - SYS_FUTIMES = 206 // { int futimes(int fd, struct timeval *tptr); } - SYS_GETPGID = 207 // { int getpgid(pid_t pid); } - SYS_POLL = 209 // { int poll(struct pollfd *fds, u_int nfds, \ - SYS_CLOCK_GETTIME = 232 // { int clock_gettime(clockid_t clock_id, \ - SYS_CLOCK_SETTIME = 233 // { int clock_settime( \ - SYS_CLOCK_GETRES = 234 // { int clock_getres(clockid_t clock_id, \ - SYS_KTIMER_CREATE = 235 // { int ktimer_create(clockid_t clock_id, \ - SYS_KTIMER_DELETE = 236 // { int ktimer_delete(int timerid); } - SYS_KTIMER_SETTIME = 237 // { int ktimer_settime(int timerid, int flags, \ - SYS_KTIMER_GETTIME = 238 // { int ktimer_gettime(int timerid, struct \ - SYS_KTIMER_GETOVERRUN = 239 // { int ktimer_getoverrun(int timerid); } - SYS_NANOSLEEP = 240 // { int nanosleep(const struct timespec *rqtp, \ - SYS_FFCLOCK_GETCOUNTER = 241 // { int ffclock_getcounter(ffcounter *ffcount); } - SYS_FFCLOCK_SETESTIMATE = 242 // { int ffclock_setestimate( \ - SYS_FFCLOCK_GETESTIMATE = 243 // { int ffclock_getestimate( \ - SYS_CLOCK_GETCPUCLOCKID2 = 247 // { int clock_getcpuclockid2(id_t id,\ - SYS_NTP_GETTIME = 248 // { int ntp_gettime(struct ntptimeval *ntvp); } - SYS_MINHERIT = 250 // { int minherit(void *addr, size_t len, \ - SYS_RFORK = 251 // { int rfork(int flags); } - SYS_OPENBSD_POLL = 252 // { int openbsd_poll(struct pollfd *fds, \ - SYS_ISSETUGID = 253 // { int issetugid(void); } - SYS_LCHOWN = 254 // { int lchown(char *path, int uid, int gid); } - SYS_GETDENTS = 272 // { int getdents(int fd, char *buf, \ - SYS_LCHMOD = 274 // { int lchmod(char *path, mode_t mode); } - SYS_LUTIMES = 276 // { int lutimes(char *path, \ - SYS_NSTAT = 278 // { int nstat(char *path, struct nstat *ub); } - SYS_NFSTAT = 279 // { int nfstat(int fd, struct nstat *sb); } - SYS_NLSTAT = 280 // { int nlstat(char *path, struct nstat *ub); } - SYS_PREADV = 289 // { ssize_t preadv(int fd, struct iovec *iovp, \ - SYS_PWRITEV = 290 // { ssize_t pwritev(int fd, struct iovec *iovp, \ - SYS_FHOPEN = 298 // { int fhopen(const struct fhandle *u_fhp, \ - SYS_FHSTAT = 299 // { int fhstat(const struct fhandle *u_fhp, \ - SYS_MODNEXT = 300 // { int modnext(int modid); } - SYS_MODSTAT = 301 // { int modstat(int modid, \ - SYS_MODFNEXT = 302 // { int modfnext(int modid); } - SYS_MODFIND = 303 // { int modfind(const char *name); } - SYS_KLDLOAD = 304 // { int kldload(const char *file); } - SYS_KLDUNLOAD = 305 // { int kldunload(int fileid); } - SYS_KLDFIND = 306 // { int kldfind(const char *file); } - SYS_KLDNEXT = 307 // { int kldnext(int fileid); } - SYS_KLDSTAT = 308 // { int 
kldstat(int fileid, struct \ - SYS_KLDFIRSTMOD = 309 // { int kldfirstmod(int fileid); } - SYS_GETSID = 310 // { int getsid(pid_t pid); } - SYS_SETRESUID = 311 // { int setresuid(uid_t ruid, uid_t euid, \ - SYS_SETRESGID = 312 // { int setresgid(gid_t rgid, gid_t egid, \ - SYS_YIELD = 321 // { int yield(void); } - SYS_MLOCKALL = 324 // { int mlockall(int how); } - SYS_MUNLOCKALL = 325 // { int munlockall(void); } - SYS___GETCWD = 326 // { int __getcwd(char *buf, u_int buflen); } - SYS_SCHED_SETPARAM = 327 // { int sched_setparam (pid_t pid, \ - SYS_SCHED_GETPARAM = 328 // { int sched_getparam (pid_t pid, struct \ - SYS_SCHED_SETSCHEDULER = 329 // { int sched_setscheduler (pid_t pid, int \ - SYS_SCHED_GETSCHEDULER = 330 // { int sched_getscheduler (pid_t pid); } - SYS_SCHED_YIELD = 331 // { int sched_yield (void); } - SYS_SCHED_GET_PRIORITY_MAX = 332 // { int sched_get_priority_max (int policy); } - SYS_SCHED_GET_PRIORITY_MIN = 333 // { int sched_get_priority_min (int policy); } - SYS_SCHED_RR_GET_INTERVAL = 334 // { int sched_rr_get_interval (pid_t pid, \ - SYS_UTRACE = 335 // { int utrace(const void *addr, size_t len); } - SYS_KLDSYM = 337 // { int kldsym(int fileid, int cmd, \ - SYS_JAIL = 338 // { int jail(struct jail *jail); } - SYS_SIGPROCMASK = 340 // { int sigprocmask(int how, \ - SYS_SIGSUSPEND = 341 // { int sigsuspend(const sigset_t *sigmask); } - SYS_SIGPENDING = 343 // { int sigpending(sigset_t *set); } - SYS_SIGTIMEDWAIT = 345 // { int sigtimedwait(const sigset_t *set, \ - SYS_SIGWAITINFO = 346 // { int sigwaitinfo(const sigset_t *set, \ - SYS___ACL_GET_FILE = 347 // { int __acl_get_file(const char *path, \ - SYS___ACL_SET_FILE = 348 // { int __acl_set_file(const char *path, \ - SYS___ACL_GET_FD = 349 // { int __acl_get_fd(int filedes, \ - SYS___ACL_SET_FD = 350 // { int __acl_set_fd(int filedes, \ - SYS___ACL_DELETE_FILE = 351 // { int __acl_delete_file(const char *path, \ - SYS___ACL_DELETE_FD = 352 // { int __acl_delete_fd(int filedes, \ - SYS___ACL_ACLCHECK_FILE = 353 // { int __acl_aclcheck_file(const char *path, \ - SYS___ACL_ACLCHECK_FD = 354 // { int __acl_aclcheck_fd(int filedes, \ - SYS_EXTATTRCTL = 355 // { int extattrctl(const char *path, int cmd, \ - SYS_EXTATTR_SET_FILE = 356 // { ssize_t extattr_set_file( \ - SYS_EXTATTR_GET_FILE = 357 // { ssize_t extattr_get_file( \ - SYS_EXTATTR_DELETE_FILE = 358 // { int extattr_delete_file(const char *path, \ - SYS_GETRESUID = 360 // { int getresuid(uid_t *ruid, uid_t *euid, \ - SYS_GETRESGID = 361 // { int getresgid(gid_t *rgid, gid_t *egid, \ - SYS_KQUEUE = 362 // { int kqueue(void); } - SYS_KEVENT = 363 // { int kevent(int fd, \ - SYS_EXTATTR_SET_FD = 371 // { ssize_t extattr_set_fd(int fd, \ - SYS_EXTATTR_GET_FD = 372 // { ssize_t extattr_get_fd(int fd, \ - SYS_EXTATTR_DELETE_FD = 373 // { int extattr_delete_fd(int fd, \ - SYS___SETUGID = 374 // { int __setugid(int flag); } - SYS_EACCESS = 376 // { int eaccess(char *path, int amode); } - SYS_NMOUNT = 378 // { int nmount(struct iovec *iovp, \ - SYS___MAC_GET_PROC = 384 // { int __mac_get_proc(struct mac *mac_p); } - SYS___MAC_SET_PROC = 385 // { int __mac_set_proc(struct mac *mac_p); } - SYS___MAC_GET_FD = 386 // { int __mac_get_fd(int fd, \ - SYS___MAC_GET_FILE = 387 // { int __mac_get_file(const char *path_p, \ - SYS___MAC_SET_FD = 388 // { int __mac_set_fd(int fd, \ - SYS___MAC_SET_FILE = 389 // { int __mac_set_file(const char *path_p, \ - SYS_KENV = 390 // { int kenv(int what, const char *name, \ - SYS_LCHFLAGS = 391 // { int lchflags(const char *path, \ - 
SYS_UUIDGEN = 392 // { int uuidgen(struct uuid *store, \ - SYS_SENDFILE = 393 // { int sendfile(int fd, int s, off_t offset, \ - SYS_MAC_SYSCALL = 394 // { int mac_syscall(const char *policy, \ - SYS_GETFSSTAT = 395 // { int getfsstat(struct statfs *buf, \ - SYS_STATFS = 396 // { int statfs(char *path, \ - SYS_FSTATFS = 397 // { int fstatfs(int fd, struct statfs *buf); } - SYS_FHSTATFS = 398 // { int fhstatfs(const struct fhandle *u_fhp, \ - SYS___MAC_GET_PID = 409 // { int __mac_get_pid(pid_t pid, \ - SYS___MAC_GET_LINK = 410 // { int __mac_get_link(const char *path_p, \ - SYS___MAC_SET_LINK = 411 // { int __mac_set_link(const char *path_p, \ - SYS_EXTATTR_SET_LINK = 412 // { ssize_t extattr_set_link( \ - SYS_EXTATTR_GET_LINK = 413 // { ssize_t extattr_get_link( \ - SYS_EXTATTR_DELETE_LINK = 414 // { int extattr_delete_link( \ - SYS___MAC_EXECVE = 415 // { int __mac_execve(char *fname, char **argv, \ - SYS_SIGACTION = 416 // { int sigaction(int sig, \ - SYS_SIGRETURN = 417 // { int sigreturn( \ - SYS_GETCONTEXT = 421 // { int getcontext(struct __ucontext *ucp); } - SYS_SETCONTEXT = 422 // { int setcontext( \ - SYS_SWAPCONTEXT = 423 // { int swapcontext(struct __ucontext *oucp, \ - SYS_SWAPOFF = 424 // { int swapoff(const char *name); } - SYS___ACL_GET_LINK = 425 // { int __acl_get_link(const char *path, \ - SYS___ACL_SET_LINK = 426 // { int __acl_set_link(const char *path, \ - SYS___ACL_DELETE_LINK = 427 // { int __acl_delete_link(const char *path, \ - SYS___ACL_ACLCHECK_LINK = 428 // { int __acl_aclcheck_link(const char *path, \ - SYS_SIGWAIT = 429 // { int sigwait(const sigset_t *set, \ - SYS_THR_CREATE = 430 // { int thr_create(ucontext_t *ctx, long *id, \ - SYS_THR_EXIT = 431 // { void thr_exit(long *state); } - SYS_THR_SELF = 432 // { int thr_self(long *id); } - SYS_THR_KILL = 433 // { int thr_kill(long id, int sig); } - SYS__UMTX_LOCK = 434 // { int _umtx_lock(struct umtx *umtx); } - SYS__UMTX_UNLOCK = 435 // { int _umtx_unlock(struct umtx *umtx); } - SYS_JAIL_ATTACH = 436 // { int jail_attach(int jid); } - SYS_EXTATTR_LIST_FD = 437 // { ssize_t extattr_list_fd(int fd, \ - SYS_EXTATTR_LIST_FILE = 438 // { ssize_t extattr_list_file( \ - SYS_EXTATTR_LIST_LINK = 439 // { ssize_t extattr_list_link( \ - SYS_THR_SUSPEND = 442 // { int thr_suspend( \ - SYS_THR_WAKE = 443 // { int thr_wake(long id); } - SYS_KLDUNLOADF = 444 // { int kldunloadf(int fileid, int flags); } - SYS_AUDIT = 445 // { int audit(const void *record, \ - SYS_AUDITON = 446 // { int auditon(int cmd, void *data, \ - SYS_GETAUID = 447 // { int getauid(uid_t *auid); } - SYS_SETAUID = 448 // { int setauid(uid_t *auid); } - SYS_GETAUDIT = 449 // { int getaudit(struct auditinfo *auditinfo); } - SYS_SETAUDIT = 450 // { int setaudit(struct auditinfo *auditinfo); } - SYS_GETAUDIT_ADDR = 451 // { int getaudit_addr( \ - SYS_SETAUDIT_ADDR = 452 // { int setaudit_addr( \ - SYS_AUDITCTL = 453 // { int auditctl(char *path); } - SYS__UMTX_OP = 454 // { int _umtx_op(void *obj, int op, \ - SYS_THR_NEW = 455 // { int thr_new(struct thr_param *param, \ - SYS_SIGQUEUE = 456 // { int sigqueue(pid_t pid, int signum, void *value); } - SYS_ABORT2 = 463 // { int abort2(const char *why, int nargs, void **args); } - SYS_THR_SET_NAME = 464 // { int thr_set_name(long id, const char *name); } - SYS_RTPRIO_THREAD = 466 // { int rtprio_thread(int function, \ - SYS_SCTP_PEELOFF = 471 // { int sctp_peeloff(int sd, uint32_t name); } - SYS_SCTP_GENERIC_SENDMSG = 472 // { int sctp_generic_sendmsg(int sd, caddr_t msg, int mlen, \ - SYS_SCTP_GENERIC_SENDMSG_IOV 
= 473 // { int sctp_generic_sendmsg_iov(int sd, struct iovec *iov, int iovlen, \ - SYS_SCTP_GENERIC_RECVMSG = 474 // { int sctp_generic_recvmsg(int sd, struct iovec *iov, int iovlen, \ - SYS_PREAD = 475 // { ssize_t pread(int fd, void *buf, \ - SYS_PWRITE = 476 // { ssize_t pwrite(int fd, const void *buf, \ - SYS_MMAP = 477 // { caddr_t mmap(caddr_t addr, size_t len, \ - SYS_LSEEK = 478 // { off_t lseek(int fd, off_t offset, \ - SYS_TRUNCATE = 479 // { int truncate(char *path, off_t length); } - SYS_FTRUNCATE = 480 // { int ftruncate(int fd, off_t length); } - SYS_THR_KILL2 = 481 // { int thr_kill2(pid_t pid, long id, int sig); } - SYS_SHM_OPEN = 482 // { int shm_open(const char *path, int flags, \ - SYS_SHM_UNLINK = 483 // { int shm_unlink(const char *path); } - SYS_CPUSET = 484 // { int cpuset(cpusetid_t *setid); } - SYS_CPUSET_SETID = 485 // { int cpuset_setid(cpuwhich_t which, id_t id, \ - SYS_CPUSET_GETID = 486 // { int cpuset_getid(cpulevel_t level, \ - SYS_CPUSET_GETAFFINITY = 487 // { int cpuset_getaffinity(cpulevel_t level, \ - SYS_CPUSET_SETAFFINITY = 488 // { int cpuset_setaffinity(cpulevel_t level, \ - SYS_FACCESSAT = 489 // { int faccessat(int fd, char *path, int amode, \ - SYS_FCHMODAT = 490 // { int fchmodat(int fd, char *path, mode_t mode, \ - SYS_FCHOWNAT = 491 // { int fchownat(int fd, char *path, uid_t uid, \ - SYS_FEXECVE = 492 // { int fexecve(int fd, char **argv, \ - SYS_FSTATAT = 493 // { int fstatat(int fd, char *path, \ - SYS_FUTIMESAT = 494 // { int futimesat(int fd, char *path, \ - SYS_LINKAT = 495 // { int linkat(int fd1, char *path1, int fd2, \ - SYS_MKDIRAT = 496 // { int mkdirat(int fd, char *path, mode_t mode); } - SYS_MKFIFOAT = 497 // { int mkfifoat(int fd, char *path, mode_t mode); } - SYS_MKNODAT = 498 // { int mknodat(int fd, char *path, mode_t mode, \ - SYS_OPENAT = 499 // { int openat(int fd, char *path, int flag, \ - SYS_READLINKAT = 500 // { int readlinkat(int fd, char *path, char *buf, \ - SYS_RENAMEAT = 501 // { int renameat(int oldfd, char *old, int newfd, \ - SYS_SYMLINKAT = 502 // { int symlinkat(char *path1, int fd, \ - SYS_UNLINKAT = 503 // { int unlinkat(int fd, char *path, int flag); } - SYS_POSIX_OPENPT = 504 // { int posix_openpt(int flags); } - SYS_JAIL_GET = 506 // { int jail_get(struct iovec *iovp, \ - SYS_JAIL_SET = 507 // { int jail_set(struct iovec *iovp, \ - SYS_JAIL_REMOVE = 508 // { int jail_remove(int jid); } - SYS_CLOSEFROM = 509 // { int closefrom(int lowfd); } - SYS_LPATHCONF = 513 // { int lpathconf(char *path, int name); } - SYS_CAP_NEW = 514 // { int cap_new(int fd, uint64_t rights); } - SYS_CAP_GETRIGHTS = 515 // { int cap_getrights(int fd, \ - SYS_CAP_ENTER = 516 // { int cap_enter(void); } - SYS_CAP_GETMODE = 517 // { int cap_getmode(u_int *modep); } - SYS_PDFORK = 518 // { int pdfork(int *fdp, int flags); } - SYS_PDKILL = 519 // { int pdkill(int fd, int signum); } - SYS_PDGETPID = 520 // { int pdgetpid(int fd, pid_t *pidp); } - SYS_PSELECT = 522 // { int pselect(int nd, fd_set *in, \ - SYS_GETLOGINCLASS = 523 // { int getloginclass(char *namebuf, \ - SYS_SETLOGINCLASS = 524 // { int setloginclass(const char *namebuf); } - SYS_RCTL_GET_RACCT = 525 // { int rctl_get_racct(const void *inbufp, \ - SYS_RCTL_GET_RULES = 526 // { int rctl_get_rules(const void *inbufp, \ - SYS_RCTL_GET_LIMITS = 527 // { int rctl_get_limits(const void *inbufp, \ - SYS_RCTL_ADD_RULE = 528 // { int rctl_add_rule(const void *inbufp, \ - SYS_RCTL_REMOVE_RULE = 529 // { int rctl_remove_rule(const void *inbufp, \ - SYS_POSIX_FALLOCATE = 530 // { int 
posix_fallocate(int fd, \ - SYS_POSIX_FADVISE = 531 // { int posix_fadvise(int fd, off_t offset, \ - SYS_WAIT6 = 532 // { int wait6(idtype_t idtype, id_t id, \ - SYS_BINDAT = 538 // { int bindat(int fd, int s, caddr_t name, \ - SYS_CONNECTAT = 539 // { int connectat(int fd, int s, caddr_t name, \ - SYS_CHFLAGSAT = 540 // { int chflagsat(int fd, const char *path, \ - SYS_ACCEPT4 = 541 // { int accept4(int s, \ - SYS_PIPE2 = 542 // { int pipe2(int *fildes, int flags); } - SYS_PROCCTL = 544 // { int procctl(idtype_t idtype, id_t id, \ - SYS_PPOLL = 545 // { int ppoll(struct pollfd *fds, u_int nfds, \ -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_386.go deleted file mode 100644 index ba952c67548..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_386.go +++ /dev/null @@ -1,355 +0,0 @@ -// mksysnum_linux.pl /usr/include/asm/unistd_32.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build 386,linux - -package unix - -const ( - SYS_RESTART_SYSCALL = 0 - SYS_EXIT = 1 - SYS_FORK = 2 - SYS_READ = 3 - SYS_WRITE = 4 - SYS_OPEN = 5 - SYS_CLOSE = 6 - SYS_WAITPID = 7 - SYS_CREAT = 8 - SYS_LINK = 9 - SYS_UNLINK = 10 - SYS_EXECVE = 11 - SYS_CHDIR = 12 - SYS_TIME = 13 - SYS_MKNOD = 14 - SYS_CHMOD = 15 - SYS_LCHOWN = 16 - SYS_BREAK = 17 - SYS_OLDSTAT = 18 - SYS_LSEEK = 19 - SYS_GETPID = 20 - SYS_MOUNT = 21 - SYS_UMOUNT = 22 - SYS_SETUID = 23 - SYS_GETUID = 24 - SYS_STIME = 25 - SYS_PTRACE = 26 - SYS_ALARM = 27 - SYS_OLDFSTAT = 28 - SYS_PAUSE = 29 - SYS_UTIME = 30 - SYS_STTY = 31 - SYS_GTTY = 32 - SYS_ACCESS = 33 - SYS_NICE = 34 - SYS_FTIME = 35 - SYS_SYNC = 36 - SYS_KILL = 37 - SYS_RENAME = 38 - SYS_MKDIR = 39 - SYS_RMDIR = 40 - SYS_DUP = 41 - SYS_PIPE = 42 - SYS_TIMES = 43 - SYS_PROF = 44 - SYS_BRK = 45 - SYS_SETGID = 46 - SYS_GETGID = 47 - SYS_SIGNAL = 48 - SYS_GETEUID = 49 - SYS_GETEGID = 50 - SYS_ACCT = 51 - SYS_UMOUNT2 = 52 - SYS_LOCK = 53 - SYS_IOCTL = 54 - SYS_FCNTL = 55 - SYS_MPX = 56 - SYS_SETPGID = 57 - SYS_ULIMIT = 58 - SYS_OLDOLDUNAME = 59 - SYS_UMASK = 60 - SYS_CHROOT = 61 - SYS_USTAT = 62 - SYS_DUP2 = 63 - SYS_GETPPID = 64 - SYS_GETPGRP = 65 - SYS_SETSID = 66 - SYS_SIGACTION = 67 - SYS_SGETMASK = 68 - SYS_SSETMASK = 69 - SYS_SETREUID = 70 - SYS_SETREGID = 71 - SYS_SIGSUSPEND = 72 - SYS_SIGPENDING = 73 - SYS_SETHOSTNAME = 74 - SYS_SETRLIMIT = 75 - SYS_GETRLIMIT = 76 - SYS_GETRUSAGE = 77 - SYS_GETTIMEOFDAY = 78 - SYS_SETTIMEOFDAY = 79 - SYS_GETGROUPS = 80 - SYS_SETGROUPS = 81 - SYS_SELECT = 82 - SYS_SYMLINK = 83 - SYS_OLDLSTAT = 84 - SYS_READLINK = 85 - SYS_USELIB = 86 - SYS_SWAPON = 87 - SYS_REBOOT = 88 - SYS_READDIR = 89 - SYS_MMAP = 90 - SYS_MUNMAP = 91 - SYS_TRUNCATE = 92 - SYS_FTRUNCATE = 93 - SYS_FCHMOD = 94 - SYS_FCHOWN = 95 - SYS_GETPRIORITY = 96 - SYS_SETPRIORITY = 97 - SYS_PROFIL = 98 - SYS_STATFS = 99 - SYS_FSTATFS = 100 - SYS_IOPERM = 101 - SYS_SOCKETCALL = 102 - SYS_SYSLOG = 103 - SYS_SETITIMER = 104 - SYS_GETITIMER = 105 - SYS_STAT = 106 - SYS_LSTAT = 107 - SYS_FSTAT = 108 - SYS_OLDUNAME = 109 - SYS_IOPL = 110 - SYS_VHANGUP = 111 - SYS_IDLE = 112 - SYS_VM86OLD = 113 - SYS_WAIT4 = 114 - SYS_SWAPOFF = 115 - SYS_SYSINFO = 116 - SYS_IPC = 117 - SYS_FSYNC = 118 - SYS_SIGRETURN = 119 - SYS_CLONE = 120 - SYS_SETDOMAINNAME = 121 - SYS_UNAME = 122 - SYS_MODIFY_LDT = 123 - SYS_ADJTIMEX = 124 - SYS_MPROTECT = 125 - SYS_SIGPROCMASK = 126 - 
SYS_CREATE_MODULE = 127 - SYS_INIT_MODULE = 128 - SYS_DELETE_MODULE = 129 - SYS_GET_KERNEL_SYMS = 130 - SYS_QUOTACTL = 131 - SYS_GETPGID = 132 - SYS_FCHDIR = 133 - SYS_BDFLUSH = 134 - SYS_SYSFS = 135 - SYS_PERSONALITY = 136 - SYS_AFS_SYSCALL = 137 - SYS_SETFSUID = 138 - SYS_SETFSGID = 139 - SYS__LLSEEK = 140 - SYS_GETDENTS = 141 - SYS__NEWSELECT = 142 - SYS_FLOCK = 143 - SYS_MSYNC = 144 - SYS_READV = 145 - SYS_WRITEV = 146 - SYS_GETSID = 147 - SYS_FDATASYNC = 148 - SYS__SYSCTL = 149 - SYS_MLOCK = 150 - SYS_MUNLOCK = 151 - SYS_MLOCKALL = 152 - SYS_MUNLOCKALL = 153 - SYS_SCHED_SETPARAM = 154 - SYS_SCHED_GETPARAM = 155 - SYS_SCHED_SETSCHEDULER = 156 - SYS_SCHED_GETSCHEDULER = 157 - SYS_SCHED_YIELD = 158 - SYS_SCHED_GET_PRIORITY_MAX = 159 - SYS_SCHED_GET_PRIORITY_MIN = 160 - SYS_SCHED_RR_GET_INTERVAL = 161 - SYS_NANOSLEEP = 162 - SYS_MREMAP = 163 - SYS_SETRESUID = 164 - SYS_GETRESUID = 165 - SYS_VM86 = 166 - SYS_QUERY_MODULE = 167 - SYS_POLL = 168 - SYS_NFSSERVCTL = 169 - SYS_SETRESGID = 170 - SYS_GETRESGID = 171 - SYS_PRCTL = 172 - SYS_RT_SIGRETURN = 173 - SYS_RT_SIGACTION = 174 - SYS_RT_SIGPROCMASK = 175 - SYS_RT_SIGPENDING = 176 - SYS_RT_SIGTIMEDWAIT = 177 - SYS_RT_SIGQUEUEINFO = 178 - SYS_RT_SIGSUSPEND = 179 - SYS_PREAD64 = 180 - SYS_PWRITE64 = 181 - SYS_CHOWN = 182 - SYS_GETCWD = 183 - SYS_CAPGET = 184 - SYS_CAPSET = 185 - SYS_SIGALTSTACK = 186 - SYS_SENDFILE = 187 - SYS_GETPMSG = 188 - SYS_PUTPMSG = 189 - SYS_VFORK = 190 - SYS_UGETRLIMIT = 191 - SYS_MMAP2 = 192 - SYS_TRUNCATE64 = 193 - SYS_FTRUNCATE64 = 194 - SYS_STAT64 = 195 - SYS_LSTAT64 = 196 - SYS_FSTAT64 = 197 - SYS_LCHOWN32 = 198 - SYS_GETUID32 = 199 - SYS_GETGID32 = 200 - SYS_GETEUID32 = 201 - SYS_GETEGID32 = 202 - SYS_SETREUID32 = 203 - SYS_SETREGID32 = 204 - SYS_GETGROUPS32 = 205 - SYS_SETGROUPS32 = 206 - SYS_FCHOWN32 = 207 - SYS_SETRESUID32 = 208 - SYS_GETRESUID32 = 209 - SYS_SETRESGID32 = 210 - SYS_GETRESGID32 = 211 - SYS_CHOWN32 = 212 - SYS_SETUID32 = 213 - SYS_SETGID32 = 214 - SYS_SETFSUID32 = 215 - SYS_SETFSGID32 = 216 - SYS_PIVOT_ROOT = 217 - SYS_MINCORE = 218 - SYS_MADVISE = 219 - SYS_MADVISE1 = 219 - SYS_GETDENTS64 = 220 - SYS_FCNTL64 = 221 - SYS_GETTID = 224 - SYS_READAHEAD = 225 - SYS_SETXATTR = 226 - SYS_LSETXATTR = 227 - SYS_FSETXATTR = 228 - SYS_GETXATTR = 229 - SYS_LGETXATTR = 230 - SYS_FGETXATTR = 231 - SYS_LISTXATTR = 232 - SYS_LLISTXATTR = 233 - SYS_FLISTXATTR = 234 - SYS_REMOVEXATTR = 235 - SYS_LREMOVEXATTR = 236 - SYS_FREMOVEXATTR = 237 - SYS_TKILL = 238 - SYS_SENDFILE64 = 239 - SYS_FUTEX = 240 - SYS_SCHED_SETAFFINITY = 241 - SYS_SCHED_GETAFFINITY = 242 - SYS_SET_THREAD_AREA = 243 - SYS_GET_THREAD_AREA = 244 - SYS_IO_SETUP = 245 - SYS_IO_DESTROY = 246 - SYS_IO_GETEVENTS = 247 - SYS_IO_SUBMIT = 248 - SYS_IO_CANCEL = 249 - SYS_FADVISE64 = 250 - SYS_EXIT_GROUP = 252 - SYS_LOOKUP_DCOOKIE = 253 - SYS_EPOLL_CREATE = 254 - SYS_EPOLL_CTL = 255 - SYS_EPOLL_WAIT = 256 - SYS_REMAP_FILE_PAGES = 257 - SYS_SET_TID_ADDRESS = 258 - SYS_TIMER_CREATE = 259 - SYS_TIMER_SETTIME = 260 - SYS_TIMER_GETTIME = 261 - SYS_TIMER_GETOVERRUN = 262 - SYS_TIMER_DELETE = 263 - SYS_CLOCK_SETTIME = 264 - SYS_CLOCK_GETTIME = 265 - SYS_CLOCK_GETRES = 266 - SYS_CLOCK_NANOSLEEP = 267 - SYS_STATFS64 = 268 - SYS_FSTATFS64 = 269 - SYS_TGKILL = 270 - SYS_UTIMES = 271 - SYS_FADVISE64_64 = 272 - SYS_VSERVER = 273 - SYS_MBIND = 274 - SYS_GET_MEMPOLICY = 275 - SYS_SET_MEMPOLICY = 276 - SYS_MQ_OPEN = 277 - SYS_MQ_UNLINK = 278 - SYS_MQ_TIMEDSEND = 279 - SYS_MQ_TIMEDRECEIVE = 280 - SYS_MQ_NOTIFY = 281 - SYS_MQ_GETSETATTR = 282 - SYS_KEXEC_LOAD = 283 - 
SYS_WAITID = 284 - SYS_ADD_KEY = 286 - SYS_REQUEST_KEY = 287 - SYS_KEYCTL = 288 - SYS_IOPRIO_SET = 289 - SYS_IOPRIO_GET = 290 - SYS_INOTIFY_INIT = 291 - SYS_INOTIFY_ADD_WATCH = 292 - SYS_INOTIFY_RM_WATCH = 293 - SYS_MIGRATE_PAGES = 294 - SYS_OPENAT = 295 - SYS_MKDIRAT = 296 - SYS_MKNODAT = 297 - SYS_FCHOWNAT = 298 - SYS_FUTIMESAT = 299 - SYS_FSTATAT64 = 300 - SYS_UNLINKAT = 301 - SYS_RENAMEAT = 302 - SYS_LINKAT = 303 - SYS_SYMLINKAT = 304 - SYS_READLINKAT = 305 - SYS_FCHMODAT = 306 - SYS_FACCESSAT = 307 - SYS_PSELECT6 = 308 - SYS_PPOLL = 309 - SYS_UNSHARE = 310 - SYS_SET_ROBUST_LIST = 311 - SYS_GET_ROBUST_LIST = 312 - SYS_SPLICE = 313 - SYS_SYNC_FILE_RANGE = 314 - SYS_TEE = 315 - SYS_VMSPLICE = 316 - SYS_MOVE_PAGES = 317 - SYS_GETCPU = 318 - SYS_EPOLL_PWAIT = 319 - SYS_UTIMENSAT = 320 - SYS_SIGNALFD = 321 - SYS_TIMERFD_CREATE = 322 - SYS_EVENTFD = 323 - SYS_FALLOCATE = 324 - SYS_TIMERFD_SETTIME = 325 - SYS_TIMERFD_GETTIME = 326 - SYS_SIGNALFD4 = 327 - SYS_EVENTFD2 = 328 - SYS_EPOLL_CREATE1 = 329 - SYS_DUP3 = 330 - SYS_PIPE2 = 331 - SYS_INOTIFY_INIT1 = 332 - SYS_PREADV = 333 - SYS_PWRITEV = 334 - SYS_RT_TGSIGQUEUEINFO = 335 - SYS_PERF_EVENT_OPEN = 336 - SYS_RECVMMSG = 337 - SYS_FANOTIFY_INIT = 338 - SYS_FANOTIFY_MARK = 339 - SYS_PRLIMIT64 = 340 - SYS_NAME_TO_HANDLE_AT = 341 - SYS_OPEN_BY_HANDLE_AT = 342 - SYS_CLOCK_ADJTIME = 343 - SYS_SYNCFS = 344 - SYS_SENDMMSG = 345 - SYS_SETNS = 346 - SYS_PROCESS_VM_READV = 347 - SYS_PROCESS_VM_WRITEV = 348 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_amd64.go deleted file mode 100644 index ddac31f58ad..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_amd64.go +++ /dev/null @@ -1,321 +0,0 @@ -// mksysnum_linux.pl /usr/include/asm/unistd_64.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build amd64,linux - -package unix - -const ( - SYS_READ = 0 - SYS_WRITE = 1 - SYS_OPEN = 2 - SYS_CLOSE = 3 - SYS_STAT = 4 - SYS_FSTAT = 5 - SYS_LSTAT = 6 - SYS_POLL = 7 - SYS_LSEEK = 8 - SYS_MMAP = 9 - SYS_MPROTECT = 10 - SYS_MUNMAP = 11 - SYS_BRK = 12 - SYS_RT_SIGACTION = 13 - SYS_RT_SIGPROCMASK = 14 - SYS_RT_SIGRETURN = 15 - SYS_IOCTL = 16 - SYS_PREAD64 = 17 - SYS_PWRITE64 = 18 - SYS_READV = 19 - SYS_WRITEV = 20 - SYS_ACCESS = 21 - SYS_PIPE = 22 - SYS_SELECT = 23 - SYS_SCHED_YIELD = 24 - SYS_MREMAP = 25 - SYS_MSYNC = 26 - SYS_MINCORE = 27 - SYS_MADVISE = 28 - SYS_SHMGET = 29 - SYS_SHMAT = 30 - SYS_SHMCTL = 31 - SYS_DUP = 32 - SYS_DUP2 = 33 - SYS_PAUSE = 34 - SYS_NANOSLEEP = 35 - SYS_GETITIMER = 36 - SYS_ALARM = 37 - SYS_SETITIMER = 38 - SYS_GETPID = 39 - SYS_SENDFILE = 40 - SYS_SOCKET = 41 - SYS_CONNECT = 42 - SYS_ACCEPT = 43 - SYS_SENDTO = 44 - SYS_RECVFROM = 45 - SYS_SENDMSG = 46 - SYS_RECVMSG = 47 - SYS_SHUTDOWN = 48 - SYS_BIND = 49 - SYS_LISTEN = 50 - SYS_GETSOCKNAME = 51 - SYS_GETPEERNAME = 52 - SYS_SOCKETPAIR = 53 - SYS_SETSOCKOPT = 54 - SYS_GETSOCKOPT = 55 - SYS_CLONE = 56 - SYS_FORK = 57 - SYS_VFORK = 58 - SYS_EXECVE = 59 - SYS_EXIT = 60 - SYS_WAIT4 = 61 - SYS_KILL = 62 - SYS_UNAME = 63 - SYS_SEMGET = 64 - SYS_SEMOP = 65 - SYS_SEMCTL = 66 - SYS_SHMDT = 67 - SYS_MSGGET = 68 - SYS_MSGSND = 69 - SYS_MSGRCV = 70 - SYS_MSGCTL = 71 - SYS_FCNTL = 72 - SYS_FLOCK = 73 - SYS_FSYNC = 74 - SYS_FDATASYNC = 75 - SYS_TRUNCATE = 76 - SYS_FTRUNCATE = 77 - SYS_GETDENTS = 78 - SYS_GETCWD = 79 - SYS_CHDIR = 80 - SYS_FCHDIR = 81 - 
SYS_RENAME = 82 - SYS_MKDIR = 83 - SYS_RMDIR = 84 - SYS_CREAT = 85 - SYS_LINK = 86 - SYS_UNLINK = 87 - SYS_SYMLINK = 88 - SYS_READLINK = 89 - SYS_CHMOD = 90 - SYS_FCHMOD = 91 - SYS_CHOWN = 92 - SYS_FCHOWN = 93 - SYS_LCHOWN = 94 - SYS_UMASK = 95 - SYS_GETTIMEOFDAY = 96 - SYS_GETRLIMIT = 97 - SYS_GETRUSAGE = 98 - SYS_SYSINFO = 99 - SYS_TIMES = 100 - SYS_PTRACE = 101 - SYS_GETUID = 102 - SYS_SYSLOG = 103 - SYS_GETGID = 104 - SYS_SETUID = 105 - SYS_SETGID = 106 - SYS_GETEUID = 107 - SYS_GETEGID = 108 - SYS_SETPGID = 109 - SYS_GETPPID = 110 - SYS_GETPGRP = 111 - SYS_SETSID = 112 - SYS_SETREUID = 113 - SYS_SETREGID = 114 - SYS_GETGROUPS = 115 - SYS_SETGROUPS = 116 - SYS_SETRESUID = 117 - SYS_GETRESUID = 118 - SYS_SETRESGID = 119 - SYS_GETRESGID = 120 - SYS_GETPGID = 121 - SYS_SETFSUID = 122 - SYS_SETFSGID = 123 - SYS_GETSID = 124 - SYS_CAPGET = 125 - SYS_CAPSET = 126 - SYS_RT_SIGPENDING = 127 - SYS_RT_SIGTIMEDWAIT = 128 - SYS_RT_SIGQUEUEINFO = 129 - SYS_RT_SIGSUSPEND = 130 - SYS_SIGALTSTACK = 131 - SYS_UTIME = 132 - SYS_MKNOD = 133 - SYS_USELIB = 134 - SYS_PERSONALITY = 135 - SYS_USTAT = 136 - SYS_STATFS = 137 - SYS_FSTATFS = 138 - SYS_SYSFS = 139 - SYS_GETPRIORITY = 140 - SYS_SETPRIORITY = 141 - SYS_SCHED_SETPARAM = 142 - SYS_SCHED_GETPARAM = 143 - SYS_SCHED_SETSCHEDULER = 144 - SYS_SCHED_GETSCHEDULER = 145 - SYS_SCHED_GET_PRIORITY_MAX = 146 - SYS_SCHED_GET_PRIORITY_MIN = 147 - SYS_SCHED_RR_GET_INTERVAL = 148 - SYS_MLOCK = 149 - SYS_MUNLOCK = 150 - SYS_MLOCKALL = 151 - SYS_MUNLOCKALL = 152 - SYS_VHANGUP = 153 - SYS_MODIFY_LDT = 154 - SYS_PIVOT_ROOT = 155 - SYS__SYSCTL = 156 - SYS_PRCTL = 157 - SYS_ARCH_PRCTL = 158 - SYS_ADJTIMEX = 159 - SYS_SETRLIMIT = 160 - SYS_CHROOT = 161 - SYS_SYNC = 162 - SYS_ACCT = 163 - SYS_SETTIMEOFDAY = 164 - SYS_MOUNT = 165 - SYS_UMOUNT2 = 166 - SYS_SWAPON = 167 - SYS_SWAPOFF = 168 - SYS_REBOOT = 169 - SYS_SETHOSTNAME = 170 - SYS_SETDOMAINNAME = 171 - SYS_IOPL = 172 - SYS_IOPERM = 173 - SYS_CREATE_MODULE = 174 - SYS_INIT_MODULE = 175 - SYS_DELETE_MODULE = 176 - SYS_GET_KERNEL_SYMS = 177 - SYS_QUERY_MODULE = 178 - SYS_QUOTACTL = 179 - SYS_NFSSERVCTL = 180 - SYS_GETPMSG = 181 - SYS_PUTPMSG = 182 - SYS_AFS_SYSCALL = 183 - SYS_TUXCALL = 184 - SYS_SECURITY = 185 - SYS_GETTID = 186 - SYS_READAHEAD = 187 - SYS_SETXATTR = 188 - SYS_LSETXATTR = 189 - SYS_FSETXATTR = 190 - SYS_GETXATTR = 191 - SYS_LGETXATTR = 192 - SYS_FGETXATTR = 193 - SYS_LISTXATTR = 194 - SYS_LLISTXATTR = 195 - SYS_FLISTXATTR = 196 - SYS_REMOVEXATTR = 197 - SYS_LREMOVEXATTR = 198 - SYS_FREMOVEXATTR = 199 - SYS_TKILL = 200 - SYS_TIME = 201 - SYS_FUTEX = 202 - SYS_SCHED_SETAFFINITY = 203 - SYS_SCHED_GETAFFINITY = 204 - SYS_SET_THREAD_AREA = 205 - SYS_IO_SETUP = 206 - SYS_IO_DESTROY = 207 - SYS_IO_GETEVENTS = 208 - SYS_IO_SUBMIT = 209 - SYS_IO_CANCEL = 210 - SYS_GET_THREAD_AREA = 211 - SYS_LOOKUP_DCOOKIE = 212 - SYS_EPOLL_CREATE = 213 - SYS_EPOLL_CTL_OLD = 214 - SYS_EPOLL_WAIT_OLD = 215 - SYS_REMAP_FILE_PAGES = 216 - SYS_GETDENTS64 = 217 - SYS_SET_TID_ADDRESS = 218 - SYS_RESTART_SYSCALL = 219 - SYS_SEMTIMEDOP = 220 - SYS_FADVISE64 = 221 - SYS_TIMER_CREATE = 222 - SYS_TIMER_SETTIME = 223 - SYS_TIMER_GETTIME = 224 - SYS_TIMER_GETOVERRUN = 225 - SYS_TIMER_DELETE = 226 - SYS_CLOCK_SETTIME = 227 - SYS_CLOCK_GETTIME = 228 - SYS_CLOCK_GETRES = 229 - SYS_CLOCK_NANOSLEEP = 230 - SYS_EXIT_GROUP = 231 - SYS_EPOLL_WAIT = 232 - SYS_EPOLL_CTL = 233 - SYS_TGKILL = 234 - SYS_UTIMES = 235 - SYS_VSERVER = 236 - SYS_MBIND = 237 - SYS_SET_MEMPOLICY = 238 - SYS_GET_MEMPOLICY = 239 - SYS_MQ_OPEN = 240 - SYS_MQ_UNLINK = 241 - 
SYS_MQ_TIMEDSEND = 242 - SYS_MQ_TIMEDRECEIVE = 243 - SYS_MQ_NOTIFY = 244 - SYS_MQ_GETSETATTR = 245 - SYS_KEXEC_LOAD = 246 - SYS_WAITID = 247 - SYS_ADD_KEY = 248 - SYS_REQUEST_KEY = 249 - SYS_KEYCTL = 250 - SYS_IOPRIO_SET = 251 - SYS_IOPRIO_GET = 252 - SYS_INOTIFY_INIT = 253 - SYS_INOTIFY_ADD_WATCH = 254 - SYS_INOTIFY_RM_WATCH = 255 - SYS_MIGRATE_PAGES = 256 - SYS_OPENAT = 257 - SYS_MKDIRAT = 258 - SYS_MKNODAT = 259 - SYS_FCHOWNAT = 260 - SYS_FUTIMESAT = 261 - SYS_NEWFSTATAT = 262 - SYS_UNLINKAT = 263 - SYS_RENAMEAT = 264 - SYS_LINKAT = 265 - SYS_SYMLINKAT = 266 - SYS_READLINKAT = 267 - SYS_FCHMODAT = 268 - SYS_FACCESSAT = 269 - SYS_PSELECT6 = 270 - SYS_PPOLL = 271 - SYS_UNSHARE = 272 - SYS_SET_ROBUST_LIST = 273 - SYS_GET_ROBUST_LIST = 274 - SYS_SPLICE = 275 - SYS_TEE = 276 - SYS_SYNC_FILE_RANGE = 277 - SYS_VMSPLICE = 278 - SYS_MOVE_PAGES = 279 - SYS_UTIMENSAT = 280 - SYS_EPOLL_PWAIT = 281 - SYS_SIGNALFD = 282 - SYS_TIMERFD_CREATE = 283 - SYS_EVENTFD = 284 - SYS_FALLOCATE = 285 - SYS_TIMERFD_SETTIME = 286 - SYS_TIMERFD_GETTIME = 287 - SYS_ACCEPT4 = 288 - SYS_SIGNALFD4 = 289 - SYS_EVENTFD2 = 290 - SYS_EPOLL_CREATE1 = 291 - SYS_DUP3 = 292 - SYS_PIPE2 = 293 - SYS_INOTIFY_INIT1 = 294 - SYS_PREADV = 295 - SYS_PWRITEV = 296 - SYS_RT_TGSIGQUEUEINFO = 297 - SYS_PERF_EVENT_OPEN = 298 - SYS_RECVMMSG = 299 - SYS_FANOTIFY_INIT = 300 - SYS_FANOTIFY_MARK = 301 - SYS_PRLIMIT64 = 302 - SYS_NAME_TO_HANDLE_AT = 303 - SYS_OPEN_BY_HANDLE_AT = 304 - SYS_CLOCK_ADJTIME = 305 - SYS_SYNCFS = 306 - SYS_SENDMMSG = 307 - SYS_SETNS = 308 - SYS_GETCPU = 309 - SYS_PROCESS_VM_READV = 310 - SYS_PROCESS_VM_WRITEV = 311 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_arm.go deleted file mode 100644 index 45ced17fc4c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_arm.go +++ /dev/null @@ -1,356 +0,0 @@ -// mksysnum_linux.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build arm,linux - -package unix - -const ( - SYS_OABI_SYSCALL_BASE = 0 - SYS_SYSCALL_BASE = 0 - SYS_RESTART_SYSCALL = 0 - SYS_EXIT = 1 - SYS_FORK = 2 - SYS_READ = 3 - SYS_WRITE = 4 - SYS_OPEN = 5 - SYS_CLOSE = 6 - SYS_CREAT = 8 - SYS_LINK = 9 - SYS_UNLINK = 10 - SYS_EXECVE = 11 - SYS_CHDIR = 12 - SYS_TIME = 13 - SYS_MKNOD = 14 - SYS_CHMOD = 15 - SYS_LCHOWN = 16 - SYS_LSEEK = 19 - SYS_GETPID = 20 - SYS_MOUNT = 21 - SYS_UMOUNT = 22 - SYS_SETUID = 23 - SYS_GETUID = 24 - SYS_STIME = 25 - SYS_PTRACE = 26 - SYS_ALARM = 27 - SYS_PAUSE = 29 - SYS_UTIME = 30 - SYS_ACCESS = 33 - SYS_NICE = 34 - SYS_SYNC = 36 - SYS_KILL = 37 - SYS_RENAME = 38 - SYS_MKDIR = 39 - SYS_RMDIR = 40 - SYS_DUP = 41 - SYS_PIPE = 42 - SYS_TIMES = 43 - SYS_BRK = 45 - SYS_SETGID = 46 - SYS_GETGID = 47 - SYS_GETEUID = 49 - SYS_GETEGID = 50 - SYS_ACCT = 51 - SYS_UMOUNT2 = 52 - SYS_IOCTL = 54 - SYS_FCNTL = 55 - SYS_SETPGID = 57 - SYS_UMASK = 60 - SYS_CHROOT = 61 - SYS_USTAT = 62 - SYS_DUP2 = 63 - SYS_GETPPID = 64 - SYS_GETPGRP = 65 - SYS_SETSID = 66 - SYS_SIGACTION = 67 - SYS_SETREUID = 70 - SYS_SETREGID = 71 - SYS_SIGSUSPEND = 72 - SYS_SIGPENDING = 73 - SYS_SETHOSTNAME = 74 - SYS_SETRLIMIT = 75 - SYS_GETRLIMIT = 76 - SYS_GETRUSAGE = 77 - SYS_GETTIMEOFDAY = 78 - SYS_SETTIMEOFDAY = 79 - SYS_GETGROUPS = 80 - SYS_SETGROUPS = 81 - SYS_SELECT = 82 - SYS_SYMLINK = 83 - SYS_READLINK = 85 - SYS_USELIB = 86 - SYS_SWAPON = 87 - SYS_REBOOT = 88 - SYS_READDIR = 89 - 
SYS_MMAP = 90 - SYS_MUNMAP = 91 - SYS_TRUNCATE = 92 - SYS_FTRUNCATE = 93 - SYS_FCHMOD = 94 - SYS_FCHOWN = 95 - SYS_GETPRIORITY = 96 - SYS_SETPRIORITY = 97 - SYS_STATFS = 99 - SYS_FSTATFS = 100 - SYS_SOCKETCALL = 102 - SYS_SYSLOG = 103 - SYS_SETITIMER = 104 - SYS_GETITIMER = 105 - SYS_STAT = 106 - SYS_LSTAT = 107 - SYS_FSTAT = 108 - SYS_VHANGUP = 111 - SYS_SYSCALL = 113 - SYS_WAIT4 = 114 - SYS_SWAPOFF = 115 - SYS_SYSINFO = 116 - SYS_IPC = 117 - SYS_FSYNC = 118 - SYS_SIGRETURN = 119 - SYS_CLONE = 120 - SYS_SETDOMAINNAME = 121 - SYS_UNAME = 122 - SYS_ADJTIMEX = 124 - SYS_MPROTECT = 125 - SYS_SIGPROCMASK = 126 - SYS_INIT_MODULE = 128 - SYS_DELETE_MODULE = 129 - SYS_QUOTACTL = 131 - SYS_GETPGID = 132 - SYS_FCHDIR = 133 - SYS_BDFLUSH = 134 - SYS_SYSFS = 135 - SYS_PERSONALITY = 136 - SYS_SETFSUID = 138 - SYS_SETFSGID = 139 - SYS__LLSEEK = 140 - SYS_GETDENTS = 141 - SYS__NEWSELECT = 142 - SYS_FLOCK = 143 - SYS_MSYNC = 144 - SYS_READV = 145 - SYS_WRITEV = 146 - SYS_GETSID = 147 - SYS_FDATASYNC = 148 - SYS__SYSCTL = 149 - SYS_MLOCK = 150 - SYS_MUNLOCK = 151 - SYS_MLOCKALL = 152 - SYS_MUNLOCKALL = 153 - SYS_SCHED_SETPARAM = 154 - SYS_SCHED_GETPARAM = 155 - SYS_SCHED_SETSCHEDULER = 156 - SYS_SCHED_GETSCHEDULER = 157 - SYS_SCHED_YIELD = 158 - SYS_SCHED_GET_PRIORITY_MAX = 159 - SYS_SCHED_GET_PRIORITY_MIN = 160 - SYS_SCHED_RR_GET_INTERVAL = 161 - SYS_NANOSLEEP = 162 - SYS_MREMAP = 163 - SYS_SETRESUID = 164 - SYS_GETRESUID = 165 - SYS_POLL = 168 - SYS_NFSSERVCTL = 169 - SYS_SETRESGID = 170 - SYS_GETRESGID = 171 - SYS_PRCTL = 172 - SYS_RT_SIGRETURN = 173 - SYS_RT_SIGACTION = 174 - SYS_RT_SIGPROCMASK = 175 - SYS_RT_SIGPENDING = 176 - SYS_RT_SIGTIMEDWAIT = 177 - SYS_RT_SIGQUEUEINFO = 178 - SYS_RT_SIGSUSPEND = 179 - SYS_PREAD64 = 180 - SYS_PWRITE64 = 181 - SYS_CHOWN = 182 - SYS_GETCWD = 183 - SYS_CAPGET = 184 - SYS_CAPSET = 185 - SYS_SIGALTSTACK = 186 - SYS_SENDFILE = 187 - SYS_VFORK = 190 - SYS_UGETRLIMIT = 191 - SYS_MMAP2 = 192 - SYS_TRUNCATE64 = 193 - SYS_FTRUNCATE64 = 194 - SYS_STAT64 = 195 - SYS_LSTAT64 = 196 - SYS_FSTAT64 = 197 - SYS_LCHOWN32 = 198 - SYS_GETUID32 = 199 - SYS_GETGID32 = 200 - SYS_GETEUID32 = 201 - SYS_GETEGID32 = 202 - SYS_SETREUID32 = 203 - SYS_SETREGID32 = 204 - SYS_GETGROUPS32 = 205 - SYS_SETGROUPS32 = 206 - SYS_FCHOWN32 = 207 - SYS_SETRESUID32 = 208 - SYS_GETRESUID32 = 209 - SYS_SETRESGID32 = 210 - SYS_GETRESGID32 = 211 - SYS_CHOWN32 = 212 - SYS_SETUID32 = 213 - SYS_SETGID32 = 214 - SYS_SETFSUID32 = 215 - SYS_SETFSGID32 = 216 - SYS_GETDENTS64 = 217 - SYS_PIVOT_ROOT = 218 - SYS_MINCORE = 219 - SYS_MADVISE = 220 - SYS_FCNTL64 = 221 - SYS_GETTID = 224 - SYS_READAHEAD = 225 - SYS_SETXATTR = 226 - SYS_LSETXATTR = 227 - SYS_FSETXATTR = 228 - SYS_GETXATTR = 229 - SYS_LGETXATTR = 230 - SYS_FGETXATTR = 231 - SYS_LISTXATTR = 232 - SYS_LLISTXATTR = 233 - SYS_FLISTXATTR = 234 - SYS_REMOVEXATTR = 235 - SYS_LREMOVEXATTR = 236 - SYS_FREMOVEXATTR = 237 - SYS_TKILL = 238 - SYS_SENDFILE64 = 239 - SYS_FUTEX = 240 - SYS_SCHED_SETAFFINITY = 241 - SYS_SCHED_GETAFFINITY = 242 - SYS_IO_SETUP = 243 - SYS_IO_DESTROY = 244 - SYS_IO_GETEVENTS = 245 - SYS_IO_SUBMIT = 246 - SYS_IO_CANCEL = 247 - SYS_EXIT_GROUP = 248 - SYS_LOOKUP_DCOOKIE = 249 - SYS_EPOLL_CREATE = 250 - SYS_EPOLL_CTL = 251 - SYS_EPOLL_WAIT = 252 - SYS_REMAP_FILE_PAGES = 253 - SYS_SET_TID_ADDRESS = 256 - SYS_TIMER_CREATE = 257 - SYS_TIMER_SETTIME = 258 - SYS_TIMER_GETTIME = 259 - SYS_TIMER_GETOVERRUN = 260 - SYS_TIMER_DELETE = 261 - SYS_CLOCK_SETTIME = 262 - SYS_CLOCK_GETTIME = 263 - SYS_CLOCK_GETRES = 264 - SYS_CLOCK_NANOSLEEP = 265 - SYS_STATFS64 
= 266 - SYS_FSTATFS64 = 267 - SYS_TGKILL = 268 - SYS_UTIMES = 269 - SYS_ARM_FADVISE64_64 = 270 - SYS_PCICONFIG_IOBASE = 271 - SYS_PCICONFIG_READ = 272 - SYS_PCICONFIG_WRITE = 273 - SYS_MQ_OPEN = 274 - SYS_MQ_UNLINK = 275 - SYS_MQ_TIMEDSEND = 276 - SYS_MQ_TIMEDRECEIVE = 277 - SYS_MQ_NOTIFY = 278 - SYS_MQ_GETSETATTR = 279 - SYS_WAITID = 280 - SYS_SOCKET = 281 - SYS_BIND = 282 - SYS_CONNECT = 283 - SYS_LISTEN = 284 - SYS_ACCEPT = 285 - SYS_GETSOCKNAME = 286 - SYS_GETPEERNAME = 287 - SYS_SOCKETPAIR = 288 - SYS_SEND = 289 - SYS_SENDTO = 290 - SYS_RECV = 291 - SYS_RECVFROM = 292 - SYS_SHUTDOWN = 293 - SYS_SETSOCKOPT = 294 - SYS_GETSOCKOPT = 295 - SYS_SENDMSG = 296 - SYS_RECVMSG = 297 - SYS_SEMOP = 298 - SYS_SEMGET = 299 - SYS_SEMCTL = 300 - SYS_MSGSND = 301 - SYS_MSGRCV = 302 - SYS_MSGGET = 303 - SYS_MSGCTL = 304 - SYS_SHMAT = 305 - SYS_SHMDT = 306 - SYS_SHMGET = 307 - SYS_SHMCTL = 308 - SYS_ADD_KEY = 309 - SYS_REQUEST_KEY = 310 - SYS_KEYCTL = 311 - SYS_SEMTIMEDOP = 312 - SYS_VSERVER = 313 - SYS_IOPRIO_SET = 314 - SYS_IOPRIO_GET = 315 - SYS_INOTIFY_INIT = 316 - SYS_INOTIFY_ADD_WATCH = 317 - SYS_INOTIFY_RM_WATCH = 318 - SYS_MBIND = 319 - SYS_GET_MEMPOLICY = 320 - SYS_SET_MEMPOLICY = 321 - SYS_OPENAT = 322 - SYS_MKDIRAT = 323 - SYS_MKNODAT = 324 - SYS_FCHOWNAT = 325 - SYS_FUTIMESAT = 326 - SYS_FSTATAT64 = 327 - SYS_UNLINKAT = 328 - SYS_RENAMEAT = 329 - SYS_LINKAT = 330 - SYS_SYMLINKAT = 331 - SYS_READLINKAT = 332 - SYS_FCHMODAT = 333 - SYS_FACCESSAT = 334 - SYS_PSELECT6 = 335 - SYS_PPOLL = 336 - SYS_UNSHARE = 337 - SYS_SET_ROBUST_LIST = 338 - SYS_GET_ROBUST_LIST = 339 - SYS_SPLICE = 340 - SYS_ARM_SYNC_FILE_RANGE = 341 - SYS_TEE = 342 - SYS_VMSPLICE = 343 - SYS_MOVE_PAGES = 344 - SYS_GETCPU = 345 - SYS_EPOLL_PWAIT = 346 - SYS_KEXEC_LOAD = 347 - SYS_UTIMENSAT = 348 - SYS_SIGNALFD = 349 - SYS_TIMERFD_CREATE = 350 - SYS_EVENTFD = 351 - SYS_FALLOCATE = 352 - SYS_TIMERFD_SETTIME = 353 - SYS_TIMERFD_GETTIME = 354 - SYS_SIGNALFD4 = 355 - SYS_EVENTFD2 = 356 - SYS_EPOLL_CREATE1 = 357 - SYS_DUP3 = 358 - SYS_PIPE2 = 359 - SYS_INOTIFY_INIT1 = 360 - SYS_PREADV = 361 - SYS_PWRITEV = 362 - SYS_RT_TGSIGQUEUEINFO = 363 - SYS_PERF_EVENT_OPEN = 364 - SYS_RECVMMSG = 365 - SYS_ACCEPT4 = 366 - SYS_FANOTIFY_INIT = 367 - SYS_FANOTIFY_MARK = 368 - SYS_PRLIMIT64 = 369 - SYS_NAME_TO_HANDLE_AT = 370 - SYS_OPEN_BY_HANDLE_AT = 371 - SYS_CLOCK_ADJTIME = 372 - SYS_SYNCFS = 373 - SYS_SENDMMSG = 374 - SYS_SETNS = 375 - SYS_PROCESS_VM_READV = 376 - SYS_PROCESS_VM_WRITEV = 377 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_arm64.go deleted file mode 100644 index 2e9514f2803..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_arm64.go +++ /dev/null @@ -1,272 +0,0 @@ -// mksysnum_linux.pl /usr/include/asm-generic/unistd.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build arm64,linux - -package unix - -const ( - SYS_IO_SETUP = 0 - SYS_IO_DESTROY = 1 - SYS_IO_SUBMIT = 2 - SYS_IO_CANCEL = 3 - SYS_IO_GETEVENTS = 4 - SYS_SETXATTR = 5 - SYS_LSETXATTR = 6 - SYS_FSETXATTR = 7 - SYS_GETXATTR = 8 - SYS_LGETXATTR = 9 - SYS_FGETXATTR = 10 - SYS_LISTXATTR = 11 - SYS_LLISTXATTR = 12 - SYS_FLISTXATTR = 13 - SYS_REMOVEXATTR = 14 - SYS_LREMOVEXATTR = 15 - SYS_FREMOVEXATTR = 16 - SYS_GETCWD = 17 - SYS_LOOKUP_DCOOKIE = 18 - SYS_EVENTFD2 = 19 - SYS_EPOLL_CREATE1 = 20 - SYS_EPOLL_CTL = 21 - SYS_EPOLL_PWAIT = 22 - 
SYS_DUP = 23 - SYS_DUP3 = 24 - SYS_FCNTL = 25 - SYS_INOTIFY_INIT1 = 26 - SYS_INOTIFY_ADD_WATCH = 27 - SYS_INOTIFY_RM_WATCH = 28 - SYS_IOCTL = 29 - SYS_IOPRIO_SET = 30 - SYS_IOPRIO_GET = 31 - SYS_FLOCK = 32 - SYS_MKNODAT = 33 - SYS_MKDIRAT = 34 - SYS_UNLINKAT = 35 - SYS_SYMLINKAT = 36 - SYS_LINKAT = 37 - SYS_RENAMEAT = 38 - SYS_UMOUNT2 = 39 - SYS_MOUNT = 40 - SYS_PIVOT_ROOT = 41 - SYS_NFSSERVCTL = 42 - SYS_STATFS = 43 - SYS_FSTATFS = 44 - SYS_TRUNCATE = 45 - SYS_FTRUNCATE = 46 - SYS_FALLOCATE = 47 - SYS_FACCESSAT = 48 - SYS_CHDIR = 49 - SYS_FCHDIR = 50 - SYS_CHROOT = 51 - SYS_FCHMOD = 52 - SYS_FCHMODAT = 53 - SYS_FCHOWNAT = 54 - SYS_FCHOWN = 55 - SYS_OPENAT = 56 - SYS_CLOSE = 57 - SYS_VHANGUP = 58 - SYS_PIPE2 = 59 - SYS_QUOTACTL = 60 - SYS_GETDENTS64 = 61 - SYS_LSEEK = 62 - SYS_READ = 63 - SYS_WRITE = 64 - SYS_READV = 65 - SYS_WRITEV = 66 - SYS_PREAD64 = 67 - SYS_PWRITE64 = 68 - SYS_PREADV = 69 - SYS_PWRITEV = 70 - SYS_SENDFILE = 71 - SYS_PSELECT6 = 72 - SYS_PPOLL = 73 - SYS_SIGNALFD4 = 74 - SYS_VMSPLICE = 75 - SYS_SPLICE = 76 - SYS_TEE = 77 - SYS_READLINKAT = 78 - SYS_FSTATAT = 79 - SYS_FSTAT = 80 - SYS_SYNC = 81 - SYS_FSYNC = 82 - SYS_FDATASYNC = 83 - SYS_SYNC_FILE_RANGE = 84 - SYS_TIMERFD_CREATE = 85 - SYS_TIMERFD_SETTIME = 86 - SYS_TIMERFD_GETTIME = 87 - SYS_UTIMENSAT = 88 - SYS_ACCT = 89 - SYS_CAPGET = 90 - SYS_CAPSET = 91 - SYS_PERSONALITY = 92 - SYS_EXIT = 93 - SYS_EXIT_GROUP = 94 - SYS_WAITID = 95 - SYS_SET_TID_ADDRESS = 96 - SYS_UNSHARE = 97 - SYS_FUTEX = 98 - SYS_SET_ROBUST_LIST = 99 - SYS_GET_ROBUST_LIST = 100 - SYS_NANOSLEEP = 101 - SYS_GETITIMER = 102 - SYS_SETITIMER = 103 - SYS_KEXEC_LOAD = 104 - SYS_INIT_MODULE = 105 - SYS_DELETE_MODULE = 106 - SYS_TIMER_CREATE = 107 - SYS_TIMER_GETTIME = 108 - SYS_TIMER_GETOVERRUN = 109 - SYS_TIMER_SETTIME = 110 - SYS_TIMER_DELETE = 111 - SYS_CLOCK_SETTIME = 112 - SYS_CLOCK_GETTIME = 113 - SYS_CLOCK_GETRES = 114 - SYS_CLOCK_NANOSLEEP = 115 - SYS_SYSLOG = 116 - SYS_PTRACE = 117 - SYS_SCHED_SETPARAM = 118 - SYS_SCHED_SETSCHEDULER = 119 - SYS_SCHED_GETSCHEDULER = 120 - SYS_SCHED_GETPARAM = 121 - SYS_SCHED_SETAFFINITY = 122 - SYS_SCHED_GETAFFINITY = 123 - SYS_SCHED_YIELD = 124 - SYS_SCHED_GET_PRIORITY_MAX = 125 - SYS_SCHED_GET_PRIORITY_MIN = 126 - SYS_SCHED_RR_GET_INTERVAL = 127 - SYS_RESTART_SYSCALL = 128 - SYS_KILL = 129 - SYS_TKILL = 130 - SYS_TGKILL = 131 - SYS_SIGALTSTACK = 132 - SYS_RT_SIGSUSPEND = 133 - SYS_RT_SIGACTION = 134 - SYS_RT_SIGPROCMASK = 135 - SYS_RT_SIGPENDING = 136 - SYS_RT_SIGTIMEDWAIT = 137 - SYS_RT_SIGQUEUEINFO = 138 - SYS_RT_SIGRETURN = 139 - SYS_SETPRIORITY = 140 - SYS_GETPRIORITY = 141 - SYS_REBOOT = 142 - SYS_SETREGID = 143 - SYS_SETGID = 144 - SYS_SETREUID = 145 - SYS_SETUID = 146 - SYS_SETRESUID = 147 - SYS_GETRESUID = 148 - SYS_SETRESGID = 149 - SYS_GETRESGID = 150 - SYS_SETFSUID = 151 - SYS_SETFSGID = 152 - SYS_TIMES = 153 - SYS_SETPGID = 154 - SYS_GETPGID = 155 - SYS_GETSID = 156 - SYS_SETSID = 157 - SYS_GETGROUPS = 158 - SYS_SETGROUPS = 159 - SYS_UNAME = 160 - SYS_SETHOSTNAME = 161 - SYS_SETDOMAINNAME = 162 - SYS_GETRLIMIT = 163 - SYS_SETRLIMIT = 164 - SYS_GETRUSAGE = 165 - SYS_UMASK = 166 - SYS_PRCTL = 167 - SYS_GETCPU = 168 - SYS_GETTIMEOFDAY = 169 - SYS_SETTIMEOFDAY = 170 - SYS_ADJTIMEX = 171 - SYS_GETPID = 172 - SYS_GETPPID = 173 - SYS_GETUID = 174 - SYS_GETEUID = 175 - SYS_GETGID = 176 - SYS_GETEGID = 177 - SYS_GETTID = 178 - SYS_SYSINFO = 179 - SYS_MQ_OPEN = 180 - SYS_MQ_UNLINK = 181 - SYS_MQ_TIMEDSEND = 182 - SYS_MQ_TIMEDRECEIVE = 183 - SYS_MQ_NOTIFY = 184 - SYS_MQ_GETSETATTR = 185 - SYS_MSGGET = 186 - 
SYS_MSGCTL = 187 - SYS_MSGRCV = 188 - SYS_MSGSND = 189 - SYS_SEMGET = 190 - SYS_SEMCTL = 191 - SYS_SEMTIMEDOP = 192 - SYS_SEMOP = 193 - SYS_SHMGET = 194 - SYS_SHMCTL = 195 - SYS_SHMAT = 196 - SYS_SHMDT = 197 - SYS_SOCKET = 198 - SYS_SOCKETPAIR = 199 - SYS_BIND = 200 - SYS_LISTEN = 201 - SYS_ACCEPT = 202 - SYS_CONNECT = 203 - SYS_GETSOCKNAME = 204 - SYS_GETPEERNAME = 205 - SYS_SENDTO = 206 - SYS_RECVFROM = 207 - SYS_SETSOCKOPT = 208 - SYS_GETSOCKOPT = 209 - SYS_SHUTDOWN = 210 - SYS_SENDMSG = 211 - SYS_RECVMSG = 212 - SYS_READAHEAD = 213 - SYS_BRK = 214 - SYS_MUNMAP = 215 - SYS_MREMAP = 216 - SYS_ADD_KEY = 217 - SYS_REQUEST_KEY = 218 - SYS_KEYCTL = 219 - SYS_CLONE = 220 - SYS_EXECVE = 221 - SYS_MMAP = 222 - SYS_FADVISE64 = 223 - SYS_SWAPON = 224 - SYS_SWAPOFF = 225 - SYS_MPROTECT = 226 - SYS_MSYNC = 227 - SYS_MLOCK = 228 - SYS_MUNLOCK = 229 - SYS_MLOCKALL = 230 - SYS_MUNLOCKALL = 231 - SYS_MINCORE = 232 - SYS_MADVISE = 233 - SYS_REMAP_FILE_PAGES = 234 - SYS_MBIND = 235 - SYS_GET_MEMPOLICY = 236 - SYS_SET_MEMPOLICY = 237 - SYS_MIGRATE_PAGES = 238 - SYS_MOVE_PAGES = 239 - SYS_RT_TGSIGQUEUEINFO = 240 - SYS_PERF_EVENT_OPEN = 241 - SYS_ACCEPT4 = 242 - SYS_RECVMMSG = 243 - SYS_ARCH_SPECIFIC_SYSCALL = 244 - SYS_WAIT4 = 260 - SYS_PRLIMIT64 = 261 - SYS_FANOTIFY_INIT = 262 - SYS_FANOTIFY_MARK = 263 - SYS_NAME_TO_HANDLE_AT = 264 - SYS_OPEN_BY_HANDLE_AT = 265 - SYS_CLOCK_ADJTIME = 266 - SYS_SYNCFS = 267 - SYS_SETNS = 268 - SYS_SENDMMSG = 269 - SYS_PROCESS_VM_READV = 270 - SYS_PROCESS_VM_WRITEV = 271 - SYS_KCMP = 272 - SYS_FINIT_MODULE = 273 - SYS_SCHED_SETATTR = 274 - SYS_SCHED_GETATTR = 275 - SYS_RENAMEAT2 = 276 - SYS_SECCOMP = 277 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_ppc64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_ppc64.go deleted file mode 100644 index e1b08f00d3b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_ppc64.go +++ /dev/null @@ -1,360 +0,0 @@ -// mksysnum_linux.pl /usr/include/asm/unistd.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build ppc64,linux - -package unix - -const ( - SYS_RESTART_SYSCALL = 0 - SYS_EXIT = 1 - SYS_FORK = 2 - SYS_READ = 3 - SYS_WRITE = 4 - SYS_OPEN = 5 - SYS_CLOSE = 6 - SYS_WAITPID = 7 - SYS_CREAT = 8 - SYS_LINK = 9 - SYS_UNLINK = 10 - SYS_EXECVE = 11 - SYS_CHDIR = 12 - SYS_TIME = 13 - SYS_MKNOD = 14 - SYS_CHMOD = 15 - SYS_LCHOWN = 16 - SYS_BREAK = 17 - SYS_OLDSTAT = 18 - SYS_LSEEK = 19 - SYS_GETPID = 20 - SYS_MOUNT = 21 - SYS_UMOUNT = 22 - SYS_SETUID = 23 - SYS_GETUID = 24 - SYS_STIME = 25 - SYS_PTRACE = 26 - SYS_ALARM = 27 - SYS_OLDFSTAT = 28 - SYS_PAUSE = 29 - SYS_UTIME = 30 - SYS_STTY = 31 - SYS_GTTY = 32 - SYS_ACCESS = 33 - SYS_NICE = 34 - SYS_FTIME = 35 - SYS_SYNC = 36 - SYS_KILL = 37 - SYS_RENAME = 38 - SYS_MKDIR = 39 - SYS_RMDIR = 40 - SYS_DUP = 41 - SYS_PIPE = 42 - SYS_TIMES = 43 - SYS_PROF = 44 - SYS_BRK = 45 - SYS_SETGID = 46 - SYS_GETGID = 47 - SYS_SIGNAL = 48 - SYS_GETEUID = 49 - SYS_GETEGID = 50 - SYS_ACCT = 51 - SYS_UMOUNT2 = 52 - SYS_LOCK = 53 - SYS_IOCTL = 54 - SYS_FCNTL = 55 - SYS_MPX = 56 - SYS_SETPGID = 57 - SYS_ULIMIT = 58 - SYS_OLDOLDUNAME = 59 - SYS_UMASK = 60 - SYS_CHROOT = 61 - SYS_USTAT = 62 - SYS_DUP2 = 63 - SYS_GETPPID = 64 - SYS_GETPGRP = 65 - SYS_SETSID = 66 - SYS_SIGACTION = 67 - SYS_SGETMASK = 68 - SYS_SSETMASK = 69 - SYS_SETREUID = 70 - SYS_SETREGID = 71 - SYS_SIGSUSPEND = 72 - SYS_SIGPENDING = 73 - SYS_SETHOSTNAME = 74 - 
SYS_SETRLIMIT = 75 - SYS_GETRLIMIT = 76 - SYS_GETRUSAGE = 77 - SYS_GETTIMEOFDAY = 78 - SYS_SETTIMEOFDAY = 79 - SYS_GETGROUPS = 80 - SYS_SETGROUPS = 81 - SYS_SELECT = 82 - SYS_SYMLINK = 83 - SYS_OLDLSTAT = 84 - SYS_READLINK = 85 - SYS_USELIB = 86 - SYS_SWAPON = 87 - SYS_REBOOT = 88 - SYS_READDIR = 89 - SYS_MMAP = 90 - SYS_MUNMAP = 91 - SYS_TRUNCATE = 92 - SYS_FTRUNCATE = 93 - SYS_FCHMOD = 94 - SYS_FCHOWN = 95 - SYS_GETPRIORITY = 96 - SYS_SETPRIORITY = 97 - SYS_PROFIL = 98 - SYS_STATFS = 99 - SYS_FSTATFS = 100 - SYS_IOPERM = 101 - SYS_SOCKETCALL = 102 - SYS_SYSLOG = 103 - SYS_SETITIMER = 104 - SYS_GETITIMER = 105 - SYS_STAT = 106 - SYS_LSTAT = 107 - SYS_FSTAT = 108 - SYS_OLDUNAME = 109 - SYS_IOPL = 110 - SYS_VHANGUP = 111 - SYS_IDLE = 112 - SYS_VM86 = 113 - SYS_WAIT4 = 114 - SYS_SWAPOFF = 115 - SYS_SYSINFO = 116 - SYS_IPC = 117 - SYS_FSYNC = 118 - SYS_SIGRETURN = 119 - SYS_CLONE = 120 - SYS_SETDOMAINNAME = 121 - SYS_UNAME = 122 - SYS_MODIFY_LDT = 123 - SYS_ADJTIMEX = 124 - SYS_MPROTECT = 125 - SYS_SIGPROCMASK = 126 - SYS_CREATE_MODULE = 127 - SYS_INIT_MODULE = 128 - SYS_DELETE_MODULE = 129 - SYS_GET_KERNEL_SYMS = 130 - SYS_QUOTACTL = 131 - SYS_GETPGID = 132 - SYS_FCHDIR = 133 - SYS_BDFLUSH = 134 - SYS_SYSFS = 135 - SYS_PERSONALITY = 136 - SYS_AFS_SYSCALL = 137 - SYS_SETFSUID = 138 - SYS_SETFSGID = 139 - SYS__LLSEEK = 140 - SYS_GETDENTS = 141 - SYS__NEWSELECT = 142 - SYS_FLOCK = 143 - SYS_MSYNC = 144 - SYS_READV = 145 - SYS_WRITEV = 146 - SYS_GETSID = 147 - SYS_FDATASYNC = 148 - SYS__SYSCTL = 149 - SYS_MLOCK = 150 - SYS_MUNLOCK = 151 - SYS_MLOCKALL = 152 - SYS_MUNLOCKALL = 153 - SYS_SCHED_SETPARAM = 154 - SYS_SCHED_GETPARAM = 155 - SYS_SCHED_SETSCHEDULER = 156 - SYS_SCHED_GETSCHEDULER = 157 - SYS_SCHED_YIELD = 158 - SYS_SCHED_GET_PRIORITY_MAX = 159 - SYS_SCHED_GET_PRIORITY_MIN = 160 - SYS_SCHED_RR_GET_INTERVAL = 161 - SYS_NANOSLEEP = 162 - SYS_MREMAP = 163 - SYS_SETRESUID = 164 - SYS_GETRESUID = 165 - SYS_QUERY_MODULE = 166 - SYS_POLL = 167 - SYS_NFSSERVCTL = 168 - SYS_SETRESGID = 169 - SYS_GETRESGID = 170 - SYS_PRCTL = 171 - SYS_RT_SIGRETURN = 172 - SYS_RT_SIGACTION = 173 - SYS_RT_SIGPROCMASK = 174 - SYS_RT_SIGPENDING = 175 - SYS_RT_SIGTIMEDWAIT = 176 - SYS_RT_SIGQUEUEINFO = 177 - SYS_RT_SIGSUSPEND = 178 - SYS_PREAD64 = 179 - SYS_PWRITE64 = 180 - SYS_CHOWN = 181 - SYS_GETCWD = 182 - SYS_CAPGET = 183 - SYS_CAPSET = 184 - SYS_SIGALTSTACK = 185 - SYS_SENDFILE = 186 - SYS_GETPMSG = 187 - SYS_PUTPMSG = 188 - SYS_VFORK = 189 - SYS_UGETRLIMIT = 190 - SYS_READAHEAD = 191 - SYS_PCICONFIG_READ = 198 - SYS_PCICONFIG_WRITE = 199 - SYS_PCICONFIG_IOBASE = 200 - SYS_MULTIPLEXER = 201 - SYS_GETDENTS64 = 202 - SYS_PIVOT_ROOT = 203 - SYS_MADVISE = 205 - SYS_MINCORE = 206 - SYS_GETTID = 207 - SYS_TKILL = 208 - SYS_SETXATTR = 209 - SYS_LSETXATTR = 210 - SYS_FSETXATTR = 211 - SYS_GETXATTR = 212 - SYS_LGETXATTR = 213 - SYS_FGETXATTR = 214 - SYS_LISTXATTR = 215 - SYS_LLISTXATTR = 216 - SYS_FLISTXATTR = 217 - SYS_REMOVEXATTR = 218 - SYS_LREMOVEXATTR = 219 - SYS_FREMOVEXATTR = 220 - SYS_FUTEX = 221 - SYS_SCHED_SETAFFINITY = 222 - SYS_SCHED_GETAFFINITY = 223 - SYS_TUXCALL = 225 - SYS_IO_SETUP = 227 - SYS_IO_DESTROY = 228 - SYS_IO_GETEVENTS = 229 - SYS_IO_SUBMIT = 230 - SYS_IO_CANCEL = 231 - SYS_SET_TID_ADDRESS = 232 - SYS_FADVISE64 = 233 - SYS_EXIT_GROUP = 234 - SYS_LOOKUP_DCOOKIE = 235 - SYS_EPOLL_CREATE = 236 - SYS_EPOLL_CTL = 237 - SYS_EPOLL_WAIT = 238 - SYS_REMAP_FILE_PAGES = 239 - SYS_TIMER_CREATE = 240 - SYS_TIMER_SETTIME = 241 - SYS_TIMER_GETTIME = 242 - SYS_TIMER_GETOVERRUN = 243 - SYS_TIMER_DELETE = 244 - 
SYS_CLOCK_SETTIME = 245 - SYS_CLOCK_GETTIME = 246 - SYS_CLOCK_GETRES = 247 - SYS_CLOCK_NANOSLEEP = 248 - SYS_SWAPCONTEXT = 249 - SYS_TGKILL = 250 - SYS_UTIMES = 251 - SYS_STATFS64 = 252 - SYS_FSTATFS64 = 253 - SYS_RTAS = 255 - SYS_SYS_DEBUG_SETCONTEXT = 256 - SYS_MIGRATE_PAGES = 258 - SYS_MBIND = 259 - SYS_GET_MEMPOLICY = 260 - SYS_SET_MEMPOLICY = 261 - SYS_MQ_OPEN = 262 - SYS_MQ_UNLINK = 263 - SYS_MQ_TIMEDSEND = 264 - SYS_MQ_TIMEDRECEIVE = 265 - SYS_MQ_NOTIFY = 266 - SYS_MQ_GETSETATTR = 267 - SYS_KEXEC_LOAD = 268 - SYS_ADD_KEY = 269 - SYS_REQUEST_KEY = 270 - SYS_KEYCTL = 271 - SYS_WAITID = 272 - SYS_IOPRIO_SET = 273 - SYS_IOPRIO_GET = 274 - SYS_INOTIFY_INIT = 275 - SYS_INOTIFY_ADD_WATCH = 276 - SYS_INOTIFY_RM_WATCH = 277 - SYS_SPU_RUN = 278 - SYS_SPU_CREATE = 279 - SYS_PSELECT6 = 280 - SYS_PPOLL = 281 - SYS_UNSHARE = 282 - SYS_SPLICE = 283 - SYS_TEE = 284 - SYS_VMSPLICE = 285 - SYS_OPENAT = 286 - SYS_MKDIRAT = 287 - SYS_MKNODAT = 288 - SYS_FCHOWNAT = 289 - SYS_FUTIMESAT = 290 - SYS_NEWFSTATAT = 291 - SYS_UNLINKAT = 292 - SYS_RENAMEAT = 293 - SYS_LINKAT = 294 - SYS_SYMLINKAT = 295 - SYS_READLINKAT = 296 - SYS_FCHMODAT = 297 - SYS_FACCESSAT = 298 - SYS_GET_ROBUST_LIST = 299 - SYS_SET_ROBUST_LIST = 300 - SYS_MOVE_PAGES = 301 - SYS_GETCPU = 302 - SYS_EPOLL_PWAIT = 303 - SYS_UTIMENSAT = 304 - SYS_SIGNALFD = 305 - SYS_TIMERFD_CREATE = 306 - SYS_EVENTFD = 307 - SYS_SYNC_FILE_RANGE2 = 308 - SYS_FALLOCATE = 309 - SYS_SUBPAGE_PROT = 310 - SYS_TIMERFD_SETTIME = 311 - SYS_TIMERFD_GETTIME = 312 - SYS_SIGNALFD4 = 313 - SYS_EVENTFD2 = 314 - SYS_EPOLL_CREATE1 = 315 - SYS_DUP3 = 316 - SYS_PIPE2 = 317 - SYS_INOTIFY_INIT1 = 318 - SYS_PERF_EVENT_OPEN = 319 - SYS_PREADV = 320 - SYS_PWRITEV = 321 - SYS_RT_TGSIGQUEUEINFO = 322 - SYS_FANOTIFY_INIT = 323 - SYS_FANOTIFY_MARK = 324 - SYS_PRLIMIT64 = 325 - SYS_SOCKET = 326 - SYS_BIND = 327 - SYS_CONNECT = 328 - SYS_LISTEN = 329 - SYS_ACCEPT = 330 - SYS_GETSOCKNAME = 331 - SYS_GETPEERNAME = 332 - SYS_SOCKETPAIR = 333 - SYS_SEND = 334 - SYS_SENDTO = 335 - SYS_RECV = 336 - SYS_RECVFROM = 337 - SYS_SHUTDOWN = 338 - SYS_SETSOCKOPT = 339 - SYS_GETSOCKOPT = 340 - SYS_SENDMSG = 341 - SYS_RECVMSG = 342 - SYS_RECVMMSG = 343 - SYS_ACCEPT4 = 344 - SYS_NAME_TO_HANDLE_AT = 345 - SYS_OPEN_BY_HANDLE_AT = 346 - SYS_CLOCK_ADJTIME = 347 - SYS_SYNCFS = 348 - SYS_SENDMMSG = 349 - SYS_SETNS = 350 - SYS_PROCESS_VM_READV = 351 - SYS_PROCESS_VM_WRITEV = 352 - SYS_FINIT_MODULE = 353 - SYS_KCMP = 354 - SYS_SCHED_SETATTR = 355 - SYS_SCHED_GETATTR = 356 - SYS_RENAMEAT2 = 357 - SYS_SECCOMP = 358 - SYS_GETRANDOM = 359 - SYS_MEMFD_CREATE = 360 - SYS_BPF = 361 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go deleted file mode 100644 index 45e63f51a43..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go +++ /dev/null @@ -1,353 +0,0 @@ -// mksysnum_linux.pl /usr/include/powerpc64le-linux-gnu/asm/unistd.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build ppc64le,linux - -package unix - -const ( - SYS_RESTART_SYSCALL = 0 - SYS_EXIT = 1 - SYS_FORK = 2 - SYS_READ = 3 - SYS_WRITE = 4 - SYS_OPEN = 5 - SYS_CLOSE = 6 - SYS_WAITPID = 7 - SYS_CREAT = 8 - SYS_LINK = 9 - SYS_UNLINK = 10 - SYS_EXECVE = 11 - SYS_CHDIR = 12 - SYS_TIME = 13 - SYS_MKNOD = 14 - SYS_CHMOD = 15 - SYS_LCHOWN = 16 - SYS_BREAK = 17 - SYS_OLDSTAT = 18 - SYS_LSEEK = 19 - 
SYS_GETPID = 20 - SYS_MOUNT = 21 - SYS_UMOUNT = 22 - SYS_SETUID = 23 - SYS_GETUID = 24 - SYS_STIME = 25 - SYS_PTRACE = 26 - SYS_ALARM = 27 - SYS_OLDFSTAT = 28 - SYS_PAUSE = 29 - SYS_UTIME = 30 - SYS_STTY = 31 - SYS_GTTY = 32 - SYS_ACCESS = 33 - SYS_NICE = 34 - SYS_FTIME = 35 - SYS_SYNC = 36 - SYS_KILL = 37 - SYS_RENAME = 38 - SYS_MKDIR = 39 - SYS_RMDIR = 40 - SYS_DUP = 41 - SYS_PIPE = 42 - SYS_TIMES = 43 - SYS_PROF = 44 - SYS_BRK = 45 - SYS_SETGID = 46 - SYS_GETGID = 47 - SYS_SIGNAL = 48 - SYS_GETEUID = 49 - SYS_GETEGID = 50 - SYS_ACCT = 51 - SYS_UMOUNT2 = 52 - SYS_LOCK = 53 - SYS_IOCTL = 54 - SYS_FCNTL = 55 - SYS_MPX = 56 - SYS_SETPGID = 57 - SYS_ULIMIT = 58 - SYS_OLDOLDUNAME = 59 - SYS_UMASK = 60 - SYS_CHROOT = 61 - SYS_USTAT = 62 - SYS_DUP2 = 63 - SYS_GETPPID = 64 - SYS_GETPGRP = 65 - SYS_SETSID = 66 - SYS_SIGACTION = 67 - SYS_SGETMASK = 68 - SYS_SSETMASK = 69 - SYS_SETREUID = 70 - SYS_SETREGID = 71 - SYS_SIGSUSPEND = 72 - SYS_SIGPENDING = 73 - SYS_SETHOSTNAME = 74 - SYS_SETRLIMIT = 75 - SYS_GETRLIMIT = 76 - SYS_GETRUSAGE = 77 - SYS_GETTIMEOFDAY = 78 - SYS_SETTIMEOFDAY = 79 - SYS_GETGROUPS = 80 - SYS_SETGROUPS = 81 - SYS_SELECT = 82 - SYS_SYMLINK = 83 - SYS_OLDLSTAT = 84 - SYS_READLINK = 85 - SYS_USELIB = 86 - SYS_SWAPON = 87 - SYS_REBOOT = 88 - SYS_READDIR = 89 - SYS_MMAP = 90 - SYS_MUNMAP = 91 - SYS_TRUNCATE = 92 - SYS_FTRUNCATE = 93 - SYS_FCHMOD = 94 - SYS_FCHOWN = 95 - SYS_GETPRIORITY = 96 - SYS_SETPRIORITY = 97 - SYS_PROFIL = 98 - SYS_STATFS = 99 - SYS_FSTATFS = 100 - SYS_IOPERM = 101 - SYS_SOCKETCALL = 102 - SYS_SYSLOG = 103 - SYS_SETITIMER = 104 - SYS_GETITIMER = 105 - SYS_STAT = 106 - SYS_LSTAT = 107 - SYS_FSTAT = 108 - SYS_OLDUNAME = 109 - SYS_IOPL = 110 - SYS_VHANGUP = 111 - SYS_IDLE = 112 - SYS_VM86 = 113 - SYS_WAIT4 = 114 - SYS_SWAPOFF = 115 - SYS_SYSINFO = 116 - SYS_IPC = 117 - SYS_FSYNC = 118 - SYS_SIGRETURN = 119 - SYS_CLONE = 120 - SYS_SETDOMAINNAME = 121 - SYS_UNAME = 122 - SYS_MODIFY_LDT = 123 - SYS_ADJTIMEX = 124 - SYS_MPROTECT = 125 - SYS_SIGPROCMASK = 126 - SYS_CREATE_MODULE = 127 - SYS_INIT_MODULE = 128 - SYS_DELETE_MODULE = 129 - SYS_GET_KERNEL_SYMS = 130 - SYS_QUOTACTL = 131 - SYS_GETPGID = 132 - SYS_FCHDIR = 133 - SYS_BDFLUSH = 134 - SYS_SYSFS = 135 - SYS_PERSONALITY = 136 - SYS_AFS_SYSCALL = 137 - SYS_SETFSUID = 138 - SYS_SETFSGID = 139 - SYS__LLSEEK = 140 - SYS_GETDENTS = 141 - SYS__NEWSELECT = 142 - SYS_FLOCK = 143 - SYS_MSYNC = 144 - SYS_READV = 145 - SYS_WRITEV = 146 - SYS_GETSID = 147 - SYS_FDATASYNC = 148 - SYS__SYSCTL = 149 - SYS_MLOCK = 150 - SYS_MUNLOCK = 151 - SYS_MLOCKALL = 152 - SYS_MUNLOCKALL = 153 - SYS_SCHED_SETPARAM = 154 - SYS_SCHED_GETPARAM = 155 - SYS_SCHED_SETSCHEDULER = 156 - SYS_SCHED_GETSCHEDULER = 157 - SYS_SCHED_YIELD = 158 - SYS_SCHED_GET_PRIORITY_MAX = 159 - SYS_SCHED_GET_PRIORITY_MIN = 160 - SYS_SCHED_RR_GET_INTERVAL = 161 - SYS_NANOSLEEP = 162 - SYS_MREMAP = 163 - SYS_SETRESUID = 164 - SYS_GETRESUID = 165 - SYS_QUERY_MODULE = 166 - SYS_POLL = 167 - SYS_NFSSERVCTL = 168 - SYS_SETRESGID = 169 - SYS_GETRESGID = 170 - SYS_PRCTL = 171 - SYS_RT_SIGRETURN = 172 - SYS_RT_SIGACTION = 173 - SYS_RT_SIGPROCMASK = 174 - SYS_RT_SIGPENDING = 175 - SYS_RT_SIGTIMEDWAIT = 176 - SYS_RT_SIGQUEUEINFO = 177 - SYS_RT_SIGSUSPEND = 178 - SYS_PREAD64 = 179 - SYS_PWRITE64 = 180 - SYS_CHOWN = 181 - SYS_GETCWD = 182 - SYS_CAPGET = 183 - SYS_CAPSET = 184 - SYS_SIGALTSTACK = 185 - SYS_SENDFILE = 186 - SYS_GETPMSG = 187 - SYS_PUTPMSG = 188 - SYS_VFORK = 189 - SYS_UGETRLIMIT = 190 - SYS_READAHEAD = 191 - SYS_PCICONFIG_READ = 198 - SYS_PCICONFIG_WRITE = 199 - 
SYS_PCICONFIG_IOBASE = 200 - SYS_MULTIPLEXER = 201 - SYS_GETDENTS64 = 202 - SYS_PIVOT_ROOT = 203 - SYS_MADVISE = 205 - SYS_MINCORE = 206 - SYS_GETTID = 207 - SYS_TKILL = 208 - SYS_SETXATTR = 209 - SYS_LSETXATTR = 210 - SYS_FSETXATTR = 211 - SYS_GETXATTR = 212 - SYS_LGETXATTR = 213 - SYS_FGETXATTR = 214 - SYS_LISTXATTR = 215 - SYS_LLISTXATTR = 216 - SYS_FLISTXATTR = 217 - SYS_REMOVEXATTR = 218 - SYS_LREMOVEXATTR = 219 - SYS_FREMOVEXATTR = 220 - SYS_FUTEX = 221 - SYS_SCHED_SETAFFINITY = 222 - SYS_SCHED_GETAFFINITY = 223 - SYS_TUXCALL = 225 - SYS_IO_SETUP = 227 - SYS_IO_DESTROY = 228 - SYS_IO_GETEVENTS = 229 - SYS_IO_SUBMIT = 230 - SYS_IO_CANCEL = 231 - SYS_SET_TID_ADDRESS = 232 - SYS_FADVISE64 = 233 - SYS_EXIT_GROUP = 234 - SYS_LOOKUP_DCOOKIE = 235 - SYS_EPOLL_CREATE = 236 - SYS_EPOLL_CTL = 237 - SYS_EPOLL_WAIT = 238 - SYS_REMAP_FILE_PAGES = 239 - SYS_TIMER_CREATE = 240 - SYS_TIMER_SETTIME = 241 - SYS_TIMER_GETTIME = 242 - SYS_TIMER_GETOVERRUN = 243 - SYS_TIMER_DELETE = 244 - SYS_CLOCK_SETTIME = 245 - SYS_CLOCK_GETTIME = 246 - SYS_CLOCK_GETRES = 247 - SYS_CLOCK_NANOSLEEP = 248 - SYS_SWAPCONTEXT = 249 - SYS_TGKILL = 250 - SYS_UTIMES = 251 - SYS_STATFS64 = 252 - SYS_FSTATFS64 = 253 - SYS_RTAS = 255 - SYS_SYS_DEBUG_SETCONTEXT = 256 - SYS_MIGRATE_PAGES = 258 - SYS_MBIND = 259 - SYS_GET_MEMPOLICY = 260 - SYS_SET_MEMPOLICY = 261 - SYS_MQ_OPEN = 262 - SYS_MQ_UNLINK = 263 - SYS_MQ_TIMEDSEND = 264 - SYS_MQ_TIMEDRECEIVE = 265 - SYS_MQ_NOTIFY = 266 - SYS_MQ_GETSETATTR = 267 - SYS_KEXEC_LOAD = 268 - SYS_ADD_KEY = 269 - SYS_REQUEST_KEY = 270 - SYS_KEYCTL = 271 - SYS_WAITID = 272 - SYS_IOPRIO_SET = 273 - SYS_IOPRIO_GET = 274 - SYS_INOTIFY_INIT = 275 - SYS_INOTIFY_ADD_WATCH = 276 - SYS_INOTIFY_RM_WATCH = 277 - SYS_SPU_RUN = 278 - SYS_SPU_CREATE = 279 - SYS_PSELECT6 = 280 - SYS_PPOLL = 281 - SYS_UNSHARE = 282 - SYS_SPLICE = 283 - SYS_TEE = 284 - SYS_VMSPLICE = 285 - SYS_OPENAT = 286 - SYS_MKDIRAT = 287 - SYS_MKNODAT = 288 - SYS_FCHOWNAT = 289 - SYS_FUTIMESAT = 290 - SYS_NEWFSTATAT = 291 - SYS_UNLINKAT = 292 - SYS_RENAMEAT = 293 - SYS_LINKAT = 294 - SYS_SYMLINKAT = 295 - SYS_READLINKAT = 296 - SYS_FCHMODAT = 297 - SYS_FACCESSAT = 298 - SYS_GET_ROBUST_LIST = 299 - SYS_SET_ROBUST_LIST = 300 - SYS_MOVE_PAGES = 301 - SYS_GETCPU = 302 - SYS_EPOLL_PWAIT = 303 - SYS_UTIMENSAT = 304 - SYS_SIGNALFD = 305 - SYS_TIMERFD_CREATE = 306 - SYS_EVENTFD = 307 - SYS_SYNC_FILE_RANGE2 = 308 - SYS_FALLOCATE = 309 - SYS_SUBPAGE_PROT = 310 - SYS_TIMERFD_SETTIME = 311 - SYS_TIMERFD_GETTIME = 312 - SYS_SIGNALFD4 = 313 - SYS_EVENTFD2 = 314 - SYS_EPOLL_CREATE1 = 315 - SYS_DUP3 = 316 - SYS_PIPE2 = 317 - SYS_INOTIFY_INIT1 = 318 - SYS_PERF_EVENT_OPEN = 319 - SYS_PREADV = 320 - SYS_PWRITEV = 321 - SYS_RT_TGSIGQUEUEINFO = 322 - SYS_FANOTIFY_INIT = 323 - SYS_FANOTIFY_MARK = 324 - SYS_PRLIMIT64 = 325 - SYS_SOCKET = 326 - SYS_BIND = 327 - SYS_CONNECT = 328 - SYS_LISTEN = 329 - SYS_ACCEPT = 330 - SYS_GETSOCKNAME = 331 - SYS_GETPEERNAME = 332 - SYS_SOCKETPAIR = 333 - SYS_SEND = 334 - SYS_SENDTO = 335 - SYS_RECV = 336 - SYS_RECVFROM = 337 - SYS_SHUTDOWN = 338 - SYS_SETSOCKOPT = 339 - SYS_GETSOCKOPT = 340 - SYS_SENDMSG = 341 - SYS_RECVMSG = 342 - SYS_RECVMMSG = 343 - SYS_ACCEPT4 = 344 - SYS_NAME_TO_HANDLE_AT = 345 - SYS_OPEN_BY_HANDLE_AT = 346 - SYS_CLOCK_ADJTIME = 347 - SYS_SYNCFS = 348 - SYS_SENDMMSG = 349 - SYS_SETNS = 350 - SYS_PROCESS_VM_READV = 351 - SYS_PROCESS_VM_WRITEV = 352 - SYS_FINIT_MODULE = 353 - SYS_KCMP = 354 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_386.go 
b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_386.go deleted file mode 100644 index f60d8f98823..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_386.go +++ /dev/null @@ -1,273 +0,0 @@ -// mksysnum_netbsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build 386,netbsd - -package unix - -const ( - SYS_EXIT = 1 // { void|sys||exit(int rval); } - SYS_FORK = 2 // { int|sys||fork(void); } - SYS_READ = 3 // { ssize_t|sys||read(int fd, void *buf, size_t nbyte); } - SYS_WRITE = 4 // { ssize_t|sys||write(int fd, const void *buf, size_t nbyte); } - SYS_OPEN = 5 // { int|sys||open(const char *path, int flags, ... mode_t mode); } - SYS_CLOSE = 6 // { int|sys||close(int fd); } - SYS_LINK = 9 // { int|sys||link(const char *path, const char *link); } - SYS_UNLINK = 10 // { int|sys||unlink(const char *path); } - SYS_CHDIR = 12 // { int|sys||chdir(const char *path); } - SYS_FCHDIR = 13 // { int|sys||fchdir(int fd); } - SYS_CHMOD = 15 // { int|sys||chmod(const char *path, mode_t mode); } - SYS_CHOWN = 16 // { int|sys||chown(const char *path, uid_t uid, gid_t gid); } - SYS_BREAK = 17 // { int|sys||obreak(char *nsize); } - SYS_GETPID = 20 // { pid_t|sys||getpid_with_ppid(void); } - SYS_UNMOUNT = 22 // { int|sys||unmount(const char *path, int flags); } - SYS_SETUID = 23 // { int|sys||setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t|sys||getuid_with_euid(void); } - SYS_GETEUID = 25 // { uid_t|sys||geteuid(void); } - SYS_PTRACE = 26 // { int|sys||ptrace(int req, pid_t pid, void *addr, int data); } - SYS_RECVMSG = 27 // { ssize_t|sys||recvmsg(int s, struct msghdr *msg, int flags); } - SYS_SENDMSG = 28 // { ssize_t|sys||sendmsg(int s, const struct msghdr *msg, int flags); } - SYS_RECVFROM = 29 // { ssize_t|sys||recvfrom(int s, void *buf, size_t len, int flags, struct sockaddr *from, socklen_t *fromlenaddr); } - SYS_ACCEPT = 30 // { int|sys||accept(int s, struct sockaddr *name, socklen_t *anamelen); } - SYS_GETPEERNAME = 31 // { int|sys||getpeername(int fdes, struct sockaddr *asa, socklen_t *alen); } - SYS_GETSOCKNAME = 32 // { int|sys||getsockname(int fdes, struct sockaddr *asa, socklen_t *alen); } - SYS_ACCESS = 33 // { int|sys||access(const char *path, int flags); } - SYS_CHFLAGS = 34 // { int|sys||chflags(const char *path, u_long flags); } - SYS_FCHFLAGS = 35 // { int|sys||fchflags(int fd, u_long flags); } - SYS_SYNC = 36 // { void|sys||sync(void); } - SYS_KILL = 37 // { int|sys||kill(pid_t pid, int signum); } - SYS_GETPPID = 39 // { pid_t|sys||getppid(void); } - SYS_DUP = 41 // { int|sys||dup(int fd); } - SYS_PIPE = 42 // { int|sys||pipe(void); } - SYS_GETEGID = 43 // { gid_t|sys||getegid(void); } - SYS_PROFIL = 44 // { int|sys||profil(char *samples, size_t size, u_long offset, u_int scale); } - SYS_KTRACE = 45 // { int|sys||ktrace(const char *fname, int ops, int facs, pid_t pid); } - SYS_GETGID = 47 // { gid_t|sys||getgid_with_egid(void); } - SYS___GETLOGIN = 49 // { int|sys||__getlogin(char *namebuf, size_t namelen); } - SYS___SETLOGIN = 50 // { int|sys||__setlogin(const char *namebuf); } - SYS_ACCT = 51 // { int|sys||acct(const char *path); } - SYS_IOCTL = 54 // { int|sys||ioctl(int fd, u_long com, ... 
void *data); } - SYS_REVOKE = 56 // { int|sys||revoke(const char *path); } - SYS_SYMLINK = 57 // { int|sys||symlink(const char *path, const char *link); } - SYS_READLINK = 58 // { ssize_t|sys||readlink(const char *path, char *buf, size_t count); } - SYS_EXECVE = 59 // { int|sys||execve(const char *path, char * const *argp, char * const *envp); } - SYS_UMASK = 60 // { mode_t|sys||umask(mode_t newmask); } - SYS_CHROOT = 61 // { int|sys||chroot(const char *path); } - SYS_VFORK = 66 // { int|sys||vfork(void); } - SYS_SBRK = 69 // { int|sys||sbrk(intptr_t incr); } - SYS_SSTK = 70 // { int|sys||sstk(int incr); } - SYS_VADVISE = 72 // { int|sys||ovadvise(int anom); } - SYS_MUNMAP = 73 // { int|sys||munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int|sys||mprotect(void *addr, size_t len, int prot); } - SYS_MADVISE = 75 // { int|sys||madvise(void *addr, size_t len, int behav); } - SYS_MINCORE = 78 // { int|sys||mincore(void *addr, size_t len, char *vec); } - SYS_GETGROUPS = 79 // { int|sys||getgroups(int gidsetsize, gid_t *gidset); } - SYS_SETGROUPS = 80 // { int|sys||setgroups(int gidsetsize, const gid_t *gidset); } - SYS_GETPGRP = 81 // { int|sys||getpgrp(void); } - SYS_SETPGID = 82 // { int|sys||setpgid(pid_t pid, pid_t pgid); } - SYS_DUP2 = 90 // { int|sys||dup2(int from, int to); } - SYS_FCNTL = 92 // { int|sys||fcntl(int fd, int cmd, ... void *arg); } - SYS_FSYNC = 95 // { int|sys||fsync(int fd); } - SYS_SETPRIORITY = 96 // { int|sys||setpriority(int which, id_t who, int prio); } - SYS_CONNECT = 98 // { int|sys||connect(int s, const struct sockaddr *name, socklen_t namelen); } - SYS_GETPRIORITY = 100 // { int|sys||getpriority(int which, id_t who); } - SYS_BIND = 104 // { int|sys||bind(int s, const struct sockaddr *name, socklen_t namelen); } - SYS_SETSOCKOPT = 105 // { int|sys||setsockopt(int s, int level, int name, const void *val, socklen_t valsize); } - SYS_LISTEN = 106 // { int|sys||listen(int s, int backlog); } - SYS_GETSOCKOPT = 118 // { int|sys||getsockopt(int s, int level, int name, void *val, socklen_t *avalsize); } - SYS_READV = 120 // { ssize_t|sys||readv(int fd, const struct iovec *iovp, int iovcnt); } - SYS_WRITEV = 121 // { ssize_t|sys||writev(int fd, const struct iovec *iovp, int iovcnt); } - SYS_FCHOWN = 123 // { int|sys||fchown(int fd, uid_t uid, gid_t gid); } - SYS_FCHMOD = 124 // { int|sys||fchmod(int fd, mode_t mode); } - SYS_SETREUID = 126 // { int|sys||setreuid(uid_t ruid, uid_t euid); } - SYS_SETREGID = 127 // { int|sys||setregid(gid_t rgid, gid_t egid); } - SYS_RENAME = 128 // { int|sys||rename(const char *from, const char *to); } - SYS_FLOCK = 131 // { int|sys||flock(int fd, int how); } - SYS_MKFIFO = 132 // { int|sys||mkfifo(const char *path, mode_t mode); } - SYS_SENDTO = 133 // { ssize_t|sys||sendto(int s, const void *buf, size_t len, int flags, const struct sockaddr *to, socklen_t tolen); } - SYS_SHUTDOWN = 134 // { int|sys||shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int|sys||socketpair(int domain, int type, int protocol, int *rsv); } - SYS_MKDIR = 136 // { int|sys||mkdir(const char *path, mode_t mode); } - SYS_RMDIR = 137 // { int|sys||rmdir(const char *path); } - SYS_SETSID = 147 // { int|sys||setsid(void); } - SYS_SYSARCH = 165 // { int|sys||sysarch(int op, void *parms); } - SYS_PREAD = 173 // { ssize_t|sys||pread(int fd, void *buf, size_t nbyte, int PAD, off_t offset); } - SYS_PWRITE = 174 // { ssize_t|sys||pwrite(int fd, const void *buf, size_t nbyte, int PAD, off_t offset); } - SYS_NTP_ADJTIME = 176 // { int|sys||ntp_adjtime(struct 
timex *tp); } - SYS_SETGID = 181 // { int|sys||setgid(gid_t gid); } - SYS_SETEGID = 182 // { int|sys||setegid(gid_t egid); } - SYS_SETEUID = 183 // { int|sys||seteuid(uid_t euid); } - SYS_PATHCONF = 191 // { long|sys||pathconf(const char *path, int name); } - SYS_FPATHCONF = 192 // { long|sys||fpathconf(int fd, int name); } - SYS_GETRLIMIT = 194 // { int|sys||getrlimit(int which, struct rlimit *rlp); } - SYS_SETRLIMIT = 195 // { int|sys||setrlimit(int which, const struct rlimit *rlp); } - SYS_MMAP = 197 // { void *|sys||mmap(void *addr, size_t len, int prot, int flags, int fd, long PAD, off_t pos); } - SYS_LSEEK = 199 // { off_t|sys||lseek(int fd, int PAD, off_t offset, int whence); } - SYS_TRUNCATE = 200 // { int|sys||truncate(const char *path, int PAD, off_t length); } - SYS_FTRUNCATE = 201 // { int|sys||ftruncate(int fd, int PAD, off_t length); } - SYS___SYSCTL = 202 // { int|sys||__sysctl(const int *name, u_int namelen, void *old, size_t *oldlenp, const void *new, size_t newlen); } - SYS_MLOCK = 203 // { int|sys||mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int|sys||munlock(const void *addr, size_t len); } - SYS_UNDELETE = 205 // { int|sys||undelete(const char *path); } - SYS_GETPGID = 207 // { pid_t|sys||getpgid(pid_t pid); } - SYS_REBOOT = 208 // { int|sys||reboot(int opt, char *bootstr); } - SYS_POLL = 209 // { int|sys||poll(struct pollfd *fds, u_int nfds, int timeout); } - SYS_SEMGET = 221 // { int|sys||semget(key_t key, int nsems, int semflg); } - SYS_SEMOP = 222 // { int|sys||semop(int semid, struct sembuf *sops, size_t nsops); } - SYS_SEMCONFIG = 223 // { int|sys||semconfig(int flag); } - SYS_MSGGET = 225 // { int|sys||msgget(key_t key, int msgflg); } - SYS_MSGSND = 226 // { int|sys||msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg); } - SYS_MSGRCV = 227 // { ssize_t|sys||msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg); } - SYS_SHMAT = 228 // { void *|sys||shmat(int shmid, const void *shmaddr, int shmflg); } - SYS_SHMDT = 230 // { int|sys||shmdt(const void *shmaddr); } - SYS_SHMGET = 231 // { int|sys||shmget(key_t key, size_t size, int shmflg); } - SYS_TIMER_CREATE = 235 // { int|sys||timer_create(clockid_t clock_id, struct sigevent *evp, timer_t *timerid); } - SYS_TIMER_DELETE = 236 // { int|sys||timer_delete(timer_t timerid); } - SYS_TIMER_GETOVERRUN = 239 // { int|sys||timer_getoverrun(timer_t timerid); } - SYS_FDATASYNC = 241 // { int|sys||fdatasync(int fd); } - SYS_MLOCKALL = 242 // { int|sys||mlockall(int flags); } - SYS_MUNLOCKALL = 243 // { int|sys||munlockall(void); } - SYS_SIGQUEUEINFO = 245 // { int|sys||sigqueueinfo(pid_t pid, const siginfo_t *info); } - SYS_MODCTL = 246 // { int|sys||modctl(int cmd, void *arg); } - SYS___POSIX_RENAME = 270 // { int|sys||__posix_rename(const char *from, const char *to); } - SYS_SWAPCTL = 271 // { int|sys||swapctl(int cmd, void *arg, int misc); } - SYS_MINHERIT = 273 // { int|sys||minherit(void *addr, size_t len, int inherit); } - SYS_LCHMOD = 274 // { int|sys||lchmod(const char *path, mode_t mode); } - SYS_LCHOWN = 275 // { int|sys||lchown(const char *path, uid_t uid, gid_t gid); } - SYS___POSIX_CHOWN = 283 // { int|sys||__posix_chown(const char *path, uid_t uid, gid_t gid); } - SYS___POSIX_FCHOWN = 284 // { int|sys||__posix_fchown(int fd, uid_t uid, gid_t gid); } - SYS___POSIX_LCHOWN = 285 // { int|sys||__posix_lchown(const char *path, uid_t uid, gid_t gid); } - SYS_GETSID = 286 // { pid_t|sys||getsid(pid_t pid); } - SYS___CLONE = 287 // { pid_t|sys||__clone(int flags, void 
*stack); } - SYS_FKTRACE = 288 // { int|sys||fktrace(int fd, int ops, int facs, pid_t pid); } - SYS_PREADV = 289 // { ssize_t|sys||preadv(int fd, const struct iovec *iovp, int iovcnt, int PAD, off_t offset); } - SYS_PWRITEV = 290 // { ssize_t|sys||pwritev(int fd, const struct iovec *iovp, int iovcnt, int PAD, off_t offset); } - SYS___GETCWD = 296 // { int|sys||__getcwd(char *bufp, size_t length); } - SYS_FCHROOT = 297 // { int|sys||fchroot(int fd); } - SYS_LCHFLAGS = 304 // { int|sys||lchflags(const char *path, u_long flags); } - SYS_ISSETUGID = 305 // { int|sys||issetugid(void); } - SYS_UTRACE = 306 // { int|sys||utrace(const char *label, void *addr, size_t len); } - SYS_GETCONTEXT = 307 // { int|sys||getcontext(struct __ucontext *ucp); } - SYS_SETCONTEXT = 308 // { int|sys||setcontext(const struct __ucontext *ucp); } - SYS__LWP_CREATE = 309 // { int|sys||_lwp_create(const struct __ucontext *ucp, u_long flags, lwpid_t *new_lwp); } - SYS__LWP_EXIT = 310 // { int|sys||_lwp_exit(void); } - SYS__LWP_SELF = 311 // { lwpid_t|sys||_lwp_self(void); } - SYS__LWP_WAIT = 312 // { int|sys||_lwp_wait(lwpid_t wait_for, lwpid_t *departed); } - SYS__LWP_SUSPEND = 313 // { int|sys||_lwp_suspend(lwpid_t target); } - SYS__LWP_CONTINUE = 314 // { int|sys||_lwp_continue(lwpid_t target); } - SYS__LWP_WAKEUP = 315 // { int|sys||_lwp_wakeup(lwpid_t target); } - SYS__LWP_GETPRIVATE = 316 // { void *|sys||_lwp_getprivate(void); } - SYS__LWP_SETPRIVATE = 317 // { void|sys||_lwp_setprivate(void *ptr); } - SYS__LWP_KILL = 318 // { int|sys||_lwp_kill(lwpid_t target, int signo); } - SYS__LWP_DETACH = 319 // { int|sys||_lwp_detach(lwpid_t target); } - SYS__LWP_UNPARK = 321 // { int|sys||_lwp_unpark(lwpid_t target, const void *hint); } - SYS__LWP_UNPARK_ALL = 322 // { ssize_t|sys||_lwp_unpark_all(const lwpid_t *targets, size_t ntargets, const void *hint); } - SYS__LWP_SETNAME = 323 // { int|sys||_lwp_setname(lwpid_t target, const char *name); } - SYS__LWP_GETNAME = 324 // { int|sys||_lwp_getname(lwpid_t target, char *name, size_t len); } - SYS__LWP_CTL = 325 // { int|sys||_lwp_ctl(int features, struct lwpctl **address); } - SYS___SIGACTION_SIGTRAMP = 340 // { int|sys||__sigaction_sigtramp(int signum, const struct sigaction *nsa, struct sigaction *osa, const void *tramp, int vers); } - SYS_PMC_GET_INFO = 341 // { int|sys||pmc_get_info(int ctr, int op, void *args); } - SYS_PMC_CONTROL = 342 // { int|sys||pmc_control(int ctr, int op, void *args); } - SYS_RASCTL = 343 // { int|sys||rasctl(void *addr, size_t len, int op); } - SYS_KQUEUE = 344 // { int|sys||kqueue(void); } - SYS__SCHED_SETPARAM = 346 // { int|sys||_sched_setparam(pid_t pid, lwpid_t lid, int policy, const struct sched_param *params); } - SYS__SCHED_GETPARAM = 347 // { int|sys||_sched_getparam(pid_t pid, lwpid_t lid, int *policy, struct sched_param *params); } - SYS__SCHED_SETAFFINITY = 348 // { int|sys||_sched_setaffinity(pid_t pid, lwpid_t lid, size_t size, const cpuset_t *cpuset); } - SYS__SCHED_GETAFFINITY = 349 // { int|sys||_sched_getaffinity(pid_t pid, lwpid_t lid, size_t size, cpuset_t *cpuset); } - SYS_SCHED_YIELD = 350 // { int|sys||sched_yield(void); } - SYS_FSYNC_RANGE = 354 // { int|sys||fsync_range(int fd, int flags, off_t start, off_t length); } - SYS_UUIDGEN = 355 // { int|sys||uuidgen(struct uuid *store, int count); } - SYS_GETVFSSTAT = 356 // { int|sys||getvfsstat(struct statvfs *buf, size_t bufsize, int flags); } - SYS_STATVFS1 = 357 // { int|sys||statvfs1(const char *path, struct statvfs *buf, int flags); } - SYS_FSTATVFS1 = 358 // { 
int|sys||fstatvfs1(int fd, struct statvfs *buf, int flags); } - SYS_EXTATTRCTL = 360 // { int|sys||extattrctl(const char *path, int cmd, const char *filename, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_FILE = 361 // { int|sys||extattr_set_file(const char *path, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_FILE = 362 // { ssize_t|sys||extattr_get_file(const char *path, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_FILE = 363 // { int|sys||extattr_delete_file(const char *path, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_FD = 364 // { int|sys||extattr_set_fd(int fd, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_FD = 365 // { ssize_t|sys||extattr_get_fd(int fd, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_FD = 366 // { int|sys||extattr_delete_fd(int fd, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_LINK = 367 // { int|sys||extattr_set_link(const char *path, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_LINK = 368 // { ssize_t|sys||extattr_get_link(const char *path, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_LINK = 369 // { int|sys||extattr_delete_link(const char *path, int attrnamespace, const char *attrname); } - SYS_EXTATTR_LIST_FD = 370 // { ssize_t|sys||extattr_list_fd(int fd, int attrnamespace, void *data, size_t nbytes); } - SYS_EXTATTR_LIST_FILE = 371 // { ssize_t|sys||extattr_list_file(const char *path, int attrnamespace, void *data, size_t nbytes); } - SYS_EXTATTR_LIST_LINK = 372 // { ssize_t|sys||extattr_list_link(const char *path, int attrnamespace, void *data, size_t nbytes); } - SYS_SETXATTR = 375 // { int|sys||setxattr(const char *path, const char *name, const void *value, size_t size, int flags); } - SYS_LSETXATTR = 376 // { int|sys||lsetxattr(const char *path, const char *name, const void *value, size_t size, int flags); } - SYS_FSETXATTR = 377 // { int|sys||fsetxattr(int fd, const char *name, const void *value, size_t size, int flags); } - SYS_GETXATTR = 378 // { int|sys||getxattr(const char *path, const char *name, void *value, size_t size); } - SYS_LGETXATTR = 379 // { int|sys||lgetxattr(const char *path, const char *name, void *value, size_t size); } - SYS_FGETXATTR = 380 // { int|sys||fgetxattr(int fd, const char *name, void *value, size_t size); } - SYS_LISTXATTR = 381 // { int|sys||listxattr(const char *path, char *list, size_t size); } - SYS_LLISTXATTR = 382 // { int|sys||llistxattr(const char *path, char *list, size_t size); } - SYS_FLISTXATTR = 383 // { int|sys||flistxattr(int fd, char *list, size_t size); } - SYS_REMOVEXATTR = 384 // { int|sys||removexattr(const char *path, const char *name); } - SYS_LREMOVEXATTR = 385 // { int|sys||lremovexattr(const char *path, const char *name); } - SYS_FREMOVEXATTR = 386 // { int|sys||fremovexattr(int fd, const char *name); } - SYS_GETDENTS = 390 // { int|sys|30|getdents(int fd, char *buf, size_t count); } - SYS_SOCKET = 394 // { int|sys|30|socket(int domain, int type, int protocol); } - SYS_GETFH = 395 // { int|sys|30|getfh(const char *fname, void *fhp, size_t *fh_size); } - SYS_MOUNT = 410 // { int|sys|50|mount(const char *type, const char *path, int flags, void *data, size_t data_len); } - SYS_MREMAP = 411 // { void *|sys||mremap(void *old_address, size_t old_size, void *new_address, size_t 
new_size, int flags); } - SYS_PSET_CREATE = 412 // { int|sys||pset_create(psetid_t *psid); } - SYS_PSET_DESTROY = 413 // { int|sys||pset_destroy(psetid_t psid); } - SYS_PSET_ASSIGN = 414 // { int|sys||pset_assign(psetid_t psid, cpuid_t cpuid, psetid_t *opsid); } - SYS__PSET_BIND = 415 // { int|sys||_pset_bind(idtype_t idtype, id_t first_id, id_t second_id, psetid_t psid, psetid_t *opsid); } - SYS_POSIX_FADVISE = 416 // { int|sys|50|posix_fadvise(int fd, int PAD, off_t offset, off_t len, int advice); } - SYS_SELECT = 417 // { int|sys|50|select(int nd, fd_set *in, fd_set *ou, fd_set *ex, struct timeval *tv); } - SYS_GETTIMEOFDAY = 418 // { int|sys|50|gettimeofday(struct timeval *tp, void *tzp); } - SYS_SETTIMEOFDAY = 419 // { int|sys|50|settimeofday(const struct timeval *tv, const void *tzp); } - SYS_UTIMES = 420 // { int|sys|50|utimes(const char *path, const struct timeval *tptr); } - SYS_ADJTIME = 421 // { int|sys|50|adjtime(const struct timeval *delta, struct timeval *olddelta); } - SYS_FUTIMES = 423 // { int|sys|50|futimes(int fd, const struct timeval *tptr); } - SYS_LUTIMES = 424 // { int|sys|50|lutimes(const char *path, const struct timeval *tptr); } - SYS_SETITIMER = 425 // { int|sys|50|setitimer(int which, const struct itimerval *itv, struct itimerval *oitv); } - SYS_GETITIMER = 426 // { int|sys|50|getitimer(int which, struct itimerval *itv); } - SYS_CLOCK_GETTIME = 427 // { int|sys|50|clock_gettime(clockid_t clock_id, struct timespec *tp); } - SYS_CLOCK_SETTIME = 428 // { int|sys|50|clock_settime(clockid_t clock_id, const struct timespec *tp); } - SYS_CLOCK_GETRES = 429 // { int|sys|50|clock_getres(clockid_t clock_id, struct timespec *tp); } - SYS_NANOSLEEP = 430 // { int|sys|50|nanosleep(const struct timespec *rqtp, struct timespec *rmtp); } - SYS___SIGTIMEDWAIT = 431 // { int|sys|50|__sigtimedwait(const sigset_t *set, siginfo_t *info, struct timespec *timeout); } - SYS__LWP_PARK = 434 // { int|sys|50|_lwp_park(const struct timespec *ts, lwpid_t unpark, const void *hint, const void *unparkhint); } - SYS_KEVENT = 435 // { int|sys|50|kevent(int fd, const struct kevent *changelist, size_t nchanges, struct kevent *eventlist, size_t nevents, const struct timespec *timeout); } - SYS_PSELECT = 436 // { int|sys|50|pselect(int nd, fd_set *in, fd_set *ou, fd_set *ex, const struct timespec *ts, const sigset_t *mask); } - SYS_POLLTS = 437 // { int|sys|50|pollts(struct pollfd *fds, u_int nfds, const struct timespec *ts, const sigset_t *mask); } - SYS_STAT = 439 // { int|sys|50|stat(const char *path, struct stat *ub); } - SYS_FSTAT = 440 // { int|sys|50|fstat(int fd, struct stat *sb); } - SYS_LSTAT = 441 // { int|sys|50|lstat(const char *path, struct stat *ub); } - SYS___SEMCTL = 442 // { int|sys|50|__semctl(int semid, int semnum, int cmd, ... 
union __semun *arg); } - SYS_SHMCTL = 443 // { int|sys|50|shmctl(int shmid, int cmd, struct shmid_ds *buf); } - SYS_MSGCTL = 444 // { int|sys|50|msgctl(int msqid, int cmd, struct msqid_ds *buf); } - SYS_GETRUSAGE = 445 // { int|sys|50|getrusage(int who, struct rusage *rusage); } - SYS_TIMER_SETTIME = 446 // { int|sys|50|timer_settime(timer_t timerid, int flags, const struct itimerspec *value, struct itimerspec *ovalue); } - SYS_TIMER_GETTIME = 447 // { int|sys|50|timer_gettime(timer_t timerid, struct itimerspec *value); } - SYS_NTP_GETTIME = 448 // { int|sys|50|ntp_gettime(struct ntptimeval *ntvp); } - SYS_WAIT4 = 449 // { int|sys|50|wait4(pid_t pid, int *status, int options, struct rusage *rusage); } - SYS_MKNOD = 450 // { int|sys|50|mknod(const char *path, mode_t mode, dev_t dev); } - SYS_FHSTAT = 451 // { int|sys|50|fhstat(const void *fhp, size_t fh_size, struct stat *sb); } - SYS_PIPE2 = 453 // { int|sys||pipe2(int *fildes, int flags); } - SYS_DUP3 = 454 // { int|sys||dup3(int from, int to, int flags); } - SYS_KQUEUE1 = 455 // { int|sys||kqueue1(int flags); } - SYS_PACCEPT = 456 // { int|sys||paccept(int s, struct sockaddr *name, socklen_t *anamelen, const sigset_t *mask, int flags); } - SYS_LINKAT = 457 // { int|sys||linkat(int fd1, const char *name1, int fd2, const char *name2, int flags); } - SYS_RENAMEAT = 458 // { int|sys||renameat(int fromfd, const char *from, int tofd, const char *to); } - SYS_MKFIFOAT = 459 // { int|sys||mkfifoat(int fd, const char *path, mode_t mode); } - SYS_MKNODAT = 460 // { int|sys||mknodat(int fd, const char *path, mode_t mode, uint32_t dev); } - SYS_MKDIRAT = 461 // { int|sys||mkdirat(int fd, const char *path, mode_t mode); } - SYS_FACCESSAT = 462 // { int|sys||faccessat(int fd, const char *path, int amode, int flag); } - SYS_FCHMODAT = 463 // { int|sys||fchmodat(int fd, const char *path, mode_t mode, int flag); } - SYS_FCHOWNAT = 464 // { int|sys||fchownat(int fd, const char *path, uid_t owner, gid_t group, int flag); } - SYS_FEXECVE = 465 // { int|sys||fexecve(int fd, char * const *argp, char * const *envp); } - SYS_FSTATAT = 466 // { int|sys||fstatat(int fd, const char *path, struct stat *buf, int flag); } - SYS_UTIMENSAT = 467 // { int|sys||utimensat(int fd, const char *path, const struct timespec *tptr, int flag); } - SYS_OPENAT = 468 // { int|sys||openat(int fd, const char *path, int oflags, ... 
mode_t mode); } - SYS_READLINKAT = 469 // { int|sys||readlinkat(int fd, const char *path, char *buf, size_t bufsize); } - SYS_SYMLINKAT = 470 // { int|sys||symlinkat(const char *path1, int fd, const char *path2); } - SYS_UNLINKAT = 471 // { int|sys||unlinkat(int fd, const char *path, int flag); } - SYS_FUTIMENS = 472 // { int|sys||futimens(int fd, const struct timespec *tptr); } - SYS___QUOTACTL = 473 // { int|sys||__quotactl(const char *path, struct quotactl_args *args); } - SYS_POSIX_SPAWN = 474 // { int|sys||posix_spawn(pid_t *pid, const char *path, const struct posix_spawn_file_actions *file_actions, const struct posix_spawnattr *attrp, char *const *argv, char *const *envp); } - SYS_RECVMMSG = 475 // { int|sys||recvmmsg(int s, struct mmsghdr *mmsg, unsigned int vlen, unsigned int flags, struct timespec *timeout); } - SYS_SENDMMSG = 476 // { int|sys||sendmmsg(int s, struct mmsghdr *mmsg, unsigned int vlen, unsigned int flags); } -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_amd64.go deleted file mode 100644 index 48a91d46464..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_amd64.go +++ /dev/null @@ -1,273 +0,0 @@ -// mksysnum_netbsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build amd64,netbsd - -package unix - -const ( - SYS_EXIT = 1 // { void|sys||exit(int rval); } - SYS_FORK = 2 // { int|sys||fork(void); } - SYS_READ = 3 // { ssize_t|sys||read(int fd, void *buf, size_t nbyte); } - SYS_WRITE = 4 // { ssize_t|sys||write(int fd, const void *buf, size_t nbyte); } - SYS_OPEN = 5 // { int|sys||open(const char *path, int flags, ... 
mode_t mode); } - SYS_CLOSE = 6 // { int|sys||close(int fd); } - SYS_LINK = 9 // { int|sys||link(const char *path, const char *link); } - SYS_UNLINK = 10 // { int|sys||unlink(const char *path); } - SYS_CHDIR = 12 // { int|sys||chdir(const char *path); } - SYS_FCHDIR = 13 // { int|sys||fchdir(int fd); } - SYS_CHMOD = 15 // { int|sys||chmod(const char *path, mode_t mode); } - SYS_CHOWN = 16 // { int|sys||chown(const char *path, uid_t uid, gid_t gid); } - SYS_BREAK = 17 // { int|sys||obreak(char *nsize); } - SYS_GETPID = 20 // { pid_t|sys||getpid_with_ppid(void); } - SYS_UNMOUNT = 22 // { int|sys||unmount(const char *path, int flags); } - SYS_SETUID = 23 // { int|sys||setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t|sys||getuid_with_euid(void); } - SYS_GETEUID = 25 // { uid_t|sys||geteuid(void); } - SYS_PTRACE = 26 // { int|sys||ptrace(int req, pid_t pid, void *addr, int data); } - SYS_RECVMSG = 27 // { ssize_t|sys||recvmsg(int s, struct msghdr *msg, int flags); } - SYS_SENDMSG = 28 // { ssize_t|sys||sendmsg(int s, const struct msghdr *msg, int flags); } - SYS_RECVFROM = 29 // { ssize_t|sys||recvfrom(int s, void *buf, size_t len, int flags, struct sockaddr *from, socklen_t *fromlenaddr); } - SYS_ACCEPT = 30 // { int|sys||accept(int s, struct sockaddr *name, socklen_t *anamelen); } - SYS_GETPEERNAME = 31 // { int|sys||getpeername(int fdes, struct sockaddr *asa, socklen_t *alen); } - SYS_GETSOCKNAME = 32 // { int|sys||getsockname(int fdes, struct sockaddr *asa, socklen_t *alen); } - SYS_ACCESS = 33 // { int|sys||access(const char *path, int flags); } - SYS_CHFLAGS = 34 // { int|sys||chflags(const char *path, u_long flags); } - SYS_FCHFLAGS = 35 // { int|sys||fchflags(int fd, u_long flags); } - SYS_SYNC = 36 // { void|sys||sync(void); } - SYS_KILL = 37 // { int|sys||kill(pid_t pid, int signum); } - SYS_GETPPID = 39 // { pid_t|sys||getppid(void); } - SYS_DUP = 41 // { int|sys||dup(int fd); } - SYS_PIPE = 42 // { int|sys||pipe(void); } - SYS_GETEGID = 43 // { gid_t|sys||getegid(void); } - SYS_PROFIL = 44 // { int|sys||profil(char *samples, size_t size, u_long offset, u_int scale); } - SYS_KTRACE = 45 // { int|sys||ktrace(const char *fname, int ops, int facs, pid_t pid); } - SYS_GETGID = 47 // { gid_t|sys||getgid_with_egid(void); } - SYS___GETLOGIN = 49 // { int|sys||__getlogin(char *namebuf, size_t namelen); } - SYS___SETLOGIN = 50 // { int|sys||__setlogin(const char *namebuf); } - SYS_ACCT = 51 // { int|sys||acct(const char *path); } - SYS_IOCTL = 54 // { int|sys||ioctl(int fd, u_long com, ... 
void *data); } - SYS_REVOKE = 56 // { int|sys||revoke(const char *path); } - SYS_SYMLINK = 57 // { int|sys||symlink(const char *path, const char *link); } - SYS_READLINK = 58 // { ssize_t|sys||readlink(const char *path, char *buf, size_t count); } - SYS_EXECVE = 59 // { int|sys||execve(const char *path, char * const *argp, char * const *envp); } - SYS_UMASK = 60 // { mode_t|sys||umask(mode_t newmask); } - SYS_CHROOT = 61 // { int|sys||chroot(const char *path); } - SYS_VFORK = 66 // { int|sys||vfork(void); } - SYS_SBRK = 69 // { int|sys||sbrk(intptr_t incr); } - SYS_SSTK = 70 // { int|sys||sstk(int incr); } - SYS_VADVISE = 72 // { int|sys||ovadvise(int anom); } - SYS_MUNMAP = 73 // { int|sys||munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int|sys||mprotect(void *addr, size_t len, int prot); } - SYS_MADVISE = 75 // { int|sys||madvise(void *addr, size_t len, int behav); } - SYS_MINCORE = 78 // { int|sys||mincore(void *addr, size_t len, char *vec); } - SYS_GETGROUPS = 79 // { int|sys||getgroups(int gidsetsize, gid_t *gidset); } - SYS_SETGROUPS = 80 // { int|sys||setgroups(int gidsetsize, const gid_t *gidset); } - SYS_GETPGRP = 81 // { int|sys||getpgrp(void); } - SYS_SETPGID = 82 // { int|sys||setpgid(pid_t pid, pid_t pgid); } - SYS_DUP2 = 90 // { int|sys||dup2(int from, int to); } - SYS_FCNTL = 92 // { int|sys||fcntl(int fd, int cmd, ... void *arg); } - SYS_FSYNC = 95 // { int|sys||fsync(int fd); } - SYS_SETPRIORITY = 96 // { int|sys||setpriority(int which, id_t who, int prio); } - SYS_CONNECT = 98 // { int|sys||connect(int s, const struct sockaddr *name, socklen_t namelen); } - SYS_GETPRIORITY = 100 // { int|sys||getpriority(int which, id_t who); } - SYS_BIND = 104 // { int|sys||bind(int s, const struct sockaddr *name, socklen_t namelen); } - SYS_SETSOCKOPT = 105 // { int|sys||setsockopt(int s, int level, int name, const void *val, socklen_t valsize); } - SYS_LISTEN = 106 // { int|sys||listen(int s, int backlog); } - SYS_GETSOCKOPT = 118 // { int|sys||getsockopt(int s, int level, int name, void *val, socklen_t *avalsize); } - SYS_READV = 120 // { ssize_t|sys||readv(int fd, const struct iovec *iovp, int iovcnt); } - SYS_WRITEV = 121 // { ssize_t|sys||writev(int fd, const struct iovec *iovp, int iovcnt); } - SYS_FCHOWN = 123 // { int|sys||fchown(int fd, uid_t uid, gid_t gid); } - SYS_FCHMOD = 124 // { int|sys||fchmod(int fd, mode_t mode); } - SYS_SETREUID = 126 // { int|sys||setreuid(uid_t ruid, uid_t euid); } - SYS_SETREGID = 127 // { int|sys||setregid(gid_t rgid, gid_t egid); } - SYS_RENAME = 128 // { int|sys||rename(const char *from, const char *to); } - SYS_FLOCK = 131 // { int|sys||flock(int fd, int how); } - SYS_MKFIFO = 132 // { int|sys||mkfifo(const char *path, mode_t mode); } - SYS_SENDTO = 133 // { ssize_t|sys||sendto(int s, const void *buf, size_t len, int flags, const struct sockaddr *to, socklen_t tolen); } - SYS_SHUTDOWN = 134 // { int|sys||shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int|sys||socketpair(int domain, int type, int protocol, int *rsv); } - SYS_MKDIR = 136 // { int|sys||mkdir(const char *path, mode_t mode); } - SYS_RMDIR = 137 // { int|sys||rmdir(const char *path); } - SYS_SETSID = 147 // { int|sys||setsid(void); } - SYS_SYSARCH = 165 // { int|sys||sysarch(int op, void *parms); } - SYS_PREAD = 173 // { ssize_t|sys||pread(int fd, void *buf, size_t nbyte, int PAD, off_t offset); } - SYS_PWRITE = 174 // { ssize_t|sys||pwrite(int fd, const void *buf, size_t nbyte, int PAD, off_t offset); } - SYS_NTP_ADJTIME = 176 // { int|sys||ntp_adjtime(struct 
timex *tp); } - SYS_SETGID = 181 // { int|sys||setgid(gid_t gid); } - SYS_SETEGID = 182 // { int|sys||setegid(gid_t egid); } - SYS_SETEUID = 183 // { int|sys||seteuid(uid_t euid); } - SYS_PATHCONF = 191 // { long|sys||pathconf(const char *path, int name); } - SYS_FPATHCONF = 192 // { long|sys||fpathconf(int fd, int name); } - SYS_GETRLIMIT = 194 // { int|sys||getrlimit(int which, struct rlimit *rlp); } - SYS_SETRLIMIT = 195 // { int|sys||setrlimit(int which, const struct rlimit *rlp); } - SYS_MMAP = 197 // { void *|sys||mmap(void *addr, size_t len, int prot, int flags, int fd, long PAD, off_t pos); } - SYS_LSEEK = 199 // { off_t|sys||lseek(int fd, int PAD, off_t offset, int whence); } - SYS_TRUNCATE = 200 // { int|sys||truncate(const char *path, int PAD, off_t length); } - SYS_FTRUNCATE = 201 // { int|sys||ftruncate(int fd, int PAD, off_t length); } - SYS___SYSCTL = 202 // { int|sys||__sysctl(const int *name, u_int namelen, void *old, size_t *oldlenp, const void *new, size_t newlen); } - SYS_MLOCK = 203 // { int|sys||mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int|sys||munlock(const void *addr, size_t len); } - SYS_UNDELETE = 205 // { int|sys||undelete(const char *path); } - SYS_GETPGID = 207 // { pid_t|sys||getpgid(pid_t pid); } - SYS_REBOOT = 208 // { int|sys||reboot(int opt, char *bootstr); } - SYS_POLL = 209 // { int|sys||poll(struct pollfd *fds, u_int nfds, int timeout); } - SYS_SEMGET = 221 // { int|sys||semget(key_t key, int nsems, int semflg); } - SYS_SEMOP = 222 // { int|sys||semop(int semid, struct sembuf *sops, size_t nsops); } - SYS_SEMCONFIG = 223 // { int|sys||semconfig(int flag); } - SYS_MSGGET = 225 // { int|sys||msgget(key_t key, int msgflg); } - SYS_MSGSND = 226 // { int|sys||msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg); } - SYS_MSGRCV = 227 // { ssize_t|sys||msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg); } - SYS_SHMAT = 228 // { void *|sys||shmat(int shmid, const void *shmaddr, int shmflg); } - SYS_SHMDT = 230 // { int|sys||shmdt(const void *shmaddr); } - SYS_SHMGET = 231 // { int|sys||shmget(key_t key, size_t size, int shmflg); } - SYS_TIMER_CREATE = 235 // { int|sys||timer_create(clockid_t clock_id, struct sigevent *evp, timer_t *timerid); } - SYS_TIMER_DELETE = 236 // { int|sys||timer_delete(timer_t timerid); } - SYS_TIMER_GETOVERRUN = 239 // { int|sys||timer_getoverrun(timer_t timerid); } - SYS_FDATASYNC = 241 // { int|sys||fdatasync(int fd); } - SYS_MLOCKALL = 242 // { int|sys||mlockall(int flags); } - SYS_MUNLOCKALL = 243 // { int|sys||munlockall(void); } - SYS_SIGQUEUEINFO = 245 // { int|sys||sigqueueinfo(pid_t pid, const siginfo_t *info); } - SYS_MODCTL = 246 // { int|sys||modctl(int cmd, void *arg); } - SYS___POSIX_RENAME = 270 // { int|sys||__posix_rename(const char *from, const char *to); } - SYS_SWAPCTL = 271 // { int|sys||swapctl(int cmd, void *arg, int misc); } - SYS_MINHERIT = 273 // { int|sys||minherit(void *addr, size_t len, int inherit); } - SYS_LCHMOD = 274 // { int|sys||lchmod(const char *path, mode_t mode); } - SYS_LCHOWN = 275 // { int|sys||lchown(const char *path, uid_t uid, gid_t gid); } - SYS___POSIX_CHOWN = 283 // { int|sys||__posix_chown(const char *path, uid_t uid, gid_t gid); } - SYS___POSIX_FCHOWN = 284 // { int|sys||__posix_fchown(int fd, uid_t uid, gid_t gid); } - SYS___POSIX_LCHOWN = 285 // { int|sys||__posix_lchown(const char *path, uid_t uid, gid_t gid); } - SYS_GETSID = 286 // { pid_t|sys||getsid(pid_t pid); } - SYS___CLONE = 287 // { pid_t|sys||__clone(int flags, void 
*stack); } - SYS_FKTRACE = 288 // { int|sys||fktrace(int fd, int ops, int facs, pid_t pid); } - SYS_PREADV = 289 // { ssize_t|sys||preadv(int fd, const struct iovec *iovp, int iovcnt, int PAD, off_t offset); } - SYS_PWRITEV = 290 // { ssize_t|sys||pwritev(int fd, const struct iovec *iovp, int iovcnt, int PAD, off_t offset); } - SYS___GETCWD = 296 // { int|sys||__getcwd(char *bufp, size_t length); } - SYS_FCHROOT = 297 // { int|sys||fchroot(int fd); } - SYS_LCHFLAGS = 304 // { int|sys||lchflags(const char *path, u_long flags); } - SYS_ISSETUGID = 305 // { int|sys||issetugid(void); } - SYS_UTRACE = 306 // { int|sys||utrace(const char *label, void *addr, size_t len); } - SYS_GETCONTEXT = 307 // { int|sys||getcontext(struct __ucontext *ucp); } - SYS_SETCONTEXT = 308 // { int|sys||setcontext(const struct __ucontext *ucp); } - SYS__LWP_CREATE = 309 // { int|sys||_lwp_create(const struct __ucontext *ucp, u_long flags, lwpid_t *new_lwp); } - SYS__LWP_EXIT = 310 // { int|sys||_lwp_exit(void); } - SYS__LWP_SELF = 311 // { lwpid_t|sys||_lwp_self(void); } - SYS__LWP_WAIT = 312 // { int|sys||_lwp_wait(lwpid_t wait_for, lwpid_t *departed); } - SYS__LWP_SUSPEND = 313 // { int|sys||_lwp_suspend(lwpid_t target); } - SYS__LWP_CONTINUE = 314 // { int|sys||_lwp_continue(lwpid_t target); } - SYS__LWP_WAKEUP = 315 // { int|sys||_lwp_wakeup(lwpid_t target); } - SYS__LWP_GETPRIVATE = 316 // { void *|sys||_lwp_getprivate(void); } - SYS__LWP_SETPRIVATE = 317 // { void|sys||_lwp_setprivate(void *ptr); } - SYS__LWP_KILL = 318 // { int|sys||_lwp_kill(lwpid_t target, int signo); } - SYS__LWP_DETACH = 319 // { int|sys||_lwp_detach(lwpid_t target); } - SYS__LWP_UNPARK = 321 // { int|sys||_lwp_unpark(lwpid_t target, const void *hint); } - SYS__LWP_UNPARK_ALL = 322 // { ssize_t|sys||_lwp_unpark_all(const lwpid_t *targets, size_t ntargets, const void *hint); } - SYS__LWP_SETNAME = 323 // { int|sys||_lwp_setname(lwpid_t target, const char *name); } - SYS__LWP_GETNAME = 324 // { int|sys||_lwp_getname(lwpid_t target, char *name, size_t len); } - SYS__LWP_CTL = 325 // { int|sys||_lwp_ctl(int features, struct lwpctl **address); } - SYS___SIGACTION_SIGTRAMP = 340 // { int|sys||__sigaction_sigtramp(int signum, const struct sigaction *nsa, struct sigaction *osa, const void *tramp, int vers); } - SYS_PMC_GET_INFO = 341 // { int|sys||pmc_get_info(int ctr, int op, void *args); } - SYS_PMC_CONTROL = 342 // { int|sys||pmc_control(int ctr, int op, void *args); } - SYS_RASCTL = 343 // { int|sys||rasctl(void *addr, size_t len, int op); } - SYS_KQUEUE = 344 // { int|sys||kqueue(void); } - SYS__SCHED_SETPARAM = 346 // { int|sys||_sched_setparam(pid_t pid, lwpid_t lid, int policy, const struct sched_param *params); } - SYS__SCHED_GETPARAM = 347 // { int|sys||_sched_getparam(pid_t pid, lwpid_t lid, int *policy, struct sched_param *params); } - SYS__SCHED_SETAFFINITY = 348 // { int|sys||_sched_setaffinity(pid_t pid, lwpid_t lid, size_t size, const cpuset_t *cpuset); } - SYS__SCHED_GETAFFINITY = 349 // { int|sys||_sched_getaffinity(pid_t pid, lwpid_t lid, size_t size, cpuset_t *cpuset); } - SYS_SCHED_YIELD = 350 // { int|sys||sched_yield(void); } - SYS_FSYNC_RANGE = 354 // { int|sys||fsync_range(int fd, int flags, off_t start, off_t length); } - SYS_UUIDGEN = 355 // { int|sys||uuidgen(struct uuid *store, int count); } - SYS_GETVFSSTAT = 356 // { int|sys||getvfsstat(struct statvfs *buf, size_t bufsize, int flags); } - SYS_STATVFS1 = 357 // { int|sys||statvfs1(const char *path, struct statvfs *buf, int flags); } - SYS_FSTATVFS1 = 358 // { 
int|sys||fstatvfs1(int fd, struct statvfs *buf, int flags); } - SYS_EXTATTRCTL = 360 // { int|sys||extattrctl(const char *path, int cmd, const char *filename, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_FILE = 361 // { int|sys||extattr_set_file(const char *path, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_FILE = 362 // { ssize_t|sys||extattr_get_file(const char *path, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_FILE = 363 // { int|sys||extattr_delete_file(const char *path, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_FD = 364 // { int|sys||extattr_set_fd(int fd, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_FD = 365 // { ssize_t|sys||extattr_get_fd(int fd, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_FD = 366 // { int|sys||extattr_delete_fd(int fd, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_LINK = 367 // { int|sys||extattr_set_link(const char *path, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_LINK = 368 // { ssize_t|sys||extattr_get_link(const char *path, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_LINK = 369 // { int|sys||extattr_delete_link(const char *path, int attrnamespace, const char *attrname); } - SYS_EXTATTR_LIST_FD = 370 // { ssize_t|sys||extattr_list_fd(int fd, int attrnamespace, void *data, size_t nbytes); } - SYS_EXTATTR_LIST_FILE = 371 // { ssize_t|sys||extattr_list_file(const char *path, int attrnamespace, void *data, size_t nbytes); } - SYS_EXTATTR_LIST_LINK = 372 // { ssize_t|sys||extattr_list_link(const char *path, int attrnamespace, void *data, size_t nbytes); } - SYS_SETXATTR = 375 // { int|sys||setxattr(const char *path, const char *name, const void *value, size_t size, int flags); } - SYS_LSETXATTR = 376 // { int|sys||lsetxattr(const char *path, const char *name, const void *value, size_t size, int flags); } - SYS_FSETXATTR = 377 // { int|sys||fsetxattr(int fd, const char *name, const void *value, size_t size, int flags); } - SYS_GETXATTR = 378 // { int|sys||getxattr(const char *path, const char *name, void *value, size_t size); } - SYS_LGETXATTR = 379 // { int|sys||lgetxattr(const char *path, const char *name, void *value, size_t size); } - SYS_FGETXATTR = 380 // { int|sys||fgetxattr(int fd, const char *name, void *value, size_t size); } - SYS_LISTXATTR = 381 // { int|sys||listxattr(const char *path, char *list, size_t size); } - SYS_LLISTXATTR = 382 // { int|sys||llistxattr(const char *path, char *list, size_t size); } - SYS_FLISTXATTR = 383 // { int|sys||flistxattr(int fd, char *list, size_t size); } - SYS_REMOVEXATTR = 384 // { int|sys||removexattr(const char *path, const char *name); } - SYS_LREMOVEXATTR = 385 // { int|sys||lremovexattr(const char *path, const char *name); } - SYS_FREMOVEXATTR = 386 // { int|sys||fremovexattr(int fd, const char *name); } - SYS_GETDENTS = 390 // { int|sys|30|getdents(int fd, char *buf, size_t count); } - SYS_SOCKET = 394 // { int|sys|30|socket(int domain, int type, int protocol); } - SYS_GETFH = 395 // { int|sys|30|getfh(const char *fname, void *fhp, size_t *fh_size); } - SYS_MOUNT = 410 // { int|sys|50|mount(const char *type, const char *path, int flags, void *data, size_t data_len); } - SYS_MREMAP = 411 // { void *|sys||mremap(void *old_address, size_t old_size, void *new_address, size_t 
new_size, int flags); } - SYS_PSET_CREATE = 412 // { int|sys||pset_create(psetid_t *psid); } - SYS_PSET_DESTROY = 413 // { int|sys||pset_destroy(psetid_t psid); } - SYS_PSET_ASSIGN = 414 // { int|sys||pset_assign(psetid_t psid, cpuid_t cpuid, psetid_t *opsid); } - SYS__PSET_BIND = 415 // { int|sys||_pset_bind(idtype_t idtype, id_t first_id, id_t second_id, psetid_t psid, psetid_t *opsid); } - SYS_POSIX_FADVISE = 416 // { int|sys|50|posix_fadvise(int fd, int PAD, off_t offset, off_t len, int advice); } - SYS_SELECT = 417 // { int|sys|50|select(int nd, fd_set *in, fd_set *ou, fd_set *ex, struct timeval *tv); } - SYS_GETTIMEOFDAY = 418 // { int|sys|50|gettimeofday(struct timeval *tp, void *tzp); } - SYS_SETTIMEOFDAY = 419 // { int|sys|50|settimeofday(const struct timeval *tv, const void *tzp); } - SYS_UTIMES = 420 // { int|sys|50|utimes(const char *path, const struct timeval *tptr); } - SYS_ADJTIME = 421 // { int|sys|50|adjtime(const struct timeval *delta, struct timeval *olddelta); } - SYS_FUTIMES = 423 // { int|sys|50|futimes(int fd, const struct timeval *tptr); } - SYS_LUTIMES = 424 // { int|sys|50|lutimes(const char *path, const struct timeval *tptr); } - SYS_SETITIMER = 425 // { int|sys|50|setitimer(int which, const struct itimerval *itv, struct itimerval *oitv); } - SYS_GETITIMER = 426 // { int|sys|50|getitimer(int which, struct itimerval *itv); } - SYS_CLOCK_GETTIME = 427 // { int|sys|50|clock_gettime(clockid_t clock_id, struct timespec *tp); } - SYS_CLOCK_SETTIME = 428 // { int|sys|50|clock_settime(clockid_t clock_id, const struct timespec *tp); } - SYS_CLOCK_GETRES = 429 // { int|sys|50|clock_getres(clockid_t clock_id, struct timespec *tp); } - SYS_NANOSLEEP = 430 // { int|sys|50|nanosleep(const struct timespec *rqtp, struct timespec *rmtp); } - SYS___SIGTIMEDWAIT = 431 // { int|sys|50|__sigtimedwait(const sigset_t *set, siginfo_t *info, struct timespec *timeout); } - SYS__LWP_PARK = 434 // { int|sys|50|_lwp_park(const struct timespec *ts, lwpid_t unpark, const void *hint, const void *unparkhint); } - SYS_KEVENT = 435 // { int|sys|50|kevent(int fd, const struct kevent *changelist, size_t nchanges, struct kevent *eventlist, size_t nevents, const struct timespec *timeout); } - SYS_PSELECT = 436 // { int|sys|50|pselect(int nd, fd_set *in, fd_set *ou, fd_set *ex, const struct timespec *ts, const sigset_t *mask); } - SYS_POLLTS = 437 // { int|sys|50|pollts(struct pollfd *fds, u_int nfds, const struct timespec *ts, const sigset_t *mask); } - SYS_STAT = 439 // { int|sys|50|stat(const char *path, struct stat *ub); } - SYS_FSTAT = 440 // { int|sys|50|fstat(int fd, struct stat *sb); } - SYS_LSTAT = 441 // { int|sys|50|lstat(const char *path, struct stat *ub); } - SYS___SEMCTL = 442 // { int|sys|50|__semctl(int semid, int semnum, int cmd, ... 
union __semun *arg); } - SYS_SHMCTL = 443 // { int|sys|50|shmctl(int shmid, int cmd, struct shmid_ds *buf); } - SYS_MSGCTL = 444 // { int|sys|50|msgctl(int msqid, int cmd, struct msqid_ds *buf); } - SYS_GETRUSAGE = 445 // { int|sys|50|getrusage(int who, struct rusage *rusage); } - SYS_TIMER_SETTIME = 446 // { int|sys|50|timer_settime(timer_t timerid, int flags, const struct itimerspec *value, struct itimerspec *ovalue); } - SYS_TIMER_GETTIME = 447 // { int|sys|50|timer_gettime(timer_t timerid, struct itimerspec *value); } - SYS_NTP_GETTIME = 448 // { int|sys|50|ntp_gettime(struct ntptimeval *ntvp); } - SYS_WAIT4 = 449 // { int|sys|50|wait4(pid_t pid, int *status, int options, struct rusage *rusage); } - SYS_MKNOD = 450 // { int|sys|50|mknod(const char *path, mode_t mode, dev_t dev); } - SYS_FHSTAT = 451 // { int|sys|50|fhstat(const void *fhp, size_t fh_size, struct stat *sb); } - SYS_PIPE2 = 453 // { int|sys||pipe2(int *fildes, int flags); } - SYS_DUP3 = 454 // { int|sys||dup3(int from, int to, int flags); } - SYS_KQUEUE1 = 455 // { int|sys||kqueue1(int flags); } - SYS_PACCEPT = 456 // { int|sys||paccept(int s, struct sockaddr *name, socklen_t *anamelen, const sigset_t *mask, int flags); } - SYS_LINKAT = 457 // { int|sys||linkat(int fd1, const char *name1, int fd2, const char *name2, int flags); } - SYS_RENAMEAT = 458 // { int|sys||renameat(int fromfd, const char *from, int tofd, const char *to); } - SYS_MKFIFOAT = 459 // { int|sys||mkfifoat(int fd, const char *path, mode_t mode); } - SYS_MKNODAT = 460 // { int|sys||mknodat(int fd, const char *path, mode_t mode, uint32_t dev); } - SYS_MKDIRAT = 461 // { int|sys||mkdirat(int fd, const char *path, mode_t mode); } - SYS_FACCESSAT = 462 // { int|sys||faccessat(int fd, const char *path, int amode, int flag); } - SYS_FCHMODAT = 463 // { int|sys||fchmodat(int fd, const char *path, mode_t mode, int flag); } - SYS_FCHOWNAT = 464 // { int|sys||fchownat(int fd, const char *path, uid_t owner, gid_t group, int flag); } - SYS_FEXECVE = 465 // { int|sys||fexecve(int fd, char * const *argp, char * const *envp); } - SYS_FSTATAT = 466 // { int|sys||fstatat(int fd, const char *path, struct stat *buf, int flag); } - SYS_UTIMENSAT = 467 // { int|sys||utimensat(int fd, const char *path, const struct timespec *tptr, int flag); } - SYS_OPENAT = 468 // { int|sys||openat(int fd, const char *path, int oflags, ... 
mode_t mode); } - SYS_READLINKAT = 469 // { int|sys||readlinkat(int fd, const char *path, char *buf, size_t bufsize); } - SYS_SYMLINKAT = 470 // { int|sys||symlinkat(const char *path1, int fd, const char *path2); } - SYS_UNLINKAT = 471 // { int|sys||unlinkat(int fd, const char *path, int flag); } - SYS_FUTIMENS = 472 // { int|sys||futimens(int fd, const struct timespec *tptr); } - SYS___QUOTACTL = 473 // { int|sys||__quotactl(const char *path, struct quotactl_args *args); } - SYS_POSIX_SPAWN = 474 // { int|sys||posix_spawn(pid_t *pid, const char *path, const struct posix_spawn_file_actions *file_actions, const struct posix_spawnattr *attrp, char *const *argv, char *const *envp); } - SYS_RECVMMSG = 475 // { int|sys||recvmmsg(int s, struct mmsghdr *mmsg, unsigned int vlen, unsigned int flags, struct timespec *timeout); } - SYS_SENDMMSG = 476 // { int|sys||sendmmsg(int s, struct mmsghdr *mmsg, unsigned int vlen, unsigned int flags); } -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_arm.go deleted file mode 100644 index 612ba662cb2..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_netbsd_arm.go +++ /dev/null @@ -1,273 +0,0 @@ -// mksysnum_netbsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build arm,netbsd - -package unix - -const ( - SYS_EXIT = 1 // { void|sys||exit(int rval); } - SYS_FORK = 2 // { int|sys||fork(void); } - SYS_READ = 3 // { ssize_t|sys||read(int fd, void *buf, size_t nbyte); } - SYS_WRITE = 4 // { ssize_t|sys||write(int fd, const void *buf, size_t nbyte); } - SYS_OPEN = 5 // { int|sys||open(const char *path, int flags, ... 
mode_t mode); } - SYS_CLOSE = 6 // { int|sys||close(int fd); } - SYS_LINK = 9 // { int|sys||link(const char *path, const char *link); } - SYS_UNLINK = 10 // { int|sys||unlink(const char *path); } - SYS_CHDIR = 12 // { int|sys||chdir(const char *path); } - SYS_FCHDIR = 13 // { int|sys||fchdir(int fd); } - SYS_CHMOD = 15 // { int|sys||chmod(const char *path, mode_t mode); } - SYS_CHOWN = 16 // { int|sys||chown(const char *path, uid_t uid, gid_t gid); } - SYS_BREAK = 17 // { int|sys||obreak(char *nsize); } - SYS_GETPID = 20 // { pid_t|sys||getpid_with_ppid(void); } - SYS_UNMOUNT = 22 // { int|sys||unmount(const char *path, int flags); } - SYS_SETUID = 23 // { int|sys||setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t|sys||getuid_with_euid(void); } - SYS_GETEUID = 25 // { uid_t|sys||geteuid(void); } - SYS_PTRACE = 26 // { int|sys||ptrace(int req, pid_t pid, void *addr, int data); } - SYS_RECVMSG = 27 // { ssize_t|sys||recvmsg(int s, struct msghdr *msg, int flags); } - SYS_SENDMSG = 28 // { ssize_t|sys||sendmsg(int s, const struct msghdr *msg, int flags); } - SYS_RECVFROM = 29 // { ssize_t|sys||recvfrom(int s, void *buf, size_t len, int flags, struct sockaddr *from, socklen_t *fromlenaddr); } - SYS_ACCEPT = 30 // { int|sys||accept(int s, struct sockaddr *name, socklen_t *anamelen); } - SYS_GETPEERNAME = 31 // { int|sys||getpeername(int fdes, struct sockaddr *asa, socklen_t *alen); } - SYS_GETSOCKNAME = 32 // { int|sys||getsockname(int fdes, struct sockaddr *asa, socklen_t *alen); } - SYS_ACCESS = 33 // { int|sys||access(const char *path, int flags); } - SYS_CHFLAGS = 34 // { int|sys||chflags(const char *path, u_long flags); } - SYS_FCHFLAGS = 35 // { int|sys||fchflags(int fd, u_long flags); } - SYS_SYNC = 36 // { void|sys||sync(void); } - SYS_KILL = 37 // { int|sys||kill(pid_t pid, int signum); } - SYS_GETPPID = 39 // { pid_t|sys||getppid(void); } - SYS_DUP = 41 // { int|sys||dup(int fd); } - SYS_PIPE = 42 // { int|sys||pipe(void); } - SYS_GETEGID = 43 // { gid_t|sys||getegid(void); } - SYS_PROFIL = 44 // { int|sys||profil(char *samples, size_t size, u_long offset, u_int scale); } - SYS_KTRACE = 45 // { int|sys||ktrace(const char *fname, int ops, int facs, pid_t pid); } - SYS_GETGID = 47 // { gid_t|sys||getgid_with_egid(void); } - SYS___GETLOGIN = 49 // { int|sys||__getlogin(char *namebuf, size_t namelen); } - SYS___SETLOGIN = 50 // { int|sys||__setlogin(const char *namebuf); } - SYS_ACCT = 51 // { int|sys||acct(const char *path); } - SYS_IOCTL = 54 // { int|sys||ioctl(int fd, u_long com, ... 
void *data); } - SYS_REVOKE = 56 // { int|sys||revoke(const char *path); } - SYS_SYMLINK = 57 // { int|sys||symlink(const char *path, const char *link); } - SYS_READLINK = 58 // { ssize_t|sys||readlink(const char *path, char *buf, size_t count); } - SYS_EXECVE = 59 // { int|sys||execve(const char *path, char * const *argp, char * const *envp); } - SYS_UMASK = 60 // { mode_t|sys||umask(mode_t newmask); } - SYS_CHROOT = 61 // { int|sys||chroot(const char *path); } - SYS_VFORK = 66 // { int|sys||vfork(void); } - SYS_SBRK = 69 // { int|sys||sbrk(intptr_t incr); } - SYS_SSTK = 70 // { int|sys||sstk(int incr); } - SYS_VADVISE = 72 // { int|sys||ovadvise(int anom); } - SYS_MUNMAP = 73 // { int|sys||munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int|sys||mprotect(void *addr, size_t len, int prot); } - SYS_MADVISE = 75 // { int|sys||madvise(void *addr, size_t len, int behav); } - SYS_MINCORE = 78 // { int|sys||mincore(void *addr, size_t len, char *vec); } - SYS_GETGROUPS = 79 // { int|sys||getgroups(int gidsetsize, gid_t *gidset); } - SYS_SETGROUPS = 80 // { int|sys||setgroups(int gidsetsize, const gid_t *gidset); } - SYS_GETPGRP = 81 // { int|sys||getpgrp(void); } - SYS_SETPGID = 82 // { int|sys||setpgid(pid_t pid, pid_t pgid); } - SYS_DUP2 = 90 // { int|sys||dup2(int from, int to); } - SYS_FCNTL = 92 // { int|sys||fcntl(int fd, int cmd, ... void *arg); } - SYS_FSYNC = 95 // { int|sys||fsync(int fd); } - SYS_SETPRIORITY = 96 // { int|sys||setpriority(int which, id_t who, int prio); } - SYS_CONNECT = 98 // { int|sys||connect(int s, const struct sockaddr *name, socklen_t namelen); } - SYS_GETPRIORITY = 100 // { int|sys||getpriority(int which, id_t who); } - SYS_BIND = 104 // { int|sys||bind(int s, const struct sockaddr *name, socklen_t namelen); } - SYS_SETSOCKOPT = 105 // { int|sys||setsockopt(int s, int level, int name, const void *val, socklen_t valsize); } - SYS_LISTEN = 106 // { int|sys||listen(int s, int backlog); } - SYS_GETSOCKOPT = 118 // { int|sys||getsockopt(int s, int level, int name, void *val, socklen_t *avalsize); } - SYS_READV = 120 // { ssize_t|sys||readv(int fd, const struct iovec *iovp, int iovcnt); } - SYS_WRITEV = 121 // { ssize_t|sys||writev(int fd, const struct iovec *iovp, int iovcnt); } - SYS_FCHOWN = 123 // { int|sys||fchown(int fd, uid_t uid, gid_t gid); } - SYS_FCHMOD = 124 // { int|sys||fchmod(int fd, mode_t mode); } - SYS_SETREUID = 126 // { int|sys||setreuid(uid_t ruid, uid_t euid); } - SYS_SETREGID = 127 // { int|sys||setregid(gid_t rgid, gid_t egid); } - SYS_RENAME = 128 // { int|sys||rename(const char *from, const char *to); } - SYS_FLOCK = 131 // { int|sys||flock(int fd, int how); } - SYS_MKFIFO = 132 // { int|sys||mkfifo(const char *path, mode_t mode); } - SYS_SENDTO = 133 // { ssize_t|sys||sendto(int s, const void *buf, size_t len, int flags, const struct sockaddr *to, socklen_t tolen); } - SYS_SHUTDOWN = 134 // { int|sys||shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int|sys||socketpair(int domain, int type, int protocol, int *rsv); } - SYS_MKDIR = 136 // { int|sys||mkdir(const char *path, mode_t mode); } - SYS_RMDIR = 137 // { int|sys||rmdir(const char *path); } - SYS_SETSID = 147 // { int|sys||setsid(void); } - SYS_SYSARCH = 165 // { int|sys||sysarch(int op, void *parms); } - SYS_PREAD = 173 // { ssize_t|sys||pread(int fd, void *buf, size_t nbyte, int PAD, off_t offset); } - SYS_PWRITE = 174 // { ssize_t|sys||pwrite(int fd, const void *buf, size_t nbyte, int PAD, off_t offset); } - SYS_NTP_ADJTIME = 176 // { int|sys||ntp_adjtime(struct 
timex *tp); } - SYS_SETGID = 181 // { int|sys||setgid(gid_t gid); } - SYS_SETEGID = 182 // { int|sys||setegid(gid_t egid); } - SYS_SETEUID = 183 // { int|sys||seteuid(uid_t euid); } - SYS_PATHCONF = 191 // { long|sys||pathconf(const char *path, int name); } - SYS_FPATHCONF = 192 // { long|sys||fpathconf(int fd, int name); } - SYS_GETRLIMIT = 194 // { int|sys||getrlimit(int which, struct rlimit *rlp); } - SYS_SETRLIMIT = 195 // { int|sys||setrlimit(int which, const struct rlimit *rlp); } - SYS_MMAP = 197 // { void *|sys||mmap(void *addr, size_t len, int prot, int flags, int fd, long PAD, off_t pos); } - SYS_LSEEK = 199 // { off_t|sys||lseek(int fd, int PAD, off_t offset, int whence); } - SYS_TRUNCATE = 200 // { int|sys||truncate(const char *path, int PAD, off_t length); } - SYS_FTRUNCATE = 201 // { int|sys||ftruncate(int fd, int PAD, off_t length); } - SYS___SYSCTL = 202 // { int|sys||__sysctl(const int *name, u_int namelen, void *old, size_t *oldlenp, const void *new, size_t newlen); } - SYS_MLOCK = 203 // { int|sys||mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int|sys||munlock(const void *addr, size_t len); } - SYS_UNDELETE = 205 // { int|sys||undelete(const char *path); } - SYS_GETPGID = 207 // { pid_t|sys||getpgid(pid_t pid); } - SYS_REBOOT = 208 // { int|sys||reboot(int opt, char *bootstr); } - SYS_POLL = 209 // { int|sys||poll(struct pollfd *fds, u_int nfds, int timeout); } - SYS_SEMGET = 221 // { int|sys||semget(key_t key, int nsems, int semflg); } - SYS_SEMOP = 222 // { int|sys||semop(int semid, struct sembuf *sops, size_t nsops); } - SYS_SEMCONFIG = 223 // { int|sys||semconfig(int flag); } - SYS_MSGGET = 225 // { int|sys||msgget(key_t key, int msgflg); } - SYS_MSGSND = 226 // { int|sys||msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg); } - SYS_MSGRCV = 227 // { ssize_t|sys||msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg); } - SYS_SHMAT = 228 // { void *|sys||shmat(int shmid, const void *shmaddr, int shmflg); } - SYS_SHMDT = 230 // { int|sys||shmdt(const void *shmaddr); } - SYS_SHMGET = 231 // { int|sys||shmget(key_t key, size_t size, int shmflg); } - SYS_TIMER_CREATE = 235 // { int|sys||timer_create(clockid_t clock_id, struct sigevent *evp, timer_t *timerid); } - SYS_TIMER_DELETE = 236 // { int|sys||timer_delete(timer_t timerid); } - SYS_TIMER_GETOVERRUN = 239 // { int|sys||timer_getoverrun(timer_t timerid); } - SYS_FDATASYNC = 241 // { int|sys||fdatasync(int fd); } - SYS_MLOCKALL = 242 // { int|sys||mlockall(int flags); } - SYS_MUNLOCKALL = 243 // { int|sys||munlockall(void); } - SYS_SIGQUEUEINFO = 245 // { int|sys||sigqueueinfo(pid_t pid, const siginfo_t *info); } - SYS_MODCTL = 246 // { int|sys||modctl(int cmd, void *arg); } - SYS___POSIX_RENAME = 270 // { int|sys||__posix_rename(const char *from, const char *to); } - SYS_SWAPCTL = 271 // { int|sys||swapctl(int cmd, void *arg, int misc); } - SYS_MINHERIT = 273 // { int|sys||minherit(void *addr, size_t len, int inherit); } - SYS_LCHMOD = 274 // { int|sys||lchmod(const char *path, mode_t mode); } - SYS_LCHOWN = 275 // { int|sys||lchown(const char *path, uid_t uid, gid_t gid); } - SYS___POSIX_CHOWN = 283 // { int|sys||__posix_chown(const char *path, uid_t uid, gid_t gid); } - SYS___POSIX_FCHOWN = 284 // { int|sys||__posix_fchown(int fd, uid_t uid, gid_t gid); } - SYS___POSIX_LCHOWN = 285 // { int|sys||__posix_lchown(const char *path, uid_t uid, gid_t gid); } - SYS_GETSID = 286 // { pid_t|sys||getsid(pid_t pid); } - SYS___CLONE = 287 // { pid_t|sys||__clone(int flags, void 
*stack); } - SYS_FKTRACE = 288 // { int|sys||fktrace(int fd, int ops, int facs, pid_t pid); } - SYS_PREADV = 289 // { ssize_t|sys||preadv(int fd, const struct iovec *iovp, int iovcnt, int PAD, off_t offset); } - SYS_PWRITEV = 290 // { ssize_t|sys||pwritev(int fd, const struct iovec *iovp, int iovcnt, int PAD, off_t offset); } - SYS___GETCWD = 296 // { int|sys||__getcwd(char *bufp, size_t length); } - SYS_FCHROOT = 297 // { int|sys||fchroot(int fd); } - SYS_LCHFLAGS = 304 // { int|sys||lchflags(const char *path, u_long flags); } - SYS_ISSETUGID = 305 // { int|sys||issetugid(void); } - SYS_UTRACE = 306 // { int|sys||utrace(const char *label, void *addr, size_t len); } - SYS_GETCONTEXT = 307 // { int|sys||getcontext(struct __ucontext *ucp); } - SYS_SETCONTEXT = 308 // { int|sys||setcontext(const struct __ucontext *ucp); } - SYS__LWP_CREATE = 309 // { int|sys||_lwp_create(const struct __ucontext *ucp, u_long flags, lwpid_t *new_lwp); } - SYS__LWP_EXIT = 310 // { int|sys||_lwp_exit(void); } - SYS__LWP_SELF = 311 // { lwpid_t|sys||_lwp_self(void); } - SYS__LWP_WAIT = 312 // { int|sys||_lwp_wait(lwpid_t wait_for, lwpid_t *departed); } - SYS__LWP_SUSPEND = 313 // { int|sys||_lwp_suspend(lwpid_t target); } - SYS__LWP_CONTINUE = 314 // { int|sys||_lwp_continue(lwpid_t target); } - SYS__LWP_WAKEUP = 315 // { int|sys||_lwp_wakeup(lwpid_t target); } - SYS__LWP_GETPRIVATE = 316 // { void *|sys||_lwp_getprivate(void); } - SYS__LWP_SETPRIVATE = 317 // { void|sys||_lwp_setprivate(void *ptr); } - SYS__LWP_KILL = 318 // { int|sys||_lwp_kill(lwpid_t target, int signo); } - SYS__LWP_DETACH = 319 // { int|sys||_lwp_detach(lwpid_t target); } - SYS__LWP_UNPARK = 321 // { int|sys||_lwp_unpark(lwpid_t target, const void *hint); } - SYS__LWP_UNPARK_ALL = 322 // { ssize_t|sys||_lwp_unpark_all(const lwpid_t *targets, size_t ntargets, const void *hint); } - SYS__LWP_SETNAME = 323 // { int|sys||_lwp_setname(lwpid_t target, const char *name); } - SYS__LWP_GETNAME = 324 // { int|sys||_lwp_getname(lwpid_t target, char *name, size_t len); } - SYS__LWP_CTL = 325 // { int|sys||_lwp_ctl(int features, struct lwpctl **address); } - SYS___SIGACTION_SIGTRAMP = 340 // { int|sys||__sigaction_sigtramp(int signum, const struct sigaction *nsa, struct sigaction *osa, const void *tramp, int vers); } - SYS_PMC_GET_INFO = 341 // { int|sys||pmc_get_info(int ctr, int op, void *args); } - SYS_PMC_CONTROL = 342 // { int|sys||pmc_control(int ctr, int op, void *args); } - SYS_RASCTL = 343 // { int|sys||rasctl(void *addr, size_t len, int op); } - SYS_KQUEUE = 344 // { int|sys||kqueue(void); } - SYS__SCHED_SETPARAM = 346 // { int|sys||_sched_setparam(pid_t pid, lwpid_t lid, int policy, const struct sched_param *params); } - SYS__SCHED_GETPARAM = 347 // { int|sys||_sched_getparam(pid_t pid, lwpid_t lid, int *policy, struct sched_param *params); } - SYS__SCHED_SETAFFINITY = 348 // { int|sys||_sched_setaffinity(pid_t pid, lwpid_t lid, size_t size, const cpuset_t *cpuset); } - SYS__SCHED_GETAFFINITY = 349 // { int|sys||_sched_getaffinity(pid_t pid, lwpid_t lid, size_t size, cpuset_t *cpuset); } - SYS_SCHED_YIELD = 350 // { int|sys||sched_yield(void); } - SYS_FSYNC_RANGE = 354 // { int|sys||fsync_range(int fd, int flags, off_t start, off_t length); } - SYS_UUIDGEN = 355 // { int|sys||uuidgen(struct uuid *store, int count); } - SYS_GETVFSSTAT = 356 // { int|sys||getvfsstat(struct statvfs *buf, size_t bufsize, int flags); } - SYS_STATVFS1 = 357 // { int|sys||statvfs1(const char *path, struct statvfs *buf, int flags); } - SYS_FSTATVFS1 = 358 // { 
int|sys||fstatvfs1(int fd, struct statvfs *buf, int flags); } - SYS_EXTATTRCTL = 360 // { int|sys||extattrctl(const char *path, int cmd, const char *filename, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_FILE = 361 // { int|sys||extattr_set_file(const char *path, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_FILE = 362 // { ssize_t|sys||extattr_get_file(const char *path, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_FILE = 363 // { int|sys||extattr_delete_file(const char *path, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_FD = 364 // { int|sys||extattr_set_fd(int fd, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_FD = 365 // { ssize_t|sys||extattr_get_fd(int fd, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_FD = 366 // { int|sys||extattr_delete_fd(int fd, int attrnamespace, const char *attrname); } - SYS_EXTATTR_SET_LINK = 367 // { int|sys||extattr_set_link(const char *path, int attrnamespace, const char *attrname, const void *data, size_t nbytes); } - SYS_EXTATTR_GET_LINK = 368 // { ssize_t|sys||extattr_get_link(const char *path, int attrnamespace, const char *attrname, void *data, size_t nbytes); } - SYS_EXTATTR_DELETE_LINK = 369 // { int|sys||extattr_delete_link(const char *path, int attrnamespace, const char *attrname); } - SYS_EXTATTR_LIST_FD = 370 // { ssize_t|sys||extattr_list_fd(int fd, int attrnamespace, void *data, size_t nbytes); } - SYS_EXTATTR_LIST_FILE = 371 // { ssize_t|sys||extattr_list_file(const char *path, int attrnamespace, void *data, size_t nbytes); } - SYS_EXTATTR_LIST_LINK = 372 // { ssize_t|sys||extattr_list_link(const char *path, int attrnamespace, void *data, size_t nbytes); } - SYS_SETXATTR = 375 // { int|sys||setxattr(const char *path, const char *name, const void *value, size_t size, int flags); } - SYS_LSETXATTR = 376 // { int|sys||lsetxattr(const char *path, const char *name, const void *value, size_t size, int flags); } - SYS_FSETXATTR = 377 // { int|sys||fsetxattr(int fd, const char *name, const void *value, size_t size, int flags); } - SYS_GETXATTR = 378 // { int|sys||getxattr(const char *path, const char *name, void *value, size_t size); } - SYS_LGETXATTR = 379 // { int|sys||lgetxattr(const char *path, const char *name, void *value, size_t size); } - SYS_FGETXATTR = 380 // { int|sys||fgetxattr(int fd, const char *name, void *value, size_t size); } - SYS_LISTXATTR = 381 // { int|sys||listxattr(const char *path, char *list, size_t size); } - SYS_LLISTXATTR = 382 // { int|sys||llistxattr(const char *path, char *list, size_t size); } - SYS_FLISTXATTR = 383 // { int|sys||flistxattr(int fd, char *list, size_t size); } - SYS_REMOVEXATTR = 384 // { int|sys||removexattr(const char *path, const char *name); } - SYS_LREMOVEXATTR = 385 // { int|sys||lremovexattr(const char *path, const char *name); } - SYS_FREMOVEXATTR = 386 // { int|sys||fremovexattr(int fd, const char *name); } - SYS_GETDENTS = 390 // { int|sys|30|getdents(int fd, char *buf, size_t count); } - SYS_SOCKET = 394 // { int|sys|30|socket(int domain, int type, int protocol); } - SYS_GETFH = 395 // { int|sys|30|getfh(const char *fname, void *fhp, size_t *fh_size); } - SYS_MOUNT = 410 // { int|sys|50|mount(const char *type, const char *path, int flags, void *data, size_t data_len); } - SYS_MREMAP = 411 // { void *|sys||mremap(void *old_address, size_t old_size, void *new_address, size_t 
new_size, int flags); } - SYS_PSET_CREATE = 412 // { int|sys||pset_create(psetid_t *psid); } - SYS_PSET_DESTROY = 413 // { int|sys||pset_destroy(psetid_t psid); } - SYS_PSET_ASSIGN = 414 // { int|sys||pset_assign(psetid_t psid, cpuid_t cpuid, psetid_t *opsid); } - SYS__PSET_BIND = 415 // { int|sys||_pset_bind(idtype_t idtype, id_t first_id, id_t second_id, psetid_t psid, psetid_t *opsid); } - SYS_POSIX_FADVISE = 416 // { int|sys|50|posix_fadvise(int fd, int PAD, off_t offset, off_t len, int advice); } - SYS_SELECT = 417 // { int|sys|50|select(int nd, fd_set *in, fd_set *ou, fd_set *ex, struct timeval *tv); } - SYS_GETTIMEOFDAY = 418 // { int|sys|50|gettimeofday(struct timeval *tp, void *tzp); } - SYS_SETTIMEOFDAY = 419 // { int|sys|50|settimeofday(const struct timeval *tv, const void *tzp); } - SYS_UTIMES = 420 // { int|sys|50|utimes(const char *path, const struct timeval *tptr); } - SYS_ADJTIME = 421 // { int|sys|50|adjtime(const struct timeval *delta, struct timeval *olddelta); } - SYS_FUTIMES = 423 // { int|sys|50|futimes(int fd, const struct timeval *tptr); } - SYS_LUTIMES = 424 // { int|sys|50|lutimes(const char *path, const struct timeval *tptr); } - SYS_SETITIMER = 425 // { int|sys|50|setitimer(int which, const struct itimerval *itv, struct itimerval *oitv); } - SYS_GETITIMER = 426 // { int|sys|50|getitimer(int which, struct itimerval *itv); } - SYS_CLOCK_GETTIME = 427 // { int|sys|50|clock_gettime(clockid_t clock_id, struct timespec *tp); } - SYS_CLOCK_SETTIME = 428 // { int|sys|50|clock_settime(clockid_t clock_id, const struct timespec *tp); } - SYS_CLOCK_GETRES = 429 // { int|sys|50|clock_getres(clockid_t clock_id, struct timespec *tp); } - SYS_NANOSLEEP = 430 // { int|sys|50|nanosleep(const struct timespec *rqtp, struct timespec *rmtp); } - SYS___SIGTIMEDWAIT = 431 // { int|sys|50|__sigtimedwait(const sigset_t *set, siginfo_t *info, struct timespec *timeout); } - SYS__LWP_PARK = 434 // { int|sys|50|_lwp_park(const struct timespec *ts, lwpid_t unpark, const void *hint, const void *unparkhint); } - SYS_KEVENT = 435 // { int|sys|50|kevent(int fd, const struct kevent *changelist, size_t nchanges, struct kevent *eventlist, size_t nevents, const struct timespec *timeout); } - SYS_PSELECT = 436 // { int|sys|50|pselect(int nd, fd_set *in, fd_set *ou, fd_set *ex, const struct timespec *ts, const sigset_t *mask); } - SYS_POLLTS = 437 // { int|sys|50|pollts(struct pollfd *fds, u_int nfds, const struct timespec *ts, const sigset_t *mask); } - SYS_STAT = 439 // { int|sys|50|stat(const char *path, struct stat *ub); } - SYS_FSTAT = 440 // { int|sys|50|fstat(int fd, struct stat *sb); } - SYS_LSTAT = 441 // { int|sys|50|lstat(const char *path, struct stat *ub); } - SYS___SEMCTL = 442 // { int|sys|50|__semctl(int semid, int semnum, int cmd, ... 
union __semun *arg); } - SYS_SHMCTL = 443 // { int|sys|50|shmctl(int shmid, int cmd, struct shmid_ds *buf); } - SYS_MSGCTL = 444 // { int|sys|50|msgctl(int msqid, int cmd, struct msqid_ds *buf); } - SYS_GETRUSAGE = 445 // { int|sys|50|getrusage(int who, struct rusage *rusage); } - SYS_TIMER_SETTIME = 446 // { int|sys|50|timer_settime(timer_t timerid, int flags, const struct itimerspec *value, struct itimerspec *ovalue); } - SYS_TIMER_GETTIME = 447 // { int|sys|50|timer_gettime(timer_t timerid, struct itimerspec *value); } - SYS_NTP_GETTIME = 448 // { int|sys|50|ntp_gettime(struct ntptimeval *ntvp); } - SYS_WAIT4 = 449 // { int|sys|50|wait4(pid_t pid, int *status, int options, struct rusage *rusage); } - SYS_MKNOD = 450 // { int|sys|50|mknod(const char *path, mode_t mode, dev_t dev); } - SYS_FHSTAT = 451 // { int|sys|50|fhstat(const void *fhp, size_t fh_size, struct stat *sb); } - SYS_PIPE2 = 453 // { int|sys||pipe2(int *fildes, int flags); } - SYS_DUP3 = 454 // { int|sys||dup3(int from, int to, int flags); } - SYS_KQUEUE1 = 455 // { int|sys||kqueue1(int flags); } - SYS_PACCEPT = 456 // { int|sys||paccept(int s, struct sockaddr *name, socklen_t *anamelen, const sigset_t *mask, int flags); } - SYS_LINKAT = 457 // { int|sys||linkat(int fd1, const char *name1, int fd2, const char *name2, int flags); } - SYS_RENAMEAT = 458 // { int|sys||renameat(int fromfd, const char *from, int tofd, const char *to); } - SYS_MKFIFOAT = 459 // { int|sys||mkfifoat(int fd, const char *path, mode_t mode); } - SYS_MKNODAT = 460 // { int|sys||mknodat(int fd, const char *path, mode_t mode, uint32_t dev); } - SYS_MKDIRAT = 461 // { int|sys||mkdirat(int fd, const char *path, mode_t mode); } - SYS_FACCESSAT = 462 // { int|sys||faccessat(int fd, const char *path, int amode, int flag); } - SYS_FCHMODAT = 463 // { int|sys||fchmodat(int fd, const char *path, mode_t mode, int flag); } - SYS_FCHOWNAT = 464 // { int|sys||fchownat(int fd, const char *path, uid_t owner, gid_t group, int flag); } - SYS_FEXECVE = 465 // { int|sys||fexecve(int fd, char * const *argp, char * const *envp); } - SYS_FSTATAT = 466 // { int|sys||fstatat(int fd, const char *path, struct stat *buf, int flag); } - SYS_UTIMENSAT = 467 // { int|sys||utimensat(int fd, const char *path, const struct timespec *tptr, int flag); } - SYS_OPENAT = 468 // { int|sys||openat(int fd, const char *path, int oflags, ... 
mode_t mode); } - SYS_READLINKAT = 469 // { int|sys||readlinkat(int fd, const char *path, char *buf, size_t bufsize); } - SYS_SYMLINKAT = 470 // { int|sys||symlinkat(const char *path1, int fd, const char *path2); } - SYS_UNLINKAT = 471 // { int|sys||unlinkat(int fd, const char *path, int flag); } - SYS_FUTIMENS = 472 // { int|sys||futimens(int fd, const struct timespec *tptr); } - SYS___QUOTACTL = 473 // { int|sys||__quotactl(const char *path, struct quotactl_args *args); } - SYS_POSIX_SPAWN = 474 // { int|sys||posix_spawn(pid_t *pid, const char *path, const struct posix_spawn_file_actions *file_actions, const struct posix_spawnattr *attrp, char *const *argv, char *const *envp); } - SYS_RECVMMSG = 475 // { int|sys||recvmmsg(int s, struct mmsghdr *mmsg, unsigned int vlen, unsigned int flags, struct timespec *timeout); } - SYS_SENDMMSG = 476 // { int|sys||sendmmsg(int s, struct mmsghdr *mmsg, unsigned int vlen, unsigned int flags); } -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_openbsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_openbsd_386.go deleted file mode 100644 index 3e8ce2a1ddf..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_openbsd_386.go +++ /dev/null @@ -1,207 +0,0 @@ -// mksysnum_openbsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build 386,openbsd - -package unix - -const ( - SYS_EXIT = 1 // { void sys_exit(int rval); } - SYS_FORK = 2 // { int sys_fork(void); } - SYS_READ = 3 // { ssize_t sys_read(int fd, void *buf, size_t nbyte); } - SYS_WRITE = 4 // { ssize_t sys_write(int fd, const void *buf, \ - SYS_OPEN = 5 // { int sys_open(const char *path, \ - SYS_CLOSE = 6 // { int sys_close(int fd); } - SYS___TFORK = 8 // { int sys___tfork(const struct __tfork *param, \ - SYS_LINK = 9 // { int sys_link(const char *path, const char *link); } - SYS_UNLINK = 10 // { int sys_unlink(const char *path); } - SYS_WAIT4 = 11 // { pid_t sys_wait4(pid_t pid, int *status, \ - SYS_CHDIR = 12 // { int sys_chdir(const char *path); } - SYS_FCHDIR = 13 // { int sys_fchdir(int fd); } - SYS_MKNOD = 14 // { int sys_mknod(const char *path, mode_t mode, \ - SYS_CHMOD = 15 // { int sys_chmod(const char *path, mode_t mode); } - SYS_CHOWN = 16 // { int sys_chown(const char *path, uid_t uid, \ - SYS_OBREAK = 17 // { int sys_obreak(char *nsize); } break - SYS_GETDTABLECOUNT = 18 // { int sys_getdtablecount(void); } - SYS_GETRUSAGE = 19 // { int sys_getrusage(int who, \ - SYS_GETPID = 20 // { pid_t sys_getpid(void); } - SYS_MOUNT = 21 // { int sys_mount(const char *type, const char *path, \ - SYS_UNMOUNT = 22 // { int sys_unmount(const char *path, int flags); } - SYS_SETUID = 23 // { int sys_setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t sys_getuid(void); } - SYS_GETEUID = 25 // { uid_t sys_geteuid(void); } - SYS_PTRACE = 26 // { int sys_ptrace(int req, pid_t pid, caddr_t addr, \ - SYS_RECVMSG = 27 // { ssize_t sys_recvmsg(int s, struct msghdr *msg, \ - SYS_SENDMSG = 28 // { ssize_t sys_sendmsg(int s, \ - SYS_RECVFROM = 29 // { ssize_t sys_recvfrom(int s, void *buf, size_t len, \ - SYS_ACCEPT = 30 // { int sys_accept(int s, struct sockaddr *name, \ - SYS_GETPEERNAME = 31 // { int sys_getpeername(int fdes, struct sockaddr *asa, \ - SYS_GETSOCKNAME = 32 // { int sys_getsockname(int fdes, struct sockaddr *asa, \ - SYS_ACCESS = 33 // { int sys_access(const char *path, int flags); } - SYS_CHFLAGS = 34 // { int 
sys_chflags(const char *path, u_int flags); } - SYS_FCHFLAGS = 35 // { int sys_fchflags(int fd, u_int flags); } - SYS_SYNC = 36 // { void sys_sync(void); } - SYS_KILL = 37 // { int sys_kill(int pid, int signum); } - SYS_STAT = 38 // { int sys_stat(const char *path, struct stat *ub); } - SYS_GETPPID = 39 // { pid_t sys_getppid(void); } - SYS_LSTAT = 40 // { int sys_lstat(const char *path, struct stat *ub); } - SYS_DUP = 41 // { int sys_dup(int fd); } - SYS_FSTATAT = 42 // { int sys_fstatat(int fd, const char *path, \ - SYS_GETEGID = 43 // { gid_t sys_getegid(void); } - SYS_PROFIL = 44 // { int sys_profil(caddr_t samples, size_t size, \ - SYS_KTRACE = 45 // { int sys_ktrace(const char *fname, int ops, \ - SYS_SIGACTION = 46 // { int sys_sigaction(int signum, \ - SYS_GETGID = 47 // { gid_t sys_getgid(void); } - SYS_SIGPROCMASK = 48 // { int sys_sigprocmask(int how, sigset_t mask); } - SYS_GETLOGIN = 49 // { int sys_getlogin(char *namebuf, u_int namelen); } - SYS_SETLOGIN = 50 // { int sys_setlogin(const char *namebuf); } - SYS_ACCT = 51 // { int sys_acct(const char *path); } - SYS_SIGPENDING = 52 // { int sys_sigpending(void); } - SYS_FSTAT = 53 // { int sys_fstat(int fd, struct stat *sb); } - SYS_IOCTL = 54 // { int sys_ioctl(int fd, \ - SYS_REBOOT = 55 // { int sys_reboot(int opt); } - SYS_REVOKE = 56 // { int sys_revoke(const char *path); } - SYS_SYMLINK = 57 // { int sys_symlink(const char *path, \ - SYS_READLINK = 58 // { int sys_readlink(const char *path, char *buf, \ - SYS_EXECVE = 59 // { int sys_execve(const char *path, \ - SYS_UMASK = 60 // { mode_t sys_umask(mode_t newmask); } - SYS_CHROOT = 61 // { int sys_chroot(const char *path); } - SYS_GETFSSTAT = 62 // { int sys_getfsstat(struct statfs *buf, size_t bufsize, \ - SYS_STATFS = 63 // { int sys_statfs(const char *path, \ - SYS_FSTATFS = 64 // { int sys_fstatfs(int fd, struct statfs *buf); } - SYS_FHSTATFS = 65 // { int sys_fhstatfs(const fhandle_t *fhp, \ - SYS_VFORK = 66 // { int sys_vfork(void); } - SYS_GETTIMEOFDAY = 67 // { int sys_gettimeofday(struct timeval *tp, \ - SYS_SETTIMEOFDAY = 68 // { int sys_settimeofday(const struct timeval *tv, \ - SYS_SETITIMER = 69 // { int sys_setitimer(int which, \ - SYS_GETITIMER = 70 // { int sys_getitimer(int which, \ - SYS_SELECT = 71 // { int sys_select(int nd, fd_set *in, fd_set *ou, \ - SYS_KEVENT = 72 // { int sys_kevent(int fd, \ - SYS_MUNMAP = 73 // { int sys_munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int sys_mprotect(void *addr, size_t len, \ - SYS_MADVISE = 75 // { int sys_madvise(void *addr, size_t len, \ - SYS_UTIMES = 76 // { int sys_utimes(const char *path, \ - SYS_FUTIMES = 77 // { int sys_futimes(int fd, \ - SYS_MINCORE = 78 // { int sys_mincore(void *addr, size_t len, \ - SYS_GETGROUPS = 79 // { int sys_getgroups(int gidsetsize, \ - SYS_SETGROUPS = 80 // { int sys_setgroups(int gidsetsize, \ - SYS_GETPGRP = 81 // { int sys_getpgrp(void); } - SYS_SETPGID = 82 // { int sys_setpgid(pid_t pid, int pgid); } - SYS_UTIMENSAT = 84 // { int sys_utimensat(int fd, const char *path, \ - SYS_FUTIMENS = 85 // { int sys_futimens(int fd, \ - SYS_CLOCK_GETTIME = 87 // { int sys_clock_gettime(clockid_t clock_id, \ - SYS_CLOCK_SETTIME = 88 // { int sys_clock_settime(clockid_t clock_id, \ - SYS_CLOCK_GETRES = 89 // { int sys_clock_getres(clockid_t clock_id, \ - SYS_DUP2 = 90 // { int sys_dup2(int from, int to); } - SYS_NANOSLEEP = 91 // { int sys_nanosleep(const struct timespec *rqtp, \ - SYS_FCNTL = 92 // { int sys_fcntl(int fd, int cmd, ... 
void *arg); } - SYS___THRSLEEP = 94 // { int sys___thrsleep(const volatile void *ident, \ - SYS_FSYNC = 95 // { int sys_fsync(int fd); } - SYS_SETPRIORITY = 96 // { int sys_setpriority(int which, id_t who, int prio); } - SYS_SOCKET = 97 // { int sys_socket(int domain, int type, int protocol); } - SYS_CONNECT = 98 // { int sys_connect(int s, const struct sockaddr *name, \ - SYS_GETDENTS = 99 // { int sys_getdents(int fd, void *buf, size_t buflen); } - SYS_GETPRIORITY = 100 // { int sys_getpriority(int which, id_t who); } - SYS_SIGRETURN = 103 // { int sys_sigreturn(struct sigcontext *sigcntxp); } - SYS_BIND = 104 // { int sys_bind(int s, const struct sockaddr *name, \ - SYS_SETSOCKOPT = 105 // { int sys_setsockopt(int s, int level, int name, \ - SYS_LISTEN = 106 // { int sys_listen(int s, int backlog); } - SYS_PPOLL = 109 // { int sys_ppoll(struct pollfd *fds, \ - SYS_PSELECT = 110 // { int sys_pselect(int nd, fd_set *in, fd_set *ou, \ - SYS_SIGSUSPEND = 111 // { int sys_sigsuspend(int mask); } - SYS_GETSOCKOPT = 118 // { int sys_getsockopt(int s, int level, int name, \ - SYS_READV = 120 // { ssize_t sys_readv(int fd, \ - SYS_WRITEV = 121 // { ssize_t sys_writev(int fd, \ - SYS_FCHOWN = 123 // { int sys_fchown(int fd, uid_t uid, gid_t gid); } - SYS_FCHMOD = 124 // { int sys_fchmod(int fd, mode_t mode); } - SYS_SETREUID = 126 // { int sys_setreuid(uid_t ruid, uid_t euid); } - SYS_SETREGID = 127 // { int sys_setregid(gid_t rgid, gid_t egid); } - SYS_RENAME = 128 // { int sys_rename(const char *from, const char *to); } - SYS_FLOCK = 131 // { int sys_flock(int fd, int how); } - SYS_MKFIFO = 132 // { int sys_mkfifo(const char *path, mode_t mode); } - SYS_SENDTO = 133 // { ssize_t sys_sendto(int s, const void *buf, \ - SYS_SHUTDOWN = 134 // { int sys_shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int sys_socketpair(int domain, int type, \ - SYS_MKDIR = 136 // { int sys_mkdir(const char *path, mode_t mode); } - SYS_RMDIR = 137 // { int sys_rmdir(const char *path); } - SYS_ADJTIME = 140 // { int sys_adjtime(const struct timeval *delta, \ - SYS_SETSID = 147 // { int sys_setsid(void); } - SYS_QUOTACTL = 148 // { int sys_quotactl(const char *path, int cmd, \ - SYS_NFSSVC = 155 // { int sys_nfssvc(int flag, void *argp); } - SYS_GETFH = 161 // { int sys_getfh(const char *fname, fhandle_t *fhp); } - SYS_SYSARCH = 165 // { int sys_sysarch(int op, void *parms); } - SYS_PREAD = 173 // { ssize_t sys_pread(int fd, void *buf, \ - SYS_PWRITE = 174 // { ssize_t sys_pwrite(int fd, const void *buf, \ - SYS_SETGID = 181 // { int sys_setgid(gid_t gid); } - SYS_SETEGID = 182 // { int sys_setegid(gid_t egid); } - SYS_SETEUID = 183 // { int sys_seteuid(uid_t euid); } - SYS_PATHCONF = 191 // { long sys_pathconf(const char *path, int name); } - SYS_FPATHCONF = 192 // { long sys_fpathconf(int fd, int name); } - SYS_SWAPCTL = 193 // { int sys_swapctl(int cmd, const void *arg, int misc); } - SYS_GETRLIMIT = 194 // { int sys_getrlimit(int which, \ - SYS_SETRLIMIT = 195 // { int sys_setrlimit(int which, \ - SYS_MMAP = 197 // { void *sys_mmap(void *addr, size_t len, int prot, \ - SYS_LSEEK = 199 // { off_t sys_lseek(int fd, int pad, off_t offset, \ - SYS_TRUNCATE = 200 // { int sys_truncate(const char *path, int pad, \ - SYS_FTRUNCATE = 201 // { int sys_ftruncate(int fd, int pad, off_t length); } - SYS___SYSCTL = 202 // { int sys___sysctl(const int *name, u_int namelen, \ - SYS_MLOCK = 203 // { int sys_mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int sys_munlock(const void *addr, size_t len); } - 
SYS_GETPGID = 207 // { pid_t sys_getpgid(pid_t pid); } - SYS_UTRACE = 209 // { int sys_utrace(const char *label, const void *addr, \ - SYS_SEMGET = 221 // { int sys_semget(key_t key, int nsems, int semflg); } - SYS_MSGGET = 225 // { int sys_msgget(key_t key, int msgflg); } - SYS_MSGSND = 226 // { int sys_msgsnd(int msqid, const void *msgp, size_t msgsz, \ - SYS_MSGRCV = 227 // { int sys_msgrcv(int msqid, void *msgp, size_t msgsz, \ - SYS_SHMAT = 228 // { void *sys_shmat(int shmid, const void *shmaddr, \ - SYS_SHMDT = 230 // { int sys_shmdt(const void *shmaddr); } - SYS_MINHERIT = 250 // { int sys_minherit(void *addr, size_t len, \ - SYS_POLL = 252 // { int sys_poll(struct pollfd *fds, \ - SYS_ISSETUGID = 253 // { int sys_issetugid(void); } - SYS_LCHOWN = 254 // { int sys_lchown(const char *path, uid_t uid, gid_t gid); } - SYS_GETSID = 255 // { pid_t sys_getsid(pid_t pid); } - SYS_MSYNC = 256 // { int sys_msync(void *addr, size_t len, int flags); } - SYS_PIPE = 263 // { int sys_pipe(int *fdp); } - SYS_FHOPEN = 264 // { int sys_fhopen(const fhandle_t *fhp, int flags); } - SYS_PREADV = 267 // { ssize_t sys_preadv(int fd, \ - SYS_PWRITEV = 268 // { ssize_t sys_pwritev(int fd, \ - SYS_KQUEUE = 269 // { int sys_kqueue(void); } - SYS_MLOCKALL = 271 // { int sys_mlockall(int flags); } - SYS_MUNLOCKALL = 272 // { int sys_munlockall(void); } - SYS_GETRESUID = 281 // { int sys_getresuid(uid_t *ruid, uid_t *euid, \ - SYS_SETRESUID = 282 // { int sys_setresuid(uid_t ruid, uid_t euid, \ - SYS_GETRESGID = 283 // { int sys_getresgid(gid_t *rgid, gid_t *egid, \ - SYS_SETRESGID = 284 // { int sys_setresgid(gid_t rgid, gid_t egid, \ - SYS_MQUERY = 286 // { void *sys_mquery(void *addr, size_t len, int prot, \ - SYS_CLOSEFROM = 287 // { int sys_closefrom(int fd); } - SYS_SIGALTSTACK = 288 // { int sys_sigaltstack(const struct sigaltstack *nss, \ - SYS_SHMGET = 289 // { int sys_shmget(key_t key, size_t size, int shmflg); } - SYS_SEMOP = 290 // { int sys_semop(int semid, struct sembuf *sops, \ - SYS_FHSTAT = 294 // { int sys_fhstat(const fhandle_t *fhp, \ - SYS___SEMCTL = 295 // { int sys___semctl(int semid, int semnum, int cmd, \ - SYS_SHMCTL = 296 // { int sys_shmctl(int shmid, int cmd, \ - SYS_MSGCTL = 297 // { int sys_msgctl(int msqid, int cmd, \ - SYS_SCHED_YIELD = 298 // { int sys_sched_yield(void); } - SYS_GETTHRID = 299 // { pid_t sys_getthrid(void); } - SYS___THRWAKEUP = 301 // { int sys___thrwakeup(const volatile void *ident, \ - SYS___THREXIT = 302 // { void sys___threxit(pid_t *notdead); } - SYS___THRSIGDIVERT = 303 // { int sys___thrsigdivert(sigset_t sigmask, \ - SYS___GETCWD = 304 // { int sys___getcwd(char *buf, size_t len); } - SYS_ADJFREQ = 305 // { int sys_adjfreq(const int64_t *freq, \ - SYS_SETRTABLE = 310 // { int sys_setrtable(int rtableid); } - SYS_GETRTABLE = 311 // { int sys_getrtable(void); } - SYS_FACCESSAT = 313 // { int sys_faccessat(int fd, const char *path, \ - SYS_FCHMODAT = 314 // { int sys_fchmodat(int fd, const char *path, \ - SYS_FCHOWNAT = 315 // { int sys_fchownat(int fd, const char *path, \ - SYS_LINKAT = 317 // { int sys_linkat(int fd1, const char *path1, int fd2, \ - SYS_MKDIRAT = 318 // { int sys_mkdirat(int fd, const char *path, \ - SYS_MKFIFOAT = 319 // { int sys_mkfifoat(int fd, const char *path, \ - SYS_MKNODAT = 320 // { int sys_mknodat(int fd, const char *path, \ - SYS_OPENAT = 321 // { int sys_openat(int fd, const char *path, int flags, \ - SYS_READLINKAT = 322 // { ssize_t sys_readlinkat(int fd, const char *path, \ - SYS_RENAMEAT = 323 // { int sys_renameat(int 
fromfd, const char *from, \ - SYS_SYMLINKAT = 324 // { int sys_symlinkat(const char *path, int fd, \ - SYS_UNLINKAT = 325 // { int sys_unlinkat(int fd, const char *path, \ - SYS___SET_TCB = 329 // { void sys___set_tcb(void *tcb); } - SYS___GET_TCB = 330 // { void *sys___get_tcb(void); } -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_openbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_openbsd_amd64.go deleted file mode 100644 index bd28146ddd5..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_openbsd_amd64.go +++ /dev/null @@ -1,207 +0,0 @@ -// mksysnum_openbsd.pl -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT - -// +build amd64,openbsd - -package unix - -const ( - SYS_EXIT = 1 // { void sys_exit(int rval); } - SYS_FORK = 2 // { int sys_fork(void); } - SYS_READ = 3 // { ssize_t sys_read(int fd, void *buf, size_t nbyte); } - SYS_WRITE = 4 // { ssize_t sys_write(int fd, const void *buf, \ - SYS_OPEN = 5 // { int sys_open(const char *path, \ - SYS_CLOSE = 6 // { int sys_close(int fd); } - SYS___TFORK = 8 // { int sys___tfork(const struct __tfork *param, \ - SYS_LINK = 9 // { int sys_link(const char *path, const char *link); } - SYS_UNLINK = 10 // { int sys_unlink(const char *path); } - SYS_WAIT4 = 11 // { pid_t sys_wait4(pid_t pid, int *status, \ - SYS_CHDIR = 12 // { int sys_chdir(const char *path); } - SYS_FCHDIR = 13 // { int sys_fchdir(int fd); } - SYS_MKNOD = 14 // { int sys_mknod(const char *path, mode_t mode, \ - SYS_CHMOD = 15 // { int sys_chmod(const char *path, mode_t mode); } - SYS_CHOWN = 16 // { int sys_chown(const char *path, uid_t uid, \ - SYS_OBREAK = 17 // { int sys_obreak(char *nsize); } break - SYS_GETDTABLECOUNT = 18 // { int sys_getdtablecount(void); } - SYS_GETRUSAGE = 19 // { int sys_getrusage(int who, \ - SYS_GETPID = 20 // { pid_t sys_getpid(void); } - SYS_MOUNT = 21 // { int sys_mount(const char *type, const char *path, \ - SYS_UNMOUNT = 22 // { int sys_unmount(const char *path, int flags); } - SYS_SETUID = 23 // { int sys_setuid(uid_t uid); } - SYS_GETUID = 24 // { uid_t sys_getuid(void); } - SYS_GETEUID = 25 // { uid_t sys_geteuid(void); } - SYS_PTRACE = 26 // { int sys_ptrace(int req, pid_t pid, caddr_t addr, \ - SYS_RECVMSG = 27 // { ssize_t sys_recvmsg(int s, struct msghdr *msg, \ - SYS_SENDMSG = 28 // { ssize_t sys_sendmsg(int s, \ - SYS_RECVFROM = 29 // { ssize_t sys_recvfrom(int s, void *buf, size_t len, \ - SYS_ACCEPT = 30 // { int sys_accept(int s, struct sockaddr *name, \ - SYS_GETPEERNAME = 31 // { int sys_getpeername(int fdes, struct sockaddr *asa, \ - SYS_GETSOCKNAME = 32 // { int sys_getsockname(int fdes, struct sockaddr *asa, \ - SYS_ACCESS = 33 // { int sys_access(const char *path, int flags); } - SYS_CHFLAGS = 34 // { int sys_chflags(const char *path, u_int flags); } - SYS_FCHFLAGS = 35 // { int sys_fchflags(int fd, u_int flags); } - SYS_SYNC = 36 // { void sys_sync(void); } - SYS_KILL = 37 // { int sys_kill(int pid, int signum); } - SYS_STAT = 38 // { int sys_stat(const char *path, struct stat *ub); } - SYS_GETPPID = 39 // { pid_t sys_getppid(void); } - SYS_LSTAT = 40 // { int sys_lstat(const char *path, struct stat *ub); } - SYS_DUP = 41 // { int sys_dup(int fd); } - SYS_FSTATAT = 42 // { int sys_fstatat(int fd, const char *path, \ - SYS_GETEGID = 43 // { gid_t sys_getegid(void); } - SYS_PROFIL = 44 // { int sys_profil(caddr_t samples, size_t size, \ - SYS_KTRACE = 45 // { 
int sys_ktrace(const char *fname, int ops, \ - SYS_SIGACTION = 46 // { int sys_sigaction(int signum, \ - SYS_GETGID = 47 // { gid_t sys_getgid(void); } - SYS_SIGPROCMASK = 48 // { int sys_sigprocmask(int how, sigset_t mask); } - SYS_GETLOGIN = 49 // { int sys_getlogin(char *namebuf, u_int namelen); } - SYS_SETLOGIN = 50 // { int sys_setlogin(const char *namebuf); } - SYS_ACCT = 51 // { int sys_acct(const char *path); } - SYS_SIGPENDING = 52 // { int sys_sigpending(void); } - SYS_FSTAT = 53 // { int sys_fstat(int fd, struct stat *sb); } - SYS_IOCTL = 54 // { int sys_ioctl(int fd, \ - SYS_REBOOT = 55 // { int sys_reboot(int opt); } - SYS_REVOKE = 56 // { int sys_revoke(const char *path); } - SYS_SYMLINK = 57 // { int sys_symlink(const char *path, \ - SYS_READLINK = 58 // { int sys_readlink(const char *path, char *buf, \ - SYS_EXECVE = 59 // { int sys_execve(const char *path, \ - SYS_UMASK = 60 // { mode_t sys_umask(mode_t newmask); } - SYS_CHROOT = 61 // { int sys_chroot(const char *path); } - SYS_GETFSSTAT = 62 // { int sys_getfsstat(struct statfs *buf, size_t bufsize, \ - SYS_STATFS = 63 // { int sys_statfs(const char *path, \ - SYS_FSTATFS = 64 // { int sys_fstatfs(int fd, struct statfs *buf); } - SYS_FHSTATFS = 65 // { int sys_fhstatfs(const fhandle_t *fhp, \ - SYS_VFORK = 66 // { int sys_vfork(void); } - SYS_GETTIMEOFDAY = 67 // { int sys_gettimeofday(struct timeval *tp, \ - SYS_SETTIMEOFDAY = 68 // { int sys_settimeofday(const struct timeval *tv, \ - SYS_SETITIMER = 69 // { int sys_setitimer(int which, \ - SYS_GETITIMER = 70 // { int sys_getitimer(int which, \ - SYS_SELECT = 71 // { int sys_select(int nd, fd_set *in, fd_set *ou, \ - SYS_KEVENT = 72 // { int sys_kevent(int fd, \ - SYS_MUNMAP = 73 // { int sys_munmap(void *addr, size_t len); } - SYS_MPROTECT = 74 // { int sys_mprotect(void *addr, size_t len, \ - SYS_MADVISE = 75 // { int sys_madvise(void *addr, size_t len, \ - SYS_UTIMES = 76 // { int sys_utimes(const char *path, \ - SYS_FUTIMES = 77 // { int sys_futimes(int fd, \ - SYS_MINCORE = 78 // { int sys_mincore(void *addr, size_t len, \ - SYS_GETGROUPS = 79 // { int sys_getgroups(int gidsetsize, \ - SYS_SETGROUPS = 80 // { int sys_setgroups(int gidsetsize, \ - SYS_GETPGRP = 81 // { int sys_getpgrp(void); } - SYS_SETPGID = 82 // { int sys_setpgid(pid_t pid, int pgid); } - SYS_UTIMENSAT = 84 // { int sys_utimensat(int fd, const char *path, \ - SYS_FUTIMENS = 85 // { int sys_futimens(int fd, \ - SYS_CLOCK_GETTIME = 87 // { int sys_clock_gettime(clockid_t clock_id, \ - SYS_CLOCK_SETTIME = 88 // { int sys_clock_settime(clockid_t clock_id, \ - SYS_CLOCK_GETRES = 89 // { int sys_clock_getres(clockid_t clock_id, \ - SYS_DUP2 = 90 // { int sys_dup2(int from, int to); } - SYS_NANOSLEEP = 91 // { int sys_nanosleep(const struct timespec *rqtp, \ - SYS_FCNTL = 92 // { int sys_fcntl(int fd, int cmd, ... 
void *arg); } - SYS___THRSLEEP = 94 // { int sys___thrsleep(const volatile void *ident, \ - SYS_FSYNC = 95 // { int sys_fsync(int fd); } - SYS_SETPRIORITY = 96 // { int sys_setpriority(int which, id_t who, int prio); } - SYS_SOCKET = 97 // { int sys_socket(int domain, int type, int protocol); } - SYS_CONNECT = 98 // { int sys_connect(int s, const struct sockaddr *name, \ - SYS_GETDENTS = 99 // { int sys_getdents(int fd, void *buf, size_t buflen); } - SYS_GETPRIORITY = 100 // { int sys_getpriority(int which, id_t who); } - SYS_SIGRETURN = 103 // { int sys_sigreturn(struct sigcontext *sigcntxp); } - SYS_BIND = 104 // { int sys_bind(int s, const struct sockaddr *name, \ - SYS_SETSOCKOPT = 105 // { int sys_setsockopt(int s, int level, int name, \ - SYS_LISTEN = 106 // { int sys_listen(int s, int backlog); } - SYS_PPOLL = 109 // { int sys_ppoll(struct pollfd *fds, \ - SYS_PSELECT = 110 // { int sys_pselect(int nd, fd_set *in, fd_set *ou, \ - SYS_SIGSUSPEND = 111 // { int sys_sigsuspend(int mask); } - SYS_GETSOCKOPT = 118 // { int sys_getsockopt(int s, int level, int name, \ - SYS_READV = 120 // { ssize_t sys_readv(int fd, \ - SYS_WRITEV = 121 // { ssize_t sys_writev(int fd, \ - SYS_FCHOWN = 123 // { int sys_fchown(int fd, uid_t uid, gid_t gid); } - SYS_FCHMOD = 124 // { int sys_fchmod(int fd, mode_t mode); } - SYS_SETREUID = 126 // { int sys_setreuid(uid_t ruid, uid_t euid); } - SYS_SETREGID = 127 // { int sys_setregid(gid_t rgid, gid_t egid); } - SYS_RENAME = 128 // { int sys_rename(const char *from, const char *to); } - SYS_FLOCK = 131 // { int sys_flock(int fd, int how); } - SYS_MKFIFO = 132 // { int sys_mkfifo(const char *path, mode_t mode); } - SYS_SENDTO = 133 // { ssize_t sys_sendto(int s, const void *buf, \ - SYS_SHUTDOWN = 134 // { int sys_shutdown(int s, int how); } - SYS_SOCKETPAIR = 135 // { int sys_socketpair(int domain, int type, \ - SYS_MKDIR = 136 // { int sys_mkdir(const char *path, mode_t mode); } - SYS_RMDIR = 137 // { int sys_rmdir(const char *path); } - SYS_ADJTIME = 140 // { int sys_adjtime(const struct timeval *delta, \ - SYS_SETSID = 147 // { int sys_setsid(void); } - SYS_QUOTACTL = 148 // { int sys_quotactl(const char *path, int cmd, \ - SYS_NFSSVC = 155 // { int sys_nfssvc(int flag, void *argp); } - SYS_GETFH = 161 // { int sys_getfh(const char *fname, fhandle_t *fhp); } - SYS_SYSARCH = 165 // { int sys_sysarch(int op, void *parms); } - SYS_PREAD = 173 // { ssize_t sys_pread(int fd, void *buf, \ - SYS_PWRITE = 174 // { ssize_t sys_pwrite(int fd, const void *buf, \ - SYS_SETGID = 181 // { int sys_setgid(gid_t gid); } - SYS_SETEGID = 182 // { int sys_setegid(gid_t egid); } - SYS_SETEUID = 183 // { int sys_seteuid(uid_t euid); } - SYS_PATHCONF = 191 // { long sys_pathconf(const char *path, int name); } - SYS_FPATHCONF = 192 // { long sys_fpathconf(int fd, int name); } - SYS_SWAPCTL = 193 // { int sys_swapctl(int cmd, const void *arg, int misc); } - SYS_GETRLIMIT = 194 // { int sys_getrlimit(int which, \ - SYS_SETRLIMIT = 195 // { int sys_setrlimit(int which, \ - SYS_MMAP = 197 // { void *sys_mmap(void *addr, size_t len, int prot, \ - SYS_LSEEK = 199 // { off_t sys_lseek(int fd, int pad, off_t offset, \ - SYS_TRUNCATE = 200 // { int sys_truncate(const char *path, int pad, \ - SYS_FTRUNCATE = 201 // { int sys_ftruncate(int fd, int pad, off_t length); } - SYS___SYSCTL = 202 // { int sys___sysctl(const int *name, u_int namelen, \ - SYS_MLOCK = 203 // { int sys_mlock(const void *addr, size_t len); } - SYS_MUNLOCK = 204 // { int sys_munlock(const void *addr, size_t len); } - 
SYS_GETPGID = 207 // { pid_t sys_getpgid(pid_t pid); } - SYS_UTRACE = 209 // { int sys_utrace(const char *label, const void *addr, \ - SYS_SEMGET = 221 // { int sys_semget(key_t key, int nsems, int semflg); } - SYS_MSGGET = 225 // { int sys_msgget(key_t key, int msgflg); } - SYS_MSGSND = 226 // { int sys_msgsnd(int msqid, const void *msgp, size_t msgsz, \ - SYS_MSGRCV = 227 // { int sys_msgrcv(int msqid, void *msgp, size_t msgsz, \ - SYS_SHMAT = 228 // { void *sys_shmat(int shmid, const void *shmaddr, \ - SYS_SHMDT = 230 // { int sys_shmdt(const void *shmaddr); } - SYS_MINHERIT = 250 // { int sys_minherit(void *addr, size_t len, \ - SYS_POLL = 252 // { int sys_poll(struct pollfd *fds, \ - SYS_ISSETUGID = 253 // { int sys_issetugid(void); } - SYS_LCHOWN = 254 // { int sys_lchown(const char *path, uid_t uid, gid_t gid); } - SYS_GETSID = 255 // { pid_t sys_getsid(pid_t pid); } - SYS_MSYNC = 256 // { int sys_msync(void *addr, size_t len, int flags); } - SYS_PIPE = 263 // { int sys_pipe(int *fdp); } - SYS_FHOPEN = 264 // { int sys_fhopen(const fhandle_t *fhp, int flags); } - SYS_PREADV = 267 // { ssize_t sys_preadv(int fd, \ - SYS_PWRITEV = 268 // { ssize_t sys_pwritev(int fd, \ - SYS_KQUEUE = 269 // { int sys_kqueue(void); } - SYS_MLOCKALL = 271 // { int sys_mlockall(int flags); } - SYS_MUNLOCKALL = 272 // { int sys_munlockall(void); } - SYS_GETRESUID = 281 // { int sys_getresuid(uid_t *ruid, uid_t *euid, \ - SYS_SETRESUID = 282 // { int sys_setresuid(uid_t ruid, uid_t euid, \ - SYS_GETRESGID = 283 // { int sys_getresgid(gid_t *rgid, gid_t *egid, \ - SYS_SETRESGID = 284 // { int sys_setresgid(gid_t rgid, gid_t egid, \ - SYS_MQUERY = 286 // { void *sys_mquery(void *addr, size_t len, int prot, \ - SYS_CLOSEFROM = 287 // { int sys_closefrom(int fd); } - SYS_SIGALTSTACK = 288 // { int sys_sigaltstack(const struct sigaltstack *nss, \ - SYS_SHMGET = 289 // { int sys_shmget(key_t key, size_t size, int shmflg); } - SYS_SEMOP = 290 // { int sys_semop(int semid, struct sembuf *sops, \ - SYS_FHSTAT = 294 // { int sys_fhstat(const fhandle_t *fhp, \ - SYS___SEMCTL = 295 // { int sys___semctl(int semid, int semnum, int cmd, \ - SYS_SHMCTL = 296 // { int sys_shmctl(int shmid, int cmd, \ - SYS_MSGCTL = 297 // { int sys_msgctl(int msqid, int cmd, \ - SYS_SCHED_YIELD = 298 // { int sys_sched_yield(void); } - SYS_GETTHRID = 299 // { pid_t sys_getthrid(void); } - SYS___THRWAKEUP = 301 // { int sys___thrwakeup(const volatile void *ident, \ - SYS___THREXIT = 302 // { void sys___threxit(pid_t *notdead); } - SYS___THRSIGDIVERT = 303 // { int sys___thrsigdivert(sigset_t sigmask, \ - SYS___GETCWD = 304 // { int sys___getcwd(char *buf, size_t len); } - SYS_ADJFREQ = 305 // { int sys_adjfreq(const int64_t *freq, \ - SYS_SETRTABLE = 310 // { int sys_setrtable(int rtableid); } - SYS_GETRTABLE = 311 // { int sys_getrtable(void); } - SYS_FACCESSAT = 313 // { int sys_faccessat(int fd, const char *path, \ - SYS_FCHMODAT = 314 // { int sys_fchmodat(int fd, const char *path, \ - SYS_FCHOWNAT = 315 // { int sys_fchownat(int fd, const char *path, \ - SYS_LINKAT = 317 // { int sys_linkat(int fd1, const char *path1, int fd2, \ - SYS_MKDIRAT = 318 // { int sys_mkdirat(int fd, const char *path, \ - SYS_MKFIFOAT = 319 // { int sys_mkfifoat(int fd, const char *path, \ - SYS_MKNODAT = 320 // { int sys_mknodat(int fd, const char *path, \ - SYS_OPENAT = 321 // { int sys_openat(int fd, const char *path, int flags, \ - SYS_READLINKAT = 322 // { ssize_t sys_readlinkat(int fd, const char *path, \ - SYS_RENAMEAT = 323 // { int sys_renameat(int 
fromfd, const char *from, \ - SYS_SYMLINKAT = 324 // { int sys_symlinkat(const char *path, int fd, \ - SYS_UNLINKAT = 325 // { int sys_unlinkat(int fd, const char *path, \ - SYS___SET_TCB = 329 // { void sys___set_tcb(void *tcb); } - SYS___GET_TCB = 330 // { void *sys___get_tcb(void); } -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_solaris_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_solaris_amd64.go deleted file mode 100644 index c7086598590..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/zsysnum_solaris_amd64.go +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build amd64,solaris - -package unix - -// TODO(aram): remove these before Go 1.3. -const ( - SYS_EXECVE = 59 - SYS_FCNTL = 62 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_386.go deleted file mode 100644 index 2de1d44e281..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_386.go +++ /dev/null @@ -1,447 +0,0 @@ -// +build 386,darwin -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_darwin.go - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int32 - Nsec int32 -} - -type Timeval struct { - Sec int32 - Usec int32 -} - -type Timeval32 struct{} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev int32 - Mode uint16 - Nlink uint16 - Ino uint64 - Uid uint32 - Gid uint32 - Rdev int32 - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Birthtimespec Timespec - Size int64 - Blocks int64 - Blksize int32 - Flags uint32 - Gen uint32 - Lspare int32 - Qspare [2]int64 -} - -type Statfs_t struct { - Bsize uint32 - Iosize int32 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Owner uint32 - Type uint32 - Flags uint32 - Fssubtype uint32 - Fstypename [16]int8 - Mntonname [1024]int8 - Mntfromname [1024]int8 - Reserved [8]uint32 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Fstore_t struct { - Flags uint32 - Posmode int32 - Offset int64 - Length int64 - Bytesalloc int64 -} - -type Radvisory_t struct { - Offset int64 - Count int32 -} - -type Fbootstraptransfer_t struct { - Offset int64 - Length uint32 - Buffer *byte -} - -type Log2phys_t struct { - Flags uint32 - Contigbytes int64 - Devoffset int64 -} - -type Fsid struct { - Val [2]int32 -} - -type Dirent struct { - Ino uint64 - Seekoff uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [1024]int8 - Pad_cgo_0 [3]byte -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 
-} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen int32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet4Pktinfo struct { - Ifindex uint32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x14 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint32 - Filter int16 - Flags uint16 - Fflags uint32 - Data int32 - Udata *byte -} - -type FdSet struct { - Bits [32]int32 -} - -const ( - SizeofIfMsghdr = 0x70 - SizeofIfData = 0x60 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfmaMsghdr2 = 0x14 - SizeofRtMsghdr = 0x5c - SizeofRtMetrics = 0x38 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Typelen uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Recvquota uint8 - Xmitquota uint8 - Unused1 uint8 - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Recvtiming uint32 - Xmittiming uint32 - Lastchange Timeval - Unused2 uint32 - Hwassist uint32 - Reserved1 uint32 - Reserved2 uint32 -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte -} - -type IfmaMsghdr2 struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Refcount int32 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint32 - Mtu uint32 - 
Hopcount uint32 - Expire int32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Pksent uint32 - Filler [4]uint32 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfProgram = 0x8 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfProgram struct { - Len uint32 - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_amd64.go deleted file mode 100644 index 044657878c8..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_amd64.go +++ /dev/null @@ -1,462 +0,0 @@ -// +build amd64,darwin -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_darwin.go - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int32 - Pad_cgo_0 [4]byte -} - -type Timeval32 struct { - Sec int32 - Usec int32 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev int32 - Mode uint16 - Nlink uint16 - Ino uint64 - Uid uint32 - Gid uint32 - Rdev int32 - Pad_cgo_0 [4]byte - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Birthtimespec Timespec - Size int64 - Blocks int64 - Blksize int32 - Flags uint32 - Gen uint32 - Lspare int32 - Qspare [2]int64 -} - -type Statfs_t struct { - Bsize uint32 - Iosize int32 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Owner uint32 - Type uint32 - Flags uint32 - Fssubtype uint32 - Fstypename [16]int8 - Mntonname [1024]int8 - Mntfromname [1024]int8 - Reserved [8]uint32 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Fstore_t struct { - Flags uint32 - Posmode int32 - Offset int64 - Length int64 - Bytesalloc int64 -} - -type Radvisory_t struct { - Offset int64 - Count int32 - Pad_cgo_0 [4]byte -} - -type Fbootstraptransfer_t struct { - Offset int64 - Length uint64 - Buffer *byte -} - -type Log2phys_t struct { - Flags uint32 - Pad_cgo_0 [8]byte - Pad_cgo_1 [8]byte -} - -type Fsid struct { - Val [2]int32 -} - -type Dirent struct { - Ino uint64 - Seekoff uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [1024]int8 - Pad_cgo_0 [3]byte -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - 
Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen int32 - Pad_cgo_1 [4]byte - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet4Pktinfo struct { - Ifindex uint32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x14 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x30 - SizeofCmsghdr = 0xc - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint64 - Filter int16 - Flags uint16 - Fflags uint32 - Data int64 - Udata *byte -} - -type FdSet struct { - Bits [32]int32 -} - -const ( - SizeofIfMsghdr = 0x70 - SizeofIfData = 0x60 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfmaMsghdr2 = 0x14 - SizeofRtMsghdr = 0x5c - SizeofRtMetrics = 0x38 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Typelen uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Recvquota uint8 - Xmitquota uint8 - Unused1 uint8 - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Recvtiming uint32 - Xmittiming uint32 - Lastchange Timeval32 - Unused2 uint32 - Hwassist uint32 - Reserved1 uint32 - Reserved2 uint32 -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte -} - -type IfmaMsghdr2 struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Refcount int32 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint32 - Mtu uint32 - Hopcount uint32 - Expire int32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - 
Rtt uint32 - Rttvar uint32 - Pksent uint32 - Filler [4]uint32 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfProgram = 0x10 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp Timeval32 - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type Termios struct { - Iflag uint64 - Oflag uint64 - Cflag uint64 - Lflag uint64 - Cc [20]uint8 - Pad_cgo_0 [4]byte - Ispeed uint64 - Ospeed uint64 -} - -const ( - AT_FDCWD = -0x2 - AT_SYMLINK_NOFOLLOW = 0x20 -) diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_arm.go deleted file mode 100644 index 66df363ce5b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_arm.go +++ /dev/null @@ -1,449 +0,0 @@ -// NOTE: cgo can't generate struct Stat_t and struct Statfs_t yet -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_darwin.go - -// +build arm,darwin - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int32 - Nsec int32 -} - -type Timeval struct { - Sec int32 - Usec int32 -} - -type Timeval32 [0]byte - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev int32 - Mode uint16 - Nlink uint16 - Ino uint64 - Uid uint32 - Gid uint32 - Rdev int32 - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Birthtimespec Timespec - Size int64 - Blocks int64 - Blksize int32 - Flags uint32 - Gen uint32 - Lspare int32 - Qspare [2]int64 -} - -type Statfs_t struct { - Bsize uint32 - Iosize int32 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Owner uint32 - Type uint32 - Flags uint32 - Fssubtype uint32 - Fstypename [16]int8 - Mntonname [1024]int8 - Mntfromname [1024]int8 - Reserved [8]uint32 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Fstore_t struct { - Flags uint32 - Posmode int32 - Offset int64 - Length int64 - Bytesalloc int64 -} - -type Radvisory_t struct { - Offset int64 - Count int32 -} - -type Fbootstraptransfer_t struct { - Offset int64 - Length uint32 - Buffer *byte -} - -type Log2phys_t struct { - Flags uint32 - Contigbytes int64 - Devoffset int64 -} - -type Fsid struct { - Val [2]int32 -} - -type Dirent struct { - Ino uint64 - Seekoff uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [1024]int8 - Pad_cgo_0 [3]byte -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id 
uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen int32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet4Pktinfo struct { - Ifindex uint32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x14 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint32 - Filter int16 - Flags uint16 - Fflags uint32 - Data int32 - Udata *byte -} - -type FdSet struct { - Bits [32]int32 -} - -const ( - SizeofIfMsghdr = 0x70 - SizeofIfData = 0x60 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfmaMsghdr2 = 0x14 - SizeofRtMsghdr = 0x5c - SizeofRtMetrics = 0x38 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Typelen uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Recvquota uint8 - Xmitquota uint8 - Unused1 uint8 - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Recvtiming uint32 - Xmittiming uint32 - Lastchange Timeval - Unused2 uint32 - Hwassist uint32 - Reserved1 uint32 - Reserved2 uint32 -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte -} - -type IfmaMsghdr2 struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Refcount int32 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint32 - Mtu uint32 - Hopcount uint32 - Expire int32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Pksent uint32 - Filler 
[4]uint32 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfProgram = 0x8 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfProgram struct { - Len uint32 - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_arm64.go deleted file mode 100644 index 85d56eabd3f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_darwin_arm64.go +++ /dev/null @@ -1,457 +0,0 @@ -// +build arm64,darwin -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_darwin.go - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int32 - Pad_cgo_0 [4]byte -} - -type Timeval32 struct { - Sec int32 - Usec int32 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev int32 - Mode uint16 - Nlink uint16 - Ino uint64 - Uid uint32 - Gid uint32 - Rdev int32 - Pad_cgo_0 [4]byte - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Birthtimespec Timespec - Size int64 - Blocks int64 - Blksize int32 - Flags uint32 - Gen uint32 - Lspare int32 - Qspare [2]int64 -} - -type Statfs_t struct { - Bsize uint32 - Iosize int32 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Owner uint32 - Type uint32 - Flags uint32 - Fssubtype uint32 - Fstypename [16]int8 - Mntonname [1024]int8 - Mntfromname [1024]int8 - Reserved [8]uint32 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Fstore_t struct { - Flags uint32 - Posmode int32 - Offset int64 - Length int64 - Bytesalloc int64 -} - -type Radvisory_t struct { - Offset int64 - Count int32 - Pad_cgo_0 [4]byte -} - -type Fbootstraptransfer_t struct { - Offset int64 - Length uint64 - Buffer *byte -} - -type Log2phys_t struct { - Flags uint32 - Pad_cgo_0 [8]byte - Pad_cgo_1 [8]byte -} - -type Fsid struct { - Val [2]int32 -} - -type Dirent struct { - Ino uint64 - Seekoff uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [1024]int8 - Pad_cgo_0 [3]byte -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - 
Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen int32 - Pad_cgo_1 [4]byte - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet4Pktinfo struct { - Ifindex uint32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x14 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x30 - SizeofCmsghdr = 0xc - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint64 - Filter int16 - Flags uint16 - Fflags uint32 - Data int64 - Udata *byte -} - -type FdSet struct { - Bits [32]int32 -} - -const ( - SizeofIfMsghdr = 0x70 - SizeofIfData = 0x60 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfmaMsghdr2 = 0x14 - SizeofRtMsghdr = 0x5c - SizeofRtMetrics = 0x38 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Typelen uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Recvquota uint8 - Xmitquota uint8 - Unused1 uint8 - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Recvtiming uint32 - Xmittiming uint32 - Lastchange Timeval32 - Unused2 uint32 - Hwassist uint32 - Reserved1 uint32 - Reserved2 uint32 -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte -} - -type IfmaMsghdr2 struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Refcount int32 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint32 - Mtu uint32 - Hopcount uint32 - Expire int32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Pksent uint32 - Filler [4]uint32 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - 
SizeofBpfProgram = 0x10 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp Timeval32 - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type Termios struct { - Iflag uint64 - Oflag uint64 - Cflag uint64 - Lflag uint64 - Cc [20]uint8 - Pad_cgo_0 [4]byte - Ispeed uint64 - Ospeed uint64 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_dragonfly_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_dragonfly_386.go deleted file mode 100644 index b7e7ff08865..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_dragonfly_386.go +++ /dev/null @@ -1,437 +0,0 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_dragonfly.go - -// +build 386,dragonfly - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int32 - Nsec int32 -} - -type Timeval struct { - Sec int32 - Usec int32 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur int64 - Max int64 -} - -type _Gid_t uint32 - -const ( - S_IFMT = 0xf000 - S_IFIFO = 0x1000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFBLK = 0x6000 - S_IFREG = 0x8000 - S_IFLNK = 0xa000 - S_IFSOCK = 0xc000 - S_ISUID = 0x800 - S_ISGID = 0x400 - S_ISVTX = 0x200 - S_IRUSR = 0x100 - S_IWUSR = 0x80 - S_IXUSR = 0x40 -) - -type Stat_t struct { - Ino uint64 - Nlink uint32 - Dev uint32 - Mode uint16 - Padding1 uint16 - Uid uint32 - Gid uint32 - Rdev uint32 - Atim Timespec - Mtim Timespec - Ctim Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Lspare int32 - Qspare1 int64 - Qspare2 int64 -} - -type Statfs_t struct { - Spare2 int32 - Bsize int32 - Iosize int32 - Blocks int32 - Bfree int32 - Bavail int32 - Files int32 - Ffree int32 - Fsid Fsid - Owner uint32 - Type int32 - Flags int32 - Syncwrites int32 - Asyncwrites int32 - Fstypename [16]int8 - Mntonname [80]int8 - Syncreads int32 - Asyncreads int32 - Spares1 int16 - Mntfromname [80]int8 - Spares2 int16 - Spare [2]int32 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Dirent struct { - Fileno uint64 - Namlen uint16 - Type uint8 - Unused1 uint8 - Unused2 uint32 - Name [256]int8 -} - -type Fsid struct { - Val [2]int32 -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 - Rcf uint16 - Route [16]uint16 -} - 
-type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen int32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x36 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint32 - Filter int16 - Flags uint16 - Fflags uint32 - Data int32 - Udata *byte -} - -type FdSet struct { - Bits [32]uint32 -} - -const ( - SizeofIfMsghdr = 0x68 - SizeofIfData = 0x58 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfAnnounceMsghdr = 0x18 - SizeofRtMsghdr = 0x5c - SizeofRtMetrics = 0x38 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Recvquota uint8 - Xmitquota uint8 - Pad_cgo_0 [2]byte - Mtu uint32 - Metric uint32 - Link_state uint32 - Baudrate uint64 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Hwassist uint32 - Unused uint32 - Lastchange Timeval -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Name [16]int8 - What uint16 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint32 - Mtu uint32 - Pksent uint32 - Expire uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Recvpipe uint32 - Hopcount uint32 - Mssopt uint16 - Pad uint16 - Msl uint32 - Iwmaxsegs uint32 - Iwcapsegs uint32 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfProgram = 0x8 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfProgram struct { - Len uint32 - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr 
struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go deleted file mode 100644 index 8a6f4e1ce3f..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go +++ /dev/null @@ -1,443 +0,0 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_dragonfly.go - -// +build amd64,dragonfly - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int64 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur int64 - Max int64 -} - -type _Gid_t uint32 - -const ( - S_IFMT = 0xf000 - S_IFIFO = 0x1000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFBLK = 0x6000 - S_IFREG = 0x8000 - S_IFLNK = 0xa000 - S_IFSOCK = 0xc000 - S_ISUID = 0x800 - S_ISGID = 0x400 - S_ISVTX = 0x200 - S_IRUSR = 0x100 - S_IWUSR = 0x80 - S_IXUSR = 0x40 -) - -type Stat_t struct { - Ino uint64 - Nlink uint32 - Dev uint32 - Mode uint16 - Padding1 uint16 - Uid uint32 - Gid uint32 - Rdev uint32 - Atim Timespec - Mtim Timespec - Ctim Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Lspare int32 - Qspare1 int64 - Qspare2 int64 -} - -type Statfs_t struct { - Spare2 int64 - Bsize int64 - Iosize int64 - Blocks int64 - Bfree int64 - Bavail int64 - Files int64 - Ffree int64 - Fsid Fsid - Owner uint32 - Type int32 - Flags int32 - Pad_cgo_0 [4]byte - Syncwrites int64 - Asyncwrites int64 - Fstypename [16]int8 - Mntonname [80]int8 - Syncreads int64 - Asyncreads int64 - Spares1 int16 - Mntfromname [80]int8 - Spares2 int16 - Pad_cgo_1 [4]byte - Spare [2]int64 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Dirent struct { - Fileno uint64 - Namlen uint16 - Type uint8 - Unused1 uint8 - Unused2 uint32 - Name [256]int8 -} - -type Fsid struct { - Val [2]int32 -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 - Rcf uint16 - Route [16]uint16 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - 
Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen int32 - Pad_cgo_1 [4]byte - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x36 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x30 - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint64 - Filter int16 - Flags uint16 - Fflags uint32 - Data int64 - Udata *byte -} - -type FdSet struct { - Bits [16]uint64 -} - -const ( - SizeofIfMsghdr = 0xb0 - SizeofIfData = 0xa0 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfAnnounceMsghdr = 0x18 - SizeofRtMsghdr = 0x98 - SizeofRtMetrics = 0x70 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Recvquota uint8 - Xmitquota uint8 - Pad_cgo_0 [2]byte - Mtu uint64 - Metric uint64 - Link_state uint64 - Baudrate uint64 - Ipackets uint64 - Ierrors uint64 - Opackets uint64 - Oerrors uint64 - Collisions uint64 - Ibytes uint64 - Obytes uint64 - Imcasts uint64 - Omcasts uint64 - Iqdrops uint64 - Noproto uint64 - Hwassist uint64 - Unused uint64 - Lastchange Timeval -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Name [16]int8 - What uint16 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint64 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint64 - Mtu uint64 - Pksent uint64 - Expire uint64 - Sendpipe uint64 - Ssthresh uint64 - Rtt uint64 - Rttvar uint64 - Recvpipe uint64 - Hopcount uint64 - Mssopt uint16 - Pad uint16 - Pad_cgo_0 [4]byte - Msl uint64 - Iwmaxsegs uint64 - Iwcapsegs uint64 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfProgram = 0x10 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x20 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [6]byte -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git 
a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_386.go deleted file mode 100644 index 8cf30947b41..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_386.go +++ /dev/null @@ -1,502 +0,0 @@ -// +build 386,freebsd -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_freebsd.go - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int32 - Nsec int32 -} - -type Timeval struct { - Sec int32 - Usec int32 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur int64 - Max int64 -} - -type _Gid_t uint32 - -const ( - S_IFMT = 0xf000 - S_IFIFO = 0x1000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFBLK = 0x6000 - S_IFREG = 0x8000 - S_IFLNK = 0xa000 - S_IFSOCK = 0xc000 - S_ISUID = 0x800 - S_ISGID = 0x400 - S_ISVTX = 0x200 - S_IRUSR = 0x100 - S_IWUSR = 0x80 - S_IXUSR = 0x40 -) - -type Stat_t struct { - Dev uint32 - Ino uint32 - Mode uint16 - Nlink uint16 - Uid uint32 - Gid uint32 - Rdev uint32 - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Lspare int32 - Birthtimespec Timespec - Pad_cgo_0 [8]byte -} - -type Statfs_t struct { - Version uint32 - Type uint32 - Flags uint64 - Bsize uint64 - Iosize uint64 - Blocks uint64 - Bfree uint64 - Bavail int64 - Files uint64 - Ffree int64 - Syncwrites uint64 - Asyncwrites uint64 - Syncreads uint64 - Asyncreads uint64 - Spare [10]uint64 - Namemax uint32 - Owner uint32 - Fsid Fsid - Charspare [80]int8 - Fstypename [16]int8 - Mntfromname [88]int8 - Mntonname [88]int8 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 - Sysid int32 -} - -type Dirent struct { - Fileno uint32 - Reclen uint16 - Type uint8 - Namlen uint8 - Name [256]int8 -} - -type Fsid struct { - Val [2]int32 -} - -const ( - FADV_NORMAL = 0x0 - FADV_RANDOM = 0x1 - FADV_SEQUENTIAL = 0x2 - FADV_WILLNEED = 0x3 - FADV_DONTNEED = 0x4 - FADV_NOREUSE = 0x5 -) - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [46]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq 
struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen int32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x36 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint32 - Filter int16 - Flags uint16 - Fflags uint32 - Data int32 - Udata *byte -} - -type FdSet struct { - X__fds_bits [32]uint32 -} - -const ( - sizeofIfMsghdr = 0x64 - SizeofIfMsghdr = 0x60 - sizeofIfData = 0x54 - SizeofIfData = 0x50 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfAnnounceMsghdr = 0x18 - SizeofRtMsghdr = 0x5c - SizeofRtMetrics = 0x38 -) - -type ifMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data ifData -} - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type ifData struct { - Type uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Link_state uint8 - Vhid uint8 - Baudrate_pf uint8 - Datalen uint8 - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Hwassist uint64 - Epoch int32 - Lastchange Timeval -} - -type IfData struct { - Type uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Link_state uint8 - Spare_char1 uint8 - Spare_char2 uint8 - Datalen uint8 - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Hwassist uint32 - Epoch int32 - Lastchange Timeval -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Name [16]int8 - What uint16 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Fmask int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint32 - Mtu uint32 - Hopcount uint32 - Expire uint32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Pksent uint32 - Weight uint32 - Filler [3]uint32 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfZbuf = 0xc - SizeofBpfProgram = 0x8 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 - 
SizeofBpfZbufHeader = 0x20 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfZbuf struct { - Bufa *byte - Bufb *byte - Buflen uint32 -} - -type BpfProgram struct { - Len uint32 - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type BpfZbufHeader struct { - Kernel_gen uint32 - Kernel_len uint32 - User_gen uint32 - X_bzh_pad [5]uint32 -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_amd64.go deleted file mode 100644 index e5feb207be6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_amd64.go +++ /dev/null @@ -1,505 +0,0 @@ -// +build amd64,freebsd -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_freebsd.go - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int64 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur int64 - Max int64 -} - -type _Gid_t uint32 - -const ( - S_IFMT = 0xf000 - S_IFIFO = 0x1000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFBLK = 0x6000 - S_IFREG = 0x8000 - S_IFLNK = 0xa000 - S_IFSOCK = 0xc000 - S_ISUID = 0x800 - S_ISGID = 0x400 - S_ISVTX = 0x200 - S_IRUSR = 0x100 - S_IWUSR = 0x80 - S_IXUSR = 0x40 -) - -type Stat_t struct { - Dev uint32 - Ino uint32 - Mode uint16 - Nlink uint16 - Uid uint32 - Gid uint32 - Rdev uint32 - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Lspare int32 - Birthtimespec Timespec -} - -type Statfs_t struct { - Version uint32 - Type uint32 - Flags uint64 - Bsize uint64 - Iosize uint64 - Blocks uint64 - Bfree uint64 - Bavail int64 - Files uint64 - Ffree int64 - Syncwrites uint64 - Asyncwrites uint64 - Syncreads uint64 - Asyncreads uint64 - Spare [10]uint64 - Namemax uint32 - Owner uint32 - Fsid Fsid - Charspare [80]int8 - Fstypename [16]int8 - Mntfromname [88]int8 - Mntonname [88]int8 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 - Sysid int32 - Pad_cgo_0 [4]byte -} - -type Dirent struct { - Fileno uint32 - Reclen uint16 - Type uint8 - Namlen uint8 - Name [256]int8 -} - -type Fsid struct { - Val [2]int32 -} - -const ( - FADV_NORMAL = 0x0 - FADV_RANDOM = 0x1 - FADV_SEQUENTIAL = 0x2 - FADV_WILLNEED = 0x3 - FADV_DONTNEED = 0x4 - FADV_NOREUSE = 0x5 -) - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 
-} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [46]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen int32 - Pad_cgo_1 [4]byte - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x36 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x30 - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint64 - Filter int16 - Flags uint16 - Fflags uint32 - Data int64 - Udata *byte -} - -type FdSet struct { - X__fds_bits [16]uint64 -} - -const ( - sizeofIfMsghdr = 0xa8 - SizeofIfMsghdr = 0xa8 - sizeofIfData = 0x98 - SizeofIfData = 0x98 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfAnnounceMsghdr = 0x18 - SizeofRtMsghdr = 0x98 - SizeofRtMetrics = 0x70 -) - -type ifMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data ifData -} - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type ifData struct { - Type uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Link_state uint8 - Vhid uint8 - Baudrate_pf uint8 - Datalen uint8 - Mtu uint64 - Metric uint64 - Baudrate uint64 - Ipackets uint64 - Ierrors uint64 - Opackets uint64 - Oerrors uint64 - Collisions uint64 - Ibytes uint64 - Obytes uint64 - Imcasts uint64 - Omcasts uint64 - Iqdrops uint64 - Noproto uint64 - Hwassist uint64 - Epoch int64 - Lastchange Timeval -} - -type IfData struct { - Type uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Link_state uint8 - Spare_char1 uint8 - Spare_char2 uint8 - Datalen uint8 - Mtu uint64 - Metric uint64 - Baudrate uint64 - Ipackets uint64 - Ierrors uint64 - Opackets uint64 - Oerrors uint64 - Collisions uint64 - Ibytes uint64 - Obytes uint64 - Imcasts uint64 - Omcasts uint64 - Iqdrops uint64 - Noproto uint64 - Hwassist uint64 - Epoch int64 - Lastchange Timeval -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - 
Index uint16 - Pad_cgo_0 [2]byte -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Name [16]int8 - What uint16 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Fmask int32 - Inits uint64 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint64 - Mtu uint64 - Hopcount uint64 - Expire uint64 - Recvpipe uint64 - Sendpipe uint64 - Ssthresh uint64 - Rtt uint64 - Rttvar uint64 - Pksent uint64 - Weight uint64 - Filler [3]uint64 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfZbuf = 0x18 - SizeofBpfProgram = 0x10 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x20 - SizeofBpfZbufHeader = 0x20 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfZbuf struct { - Bufa *byte - Bufb *byte - Buflen uint64 -} - -type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [6]byte -} - -type BpfZbufHeader struct { - Kernel_gen uint32 - Kernel_len uint32 - User_gen uint32 - X_bzh_pad [5]uint32 -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_arm.go deleted file mode 100644 index 5472b54284b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_freebsd_arm.go +++ /dev/null @@ -1,497 +0,0 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -fsigned-char types_freebsd.go - -// +build arm,freebsd - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int32 - Pad_cgo_0 [4]byte -} - -type Timeval struct { - Sec int64 - Usec int32 - Pad_cgo_0 [4]byte -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur int64 - Max int64 -} - -type _Gid_t uint32 - -const ( - S_IFMT = 0xf000 - S_IFIFO = 0x1000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFBLK = 0x6000 - S_IFREG = 0x8000 - S_IFLNK = 0xa000 - S_IFSOCK = 0xc000 - S_ISUID = 0x800 - S_ISGID = 0x400 - S_ISVTX = 0x200 - S_IRUSR = 0x100 - S_IWUSR = 0x80 - S_IXUSR = 0x40 -) - -type Stat_t struct { - Dev uint32 - Ino uint32 - Mode uint16 - Nlink uint16 - Uid uint32 - Gid uint32 - Rdev uint32 - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Lspare int32 - Birthtimespec Timespec -} - -type Statfs_t struct { - Version uint32 - Type uint32 - Flags uint64 - Bsize uint64 - Iosize uint64 - Blocks uint64 - Bfree uint64 - Bavail int64 - Files uint64 - Ffree int64 - Syncwrites uint64 - Asyncwrites uint64 - Syncreads uint64 - Asyncreads uint64 
- Spare [10]uint64 - Namemax uint32 - Owner uint32 - Fsid Fsid - Charspare [80]int8 - Fstypename [16]int8 - Mntfromname [88]int8 - Mntonname [88]int8 -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 - Sysid int32 - Pad_cgo_0 [4]byte -} - -type Dirent struct { - Fileno uint32 - Reclen uint16 - Type uint8 - Namlen uint8 - Name [256]int8 -} - -type Fsid struct { - Val [2]int32 -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [46]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen int32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x36 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint32 - Filter int16 - Flags uint16 - Fflags uint32 - Data int32 - Udata *byte -} - -type FdSet struct { - X__fds_bits [32]uint32 -} - -const ( - sizeofIfMsghdr = 0x70 - SizeofIfMsghdr = 0x70 - sizeofIfData = 0x60 - SizeofIfData = 0x60 - SizeofIfaMsghdr = 0x14 - SizeofIfmaMsghdr = 0x10 - SizeofIfAnnounceMsghdr = 0x18 - SizeofRtMsghdr = 0x5c - SizeofRtMetrics = 0x38 -) - -type ifMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data ifData -} - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type ifData struct { - Type uint8 - Physical uint8 - Addrlen uint8 - Hdrlen uint8 - Link_state uint8 - Vhid uint8 - Baudrate_pf uint8 - Datalen uint8 - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Hwassist uint64 - Epoch int64 - Lastchange Timeval -} - -type IfData struct { - Type uint8 - Physical uint8 - 
Addrlen uint8 - Hdrlen uint8 - Link_state uint8 - Spare_char1 uint8 - Spare_char2 uint8 - Datalen uint8 - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Hwassist uint32 - Pad_cgo_0 [4]byte - Epoch int64 - Lastchange Timeval -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Name [16]int8 - What uint16 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Fmask int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint32 - Mtu uint32 - Hopcount uint32 - Expire uint32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Pksent uint32 - Weight uint32 - Filler [3]uint32 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfZbuf = 0xc - SizeofBpfProgram = 0x8 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x20 - SizeofBpfZbufHeader = 0x20 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfZbuf struct { - Bufa *byte - Bufb *byte - Buflen uint32 -} - -type BpfProgram struct { - Len uint32 - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [6]byte -} - -type BpfZbufHeader struct { - Kernel_gen uint32 - Kernel_len uint32 - User_gen uint32 - X_bzh_pad [5]uint32 -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_386.go deleted file mode 100644 index cf5db0e1b67..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_386.go +++ /dev/null @@ -1,590 +0,0 @@ -// +build 386,linux -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_linux.go - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 - PathMax = 0x1000 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int32 - Nsec int32 -} - -type Timeval struct { - Sec int32 - Usec int32 -} - -type Timex struct { - Modes uint32 - Offset int32 - Freq int32 - Maxerror int32 - Esterror int32 - Status int32 - Constant int32 - Precision int32 - Tolerance int32 - Time Timeval - Tick int32 - Ppsfreq int32 - Jitter int32 - Shift int32 - Stabil int32 - Jitcnt int32 - Calcnt int32 - Errcnt int32 - Stbcnt int32 - Tai int32 - Pad_cgo_0 [44]byte -} - -type Time_t int32 - -type Tms struct { - Utime int32 - Stime int32 - Cutime int32 - Cstime int32 -} - -type Utimbuf struct { - Actime int32 - Modtime int32 -} - -type Rusage struct { - Utime Timeval - Stime 
Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - X__pad1 uint16 - Pad_cgo_0 [2]byte - X__st_ino uint32 - Mode uint32 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev uint64 - X__pad2 uint16 - Pad_cgo_1 [2]byte - Size int64 - Blksize int32 - Blocks int64 - Atim Timespec - Mtim Timespec - Ctim Timespec - Ino uint64 -} - -type Statfs_t struct { - Type int32 - Bsize int32 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Namelen int32 - Frsize int32 - Flags int32 - Spare [4]int32 -} - -type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [1]byte -} - -type Fsid struct { - X__val [2]int32 -} - -type Flock_t struct { - Type int16 - Whence int16 - Start int64 - Len int64 - Pid int32 -} - -type RawSockaddrInet4 struct { - Family uint16 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]uint8 -} - -type RawSockaddrInet6 struct { - Family uint16 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Family uint16 - Path [108]int8 -} - -type RawSockaddrLinklayer struct { - Family uint16 - Protocol uint16 - Ifindex int32 - Hatype uint16 - Pkttype uint8 - Halen uint8 - Addr [8]uint8 -} - -type RawSockaddrNetlink struct { - Family uint16 - Pad uint16 - Pid uint32 - Groups uint32 -} - -type RawSockaddr struct { - Family uint16 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [96]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen uint32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 - X__cmsg_data [0]uint8 -} - -type Inet4Pktinfo struct { - Ifindex int32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Data [8]uint32 -} - -type Ucred struct { - Pid int32 - Uid uint32 - Gid uint32 -} - -type TCPInfo struct { - State uint8 - Ca_state uint8 - Retransmits uint8 - Probes uint8 - Backoff uint8 - Options uint8 - Pad_cgo_0 [2]byte - Rto uint32 - Ato uint32 - Snd_mss uint32 - Rcv_mss uint32 - Unacked uint32 - Sacked uint32 - Lost uint32 - Retrans uint32 - Fackets uint32 - Last_data_sent uint32 - Last_ack_sent uint32 - Last_data_recv uint32 - Last_ack_recv uint32 - Pmtu uint32 - Rcv_ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Snd_ssthresh uint32 - Snd_cwnd uint32 - Advmss uint32 - Reordering uint32 - Rcv_rtt uint32 - Rcv_space uint32 - Total_retrans uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x70 - SizeofSockaddrUnix = 0x6e - SizeofSockaddrLinklayer = 0x14 - SizeofSockaddrNetlink = 0xc 
- SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 - SizeofUcred = 0xc - SizeofTCPInfo = 0x68 -) - -const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x1d - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 -) - -type NlMsghdr struct { - Len uint32 - Type uint16 - Flags uint16 - Seq uint32 - Pid uint32 -} - -type NlMsgerr struct { - Error int32 - Msg NlMsghdr -} - -type RtGenmsg struct { - Family uint8 -} - -type NlAttr struct { - Len uint16 - Type uint16 -} - -type RtAttr struct { - Len uint16 - Type uint16 -} - -type IfInfomsg struct { - Family uint8 - X__ifi_pad uint8 - Type uint16 - Index int32 - Flags uint32 - Change uint32 -} - -type IfAddrmsg struct { - Family uint8 - Prefixlen uint8 - Flags uint8 - Scope uint8 - Index uint32 -} - -type RtMsg struct { - Family uint8 - Dst_len uint8 - Src_len uint8 - Tos uint8 - Table uint8 - Protocol uint8 - Scope uint8 - Type uint8 - Flags uint32 -} - -type RtNexthop struct { - Len uint16 - Flags uint8 - Hops uint8 - Ifindex int32 -} - -const ( - SizeofSockFilter = 0x8 - SizeofSockFprog = 0x8 -) - -type SockFilter struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type SockFprog struct { - Len uint16 - Pad_cgo_0 [2]byte - Filter *SockFilter -} - -type InotifyEvent struct { - Wd int32 - Mask uint32 - Cookie uint32 - Len uint32 - Name [0]int8 -} - -const SizeofInotifyEvent = 0x10 - -type PtraceRegs struct { - Ebx int32 - Ecx int32 - Edx int32 - Esi int32 - Edi int32 - Ebp int32 - Eax int32 - Xds int32 
- Xes int32 - Xfs int32 - Xgs int32 - Orig_eax int32 - Eip int32 - Xcs int32 - Eflags int32 - Esp int32 - Xss int32 -} - -type FdSet struct { - Bits [32]int32 -} - -type Sysinfo_t struct { - Uptime int32 - Loads [3]uint32 - Totalram uint32 - Freeram uint32 - Sharedram uint32 - Bufferram uint32 - Totalswap uint32 - Freeswap uint32 - Procs uint16 - Pad uint16 - Totalhigh uint32 - Freehigh uint32 - Unit uint32 - X_f [8]int8 -} - -type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 -} - -type Ustat_t struct { - Tfree int32 - Tinode uint32 - Fname [6]int8 - Fpack [6]int8 -} - -type EpollEvent struct { - Events uint32 - Fd int32 - Pad int32 -} - -const ( - AT_FDCWD = -0x64 - AT_SYMLINK_NOFOLLOW = 0x100 - AT_REMOVEDIR = 0x200 -) - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Line uint8 - Cc [19]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_amd64.go deleted file mode 100644 index ac27784ab2e..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_amd64.go +++ /dev/null @@ -1,608 +0,0 @@ -// +build amd64,linux -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_linux.go - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 - PathMax = 0x1000 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int64 -} - -type Timex struct { - Modes uint32 - Pad_cgo_0 [4]byte - Offset int64 - Freq int64 - Maxerror int64 - Esterror int64 - Status int32 - Pad_cgo_1 [4]byte - Constant int64 - Precision int64 - Tolerance int64 - Time Timeval - Tick int64 - Ppsfreq int64 - Jitter int64 - Shift int32 - Pad_cgo_2 [4]byte - Stabil int64 - Jitcnt int64 - Calcnt int64 - Errcnt int64 - Stbcnt int64 - Tai int32 - Pad_cgo_3 [44]byte -} - -type Time_t int64 - -type Tms struct { - Utime int64 - Stime int64 - Cutime int64 - Cstime int64 -} - -type Utimbuf struct { - Actime int64 - Modtime int64 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - Ino uint64 - Nlink uint64 - Mode uint32 - Uid uint32 - Gid uint32 - X__pad0 int32 - Rdev uint64 - Size int64 - Blksize int64 - Blocks int64 - Atim Timespec - Mtim Timespec - Ctim Timespec - X__unused [3]int64 -} - -type Statfs_t struct { - Type int64 - Bsize int64 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Namelen int64 - Frsize int64 - Flags int64 - Spare [4]int64 -} - -type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [5]byte -} - -type Fsid struct { - X__val [2]int32 -} - -type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte -} - -type RawSockaddrInet4 struct { - Family uint16 - Port uint16 - Addr 
[4]byte /* in_addr */ - Zero [8]uint8 -} - -type RawSockaddrInet6 struct { - Family uint16 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Family uint16 - Path [108]int8 -} - -type RawSockaddrLinklayer struct { - Family uint16 - Protocol uint16 - Ifindex int32 - Hatype uint16 - Pkttype uint8 - Halen uint8 - Addr [8]uint8 -} - -type RawSockaddrNetlink struct { - Family uint16 - Pad uint16 - Pid uint32 - Groups uint32 -} - -type RawSockaddr struct { - Family uint16 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [96]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen uint64 - Control *byte - Controllen uint64 - Flags int32 - Pad_cgo_1 [4]byte -} - -type Cmsghdr struct { - Len uint64 - Level int32 - Type int32 - X__cmsg_data [0]uint8 -} - -type Inet4Pktinfo struct { - Ifindex int32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Data [8]uint32 -} - -type Ucred struct { - Pid int32 - Uid uint32 - Gid uint32 -} - -type TCPInfo struct { - State uint8 - Ca_state uint8 - Retransmits uint8 - Probes uint8 - Backoff uint8 - Options uint8 - Pad_cgo_0 [2]byte - Rto uint32 - Ato uint32 - Snd_mss uint32 - Rcv_mss uint32 - Unacked uint32 - Sacked uint32 - Lost uint32 - Retrans uint32 - Fackets uint32 - Last_data_sent uint32 - Last_ack_sent uint32 - Last_data_recv uint32 - Last_ack_recv uint32 - Pmtu uint32 - Rcv_ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Snd_ssthresh uint32 - Snd_cwnd uint32 - Advmss uint32 - Reordering uint32 - Rcv_rtt uint32 - Rcv_space uint32 - Total_retrans uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x70 - SizeofSockaddrUnix = 0x6e - SizeofSockaddrLinklayer = 0x14 - SizeofSockaddrNetlink = 0xc - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 - SizeofUcred = 0xc - SizeofTCPInfo = 0x68 -) - -const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x1d - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - 
RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 -) - -type NlMsghdr struct { - Len uint32 - Type uint16 - Flags uint16 - Seq uint32 - Pid uint32 -} - -type NlMsgerr struct { - Error int32 - Msg NlMsghdr -} - -type RtGenmsg struct { - Family uint8 -} - -type NlAttr struct { - Len uint16 - Type uint16 -} - -type RtAttr struct { - Len uint16 - Type uint16 -} - -type IfInfomsg struct { - Family uint8 - X__ifi_pad uint8 - Type uint16 - Index int32 - Flags uint32 - Change uint32 -} - -type IfAddrmsg struct { - Family uint8 - Prefixlen uint8 - Flags uint8 - Scope uint8 - Index uint32 -} - -type RtMsg struct { - Family uint8 - Dst_len uint8 - Src_len uint8 - Tos uint8 - Table uint8 - Protocol uint8 - Scope uint8 - Type uint8 - Flags uint32 -} - -type RtNexthop struct { - Len uint16 - Flags uint8 - Hops uint8 - Ifindex int32 -} - -const ( - SizeofSockFilter = 0x8 - SizeofSockFprog = 0x10 -) - -type SockFilter struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter -} - -type InotifyEvent struct { - Wd int32 - Mask uint32 - Cookie uint32 - Len uint32 - Name [0]int8 -} - -const SizeofInotifyEvent = 0x10 - -type PtraceRegs struct { - R15 uint64 - R14 uint64 - R13 uint64 - R12 uint64 - Rbp uint64 - Rbx uint64 - R11 uint64 - R10 uint64 - R9 uint64 - R8 uint64 - Rax uint64 - Rcx uint64 - Rdx uint64 - Rsi uint64 - Rdi uint64 - Orig_rax uint64 - Rip uint64 - Cs uint64 - Eflags uint64 - Rsp uint64 - Ss uint64 - Fs_base uint64 - Gs_base uint64 - Ds uint64 - Es uint64 - Fs uint64 - Gs uint64 -} - -type FdSet struct { - Bits [16]int64 -} - -type Sysinfo_t struct { - Uptime int64 - Loads [3]uint64 - Totalram uint64 - Freeram uint64 - Sharedram uint64 - Bufferram uint64 - Totalswap uint64 - Freeswap uint64 - Procs uint16 - Pad uint16 - Pad_cgo_0 [4]byte - Totalhigh uint64 - Freehigh uint64 - Unit uint32 - X_f [0]int8 - Pad_cgo_1 [4]byte -} - -type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 -} - -type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]int8 - Fpack [6]int8 - Pad_cgo_1 [4]byte -} - -type EpollEvent struct { - Events uint32 - Fd int32 - Pad int32 -} - -const ( - AT_FDCWD = -0x64 - AT_SYMLINK_NOFOLLOW = 0x100 - AT_REMOVEDIR = 0x200 -) - -type Termios struct { - Iflag 
uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Line uint8 - Cc [19]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_arm.go deleted file mode 100644 index b318bb851c7..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_arm.go +++ /dev/null @@ -1,579 +0,0 @@ -// +build arm,linux -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_linux.go - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 - PathMax = 0x1000 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int32 - Nsec int32 -} - -type Timeval struct { - Sec int32 - Usec int32 -} - -type Timex struct { - Modes uint32 - Offset int32 - Freq int32 - Maxerror int32 - Esterror int32 - Status int32 - Constant int32 - Precision int32 - Tolerance int32 - Time Timeval - Tick int32 - Ppsfreq int32 - Jitter int32 - Shift int32 - Stabil int32 - Jitcnt int32 - Calcnt int32 - Errcnt int32 - Stbcnt int32 - Tai int32 - Pad_cgo_0 [44]byte -} - -type Time_t int32 - -type Tms struct { - Utime int32 - Stime int32 - Cutime int32 - Cstime int32 -} - -type Utimbuf struct { - Actime int32 - Modtime int32 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - X__pad1 uint16 - Pad_cgo_0 [2]byte - X__st_ino uint32 - Mode uint32 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev uint64 - X__pad2 uint16 - Pad_cgo_1 [6]byte - Size int64 - Blksize int32 - Pad_cgo_2 [4]byte - Blocks int64 - Atim Timespec - Mtim Timespec - Ctim Timespec - Ino uint64 -} - -type Statfs_t struct { - Type int32 - Bsize int32 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Namelen int32 - Frsize int32 - Flags int32 - Spare [4]int32 - Pad_cgo_0 [4]byte -} - -type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]uint8 - Pad_cgo_0 [5]byte -} - -type Fsid struct { - X__val [2]int32 -} - -type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte -} - -type RawSockaddrInet4 struct { - Family uint16 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]uint8 -} - -type RawSockaddrInet6 struct { - Family uint16 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Family uint16 - Path [108]int8 -} - -type RawSockaddrLinklayer struct { - Family uint16 - Protocol uint16 - Ifindex int32 - Hatype uint16 - Pkttype uint8 - Halen uint8 - Addr [8]uint8 -} - -type RawSockaddrNetlink struct { - Family uint16 - Pad uint16 - Pid uint32 - Groups uint32 -} - -type RawSockaddr struct { - Family uint16 - Data [14]uint8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [96]uint8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - 
Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen uint32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 - X__cmsg_data [0]uint8 -} - -type Inet4Pktinfo struct { - Ifindex int32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Data [8]uint32 -} - -type Ucred struct { - Pid int32 - Uid uint32 - Gid uint32 -} - -type TCPInfo struct { - State uint8 - Ca_state uint8 - Retransmits uint8 - Probes uint8 - Backoff uint8 - Options uint8 - Pad_cgo_0 [2]byte - Rto uint32 - Ato uint32 - Snd_mss uint32 - Rcv_mss uint32 - Unacked uint32 - Sacked uint32 - Lost uint32 - Retrans uint32 - Fackets uint32 - Last_data_sent uint32 - Last_ack_sent uint32 - Last_data_recv uint32 - Last_ack_recv uint32 - Pmtu uint32 - Rcv_ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Snd_ssthresh uint32 - Snd_cwnd uint32 - Advmss uint32 - Reordering uint32 - Rcv_rtt uint32 - Rcv_space uint32 - Total_retrans uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x70 - SizeofSockaddrUnix = 0x6e - SizeofSockaddrLinklayer = 0x14 - SizeofSockaddrNetlink = 0xc - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 - SizeofUcred = 0xc - SizeofTCPInfo = 0x68 -) - -const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x1d - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - 
RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 -) - -type NlMsghdr struct { - Len uint32 - Type uint16 - Flags uint16 - Seq uint32 - Pid uint32 -} - -type NlMsgerr struct { - Error int32 - Msg NlMsghdr -} - -type RtGenmsg struct { - Family uint8 -} - -type NlAttr struct { - Len uint16 - Type uint16 -} - -type RtAttr struct { - Len uint16 - Type uint16 -} - -type IfInfomsg struct { - Family uint8 - X__ifi_pad uint8 - Type uint16 - Index int32 - Flags uint32 - Change uint32 -} - -type IfAddrmsg struct { - Family uint8 - Prefixlen uint8 - Flags uint8 - Scope uint8 - Index uint32 -} - -type RtMsg struct { - Family uint8 - Dst_len uint8 - Src_len uint8 - Tos uint8 - Table uint8 - Protocol uint8 - Scope uint8 - Type uint8 - Flags uint32 -} - -type RtNexthop struct { - Len uint16 - Flags uint8 - Hops uint8 - Ifindex int32 -} - -const ( - SizeofSockFilter = 0x8 - SizeofSockFprog = 0x8 -) - -type SockFilter struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type SockFprog struct { - Len uint16 - Pad_cgo_0 [2]byte - Filter *SockFilter -} - -type InotifyEvent struct { - Wd int32 - Mask uint32 - Cookie uint32 - Len uint32 - Name [0]uint8 -} - -const SizeofInotifyEvent = 0x10 - -type PtraceRegs struct { - Uregs [18]uint32 -} - -type FdSet struct { - Bits [32]int32 -} - -type Sysinfo_t struct { - Uptime int32 - Loads [3]uint32 - Totalram uint32 - Freeram uint32 - Sharedram uint32 - Bufferram uint32 - Totalswap uint32 - Freeswap uint32 - Procs uint16 - Pad uint16 - Totalhigh uint32 - Freehigh uint32 - Unit uint32 - X_f [8]uint8 -} - -type Utsname struct { - Sysname [65]uint8 - Nodename [65]uint8 - Release [65]uint8 - Version [65]uint8 - Machine [65]uint8 - Domainname [65]uint8 -} - -type Ustat_t struct { - Tfree int32 - Tinode uint32 - Fname [6]uint8 - Fpack [6]uint8 -} - -type EpollEvent struct { - Events uint32 - PadFd int32 - Fd int32 - Pad int32 -} - -const ( - AT_FDCWD = -0x64 - AT_SYMLINK_NOFOLLOW = 0x100 - AT_REMOVEDIR = 0x200 -) - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Line uint8 - Cc [19]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_arm64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_arm64.go deleted file mode 100644 index a159aadab69..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_arm64.go +++ /dev/null @@ -1,595 +0,0 @@ -// +build arm64,linux -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs -- -fsigned-char types_linux.go - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 - PathMax = 0x1000 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int64 -} - -type Timex struct { - Modes uint32 - Pad_cgo_0 [4]byte - Offset int64 - Freq int64 - Maxerror int64 - Esterror int64 - Status int32 - Pad_cgo_1 [4]byte - Constant int64 - Precision int64 - Tolerance int64 - Time Timeval - Tick int64 - Ppsfreq int64 - Jitter int64 - Shift int32 
- Pad_cgo_2 [4]byte - Stabil int64 - Jitcnt int64 - Calcnt int64 - Errcnt int64 - Stbcnt int64 - Tai int32 - Pad_cgo_3 [44]byte -} - -type Time_t int64 - -type Tms struct { - Utime int64 - Stime int64 - Cutime int64 - Cstime int64 -} - -type Utimbuf struct { - Actime int64 - Modtime int64 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - Ino uint64 - Mode uint32 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev uint64 - X__pad1 uint64 - Size int64 - Blksize int32 - X__pad2 int32 - Blocks int64 - Atim Timespec - Mtim Timespec - Ctim Timespec - X__glibc_reserved [2]int32 -} - -type Statfs_t struct { - Type int64 - Bsize int64 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Namelen int64 - Frsize int64 - Flags int64 - Spare [4]int64 -} - -type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [5]byte -} - -type Fsid struct { - X__val [2]int32 -} - -type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte -} - -const ( - FADV_NORMAL = 0x0 - FADV_RANDOM = 0x1 - FADV_SEQUENTIAL = 0x2 - FADV_WILLNEED = 0x3 - FADV_DONTNEED = 0x4 - FADV_NOREUSE = 0x5 -) - -type RawSockaddrInet4 struct { - Family uint16 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]uint8 -} - -type RawSockaddrInet6 struct { - Family uint16 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Family uint16 - Path [108]int8 -} - -type RawSockaddrLinklayer struct { - Family uint16 - Protocol uint16 - Ifindex int32 - Hatype uint16 - Pkttype uint8 - Halen uint8 - Addr [8]uint8 -} - -type RawSockaddrNetlink struct { - Family uint16 - Pad uint16 - Pid uint32 - Groups uint32 -} - -type RawSockaddr struct { - Family uint16 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [96]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen uint64 - Control *byte - Controllen uint64 - Flags int32 - Pad_cgo_1 [4]byte -} - -type Cmsghdr struct { - Len uint64 - Level int32 - Type int32 - X__cmsg_data [0]uint8 -} - -type Inet4Pktinfo struct { - Ifindex int32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Data [8]uint32 -} - -type Ucred struct { - Pid int32 - Uid uint32 - Gid uint32 -} - -type TCPInfo struct { - State uint8 - Ca_state uint8 - Retransmits uint8 - Probes uint8 - Backoff uint8 - Options uint8 - Pad_cgo_0 [2]byte - Rto uint32 - Ato uint32 - Snd_mss uint32 - Rcv_mss uint32 - Unacked uint32 - Sacked uint32 
- Lost uint32 - Retrans uint32 - Fackets uint32 - Last_data_sent uint32 - Last_ack_sent uint32 - Last_data_recv uint32 - Last_ack_recv uint32 - Pmtu uint32 - Rcv_ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Snd_ssthresh uint32 - Snd_cwnd uint32 - Advmss uint32 - Reordering uint32 - Rcv_rtt uint32 - Rcv_space uint32 - Total_retrans uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x70 - SizeofSockaddrUnix = 0x6e - SizeofSockaddrLinklayer = 0x14 - SizeofSockaddrNetlink = 0xc - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 - SizeofUcred = 0xc - SizeofTCPInfo = 0x68 -) - -const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x22 - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 -) - -type NlMsghdr struct { - Len uint32 - Type uint16 - Flags uint16 - Seq uint32 - Pid uint32 -} - -type NlMsgerr struct { - Error int32 - Msg NlMsghdr -} - -type RtGenmsg struct { - Family uint8 -} - -type NlAttr struct { - Len uint16 - Type uint16 -} - -type RtAttr struct { - Len uint16 - Type uint16 -} - -type IfInfomsg struct { - Family uint8 - X__ifi_pad uint8 - Type uint16 - Index int32 - Flags uint32 - Change uint32 -} - -type IfAddrmsg struct { - Family uint8 - Prefixlen uint8 - Flags uint8 - Scope uint8 - Index uint32 -} - -type RtMsg struct { - Family uint8 - Dst_len uint8 - Src_len uint8 - Tos uint8 - Table uint8 - Protocol uint8 - Scope uint8 - Type uint8 - Flags uint32 -} - -type RtNexthop struct { - Len 
uint16 - Flags uint8 - Hops uint8 - Ifindex int32 -} - -const ( - SizeofSockFilter = 0x8 - SizeofSockFprog = 0x10 -) - -type SockFilter struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter -} - -type InotifyEvent struct { - Wd int32 - Mask uint32 - Cookie uint32 - Len uint32 - Name [0]int8 -} - -const SizeofInotifyEvent = 0x10 - -type PtraceRegs struct { - Regs [31]uint64 - Sp uint64 - Pc uint64 - Pstate uint64 -} - -type FdSet struct { - Bits [16]int64 -} - -type Sysinfo_t struct { - Uptime int64 - Loads [3]uint64 - Totalram uint64 - Freeram uint64 - Sharedram uint64 - Bufferram uint64 - Totalswap uint64 - Freeswap uint64 - Procs uint16 - Pad uint16 - Pad_cgo_0 [4]byte - Totalhigh uint64 - Freehigh uint64 - Unit uint32 - X_f [0]int8 - Pad_cgo_1 [4]byte -} - -type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 -} - -type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]int8 - Fpack [6]int8 - Pad_cgo_1 [4]byte -} - -type EpollEvent struct { - Events uint32 - Fd int32 - Pad int32 -} - -const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 - AT_SYMLINK_NOFOLLOW = 0x100 -) - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Line uint8 - Cc [19]uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_ppc64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_ppc64.go deleted file mode 100644 index b14cbfef0d8..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_ppc64.go +++ /dev/null @@ -1,605 +0,0 @@ -// +build ppc64,linux -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_linux.go - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 - PathMax = 0x1000 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int64 -} - -type Timex struct { - Modes uint32 - Pad_cgo_0 [4]byte - Offset int64 - Freq int64 - Maxerror int64 - Esterror int64 - Status int32 - Pad_cgo_1 [4]byte - Constant int64 - Precision int64 - Tolerance int64 - Time Timeval - Tick int64 - Ppsfreq int64 - Jitter int64 - Shift int32 - Pad_cgo_2 [4]byte - Stabil int64 - Jitcnt int64 - Calcnt int64 - Errcnt int64 - Stbcnt int64 - Tai int32 - Pad_cgo_3 [44]byte -} - -type Time_t int64 - -type Tms struct { - Utime int64 - Stime int64 - Cutime int64 - Cstime int64 -} - -type Utimbuf struct { - Actime int64 - Modtime int64 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - Ino uint64 - Nlink uint64 - Mode uint32 - Uid uint32 - Gid uint32 - X__pad2 int32 - Rdev uint64 - Size int64 - Blksize int64 - Blocks int64 - Atim Timespec - Mtim Timespec - Ctim Timespec - X__glibc_reserved4 uint64 - X__glibc_reserved5 uint64 - X__glibc_reserved6 uint64 -} - -type Statfs_t struct { - Type int64 - 
Bsize int64 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Namelen int64 - Frsize int64 - Flags int64 - Spare [4]int64 -} - -type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]uint8 - Pad_cgo_0 [5]byte -} - -type Fsid struct { - X__val [2]int32 -} - -type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte -} - -const ( - FADV_NORMAL = 0x0 - FADV_RANDOM = 0x1 - FADV_SEQUENTIAL = 0x2 - FADV_WILLNEED = 0x3 - FADV_DONTNEED = 0x4 - FADV_NOREUSE = 0x5 -) - -type RawSockaddrInet4 struct { - Family uint16 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]uint8 -} - -type RawSockaddrInet6 struct { - Family uint16 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Family uint16 - Path [108]int8 -} - -type RawSockaddrLinklayer struct { - Family uint16 - Protocol uint16 - Ifindex int32 - Hatype uint16 - Pkttype uint8 - Halen uint8 - Addr [8]uint8 -} - -type RawSockaddrNetlink struct { - Family uint16 - Pad uint16 - Pid uint32 - Groups uint32 -} - -type RawSockaddr struct { - Family uint16 - Data [14]uint8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [96]uint8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen uint64 - Control *byte - Controllen uint64 - Flags int32 - Pad_cgo_1 [4]byte -} - -type Cmsghdr struct { - Len uint64 - Level int32 - Type int32 - X__cmsg_data [0]uint8 -} - -type Inet4Pktinfo struct { - Ifindex int32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Data [8]uint32 -} - -type Ucred struct { - Pid int32 - Uid uint32 - Gid uint32 -} - -type TCPInfo struct { - State uint8 - Ca_state uint8 - Retransmits uint8 - Probes uint8 - Backoff uint8 - Options uint8 - Pad_cgo_0 [2]byte - Rto uint32 - Ato uint32 - Snd_mss uint32 - Rcv_mss uint32 - Unacked uint32 - Sacked uint32 - Lost uint32 - Retrans uint32 - Fackets uint32 - Last_data_sent uint32 - Last_ack_sent uint32 - Last_data_recv uint32 - Last_ack_recv uint32 - Pmtu uint32 - Rcv_ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Snd_ssthresh uint32 - Snd_cwnd uint32 - Advmss uint32 - Reordering uint32 - Rcv_rtt uint32 - Rcv_space uint32 - Total_retrans uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x70 - SizeofSockaddrUnix = 0x6e - SizeofSockaddrLinklayer = 0x14 - SizeofSockaddrNetlink = 0xc - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 - SizeofUcred = 0xc - SizeofTCPInfo = 0x68 -) - -const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 
- IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x23 - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 -) - -type NlMsghdr struct { - Len uint32 - Type uint16 - Flags uint16 - Seq uint32 - Pid uint32 -} - -type NlMsgerr struct { - Error int32 - Msg NlMsghdr -} - -type RtGenmsg struct { - Family uint8 -} - -type NlAttr struct { - Len uint16 - Type uint16 -} - -type RtAttr struct { - Len uint16 - Type uint16 -} - -type IfInfomsg struct { - Family uint8 - X__ifi_pad uint8 - Type uint16 - Index int32 - Flags uint32 - Change uint32 -} - -type IfAddrmsg struct { - Family uint8 - Prefixlen uint8 - Flags uint8 - Scope uint8 - Index uint32 -} - -type RtMsg struct { - Family uint8 - Dst_len uint8 - Src_len uint8 - Tos uint8 - Table uint8 - Protocol uint8 - Scope uint8 - Type uint8 - Flags uint32 -} - -type RtNexthop struct { - Len uint16 - Flags uint8 - Hops uint8 - Ifindex int32 -} - -const ( - SizeofSockFilter = 0x8 - SizeofSockFprog = 0x10 -) - -type SockFilter struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter -} - -type InotifyEvent struct { - Wd int32 - Mask uint32 - Cookie uint32 - Len uint32 - Name [0]uint8 -} - -const SizeofInotifyEvent = 0x10 - -type PtraceRegs struct { - Gpr [32]uint64 - Nip uint64 - Msr uint64 - Orig_gpr3 uint64 - Ctr uint64 - Link uint64 - Xer uint64 - Ccr uint64 - Softe uint64 - Trap uint64 - Dar uint64 - Dsisr uint64 - Result uint64 -} - -type FdSet struct { - Bits [16]int64 -} - -type Sysinfo_t struct { - Uptime int64 - Loads [3]uint64 - Totalram uint64 - Freeram uint64 - Sharedram uint64 - Bufferram uint64 - Totalswap uint64 - Freeswap uint64 - Procs uint16 - Pad uint16 - Pad_cgo_0 [4]byte - Totalhigh uint64 - Freehigh uint64 - Unit uint32 - X_f 
[0]uint8 - Pad_cgo_1 [4]byte -} - -type Utsname struct { - Sysname [65]uint8 - Nodename [65]uint8 - Release [65]uint8 - Version [65]uint8 - Machine [65]uint8 - Domainname [65]uint8 -} - -type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]uint8 - Fpack [6]uint8 - Pad_cgo_1 [4]byte -} - -type EpollEvent struct { - Events uint32 - Fd int32 - Pad int32 -} - -const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 - AT_SYMLINK_NOFOLLOW = 0x100 -) - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [19]uint8 - Line uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_ppc64le.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_ppc64le.go deleted file mode 100644 index 22c96a2f0a0..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_linux_ppc64le.go +++ /dev/null @@ -1,605 +0,0 @@ -// +build ppc64le,linux -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_linux.go - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 - PathMax = 0x1000 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int64 -} - -type Timex struct { - Modes uint32 - Pad_cgo_0 [4]byte - Offset int64 - Freq int64 - Maxerror int64 - Esterror int64 - Status int32 - Pad_cgo_1 [4]byte - Constant int64 - Precision int64 - Tolerance int64 - Time Timeval - Tick int64 - Ppsfreq int64 - Jitter int64 - Shift int32 - Pad_cgo_2 [4]byte - Stabil int64 - Jitcnt int64 - Calcnt int64 - Errcnt int64 - Stbcnt int64 - Tai int32 - Pad_cgo_3 [44]byte -} - -type Time_t int64 - -type Tms struct { - Utime int64 - Stime int64 - Cutime int64 - Cstime int64 -} - -type Utimbuf struct { - Actime int64 - Modtime int64 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - Ino uint64 - Nlink uint64 - Mode uint32 - Uid uint32 - Gid uint32 - X__pad2 int32 - Rdev uint64 - Size int64 - Blksize int64 - Blocks int64 - Atim Timespec - Mtim Timespec - Ctim Timespec - X__glibc_reserved4 uint64 - X__glibc_reserved5 uint64 - X__glibc_reserved6 uint64 -} - -type Statfs_t struct { - Type int64 - Bsize int64 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Namelen int64 - Frsize int64 - Flags int64 - Spare [4]int64 -} - -type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]uint8 - Pad_cgo_0 [5]byte -} - -type Fsid struct { - X__val [2]int32 -} - -type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte -} - -const ( - FADV_NORMAL = 0x0 - FADV_RANDOM = 0x1 - FADV_SEQUENTIAL = 0x2 - FADV_WILLNEED = 0x3 - FADV_DONTNEED = 0x4 - FADV_NOREUSE = 0x5 -) - -type RawSockaddrInet4 struct { - Family uint16 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]uint8 -} - -type RawSockaddrInet6 struct { - Family uint16 - Port uint16 - Flowinfo uint32 - Addr 
[16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Family uint16 - Path [108]int8 -} - -type RawSockaddrLinklayer struct { - Family uint16 - Protocol uint16 - Ifindex int32 - Hatype uint16 - Pkttype uint8 - Halen uint8 - Addr [8]uint8 -} - -type RawSockaddrNetlink struct { - Family uint16 - Pad uint16 - Pid uint32 - Groups uint32 -} - -type RawSockaddr struct { - Family uint16 - Data [14]uint8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [96]uint8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPMreqn struct { - Multiaddr [4]byte /* in_addr */ - Address [4]byte /* in_addr */ - Ifindex int32 -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen uint64 - Control *byte - Controllen uint64 - Flags int32 - Pad_cgo_1 [4]byte -} - -type Cmsghdr struct { - Len uint64 - Level int32 - Type int32 - X__cmsg_data [0]uint8 -} - -type Inet4Pktinfo struct { - Ifindex int32 - Spec_dst [4]byte /* in_addr */ - Addr [4]byte /* in_addr */ -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Data [8]uint32 -} - -type Ucred struct { - Pid int32 - Uid uint32 - Gid uint32 -} - -type TCPInfo struct { - State uint8 - Ca_state uint8 - Retransmits uint8 - Probes uint8 - Backoff uint8 - Options uint8 - Pad_cgo_0 [2]byte - Rto uint32 - Ato uint32 - Snd_mss uint32 - Rcv_mss uint32 - Unacked uint32 - Sacked uint32 - Lost uint32 - Retrans uint32 - Fackets uint32 - Last_data_sent uint32 - Last_ack_sent uint32 - Last_data_recv uint32 - Last_ack_recv uint32 - Pmtu uint32 - Rcv_ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Snd_ssthresh uint32 - Snd_cwnd uint32 - Advmss uint32 - Reordering uint32 - Rcv_rtt uint32 - Rcv_space uint32 - Total_retrans uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x70 - SizeofSockaddrUnix = 0x6e - SizeofSockaddrLinklayer = 0x14 - SizeofSockaddrNetlink = 0xc - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPMreqn = 0xc - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x38 - SizeofCmsghdr = 0x10 - SizeofInet4Pktinfo = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 - SizeofUcred = 0xc - SizeofTCPInfo = 0x68 -) - -const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x22 - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - 
RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 -) - -type NlMsghdr struct { - Len uint32 - Type uint16 - Flags uint16 - Seq uint32 - Pid uint32 -} - -type NlMsgerr struct { - Error int32 - Msg NlMsghdr -} - -type RtGenmsg struct { - Family uint8 -} - -type NlAttr struct { - Len uint16 - Type uint16 -} - -type RtAttr struct { - Len uint16 - Type uint16 -} - -type IfInfomsg struct { - Family uint8 - X__ifi_pad uint8 - Type uint16 - Index int32 - Flags uint32 - Change uint32 -} - -type IfAddrmsg struct { - Family uint8 - Prefixlen uint8 - Flags uint8 - Scope uint8 - Index uint32 -} - -type RtMsg struct { - Family uint8 - Dst_len uint8 - Src_len uint8 - Tos uint8 - Table uint8 - Protocol uint8 - Scope uint8 - Type uint8 - Flags uint32 -} - -type RtNexthop struct { - Len uint16 - Flags uint8 - Hops uint8 - Ifindex int32 -} - -const ( - SizeofSockFilter = 0x8 - SizeofSockFprog = 0x10 -) - -type SockFilter struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter -} - -type InotifyEvent struct { - Wd int32 - Mask uint32 - Cookie uint32 - Len uint32 - Name [0]uint8 -} - -const SizeofInotifyEvent = 0x10 - -type PtraceRegs struct { - Gpr [32]uint64 - Nip uint64 - Msr uint64 - Orig_gpr3 uint64 - Ctr uint64 - Link uint64 - Xer uint64 - Ccr uint64 - Softe uint64 - Trap uint64 - Dar uint64 - Dsisr uint64 - Result uint64 -} - -type FdSet struct { - Bits [16]int64 -} - -type Sysinfo_t struct { - Uptime int64 - Loads [3]uint64 - Totalram uint64 - Freeram uint64 - Sharedram uint64 - Bufferram uint64 - Totalswap uint64 - Freeswap uint64 - Procs uint16 - Pad uint16 - Pad_cgo_0 [4]byte - Totalhigh uint64 - Freehigh uint64 - Unit uint32 - X_f [0]uint8 - Pad_cgo_1 [4]byte -} - -type Utsname struct { - Sysname [65]uint8 - Nodename [65]uint8 - Release [65]uint8 - Version [65]uint8 - Machine [65]uint8 - Domainname [65]uint8 -} - -type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]uint8 - Fpack [6]uint8 - Pad_cgo_1 [4]byte -} - -type EpollEvent struct { - Events uint32 - Fd int32 - Pad int32 -} - -const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 - AT_SYMLINK_NOFOLLOW = 0x100 -) - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [19]uint8 - Line uint8 - Ispeed uint32 - Ospeed uint32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_386.go 
b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_386.go deleted file mode 100644 index caf755fb86c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_386.go +++ /dev/null @@ -1,396 +0,0 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_netbsd.go - -// +build 386,netbsd - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int32 -} - -type Timeval struct { - Sec int64 - Usec int32 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - Mode uint32 - Ino uint64 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev uint64 - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Birthtimespec Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Spare [2]uint32 -} - -type Statfs_t [0]byte - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Dirent struct { - Fileno uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [512]int8 - Pad_cgo_0 [3]byte -} - -type Fsid struct { - X__fsid_val [2]int32 -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen int32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x14 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint32 - Filter uint32 - Flags uint32 - Fflags uint32 - Data int64 - Udata int32 -} - 
-type FdSet struct { - Bits [8]uint32 -} - -const ( - SizeofIfMsghdr = 0x98 - SizeofIfData = 0x84 - SizeofIfaMsghdr = 0x18 - SizeofIfAnnounceMsghdr = 0x18 - SizeofRtMsghdr = 0x78 - SizeofRtMetrics = 0x50 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData - Pad_cgo_1 [4]byte -} - -type IfData struct { - Type uint8 - Addrlen uint8 - Hdrlen uint8 - Pad_cgo_0 [1]byte - Link_state int32 - Mtu uint64 - Metric uint64 - Baudrate uint64 - Ipackets uint64 - Ierrors uint64 - Opackets uint64 - Oerrors uint64 - Collisions uint64 - Ibytes uint64 - Obytes uint64 - Imcasts uint64 - Omcasts uint64 - Iqdrops uint64 - Noproto uint64 - Lastchange Timespec -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Metric int32 - Index uint16 - Pad_cgo_0 [6]byte -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Name [16]int8 - What uint16 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits int32 - Pad_cgo_1 [4]byte - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint64 - Mtu uint64 - Hopcount uint64 - Recvpipe uint64 - Sendpipe uint64 - Ssthresh uint64 - Rtt uint64 - Rttvar uint64 - Expire int64 - Pksent int64 -} - -type Mclpool [0]byte - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x80 - SizeofBpfProgram = 0x8 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint64 - Drop uint64 - Capt uint64 - Padding [13]uint64 -} - -type BpfProgram struct { - Len uint32 - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp BpfTimeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type BpfTimeval struct { - Sec int32 - Usec int32 -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed int32 - Ospeed int32 -} - -type Sysctlnode struct { - Flags uint32 - Num int32 - Name [32]int8 - Ver uint32 - X__rsvd uint32 - Un [16]byte - X_sysctl_size [8]byte - X_sysctl_func [8]byte - X_sysctl_parent [8]byte - X_sysctl_desc [8]byte -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_amd64.go deleted file mode 100644 index 91b4a5305a4..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_amd64.go +++ /dev/null @@ -1,403 +0,0 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_netbsd.go - -// +build amd64,netbsd - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int32 - Pad_cgo_0 [4]byte -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit 
struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - Mode uint32 - Pad_cgo_0 [4]byte - Ino uint64 - Nlink uint32 - Uid uint32 - Gid uint32 - Pad_cgo_1 [4]byte - Rdev uint64 - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Birthtimespec Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Spare [2]uint32 - Pad_cgo_2 [4]byte -} - -type Statfs_t [0]byte - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Dirent struct { - Fileno uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [512]int8 - Pad_cgo_0 [3]byte -} - -type Fsid struct { - X__fsid_val [2]int32 -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen int32 - Pad_cgo_1 [4]byte - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x14 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x30 - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint64 - Filter uint32 - Flags uint32 - Fflags uint32 - Pad_cgo_0 [4]byte - Data int64 - Udata int64 -} - -type FdSet struct { - Bits [8]uint32 -} - -const ( - SizeofIfMsghdr = 0x98 - SizeofIfData = 0x88 - SizeofIfaMsghdr = 0x18 - SizeofIfAnnounceMsghdr = 0x18 - SizeofRtMsghdr = 0x78 - SizeofRtMetrics = 0x50 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Addrlen uint8 - Hdrlen uint8 - Pad_cgo_0 [1]byte - Link_state int32 - Mtu uint64 - Metric uint64 - Baudrate uint64 - Ipackets uint64 - Ierrors uint64 - Opackets uint64 - Oerrors uint64 - Collisions uint64 - Ibytes uint64 - Obytes uint64 - Imcasts uint64 - Omcasts uint64 - Iqdrops uint64 - Noproto uint64 - Lastchange Timespec -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Metric int32 - Index uint16 - Pad_cgo_0 [6]byte 
-} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Name [16]int8 - What uint16 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits int32 - Pad_cgo_1 [4]byte - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint64 - Mtu uint64 - Hopcount uint64 - Recvpipe uint64 - Sendpipe uint64 - Ssthresh uint64 - Rtt uint64 - Rttvar uint64 - Expire int64 - Pksent int64 -} - -type Mclpool [0]byte - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x80 - SizeofBpfProgram = 0x10 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x20 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint64 - Drop uint64 - Capt uint64 - Padding [13]uint64 -} - -type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp BpfTimeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [6]byte -} - -type BpfTimeval struct { - Sec int64 - Usec int64 -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed int32 - Ospeed int32 -} - -type Sysctlnode struct { - Flags uint32 - Num int32 - Name [32]int8 - Ver uint32 - X__rsvd uint32 - Un [16]byte - X_sysctl_size [8]byte - X_sysctl_func [8]byte - X_sysctl_parent [8]byte - X_sysctl_desc [8]byte -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_arm.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_arm.go deleted file mode 100644 index c0758f9d3f7..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_netbsd_arm.go +++ /dev/null @@ -1,401 +0,0 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_netbsd.go - -// +build arm,netbsd - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int32 - Pad_cgo_0 [4]byte -} - -type Timeval struct { - Sec int64 - Usec int32 - Pad_cgo_0 [4]byte -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -type Stat_t struct { - Dev uint64 - Mode uint32 - Pad_cgo_0 [4]byte - Ino uint64 - Nlink uint32 - Uid uint32 - Gid uint32 - Pad_cgo_1 [4]byte - Rdev uint64 - Atimespec Timespec - Mtimespec Timespec - Ctimespec Timespec - Birthtimespec Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Spare [2]uint32 - Pad_cgo_2 [4]byte -} - -type Statfs_t [0]byte - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Dirent struct { - Fileno uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [512]int8 - Pad_cgo_0 [3]byte -} - -type Fsid struct { - X__fsid_val [2]int32 -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 
struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [12]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen int32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x14 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint32 - Filter uint32 - Flags uint32 - Fflags uint32 - Data int64 - Udata int32 - Pad_cgo_0 [4]byte -} - -type FdSet struct { - Bits [8]uint32 -} - -const ( - SizeofIfMsghdr = 0x98 - SizeofIfData = 0x88 - SizeofIfaMsghdr = 0x18 - SizeofIfAnnounceMsghdr = 0x18 - SizeofRtMsghdr = 0x78 - SizeofRtMetrics = 0x50 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Addrlen uint8 - Hdrlen uint8 - Pad_cgo_0 [1]byte - Link_state int32 - Mtu uint64 - Metric uint64 - Baudrate uint64 - Ipackets uint64 - Ierrors uint64 - Opackets uint64 - Oerrors uint64 - Collisions uint64 - Ibytes uint64 - Obytes uint64 - Imcasts uint64 - Omcasts uint64 - Iqdrops uint64 - Noproto uint64 - Lastchange Timespec -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Metric int32 - Index uint16 - Pad_cgo_0 [6]byte -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Name [16]int8 - What uint16 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits int32 - Pad_cgo_1 [4]byte - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint64 - Mtu uint64 - Hopcount uint64 - Recvpipe uint64 - Sendpipe uint64 - Ssthresh uint64 - Rtt uint64 - Rttvar uint64 - Expire int64 - Pksent int64 -} - -type Mclpool [0]byte - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x80 - SizeofBpfProgram = 0x8 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint64 - Drop uint64 - Capt uint64 - Padding [13]uint64 -} - -type BpfProgram struct { - Len 
uint32 - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp BpfTimeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type BpfTimeval struct { - Sec int32 - Usec int32 -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed int32 - Ospeed int32 -} - -type Sysctlnode struct { - Flags uint32 - Num int32 - Name [32]int8 - Ver uint32 - X__rsvd uint32 - Un [16]byte - X_sysctl_size [8]byte - X_sysctl_func [8]byte - X_sysctl_parent [8]byte - X_sysctl_desc [8]byte -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_openbsd_386.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_openbsd_386.go deleted file mode 100644 index 860a4697961..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_openbsd_386.go +++ /dev/null @@ -1,441 +0,0 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_openbsd.go - -// +build 386,openbsd - -package unix - -const ( - sizeofPtr = 0x4 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x4 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int32 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int32 -} - -type Timeval struct { - Sec int64 - Usec int32 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int32 - Ixrss int32 - Idrss int32 - Isrss int32 - Minflt int32 - Majflt int32 - Nswap int32 - Inblock int32 - Oublock int32 - Msgsnd int32 - Msgrcv int32 - Nsignals int32 - Nvcsw int32 - Nivcsw int32 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -const ( - S_IFMT = 0xf000 - S_IFIFO = 0x1000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFBLK = 0x6000 - S_IFREG = 0x8000 - S_IFLNK = 0xa000 - S_IFSOCK = 0xc000 - S_ISUID = 0x800 - S_ISGID = 0x400 - S_ISVTX = 0x200 - S_IRUSR = 0x100 - S_IWUSR = 0x80 - S_IXUSR = 0x40 -) - -type Stat_t struct { - Mode uint32 - Dev int32 - Ino uint64 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev int32 - Atim Timespec - Mtim Timespec - Ctim Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - X__st_birthtim Timespec -} - -type Statfs_t struct { - F_flags uint32 - F_bsize uint32 - F_iosize uint32 - F_blocks uint64 - F_bfree uint64 - F_bavail int64 - F_files uint64 - F_ffree uint64 - F_favail int64 - F_syncwrites uint64 - F_syncreads uint64 - F_asyncwrites uint64 - F_asyncreads uint64 - F_fsid Fsid - F_namemax uint32 - F_owner uint32 - F_ctime uint64 - F_fstypename [16]int8 - F_mntonname [90]int8 - F_mntfromname [90]int8 - F_mntfromspec [90]int8 - Pad_cgo_0 [2]byte - Mount_info [160]byte -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Dirent struct { - Fileno uint64 - Off int64 - Reclen uint16 - Type uint8 - Namlen uint8 - X__d_padding [4]uint8 - Name [256]int8 -} - -type Fsid struct { - Val [2]int32 -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 
- Slen uint8 - Data [24]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *byte - Len uint32 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Iov *Iovec - Iovlen uint32 - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x20 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x1c - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint32 - Filter int16 - Flags uint16 - Fflags uint32 - Data int64 - Udata *byte -} - -type FdSet struct { - Bits [32]uint32 -} - -const ( - SizeofIfMsghdr = 0xec - SizeofIfData = 0xd4 - SizeofIfaMsghdr = 0x18 - SizeofIfAnnounceMsghdr = 0x1a - SizeofRtMsghdr = 0x60 - SizeofRtMetrics = 0x38 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Hdrlen uint16 - Index uint16 - Tableid uint16 - Pad1 uint8 - Pad2 uint8 - Addrs int32 - Flags int32 - Xflags int32 - Data IfData -} - -type IfData struct { - Type uint8 - Addrlen uint8 - Hdrlen uint8 - Link_state uint8 - Mtu uint32 - Metric uint32 - Pad uint32 - Baudrate uint64 - Ipackets uint64 - Ierrors uint64 - Opackets uint64 - Oerrors uint64 - Collisions uint64 - Ibytes uint64 - Obytes uint64 - Imcasts uint64 - Omcasts uint64 - Iqdrops uint64 - Noproto uint64 - Capabilities uint32 - Lastchange Timeval - Mclpool [7]Mclpool -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Hdrlen uint16 - Index uint16 - Tableid uint16 - Pad1 uint8 - Pad2 uint8 - Addrs int32 - Flags int32 - Metric int32 -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Hdrlen uint16 - Index uint16 - What uint16 - Name [16]int8 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Hdrlen uint16 - Index uint16 - Tableid uint16 - Priority uint8 - Mpls uint8 - Addrs int32 - Flags int32 - Fmask int32 - Pid int32 - Seq int32 - Errno int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Pksent uint64 - Expire int64 - Locks uint32 - Mtu uint32 - Refcnt uint32 - Hopcount uint32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Pad uint32 -} - -type Mclpool struct { - Grown int32 - Alive uint16 - Hwm uint16 - Cwm uint16 - Lwm uint16 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfProgram = 0x8 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfProgram struct { - Len uint32 - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - 
-type BpfHdr struct { - Tstamp BpfTimeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type BpfTimeval struct { - Sec uint32 - Usec uint32 -} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed int32 - Ospeed int32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_openbsd_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_openbsd_amd64.go deleted file mode 100644 index 23c52727f7d..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_openbsd_amd64.go +++ /dev/null @@ -1,448 +0,0 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_openbsd.go - -// +build amd64,openbsd - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int64 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -const ( - S_IFMT = 0xf000 - S_IFIFO = 0x1000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFBLK = 0x6000 - S_IFREG = 0x8000 - S_IFLNK = 0xa000 - S_IFSOCK = 0xc000 - S_ISUID = 0x800 - S_ISGID = 0x400 - S_ISVTX = 0x200 - S_IRUSR = 0x100 - S_IWUSR = 0x80 - S_IXUSR = 0x40 -) - -type Stat_t struct { - Mode uint32 - Dev int32 - Ino uint64 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev int32 - Atim Timespec - Mtim Timespec - Ctim Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - Pad_cgo_0 [4]byte - X__st_birthtim Timespec -} - -type Statfs_t struct { - F_flags uint32 - F_bsize uint32 - F_iosize uint32 - Pad_cgo_0 [4]byte - F_blocks uint64 - F_bfree uint64 - F_bavail int64 - F_files uint64 - F_ffree uint64 - F_favail int64 - F_syncwrites uint64 - F_syncreads uint64 - F_asyncwrites uint64 - F_asyncreads uint64 - F_fsid Fsid - F_namemax uint32 - F_owner uint32 - F_ctime uint64 - F_fstypename [16]int8 - F_mntonname [90]int8 - F_mntfromname [90]int8 - F_mntfromspec [90]int8 - Pad_cgo_1 [2]byte - Mount_info [160]byte -} - -type Flock_t struct { - Start int64 - Len int64 - Pid int32 - Type int16 - Whence int16 -} - -type Dirent struct { - Fileno uint64 - Off int64 - Reclen uint16 - Type uint8 - Namlen uint8 - X__d_padding [4]uint8 - Name [256]int8 -} - -type Fsid struct { - Val [2]int32 -} - -type RawSockaddrInet4 struct { - Len uint8 - Family uint8 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Len uint8 - Family uint8 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 -} - -type RawSockaddrUnix struct { - Len uint8 - Family uint8 - Path [104]int8 -} - -type RawSockaddrDatalink struct { - Len uint8 - Family uint8 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [24]int8 -} - -type RawSockaddr struct { - Len uint8 - Family uint8 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [92]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec 
struct { - Base *byte - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen uint32 - Pad_cgo_1 [4]byte - Control *byte - Controllen uint32 - Flags int32 -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 -} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - Filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x1c - SizeofSockaddrAny = 0x6c - SizeofSockaddrUnix = 0x6a - SizeofSockaddrDatalink = 0x20 - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x30 - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x20 - SizeofICMPv6Filter = 0x20 -) - -const ( - PTRACE_TRACEME = 0x0 - PTRACE_CONT = 0x7 - PTRACE_KILL = 0x8 -) - -type Kevent_t struct { - Ident uint64 - Filter int16 - Flags uint16 - Fflags uint32 - Data int64 - Udata *byte -} - -type FdSet struct { - Bits [32]uint32 -} - -const ( - SizeofIfMsghdr = 0xf8 - SizeofIfData = 0xe0 - SizeofIfaMsghdr = 0x18 - SizeofIfAnnounceMsghdr = 0x1a - SizeofRtMsghdr = 0x60 - SizeofRtMetrics = 0x38 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Hdrlen uint16 - Index uint16 - Tableid uint16 - Pad1 uint8 - Pad2 uint8 - Addrs int32 - Flags int32 - Xflags int32 - Data IfData -} - -type IfData struct { - Type uint8 - Addrlen uint8 - Hdrlen uint8 - Link_state uint8 - Mtu uint32 - Metric uint32 - Pad uint32 - Baudrate uint64 - Ipackets uint64 - Ierrors uint64 - Opackets uint64 - Oerrors uint64 - Collisions uint64 - Ibytes uint64 - Obytes uint64 - Imcasts uint64 - Omcasts uint64 - Iqdrops uint64 - Noproto uint64 - Capabilities uint32 - Pad_cgo_0 [4]byte - Lastchange Timeval - Mclpool [7]Mclpool - Pad_cgo_1 [4]byte -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Hdrlen uint16 - Index uint16 - Tableid uint16 - Pad1 uint8 - Pad2 uint8 - Addrs int32 - Flags int32 - Metric int32 -} - -type IfAnnounceMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Hdrlen uint16 - Index uint16 - What uint16 - Name [16]int8 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Hdrlen uint16 - Index uint16 - Tableid uint16 - Priority uint8 - Mpls uint8 - Addrs int32 - Flags int32 - Fmask int32 - Pid int32 - Seq int32 - Errno int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Pksent uint64 - Expire int64 - Locks uint32 - Mtu uint32 - Refcnt uint32 - Hopcount uint32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Pad uint32 -} - -type Mclpool struct { - Grown int32 - Alive uint16 - Hwm uint16 - Cwm uint16 - Lwm uint16 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x8 - SizeofBpfProgram = 0x10 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint32 - Drop uint32 -} - -type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfHdr struct { - Tstamp BpfTimeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -type BpfTimeval struct { - Sec uint32 - Usec uint32 
-} - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [20]uint8 - Ispeed int32 - Ospeed int32 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_solaris_amd64.go b/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_solaris_amd64.go deleted file mode 100644 index b3b928a51e6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/external/golang.org/x/sys/unix/ztypes_solaris_amd64.go +++ /dev/null @@ -1,422 +0,0 @@ -// +build amd64,solaris -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_solaris.go - -package unix - -const ( - sizeofPtr = 0x8 - sizeofShort = 0x2 - sizeofInt = 0x4 - sizeofLong = 0x8 - sizeofLongLong = 0x8 - PathMax = 0x400 -) - -type ( - _C_short int16 - _C_int int32 - _C_long int64 - _C_long_long int64 -) - -type Timespec struct { - Sec int64 - Nsec int64 -} - -type Timeval struct { - Sec int64 - Usec int64 -} - -type Timeval32 struct { - Sec int32 - Usec int32 -} - -type Tms struct { - Utime int64 - Stime int64 - Cutime int64 - Cstime int64 -} - -type Utimbuf struct { - Actime int64 - Modtime int64 -} - -type Rusage struct { - Utime Timeval - Stime Timeval - Maxrss int64 - Ixrss int64 - Idrss int64 - Isrss int64 - Minflt int64 - Majflt int64 - Nswap int64 - Inblock int64 - Oublock int64 - Msgsnd int64 - Msgrcv int64 - Nsignals int64 - Nvcsw int64 - Nivcsw int64 -} - -type Rlimit struct { - Cur uint64 - Max uint64 -} - -type _Gid_t uint32 - -const ( - S_IFMT = 0xf000 - S_IFIFO = 0x1000 - S_IFCHR = 0x2000 - S_IFDIR = 0x4000 - S_IFBLK = 0x6000 - S_IFREG = 0x8000 - S_IFLNK = 0xa000 - S_IFSOCK = 0xc000 - S_ISUID = 0x800 - S_ISGID = 0x400 - S_ISVTX = 0x200 - S_IRUSR = 0x100 - S_IWUSR = 0x80 - S_IXUSR = 0x40 -) - -type Stat_t struct { - Dev uint64 - Ino uint64 - Mode uint32 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev uint64 - Size int64 - Atim Timespec - Mtim Timespec - Ctim Timespec - Blksize int32 - Pad_cgo_0 [4]byte - Blocks int64 - Fstype [16]int8 -} - -type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Sysid int32 - Pid int32 - Pad [4]int64 -} - -type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Name [1]int8 - Pad_cgo_0 [5]byte -} - -type RawSockaddrInet4 struct { - Family uint16 - Port uint16 - Addr [4]byte /* in_addr */ - Zero [8]int8 -} - -type RawSockaddrInet6 struct { - Family uint16 - Port uint16 - Flowinfo uint32 - Addr [16]byte /* in6_addr */ - Scope_id uint32 - X__sin6_src_id uint32 -} - -type RawSockaddrUnix struct { - Family uint16 - Path [108]int8 -} - -type RawSockaddrDatalink struct { - Family uint16 - Index uint16 - Type uint8 - Nlen uint8 - Alen uint8 - Slen uint8 - Data [244]int8 -} - -type RawSockaddr struct { - Family uint16 - Data [14]int8 -} - -type RawSockaddrAny struct { - Addr RawSockaddr - Pad [236]int8 -} - -type _Socklen uint32 - -type Linger struct { - Onoff int32 - Linger int32 -} - -type Iovec struct { - Base *int8 - Len uint64 -} - -type IPMreq struct { - Multiaddr [4]byte /* in_addr */ - Interface [4]byte /* in_addr */ -} - -type IPv6Mreq struct { - Multiaddr [16]byte /* in6_addr */ - Interface uint32 -} - -type Msghdr struct { - Name *byte - Namelen uint32 - Pad_cgo_0 [4]byte - Iov *Iovec - Iovlen int32 - Pad_cgo_1 [4]byte - Accrights *int8 - Accrightslen int32 - Pad_cgo_2 [4]byte -} - -type Cmsghdr struct { - Len uint32 - Level int32 - Type int32 -} - -type Inet6Pktinfo struct { - Addr [16]byte /* in6_addr */ - Ifindex uint32 
-} - -type IPv6MTUInfo struct { - Addr RawSockaddrInet6 - Mtu uint32 -} - -type ICMPv6Filter struct { - X__icmp6_filt [8]uint32 -} - -const ( - SizeofSockaddrInet4 = 0x10 - SizeofSockaddrInet6 = 0x20 - SizeofSockaddrAny = 0xfc - SizeofSockaddrUnix = 0x6e - SizeofSockaddrDatalink = 0xfc - SizeofLinger = 0x8 - SizeofIPMreq = 0x8 - SizeofIPv6Mreq = 0x14 - SizeofMsghdr = 0x30 - SizeofCmsghdr = 0xc - SizeofInet6Pktinfo = 0x14 - SizeofIPv6MTUInfo = 0x24 - SizeofICMPv6Filter = 0x20 -) - -type FdSet struct { - Bits [1024]int64 -} - -type Utsname struct { - Sysname [257]int8 - Nodename [257]int8 - Release [257]int8 - Version [257]int8 - Machine [257]int8 -} - -type Ustat_t struct { - Tfree int64 - Tinode uint64 - Fname [6]int8 - Fpack [6]int8 - Pad_cgo_0 [4]byte -} - -const ( - AT_FDCWD = 0xffd19553 - AT_SYMLINK_NOFOLLOW = 0x1000 - AT_SYMLINK_FOLLOW = 0x2000 - AT_REMOVEDIR = 0x1 - AT_EACCESS = 0x4 -) - -const ( - SizeofIfMsghdr = 0x54 - SizeofIfData = 0x44 - SizeofIfaMsghdr = 0x14 - SizeofRtMsghdr = 0x4c - SizeofRtMetrics = 0x28 -) - -type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData -} - -type IfData struct { - Type uint8 - Addrlen uint8 - Hdrlen uint8 - Pad_cgo_0 [1]byte - Mtu uint32 - Metric uint32 - Baudrate uint32 - Ipackets uint32 - Ierrors uint32 - Opackets uint32 - Oerrors uint32 - Collisions uint32 - Ibytes uint32 - Obytes uint32 - Imcasts uint32 - Omcasts uint32 - Iqdrops uint32 - Noproto uint32 - Lastchange Timeval32 -} - -type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 -} - -type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics -} - -type RtMetrics struct { - Locks uint32 - Mtu uint32 - Hopcount uint32 - Expire uint32 - Recvpipe uint32 - Sendpipe uint32 - Ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Pksent uint32 -} - -const ( - SizeofBpfVersion = 0x4 - SizeofBpfStat = 0x80 - SizeofBpfProgram = 0x10 - SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 -) - -type BpfVersion struct { - Major uint16 - Minor uint16 -} - -type BpfStat struct { - Recv uint64 - Drop uint64 - Capt uint64 - Padding [13]uint64 -} - -type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn -} - -type BpfInsn struct { - Code uint16 - Jt uint8 - Jf uint8 - K uint32 -} - -type BpfTimeval struct { - Sec int32 - Usec int32 -} - -type BpfHdr struct { - Tstamp BpfTimeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte -} - -const _SC_PAGESIZE = 0xb - -type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [19]uint8 - Pad_cgo_0 [1]byte -} - -type Termio struct { - Iflag uint16 - Oflag uint16 - Cflag uint16 - Lflag uint16 - Line int8 - Cc [8]uint8 - Pad_cgo_0 [1]byte -} - -type Winsize struct { - Row uint16 - Col uint16 - Xpixel uint16 - Ypixel uint16 -} diff --git a/vendor/github.com/fsouza/go-dockerclient/image.go b/vendor/github.com/fsouza/go-dockerclient/image.go deleted file mode 100644 index fd51c3f9283..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/image.go +++ /dev/null @@ -1,642 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -package docker - -import ( - "bytes" - "encoding/base64" - "encoding/json" - "errors" - "fmt" - "io" - "net/http" - "net/url" - "os" - "time" -) - -// APIImages represent an image returned in the ListImages call. -type APIImages struct { - ID string `json:"Id" yaml:"Id"` - RepoTags []string `json:"RepoTags,omitempty" yaml:"RepoTags,omitempty"` - Created int64 `json:"Created,omitempty" yaml:"Created,omitempty"` - Size int64 `json:"Size,omitempty" yaml:"Size,omitempty"` - VirtualSize int64 `json:"VirtualSize,omitempty" yaml:"VirtualSize,omitempty"` - ParentID string `json:"ParentId,omitempty" yaml:"ParentId,omitempty"` - RepoDigests []string `json:"RepoDigests,omitempty" yaml:"RepoDigests,omitempty"` - Labels map[string]string `json:"Labels,omitempty" yaml:"Labels,omitempty"` -} - -// RootFS represents the underlying layers used by an image -type RootFS struct { - Type string `json:"Type,omitempty" yaml:"Type,omitempty"` - Layers []string `json:"Layers,omitempty" yaml:"Layers,omitempty"` -} - -// Image is the type representing a docker image and its various properties -type Image struct { - ID string `json:"Id" yaml:"Id"` - RepoTags []string `json:"RepoTags,omitempty" yaml:"RepoTags,omitempty"` - Parent string `json:"Parent,omitempty" yaml:"Parent,omitempty"` - Comment string `json:"Comment,omitempty" yaml:"Comment,omitempty"` - Created time.Time `json:"Created,omitempty" yaml:"Created,omitempty"` - Container string `json:"Container,omitempty" yaml:"Container,omitempty"` - ContainerConfig Config `json:"ContainerConfig,omitempty" yaml:"ContainerConfig,omitempty"` - DockerVersion string `json:"DockerVersion,omitempty" yaml:"DockerVersion,omitempty"` - Author string `json:"Author,omitempty" yaml:"Author,omitempty"` - Config *Config `json:"Config,omitempty" yaml:"Config,omitempty"` - Architecture string `json:"Architecture,omitempty" yaml:"Architecture,omitempty"` - Size int64 `json:"Size,omitempty" yaml:"Size,omitempty"` - VirtualSize int64 `json:"VirtualSize,omitempty" yaml:"VirtualSize,omitempty"` - RepoDigests []string `json:"RepoDigests,omitempty" yaml:"RepoDigests,omitempty"` - RootFS *RootFS `json:"RootFS,omitempty" yaml:"RootFS,omitempty"` -} - -// ImagePre012 serves the same purpose as the Image type except that it is for -// earlier versions of the Docker API (pre-012 to be specific) -type ImagePre012 struct { - ID string `json:"id"` - Parent string `json:"parent,omitempty"` - Comment string `json:"comment,omitempty"` - Created time.Time `json:"created"` - Container string `json:"container,omitempty"` - ContainerConfig Config `json:"container_config,omitempty"` - DockerVersion string `json:"docker_version,omitempty"` - Author string `json:"author,omitempty"` - Config *Config `json:"config,omitempty"` - Architecture string `json:"architecture,omitempty"` - Size int64 `json:"size,omitempty"` -} - -var ( - // ErrNoSuchImage is the error returned when the image does not exist. - ErrNoSuchImage = errors.New("no such image") - - // ErrMissingRepo is the error returned when the remote repository is - // missing. - ErrMissingRepo = errors.New("missing remote repository e.g. 'github.com/user/repo'") - - // ErrMissingOutputStream is the error returned when no output stream - // is provided to some calls, like BuildImage. 
- ErrMissingOutputStream = errors.New("missing output stream") - - // ErrMultipleContexts is the error returned when both a ContextDir and - // InputStream are provided in BuildImageOptions - ErrMultipleContexts = errors.New("image build may not be provided BOTH context dir and input stream") - - // ErrMustSpecifyNames is the error rreturned when the Names field on - // ExportImagesOptions is nil or empty - ErrMustSpecifyNames = errors.New("must specify at least one name to export") -) - -// ListImagesOptions specify parameters to the ListImages function. -// -// See https://goo.gl/xBe1u3 for more details. -type ListImagesOptions struct { - All bool - Filters map[string][]string - Digests bool - Filter string -} - -// ListImages returns the list of available images in the server. -// -// See https://goo.gl/xBe1u3 for more details. -func (c *Client) ListImages(opts ListImagesOptions) ([]APIImages, error) { - path := "/images/json?" + queryString(opts) - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - var images []APIImages - if err := json.NewDecoder(resp.Body).Decode(&images); err != nil { - return nil, err - } - return images, nil -} - -// ImageHistory represent a layer in an image's history returned by the -// ImageHistory call. -type ImageHistory struct { - ID string `json:"Id" yaml:"Id"` - Tags []string `json:"Tags,omitempty" yaml:"Tags,omitempty"` - Created int64 `json:"Created,omitempty" yaml:"Created,omitempty"` - CreatedBy string `json:"CreatedBy,omitempty" yaml:"CreatedBy,omitempty"` - Size int64 `json:"Size,omitempty" yaml:"Size,omitempty"` -} - -// ImageHistory returns the history of the image by its name or ID. -// -// See https://goo.gl/8bnTId for more details. -func (c *Client) ImageHistory(name string) ([]ImageHistory, error) { - resp, err := c.do("GET", "/images/"+name+"/history", doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, ErrNoSuchImage - } - return nil, err - } - defer resp.Body.Close() - var history []ImageHistory - if err := json.NewDecoder(resp.Body).Decode(&history); err != nil { - return nil, err - } - return history, nil -} - -// RemoveImage removes an image by its name or ID. -// -// See https://goo.gl/V3ZWnK for more details. -func (c *Client) RemoveImage(name string) error { - resp, err := c.do("DELETE", "/images/"+name, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return ErrNoSuchImage - } - return err - } - resp.Body.Close() - return nil -} - -// RemoveImageOptions present the set of options available for removing an image -// from a registry. -// -// See https://goo.gl/V3ZWnK for more details. -type RemoveImageOptions struct { - Force bool `qs:"force"` - NoPrune bool `qs:"noprune"` -} - -// RemoveImageExtended removes an image by its name or ID. -// Extra params can be passed, see RemoveImageOptions -// -// See https://goo.gl/V3ZWnK for more details. -func (c *Client) RemoveImageExtended(name string, opts RemoveImageOptions) error { - uri := fmt.Sprintf("/images/%s?%s", name, queryString(&opts)) - resp, err := c.do("DELETE", uri, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return ErrNoSuchImage - } - return err - } - resp.Body.Close() - return nil -} - -// InspectImage returns an image by its name or ID. -// -// See https://goo.gl/jHPcg6 for more details. 
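For orientation, a minimal sketch of how the image-listing API deleted above is typically consumed. The `NewClient` constructor, the daemon socket path, and the upstream import path `github.com/fsouza/go-dockerclient` are assumptions here (the constructor lives in the library's client.go, outside this hunk):

```go
package main

import (
	"fmt"
	"log"

	docker "github.com/fsouza/go-dockerclient"
)

func main() {
	// Assumed constructor and endpoint; NewClient is defined outside this hunk.
	client, err := docker.NewClient("unix:///var/run/docker.sock")
	if err != nil {
		log.Fatal(err)
	}

	// ListImages wraps GET /images/json, per the deleted image.go above.
	images, err := client.ListImages(docker.ListImagesOptions{All: false})
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags)
	}
}
```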
-func (c *Client) InspectImage(name string) (*Image, error) { - resp, err := c.do("GET", "/images/"+name+"/json", doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, ErrNoSuchImage - } - return nil, err - } - defer resp.Body.Close() - - var image Image - - // if the caller elected to skip checking the server's version, assume it's the latest - if c.SkipServerVersionCheck || c.expectedAPIVersion.GreaterThanOrEqualTo(apiVersion112) { - if err := json.NewDecoder(resp.Body).Decode(&image); err != nil { - return nil, err - } - } else { - var imagePre012 ImagePre012 - if err := json.NewDecoder(resp.Body).Decode(&imagePre012); err != nil { - return nil, err - } - - image.ID = imagePre012.ID - image.Parent = imagePre012.Parent - image.Comment = imagePre012.Comment - image.Created = imagePre012.Created - image.Container = imagePre012.Container - image.ContainerConfig = imagePre012.ContainerConfig - image.DockerVersion = imagePre012.DockerVersion - image.Author = imagePre012.Author - image.Config = imagePre012.Config - image.Architecture = imagePre012.Architecture - image.Size = imagePre012.Size - } - - return &image, nil -} - -// PushImageOptions represents options to use in the PushImage method. -// -// See https://goo.gl/zPtZaT for more details. -type PushImageOptions struct { - // Name of the image - Name string - - // Tag of the image - Tag string - - // Registry server to push the image - Registry string - - OutputStream io.Writer `qs:"-"` - RawJSONStream bool `qs:"-"` - InactivityTimeout time.Duration `qs:"-"` -} - -// PushImage pushes an image to a remote registry, logging progress to w. -// -// An empty instance of AuthConfiguration may be used for unauthenticated -// pushes. -// -// See https://goo.gl/zPtZaT for more details. -func (c *Client) PushImage(opts PushImageOptions, auth AuthConfiguration) error { - if opts.Name == "" { - return ErrNoSuchImage - } - headers, err := headersWithAuth(auth) - if err != nil { - return err - } - name := opts.Name - opts.Name = "" - path := "/images/" + name + "/push?" + queryString(&opts) - return c.stream("POST", path, streamOptions{ - setRawTerminal: true, - rawJSONStream: opts.RawJSONStream, - headers: headers, - stdout: opts.OutputStream, - inactivityTimeout: opts.InactivityTimeout, - }) -} - -// PullImageOptions present the set of options available for pulling an image -// from a registry. -// -// See https://goo.gl/iJkZjD for more details. -type PullImageOptions struct { - Repository string `qs:"fromImage"` - Registry string - Tag string - - OutputStream io.Writer `qs:"-"` - RawJSONStream bool `qs:"-"` - InactivityTimeout time.Duration `qs:"-"` -} - -// PullImage pulls an image from a remote registry, logging progress to -// opts.OutputStream. -// -// See https://goo.gl/iJkZjD for more details. -func (c *Client) PullImage(opts PullImageOptions, auth AuthConfiguration) error { - if opts.Repository == "" { - return ErrNoSuchImage - } - - headers, err := headersWithAuth(auth) - if err != nil { - return err - } - return c.createImage(queryString(&opts), headers, nil, opts.OutputStream, opts.RawJSONStream, opts.InactivityTimeout) -} - -func (c *Client) createImage(qs string, headers map[string]string, in io.Reader, w io.Writer, rawJSONStream bool, timeout time.Duration) error { - path := "/images/create?" 
+ qs - return c.stream("POST", path, streamOptions{ - setRawTerminal: true, - headers: headers, - in: in, - stdout: w, - rawJSONStream: rawJSONStream, - inactivityTimeout: timeout, - }) -} - -// LoadImageOptions represents the options for LoadImage Docker API Call -// -// See https://goo.gl/JyClMX for more details. -type LoadImageOptions struct { - InputStream io.Reader -} - -// LoadImage imports a tarball docker image -// -// See https://goo.gl/JyClMX for more details. -func (c *Client) LoadImage(opts LoadImageOptions) error { - return c.stream("POST", "/images/load", streamOptions{ - setRawTerminal: true, - in: opts.InputStream, - }) -} - -// ExportImageOptions represent the options for ExportImage Docker API call. -// -// See https://goo.gl/le7vK8 for more details. -type ExportImageOptions struct { - Name string - OutputStream io.Writer - InactivityTimeout time.Duration `qs:"-"` -} - -// ExportImage exports an image (as a tar file) into the stream. -// -// See https://goo.gl/le7vK8 for more details. -func (c *Client) ExportImage(opts ExportImageOptions) error { - return c.stream("GET", fmt.Sprintf("/images/%s/get", opts.Name), streamOptions{ - setRawTerminal: true, - stdout: opts.OutputStream, - inactivityTimeout: opts.InactivityTimeout, - }) -} - -// ExportImagesOptions represent the options for ExportImages Docker API call -// -// See https://goo.gl/huC7HA for more details. -type ExportImagesOptions struct { - Names []string - OutputStream io.Writer `qs:"-"` - InactivityTimeout time.Duration `qs:"-"` -} - -// ExportImages exports one or more images (as a tar file) into the stream -// -// See https://goo.gl/huC7HA for more details. -func (c *Client) ExportImages(opts ExportImagesOptions) error { - if opts.Names == nil || len(opts.Names) == 0 { - return ErrMustSpecifyNames - } - return c.stream("GET", "/images/get?"+queryString(&opts), streamOptions{ - setRawTerminal: true, - stdout: opts.OutputStream, - inactivityTimeout: opts.InactivityTimeout, - }) -} - -// ImportImageOptions present the set of informations available for importing -// an image from a source file or the stdin. -// -// See https://goo.gl/iJkZjD for more details. -type ImportImageOptions struct { - Repository string `qs:"repo"` - Source string `qs:"fromSrc"` - Tag string `qs:"tag"` - - InputStream io.Reader `qs:"-"` - OutputStream io.Writer `qs:"-"` - RawJSONStream bool `qs:"-"` - InactivityTimeout time.Duration `qs:"-"` -} - -// ImportImage imports an image from a url, a file or stdin -// -// See https://goo.gl/iJkZjD for more details. -func (c *Client) ImportImage(opts ImportImageOptions) error { - if opts.Repository == "" { - return ErrNoSuchImage - } - if opts.Source != "-" { - opts.InputStream = nil - } - if opts.Source != "-" && !isURL(opts.Source) { - f, err := os.Open(opts.Source) - if err != nil { - return err - } - opts.InputStream = f - opts.Source = "-" - } - return c.createImage(queryString(&opts), nil, opts.InputStream, opts.OutputStream, opts.RawJSONStream, opts.InactivityTimeout) -} - -// BuildImageOptions present the set of informations available for building an -// image from a tarfile with a Dockerfile in it. -// -// For more details about the Docker building process, see -// http://goo.gl/tlPXPu. 
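A similar sketch for pulling an image with the `PullImageOptions` shown above; the repository name is a placeholder and `client` is a `*docker.Client` built as in the previous example:

```go
package example

import (
	"os"

	docker "github.com/fsouza/go-dockerclient"
)

// pullBusybox pulls busybox:latest anonymously, streaming progress to stdout.
// An empty AuthConfiguration is acceptable for unauthenticated pulls, per the
// PullImage documentation above.
func pullBusybox(client *docker.Client) error {
	return client.PullImage(docker.PullImageOptions{
		Repository:   "busybox",
		Tag:          "latest",
		OutputStream: os.Stdout,
	}, docker.AuthConfiguration{})
}
```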
-type BuildImageOptions struct { - Name string `qs:"t"` - Dockerfile string `qs:"dockerfile"` - NoCache bool `qs:"nocache"` - SuppressOutput bool `qs:"q"` - Pull bool `qs:"pull"` - RmTmpContainer bool `qs:"rm"` - ForceRmTmpContainer bool `qs:"forcerm"` - Memory int64 `qs:"memory"` - Memswap int64 `qs:"memswap"` - CPUShares int64 `qs:"cpushares"` - CPUQuota int64 `qs:"cpuquota"` - CPUPeriod int64 `qs:"cpuperiod"` - CPUSetCPUs string `qs:"cpusetcpus"` - InputStream io.Reader `qs:"-"` - OutputStream io.Writer `qs:"-"` - RawJSONStream bool `qs:"-"` - Remote string `qs:"remote"` - Auth AuthConfiguration `qs:"-"` // for older docker X-Registry-Auth header - AuthConfigs AuthConfigurations `qs:"-"` // for newer docker X-Registry-Config header - ContextDir string `qs:"-"` - Ulimits []ULimit `qs:"-"` - BuildArgs []BuildArg `qs:"-"` - InactivityTimeout time.Duration `qs:"-"` -} - -// BuildArg represents arguments that can be passed to the image when building -// it from a Dockerfile. -// -// For more details about the Docker building process, see -// http://goo.gl/tlPXPu. -type BuildArg struct { - Name string `json:"Name,omitempty" yaml:"Name,omitempty"` - Value string `json:"Value,omitempty" yaml:"Value,omitempty"` -} - -// BuildImage builds an image from a tarball's url or a Dockerfile in the input -// stream. -// -// See https://goo.gl/xySxCe for more details. -func (c *Client) BuildImage(opts BuildImageOptions) error { - if opts.OutputStream == nil { - return ErrMissingOutputStream - } - headers, err := headersWithAuth(opts.Auth, c.versionedAuthConfigs(opts.AuthConfigs)) - if err != nil { - return err - } - - if opts.Remote != "" && opts.Name == "" { - opts.Name = opts.Remote - } - if opts.InputStream != nil || opts.ContextDir != "" { - headers["Content-Type"] = "application/tar" - } else if opts.Remote == "" { - return ErrMissingRepo - } - if opts.ContextDir != "" { - if opts.InputStream != nil { - return ErrMultipleContexts - } - var err error - if opts.InputStream, err = createTarStream(opts.ContextDir, opts.Dockerfile); err != nil { - return err - } - } - - qs := queryString(&opts) - if len(opts.Ulimits) > 0 { - if b, err := json.Marshal(opts.Ulimits); err == nil { - item := url.Values(map[string][]string{}) - item.Add("ulimits", string(b)) - qs = fmt.Sprintf("%s&%s", qs, item.Encode()) - } - } - - if len(opts.BuildArgs) > 0 { - v := make(map[string]string) - for _, arg := range opts.BuildArgs { - v[arg.Name] = arg.Value - } - if b, err := json.Marshal(v); err == nil { - item := url.Values(map[string][]string{}) - item.Add("buildargs", string(b)) - qs = fmt.Sprintf("%s&%s", qs, item.Encode()) - } - } - - return c.stream("POST", fmt.Sprintf("/build?%s", qs), streamOptions{ - setRawTerminal: true, - rawJSONStream: opts.RawJSONStream, - headers: headers, - in: opts.InputStream, - stdout: opts.OutputStream, - inactivityTimeout: opts.InactivityTimeout, - }) -} - -func (c *Client) versionedAuthConfigs(authConfigs AuthConfigurations) interface{} { - if c.serverAPIVersion == nil { - c.checkAPIVersion() - } - if c.serverAPIVersion != nil && c.serverAPIVersion.GreaterThanOrEqualTo(apiVersion119) { - return AuthConfigurations119(authConfigs.Configs) - } - return authConfigs -} - -// TagImageOptions present the set of options to tag an image. -// -// See https://goo.gl/98ZzkU for more details. -type TagImageOptions struct { - Repo string - Tag string - Force bool -} - -// TagImage adds a tag to the image identified by the given name. -// -// See https://goo.gl/98ZzkU for more details. 
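Likewise, a sketch of driving `BuildImage` from a local context directory; the image tag and the directory argument are placeholders:

```go
package example

import (
	"os"

	docker "github.com/fsouza/go-dockerclient"
)

// buildFromContext builds an image from a directory that contains a Dockerfile.
// OutputStream is mandatory (ErrMissingOutputStream above), and ContextDir and
// InputStream are mutually exclusive (ErrMultipleContexts).
func buildFromContext(client *docker.Client, contextDir string) error {
	return client.BuildImage(docker.BuildImageOptions{
		Name:         "example/app:latest", // placeholder tag
		ContextDir:   contextDir,
		OutputStream: os.Stdout,
	})
}
```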
-func (c *Client) TagImage(name string, opts TagImageOptions) error { - if name == "" { - return ErrNoSuchImage - } - resp, err := c.do("POST", fmt.Sprintf("/images/"+name+"/tag?%s", - queryString(&opts)), doOptions{}) - - if err != nil { - return err - } - - defer resp.Body.Close() - - if resp.StatusCode == http.StatusNotFound { - return ErrNoSuchImage - } - - return err -} - -func isURL(u string) bool { - p, err := url.Parse(u) - if err != nil { - return false - } - return p.Scheme == "http" || p.Scheme == "https" -} - -func headersWithAuth(auths ...interface{}) (map[string]string, error) { - var headers = make(map[string]string) - - for _, auth := range auths { - switch auth.(type) { - case AuthConfiguration: - var buf bytes.Buffer - if err := json.NewEncoder(&buf).Encode(auth); err != nil { - return nil, err - } - headers["X-Registry-Auth"] = base64.URLEncoding.EncodeToString(buf.Bytes()) - case AuthConfigurations, AuthConfigurations119: - var buf bytes.Buffer - if err := json.NewEncoder(&buf).Encode(auth); err != nil { - return nil, err - } - headers["X-Registry-Config"] = base64.URLEncoding.EncodeToString(buf.Bytes()) - } - } - - return headers, nil -} - -// APIImageSearch reflect the result of a search on the Docker Hub. -// -// See https://goo.gl/AYjyrF for more details. -type APIImageSearch struct { - Description string `json:"description,omitempty" yaml:"description,omitempty"` - IsOfficial bool `json:"is_official,omitempty" yaml:"is_official,omitempty"` - IsAutomated bool `json:"is_automated,omitempty" yaml:"is_automated,omitempty"` - Name string `json:"name,omitempty" yaml:"name,omitempty"` - StarCount int `json:"star_count,omitempty" yaml:"star_count,omitempty"` -} - -// SearchImages search the docker hub with a specific given term. -// -// See https://goo.gl/AYjyrF for more details. -func (c *Client) SearchImages(term string) ([]APIImageSearch, error) { - resp, err := c.do("GET", "/images/search?term="+term, doOptions{}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - var searchResult []APIImageSearch - if err := json.NewDecoder(resp.Body).Decode(&searchResult); err != nil { - return nil, err - } - return searchResult, nil -} - -// SearchImagesEx search the docker hub with a specific given term and authentication. -// -// See https://goo.gl/AYjyrF for more details. -func (c *Client) SearchImagesEx(term string, auth AuthConfiguration) ([]APIImageSearch, error) { - headers, err := headersWithAuth(auth) - if err != nil { - return nil, err - } - - resp, err := c.do("GET", "/images/search?term="+term, doOptions{ - headers: headers, - }) - if err != nil { - return nil, err - } - - defer resp.Body.Close() - - var searchResult []APIImageSearch - if err := json.NewDecoder(resp.Body).Decode(&searchResult); err != nil { - return nil, err - } - - return searchResult, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/misc.go b/vendor/github.com/fsouza/go-dockerclient/misc.go deleted file mode 100644 index ce9e9750b08..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/misc.go +++ /dev/null @@ -1,124 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -import ( - "encoding/json" - "strings" -) - -// Version returns version information about the docker server. -// -// See https://goo.gl/ND9R8L for more details. 
-func (c *Client) Version() (*Env, error) { - resp, err := c.do("GET", "/version", doOptions{}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - var env Env - if err := env.Decode(resp.Body); err != nil { - return nil, err - } - return &env, nil -} - -// DockerInfo contains information about the Docker server -// -// See https://goo.gl/bHUoz9 for more details. -type DockerInfo struct { - ID string - Containers int - ContainersRunning int - ContainersPaused int - ContainersStopped int - Images int - Driver string - DriverStatus [][2]string - SystemStatus [][2]string - Plugins PluginsInfo - MemoryLimit bool - SwapLimit bool - KernelMemory bool - CPUCfsPeriod bool `json:"CpuCfsPeriod"` - CPUCfsQuota bool `json:"CpuCfsQuota"` - CPUShares bool - CPUSet bool - IPv4Forwarding bool - BridgeNfIptables bool - BridgeNfIP6tables bool `json:"BridgeNfIp6tables"` - Debug bool - NFd int - OomKillDisable bool - NGoroutines int - SystemTime string - ExecutionDriver string - LoggingDriver string - CgroupDriver string - NEventsListener int - KernelVersion string - OperatingSystem string - OSType string - Architecture string - IndexServerAddress string - NCPU int - MemTotal int64 - DockerRootDir string - HTTPProxy string `json:"HttpProxy"` - HTTPSProxy string `json:"HttpsProxy"` - NoProxy string - Name string - Labels []string - ExperimentalBuild bool - ServerVersion string - ClusterStore string - ClusterAdvertise string -} - -// PluginsInfo is a struct with the plugins registered with the docker daemon -// -// for more information, see: https://goo.gl/bHUoz9 -type PluginsInfo struct { - // List of Volume plugins registered - Volume []string - // List of Network plugins registered - Network []string - // List of Authorization plugins registered - Authorization []string -} - -// Info returns system-wide information about the Docker server. -// -// See https://goo.gl/ElTHi2 for more details. -func (c *Client) Info() (*DockerInfo, error) { - resp, err := c.do("GET", "/info", doOptions{}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - var info DockerInfo - if err := json.NewDecoder(resp.Body).Decode(&info); err != nil { - return nil, err - } - return &info, nil -} - -// ParseRepositoryTag gets the name of the repository and returns it splitted -// in two parts: the repository and the tag. -// -// Some examples: -// -// localhost.localdomain:5000/samalba/hipache:latest -> localhost.localdomain:5000/samalba/hipache, latest -// localhost.localdomain:5000/samalba/hipache -> localhost.localdomain:5000/samalba/hipache, "" -func ParseRepositoryTag(repoTag string) (repository string, tag string) { - n := strings.LastIndex(repoTag, ":") - if n < 0 { - return repoTag, "" - } - if tag := repoTag[n+1:]; !strings.Contains(tag, "/") { - return repoTag[:n], tag - } - return repoTag, "" -} diff --git a/vendor/github.com/fsouza/go-dockerclient/network.go b/vendor/github.com/fsouza/go-dockerclient/network.go deleted file mode 100644 index a6812495b72..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/network.go +++ /dev/null @@ -1,273 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -import ( - "bytes" - "encoding/json" - "errors" - "fmt" - "net/http" -) - -// ErrNetworkAlreadyExists is the error returned by CreateNetwork when the -// network already exists. 
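`ParseRepositoryTag` above is a pure helper, so a usage sketch needs no client; the input mirrors the examples in its own comment:

```go
package example

import (
	"fmt"

	docker "github.com/fsouza/go-dockerclient"
)

func splitExample() {
	repo, tag := docker.ParseRepositoryTag("localhost.localdomain:5000/samalba/hipache:latest")
	fmt.Println(repo) // localhost.localdomain:5000/samalba/hipache
	fmt.Println(tag)  // latest
}
```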
-var ErrNetworkAlreadyExists = errors.New("network already exists") - -// Network represents a network. -// -// See https://goo.gl/6GugX3 for more details. -type Network struct { - Name string - ID string `json:"Id"` - Scope string - Driver string - IPAM IPAMOptions - Containers map[string]Endpoint - Options map[string]string - Internal bool - EnableIPv6 bool `json:"EnableIPv6"` -} - -// Endpoint contains network resources allocated and used for a container in a network -// -// See https://goo.gl/6GugX3 for more details. -type Endpoint struct { - Name string - ID string `json:"EndpointID"` - MacAddress string - IPv4Address string - IPv6Address string -} - -// ListNetworks returns all networks. -// -// See https://goo.gl/6GugX3 for more details. -func (c *Client) ListNetworks() ([]Network, error) { - resp, err := c.do("GET", "/networks", doOptions{}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - var networks []Network - if err := json.NewDecoder(resp.Body).Decode(&networks); err != nil { - return nil, err - } - return networks, nil -} - -// NetworkFilterOpts is an aggregation of key=value that Docker -// uses to filter networks -type NetworkFilterOpts map[string]map[string]bool - -// FilteredListNetworks returns all networks with the filters applied -// -// See goo.gl/zd2mx4 for more details. -func (c *Client) FilteredListNetworks(opts NetworkFilterOpts) ([]Network, error) { - params := bytes.NewBuffer(nil) - if err := json.NewEncoder(params).Encode(&opts); err != nil { - return nil, err - } - path := "/networks?filters=" + params.String() - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - var networks []Network - if err := json.NewDecoder(resp.Body).Decode(&networks); err != nil { - return nil, err - } - return networks, nil -} - -// NetworkInfo returns information about a network by its ID. -// -// See https://goo.gl/6GugX3 for more details. -func (c *Client) NetworkInfo(id string) (*Network, error) { - path := "/networks/" + id - resp, err := c.do("GET", path, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, &NoSuchNetwork{ID: id} - } - return nil, err - } - defer resp.Body.Close() - var network Network - if err := json.NewDecoder(resp.Body).Decode(&network); err != nil { - return nil, err - } - return &network, nil -} - -// CreateNetworkOptions specify parameters to the CreateNetwork function and -// (for now) is the expected body of the "create network" http request message -// -// See https://goo.gl/6GugX3 for more details. -type CreateNetworkOptions struct { - Name string `json:"Name"` - CheckDuplicate bool `json:"CheckDuplicate"` - Driver string `json:"Driver"` - IPAM IPAMOptions `json:"IPAM"` - Options map[string]interface{} `json:"Options"` - Internal bool `json:"Internal"` - EnableIPv6 bool `json:"EnableIPv6"` -} - -// IPAMOptions controls IP Address Management when creating a network -// -// See https://goo.gl/T8kRVH for more details. -type IPAMOptions struct { - Driver string `json:"Driver"` - Config []IPAMConfig `json:"Config"` -} - -// IPAMConfig represents IPAM configurations -// -// See https://goo.gl/T8kRVH for more details. 
-type IPAMConfig struct { - Subnet string `json:",omitempty"` - IPRange string `json:",omitempty"` - Gateway string `json:",omitempty"` - AuxAddress map[string]string `json:"AuxiliaryAddresses,omitempty"` -} - -// CreateNetwork creates a new network, returning the network instance, -// or an error in case of failure. -// -// See https://goo.gl/6GugX3 for more details. -func (c *Client) CreateNetwork(opts CreateNetworkOptions) (*Network, error) { - resp, err := c.do( - "POST", - "/networks/create", - doOptions{ - data: opts, - }, - ) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusConflict { - return nil, ErrNetworkAlreadyExists - } - return nil, err - } - defer resp.Body.Close() - - type createNetworkResponse struct { - ID string - } - var ( - network Network - cnr createNetworkResponse - ) - if err := json.NewDecoder(resp.Body).Decode(&cnr); err != nil { - return nil, err - } - - network.Name = opts.Name - network.ID = cnr.ID - network.Driver = opts.Driver - - return &network, nil -} - -// RemoveNetwork removes a network or returns an error in case of failure. -// -// See https://goo.gl/6GugX3 for more details. -func (c *Client) RemoveNetwork(id string) error { - resp, err := c.do("DELETE", "/networks/"+id, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchNetwork{ID: id} - } - return err - } - resp.Body.Close() - return nil -} - -// NetworkConnectionOptions specify parameters to the ConnectNetwork and -// DisconnectNetwork function. -// -// See https://goo.gl/RV7BJU for more details. -type NetworkConnectionOptions struct { - Container string - - // EndpointConfig is only applicable to the ConnectNetwork call - EndpointConfig *EndpointConfig `json:"EndpointConfig,omitempty"` - - // Force is only applicable to the DisconnectNetwork call - Force bool -} - -// EndpointConfig stores network endpoint details -// -// See https://goo.gl/RV7BJU for more details. -type EndpointConfig struct { - IPAMConfig *EndpointIPAMConfig - Links []string - Aliases []string -} - -// EndpointIPAMConfig represents IPAM configurations for an -// endpoint -// -// See https://goo.gl/RV7BJU for more details. -type EndpointIPAMConfig struct { - IPv4Address string `json:",omitempty"` - IPv6Address string `json:",omitempty"` -} - -// ConnectNetwork adds a container to a network or returns an error in case of -// failure. -// -// See https://goo.gl/6GugX3 for more details. -func (c *Client) ConnectNetwork(id string, opts NetworkConnectionOptions) error { - resp, err := c.do("POST", "/networks/"+id+"/connect", doOptions{data: opts}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchNetworkOrContainer{NetworkID: id, ContainerID: opts.Container} - } - return err - } - resp.Body.Close() - return nil -} - -// DisconnectNetwork removes a container from a network or returns an error in -// case of failure. -// -// See https://goo.gl/6GugX3 for more details. -func (c *Client) DisconnectNetwork(id string, opts NetworkConnectionOptions) error { - resp, err := c.do("POST", "/networks/"+id+"/disconnect", doOptions{data: opts}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return &NoSuchNetworkOrContainer{NetworkID: id, ContainerID: opts.Container} - } - return err - } - resp.Body.Close() - return nil -} - -// NoSuchNetwork is the error returned when a given network does not exist. 
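And a sketch of creating a user-defined network with the option types shown above; the network name, subnet, and IPAM driver value are placeholders:

```go
package example

import (
	docker "github.com/fsouza/go-dockerclient"
)

// createAppNetwork creates a bridge network with a fixed subnet.
func createAppNetwork(client *docker.Client) (*docker.Network, error) {
	return client.CreateNetwork(docker.CreateNetworkOptions{
		Name:           "app-net", // placeholder
		Driver:         "bridge",
		CheckDuplicate: true,
		IPAM: docker.IPAMOptions{
			Driver: "default",
			Config: []docker.IPAMConfig{{Subnet: "172.28.0.0/16"}},
		},
	})
}
```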
-type NoSuchNetwork struct { - ID string -} - -func (err *NoSuchNetwork) Error() string { - return fmt.Sprintf("No such network: %s", err.ID) -} - -// NoSuchNetworkOrContainer is the error returned when a given network or -// container does not exist. -type NoSuchNetworkOrContainer struct { - NetworkID string - ContainerID string -} - -func (err *NoSuchNetworkOrContainer) Error() string { - return fmt.Sprintf("No such network (%s) or container (%s)", err.NetworkID, err.ContainerID) -} diff --git a/vendor/github.com/fsouza/go-dockerclient/signal.go b/vendor/github.com/fsouza/go-dockerclient/signal.go deleted file mode 100644 index 16aa00388fd..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/signal.go +++ /dev/null @@ -1,49 +0,0 @@ -// Copyright 2014 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -// Signal represents a signal that can be send to the container on -// KillContainer call. -type Signal int - -// These values represent all signals available on Linux, where containers will -// be running. -const ( - SIGABRT = Signal(0x6) - SIGALRM = Signal(0xe) - SIGBUS = Signal(0x7) - SIGCHLD = Signal(0x11) - SIGCLD = Signal(0x11) - SIGCONT = Signal(0x12) - SIGFPE = Signal(0x8) - SIGHUP = Signal(0x1) - SIGILL = Signal(0x4) - SIGINT = Signal(0x2) - SIGIO = Signal(0x1d) - SIGIOT = Signal(0x6) - SIGKILL = Signal(0x9) - SIGPIPE = Signal(0xd) - SIGPOLL = Signal(0x1d) - SIGPROF = Signal(0x1b) - SIGPWR = Signal(0x1e) - SIGQUIT = Signal(0x3) - SIGSEGV = Signal(0xb) - SIGSTKFLT = Signal(0x10) - SIGSTOP = Signal(0x13) - SIGSYS = Signal(0x1f) - SIGTERM = Signal(0xf) - SIGTRAP = Signal(0x5) - SIGTSTP = Signal(0x14) - SIGTTIN = Signal(0x15) - SIGTTOU = Signal(0x16) - SIGUNUSED = Signal(0x1f) - SIGURG = Signal(0x17) - SIGUSR1 = Signal(0xa) - SIGUSR2 = Signal(0xc) - SIGVTALRM = Signal(0x1a) - SIGWINCH = Signal(0x1c) - SIGXCPU = Signal(0x18) - SIGXFSZ = Signal(0x19) -) diff --git a/vendor/github.com/fsouza/go-dockerclient/tar.go b/vendor/github.com/fsouza/go-dockerclient/tar.go deleted file mode 100644 index 48042cbda05..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/tar.go +++ /dev/null @@ -1,117 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -import ( - "fmt" - "io" - "io/ioutil" - "os" - "path" - "path/filepath" - "strings" - - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils" -) - -func createTarStream(srcPath, dockerfilePath string) (io.ReadCloser, error) { - excludes, err := parseDockerignore(srcPath) - if err != nil { - return nil, err - } - - includes := []string{"."} - - // If .dockerignore mentions .dockerignore or the Dockerfile - // then make sure we send both files over to the daemon - // because Dockerfile is, obviously, needed no matter what, and - // .dockerignore is needed to know if either one needs to be - // removed. The deamon will remove them for us, if needed, after it - // parses the Dockerfile. 
- // - // https://github.com/docker/docker/issues/8330 - // - forceIncludeFiles := []string{".dockerignore", dockerfilePath} - - for _, includeFile := range forceIncludeFiles { - if includeFile == "" { - continue - } - keepThem, err := fileutils.Matches(includeFile, excludes) - if err != nil { - return nil, fmt.Errorf("cannot match .dockerfile: '%s', error: %s", includeFile, err) - } - if keepThem { - includes = append(includes, includeFile) - } - } - - if err := validateContextDirectory(srcPath, excludes); err != nil { - return nil, err - } - tarOpts := &archive.TarOptions{ - ExcludePatterns: excludes, - IncludeFiles: includes, - Compression: archive.Uncompressed, - NoLchown: true, - } - return archive.TarWithOptions(srcPath, tarOpts) -} - -// validateContextDirectory checks if all the contents of the directory -// can be read and returns an error if some files can't be read. -// Symlinks which point to non-existing files don't trigger an error -func validateContextDirectory(srcPath string, excludes []string) error { - return filepath.Walk(filepath.Join(srcPath, "."), func(filePath string, f os.FileInfo, err error) error { - // skip this directory/file if it's not in the path, it won't get added to the context - if relFilePath, err := filepath.Rel(srcPath, filePath); err != nil { - return err - } else if skip, err := fileutils.Matches(relFilePath, excludes); err != nil { - return err - } else if skip { - if f.IsDir() { - return filepath.SkipDir - } - return nil - } - - if err != nil { - if os.IsPermission(err) { - return fmt.Errorf("can't stat '%s'", filePath) - } - if os.IsNotExist(err) { - return nil - } - return err - } - - // skip checking if symlinks point to non-existing files, such symlinks can be useful - // also skip named pipes, because they hanging on open - if f.Mode()&(os.ModeSymlink|os.ModeNamedPipe) != 0 { - return nil - } - - if !f.IsDir() { - currentFile, err := os.Open(filePath) - if err != nil && os.IsPermission(err) { - return fmt.Errorf("no permission to read from '%s'", filePath) - } - currentFile.Close() - } - return nil - }) -} - -func parseDockerignore(root string) ([]string, error) { - var excludes []string - ignore, err := ioutil.ReadFile(path.Join(root, ".dockerignore")) - if err != nil && !os.IsNotExist(err) { - return excludes, fmt.Errorf("error reading .dockerignore: '%s'", err) - } - excludes = strings.Split(string(ignore), "\n") - - return excludes, nil -} diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/Dockerfile b/vendor/github.com/fsouza/go-dockerclient/testing/data/Dockerfile deleted file mode 100644 index 0948dcfa8cc..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/testing/data/Dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -# this file describes how to build tsuru python image -# to run it: -# 1- install docker -# 2- run: $ docker build -t tsuru/python https://raw.github.com/tsuru/basebuilder/master/python/Dockerfile - -from base:ubuntu-quantal -run apt-get install wget -y --force-yes -run wget http://github.com/tsuru/basebuilder/tarball/master -O basebuilder.tar.gz --no-check-certificate -run mkdir /var/lib/tsuru -run tar -xvf basebuilder.tar.gz -C /var/lib/tsuru --strip 1 -run cp /var/lib/tsuru/python/deploy /var/lib/tsuru -run cp /var/lib/tsuru/base/restart /var/lib/tsuru -run cp /var/lib/tsuru/base/start /var/lib/tsuru -run /var/lib/tsuru/base/install -run /var/lib/tsuru/base/setup diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/barfile 
b/vendor/github.com/fsouza/go-dockerclient/testing/data/barfile deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/ca.pem b/vendor/github.com/fsouza/go-dockerclient/testing/data/ca.pem deleted file mode 100644 index 8e38bba13c6..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/testing/data/ca.pem +++ /dev/null @@ -1,18 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIC1TCCAb+gAwIBAgIQJ9MsNxrUxumNbAytGi3GEDALBgkqhkiG9w0BAQswFjEU -MBIGA1UEChMLQm9vdDJEb2NrZXIwHhcNMTQxMDE2MjAyMTM4WhcNMTcwOTMwMjAy -MTM4WjAWMRQwEgYDVQQKEwtCb290MkRvY2tlcjCCASIwDQYJKoZIhvcNAQEBBQAD -ggEPADCCAQoCggEBALpFCSARjG+5yXoqr7UMzuE0df7RRZfeRZI06lJ02ZqV4Iii -rgL7ML9yPxX50NbLnjiilSDTUhnyocYFItokzUzz8qpX/nlYhuN2Iqwh4d0aWS8z -f5y248F+H1z+HY2W8NPl/6DVlVwYaNW1/k+RPMlHS0INLR6j+3Ievew7RNE0NnM2 -znELW6NetekDt3GUcz0Z95vDUDfdPnIk1eIFMmYvLxZh23xOca4Q37a3S8F3d+dN -+OOpwjdgY9Qme0NQUaXpgp58jWuQfB8q7mZrdnLlLqRa8gx1HeDSotX7UmWtWPkb -vd9EdlKLYw5PVpxMV1rkwf2t4TdgD5NfkpXlXkkCAwEAAaMjMCEwDgYDVR0PAQH/ -BAQDAgCkMA8GA1UdEwEB/wQFMAMBAf8wCwYJKoZIhvcNAQELA4IBAQBxYjHVSKqE -MJw7CW0GddesULtXXVWGJuZdWJLQlPvPMfIfjIvlcZyS4cdVNiQ3sREFIZz8TpII -CT0/Pg3sgv/FcOQe1CN0xZYZcyiAZHK1z0fJQq2qVpdv7+tJcjI2vvU6NI24iQCo -W1wz25trJz9QbdB2MRLMjyz7TSWuafztIvcfEzaIdQ0Whqund/cSuPGQx5IwF83F -rvlkOyJSH2+VIEBTCIuykJeL0DLTt8cePBQR5L1ISXb4RUMK9ZtqRscBRv8sn7o2 -ixG3wtL0gYF4xLtsQWVxI3iFVrU3WzOH/3c5shVRkWBd+AQRSwCJI4mKH7penJCF -i3/zzlkvOnjV ------END CERTIFICATE----- diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/cert.pem b/vendor/github.com/fsouza/go-dockerclient/testing/data/cert.pem deleted file mode 100644 index 5e7244b24ff..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/testing/data/cert.pem +++ /dev/null @@ -1,18 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIC6DCCAdKgAwIBAgIRANO6ymxQAjp66KmEka1G6b0wCwYJKoZIhvcNAQELMBYx -FDASBgNVBAoTC0Jvb3QyRG9ja2VyMB4XDTE0MTAxNjIwMjE1MloXDTE3MDkzMDIw -MjE1MlowFjEUMBIGA1UEChMLQm9vdDJEb2NrZXIwggEiMA0GCSqGSIb3DQEBAQUA -A4IBDwAwggEKAoIBAQDGA1mAhSOpZspD1dpZ7qVEQrIJw4Xo8252jHaORnEdDiFm -b6brEmr6jw8t4P3IGxbqBc/TqRV+SSXxwYEVvfpeQKH+SmqStoMNtD3Ura161az4 -V0BcxMtSlsUGpoz+//QCAq8qiaxMwgiyc5253mkQm88anj2cNt7xbewiu/KFWuf7 -BVpNK1+ltpJmlukfcj/G+I1bw7j1KxBjDrFqe5cyDuuZcDL2tmUXP/ZWDyXwSv+H -AOckqn44z6aXlBkVvOXDBZJqY76d/vWVDNCuZeXRnqlhP3t1kH4V0RQXo+JD2tgt -JgdU0unzyoFOSWNUBPm73tqmjUGGAmGHBmeegJr/AgMBAAGjNTAzMA4GA1UdDwEB -/wQEAwIAgDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMAsGCSqG -SIb3DQEBCwOCAQEABVTWl5SmBP+j5He5bQsgnIXjviSKqe40/10V4LJAOmilycRF -zLrzM+YMwfjg6PLIs8CldAMWHw9y9ktZY4MxkgCktaiaN/QmMTMwFWEcN4wy5IpM -U5l93eAg7xsnY430h3QBBADujX4wdF3fs8rSL8zAAQFL0ihurwU124K3yXKsrwpb -CiVUGfIN4sPwjy8Ws9oxHFDC9/P8lgjHZ1nBIf8KSHnMzlxDGj7isQfhtH+7mcCL -cM1qO2NirS2v7uaEPPY+MJstAz+W7EJCW9dfMSmHna2SDC37Xkin7uEY9z+qaKFL -8d/XxOB/L8Ucy8VZhdsv0dsBq5KfJntITM0ksQ== ------END CERTIFICATE----- diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/container.tar b/vendor/github.com/fsouza/go-dockerclient/testing/data/container.tar deleted file mode 100644 index e4b066e3b6d..00000000000 Binary files a/vendor/github.com/fsouza/go-dockerclient/testing/data/container.tar and /dev/null differ diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/dockerfile.tar b/vendor/github.com/fsouza/go-dockerclient/testing/data/dockerfile.tar deleted file mode 100644 index 32c9ce64704..00000000000 Binary files 
a/vendor/github.com/fsouza/go-dockerclient/testing/data/dockerfile.tar and /dev/null differ diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/foofile b/vendor/github.com/fsouza/go-dockerclient/testing/data/foofile deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/key.pem b/vendor/github.com/fsouza/go-dockerclient/testing/data/key.pem deleted file mode 100644 index a9346bcf45a..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/testing/data/key.pem +++ /dev/null @@ -1,27 +0,0 @@ ------BEGIN RSA PRIVATE KEY----- -MIIEowIBAAKCAQEAxgNZgIUjqWbKQ9XaWe6lREKyCcOF6PNudox2jkZxHQ4hZm+m -6xJq+o8PLeD9yBsW6gXP06kVfkkl8cGBFb36XkCh/kpqkraDDbQ91K2tetWs+FdA -XMTLUpbFBqaM/v/0AgKvKomsTMIIsnOdud5pEJvPGp49nDbe8W3sIrvyhVrn+wVa -TStfpbaSZpbpH3I/xviNW8O49SsQYw6xanuXMg7rmXAy9rZlFz/2Vg8l8Er/hwDn -JKp+OM+ml5QZFbzlwwWSamO+nf71lQzQrmXl0Z6pYT97dZB+FdEUF6PiQ9rYLSYH -VNLp88qBTkljVAT5u97apo1BhgJhhwZnnoCa/wIDAQABAoIBAQCaGy9EC9pmU95l -DwGh7k5nIrUnTilg1FwLHWSDdCVCZKXv8ENrPelOWZqJrUo1u4eI2L8XTsewgkNq -tJu/DRzWz9yDaO0qg6rZNobMh+K076lvmZA44twOydJLS8H+D7ua+PXU2FLlZjmY -kMyXRJZmW6zCXZc7haTbJx6ZJccoquk/DkS4FcFurJP177u1YrWS9TTw9kensUtU -jQ63uf56UTN1i+0+Rxl7OW1TZlqwlri5I4njg5249+FxwwHzIq8+l7zD7K9pl8c/ -nG1HuulvU2bVlDlRdyslMPAH34vw9Sku1BD8furrJLr1na5lRSLKJODEaIPEsLwv -CdEUwP9JAoGBAO76ZW80RyNB2fA+wbTq70Sr8CwrXxYemXrez5LKDC7SsohKFCPE -IedpO/n+nmymiiJvMm874EExoG6BVrbkWkeb+2vinEfOQNlDMsDx7WLjPekP3t6i -rXHO3CjFooVFq2z3mZa/Nc5NZqu8fNWNCKJxZDJphdoj6sORNJIUvZVjAoGBANQd -++J+ITcu3/+A6JrGcgLunBFQYPqkiItk0J4QKYKuX5ik9rWcQDN8TTtfW2mDuiQ4 -NrCwuVPq1V1kB16JzH017SsYLo9g8I20YjnBZge9pKTeUaLVTb3C50LW8FBylop0 -Bnm597dNbtSjphjoTMg0XyC19o3Esf2YeWG0QNS1AoGAWWDfFRNJU99qIldmXULM -0DM6NVrXSk+ReYnhunXEzrJQwXZrR+EwCPurydk36Uz0NuK9yypquhdUeF/5TZfk -SAoHo5byekyipl9imRUigqyY2BTudvgCxKDoaHtaSFwBPFTyZZYICquaLbrmOXxw -8UhVgCFFRYvPXuts7QHC0h8CgYBWEvy9gfU0kV7wLX02IUTuj6jhFb7ktpN6DSTi -nyhZES1VoctDEu6ydcRZTW6ouH12aSE4Pd5WgTqntQmQgVZrkNB25k8ue2Xh+srJ -KQOgLIJ9LIHwE6KCWG7DnrjRzE3uTPq7to0g4tkQjH/AJ7PQof/gJDayfJjFkXPg -A+cy6QKBgEPbKpiqscm03gT2QanBut5pg4dqPOxp0SlErA3kSFNTRK3oYBQPC+LH -qA5nD5brdkeNBB58Rll8Zpzxiff50bcvLP/7/Sb3NjaXFTEY0gVbdRof3n6N0YP3 -Hu5XDNJ9RNkNzE5RIG1g86KE+aKlcrKMaigqAiuIy2PSnjkQeGk8 ------END RSA PRIVATE KEY----- diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/server.pem b/vendor/github.com/fsouza/go-dockerclient/testing/data/server.pem deleted file mode 100644 index 89cc445e1ba..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/testing/data/server.pem +++ /dev/null @@ -1,18 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIC/DCCAeagAwIBAgIQMUILcXtvmSOK63zEBo0VXzALBgkqhkiG9w0BAQswFjEU -MBIGA1UEChMLQm9vdDJEb2NrZXIwHhcNMTQxMDE2MjAyMTQ2WhcNMTcwOTMwMjAy -MTQ2WjAWMRQwEgYDVQQKEwtCb290MkRvY2tlcjCCASIwDQYJKoZIhvcNAQEBBQAD -ggEPADCCAQoCggEBANxUOUhNnqFnrTlLsBYzfFRZWQo268l+4K4lOJCVbfDonP3g -Mz0vGi9fcyFqEWSA8Y+ShXna625HTnReCwFdsu0861qCIq7v95hFFCyOe0iIxpd0 -AKLnl90d+1vonE7andgFgoobbTiMly4UK4H6z8D148fFNIihoteOG3PIF89TFxP7 -CJ/3wXnx/IKpdlO8PAnub3tBPJHvGDj7KORLy4IBxRX5VBAdfGNybE66fcrehEva -rLA4m9pgiaR/Nnr9FdKhPyqYdjflLNvzydxNvMIV4M0hFlhXmYvpMjA5/XsTnsyV -t9JHJa5Upwqsbne08t7rsm7liZNxZlko8xPOTQcCAwEAAaNKMEgwDgYDVR0PAQH/ -BAQDAgCgMAwGA1UdEwEB/wQCMAAwKAYDVR0RBCEwH4ILYm9vdDJkb2NrZXKHBH8A -AAGHBAoAAg+HBMCoO2cwCwYJKoZIhvcNAQELA4IBAQAYoYcDkDWkl73FZ0WnPmAj -LiF7HU95Qg3KyEpFsAJeShSLPPbQntmwhdekEzY4tQ3eKQB/+zHFjzsCr/lmDUmH -Ea/ryQ17C+jyH+Ykg0IWW6L6veZhvRDg6Z9focVtPVBRxPTqC/Qhb54blWRASV+W 
-UreMuXQ5+1dQptAM7ixOeLVHjBi/bd9TL3jvwBVCr9QedteMjjK4TCF9Tbcou+MF -2w3OJJZMDhcD+YwoK9uJDqlKmcTm/vVMbSsp/pTMcnQ7jxCeR8/XyX+VwTZwaHAa -o92Q/eg3THAiWhvyT/SzyH9dHHBAyXynUwGCggKawHktfvW4QXRPuLxLrJ7iB5cy ------END CERTIFICATE----- diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/data/serverkey.pem b/vendor/github.com/fsouza/go-dockerclient/testing/data/serverkey.pem deleted file mode 100644 index c897e5da550..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/testing/data/serverkey.pem +++ /dev/null @@ -1,27 +0,0 @@ ------BEGIN RSA PRIVATE KEY----- -MIIEoAIBAAKCAQEA3FQ5SE2eoWetOUuwFjN8VFlZCjbryX7griU4kJVt8Oic/eAz -PS8aL19zIWoRZIDxj5KFedrrbkdOdF4LAV2y7TzrWoIiru/3mEUULI57SIjGl3QA -oueX3R37W+icTtqd2AWCihttOIyXLhQrgfrPwPXjx8U0iKGi144bc8gXz1MXE/sI -n/fBefH8gql2U7w8Ce5ve0E8ke8YOPso5EvLggHFFflUEB18Y3JsTrp9yt6ES9qs -sDib2mCJpH82ev0V0qE/Kph2N+Us2/PJ3E28whXgzSEWWFeZi+kyMDn9exOezJW3 -0kclrlSnCqxud7Ty3uuybuWJk3FmWSjzE85NBwIDAQABAoIBAG0ak+cW8LeShHf7 -3+2Of0GxoOLrAWWdG5uAuPr31CJYve0FybnBimDtDjD8ujIfm/7xmoEWBEFutA3x -x9dcU88gvJbsHEqub9gKVQwfXjMz78tt2SbSMiR/xUnk7QorPcCMMfE71aEMFYzu -1gCed6Rg3vO81t/V0rKVH0j9S7UQz5v/oX15eVDV5LOqyCHwAi6K0eXXbqnbI0TH -SOQ/nexM2msVXWbO9t6ra6f5V7FXziDK5Xi+rPxRbX9mkrDzxDAevfuRqYBx5vtL -W2Q2hKjUAHFgXFniNSZBS7dCdAtz0el/3ct+cNmpuTMhhs7M6wC1CuYiZ/DxLiFh -Si73VckCgYEA+/ceh3+VjtQ0rgEw8sD9bqYEA8IaBiObjneIoFnKBYRG7yZd8JMm -HD4M/aQ1qhcRLPN7GR03YQULgQJURbKSjJHnhfTXHyeHC3NN4gMVHQXewu2MHCh6 -7FCQ9CfK0KcYLgegVVvL3PrF3hyWGnmTu+G0UkDQRYVnaNrB7snrW6UCgYEA39tq -+MCQdu0moJ5szSZf02undg9EeW6isk9qzi7TId3/MLci2eH7PEnipipPUK3+DERq -aba0y0TKgBR2EXvXLFJA/+kfdo2loIEHOfox85HVfxgUaFRti63ZI0uF8D0QT2Yy -oJal+RFghVoSnv4LjhRKEPbIkScTXGjdK+7wFjsCfz79iKRXQQx0ALd/lL0bgkAn -QNmvrNHcFQeI2p8700WNzC39aX67SsvEt3qxkrjzC1gxhpTAuReIK1gVPPwvqHN8 -BmV20FD5kMlMCix2mNCopwgUWvKvLAvoGFTxncKMA39+aJbuXAjiqJTekKgNvOE7 -i9kEWw0GTNPp3JHV6QECgYAPwb0M11kT1euDIMOdyRazpf86kyaJuZzgGjD1ZFxe -JOcigbGFTp/FhZnbglzk2+pm6KXo3QBq0mPCki4hWusxZnTGzpz1VlETNCHTFeZQ -M7KoaIR/N3oie9Et59H8r/+m5xWnMhNqratyl316DX24uXrhKM3DUdHODl+LCR2D -IwKBgE1MbHuwolUPEw3HeO4R7NMFVTFei7E/fpUsimPfArGg8UydwvloNT1myJos -N2JzfGGjN2KPVcBk9fOs71mJ6VcK3C3g5JIccplk6h9VNaw55+zdQvKPTzoBoTvy -A+Fwx2AlF61KeRF87DL2YTRJ6B9MHmWgf7+GVZOxomLgEAcZ ------END RSA PRIVATE KEY----- diff --git a/vendor/github.com/fsouza/go-dockerclient/testing/server.go b/vendor/github.com/fsouza/go-dockerclient/testing/server.go deleted file mode 100644 index 7b65a8c3d5c..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/testing/server.go +++ /dev/null @@ -1,1334 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Package testing provides a fake implementation of the Docker API, useful for -// testing purpose. -package testing - -import ( - "archive/tar" - "crypto/rand" - "encoding/json" - "errors" - "fmt" - "io/ioutil" - mathrand "math/rand" - "net" - "net/http" - "regexp" - "strconv" - "strings" - "sync" - "time" - - "github.com/fsouza/go-dockerclient" - "github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy" - "github.com/fsouza/go-dockerclient/external/github.com/gorilla/mux" -) - -var nameRegexp = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_.-]+$`) - -// DockerServer represents a programmable, concurrent (not much), HTTP server -// implementing a fake version of the Docker remote API. 
-// -// It can used in standalone mode, listening for connections or as an arbitrary -// HTTP handler. -// -// For more details on the remote API, check http://goo.gl/G3plxW. -type DockerServer struct { - containers []*docker.Container - uploadedFiles map[string]string - execs []*docker.ExecInspect - execMut sync.RWMutex - cMut sync.RWMutex - images []docker.Image - iMut sync.RWMutex - imgIDs map[string]string - networks []*docker.Network - netMut sync.RWMutex - listener net.Listener - mux *mux.Router - hook func(*http.Request) - failures map[string]string - multiFailures []map[string]string - execCallbacks map[string]func() - statsCallbacks map[string]func(string) docker.Stats - customHandlers map[string]http.Handler - handlerMutex sync.RWMutex - cChan chan<- *docker.Container - volStore map[string]*volumeCounter - volMut sync.RWMutex -} - -type volumeCounter struct { - volume docker.Volume - count int -} - -// NewServer returns a new instance of the fake server, in standalone mode. Use -// the method URL to get the URL of the server. -// -// It receives the bind address (use 127.0.0.1:0 for getting an available port -// on the host), a channel of containers and a hook function, that will be -// called on every request. -// -// The fake server will send containers in the channel whenever the container -// changes its state, via the HTTP API (i.e.: create, start and stop). This -// channel may be nil, which means that the server won't notify on state -// changes. -func NewServer(bind string, containerChan chan<- *docker.Container, hook func(*http.Request)) (*DockerServer, error) { - listener, err := net.Listen("tcp", bind) - if err != nil { - return nil, err - } - server := DockerServer{ - listener: listener, - imgIDs: make(map[string]string), - hook: hook, - failures: make(map[string]string), - execCallbacks: make(map[string]func()), - statsCallbacks: make(map[string]func(string) docker.Stats), - customHandlers: make(map[string]http.Handler), - uploadedFiles: make(map[string]string), - cChan: containerChan, - } - server.buildMuxer() - go http.Serve(listener, &server) - return &server, nil -} - -func (s *DockerServer) notify(container *docker.Container) { - if s.cChan != nil { - s.cChan <- container - } -} - -func (s *DockerServer) buildMuxer() { - s.mux = mux.NewRouter() - s.mux.Path("/commit").Methods("POST").HandlerFunc(s.handlerWrapper(s.commitContainer)) - s.mux.Path("/containers/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.listContainers)) - s.mux.Path("/containers/create").Methods("POST").HandlerFunc(s.handlerWrapper(s.createContainer)) - s.mux.Path("/containers/{id:.*}/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.inspectContainer)) - s.mux.Path("/containers/{id:.*}/rename").Methods("POST").HandlerFunc(s.handlerWrapper(s.renameContainer)) - s.mux.Path("/containers/{id:.*}/top").Methods("GET").HandlerFunc(s.handlerWrapper(s.topContainer)) - s.mux.Path("/containers/{id:.*}/start").Methods("POST").HandlerFunc(s.handlerWrapper(s.startContainer)) - s.mux.Path("/containers/{id:.*}/kill").Methods("POST").HandlerFunc(s.handlerWrapper(s.stopContainer)) - s.mux.Path("/containers/{id:.*}/stop").Methods("POST").HandlerFunc(s.handlerWrapper(s.stopContainer)) - s.mux.Path("/containers/{id:.*}/pause").Methods("POST").HandlerFunc(s.handlerWrapper(s.pauseContainer)) - s.mux.Path("/containers/{id:.*}/unpause").Methods("POST").HandlerFunc(s.handlerWrapper(s.unpauseContainer)) - s.mux.Path("/containers/{id:.*}/wait").Methods("POST").HandlerFunc(s.handlerWrapper(s.waitContainer)) - 
s.mux.Path("/containers/{id:.*}/attach").Methods("POST").HandlerFunc(s.handlerWrapper(s.attachContainer)) - s.mux.Path("/containers/{id:.*}").Methods("DELETE").HandlerFunc(s.handlerWrapper(s.removeContainer)) - s.mux.Path("/containers/{id:.*}/exec").Methods("POST").HandlerFunc(s.handlerWrapper(s.createExecContainer)) - s.mux.Path("/containers/{id:.*}/stats").Methods("GET").HandlerFunc(s.handlerWrapper(s.statsContainer)) - s.mux.Path("/containers/{id:.*}/archive").Methods("PUT").HandlerFunc(s.handlerWrapper(s.uploadToContainer)) - s.mux.Path("/exec/{id:.*}/resize").Methods("POST").HandlerFunc(s.handlerWrapper(s.resizeExecContainer)) - s.mux.Path("/exec/{id:.*}/start").Methods("POST").HandlerFunc(s.handlerWrapper(s.startExecContainer)) - s.mux.Path("/exec/{id:.*}/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.inspectExecContainer)) - s.mux.Path("/images/create").Methods("POST").HandlerFunc(s.handlerWrapper(s.pullImage)) - s.mux.Path("/build").Methods("POST").HandlerFunc(s.handlerWrapper(s.buildImage)) - s.mux.Path("/images/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.listImages)) - s.mux.Path("/images/{id:.*}").Methods("DELETE").HandlerFunc(s.handlerWrapper(s.removeImage)) - s.mux.Path("/images/{name:.*}/json").Methods("GET").HandlerFunc(s.handlerWrapper(s.inspectImage)) - s.mux.Path("/images/{name:.*}/push").Methods("POST").HandlerFunc(s.handlerWrapper(s.pushImage)) - s.mux.Path("/images/{name:.*}/tag").Methods("POST").HandlerFunc(s.handlerWrapper(s.tagImage)) - s.mux.Path("/events").Methods("GET").HandlerFunc(s.listEvents) - s.mux.Path("/_ping").Methods("GET").HandlerFunc(s.handlerWrapper(s.pingDocker)) - s.mux.Path("/images/load").Methods("POST").HandlerFunc(s.handlerWrapper(s.loadImage)) - s.mux.Path("/images/{id:.*}/get").Methods("GET").HandlerFunc(s.handlerWrapper(s.getImage)) - s.mux.Path("/networks").Methods("GET").HandlerFunc(s.handlerWrapper(s.listNetworks)) - s.mux.Path("/networks/{id:.*}").Methods("GET").HandlerFunc(s.handlerWrapper(s.networkInfo)) - s.mux.Path("/networks").Methods("POST").HandlerFunc(s.handlerWrapper(s.createNetwork)) - s.mux.Path("/volumes").Methods("GET").HandlerFunc(s.handlerWrapper(s.listVolumes)) - s.mux.Path("/volumes/create").Methods("POST").HandlerFunc(s.handlerWrapper(s.createVolume)) - s.mux.Path("/volumes/{name:.*}").Methods("GET").HandlerFunc(s.handlerWrapper(s.inspectVolume)) - s.mux.Path("/volumes/{name:.*}").Methods("DELETE").HandlerFunc(s.handlerWrapper(s.removeVolume)) - s.mux.Path("/info").Methods("GET").HandlerFunc(s.handlerWrapper(s.infoDocker)) -} - -// SetHook changes the hook function used by the server. -// -// The hook function is a function called on every request. -func (s *DockerServer) SetHook(hook func(*http.Request)) { - s.hook = hook -} - -// PrepareExec adds a callback to a container exec in the fake server. -// -// This function will be called whenever the given exec id is started, and the -// given exec id will remain in the "Running" start while the function is -// running, so it's useful for emulating an exec that runs for two seconds, for -// example: -// -// opts := docker.CreateExecOptions{ -// AttachStdin: true, -// AttachStdout: true, -// AttachStderr: true, -// Tty: true, -// Cmd: []string{"/bin/bash", "-l"}, -// } -// // Client points to a fake server. 
-// exec, err := client.CreateExec(opts) -// // handle error -// server.PrepareExec(exec.ID, func() {time.Sleep(2 * time.Second)}) -// err = client.StartExec(exec.ID, docker.StartExecOptions{Tty: true}) // will block for 2 seconds -// // handle error -func (s *DockerServer) PrepareExec(id string, callback func()) { - s.execCallbacks[id] = callback -} - -// PrepareStats adds a callback that will be called for each container stats -// call. -// -// This callback function will be called multiple times if stream is set to -// true when stats is called. -func (s *DockerServer) PrepareStats(id string, callback func(string) docker.Stats) { - s.statsCallbacks[id] = callback -} - -// PrepareFailure adds a new expected failure based on a URL regexp it receives -// an id for the failure. -func (s *DockerServer) PrepareFailure(id string, urlRegexp string) { - s.failures[id] = urlRegexp -} - -// PrepareMultiFailures enqueues a new expected failure based on a URL regexp -// it receives an id for the failure. -func (s *DockerServer) PrepareMultiFailures(id string, urlRegexp string) { - s.multiFailures = append(s.multiFailures, map[string]string{"error": id, "url": urlRegexp}) -} - -// ResetFailure removes an expected failure identified by the given id. -func (s *DockerServer) ResetFailure(id string) { - delete(s.failures, id) -} - -// ResetMultiFailures removes all enqueued failures. -func (s *DockerServer) ResetMultiFailures() { - s.multiFailures = []map[string]string{} -} - -// CustomHandler registers a custom handler for a specific path. -// -// For example: -// -// server.CustomHandler("/containers/json", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { -// http.Error(w, "Something wrong is not right", http.StatusInternalServerError) -// })) -func (s *DockerServer) CustomHandler(path string, handler http.Handler) { - s.handlerMutex.Lock() - s.customHandlers[path] = handler - s.handlerMutex.Unlock() -} - -// MutateContainer changes the state of a container, returning an error if the -// given id does not match to any container "running" in the server. -func (s *DockerServer) MutateContainer(id string, state docker.State) error { - for _, container := range s.containers { - if container.ID == id { - container.State = state - return nil - } - } - return errors.New("container not found") -} - -// Stop stops the server. -func (s *DockerServer) Stop() { - if s.listener != nil { - s.listener.Close() - } -} - -// URL returns the HTTP URL of the server. -func (s *DockerServer) URL() string { - if s.listener == nil { - return "" - } - return "http://" + s.listener.Addr().String() + "/" -} - -// ServeHTTP handles HTTP requests sent to the server. -func (s *DockerServer) ServeHTTP(w http.ResponseWriter, r *http.Request) { - s.handlerMutex.RLock() - defer s.handlerMutex.RUnlock() - for re, handler := range s.customHandlers { - if m, _ := regexp.MatchString(re, r.URL.Path); m { - handler.ServeHTTP(w, r) - return - } - } - s.mux.ServeHTTP(w, r) - if s.hook != nil { - s.hook(r) - } -} - -// DefaultHandler returns default http.Handler mux, it allows customHandlers to -// call the default behavior if wanted. 
-func (s *DockerServer) DefaultHandler() http.Handler { - return s.mux -} - -func (s *DockerServer) handlerWrapper(f func(http.ResponseWriter, *http.Request)) func(http.ResponseWriter, *http.Request) { - return func(w http.ResponseWriter, r *http.Request) { - for errorID, urlRegexp := range s.failures { - matched, err := regexp.MatchString(urlRegexp, r.URL.Path) - if err != nil { - http.Error(w, err.Error(), http.StatusBadRequest) - return - } - if !matched { - continue - } - http.Error(w, errorID, http.StatusBadRequest) - return - } - for i, failure := range s.multiFailures { - matched, err := regexp.MatchString(failure["url"], r.URL.Path) - if err != nil { - http.Error(w, err.Error(), http.StatusBadRequest) - return - } - if !matched { - continue - } - http.Error(w, failure["error"], http.StatusBadRequest) - s.multiFailures = append(s.multiFailures[:i], s.multiFailures[i+1:]...) - return - } - f(w, r) - } -} - -func (s *DockerServer) listContainers(w http.ResponseWriter, r *http.Request) { - all := r.URL.Query().Get("all") - s.cMut.RLock() - result := make([]docker.APIContainers, 0, len(s.containers)) - for _, container := range s.containers { - if all == "1" || container.State.Running { - result = append(result, docker.APIContainers{ - ID: container.ID, - Image: container.Image, - Command: fmt.Sprintf("%s %s", container.Path, strings.Join(container.Args, " ")), - Created: container.Created.Unix(), - Status: container.State.String(), - Ports: container.NetworkSettings.PortMappingAPI(), - Names: []string{fmt.Sprintf("/%s", container.Name)}, - }) - } - } - s.cMut.RUnlock() - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(result) -} - -func (s *DockerServer) listImages(w http.ResponseWriter, r *http.Request) { - s.cMut.RLock() - result := make([]docker.APIImages, len(s.images)) - for i, image := range s.images { - result[i] = docker.APIImages{ - ID: image.ID, - Created: image.Created.Unix(), - } - for tag, id := range s.imgIDs { - if id == image.ID { - result[i].RepoTags = append(result[i].RepoTags, tag) - } - } - } - s.cMut.RUnlock() - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(result) -} - -func (s *DockerServer) findImage(id string) (string, error) { - s.iMut.RLock() - defer s.iMut.RUnlock() - image, ok := s.imgIDs[id] - if ok { - return image, nil - } - image, _, err := s.findImageByID(id) - return image, err -} - -func (s *DockerServer) findImageByID(id string) (string, int, error) { - s.iMut.RLock() - defer s.iMut.RUnlock() - for i, image := range s.images { - if image.ID == id { - return image.ID, i, nil - } - } - return "", -1, errors.New("No such image") -} - -func (s *DockerServer) createContainer(w http.ResponseWriter, r *http.Request) { - var config struct { - *docker.Config - HostConfig *docker.HostConfig - } - defer r.Body.Close() - err := json.NewDecoder(r.Body).Decode(&config) - if err != nil { - http.Error(w, err.Error(), http.StatusBadRequest) - return - } - name := r.URL.Query().Get("name") - if name != "" && !nameRegexp.MatchString(name) { - http.Error(w, "Invalid container name", http.StatusInternalServerError) - return - } - if _, err := s.findImage(config.Image); err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - ports := map[docker.Port][]docker.PortBinding{} - for port := range config.ExposedPorts { - ports[port] = []docker.PortBinding{{ - HostIP: "0.0.0.0", - HostPort: strconv.Itoa(mathrand.Int() % 0xffff), - 
}} - } - - //the container may not have cmd when using a Dockerfile - var path string - var args []string - if len(config.Cmd) == 1 { - path = config.Cmd[0] - } else if len(config.Cmd) > 1 { - path = config.Cmd[0] - args = config.Cmd[1:] - } - - generatedID := s.generateID() - config.Config.Hostname = generatedID[:12] - container := docker.Container{ - Name: name, - ID: generatedID, - Created: time.Now(), - Path: path, - Args: args, - Config: config.Config, - HostConfig: config.HostConfig, - State: docker.State{ - Running: false, - Pid: mathrand.Int() % 50000, - ExitCode: 0, - StartedAt: time.Now(), - }, - Image: config.Image, - NetworkSettings: &docker.NetworkSettings{ - IPAddress: fmt.Sprintf("172.16.42.%d", mathrand.Int()%250+2), - IPPrefixLen: 24, - Gateway: "172.16.42.1", - Bridge: "docker0", - Ports: ports, - }, - } - s.cMut.Lock() - if container.Name != "" { - for _, c := range s.containers { - if c.Name == container.Name { - defer s.cMut.Unlock() - http.Error(w, "there's already a container with this name", http.StatusConflict) - return - } - } - } - s.containers = append(s.containers, &container) - s.cMut.Unlock() - w.WriteHeader(http.StatusCreated) - s.notify(&container) - - json.NewEncoder(w).Encode(container) -} - -func (s *DockerServer) generateID() string { - var buf [16]byte - rand.Read(buf[:]) - return fmt.Sprintf("%x", buf) -} - -func (s *DockerServer) renameContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, index, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - copy := *container - copy.Name = r.URL.Query().Get("name") - s.cMut.Lock() - defer s.cMut.Unlock() - if s.containers[index].ID == copy.ID { - s.containers[index] = © - } - w.WriteHeader(http.StatusNoContent) -} - -func (s *DockerServer) inspectContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(container) -} - -func (s *DockerServer) statsContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - _, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - stream, _ := strconv.ParseBool(r.URL.Query().Get("stream")) - callback := s.statsCallbacks[id] - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - encoder := json.NewEncoder(w) - for { - var stats docker.Stats - if callback != nil { - stats = callback(id) - } - encoder.Encode(stats) - if !stream { - break - } - } -} - -func (s *DockerServer) uploadToContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - if !container.State.Running { - w.WriteHeader(http.StatusInternalServerError) - fmt.Fprintf(w, "Container %s is not running", id) - return - } - path := r.URL.Query().Get("path") - s.uploadedFiles[id] = path - w.WriteHeader(http.StatusOK) -} - -func (s *DockerServer) topContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - if !container.State.Running { - w.WriteHeader(http.StatusInternalServerError) - 
fmt.Fprintf(w, "Container %s is not running", id) - return - } - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - result := docker.TopResult{ - Titles: []string{"UID", "PID", "PPID", "C", "STIME", "TTY", "TIME", "CMD"}, - Processes: [][]string{ - {"root", "7535", "7516", "0", "03:20", "?", "00:00:00", container.Path + " " + strings.Join(container.Args, " ")}, - }, - } - json.NewEncoder(w).Encode(result) -} - -func (s *DockerServer) startContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - s.cMut.Lock() - defer s.cMut.Unlock() - defer r.Body.Close() - var hostConfig docker.HostConfig - err = json.NewDecoder(r.Body).Decode(&hostConfig) - if err != nil { - http.Error(w, err.Error(), http.StatusInternalServerError) - return - } - container.HostConfig = &hostConfig - if len(hostConfig.PortBindings) > 0 { - ports := map[docker.Port][]docker.PortBinding{} - for key, items := range hostConfig.PortBindings { - bindings := make([]docker.PortBinding, len(items)) - for i := range items { - binding := docker.PortBinding{ - HostIP: items[i].HostIP, - HostPort: items[i].HostPort, - } - if binding.HostIP == "" { - binding.HostIP = "0.0.0.0" - } - if binding.HostPort == "" { - binding.HostPort = strconv.Itoa(mathrand.Int() % 0xffff) - } - bindings[i] = binding - } - ports[key] = bindings - } - container.NetworkSettings.Ports = ports - } - if container.State.Running { - http.Error(w, "", http.StatusNotModified) - return - } - container.State.Running = true - s.notify(container) -} - -func (s *DockerServer) stopContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - s.cMut.Lock() - defer s.cMut.Unlock() - if !container.State.Running { - http.Error(w, "Container not running", http.StatusBadRequest) - return - } - w.WriteHeader(http.StatusNoContent) - container.State.Running = false - s.notify(container) -} - -func (s *DockerServer) pauseContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - s.cMut.Lock() - defer s.cMut.Unlock() - if container.State.Paused { - http.Error(w, "Container already paused", http.StatusBadRequest) - return - } - w.WriteHeader(http.StatusNoContent) - container.State.Paused = true -} - -func (s *DockerServer) unpauseContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - s.cMut.Lock() - defer s.cMut.Unlock() - if !container.State.Paused { - http.Error(w, "Container not paused", http.StatusBadRequest) - return - } - w.WriteHeader(http.StatusNoContent) - container.State.Paused = false -} - -func (s *DockerServer) attachContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - hijacker, ok := w.(http.Hijacker) - if !ok { - http.Error(w, "cannot hijack connection", http.StatusInternalServerError) - return - } - w.Header().Set("Content-Type", "application/vnd.docker.raw-stream") - w.WriteHeader(http.StatusOK) - conn, _, err := 
hijacker.Hijack() - if err != nil { - http.Error(w, err.Error(), http.StatusInternalServerError) - return - } - wg := sync.WaitGroup{} - if r.URL.Query().Get("stdin") == "1" { - wg.Add(1) - go func() { - ioutil.ReadAll(conn) - wg.Done() - }() - } - outStream := stdcopy.NewStdWriter(conn, stdcopy.Stdout) - if container.State.Running { - fmt.Fprintf(outStream, "Container is running\n") - } else { - fmt.Fprintf(outStream, "Container is not running\n") - } - fmt.Fprintln(outStream, "What happened?") - fmt.Fprintln(outStream, "Something happened") - wg.Wait() - if r.URL.Query().Get("stream") == "1" { - for { - time.Sleep(1e6) - s.cMut.RLock() - if !container.State.Running { - s.cMut.RUnlock() - break - } - s.cMut.RUnlock() - } - } - conn.Close() -} - -func (s *DockerServer) waitContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - for { - time.Sleep(1e6) - s.cMut.RLock() - if !container.State.Running { - s.cMut.RUnlock() - break - } - s.cMut.RUnlock() - } - result := map[string]int{"StatusCode": container.State.ExitCode} - json.NewEncoder(w).Encode(result) -} - -func (s *DockerServer) removeContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - force := r.URL.Query().Get("force") - s.cMut.Lock() - defer s.cMut.Unlock() - container, index, err := s.findContainerWithLock(id, false) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - if container.State.Running && force != "1" { - msg := "Error: API error (406): Impossible to remove a running container, please stop it first" - http.Error(w, msg, http.StatusInternalServerError) - return - } - w.WriteHeader(http.StatusNoContent) - s.containers[index] = s.containers[len(s.containers)-1] - s.containers = s.containers[:len(s.containers)-1] -} - -func (s *DockerServer) commitContainer(w http.ResponseWriter, r *http.Request) { - id := r.URL.Query().Get("container") - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - config := new(docker.Config) - runConfig := r.URL.Query().Get("run") - if runConfig != "" { - err = json.Unmarshal([]byte(runConfig), config) - if err != nil { - http.Error(w, err.Error(), http.StatusBadRequest) - return - } - } - w.WriteHeader(http.StatusOK) - image := docker.Image{ - ID: "img-" + container.ID, - Parent: container.Image, - Container: container.ID, - Comment: r.URL.Query().Get("m"), - Author: r.URL.Query().Get("author"), - Config: config, - } - repository := r.URL.Query().Get("repo") - tag := r.URL.Query().Get("tag") - s.iMut.Lock() - s.images = append(s.images, image) - if repository != "" { - if tag != "" { - repository += ":" + tag - } - s.imgIDs[repository] = image.ID - } - s.iMut.Unlock() - fmt.Fprintf(w, `{"ID":%q}`, image.ID) -} - -func (s *DockerServer) findContainer(idOrName string) (*docker.Container, int, error) { - return s.findContainerWithLock(idOrName, true) -} - -func (s *DockerServer) findContainerWithLock(idOrName string, shouldLock bool) (*docker.Container, int, error) { - if shouldLock { - s.cMut.RLock() - defer s.cMut.RUnlock() - } - for i, container := range s.containers { - if container.ID == idOrName || container.Name == idOrName { - return container, i, nil - } - } - return nil, -1, errors.New("No such container") -} - -func (s *DockerServer) buildImage(w http.ResponseWriter, r *http.Request) { - if ct := 
r.Header.Get("Content-Type"); ct == "application/tar" { - gotDockerFile := false - tr := tar.NewReader(r.Body) - for { - header, err := tr.Next() - if err != nil { - break - } - if header.Name == "Dockerfile" { - gotDockerFile = true - } - } - if !gotDockerFile { - w.WriteHeader(http.StatusBadRequest) - w.Write([]byte("miss Dockerfile")) - return - } - } - //we did not use that Dockerfile to build image cause we are a fake Docker daemon - image := docker.Image{ - ID: s.generateID(), - Created: time.Now(), - } - - query := r.URL.Query() - repository := image.ID - if t := query.Get("t"); t != "" { - repository = t - } - s.iMut.Lock() - s.images = append(s.images, image) - s.imgIDs[repository] = image.ID - s.iMut.Unlock() - w.Write([]byte(fmt.Sprintf("Successfully built %s", image.ID))) -} - -func (s *DockerServer) pullImage(w http.ResponseWriter, r *http.Request) { - fromImageName := r.URL.Query().Get("fromImage") - tag := r.URL.Query().Get("tag") - image := docker.Image{ - ID: s.generateID(), - Config: &docker.Config{}, - } - s.iMut.Lock() - s.images = append(s.images, image) - if fromImageName != "" { - if tag != "" { - fromImageName = fmt.Sprintf("%s:%s", fromImageName, tag) - } - s.imgIDs[fromImageName] = image.ID - } - s.iMut.Unlock() -} - -func (s *DockerServer) pushImage(w http.ResponseWriter, r *http.Request) { - name := mux.Vars(r)["name"] - tag := r.URL.Query().Get("tag") - if tag != "" { - name += ":" + tag - } - s.iMut.RLock() - if _, ok := s.imgIDs[name]; !ok { - s.iMut.RUnlock() - http.Error(w, "No such image", http.StatusNotFound) - return - } - s.iMut.RUnlock() - fmt.Fprintln(w, "Pushing...") - fmt.Fprintln(w, "Pushed") -} - -func (s *DockerServer) tagImage(w http.ResponseWriter, r *http.Request) { - name := mux.Vars(r)["name"] - s.iMut.RLock() - if _, ok := s.imgIDs[name]; !ok { - s.iMut.RUnlock() - http.Error(w, "No such image", http.StatusNotFound) - return - } - s.iMut.RUnlock() - s.iMut.Lock() - defer s.iMut.Unlock() - newRepo := r.URL.Query().Get("repo") - newTag := r.URL.Query().Get("tag") - if newTag != "" { - newRepo += ":" + newTag - } - s.imgIDs[newRepo] = s.imgIDs[name] - w.WriteHeader(http.StatusCreated) -} - -func (s *DockerServer) removeImage(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - s.iMut.RLock() - var tag string - if img, ok := s.imgIDs[id]; ok { - id, tag = img, id - } - var tags []string - for tag, taggedID := range s.imgIDs { - if taggedID == id { - tags = append(tags, tag) - } - } - s.iMut.RUnlock() - _, index, err := s.findImageByID(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - w.WriteHeader(http.StatusNoContent) - s.iMut.Lock() - defer s.iMut.Unlock() - if len(tags) < 2 { - s.images[index] = s.images[len(s.images)-1] - s.images = s.images[:len(s.images)-1] - } - if tag != "" { - delete(s.imgIDs, tag) - } -} - -func (s *DockerServer) inspectImage(w http.ResponseWriter, r *http.Request) { - name := mux.Vars(r)["name"] - s.iMut.RLock() - defer s.iMut.RUnlock() - if id, ok := s.imgIDs[name]; ok { - for _, img := range s.images { - if img.ID == id { - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(img) - return - } - } - } - http.Error(w, "not found", http.StatusNotFound) -} - -func (s *DockerServer) listEvents(w http.ResponseWriter, r *http.Request) { - w.Header().Set("Content-Type", "application/json") - var events [][]byte - count := mathrand.Intn(20) - for i := 0; i < count; i++ { - data, err := 
json.Marshal(s.generateEvent()) - if err != nil { - w.WriteHeader(http.StatusInternalServerError) - return - } - events = append(events, data) - } - w.WriteHeader(http.StatusOK) - for _, d := range events { - fmt.Fprintln(w, d) - time.Sleep(time.Duration(mathrand.Intn(200)) * time.Millisecond) - } -} - -func (s *DockerServer) pingDocker(w http.ResponseWriter, r *http.Request) { - w.WriteHeader(http.StatusOK) -} - -func (s *DockerServer) generateEvent() *docker.APIEvents { - var eventType string - switch mathrand.Intn(4) { - case 0: - eventType = "create" - case 1: - eventType = "start" - case 2: - eventType = "stop" - case 3: - eventType = "destroy" - } - return &docker.APIEvents{ - ID: s.generateID(), - Status: eventType, - From: "mybase:latest", - Time: time.Now().Unix(), - } -} - -func (s *DockerServer) loadImage(w http.ResponseWriter, r *http.Request) { - w.WriteHeader(http.StatusOK) -} - -func (s *DockerServer) getImage(w http.ResponseWriter, r *http.Request) { - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/tar") -} - -func (s *DockerServer) createExecContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - container, _, err := s.findContainer(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - - execID := s.generateID() - container.ExecIDs = append(container.ExecIDs, execID) - - exec := docker.ExecInspect{ - ID: execID, - Container: *container, - } - - var params docker.CreateExecOptions - err = json.NewDecoder(r.Body).Decode(¶ms) - if err != nil { - http.Error(w, err.Error(), http.StatusInternalServerError) - return - } - if len(params.Cmd) > 0 { - exec.ProcessConfig.EntryPoint = params.Cmd[0] - if len(params.Cmd) > 1 { - exec.ProcessConfig.Arguments = params.Cmd[1:] - } - } - - exec.ProcessConfig.User = params.User - exec.ProcessConfig.Tty = params.Tty - - s.execMut.Lock() - s.execs = append(s.execs, &exec) - s.execMut.Unlock() - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(map[string]string{"Id": exec.ID}) -} - -func (s *DockerServer) startExecContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - if exec, err := s.getExec(id, false); err == nil { - s.execMut.Lock() - exec.Running = true - s.execMut.Unlock() - if callback, ok := s.execCallbacks[id]; ok { - callback() - delete(s.execCallbacks, id) - } else if callback, ok := s.execCallbacks["*"]; ok { - callback() - delete(s.execCallbacks, "*") - } - s.execMut.Lock() - exec.Running = false - s.execMut.Unlock() - w.WriteHeader(http.StatusOK) - return - } - w.WriteHeader(http.StatusNotFound) -} - -func (s *DockerServer) resizeExecContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - if _, err := s.getExec(id, false); err == nil { - w.WriteHeader(http.StatusOK) - return - } - w.WriteHeader(http.StatusNotFound) -} - -func (s *DockerServer) inspectExecContainer(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - if exec, err := s.getExec(id, true); err == nil { - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - json.NewEncoder(w).Encode(exec) - return - } - w.WriteHeader(http.StatusNotFound) -} - -func (s *DockerServer) getExec(id string, copy bool) (*docker.ExecInspect, error) { - s.execMut.RLock() - defer s.execMut.RUnlock() - for _, exec := range s.execs { - if exec.ID == id { - if copy { - cp := *exec - exec = &cp - } - return exec, nil - } - } - return nil, errors.New("exec not 
found") -} - -func (s *DockerServer) findNetwork(idOrName string) (*docker.Network, int, error) { - s.netMut.RLock() - defer s.netMut.RUnlock() - for i, network := range s.networks { - if network.ID == idOrName || network.Name == idOrName { - return network, i, nil - } - } - return nil, -1, errors.New("No such network") -} - -func (s *DockerServer) listNetworks(w http.ResponseWriter, r *http.Request) { - s.netMut.RLock() - result := make([]docker.Network, 0, len(s.networks)) - for _, network := range s.networks { - result = append(result, *network) - } - s.netMut.RUnlock() - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(result) -} - -func (s *DockerServer) networkInfo(w http.ResponseWriter, r *http.Request) { - id := mux.Vars(r)["id"] - network, _, err := s.findNetwork(id) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(network) -} - -// isValidName validates configuration objects supported by libnetwork -func isValidName(name string) bool { - if name == "" || strings.Contains(name, ".") { - return false - } - return true -} - -func (s *DockerServer) createNetwork(w http.ResponseWriter, r *http.Request) { - var config *docker.CreateNetworkOptions - defer r.Body.Close() - err := json.NewDecoder(r.Body).Decode(&config) - if err != nil { - http.Error(w, err.Error(), http.StatusBadRequest) - return - } - if !isValidName(config.Name) { - http.Error(w, "Invalid network name", http.StatusBadRequest) - return - } - if n, _, _ := s.findNetwork(config.Name); n != nil { - http.Error(w, "network already exists", http.StatusForbidden) - return - } - - generatedID := s.generateID() - network := docker.Network{ - Name: config.Name, - ID: generatedID, - Driver: config.Driver, - } - s.netMut.Lock() - s.networks = append(s.networks, &network) - s.netMut.Unlock() - w.WriteHeader(http.StatusCreated) - var c = struct{ ID string }{ID: network.ID} - json.NewEncoder(w).Encode(c) -} - -func (s *DockerServer) listVolumes(w http.ResponseWriter, r *http.Request) { - s.volMut.RLock() - result := make([]docker.Volume, 0, len(s.volStore)) - for _, volumeCounter := range s.volStore { - result = append(result, volumeCounter.volume) - } - s.volMut.RUnlock() - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(result) -} - -func (s *DockerServer) createVolume(w http.ResponseWriter, r *http.Request) { - var data struct { - *docker.CreateVolumeOptions - } - defer r.Body.Close() - err := json.NewDecoder(r.Body).Decode(&data) - if err != nil { - http.Error(w, err.Error(), http.StatusBadRequest) - return - } - volume := &docker.Volume{ - Name: data.CreateVolumeOptions.Name, - Driver: data.CreateVolumeOptions.Driver, - } - // If the name is not specified, generate one. Just using generateID for now - if len(volume.Name) == 0 { - volume.Name = s.generateID() - } - // If driver is not specified, use local - if len(volume.Driver) == 0 { - volume.Driver = "local" - } - // Mount point is a default one with name - volume.Mountpoint = "/var/lib/docker/volumes/" + volume.Name - - // If the volume already exists, don't re-add it. 
- exists := false - s.volMut.Lock() - if s.volStore != nil { - _, exists = s.volStore[volume.Name] - } else { - // No volumes, create volStore - s.volStore = make(map[string]*volumeCounter) - } - if !exists { - s.volStore[volume.Name] = &volumeCounter{ - volume: *volume, - count: 0, - } - } - s.volMut.Unlock() - w.WriteHeader(http.StatusCreated) - json.NewEncoder(w).Encode(volume) -} - -func (s *DockerServer) inspectVolume(w http.ResponseWriter, r *http.Request) { - s.volMut.RLock() - defer s.volMut.RUnlock() - name := mux.Vars(r)["name"] - vol, err := s.findVolume(name) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(vol.volume) -} - -func (s *DockerServer) findVolume(name string) (*volumeCounter, error) { - vol, ok := s.volStore[name] - if !ok { - return nil, errors.New("no such volume") - } - return vol, nil -} - -func (s *DockerServer) removeVolume(w http.ResponseWriter, r *http.Request) { - s.volMut.Lock() - defer s.volMut.Unlock() - name := mux.Vars(r)["name"] - vol, err := s.findVolume(name) - if err != nil { - http.Error(w, err.Error(), http.StatusNotFound) - return - } - if vol.count != 0 { - http.Error(w, "volume in use and cannot be removed", http.StatusConflict) - return - } - s.volStore[vol.volume.Name] = nil - w.WriteHeader(http.StatusNoContent) -} - -func (s *DockerServer) infoDocker(w http.ResponseWriter, r *http.Request) { - s.cMut.RLock() - defer s.cMut.RUnlock() - s.iMut.RLock() - defer s.iMut.RUnlock() - var running, stopped, paused int - for _, c := range s.containers { - if c.State.Running { - running++ - } else { - stopped++ - } - if c.State.Paused { - paused++ - } - } - envs := map[string]interface{}{ - "ID": "AAAA:XXXX:0000:BBBB:AAAA:XXXX:0000:BBBB:AAAA:XXXX:0000:BBBB", - "Containers": len(s.containers), - "ContainersRunning": running, - "ContainersPaused": paused, - "ContainersStopped": stopped, - "Images": len(s.images), - "Driver": "aufs", - "DriverStatus": [][]string{}, - "SystemStatus": nil, - "Plugins": map[string]interface{}{ - "Volume": []string{ - "local", - }, - "Network": []string{ - "bridge", - "null", - "host", - }, - "Authorization": nil, - }, - "MemoryLimit": true, - "SwapLimit": false, - "CpuCfsPeriod": true, - "CpuCfsQuota": true, - "CPUShares": true, - "CPUSet": true, - "IPv4Forwarding": true, - "BridgeNfIptables": true, - "BridgeNfIp6tables": true, - "Debug": false, - "NFd": 79, - "OomKillDisable": true, - "NGoroutines": 101, - "SystemTime": "2016-02-25T18:13:10.25870078Z", - "ExecutionDriver": "native-0.2", - "LoggingDriver": "json-file", - "NEventsListener": 0, - "KernelVersion": "3.13.0-77-generic", - "OperatingSystem": "Ubuntu 14.04.3 LTS", - "OSType": "linux", - "Architecture": "x86_64", - "IndexServerAddress": "https://index.docker.io/v1/", - "RegistryConfig": map[string]interface{}{ - "InsecureRegistryCIDRs": []string{}, - "IndexConfigs": map[string]interface{}{}, - "Mirrors": nil, - }, - "InitSha1": "e2042dbb0fcf49bb9da199186d9a5063cda92a01", - "InitPath": "/usr/lib/docker/dockerinit", - "NCPU": 1, - "MemTotal": 2099204096, - "DockerRootDir": "/var/lib/docker", - "HttpProxy": "", - "HttpsProxy": "", - "NoProxy": "", - "Name": "vagrant-ubuntu-trusty-64", - "Labels": nil, - "ExperimentalBuild": false, - "ServerVersion": "1.10.1", - "ClusterStore": "", - "ClusterAdvertise": "", - } - w.WriteHeader(http.StatusOK) - json.NewEncoder(w).Encode(envs) -} diff --git 
a/vendor/github.com/fsouza/go-dockerclient/tls.go b/vendor/github.com/fsouza/go-dockerclient/tls.go deleted file mode 100644 index bb5790b5f0b..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/tls.go +++ /dev/null @@ -1,118 +0,0 @@ -// Copyright 2014 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. -// -// The content is borrowed from Docker's own source code to provide a simple -// tls based dialer - -package docker - -import ( - "crypto/tls" - "errors" - "net" - "strings" - "time" -) - -type tlsClientCon struct { - *tls.Conn - rawConn net.Conn -} - -func (c *tlsClientCon) CloseWrite() error { - // Go standard tls.Conn doesn't provide the CloseWrite() method so we do it - // on its underlying connection. - if cwc, ok := c.rawConn.(interface { - CloseWrite() error - }); ok { - return cwc.CloseWrite() - } - return nil -} - -func tlsDialWithDialer(dialer *net.Dialer, network, addr string, config *tls.Config) (net.Conn, error) { - // We want the Timeout and Deadline values from dialer to cover the - // whole process: TCP connection and TLS handshake. This means that we - // also need to start our own timers now. - timeout := dialer.Timeout - - if !dialer.Deadline.IsZero() { - deadlineTimeout := dialer.Deadline.Sub(time.Now()) - if timeout == 0 || deadlineTimeout < timeout { - timeout = deadlineTimeout - } - } - - var errChannel chan error - - if timeout != 0 { - errChannel = make(chan error, 2) - time.AfterFunc(timeout, func() { - errChannel <- errors.New("") - }) - } - - rawConn, err := dialer.Dial(network, addr) - if err != nil { - return nil, err - } - - colonPos := strings.LastIndex(addr, ":") - if colonPos == -1 { - colonPos = len(addr) - } - hostname := addr[:colonPos] - - // If no ServerName is set, infer the ServerName - // from the hostname we're connecting to. - if config.ServerName == "" { - // Make a copy to avoid polluting argument or default. - config = copyTLSConfig(config) - config.ServerName = hostname - } - - conn := tls.Client(rawConn, config) - - if timeout == 0 { - err = conn.Handshake() - } else { - go func() { - errChannel <- conn.Handshake() - }() - - err = <-errChannel - } - - if err != nil { - rawConn.Close() - return nil, err - } - - // This is Docker difference with standard's crypto/tls package: returned a - // wrapper which holds both the TLS and raw connections. 
- return &tlsClientCon{conn, rawConn}, nil -} - -// this exists to silent an error message in go vet -func copyTLSConfig(cfg *tls.Config) *tls.Config { - return &tls.Config{ - Certificates: cfg.Certificates, - CipherSuites: cfg.CipherSuites, - ClientAuth: cfg.ClientAuth, - ClientCAs: cfg.ClientCAs, - ClientSessionCache: cfg.ClientSessionCache, - CurvePreferences: cfg.CurvePreferences, - InsecureSkipVerify: cfg.InsecureSkipVerify, - MaxVersion: cfg.MaxVersion, - MinVersion: cfg.MinVersion, - NameToCertificate: cfg.NameToCertificate, - NextProtos: cfg.NextProtos, - PreferServerCipherSuites: cfg.PreferServerCipherSuites, - Rand: cfg.Rand, - RootCAs: cfg.RootCAs, - ServerName: cfg.ServerName, - SessionTicketKey: cfg.SessionTicketKey, - SessionTicketsDisabled: cfg.SessionTicketsDisabled, - } -} diff --git a/vendor/github.com/fsouza/go-dockerclient/travis-scripts/install.bash b/vendor/github.com/fsouza/go-dockerclient/travis-scripts/install.bash deleted file mode 100755 index 9d1708fa994..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/travis-scripts/install.bash +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -x - -# Copyright 2016 go-dockerclient authors. All rights reserved. -# Use of this source code is governed by a BSD-style -# license that can be found in the LICENSE file. - -if [[ $TRAVIS_OS_NAME == "linux" ]]; then - sudo stop docker || true - sudo rm -rf /var/lib/docker - sudo rm -f `which docker` - - set -e - sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D - echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list - sudo apt-get update - sudo apt-get install docker-engine=${DOCKER_VERSION}-0~$(lsb_release -cs) -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -fi diff --git a/vendor/github.com/fsouza/go-dockerclient/travis-scripts/run-tests.bash b/vendor/github.com/fsouza/go-dockerclient/travis-scripts/run-tests.bash deleted file mode 100755 index 0d836cd4f4a..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/travis-scripts/run-tests.bash +++ /dev/null @@ -1,15 +0,0 @@ -#!/bin/bash -ex - -# Copyright 2016 go-dockerclient authors. All rights reserved. -# Use of this source code is governed by a BSD-style -# license that can be found in the LICENSE file. - -if ! [[ $TRAVIS_GO_VERSION =~ ^1\.[34] ]]; then - make lint vet -fi - -make fmtcheck gotest GO_TEST_FLAGS=-race - -if [[ $TRAVIS_OS_NAME == "linux" ]]; then - DOCKER_HOST=tcp://127.0.0.1:2375 make integration -fi diff --git a/vendor/github.com/fsouza/go-dockerclient/volume.go b/vendor/github.com/fsouza/go-dockerclient/volume.go deleted file mode 100644 index 5fe8ee3d637..00000000000 --- a/vendor/github.com/fsouza/go-dockerclient/volume.go +++ /dev/null @@ -1,128 +0,0 @@ -// Copyright 2015 go-dockerclient authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package docker - -import ( - "encoding/json" - "errors" - "net/http" -) - -var ( - // ErrNoSuchVolume is the error returned when the volume does not exist. - ErrNoSuchVolume = errors.New("no such volume") - - // ErrVolumeInUse is the error returned when the volume requested to be removed is still in use. - ErrVolumeInUse = errors.New("volume in use and cannot be removed") -) - -// Volume represents a volume. 
-// -// See https://goo.gl/FZA4BK for more details. -type Volume struct { - Name string `json:"Name" yaml:"Name"` - Driver string `json:"Driver,omitempty" yaml:"Driver,omitempty"` - Mountpoint string `json:"Mountpoint,omitempty" yaml:"Mountpoint,omitempty"` - Labels map[string]string `json:"Labels,omitempty" yaml:"Labels,omitempty"` -} - -// ListVolumesOptions specify parameters to the ListVolumes function. -// -// See https://goo.gl/FZA4BK for more details. -type ListVolumesOptions struct { - Filters map[string][]string -} - -// ListVolumes returns a list of available volumes in the server. -// -// See https://goo.gl/FZA4BK for more details. -func (c *Client) ListVolumes(opts ListVolumesOptions) ([]Volume, error) { - resp, err := c.do("GET", "/volumes?"+queryString(opts), doOptions{}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - m := make(map[string]interface{}) - if err := json.NewDecoder(resp.Body).Decode(&m); err != nil { - return nil, err - } - var volumes []Volume - volumesJSON, ok := m["Volumes"] - if !ok { - return volumes, nil - } - data, err := json.Marshal(volumesJSON) - if err != nil { - return nil, err - } - if err := json.Unmarshal(data, &volumes); err != nil { - return nil, err - } - return volumes, nil -} - -// CreateVolumeOptions specify parameters to the CreateVolume function. -// -// See https://goo.gl/pBUbZ9 for more details. -type CreateVolumeOptions struct { - Name string - Driver string - DriverOpts map[string]string -} - -// CreateVolume creates a volume on the server. -// -// See https://goo.gl/pBUbZ9 for more details. -func (c *Client) CreateVolume(opts CreateVolumeOptions) (*Volume, error) { - resp, err := c.do("POST", "/volumes/create", doOptions{data: opts}) - if err != nil { - return nil, err - } - defer resp.Body.Close() - var volume Volume - if err := json.NewDecoder(resp.Body).Decode(&volume); err != nil { - return nil, err - } - return &volume, nil -} - -// InspectVolume returns a volume by its name. -// -// See https://goo.gl/0g9A6i for more details. -func (c *Client) InspectVolume(name string) (*Volume, error) { - resp, err := c.do("GET", "/volumes/"+name, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok && e.Status == http.StatusNotFound { - return nil, ErrNoSuchVolume - } - return nil, err - } - defer resp.Body.Close() - var volume Volume - if err := json.NewDecoder(resp.Body).Decode(&volume); err != nil { - return nil, err - } - return &volume, nil -} - -// RemoveVolume removes a volume by its name. -// -// See https://goo.gl/79GNQz for more details. 
-func (c *Client) RemoveVolume(name string) error { - resp, err := c.do("DELETE", "/volumes/"+name, doOptions{}) - if err != nil { - if e, ok := err.(*Error); ok { - if e.Status == http.StatusNotFound { - return ErrNoSuchVolume - } - if e.Status == http.StatusConflict { - return ErrVolumeInUse - } - } - return nil - } - defer resp.Body.Close() - return nil -} diff --git a/vendor/github.com/go-ini/ini/README.md b/vendor/github.com/go-ini/ini/README.md index 22a42344aa0..85947422d70 100644 --- a/vendor/github.com/go-ini/ini/README.md +++ b/vendor/github.com/go-ini/ini/README.md @@ -1,4 +1,4 @@ -INI [![Build Status](https://travis-ci.org/go-ini/ini.svg?branch=master)](https://travis-ci.org/go-ini/ini) +INI [![Build Status](https://travis-ci.org/go-ini/ini.svg?branch=master)](https://travis-ci.org/go-ini/ini) [![Sourcegraph](https://sourcegraph.com/github.com/go-ini/ini/-/badge.svg)](https://sourcegraph.com/github.com/go-ini/ini?badge) === ![](https://avatars0.githubusercontent.com/u/10216035?v=3&s=200) @@ -106,6 +106,12 @@ cfg, err := LoadSources(LoadOptions{AllowBooleanKeys: true}, "my.cnf")) The value of those keys are always `true`, and when you save to a file, it will keep in the same foramt as you read. +To generate such keys in your program, you could use `NewBooleanKey`: + +```go +key, err := sec.NewBooleanKey("skip-host-cache") +``` + #### Comment Take care that following format will be treated as comment: diff --git a/vendor/github.com/go-ini/ini/README_ZH.md b/vendor/github.com/go-ini/ini/README_ZH.md index 3b4fb6604e3..163432db9af 100644 --- a/vendor/github.com/go-ini/ini/README_ZH.md +++ b/vendor/github.com/go-ini/ini/README_ZH.md @@ -99,6 +99,12 @@ cfg, err := LoadSources(LoadOptions{AllowBooleanKeys: true}, "my.cnf")) 这些键的值永远为 `true`,且在保存到文件时也只会输出键名。 +如果您想要通过程序来生成此类键,则可以使用 `NewBooleanKey`: + +```go +key, err := sec.NewBooleanKey("skip-host-cache") +``` + #### 关于注释 下述几种情况的内容将被视为注释: diff --git a/vendor/github.com/go-ini/ini/ini.go b/vendor/github.com/go-ini/ini/ini.go index 77e0dbde643..68d73aa750c 100644 --- a/vendor/github.com/go-ini/ini/ini.go +++ b/vendor/github.com/go-ini/ini/ini.go @@ -37,7 +37,7 @@ const ( // Maximum allowed depth when recursively substituing variable names. _DEPTH_VALUES = 99 - _VERSION = "1.23.1" + _VERSION = "1.25.4" ) // Version returns current package version literal. @@ -176,6 +176,8 @@ type LoadOptions struct { // AllowBooleanKeys indicates whether to allow boolean type keys or treat as value is missing. // This type of keys are mostly used in my.cnf. AllowBooleanKeys bool + // AllowShadows indicates whether to keep track of keys with same name under same section. + AllowShadows bool // Some INI formats allow group blocks that store a block of raw content that doesn't otherwise // conform to key/value pairs. Specify the names of those blocks here. UnparseableSections []string @@ -219,6 +221,12 @@ func InsensitiveLoad(source interface{}, others ...interface{}) (*File, error) { return LoadSources(LoadOptions{Insensitive: true}, source, others...) } +// InsensitiveLoad has exactly same functionality as Load function +// except it allows have shadow keys. +func ShadowLoad(source interface{}, others ...interface{}) (*File, error) { + return LoadSources(LoadOptions{AllowShadows: true}, source, others...) +} + // Empty returns an empty file object. func Empty() *File { // Ignore error here, we sure our data is good. 
@@ -441,6 +449,7 @@ func (f *File) WriteToIndent(w io.Writer, indent string) (n int64, err error) { } alignSpaces := bytes.Repeat([]byte(" "), alignLength) + KEY_LIST: for _, kname := range sec.keyList { key := sec.Key(kname) if len(key.Comment) > 0 { @@ -467,28 +476,33 @@ func (f *File) WriteToIndent(w io.Writer, indent string) (n int64, err error) { case strings.Contains(kname, "`"): kname = `"""` + kname + `"""` } - if _, err = buf.WriteString(kname); err != nil { - return 0, err - } - if key.isBooleanType { - continue - } + for _, val := range key.ValueWithShadows() { + if _, err = buf.WriteString(kname); err != nil { + return 0, err + } - // Write out alignment spaces before "=" sign - if PrettyFormat { - buf.Write(alignSpaces[:alignLength-len(kname)]) - } + if key.isBooleanType { + if kname != sec.keyList[len(sec.keyList)-1] { + buf.WriteString(LineBreak) + } + continue KEY_LIST + } - val := key.value - // In case key value contains "\n", "`", "\"", "#" or ";" - if strings.ContainsAny(val, "\n`") { - val = `"""` + val + `"""` - } else if strings.ContainsAny(val, "#;") { - val = "`" + val + "`" - } - if _, err = buf.WriteString(equalSign + val + LineBreak); err != nil { - return 0, err + // Write out alignment spaces before "=" sign + if PrettyFormat { + buf.Write(alignSpaces[:alignLength-len(kname)]) + } + + // In case key value contains "\n", "`", "\"", "#" or ";" + if strings.ContainsAny(val, "\n`") { + val = `"""` + val + `"""` + } else if strings.ContainsAny(val, "#;") { + val = "`" + val + "`" + } + if _, err = buf.WriteString(equalSign + val + LineBreak); err != nil { + return 0, err + } } } diff --git a/vendor/github.com/go-ini/ini/key.go b/vendor/github.com/go-ini/ini/key.go index 9738c55a21b..852696f4c4f 100644 --- a/vendor/github.com/go-ini/ini/key.go +++ b/vendor/github.com/go-ini/ini/key.go @@ -15,6 +15,7 @@ package ini import ( + "errors" "fmt" "strconv" "strings" @@ -29,9 +30,42 @@ type Key struct { isAutoIncrement bool isBooleanType bool + isShadow bool + shadows []*Key + Comment string } +// newKey simply return a key object with given values. +func newKey(s *Section, name, val string) *Key { + return &Key{ + s: s, + name: name, + value: val, + } +} + +func (k *Key) addShadow(val string) error { + if k.isShadow { + return errors.New("cannot add shadow to another shadow key") + } else if k.isAutoIncrement || k.isBooleanType { + return errors.New("cannot add shadow to auto-increment or boolean key") + } + + shadow := newKey(k.s, k.name, val) + shadow.isShadow = true + k.shadows = append(k.shadows, shadow) + return nil +} + +// AddShadow adds a new shadow key to itself. +func (k *Key) AddShadow(val string) error { + if !k.s.f.options.AllowShadows { + return errors.New("shadow key is not allowed") + } + return k.addShadow(val) +} + // ValueMapper represents a mapping function for values, e.g. os.ExpandEnv type ValueMapper func(string) string @@ -45,16 +79,29 @@ func (k *Key) Value() string { return k.value } -// String returns string representation of value. -func (k *Key) String() string { - val := k.value +// ValueWithShadows returns raw values of key and its shadows if any. +func (k *Key) ValueWithShadows() []string { + if len(k.shadows) == 0 { + return []string{k.value} + } + vals := make([]string, len(k.shadows)+1) + vals[0] = k.value + for i := range k.shadows { + vals[i+1] = k.shadows[i].value + } + return vals +} + +// transformValue takes a raw value and transforms to its final string. 
+func (k *Key) transformValue(val string) string { if k.s.f.ValueMapper != nil { val = k.s.f.ValueMapper(val) } - if strings.Index(val, "%") == -1 { + + // Fail-fast if no indicate char found for recursive value + if !strings.Contains(val, "%") { return val } - for i := 0; i < _DEPTH_VALUES; i++ { vr := varPattern.FindString(val) if len(vr) == 0 { @@ -78,6 +125,11 @@ func (k *Key) String() string { return val } +// String returns string representation of value. +func (k *Key) String() string { + return k.transformValue(k.value) +} + // Validate accepts a validate function which can // return modifed result as key value. func (k *Key) Validate(fn func(string) string) string { @@ -394,11 +446,31 @@ func (k *Key) Strings(delim string) []string { vals := strings.Split(str, delim) for i := range vals { + // vals[i] = k.transformValue(strings.TrimSpace(vals[i])) vals[i] = strings.TrimSpace(vals[i]) } return vals } +// StringsWithShadows returns list of string divided by given delimiter. +// Shadows will also be appended if any. +func (k *Key) StringsWithShadows(delim string) []string { + vals := k.ValueWithShadows() + results := make([]string, 0, len(vals)*2) + for i := range vals { + if len(vals) == 0 { + continue + } + + results = append(results, strings.Split(vals[i], delim)...) + } + + for i := range results { + results[i] = k.transformValue(strings.TrimSpace(results[i])) + } + return results +} + // Float64s returns list of float64 divided by given delimiter. Any invalid input will be treated as zero value. func (k *Key) Float64s(delim string) []float64 { vals, _ := k.getFloat64s(delim, true, false) @@ -407,13 +479,13 @@ func (k *Key) Float64s(delim string) []float64 { // Ints returns list of int divided by given delimiter. Any invalid input will be treated as zero value. func (k *Key) Ints(delim string) []int { - vals, _ := k.getInts(delim, true, false) + vals, _ := k.parseInts(k.Strings(delim), true, false) return vals } // Int64s returns list of int64 divided by given delimiter. Any invalid input will be treated as zero value. func (k *Key) Int64s(delim string) []int64 { - vals, _ := k.getInt64s(delim, true, false) + vals, _ := k.parseInt64s(k.Strings(delim), true, false) return vals } @@ -452,14 +524,14 @@ func (k *Key) ValidFloat64s(delim string) []float64 { // ValidInts returns list of int divided by given delimiter. If some value is not integer, then it will // not be included to result list. func (k *Key) ValidInts(delim string) []int { - vals, _ := k.getInts(delim, false, false) + vals, _ := k.parseInts(k.Strings(delim), false, false) return vals } // ValidInt64s returns list of int64 divided by given delimiter. If some value is not 64-bit integer, // then it will not be included to result list. func (k *Key) ValidInt64s(delim string) []int64 { - vals, _ := k.getInt64s(delim, false, false) + vals, _ := k.parseInt64s(k.Strings(delim), false, false) return vals } @@ -495,12 +567,12 @@ func (k *Key) StrictFloat64s(delim string) ([]float64, error) { // StrictInts returns list of int divided by given delimiter or error on first invalid input. func (k *Key) StrictInts(delim string) ([]int, error) { - return k.getInts(delim, false, true) + return k.parseInts(k.Strings(delim), false, true) } // StrictInt64s returns list of int64 divided by given delimiter or error on first invalid input. 
func (k *Key) StrictInt64s(delim string) ([]int64, error) { - return k.getInt64s(delim, false, true) + return k.parseInt64s(k.Strings(delim), false, true) } // StrictUints returns list of uint divided by given delimiter or error on first invalid input. @@ -541,9 +613,8 @@ func (k *Key) getFloat64s(delim string, addInvalid, returnOnInvalid bool) ([]flo return vals, nil } -// getInts returns list of int divided by given delimiter. -func (k *Key) getInts(delim string, addInvalid, returnOnInvalid bool) ([]int, error) { - strs := k.Strings(delim) +// parseInts transforms strings to ints. +func (k *Key) parseInts(strs []string, addInvalid, returnOnInvalid bool) ([]int, error) { vals := make([]int, 0, len(strs)) for _, str := range strs { val, err := strconv.Atoi(str) @@ -557,9 +628,8 @@ func (k *Key) getInts(delim string, addInvalid, returnOnInvalid bool) ([]int, er return vals, nil } -// getInt64s returns list of int64 divided by given delimiter. -func (k *Key) getInt64s(delim string, addInvalid, returnOnInvalid bool) ([]int64, error) { - strs := k.Strings(delim) +// parseInt64s transforms strings to int64s. +func (k *Key) parseInt64s(strs []string, addInvalid, returnOnInvalid bool) ([]int64, error) { vals := make([]int64, 0, len(strs)) for _, str := range strs { val, err := strconv.ParseInt(str, 10, 64) diff --git a/vendor/github.com/go-ini/ini/parser.go b/vendor/github.com/go-ini/ini/parser.go index b0aabe33b1b..673ef80ca26 100644 --- a/vendor/github.com/go-ini/ini/parser.go +++ b/vendor/github.com/go-ini/ini/parser.go @@ -318,11 +318,14 @@ func (f *File) parse(reader io.Reader) (err error) { if err != nil { // Treat as boolean key when desired, and whole line is key name. if IsErrDelimiterNotFound(err) && f.options.AllowBooleanKeys { - key, err := section.NewKey(string(line), "true") + kname, err := p.readValue(line, f.options.IgnoreContinuation) + if err != nil { + return err + } + key, err := section.NewBooleanKey(kname) if err != nil { return err } - key.isBooleanType = true key.Comment = strings.TrimSpace(p.comment.String()) p.comment.Reset() continue @@ -338,17 +341,16 @@ func (f *File) parse(reader io.Reader) (err error) { p.count++ } - key, err := section.NewKey(kname, "") + value, err := p.readValue(line[offset:], f.options.IgnoreContinuation) if err != nil { return err } - key.isAutoIncrement = isAutoIncr - value, err := p.readValue(line[offset:], f.options.IgnoreContinuation) + key, err := section.NewKey(kname, value) if err != nil { return err } - key.SetValue(value) + key.isAutoIncrement = isAutoIncr key.Comment = strings.TrimSpace(p.comment.String()) p.comment.Reset() } diff --git a/vendor/github.com/go-ini/ini/section.go b/vendor/github.com/go-ini/ini/section.go index 45d2f3bfd00..c9fa27e9ca7 100644 --- a/vendor/github.com/go-ini/ini/section.go +++ b/vendor/github.com/go-ini/ini/section.go @@ -68,20 +68,33 @@ func (s *Section) NewKey(name, val string) (*Key, error) { } if inSlice(name, s.keyList) { - s.keys[name].value = val + if s.f.options.AllowShadows { + if err := s.keys[name].addShadow(val); err != nil { + return nil, err + } + } else { + s.keys[name].value = val + } return s.keys[name], nil } s.keyList = append(s.keyList, name) - s.keys[name] = &Key{ - s: s, - name: name, - value: val, - } + s.keys[name] = newKey(s, name, val) s.keysHash[name] = val return s.keys[name], nil } +// NewBooleanKey creates a new boolean type key to given section. 
+func (s *Section) NewBooleanKey(name string) (*Key, error) { + key, err := s.NewKey(name, "true") + if err != nil { + return nil, err + } + + key.isBooleanType = true + return key, nil +} + // GetKey returns key in section by given name. func (s *Section) GetKey(name string) (*Key, error) { // FIXME: change to section level lock? diff --git a/vendor/github.com/go-ini/ini/struct.go b/vendor/github.com/go-ini/ini/struct.go index 5ef38d865f0..509c682fa6d 100644 --- a/vendor/github.com/go-ini/ini/struct.go +++ b/vendor/github.com/go-ini/ini/struct.go @@ -78,8 +78,14 @@ func parseDelim(actual string) string { var reflectTime = reflect.TypeOf(time.Now()).Kind() // setSliceWithProperType sets proper values to slice based on its type. -func setSliceWithProperType(key *Key, field reflect.Value, delim string) error { - strs := key.Strings(delim) +func setSliceWithProperType(key *Key, field reflect.Value, delim string, allowShadow bool) error { + var strs []string + if allowShadow { + strs = key.StringsWithShadows(delim) + } else { + strs = key.Strings(delim) + } + numVals := len(strs) if numVals == 0 { return nil @@ -92,9 +98,9 @@ func setSliceWithProperType(key *Key, field reflect.Value, delim string) error { case reflect.String: vals = strs case reflect.Int: - vals = key.Ints(delim) + vals, _ = key.parseInts(strs, true, false) case reflect.Int64: - vals = key.Int64s(delim) + vals, _ = key.parseInt64s(strs, true, false) case reflect.Uint: vals = key.Uints(delim) case reflect.Uint64: @@ -133,7 +139,7 @@ func setSliceWithProperType(key *Key, field reflect.Value, delim string) error { // setWithProperType sets proper value to field based on its type, // but it does not return error for failing parsing, // because we want to use default value that is already assigned to strcut. 
-func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim string) error { +func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim string, allowShadow bool) error { switch t.Kind() { case reflect.String: if len(key.String()) == 0 { @@ -187,13 +193,25 @@ func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim stri } field.Set(reflect.ValueOf(timeVal)) case reflect.Slice: - return setSliceWithProperType(key, field, delim) + return setSliceWithProperType(key, field, delim, allowShadow) default: return fmt.Errorf("unsupported type '%s'", t) } return nil } +func parseTagOptions(tag string) (rawName string, omitEmpty bool, allowShadow bool) { + opts := strings.SplitN(tag, ",", 3) + rawName = opts[0] + if len(opts) > 1 { + omitEmpty = opts[1] == "omitempty" + } + if len(opts) > 2 { + allowShadow = opts[2] == "allowshadow" + } + return rawName, omitEmpty, allowShadow +} + func (s *Section) mapTo(val reflect.Value) error { if val.Kind() == reflect.Ptr { val = val.Elem() @@ -209,8 +227,8 @@ func (s *Section) mapTo(val reflect.Value) error { continue } - opts := strings.SplitN(tag, ",", 2) // strip off possible omitempty - fieldName := s.parseFieldName(tpField.Name, opts[0]) + rawName, _, allowShadow := parseTagOptions(tag) + fieldName := s.parseFieldName(tpField.Name, rawName) if len(fieldName) == 0 || !field.CanSet() { continue } @@ -231,7 +249,8 @@ func (s *Section) mapTo(val reflect.Value) error { } if key, err := s.GetKey(fieldName); err == nil { - if err = setWithProperType(tpField.Type, key, field, parseDelim(tpField.Tag.Get("delim"))); err != nil { + delim := parseDelim(tpField.Tag.Get("delim")) + if err = setWithProperType(tpField.Type, key, field, delim, allowShadow); err != nil { return fmt.Errorf("error mapping field(%s): %v", fieldName, err) } } diff --git a/vendor/github.com/golang/snappy/AUTHORS b/vendor/github.com/golang/snappy/AUTHORS new file mode 100644 index 00000000000..bcfa19520af --- /dev/null +++ b/vendor/github.com/golang/snappy/AUTHORS @@ -0,0 +1,15 @@ +# This is the official list of Snappy-Go authors for copyright purposes. +# This file is distinct from the CONTRIBUTORS files. +# See the latter for an explanation. + +# Names should be added to this file as +# Name or Organization +# The email address is not required for organizations. + +# Please keep the list sorted. + +Damian Gryski +Google Inc. +Jan Mercl <0xjnml@gmail.com> +Rodolfo Carvalho +Sebastien Binet diff --git a/vendor/github.com/golang/snappy/CONTRIBUTORS b/vendor/github.com/golang/snappy/CONTRIBUTORS new file mode 100644 index 00000000000..931ae31606f --- /dev/null +++ b/vendor/github.com/golang/snappy/CONTRIBUTORS @@ -0,0 +1,37 @@ +# This is the official list of people who can contribute +# (and typically have contributed) code to the Snappy-Go repository. +# The AUTHORS file lists the copyright holders; this file +# lists people. For example, Google employees are listed here +# but not in AUTHORS, because Google holds the copyright. +# +# The submission process automatically checks to make sure +# that people submitting code are listed in this file (by email address). 
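The go-ini hunks above add shadow values for repeated keys (`Section.NewKey` with `AllowShadows`, `Key.StringsWithShadows`) plus an `allowshadow` struct-tag option handled by `parseTagOptions`. A hedged sketch of how these fit together, assuming go-ini's public `LoadSources`, `LoadOptions`, and `Section.MapTo` API; the struct and section names are made up for illustration:

```go
package main

import (
	"fmt"

	ini "github.com/go-ini/ini"
)

// networkConfig is a hypothetical struct; the third tag option
// ("allowshadow") is the one parseTagOptions recognizes.
type networkConfig struct {
	Ports []int `ini:"ports,omitempty,allowshadow"`
}

func main() {
	data := []byte(`
[network]
ports = 8080
ports = 8081
ports = 8082
`)

	// With AllowShadows, a repeated key keeps every value as a "shadow"
	// instead of overwriting the previous one (see Section.NewKey above).
	cfg, err := ini.LoadSources(ini.LoadOptions{AllowShadows: true}, data)
	if err != nil {
		panic(err)
	}
	sec := cfg.Section("network")

	// StringsWithShadows splits the primary value and every shadow value.
	fmt.Println(sec.Key("ports").StringsWithShadows(",")) // [8080 8081 8082]

	// The allowshadow tag routes slice mapping through StringsWithShadows.
	var nc networkConfig
	if err := sec.MapTo(&nc); err != nil {
		panic(err)
	}
	fmt.Println(nc.Ports) // [8080 8081 8082]
}
```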
+# +# Names should be added to this file only after verifying that +# the individual or the individual's organization has agreed to +# the appropriate Contributor License Agreement, found here: +# +# http://code.google.com/legal/individual-cla-v1.0.html +# http://code.google.com/legal/corporate-cla-v1.0.html +# +# The agreement for individuals can be filled out on the web. +# +# When adding J Random Contributor's name to this file, +# either J's name or J's organization's name should be +# added to the AUTHORS file, depending on whether the +# individual or corporate CLA was used. + +# Names should be added to this file like so: +# Name + +# Please keep the list sorted. + +Damian Gryski +Jan Mercl <0xjnml@gmail.com> +Kai Backman +Marc-Antoine Ruel +Nigel Tao +Rob Pike +Rodolfo Carvalho +Russ Cox +Sebastien Binet diff --git a/vendor/github.com/golang/snappy/LICENSE b/vendor/github.com/golang/snappy/LICENSE new file mode 100644 index 00000000000..6050c10f4c8 --- /dev/null +++ b/vendor/github.com/golang/snappy/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2011 The Snappy-Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/golang/snappy/README b/vendor/github.com/golang/snappy/README new file mode 100644 index 00000000000..cea12879a0e --- /dev/null +++ b/vendor/github.com/golang/snappy/README @@ -0,0 +1,107 @@ +The Snappy compression format in the Go programming language. + +To download and install from source: +$ go get github.com/golang/snappy + +Unless otherwise noted, the Snappy-Go source files are distributed +under the BSD-style license found in the LICENSE file. + + + +Benchmarks. + +The golang/snappy benchmarks include compressing (Z) and decompressing (U) ten +or so files, the same set used by the C++ Snappy code (github.com/google/snappy +and note the "google", not "golang"). On an "Intel(R) Core(TM) i7-3770 CPU @ +3.40GHz", Go's GOARCH=amd64 numbers as of 2016-05-29: + +"go test -test.bench=." 
+ +_UFlat0-8 2.19GB/s ± 0% html +_UFlat1-8 1.41GB/s ± 0% urls +_UFlat2-8 23.5GB/s ± 2% jpg +_UFlat3-8 1.91GB/s ± 0% jpg_200 +_UFlat4-8 14.0GB/s ± 1% pdf +_UFlat5-8 1.97GB/s ± 0% html4 +_UFlat6-8 814MB/s ± 0% txt1 +_UFlat7-8 785MB/s ± 0% txt2 +_UFlat8-8 857MB/s ± 0% txt3 +_UFlat9-8 719MB/s ± 1% txt4 +_UFlat10-8 2.84GB/s ± 0% pb +_UFlat11-8 1.05GB/s ± 0% gaviota + +_ZFlat0-8 1.04GB/s ± 0% html +_ZFlat1-8 534MB/s ± 0% urls +_ZFlat2-8 15.7GB/s ± 1% jpg +_ZFlat3-8 740MB/s ± 3% jpg_200 +_ZFlat4-8 9.20GB/s ± 1% pdf +_ZFlat5-8 991MB/s ± 0% html4 +_ZFlat6-8 379MB/s ± 0% txt1 +_ZFlat7-8 352MB/s ± 0% txt2 +_ZFlat8-8 396MB/s ± 1% txt3 +_ZFlat9-8 327MB/s ± 1% txt4 +_ZFlat10-8 1.33GB/s ± 1% pb +_ZFlat11-8 605MB/s ± 1% gaviota + + + +"go test -test.bench=. -tags=noasm" + +_UFlat0-8 621MB/s ± 2% html +_UFlat1-8 494MB/s ± 1% urls +_UFlat2-8 23.2GB/s ± 1% jpg +_UFlat3-8 1.12GB/s ± 1% jpg_200 +_UFlat4-8 4.35GB/s ± 1% pdf +_UFlat5-8 609MB/s ± 0% html4 +_UFlat6-8 296MB/s ± 0% txt1 +_UFlat7-8 288MB/s ± 0% txt2 +_UFlat8-8 309MB/s ± 1% txt3 +_UFlat9-8 280MB/s ± 1% txt4 +_UFlat10-8 753MB/s ± 0% pb +_UFlat11-8 400MB/s ± 0% gaviota + +_ZFlat0-8 409MB/s ± 1% html +_ZFlat1-8 250MB/s ± 1% urls +_ZFlat2-8 12.3GB/s ± 1% jpg +_ZFlat3-8 132MB/s ± 0% jpg_200 +_ZFlat4-8 2.92GB/s ± 0% pdf +_ZFlat5-8 405MB/s ± 1% html4 +_ZFlat6-8 179MB/s ± 1% txt1 +_ZFlat7-8 170MB/s ± 1% txt2 +_ZFlat8-8 189MB/s ± 1% txt3 +_ZFlat9-8 164MB/s ± 1% txt4 +_ZFlat10-8 479MB/s ± 1% pb +_ZFlat11-8 270MB/s ± 1% gaviota + + + +For comparison (Go's encoded output is byte-for-byte identical to C++'s), here +are the numbers from C++ Snappy's + +make CXXFLAGS="-O2 -DNDEBUG -g" clean snappy_unittest.log && cat snappy_unittest.log + +BM_UFlat/0 2.4GB/s html +BM_UFlat/1 1.4GB/s urls +BM_UFlat/2 21.8GB/s jpg +BM_UFlat/3 1.5GB/s jpg_200 +BM_UFlat/4 13.3GB/s pdf +BM_UFlat/5 2.1GB/s html4 +BM_UFlat/6 1.0GB/s txt1 +BM_UFlat/7 959.4MB/s txt2 +BM_UFlat/8 1.0GB/s txt3 +BM_UFlat/9 864.5MB/s txt4 +BM_UFlat/10 2.9GB/s pb +BM_UFlat/11 1.2GB/s gaviota + +BM_ZFlat/0 944.3MB/s html (22.31 %) +BM_ZFlat/1 501.6MB/s urls (47.78 %) +BM_ZFlat/2 14.3GB/s jpg (99.95 %) +BM_ZFlat/3 538.3MB/s jpg_200 (73.00 %) +BM_ZFlat/4 8.3GB/s pdf (83.30 %) +BM_ZFlat/5 903.5MB/s html4 (22.52 %) +BM_ZFlat/6 336.0MB/s txt1 (57.88 %) +BM_ZFlat/7 312.3MB/s txt2 (61.91 %) +BM_ZFlat/8 353.1MB/s txt3 (54.99 %) +BM_ZFlat/9 289.9MB/s txt4 (66.26 %) +BM_ZFlat/10 1.2GB/s pb (19.68 %) +BM_ZFlat/11 527.4MB/s gaviota (37.72 %) diff --git a/vendor/github.com/golang/snappy/decode.go b/vendor/github.com/golang/snappy/decode.go new file mode 100644 index 00000000000..72efb0353dd --- /dev/null +++ b/vendor/github.com/golang/snappy/decode.go @@ -0,0 +1,237 @@ +// Copyright 2011 The Snappy-Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package snappy + +import ( + "encoding/binary" + "errors" + "io" +) + +var ( + // ErrCorrupt reports that the input is invalid. + ErrCorrupt = errors.New("snappy: corrupt input") + // ErrTooLarge reports that the uncompressed length is too large. + ErrTooLarge = errors.New("snappy: decoded block is too large") + // ErrUnsupported reports that the input isn't supported. + ErrUnsupported = errors.New("snappy: unsupported input") + + errUnsupportedLiteralLength = errors.New("snappy: unsupported literal length") +) + +// DecodedLen returns the length of the decoded block. 
+func DecodedLen(src []byte) (int, error) { + v, _, err := decodedLen(src) + return v, err +} + +// decodedLen returns the length of the decoded block and the number of bytes +// that the length header occupied. +func decodedLen(src []byte) (blockLen, headerLen int, err error) { + v, n := binary.Uvarint(src) + if n <= 0 || v > 0xffffffff { + return 0, 0, ErrCorrupt + } + + const wordSize = 32 << (^uint(0) >> 32 & 1) + if wordSize == 32 && v > 0x7fffffff { + return 0, 0, ErrTooLarge + } + return int(v), n, nil +} + +const ( + decodeErrCodeCorrupt = 1 + decodeErrCodeUnsupportedLiteralLength = 2 +) + +// Decode returns the decoded form of src. The returned slice may be a sub- +// slice of dst if dst was large enough to hold the entire decoded block. +// Otherwise, a newly allocated slice will be returned. +// +// The dst and src must not overlap. It is valid to pass a nil dst. +func Decode(dst, src []byte) ([]byte, error) { + dLen, s, err := decodedLen(src) + if err != nil { + return nil, err + } + if dLen <= len(dst) { + dst = dst[:dLen] + } else { + dst = make([]byte, dLen) + } + switch decode(dst, src[s:]) { + case 0: + return dst, nil + case decodeErrCodeUnsupportedLiteralLength: + return nil, errUnsupportedLiteralLength + } + return nil, ErrCorrupt +} + +// NewReader returns a new Reader that decompresses from r, using the framing +// format described at +// https://github.com/google/snappy/blob/master/framing_format.txt +func NewReader(r io.Reader) *Reader { + return &Reader{ + r: r, + decoded: make([]byte, maxBlockSize), + buf: make([]byte, maxEncodedLenOfMaxBlockSize+checksumSize), + } +} + +// Reader is an io.Reader that can read Snappy-compressed bytes. +type Reader struct { + r io.Reader + err error + decoded []byte + buf []byte + // decoded[i:j] contains decoded bytes that have not yet been passed on. + i, j int + readHeader bool +} + +// Reset discards any buffered data, resets all state, and switches the Snappy +// reader to read from r. This permits reusing a Reader rather than allocating +// a new one. +func (r *Reader) Reset(reader io.Reader) { + r.r = reader + r.err = nil + r.i = 0 + r.j = 0 + r.readHeader = false +} + +func (r *Reader) readFull(p []byte, allowEOF bool) (ok bool) { + if _, r.err = io.ReadFull(r.r, p); r.err != nil { + if r.err == io.ErrUnexpectedEOF || (r.err == io.EOF && !allowEOF) { + r.err = ErrCorrupt + } + return false + } + return true +} + +// Read satisfies the io.Reader interface. +func (r *Reader) Read(p []byte) (int, error) { + if r.err != nil { + return 0, r.err + } + for { + if r.i < r.j { + n := copy(p, r.decoded[r.i:r.j]) + r.i += n + return n, nil + } + if !r.readFull(r.buf[:4], true) { + return 0, r.err + } + chunkType := r.buf[0] + if !r.readHeader { + if chunkType != chunkTypeStreamIdentifier { + r.err = ErrCorrupt + return 0, r.err + } + r.readHeader = true + } + chunkLen := int(r.buf[1]) | int(r.buf[2])<<8 | int(r.buf[3])<<16 + if chunkLen > len(r.buf) { + r.err = ErrUnsupported + return 0, r.err + } + + // The chunk types are specified at + // https://github.com/google/snappy/blob/master/framing_format.txt + switch chunkType { + case chunkTypeCompressedData: + // Section 4.2. Compressed data (chunk type 0x00). 
+ if chunkLen < checksumSize { + r.err = ErrCorrupt + return 0, r.err + } + buf := r.buf[:chunkLen] + if !r.readFull(buf, false) { + return 0, r.err + } + checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24 + buf = buf[checksumSize:] + + n, err := DecodedLen(buf) + if err != nil { + r.err = err + return 0, r.err + } + if n > len(r.decoded) { + r.err = ErrCorrupt + return 0, r.err + } + if _, err := Decode(r.decoded, buf); err != nil { + r.err = err + return 0, r.err + } + if crc(r.decoded[:n]) != checksum { + r.err = ErrCorrupt + return 0, r.err + } + r.i, r.j = 0, n + continue + + case chunkTypeUncompressedData: + // Section 4.3. Uncompressed data (chunk type 0x01). + if chunkLen < checksumSize { + r.err = ErrCorrupt + return 0, r.err + } + buf := r.buf[:checksumSize] + if !r.readFull(buf, false) { + return 0, r.err + } + checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24 + // Read directly into r.decoded instead of via r.buf. + n := chunkLen - checksumSize + if n > len(r.decoded) { + r.err = ErrCorrupt + return 0, r.err + } + if !r.readFull(r.decoded[:n], false) { + return 0, r.err + } + if crc(r.decoded[:n]) != checksum { + r.err = ErrCorrupt + return 0, r.err + } + r.i, r.j = 0, n + continue + + case chunkTypeStreamIdentifier: + // Section 4.1. Stream identifier (chunk type 0xff). + if chunkLen != len(magicBody) { + r.err = ErrCorrupt + return 0, r.err + } + if !r.readFull(r.buf[:len(magicBody)], false) { + return 0, r.err + } + for i := 0; i < len(magicBody); i++ { + if r.buf[i] != magicBody[i] { + r.err = ErrCorrupt + return 0, r.err + } + } + continue + } + + if chunkType <= 0x7f { + // Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f). + r.err = ErrUnsupported + return 0, r.err + } + // Section 4.4 Padding (chunk type 0xfe). + // Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd). + if !r.readFull(r.buf[:chunkLen], false) { + return 0, r.err + } + } +} diff --git a/vendor/github.com/golang/snappy/decode_amd64.go b/vendor/github.com/golang/snappy/decode_amd64.go new file mode 100644 index 00000000000..fcd192b849e --- /dev/null +++ b/vendor/github.com/golang/snappy/decode_amd64.go @@ -0,0 +1,14 @@ +// Copyright 2016 The Snappy-Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !appengine +// +build gc +// +build !noasm + +package snappy + +// decode has the same semantics as in decode_other.go. +// +//go:noescape +func decode(dst, src []byte) int diff --git a/vendor/github.com/golang/snappy/decode_amd64.s b/vendor/github.com/golang/snappy/decode_amd64.s new file mode 100644 index 00000000000..e6179f65e35 --- /dev/null +++ b/vendor/github.com/golang/snappy/decode_amd64.s @@ -0,0 +1,490 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !appengine +// +build gc +// +build !noasm + +#include "textflag.h" + +// The asm code generally follows the pure Go code in decode_other.go, except +// where marked with a "!!!". + +// func decode(dst, src []byte) int +// +// All local variables fit into registers. The non-zero stack size is only to +// spill registers and push args when issuing a CALL. 
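As a small worked example of the block format that `decodedLen` and `Decode` parse (a sketch, not part of this diff): a block is a uvarint decompressed length followed by tagged literal/copy elements, so a three-byte literal should decode like this:

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

func main() {
	// Hand-assembled block: uvarint decompressed length (3), then a single
	// literal element whose tag byte is (len-1)<<2 | tagLiteral = 0x08,
	// followed by the literal bytes themselves.
	block := []byte{0x03, 0x08, 'a', 'b', 'c'}

	n, err := snappy.DecodedLen(block)
	fmt.Println(n, err) // 3 <nil>

	out, err := snappy.Decode(nil, block)
	fmt.Println(string(out), err) // abc <nil>
}
```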
The register allocation: +// - AX scratch +// - BX scratch +// - CX length or x +// - DX offset +// - SI &src[s] +// - DI &dst[d] +// + R8 dst_base +// + R9 dst_len +// + R10 dst_base + dst_len +// + R11 src_base +// + R12 src_len +// + R13 src_base + src_len +// - R14 used by doCopy +// - R15 used by doCopy +// +// The registers R8-R13 (marked with a "+") are set at the start of the +// function, and after a CALL returns, and are not otherwise modified. +// +// The d variable is implicitly DI - R8, and len(dst)-d is R10 - DI. +// The s variable is implicitly SI - R11, and len(src)-s is R13 - SI. +TEXT ·decode(SB), NOSPLIT, $48-56 + // Initialize SI, DI and R8-R13. + MOVQ dst_base+0(FP), R8 + MOVQ dst_len+8(FP), R9 + MOVQ R8, DI + MOVQ R8, R10 + ADDQ R9, R10 + MOVQ src_base+24(FP), R11 + MOVQ src_len+32(FP), R12 + MOVQ R11, SI + MOVQ R11, R13 + ADDQ R12, R13 + +loop: + // for s < len(src) + CMPQ SI, R13 + JEQ end + + // CX = uint32(src[s]) + // + // switch src[s] & 0x03 + MOVBLZX (SI), CX + MOVL CX, BX + ANDL $3, BX + CMPL BX, $1 + JAE tagCopy + + // ---------------------------------------- + // The code below handles literal tags. + + // case tagLiteral: + // x := uint32(src[s] >> 2) + // switch + SHRL $2, CX + CMPL CX, $60 + JAE tagLit60Plus + + // case x < 60: + // s++ + INCQ SI + +doLit: + // This is the end of the inner "switch", when we have a literal tag. + // + // We assume that CX == x and x fits in a uint32, where x is the variable + // used in the pure Go decode_other.go code. + + // length = int(x) + 1 + // + // Unlike the pure Go code, we don't need to check if length <= 0 because + // CX can hold 64 bits, so the increment cannot overflow. + INCQ CX + + // Prepare to check if copying length bytes will run past the end of dst or + // src. + // + // AX = len(dst) - d + // BX = len(src) - s + MOVQ R10, AX + SUBQ DI, AX + MOVQ R13, BX + SUBQ SI, BX + + // !!! Try a faster technique for short (16 or fewer bytes) copies. + // + // if length > 16 || len(dst)-d < 16 || len(src)-s < 16 { + // goto callMemmove // Fall back on calling runtime·memmove. + // } + // + // The C++ snappy code calls this TryFastAppend. It also checks len(src)-s + // against 21 instead of 16, because it cannot assume that all of its input + // is contiguous in memory and so it needs to leave enough source bytes to + // read the next tag without refilling buffers, but Go's Decode assumes + // contiguousness (the src argument is a []byte). + CMPQ CX, $16 + JGT callMemmove + CMPQ AX, $16 + JLT callMemmove + CMPQ BX, $16 + JLT callMemmove + + // !!! Implement the copy from src to dst as a 16-byte load and store. + // (Decode's documentation says that dst and src must not overlap.) + // + // This always copies 16 bytes, instead of only length bytes, but that's + // OK. If the input is a valid Snappy encoding then subsequent iterations + // will fix up the overrun. Otherwise, Decode returns a nil []byte (and a + // non-nil error), so the overrun will be ignored. + // + // Note that on amd64, it is legal and cheap to issue unaligned 8-byte or + // 16-byte loads and stores. This technique probably wouldn't be as + // effective on architectures that are fussier about alignment. 
+ MOVOU 0(SI), X0 + MOVOU X0, 0(DI) + + // d += length + // s += length + ADDQ CX, DI + ADDQ CX, SI + JMP loop + +callMemmove: + // if length > len(dst)-d || length > len(src)-s { etc } + CMPQ CX, AX + JGT errCorrupt + CMPQ CX, BX + JGT errCorrupt + + // copy(dst[d:], src[s:s+length]) + // + // This means calling runtime·memmove(&dst[d], &src[s], length), so we push + // DI, SI and CX as arguments. Coincidentally, we also need to spill those + // three registers to the stack, to save local variables across the CALL. + MOVQ DI, 0(SP) + MOVQ SI, 8(SP) + MOVQ CX, 16(SP) + MOVQ DI, 24(SP) + MOVQ SI, 32(SP) + MOVQ CX, 40(SP) + CALL runtime·memmove(SB) + + // Restore local variables: unspill registers from the stack and + // re-calculate R8-R13. + MOVQ 24(SP), DI + MOVQ 32(SP), SI + MOVQ 40(SP), CX + MOVQ dst_base+0(FP), R8 + MOVQ dst_len+8(FP), R9 + MOVQ R8, R10 + ADDQ R9, R10 + MOVQ src_base+24(FP), R11 + MOVQ src_len+32(FP), R12 + MOVQ R11, R13 + ADDQ R12, R13 + + // d += length + // s += length + ADDQ CX, DI + ADDQ CX, SI + JMP loop + +tagLit60Plus: + // !!! This fragment does the + // + // s += x - 58; if uint(s) > uint(len(src)) { etc } + // + // checks. In the asm version, we code it once instead of once per switch case. + ADDQ CX, SI + SUBQ $58, SI + MOVQ SI, BX + SUBQ R11, BX + CMPQ BX, R12 + JA errCorrupt + + // case x == 60: + CMPL CX, $61 + JEQ tagLit61 + JA tagLit62Plus + + // x = uint32(src[s-1]) + MOVBLZX -1(SI), CX + JMP doLit + +tagLit61: + // case x == 61: + // x = uint32(src[s-2]) | uint32(src[s-1])<<8 + MOVWLZX -2(SI), CX + JMP doLit + +tagLit62Plus: + CMPL CX, $62 + JA tagLit63 + + // case x == 62: + // x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16 + MOVWLZX -3(SI), CX + MOVBLZX -1(SI), BX + SHLL $16, BX + ORL BX, CX + JMP doLit + +tagLit63: + // case x == 63: + // x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24 + MOVL -4(SI), CX + JMP doLit + +// The code above handles literal tags. +// ---------------------------------------- +// The code below handles copy tags. + +tagCopy4: + // case tagCopy4: + // s += 5 + ADDQ $5, SI + + // if uint(s) > uint(len(src)) { etc } + MOVQ SI, BX + SUBQ R11, BX + CMPQ BX, R12 + JA errCorrupt + + // length = 1 + int(src[s-5])>>2 + SHRQ $2, CX + INCQ CX + + // offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24) + MOVLQZX -4(SI), DX + JMP doCopy + +tagCopy2: + // case tagCopy2: + // s += 3 + ADDQ $3, SI + + // if uint(s) > uint(len(src)) { etc } + MOVQ SI, BX + SUBQ R11, BX + CMPQ BX, R12 + JA errCorrupt + + // length = 1 + int(src[s-3])>>2 + SHRQ $2, CX + INCQ CX + + // offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8) + MOVWQZX -2(SI), DX + JMP doCopy + +tagCopy: + // We have a copy tag. We assume that: + // - BX == src[s] & 0x03 + // - CX == src[s] + CMPQ BX, $2 + JEQ tagCopy2 + JA tagCopy4 + + // case tagCopy1: + // s += 2 + ADDQ $2, SI + + // if uint(s) > uint(len(src)) { etc } + MOVQ SI, BX + SUBQ R11, BX + CMPQ BX, R12 + JA errCorrupt + + // offset = int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1])) + MOVQ CX, DX + ANDQ $0xe0, DX + SHLQ $3, DX + MOVBQZX -1(SI), BX + ORQ BX, DX + + // length = 4 + int(src[s-2])>>2&0x7 + SHRQ $2, CX + ANDQ $7, CX + ADDQ $4, CX + +doCopy: + // This is the end of the outer "switch", when we have a copy tag. 
+ // + // We assume that: + // - CX == length && CX > 0 + // - DX == offset + + // if offset <= 0 { etc } + CMPQ DX, $0 + JLE errCorrupt + + // if d < offset { etc } + MOVQ DI, BX + SUBQ R8, BX + CMPQ BX, DX + JLT errCorrupt + + // if length > len(dst)-d { etc } + MOVQ R10, BX + SUBQ DI, BX + CMPQ CX, BX + JGT errCorrupt + + // forwardCopy(dst[d:d+length], dst[d-offset:]); d += length + // + // Set: + // - R14 = len(dst)-d + // - R15 = &dst[d-offset] + MOVQ R10, R14 + SUBQ DI, R14 + MOVQ DI, R15 + SUBQ DX, R15 + + // !!! Try a faster technique for short (16 or fewer bytes) forward copies. + // + // First, try using two 8-byte load/stores, similar to the doLit technique + // above. Even if dst[d:d+length] and dst[d-offset:] can overlap, this is + // still OK if offset >= 8. Note that this has to be two 8-byte load/stores + // and not one 16-byte load/store, and the first store has to be before the + // second load, due to the overlap if offset is in the range [8, 16). + // + // if length > 16 || offset < 8 || len(dst)-d < 16 { + // goto slowForwardCopy + // } + // copy 16 bytes + // d += length + CMPQ CX, $16 + JGT slowForwardCopy + CMPQ DX, $8 + JLT slowForwardCopy + CMPQ R14, $16 + JLT slowForwardCopy + MOVQ 0(R15), AX + MOVQ AX, 0(DI) + MOVQ 8(R15), BX + MOVQ BX, 8(DI) + ADDQ CX, DI + JMP loop + +slowForwardCopy: + // !!! If the forward copy is longer than 16 bytes, or if offset < 8, we + // can still try 8-byte load stores, provided we can overrun up to 10 extra + // bytes. As above, the overrun will be fixed up by subsequent iterations + // of the outermost loop. + // + // The C++ snappy code calls this technique IncrementalCopyFastPath. Its + // commentary says: + // + // ---- + // + // The main part of this loop is a simple copy of eight bytes at a time + // until we've copied (at least) the requested amount of bytes. However, + // if d and d-offset are less than eight bytes apart (indicating a + // repeating pattern of length < 8), we first need to expand the pattern in + // order to get the correct results. For instance, if the buffer looks like + // this, with the eight-byte and patterns marked as + // intervals: + // + // abxxxxxxxxxxxx + // [------] d-offset + // [------] d + // + // a single eight-byte copy from to will repeat the pattern + // once, after which we can move two bytes without moving : + // + // ababxxxxxxxxxx + // [------] d-offset + // [------] d + // + // and repeat the exercise until the two no longer overlap. + // + // This allows us to do very well in the special case of one single byte + // repeated many times, without taking a big hit for more general cases. + // + // The worst case of extra writing past the end of the match occurs when + // offset == 1 and length == 1; the last copy will read from byte positions + // [0..7] and write to [4..11], whereas it was only supposed to write to + // position 1. Thus, ten excess bytes. + // + // ---- + // + // That "10 byte overrun" worst case is confirmed by Go's + // TestSlowForwardCopyOverrun, which also tests the fixUpSlowForwardCopy + // and finishSlowForwardCopy algorithm. + // + // if length > len(dst)-d-10 { + // goto verySlowForwardCopy + // } + SUBQ $10, R14 + CMPQ CX, R14 + JGT verySlowForwardCopy + +makeOffsetAtLeast8: + // !!! As above, expand the pattern so that offset >= 8 and we can use + // 8-byte load/stores. 
+ // + // for offset < 8 { + // copy 8 bytes from dst[d-offset:] to dst[d:] + // length -= offset + // d += offset + // offset += offset + // // The two previous lines together means that d-offset, and therefore + // // R15, is unchanged. + // } + CMPQ DX, $8 + JGE fixUpSlowForwardCopy + MOVQ (R15), BX + MOVQ BX, (DI) + SUBQ DX, CX + ADDQ DX, DI + ADDQ DX, DX + JMP makeOffsetAtLeast8 + +fixUpSlowForwardCopy: + // !!! Add length (which might be negative now) to d (implied by DI being + // &dst[d]) so that d ends up at the right place when we jump back to the + // top of the loop. Before we do that, though, we save DI to AX so that, if + // length is positive, copying the remaining length bytes will write to the + // right place. + MOVQ DI, AX + ADDQ CX, DI + +finishSlowForwardCopy: + // !!! Repeat 8-byte load/stores until length <= 0. Ending with a negative + // length means that we overrun, but as above, that will be fixed up by + // subsequent iterations of the outermost loop. + CMPQ CX, $0 + JLE loop + MOVQ (R15), BX + MOVQ BX, (AX) + ADDQ $8, R15 + ADDQ $8, AX + SUBQ $8, CX + JMP finishSlowForwardCopy + +verySlowForwardCopy: + // verySlowForwardCopy is a simple implementation of forward copy. In C + // parlance, this is a do/while loop instead of a while loop, since we know + // that length > 0. In Go syntax: + // + // for { + // dst[d] = dst[d - offset] + // d++ + // length-- + // if length == 0 { + // break + // } + // } + MOVB (R15), BX + MOVB BX, (DI) + INCQ R15 + INCQ DI + DECQ CX + JNZ verySlowForwardCopy + JMP loop + +// The code above handles copy tags. +// ---------------------------------------- + +end: + // This is the end of the "for s < len(src)". + // + // if d != len(dst) { etc } + CMPQ DI, R10 + JNE errCorrupt + + // return 0 + MOVQ $0, ret+48(FP) + RET + +errCorrupt: + // return decodeErrCodeCorrupt + MOVQ $1, ret+48(FP) + RET diff --git a/vendor/github.com/golang/snappy/decode_other.go b/vendor/github.com/golang/snappy/decode_other.go new file mode 100644 index 00000000000..8c9f2049bc7 --- /dev/null +++ b/vendor/github.com/golang/snappy/decode_other.go @@ -0,0 +1,101 @@ +// Copyright 2016 The Snappy-Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !amd64 appengine !gc noasm + +package snappy + +// decode writes the decoding of src to dst. It assumes that the varint-encoded +// length of the decompressed bytes has already been read, and that len(dst) +// equals that length. +// +// It returns 0 on success or a decodeErrCodeXxx error code on failure. +func decode(dst, src []byte) int { + var d, s, offset, length int + for s < len(src) { + switch src[s] & 0x03 { + case tagLiteral: + x := uint32(src[s] >> 2) + switch { + case x < 60: + s++ + case x == 60: + s += 2 + if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line. + return decodeErrCodeCorrupt + } + x = uint32(src[s-1]) + case x == 61: + s += 3 + if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line. + return decodeErrCodeCorrupt + } + x = uint32(src[s-2]) | uint32(src[s-1])<<8 + case x == 62: + s += 4 + if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line. + return decodeErrCodeCorrupt + } + x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16 + case x == 63: + s += 5 + if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line. 
+ return decodeErrCodeCorrupt + } + x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24 + } + length = int(x) + 1 + if length <= 0 { + return decodeErrCodeUnsupportedLiteralLength + } + if length > len(dst)-d || length > len(src)-s { + return decodeErrCodeCorrupt + } + copy(dst[d:], src[s:s+length]) + d += length + s += length + continue + + case tagCopy1: + s += 2 + if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line. + return decodeErrCodeCorrupt + } + length = 4 + int(src[s-2])>>2&0x7 + offset = int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1])) + + case tagCopy2: + s += 3 + if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line. + return decodeErrCodeCorrupt + } + length = 1 + int(src[s-3])>>2 + offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8) + + case tagCopy4: + s += 5 + if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line. + return decodeErrCodeCorrupt + } + length = 1 + int(src[s-5])>>2 + offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24) + } + + if offset <= 0 || d < offset || length > len(dst)-d { + return decodeErrCodeCorrupt + } + // Copy from an earlier sub-slice of dst to a later sub-slice. Unlike + // the built-in copy function, this byte-by-byte copy always runs + // forwards, even if the slices overlap. Conceptually, this is: + // + // d += forwardCopy(dst[d:d+length], dst[d-offset:]) + for end := d + length; d != end; d++ { + dst[d] = dst[d-offset] + } + } + if d != len(dst) { + return decodeErrCodeCorrupt + } + return 0 +} diff --git a/vendor/github.com/golang/snappy/encode.go b/vendor/github.com/golang/snappy/encode.go new file mode 100644 index 00000000000..8d393e904bb --- /dev/null +++ b/vendor/github.com/golang/snappy/encode.go @@ -0,0 +1,285 @@ +// Copyright 2011 The Snappy-Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package snappy + +import ( + "encoding/binary" + "errors" + "io" +) + +// Encode returns the encoded form of src. The returned slice may be a sub- +// slice of dst if dst was large enough to hold the entire encoded block. +// Otherwise, a newly allocated slice will be returned. +// +// The dst and src must not overlap. It is valid to pass a nil dst. +func Encode(dst, src []byte) []byte { + if n := MaxEncodedLen(len(src)); n < 0 { + panic(ErrTooLarge) + } else if len(dst) < n { + dst = make([]byte, n) + } + + // The block starts with the varint-encoded length of the decompressed bytes. + d := binary.PutUvarint(dst, uint64(len(src))) + + for len(src) > 0 { + p := src + src = nil + if len(p) > maxBlockSize { + p, src = p[:maxBlockSize], p[maxBlockSize:] + } + if len(p) < minNonLiteralBlockSize { + d += emitLiteral(dst[d:], p) + } else { + d += encodeBlock(dst[d:], p) + } + } + return dst[:d] +} + +// inputMargin is the minimum number of extra input bytes to keep, inside +// encodeBlock's inner loop. On some architectures, this margin lets us +// implement a fast path for emitLiteral, where the copy of short (<= 16 byte) +// literals can be implemented as a single load to and store from a 16-byte +// register. 
That literal's actual length can be as short as 1 byte, so this +// can copy up to 15 bytes too much, but that's OK as subsequent iterations of +// the encoding loop will fix up the copy overrun, and this inputMargin ensures +// that we don't overrun the dst and src buffers. +const inputMargin = 16 - 1 + +// minNonLiteralBlockSize is the minimum size of the input to encodeBlock that +// could be encoded with a copy tag. This is the minimum with respect to the +// algorithm used by encodeBlock, not a minimum enforced by the file format. +// +// The encoded output must start with at least a 1 byte literal, as there are +// no previous bytes to copy. A minimal (1 byte) copy after that, generated +// from an emitCopy call in encodeBlock's main loop, would require at least +// another inputMargin bytes, for the reason above: we want any emitLiteral +// calls inside encodeBlock's main loop to use the fast path if possible, which +// requires being able to overrun by inputMargin bytes. Thus, +// minNonLiteralBlockSize equals 1 + 1 + inputMargin. +// +// The C++ code doesn't use this exact threshold, but it could, as discussed at +// https://groups.google.com/d/topic/snappy-compression/oGbhsdIJSJ8/discussion +// The difference between Go (2+inputMargin) and C++ (inputMargin) is purely an +// optimization. It should not affect the encoded form. This is tested by +// TestSameEncodingAsCppShortCopies. +const minNonLiteralBlockSize = 1 + 1 + inputMargin + +// MaxEncodedLen returns the maximum length of a snappy block, given its +// uncompressed length. +// +// It will return a negative value if srcLen is too large to encode. +func MaxEncodedLen(srcLen int) int { + n := uint64(srcLen) + if n > 0xffffffff { + return -1 + } + // Compressed data can be defined as: + // compressed := item* literal* + // item := literal* copy + // + // The trailing literal sequence has a space blowup of at most 62/60 + // since a literal of length 60 needs one tag byte + one extra byte + // for length information. + // + // Item blowup is trickier to measure. Suppose the "copy" op copies + // 4 bytes of data. Because of a special check in the encoding code, + // we produce a 4-byte copy only if the offset is < 65536. Therefore + // the copy op takes 3 bytes to encode, and this type of item leads + // to at most the 62/60 blowup for representing literals. + // + // Suppose the "copy" op copies 5 bytes of data. If the offset is big + // enough, it will take 5 bytes to encode the copy op. Therefore the + // worst case here is a one-byte literal followed by a five-byte copy. + // That is, 6 bytes of input turn into 7 bytes of "compressed" data. + // + // This last factor dominates the blowup, so the final estimate is: + n = 32 + n + n/6 + if n > 0xffffffff { + return -1 + } + return int(n) +} + +var errClosed = errors.New("snappy: Writer is closed") + +// NewWriter returns a new Writer that compresses to w. +// +// The Writer returned does not buffer writes. There is no need to Flush or +// Close such a Writer. +// +// Deprecated: the Writer returned is not suitable for many small writes, only +// for few large writes. Use NewBufferedWriter instead, which is efficient +// regardless of the frequency and shape of the writes, and remember to Close +// that Writer when done. 
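Because `MaxEncodedLen` returns a negative value for oversized inputs, callers that pre-size their own destination buffer should check it before calling `Encode` (which otherwise panics with `ErrTooLarge`). A minimal sketch, not from this diff:

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

// sizedEncode pre-sizes dst via MaxEncodedLen so Encode can reuse the buffer
// instead of allocating; the negative-length check mirrors Encode's own panic
// condition shown above.
func sizedEncode(src []byte) ([]byte, error) {
	n := snappy.MaxEncodedLen(len(src))
	if n < 0 {
		return nil, snappy.ErrTooLarge
	}
	return snappy.Encode(make([]byte, n), src), nil
}

func main() {
	src := make([]byte, 1<<20) // 1 MiB of zeros
	// For n = 1<<20 the bound works out to 32 + n + n/6 = 1223370 bytes.
	fmt.Println(snappy.MaxEncodedLen(len(src)))

	dst, err := sizedEncode(src)
	fmt.Println(len(dst) <= snappy.MaxEncodedLen(len(src)), err) // true <nil>
}
```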
+func NewWriter(w io.Writer) *Writer { + return &Writer{ + w: w, + obuf: make([]byte, obufLen), + } +} + +// NewBufferedWriter returns a new Writer that compresses to w, using the +// framing format described at +// https://github.com/google/snappy/blob/master/framing_format.txt +// +// The Writer returned buffers writes. Users must call Close to guarantee all +// data has been forwarded to the underlying io.Writer. They may also call +// Flush zero or more times before calling Close. +func NewBufferedWriter(w io.Writer) *Writer { + return &Writer{ + w: w, + ibuf: make([]byte, 0, maxBlockSize), + obuf: make([]byte, obufLen), + } +} + +// Writer is an io.Writer that can write Snappy-compressed bytes. +type Writer struct { + w io.Writer + err error + + // ibuf is a buffer for the incoming (uncompressed) bytes. + // + // Its use is optional. For backwards compatibility, Writers created by the + // NewWriter function have ibuf == nil, do not buffer incoming bytes, and + // therefore do not need to be Flush'ed or Close'd. + ibuf []byte + + // obuf is a buffer for the outgoing (compressed) bytes. + obuf []byte + + // wroteStreamHeader is whether we have written the stream header. + wroteStreamHeader bool +} + +// Reset discards the writer's state and switches the Snappy writer to write to +// w. This permits reusing a Writer rather than allocating a new one. +func (w *Writer) Reset(writer io.Writer) { + w.w = writer + w.err = nil + if w.ibuf != nil { + w.ibuf = w.ibuf[:0] + } + w.wroteStreamHeader = false +} + +// Write satisfies the io.Writer interface. +func (w *Writer) Write(p []byte) (nRet int, errRet error) { + if w.ibuf == nil { + // Do not buffer incoming bytes. This does not perform or compress well + // if the caller of Writer.Write writes many small slices. This + // behavior is therefore deprecated, but still supported for backwards + // compatibility with code that doesn't explicitly Flush or Close. + return w.write(p) + } + + // The remainder of this method is based on bufio.Writer.Write from the + // standard library. + + for len(p) > (cap(w.ibuf)-len(w.ibuf)) && w.err == nil { + var n int + if len(w.ibuf) == 0 { + // Large write, empty buffer. + // Write directly from p to avoid copy. + n, _ = w.write(p) + } else { + n = copy(w.ibuf[len(w.ibuf):cap(w.ibuf)], p) + w.ibuf = w.ibuf[:len(w.ibuf)+n] + w.Flush() + } + nRet += n + p = p[n:] + } + if w.err != nil { + return nRet, w.err + } + n := copy(w.ibuf[len(w.ibuf):cap(w.ibuf)], p) + w.ibuf = w.ibuf[:len(w.ibuf)+n] + nRet += n + return nRet, nil +} + +func (w *Writer) write(p []byte) (nRet int, errRet error) { + if w.err != nil { + return 0, w.err + } + for len(p) > 0 { + obufStart := len(magicChunk) + if !w.wroteStreamHeader { + w.wroteStreamHeader = true + copy(w.obuf, magicChunk) + obufStart = 0 + } + + var uncompressed []byte + if len(p) > maxBlockSize { + uncompressed, p = p[:maxBlockSize], p[maxBlockSize:] + } else { + uncompressed, p = p, nil + } + checksum := crc(uncompressed) + + // Compress the buffer, discarding the result if the improvement + // isn't at least 12.5%. + compressed := Encode(w.obuf[obufHeaderLen:], uncompressed) + chunkType := uint8(chunkTypeCompressedData) + chunkLen := 4 + len(compressed) + obufEnd := obufHeaderLen + len(compressed) + if len(compressed) >= len(uncompressed)-len(uncompressed)/8 { + chunkType = chunkTypeUncompressedData + chunkLen = 4 + len(uncompressed) + obufEnd = obufHeaderLen + } + + // Fill in the per-chunk header that comes before the body. 
+ w.obuf[len(magicChunk)+0] = chunkType + w.obuf[len(magicChunk)+1] = uint8(chunkLen >> 0) + w.obuf[len(magicChunk)+2] = uint8(chunkLen >> 8) + w.obuf[len(magicChunk)+3] = uint8(chunkLen >> 16) + w.obuf[len(magicChunk)+4] = uint8(checksum >> 0) + w.obuf[len(magicChunk)+5] = uint8(checksum >> 8) + w.obuf[len(magicChunk)+6] = uint8(checksum >> 16) + w.obuf[len(magicChunk)+7] = uint8(checksum >> 24) + + if _, err := w.w.Write(w.obuf[obufStart:obufEnd]); err != nil { + w.err = err + return nRet, err + } + if chunkType == chunkTypeUncompressedData { + if _, err := w.w.Write(uncompressed); err != nil { + w.err = err + return nRet, err + } + } + nRet += len(uncompressed) + } + return nRet, nil +} + +// Flush flushes the Writer to its underlying io.Writer. +func (w *Writer) Flush() error { + if w.err != nil { + return w.err + } + if len(w.ibuf) == 0 { + return nil + } + w.write(w.ibuf) + w.ibuf = w.ibuf[:0] + return w.err +} + +// Close calls Flush and then closes the Writer. +func (w *Writer) Close() error { + w.Flush() + ret := w.err + if w.err == nil { + w.err = errClosed + } + return ret +} diff --git a/vendor/github.com/golang/snappy/encode_amd64.go b/vendor/github.com/golang/snappy/encode_amd64.go new file mode 100644 index 00000000000..150d91bc8be --- /dev/null +++ b/vendor/github.com/golang/snappy/encode_amd64.go @@ -0,0 +1,29 @@ +// Copyright 2016 The Snappy-Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !appengine +// +build gc +// +build !noasm + +package snappy + +// emitLiteral has the same semantics as in encode_other.go. +// +//go:noescape +func emitLiteral(dst, lit []byte) int + +// emitCopy has the same semantics as in encode_other.go. +// +//go:noescape +func emitCopy(dst []byte, offset, length int) int + +// extendMatch has the same semantics as in encode_other.go. +// +//go:noescape +func extendMatch(src []byte, i, j int) int + +// encodeBlock has the same semantics as in encode_other.go. +// +//go:noescape +func encodeBlock(dst, src []byte) (d int) diff --git a/vendor/github.com/golang/snappy/encode_amd64.s b/vendor/github.com/golang/snappy/encode_amd64.s new file mode 100644 index 00000000000..adfd979fe27 --- /dev/null +++ b/vendor/github.com/golang/snappy/encode_amd64.s @@ -0,0 +1,730 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !appengine +// +build gc +// +build !noasm + +#include "textflag.h" + +// The XXX lines assemble on Go 1.4, 1.5 and 1.7, but not 1.6, due to a +// Go toolchain regression. See https://github.com/golang/go/issues/15426 and +// https://github.com/golang/snappy/issues/29 +// +// As a workaround, the package was built with a known good assembler, and +// those instructions were disassembled by "objdump -d" to yield the +// 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15 +// style comments, in AT&T asm syntax. Note that rsp here is a physical +// register, not Go/asm's SP pseudo-register (see https://golang.org/doc/asm). +// The instructions were then encoded as "BYTE $0x.." sequences, which assemble +// fine on Go 1.6. + +// The asm code generally follows the pure Go code in encode_other.go, except +// where marked with a "!!!". 
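The `Writer` in encode.go is the counterpart to the `Reader` shown earlier; together they implement the chunked framing format. A short round-trip sketch using only the exported API added in this diff:

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"

	"github.com/golang/snappy"
)

func main() {
	src := bytes.Repeat([]byte("terraform "), 1000)

	// NewBufferedWriter buffers input into maxBlockSize chunks and emits the
	// stream-identifier plus compressed or uncompressed chunks built in
	// Writer.write above; Close flushes the final partial block.
	var buf bytes.Buffer
	w := snappy.NewBufferedWriter(&buf)
	if _, err := w.Write(src); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}

	// NewReader undoes the framing and verifies each chunk's checksum.
	out, err := ioutil.ReadAll(snappy.NewReader(&buf))
	if err != nil {
		panic(err)
	}
	fmt.Println(bytes.Equal(out, src)) // true
}
```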
+ +// ---------------------------------------------------------------------------- + +// func emitLiteral(dst, lit []byte) int +// +// All local variables fit into registers. The register allocation: +// - AX len(lit) +// - BX n +// - DX return value +// - DI &dst[i] +// - R10 &lit[0] +// +// The 24 bytes of stack space is to call runtime·memmove. +// +// The unusual register allocation of local variables, such as R10 for the +// source pointer, matches the allocation used at the call site in encodeBlock, +// which makes it easier to manually inline this function. +TEXT ·emitLiteral(SB), NOSPLIT, $24-56 + MOVQ dst_base+0(FP), DI + MOVQ lit_base+24(FP), R10 + MOVQ lit_len+32(FP), AX + MOVQ AX, DX + MOVL AX, BX + SUBL $1, BX + + CMPL BX, $60 + JLT oneByte + CMPL BX, $256 + JLT twoBytes + +threeBytes: + MOVB $0xf4, 0(DI) + MOVW BX, 1(DI) + ADDQ $3, DI + ADDQ $3, DX + JMP memmove + +twoBytes: + MOVB $0xf0, 0(DI) + MOVB BX, 1(DI) + ADDQ $2, DI + ADDQ $2, DX + JMP memmove + +oneByte: + SHLB $2, BX + MOVB BX, 0(DI) + ADDQ $1, DI + ADDQ $1, DX + +memmove: + MOVQ DX, ret+48(FP) + + // copy(dst[i:], lit) + // + // This means calling runtime·memmove(&dst[i], &lit[0], len(lit)), so we push + // DI, R10 and AX as arguments. + MOVQ DI, 0(SP) + MOVQ R10, 8(SP) + MOVQ AX, 16(SP) + CALL runtime·memmove(SB) + RET + +// ---------------------------------------------------------------------------- + +// func emitCopy(dst []byte, offset, length int) int +// +// All local variables fit into registers. The register allocation: +// - AX length +// - SI &dst[0] +// - DI &dst[i] +// - R11 offset +// +// The unusual register allocation of local variables, such as R11 for the +// offset, matches the allocation used at the call site in encodeBlock, which +// makes it easier to manually inline this function. +TEXT ·emitCopy(SB), NOSPLIT, $0-48 + MOVQ dst_base+0(FP), DI + MOVQ DI, SI + MOVQ offset+24(FP), R11 + MOVQ length+32(FP), AX + +loop0: + // for length >= 68 { etc } + CMPL AX, $68 + JLT step1 + + // Emit a length 64 copy, encoded as 3 bytes. + MOVB $0xfe, 0(DI) + MOVW R11, 1(DI) + ADDQ $3, DI + SUBL $64, AX + JMP loop0 + +step1: + // if length > 64 { etc } + CMPL AX, $64 + JLE step2 + + // Emit a length 60 copy, encoded as 3 bytes. + MOVB $0xee, 0(DI) + MOVW R11, 1(DI) + ADDQ $3, DI + SUBL $60, AX + +step2: + // if length >= 12 || offset >= 2048 { goto step3 } + CMPL AX, $12 + JGE step3 + CMPL R11, $2048 + JGE step3 + + // Emit the remaining copy, encoded as 2 bytes. + MOVB R11, 1(DI) + SHRL $8, R11 + SHLB $5, R11 + SUBB $4, AX + SHLB $2, AX + ORB AX, R11 + ORB $1, R11 + MOVB R11, 0(DI) + ADDQ $2, DI + + // Return the number of bytes written. + SUBQ SI, DI + MOVQ DI, ret+40(FP) + RET + +step3: + // Emit the remaining copy, encoded as 3 bytes. + SUBL $1, AX + SHLB $2, AX + ORB $2, AX + MOVB AX, 0(DI) + MOVW R11, 1(DI) + ADDQ $3, DI + + // Return the number of bytes written. + SUBQ SI, DI + MOVQ DI, ret+40(FP) + RET + +// ---------------------------------------------------------------------------- + +// func extendMatch(src []byte, i, j int) int +// +// All local variables fit into registers. The register allocation: +// - DX &src[0] +// - SI &src[j] +// - R13 &src[len(src) - 8] +// - R14 &src[len(src)] +// - R15 &src[i] +// +// The unusual register allocation of local variables, such as R15 for a source +// pointer, matches the allocation used at the call site in encodeBlock, which +// makes it easier to manually inline this function. 
+TEXT ·extendMatch(SB), NOSPLIT, $0-48 + MOVQ src_base+0(FP), DX + MOVQ src_len+8(FP), R14 + MOVQ i+24(FP), R15 + MOVQ j+32(FP), SI + ADDQ DX, R14 + ADDQ DX, R15 + ADDQ DX, SI + MOVQ R14, R13 + SUBQ $8, R13 + +cmp8: + // As long as we are 8 or more bytes before the end of src, we can load and + // compare 8 bytes at a time. If those 8 bytes are equal, repeat. + CMPQ SI, R13 + JA cmp1 + MOVQ (R15), AX + MOVQ (SI), BX + CMPQ AX, BX + JNE bsf + ADDQ $8, R15 + ADDQ $8, SI + JMP cmp8 + +bsf: + // If those 8 bytes were not equal, XOR the two 8 byte values, and return + // the index of the first byte that differs. The BSF instruction finds the + // least significant 1 bit, the amd64 architecture is little-endian, and + // the shift by 3 converts a bit index to a byte index. + XORQ AX, BX + BSFQ BX, BX + SHRQ $3, BX + ADDQ BX, SI + + // Convert from &src[ret] to ret. + SUBQ DX, SI + MOVQ SI, ret+40(FP) + RET + +cmp1: + // In src's tail, compare 1 byte at a time. + CMPQ SI, R14 + JAE extendMatchEnd + MOVB (R15), AX + MOVB (SI), BX + CMPB AX, BX + JNE extendMatchEnd + ADDQ $1, R15 + ADDQ $1, SI + JMP cmp1 + +extendMatchEnd: + // Convert from &src[ret] to ret. + SUBQ DX, SI + MOVQ SI, ret+40(FP) + RET + +// ---------------------------------------------------------------------------- + +// func encodeBlock(dst, src []byte) (d int) +// +// All local variables fit into registers, other than "var table". The register +// allocation: +// - AX . . +// - BX . . +// - CX 56 shift (note that amd64 shifts by non-immediates must use CX). +// - DX 64 &src[0], tableSize +// - SI 72 &src[s] +// - DI 80 &dst[d] +// - R9 88 sLimit +// - R10 . &src[nextEmit] +// - R11 96 prevHash, currHash, nextHash, offset +// - R12 104 &src[base], skip +// - R13 . &src[nextS], &src[len(src) - 8] +// - R14 . len(src), bytesBetweenHashLookups, &src[len(src)], x +// - R15 112 candidate +// +// The second column (56, 64, etc) is the stack offset to spill the registers +// when calling other functions. We could pack this slightly tighter, but it's +// simpler to have a dedicated spill map independent of the function called. +// +// "var table [maxTableSize]uint16" takes up 32768 bytes of stack space. An +// extra 56 bytes, to call other functions, and an extra 64 bytes, to spill +// local variables (registers) during calls gives 32768 + 56 + 64 = 32888. +TEXT ·encodeBlock(SB), 0, $32888-56 + MOVQ dst_base+0(FP), DI + MOVQ src_base+24(FP), SI + MOVQ src_len+32(FP), R14 + + // shift, tableSize := uint32(32-8), 1<<8 + MOVQ $24, CX + MOVQ $256, DX + +calcShift: + // for ; tableSize < maxTableSize && tableSize < len(src); tableSize *= 2 { + // shift-- + // } + CMPQ DX, $16384 + JGE varTable + CMPQ DX, R14 + JGE varTable + SUBQ $1, CX + SHLQ $1, DX + JMP calcShift + +varTable: + // var table [maxTableSize]uint16 + // + // In the asm code, unlike the Go code, we can zero-initialize only the + // first tableSize elements. Each uint16 element is 2 bytes and each MOVOU + // writes 16 bytes, so we can do only tableSize/8 writes instead of the + // 2048 writes that would zero-initialize all of table's 32768 bytes. + SHRQ $3, DX + LEAQ table-32768(SP), BX + PXOR X0, X0 + +memclr: + MOVOU X0, 0(BX) + ADDQ $16, BX + SUBQ $1, DX + JNZ memclr + + // !!! DX = &src[0] + MOVQ SI, DX + + // sLimit := len(src) - inputMargin + MOVQ R14, R9 + SUBQ $15, R9 + + // !!! Pre-emptively spill CX, DX and R9 to the stack. Their values don't + // change for the rest of the function. 
+ MOVQ CX, 56(SP) + MOVQ DX, 64(SP) + MOVQ R9, 88(SP) + + // nextEmit := 0 + MOVQ DX, R10 + + // s := 1 + ADDQ $1, SI + + // nextHash := hash(load32(src, s), shift) + MOVL 0(SI), R11 + IMULL $0x1e35a7bd, R11 + SHRL CX, R11 + +outer: + // for { etc } + + // skip := 32 + MOVQ $32, R12 + + // nextS := s + MOVQ SI, R13 + + // candidate := 0 + MOVQ $0, R15 + +inner0: + // for { etc } + + // s := nextS + MOVQ R13, SI + + // bytesBetweenHashLookups := skip >> 5 + MOVQ R12, R14 + SHRQ $5, R14 + + // nextS = s + bytesBetweenHashLookups + ADDQ R14, R13 + + // skip += bytesBetweenHashLookups + ADDQ R14, R12 + + // if nextS > sLimit { goto emitRemainder } + MOVQ R13, AX + SUBQ DX, AX + CMPQ AX, R9 + JA emitRemainder + + // candidate = int(table[nextHash]) + // XXX: MOVWQZX table-32768(SP)(R11*2), R15 + // XXX: 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15 + BYTE $0x4e + BYTE $0x0f + BYTE $0xb7 + BYTE $0x7c + BYTE $0x5c + BYTE $0x78 + + // table[nextHash] = uint16(s) + MOVQ SI, AX + SUBQ DX, AX + + // XXX: MOVW AX, table-32768(SP)(R11*2) + // XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2) + BYTE $0x66 + BYTE $0x42 + BYTE $0x89 + BYTE $0x44 + BYTE $0x5c + BYTE $0x78 + + // nextHash = hash(load32(src, nextS), shift) + MOVL 0(R13), R11 + IMULL $0x1e35a7bd, R11 + SHRL CX, R11 + + // if load32(src, s) != load32(src, candidate) { continue } break + MOVL 0(SI), AX + MOVL (DX)(R15*1), BX + CMPL AX, BX + JNE inner0 + +fourByteMatch: + // As per the encode_other.go code: + // + // A 4-byte match has been found. We'll later see etc. + + // !!! Jump to a fast path for short (<= 16 byte) literals. See the comment + // on inputMargin in encode.go. + MOVQ SI, AX + SUBQ R10, AX + CMPQ AX, $16 + JLE emitLiteralFastPath + + // ---------------------------------------- + // Begin inline of the emitLiteral call. + // + // d += emitLiteral(dst[d:], src[nextEmit:s]) + + MOVL AX, BX + SUBL $1, BX + + CMPL BX, $60 + JLT inlineEmitLiteralOneByte + CMPL BX, $256 + JLT inlineEmitLiteralTwoBytes + +inlineEmitLiteralThreeBytes: + MOVB $0xf4, 0(DI) + MOVW BX, 1(DI) + ADDQ $3, DI + JMP inlineEmitLiteralMemmove + +inlineEmitLiteralTwoBytes: + MOVB $0xf0, 0(DI) + MOVB BX, 1(DI) + ADDQ $2, DI + JMP inlineEmitLiteralMemmove + +inlineEmitLiteralOneByte: + SHLB $2, BX + MOVB BX, 0(DI) + ADDQ $1, DI + +inlineEmitLiteralMemmove: + // Spill local variables (registers) onto the stack; call; unspill. + // + // copy(dst[i:], lit) + // + // This means calling runtime·memmove(&dst[i], &lit[0], len(lit)), so we push + // DI, R10 and AX as arguments. + MOVQ DI, 0(SP) + MOVQ R10, 8(SP) + MOVQ AX, 16(SP) + ADDQ AX, DI // Finish the "d +=" part of "d += emitLiteral(etc)". + MOVQ SI, 72(SP) + MOVQ DI, 80(SP) + MOVQ R15, 112(SP) + CALL runtime·memmove(SB) + MOVQ 56(SP), CX + MOVQ 64(SP), DX + MOVQ 72(SP), SI + MOVQ 80(SP), DI + MOVQ 88(SP), R9 + MOVQ 112(SP), R15 + JMP inner1 + +inlineEmitLiteralEnd: + // End inline of the emitLiteral call. + // ---------------------------------------- + +emitLiteralFastPath: + // !!! Emit the 1-byte encoding "uint8(len(lit)-1)<<2". + MOVB AX, BX + SUBB $1, BX + SHLB $2, BX + MOVB BX, (DI) + ADDQ $1, DI + + // !!! Implement the copy from lit to dst as a 16-byte load and store. + // (Encode's documentation says that dst and src must not overlap.) + // + // This always copies 16 bytes, instead of only len(lit) bytes, but that's + // OK. Subsequent iterations will fix up the overrun. + // + // Note that on amd64, it is legal and cheap to issue unaligned 8-byte or + // 16-byte loads and stores. 
This technique probably wouldn't be as + // effective on architectures that are fussier about alignment. + MOVOU 0(R10), X0 + MOVOU X0, 0(DI) + ADDQ AX, DI + +inner1: + // for { etc } + + // base := s + MOVQ SI, R12 + + // !!! offset := base - candidate + MOVQ R12, R11 + SUBQ R15, R11 + SUBQ DX, R11 + + // ---------------------------------------- + // Begin inline of the extendMatch call. + // + // s = extendMatch(src, candidate+4, s+4) + + // !!! R14 = &src[len(src)] + MOVQ src_len+32(FP), R14 + ADDQ DX, R14 + + // !!! R13 = &src[len(src) - 8] + MOVQ R14, R13 + SUBQ $8, R13 + + // !!! R15 = &src[candidate + 4] + ADDQ $4, R15 + ADDQ DX, R15 + + // !!! s += 4 + ADDQ $4, SI + +inlineExtendMatchCmp8: + // As long as we are 8 or more bytes before the end of src, we can load and + // compare 8 bytes at a time. If those 8 bytes are equal, repeat. + CMPQ SI, R13 + JA inlineExtendMatchCmp1 + MOVQ (R15), AX + MOVQ (SI), BX + CMPQ AX, BX + JNE inlineExtendMatchBSF + ADDQ $8, R15 + ADDQ $8, SI + JMP inlineExtendMatchCmp8 + +inlineExtendMatchBSF: + // If those 8 bytes were not equal, XOR the two 8 byte values, and return + // the index of the first byte that differs. The BSF instruction finds the + // least significant 1 bit, the amd64 architecture is little-endian, and + // the shift by 3 converts a bit index to a byte index. + XORQ AX, BX + BSFQ BX, BX + SHRQ $3, BX + ADDQ BX, SI + JMP inlineExtendMatchEnd + +inlineExtendMatchCmp1: + // In src's tail, compare 1 byte at a time. + CMPQ SI, R14 + JAE inlineExtendMatchEnd + MOVB (R15), AX + MOVB (SI), BX + CMPB AX, BX + JNE inlineExtendMatchEnd + ADDQ $1, R15 + ADDQ $1, SI + JMP inlineExtendMatchCmp1 + +inlineExtendMatchEnd: + // End inline of the extendMatch call. + // ---------------------------------------- + + // ---------------------------------------- + // Begin inline of the emitCopy call. + // + // d += emitCopy(dst[d:], base-candidate, s-base) + + // !!! length := s - base + MOVQ SI, AX + SUBQ R12, AX + +inlineEmitCopyLoop0: + // for length >= 68 { etc } + CMPL AX, $68 + JLT inlineEmitCopyStep1 + + // Emit a length 64 copy, encoded as 3 bytes. + MOVB $0xfe, 0(DI) + MOVW R11, 1(DI) + ADDQ $3, DI + SUBL $64, AX + JMP inlineEmitCopyLoop0 + +inlineEmitCopyStep1: + // if length > 64 { etc } + CMPL AX, $64 + JLE inlineEmitCopyStep2 + + // Emit a length 60 copy, encoded as 3 bytes. + MOVB $0xee, 0(DI) + MOVW R11, 1(DI) + ADDQ $3, DI + SUBL $60, AX + +inlineEmitCopyStep2: + // if length >= 12 || offset >= 2048 { goto inlineEmitCopyStep3 } + CMPL AX, $12 + JGE inlineEmitCopyStep3 + CMPL R11, $2048 + JGE inlineEmitCopyStep3 + + // Emit the remaining copy, encoded as 2 bytes. + MOVB R11, 1(DI) + SHRL $8, R11 + SHLB $5, R11 + SUBB $4, AX + SHLB $2, AX + ORB AX, R11 + ORB $1, R11 + MOVB R11, 0(DI) + ADDQ $2, DI + JMP inlineEmitCopyEnd + +inlineEmitCopyStep3: + // Emit the remaining copy, encoded as 3 bytes. + SUBL $1, AX + SHLB $2, AX + ORB $2, AX + MOVB AX, 0(DI) + MOVW R11, 1(DI) + ADDQ $3, DI + +inlineEmitCopyEnd: + // End inline of the emitCopy call. + // ---------------------------------------- + + // nextEmit = s + MOVQ SI, R10 + + // if s >= sLimit { goto emitRemainder } + MOVQ SI, AX + SUBQ DX, AX + CMPQ AX, R9 + JAE emitRemainder + + // As per the encode_other.go code: + // + // We could immediately etc. 
+ + // x := load64(src, s-1) + MOVQ -1(SI), R14 + + // prevHash := hash(uint32(x>>0), shift) + MOVL R14, R11 + IMULL $0x1e35a7bd, R11 + SHRL CX, R11 + + // table[prevHash] = uint16(s-1) + MOVQ SI, AX + SUBQ DX, AX + SUBQ $1, AX + + // XXX: MOVW AX, table-32768(SP)(R11*2) + // XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2) + BYTE $0x66 + BYTE $0x42 + BYTE $0x89 + BYTE $0x44 + BYTE $0x5c + BYTE $0x78 + + // currHash := hash(uint32(x>>8), shift) + SHRQ $8, R14 + MOVL R14, R11 + IMULL $0x1e35a7bd, R11 + SHRL CX, R11 + + // candidate = int(table[currHash]) + // XXX: MOVWQZX table-32768(SP)(R11*2), R15 + // XXX: 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15 + BYTE $0x4e + BYTE $0x0f + BYTE $0xb7 + BYTE $0x7c + BYTE $0x5c + BYTE $0x78 + + // table[currHash] = uint16(s) + ADDQ $1, AX + + // XXX: MOVW AX, table-32768(SP)(R11*2) + // XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2) + BYTE $0x66 + BYTE $0x42 + BYTE $0x89 + BYTE $0x44 + BYTE $0x5c + BYTE $0x78 + + // if uint32(x>>8) == load32(src, candidate) { continue } + MOVL (DX)(R15*1), BX + CMPL R14, BX + JEQ inner1 + + // nextHash = hash(uint32(x>>16), shift) + SHRQ $8, R14 + MOVL R14, R11 + IMULL $0x1e35a7bd, R11 + SHRL CX, R11 + + // s++ + ADDQ $1, SI + + // break out of the inner1 for loop, i.e. continue the outer loop. + JMP outer + +emitRemainder: + // if nextEmit < len(src) { etc } + MOVQ src_len+32(FP), AX + ADDQ DX, AX + CMPQ R10, AX + JEQ encodeBlockEnd + + // d += emitLiteral(dst[d:], src[nextEmit:]) + // + // Push args. + MOVQ DI, 0(SP) + MOVQ $0, 8(SP) // Unnecessary, as the callee ignores it, but conservative. + MOVQ $0, 16(SP) // Unnecessary, as the callee ignores it, but conservative. + MOVQ R10, 24(SP) + SUBQ R10, AX + MOVQ AX, 32(SP) + MOVQ AX, 40(SP) // Unnecessary, as the callee ignores it, but conservative. + + // Spill local variables (registers) onto the stack; call; unspill. + MOVQ DI, 80(SP) + CALL ·emitLiteral(SB) + MOVQ 80(SP), DI + + // Finish the "d +=" part of "d += emitLiteral(etc)". + ADDQ 48(SP), DI + +encodeBlockEnd: + MOVQ dst_base+0(FP), AX + SUBQ AX, DI + MOVQ DI, d+48(FP) + RET diff --git a/vendor/github.com/golang/snappy/encode_other.go b/vendor/github.com/golang/snappy/encode_other.go new file mode 100644 index 00000000000..dbcae905e6e --- /dev/null +++ b/vendor/github.com/golang/snappy/encode_other.go @@ -0,0 +1,238 @@ +// Copyright 2016 The Snappy-Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !amd64 appengine !gc noasm + +package snappy + +func load32(b []byte, i int) uint32 { + b = b[i : i+4 : len(b)] // Help the compiler eliminate bounds checks on the next line. + return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24 +} + +func load64(b []byte, i int) uint64 { + b = b[i : i+8 : len(b)] // Help the compiler eliminate bounds checks on the next line. + return uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | + uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56 +} + +// emitLiteral writes a literal chunk and returns the number of bytes written. 
+// +// It assumes that: +// dst is long enough to hold the encoded bytes +// 1 <= len(lit) && len(lit) <= 65536 +func emitLiteral(dst, lit []byte) int { + i, n := 0, uint(len(lit)-1) + switch { + case n < 60: + dst[0] = uint8(n)<<2 | tagLiteral + i = 1 + case n < 1<<8: + dst[0] = 60<<2 | tagLiteral + dst[1] = uint8(n) + i = 2 + default: + dst[0] = 61<<2 | tagLiteral + dst[1] = uint8(n) + dst[2] = uint8(n >> 8) + i = 3 + } + return i + copy(dst[i:], lit) +} + +// emitCopy writes a copy chunk and returns the number of bytes written. +// +// It assumes that: +// dst is long enough to hold the encoded bytes +// 1 <= offset && offset <= 65535 +// 4 <= length && length <= 65535 +func emitCopy(dst []byte, offset, length int) int { + i := 0 + // The maximum length for a single tagCopy1 or tagCopy2 op is 64 bytes. The + // threshold for this loop is a little higher (at 68 = 64 + 4), and the + // length emitted down below is is a little lower (at 60 = 64 - 4), because + // it's shorter to encode a length 67 copy as a length 60 tagCopy2 followed + // by a length 7 tagCopy1 (which encodes as 3+2 bytes) than to encode it as + // a length 64 tagCopy2 followed by a length 3 tagCopy2 (which encodes as + // 3+3 bytes). The magic 4 in the 64±4 is because the minimum length for a + // tagCopy1 op is 4 bytes, which is why a length 3 copy has to be an + // encodes-as-3-bytes tagCopy2 instead of an encodes-as-2-bytes tagCopy1. + for length >= 68 { + // Emit a length 64 copy, encoded as 3 bytes. + dst[i+0] = 63<<2 | tagCopy2 + dst[i+1] = uint8(offset) + dst[i+2] = uint8(offset >> 8) + i += 3 + length -= 64 + } + if length > 64 { + // Emit a length 60 copy, encoded as 3 bytes. + dst[i+0] = 59<<2 | tagCopy2 + dst[i+1] = uint8(offset) + dst[i+2] = uint8(offset >> 8) + i += 3 + length -= 60 + } + if length >= 12 || offset >= 2048 { + // Emit the remaining copy, encoded as 3 bytes. + dst[i+0] = uint8(length-1)<<2 | tagCopy2 + dst[i+1] = uint8(offset) + dst[i+2] = uint8(offset >> 8) + return i + 3 + } + // Emit the remaining copy, encoded as 2 bytes. + dst[i+0] = uint8(offset>>8)<<5 | uint8(length-4)<<2 | tagCopy1 + dst[i+1] = uint8(offset) + return i + 2 +} + +// extendMatch returns the largest k such that k <= len(src) and that +// src[i:i+k-j] and src[j:k] have the same contents. +// +// It assumes that: +// 0 <= i && i < j && j <= len(src) +func extendMatch(src []byte, i, j int) int { + for ; j < len(src) && src[i] == src[j]; i, j = i+1, j+1 { + } + return j +} + +func hash(u, shift uint32) uint32 { + return (u * 0x1e35a7bd) >> shift +} + +// encodeBlock encodes a non-empty src to a guaranteed-large-enough dst. It +// assumes that the varint-encoded length of the decompressed bytes has already +// been written. +// +// It also assumes that: +// len(dst) >= MaxEncodedLen(len(src)) && +// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize +func encodeBlock(dst, src []byte) (d int) { + // Initialize the hash table. Its size ranges from 1<<8 to 1<<14 inclusive. + // The table element type is uint16, as s < sLimit and sLimit < len(src) + // and len(src) <= maxBlockSize and maxBlockSize == 65536. + const ( + maxTableSize = 1 << 14 + // tableMask is redundant, but helps the compiler eliminate bounds + // checks. + tableMask = maxTableSize - 1 + ) + shift := uint32(32 - 8) + for tableSize := 1 << 8; tableSize < maxTableSize && tableSize < len(src); tableSize *= 2 { + shift-- + } + // In Go, all array elements are zero-initialized, so there is no advantage + // to a smaller tableSize per se. 
However, it matches the C++ algorithm, + // and in the asm versions of this code, we can get away with zeroing only + // the first tableSize elements. + var table [maxTableSize]uint16 + + // sLimit is when to stop looking for offset/length copies. The inputMargin + // lets us use a fast path for emitLiteral in the main loop, while we are + // looking for copies. + sLimit := len(src) - inputMargin + + // nextEmit is where in src the next emitLiteral should start from. + nextEmit := 0 + + // The encoded form must start with a literal, as there are no previous + // bytes to copy, so we start looking for hash matches at s == 1. + s := 1 + nextHash := hash(load32(src, s), shift) + + for { + // Copied from the C++ snappy implementation: + // + // Heuristic match skipping: If 32 bytes are scanned with no matches + // found, start looking only at every other byte. If 32 more bytes are + // scanned (or skipped), look at every third byte, etc.. When a match + // is found, immediately go back to looking at every byte. This is a + // small loss (~5% performance, ~0.1% density) for compressible data + // due to more bookkeeping, but for non-compressible data (such as + // JPEG) it's a huge win since the compressor quickly "realizes" the + // data is incompressible and doesn't bother looking for matches + // everywhere. + // + // The "skip" variable keeps track of how many bytes there are since + // the last match; dividing it by 32 (ie. right-shifting by five) gives + // the number of bytes to move ahead for each iteration. + skip := 32 + + nextS := s + candidate := 0 + for { + s = nextS + bytesBetweenHashLookups := skip >> 5 + nextS = s + bytesBetweenHashLookups + skip += bytesBetweenHashLookups + if nextS > sLimit { + goto emitRemainder + } + candidate = int(table[nextHash&tableMask]) + table[nextHash&tableMask] = uint16(s) + nextHash = hash(load32(src, nextS), shift) + if load32(src, s) == load32(src, candidate) { + break + } + } + + // A 4-byte match has been found. We'll later see if more than 4 bytes + // match. But, prior to the match, src[nextEmit:s] are unmatched. Emit + // them as literal bytes. + d += emitLiteral(dst[d:], src[nextEmit:s]) + + // Call emitCopy, and then see if another emitCopy could be our next + // move. Repeat until we find no match for the input immediately after + // what was consumed by the last emitCopy call. + // + // If we exit this loop normally then we need to call emitLiteral next, + // though we don't yet know how big the literal will be. We handle that + // by proceeding to the next iteration of the main loop. We also can + // exit this loop via goto if we get close to exhausting the input. + for { + // Invariant: we have a 4-byte match at s, and no need to emit any + // literal bytes prior to s. + base := s + + // Extend the 4-byte match as long as possible. + // + // This is an inlined version of: + // s = extendMatch(src, candidate+4, s+4) + s += 4 + for i := candidate + 4; s < len(src) && src[i] == src[s]; i, s = i+1, s+1 { + } + + d += emitCopy(dst[d:], base-candidate, s-base) + nextEmit = s + if s >= sLimit { + goto emitRemainder + } + + // We could immediately start working at s now, but to improve + // compression we first update the hash table at s-1 and at s. If + // another emitCopy is not our next move, also calculate nextHash + // at s+1. At least on GOARCH=amd64, these three hash calculations + // are faster as one load64 call (with some shifts) instead of + // three load32 calls. 
+ x := load64(src, s-1) + prevHash := hash(uint32(x>>0), shift) + table[prevHash&tableMask] = uint16(s - 1) + currHash := hash(uint32(x>>8), shift) + candidate = int(table[currHash&tableMask]) + table[currHash&tableMask] = uint16(s) + if uint32(x>>8) != load32(src, candidate) { + nextHash = hash(uint32(x>>16), shift) + s++ + break + } + } + } + +emitRemainder: + if nextEmit < len(src) { + d += emitLiteral(dst[d:], src[nextEmit:]) + } + return d +} diff --git a/vendor/github.com/golang/snappy/snappy.go b/vendor/github.com/golang/snappy/snappy.go new file mode 100644 index 00000000000..ece692ea461 --- /dev/null +++ b/vendor/github.com/golang/snappy/snappy.go @@ -0,0 +1,98 @@ +// Copyright 2011 The Snappy-Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package snappy implements the Snappy compression format. It aims for very +// high speeds and reasonable compression. +// +// There are actually two Snappy formats: block and stream. They are related, +// but different: trying to decompress block-compressed data as a Snappy stream +// will fail, and vice versa. The block format is the Decode and Encode +// functions and the stream format is the Reader and Writer types. +// +// The block format, the more common case, is used when the complete size (the +// number of bytes) of the original data is known upfront, at the time +// compression starts. The stream format, also known as the framing format, is +// for when that isn't always true. +// +// The canonical, C++ implementation is at https://github.com/google/snappy and +// it only implements the block format. +package snappy // import "github.com/golang/snappy" + +import ( + "hash/crc32" +) + +/* +Each encoded block begins with the varint-encoded length of the decoded data, +followed by a sequence of chunks. Chunks begin and end on byte boundaries. The +first byte of each chunk is broken into its 2 least and 6 most significant bits +called l and m: l ranges in [0, 4) and m ranges in [0, 64). l is the chunk tag. +Zero means a literal tag. All other values mean a copy tag. + +For literal tags: + - If m < 60, the next 1 + m bytes are literal bytes. + - Otherwise, let n be the little-endian unsigned integer denoted by the next + m - 59 bytes. The next 1 + n bytes after that are literal bytes. + +For copy tags, length bytes are copied from offset bytes ago, in the style of +Lempel-Ziv compression algorithms. In particular: + - For l == 1, the offset ranges in [0, 1<<11) and the length in [4, 12). + The length is 4 + the low 3 bits of m. The high 3 bits of m form bits 8-10 + of the offset. The next byte is bits 0-7 of the offset. + - For l == 2, the offset ranges in [0, 1<<16) and the length in [1, 65). + The length is 1 + m. The offset is the little-endian unsigned integer + denoted by the next 2 bytes. + - For l == 3, this tag is a legacy format that is no longer issued by most + encoders. Nonetheless, the offset ranges in [0, 1<<32) and the length in + [1, 65). The length is 1 + m. The offset is the little-endian unsigned + integer denoted by the next 4 bytes. +*/ +const ( + tagLiteral = 0x00 + tagCopy1 = 0x01 + tagCopy2 = 0x02 + tagCopy4 = 0x03 +) + +const ( + checksumSize = 4 + chunkHeaderSize = 4 + magicChunk = "\xff\x06\x00\x00" + magicBody + magicBody = "sNaPpY" + + // maxBlockSize is the maximum size of the input to encodeBlock. 
It is not + // part of the wire format per se, but some parts of the encoder assume + // that an offset fits into a uint16. + // + // Also, for the framing format (Writer type instead of Encode function), + // https://github.com/google/snappy/blob/master/framing_format.txt says + // that "the uncompressed data in a chunk must be no longer than 65536 + // bytes". + maxBlockSize = 65536 + + // maxEncodedLenOfMaxBlockSize equals MaxEncodedLen(maxBlockSize), but is + // hard coded to be a const instead of a variable, so that obufLen can also + // be a const. Their equivalence is confirmed by + // TestMaxEncodedLenOfMaxBlockSize. + maxEncodedLenOfMaxBlockSize = 76490 + + obufHeaderLen = len(magicChunk) + checksumSize + chunkHeaderSize + obufLen = obufHeaderLen + maxEncodedLenOfMaxBlockSize +) + +const ( + chunkTypeCompressedData = 0x00 + chunkTypeUncompressedData = 0x01 + chunkTypePadding = 0xfe + chunkTypeStreamIdentifier = 0xff +) + +var crcTable = crc32.MakeTable(crc32.Castagnoli) + +// crc implements the checksum specified in section 3 of +// https://github.com/google/snappy/blob/master/framing_format.txt +func crc(b []byte) uint32 { + c := crc32.Update(0, crcTable, b) + return uint32(c>>15|c<<17) + 0xa282ead8 +} diff --git a/vendor/github.com/hashicorp/errwrap/README.md b/vendor/github.com/hashicorp/errwrap/README.md index 1c95f59782b..444df08f8e7 100644 --- a/vendor/github.com/hashicorp/errwrap/README.md +++ b/vendor/github.com/hashicorp/errwrap/README.md @@ -48,7 +48,7 @@ func main() { // We can use the Contains helpers to check if an error contains // another error. It is safe to do this with a nil error, or with // an error that doesn't even use the errwrap package. - if errwrap.Contains(err, ErrNotExist) { + if errwrap.Contains(err, "does not exist") { // Do something } if errwrap.ContainsType(err, new(os.PathError)) { diff --git a/vendor/github.com/hashicorp/errwrap/go.mod b/vendor/github.com/hashicorp/errwrap/go.mod new file mode 100644 index 00000000000..c9b84022cf7 --- /dev/null +++ b/vendor/github.com/hashicorp/errwrap/go.mod @@ -0,0 +1 @@ +module github.com/hashicorp/errwrap diff --git a/vendor/github.com/hashicorp/go-cleanhttp/cleanhttp.go b/vendor/github.com/hashicorp/go-cleanhttp/cleanhttp.go index 7d8a57c2807..8d306bf5134 100644 --- a/vendor/github.com/hashicorp/go-cleanhttp/cleanhttp.go +++ b/vendor/github.com/hashicorp/go-cleanhttp/cleanhttp.go @@ -26,6 +26,7 @@ func DefaultPooledTransport() *http.Transport { DialContext: (&net.Dialer{ Timeout: 30 * time.Second, KeepAlive: 30 * time.Second, + DualStack: true, }).DialContext, MaxIdleConns: 100, IdleConnTimeout: 90 * time.Second, diff --git a/vendor/github.com/hashicorp/go-cleanhttp/go.mod b/vendor/github.com/hashicorp/go-cleanhttp/go.mod new file mode 100644 index 00000000000..310f07569fc --- /dev/null +++ b/vendor/github.com/hashicorp/go-cleanhttp/go.mod @@ -0,0 +1 @@ +module github.com/hashicorp/go-cleanhttp diff --git a/vendor/github.com/hashicorp/go-cleanhttp/handlers.go b/vendor/github.com/hashicorp/go-cleanhttp/handlers.go new file mode 100644 index 00000000000..7eda3777f3c --- /dev/null +++ b/vendor/github.com/hashicorp/go-cleanhttp/handlers.go @@ -0,0 +1,43 @@ +package cleanhttp + +import ( + "net/http" + "strings" + "unicode" +) + +// HandlerInput provides input options to cleanhttp's handlers +type HandlerInput struct { + ErrStatus int +} + +// 
PrintablePathCheckHandler is a middleware that ensures the request path +// contains only printable runes. +func PrintablePathCheckHandler(next http.Handler, input *HandlerInput) http.Handler { + // Nil-check on input to make it optional + if input == nil { + input = &HandlerInput{ + ErrStatus: http.StatusBadRequest, + } + } + + // Default to http.StatusBadRequest on error + if input.ErrStatus == 0 { + input.ErrStatus = http.StatusBadRequest + } + + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + // Check URL path for non-printable characters + idx := strings.IndexFunc(r.URL.Path, func(c rune) bool { + return !unicode.IsPrint(c) + }) + + if idx != -1 { + w.WriteHeader(input.ErrStatus) + return + } + + next.ServeHTTP(w, r) + return + }) +} diff --git a/vendor/github.com/hashicorp/go-getter/README.md b/vendor/github.com/hashicorp/go-getter/README.md index a9eafd61f06..40ace74d8ad 100644 --- a/vendor/github.com/hashicorp/go-getter/README.md +++ b/vendor/github.com/hashicorp/go-getter/README.md @@ -232,6 +232,9 @@ The options below are available to all protocols: * `checksum` - Checksum to verify the downloaded file or archive. See the entire section on checksumming above for format and more details. + * `filename` - When in file download mode, allows specifying the name of the + downloaded file on disk. Has no effect in directory mode. + ### Local Files (`file`) None @@ -282,7 +285,7 @@ be used automatically. * `aws_access_key_id` (required) - Minio access key. * `aws_access_key_secret` (required) - Minio access key secret. * `region` (optional - defaults to us-east-1) - Region identifier to use. - * `version` (optional - fefaults to Minio default) - Configuration file format. + * `version` (optional - defaults to Minio default) - Configuration file format. #### S3 Bucket Examples diff --git a/vendor/github.com/hashicorp/go-getter/client.go b/vendor/github.com/hashicorp/go-getter/client.go index b67bb641c36..300301c2eb4 100644 --- a/vendor/github.com/hashicorp/go-getter/client.go +++ b/vendor/github.com/hashicorp/go-getter/client.go @@ -17,6 +17,7 @@ import ( "strings" urlhelper "github.com/hashicorp/go-getter/helper/url" + "github.com/hashicorp/go-safetemp" ) // Client is a client for downloading things. @@ -100,17 +101,14 @@ func (c *Client) Get() error { dst := c.Dst src, subDir := SourceDirSubdir(src) if subDir != "" { - tmpDir, err := ioutil.TempDir("", "tf") + td, tdcloser, err := safetemp.Dir("", "getter") if err != nil { return err } - if err := os.RemoveAll(tmpDir); err != nil { - return err - } - defer os.RemoveAll(tmpDir) + defer tdcloser.Close() realDst = dst - dst = tmpDir + dst = td } u, err := urlhelper.Parse(src) @@ -232,7 +230,18 @@ func (c *Client) Get() error { // Destination is the base name of the URL path in "any" mode when // a file source is detected. if mode == ClientModeFile { - dst = filepath.Join(dst, filepath.Base(u.Path)) + filename := filepath.Base(u.Path) + + // Determine if we have a custom file name + if v := q.Get("filename"); v != "" { + // Delete the query parameter if we have it. 
+ q.Del("filename") + u.RawQuery = q.Encode() + + filename = v + } + + dst = filepath.Join(dst, filename) } } diff --git a/vendor/github.com/hashicorp/go-getter/decompress.go b/vendor/github.com/hashicorp/go-getter/decompress.go index fc5681d39fa..198bb0edd01 100644 --- a/vendor/github.com/hashicorp/go-getter/decompress.go +++ b/vendor/github.com/hashicorp/go-getter/decompress.go @@ -1,7 +1,15 @@ package getter +import ( + "strings" +) + // Decompressor defines the interface that must be implemented to add // support for decompressing a type. +// +// Important: if you're implementing a decompressor, please use the +// containsDotDot helper in this file to ensure that files can't be +// decompressed outside of the specified directory. type Decompressor interface { // Decompress should decompress src to dst. dir specifies whether dst // is a directory or single file. src is guaranteed to be a single file @@ -31,3 +39,20 @@ func init() { "zip": new(ZipDecompressor), } } + +// containsDotDot checks if the filepath value v contains a ".." entry. +// This will check filepath components by splitting along / or \. This +// function is copied directly from the Go net/http implementation. +func containsDotDot(v string) bool { + if !strings.Contains(v, "..") { + return false + } + for _, ent := range strings.FieldsFunc(v, isSlashRune) { + if ent == ".." { + return true + } + } + return false +} + +func isSlashRune(r rune) bool { return r == '/' || r == '\\' } diff --git a/vendor/github.com/hashicorp/go-getter/decompress_tar.go b/vendor/github.com/hashicorp/go-getter/decompress_tar.go index 543c30d21f3..39cb392e066 100644 --- a/vendor/github.com/hashicorp/go-getter/decompress_tar.go +++ b/vendor/github.com/hashicorp/go-getter/decompress_tar.go @@ -13,6 +13,7 @@ import ( func untar(input io.Reader, dst, src string, dir bool) error { tarR := tar.NewReader(input) done := false + dirHdrs := []*tar.Header{} for { hdr, err := tarR.Next() if err == io.EOF { @@ -21,7 +22,7 @@ func untar(input io.Reader, dst, src string, dir bool) error { return fmt.Errorf("empty archive: %s", src) } - return nil + break } if err != nil { return err @@ -34,6 +35,11 @@ func untar(input io.Reader, dst, src string, dir bool) error { path := dst if dir { + // Disallow parent traversal + if containsDotDot(hdr.Name) { + return fmt.Errorf("entry contains '..': %s", hdr.Name) + } + path = filepath.Join(path, hdr.Name) } @@ -47,6 +53,10 @@ func untar(input io.Reader, dst, src string, dir bool) error { return err } + // Record the directory information so that we may set its attributes + // after all files have been extracted + dirHdrs = append(dirHdrs, hdr) + continue } else { // There is no ordering guarantee that a file in a directory is @@ -84,7 +94,23 @@ func untar(input io.Reader, dst, src string, dir bool) error { if err := os.Chmod(path, hdr.FileInfo().Mode()); err != nil { return err } + + // Set the access and modification time + if err := os.Chtimes(path, hdr.AccessTime, hdr.ModTime); err != nil { + return err + } } + + // Adding a file or subdirectory changes the mtime of a directory + // We therefore wait until we've extracted everything and then set the mtime and atime attributes + for _, dirHdr := range dirHdrs { + path := filepath.Join(dst, dirHdr.Name) + if err := os.Chtimes(path, dirHdr.AccessTime, dirHdr.ModTime); err != nil { + return err + } + } + + return nil } // tarDecompressor is an implementation of Decompressor that can diff --git 
a/vendor/github.com/hashicorp/go-getter/decompress_testing.go b/vendor/github.com/hashicorp/go-getter/decompress_testing.go index 82b8ab4f6e8..91cf33d98df 100644 --- a/vendor/github.com/hashicorp/go-getter/decompress_testing.go +++ b/vendor/github.com/hashicorp/go-getter/decompress_testing.go @@ -11,6 +11,7 @@ import ( "runtime" "sort" "strings" + "time" "github.com/mitchellh/go-testing-interface" ) @@ -22,6 +23,7 @@ type TestDecompressCase struct { Err bool // Err is whether we expect an error or not DirList []string // DirList is the list of files for Dir mode FileMD5 string // FileMD5 is the expected MD5 for a single file + Mtime *time.Time // Mtime is the optionally expected mtime for a single file (or all files if in Dir mode) } // TestDecompressor is a helper function for testing generic decompressors. @@ -68,6 +70,14 @@ func TestDecompressor(t testing.T, d Decompressor, cases []TestDecompressCase) { } } + if tc.Mtime != nil { + actual := fi.ModTime() + expected := *tc.Mtime + if actual != expected { + t.Fatalf("err %s: expected mtime '%s' for %s, got '%s'", tc.Input, expected.String(), dst, actual.String()) + } + } + return } @@ -84,6 +94,21 @@ func TestDecompressor(t testing.T, d Decompressor, cases []TestDecompressCase) { if !reflect.DeepEqual(actual, expected) { t.Fatalf("bad %s\n\n%#v\n\n%#v", tc.Input, actual, expected) } + // Check for correct atime/mtime + for _, dir := range actual { + path := filepath.Join(dst, dir) + if tc.Mtime != nil { + fi, err := os.Stat(path) + if err != nil { + t.Fatalf("err: %s", err) + } + actual := fi.ModTime() + expected := *tc.Mtime + if actual != expected { + t.Fatalf("err %s: expected mtime '%s' for %s, got '%s'", tc.Input, expected.String(), path, actual.String()) + } + } + } }() } } diff --git a/vendor/github.com/hashicorp/go-getter/decompress_zip.go b/vendor/github.com/hashicorp/go-getter/decompress_zip.go index a065c076ffe..b0e70cac35c 100644 --- a/vendor/github.com/hashicorp/go-getter/decompress_zip.go +++ b/vendor/github.com/hashicorp/go-getter/decompress_zip.go @@ -42,6 +42,11 @@ func (d *ZipDecompressor) Decompress(dst, src string, dir bool) error { for _, f := range zipR.File { path := dst if dir { + // Disallow parent traversal + if containsDotDot(f.Name) { + return fmt.Errorf("entry contains '..': %s", f.Name) + } + path = filepath.Join(path, f.Name) } diff --git a/vendor/github.com/hashicorp/go-getter/get_git.go b/vendor/github.com/hashicorp/go-getter/get_git.go index 6f5d9142bcd..cb1d02947ab 100644 --- a/vendor/github.com/hashicorp/go-getter/get_git.go +++ b/vendor/github.com/hashicorp/go-getter/get_git.go @@ -11,6 +11,7 @@ import ( "strings" urlhelper "github.com/hashicorp/go-getter/helper/url" + "github.com/hashicorp/go-safetemp" "github.com/hashicorp/go-version" ) @@ -105,13 +106,11 @@ func (g *GitGetter) Get(dst string, u *url.URL) error { // GetFile for Git doesn't support updating at this time. It will download // the file every time. func (g *GitGetter) GetFile(dst string, u *url.URL) error { - td, err := ioutil.TempDir("", "getter-git") + td, tdcloser, err := safetemp.Dir("", "getter") if err != nil { return err } - if err := os.RemoveAll(td); err != nil { - return err - } + defer tdcloser.Close() // Get the filename, and strip the filename from the URL so we can // just get the repository directly. 
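Several of the go-getter changes above (and the get_hg.go and get_http.go hunks that follow) replace the old `ioutil.TempDir` / `os.RemoveAll` handling with `hashicorp/go-safetemp`, whose source is vendored further down in this diff. The sketch below is editor-added illustration only, not part of the diff itself; it shows the `safetemp.Dir` pattern those getters now share, written against the canonical upstream import path.

```go
package main

import (
	"fmt"
	"log"

	safetemp "github.com/hashicorp/go-safetemp"
)

func main() {
	// safetemp.Dir returns a child path that does not exist yet ("<tmpdir>/temp")
	// together with an io.Closer that removes the whole temporary parent directory.
	td, closer, err := safetemp.Dir("", "getter")
	if err != nil {
		log.Fatal(err)
	}
	defer closer.Close() // one deferred call cleans up everything created under td

	// td can be handed to code that insists on creating the directory itself,
	// which is what the go-getter file getters need.
	fmt.Println("work inside:", td)
}
```

The non-existent child path is the point of the change: the previous code had to create a temp directory and immediately `os.RemoveAll` it just to obtain a name the getters could create themselves.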
diff --git a/vendor/github.com/hashicorp/go-getter/get_hg.go b/vendor/github.com/hashicorp/go-getter/get_hg.go index 820bdd488e1..f3869227057 100644 --- a/vendor/github.com/hashicorp/go-getter/get_hg.go +++ b/vendor/github.com/hashicorp/go-getter/get_hg.go @@ -2,7 +2,6 @@ package getter import ( "fmt" - "io/ioutil" "net/url" "os" "os/exec" @@ -10,6 +9,7 @@ import ( "runtime" urlhelper "github.com/hashicorp/go-getter/helper/url" + "github.com/hashicorp/go-safetemp" ) // HgGetter is a Getter implementation that will download a module from @@ -64,13 +64,13 @@ func (g *HgGetter) Get(dst string, u *url.URL) error { // GetFile for Hg doesn't support updating at this time. It will download // the file every time. func (g *HgGetter) GetFile(dst string, u *url.URL) error { - td, err := ioutil.TempDir("", "getter-hg") + // Create a temporary directory to store the full source. This has to be + // a non-existent directory. + td, tdcloser, err := safetemp.Dir("", "getter") if err != nil { return err } - if err := os.RemoveAll(td); err != nil { - return err - } + defer tdcloser.Close() // Get the filename, and strip the filename from the URL so we can // just get the repository directly. diff --git a/vendor/github.com/hashicorp/go-getter/get_http.go b/vendor/github.com/hashicorp/go-getter/get_http.go index 9acc72cd720..d2e28796d8f 100644 --- a/vendor/github.com/hashicorp/go-getter/get_http.go +++ b/vendor/github.com/hashicorp/go-getter/get_http.go @@ -4,12 +4,13 @@ import ( "encoding/xml" "fmt" "io" - "io/ioutil" "net/http" "net/url" "os" "path/filepath" "strings" + + "github.com/hashicorp/go-safetemp" ) // HttpGetter is a Getter implementation that will download from an HTTP @@ -135,25 +136,27 @@ func (g *HttpGetter) GetFile(dst string, u *url.URL) error { if err != nil { return err } - defer f.Close() - _, err = io.Copy(f, resp.Body) + n, err := io.Copy(f, resp.Body) + if err == nil && n < resp.ContentLength { + err = io.ErrShortWrite + } + if err1 := f.Close(); err == nil { + err = err1 + } return err } // getSubdir downloads the source into the destination, but with // the proper subdir. func (g *HttpGetter) getSubdir(dst, source, subDir string) error { - // Create a temporary directory to store the full source - td, err := ioutil.TempDir("", "tf") + // Create a temporary directory to store the full source. This has to be + // a non-existent directory. + td, tdcloser, err := safetemp.Dir("", "getter") if err != nil { return err } - defer os.RemoveAll(td) - - // We have to create a subdirectory that doesn't exist for the file - // getter to work. - td = filepath.Join(td, "data") + defer tdcloser.Close() // Download that into the given directory if err := Get(td, source); err != nil { diff --git a/vendor/github.com/hashicorp/go-multierror/Makefile b/vendor/github.com/hashicorp/go-multierror/Makefile new file mode 100644 index 00000000000..b97cd6ed02b --- /dev/null +++ b/vendor/github.com/hashicorp/go-multierror/Makefile @@ -0,0 +1,31 @@ +TEST?=./... + +default: test + +# test runs the test suite and vets the code. +test: generate + @echo "==> Running tests..." + @go list $(TEST) \ + | grep -v "/vendor/" \ + | xargs -n1 go test -timeout=60s -parallel=10 ${TESTARGS} + +# testrace runs the race checker +testrace: generate + @echo "==> Running tests (race)..." 
+ @go list $(TEST) \ + | grep -v "/vendor/" \ + | xargs -n1 go test -timeout=60s -race ${TESTARGS} + +# updatedeps installs all the dependencies needed to run and build. +updatedeps: + @sh -c "'${CURDIR}/scripts/deps.sh' '${NAME}'" + +# generate runs `go generate` to build the dynamically generated source files. +generate: + @echo "==> Generating..." + @find . -type f -name '.DS_Store' -delete + @go list ./... \ + | grep -v "/vendor/" \ + | xargs -n1 go generate + +.PHONY: default test testrace updatedeps generate diff --git a/vendor/github.com/hashicorp/go-multierror/README.md b/vendor/github.com/hashicorp/go-multierror/README.md index e81be50e0d3..ead5830f7b7 100644 --- a/vendor/github.com/hashicorp/go-multierror/README.md +++ b/vendor/github.com/hashicorp/go-multierror/README.md @@ -1,5 +1,11 @@ # go-multierror +[![Build Status](http://img.shields.io/travis/hashicorp/go-multierror.svg?style=flat-square)][travis] +[![Go Documentation](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)][godocs] + +[travis]: https://travis-ci.org/hashicorp/go-multierror +[godocs]: https://godoc.org/github.com/hashicorp/go-multierror + `go-multierror` is a package for Go that provides a mechanism for representing a list of `error` values as a single `error`. diff --git a/vendor/github.com/hashicorp/go-multierror/append.go b/vendor/github.com/hashicorp/go-multierror/append.go index 00afa9b3516..775b6e753e7 100644 --- a/vendor/github.com/hashicorp/go-multierror/append.go +++ b/vendor/github.com/hashicorp/go-multierror/append.go @@ -18,9 +18,13 @@ func Append(err error, errs ...error) *Error { for _, e := range errs { switch e := e.(type) { case *Error: - err.Errors = append(err.Errors, e.Errors...) + if e != nil { + err.Errors = append(err.Errors, e.Errors...) + } default: - err.Errors = append(err.Errors, e) + if e != nil { + err.Errors = append(err.Errors, e) + } } } diff --git a/vendor/github.com/hashicorp/go-multierror/format.go b/vendor/github.com/hashicorp/go-multierror/format.go index bb65a12e743..47f13c49a67 100644 --- a/vendor/github.com/hashicorp/go-multierror/format.go +++ b/vendor/github.com/hashicorp/go-multierror/format.go @@ -12,12 +12,16 @@ type ErrorFormatFunc func([]error) string // ListFormatFunc is a basic formatter that outputs the number of errors // that occurred along with a bullet point list of the errors. 
func ListFormatFunc(es []error) string { + if len(es) == 1 { + return fmt.Sprintf("1 error occurred:\n\t* %s\n\n", es[0]) + } + points := make([]string, len(es)) for i, err := range es { points[i] = fmt.Sprintf("* %s", err) } return fmt.Sprintf( - "%d error(s) occurred:\n\n%s", - len(es), strings.Join(points, "\n")) + "%d errors occurred:\n\t%s\n\n", + len(es), strings.Join(points, "\n\t")) } diff --git a/vendor/github.com/hashicorp/go-multierror/go.mod b/vendor/github.com/hashicorp/go-multierror/go.mod new file mode 100644 index 00000000000..2534331d5f9 --- /dev/null +++ b/vendor/github.com/hashicorp/go-multierror/go.mod @@ -0,0 +1,3 @@ +module github.com/hashicorp/go-multierror + +require github.com/hashicorp/errwrap v1.0.0 diff --git a/vendor/github.com/hashicorp/go-multierror/go.sum b/vendor/github.com/hashicorp/go-multierror/go.sum new file mode 100644 index 00000000000..85b1f8ff333 --- /dev/null +++ b/vendor/github.com/hashicorp/go-multierror/go.sum @@ -0,0 +1,4 @@ +github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce h1:prjrVgOk2Yg6w+PflHoszQNLTUh4kaByUcEWM/9uin4= +github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= +github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA= +github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= diff --git a/vendor/github.com/hashicorp/go-multierror/multierror.go b/vendor/github.com/hashicorp/go-multierror/multierror.go index 2ea08273290..89b1422d1d1 100644 --- a/vendor/github.com/hashicorp/go-multierror/multierror.go +++ b/vendor/github.com/hashicorp/go-multierror/multierror.go @@ -40,11 +40,11 @@ func (e *Error) GoString() string { } // WrappedErrors returns the list of errors that this Error is wrapping. -// It is an implementatin of the errwrap.Wrapper interface so that +// It is an implementation of the errwrap.Wrapper interface so that // multierror.Error can be used with that library. // // This method is not safe to be called concurrently and is no different -// than accessing the Errors field directly. It is implementd only to +// than accessing the Errors field directly. It is implemented only to // satisfy the errwrap.Wrapper interface. func (e *Error) WrappedErrors() []error { return e.Errors diff --git a/vendor/github.com/hashicorp/go-multierror/sort.go b/vendor/github.com/hashicorp/go-multierror/sort.go new file mode 100644 index 00000000000..fecb14e81c5 --- /dev/null +++ b/vendor/github.com/hashicorp/go-multierror/sort.go @@ -0,0 +1,16 @@ +package multierror + +// Len implements sort.Interface function for length +func (err Error) Len() int { + return len(err.Errors) +} + +// Swap implements sort.Interface function for swapping elements +func (err Error) Swap(i, j int) { + err.Errors[i], err.Errors[j] = err.Errors[j], err.Errors[i] +} + +// Less implements sort.Interface function for determining order +func (err Error) Less(i, j int) bool { + return err.Errors[i].Error() < err.Errors[j].Error() +} diff --git a/vendor/github.com/hashicorp/go-safetemp/LICENSE b/vendor/github.com/hashicorp/go-safetemp/LICENSE new file mode 100644 index 00000000000..be2cc4dfb60 --- /dev/null +++ b/vendor/github.com/hashicorp/go-safetemp/LICENSE @@ -0,0 +1,362 @@ +Mozilla Public License, version 2.0 + +1. Definitions + +1.1. 
"Contributor" + + means each individual or legal entity that creates, contributes to the + creation of, or owns Covered Software. + +1.2. "Contributor Version" + + means the combination of the Contributions of others (if any) used by a + Contributor and that particular Contributor's Contribution. + +1.3. "Contribution" + + means Covered Software of a particular Contributor. + +1.4. "Covered Software" + + means Source Code Form to which the initial Contributor has attached the + notice in Exhibit A, the Executable Form of such Source Code Form, and + Modifications of such Source Code Form, in each case including portions + thereof. + +1.5. "Incompatible With Secondary Licenses" + means + + a. that the initial Contributor has attached the notice described in + Exhibit B to the Covered Software; or + + b. that the Covered Software was made available under the terms of + version 1.1 or earlier of the License, but not also under the terms of + a Secondary License. + +1.6. "Executable Form" + + means any form of the work other than Source Code Form. + +1.7. "Larger Work" + + means a work that combines Covered Software with other material, in a + separate file or files, that is not Covered Software. + +1.8. "License" + + means this document. + +1.9. "Licensable" + + means having the right to grant, to the maximum extent possible, whether + at the time of the initial grant or subsequently, any and all of the + rights conveyed by this License. + +1.10. "Modifications" + + means any of the following: + + a. any file in Source Code Form that results from an addition to, + deletion from, or modification of the contents of Covered Software; or + + b. any new file in Source Code Form that contains any Covered Software. + +1.11. "Patent Claims" of a Contributor + + means any patent claim(s), including without limitation, method, + process, and apparatus claims, in any patent Licensable by such + Contributor that would be infringed, but for the grant of the License, + by the making, using, selling, offering for sale, having made, import, + or transfer of either its Contributions or its Contributor Version. + +1.12. "Secondary License" + + means either the GNU General Public License, Version 2.0, the GNU Lesser + General Public License, Version 2.1, the GNU Affero General Public + License, Version 3.0, or any later versions of those licenses. + +1.13. "Source Code Form" + + means the form of the work preferred for making modifications. + +1.14. "You" (or "Your") + + means an individual or a legal entity exercising rights under this + License. For legal entities, "You" includes any entity that controls, is + controlled by, or is under common control with You. For purposes of this + definition, "control" means (a) the power, direct or indirect, to cause + the direction or management of such entity, whether by contract or + otherwise, or (b) ownership of more than fifty percent (50%) of the + outstanding shares or beneficial ownership of such entity. + + +2. License Grants and Conditions + +2.1. Grants + + Each Contributor hereby grants You a world-wide, royalty-free, + non-exclusive license: + + a. under intellectual property rights (other than patent or trademark) + Licensable by such Contributor to use, reproduce, make available, + modify, display, perform, distribute, and otherwise exploit its + Contributions, either on an unmodified basis, with Modifications, or + as part of a Larger Work; and + + b. 
under Patent Claims of such Contributor to make, use, sell, offer for + sale, have made, import, and otherwise transfer either its + Contributions or its Contributor Version. + +2.2. Effective Date + + The licenses granted in Section 2.1 with respect to any Contribution + become effective for each Contribution on the date the Contributor first + distributes such Contribution. + +2.3. Limitations on Grant Scope + + The licenses granted in this Section 2 are the only rights granted under + this License. No additional rights or licenses will be implied from the + distribution or licensing of Covered Software under this License. + Notwithstanding Section 2.1(b) above, no patent license is granted by a + Contributor: + + a. for any code that a Contributor has removed from Covered Software; or + + b. for infringements caused by: (i) Your and any other third party's + modifications of Covered Software, or (ii) the combination of its + Contributions with other software (except as part of its Contributor + Version); or + + c. under Patent Claims infringed by Covered Software in the absence of + its Contributions. + + This License does not grant any rights in the trademarks, service marks, + or logos of any Contributor (except as may be necessary to comply with + the notice requirements in Section 3.4). + +2.4. Subsequent Licenses + + No Contributor makes additional grants as a result of Your choice to + distribute the Covered Software under a subsequent version of this + License (see Section 10.2) or under the terms of a Secondary License (if + permitted under the terms of Section 3.3). + +2.5. Representation + + Each Contributor represents that the Contributor believes its + Contributions are its original creation(s) or it has sufficient rights to + grant the rights to its Contributions conveyed by this License. + +2.6. Fair Use + + This License is not intended to limit any rights You have under + applicable copyright doctrines of fair use, fair dealing, or other + equivalents. + +2.7. Conditions + + Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in + Section 2.1. + + +3. Responsibilities + +3.1. Distribution of Source Form + + All distribution of Covered Software in Source Code Form, including any + Modifications that You create or to which You contribute, must be under + the terms of this License. You must inform recipients that the Source + Code Form of the Covered Software is governed by the terms of this + License, and how they can obtain a copy of this License. You may not + attempt to alter or restrict the recipients' rights in the Source Code + Form. + +3.2. Distribution of Executable Form + + If You distribute Covered Software in Executable Form then: + + a. such Covered Software must also be made available in Source Code Form, + as described in Section 3.1, and You must inform recipients of the + Executable Form how they can obtain a copy of such Source Code Form by + reasonable means in a timely manner, at a charge no more than the cost + of distribution to the recipient; and + + b. You may distribute such Executable Form under the terms of this + License, or sublicense it under different terms, provided that the + license for the Executable Form does not attempt to limit or alter the + recipients' rights in the Source Code Form under this License. + +3.3. Distribution of a Larger Work + + You may create and distribute a Larger Work under terms of Your choice, + provided that You also comply with the requirements of this License for + the Covered Software. 
If the Larger Work is a combination of Covered + Software with a work governed by one or more Secondary Licenses, and the + Covered Software is not Incompatible With Secondary Licenses, this + License permits You to additionally distribute such Covered Software + under the terms of such Secondary License(s), so that the recipient of + the Larger Work may, at their option, further distribute the Covered + Software under the terms of either this License or such Secondary + License(s). + +3.4. Notices + + You may not remove or alter the substance of any license notices + (including copyright notices, patent notices, disclaimers of warranty, or + limitations of liability) contained within the Source Code Form of the + Covered Software, except that You may alter any license notices to the + extent required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms + + You may choose to offer, and to charge a fee for, warranty, support, + indemnity or liability obligations to one or more recipients of Covered + Software. However, You may do so only on Your own behalf, and not on + behalf of any Contributor. You must make it absolutely clear that any + such warranty, support, indemnity, or liability obligation is offered by + You alone, and You hereby agree to indemnify every Contributor for any + liability incurred by such Contributor as a result of warranty, support, + indemnity or liability terms You offer. You may include additional + disclaimers of warranty and limitations of liability specific to any + jurisdiction. + +4. Inability to Comply Due to Statute or Regulation + + If it is impossible for You to comply with any of the terms of this License + with respect to some or all of the Covered Software due to statute, + judicial order, or regulation then You must: (a) comply with the terms of + this License to the maximum extent possible; and (b) describe the + limitations and the code they affect. Such description must be placed in a + text file included with all distributions of the Covered Software under + this License. Except to the extent prohibited by statute or regulation, + such description must be sufficiently detailed for a recipient of ordinary + skill to be able to understand it. + +5. Termination + +5.1. The rights granted under this License will terminate automatically if You + fail to comply with any of its terms. However, if You become compliant, + then the rights granted under this License from a particular Contributor + are reinstated (a) provisionally, unless and until such Contributor + explicitly and finally terminates Your grants, and (b) on an ongoing + basis, if such Contributor fails to notify You of the non-compliance by + some reasonable means prior to 60 days after You have come back into + compliance. Moreover, Your grants from a particular Contributor are + reinstated on an ongoing basis if such Contributor notifies You of the + non-compliance by some reasonable means, this is the first time You have + received notice of non-compliance with this License from such + Contributor, and You become compliant prior to 30 days after Your receipt + of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent + infringement claim (excluding declaratory judgment actions, + counter-claims, and cross-claims) alleging that a Contributor Version + directly or indirectly infringes any patent, then the rights granted to + You by any and all Contributors for the Covered Software under Section + 2.1 of this License shall terminate. 
+ +5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user + license agreements (excluding distributors and resellers) which have been + validly granted by You or Your distributors under this License prior to + termination shall survive termination. + +6. Disclaimer of Warranty + + Covered Software is provided under this License on an "as is" basis, + without warranty of any kind, either expressed, implied, or statutory, + including, without limitation, warranties that the Covered Software is free + of defects, merchantable, fit for a particular purpose or non-infringing. + The entire risk as to the quality and performance of the Covered Software + is with You. Should any Covered Software prove defective in any respect, + You (not any Contributor) assume the cost of any necessary servicing, + repair, or correction. This disclaimer of warranty constitutes an essential + part of this License. No use of any Covered Software is authorized under + this License except under this disclaimer. + +7. Limitation of Liability + + Under no circumstances and under no legal theory, whether tort (including + negligence), contract, or otherwise, shall any Contributor, or anyone who + distributes Covered Software as permitted above, be liable to You for any + direct, indirect, special, incidental, or consequential damages of any + character including, without limitation, damages for lost profits, loss of + goodwill, work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses, even if such party shall have been + informed of the possibility of such damages. This limitation of liability + shall not apply to liability for death or personal injury resulting from + such party's negligence to the extent applicable law prohibits such + limitation. Some jurisdictions do not allow the exclusion or limitation of + incidental or consequential damages, so this exclusion and limitation may + not apply to You. + +8. Litigation + + Any litigation relating to this License may be brought only in the courts + of a jurisdiction where the defendant maintains its principal place of + business and such litigation shall be governed by laws of that + jurisdiction, without reference to its conflict-of-law provisions. Nothing + in this Section shall prevent a party's ability to bring cross-claims or + counter-claims. + +9. Miscellaneous + + This License represents the complete agreement concerning the subject + matter hereof. If any provision of this License is held to be + unenforceable, such provision shall be reformed only to the extent + necessary to make it enforceable. Any law or regulation which provides that + the language of a contract shall be construed against the drafter shall not + be used to construe this License against a Contributor. + + +10. Versions of the License + +10.1. New Versions + + Mozilla Foundation is the license steward. Except as provided in Section + 10.3, no one other than the license steward has the right to modify or + publish new versions of this License. Each version will be given a + distinguishing version number. + +10.2. Effect of New Versions + + You may distribute the Covered Software under the terms of the version + of the License under which You originally received the Covered Software, + or under the terms of any subsequent version published by the license + steward. + +10.3. 
Modified Versions + + If you create software not governed by this License, and you want to + create a new license for such software, you may create and use a + modified version of this License if you rename the license and remove + any references to the name of the license steward (except to note that + such modified license differs from this License). + +10.4. Distributing Source Code Form that is Incompatible With Secondary + Licenses If You choose to distribute Source Code Form that is + Incompatible With Secondary Licenses under the terms of this version of + the License, the notice described in Exhibit B of this License must be + attached. + +Exhibit A - Source Code Form License Notice + + This Source Code Form is subject to the + terms of the Mozilla Public License, v. + 2.0. If a copy of the MPL was not + distributed with this file, You can + obtain one at + http://mozilla.org/MPL/2.0/. + +If it is not possible or desirable to put the notice in a particular file, +then You may include the notice in a location (such as a LICENSE file in a +relevant directory) where a recipient would be likely to look for such a +notice. + +You may add additional accurate notices of copyright ownership. + +Exhibit B - "Incompatible With Secondary Licenses" Notice + + This Source Code Form is "Incompatible + With Secondary Licenses", as defined by + the Mozilla Public License, v. 2.0. diff --git a/vendor/github.com/hashicorp/go-safetemp/README.md b/vendor/github.com/hashicorp/go-safetemp/README.md new file mode 100644 index 00000000000..02ece331711 --- /dev/null +++ b/vendor/github.com/hashicorp/go-safetemp/README.md @@ -0,0 +1,10 @@ +# go-safetemp +[![Godoc](https://godoc.org/github.com/hashcorp/go-safetemp?status.svg)](https://godoc.org/github.com/hashicorp/go-safetemp) + +Functions for safely working with temporary directories and files. + +## Why? + +The Go standard library provides the excellent `ioutil` package for +working with temporary directories and files. This library builds on top +of that to provide safe abstractions above that. diff --git a/vendor/github.com/hashicorp/go-safetemp/go.mod b/vendor/github.com/hashicorp/go-safetemp/go.mod new file mode 100644 index 00000000000..02bc5f5bb55 --- /dev/null +++ b/vendor/github.com/hashicorp/go-safetemp/go.mod @@ -0,0 +1 @@ +module github.com/hashicorp/go-safetemp diff --git a/vendor/github.com/hashicorp/go-safetemp/safetemp.go b/vendor/github.com/hashicorp/go-safetemp/safetemp.go new file mode 100644 index 00000000000..c4ae72b7899 --- /dev/null +++ b/vendor/github.com/hashicorp/go-safetemp/safetemp.go @@ -0,0 +1,40 @@ +package safetemp + +import ( + "io" + "io/ioutil" + "os" + "path/filepath" +) + +// Dir creates a new temporary directory that isn't yet created. This +// can be used with calls that expect a non-existent directory. +// +// The directory is created as a child of a temporary directory created +// within the directory dir starting with prefix. The temporary directory +// returned is always named "temp". The parent directory has the specified +// prefix. +// +// The returned io.Closer should be used to clean up the returned directory. +// This will properly remove the returned directory and any other temporary +// files created. +// +// If an error is returned, the Closer does not need to be called (and will +// be nil). 
+func Dir(dir, prefix string) (string, io.Closer, error) { + // Create the temporary directory + td, err := ioutil.TempDir(dir, prefix) + if err != nil { + return "", nil, err + } + + return filepath.Join(td, "temp"), pathCloser(td), nil +} + +// pathCloser implements io.Closer to remove the given path on Close. +type pathCloser string + +// Close deletes this path. +func (p pathCloser) Close() error { + return os.RemoveAll(string(p)) +} diff --git a/vendor/github.com/hashicorp/go-uuid/README.md b/vendor/github.com/hashicorp/go-uuid/README.md index 21fdda4adaf..fbde8b9aef6 100644 --- a/vendor/github.com/hashicorp/go-uuid/README.md +++ b/vendor/github.com/hashicorp/go-uuid/README.md @@ -1,6 +1,6 @@ -# uuid +# uuid [![Build Status](https://travis-ci.org/hashicorp/go-uuid.svg?branch=master)](https://travis-ci.org/hashicorp/go-uuid) -Generates UUID-format strings using purely high quality random bytes. +Generates UUID-format strings using high quality, _purely random_ bytes. It is **not** intended to be RFC compliant, merely to use a well-understood string representation of a 128-bit value. It can also parse UUID-format strings into their component bytes. Documentation ============= diff --git a/vendor/github.com/hashicorp/go-uuid/go.mod b/vendor/github.com/hashicorp/go-uuid/go.mod new file mode 100644 index 00000000000..dd57f9d21ad --- /dev/null +++ b/vendor/github.com/hashicorp/go-uuid/go.mod @@ -0,0 +1 @@ +module github.com/hashicorp/go-uuid diff --git a/vendor/github.com/hashicorp/go-uuid/uuid.go b/vendor/github.com/hashicorp/go-uuid/uuid.go index 322b522c23f..ff9364c4040 100644 --- a/vendor/github.com/hashicorp/go-uuid/uuid.go +++ b/vendor/github.com/hashicorp/go-uuid/uuid.go @@ -6,13 +6,21 @@ import ( "fmt" ) -// GenerateUUID is used to generate a random UUID -func GenerateUUID() (string, error) { - buf := make([]byte, 16) +// GenerateRandomBytes is used to generate random bytes of given size. +func GenerateRandomBytes(size int) ([]byte, error) { + buf := make([]byte, size) if _, err := rand.Read(buf); err != nil { - return "", fmt.Errorf("failed to read random bytes: %v", err) + return nil, fmt.Errorf("failed to read random bytes: %v", err) } + return buf, nil +} +// GenerateUUID is used to generate a random UUID +func GenerateUUID() (string, error) { + buf, err := GenerateRandomBytes(16) + if err != nil { + return "", err + } return FormatUUID(buf) } diff --git a/vendor/github.com/hashicorp/go-version/constraint.go b/vendor/github.com/hashicorp/go-version/constraint.go index 8c73df0602b..d055759611c 100644 --- a/vendor/github.com/hashicorp/go-version/constraint.go +++ b/vendor/github.com/hashicorp/go-version/constraint.go @@ -2,6 +2,7 @@ package version import ( "fmt" + "reflect" "regexp" "strings" ) @@ -113,6 +114,26 @@ func parseSingle(v string) (*Constraint, error) { }, nil } +func prereleaseCheck(v, c *Version) bool { + switch vPre, cPre := v.Prerelease() != "", c.Prerelease() != ""; { + case cPre && vPre: + // A constraint with a pre-release can only match a pre-release version + // with the same base segments. + return reflect.DeepEqual(c.Segments64(), v.Segments64()) + + case !cPre && vPre: + // A constraint without a pre-release can only match a version without a + // pre-release. 
+ return false + + case cPre && !vPre: + // OK, except with the pessimistic operator + case !cPre && !vPre: + // OK + } + return true +} + //------------------------------------------------------------------- // Constraint functions //------------------------------------------------------------------- @@ -126,22 +147,27 @@ func constraintNotEqual(v, c *Version) bool { } func constraintGreaterThan(v, c *Version) bool { - return v.Compare(c) == 1 + return prereleaseCheck(v, c) && v.Compare(c) == 1 } func constraintLessThan(v, c *Version) bool { - return v.Compare(c) == -1 + return prereleaseCheck(v, c) && v.Compare(c) == -1 } func constraintGreaterThanEqual(v, c *Version) bool { - return v.Compare(c) >= 0 + return prereleaseCheck(v, c) && v.Compare(c) >= 0 } func constraintLessThanEqual(v, c *Version) bool { - return v.Compare(c) <= 0 + return prereleaseCheck(v, c) && v.Compare(c) <= 0 } func constraintPessimistic(v, c *Version) bool { + // Using a pessimistic constraint with a pre-release, restricts versions to pre-releases + if !prereleaseCheck(v, c) || (c.Prerelease() != "" && v.Prerelease() == "") { + return false + } + // If the version being checked is naturally less than the constraint, then there // is no way for the version to be valid against the constraint if v.LessThan(c) { diff --git a/vendor/github.com/hashicorp/go-version/go.mod b/vendor/github.com/hashicorp/go-version/go.mod new file mode 100644 index 00000000000..f5285555fa8 --- /dev/null +++ b/vendor/github.com/hashicorp/go-version/go.mod @@ -0,0 +1 @@ +module github.com/hashicorp/go-version diff --git a/vendor/github.com/hashicorp/go-version/version.go b/vendor/github.com/hashicorp/go-version/version.go index ae2f6b63a75..4d1e6e2210c 100644 --- a/vendor/github.com/hashicorp/go-version/version.go +++ b/vendor/github.com/hashicorp/go-version/version.go @@ -15,8 +15,8 @@ var versionRegexp *regexp.Regexp // The raw regular expression string used for testing the validity // of a version. const VersionRegexpRaw string = `v?([0-9]+(\.[0-9]+)*?)` + - `(-?([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?` + - `(\+([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?` + + `(-([0-9]+[0-9A-Za-z\-~]*(\.[0-9A-Za-z\-~]+)*)|(-?([A-Za-z\-~]+[0-9A-Za-z\-~]*(\.[0-9A-Za-z\-~]+)*)))?` + + `(\+([0-9A-Za-z\-~]+(\.[0-9A-Za-z\-~]+)*))?` + `?` // Version represents a single version. 
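To make the effect of the new `prereleaseCheck` concrete, the sketch below exercises the public go-version API; the version numbers are made up for illustration.

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

func main() {
	c, _ := version.NewConstraint(">= 1.2.0")
	stable, _ := version.NewVersion("1.3.0")
	beta, _ := version.NewVersion("1.3.0-beta1")

	fmt.Println(c.Check(stable)) // true
	fmt.Println(c.Check(beta))   // false: a constraint without a pre-release no longer matches one

	// A constraint that names a pre-release matches pre-release versions
	// that share the same base segments.
	pre, _ := version.NewConstraint(">= 1.3.0-alpha1")
	fmt.Println(pre.Check(beta)) // true
}
```

This follows the common semver expectation that pre-release versions only satisfy ranges that explicitly opt into them.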
@@ -25,6 +25,7 @@ type Version struct { pre string segments []int64 si int + original string } func init() { @@ -59,11 +60,17 @@ func NewVersion(v string) (*Version, error) { segments = append(segments, 0) } + pre := matches[7] + if pre == "" { + pre = matches[4] + } + return &Version{ - metadata: matches[7], - pre: matches[4], + metadata: matches[10], + pre: pre, segments: segments, si: si, + original: v, }, nil } @@ -166,24 +173,42 @@ func comparePart(preSelf string, preOther string) int { return 0 } + var selfInt int64 + selfNumeric := true + selfInt, err := strconv.ParseInt(preSelf, 10, 64) + if err != nil { + selfNumeric = false + } + + var otherInt int64 + otherNumeric := true + otherInt, err = strconv.ParseInt(preOther, 10, 64) + if err != nil { + otherNumeric = false + } + // if a part is empty, we use the other to decide if preSelf == "" { - _, notIsNumeric := strconv.ParseInt(preOther, 10, 64) - if notIsNumeric == nil { + if otherNumeric { return -1 } return 1 } if preOther == "" { - _, notIsNumeric := strconv.ParseInt(preSelf, 10, 64) - if notIsNumeric == nil { + if selfNumeric { return 1 } return -1 } - if preSelf > preOther { + if selfNumeric && !otherNumeric { + return -1 + } else if !selfNumeric && otherNumeric { + return 1 + } else if !selfNumeric && !otherNumeric && preSelf > preOther { + return 1 + } else if selfInt > otherInt { return 1 } @@ -283,11 +308,19 @@ func (v *Version) Segments() []int { // for a version "1.2.3-beta", segments will return a slice of // 1, 2, 3. func (v *Version) Segments64() []int64 { - return v.segments + result := make([]int64, len(v.segments)) + copy(result, v.segments) + return result } // String returns the full version string included pre-release // and metadata information. +// +// This value is rebuilt according to the parsed segments and other +// information. Therefore, ambiguities in the version string such as +// prefixed zeroes (1.04.0 => 1.4.0), `v` prefix (v1.0.0 => 1.0.0), and +// missing parts (1.0 => 1.0.0) will be made into a canonicalized form +// as shown in the parenthesized examples. func (v *Version) String() string { var buf bytes.Buffer fmtParts := make([]string, len(v.segments)) @@ -306,3 +339,9 @@ func (v *Version) String() string { return buf.String() } + +// Original returns the original parsed version as-is, including any +// potential whitespace, `v` prefix, etc. 
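The canonicalization note on `String` and the new `Original` accessor are easiest to see side by side. A small sketch with an illustrative input:

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

func main() {
	v, err := version.NewVersion("v1.04")
	if err != nil {
		panic(err)
	}

	fmt.Println(v.String())   // "1.4.0": v prefix dropped, leading zero removed, missing part filled in
	fmt.Println(v.Original()) // "v1.04": the input exactly as it was parsed
}
```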
+func (v *Version) Original() string { + return v.original +} diff --git a/vendor/github.com/hashicorp/logutils/go.mod b/vendor/github.com/hashicorp/logutils/go.mod new file mode 100644 index 00000000000..ba38a457646 --- /dev/null +++ b/vendor/github.com/hashicorp/logutils/go.mod @@ -0,0 +1 @@ +module github.com/hashicorp/logutils diff --git a/vendor/github.com/hashicorp/terraform/config/interpolate_funcs.go b/vendor/github.com/hashicorp/terraform/config/interpolate_funcs.go index 72be817a0a6..b94fca8896a 100644 --- a/vendor/github.com/hashicorp/terraform/config/interpolate_funcs.go +++ b/vendor/github.com/hashicorp/terraform/config/interpolate_funcs.go @@ -1698,7 +1698,7 @@ func interpolationFuncRsaDecrypt() ast.Function { b, err := base64.StdEncoding.DecodeString(s) if err != nil { - return "", fmt.Errorf("Failed to decode input %q: cipher text must be base64-encoded", key) + return "", fmt.Errorf("Failed to decode input %q: cipher text must be base64-encoded", s) } block, _ := pem.Decode([]byte(key)) diff --git a/vendor/github.com/hashicorp/terraform/config/module/get.go b/vendor/github.com/hashicorp/terraform/config/module/get.go index 58515ab3633..5073d0d2715 100644 --- a/vendor/github.com/hashicorp/terraform/config/module/get.go +++ b/vendor/github.com/hashicorp/terraform/config/module/get.go @@ -3,6 +3,7 @@ package module import ( "io/ioutil" "os" + "path/filepath" "github.com/hashicorp/go-getter" ) @@ -37,13 +38,10 @@ func GetCopy(dst, src string) error { if err != nil { return err } - // FIXME: This isn't completely safe. Creating and removing our temp path - // exposes where to race to inject files. - if err := os.RemoveAll(tmpDir); err != nil { - return err - } defer os.RemoveAll(tmpDir) + tmpDir = filepath.Join(tmpDir, "module") + // Get to that temporary dir if err := getter.Get(tmpDir, src); err != nil { return err diff --git a/vendor/github.com/hashicorp/terraform/config/module/storage.go b/vendor/github.com/hashicorp/terraform/config/module/storage.go index c1588d684f4..4b828dcb083 100644 --- a/vendor/github.com/hashicorp/terraform/config/module/storage.go +++ b/vendor/github.com/hashicorp/terraform/config/module/storage.go @@ -11,7 +11,6 @@ import ( getter "github.com/hashicorp/go-getter" "github.com/hashicorp/terraform/registry" "github.com/hashicorp/terraform/registry/regsrc" - "github.com/hashicorp/terraform/svchost/auth" "github.com/hashicorp/terraform/svchost/disco" "github.com/mitchellh/cli" ) @@ -64,22 +63,19 @@ type Storage struct { // StorageDir is the full path to the directory where all modules will be // stored. StorageDir string - // Services is a required *disco.Disco, which may have services and - // credentials pre-loaded. - Services *disco.Disco - // Creds optionally provides credentials for communicating with service - // providers. - Creds auth.CredentialsSource + // Ui is an optional cli.Ui for user output Ui cli.Ui + // Mode is the GetMode that will be used for various operations. Mode GetMode registry *registry.Client } -func NewStorage(dir string, services *disco.Disco, creds auth.CredentialsSource) *Storage { - regClient := registry.NewClient(services, creds, nil) +// NewStorage returns a new initialized Storage object. 
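Since `NewStorage` no longer takes a credentials source, a caller that previously passed one is expected to attach it to the `disco.Disco` instead (see the `NewWithCredentialsSource` constructor later in this diff). A rough sketch of the adjusted call, with a placeholder storage directory:

```go
package moduleinit

import (
	"github.com/hashicorp/terraform/config/module"
	"github.com/hashicorp/terraform/svchost/auth"
	"github.com/hashicorp/terraform/svchost/disco"
)

func newModuleStorage(creds auth.CredentialsSource) *module.Storage {
	// Credentials now travel with the discovery object rather than being a
	// separate argument to NewStorage.
	services := disco.NewWithCredentialsSource(creds)
	return module.NewStorage(".terraform/modules", services)
}
```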
+func NewStorage(dir string, services *disco.Disco) *Storage { + regClient := registry.NewClient(services, nil) return &Storage{ StorageDir: dir, diff --git a/vendor/github.com/hashicorp/terraform/config/module/validate_provider_alias.go b/vendor/github.com/hashicorp/terraform/config/module/validate_provider_alias.go index 090d4f7e398..f203556c10d 100644 --- a/vendor/github.com/hashicorp/terraform/config/module/validate_provider_alias.go +++ b/vendor/github.com/hashicorp/terraform/config/module/validate_provider_alias.go @@ -67,7 +67,7 @@ func (t *Tree) validateProviderAlias() error { // We didn't find the alias, error! err = multierror.Append(err, fmt.Errorf( - "module %s: provider alias must be defined by the module or a parent: %s", + "module %s: provider alias must be defined by the module: %s", strings.Join(pv.Path, "."), k)) } } diff --git a/vendor/github.com/hashicorp/terraform/helper/logging/transport.go b/vendor/github.com/hashicorp/terraform/helper/logging/transport.go index 44779248790..bddabe647a9 100644 --- a/vendor/github.com/hashicorp/terraform/helper/logging/transport.go +++ b/vendor/github.com/hashicorp/terraform/helper/logging/transport.go @@ -1,9 +1,12 @@ package logging import ( + "bytes" + "encoding/json" "log" "net/http" "net/http/httputil" + "strings" ) type transport struct { @@ -15,7 +18,7 @@ func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) { if IsDebugOrHigher() { reqData, err := httputil.DumpRequestOut(req, true) if err == nil { - log.Printf("[DEBUG] "+logReqMsg, t.name, string(reqData)) + log.Printf("[DEBUG] "+logReqMsg, t.name, prettyPrintJsonLines(reqData)) } else { log.Printf("[ERROR] %s API Request error: %#v", t.name, err) } @@ -29,7 +32,7 @@ func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) { if IsDebugOrHigher() { respData, err := httputil.DumpResponse(resp, true) if err == nil { - log.Printf("[DEBUG] "+logRespMsg, t.name, string(respData)) + log.Printf("[DEBUG] "+logRespMsg, t.name, prettyPrintJsonLines(respData)) } else { log.Printf("[ERROR] %s API Response error: %#v", t.name, err) } @@ -42,6 +45,20 @@ func NewTransport(name string, t http.RoundTripper) *transport { return &transport{name, t} } +// prettyPrintJsonLines iterates through a []byte line-by-line, +// transforming any lines that are complete json into pretty-printed json. +func prettyPrintJsonLines(b []byte) string { + parts := strings.Split(string(b), "\n") + for i, p := range parts { + if b := []byte(p); json.Valid(b) { + var out bytes.Buffer + json.Indent(&out, b, "", " ") + parts[i] = out.String() + } + } + return strings.Join(parts, "\n") +} + const logReqMsg = `%s API Request Details: ---[ REQUEST ]--------------------------------------- %s diff --git a/vendor/github.com/hashicorp/terraform/helper/resource/testing.go b/vendor/github.com/hashicorp/terraform/helper/resource/testing.go index 27bfc9b5a5e..b97673fdf5d 100644 --- a/vendor/github.com/hashicorp/terraform/helper/resource/testing.go +++ b/vendor/github.com/hashicorp/terraform/helper/resource/testing.go @@ -266,6 +266,15 @@ type TestStep struct { // below. PreConfig func() + // Taint is a list of resource addresses to taint prior to the execution of + // the step. Be sure to only include this at a step where the referenced + // address will be present in state, as it will fail the test if the resource + // is missing. 
+ // + // This option is ignored on ImportState tests, and currently only works for + // resources in the root module path. + Taint []string + //--------------------------------------------------------------- // Test modes. One of the following groups of settings must be // set to determine what the test step will do. Ideally we would've @@ -409,6 +418,17 @@ func LogOutput(t TestT) (logOutput io.Writer, err error) { return } +// ParallelTest performs an acceptance test on a resource, allowing concurrency +// with other ParallelTest. +// +// Tests will fail if they do not properly handle conditions to allow multiple +// tests to occur against the same resource or service (e.g. random naming). +// All other requirements of the Test function also apply to this function. +func ParallelTest(t TestT, c TestCase) { + t.Parallel() + Test(t, c) +} + // Test performs an acceptance test on a resource. // // Tests are not run unless an environmental variable "TF_ACC" is @@ -1119,6 +1139,7 @@ type TestT interface { Fatal(args ...interface{}) Skip(args ...interface{}) Name() string + Parallel() } // This is set to true by unit tests to alter some behavior diff --git a/vendor/github.com/hashicorp/terraform/helper/resource/testing_config.go b/vendor/github.com/hashicorp/terraform/helper/resource/testing_config.go index 300a9ea6eec..033f1266d6c 100644 --- a/vendor/github.com/hashicorp/terraform/helper/resource/testing_config.go +++ b/vendor/github.com/hashicorp/terraform/helper/resource/testing_config.go @@ -1,6 +1,7 @@ package resource import ( + "errors" "fmt" "log" "strings" @@ -21,6 +22,14 @@ func testStep( opts terraform.ContextOpts, state *terraform.State, step TestStep) (*terraform.State, error) { + // Pre-taint any resources that have been defined in Taint, as long as this + // is not a destroy step. + if !step.Destroy { + if err := testStepTaint(state, step); err != nil { + return state, err + } + } + mod, err := testModule(opts, step) if err != nil { return state, err @@ -154,3 +163,19 @@ func testStep( // Made it here? Good job test step! return state, nil } + +func testStepTaint(state *terraform.State, step TestStep) error { + for _, p := range step.Taint { + m := state.RootModule() + if m == nil { + return errors.New("no state") + } + rs, ok := m.Resources[p] + if !ok { + return fmt.Errorf("resource %q not found in state", p) + } + log.Printf("[WARN] Test: Explicitly tainting resource %q", p) + rs.Taint() + } + return nil +} diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/data_source_resource_shim.go b/vendor/github.com/hashicorp/terraform/helper/schema/data_source_resource_shim.go index 5a03d2d8018..8d93750aede 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/data_source_resource_shim.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/data_source_resource_shim.go @@ -32,7 +32,7 @@ func DataSourceResourceShim(name string, dataSource *Resource) *Resource { // FIXME: Link to some further docs either on the website or in the // changelog, once such a thing exists. 
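Putting the new `Taint` field and `ParallelTest` together, an acceptance test can force a resource to be replaced between steps. The sketch below is schematic: the `example_thing` resource and the `testAcc...` helpers are placeholders, not identifiers from this repository.

```go
func TestAccExampleThing_recreateAfterTaint(t *testing.T) {
	resource.ParallelTest(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckExampleThingDestroy,
		Steps: []resource.TestStep{
			{
				Config: testAccExampleThingConfig,
				Check:  testAccCheckExampleThingExists("example_thing.test"),
			},
			{
				// Taint the resource created in the previous step so this
				// step's apply must destroy and recreate it. The address
				// must already exist in state, and only root-module
				// resources are supported.
				Taint:  []string{"example_thing.test"},
				Config: testAccExampleThingConfig,
				Check:  testAccCheckExampleThingExists("example_thing.test"),
			},
		},
	})
}
```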
- dataSource.deprecationMessage = fmt.Sprintf( + dataSource.DeprecationMessage = fmt.Sprintf( "using %s as a resource is deprecated; consider using the data source instead", name, ) diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/resource.go b/vendor/github.com/hashicorp/terraform/helper/schema/resource.go index c8e99b945b4..d3be2d61489 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/resource.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/resource.go @@ -124,9 +124,7 @@ type Resource struct { Importer *ResourceImporter // If non-empty, this string is emitted as a warning during Validate. - // This is a private interface for now, for use by DataSourceResourceShim, - // and not for general use. (But maybe later...) - deprecationMessage string + DeprecationMessage string // Timeouts allow users to specify specific time durations in which an // operation should time out, to allow them to extend an action to suit their @@ -269,8 +267,8 @@ func (r *Resource) Diff( func (r *Resource) Validate(c *terraform.ResourceConfig) ([]string, []error) { warns, errs := schemaMap(r.Schema).Validate(c) - if r.deprecationMessage != "" { - warns = append(warns, r.deprecationMessage) + if r.DeprecationMessage != "" { + warns = append(warns, r.DeprecationMessage) } return warns, errs @@ -492,6 +490,12 @@ func (r *Resource) Data(s *terraform.InstanceState) *ResourceData { panic(err) } + // load the Resource timeouts + result.timeouts = r.Timeouts + if result.timeouts == nil { + result.timeouts = &ResourceTimeout{} + } + // Set the schema version to latest by default result.meta = map[string]interface{}{ "schema_version": strconv.Itoa(r.SchemaVersion), diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go b/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go index 9ab8bccaa5b..6cc01ee0bd4 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go @@ -315,6 +315,7 @@ func (d *ResourceData) State() *terraform.InstanceState { mapW := &MapFieldWriter{Schema: d.schema} if err := mapW.WriteField(nil, rawMap); err != nil { + log.Printf("[ERR] Error writing fields: %s", err) return nil } @@ -366,6 +367,13 @@ func (d *ResourceData) State() *terraform.InstanceState { func (d *ResourceData) Timeout(key string) time.Duration { key = strings.ToLower(key) + // System default of 20 minutes + defaultTimeout := 20 * time.Minute + + if d.timeouts == nil { + return defaultTimeout + } + var timeout *time.Duration switch key { case TimeoutCreate: @@ -386,8 +394,7 @@ func (d *ResourceData) Timeout(key string) time.Duration { return *d.timeouts.Default } - // Return system default of 20 minutes - return 20 * time.Minute + return defaultTimeout } func (d *ResourceData) init() { diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/resource_diff.go b/vendor/github.com/hashicorp/terraform/helper/schema/resource_diff.go index 7b3716dd769..7db3decc5f2 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/resource_diff.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/resource_diff.go @@ -135,6 +135,10 @@ type ResourceDiff struct { // diff does not get re-run on keys that were not touched, or diffs that were // just removed (re-running on the latter would just roll back the removal). 
updatedKeys map[string]bool + + // Tracks which keys were flagged as forceNew. These keys are not saved in + // newWriter, but we need to track them so that they can be re-diffed later. + forcedNewKeys map[string]bool } // newResourceDiff creates a new ResourceDiff instance. @@ -193,17 +197,30 @@ func newResourceDiff(schema map[string]*Schema, config *terraform.ResourceConfig } d.updatedKeys = make(map[string]bool) + d.forcedNewKeys = make(map[string]bool) return d } // UpdatedKeys returns the keys that were updated by this ResourceDiff run. // These are the only keys that a diff should be re-calculated for. +// +// This is the combined result of both keys for which diff values were updated +// for or cleared, and also keys that were flagged to be re-diffed as a result +// of ForceNew. func (d *ResourceDiff) UpdatedKeys() []string { var s []string for k := range d.updatedKeys { s = append(s, k) } + for k := range d.forcedNewKeys { + for _, l := range s { + if k == l { + break + } + } + s = append(s, k) + } return s } @@ -214,7 +231,7 @@ func (d *ResourceDiff) UpdatedKeys() []string { // Note that this does not wipe an override. This function is only allowed on // computed keys. func (d *ResourceDiff) Clear(key string) error { - if err := d.checkKey(key, "Clear"); err != nil { + if err := d.checkKey(key, "Clear", true); err != nil { return err } @@ -257,7 +274,7 @@ func (d *ResourceDiff) diffChange(key string) (interface{}, interface{}, bool, b if !old.Exists { old.Value = nil } - if !new.Exists { + if !new.Exists || d.removed(key) { new.Value = nil } @@ -270,7 +287,7 @@ func (d *ResourceDiff) diffChange(key string) (interface{}, interface{}, bool, b // // This function is only allowed on computed attributes. func (d *ResourceDiff) SetNew(key string, value interface{}) error { - if err := d.checkKey(key, "SetNew"); err != nil { + if err := d.checkKey(key, "SetNew", false); err != nil { return err } @@ -282,7 +299,7 @@ func (d *ResourceDiff) SetNew(key string, value interface{}) error { // // This function is only allowed on computed attributes. func (d *ResourceDiff) SetNewComputed(key string) error { - if err := d.checkKey(key, "SetNewComputed"); err != nil { + if err := d.checkKey(key, "SetNewComputed", false); err != nil { return err } @@ -335,9 +352,12 @@ func (d *ResourceDiff) ForceNew(key string) error { schema.ForceNew = true - // We need to set whole lists/sets/maps here - _, new := d.GetChange(keyParts[0]) - return d.setDiff(keyParts[0], new, false) + // Flag this for a re-diff. Don't save any values to guarantee that existing + // diffs aren't messed with, as this gets messy when dealing with complex + // structures, zero values, etc. + d.forcedNewKeys[keyParts[0]] = true + + return nil } // Get hands off to ResourceData.Get. @@ -378,6 +398,29 @@ func (d *ResourceDiff) GetOk(key string) (interface{}, bool) { return r.Value, exists } +// GetOkExists functions the same way as GetOkExists within ResourceData, but +// it also checks the new diff levels to provide data consistent with the +// current state of the customized diff. +// +// This is nearly the same function as GetOk, yet it does not check +// for the zero value of the attribute's type. This allows for attributes +// without a default, to fully check for a literal assignment, regardless +// of the zero-value for that type. 
+func (d *ResourceDiff) GetOkExists(key string) (interface{}, bool) { + r := d.get(strings.Split(key, "."), "newDiff") + exists := r.Exists && !r.Computed + return r.Value, exists +} + +// NewValueKnown returns true if the new value for the given key is available +// as its final value at diff time. If the return value is false, this means +// either the value is based of interpolation that was unavailable at diff +// time, or that the value was explicitly marked as computed by SetNewComputed. +func (d *ResourceDiff) NewValueKnown(key string) bool { + r := d.get(strings.Split(key, "."), "newDiff") + return !r.Computed +} + // HasChange checks to see if there is a change between state and the diff, or // in the overridden diff. func (d *ResourceDiff) HasChange(key string) bool { @@ -426,6 +469,16 @@ func (d *ResourceDiff) getChange(key string) (getResult, getResult, bool) { return old, new, false } +// removed checks to see if the key is present in the existing, pre-customized +// diff and if it was marked as NewRemoved. +func (d *ResourceDiff) removed(k string) bool { + diff, ok := d.diff.Attributes[k] + if !ok { + return false + } + return diff.NewRemoved +} + // get performs the appropriate multi-level reader logic for ResourceDiff, // starting at source. Refer to newResourceDiff for the level order. func (d *ResourceDiff) get(addr []string, source string) getResult { @@ -482,12 +535,24 @@ func childAddrOf(child, parent string) bool { } // checkKey checks the key to make sure it exists and is computed. -func (d *ResourceDiff) checkKey(key, caller string) error { - s, ok := d.schema[key] - if !ok { +func (d *ResourceDiff) checkKey(key, caller string, nested bool) error { + var schema *Schema + if nested { + keyParts := strings.Split(key, ".") + schemaL := addrToSchema(keyParts, d.schema) + if len(schemaL) > 0 { + schema = schemaL[len(schemaL)-1] + } + } else { + s, ok := d.schema[key] + if ok { + schema = s + } + } + if schema == nil { return fmt.Errorf("%s: invalid key: %s", caller, key) } - if !s.Computed { + if !schema.Computed { return fmt.Errorf("%s only operates on computed keys - %s is not one", caller, key) } return nil diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/schema.go b/vendor/github.com/hashicorp/terraform/helper/schema/schema.go index c94b694ff75..0ea5aad5585 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/schema.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/schema.go @@ -199,7 +199,7 @@ type Schema struct { Sensitive bool } -// SchemaDiffSuppresFunc is a function which can be used to determine +// SchemaDiffSuppressFunc is a function which can be used to determine // whether a detected diff on a schema element is "valid" or not, and // suppress it from the plan if necessary. 
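The new `GetOkExists` and `NewValueKnown` helpers are aimed at `CustomizeDiff` functions. A hedged sketch of how they might be combined with `ForceNew`; the attribute names are invented for illustration and are not part of this change.

```go
func resourceExampleCustomizeDiff(d *schema.ResourceDiff, meta interface{}) error {
	// Only act once the planned value is actually known; an interpolation
	// that is still unresolved at diff time reports false here.
	if !d.NewValueKnown("subnet_id") {
		return nil
	}

	// GetOkExists distinguishes "explicitly set to the zero value" from
	// "not set at all", which plain GetOk cannot do for booleans.
	if v, ok := d.GetOkExists("enable_feature"); ok && !v.(bool) {
		// Flag the key to be re-diffed and marked as forcing replacement.
		return d.ForceNew("subnet_id")
	}
	return nil
}
```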
// @@ -395,7 +395,7 @@ func (m *schemaMap) DeepCopy() schemaMap { if err != nil { panic(err) } - return copy.(schemaMap) + return *copy.(*schemaMap) } // Diff returns the diff for a resource given the schema map, @@ -1270,9 +1270,9 @@ func (m schemaMap) validateConflictingAttributes( } for _, conflicting_key := range schema.ConflictsWith { - if value, ok := c.Get(conflicting_key); ok { + if _, ok := c.Get(conflicting_key); ok { return fmt.Errorf( - "%q: conflicts with %s (%#v)", k, conflicting_key, value) + "%q: conflicts with %s", k, conflicting_key) } } diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/set.go b/vendor/github.com/hashicorp/terraform/helper/schema/set.go index bb194ee6505..cba289035d9 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/set.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/set.go @@ -17,6 +17,12 @@ func HashString(v interface{}) int { return hashcode.String(v.(string)) } +// HashInt hashes integers. If you want a Set of integers, this is the +// SchemaSetFunc you want. +func HashInt(v interface{}) int { + return hashcode.String(strconv.Itoa(v.(int))) +} + // HashResource hashes complex structures that are described using // a *Resource. This is the default set implementation used when a set's // element type is a full resource. diff --git a/vendor/github.com/hashicorp/terraform/helper/validation/validation.go b/vendor/github.com/hashicorp/terraform/helper/validation/validation.go index 1fc3a6c0952..e9edcd33385 100644 --- a/vendor/github.com/hashicorp/terraform/helper/validation/validation.go +++ b/vendor/github.com/hashicorp/terraform/helper/validation/validation.go @@ -1,6 +1,7 @@ package validation import ( + "bytes" "fmt" "net" "reflect" @@ -180,6 +181,51 @@ func CIDRNetwork(min, max int) schema.SchemaValidateFunc { } } +// SingleIP returns a SchemaValidateFunc which tests if the provided value +// is of type string, and in valid single IP notation +func SingleIP() schema.SchemaValidateFunc { + return func(i interface{}, k string) (s []string, es []error) { + v, ok := i.(string) + if !ok { + es = append(es, fmt.Errorf("expected type of %s to be string", k)) + return + } + + ip := net.ParseIP(v) + if ip == nil { + es = append(es, fmt.Errorf( + "expected %s to contain a valid IP, got: %s", k, v)) + } + return + } +} + +// IPRange returns a SchemaValidateFunc which tests if the provided value +// is of type string, and in valid IP range notation +func IPRange() schema.SchemaValidateFunc { + return func(i interface{}, k string) (s []string, es []error) { + v, ok := i.(string) + if !ok { + es = append(es, fmt.Errorf("expected type of %s to be string", k)) + return + } + + ips := strings.Split(v, "-") + if len(ips) != 2 { + es = append(es, fmt.Errorf( + "expected %s to contain a valid IP range, got: %s", k, v)) + return + } + ip1 := net.ParseIP(ips[0]) + ip2 := net.ParseIP(ips[1]) + if ip1 == nil || ip2 == nil || bytes.Compare(ip1, ip2) > 0 { + es = append(es, fmt.Errorf( + "expected %s to contain a valid IP range, got: %s", k, v)) + } + return + } +} + // ValidateJsonString is a SchemaValidateFunc which tests to make sure the // supplied string is valid JSON. 
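A short sketch of attaching the new `SingleIP` and `IPRange` validators to provider schema attributes; the attribute names are illustrative rather than taken from an existing resource.

```go
package example

import (
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/helper/validation"
)

func exampleSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"peer_address": {
			Type:         schema.TypeString,
			Required:     true,
			ValidateFunc: validation.SingleIP(), // e.g. "10.0.0.5"
		},
		"allocation_range": {
			Type:         schema.TypeString,
			Optional:     true,
			ValidateFunc: validation.IPRange(), // e.g. "10.0.0.5-10.0.0.50"
		},
	}
}
```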
func ValidateJsonString(v interface{}, k string) (ws []string, errors []error) { diff --git a/vendor/github.com/hashicorp/terraform/httpclient/useragent.go b/vendor/github.com/hashicorp/terraform/httpclient/useragent.go index d8db087cf9c..5e280176888 100644 --- a/vendor/github.com/hashicorp/terraform/httpclient/useragent.go +++ b/vendor/github.com/hashicorp/terraform/httpclient/useragent.go @@ -2,15 +2,29 @@ package httpclient import ( "fmt" + "log" "net/http" + "os" + "strings" "github.com/hashicorp/terraform/version" ) const userAgentFormat = "Terraform/%s" +const uaEnvVar = "TF_APPEND_USER_AGENT" func UserAgentString() string { - return fmt.Sprintf(userAgentFormat, version.Version) + ua := fmt.Sprintf(userAgentFormat, version.Version) + + if add := os.Getenv(uaEnvVar); add != "" { + add = strings.TrimSpace(add) + if len(add) > 0 { + ua += " " + add + log.Printf("[DEBUG] Using modified User-Agent: %s", ua) + } + } + + return ua } type userAgentRoundTripper struct { diff --git a/vendor/github.com/hashicorp/terraform/registry/client.go b/vendor/github.com/hashicorp/terraform/registry/client.go index aefed84af1a..8e31a6a3e2c 100644 --- a/vendor/github.com/hashicorp/terraform/registry/client.go +++ b/vendor/github.com/hashicorp/terraform/registry/client.go @@ -15,7 +15,6 @@ import ( "github.com/hashicorp/terraform/registry/regsrc" "github.com/hashicorp/terraform/registry/response" "github.com/hashicorp/terraform/svchost" - "github.com/hashicorp/terraform/svchost/auth" "github.com/hashicorp/terraform/svchost/disco" "github.com/hashicorp/terraform/version" ) @@ -37,19 +36,14 @@ type Client struct { // services is a required *disco.Disco, which may have services and // credentials pre-loaded. services *disco.Disco - - // Creds optionally provides credentials for communicating with service - // providers. - creds auth.CredentialsSource } -func NewClient(services *disco.Disco, creds auth.CredentialsSource, client *http.Client) *Client { +// NewClient returns a new initialized registry client. +func NewClient(services *disco.Disco, client *http.Client) *Client { if services == nil { - services = disco.NewDisco() + services = disco.New() } - services.SetCredentialsSource(creds) - if client == nil { client = httpclient.New() client.Timeout = requestTimeout @@ -60,7 +54,6 @@ func NewClient(services *disco.Disco, creds auth.CredentialsSource, client *http return &Client{ client: client, services: services, - creds: creds, } } @@ -137,11 +130,7 @@ func (c *Client) Versions(module *regsrc.Module) (*response.ModuleVersions, erro } func (c *Client) addRequestCreds(host svchost.Hostname, req *http.Request) { - if c.creds == nil { - return - } - - creds, err := c.creds.ForHost(host) + creds, err := c.services.CredentialsForHost(host) if err != nil { log.Printf("[WARN] Failed to get credentials for %s: %s (ignoring)", host, err) return diff --git a/vendor/github.com/hashicorp/terraform/svchost/auth/credentials.go b/vendor/github.com/hashicorp/terraform/svchost/auth/credentials.go index 0bc6db4f10c..0372c160963 100644 --- a/vendor/github.com/hashicorp/terraform/svchost/auth/credentials.go +++ b/vendor/github.com/hashicorp/terraform/svchost/auth/credentials.go @@ -42,6 +42,9 @@ type HostCredentials interface { // receiving credentials. The usual behavior of this method is to // add some sort of Authorization header to the request. 
PrepareRequest(req *http.Request) + + // Token returns the authentication token. + Token() string } // ForHost iterates over the contained CredentialsSource objects and diff --git a/vendor/github.com/hashicorp/terraform/svchost/auth/token_credentials.go b/vendor/github.com/hashicorp/terraform/svchost/auth/token_credentials.go index 8f771b0d9b4..9358bcb6444 100644 --- a/vendor/github.com/hashicorp/terraform/svchost/auth/token_credentials.go +++ b/vendor/github.com/hashicorp/terraform/svchost/auth/token_credentials.go @@ -18,3 +18,8 @@ func (tc HostCredentialsToken) PrepareRequest(req *http.Request) { } req.Header.Set("Authorization", "Bearer "+string(tc)) } + +// Token returns the authentication token. +func (tc HostCredentialsToken) Token() string { + return string(tc) +} diff --git a/vendor/github.com/hashicorp/terraform/svchost/disco/disco.go b/vendor/github.com/hashicorp/terraform/svchost/disco/disco.go index 144384e049d..7fc49da9cb6 100644 --- a/vendor/github.com/hashicorp/terraform/svchost/disco/disco.go +++ b/vendor/github.com/hashicorp/terraform/svchost/disco/disco.go @@ -42,8 +42,15 @@ type Disco struct { Transport http.RoundTripper } -func NewDisco() *Disco { - return &Disco{} +// New returns a new initialized discovery object. +func New() *Disco { + return NewWithCredentialsSource(nil) +} + +// NewWithCredentialsSource returns a new discovery object initialized with +// the given credentials source. +func NewWithCredentialsSource(credsSrc auth.CredentialsSource) *Disco { + return &Disco{credsSrc: credsSrc} } // SetCredentialsSource provides a credentials source that will be used to @@ -55,6 +62,15 @@ func (d *Disco) SetCredentialsSource(src auth.CredentialsSource) { d.credsSrc = src } +// CredentialsForHost returns a non-nil HostCredentials if the embedded source has +// credentials available for the host, and a nil HostCredentials if it does not. +func (d *Disco) CredentialsForHost(host svchost.Hostname) (auth.HostCredentials, error) { + if d.credsSrc == nil { + return nil, nil + } + return d.credsSrc.ForHost(host) +} + // ForceHostServices provides a pre-defined set of services for a given // host, which prevents the receiver from attempting network-based discovery // for the given host. 
Instead, the given services map will be returned @@ -117,7 +133,7 @@ func (d *Disco) DiscoverServiceURL(host svchost.Hostname, serviceID string) *url func (d *Disco) discover(host svchost.Hostname) Host { discoURL := &url.URL{ Scheme: "https", - Host: string(host), + Host: host.String(), Path: discoPath, } @@ -144,15 +160,10 @@ func (d *Disco) discover(host svchost.Hostname) Host { URL: discoURL, } - if d.credsSrc != nil { - creds, err := d.credsSrc.ForHost(host) - if err == nil { - if creds != nil { - creds.PrepareRequest(req) // alters req to include credentials - } - } else { - log.Printf("[WARN] Failed to get credentials for %s: %s (ignoring)", host, err) - } + if creds, err := d.CredentialsForHost(host); err != nil { + log.Printf("[WARN] Failed to get credentials for %s: %s (ignoring)", host, err) + } else if creds != nil { + creds.PrepareRequest(req) // alters req to include credentials } log.Printf("[DEBUG] Service discovery for %s at %s", host, discoURL) @@ -166,6 +177,8 @@ func (d *Disco) discover(host svchost.Hostname) Host { log.Printf("[WARN] Failed to request discovery document: %s", err) return ret // empty } + defer resp.Body.Close() + if resp.StatusCode != 200 { log.Printf("[WARN] Failed to request discovery document: %s", resp.Status) return ret // empty diff --git a/vendor/github.com/hashicorp/terraform/terraform/context.go b/vendor/github.com/hashicorp/terraform/terraform/context.go index 53a12314a0f..f133cc20ad2 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/context.go +++ b/vendor/github.com/hashicorp/terraform/terraform/context.go @@ -487,6 +487,13 @@ func (c *Context) Input(mode InputMode) error { func (c *Context) Apply() (*State, error) { defer c.acquireRun("apply")() + // Check there are no empty target parameter values + for _, target := range c.targets { + if target == "" { + return nil, fmt.Errorf("Target parameter must not have empty value") + } + } + // Copy our own state c.state = c.state.DeepCopy() @@ -524,6 +531,13 @@ func (c *Context) Apply() (*State, error) { func (c *Context) Plan() (*Plan, error) { defer c.acquireRun("plan")() + // Check there are no empty target parameter values + for _, target := range c.targets { + if target == "" { + return nil, fmt.Errorf("Target parameter must not have empty value") + } + } + p := &Plan{ Module: c.module, Vars: c.variables, diff --git a/vendor/github.com/hashicorp/terraform/terraform/eval_validate.go b/vendor/github.com/hashicorp/terraform/terraform/eval_validate.go index 16bca3587c2..3e5a84ce65e 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/eval_validate.go +++ b/vendor/github.com/hashicorp/terraform/terraform/eval_validate.go @@ -157,6 +157,7 @@ func (n *EvalValidateProvisioner) validateConnConfig(connConfig *ResourceConfig) // For type=winrm only (enforced in winrm communicator) HTTPS interface{} `mapstructure:"https"` Insecure interface{} `mapstructure:"insecure"` + NTLM interface{} `mapstructure:"use_ntlm"` CACert interface{} `mapstructure:"cacert"` } diff --git a/vendor/github.com/hashicorp/terraform/terraform/interpolate.go b/vendor/github.com/hashicorp/terraform/terraform/interpolate.go index 1509a65fe0c..4f4e178cf39 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/interpolate.go +++ b/vendor/github.com/hashicorp/terraform/terraform/interpolate.go @@ -789,7 +789,8 @@ func (i *Interpolater) resourceCountMax( // If we're NOT applying, then we assume we can read the count 
// from the state. Plan and so on may not have any state yet so // we do a full interpolation. - if i.Operation != walkApply { + // Don't forget walkDestroy, which is a special case of walkApply + if !(i.Operation == walkApply || i.Operation == walkDestroy) { if cr == nil { return 0, nil } @@ -820,7 +821,13 @@ func (i *Interpolater) resourceCountMax( // use "cr.Count()" but that doesn't work if the count is interpolated // and we can't guarantee that so we instead depend on the state. max := -1 - for k, _ := range ms.Resources { + for k, s := range ms.Resources { + // This resource may have been just removed, in which case the Primary + // may be nil, or just empty. + if s == nil || s.Primary == nil || len(s.Primary.Attributes) == 0 { + continue + } + // Get the index number for this resource index := "" if k == id { diff --git a/vendor/github.com/hashicorp/terraform/terraform/state.go b/vendor/github.com/hashicorp/terraform/terraform/state.go index 89203bbfe1b..04b14a6597c 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/state.go +++ b/vendor/github.com/hashicorp/terraform/terraform/state.go @@ -9,6 +9,7 @@ import ( "io" "io/ioutil" "log" + "os" "reflect" "sort" "strconv" @@ -1876,11 +1877,19 @@ var ErrNoState = errors.New("no state") // ReadState reads a state structure out of a reader in the format that // was written by WriteState. func ReadState(src io.Reader) (*State, error) { + // check for a nil file specifically, since that produces a platform + // specific error if we try to use it in a bufio.Reader. + if f, ok := src.(*os.File); ok && f == nil { + return nil, ErrNoState + } + buf := bufio.NewReader(src) + if _, err := buf.Peek(1); err != nil { - // the error is either io.EOF or "invalid argument", and both are from - // an empty state. - return nil, ErrNoState + if err == io.EOF { + return nil, ErrNoState + } + return nil, err } if err := testForV0State(buf); err != nil { diff --git a/vendor/github.com/hashicorp/terraform/terraform/test_failure b/vendor/github.com/hashicorp/terraform/terraform/test_failure deleted file mode 100644 index 5d3ad1ac4ed..00000000000 --- a/vendor/github.com/hashicorp/terraform/terraform/test_failure +++ /dev/null @@ -1,9 +0,0 @@ ---- FAIL: TestContext2Plan_moduleProviderInherit (0.01s) - context_plan_test.go:552: bad: []string{"child"} -map[string]dag.Vertex{} -"module.middle.null" -map[string]dag.Vertex{} -"module.middle.module.inner.null" -map[string]dag.Vertex{} -"aws" -FAIL diff --git a/vendor/github.com/hashicorp/terraform/terraform/transform_reference.go b/vendor/github.com/hashicorp/terraform/terraform/transform_reference.go index 403b7e42463..be8c7f96c39 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/transform_reference.go +++ b/vendor/github.com/hashicorp/terraform/terraform/transform_reference.go @@ -119,17 +119,36 @@ func (t *DestroyValueReferenceTransformer) Transform(g *Graph) error { type PruneUnusedValuesTransformer struct{} func (t *PruneUnusedValuesTransformer) Transform(g *Graph) error { - vs := g.Vertices() - for _, v := range vs { - switch v.(type) { - case *NodeApplyableOutput, *NodeLocal: - // OK - default: - continue - } + // this might need multiple runs in order to ensure that pruning a value + // doesn't effect a previously checked value. 
+ for removed := 0; ; removed = 0 { + for _, v := range g.Vertices() { + switch v.(type) { + case *NodeApplyableOutput, *NodeLocal: + // OK + default: + continue + } - if len(g.EdgesTo(v)) == 0 { - g.Remove(v) + dependants := g.UpEdges(v) + + switch dependants.Len() { + case 0: + // nothing at all depends on this + g.Remove(v) + removed++ + case 1: + // because an output's destroy node always depends on the output, + // we need to check for the case of a single destroy node. + d := dependants.List()[0] + if _, ok := d.(*NodeDestroyableOutput); ok { + g.Remove(v) + removed++ + } + } + } + if removed == 0 { + break } } diff --git a/vendor/github.com/hashicorp/terraform/terraform/transform_targets.go b/vendor/github.com/hashicorp/terraform/terraform/transform_targets.go index 0cfcb0ac0d6..af6defe3668 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/transform_targets.go +++ b/vendor/github.com/hashicorp/terraform/terraform/transform_targets.go @@ -217,6 +217,12 @@ func filterPartialOutputs(v interface{}, targetedNodes *dag.Set, g *Graph) bool if _, ok := d.(*NodeCountBoundary); ok { continue } + + if !targetedNodes.Include(d) { + // this one is going to be removed, so it doesn't count + continue + } + // as soon as we see a real dependency, we mark this as // non-removable return true diff --git a/vendor/github.com/hashicorp/terraform/version/version.go b/vendor/github.com/hashicorp/terraform/version/version.go index 5a259e3adcf..90a9b52dc87 100644 --- a/vendor/github.com/hashicorp/terraform/version/version.go +++ b/vendor/github.com/hashicorp/terraform/version/version.go @@ -1,7 +1,7 @@ // The version package provides a location to set the release versions for all // packages to consume, without creating import cycles. // -// This pckage should not import any other terraform packages. +// This package should not import any other terraform packages. package version import ( @@ -11,12 +11,12 @@ import ( ) // The main version number that is being run at the moment. -const Version = "0.11.4" +const Version = "0.11.9" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release // such as "dev" (in development), "beta", "rc1", etc. -var Prerelease = "" +var Prerelease = "beta1" // SemVer is an instance of version.Version. This has the secondary // benefit of verifying during tests and init time that our version is a diff --git a/vendor/github.com/hashicorp/vault/helper/compressutil/compress.go b/vendor/github.com/hashicorp/vault/helper/compressutil/compress.go index e485f2f2207..a7fb87bcfff 100644 --- a/vendor/github.com/hashicorp/vault/helper/compressutil/compress.go +++ b/vendor/github.com/hashicorp/vault/helper/compressutil/compress.go @@ -6,6 +6,9 @@ import ( "compress/lzw" "fmt" "io" + + "github.com/golang/snappy" + "github.com/hashicorp/errwrap" ) const ( @@ -20,16 +23,35 @@ const ( // Byte value used as canary when using Lzw format CompressionCanaryLzw byte = 'L' + // Byte value used as canary when using Snappy format + CompressionCanarySnappy byte = 'S' + CompressionTypeLzw = "lzw" CompressionTypeGzip = "gzip" + + CompressionTypeSnappy = "snappy" ) +// SnappyReadCloser embeds the snappy reader which implements the io.Reader +// interface. The decompress procedure in this utility expects an +// io.ReadCloser. 
This type implements the io.Closer interface to retain the +// generic way of decompression. +type SnappyReadCloser struct { + *snappy.Reader +} + +// Close is a noop method implemented only to satisfy the io.Closer interface +func (s *SnappyReadCloser) Close() error { + return nil +} + // CompressionConfig is used to select a compression type to be performed by // Compress and Decompress utilities. // Supported types are: // * CompressionTypeLzw // * CompressionTypeGzip +// * CompressionTypeSnappy // // When using CompressionTypeGzip, the compression levels can also be chosen: // * gzip.DefaultCompression @@ -78,11 +100,15 @@ func Compress(data []byte, config *CompressionConfig) ([]byte, error) { config.GzipCompressionLevel = gzip.DefaultCompression } writer, err = gzip.NewWriterLevel(&buf, config.GzipCompressionLevel) + case CompressionTypeSnappy: + buf.Write([]byte{CompressionCanarySnappy}) + writer = snappy.NewBufferedWriter(&buf) default: return nil, fmt.Errorf("unsupported compression type") } + if err != nil { - return nil, fmt.Errorf("failed to create a compression writer; err: %v", err) + return nil, errwrap.Wrapf("failed to create a compression writer: {{err}}", err) } if writer == nil { @@ -92,7 +118,7 @@ func Compress(data []byte, config *CompressionConfig) ([]byte, error) { // Compress the input and place it in the same buffer containing the // canary byte. if _, err = writer.Write(data); err != nil { - return nil, fmt.Errorf("failed to compress input data; err: %v", err) + return nil, errwrap.Wrapf("failed to compress input data: err: {{err}}", err) } // Close the io.WriteCloser @@ -117,22 +143,29 @@ func Decompress(data []byte) ([]byte, bool, error) { } switch { + // If the first byte matches the canary byte, remove the canary + // byte and try to decompress the data that is after the canary. case data[0] == CompressionCanaryGzip: - // If the first byte matches the canary byte, remove the canary - // byte and try to decompress the data that is after the canary. if len(data) < 2 { return nil, false, fmt.Errorf("invalid 'data' after the canary") } data = data[1:] reader, err = gzip.NewReader(bytes.NewReader(data)) case data[0] == CompressionCanaryLzw: - // If the first byte matches the canary byte, remove the canary - // byte and try to decompress the data that is after the canary. if len(data) < 2 { return nil, false, fmt.Errorf("invalid 'data' after the canary") } data = data[1:] reader = lzw.NewReader(bytes.NewReader(data), lzw.LSB, 8) + + case data[0] == CompressionCanarySnappy: + if len(data) < 2 { + return nil, false, fmt.Errorf("invalid 'data' after the canary") + } + data = data[1:] + reader = &SnappyReadCloser{ + Reader: snappy.NewReader(bytes.NewReader(data)), + } default: // If the first byte doesn't match the canary byte, it means // that the content was not compressed at all. 
Indicate the @@ -140,7 +173,7 @@ func Decompress(data []byte) ([]byte, bool, error) { return nil, true, nil } if err != nil { - return nil, false, fmt.Errorf("failed to create a compression reader; err: %v", err) + return nil, false, errwrap.Wrapf("failed to create a compression reader: {{err}}", err) } if reader == nil { return nil, false, fmt.Errorf("failed to create a compression reader") diff --git a/vendor/github.com/hashicorp/vault/helper/jsonutil/json.go b/vendor/github.com/hashicorp/vault/helper/jsonutil/json.go index a96745be8f6..d03ddef5f06 100644 --- a/vendor/github.com/hashicorp/vault/helper/jsonutil/json.go +++ b/vendor/github.com/hashicorp/vault/helper/jsonutil/json.go @@ -7,6 +7,7 @@ import ( "fmt" "io" + "github.com/hashicorp/errwrap" "github.com/hashicorp/vault/helper/compressutil" ) @@ -64,7 +65,7 @@ func DecodeJSON(data []byte, out interface{}) error { // Decompress the data if it was compressed in the first place decompressedBytes, uncompressed, err := compressutil.Decompress(data) if err != nil { - return fmt.Errorf("failed to decompress JSON: err: %v", err) + return errwrap.Wrapf("failed to decompress JSON: {{err}}", err) } if !uncompressed && (decompressedBytes == nil || len(decompressedBytes) == 0) { return fmt.Errorf("decompressed data being decoded is invalid") @@ -91,7 +92,7 @@ func DecodeJSONFromReader(r io.Reader, out interface{}) error { dec := json.NewDecoder(r) - // While decoding JSON values, intepret the integer values as `json.Number`s instead of `float64`. + // While decoding JSON values, interpret the integer values as `json.Number`s instead of `float64`. dec.UseNumber() // Since 'out' is an interface representing a pointer, pass it to the decoder without an '&' diff --git a/vendor/github.com/hashicorp/vault/helper/pgpkeys/encrypt_decrypt.go b/vendor/github.com/hashicorp/vault/helper/pgpkeys/encrypt_decrypt.go index d8b7f605cac..eef4c5ed007 100644 --- a/vendor/github.com/hashicorp/vault/helper/pgpkeys/encrypt_decrypt.go +++ b/vendor/github.com/hashicorp/vault/helper/pgpkeys/encrypt_decrypt.go @@ -5,6 +5,7 @@ import ( "encoding/base64" "fmt" + "github.com/hashicorp/errwrap" "github.com/keybase/go-crypto/openpgp" "github.com/keybase/go-crypto/openpgp/packet" ) @@ -17,7 +18,7 @@ import ( // thoroughly tested in the init and rekey command unit tests func EncryptShares(input [][]byte, pgpKeys []string) ([]string, [][]byte, error) { if len(input) != len(pgpKeys) { - return nil, nil, fmt.Errorf("Mismatch between number items to encrypt and number of PGP keys") + return nil, nil, fmt.Errorf("mismatch between number items to encrypt and number of PGP keys") } encryptedShares := make([][]byte, 0, len(pgpKeys)) entities, err := GetEntities(pgpKeys) @@ -28,11 +29,11 @@ func EncryptShares(input [][]byte, pgpKeys []string) ([]string, [][]byte, error) ctBuf := bytes.NewBuffer(nil) pt, err := openpgp.Encrypt(ctBuf, []*openpgp.Entity{entity}, nil, nil, nil) if err != nil { - return nil, nil, fmt.Errorf("Error setting up encryption for PGP message: %s", err) + return nil, nil, errwrap.Wrapf("error setting up encryption for PGP message: {{err}}", err) } _, err = pt.Write(input[i]) if err != nil { - return nil, nil, fmt.Errorf("Error encrypting PGP message: %s", err) + return nil, nil, errwrap.Wrapf("error encrypting PGP message: {{err}}", err) } pt.Close() encryptedShares = append(encryptedShares, ctBuf.Bytes()) @@ -72,11 +73,11 @@ func GetEntities(pgpKeys []string) 
([]*openpgp.Entity, error) { for _, keystring := range pgpKeys { data, err := base64.StdEncoding.DecodeString(keystring) if err != nil { - return nil, fmt.Errorf("Error decoding given PGP key: %s", err) + return nil, errwrap.Wrapf("error decoding given PGP key: {{err}}", err) } entity, err := openpgp.ReadEntity(packet.NewReader(bytes.NewBuffer(data))) if err != nil { - return nil, fmt.Errorf("Error parsing given PGP key: %s", err) + return nil, errwrap.Wrapf("error parsing given PGP key: {{err}}", err) } ret = append(ret, entity) } @@ -91,23 +92,23 @@ func GetEntities(pgpKeys []string) ([]*openpgp.Entity, error) { func DecryptBytes(encodedCrypt, privKey string) (*bytes.Buffer, error) { privKeyBytes, err := base64.StdEncoding.DecodeString(privKey) if err != nil { - return nil, fmt.Errorf("Error decoding base64 private key: %s", err) + return nil, errwrap.Wrapf("error decoding base64 private key: {{err}}", err) } cryptBytes, err := base64.StdEncoding.DecodeString(encodedCrypt) if err != nil { - return nil, fmt.Errorf("Error decoding base64 crypted bytes: %s", err) + return nil, errwrap.Wrapf("error decoding base64 crypted bytes: {{err}}", err) } entity, err := openpgp.ReadEntity(packet.NewReader(bytes.NewBuffer(privKeyBytes))) if err != nil { - return nil, fmt.Errorf("Error parsing private key: %s", err) + return nil, errwrap.Wrapf("error parsing private key: {{err}}", err) } entityList := &openpgp.EntityList{entity} md, err := openpgp.ReadMessage(bytes.NewBuffer(cryptBytes), entityList, nil, nil) if err != nil { - return nil, fmt.Errorf("Error decrypting the messages: %s", err) + return nil, errwrap.Wrapf("error decrypting the messages: {{err}}", err) } ptBuf := bytes.NewBuffer(nil) diff --git a/vendor/github.com/hashicorp/vault/helper/pgpkeys/flag.go b/vendor/github.com/hashicorp/vault/helper/pgpkeys/flag.go index ccfc64b804c..bb0f367d6bf 100644 --- a/vendor/github.com/hashicorp/vault/helper/pgpkeys/flag.go +++ b/vendor/github.com/hashicorp/vault/helper/pgpkeys/flag.go @@ -8,51 +8,94 @@ import ( "os" "strings" + "github.com/hashicorp/errwrap" "github.com/keybase/go-crypto/openpgp" ) -// PGPPubKeyFiles implements the flag.Value interface and allows -// parsing and reading a list of pgp public key files +// PubKeyFileFlag implements flag.Value and command.Example to receive exactly +// one PGP or keybase key via a flag. +type PubKeyFileFlag string + +func (p *PubKeyFileFlag) String() string { return string(*p) } + +func (p *PubKeyFileFlag) Set(val string) error { + if p != nil && *p != "" { + return errors.New("can only be specified once") + } + + keys, err := ParsePGPKeys(strings.Split(val, ",")) + if err != nil { + return err + } + + if len(keys) > 1 { + return errors.New("can only specify one pgp key") + } + + *p = PubKeyFileFlag(keys[0]) + return nil +} + +func (p *PubKeyFileFlag) Example() string { return "keybase:user" } + +// PGPPubKeyFiles implements the flag.Value interface and allows parsing and +// reading a list of PGP public key files. 
type PubKeyFilesFlag []string func (p *PubKeyFilesFlag) String() string { return fmt.Sprint(*p) } -func (p *PubKeyFilesFlag) Set(value string) error { +func (p *PubKeyFilesFlag) Set(val string) error { if len(*p) > 0 { - return errors.New("pgp-keys can only be specified once") + return errors.New("can only be specified once") } - splitValues := strings.Split(value, ",") - - keybaseMap, err := FetchKeybasePubkeys(splitValues) + keys, err := ParsePGPKeys(strings.Split(val, ",")) if err != nil { return err } - // Now go through the actual flag, and substitute in resolved keybase - // entries where appropriate - for _, keyfile := range splitValues { + *p = PubKeyFilesFlag(keys) + return nil +} + +func (p *PubKeyFilesFlag) Example() string { return "keybase:user1, keybase:user2, ..." } + +// ParsePGPKeys takes a list of PGP keys and parses them either using keybase +// or reading them from disk and returns the "expanded" list of pgp keys in +// the same order. +func ParsePGPKeys(keyfiles []string) ([]string, error) { + keys := make([]string, len(keyfiles)) + + keybaseMap, err := FetchKeybasePubkeys(keyfiles) + if err != nil { + return nil, err + } + + for i, keyfile := range keyfiles { + keyfile = strings.TrimSpace(keyfile) + if strings.HasPrefix(keyfile, kbPrefix) { - key := keybaseMap[keyfile] - if key == "" { - return fmt.Errorf("key for keybase user %s was not found in the map", strings.TrimPrefix(keyfile, kbPrefix)) + key, ok := keybaseMap[keyfile] + if !ok || key == "" { + return nil, fmt.Errorf("keybase user %q not found", strings.TrimPrefix(keyfile, kbPrefix)) } - *p = append(*p, key) + keys[i] = key continue } pgpStr, err := ReadPGPFile(keyfile) if err != nil { - return err + return nil, err } - - *p = append(*p, pgpStr) + keys[i] = pgpStr } - return nil + + return keys, nil } +// ReadPGPFile reads the given PGP file from disk. 
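`ParsePGPKeys` accepts a mixed list of `keybase:` references and local key files (optionally prefixed with `@`) and returns the expanded, base64-encoded keys in the same order. A small sketch, assuming a reachable keybase API; the keybase user and file path are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/helper/pgpkeys"
)

func main() {
	keys, err := pgpkeys.ParsePGPKeys([]string{
		"keybase:someuser",   // resolved via the keybase API
		"@/tmp/operator.asc", // read from disk; the leading @ is stripped
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(keys)) // 2, in the same order as the input
}
```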
func ReadPGPFile(path string) (string, error) { if path[0] == '@' { path = path[1:] @@ -73,16 +116,16 @@ func ReadPGPFile(path string) (string, error) { entityList, err := openpgp.ReadArmoredKeyRing(keyReader) if err == nil { if len(entityList) != 1 { - return "", fmt.Errorf("more than one key found in file %s", path) + return "", fmt.Errorf("more than one key found in file %q", path) } if entityList[0] == nil { - return "", fmt.Errorf("primary key was nil for file %s", path) + return "", fmt.Errorf("primary key was nil for file %q", path) } serializedEntity := bytes.NewBuffer(nil) err = entityList[0].Serialize(serializedEntity) if err != nil { - return "", fmt.Errorf("error serializing entity for file %s: %s", path, err) + return "", errwrap.Wrapf(fmt.Sprintf("error serializing entity for file %q: {{err}}", path), err) } return base64.StdEncoding.EncodeToString(serializedEntity.Bytes()), nil diff --git a/vendor/github.com/hashicorp/vault/helper/pgpkeys/keybase.go b/vendor/github.com/hashicorp/vault/helper/pgpkeys/keybase.go index 5c14cbcedc9..eba06776232 100644 --- a/vendor/github.com/hashicorp/vault/helper/pgpkeys/keybase.go +++ b/vendor/github.com/hashicorp/vault/helper/pgpkeys/keybase.go @@ -6,6 +6,7 @@ import ( "fmt" "strings" + "github.com/hashicorp/errwrap" "github.com/hashicorp/go-cleanhttp" "github.com/hashicorp/vault/helper/jsonutil" "github.com/keybase/go-crypto/openpgp" @@ -49,25 +50,25 @@ func FetchKeybasePubkeys(input []string) (map[string]string, error) { } defer resp.Body.Close() - type publicKeys struct { + type PublicKeys struct { Primary struct { Bundle string } } - type them struct { - publicKeys `json:"public_keys"` + type LThem struct { + PublicKeys `json:"public_keys"` } - type kbResp struct { + type KbResp struct { Status struct { Name string } - Them []them + Them []LThem } - out := &kbResp{ - Them: []them{}, + out := &KbResp{ + Them: []LThem{}, } if err := jsonutil.DecodeJSONFromReader(resp.Body, out); err != nil { @@ -75,7 +76,7 @@ func FetchKeybasePubkeys(input []string) (map[string]string, error) { } if out.Status.Name != "OK" { - return nil, fmt.Errorf("got non-OK response: %s", out.Status.Name) + return nil, fmt.Errorf("got non-OK response: %q", out.Status.Name) } missingNames := make([]string, 0, len(usernames)) @@ -92,16 +93,16 @@ func FetchKeybasePubkeys(input []string) (map[string]string, error) { return nil, err } if len(entityList) != 1 { - return nil, fmt.Errorf("primary key could not be parsed for user %s", usernames[i]) + return nil, fmt.Errorf("primary key could not be parsed for user %q", usernames[i]) } if entityList[0] == nil { - return nil, fmt.Errorf("primary key was nil for user %s", usernames[i]) + return nil, fmt.Errorf("primary key was nil for user %q", usernames[i]) } serializedEntity.Reset() err = entityList[0].Serialize(serializedEntity) if err != nil { - return nil, fmt.Errorf("error serializing entity for user %s: %s", usernames[i], err) + return nil, errwrap.Wrapf(fmt.Sprintf("error serializing entity for user %q: {{err}}", usernames[i]), err) } // The API returns values in the same ordering requested, so this should properly match @@ -109,7 +110,7 @@ func FetchKeybasePubkeys(input []string) (map[string]string, error) { } if len(missingNames) > 0 { - return nil, fmt.Errorf("unable to fetch keys for user(s) %s from keybase", strings.Join(missingNames, ",")) + return nil, fmt.Errorf("unable to fetch keys for user(s) %q from keybase", strings.Join(missingNames, ",")) } 
return ret, nil diff --git a/vendor/github.com/jen20/awspolicyequivalence/aws_policy_equivalence.go b/vendor/github.com/jen20/awspolicyequivalence/aws_policy_equivalence.go index 9150d20ad34..adf38ba5981 100644 --- a/vendor/github.com/jen20/awspolicyequivalence/aws_policy_equivalence.go +++ b/vendor/github.com/jen20/awspolicyequivalence/aws_policy_equivalence.go @@ -9,12 +9,11 @@ package awspolicy import ( "encoding/json" "errors" + "fmt" "reflect" "regexp" "strings" - "github.com/aws/aws-sdk-go/aws/arn" - "github.com/hashicorp/errwrap" "github.com/mitchellh/mapstructure" ) @@ -31,21 +30,21 @@ import ( func PoliciesAreEquivalent(policy1, policy2 string) (bool, error) { policy1intermediate := &intermediateAwsPolicyDocument{} if err := json.Unmarshal([]byte(policy1), policy1intermediate); err != nil { - return false, errwrap.Wrapf("Error unmarshaling policy: {{err}}", err) + return false, fmt.Errorf("Error unmarshaling policy: %s", err) } policy2intermediate := &intermediateAwsPolicyDocument{} if err := json.Unmarshal([]byte(policy2), policy2intermediate); err != nil { - return false, errwrap.Wrapf("Error unmarshaling policy: {{err}}", err) + return false, fmt.Errorf("Error unmarshaling policy: %s", err) } policy1Doc, err := policy1intermediate.document() if err != nil { - return false, errwrap.Wrapf("Error parsing policy: {{err}}", err) + return false, fmt.Errorf("Error parsing policy: %s", err) } policy2Doc, err := policy2intermediate.document() if err != nil { - return false, errwrap.Wrapf("Error parsing policy: {{err}}", err) + return false, fmt.Errorf("Error parsing policy: %s", err) } return policy1Doc.equals(policy2Doc), nil @@ -63,12 +62,12 @@ func (intermediate *intermediateAwsPolicyDocument) document() (*awsPolicyDocumen switch s := intermediate.Statements.(type) { case []interface{}: if err := mapstructure.Decode(s, &statements); err != nil { - return nil, errwrap.Wrapf("Error parsing statement: {{err}}", err) + return nil, fmt.Errorf("Error parsing statement: %s", err) } case map[string]interface{}: var singleStatement *awsPolicyStatement if err := mapstructure.Decode(s, &singleStatement); err != nil { - return nil, errwrap.Wrapf("Error parsing statement: {{err}}", err) + return nil, fmt.Errorf("Error parsing statement: %s", err) } statements = append(statements, singleStatement) default: @@ -276,16 +275,16 @@ func stringPrincipalsEqual(ours, theirs interface{}) bool { awsAccountIDRegex := regexp.MustCompile(`^[0-9]{12}$`) if awsAccountIDRegex.MatchString(ourPrincipal) { - if theirArn, err := arn.Parse(theirPrincipal); err == nil { - if theirArn.Service == "iam" && theirArn.Resource == "root" && theirArn.AccountID == ourPrincipal { + if theirArn, err := parseAwsArnString(theirPrincipal); err == nil { + if theirArn.service == "iam" && theirArn.resource == "root" && theirArn.account == ourPrincipal { return true } } } if awsAccountIDRegex.MatchString(theirPrincipal) { - if ourArn, err := arn.Parse(ourPrincipal); err == nil { - if ourArn.Service == "iam" && ourArn.Resource == "root" && ourArn.AccountID == theirPrincipal { + if ourArn, err := parseAwsArnString(ourPrincipal); err == nil { + if ourArn.service == "iam" && ourArn.resource == "root" && ourArn.account == theirPrincipal { return true } } @@ -435,3 +434,32 @@ func (ours awsPrincipalStringSet) equals(theirs awsPrincipalStringSet) bool { return true } + +// awsArn describes an Amazon Resource Name +// More information: 
http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html +type awsArn struct { + account string + partition string + region string + resource string + service string +} + +// parseAwsArnString converts a string into an awsArn +// Expects string in form of: arn:PARTITION:SERVICE:REGION:ACCOUNT:RESOURCE +func parseAwsArnString(arn string) (awsArn, error) { + if !strings.HasPrefix(arn, "arn:") { + return awsArn{}, fmt.Errorf("expected arn: prefix, received: %s", arn) + } + arnParts := strings.SplitN(arn, ":", 6) + if len(arnParts) != 6 { + return awsArn{}, fmt.Errorf("expected 6 colon delimited sections, received: %s", arn) + } + return awsArn{ + account: arnParts[4], + partition: arnParts[1], + region: arnParts[3], + resource: arnParts[5], + service: arnParts[2], + }, nil +} diff --git a/vendor/github.com/jen20/awspolicyequivalence/go.mod b/vendor/github.com/jen20/awspolicyequivalence/go.mod new file mode 100644 index 00000000000..35e1621e867 --- /dev/null +++ b/vendor/github.com/jen20/awspolicyequivalence/go.mod @@ -0,0 +1,3 @@ +module github.com/jen20/awspolicyequivalence + +require github.com/mitchellh/mapstructure v1.0.0 diff --git a/vendor/github.com/jen20/awspolicyequivalence/go.sum b/vendor/github.com/jen20/awspolicyequivalence/go.sum new file mode 100644 index 00000000000..8f6af4f3948 --- /dev/null +++ b/vendor/github.com/jen20/awspolicyequivalence/go.sum @@ -0,0 +1,2 @@ +github.com/mitchellh/mapstructure v1.0.0 h1:vVpGvMXJPqSDh2VYHF7gsfQj8Ncx+Xw5Y1KHeTRY+7I= +github.com/mitchellh/mapstructure v1.0.0/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= diff --git a/vendor/github.com/mattn/go-isatty/isatty_linux.go b/vendor/github.com/mattn/go-isatty/isatty_linux.go index 9d24bac1db3..7384cf99167 100644 --- a/vendor/github.com/mattn/go-isatty/isatty_linux.go +++ b/vendor/github.com/mattn/go-isatty/isatty_linux.go @@ -1,5 +1,5 @@ // +build linux -// +build !appengine +// +build !appengine,!ppc64,!ppc64le package isatty diff --git a/vendor/github.com/mattn/go-isatty/isatty_linux_ppc64x.go b/vendor/github.com/mattn/go-isatty/isatty_linux_ppc64x.go new file mode 100644 index 00000000000..44e5d213021 --- /dev/null +++ b/vendor/github.com/mattn/go-isatty/isatty_linux_ppc64x.go @@ -0,0 +1,19 @@ +// +build linux +// +build ppc64 ppc64le + +package isatty + +import ( + "unsafe" + + syscall "golang.org/x/sys/unix" +) + +const ioctlReadTermios = syscall.TCGETS + +// IsTerminal return true if the file descriptor is terminal. +func IsTerminal(fd uintptr) bool { + var termios syscall.Termios + _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, fd, ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0) + return err == 0 +} diff --git a/vendor/github.com/mattn/go-isatty/isatty_others.go b/vendor/github.com/mattn/go-isatty/isatty_others.go index ff4de3d9a53..9d8b4a59961 100644 --- a/vendor/github.com/mattn/go-isatty/isatty_others.go +++ b/vendor/github.com/mattn/go-isatty/isatty_others.go @@ -3,7 +3,7 @@ package isatty -// IsCygwinTerminal() return true if the file descriptor is a cygwin or msys2 +// IsCygwinTerminal return true if the file descriptor is a cygwin or msys2 // terminal. This is also always false on this environment. 
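The policy-equivalence library above now parses principal ARNs with its own `parseAwsArnString` helper instead of pulling in the AWS SDK's `arn` package. A hedged sketch of the exported entry point that relies on this comparison, using illustrative policy documents (the account ID and ARN are made up):

```go
package main

import (
	"fmt"
	"log"

	awspolicy "github.com/jen20/awspolicyequivalence"
)

func main() {
	// Two IAM policy documents that differ only in how the account principal
	// is written: a bare account ID vs. the equivalent root-user ARN.
	p1 := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"123456789012"},"Action":"sts:AssumeRole"}]}`
	p2 := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::123456789012:root"},"Action":"sts:AssumeRole"}]}`

	equal, err := awspolicy.PoliciesAreEquivalent(p1, p2)
	if err != nil {
		log.Fatal(err)
	}
	// Expected to report the documents as equivalent, since the account-ID
	// and root-ARN principal forms are treated as the same principal.
	fmt.Println(equal)
}
```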
func IsCygwinTerminal(fd uintptr) bool { return false diff --git a/vendor/github.com/mitchellh/copystructure/copystructure.go b/vendor/github.com/mitchellh/copystructure/copystructure.go index 0e725ea7237..140435255e1 100644 --- a/vendor/github.com/mitchellh/copystructure/copystructure.go +++ b/vendor/github.com/mitchellh/copystructure/copystructure.go @@ -156,9 +156,13 @@ func (w *walker) Exit(l reflectwalk.Location) error { } switch l { + case reflectwalk.Array: + fallthrough case reflectwalk.Map: fallthrough case reflectwalk.Slice: + w.replacePointerMaybe() + // Pop map off our container w.cs = w.cs[:len(w.cs)-1] case reflectwalk.MapValue: @@ -171,16 +175,27 @@ func (w *walker) Exit(l reflectwalk.Location) error { // or in this case never adds it. We need to create a properly typed // zero value so that this key can be set. if !mv.IsValid() { - mv = reflect.Zero(m.Type().Elem()) + mv = reflect.Zero(m.Elem().Type().Elem()) + } + m.Elem().SetMapIndex(mk, mv) + case reflectwalk.ArrayElem: + // Pop off the value and the index and set it on the array + v := w.valPop() + i := w.valPop().Interface().(int) + if v.IsValid() { + a := w.cs[len(w.cs)-1] + ae := a.Elem().Index(i) // storing array as pointer on stack - so need Elem() call + if ae.CanSet() { + ae.Set(v) + } } - m.SetMapIndex(mk, mv) case reflectwalk.SliceElem: // Pop off the value and the index and set it on the slice v := w.valPop() i := w.valPop().Interface().(int) if v.IsValid() { s := w.cs[len(w.cs)-1] - se := s.Index(i) + se := s.Elem().Index(i) if se.CanSet() { se.Set(v) } @@ -220,9 +235,9 @@ func (w *walker) Map(m reflect.Value) error { // Create the map. If the map itself is nil, then just make a nil map var newMap reflect.Value if m.IsNil() { - newMap = reflect.Indirect(reflect.New(m.Type())) + newMap = reflect.New(m.Type()) } else { - newMap = reflect.MakeMap(m.Type()) + newMap = wrapPtr(reflect.MakeMap(m.Type())) } w.cs = append(w.cs, newMap) @@ -287,9 +302,9 @@ func (w *walker) Slice(s reflect.Value) error { var newS reflect.Value if s.IsNil() { - newS = reflect.Indirect(reflect.New(s.Type())) + newS = reflect.New(s.Type()) } else { - newS = reflect.MakeSlice(s.Type(), s.Len(), s.Cap()) + newS = wrapPtr(reflect.MakeSlice(s.Type(), s.Len(), s.Cap())) } w.cs = append(w.cs, newS) @@ -309,6 +324,31 @@ func (w *walker) SliceElem(i int, elem reflect.Value) error { return nil } +func (w *walker) Array(a reflect.Value) error { + if w.ignoring() { + return nil + } + w.lock(a) + + newA := reflect.New(a.Type()) + + w.cs = append(w.cs, newA) + w.valPush(newA) + return nil +} + +func (w *walker) ArrayElem(i int, elem reflect.Value) error { + if w.ignoring() { + return nil + } + + // We don't write the array here because elem might still be + // arbitrarily complex. Just record the index and continue on. + w.valPush(reflect.ValueOf(i)) + + return nil +} + func (w *walker) Struct(s reflect.Value) error { if w.ignoring() { return nil @@ -326,7 +366,10 @@ func (w *walker) Struct(s reflect.Value) error { return err } - v = reflect.ValueOf(dup) + // We need to put a pointer to the value on the value stack, + // so allocate a new pointer and set it. + v = reflect.New(s.Type()) + reflect.Indirect(v).Set(reflect.ValueOf(dup)) } else { // No copier, we copy ourselves and allow reflectwalk to guide // us deeper into the structure for copying. 
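The copystructure changes above teach the walker to handle fixed-size arrays (the new `Array`/`ArrayElem` locations) and to keep intermediate containers as pointers until the final unwrap. A small sketch of the exported `Copy` API these internals back, using a made-up struct:

```go
package main

import (
	"fmt"
	"log"

	"github.com/mitchellh/copystructure"
)

type Config struct {
	Name  string
	Ports [2]int            // fixed-size array, now handled by the Array/ArrayElem walkers
	Tags  map[string]string // maps were already deep-copied
}

func main() {
	original := Config{
		Name:  "web",
		Ports: [2]int{80, 443},
		Tags:  map[string]string{"env": "prod"},
	}

	dup, err := copystructure.Copy(original)
	if err != nil {
		log.Fatal(err)
	}

	copied := dup.(Config)
	copied.Tags["env"] = "dev" // mutating the copy must not touch the original

	fmt.Println(original.Tags["env"], copied.Tags["env"]) // prod dev
}
```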
@@ -405,6 +448,23 @@ func (w *walker) replacePointerMaybe() { } v := w.valPop() + + // If the expected type is a pointer to an interface of any depth, + // such as *interface{}, **interface{}, etc., then we need to convert + // the value "v" from *CONCRETE to *interface{} so types match for + // Set. + // + // Example if v is type *Foo where Foo is a struct, v would become + // *interface{} instead. This only happens if we have an interface expectation + // at this depth. + // + // For more info, see GH-16 + if iType, ok := w.ifaceTypes[ifaceKey(w.ps[w.depth], w.depth)]; ok && iType.Kind() == reflect.Interface { + y := reflect.New(iType) // Create *interface{} + y.Elem().Set(reflect.Indirect(v)) // Assign "Foo" to interface{} (dereferenced) + v = y // v is now typed *interface{} (where *v = Foo) + } + for i := 1; i < w.ps[w.depth]; i++ { if iType, ok := w.ifaceTypes[ifaceKey(w.ps[w.depth]-i, w.depth)]; ok { iface := reflect.New(iType).Elem() @@ -475,3 +535,14 @@ func (w *walker) lock(v reflect.Value) { locker.Lock() w.locks[w.depth] = locker } + +// wrapPtr is a helper that takes v and always make it *v. copystructure +// stores things internally as pointers until the last moment before unwrapping +func wrapPtr(v reflect.Value) reflect.Value { + if !v.IsValid() { + return v + } + vPtr := reflect.New(v.Type()) + vPtr.Elem().Set(v) + return vPtr +} diff --git a/vendor/github.com/mitchellh/copystructure/go.mod b/vendor/github.com/mitchellh/copystructure/go.mod new file mode 100644 index 00000000000..d01864309b4 --- /dev/null +++ b/vendor/github.com/mitchellh/copystructure/go.mod @@ -0,0 +1,3 @@ +module github.com/mitchellh/copystructure + +require github.com/mitchellh/reflectwalk v1.0.0 diff --git a/vendor/github.com/mitchellh/copystructure/go.sum b/vendor/github.com/mitchellh/copystructure/go.sum new file mode 100644 index 00000000000..be572456190 --- /dev/null +++ b/vendor/github.com/mitchellh/copystructure/go.sum @@ -0,0 +1,2 @@ +github.com/mitchellh/reflectwalk v1.0.0 h1:9D+8oIskB4VJBN5SFlmc27fSlIBZaov1Wpk/IfikLNY= +github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= diff --git a/vendor/github.com/mitchellh/go-homedir/go.mod b/vendor/github.com/mitchellh/go-homedir/go.mod new file mode 100644 index 00000000000..7efa09a0432 --- /dev/null +++ b/vendor/github.com/mitchellh/go-homedir/go.mod @@ -0,0 +1 @@ +module github.com/mitchellh/go-homedir diff --git a/vendor/github.com/mitchellh/go-homedir/homedir.go b/vendor/github.com/mitchellh/go-homedir/homedir.go index 47e1f9ef8e6..fb87bef94f3 100644 --- a/vendor/github.com/mitchellh/go-homedir/homedir.go +++ b/vendor/github.com/mitchellh/go-homedir/homedir.go @@ -77,33 +77,51 @@ func Expand(path string) (string, error) { } func dirUnix() (string, error) { + homeEnv := "HOME" + if runtime.GOOS == "plan9" { + // On plan9, env vars are lowercase. + homeEnv = "home" + } + // First prefer the HOME environmental variable - if home := os.Getenv("HOME"); home != "" { + if home := os.Getenv(homeEnv); home != "" { return home, nil } - // If that fails, try getent var stdout bytes.Buffer - cmd := exec.Command("getent", "passwd", strconv.Itoa(os.Getuid())) - cmd.Stdout = &stdout - if err := cmd.Run(); err != nil { - // If the error is ErrNotFound, we ignore it. Otherwise, return it. 
- if err != exec.ErrNotFound { - return "", err + + // If that fails, try OS specific commands + if runtime.GOOS == "darwin" { + cmd := exec.Command("sh", "-c", `dscl -q . -read /Users/"$(whoami)" NFSHomeDirectory | sed 's/^[^ ]*: //'`) + cmd.Stdout = &stdout + if err := cmd.Run(); err == nil { + result := strings.TrimSpace(stdout.String()) + if result != "" { + return result, nil + } } } else { - if passwd := strings.TrimSpace(stdout.String()); passwd != "" { - // username:password:uid:gid:gecos:home:shell - passwdParts := strings.SplitN(passwd, ":", 7) - if len(passwdParts) > 5 { - return passwdParts[5], nil + cmd := exec.Command("getent", "passwd", strconv.Itoa(os.Getuid())) + cmd.Stdout = &stdout + if err := cmd.Run(); err != nil { + // If the error is ErrNotFound, we ignore it. Otherwise, return it. + if err != exec.ErrNotFound { + return "", err + } + } else { + if passwd := strings.TrimSpace(stdout.String()); passwd != "" { + // username:password:uid:gid:gecos:home:shell + passwdParts := strings.SplitN(passwd, ":", 7) + if len(passwdParts) > 5 { + return passwdParts[5], nil + } } } } // If all else fails, try the shell stdout.Reset() - cmd = exec.Command("sh", "-c", "cd && pwd") + cmd := exec.Command("sh", "-c", "cd && pwd") cmd.Stdout = &stdout if err := cmd.Run(); err != nil { return "", err @@ -123,14 +141,16 @@ func dirWindows() (string, error) { return home, nil } + // Prefer standard environment variable USERPROFILE + if home := os.Getenv("USERPROFILE"); home != "" { + return home, nil + } + drive := os.Getenv("HOMEDRIVE") path := os.Getenv("HOMEPATH") home := drive + path if drive == "" || path == "" { - home = os.Getenv("USERPROFILE") - } - if home == "" { - return "", errors.New("HOMEDRIVE, HOMEPATH, and USERPROFILE are blank") + return "", errors.New("HOMEDRIVE, HOMEPATH, or USERPROFILE are blank") } return home, nil diff --git a/vendor/github.com/mitchellh/go-wordwrap/go.mod b/vendor/github.com/mitchellh/go-wordwrap/go.mod new file mode 100644 index 00000000000..2ae411b2012 --- /dev/null +++ b/vendor/github.com/mitchellh/go-wordwrap/go.mod @@ -0,0 +1 @@ +module github.com/mitchellh/go-wordwrap diff --git a/vendor/github.com/mitchellh/hashstructure/README.md b/vendor/github.com/mitchellh/hashstructure/README.md index 7d0de5bf5a6..28ce45a3e18 100644 --- a/vendor/github.com/mitchellh/hashstructure/README.md +++ b/vendor/github.com/mitchellh/hashstructure/README.md @@ -1,4 +1,4 @@ -# hashstructure +# hashstructure [![GoDoc](https://godoc.org/github.com/mitchellh/hashstructure?status.svg)](https://godoc.org/github.com/mitchellh/hashstructure) hashstructure is a Go library for creating a unique hash value for arbitrary values in Go. @@ -19,6 +19,9 @@ sending data across the network, caching values locally (de-dup), and so on. * Optionally specify a custom hash function to optimize for speed, collision avoidance for your data set, etc. 
+ + * Optionally hash the output of `.String()` on structs that implement fmt.Stringer, + allowing effective hashing of time.Time ## Installation @@ -34,28 +37,29 @@ For usage and examples see the [Godoc](http://godoc.org/github.com/mitchellh/has A quick code example is shown below: - - type ComplexStruct struct { - Name string - Age uint - Metadata map[string]interface{} - } - - v := ComplexStruct{ - Name: "mitchellh", - Age: 64, - Metadata: map[string]interface{}{ - "car": true, - "location": "California", - "siblings": []string{"Bob", "John"}, - }, - } - - hash, err := hashstructure.Hash(v, nil) - if err != nil { - panic(err) - } - - fmt.Printf("%d", hash) - // Output: - // 2307517237273902113 +```go +type ComplexStruct struct { + Name string + Age uint + Metadata map[string]interface{} +} + +v := ComplexStruct{ + Name: "mitchellh", + Age: 64, + Metadata: map[string]interface{}{ + "car": true, + "location": "California", + "siblings": []string{"Bob", "John"}, + }, +} + +hash, err := hashstructure.Hash(v, nil) +if err != nil { + panic(err) +} + +fmt.Printf("%d", hash) +// Output: +// 2307517237273902113 +``` diff --git a/vendor/github.com/mitchellh/hashstructure/go.mod b/vendor/github.com/mitchellh/hashstructure/go.mod new file mode 100644 index 00000000000..966582aa95b --- /dev/null +++ b/vendor/github.com/mitchellh/hashstructure/go.mod @@ -0,0 +1 @@ +module github.com/mitchellh/hashstructure diff --git a/vendor/github.com/mitchellh/hashstructure/hashstructure.go b/vendor/github.com/mitchellh/hashstructure/hashstructure.go index 6f586fa7725..ea13a1583c3 100644 --- a/vendor/github.com/mitchellh/hashstructure/hashstructure.go +++ b/vendor/github.com/mitchellh/hashstructure/hashstructure.go @@ -8,6 +8,16 @@ import ( "reflect" ) +// ErrNotStringer is returned when there's an error with hash:"string" +type ErrNotStringer struct { + Field string +} + +// Error implements error for ErrNotStringer +func (ens *ErrNotStringer) Error() string { + return fmt.Sprintf("hashstructure: %s has hash:\"string\" set, but does not implement fmt.Stringer", ens.Field) +} + // HashOptions are options that are available for hashing. type HashOptions struct { // Hasher is the hash function to use. If this isn't set, it will @@ -17,12 +27,18 @@ type HashOptions struct { // TagName is the struct tag to look at when hashing the structure. // By default this is "hash". TagName string + + // ZeroNil is flag determining if nil pointer should be treated equal + // to a zero value of pointed type. By default this is false. + ZeroNil bool } // Hash returns the hash value of an arbitrary value. // // If opts is nil, then default options will be used. See HashOptions -// for the default values. +// for the default values. The same *HashOptions value cannot be used +// concurrently. None of the values within a *HashOptions struct are +// safe to read/write while hashing is being done. // // Notes on the value: // @@ -41,11 +57,14 @@ type HashOptions struct { // // The available tag values are: // -// * "ignore" - The field will be ignored and not affect the hash code. +// * "ignore" or "-" - The field will be ignored and not affect the hash code. // // * "set" - The field will be treated as a set, where ordering doesn't // affect the hash code. This only works for slices. 
// +// * "string" - The field will be hashed as a string, only works when the +// field implements fmt.Stringer +// func Hash(v interface{}, opts *HashOptions) (uint64, error) { // Create default options if opts == nil { @@ -63,15 +82,17 @@ func Hash(v interface{}, opts *HashOptions) (uint64, error) { // Create our walker and walk the structure w := &walker{ - h: opts.Hasher, - tag: opts.TagName, + h: opts.Hasher, + tag: opts.TagName, + zeronil: opts.ZeroNil, } return w.visit(reflect.ValueOf(v), nil) } type walker struct { - h hash.Hash64 - tag string + h hash.Hash64 + tag string + zeronil bool } type visitOpts struct { @@ -84,6 +105,8 @@ type visitOpts struct { } func (w *walker) visit(v reflect.Value, opts *visitOpts) (uint64, error) { + t := reflect.TypeOf(0) + // Loop since these can be wrapped in multiple layers of pointers // and interfaces. for { @@ -96,6 +119,9 @@ func (w *walker) visit(v reflect.Value, opts *visitOpts) (uint64, error) { } if v.Kind() == reflect.Ptr { + if w.zeronil { + t = v.Type().Elem() + } v = reflect.Indirect(v) continue } @@ -105,8 +131,7 @@ func (w *walker) visit(v reflect.Value, opts *visitOpts) (uint64, error) { // If it is nil, treat it like a zero. if !v.IsValid() { - var tmp int8 - v = reflect.ValueOf(tmp) + v = reflect.Zero(t) } // Binary writing can use raw ints, we have to convert to @@ -189,8 +214,8 @@ func (w *walker) visit(v reflect.Value, opts *visitOpts) (uint64, error) { return h, nil case reflect.Struct: - var include Includable parent := v.Interface() + var include Includable if impl, ok := parent.(Includable); ok { include = impl } @@ -203,7 +228,7 @@ func (w *walker) visit(v reflect.Value, opts *visitOpts) (uint64, error) { l := v.NumField() for i := 0; i < l; i++ { - if v := v.Field(i); v.CanSet() || t.Field(i).Name != "_" { + if innerV := v.Field(i); v.CanSet() || t.Field(i).Name != "_" { var f visitFlag fieldType := t.Field(i) if fieldType.PkgPath != "" { @@ -212,14 +237,25 @@ func (w *walker) visit(v reflect.Value, opts *visitOpts) (uint64, error) { } tag := fieldType.Tag.Get(w.tag) - if tag == "ignore" { + if tag == "ignore" || tag == "-" { // Ignore this field continue } + // if string is set, use the string value + if tag == "string" { + if impl, ok := innerV.Interface().(fmt.Stringer); ok { + innerV = reflect.ValueOf(impl.String()) + } else { + return 0, &ErrNotStringer{ + Field: v.Type().Field(i).Name, + } + } + } + // Check if we implement includable and check it if include != nil { - incl, err := include.HashInclude(fieldType.Name, v) + incl, err := include.HashInclude(fieldType.Name, innerV) if err != nil { return 0, err } @@ -238,7 +274,7 @@ func (w *walker) visit(v reflect.Value, opts *visitOpts) (uint64, error) { return 0, err } - vh, err := w.visit(v, &visitOpts{ + vh, err := w.visit(innerV, &visitOpts{ Flags: f, Struct: parent, StructField: fieldType.Name, @@ -289,7 +325,6 @@ func (w *walker) visit(v reflect.Value, opts *visitOpts) (uint64, error) { return 0, fmt.Errorf("unknown kind to hash: %s", k) } - return 0, nil } func hashUpdateOrdered(h hash.Hash64, a, b uint64) uint64 { diff --git a/vendor/github.com/mitchellh/reflectwalk/go.mod b/vendor/github.com/mitchellh/reflectwalk/go.mod new file mode 100644 index 00000000000..52bb7c469e9 --- /dev/null +++ b/vendor/github.com/mitchellh/reflectwalk/go.mod @@ -0,0 +1 @@ +module github.com/mitchellh/reflectwalk diff --git a/vendor/github.com/mitchellh/reflectwalk/location.go b/vendor/github.com/mitchellh/reflectwalk/location.go 
index 7c59d764c2a..6a7f176117f 100644 --- a/vendor/github.com/mitchellh/reflectwalk/location.go +++ b/vendor/github.com/mitchellh/reflectwalk/location.go @@ -11,6 +11,8 @@ const ( MapValue Slice SliceElem + Array + ArrayElem Struct StructField WalkLoc diff --git a/vendor/github.com/mitchellh/reflectwalk/location_string.go b/vendor/github.com/mitchellh/reflectwalk/location_string.go index d3cfe85459b..70760cf4c70 100644 --- a/vendor/github.com/mitchellh/reflectwalk/location_string.go +++ b/vendor/github.com/mitchellh/reflectwalk/location_string.go @@ -1,15 +1,15 @@ -// generated by stringer -type=Location location.go; DO NOT EDIT +// Code generated by "stringer -type=Location location.go"; DO NOT EDIT. package reflectwalk import "fmt" -const _Location_name = "NoneMapMapKeyMapValueSliceSliceElemStructStructFieldWalkLoc" +const _Location_name = "NoneMapMapKeyMapValueSliceSliceElemArrayArrayElemStructStructFieldWalkLoc" -var _Location_index = [...]uint8{0, 4, 7, 13, 21, 26, 35, 41, 52, 59} +var _Location_index = [...]uint8{0, 4, 7, 13, 21, 26, 35, 40, 49, 55, 66, 73} func (i Location) String() string { - if i+1 >= Location(len(_Location_index)) { + if i >= Location(len(_Location_index)-1) { return fmt.Sprintf("Location(%d)", i) } return _Location_name[_Location_index[i]:_Location_index[i+1]] diff --git a/vendor/github.com/mitchellh/reflectwalk/reflectwalk.go b/vendor/github.com/mitchellh/reflectwalk/reflectwalk.go index ec0a62337e3..d7ab7b6d782 100644 --- a/vendor/github.com/mitchellh/reflectwalk/reflectwalk.go +++ b/vendor/github.com/mitchellh/reflectwalk/reflectwalk.go @@ -39,6 +39,13 @@ type SliceWalker interface { SliceElem(int, reflect.Value) error } +// ArrayWalker implementations are able to handle array elements found +// within complex structures. +type ArrayWalker interface { + Array(reflect.Value) error + ArrayElem(int, reflect.Value) error +} + // StructWalker is an interface that has methods that are called for // structs when a Walk is done. type StructWalker interface { @@ -65,6 +72,7 @@ type PointerWalker interface { // SkipEntry can be returned from walk functions to skip walking // the value of this field. 
This is only valid in the following functions: // +// - Struct: skips all fields from being walked // - StructField: skips walking the struct value // var SkipEntry = errors.New("skip this entry") @@ -179,6 +187,9 @@ func walk(v reflect.Value, w interface{}) (err error) { case reflect.Struct: err = walkStruct(v, w) return + case reflect.Array: + err = walkArray(v, w) + return default: panic("unsupported type: " + k.String()) } @@ -286,48 +297,99 @@ func walkSlice(v reflect.Value, w interface{}) (err error) { return nil } +func walkArray(v reflect.Value, w interface{}) (err error) { + ew, ok := w.(EnterExitWalker) + if ok { + ew.Enter(Array) + } + + if aw, ok := w.(ArrayWalker); ok { + if err := aw.Array(v); err != nil { + return err + } + } + + for i := 0; i < v.Len(); i++ { + elem := v.Index(i) + + if aw, ok := w.(ArrayWalker); ok { + if err := aw.ArrayElem(i, elem); err != nil { + return err + } + } + + ew, ok := w.(EnterExitWalker) + if ok { + ew.Enter(ArrayElem) + } + + if err := walk(elem, w); err != nil { + return err + } + + if ok { + ew.Exit(ArrayElem) + } + } + + ew, ok = w.(EnterExitWalker) + if ok { + ew.Exit(Array) + } + + return nil +} + func walkStruct(v reflect.Value, w interface{}) (err error) { ew, ewok := w.(EnterExitWalker) if ewok { ew.Enter(Struct) } + skip := false if sw, ok := w.(StructWalker); ok { - if err = sw.Struct(v); err != nil { + err = sw.Struct(v) + if err == SkipEntry { + skip = true + err = nil + } + if err != nil { return } } - vt := v.Type() - for i := 0; i < vt.NumField(); i++ { - sf := vt.Field(i) - f := v.FieldByIndex([]int{i}) + if !skip { + vt := v.Type() + for i := 0; i < vt.NumField(); i++ { + sf := vt.Field(i) + f := v.FieldByIndex([]int{i}) - if sw, ok := w.(StructWalker); ok { - err = sw.StructField(sf, f) + if sw, ok := w.(StructWalker); ok { + err = sw.StructField(sf, f) - // SkipEntry just pretends this field doesn't even exist - if err == SkipEntry { - continue + // SkipEntry just pretends this field doesn't even exist + if err == SkipEntry { + continue + } + + if err != nil { + return + } + } + + ew, ok := w.(EnterExitWalker) + if ok { + ew.Enter(StructField) } + err = walk(f, w) if err != nil { return } - } - - ew, ok := w.(EnterExitWalker) - if ok { - ew.Enter(StructField) - } - err = walk(f, w) - if err != nil { - return - } - - if ok { - ew.Exit(StructField) + if ok { + ew.Exit(StructField) + } } } diff --git a/vendor/github.com/pquerna/otp/LICENSE b/vendor/github.com/pquerna/otp/LICENSE new file mode 100644 index 00000000000..d6456956733 --- /dev/null +++ b/vendor/github.com/pquerna/otp/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. 
+ + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/pquerna/otp/NOTICE b/vendor/github.com/pquerna/otp/NOTICE new file mode 100644 index 00000000000..50e2e75016e --- /dev/null +++ b/vendor/github.com/pquerna/otp/NOTICE @@ -0,0 +1,5 @@ +otp +Copyright (c) 2014, Paul Querna + +This product includes software developed by +Paul Querna (http://paul.querna.org/). 
diff --git a/vendor/github.com/pquerna/otp/README.md b/vendor/github.com/pquerna/otp/README.md new file mode 100644 index 00000000000..148e8980d6d --- /dev/null +++ b/vendor/github.com/pquerna/otp/README.md @@ -0,0 +1,60 @@ +# otp: One Time Password utilities Go / Golang + +[![GoDoc](https://godoc.org/github.com/pquerna/otp?status.svg)](https://godoc.org/github.com/pquerna/otp) [![Build Status](https://travis-ci.org/pquerna/otp.svg?branch=master)](https://travis-ci.org/pquerna/otp) + +# Why One Time Passwords? + +One Time Passwords (OTPs) are an mechanism to improve security over passwords alone. When a Time-based OTP (TOTP) is stored on a user's phone, and combined with something the user knows (Password), you have an easy on-ramp to [Multi-factor authentication](http://en.wikipedia.org/wiki/Multi-factor_authentication) without adding a dependency on a SMS provider. This Password and TOTP combination is used by many popular websites including Google, Github, Facebook, Salesforce and many others. + +The `otp` library enables you to easily add TOTPs to your own application, increasing your user's security against mass-password breaches and malware. + +Because TOTP is standardized and widely deployed, there are many [mobile clients and software implementations](http://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm#Client_implementations). + +## `otp` Supports: + +* Generating QR Code images for easy user enrollment. +* Time-based One-time Password Algorithm (TOTP) (RFC 6238): Time based OTP, the most commonly used method. +* HMAC-based One-time Password Algorithm (HOTP) (RFC 4226): Counter based OTP, which TOTP is based upon. +* Generation and Validation of codes for either algorithm. + +## Implementing TOTP in your application: + +### User Enrollment + +For an example of a working enrollment work flow, [Github has documented theirs](https://help.github.com/articles/configuring-two-factor-authentication-via-a-totp-mobile-app/ +), but the basics are: + +1. Generate new TOTP Key for a User. `key,_ := totp.Generate(...)`. +1. Display the Key's Secret and QR-Code for the User. `key.Secret()` and `key.Image(...)`. +1. Test that the user can successfully use their TOTP. `totp.Validate(...)`. +1. Store TOTP Secret for the User in your backend. `key.Secret()` +1. Provide the user with "recovery codes". (See Recovery Codes bellow) + +### Code Generation + +* In either TOTP or HOTP cases, use the `GenerateCode` function and a counter or + `time.Time` struct to generate a valid code compatible with most implementations. +* For uncommon or custom settings, or to catch unlikely errors, use `GenerateCodeCustom` + in either module. + +### Validation + +1. Prompt and validate User's password as normal. +1. If the user has TOTP enabled, prompt for TOTP passcode. +1. Retrieve the User's TOTP Secret from your backend. +1. Validate the user's passcode. `totp.Validate(...)` + + +### Recovery Codes + +When a user loses access to their TOTP device, they would no longer have access to their account. Because TOTPs are often configured on mobile devices that can be lost, stolen or damaged, this is a common problem. For this reason many providers give their users "backup codes" or "recovery codes". These are a set of one time use codes that can be used instead of the TOTP. These can simply be randomly generated strings that you store in your backend. 
[Github's documentation provides an overview of the user experience]( +https://help.github.com/articles/downloading-your-two-factor-authentication-recovery-codes/). + + +## Improvements, bugs, adding feature, etc: + +Please [open issues in Github](https://github.com/pquerna/otp/issues) for ideas, bugs, and general thoughts. Pull requests are of course preferred :) + +## License + +`otp` is licensed under the [Apache License, Version 2.0](./LICENSE) diff --git a/vendor/github.com/pquerna/otp/doc.go b/vendor/github.com/pquerna/otp/doc.go new file mode 100644 index 00000000000..b8b4c8cc193 --- /dev/null +++ b/vendor/github.com/pquerna/otp/doc.go @@ -0,0 +1,70 @@ +/** + * Copyright 2014 Paul Querna + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +// Package otp implements both HOTP and TOTP based +// one time passcodes in a Google Authenticator compatible manner. +// +// When adding a TOTP for a user, you must store the "secret" value +// persistently. It is recommend to store the secret in an encrypted field in your +// datastore. Due to how TOTP works, it is not possible to store a hash +// for the secret value like you would a password. +// +// To enroll a user, you must first generate an OTP for them. Google +// Authenticator supports using a QR code as an enrollment method: +// +// import ( +// "github.com/pquerna/otp/totp" +// +// "bytes" +// "image/png" +// ) +// +// key, err := totp.Generate(totp.GenerateOpts{ +// Issuer: "Example.com", +// AccountName: "alice@example.com", +// }) +// +// // Convert TOTP key into a QR code encoded as a PNG image. +// var buf bytes.Buffer +// img, err := key.Image(200, 200) +// png.Encode(&buf, img) +// +// // display the QR code to the user. +// display(buf.Bytes()) +// +// // Now Validate that the user's successfully added the passcode. +// passcode := promptForPasscode() +// valid := totp.Validate(passcode, key.Secret()) +// +// if valid { +// // User successfully used their TOTP, save it to your backend! +// storeSecret("alice@example.com", key.Secret()) +// } +// +// Validating a TOTP passcode is very easy, just prompt the user for a passcode +// and retrieve the associated user's previously stored secret. +// import "github.com/pquerna/otp/totp" +// +// passcode := promptForPasscode() +// secret := getSecret("alice@example.com") +// +// valid := totp.Validate(passcode, secret) +// +// if valid { +// // Success! continue login process. +// } +package otp diff --git a/vendor/github.com/pquerna/otp/hotp/hotp.go b/vendor/github.com/pquerna/otp/hotp/hotp.go new file mode 100644 index 00000000000..bc123c63afe --- /dev/null +++ b/vendor/github.com/pquerna/otp/hotp/hotp.go @@ -0,0 +1,196 @@ +/** + * Copyright 2014 Paul Querna + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package hotp + +import ( + "github.com/pquerna/otp" + + "crypto/hmac" + "crypto/rand" + "crypto/subtle" + "encoding/base32" + "encoding/binary" + "fmt" + "math" + "net/url" + "strings" +) + +const debug = false + +// Validate a HOTP passcode given a counter and secret. +// This is a shortcut for ValidateCustom, with parameters that +// are compataible with Google-Authenticator. +func Validate(passcode string, counter uint64, secret string) bool { + rv, _ := ValidateCustom( + passcode, + counter, + secret, + ValidateOpts{ + Digits: otp.DigitsSix, + Algorithm: otp.AlgorithmSHA1, + }, + ) + return rv +} + +// ValidateOpts provides options for ValidateCustom(). +type ValidateOpts struct { + // Digits as part of the input. Defaults to 6. + Digits otp.Digits + // Algorithm to use for HMAC. Defaults to SHA1. + Algorithm otp.Algorithm +} + +// GenerateCode creates a HOTP passcode given a counter and secret. +// This is a shortcut for GenerateCodeCustom, with parameters that +// are compataible with Google-Authenticator. +func GenerateCode(secret string, counter uint64) (string, error) { + return GenerateCodeCustom(secret, counter, ValidateOpts{ + Digits: otp.DigitsSix, + Algorithm: otp.AlgorithmSHA1, + }) +} + +// GenerateCodeCustom uses a counter and secret value and options struct to +// create a passcode. +func GenerateCodeCustom(secret string, counter uint64, opts ValidateOpts) (passcode string, err error) { + // As noted in issue #10 and #17 this adds support for TOTP secrets that are + // missing their padding. + secret = strings.TrimSpace(secret) + if n := len(secret) % 8; n != 0 { + secret = secret + strings.Repeat("=", 8-n) + } + + // As noted in issue #24 Google has started producing base32 in lower case, + // but the StdEncoding (and the RFC), expect a dictionary of only upper case letters. + secret = strings.ToUpper(secret) + + secretBytes, err := base32.StdEncoding.DecodeString(secret) + if err != nil { + return "", otp.ErrValidateSecretInvalidBase32 + } + + buf := make([]byte, 8) + mac := hmac.New(opts.Algorithm.Hash, secretBytes) + binary.BigEndian.PutUint64(buf, counter) + if debug { + fmt.Printf("counter=%v\n", counter) + fmt.Printf("buf=%v\n", buf) + } + + mac.Write(buf) + sum := mac.Sum(nil) + + // "Dynamic truncation" in RFC 4226 + // http://tools.ietf.org/html/rfc4226#section-5.4 + offset := sum[len(sum)-1] & 0xf + value := int64(((int(sum[offset]) & 0x7f) << 24) | + ((int(sum[offset+1] & 0xff)) << 16) | + ((int(sum[offset+2] & 0xff)) << 8) | + (int(sum[offset+3]) & 0xff)) + + l := opts.Digits.Length() + mod := int32(value % int64(math.Pow10(l))) + + if debug { + fmt.Printf("offset=%v\n", offset) + fmt.Printf("value=%v\n", value) + fmt.Printf("mod'ed=%v\n", mod) + } + + return opts.Digits.Format(mod), nil +} + +// ValidateCustom validates an HOTP with customizable options. Most users should +// use Validate(). 
+func ValidateCustom(passcode string, counter uint64, secret string, opts ValidateOpts) (bool, error) { + passcode = strings.TrimSpace(passcode) + + if len(passcode) != opts.Digits.Length() { + return false, otp.ErrValidateInputInvalidLength + } + + otpstr, err := GenerateCodeCustom(secret, counter, opts) + if err != nil { + return false, err + } + + if subtle.ConstantTimeCompare([]byte(otpstr), []byte(passcode)) == 1 { + return true, nil + } + + return false, nil +} + +// GenerateOpts provides options for .Generate() +type GenerateOpts struct { + // Name of the issuing Organization/Company. + Issuer string + // Name of the User's Account (eg, email address) + AccountName string + // Size in size of the generated Secret. Defaults to 10 bytes. + SecretSize uint + // Digits to request. Defaults to 6. + Digits otp.Digits + // Algorithm to use for HMAC. Defaults to SHA1. + Algorithm otp.Algorithm +} + +// Generate creates a new HOTP Key. +func Generate(opts GenerateOpts) (*otp.Key, error) { + // url encode the Issuer/AccountName + if opts.Issuer == "" { + return nil, otp.ErrGenerateMissingIssuer + } + + if opts.AccountName == "" { + return nil, otp.ErrGenerateMissingAccountName + } + + if opts.SecretSize == 0 { + opts.SecretSize = 10 + } + + if opts.Digits == 0 { + opts.Digits = otp.DigitsSix + } + + // otpauth://totp/Example:alice@google.com?secret=JBSWY3DPEHPK3PXP&issuer=Example + + v := url.Values{} + secret := make([]byte, opts.SecretSize) + _, err := rand.Read(secret) + if err != nil { + return nil, err + } + + v.Set("secret", strings.TrimRight(base32.StdEncoding.EncodeToString(secret), "=")) + v.Set("issuer", opts.Issuer) + v.Set("algorithm", opts.Algorithm.String()) + v.Set("digits", opts.Digits.String()) + + u := url.URL{ + Scheme: "otpauth", + Host: "hotp", + Path: "/" + opts.Issuer + ":" + opts.AccountName, + RawQuery: v.Encode(), + } + + return otp.NewKeyFromURL(u.String()) +} diff --git a/vendor/github.com/pquerna/otp/otp.go b/vendor/github.com/pquerna/otp/otp.go new file mode 100644 index 00000000000..5db93029cee --- /dev/null +++ b/vendor/github.com/pquerna/otp/otp.go @@ -0,0 +1,207 @@ +/** + * Copyright 2014 Paul Querna + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package otp + +import ( + "github.com/boombuler/barcode" + "github.com/boombuler/barcode/qr" + + "crypto/md5" + "crypto/sha1" + "crypto/sha256" + "crypto/sha512" + "errors" + "fmt" + "hash" + "image" + "net/url" + "strings" +) + +// Error when attempting to convert the secret from base32 to raw bytes. +var ErrValidateSecretInvalidBase32 = errors.New("Decoding of secret as base32 failed.") + +// The user provided passcode length was not expected. +var ErrValidateInputInvalidLength = errors.New("Input length unexpected") + +// When generating a Key, the Issuer must be set. +var ErrGenerateMissingIssuer = errors.New("Issuer must be set") + +// When generating a Key, the Account Name must be set. 
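The HOTP helpers above are counter-based rather than time-based. A brief sketch of the shortcut functions, assuming the canonical `github.com/pquerna/otp/hotp` import path and using the base32 secret that appears in the package's own comments:

```go
package main

import (
	"fmt"
	"log"

	"github.com/pquerna/otp/hotp"
)

func main() {
	// A base32-encoded shared secret; in practice this comes from hotp.Generate
	// and is stored per user alongside a monotonically increasing counter.
	secret := "JBSWY3DPEHPK3PXP"
	counter := uint64(42)

	passcode, err := hotp.GenerateCode(secret, counter)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("passcode:", passcode)

	// The server validates against the same counter value, then increments it
	// so each code can only be used once.
	fmt.Println("valid:", hotp.Validate(passcode, counter, secret))
}
```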
+var ErrGenerateMissingAccountName = errors.New("AccountName must be set") + +// Key represents an TOTP or HTOP key. +type Key struct { + orig string + url *url.URL +} + +// NewKeyFromURL creates a new Key from an TOTP or HOTP url. +// +// The URL format is documented here: +// https://github.com/google/google-authenticator/wiki/Key-Uri-Format +// +func NewKeyFromURL(orig string) (*Key, error) { + s := strings.TrimSpace(orig) + + u, err := url.Parse(s) + + if err != nil { + return nil, err + } + + return &Key{ + orig: s, + url: u, + }, nil +} + +func (k *Key) String() string { + return k.orig +} + +// Image returns an QR-Code image of the specified width and height, +// suitable for use by many clients like Google-Authenricator +// to enroll a user's TOTP/HOTP key. +func (k *Key) Image(width int, height int) (image.Image, error) { + b, err := qr.Encode(k.orig, qr.M, qr.Auto) + + if err != nil { + return nil, err + } + + b, err = barcode.Scale(b, width, height) + + if err != nil { + return nil, err + } + + return b, nil +} + +// Type returns "hotp" or "totp". +func (k *Key) Type() string { + return k.url.Host +} + +// Issuer returns the name of the issuing organization. +func (k *Key) Issuer() string { + q := k.url.Query() + + issuer := q.Get("issuer") + + if issuer != "" { + return issuer + } + + p := strings.TrimPrefix(k.url.Path, "/") + i := strings.Index(p, ":") + + if i == -1 { + return "" + } + + return p[:i] +} + +// AccountName returns the name of the user's account. +func (k *Key) AccountName() string { + p := strings.TrimPrefix(k.url.Path, "/") + i := strings.Index(p, ":") + + if i == -1 { + return p + } + + return p[i+1:] +} + +// Secret returns the opaque secret for this Key. +func (k *Key) Secret() string { + q := k.url.Query() + + return q.Get("secret") +} + +// URL returns the OTP URL as a string +func (k *Key) URL() string { + return k.url.String() +} + +// Algorithm represents the hashing function to use in the HMAC +// operation needed for OTPs. +type Algorithm int + +const ( + AlgorithmSHA1 Algorithm = iota + AlgorithmSHA256 + AlgorithmSHA512 + AlgorithmMD5 +) + +func (a Algorithm) String() string { + switch a { + case AlgorithmSHA1: + return "SHA1" + case AlgorithmSHA256: + return "SHA256" + case AlgorithmSHA512: + return "SHA512" + case AlgorithmMD5: + return "MD5" + } + panic("unreached") +} + +func (a Algorithm) Hash() hash.Hash { + switch a { + case AlgorithmSHA1: + return sha1.New() + case AlgorithmSHA256: + return sha256.New() + case AlgorithmSHA512: + return sha512.New() + case AlgorithmMD5: + return md5.New() + } + panic("unreached") +} + +// Digits represents the number of digits present in the +// user's OTP passcode. Six and Eight are the most common values. +type Digits int + +const ( + DigitsSix Digits = 6 + DigitsEight Digits = 8 +) + +// Format converts an integer into the zero-filled size for this Digits. +func (d Digits) Format(in int32) string { + f := fmt.Sprintf("%%0%dd", d) + return fmt.Sprintf(f, in) +} + +// Length returns the number of characters for this Digits. 
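The `Key` type above wraps an `otpauth://` URL and exposes its components. A short sketch of parsing such a URL with `NewKeyFromURL`, using the example issuer, account, and secret from the package's comments:

```go
package main

import (
	"fmt"
	"log"

	"github.com/pquerna/otp"
)

func main() {
	// The otpauth URL format used by Google Authenticator and compatible apps.
	key, err := otp.NewKeyFromURL(
		"otpauth://totp/Example.com:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example.com")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(key.Type())        // totp
	fmt.Println(key.Issuer())      // Example.com
	fmt.Println(key.AccountName()) // alice@example.com
	fmt.Println(key.Secret())      // JBSWY3DPEHPK3PXP
}
```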
+func (d Digits) Length() int { + return int(d) +} + +func (d Digits) String() string { + return fmt.Sprintf("%d", d) +} diff --git a/vendor/github.com/pquerna/otp/totp/totp.go b/vendor/github.com/pquerna/otp/totp/totp.go new file mode 100644 index 00000000000..60532d6197b --- /dev/null +++ b/vendor/github.com/pquerna/otp/totp/totp.go @@ -0,0 +1,193 @@ +/** + * Copyright 2014 Paul Querna + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package totp + +import ( + "strings" + + "github.com/pquerna/otp" + "github.com/pquerna/otp/hotp" + + "crypto/rand" + "encoding/base32" + "math" + "net/url" + "strconv" + "time" +) + +// Validate a TOTP using the current time. +// A shortcut for ValidateCustom, Validate uses a configuration +// that is compatible with Google-Authenticator and most clients. +func Validate(passcode string, secret string) bool { + rv, _ := ValidateCustom( + passcode, + secret, + time.Now().UTC(), + ValidateOpts{ + Period: 30, + Skew: 1, + Digits: otp.DigitsSix, + Algorithm: otp.AlgorithmSHA1, + }, + ) + return rv +} + +// GenerateCode creates a TOTP token using the current time. +// A shortcut for GenerateCodeCustom, GenerateCode uses a configuration +// that is compatible with Google-Authenticator and most clients. +func GenerateCode(secret string, t time.Time) (string, error) { + return GenerateCodeCustom(secret, t, ValidateOpts{ + Period: 30, + Skew: 1, + Digits: otp.DigitsSix, + Algorithm: otp.AlgorithmSHA1, + }) +} + +// ValidateOpts provides options for ValidateCustom(). +type ValidateOpts struct { + // Number of seconds a TOTP hash is valid for. Defaults to 30 seconds. + Period uint + // Periods before or after the current time to allow. Value of 1 allows up to Period + // of either side of the specified time. Defaults to 0 allowed skews. Values greater + // than 1 are likely sketchy. + Skew uint + // Digits as part of the input. Defaults to 6. + Digits otp.Digits + // Algorithm to use for HMAC. Defaults to SHA1. + Algorithm otp.Algorithm +} + +// GenerateCodeCustom takes a timepoint and produces a passcode using a +// secret and the provided opts. (Under the hood, this is making an adapted +// call to hotp.GenerateCodeCustom) +func GenerateCodeCustom(secret string, t time.Time, opts ValidateOpts) (passcode string, err error) { + if opts.Period == 0 { + opts.Period = 30 + } + counter := uint64(math.Floor(float64(t.Unix()) / float64(opts.Period))) + passcode, err = hotp.GenerateCodeCustom(secret, counter, hotp.ValidateOpts{ + Digits: opts.Digits, + Algorithm: opts.Algorithm, + }) + if err != nil { + return "", err + } + return passcode, nil +} + +// ValidateCustom validates a TOTP given a user specified time and custom options. +// Most users should use Validate() to provide an interpolatable TOTP experience. 
+func ValidateCustom(passcode string, secret string, t time.Time, opts ValidateOpts) (bool, error) { + if opts.Period == 0 { + opts.Period = 30 + } + + counters := []uint64{} + counter := int64(math.Floor(float64(t.Unix()) / float64(opts.Period))) + + counters = append(counters, uint64(counter)) + for i := 1; i <= int(opts.Skew); i++ { + counters = append(counters, uint64(counter+int64(i))) + counters = append(counters, uint64(counter-int64(i))) + } + + for _, counter := range counters { + rv, err := hotp.ValidateCustom(passcode, counter, secret, hotp.ValidateOpts{ + Digits: opts.Digits, + Algorithm: opts.Algorithm, + }) + + if err != nil { + return false, err + } + + if rv == true { + return true, nil + } + } + + return false, nil +} + +// GenerateOpts provides options for Generate(). The default values +// are compatible with Google-Authenticator. +type GenerateOpts struct { + // Name of the issuing Organization/Company. + Issuer string + // Name of the User's Account (eg, email address) + AccountName string + // Number of seconds a TOTP hash is valid for. Defaults to 30 seconds. + Period uint + // Size in size of the generated Secret. Defaults to 20 bytes. + SecretSize uint + // Digits to request. Defaults to 6. + Digits otp.Digits + // Algorithm to use for HMAC. Defaults to SHA1. + Algorithm otp.Algorithm +} + +// Generate a new TOTP Key. +func Generate(opts GenerateOpts) (*otp.Key, error) { + // url encode the Issuer/AccountName + if opts.Issuer == "" { + return nil, otp.ErrGenerateMissingIssuer + } + + if opts.AccountName == "" { + return nil, otp.ErrGenerateMissingAccountName + } + + if opts.Period == 0 { + opts.Period = 30 + } + + if opts.SecretSize == 0 { + opts.SecretSize = 20 + } + + if opts.Digits == 0 { + opts.Digits = otp.DigitsSix + } + + // otpauth://totp/Example:alice@google.com?secret=JBSWY3DPEHPK3PXP&issuer=Example + + v := url.Values{} + secret := make([]byte, opts.SecretSize) + _, err := rand.Read(secret) + if err != nil { + return nil, err + } + + v.Set("secret", strings.TrimRight(base32.StdEncoding.EncodeToString(secret), "=")) + v.Set("issuer", opts.Issuer) + v.Set("period", strconv.FormatUint(uint64(opts.Period), 10)) + v.Set("algorithm", opts.Algorithm.String()) + v.Set("digits", opts.Digits.String()) + + u := url.URL{ + Scheme: "otpauth", + Host: "totp", + Path: "/" + opts.Issuer + ":" + opts.AccountName, + RawQuery: v.Encode(), + } + + return otp.NewKeyFromURL(u.String()) +} diff --git a/vendor/github.com/satori/go.uuid/LICENSE b/vendor/github.com/satori/go.uuid/LICENSE deleted file mode 100644 index 488357b8af1..00000000000 --- a/vendor/github.com/satori/go.uuid/LICENSE +++ /dev/null @@ -1,20 +0,0 @@ -Copyright (C) 2013-2016 by Maxim Bublis - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/satori/go.uuid/README.md b/vendor/github.com/satori/go.uuid/README.md deleted file mode 100644 index b6aad1c8130..00000000000 --- a/vendor/github.com/satori/go.uuid/README.md +++ /dev/null @@ -1,65 +0,0 @@ -# UUID package for Go language - -[![Build Status](https://travis-ci.org/satori/go.uuid.png?branch=master)](https://travis-ci.org/satori/go.uuid) -[![Coverage Status](https://coveralls.io/repos/github/satori/go.uuid/badge.svg?branch=master)](https://coveralls.io/github/satori/go.uuid) -[![GoDoc](http://godoc.org/github.com/satori/go.uuid?status.png)](http://godoc.org/github.com/satori/go.uuid) - -This package provides pure Go implementation of Universally Unique Identifier (UUID). Supported both creation and parsing of UUIDs. - -With 100% test coverage and benchmarks out of box. - -Supported versions: -* Version 1, based on timestamp and MAC address (RFC 4122) -* Version 2, based on timestamp, MAC address and POSIX UID/GID (DCE 1.1) -* Version 3, based on MD5 hashing (RFC 4122) -* Version 4, based on random numbers (RFC 4122) -* Version 5, based on SHA-1 hashing (RFC 4122) - -## Installation - -Use the `go` command: - - $ go get github.com/satori/go.uuid - -## Requirements - -UUID package requires Go >= 1.2. - -## Example - -```go -package main - -import ( - "fmt" - "github.com/satori/go.uuid" -) - -func main() { - // Creating UUID Version 4 - u1 := uuid.NewV4() - fmt.Printf("UUIDv4: %s\n", u1) - - // Parsing UUID from string input - u2, err := uuid.FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8") - if err != nil { - fmt.Printf("Something gone wrong: %s", err) - } - fmt.Printf("Successfully parsed: %s", u2) -} -``` - -## Documentation - -[Documentation](http://godoc.org/github.com/satori/go.uuid) is hosted at GoDoc project. - -## Links -* [RFC 4122](http://tools.ietf.org/html/rfc4122) -* [DCE 1.1: Authentication and Security Services](http://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01) - -## Copyright - -Copyright (C) 2013-2016 by Maxim Bublis . - -UUID package released under MIT License. -See [LICENSE](https://github.com/satori/go.uuid/blob/master/LICENSE) for details. diff --git a/vendor/github.com/satori/go.uuid/uuid.go b/vendor/github.com/satori/go.uuid/uuid.go deleted file mode 100644 index 295f3fc2c57..00000000000 --- a/vendor/github.com/satori/go.uuid/uuid.go +++ /dev/null @@ -1,481 +0,0 @@ -// Copyright (C) 2013-2015 by Maxim Bublis -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -// Package uuid provides implementation of Universally Unique Identifier (UUID). -// Supported versions are 1, 3, 4 and 5 (as specified in RFC 4122) and -// version 2 (as specified in DCE 1.1). -package uuid - -import ( - "bytes" - "crypto/md5" - "crypto/rand" - "crypto/sha1" - "database/sql/driver" - "encoding/binary" - "encoding/hex" - "fmt" - "hash" - "net" - "os" - "sync" - "time" -) - -// UUID layout variants. -const ( - VariantNCS = iota - VariantRFC4122 - VariantMicrosoft - VariantFuture -) - -// UUID DCE domains. -const ( - DomainPerson = iota - DomainGroup - DomainOrg -) - -// Difference in 100-nanosecond intervals between -// UUID epoch (October 15, 1582) and Unix epoch (January 1, 1970). -const epochStart = 122192928000000000 - -// Used in string method conversion -const dash byte = '-' - -// UUID v1/v2 storage. -var ( - storageMutex sync.Mutex - storageOnce sync.Once - epochFunc = unixTimeFunc - clockSequence uint16 - lastTime uint64 - hardwareAddr [6]byte - posixUID = uint32(os.Getuid()) - posixGID = uint32(os.Getgid()) -) - -// String parse helpers. -var ( - urnPrefix = []byte("urn:uuid:") - byteGroups = []int{8, 4, 4, 4, 12} -) - -func initClockSequence() { - buf := make([]byte, 2) - safeRandom(buf) - clockSequence = binary.BigEndian.Uint16(buf) -} - -func initHardwareAddr() { - interfaces, err := net.Interfaces() - if err == nil { - for _, iface := range interfaces { - if len(iface.HardwareAddr) >= 6 { - copy(hardwareAddr[:], iface.HardwareAddr) - return - } - } - } - - // Initialize hardwareAddr randomly in case - // of real network interfaces absence - safeRandom(hardwareAddr[:]) - - // Set multicast bit as recommended in RFC 4122 - hardwareAddr[0] |= 0x01 -} - -func initStorage() { - initClockSequence() - initHardwareAddr() -} - -func safeRandom(dest []byte) { - if _, err := rand.Read(dest); err != nil { - panic(err) - } -} - -// Returns difference in 100-nanosecond intervals between -// UUID epoch (October 15, 1582) and current time. -// This is default epoch calculation function. -func unixTimeFunc() uint64 { - return epochStart + uint64(time.Now().UnixNano()/100) -} - -// UUID representation compliant with specification -// described in RFC 4122. -type UUID [16]byte - -// NullUUID can be used with the standard sql package to represent a -// UUID value that can be NULL in the database -type NullUUID struct { - UUID UUID - Valid bool -} - -// The nil UUID is special form of UUID that is specified to have all -// 128 bits set to zero. -var Nil = UUID{} - -// Predefined namespace UUIDs. -var ( - NamespaceDNS, _ = FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8") - NamespaceURL, _ = FromString("6ba7b811-9dad-11d1-80b4-00c04fd430c8") - NamespaceOID, _ = FromString("6ba7b812-9dad-11d1-80b4-00c04fd430c8") - NamespaceX500, _ = FromString("6ba7b814-9dad-11d1-80b4-00c04fd430c8") -) - -// And returns result of binary AND of two UUIDs. 
-func And(u1 UUID, u2 UUID) UUID { - u := UUID{} - for i := 0; i < 16; i++ { - u[i] = u1[i] & u2[i] - } - return u -} - -// Or returns result of binary OR of two UUIDs. -func Or(u1 UUID, u2 UUID) UUID { - u := UUID{} - for i := 0; i < 16; i++ { - u[i] = u1[i] | u2[i] - } - return u -} - -// Equal returns true if u1 and u2 equals, otherwise returns false. -func Equal(u1 UUID, u2 UUID) bool { - return bytes.Equal(u1[:], u2[:]) -} - -// Version returns algorithm version used to generate UUID. -func (u UUID) Version() uint { - return uint(u[6] >> 4) -} - -// Variant returns UUID layout variant. -func (u UUID) Variant() uint { - switch { - case (u[8] & 0x80) == 0x00: - return VariantNCS - case (u[8]&0xc0)|0x80 == 0x80: - return VariantRFC4122 - case (u[8]&0xe0)|0xc0 == 0xc0: - return VariantMicrosoft - } - return VariantFuture -} - -// Bytes returns bytes slice representation of UUID. -func (u UUID) Bytes() []byte { - return u[:] -} - -// Returns canonical string representation of UUID: -// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. -func (u UUID) String() string { - buf := make([]byte, 36) - - hex.Encode(buf[0:8], u[0:4]) - buf[8] = dash - hex.Encode(buf[9:13], u[4:6]) - buf[13] = dash - hex.Encode(buf[14:18], u[6:8]) - buf[18] = dash - hex.Encode(buf[19:23], u[8:10]) - buf[23] = dash - hex.Encode(buf[24:], u[10:]) - - return string(buf) -} - -// SetVersion sets version bits. -func (u *UUID) SetVersion(v byte) { - u[6] = (u[6] & 0x0f) | (v << 4) -} - -// SetVariant sets variant bits as described in RFC 4122. -func (u *UUID) SetVariant() { - u[8] = (u[8] & 0xbf) | 0x80 -} - -// MarshalText implements the encoding.TextMarshaler interface. -// The encoding is the same as returned by String. -func (u UUID) MarshalText() (text []byte, err error) { - text = []byte(u.String()) - return -} - -// UnmarshalText implements the encoding.TextUnmarshaler interface. -// Following formats are supported: -// "6ba7b810-9dad-11d1-80b4-00c04fd430c8", -// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}", -// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8" -func (u *UUID) UnmarshalText(text []byte) (err error) { - if len(text) < 32 { - err = fmt.Errorf("uuid: UUID string too short: %s", text) - return - } - - t := text[:] - braced := false - - if bytes.Equal(t[:9], urnPrefix) { - t = t[9:] - } else if t[0] == '{' { - braced = true - t = t[1:] - } - - b := u[:] - - for i, byteGroup := range byteGroups { - if i > 0 { - if t[0] != '-' { - err = fmt.Errorf("uuid: invalid string format") - return - } - t = t[1:] - } - - if len(t) < byteGroup { - err = fmt.Errorf("uuid: UUID string too short: %s", text) - return - } - - if i == 4 && len(t) > byteGroup && - ((braced && t[byteGroup] != '}') || len(t[byteGroup:]) > 1 || !braced) { - err = fmt.Errorf("uuid: UUID string too long: %s", text) - return - } - - _, err = hex.Decode(b[:byteGroup/2], t[:byteGroup]) - if err != nil { - return - } - - t = t[byteGroup:] - b = b[byteGroup/2:] - } - - return -} - -// MarshalBinary implements the encoding.BinaryMarshaler interface. -func (u UUID) MarshalBinary() (data []byte, err error) { - data = u.Bytes() - return -} - -// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. -// It will return error if the slice isn't 16 bytes long. -func (u *UUID) UnmarshalBinary(data []byte) (err error) { - if len(data) != 16 { - err = fmt.Errorf("uuid: UUID must be exactly 16 bytes long, got %d bytes", len(data)) - return - } - copy(u[:], data) - - return -} - -// Value implements the driver.Valuer interface. 
-func (u UUID) Value() (driver.Value, error) { - return u.String(), nil -} - -// Scan implements the sql.Scanner interface. -// A 16-byte slice is handled by UnmarshalBinary, while -// a longer byte slice or a string is handled by UnmarshalText. -func (u *UUID) Scan(src interface{}) error { - switch src := src.(type) { - case []byte: - if len(src) == 16 { - return u.UnmarshalBinary(src) - } - return u.UnmarshalText(src) - - case string: - return u.UnmarshalText([]byte(src)) - } - - return fmt.Errorf("uuid: cannot convert %T to UUID", src) -} - -// Value implements the driver.Valuer interface. -func (u NullUUID) Value() (driver.Value, error) { - if !u.Valid { - return nil, nil - } - // Delegate to UUID Value function - return u.UUID.Value() -} - -// Scan implements the sql.Scanner interface. -func (u *NullUUID) Scan(src interface{}) error { - if src == nil { - u.UUID, u.Valid = Nil, false - return nil - } - - // Delegate to UUID Scan function - u.Valid = true - return u.UUID.Scan(src) -} - -// FromBytes returns UUID converted from raw byte slice input. -// It will return error if the slice isn't 16 bytes long. -func FromBytes(input []byte) (u UUID, err error) { - err = u.UnmarshalBinary(input) - return -} - -// FromBytesOrNil returns UUID converted from raw byte slice input. -// Same behavior as FromBytes, but returns a Nil UUID on error. -func FromBytesOrNil(input []byte) UUID { - uuid, err := FromBytes(input) - if err != nil { - return Nil - } - return uuid -} - -// FromString returns UUID parsed from string input. -// Input is expected in a form accepted by UnmarshalText. -func FromString(input string) (u UUID, err error) { - err = u.UnmarshalText([]byte(input)) - return -} - -// FromStringOrNil returns UUID parsed from string input. -// Same behavior as FromString, but returns a Nil UUID on error. -func FromStringOrNil(input string) UUID { - uuid, err := FromString(input) - if err != nil { - return Nil - } - return uuid -} - -// Returns UUID v1/v2 storage state. -// Returns epoch timestamp, clock sequence, and hardware address. -func getStorage() (uint64, uint16, []byte) { - storageOnce.Do(initStorage) - - storageMutex.Lock() - defer storageMutex.Unlock() - - timeNow := epochFunc() - // Clock changed backwards since last UUID generation. - // Should increase clock sequence. - if timeNow <= lastTime { - clockSequence++ - } - lastTime = timeNow - - return timeNow, clockSequence, hardwareAddr[:] -} - -// NewV1 returns UUID based on current timestamp and MAC address. -func NewV1() UUID { - u := UUID{} - - timeNow, clockSeq, hardwareAddr := getStorage() - - binary.BigEndian.PutUint32(u[0:], uint32(timeNow)) - binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32)) - binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48)) - binary.BigEndian.PutUint16(u[8:], clockSeq) - - copy(u[10:], hardwareAddr) - - u.SetVersion(1) - u.SetVariant() - - return u -} - -// NewV2 returns DCE Security UUID based on POSIX UID/GID. -func NewV2(domain byte) UUID { - u := UUID{} - - timeNow, clockSeq, hardwareAddr := getStorage() - - switch domain { - case DomainPerson: - binary.BigEndian.PutUint32(u[0:], posixUID) - case DomainGroup: - binary.BigEndian.PutUint32(u[0:], posixGID) - } - - binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32)) - binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48)) - binary.BigEndian.PutUint16(u[8:], clockSeq) - u[9] = domain - - copy(u[10:], hardwareAddr) - - u.SetVersion(2) - u.SetVariant() - - return u -} - -// NewV3 returns UUID based on MD5 hash of namespace UUID and name. 
-func NewV3(ns UUID, name string) UUID { - u := newFromHash(md5.New(), ns, name) - u.SetVersion(3) - u.SetVariant() - - return u -} - -// NewV4 returns random generated UUID. -func NewV4() UUID { - u := UUID{} - safeRandom(u[:]) - u.SetVersion(4) - u.SetVariant() - - return u -} - -// NewV5 returns UUID based on SHA-1 hash of namespace UUID and name. -func NewV5(ns UUID, name string) UUID { - u := newFromHash(sha1.New(), ns, name) - u.SetVersion(5) - u.SetVariant() - - return u -} - -// Returns UUID based on hashing of namespace UUID and name. -func newFromHash(h hash.Hash, ns UUID, name string) UUID { - u := UUID{} - h.Write(ns[:]) - h.Write([]byte(name)) - copy(u[:], h.Sum(nil)) - - return u -} diff --git a/vendor/github.com/satori/uuid/LICENSE b/vendor/github.com/satori/uuid/LICENSE deleted file mode 100644 index 488357b8af1..00000000000 --- a/vendor/github.com/satori/uuid/LICENSE +++ /dev/null @@ -1,20 +0,0 @@ -Copyright (C) 2013-2016 by Maxim Bublis - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/satori/uuid/README.md b/vendor/github.com/satori/uuid/README.md deleted file mode 100644 index b6aad1c8130..00000000000 --- a/vendor/github.com/satori/uuid/README.md +++ /dev/null @@ -1,65 +0,0 @@ -# UUID package for Go language - -[![Build Status](https://travis-ci.org/satori/go.uuid.png?branch=master)](https://travis-ci.org/satori/go.uuid) -[![Coverage Status](https://coveralls.io/repos/github/satori/go.uuid/badge.svg?branch=master)](https://coveralls.io/github/satori/go.uuid) -[![GoDoc](http://godoc.org/github.com/satori/go.uuid?status.png)](http://godoc.org/github.com/satori/go.uuid) - -This package provides pure Go implementation of Universally Unique Identifier (UUID). Supported both creation and parsing of UUIDs. - -With 100% test coverage and benchmarks out of box. - -Supported versions: -* Version 1, based on timestamp and MAC address (RFC 4122) -* Version 2, based on timestamp, MAC address and POSIX UID/GID (DCE 1.1) -* Version 3, based on MD5 hashing (RFC 4122) -* Version 4, based on random numbers (RFC 4122) -* Version 5, based on SHA-1 hashing (RFC 4122) - -## Installation - -Use the `go` command: - - $ go get github.com/satori/go.uuid - -## Requirements - -UUID package requires Go >= 1.2. 
- -## Example - -```go -package main - -import ( - "fmt" - "github.com/satori/go.uuid" -) - -func main() { - // Creating UUID Version 4 - u1 := uuid.NewV4() - fmt.Printf("UUIDv4: %s\n", u1) - - // Parsing UUID from string input - u2, err := uuid.FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8") - if err != nil { - fmt.Printf("Something gone wrong: %s", err) - } - fmt.Printf("Successfully parsed: %s", u2) -} -``` - -## Documentation - -[Documentation](http://godoc.org/github.com/satori/go.uuid) is hosted at GoDoc project. - -## Links -* [RFC 4122](http://tools.ietf.org/html/rfc4122) -* [DCE 1.1: Authentication and Security Services](http://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01) - -## Copyright - -Copyright (C) 2013-2016 by Maxim Bublis . - -UUID package released under MIT License. -See [LICENSE](https://github.com/satori/go.uuid/blob/master/LICENSE) for details. diff --git a/vendor/github.com/satori/uuid/uuid.go b/vendor/github.com/satori/uuid/uuid.go deleted file mode 100644 index 295f3fc2c57..00000000000 --- a/vendor/github.com/satori/uuid/uuid.go +++ /dev/null @@ -1,481 +0,0 @@ -// Copyright (C) 2013-2015 by Maxim Bublis -// -// Permission is hereby granted, free of charge, to any person obtaining -// a copy of this software and associated documentation files (the -// "Software"), to deal in the Software without restriction, including -// without limitation the rights to use, copy, modify, merge, publish, -// distribute, sublicense, and/or sell copies of the Software, and to -// permit persons to whom the Software is furnished to do so, subject to -// the following conditions: -// -// The above copyright notice and this permission notice shall be -// included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -// Package uuid provides implementation of Universally Unique Identifier (UUID). -// Supported versions are 1, 3, 4 and 5 (as specified in RFC 4122) and -// version 2 (as specified in DCE 1.1). -package uuid - -import ( - "bytes" - "crypto/md5" - "crypto/rand" - "crypto/sha1" - "database/sql/driver" - "encoding/binary" - "encoding/hex" - "fmt" - "hash" - "net" - "os" - "sync" - "time" -) - -// UUID layout variants. -const ( - VariantNCS = iota - VariantRFC4122 - VariantMicrosoft - VariantFuture -) - -// UUID DCE domains. -const ( - DomainPerson = iota - DomainGroup - DomainOrg -) - -// Difference in 100-nanosecond intervals between -// UUID epoch (October 15, 1582) and Unix epoch (January 1, 1970). -const epochStart = 122192928000000000 - -// Used in string method conversion -const dash byte = '-' - -// UUID v1/v2 storage. -var ( - storageMutex sync.Mutex - storageOnce sync.Once - epochFunc = unixTimeFunc - clockSequence uint16 - lastTime uint64 - hardwareAddr [6]byte - posixUID = uint32(os.Getuid()) - posixGID = uint32(os.Getgid()) -) - -// String parse helpers. 
-var ( - urnPrefix = []byte("urn:uuid:") - byteGroups = []int{8, 4, 4, 4, 12} -) - -func initClockSequence() { - buf := make([]byte, 2) - safeRandom(buf) - clockSequence = binary.BigEndian.Uint16(buf) -} - -func initHardwareAddr() { - interfaces, err := net.Interfaces() - if err == nil { - for _, iface := range interfaces { - if len(iface.HardwareAddr) >= 6 { - copy(hardwareAddr[:], iface.HardwareAddr) - return - } - } - } - - // Initialize hardwareAddr randomly in case - // of real network interfaces absence - safeRandom(hardwareAddr[:]) - - // Set multicast bit as recommended in RFC 4122 - hardwareAddr[0] |= 0x01 -} - -func initStorage() { - initClockSequence() - initHardwareAddr() -} - -func safeRandom(dest []byte) { - if _, err := rand.Read(dest); err != nil { - panic(err) - } -} - -// Returns difference in 100-nanosecond intervals between -// UUID epoch (October 15, 1582) and current time. -// This is default epoch calculation function. -func unixTimeFunc() uint64 { - return epochStart + uint64(time.Now().UnixNano()/100) -} - -// UUID representation compliant with specification -// described in RFC 4122. -type UUID [16]byte - -// NullUUID can be used with the standard sql package to represent a -// UUID value that can be NULL in the database -type NullUUID struct { - UUID UUID - Valid bool -} - -// The nil UUID is special form of UUID that is specified to have all -// 128 bits set to zero. -var Nil = UUID{} - -// Predefined namespace UUIDs. -var ( - NamespaceDNS, _ = FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8") - NamespaceURL, _ = FromString("6ba7b811-9dad-11d1-80b4-00c04fd430c8") - NamespaceOID, _ = FromString("6ba7b812-9dad-11d1-80b4-00c04fd430c8") - NamespaceX500, _ = FromString("6ba7b814-9dad-11d1-80b4-00c04fd430c8") -) - -// And returns result of binary AND of two UUIDs. -func And(u1 UUID, u2 UUID) UUID { - u := UUID{} - for i := 0; i < 16; i++ { - u[i] = u1[i] & u2[i] - } - return u -} - -// Or returns result of binary OR of two UUIDs. -func Or(u1 UUID, u2 UUID) UUID { - u := UUID{} - for i := 0; i < 16; i++ { - u[i] = u1[i] | u2[i] - } - return u -} - -// Equal returns true if u1 and u2 equals, otherwise returns false. -func Equal(u1 UUID, u2 UUID) bool { - return bytes.Equal(u1[:], u2[:]) -} - -// Version returns algorithm version used to generate UUID. -func (u UUID) Version() uint { - return uint(u[6] >> 4) -} - -// Variant returns UUID layout variant. -func (u UUID) Variant() uint { - switch { - case (u[8] & 0x80) == 0x00: - return VariantNCS - case (u[8]&0xc0)|0x80 == 0x80: - return VariantRFC4122 - case (u[8]&0xe0)|0xc0 == 0xc0: - return VariantMicrosoft - } - return VariantFuture -} - -// Bytes returns bytes slice representation of UUID. -func (u UUID) Bytes() []byte { - return u[:] -} - -// Returns canonical string representation of UUID: -// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. -func (u UUID) String() string { - buf := make([]byte, 36) - - hex.Encode(buf[0:8], u[0:4]) - buf[8] = dash - hex.Encode(buf[9:13], u[4:6]) - buf[13] = dash - hex.Encode(buf[14:18], u[6:8]) - buf[18] = dash - hex.Encode(buf[19:23], u[8:10]) - buf[23] = dash - hex.Encode(buf[24:], u[10:]) - - return string(buf) -} - -// SetVersion sets version bits. -func (u *UUID) SetVersion(v byte) { - u[6] = (u[6] & 0x0f) | (v << 4) -} - -// SetVariant sets variant bits as described in RFC 4122. -func (u *UUID) SetVariant() { - u[8] = (u[8] & 0xbf) | 0x80 -} - -// MarshalText implements the encoding.TextMarshaler interface. -// The encoding is the same as returned by String. 
-func (u UUID) MarshalText() (text []byte, err error) { - text = []byte(u.String()) - return -} - -// UnmarshalText implements the encoding.TextUnmarshaler interface. -// Following formats are supported: -// "6ba7b810-9dad-11d1-80b4-00c04fd430c8", -// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}", -// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8" -func (u *UUID) UnmarshalText(text []byte) (err error) { - if len(text) < 32 { - err = fmt.Errorf("uuid: UUID string too short: %s", text) - return - } - - t := text[:] - braced := false - - if bytes.Equal(t[:9], urnPrefix) { - t = t[9:] - } else if t[0] == '{' { - braced = true - t = t[1:] - } - - b := u[:] - - for i, byteGroup := range byteGroups { - if i > 0 { - if t[0] != '-' { - err = fmt.Errorf("uuid: invalid string format") - return - } - t = t[1:] - } - - if len(t) < byteGroup { - err = fmt.Errorf("uuid: UUID string too short: %s", text) - return - } - - if i == 4 && len(t) > byteGroup && - ((braced && t[byteGroup] != '}') || len(t[byteGroup:]) > 1 || !braced) { - err = fmt.Errorf("uuid: UUID string too long: %s", text) - return - } - - _, err = hex.Decode(b[:byteGroup/2], t[:byteGroup]) - if err != nil { - return - } - - t = t[byteGroup:] - b = b[byteGroup/2:] - } - - return -} - -// MarshalBinary implements the encoding.BinaryMarshaler interface. -func (u UUID) MarshalBinary() (data []byte, err error) { - data = u.Bytes() - return -} - -// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. -// It will return error if the slice isn't 16 bytes long. -func (u *UUID) UnmarshalBinary(data []byte) (err error) { - if len(data) != 16 { - err = fmt.Errorf("uuid: UUID must be exactly 16 bytes long, got %d bytes", len(data)) - return - } - copy(u[:], data) - - return -} - -// Value implements the driver.Valuer interface. -func (u UUID) Value() (driver.Value, error) { - return u.String(), nil -} - -// Scan implements the sql.Scanner interface. -// A 16-byte slice is handled by UnmarshalBinary, while -// a longer byte slice or a string is handled by UnmarshalText. -func (u *UUID) Scan(src interface{}) error { - switch src := src.(type) { - case []byte: - if len(src) == 16 { - return u.UnmarshalBinary(src) - } - return u.UnmarshalText(src) - - case string: - return u.UnmarshalText([]byte(src)) - } - - return fmt.Errorf("uuid: cannot convert %T to UUID", src) -} - -// Value implements the driver.Valuer interface. -func (u NullUUID) Value() (driver.Value, error) { - if !u.Valid { - return nil, nil - } - // Delegate to UUID Value function - return u.UUID.Value() -} - -// Scan implements the sql.Scanner interface. -func (u *NullUUID) Scan(src interface{}) error { - if src == nil { - u.UUID, u.Valid = Nil, false - return nil - } - - // Delegate to UUID Scan function - u.Valid = true - return u.UUID.Scan(src) -} - -// FromBytes returns UUID converted from raw byte slice input. -// It will return error if the slice isn't 16 bytes long. -func FromBytes(input []byte) (u UUID, err error) { - err = u.UnmarshalBinary(input) - return -} - -// FromBytesOrNil returns UUID converted from raw byte slice input. -// Same behavior as FromBytes, but returns a Nil UUID on error. -func FromBytesOrNil(input []byte) UUID { - uuid, err := FromBytes(input) - if err != nil { - return Nil - } - return uuid -} - -// FromString returns UUID parsed from string input. -// Input is expected in a form accepted by UnmarshalText. 
-func FromString(input string) (u UUID, err error) { - err = u.UnmarshalText([]byte(input)) - return -} - -// FromStringOrNil returns UUID parsed from string input. -// Same behavior as FromString, but returns a Nil UUID on error. -func FromStringOrNil(input string) UUID { - uuid, err := FromString(input) - if err != nil { - return Nil - } - return uuid -} - -// Returns UUID v1/v2 storage state. -// Returns epoch timestamp, clock sequence, and hardware address. -func getStorage() (uint64, uint16, []byte) { - storageOnce.Do(initStorage) - - storageMutex.Lock() - defer storageMutex.Unlock() - - timeNow := epochFunc() - // Clock changed backwards since last UUID generation. - // Should increase clock sequence. - if timeNow <= lastTime { - clockSequence++ - } - lastTime = timeNow - - return timeNow, clockSequence, hardwareAddr[:] -} - -// NewV1 returns UUID based on current timestamp and MAC address. -func NewV1() UUID { - u := UUID{} - - timeNow, clockSeq, hardwareAddr := getStorage() - - binary.BigEndian.PutUint32(u[0:], uint32(timeNow)) - binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32)) - binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48)) - binary.BigEndian.PutUint16(u[8:], clockSeq) - - copy(u[10:], hardwareAddr) - - u.SetVersion(1) - u.SetVariant() - - return u -} - -// NewV2 returns DCE Security UUID based on POSIX UID/GID. -func NewV2(domain byte) UUID { - u := UUID{} - - timeNow, clockSeq, hardwareAddr := getStorage() - - switch domain { - case DomainPerson: - binary.BigEndian.PutUint32(u[0:], posixUID) - case DomainGroup: - binary.BigEndian.PutUint32(u[0:], posixGID) - } - - binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32)) - binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48)) - binary.BigEndian.PutUint16(u[8:], clockSeq) - u[9] = domain - - copy(u[10:], hardwareAddr) - - u.SetVersion(2) - u.SetVariant() - - return u -} - -// NewV3 returns UUID based on MD5 hash of namespace UUID and name. -func NewV3(ns UUID, name string) UUID { - u := newFromHash(md5.New(), ns, name) - u.SetVersion(3) - u.SetVariant() - - return u -} - -// NewV4 returns random generated UUID. -func NewV4() UUID { - u := UUID{} - safeRandom(u[:]) - u.SetVersion(4) - u.SetVariant() - - return u -} - -// NewV5 returns UUID based on SHA-1 hash of namespace UUID and name. -func NewV5(ns UUID, name string) UUID { - u := newFromHash(sha1.New(), ns, name) - u.SetVersion(5) - u.SetVariant() - - return u -} - -// Returns UUID based on hashing of namespace UUID and name. -func newFromHash(h hash.Hash, ns UUID, name string) UUID { - u := UUID{} - h.Write(ns[:]) - h.Write([]byte(name)) - copy(u[:], h.Sum(nil)) - - return u -} diff --git a/vendor/gopkg.in/yaml.v2/LICENSE b/vendor/gopkg.in/yaml.v2/LICENSE index 866d74a7ad7..8dada3edaf5 100644 --- a/vendor/gopkg.in/yaml.v2/LICENSE +++ b/vendor/gopkg.in/yaml.v2/LICENSE @@ -1,13 +1,201 @@ -Copyright 2011-2016 Canonical Ltd. + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - http://www.apache.org/licenses/LICENSE-2.0 + 1. Definitions. -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. 
Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/gopkg.in/yaml.v2/NOTICE b/vendor/gopkg.in/yaml.v2/NOTICE new file mode 100644 index 00000000000..866d74a7ad7 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/NOTICE @@ -0,0 +1,13 @@ +Copyright 2011-2016 Canonical Ltd. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/vendor/gopkg.in/yaml.v2/README.md b/vendor/gopkg.in/yaml.v2/README.md index 1884de6a7d7..b50c6e87755 100644 --- a/vendor/gopkg.in/yaml.v2/README.md +++ b/vendor/gopkg.in/yaml.v2/README.md @@ -65,6 +65,8 @@ b: d: [3, 4] ` +// Note: struct fields must be public in order for unmarshal to +// correctly populate the data. type T struct { A string B struct { diff --git a/vendor/gopkg.in/yaml.v2/apic.go b/vendor/gopkg.in/yaml.v2/apic.go index 95ec014e8cc..1f7e87e6727 100644 --- a/vendor/gopkg.in/yaml.v2/apic.go +++ b/vendor/gopkg.in/yaml.v2/apic.go @@ -2,7 +2,6 @@ package yaml import ( "io" - "os" ) func yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) { @@ -48,9 +47,9 @@ func yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err return n, nil } -// File read handler. -func yaml_file_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) { - return parser.input_file.Read(buffer) +// Reader read handler. +func yaml_reader_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) { + return parser.input_reader.Read(buffer) } // Set a string input. @@ -64,12 +63,12 @@ func yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) { } // Set a file input. -func yaml_parser_set_input_file(parser *yaml_parser_t, file *os.File) { +func yaml_parser_set_input_reader(parser *yaml_parser_t, r io.Reader) { if parser.read_handler != nil { panic("must set the input source only once") } - parser.read_handler = yaml_file_read_handler - parser.input_file = file + parser.read_handler = yaml_reader_read_handler + parser.input_reader = r } // Set the source encoding. @@ -81,14 +80,13 @@ func yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) { } // Create a new emitter object. -func yaml_emitter_initialize(emitter *yaml_emitter_t) bool { +func yaml_emitter_initialize(emitter *yaml_emitter_t) { *emitter = yaml_emitter_t{ buffer: make([]byte, output_buffer_size), raw_buffer: make([]byte, 0, output_raw_buffer_size), states: make([]yaml_emitter_state_t, 0, initial_stack_size), events: make([]yaml_event_t, 0, initial_queue_size), } - return true } // Destroy an emitter object. @@ -102,9 +100,10 @@ func yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error { return nil } -// File write handler. 
-func yaml_file_write_handler(emitter *yaml_emitter_t, buffer []byte) error { - _, err := emitter.output_file.Write(buffer) +// yaml_writer_write_handler uses emitter.output_writer to write the +// emitted text. +func yaml_writer_write_handler(emitter *yaml_emitter_t, buffer []byte) error { + _, err := emitter.output_writer.Write(buffer) return err } @@ -118,12 +117,12 @@ func yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]by } // Set a file output. -func yaml_emitter_set_output_file(emitter *yaml_emitter_t, file io.Writer) { +func yaml_emitter_set_output_writer(emitter *yaml_emitter_t, w io.Writer) { if emitter.write_handler != nil { panic("must set the output target only once") } - emitter.write_handler = yaml_file_write_handler - emitter.output_file = file + emitter.write_handler = yaml_writer_write_handler + emitter.output_writer = w } // Set the output encoding. @@ -252,41 +251,41 @@ func yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) { // // Create STREAM-START. -func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) bool { +func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) { *event = yaml_event_t{ typ: yaml_STREAM_START_EVENT, encoding: encoding, } - return true } // Create STREAM-END. -func yaml_stream_end_event_initialize(event *yaml_event_t) bool { +func yaml_stream_end_event_initialize(event *yaml_event_t) { *event = yaml_event_t{ typ: yaml_STREAM_END_EVENT, } - return true } // Create DOCUMENT-START. -func yaml_document_start_event_initialize(event *yaml_event_t, version_directive *yaml_version_directive_t, - tag_directives []yaml_tag_directive_t, implicit bool) bool { +func yaml_document_start_event_initialize( + event *yaml_event_t, + version_directive *yaml_version_directive_t, + tag_directives []yaml_tag_directive_t, + implicit bool, +) { *event = yaml_event_t{ typ: yaml_DOCUMENT_START_EVENT, version_directive: version_directive, tag_directives: tag_directives, implicit: implicit, } - return true } // Create DOCUMENT-END. -func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) bool { +func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) { *event = yaml_event_t{ typ: yaml_DOCUMENT_END_EVENT, implicit: implicit, } - return true } ///* @@ -348,7 +347,7 @@ func yaml_sequence_end_event_initialize(event *yaml_event_t) bool { } // Create MAPPING-START. -func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) bool { +func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) { *event = yaml_event_t{ typ: yaml_MAPPING_START_EVENT, anchor: anchor, @@ -356,15 +355,13 @@ func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte implicit: implicit, style: yaml_style_t(style), } - return true } // Create MAPPING-END. -func yaml_mapping_end_event_initialize(event *yaml_event_t) bool { +func yaml_mapping_end_event_initialize(event *yaml_event_t) { *event = yaml_event_t{ typ: yaml_MAPPING_END_EVENT, } - return true } // Destroy an event object. @@ -471,7 +468,7 @@ func yaml_event_delete(event *yaml_event_t) { // } context // tag_directive *yaml_tag_directive_t // -// context.error = YAML_NO_ERROR // Eliminate a compliler warning. +// context.error = YAML_NO_ERROR // Eliminate a compiler warning. // // assert(document) // Non-NULL document object is expected. 
// diff --git a/vendor/gopkg.in/yaml.v2/decode.go b/vendor/gopkg.in/yaml.v2/decode.go index 052ecfcd190..e4e56e28e0e 100644 --- a/vendor/gopkg.in/yaml.v2/decode.go +++ b/vendor/gopkg.in/yaml.v2/decode.go @@ -4,6 +4,7 @@ import ( "encoding" "encoding/base64" "fmt" + "io" "math" "reflect" "strconv" @@ -22,19 +23,22 @@ type node struct { kind int line, column int tag string - value string - implicit bool - children []*node - anchors map[string]*node + // For an alias node, alias holds the resolved alias. + alias *node + value string + implicit bool + children []*node + anchors map[string]*node } // ---------------------------------------------------------------------------- // Parser, produces a node tree out of a libyaml event stream. type parser struct { - parser yaml_parser_t - event yaml_event_t - doc *node + parser yaml_parser_t + event yaml_event_t + doc *node + doneInit bool } func newParser(b []byte) *parser { @@ -42,21 +46,30 @@ func newParser(b []byte) *parser { if !yaml_parser_initialize(&p.parser) { panic("failed to initialize YAML emitter") } - if len(b) == 0 { b = []byte{'\n'} } - yaml_parser_set_input_string(&p.parser, b) + return &p +} - p.skip() - if p.event.typ != yaml_STREAM_START_EVENT { - panic("expected stream start event, got " + strconv.Itoa(int(p.event.typ))) +func newParserFromReader(r io.Reader) *parser { + p := parser{} + if !yaml_parser_initialize(&p.parser) { + panic("failed to initialize YAML emitter") } - p.skip() + yaml_parser_set_input_reader(&p.parser, r) return &p } +func (p *parser) init() { + if p.doneInit { + return + } + p.expect(yaml_STREAM_START_EVENT) + p.doneInit = true +} + func (p *parser) destroy() { if p.event.typ != yaml_NO_EVENT { yaml_event_delete(&p.event) @@ -64,16 +77,35 @@ func (p *parser) destroy() { yaml_parser_delete(&p.parser) } -func (p *parser) skip() { - if p.event.typ != yaml_NO_EVENT { - if p.event.typ == yaml_STREAM_END_EVENT { - failf("attempted to go past the end of stream; corrupted value?") +// expect consumes an event from the event stream and +// checks that it's of the expected type. +func (p *parser) expect(e yaml_event_type_t) { + if p.event.typ == yaml_NO_EVENT { + if !yaml_parser_parse(&p.parser, &p.event) { + p.fail() } - yaml_event_delete(&p.event) + } + if p.event.typ == yaml_STREAM_END_EVENT { + failf("attempted to go past the end of stream; corrupted value?") + } + if p.event.typ != e { + p.parser.problem = fmt.Sprintf("expected %s event but got %s", e, p.event.typ) + p.fail() + } + yaml_event_delete(&p.event) + p.event.typ = yaml_NO_EVENT +} + +// peek peeks at the next event in the event stream, +// puts the results into p.event and returns the event type. +func (p *parser) peek() yaml_event_type_t { + if p.event.typ != yaml_NO_EVENT { + return p.event.typ } if !yaml_parser_parse(&p.parser, &p.event) { p.fail() } + return p.event.typ } func (p *parser) fail() { @@ -81,6 +113,10 @@ func (p *parser) fail() { var line int if p.parser.problem_mark.line != 0 { line = p.parser.problem_mark.line + // Scanner errors don't iterate line before returning error + if p.parser.error == yaml_SCANNER_ERROR { + line++ + } } else if p.parser.context_mark.line != 0 { line = p.parser.context_mark.line } @@ -103,7 +139,8 @@ func (p *parser) anchor(n *node, anchor []byte) { } func (p *parser) parse() *node { - switch p.event.typ { + p.init() + switch p.peek() { case yaml_SCALAR_EVENT: return p.scalar() case yaml_ALIAS_EVENT: @@ -118,7 +155,7 @@ func (p *parser) parse() *node { // Happens when attempting to decode an empty buffer. 
return nil default: - panic("attempted to parse unknown event: " + strconv.Itoa(int(p.event.typ))) + panic("attempted to parse unknown event: " + p.event.typ.String()) } } @@ -134,19 +171,20 @@ func (p *parser) document() *node { n := p.node(documentNode) n.anchors = make(map[string]*node) p.doc = n - p.skip() + p.expect(yaml_DOCUMENT_START_EVENT) n.children = append(n.children, p.parse()) - if p.event.typ != yaml_DOCUMENT_END_EVENT { - panic("expected end of document event but got " + strconv.Itoa(int(p.event.typ))) - } - p.skip() + p.expect(yaml_DOCUMENT_END_EVENT) return n } func (p *parser) alias() *node { n := p.node(aliasNode) n.value = string(p.event.anchor) - p.skip() + n.alias = p.doc.anchors[n.value] + if n.alias == nil { + failf("unknown anchor '%s' referenced", n.value) + } + p.expect(yaml_ALIAS_EVENT) return n } @@ -156,29 +194,29 @@ func (p *parser) scalar() *node { n.tag = string(p.event.tag) n.implicit = p.event.implicit p.anchor(n, p.event.anchor) - p.skip() + p.expect(yaml_SCALAR_EVENT) return n } func (p *parser) sequence() *node { n := p.node(sequenceNode) p.anchor(n, p.event.anchor) - p.skip() - for p.event.typ != yaml_SEQUENCE_END_EVENT { + p.expect(yaml_SEQUENCE_START_EVENT) + for p.peek() != yaml_SEQUENCE_END_EVENT { n.children = append(n.children, p.parse()) } - p.skip() + p.expect(yaml_SEQUENCE_END_EVENT) return n } func (p *parser) mapping() *node { n := p.node(mappingNode) p.anchor(n, p.event.anchor) - p.skip() - for p.event.typ != yaml_MAPPING_END_EVENT { + p.expect(yaml_MAPPING_START_EVENT) + for p.peek() != yaml_MAPPING_END_EVENT { n.children = append(n.children, p.parse(), p.parse()) } - p.skip() + p.expect(yaml_MAPPING_END_EVENT) return n } @@ -187,9 +225,10 @@ func (p *parser) mapping() *node { type decoder struct { doc *node - aliases map[string]bool + aliases map[*node]bool mapType reflect.Type terrors []string + strict bool } var ( @@ -197,11 +236,13 @@ var ( durationType = reflect.TypeOf(time.Duration(0)) defaultMapType = reflect.TypeOf(map[interface{}]interface{}{}) ifaceType = defaultMapType.Elem() + timeType = reflect.TypeOf(time.Time{}) + ptrTimeType = reflect.TypeOf(&time.Time{}) ) -func newDecoder() *decoder { - d := &decoder{mapType: defaultMapType} - d.aliases = make(map[string]bool) +func newDecoder(strict bool) *decoder { + d := &decoder{mapType: defaultMapType, strict: strict} + d.aliases = make(map[*node]bool) return d } @@ -250,7 +291,7 @@ func (d *decoder) callUnmarshaler(n *node, u Unmarshaler) (good bool) { // // If n holds a null value, prepare returns before doing anything. func (d *decoder) prepare(n *node, out reflect.Value) (newout reflect.Value, unmarshaled, good bool) { - if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "" && n.implicit) { + if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "~" || n.value == "" && n.implicit) { return out, false, false } again := true @@ -307,16 +348,13 @@ func (d *decoder) document(n *node, out reflect.Value) (good bool) { } func (d *decoder) alias(n *node, out reflect.Value) (good bool) { - an, ok := d.doc.anchors[n.value] - if !ok { - failf("unknown anchor '%s' referenced", n.value) - } - if d.aliases[n.value] { + if d.aliases[n] { + // TODO this could actually be allowed in some circumstances. 
failf("anchor '%s' value contains itself", n.value) } - d.aliases[n.value] = true - good = d.unmarshal(an, out) - delete(d.aliases, n.value) + d.aliases[n] = true + good = d.unmarshal(n.alias, out) + delete(d.aliases, n) return good } @@ -328,7 +366,7 @@ func resetMap(out reflect.Value) { } } -func (d *decoder) scalar(n *node, out reflect.Value) (good bool) { +func (d *decoder) scalar(n *node, out reflect.Value) bool { var tag string var resolved interface{} if n.tag == "" && !n.implicit { @@ -352,9 +390,26 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) { } return true } - if s, ok := resolved.(string); ok && out.CanAddr() { - if u, ok := out.Addr().Interface().(encoding.TextUnmarshaler); ok { - err := u.UnmarshalText([]byte(s)) + if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() { + // We've resolved to exactly the type we want, so use that. + out.Set(resolvedv) + return true + } + // Perhaps we can use the value as a TextUnmarshaler to + // set its value. + if out.CanAddr() { + u, ok := out.Addr().Interface().(encoding.TextUnmarshaler) + if ok { + var text []byte + if tag == yaml_BINARY_TAG { + text = []byte(resolved.(string)) + } else { + // We let any value be unmarshaled into TextUnmarshaler. + // That might be more lax than we'd like, but the + // TextUnmarshaler itself should bowl out any dubious values. + text = []byte(n.value) + } + err := u.UnmarshalText(text) if err != nil { fail(err) } @@ -365,46 +420,54 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) { case reflect.String: if tag == yaml_BINARY_TAG { out.SetString(resolved.(string)) - good = true - } else if resolved != nil { + return true + } + if resolved != nil { out.SetString(n.value) - good = true + return true } case reflect.Interface: if resolved == nil { out.Set(reflect.Zero(out.Type())) + } else if tag == yaml_TIMESTAMP_TAG { + // It looks like a timestamp but for backward compatibility + // reasons we set it as a string, so that code that unmarshals + // timestamp-like values into interface{} will continue to + // see a string and not a time.Time. + // TODO(v3) Drop this. 
+ out.Set(reflect.ValueOf(n.value)) } else { out.Set(reflect.ValueOf(resolved)) } - good = true + return true case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: switch resolved := resolved.(type) { case int: if !out.OverflowInt(int64(resolved)) { out.SetInt(int64(resolved)) - good = true + return true } case int64: if !out.OverflowInt(resolved) { out.SetInt(resolved) - good = true + return true } case uint64: if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) { out.SetInt(int64(resolved)) - good = true + return true } case float64: if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) { out.SetInt(int64(resolved)) - good = true + return true } case string: if out.Type() == durationType { d, err := time.ParseDuration(resolved) if err == nil { out.SetInt(int64(d)) - good = true + return true } } } @@ -413,44 +476,49 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) { case int: if resolved >= 0 && !out.OverflowUint(uint64(resolved)) { out.SetUint(uint64(resolved)) - good = true + return true } case int64: if resolved >= 0 && !out.OverflowUint(uint64(resolved)) { out.SetUint(uint64(resolved)) - good = true + return true } case uint64: if !out.OverflowUint(uint64(resolved)) { out.SetUint(uint64(resolved)) - good = true + return true } case float64: if resolved <= math.MaxUint64 && !out.OverflowUint(uint64(resolved)) { out.SetUint(uint64(resolved)) - good = true + return true } } case reflect.Bool: switch resolved := resolved.(type) { case bool: out.SetBool(resolved) - good = true + return true } case reflect.Float32, reflect.Float64: switch resolved := resolved.(type) { case int: out.SetFloat(float64(resolved)) - good = true + return true case int64: out.SetFloat(float64(resolved)) - good = true + return true case uint64: out.SetFloat(float64(resolved)) - good = true + return true case float64: out.SetFloat(resolved) - good = true + return true + } + case reflect.Struct: + if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() { + out.Set(resolvedv) + return true } case reflect.Ptr: if out.Type().Elem() == reflect.TypeOf(resolved) { @@ -458,13 +526,11 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) { elem := reflect.New(out.Type().Elem()) elem.Elem().Set(reflect.ValueOf(resolved)) out.Set(elem) - good = true + return true } } - if !good { - d.terror(n, tag, out) - } - return good + d.terror(n, tag, out) + return false } func settableValueOf(i interface{}) reflect.Value { @@ -481,6 +547,10 @@ func (d *decoder) sequence(n *node, out reflect.Value) (good bool) { switch out.Kind() { case reflect.Slice: out.Set(reflect.MakeSlice(out.Type(), l, l)) + case reflect.Array: + if l != out.Len() { + failf("invalid array: want %d elements but got %d", out.Len(), l) + } case reflect.Interface: // No type hints. Will have to use a generic sequence. 
iface = out @@ -499,7 +569,9 @@ func (d *decoder) sequence(n *node, out reflect.Value) (good bool) { j++ } } - out.Set(out.Slice(0, j)) + if out.Kind() != reflect.Array { + out.Set(out.Slice(0, j)) + } if iface.IsValid() { iface.Set(out) } @@ -560,7 +632,7 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) { } e := reflect.New(et).Elem() if d.unmarshal(n.children[i+1], e) { - out.SetMapIndex(k, e) + d.setMapIndex(n.children[i+1], out, k, e) } } } @@ -568,6 +640,14 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) { return true } +func (d *decoder) setMapIndex(n *node, out, k, v reflect.Value) { + if d.strict && out.MapIndex(k) != zeroValue { + d.terrors = append(d.terrors, fmt.Sprintf("line %d: key %#v already set in map", n.line+1, k.Interface())) + return + } + out.SetMapIndex(k, v) +} + func (d *decoder) mappingSlice(n *node, out reflect.Value) (good bool) { outt := out.Type() if outt.Elem() != mapItemType { @@ -615,6 +695,10 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) { elemType = inlineMap.Type().Elem() } + var doneFields []bool + if d.strict { + doneFields = make([]bool, len(sinfo.FieldsList)) + } for i := 0; i < l; i += 2 { ni := n.children[i] if isMerge(ni) { @@ -625,6 +709,13 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) { continue } if info, ok := sinfo.FieldsMap[name.String()]; ok { + if d.strict { + if doneFields[info.Id] { + d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s already set in type %s", ni.line+1, name.String(), out.Type())) + continue + } + doneFields[info.Id] = true + } var field reflect.Value if info.Inline == nil { field = out.Field(info.Num) @@ -638,7 +729,9 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) { } value := reflect.New(elemType).Elem() d.unmarshal(n.children[i+1], value) - inlineMap.SetMapIndex(name, value) + d.setMapIndex(n.children[i+1], inlineMap, name, value) + } else if d.strict { + d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s not found in type %s", ni.line+1, name.String(), out.Type())) } } return true diff --git a/vendor/gopkg.in/yaml.v2/emitterc.go b/vendor/gopkg.in/yaml.v2/emitterc.go index 6ecdcb3c7fe..a1c2cc52627 100644 --- a/vendor/gopkg.in/yaml.v2/emitterc.go +++ b/vendor/gopkg.in/yaml.v2/emitterc.go @@ -2,6 +2,7 @@ package yaml import ( "bytes" + "fmt" ) // Flush the buffer if needed. @@ -664,7 +665,7 @@ func yaml_emitter_emit_node(emitter *yaml_emitter_t, event *yaml_event_t, return yaml_emitter_emit_mapping_start(emitter, event) default: return yaml_emitter_set_emitter_error(emitter, - "expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS") + fmt.Sprintf("expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS, but got %v", event.typ)) } } @@ -842,7 +843,7 @@ func yaml_emitter_select_scalar_style(emitter *yaml_emitter_t, event *yaml_event return true } -// Write an achor. +// Write an anchor. 
func yaml_emitter_process_anchor(emitter *yaml_emitter_t) bool { if emitter.anchor_data.anchor == nil { return true @@ -994,10 +995,10 @@ func yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool { break_space = false space_break = false - preceeded_by_whitespace = false - followed_by_whitespace = false - previous_space = false - previous_break = false + preceded_by_whitespace = false + followed_by_whitespace = false + previous_space = false + previous_break = false ) emitter.scalar_data.value = value @@ -1016,7 +1017,7 @@ func yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool { flow_indicators = true } - preceeded_by_whitespace = true + preceded_by_whitespace = true for i, w := 0, 0; i < len(value); i += w { w = width(value[i]) followed_by_whitespace = i+w >= len(value) || is_blank(value, i+w) @@ -1047,7 +1048,7 @@ func yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool { block_indicators = true } case '#': - if preceeded_by_whitespace { + if preceded_by_whitespace { flow_indicators = true block_indicators = true } @@ -1088,7 +1089,7 @@ func yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool { } // [Go]: Why 'z'? Couldn't be the end of the string as that's the loop condition. - preceeded_by_whitespace = is_blankz(value, i) + preceded_by_whitespace = is_blankz(value, i) } emitter.scalar_data.multiline = line_breaks diff --git a/vendor/gopkg.in/yaml.v2/encode.go b/vendor/gopkg.in/yaml.v2/encode.go index 84f84995517..a14435e82f8 100644 --- a/vendor/gopkg.in/yaml.v2/encode.go +++ b/vendor/gopkg.in/yaml.v2/encode.go @@ -3,12 +3,14 @@ package yaml import ( "encoding" "fmt" + "io" "reflect" "regexp" "sort" "strconv" "strings" "time" + "unicode/utf8" ) type encoder struct { @@ -16,25 +18,39 @@ type encoder struct { event yaml_event_t out []byte flow bool + // doneInit holds whether the initial stream_start_event has been + // emitted. + doneInit bool } -func newEncoder() (e *encoder) { - e = &encoder{} - e.must(yaml_emitter_initialize(&e.emitter)) +func newEncoder() *encoder { + e := &encoder{} + yaml_emitter_initialize(&e.emitter) yaml_emitter_set_output_string(&e.emitter, &e.out) yaml_emitter_set_unicode(&e.emitter, true) - e.must(yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING)) - e.emit() - e.must(yaml_document_start_event_initialize(&e.event, nil, nil, true)) - e.emit() return e } -func (e *encoder) finish() { - e.must(yaml_document_end_event_initialize(&e.event, true)) +func newEncoderWithWriter(w io.Writer) *encoder { + e := &encoder{} + yaml_emitter_initialize(&e.emitter) + yaml_emitter_set_output_writer(&e.emitter, w) + yaml_emitter_set_unicode(&e.emitter, true) + return e +} + +func (e *encoder) init() { + if e.doneInit { + return + } + yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING) e.emit() + e.doneInit = true +} + +func (e *encoder) finish() { e.emitter.open_ended = false - e.must(yaml_stream_end_event_initialize(&e.event)) + yaml_stream_end_event_initialize(&e.event) e.emit() } @@ -44,9 +60,7 @@ func (e *encoder) destroy() { func (e *encoder) emit() { // This will internally delete the e.event value. 
- if !yaml_emitter_emit(&e.emitter, &e.event) && e.event.typ != yaml_DOCUMENT_END_EVENT && e.event.typ != yaml_STREAM_END_EVENT { - e.must(false) - } + e.must(yaml_emitter_emit(&e.emitter, &e.event)) } func (e *encoder) must(ok bool) { @@ -59,13 +73,28 @@ func (e *encoder) must(ok bool) { } } +func (e *encoder) marshalDoc(tag string, in reflect.Value) { + e.init() + yaml_document_start_event_initialize(&e.event, nil, nil, true) + e.emit() + e.marshal(tag, in) + yaml_document_end_event_initialize(&e.event, true) + e.emit() +} + func (e *encoder) marshal(tag string, in reflect.Value) { - if !in.IsValid() { + if !in.IsValid() || in.Kind() == reflect.Ptr && in.IsNil() { e.nilv() return } iface := in.Interface() - if m, ok := iface.(Marshaler); ok { + switch m := iface.(type) { + case time.Time, *time.Time: + // Although time.Time implements TextMarshaler, + // we don't want to treat it as a string for YAML + // purposes because YAML has special support for + // timestamps. + case Marshaler: v, err := m.MarshalYAML() if err != nil { fail(err) @@ -75,31 +104,34 @@ func (e *encoder) marshal(tag string, in reflect.Value) { return } in = reflect.ValueOf(v) - } else if m, ok := iface.(encoding.TextMarshaler); ok { + case encoding.TextMarshaler: text, err := m.MarshalText() if err != nil { fail(err) } in = reflect.ValueOf(string(text)) + case nil: + e.nilv() + return } switch in.Kind() { case reflect.Interface: - if in.IsNil() { - e.nilv() - } else { - e.marshal(tag, in.Elem()) - } + e.marshal(tag, in.Elem()) case reflect.Map: e.mapv(tag, in) case reflect.Ptr: - if in.IsNil() { - e.nilv() + if in.Type() == ptrTimeType { + e.timev(tag, in.Elem()) } else { e.marshal(tag, in.Elem()) } case reflect.Struct: - e.structv(tag, in) - case reflect.Slice: + if in.Type() == timeType { + e.timev(tag, in) + } else { + e.structv(tag, in) + } + case reflect.Slice, reflect.Array: if in.Type().Elem() == mapItemType { e.itemsv(tag, in) } else { @@ -191,10 +223,10 @@ func (e *encoder) mappingv(tag string, f func()) { e.flow = false style = yaml_FLOW_MAPPING_STYLE } - e.must(yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style)) + yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style) e.emit() f() - e.must(yaml_mapping_end_event_initialize(&e.event)) + yaml_mapping_end_event_initialize(&e.event) e.emit() } @@ -240,23 +272,36 @@ var base60float = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\.[0 func (e *encoder) stringv(tag string, in reflect.Value) { var style yaml_scalar_style_t s := in.String() - rtag, rs := resolve("", s) - if rtag == yaml_BINARY_TAG { - if tag == "" || tag == yaml_STR_TAG { - tag = rtag - s = rs.(string) - } else if tag == yaml_BINARY_TAG { + canUsePlain := true + switch { + case !utf8.ValidString(s): + if tag == yaml_BINARY_TAG { failf("explicitly tagged !!binary data must be base64-encoded") - } else { + } + if tag != "" { failf("cannot marshal invalid UTF-8 data as %s", shortTag(tag)) } + // It can't be encoded directly as YAML so use a binary tag + // and encode it as base64. + tag = yaml_BINARY_TAG + s = encodeBase64(s) + case tag == "": + // Check to see if it would resolve to a specific + // tag when encoded unquoted. If it doesn't, + // there's no need to quote it. 
+ rtag, _ := resolve("", s) + canUsePlain = rtag == yaml_STR_TAG && !isBase60Float(s) } - if tag == "" && (rtag != yaml_STR_TAG || isBase60Float(s)) { - style = yaml_DOUBLE_QUOTED_SCALAR_STYLE - } else if strings.Contains(s, "\n") { + // Note: it's possible for user code to emit invalid YAML + // if they explicitly specify a tag and a string containing + // text that's incompatible with that tag. + switch { + case strings.Contains(s, "\n"): style = yaml_LITERAL_SCALAR_STYLE - } else { + case canUsePlain: style = yaml_PLAIN_SCALAR_STYLE + default: + style = yaml_DOUBLE_QUOTED_SCALAR_STYLE } e.emitScalar(s, "", tag, style) } @@ -281,9 +326,20 @@ func (e *encoder) uintv(tag string, in reflect.Value) { e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE) } +func (e *encoder) timev(tag string, in reflect.Value) { + t := in.Interface().(time.Time) + s := t.Format(time.RFC3339Nano) + e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE) +} + func (e *encoder) floatv(tag string, in reflect.Value) { - // FIXME: Handle 64 bits here. - s := strconv.FormatFloat(float64(in.Float()), 'g', -1, 32) + // Issue #352: When formatting, use the precision of the underlying value + precision := 64 + if in.Kind() == reflect.Float32 { + precision = 32 + } + + s := strconv.FormatFloat(in.Float(), 'g', -1, precision) switch s { case "+Inf": s = ".inf" diff --git a/vendor/gopkg.in/yaml.v2/go.mod b/vendor/gopkg.in/yaml.v2/go.mod new file mode 100644 index 00000000000..1934e876945 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/go.mod @@ -0,0 +1,5 @@ +module "gopkg.in/yaml.v2" + +require ( + "gopkg.in/check.v1" v0.0.0-20161208181325-20d25e280405 +) diff --git a/vendor/gopkg.in/yaml.v2/readerc.go b/vendor/gopkg.in/yaml.v2/readerc.go index f450791717b..7c1f5fac3db 100644 --- a/vendor/gopkg.in/yaml.v2/readerc.go +++ b/vendor/gopkg.in/yaml.v2/readerc.go @@ -93,9 +93,18 @@ func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool { panic("read handler must be set") } + // [Go] This function was changed to guarantee the requested length size at EOF. + // The fact we need to do this is pretty awful, but the description above implies + // for that to be the case, and there are tests + // If the EOF flag is set and the raw buffer is empty, do nothing. if parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) { - return true + // [Go] ACTUALLY! Read the documentation of this function above. + // This is just broken. To return true, we need to have the + // given length in the buffer. Not doing that means every single + // check that calls this function to make sure the buffer has a + // given length is Go) panicking; or C) accessing invalid memory. + //return true } // Return if the buffer contains enough characters. @@ -389,6 +398,15 @@ func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool { break } } + // [Go] Read the documentation of this function above. To return true, + // we need to have the given length in the buffer. Not doing that means + // every single check that calls this function to make sure the buffer + // has a given length is Go) panicking; or C) accessing invalid memory. + // This happens here due to the EOF above breaking early. 
+ for buffer_len < length { + parser.buffer[buffer_len] = 0 + buffer_len++ + } parser.buffer = parser.buffer[:buffer_len] return true } diff --git a/vendor/gopkg.in/yaml.v2/resolve.go b/vendor/gopkg.in/yaml.v2/resolve.go index 232313cc084..6c151db6fbd 100644 --- a/vendor/gopkg.in/yaml.v2/resolve.go +++ b/vendor/gopkg.in/yaml.v2/resolve.go @@ -6,7 +6,7 @@ import ( "regexp" "strconv" "strings" - "unicode/utf8" + "time" ) type resolveMapItem struct { @@ -75,7 +75,7 @@ func longTag(tag string) string { func resolvableTag(tag string) bool { switch tag { - case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG: + case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG, yaml_TIMESTAMP_TAG: return true } return false @@ -92,6 +92,19 @@ func resolve(tag string, in string) (rtag string, out interface{}) { switch tag { case "", rtag, yaml_STR_TAG, yaml_BINARY_TAG: return + case yaml_FLOAT_TAG: + if rtag == yaml_INT_TAG { + switch v := out.(type) { + case int64: + rtag = yaml_FLOAT_TAG + out = float64(v) + return + case int: + rtag = yaml_FLOAT_TAG + out = float64(v) + return + } + } } failf("cannot decode %s `%s` as a %s", shortTag(rtag), in, shortTag(tag)) }() @@ -125,6 +138,15 @@ func resolve(tag string, in string) (rtag string, out interface{}) { case 'D', 'S': // Int, float, or timestamp. + // Only try values as a timestamp if the value is unquoted or there's an explicit + // !!timestamp tag. + if tag == "" || tag == yaml_TIMESTAMP_TAG { + t, ok := parseTimestamp(in) + if ok { + return yaml_TIMESTAMP_TAG, t + } + } + plain := strings.Replace(in, "_", "", -1) intv, err := strconv.ParseInt(plain, 0, 64) if err == nil { @@ -158,28 +180,20 @@ func resolve(tag string, in string) (rtag string, out interface{}) { return yaml_INT_TAG, uintv } } else if strings.HasPrefix(plain, "-0b") { - intv, err := strconv.ParseInt(plain[3:], 2, 64) + intv, err := strconv.ParseInt("-" + plain[3:], 2, 64) if err == nil { - if intv == int64(int(intv)) { - return yaml_INT_TAG, -int(intv) + if true || intv == int64(int(intv)) { + return yaml_INT_TAG, int(intv) } else { - return yaml_INT_TAG, -intv + return yaml_INT_TAG, intv } } } - // XXX Handle timestamps here. - default: panic("resolveTable item not yet handled: " + string(rune(hint)) + " (with " + in + ")") } } - if tag == yaml_BINARY_TAG { - return yaml_BINARY_TAG, in - } - if utf8.ValidString(in) { - return yaml_STR_TAG, in - } - return yaml_BINARY_TAG, encodeBase64(in) + return yaml_STR_TAG, in } // encodeBase64 encodes s as base64 that is broken up into multiple lines @@ -206,3 +220,39 @@ func encodeBase64(s string) string { } return string(out[:k]) } + +// This is a subset of the formats allowed by the regular expression +// defined at http://yaml.org/type/timestamp.html. +var allowedTimestampFormats = []string{ + "2006-1-2T15:4:5.999999999Z07:00", // RCF3339Nano with short date fields. + "2006-1-2t15:4:5.999999999Z07:00", // RFC3339Nano with short date fields and lower-case "t". + "2006-1-2 15:4:5.999999999", // space separated with no time zone + "2006-1-2", // date only + // Notable exception: time.Parse cannot handle: "2001-12-14 21:59:43.10 -5" + // from the set of examples. +} + +// parseTimestamp parses s as a timestamp string and +// returns the timestamp and reports whether it succeeded. 
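As an illustrative aside on the timestamp support being added in resolve.go, decode.go and encode.go (this sketch is not part of the vendored diff; the `Event` type, field names and sample YAML are invented for the example): unquoted scalars matching the formats listed above decode directly into `time.Time` fields, decoding into `interface{}` still yields the original string for backward compatibility, and `time.Time` values marshal as RFC3339Nano-style plain scalars.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"gopkg.in/yaml.v2"
)

// Event is a hypothetical target type for the example.
type Event struct {
	Created time.Time `yaml:"created"`
}

func main() {
	data := []byte("created: 2018-01-09T10:40:47Z\n")

	// Into a time.Time field, the scalar is parsed as a timestamp.
	var e Event
	if err := yaml.Unmarshal(data, &e); err != nil {
		log.Fatal(err)
	}
	fmt.Println(e.Created.UTC())

	// Into interface{}, the same scalar stays a string for backward compatibility.
	var generic map[string]interface{}
	if err := yaml.Unmarshal(data, &generic); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%T\n", generic["created"]) // string

	// time.Time marshals back out in RFC3339Nano form.
	out, err := yaml.Marshal(e)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // created: 2018-01-09T10:40:47Z
}
```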
+// Timestamp formats are defined at http://yaml.org/type/timestamp.html +func parseTimestamp(s string) (time.Time, bool) { + // TODO write code to check all the formats supported by + // http://yaml.org/type/timestamp.html instead of using time.Parse. + + // Quick check: all date formats start with YYYY-. + i := 0 + for ; i < len(s); i++ { + if c := s[i]; c < '0' || c > '9' { + break + } + } + if i != 4 || i == len(s) || s[i] != '-' { + return time.Time{}, false + } + for _, format := range allowedTimestampFormats { + if t, err := time.Parse(format, s); err == nil { + return t, true + } + } + return time.Time{}, false +} diff --git a/vendor/gopkg.in/yaml.v2/scannerc.go b/vendor/gopkg.in/yaml.v2/scannerc.go index 2c9d5111f95..077fd1dd2d4 100644 --- a/vendor/gopkg.in/yaml.v2/scannerc.go +++ b/vendor/gopkg.in/yaml.v2/scannerc.go @@ -611,7 +611,7 @@ func yaml_parser_set_scanner_tag_error(parser *yaml_parser_t, directive bool, co if directive { context = "while parsing a %TAG directive" } - return yaml_parser_set_scanner_error(parser, context, context_mark, "did not find URI escaped octet") + return yaml_parser_set_scanner_error(parser, context, context_mark, problem) } func trace(args ...interface{}) func() { @@ -871,12 +871,6 @@ func yaml_parser_save_simple_key(parser *yaml_parser_t) bool { required := parser.flow_level == 0 && parser.indent == parser.mark.column - // A simple key is required only when it is the first token in the current - // line. Therefore it is always allowed. But we add a check anyway. - if required && !parser.simple_key_allowed { - panic("should not happen") - } - // // If the current position may start a simple key, save it. // @@ -1944,7 +1938,7 @@ func yaml_parser_scan_tag_handle(parser *yaml_parser_t, directive bool, start_ma } else { // It's either the '!' tag or not really a tag handle. If it's a %TAG // directive, it's an error. If it's a tag token, it must be a part of URI. - if directive && !(s[0] == '!' && s[1] == 0) { + if directive && string(s) != "!" { yaml_parser_set_scanner_tag_error(parser, directive, start_mark, "did not find expected '!'") return false @@ -1959,6 +1953,7 @@ func yaml_parser_scan_tag_handle(parser *yaml_parser_t, directive bool, start_ma func yaml_parser_scan_tag_uri(parser *yaml_parser_t, directive bool, head []byte, start_mark yaml_mark_t, uri *[]byte) bool { //size_t length = head ? strlen((char *)head) : 0 var s []byte + hasTag := len(head) > 0 // Copy the head if needed. // @@ -2000,10 +1995,10 @@ func yaml_parser_scan_tag_uri(parser *yaml_parser_t, directive bool, head []byte if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { return false } + hasTag = true } - // Check if the tag is non-empty. - if len(s) == 0 { + if !hasTag { yaml_parser_set_scanner_tag_error(parser, directive, start_mark, "did not find expected tag URI") return false @@ -2474,6 +2469,10 @@ func yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, si } } + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + // Check if we are at the end of the scalar. if single { if parser.buffer[parser.buffer_pos] == '\'' { @@ -2486,10 +2485,6 @@ func yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, si } // Consume blank characters. 
- if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { - return false - } - for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) { if is_blank(parser.buffer, parser.buffer_pos) { // Consume a space or a tab character. @@ -2591,19 +2586,10 @@ func yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) b // Consume non-blank characters. for !is_blankz(parser.buffer, parser.buffer_pos) { - // Check for 'x:x' in the flow context. TODO: Fix the test "spec-08-13". - if parser.flow_level > 0 && - parser.buffer[parser.buffer_pos] == ':' && - !is_blankz(parser.buffer, parser.buffer_pos+1) { - yaml_parser_set_scanner_error(parser, "while scanning a plain scalar", - start_mark, "found unexpected ':'") - return false - } - // Check for indicators that may end a plain scalar. if (parser.buffer[parser.buffer_pos] == ':' && is_blankz(parser.buffer, parser.buffer_pos+1)) || (parser.flow_level > 0 && - (parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == ':' || + (parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == '[' || parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' || parser.buffer[parser.buffer_pos] == '}')) { @@ -2655,10 +2641,10 @@ func yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) b for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) { if is_blank(parser.buffer, parser.buffer_pos) { - // Check for tab character that abuse indentation. + // Check for tab characters that abuse indentation. if leading_blanks && parser.mark.column < indent && is_tab(parser.buffer, parser.buffer_pos) { yaml_parser_set_scanner_error(parser, "while scanning a plain scalar", - start_mark, "found a tab character that violate indentation") + start_mark, "found a tab character that violates indentation") return false } diff --git a/vendor/gopkg.in/yaml.v2/sorter.go b/vendor/gopkg.in/yaml.v2/sorter.go index 5958822f9c6..4c45e660a8f 100644 --- a/vendor/gopkg.in/yaml.v2/sorter.go +++ b/vendor/gopkg.in/yaml.v2/sorter.go @@ -51,6 +51,15 @@ func (l keyList) Less(i, j int) bool { } var ai, bi int var an, bn int64 + if ar[i] == '0' || br[i] == '0' { + for j := i-1; j >= 0 && unicode.IsDigit(ar[j]); j-- { + if ar[j] != '0' { + an = 1 + bn = 1 + break + } + } + } for ai = i; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ { an = an*10 + int64(ar[ai]-'0') } diff --git a/vendor/gopkg.in/yaml.v2/writerc.go b/vendor/gopkg.in/yaml.v2/writerc.go index 190362f25df..a2dde608cb7 100644 --- a/vendor/gopkg.in/yaml.v2/writerc.go +++ b/vendor/gopkg.in/yaml.v2/writerc.go @@ -18,72 +18,9 @@ func yaml_emitter_flush(emitter *yaml_emitter_t) bool { return true } - // If the output encoding is UTF-8, we don't need to recode the buffer. - if emitter.encoding == yaml_UTF8_ENCODING { - if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil { - return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error()) - } - emitter.buffer_pos = 0 - return true - } - - // Recode the buffer into the raw buffer. - var low, high int - if emitter.encoding == yaml_UTF16LE_ENCODING { - low, high = 0, 1 - } else { - high, low = 1, 0 - } - - pos := 0 - for pos < emitter.buffer_pos { - // See the "reader.c" code for more details on UTF-8 encoding. Note - // that we assume that the buffer contains a valid UTF-8 sequence. - - // Read the next UTF-8 character. 
- octet := emitter.buffer[pos] - - var w int - var value rune - switch { - case octet&0x80 == 0x00: - w, value = 1, rune(octet&0x7F) - case octet&0xE0 == 0xC0: - w, value = 2, rune(octet&0x1F) - case octet&0xF0 == 0xE0: - w, value = 3, rune(octet&0x0F) - case octet&0xF8 == 0xF0: - w, value = 4, rune(octet&0x07) - } - for k := 1; k < w; k++ { - octet = emitter.buffer[pos+k] - value = (value << 6) + (rune(octet) & 0x3F) - } - pos += w - - // Write the character. - if value < 0x10000 { - var b [2]byte - b[high] = byte(value >> 8) - b[low] = byte(value & 0xFF) - emitter.raw_buffer = append(emitter.raw_buffer, b[0], b[1]) - } else { - // Write the character using a surrogate pair (check "reader.c"). - var b [4]byte - value -= 0x10000 - b[high] = byte(0xD8 + (value >> 18)) - b[low] = byte((value >> 10) & 0xFF) - b[high+2] = byte(0xDC + ((value >> 8) & 0xFF)) - b[low+2] = byte(value & 0xFF) - emitter.raw_buffer = append(emitter.raw_buffer, b[0], b[1], b[2], b[3]) - } - } - - // Write the raw buffer. - if err := emitter.write_handler(emitter, emitter.raw_buffer); err != nil { + if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil { return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error()) } emitter.buffer_pos = 0 - emitter.raw_buffer = emitter.raw_buffer[:0] return true } diff --git a/vendor/gopkg.in/yaml.v2/yaml.go b/vendor/gopkg.in/yaml.v2/yaml.go index 36d6b883a6c..de85aa4cdb7 100644 --- a/vendor/gopkg.in/yaml.v2/yaml.go +++ b/vendor/gopkg.in/yaml.v2/yaml.go @@ -9,6 +9,7 @@ package yaml import ( "errors" "fmt" + "io" "reflect" "strings" "sync" @@ -77,8 +78,65 @@ type Marshaler interface { // supported tag options. // func Unmarshal(in []byte, out interface{}) (err error) { + return unmarshal(in, out, false) +} + +// UnmarshalStrict is like Unmarshal except that any fields that are found +// in the data that do not have corresponding struct members, or mapping +// keys that are duplicates, will result in +// an error. +func UnmarshalStrict(in []byte, out interface{}) (err error) { + return unmarshal(in, out, true) +} + +// A Decorder reads and decodes YAML values from an input stream. +type Decoder struct { + strict bool + parser *parser +} + +// NewDecoder returns a new decoder that reads from r. +// +// The decoder introduces its own buffering and may read +// data from r beyond the YAML values requested. +func NewDecoder(r io.Reader) *Decoder { + return &Decoder{ + parser: newParserFromReader(r), + } +} + +// SetStrict sets whether strict decoding behaviour is enabled when +// decoding items in the data (see UnmarshalStrict). By default, decoding is not strict. +func (dec *Decoder) SetStrict(strict bool) { + dec.strict = strict +} + +// Decode reads the next YAML-encoded value from its input +// and stores it in the value pointed to by v. +// +// See the documentation for Unmarshal for details about the +// conversion of YAML into a Go value. 
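The yaml.go hunks above add strict decoding: `UnmarshalStrict` (and `Decoder.SetStrict`) reports unknown struct fields and duplicate mapping keys instead of silently dropping them. A minimal, illustrative sketch of the difference; the `Config` type, its fields and the YAML input are made up for the example:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v2"
)

// Config is a hypothetical target type for the example.
type Config struct {
	Name string `yaml:"name"`
	Port int    `yaml:"port"`
}

func main() {
	// "portt" does not correspond to any field of Config.
	data := []byte("name: web\nportt: 8080\n")

	var lenient Config
	if err := yaml.Unmarshal(data, &lenient); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", lenient) // the unknown key is silently ignored

	var strict Config
	if err := yaml.UnmarshalStrict(data, &strict); err != nil {
		fmt.Println(err) // strict mode reports the unrecognized "portt" key
	}
}
```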
+func (dec *Decoder) Decode(v interface{}) (err error) { + d := newDecoder(dec.strict) + defer handleErr(&err) + node := dec.parser.parse() + if node == nil { + return io.EOF + } + out := reflect.ValueOf(v) + if out.Kind() == reflect.Ptr && !out.IsNil() { + out = out.Elem() + } + d.unmarshal(node, out) + if len(d.terrors) > 0 { + return &TypeError{d.terrors} + } + return nil +} + +func unmarshal(in []byte, out interface{}, strict bool) (err error) { defer handleErr(&err) - d := newDecoder() + d := newDecoder(strict) p := newParser(in) defer p.destroy() node := p.parse() @@ -99,8 +157,8 @@ func Unmarshal(in []byte, out interface{}) (err error) { // of the generated document will reflect the structure of the value itself. // Maps and pointers (to struct, string, int, etc) are accepted as the in value. // -// Struct fields are only unmarshalled if they are exported (have an upper case -// first letter), and are unmarshalled using the field name lowercased as the +// Struct fields are only marshalled if they are exported (have an upper case +// first letter), and are marshalled using the field name lowercased as the // default key. Custom keys may be defined via the "yaml" name in the field // tag: the content preceding the first comma is used as the key, and the // following comma-separated options are used to tweak the marshalling process. @@ -114,7 +172,10 @@ func Unmarshal(in []byte, out interface{}) (err error) { // // omitempty Only include the field if it's not set to the zero // value for the type or to empty slices or maps. -// Does not apply to zero valued structs. +// Zero valued structs will be omitted if all their public +// fields are zero, unless they implement an IsZero +// method (see the IsZeroer interface type), in which +// case the field will be included if that method returns true. // // flow Marshal using a flow style (useful for structs, // sequences and maps). @@ -129,7 +190,7 @@ func Unmarshal(in []byte, out interface{}) (err error) { // For example: // // type T struct { -// F int "a,omitempty" +// F int `yaml:"a,omitempty"` // B int // } // yaml.Marshal(&T{B: 2}) // Returns "b: 2\n" @@ -139,12 +200,47 @@ func Marshal(in interface{}) (out []byte, err error) { defer handleErr(&err) e := newEncoder() defer e.destroy() - e.marshal("", reflect.ValueOf(in)) + e.marshalDoc("", reflect.ValueOf(in)) e.finish() out = e.out return } +// An Encoder writes YAML values to an output stream. +type Encoder struct { + encoder *encoder +} + +// NewEncoder returns a new encoder that writes to w. +// The Encoder should be closed after use to flush all data +// to w. +func NewEncoder(w io.Writer) *Encoder { + return &Encoder{ + encoder: newEncoderWithWriter(w), + } +} + +// Encode writes the YAML encoding of v to the stream. +// If multiple items are encoded to the stream, the +// second and subsequent document will be preceded +// with a "---" document separator, but the first will not. +// +// See the documentation for Marshal for details about the conversion of Go +// values to YAML. +func (e *Encoder) Encode(v interface{}) (err error) { + defer handleErr(&err) + e.encoder.marshalDoc("", reflect.ValueOf(v)) + return nil +} + +// Close closes the encoder by writing any remaining data. +// It does not write a stream terminating string "...". 
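For reference, a short sketch of the streaming API documented above (illustrative only, not part of the diff): `NewDecoder` reads successive documents from an `io.Reader` until `Decode` returns `io.EOF`, while `NewEncoder` writes documents to an `io.Writer`, preceding the second and later ones with a `---` separator; `Close` flushes the emitter.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"strings"

	"gopkg.in/yaml.v2"
)

func main() {
	// Decode a multi-document stream until io.EOF.
	dec := yaml.NewDecoder(strings.NewReader("a: 1\n---\na: 2\n"))
	for {
		var doc map[string]int
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(doc["a"]) // 1, then 2
	}

	// Encode two documents into a buffer; the second is preceded by "---".
	var buf bytes.Buffer
	enc := yaml.NewEncoder(&buf)
	for _, v := range []int{1, 2} {
		if err := enc.Encode(map[string]int{"a": v}); err != nil {
			log.Fatal(err)
		}
	}
	if err := enc.Close(); err != nil {
		log.Fatal(err)
	}
	fmt.Print(buf.String())
}
```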
+func (e *Encoder) Close() (err error) { + defer handleErr(&err) + e.encoder.finish() + return nil +} + func handleErr(err *error) { if v := recover(); v != nil { if e, ok := v.(yamlError); ok { @@ -200,6 +296,9 @@ type fieldInfo struct { Num int OmitEmpty bool Flow bool + // Id holds the unique field identifier, so we can cheaply + // check for field duplicates without maintaining an extra map. + Id int // Inline holds the field index if the field is part of an inlined struct. Inline []int @@ -279,6 +378,7 @@ func getStructInfo(st reflect.Type) (*structInfo, error) { } else { finfo.Inline = append([]int{i}, finfo.Inline...) } + finfo.Id = len(fieldsList) fieldsMap[finfo.Key] = finfo fieldsList = append(fieldsList, finfo) } @@ -300,11 +400,16 @@ func getStructInfo(st reflect.Type) (*structInfo, error) { return nil, errors.New(msg) } + info.Id = len(fieldsList) fieldsList = append(fieldsList, info) fieldsMap[info.Key] = info } - sinfo = &structInfo{fieldsMap, fieldsList, inlineMap} + sinfo = &structInfo{ + FieldsMap: fieldsMap, + FieldsList: fieldsList, + InlineMap: inlineMap, + } fieldMapMutex.Lock() structMap[st] = sinfo @@ -312,8 +417,23 @@ func getStructInfo(st reflect.Type) (*structInfo, error) { return sinfo, nil } +// IsZeroer is used to check whether an object is zero to +// determine whether it should be omitted when marshaling +// with the omitempty flag. One notable implementation +// is time.Time. +type IsZeroer interface { + IsZero() bool +} + func isZero(v reflect.Value) bool { - switch v.Kind() { + kind := v.Kind() + if z, ok := v.Interface().(IsZeroer); ok { + if (kind == reflect.Ptr || kind == reflect.Interface) && v.IsNil() { + return true + } + return z.IsZero() + } + switch kind { case reflect.String: return len(v.String()) == 0 case reflect.Interface, reflect.Ptr: diff --git a/vendor/gopkg.in/yaml.v2/yamlh.go b/vendor/gopkg.in/yaml.v2/yamlh.go index d60a6b6b003..e25cee563be 100644 --- a/vendor/gopkg.in/yaml.v2/yamlh.go +++ b/vendor/gopkg.in/yaml.v2/yamlh.go @@ -1,6 +1,7 @@ package yaml import ( + "fmt" "io" ) @@ -239,6 +240,27 @@ const ( yaml_MAPPING_END_EVENT // A MAPPING-END event. ) +var eventStrings = []string{ + yaml_NO_EVENT: "none", + yaml_STREAM_START_EVENT: "stream start", + yaml_STREAM_END_EVENT: "stream end", + yaml_DOCUMENT_START_EVENT: "document start", + yaml_DOCUMENT_END_EVENT: "document end", + yaml_ALIAS_EVENT: "alias", + yaml_SCALAR_EVENT: "scalar", + yaml_SEQUENCE_START_EVENT: "sequence start", + yaml_SEQUENCE_END_EVENT: "sequence end", + yaml_MAPPING_START_EVENT: "mapping start", + yaml_MAPPING_END_EVENT: "mapping end", +} + +func (e yaml_event_type_t) String() string { + if e < 0 || int(e) >= len(eventStrings) { + return fmt.Sprintf("unknown event %d", e) + } + return eventStrings[e] +} + // The event structure. type yaml_event_t struct { @@ -508,7 +530,7 @@ type yaml_parser_t struct { problem string // Error description. - // The byte about which the problem occured. + // The byte about which the problem occurred. problem_offset int problem_value int problem_mark yaml_mark_t @@ -521,9 +543,9 @@ type yaml_parser_t struct { read_handler yaml_read_handler_t // Read handler. - input_file io.Reader // File input data. - input []byte // String input data. - input_pos int + input_reader io.Reader // File input data. + input []byte // String input data. + input_pos int eof bool // EOF flag @@ -632,7 +654,7 @@ type yaml_emitter_t struct { write_handler yaml_write_handler_t // Write handler. output_buffer *[]byte // String output data. 
- output_file io.Writer // File output data. + output_writer io.Writer // File output data. buffer []byte // The working buffer. buffer_pos int // The current position of the buffer. diff --git a/vendor/vendor.json b/vendor/vendor.json index f4dbca1794c..dbbb1c253c1 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -6,7 +6,9 @@ "checksumSHA1": "jQh1fnoKPKMURvKkpdRjN695nAQ=", "path": "github.com/agext/levenshtein", "revision": "5f10fee965225ac1eecdc234c09daf5cd9e7f7b6", - "revisionTime": "2017-02-17T06:30:20Z" + "revisionTime": "2017-02-17T06:30:20Z", + "version": "v1.2.1", + "versionExact": "v1.2.1" }, { "checksumSHA1": "LLVyR2dAgkihu0+HdZF+JK0gMMs=", @@ -21,908 +23,1092 @@ "revisionTime": "2015-08-30T18:26:16Z" }, { - "checksumSHA1": "FIL83loX9V9APvGQIjJpbxq53F0=", + "checksumSHA1": "nRnZ35uyYct3TL95z7DPJ/lSUNg=", "path": "github.com/apparentlymart/go-cidr/cidr", - "revision": "7e4b007599d4e2076d9a81be723b3912852dda2c", - "revisionTime": "2017-04-18T07:21:50Z" + "revision": "b1115bf8e14a60131a196f908223e4506b0ddc35", + "revisionTime": "2018-08-15T15:04:34Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { "checksumSHA1": "Ffhtm8iHH7l2ynVVOIGJE3eiuLA=", "path": "github.com/apparentlymart/go-textseg/textseg", - "revision": "b836f5c4d331d1945a2fead7188db25432d73b69", - "revisionTime": "2017-05-31T20:39:52Z" + "revision": "fb01f485ebef760e5ee06d55e1b07534dda2d295", + "revisionTime": "2018-08-15T02:43:44Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { - "checksumSHA1": "GCTVJ1J/SGZstNZauuLAnTFOhGA=", + "checksumSHA1": "ZWu+JUivE3zS4FZvVktFAWhj3FQ=", "path": "github.com/armon/go-radix", - "revision": "1fca145dffbcaa8fe914309b1ec0cfc67500fe61", - "revisionTime": "2017-07-27T15:54:43Z" + "revision": "1a2de0c21c94309923825da3df33a4381872c795", + "revisionTime": "2018-08-24T02:57:28Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { - "checksumSHA1": "q4osc1RWwXEKwVApYRUuSwMdxkU=", + "checksumSHA1": "+utPu9mmoleGbB7s99E3Qnp82RI=", "path": "github.com/aws/aws-sdk-go/aws", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { "checksumSHA1": "DtuTqKH29YnLjrIJkRYX0HQtXY0=", "path": "github.com/aws/aws-sdk-go/aws/arn", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { "checksumSHA1": "Y9W+4GimK4Fuxq+vyIskVYFRnX4=", "path": "github.com/aws/aws-sdk-go/aws/awserr", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { "checksumSHA1": "yyYr41HZ1Aq0hWc3J5ijXwYEcac=", "path": "github.com/aws/aws-sdk-go/aws/awsutil", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": 
"2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "qPamnqpz6piw78HJjvluOegQfGE=", + "checksumSHA1": "EwL79Cq6euk+EV/t/n2E+jzPNmU=", "path": "github.com/aws/aws-sdk-go/aws/client", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "ieAJ+Cvp/PKv1LpUEnUXpc3OI6E=", + "checksumSHA1": "uEJU4I6dTKaraQKvrljlYKUZwoc=", "path": "github.com/aws/aws-sdk-go/aws/client/metadata", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { "checksumSHA1": "vVSUnICaD9IaBQisCfw0n8zLwig=", "path": "github.com/aws/aws-sdk-go/aws/corehandlers", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "Y+cPwQL0dZMyqp3wI+KJWmA9KQ8=", + "checksumSHA1": "21pBkDFjY5sDY1rAW+f8dDPcWhk=", "path": "github.com/aws/aws-sdk-go/aws/credentials", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "u3GOAJLmdvbuNUeUEcZSEAOeL/0=", + "checksumSHA1": "JTilCBYWVAfhbKSnrxCNhE8IFns=", "path": "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "NUJUTWlc1sV8b7WjfiYc4JZbXl0=", + "checksumSHA1": "1pENtl2K9hG7qoB7R6J7dAHa82g=", "path": "github.com/aws/aws-sdk-go/aws/credentials/endpointcreds", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { "checksumSHA1": "JEYqmF83O5n5bHkupAzA6STm0no=", "path": "github.com/aws/aws-sdk-go/aws/credentials/stscreds", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "HogDW/Wu6ZKTDrQ2HSzG8/vj9Ag=", + "checksumSHA1": "UX3qPZyIaXL5p8mFCVYSDve6isk=", + "path": "github.com/aws/aws-sdk-go/aws/crr", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + 
"revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "KZylhHa5CQP8deDHphHMU2tUr3o=", + "path": "github.com/aws/aws-sdk-go/aws/csm", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "7AmyyJXVkMdmy8dphC3Nalx5XkI=", "path": "github.com/aws/aws-sdk-go/aws/defaults", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "pDnK93CqjQ4ROSW8Y/RuHXjv52M=", + "checksumSHA1": "mYqgKOMSGvLmrt0CoBNbqdcTM3c=", "path": "github.com/aws/aws-sdk-go/aws/ec2metadata", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "YAa4mUoL6J81yCT+QzeWF3EEo8A=", + "checksumSHA1": "mgrPYvlQg++swrAt4sK+OEFSAgQ=", "path": "github.com/aws/aws-sdk-go/aws/endpoints", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "xXfBg4kOE6+Z4sABLqdR49OlSNA=", + "checksumSHA1": "JQpL1G6Z8ri4zsuqzQTQK9YUcKw=", "path": "github.com/aws/aws-sdk-go/aws/request", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "NSUxDbxBtCToRUoMRUKn61WJWmE=", + "checksumSHA1": "8ATKRj627SHY8OCliOMYJGkNhGA=", "path": "github.com/aws/aws-sdk-go/aws/session", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "BqDC+Sxo3hrC6WYvwx1U4OObaT4=", + "checksumSHA1": "NI5Qu/tfh4S4st2RsI7W8Fces9Q=", "path": "github.com/aws/aws-sdk-go/aws/signer/v4", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "3A0q2ZxyOnQN77dQV0AEpVv9HPY=", + "path": "github.com/aws/aws-sdk-go/internal/ini", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "QvKGojx+wCHTDfXQ1aoOYzH3Y88=", + "path": "github.com/aws/aws-sdk-go/internal/s3err", + "revision": 
"f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { "checksumSHA1": "wjxQlU1PYxrDRFoL1Vek8Wch7jk=", "path": "github.com/aws/aws-sdk-go/internal/sdkio", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { "checksumSHA1": "MYLldFRnsZh21TfCkgkXCT3maPU=", "path": "github.com/aws/aws-sdk-go/internal/sdkrand", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "04ypv4x12l4q0TksA1zEVsmgpvw=", + "checksumSHA1": "tQVg7Sz2zv+KkhbiXxPH0mh9spg=", + "path": "github.com/aws/aws-sdk-go/internal/sdkuri", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "LjfJ5ydXdiSuQixC+HrmSZjW3NU=", "path": "github.com/aws/aws-sdk-go/internal/shareddefaults", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "NStHCXEvYqG72GknZyv1jaKaeH0=", + "checksumSHA1": "NHfa9brYkChSmKiBcKe+xMaJzlc=", "path": "github.com/aws/aws-sdk-go/private/protocol", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "GQuRJY72iGQrufDqIaB50zG27u0=", + "checksumSHA1": "0cZnOaE1EcFUuiu4bdHV2k7slQg=", "path": "github.com/aws/aws-sdk-go/private/protocol/ec2query", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "stsUCJVnZ5yMrmzSExbjbYp5tZ8=", + "path": "github.com/aws/aws-sdk-go/private/protocol/eventstream", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "yHfT5DTbeCLs4NE2Rgnqrhe15ls=", + "checksumSHA1": "bOQjEfKXaTqe7dZhDDER/wZUzQc=", + "path": "github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "tXRIRarT7qepHconxydtO7mXod4=", "path": "github.com/aws/aws-sdk-go/private/protocol/json/jsonutil", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": 
"2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "R00RL5jJXRYq1iiK1+PGvMfvXyM=", + "checksumSHA1": "v2c4B7IgTyjl7ShytqbTOqhCIoM=", "path": "github.com/aws/aws-sdk-go/private/protocol/jsonrpc", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "SBBVYdLcocjdPzMWgDuR8vcOfDQ=", + "checksumSHA1": "lj56XJFI2OSp+hEOrFZ+eiEi/yM=", "path": "github.com/aws/aws-sdk-go/private/protocol/query", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "9V1PvtFQ9MObZTc3sa86WcuOtOU=", + "checksumSHA1": "+O6A945eTP9plLpkEMZB0lwBAcg=", "path": "github.com/aws/aws-sdk-go/private/protocol/query/queryutil", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "pkeoOfZpHRvFG/AOZeTf0lwtsFg=", + "checksumSHA1": "uRvmEPKcEdv7qc0Ep2zn0E3Xumc=", "path": "github.com/aws/aws-sdk-go/private/protocol/rest", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "Rpu8KBtHZgvhkwHxUfaky+qW+G4=", + "checksumSHA1": "S7NJNuKPbT+a9/zk9qC1/zZAHLM=", "path": "github.com/aws/aws-sdk-go/private/protocol/restjson", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "ODo+ko8D6unAxZuN1jGzMcN4QCc=", + "checksumSHA1": "ZZgzuZoMphxAf8wwz9QqpSQdBGc=", "path": "github.com/aws/aws-sdk-go/private/protocol/restxml", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "0qYPUga28aQVkxZgBR3Z86AbGUQ=", + "checksumSHA1": "B8unEuOlpQfnig4cMyZtXLZVVOs=", "path": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": 
"2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { "checksumSHA1": "F6mth+G7dXN1GI+nktaGo8Lx8aE=", "path": "github.com/aws/aws-sdk-go/private/signer/v2", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "Ih4il2OyFyaSuoMv6hhvPUN8Gn4=", + "checksumSHA1": "V5YPKdVv7D3cpcfO2gecYoB4+0E=", "path": "github.com/aws/aws-sdk-go/service/acm", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "DPl/OkvEUjrd+XKqX73l6nUNw3U=", + "checksumSHA1": "TekD25t+ErY7ep0VSZU1RbOuAhg=", + "path": "github.com/aws/aws-sdk-go/service/acmpca", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "cxeLAPywD0cT2SnRy0W4B1joyBs=", "path": "github.com/aws/aws-sdk-go/service/apigateway", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "Lhn7o6Ua6s/dgjol0ATOiRBIMxA=", + "checksumSHA1": "AAv5tgpGyzpzwfftoAJnudq2334=", "path": "github.com/aws/aws-sdk-go/service/applicationautoscaling", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "eXdYovPaIY64heZE1nczTsNRvtw=", + "checksumSHA1": "4GehXfXvsfsv903OjmzEQskC2Z4=", "path": "github.com/aws/aws-sdk-go/service/appsync", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "wrOVdI/6ZTZ/H0Kxjh3bBEZtVzk=", + "checksumSHA1": "62J/tLeZX36VfFPh5+gCrH9kh/E=", "path": "github.com/aws/aws-sdk-go/service/athena", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "GhemRVzpnrN6HAgm1NdAgESY8CQ=", + "checksumSHA1": "jixPFRDc2Mw50szA2n01JRhvJnU=", "path": "github.com/aws/aws-sdk-go/service/autoscaling", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + 
"revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "P5gDOoqIdVjMU77e5Nhy48QLpS4=", + "checksumSHA1": "P6Snig6bHJTqpzK2ilOV27NTgZU=", "path": "github.com/aws/aws-sdk-go/service/batch", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "BrC9UCeefniH5UN7x0PFr8A9l6Y=", + "checksumSHA1": "y0zwTXci4I6vL2us4sVXlSZnXC4=", "path": "github.com/aws/aws-sdk-go/service/budgets", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "EoaTzMilW+OIi5eETJUpd+giyTc=", + "checksumSHA1": "VatUlbTYWJxikHDG/XnfIgejXtI=", "path": "github.com/aws/aws-sdk-go/service/cloud9", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "yuFNzmUIWppfji/Xx6Aao0EE4cY=", + "checksumSHA1": "+Ujy5nlb+HWGMKCCS7nzxHMQ+mA=", "path": "github.com/aws/aws-sdk-go/service/cloudformation", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "0nPnGWlegQG7bn/iIIfjJFoljyU=", + "checksumSHA1": "PZHlzkNYMSasi//Us6Eguq/rz48=", "path": "github.com/aws/aws-sdk-go/service/cloudfront", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "36H7Vj7tRy/x0zvKjXZxuOhZ4zk=", + "path": "github.com/aws/aws-sdk-go/service/cloudhsmv2", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "XJAlmauOOHsHEUW7n0XO/eEpcWI=", + "checksumSHA1": "tOw80eNTNpvIpMRVBr9oRjLcQ58=", "path": "github.com/aws/aws-sdk-go/service/cloudsearch", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "VTc9UOMqIwuhWJ6oGQDsMkTW09I=", + "checksumSHA1": "5GDMRGL8Ov9o9pa4te4me+OHPc8=", "path": "github.com/aws/aws-sdk-go/service/cloudtrail", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + 
"revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "ItXljM1vG/0goVleodRgbfYgyxQ=", + "checksumSHA1": "YsLO1gRTLh3f+c3TdsYs0WqHUKo=", "path": "github.com/aws/aws-sdk-go/service/cloudwatch", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "e23yBAf38AKW/Fve77BO4WgMq4c=", + "checksumSHA1": "l1AcqoQdzomYKGm7006Bop4ms84=", "path": "github.com/aws/aws-sdk-go/service/cloudwatchevents", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "HHct8eQygkIJ+vrQpKhB0IEDymQ=", + "checksumSHA1": "9Cxvnmqh2j0dX5OFoHOu5cePz1Y=", "path": "github.com/aws/aws-sdk-go/service/cloudwatchlogs", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "22txzj8ItH1+lzyyLlFz/vtRV2I=", + "checksumSHA1": "yd3vgz6OAoSK20TLb/bkg5/JqjA=", "path": "github.com/aws/aws-sdk-go/service/codebuild", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "AQwpqKKc52nnI5F50K73Zx5luoQ=", + "checksumSHA1": "9uVTrIQWdmX4oWxLYOB6QHf7mdo=", "path": "github.com/aws/aws-sdk-go/service/codecommit", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "wvXGTyWPjtgC4OjXb80IxYdpqmE=", + "checksumSHA1": "zJdKvz7MomKCn752Wizv3OkecrI=", "path": "github.com/aws/aws-sdk-go/service/codedeploy", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "V1Y05qfjN4xOCy+GnPWSCqIeZb4=", + "checksumSHA1": "0OFYwJRcnrXHBP9dXGjtQtqNc9w=", "path": "github.com/aws/aws-sdk-go/service/codepipeline", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": 
"Ju8efcqcIgggB7N8io/as9ERVdc=", + "checksumSHA1": "cJY0EMAnPPjmLHW6BepTS4yrI/g=", "path": "github.com/aws/aws-sdk-go/service/cognitoidentity", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "jeCyZm4iJmOLbVOe/70QNkII+qU=", + "checksumSHA1": "s2S+xgdxmt4yjviWgRzgX8Tk2pE=", "path": "github.com/aws/aws-sdk-go/service/cognitoidentityprovider", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "fPHmnn5kgs8BB73v8Pz/7frfnmo=", + "checksumSHA1": "10wibvFDEouxaWgPB2RCtsen1v0=", "path": "github.com/aws/aws-sdk-go/service/configservice", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "45sgs1urdRiXDb35iuAhQPzl0e4=", + "checksumSHA1": "RKh6PvFZuR7hX732WBExF4byYTM=", "path": "github.com/aws/aws-sdk-go/service/databasemigrationservice", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "bu1R1eKCK2fc5+hMcXmagr3iIik=", + "checksumSHA1": "af9EdSqDMCYQElRwv6JyhNIusQo=", + "path": "github.com/aws/aws-sdk-go/service/datapipeline", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "E2PzR2gdjvKrUoxFlf5Recjd604=", "path": "github.com/aws/aws-sdk-go/service/dax", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "EaEfUc3nt1sS/cdfSYGq+JtSVKs=", + "checksumSHA1": "7qPTtIiAIoXpgAXTLfhYlAvkUiU=", "path": "github.com/aws/aws-sdk-go/service/devicefarm", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "LRdh5oXUe2yURIk5FDH6ceEZGMo=", + "checksumSHA1": "PC6sz5+T75ms9dSxhmSVrZq9bE4=", "path": "github.com/aws/aws-sdk-go/service/directconnect", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": 
"2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "a7itHIwtyXtOGQ0KsiefmsHgu4s=", + "checksumSHA1": "SMFibYGCd4yJfI7cV6m5hsA0DdU=", "path": "github.com/aws/aws-sdk-go/service/directoryservice", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "QXJRjnodsUq3WACgm850nSlV8pE=", + "path": "github.com/aws/aws-sdk-go/service/dlm", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "8JiVrxMjFSdBOfVWCy1QU+JzB08=", + "checksumSHA1": "wY+9wVzwZ2IRigGVUH/Brml8dCw=", "path": "github.com/aws/aws-sdk-go/service/dynamodb", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "SOmBkudpfGbt3Pb3I1bzuXV9FvQ=", + "checksumSHA1": "WoLG/Esx0gnTKXDfLJD3q0dra+8=", "path": "github.com/aws/aws-sdk-go/service/ec2", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "PcnSiFF2p2xOsvITrsglyWp20YA=", + "checksumSHA1": "Ib0Plp1+0bdv4RlTvyTjJY38drE=", "path": "github.com/aws/aws-sdk-go/service/ecr", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "T38/mnajk0fqAPXT1Qq1DG+cJ6g=", + "checksumSHA1": "6Jf3Hif1D9lPWSyuNUqhfxl7Nas=", "path": "github.com/aws/aws-sdk-go/service/ecs", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "4XmkiujbDA68x39KGgURR1+uPiQ=", + "checksumSHA1": "8ea7fZjeKLrp8d0H2oPJt+CmAEk=", "path": "github.com/aws/aws-sdk-go/service/efs", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "o73xT1zFo3C+giQwKcRj02OAZhM=", + "checksumSHA1": "zlnQCc6wRah9JnvhDp3EyWD/GnQ=", + "path": "github.com/aws/aws-sdk-go/service/eks", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": 
"UR7K4m62MzrSPEB4KLLEQOsJ4mw=", "path": "github.com/aws/aws-sdk-go/service/elasticache", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "1U0w3+W7kvH901jSftehitrRHCg=", + "checksumSHA1": "iRZ8TBVI03KJhe3usx8HZH+hz7Q=", "path": "github.com/aws/aws-sdk-go/service/elasticbeanstalk", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "VYGtTaSiajfKOVTbi9/SNmbiIac=", + "checksumSHA1": "Xv5k/JHJ+CsuyUCc5SoENm2r8w4=", "path": "github.com/aws/aws-sdk-go/service/elasticsearchservice", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "SZ7yLDZ6RvMhpWe0Goyem64kgyA=", + "checksumSHA1": "apL29Unu7vIxb5VgA+HWW0nm1v0=", "path": "github.com/aws/aws-sdk-go/service/elastictranscoder", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "iRI32eUYQfumh0LybzZ+5iWV3jE=", + "checksumSHA1": "f5/ev7DpX3Fn2Qg12TG8+aXX8Ek=", "path": "github.com/aws/aws-sdk-go/service/elb", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "ufCsoZUQK13j8Y+m25yxhC2XteQ=", + "checksumSHA1": "SozrDFhzpIRmVf6xcx2OgsNSONE=", "path": "github.com/aws/aws-sdk-go/service/elbv2", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "5x77vwxya74Qu5YEq75/lhyYkqY=", + "checksumSHA1": "Kv3fpVUq/lOmilTffzAnRQ/5yPk=", "path": "github.com/aws/aws-sdk-go/service/emr", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "t7BmfpJqmQ7Y0EYcj/CR9Aup9go=", + "checksumSHA1": "retO+IhZiinZm0yaf0hdU03P3nM=", "path": "github.com/aws/aws-sdk-go/service/firehose", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": 
"v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "lAJmnDWbMBwbWp2LNj+EgoK44Gw=", + "path": "github.com/aws/aws-sdk-go/service/fms", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "h37wXo80exgME9QqHkeLTyG88ZM=", + "checksumSHA1": "VOSOe2McOhEVDSfRAz7OM5stigI=", "path": "github.com/aws/aws-sdk-go/service/gamelift", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "20HMsQ2tH47zknjFIUQLi/ZOBvc=", + "checksumSHA1": "BkSoTPbLpV9Ov9iVpuBRJv9j8+s=", "path": "github.com/aws/aws-sdk-go/service/glacier", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "M01rYrldc6zwbpAeaLX5UJ6b25g=", + "checksumSHA1": "5kEA8EUoVwodknTHutskiCfu4+c=", "path": "github.com/aws/aws-sdk-go/service/glue", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "VRaMYP1928z+aLVk2qX5OPFaobk=", + "checksumSHA1": "a8UUqzlic1ljsDtjTH97ShjzFIY=", "path": "github.com/aws/aws-sdk-go/service/guardduty", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "I8CWKTI9BLrIF9ZKf6SpWhG+LXM=", + "checksumSHA1": "oCcChwE/IMCzMyWR0l8nuY5nqc8=", "path": "github.com/aws/aws-sdk-go/service/iam", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "45gdBZuM7PWLQzWuBasytvZZpK0=", + "checksumSHA1": "0EmBq5ipRFEW1qSToFlBP6WmRyA=", "path": "github.com/aws/aws-sdk-go/service/inspector", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "88bf507ASfaFmQHmjidwQ6uRPJ8=", + "checksumSHA1": "pREhkbXVXKGj0tR84WHFjYfimZs=", "path": "github.com/aws/aws-sdk-go/service/iot", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": 
"2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "DfzNze8B3ME2tV3TtXP7eQXUjD0=", + "checksumSHA1": "BqFgvuCkO8U2SOLpzBEWAwkSwL0=", "path": "github.com/aws/aws-sdk-go/service/kinesis", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "Vqq049R2eveVD15emT9vKTyBsIg=", + "checksumSHA1": "o92noObpHXdSONAKlSCjmheNal0=", + "path": "github.com/aws/aws-sdk-go/service/kinesisanalytics", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "ac/mCyWnYF9Br3WPYQcAOYGxCFc=", "path": "github.com/aws/aws-sdk-go/service/kms", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "lAgaKbwpyflY7+t4V3EeH18RwgA=", + "checksumSHA1": "yJQyav77BKJ6nmlW2+bvJ5gnuxE=", "path": "github.com/aws/aws-sdk-go/service/lambda", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "j89kyJh2GGvR24TptSP3OsO2fho=", + "checksumSHA1": "R8gYQx1m4W1Z8GXwFz10Y9eFkpc=", "path": "github.com/aws/aws-sdk-go/service/lexmodelbuildingservice", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "2dy15/1bnS80cwKZiwnEai1d7O8=", + "checksumSHA1": "6JlwcpzYXh120l0nkTBGFPupNTU=", "path": "github.com/aws/aws-sdk-go/service/lightsail", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "RVGzBxEeU2U6tmIWIsK4HNCYOig=", + "path": "github.com/aws/aws-sdk-go/service/macie", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "8qaF9hO9gXBfjhqcgPKx9yrz9wk=", + "checksumSHA1": "PmHXYQmLgre3YISTEeOQ7r3nXeQ=", "path": "github.com/aws/aws-sdk-go/service/mediaconvert", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": 
"f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "TJfSeycacPHDkVi7by9+qkpS2Rg=", + "checksumSHA1": "wrr0R+TQPdVNzBkYqybMTgC2cis=", "path": "github.com/aws/aws-sdk-go/service/medialive", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "Uc40skINWoBmH5LglPNfoKxET0g=", + "checksumSHA1": "y/0+Q1sC6CQzVR5qITCGJ/mbFa0=", "path": "github.com/aws/aws-sdk-go/service/mediapackage", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "kEZmPI9Y9+05SWuRCdtt+QkqwLI=", + "checksumSHA1": "PI4HQYFv1c30dZh4O4CpuxC1sc8=", "path": "github.com/aws/aws-sdk-go/service/mediastore", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "1hKLl5dLV28iH5rMfOPlWmD+oXk=", + "checksumSHA1": "kq99e0KCM51EmVJwJ2ycUdzwLWM=", "path": "github.com/aws/aws-sdk-go/service/mediastoredata", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "3QV+ZVkQ8g8JkNkftwHaOCevyqM=", + "checksumSHA1": "QzXXaK3Wp4dyew5yPBf6vvthDrU=", "path": "github.com/aws/aws-sdk-go/service/mq", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "y1mGrPJlPShO/yOagp/iFRyHMtg=", + "path": "github.com/aws/aws-sdk-go/service/neptune", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "QuOSKqV8nFvvzN4wcsToltMFI1Y=", + "checksumSHA1": "ZkfCVW7M7hCcVhk4wUPOhIhfKm0=", "path": "github.com/aws/aws-sdk-go/service/opsworks", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "vSNXu8VAbPuFoTTjtVEpldKDZHA=", + "checksumSHA1": "jbeiGywfS9eq+sgkpYdTSG1+6OY=", "path": "github.com/aws/aws-sdk-go/service/organizations", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - 
"version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "BmJUoqyu9C2nKM2azyOfZu4B2DA=", + "path": "github.com/aws/aws-sdk-go/service/pinpoint", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "xKY1N27xgmGIfx4qRKsuPRzhY4Q=", + "path": "github.com/aws/aws-sdk-go/service/pricing", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "HUlP17axe1KXCHIc8vHlrvQPD2s=", + "checksumSHA1": "jZU2pCtRt4/+aBCkg09ELTwWArQ=", "path": "github.com/aws/aws-sdk-go/service/rds", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "/F1EdnLahy5V5+EjvRQ3C0JhIyw=", + "checksumSHA1": "08h0XCi1mWXkMoQroX+NPsV8c5s=", "path": "github.com/aws/aws-sdk-go/service/redshift", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "p2x3SXLzi1PXnyMpPUJUntpS1hQ=", + "checksumSHA1": "edMRC1DPJcWX6eoUuZHTJoYcjXA=", + "path": "github.com/aws/aws-sdk-go/service/resourcegroups", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "vn3OhTeWgYQMFDZ+iRuNa1EzNg8=", "path": "github.com/aws/aws-sdk-go/service/route53", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "8p50HAyOFQoF/q0uQaNLKFt+AXA=", + "checksumSHA1": "9QjcQS7eJ5eff1xX14OrACHkKO0=", "path": "github.com/aws/aws-sdk-go/service/s3", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "ft2RlaOgFMy5hkGg6okKx01eBFs=", + "checksumSHA1": "bGC64BejVz3MgnNHrbWz8YLOZLs=", "path": "github.com/aws/aws-sdk-go/service/sagemaker", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "JPCIldJju4peXDEB9QglS3aD/G0=", + "path": "github.com/aws/aws-sdk-go/service/secretsmanager", + "revision": 
"f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "JrQMIue45k6Nk738T347pe9X0c0=", + "checksumSHA1": "AJmifLGuOaPSILz5jVb79k+H1UE=", + "path": "github.com/aws/aws-sdk-go/service/serverlessapplicationrepository", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "nLDJnqA+Q+hm+Bikups8i53/ByA=", "path": "github.com/aws/aws-sdk-go/service/servicecatalog", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "fmnQo2GajqI9eEn6tcEGiJ9A66E=", + "checksumSHA1": "+EZbk9VlvYV1bAT3NNHu3krvlvg=", "path": "github.com/aws/aws-sdk-go/service/servicediscovery", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "8LeTeLzNs+0vNxTeOjMCtSrSwqo=", + "checksumSHA1": "pq0s/7ZYvscjU6DHFxrasIIcu/o=", "path": "github.com/aws/aws-sdk-go/service/ses", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "u3AMeFxtHGtiJCxDeIm4dAwzBIc=", + "checksumSHA1": "/Ln2ZFfKCZq8hqfr613XO8ZpnRs=", "path": "github.com/aws/aws-sdk-go/service/sfn", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "B3CgAFSREebpsFoFOo4vrQ6u04w=", + "checksumSHA1": "g6KVAXiGpvaHGM6bOf5OBkvWRb4=", "path": "github.com/aws/aws-sdk-go/service/simpledb", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "9Qj8yLl67q9uxBUCc0PT20YiP1M=", + "checksumSHA1": "JSC6tm9PRJeTbbiH9KHyc4PgwNY=", "path": "github.com/aws/aws-sdk-go/service/sns", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "XmEJe50M8MddNEkwbZoC+YvRjgg=", + "checksumSHA1": "5nHvnLQSvF4JOtXu/hi+iZOVfak=", "path": "github.com/aws/aws-sdk-go/service/sqs", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - 
"version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "ULmsQAt5z5ZxGHZC/r7enBs1HvM=", + "checksumSHA1": "ktyl9ag1DysoQ1hxVtq7MFYdNhU=", "path": "github.com/aws/aws-sdk-go/service/ssm", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" + }, + { + "checksumSHA1": "+oRYFnGRYOqZGZcQ0hrOONtGH/k=", + "path": "github.com/aws/aws-sdk-go/service/storagegateway", + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "tH1Pzc7jUUGJv47Cy6uuWl/86bU=", + "checksumSHA1": "35a/vm5R/P68l/hQD55GqviO6bg=", "path": "github.com/aws/aws-sdk-go/service/sts", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "Uw4pOUxSMbx4xBHUcOUkNhtnywE=", + "checksumSHA1": "hTDzNXqoUUS81wwttkD8My6MstI=", "path": "github.com/aws/aws-sdk-go/service/swf", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "dBDZ2w/7ZEdk2UZ6PWFmk/cGSPw=", + "checksumSHA1": "PR55l/umJd2tTXH03wDMA65g1gA=", "path": "github.com/aws/aws-sdk-go/service/waf", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "hOobCqlfwCl/iPGetELNIDsns1U=", + "checksumSHA1": "5bs2RlDPqtt8li74YjPOfHRhtdg=", "path": "github.com/aws/aws-sdk-go/service/wafregional", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "FVK4fHBkjOIaCRIY4DleqSKtZKs=", + "checksumSHA1": "W1X4wTHwuje5bjenIJZtMCOZG8E=", "path": "github.com/aws/aws-sdk-go/service/workspaces", - "revision": "62465d4996f172ae426fdcec7a6a053062cf8995", - "revisionTime": "2018-03-15T19:52:51Z", - "version": "v1.13.15", - "versionExact": "v1.13.15" + "revision": "f557567ff83b93530a0498d4bdb0eb616af70bb3", + "revisionTime": "2018-11-20T00:31:09Z", + "version": "v1.15.79", + "versionExact": "v1.15.79" }, { - "checksumSHA1": "usT4LCSQItkFvFOQT7cBlkCuGaE=", + "checksumSHA1": "yBBHqv7DvZNsZdF00SO8PbEQAKU=", "path": "github.com/beevik/etree", - "revision": "af219c0c7ea1b67ec263c0b1b1b96d284a9181ce", - "revisionTime": 
"2017-10-15T22:09:51Z" + "revision": "9d7e8feddccb4ed1b8afb54e368bd323d2ff652c", + "revisionTime": "2018-05-06T21:45:57Z", + "version": "v1.0.1", + "versionExact": "v1.0.1" }, { "checksumSHA1": "nqw2Qn5xUklssHTubS5HDvEL9L4=", @@ -934,33 +1120,51 @@ "checksumSHA1": "oTmBS67uxM6OXB/+OJUAG9LK4jw=", "path": "github.com/bgentry/speakeasy", "revision": "4aabc24848ce5fd31929f7d1e4ea74d3709c14cd", - "revisionTime": "2017-04-17T20:07:03Z" + "revisionTime": "2017-04-17T20:07:03Z", + "version": "v0.1.0", + "versionExact": "v0.1.0" }, { "checksumSHA1": "OT4XN9z5k69e2RsMSpwW74B+yk4=", "path": "github.com/blang/semver", "revision": "2ee87856327ba09384cabd113bc6b5d174e9ec0f", - "revisionTime": "2017-07-27T06:48:18Z" + "revisionTime": "2017-07-27T06:48:18Z", + "version": "v3.5.1", + "versionExact": "v3.5.1" }, { - "checksumSHA1": "dvabztWVQX8f6oMLRyv4dLH+TGY=", - "path": "github.com/davecgh/go-spew/spew", - "revision": "346938d642f2ec3594ed81d874461961cd0faa76", - "revisionTime": "2016-10-29T20:57:26Z" + "checksumSHA1": "X/ezkFEECpsC9T8wf1m6jbJo3A4=", + "path": "github.com/boombuler/barcode", + "revision": "34fff276c74eba9c3506f0c6f4064dbaa67d8da3", + "revisionTime": "2018-08-09T05:23:37Z" + }, + { + "checksumSHA1": "jWsoIeAcg4+QlCJLZ8jXHiJ5a3s=", + "path": "github.com/boombuler/barcode/qr", + "revision": "34fff276c74eba9c3506f0c6f4064dbaa67d8da3", + "revisionTime": "2018-08-09T05:23:37Z" + }, + { + "checksumSHA1": "axe0OTdOjYa+XKDUYqzOv7FGaWo=", + "path": "github.com/boombuler/barcode/utils", + "revision": "34fff276c74eba9c3506f0c6f4064dbaa67d8da3", + "revisionTime": "2018-08-09T05:23:37Z" }, { - "checksumSHA1": "BCv50o5pDkoSG3vYKOSai1Z8p3w=", - "path": "github.com/fsouza/go-dockerclient", - "revision": "1d4f4ae73768d3ca16a6fb964694f58dc5eba601", - "revisionTime": "2016-04-27T17:25:47Z", - "tree": true + "checksumSHA1": "CSPbwbyzqA6sfORicn4HFtIhF/c=", + "path": "github.com/davecgh/go-spew/spew", + "revision": "8991bc29aa16c548c550c7ff78260e27b9ab7c73", + "revisionTime": "2018-02-21T22:46:20Z", + "version": "v1.1.1", + "versionExact": "v1.1.1" }, { - "checksumSHA1": "1K+xrZ1PBez190iGt5OnMtGdih4=", - "comment": "v1.8.6", + "checksumSHA1": "VvZKmbuBN1QAG699KduTdmSPwA4=", "path": "github.com/go-ini/ini", - "revision": "766e555c68dc8bda90d197ee8946c37519c19409", - "revisionTime": "2017-01-17T13:00:17Z" + "revision": "300e940a926eb277d3901b20bdfcc54928ad3642", + "revisionTime": "2017-03-13T04:11:30Z", + "version": "v1.25.4", + "versionExact": "v1.25.4" }, { "checksumSHA1": "yqF125xVSkmfLpIVGrLlfE05IUk=", @@ -993,27 +1197,38 @@ "revisionTime": "2017-11-13T18:07:20Z" }, { - "checksumSHA1": "cdOCt0Yb+hdErz8NAQqayxPmRsY=", + "checksumSHA1": "h1d2lPZf6j2dW/mIqVnd1RdykDo=", + "path": "github.com/golang/snappy", + "revision": "2e65f85255dbc3072edf28d6b5b8efc472979f5a", + "revisionTime": "2018-05-18T05:39:59Z" + }, + { + "checksumSHA1": "ByRdQMv2yl16W6Tp9gUW1nNmpuI=", "path": "github.com/hashicorp/errwrap", - "revision": "7554cd9344cec97297fa6649b055a8c98c2a1e55" + "revision": "8a6fb523712970c966eefc6b39ed2c5e74880354", + "revisionTime": "2018-08-24T00:39:10Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { - "checksumSHA1": "b8F628srIitj5p7Y130xc9k0QWs=", + "checksumSHA1": "Ihile6rE76J6SivxECovHgMROxw=", "path": "github.com/hashicorp/go-cleanhttp", - "revision": "3573b8b52aa7b37b9358d966a898feb387f62437", - "revisionTime": "2017-02-11T01:34:15Z" + "revision": "e8ab9daed8d1ddd2d3c4efba338fe2eeae2e4f18", + 
"revisionTime": "2018-08-30T03:37:06Z", + "version": "v0.5.0", + "versionExact": "v0.5.0" }, { - "checksumSHA1": "9VcI9QGCShWIUIL187qRd4sxwb8=", + "checksumSHA1": "fvjFEz5PBN1m9ALWf9UuLgTFWLg=", "path": "github.com/hashicorp/go-getter", - "revision": "a686900cb3753aa644dc4812be91ceaf9fdd3b98", - "revisionTime": "2017-09-22T19:29:48Z" + "revision": "90bb99a48d86cf1d327cee9968f7428f90ba13c1", + "revisionTime": "2018-03-27T01:01:14Z" }, { "checksumSHA1": "9J+kDr29yDrwsdu2ULzewmqGjpA=", "path": "github.com/hashicorp/go-getter/helper/url", - "revision": "c3d66e76678dce180a7b452653472f949aedfbcd", - "revisionTime": "2017-02-07T21:55:32Z" + "revision": "90bb99a48d86cf1d327cee9968f7428f90ba13c1", + "revisionTime": "2018-03-27T01:01:14Z" }, { "checksumSHA1": "AA0aYmdg4pb5gPCUSXg8iPzxLag=", @@ -1022,9 +1237,12 @@ "revisionTime": "2017-10-05T15:17:51Z" }, { - "checksumSHA1": "lrSl49G23l6NhfilxPM0XFs5rZo=", + "checksumSHA1": "cFcwn0Zxefithm9Q9DioRNcGbqg=", "path": "github.com/hashicorp/go-multierror", - "revision": "d30f09973e19c1dfcd120b2d9c4f168e68d6b5d5" + "revision": "886a7fbe3eb1c874d46f623bfa70af45f425b3d1", + "revisionTime": "2018-08-24T00:40:42Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { "checksumSHA1": "R6me0jVmcT/OPo80Fe0qo5fRwHc=", @@ -1033,15 +1251,28 @@ "revisionTime": "2017-08-16T15:18:19Z" }, { - "checksumSHA1": "85XUnluYJL7F55ptcwdmN8eSOsk=", + "checksumSHA1": "0daDqRBckum49dBVYduwjaoObgU=", + "path": "github.com/hashicorp/go-safetemp", + "revision": "c9a55de4fe06c920a71964b53cfe3dd293a3c743", + "revisionTime": "2018-08-28T16:41:30Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" + }, + { + "checksumSHA1": "Kjansj6ZDJrWKNS1BJpg+z7OBnI=", "path": "github.com/hashicorp/go-uuid", - "revision": "36289988d83ca270bc07c234c36f364b0dd9c9a7" + "revision": "de160f5c59f693fed329e73e291bb751fe4ea4dc", + "revisionTime": "2018-08-30T03:27:00Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { - "checksumSHA1": "EcZfls6vcqjasWV/nBlu+C+EFmc=", + "checksumSHA1": "r0pj5dMHCghpaQZ3f1BRGoKiSWw=", "path": "github.com/hashicorp/go-version", - "revision": "e96d3840402619007766590ecea8dd7af1292276", - "revisionTime": "2016-10-31T18:26:05Z" + "revision": "b5a281d3160aa11950a6182bd9a9dc2cb1e02d50", + "revisionTime": "2018-08-24T00:43:55Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { "checksumSHA1": "o3XZZdOnSnwQSpYw215QV75ZDeI=", @@ -1158,284 +1389,292 @@ "revisionTime": "2017-05-12T21:33:05Z" }, { - "checksumSHA1": "vt+P9D2yWDO3gdvdgCzwqunlhxU=", + "checksumSHA1": "v8LPIrksMLyWuVyNVJul0BbQONM=", "path": "github.com/hashicorp/logutils", - "revision": "0dc08b1671f34c4250ce212759ebd880f743d883", - "revisionTime": "2015-06-09T07:04:31Z" + "revision": "a335183dfd075f638afcc820c90591ca3c97eba6", + "revisionTime": "2018-08-28T16:12:49Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { - "checksumSHA1": "D2qVXjDywJu6wLj/4NCTsFnRrvw=", + "checksumSHA1": "MpMvoeVDNxeoOQTI+hUxt+0bHdY=", "path": "github.com/hashicorp/terraform/config", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "WzQP2WfiCYlaALKZVqEFsxZsG1o=", "path": "github.com/hashicorp/terraform/config/configschema", - "revision": 
"7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "3V7300kyZF+AGy/cOKV0+P6M3LY=", "path": "github.com/hashicorp/terraform/config/hcl2shim", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "7+cYlhS0+Z/xYUzYQft8Wibs1GA=", + "checksumSHA1": "/5UEeukyNbbP/j80Jht10AZ+Law=", "path": "github.com/hashicorp/terraform/config/module", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "mPbjVPD2enEey45bP4M83W2AxlY=", "path": "github.com/hashicorp/terraform/dag", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "P8gNPDuOzmiK4Lz9xG7OBy4Rlm8=", "path": "github.com/hashicorp/terraform/flatmap", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "zx5DLo5aV0xDqxGTzSibXg7HHAA=", "path": "github.com/hashicorp/terraform/helper/acctest", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "uT6Q9RdSRAkDjyUgQlJ2XKJRab4=", "path": "github.com/hashicorp/terraform/helper/config", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "qVmQPoZmJ2w2OnaxIheWfuwun6g=", "path": "github.com/hashicorp/terraform/helper/customdiff", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "FH5eOEHfHgdxPC/JnfmCeSBk66U=", "path": "github.com/hashicorp/terraform/helper/encryption", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", 
- "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "KNvbU1r5jv0CBeQLnEtDoL3dRtc=", "path": "github.com/hashicorp/terraform/helper/hashcode", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "B267stWNQd0/pBTXHfI/tJsxzfc=", "path": "github.com/hashicorp/terraform/helper/hilmapstructure", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "BAXV9ruAyno3aFgwYI2/wWzB2Gc=", + "checksumSHA1": "j8XqkwLh2W3r3i6wnCRmve07BgI=", "path": "github.com/hashicorp/terraform/helper/logging", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "twkFd4x71kBnDfrdqO5nhs8dMOY=", "path": "github.com/hashicorp/terraform/helper/mutexkv", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "ImyqbHM/xe3eAT2moIjLI8ksuks=", "path": "github.com/hashicorp/terraform/helper/pathorcontents", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "ryCWu7RtMlYrAfSevaI7RtaXe98=", + "checksumSHA1": "ejnz+70aL76+An9FZymcUcg0lUU=", "path": "github.com/hashicorp/terraform/helper/resource", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "ckEg0tYoH3oVZD30IHOay6Ne63o=", + "checksumSHA1": "OOwTGBTHcUmQTPBdyscTMkjApbI=", "path": "github.com/hashicorp/terraform/helper/schema", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "Fzbv+N7hFXOtrR6E7ZcHT3jEE9s=", "path": 
"github.com/hashicorp/terraform/helper/structure", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "JL8eJjRMxGI2zz2fMYXhXQpRtPk=", + "checksumSHA1": "nEC56vB6M60BJtGPe+N9rziHqLg=", "path": "github.com/hashicorp/terraform/helper/validation", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "VgChgJYOTM4zrNOWifEqDsdv7Hk=", + "checksumSHA1": "kD1ayilNruf2cES1LDfNZjYRscQ=", "path": "github.com/hashicorp/terraform/httpclient", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "yFWmdS6yEJZpRJzUqd/mULqCYGk=", "path": "github.com/hashicorp/terraform/moduledeps", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "DqaoG++NXRCfvH/OloneLWrM+3k=", "path": "github.com/hashicorp/terraform/plugin", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "tx5xrdiUWdAHqoRV5aEfALgT1aU=", "path": "github.com/hashicorp/terraform/plugin/discovery", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "f6wDpr0uHKZqQw4ztvxMrtiuvQo=", + "checksumSHA1": "dD3uZ27A7V6r2ZcXabXbUwOxD2E=", "path": "github.com/hashicorp/terraform/registry", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "cR87P4V5aiEfvF+1qoBi2JQyQS4=", "path": "github.com/hashicorp/terraform/registry/regsrc", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" 
}, { "checksumSHA1": "y9IXgIJQq9XNy1zIYUV2Kc0KsnA=", "path": "github.com/hashicorp/terraform/registry/response", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "VXlzRRDVOqeMvnnrbUcR9H64OA4=", "path": "github.com/hashicorp/terraform/svchost", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "GzcKNlFL0N77JVjU8qbltXE4R3k=", + "checksumSHA1": "o6CMncmy6Q2F+r13sEOeT6R9GF8=", "path": "github.com/hashicorp/terraform/svchost/auth", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "jiDWmQieUE6OoUBMs53hj9P/JDQ=", + "checksumSHA1": "uEzjKyPvbd8k5VGdgn4b/2rDDi0=", "path": "github.com/hashicorp/terraform/svchost/disco", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "Soh8iC7ikCO2Z7CWulI/9mlJUDc=", + "checksumSHA1": "SJ9F1euNPxacFDFaic/Ks4SUUzw=", "path": "github.com/hashicorp/terraform/terraform", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { "checksumSHA1": "+K+oz9mMTmQMxIA3KVkGRfjvm9I=", "path": "github.com/hashicorp/terraform/tfdiags", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "6n9DKfhMSN1TSizmkoetaPH8HDk=", + "checksumSHA1": "WJiYLbmIspnkzrhPRAHB6H9JE7g=", "path": "github.com/hashicorp/terraform/version", - "revision": "7878d66b386e5474102b5047722c2de2b3237278", - "revisionTime": "2018-03-15T20:35:34Z", - "version": "v0.11.4", - "versionExact": "v0.11.4" + "revision": "48c1ae62b3e1f5e5bc1c2e9ab3e64c7b4b86a6a1", + "revisionTime": "2018-10-15T19:04:37Z", + "version": "v0.11.9-beta1", + "versionExact": "v0.11.9-beta1" }, { - "checksumSHA1": "ft77GtqeZEeCXioGpF/s6DlGm/U=", + "checksumSHA1": "bSdPFOHaTwEvM4PIvn0PZfn75jM=", "path": "github.com/hashicorp/vault/helper/compressutil", - "revision": "9a60bf2a50e4dd1ba4b929a3ccf8072435cbdd0a", - "revisionTime": "2016-10-29T21:01:49Z" + "revision": "e21712a687889de1125e0a12a980420b1a4f72d3", + 
"revisionTime": "2018-07-25T14:15:52Z", + "version": "v0.10.4", + "versionExact": "v0.10.4" }, { - "checksumSHA1": "yUiSTPf0QUuL2r/81sjuytqBoeQ=", + "checksumSHA1": "POgkM3GrjRFw6H3sw95YNEs552A=", "path": "github.com/hashicorp/vault/helper/jsonutil", - "revision": "9a60bf2a50e4dd1ba4b929a3ccf8072435cbdd0a", - "revisionTime": "2016-10-29T21:01:49Z" + "revision": "e21712a687889de1125e0a12a980420b1a4f72d3", + "revisionTime": "2018-07-25T14:15:52Z", + "version": "v0.10.4", + "versionExact": "v0.10.4" }, { - "checksumSHA1": "YmXAnTwbzhLLBZM+1tQrJiG3qpc=", + "checksumSHA1": "EPOFuDrCcLPwHZNgaCve5N7i2wU=", "path": "github.com/hashicorp/vault/helper/pgpkeys", - "revision": "9a60bf2a50e4dd1ba4b929a3ccf8072435cbdd0a", - "revisionTime": "2016-10-29T21:01:49Z" + "revision": "e21712a687889de1125e0a12a980420b1a4f72d3", + "revisionTime": "2018-07-25T14:15:52Z", + "version": "v0.10.4", + "versionExact": "v0.10.4" }, { "checksumSHA1": "ZhK6IO2XN81Y+3RAjTcVm1Ic7oU=", @@ -1444,10 +1683,12 @@ "revisionTime": "2016-07-20T23:31:40Z" }, { - "checksumSHA1": "77PKgZO4nQy8w8CKOQAc9/vrAmo=", + "checksumSHA1": "x+4RAiAczFwq+qsWJUEb9oRpgjM=", "path": "github.com/jen20/awspolicyequivalence", - "revision": "9fbcaca9f9f868b9560463d0790aae33b2322945", - "revisionTime": "2018-03-16T12:05:52Z" + "revision": "9ebbf3c225b2b9da629263e13c3015a5de7965d1", + "revisionTime": "2018-08-29T22:45:56Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { "checksumSHA1": "0ZrwvB6KoGPj2PoDNSEJwxQ6Mog=", @@ -1511,10 +1752,12 @@ "revisionTime": "2016-10-04T15:35:44Z" }, { - "checksumSHA1": "trzmsZQDCc13zk/6qANox7Z/KCg=", + "checksumSHA1": "AZO2VGorXTMDiSVUih3k73vORHY=", "path": "github.com/mattn/go-isatty", - "revision": "fc9e8d8ef48496124e79ae0df75490096eccf6fe", - "revisionTime": "2017-03-22T23:44:13Z" + "revision": "6ca4dbf54d38eea1a992b3c722a76a5d1c4cb25c", + "revisionTime": "2017-11-07T05:05:31Z", + "version": "v0.0.4", + "versionExact": "v0.0.4" }, { "checksumSHA1": "jXakiCRizt6jtyeGh+DeuA76Bh0=", @@ -1523,16 +1766,20 @@ "revisionTime": "2017-08-03T04:29:10Z" }, { - "checksumSHA1": "guxbLo8KHHBeM0rzou4OTzzpDNs=", + "checksumSHA1": "gpgfDOKGUOP/h+j1cM7T36NZFTQ=", "path": "github.com/mitchellh/copystructure", - "revision": "5af94aef99f597e6a9e1f6ac6be6ce0f3c96b49d", - "revisionTime": "2016-10-13T19:53:42Z" + "revision": "9a1b6f44e8da0e0e374624fb0a825a231b00c537", + "revisionTime": "2018-08-24T00:35:38Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { - "checksumSHA1": "V/quM7+em2ByJbWBLOsEwnY3j/Q=", + "checksumSHA1": "a58zUNtDH/gEd6F6KI3FqT2iEo0=", "path": "github.com/mitchellh/go-homedir", - "revision": "b8bc1bf767474819792c23f32d8286a45736f1c6", - "revisionTime": "2016-12-03T19:45:07Z" + "revision": "ae18d6b8b3205b561c79e8e5f69bff09736185f4", + "revisionTime": "2018-08-24T00:42:36Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { "checksumSHA1": "bDdhmDk8q6utWrccBhEOa6IoGkE=", @@ -1541,15 +1788,20 @@ "revisionTime": "2017-10-04T22:19:16Z" }, { - "checksumSHA1": "L3leymg2RT8hFl5uL+5KP/LpBkg=", + "checksumSHA1": "2gW/SeTAbHsmAdoDes4DksvqTj4=", "path": "github.com/mitchellh/go-wordwrap", - "revision": "ad45545899c7b13c020ea92b2072220eefad42b8", - "revisionTime": "2015-03-14T17:03:34Z" + "revision": "9e67c67572bc5dd02aef930e2b0ae3c02a4b5a5c", + "revisionTime": "2018-08-28T14:53:44Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { - "checksumSHA1": "xyoJKalfQwTUN1qzZGQKWYAwl0A=", + "checksumSHA1": "gOIe6F97Mcq/PwZjG38FkxervzE=", "path": 
"github.com/mitchellh/hashstructure", - "revision": "6b17d669fac5e2f71c16658d781ec3fdd3802b69" + "revision": "a38c50148365edc8df43c1580c48fb2b3a1e9cd7", + "revisionTime": "2018-08-28T14:53:49Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { "checksumSHA1": "MlX15lJuV8DYARX5RJY8rqrSEWQ=", @@ -1558,10 +1810,12 @@ "revisionTime": "2017-03-07T20:11:23Z" }, { - "checksumSHA1": "vBpuqNfSTZcAR/0tP8tNYacySGs=", + "checksumSHA1": "nxuST3bjBv5uDVPzrX9wdruOwv0=", "path": "github.com/mitchellh/reflectwalk", - "revision": "92573fe8d000a145bfebc03a16bc22b34945867f", - "revisionTime": "2016-10-03T17:45:16Z" + "revision": "eecee6c969c02c8cc2ae48e1e269843ae8590796", + "revisionTime": "2018-08-24T00:34:11Z", + "version": "v1.0.0", + "versionExact": "v1.0.0" }, { "checksumSHA1": "6OEUkwOM0qgI6YxR+BDEn6YMvpU=", @@ -1588,16 +1842,22 @@ "revisionTime": "2017-07-30T19:30:24Z" }, { - "checksumSHA1": "zmC8/3V4ls53DJlNTKDZwPSC/dA=", - "path": "github.com/satori/go.uuid", - "revision": "b061729afc07e77a8aa4fad0a2fd840958f1942a", - "revisionTime": "2016-09-27T10:08:44Z" + "checksumSHA1": "vCogt04lbcE8fUgvRCOaZQUo+Pk=", + "path": "github.com/pquerna/otp", + "revision": "be78767b3e392ce45ea73444451022a6fc32ad0d", + "revisionTime": "2018-08-13T14:46:49Z" + }, + { + "checksumSHA1": "Rpx/4T1X/tZgHG56AJ6G8nPk7Gw=", + "path": "github.com/pquerna/otp/hotp", + "revision": "be78767b3e392ce45ea73444451022a6fc32ad0d", + "revisionTime": "2018-08-13T14:46:49Z" }, { - "checksumSHA1": "iqUXcP3VA+G1/gVLRpQpBUt/BuA=", - "path": "github.com/satori/uuid", - "revision": "b061729afc07e77a8aa4fad0a2fd840958f1942a", - "revisionTime": "2016-09-27T10:08:44Z" + "checksumSHA1": "OvSOUZb554+cPpvBOK4kEjv2ZpE=", + "path": "github.com/pquerna/otp/totp", + "revision": "be78767b3e392ce45ea73444451022a6fc32ad0d", + "revisionTime": "2018-08-13T14:46:49Z" }, { "checksumSHA1": "EUR26b2t3XDPxiEMwDBtn8Ajp8A=", @@ -1619,25 +1879,33 @@ "checksumSHA1": "qgMa75aMGbkFY0jIqqqgVnCUoNA=", "path": "github.com/ulikunitz/xz", "revision": "0c6b41e72360850ca4f98dc341fd999726ea007f", - "revisionTime": "2017-06-05T21:53:11Z" + "revisionTime": "2017-06-05T21:53:11Z", + "version": "v0.5.4", + "versionExact": "v0.5.4" }, { "checksumSHA1": "vjnTkzNrMs5Xj6so/fq0mQ6dT1c=", "path": "github.com/ulikunitz/xz/internal/hash", "revision": "0c6b41e72360850ca4f98dc341fd999726ea007f", - "revisionTime": "2017-06-05T21:53:11Z" + "revisionTime": "2017-06-05T21:53:11Z", + "version": "v0.5.4", + "versionExact": "v0.5.4" }, { "checksumSHA1": "m0pm57ASBK/CTdmC0ppRHO17mBs=", "path": "github.com/ulikunitz/xz/internal/xlog", "revision": "0c6b41e72360850ca4f98dc341fd999726ea007f", - "revisionTime": "2017-06-05T21:53:11Z" + "revisionTime": "2017-06-05T21:53:11Z", + "version": "v0.5.4", + "versionExact": "v0.5.4" }, { "checksumSHA1": "2vZw6zc8xuNlyVz2QKvdlNSZQ1U=", "path": "github.com/ulikunitz/xz/lzma", "revision": "0c6b41e72360850ca4f98dc341fd999726ea007f", - "revisionTime": "2017-06-05T21:53:11Z" + "revisionTime": "2017-06-05T21:53:11Z", + "version": "v0.5.4", + "versionExact": "v0.5.4" }, { "checksumSHA1": "TudZOVOvOvR5zw7EFbvD3eZpmLI=", @@ -1982,10 +2250,12 @@ "revisionTime": "2017-10-25T22:59:19Z" }, { - "checksumSHA1": "fALlQNY1fM99NesfLJ50KguWsio=", + "checksumSHA1": "ZSWoOPUNRr5+3dhkLK3C4cZAQPk=", "path": "gopkg.in/yaml.v2", - "revision": "cd8b52f8269e0feb286dfeef29f8fe4d5b397e0b", - "revisionTime": "2017-04-07T17:21:22Z" + "revision": "5420a8b6744d3b0345ab293f6fcba19c978f1183", + 
"revisionTime": "2018-03-28T19:50:20Z", + "version": "v2.2.1", + "versionExact": "v2.2.1" } ], "rootPath": "github.com/terraform-providers/terraform-provider-aws" diff --git a/website/aws.erb b/website/aws.erb index 2ae53058fc4..b6c678d75ad 100644 --- a/website/aws.erb +++ b/website/aws.erb @@ -14,6 +14,18 @@ Guides @@ -268,6 +400,15 @@ + > + ACM PCA Resources + + + > API Gateway Resources + > + Budget Resources + + + > CloudFormation Resources + + + > + CloudHSM v2 Resources + @@ -493,6 +667,10 @@ aws_codebuild_project + > + aws_codebuild_webhook + + @@ -502,11 +680,11 @@ @@ -538,6 +716,11 @@ aws_codepipeline + + > + aws_codepipeline_webhook + + @@ -550,6 +733,12 @@ > aws_cognito_identity_pool_roles_attachment + > + aws_cognito_identity_provider + + > + aws_cognito_resource_server + > aws_cognito_user_group @@ -568,6 +757,13 @@ > Config Resources + > + Data Lifecycle Manager Resources + + + > Database Migration Service > Direct Connect Resources @@ -678,6 +914,14 @@ aws_dax_cluster + > + aws_dax_parameter_group + + + > + aws_dax_subnet_group + + @@ -762,10 +1006,22 @@ aws_ebs_snapshot + > + aws_ebs_snapshot_copy + + > aws_ebs_volume + > + aws_ec2_capacity_reservation + + + > + aws_ec2_fleet + + > aws_eip @@ -794,6 +1050,10 @@ aws_launch_configuration + > + aws_launch_template + + > aws_lb_cookie_stickiness_policy @@ -917,6 +1177,18 @@ + > + EKS Resources + + + + > ElastiCache Resources > Glacier Resources @@ -1041,6 +1319,27 @@ > aws_glue_catalog_database + > + aws_glue_catalog_table + + > + aws_glue_classifier + + > + aws_glue_connection + + > + aws_glue_crawler + + > + aws_glue_job + + > + aws_glue_security_configuration + + > + aws_glue_trigger + @@ -1133,10 +1432,18 @@ aws_iam_server_certificate + > + aws_iam_service_linked_role + + > aws_iam_user + > + aws_iam_user_group_membership + + > aws_iam_user_login_profile @@ -1167,12 +1474,22 @@ > aws_iot_policy + + > + aws_iot_policy_attachment + + > aws_iot_topic_rule > aws_iot_thing + + > + aws_iot_thing_principal_attachment + + > aws_iot_thing_type @@ -1203,6 +1520,10 @@ Kinesis Resources + > + Macie Resources + + + > MQ Resources @@ -1384,9 +1723,58 @@ Organizations Resources + + + > + Pinpoint Resources + @@ -1394,6 +1782,10 @@ RDS Resources + > + Neptune Resources + + + > Redshift Resources @@ -1574,6 +2050,10 @@ aws_s3_bucket + > + aws_s3_bucket_inventory + + > aws_s3_bucket_metric @@ -1592,6 +2072,20 @@ + > + Secrets Manager Resources + + > SES Resources @@ -1605,6 +2099,10 @@ aws_ses_domain_identity + > + aws_ses_domain_identity_verification + + > aws_ses_domain_dkim @@ -1633,6 +2131,10 @@ aws_ses_event_destination + > + aws_ses_identity_notification_topic + + > aws_ses_template @@ -1706,6 +2208,10 @@ aws_sns_platform_application + > + aws_sns_sms_preferences + + > aws_sns_topic @@ -1782,6 +2288,51 @@ + > + Storage Gateway Resources + + + + > + SWF Resources + + > VPC Resources @@ -1913,6 +2464,10 @@ aws_vpc_endpoint_subnet_association + > + aws_vpc_ipv4_cidr_block_association + + > aws_vpc_peering_connection @@ -1921,6 +2476,10 @@ aws_vpc_peering_connection_accepter + > + aws_vpc_peering_connection_options + + > aws_vpn_connection diff --git a/website/docs/d/acm_certificate.html.markdown b/website/docs/d/acm_certificate.html.markdown index 0d169a1bfda..fb8d0c82324 100644 --- a/website/docs/d/acm_certificate.html.markdown +++ b/website/docs/d/acm_certificate.html.markdown @@ -9,10 +9,8 @@ description: |- # Data Source: aws_acm_certificate Use this data source to get the ARN of a certificate in AWS Certificate -Manager 
(ACM). The process of requesting and verifying a certificate in ACM -requires some manual steps, which means that Terraform cannot automate the -creation of ACM certificates. But using this data source, you can reference -them by domain without having to hard code the ARNs as input. +Manager (ACM), you can reference +it by domain without having to hard code the ARNs as input. ## Example Usage @@ -23,8 +21,8 @@ data "aws_acm_certificate" "example" { } data "aws_acm_certificate" "example" { - domain = "tf.example.com" - types = ["AMAZON_ISSUED"] + domain = "tf.example.com" + types = ["AMAZON_ISSUED"] most_recent = true } ``` diff --git a/website/docs/d/acmpca_certificate_authority.html.markdown b/website/docs/d/acmpca_certificate_authority.html.markdown new file mode 100644 index 00000000000..2bfe8f672bb --- /dev/null +++ b/website/docs/d/acmpca_certificate_authority.html.markdown @@ -0,0 +1,46 @@ +--- +layout: "aws" +page_title: "AWS: aws_acmpca_certificate_authority" +sidebar_current: "docs-aws-datasource-acmpca-certificate-authority" +description: |- + Get information on a AWS Certificate Manager Private Certificate Authority +--- + +# Data Source: aws_acmpca_certificate_authority + +Get information on a AWS Certificate Manager Private Certificate Authority (ACM PCA Certificate Authority). + +## Example Usage + +```hcl +data "aws_acmpca_certificate_authority" "example" { + arn = "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `arn` - (Required) Amazon Resource Name (ARN) of the certificate authority. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Amazon Resource Name (ARN) of the certificate authority. +* `certificate` - Base64-encoded certificate authority (CA) certificate. Only available after the certificate authority certificate has been imported. +* `certificate_chain` - Base64-encoded certificate chain that includes any intermediate certificates and chains up to root on-premises certificate that you used to sign your private CA certificate. The chain does not include your private CA certificate. Only available after the certificate authority certificate has been imported. +* `certificate_signing_request` - The base64 PEM-encoded certificate signing request (CSR) for your private CA certificate. +* `not_after` - Date and time after which the certificate authority is not valid. Only available after the certificate authority certificate has been imported. +* `not_before` - Date and time before which the certificate authority is not valid. Only available after the certificate authority certificate has been imported. +* `revocation_configuration` - Nested attribute containing revocation configuration. + * `revocation_configuration.0.crl_configuration` - Nested attribute containing configuration of the certificate revocation list (CRL), if any, maintained by the certificate authority. + * `revocation_configuration.0.crl_configuration.0.custom_cname` - Name inserted into the certificate CRL Distribution Points extension that enables the use of an alias for the CRL distribution point. + * `revocation_configuration.0.crl_configuration.0.enabled` - Boolean value that specifies whether certificate revocation lists (CRLs) are enabled. + * `revocation_configuration.0.crl_configuration.0.expiration_in_days` - Number of days until a certificate expires. 
+ * `revocation_configuration.0.crl_configuration.0.s3_bucket_name` - Name of the S3 bucket that contains the CRL. +* `serial` - Serial number of the certificate authority. Only available after the certificate authority certificate has been imported. +* `status` - Status of the certificate authority. +* `tags` - Specifies a key-value map of user-defined tags that are attached to the certificate authority. +* `type` - The type of the certificate authority. diff --git a/website/docs/d/ami.html.markdown b/website/docs/d/ami.html.markdown index 716d5854817..baae6aaf189 100644 --- a/website/docs/d/ami.html.markdown +++ b/website/docs/d/ami.html.markdown @@ -11,6 +11,8 @@ description: |- Use this data source to get the ID of a registered AMI for use in other resources. +~> **NOTE:** The `owners` argument will be **required** in the next major version. + ## Example Usage ```hcl diff --git a/website/docs/d/ami_ids.html.markdown b/website/docs/d/ami_ids.html.markdown index bc691f5faec..2ef3175d7fc 100644 --- a/website/docs/d/ami_ids.html.markdown +++ b/website/docs/d/ami_ids.html.markdown @@ -10,6 +10,8 @@ description: |- Use this data source to get a list of AMI IDs matching the specified criteria. +~> **NOTE:** The `owners` argument will be **required** in the next major version. + ## Example Usage ```hcl @@ -44,9 +46,10 @@ options to narrow down the list AWS returns. ~> **NOTE:** At least one of `executable_users`, `filter`, `owners` or `name_regex` must be specified. +* `sort_ascending` - (Defaults to `false`) Used to sort AMIs by creation time. + ## Attributes Reference -`ids` is set to the list of AMI IDs, sorted by creation time in descending -order. +`ids` is set to the list of AMI IDs, sorted by creation time according to `sort_ascending`. [1]: http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html diff --git a/website/docs/d/api_gateway_api_key.html.markdown b/website/docs/d/api_gateway_api_key.html.markdown new file mode 100644 index 00000000000..ba8e17701ae --- /dev/null +++ b/website/docs/d/api_gateway_api_key.html.markdown @@ -0,0 +1,30 @@ +--- +layout: "aws" +page_title: "AWS: aws_api_gateway_api_key" +sidebar_current: "docs-aws_api_gateway_api_key" +description: |- + Get information on an API Gateway API Key +--- + +# Data Source: aws_api_gateway_api_key + +Use this data source to get the name and value of a pre-existing API Key, for +example to supply credentials for a dependency microservice. + +## Example Usage + +```hcl +data "aws_api_gateway_api_key" "my_api_key" { + id = "ru3mpjgse6" +} +``` + +## Argument Reference + + * `id` - (Required) The ID of the API Key to look up. + +## Attributes Reference + + * `id` - Set to the ID of the API Key. + * `name` - Set to the name of the API Key. + * `value` - Set to the value of the API Key. diff --git a/website/docs/d/api_gateway_resource.html.markdown b/website/docs/d/api_gateway_resource.html.markdown new file mode 100644 index 00000000000..3a493697acd --- /dev/null +++ b/website/docs/d/api_gateway_resource.html.markdown @@ -0,0 +1,36 @@ +--- +layout: "aws" +page_title: "AWS: aws_api_gateway_resource" +sidebar_current: "docs-aws_api_gateway_resource" +description: |- + Get information on a API Gateway Resource +--- + +# Data Source: aws_api_gateway_resource + +Use this data source to get the id of a Resource in API Gateway. +To fetch the Resource, you must provide the REST API id as well as the full path. 
+ +## Example Usage + +```hcl +data "aws_api_gateway_rest_api" "my_rest_api" { + name = "my-rest-api" +} + +data "aws_api_gateway_resource" "my_resource" { + rest_api_id = "${aws_api_gateway_rest_api.my_rest_api.id}" + path = "/endpoint/path" +} +``` + +## Argument Reference + + * `rest_api_id` - (Required) The REST API id that owns the resource. If no REST API is found, an error will be returned. + * `path` - (Required) The full path of the resource. If no path is found, an error will be returned. + +## Attributes Reference + + * `id` - Set to the ID of the found Resource. + * `parent_id` - Set to the ID of the parent Resource. + * `path_part` - Set to the path relative to the parent Resource. diff --git a/website/docs/d/api_gateway_rest_api.html.markdown b/website/docs/d/api_gateway_rest_api.html.markdown new file mode 100644 index 00000000000..630153aba8c --- /dev/null +++ b/website/docs/d/api_gateway_rest_api.html.markdown @@ -0,0 +1,32 @@ +--- +layout: "aws" +page_title: "AWS: aws_api_gateway_rest_api" +sidebar_current: "docs-aws_api_gateway_rest_api" +description: |- + Get information on a API Gateway REST API +--- + +# Data Source: aws_api_gateway_rest_api + +Use this data source to get the id and root_resource_id of a REST API in +API Gateway. To fetch the REST API you must provide a name to match against. +As there is no unique name constraint on REST APIs this data source will +error if there is more than one match. + +## Example Usage + +```hcl +data "aws_api_gateway_rest_api" "my_rest_api" { + name = "my-rest-api" +} +``` + +## Argument Reference + + * `name` - (Required) The name of the REST API to look up. If no REST API is found with this name, an error will be returned. + If multiple REST APIs are found with this name, an error will be returned. + +## Attributes Reference + + * `id` - Set to the ID of the found REST API. + * `root_resource_id` - Set to the ID of the API Gateway Resource on the found REST API where the route matches '/'. diff --git a/website/docs/d/arn.html.markdown b/website/docs/d/arn.html.markdown new file mode 100644 index 00000000000..bd91e5f7935 --- /dev/null +++ b/website/docs/d/arn.html.markdown @@ -0,0 +1,41 @@ +--- +layout: "aws" +page_title: "AWS: aws_arn" +sidebar_current: "docs-aws-datasource-arn" +description: |- + Parses an ARN into its constituent parts. +--- + +# Data Source: aws_arn + +Parses an Amazon Resource Name (ARN) into its constituent parts. + +## Example Usage + +```hcl +data "aws_arn" "db_instance" { + arn = "arn:aws:rds:eu-west-1:123456789012:db:mysql-db" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `arn` - (Required) The ARN to parse. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `partition` - The partition that the resource is in. + +* `service` - The [service namespace](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces) that identifies the AWS product. + +* `region` - The region the resource resides in. +Note that the ARNs for some resources do not require a region, so this component might be omitted. + +* `account` - The [ID](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html) of the AWS account that owns the resource, without the hyphens. + +* `resource` - The content of this part of the ARN varies by service. 
+It often includes an indicator of the type of resource—for example, an IAM user or Amazon RDS database —followed by a slash (/) or a colon (:), followed by the resource name itself. diff --git a/website/docs/d/autoscaling_groups.html.markdown b/website/docs/d/autoscaling_groups.html.markdown index 470f05e3152..17827654f58 100644 --- a/website/docs/d/autoscaling_groups.html.markdown +++ b/website/docs/d/autoscaling_groups.html.markdown @@ -16,12 +16,12 @@ ASGs within a specific region. This will allow you to pass a list of AutoScaling ```hcl data "aws_autoscaling_groups" "groups" { filter { - name = "key" + name = "key" values = ["Team"] } filter { - name = "value" + name = "value" values = ["Pets"] } } @@ -48,6 +48,7 @@ resource "aws_autoscaling_notification" "slack_notifications" { ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `names` - A list of the Autoscaling Groups in the current region. +* `arns` - A list of the Autoscaling Groups Arns in the current region. diff --git a/website/docs/d/availability_zone.html.markdown b/website/docs/d/availability_zone.html.markdown index 7cc945be36e..c55406218d9 100644 --- a/website/docs/d/availability_zone.html.markdown +++ b/website/docs/d/availability_zone.html.markdown @@ -84,7 +84,7 @@ alone would match a single AZ only in a region that itself has only one AZ. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `name` - The name of the selected availability zone. diff --git a/website/docs/d/availability_zones.html.markdown b/website/docs/d/availability_zones.html.markdown index 3000a2f8b16..b04bfd67c5c 100644 --- a/website/docs/d/availability_zones.html.markdown +++ b/website/docs/d/availability_zones.html.markdown @@ -47,6 +47,6 @@ to which the underlying AWS account has access, regardless of their state. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `names` - A list of the Availability Zone names available to the account. diff --git a/website/docs/d/batch_compute_environment.html.markdown b/website/docs/d/batch_compute_environment.html.markdown new file mode 100644 index 00000000000..cc41d343c12 --- /dev/null +++ b/website/docs/d/batch_compute_environment.html.markdown @@ -0,0 +1,38 @@ +--- +layout: "aws" +page_title: "AWS: aws_batch_compute_environment" +sidebar_current: "docs-aws-datasource-batch-compute-environment" +description: |- + Provides details about a batch compute environment +--- + +# Data Source: aws_batch_compute_environment + +The Batch Compute Environment data source allows access to details of a specific +compute environment within AWS Batch. + +## Example Usage + +```hcl +data "aws_batch_compute_environment" "batch-mongo" { + compute_environment_name = "batch-mongo-production" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `compute_environment_name` - (Required) The name of the Batch Compute Environment + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the compute environment. +* `ecs_cluster_arn` - The ARN of the underlying Amazon ECS cluster used by the compute environment. +* `service_role` - The ARN of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf. 
+* `type` - The type of the compute environment (for example, `MANAGED` or `UNMANAGED`). +* `status` - The current status of the compute environment (for example, `CREATING` or `VALID`). +* `status_reason` - A short, human-readable string to provide additional details about the current status of the compute environment. +* `state` - The state of the compute environment (for example, `ENABLED` or `DISABLED`). If the state is `ENABLED`, then the compute environment accepts jobs from a queue and can scale out automatically based on queues. diff --git a/website/docs/d/batch_job_queue.html.markdown b/website/docs/d/batch_job_queue.html.markdown new file mode 100644 index 00000000000..5262378afdb --- /dev/null +++ b/website/docs/d/batch_job_queue.html.markdown @@ -0,0 +1,42 @@ +--- +layout: "aws" +page_title: "AWS: aws_batch_job_queue" +sidebar_current: "docs-aws-datasource-batch-job-queue" +description: |- + Provides details about a batch job queue +--- + +# Data Source: aws_batch_job_queue + +The Batch Job Queue data source allows access to details of a specific +job queue within AWS Batch. + +## Example Usage + +```hcl +data "aws_batch_job_queue" "test-queue" { + name = "tf-test-batch-job-queue" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the job queue. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the job queue. +* `status` - The current status of the job queue (for example, `CREATING` or `VALID`). +* `status_reason` - A short, human-readable string to provide additional details about the current status + of the job queue. +* `state` - Describes the ability of the queue to accept new jobs (for example, `ENABLED` or `DISABLED`). +* `priority` - The priority of the job queue. Job queues with a higher priority are evaluated first when + associated with the same compute environment. +* `compute_environment_order` - The compute environments that are attached to the job queue and the order in + which job placement is preferred. Compute environments are selected for job placement in ascending order. + * `compute_environment_order.#.order` - The order of the compute environment. + * `compute_environment_order.#.compute_environment` - The ARN of the compute environment. diff --git a/website/docs/d/billing_service_account.html.markdown b/website/docs/d/billing_service_account.html.markdown index 5545d230349..4ccbe71367e 100644 --- a/website/docs/d/billing_service_account.html.markdown +++ b/website/docs/d/billing_service_account.html.markdown @@ -41,7 +41,7 @@ resource "aws_s3_bucket" "billing_logs" { "s3:PutObject" ], "Effect": "Allow", - "Resource": "arn:aws:s3:::my-billing-tf-test-bucket/AWSLogs/*", + "Resource": "arn:aws:s3:::my-billing-tf-test-bucket/*", "Principal": { "AWS": [ "${data.aws_billing_service_account.main.arn}" diff --git a/website/docs/d/canonical_user_id.html.markdown b/website/docs/d/canonical_user_id.html.markdown index e9b19657f03..4b75eb9d1a1 100644 --- a/website/docs/d/canonical_user_id.html.markdown +++ b/website/docs/d/canonical_user_id.html.markdown @@ -28,7 +28,7 @@ There are no arguments available for this data source. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The canonical user ID associated with the AWS account. 
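A minimal sketch of consuming the `aws_canonical_user_id` value documented just above, wiring it into an S3 bucket ACL grant (the bucket name and grant layout are illustrative assumptions, not from the upstream page, and assume a provider version that supports the `grant` block on `aws_s3_bucket`):

```hcl
# Look up the canonical user ID of the effective AWS account.
data "aws_canonical_user_id" "current" {}

resource "aws_s3_bucket" "example" {
  bucket = "example-grant-bucket" # hypothetical bucket name

  # Grant the current account full control via its canonical user ID.
  grant {
    id          = "${data.aws_canonical_user_id.current.id}"
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  }
}
```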
diff --git a/website/docs/d/cloudformation_export.html.markdown b/website/docs/d/cloudformation_export.html.markdown new file mode 100644 index 00000000000..fd20926c732 --- /dev/null +++ b/website/docs/d/cloudformation_export.html.markdown @@ -0,0 +1,39 @@ +--- +layout: "aws" +page_title: "AWS: aws_cloudformation_export" +sidebar_current: "docs-aws-datasource-cloudformation-export" +description: |- + Provides metadata of a CloudFormation Export (e.g. Cross Stack References) +--- + +# Data Source: aws_cloudformation_export + +The CloudFormation Export data source allows access to stack +exports specified in the [Output](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html) section of the Cloudformation Template using the optional Export Property. + + -> Note: If you are trying to use a value from a Cloudformation Stack in the same Terraform run please use normal interpolation or Cloudformation Outputs. + +## Example Usage + +```hcl +data "aws_cloudformation_export" "subnet_id" { + name = "mySubnetIdExportName" +} + +resource "aws_instance" "web" { + ami = "ami-abb07bcb" + instance_type = "t1.micro" + subnet_id = "${data.aws_cloudformation_export.subnet_id.value}" +} +``` + +## Argument Reference + +* `name` - (Required) The name of the export as it appears in the console or from [list-exports](http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-exports.html) + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `value` - The value from Cloudformation export identified by the export name found from [list-exports](http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-exports.html) +* `exporting_stack_id` - The exporting_stack_id (AWS ARNs) equivalent `ExportingStackId` from [list-exports](http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-exports.html) diff --git a/website/docs/d/cloudformation_stack.html.markdown b/website/docs/d/cloudformation_stack.html.markdown index 815b8813180..9e4d87be65e 100644 --- a/website/docs/d/cloudformation_stack.html.markdown +++ b/website/docs/d/cloudformation_stack.html.markdown @@ -37,7 +37,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `capabilities` - A list of capabilities * `description` - Description of the stack diff --git a/website/docs/d/cloudhsm_v2_cluster.html.markdown b/website/docs/d/cloudhsm_v2_cluster.html.markdown new file mode 100644 index 00000000000..72242b532f3 --- /dev/null +++ b/website/docs/d/cloudhsm_v2_cluster.html.markdown @@ -0,0 +1,40 @@ +--- +layout: "aws" +page_title: "AWS: cloudhsm_v2_cluster" +sidebar_current: "docs-aws-datasource-cloudhsm-v2-cluster" +description: |- + Get information on a CloudHSM v2 cluster. +--- + +# Data Source: aws_cloudhsm_v2_cluster + +Use this data source to get information about a CloudHSM v2 cluster + +## Example Usage + +```hcl +data "aws_cloudhsm_v2_cluster" "cluster" { + cluster_id = "cluster-testclusterid" +} +``` +## Argument Reference + +The following arguments are supported: + +* `cluster_id` - (Required) The id of Cloud HSM v2 cluster. +* `cluster_state` - (Optional) The state of the cluster to be found. + +## Attributes Reference + +The following attributes are exported: + +* `vpc_id` - The id of the VPC that the CloudHSM cluster resides in. 
+* `security_group_id` - The ID of the security group associated with the CloudHSM cluster. +* `subnet_ids` - The IDs of subnets in which cluster operates. +* `cluster_certificates` - The list of cluster certificates. + * `cluster_certificates.0.cluster_certificate` - The cluster certificate issued (signed) by the issuing certificate authority (CA) of the cluster's owner. + * `cluster_certificates.0.cluster_csr` - The certificate signing request (CSR). Available only in UNINITIALIZED state. + * `cluster_certificates.0.aws_hardware_certificate` - The HSM hardware certificate issued (signed) by AWS CloudHSM. + * `cluster_certificates.0.hsm_certificate` - The HSM certificate issued (signed) by the HSM hardware. + * `cluster_certificates.0.manufacturer_hardware_certificate` - The HSM hardware certificate issued (signed) by the hardware manufacturer. +The number of available cluster certificates may vary depending on state of the cluster. diff --git a/website/docs/d/cloudwatch_log_group.html.markdown b/website/docs/d/cloudwatch_log_group.html.markdown new file mode 100644 index 00000000000..7781809171c --- /dev/null +++ b/website/docs/d/cloudwatch_log_group.html.markdown @@ -0,0 +1,32 @@ +--- +layout: "aws" +page_title: "AWS: aws_cloudwatch_log_group" +sidebar_current: "docs-aws-cloudwatch-log-group" +description: |- + Get information on a Cloudwatch Log Group. +--- + +# Data Source: aws_cloudwatch_log_group + +Use this data source to get information about an AWS Cloudwatch Log Group + +## Example Usage + +```hcl +data "aws_cloudwatch_log_group" "example" { + name = "MyImportantLogs" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the Cloudwatch log group + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the Cloudwatch log group +* `creation_time` - The creation time of the log group, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. diff --git a/website/docs/d/codecommit_repository.html.markdown b/website/docs/d/codecommit_repository.html.markdown new file mode 100644 index 00000000000..b7bbb214f10 --- /dev/null +++ b/website/docs/d/codecommit_repository.html.markdown @@ -0,0 +1,34 @@ +--- +layout: "aws" +page_title: "AWS: aws_codecommit_repository" +sidebar_current: "docs-aws-datasource-codecommit-repository" +description: |- + Provides details about CodeCommit Repository. +--- + +# Data Source: aws_codecommit_repository + +The CodeCommit Repository data source allows the ARN, Repository ID, Repository URL for HTTP and Repository URL for SSH to be retrieved for an CodeCommit repository. + +## Example Usage + +```hcl +data "aws_codecommit_repository" "test" { + repository_name = "MyTestRepository" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `repository_name` - (Required) The name for the repository. This needs to be less than 100 characters. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `repository_id` - The ID of the repository +* `arn` - The ARN of the repository +* `clone_url_http` - The URL to use for cloning the repository over HTTPS. +* `clone_url_ssh` - The URL to use for cloning the repository over SSH. 
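For context, a brief sketch of surfacing the clone URLs exported by the `aws_codecommit_repository` data source above so they can be consumed outside Terraform (the output names are illustrative, not part of the upstream page):

```hcl
data "aws_codecommit_repository" "test" {
  repository_name = "MyTestRepository"
}

# Expose both clone URLs so CI tooling or operators can pick them up.
output "clone_url_http" {
  value = "${data.aws_codecommit_repository.test.clone_url_http}"
}

output "clone_url_ssh" {
  value = "${data.aws_codecommit_repository.test.clone_url_ssh}"
}
```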
diff --git a/website/docs/d/cognito_user_pools.markdown b/website/docs/d/cognito_user_pools.markdown new file mode 100644 index 00000000000..cef7a0b2871 --- /dev/null +++ b/website/docs/d/cognito_user_pools.markdown @@ -0,0 +1,39 @@ +--- +layout: "aws" +page_title: "AWS: aws_cognito_user_pools" +sidebar_current: "docs-aws-cognito-user-pools" +description: |- + Get list of cognito user pools. +--- + +# Data Source: aws_cognito_user_pools + +Use this data source to get a list of cognito user pools. + +## Example Usage + +```hcl +data "aws_api_gateway_rest_api" "selected" { + name = "${var.api_gateway_name}" +} + +data "aws_cognito_user_pools" "selected" { + name = "${var.cognito_user_pool_name}" +} + +resource "aws_api_gateway_authorizer" "cognito" { + name = "cognito" + type = "COGNITO_USER_POOLS" + rest_api_id = "${data.aws_api_gateway_rest_api.selected.id}" + provider_arns = ["${data.aws_cognito_user_pools.selected.arns}"] +} +``` + +## Argument Reference + +* `name` - (required) Name of the cognito user pools. Name is not a unique attribute for cognito user pool, so multiple pools might be returned with given name. + + +## Attributes Reference + +* `ids` - The list of cognito user pool ids. diff --git a/website/docs/d/db_cluster_snapshot.html.markdown b/website/docs/d/db_cluster_snapshot.html.markdown new file mode 100644 index 00000000000..af265cabe96 --- /dev/null +++ b/website/docs/d/db_cluster_snapshot.html.markdown @@ -0,0 +1,82 @@ +--- +layout: "aws" +page_title: "AWS: aws_db_cluster_snapshot" +sidebar_current: "docs-aws-datasource-db-cluster-snapshot" +description: |- + Get information on a DB Cluster Snapshot. +--- + +# Data Source: aws_db_cluster_snapshot + +Use this data source to get information about a DB Cluster Snapshot for use when provisioning DB clusters. + +~> **NOTE:** This data source does not apply to snapshots created on DB Instances. +See the [`aws_db_snapshot` data source](/docs/providers/aws/d/db_snapshot.html) for DB Instance snapshots. + +## Example Usage + +```hcl +data "aws_db_cluster_snapshot" "development_final_snapshot" { + db_cluster_identifier = "development_cluster" + most_recent = true +} + +# Use the last snapshot of the dev database before it was destroyed to create +# a new dev database. +resource "aws_rds_cluster" "aurora" { + cluster_identifier = "development_cluster" + snapshot_identifier = "${data.aws_db_cluster_snapshot.development_final_snapshot.id}" + db_subnet_group_name = "my_db_subnet_group" + + lifecycle { + ignore_changes = ["snapshot_identifier"] + } +} + +resource "aws_rds_cluster_instance" "aurora" { + cluster_identifier = "${aws_rds_cluster.aurora.id}" + instance_class = "db.t2.small" + db_subnet_group_name = "my_db_subnet_group" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `most_recent` - (Optional) If more than one result is returned, use the most recent Snapshot. + +* `db_cluster_identifier` - (Optional) Returns the list of snapshots created by the specific db_cluster + +* `db_cluster_snapshot_identifier` - (Optional) Returns information on a specific snapshot_id. + +* `snapshot_type` - (Optional) The type of snapshots to be returned. If you don't specify a SnapshotType +value, then both automated and manual DB cluster snapshots are returned. Shared and public DB Cluster Snapshots are not +included in the returned results by default. Possible values are, `automated`, `manual`, `shared` and `public`. 
+ +* `include_shared` - (Optional) Set this value to true to include shared manual DB Cluster Snapshots from other +AWS accounts that this AWS account has been given permission to copy or restore, otherwise set this value to false. +The default is `false`. + +* `include_public` - (Optional) Set this value to true to include manual DB Cluster Snapshots that are public and can be +copied or restored by any AWS account, otherwise set this value to false. The default is `false`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `allocated_storage` - Specifies the allocated storage size in gigabytes (GB). +* `availability_zones` - List of EC2 Availability Zones that instances in the DB cluster snapshot can be restored in. +* `db_cluster_identifier` - Specifies the DB cluster identifier of the DB cluster that this DB cluster snapshot was created from. +* `db_cluster_snapshot_arn` - The Amazon Resource Name (ARN) for the DB Cluster Snapshot. +* `engine_version` - Version of the database engine for this DB cluster snapshot. +* `engine` - Specifies the name of the database engine. +* `id` - The snapshot ID. +* `kms_key_id` - If storage_encrypted is true, the AWS KMS key identifier for the encrypted DB cluster snapshot. +* `license_model` - License model information for the restored DB cluster. +* `port` - Port that the DB cluster was listening on at the time of the snapshot. +* `snapshot_create_time` - Time when the snapshot was taken, in Universal Coordinated Time (UTC). +* `source_db_cluster_snapshot_identifier` - The DB Cluster Snapshot Arn that the DB Cluster Snapshot was copied from. It only has value in case of cross customer or cross region copy. +* `status` - The status of this DB Cluster Snapshot. +* `storage_encrypted` - Specifies whether the DB cluster snapshot is encrypted. +* `vpc_id` - The VPC ID associated with the DB cluster snapshot. diff --git a/website/docs/d/db_event_categories.html.markdown b/website/docs/d/db_event_categories.html.markdown new file mode 100644 index 00000000000..1ae81499740 --- /dev/null +++ b/website/docs/d/db_event_categories.html.markdown @@ -0,0 +1,45 @@ +--- +layout: "aws" +page_title: "AWS: aws_db_event_categories" +sidebar_current: "docs-aws-datasource-db-event-categories" +description: |- + Provides a list of DB Event Categories which can be used to pass values into DB Event Subscription. +--- + +# Data Source: aws_db_event_categories + +## Example Usage + +List the event categories of all the RDS resources. + +```hcl +data "aws_db_event_categories" "example" {} + +output "example" { + value = "${data.aws_db_event_categories.example.event_categories}" +} +``` + +List the event categories specific to the RDS resource `db-snapshot`. + +```hcl +data "aws_db_event_categories" "example" { + source_type = "db-snapshot" +} + +output "example" { + value = "${data.aws_db_event_categories.example.event_categories}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `source_type` - (Optional) The type of source that will be generating the events. Valid options are db-instance, db-security-group, db-parameter-group, db-snapshot, db-cluster or db-cluster-snapshot. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `event_categories` - A list of the event categories. 
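Since the `aws_db_event_categories` page above notes that its result can be passed into a DB Event Subscription, a minimal sketch of that wiring follows (the topic and subscription names are assumptions for illustration):

```hcl
# Fetch the event categories that apply to DB snapshots.
data "aws_db_event_categories" "snapshot" {
  source_type = "db-snapshot"
}

resource "aws_sns_topic" "rds_events" {
  name = "rds-snapshot-events" # hypothetical topic name
}

# Subscribe the topic to every snapshot event category returned above.
resource "aws_db_event_subscription" "snapshot_events" {
  name             = "rds-snapshot-events-example" # hypothetical subscription name
  sns_topic        = "${aws_sns_topic.rds_events.arn}"
  source_type      = "db-snapshot"
  event_categories = ["${data.aws_db_event_categories.snapshot.event_categories}"]
}
```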
diff --git a/website/docs/d/db_instance.html.markdown b/website/docs/d/db_instance.html.markdown index 04146fdd710..9127cf6281e 100644 --- a/website/docs/d/db_instance.html.markdown +++ b/website/docs/d/db_instance.html.markdown @@ -26,9 +26,9 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: -* `address` - The address of the RDS instance. +* `address` - The hostname of the RDS instance. See also `endpoint` and `port`. * `allocated_storage` - Specifies the allocated storage size specified in gigabytes. * `auto_minor_version_upgrade` - Indicates that minor version patches are applied automatically. * `availability_zone` - Specifies the name of the Availability Zone the DB instance is located in. @@ -41,7 +41,8 @@ The following attributes are exported: * `db_security_groups` - Provides List of DB security groups associated to this DB instance. * `db_subnet_group` - Specifies the name of the subnet group associated with the DB instance. * `db_instance_port` - Specifies the port that the DB instance listens on. -* `endpoint` - The connection endpoint. +* `enabled_cloudwatch_logs_exports` - List of log types to export to cloudwatch. +* `endpoint` - The connection endpoint in `address:port` format. * `engine` - Provides the name of the database engine to be used for this DB instance. * `engine_version` - Indicates the database engine version. * `hosted_zone_id` - The canonical hosted zone ID of the DB instance (to be used in a Route 53 Alias record). diff --git a/website/docs/d/db_snapshot.html.markdown b/website/docs/d/db_snapshot.html.markdown index 6ad43a083e9..59646bcc3a9 100644 --- a/website/docs/d/db_snapshot.html.markdown +++ b/website/docs/d/db_snapshot.html.markdown @@ -11,6 +11,7 @@ description: |- Use this data source to get information about a DB Snapshot for use when provisioning DB instances ~> **NOTE:** This data source does not apply to snapshots created on Aurora DB clusters. +See the [`aws_db_cluster_snapshot` data source](/docs/providers/aws/d/db_cluster_snapshot.html) for DB Cluster snapshots. ## Example Usage @@ -28,15 +29,16 @@ resource "aws_db_instance" "prod" { } data "aws_db_snapshot" "latest_prod_snapshot" { - db_instance_identifier = "${aws_db_instance.prod.identifier}" - most_recent = true + db_instance_identifier = "${aws_db_instance.prod.id}" + most_recent = true } # Use the latest production snapshot to create a dev instance. resource "aws_db_instance" "dev" { instance_class = "db.t2.micro" - name = "mydb-dev" + name = "mydbdev" snapshot_identifier = "${data.aws_db_snapshot.latest_prod_snapshot.id}" + lifecycle { ignore_changes = ["snapshot_identifier"] } @@ -54,21 +56,21 @@ recent Snapshot. * `db_snapshot_identifier` - (Optional) Returns information on a specific snapshot_id. -* `snapshot_type` - (Optional) The type of snapshots to be returned. If you don't specify a SnapshotType -value, then both automated and manual snapshots are returned. Shared and public DB snapshots are not +* `snapshot_type` - (Optional) The type of snapshots to be returned. If you don't specify a SnapshotType +value, then both automated and manual snapshots are returned. Shared and public DB snapshots are not included in the returned results by default. Possible values are, `automated`, `manual`, `shared` and `public`. 
-* `include_shared` - (Optional) Set this value to true to include shared manual DB snapshots from other -AWS accounts that this AWS account has been given permission to copy or restore, otherwise set this value to false. +* `include_shared` - (Optional) Set this value to true to include shared manual DB snapshots from other +AWS accounts that this AWS account has been given permission to copy or restore, otherwise set this value to false. The default is `false`. -* `include_public` - (Optional) Set this value to true to include manual DB snapshots that are public and can be +* `include_public` - (Optional) Set this value to true to include manual DB snapshots that are public and can be copied or restored by any AWS account, otherwise set this value to false. The default is `false`. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The snapshot ID. * `allocated_storage` - Specifies the allocated storage size in gigabytes (GB). @@ -85,5 +87,5 @@ The following attributes are exported: * `source_region` - The region that the DB snapshot was created in or copied from. * `status` - Specifies the status of this DB snapshot. * `storage_type` - Specifies the storage type associated with DB snapshot. -* `vpc_id` - Specifies the storage type associated with DB snapshot. +* `vpc_id` - Specifies the ID of the VPC associated with the DB snapshot. * `snapshot_create_time` - Provides the time when the snapshot was taken, in Universal Coordinated Time (UTC). diff --git a/website/docs/d/dx_gateway.html.markdown b/website/docs/d/dx_gateway.html.markdown new file mode 100644 index 00000000000..1ac9c34d397 --- /dev/null +++ b/website/docs/d/dx_gateway.html.markdown @@ -0,0 +1,28 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_gateway" +sidebar_current: "docs-aws-datasource-dx-gateway" +description: |- + Retrieve information about a Direct Connect Gateway +--- + +# Data Source: aws_dx_gateway + +Retrieve information about a Direct Connect Gateway. + +## Example Usage + +```hcl +data "aws_dx_gateway" "example" { + name = "example" +} +``` + +## Argument Reference + +* `name` - (Required) The name of the gateway to retrieve. + +## Attributes Reference + +* `amazon_side_asn` - The ASN on the Amazon side of the connection. +* `id` - The ID of the gateway. diff --git a/website/docs/d/dynamodb_table.html.markdown b/website/docs/d/dynamodb_table.html.markdown index 1b55f23193d..b747b6f50fe 100644 --- a/website/docs/d/dynamodb_table.html.markdown +++ b/website/docs/d/dynamodb_table.html.markdown @@ -27,4 +27,4 @@ The following arguments are supported: ## Attributes Reference See the [DynamoDB Table Resource](/docs/providers/aws/r/dynamodb_table.html) for details on the -returned attributes - they are identical. \ No newline at end of file +returned attributes - they are identical. diff --git a/website/docs/d/ebs_snapshot.html.markdown b/website/docs/d/ebs_snapshot.html.markdown index 57f4e9f72b2..e5e923570f6 100644 --- a/website/docs/d/ebs_snapshot.html.markdown +++ b/website/docs/d/ebs_snapshot.html.markdown @@ -48,7 +48,7 @@ several valid keys, for a full reference, check out ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The snapshot ID (e.g. snap-59fcb34e). * `snapshot_id` - The snapshot ID (e.g. snap-59fcb34e). 
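As a short illustration of the `aws_ebs_snapshot` attributes documented above feeding a restore, the snapshot ID can seed a new volume (the filter tag value and availability zone are assumptions, not from the upstream docs):

```hcl
# Find the most recent snapshot carrying the given Name tag.
data "aws_ebs_snapshot" "example" {
  most_recent = true

  filter {
    name   = "tag:Name"
    values = ["example-snapshot"] # hypothetical tag value
  }
}

# Create a volume from that snapshot; size defaults to the snapshot's size.
resource "aws_ebs_volume" "restored" {
  availability_zone = "us-west-2a" # assumed AZ
  snapshot_id       = "${data.aws_ebs_snapshot.example.id}"
}
```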
diff --git a/website/docs/d/ebs_volume.html.markdown b/website/docs/d/ebs_volume.html.markdown index c46b089a53d..16acd66044e 100644 --- a/website/docs/d/ebs_volume.html.markdown +++ b/website/docs/d/ebs_volume.html.markdown @@ -42,7 +42,7 @@ several valid keys, for a full reference, check out ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The volume ID (e.g. vol-59fcb34e). * `volume_id` - The volume ID (e.g. vol-59fcb34e). diff --git a/website/docs/d/ecr_repository.html.markdown b/website/docs/d/ecr_repository.html.markdown index 91ee3730851..54b88f5760d 100644 --- a/website/docs/d/ecr_repository.html.markdown +++ b/website/docs/d/ecr_repository.html.markdown @@ -26,7 +26,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - Full ARN of the repository. * `registry_id` - The registry ID where the repository was created. diff --git a/website/docs/d/ecs_cluster.html.markdown b/website/docs/d/ecs_cluster.html.markdown index 309ba59dd96..a7ed1059dd5 100644 --- a/website/docs/d/ecs_cluster.html.markdown +++ b/website/docs/d/ecs_cluster.html.markdown @@ -27,7 +27,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The ARN of the ECS Cluster * `status` - The status of the ECS Cluster diff --git a/website/docs/d/ecs_container_definition.html.markdown b/website/docs/d/ecs_container_definition.html.markdown index f6c8f2587ca..0d31fab8a70 100644 --- a/website/docs/d/ecs_container_definition.html.markdown +++ b/website/docs/d/ecs_container_definition.html.markdown @@ -29,7 +29,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `image` - The docker image in use, including the digest * `image_digest` - The digest of the docker image in use diff --git a/website/docs/d/ecs_service.html.markdown b/website/docs/d/ecs_service.html.markdown new file mode 100644 index 00000000000..afa5746a011 --- /dev/null +++ b/website/docs/d/ecs_service.html.markdown @@ -0,0 +1,38 @@ +--- +layout: "aws" +page_title: "AWS: aws_ecs_service" +sidebar_current: "docs-aws-datasource-ecs-service" +description: |- + Provides details about an ecs service +--- + +# Data Source: aws_ecs_service + +The ECS Service data source allows access to details of a specific +Service within a AWS ECS Cluster. 
+ +## Example Usage + +```hcl +data "aws_ecs_service" "example" { + service_name = "example" + cluster_arn = "${data.aws_ecs_cluster.example.arn}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `service_name` - (Required) The name of the ECS Service +* `cluster_arn` - (Required) The arn of the ECS Cluster + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the ECS Service +* `desired_count` - The number of tasks for the ECS Service +* `launch_type` - The launch type for the ECS Service +* `scheduling_strategy` - The scheduling strategy for the ECS Service +* `task_definition` - The family for the latest ACTIVE revision diff --git a/website/docs/d/ecs_task_definition.html.markdown b/website/docs/d/ecs_task_definition.html.markdown index 8e680543d5a..f7dca502554 100644 --- a/website/docs/d/ecs_task_definition.html.markdown +++ b/website/docs/d/ecs_task_definition.html.markdown @@ -63,7 +63,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `family` - The family of this task definition * `network_mode` - The Docker networking mode to use for the containers in this task. diff --git a/website/docs/d/efs_file_system.html.markdown b/website/docs/d/efs_file_system.html.markdown index 82fc30e4d9a..28965856119 100644 --- a/website/docs/d/efs_file_system.html.markdown +++ b/website/docs/d/efs_file_system.html.markdown @@ -14,7 +14,7 @@ Provides information about an Elastic File System (EFS). ```hcl variable "file_system_id" { - type = "string" + type = "string" default = "" } @@ -32,8 +32,9 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: +* `arn` - Amazon Resource Name of the file system. * `performance_mode` - The PerformanceMode of the file system. * `tags` - The list of tags assigned to the file system. * `encrypted` - Whether EFS is encrypted. diff --git a/website/docs/d/efs_mount_target.html.markdown b/website/docs/d/efs_mount_target.html.markdown index 0cb47413d84..a9e24abef3a 100644 --- a/website/docs/d/efs_mount_target.html.markdown +++ b/website/docs/d/efs_mount_target.html.markdown @@ -14,7 +14,7 @@ Provides information about an Elastic File System Mount Target (EFS). ```hcl variable "mount_target_id" { - type = "string" + type = "string" default = "" } @@ -31,8 +31,9 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: +* `file_system_arn` - Amazon Resource Name of the file system for which the mount target is intended. * `file_system_id` - ID of the file system for which the mount target is intended. * `subnet_id` - ID of the mount target's subnet. * `ip_address` - Address at which the file system may be mounted via the mount target. diff --git a/website/docs/d/eip.html.markdown b/website/docs/d/eip.html.markdown index 9a6535eedb4..fffd6fb4646 100644 --- a/website/docs/d/eip.html.markdown +++ b/website/docs/d/eip.html.markdown @@ -10,25 +10,42 @@ description: |- `aws_eip` provides details about a specific Elastic IP. -This resource can prove useful when a module accepts an allocation ID or -public IP as an input variable and needs to determine the other. 
- ## Example Usage -The following example shows how one might accept a public IP as a variable -and use this data source to obtain the allocation ID. +### Search By Allocation ID (VPC only) ```hcl -variable "instance_id" {} -variable "public_ip" {} +data "aws_eip" "by_allocation_id" { + id = "eipalloc-12345678" +} +``` + +### Search By Filters (EC2-Classic or VPC) + +```hcl +data "aws_eip" "by_filter" { + filter { + name = "tag:Name" + values = ["exampleNameTagValue"] + } +} +``` + +### Search By Public IP (EC2-Classic or VPC) -data "aws_eip" "proxy_ip" { - public_ip = "${var.public_ip}" +```hcl +data "aws_eip" "by_public_ip" { + public_ip = "1.2.3.4" } +``` -resource "aws_eip_association" "proxy_eip" { - instance_id = "${var.instance_id}" - allocation_id = "${data.aws_eip.proxy_ip.id}" +### Search By Tags (EC2-Classic or VPC) + +```hcl +data "aws_eip" "by_tags" { + tags { + Name = "exampleNameTagValue" + } } ``` @@ -38,13 +55,22 @@ The arguments of this data source act as filters for querying the available Elastic IPs in the current region. The given filters must match exactly one Elastic IP whose data will be exported as attributes. -* `id` - (Optional) The allocation id of the specific EIP to retrieve. - +* `filter` - (Optional) One or more name/value pairs to use as filters. There are several valid keys, for a full reference, check out the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAddresses.html). +* `id` - (Optional) The allocation id of the specific VPC EIP to retrieve. If a classic EIP is required, do NOT set `id`, only set `public_ip` * `public_ip` - (Optional) The public IP of the specific EIP to retrieve. +* `tags` - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired Elastic IP ## Attributes Reference -All of the argument attributes are also exported as result attributes. This -data source will complete the data by populating any fields that are not -included in the configuration with the data for the selected Elastic IP. +In addition to all arguments above, the following attributes are exported: +* `association_id` - The ID representing the association of the address with an instance in a VPC. +* `domain` - Indicates whether the address is for use in EC2-Classic (standard) or in a VPC (vpc). +* `id` - If VPC Elastic IP, the allocation identifier. If EC2-Classic Elastic IP, the public IP address. +* `instance_id` - The ID of the instance that the address is associated with (if any). +* `network_interface_id` - The ID of the network interface. +* `network_interface_owner_id` - The ID of the AWS account that owns the network interface. +* `private_ip` - The private IP address associated with the Elastic IP address. +* `public_ip` - Public IP address of Elastic IP. +* `public_ipv4_pool` - The ID of an address pool. +* `tags` - Key-value map of tags associated with Elastic IP. diff --git a/website/docs/d/eks_cluster.html.markdown b/website/docs/d/eks_cluster.html.markdown new file mode 100644 index 00000000000..78778e51856 --- /dev/null +++ b/website/docs/d/eks_cluster.html.markdown @@ -0,0 +1,47 @@ +--- +layout: "aws" +page_title: "AWS: aws_eks_cluster" +sidebar_current: "docs-aws-datasource-eks-cluster" +description: |- + Retrieve information about an EKS Cluster +--- + +# Data Source: aws_eks_cluster + +Retrieve information about an EKS Cluster. 
+ +## Example Usage + +```hcl +data "aws_eks_cluster" "example" { + name = "example" +} + +output "endpoint" { + value = "${data.aws_eks_cluster.example.endpoint}" +} + +output "kubeconfig-certificate-authority-data" { + value = "${data.aws_eks_cluster.example.certificate_authority.0.data}" +} +``` + +## Argument Reference + +* `name` - (Required) The name of the cluster + +## Attributes Reference + +* `id` - The name of the cluster +* `arn` - The Amazon Resource Name (ARN) of the cluster. +* `certificate_authority` - Nested attribute containing `certificate-authority-data` for your cluster. + * `data` - The base64 encoded certificate data required to communicate with your cluster. Add this to the `certificate-authority-data` section of the `kubeconfig` file for your cluster. +* `created_at` - The Unix epoch time stamp in seconds for when the cluster was created. +* `endpoint` - The endpoint for your Kubernetes API server. +* `platform_version` - The platform version for the cluster. +* `role_arn` - The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf. +* `version` - The Kubernetes server version for the cluster. +* `vpc_config` - Nested attribute containing VPC configuration for the cluster. + * `security_group_ids` – List of security group IDs + * `subnet_ids` – List of subnet IDs + * `vpc_id` – The VPC associated with your cluster. diff --git a/website/docs/d/elastic_beanstalk_solution_stack.html.markdown b/website/docs/d/elastic_beanstalk_solution_stack.html.markdown index 31e1b6983e0..0a6ee4fd9e4 100644 --- a/website/docs/d/elastic_beanstalk_solution_stack.html.markdown +++ b/website/docs/d/elastic_beanstalk_solution_stack.html.markdown @@ -14,9 +14,9 @@ Use this data source to get the name of a elastic beanstalk solution stack. ```hcl data "aws_elastic_beanstalk_solution_stack" "multi_docker" { - most_recent = true + most_recent = true - name_regex = "^64bit Amazon Linux (.*) Multi-container Docker (.*)$" + name_regex = "^64bit Amazon Linux (.*) Multi-container Docker (.*)$" } ``` diff --git a/website/docs/d/elasticache_cluster.html.markdown b/website/docs/d/elasticache_cluster.html.markdown index 6764941c0b6..6845fb065f6 100644 --- a/website/docs/d/elasticache_cluster.html.markdown +++ b/website/docs/d/elasticache_cluster.html.markdown @@ -27,7 +27,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `node_type` – The cluster node type. * `num_cache_nodes` – The number of cache nodes that the cache cluster has. @@ -38,19 +38,19 @@ The following attributes are exported: * `security_group_ids` – List VPC security groups associated with the cache cluster. * `parameter_group_name` – Name of the parameter group associated with this cache cluster. * `replication_group_id` - The replication group to which this cache cluster belongs. -* `maintenance_window` – Specifies the weekly time range for when maintenance +* `maintenance_window` – Specifies the weekly time range for when maintenance on the cache cluster is performed. * `snapshot_window` - The daily time range (in UTC) during which ElastiCache will begin taking a daily snapshot of the cache cluster. * `snapshot_retention_limit` - The number of days for which ElastiCache will retain automatic cache cluster snapshots before deleting them. 
* `availability_zone` - The Availability Zone for the cache cluster. -* `notification_topic_arn` – An Amazon Resource Name (ARN) of an +* `notification_topic_arn` – An Amazon Resource Name (ARN) of an SNS topic that ElastiCache notifications get sent to. * `port` – The port number on which each of the cache nodes will accept connections. -* `configuration_endpoint` - The configuration endpoint to allow host discovery. -* `cluster_address` - The DNS name of the cache cluster without the port appended. +* `configuration_endpoint` - (Memcached only) The configuration endpoint to allow host discovery. +* `cluster_address` - (Memcached only) The DNS name of the cache cluster without the port appended. * `cache_nodes` - List of node objects including `id`, `address`, `port` and `availability_zone`. Referenceable e.g. as `${data.aws_elasticache_cluster.bar.cache_nodes.0.address}` * `tags` - The tags assigned to the resource diff --git a/website/docs/d/elasticache_replication_group.html.markdown b/website/docs/d/elasticache_replication_group.html.markdown index fbb02603aaf..ec0f9d71c6c 100644 --- a/website/docs/d/elasticache_replication_group.html.markdown +++ b/website/docs/d/elasticache_replication_group.html.markdown @@ -26,7 +26,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `replication_group_id` - The identifier for the replication group. * `replication_group_description` - The description of the replication group. @@ -34,6 +34,7 @@ The following attributes are exported: * `automatic_failover_enabled` - A flag whether a read-only replica will be automatically promoted to read/write primary if the existing primary fails. * `node_type` – The cluster node type. * `number_cache_clusters` – The number of cache clusters that the replication group has. +* `member_clusters` - The identifiers of all the nodes that are part of this replication group. * `snapshot_window` - The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard). * `snapshot_retention_limit` - The number of days for which ElastiCache retains automatic cache cluster snapshots before deleting them. * `port` – The port number on which the configuration endpoint will accept connections. diff --git a/website/docs/d/glue_script.html.markdown b/website/docs/d/glue_script.html.markdown new file mode 100644 index 00000000000..db0a87fd24a --- /dev/null +++ b/website/docs/d/glue_script.html.markdown @@ -0,0 +1,83 @@ +--- +layout: "aws" +page_title: "AWS: aws_glue_script" +sidebar_current: "docs-aws-datasource-glue-script" +description: |- + Generate Glue script from Directed Acyclic Graph +--- + +# Data Source: aws_glue_script + +Use this data source to generate a Glue script from a Directed Acyclic Graph (DAG). + +## Example Usage + +### Generate Python Script + +```hcl +data "aws_glue_script" "example" { + language = "PYTHON" + + dag_edge = [] + + # ... + + dag_node = [] + + # ... +} + +output "python_script" { + value = "${data.aws_glue_script.example.python_script}" +} +``` + +### Generate Scala Code + +```hcl +data "aws_glue_script" "example" { + language = "SCALA" + + dag_edge = [] + + # ... + + dag_node = [] + + # ... +} + +output "scala_code" { + value = "${data.aws_glue_script.example.scala_code}" +} +``` + +## Argument Reference + +* `dag_edge` - (Required) A list of the edges in the DAG. Defined below. 
+* `dag_node` - (Required) A list of the nodes in the DAG. Defined below. +* `language` - (Optional) The programming language of the resulting code from the DAG. Defaults to `PYTHON`. Valid values are `PYTHON` and `SCALA`. + +### dag_edge Argument Reference + +* `source` - (Required) The ID of the node at which the edge starts. +* `target` - (Required) The ID of the node at which the edge ends. +* `target_parameter` - (Optional) The target of the edge. + +### dag_node Argument Reference + +* `args` - (Required) Nested configuration an argument or property of a node. Defined below. +* `id` - (Required) A node identifier that is unique within the node's graph. +* `node_type` - (Required) The type of node this is. +* `line_number` - (Optional) The line number of the node. + +#### args Argument Reference + +* `name` - (Required) The name of the argument or property. +* `value` - (Required) The value of the argument or property. +* `param` - (Optional) Boolean if the value is used as a parameter. Defaults to `false`. + +## Attributes Reference + +* `python_script` - The Python script generated from the DAG when the `language` argument is set to `PYTHON`. +* `scala_code` - The Scala code generated from the DAG when the `language` argument is set to `SCALA`. diff --git a/website/docs/d/iam_account_alias.html.markdown b/website/docs/d/iam_account_alias.html.markdown index 07fbebc490e..60a5adc0795 100644 --- a/website/docs/d/iam_account_alias.html.markdown +++ b/website/docs/d/iam_account_alias.html.markdown @@ -28,6 +28,6 @@ There are no arguments available for this data source. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `account_alias` - The alias associated with the AWS account. diff --git a/website/docs/d/iam_instance_profile.html.markdown b/website/docs/d/iam_instance_profile.html.markdown index 4eb866a0799..017069a34a3 100644 --- a/website/docs/d/iam_instance_profile.html.markdown +++ b/website/docs/d/iam_instance_profile.html.markdown @@ -9,7 +9,7 @@ description: |- # Data Source: aws_iam_instance_profile This data source can be used to fetch information about a specific -IAM instance profile. By using this data source, you can reference IAM +IAM instance profile. By using this data source, you can reference IAM instance profile properties without having to hard code ARNs as input. ## Example Usage @@ -33,4 +33,8 @@ data "aws_iam_instance_profile" "example" { * `path` - The path to the instance profile. +* `role_arn` - The role arn associated with this instance profile. + * `role_id` - The role id associated with this instance profile. + +* `role_name` - The role name associated with this instance profile. diff --git a/website/docs/d/iam_policy_document.html.markdown b/website/docs/d/iam_policy_document.html.markdown index 6f3d48781c1..18f5b0c702e 100644 --- a/website/docs/d/iam_policy_document.html.markdown +++ b/website/docs/d/iam_policy_document.html.markdown @@ -14,6 +14,8 @@ This is a data source which can be used to construct a JSON representation of an IAM policy document, for use with resources which expect policy documents, such as the `aws_iam_policy` resource. +-> For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). + ```hcl data "aws_iam_policy_document" "example" { statement { @@ -86,10 +88,10 @@ The following arguments are supported: current policy document. 
Statements with non-blank `sid`s in the override document will overwrite statements with the same `sid` in the current document. Statements without an `sid` cannot be overwritten. -* `statement` (Required) - A nested configuration block (described below) +* `statement` (Optional) - A nested configuration block (described below) configuring one *statement* to be included in the policy document. -Each document configuration must have one or more `statement` blocks, which +Each document configuration may have one or more `statement` blocks, which each accept the following arguments: * `sid` (Optional) - An ID for the policy statement. @@ -147,6 +149,16 @@ uses `${...}`-style syntax that is in conflict with Terraform's interpolation syntax, so this data source instead uses `&{...}` syntax for interpolations that should be processed by AWS rather than by Terraform. +## Wildcard Principal + +In order to define wildcard principal (a.k.a. anonymous user) use `type = "*"` and +`identifiers = ["*"]`. In that case the rendered json will contain `"Principal": "*"`. +Note, that even though the [IAM Documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) +states that `"Principal": "*"` and `"Principal": {"AWS": "*"}` are equivalent, +those principals have different behavior for IAM Role Trust Policy. Therefore +Terraform will normalize the principal field only in above-mentioned case and principals +like `type = "AWS"` and `identifiers = ["*"]` will be rendered as `"Principal": {"AWS": "*"}`. + ## Attributes Reference The following attribute is exported: @@ -287,3 +299,46 @@ data "aws_iam_policy_document" "override_json_example" { ``` You can also combine `source_json` and `override_json` in the same document. + +## Example without Statement + +Use without a `statement`: + +```hcl +data "aws_iam_policy_document" "source" { + statement { + sid = "OverridePlaceholder" + actions = ["ec2:DescribeAccountAttributes"] + resources = ["*"] + } +} + +data "aws_iam_policy_document" "override" { + statement { + sid = "OverridePlaceholder" + actions = ["s3:GetObject"] + resources = ["*"] + } +} + +data "aws_iam_policy_document" "politik" { + source_json = "${data.aws_iam_policy_document.source.json}" + override_json = "${data.aws_iam_policy_document.override.json}" +} +``` + +`data.aws_iam_policy_document.politik.json` will evaluate to: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "OverridePlaceholder", + "Effect": "Allow", + "Action": "s3:GetObject", + "Resource": "*" + } + ] +} +``` diff --git a/website/docs/d/iam_role.html.markdown b/website/docs/d/iam_role.html.markdown index b2e981794e3..a23ff830b95 100644 --- a/website/docs/d/iam_role.html.markdown +++ b/website/docs/d/iam_role.html.markdown @@ -30,4 +30,5 @@ data "aws_iam_role" "example" { * `arn` - The Amazon Resource Name (ARN) specifying the role. * `assume_role_policy` - The policy document associated with the role. * `path` - The path to the role. +* `permissions_boundary` - The ARN of the policy that is used to set the permissions boundary for the role. * `unique_id` - The stable and unique string identifying the role. 
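For example, the role's attributes can be interpolated into other resources without hard-coding its ARN. A minimal sketch, assuming an existing role name and a hypothetical Lambda function:

```hcl
data "aws_iam_role" "example" {
  name = "an-existing-role-name" # placeholder for a real role name
}

resource "aws_lambda_function" "example" {
  # ... other configuration ...
  function_name = "example"
  role          = "${data.aws_iam_role.example.arn}"
}
```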
diff --git a/website/docs/d/iam_server_certificate.html.markdown b/website/docs/d/iam_server_certificate.html.markdown index cc51baa37f1..dce2fed5082 100644 --- a/website/docs/d/iam_server_certificate.html.markdown +++ b/website/docs/d/iam_server_certificate.html.markdown @@ -34,6 +34,7 @@ resource "aws_elb" "elb" { ## Argument Reference * `name_prefix` - prefix of cert to filter by +* `path_prefix` - prefix of path to filter by * `name` - exact name of the cert to lookup * `latest` - sort results by expiration date. returns the certificate with expiration date in furthest in the future. @@ -50,5 +51,3 @@ resource "aws_elb" "elb" { The terraform import function will read in certificate body, certificate chain (if it exists), id, name, path, and arn. It will not retrieve the private key which is not available through the AWS API. - - diff --git a/website/docs/d/iam_user.html.markdown b/website/docs/d/iam_user.html.markdown index 09835aad46f..b7c3672276f 100644 --- a/website/docs/d/iam_user.html.markdown +++ b/website/docs/d/iam_user.html.markdown @@ -27,7 +27,6 @@ data "aws_iam_user" "example" { ## Attributes Reference * `arn` - The Amazon Resource Name (ARN) assigned by AWS for this user. - * `path` - Path in which this user was created. - +* `permissions_boundary` - The ARN of the policy that is used to set the permissions boundary for the user. * `user_id` - The unique ID assigned by AWS for this user. diff --git a/website/docs/d/inspector_rules_packages.html.markdown b/website/docs/d/inspector_rules_packages.html.markdown index 14b10b9c73a..d2aa5e5ce9b 100644 --- a/website/docs/d/inspector_rules_packages.html.markdown +++ b/website/docs/d/inspector_rules_packages.html.markdown @@ -21,7 +21,7 @@ data "aws_inspector_rules_packages" "rules" {} # e.g. Use in aws_inspector_assessment_template resource "aws_inspector_resource_group" "group" { tags { - test = "test" + test = "test" } } @@ -35,12 +35,12 @@ resource "aws_inspector_assessment_template" "assessment" { target_arn = "${aws_inspector_assessment_target.assessment.arn}" duration = "60" - rules_package_arns = "${data.aws_inspector_rules_packages.rules.arns}" + rules_package_arns = ["${data.aws_inspector_rules_packages.rules.arns}"] } ``` ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arns` - A list of the AWS Inspector Rules Packages arns available in the AWS region. diff --git a/website/docs/d/instance.html.markdown b/website/docs/d/instance.html.markdown index 29cd449a0f7..84b3726874e 100644 --- a/website/docs/d/instance.html.markdown +++ b/website/docs/d/instance.html.markdown @@ -57,6 +57,7 @@ are exported: interpolation. * `ami` - The ID of the AMI used to launch the instance. +* `arn` - The ARN of the instance. * `associate_public_ip_address` - Whether or not the Instance is associated with a public IP address or not (Boolean). * `availability_zone` - The availability zone of the Instance. * `ebs_block_device` - The EBS block device mappings of the Instance. @@ -102,5 +103,6 @@ interpolation. * `tags` - A mapping of tags assigned to the Instance. * `tenancy` - The tenancy of the instance: `dedicated`, `default`, `host`. * `vpc_security_group_ids` - The associated security groups in a non-default VPC. +* `credit_specification` - The credit specification of the Instance. 
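A minimal sketch of looking up a single instance and referencing its `arn` attribute (the instance ID below is a hypothetical placeholder):

```hcl
data "aws_instance" "example" {
  instance_id = "i-1234567890abcdef0" # hypothetical instance ID
}

output "instance_arn" {
  value = "${data.aws_instance.example.arn}"
}
```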
[1]: http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html diff --git a/website/docs/d/instances.html.markdown b/website/docs/d/instances.html.markdown index d09e16c3cc6..a222691ee39 100644 --- a/website/docs/d/instances.html.markdown +++ b/website/docs/d/instances.html.markdown @@ -28,14 +28,17 @@ data "aws_instances" "test" { instance_tags { Role = "HardWorker" } + filter { name = "instance.group-id" values = ["sg-12345678"] } + + instance_state_names = ["running", "stopped"] } resource "aws_eip" "test" { - count = "${length(data.aws_instances.test.ids)}" + count = "${length(data.aws_instances.test.ids)}" instance = "${data.aws_instances.test.ids[count.index]}" } ``` @@ -45,6 +48,8 @@ resource "aws_eip" "test" { * `instance_tags` - (Optional) A mapping of tags, each pair of which must exactly match a pair on desired instances. +* `instance_state_names` - (Optional) A list of instance states that should be applicable to the desired instances. The permitted values are: `pending, running, shutting-down, stopped, stopping, terminated`. The default value is `running`. + * `filter` - (Optional) One or more name/value pairs to use as filters. There are several valid keys, for a full reference, check out [describe-instances in the AWS CLI reference][1]. diff --git a/website/docs/d/internet_gateway.html.markdown b/website/docs/d/internet_gateway.html.markdown index 288631578b8..45899a86e79 100644 --- a/website/docs/d/internet_gateway.html.markdown +++ b/website/docs/d/internet_gateway.html.markdown @@ -17,7 +17,7 @@ variable "vpc_id" {} data "aws_internet_gateway" "default" { filter { - name = "attachment.vpc-id" + name = "attachment.vpc-id" values = ["${var.vpc_id}"] } } diff --git a/website/docs/d/iot_endpoint.html.markdown b/website/docs/d/iot_endpoint.html.markdown new file mode 100644 index 00000000000..0721ed8a55b --- /dev/null +++ b/website/docs/d/iot_endpoint.html.markdown @@ -0,0 +1,50 @@ +--- +layout: "aws" +page_title: "AWS: aws_iot_endpoint" +sidebar_current: "docs-aws-datasource-iot-endpoint" +description: |- + Get the unique IoT endpoint +--- + +# Data Source: aws_iot_endpoint + +Returns a unique endpoint specific to the AWS account making the call. + +## Example Usage + +```hcl +data "aws_iot_endpoint" "example" {} + +resource "kubernetes_pod" "agent" { + metadata { + name = "my-device" + } + + spec { + container { + image = "gcr.io/my-project/image-name" + name = "image-name" + + env = [ + { + name = "IOT_ENDPOINT" + value = "${data.aws_iot_endpoint.example.endpoint_address}" + }, + ] + } + } +} +``` + +## Argument Reference + +* `endpoint_type` - (Optional) Endpoint type. Valid values: `iot:CredentialProvider`, `iot:Data`, `iot:Data-ATS`, `iot:Job`. 
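When a specific endpoint flavor is required, `endpoint_type` can be set explicitly. A minimal sketch requesting the ATS data endpoint:

```hcl
data "aws_iot_endpoint" "ats" {
  endpoint_type = "iot:Data-ATS"
}

output "iot_ats_endpoint" {
  value = "${data.aws_iot_endpoint.ats.endpoint_address}"
}
```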
+ +## Attributes Reference + +* `endpoint_address` - The endpoint based on `endpoint_type`: + * No `endpoint_type`: Either `iot:Data` or `iot:Data-ATS` [depending on region](https://aws.amazon.com/blogs/iot/aws-iot-core-ats-endpoints/) + * `iot:CredentialsProvider`: `IDENTIFIER.credentials.iot.REGION.amazonaws.com` + * `iot:Data`: `IDENTIFIER.iot.REGION.amazonaws.com` + * `iot:Data-ATS`: `IDENTIFIER-ats.iot.REGION.amazonaws.com` + * `iot:Job`: `IDENTIFIER.jobs.iot.REGION.amazonaws.com` diff --git a/website/docs/d/ip_ranges.html.markdown b/website/docs/d/ip_ranges.html.markdown index c5490f7f075..4636cc1341a 100644 --- a/website/docs/d/ip_ranges.html.markdown +++ b/website/docs/d/ip_ranges.html.markdown @@ -22,10 +22,11 @@ resource "aws_security_group" "from_europe" { name = "from_europe" ingress { - from_port = "443" - to_port = "443" - protocol = "tcp" - cidr_blocks = ["${data.aws_ip_ranges.european_ec2.cidr_blocks}"] + from_port = "443" + to_port = "443" + protocol = "tcp" + cidr_blocks = ["${data.aws_ip_ranges.european_ec2.cidr_blocks}"] + ipv6_cidr_blocks = ["${data.aws_ip_ranges.european_ec2.ipv6_cidr_blocks}"] } tags { @@ -50,6 +51,7 @@ CIDR blocks, Terraform will fail. ## Attributes Reference * `cidr_blocks` - The lexically ordered list of CIDR blocks. +* `ipv6_cidr_blocks` - The lexically ordered list of IPv6 CIDR blocks. * `create_date` - The publication time of the IP ranges (e.g. `2016-08-03-23-46-05`). * `sync_token` - The publication time of the IP ranges, in Unix epoch time format (e.g. `1470267965`). diff --git a/website/docs/d/kms_ciphertext.html.markdown b/website/docs/d/kms_ciphertext.html.markdown index 503940dc5fd..fb4a8684030 100644 --- a/website/docs/d/kms_ciphertext.html.markdown +++ b/website/docs/d/kms_ciphertext.html.markdown @@ -19,11 +19,12 @@ by using an AWS KMS customer master key. ```hcl resource "aws_kms_key" "oauth_config" { description = "oauth config" - is_enabled = true + is_enabled = true } data "aws_kms_ciphertext" "oauth" { key_id = "${aws_kms_key.oauth_config.key_id}" + plaintext = < **WARNING:** This data source is deprecated and will be removed in the next major version. You can migrate existing configurations to the [`aws_kms_secrets` data source](/docs/providers/aws/d/kms_secrets.html) following instructions available in the [Version 2 Upgrade Guide](/docs/providers/aws/guides/version-2-upgrade.html#data-source-aws_kms_secret). + The KMS secret data source allows you to use data encrypted with the AWS KMS service within your resource definitions. diff --git a/website/docs/d/kms_secrets.html.markdown b/website/docs/d/kms_secrets.html.markdown new file mode 100644 index 00000000000..f5a30970279 --- /dev/null +++ b/website/docs/d/kms_secrets.html.markdown @@ -0,0 +1,77 @@ +--- +layout: "aws" +page_title: "AWS: aws_kms_secrets" +sidebar_current: "docs-aws-datasource-kms-secrets" +description: |- + Decrypt multiple secrets from data encrypted with the AWS KMS service +--- + +# Data Source: aws_kms_secrets + +Decrypt multiple secrets from data encrypted with the AWS KMS service. + +~> **NOTE**: Using this data provider will allow you to conceal secret data within your resource definitions but does not take care of protecting that data in all Terraform logging and state output. Please take care to secure your secret data beyond just the Terraform configuration. 
+ +## Example Usage + +If you do not already have a `CiphertextBlob` from encrypting a KMS secret, you can use the below commands to obtain one using the [AWS CLI kms encrypt](https://docs.aws.amazon.com/cli/latest/reference/kms/encrypt.html) command. This requires you to have your AWS CLI setup correctly and replace the `--key-id` with your own. Alternatively you can use `--plaintext 'password'` instead of reading from a file. + +-> If you have a newline character at the end of your file, it will be decrypted with this newline character intact. For most use cases this is undesirable and leads to incorrect passwords or invalid values, as well as possible changes in the plan. Be sure to use `echo -n` if necessary. + +```sh +$ echo -n 'master-password' > plaintext-password +$ aws kms encrypt --key-id ab123456-c012-4567-890a-deadbeef123 --plaintext fileb://plaintext-password --encryption-context foo=bar --output text --query CiphertextBlob +AQECAHgaPa0J8WadplGCqqVAr4HNvDaFSQ+NaiwIBhmm6qDSFwAAAGIwYAYJKoZIhvcNAQcGoFMwUQIBADBMBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDI+LoLdvYv8l41OhAAIBEIAfx49FFJCLeYrkfMfAw6XlnxP23MmDBdqP8dPp28OoAQ== +``` + +That encrypted output can now be inserted into Terraform configurations without exposing the plaintext secret directly. + +```hcl +data "aws_kms_secrets" "example" { + secret { + # ... potentially other configuration ... + name = "master_password" + payload = "AQECAHgaPa0J8WadplGCqqVAr4HNvDaFSQ+NaiwIBhmm6qDSFwAAAGIwYAYJKoZIhvcNAQcGoFMwUQIBADBMBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDI+LoLdvYv8l41OhAAIBEIAfx49FFJCLeYrkfMfAw6XlnxP23MmDBdqP8dPp28OoAQ==" + + context { + foo = "bar" + } + } + + secret { + # ... potentially other configuration ... + name = "master_username" + payload = "AQECAHgaPa0J8WadplGCqqVAr4HNvDaFSQ+NaiwIBhmm6qDSFwAAAGIwYAYJKoZIhvcNAQcGoFMwUQIBADBMBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDI+LoLdvYv8l41OhAAIBEIAfx49FFJCLeYrkfMfAw6XlnxP23MmDBdqP8dPp28OoAQ==" + } +} + +resource "aws_rds_cluster" "example" { + # ... other configuration ... + master_password = "${data.aws_kms_secrets.example.plaintext["master_password"]}" + master_username = "${data.aws_kms_secrets.example.plaintext["master_username"]}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `secret` - (Required) One or more encrypted payload definitions from the KMS service. See the Secret Definitions below. + +### Secret Definitions + +Each `secret` supports the following arguments: + +* `name` - (Required) The name to export this secret under in the attributes. +* `payload` - (Required) Base64 encoded payload, as returned from a KMS encrypt operation. +* `context` - (Optional) An optional mapping that makes up the Encryption Context for the secret. +* `grant_tokens` (Optional) An optional list of Grant Tokens for the secret. 
+ +For more information on `context` and `grant_tokens` see the [KMS +Concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `plaintext` - Map containing each `secret` `name` as the key with its decrypted plaintext value diff --git a/website/docs/d/lambda_function.html.markdown b/website/docs/d/lambda_function.html.markdown new file mode 100644 index 00000000000..e82f2d55abd --- /dev/null +++ b/website/docs/d/lambda_function.html.markdown @@ -0,0 +1,54 @@ +--- +layout: "aws" +page_title: "AWS: aws_lambda_function" +sidebar_current: "docs-aws-datasource-lambda-function" +description: |- + Provides a Lambda Function data source. +--- + +# aws_lambda_function + +Provides information about a Lambda Function. + +## Example Usage + +```hcl +variable "function_name" { + type = "string" +} + +data "aws_lambda_function" "existing" { + function_name = "${var.function_name}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `function_name` - (Required) Name of the lambda function. +* `qualifier` - (Optional) Qualifier of the lambda function. Defaults to `$LATEST`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) identifying your Lambda Function. +* `dead_letter_config` - Configure the function's *dead letter queue*. +* `description` - Description of what your Lambda Function does. +* `environment` - The Lambda environment's configuration settings. +* `handler` - The function entrypoint in your code. +* `invoke_arn` - The ARN to be used for invoking Lambda Function from API Gateway. +* `kms_key_arn` - The ARN for the KMS encryption key. +* `last_modified` - The date this resource was last modified. +* `memory_size` - Amount of memory in MB your Lambda Function can use at runtime. +* `qualified_arn` - The Amazon Resource Name (ARN) identifying your Lambda Function Version +* `reserved_concurrent_executions` - The amount of reserved concurrent executions for this lambda function. +* `role` - IAM role attached to the Lambda Function. +* `runtime` - The runtime environment for the Lambda function.. +* `source_code_hash` - Base64-encoded representation of raw SHA-256 sum of the zip file. +* `source_code_size` - The size in bytes of the function .zip file. +* `timeout` - The function execution time at which Lambda should terminate the function. +* `tracing_config` - Tracing settings of the function. +* `version` - The version of the Lambda function. +* `vpc_config` - VPC configuration associated with your Lambda function. diff --git a/website/docs/d/lambda_invocation.html.markdown b/website/docs/d/lambda_invocation.html.markdown new file mode 100644 index 00000000000..118f9087f78 --- /dev/null +++ b/website/docs/d/lambda_invocation.html.markdown @@ -0,0 +1,48 @@ +--- +layout: "aws" +page_title: "AWS: aws_lambda_invocation" +sidebar_current: "docs-aws-datasource-lambda-invocation" +description: |- + Invoke AWS Lambda Function as data source +--- + +# Data Source: aws_lambda_invocation + +Use this data source to invoke custom lambda functions as data source. +The lambda function is invoked with [RequestResponse](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax) +invocation type. 
+ +## Example Usage + +```hcl +data "aws_lambda_invocation" "example" { + function_name = "${aws_lambda_function.lambda_function_test.function_name}" + + input = < **Note:** The unencrypted value of a SecureString will be stored in the raw state as plain-text. [Read more about sensitive data in state](/docs/state/sensitive-data.html). + +~> **Note:** The data source is currently following the behavior of the [SSM API](https://docs.aws.amazon.com/sdk-for-go/api/service/ssm/#Parameter) to return a string value, regardless of parameter type. For type `StringList`, we can use [split()](https://www.terraform.io/docs/configuration/interpolation.html#split-delim-string-) built-in function to get values in a list. Example: `split(",", data.aws_ssm_parameter.subnets.value)` + + ## Argument Reference The following arguments are supported: @@ -31,9 +33,9 @@ The following arguments are supported: * `with_decryption` - (Optional) Whether to return decrypted `SecureString` value. Defaults to `true`. -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The ARN of the parameter. -* `name` - (Required) The name of the parameter. -* `type` - (Required) The type of the parameter. Valid types are `String`, `StringList` and `SecureString`. -* `value` - (Required) The value of the parameter. +* `name` - The name of the parameter. +* `type` - The type of the parameter. Valid types are `String`, `StringList` and `SecureString`. +* `value` - The value of the parameter. diff --git a/website/docs/d/storagegateway_local_disk.html.markdown b/website/docs/d/storagegateway_local_disk.html.markdown new file mode 100644 index 00000000000..3c607542688 --- /dev/null +++ b/website/docs/d/storagegateway_local_disk.html.markdown @@ -0,0 +1,33 @@ +--- +layout: "aws" +page_title: "AWS: aws_storagegateway_local_disk" +sidebar_current: "docs-aws-datasource-storagegateway-local-disk" +description: |- + Retrieve information about a Storage Gateway local disk +--- + +# Data Source: aws_storagegateway_local_disk + +Retrieve information about a Storage Gateway local disk. The disk identifier is useful for adding the disk as a cache or upload buffer to a gateway. + +## Example Usage + +```hcl +data "aws_storagegateway_local_disk" "test" { + disk_path = "${aws_volume_attachment.test.device_name}" + gateway_arn = "${aws_storagegateway_gateway.test.arn}" +} +``` + +## Argument Reference + +* `gateway_arn` - (Required) The Amazon Resource Name (ARN) of the gateway. +* `disk_node` - (Optional) The device node of the local disk to retrieve. For example, `/dev/sdb`. +* `disk_path` - (Optional) The device path of the local disk to retrieve. For example, `/dev/xvdb` or `/dev/nvme1n1`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `disk_id` - The disk identifier. e.g. `pci-0000:03:00.0-scsi-0:0:0:0` +* `id` - The disk identifier. e.g. `pci-0000:03:00.0-scsi-0:0:0:0` diff --git a/website/docs/d/subnet.html.markdown b/website/docs/d/subnet.html.markdown index 440f5b20fb6..7d6d423e488 100644 --- a/website/docs/d/subnet.html.markdown +++ b/website/docs/d/subnet.html.markdown @@ -73,12 +73,14 @@ which take the following arguments: [the underlying AWS API](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html). For example, if matching against tag `Name`, use: - ```hcl +```hcl +data "aws_subnet" "selected" { filter { - name = "tag:Name" - values = ... 
+ name = "tag:Name" + values = "" # insert value here } - ``` +} +``` * `values` - (Required) Set of values that are accepted for the given field. A subnet will be selected if any one of the given values matches. @@ -89,3 +91,7 @@ All of the argument attributes except `filter` blocks are also exported as result attributes. This data source will complete the data by populating any fields that are not included in the configuration with the data for the selected subnet. + +In addition the following attributes are exported: + +* `arn` - The ARN of the subnet. diff --git a/website/docs/d/subnet_ids.html.markdown b/website/docs/d/subnet_ids.html.markdown index bab6870f156..55a6c03670a 100644 --- a/website/docs/d/subnet_ids.html.markdown +++ b/website/docs/d/subnet_ids.html.markdown @@ -23,7 +23,7 @@ data "aws_subnet_ids" "example" { data "aws_subnet" "example" { count = "${length(data.aws_subnet_ids.example.ids)}" - id = "${data.aws_subnet_ids.example.ids[count.index]}" + id = "${data.aws_subnet_ids.example.ids[count.index]}" } output "subnet_cidr_blocks" { @@ -38,6 +38,7 @@ can loop through the subnets, putting instances across availability zones. ```hcl data "aws_subnet_ids" "private" { vpc_id = "${var.vpc_id}" + tags { Tier = "Private" } diff --git a/website/docs/d/vpc.html.markdown b/website/docs/d/vpc.html.markdown index a003c0311d4..032374e3525 100644 --- a/website/docs/d/vpc.html.markdown +++ b/website/docs/d/vpc.html.markdown @@ -75,9 +75,17 @@ the selected VPC. The following attribute is additionally exported: +* `arn` - Amazon Resource Name (ARN) of VPC +* `enable_dns_support` - Whether or not the VPC has DNS support +* `enable_dns_hostnames` - Whether or not the VPC has DNS hostname support * `instance_tenancy` - The allowed tenancy of instances launched into the selected VPC. May be any of `"default"`, `"dedicated"`, or `"host"`. * `ipv6_association_id` - The association ID for the IPv6 CIDR block. * `ipv6_cidr_block` - The IPv6 CIDR block. -* `enable_dns_support` - Whether or not the VPC has DNS support -* `enable_dns_hostnames` - Whether or not the VPC has DNS hostname support +* `main_route_table_id` - The ID of the main route table associated with this VPC. + +`cidr_block_associations` is also exported with the following attributes: + +* `association_id` - The association ID for the the IPv4 CIDR block. +* `cidr_block` - The CIDR block for the association. +* `state` - The State of the association. diff --git a/website/docs/d/vpc_dhcp_options.html.markdown b/website/docs/d/vpc_dhcp_options.html.markdown new file mode 100644 index 00000000000..6e24eef8e24 --- /dev/null +++ b/website/docs/d/vpc_dhcp_options.html.markdown @@ -0,0 +1,60 @@ +--- +layout: "aws" +page_title: "AWS: aws_vpc_dhcp_options" +sidebar_current: "docs-aws-datasource-vpc-dhcp-options" +description: |- + Retrieve information about an EC2 DHCP Options configuration +--- + +# Data Source: aws_vpc_dhcp_options + +Retrieve information about an EC2 DHCP Options configuration. + +## Example Usage + +### Lookup by DHCP Options ID + +```hcl +data "aws_vpc_dhcp_options" "example" { + dhcp_options_id = "dopts-12345678" +} +``` + +### Lookup by Filter + +```hcl +data "aws_vpc_dhcp_options" "example" { + filter { + name = "key" + values = ["domain-name"] + } + + filter { + name = "value" + values = ["example.com"] + } +} +``` + +## Argument Reference + +* `dhcp_options_id` - (Optional) The EC2 DHCP Options ID. +* `filter` - (Optional) List of custom filters as described below. 
+ +### filter + +For more information about filtering, see the [EC2 API documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeDhcpOptions.html). + +* `name` - (Required) The name of the field to filter. +* `values` - (Required) Set of values for filtering. + +## Attributes Reference + +* `dhcp_options_id` - EC2 DHCP Options ID +* `domain_name` - The suffix domain name to used when resolving non Fully Qualified Domain Names. e.g. the `search` value in the `/etc/resolv.conf` file. +* `domain_name_servers` - List of name servers. +* `id` - EC2 DHCP Options ID +* `netbios_name_servers` - List of NETBIOS name servers. +* `netbios_node_type` - The NetBIOS node type (1, 2, 4, or 8). For more information about these node types, see [RFC 2132](http://www.ietf.org/rfc/rfc2132.txt). +* `ntp_servers` - List of NTP servers. +* `tags` - A mapping of tags assigned to the resource. diff --git a/website/docs/d/vpc_endpoint.html.markdown b/website/docs/d/vpc_endpoint.html.markdown index e6360c28149..e581d36771f 100644 --- a/website/docs/d/vpc_endpoint.html.markdown +++ b/website/docs/d/vpc_endpoint.html.markdown @@ -51,7 +51,7 @@ All of the argument attributes are also exported as result attributes. * `subnet_ids` - One or more subnets in which the VPC Endpoint is located. Applicable for endpoints of type `Interface`. * `network_interface_ids` - One or more network interfaces for the VPC Endpoint. Applicable for endpoints of type `Interface`. * `security_group_ids` - One or more security groups associated with the network interfaces. Applicable for endpoints of type `Interface`. -* `private_dns_enabled` - Whether or not the VPC is associated with a private hosted zone - `true` or `false`. Applicable for endpoints of type `Gateway`. +* `private_dns_enabled` - Whether or not the VPC is associated with a private hosted zone - `true` or `false`. Applicable for endpoints of type `Interface`. * `dns_entry` - The DNS entries for the VPC Endpoint. Applicable for endpoints of type `Interface`. DNS blocks are documented below. DNS blocks (for `dns_entry`) support the following attributes: diff --git a/website/docs/d/vpc_endpoint_service.html.markdown b/website/docs/d/vpc_endpoint_service.html.markdown index 8064bf923fb..96ec3c5ffc4 100644 --- a/website/docs/d/vpc_endpoint_service.html.markdown +++ b/website/docs/d/vpc_endpoint_service.html.markdown @@ -53,7 +53,7 @@ The given filters must match exactly one VPC endpoint service whose data will be ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `service_type` - The service type, `Gateway` or `Interface`. * `owner` - The AWS account ID of the service owner or `amazon`. diff --git a/website/docs/d/vpcs.html.markdown b/website/docs/d/vpcs.html.markdown new file mode 100644 index 00000000000..2d50a093e83 --- /dev/null +++ b/website/docs/d/vpcs.html.markdown @@ -0,0 +1,66 @@ +--- +layout: "aws" +page_title: "AWS: aws_vpcs" +sidebar_current: "docs-aws-datasource-vpcs" +description: |- + Provides a list of VPC Ids in a region +--- + +# Data Source: aws_vpcs + +This resource can be useful for getting back a list of VPC Ids for a region. + +The following example retrieves a list of VPC Ids with a custom tag of `service` set to a value of "production". + +## Example Usage + +The following shows outputing all VPC Ids. 
+ +```hcl +data "aws_vpcs" "foo" { + tags { + service = "production" + } +} + +output "foo" { + value = "${data.aws_vpcs.foo.ids}" +} +``` + +An example use case would be interpolate the `aws_vpcs` output into `count` of an aws_flow_log resource. + +```hcl +data "aws_vpcs" "foo" {} + +resource "aws_flow_log" "test_flow_log" { + count = "${length(data.aws_vpcs.foo.ids)}" + # ... + vpc_id = "${element(data.aws_vpcs.foo.ids, count.index)}" + # ... +} + +output "foo" { + value = "${data.aws_vpcs.foo.ids}" +} +``` + +## Argument Reference + +* `tags` - (Optional) A mapping of tags, each pair of which must exactly match + a pair on the desired vpcs. + +* `filter` - (Optional) Custom filter block as described below. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) The name of the field to filter by, as defined by + [the underlying AWS API](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A VPC will be selected if any one of the given values matches. + +## Attributes Reference + +* `ids` - A list of all the VPC Ids found. This data source will fail if none are found. diff --git a/website/docs/d/vpn_gateway.html.markdown b/website/docs/d/vpn_gateway.html.markdown index 0996a88e350..1aa986e21fd 100644 --- a/website/docs/d/vpn_gateway.html.markdown +++ b/website/docs/d/vpn_gateway.html.markdown @@ -16,7 +16,7 @@ a specific VPN gateway. ```hcl data "aws_vpn_gateway" "selected" { filter { - name = "tag:Name" + name = "tag:Name" values = ["vpn-gw"] } } diff --git a/website/docs/d/workspaces_bundle.html.markdown b/website/docs/d/workspaces_bundle.html.markdown new file mode 100644 index 00000000000..4cbca348d2d --- /dev/null +++ b/website/docs/d/workspaces_bundle.html.markdown @@ -0,0 +1,48 @@ +--- +layout: "aws" +page_title: "AWS: aws_workspaces_bundle" +sidebar_current: "docs-aws-datasource-workspaces-bundle" +description: |- + Get information on a Workspaces Bundle. +--- + +# Data Source: aws_workspaces_bundle + +Use this data source to get information about a Workspaces Bundle. + +## Example Usage + +```hcl +data "aws_workspaces_bundle" "example" { + bundle_id = "wsb-b0s22j3d7" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `bundle_id` – (Required) The ID of the bundle. + +## Attributes Reference + +The following attributes are exported: + +* `description` – The description of the bundle. +* `name` – The name of the bundle. +* `owner` – The owner of the bundle. +* `compute_type` – The compute type. See supported fields below. +* `root_storage` – The root volume. See supported fields below. +* `user_storage` – The user storage. See supported fields below. + +### `compute_type` + +* `name` - The name of the compute type. + +### `root_storage` + +* `capacity` - The size of the root volume. + +### `user_storage` + +* `capacity` - The size of the user storage. diff --git a/website/docs/guides/eks-getting-started.html.md b/website/docs/guides/eks-getting-started.html.md new file mode 100644 index 00000000000..895f5f2bbd5 --- /dev/null +++ b/website/docs/guides/eks-getting-started.html.md @@ -0,0 +1,599 @@ +--- +layout: "aws" +page_title: "EKS Getting Started Guide" +sidebar_current: "docs-aws-guide-eks-getting-started" +description: |- + Using Terraform to configure AWS EKS. 
+--- + +# Getting Started with AWS EKS + +The Amazon Web Services EKS service allows for simplified management of +[Kubernetes](https://kubernetes.io/) servers. While the service itself is +quite simple from an operator perspective, understanding how it interconnects +with other pieces of the AWS service universe and how to configure local +Kubernetes clients to manage clusters can be helpful. + +While the [EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/) +provides much of the up-to-date information about getting started with the service +from a generic standpoint, this guide provides a Terraform configuration based +introduction. + +This guide will show how to deploy a sample architecture using Terraform. The +guide assumes some basic familiarity with Kubernetes but does not +assume any pre-existing deployment. It also assumes that you are familiar +with the usual Terraform plan/apply workflow; if you're new to Terraform +itself, refer first to [the Getting Started guide](/intro/getting-started/install.html). + +It is worth noting that there are other valid ways to use these services and +resources that make different tradeoffs. We encourage readers to consult the official documentation for the respective services and resources for additional context and +best-practices. This guide can still serve as an introduction to the main resources +associated with these services, even if you choose a different architecture. + + + +- [Guide Overview](#guide-overview) +- [Preparation](#preparation) +- [Create Sample Architecture in AWS](#create-sample-architecture-in-aws) + - [Cluster Name Variable](#cluster-name-variable) + - [Base VPC Networking](#base-vpc-networking) + - [Kubernetes Masters](#kubernetes-masters) + - [EKS Master Cluster IAM Role](#eks-master-cluster-iam-role) + - [EKS Master Cluster Security Group](#eks-master-cluster-security-group) + - [EKS Master Cluster](#eks-master-cluster) + - [Configuring kubectl for EKS](#configuring-kubectl-for-eks) + - [Kubernetes Worker Nodes](#kubernetes-worker-nodes) + - [Worker Node IAM Role and Instance Profile](#worker-node-iam-role-and-instance-profile) + - [Worker Node Security Group](#worker-node-security-group) + - [Worker Node Access to EKS Master Cluster](#worker-node-access-to-eks-master-cluster) + - [Worker Node AutoScaling Group](#worker-node-autoscaling-group) + - [Required Kubernetes Configuration to Join Worker Nodes](#required-kubernetes-configuration-to-join-worker-nodes) +- [Destroy Sample Architecture in AWS](#destroy-sample-architecture-in-aws) + + + +## Guide Overview + +~> **Warning:** Following this guide will create objects in your AWS account +that will cost you money against your AWS bill. + +The sample architecture introduced here includes the following resources: + +* EKS Cluster: AWS managed Kubernetes cluster of master servers +* AutoScaling Group containing 2 m4.large instances based on the latest EKS Amazon Linux 2 AMI: Operator managed Kubernetes worker nodes for running Kubernetes service deployments +* Associated VPC, Internet Gateway, Security Groups, and Subnets: Operator managed networking resources for the EKS Cluster and worker node instances +* Associated IAM Roles and Policies: Operator managed access resources for EKS and worker node instances + +## Preparation + +In order to follow this guide you will need an AWS account and to have +Terraform installed. +[Configure your credentials](/docs/providers/aws/index.html#authentication) +so that Terraform is able to act on your behalf. 
+ +For simplicity here, we will assume you are already using a set of IAM +credentials with suitable access to create AutoScaling, EC2, EKS, and IAM +resources. If you are not sure and are working in an AWS account used only for +development, the simplest approach to get started is to use credentials with +full administrative access to the target AWS account. + +If you are planning to locally use the standard Kubernetes client, `kubectl`, +it must be at least version 1.10 to support `exec` authentication with usage +of `aws-iam-authenticator`. For additional information about installation +and configuration of these applications, see their official documentation. + +Relevant Links: + +* [Kubernetes Client Install Guide](https://kubernetes.io/docs/tasks/tools/install-kubectl/) +* [AWS IAM Authenticator](https://github.com/kubernetes-sigs/aws-iam-authenticator) + +## Create Sample Architecture in AWS + +~> **NOTE:** We recommend using this guide to build a separate Terraform +configuration (for easy tear down) and more importantly running it in a +separate AWS account as your production infrastructure. While it is +self-contained and should not affect existing infrastructure, its always best +to be cautious! + +~> **NOTE:** If you would rather see the full sample Terraform configuration +for this guide rather than the individual pieces, it can be found at: +https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started + +### Cluster Name Variable + +The below sample Terraform configurations reference a variable called +`cluster-name` (`var.cluster-name`) which is used for consistency. Feel free +to substitute your own cluster name or create the variable configuration: + +```hcl +variable "cluster-name" { + default = "terraform-eks-demo" + type = "string" +} +``` + +### Base VPC Networking + +EKS requires the usage of [Virtual Private Cloud](https://aws.amazon.com/vpc/) to +provide the base for its networking configuration. + +~> **NOTE:** The usage of the specific `kubernetes.io/cluster/*` resource tags below are required for EKS and Kubernetes to discover and manage networking resources. + +The below will create a 10.0.0.0/16 VPC, two 10.0.X.0/24 subnets, an internet +gateway, and setup the subnet routing to route external traffic through the +internet gateway: + +```hcl +# This data source is included for ease of sample architecture deployment +# and can be swapped out as necessary. 
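# The two subnets created below are spread across the first two availability
# zones returned by this data source. Both the VPC and the subnets carry the
# shared "kubernetes.io/cluster/<cluster-name>" tag that EKS uses to discover
# and manage these networking resources.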
+data "aws_availability_zones" "available" {} + +resource "aws_vpc" "demo" { + cidr_block = "10.0.0.0/16" + + tags = "${ + map( + "Name", "terraform-eks-demo-node", + "kubernetes.io/cluster/${var.cluster-name}", "shared", + ) + }" +} + +resource "aws_subnet" "demo" { + count = 2 + + availability_zone = "${data.aws_availability_zones.available.names[count.index]}" + cidr_block = "10.0.${count.index}.0/24" + vpc_id = "${aws_vpc.demo.id}" + + tags = "${ + map( + "Name", "terraform-eks-demo-node", + "kubernetes.io/cluster/${var.cluster-name}", "shared", + ) + }" +} + +resource "aws_internet_gateway" "demo" { + vpc_id = "${aws_vpc.demo.id}" + + tags { + Name = "terraform-eks-demo" + } +} + +resource "aws_route_table" "demo" { + vpc_id = "${aws_vpc.demo.id}" + + route { + cidr_block = "0.0.0.0/0" + gateway_id = "${aws_internet_gateway.demo.id}" + } +} + +resource "aws_route_table_association" "demo" { + count = 2 + + subnet_id = "${aws_subnet.demo.*.id[count.index]}" + route_table_id = "${aws_route_table.demo.id}" +} +``` + +### Kubernetes Masters + +This is where the EKS service comes into play. It requires a few operator +managed resources beforehand so that Kubernetes can properly manage other +AWS services as well as allow inbound networking communication from your +local workstation (if desired) and worker nodes. + +#### EKS Master Cluster IAM Role + +The below is an example IAM role and policy to allow the EKS service to +manage or retrieve data from other AWS services. It is also possible to create +these policies with the [`aws_iam_policy_document` data source](/docs/providers/aws/d/iam_policy_document.html) + +For the latest required policy, see the [EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/). + +```hcl +resource "aws_iam_role" "demo-cluster" { + name = "terraform-eks-demo-cluster" + + assume_role_policy = < This section only provides some example methods for configuring `kubectl` to communicate with EKS servers. Managing Kubernetes clients and configurations is outside the scope of this guide. + +If you are planning on using `kubectl` to manage the Kubernetes cluster, now +might be a great time to configure your client. After configuration, you can +verify cluster access via `kubectl version` displaying server version +information in addition to local client version information. + +The AWS CLI [`eks update-kubeconfig`](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html) +command provides a simple method to create or update configuration files. + +If you would rather update your configuration manually, the below Terraform output +generates a sample `kubectl` configuration to connect to your cluster. This can +be placed into a Kubernetes configuration file, e.g. `~/.kube/config` + +```hcl +locals { + kubeconfig = < **NOTE:** The usage of the specific `kubernetes.io/cluster/*` resource tag below is required for EKS and Kubernetes to discover and manage compute resources. 
+ +```hcl +resource "aws_autoscaling_group" "demo" { + desired_capacity = 2 + launch_configuration = "${aws_launch_configuration.demo.id}" + max_size = 2 + min_size = 1 + name = "terraform-eks-demo" + vpc_zone_identifier = ["${aws_subnet.demo.*.id}"] + + tag { + key = "Name" + value = "terraform-eks-demo" + propagate_at_launch = true + } + + tag { + key = "kubernetes.io/cluster/${var.cluster-name}" + value = "owned" + propagate_at_launch = true + } +} +``` + +~> **NOTE:** At this point, your Kubernetes cluster will have running masters +and worker nodes, _however_, the worker nodes will not be able to join the +Kubernetes cluster quite yet. The next section has the required Kubernetes +configuration to enable the worker nodes to join the cluster. + +#### Required Kubernetes Configuration to Join Worker Nodes + +-> While managing Kubernetes cluster and client configurations are beyond the scope of this guide, we provide an example of how to apply the required Kubernetes [`ConfigMap`](http://kubernetes.io/docs/user-guide/configmap/) via `kubectl` below for completeness. See also the [Configuring kubectl for EKS](#configuring-kubectl-for-eks) section. + +The EKS service does not provide a cluster-level API parameter or resource to +automatically configure the underlying Kubernetes cluster to allow worker nodes +to join the cluster via AWS IAM role authentication. + +To output an example IAM Role authentication `ConfigMap` from your +Terraform configuration: + +```hcl +locals { + config_map_aws_auth = < **NOTE:** Some AWS services only allow a subset of the policy elements or policy variables. For more information, see the AWS User Guide for the service you are configuring. + +~> **NOTE:** [IAM policy variables](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html), e.g. `${aws:username}`, use the same configuration syntax (`${...}`) as Terraform interpolation. When implementing IAM policy documents with these IAM variables, you may receive syntax errors from Terraform. You can escape the dollar character within your Terraform configration to prevent the error, e.g. `$${aws:username}`. + + + +- [Choosing a Configuration Method](#choosing-a-configuration-method) +- [Recommended Configuration Method Examples](#recommended-configuration-method-examples) + - [aws_iam_policy_document Data Source](#aws_iam_policy_document-data-source) + - [Multiple Line Heredoc Syntax](#multiple-line-heredoc-syntax) +- [Other Configuration Method Examples](#other-configuration-method-examples) + - [Single Line String Syntax](#single-line-string-syntax) + - [file() Interpolation Function](#file-interpolation-function) + - [template_file Data Source](#template_file-data-source) + + + +## Choosing a Configuration Method + +Terraform offers flexibility when creating configurations to match the architectural structure of teams and infrastructure. In most situations, using native functionality within Terraform and its providers will be the simplest to understand, eliminating context switching with other tooling, file sprawl, or differing file formats. Configuration examples of the available methods can be found later in the guide. + +The recommended approach to building AWS IAM policy documents within Terraform is the highly customizable [`aws_iam_policy_document` data source](#aws_iam_policy_document-data-source). 
A short list of benefits over other methods include: + +- Native Terraform configuration - no need to worry about JSON formatting or syntax +- Policy layering - create policy documents that combine and/or overwrite other policy documents +- Built-in policy error checking + +Otherwise in simple cases, such as a statically defined [assume role policy for an IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_permissions-to-switch.html), Terraform's [multiple line heredoc syntax](#multiple-line-heredoc-syntax) allows the easiest formatting without any indirection of a separate data source configuration or file. + +Additional methods are available, such [single line string syntax](#single-line-string-syntax), the [file() interpolation function](#file-interpolation-function), and the [template_file data source](#template_file-data-source), however their usage is discouraged due to their complexity. + +## Recommended Configuration Method Examples + +These configuration methods are the simplest and most powerful within Terraform. + +### aws_iam_policy_document Data Source + +-> For complete implementation information and examples, see the [`aws_iam_policy_document` data source documentation](/docs/providers/aws/d/iam_policy_document.html). + +```hcl +data "aws_iam_policy_document" "example" { + statement { + actions = ["*"] + resources = ["*"] + } +} + +resource "aws_iam_policy" "example" { + # ... other configuration ... + + policy = "${data.aws_iam_policy_document.example.json}" +} +``` + +### Multiple Line Heredoc Syntax + +Interpolation is available within the heredoc string if necessary. + +For example: + +```hcl +resource "aws_iam_policy" "example" { + # ... other configuration ... + policy = < **NOTE:** This upgrade guide is a work in progress and will not be completed until the release of version 2.0.0 of the provider later this year. Many of the topics discussed, except for the actual provider upgrade, can be performed using the most recent 1.X version of the provider. + +Version 2.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. This guide is intended to help with that process and focuses only on changes from version 1.X to version 2.0.0. + +Most of the changes outlined in this guide have been previously marked as deprecated in the Terraform plan/apply output throughout previous provider releases. These changes, such as deprecation notices, can always be found in the [Terraform AWS Provider CHANGELOG](https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md). 
+ +Upgrade topics: + + + +- [Provider Version Configuration](#provider-version-configuration) +- [Provider: Configuration](#provider-configuration) +- [Data Source: aws_ami](#data-source-aws_ami) +- [Data Source: aws_ami_ids](#data-source-aws_ami_ids) +- [Data Source: aws_iam_role](#data-source-aws_iam_role) +- [Data Source: aws_kms_secret](#data-source-aws_kms_secret) +- [Data Source: aws_region](#data-source-aws_region) +- [Resource: aws_api_gateway_api_key](#resource-aws_api_gateway_api_key) +- [Resource: aws_api_gateway_integration](#resource-aws_api_gateway_integration) +- [Resource: aws_api_gateway_integration_response](#resource-aws_api_gateway_integration_response) +- [Resource: aws_api_gateway_method](#resource-aws_api_gateway_method) +- [Resource: aws_api_gateway_method_response](#resource-aws_api_gateway_method_response) +- [Resource: aws_appautoscaling_policy](#resource-aws_appautoscaling_policy) +- [Resource: aws_autoscaling_policy](#resource-aws_autoscaling_policy) +- [Resource: aws_batch_compute_environment](#resource-aws_batch_compute_environment) +- [Resource: aws_cloudfront_distribution](#resource-aws_cloudfront_distribution) +- [Resource: aws_dx_lag](#resource-aws_dx_lag) +- [Resource: aws_ecs_service](#resource-aws_ecs_service) +- [Resource: aws_efs_file_system](#resource-aws_efs_file_system) +- [Resource: aws_elasticache_cluster](#resource-aws_elasticache_cluster) +- [Resource: aws_instance](#resource-aws_instance) +- [Resource: aws_network_acl](#resource-aws_network_acl) +- [Resource: aws_redshift_cluster](#resource-aws_redshift_cluster) +- [Resource: aws_route_table](#resource-aws_route_table) +- [Resource: aws_route53_zone](#resource-aws_route53_zone) +- [Resource: aws_wafregional_byte_match_set](#resource-aws_wafregional_byte_match_set) + + + +## Provider Version Configuration + +!> **WARNING:** This topic is placeholder documentation until version 2.0.0 is released later this year. + +-> Before upgrading to version 2.0.0, it is recommended to upgrade to the most recent 1.X version of the provider and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html) without unexpected changes or deprecation notices. + +It is recommended to use [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init`](https://www.terraform.io/docs/commands/init.html) to download the new version. + +For example, given this previous configuration: + +```hcl +provider "aws" { + # ... other configuration ... + + version = "~> 1.29.0" +} +``` + +An updated configuration: + +```hcl +provider "aws" { + # ... other configuration ... + + version = "~> 2.0.0" +} +``` + +## Provider: Configuration + +### skip_requesting_account_id Argument Now Required to Skip Account ID Lookup Errors + +If the provider is unable to determine the AWS account ID from a provider assume role configuration or the STS GetCallerIdentity call used to verify the credentials (if `skip_credentials_validation = false`), it will attempt to lookup the AWS account ID via EC2 metadata, IAM GetUser, IAM ListRoles, and STS GetCallerIdentity. Previously, the provider would silently allow the failure of all the above methods. + +The provider will now return an error to ensure operators understand the implications of the missing AWS account ID in the provider. 
+ +If necessary, the AWS account ID lookup logic can be skipped via: + +```hcl +provider "aws" { + # ... other configuration ... + + skip_requesting_account_id = true +} +``` + +## Data Source: aws_ami + +### owners Argument Now Required + +The `owners` argument is now required. Specifying `owner-id` or `owner-alias` under `filter` does not satisfy this requirement. + +## Data Source: aws_ami_ids + +### owners Argument Now Required + +The `owners` argument is now required. Specifying `owner-id` or `owner-alias` under `filter` does not satisfy this requirement. + +## Data Source: aws_iam_role + +### assume_role_policy_document Attribute Removal + +Switch your attribute references to the `assume_role_policy` attribute instead. + +### role_id Attribute Removal + +Switch your attribute references to the `unique_id` attribute instead. + +### role_name Argument Removal + +Switch your Terraform configuration to the `name` argument instead. + +## Data Source: aws_kms_secret + +### Data Source Removal and Migrating to aws_kms_secrets Data Source + +The implementation of the `aws_kms_secret` data source, prior to Terraform AWS provider version 2.0.0, used dynamic attribute behavior which is not supported with Terraform 0.12 and beyond (full details available in [this GitHub issue](https://github.com/terraform-providers/terraform-provider-aws/issues/5144)). + +Terraform configuration migration steps: + +* Change the data source type from `aws_kms_secret` to `aws_kms_secrets` +* Change any attribute reference (e.g. `"${data.aws_kms_secret.example.ATTRIBUTE}"`) from `.ATTRIBUTE` to `.plaintext["ATTRIBUTE"]` + +As an example, lets take the below sample configuration and migrate it. + +```hcl +# Below example configuration will not be supported in Terraform AWS provider version 2.0.0 + +data "aws_kms_secret" "example" { + secret { + # ... potentially other configuration ... + name = "master_password" + payload = "AQEC..." + } + + secret { + # ... potentially other configuration ... + name = "master_username" + payload = "AQEC..." + } +} + +resource "aws_rds_cluster" "example" { + # ... other configuration ... + master_password = "${data.aws_kms_secret.example.master_password}" + master_username = "${data.aws_kms_secret.example.master_username}" +} +``` + +Notice that the `aws_kms_secret` data source previously was taking the two `secret` configuration block `name` arguments and generating those as attribute names (`master_password` and `master_username` in this case). To remove the incompatible behavior, this updated version of the data source provides the decrypted value of each of those `secret` configuration block `name` arguments within a map attribute named `plaintext`. + +Updating the sample configuration from above: + +```hcl +data "aws_kms_secrets" "example" { + secret { + # ... potentially other configuration ... + name = "master_password" + payload = "AQEC..." + } + + secret { + # ... potentially other configuration ... + name = "master_username" + payload = "AQEC..." + } +} + +resource "aws_rds_cluster" "example" { + # ... other configuration ... + master_password = "${data.aws_kms_secrets.example.plaintext["master_password"]}" + master_username = "${data.aws_kms_secrets.example.plaintext["master_username"]}" +} +``` + +## Data Source: aws_region + +### current Argument Removal + +Simply remove `current = true` from your Terraform configuration. The data source defaults to the current provider region if no other filtering is enabled. 
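+
+As an illustrative sketch (following the "previous configuration / updated configuration" pattern used elsewhere in this guide), given this previous configuration:
+
+```hcl
+data "aws_region" "current" {
+  current = true
+}
+```
+
+An updated configuration:
+
+```hcl
+data "aws_region" "current" {}
+```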
+ +## Resource: aws_api_gateway_api_key + +### stage_key Argument Removal + +Since the API Gateway usage plans feature was launched on August 11, 2016, usage plans are now required to associate an API key with an API stage. To migrate your Terraform configuration, the AWS provider implements support for usage plans with the following resources: + +* [`aws_api_gateway_usage_plan`](/docs/providers/aws/r/api_gateway_usage_plan.html) +* [`aws_api_gateway_usage_plan_key`](/docs/providers/aws/r/api_gateway_usage_plan_key.html) + +## Resource: aws_api_gateway_integration + +### request_parameters_in_json Argument Removal + +Switch your Terraform configuration to the `request_parameters` argument instead. + +For example, given this previous configuration: + +```hcl +resource "aws_api_gateway_integration" "example" { + # ... other configuration ... + + request_parameters_in_json = < **NOTE:** Creating this resource will leave the certificate authority in a `PENDING_CERTIFICATE` status, which means it cannot yet issue certificates. To complete this setup, you must fully sign the certificate authority CSR available in the `certificate_signing_request` attribute and import the signed certificate outside of Terraform. Terraform can support another resource to manage that workflow automatically in the future. + +## Example Usage + +### Basic + +```hcl +resource "aws_acmpca_certificate_authority" "example" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "example.com" + } + } +} +``` + +### Enable Certificate Revocation List + +```hcl +resource "aws_s3_bucket" "example" { + bucket = "example" +} + +data "aws_iam_policy_document" "acmpca_bucket_access" { + statement { + actions = [ + "s3:GetBucketAcl", + "s3:GetBucketLocation", + "s3:PutObject", + "s3:PutObjectAcl", + ] + + resources = [ + "${aws_s3_bucket.example.arn}", + "${aws_s3_bucket.example.arn}/*", + ] + + principals { + identifiers = ["acm-pca.amazonaws.com"] + type = "Service" + } + } +} + +resource "aws_s3_bucket_policy" "example" { + bucket = "${aws_s3_bucket.example.id}" + policy = "${data.aws_iam_policy_document.acmpca_bucket_access.json}" +} + +resource "aws_acmpca_certificate_authority" "example" { + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = "SHA512WITHRSA" + + subject { + common_name = "example.com" + } + } + + revocation_configuration { + crl_configuration { + custom_cname = "crl.example.com" + enabled = true + expiration_in_days = 7 + s3_bucket_name = "${aws_s3_bucket.example.id}" + } + } + + depends_on = ["aws_s3_bucket_policy.example"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `certificate_authority_configuration` - (Required) Nested argument containing algorithms and certificate subject information. Defined below. +* `enabled` - (Optional) Whether the certificate authority is enabled or disabled. Defaults to `true`. +* `revocation_configuration` - (Optional) Nested argument containing revocation configuration. Defined below. +* `tags` - (Optional) Specifies a key-value map of user-defined tags that are attached to the certificate authority. +* `type` - (Optional) The type of the certificate authority. Currently, this must be `SUBORDINATE`. + +### certificate_authority_configuration + +* `key_algorithm` - (Required) Type of the public key algorithm and size, in bits, of the key pair that your key pair creates when it issues a certificate. 
Valid values can be found in the [ACM PCA Documentation](https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_CertificateAuthorityConfiguration.html).
+* `signing_algorithm` - (Required) Name of the algorithm your private CA uses to sign certificate requests. Valid values can be found in the [ACM PCA Documentation](https://docs.aws.amazon.com/acm-pca/latest/APIReference/API_CertificateAuthorityConfiguration.html).
+* `subject` - (Required) Nested argument that contains X.500 distinguished name information. At least one nested attribute must be specified.
+
+#### subject
+
+Contains information about the certificate subject. Identifies the entity that owns or controls the public key in the certificate. The entity can be a user, computer, device, or service.
+
+* `common_name` - (Optional) Fully qualified domain name (FQDN) associated with the certificate subject.
+* `country` - (Optional) Two-digit code that specifies the country in which the certificate subject is located.
+* `distinguished_name_qualifier` - (Optional) Disambiguating information for the certificate subject.
+* `generation_qualifier` - (Optional) Typically a qualifier appended to the name of an individual. Examples include Jr. for junior, Sr. for senior, and III for third.
+* `given_name` - (Optional) First name.
+* `initials` - (Optional) Concatenation that typically contains the first letter of the `given_name`, the first letter of the middle name if one exists, and the first letter of the `surname`.
+* `locality` - (Optional) The locality (such as a city or town) in which the certificate subject is located.
+* `organization` - (Optional) Legal name of the organization with which the certificate subject is affiliated.
+* `organizational_unit` - (Optional) A subdivision or unit of the organization (such as sales or finance) with which the certificate subject is affiliated.
+* `pseudonym` - (Optional) Typically a shortened version of a longer `given_name`. For example, Jonathan is often shortened to John. Elizabeth is often shortened to Beth, Liz, or Eliza.
+* `state` - (Optional) State in which the subject of the certificate is located.
+* `surname` - (Optional) Family name. In the US and the UK, for example, the surname of an individual is ordered last. In Asian cultures the surname is typically ordered first.
+* `title` - (Optional) A title, such as Mr. or Ms., that is prepended to the name to refer formally to the certificate subject.
+
+### revocation_configuration
+
+* `crl_configuration` - (Optional) Nested argument containing configuration of the certificate revocation list (CRL), if any, maintained by the certificate authority. Defined below.
+
+#### crl_configuration
+
+* `custom_cname` - (Optional) Name inserted into the certificate CRL Distribution Points extension that enables the use of an alias for the CRL distribution point. Use this value if you don't want the name of your S3 bucket to be public.
+* `enabled` - (Optional) Boolean value that specifies whether certificate revocation lists (CRLs) are enabled. Defaults to `false`.
+* `expiration_in_days` - (Required) Number of days until a certificate expires. Must be between 1 and 5000.
+* `s3_bucket_name` - (Optional) Name of the S3 bucket that contains the CRL. If you do not provide a value for the `custom_cname` argument, the name of your S3 bucket is placed into the CRL Distribution Points extension of the issued certificate. You must specify a bucket policy that allows ACM PCA to write the CRL to your bucket.
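+
+For reference, below is a sketch that pulls together several of the optional `subject` arguments listed above (all values are placeholders chosen for illustration, not requirements):
+
+```hcl
+resource "aws_acmpca_certificate_authority" "example" {
+  certificate_authority_configuration {
+    key_algorithm     = "RSA_4096"
+    signing_algorithm = "SHA512WITHRSA"
+
+    subject {
+      # Placeholder values - substitute your own X.500 subject information.
+      common_name         = "example.com"
+      organization        = "Example Corp"
+      organizational_unit = "Engineering"
+      country             = "US"
+      state               = "Washington"
+      locality            = "Seattle"
+    }
+  }
+}
+```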
+ +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Amazon Resource Name (ARN) of the certificate authority. +* `arn` - Amazon Resource Name (ARN) of the certificate authority. +* `certificate` - Base64-encoded certificate authority (CA) certificate. Only available after the certificate authority certificate has been imported. +* `certificate_chain` - Base64-encoded certificate chain that includes any intermediate certificates and chains up to root on-premises certificate that you used to sign your private CA certificate. The chain does not include your private CA certificate. Only available after the certificate authority certificate has been imported. +* `certificate_signing_request` - The base64 PEM-encoded certificate signing request (CSR) for your private CA certificate. +* `not_after` - Date and time after which the certificate authority is not valid. Only available after the certificate authority certificate has been imported. +* `not_before` - Date and time before which the certificate authority is not valid. Only available after the certificate authority certificate has been imported. +* `serial` - Serial number of the certificate authority. Only available after the certificate authority certificate has been imported. +* `status` - Status of the certificate authority. + +## Timeouts + +`aws_acmpca_certificate_authority` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) +configuration options: + +* `create` - (Default `1m`) How long to wait for a certificate authority to be created. + +## Import + +`aws_acmpca_certificate_authority` can be imported by using the certificate authority Amazon Resource Name (ARN), e.g. + +``` +$ terraform import aws_acmpca_certificate_authority.example arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012 +``` diff --git a/website/docs/r/ami.html.markdown b/website/docs/r/ami.html.markdown index ed9dd591d51..7a3d4b8f4f7 100644 --- a/website/docs/r/ami.html.markdown +++ b/website/docs/r/ami.html.markdown @@ -24,15 +24,15 @@ it's better to use `aws_ami_launch_permission` instead. # an EBS volume populated from a snapshot. It is assumed that such a snapshot # already exists with the id "snap-xxxxxxxx". resource "aws_ami" "example" { - name = "terraform-example" - virtualization_type = "hvm" - root_device_name = "/dev/xvda" - - ebs_block_device { - device_name = "/dev/xvda" - snapshot_id = "snap-xxxxxxxx" - volume_size = 8 - } + name = "terraform-example" + virtualization_type = "hvm" + root_device_name = "/dev/xvda" + + ebs_block_device { + device_name = "/dev/xvda" + snapshot_id = "snap-xxxxxxxx" + volume_size = 8 + } } ``` @@ -42,6 +42,7 @@ The following arguments are supported: * `name` - (Required) A region-unique name for the AMI. * `description` - (Optional) A longer, human-readable description for the AMI. +* `ena_support` - (Optional) Specifies whether enhanced networking with ENA is enabled. Defaults to `false`. * `root_device_name` - (Optional) The name of the root device (for example, `/dev/sda1`, or `/dev/xvda`). * `virtualization_type` - (Optional) Keyword to choose what virtualization mode created instances will use. Can be either "paravirtual" (the default) or "hvm". 
The choice of virtualization type @@ -104,7 +105,15 @@ The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/d ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the created AMI. * `root_snapshot_id` - The Snapshot ID for the root volume (for EBS-backed AMIs) + +## Import + +`aws_ami` can be imported using the ID of the AMI, e.g. + +``` +$ terraform import aws_ami.example ami-12345678 +``` diff --git a/website/docs/r/ami_copy.html.markdown b/website/docs/r/ami_copy.html.markdown index 293c806ed24..c07753d1c15 100644 --- a/website/docs/r/ami_copy.html.markdown +++ b/website/docs/r/ami_copy.html.markdown @@ -59,7 +59,7 @@ The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/d ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the created AMI. diff --git a/website/docs/r/ami_from_instance.html.markdown b/website/docs/r/ami_from_instance.html.markdown index 48e9d9e64d3..f2d5a7ce23c 100644 --- a/website/docs/r/ami_from_instance.html.markdown +++ b/website/docs/r/ami_from_instance.html.markdown @@ -56,7 +56,7 @@ The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/d ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the created AMI. diff --git a/website/docs/r/ami_launch_permission.html.markdown b/website/docs/r/ami_launch_permission.html.markdown index af1efb2c7ce..70c174cdd9b 100644 --- a/website/docs/r/ami_launch_permission.html.markdown +++ b/website/docs/r/ami_launch_permission.html.markdown @@ -28,6 +28,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - A combination of "`image_id`-`account_id`". diff --git a/website/docs/r/api_gateway_account.html.markdown b/website/docs/r/api_gateway_account.html.markdown index e576d633af0..0dca4e90fa9 100644 --- a/website/docs/r/api_gateway_account.html.markdown +++ b/website/docs/r/api_gateway_account.html.markdown @@ -92,4 +92,4 @@ API Gateway Accounts can be imported using the word `api-gateway-account`, e.g. ``` $ terraform import aws_api_gateway_account.demo api-gateway-account -``` \ No newline at end of file +``` diff --git a/website/docs/r/api_gateway_api_key.html.markdown b/website/docs/r/api_gateway_api_key.html.markdown index 84cac7d7ec6..c1a592e56e9 100644 --- a/website/docs/r/api_gateway_api_key.html.markdown +++ b/website/docs/r/api_gateway_api_key.html.markdown @@ -51,7 +51,7 @@ The following arguments are supported: ## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the API key * `created_date` - The creation date of the API key diff --git a/website/docs/r/api_gateway_base_path_mapping.html.markdown b/website/docs/r/api_gateway_base_path_mapping.html.markdown index 9a838f490e2..70de3fe7ae5 100644 --- a/website/docs/r/api_gateway_base_path_mapping.html.markdown +++ b/website/docs/r/api_gateway_base_path_mapping.html.markdown @@ -45,3 +45,19 @@ The following arguments are supported: * `api_id` - (Required) The id of the API to connect. 
* `stage_name` - (Optional) The name of a specific deployment stage to expose at the given path. If omitted, callers may select any stage by including its name as a path element after the base path. * `base_path` - (Optional) Path segment that must be prepended to the path when accessing the API via this mapping. If omitted, the API is exposed at the root of the given domain. + +## Import + +`aws_api_gateway_base_path_mapping` can be imported by using the domain name and base path, e.g. + +For empty `base_path` (e.g. root path (`/`)): + +``` +$ terraform import aws_api_gateway_base_path_mapping.example example.com/ +``` + +Otherwise: + +``` +$ terraform import aws_api_gateway_base_path_mapping.example example.com/base-path +``` diff --git a/website/docs/r/api_gateway_client_certificate.html.markdown b/website/docs/r/api_gateway_client_certificate.html.markdown index 105668dbc02..c41cff8241e 100644 --- a/website/docs/r/api_gateway_client_certificate.html.markdown +++ b/website/docs/r/api_gateway_client_certificate.html.markdown @@ -27,7 +27,7 @@ The following arguments are supported: ## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The identifier of the client certificate. * `created_date` - The date when the client certificate was created. diff --git a/website/docs/r/api_gateway_deployment.html.markdown b/website/docs/r/api_gateway_deployment.html.markdown index 9be934a15bf..528e5eba2ae 100644 --- a/website/docs/r/api_gateway_deployment.html.markdown +++ b/website/docs/r/api_gateway_deployment.html.markdown @@ -58,14 +58,14 @@ resource "aws_api_gateway_deployment" "MyDemoDeployment" { The following arguments are supported: * `rest_api_id` - (Required) The ID of the associated REST API -* `stage_name` - (Required) The name of the stage +* `stage_name` - (Required) The name of the stage. If the specified stage already exists, it will be updated to point to the new deployment. If the stage does not exist, a new one will be created and point to this deployment. Use `""` to point at the default stage. * `description` - (Optional) The description of the deployment * `stage_description` - (Optional) The description of the stage * `variables` - (Optional) A map that defines variables for the stage ## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the deployment * `invoke_url` - The URL to invoke the API pointing to the stage, diff --git a/website/docs/r/api_gateway_documentation_part.html.markdown b/website/docs/r/api_gateway_documentation_part.html.markdown index 2fde0e47970..3238e412e10 100644 --- a/website/docs/r/api_gateway_documentation_part.html.markdown +++ b/website/docs/r/api_gateway_documentation_part.html.markdown @@ -15,11 +15,12 @@ Provides a settings of an API Gateway Documentation Part. 
```hcl resource "aws_api_gateway_documentation_part" "example" { location { - type = "METHOD" + type = "METHOD" method = "GET" - path = "/example" + path = "/example" } - properties = "{\"description\":\"Example description\"}" + + properties = "{\"description\":\"Example description\"}" rest_api_id = "${aws_api_gateway_rest_api.example.id}" } @@ -60,4 +61,4 @@ API Gateway documentation_parts can be imported using `REST-API-ID/DOC-PART-ID`, ``` $ terraform import aws_api_gateway_documentation_part.example 5i4e1ko720/3oyy3t -``` \ No newline at end of file +``` diff --git a/website/docs/r/api_gateway_documentation_version.html.markdown b/website/docs/r/api_gateway_documentation_version.html.markdown index 5a12399ee6d..adae7759369 100644 --- a/website/docs/r/api_gateway_documentation_version.html.markdown +++ b/website/docs/r/api_gateway_documentation_version.html.markdown @@ -14,10 +14,10 @@ Provides a resource to manage an API Gateway Documentation Version. ```hcl resource "aws_api_gateway_documentation_version" "example" { - version = "example_version" + version = "example_version" rest_api_id = "${aws_api_gateway_rest_api.example.id}" description = "Example description" - depends_on = ["aws_api_gateway_documentation_part.example"] + depends_on = ["aws_api_gateway_documentation_part.example"] } resource "aws_api_gateway_rest_api" "example" { @@ -28,7 +28,8 @@ resource "aws_api_gateway_documentation_part" "example" { location { type = "API" } - properties = "{\"description\":\"Example\"}" + + properties = "{\"description\":\"Example\"}" rest_api_id = "${aws_api_gateway_rest_api.example.id}" } ``` @@ -51,4 +52,4 @@ API Gateway documentation versions can be imported using `REST-API-ID/VERSION`, ``` $ terraform import aws_api_gateway_documentation_version.example 5i4e1ko720/example-version -``` \ No newline at end of file +``` diff --git a/website/docs/r/api_gateway_domain_name.html.markdown b/website/docs/r/api_gateway_domain_name.html.markdown index 60d14854f51..ecd5fae34c0 100644 --- a/website/docs/r/api_gateway_domain_name.html.markdown +++ b/website/docs/r/api_gateway_domain_name.html.markdown @@ -15,12 +15,16 @@ a particular domain name. An API can be attached to a particular path under the registered domain name using [the `aws_api_gateway_base_path_mapping` resource](api_gateway_base_path_mapping.html). -Internally API Gateway creates a CloudFront distribution to -route requests on the given hostname. In addition to this resource -it's necessary to create a DNS record corresponding to the -given domain name which is an alias (either Route53 alias or -traditional CNAME) to the Cloudfront domain name exported in the -`cloudfront_domain_name` attribute. +API Gateway domains can be defined as either 'edge-optimized' or 'regional'. In an edge-optimized configuration, +API Gateway internally creates and manages a CloudFront distribution to route requests on the given hostname. In +addition to this resource it's necessary to create a DNS record corresponding to the given domain name which is an alias +(either Route53 alias or traditional CNAME) to the Cloudfront domain name exported in the `cloudfront_domain_name` +attribute. + +In a regional configuration, API Gateway does not create a CloudFront distribution to route requests to the API, though +a distribution can be created if needed. 
In either case, it is necessary to create a DNS record corresponding to the +given domain name which is an alias (either Route53 alias or traditional CNAME) to the regional domain name exported in +the `regional_domain_name` attribute. ~> **Note:** All arguments including the private key will be stored in the raw state as plain-text. [Read more about sensitive data in state](/docs/state/sensitive-data.html). @@ -59,15 +63,30 @@ The following arguments are supported: * `domain_name` - (Required) The fully-qualified domain name to register * `certificate_name` - (Optional) The unique name to use when registering this - cert as an IAM server certificate. Conflicts with `certificate_arn`. Required if `certificate_arn` is not set. + certificate as an IAM server certificate. Conflicts with `certificate_arn`, `regional_certificate_arn`, and + `regional_certificate_name`. Required if `certificate_arn` is not set. * `certificate_body` - (Optional) The certificate issued for the domain name - being registered, in PEM format. Conflicts with `certificate_arn`. + being registered, in PEM format. Only valid for `EDGE` endpoint configuration type. Conflicts with `certificate_arn`, `regional_certificate_arn`, and + `regional_certificate_name`. * `certificate_chain` - (Optional) The certificate for the CA that issued the certificate, along with any intermediate CA certificates required to - create an unbroken chain to a certificate trusted by the intended API clients. Conflicts with `certificate_arn`. + create an unbroken chain to a certificate trusted by the intended API clients. Only valid for `EDGE` endpoint configuration type. Conflicts with `certificate_arn`, + `regional_certificate_arn`, and `regional_certificate_name`. * `certificate_private_key` - (Optional) The private key associated with the - domain certificate given in `certificate_body`. Conflicts with `certificate_arn`. -* `certificate_arn` - (Optional) The ARN for an AWS-managed certificate. Conflicts with `certificate_name`, `certificate_body`, `certificate_chain` and `certificate_private_key`. + domain certificate given in `certificate_body`. Only valid for `EDGE` endpoint configuration type. Conflicts with `certificate_arn`, `regional_certificate_arn`, and `regional_certificate_name`. +* `certificate_arn` - (Optional) The ARN for an AWS-managed certificate. Used when an edge-optimized domain name is + desired. Conflicts with `certificate_name`, `certificate_body`, `certificate_chain`, `certificate_private_key`, + `regional_certificate_arn`, and `regional_certificate_name`. +* `endpoint_configuration` - (Optional) Nested argument defining API endpoint configuration including endpoint type. Defined below. +* `regional_certificate_arn` - (Optional) The ARN for an AWS-managed certificate. Used when a regional domain name is + desired. Conflicts with `certificate_arn`, `certificate_name`, `certificate_body`, `certificate_chain`, and + `certificate_private_key`. +* `regional_certificate_name` - (Optional) The user-friendly name of the certificate that will be used by regional endpoint for this domain name. Conflicts with `certificate_arn`, `certificate_name`, `certificate_body`, `certificate_chain`, and + `certificate_private_key`. + +### endpoint_configuration + +* `types` - (Required) A list of endpoint types. This resource currently only supports managing a single value. Valid values: `EDGE` or `REGIONAL`. If unspecified, defaults to `EDGE`. Must be declared as `REGIONAL` in non-Commercial partitions. 
Refer to the [documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/create-regional-api.html) for more information on the difference between edge-optimized and regional APIs. ## Attributes Reference @@ -77,5 +96,15 @@ In addition to the arguments, the following attributes are exported: * `certificate_upload_date` - The upload date associated with the domain certificate. * `cloudfront_domain_name` - The hostname created by Cloudfront to represent the distribution that implements this domain name mapping. -* `cloudfront_zone_id` - For convenience, the hosted zone id (`Z2FDTNDATAQYW2`) +* `cloudfront_zone_id` - For convenience, the hosted zone ID (`Z2FDTNDATAQYW2`) that can be used to create a Route53 alias record for the distribution. +* `regional_domain_name` - The hostname for the custom domain's regional endpoint. +* `regional_zone_id` - The hosted zone ID that can be used to create a Route53 alias record for the regional endpoint. + +## Import + +API Gateway domain names can be imported using their `name`, e.g. + +``` +$ terraform import aws_api_gateway_domain_name.example dev.example.com +``` diff --git a/website/docs/r/api_gateway_gateway_response.markdown b/website/docs/r/api_gateway_gateway_response.markdown index 41813f6d9bd..a7d5cb9f087 100644 --- a/website/docs/r/api_gateway_gateway_response.markdown +++ b/website/docs/r/api_gateway_gateway_response.markdown @@ -41,3 +41,11 @@ The following arguments are supported: * `status_code` - (Optional) The HTTP status code of the Gateway Response. * `response_parameters` - (Optional) A map specifying the templates used to transform the response body. * `response_templates` - (Optional) A map specifying the parameters (paths, query strings and headers) of the Gateway Response. + +## Import + +`aws_api_gateway_gateway_response` can be imported using `REST-API-ID/RESPONSE-TYPE`, e.g. 
+ +``` +$ terraform import aws_api_gateway_gateway_response.example 12345abcde/UNAUTHORIZED +``` diff --git a/website/docs/r/api_gateway_integration.html.markdown b/website/docs/r/api_gateway_integration.html.markdown index 9a940daa20d..159ce7f4480 100644 --- a/website/docs/r/api_gateway_integration.html.markdown +++ b/website/docs/r/api_gateway_integration.html.markdown @@ -38,6 +38,7 @@ resource "aws_api_gateway_integration" "MyDemoIntegration" { type = "MOCK" cache_key_parameters = ["method.request.path.param"] cache_namespace = "foobar" + timeout_milliseconds = 29000 request_parameters = { "integration.request.header.X-Authorization" = "'static'" @@ -59,6 +60,7 @@ EOF ```hcl # Variables variable "myregion" {} + variable "accountId" {} # API Gateway @@ -67,8 +69,8 @@ resource "aws_api_gateway_rest_api" "api" { } resource "aws_api_gateway_resource" "resource" { - path_part = "resource" - parent_id = "${aws_api_gateway_rest_api.api.root_resource_id}" + path_part = "resource" + parent_id = "${aws_api_gateway_rest_api.api.root_resource_id}" rest_api_id = "${aws_api_gateway_rest_api.api.id}" } @@ -96,7 +98,7 @@ resource "aws_lambda_permission" "apigw_lambda" { principal = "apigateway.amazonaws.com" # More: http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-control-access-using-iam-policies-to-invoke-api.html - source_arn = "arn:aws:execute-api:${var.myregion}:${var.accountId}:${aws_api_gateway_rest_api.api.id}/*/${aws_api_gateway_method.method.http_method}${aws_api_gateway_resource.resource.path}" + source_arn = "arn:aws:execute-api:${var.myregion}:${var.accountId}:${aws_api_gateway_rest_api.api.id}/*/${aws_api_gateway_method.method.http_method} ${aws_api_gateway_resource.resource.path}" } resource "aws_lambda_function" "lambda" { @@ -130,6 +132,71 @@ POLICY } ``` +## VPC Link + +```hcl +variable "name" {} +variable "subnet_id" {} + +resource "aws_lb" "test" { + name = "${var.name}" + internal = true + load_balancer_type = "network" + subnets = ["${var.subnet_id}"] +} + +resource "aws_api_gateway_vpc_link" "test" { + name = "${var.name}" + target_arns = ["${aws_lb.test.arn}"] +} + +resource "aws_api_gateway_rest_api" "test" { + name = "${var.name}" +} + +resource "aws_api_gateway_resource" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + parent_id = "${aws_api_gateway_rest_api.test.root_resource_id}" + path_part = "test" +} + +resource "aws_api_gateway_method" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "GET" + authorization = "NONE" + + request_models = { + "application/json" = "Error" + } +} + +resource "aws_api_gateway_integration" "test" { + rest_api_id = "${aws_api_gateway_rest_api.test.id}" + resource_id = "${aws_api_gateway_resource.test.id}" + http_method = "${aws_api_gateway_method.test.http_method}" + + request_templates = { + "application/json" = "" + "application/xml" = "#set($inputRoot = $input.path('$'))\n{ }" + } + + request_parameters = { + "integration.request.header.X-Authorization" = "'static'" + "integration.request.header.X-Foo" = "'Bar'" + } + + type = "HTTP" + uri = "https://www.google.de" + integration_http_method = "GET" + passthrough_behavior = "WHEN_NO_MATCH" + content_handling = "CONVERT_TO_TEXT" + + connection_type = "VPC_LINK" + connection_id = "${aws_api_gateway_vpc_link.test.id}" +} +``` + ## Argument Reference The following arguments are supported: @@ -143,7 +210,9 @@ The following arguments are supported: **Required** if `type` is 
`AWS`, `AWS_PROXY`, `HTTP` or `HTTP_PROXY`. Not all methods are compatible with all `AWS` integrations. e.g. Lambda function [can only be invoked](https://github.com/awslabs/aws-apigateway-importer/issues/9#issuecomment-129651005) via `POST`. -* `type` - (Required) The integration input's [type](https://docs.aws.amazon.com/apigateway/api-reference/resource/integration/). Valid values are `HTTP` (for HTTP backends), `MOCK` (not calling any real backend), `AWS` (for AWS services), `AWS_PROXY` (for Lambda proxy integration) and `HTTP_PROXY` (for HTTP proxy integration). +* `type` - (Required) The integration input's [type](https://docs.aws.amazon.com/apigateway/api-reference/resource/integration/). Valid values are `HTTP` (for HTTP backends), `MOCK` (not calling any real backend), `AWS` (for AWS services), `AWS_PROXY` (for Lambda proxy integration) and `HTTP_PROXY` (for HTTP proxy integration). An `HTTP` or `HTTP_PROXY` integration with a `connection_type` of `VPC_LINK` is referred to as a private integration and uses a VpcLink to connect API Gateway to a network load balancer of a VPC. +* `connection_type` - (Optional) The integration input's [connectionType](https://docs.aws.amazon.com/apigateway/api-reference/resource/integration/#connectionType). Valid values are `INTERNET` (default for connections through the public routable internet), and `VPC_LINK` (for private connections between API Gateway and a network load balancer in a VPC). +* `connection_id` - (Optional) The id of the VpcLink used for the integration. **Required** if `connection_type` is `VPC_LINK` * `uri` - (Optional) The input's URI (HTTP, AWS). **Required** if `type` is `HTTP` or `AWS`. For HTTP integrations, the URI must be a fully formed, encoded HTTP(S) URL according to the RFC-3986 specification . For AWS integrations, the URI should be of the form `arn:aws:apigateway:{region}:{subdomain.service|service}:{path|action}/{service_api}`. `region`, `subdomain` and `service` are used to determine the right endpoint. e.g. `arn:aws:apigateway:eu-west-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-west-1:012345678901:function:my-func/invocations` @@ -156,3 +225,12 @@ The following arguments are supported: * `cache_namespace` - (Optional) The integration's cache namespace. * `request_parameters_in_json` - **Deprecated**, use `request_parameters` instead. * `content_handling` - (Optional) Specifies how to handle request payload content type conversions. Supported values are `CONVERT_TO_BINARY` and `CONVERT_TO_TEXT`. If this property is not defined, the request payload will be passed through from the method request to integration request without modification, provided that the passthroughBehaviors is configured to support payload pass-through. +* `timeout_milliseconds` - (Optional) Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000 milliseconds. + +## Import + +`aws_api_gateway_integration` can be imported using `REST-API-ID/RESOURCE-ID/HTTP-METHOD`, e.g. 
+ +``` +$ terraform import aws_api_gateway_integration.example 12345abcde/67890fghij/GET +``` diff --git a/website/docs/r/api_gateway_integration_response.html.markdown b/website/docs/r/api_gateway_integration_response.html.markdown index d619d18ad3a..1c0e9a394b7 100644 --- a/website/docs/r/api_gateway_integration_response.html.markdown +++ b/website/docs/r/api_gateway_integration_response.html.markdown @@ -76,11 +76,19 @@ The following arguments are supported: * `http_method` - (Required) The HTTP method (`GET`, `POST`, `PUT`, `DELETE`, `HEAD`, `OPTIONS`, `ANY`) * `status_code` - (Required) The HTTP status code * `selection_pattern` - (Optional) Specifies the regular expression pattern used to choose -  an integration response based on the response from the backend. Setting this to `-` makes the integration the default one. -  If the backend is an `AWS` Lambda function, the AWS Lambda function error header is matched. + an integration response based on the response from the backend. Setting this to `-` makes the integration the default one. + If the backend is an `AWS` Lambda function, the AWS Lambda function error header is matched. For all other `HTTP` and `AWS` backends, the HTTP status code is matched. * `response_templates` - (Optional) A map specifying the templates used to transform the integration response body * `response_parameters` - (Optional) A map of response parameters that can be read from the backend response. For example: `response_parameters = { "method.response.header.X-Some-Header" = "integration.response.header.X-Some-Other-Header" }`, * `response_parameters_in_json` - **Deprecated**, use `response_parameters` instead. * `content_handling` - (Optional) Specifies how to handle request payload content type conversions. Supported values are `CONVERT_TO_BINARY` and `CONVERT_TO_TEXT`. If this property is not defined, the response payload will be passed through from the integration response to the method response without modification. + +## Import + +`aws_api_gateway_integration_response` can be imported using `REST-API-ID/RESOURCE-ID/HTTP-METHOD/STATUS-CODE`, e.g. 
+ +``` +$ terraform import aws_api_gateway_integration_response.example 12345abcde/67890fghij/GET/200 +``` diff --git a/website/docs/r/api_gateway_method.html.markdown b/website/docs/r/api_gateway_method.html.markdown index c85b2137d16..e1b1a08d086 100644 --- a/website/docs/r/api_gateway_method.html.markdown +++ b/website/docs/r/api_gateway_method.html.markdown @@ -32,6 +32,44 @@ resource "aws_api_gateway_method" "MyDemoMethod" { } ``` +## Usage with Cognito User Pool Authorizer +```hcl +variable "cognito_user_pool_name" {} + +data "aws_cognito_user_pools" "this" { + name = "${var.cognito_user_pool_name}" +} + +resource "aws_api_gateway_rest_api" "this" { + name = "with-authorizer" +} + +resource "aws_api_gateway_resource" "this" { + rest_api_id = "${aws_api_gateway_rest_api.this.id}" + parent_id = "${aws_api_gateway_rest_api.this.root_resource_id}" + path_part = "{proxy+}" +} + +resource "aws_api_gateway_authorizer" "this" { + name = "CognitoUserPoolAuthorizer" + type = "COGNITO_USER_POOLS" + rest_api_id = "${aws_api_gateway_rest_api.this.id}" + provider_arns = ["${data.aws_cognito_user_pools.this.arns}"] +} + +resource "aws_api_gateway_method" "any" { + rest_api_id = "${aws_api_gateway_rest_api.this.id}" + resource_id = "${aws_api_gateway_resource.this.id}" + http_method = "ANY" + authorization = "COGNITO_USER_POOLS" + authorizer_id = "${aws_api_gateway_authorizer.this.id}" + + request_parameters = { + "method.request.path.proxy" = true + } +} +``` + ## Argument Reference The following arguments are supported: @@ -39,20 +77,29 @@ The following arguments are supported: * `rest_api_id` - (Required) The ID of the associated REST API * `resource_id` - (Required) The API resource ID * `http_method` - (Required) The HTTP Method (`GET`, `POST`, `PUT`, `DELETE`, `HEAD`, `OPTIONS`, `ANY`) -* `authorization` - (Required) The type of authorization used for the method (`NONE`, `CUSTOM`, `AWS_IAM`) -* `authorizer_id` - (Optional) The authorizer id to be used when the authorization is `CUSTOM` +* `authorization` - (Required) The type of authorization used for the method (`NONE`, `CUSTOM`, `AWS_IAM`, `COGNITO_USER_POOLS`) +* `authorizer_id` - (Optional) The authorizer id to be used when the authorization is `CUSTOM` or `COGNITO_USER_POOLS` +* `authorization_scopes` - (Optional) The authorization scopes used when the authorization is `COGNITO_USER_POOLS` * `api_key_required` - (Optional) Specify if the method requires an API key * `request_models` - (Optional) A map of the API models used for the request's content type where key is the content type (e.g. `application/json`) and value is either `Error`, `Empty` (built-in models) or `aws_api_gateway_model`'s `name`. * `request_validator_id` - (Optional) The ID of a `aws_api_gateway_request_validator` * `request_parameters` - (Optional) A map of request query string parameters and headers that should be passed to the integration. - For example: + For example: ```hcl -request_parameters = { - "method.request.header.X-Some-Header" = true, - "method.request.querystring.some-query-param" = true, +request_parameters = { + "method.request.header.X-Some-Header" = true + "method.request.querystring.some-query-param" = true } ``` -would define that the header `X-Some-Header` and the query string `some-query-param` must be provided on the request, or +would define that the header `X-Some-Header` and the query string `some-query-param` must be provided on the request, or * `request_parameters_in_json` - **Deprecated**, use `request_parameters` instead. 
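+
+As a brief sketch building on the Cognito User Pool example above (the scope values are placeholders and depend on the scopes defined for your user pool), `authorization_scopes` can be combined with `COGNITO_USER_POOLS` authorization:
+
+```hcl
+resource "aws_api_gateway_method" "any" {
+  rest_api_id   = "${aws_api_gateway_rest_api.this.id}"
+  resource_id   = "${aws_api_gateway_resource.this.id}"
+  http_method   = "ANY"
+  authorization = "COGNITO_USER_POOLS"
+  authorizer_id = "${aws_api_gateway_authorizer.this.id}"
+
+  # Placeholder scopes - use the scopes configured on your Cognito user pool or resource server.
+  authorization_scopes = ["email", "openid"]
+
+  request_parameters = {
+    "method.request.path.proxy" = true
+  }
+}
+```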
+ +## Import + +`aws_api_gateway_method` can be imported using `REST-API-ID/RESOURCE-ID/HTTP-METHOD`, e.g. + +``` +$ terraform import aws_api_gateway_method.example 12345abcde/67890fghij/GET +``` diff --git a/website/docs/r/api_gateway_method_response.html.markdown b/website/docs/r/api_gateway_method_response.html.markdown index f474f90dfdb..cecd1162b88 100644 --- a/website/docs/r/api_gateway_method_response.html.markdown +++ b/website/docs/r/api_gateway_method_response.html.markdown @@ -59,3 +59,11 @@ The following arguments are supported: For example: `response_parameters = { "method.response.header.X-Some-Header" = true }` would define that the header `X-Some-Header` can be provided on the response. * `response_parameters_in_json` - **Deprecated**, use `response_parameters` instead. + +## Import + +`aws_api_gateway_method_response` can be imported using `REST-API-ID/RESOURCE-ID/HTTP-METHOD/STATUS-CODE`, e.g. + +``` +$ terraform import aws_api_gateway_method_response.example 12345abcde/67890fghij/GET/200 +``` diff --git a/website/docs/r/api_gateway_method_settings.html.markdown b/website/docs/r/api_gateway_method_settings.html.markdown index 3abec6ef4c4..e8153ad6f97 100644 --- a/website/docs/r/api_gateway_method_settings.html.markdown +++ b/website/docs/r/api_gateway_method_settings.html.markdown @@ -25,19 +25,19 @@ resource "aws_api_gateway_method_settings" "s" { } resource "aws_api_gateway_rest_api" "test" { - name = "MyDemoAPI" + name = "MyDemoAPI" description = "This is my API for demonstration purposes" } resource "aws_api_gateway_deployment" "test" { - depends_on = ["aws_api_gateway_integration.test"] + depends_on = ["aws_api_gateway_integration.test"] rest_api_id = "${aws_api_gateway_rest_api.test.id}" - stage_name = "dev" + stage_name = "dev" } resource "aws_api_gateway_stage" "test" { - stage_name = "prod" - rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "prod" + rest_api_id = "${aws_api_gateway_rest_api.test.id}" deployment_id = "${aws_api_gateway_deployment.test.id}" } diff --git a/website/docs/r/api_gateway_model.html.markdown b/website/docs/r/api_gateway_model.html.markdown index c4e0c9c801d..b29d21fe84c 100644 --- a/website/docs/r/api_gateway_model.html.markdown +++ b/website/docs/r/api_gateway_model.html.markdown @@ -44,6 +44,14 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the model + +## Import + +`aws_api_gateway_model` can be imported using `REST-API-ID/NAME`, e.g. + +``` +$ terraform import aws_api_gateway_model.example 12345abcde/example +``` diff --git a/website/docs/r/api_gateway_request_validator.html.markdown b/website/docs/r/api_gateway_request_validator.html.markdown new file mode 100644 index 00000000000..5b07dc8fed7 --- /dev/null +++ b/website/docs/r/api_gateway_request_validator.html.markdown @@ -0,0 +1,45 @@ +--- +layout: "aws" +page_title: "AWS: aws_api_gateway_request_validator" +sidebar_current: "docs-aws-resource-api-gateway-request-validator" +description: |- + Manages an API Gateway Request Validator. +--- + +# aws_api_gateway_request_validator + +Manages an API Gateway Request Validator. 
+ +## Example Usage + +```hcl +resource "aws_api_gateway_request_validator" "example" { + name = "example" + rest_api_id = "${aws_api_gateway_rest_api.example.id}" + validate_request_body = true + validate_request_parameters = true +} +``` + +## Argument Reference + +The following argument is supported: + +* `name` - (Required) The name of the request validator +* `rest_api_id` - (Required) The ID of the associated Rest API +* `validate_request_body` - (Optional) Boolean whether to validate request body. Defaults to `false`. +* `validate_request_parameters` - (Optional) Boolean whether to validate request parameters. Defaults to `false`. + +## Attribute Reference + +The following attribute is exported in addition to the arguments listed above: + +* `id` - The unique ID of the request validator + +## Import + +`aws_api_gateway_request_validator` can be imported using `REST-API-ID/REQUEST-VALIDATOR-ID`, e.g. + +``` +$ terraform import aws_api_gateway_request_validator.example 12345abcde/67890fghij +``` diff --git a/website/docs/r/api_gateway_resource.html.markdown b/website/docs/r/api_gateway_resource.html.markdown index 87a9310b800..ff8bb5a4312 100644 --- a/website/docs/r/api_gateway_resource.html.markdown +++ b/website/docs/r/api_gateway_resource.html.markdown @@ -39,3 +39,11 @@ In addition to all arguments above, the following attributes are exported: * `id` - The resource's identifier. * `path` - The complete path for this API resource, including all parent paths. + +## Import + +`aws_api_gateway_resource` can be imported using `REST-API-ID/RESOURCE-ID`, e.g. + +``` +$ terraform import aws_api_gateway_resource.example 12345abcde/67890fghij +``` diff --git a/website/docs/r/api_gateway_rest_api.html.markdown b/website/docs/r/api_gateway_rest_api.html.markdown index cebeb71b7d0..03ea9701669 100644 --- a/website/docs/r/api_gateway_rest_api.html.markdown +++ b/website/docs/r/api_gateway_rest_api.html.markdown @@ -12,6 +12,8 @@ Provides an API Gateway REST API. ## Example Usage +### Basic + ```hcl resource "aws_api_gateway_rest_api" "MyDemoAPI" { name = "MyDemoAPI" @@ -19,15 +21,30 @@ resource "aws_api_gateway_rest_api" "MyDemoAPI" { } ``` +### Regional Endpoint Type + +```hcl +resource "aws_api_gateway_rest_api" "example" { + name = "regional-example" + + endpoint_configuration { + types = ["REGIONAL"] + } +} +``` + ## Argument Reference The following arguments are supported: * `name` - (Required) The name of the REST API * `description` - (Optional) The description of the REST API +* `endpoint_configuration` - (Optional) Nested argument defining API endpoint configuration including endpoint type. Defined below. * `binary_media_types` - (Optional) The list of binary media types supported by the RestApi. By default, the RestApi supports only UTF-8-encoded text payloads. * `minimum_compression_size` - (Optional) Minimum response size to compress for the REST API. Integer between -1 and 10485760 (10MB). Setting a value greater than -1 will enable compression, -1 disables compression (default). * `body` - (Optional) An OpenAPI specification that defines the set of routes and integrations to create as part of the REST API. +* `policy` - (Optional) JSON formatted policy document that controls access to the API Gateway. For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html) +* `api_key_source` - (Optional) The source of the API key for requests. 
Valid values are HEADER (default) and AUTHORIZER. __Note__: If the `body` argument is provided, the OpenAPI specification will be used to configure the resources, methods and integrations for the Rest API. If this argument is provided, the following resources should not be managed as separate ones, as updates may cause manual resource updates to be overwritten: @@ -40,10 +57,27 @@ __Note__: If the `body` argument is provided, the OpenAPI specification will be * `aws_api_gateway_gateway_response` * `aws_api_gateway_model` +### endpoint_configuration + +* `types` - (Required) A list of endpoint types. This resource currently only supports managing a single value. Valid values: `EDGE`, `REGIONAL` or `PRIVATE`. If unspecified, defaults to `EDGE`. Must be declared as `REGIONAL` in non-Commercial partitions. Refer to the [documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/create-regional-api.html) for more information on the difference between edge-optimized and regional APIs. + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the REST API * `root_resource_id` - The resource ID of the REST API's root * `created_date` - The creation date of the REST API +* `execution_arn` - The execution ARN part to be used in [`lambda_permission`](/docs/providers/aws/r/lambda_permission.html)'s `source_arn` + when allowing API Gateway to invoke a Lambda function, + e.g. `arn:aws:execute-api:eu-west-2:123456789012:z4675bid1j`, which can be concatenated with allowed stage, method and resource path. + +## Import + +`aws_api_gateway_rest_api` can be imported by using the REST API ID, e.g. + +``` +$ terraform import aws_api_gateway_rest_api.example 12345abcde +``` + +~> **NOTE:** Resource import does not currently support the `body` attribute. diff --git a/website/docs/r/api_gateway_stage.html.markdown b/website/docs/r/api_gateway_stage.html.markdown index f32ab20d69e..1d4f82add68 100644 --- a/website/docs/r/api_gateway_stage.html.markdown +++ b/website/docs/r/api_gateway_stage.html.markdown @@ -14,20 +14,20 @@ Provides an API Gateway Stage. ```hcl resource "aws_api_gateway_stage" "test" { - stage_name = "prod" - rest_api_id = "${aws_api_gateway_rest_api.test.id}" + stage_name = "prod" + rest_api_id = "${aws_api_gateway_rest_api.test.id}" deployment_id = "${aws_api_gateway_deployment.test.id}" } resource "aws_api_gateway_rest_api" "test" { - name = "MyDemoAPI" + name = "MyDemoAPI" description = "This is my API for demonstration purposes" } resource "aws_api_gateway_deployment" "test" { - depends_on = ["aws_api_gateway_integration.test"] + depends_on = ["aws_api_gateway_integration.test"] rest_api_id = "${aws_api_gateway_rest_api.test.id}" - stage_name = "dev" + stage_name = "dev" } resource "aws_api_gateway_resource" "test" { @@ -50,7 +50,7 @@ resource "aws_api_gateway_method_settings" "s" { settings { metrics_enabled = true - logging_level = "INFO" + logging_level = "INFO" } } @@ -69,6 +69,7 @@ The following arguments are supported: * `rest_api_id` - (Required) The ID of the associated REST API * `stage_name` - (Required) The name of the stage * `deployment_id` - (Required) The ID of the deployment that the stage points to +* `access_log_settings` - (Optional) Enables access logs for the API stage. Detailed below. 
* `cache_cluster_enabled` - (Optional) Specifies whether a cache cluster is enabled for the stage * `cache_cluster_size` - (Optional) The size of the cache cluster for the stage, if enabled. Allowed values include `0.5`, `1.6`, `6.1`, `13.5`, `28.4`, `58.2`, `118` and `237`. @@ -76,3 +77,32 @@ The following arguments are supported: * `description` - (Optional) The description of the stage * `documentation_version` - (Optional) The version of the associated API documentation * `variables` - (Optional) A map that defines the stage variables +* `tags` - (Optional) A mapping of tags to assign to the resource. +* `xray_tracing_enabled` - (Optional) Whether active tracing with X-ray is enabled. Defaults to `false`. + +### Nested Blocks + +#### `access_log_settings` + +* `destination_arn` - (Required) ARN of the log group to send the logs to. Automatically removes trailing `:*` if present. +* `format` - (Required) The formatting and values recorded in the logs. +For more information on configuring the log format rules visit the AWS [documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html) + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the stage +* `invoke_url` - The URL to invoke the API pointing to the stage, + e.g. `https://z4675bid1j.execute-api.eu-west-2.amazonaws.com/prod` +* `execution_arn` - The execution ARN to be used in [`lambda_permission`](/docs/providers/aws/r/lambda_permission.html)'s `source_arn` + when allowing API Gateway to invoke a Lambda function, + e.g. `arn:aws:execute-api:eu-west-2:123456789012:z4675bid1j/prod` + +## Import + +`aws_api_gateway_stage` can be imported using `REST-API-ID/STAGE-NAME`, e.g. + +``` +$ terraform import aws_api_gateway_stage.example 12345abcde/example +``` diff --git a/website/docs/r/api_gateway_usage_plan.html.markdown b/website/docs/r/api_gateway_usage_plan.html.markdown index e5570152368..bc0b97094dd 100644 --- a/website/docs/r/api_gateway_usage_plan.html.markdown +++ b/website/docs/r/api_gateway_usage_plan.html.markdown @@ -17,16 +17,16 @@ resource "aws_api_gateway_rest_api" "myapi" { name = "MyDemoAPI" } -... +# ... resource "aws_api_gateway_deployment" "dev" { rest_api_id = "${aws_api_gateway_rest_api.myapi.id}" - stage_name = "dev" + stage_name = "dev" } resource "aws_api_gateway_deployment" "prod" { rest_api_id = "${aws_api_gateway_rest_api.myapi.id}" - stage_name = "prod" + stage_name = "prod" } resource "aws_api_gateway_usage_plan" "MyUsagePlan" { @@ -64,7 +64,7 @@ The API Gateway Usage Plan argument layout is a structure composed of several su ### Top-Level Arguments * `name` - (Required) The name of the usage plan. -* `description` - (Required) The description of a usage plan. +* `description` - (Optional) The description of a usage plan. * `api_stages` - (Optional) The associated [API stages](#api-stages-arguments) of the usage plan. * `quota_settings` - (Optional) The [quota settings](#quota-settings-arguments) of the usage plan. * `throttle_settings` - (Optional) The [throttling limits](#throttling-settings-arguments) of the usage plan. @@ -88,7 +88,7 @@ The API Gateway Usage Plan argument layout is a structure composed of several su ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the API resource * `name` - The name of the usage plan. 
@@ -102,6 +102,6 @@ The following attributes are exported: AWS API Gateway Usage Plan can be imported using the `id`, e.g. -``` +```sh $ terraform import aws_api_gateway_usage_plan.myusageplan ``` diff --git a/website/docs/r/api_gateway_usage_plan_key.html.markdown b/website/docs/r/api_gateway_usage_plan_key.html.markdown index ebccc2db459..355f55d3212 100644 --- a/website/docs/r/api_gateway_usage_plan_key.html.markdown +++ b/website/docs/r/api_gateway_usage_plan_key.html.markdown @@ -17,7 +17,7 @@ resource "aws_api_gateway_rest_api" "test" { name = "MyDemoAPI" } -... +# ... resource "aws_api_gateway_usage_plan" "myusageplan" { name = "my_usage_plan" @@ -49,7 +49,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Id of a usage plan key. * `key_id` - The identifier of the API gateway key resource. diff --git a/website/docs/r/api_gateway_vpc_link.html.markdown b/website/docs/r/api_gateway_vpc_link.html.markdown index deb53f107d8..e4228dad7af 100644 --- a/website/docs/r/api_gateway_vpc_link.html.markdown +++ b/website/docs/r/api_gateway_vpc_link.html.markdown @@ -14,8 +14,8 @@ Provides an API Gateway VPC Link. ```hcl resource "aws_lb" "example" { - name = "example" - internal = true + name = "example" + internal = true load_balancer_type = "network" subnet_mapping { @@ -24,7 +24,7 @@ resource "aws_lb" "example" { } resource "aws_api_gateway_vpc_link" "example" { - name = "example" + name = "example" description = "example description" target_arns = ["${aws_lb.example.arn}"] } @@ -40,6 +40,14 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The identifier of the VpcLink. + +## Import + +API Gateway VPC Link can be imported using the `id`, e.g. + +``` +$ terraform import aws_api_gateway_vpc_link.example +``` diff --git a/website/docs/r/app_cookie_stickiness_policy.html.markdown b/website/docs/r/app_cookie_stickiness_policy.html.markdown index 689acb6fe1e..3d2fd861752 100644 --- a/website/docs/r/app_cookie_stickiness_policy.html.markdown +++ b/website/docs/r/app_cookie_stickiness_policy.html.markdown @@ -47,7 +47,7 @@ balancer. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the policy. * `name` - The name of the stickiness policy. 
diff --git a/website/docs/r/appautoscaling_policy.html.markdown b/website/docs/r/appautoscaling_policy.html.markdown index 274781a0e66..3479e5e0d4e 100644 --- a/website/docs/r/appautoscaling_policy.html.markdown +++ b/website/docs/r/appautoscaling_policy.html.markdown @@ -54,10 +54,11 @@ resource "aws_appautoscaling_target" "ecs_target" { } resource "aws_appautoscaling_policy" "ecs_policy" { - name = "scale-down" - resource_id = "service/clusterName/serviceName" - scalable_dimension = "ecs:service:DesiredCount" - service_namespace = "ecs" + name = "scale-down" + policy_type = "StepScaling" + resource_id = "service/clusterName/serviceName" + scalable_dimension = "ecs:service:DesiredCount" + service_namespace = "ecs" step_scaling_policy_configuration { adjustment_type = "ChangeInCapacity" @@ -78,10 +79,10 @@ resource "aws_appautoscaling_policy" "ecs_policy" { ```hcl resource "aws_ecs_service" "ecs_service" { - name = "serviceName" - cluster = "clusterName" + name = "serviceName" + cluster = "clusterName" task_definition = "taskDefinitionFamily:1" - desired_count = 2 + desired_count = 2 lifecycle { ignore_changes = ["desired_count"] @@ -89,12 +90,42 @@ resource "aws_ecs_service" "ecs_service" { } ``` +### Aurora Read Replica Autoscaling + +```hcl +resource "aws_appautoscaling_target" "replicas" { + service_namespace = "rds" + scalable_dimension = "rds:cluster:ReadReplicaCount" + resource_id = "cluster:${aws_rds_cluster.example.id}" + min_capacity = 1 + max_capacity = 15 +} + +resource "aws_appautoscaling_policy" "replicas" { + name = "cpu-auto-scaling" + service_namespace = "${aws_appautoscaling_target.replicas.service_namespace}" + scalable_dimension = "${aws_appautoscaling_target.replicas.scalable_dimension}" + resource_id = "${aws_appautoscaling_target.replicas.resource_id}" + policy_type = "TargetTrackingScaling" + + target_tracking_scaling_policy_configuration { + predefined_metric_specification { + predefined_metric_type = "RDSReaderAverageCPUUtilization" + } + + target_value = 75 + scale_in_cooldown = 300 + scale_out_cooldown = 300 + } +} +``` + ## Argument Reference The following arguments are supported: * `name` - (Required) The name of the policy. -* `policy_type` - (Optional) For DynamoDB, only `TargetTrackingScaling` is supported. For any other service, only `StepScaling` is supported. Defaults to `StepScaling`. +* `policy_type` - (Optional) For DynamoDB, only `TargetTrackingScaling` is supported. For Amazon ECS, Spot Fleet, and Amazon RDS, both `StepScaling` and `TargetTrackingScaling` are supported. For any other service, only `StepScaling` is supported. Defaults to `StepScaling`. * `resource_id` - (Required) The resource type and unique identifier string for the resource associated with the scaling policy. Documentation can be found in the `ResourceId` parameter at: [AWS Application Auto Scaling API Reference](http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters) * `scalable_dimension` - (Required) The scalable dimension of the scalable target. Documentation can be found in the `ScalableDimension` parameter at: [AWS Application Auto Scaling API Reference](http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters) * `service_namespace` - (Required) The AWS service namespace of the scalable target. 
Documentation can be found in the `ServiceNamespace` parameter at: [AWS Application Auto Scaling API Reference](http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters) @@ -111,20 +142,29 @@ The following arguments are supported: * `min_adjustment_magnitude` - (Optional) The minimum number to adjust your scalable dimension as a result of a scaling activity. If the adjustment type is PercentChangeInCapacity, the scaling policy changes the scalable dimension of the scalable target by this amount. * `step_adjustment` - (Optional) A set of adjustments that manage scaling. These have the following structure: - ```hcl - step_adjustment { - metric_interval_lower_bound = 1.0 - metric_interval_upper_bound = 2.0 - scaling_adjustment = -1 - } - step_adjustment { - metric_interval_lower_bound = 2.0 - metric_interval_upper_bound = 3.0 - scaling_adjustment = 1 + ```hcl +resource "aws_appautoscaling_policy" "ecs_policy" { + # ... + + step_scaling_policy_configuration { + # insert config here + + step_adjustment { + metric_interval_lower_bound = 1.0 + metric_interval_upper_bound = 2.0 + scaling_adjustment = -1 + } + + step_adjustment { + metric_interval_lower_bound = 2.0 + metric_interval_upper_bound = 3.0 + scaling_adjustment = 1 + } } - ``` +} +``` - * `metric_interval_lower_bound` - (Optional) The lower bound for the difference between the alarm threshold and the CloudWatch metric. Without a value, AWS will treat this bound as infinity. + * `metric_interval_lower_bound` - (Optional) The lower bound for the difference between the alarm threshold and the CloudWatch metric. Without a value, AWS will treat this bound as negative infinity. * `metric_interval_upper_bound` - (Optional) The upper bound for the difference between the alarm threshold and the CloudWatch metric. Without a value, AWS will treat this bound as infinity. The upper bound must be greater than the lower bound. * `scaling_adjustment` - (Required) The number of members by which to scale, when the adjustment bounds are breached. A positive value scales up. A negative value scales down. 
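Because the `policy_type` note above says DynamoDB only supports `TargetTrackingScaling`, a target-tracking example may be a useful counterpart to the step-scaling ones. This is a minimal sketch that assumes an `aws_dynamodb_table.example` defined elsewhere; the capacity bounds and the 70 percent utilization target are arbitrary illustrative values:

```hcl
resource "aws_appautoscaling_target" "dynamodb_table_read_target" {
  max_capacity       = 100
  min_capacity       = 5
  resource_id        = "table/${aws_dynamodb_table.example.name}"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  service_namespace  = "dynamodb"
}

resource "aws_appautoscaling_policy" "dynamodb_table_read_policy" {
  name               = "dynamodb-read-target-tracking"
  policy_type        = "TargetTrackingScaling"
  resource_id        = "${aws_appautoscaling_target.dynamodb_table_read_target.resource_id}"
  scalable_dimension = "${aws_appautoscaling_target.dynamodb_table_read_target.scalable_dimension}"
  service_namespace  = "${aws_appautoscaling_target.dynamodb_table_read_target.service_namespace}"

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBReadCapacityUtilization"
    }

    # Scale read capacity to hold consumed/provisioned utilization near 70%.
    target_value = 70
  }
}
```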
diff --git a/website/docs/r/appautoscaling_scheduled_action.html.markdown b/website/docs/r/appautoscaling_scheduled_action.html.markdown index 4b300af9b5b..54e04f215a8 100644 --- a/website/docs/r/appautoscaling_scheduled_action.html.markdown +++ b/website/docs/r/appautoscaling_scheduled_action.html.markdown @@ -25,11 +25,11 @@ resource "aws_appautoscaling_target" "dynamodb" { } resource "aws_appautoscaling_scheduled_action" "dynamodb" { - name = "dynamodb" - service_namespace = "${aws_appautoscaling_target.dynamodb.service_namespace}" - resource_id = "${aws_appautoscaling_target.dynamodb.resource_id}" + name = "dynamodb" + service_namespace = "${aws_appautoscaling_target.dynamodb.service_namespace}" + resource_id = "${aws_appautoscaling_target.dynamodb.resource_id}" scalable_dimension = "${aws_appautoscaling_target.dynamodb.scalable_dimension}" - schedule = "at(2006-01-02T15:04:05)" + schedule = "at(2006-01-02T15:04:05)" scalable_target_action { min_capacity = 1 @@ -51,11 +51,11 @@ resource "aws_appautoscaling_target" "ecs" { } resource "aws_appautoscaling_scheduled_action" "ecs" { - name = "ecs" - service_namespace = "${aws_appautoscaling_target.ecs.service_namespace}" - resource_id = "${aws_appautoscaling_target.ecs.resource_id}" + name = "ecs" + service_namespace = "${aws_appautoscaling_target.ecs.service_namespace}" + resource_id = "${aws_appautoscaling_target.ecs.resource_id}" scalable_dimension = "${aws_appautoscaling_target.ecs.scalable_dimension}" - schedule = "at(2006-01-02T15:04:05)" + schedule = "at(2006-01-02T15:04:05)" scalable_target_action { min_capacity = 1 @@ -84,6 +84,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The Amazon Resource Name (ARN) of the scheduled action. diff --git a/website/docs/r/appautoscaling_target.html.markdown b/website/docs/r/appautoscaling_target.html.markdown index 1343d5d5eda..17a12ada214 100644 --- a/website/docs/r/appautoscaling_target.html.markdown +++ b/website/docs/r/appautoscaling_target.html.markdown @@ -8,7 +8,7 @@ description: |- # aws_appautoscaling_target -Provides an Application AutoScaling ScalableTarget resource. +Provides an Application AutoScaling ScalableTarget resource. To manage policies which get attached to the target, see the [`aws_appautoscaling_policy` resource](/docs/providers/aws/r/appautoscaling_policy.html). ## Example Usage @@ -18,7 +18,7 @@ Provides an Application AutoScaling ScalableTarget resource. 
resource "aws_appautoscaling_target" "dynamodb_table_read_target" { max_capacity = 100 min_capacity = 5 - resource_id = "table/tableName" + resource_id = "table/${aws_dynamodb_table.example.name}" role_arn = "${data.aws_iam_role.DynamoDBAutoscaleRole.arn}" scalable_dimension = "dynamodb:table:ReadCapacityUnits" service_namespace = "dynamodb" @@ -31,7 +31,7 @@ resource "aws_appautoscaling_target" "dynamodb_table_read_target" { resource "aws_appautoscaling_target" "dynamodb_index_read_target" { max_capacity = 100 min_capacity = 5 - resource_id = "table/tableName/index/indexName" + resource_id = "table/${aws_dynamodb_table.example.name}/index/${var.index_name}" role_arn = "${data.aws_iam_role.DynamoDBAutoscaleRole.arn}" scalable_dimension = "dynamodb:index:ReadCapacityUnits" service_namespace = "dynamodb" @@ -44,13 +44,25 @@ resource "aws_appautoscaling_target" "dynamodb_index_read_target" { resource "aws_appautoscaling_target" "ecs_target" { max_capacity = 4 min_capacity = 1 - resource_id = "service/clusterName/serviceName" + resource_id = "service/${aws_ecs_cluster.example.name}/${aws_ecs_service.example.name}" role_arn = "${var.ecs_iam_role}" scalable_dimension = "ecs:service:DesiredCount" service_namespace = "ecs" } ``` +### Aurora Read Replica Autoscaling + +```hcl +resource "aws_appautoscaling_target" "replicas" { + service_namespace = "rds" + scalable_dimension = "rds:cluster:ReadReplicaCount" + resource_id = "cluster:${aws_rds_cluster.example.id}" + min_capacity = 1 + max_capacity = 15 +} +``` + ## Argument Reference The following arguments are supported: diff --git a/website/docs/r/appsync_api_key.html.markdown b/website/docs/r/appsync_api_key.html.markdown new file mode 100644 index 00000000000..3874591d407 --- /dev/null +++ b/website/docs/r/appsync_api_key.html.markdown @@ -0,0 +1,48 @@ +--- +layout: "aws" +page_title: "AWS: aws_appsync_api_key" +sidebar_current: "docs-aws-resource-appsync-api-key" +description: |- + Provides an AppSync API Key. +--- + +# aws_appsync_api_key + +Provides an AppSync API Key. + +## Example Usage + +```hcl +resource "aws_appsync_graphql_api" "example" { + authentication_type = "API_KEY" + name = "example" +} + +resource "aws_appsync_api_key" "example" { + api_id = "${aws_appsync_graphql_api.example.id}" + expires = "2018-05-03T04:00:00Z" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `api_id` - (Required) The ID of the associated AppSync API +* `description` - (Optional) The API key description. Defaults to "Managed by Terraform". +* `expires` - (Optional) RFC3339 string representation of the expiry date. Rounded down to nearest hour. By default, it is 7 days from the date of creation. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - API Key ID (Formatted as ApiId:Key) +* `key` - The API key + +## Import + +`aws_appsync_api_key` can be imported using the AppSync API ID and key separated by `:`, e.g. + +``` +$ terraform import aws_appsync_api_key.example xxxxx:yyyyy +``` diff --git a/website/docs/r/appsync_datasource.html.markdown b/website/docs/r/appsync_datasource.html.markdown new file mode 100644 index 00000000000..bec043882b5 --- /dev/null +++ b/website/docs/r/appsync_datasource.html.markdown @@ -0,0 +1,139 @@ +--- +layout: "aws" +page_title: "AWS: aws_appsync_datasource" +sidebar_current: "docs-aws-resource-appsync-datasource" +description: |- + Provides an AppSync DataSource. +--- + +# aws_appsync_datasource + +Provides an AppSync DataSource. 
+ +## Example Usage + +```hcl +resource "aws_dynamodb_table" "example" { + name = "example" + read_capacity = 1 + write_capacity = 1 + hash_key = "UserId" + + attribute { + name = "UserId" + type = "S" + } +} + +resource "aws_iam_role" "example" { + name = "example" + + assume_role_policy = < **NOTE:** When Athena queries are executed, result files may be created in the specified bucket. Consider using `force_destroy` on the bucket too in order to avoid any problems when destroying the bucket. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The database name diff --git a/website/docs/r/athena_named_query.html.markdown b/website/docs/r/athena_named_query.html.markdown index 6050d3bf120..d6951642ae5 100644 --- a/website/docs/r/athena_named_query.html.markdown +++ b/website/docs/r/athena_named_query.html.markdown @@ -18,14 +18,14 @@ resource "aws_s3_bucket" "hoge" { } resource "aws_athena_database" "hoge" { - name = "users" + name = "users" bucket = "${aws_s3_bucket.hoge.bucket}" } resource "aws_athena_named_query" "foo" { - name = "bar" + name = "bar" database = "${aws_athena_database.hoge.name}" - query = "SELECT * FROM ${aws_athena_database.hoge.name} limit 10;" + query = "SELECT * FROM ${aws_athena_database.hoge.name} limit 10;" } ``` @@ -40,7 +40,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The unique ID of the query. diff --git a/website/docs/r/autoscaling_group.html.markdown b/website/docs/r/autoscaling_group.html.markdown index ddd2f9ff788..aba25d74632 100644 --- a/website/docs/r/autoscaling_group.html.markdown +++ b/website/docs/r/autoscaling_group.html.markdown @@ -10,6 +10,8 @@ description: |- Provides an AutoScaling Group resource. +-> **Note:** You must specify either `launch_configuration`, `launch_template`, or `mixed_instances_policy`. 
+ ## Example Usage ```hcl @@ -19,7 +21,6 @@ resource "aws_placement_group" "test" { } resource "aws_autoscaling_group" "bar" { - availability_zones = ["us-east-1a"] name = "foobar3-terraform-test" max_size = 5 min_size = 2 @@ -29,6 +30,7 @@ resource "aws_autoscaling_group" "bar" { force_delete = true placement_group = "${aws_placement_group.test.id}" launch_configuration = "${aws_launch_configuration.foobar.name}" + vpc_zone_identifier = ["${aws_subnet.example1.id}", "${aws_subnet.example2.id}"] initial_lifecycle_hook { name = "foobar" @@ -64,30 +66,85 @@ EOF } ``` +### With Latest Version Of Launch Template + +```hcl +resource "aws_launch_template" "foobar" { + name_prefix = "foobar" + image_id = "ami-1a2b3c" + instance_type = "t2.micro" +} + +resource "aws_autoscaling_group" "bar" { + availability_zones = ["us-east-1a"] + desired_capacity = 1 + max_size = 1 + min_size = 1 + + launch_template = { + id = "${aws_launch_template.foobar.id}" + version = "$$Latest" + } +} +``` + +### Mixed Instances Policy + +```hcl +resource "aws_launch_template" "example" { + name_prefix = "example" + image_id = "${data.aws_ami.example.id}" + instance_type = "c5.large" +} + +resource "aws_autoscaling_group" "example" { + availability_zones = ["us-east-1a"] + desired_capacity = 1 + max_size = 1 + min_size = 1 + + mixed_instances_policy { + launch_template { + launch_template_specification { + launch_template_id = "${aws_launch_template.example.id}" + } + + override { + instance_type = "c4.large" + } + + override { + instance_type = "c3.large" + } + } + } +} +``` + ## Interpolated tags ```hcl -variable extra_tags { +variable "extra_tags" { default = [ { - key = "Foo" - value = "Bar" + key = "Foo" + value = "Bar" propagate_at_launch = true }, { - key = "Baz" - value = "Bam" + key = "Baz" + value = "Bam" propagate_at_launch = true }, ] } resource "aws_autoscaling_group" "bar" { - availability_zones = ["us-east-1a"] - name = "foobar3-terraform-test" - max_size = 5 - min_size = 2 - launch_configuration = "${aws_launch_configuration.foobar.name}" + name = "foobar3-terraform-test" + max_size = 5 + min_size = 2 + launch_configuration = "${aws_launch_configuration.foobar.name}" + vpc_zone_identifier = ["${aws_subnet.example1.id}", "${aws_subnet.example2.id}"] tags = [ { @@ -122,10 +179,11 @@ The following arguments are supported: * `max_size` - (Required) The maximum size of the auto scale group. * `min_size` - (Required) The minimum size of the auto scale group. (See also [Waiting for Capacity](#waiting-for-capacity) below.) -* `availability_zones` - (Optional) A list of AZs to launch resources in. - Required only if you do not specify any `vpc_zone_identifier` +* `availability_zones` - (Required only for EC2-Classic) A list of one or more availability zones for the group. This parameter should not be specified when using `vpc_zone_identifier`. * `default_cooldown` - (Optional) The amount of time, in seconds, after a scaling activity completes before another scaling activity can start. -* `launch_configuration` - (Required) The name of the launch configuration to use. +* `launch_configuration` - (Optional) The name of the launch configuration to use. +* `launch_template` - (Optional) Nested argument with Launch template specification to use to launch instances. Defined below. +* `mixed_instances_policy` (Optional) Configuration block containing settings to define launch targets for Auto Scaling groups. Defined below. 
* `initial_lifecycle_hook` - (Optional) One or more [Lifecycle Hooks](http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) to attach to the autoscaling group **before** instances are launched. The @@ -172,8 +230,58 @@ Note that if you suspend either the `Launch` or `Terminate` process types, it ca * `protect_from_scale_in` (Optional) Allows setting instance protection. The autoscaling group will not select instances with this setting for terminination during scale in events. +* `service_linked_role_arn` (Optional) The ARN of the service-linked role that the ASG will use to call other AWS services + +### launch_template + +~> **NOTE:** Either `id` or `name` must be specified. + +The top-level `launch_template` block supports the following: + +* `id` - (Optional) The ID of the launch template. Conflicts with `name`. +* `name` - (Optional) The name of the launch template. Conflicts with `id`. +* `version` - (Optional) Template version. Can be version number, `$Latest`, or `$Default`. (Default: `$Default`). + +### mixed_instances_policy + +* `instances_distribution` - (Optional) Nested argument containing settings on how to mix on-demand and Spot instances in the Auto Scaling group. Defined below. +* `launch_template` - (Optional) Nested argument containing launch template settings along with the overrides to specify multiple instance types. Defined below. + +#### mixed_instances_policy instances_distribution + +This configuration block supports the following: + +* `on_demand_allocation_strategy` - (Optional) Strategy to use when launching on-demand instances. Valid values: `prioritized`. Default: `prioritized`. +* `on_demand_base_capacity` - (Optional) Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances. Default: `0`. +* `on_demand_percentage_above_base_capacity` - (Optional) Percentage split between on-demand and Spot instances above the base on-demand capacity. Default: `100`. +* `spot_allocation_strategy` - (Optional) How to allocate capacity across the Spot pools. Valid values: `lowest-price`. Default: `lowest-price`. +* `spot_instance_pools` - (Optional) Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify. Default: `1`. +* `spot_max_price` - (Optional) Maximum price per unit hour that the user is willing to pay for the Spot instances. Default: on-demand price. + +#### mixed_instances_policy launch_template + +This configuration block supports the following: + +* `launch_template_specification` - (Optional) Nested argument defines the Launch Template. Defined below. +* `overrides` - (Optional) List of nested arguments provides the ability to specify multiple instance types. This will override the same parameter in the launch template. For on-demand instances, Auto Scaling considers the order of preference of instance types to launch based on the order specified in the overrides list. Defined below. + +##### mixed_instances_policy launch_template launch_template_specification + +~> **NOTE:** Either `launch_template_id` or `launch_template_name` must be specified. + +This configuration block supports the following: + +* `launch_template_id` - (Optional) The ID of the launch template. Conflicts with `launch_template_name`. +* `launch_template_name` - (Optional) The name of the launch template. Conflicts with `launch_template_id`. +* `version` - (Optional) Template version. 
Can be version number, `$Latest`, or `$Default`. (Default: `$Default`). + +##### mixed_instances_policy launch_template overrides + +This configuration block supports the following: + +* `instance_type` - (Optional) Override the instance type in the Launch Template. -Tags support the following: +### tag and tags The `tag` attribute accepts exactly one tag declaration with the following fields: @@ -189,7 +297,7 @@ This allows the construction of dynamic lists of tags which is not possible usin ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The autoscaling group id. * `arn` - The ARN for this AutoScaling Group @@ -267,7 +375,7 @@ Setting `wait_for_capacity_timeout` to `"0"` disables ASG Capacity waiting. #### Waiting for ELB Capacity The second mechanism is optional, and affects ASGs with attached ELBs specified -via the `load_balancers` attribute. +via the `load_balancers` attribute or with ALBs specified with `target_group_arns`. The `min_elb_capacity` parameter causes Terraform to wait for at least the requested number of instances to show up `"InService"` in all attached ELBs diff --git a/website/docs/r/autoscaling_notification.html.markdown b/website/docs/r/autoscaling_notification.html.markdown index ed8b3bc3255..3cf5ea3eaa8 100644 --- a/website/docs/r/autoscaling_notification.html.markdown +++ b/website/docs/r/autoscaling_notification.html.markdown @@ -62,7 +62,7 @@ notifications. Acceptable values are documented [in the AWS documentation here][ ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `group_names` * `notifications` diff --git a/website/docs/r/autoscaling_policy.html.markdown b/website/docs/r/autoscaling_policy.html.markdown index 1bfc64e79f7..401b8d89030 100644 --- a/website/docs/r/autoscaling_policy.html.markdown +++ b/website/docs/r/autoscaling_policy.html.markdown @@ -47,6 +47,7 @@ The following arguments are supported: * `autoscaling_group_name` - (Required) The name of the autoscaling group. * `adjustment_type` - (Optional) Specifies whether the adjustment is an absolute number or a percentage of the current capacity. Valid values are `ChangeInCapacity`, `ExactCapacity`, and `PercentChangeInCapacity`. * `policy_type` - (Optional) The policy type, either "SimpleScaling", "StepScaling" or "TargetTrackingScaling". If this value isn't provided, AWS will default to "SimpleScaling." +* `estimated_instance_warmup` - (Optional) The estimated time, in seconds, until a newly launched instance will contribute CloudWatch metrics. Without a value, AWS will default to the group's specified cooldown period. The following arguments are only available to "SimpleScaling" type policies: @@ -56,18 +57,18 @@ The following arguments are only available to "SimpleScaling" type policies: The following arguments are only available to "StepScaling" type policies: * `metric_aggregation_type` - (Optional) The aggregation type for the policy's metrics. Valid values are "Minimum", "Maximum", and "Average". Without a value, AWS will treat the aggregation type as "Average". -* `estimated_instance_warmup` - (Optional) The estimated time, in seconds, until a newly launched instance will contribute CloudWatch metrics. Without a value, AWS will default to the group's specified cooldown period. * `step_adjustments` - (Optional) A set of adjustments that manage group scaling. 
These have the following structure: ```hcl step_adjustment { - scaling_adjustment = -1 + scaling_adjustment = -1 metric_interval_lower_bound = 1.0 metric_interval_upper_bound = 2.0 } + step_adjustment { - scaling_adjustment = 1 + scaling_adjustment = 1 metric_interval_lower_bound = 2.0 metric_interval_upper_bound = 3.0 } @@ -95,18 +96,22 @@ target_tracking_configuration { predefined_metric_specification { predefined_metric_type = "ASGAverageCPUUtilization" } + target_value = 40.0 } + target_tracking_configuration { customized_metric_specification { metric_dimension { - name = "fuga" + name = "fuga" value = "fuga" } + metric_name = "hoge" - namespace = "hoge" - statistic = "Average" + namespace = "hoge" + statistic = "Average" } + target_value = 40.0 } ``` diff --git a/website/docs/r/batch_compute_environment.html.markdown b/website/docs/r/batch_compute_environment.html.markdown index 796f80f5648..48de9e95fcc 100644 --- a/website/docs/r/batch_compute_environment.html.markdown +++ b/website/docs/r/batch_compute_environment.html.markdown @@ -21,6 +21,7 @@ For information about compute environment, see [Compute Environments][2] . ```hcl resource "aws_iam_role" "ecs_instance_role" { name = "ecs_instance_role" + assume_role_policy = <` + * `TagKeyValue` + +Refer to [AWS CostFilter documentation](http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/data-type-filter.html) for further detail. + +## Import + +Budgets can be imported using `AccountID:BudgetName`, e.g. + +`$ terraform import aws_budgets_budget.myBudget 123456789012:myBudget` diff --git a/website/docs/r/cloud9_environment_ec2.html.markdown b/website/docs/r/cloud9_environment_ec2.html.markdown index 8cbb6c03fdc..4907bbe8db9 100644 --- a/website/docs/r/cloud9_environment_ec2.html.markdown +++ b/website/docs/r/cloud9_environment_ec2.html.markdown @@ -15,7 +15,7 @@ Provides a Cloud9 EC2 Development Environment. ```hcl resource "aws_cloud9_environment_ec2" "example" { instance_type = "t2.micro" - name = "example-env" + name = "example-env" } ``` diff --git a/website/docs/r/cloudformation_stack.html.markdown b/website/docs/r/cloudformation_stack.html.markdown index 0902a9756ec..a9b5ca40931 100644 --- a/website/docs/r/cloudformation_stack.html.markdown +++ b/website/docs/r/cloudformation_stack.html.markdown @@ -30,7 +30,7 @@ resource "aws_cloudformation_stack" "network" { } }, "Resources" : { - "my-vpc": { + "myVpc": { "Type" : "AWS::EC2::VPC", "Properties" : { "CidrBlock" : { "Ref" : "VPCCidr" }, @@ -59,7 +59,7 @@ The following arguments are supported: * `notification_arns` - (Optional) A list of SNS topic ARNs to publish stack related events. * `on_failure` - (Optional) Action to be taken if stack creation fails. This must be one of: `DO_NOTHING`, `ROLLBACK`, or `DELETE`. Conflicts with `disable_rollback`. -* `parameters` - (Optional) A list of Parameter structures that specify input parameters for the stack. +* `parameters` - (Optional) A map of Parameter structures that specify input parameters for the stack. * `policy_body` - (Optional) Structure containing the stack policy body. Conflicts w/ `policy_url`. * `policy_url` - (Optional) Location of a file containing the stack policy. @@ -70,7 +70,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - A unique identifier of the stack. * `outputs` - A map of outputs from the stack. 
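Because the `outputs` attribute above is a map, values declared in the stack template's `Outputs` section can be read directly from other configuration. A small sketch, assuming the `network` stack's template defines an output named `VPCId`; the output name is illustrative:

```hcl
output "cloudformation_vpc_id" {
  # "VPCId" must match an entry in the stack template's Outputs section.
  value = "${aws_cloudformation_stack.network.outputs["VPCId"]}"
}
```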
@@ -93,4 +93,4 @@ $ terraform import aws_cloudformation_stack.stack networking-stack - `create` - (Default `30 minutes`) Used for Creating Stacks - `update` - (Default `30 minutes`) Used for Stack modifications -- `delete` - (Default `30 minutes`) Used for destroying stacks. \ No newline at end of file +- `delete` - (Default `30 minutes`) Used for destroying stacks. diff --git a/website/docs/r/cloudfront_distribution.html.markdown b/website/docs/r/cloudfront_distribution.html.markdown index d9eddd2344b..472eddaa6b9 100644 --- a/website/docs/r/cloudfront_distribution.html.markdown +++ b/website/docs/r/cloudfront_distribution.html.markdown @@ -34,10 +34,14 @@ resource "aws_s3_bucket" "b" { } } +locals { + s3_origin_id = "myS3Origin" +} + resource "aws_cloudfront_distribution" "s3_distribution" { origin { - domain_name = "${aws_s3_bucket.b.bucket_domain_name}" - origin_id = "myS3Origin" + domain_name = "${aws_s3_bucket.b.bucket_regional_domain_name}" + origin_id = "${local.s3_origin_id}" s3_origin_config { origin_access_identity = "origin-access-identity/cloudfront/ABCDEFG1234567" @@ -60,7 +64,7 @@ resource "aws_cloudfront_distribution" "s3_distribution" { default_cache_behavior { allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] cached_methods = ["GET", "HEAD"] - target_origin_id = "myS3Origin" + target_origin_id = "${local.s3_origin_id}" forwarded_values { query_string = false @@ -76,6 +80,51 @@ resource "aws_cloudfront_distribution" "s3_distribution" { max_ttl = 86400 } + # Cache behavior with precedence 0 + ordered_cache_behavior { + path_pattern = "/content/immutable/*" + allowed_methods = ["GET", "HEAD", "OPTIONS"] + cached_methods = ["GET", "HEAD", "OPTIONS"] + target_origin_id = "${local.s3_origin_id}" + + forwarded_values { + query_string = false + headers = ["Origin"] + + cookies { + forward = "none" + } + } + + min_ttl = 0 + default_ttl = 86400 + max_ttl = 31536000 + compress = true + viewer_protocol_policy = "redirect-to-https" + } + + # Cache behavior with precedence 1 + ordered_cache_behavior { + path_pattern = "/content/*" + allowed_methods = ["GET", "HEAD", "OPTIONS"] + cached_methods = ["GET", "HEAD"] + target_origin_id = "${local.s3_origin_id}" + + forwarded_values { + query_string = false + + cookies { + forward = "none" + } + } + + min_ttl = 0 + default_ttl = 3600 + max_ttl = 86400 + compress = true + viewer_protocol_policy = "redirect-to-https" + } + price_class = "PriceClass_200" restrictions { @@ -105,8 +154,11 @@ of several sub-resources - these resources are laid out below. * `aliases` (Optional) - Extra CNAMEs (alternate domain names), if any, for this distribution. - * `cache_behavior` (Optional) - A [cache behavior](#cache-behavior-arguments) - resource for this distribution (multiples allowed). + * `cache_behavior` (Optional) - **Deprecated**, use `ordered_cache_behavior` instead. + + * `ordered_cache_behavior` (Optional) - An ordered list of [cache behaviors](#cache-behavior-arguments) + resource for this distribution. List from top to bottom ++ in order of precedence. The topmost cache behavior will have precedence 0. * `comment` (Optional) - Any comments you want to include about the distribution. @@ -172,6 +224,8 @@ of several sub-resources - these resources are laid out below. in the absence of an `Cache-Control max-age` or `Expires` header. Defaults to 1 day. 
+ * `field_level_encryption_id` (Optional) - Field level encryption configuration ID + * `forwarded_values` (Required) - The [forwarded values configuration](#forwarded-values-arguments) that specifies how CloudFront handles query strings, cookies and headers (maximum one). @@ -229,13 +283,32 @@ of several sub-resources - these resources are laid out below. Lambda@Edge allows you to associate an AWS Lambda Function with a predefined event. You can associate a single function per event type. See [What is Lambda@Edge](http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/what-is-lambda-at-edge.html) -for more information +for more information. - * `event_type` (Required) - The specific event to trigger this function. +Example configuration: + +```hcl +resource "aws_cloudfront_distribution" "example" { + # ... other configuration ... + + # lambda_function_association is also supported by default_cache_behavior + ordered_cache_behavior { + # ... other configuration ... + + lambda_function_association { + event_type = "viewer-request" + lambda_arn = "${aws_lambda_function.example.qualified_arn}" + include_body = false + } + } +} +``` + +* `event_type` (Required) - The specific event to trigger this function. Valid values: `viewer-request`, `origin-request`, `viewer-response`, `origin-response` - - * `lambda_arn` (Required) - ARN of the Lambda function. +* `lambda_arn` (Required) - ARN of the Lambda function. +* `include_body` (Optional) - When set to true it exposes the request body to the lambda function. Defaults to false. Valid values: `true`, `false`. ##### Cookies Arguments @@ -266,7 +339,7 @@ for more information #### Default Cache Behavior Arguments The arguments for `default_cache_behavior` are the same as for -[`cache_behavior`](#cache-behavior-arguments), except for the `path_pattern` +[`ordered_cache_behavior`](#cache-behavior-arguments), except for the `path_pattern` argument is not required. #### Logging Config Arguments @@ -372,7 +445,7 @@ The arguments of `geo_restriction` are: ## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The identifier for the distribution. For example: `EDFDVBD632BHDS5`. @@ -406,7 +479,7 @@ The following attributes are exported: [1]: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html -[2]: http://docs.aws.amazon.com/AmazonCloudFront/latest/APIReference/CreateDistribution.html +[2]: https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html [3]: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html [4]: http://www.iso.org/iso/country_codes/iso_3166_code_lists/country_names_and_code_elements.htm [5]: /docs/providers/aws/r/cloudfront_origin_access_identity.html diff --git a/website/docs/r/cloudfront_origin_access_identity.html.markdown b/website/docs/r/cloudfront_origin_access_identity.html.markdown index de129f940a0..8a09d744d7f 100644 --- a/website/docs/r/cloudfront_origin_access_identity.html.markdown +++ b/website/docs/r/cloudfront_origin_access_identity.html.markdown @@ -31,7 +31,7 @@ resource "aws_cloudfront_origin_access_identity" "origin_access_identity" { ## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The identifier for the distribution. For example: `EDFDVBD632BHDS5`. 
* `caller_reference` - Internal value used by CloudFront to allow future @@ -53,7 +53,7 @@ Normally, when referencing an origin access identity in CloudFront, you need to prefix the ID with the `origin-access-identity/cloudfront/` special path. The `cloudfront_access_identity_path` allows this to be circumvented. The below snippet demonstrates use with the `s3_origin_config` structure for the -[`aws_cloudfront_web_distribution`][3] resource: +[`aws_cloudfront_distribution`][3] resource: ```hcl s3_origin_config { @@ -72,7 +72,7 @@ you see this behaviour, use the `iam_arn` instead: data "aws_iam_policy_document" "s3_policy" { statement { actions = ["s3:GetObject"] - resources = ["${module.names.s3_endpoint_arn_base}/*"] + resources = ["${aws_s3_bucket.example.arn}/*"] principals { type = "AWS" @@ -82,7 +82,7 @@ data "aws_iam_policy_document" "s3_policy" { statement { actions = ["s3:ListBucket"] - resources = ["${module.names.s3_endpoint_arn_base}"] + resources = ["${aws_s3_bucket.example.arn}"] principals { type = "AWS" @@ -91,8 +91,8 @@ data "aws_iam_policy_document" "s3_policy" { } } -resource "aws_s3_bucket" "bucket" { - # ... +resource "aws_s3_bucket_policy" "example" { + bucket = "${aws_s3_bucket.example.id}" policy = "${data.aws_iam_policy_document.s3_policy.json}" } ``` diff --git a/website/docs/r/cloudfront_public_key.html.markdown b/website/docs/r/cloudfront_public_key.html.markdown new file mode 100644 index 00000000000..18fa64c6b6f --- /dev/null +++ b/website/docs/r/cloudfront_public_key.html.markdown @@ -0,0 +1,38 @@ +--- +layout: "aws" +page_title: "AWS: cloudfront_public_key" +sidebar_current: "docs-aws-resource-cloudfront-public-key" +description: |- + Provides a CloudFront Public Key which you add to CloudFront to use with features like field-level encryption. +--- + +# aws_cloudfront_public_key + +## Example Usage + +The following example below creates a CloudFront public key. + +```hcl +resource "aws_cloudfront_public_key" "example" { + comment = "test public key" + encoded_key = "${file("public_key.pem")}" + name = "test_key" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `comment` - (Optional) An optional comment about the public key. +* `encoded_key` - (Required) The encoded public key that you want to add to CloudFront to use with features like field-level encryption. +* `name` - (Optional) The name for the public key. By default generated by Terraform. +* `name_prefix` - (Optional) The name for the public key. Conflicts with `name`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `caller_reference` - Internal value used by CloudFront to allow future updates to the public key configuration. +* `etag` - The current version of the public key. For example: `E2QWRUHAPOMQZL`. +* `id` - The identifier for the public key. For example: `K3D5EWEUDCCXON`. diff --git a/website/docs/r/cloudhsm_v2_cluster.html.markdown b/website/docs/r/cloudhsm_v2_cluster.html.markdown new file mode 100644 index 00000000000..16953963a69 --- /dev/null +++ b/website/docs/r/cloudhsm_v2_cluster.html.markdown @@ -0,0 +1,86 @@ +--- +layout: "aws" +page_title: "AWS: cloudhsm_v2_cluster" +sidebar_current: "docs-aws-resource-cloudhsm-v2-cluster" +description: |- + Provides a CloudHSM v2 resource. +--- + +# aws_cloudhsm_v2_cluster + +Creates an Amazon CloudHSM v2 cluster. + +For information about CloudHSM v2, see the +[AWS CloudHSM User Guide][1] and the [Amazon +CloudHSM API Reference][2]. 
+ +~> **NOTE:** CloudHSM can take up to several minutes to be set up. +Practically no single attribute can be updated except TAGS. +If you need to delete a cluster, you have to remove its HSM modules first. +To initialize cluster you have to sign CSR and upload it. + +## Example Usage + +The following example below creates a CloudHSM cluster. + +```hcl +provider "aws" { + region = "${var.aws_region}" +} + +data "aws_availability_zones" "available" {} + +resource "aws_vpc" "cloudhsm2_vpc" { + cidr_block = "10.0.0.0/16" + + tags { + Name = "example-aws_cloudhsm_v2_cluster" + } +} + +resource "aws_subnet" "cloudhsm2_subnets" { + count = 2 + vpc_id = "${aws_vpc.cloudhsm2_vpc.id}" + cidr_block = "${element(var.subnets, count.index)}" + map_public_ip_on_launch = false + availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}" + + tags { + Name = "example-aws_cloudhsm_v2_cluster" + } +} + +resource "aws_cloudhsm_v2_cluster" "cloudhsm_v2_cluster" { + hsm_type = "hsm1.medium" + subnet_ids = ["${aws_subnet.cloudhsm2_subnets.*.id}"] + + tags { + Name = "example-aws_cloudhsm_v2_cluster" + } +} +``` +## Argument Reference + +The following arguments are supported: + +* `source_backup_identifier` - (Optional) The id of Cloud HSM v2 cluster backup to be restored. +* `hsm_type` - (Required) The type of HSM module in the cluster. Currently, only hsm1.medium is supported. +* `subnet_ids` - (Required) The IDs of subnets in which cluster will operate. + +## Attributes Reference + +The following attributes are exported: + +* `cluster_id` - The id of the CloudHSM cluster. +* `cluster_state` - The state of the cluster. +* `vpc_id` - The id of the VPC that the CloudHSM cluster resides in. +* `security_group_id` - The ID of the security group associated with the CloudHSM cluster. +* `cluster_certificates` - The list of cluster certificates. + * `cluster_certificates.0.cluster_certificate` - The cluster certificate issued (signed) by the issuing certificate authority (CA) of the cluster's owner. + * `cluster_certificates.0.cluster_csr` - The certificate signing request (CSR). Available only in UNINITIALIZED state. + * `cluster_certificates.0.aws_hardware_certificate` - The HSM hardware certificate issued (signed) by AWS CloudHSM. + * `cluster_certificates.0.hsm_certificate` - The HSM certificate issued (signed) by the HSM hardware. + * `cluster_certificates.0.manufacturer_hardware_certificate` - The HSM hardware certificate issued (signed) by the hardware manufacturer. + +[1]: https://docs.aws.amazon.com/cloudhsm/latest/userguide/introduction.html +[2]: https://docs.aws.amazon.com/cloudhsm/latest/APIReference/Welcome.html diff --git a/website/docs/r/cloudhsm_v2_hsm.html.markdown b/website/docs/r/cloudhsm_v2_hsm.html.markdown new file mode 100644 index 00000000000..6f0839c3858 --- /dev/null +++ b/website/docs/r/cloudhsm_v2_hsm.html.markdown @@ -0,0 +1,42 @@ +--- +layout: "aws" +page_title: "AWS: cloudhsm_v2_hsm" +sidebar_current: "docs-aws-resource-cloudhsm-v2-hsm" +description: |- + Provides a CloudHSM v2 HSM module resource. +--- + +# aws_cloudhsm_v2_hsm + +Creates an HSM module in Amazon CloudHSM v2 cluster. + +## Example Usage + +The following example below creates an HSM module in CloudHSM cluster. 
+ +```hcl +data "aws_cloudhsm_v2_cluster" "cluster" { + cluster_id = "${var.cloudhsm_cluster_id}" +} + +resource "aws_cloudhsm_v2_hsm" "cloudhsm_v2_hsm" { + subnet_id = "${data.aws_cloudhsm_v2_cluster.cluster.subnet_ids[0]}" + cluster_id = "${data.aws_cloudhsm_v2_cluster.cluster.cluster_id}" +} +``` +## Argument Reference + +The following arguments are supported: + +* `cluster_id` - (Required) The ID of the Cloud HSM v2 cluster to which the HSM module will be added. +* `subnet_id` - (Optional) The ID of the subnet in which the HSM module will be located. +* `availability_zone` - (Optional) The ID of the Availability Zone in which the HSM module will be located. Do not use together with `subnet_id`. +* `ip_address` - (Optional) The IP address of the HSM module. Must be within the CIDR of the selected subnet. + +## Attributes Reference + +The following attributes are exported: + +* `hsm_id` - The ID of the HSM module. +* `hsm_state` - The state of the HSM module. +* `hsm_eni_id` - The ID of the ENI interface allocated for the HSM module. diff --git a/website/docs/r/cloudtrail.html.markdown b/website/docs/r/cloudtrail.html.markdown index ddb2d8542c7..6b3eb02cb66 100644 --- a/website/docs/r/cloudtrail.html.markdown +++ b/website/docs/r/cloudtrail.html.markdown @@ -17,15 +17,6 @@ Provides a CloudTrail resource. Enable CloudTrail to capture all compatible management events in region. For capturing events from services like IAM, `include_global_service_events` must be enabled. -```hcl -resource "aws_cloudtrail" "example" { - name = "basic-example" - include_global_service_events = false -} -``` - -### Logging to S3 - ```hcl resource "aws_cloudtrail" "foobar" { name = "tf-trail-foobar" @@ -82,7 +73,7 @@ resource "aws_cloudtrail" "example" { # ... other configuration ... event_selector { - read_write_type = "All" + read_write_type = "All" include_management_events = true data_resource { @@ -100,7 +91,7 @@ resource "aws_cloudtrail" "example" { # ... other configuration ... event_selector { - read_write_type = "All" + read_write_type = "All" include_management_events = true data_resource { @@ -122,12 +113,15 @@ resource "aws_cloudtrail" "example" { # ... other configuration ... event_selector { - read_write_type = "All" + read_write_type = "All" include_management_events = true data_resource { - type = "AWS::S3::Object" - values = ["${data.aws_s3_bucket.important-bucket.arn}"] + type = "AWS::S3::Object" + + # Make sure to append a trailing '/' to your ARN if you want + # to monitor all objects in a bucket. + values = ["${data.aws_s3_bucket.important-bucket.arn}/"] } } } @@ -169,12 +163,12 @@ For **event_selector** the following attributes are supported. #### Data Resource Arguments For **data_resource** the following attributes are supported. -* `type` (Required) - The resource type in witch you want to log data events. You can specify only the follwing value: "AWS::S3::Object", "AWS::Lambda::Function" +* `type` (Required) - The resource type in which you want to log data events. You can specify only the following values: "AWS::S3::Object", "AWS::Lambda::Function" * `values` (Required) - A list of ARNs for the specified S3 buckets and object prefixes. ## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The name of the trail. * `home_region` - The region in which the trail was created.
diff --git a/website/docs/r/cloudwatch_dashboard.html.markdown b/website/docs/r/cloudwatch_dashboard.html.markdown index 8bbfa6cfa12..adf2e2a34c6 100644 --- a/website/docs/r/cloudwatch_dashboard.html.markdown +++ b/website/docs/r/cloudwatch_dashboard.html.markdown @@ -14,8 +14,9 @@ Provides a CloudWatch Dashboard resource. ```hcl resource "aws_cloudwatch_dashboard" "main" { - dashboard_name = "my-dashboard" - dashboard_body = < **Note:** `input` and `input_path` are mutually exclusive options. @@ -62,7 +244,7 @@ resource "aws_kinesis_stream" "test_stream" { SNS topic invoked by a CloudWatch Events rule, you must setup the right permissions using [`aws_lambda_permission`](https://www.terraform.io/docs/providers/aws/r/lambda_permission.html) or [`aws_sns_topic.policy`](https://www.terraform.io/docs/providers/aws/r/sns_topic.html#policy). - More info [here](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/EventsResourceBasedPermissions.html). + More info [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/resource-based-policies-cwe.html). The following arguments are supported: @@ -75,6 +257,9 @@ The following arguments are supported: * `role_arn` - (Optional) The Amazon Resource Name (ARN) of the IAM role to be used for this target when the rule is triggered. Required if `ecs_target` is used. * `run_command_targets` - (Optional) Parameters used when you are using the rule to invoke Amazon EC2 Run Command. Documented below. A maximum of 5 are allowed. * `ecs_target` - (Optional) Parameters used when you are using the rule to invoke Amazon ECS Task. Documented below. A maximum of 1 are allowed. +* `batch_target` - (Optional) Parameters used when you are using the rule to invoke an Amazon Batch Job. Documented below. A maximum of 1 are allowed. +* `kinesis_target` - (Optional) Parameters used when you are using the rule to invoke an Amazon Kinesis Stream. Documented below. A maximum of 1 are allowed. +* `sqs_target` - (Optional) Parameters used when you are using the rule to invoke an Amazon SQS Queue. Documented below. A maximum of 1 are allowed. * `input_transformer` - (Optional) Parameters used when you are providing a custom input to a target based on certain event data. `run_command_targets` support the following: @@ -84,9 +269,36 @@ The following arguments are supported: `ecs_target` support the following: +* `group` - (Optional) Specifies an ECS task group for the task. The maximum length is 255 characters. +* `launch_type` - (Optional) Specifies the launch type on which your task is running. The launch type that you specify here must match one of the launch type (compatibilities) of the target task. Valid values are EC2 or FARGATE. +* `network_configuration` - (Optional) Use this if the ECS task uses the awsvpc network mode. This specifies the VPC subnets and security groups associated with the task, and whether a public IP address is to be used. Required if launch_type is FARGATE because the awsvpc mode is required for Fargate tasks. +* `platform_version` - (Optional) Specifies the platform version for the task. Specify only the numeric portion of the platform version, such as 1.1.0. This is used only if LaunchType is FARGATE. For more information about valid platform versions, see [AWS Fargate Platform Versions](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html). * `task_count` - (Optional) The number of tasks to create based on the TaskDefinition. The default is 1. 
* `task_definition_arn` - (Required) The ARN of the task definition to use if the event target is an Amazon ECS cluster. +`network_configuration` support the following: + +* `subnets` - (Required) The subnets associated with the task or service. +* `security_groups` - (Optional) The security groups associated with the task or service. If you do not specify a security group, the default security group for the VPC is used. +* `assign_public_ip` - (Optional) Assign a public IP address to the ENI (Fargate launch type only). Valid values are `true` or `false`. Default `false`. + +For more information, see [Task Networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) + +`batch_target` support the following: + +* `job_definition` - (Required) The ARN or name of the job definition to use if the event target is an AWS Batch job. This job definition must already exist. +* `job_name` - (Required) The name to use for this execution of the job, if the target is an AWS Batch job. +* `array_size` - (Optional) The size of the array, if this is an array batch job. Valid values are integers between 2 and 10,000. +* `job_attempts` - (Optional) The number of times to attempt to retry, if the job fails. Valid values are 1 to 10. + +`kinesis_target` support the following: + +* `partition_key_path` - (Optional) The JSON path to be extracted from the event and used as the partition key. + +`sqs_target` support the following: + +* `message_group_id` - (Optional) The FIFO message group ID to use as the target. + `input_transformer` support the following: * `input_paths` - (Optional) Key value pairs specified in the form of JSONPath (for example, time = $.time) diff --git a/website/docs/r/cloudwatch_log_destination.html.markdown b/website/docs/r/cloudwatch_log_destination.html.markdown index 8e97df1788e..62ac6293b47 100644 --- a/website/docs/r/cloudwatch_log_destination.html.markdown +++ b/website/docs/r/cloudwatch_log_destination.html.markdown @@ -30,7 +30,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The Amazon Resource Name (ARN) specifying the log destination. diff --git a/website/docs/r/cloudwatch_log_group.html.markdown b/website/docs/r/cloudwatch_log_group.html.markdown index a2c654a17ed..a20fab95ef5 100644 --- a/website/docs/r/cloudwatch_log_group.html.markdown +++ b/website/docs/r/cloudwatch_log_group.html.markdown @@ -38,7 +38,7 @@ permissions for the CMK whenever the encrypted data is requested. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The Amazon Resource Name (ARN) specifying the log group. diff --git a/website/docs/r/cloudwatch_log_metric_filter.html.markdown b/website/docs/r/cloudwatch_log_metric_filter.html.markdown index 01285150ad5..400558a10b5 100644 --- a/website/docs/r/cloudwatch_log_metric_filter.html.markdown +++ b/website/docs/r/cloudwatch_log_metric_filter.html.markdown @@ -50,6 +50,6 @@ The `metric_transformation` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The name of the metric filter. 
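A minimal sketch may help tie together the `ecs_target` and `network_configuration` blocks described for `aws_cloudwatch_event_target` above. The event rule, ECS cluster, task definition, subnet, and IAM role referenced here are assumed to be defined elsewhere in the configuration, and Fargate is used only to show why `network_configuration` is required:

```hcl
resource "aws_cloudwatch_event_target" "ecs_scheduled_task" {
  rule     = "${aws_cloudwatch_event_rule.every_hour.name}"
  arn      = "${aws_ecs_cluster.example.arn}"
  role_arn = "${aws_iam_role.ecs_events.arn}"

  ecs_target {
    launch_type         = "FARGATE"
    task_count          = 1
    task_definition_arn = "${aws_ecs_task_definition.example.arn}"

    # awsvpc networking is required for Fargate tasks.
    network_configuration {
      subnets          = ["${aws_subnet.private.id}"]
      assign_public_ip = false
    }
  }
}
```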
diff --git a/website/docs/r/cloudwatch_log_resource_policy.html.markdown b/website/docs/r/cloudwatch_log_resource_policy.html.markdown index fef369ee37d..8a1f699efac 100644 --- a/website/docs/r/cloudwatch_log_resource_policy.html.markdown +++ b/website/docs/r/cloudwatch_log_resource_policy.html.markdown @@ -46,7 +46,7 @@ The following arguments are supported: ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The name of the CloudWatch log resource policy diff --git a/website/docs/r/cloudwatch_log_stream.html.markdown b/website/docs/r/cloudwatch_log_stream.html.markdown index 37c1b4fd3da..e14fab94025 100644 --- a/website/docs/r/cloudwatch_log_stream.html.markdown +++ b/website/docs/r/cloudwatch_log_stream.html.markdown @@ -32,6 +32,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: -* `arn` - The Amazon Resource Name (ARN) specifying the log stream. \ No newline at end of file +* `arn` - The Amazon Resource Name (ARN) specifying the log stream. diff --git a/website/docs/r/cloudwatch_log_subscription_filter.html.markdown b/website/docs/r/cloudwatch_log_subscription_filter.html.markdown index 16c65ff6275..ef91ec74daf 100644 --- a/website/docs/r/cloudwatch_log_subscription_filter.html.markdown +++ b/website/docs/r/cloudwatch_log_subscription_filter.html.markdown @@ -36,4 +36,4 @@ The following arguments are supported: ## Attributes Reference -No extra attributes are exported. \ No newline at end of file +No extra attributes are exported. diff --git a/website/docs/r/cloudwatch_metric_alarm.html.markdown b/website/docs/r/cloudwatch_metric_alarm.html.markdown index 5fac7f2314f..feea77767d1 100644 --- a/website/docs/r/cloudwatch_metric_alarm.html.markdown +++ b/website/docs/r/cloudwatch_metric_alarm.html.markdown @@ -68,6 +68,7 @@ for details about valid values. The following arguments are supported: * `alarm_name` - (Required) The descriptive name for the alarm. This name must be unique within the user's AWS account +* `arn` - The ARN of the cloudwatch metric alarm. * `comparison_operator` - (Required) The arithmetic operation to use when comparing the specified Statistic and Threshold. The specified Statistic value is used as the first operand. Either of the following is supported: `GreaterThanOrEqualToThreshold`, `GreaterThanThreshold`, `LessThanThreshold`, `LessThanOrEqualToThreshold`. * `evaluation_periods` - (Required) The number of periods over which data is compared to the specified threshold. * `metric_name` - (Required) The name for the alarm's associated metric. @@ -97,7 +98,7 @@ The following values are supported: `ignore`, and `evaluate`. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the health check diff --git a/website/docs/r/code_commit_repository.html.markdown b/website/docs/r/code_commit_repository.html.markdown deleted file mode 100644 index 70e1cc3758b..00000000000 --- a/website/docs/r/code_commit_repository.html.markdown +++ /dev/null @@ -1,49 +0,0 @@ ---- -layout: "aws" -page_title: "AWS: aws_codecommit_repository" -sidebar_current: "docs-aws-resource-codecommit-repository" -description: |- - Provides a CodeCommit Repository Resource. ---- - -# aws_codecommit_repository - -Provides a CodeCommit Repository Resource. 
- -~> **NOTE on CodeCommit Availability**: The CodeCommit is not yet rolled out -in all regions - available regions are listed -[the AWS Docs](https://docs.aws.amazon.com/general/latest/gr/rande.html#codecommit_region). - -## Example Usage - -```hcl -resource "aws_codecommit_repository" "test" { - repository_name = "MyTestRepository" - description = "This is the Sample App Repository" -} -``` - -## Argument Reference - -The following arguments are supported: - -* `repository_name` - (Required) The name for the repository. This needs to be less than 100 characters. -* `description` - (Optional) The description of the repository. This needs to be less than 1000 characters -* `default_branch` - (Optional) The default branch of the repository. The branch specified here needs to exist. - -## Attributes Reference - -The following attributes are exported: - -* `repository_id` - The ID of the repository -* `arn` - The ARN of the repository -* `clone_url_http` - The URL to use for cloning the repository over HTTPS. -* `clone_url_ssh` - The URL to use for cloning the repository over SSH. - -## Import - -Codecommit repository can be imported using repository name, e.g. - -``` -$ terraform import aws_codecommit_repository.imported ExistingRepo -``` diff --git a/website/docs/r/codebuild_project.html.markdown b/website/docs/r/codebuild_project.html.markdown index 33ef73c13a1..58abfe8ab8c 100644 --- a/website/docs/r/codebuild_project.html.markdown +++ b/website/docs/r/codebuild_project.html.markdown @@ -13,8 +13,13 @@ Provides a CodeBuild Project resource. ## Example Usage ```hcl -resource "aws_iam_role" "codebuild_role" { - name = "codebuild-role-" +resource "aws_s3_bucket" "example" { + bucket = "example" + acl = "private" +} + +resource "aws_iam_role" "example" { + name = "example" assume_role_policy = < **Note:** The AWS account that Terraform uses to create this resource *must* have authorized CodeBuild to access Bitbucket/GitHub's OAuth API in each applicable region. This is a manual step that must be done *before* creating webhooks with this resource. If OAuth is not configured, AWS will return an error similar to `ResourceNotFoundException: Could not find access token for server type github`. More information can be found in the CodeBuild User Guide for [Bitbucket](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-bitbucket-pull-request.html) and [GitHub](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-github-pull-request.html). + +~> **Note:** Further managing the automatically created Bitbucket/GitHub webhook with the `bitbucket_hook`/`github_repository_webhook` resource is only possible with importing that resource after creation of the `aws_codebuild_webhook` resource. The CodeBuild API does not ever provide the `secret` attribute for the `aws_codebuild_webhook` resource in this scenario. + +```hcl +resource "aws_codebuild_webhook" "example" { + project_name = "${aws_codebuild_project.example.name}" +} +``` + +### GitHub Enterprise + +When working with [GitHub Enterprise](https://enterprise.github.com/) source CodeBuild webhooks, the GHE repository webhook must be separately managed (e.g. manually or with the `github_repository_webhook` resource). + +More information creating webhooks with GitHub Enterprise can be found in the [CodeBuild User Guide](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-github-enterprise.html). 
+ +```hcl +resource "aws_codebuild_webhook" "example" { + project_name = "${aws_codebuild_project.example.name}" +} + +resource "github_repository_webhook" "example" { + active = true + events = ["push"] + name = "example" + repository = "${github_repository.example.name}" + + configuration { + url = "${aws_codebuild_webhook.example.payload_url}" + secret = "${aws_codebuild_webhook.example.secret}" + content_type = "json" + insecure_ssl = false + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `project_name` - (Required) The name of the build project. +* `branch_filter` - (Optional) A regular expression used to determine which branches get built. Default is all branches are built. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The name of the build project. +* `payload_url` - The CodeBuild endpoint where webhook events are sent. +* `secret` - The secret token of the associated repository. Not returned by the CodeBuild API for all source types. +* `url` - The URL to the webhook. + +~> **Note:** The `secret` attribute is only set on resource creation, so if the secret is manually rotated, terraform will not pick up the change on subsequent runs. In that case, the webhook resource should be tainted and re-created to get the secret back in sync. + +## Import + +CodeBuild Webhooks can be imported using the CodeBuild Project name, e.g. + +``` +$ terraform import aws_codebuild_webhook.example MyProjectName +``` diff --git a/website/docs/r/codecommit_repository.html.markdown b/website/docs/r/codecommit_repository.html.markdown new file mode 100644 index 00000000000..fee86b81a45 --- /dev/null +++ b/website/docs/r/codecommit_repository.html.markdown @@ -0,0 +1,49 @@ +--- +layout: "aws" +page_title: "AWS: aws_codecommit_repository" +sidebar_current: "docs-aws-resource-codecommit-repository" +description: |- + Provides a CodeCommit Repository Resource. +--- + +# aws_codecommit_repository + +Provides a CodeCommit Repository Resource. + +~> **NOTE on CodeCommit Availability**: The CodeCommit is not yet rolled out +in all regions - available regions are listed +[the AWS Docs](https://docs.aws.amazon.com/general/latest/gr/rande.html#codecommit_region). + +## Example Usage + +```hcl +resource "aws_codecommit_repository" "test" { + repository_name = "MyTestRepository" + description = "This is the Sample App Repository" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `repository_name` - (Required) The name for the repository. This needs to be less than 100 characters. +* `description` - (Optional) The description of the repository. This needs to be less than 1000 characters +* `default_branch` - (Optional) The default branch of the repository. The branch specified here needs to exist. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `repository_id` - The ID of the repository +* `arn` - The ARN of the repository +* `clone_url_http` - The URL to use for cloning the repository over HTTPS. +* `clone_url_ssh` - The URL to use for cloning the repository over SSH. + +## Import + +Codecommit repository can be imported using repository name, e.g. 
+ +``` +$ terraform import aws_codecommit_repository.imported ExistingRepo +``` diff --git a/website/docs/r/code_commit_trigger.html.markdown b/website/docs/r/codecommit_trigger.html.markdown similarity index 100% rename from website/docs/r/code_commit_trigger.html.markdown rename to website/docs/r/codecommit_trigger.html.markdown diff --git a/website/docs/r/codedeploy_app.html.markdown b/website/docs/r/codedeploy_app.html.markdown index 388c2266c33..1d17de0e15c 100644 --- a/website/docs/r/codedeploy_app.html.markdown +++ b/website/docs/r/codedeploy_app.html.markdown @@ -23,6 +23,7 @@ resource "aws_codedeploy_app" "foo" { The following arguments are supported: * `name` - (Required) The name of the application. +* `compute_platform` - (Optional) The compute platform can either be `Server` or `Lambda`. Default is `Server`. ## Attribute Reference @@ -30,3 +31,11 @@ The following arguments are exported: * `id` - Amazon's assigned ID for the application. * `name` - The application's name. + +## Import + +CodeDeploy Applications can be imported using the `name`, e.g. + +``` +$ terraform import aws_codedeploy_app.example my-application +``` diff --git a/website/docs/r/codedeploy_deployment_config.html.markdown b/website/docs/r/codedeploy_deployment_config.html.markdown index f5616d0c1e8..0294e46b231 100644 --- a/website/docs/r/codedeploy_deployment_config.html.markdown +++ b/website/docs/r/codedeploy_deployment_config.html.markdown @@ -69,7 +69,15 @@ When the type is `HOST_COUNT`, the value represents the minimum number of health ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The deployment group's config name. * `deployment_config_id` - The AWS Assigned deployment config id + +## Import + +CodeDeploy Deployment Configurations can be imported using the `deployment_config_name`, e.g. + +``` +$ terraform import aws_codedeploy_app.example my-deployment-config +``` diff --git a/website/docs/r/codedeploy_deployment_group.html.markdown b/website/docs/r/codedeploy_deployment_group.html.markdown index d2c977501f7..5589fcae253 100644 --- a/website/docs/r/codedeploy_deployment_group.html.markdown +++ b/website/docs/r/codedeploy_deployment_group.html.markdown @@ -33,35 +33,9 @@ resource "aws_iam_role" "example" { EOF } -resource "aws_iam_role_policy" "example" { - name = "example-policy" - role = "${aws_iam_role.example.id}" - - policy = </ +``` diff --git a/website/docs/r/cognito_user_pool_domain.markdown b/website/docs/r/cognito_user_pool_domain.markdown index ae999627c4e..5952ffcd23e 100644 --- a/website/docs/r/cognito_user_pool_domain.markdown +++ b/website/docs/r/cognito_user_pool_domain.markdown @@ -1,7 +1,7 @@ --- layout: "aws" page_title: "AWS: aws_cognito_user_pool_domain" -side_bar_current: "docs-aws-resource-cognito-user-pool-domain" +sidebar_current: "docs-aws-resource-cognito-user-pool-domain" description: |- Provides a Cognito User Pool Domain resource. --- @@ -12,9 +12,10 @@ Provides a Cognito User Pool Domain resource. 
## Example Usage +### Amazon Cognito domain ```hcl resource "aws_cognito_user_pool_domain" "main" { - domain = "example-domain" + domain = "example-domain" user_pool_id = "${aws_cognito_user_pool.example.id}" } @@ -22,6 +23,20 @@ resource "aws_cognito_user_pool" "example" { name = "example-pool" } ``` +### Custom Cognito domain +```hcl +resource "aws_cognito_user_pool_domain" "main" { + domain = "example-domain.exemple.com" + certificate_arn = "${aws_acm_certificate.cert.arn}" + user_pool_id = "${aws_cognito_user_pool.example.id}" +} + +resource "aws_cognito_user_pool" "example" { + name = "example-pool" +} +``` + + ## Argument Reference @@ -29,12 +44,21 @@ The following arguments are supported: * `domain` - (Required) The domain string. * `user_pool_id` - (Required) The user pool ID. +* `certificate_arn` - (Optional) The ARN of an ISSUED ACM certificate in us-east-1 for a custom domain. ## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `aws_account_id` - The AWS account ID for the user pool owner. * `cloudfront_distribution_arn` - The ARN of the CloudFront distribution. * `s3_bucket` - The S3 bucket where the static files for this domain are stored. * `version` - The app version. + +## Import + +Cognito User Pool Domains can be imported using the `domain`, e.g. + +``` +$ terraform import aws_cognito_user_pool_domain.main +``` diff --git a/website/docs/r/config_aggregate_authorization.markdown b/website/docs/r/config_aggregate_authorization.markdown new file mode 100644 index 00000000000..f6a18ea244c --- /dev/null +++ b/website/docs/r/config_aggregate_authorization.markdown @@ -0,0 +1,41 @@ +--- +layout: "aws" +page_title: "AWS: aws_config_aggregate_authorization" +sidebar_current: "docs-aws-resource-config-aggregate-authorization" +description: |- + Manages an AWS Config Aggregate Authorization. +--- + +# aws_config_aggregate_authorization + +Manages an AWS Config Aggregate Authorization + +## Example Usage + +```hcl +resource "aws_config_aggregate_authorization" "example" { + account_id = "123456789012" + region = "eu-west-2" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `account_id` - (Required) Account ID +* `region` - (Required) Region + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the authorization + +## Import + +Config aggregate authorizations can be imported using `account_id:region`, e.g. 
+ +``` +$ terraform import aws_config_authorization.example 123456789012:us-east-1 +``` diff --git a/website/docs/r/config_config_rule.html.markdown b/website/docs/r/config_config_rule.html.markdown index 019b8719b7f..b28f2818c91 100644 --- a/website/docs/r/config_config_rule.html.markdown +++ b/website/docs/r/config_config_rule.html.markdown @@ -125,7 +125,7 @@ Provides the rule owner (AWS or customer), the rule identifier, and the notifica ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The ARN of the config rule * `rule_id` - The ID of the config rule diff --git a/website/docs/r/config_configuration_aggregator.html.markdown b/website/docs/r/config_configuration_aggregator.html.markdown new file mode 100644 index 00000000000..33fa1cf1dce --- /dev/null +++ b/website/docs/r/config_configuration_aggregator.html.markdown @@ -0,0 +1,108 @@ +--- +layout: "aws" +page_title: "AWS: aws_config_configuration_aggregator" +sidebar_current: "docs-aws-resource-config-configuration-aggregator" +description: |- + Manages an AWS Config Configuration Aggregator. +--- + +# aws_config_configuration_aggregator + +Manages an AWS Config Configuration Aggregator + +## Example Usage + +### Account Based Aggregation + +```hcl +resource "aws_config_configuration_aggregator" "account" { + name = "example" + + account_aggregation_source { + account_ids = ["123456789012"] + regions = ["us-west-2"] + } +} +``` + +### Organization Based Aggregation + +```hcl +resource "aws_config_configuration_aggregator" "organization" { + depends_on = ["aws_iam_role_policy_attachment.organization"] + + name = "example" # Required + + organization_aggregation_source { + all_regions = true + role_arn = "${aws_iam_role.organization.arn}" + } +} + +resource "aws_iam_role" "organization" { + name = "example" + + assume_role_policy = < **Note:** If your source type is an organization, you must be signed in to the master account and all features must be enabled in your organization. AWS Config calls EnableAwsServiceAccess API to enable integration between AWS Config and AWS Organizations. + +* `all_regions` - (Optional) If true, aggregate existing AWS Config regions and future regions. +* `regions` - (Optional) List of source regions being aggregated. +* `role_arn` - (Required) ARN of the IAM role used to retrieve AWS Organization details associated with the aggregator account. + +Either `regions` or `all_regions` (as true) must be specified. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the aggregator + +## Import + +Configuration Aggregators can be imported using the name, e.g. 
+ +``` +$ terraform import aws_config_configuration_aggregator.example foo +``` diff --git a/website/docs/r/config_configuration_recorder.html.markdown b/website/docs/r/config_configuration_recorder.html.markdown index 1233dc1fced..d96577520aa 100644 --- a/website/docs/r/config_configuration_recorder.html.markdown +++ b/website/docs/r/config_configuration_recorder.html.markdown @@ -64,7 +64,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - Name of the recorder diff --git a/website/docs/r/config_configuration_recorder_status.html.markdown b/website/docs/r/config_configuration_recorder_status.html.markdown index bf40241d327..1b7394ef4c3 100644 --- a/website/docs/r/config_configuration_recorder_status.html.markdown +++ b/website/docs/r/config_configuration_recorder_status.html.markdown @@ -59,6 +59,29 @@ resource "aws_iam_role" "r" { } POLICY } + +resource "aws_iam_role_policy" "p" { + name = "awsconfig-example" + role = "${aws_iam_role.r.id}" + + policy = < **NOTE:** Removing the `replicate_source_db` attribute from an existing RDS Replicate database managed by Terraform will promote the database to a fully standalone database. +### S3 Import Options + +Full details on the core parameters and impacts are in the API Docs: [RestoreDBInstanceFromS3](http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromS3.html). Sample + +```hcl +resource "aws_db_instance" "db" { + s3_import { + source_engine = "mysql" + source_engine_version = "5.6" + bucket_name = "mybucket" + bucket_prefix = "backups" + ingestion_role = "arn:aws:iam::1234567890:role/role-xtrabackup-rds-restore" + } +} +``` + +* `bucket_name` - (Required) The bucket name where your backup is stored +* `bucket_prefix` - (Optional) Can be blank, but is the path to your backup +* `ingestion_role` - (Required) Role applied to load the data. +* `source_engine` - (Required, as of Feb 2018 only 'mysql' supported) Source engine for the backup +* `source_engine_version` - (Required, as of Feb 2018 only '5.6' supported) Version of the source engine used to make the backup + +This will not recreate the resource if the S3 object changes in some way. It's only used to initialize the database + ### Timeouts `aws_db_instance` provides the following @@ -190,9 +231,9 @@ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Ma ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: -* `address` - The address of the RDS instance. +* `address` - The hostname of the RDS instance. See also `endpoint` and `port`. * `arn` - The ARN of the RDS instance. * `allocated_storage` - The amount of allocated storage. * `availability_zone` - The availability zone of the instance. @@ -200,7 +241,9 @@ The following attributes are exported: * `backup_window` - The backup window. * `ca_cert_identifier` - Specifies the identifier of the CA certificate for the DB instance. -* `endpoint` - The connection endpoint. +* `domain` - The ID of the Directory Service Active Directory domain the instance is joined to +* `domain_iam_role_name` - The name of the IAM role to be used when making API calls to the Directory Service. +* `endpoint` - The connection endpoint in `address:port` format. * `engine` - The database engine. * `engine_version` - The database engine version. 
* `hosted_zone_id` - The canonical hosted zone ID of the DB instance (to be used diff --git a/website/docs/r/db_option_group.html.markdown b/website/docs/r/db_option_group.html.markdown index abf4ee9642f..7caa3fe2e97 100644 --- a/website/docs/r/db_option_group.html.markdown +++ b/website/docs/r/db_option_group.html.markdown @@ -2,16 +2,22 @@ layout: "aws" page_title: "AWS: aws_db_option_group" sidebar_current: "docs-aws-resource-db-option-group" +description: |- + Provides an RDS DB option group resource. --- # aws_db_option_group -Provides an RDS DB option group resource. +Provides an RDS DB option group resource. Documentation of the available options for various RDS engines can be found at: +* [MariaDB Options](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MariaDB.Options.html) +* [Microsoft SQL Server Options](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.html) +* [MySQL Options](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MySQL.Options.html) +* [Oracle Options](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.html) ## Example Usage ```hcl -resource "aws_db_option_group" "bar" { +resource "aws_db_option_group" "example" { name = "option-group-test-terraform" option_group_description = "Terraform Option Group" engine_name = "sqlserver-ee" @@ -26,6 +32,15 @@ resource "aws_db_option_group" "bar" { } } + option { + option_name = "SQLSERVER_BACKUP_RESTORE" + + option_settings { + name = "IAM_ROLE_ARN" + value = "${aws_iam_role.example.arn}" + } + } + option { option_name = "TDE" } @@ -51,6 +66,7 @@ Option blocks support the following: * `option_name` - (Required) The Name of the Option (e.g. MEMCACHED). * `option_settings` - (Optional) A list of option settings to apply. * `port` - (Optional) The Port number when connecting to the Option (e.g. 11211). +* `version` - (Optional) The version of the option (e.g. 13.1.0.0). * `db_security_group_memberships` - (Optional) A list of DB Security Groups for which the option is enabled. * `vpc_security_group_memberships` - (Optional) A list of VPC Security Groups for which the option is enabled. @@ -61,12 +77,11 @@ Option Settings blocks support the following: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The db option group name. * `arn` - The ARN of the db option group. - ## Timeouts `aws_db_option_group` provides the following diff --git a/website/docs/r/db_parameter_group.html.markdown b/website/docs/r/db_parameter_group.html.markdown index e4abdbccdc1..2855788940d 100644 --- a/website/docs/r/db_parameter_group.html.markdown +++ b/website/docs/r/db_parameter_group.html.markdown @@ -2,11 +2,18 @@ layout: "aws" page_title: "AWS: aws_db_parameter_group" sidebar_current: "docs-aws-resource-db-parameter-group" +description: |- + Provides an RDS DB parameter group resource. --- # aws_db_parameter_group -Provides an RDS DB parameter group resource. 
+Provides an RDS DB parameter group resource .Documentation of the available parameters for various RDS engines can be found at: +* [Aurora MySQL Parameters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Reference.html) +* [Aurora PostgreSQL Parameters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraPostgreSQL.Reference.html) +* [MariaDB Parameters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MariaDB.Parameters.html) +* [Oracle Parameters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ModifyInstance.Oracle.html#USER_ModifyInstance.Oracle.sqlnet) +* [PostgreSQL Parameters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.Parameters) ## Example Usage @@ -48,7 +55,7 @@ Parameter blocks support the following: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The db parameter group name. * `arn` - The ARN of the db parameter group. diff --git a/website/docs/r/db_security_group.html.markdown b/website/docs/r/db_security_group.html.markdown index 3c49e21cce0..4ec29aefefa 100644 --- a/website/docs/r/db_security_group.html.markdown +++ b/website/docs/r/db_security_group.html.markdown @@ -44,7 +44,7 @@ Ingress blocks support the following: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The db security group ID. * `arn` - The arn of the DB security group. diff --git a/website/docs/r/db_snapshot.html.markdown b/website/docs/r/db_snapshot.html.markdown index 58ef00ff37e..0539270579b 100644 --- a/website/docs/r/db_snapshot.html.markdown +++ b/website/docs/r/db_snapshot.html.markdown @@ -3,33 +3,33 @@ layout: "aws" page_title: "AWS: aws_db_snapshot" sidebar_current: "docs-aws-resource-db-snapshot" description: |- - Provides an DB Instance. + Manages a RDS database instance snapshot. --- # aws_db_snapshot -Creates a Snapshot of an DB Instance. +Manages a RDS database instance snapshot. For managing RDS database cluster snapshots, see the [`aws_db_cluster_snapshot` resource](/docs/providers/aws/r/db_cluster_snapshot.html). ## Example Usage ```hcl resource "aws_db_instance" "bar" { - allocated_storage = 10 - engine = "MySQL" - engine_version = "5.6.21" - instance_class = "db.t2.micro" - name = "baz" - password = "barbarbarbar" - username = "foo" - - maintenance_window = "Fri:09:00-Fri:09:30" - backup_retention_period = 0 - parameter_group_name = "default.mysql5.6" + allocated_storage = 10 + engine = "MySQL" + engine_version = "5.6.21" + instance_class = "db.t2.micro" + name = "baz" + password = "barbarbarbar" + username = "foo" + + maintenance_window = "Fri:09:00-Fri:09:30" + backup_retention_period = 0 + parameter_group_name = "default.mysql5.6" } resource "aws_db_snapshot" "test" { - db_instance_identifier = "${aws_db_instance.bar.id}" - db_snapshot_identifier = "testsnapshot1234" + db_instance_identifier = "${aws_db_instance.bar.id}" + db_snapshot_identifier = "testsnapshot1234" } ``` @@ -43,7 +43,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `allocated_storage` - Specifies the allocated storage size in gigabytes (GB). 
* `availability_zone` - Specifies the name of the Availability Zone the DB instance was located in at the time of the DB snapshot. @@ -59,4 +59,4 @@ The following attributes are exported: * `source_region` - The region that the DB snapshot was created in or copied from. * `status` - Specifies the status of this DB snapshot. * `storage_type` - Specifies the storage type associated with DB snapshot. -* `vpc_id` - Specifies the storage type associated with DB snapshot. \ No newline at end of file +* `vpc_id` - Specifies the storage type associated with DB snapshot. diff --git a/website/docs/r/db_subnet_group.html.markdown b/website/docs/r/db_subnet_group.html.markdown index 606238d4eb6..21f5af95210 100644 --- a/website/docs/r/db_subnet_group.html.markdown +++ b/website/docs/r/db_subnet_group.html.markdown @@ -35,7 +35,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The db subnet group name. * `arn` - The ARN of the db subnet group. diff --git a/website/docs/r/default_network_acl.html.markdown b/website/docs/r/default_network_acl.html.markdown index f417e030761..c04fb80132b 100644 --- a/website/docs/r/default_network_acl.html.markdown +++ b/website/docs/r/default_network_acl.html.markdown @@ -135,7 +135,7 @@ valid network mask. * `icmp_type` - (Optional) The ICMP type to be used. Default 0. * `icmp_code` - (Optional) The ICMP type code to be used. Default 0. -~> Note: For more information on ICMP types and codes, see here: http://www.nthelp.com/icmp.html +~> Note: For more information on ICMP types and codes, see here: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml ### Managing Subnets in the Default Network ACL @@ -160,6 +160,14 @@ adopted by the Default Network ACL. In order to avoid a reoccurring plan, they will need to be reassigned, destroyed, or added to the `subnet_ids` attribute of the `aws_default_network_acl` entry. +As an alternative to the above, you can also specify the following lifecycle configuration in your `aws_default_network_acl` resource: + +```hcl +lifecycle { + ignore_changes = ["subnet_ids"] +} +``` + ### Removing `aws_default_network_acl` from your configuration Each AWS VPC comes with a Default Network ACL that cannot be deleted. The `aws_default_network_acl` @@ -171,12 +179,12 @@ can resume managing them via the AWS Console. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the Default Network ACL * `vpc_id` - The ID of the associated VPC * `ingress` - Set of ingress rules * `egress` - Set of egress rules -* `subnet_ids` – IDs of associated Subnets +* `subnet_ids` – IDs of associated Subnets [aws-network-acls]: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html diff --git a/website/docs/r/default_route_table.html.markdown b/website/docs/r/default_route_table.html.markdown index f3d5254bc51..33507ff7912 100644 --- a/website/docs/r/default_route_table.html.markdown +++ b/website/docs/r/default_route_table.html.markdown @@ -82,7 +82,7 @@ the VPC's CIDR block to "local", is created implicitly and cannot be specified. 
## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the routing table diff --git a/website/docs/r/default_security_group.html.markdown b/website/docs/r/default_security_group.html.markdown index 631270042d5..ab99873a911 100644 --- a/website/docs/r/default_security_group.html.markdown +++ b/website/docs/r/default_security_group.html.markdown @@ -26,7 +26,7 @@ ingress and egress rules in the Security Group**. It then proceeds to create any configuration. This step is required so that only the rules specified in the configuration are created. -This resource treats it's inline rules as absolute; only the rules defined +This resource treats its inline rules as absolute; only the rules defined inline are created, and any additions/removals external to this resource will result in diff shown. For these reasons, this resource is incompatible with the `aws_security_group_rule` resource. @@ -98,7 +98,7 @@ removed. The following arguments are still supported: egress rule. Each egress block supports fields documented below. * `vpc_id` - (Optional, Forces new resource) The VPC ID. **Note that changing the `vpc_id` will _not_ restore any default security group rules that were -modified, added, or removed.** It will be left in it's current state +modified, added, or removed.** It will be left in its current state * `tags` - (Optional) A mapping of tags to assign to the resource. @@ -119,7 +119,7 @@ they are at the time of removal. You can resume managing them via the AWS Consol ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the security group * `vpc_id` - The VPC ID. diff --git a/website/docs/r/default_subnet.html.markdown b/website/docs/r/default_subnet.html.markdown index 3f3ba20638a..40151d38016 100644 --- a/website/docs/r/default_subnet.html.markdown +++ b/website/docs/r/default_subnet.html.markdown @@ -13,7 +13,7 @@ in the current region. The `aws_default_subnet` behaves differently from normal resources, in that Terraform does not _create_ this resource, but instead "adopts" it -into management. +into management. ## Example Usage @@ -23,9 +23,9 @@ Basic usage with tags: resource "aws_default_subnet" "default_az1" { availability_zone = "us-west-2a" - tags { - Name = "Default subnet for us-west-2a" - } + tags { + Name = "Default subnet for us-west-2a" + } } ``` @@ -33,9 +33,12 @@ resource "aws_default_subnet" "default_az1" { The arguments of an `aws_default_subnet` differ from `aws_subnet` resources. Namely, the `availability_zone` argument is required and the `vpc_id`, `cidr_block`, `ipv6_cidr_block`, -`map_public_ip_on_launch` and `assign_ipv6_address_on_creation` arguments are computed. -The following arguments are still supported: +and `assign_ipv6_address_on_creation` arguments are computed. +The following arguments are still supported: +* `map_public_ip_on_launch` - (Optional) Specify true to indicate + that instances launched into the subnet should be assigned + a public IP address. * `tags` - (Optional) A mapping of tags to assign to the resource. ### Removing `aws_default_subnet` from your configuration @@ -47,7 +50,7 @@ You can resume managing the subnet via the AWS Console. 
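As a minimal sketch of the newly documented `map_public_ip_on_launch` argument (the availability zone and tag value below are illustrative, not taken from this changeset):

```hcl
resource "aws_default_subnet" "default_az1" {
  availability_zone = "us-west-2a"

  # Newly supported argument: request public IPs for instances
  # launched into the adopted default subnet.
  map_public_ip_on_launch = true

  tags {
    Name = "Default subnet for us-west-2a"
  }
}
```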
## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the subnet * `availability_zone`- The AZ for the subnet. diff --git a/website/docs/r/default_vpc.html.markdown b/website/docs/r/default_vpc.html.markdown index f8f63de50f5..d8f7b6c0f98 100644 --- a/website/docs/r/default_vpc.html.markdown +++ b/website/docs/r/default_vpc.html.markdown @@ -25,9 +25,9 @@ Basic usage with tags: ```hcl resource "aws_default_vpc" "default" { - tags { - Name = "Default VPC" - } + tags { + Name = "Default VPC" + } } ``` @@ -53,8 +53,9 @@ You can resume managing the VPC via the AWS Console. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: +* `arn` - Amazon Resource Name (ARN) of VPC * `id` - The ID of the VPC * `cidr_block` - The CIDR block of the VPC * `instance_tenancy` - Tenancy of instances spin up within VPC. @@ -74,3 +75,11 @@ block with a /56 prefix length for the VPC was assigned [1]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html + +## Import + +Default VPCs can be imported using the `vpc id`, e.g. + +``` +$ terraform import aws_default_vpc.default vpc-a01106c2 +``` diff --git a/website/docs/r/default_vpc_dhcp_options.html.markdown b/website/docs/r/default_vpc_dhcp_options.html.markdown index 50e74d04c29..6f2cb7aba4c 100644 --- a/website/docs/r/default_vpc_dhcp_options.html.markdown +++ b/website/docs/r/default_vpc_dhcp_options.html.markdown @@ -25,9 +25,9 @@ Basic usage with tags: ```hcl resource "aws_default_vpc_dhcp_options" "default" { - tags { - Name = "Default DHCP Option Set" - } + tags { + Name = "Default DHCP Option Set" + } } ``` @@ -50,6 +50,6 @@ You can resume managing the DHCP Options Set via the AWS Console. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the DHCP Options Set. diff --git a/website/docs/r/devicefarm_project.html.markdown b/website/docs/r/devicefarm_project.html.markdown index 2f042493373..b4e8ed85b6d 100644 --- a/website/docs/r/devicefarm_project.html.markdown +++ b/website/docs/r/devicefarm_project.html.markdown @@ -20,7 +20,7 @@ For more information about Device Farm Projects, see the AWS Documentation on ```hcl resource "aws_devicefarm_project" "awesome_devices" { - name = "my-device-farm" + name = "my-device-farm" } ``` @@ -30,7 +30,7 @@ resource "aws_devicefarm_project" "awesome_devices" { ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The Amazon Resource Name of this project diff --git a/website/docs/r/directory_service_conditional_forwarder.html.markdown b/website/docs/r/directory_service_conditional_forwarder.html.markdown new file mode 100644 index 00000000000..00d49be6180 --- /dev/null +++ b/website/docs/r/directory_service_conditional_forwarder.html.markdown @@ -0,0 +1,41 @@ +--- +layout: "aws" +page_title: "AWS: aws_directory_service_conditional_forwarder" +sidebar_current: "docs-aws-resource-directory-service-conditional-forwarder" +description: |- + Provides a conditional forwarder for managed Microsoft AD in AWS Directory Service. +--- + +# aws_directory_service_conditional_forwarder + +Provides a conditional forwarder for managed Microsoft AD in AWS Directory Service. 
+ +## Example Usage + +```hcl +resource "aws_directory_service_conditional_forwarder" "example" { + directory_id = "${aws_directory_service_directory.ad.id}" + remote_domain_name = "example.com" + + dns_ips = [ + "8.8.8.8", + "8.8.4.4", + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `directory_id` - (Required) The id of directory. +* `dns_ips` - (Required) A list of forwarder IP addresses. +* `remote_domain_name` - (Required) The fully qualified domain name of the remote domain for which forwarders will be used. + +## Import + +Conditional forwarders can be imported using the directory id and remote_domain_name, e.g. + +``` +$ terraform import aws_directory_service_conditional_forwarder.example d-1234567890:example.com +``` diff --git a/website/docs/r/directory_service_directory.html.markdown b/website/docs/r/directory_service_directory.html.markdown index 3ef506ac45c..1645f8ecf27 100644 --- a/website/docs/r/directory_service_directory.html.markdown +++ b/website/docs/r/directory_service_directory.html.markdown @@ -97,7 +97,7 @@ resource "aws_directory_service_directory" "connector" { connect_settings { customer_dns_ips = ["A.B.C.D"] - customer_username = "Administrator" + customer_username = "Admin" subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] vpc_id = "${aws_vpc.main.id}" } @@ -151,7 +151,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The directory identifier. * `access_url` - The access URL for the directory, such as `http://alias.awsapps.com`. diff --git a/website/docs/r/dlm_lifecycle_policy.markdown b/website/docs/r/dlm_lifecycle_policy.markdown new file mode 100644 index 00000000000..4ef65092da6 --- /dev/null +++ b/website/docs/r/dlm_lifecycle_policy.markdown @@ -0,0 +1,145 @@ +--- +layout: "aws" +page_title: "AWS: aws_dlm_lifecycle_policy" +sidebar_current: "docs-aws-resource-dlm-lifecycle-policy" +description: |- + Provides a Data Lifecycle Manager (DLM) lifecycle policy for managing snapshots. +--- + +# aws_dlm_lifecycle_policy + +Provides a [Data Lifecycle Manager (DLM) lifecycle policy](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html) for managing snapshots. + +## Example Usage + +```hcl +resource "aws_iam_role" "dlm_lifecycle_role" { + name = "dlm-lifecycle-role" + + assume_role_policy = < Note: You cannot have overlapping lifecycle policies that share the same `target_tags`. Terraform is unable to detect this at plan time but it will fail during apply. + +#### Schedule arguments + +* `copy_tags` - (Optional) Copy all user-defined tags on a source volume to snapshots of the volume created by this policy. +* `create_rule` - (Required) See the [`create_rule`](#create-rule-arguments) block. Max of 1 per schedule. +* `name` - (Required) A name for the schedule. +* `retain_rule` - (Required) See the [`create_rule`](#create-rule-arguments) block. Max of 1 per schedule. +* `tags_to_add` - (Optional) A mapping of tag keys and their values. DLM lifecycle policies will already tag the snapshot with the tags on the volume. This configuration adds extra tags on top of these. + +#### Create Rule arguments + +* `interval` - (Required) How often this lifecycle policy should be evaluated. `12` or `24` are valid values. +* `interval_unit` - (Optional) The unit for how often the lifecycle policy should be evaluated. 
`HOURS` is currently the only allowed value and also the default value. +* `times` - (Optional) A list of times in 24 hour clock format that sets when the lifecycle policy should be evaluated. Max of 1. + +#### Retain Rule arguments + +* `count` - (Required) How many snapshots to keep. Must be an integer between 1 and 1000. + +## Attributes Reference + +All of the arguments above are exported as attributes. + +## Import + +DLM lifecyle policies can be imported by their policy ID: + +``` +$ terraform import aws_dlm_lifecycle_policy.example policy-abcdef12345678901 +``` \ No newline at end of file diff --git a/website/docs/r/dms_certificate.html.markdown b/website/docs/r/dms_certificate.html.markdown index 3f472d64d9c..bf23014fa76 100644 --- a/website/docs/r/dms_certificate.html.markdown +++ b/website/docs/r/dms_certificate.html.markdown @@ -36,7 +36,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `certificate_arn` - The Amazon Resource Name (ARN) for the certificate. diff --git a/website/docs/r/dms_endpoint.html.markdown b/website/docs/r/dms_endpoint.html.markdown index d43e9fcfeed..a1ac98e92de 100644 --- a/website/docs/r/dms_endpoint.html.markdown +++ b/website/docs/r/dms_endpoint.html.markdown @@ -53,20 +53,22 @@ The following arguments are supported: - Must not contain two consecutive hyphens * `endpoint_type` - (Required) The type of endpoint. Can be one of `source | target`. -* `engine_name` - (Required) The type of engine for the endpoint. Can be one of `mysql | oracle | postgres | mariadb | aurora | redshift | sybase | sqlserver | dynamodb`. +* `engine_name` - (Required) The type of engine for the endpoint. Can be one of `mysql | oracle | postgres | mariadb | aurora | redshift | sybase | sqlserver | dynamodb | mongodb | s3 | azuredb`. * `extra_connection_attributes` - (Optional) Additional attributes associated with the connection. For available attributes see [Using Extra Connection Attributes with AWS Database Migration Service](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.ConnectionAttributes.html). -* `kms_key_arn` - (Optional) The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region. +* `kms_key_arn` - (Required when `engine_name` is `mongodb`, optional otherwise) The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for `kms_key_arn`, then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region. * `password` - (Optional) The password to be used to login to the endpoint database. * `port` - (Optional) The port used by the endpoint database. * `server_name` - (Optional) The host name of the server. * `ssl_mode` - (Optional, Default: none) The SSL mode to use for the connection. Can be one of `none | require | verify-ca | verify-full` * `tags` - (Optional) A mapping of tags to assign to the resource. * `username` - (Optional) The user name to be used to login to the endpoint database. 
-* `service_access_role` (Optional) The Amazon Resource Name (ARN) used by the service access IAM role for dynamodb endpoints. +* `service_access_role` - (Optional) The Amazon Resource Name (ARN) used by the service access IAM role for dynamodb endpoints. +* `mongodb_settings` - (Optional) Settings for the source MongoDB endpoint. Available settings are `auth_type` (default: `PASSWORD`), `auth_mechanism` (default: `DEFAULT`), `nesting_level` (default: `NONE`), `extract_doc_id` (default: `false`), `docs_to_investigate` (default: `1000`) and `auth_source` (default: `admin`). For more details, see [Using MongoDB as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html). +* `s3_settings` - (Optional) Settings for the target S3 endpoint. Available settings are `service_access_role_arn`, `external_table_definition`, `csv_row_delimiter` (default: `\\n`), `csv_delimiter` (default: `,`), `bucket_folder`, `bucket_name` and `compression_type` (default: `NONE`). For more details, see [Using Amazon S3 as a Target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html). ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `endpoint_arn` - The Amazon Resource Name (ARN) for the endpoint. diff --git a/website/docs/r/dms_replication_instance.html.markdown b/website/docs/r/dms_replication_instance.html.markdown index ab93ed02ca1..05fbe644fdc 100644 --- a/website/docs/r/dms_replication_instance.html.markdown +++ b/website/docs/r/dms_replication_instance.html.markdown @@ -26,7 +26,7 @@ resource "aws_dms_replication_instance" "test" { publicly_accessible = true replication_instance_class = "dms.t2.micro" replication_instance_id = "test-dms-replication-instance-tf" - replication_subnet_group_id = "${aws_dms_replication_subnet_group.test-dms-replication-subnet-group-tf}" + replication_subnet_group_id = "${aws_dms_replication_subnet_group.test-dms-replication-subnet-group-tf.id}" tags { Name = "test" @@ -71,7 +71,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `replication_instance_arn` - The Amazon Resource Name (ARN) of the replication instance. * `replication_instance_private_ips` - A list of the private IP addresses of the replication instance. diff --git a/website/docs/r/dms_replication_subnet_group.html.markdown b/website/docs/r/dms_replication_subnet_group.html.markdown index 291392ea8b2..9729432ec3f 100644 --- a/website/docs/r/dms_replication_subnet_group.html.markdown +++ b/website/docs/r/dms_replication_subnet_group.html.markdown @@ -21,6 +21,10 @@ resource "aws_dms_replication_subnet_group" "test" { subnet_ids = [ "subnet-12345678", ] + + tags { + Name = "test" + } } ``` @@ -35,10 +39,11 @@ The following arguments are supported: - Must not be "default". * `subnet_ids` - (Required) A list of the EC2 subnet IDs for the subnet group. +* `tags` - (Optional) A mapping of tags to assign to the resource. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `vpc_id` - The ID of the VPC the subnet group is in. 
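Complementing the `s3_settings` argument documented for `aws_dms_endpoint` above, a minimal S3 target endpoint sketch; the role ARN, bucket name, and folder are placeholders and not part of this changeset:

```hcl
resource "aws_dms_endpoint" "s3_target" {
  endpoint_id   = "example-s3-target"
  endpoint_type = "target"
  engine_name   = "s3"

  # Only settings listed in the argument reference are used here;
  # unset settings fall back to their documented defaults.
  s3_settings {
    service_access_role_arn = "arn:aws:iam::123456789012:role/example-dms-s3-access"
    bucket_name             = "example-dms-target-bucket"
    bucket_folder           = "exports"
    compression_type        = "NONE"
  }
}
```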
diff --git a/website/docs/r/dms_replication_task.html.markdown b/website/docs/r/dms_replication_task.html.markdown index 8f355a5b5a3..bc7492f02f4 100644 --- a/website/docs/r/dms_replication_task.html.markdown +++ b/website/docs/r/dms_replication_task.html.markdown @@ -53,7 +53,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `replication_task_arn` - The Amazon Resource Name (ARN) for the replication task. diff --git a/website/docs/r/dx_bgp_peer.html.markdown b/website/docs/r/dx_bgp_peer.html.markdown new file mode 100644 index 00000000000..9079386d738 --- /dev/null +++ b/website/docs/r/dx_bgp_peer.html.markdown @@ -0,0 +1,49 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_bgp_peer" +sidebar_current: "docs-aws-resource-dx-bgp-peer" +description: |- + Provides a Direct Connect BGP peer resource. +--- + +# aws_dx_bgp_peer + +Provides a Direct Connect BGP peer resource. + +## Example Usage + +```hcl +resource "aws_dx_bgp_peer" "peer" { + virtual_interface_id = "${aws_dx_private_virtual_interface.foo.id}" + address_family = "ipv6" + bgp_asn = 65351 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `address_family` - (Required) The address family for the BGP peer. `ipv4 ` or `ipv6`. +* `bgp_asn` - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. +* `virtual_interface_id` - (Required) The ID of the Direct Connect virtual interface on which to create the BGP peer. +* `amazon_address` - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. +Required for IPv4 BGP peers on public virtual interfaces. +* `bgp_auth_key` - (Optional) The authentication key for BGP configuration. +* `customer_address` - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. +Required for IPv4 BGP peers on public virtual interfaces. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the BGP peer. +* `bgp_status` - The Up/Down state of the BGP peer. + +## Timeouts + +`aws_dx_bgp_peer` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating BGP peer +- `delete` - (Default `10 minutes`) Used for destroying BGP peer diff --git a/website/docs/r/dx_connection.html.markdown b/website/docs/r/dx_connection.html.markdown index 1e7bacbfd90..8b919f24681 100644 --- a/website/docs/r/dx_connection.html.markdown +++ b/website/docs/r/dx_connection.html.markdown @@ -14,9 +14,9 @@ Provides a Connection of Direct Connect. ```hcl resource "aws_dx_connection" "hoge" { - name = "tf-dx-connection" + name = "tf-dx-connection" bandwidth = "1Gbps" - location = "EqDC2" + location = "EqDC2" } ``` @@ -31,10 +31,11 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the connection. * `arn` - The ARN of the connection. +* `jumbo_frame_capable` - Boolean value representing if jumbo frames have been enabled for this connection. 
## Import diff --git a/website/docs/r/dx_connection_association.html.markdown b/website/docs/r/dx_connection_association.html.markdown index 5b76e4516b7..1199c5c02ba 100644 --- a/website/docs/r/dx_connection_association.html.markdown +++ b/website/docs/r/dx_connection_association.html.markdown @@ -14,21 +14,21 @@ Associates a Direct Connect Connection with a LAG. ```hcl resource "aws_dx_connection" "example" { - name = "example" + name = "example" bandwidth = "1Gbps" - location = "EqSe2" + location = "EqSe2" } resource "aws_dx_lag" "example" { - name = "example" + name = "example" connections_bandwidth = "1Gbps" - location = "EqSe2" + location = "EqSe2" number_of_connections = 1 } resource "aws_dx_connection_association" "example" { connection_id = "${aws_dx_connection.example.id}" - lag_id = "${aws_dx_lag.example.id}" + lag_id = "${aws_dx_lag.example.id}" } ``` diff --git a/website/docs/r/dx_gateway.html.markdown b/website/docs/r/dx_gateway.html.markdown new file mode 100644 index 00000000000..234a1e68479 --- /dev/null +++ b/website/docs/r/dx_gateway.html.markdown @@ -0,0 +1,49 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_gateway" +sidebar_current: "docs-aws-resource-dx-gateway" +description: |- + Provides a Direct Connect Gateway. +--- + +# aws_dx_gateway + +Provides a Direct Connect Gateway. + +## Example Usage + +```hcl +resource "aws_dx_gateway" "example" { + name = "tf-dxg-example" + amazon_side_asn = "64512" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the connection. +* `amazon_side_asn` - (Required) The ASN to be configured on the Amazon side of the connection. The ASN must be in the private range of 64,512 to 65,534 or 4,200,000,000 to 4,294,967,294. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the gateway. + +## Timeouts + +`aws_dx_gateway` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating the gateway +- `delete` - (Default `10 minutes`) Used for destroying the gateway + +## Import + +Direct Connect Gateways can be imported using the `gateway id`, e.g. + +``` +$ terraform import aws_dx_gateway.test abcd1234-dcba-5678-be23-cdef9876ab45 +``` diff --git a/website/docs/r/dx_gateway_association.html.markdown b/website/docs/r/dx_gateway_association.html.markdown new file mode 100644 index 00000000000..adfee793565 --- /dev/null +++ b/website/docs/r/dx_gateway_association.html.markdown @@ -0,0 +1,48 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_gateway_association" +sidebar_current: "docs-aws-resource-dx-gateway-association" +description: |- + Associates a Direct Connect Gateway with a VGW. +--- + +# aws_dx_gateway_association + +Associates a Direct Connect Gateway with a VGW. + +## Example Usage + +```hcl +resource "aws_dx_gateway" "example" { + name = "example" + amazon_side_asn = "64512" +} + +resource "aws_vpc" "example" { + cidr_block = "10.255.255.0/28" +} + +resource "aws_vpn_gateway" "example" { + vpc_id = "${aws_vpc.test.id}" +} + +resource "aws_dx_gateway_association" "example" { + dx_gateway_id = "${aws_dx_gateway.example.id}" + vpn_gateway_id = "${aws_vpn_gateway.example.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `dx_gateway_id` - (Required) The ID of the Direct Connect Gateway. +* `vpn_gateway_id` - (Required) The ID of the VGW with which to associate the gateway. 
+ +## Timeouts + +`aws_dx_gateway_association` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `15 minutes`) Used for creating the association +- `delete` - (Default `10 minutes`) Used for destroying the association diff --git a/website/docs/r/dx_hosted_private_virtual_interface.html.markdown b/website/docs/r/dx_hosted_private_virtual_interface.html.markdown new file mode 100644 index 00000000000..6207599602d --- /dev/null +++ b/website/docs/r/dx_hosted_private_virtual_interface.html.markdown @@ -0,0 +1,65 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_hosted_private_virtual_interface" +sidebar_current: "docs-aws-resource-dx-hosted-private-virtual-interface" +description: |- + Provides a Direct Connect hosted private virtual interface resource. +--- + +# aws_dx_hosted_private_virtual_interface + +Provides a Direct Connect hosted private virtual interface resource. This resource represents the allocator's side of the hosted virtual interface. +A hosted virtual interface is a virtual interface that is owned by another AWS account. + +## Example Usage + +```hcl +resource "aws_dx_hosted_private_virtual_interface" "foo" { + connection_id = "dxcon-zzzzzzzz" + + name = "vif-foo" + vlan = 4094 + address_family = "ipv4" + bgp_asn = 65352 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `address_family` - (Required) The address family for the BGP peer. `ipv4 ` or `ipv6`. +* `bgp_asn` - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. +* `connection_id` - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface. +* `name` - (Required) The name for the virtual interface. +* `owner_account_id` - (Required) The AWS account that will own the new virtual interface. +* `vlan` - (Required) The VLAN ID. +* `amazon_address` - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers. +* `mtu` - (Optional) The maximum transmission unit (MTU) is the size, in bytes, of the largest permissible packet that can be passed over the connection. The MTU of a virtual private interface can be either `1500` or `9001` (jumbo frames). Default is `1500`. +* `bgp_auth_key` - (Optional) The authentication key for BGP configuration. +* `customer_address` - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for IPv4 BGP peers. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the virtual interface. +* `arn` - The ARN of the virtual interface. +* `jumbo_frame_capable` - Indicates whether jumbo frames (9001 MTU) are supported. + +## Timeouts + +`aws_dx_hosted_private_virtual_interface` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating virtual interface +- `update` - (Default `10 minutes`) Used for virtual interface modifications +- `delete` - (Default `10 minutes`) Used for destroying virtual interface + +## Import + +Direct Connect hosted private virtual interfaces can be imported using the `vif id`, e.g. 
+ +``` +$ terraform import aws_dx_hosted_private_virtual_interface.test dxvif-33cc44dd +``` diff --git a/website/docs/r/dx_hosted_private_virtual_interface_accepter.html.markdown b/website/docs/r/dx_hosted_private_virtual_interface_accepter.html.markdown new file mode 100644 index 00000000000..55943e482b9 --- /dev/null +++ b/website/docs/r/dx_hosted_private_virtual_interface_accepter.html.markdown @@ -0,0 +1,96 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_hosted_private_virtual_interface_accepter" +sidebar_current: "docs-aws-resource-dx-hosted-private-virtual-interface-accepter" +description: |- + Provides a resource to manage the accepter's side of a Direct Connect hosted private virtual interface. +--- + +# aws_dx_hosted_private_virtual_interface_accepter + +Provides a resource to manage the accepter's side of a Direct Connect hosted private virtual interface. +This resource accepts ownership of a private virtual interface created by another AWS account. + +## Example Usage + +```hcl +provider "aws" { + # Creator's credentials. +} + +provider "aws" { + alias = "accepter" + + # Accepter's credentials. +} + +data "aws_caller_identity" "accepter" { + provider = "aws.accepter" +} + +# Creator's side of the VIF +resource "aws_dx_hosted_private_virtual_interface" "creator" { + connection_id = "dxcon-zzzzzzzz" + owner_account_id = "${data.aws_caller_identity.accepter.account_id}" + + name = "vif-foo" + vlan = 4094 + address_family = "ipv4" + bgp_asn = 65352 +} + +# Accepter's side of the VIF. +resource "aws_vpn_gateway" "vpn_gw" { + provider = "aws.accepter" +} + +resource "aws_dx_hosted_private_virtual_interface_accepter" "accepter" { + provider = "aws.accepter" + virtual_interface_id = "${aws_dx_hosted_private_virtual_interface.creator.id}" + vpn_gateway_id = "${aws_vpn_gateway.vpn_gw.id}" + + tags { + Side = "Accepter" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `virtual_interface_id` - (Required) The ID of the Direct Connect virtual interface to accept. +* `dx_gateway_id` - (Optional) The ID of the Direct Connect gateway to which to connect the virtual interface. +* `tags` - (Optional) A mapping of tags to assign to the resource. +* `vpn_gateway_id` - (Optional) The ID of the [virtual private gateway](vpn_gateway.html) to which to connect the virtual interface. + +### Removing `aws_dx_hosted_private_virtual_interface_accepter` from your configuration + +AWS allows a Direct Connect hosted private virtual interface to be deleted from either the allocator's or accepter's side. +However, Terraform only allows the Direct Connect hosted private virtual interface to be deleted from the allocator's side +by removing the corresponding `aws_dx_hosted_private_virtual_interface` resource from your configuration. +Removing a `aws_dx_hosted_private_virtual_interface_accepter` resource from your configuration will remove it +from your statefile and management, **but will not delete the Direct Connect virtual interface.** + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the virtual interface. +* `arn` - The ARN of the virtual interface. 
+ +## Timeouts + +`aws_dx_hosted_private_virtual_interface_accepter` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating virtual interface +- `delete` - (Default `10 minutes`) Used for destroying virtual interface + +## Import + +Direct Connect hosted private virtual interfaces can be imported using the `vif id`, e.g. + +``` +$ terraform import aws_dx_hosted_private_virtual_interface_accepter.test dxvif-33cc44dd +``` diff --git a/website/docs/r/dx_hosted_public_virtual_interface.html.markdown b/website/docs/r/dx_hosted_public_virtual_interface.html.markdown new file mode 100644 index 00000000000..cceefa3ae22 --- /dev/null +++ b/website/docs/r/dx_hosted_public_virtual_interface.html.markdown @@ -0,0 +1,71 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_hosted_public_virtual_interface" +sidebar_current: "docs-aws-resource-dx-hosted-public-virtual-interface" +description: |- + Provides a Direct Connect hosted public virtual interface resource. +--- + +# aws_dx_hosted_public_virtual_interface + +Provides a Direct Connect hosted public virtual interface resource. This resource represents the allocator's side of the hosted virtual interface. +A hosted virtual interface is a virtual interface that is owned by another AWS account. + +## Example Usage + +```hcl +resource "aws_dx_hosted_public_virtual_interface" "foo" { + connection_id = "dxcon-zzzzzzzz" + + name = "vif-foo" + vlan = 4094 + address_family = "ipv4" + bgp_asn = 65352 + + customer_address = "175.45.176.1/30" + amazon_address = "175.45.176.2/30" + + route_filter_prefixes = [ + "210.52.109.0/24", + "175.45.176.0/22", + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `address_family` - (Required) The address family for the BGP peer. `ipv4 ` or `ipv6`. +* `bgp_asn` - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. +* `connection_id` - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface. +* `name` - (Required) The name for the virtual interface. +* `owner_account_id` - (Required) The AWS account that will own the new virtual interface. +* `route_filter_prefixes` - (Required) A list of routes to be advertised to the AWS network in this region. +* `vlan` - (Required) The VLAN ID. +* `amazon_address` - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers. +* `bgp_auth_key` - (Optional) The authentication key for BGP configuration. +* `customer_address` - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for IPv4 BGP peers. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the virtual interface. +* `arn` - The ARN of the virtual interface. + +## Timeouts + +`aws_dx_hosted_public_virtual_interface` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating virtual interface +- `delete` - (Default `10 minutes`) Used for destroying virtual interface + +## Import + +Direct Connect hosted public virtual interfaces can be imported using the `vif id`, e.g. 
+ +``` +$ terraform import aws_dx_hosted_public_virtual_interface.test dxvif-33cc44dd +``` diff --git a/website/docs/r/dx_hosted_public_virtual_interface_accepter.html.markdown b/website/docs/r/dx_hosted_public_virtual_interface_accepter.html.markdown new file mode 100644 index 00000000000..1ef49a2cc7e --- /dev/null +++ b/website/docs/r/dx_hosted_public_virtual_interface_accepter.html.markdown @@ -0,0 +1,98 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_hosted_public_virtual_interface_accepter" +sidebar_current: "docs-aws-resource-dx-hosted-public-virtual-interface-accepter" +description: |- + Provides a resource to manage the accepter's side of a Direct Connect hosted public virtual interface. +--- + +# aws_dx_hosted_public_virtual_interface_accepter + +Provides a resource to manage the accepter's side of a Direct Connect hosted public virtual interface. +This resource accepts ownership of a public virtual interface created by another AWS account. + +## Example Usage + +```hcl +provider "aws" { + # Creator's credentials. +} + +provider "aws" { + alias = "accepter" + + # Accepter's credentials. +} + +data "aws_caller_identity" "accepter" { + provider = "aws.accepter" +} + +# Creator's side of the VIF +resource "aws_dx_hosted_public_virtual_interface" "creator" { + connection_id = "dxcon-zzzzzzzz" + owner_account_id = "${data.aws_caller_identity.accepter.account_id}" + + name = "vif-foo" + vlan = 4094 + address_family = "ipv4" + bgp_asn = 65352 + + customer_address = "175.45.176.1/30" + amazon_address = "175.45.176.2/30" + + route_filter_prefixes = [ + "210.52.109.0/24", + "175.45.176.0/22", + ] +} + +# Accepter's side of the VIF. +resource "aws_dx_hosted_public_virtual_interface_accepter" "accepter" { + provider = "aws.accepter" + virtual_interface_id = "${aws_dx_hosted_public_virtual_interface.creator.id}" + + tags { + Side = "Accepter" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `virtual_interface_id` - (Required) The ID of the Direct Connect virtual interface to accept. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +### Removing `aws_dx_hosted_public_virtual_interface_accepter` from your configuration + +AWS allows a Direct Connect hosted public virtual interface to be deleted from either the allocator's or accepter's side. +However, Terraform only allows the Direct Connect hosted public virtual interface to be deleted from the allocator's side +by removing the corresponding `aws_dx_hosted_public_virtual_interface` resource from your configuration. +Removing a `aws_dx_hosted_public_virtual_interface_accepter` resource from your configuration will remove it +from your statefile and management, **but will not delete the Direct Connect virtual interface.** + + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the virtual interface. +* `arn` - The ARN of the virtual interface. + +## Timeouts + +`aws_dx_hosted_public_virtual_interface_accepter` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating virtual interface +- `delete` - (Default `10 minutes`) Used for destroying virtual interface + +## Import + +Direct Connect hosted public virtual interfaces can be imported using the `vif id`, e.g. 
+ +``` +$ terraform import aws_dx_hosted_public_virtual_interface_accepter.test dxvif-33cc44dd +``` diff --git a/website/docs/r/dx_lag.html.markdown b/website/docs/r/dx_lag.html.markdown index d5e158ff818..bd3e241fea8 100644 --- a/website/docs/r/dx_lag.html.markdown +++ b/website/docs/r/dx_lag.html.markdown @@ -14,11 +14,11 @@ Provides a Direct Connect LAG. ```hcl resource "aws_dx_lag" "hoge" { - name = "tf-dx-lag" + name = "tf-dx-lag" connections_bandwidth = "1Gbps" - location = "EqDC2" + location = "EqDC2" number_of_connections = 2 - force_destroy = true + force_destroy = true } ``` @@ -35,7 +35,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the LAG. * `arn` - The ARN of the LAG. diff --git a/website/docs/r/dx_private_virtual_interface.html.markdown b/website/docs/r/dx_private_virtual_interface.html.markdown new file mode 100644 index 00000000000..10ffd4219f2 --- /dev/null +++ b/website/docs/r/dx_private_virtual_interface.html.markdown @@ -0,0 +1,67 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_private_virtual_interface" +sidebar_current: "docs-aws-resource-dx-private-virtual-interface" +description: |- + Provides a Direct Connect private virtual interface resource. +--- + +# aws_dx_private_virtual_interface + +Provides a Direct Connect private virtual interface resource. + +## Example Usage + +```hcl +resource "aws_dx_private_virtual_interface" "foo" { + connection_id = "dxcon-zzzzzzzz" + + name = "vif-foo" + vlan = 4094 + address_family = "ipv4" + bgp_asn = 65352 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `address_family` - (Required) The address family for the BGP peer. `ipv4 ` or `ipv6`. +* `bgp_asn` - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. +* `connection_id` - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface. +* `name` - (Required) The name for the virtual interface. +* `vlan` - (Required) The VLAN ID. +* `amazon_address` - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers. +* `mtu` - (Optional) The maximum transmission unit (MTU) is the size, in bytes, of the largest permissible packet that can be passed over the connection. +The MTU of a virtual private interface can be either `1500` or `9001` (jumbo frames). Default is `1500`. +* `bgp_auth_key` - (Optional) The authentication key for BGP configuration. +* `customer_address` - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for IPv4 BGP peers. +* `dx_gateway_id` - (Optional) The ID of the Direct Connect gateway to which to connect the virtual interface. +* `tags` - (Optional) A mapping of tags to assign to the resource. +* `vpn_gateway_id` - (Optional) The ID of the [virtual private gateway](vpn_gateway.html) to which to connect the virtual interface. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the virtual interface. +* `arn` - The ARN of the virtual interface. +* `jumbo_frame_capable` - Indicates whether jumbo frames (9001 MTU) are supported. 
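The `vpn_gateway_id` and `dx_gateway_id` arguments above determine which gateway the private virtual interface attaches to. As a rough sketch only (the connection ID, names and ASNs below are placeholders, not values taken from this changeset), attaching the interface to a Direct Connect gateway might look like:

```hcl
resource "aws_dx_gateway" "example" {
  name            = "example-dx-gateway"
  amazon_side_asn = "64512"
}

resource "aws_dx_private_virtual_interface" "example" {
  connection_id = "dxcon-zzzzzzzz"
  dx_gateway_id = "${aws_dx_gateway.example.id}"

  name           = "vif-example"
  vlan           = 4093
  address_family = "ipv4"
  bgp_asn        = 65352
}
```

Only one of the two gateway arguments would normally be set for a given interface; see the argument descriptions above.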
+ +## Timeouts + +`aws_dx_private_virtual_interface` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating virtual interface +- `update` - (Default `10 minutes`) Used for virtual interface modifications +- `delete` - (Default `10 minutes`) Used for destroying virtual interface + +## Import + +Direct Connect private virtual interfaces can be imported using the `vif id`, e.g. + +``` +$ terraform import aws_dx_private_virtual_interface.test dxvif-33cc44dd +``` diff --git a/website/docs/r/dx_public_virtual_interface.html.markdown b/website/docs/r/dx_public_virtual_interface.html.markdown new file mode 100644 index 00000000000..b21145020b9 --- /dev/null +++ b/website/docs/r/dx_public_virtual_interface.html.markdown @@ -0,0 +1,70 @@ +--- +layout: "aws" +page_title: "AWS: aws_dx_public_virtual_interface" +sidebar_current: "docs-aws-resource-dx-public-virtual-interface" +description: |- + Provides a Direct Connect public virtual interface resource. +--- + +# aws_dx_public_virtual_interface + +Provides a Direct Connect public virtual interface resource. + +## Example Usage + +```hcl +resource "aws_dx_public_virtual_interface" "foo" { + connection_id = "dxcon-zzzzzzzz" + + name = "vif-foo" + vlan = 4094 + address_family = "ipv4" + bgp_asn = 65352 + + customer_address = "175.45.176.1/30" + amazon_address = "175.45.176.2/30" + + route_filter_prefixes = [ + "210.52.109.0/24", + "175.45.176.0/22", + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `address_family` - (Required) The address family for the BGP peer. `ipv4 ` or `ipv6`. +* `bgp_asn` - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration. +* `connection_id` - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface. +* `name` - (Required) The name for the virtual interface. +* `vlan` - (Required) The VLAN ID. +* `amazon_address` - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers. +* `bgp_auth_key` - (Optional) The authentication key for BGP configuration. +* `customer_address` - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for IPv4 BGP peers. +* `route_filter_prefixes` - (Required) A list of routes to be advertised to the AWS network in this region. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the virtual interface. +* `arn` - The ARN of the virtual interface. + +## Timeouts + +`aws_dx_public_virtual_interface` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating virtual interface +- `delete` - (Default `10 minutes`) Used for destroying virtual interface + +## Import + +Direct Connect public virtual interfaces can be imported using the `vif id`, e.g. 
+
+```
+$ terraform import aws_dx_public_virtual_interface.test dxvif-33cc44dd
+```
diff --git a/website/docs/r/dynamodb_global_table.html.markdown b/website/docs/r/dynamodb_global_table.html.markdown
index 80d897fd3ef..ce5df5ac2e7 100644
--- a/website/docs/r/dynamodb_global_table.html.markdown
+++ b/website/docs/r/dynamodb_global_table.html.markdown
@@ -88,7 +88,7 @@ The following arguments are supported:
 
 ## Attributes Reference
 
-The following additional attributes are exported:
+In addition to all arguments above, the following attributes are exported:
 
 * `id` - The name of the DynamoDB Global Table
 * `arn` - The ARN of the DynamoDB Global Table
diff --git a/website/docs/r/dynamodb_table.html.markdown b/website/docs/r/dynamodb_table.html.markdown
index f796c915b54..713aa3fe7ee 100644
--- a/website/docs/r/dynamodb_table.html.markdown
+++ b/website/docs/r/dynamodb_table.html.markdown
@@ -42,7 +42,7 @@ resource "aws_dynamodb_table" "basic-dynamodb-table" {
 
   ttl {
     attribute_name = "TimeToExist"
-    enabled = false
+    enabled        = false
   }
 
   global_secondary_index {
@@ -62,21 +62,34 @@ resource "aws_dynamodb_table" "basic-dynamodb-table" {
 }
 ```
 
+-> **Note:** The `attribute` blocks can also be specified as a list of maps:
+
+```hcl
+  attribute = [{
+    name = "UserId"
+    type = "S"
+  }, {
+    name = "GameTitle"
+    type = "S"
+  }, {
+    name = "TopScore"
+    type = "N"
+  }]
+```
+
 ## Argument Reference
 
 The following arguments are supported:
 
 * `name` - (Required) The name of the table, this needs to be unique within a region.
-* `hash_key` - (Required, Forces new resource) The attribute to use as the hash key (the
-  attribute must also be defined as an attribute record
-* `range_key` - (Optional, Forces new resource) The attribute to use as the range key (must
-  also be defined)
+* `hash_key` - (Required, Forces new resource) The attribute to use as the hash (partition) key. Must also be defined as an `attribute`, see below.
+* `range_key` - (Optional, Forces new resource) The attribute to use as the range (sort) key. Must also be defined as an `attribute`, see below.
 * `write_capacity` - (Required) The number of write units for this table
 * `read_capacity` - (Required) The number of read units for this table
-* `attribute` - (Required) Define an attribute, has two properties:
-  * `name` - The name of the attribute
-  * `type` - One of: S, N, or B for (S)tring, (N)umber or (B)inary data
+* `attribute` - (Required) List of nested attribute definitions. Only required for `hash_key` and `range_key` attributes. Each attribute has two properties:
+  * `name` - (Required) The name of the attribute
+  * `type` - (Required) Attribute type, which must be a scalar type: `S`, `N`, or `B` for (S)tring, (N)umber or (B)inary data
 * `ttl` - (Optional) Defines ttl, has two properties, and can only be specified once:
   * `enabled` - (Required) Indicates whether ttl is enabled (true) or disabled (false).
   * `attribute_name` - (Required) The name of the table attribute to store the TTL timestamp in.
@@ -90,6 +103,7 @@ attributes, etc.
 * `stream_view_type` - (Optional) When an item in the table is modified, StreamViewType determines what information is written to the table's stream. Valid values are `KEYS_ONLY`, `NEW_IMAGE`, `OLD_IMAGE`, `NEW_AND_OLD_IMAGES`.
 * `server_side_encryption` - (Optional) Encrypt at rest options.
 * `tags` - (Optional) A map of tags to populate on the created table.
+* `point_in_time_recovery` - (Optional) Point-in-time recovery options.
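To make the nested block syntax concrete, here is a minimal sketch (table name, capacity values and the key attribute are illustrative only) that enables the optional `server_side_encryption` and `point_in_time_recovery` blocks described above:

```hcl
resource "aws_dynamodb_table" "example" {
  name           = "example"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "UserId"

  attribute {
    name = "UserId"
    type = "S"
  }

  # Both optional blocks take a single boolean argument.
  server_side_encryption {
    enabled = true
  }

  point_in_time_recovery {
    enabled = true
  }
}
```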
### Timeouts @@ -135,6 +149,10 @@ The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/d * `enabled` - (Required) Whether to enable encryption at rest. If the `server_side_encryption` block is not provided then this defaults to `false`. +#### `point_in_time_recovery` + +* `enabled` - (Required) Whether to enable point-in-time recovery - note that it can take up to 10 minutes to enable for new tables. If the `point_in_time_recovery` block is not provided then this defaults to `false`. + ### A note about attributes Only define attributes on the table object that are going to be used as: @@ -154,7 +172,7 @@ infinite loop in planning. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The arn of the table * `id` - The name of the table diff --git a/website/docs/r/dynamodb_table_item.html.markdown b/website/docs/r/dynamodb_table_item.html.markdown index 8ac534b2c83..53007a51288 100644 --- a/website/docs/r/dynamodb_table_item.html.markdown +++ b/website/docs/r/dynamodb_table_item.html.markdown @@ -18,7 +18,8 @@ Provides a DynamoDB table item resource ```hcl resource "aws_dynamodb_table_item" "example" { table_name = "${aws_dynamodb_table.example.name}" - hash_key = "${aws_dynamodb_table.example.hash_key}" + hash_key = "${aws_dynamodb_table.example.hash_key}" + item = < *NOTE:* Either `launch_template_id` or `launch_template_name` must be specified. + +* `version` - (Required) Version number of the launch template. +* `launch_template_id` - (Optional) ID of the launch template. +* `launch_template_name` - (Optional) Name of the launch template. + +#### override + +Example: + +```hcl +resource "aws_ec2_fleet" "example" { + # ... other configuration ... + + launch_template_config { + # ... other configuration ... + + override { + instance_type = "m4.xlarge" + weighted_capacity = 1 + } + + override { + instance_type = "m4.2xlarge" + weighted_capacity = 2 + } + } +} +``` + +* `availability_zone` - (Optional) Availability Zone in which to launch the instances. +* `instance_type` - (Optional) Instance type. +* `max_price` - (Optional) Maximum price per unit hour that you are willing to pay for a Spot Instance. +* `priority` - (Optional) Priority for the launch template override. If `on_demand_options` `allocation_strategy` is set to `prioritized`, EC2 Fleet uses priority to determine which launch template override to use first in fulfilling On-Demand capacity. The highest priority is launched first. The lower the number, the higher the priority. If no number is set, the launch template override has the lowest priority. Valid values are whole numbers starting at 0. +* `subnet_id` - (Optional) ID of the subnet in which to launch the instances. +* `weighted_capacity` - (Optional) Number of units provided by the specified instance type. + +### on_demand_options + +* `allocation_strategy` - (Optional) The order of the launch template overrides to use in fulfilling On-Demand capacity. Valid values: `lowestPrice`, `prioritized`. Default: `lowestPrice`. + +### spot_options + +* `allocation_strategy` - (Optional) How to allocate the target capacity across the Spot pools. Valid values: `diversified`, `lowestPrice`. Default: `lowestPrice`. +* `instance_interruption_behavior` - (Optional) Behavior when a Spot Instance is interrupted. Valid values: `hibernate`, `stop`, `terminate`. Default: `terminate`. 
+* `instance_pools_to_use_count` - (Optional) Number of Spot pools across which to allocate your target Spot capacity. Valid only when Spot `allocation_strategy` is set to `lowestPrice`. Default: `1`. + +### target_capacity_specification + +* `default_target_capacity_type` - (Required) Default target capacity type. Valid values: `on-demand`, `spot`. +* `total_target_capacity` - (Required) The number of units to request, filled using `default_target_capacity_type`. +* `on_demand_target_capacity` - (Optional) The number of On-Demand units to request. +* `spot_target_capacity` - (Optional) The number of Spot units to request. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Fleet identifier + +## Timeouts + +`aws_ec2_fleet` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +* `create` - (Default `10m`) How long to wait for a fleet to be active. +* `update` - (Default `10m`) How long to wait for a fleet to be modified. +* `delete` - (Default `10m`) How long to wait for a fleet to be deleted. If `terminate_instances` is `true`, how long to wait for instances to terminate. + +## Import + +`aws_ec2_fleet` can be imported by using the Fleet identifier, e.g. + +``` +$ terraform import aws_ec2_fleet.example fleet-b9b55d27-c5fc-41ac-a6f3-48fcc91f080c +``` diff --git a/website/docs/r/ecr_lifecycle_policy.html.markdown b/website/docs/r/ecr_lifecycle_policy.html.markdown index ae9c72b194f..0f34648c192 100644 --- a/website/docs/r/ecr_lifecycle_policy.html.markdown +++ b/website/docs/r/ecr_lifecycle_policy.html.markdown @@ -3,12 +3,16 @@ layout: "aws" page_title: "AWS: aws_ecr_lifecycle_policy" sidebar_current: "docs-aws-resource-ecr-lifecycle-policy" description: |- - Provides an ECR Lifecycle Policy. + Manages an ECR repository lifecycle policy. --- # aws_ecr_lifecycle_policy -Provides an ECR lifecycle policy. +Manages an ECR repository lifecycle policy. + +~> **NOTE:** Only one `aws_ecr_lifecycle_policy` resource can be used with the same ECR repository. To apply multiple rules, they must be combined in the `policy` JSON. + +~> **NOTE:** The AWS ECR API seems to reorder rules based on `rulePriority`. If you define multiple rules that are not sorted in ascending `rulePriority` order in the Terraform code, the resource will be flagged for recreation every `terraform plan`. ## Example Usage @@ -81,11 +85,11 @@ EOF The following arguments are supported: * `repository` - (Required) Name of the repository to apply the policy. -* `policy` - (Required) The policy document. This is a JSON formatted string. See more details about [Policy Parameters](http://docs.aws.amazon.com/AmazonECR/latest/userguide/LifecyclePolicies.html#lifecycle_policy_parameters) in the official AWS docs. +* `policy` - (Required) The policy document. This is a JSON formatted string. See more details about [Policy Parameters](http://docs.aws.amazon.com/AmazonECR/latest/userguide/LifecyclePolicies.html#lifecycle_policy_parameters) in the official AWS docs. For more information about building IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `repository` - The name of the repository. * `registry_id` - The registry ID where the repository was created. 
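Tying back to the notes added for `aws_ecr_lifecycle_policy` above, multiple rules have to live in a single `policy` document, sorted in ascending `rulePriority` order. A rough sketch of what such a combined document might look like (the repository name, counts and tag prefix are placeholders):

```hcl
resource "aws_ecr_lifecycle_policy" "example" {
  repository = "example-repo"

  # Two rules combined into one policy, listed by ascending rulePriority.
  policy = <<EOF
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images older than 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": {
        "type": "expire"
      }
    },
    {
      "rulePriority": 2,
      "description": "Keep only the last 30 release images",
      "selection": {
        "tagStatus": "tagged",
        "tagPrefixList": ["v"],
        "countType": "imageCountMoreThan",
        "countNumber": 30
      },
      "action": {
        "type": "expire"
      }
    }
  ]
}
EOF
}
```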
diff --git a/website/docs/r/ecr_repository.html.markdown b/website/docs/r/ecr_repository.html.markdown index 503ad0e8767..a9c173a6125 100644 --- a/website/docs/r/ecr_repository.html.markdown +++ b/website/docs/r/ecr_repository.html.markdown @@ -30,13 +30,19 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - Full ARN of the repository. * `name` - The name of the repository. * `registry_id` - The registry ID where the repository was created. * `repository_url` - The URL of the repository (in the form `aws_account_id.dkr.ecr.region.amazonaws.com/repositoryName` +## Timeouts + +`aws_ecr_repository` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) +configuration options: + +- `delete` - (Default `20 minutes`) How long to wait for a repository to be deleted. ## Import diff --git a/website/docs/r/ecr_repository_policy.html.markdown b/website/docs/r/ecr_repository_policy.html.markdown index 24b5a30a05a..de269865682 100644 --- a/website/docs/r/ecr_repository_policy.html.markdown +++ b/website/docs/r/ecr_repository_policy.html.markdown @@ -62,11 +62,11 @@ EOF The following arguments are supported: * `repository` - (Required) Name of the repository to apply the policy. -* `policy` - (Required) The policy document. This is a JSON formatted string. +* `policy` - (Required) The policy document. This is a JSON formatted string. For more information about building IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html) ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `repository` - The name of the repository. * `registry_id` - The registry ID where the repository was created. diff --git a/website/docs/r/ecs_cluster.html.markdown b/website/docs/r/ecs_cluster.html.markdown index cb65f57e176..f289b56ae8a 100644 --- a/website/docs/r/ecs_cluster.html.markdown +++ b/website/docs/r/ecs_cluster.html.markdown @@ -23,10 +23,11 @@ resource "aws_ecs_cluster" "foo" { The following arguments are supported: * `name` - (Required) The name of the cluster (up to 255 letters, numbers, hyphens, and underscores) +* `tags` - (Optional) Key-value mapping of resource tags ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Amazon Resource Name (ARN) that identifies the cluster * `arn` - The Amazon Resource Name (ARN) that identifies the cluster @@ -37,4 +38,4 @@ ECS clusters can be imported using the `name`, e.g. 
``` $ terraform import aws_ecs_cluster.stateless stateless-app -``` \ No newline at end of file +``` diff --git a/website/docs/r/ecs_service.html.markdown b/website/docs/r/ecs_service.html.markdown index 395ec38c20d..5e2d18dacbb 100644 --- a/website/docs/r/ecs_service.html.markdown +++ b/website/docs/r/ecs_service.html.markdown @@ -25,15 +25,15 @@ resource "aws_ecs_service" "mongo" { iam_role = "${aws_iam_role.foo.arn}" depends_on = ["aws_iam_role_policy.foo"] - placement_strategy { + ordered_placement_strategy { type = "binpack" field = "cpu" } load_balancer { - elb_name = "${aws_elb.foo.name}" - container_name = "mongo" - container_port = 8080 + target_group_arn = "${aws_lb_target_group.foo.arn}" + container_name = "mongo" + container_port = 8080 } placement_constraints { @@ -43,42 +43,75 @@ resource "aws_ecs_service" "mongo" { } ``` +### Ignoring Changes to Desired Count + +You can utilize the generic Terraform resource [lifecycle configuration block](/docs/configuration/resources.html#lifecycle) with `ignore_changes` to create an ECS service with an initial count of running instances, then ignore any changes to that count caused externally (e.g. Application Autoscaling). + +```hcl +resource "aws_ecs_service" "example" { + # ... other configurations ... + + # Example: Create service with 2 instances to start + desired_count = 2 + + # Optional: Allow external changes without Terraform plan difference + lifecycle { + ignore_changes = ["desired_count"] + } +} +``` + +### Daemon Scheduling Strategy + +```hcl +resource "aws_ecs_service" "bar" { + name = "bar" + cluster = "${aws_ecs_cluster.foo.id}" + task_definition = "${aws_ecs_task_definition.bar.arn}" + scheduling_strategy = "DAEMON" +} +``` + ## Argument Reference The following arguments are supported: * `name` - (Required) The name of the service (up to 255 letters, numbers, hyphens, and underscores) * `task_definition` - (Required) The family and revision (`family:revision`) or full ARN of the task definition that you want to run in your service. -* `desired_count` - (Required) The number of instances of the task definition to place and keep running +* `desired_count` - (Optional) The number of instances of the task definition to place and keep running. Defaults to 0. Do not specify if using the `DAEMON` scheduling strategy. * `launch_type` - (Optional) The launch type on which to run your service. The valid values are `EC2` and `FARGATE`. Defaults to `EC2`. +* `scheduling_strategy` - (Optional) The scheduling strategy to use for the service. The valid values are `REPLICA` and `DAEMON`. Defaults to `REPLICA`. Note that [*Fargate tasks do not support the `DAEMON` scheduling strategy*](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html). * `cluster` - (Optional) ARN of an ECS cluster -* `iam_role` - (Optional) The ARN of IAM role that allows your Amazon ECS container agent to make calls to your load balancer on your behalf. This parameter is only required if you are using a load balancer with your service. -* `deployment_maximum_percent` - (Optional) The upper limit (as a percentage of the service's desiredCount) of the number of running tasks that can be running in a service during a deployment. +* `iam_role` - (Optional) ARN of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is required if you are using a load balancer with your service, but only if your task definition does not use the `awsvpc` network mode. 
If using `awsvpc` network mode, do not specify this role. If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. +* `deployment_maximum_percent` - (Optional) The upper limit (as a percentage of the service's desiredCount) of the number of running tasks that can be running in a service during a deployment. Not valid when using the `DAEMON` scheduling strategy. * `deployment_minimum_healthy_percent` - (Optional) The lower limit (as a percentage of the service's desiredCount) of the number of running tasks that must remain running and healthy in a service during a deployment. -* `placement_strategy` - (Optional) Service level strategy rules that are taken -into consideration during task placement. The maximum number of -`placement_strategy` blocks is `5`. Defined below. -* `health_check_grace_period_seconds` - (Optional) Seconds to ignore failing load balancer health checks on newly instantiated tasks to prevent premature shutdown, up to 1800. Only valid for services configured to use load balancers. +* `placement_strategy` - (Optional) **Deprecated**, use `ordered_placement_strategy` instead. +* `ordered_placement_strategy` - (Optional) Service level strategy rules that are taken into consideration during task placement. List from top to bottom in order of precedence. The maximum number of `ordered_placement_strategy` blocks is `5`. Defined below. +* `health_check_grace_period_seconds` - (Optional) Seconds to ignore failing load balancer health checks on newly instantiated tasks to prevent premature shutdown, up to 7200. Only valid for services configured to use load balancers. * `load_balancer` - (Optional) A load balancer block. Load balancers documented below. * `placement_constraints` - (Optional) rules that are taken into consideration during task placement. Maximum number of `placement_constraints` is `10`. Defined below. -* `network_configuration` - (Optional) The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own Elastic Network Interface, and it is not supported for other network modes. +* `network_configuration` - (Optional) The network configuration for the service. This parameter is required for task definitions that use the `awsvpc` network mode to receive their own Elastic Network Interface, and it is not supported for other network modes. +* `service_registries` - (Optional) The service discovery registries for the service. The maximum number of `service_registries` blocks is `1`. +* `tags` - (Optional) Key-value mapping of resource tags --> **Note:** As a result of an AWS limitation, a single `load_balancer` can be attached to the ECS service at most. See [related docs](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html#load-balancing-concepts). +## load_balancer -Load balancers support the following: +`load_balancer` supports the following: * `elb_name` - (Required for ELB Classic) The name of the ELB (Classic) to associate with the service. -* `target_group_arn` - (Required for ALB) The ARN of the ALB target group to associate with the service. +* `target_group_arn` - (Required for ALB/NLB) The ARN of the Load Balancer target group to associate with the service. * `container_name` - (Required) The name of the container to associate with the load balancer (as it appears in a container definition). 
* `container_port` - (Required) The port on the container to associate with the load balancer. -## placement_strategy +-> **Note:** As a result of an AWS limitation, a single `load_balancer` can be attached to the ECS service at most. See [related docs](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html#load-balancing-concepts). + +## ordered_placement_strategy -`placement_strategy` supports the following: +`ordered_placement_strategy` supports the following: * `type` - (Required) The type of placement strategy. Must be one of: `binpack`, `random`, or `spread` -* `field` - (Optional) For the `spread` placement strategy, valid values are instanceId (or host, +* `field` - (Optional) For the `spread` placement strategy, valid values are `instanceId` (or `host`, which has the same effect), or any platform or custom attribute that is applied to a container instance. For the `binpack` type, valid values are `memory` and `cpu`. For the `random` type, this attribute is not needed. For more information, see [Placement Strategy](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PlacementStrategy.html). @@ -106,9 +139,18 @@ Guide](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query For more information, see [Task Networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) +## service_registries + +`service_registries` support the following: + +* `registry_arn` - (Required) The ARN of the Service Registry. The currently supported service registry is Amazon Route 53 Auto Naming Service(`aws_service_discovery_service`). For more information, see [Service](https://docs.aws.amazon.com/Route53/latest/APIReference/API_autonaming_Service.html) +* `port` - (Optional) The port value used if your Service Discovery service specified an SRV record. +* `container_port` - (Optional) The port value, already specified in the task definition, to be used for your service discovery service. +* `container_name` - (Optional) The container name value, already specified in the task definition, to be used for your service discovery service. + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Amazon Resource Name (ARN) that identifies the service * `name` - The name of the service @@ -122,4 +164,4 @@ ECS services can be imported using the `name` together with ecs cluster `name`, ``` $ terraform import aws_ecs_service.imported cluster-name/service-name -``` \ No newline at end of file +``` diff --git a/website/docs/r/ecs_task_definition.html.markdown b/website/docs/r/ecs_task_definition.html.markdown index 84f419b1e1c..937ba9d9ae7 100644 --- a/website/docs/r/ecs_task_definition.html.markdown +++ b/website/docs/r/ecs_task_definition.html.markdown @@ -3,12 +3,12 @@ layout: "aws" page_title: "AWS: aws_ecs_task_definition" sidebar_current: "docs-aws-resource-ecs-task-definition" description: |- - Provides an ECS task definition. + Manages a revision of an ECS task definition. --- # aws_ecs_task_definition -Provides an ECS task definition to be used in `aws_ecs_service`. +Manages a revision of an ECS task definition to be used in `aws_ecs_service`. ## Example Usage @@ -82,17 +82,48 @@ official [Developer Guide](https://docs.aws.amazon.com/AmazonECS/latest/develope * `task_role_arn` - (Optional) The ARN of IAM role that allows your Amazon ECS container task to make calls to other AWS services. 
* `execution_role_arn` - (Optional) The Amazon Resource Name (ARN) of the task execution role that the Amazon ECS container agent and the Docker daemon can assume. * `network_mode` - (Optional) The Docker networking mode to use for the containers in the task. The valid values are `none`, `bridge`, `awsvpc`, and `host`. +* `ipc_mode` - (Optional) The IPC resource namespace to be used for the containers in the task The valid values are `host`, `task`, and `none`. +* `pid_mode` - (Optional) The process namespace to use for the containers in the task. The valid values are `host` and `task`. * `volume` - (Optional) A set of [volume blocks](#volume-block-arguments) that containers in your task may use. * `placement_constraints` - (Optional) A set of [placement constraints](#placement-constraints-arguments) rules that are taken into consideration during task placement. Maximum number of `placement_constraints` is `10`. * `cpu` - (Optional) The number of cpu units used by the task. If the `requires_compatibilities` is `FARGATE` this field is required. * `memory` - (Optional) The amount (in MiB) of memory used by the task. If the `requires_compatibilities` is `FARGATE` this field is required. * `requires_compatibilities` - (Optional) A set of launch types required by the task. The valid values are `EC2` and `FARGATE`. +* `tags` - (Optional) Key-value mapping of resource tags #### Volume Block Arguments * `name` - (Required) The name of the volume. This name is referenced in the `sourceVolume` parameter of container definition in the `mountPoints` section. * `host_path` - (Optional) The path on the host container instance that is presented to the container. If not set, ECS will create a nonpersistent data volume that starts empty and is deleted after the task has finished. +* `docker_volume_configuration` - (Optional) Used to configure a [docker volume](#docker-volume-configuration-arguments) + +#### Docker Volume Configuration Arguments + +For more information, see [Specifying a Docker volume in your Task Definition Developer Guide](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html#specify-volume-config) + +* `scope` - (Optional) The scope for the Docker volume, which determines its lifecycle, either `task` or `shared`. Docker volumes that are scoped to a `task` are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are `scoped` as shared persist after the task stops. +* `autoprovision` - (Optional) If this value is `true`, the Docker volume is created if it does not already exist. *Note*: This field is only used if the scope is `shared`. +* `driver` - (Optional) The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. +* `driver_opts` - (Optional) A map of Docker driver specific options. +* `labels` - (Optional) A map of custom metadata to add to your Docker volume. 
+ +##### Example Usage: +```hcl +resource "aws_ecs_task_definition" "service" { + family = "service" + container_definitions = "${file("task-definitions/service.json")}" + + volume { + name = "service-storage" + + docker_volume_configuration { + scope = "shared" + autoprovision = true + } + } +} +``` #### Placement Constraints Arguments @@ -106,7 +137,7 @@ Guide](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query- ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - Full ARN of the Task Definition (including both `family` and `revision`). * `family` - The family of the Task Definition. diff --git a/website/docs/r/efs_file_system.html.markdown b/website/docs/r/efs_file_system.html.markdown index 4f23550766e..de119bdcff9 100644 --- a/website/docs/r/efs_file_system.html.markdown +++ b/website/docs/r/efs_file_system.html.markdown @@ -36,18 +36,19 @@ system creation. By default generated by Terraform. See [Elastic File System] * `reference_name` - **DEPRECATED** (Optional) A reference name used when creating the `Creation Token` which Amazon EFS uses to ensure idempotent file system creation. By default generated by Terraform. -* `performance_mode` - (Optional) The file system performance mode. Can be either -`"generalPurpose"` or `"maxIO"` (Default: `"generalPurpose"`). -* `tags` - (Optional) A mapping of tags to assign to the file system. * `encrypted` - (Optional) If true, the disk will be encrypted. * `kms_key_id` - (Optional) The ARN for the KMS encryption key. When specifying kms_key_id, encrypted needs to be set to true. +* `performance_mode` - (Optional) The file system performance mode. Can be either `"generalPurpose"` or `"maxIO"` (Default: `"generalPurpose"`). +* `provisioned_throughput_in_mibps` - (Optional) The throughput, measured in MiB/s, that you want to provision for the file system. Only applicable with `throughput_mode` set to `provisioned`. +* `tags` - (Optional) A mapping of tags to assign to the file system. +* `throughput_mode` - (Optional) Throughput mode for the file system. Defaults to `bursting`. Valid values: `bursting`, `provisioned`. When using `provisioned`, also set `provisioned_throughput_in_mibps`. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: +* `arn` - Amazon Resource Name of the file system. * `id` - The ID that identifies the file system (e.g. fs-ccfc0d65). -* `kms_key_id` - The ARN for the KMS encryption key. * `dns_name` - The DNS name for the filesystem per [documented convention](http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html). ## Import diff --git a/website/docs/r/efs_mount_target.html.markdown b/website/docs/r/efs_mount_target.html.markdown index ae7a17c0f0a..6f32914311f 100644 --- a/website/docs/r/efs_mount_target.html.markdown +++ b/website/docs/r/efs_mount_target.html.markdown @@ -50,6 +50,7 @@ In addition to all arguments above, the following attributes are exported: * `id` - The ID of the mount target. * `dns_name` - The DNS name for the given subnet/AZ per [documented convention](http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html). +* `file_system_arn` - Amazon Resource Name of the file system. * `network_interface_id` - The ID of the network interface that Amazon EFS created when it created the mount target. 
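As a quick illustration of the `throughput_mode` and `provisioned_throughput_in_mibps` pairing described above (the creation token and throughput value are placeholders, not recommendations):

```hcl
resource "aws_efs_file_system" "example" {
  creation_token = "example"

  # provisioned_throughput_in_mibps only applies when throughput_mode is "provisioned".
  throughput_mode                 = "provisioned"
  provisioned_throughput_in_mibps = 10
}
```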
## Import diff --git a/website/docs/r/egress_only_internet_gateway.html.markdown b/website/docs/r/egress_only_internet_gateway.html.markdown index b19c2a4eff2..4a3fc19395c 100644 --- a/website/docs/r/egress_only_internet_gateway.html.markdown +++ b/website/docs/r/egress_only_internet_gateway.html.markdown @@ -17,8 +17,8 @@ outside of your VPC from initiating an IPv6 connection with your instance. ```hcl resource "aws_vpc" "foo" { - cidr_block = "10.1.0.0/16" - assign_generated_ipv6_cidr_block = true + cidr_block = "10.1.0.0/16" + assign_generated_ipv6_cidr_block = true } resource "aws_egress_only_internet_gateway" "foo" { @@ -34,6 +34,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the Egress Only Internet Gateway. diff --git a/website/docs/r/eip.html.markdown b/website/docs/r/eip.html.markdown index 0058a93c45b..d15fc35557f 100644 --- a/website/docs/r/eip.html.markdown +++ b/website/docs/r/eip.html.markdown @@ -12,6 +12,8 @@ Provides an Elastic IP resource. ~> **Note:** EIP may require IGW to exist prior to association. Use `depends_on` to set an explicit dependency on the IGW. +~> **Note:** Do not use `network_interface` to associate the EIP to `aws_lb` or `aws_nat_gateway` resources. Instead use the `allocation_id` available in those resources to allow AWS to manage the association, otherwise you will see `AuthFailure` errors. + ## Example Usage Single EIP associated with an instance: @@ -82,6 +84,15 @@ resource "aws_eip" "bar" { } ``` +Allocating EIP from the BYOIP pool: + +```hcl +resource "aws_eip" "byoip-ip" { + vpc = true + public_ipv4_pool = "ipv4pool-ec2-012345" +} +``` + ## Argument Reference The following arguments are supported: @@ -93,6 +104,7 @@ The following arguments are supported: associate with the Elastic IP address. If no private IP address is specified, the Elastic IP address is associated with the primary private IP address. * `tags` - (Optional) A mapping of tags to assign to the resource. +* `public_ipv4_pool` - (Optional) EC2 IPv4 address pool identifier or `amazon`. This option is only available for VPC EIPs. ~> **NOTE:** You can specify either the `instance` ID or the `network_interface` ID, but not both. Including both will **not** return an error from the AWS API, but will @@ -101,7 +113,7 @@ more information. ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - Contains the EIP allocation ID. * `private_ip` - Contains the private IP address (if in VPC). @@ -110,6 +122,7 @@ The following additional attributes are exported: * `public_ip` - Contains the public IP address. * `instance` - Contains the ID of the attached instance. * `network_interface` - Contains the ID of the attached network interface. +* `public_ipv4_pool` - EC2 IPv4 address pool identifier (if in VPC). ## Timeouts `aws_eip` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) configuration options: diff --git a/website/docs/r/eip_association.html.markdown b/website/docs/r/eip_association.html.markdown index 6446c75bec2..952977e82df 100644 --- a/website/docs/r/eip_association.html.markdown +++ b/website/docs/r/eip_association.html.markdown @@ -11,6 +11,8 @@ description: |- Provides an AWS EIP Association as a top level resource, to associate and disassociate Elastic IPs from AWS Instances and Network Interfaces. 
+~> **NOTE:** Do not use this resource to associate an EIP to `aws_lb` or `aws_nat_gateway` resources. Instead use the `allocation_id` available in those resources to allow AWS to manage the association, otherwise you will see `AuthFailure` errors. + ~> **NOTE:** `aws_eip_association` is useful in scenarios where EIPs are either pre-existing or distributed to customers or users and therefore cannot be changed. @@ -66,3 +68,11 @@ address with an instance. * `network_interface_id` - As above * `private_ip_address` - As above * `public_ip` - As above + +## Import + +EIP Assocations can be imported using their association ID. + +``` +$ terraform import aws_eip_association.test eipassoc-ab12c345 +``` diff --git a/website/docs/r/eks_cluster.html.markdown b/website/docs/r/eks_cluster.html.markdown new file mode 100644 index 00000000000..1d0ed35ab25 --- /dev/null +++ b/website/docs/r/eks_cluster.html.markdown @@ -0,0 +1,76 @@ +--- +layout: "aws" +page_title: "AWS: aws_eks_cluster" +sidebar_current: "docs-aws-resource-eks-cluster" +description: |- + Manages an EKS Cluster +--- + +# aws_eks_cluster + +Manages an EKS Cluster. + +## Example Usage + +```hcl +resource "aws_eks_cluster" "example" { + name = "example" + role_arn = "${aws_iam_role.example.arn}" + + vpc_config { + subnet_ids = ["${aws_subnet.example1.id}", "${aws_subnet.example2.id}"] + } +} + +output "endpoint" { + value = "${aws_eks_cluster.example.endpoint}" +} + +output "kubeconfig-certificate-authority-data" { + value = "${aws_eks_cluster.example.certificate_authority.0.data}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` – (Required) Name of the cluster. +* `role_arn` - (Required) The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf. +* `vpc_config` - (Required) Nested argument for the VPC associated with your cluster. Amazon EKS VPC resources have specific requirements to work properly with Kubernetes. For more information, see [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the Amazon EKS User Guide. Configuration detailed below. +* `version` – (Optional) Desired Kubernetes master version. If you do not specify a value, the latest available version is used. + +### vpc_config + +* `security_group_ids` – (Optional) List of security group IDs for the cross-account elastic network interfaces that Amazon EKS creates to use to allow communication between your worker nodes and the Kubernetes control plane. +* `subnet_ids` – (Required) List of subnet IDs. Must be in at least two different availability zones. Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the Kubernetes control plane. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The name of the cluster. +* `arn` - The Amazon Resource Name (ARN) of the cluster. +* `certificate_authority` - Nested attribute containing `certificate-authority-data` for your cluster. + * `data` - The base64 encoded certificate data required to communicate with your cluster. Add this to the `certificate-authority-data` section of the `kubeconfig` file for your cluster. +* `endpoint` - The endpoint for your Kubernetes API server. 
+* `platform_version` - The platform version for the cluster. +* `version` - The Kubernetes server version for the cluster. +* `vpc_config` - Additional nested attributes: + * `vpc_id` - The VPC associated with your cluster. + +## Timeouts + +`aws_eks_cluster` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +* `create` - (Default `15 minutes`) How long to wait for the EKS Cluster to be created. +* `delete` - (Default `15 minutes`) How long to wait for the EKS Cluster to be deleted. + +## Import + +EKS Clusters can be imported using the `name`, e.g. + +``` +$ terraform import aws_eks_cluster.my_cluster my_cluster +``` diff --git a/website/docs/r/elastic_beanstalk_application.html.markdown b/website/docs/r/elastic_beanstalk_application.html.markdown index 146f2512f59..7e84ab098ae 100644 --- a/website/docs/r/elastic_beanstalk_application.html.markdown +++ b/website/docs/r/elastic_beanstalk_application.html.markdown @@ -21,6 +21,12 @@ This resource creates an application that has one configuration template named resource "aws_elastic_beanstalk_application" "tftest" { name = "tf-test-name" description = "tf-test-desc" + + appversion_lifecycle { + service_role = "${aws_iam_role.beanstalk_service.arn}" + max_count = 128 + delete_source_from_s3 = true + } } ``` @@ -31,9 +37,16 @@ The following arguments are supported: * `name` - (Required) The name of the application, must be unique within your account * `description` - (Optional) Short description of the application +Application version lifecycle (`appversion_lifecycle`) supports the following settings. Only one of either `max_count` or `max_age_in_days` can be provided: + +* `service_role` - (Required) The ARN of an IAM service role under which the application version is deleted. Elastic Beanstalk must have permission to assume this role. +* `max_count` - (Optional) The maximum number of application versions to retain. +* `max_age_in_days` - (Optional) The number of days to retain an application version. +* `delete_source_from_s3` - (Optional) Set to `true` to delete a version's source bundle from S3 when the application version is deleted. + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `name` * `description` @@ -45,4 +58,4 @@ Elastic Beanstalk Applications can be imported using the `name`, e.g. ``` $ terraform import aws_elastic_beanstalk_application.tf_test tf-test-name -``` \ No newline at end of file +``` diff --git a/website/docs/r/elastic_beanstalk_application_version.html.markdown b/website/docs/r/elastic_beanstalk_application_version.html.markdown index 992a7ef0988..8a67cd8b518 100644 --- a/website/docs/r/elastic_beanstalk_application_version.html.markdown +++ b/website/docs/r/elastic_beanstalk_application_version.html.markdown @@ -66,6 +66,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `name` - The Application Version name. 
diff --git a/website/docs/r/elastic_beanstalk_configuration_template.html.markdown b/website/docs/r/elastic_beanstalk_configuration_template.html.markdown index b704c398964..52c031e1d41 100644 --- a/website/docs/r/elastic_beanstalk_configuration_template.html.markdown +++ b/website/docs/r/elastic_beanstalk_configuration_template.html.markdown @@ -35,10 +35,10 @@ The following arguments are supported: * `application` – (Required) name of the application to associate with this configuration template * `description` - (Optional) Short description of the Template * `environment_id` – (Optional) The ID of the environment used with this configuration template -* `setting` – (Optional) Option settings to configure the new Environment. These +* `setting` – (Optional) Option settings to configure the new Environment. These override specific values that are set as defaults. The format is detailed below in [Option Settings](#option-settings) -* `solution_stack_name` – (Optional) A solution stack to base your Template +* `solution_stack_name` – (Optional) A solution stack to base your Template off of. Example stacks can be found in the [Amazon API documentation][1] @@ -53,7 +53,7 @@ The `setting` field supports the following format: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `name` * `application` @@ -63,5 +63,3 @@ The following attributes are exported: * `solution_stack_name` [1]: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html - - diff --git a/website/docs/r/elastic_beanstalk_environment.html.markdown b/website/docs/r/elastic_beanstalk_environment.html.markdown index 090782017c2..c7dd63d29c4 100644 --- a/website/docs/r/elastic_beanstalk_environment.html.markdown +++ b/website/docs/r/elastic_beanstalk_environment.html.markdown @@ -43,24 +43,26 @@ The following arguments are supported: * `description` - (Optional) Short description of the Environment * `tier` - (Optional) Elastic Beanstalk Environment tier. Valid values are `Worker` or `WebServer`. If tier is left blank `WebServer` will be used. -* `setting` – (Optional) Option settings to configure the new Environment. These +* `setting` – (Optional) Option settings to configure the new Environment. These override specific values that are set as defaults. The format is detailed below in [Option Settings](#option-settings) -* `solution_stack_name` – (Optional) A solution stack to base your environment +* `solution_stack_name` – (Optional) A solution stack to base your environment off of. Example stacks can be found in the [Amazon API documentation][1] * `template_name` – (Optional) The name of the Elastic Beanstalk Configuration template to use in deployment +* `platform_arn` – (Optional) The [ARN][2] of the Elastic Beanstalk [Platform][3] + to use in deployment * `wait_for_ready_timeout` - (Default: `20m`) The maximum [duration](https://golang.org/pkg/time/#ParseDuration) that Terraform should wait for an Elastic Beanstalk Environment to be in a ready state before timing out. -* `poll_interval` – The time between polling the AWS API to +* `poll_interval` – The time between polling the AWS API to check if changes have been applied. Use this to adjust the rate of API calls for any `create` or `update` action. Minimum `10s`, maximum `180s`. Omit this to use the default behavior, which is an exponential backoff * `version_label` - (Optional) The name of the Elastic Beanstalk Application Version to use in deployment. 
-* `tags` – (Optional) A set of tags to apply to the Environment. +* `tags` – (Optional) A set of tags to apply to the Environment. ## Option Settings @@ -104,7 +106,7 @@ resource "aws_elastic_beanstalk_environment" "tfenvtest" { ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - ID of the Elastic Beanstalk Environment. * `name` - Name of the Elastic Beanstalk Environment. @@ -126,7 +128,8 @@ The following attributes are exported: [1]: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html - +[2]: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html +[3]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk-environment.html#cfn-beanstalk-environment-platformarn ## Import diff --git a/website/docs/r/elastic_transcoder_pipeline.html.markdown b/website/docs/r/elastic_transcoder_pipeline.html.markdown index bc5478b877d..94c9bb40b72 100644 --- a/website/docs/r/elastic_transcoder_pipeline.html.markdown +++ b/website/docs/r/elastic_transcoder_pipeline.html.markdown @@ -18,12 +18,12 @@ resource "aws_elastictranscoder_pipeline" "bar" { name = "aws_elastictranscoder_pipeline_tf_test_" role = "${aws_iam_role.test_role.arn}" - content_config = { + content_config { bucket = "${aws_s3_bucket.content_bucket.bucket}" storage_class = "Standard" } - thumbnail_config = { + thumbnail_config { bucket = "${aws_s3_bucket.thumb_bucket.bucket}" storage_class = "Standard" } @@ -93,3 +93,11 @@ The `thumbnail_config_permissions` object supports the following: * `access` - The permission that you want to give to the AWS user that you specified in `thumbnail_config_permissions.grantee`. * `grantee` - The AWS user or group that you want to have access to thumbnail files. * `grantee_type` - Specify the type of value that appears in the `thumbnail_config_permissions.grantee` object. + +## Import + +Elastic Transcoder pipelines can be imported using the `id`, e.g. + +``` +$ terraform import aws_elastic_transcoder_pipeline.basic_pipeline 1407981661351-cttk8b +``` diff --git a/website/docs/r/elastic_transcoder_preset.html.markdown b/website/docs/r/elastic_transcoder_preset.html.markdown index 9096d36bcdf..5ed927b220f 100644 --- a/website/docs/r/elastic_transcoder_preset.html.markdown +++ b/website/docs/r/elastic_transcoder_preset.html.markdown @@ -18,7 +18,7 @@ resource "aws_elastictranscoder_preset" "bar" { description = "Sample Preset" name = "sample_preset" - audio = { + audio { audio_packing_mode = "SingleTrack" bit_rate = 96 channels = 2 @@ -26,11 +26,11 @@ resource "aws_elastictranscoder_preset" "bar" { sample_rate = 44100 } - audio_codec_options = { + audio_codec_options { profile = "AAC-LC" } - video = { + video { bit_rate = "1600" codec = "H.264" display_aspect_ratio = "16:9" @@ -52,7 +52,7 @@ resource "aws_elastictranscoder_preset" "bar" { ColorSpaceConversionMode = "None" } - video_watermarks = { + video_watermarks { id = "Terraform Test" max_width = "20%" max_height = "20%" @@ -65,7 +65,7 @@ resource "aws_elastictranscoder_preset" "bar" { target = "Content" } - thumbnails = { + thumbnails { format = "png" interval = 120 max_width = "auto" @@ -124,7 +124,7 @@ The `video` object supports the following: * `bit_rate` - The bit rate of the video stream in the output file, in kilobits/second. You can configure variable bit rate or constant bit rate encoding. * `codec` - The video codec for the output file. 
Valid values are `gif`, `H.264`, `mpeg2`, `vp8`, and `vp9`. * `display_aspect_ratio` - The value that Elastic Transcoder adds to the metadata in the output file. If you set DisplayAspectRatio to auto, Elastic Transcoder chooses an aspect ratio that ensures square pixels. If you specify another option, Elastic Transcoder sets that value in the output file. -* `fixed_gop` - Whether to use a fixed value for Video:FixedGOP. Not applicable for containers of type gif. Valid values are true and false. +* `fixed_gop` - Whether to use a fixed value for Video:FixedGOP. Not applicable for containers of type gif. Valid values are true and false. Also known as, Fixed Number of Frames Between Keyframes. * `frame_rate` - The frames per second for the video stream in the output file. The following values are valid: `auto`, `10`, `15`, `23.97`, `24`, `25`, `29.97`, `30`, `50`, `60`. * `keyframes_max_dist` - The maximum number of frames between key frames. Not applicable for containers of type gif. * `max_frame_rate` - If you specify auto for FrameRate, Elastic Transcoder uses the frame rate of the input video for the frame rate of the output video, up to the maximum frame rate. If you do not specify a MaxFrameRate, Elastic Transcoder will use a default of 30. @@ -158,3 +158,11 @@ The `video_codec_options` map supports the following: * `ColorSpaceConversion` - The color space conversion Elastic Transcoder applies to the output video. Valid values are `None`, `Bt709toBt601`, `Bt601toBt709`, and `Auto`. (Optional, H.264/MPEG2 Only) * `ChromaSubsampling` - The sampling pattern for the chroma (color) channels of the output video. Valid values are `yuv420p` and `yuv422p`. * `LoopCount` - The number of times you want the output gif to loop (Gif only) + +## Import + +Elastic Transcoder presets can be imported using the `id`, e.g. + +``` +$ terraform import aws_elastic_transcoder_preset.basic_preset 1407981661351-cttk8b +``` diff --git a/website/docs/r/elasticache_cluster.html.markdown b/website/docs/r/elasticache_cluster.html.markdown index ac9ae93529d..e6d57145b6f 100644 --- a/website/docs/r/elasticache_cluster.html.markdown +++ b/website/docs/r/elasticache_cluster.html.markdown @@ -8,29 +8,53 @@ description: |- # aws_elasticache_cluster -Provides an ElastiCache Cluster resource. +Provides an ElastiCache Cluster resource, which manages a Memcached cluster or Redis instance. +For working with Redis (Cluster Mode Enabled) replication groups, see the +[`aws_elasticache_replication_group` resource](/docs/providers/aws/r/elasticache_replication_group.html). -Changes to a Cache Cluster can occur when you manually change a -parameter, such as `node_type`, and are reflected in the next maintenance -window. Because of this, Terraform may report a difference in its planning -phase because a modification has not yet taken place. You can use the -`apply_immediately` flag to instruct the service to apply the change immediately -(see documentation below). - -~> **Note:** using `apply_immediately` can result in a -brief downtime as the server reboots. See the AWS Docs on -[Modifying an ElastiCache Cache Cluster][2] for more information. +~> **Note:** When you change an attribute, such as `node_type`, by default +it is applied in the next maintenance window. Because of this, Terraform may report +a difference in its planning phase because the actual modification has not yet taken +place. You can use the `apply_immediately` flag to instruct the service to apply the +change immediately. 
Using `apply_immediately` can result in a brief downtime as the server reboots. +See the AWS Docs on [Modifying an ElastiCache Cache Cluster][2] for more information. ## Example Usage +### Memcached Cluster + ```hcl -resource "aws_elasticache_cluster" "bar" { +resource "aws_elasticache_cluster" "example" { cluster_id = "cluster-example" engine = "memcached" - node_type = "cache.t2.micro" + node_type = "cache.m4.large" + num_cache_nodes = 2 + parameter_group_name = "default.memcached1.4" port = 11211 +} +``` + +### Redis Instance + +```hcl +resource "aws_elasticache_cluster" "example" { + cluster_id = "cluster-example" + engine = "redis" + node_type = "cache.m4.large" num_cache_nodes = 1 - parameter_group_name = "default.memcached1.4" + parameter_group_name = "default.redis3.2" + port = 6379 +} +``` + +### Redis Cluster Mode Disabled Read Replica Instance + +These inherit their settings from the replication group. + +```hcl +resource "aws_elasticache_cluster" "replica" { + cluster_id = "cluster-example" + replication_group_id = "${aws_elasticache_replication_group.example.id}" } ``` @@ -41,31 +65,32 @@ The following arguments are supported: * `cluster_id` – (Required) Group identifier. ElastiCache converts this name to lowercase -* `engine` – (Required) Name of the cache engine to be used for this cache cluster. +* `replication_group_id` - (Optional) The ID of the replication group to which this cluster should belong. If this parameter is specified, the cluster is added to the specified replication group as a read replica; otherwise, the cluster is a standalone primary that is not part of any replication group. + +* `engine` – (Required unless `replication_group_id` is provided) Name of the cache engine to be used for this cache cluster. Valid values for this parameter are `memcached` or `redis` * `engine_version` – (Optional) Version number of the cache engine to be used. -See [Selecting a Cache Engine and Version](https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SelectEngine.html) +See [Describe Cache Engine Versions](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-engine-versions.html) in the AWS Documentation center for supported versions -* `maintenance_window` – (Optional) Specifies the weekly time range for when maintenance +* `maintenance_window` – (Optional) Specifies the weekly time range for when maintenance on the cache cluster is performed. The format is `ddd:hh24:mi-ddd:hh24:mi` (24H Clock UTC). The minimum maintenance window is a 60 minute period. Example: `sun:05:00-sun:09:00` -* `node_type` – (Required) The compute and memory capacity of the nodes. See +* `node_type` – (Required unless `replication_group_id` is provided) The compute and memory capacity of the nodes. See [Available Cache Node Types](https://aws.amazon.com/elasticache/details#Available_Cache_Node_Types) for supported node types -* `num_cache_nodes` – (Required) The initial number of cache nodes that the +* `num_cache_nodes` – (Required unless `replication_group_id` is provided) The initial number of cache nodes that the cache cluster will have. For Redis, this value must be 1. For Memcache, this value must be between 1 and 20. If this number is reduced on subsequent runs, the highest numbered nodes will be removed. 
-* `parameter_group_name` – (Required) Name of the parameter group to associate +* `parameter_group_name` – (Required unless `replication_group_id` is provided) Name of the parameter group to associate with this cache cluster -* `port` – (Required) The port number on which each of the cache nodes will -accept connections. For Memcache the default is 11211, and for Redis the default port is 6379. +* `port` – (Optional) The port number on which each of the cache nodes will accept connections. For Memcache the default is 11211, and for Redis the default port is 6379. Cannot be provided with `replication_group_id`. * `subnet_group_name` – (Optional, VPC only) Name of the subnet group to be used for the cache cluster. @@ -81,7 +106,7 @@ names to associate with this cache cluster `false`. See [Amazon ElastiCache Documentation for more information.][1] (Available since v0.6.0) -* `snapshot_arns` – (Optional) A single-element string list containing an +* `snapshot_arns` – (Optional) A single-element string list containing an Amazon Resource Name (ARN) of a Redis RDB snapshot file stored in Amazon S3. Example: `arn:aws:s3:::my_bucket/snapshot1.rdb` @@ -96,23 +121,23 @@ SnapshotRetentionLimit to 5, then a snapshot that was taken today will be retain before being deleted. If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off. Please note that setting a `snapshot_retention_limit` is not supported on cache.t1.micro or cache.t2.* cache nodes -* `notification_topic_arn` – (Optional) An Amazon Resource Name (ARN) of an +* `notification_topic_arn` – (Optional) An Amazon Resource Name (ARN) of an SNS topic to send ElastiCache notifications to. Example: `arn:aws:sns:us-east-1:012345678999:my_sns_topic` * `az_mode` - (Optional, Memcached only) Specifies whether the nodes in this Memcached node group are created in a single Availability Zone or created across multiple Availability Zones in the cluster's region. Valid values for this parameter are `single-az` or `cross-az`, default is `single-az`. If you want to choose `cross-az`, `num_cache_nodes` must be greater than `1` -* `availability_zone` - (Optional) The Availability Zone for the cache cluster. If you want to create cache nodes in multi-az, use `availability_zones` +* `availability_zone` - (Optional) The Availability Zone for the cache cluster. If you want to create cache nodes in multi-az, use `preferred_availability_zones` instead. Default: System chosen Availability Zone. -* `availability_zones` - (Optional, Memcached only) List of Availability Zones in which the cache nodes will be created. If you want to create cache nodes in single-az, use `availability_zone` +* `availability_zones` - (*DEPRECATED*, Optional, Memcached only) Use `preferred_availability_zones` instead unless you want to create cache nodes in single-az, then use `availability_zone`. Set of Availability Zones in which the cache nodes will be created. -* `tags` - (Optional) A mapping of tags to assign to the resource +* `preferred_availability_zones` - (Optional, Memcached only) A list of the Availability Zones in which cache nodes are created. If you are creating your cluster in an Amazon VPC you can only locate nodes in Availability Zones that are associated with the subnets in the selected subnet group. The number of Availability Zones listed must equal the value of `num_cache_nodes`. If you want all the nodes in the same Availability Zone, use `availability_zone` instead, or repeat the Availability Zone multiple times in the list. 
Default: System chosen Availability Zones. Detecting drift of existing node availability zone is not currently supported. Updating this argument by itself to migrate existing node availability zones is not currently supported and will show a perpetual difference. -~> **NOTE:** Snapshotting functionality is not compatible with t2 instance types. +* `tags` - (Optional) A mapping of tags to assign to the resource ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `cache_nodes` - List of node objects including `id`, `address`, `port` and `availability_zone`. Referenceable e.g. as `${aws_elasticache_cluster.bar.cache_nodes.0.address}` diff --git a/website/docs/r/elasticache_parameter_group.html.markdown b/website/docs/r/elasticache_parameter_group.html.markdown index 0b1a772033f..826c49676c8 100644 --- a/website/docs/r/elasticache_parameter_group.html.markdown +++ b/website/docs/r/elasticache_parameter_group.html.markdown @@ -2,6 +2,8 @@ layout: "aws" page_title: "AWS: aws_elasticache_parameter_group" sidebar_current: "docs-aws-resource-elasticache-parameter-group" +description: |- + Provides an ElastiCache parameter group resource. --- # aws_elasticache_parameter_group @@ -43,7 +45,7 @@ Parameter blocks support the following: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ElastiCache parameter group name. @@ -54,4 +56,4 @@ ElastiCache Parameter Groups can be imported using the `name`, e.g. ``` $ terraform import aws_elasticache_parameter_group.default redis-params -``` \ No newline at end of file +``` diff --git a/website/docs/r/elasticache_replication_group.html.markdown b/website/docs/r/elasticache_replication_group.html.markdown index 4603c9a5889..fb40d8be2db 100644 --- a/website/docs/r/elasticache_replication_group.html.markdown +++ b/website/docs/r/elasticache_replication_group.html.markdown @@ -9,37 +9,73 @@ description: |- # aws_elasticache_replication_group Provides an ElastiCache Replication Group resource. +For working with Memcached or single primary Redis instances (Cluster Mode Disabled), see the +[`aws_elasticache_cluster` resource](/docs/providers/aws/r/elasticache_cluster.html). ## Example Usage -### Redis Master with One Replica +### Redis Cluster Mode Disabled + +To create a single shard primary with single read replica: ```hcl -resource "aws_elasticache_replication_group" "bar" { +resource "aws_elasticache_replication_group" "example" { + automatic_failover_enabled = true + availability_zones = ["us-west-2a", "us-west-2b"] replication_group_id = "tf-rep-group-1" replication_group_description = "test description" - node_type = "cache.m1.small" + node_type = "cache.m4.large" number_cache_clusters = 2 - port = 6379 parameter_group_name = "default.redis3.2" - availability_zones = ["us-west-2a", "us-west-2b"] + port = 6379 +} +``` + +You have two options for adjusting the number of replicas: + +* Adjusting `number_cache_clusters` directly. This will attempt to automatically add or remove replicas, but provides no granular control (e.g. preferred availability zone, cache cluster ID) for the added or removed replicas. This also currently expects cache cluster IDs in the form of `replication_group_id-00#`. 
+* Otherwise for fine grained control of the underlying cache clusters, they can be added or removed with the [`aws_elasticache_cluster` resource](/docs/providers/aws/r/elasticache_cluster.html) and its `replication_group_id` attribute. In this situation, you will need to utilize the [lifecycle configuration block](/docs/configuration/resources.html) with `ignore_changes` to prevent perpetual differences during Terraform plan with the `number_cache_cluster` attribute. + +```hcl +resource "aws_elasticache_replication_group" "example" { automatic_failover_enabled = true + availability_zones = ["us-west-2a", "us-west-2b"] + replication_group_id = "tf-rep-group-1" + replication_group_description = "test description" + node_type = "cache.m4.large" + number_cache_clusters = 2 + parameter_group_name = "default.redis3.2" + port = 6379 + + lifecycle { + ignore_changes = ["number_cache_clusters"] + } +} + +resource "aws_elasticache_cluster" "replica" { + count = 1 + + cluster_id = "tf-rep-group-1-${count.index}" + replication_group_id = "${aws_elasticache_replication_group.example.id}" } ``` -### Native Redis Cluster 2 Masters 2 Replicas +### Redis Cluster Mode Enabled + +To create two shards with a primary and a single read replica each: ```hcl resource "aws_elasticache_replication_group" "baz" { replication_group_id = "tf-redis-cluster" replication_group_description = "test description" - node_type = "cache.m1.small" + node_type = "cache.t2.small" port = 6379 parameter_group_name = "default.redis3.2.cluster.on" automatic_failover_enabled = true + cluster_mode { - replicas_per_node_group = 1 - num_node_groups = 2 + replicas_per_node_group = 1 + num_node_groups = 2 } } ``` @@ -47,8 +83,7 @@ resource "aws_elasticache_replication_group" "baz" { ~> **Note:** We currently do not support passing a `primary_cluster_id` in order to create the Replication Group. ~> **Note:** Automatic Failover is unavailable for Redis versions earlier than 2.8.6, -and unavailable on T1 and T2 node types. See the [Amazon Replication with -Redis](http://docs.aws.amazon.com/en_en/AmazonElastiCache/latest/UserGuide/Replication.html) guide +and unavailable on T1 node types. For T2 node types, it is only available on Redis version 3.2.4 or later with cluster mode enabled. See the [High Availability Using Replication Groups](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html) guide for full details on using Replication Groups. ## Argument Reference @@ -57,10 +92,9 @@ The following arguments are supported: * `replication_group_id` – (Required) The replication group identifier. This parameter is stored as a lowercase string. * `replication_group_description` – (Required) A user-created description for the replication group. -* `number_cache_clusters` - (Required) The number of cache clusters this replication group will have. - If Multi-AZ is enabled , the value of this parameter must be at least 2. Changing this number will force a new resource +* `number_cache_clusters` - (Required for Cluster Mode Disabled) The number of cache clusters (primary and replicas) this replication group will have. If Multi-AZ is enabled, the value of this parameter must be at least 2. Updates will occur before other modifications. * `node_type` - (Required) The compute and memory capacity of the nodes in the node group. -* `automatic_failover_enabled` - (Optional) Specifies whether a read-only replica will be automatically promoted to read/write primary if the existing primary fails. Defaults to `false`. 
+* `automatic_failover_enabled` - (Optional) Specifies whether a read-only replica will be automatically promoted to read/write primary if the existing primary fails. If true, Multi-AZ is enabled for this replication group. If false, Multi-AZ is disabled for this replication group. Must be enabled for Redis (cluster mode enabled) replication groups. Defaults to `false`. * `auto_minor_version_upgrade` - (Optional) Specifies whether a minor engine upgrades will be applied automatically to the underlying Cache Cluster instances during the maintenance window. Defaults to `true`. * `availability_zones` - (Optional) A list of EC2 availability zones in which the replication group's cache clusters will be created. The order of the availability zones in the list is not important. * `engine` - (Optional) The name of the cache engine to be used for the clusters in this replication group. e.g. `redis` @@ -69,18 +103,18 @@ The following arguments are supported: * `auth_token` - (Optional) The password used to access a password protected server. Can be specified only if `transit_encryption_enabled = true`. * `engine_version` - (Optional) The version number of the cache engine to be used for the cache clusters in this replication group. * `parameter_group_name` - (Optional) The name of the parameter group to associate with this replication group. If this argument is omitted, the default cache parameter group for the specified engine is used. -* `port` – (Required) The port number on which each of the cache nodes will accept connections. For Memcache the default is 11211, and for Redis the default port is 6379. +* `port` – (Optional) The port number on which each of the cache nodes will accept connections. For Memcache the default is 11211, and for Redis the default port is 6379. * `subnet_group_name` - (Optional) The name of the cache subnet group to be used for the replication group. * `security_group_names` - (Optional) A list of cache security group names to associate with this replication group. * `security_group_ids` - (Optional) One or more Amazon VPC security groups associated with this replication group. Use this parameter only when you are creating a replication group in an Amazon Virtual Private Cloud -* `snapshot_arns` – (Optional) A single-element string list containing an +* `snapshot_arns` – (Optional) A single-element string list containing an Amazon Resource Name (ARN) of a Redis RDB snapshot file stored in Amazon S3. Example: `arn:aws:s3:::my_bucket/snapshot1.rdb` * `snapshot_name` - (Optional) The name of a snapshot from which to restore data into the new node group. Changing the `snapshot_name` forces a new resource. -* `maintenance_window` – (Optional) Specifies the weekly time range for when maintenance +* `maintenance_window` – (Optional) Specifies the weekly time range for when maintenance on the cache cluster is performed. The format is `ddd:hh24:mi-ddd:hh24:mi` (24H Clock UTC). The minimum maintenance window is a 60 minute period. Example: `sun:05:00-sun:09:00` -* `notification_topic_arn` – (Optional) An Amazon Resource Name (ARN) of an +* `notification_topic_arn` – (Optional) An Amazon Resource Name (ARN) of an SNS topic to send ElastiCache notifications to. Example: `arn:aws:sns:us-east-1:012345678999:my_sns_topic` * `snapshot_window` - (Optional, Redis only) The daily time range (in UTC) during which ElastiCache will @@ -97,15 +131,25 @@ Please note that setting a `snapshot_retention_limit` is not supported on cache. 
Cluster Mode (`cluster_mode`) supports the following: * `replicas_per_node_group` - (Required) Specify the number of replica nodes in each node group. Valid values are 0 to 5. Changing this number will force a new resource. -* `num_node_groups` - (Required) Specify the number of node groups (shards) for this Redis replication group. Changing this number will force a new resource. +* `num_node_groups` - (Required) Specify the number of node groups (shards) for this Redis replication group. Changing this number will trigger an online resizing operation before other settings modifications. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the ElastiCache Replication Group. * `configuration_endpoint_address` - The address of the replication group configuration endpoint when cluster mode is enabled. * `primary_endpoint_address` - (Redis only) The address of the endpoint for the primary node in the replication group, if the cluster mode is disabled. +* `member_clusters` - The identifiers of all the nodes that are part of this replication group. + +## Timeouts + +`aws_elasticache_replication_group` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) +configuration options: + +* `create` - (Default `60m`) How long to wait for a replication group to be created. +* `delete` - (Default `40m`) How long to wait for a replication group to be deleted. +* `update` - (Default `40m`) How long to wait for replication group settings to be updated. This is also separately used for adding/removing replicas and online resize operation completion, if necessary. ## Import diff --git a/website/docs/r/elasticache_security_group.html.markdown b/website/docs/r/elasticache_security_group.html.markdown index 08aeae23dd7..2eb79d26077 100644 --- a/website/docs/r/elasticache_security_group.html.markdown +++ b/website/docs/r/elasticache_security_group.html.markdown @@ -40,7 +40,7 @@ authorized for ingress to the cache security group ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `description` * `name` diff --git a/website/docs/r/elasticache_subnet_group.html.markdown b/website/docs/r/elasticache_subnet_group.html.markdown index 059fdc7592e..133e3862b03 100644 --- a/website/docs/r/elasticache_subnet_group.html.markdown +++ b/website/docs/r/elasticache_subnet_group.html.markdown @@ -51,7 +51,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `description` * `name` @@ -64,4 +64,4 @@ ElastiCache Subnet Groups can be imported using the `name`, e.g. ``` $ terraform import aws_elasticache_subnet_group.bar tf-test-cache-subnet -``` \ No newline at end of file +``` diff --git a/website/docs/r/elasticsearch_domain.html.markdown b/website/docs/r/elasticsearch_domain.html.markdown index 84272c6d269..25abb82ff60 100644 --- a/website/docs/r/elasticsearch_domain.html.markdown +++ b/website/docs/r/elasticsearch_domain.html.markdown @@ -3,7 +3,7 @@ layout: "aws" page_title: "AWS: aws_elasticsearch_domain" sidebar_current: "docs-aws-resource-elasticsearch-domain" description: |- - Provides an ElasticSearch Domain. + Provides an Elasticsearch Domain. 
--- # aws_elasticsearch_domain @@ -12,11 +12,20 @@ description: |- ## Example Usage ```hcl +variable "domain" { + default = "tf-test" +} + +data "aws_region" "current" {} + +data "aws_caller_identity" "current" {} + resource "aws_elasticsearch_domain" "es" { - domain_name = "tf-test" + domain_name = "${var.domain}" elasticsearch_version = "1.5" + cluster_config { - instance_type = "r3.large.elasticsearch" + instance_type = "r4.large.elasticsearch" } advanced_options { @@ -31,6 +40,7 @@ resource "aws_elasticsearch_domain" "es" { "Action": "es:*", "Principal": "*", "Effect": "Allow", + "Resource": "arn:aws:es:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:domain/${var.domain}/*", "Condition": { "IpAddress": {"aws:SourceIp": ["66.193.100.22/32"]} } @@ -56,13 +66,17 @@ The following arguments are supported: * `domain_name` - (Required) Name of the domain. * `access_policies` - (Optional) IAM policy document specifying the access policies for the domain * `advanced_options` - (Optional) Key-value string pairs to specify advanced configuration options. + Note that the values for these configuration options must be strings (wrapped in quotes) or they + may be wrong and cause a perpetual diff, causing Terraform to want to recreate your Elasticsearch + domain on every apply. * `ebs_options` - (Optional) EBS related options, may be required based on chosen [instance size](https://aws.amazon.com/elasticsearch-service/pricing/). See below. * `encrypt_at_rest` - (Optional) Encrypt at rest options. Only available for [certain instance types](http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-instance-types.html). See below. +* `node_to_node_encryption` - (Optional) Node-to-node encryption options. See below. * `cluster_config` - (Optional) Cluster configuration of the domain, see below. * `snapshot_options` - (Optional) Snapshot related options, see below. * `vpc_options` - (Optional) VPC related options, see below. Adding or removing this configuration forces a new resource ([documentation](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html#es-vpc-limitations)). * `log_publishing_options` - (Optional) Options for publishing slow logs to CloudWatch Logs. -* `elasticsearch_version` - (Optional) The version of ElasticSearch to deploy. Defaults to `1.5` +* `elasticsearch_version` - (Optional) The version of Elasticsearch to deploy. Defaults to `1.5` * `tags` - (Optional) A mapping of tags to assign to the resource **ebs_options** supports the following attributes: @@ -88,6 +102,10 @@ The following arguments are supported: * `dedicated_master_count` - (Optional) Number of dedicated master nodes in the cluster * `zone_awareness_enabled` - (Optional) Indicates whether zone awareness is enabled. +**node_to_node_encryption** supports the following attributes: + +* `enabled` - (Required) Whether to enable node-to-node encryption. If the `node_to_node_encryption` block is not provided then this defaults to `false`. + **vpc_options** supports the following attributes: AWS documentation: [VPC Support for Amazon Elasticsearch Service Domains](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html) @@ -104,16 +122,26 @@ Security Groups and Subnets referenced in these attributes must all be within th **log_publishing_options** supports the following attribute: -* `log_type` - (Required) A type of Elasticsearch log. 
Valid values: INDEX_SLOW_LOGS, SEARCH_SLOW_LOGS +* `log_type` - (Required) A type of Elasticsearch log. Valid values: INDEX_SLOW_LOGS, SEARCH_SLOW_LOGS, ES_APPLICATION_LOGS * `cloudwatch_log_group_arn` - (Required) ARN of the Cloudwatch log group to which log needs to be published. * `enabled` - (Optional, Default: true) Specifies whether given log publishing option is enabled or not. +**cognito_options** supports the following attribute: + +AWS documentation: [Amazon Cognito Authentication for Kibana](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-cognito-auth.html) + +* `enabled` - (Optional, Default: false) Specifies whether Amazon Cognito authentication with Kibana is enabled or not +* `user_pool_id` - (Required) ID of the Cognito User Pool to use +* `identity_pool_id` - (Required) ID of the Cognito Identity Pool to use +* `role_arn` - (Required) ARN of the IAM role that has the AmazonESCognitoAccess policy attached + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - Amazon Resource Name (ARN) of the domain. * `domain_id` - Unique identifier for the domain. +* `domain_name` - The name of the Elasticsearch domain. * `endpoint` - Domain-specific endpoint used to submit index, search, and data upload requests. * `kibana_endpoint` - Domain-specific endpoint for kibana without https scheme. * `vpc_options.0.availability_zones` - If the domain was created inside a VPC, the names of the availability zones the configured `subnet_ids` were created inside. @@ -121,7 +149,7 @@ The following attributes are exported: ## Import -ElasticSearch domains can be imported using the `domain_name`, e.g. +Elasticsearch domains can be imported using the `domain_name`, e.g. ``` $ terraform import aws_elasticsearch_domain.example domain_name diff --git a/website/docs/r/elasticsearch_domain_policy.html.markdown b/website/docs/r/elasticsearch_domain_policy.html.markdown index 24609f036ba..08165a0b99c 100644 --- a/website/docs/r/elasticsearch_domain_policy.html.markdown +++ b/website/docs/r/elasticsearch_domain_policy.html.markdown @@ -3,12 +3,12 @@ layout: "aws" page_title: "AWS: aws_elasticsearch_domain" sidebar_current: "docs-aws-resource-elasticsearch-domain" description: |- - Provides an ElasticSearch Domain. + Provides an Elasticsearch Domain Policy. --- # aws_elasticsearch_domain_policy -Allows setting policy to an ElasticSearch domain while referencing domain attributes (e.g. ARN) +Allows setting policy to an Elasticsearch domain while referencing domain attributes (e.g. ARN) ## Example Usage @@ -32,7 +32,7 @@ resource "aws_elasticsearch_domain_policy" "main" { "Condition": { "IpAddress": {"aws:SourceIp": "127.0.0.1/32"} }, - "Resource": "${aws_elasticsearch_domain.example.arn}" + "Resource": "${aws_elasticsearch_domain.example.arn}/*" } ] } diff --git a/website/docs/r/elb.html.markdown b/website/docs/r/elb.html.markdown index b193f00dad2..0f0090cedc4 100644 --- a/website/docs/r/elb.html.markdown +++ b/website/docs/r/elb.html.markdown @@ -132,7 +132,7 @@ browser. 
## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The name of the ELB * `arn` - The ARN of the ELB diff --git a/website/docs/r/elb_attachment.html.markdown b/website/docs/r/elb_attachment.html.markdown index 85988315bf4..5470818ab56 100644 --- a/website/docs/r/elb_attachment.html.markdown +++ b/website/docs/r/elb_attachment.html.markdown @@ -8,7 +8,7 @@ description: |- # aws_elb_attachment -Provides an Elastic Load Balancer Attachment resource. +Attaches an EC2 instance to an Elastic Load Balancer (ELB). For attaching resources with Application Load Balancer (ALB) or Network Load Balancer (NLB), see the [`aws_lb_target_group_attachment` resource](/docs/providers/aws/r/lb_target_group_attachment.html). ~> **NOTE on ELB Instances and ELB Attachments:** Terraform currently provides both a standalone ELB Attachment resource (describing an instance attached to @@ -16,6 +16,7 @@ an ELB), and an [Elastic Load Balancer resource](elb.html) with `instances` defined in-line. At this time you cannot use an ELB with in-line instances in conjunction with an ELB Attachment resource. Doing so will cause a conflict and will overwrite attachments. + ## Example Usage ```hcl diff --git a/website/docs/r/emr_cluster.html.markdown b/website/docs/r/emr_cluster.html.markdown index 574d50badd6..edb071cab37 100644 --- a/website/docs/r/emr_cluster.html.markdown +++ b/website/docs/r/emr_cluster.html.markdown @@ -16,11 +16,19 @@ for more information. ```hcl resource "aws_emr_cluster" "emr-test-cluster" { - name = "emr-test-arn" - release_label = "emr-4.6.0" - applications = ["Spark"] + name = "emr-test-arn" + release_label = "emr-4.6.0" + applications = ["Spark"] + additional_info = < **NOTE on configurations_json:** If the `Configurations` value is empty then you should skip +the `Configurations` field instead of providing empty list as value `"Configurations": []`. + +```hcl +configurations_json = < **NOTE:** One of `eni_id`, `subnet_id`, or `vpc_id` must be specified. + The following arguments are supported: -* `log_group_name` - (Required) The name of the CloudWatch log group -* `iam_role_arn` - (Required) The ARN for the IAM role that's used to post flow - logs to a CloudWatch Logs log group -* `vpc_id` - (Optional) VPC ID to attach to -* `subnet_id` - (Optional) Subnet ID to attach to +* `traffic_type` - (Required) The type of traffic to capture. Valid values: `ACCEPT`,`REJECT`, `ALL`. * `eni_id` - (Optional) Elastic Network Interface ID to attach to -* `traffic_type` - (Required) The type of traffic to capture. Valid values: - `ACCEPT`,`REJECT`, `ALL` +* `iam_role_arn` - (Optional) The ARN for the IAM role that's used to post flow logs to a CloudWatch Logs log group +* `log_destination_type` - (Optional) The type of the logging destination. Valid values: `cloud-watch-logs`, `s3`. Default: `cloud-watch-logs`. +* `log_destination` - (Optional) The ARN of the logging destination. +* `log_group_name` - (Optional) *Deprecated:* Use `log_destination` instead. The name of the CloudWatch log group. 
+* `subnet_id` - (Optional) Subnet ID to attach to +* `vpc_id` - (Optional) VPC ID to attach to ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Flow Log ID diff --git a/website/docs/r/gamelift_alias.html.markdown b/website/docs/r/gamelift_alias.html.markdown index 7cb6692dd8d..8e43d26ad6d 100644 --- a/website/docs/r/gamelift_alias.html.markdown +++ b/website/docs/r/gamelift_alias.html.markdown @@ -14,11 +14,12 @@ Provides a Gamelift Alias resource. ```hcl resource "aws_gamelift_alias" "example" { - name = "example-alias" + name = "example-alias" description = "Example Description" + routing_strategy { message = "Example Message" - type = "TERMINAL" + type = "TERMINAL" } } ``` @@ -41,7 +42,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - Alias ID. * `arn` - Alias ARN. diff --git a/website/docs/r/gamelift_build.html.markdown b/website/docs/r/gamelift_build.html.markdown index 75254c634bd..6b78e6a67f1 100644 --- a/website/docs/r/gamelift_build.html.markdown +++ b/website/docs/r/gamelift_build.html.markdown @@ -14,13 +14,15 @@ Provides an Gamelift Build resource. ```hcl resource "aws_gamelift_build" "test" { - name = "example-build" + name = "example-build" operating_system = "WINDOWS_2012" + storage_location { - bucket = "${aws_s3_bucket.test.bucket}" - key = "${aws_s3_bucket_object.test.key}" + bucket = "${aws_s3_bucket.test.bucket}" + key = "${aws_s3_bucket_object.test.key}" role_arn = "${aws_iam_role.test.arn}" } + depends_on = ["aws_iam_role_policy.test"] } ``` @@ -44,7 +46,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - Build ID. diff --git a/website/docs/r/gamelift_fleet.html.markdown b/website/docs/r/gamelift_fleet.html.markdown index a7c5f9aaa1d..0ecd4cc4553 100644 --- a/website/docs/r/gamelift_fleet.html.markdown +++ b/website/docs/r/gamelift_fleet.html.markdown @@ -14,13 +14,14 @@ Provides a Gamelift Fleet resource. ```hcl resource "aws_gamelift_fleet" "example" { - build_id = "${aws_gamelift_build.example.id}" + build_id = "${aws_gamelift_build.example.id}" ec2_instance_type = "t2.micro" - name = "example-fleet-name" + name = "example-fleet-name" + runtime_configuration { server_process { concurrent_executions = 1 - launch_path = "C:\\game\\GomokuServer.exe" + launch_path = "C:\\game\\GomokuServer.exe" } } } @@ -68,7 +69,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - Fleet ID. * `arn` - Fleet ARN. diff --git a/website/docs/r/gamelift_game_session_queue.html.markdown b/website/docs/r/gamelift_game_session_queue.html.markdown new file mode 100644 index 00000000000..1e22b56189e --- /dev/null +++ b/website/docs/r/gamelift_game_session_queue.html.markdown @@ -0,0 +1,61 @@ +--- +layout: "aws" +page_title: "AWS: aws_gamelift_game_session_queue" +sidebar_current: "docs-aws-resource-gamelift-session-queue" +description: |- + Provides a Gamelift Game Session Queue resource. +--- + +# aws_gamelift_game_session_queue + +Provides an Gamelift Game Session Queue resource. 
+ +## Example Usage + +```hcl +resource "aws_gamelift_game_session_queue" "test" { + name = "example-session-queue" + destinations = [ + "${aws_gamelift_fleet.us_west_2_fleet.arn}", + "${aws_gamelift_fleet.eu_central_1_fleet.arn}", + ] + player_latency_policy { + maximum_individual_player_latency_milliseconds = 100 + policy_duration_seconds = 5 + } + player_latency_policy { + maximum_individual_player_latency_milliseconds = 200 + } + timeout_in_seconds = 60 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) Name of the session queue. +* `timeout_in_seconds` - (Required) Maximum time a game session request can remain in the queue. +* `destinations` - (Optional) List of fleet/alias ARNs used by session queue for placing game sessions. +* `player_latency_policy` - (Optional) One or more policies used to choose fleet based on player latency. See below. + +### Nested Fields + +#### `player_latency_policy` + +* `maximum_individual_player_latency_milliseconds` - (Required) Maximum latency value that is allowed for any player. +* `policy_duration_seconds` - (Optional) Length of time that the policy is enforced while placing a new game session. Absence of value for this attribute means that the policy is enforced until the queue times out. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Game Session Queue ARN. + +## Import + +Gamelift Game Session Queues can be imported by their `name`, e.g. + +``` +$ terraform import aws_gamelift_game_session_queue.example example +``` diff --git a/website/docs/r/glacier_vault.html.markdown b/website/docs/r/glacier_vault.html.markdown index 4973e75ea1d..1f1c5a76e0f 100644 --- a/website/docs/r/glacier_vault.html.markdown +++ b/website/docs/r/glacier_vault.html.markdown @@ -1,7 +1,7 @@ --- layout: "aws" page_title: "AWS: aws_glacier_vault" -sidebar_current: "docs-aws-resource-glacier-vault" +sidebar_current: "docs-aws-resource-glacier-vault-x" description: |- Provides a Glacier Vault. --- @@ -68,7 +68,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `location` - The URI of the vault that was created. * `arn` - The ARN of the vault. @@ -79,4 +79,4 @@ Glacier Vaults can be imported using the `name`, e.g. ``` $ terraform import aws_glacier_vault.archive my_archive -``` \ No newline at end of file +``` diff --git a/website/docs/r/glacier_vault_lock.html.markdown b/website/docs/r/glacier_vault_lock.html.markdown new file mode 100644 index 00000000000..d7b81d4155d --- /dev/null +++ b/website/docs/r/glacier_vault_lock.html.markdown @@ -0,0 +1,78 @@ +--- +layout: "aws" +page_title: "AWS: aws_glacier_vault_lock" +sidebar_current: "docs-aws-resource-glacier-vault-lock" +description: |- + Manages a Glacier Vault Lock. +--- + +# aws_glacier_vault_lock + +Manages a Glacier Vault Lock. You can refer to the [Glacier Developer Guide](https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-lock.html) for a full explanation of the Glacier Vault Lock functionality. + +~> **NOTE:** This resource allows you to test Glacier Vault Lock policies by setting the `complete_lock` argument to `false`. When testing policies in this manner, the Glacier Vault Lock automatically expires after 24 hours and Terraform will show this resource as needing recreation after that time. 
To permanently apply the policy, set the `complete_lock` argument to `true`. When changing `complete_lock` to `true`, it is expected the resource will show as recreating. + +!> **WARNING:** Once a Glacier Vault Lock is completed, it is immutable. The deletion of the Glacier Vault Lock is not be possible and attempting to remove it from Terraform will return an error. Set the `ignore_deletion_error` argument to `true` and apply this configuration before attempting to delete this resource via Terraform or use `terraform state rm` to remove this resource from Terraform management. + +## Example Usage + +### Testing Glacier Vault Lock Policy + +```hcl +resource "aws_glacier_vault" "example" { + name = "example" +} + +data "aws_iam_policy_document" "example" { + statement { + actions = ["glacier:DeleteArchive"] + effect = "Deny" + resources = ["${aws_glacier_vault.example.arn}"] + + condition { + test = "NumericLessThanEquals" + variable = "glacier:ArchiveAgeinDays" + values = ["365"] + } + } +} + +resource "aws_glacier_vault_lock" "example" { + complete_lock = false + policy = "${data.aws_iam_policy_document.example.json}" + vault_name = "${aws_glacier_vault.example.name}" +} +``` + +### Permanently Applying Glacier Vault Lock Policy + +```hcl +resource "aws_glacier_vault_lock" "example" { + complete_lock = true + policy = "${data.aws_iam_policy_document.example.json}" + vault_name = "${aws_glacier_vault.example.name}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `complete_lock` - (Required) Boolean whether to permanently apply this Glacier Lock Policy. Once completed, this cannot be undone. If set to `false`, the Glacier Lock Policy remains in a testing mode for 24 hours. After that time, the Glacier Lock Policy is automatically removed by Glacier and the Terraform resource will show as needing recreation. Changing this from `false` to `true` will show as resource recreation, which is expected. Changing this from `true` to `false` is not possible unless the Glacier Vault is recreated at the same time. +* `policy` - (Required) JSON string containing the IAM policy to apply as the Glacier Vault Lock policy. +* `vault_name` - (Required) The name of the Glacier Vault. +* `ignore_deletion_error` - (Optional) Allow Terraform to ignore the error returned when attempting to delete the Glacier Lock Policy. This can be used to delete or recreate the Glacier Vault via Terraform, for example, if the Glacier Vault Lock policy permits that action. This should only be used in conjunction with `complete_lock` being set to `true`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Glacier Vault name. + +## Import + +Glacier Vault Locks can be imported using the Glacier Vault name, e.g. + +``` +$ terraform import aws_glacier_vault_lock.example example-vault +``` diff --git a/website/docs/r/glue_catalog_table.html.markdown b/website/docs/r/glue_catalog_table.html.markdown new file mode 100644 index 00000000000..ca8345bac04 --- /dev/null +++ b/website/docs/r/glue_catalog_table.html.markdown @@ -0,0 +1,83 @@ +--- +layout: "aws" +page_title: "AWS: aws_glue_catalog_table" +sidebar_current: "docs-aws-resource-glue-catalog-table" +description: |- + Provides a Glue Catalog Table. +--- + +# aws_glue_catalog_table + +Provides a Glue Catalog Table Resource. 
You can refer to the [Glue Developer Guide](http://docs.aws.amazon.com/glue/latest/dg/populate-data-catalog.html) for a full explanation of the Glue Data Catalog functionality. + +## Example Usage + +```hcl +resource "aws_glue_catalog_table" "aws_glue_catalog_table" { + name = "MyCatalogTable" + database_name = "MyCatalogDatabase" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) Name of the table. For Hive compatibility, this must be entirely lowercase. +* `database_name` - (Required) Name of the metadata database where the table metadata resides. For Hive compatibility, this must be all lowercase. +* `catalog_id` - (Optional) ID of the Glue Catalog and database to create the table in. If omitted, this defaults to the AWS Account ID plus the database name. +* `description` - (Optional) Description of the table. +* `owner` - (Optional) Owner of the table. +* `retention` - (Optional) Retention time for this table. +* `storage_descriptor` - (Optional) A [storage descriptor](#storage_descriptor) object containing information about the physical storage of this table. You can refer to the [Glue Developer Guide](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-catalog-tables.html#aws-glue-api-catalog-tables-StorageDescriptor) for a full explanation of this object. +* `partition_keys` - (Optional) A list of columns by which the table is partitioned. Only primitive types are supported as partition keys. +* `view_original_text` - (Optional) If the table is a view, the original text of the view; otherwise null. +* `view_expanded_text` - (Optional) If the table is a view, the expanded text of the view; otherwise null. +* `table_type` - (Optional) The type of this table (EXTERNAL_TABLE, VIRTUAL_VIEW, etc.). +* `parameters` - (Optional) Properties associated with this table, as a list of key-value pairs. + +##### storage_descriptor + +* `columns` - (Optional) A list of the [Columns](#column) in the table. +* `location` - (Optional) The physical location of the table. By default this takes the form of the warehouse location, followed by the database location in the warehouse, followed by the table name. +* `input_format` - (Optional) The input format: SequenceFileInputFormat (binary), or TextInputFormat, or a custom format. +* `output_format` - (Optional) The output format: SequenceFileOutputFormat (binary), or IgnoreKeyTextOutputFormat, or a custom format. +* `compressed` - (Optional) True if the data in the table is compressed, or False if not. +* `number_of_buckets` - (Optional) Must be specified if the table contains any dimension columns. +* `ser_de_info` - (Optional) [Serialization/deserialization (SerDe)](#ser_de_info) information. +* `bucket_columns` - (Optional) A list of reducer grouping columns, clustering columns, and bucketing columns in the table. +* `sort_columns` - (Optional) A list of [Order](#sort_column) objects specifying the sort order of each bucket in the table. +* `parameters` - (Optional) User-supplied properties in key-value form. +* `skewed_info` - (Optional) Information about values that appear very frequently in a column (skewed values). +* `stored_as_sub_directories` - (Optional) True if the table data is stored in subdirectories, or False if not. + +##### column + +* `name` - (Required) The name of the Column. +* `type` - (Optional) The datatype of data in the Column. +* `comment` - (Optional) Free-form text comment. + +##### ser_de_info + +* `name` - (Optional) Name of the SerDe. 
+* `parameters` - (Optional) A map of initialization parameters for the SerDe, in key-value form. +* `serialization_library` - (Optional) Usually the class that implements the SerDe. An example is: org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe. + +##### sort_column + +* `column` - (Required) The name of the column. +* `sort_order` - (Required) Indicates that the column is sorted in ascending order (== 1), or in descending order (==0). + +##### skewed_info + +* `skewed_column_names` - (Optional) A list of names of columns that contain skewed values. +* `skewed_column_value_location_maps` - (Optional) A list of values that appear so frequently as to be considered skewed. +* `skewed_column_values` - (Optional) A mapping of skewed values to the columns that contain them. + +## Import + +Glue Tables can be imported with their catalog ID (usually AWS account ID), database name, and table name, e.g. + +``` +$ terraform import aws_glue_catalog_table.MyTable 123456789012:MyDatabase:MyTable +``` diff --git a/website/docs/r/glue_classifier.html.markdown b/website/docs/r/glue_classifier.html.markdown new file mode 100644 index 00000000000..16bd5ab91cf --- /dev/null +++ b/website/docs/r/glue_classifier.html.markdown @@ -0,0 +1,91 @@ +--- +layout: "aws" +page_title: "AWS: aws_glue_classifier" +sidebar_current: "docs-aws-resource-glue-classifier" +description: |- + Provides an Glue Classifier resource. +--- + +# aws_glue_classifier + +Provides a Glue Classifier resource. + +~> **NOTE:** It is only valid to create one type of classifier (grok, JSON, or XML). Changing classifier types will recreate the classifier. + +## Example Usage + +### Grok Classifier + +```hcl +resource "aws_glue_classifier" "example" { + name = "example" + + grok_classifier { + classification = "example" + grok_pattern = "example" + } +} +``` + +### JSON Classifier + +```hcl +resource "aws_glue_classifier" "example" { + name = "example" + + json_classifier { + json_path = "example" + } +} +``` + +### XML Classifier + +```hcl +resource "aws_glue_classifier" "example" { + name = "example" + + xml_classifier { + classification = "example" + row_tag = "example" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `grok_classifier` – (Optional) A classifier that uses grok patterns. Defined below. +* `json_classifier` – (Optional) A classifier for JSON content. Defined below. +* `name` – (Required) The name of the classifier. +* `xml_classifier` – (Optional) A classifier for XML content. Defined below. + +### grok_classifier + +* `classification` - (Required) An identifier of the data format that the classifier matches, such as Twitter, JSON, Omniture logs, Amazon CloudWatch Logs, and so on. +* `custom_patterns` - (Optional) Custom grok patterns used by this classifier. +* `grok_pattern` - (Required) The grok pattern used by this classifier. + +### json_classifier + +* `json_path` - (Required) A `JsonPath` string defining the JSON data for the classifier to classify. AWS Glue supports a subset of `JsonPath`, as described in [Writing JsonPath Custom Classifiers](https://docs.aws.amazon.com/glue/latest/dg/custom-classifier.html#custom-classifier-json). + +### xml_classifier + +* `classification` - (Required) An identifier of the data format that the classifier matches. +* `row_tag` - (Required) The XML tag designating the element that contains each record in an XML document being parsed. Note that this cannot identify a self-closing element (closed by `/>`). 
An empty row element that contains only attributes can be parsed as long as it ends with a closing tag (for example, `` is okay, but `` is not). + +## Attributes Reference + +The following additional attributes are exported: + +* `id` - Name of the classifier + +## Import + +Glue Classifiers can be imported using their name, e.g. + +``` +$ terraform import aws_glue_classifier.MyClassifier MyClassifier +``` diff --git a/website/docs/r/glue_connection.html.markdown b/website/docs/r/glue_connection.html.markdown new file mode 100644 index 00000000000..888295045ba --- /dev/null +++ b/website/docs/r/glue_connection.html.markdown @@ -0,0 +1,81 @@ +--- +layout: "aws" +page_title: "AWS: aws_glue_connection" +sidebar_current: "docs-aws-resource-glue-connection" +description: |- + Provides an Glue Connection resource. +--- + +# aws_glue_connection + +Provides a Glue Connection resource. + +## Example Usage + +### Non-VPC Connection + +```hcl +resource "aws_glue_connection" "example" { + connection_properties = { + JDBC_CONNECTION_URL = "jdbc:mysql://example.com/exampledatabase" + PASSWORD = "examplepassword" + USERNAME = "exampleusername" + } + + name = "example" +} +``` + +### VPC Connection + +For more information, see the [AWS Documentation](https://docs.aws.amazon.com/glue/latest/dg/populate-add-connection.html#connection-JDBC-VPC). + +```hcl +resource "aws_glue_connection" "example" { + connection_properties = { + JDBC_CONNECTION_URL = "jdbc:mysql://${aws_rds_cluster.example.endpoint}/exampledatabase" + PASSWORD = "examplepassword" + USERNAME = "exampleusername" + } + + name = "example" + + physical_connection_requirements { + availability_zone = "${aws_subnet.example.availability_zone}" + security_group_id_list = ["${aws_security_group.example.id}"] + subnet_id = "${aws_subnet.example.id}" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `catalog_id` – (Optional) The ID of the Data Catalog in which to create the connection. If none is supplied, the AWS account ID is used by default. +* `connection_properties` – (Required) A map of key-value pairs used as parameters for this connection. +* `connection_type` – (Optional) The type of the connection. Defaults to `JBDC`. +* `description` – (Optional) Description of the connection. +* `match_criteria` – (Optional) A list of criteria that can be used in selecting this connection. +* `name` – (Required) The name of the connection. +* `physical_connection_requirements` - (Optional) A map of physical connection requirements, such as VPC and SecurityGroup. Defined below. + +### physical_connection_requirements + +* `availability_zone` - (Optional) The availability zone of the connection. This field is redundant and implied by `subnet_id`, but is currently an api requirement. +* `security_group_id_list` - (Optional) The security group ID list used by the connection. +* `subnet_id` - (Optional) The subnet ID used by the connection. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Catalog ID and name of the connection + +## Import + +Glue Connections can be imported using the `CATALOG-ID` (AWS account ID if not custom) and `NAME`, e.g. 
+
+```
+$ terraform import aws_glue_connection.MyConnection 123456789012:MyConnection
+```
diff --git a/website/docs/r/glue_crawler.html.markdown b/website/docs/r/glue_crawler.html.markdown
new file mode 100644
index 00000000000..a3d9a75c0f0
--- /dev/null
+++ b/website/docs/r/glue_crawler.html.markdown
@@ -0,0 +1,109 @@
+---
+layout: "aws"
+page_title: "AWS: aws_glue_crawler"
+sidebar_current: "docs-aws-resource-glue-crawler"
+description: |-
+  Manages a Glue Crawler
+---
+
+# aws_glue_crawler
+
+Manages a Glue Crawler. More information can be found in the [AWS Glue Developer Guide](https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html)
+
+## Example Usage
+
+### DynamoDB Target
+
+```hcl
+resource "aws_glue_crawler" "example" {
+  database_name = "${aws_glue_catalog_database.example.name}"
+  name          = "example"
+  role          = "${aws_iam_role.example.arn}"
+
+  dynamodb_target {
+    path = "table-name"
+  }
+}
+```
+
+### JDBC Target
+
+```hcl
+resource "aws_glue_crawler" "example" {
+  database_name = "${aws_glue_catalog_database.example.name}"
+  name          = "example"
+  role          = "${aws_iam_role.example.arn}"
+
+  jdbc_target {
+    connection_name = "${aws_glue_connection.example.name}"
+    path            = "database-name/%"
+  }
+}
+```
+
+### S3 Target
+
+```hcl
+resource "aws_glue_crawler" "example" {
+  database_name = "${aws_glue_catalog_database.example.name}"
+  name          = "example"
+  role          = "${aws_iam_role.example.arn}"
+
+  s3_target {
+    path = "s3://${aws_s3_bucket.example.bucket}"
+  }
+}
+```
+
+## Argument Reference
+
+~> **NOTE:** At least one `dynamodb_target`, `jdbc_target`, or `s3_target` must be specified.
+
+The following arguments are supported:
+
+* `database_name` (Required) Glue database where results are written.
+* `name` (Required) Name of the crawler.
+* `role` (Required) The IAM role friendly name (including path without leading slash), or ARN of an IAM role, used by the crawler to access other resources.
+* `classifiers` (Optional) List of custom classifiers. By default, all AWS classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.
+* `configuration` (Optional) JSON string of configuration information.
+* `description` (Optional) Description of the crawler.
+* `dynamodb_target` (Optional) List of nested DynamoDB target arguments. See below.
+* `jdbc_target` (Optional) List of nested JDBC target arguments. See below.
+* `s3_target` (Optional) List of nested Amazon S3 target arguments. See below.
+* `schedule` (Optional) A cron expression used to specify the schedule. For more information, see [Time-Based Schedules for Jobs and Crawlers](https://docs.aws.amazon.com/glue/latest/dg/monitor-data-warehouse-schedule.html). For example, to run something every day at 12:15 UTC, you would specify: `cron(15 12 * * ? *)`.
+* `schema_change_policy` (Optional) Policy for the crawler's update and deletion behavior.
+* `table_prefix` (Optional) The table prefix used for catalog tables that are created.
+
+### dynamodb_target Argument Reference
+
+* `path` - (Required) The name of the DynamoDB table to crawl.
+
+### jdbc_target Argument Reference
+
+* `connection_name` - (Required) The name of the connection to use to connect to the JDBC target.
+* `path` - (Required) The path of the JDBC target.
+* `exclusions` - (Optional) A list of glob patterns used to exclude from the crawl.
+
+### s3_target Argument Reference
+
+* `path` - (Required) The path to the Amazon S3 target.
+* `exclusions` - (Optional) A list of glob patterns used to exclude from the crawl, as shown in the sketch below.
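+
+As a minimal sketch (the resource names, bucket, and glob patterns here are illustrative), `exclusions` can be combined with an S3 `path` to skip objects the crawler should ignore:
+
+```hcl
+resource "aws_glue_crawler" "excluded" {
+  database_name = "${aws_glue_catalog_database.example.name}"
+  name          = "example-with-exclusions"
+  role          = "${aws_iam_role.example.arn}"
+
+  s3_target {
+    path = "s3://${aws_s3_bucket.example.bucket}"
+
+    # Hypothetical patterns: skip a temporary prefix and compressed archives.
+    exclusions = ["temp/**", "**.gz"]
+  }
+}
+```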
+ +### schema_change_policy Argument Reference + +* `delete_behavior` - (Optional) The deletion behavior when the crawler finds a deleted object. Valid values: `LOG`, `DELETE_FROM_DATABASE`, or `DEPRECATE_IN_DATABASE`. Defaults to `DEPRECATE_IN_DATABASE`. +* `update_behavior` - (Optional) The update behavior when the crawler finds a changed schema. Valid values: `LOG` or `UPDATE_IN_DATABASE`. Defaults to `UPDATE_IN_DATABASE`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Crawler name + +## Import + +Glue Crawlers can be imported using `name`, e.g. + +``` +$ terraform import aws_glue_crawler.MyJob MyJob +``` diff --git a/website/docs/r/glue_job.html.markdown b/website/docs/r/glue_job.html.markdown new file mode 100644 index 00000000000..1e280de8bd2 --- /dev/null +++ b/website/docs/r/glue_job.html.markdown @@ -0,0 +1,82 @@ +--- +layout: "aws" +page_title: "AWS: aws_glue_job" +sidebar_current: "docs-aws-resource-glue-job" +description: |- + Provides an Glue Job resource. +--- + +# aws_glue_job + +Provides a Glue Job resource. + +## Example Usage + +### Python Job + +```hcl +resource "aws_glue_job" "example" { + name = "example" + role_arn = "${aws_iam_role.example.arn}" + + command { + script_location = "s3://${aws_s3_bucket.example.bucket}/example.py" + } +} +``` + +### Scala Job + +```hcl +resource "aws_glue_job" "example" { + name = "example" + role_arn = "${aws_iam_role.example.arn}" + + command { + script_location = "s3://${aws_s3_bucket.example.bucket}/example.scala" + } + + default_arguments = { + "--job-language" = "scala" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `allocated_capacity` – (Optional) The number of AWS Glue data processing units (DPUs) to allocate to this Job. At least 2 DPUs need to be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. +* `command` – (Required) The command of the job. Defined below. +* `connections` – (Optional) The list of connections used for this job. +* `default_arguments` – (Optional) The map of default arguments for this job. You can specify arguments here that your own job-execution script consumes, as well as arguments that AWS Glue itself consumes. For information about how to specify and consume your own Job arguments, see the [Calling AWS Glue APIs in Python](http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-calling.html) topic in the developer guide. For information about the key-value pairs that AWS Glue consumes to set up your job, see the [Special Parameters Used by AWS Glue](http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-glue-arguments.html) topic in the developer guide. +* `description` – (Optional) Description of the job. +* `execution_property` – (Optional) Execution property of the job. Defined below. +* `max_retries` – (Optional) The maximum number of times to retry this job if it fails. +* `name` – (Required) The name you assign to this job. It must be unique in your account. +* `role_arn` – (Required) The ARN of the IAM role associated with this job. +* `timeout` – (Optional) The job timeout in minutes. The default is 2880 minutes (48 hours). +* `security_configuration` - (Optional) The name of the Security Configuration to be associated with the job. + +### command Argument Reference + +* `name` - (Optional) The name of the job command. 
Defaults to `glueetl` +* `script_location` - (Required) Specifies the S3 path to a script that executes a job. + +### execution_property Argument Reference + +* `max_concurrent_runs` - (Optional) The maximum number of concurrent runs allowed for a job. The default is 1. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Job name + +## Import + +Glue Jobs can be imported using `name`, e.g. + +``` +$ terraform import aws_glue_job.MyJob MyJob +``` diff --git a/website/docs/r/glue_security_configuration.html.markdown b/website/docs/r/glue_security_configuration.html.markdown new file mode 100644 index 00000000000..c0c2a1cb8b8 --- /dev/null +++ b/website/docs/r/glue_security_configuration.html.markdown @@ -0,0 +1,76 @@ +--- +layout: "aws" +page_title: "AWS: aws_glue_security_configuration" +sidebar_current: "docs-aws-resource-glue-security-configuration" +description: |- + Manages a Glue Security Configuration +--- + +# aws_glue_security_configuration + +Manages a Glue Security Configuration. + +## Example Usage + +```hcl +resource "aws_glue_security_configuration" "example" { + name = "example" + + encryption_configuration { + cloudwatch_encryption { + cloudwatch_encryption_mode = "DISABLED" + } + + job_bookmarks_encryption { + job_bookmarks_encryption_mode = "DISABLED" + } + + s3_encryption { + kms_key_arn = "${data.aws_kms_key.example.arn}" + s3_encryption_mode = "SSE-KMS" + } + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `encryption_configuration` – (Required) Configuration block containing encryption configuration. Detailed below. +* `name` – (Required) Name of the security configuration. + +### encryption_configuration Argument Reference + +* `cloudwatch_encryption ` - (Required) A `cloudwatch_encryption ` block as described below, which contains encryption configuration for CloudWatch. +* `job_bookmarks_encryption ` - (Required) A `job_bookmarks_encryption ` block as described below, which contains encryption configuration for job bookmarks. +* `s3_encryption` - (Required) A `s3_encryption ` block as described below, which contains encryption configuration for S3 data. + +#### cloudwatch_encryption Argument Reference + +* `cloudwatch_encryption_mode` - (Optional) Encryption mode to use for CloudWatch data. Valid values: `DISABLED`, `SSE-KMS`. Default value: `DISABLED`. +* `kms_key_arn` - (Optional) Amazon Resource Name (ARN) of the KMS key to be used to encrypt the data. + +#### job_bookmarks_encryption Argument Reference + +* `job_bookmarks_encryption_mode` - (Optional) Encryption mode to use for job bookmarks data. Valid values: `CSE-KMS`, `DISABLED`. Default value: `DISABLED`. +* `kms_key_arn` - (Optional) Amazon Resource Name (ARN) of the KMS key to be used to encrypt the data. + +#### s3_encryption Argument Reference + +* `s3_encryption_mode` - (Optional) Encryption mode to use for S3 data. Valid values: `DISABLED`, `SSE-KMS`, `SSE-S3`. Default value: `DISABLED`. +* `kms_key_arn` - (Optional) Amazon Resource Name (ARN) of the KMS key to be used to encrypt the data. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Glue security configuration name + +## Import + +Glue Security Configurations can be imported using `name`, e.g. 
+ +``` +$ terraform import aws_glue_security_configuration.example example +``` diff --git a/website/docs/r/glue_trigger.html.markdown b/website/docs/r/glue_trigger.html.markdown new file mode 100644 index 00000000000..aee66535ff6 --- /dev/null +++ b/website/docs/r/glue_trigger.html.markdown @@ -0,0 +1,111 @@ +--- +layout: "aws" +page_title: "AWS: aws_glue_trigger" +sidebar_current: "docs-aws-resource-glue-trigger" +description: |- + Manages a Glue Trigger resource. +--- + +# aws_glue_trigger + +Manages a Glue Trigger resource. + +## Example Usage + +### Conditional Trigger + +```hcl +resource "aws_glue_trigger" "example" { + name = "example" + type = "CONDITIONAL" + + actions { + job_name = "${aws_glue_job.example1.name}" + } + + predicate { + conditions { + job_name = "${aws_glue_job.example2.name}" + state = "SUCCEEDED" + } + } +} +``` + +### On-Demand Trigger + +```hcl +resource "aws_glue_trigger" "example" { + name = "example" + type = "ON_DEMAND" + + actions { + job_name = "${aws_glue_job.example.name}" + } +} +``` + +### Scheduled Trigger + +```hcl +resource "aws_glue_trigger" "example" { + name = "example" + schedule = "cron(15 12 * * ? *)" + type = "SCHEDULED" + + actions { + job_name = "${aws_glue_job.example.name}" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `actions` – (Required) List of actions initiated by this trigger when it fires. Defined below. +* `description` – (Optional) A description of the new trigger. +* `enabled` – (Optional) Start the trigger. Defaults to `true`. Not valid to disable for `ON_DEMAND` type. +* `name` – (Required) The name of the trigger. +* `predicate` – (Optional) A predicate to specify when the new trigger should fire. Required when trigger type is `CONDITIONAL`. Defined below. +* `schedule` – (Optional) A cron expression used to specify the schedule. [Time-Based Schedules for Jobs and Crawlers](https://docs.aws.amazon.com/glue/latest/dg/monitor-data-warehouse-schedule.html) +* `type` – (Required) The type of trigger. Valid values are `CONDITIONAL`, `ON_DEMAND`, and `SCHEDULED`. + +### actions Argument Reference + +* `arguments` - (Optional) Arguments to be passed to the job. You can specify arguments here that your own job-execution script consumes, as well as arguments that AWS Glue itself consumes. +* `job_name` - (Required) The name of a job to be executed. +* `timeout` - (Optional) The job run timeout in minutes. It overrides the timeout value of the job. + +### predicate Argument Reference + +* `conditions` - (Required) A list of the conditions that determine when the trigger will fire. Defined below. +* `logical` - (Optional) How to handle multiple conditions. Defaults to `AND`. Valid values are `AND` or `ANY`. + +#### conditions Argument Reference + +* `job_name` - (Required) The name of the job to watch. +* `logical_operator` - (Optional) A logical operator. Defaults to `EQUALS`. +* `state` - (Required) The condition state. Currently, the values supported are `SUCCEEDED`, `STOPPED`, `TIMEOUT` and `FAILED`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Trigger name + +## Timeouts + +`aws_glue_trigger` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) +configuration options: + +- `create` - (Default `5m`) How long to wait for a trigger to be created. +- `delete` - (Default `5m`) How long to wait for a trigger to be deleted. + +## Import + +Glue Triggers can be imported using `name`, e.g. 
+ +``` +$ terraform import aws_glue_trigger.MyTrigger MyTrigger +``` diff --git a/website/docs/r/guardduty_detector.html.markdown b/website/docs/r/guardduty_detector.html.markdown index 6ff7ee6e4c0..ec96a358d47 100644 --- a/website/docs/r/guardduty_detector.html.markdown +++ b/website/docs/r/guardduty_detector.html.markdown @@ -28,7 +28,7 @@ The following arguments are supported: ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the GuardDuty detector * `account_id` - The AWS account ID of the GuardDuty detector diff --git a/website/docs/r/guardduty_ipset.html.markdown b/website/docs/r/guardduty_ipset.html.markdown index a529735d274..7b08b85058b 100644 --- a/website/docs/r/guardduty_ipset.html.markdown +++ b/website/docs/r/guardduty_ipset.html.markdown @@ -50,7 +50,7 @@ The following arguments are supported: ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the GuardDuty IPSet. diff --git a/website/docs/r/guardduty_member.html.markdown b/website/docs/r/guardduty_member.html.markdown index 13e8d6a0382..fad2aa5059b 100644 --- a/website/docs/r/guardduty_member.html.markdown +++ b/website/docs/r/guardduty_member.html.markdown @@ -10,7 +10,7 @@ description: |- Provides a resource to manage a GuardDuty member. -~> **NOTE:** Currently after using this resource, you must manually invite and accept member account invitations before GuardDuty will begin sending cross-account events. More information for how to accomplish this via the AWS Console or API can be found in the [GuardDuty User Guide](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_accounts.html). Terraform implementation of member invitation and acceptance resources can be tracked in [Github](https://github.com/terraform-providers/terraform-provider-aws/issues/2489). +~> **NOTE:** Currently after using this resource, you must manually accept member account invitations before GuardDuty will begin sending cross-account events. More information for how to accomplish this via the AWS Console or API can be found in the [GuardDuty User Guide](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_accounts.html). Terraform implementation of the member acceptance resource can be tracked in [Github](https://github.com/terraform-providers/terraform-provider-aws/issues/2489). ## Example Usage @@ -26,9 +26,11 @@ resource "aws_guardduty_detector" "member" { } resource "aws_guardduty_member" "member" { - account_id = "${aws_guardduty_detector.member.account_id}" - detector_id = "${aws_guardduty_detector.master.id}" - email = "required@example.com" + account_id = "${aws_guardduty_detector.member.account_id}" + detector_id = "${aws_guardduty_detector.master.id}" + email = "required@example.com" + invite = true + invitation_message = "please accept guardduty invitation" } ``` @@ -39,12 +41,25 @@ The following arguments are supported: * `account_id` - (Required) AWS account ID for member account. * `detector_id` - (Required) The detector ID of the GuardDuty account where you want to create member accounts. * `email` - (Required) Email address for member account. +* `invite` - (Optional) Boolean whether to invite the account to GuardDuty as a member. Defaults to `false`. 
To detect if an invitation needs to be (re-)sent, the Terraform state value is `true` based on a `relationship_status` of `Disabled`, `Enabled`, `Invited`, or `EmailVerificationInProgress`. +* `invitation_message` - (Optional) Message for invitation. +* `disable_email_notification` - (Optional) Boolean whether an email notification is sent to the accounts. Defaults to `false`. + +## Timeouts + +`aws_guardduty_member` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) +configuration options: + +- `create` - (Default `60s`) How long to wait for a verification to be done against inviting GuardDuty member account. +- `update` - (Default `60s`) How long to wait for a verification to be done against inviting GuardDuty member account. + ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the GuardDuty member +* `relationship_status` - The status of the relationship between the member account and its master account. More information can be found in [Amazon GuardDuty API Reference](https://docs.aws.amazon.com/guardduty/latest/ug/get-members.html). ## Import diff --git a/website/docs/r/guardduty_threatintelset.html.markdown b/website/docs/r/guardduty_threatintelset.html.markdown index 0eb09971a24..f32cbe94bfb 100644 --- a/website/docs/r/guardduty_threatintelset.html.markdown +++ b/website/docs/r/guardduty_threatintelset.html.markdown @@ -50,7 +50,7 @@ The following arguments are supported: ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the GuardDuty ThreatIntelSet and the detector ID. Format: `:` diff --git a/website/docs/r/iam_access_key.html.markdown b/website/docs/r/iam_access_key.html.markdown index 487be1e4fe6..8f7d81fd513 100644 --- a/website/docs/r/iam_access_key.html.markdown +++ b/website/docs/r/iam_access_key.html.markdown @@ -58,7 +58,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The access key ID. * `user` - The IAM user associated with this access key. @@ -69,7 +69,7 @@ to the state file. Please supply a `pgp_key` instead, which will prevent the secret from being stored in plain text * `encrypted_secret` - The encrypted secret, base64 encoded. ~> **NOTE:** The encrypted secret may be decrypted using the command line, - for example: `terraform output secret | base64 --decode | keybase pgp decrypt`. + for example: `terraform output encrypted_secret | base64 --decode | keybase pgp decrypt`. * `ses_smtp_password` - The secret access key converted into an SES SMTP password by applying [AWS's documented conversion algorithm](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html#smtp-credentials-convert). diff --git a/website/docs/r/iam_account_password_policy.html.markdown b/website/docs/r/iam_account_password_policy.html.markdown index 5ea295a3dca..214c9e32366 100644 --- a/website/docs/r/iam_account_password_policy.html.markdown +++ b/website/docs/r/iam_account_password_policy.html.markdown @@ -44,7 +44,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `expire_passwords` - Indicates whether passwords in the account expire. 
Returns `true` if `max_password_age` contains a value greater than `0`. @@ -57,4 +57,4 @@ IAM Account Password Policy can be imported using the word `iam-account-password ``` $ terraform import aws_iam_account_password_policy.strict iam-account-password-policy -``` \ No newline at end of file +``` diff --git a/website/docs/r/iam_group.html.markdown b/website/docs/r/iam_group.html.markdown index 7e0f5ab3b6f..0cdd42f1bf9 100644 --- a/website/docs/r/iam_group.html.markdown +++ b/website/docs/r/iam_group.html.markdown @@ -28,7 +28,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The group's ID. * `arn` - The ARN assigned by AWS for this group. diff --git a/website/docs/r/iam_group_membership.html.markdown b/website/docs/r/iam_group_membership.html.markdown index 054d4c45f9c..b636115b79f 100644 --- a/website/docs/r/iam_group_membership.html.markdown +++ b/website/docs/r/iam_group_membership.html.markdown @@ -8,10 +8,15 @@ description: |- # aws_iam_group_membership +~> **WARNING:** Multiple aws_iam_group_membership resources with the same group name will produce inconsistent behavior! + Provides a top level resource to manage IAM Group membership for IAM Users. For more information on managing IAM Groups or IAM Users, see [IAM Groups][1] or [IAM Users][2] +~> **Note:** `aws_iam_group_membership` will conflict with itself if used more than once with the same group. To non-exclusively manage the users in a group, see the +[`aws_iam_user_group_membership` resource][3]. + ## Example Usage ```hcl @@ -49,10 +54,11 @@ The following arguments are supported: ## Attributes Reference -* `name` - The name to identifing the Group Membership +* `name` - The name to identify the Group Membership * `users` - list of IAM User names * `group` – IAM Group name [1]: /docs/providers/aws/r/iam_group.html [2]: /docs/providers/aws/r/iam_user.html +[3]: /docs/providers/aws/r/iam_user_group_membership.html diff --git a/website/docs/r/iam_group_policy.html.markdown b/website/docs/r/iam_group_policy.html.markdown index 4a182ccb96d..ab22efad963 100644 --- a/website/docs/r/iam_group_policy.html.markdown +++ b/website/docs/r/iam_group_policy.html.markdown @@ -43,8 +43,7 @@ resource "aws_iam_group" "my_developers" { The following arguments are supported: -* `policy` - (Required) The policy document. This is a JSON formatted string. - The heredoc syntax or `file` function is helpful here. +* `policy` - (Required) The policy document. This is a JSON formatted string. For more information about building IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html) * `name` - (Optional) The name of the policy. If omitted, Terraform will assign a random, unique name. * `name_prefix` - (Optional) Creates a unique name beginning with the specified diff --git a/website/docs/r/iam_group_policy_attachment.markdown b/website/docs/r/iam_group_policy_attachment.markdown index 5ea98362927..ff65dba5387 100644 --- a/website/docs/r/iam_group_policy_attachment.markdown +++ b/website/docs/r/iam_group_policy_attachment.markdown @@ -10,6 +10,10 @@ description: |- Attaches a Managed IAM Policy to an IAM group +~> **NOTE:** The usage of this resource conflicts with the `aws_iam_policy_attachment` resource and will permanently show a difference if both are defined. 
+ +## Example Usage + ```hcl resource "aws_iam_group" "group" { name = "test-group" @@ -18,7 +22,7 @@ resource "aws_iam_group" "group" { resource "aws_iam_policy" "policy" { name = "test-policy" description = "A test policy" - policy = # omitted + policy = "" # insert policy here } resource "aws_iam_group_policy_attachment" "test-attach" { @@ -31,5 +35,5 @@ resource "aws_iam_group_policy_attachment" "test-attach" { The following arguments are supported: -* `group` (Required) - The group the policy should be applied to -* `policy_arn` (Required) - The ARN of the policy you want to apply +* `group` (Required) - The group the policy should be applied to +* `policy_arn` (Required) - The ARN of the policy you want to apply diff --git a/website/docs/r/iam_instance_profile.html.markdown b/website/docs/r/iam_instance_profile.html.markdown index bd71504bb1d..8e69a38224a 100644 --- a/website/docs/r/iam_instance_profile.html.markdown +++ b/website/docs/r/iam_instance_profile.html.markdown @@ -16,7 +16,7 @@ Provides an IAM instance profile. ```hcl resource "aws_iam_instance_profile" "test_profile" { - name = "test_profile" + name = "test_profile" role = "${aws_iam_role.role.name}" } diff --git a/website/docs/r/iam_openid_connect_provider.html.markdown b/website/docs/r/iam_openid_connect_provider.html.markdown index 62851bfe5b5..5ff91adae78 100644 --- a/website/docs/r/iam_openid_connect_provider.html.markdown +++ b/website/docs/r/iam_openid_connect_provider.html.markdown @@ -14,11 +14,13 @@ Provides an IAM OpenID Connect provider. ```hcl resource "aws_iam_openid_connect_provider" "default" { - url = "https://accounts.google.com" - client_id_list = [ - "266362248691-342342xasdasdasda-apps.googleusercontent.com" - ] - thumbprint_list = [] + url = "https://accounts.google.com" + + client_id_list = [ + "266362248691-342342xasdasdasda-apps.googleusercontent.com", + ] + + thumbprint_list = [] } ``` @@ -32,7 +34,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The ARN assigned by AWS for this provider. diff --git a/website/docs/r/iam_policy.html.markdown b/website/docs/r/iam_policy.html.markdown index 706184de5d5..ee299962ffa 100644 --- a/website/docs/r/iam_policy.html.markdown +++ b/website/docs/r/iam_policy.html.markdown @@ -10,6 +10,8 @@ description: |- Provides an IAM policy. +## Example Usage + ```hcl resource "aws_iam_policy" "policy" { name = "test_policy" @@ -37,19 +39,16 @@ EOF The following arguments are supported: -* `description` - (Optional) Description of the IAM policy. +* `description` - (Optional, Forces new resource) Description of the IAM policy. * `name` - (Optional, Forces new resource) The name of the policy. If omitted, Terraform will assign a random, unique name. * `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. * `path` - (Optional, default "/") Path in which to create the policy. See [IAM Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more information. -* `policy` - (Required) The policy document. This is a JSON formatted string. - The heredoc syntax, `file` function, or the [`aws_iam_policy_document` data - source](/docs/providers/aws/d/iam_policy_document.html) - are all helpful here. +* `policy` - (Required) The policy document. This is a JSON formatted string. 
For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html) ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The policy's ID. * `arn` - The ARN assigned by AWS to this policy. diff --git a/website/docs/r/iam_policy_attachment.html.markdown b/website/docs/r/iam_policy_attachment.html.markdown index 414f2f4b276..c838e9aa325 100644 --- a/website/docs/r/iam_policy_attachment.html.markdown +++ b/website/docs/r/iam_policy_attachment.html.markdown @@ -10,7 +10,11 @@ description: |- Attaches a Managed IAM Policy to user(s), role(s), and/or group(s) -!> **WARNING:** The aws_iam_policy_attachment resource creates **exclusive** attachments of IAM policies. Across the entire AWS account, all of the users/roles/groups to which a single policy is attached must be declared by a single aws_iam_policy_attachment resource. This means that even any users/roles/groups that have the attached policy via some mechanism other than Terraform will have that attached policy revoked by Terraform. Consider `aws_iam_role_policy_attachment`, `aws_iam_user_policy_attachment`, or `aws_iam_group_policy_attachment` instead. These resources do not enforce exclusive attachment of an IAM policy. +!> **WARNING:** The aws_iam_policy_attachment resource creates **exclusive** attachments of IAM policies. Across the entire AWS account, all of the users/roles/groups to which a single policy is attached must be declared by a single aws_iam_policy_attachment resource. This means that even any users/roles/groups that have the attached policy via any other mechanism (including other Terraform resources) will have that attached policy revoked by this resource. Consider `aws_iam_role_policy_attachment`, `aws_iam_user_policy_attachment`, or `aws_iam_group_policy_attachment` instead. These resources do not enforce exclusive attachment of an IAM policy. + +~> **NOTE:** The usage of this resource conflicts with the `aws_iam_group_policy_attachment`, `aws_iam_role_policy_attachment`, and `aws_iam_user_policy_attachment` resources and will permanently show a difference if both are defined. + +## Example Usage ```hcl resource "aws_iam_user" "user" { @@ -28,7 +32,7 @@ resource "aws_iam_group" "group" { resource "aws_iam_policy" "policy" { name = "test-policy" description = "A test policy" - policy = # omitted + policy = "" # insert policy here } resource "aws_iam_policy_attachment" "test-attach" { @@ -44,15 +48,15 @@ resource "aws_iam_policy_attachment" "test-attach" { The following arguments are supported: -* `name` (Required) - The name of the policy. This cannot be an empty string. -* `users` (Optional) - The user(s) the policy should be applied to -* `roles` (Optional) - The role(s) the policy should be applied to -* `groups` (Optional) - The group(s) the policy should be applied to -* `policy_arn` (Required) - The ARN of the policy you want to apply +* `name` (Required) - The name of the attachment. This cannot be an empty string. 
+* `users` (Optional) - The user(s) the policy should be applied to +* `roles` (Optional) - The role(s) the policy should be applied to +* `groups` (Optional) - The group(s) the policy should be applied to +* `policy_arn` (Required) - The ARN of the policy you want to apply ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The policy's ID. -* `name` - The name of the policy. +* `name` - The name of the attachment. diff --git a/website/docs/r/iam_role.html.markdown b/website/docs/r/iam_role.html.markdown index 2be5a6b834a..6be007a0bb2 100644 --- a/website/docs/r/iam_role.html.markdown +++ b/website/docs/r/iam_role.html.markdown @@ -31,6 +31,10 @@ resource "aws_iam_role" "test_role" { ] } EOF + + tags { + tag-key = "tag-value" + } } ``` @@ -49,9 +53,13 @@ The following arguments are supported: See [IAM Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more information. * `description` - (Optional) The description of the role. +* `max_session_duration` - (Optional) The maximum session duration (in seconds) that you want to set for the specified role. If you do not specify a value for this setting, the default maximum of one hour is applied. This setting can have a value from 1 hour to 12 hours. +* `permissions_boundary` - (Optional) The ARN of the policy that is used to set the permissions boundary for the role. +* `tags` - Key-value mapping of tags for the IAM role + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The Amazon Resource Name (ARN) specifying the role. * `create_date` - The creation date of the IAM role. diff --git a/website/docs/r/iam_role_policy.html.markdown b/website/docs/r/iam_role_policy.html.markdown index 568c3c93b0d..482aaedac0c 100644 --- a/website/docs/r/iam_role_policy.html.markdown +++ b/website/docs/r/iam_role_policy.html.markdown @@ -62,8 +62,7 @@ The following arguments are supported: assign a random, unique name. * `name_prefix` - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `name`. -* `policy` - (Required) The policy document. This is a JSON formatted string. - The heredoc syntax or `file` function is helpful here. +* `policy` - (Required) The policy document. This is a JSON formatted string. For more information about building IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html) * `role` - (Required) The IAM role to attach to the policy. ## Attributes Reference diff --git a/website/docs/r/iam_role_policy_attachment.markdown b/website/docs/r/iam_role_policy_attachment.markdown index d7c8302b7a4..3bc1b555a4a 100644 --- a/website/docs/r/iam_role_policy_attachment.markdown +++ b/website/docs/r/iam_role_policy_attachment.markdown @@ -10,10 +10,15 @@ description: |- Attaches a Managed IAM Policy to an IAM role +~> **NOTE:** The usage of this resource conflicts with the `aws_iam_policy_attachment` resource and will permanently show a difference if both are defined. + +## Example Usage + ```hcl resource "aws_iam_role" "role" { - name = "test-role" - assume_role_policy = < **NOTE:** The usage of this resource conflicts with the `aws_iam_policy_attachment` resource and will permanently show a difference if both are defined. 
+ +## Example Usage + ```hcl resource "aws_iam_user" "user" { - name = "test-user" + name = "test-user" } resource "aws_iam_policy" "policy" { - name = "test-policy" - description = "A test policy" - policy = # omitted + name = "test-policy" + description = "A test policy" + policy = "" # insert policy here } resource "aws_iam_user_policy_attachment" "test-attach" { - user = "${aws_iam_user.user.name}" - policy_arn = "${aws_iam_policy.policy.arn}" + user = "${aws_iam_user.user.name}" + policy_arn = "${aws_iam_policy.policy.arn}" } ``` @@ -31,5 +35,5 @@ resource "aws_iam_user_policy_attachment" "test-attach" { The following arguments are supported: -* `user` (Required) - The user the policy should be applied to -* `policy_arn` (Required) - The ARN of the policy you want to apply +* `user` (Required) - The user the policy should be applied to +* `policy_arn` (Required) - The ARN of the policy you want to apply diff --git a/website/docs/r/iam_user_ssh_key.html.markdown b/website/docs/r/iam_user_ssh_key.html.markdown index 4b347626318..6b440e0a35b 100644 --- a/website/docs/r/iam_user_ssh_key.html.markdown +++ b/website/docs/r/iam_user_ssh_key.html.markdown @@ -36,7 +36,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `ssh_public_key_id` - The unique identifier for the SSH public key. * `fingerprint` - The MD5 message digest of the SSH public key. diff --git a/website/docs/r/inspector_assessment_target.html.markdown b/website/docs/r/inspector_assessment_target.html.markdown index 61a4d81eaab..ab0f8061cbe 100644 --- a/website/docs/r/inspector_assessment_target.html.markdown +++ b/website/docs/r/inspector_assessment_target.html.markdown @@ -35,6 +35,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The target assessment ARN. diff --git a/website/docs/r/inspector_assessment_template.html.markdown b/website/docs/r/inspector_assessment_template.html.markdown index e20a05bcdc3..80626314f54 100644 --- a/website/docs/r/inspector_assessment_template.html.markdown +++ b/website/docs/r/inspector_assessment_template.html.markdown @@ -38,6 +38,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The template assessment ARN. diff --git a/website/docs/r/inspector_resource_group.html.markdown b/website/docs/r/inspector_resource_group.html.markdown index 734d1b378da..94ee08294b5 100644 --- a/website/docs/r/inspector_resource_group.html.markdown +++ b/website/docs/r/inspector_resource_group.html.markdown @@ -29,6 +29,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The resource group ARN. diff --git a/website/docs/r/instance.html.markdown b/website/docs/r/instance.html.markdown index cbe343782f5..e2776350ed6 100644 --- a/website/docs/r/instance.html.markdown +++ b/website/docs/r/instance.html.markdown @@ -54,8 +54,18 @@ The following arguments are supported: * `availability_zone` - (Optional) The AZ to start the instance in. * `placement_group` - (Optional) The Placement Group to start the instance in. 
* `tenancy` - (Optional) The tenancy of the instance (if the instance is running in a VPC). An instance with a tenancy of dedicated runs on single-tenant hardware. The host tenancy is not supported for the import-instance command. -* `ebs_optimized` - (Optional) If true, the launched EC2 instance will be - EBS-optimized. +* `cpu_core_count` - (Optional) Sets the number of CPU cores for an instance. This option is + only supported on creation of instance type that support CPU Options + [CPU Cores and Threads Per CPU Core Per Instance Type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html#cpu-options-supported-instances-values) - specifying this option for unsupported instance types will return an error from the EC2 API. +* `cpu_threads_per_core` - (Optional - has no effect unless `cpu_core_count` is also set) If set to to 1, hyperthreading is disabled on the launched instance. Defaults to 2 if not set. See [Optimizing CPU Options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html) for more information. + +-> **NOTE:** Changing `cpu_core_count` and/or `cpu_threads_per_core` will cause the resource to be destroyed and re-created. + +* `ebs_optimized` - (Optional) If true, the launched EC2 instance will be EBS-optimized. + Note that if this is not set on an instance type that is optimized by default then + this will show as disabled but if the instance type is optimized by default then + there is no need to set this and there is no effect to disabling it. + See the [EBS Optimized section](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) of the AWS User Guide for more information. * `disable_api_termination` - (Optional) If true, enables [EC2 Instance Termination Protection](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination) * `instance_initiated_shutdown_behavior` - (Optional) Shutdown behavior for the @@ -63,12 +73,15 @@ instance. Amazon defaults this to `stop` for EBS-backed instances and `terminate` for instance-store instances. Cannot be set on instance-store instances. See [Shutdown Behavior](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingInstanceInitiatedShutdownBehavior) for more information. * `instance_type` - (Required) The type of instance to start. Updates to this field will trigger a stop/start of the EC2 instance. -* `key_name` - (Optional) The key name to use for the instance. +* `key_name` - (Optional) The key name of the Key Pair to use for the instance; which can be managed using [the `aws_key_pair` resource](key_pair.html). + * `get_password_data` - (Optional) If true, wait for password data to become available and retrieve it. Useful for getting the administrator password for instances running Microsoft Windows. The password data is exported to the `password_data` attribute. See [GetPasswordData](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetPasswordData.html) for more information. * `monitoring` - (Optional) If true, the launched EC2 instance will have detailed monitoring enabled. (Available since v0.6.0) -* `security_groups` - (Optional) A list of security group names to associate with. - If you are creating Instances in a VPC, use `vpc_security_group_ids` instead. -* `vpc_security_group_ids` - (Optional) A list of security group IDs to associate with. 
+* `security_groups` - (Optional, EC2-Classic and default VPC only) A list of security group names (EC2-Classic) or IDs (default VPC) to associate with. + +-> **NOTE:** If you are creating Instances in a VPC, use `vpc_security_group_ids` instead. + +* `vpc_security_group_ids` - (Optional, VPC only) A list of security group IDs to associate with. * `subnet_id` - (Optional) The VPC Subnet ID to launch in. * `associate_public_ip_address` - (Optional) Associate a public ip address with an instance in a VPC. Boolean value. * `private_ip` - (Optional) Private IP address to associate with the @@ -90,6 +103,7 @@ instances. See [Shutdown Behavior](https://docs.aws.amazon.com/AWSEC2/latest/Use * `ephemeral_block_device` - (Optional) Customize Ephemeral (also known as "Instance Store") volumes on the instance. See [Block Devices](#block-devices) below for details. * `network_interface` - (Optional) Customize network interfaces to be attached at instance boot time. See [Network Interfaces](#network-interfaces) below for more details. +* `credit_specification` - (Optional) Customize the credit specification of the instance. See [Credit Specification](#credit-specification) below for more details. ### Timeouts @@ -108,8 +122,7 @@ to understand the implications of using these attributes. The `root_block_device` mapping supports the following: -* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`, - or `"io1"`. (Default: `"standard"`). +* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`, `"io1"`, `"sc1"`, or `"st1"`. (Default: `"standard"`). * `volume_size` - (Optional) The size of the volume in gigabytes. * `iops` - (Optional) The amount of provisioned [IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). @@ -174,50 +187,69 @@ Each `network_interface` block supports the following: * `device_index` - (Required) The integer index of the network interface attachment. Limited by instance type. * `network_interface_id` - (Required) The ID of the network interface to attach. -* `delete_on_termination` - (Optional) Whether or not to delete the network interface on instance termination. Defaults to `false`. +* `delete_on_termination` - (Optional) Whether or not to delete the network interface on instance termination. Defaults to `false`. Currently, the only valid value is `false`, as this is only supported when creating new network interfaces when launching an instance. + +### Credit Specification + +~> **NOTE:** Removing this configuration on existing instances will only stop managing it. It will not change the configuration back to the default for the instance type. + +Credit specification can be applied/modified to the EC2 Instance at any time. + +The `credit_specification` block supports the following: + +* `cpu_credits` - (Optional) The credit option for CPU usage. 
### Example ```hcl resource "aws_vpc" "my_vpc" { cidr_block = "172.16.0.0/16" + tags { Name = "tf-example" } } resource "aws_subnet" "my_subnet" { - vpc_id = "${aws_vpc.my_vpc.id}" - cidr_block = "172.16.10.0/24" + vpc_id = "${aws_vpc.my_vpc.id}" + cidr_block = "172.16.10.0/24" availability_zone = "us-west-2a" + tags { Name = "tf-example" } } resource "aws_network_interface" "foo" { - subnet_id = "${aws_subnet.my_subnet.id}" + subnet_id = "${aws_subnet.my_subnet.id}" private_ips = ["172.16.10.100"] + tags { Name = "primary_network_interface" } } resource "aws_instance" "foo" { - ami = "ami-22b9a343" # us-west-2 - instance_type = "t2.micro" - network_interface { - network_interface_id = "${aws_network_interface.foo.id}" - device_index = 0 + ami = "ami-22b9a343" # us-west-2 + instance_type = "t2.micro" + + network_interface { + network_interface_id = "${aws_network_interface.foo.id}" + device_index = 0 + } + + credit_specification { + cpu_credits = "unlimited" } } ``` ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The instance ID. +* `arn` - The ARN of the instance. * `availability_zone` - The availability zone of the instance. * `placement_group` - The placement group of the instance. * `key_name` - The key name of the instance @@ -239,6 +271,7 @@ The following attributes are exported: * `security_groups` - The associated security groups. * `vpc_security_group_ids` - The associated security groups in non-default VPC * `subnet_id` - The VPC subnet ID. +* `credit_specification` - Credit specification of instance. For any `root_block_device` and `ebs_block_device` the `volume_id` is exported. e.g. `aws_instance.web.root_block_device.0.volume_id` diff --git a/website/docs/r/internet_gateway.html.markdown b/website/docs/r/internet_gateway.html.markdown index 8b5eaca6c70..69cfbfb7573 100644 --- a/website/docs/r/internet_gateway.html.markdown +++ b/website/docs/r/internet_gateway.html.markdown @@ -43,7 +43,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the Internet Gateway. @@ -54,4 +54,4 @@ Internet Gateways can be imported using the `id`, e.g. ``` $ terraform import aws_internet_gateway.gw igw-c0a643a9 -``` \ No newline at end of file +``` diff --git a/website/docs/r/iot_certificate.html.markdown b/website/docs/r/iot_certificate.html.markdown index 28492766db4..502c5d2d0ed 100644 --- a/website/docs/r/iot_certificate.html.markdown +++ b/website/docs/r/iot_certificate.html.markdown @@ -14,7 +14,7 @@ Creates and manages an AWS IoT certificate. ```hcl resource "aws_iot_certificate" "cert" { - csr = "${file("/my/csr.pem")}" + csr = "${file("/my/csr.pem")}" active = true } ``` diff --git a/website/docs/r/iot_policy.html.markdown b/website/docs/r/iot_policy.html.markdown index 22b2820aab7..052785e45a7 100644 --- a/website/docs/r/iot_policy.html.markdown +++ b/website/docs/r/iot_policy.html.markdown @@ -10,9 +10,12 @@ description: |- Provides an IoT policy. +## Example Usage + ```hcl resource "aws_iot_policy" "pubsub" { - name = "PubSubToAnyTopic" + name = "PubSubToAnyTopic" + policy = < **NOTE:** Kinesis Firehose is currently only supported in us-east-1, us-west-2 and eu-west-1. ## Argument Reference @@ -240,7 +264,7 @@ The following arguments are supported: * `name` - (Required) A name to identify the stream. 
This is unique to the AWS account and region the Stream is created in. * `kinesis_source_configuration` - (Optional) Allows the ability to specify the kinesis stream that is used as the source of the firehose delivery stream. -* `destination` – (Required) This is the destination to where the data is delivered. The only options are `s3` (Deprecated, use `extended_s3` instead), `extended_s3`, `redshift`, and `elasticsearch`. +* `destination` – (Required) This is the destination to where the data is delivered. The only options are `s3` (Deprecated, use `extended_s3` instead), `extended_s3`, `redshift`, `elasticsearch`, and `splunk`. * `s3_configuration` - (Optional, Deprecated, see/use `extended_s3_configuration` unless `destination` is `redshift`) Configuration options for the s3 destination (or the intermediate bucket if the destination is redshift). More details are given below. * `extended_s3_configuration` - (Optional, only Required when `destination` is `extended_s3`) Enhanced configuration options for the s3 destination. More details are given below. @@ -248,7 +272,8 @@ is redshift). More details are given below. Using `redshift_configuration` requires the user to also specify a `s3_configuration` block. More details are given below. -The `kinesis_source_configuration` object supports the following: +The `kinesis_source_configuration` object supports the following: + * `kinesis_stream_arn` (Required) The kinesis stream used as the source of the firehose delivery stream. * `role_arn` (Required) The ARN of the role that provides access to the source Kinesis stream. @@ -267,7 +292,10 @@ be used. The `extended_s3_configuration` object supports the same fields from `s3_configuration` as well as the following: +* `data_format_conversion_configuration` - (Optional) Nested argument for the serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3. More details given below. * `processing_configuration` - (Optional) The data processing configuration. More details are given below. +* `s3_backup_mode` - (Optional) The Amazon S3 backup mode. Valid values are `Disabled` and `Enabled`. Default value is `Disabled`. +* `s3_backup_configuration` - (Optional) The configuration for backup in Amazon S3. Required if `s3_backup_mode` is `Enabled`. Supports the same fields as `s3_configuration` object. The `redshift_configuration` object supports the following: @@ -282,6 +310,7 @@ The `redshift_configuration` object supports the following: * `copy_options` - (Optional) Copy options for copying the data from the s3 intermediate bucket into redshift, for example to change the default delimiter. For valid values, see the [AWS documentation](http://docs.aws.amazon.com/firehose/latest/APIReference/API_CopyCommand.html) * `data_table_columns` - (Optional) The data table columns that will be targeted by the copy command. * `cloudwatch_logging_options` - (Optional) The CloudWatch Logging Options for the delivery stream. More details are given below +* `processing_configuration` - (Optional) The data processing configuration. More details are given below. The `elasticsearch_configuration` object supports the following: @@ -295,6 +324,7 @@ The `elasticsearch_configuration` object supports the following: * `s3_backup_mode` - (Optional) Defines how documents should be delivered to Amazon S3. Valid values are `FailedDocumentsOnly` and `AllDocuments`. Default value is `FailedDocumentsOnly`. 
* `type_name` - (Required) The Elasticsearch type name with maximum length of 100 characters. * `cloudwatch_logging_options` - (Optional) The CloudWatch Logging Options for the delivery stream. More details are given below +* `processing_configuration` - (Optional) The data processing configuration. More details are given below. The `splunk_configuration` objects supports the following: @@ -302,9 +332,10 @@ The `splunk_configuration` objects supports the following: * `hec_endpoint` - (Required) The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your data. * `hec_endpoint_type` - (Optional) The HEC endpoint type. Valid values are `Raw` or `Event`. The default value is `Raw`. * `hec_token` - The GUID that you obtain from your Splunk cluster when you create a new HEC endpoint. -* `s3_backup_mode` - (Optional) Defines how documents should be delivered to Amazon S3. Valid values are `FailedDocumentsOnly` and `AllDocuments`. Default value is `FailedDocumentsOnly`. +* `s3_backup_mode` - (Optional) Defines how documents should be delivered to Amazon S3. Valid values are `FailedEventsOnly` and `AllEvents`. Default value is `FailedEventsOnly`. * `retry_duration` - (Optional) After an initial failure to deliver to Amazon Elasticsearch, the total amount of time, in seconds between 0 to 7200, during which Firehose re-attempts delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. The default value is 300s. There will be no retry if the value is 0. * `cloudwatch_logging_options` - (Optional) The CloudWatch Logging Options for the delivery stream. More details are given below. +* `processing_configuration` - (Optional) The data processing configuration. More details are given below. The `cloudwatch_logging_options` object supports the following: @@ -324,9 +355,112 @@ The `processors` array objects support the following: The `parameters` array objects support the following: -* `parameter_name` - (Required) Parameter name. Valid Values: `LambdaArn`, `NumberOfRetries` +* `parameter_name` - (Required) Parameter name. Valid Values: `LambdaArn`, `NumberOfRetries`, `RoleArn`, `BufferSizeInMBs`, `BufferIntervalInSeconds` * `parameter_value` - (Required) Parameter value. Must be between 1 and 512 length (inclusive). When providing a Lambda ARN, you should specify the resource version as well. +### data_format_conversion_configuration + +Example: + +```hcl +resource "aws_kinesis_firehose_delivery_stream" "example" { + # ... other configuration ... + extended_s3_configuration { + # Must be at least 64 + buffer_size = 128 + + # ... other configuration ... + data_format_conversion_configuration { + input_format_configuration { + deserializer { + hive_json_ser_de {} + } + } + + output_format_configuration { + serializer { + orc_ser_de {} + } + } + + schema_configuration { + database_name = "${aws_glue_catalog_table.example.database_name}" + role_arn = "${aws_iam_role.example.arn}" + table_name = "${aws_glue_catalog_table.example.name}" + } + } + } +} +``` + +* `input_format_configuration` - (Required) Nested argument that specifies the deserializer that you want Kinesis Data Firehose to use to convert the format of your data from JSON. More details below. +* `output_format_configuration` - (Required) Nested argument that specifies the serializer that you want Kinesis Data Firehose to use to convert the format of your data to the Parquet or ORC format. More details below. 
+* `schema_configuration` - (Required) Nested argument that specifies the AWS Glue Data Catalog table that contains the column information. More details below. +* `enabled` - (Optional) Defaults to `true`. Set it to `false` if you want to disable format conversion while preserving the configuration details. + +#### input_format_configuration + +* `deserializer` - (Required) Nested argument that specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. More details below. + +##### deserializer + +~> **NOTE:** One of the deserializers must be configured. If no nested configuration needs to occur simply declare as `XXX_json_ser_de = []` or `XXX_json_ser_de {}`. + +* `hive_json_ser_de` - (Optional) Nested argument that specifies the native Hive / HCatalog JsonSerDe. More details below. +* `open_x_json_ser_de` - (Optional) Nested argument that specifies the OpenX SerDe. More details below. + +###### hive_json_ser_de + +* `timestamp_formats` - (Optional) A list of how you want Kinesis Data Firehose to parse the date and time stamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see [Class DateTimeFormat](https://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html). You can also use the special value millis to parse time stamps in epoch milliseconds. If you don't specify a format, Kinesis Data Firehose uses java.sql.Timestamp::valueOf by default. + +###### open_x_json_ser_de + +* `case_insensitive` - (Optional) When set to true, which is the default, Kinesis Data Firehose converts JSON keys to lowercase before deserializing them. +* `column_to_json_key_mappings` - (Optional) A map of column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to `{ ts = "timestamp" }` to map this key to a column named ts. +* `convert_dots_in_json_keys_to_underscores` - (Optional) When set to `true`, specifies that the names of the keys include dots and that you want Kinesis Data Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. Defaults to `false`. + +#### output_format_configuration + +* `serializer` - (Required) Nested argument that specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. More details below. + +##### serializer + +~> **NOTE:** One of the serializers must be configured. If no nested configuration needs to occur simply declare as `XXX_ser_de = []` or `XXX_ser_de {}`. + +* `orc_ser_de` - (Optional) Nested argument that specifies converting data to the ORC format before storing it in Amazon S3. For more information, see [Apache ORC](https://orc.apache.org/docs/). More details below. +* `parquet_ser_de` - (Optional) Nested argument that specifies converting data to the Parquet format before storing it in Amazon S3. For more information, see [Apache Parquet](https://parquet.apache.org/documentation/latest/). More details below. + +###### orc_ser_de + +* `block_size_bytes` - (Optional) The Hadoop Distributed File System (HDFS) block size. 
This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations. +* `bloom_filter_columns` - (Optional) A list of column names for which you want Kinesis Data Firehose to create bloom filters. +* `bloom_filter_false_positive_probability` - (Optional) The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is `0.05`, the minimum is `0`, and the maximum is `1`. +* `compression` - (Optional) The compression code to use over data blocks. The default is `SNAPPY`. +* `dictionary_key_threshold` - (Optional) A float that represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to `1`. +* `enable_padding` - (Optional) Set this to `true` to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is `false`. +* `format_version` - (Optional) The version of the file to write. The possible values are `V0_11` and `V0_12`. The default is `V0_12`. +* `padding_tolerance` - (Optional) A float between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is `0.05`, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when `enable_padding` is `false`. +* `row_index_stride` - (Optional) The number of rows between index entries. The default is `10000` and the minimum is `1000`. +* `stripe_size_bytes` - (Optional) The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB. + +###### parquet_ser_de + +* `block_size_bytes` - (Optional) The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations. +* `compression` - (Optional) The compression code to use over data blocks. The possible values are `UNCOMPRESSED`, `SNAPPY`, and `GZIP`, with the default being `SNAPPY`. Use `SNAPPY` for higher decompression speed. Use `GZIP` if the compression ratio is more important than speed. +* `enable_dictionary_compression` - (Optional) Indicates whether to enable dictionary compression. +* `max_padding_bytes` - (Optional) The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is `0`. +* `page_size_bytes` - (Optional) The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB. +* `writer_version` - (Optional) Indicates the version of row format to output. The possible values are `V1` and `V2`. The default is `V1`. 
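For comparison with the Hive JSON / ORC example above, the following is a minimal, hypothetical sketch (reusing that example's `aws_glue_catalog_table.example` and `aws_iam_role.example` names) that pairs the OpenX JSON deserializer with the Parquet serializer:

```hcl
resource "aws_kinesis_firehose_delivery_stream" "example" {
  # ... other configuration ...
  extended_s3_configuration {
    # ... other configuration ...
    data_format_conversion_configuration {
      input_format_configuration {
        deserializer {
          # Replace dots in JSON keys (e.g. "a.b" -> "a_b") for Hive compatibility
          open_x_json_ser_de {
            convert_dots_in_json_keys_to_underscores = true
          }
        }
      }

      output_format_configuration {
        serializer {
          # GZIP trades decompression speed for a better compression ratio
          parquet_ser_de {
            compression = "GZIP"
          }
        }
      }

      schema_configuration {
        database_name = "${aws_glue_catalog_table.example.database_name}"
        role_arn      = "${aws_iam_role.example.arn}"
        table_name    = "${aws_glue_catalog_table.example.name}"
      }
    }
  }
}
```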
+ +#### schema_configuration + +* `database_name` - (Required) Specifies the name of the AWS Glue database that contains the schema for the output data. +* `role_arn` - (Required) The role that Kinesis Data Firehose can use to access AWS Glue. This role must be in the same account you use for Kinesis Data Firehose. Cross-account roles aren't allowed. +* `table_name` - (Required) Specifies the AWS Glue table that contains the column information that constitutes your data schema. +* `catalog_id` - (Optional) The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default. +* `region` - (Optional) If you don't specify an AWS Region, the default is the current region. +* `version_id` - (Optional) Specifies the table version for the output data schema. Defaults to `LATEST`. + ## Attributes Reference * `arn` - The Amazon Resource Name (ARN) specifying the Stream diff --git a/website/docs/r/kinesis_stream.html.markdown b/website/docs/r/kinesis_stream.html.markdown index 59b460d3e54..054056b2bbd 100644 --- a/website/docs/r/kinesis_stream.html.markdown +++ b/website/docs/r/kinesis_stream.html.markdown @@ -38,7 +38,7 @@ The following arguments are supported: * `name` - (Required) A name to identify the stream. This is unique to the AWS account and region the Stream is created in. -* `shard_count` – (Required) The number of shards that the stream will use. +* `shard_count` – (Required) The number of shards that the stream will use. Amazon has guidlines for specifying the Stream size that should be referenced when creating a Kinesis stream. See [Amazon Kinesis Streams][2] for more. * `retention_period` - (Optional) Length of time data records are accessible after they are added to the stream. The maximum value of a stream's retention period is 168 hours. Minimum value is 24. Default is 24. diff --git a/website/docs/r/kms_grant.html.markdown b/website/docs/r/kms_grant.html.markdown index 89cf6effbca..7d83f6a0110 100644 --- a/website/docs/r/kms_grant.html.markdown +++ b/website/docs/r/kms_grant.html.markdown @@ -16,7 +16,7 @@ Provides a resource-based access control mechanism for a KMS customer master key resource "aws_kms_key" "a" {} resource "aws_iam_role" "a" { -name = "iam-role-for-grant" + name = "iam-role-for-grant" assume_role_policy = < **NOTE::** Please note that listeners that are attached to Application Load Balancers must use either `HTTP` or `HTTPS` protocols while listeners that are attached to Network Load Balancers must use the `TCP` protocol. + Action Blocks (for `default_action`) support the following: -* `target_group_arn` - (Required) The ARN of the Target Group to which to route traffic. -* `type` - (Required) The type of routing action. The only valid value is `forward`. +* `type` - (Required) The type of routing action. Valid values are `forward`, `redirect`, `fixed-response`, `authenticate-cognito` and `authenticate-oidc`. +* `target_group_arn` - (Optional) The ARN of the Target Group to which to route traffic. Required if `type` is `forward`. +* `redirect` - (Optional) Information for creating a redirect action. Required if `type` is `redirect`. +* `fixed_response` - (Optional) Information for creating an action that returns a custom HTTP response. Required if `type` is `fixed-response`. + +Redirect Blocks (for `redirect`) support the following: + +~> **NOTE::** You can reuse URI components using the following reserved keywords: `#{protocol}`, `#{host}`, `#{port}`, `#{path}` (the leading "/" is removed) and `#{query}`. 
+ +* `host` - (Optional) The hostname. This component is not percent-encoded. The hostname can contain `#{host}`. Defaults to `#{host}`. +* `path` - (Optional) The absolute path, starting with the leading "/". This component is not percent-encoded. The path can contain #{host}, #{path}, and #{port}. Defaults to `/#{path}`. +* `port` - (Optional) The port. Specify a value from `1` to `65535` or `#{port}`. Defaults to `#{port}`. +* `protocol` - (Optional) The protocol. Valid values are `HTTP`, `HTTPS`, or `#{protocol}`. Defaults to `#{protocol}`. +* `query` - (Optional) The query parameters, URL-encoded when necessary, but not percent-encoded. Do not include the leading "?". Defaults to `#{query}`. +* `status_code` - (Required) The HTTP redirect code. The redirect is either permanent (`HTTP_301`) or temporary (`HTTP_302`). + +Fixed-response Blocks (for `fixed_response`) support the following: + +* `content_type` - (Required) The content type. Valid values are `text/plain`, `text/css`, `text/html`, `application/javascript` and `application/json`. +* `message_body` - (Optional) The message body. +* `status_code` - (Optional) The HTTP response code. Valid values are `2XX`, `4XX`, or `5XX`. + +Authenticate Cognito Blocks (for `authenticate_cognito`) supports the following: + +* `authentication_request_extra_params` - (Optional) The query parameters to include in the redirect request to the authorization endpoint. Max: 10. +* `on_unauthenticated_request` - (Optional) The behavior if the user is not authenticated. Valid values: `deny`, `allow` and `authenticate` +* `scope` - (Optional) The set of user claims to be requested from the IdP. +* `session_cookie_name` - (Optional) The name of the cookie used to maintain session information. +* `session_time_out` - (Optional) The maximum duration of the authentication session, in seconds. +* `user_pool_arn` - (Required) The ARN of the Cognito user pool. +* `user_pool_client_id` - (Required) The ID of the Cognito user pool client. +* `user_pool_domain` - (Required) The domain prefix or fully-qualified domain name of the Cognito user pool. + +Authenticate OIDC Blocks (for `authenticate_oidc`) supports the following: + +* `authentication_request_extra_params` - (Optional) The query parameters to include in the redirect request to the authorization endpoint. Max: 10. +* `authorization_endpoint` - (Required) The authorization endpoint of the IdP. +* `client_id` - (Required) The OAuth 2.0 client identifier. +* `client_secret` - (Required) The OAuth 2.0 client secret. +* `issuer` - (Required) The OIDC issuer identifier of the IdP. +* `on_unauthenticated_request` - (Optional) The behavior if the user is not authenticated. Valid values: `deny`, `allow` and `authenticate` +* `scope` - (Optional) The set of user claims to be requested from the IdP. +* `session_cookie_name` - (Optional) The name of the cookie used to maintain session information. +* `session_time_out` - (Optional) The maximum duration of the authentication session, in seconds. +* `token_endpoint` - (Required) The token endpoint of the IdP. +* `user_info_endpoint` - (Required) The user info endpoint of the IdP. 
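For the `aws_lb_listener` resource itself, these action blocks are configured on `default_action`; a minimal, hypothetical sketch (assuming an existing `aws_lb.front_end` load balancer) that redirects all HTTP traffic to HTTPS:

```hcl
resource "aws_lb_listener" "front_end_http" {
  load_balancer_arn = "${aws_lb.front_end.arn}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    # The omitted host, path and query components default to #{host}, /#{path} and #{query}
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}
```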
+ +Authentication Request Extra Params Blocks (for `authentication_request_extra_params`) supports the following: + +* `key` - (Required) The key of query parameter +* `value` - (Required) The value of query parameter ## Attributes Reference diff --git a/website/docs/r/lb_listener_rule.html.markdown b/website/docs/r/lb_listener_rule.html.markdown index 345974fd2ba..cfeb0b11053 100644 --- a/website/docs/r/lb_listener_rule.html.markdown +++ b/website/docs/r/lb_listener_rule.html.markdown @@ -15,7 +15,6 @@ Provides a Load Balancer Listener Rule resource. ## Example Usage ```hcl -# Create a new load balancer resource "aws_lb" "front_end" { # ... } @@ -39,6 +38,8 @@ resource "aws_lb_listener_rule" "static" { } } +# Forward action + resource "aws_lb_listener_rule" "host_based_routing" { listener_arn = "${aws_lb_listener.front_end.arn}" priority = 99 @@ -54,6 +55,104 @@ resource "aws_lb_listener_rule" "host_based_routing" { } } +# Redirect action + +resource "aws_lb_listener_rule" "redirect_http_to_https" { + listener_arn = "${aws_lb_listener.front_end.arn}" + + action { + type = "redirect" + + redirect { + port = "443" + protocol = "HTTPS" + status_code = "HTTP_301" + } + } + + condition { + field = "host-header" + values = ["my-service.*.terraform.io"] + } +} + +# Fixed-response action + +resource "aws_lb_listener_rule" "health_check" { + listener_arn = "${aws_lb_listener.front_end.arn}" + + action { + type = "fixed-response" + + fixed_response { + content_type = "text/plain" + message_body = "HEALTHY" + status_code = "200" + } + } + + condition { + field = "path-pattern" + values = ["/health"] + } +} + +# Authenticate-cognito Action + +resource "aws_cognito_user_pool" "pool" { + # ... +} + +resource "aws_cognito_user_pool_client" "client" { + # ... +} + +resource "aws_cognito_user_pool_domain" "domain" { + # ... +} + +resource "aws_lb_listener_rule" "admin" { + listener_arn = "${aws_lb_listener.front_end.arn}" + + action { + type = "authenticate-cognito" + + authenticate_cognito { + user_pool_arn = "${aws_cognito_user_pool.pool.arn}" + user_pool_client_id = "${aws_cognito_user_pool_client.client.id}" + user_pool_domain = "${aws_cognito_user_pool_domain.domain.domain}" + } + } + + action { + type = "forward" + target_group_arn = "${aws_lb_target_group.static.arn}" + } +} + +# Authenticate-oidc Action + +resource "aws_lb_listener" "admin" { + listener_arn = "${aws_lb_listener.front_end.arn}" + + action { + type = "authenticate-oidc" + + authenticate_oidc { + authorization_endpoint = "https://example.com/authorization_endpoint" + client_id = "client_id" + client_secret = "client_secret" + issuer = "https://example.com" + token_endpoint = "https://example.com/token_endpoint" + user_info_endpoint = "https://example.com/user_info_endpoint" + } + } + + action { + type = "forward" + target_group_arn = "${aws_lb_target_group.static.arn}" + } +} ``` ## Argument Reference @@ -67,8 +166,59 @@ The following arguments are supported: Action Blocks (for `action`) support the following: -* `target_group_arn` - (Required) The ARN of the Target Group to which to route traffic. -* `type` - (Required) The type of routing action. The only valid value is `forward`. +* `type` - (Required) The type of routing action. Valid values are `forward`, `redirect`, `fixed-response`, `authenticate-cognito` and `authenticate-oidc`. +* `target_group_arn` - (Optional) The ARN of the Target Group to which to route traffic. Required if `type` is `forward`. +* `redirect` - (Optional) Information for creating a redirect action. 
Required if `type` is `redirect`. +* `fixed_response` - (Optional) Information for creating an action that returns a custom HTTP response. Required if `type` is `fixed-response`. +* `authenticate_cognito` - (Optional) Information for creating an authenticate action using Cognito. Required if `type` is `authenticate-cognito`. +* `authenticate_oidc` - (Optional) Information for creating an authenticate action using OIDC. Required if `type` is `authenticate-oidc`. + +Redirect Blocks (for `redirect`) support the following: + +~> **NOTE::** You can reuse URI components using the following reserved keywords: `#{protocol}`, `#{host}`, `#{port}`, `#{path}` (the leading "/" is removed) and `#{query}`. + +* `host` - (Optional) The hostname. This component is not percent-encoded. The hostname can contain `#{host}`. Defaults to `#{host}`. +* `path` - (Optional) The absolute path, starting with the leading "/". This component is not percent-encoded. The path can contain #{host}, #{path}, and #{port}. Defaults to `/#{path}`. +* `port` - (Optional) The port. Specify a value from `1` to `65535` or `#{port}`. Defaults to `#{port}`. +* `protocol` - (Optional) The protocol. Valid values are `HTTP`, `HTTPS`, or `#{protocol}`. Defaults to `#{protocol}`. +* `query` - (Optional) The query parameters, URL-encoded when necessary, but not percent-encoded. Do not include the leading "?". Defaults to `#{query}`. +* `status_code` - (Required) The HTTP redirect code. The redirect is either permanent (`HTTP_301`) or temporary (`HTTP_302`). + +Fixed-response Blocks (for `fixed_response`) support the following: + +* `content_type` - (Required) The content type. Valid values are `text/plain`, `text/css`, `text/html`, `application/javascript` and `application/json`. +* `message_body` - (Optional) The message body. +* `status_code` - (Optional) The HTTP response code. Valid values are `2XX`, `4XX`, or `5XX`. + +Authenticate Cognito Blocks (for `authenticate_cognito`) supports the following: + +* `authentication_request_extra_params` - (Optional) The query parameters to include in the redirect request to the authorization endpoint. Max: 10. +* `on_unauthenticated_request` - (Optional) The behavior if the user is not authenticated. Valid values: `deny`, `allow` and `authenticate` +* `scope` - (Optional) The set of user claims to be requested from the IdP. +* `session_cookie_name` - (Optional) The name of the cookie used to maintain session information. +* `session_time_out` - (Optional) The maximum duration of the authentication session, in seconds. +* `user_pool_arn` - (Required) The ARN of the Cognito user pool. +* `user_pool_client_id` - (Required) The ID of the Cognito user pool client. +* `user_pool_domain` - (Required) The domain prefix or fully-qualified domain name of the Cognito user pool. + +Authenticate OIDC Blocks (for `authenticate_oidc`) supports the following: + +* `authentication_request_extra_params` - (Optional) The query parameters to include in the redirect request to the authorization endpoint. Max: 10. +* `authorization_endpoint` - (Required) The authorization endpoint of the IdP. +* `client_id` - (Required) The OAuth 2.0 client identifier. +* `client_secret` - (Required) The OAuth 2.0 client secret. +* `issuer` - (Required) The OIDC issuer identifier of the IdP. +* `on_unauthenticated_request` - (Optional) The behavior if the user is not authenticated. Valid values: `deny`, `allow` and `authenticate` +* `scope` - (Optional) The set of user claims to be requested from the IdP. 
+* `session_cookie_name` - (Optional) The name of the cookie used to maintain session information. +* `session_time_out` - (Optional) The maximum duration of the authentication session, in seconds. +* `token_endpoint` - (Required) The token endpoint of the IdP. +* `user_info_endpoint` - (Required) The user info endpoint of the IdP. + +Authentication Request Extra Params Blocks (for `authentication_request_extra_params`) supports the following: + +* `key` - (Required) The key of query parameter +* `value` - (Required) The value of query parameter Condition Blocks (for `condition`) support the following: diff --git a/website/docs/r/lb_ssl_negotiation_policy.html.markdown b/website/docs/r/lb_ssl_negotiation_policy.html.markdown index d67b19b48d5..c42b0e9932a 100644 --- a/website/docs/r/lb_ssl_negotiation_policy.html.markdown +++ b/website/docs/r/lb_ssl_negotiation_policy.html.markdown @@ -88,7 +88,7 @@ To set your attributes, please see the [AWS Elastic Load Balancing Developer Gui ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the policy. * `name` - The name of the stickiness policy. diff --git a/website/docs/r/lb_target_group.html.markdown b/website/docs/r/lb_target_group.html.markdown index 5ed67d91b75..72cb579b547 100644 --- a/website/docs/r/lb_target_group.html.markdown +++ b/website/docs/r/lb_target_group.html.markdown @@ -34,9 +34,11 @@ The following arguments are supported: * `name` - (Optional, Forces new resource) The name of the target group. If omitted, Terraform will assign a random, unique name. * `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. Cannot be longer than 6 characters. * `port` - (Required) The port on which targets receive traffic, unless overridden when registering a specific target. -* `protocol` - (Required) The protocol to use for routing traffic to the targets. +* `protocol` - (Required) The protocol to use for routing traffic to the targets. Should be one of "TCP", "HTTP", "HTTPS" or "TLS". * `vpc_id` - (Required) The identifier of the VPC in which to create the target group. * `deregistration_delay` - (Optional) The amount time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused. The range is 0-3600 seconds. The default value is 300 seconds. +* `slow_start` - (Optional) The amount time for targets to warm up before the load balancer sends them a full share of requests. The range is 30-900 seconds or 0 to disable. The default value is 0 seconds. +* `proxy_protocol_v2` - (Optional) Boolean to enable / disable support for proxy protocol v2 on Network Load Balancers. See [doc](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol) for more information. * `stickiness` - (Optional) A Stickiness block. Stickiness blocks are documented below. `stickiness` is only valid if used with Load Balancers of type `Application` * `health_check` - (Optional) A Health Check block. Health Check blocks are documented below. * `target_type` - (Optional) The type of target that you must specify when registering targets with this target group. @@ -64,13 +66,13 @@ http://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateTa for a complete reference. 
* `interval` - (Optional) The approximate amount of time, in seconds, between health checks of an individual target. Minimum value 5 seconds, Maximum value 300 seconds. Default 30 seconds. -* `path` - (Optional) The destination for the health check request. Default `/`. +* `path` - (Required for HTTP/HTTPS ALB) The destination for the health check request. Applies to Application Load Balancers only (HTTP/HTTPS), not Network Load Balancers (TCP). * `port` - (Optional) The port to use to connect with the target. Valid values are either ports 1-65536, or `traffic-port`. Defaults to `traffic-port`. * `protocol` - (Optional) The protocol to use to connect with the target. Defaults to `HTTP`. * `timeout` - (Optional) The amount of time, in seconds, during which no response means a failed health check. For Application Load Balancers, the range is 2 to 60 seconds and the default is 5 seconds. For Network Load Balancers, you cannot set a custom value, and the default is 10 seconds for TCP and HTTPS health checks and 6 seconds for HTTP health checks. * `healthy_threshold` - (Optional) The number of consecutive health checks successes required before considering an unhealthy target healthy. Defaults to 3. * `unhealthy_threshold` - (Optional) The number of consecutive health check failures required before considering the target unhealthy . For Network Load Balancers, this value must be the same as the `healthy_threshold`. Defaults to 3. -* `matcher` (Optional, only supported on Application Load Balancers): The HTTP codes to use when checking for a successful response from a target. You can specify multiple values (for example, "200,202") or a range of values (for example, "200-299"). +* `matcher` (Required for HTTP/HTTPS ALB) The HTTP codes to use when checking for a successful response from a target. You can specify multiple values (for example, "200,202") or a range of values (for example, "200-299"). Applies to Application Load Balancers only (HTTP/HTTPS), not Network Load Balancers (TCP). ## Attributes Reference diff --git a/website/docs/r/lb_target_group_attachment.html.markdown b/website/docs/r/lb_target_group_attachment.html.markdown index 80c1d548619..243ddf7b776 100644 --- a/website/docs/r/lb_target_group_attachment.html.markdown +++ b/website/docs/r/lb_target_group_attachment.html.markdown @@ -9,8 +9,7 @@ description: |- # aws_lb_target_group_attachment -Provides the ability to register instances and containers with a LB -target group +Provides the ability to register instances and containers with an Application Load Balancer (ALB) or Network Load Balancer (NLB) target group. For attaching resources with Elastic Load Balancer (ELB), see the [`aws_elb_attachment` resource](/docs/providers/aws/r/elb_attachment.html). ~> **Note:** `aws_alb_target_group_attachment` is known as `aws_lb_target_group_attachment`. The functionality is identical. diff --git a/website/docs/r/lightsail_instance.html.markdown b/website/docs/r/lightsail_instance.html.markdown index 2815acf3d49..b9375bd4878 100644 --- a/website/docs/r/lightsail_instance.html.markdown +++ b/website/docs/r/lightsail_instance.html.markdown @@ -85,8 +85,8 @@ The following attributes are exported in addition to the arguments listed above: ## Import -Lightsail Instances can be imported using their ARN, e.g. +Lightsail Instances can be imported using their name, e.g. 
``` -$ terraform import aws_lightsail_instance.bar +$ terraform import aws_lightsail_instance.gitlab_test 'custom gitlab' ``` diff --git a/website/docs/r/lightsail_key_pair.html.markdown b/website/docs/r/lightsail_key_pair.html.markdown index 9988eb30277..68c2151fb38 100644 --- a/website/docs/r/lightsail_key_pair.html.markdown +++ b/website/docs/r/lightsail_key_pair.html.markdown @@ -9,7 +9,7 @@ description: |- # aws_lightsail_key_pair Provides a Lightsail Key Pair, for use with Lightsail Instances. These key pairs -are seperate from EC2 Key Pairs, and must be created or imported for use with +are separate from EC2 Key Pairs, and must be created or imported for use with Lightsail. ~> **Note:** Lightsail is currently only supported in a limited number of AWS Regions, please see ["Regions and Availability Zones in Amazon Lightsail"](https://lightsail.aws.amazon.com/ls/docs/overview/article/understanding-regions-and-availability-zones-in-amazon-lightsail) for more details @@ -47,7 +47,7 @@ The following arguments are supported: * `name` - (Optional) The name of the Lightsail Key Pair. If omitted, a unique name will be generated by Terraform -* `pgp_key` – (Optional) An optional PGP key to encrypt the resulting private +* `pgp_key` – (Optional) An optional PGP key to encrypt the resulting private key material. Only used when creating a new key pair * `public_key` - (Required) The public key material. This public key will be imported into Lightsail @@ -66,7 +66,7 @@ The following attributes are exported in addition to the arguments listed above: * `public_key` - the public key, base64 encoded * `private_key` - the private key, base64 encoded. This is only populated when creating a new key, and when no `pgp_key` is provided -* `encrypted_private_key` – the private key material, base 64 encoded and +* `encrypted_private_key` – the private key material, base 64 encoded and encrypted with the given `pgp_key`. This is only populated when creating a new key and `pgp_key` is supplied * `encrypted_fingerprint` - The MD5 public key fingerprint for the encrypted diff --git a/website/docs/r/lightsail_static_ip_attachment.html.markdown b/website/docs/r/lightsail_static_ip_attachment.html.markdown index a28f33b5104..635fed92c91 100644 --- a/website/docs/r/lightsail_static_ip_attachment.html.markdown +++ b/website/docs/r/lightsail_static_ip_attachment.html.markdown @@ -17,7 +17,7 @@ Provides a static IP address attachment - relationship between a Lightsail stati ```hcl resource "aws_lightsail_static_ip_attachment" "test" { static_ip_name = "${aws_lightsail_static_ip.test.name}" - instance_name = "${aws_lightsail_instance.test.name}" + instance_name = "${aws_lightsail_instance.test.name}" } resource "aws_lightsail_static_ip" "test" { diff --git a/website/docs/r/load_balancer_backend_server_policy.html.markdown b/website/docs/r/load_balancer_backend_server_policy.html.markdown index 780c4d0e26a..f10a28e288a 100644 --- a/website/docs/r/load_balancer_backend_server_policy.html.markdown +++ b/website/docs/r/load_balancer_backend_server_policy.html.markdown @@ -81,7 +81,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the policy. * `load_balancer_name` - The load balancer on which the policy is defined. 
diff --git a/website/docs/r/load_balancer_listener_policy.html.markdown b/website/docs/r/load_balancer_listener_policy.html.markdown index 95d620853ec..385999ca565 100644 --- a/website/docs/r/load_balancer_listener_policy.html.markdown +++ b/website/docs/r/load_balancer_listener_policy.html.markdown @@ -69,7 +69,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the policy. * `load_balancer_name` - The load balancer on which the policy is defined. diff --git a/website/docs/r/load_balancer_policy.html.markdown b/website/docs/r/load_balancer_policy.html.markdown index e0bb3a2df90..4f3d4c1335f 100644 --- a/website/docs/r/load_balancer_policy.html.markdown +++ b/website/docs/r/load_balancer_policy.html.markdown @@ -106,7 +106,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the policy. * `policy_name` - The name of the stickiness policy. diff --git a/website/docs/r/macie_member_account_association.html.markdown b/website/docs/r/macie_member_account_association.html.markdown new file mode 100644 index 00000000000..5bd929d7078 --- /dev/null +++ b/website/docs/r/macie_member_account_association.html.markdown @@ -0,0 +1,33 @@ +--- +layout: "aws" +page_title: "AWS: aws_macie_member_account_association" +sidebar_current: "docs-aws-macie-member-account-association" +description: |- + Associates an AWS account with Amazon Macie as a member account. +--- + +# aws_macie_member_account_association + +Associates an AWS account with Amazon Macie as a member account. + +~> **NOTE:** Before using Amazon Macie for the first time it must be enabled manually. Instructions are [here](https://docs.aws.amazon.com/macie/latest/userguide/macie-setting-up.html#macie-setting-up-enable). + +## Example Usage + +```hcl +resource "aws_macie_member_account_association" "example" { + member_account_id = "123456789012" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `member_account_id` - (Required) The ID of the AWS account that you want to associate with Amazon Macie as a member account. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the association. diff --git a/website/docs/r/macie_s3_bucket_association.html.markdown b/website/docs/r/macie_s3_bucket_association.html.markdown new file mode 100644 index 00000000000..99406f18444 --- /dev/null +++ b/website/docs/r/macie_s3_bucket_association.html.markdown @@ -0,0 +1,48 @@ +--- +layout: "aws" +page_title: "AWS: aws_macie_s3_bucket_association" +sidebar_current: "docs-aws-macie-s3-bucket-association" +description: |- + Associates an S3 resource with Amazon Macie for monitoring and data classification. +--- + +# aws_macie_s3_bucket_association + +Associates an S3 resource with Amazon Macie for monitoring and data classification. + +~> **NOTE:** Before using Amazon Macie for the first time it must be enabled manually. Instructions are [here](https://docs.aws.amazon.com/macie/latest/userguide/macie-setting-up.html#macie-setting-up-enable). 
+ +## Example Usage + +```hcl +resource "aws_macie_s3_bucket_association" "example" { + bucket_name = "tf-macie-example" + prefix = "data" + + classification_type { + one_time = "FULL" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `bucket_name` - (Required) The name of the S3 bucket that you want to associate with Amazon Macie. +* `classification_type` - (Optional) The configuration of how Amazon Macie classifies the S3 objects. +* `member_account_id` - (Optional) The ID of the Amazon Macie member account whose S3 resources you want to associate with Macie. If `member_account_id` isn't specified, the action associates specified S3 resources with Macie for the current master account. +* `prefix` - (Optional) Object key prefix identifying one or more S3 objects to which the association applies. + +The `classification_type` object supports the following: + +* `continuous` - (Optional) A string value indicating that Macie perform a one-time classification of all of the existing objects in the bucket. +The only valid value is the default value, `FULL`. +* `one_time` - (Optional) A string value indicating whether or not Macie performs a one-time classification of all of the existing objects in the bucket. +Valid values are `NONE` and `FULL`. Defaults to `NONE` indicating that Macie only classifies objects that are added after the association was created. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the association. diff --git a/website/docs/r/main_route_table_assoc.html.markdown b/website/docs/r/main_route_table_assoc.html.markdown index 30955bcca8c..1242b94534b 100644 --- a/website/docs/r/main_route_table_assoc.html.markdown +++ b/website/docs/r/main_route_table_assoc.html.markdown @@ -29,7 +29,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the Route Table Association * `original_route_table_id` - Used internally, see __Notes__ below diff --git a/website/docs/r/media_store_container.html.markdown b/website/docs/r/media_store_container.html.markdown index 262b8e1e6b1..761a9ecbc34 100644 --- a/website/docs/r/media_store_container.html.markdown +++ b/website/docs/r/media_store_container.html.markdown @@ -26,7 +26,15 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The ARN of the container. * `endpoint` - The DNS endpoint of the container. + +## Import + +MediaStore Container can be imported using the MediaStore Container Name, e.g. + +``` +$ terraform import aws_media_store_container.example example +``` diff --git a/website/docs/r/media_store_container_policy.html.markdown b/website/docs/r/media_store_container_policy.html.markdown new file mode 100644 index 00000000000..bbbc9560214 --- /dev/null +++ b/website/docs/r/media_store_container_policy.html.markdown @@ -0,0 +1,58 @@ +--- +layout: "aws" +page_title: "AWS: aws_media_store_container_policy" +sidebar_current: "docs-aws-resource-media-store-container-policy" +description: |- + Provides a MediaStore Container Policy. +--- + +# aws_media_store_container_policy + +Provides a MediaStore Container Policy. 
+ +## Example Usage + +```hcl +data "aws_region" "current" {} + +data "aws_caller_identity" "current" {} + +resource "aws_media_store_container" "example" { + name = "example" +} + +resource "aws_media_store_container_policy" "example" { + container_name = "${aws_media_store_container.example.name}" + + policy = < @@ -47,7 +48,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The unique ID that Amazon MQ generates for the configuration. * `arn` - The ARN of the configuration. diff --git a/website/docs/r/nat_gateway.html.markdown b/website/docs/r/nat_gateway.html.markdown index 5c3fbd93d0f..5ee4a8a995a 100644 --- a/website/docs/r/nat_gateway.html.markdown +++ b/website/docs/r/nat_gateway.html.markdown @@ -55,7 +55,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the NAT Gateway. * `allocation_id` - The Allocation ID of the Elastic IP address for the gateway. @@ -70,4 +70,4 @@ NAT Gateways can be imported using the `id`, e.g. ``` $ terraform import aws_nat_gateway.private_gw nat-05dba92075d71c408 -``` \ No newline at end of file +``` diff --git a/website/docs/r/neptune_cluster.html.markdown b/website/docs/r/neptune_cluster.html.markdown new file mode 100644 index 00000000000..e861feeb5d4 --- /dev/null +++ b/website/docs/r/neptune_cluster.html.markdown @@ -0,0 +1,93 @@ +--- +layout: "aws" +page_title: "AWS: aws_neptune_cluster" +sidebar_current: "docs-aws-resource-neptune-cluster-x" +description: |- + Provides an Neptune Cluster Resource +--- + +# aws_neptune_cluster + +Provides an Neptune Cluster Resource. A Cluster Resource defines attributes that are +applied to the entire cluster of Neptune Cluster Instances. + +Changes to a Neptune Cluster can occur when you manually change a +parameter, such as `backup_retention_period`, and are reflected in the next maintenance +window. Because of this, Terraform may report a difference in its planning +phase because a modification has not yet taken place. You can use the +`apply_immediately` flag to instruct the service to apply the change immediately +(see documentation below). + +## Example Usage + +```hcl +resource "aws_neptune_cluster" "default" { + cluster_identifier = "neptune-cluster-demo" + engine = "neptune" + backup_retention_period = 5 + preferred_backup_window = "07:00-09:00" + skip_final_snapshot = true + iam_database_authentication_enabled = true + apply_immediately = true +} +``` + +~> **Note:** AWS Neptune does not support user name/password–based access control. +See the AWS [Docs](https://docs.aws.amazon.com/neptune/latest/userguide/limits.html) for more information. + +## Argument Reference + +The following arguments are supported: + +* `apply_immediately` - (Optional) Specifies whether any cluster modifications are applied immediately, or during the next maintenance window. Default is `false`. +* `availability_zones` - (Optional) A list of EC2 Availability Zones that instances in the Neptune cluster can be created in. +* `backup_retention_period` - (Optional) The days to retain backups for. Default `1` +* `cluster_identifier` - (Optional, Forces new resources) The cluster identifier. If omitted, Terraform will assign a random, unique identifier. 
+* `cluster_identifier_prefix` - (Optional, Forces new resource) Creates a unique cluster identifier beginning with the specified prefix. Conflicts with `cluster_identifier`. +* `engine` - (Optional) The name of the database engine to be used for this Neptune cluster. Defaults to `neptune`. +* `engine_version` - (Optional) The database engine version. +* `final_snapshot_identifier` - (Optional) The name of your final Neptune snapshot when this Neptune cluster is deleted. If omitted, no final snapshot will be made. +* `iam_roles` - (Optional) A list of ARNs for the IAM roles to associate with the Neptune Cluster. +* `iam_database_authentication_enabled` - (Optional) Specifies whether or not mappings of AWS Identity and Access Management (IAM) accounts to database accounts are enabled. +* `kms_key_arn` - (Optional) The ARN for the KMS encryption key. When specifying `kms_key_arn`, `storage_encrypted` needs to be set to true. +* `neptune_subnet_group_name` - (Optional) A Neptune subnet group to associate with this Neptune instance. +* `neptune_cluster_parameter_group_name` - (Optional) A cluster parameter group to associate with the cluster. +* `preferred_backup_window` - (Optional) The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod parameter. Time in UTC. Default: A 30-minute window selected at random from an 8-hour block of time per region. e.g. 04:00-09:00 +* `preferred_maintenance_window` - (Optional) The weekly time range during which system maintenance can occur, in (UTC) e.g. wed:04:00-wed:04:30 +* `port` - (Optional) The port on which Neptune accepts connections. Default is `8182`. +* `replication_source_identifier` - (Optional) ARN of a source Neptune cluster or Neptune instance if this Neptune cluster is to be created as a Read Replica. +* `skip_final_snapshot` - (Optional) Determines whether a final Neptune snapshot is created before the Neptune cluster is deleted. If true is specified, no Neptune snapshot is created. If false is specified, a Neptune snapshot is created before the Neptune cluster is deleted, using the value from `final_snapshot_identifier`. Default is `false`. +* `snapshot_identifier` - (Optional) Specifies whether or not to create this cluster from a snapshot. You can use either the name or ARN when specifying a Neptune cluster snapshot, or the ARN when specifying a Neptune snapshot. +* `storage_encrypted` - (Optional) Specifies whether the Neptune cluster is encrypted. The default is `false` if not specified. +* `tags` - (Optional) A mapping of tags to assign to the Neptune cluster.
+* `vpc_security_group_ids` - (Optional) List of VPC security groups to associate with the Cluster + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Neptune Cluster Amazon Resource Name (ARN) +* `cluster_resource_id` - The Neptune Cluster Resource ID +* `cluster_members` – List of Neptune Instances that are a part of this cluster +* `endpoint` - The DNS address of the Neptune instance +* `hosted_zone_id` - The Route53 Hosted Zone ID of the endpoint +* `id` - The Neptune Cluster Identifier +* `reader_endpoint` - A read-only endpoint for the Neptune cluster, automatically load-balanced across replicas +* `status` - The Neptune instance status + +## Timeouts + +`aws_neptune_cluster` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `120 minutes`) Used for Cluster creation +- `update` - (Default `120 minutes`) Used for Cluster modifications +- `delete` - (Default `120 minutes`) Used for destroying cluster. This includes any cleanup task during the destroying process. + +## Import + +`aws_neptune_cluster` can be imported by using the cluster identifier, e.g. + +``` +$ terraform import aws_neptune_cluster.example my-cluster +``` diff --git a/website/docs/r/neptune_cluster_instance.html.markdown b/website/docs/r/neptune_cluster_instance.html.markdown new file mode 100644 index 00000000000..d9f7a9faf22 --- /dev/null +++ b/website/docs/r/neptune_cluster_instance.html.markdown @@ -0,0 +1,95 @@ +--- +layout: "aws" +page_title: "AWS: aws_neptune_cluster_instance" +sidebar_current: "docs-aws-resource-neptune-cluster-instance" +description: |- + Provides a Neptune Cluster Resource Instance +--- + +# aws_neptune_cluster_instance + +A Cluster Instance Resource defines attributes that are specific to a single instance in a Neptune Cluster. + +You can simply add neptune instances and Neptune manages the replication. You can use the [count][1] +meta-parameter to make multiple instances and join them all to the same Neptune Cluster, or you may specify different Cluster Instance resources with various `instance_class` sizes. + + +## Example Usage + +The following example will create a neptune cluster with two neptune instances (one writer and one reader). + +```hcl +resource "aws_neptune_cluster" "default" { + cluster_identifier = "neptune-cluster-demo" + engine = "neptune" + backup_retention_period = 5 + preferred_backup_window = "07:00-09:00" + skip_final_snapshot = true + iam_database_authentication_enabled = true + apply_immediately = true +} + +resource "aws_neptune_cluster_instance" "example" { + count = 2 + cluster_identifier = "${aws_neptune_cluster.default.id}" + engine = "neptune" + instance_class = "db.r4.large" + apply_immediately = true +} +``` + +## Argument Reference + +The following arguments are supported: + +* `apply_immediately` - (Optional) Specifies whether any instance modifications + are applied immediately, or during the next maintenance window. Default is `false`. +* `auto_minor_version_upgrade` - (Optional) Indicates that minor engine upgrades will be applied automatically to the instance during the maintenance window. Default is `true`. +* `availability_zone` - (Optional) The EC2 Availability Zone that the neptune instance is created in. +* `cluster_identifier` - (Required) The identifier of the [`aws_neptune_cluster`](/docs/providers/aws/r/neptune_cluster.html) in which to launch this instance.
+* `engine` - (Optional) The name of the database engine to be used for the neptune instance. Defaults to `neptune`. Valid Values: `neptune`. +* `engine_version` - (Optional) The neptune engine version. +* `identifier` - (Optional, Forces new resource) The identifier for the neptune instance. If omitted, Terraform will assign a random, unique identifier. +* `identifier_prefix` - (Optional, Forces new resource) Creates a unique identifier beginning with the specified prefix. Conflicts with `identifier`. +* `instance_class` - (Required) The instance class to use. +* `neptune_subnet_group_name` - (Required if `publicly_accessible = false`, Optional otherwise) A subnet group to associate with this neptune instance. **NOTE:** This must match the `neptune_subnet_group_name` of the attached [`aws_neptune_cluster`](/docs/providers/aws/r/neptune_cluster.html). +* `neptune_parameter_group_name` - (Optional) The name of the neptune parameter group to associate with this instance. +* `port` - (Optional) The port on which the DB accepts connections. Defaults to `8182`. +* `preferred_backup_window` - (Optional) The daily time range during which automated backups are created if automated backups are enabled. Eg: "04:00-09:00" +* `preferred_maintenance_window` - (Optional) The window to perform maintenance in. + Syntax: "ddd:hh24:mi-ddd:hh24:mi". Eg: "Mon:00:00-Mon:03:00". +* `promotion_tier` - (Optional) Default 0. Failover Priority setting on instance level. A reader instance with a lower promotion tier has a higher priority to be promoted to writer. +* `publicly_accessible` - (Optional) Bool to control if instance is publicly accessible. Default is `false`. +* `tags` - (Optional) A mapping of tags to assign to the instance. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `address` - The hostname of the instance. See also `endpoint` and `port`. +* `arn` - Amazon Resource Name (ARN) of neptune instance +* `dbi_resource_id` - The region-unique, immutable identifier for the neptune instance. +* `endpoint` - The connection endpoint in `address:port` format. +* `id` - The Instance identifier +* `kms_key_arn` - The ARN for the KMS encryption key if one is set for the neptune cluster. +* `storage_encrypted` - Specifies whether the neptune cluster is encrypted. +* `writer` – Boolean indicating if this instance is writable. `False` indicates this instance is a read replica. + +[1]: /docs/configuration/resources.html#count + +## Timeouts + +`aws_neptune_cluster_instance` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `90 minutes`) How long to wait for creating instances to become available. +- `update` - (Default `90 minutes`) How long to wait for updating instances to complete updates. +- `delete` - (Default `90 minutes`) How long to wait for deleting instances to become fully deleted. + +## Import + +`aws_neptune_cluster_instance` can be imported by using the instance identifier, e.g.
+ +``` +$ terraform import aws_neptune_cluster_instance.example my-instance +``` diff --git a/website/docs/r/neptune_cluster_parameter_group.html.markdown b/website/docs/r/neptune_cluster_parameter_group.html.markdown new file mode 100644 index 00000000000..ae75772176a --- /dev/null +++ b/website/docs/r/neptune_cluster_parameter_group.html.markdown @@ -0,0 +1,59 @@ +--- +layout: "aws" +page_title: "AWS: aws_neptune_cluster_parameter_group" +sidebar_current: "docs-aws-resource-aws-neptune-cluster-parameter-group" +description: |- + Manages a Neptune Cluster Parameter Group +--- + +# aws_neptune_cluster_parameter_group + +Manages a Neptune Cluster Parameter Group + +## Example Usage + +```hcl +resource "aws_neptune_cluster_parameter_group" "example" { + family = "neptune1" + name = "example" + description = "neptune cluster parameter group" + + parameter { + name = "neptune_enable_audit_log" + value = 1 + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Optional, Forces new resource) The name of the neptune cluster parameter group. If omitted, Terraform will assign a random, unique name. +* `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. +* `family` - (Required) The family of the neptune cluster parameter group. +* `description` - (Optional) The description of the neptune cluster parameter group. Defaults to "Managed by Terraform". +* `parameter` - (Optional) A list of neptune parameters to apply. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +Parameter blocks support the following: + +* `name` - (Required) The name of the neptune parameter. +* `value` - (Required) The value of the neptune parameter. +* `apply_method` - (Optional) Valid values are `immediate` and `pending-reboot`. Defaults to `pending-reboot`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The neptune cluster parameter group name. +* `arn` - The ARN of the neptune cluster parameter group. + + +## Import + +Neptune Cluster Parameter Groups can be imported using the `name`, e.g. + +``` +$ terraform import aws_neptune_cluster_parameter_group.cluster_pg production-pg-1 +``` diff --git a/website/docs/r/neptune_cluster_snapshot.html.markdown b/website/docs/r/neptune_cluster_snapshot.html.markdown new file mode 100644 index 00000000000..0018fe06a04 --- /dev/null +++ b/website/docs/r/neptune_cluster_snapshot.html.markdown @@ -0,0 +1,58 @@ +--- +layout: "aws" +page_title: "AWS: aws_neptune_cluster_snapshot" +sidebar_current: "docs-aws-resource-neptune-cluster-snapshot" +description: |- + Manages a Neptune database cluster snapshot. +--- + +# aws_neptune_cluster_snapshot + +Manages a Neptune database cluster snapshot. + +## Example Usage + +```hcl +resource "aws_neptune_cluster_snapshot" "example" { + db_cluster_identifier = "${aws_neptune_cluster.example.id}" + db_cluster_snapshot_identifier = "resourcetestsnapshot1234" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `db_cluster_identifier` - (Required) The DB Cluster Identifier from which to take the snapshot. +* `db_cluster_snapshot_identifier` - (Required) The Identifier for the snapshot. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `allocated_storage` - Specifies the allocated storage size in gigabytes (GB). 
+* `availability_zones` - List of EC2 Availability Zones that instances in the DB cluster snapshot can be restored in. +* `db_cluster_snapshot_arn` - The Amazon Resource Name (ARN) for the DB Cluster Snapshot. +* `engine` - Specifies the name of the database engine. +* `engine_version` - Version of the database engine for this DB cluster snapshot. +* `kms_key_id` - If storage_encrypted is true, the AWS KMS key identifier for the encrypted DB cluster snapshot. +* `license_model` - License model information for the restored DB cluster. +* `port` - Port that the DB cluster was listening on at the time of the snapshot. +* `source_db_cluster_snapshot_identifier` - The DB Cluster Snapshot Arn that the DB Cluster Snapshot was copied from. It only has value in case of cross customer or cross region copy. +* `storage_encrypted` - Specifies whether the DB cluster snapshot is encrypted. +* `status` - The status of this DB Cluster Snapshot. +* `vpc_id` - The VPC ID associated with the DB cluster snapshot. + +## Timeouts + +`aws_neptune_cluster_snapshot` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +* `create` - (Default `20m`) How long to wait for the snapshot to be available. + +## Import + +`aws_neptune_cluster_snapshot` can be imported by using the cluster snapshot identifier, e.g. + +``` +$ terraform import aws_neptune_cluster_snapshot.example my-cluster-snapshot +``` diff --git a/website/docs/r/neptune_event_subscription.html.markdown b/website/docs/r/neptune_event_subscription.html.markdown new file mode 100644 index 00000000000..57fe1dae2bb --- /dev/null +++ b/website/docs/r/neptune_event_subscription.html.markdown @@ -0,0 +1,100 @@ +--- +layout: "aws" +page_title: "AWS: aws_neptune_event_subscription" +sidebar_current: "docs-aws-resource-neptune-event-subscription" +description: |- + Provides a Neptune event subscription resource. +--- + +# aws_neptune_event_subscription + +## Example Usage + +```hcl +resource "aws_neptune_cluster" "default" { + cluster_identifier = "neptune-cluster-demo" + engine = "neptune" + backup_retention_period = 5 + preferred_backup_window = "07:00-09:00" + skip_final_snapshot = true + iam_database_authentication_enabled = "true" + apply_immediately = "true" +} + +resource "aws_neptune_cluster_instance" "example" { + count = 1 + cluster_identifier = "${aws_neptune_cluster.default.id}" + engine = "neptune" + instance_class = "db.r4.large" + apply_immediately = "true" +} + +resource "aws_sns_topic" "default" { + name = "neptune-events" +} + +resource "aws_neptune_event_subscription" "default" { + name = "neptune-event-sub" + sns_topic_arn = "${aws_sns_topic.default.arn}" + + source_type = "db-instance" + source_ids = ["${aws_neptune_cluster_instance.example.id}"] + + event_categories = [ + "maintenance", + "availability", + "creation", + "backup", + "restoration", + "recovery", + "deletion", + "failover", + "failure", + "notification", + "configuration change", + "read replica", + ] + + tags { + "env" = "test" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `enabled` - (Optional) A boolean flag to enable/disable the subscription. Defaults to true. +* `event_categories` - (Optional) A list of event categories for a `source_type` that you want to subscribe to. Run `aws neptune describe-event-categories` to find all the event categories. +* `name` - (Optional) The name of the Neptune event subscription. By default generated by Terraform. 
+* `name_prefix` - (Optional) The name of the Neptune event subscription. Conflicts with `name`. +* `sns_topic_arn` - (Required) The ARN of the SNS topic to send events to. +* `source_ids` - (Optional) A list of identifiers of the event sources for which events will be returned. If not specified, then all sources are included in the response. If specified, a `source_type` must also be specified. +* `source_type` - (Optional) The type of source that will be generating the events. Valid options are `db-instance`, `db-security-group`, `db-parameter-group`, `db-snapshot`, `db-cluster` or `db-cluster-snapshot`. If not set, all sources will be subscribed to. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +## Attributes + +The following additional attributes are provided: + +* `id` - The name of the Neptune event notification subscription. +* `arn` - The Amazon Resource Name of the Neptune event notification subscription. +* `customer_aws_id` - The AWS customer account associated with the Neptune event notification subscription. + +## Timeouts + +`aws_neptune_event_subscription` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) +configuration options: + +- `create` - (Default `40m`) How long to wait for creating event subscription to become available. +- `delete` - (Default `40m`) How long to wait for deleting event subscription to become fully deleted. +- `update` - (Default `40m`) How long to wait for updating event subscription to complete updates. + +## Import + +`aws_neptune_event_subscription` can be imported by using the event subscription name, e.g. + +``` +$ terraform import aws_neptune_event_subscription.example my-event-subscription +``` diff --git a/website/docs/r/neptune_parameter_group.html.markdown b/website/docs/r/neptune_parameter_group.html.markdown new file mode 100644 index 00000000000..4d0afe27062 --- /dev/null +++ b/website/docs/r/neptune_parameter_group.html.markdown @@ -0,0 +1,57 @@ +--- +layout: "aws" +page_title: "AWS: aws_neptune_parameter_group" +sidebar_current: "docs-aws-resource-aws-neptune-parameter-group" +description: |- + Manages a Neptune Parameter Group +--- + +# aws_neptune_parameter_group + +Manages a Neptune Parameter Group + +## Example Usage + +```hcl +resource "aws_neptune_parameter_group" "example" { + family = "neptune1" + name = "example" + + parameter { + name = "neptune_query_timeout" + value = "25" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required, Forces new resource) The name of the Neptune parameter group. +* `family` - (Required) The family of the Neptune parameter group. +* `description` - (Optional) The description of the Neptune parameter group. Defaults to "Managed by Terraform". +* `parameter` - (Optional) A list of Neptune parameters to apply. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +Parameter blocks support the following: + +* `name` - (Required) The name of the Neptune parameter. +* `value` - (Required) The value of the Neptune parameter. +* `apply_method` - (Optional) The apply method of the Neptune parameter. Valid values are `immediate` and `pending-reboot`. Defaults to `pending-reboot`. + + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The Neptune parameter group name. +* `arn` - The Neptune parameter group Amazon Resource Name (ARN). + +## Import + +Neptune Parameter Groups can be imported using the `name`, e.g.
+ +``` +$ terraform import aws_neptune_parameter_group.some_pg some-pg +``` diff --git a/website/docs/r/neptune_subnet_group.html.markdown b/website/docs/r/neptune_subnet_group.html.markdown new file mode 100644 index 00000000000..69aea95f953 --- /dev/null +++ b/website/docs/r/neptune_subnet_group.html.markdown @@ -0,0 +1,50 @@ +--- +layout: "aws" +page_title: "AWS: aws_neptune_subnet_group" +sidebar_current: "docs-aws-resource-neptune-subnet-group" +description: |- + Provides a Neptune subnet group resource. +--- + +# aws_neptune_subnet_group + +Provides a Neptune subnet group resource. + +## Example Usage + +```hcl +resource "aws_neptune_subnet_group" "default" { + name = "main" + subnet_ids = ["${aws_subnet.frontend.id}", "${aws_subnet.backend.id}"] + + tags { + Name = "My neptune subnet group" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Optional, Forces new resource) The name of the neptune subnet group. If omitted, Terraform will assign a random, unique name. +* `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. +* `description` - (Optional) The description of the neptune subnet group. Defaults to "Managed by Terraform". +* `subnet_ids` - (Required) A list of VPC subnet IDs. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The neptune subnet group name. +* `arn` - The ARN of the neptune subnet group. + + +## Import + +Neptune Subnet groups can be imported using the `name`, e.g. + +``` +$ terraform import aws_neptune_subnet_group.default production-subnet-group +``` diff --git a/website/docs/r/network_acl.html.markdown b/website/docs/r/network_acl.html.markdown index 68c95c32af4..4a9c45e87dc 100644 --- a/website/docs/r/network_acl.html.markdown +++ b/website/docs/r/network_acl.html.markdown @@ -73,11 +73,11 @@ valid network mask. * `icmp_type` - (Optional) The ICMP type to be used. Default 0. * `icmp_code` - (Optional) The ICMP type code to be used. Default 0. -~> Note: For more information on ICMP types and codes, see here: http://www.nthelp.com/icmp.html +~> Note: For more information on ICMP types and codes, see here: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the network ACL diff --git a/website/docs/r/network_acl_rule.html.markdown b/website/docs/r/network_acl_rule.html.markdown index cb6223b6d23..9b5d55713ef 100644 --- a/website/docs/r/network_acl_rule.html.markdown +++ b/website/docs/r/network_acl_rule.html.markdown @@ -57,10 +57,10 @@ The following arguments are supported: ~> **NOTE:** If the value of `icmp_type` is `-1` (which results in a wildcard ICMP type), the `icmp_code` must also be set to `-1` (wildcard ICMP code).
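+For example, a rule that matches all ICMP traffic must set both fields to the wildcard value. The following is an illustrative sketch only; `aws_network_acl.example` is a placeholder for an existing network ACL in your configuration:
+
+```hcl
+# Placeholder names; allows all ICMP traffic by using the wildcard
+# ICMP type (-1), which requires the ICMP code to be -1 as well.
+resource "aws_network_acl_rule" "all_icmp" {
+  network_acl_id = "${aws_network_acl.example.id}"
+  rule_number    = 100
+  egress         = false
+  protocol       = "icmp"
+  rule_action    = "allow"
+  cidr_block     = "0.0.0.0/0"
+  icmp_type      = -1
+  icmp_code      = -1
+}
+```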
-~> Note: For more information on ICMP types and codes, see here: http://www.nthelp.com/icmp.html +~> Note: For more information on ICMP types and codes, see here: https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the network ACL Rule diff --git a/website/docs/r/network_interface.markdown b/website/docs/r/network_interface.markdown index efe2fc229f6..37e7dbe26bd 100644 --- a/website/docs/r/network_interface.markdown +++ b/website/docs/r/network_interface.markdown @@ -45,7 +45,7 @@ The `attachment` block supports: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `subnet_id` - Subnet ID the ENI is in. * `description` - A description for the network interface. diff --git a/website/docs/r/network_interface_attachment.html.markdown b/website/docs/r/network_interface_attachment.html.markdown index 5975d32efe4..b92a9cf954b 100644 --- a/website/docs/r/network_interface_attachment.html.markdown +++ b/website/docs/r/network_interface_attachment.html.markdown @@ -14,9 +14,9 @@ Attach an Elastic network interface (ENI) resource with EC2 instance. ```hcl resource "aws_network_interface_attachment" "test" { - instance_id = "${aws_instance.test.id}" - network_interface_id = "${aws_network_interface.test.id}" - device_index = 0 + instance_id = "${aws_instance.test.id}" + network_interface_id = "${aws_network_interface.test.id}" + device_index = 0 } ``` @@ -30,7 +30,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `instance_id` - Instance ID. * `network_interface_id` - Network interface ID. diff --git a/website/docs/r/opsworks_application.html.markdown b/website/docs/r/opsworks_application.html.markdown index 1a5f12ba66b..1f6d0cb35b0 100644 --- a/website/docs/r/opsworks_application.html.markdown +++ b/website/docs/r/opsworks_application.html.markdown @@ -16,7 +16,7 @@ Provides an OpsWorks application resource. resource "aws_opsworks_application" "foo-app" { name = "foobar application" short_name = "foobar" - stack_id = "${aws_opsworks_stack.stack.id}" + stack_id = "${aws_opsworks_stack.main.id}" type = "rails" description = "This is a Rails application" @@ -95,6 +95,6 @@ A `ssl_configuration` block supports the following arguments (can only be define ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the application. diff --git a/website/docs/r/opsworks_custom_layer.html.markdown b/website/docs/r/opsworks_custom_layer.html.markdown index b8a4563106c..6efd12038f5 100644 --- a/website/docs/r/opsworks_custom_layer.html.markdown +++ b/website/docs/r/opsworks_custom_layer.html.markdown @@ -62,7 +62,7 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. @@ -72,4 +72,4 @@ OpsWorks Custom Layers can be imported using the `id`, e.g. 
``` $ terraform import aws_opsworks_custom_layer.bar 00000000-0000-0000-0000-000000000000 -``` \ No newline at end of file +``` diff --git a/website/docs/r/opsworks_ganglia_layer.html.markdown b/website/docs/r/opsworks_ganglia_layer.html.markdown index 41f7f7b230c..fe6d127b2e4 100644 --- a/website/docs/r/opsworks_ganglia_layer.html.markdown +++ b/website/docs/r/opsworks_ganglia_layer.html.markdown @@ -63,6 +63,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. diff --git a/website/docs/r/opsworks_haproxy_layer.html.markdown b/website/docs/r/opsworks_haproxy_layer.html.markdown index 333b0acc2b8..1b07e55a4a3 100644 --- a/website/docs/r/opsworks_haproxy_layer.html.markdown +++ b/website/docs/r/opsworks_haproxy_layer.html.markdown @@ -66,6 +66,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. diff --git a/website/docs/r/opsworks_instance.html.markdown b/website/docs/r/opsworks_instance.html.markdown index 731c65fd692..48beec70fa4 100644 --- a/website/docs/r/opsworks_instance.html.markdown +++ b/website/docs/r/opsworks_instance.html.markdown @@ -14,7 +14,7 @@ Provides an OpsWorks instance resource. ```hcl resource "aws_opsworks_instance" "my-instance" { - stack_id = "${aws_opsworks_stack.my-stack.id}" + stack_id = "${aws_opsworks_stack.main.id}" layer_ids = [ "${aws_opsworks_custom_layer.my-layer.id}", @@ -115,11 +115,12 @@ using the [`taint` command](/docs/commands/taint.html). ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the OpsWorks instance. * `agent_version` - The AWS OpsWorks agent version. * `availability_zone` - The availability zone of the instance. +* `ec2_instance_id` - EC2 instance ID * `ssh_key_name` - The key name of the instance * `public_dns` - The public DNS name assigned to the instance. For EC2-VPC, this is only available if you've enabled DNS hostnames for your VPC diff --git a/website/docs/r/opsworks_java_app_layer.html.markdown b/website/docs/r/opsworks_java_app_layer.html.markdown index a087bb3f2bc..a0b4db72bb9 100644 --- a/website/docs/r/opsworks_java_app_layer.html.markdown +++ b/website/docs/r/opsworks_java_app_layer.html.markdown @@ -64,6 +64,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. diff --git a/website/docs/r/opsworks_memcached_layer.html.markdown b/website/docs/r/opsworks_memcached_layer.html.markdown index 7a4a21d475d..6fcdb91763b 100644 --- a/website/docs/r/opsworks_memcached_layer.html.markdown +++ b/website/docs/r/opsworks_memcached_layer.html.markdown @@ -60,6 +60,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. 
diff --git a/website/docs/r/opsworks_mysql_layer.html.markdown b/website/docs/r/opsworks_mysql_layer.html.markdown index 71e4b34c5f9..7b2142c5a13 100644 --- a/website/docs/r/opsworks_mysql_layer.html.markdown +++ b/website/docs/r/opsworks_mysql_layer.html.markdown @@ -64,6 +64,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. diff --git a/website/docs/r/opsworks_nodejs_app_layer.html.markdown b/website/docs/r/opsworks_nodejs_app_layer.html.markdown index 11ea509bbf3..1a71955b13e 100644 --- a/website/docs/r/opsworks_nodejs_app_layer.html.markdown +++ b/website/docs/r/opsworks_nodejs_app_layer.html.markdown @@ -60,6 +60,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. diff --git a/website/docs/r/opsworks_permission.html.markdown b/website/docs/r/opsworks_permission.html.markdown index 01c350b8ff9..66f8680123d 100644 --- a/website/docs/r/opsworks_permission.html.markdown +++ b/website/docs/r/opsworks_permission.html.markdown @@ -34,6 +34,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The computed id of the permission. Please note that this is only used internally to identify the permission. This value is not used in aws. diff --git a/website/docs/r/opsworks_php_app_layer.html.markdown b/website/docs/r/opsworks_php_app_layer.html.markdown index 5d7b56764a3..493fe2dbfd1 100644 --- a/website/docs/r/opsworks_php_app_layer.html.markdown +++ b/website/docs/r/opsworks_php_app_layer.html.markdown @@ -59,6 +59,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. diff --git a/website/docs/r/opsworks_rails_app_layer.html.markdown b/website/docs/r/opsworks_rails_app_layer.html.markdown index fb88aed81ab..ae43611c699 100644 --- a/website/docs/r/opsworks_rails_app_layer.html.markdown +++ b/website/docs/r/opsworks_rails_app_layer.html.markdown @@ -65,6 +65,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. diff --git a/website/docs/r/opsworks_rds_db_instance.html.markdown b/website/docs/r/opsworks_rds_db_instance.html.markdown index 2f6d65d94c0..1d88835946c 100644 --- a/website/docs/r/opsworks_rds_db_instance.html.markdown +++ b/website/docs/r/opsworks_rds_db_instance.html.markdown @@ -35,6 +35,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The computed id. Please note that this is only used internally to identify the stack <-> instance relation. This value is not used in aws. 
diff --git a/website/docs/r/opsworks_stack.html.markdown b/website/docs/r/opsworks_stack.html.markdown index eb6cc15c58b..67561bea840 100644 --- a/website/docs/r/opsworks_stack.html.markdown +++ b/website/docs/r/opsworks_stack.html.markdown @@ -79,7 +79,7 @@ The `custom_cookbooks_source` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the stack. diff --git a/website/docs/r/opsworks_static_web_layer.html.markdown b/website/docs/r/opsworks_static_web_layer.html.markdown index ed6569d460b..a6a5589b0ec 100644 --- a/website/docs/r/opsworks_static_web_layer.html.markdown +++ b/website/docs/r/opsworks_static_web_layer.html.markdown @@ -58,6 +58,6 @@ An `ebs_volume` block supports the following arguments: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The id of the layer. diff --git a/website/docs/r/opsworks_user_profile.html.markdown b/website/docs/r/opsworks_user_profile.html.markdown index 331ccf54a4a..34506e7401b 100644 --- a/website/docs/r/opsworks_user_profile.html.markdown +++ b/website/docs/r/opsworks_user_profile.html.markdown @@ -30,6 +30,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - Same value as `user_arn` diff --git a/website/docs/r/organizations_account.html.markdown b/website/docs/r/organizations_account.html.markdown new file mode 100644 index 00000000000..fd150903d69 --- /dev/null +++ b/website/docs/r/organizations_account.html.markdown @@ -0,0 +1,49 @@ +--- +layout: "aws" +page_title: "AWS: aws_organizations_account" +sidebar_current: "docs-aws-resource-organizations-account" +description: |- + Provides a resource to create a member account in the current AWS Organization. +--- + +# aws_organizations_account + +Provides a resource to create a member account in the current organization. + +~> **Note:** Account management must be done from the organization's master account. + +!> **WARNING:** Deleting this Terraform resource will only remove an AWS account from an organization. Terraform will not close the account. The member account must be prepared to be a standalone account beforehand. See the [AWS Organizations documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html) for more information. + +## Example Usage: + +```hcl +resource "aws_organizations_account" "account" { + name = "my_new_account" + email = "john@doe.org" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) A friendly name for the member account. +* `email` - (Required) The email address of the owner to assign to the new member account. This email address must not already be associated with another AWS account. +* `iam_user_access_to_billing` - (Optional) If set to `ALLOW`, the new account enables IAM users to access account billing information if they have the required permissions. If set to `DENY`, then only the root user of the new account can access account billing information. +* `role_name` - (Optional) The name of an IAM role that Organizations automatically preconfigures in the new member account. 
This role trusts the master account, allowing users in the master account to assume the role, as permitted by the master account administrator. The role has administrator permissions in the new member account. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN for this account. + +* `id` - The AWS account id + +## Import + +The AWS member account can be imported by using the `account_id`, e.g. + +``` +$ terraform import aws_organizations_account.my_org 111111111111 +``` diff --git a/website/docs/r/organizations_organization.html.markdown b/website/docs/r/organizations_organization.html.markdown index bf7e537f4cd..ce9cbeb9937 100644 --- a/website/docs/r/organizations_organization.html.markdown +++ b/website/docs/r/organizations_organization.html.markdown @@ -26,7 +26,7 @@ The following arguments are supported: ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - ARN of the organization * `id` - Identifier of the organization diff --git a/website/docs/r/organizations_policy.html.markdown b/website/docs/r/organizations_policy.html.markdown new file mode 100644 index 00000000000..9adde37ba14 --- /dev/null +++ b/website/docs/r/organizations_policy.html.markdown @@ -0,0 +1,52 @@ +--- +layout: "aws" +page_title: "AWS: aws_organizations_policy" +sidebar_current: "docs-aws-resource-organizations-policy" +description: |- + Provides a resource to manage an AWS Organizations policy. +--- + +# aws_organizations_policy + +Provides a resource to manage an [AWS Organizations policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html). + +## Example Usage + +```hcl +resource "aws_organizations_policy" "example" { + name = "example" + + content = < **Note:** All arguments including the Client ID and Client Secret will be stored in the raw state as plain-text. +[Read more about sensitive data in state](/docs/state/sensitive-data.html). + + +## Example Usage + +```hcl +resource "aws_pinpoint_app" "app" {} + +resource "aws_pinpoint_adm_channel" "channel" { + application_id = "${aws_pinpoint_app.app.application_id}" + client_id = "" + client_secret = "" + enabled = true +} +``` + + +## Argument Reference + +The following arguments are supported: + +* `application_id` - (Required) The application ID. +* `client_id` - (Required) Client ID (part of OAuth Credentials) obtained via Amazon Developer Account. +* `client_secret` - (Required) Client Secret (part of OAuth Credentials) obtained via Amazon Developer Account. +* `enabled` - (Optional) Specifies whether to enable the channel. Defaults to `true`. + +## Import + +Pinpoint ADM Channel can be imported using the `application-id`, e.g. + +``` +$ terraform import aws_pinpoint_adm_channel.channel application-id +``` diff --git a/website/docs/r/pinpoint_apns_channel.markdown b/website/docs/r/pinpoint_apns_channel.markdown new file mode 100644 index 00000000000..d27ed7651bf --- /dev/null +++ b/website/docs/r/pinpoint_apns_channel.markdown @@ -0,0 +1,59 @@ +--- +layout: "aws" +page_title: "AWS: aws_pinpoint_apns_channel" +sidebar_current: "docs-aws-resource-pinpoint-apns-channel" +description: |- + Provides a Pinpoint APNs Channel resource. +--- + +# aws_pinpoint_apns_channel + +Provides a Pinpoint APNs Channel resource. + +~> **Note:** All arguments, including certificates and tokens, will be stored in the raw state as plain-text. 
+[Read more about sensitive data in state](/docs/state/sensitive-data.html). + +## Example Usage + +```hcl +resource "aws_pinpoint_apns_channel" "apns" { + application_id = "${aws_pinpoint_app.app.application_id}" + + certificate = "${file("./certificate.pem")}" + private_key = "${file("./private_key.key")}" +} + +resource "aws_pinpoint_app" "app" {} +``` + + +## Argument Reference + +The following arguments are supported: + +* `application_id` - (Required) The application ID. +* `enabled` - (Optional) Whether the channel is enabled or disabled. Defaults to `true`. +* `default_authentication_method` - (Optional) The default authentication method used for APNs. + __NOTE__: Amazon Pinpoint uses this default for every APNs push notification that you send using the console. + You can override the default when you send a message programmatically using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. + If your default authentication type fails, Amazon Pinpoint doesn't attempt to use the other authentication type. + +One of the following sets of credentials is also required. + +If you choose to use __Certificate credentials__ you will have to provide: +* `certificate` - (Required) The pem encoded TLS Certificate from Apple. +* `private_key` - (Required) The Certificate Private Key file (ie. `.key` file). + +If you choose to use __Key credentials__ you will have to provide: +* `bundle_id` - (Required) The ID assigned to your iOS app. To find this value, choose Certificates, IDs & Profiles, choose App IDs in the Identifiers section, and choose your app. +* `team_id` - (Required) The ID assigned to your Apple developer account team. This value is provided on the Membership page. +* `token_key` - (Required) The `.p8` file that you download from your Apple developer account when you create an authentication key. +* `token_key_id` - (Required) The ID assigned to your signing key. To find this value, choose Certificates, IDs & Profiles, and choose your key in the Keys section. + +## Import + +Pinpoint APNs Channel can be imported using the `application-id`, e.g. + +``` +$ terraform import aws_pinpoint_apns_channel.apns application-id +``` diff --git a/website/docs/r/pinpoint_apns_sandbox_channel.markdown b/website/docs/r/pinpoint_apns_sandbox_channel.markdown new file mode 100644 index 00000000000..ea671bacc18 --- /dev/null +++ b/website/docs/r/pinpoint_apns_sandbox_channel.markdown @@ -0,0 +1,59 @@ +--- +layout: "aws" +page_title: "AWS: aws_pinpoint_apns_sandbox_channel" +sidebar_current: "docs-aws-resource-pinpoint-apns_sandbox-channel" +description: |- + Provides a Pinpoint APNs Sandbox Channel resource. +--- + +# aws_pinpoint_apns_sandbox_channel + +Provides a Pinpoint APNs Sandbox Channel resource. + +~> **Note:** All arguments, including certificates and tokens, will be stored in the raw state as plain-text. +[Read more about sensitive data in state](/docs/state/sensitive-data.html). + +## Example Usage + +```hcl +resource "aws_pinpoint_apns_sandbox_channel" "apns_sandbox" { + application_id = "${aws_pinpoint_app.app.application_id}" + + certificate = "${file("./certificate.pem")}" + private_key = "${file("./private_key.key")}" +} + +resource "aws_pinpoint_app" "app" {} +``` + + +## Argument Reference + +The following arguments are supported: + +* `application_id` - (Required) The application ID. +* `enabled` - (Optional) Whether the channel is enabled or disabled. Defaults to `true`. +* `default_authentication_method` - (Optional) The default authentication method used for APNs Sandbox. 
+ __NOTE__: Amazon Pinpoint uses this default for every APNs push notification that you send using the console. + You can override the default when you send a message programmatically using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. + If your default authentication type fails, Amazon Pinpoint doesn't attempt to use the other authentication type. + +One of the following sets of credentials is also required. + +If you choose to use __Certificate credentials__ you will have to provide: +* `certificate` - (Required) The pem encoded TLS Certificate from Apple. +* `private_key` - (Required) The Certificate Private Key file (ie. `.key` file). + +If you choose to use __Key credentials__ you will have to provide: +* `bundle_id` - (Required) The ID assigned to your iOS app. To find this value, choose Certificates, IDs & Profiles, choose App IDs in the Identifiers section, and choose your app. +* `team_id` - (Required) The ID assigned to your Apple developer account team. This value is provided on the Membership page. +* `token_key` - (Required) The `.p8` file that you download from your Apple developer account when you create an authentication key. +* `token_key_id` - (Required) The ID assigned to your signing key. To find this value, choose Certificates, IDs & Profiles, and choose your key in the Keys section. + +## Import + +Pinpoint APNs Sandbox Channel can be imported using the `application-id`, e.g. + +``` +$ terraform import aws_pinpoint_apns_sandbox_channel.apns_sandbox application-id +``` diff --git a/website/docs/r/pinpoint_apns_voip_channel.markdown b/website/docs/r/pinpoint_apns_voip_channel.markdown new file mode 100644 index 00000000000..51160ceae21 --- /dev/null +++ b/website/docs/r/pinpoint_apns_voip_channel.markdown @@ -0,0 +1,59 @@ +--- +layout: "aws" +page_title: "AWS: aws_pinpoint_apns_voip_channel" +sidebar_current: "docs-aws-resource-pinpoint-apns-voip-channel" +description: |- + Provides a Pinpoint APNs VoIP Channel resource. +--- + +# aws_pinpoint_apns_voip_channel + +Provides a Pinpoint APNs VoIP Channel resource. + +~> **Note:** All arguments, including certificates and tokens, will be stored in the raw state as plain-text. +[Read more about sensitive data in state](/docs/state/sensitive-data.html). + +## Example Usage + +```hcl +resource "aws_pinpoint_apns_voip_channel" "apns_voip" { + application_id = "${aws_pinpoint_app.app.application_id}" + + certificate = "${file("./certificate.pem")}" + private_key = "${file("./private_key.key")}" +} + +resource "aws_pinpoint_app" "app" {} +``` + + +## Argument Reference + +The following arguments are supported: + +* `application_id` - (Required) The application ID. +* `enabled` - (Optional) Whether the channel is enabled or disabled. Defaults to `true`. +* `default_authentication_method` - (Optional) The default authentication method used for APNs. + __NOTE__: Amazon Pinpoint uses this default for every APNs push notification that you send using the console. + You can override the default when you send a message programmatically using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. + If your default authentication type fails, Amazon Pinpoint doesn't attempt to use the other authentication type. + +One of the following sets of credentials is also required. + +If you choose to use __Certificate credentials__ you will have to provide: +* `certificate` - (Required) The pem encoded TLS Certificate from Apple. +* `private_key` - (Required) The Certificate Private Key file (ie. `.key` file). 
+ +If you choose to use __Key credentials__ you will have to provide: +* `bundle_id` - (Required) The ID assigned to your iOS app. To find this value, choose Certificates, IDs & Profiles, choose App IDs in the Identifiers section, and choose your app. +* `team_id` - (Required) The ID assigned to your Apple developer account team. This value is provided on the Membership page. +* `token_key` - (Required) The `.p8` file that you download from your Apple developer account when you create an authentication key. +* `token_key_id` - (Required) The ID assigned to your signing key. To find this value, choose Certificates, IDs & Profiles, and choose your key in the Keys section. + +## Import + +Pinpoint APNs VoIP Channel can be imported using the `application-id`, e.g. + +``` +$ terraform import aws_pinpoint_apns_voip_channel.apns_voip application-id +``` diff --git a/website/docs/r/pinpoint_apns_voip_sandbox_channel.markdown b/website/docs/r/pinpoint_apns_voip_sandbox_channel.markdown new file mode 100644 index 00000000000..7733ce5b701 --- /dev/null +++ b/website/docs/r/pinpoint_apns_voip_sandbox_channel.markdown @@ -0,0 +1,59 @@ +--- +layout: "aws" +page_title: "AWS: aws_pinpoint_apns_voip_sandbox_channel" +sidebar_current: "docs-aws-resource-pinpoint-apns-voip-sandbox-channel" +description: |- + Provides a Pinpoint APNs VoIP Sandbox Channel resource. +--- + +# aws_pinpoint_apns_voip_sandbox_channel + +Provides a Pinpoint APNs VoIP Sandbox Channel resource. + +~> **Note:** All arguments, including certificates and tokens, will be stored in the raw state as plain-text. +[Read more about sensitive data in state](/docs/state/sensitive-data.html). + +## Example Usage + +```hcl +resource "aws_pinpoint_apns_voip_sandbox_channel" "apns_voip_sandbox" { + application_id = "${aws_pinpoint_app.app.application_id}" + + certificate = "${file("./certificate.pem")}" + private_key = "${file("./private_key.key")}" +} + +resource "aws_pinpoint_app" "app" {} +``` + + +## Argument Reference + +The following arguments are supported: + +* `application_id` - (Required) The application ID. +* `enabled` - (Optional) Whether the channel is enabled or disabled. Defaults to `true`. +* `default_authentication_method` - (Optional) The default authentication method used for APNs. + __NOTE__: Amazon Pinpoint uses this default for every APNs push notification that you send using the console. + You can override the default when you send a message programmatically using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. + If your default authentication type fails, Amazon Pinpoint doesn't attempt to use the other authentication type. + +One of the following sets of credentials is also required. + +If you choose to use __Certificate credentials__ you will have to provide: +* `certificate` - (Required) The pem encoded TLS Certificate from Apple. +* `private_key` - (Required) The Certificate Private Key file (ie. `.key` file). + +If you choose to use __Key credentials__ you will have to provide: +* `bundle_id` - (Required) The ID assigned to your iOS app. To find this value, choose Certificates, IDs & Profiles, choose App IDs in the Identifiers section, and choose your app. +* `team_id` - (Required) The ID assigned to your Apple developer account team. This value is provided on the Membership page. +* `token_key` - (Required) The `.p8` file that you download from your Apple developer account when you create an authentication key. +* `token_key_id` - (Required) The ID assigned to your signing key. 
To find this value, choose Certificates, IDs & Profiles, and choose your key in the Keys section. + +## Import + +Pinpoint APNs VoIP Sandbox Channel can be imported using the `application-id`, e.g. + +``` +$ terraform import aws_pinpoint_apns_voip_sandbox_channel.apns_voip_sandbox application-id +``` diff --git a/website/docs/r/pinpoint_app.markdown b/website/docs/r/pinpoint_app.markdown new file mode 100644 index 00000000000..284d26fc0be --- /dev/null +++ b/website/docs/r/pinpoint_app.markdown @@ -0,0 +1,72 @@ +--- +layout: "aws" +page_title: "AWS: aws_pinpoint_app" +sidebar_current: "docs-aws-resource-pinpoint-app" +description: |- + Provides a Pinpoint App resource. +--- + +# aws_pinpoint_app + +Provides a Pinpoint App resource. + +## Example Usage + +```hcl +resource "aws_pinpoint_app" "example" { + name = "test-app" + + limits { + maximum_duration = 600 + } + + quiet_time { + start = "00:00" + end = "06:00" + } +} +``` + + +## Argument Reference + +The following arguments are supported: + +* `name` - (Optional) The application name. By default generated by Terraform +* `name_prefix` - (Optional) The name of the Pinpoint application. Conflicts with `name` +* `campaign_hook` - (Optional) The default campaign limits for the app. These limits apply to each campaign for the app, unless the campaign overrides the default with limits of its own +* `limits` - (Optional) The default campaign limits for the app. These limits apply to each campaign for the app, unless the campaign overrides the default with limits of its own +* `quiet_time` - (Optional) The default quiet time for the app. Each campaign for this app sends no messages during this time unless the campaign overrides the default with a quiet time of its own + +`campaign_hook` supports the following: + +* `lambda_function_name` - (Optional) Lambda function name or ARN to be called for delivery. Conflicts with `web_url` +* `mode` - (Required if `lambda_function_name` or `web_url` are provided) What mode Lambda should be invoked in. Valid values for this parameter are `DELIVERY`, `FILTER`. +* `web_url` - (Optional) Web URL to call for hook. If the URL has authentication specified it will be added as authentication to the request. Conflicts with `lambda_function_name` + +`limits` supports the following: + +* `daily` - (Optional) The maximum number of messages that the campaign can send daily. +* `maximum_duration` - (Optional) The length of time (in seconds) that the campaign can run before it ends and message deliveries stop. This duration begins at the scheduled start time for the campaign. The minimum value is 60. +* `messages_per_second` - (Optional) The number of messages that the campaign can send per second. The minimum value is 50, and the maximum is 20000. +* `total` - (Optional) The maximum total number of messages that the campaign can send. + +`quiet_time` supports the following: + +* `end` - (Optional) The default end time for quiet time in ISO 8601 format. Required if `start` is set +* `start` - (Optional) The default start time for quiet time in ISO 8601 format. Required if `end` is set + + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `application_id` - The Application ID of the Pinpoint App. + +## Import + +Pinpoint App can be imported using the `application-id`, e.g. 
+ +``` +$ terraform import aws_pinpoint_app.name application-id +``` diff --git a/website/docs/r/pinpoint_baidu_channel.markdown b/website/docs/r/pinpoint_baidu_channel.markdown new file mode 100644 index 00000000000..7161147d74c --- /dev/null +++ b/website/docs/r/pinpoint_baidu_channel.markdown @@ -0,0 +1,45 @@ +--- +layout: "aws" +page_title: "AWS: aws_pinpoint_baidu_channel" +sidebar_current: "docs-aws-resource-pinpoint-baidu-channel" +description: |- + Provides a Pinpoint Baidu Channel resource. +--- + +# aws_pinpoint_baidu_channel + +Provides a Pinpoint Baidu Channel resource. + +~> **Note:** All arguments including the Api Key and Secret Key will be stored in the raw state as plain-text. +[Read more about sensitive data in state](/docs/state/sensitive-data.html). + + +## Example Usage + +```hcl +resource "aws_pinpoint_app" "app" {} + +resource "aws_pinpoint_baidu_channel" "channel" { + application_id = "${aws_pinpoint_app.app.application_id}" + api_key = "" + secret_key = "" +} +``` + + +## Argument Reference + +The following arguments are supported: + +* `application_id` - (Required) The application ID. +* `enabled` - (Optional) Specifies whether to enable the channel. Defaults to `true`. +* `api_key` - (Required) Platform credential API key from Baidu. +* `secret_key` - (Required) Platform credential Secret key from Baidu. + +## Import + +Pinpoint Baidu Channel can be imported using the `application-id`, e.g. + +``` +$ terraform import aws_pinpoint_baidu_channel.channel application-id +``` diff --git a/website/docs/r/pinpoint_email_channel.markdown b/website/docs/r/pinpoint_email_channel.markdown new file mode 100644 index 00000000000..bdd32113975 --- /dev/null +++ b/website/docs/r/pinpoint_email_channel.markdown @@ -0,0 +1,92 @@ +--- +layout: "aws" +page_title: "AWS: aws_pinpoint_email_channel" +sidebar_current: "docs-aws-resource-pinpoint-email-channel" +description: |- + Provides a Pinpoint Email Channel resource. +--- + +# aws_pinpoint_email_channel + +Provides a Pinpoint Email Channel resource. + +## Example Usage + +```hcl +resource "aws_pinpoint_email_channel" "email" { + application_id = "${aws_pinpoint_app.app.application_id}" + from_address = "user@example.com" + identity = "${aws_ses_domain_identity.identity.arn}" + role_arn = "${aws_iam_role.role.arn}" +} + +resource "aws_pinpoint_app" "app" {} + +resource "aws_ses_domain_identity" "identity" { + domain = "example.com" +} + +resource "aws_iam_role" "role" { + assume_role_policy = < **Note:** Api Key argument will be stored in the raw state as plain-text. +[Read more about sensitive data in state](/docs/state/sensitive-data.html). + +## Example Usage + +```hcl +resource "aws_pinpoint_gcm_channel" "gcm" { + application_id = "${aws_pinpoint_app.app.application_id}" + api_key = "api_key" +} + +resource "aws_pinpoint_app" "app" {} +``` + + +## Argument Reference + +The following arguments are supported: + +* `application_id` - (Required) The application ID. +* `api_key` - (Required) Platform credential API key from Google. +* `enabled` - (Optional) Whether the channel is enabled or disabled. Defaults to `true`. + +## Import + +Pinpoint GCM Channel can be imported using the `application-id`, e.g.
+ +``` +$ terraform import aws_pinpoint_gcm_channel.gcm application-id +``` diff --git a/website/docs/r/pinpoint_sms_channel.markdown b/website/docs/r/pinpoint_sms_channel.markdown new file mode 100644 index 00000000000..03646586492 --- /dev/null +++ b/website/docs/r/pinpoint_sms_channel.markdown @@ -0,0 +1,46 @@ +--- +layout: "aws" +page_title: "AWS: aws_pinpoint_sms_channel" +sidebar_current: "docs-aws-resource-pinpoint-sms-channel" +description: |- + Provides a Pinpoint SMS Channel resource. +--- + +# aws_pinpoint_sms_channel + +Provides a Pinpoint SMS Channel resource. + +## Example Usage + +```hcl +resource "aws_pinpoint_sms_channel" "sms" { + application_id = "${aws_pinpoint_app.app.application_id}" +} + +resource "aws_pinpoint_app" "app" {} +``` + + +## Argument Reference + +The following arguments are supported: + +* `application_id` - (Required) The application ID. +* `enabled` - (Optional) Whether the channel is enabled or disabled. Defaults to `true`. +* `sender_id` - (Optional) Sender identifier of your messages. +* `short_code` - (Optional) The Short Code registered with the phone provider. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `promotional_messages_per_second` - Promotional messages per second that can be sent. +* `transactional_messages_per_second` - Transactional messages per second that can be sent. + +## Import + +Pinpoint SMS Channel can be imported using the `application-id`, e.g. + +``` +$ terraform import aws_pinpoint_sms_channel.sms application-id +``` diff --git a/website/docs/r/placement_group.html.markdown b/website/docs/r/placement_group.html.markdown index 7041df042bb..0c659f5f679 100644 --- a/website/docs/r/placement_group.html.markdown +++ b/website/docs/r/placement_group.html.markdown @@ -29,7 +29,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The name of the placement group. @@ -39,4 +39,4 @@ Placement groups can be imported using the `name`, e.g. ``` $ terraform import aws_placement_group.prod_pg production-placement-group -``` \ No newline at end of file +``` diff --git a/website/docs/r/proxy_protocol_policy.html.markdown b/website/docs/r/proxy_protocol_policy.html.markdown index 837e1e6f153..88b1409271e 100644 --- a/website/docs/r/proxy_protocol_policy.html.markdown +++ b/website/docs/r/proxy_protocol_policy.html.markdown @@ -49,7 +49,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the policy. * `load_balancer` - The load balancer to which the policy is attached. diff --git a/website/docs/r/rds_cluster.html.markdown b/website/docs/r/rds_cluster.html.markdown index 323aea3602f..0419b9bf49d 100644 --- a/website/docs/r/rds_cluster.html.markdown +++ b/website/docs/r/rds_cluster.html.markdown @@ -3,17 +3,16 @@ layout: "aws" page_title: "AWS: aws_rds_cluster" sidebar_current: "docs-aws-resource-rds-cluster" description: |- - Provides an RDS Cluster Resource + Manages a RDS Aurora Cluster --- # aws_rds_cluster -Provides an RDS Cluster Resource. A Cluster Resource defines attributes that are -applied to the entire cluster of [RDS Cluster Instances][3]. Use the RDS Cluster -resource and RDS Cluster Instances to create and use Amazon Aurora, a MySQL-compatible -database engine. 
+Manages a [RDS Aurora Cluster][2]. To manage cluster instances that inherit configuration from the cluster (when not running the cluster in `serverless` engine mode), see the [`aws_rds_cluster_instance` resource](/docs/providers/aws/r/rds_cluster_instance.html). To manage non-Aurora databases (e.g. MySQL, PostgreSQL, SQL Server, etc.), see the [`aws_db_instance` resource](/docs/providers/aws/r/db_instance.html). -For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amazon RDS User Guide. +For information on the difference between the available Aurora MySQL engines +see [Comparison between Aurora MySQL 1 and Aurora MySQL 2](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Updates.20180206.html) +in the Amazon RDS User Guide. Changes to a RDS Cluster can occur when you manually change a parameter, such as `port`, and are reflected in the next maintenance @@ -78,13 +77,14 @@ resource "aws_rds_cluster" "postgresql" { ## Argument Reference For more detailed documentation about each argument, refer to -the [AWS official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html). +the [AWS official documentation](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html). The following arguments are supported: * `cluster_identifier` - (Optional, Forces new resources) The cluster identifier. If omitted, Terraform will assign a random, unique identifier. * `cluster_identifier_prefix` - (Optional, Forces new resource) Creates a unique cluster identifier beginning with the specified prefix. Conflicts with `cluster_identifer`. * `database_name` - (Optional) Name for an automatically created database on cluster creation. There are different naming restrictions per database engine: [RDS Naming Constraints][5] +* `deletion_protection` - (Optional) If the DB instance should have deletion protection enabled. The database can't be deleted when this value is set to `true`. The default is `false`. * `master_password` - (Required unless a `snapshot_identifier` is provided) Password for the master DB user. Note that this may show up in logs, and it will be stored in the state file. Please refer to the [RDS Naming Constraints][5] * `master_username` - (Required unless a `snapshot_identifier` is provided) Username for the master DB user. Please refer to the [RDS Naming Constraints][5] @@ -94,8 +94,8 @@ The following arguments are supported: * `skip_final_snapshot` - (Optional) Determines whether a final DB snapshot is created before the DB cluster is deleted. If true is specified, no DB snapshot is created. If false is specified, a DB snapshot is created before the DB cluster is deleted, using the value from `final_snapshot_identifier`. Default is `false`. * `availability_zones` - (Optional) A list of EC2 Availability Zones that instances in the DB cluster can be created in -* `backup_retention_period` - (Optional) The days to retain backups for. Default -1 +* `backtrack_window` - (Optional) The target backtrack window, in seconds. Only available for `aurora` engine currently. To disable backtracking, set this value to `0`. Defaults to `0`. Must be between `0` and `259200` (72 hours) +* `backup_retention_period` - (Optional) The days to retain backups for. 
Default `1` * `preferred_backup_window` - (Optional) The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod parameter.Time in UTC Default: A 30-minute window selected at random from an 8-hour block of time per region. e.g. 04:00-09:00 * `preferred_maintenance_window` - (Optional) The weekly time range during which system maintenance can occur, in (UTC) e.g. wed:04:00-wed:04:30 @@ -103,7 +103,8 @@ Default: A 30-minute window selected at random from an 8-hour block of time per * `vpc_security_group_ids` - (Optional) List of VPC security groups to associate with the Cluster * `snapshot_identifier` - (Optional) Specifies whether or not to create this cluster from a snapshot. You can use either the name or ARN when specifying a DB cluster snapshot, or the ARN when specifying a DB snapshot. -* `storage_encrypted` - (Optional) Specifies whether the DB cluster is encrypted. The default is `false` if not specified. +* `storage_encrypted` - (Optional) Specifies whether the DB cluster is encrypted. The default is `false` for `provisioned` `engine_mode` and `true` for `serverless` `engine_mode`. +* `replication_source_identifier` - (Optional) ARN of a source DB cluster or DB instance if this DB cluster is to be created as a Read Replica. * `apply_immediately` - (Optional) Specifies whether any cluster modifications are applied immediately, or during the next maintenance window. Default is `false`. See [Amazon RDS Documentation for more information.](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) @@ -111,19 +112,79 @@ Default: A 30-minute window selected at random from an 8-hour block of time per * `db_cluster_parameter_group_name` - (Optional) A cluster parameter group to associate with the cluster. * `kms_key_id` - (Optional) The ARN for the KMS encryption key. When specifying `kms_key_id`, `storage_encrypted` needs to be set to true. * `iam_roles` - (Optional) A List of ARNs for the IAM roles to associate to the RDS Cluster. -* `iam_database_authentication_enabled` - (Optional) Specifies whether or mappings of AWS Identity and Access Management (IAM) accounts to database accounts is enabled. -* `engine` - (Optional) The name of the database engine to be used for this DB cluster. Defaults to `aurora`. Valid Values: aurora,aurora-mysql,aurora-postgresql -* `engine_version` - (Optional) The database engine version. +* `iam_database_authentication_enabled` - (Optional) Specifies whether or mappings of AWS Identity and Access Management (IAM) accounts to database accounts is enabled. Please see [AWS Documentation][6] for availability and limitations. +* `engine` - (Optional) The name of the database engine to be used for this DB cluster. Defaults to `aurora`. Valid Values: `aurora`, `aurora-mysql`, `aurora-postgresql` +* `engine_mode` - (Optional) The database engine mode. Valid values: `parallelquery`, `provisioned`, `serverless`. Defaults to: `provisioned`. See the [RDS User Guide](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/aurora-serverless.html) for limitations when using `serverless`. +* `engine_version` - (Optional) The database engine version. Updating this argument results in an outage. * `source_region` - (Optional) The source region for an encrypted replica DB cluster. +* `enabled_cloudwatch_logs_exports` - (Optional) List of log types to export to cloudwatch. If omitted, no logs will be exported. 
+ The following log types are supported: `audit`, `error`, `general`, `slowquery`. +* `scaling_configuration` - (Optional) Nested attribute with scaling properties. Only valid when `engine_mode` is set to `serverless`. More details below. +* `tags` - (Optional) A mapping of tags to assign to the DB cluster. + +### S3 Import Options + +Full details on the core parameters and impacts are in the API Docs: [RestoreDBClusterFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromS3.html). Requires that the S3 bucket be in the same region as the RDS cluster you're trying to create. Sample: + +~> **NOTE:** RDS Aurora Serverless does not support loading data from S3, so its not possible to directly use `engine_mode` set to `serverless` with `s3_import`. + +```hcl +resource "aws_rds_cluster" "db" { + engine = "aurora" + + s3_import { + source_engine = "mysql" + source_engine_version = "5.6" + bucket_name = "mybucket" + bucket_prefix = "backups" + ingestion_role = "arn:aws:iam::1234567890:role/role-xtrabackup-rds-restore" + } +} +``` + +* `bucket_name` - (Required) The bucket name where your backup is stored +* `bucket_prefix` - (Optional) Can be blank, but is the path to your backup +* `ingestion_role` - (Required) Role applied to load the data. +* `source_engine` - (Required) Source engine for the backup +* `source_engine_version` - (Required) Version of the source engine used to make the backup + +This will not recreate the resource if the S3 object changes in some way. It's only used to initialize the database. This only works currently with the aurora engine. See AWS for currently supported engines and options. See [Aurora S3 Migration Docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Migrating.ExtMySQL.html#AuroraMySQL.Migrating.ExtMySQL.S3). + +### scaling_configuration Argument Reference + +~> **NOTE:** `scaling_configuration` configuration is only valid when `engine_mode` is set to `serverless`. + +Example: + +```hcl +resource "aws_rds_cluster" "example" { + # ... other configuration ... + + engine_mode = "serverless" + + scaling_configuration { + auto_pause = true + max_capacity = 256 + min_capacity = 2 + seconds_until_auto_pause = 300 + } +} +``` + +* `auto_pause` - (Optional) Whether to enable automatic pause. A DB cluster can be paused only when it's idle (it has no connections). If a DB cluster is paused for more than seven days, the DB cluster might be backed up with a snapshot. In this case, the DB cluster is restored when there is a request to connect to it. Defaults to `true`. +* `max_capacity` - (Optional) The maximum capacity. The maximum capacity must be greater than or equal to the minimum capacity. Valid capacity values are `2`, `4`, `8`, `16`, `32`, `64`, `128`, and `256`. Defaults to `16`. +* `min_capacity` - (Optional) The minimum capacity. The minimum capacity must be lesser than or equal to the maximum capacity. Valid capacity values are `2`, `4`, `8`, `16`, `32`, `64`, `128`, and `256`. Defaults to `2`. +* `seconds_until_auto_pause` - (Optional) The time, in seconds, before an Aurora DB cluster in serverless mode is paused. Valid values are `300` through `86400`. Defaults to `300`. 
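+
+For illustration, the `# ... other configuration ...` above could be filled in with the cluster-level arguments described earlier. The identifier and credentials below are placeholders, not recommendations:
+
+```hcl
+# Illustrative serverless cluster; all names and values are placeholders.
+resource "aws_rds_cluster" "serverless_example" {
+  cluster_identifier  = "example-serverless"
+  engine              = "aurora"
+  engine_mode         = "serverless"
+  master_username     = "foo"
+  master_password     = "must_be_eight_characters"
+  skip_final_snapshot = true
+
+  scaling_configuration {
+    auto_pause               = true
+    max_capacity             = 4
+    min_capacity             = 2
+    seconds_until_auto_pause = 300
+  }
+}
+```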
## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: +* `arn` - Amazon Resource Name (ARN) of cluster * `id` - The RDS Cluster Identifier * `cluster_identifier` - The RDS Cluster Identifier * `cluster_resource_id` - The RDS Cluster Resource ID -* `cluster_members` – List of RDS Instances that are a part of this cluster +* `cluster_members` – List of RDS Instances that are a part of this cluster * `allocated_storage` - The amount of allocated storage * `availability_zones` - The availability zone of the instance * `backup_retention_period` - The backup retention period @@ -140,7 +201,7 @@ load-balanced across replicas * `status` - The RDS instance status * `master_username` - The master username for the database * `storage_encrypted` - Specifies whether the DB cluster is encrypted -* `replication_source_identifier` - ARN of the source DB cluster if this DB cluster is created as a Read Replica. +* `replication_source_identifier` - ARN of the source DB cluster or DB instance if this DB cluster is created as a Read Replica. * `hosted_zone_id` - The Route53 Hosted Zone ID of the endpoint [1]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html @@ -148,6 +209,7 @@ load-balanced across replicas [3]: /docs/providers/aws/r/rds_cluster_instance.html [4]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html [5]: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.Constraints +[6]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Availability ## Timeouts diff --git a/website/docs/r/rds_cluster_instance.html.markdown b/website/docs/r/rds_cluster_instance.html.markdown index 74a56975005..f87dacd355e 100644 --- a/website/docs/r/rds_cluster_instance.html.markdown +++ b/website/docs/r/rds_cluster_instance.html.markdown @@ -21,6 +21,8 @@ Cluster, or you may specify different Cluster Instance resources with various For more information on Amazon Aurora, see [Aurora on Amazon RDS][2] in the Amazon RDS User Guide. +~> **NOTE:** Deletion Protection from the RDS service can only be enabled at the cluster level, not for individual cluster instances. You can still add the [`prevent_destroy` lifecycle behavior](https://www.terraform.io/docs/configuration/resources.html#prevent_destroy) to your Terraform resource configuration if you desire protection from accidental deletion. + ## Example Usage ```hcl @@ -28,7 +30,7 @@ resource "aws_rds_cluster_instance" "cluster_instances" { count = 2 identifier = "aurora-cluster-demo-${count.index}" cluster_identifier = "${aws_rds_cluster.default.id}" - instance_class = "db.r3.large" + instance_class = "db.r4.large" } resource "aws_rds_cluster" "default" { @@ -43,18 +45,21 @@ resource "aws_rds_cluster" "default" { ## Argument Reference For more detailed documentation about each argument, refer to -the [AWS official documentation](https://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html). +the [AWS official documentation](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html). The following arguments are supported: * `identifier` - (Optional, Forces new resource) The indentifier for the RDS instance, if omitted, Terraform will assign a random, unique identifier. 
* `identifier_prefix` - (Optional, Forces new resource) Creates a unique identifier beginning with the specified prefix. Conflicts with `identifer`. * `cluster_identifier` - (Required) The identifier of the [`aws_rds_cluster`](/docs/providers/aws/r/rds_cluster.html) in which to launch this instance. -* `engine` - (Optional) The name of the database engine to be used for the RDS instance. Defaults to `aurora`. Valid Values: aurora,aurora-mysql,aurora-postgresql +* `engine` - (Optional) The name of the database engine to be used for the RDS instance. Defaults to `aurora`. Valid Values: `aurora`, `aurora-mysql`, `aurora-postgresql`. +For information on the difference between the available Aurora MySQL engines +see [Comparison between Aurora MySQL 1 and Aurora MySQL 2](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Updates.20180206.html) +in the Amazon RDS User Guide. * `engine_version` - (Optional) The database engine version. * `instance_class` - (Required) The instance class to use. For details on CPU and memory, see [Scaling Aurora DB Instances][4]. Aurora currently - supports the below instance classes. + supports the below instance classes. Please see [AWS Documentation][7] for complete details. - db.t2.small - db.t2.medium - db.r3.large @@ -62,6 +67,12 @@ and memory, see [Scaling Aurora DB Instances][4]. Aurora currently - db.r3.2xlarge - db.r3.4xlarge - db.r3.8xlarge + - db.r4.large + - db.r4.xlarge + - db.r4.2xlarge + - db.r4.4xlarge + - db.r4.8xlarge + - db.r4.16xlarge * `publicly_accessible` - (Optional) Bool to control if instance is publicly accessible. Default `false`. See the documentation on [Creating DB Instances][6] for more details on controlling this property. @@ -86,13 +97,13 @@ what IAM permissions are needed to allow Enhanced Monitoring for RDS Instances. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: +* `arn` - Amazon Resource Name (ARN) of cluster instance * `cluster_identifier` - The RDS Cluster Identifier * `identifier` - The Instance identifier * `id` - The Instance identifier -* `writer` – Boolean indicating if this instance is writable. `False` indicates -this instance is a read replica +* `writer` – Boolean indicating if this instance is writable. `False` indicates this instance is a read replica. * `allocated_storage` - The amount of allocated storage * `availability_zone` - The availability zone of the instance * `endpoint` - The DNS address for this instance. May not be writable @@ -112,6 +123,7 @@ this instance is a read replica [4]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html [5]: /docs/configuration/resources.html#count [6]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html +[7]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html ## Timeouts diff --git a/website/docs/r/rds_cluster_parameter_group.markdown b/website/docs/r/rds_cluster_parameter_group.markdown index 14c355eab1b..578b4ff5c0d 100644 --- a/website/docs/r/rds_cluster_parameter_group.markdown +++ b/website/docs/r/rds_cluster_parameter_group.markdown @@ -2,11 +2,15 @@ layout: "aws" page_title: "AWS: aws_rds_cluster_parameter_group" sidebar_current: "docs-aws-resource-rds-cluster-parameter-group" +description: |- + Provides an RDS DB cluster parameter group resource. --- # aws_rds_cluster_parameter_group -Provides an RDS DB cluster parameter group resource. 
+Provides an RDS DB cluster parameter group resource. Documentation of the available parameters for various Aurora engines can be found at: +* [Aurora MySQL Parameters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Reference.html) +* [Aurora PostgreSQL Parameters](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraPostgreSQL.Reference.html) ## Example Usage @@ -49,7 +53,7 @@ Parameter blocks support the following: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The db cluster parameter group name. * `arn` - The ARN of the db cluster parameter group. diff --git a/website/docs/r/redshift_cluster.html.markdown b/website/docs/r/redshift_cluster.html.markdown index 37720b37886..ad5e59d479c 100644 --- a/website/docs/r/redshift_cluster.html.markdown +++ b/website/docs/r/redshift_cluster.html.markdown @@ -2,6 +2,8 @@ layout: "aws" page_title: "AWS: aws_redshift_cluster" sidebar_current: "docs-aws-resource-redshift-cluster" +description: |- + Provides a Redshift Cluster resource. --- # aws_redshift_cluster @@ -88,7 +90,7 @@ For more information on the permissions required for the bucket, please read the ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Redshift Cluster ID. * `cluster_identifier` - The Cluster Identifier @@ -102,6 +104,7 @@ The following attributes are exported: * `encrypted` - Whether the data in the cluster is encrypted * `cluster_security_groups` - The security groups associated with the cluster * `vpc_security_group_ids` - The VPC security group Ids associated with the cluster +* `dns_name` - The DNS name of the cluster * `port` - The Port the cluster responds on * `cluster_version` - The version of Redshift engine software * `cluster_parameter_group_name` - The name of the parameter group to be associated with this cluster diff --git a/website/docs/r/redshift_event_subscription.html.markdown b/website/docs/r/redshift_event_subscription.html.markdown new file mode 100644 index 00000000000..81e1766966e --- /dev/null +++ b/website/docs/r/redshift_event_subscription.html.markdown @@ -0,0 +1,75 @@ +--- +layout: "aws" +page_title: "AWS: aws_redshift_event_subscription" +sidebar_current: "docs-aws-resource-redshift-event-subscription" +description: |- + Provides a Redshift event subscription resource. +--- + +# aws_redshift_event_subscription + +Provides a Redshift event subscription resource. + +## Example Usage + +```hcl +resource "aws_redshift_cluster" "default" { + cluster_identifier = "default" + database_name = "default" + + # ... +} + +resource "aws_sns_topic" "default" { + name = "redshift-events" +} + +resource "aws_redshift_event_subscription" "default" { + name = "redshift-event-sub" + sns_topic_arn = "${aws_sns_topic.default.arn}" + + source_type = "cluster" + source_ids = ["${aws_redshift_cluster.default.id}"] + + severity = "INFO" + + event_categories = [ + "configuration", + "management", + "monitoring", + "security", + ] + + tags { + Name = "default" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the Redshift event subscription. +* `sns_topic_arn` - (Required) The ARN of the SNS topic to send events to. +* `source_ids` - (Optional) A list of identifiers of the event sources for which events will be returned. If not specified, then all sources are included in the response.
If specified, a source_type must also be specified. +* `source_type` - (Optional) The type of source that will be generating the events. Valid options are `cluster`, `cluster-parameter-group`, `cluster-security-group`, or `cluster-snapshot`. If not set, all sources will be subscribed to. +* `severity` - (Optional) The event severity to be published by the notification subscription. Valid options are `INFO` or `ERROR`. +* `event_categories` - (Optional) A list of event categories for a SourceType that you want to subscribe to. See https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-event-notifications.html or run `aws redshift describe-event-categories`. +* `enabled` - (Optional) A boolean flag to enable/disable the subscription. Defaults to true. +* `tags` - (Optional) A mapping of tags to assign to the resource. + +## Attributes + +The following additional attributes are provided: + +* `id` - The name of the Redshift event notification subscription +* `customer_aws_id` - The AWS customer account associated with the Redshift event notification subscription + +## Import + +Redshift Event Subscriptions can be imported using the `name`, e.g. + +``` +$ terraform import aws_redshift_event_subscription.default redshift-event-sub +``` diff --git a/website/docs/r/redshift_parameter_group.html.markdown b/website/docs/r/redshift_parameter_group.html.markdown index 3a46dcbf585..d38af6d56d3 100644 --- a/website/docs/r/redshift_parameter_group.html.markdown +++ b/website/docs/r/redshift_parameter_group.html.markdown @@ -2,6 +2,8 @@ layout: "aws" page_title: "AWS: aws_redshift_parameter_group" sidebar_current: "docs-aws-resource-redshift-parameter-group" +description: |- + Provides a Redshift Cluster parameter group resource. --- # aws_redshift_parameter_group @@ -50,7 +52,7 @@ You can read more about the parameters that Redshift supports in the [documentat ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Redshift parameter group name. @@ -60,4 +62,4 @@ Redshift Parameter Groups can be imported using the `name`, e.g. ``` $ terraform import aws_redshift_parameter_group.paramgroup1 parameter-group-test-terraform -``` \ No newline at end of file +``` diff --git a/website/docs/r/redshift_security_group.html.markdown b/website/docs/r/redshift_security_group.html.markdown index eaefddb848f..e2d9d1cba71 100644 --- a/website/docs/r/redshift_security_group.html.markdown +++ b/website/docs/r/redshift_security_group.html.markdown @@ -39,7 +39,7 @@ Ingress blocks support the following: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Redshift security group ID. diff --git a/website/docs/r/redshift_snapshot_copy_grant.html.markdown b/website/docs/r/redshift_snapshot_copy_grant.html.markdown new file mode 100644 index 00000000000..45d28546c42 --- /dev/null +++ b/website/docs/r/redshift_snapshot_copy_grant.html.markdown @@ -0,0 +1,40 @@ +--- +layout: "aws" +page_title: "AWS: aws_redshift_snapshot_copy_grant" +sidebar_current: "docs-aws-resource-redshift-snapshot-copy-grant" +description: |- + Creates a snapshot copy grant that allows AWS Redshift to encrypt copied snapshots with a customer master key from AWS KMS in a destination region.
+--- + +# aws_redshift_snapshot_copy_grant + +Creates a snapshot copy grant that allows AWS Redshift to encrypt copied snapshots with a customer master key from AWS KMS in a destination region. + +Note that the grant must exist in the destination region, and not in the region of the cluster. + +## Example Usage + +```hcl +resource "aws_redshift_snapshot_copy_grant" "test" { + snapshot_copy_grant_name = "my-grant" +} + +resource "aws_redshift_cluster" "test" { + # ... other configuration ... + snapshot_copy { + destination_region = "us-east-2" + grant_name = "${aws_redshift_snapshot_copy_grant.test.snapshot_copy_grant_name}" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `snapshot_copy_grant_name` - (Required, Forces new resource) A friendly name for identifying the grant. +* `kms_key_id` - (Optional, Forces new resource) The unique identifier for the customer master key (CMK) that the grant applies to. Specify the key ID or the Amazon Resource Name (ARN) of the CMK. To specify a CMK in a different AWS account, you must use the key ARN. If not specified, the default key is used. + +## Attributes Reference + +No additional attributes beyond the arguments above are exported. diff --git a/website/docs/r/redshift_subnet_group.html.markdown b/website/docs/r/redshift_subnet_group.html.markdown index bfff73b705b..7b55a6e74e9 100644 --- a/website/docs/r/redshift_subnet_group.html.markdown +++ b/website/docs/r/redshift_subnet_group.html.markdown @@ -58,7 +58,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Redshift Subnet group ID. diff --git a/website/docs/r/route.html.markdown b/website/docs/r/route.html.markdown index cc5e34272fe..d6be5dab882 100644 --- a/website/docs/r/route.html.markdown +++ b/website/docs/r/route.html.markdown @@ -31,7 +31,7 @@ resource "aws_route" "r" { ```hcl resource "aws_vpc" "vpc" { - cidr_block = "10.1.0.0/16" + cidr_block = "10.1.0.0/16" assign_generated_ipv6_cidr_block = true } @@ -40,9 +40,9 @@ resource "aws_egress_only_internet_gateway" "egress" { } resource "aws_route" "r" { - route_table_id = "rtb-4fbb3ac4" - destination_ipv6_cidr_block = "::/0" - egress_only_gateway_id = "${aws_egress_only_internet_gateway.egress.id}" + route_table_id = "rtb-4fbb3ac4" + destination_ipv6_cidr_block = "::/0" + egress_only_gateway_id = "${aws_egress_only_internet_gateway.egress.id}" } ``` @@ -67,7 +67,7 @@ created implicitly and cannot be specified. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: ~> **NOTE:** Only the target type that is specified (one of the above) will be exported as an attribute once the resource is created. @@ -89,3 +89,19 @@ will be exported as an attribute once the resource is created. - `create` - (Default `2 minutes`) Used for route creation - `delete` - (Default `5 minutes`) Used for route deletion + +## Import + +Individual routes can be imported using `ROUTETABLEID_DESTINATION`. 
+ +For example, import a route in route table `rtb-656C65616E6F72` with an IPv4 destination CIDR of `10.42.0.0/16` like this: + +```console +$ terraform import aws_route.my_route rtb-656C65616E6F72_10.42.0.0/16 +``` + +Import a route in route table `rtb-656C65616E6F72` with an IPv6 destination CIDR of `2620:0:2d0:200::8/125` similarly: + +```console +$ terraform import aws_route.my_route rtb-656C65616E6F72_2620:0:2d0:200::8/125 +``` diff --git a/website/docs/r/route53_delegation_set.html.markdown b/website/docs/r/route53_delegation_set.html.markdown index 10eef7ae75a..ff401fbc7ee 100644 --- a/website/docs/r/route53_delegation_set.html.markdown +++ b/website/docs/r/route53_delegation_set.html.markdown @@ -37,7 +37,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The delegation set ID * `name_servers` - A list of authoritative name servers for the hosted zone @@ -51,4 +51,4 @@ Route53 Delegation Sets can be imported using the `delegation set id`, e.g. ``` $ terraform import aws_route53_delegation_set.set1 N1PA6795SAMPLE -``` \ No newline at end of file +``` diff --git a/website/docs/r/route53_health_check.html.markdown b/website/docs/r/route53_health_check.html.markdown index 5447d640256..c9dc48ce96d 100644 --- a/website/docs/r/route53_health_check.html.markdown +++ b/website/docs/r/route53_health_check.html.markdown @@ -11,9 +11,11 @@ Provides a Route53 health check. ## Example Usage +### Connectivity and HTTP Status Code Check + ```hcl -resource "aws_route53_health_check" "child1" { - fqdn = "foobar.terraform.com" +resource "aws_route53_health_check" "example" { + fqdn = "example.com" port = 80 type = "HTTP" resource_path = "/" @@ -24,11 +26,29 @@ resource "aws_route53_health_check" "child1" { Name = "tf-test-health-check" } } +``` -resource "aws_route53_health_check" "foo" { +### Connectivity and String Matching Check + +```hcl +resource "aws_route53_health_check" "example" { + failure_threshold = "5" + fqdn = "example.com" + port = 443 + request_interval = "30" + resource_path = "/" + search_string = "example" + type = "HTTPS_STR_MATCH" +} +``` + +### Aggregate Check + +```hcl +resource "aws_route53_health_check" "parent" { type = "CALCULATED" child_health_threshold = 1 - child_healthchecks = ["${aws_route53_health_check.child1.id}"] + child_healthchecks = ["${aws_route53_health_check.child.id}"] tags = { Name = "tf-test-calculated-health-check" @@ -36,7 +56,7 @@ resource "aws_route53_health_check" "foo" { } ``` -## CloudWatch Alarm Example +### CloudWatch Alarm Check ```hcl resource "aws_cloudwatch_metric_alarm" "foobar" { @@ -72,7 +92,7 @@ The following arguments are supported: * `failure_threshold` - (Required) The number of consecutive health checks that an endpoint must pass or fail. * `request_interval` - (Required) The number of seconds between the time that Amazon Route 53 gets a response from your endpoint and the time that it sends the next health-check request. * `resource_path` - (Optional) The path that you want Amazon Route 53 to request when performing health checks. -* `search_string` - (Optional) String searched in the first 5120 bytes of the response body for check to be considered healthy. +* `search_string` - (Optional) String searched in the first 5120 bytes of the response body for check to be considered healthy. Only valid with `HTTP_STR_MATCH` and `HTTPS_STR_MATCH`. 
* `measure_latency` - (Optional) A Boolean value that indicates whether you want Route 53 to measure the latency between health checkers in multiple AWS regions and your endpoint and to display CloudWatch latency graphs in the Route 53 console. * `invert_healthcheck` - (Optional) A boolean value that indicates whether the status of health check should be inverted. For example, if a health check is healthy but Inverted is True , then Route 53 considers the health check to be unhealthy. * `enable_sni` - (Optional) A boolean value that indicates whether Route53 should send the `fqdn` to the endpoint when performing the health check. This defaults to AWS' defaults: when the `type` is "HTTPS" `enable_sni` defaults to `true`, when `type` is anything else `enable_sni` defaults to `false`. diff --git a/website/docs/r/route53_query_log.html.markdown b/website/docs/r/route53_query_log.html.markdown index e3704b149ef..d26829b1a2a 100644 --- a/website/docs/r/route53_query_log.html.markdown +++ b/website/docs/r/route53_query_log.html.markdown @@ -53,6 +53,8 @@ data "aws_iam_policy_document" "route53-query-logging-policy" { } resource "aws_cloudwatch_log_resource_policy" "route53-query-logging-policy" { + provider = "aws.us-east-1" + policy_document = "${data.aws_iam_policy_document.route53-query-logging-policy.json}" policy_name = "route53-query-logging-policy" } @@ -80,7 +82,7 @@ The following arguments are supported: ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The query logging configuration ID diff --git a/website/docs/r/route53_record.html.markdown b/website/docs/r/route53_record.html.markdown index cb2a6e40b52..a4729b33a22 100644 --- a/website/docs/r/route53_record.html.markdown +++ b/website/docs/r/route53_record.html.markdown @@ -107,7 +107,7 @@ The following arguments are supported: * `geolocation_routing_policy` - (Optional) A block indicating a routing policy based on the geolocation of the requestor. Conflicts with any other routing policy. Documented below. * `latency_routing_policy` - (Optional) A block indicating a routing policy based on the latency between the requestor and an AWS region. Conflicts with any other routing policy. Documented below. * `weighted_routing_policy` - (Optional) A block indicating a weighted routing policy. Conflicts with any other routing policy. Documented below. -* `multivalue_answer_routing_policy` - (Optional) A block indicating a multivalue answer routing policy. Conflicts with any other routing policy. +* `multivalue_answer_routing_policy` - (Optional) Set to `true` to indicate a multivalue answer routing policy. Conflicts with any other routing policy. * `allow_overwrite` - (Optional) Allow creation of this record in Terraform to overwrite an existing record, if any. This does not prevent other resources within Terraform or manual Route53 changes from overwriting this record. `true` by default. Exactly one of `records` or `alias` must be specified: this determines whether it's an alias record. @@ -138,7 +138,10 @@ Weighted routing policies support the following: ## Attributes Reference -* `fqdn` - [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name) built using the zone domain and `name` +In addition to all arguments above, the following attributes are exported: + +* `name` - The name of the record. +* `fqdn` - [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name) built using the zone domain and `name`. 
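As a quick, illustrative sketch only (the record name `aws_route53_record.www` below is assumed for the example and is not defined in the configurations above), exported attributes such as `fqdn` can be referenced like any other resource attribute, for instance from an output:

```hcl
# Hypothetical reference; substitute the name of your own aws_route53_record resource.
output "www_fqdn" {
  value = "${aws_route53_record.www.fqdn}"
}
```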
## Import diff --git a/website/docs/r/route53_zone.html.markdown b/website/docs/r/route53_zone.html.markdown index 66b77ce5782..59c5a7e6c7f 100644 --- a/website/docs/r/route53_zone.html.markdown +++ b/website/docs/r/route53_zone.html.markdown @@ -3,21 +3,25 @@ layout: "aws" page_title: "AWS: aws_route53_zone" sidebar_current: "docs-aws-resource-route53-zone" description: |- - Provides a Route53 Hosted Zone resource. + Manages a Route53 Hosted Zone --- # aws_route53_zone -Provides a Route53 Hosted Zone resource. +Manages a Route53 Hosted Zone. ## Example Usage +### Public Zone + ```hcl resource "aws_route53_zone" "primary" { name = "example.com" } ``` +### Public Subdomain Zone + For use in subdomains, note that you need to create a `aws_route53_record` of type `NS` as well as the subdomain zone. @@ -50,24 +54,43 @@ resource "aws_route53_record" "dev-ns" { } ``` +### Private Zone + +~> **NOTE:** Terraform provides both exclusive VPC associations defined in-line in this resource via `vpc` configuration blocks and a separate [Zone VPC Association](/docs/providers/aws/r/route53_zone_association.html) resource. At this time, you cannot use in-line VPC associations in conjunction with any `aws_route53_zone_association` resources with the same zone ID otherwise it will cause a perpetual difference in plan output. You can optionally use the generic Terraform resource [lifecycle configuration block](/docs/configuration/resources.html#lifecycle) with `ignore_changes` to manage additional associations via the `aws_route53_zone_association` resource. + +~> **NOTE:** Private zones require at least one VPC association at all times. + +```hcl +resource "aws_route53_zone" "private" { + name = "example.com" + + vpc { + vpc_id = "${aws_vpc.example.id}" + } +} +``` + ## Argument Reference The following arguments are supported: * `name` - (Required) This is the name of the hosted zone. * `comment` - (Optional) A comment for the hosted zone. Defaults to 'Managed by Terraform'. +* `delegation_set_id` - (Optional) The ID of the reusable delegation set whose NS records you want to assign to the hosted zone. Conflicts with `vpc` and `vpc_id` as delegation sets can only be used for public zones. +* `force_destroy` - (Optional) Whether to destroy all records (possibly managed outside of Terraform) in the zone when destroying the zone. * `tags` - (Optional) A mapping of tags to assign to the zone. -* `vpc_id` - (Optional) The VPC to associate with a private hosted zone. Specifying `vpc_id` will create a private hosted zone. - Conflicts w/ `delegation_set_id` as delegation sets can only be used for public zones. -* `vpc_region` - (Optional) The VPC's region. Defaults to the region of the AWS provider. -* `delegation_set_id` - (Optional) The ID of the reusable delegation set whose NS records you want to assign to the hosted zone. - Conflicts w/ `vpc_id` as delegation sets can only be used for public zones. -* `force_destroy` - (Optional) Whether to destroy all records (possibly managed outside of Terraform) - in the zone when destroying the zone. +* `vpc` - (Optional) Configuration block(s) specifying VPC(s) to associate with a private hosted zone. Conflicts with `delegation_set_id`, `vpc_id`, and `vpc_region` in this resource and any [`aws_route53_zone_association` resource](/docs/providers/aws/r/route53_zone_association.html) specifying the same zone ID. Detailed below. +* `vpc_id` - (Optional, **DEPRECATED**) Use `vpc` instead. The VPC to associate with a private hosted zone. 
Specifying `vpc_id` will create a private hosted zone. Conflicts with `delegation_set_id` as delegation sets can only be used for public zones and `vpc`. +* `vpc_region` - (Optional, **DEPRECATED**) Use `vpc` instead. The VPC's region. Defaults to the region of the AWS provider. + +### vpc Argument Reference + +* `vpc_id` - (Required) ID of the VPC to associate. +* `vpc_region` - (Optional) Region of the VPC to associate. Defaults to AWS provider region. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `zone_id` - The Hosted Zone ID. This can be referenced by zone records. * `name_servers` - A list of name servers in associated (or default) delegation set. diff --git a/website/docs/r/route53_zone_association.html.markdown b/website/docs/r/route53_zone_association.html.markdown index 4ffd550c4b8..73baedf89a7 100644 --- a/website/docs/r/route53_zone_association.html.markdown +++ b/website/docs/r/route53_zone_association.html.markdown @@ -3,12 +3,16 @@ layout: "aws" page_title: "AWS: aws_route53_zone_association" sidebar_current: "docs-aws-resource-route53-zone-association" description: |- - Provides a Route53 private Hosted Zone to VPC association resource. + Manages a Route53 Hosted Zone VPC association --- # aws_route53_zone_association -Provides a Route53 private Hosted Zone to VPC association resource. +Manages a Route53 Hosted Zone VPC association. VPC associations can only be made on private zones. + +~> **NOTE:** Unless explicit association ordering is required (e.g. a separate cross-account association authorization), usage of this resource is not recommended. Use the `vpc` configuration blocks available within the [`aws_route53_zone` resource](/docs/providers/aws/r/route53_zone.html) instead. + +~> **NOTE:** Terraform provides both this standalone Zone VPC Association resource and exclusive VPC associations defined in-line in the [`aws_route53_zone` resource](/docs/providers/aws/r/route53_zone.html) via `vpc` configuration blocks. At this time, you cannot use those in-line VPC associations in conjunction with this resource and the same zone ID otherwise it will cause a perpetual difference in plan output. You can optionally use the generic Terraform resource [lifecycle configuration block](/docs/configuration/resources.html#lifecycle) with `ignore_changes` in the `aws_route53_zone` resource to manage additional associations via this resource. ## Example Usage @@ -26,8 +30,20 @@ resource "aws_vpc" "secondary" { } resource "aws_route53_zone" "example" { - name = "example.com" - vpc_id = "${aws_vpc.primary.id}" + name = "example.com" + + # NOTE: The aws_route53_zone vpc argument accepts multiple configuration + # blocks. The below usage of the single vpc configuration, the + # lifecycle configuration, and the aws_route53_zone_association + # resource is for illustrative purposes (e.g. for a separate + # cross-account authorization process, which is not shown here). + vpc { + vpc_id = "${aws_vpc.primary.id}" + } + + lifecycle { + ignore_changes = ["vpc"] + } } resource "aws_route53_zone_association" "secondary" { @@ -46,7 +62,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The calculated unique identifier for the association. * `zone_id` - The ID of the hosted zone for the association. 
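As a minimal, illustrative sketch (reusing the `secondary` association from the Example Usage above), the calculated association `id` can be surfaced as an output:

```hcl
# Exposes the calculated identifier of the association shown in the example above.
output "secondary_association_id" {
  value = "${aws_route53_zone_association.secondary.id}"
}
```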
diff --git a/website/docs/r/route_table.html.markdown b/website/docs/r/route_table.html.markdown index 1cf2b9db072..2bdc75cb532 100644 --- a/website/docs/r/route_table.html.markdown +++ b/website/docs/r/route_table.html.markdown @@ -41,7 +41,7 @@ resource "aws_route_table" "r" { } route { - ipv6_cidr_block = "::/0" + ipv6_cidr_block = "::/0" egress_only_gateway_id = "${aws_egress_only_internet_gateway.foo.id}" } @@ -77,7 +77,7 @@ the VPC's CIDR block to "local", is created implicitly and cannot be specified. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: ~> **NOTE:** Only the target that is entered is exported as a readable attribute once the route resource is created. @@ -85,8 +85,11 @@ attribute once the route resource is created. ## Import -Route Tables can be imported using the `route table id`, e.g. +~> **NOTE:** Importing this resource currently adds an `aws_route` resource to the state for each route, in addition to adding the `aws_route_table` resource. If you plan to apply the imported state, avoid the deletion of actual routes by not using in-line routes in your configuration and by naming `aws_route` resources after the `aws_route_table`. For example, if your route table is `aws_route_table.rt`, name routes as `aws_route.rt`, `aws_route.rt-1` and so forth. The behavior of adding `aws_route` resources with the `aws_route_table` resource will be removed in the next major version. + +Route Tables can be imported using the route table `id`. For example, to import +route table `rtb-4e616f6d69`, use this command: ``` -$ terraform import aws_route_table.public_rt rtb-22574640 +$ terraform import aws_route_table.public_rt rtb-4e616f6d69 ``` diff --git a/website/docs/r/route_table_association.html.markdown b/website/docs/r/route_table_association.html.markdown index f121e43ab9e..3c54f920a8b 100644 --- a/website/docs/r/route_table_association.html.markdown +++ b/website/docs/r/route_table_association.html.markdown @@ -28,7 +28,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the association diff --git a/website/docs/r/s3_bucket.html.markdown b/website/docs/r/s3_bucket.html.markdown index 3f628655e22..249575c24f7 100644 --- a/website/docs/r/s3_bucket.html.markdown +++ b/website/docs/r/s3_bucket.html.markdown @@ -112,7 +112,8 @@ resource "aws_s3_bucket" "bucket" { id = "log" enabled = true - prefix = "log/" + prefix = "log/" + tags { "rule" = "log" "autoclean" = "true" @@ -120,7 +121,7 @@ resource "aws_s3_bucket" "bucket" { transition { days = 30 - storage_class = "STANDARD_IA" + storage_class = "STANDARD_IA" # or "ONEZONE_IA" } transition { @@ -252,8 +253,8 @@ resource "aws_iam_policy_attachment" "replication" { } resource "aws_s3_bucket" "destination" { - bucket = "tf-test-bucket-destination-12345" - region = "eu-west-1" + bucket = "tf-test-bucket-destination-12345" + region = "eu-west-1" versioning { enabled = true @@ -297,6 +298,7 @@ resource "aws_kms_key" "mykey" { resource "aws_s3_bucket" "mybucket" { bucket = "mybucket" + server_side_encryption_configuration { rule { apply_server_side_encryption_by_default { @@ -315,7 +317,7 @@ The following arguments are supported: * `bucket` - (Optional, Forces new resource) The name of the bucket. If omitted, Terraform will assign a random, unique name. 
* `bucket_prefix` - (Optional, Forces new resource) Creates a unique bucket name beginning with the specified prefix. Conflicts with `bucket`. * `acl` - (Optional) The [canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Defaults to "private". -* `policy` - (Optional) A valid [bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a `terraform plan`. In this case, please make sure you use the verbose/specific version of the policy. +* `policy` - (Optional) A valid [bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a `terraform plan`. In this case, please make sure you use the verbose/specific version of the policy. For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). * `tags` - (Optional) A mapping of tags to assign to the bucket. * `force_destroy` - (Optional, Default:false ) A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are *not* recoverable. @@ -385,7 +387,7 @@ The `transition` object supports the following * `date` (Optional) Specifies the date after which you want the corresponding action to take effect. * `days` (Optional) Specifies the number of days after object creation when the specific rule action takes effect. -* `storage_class` (Required) Specifies the Amazon S3 storage class to which you want the object to transition. Can be `STANDARD_IA` or `GLACIER`. +* `storage_class` (Required) Specifies the Amazon S3 storage class to which you want the object to transition. Can be `ONEZONE_IA`, `STANDARD_IA`, or `GLACIER`. The `noncurrent_version_expiration` object supports the following @@ -394,7 +396,7 @@ The `noncurrent_version_expiration` object supports the following The `noncurrent_version_transition` object supports the following * `days` (Required) Specifies the number of days an object is noncurrent object versions expire. -* `storage_class` (Required) Specifies the Amazon S3 storage class to which you want the noncurrent versions object to transition. Can be `STANDARD_IA` or `GLACIER`. +* `storage_class` (Required) Specifies the Amazon S3 storage class to which you want the noncurrent versions object to transition. Can be `ONEZONE_IA`, `STANDARD_IA`, or `GLACIER`. The `replication_configuration` object supports the following: @@ -404,17 +406,28 @@ The `replication_configuration` object supports the following: The `rules` object supports the following: * `id` - (Optional) Unique identifier for the rule. +* `priority` - (Optional) The priority associated with the rule. * `destination` - (Required) Specifies the destination for the rule (documented below). * `source_selection_criteria` - (Optional) Specifies special object selection criteria (documented below). -* `prefix` - (Required) Object keyname prefix identifying one or more objects to which the rule applies. Set as an empty string to replicate the whole bucket. +* `prefix` - (Optional) Object keyname prefix identifying one or more objects to which the rule applies. 
* `status` - (Required) The status of the rule. Either `Enabled` or `Disabled`. The rule is ignored if status is not Enabled. +* `filter` - (Optional) Filter that identifies subset of objects to which the replication rule applies (documented below). + +~> **NOTE on `prefix` and `filter`:** Amazon S3's latest version of the replication configuration is V2, which includes the `filter` attribute for replication rules. +With the `filter` attribute, you can specify object filters based on the object key prefix, tags, or both to scope the objects that the rule applies to. +Replication configuration V1 supports filtering based on only the `prefix` attribute. For backwards compatibility, Amazon S3 continues to support the V1 configuration. +* For a specific rule, `prefix` conflicts with `filter` +* If any rule has `filter` specified then they all must +* `priority` is optional (with a default value of `0`) but must be unique between multiple rules The `destination` object supports the following: * `bucket` - (Required) The ARN of the S3 bucket where you want Amazon S3 to store replicas of the object identified by the rule. * `storage_class` - (Optional) The class of storage used to store the object. -* `replica_kms_key_id` - (Optional) Destination KMS encryption key ID for SSE-KMS replication. Must be used in conjunction with +* `replica_kms_key_id` - (Optional) Destination KMS encryption key ARN for SSE-KMS replication. Must be used in conjunction with `sse_kms_encrypted_objects` source selection criteria. +* `access_control_translation` - (Optional) Specifies the overrides to use for object owners on replication. Must be used in conjunction with `account_id` owner override configuration. +* `account_id` - (Optional) The Account ID to use for overriding the object owner on replication. Must be used in conjunction with `access_control_translation` override configuration. The `source_selection_criteria` object supports the following: @@ -425,6 +438,12 @@ The `sse_kms_encrypted_objects` object supports the following: * `enabled` - (Required) Boolean which indicates if this criteria is enabled. +The `filter` object supports the following: + +* `prefix` - (Optional) Object keyname prefix that identifies subset of objects to which the rule applies. +* `tags` - (Optional) A mapping of tags that identifies subset of objects to which the rule applies. +The rule applies only to objects having all the tags in its tagset. + The `server_side_encryption_configuration` object supports the following: * `rule` - (required) A single object for server-side encryption by default configuration. (documented below) @@ -438,13 +457,18 @@ The `apply_server_side_encryption_by_default` object supports the following: * `sse_algorithm` - (required) The server-side encryption algorithm to use. Valid values are `AES256` and `aws:kms` * `kms_master_key_id` - (optional) The AWS KMS master key ID used for the SSE-KMS encryption. This can only be used when you set the value of `sse_algorithm` as `aws:kms`. The default `aws/s3` AWS KMS master key is used if this element is absent while the `sse_algorithm` is `aws:kms`. +The `access_control_translation` object supports the following: + +* `owner` - (Required) The override value for the owner on replicated objects. Currently only `Destination` is supported. + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The name of the bucket. * `arn` - The ARN of the bucket. 
Will be of format `arn:aws:s3:::bucketname`. * `bucket_domain_name` - The bucket domain name. Will be of format `bucketname.s3.amazonaws.com`. +* `bucket_regional_domain_name` - The bucket region-specific domain name. The bucket domain name including the region name; please refer [here](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) for the format. Note: CloudFront allows specifying an S3 region-specific endpoint when creating an S3 origin; this will prevent [redirect issues](https://forums.aws.amazon.com/thread.jspa?threadID=216814) from CloudFront to the S3 origin URL. * `hosted_zone_id` - The [Route 53 Hosted Zone ID](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_website_region_endpoints) for this bucket's region. * `region` - The AWS region this bucket resides in. * `website_endpoint` - The website endpoint, if the bucket is configured with a website. If not, this will be an empty string. diff --git a/website/docs/r/s3_bucket_inventory.html.markdown b/website/docs/r/s3_bucket_inventory.html.markdown new file mode 100644 index 00000000000..5c917990fc4 --- /dev/null +++ b/website/docs/r/s3_bucket_inventory.html.markdown @@ -0,0 +1,128 @@ +--- +layout: "aws" +page_title: "AWS: aws_s3_bucket_inventory" +sidebar_current: "docs-aws-resource-s3-bucket-inventory" +description: |- + Provides a S3 bucket inventory configuration resource. +--- + +# aws_s3_bucket_inventory + +Provides a S3 bucket [inventory configuration](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-inventory.html) resource. + +## Example Usage + +### Add inventory configuration + +```hcl +resource "aws_s3_bucket" "test" { + bucket = "my-tf-test-bucket" +} + +resource "aws_s3_bucket" "inventory" { + bucket = "my-tf-inventory-bucket" +} + +resource "aws_s3_bucket_inventory" "test" { + bucket = "${aws_s3_bucket.test.id}" + name = "EntireBucketDaily" + + included_object_versions = "All" + + schedule { + frequency = "Daily" + } + + destination { + bucket { + format = "ORC" + bucket_arn = "${aws_s3_bucket.inventory.arn}" + } + } +} +``` + +### Add inventory configuration with S3 bucket object prefix + +```hcl +resource "aws_s3_bucket" "test" { + bucket = "my-tf-test-bucket" +} + +resource "aws_s3_bucket" "inventory" { + bucket = "my-tf-inventory-bucket" +} + +resource "aws_s3_bucket_inventory" "test-prefix" { + bucket = "${aws_s3_bucket.test.id}" + name = "DocumentsWeekly" + + included_object_versions = "All" + + schedule { + frequency = "Daily" + } + + filter { + prefix = "documents/" + } + + destination { + bucket { + format = "ORC" + bucket_arn = "${aws_s3_bucket.inventory.arn}" + prefix = "inventory" + } + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `bucket` - (Required) The name of the bucket to put the inventory configuration in. +* `name` - (Required) Unique identifier of the inventory configuration for the bucket. +* `included_object_versions` - (Required) Specifies which object versions to include in the inventory results. Can be `All` or `Current`. +* `schedule` - (Required) Contains the frequency for generating inventory results (documented below). +* `destination` - (Required) Destination bucket where inventory list files are written (documented below). +* `enabled` - (Optional, Default: true) Specifies whether the inventory is enabled or disabled. +* `filter` - (Optional) Object filtering that accepts a prefix (documented below). +* `optional_fields` - (Optional) Contains the optional fields that are included in the inventory results.
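As an illustrative sketch only (not part of the examples above), `optional_fields` takes a list of S3 inventory field names; `Size` and `LastModifiedDate` are assumed here as typical values and the referenced buckets come from the Example Usage section:

```hcl
resource "aws_s3_bucket_inventory" "test-fields" {
  bucket = "${aws_s3_bucket.test.id}"
  name   = "EntireBucketDailyWithFields"

  included_object_versions = "All"

  # Extra columns to include in each inventory report.
  optional_fields = ["Size", "LastModifiedDate"]

  schedule {
    frequency = "Daily"
  }

  destination {
    bucket {
      format     = "ORC"
      bucket_arn = "${aws_s3_bucket.inventory.arn}"
    }
  }
}
```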
+ +The `filter` configuration supports the following: + +* `prefix` - (Optional) Object prefix for filtering (singular). + +The `schedule` configuration supports the following: + +* `frequency` - (Required) Specifies how frequently inventory results are produced. Can be `Daily` or `Weekly`. + +The `destination` configuration supports the following: + +* `bucket` - (Required) The S3 bucket configuration where inventory results are published (documented below). + +The `bucket` configuration supports the following: + +* `bucket_arn` - (Required) The Amazon S3 bucket ARN of the destination. +* `format` - (Required) Specifies the output format of the inventory results. Can be `CSV` or [`ORC`](https://orc.apache.org/). +* `account_id` - (Optional) The ID of the account that owns the destination bucket. Recommended to be set to prevent problems if the destination bucket ownership changes. +* `prefix` - (Optional) The prefix that is prepended to all inventory results. +* `encryption` - (Optional) Contains the type of server-side encryption to use to encrypt the inventory (documented below). + +The `encryption` configuration supports the following: + +* `sse_kms` - (Optional) Specifies to use server-side encryption with AWS KMS-managed keys to encrypt the inventory file (documented below). +* `sse_s3` - (Optional) Specifies to use server-side encryption with Amazon S3-managed keys (SSE-S3) to encrypt the inventory file. + +The `sse_kms` configuration supports the following: + +* `key_id` - (Required) The ARN of the KMS customer master key (CMK) used to encrypt the inventory file. + +## Import + +S3 bucket inventory configurations can be imported using `bucket:inventory`, e.g. + +```sh +$ terraform import aws_s3_bucket_inventory.my-bucket-entire-bucket my-bucket:EntireBucket +``` diff --git a/website/docs/r/s3_bucket_metric.html.markdown b/website/docs/r/s3_bucket_metric.html.markdown index 0eaf8a7ad01..f262b732303 100644 --- a/website/docs/r/s3_bucket_metric.html.markdown +++ b/website/docs/r/s3_bucket_metric.html.markdown @@ -1,7 +1,7 @@ --- layout: "aws" page_title: "AWS: aws_s3_bucket_metric" -side_bar_current: "docs-aws-resource-s3-bucket-metric" +sidebar_current: "docs-aws-resource-s3-bucket-metric" description: |- Provides a S3 bucket metrics configuration resource. --- @@ -41,7 +41,7 @@ resource "aws_s3_bucket_metric" "example-filtered" { tags { priority = "high" - class = "blue" + class = "blue" } } } diff --git a/website/docs/r/s3_bucket_notification.html.markdown b/website/docs/r/s3_bucket_notification.html.markdown index 947d29496ef..0c49f26f1c4 100644 --- a/website/docs/r/s3_bucket_notification.html.markdown +++ b/website/docs/r/s3_bucket_notification.html.markdown @@ -1,7 +1,7 @@ --- layout: "aws" page_title: "AWS: aws_s3_bucket_notification" -side_bar_current: "docs-aws-resource-s3-bucket-notification" +sidebar_current: "docs-aws-resource-s3-bucket-notification" description: |- Provides a S3 bucket notification resource. 
--- @@ -123,6 +123,7 @@ resource "aws_lambda_function" "func" { function_name = "example_lambda_name" role = "${aws_iam_role.iam_for_lambda.arn}" handler = "exports.example" + runtime = "go1.x" } resource "aws_s3_bucket" "bucket" { @@ -176,6 +177,7 @@ resource "aws_lambda_function" "func1" { function_name = "example_lambda_name1" role = "${aws_iam_role.iam_for_lambda.arn}" handler = "exports.example" + runtime = "go1.x" } resource "aws_lambda_permission" "allow_bucket2" { diff --git a/website/docs/r/s3_bucket_object.html.markdown b/website/docs/r/s3_bucket_object.html.markdown index 37587d20738..4e9d05f397c 100644 --- a/website/docs/r/s3_bucket_object.html.markdown +++ b/website/docs/r/s3_bucket_object.html.markdown @@ -60,6 +60,22 @@ resource "aws_s3_bucket_object" "examplebucket_object" { } ``` +### Server Side Encryption with AWS-Managed Key + +```hcl +resource "aws_s3_bucket" "examplebucket" { + bucket = "examplebuckettftest" + acl = "private" +} + +resource "aws_s3_bucket_object" "examplebucket_object" { + key = "someobject" + bucket = "${aws_s3_bucket.examplebucket.id}" + source = "index.html" + server_side_encryption = "AES256" +} +``` + ## Argument Reference -> **Note:** If you specify `content_encoding` you are responsible for encoding the body appropriately. `source`, `content`, and `content_base64` all expect already encoded/compressed bytes. @@ -73,15 +89,15 @@ The following arguments are supported: * `content_base64` - (Required unless `source` or `content` is set) Base64-encoded data that will be decoded and uploaded as raw bytes for the object content. This allows safely uploading non-UTF8 binary data, but is recommended only for small content such as the result of the `gzipbase64` function with small text strings. For larger objects, use `source` to stream the content from a disk file. * `acl` - (Optional) The [canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Defaults to "private". * `cache_control` - (Optional) Specifies caching behavior along the request/reply chain Read [w3c cache_control](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for further details. -* `content_disposition` - (Optional) Specifies presentational information for the object. Read [wc3 content_disposition](http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) for further information. +* `content_disposition` - (Optional) Specifies presentational information for the object. Read [w3c content_disposition](http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) for further information. * `content_encoding` - (Optional) Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. Read [w3c content encoding](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11) for further information. * `content_language` - (Optional) The language the content is in e.g. en-US or en-GB. * `content_type` - (Optional) A standard MIME type describing the format of the object data, e.g. application/octet-stream. All Valid MIME Types are valid for this input. * `website_redirect` - (Optional) Specifies a target URL for [website redirect](http://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html). * `storage_class` - (Optional) Specifies the desired [Storage Class](http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) -for the object. 
Can be either "`STANDARD`", "`REDUCED_REDUNDANCY`", or "`STANDARD_IA`". Defaults to "`STANDARD`". +for the object. Can be either "`STANDARD`", "`REDUCED_REDUNDANCY`", "`ONEZONE_IA`", or "`STANDARD_IA`". Defaults to "`STANDARD`". * `etag` - (Optional) Used to trigger updates. The only meaningful value is `${md5(file("path/to/file"))}`. -This attribute is not compatible with `kms_key_id`. +This attribute is not compatible with KMS encryption, `kms_key_id` or `server_side_encryption = "aws:kms"`. * `server_side_encryption` - (Optional) Specifies server-side encryption of the object in S3. Valid values are "`AES256`" and "`aws:kms`". * `kms_key_id` - (Optional) Specifies the AWS KMS Key ARN to use for object encryption. This value is a fully qualified **ARN** of the KMS Key. If using `aws_kms_key`, @@ -97,6 +113,6 @@ These two arguments are mutually-exclusive. The following attributes are exported * `id` - the `key` of the resource supplied above -* `etag` - the ETag generated for the object (an MD5 sum of the object content). +* `etag` - the ETag generated for the object (an MD5 sum of the object content). For plaintext objects or objects encrypted with an AWS-managed key, the hash is an MD5 digest of the object data. For objects encrypted with a KMS key or objects created by either the Multipart Upload or Part Copy operation, the hash is not an MD5 digest, regardless of the method of encryption. More information on possible values can be found on [Common Response Headers](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html). * `version_id` - A unique version ID value for the object, if bucket versioning is enabled. diff --git a/website/docs/r/s3_bucket_policy.html.markdown b/website/docs/r/s3_bucket_policy.html.markdown index cee15f1864b..128a32545f8 100644 --- a/website/docs/r/s3_bucket_policy.html.markdown +++ b/website/docs/r/s3_bucket_policy.html.markdown @@ -21,7 +21,8 @@ resource "aws_s3_bucket" "b" { resource "aws_s3_bucket_policy" "b" { bucket = "${aws_s3_bucket.b.id}" - policy =< **NOTE:** Configuring rotation causes the secret to rotate once as soon as you store the secret. Before you do this, you must ensure that all of your applications that use the credentials stored in the secret are updated to retrieve the secret from AWS Secrets Manager. The old credentials might no longer be usable after the initial rotation and any applications that you fail to update will break as soon as the old credentials are no longer valid. + +~> **NOTE:** If you cancel a rotation that is in progress (by removing the `rotation` configuration), it can leave the VersionStage labels in an unexpected state. Depending on what step of the rotation was in progress, you might need to remove the staging label AWSPENDING from the partially created version, specified by the SecretVersionId response value. You should also evaluate the partially rotated new version to see if it should be deleted, which you can do by removing all staging labels from the new version's VersionStage field. + +```hcl +resource "aws_secretsmanager_secret" "rotation-example" { + name = "rotation-example" + rotation_lambda_arn = "${aws_lambda_function.example.arn}" + + rotation_rules { + automatically_after_days = 7 + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Optional) Specifies the friendly name of the new secret. 
The secret name can consist of uppercase letters, lowercase letters, digits, and any of the following characters: `/_+=.@-` Conflicts with `name_prefix`. +* `name_prefix` - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `name`. +* `description` - (Optional) A description of the secret. +* `kms_key_id` - (Optional) Specifies the ARN or alias of the AWS KMS customer master key (CMK) to be used to encrypt the secret values in the versions stored in this secret. If you don't specify this value, then Secrets Manager defaults to using the AWS account's default CMK (the one named `aws/secretsmanager`). If the default KMS CMK with that name doesn't yet exist, then AWS Secrets Manager creates it for you automatically the first time. +* `policy` - (Optional) A valid JSON document representing a [resource policy](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-based-policies.html). For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). +* `recovery_window_in_days` - (Optional) Specifies the number of days that AWS Secrets Manager waits before it can delete the secret. This value can be `0` to force deletion without recovery or range from `7` to `30` days. The default value is `30`. +* `rotation_lambda_arn` - (Optional) Specifies the ARN of the Lambda function that can rotate the secret. +* `rotation_rules` - (Optional) A structure that defines the rotation configuration for this secret. Defined below. +* `tags` - (Optional) Specifies a key-value map of user-defined tags that are attached to the secret. + +### rotation_rules + +* `automatically_after_days` - (Required) Specifies the number of days between automatic scheduled rotations of the secret. + +## Attribute Reference + +* `id` - Amazon Resource Name (ARN) of the secret. +* `arn` - Amazon Resource Name (ARN) of the secret. +* `rotation_enabled` - Specifies whether automatic rotation is enabled for this secret. + +## Import + +`aws_secretsmanager_secret` can be imported by using the secret Amazon Resource Name (ARN), e.g. + +``` +$ terraform import aws_secretsmanager_secret.example arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456 +``` diff --git a/website/docs/r/secretsmanager_secret_version.html.markdown b/website/docs/r/secretsmanager_secret_version.html.markdown new file mode 100644 index 00000000000..b3653896970 --- /dev/null +++ b/website/docs/r/secretsmanager_secret_version.html.markdown @@ -0,0 +1,71 @@ +--- +layout: "aws" +page_title: "AWS: aws_secretsmanager_secret_version" +sidebar_current: "docs-aws-resource-secretsmanager-secret-version" +description: |- + Provides a resource to manage AWS Secrets Manager secret version including its secret value +--- + +# aws_secretsmanager_secret_version + +Provides a resource to manage AWS Secrets Manager secret version including its secret value. To manage secret metadata, see the [`aws_secretsmanager_secret` resource](/docs/providers/aws/r/secretsmanager_secret.html). + +~> **NOTE:** If the `AWSCURRENT` staging label is present on this version during resource deletion, that label cannot be removed and will be skipped to prevent errors when fully deleting the secret. That label will leave this secret version active even after the resource is deleted from Terraform unless the secret itself is deleted. 
Move the `AWSCURRENT` staging label before or after deleting this resource from Terraform to fully trigger version deprecation if necessary. + +## Example Usage + +### Simple String Value + +```hcl +resource "aws_secretsmanager_secret_version" "example" { + secret_id = "${aws_secretsmanager_secret.example.id}" + secret_string = "example-string-to-protect" +} +``` + +### Key-Value Pairs + +Secrets Manager also accepts key-value pairs in JSON. + +```hcl +# The map here can come from other supported configurations +# like locals, resource attribute, map() built-in, etc. +variable "example" { + default = { + key1 = "value1" + key2 = "value2" + } + + type = "map" +} + +resource "aws_secretsmanager_secret_version" "example" { + secret_id = "${aws_secretsmanager_secret.example.id}" + secret_string = "${jsonencode(var.example)}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `secret_id` - (Required) Specifies the secret to which you want to add a new version. You can specify either the Amazon Resource Name (ARN) or the friendly name of the secret. The secret must already exist. +* `secret_string` - (Optional) Specifies text data that you want to encrypt and store in this version of the secret. This is required if secret_binary is not set. +* `secret_binary` - (Optional) Specifies binary data that you want to encrypt and store in this version of the secret. This is required if secret_string is not set. Needs to be encoded to base64. +* `version_stages` - (Optional) Specifies a list of staging labels that are attached to this version of the secret. A staging label must be unique to a single version of the secret. If you specify a staging label that's already associated with a different version of the same secret then that staging label is automatically removed from the other version and attached to this version. If you do not specify a value, then AWS Secrets Manager automatically moves the staging label `AWSCURRENT` to this new version on creation. + +~> **NOTE:** If `version_stages` is configured, you must include the `AWSCURRENT` staging label if this secret version is the only version or if the label is currently present on this secret version, otherwise Terraform will show a perpetual difference. + +## Attribute Reference + +* `arn` - The ARN of the secret. +* `id` - A pipe delimited combination of secret ID and version ID. +* `version_id` - The unique identifier of the version of the secret. + +## Import + +`aws_secretsmanager_secret_version` can be imported by using the secret ID and version ID, e.g. + +``` +$ terraform import aws_secretsmanager_secret_version.example arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456|xxxxx-xxxxxxx-xxxxxxx-xxxxx +``` diff --git a/website/docs/r/security_group.html.markdown b/website/docs/r/security_group.html.markdown index 3e1e4722ca5..c394de21d2b 100644 --- a/website/docs/r/security_group.html.markdown +++ b/website/docs/r/security_group.html.markdown @@ -17,6 +17,8 @@ defined in-line. At this time you cannot use a Security Group with in-line rules in conjunction with any Security Group Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules. +~> **NOTE:** Referencing Security Groups across VPC peering has certain restrictions. More information is available in the [VPC Peering User Guide](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html).
+ ## Example Usage Basic usage @@ -94,6 +96,7 @@ The `ingress` block supports: * `cidr_blocks` - (Optional) List of CIDR blocks. * `ipv6_cidr_blocks` - (Optional) List of IPv6 CIDR blocks. +* `prefix_list_ids` - (Optional) List of prefix list IDs. * `from_port` - (Required) The start port (or ICMP type number if protocol is "icmp") * `protocol` - (Required) The protocol. If you select a protocol of "-1" (semantically equivalent to `"all"`, which is not a valid value here), you must specify a "from_port" and "to_port" equal to 0. If not icmp, tcp, udp, or "-1" use the [protocol number](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml) @@ -127,12 +130,12 @@ surprises in terms of controlling your egress rules. If you desire this rule to be in place, you can use this `egress` block: ```hcl - egress { - from_port = 0 - to_port = 0 - protocol = "-1" - cidr_blocks = ["0.0.0.0/0"] - } +egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] +} ``` ## Usage with prefix list IDs @@ -142,22 +145,23 @@ are associated with a prefix list name, or service name, that is linked to a spe Prefix list IDs are exported on VPC Endpoints, so you can use this format: ```hcl - # ... - egress { - from_port = 0 - to_port = 0 - protocol = "-1" - prefix_list_ids = ["${aws_vpc_endpoint.my_endpoint.prefix_list_id}"] - } - # ... - resource "aws_vpc_endpoint" "my_endpoint" { - # ... - } +# ... +egress { + from_port = 0 + to_port = 0 + protocol = "-1" + prefix_list_ids = ["${aws_vpc_endpoint.my_endpoint.prefix_list_id}"] +} + +# ... +resource "aws_vpc_endpoint" "my_endpoint" { + # ... +} ``` ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the security group * `arn` - The ARN of the security group diff --git a/website/docs/r/security_group_rule.html.markdown b/website/docs/r/security_group_rule.html.markdown index 79cf17784d8..84e92b6cef6 100644 --- a/website/docs/r/security_group_rule.html.markdown +++ b/website/docs/r/security_group_rule.html.markdown @@ -18,6 +18,10 @@ defined in-line. At this time you cannot use a Security Group with in-line rules in conjunction with any Security Group Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules. +~> **NOTE:** Setting `protocol = "all"` or `protocol = -1` with `from_port` and `to_port` will result in the EC2 API creating a security group rule with all ports open. This API behavior cannot be controlled by Terraform and may generate warnings in the future. + +~> **NOTE:** Referencing Security Groups across VPC peering has certain restrictions. More information is available in the [VPC Peering User Guide](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html). 
+ ## Example Usage Basic usage @@ -79,11 +83,55 @@ resource "aws_vpc_endpoint" "my_endpoint" { ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the security group rule * `type` - The type of rule, `ingress` or `egress` * `from_port` - The start port (or ICMP type number if protocol is "icmp") * `to_port` - The end port (or ICMP code if protocol is "icmp") -* `protocol` – The protocol used -* `description` – Description of the rule +* `protocol` – The protocol used +* `description` – Description of the rule + +## Import + +Security Group Rules can be imported using the `security_group_id`, `type`, `protocol`, `from_port`, `to_port`, and source(s)/destination(s) (e.g. `cidr_block`) separated by underscores (`_`). All parts are required. + +Not all rule permissions (e.g., not all of a rule's CIDR blocks) need to be imported for Terraform to manage rule permissions. However, importing some of a rule's permissions but not others, and then making changes to the rule will result in the creation of an additional rule to capture the updated permissions. Rule permissions that were not imported are left intact in the original rule. + +### Examples + +Import an ingress rule in security group `sg-6e616f6d69` for TCP port 8000 with an IPv4 destination CIDR of `10.0.3.0/24`: + +```console +$ terraform import aws_security_group_rule.ingress sg-6e616f6d69_ingress_tcp_8000_8000_10.0.3.0/24 +``` + +Import a rule with various IPv4 and IPv6 source CIDR blocks: + +```console +$ terraform import aws_security_group_rule.ingress sg-4973616163_ingress_tcp_100_121_10.1.0.0/16_2001:db8::/48_10.2.0.0/16_2002:db8::/48 +``` + +Import a rule, applicable to all ports, with a protocol other than TCP/UDP/ICMP/ALL, e.g., Multicast Transport Protocol (MTP), using the IANA protocol number, e.g., 92. + +```console +$ terraform import aws_security_group_rule.ingress sg-6777656e646f6c796e_ingress_92_0_65536_10.0.3.0/24_10.0.4.0/24 +``` + +Import an egress rule with a prefix list ID destination: + +```console +$ terraform import aws_security_group_rule.egress sg-62726f6479_egress_tcp_8000_8000_pl-6469726b +``` + +Import a rule applicable to all protocols and ports with a security group source: + +```console +$ terraform import aws_security_group_rule.ingress_rule sg-7472697374616e_ingress_all_0_65536_sg-6176657279 +``` + +Import a rule that has itself and an IPv6 CIDR block as sources: + +```console +$ terraform import aws_security_group_rule.rule_name sg-656c65616e6f72_ingress_tcp_80_80_self_2001:db8::/48 +``` diff --git a/website/docs/r/service_discovery_private_dns_namespace.html.markdown b/website/docs/r/service_discovery_private_dns_namespace.html.markdown index f952a59c930..2eb5b40587b 100644 --- a/website/docs/r/service_discovery_private_dns_namespace.html.markdown +++ b/website/docs/r/service_discovery_private_dns_namespace.html.markdown @@ -18,9 +18,9 @@ resource "aws_vpc" "example" { } resource "aws_service_discovery_private_dns_namespace" "example" { - name = "hoge.example.local" + name = "hoge.example.local" description = "example" - vpc = "${aws_vpc.example.id}" + vpc = "${aws_vpc.example.id}" } ``` @@ -34,7 +34,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of a namespace. * `arn` - The ARN that Amazon Route 53 assigns to the namespace when you create it. 
diff --git a/website/docs/r/service_discovery_public_dns_namespace.html.markdown b/website/docs/r/service_discovery_public_dns_namespace.html.markdown index f1043fb82bb..051c36f610f 100644 --- a/website/docs/r/service_discovery_public_dns_namespace.html.markdown +++ b/website/docs/r/service_discovery_public_dns_namespace.html.markdown @@ -14,7 +14,7 @@ Provides a Service Discovery Public DNS Namespace resource. ```hcl resource "aws_service_discovery_public_dns_namespace" "example" { - name = "hoge.example.com" + name = "hoge.example.com" description = "example" } ``` @@ -28,7 +28,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of a namespace. * `arn` - The ARN that Amazon Route 53 assigns to the namespace when you create it. diff --git a/website/docs/r/service_discovery_service.html.markdown b/website/docs/r/service_discovery_service.html.markdown index cb3768d9603..9a8e9a77b81 100644 --- a/website/docs/r/service_discovery_service.html.markdown +++ b/website/docs/r/service_discovery_service.html.markdown @@ -18,43 +18,53 @@ resource "aws_vpc" "example" { } resource "aws_service_discovery_private_dns_namespace" "example" { - name = "example.terraform.local" + name = "example.terraform.local" description = "example" - vpc = "${aws_vpc.example.id}" + vpc = "${aws_vpc.example.id}" } resource "aws_service_discovery_service" "example" { name = "example" + dns_config { namespace_id = "${aws_service_discovery_private_dns_namespace.example.id}" + dns_records { - ttl = 10 + ttl = 10 type = "A" } + routing_policy = "MULTIVALUE" } + + health_check_custom_config { + failure_threshold = 1 + } } ``` ```hcl resource "aws_service_discovery_public_dns_namespace" "example" { - name = "example.terraform.com" + name = "example.terraform.com" description = "example" } resource "aws_service_discovery_service" "example" { name = "example" + dns_config { namespace_id = "${aws_service_discovery_public_dns_namespace.example.id}" + dns_records { - ttl = 10 + ttl = 10 type = "A" } } + health_check_config { - failure_threshold = 100 - resource_path = "path" - type = "HTTP" + failure_threshold = 10 + resource_path = "path" + type = "HTTP" } } ``` @@ -67,6 +77,7 @@ The following arguments are supported: * `description` - (Optional) The description of the service. * `dns_config` - (Required) A complex type that contains information about the resource record sets that you want Amazon Route 53 to create when you register an instance. * `health_check_config` - (Optional) A complex type that contains settings for an optional health check. Only for Public DNS namespaces. +* `health_check_custom_config` - (Optional, ForceNew) A complex type that contains settings for ECS managed health checks. ### dns_config @@ -88,12 +99,18 @@ The following arguments are supported: The following arguments are supported: * `failure_threshold` - (Optional) The number of consecutive health checks. Maximum value of 10. -* `resource_path` - (Optional) An array that contains one DnsRecord object for each resource record set. -* `type` - (Optional, ForceNew) An array that contains one DnsRecord object for each resource record set. +* `resource_path` - (Optional) The path that you want Route 53 to request when performing health checks. Route 53 automatically adds the DNS name for the service. If you don't specify a value, the default value is /. 
+* `type` - (Optional, ForceNew) The type of health check that you want to create, which indicates how Route 53 determines whether an endpoint is healthy. Valid Values: HTTP, HTTPS, TCP + +### health_check_custom_config + +The following arguments are supported: + +* `failure_threshold` - (Optional, ForceNew) The number of 30-second intervals that you want service discovery to wait before it changes the health status of a service instance. Maximum value of 10. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the service. * `arn` - The ARN of the service. diff --git a/website/docs/r/servicecatalog_portfolio.html.markdown b/website/docs/r/servicecatalog_portfolio.html.markdown index 5d0915d495a..b0d5ac9c9e9 100644 --- a/website/docs/r/servicecatalog_portfolio.html.markdown +++ b/website/docs/r/servicecatalog_portfolio.html.markdown @@ -14,8 +14,8 @@ Provides a resource to create a Service Catalog Portfolio. ```hcl resource "aws_servicecatalog_portfolio" "portfolio" { - name = "My App Portfolio" - description = "List of my organizations apps" + name = "My App Portfolio" + description = "List of my organizations apps" provider_name = "Brett" } ``` @@ -31,7 +31,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the Service Catalog Portfolio. diff --git a/website/docs/r/ses_configuration_set.markdown b/website/docs/r/ses_configuration_set.markdown index 2bc24955b94..070085bfa30 100644 --- a/website/docs/r/ses_configuration_set.markdown +++ b/website/docs/r/ses_configuration_set.markdown @@ -23,3 +23,11 @@ resource "aws_ses_configuration_set" "test" { The following arguments are supported: * `name` - (Required) The name of the configuration set + +## Import + +SES Configuration Sets can be imported using their `name`, e.g. + +``` +$ terraform import aws_ses_configuration_set.test some-configuration-set-test +``` diff --git a/website/docs/r/ses_domain_dkim.html.markdown b/website/docs/r/ses_domain_dkim.html.markdown index 7807ab76af7..dbb18a1d2bf 100644 --- a/website/docs/r/ses_domain_dkim.html.markdown +++ b/website/docs/r/ses_domain_dkim.html.markdown @@ -20,7 +20,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `dkim_tokens` - DKIM tokens generated by SES. These tokens should be used to create CNAME records used to verify SES Easy DKIM. diff --git a/website/docs/r/ses_domain_identity.html.markdown b/website/docs/r/ses_domain_identity.html.markdown index 2b14e6cf026..8f05885be75 100644 --- a/website/docs/r/ses_domain_identity.html.markdown +++ b/website/docs/r/ses_domain_identity.html.markdown @@ -18,7 +18,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `arn` - The ARN of the domain identity. @@ -46,3 +46,10 @@ resource "aws_route53_record" "example_amazonses_verification_record" { } ``` +## Import + +SES domain identities can be imported using the domain name. 
+ +``` +$ terraform import aws_ses_domain_identity.example example.com +``` diff --git a/website/docs/r/ses_domain_identity_verification.html.markdown b/website/docs/r/ses_domain_identity_verification.html.markdown new file mode 100644 index 00000000000..6f4a8c4b872 --- /dev/null +++ b/website/docs/r/ses_domain_identity_verification.html.markdown @@ -0,0 +1,59 @@ +--- +layout: "aws" +page_title: "AWS: ses_domain_identity_verification" +sidebar_current: "docs-aws-resource-ses-domain-identity-verification" +description: |- + Waits for and checks successful verification of an SES domain identity. +--- + +# aws_ses_domain_identity_verification + +Represents a successful verification of an SES domain identity. + +Most commonly, this resource is used together with [`aws_route53_record`](route53_record.html) and +[`aws_ses_domain_identity`](ses_domain_identity.html) to request an SES domain identity, +deploy the required DNS verification records, and wait for verification to complete. + +~> **WARNING:** This resource implements a part of the verification workflow. It does not represent a real-world entity in AWS, therefore changing or deleting this resource on its own has no immediate effect. + +## Example Usage + +```hcl +resource "aws_ses_domain_identity" "example" { + domain = "example.com" +} + +resource "aws_route53_record" "example_amazonses_verification_record" { + zone_id = "${aws_route53_zone.example.zone_id}" + name = "_amazonses.${aws_ses_domain_identity.example.id}" + type = "TXT" + ttl = "600" + records = ["${aws_ses_domain_identity.example.verification_token}"] +} + +resource "aws_ses_domain_identity_verification" "example_verification" { + domain = "${aws_ses_domain_identity.example.id}" + + depends_on = ["aws_route53_record.example_amazonses_verification_record"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `domain` - (Required) The domain name of the SES domain identity to verify. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The domain name of the domain identity. +* `arn` - The ARN of the domain identity. + +## Timeouts + +`acm_ses_domain_identity_verification` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) +configuration options: + +- `create` - (Default `45m`) How long to wait for a domain identity to be verified. 
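For instance, a hedged sketch of overriding that create timeout (the resource references mirror the example above; the `15m` value is purely illustrative):

```hcl
resource "aws_ses_domain_identity_verification" "example_verification" {
  domain = "${aws_ses_domain_identity.example.id}"

  # Shorten the default 45m wait if the DNS records propagate quickly.
  timeouts {
    create = "15m"
  }
}
```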
diff --git a/website/docs/r/ses_event_destination.markdown b/website/docs/r/ses_event_destination.markdown index 3c9b22061c3..c141df4143d 100644 --- a/website/docs/r/ses_event_destination.markdown +++ b/website/docs/r/ses_event_destination.markdown @@ -12,31 +12,50 @@ Provides an SES event destination ## Example Usage +### CloudWatch Destination + +```hcl +resource "aws_ses_event_destination" "cloudwatch" { + name = "event-destination-cloudwatch" + configuration_set_name = "${aws_ses_configuration_set.example.name}" + enabled = true + matching_types = ["bounce", "send"] + + cloudwatch_destination = { + default_value = "default" + dimension_name = "dimension" + value_source = "emailHeader" + } +} +``` + +### Kinesis Destination + ```hcl -# Add a firehose event destination to a configuration set resource "aws_ses_event_destination" "kinesis" { name = "event-destination-kinesis" - configuration_set_name = "${aws_ses_configuration_set.test.name}" + configuration_set_name = "${aws_ses_configuration_set.example.name}" enabled = true matching_types = ["bounce", "send"] kinesis_destination = { - stream_arn = "${aws_kinesis_firehose_delivery_stream.test_stream.arn}" - role_arn = "${aws_iam_role.firehose_role.arn}" + stream_arn = "${aws_kinesis_firehose_delivery_stream.example.arn}" + role_arn = "${aws_iam_role.example.arn}" } } +``` -# CloudWatch event destination -resource "aws_ses_event_destination" "cloudwatch" { - name = "event-destination-cloudwatch" - configuration_set_name = "${aws_ses_configuration_set.test.name}" +### SNS Destination + +```hcl +resource "aws_ses_event_destination" "sns" { + name = "event-destination-sns" + configuration_set_name = "${aws_ses_configuration_set.example.name}" enabled = true matching_types = ["bounce", "send"] - cloudwatch_destination = { - default_value = "default" - dimension_name = "dimension" - value_source = "emailHeader" + sns_destination { + topic_arn = "${aws_sns_topic.example.arn}" } } ``` @@ -48,25 +67,24 @@ The following arguments are supported: * `name` - (Required) The name of the event destination * `configuration_set_name` - (Required) The name of the configuration set * `enabled` - (Optional) If true, the event destination will be enabled -* `matching_types` - (Required) A list of matching types. May be any of `"send"`, `"reject"`, `"bounce"`, `"complaint"`, `"delivery"`, `"open"`, or `"click"`. +* `matching_types` - (Required) A list of matching types. May be any of `"send"`, `"reject"`, `"bounce"`, `"complaint"`, `"delivery"`, `"open"`, `"click"`, or `"renderingFailure"`. * `cloudwatch_destination` - (Optional) CloudWatch destination for the events * `kinesis_destination` - (Optional) Send the events to a kinesis firehose destination * `sns_destination` - (Optional) Send the events to an SNS Topic destination ~> **NOTE:** You can specify `"cloudwatch_destination"` or `"kinesis_destination"` but not both -CloudWatch Destination requires the following: +### cloudwatch_destination Argument Reference * `default_value` - (Required) The default value for the event * `dimension_name` - (Required) The name for the dimension * `value_source` - (Required) The source for the value. 
It can be either `"messageTag"` or `"emailHeader"` -Kinesis Destination requires the following: +### kinesis_destination Argument Reference * `stream_arn` - (Required) The ARN of the Kinesis Stream * `role_arn` - (Required) The ARN of the role that has permissions to access the Kinesis Stream -SNS Topic requires the following: +### sns_destination Argument Reference * `topic_arn` - (Required) The ARN of the SNS topic - diff --git a/website/docs/r/ses_identity_notification_topic.markdown b/website/docs/r/ses_identity_notification_topic.markdown new file mode 100644 index 00000000000..45d3cf3e34a --- /dev/null +++ b/website/docs/r/ses_identity_notification_topic.markdown @@ -0,0 +1,29 @@ +--- +layout: "aws" +page_title: "AWS: aws_ses_identity_notification_topic" +sidebar_current: "docs-aws-resource-ses-identity-notification-topic" +description: |- + Setting AWS SES Identity Notification Topic +--- + +# ses_identity_notification_topic + +Resource for managing SES Identity Notification Topics + +## Example Usage + +```hcl +resource "aws_ses_identity_notification_topic" "test" { + topic_arn = "${aws_sns_topic.example.arn}" + notification_type = "Bounce" + identity = "${aws_ses_domain_identity.example.domain}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `topic_arn` - (Optional) The Amazon Resource Name (ARN) of the Amazon SNS topic. Can be set to "" (an empty string) to disable publishing. +* `notification_type` - (Required) The type of notifications that will be published to the specified Amazon SNS topic. Valid Values: *Bounce*, *Complaint* or *Delivery*. +* `identity` - (Required) The identity for which the Amazon SNS topic will be set. You can specify an identity by using its name or by using its Amazon Resource Name (ARN). diff --git a/website/docs/r/ses_receipt_rule.html.markdown b/website/docs/r/ses_receipt_rule.html.markdown index dd65a579b2d..2256c5a09b5 100644 --- a/website/docs/r/ses_receipt_rule.html.markdown +++ b/website/docs/r/ses_receipt_rule.html.markdown @@ -97,3 +97,11 @@ WorkMail actions support the following: * `organization_arn` - (Required) The ARN of the WorkMail organization * `topic_arn` - (Optional) The ARN of an SNS topic to notify * `position` - (Required) The position of the action in the receipt rule + +## Import + +SES receipt rules can be imported using the ruleset name and rule name separated by `:`. + +``` +$ terraform import aws_ses_receipt_rule.my_rule my_rule_set:my_rule +``` diff --git a/website/docs/r/ses_receipt_rule_set.html.markdown b/website/docs/r/ses_receipt_rule_set.html.markdown index 1e834d19972..ba03bd58643 100644 --- a/website/docs/r/ses_receipt_rule_set.html.markdown +++ b/website/docs/r/ses_receipt_rule_set.html.markdown @@ -23,3 +23,11 @@ resource "aws_ses_receipt_rule_set" "main" { The following arguments are supported: * `rule_set_name` - (Required) The name of the rule set + +## Import + +SES receipt rule sets can be imported using the rule set name. 
+ +``` +$ terraform import aws_ses_receipt_rule_set.my_rule_set my_rule_set_name +``` diff --git a/website/docs/r/ses_template.html.markdown b/website/docs/r/ses_template.html.markdown index f61bfa969cc..46c56ebd18d 100644 --- a/website/docs/r/ses_template.html.markdown +++ b/website/docs/r/ses_template.html.markdown @@ -32,7 +32,7 @@ The following arguments are supported: ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The name of the SES template diff --git a/website/docs/r/sfn_activity.html.markdown b/website/docs/r/sfn_activity.html.markdown index ebae92c4341..b3380837c26 100644 --- a/website/docs/r/sfn_activity.html.markdown +++ b/website/docs/r/sfn_activity.html.markdown @@ -26,7 +26,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Amazon Resource Name (ARN) that identifies the created activity. * `name` - The name of the activity. diff --git a/website/docs/r/sfn_state_machine.html.markdown b/website/docs/r/sfn_state_machine.html.markdown index 1d88840929a..667cacda644 100644 --- a/website/docs/r/sfn_state_machine.html.markdown +++ b/website/docs/r/sfn_state_machine.html.markdown @@ -45,7 +45,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ARN of the state machine. * `creation_date` - The date the state machine was created. diff --git a/website/docs/r/simpledb_domain.html.markdown b/website/docs/r/simpledb_domain.html.markdown index e9900d084e9..ac10119f8db 100644 --- a/website/docs/r/simpledb_domain.html.markdown +++ b/website/docs/r/simpledb_domain.html.markdown @@ -26,7 +26,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The name of the SimpleDB domain @@ -36,4 +36,4 @@ SimpleDB Domains can be imported using the `name`, e.g. ``` $ terraform import aws_simpledb_domain.users users -``` \ No newline at end of file +``` diff --git a/website/docs/r/snapshot_create_volume_permission.html.markdown b/website/docs/r/snapshot_create_volume_permission.html.markdown index 14d0fe9002c..781ccee249b 100644 --- a/website/docs/r/snapshot_create_volume_permission.html.markdown +++ b/website/docs/r/snapshot_create_volume_permission.html.markdown @@ -37,6 +37,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - A combination of "`snapshot_id`-`account_id`". diff --git a/website/docs/r/sns_platform_application.html.markdown b/website/docs/r/sns_platform_application.html.markdown index 9b4e1cbf6be..05ecc163731 100644 --- a/website/docs/r/sns_platform_application.html.markdown +++ b/website/docs/r/sns_platform_application.html.markdown @@ -43,7 +43,7 @@ The following arguments are supported: * `event_delivery_failure_topic_arn` - (Optional) SNS Topic triggered when a delivery to any of the platform endpoints associated with your platform application encounters a permanent failure. * `event_endpoint_created_topic_arn` - (Optional) SNS Topic triggered when a new platform endpoint is added to your platform application. 
* `event_endpoint_deleted_topic_arn` - (Optional) SNS Topic triggered when an existing platform endpoint is deleted from your platform application. -* `event_endpoint_updated_topic` - (Optional) SNS Topic triggered when an existing platform endpoint is changed from your platform application. +* `event_endpoint_updated_topic_arn` - (Optional) SNS Topic triggered when an existing platform endpoint is changed from your platform application. * `failure_feedback_role_arn` - (Optional) The IAM role permitted to receive failure feedback for this application. * `platform_principal` - (Optional) Application Platform principal. See [Principal][2] for type of principal required for platform. The value of this attribute when stored into the Terraform state is only a hash of the real value, so therefore it is not practical to use this as an attribute for other resources. * `success_feedback_role_arn` - (Optional) The IAM role permitted to receive success feedback for this application. @@ -51,7 +51,7 @@ The following arguments are supported: ## Attributes Reference -The following additional attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ARN of the SNS platform application * `arn` - The ARN of the SNS platform application diff --git a/website/docs/r/sns_sms_preferences.html.markdown b/website/docs/r/sns_sms_preferences.html.markdown new file mode 100644 index 00000000000..670447e5b86 --- /dev/null +++ b/website/docs/r/sns_sms_preferences.html.markdown @@ -0,0 +1,28 @@ +--- +layout: "aws" +page_title: "AWS: sns_sms_preferences" +sidebar_current: "docs-aws-resource-sns-sms-preferences" +description: |- + Provides a way to set SNS SMS preferences. +--- + +# aws_sns_sms_preferences + +Provides a way to set SNS SMS preferences. + +## Example Usage + +```hcl +resource "aws_sns_sms_preferences" "update_sms_prefs" {} +``` + +## Argument Reference + +The following arguments are supported: + +* `monthly_spend_limit` - (Optional) The maximum amount in USD that you are willing to spend each month to send SMS messages. +* `delivery_status_iam_role_arn` - (Optional) The ARN of the IAM role that allows Amazon SNS to write logs about SMS deliveries in CloudWatch Logs. +* `delivery_status_success_sampling_rate` - (Optional) The percentage of successful SMS deliveries for which Amazon SNS will write logs in CloudWatch Logs. The value must be between 0 and 100. +* `default_sender_id` - (Optional) A string, such as your business brand, that is displayed as the sender on the receiving device. +* `default_sms_type` - (Optional) The type of SMS message that you will send by default. Possible values are: Promotional, Transactional +* `usage_report_s3_bucket` - (Optional) The name of the Amazon S3 bucket to receive daily SMS usage reports from Amazon SNS. diff --git a/website/docs/r/sns_topic.html.markdown b/website/docs/r/sns_topic.html.markdown index f28df2d703a..6b2e68f3370 100644 --- a/website/docs/r/sns_topic.html.markdown +++ b/website/docs/r/sns_topic.html.markdown @@ -18,6 +18,42 @@ resource "aws_sns_topic" "user_updates" { } ``` +## Example with Delivery Policy + +``` hcl +resource "aws_sns_topic" "user_updates" { + name = "user-updates-topic" + delivery_policy = <_success_feedback_role_arn` and `_failure_feedback_role_arn` arguments are used to give Amazon SNS write access to use CloudWatch Logs on your behalf. The `_success_feedback_sample_rate` argument is for specifying the sample rate percentage (0-100) of successfully delivered messages. 
After you configure the `_failure_feedback_role_arn` argument, then all failed message deliveries generate CloudWatch Logs. @@ -29,14 +65,15 @@ The following arguments are supported: * `name` - (Optional) The friendly name for the SNS topic. By default generated by Terraform. * `name_prefix` - (Optional) The friendly name for the SNS topic. Conflicts with `name`. * `display_name` - (Optional) The display name for the SNS topic -* `policy` - (Optional) The fully-formed AWS policy as JSON -* `delivery_policy` - (Optional) The SNS delivery policy +* `policy` - (Optional) The fully-formed AWS policy as JSON. For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). +* `delivery_policy` - (Optional) The SNS delivery policy. More on [AWS documentation](https://docs.aws.amazon.com/sns/latest/dg/DeliveryPolicies.html) * `application_success_feedback_role_arn` - (Optional) The IAM role permitted to receive success feedback for this topic * `application_success_feedback_sample_rate` - (Optional) Percentage of success to sample * `application_failure_feedback_role_arn` - (Optional) IAM role for failure feedback * `http_success_feedback_role_arn` - (Optional) The IAM role permitted to receive success feedback for this topic * `http_success_feedback_sample_rate` - (Optional) Percentage of success to sample * `http_failure_feedback_role_arn` - (Optional) IAM role for failure feedback +* `kms_master_key_id` - (Optional) The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. For more information, see [Key Terms](https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html#sse-key-terms) * `lambda_success_feedback_role_arn` - (Optional) The IAM role permitted to receive success feedback for this topic * `lambda_success_feedback_sample_rate` - (Optional) Percentage of success to sample * `lambda_failure_feedback_role_arn` - (Optional) IAM role for failure feedback @@ -46,7 +83,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ARN of the SNS topic * `arn` - The ARN of the SNS topic, as a more obvious property (clone of id) diff --git a/website/docs/r/sns_topic_policy.html.markdown b/website/docs/r/sns_topic_policy.html.markdown index dca67082675..7526db7a4fd 100644 --- a/website/docs/r/sns_topic_policy.html.markdown +++ b/website/docs/r/sns_topic_policy.html.markdown @@ -63,6 +63,7 @@ data "aws_iam_policy_document" "sns-topic-policy" { sid = "__default_statement_ID" } +} ``` ## Argument Reference @@ -70,4 +71,4 @@ data "aws_iam_policy_document" "sns-topic-policy" { The following arguments are supported: * `arn` - (Required) The ARN of the SNS topic -* `policy` - (Required) The fully-formed AWS policy as JSON +* `policy` - (Required) The fully-formed AWS policy as JSON. For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). 
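As a minimal sketch of the pattern that guide describes (the names and account ID below are hypothetical), the `policy` JSON is usually generated from an `aws_iam_policy_document` data source rather than written by hand:

```hcl
# Build the topic policy JSON from a data source instead of a raw string.
data "aws_iam_policy_document" "example" {
  statement {
    sid     = "AllowPublishFromAccount"
    actions = ["SNS:Publish"]

    principals {
      type        = "AWS"
      identifiers = ["123456789012"] # placeholder account ID
    }

    resources = ["${aws_sns_topic.example.arn}"]
  }
}

resource "aws_sns_topic_policy" "example" {
  arn    = "${aws_sns_topic.example.arn}"
  policy = "${data.aws_iam_policy_document.example.json}"
}
```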
diff --git a/website/docs/r/sns_topic_subscription.html.markdown b/website/docs/r/sns_topic_subscription.html.markdown index 36ae6e02b02..9c004468b14 100644 --- a/website/docs/r/sns_topic_subscription.html.markdown +++ b/website/docs/r/sns_topic_subscription.html.markdown @@ -13,11 +13,11 @@ This resource allows you to automatically place messages sent to SNS topics in S to a given endpoint, send SMS messages, or notify devices / applications. The most likely use case for Terraform users will probably be SQS queues. -~> **NOTE:** If SNS topic and SQS queue are in different AWS regions it is important to place the "aws_sns_topic_subscription" into the terraform configuration of the region with the SQS queue. If "aws_sns_topic_subscription" is placed in the terraform configuration of the region with the SNS topic terraform will fail to create the subscription. +~> **NOTE:** If the SNS topic and SQS queue are in different AWS regions, it is important for the "aws_sns_topic_subscription" to use an AWS provider that is in the same region as the SNS topic. If the "aws_sns_topic_subscription" uses a provider in a different region than the SNS topic, Terraform will fail to create the subscription. ~> **NOTE:** Setup of cross-account subscriptions from SNS topics to SQS queues requires Terraform to have access to BOTH accounts. -~> **NOTE:** If SNS topic and SQS queue are in different AWS accounts but the same region it is important to place the "aws_sns_topic_subscription" into the terraform configuration of the account with the SQS queue. If "aws_sns_topic_subscription" is placed in the terraform configuration of the account with the SNS topic terraform creates the subscriptions but does not keep state and tries to re-create the subscription at every apply. +~> **NOTE:** If the SNS topic and SQS queue are in different AWS accounts but the same region, it is important for the "aws_sns_topic_subscription" to use the AWS provider of the account with the SQS queue. If "aws_sns_topic_subscription" uses a provider associated with a different account than the SNS topic, Terraform creates the subscription but does not keep state and tries to re-create it at every apply. ~> **NOTE:** If SNS topic and SQS queue are in different AWS accounts and different AWS regions it is important to recognize that the subscription needs to be initiated from the account with the SQS queue but in the region of the SNS topic.
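A minimal sketch of the cross-region case described in the notes above (the provider alias, region, and topic ARN are hypothetical): the subscription is created through a provider aliased to the SNS topic's region, while the SQS queue lives with the default provider:

```hcl
# Provider alias pointed at the region of the SNS topic.
provider "aws" {
  alias  = "sns_topic_region"
  region = "us-west-1"
}

resource "aws_sns_topic_subscription" "cross_region" {
  provider  = "aws.sns_topic_region"
  topic_arn = "arn:aws:sns:us-west-1:111111111111:example-sns-topic" # placeholder ARN
  protocol  = "sqs"
  endpoint  = "${aws_sqs_queue.example.arn}"
}
```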
@@ -61,20 +61,20 @@ You can subscribe SNS topics to SQS queues in different Amazon accounts and regi */ variable "sns" { default = { - account-id = "111111111111" - role-name = "service/service-hashicorp-terraform" - name = "example-sns-topic" - display_name = "example" - region = "us-west-1" + account-id = "111111111111" + role-name = "service/service-hashicorp-terraform" + name = "example-sns-topic" + display_name = "example" + region = "us-west-1" } } variable "sqs" { default = { - account-id = "222222222222" - role-name = "service/service-hashicorp-terraform" - name = "example-sqs-queue" - region = "us-east-1" + account-id = "222222222222" + role-name = "service/service-hashicorp-terraform" + name = "example-sqs-queue" + region = "us-east-1" } } @@ -242,7 +242,8 @@ The following arguments are supported: * `endpoint_auto_confirms` - (Optional) Boolean indicating whether the end point is capable of [auto confirming subscription](http://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.html#SendMessageToHttp.prepare) e.g., PagerDuty (default is false) * `confirmation_timeout_in_minutes` - (Optional) Integer indicating number of minutes to wait in retying mode for fetching subscription arn before marking it as failure. Only applicable for http and https protocols (default is 1 minute). * `raw_message_delivery` - (Optional) Boolean indicating whether or not to enable raw message delivery (the original message is directly passed, not wrapped in JSON with the original message in the message property) (default is false). -* `filter_policy` - (Optional) The text of a filter policy to the topic subscription. +* `filter_policy` - (Optional) JSON String with the filter policy that will be used in the subscription to filter messages seen by the target resource. Refer to the [SNS docs](https://docs.aws.amazon.com/sns/latest/dg/message-filtering.html) for more details. +* `delivery_policy` - (Optional) JSON String with the delivery policy (retries, backoff, etc.) that will be used in the subscription - this only applies to HTTP/S subscriptions. Refer to the [SNS docs](https://docs.aws.amazon.com/sns/latest/dg/DeliveryPolicies.html) for more details. 
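As an illustrative sketch of the `filter_policy` argument (the topic, queue, and message attribute names are hypothetical), the policy is supplied as a JSON string:

```hcl
resource "aws_sns_topic_subscription" "orders" {
  topic_arn = "${aws_sns_topic.orders.arn}"
  protocol  = "sqs"
  endpoint  = "${aws_sqs_queue.orders.arn}"

  # Only deliver messages whose "event_type" attribute matches one of these values.
  filter_policy = <<POLICY
{
  "event_type": ["order_placed", "order_cancelled"]
}
POLICY
}
```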
### Protocols supported @@ -276,7 +277,7 @@ Endpoints have different format requirements according to the protocol that is c ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ARN of the subscription * `topic_arn` - The ARN of the topic the subscription belongs to diff --git a/website/docs/r/spot_fleet_request.html.markdown b/website/docs/r/spot_fleet_request.html.markdown index 19bd03ea5c8..5f19fd3bcb0 100644 --- a/website/docs/r/spot_fleet_request.html.markdown +++ b/website/docs/r/spot_fleet_request.html.markdown @@ -23,20 +23,22 @@ resource "aws_spot_fleet_request" "cheap_compute" { valid_until = "2019-11-04T20:44:20Z" launch_specification { - instance_type = "m4.10xlarge" - ami = "ami-1234" - spot_price = "2.793" - placement_tenancy = "dedicated" + instance_type = "m4.10xlarge" + ami = "ami-1234" + spot_price = "2.793" + placement_tenancy = "dedicated" + iam_instance_profile_arn = "${aws_iam_instance_profile.example.arn}" } launch_specification { - instance_type = "m4.4xlarge" - ami = "ami-5678" - key_name = "my-key" - spot_price = "1.117" - availability_zone = "us-west-1a" - subnet_id = "subnet-1234" - weighted_capacity = 35 + instance_type = "m4.4xlarge" + ami = "ami-5678" + key_name = "my-key" + spot_price = "1.117" + iam_instance_profile_arn = "${aws_iam_instance_profile.example.arn}" + availability_zone = "us-west-1a" + subnet_id = "subnet-1234" + weighted_capacity = 35 root_block_device { volume_size = "300" @@ -68,7 +70,7 @@ resource "aws_spot_fleet_request" "foo" { } launch_specification { - instance_type = "m3.large" + instance_type = "m5.large" ami = "ami-d06a90b0" key_name = "my-key" availability_zone = "us-west-2a" @@ -95,29 +97,36 @@ across different markets and instance types. **Note:** This takes in similar but not identical inputs as [`aws_instance`](instance.html). There are limitations on what you can specify. See the list of officially supported inputs in the - [reference documentation](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_SpotFleetLaunchSpecification.html). Any normal [`aws_instance`](instance.html) parameter that corresponds to those inputs may be used. + [reference documentation](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_SpotFleetLaunchSpecification.html). Any normal [`aws_instance`](instance.html) parameter that corresponds to those inputs may be used and it have + a additional parameter `iam_instance_profile_arn` takes `aws_iam_instance_profile` attribute `arn` as input. -* `spot_price` - (Required) The bid price per unit hour. +* `spot_price` - (Optional; Default: On-demand price) The maximum bid price per unit hour. * `wait_for_fulfillment` - (Optional; Default: false) If set, Terraform will wait for the Spot Request to be fulfilled, and will throw an error if the timeout of 10m is reached. * `target_capacity` - The number of units to request. You can choose to set the target capacity in terms of instances or a performance characteristic that is -important to your application workload, such as vCPUs, memory, or I/O. + important to your application workload, such as vCPUs, memory, or I/O. * `allocation_strategy` - Indicates how to allocate the target capacity across the Spot pools specified by the Spot fleet request. The default is -lowestPrice. + `lowestPrice`. +* `instance_pools_to_use_count` - (Optional; Default: 1) + The number of Spot pools across which to allocate your target Spot capacity. 
+ Valid only when `allocation_strategy` is set to `lowestPrice`. Spot Fleet selects + the cheapest Spot pools and evenly allocates your target Spot capacity across + the number of Spot pools that you specify. * `excess_capacity_termination_policy` - Indicates whether running Spot instances should be terminated if the target capacity of the Spot fleet request is decreased below the current size of the Spot fleet. * `terminate_instances_with_expiration` - Indicates whether running Spot instances should be terminated when the Spot fleet request expires. -* `instance_interruption_behavior` - (Optional) Indicates whether a Spot +* `instance_interruption_behaviour` - (Optional) Indicates whether a Spot instance stops or terminates when it is interrupted. Default is `terminate`. -* `valid_until` - The end date and time of the request, in UTC ISO8601 format - (for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance -requests are placed or enabled to fulfill the request. Defaults to 24 hours. +* `fleet_type` - (Optional) The type of fleet request. Indicates whether the Spot Fleet only requests the target + capacity or also attempts to maintain it. Default is `maintain`. +* `valid_until` - (Optional) The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request. Defaults to 24 hours. +* `valid_from` - (Optional) The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately. * `load_balancers` (Optional) A list of elastic load balancer names to add to the Spot fleet. * `target_group_arns` (Optional) A list of `aws_alb_target_group` ARNs, for use with Application Load Balancing. @@ -127,10 +136,11 @@ Application Load Balancing. The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions: * `create` - (Defaults to 10 mins) Used when requesting the spot instance (only valid if `wait_for_fulfillment = true`) +* `delete` - (Defaults to 5 mins) Used when destroying the spot instance ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Spot fleet request ID * `spot_request_state` - The state of the Spot fleet request. diff --git a/website/docs/r/spot_instance_request.html.markdown b/website/docs/r/spot_instance_request.html.markdown index 1b18e4d439e..7498b65a432 100644 --- a/website/docs/r/spot_instance_request.html.markdown +++ b/website/docs/r/spot_instance_request.html.markdown @@ -11,14 +11,18 @@ description: |- Provides an EC2 Spot Instance Request resource. This allows instances to be requested on the spot market. -Terraform always creates Spot Instance Requests with a `persistent` type, which -means that for the duration of their lifetime, AWS will launch an instance -with the configured details if and when the spot market will accept the -requested price. +By default Terraform creates Spot Instance Requests with a `persistent` type, +which means that for the duration of their lifetime, AWS will launch an +instance with the configured details if and when the spot market will accept +the requested price. 
On destruction, Terraform will make an attempt to terminate the associated Spot Instance if there is one present. +Spot Instances requests with a `one-time` type will close the spot request +when the instance is terminated either by the request being below the current spot +price availability or by a user. + ~> **NOTE:** Because their behavior depends on the live status of the spot market, Spot Instance Requests have a unique lifecycle that makes them behave differently than other Terraform resources. Most importantly: there is __no @@ -48,19 +52,20 @@ resource "aws_spot_instance_request" "cheap_worker" { Spot Instance Requests support all the same arguments as [`aws_instance`](instance.html), with the addition of: -* `spot_price` - (Required) The price to request on the spot market. +* `spot_price` - (Optional; Default: On-demand price) The maximum price to request on the spot market. * `wait_for_fulfillment` - (Optional; Default: false) If set, Terraform will wait for the Spot Request to be fulfilled, and will throw an error if the timeout of 10m is reached. -* `spot_type` - (Optional; Default: "persistent") If set to "one-time", after - the instance is terminated, the spot request will be closed. Also, Terraform - can't manage one-time spot requests, just launch them. +* `spot_type` - (Optional; Default: `persistent`) If set to `one-time`, after + the instance is terminated, the spot request will be closed. * `launch_group` - (Optional) A launch group is a group of spot instances that launch together and terminate together. If left empty instances are launched and terminated individually. * `block_duration_minutes` - (Optional) The required duration for the Spot instances, in minutes. This value must be a multiple of 60 (60, 120, 180, 240, 300, or 360). The duration period starts as soon as your Spot instance receives its instance ID. At the end of the duration period, Amazon EC2 marks the Spot instance for termination and provides a Spot instance termination notice, which gives the instance a two-minute warning before it terminates. Note that you can't specify an Availability Zone group or a launch group if you specify a duration. -* `instance_interruption_behavior` - (Optional) Indicates whether a Spot instance stops or terminates when it is interrupted. Default is `terminate` as this is the current AWS behaviour. +* `instance_interruption_behaviour` - (Optional) Indicates whether a Spot instance stops or terminates when it is interrupted. Default is `terminate` as this is the current AWS behaviour. +* `valid_until` - (Optional) The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request. The default end date is 7 days from the current date. +* `valid_from` - (Optional) The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately. ### Timeouts @@ -71,7 +76,7 @@ The `timeouts` block allows you to specify [timeouts](https://www.terraform.io/d ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The Spot Instance Request ID. 
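A brief, hypothetical sketch combining several of the request arguments discussed above (the AMI ID and timestamp are placeholders):

```hcl
resource "aws_spot_instance_request" "worker" {
  ami                  = "ami-1234" # placeholder AMI ID
  instance_type        = "c5.large"
  spot_type            = "one-time"             # close the request once the instance terminates
  wait_for_fulfillment = true                   # block until the request is fulfilled
  valid_until          = "2019-11-04T20:44:20Z" # stop trying to fulfill after this time
}
```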
diff --git a/website/docs/r/sqs_queue.html.markdown b/website/docs/r/sqs_queue.html.markdown index e9f78b3bcc6..2d69f2ce80a 100644 --- a/website/docs/r/sqs_queue.html.markdown +++ b/website/docs/r/sqs_queue.html.markdown @@ -56,7 +56,7 @@ The following arguments are supported: * `max_message_size` - (Optional) The limit of how many bytes a message can contain before Amazon SQS rejects it. An integer from 1024 bytes (1 KiB) up to 262144 bytes (256 KiB). The default for this attribute is 262144 (256 KiB). * `delay_seconds` - (Optional) The time in seconds that the delivery of all messages in the queue will be delayed. An integer from 0 to 900 (15 minutes). The default for this attribute is 0 seconds. * `receive_wait_time_seconds` - (Optional) The time for which a ReceiveMessage call will wait for a message to arrive (long polling) before returning. An integer from 0 to 20 (seconds). The default for this attribute is 0, meaning that the call will return immediately. -* `policy` - (Optional) The JSON policy for the SQS queue +* `policy` - (Optional) The JSON policy for the SQS queue. For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). * `redrive_policy` - (Optional) The JSON policy to set up the Dead Letter Queue, see [AWS docs](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSDeadLetterQueue.html). **Note:** when specifying `maxReceiveCount`, you must specify it as an integer (`5`), and not a string (`"5"`). * `fifo_queue` - (Optional) Boolean designating a FIFO queue. If not set, it defaults to `false` making it standard. * `content_based_deduplication` - (Optional) Enables content-based deduplication for FIFO queues. For more information, see the [related documentation](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html#FIFO-queues-exactly-once-processing) @@ -66,7 +66,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The URL for the created Amazon SQS queue. * `arn` - The ARN of the SQS queue diff --git a/website/docs/r/sqs_queue_policy.html.markdown b/website/docs/r/sqs_queue_policy.html.markdown index 579a54d8be7..b3ab3419237 100644 --- a/website/docs/r/sqs_queue_policy.html.markdown +++ b/website/docs/r/sqs_queue_policy.html.markdown @@ -49,7 +49,7 @@ POLICY The following arguments are supported: * `queue_url` - (Required) The URL of the SQS Queue to which to attach the policy -* `policy` - (Required) The JSON policy for the SQS queue +* `policy` - (Required) The JSON policy for the SQS queue. For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). ## Import diff --git a/website/docs/r/ssm_activation.html.markdown b/website/docs/r/ssm_activation.html.markdown index b7a6fa95b07..dd09bcd3632 100644 --- a/website/docs/r/ssm_activation.html.markdown +++ b/website/docs/r/ssm_activation.html.markdown @@ -46,7 +46,7 @@ resource "aws_ssm_activation" "foo" { The following arguments are supported: -* `name` - (Optional) The default name of the registerd managed instance. +* `name` - (Optional) The default name of the registered managed instance. * `description` - (Optional) The description of the resource that you want to register. 
* `expiration_date` - (Optional) A timestamp in [RFC3339 format](https://tools.ietf.org/html/rfc3339#section-5.8) by which this activation request should expire. The default value is 24 hours from resource creation time. * `iam_role` - (Required) The IAM Role to attach to the managed instance. @@ -54,10 +54,10 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `activation_code` - The code the system generates when it processes the activation. -* `name` - The default name of the registerd managed instance. +* `name` - The default name of the registered managed instance. * `description` - The description of the resource that was registered. * `expired` - If the current activation has expired. * `expiration_date` - The date by which this activation request should expire. The default value is 24 hours. diff --git a/website/docs/r/ssm_association.html.markdown b/website/docs/r/ssm_association.html.markdown index 0b96e8d4ad4..e078648855d 100644 --- a/website/docs/r/ssm_association.html.markdown +++ b/website/docs/r/ssm_association.html.markdown @@ -13,55 +13,14 @@ Associates an SSM Document to an instance or EC2 tag. ## Example Usage ```hcl -resource "aws_security_group" "tf_test_foo" { - name = "tf_test_foo" - description = "foo" +resource "aws_ssm_association" "example" { + name = "${aws_ssm_document.example.name}" - ingress { - protocol = "icmp" - from_port = -1 - to_port = -1 - cidr_blocks = ["0.0.0.0/0"] + targets { + key = "InstanceIds" + values = "${aws_instance.example.id}" } } - -resource "aws_instance" "foo" { - # eu-west-1 - ami = "ami-f77ac884" - availability_zone = "eu-west-1a" - instance_type = "t2.small" - security_groups = ["${aws_security_group.tf_test_foo.name}"] -} - -resource "aws_ssm_document" "foo_document" { - name = "test_document_association-%s" - document_type = "Command" - - content = < **NOTE:** The Storage Gateway API provides no method to remove a cache disk. Destroying this Terraform resource does not perform any Storage Gateway actions. + +## Example Usage + +```hcl +resource "aws_storagegateway_cache" "example" { + disk_id = "${data.aws_storagegateway_local_disk.example.id}" + gateway_arn = "${aws_storagegateway_gateway.example.arn}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `disk_id` - (Required) Local disk identifier. For example, `pci-0000:03:00.0-scsi-0:0:0:0`. +* `gateway_arn` - (Required) The Amazon Resource Name (ARN) of the gateway. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Combined gateway Amazon Resource Name (ARN) and local disk identifier. + +## Import + +`aws_storagegateway_cache` can be imported by using the gateway Amazon Resource Name (ARN) and local disk identifier separated with a colon (`:`), e.g. 
+ +``` +$ terraform import aws_storagegateway_cache.example arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678:pci-0000:03:00.0-scsi-0:0:0:0 +``` diff --git a/website/docs/r/storagegateway_cached_iscsi_volume.html.markdown b/website/docs/r/storagegateway_cached_iscsi_volume.html.markdown new file mode 100644 index 00000000000..1d5b309bcab --- /dev/null +++ b/website/docs/r/storagegateway_cached_iscsi_volume.html.markdown @@ -0,0 +1,86 @@ +--- +layout: "aws" +page_title: "AWS: aws_storagegateway_cached_iscsi_volume" +sidebar_current: "docs-aws-resource-storagegateway-cached-iscsi-volume" +description: |- + Manages an AWS Storage Gateway cached iSCSI volume +--- + +# aws_storagegateway_cached_iscsi_volume + +Manages an AWS Storage Gateway cached iSCSI volume. + +~> **NOTE:** The gateway must have cache added (e.g. via the [`aws_storagegateway_cache`](/docs/providers/aws/r/storagegateway_cache.html) resource) before creating volumes otherwise the Storage Gateway API will return an error. + +~> **NOTE:** The gateway must have an upload buffer added (e.g. via the [`aws_storagegateway_upload_buffer`](/docs/providers/aws/r/storagegateway_upload_buffer.html) resource) before the volume is operational to clients, however the Storage Gateway API will allow volume creation without error in that case and return volume status as `UPLOAD BUFFER NOT CONFIGURED`. + +## Example Usage + +~> **NOTE:** These examples are referencing the [`aws_storagegateway_cache`](/docs/providers/aws/r/storagegateway_cache.html) resource `gateway_arn` attribute to ensure Terraform properly adds cache before creating the volume. If you are not using this method, you may need to declare an expicit dependency (e.g. via `depends_on = ["aws_storagegateway_cache.example"]`) to ensure proper ordering. + +### Create Empty Cached iSCSI Volume + +```hcl +resource "aws_storagegateway_cached_iscsi_volume" "example" { + gateway_arn = "${aws_storagegateway_cache.example.gateway_arn}" + network_interface_id = "${aws_instance.example.private_ip}" + target_name = "example" + volume_size_in_bytes = 5368709120 # 5 GB +} +``` + +### Create Cached iSCSI Volume From Snapshot + +```hcl +resource "aws_storagegateway_cached_iscsi_volume" "example" { + gateway_arn = "${aws_storagegateway_cache.example.gateway_arn}" + network_interface_id = "${aws_instance.example.private_ip}" + snapshot_id = "${aws_ebs_snapshot.example.id}" + target_name = "example" + volume_size_in_bytes = "${aws_ebs_snapshot.example.volume_size * 1024 * 1024 * 1024}" +} +``` + +### Create Cached iSCSI Volume From Source Volume + +```hcl +resource "aws_storagegateway_cached_iscsi_volume" "example" { + gateway_arn = "${aws_storagegateway_cache.example.gateway_arn}" + network_interface_id = "${aws_instance.example.private_ip}" + source_volume_arn = "${aws_storagegateway_cached_iscsi_volume.existing.arn}" + target_name = "example" + volume_size_in_bytes = "${aws_storagegateway_cached_iscsi_volume.existing.volume_size_in_bytes}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `gateway_arn` - (Required) The Amazon Resource Name (ARN) of the gateway. +* `network_interface_id` - (Required) The network interface of the gateway on which to expose the iSCSI target. Only IPv4 addresses are accepted. +* `target_name` - (Required) The name of the iSCSI target used by initiators to connect to the target and as a suffix for the target ARN. The target name must be unique across all volumes of a gateway. 
+* `volume_size_in_bytes` - (Required) The size of the volume in bytes. +* `snapshot_id` - (Optional) The snapshot ID of the snapshot to restore as the new cached volume. e.g. `snap-1122aabb`. +* `source_volume_arn` - (Optional) The ARN for an existing volume. Specifying this ARN makes the new volume into an exact copy of the specified existing volume's latest recovery point. The `volume_size_in_bytes` value for this new volume must be equal to or larger than the size of the existing volume, in bytes. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Volume Amazon Resource Name (ARN), e.g. `arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678/volume/vol-12345678`. +* `chap_enabled` - Whether mutual CHAP is enabled for the iSCSI target. +* `id` - Volume Amazon Resource Name (ARN), e.g. `arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678/volume/vol-12345678`. +* `lun_number` - Logical disk number. +* `network_interface_port` - The port used to communicate with iSCSI targets. +* `target_arn` - Target Amazon Resource Name (ARN), e.g. `arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678/target/iqn.1997-05.com.amazon:TargetName`. +* `volume_arn` - Volume Amazon Resource Name (ARN), e.g. `arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678/volume/vol-12345678`. +* `volume_id` - Volume ID, e.g. `vol-12345678`. + +## Import + +`aws_storagegateway_cached_iscsi_volume` can be imported by using the volume Amazon Resource Name (ARN), e.g. + +``` +$ terraform import aws_storagegateway_cached_iscsi_volume.example arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678/volume/vol-12345678 +``` diff --git a/website/docs/r/storagegateway_gateway.html.markdown b/website/docs/r/storagegateway_gateway.html.markdown new file mode 100644 index 00000000000..ff7df1b619a --- /dev/null +++ b/website/docs/r/storagegateway_gateway.html.markdown @@ -0,0 +1,109 @@ +--- +layout: "aws" +page_title: "AWS: aws_storagegateway_gateway" +sidebar_current: "docs-aws-resource-storagegateway-gateway" +description: |- + Manages an AWS Storage Gateway file, tape, or volume gateway in the provider region +--- + +# aws_storagegateway_gateway + +Manages an AWS Storage Gateway file, tape, or volume gateway in the provider region. + +~> NOTE: The Storage Gateway API requires the gateway to be connected to properly return information after activation. If you are receiving `The specified gateway is not connected` errors during resource creation (gateway activation), ensure your gateway instance meets the [Storage Gateway requirements](https://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html).
+
+## Example Usage
+
+### File Gateway
+
+```hcl
+resource "aws_storagegateway_gateway" "example" {
+  gateway_ip_address = "1.2.3.4"
+  gateway_name       = "example"
+  gateway_timezone   = "GMT"
+  gateway_type       = "FILE_S3"
+}
+```
+
+### Tape Gateway
+
+```hcl
+resource "aws_storagegateway_gateway" "example" {
+  gateway_ip_address = "1.2.3.4"
+  gateway_name       = "example"
+  gateway_timezone   = "GMT"
+  gateway_type       = "VTL"
+  media_changer_type = "AWS-Gateway-VTL"
+  tape_drive_type    = "IBM-ULT3580-TD5"
+}
+```
+
+### Volume Gateway (Cached)
+
+```hcl
+resource "aws_storagegateway_gateway" "example" {
+  gateway_ip_address = "1.2.3.4"
+  gateway_name       = "example"
+  gateway_timezone   = "GMT"
+  gateway_type       = "CACHED"
+}
+```
+
+### Volume Gateway (Stored)
+
+```hcl
+resource "aws_storagegateway_gateway" "example" {
+  gateway_ip_address = "1.2.3.4"
+  gateway_name       = "example"
+  gateway_timezone   = "GMT"
+  gateway_type       = "STORED"
+}
+```
+
+## Argument Reference
+
+~> **NOTE:** One of `activation_key` or `gateway_ip_address` must be provided for resource creation (gateway activation). Neither is required for resource import. If using `gateway_ip_address`, Terraform must be able to make an HTTP (port 80) GET request to the specified IP address from where it is running.
+
+The following arguments are supported:
+
+* `gateway_name` - (Required) Name of the gateway.
+* `gateway_timezone` - (Required) Time zone for the gateway. The time zone is of the format "GMT", "GMT-hr:mm", or "GMT+hr:mm". For example, `GMT-4:00` indicates the time is 4 hours behind GMT. The time zone is used, for example, for scheduling snapshots and your gateway's maintenance schedule.
+* `activation_key` - (Optional) Gateway activation key during resource creation. Conflicts with `gateway_ip_address`. Additional information is available in the [Storage Gateway User Guide](https://docs.aws.amazon.com/storagegateway/latest/userguide/get-activation-key.html).
+* `gateway_ip_address` - (Optional) Gateway IP address to retrieve activation key during resource creation. Conflicts with `activation_key`. Gateway must be accessible on port 80 from where Terraform is running. Additional information is available in the [Storage Gateway User Guide](https://docs.aws.amazon.com/storagegateway/latest/userguide/get-activation-key.html).
+* `gateway_type` - (Optional) Type of the gateway. The default value is `STORED`. Valid values: `CACHED`, `FILE_S3`, `STORED`, `VTL`.
+* `media_changer_type` - (Optional) Type of medium changer to use for tape gateway. Terraform cannot detect drift of this argument. Valid values: `STK-L700`, `AWS-Gateway-VTL`.
+* `smb_active_directory_settings` - (Optional) Nested argument with Active Directory domain join information for Server Message Block (SMB) file shares. Only valid for `FILE_S3` gateway type. Must be set before creating `ActiveDirectory` authentication SMB file shares. More details below (an illustrative example follows this list).
+* `smb_guest_password` - (Optional) Guest password for Server Message Block (SMB) file shares. Only valid for `FILE_S3` gateway type. Must be set before creating `GuestAccess` authentication SMB file shares. Terraform can only detect drift of the existence of a guest password, not its actual value from the gateway. Terraform can, however, update the password by changing the argument.
+* `tape_drive_type` - (Optional) Type of tape drive to use for tape gateway. Terraform cannot detect drift of this argument. Valid values: `IBM-ULT3580-TD5`.
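+
+For illustration only, a minimal sketch of a `FILE_S3` gateway joining an Active Directory domain via the nested block documented in the next section (the domain name and credentials below are placeholders):
+
+```hcl
+resource "aws_storagegateway_gateway" "example" {
+  gateway_ip_address = "1.2.3.4"
+  gateway_name       = "example"
+  gateway_timezone   = "GMT"
+  gateway_type       = "FILE_S3"
+
+  # Joins the gateway to the domain so ActiveDirectory SMB file shares can be created.
+  smb_active_directory_settings {
+    domain_name = "corp.example.com"
+    username    = "Admin"
+    password    = "avoid-plaintext-passwords"
+  }
+}
+```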
+
+### smb_active_directory_settings
+
+Information to join the gateway to an Active Directory domain for Server Message Block (SMB) file shares.
+
+~> **NOTE:** It is not possible to unconfigure this setting without recreating the gateway. Also, Terraform can only detect drift of the `domain_name` argument from the gateway.
+
+* `domain_name` - (Required) The name of the domain that you want the gateway to join.
+* `password` - (Required) The password of the user who has permission to add the gateway to the Active Directory domain.
+* `username` - (Required) The user name of the user who has permission to add the gateway to the Active Directory domain.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - Amazon Resource Name (ARN) of the gateway.
+* `arn` - Amazon Resource Name (ARN) of the gateway.
+* `gateway_id` - Identifier of the gateway.
+
+## Timeouts
+
+`aws_storagegateway_gateway` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) configuration options:
+
+* `create` - (Default `10m`) How long to wait for gateway activation and connection to Storage Gateway.
+
+## Import
+
+`aws_storagegateway_gateway` can be imported by using the gateway Amazon Resource Name (ARN), e.g.
+
+```
+$ terraform import aws_storagegateway_gateway.example arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678
+```
diff --git a/website/docs/r/storagegateway_nfs_file_share.html.markdown b/website/docs/r/storagegateway_nfs_file_share.html.markdown
new file mode 100644
index 00000000000..275e60f9a09
--- /dev/null
+++ b/website/docs/r/storagegateway_nfs_file_share.html.markdown
@@ -0,0 +1,74 @@
+---
+layout: "aws"
+page_title: "AWS: aws_storagegateway_nfs_file_share"
+sidebar_current: "docs-aws-resource-storagegateway-nfs-file-share"
+description: |-
+  Manages an AWS Storage Gateway NFS File Share
+---
+
+# aws_storagegateway_nfs_file_share
+
+Manages an AWS Storage Gateway NFS File Share.
+
+## Example Usage
+
+```hcl
+resource "aws_storagegateway_nfs_file_share" "example" {
+  client_list  = ["0.0.0.0/0"]
+  gateway_arn  = "${aws_storagegateway_gateway.example.arn}"
+  location_arn = "${aws_s3_bucket.example.arn}"
+  role_arn     = "${aws_iam_role.example.arn}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `client_list` - (Required) The list of clients that are allowed to access the file gateway. The list must contain either valid IP addresses or valid CIDR blocks. Set to `["0.0.0.0/0"]` to not limit access. Minimum 1 item. Maximum 100 items.
+* `gateway_arn` - (Required) Amazon Resource Name (ARN) of the file gateway.
+* `location_arn` - (Required) The ARN of the backed storage used for storing file data.
+* `role_arn` - (Required) The ARN of the AWS Identity and Access Management (IAM) role that a file gateway assumes when it accesses the underlying storage.
+* `default_storage_class` - (Optional) The default storage class for objects put into an Amazon S3 bucket by the file gateway. Defaults to `S3_STANDARD`. Valid values: `S3_STANDARD`, `S3_STANDARD_IA`, `S3_ONEZONE_IA`.
+* `guess_mime_type_enabled` - (Optional) Boolean value that enables guessing of the MIME type for uploaded objects based on file extensions. Defaults to `true`.
+* `kms_encrypted` - (Optional) Boolean value; if `true`, use Amazon S3 server side encryption with your own AWS KMS key, or if `false`, use a key managed by Amazon S3. Defaults to `false`.
+* `kms_key_arn` - (Optional) Amazon Resource Name (ARN) for the KMS key used for Amazon S3 server side encryption. This value can only be set when `kms_encrypted` is true.
+* `nfs_file_share_defaults` - (Optional) Nested argument with file share default values. More information below.
+* `object_acl` - (Optional) Access Control List permission for S3 bucket objects. Defaults to `private`.
+* `read_only` - (Optional) Boolean to indicate the write status of the file share. The file share does not accept writes if `true`. Defaults to `false`.
+* `requester_pays` - (Optional) Boolean indicating who pays the cost of the request and the data download from the Amazon S3 bucket. Set this value to `true` if you want the requester to pay instead of the bucket owner. Defaults to `false`.
+* `squash` - (Optional) Maps a user to the anonymous user. Defaults to `RootSquash`. Valid values: `RootSquash` (only root is mapped to the anonymous user), `NoSquash` (no one is mapped to the anonymous user), `AllSquash` (everyone is mapped to the anonymous user).
+
+### nfs_file_share_defaults
+
+Files and folders stored as Amazon S3 objects in S3 buckets don't, by default, have Unix file permissions assigned to them. Upon discovery in an S3 bucket by Storage Gateway, the S3 objects that represent files and folders are assigned these default Unix permissions.
+
+* `directory_mode` - (Optional) The Unix directory mode in the string form "nnnn". Defaults to `"0777"`.
+* `file_mode` - (Optional) The Unix file mode in the string form "nnnn". Defaults to `"0666"`.
+* `group_id` - (Optional) The default group ID for the file share (unless the files have another group ID specified). Defaults to `65534` (`nfsnobody`). Valid values: `0` through `4294967294`.
+* `owner_id` - (Optional) The default owner ID for the file share (unless the files have another owner ID specified). Defaults to `65534` (`nfsnobody`). Valid values: `0` through `4294967294`.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - Amazon Resource Name (ARN) of the NFS File Share.
+* `arn` - Amazon Resource Name (ARN) of the NFS File Share.
+* `fileshare_id` - ID of the NFS File Share.
+* `path` - File share path used by the NFS client to identify the mount point.
+
+## Timeouts
+
+`aws_storagegateway_nfs_file_share` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) configuration options:
+
+* `create` - (Default `10m`) How long to wait for file share creation.
+* `update` - (Default `10m`) How long to wait for file share updates.
+* `delete` - (Default `10m`) How long to wait for file share deletion.
+
+## Import
+
+`aws_storagegateway_nfs_file_share` can be imported by using the NFS File Share Amazon Resource Name (ARN), e.g.
+
+```
+$ terraform import aws_storagegateway_nfs_file_share.example arn:aws:storagegateway:us-east-1:123456789012:share/share-12345678
+```
diff --git a/website/docs/r/storagegateway_smb_file_share.html.markdown b/website/docs/r/storagegateway_smb_file_share.html.markdown
new file mode 100644
index 00000000000..9c793abc6f3
--- /dev/null
+++ b/website/docs/r/storagegateway_smb_file_share.html.markdown
@@ -0,0 +1,92 @@
+---
+layout: "aws"
+page_title: "AWS: aws_storagegateway_smb_file_share"
+sidebar_current: "docs-aws-resource-storagegateway-smb-file-share"
+description: |-
+  Manages an AWS Storage Gateway SMB File Share
+---
+
+# aws_storagegateway_smb_file_share
+
+Manages an AWS Storage Gateway SMB File Share.
+
+## Example Usage
+
+### Active Directory Authentication
+
+~> **NOTE:** The gateway must have already joined the Active Directory domain prior to SMB file share creation, e.g. via "SMB Settings" in the AWS Storage Gateway console or `smb_active_directory_settings` in the [`aws_storagegateway_gateway` resource](/docs/providers/aws/r/storagegateway_gateway.html).
+
+```hcl
+resource "aws_storagegateway_smb_file_share" "example" {
+  authentication = "ActiveDirectory"
+  gateway_arn    = "${aws_storagegateway_gateway.example.arn}"
+  location_arn   = "${aws_s3_bucket.example.arn}"
+  role_arn       = "${aws_iam_role.example.arn}"
+}
+```
+
+### Guest Authentication
+
+~> **NOTE:** The gateway must have already had the SMB guest password set prior to SMB file share creation, e.g. via "SMB Settings" in the AWS Storage Gateway console or `smb_guest_password` in the [`aws_storagegateway_gateway` resource](/docs/providers/aws/r/storagegateway_gateway.html).
+
+```hcl
+resource "aws_storagegateway_smb_file_share" "example" {
+  authentication = "GuestAccess"
+  gateway_arn    = "${aws_storagegateway_gateway.example.arn}"
+  location_arn   = "${aws_s3_bucket.example.arn}"
+  role_arn       = "${aws_iam_role.example.arn}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `gateway_arn` - (Required) Amazon Resource Name (ARN) of the file gateway.
+* `location_arn` - (Required) The ARN of the backed storage used for storing file data.
+* `role_arn` - (Required) The ARN of the AWS Identity and Access Management (IAM) role that a file gateway assumes when it accesses the underlying storage.
+* `authentication` - (Optional) The authentication method that users use to access the file share. Defaults to `ActiveDirectory`. Valid values: `ActiveDirectory`, `GuestAccess`.
+* `default_storage_class` - (Optional) The default storage class for objects put into an Amazon S3 bucket by the file gateway. Defaults to `S3_STANDARD`. Valid values: `S3_STANDARD`, `S3_STANDARD_IA`, `S3_ONEZONE_IA`.
+* `guess_mime_type_enabled` - (Optional) Boolean value that enables guessing of the MIME type for uploaded objects based on file extensions. Defaults to `true`.
+* `invalid_user_list` - (Optional) A list of users in the Active Directory that are not allowed to access the file share. Only valid if `authentication` is set to `ActiveDirectory`.
+* `kms_encrypted` - (Optional) Boolean value; if `true`, use Amazon S3 server side encryption with your own AWS KMS key, or if `false`, use a key managed by Amazon S3. Defaults to `false`.
+* `kms_key_arn` - (Optional) Amazon Resource Name (ARN) for the KMS key used for Amazon S3 server side encryption. This value can only be set when `kms_encrypted` is true.
+* `smb_file_share_defaults` - (Optional) Nested argument with file share default values. More information below.
+* `object_acl` - (Optional) Access Control List permission for S3 bucket objects. Defaults to `private`.
+* `read_only` - (Optional) Boolean to indicate the write status of the file share. The file share does not accept writes if `true`. Defaults to `false`.
+* `requester_pays` - (Optional) Boolean indicating who pays the cost of the request and the data download from the Amazon S3 bucket. Set this value to `true` if you want the requester to pay instead of the bucket owner. Defaults to `false`.
+* `valid_user_list` - (Optional) A list of users in the Active Directory that are allowed to access the file share. Only valid if `authentication` is set to `ActiveDirectory`.
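+
+As an illustrative sketch only (the user names and the referenced KMS key resource are placeholders), several of the optional arguments above can be combined on an `ActiveDirectory` share like this:
+
+```hcl
+resource "aws_storagegateway_smb_file_share" "example" {
+  authentication = "ActiveDirectory"
+  gateway_arn    = "${aws_storagegateway_gateway.example.arn}"
+  location_arn   = "${aws_s3_bucket.example.arn}"
+  role_arn       = "${aws_iam_role.example.arn}"
+
+  # Restrict access to specific Active Directory users (placeholder names).
+  valid_user_list = ["user1", "user2"]
+
+  # Encrypt objects with a customer-managed KMS key instead of an S3-managed key.
+  kms_encrypted = true
+  kms_key_arn   = "${aws_kms_key.example.arn}"
+}
+```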
+
+### smb_file_share_defaults
+
+Files and folders stored as Amazon S3 objects in S3 buckets don't, by default, have Unix file permissions assigned to them. Upon discovery in an S3 bucket by Storage Gateway, the S3 objects that represent files and folders are assigned these default Unix permissions.
+
+* `directory_mode` - (Optional) The Unix directory mode in the string form "nnnn". Defaults to `"0777"`.
+* `file_mode` - (Optional) The Unix file mode in the string form "nnnn". Defaults to `"0666"`.
+* `group_id` - (Optional) The default group ID for the file share (unless the files have another group ID specified). Defaults to `0`. Valid values: `0` through `4294967294`.
+* `owner_id` - (Optional) The default owner ID for the file share (unless the files have another owner ID specified). Defaults to `0`. Valid values: `0` through `4294967294`.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - Amazon Resource Name (ARN) of the SMB File Share.
+* `arn` - Amazon Resource Name (ARN) of the SMB File Share.
+* `fileshare_id` - ID of the SMB File Share.
+* `path` - File share path used by the SMB client to identify the mount point.
+
+## Timeouts
+
+`aws_storagegateway_smb_file_share` provides the following [Timeouts](/docs/configuration/resources.html#timeouts) configuration options:
+
+* `create` - (Default `10m`) How long to wait for file share creation.
+* `update` - (Default `10m`) How long to wait for file share updates.
+* `delete` - (Default `15m`) How long to wait for file share deletion.
+
+## Import
+
+`aws_storagegateway_smb_file_share` can be imported by using the SMB File Share Amazon Resource Name (ARN), e.g.
+
+```
+$ terraform import aws_storagegateway_smb_file_share.example arn:aws:storagegateway:us-east-1:123456789012:share/share-12345678
+```
diff --git a/website/docs/r/storagegateway_upload_buffer.html.markdown b/website/docs/r/storagegateway_upload_buffer.html.markdown
new file mode 100644
index 00000000000..d7434972295
--- /dev/null
+++ b/website/docs/r/storagegateway_upload_buffer.html.markdown
@@ -0,0 +1,43 @@
+---
+layout: "aws"
+page_title: "AWS: aws_storagegateway_upload_buffer"
+sidebar_current: "docs-aws-resource-storagegateway-upload-buffer"
+description: |-
+  Manages an AWS Storage Gateway upload buffer
+---
+
+# aws_storagegateway_upload_buffer
+
+Manages an AWS Storage Gateway upload buffer.
+
+~> **NOTE:** The Storage Gateway API provides no method to remove an upload buffer disk. Destroying this Terraform resource does not perform any Storage Gateway actions.
+
+## Example Usage
+
+```hcl
+resource "aws_storagegateway_upload_buffer" "example" {
+  disk_id     = "${data.aws_storagegateway_local_disk.example.id}"
+  gateway_arn = "${aws_storagegateway_gateway.example.arn}"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `disk_id` - (Required) Local disk identifier. For example, `pci-0000:03:00.0-scsi-0:0:0:0`.
+* `gateway_arn` - (Required) The Amazon Resource Name (ARN) of the gateway.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - Combined gateway Amazon Resource Name (ARN) and local disk identifier.
+
+## Import
+
+`aws_storagegateway_upload_buffer` can be imported by using the gateway Amazon Resource Name (ARN) and local disk identifier separated with a colon (`:`), e.g.
+ +``` +$ terraform import aws_storagegateway_upload_buffer.example arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678:pci-0000:03:00.0-scsi-0:0:0:0 +``` diff --git a/website/docs/r/storagegateway_working_storage.html.markdown b/website/docs/r/storagegateway_working_storage.html.markdown new file mode 100644 index 00000000000..3dce28b3ffb --- /dev/null +++ b/website/docs/r/storagegateway_working_storage.html.markdown @@ -0,0 +1,43 @@ +--- +layout: "aws" +page_title: "AWS: aws_storagegateway_working_storage" +sidebar_current: "docs-aws-resource-storagegateway-working-storage" +description: |- + Manages an AWS Storage Gateway working storage +--- + +# aws_storagegateway_working_storage + +Manages an AWS Storage Gateway working storage. + +~> **NOTE:** The Storage Gateway API provides no method to remove a working storage disk. Destroying this Terraform resource does not perform any Storage Gateway actions. + +## Example Usage + +```hcl +resource "aws_storagegateway_working_storage" "example" { + disk_id = "${data.aws_storagegateway_local_disk.example.id}" + gateway_arn = "${aws_storagegateway_gateway.example.arn}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `disk_id` - (Required) Local disk identifier. For example, `pci-0000:03:00.0-scsi-0:0:0:0`. +* `gateway_arn` - (Required) The Amazon Resource Name (ARN) of the gateway. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Combined gateway Amazon Resource Name (ARN) and local disk identifier. + +## Import + +`aws_storagegateway_working_storage` can be imported by using the gateway Amazon Resource Name (ARN) and local disk identifier separated with a colon (`:`), e.g. + +``` +$ terraform import aws_storagegateway_working_storage.example arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678:pci-0000:03:00.0-scsi-0:0:0:0 +``` diff --git a/website/docs/r/subnet.html.markdown b/website/docs/r/subnet.html.markdown index ec619e2dd22..6e9ce2c16dc 100644 --- a/website/docs/r/subnet.html.markdown +++ b/website/docs/r/subnet.html.markdown @@ -12,6 +12,8 @@ Provides an VPC subnet resource. ## Example Usage +### Basic Usage + ```hcl resource "aws_subnet" "main" { vpc_id = "${aws_vpc.main.id}" @@ -23,6 +25,23 @@ resource "aws_subnet" "main" { } ``` +### Subnets In Secondary VPC CIDR Blocks + +When managing subnets in one of a VPC's secondary CIDR blocks created using a [`aws_vpc_ipv4_cidr_block_association`](vpc_ipv4_cidr_block_association.html) +resource, it is recommended to reference that resource's `vpc_id` attribute to ensure correct dependency ordering. + +```hcl +resource "aws_vpc_ipv4_cidr_block_association" "secondary_cidr" { + vpc_id = "${aws_vpc.main.id}" + cidr_block = "172.2.0.0/16" +} + +resource "aws_subnet" "in_secondary_cidr" { + vpc_id = "${aws_vpc_ipv4_cidr_block_association.secondary_cidr.vpc_id}" + cidr_block = "172.2.0.0/24" +} +``` + ## Argument Reference The following arguments are supported: @@ -42,13 +61,14 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the subnet +* `arn` - The ARN of the subnet. * `availability_zone`- The AZ for the subnet. * `cidr_block` - The CIDR block for the subnet. * `vpc_id` - The VPC ID. -* `ipv6_association_id` - The association ID for the IPv6 CIDR block. 
+* `ipv6_cidr_block_association_id` - The association ID for the IPv6 CIDR block. * `ipv6_cidr_block` - The IPv6 CIDR block. ## Import @@ -57,4 +77,4 @@ Subnets can be imported using the `subnet id`, e.g. ``` $ terraform import aws_subnet.public_subnet subnet-9d4a7b6c -``` \ No newline at end of file +``` diff --git a/website/docs/r/swf_domain.html.markdown b/website/docs/r/swf_domain.html.markdown new file mode 100644 index 00000000000..a1cc20df9c6 --- /dev/null +++ b/website/docs/r/swf_domain.html.markdown @@ -0,0 +1,46 @@ +--- +layout: "aws" +page_title: "AWS: aws_swf_domain" +sidebar_current: "docs-aws-resource-swf-domain" +description: |- + Provides an SWF Domain resource +--- + +# aws_swf_domain + +Provides an SWF Domain resource. + +## Example Usage + +To register a basic SWF domain: + +```hcl +resource "aws_swf_domain" "foo" { + name = "foo" + description = "Terraform SWF Domain" + workflow_execution_retention_period_in_days = 30 +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Optional, Forces new resource) The name of the domain. If omitted, Terraform will assign a random, unique name. +* `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. +* `description` - (Optional, Forces new resource) The domain description. +* `workflow_execution_retention_period_in_days` - (Required, Forces new resource) Length of time that SWF will continue to retain information about the workflow execution after the workflow execution is complete, must be between 0 and 90 days. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The name of the domain. + +## Import + +SWF Domains can be imported using the `name`, e.g. + +``` +$ terraform import aws_swf_domain.foo test-domain +``` diff --git a/website/docs/r/vpc.html.markdown b/website/docs/r/vpc.html.markdown index 5111dc5370a..3ece4c247b4 100644 --- a/website/docs/r/vpc.html.markdown +++ b/website/docs/r/vpc.html.markdown @@ -53,8 +53,9 @@ the size of the CIDR block. Default is `false`. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: +* `arn` - Amazon Resource Name (ARN) of VPC * `id` - The ID of the VPC * `cidr_block` - The CIDR block of the VPC * `instance_tenancy` - Tenancy of instances spin up within VPC. diff --git a/website/docs/r/vpc_dhcp_options.html.markdown b/website/docs/r/vpc_dhcp_options.html.markdown index 00fd0a9c5bf..7198af0352b 100644 --- a/website/docs/r/vpc_dhcp_options.html.markdown +++ b/website/docs/r/vpc_dhcp_options.html.markdown @@ -56,7 +56,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the DHCP Options Set. @@ -70,4 +70,4 @@ VPC DHCP Options can be imported using the `dhcp options id`, e.g. 
``` $ terraform import aws_vpc_dhcp_options.my_options dopt-d9070ebb -``` \ No newline at end of file +``` diff --git a/website/docs/r/vpc_dhcp_options_association.html.markdown b/website/docs/r/vpc_dhcp_options_association.html.markdown index f4608e9dad2..56b6b887d07 100644 --- a/website/docs/r/vpc_dhcp_options_association.html.markdown +++ b/website/docs/r/vpc_dhcp_options_association.html.markdown @@ -32,6 +32,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the DHCP Options Set Association. diff --git a/website/docs/r/vpc_endpoint.html.markdown b/website/docs/r/vpc_endpoint.html.markdown index b13feacd308..a3973116ac9 100644 --- a/website/docs/r/vpc_endpoint.html.markdown +++ b/website/docs/r/vpc_endpoint.html.markdown @@ -37,13 +37,46 @@ resource "aws_vpc_endpoint" "ec2" { vpc_endpoint_type = "Interface" security_group_ids = [ - "${aws_security_group.sg1.id}" + "${aws_security_group.sg1.id}", ] private_dns_enabled = true } ``` +Custom Service Usage: + +```hcl +resource "aws_vpc_endpoint" "ptfe_service" { + vpc_id = "${var.vpc_id}" + service_name = "${var.ptfe_service}" + vpc_endpoint_type = "Interface" + + security_group_ids = [ + "${aws_security_group.ptfe_service.id}", + ] + + subnet_ids = ["${local.subnet_ids}"] + private_dns_enabled = false +} + +data "aws_route53_zone" "internal" { + name = "vpc.internal." + private_zone = true + vpc_id = "${var.vpc_id}" +} + +resource "aws_route53_record" "ptfe_service" { + zone_id = "${data.aws_route53_zone.internal.zone_id}" + name = "ptfe.${data.aws_route53_zone.internal.name}" + type = "CNAME" + ttl = "300" + records = ["${lookup(aws_vpc_endpoint.ptfe_service.dns_entry[0], "dns_name")}"] +} +``` + +~> **NOTE The `dns_entry` output is a list of maps:** Terraform interpolation support for lists of maps requires the `lookup` and `[]` until full support of lists of maps is available + ## Argument Reference The following arguments are supported: @@ -52,17 +85,25 @@ The following arguments are supported: * `vpc_endpoint_type` - (Optional) The VPC endpoint type, `Gateway` or `Interface`. Defaults to `Gateway`. * `service_name` - (Required) The service name, in the form `com.amazonaws.region.service` for AWS services. * `auto_accept` - (Optional) Accept the VPC endpoint (the VPC endpoint and service need to be in the same AWS account). -* `policy` - (Optional) A policy to attach to the endpoint that controls access to the service. Applicable for endpoints of type `Gateway`. -Defaults to full access. +* `policy` - (Optional) A policy to attach to the endpoint that controls access to the service. Applicable for endpoints of type `Gateway`. Defaults to full access. For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](/docs/providers/aws/guides/iam-policy-documents.html). * `route_table_ids` - (Optional) One or more route table IDs. Applicable for endpoints of type `Gateway`. * `subnet_ids` - (Optional) The ID of one or more subnets in which to create a network interface for the endpoint. Applicable for endpoints of type `Interface`. * `security_group_ids` - (Optional) The ID of one or more security groups to associate with the network interface. Required for endpoints of type `Interface`. * `private_dns_enabled` - (Optional) Whether or not to associate a private hosted zone with the specified VPC. 
Applicable for endpoints of type `Interface`. Defaults to `false`. +### Timeouts + +`aws_vpc_endpoint` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating a VPC endpoint +- `update` - (Default `10 minutes`) Used for VPC endpoint modifications +- `delete` - (Default `10 minutes`) Used for destroying VPC endpoints + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the VPC endpoint. * `state` - The state of the VPC endpoint. diff --git a/website/docs/r/vpc_endpoint_connection_notification.html.markdown b/website/docs/r/vpc_endpoint_connection_notification.html.markdown index 80a63212fcd..d8c23941b1d 100644 --- a/website/docs/r/vpc_endpoint_connection_notification.html.markdown +++ b/website/docs/r/vpc_endpoint_connection_notification.html.markdown @@ -33,14 +33,14 @@ POLICY } resource "aws_vpc_endpoint_service" "foo" { - acceptance_required = false + acceptance_required = false network_load_balancer_arns = ["${aws_lb.test.arn}"] } resource "aws_vpc_endpoint_connection_notification" "foo" { - vpc_endpoint_service_id = "${aws_vpc_endpoint_service.foo.id}" + vpc_endpoint_service_id = "${aws_vpc_endpoint_service.foo.id}" connection_notification_arn = "${aws_sns_topic.topic.arn}" - connection_events = ["Accept", "Reject"] + connection_events = ["Accept", "Reject"] } ``` @@ -57,7 +57,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the VPC connection notification. * `state` - The state of the notification. diff --git a/website/docs/r/vpc_endpoint_route_table_association.html.markdown b/website/docs/r/vpc_endpoint_route_table_association.html.markdown index f0e8c56779d..84ab794b66c 100644 --- a/website/docs/r/vpc_endpoint_route_table_association.html.markdown +++ b/website/docs/r/vpc_endpoint_route_table_association.html.markdown @@ -3,27 +3,19 @@ layout: "aws" page_title: "AWS: aws_vpc_endpoint_route_table_association" sidebar_current: "docs-aws-resource-vpc-endpoint-route-table-association" description: |- - Provides a resource to create an association between a VPC endpoint and routing table. + Manages a VPC Endpoint Route Table Association --- # aws_vpc_endpoint_route_table_association -Provides a resource to create an association between a VPC endpoint and routing table. - -~> **NOTE on VPC Endpoints and VPC Endpoint Route Table Associations:** Terraform provides -both a standalone VPC Endpoint Route Table Association (an association between a VPC endpoint -and a single `route_table_id`) and a [VPC Endpoint](vpc_endpoint.html) resource with a `route_table_ids` -attribute. Do not use the same route table ID in both a VPC Endpoint resource and a VPC Endpoint Route -Table Association resource. Doing so will cause a conflict of associations and will overwrite the association. 
+Manages a VPC Endpoint Route Table Association ## Example Usage -Basic usage: - ```hcl -resource "aws_vpc_endpoint_route_table_association" "private_s3" { - vpc_endpoint_id = "${aws_vpc_endpoint.s3.id}" - route_table_id = "${aws_route_table.private.id}" +resource "aws_vpc_endpoint_route_table_association" "example" { + route_table_id = "${aws_route_table.example.id}" + vpc_endpoint_id = "${aws_vpc_endpoint.example.id}" } ``` @@ -31,11 +23,11 @@ resource "aws_vpc_endpoint_route_table_association" "private_s3" { The following arguments are supported: -* `vpc_endpoint_id` - (Required) The ID of the VPC endpoint with which the routing table will be associated. -* `route_table_id` - (Required) The ID of the routing table to be associated with the VPC endpoint. +* `route_table_id` - (Required) Identifier of the EC2 Route Table to be associated with the VPC Endpoint. +* `vpc_endpoint_id` - (Required) Identifier of the VPC Endpoint with which the EC2 Route Table will be associated. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: -* `id` - The ID of the association. +* `id` - A hash of the EC2 Route Table and VPC Endpoint identifiers. diff --git a/website/docs/r/vpc_endpoint_service.html.markdown b/website/docs/r/vpc_endpoint_service.html.markdown index 76bc8125f1b..4d4be6524d2 100644 --- a/website/docs/r/vpc_endpoint_service.html.markdown +++ b/website/docs/r/vpc_endpoint_service.html.markdown @@ -23,7 +23,7 @@ Basic usage: ```hcl resource "aws_vpc_endpoint_service" "foo" { - acceptance_required = false + acceptance_required = false network_load_balancer_arns = ["${aws_lb.test.arn}"] } ``` @@ -38,7 +38,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the VPC endpoint service. * `state` - The state of the VPC endpoint service. diff --git a/website/docs/r/vpc_endpoint_service_allowed_principal.html.markdown b/website/docs/r/vpc_endpoint_service_allowed_principal.html.markdown index d1508b46da7..cb86e09c7cc 100644 --- a/website/docs/r/vpc_endpoint_service_allowed_principal.html.markdown +++ b/website/docs/r/vpc_endpoint_service_allowed_principal.html.markdown @@ -25,7 +25,7 @@ data "aws_caller_identity" "current" {} resource "aws_vpc_endpoint_service_allowed_principal" "allow_me_to_foo" { vpc_endpoint_service_id = "${aws_vpc_endpoint_service.foo.id}" - principal_arn = "${data.aws_caller_identity.current.arn}" + principal_arn = "${data.aws_caller_identity.current.arn}" } ``` @@ -38,6 +38,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the association. diff --git a/website/docs/r/vpc_endpoint_subnet_association.html.markdown b/website/docs/r/vpc_endpoint_subnet_association.html.markdown index a0b04b5445b..5ffb5cc76f2 100644 --- a/website/docs/r/vpc_endpoint_subnet_association.html.markdown +++ b/website/docs/r/vpc_endpoint_subnet_association.html.markdown @@ -34,8 +34,16 @@ The following arguments are supported: * `vpc_endpoint_id` - (Required) The ID of the VPC endpoint with which the subnet will be associated. * `subnet_id` - (Required) The ID of the subnet to be associated with the VPC endpoint. 
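+
+If association operations are slow in your environment, the timeouts documented in the following section can be raised inline. A minimal sketch (the referenced endpoint and subnet resource names are placeholders):
+
+```hcl
+resource "aws_vpc_endpoint_subnet_association" "sn_ec2" {
+  vpc_endpoint_id = "${aws_vpc_endpoint.ec2.id}"
+  subnet_id       = "${aws_subnet.sn.id}"
+
+  # Raise the default 10 minute create/delete timeouts.
+  timeouts {
+    create = "15m"
+    delete = "15m"
+  }
+}
+```
+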
+### Timeouts + +`aws_vpc_endpoint_subnet_association` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating the association +- `delete` - (Default `10 minutes`) Used for destroying the association + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the association. diff --git a/website/docs/r/vpc_ipv4_cidr_block_association.html.markdown b/website/docs/r/vpc_ipv4_cidr_block_association.html.markdown new file mode 100644 index 00000000000..d2f1261fe28 --- /dev/null +++ b/website/docs/r/vpc_ipv4_cidr_block_association.html.markdown @@ -0,0 +1,56 @@ +--- +layout: "aws" +page_title: "AWS: aws_vpc_ipv4_cidr_block_association" +sidebar_current: "docs-aws-resource-vpc-ipv4-cidr-block-association" +description: |- + Associate additional IPv4 CIDR blocks with a VPC +--- + +# aws_vpc_ipv4_cidr_block_association + +Provides a resource to associate additional IPv4 CIDR blocks with a VPC. + +When a VPC is created, a primary IPv4 CIDR block for the VPC must be specified. +The `aws_vpc_ipv4_cidr_block_association` resource allows further IPv4 CIDR blocks to be added to the VPC. + +## Example Usage + +```hcl +resource "aws_vpc" "main" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_vpc_ipv4_cidr_block_association" "secondary_cidr" { + vpc_id = "${aws_vpc.main.id}" + cidr_block = "172.2.0.0/16" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `cidr_block` - (Required) The additional IPv4 CIDR block to associate with the VPC. +* `vpc_id` - (Required) The ID of the VPC to make the association with. + +## Timeouts + +`aws_vpc_ipv4_cidr_block_association` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `10 minutes`) Used for creating the association +- `delete` - (Default `10 minutes`) Used for destroying the association + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the VPC CIDR association + +## Import + +`aws_vpc_ipv4_cidr_block_association` can be imported by using the VPC CIDR Association ID, e.g. + +``` +$ terraform import aws_vpc_ipv4_cidr_block_association.example vpc-cidr-assoc-xxxxxxxx +``` diff --git a/website/docs/r/vpc_peering.html.markdown b/website/docs/r/vpc_peering.html.markdown index 08f1d915cb4..7501a6aabb6 100644 --- a/website/docs/r/vpc_peering.html.markdown +++ b/website/docs/r/vpc_peering.html.markdown @@ -3,12 +3,20 @@ layout: "aws" page_title: "AWS: aws_vpc_peering_connection" sidebar_current: "docs-aws-resource-vpc-peering" description: |- - Manage a VPC Peering Connection resource. + Provides a resource to manage a VPC peering connection. --- # aws_vpc_peering_connection -Provides a resource to manage a VPC Peering Connection resource. +Provides a resource to manage a VPC peering connection. + +~> **NOTE on VPC Peering Connections and VPC Peering Connection Options:** Terraform provides +both a standalone [VPC Peering Connection Options](vpc_peering_options.html) and a VPC Peering Connection +resource with `accepter` and `requester` attributes. Do not manage options for the same VPC peering +connection in both a VPC Peering Connection resource and a VPC Peering Connection Options resource. +Doing so will cause a conflict of options and will overwrite the options. 
+Using a VPC Peering Connection Options resource decouples management of the connection options from +management of the VPC Peering Connection and allows options to be set correctly in cross-account scenarios. -> **Note:** For cross-account (requester's AWS account differs from the accepter's AWS account) or inter-region VPC Peering Connections use the `aws_vpc_peering_connection` resource to manage the requester's side of the @@ -118,8 +126,10 @@ must have support for the DNS hostnames enabled. This can be done using the [`en (vpc.html#enable_dns_hostnames) attribute in the [`aws_vpc`](vpc.html) resource. See [Using DNS with Your VPC] (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html) user guide for more information. -* `allow_remote_vpc_dns_resolution` - (Optional) Allow a local VPC to resolve public DNS hostnames to private -IP addresses when queried from instances in the peer VPC. +* `allow_remote_vpc_dns_resolution` - (Optional) Allow a local VPC to resolve public DNS hostnames to +private IP addresses when queried from instances in the peer VPC. This is +[not supported](https://docs.aws.amazon.com/vpc/latest/peering/modify-peering-connections.html) for +inter-region VPC peering. * `allow_classic_link_to_remote_vpc` - (Optional) Allow a local linked EC2-Classic instance to communicate with instances in a peer VPC. This enables an outbound communication from the local ClassicLink connection to the remote VPC. @@ -127,9 +137,18 @@ to the remote VPC. instance in a peer VPC. This enables an outbound communication from the local VPC to the remote ClassicLink connection. +### Timeouts + +`aws_vpc_peering_connection` provides the following +[Timeouts](/docs/configuration/resources.html#timeouts) configuration options: + +- `create` - (Default `1 minute`) Used for creating a peering connection +- `update` - (Default `1 minute`) Used for peering connection modifications +- `delete` - (Default `1 minute`) Used for destroying peering connections + ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the VPC Peering Connection. * `accept_status` - The status of the VPC Peering Connection request. @@ -145,7 +164,7 @@ or accept the connection manually using the AWS Management Console, AWS CLI, thr VPC Peering resources can be imported using the `vpc peering id`, e.g. -``` +```sh $ terraform import aws_vpc_peering_connection.test_connection pcx-111aaa111 ``` diff --git a/website/docs/r/vpc_peering_accepter.html.markdown b/website/docs/r/vpc_peering_accepter.html.markdown index ba4f5912b1e..10b85e7f0e9 100644 --- a/website/docs/r/vpc_peering_accepter.html.markdown +++ b/website/docs/r/vpc_peering_accepter.html.markdown @@ -27,7 +27,7 @@ provider "aws" { } provider "aws" { - alias = "peer" + alias = "peer" region = "us-west-2" # Accepter's credentials. @@ -110,3 +110,9 @@ private IP addresses when queried from instances in a peer VPC. with the peer VPC over the VPC Peering Connection. * `allow_vpc_to_remote_classic_link` - Indicates whether a local VPC can communicate with a ClassicLink connection in the peer VPC over the VPC Peering Connection. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the VPC Peering Connection. 
diff --git a/website/docs/r/vpc_peering_options.html.markdown b/website/docs/r/vpc_peering_options.html.markdown new file mode 100644 index 00000000000..6b479b0fdec --- /dev/null +++ b/website/docs/r/vpc_peering_options.html.markdown @@ -0,0 +1,180 @@ +--- +layout: "aws" +page_title: "AWS: aws_vpc_peering_connection_options" +sidebar_current: "docs-aws-resource-vpc-peering-options" +description: |- + Provides a resource to manage VPC peering connection options. +--- + +# aws_vpc_peering_connection_options + +Provides a resource to manage VPC peering connection options. + +~> **NOTE on VPC Peering Connections and VPC Peering Connection Options:** Terraform provides +both a standalone VPC Peering Connection Options and a [VPC Peering Connection](vpc_peering.html) +resource with `accepter` and `requester` attributes. Do not manage options for the same VPC peering +connection in both a VPC Peering Connection resource and a VPC Peering Connection Options resource. +Doing so will cause a conflict of options and will overwrite the options. +Using a VPC Peering Connection Options resource decouples management of the connection options from +management of the VPC Peering Connection and allows options to be set correctly in cross-account scenarios. + +Basic usage: + +```hcl +resource "aws_vpc" "foo" { + cidr_block = "10.0.0.0/16" +} + +resource "aws_vpc" "bar" { + cidr_block = "10.1.0.0/16" +} + +resource "aws_vpc_peering_connection" "foo" { + vpc_id = "${aws_vpc.foo.id}" + peer_vpc_id = "${aws_vpc.bar.id}" + auto_accept = true +} + +resource "aws_vpc_peering_connection_options" "foo" { + vpc_peering_connection_id = "${aws_vpc_peering_connection.foo.id}" + + accepter { + allow_remote_vpc_dns_resolution = true + } + + requester { + allow_vpc_to_remote_classic_link = true + allow_classic_link_to_remote_vpc = true + } +} +``` + +Basic cross-account usage: + +```hcl +provider "aws" { + alias = "requester" + + # Requester's credentials. +} + +provider "aws" { + alias = "accepter" + + # Accepter's credentials. +} + +resource "aws_vpc" "main" { + provider = "aws.requester" + + cidr_block = "10.0.0.0/16" + + enable_dns_support = true + enable_dns_hostnames = true +} + +resource "aws_vpc" "peer" { + provider = "aws.accepter" + + cidr_block = "10.1.0.0/16" + + enable_dns_support = true + enable_dns_hostnames = true +} + +data "aws_caller_identity" "peer" { + provider = "aws.accepter" +} + +# Requester's side of the connection. +resource "aws_vpc_peering_connection" "peer" { + provider = "aws.requester" + + vpc_id = "${aws_vpc.main.id}" + peer_vpc_id = "${aws_vpc.peer.id}" + peer_owner_id = "${data.aws_caller_identity.peer.account_id}" + auto_accept = false + + tags { + Side = "Requester" + } +} + +# Accepter's side of the connection. +resource "aws_vpc_peering_connection_accepter" "peer" { + provider = "aws.accepter" + + vpc_peering_connection_id = "${aws_vpc_peering_connection.peer.id}" + auto_accept = true + + tags { + Side = "Accepter" + } +} + +resource "aws_vpc_peering_connection_options" "requester" { + provider = "aws.requester" + + # As options can't be set until the connection has been accepted + # create an explicit dependency on the accepter. 
+  vpc_peering_connection_id = "${aws_vpc_peering_connection_accepter.peer.id}"
+
+  requester {
+    allow_remote_vpc_dns_resolution = true
+  }
+}
+
+resource "aws_vpc_peering_connection_options" "accepter" {
+  provider = "aws.accepter"
+
+  vpc_peering_connection_id = "${aws_vpc_peering_connection_accepter.peer.id}"
+
+  accepter {
+    allow_remote_vpc_dns_resolution = true
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `vpc_peering_connection_id` - (Required) The ID of the requester VPC peering connection.
+* `accepter` (Optional) - An optional configuration block that allows for [VPC Peering Connection](http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide) options to be set for the VPC that accepts the peering connection (a maximum of one).
+* `requester` (Optional) - An optional configuration block that allows for [VPC Peering Connection](http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide) options to be set for the VPC that requests the peering connection (a maximum of one).
+
+#### Accepter and Requester Arguments
+
+-> **Note:** When enabled, the DNS resolution feature requires that the VPCs participating in the peering have DNS hostname support enabled. This can be done using the [`enable_dns_hostnames`](vpc.html#enable_dns_hostnames) attribute in the [`aws_vpc`](vpc.html) resource. See the [Using DNS with Your VPC](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html) user guide for more information.
+
+* `allow_remote_vpc_dns_resolution` - (Optional) Allow a local VPC to resolve public DNS hostnames to
+private IP addresses when queried from instances in the peer VPC. This is
+[not supported](https://docs.aws.amazon.com/vpc/latest/peering/modify-peering-connections.html) for
+inter-region VPC peering.
+* `allow_classic_link_to_remote_vpc` - (Optional) Allow a local linked EC2-Classic instance to communicate
+with instances in a peer VPC. This enables an outbound communication from the local ClassicLink connection
+to the remote VPC.
+* `allow_vpc_to_remote_classic_link` - (Optional) Allow a local VPC to communicate with a linked EC2-Classic
+instance in a peer VPC. This enables an outbound communication from the local VPC to the remote ClassicLink
+connection.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - The ID of the VPC Peering Connection Options.
+
+## Import
+
+VPC Peering Connection Options can be imported using the `vpc peering id`, e.g.
+
+```
+$ terraform import aws_vpc_peering_connection_options.foo pcx-111aaa111
+```
diff --git a/website/docs/r/vpn_connection.html.markdown b/website/docs/r/vpn_connection.html.markdown
index 96d6ccc065e..22f8f2db95c 100644
--- a/website/docs/r/vpn_connection.html.markdown
+++ b/website/docs/r/vpn_connection.html.markdown
@@ -55,10 +55,11 @@ The following arguments are supported:
 * `tunnel2_inside_cidr` - (Optional) The CIDR block of the second IP addresses for the first VPN tunnel.
 * `tunnel1_preshared_key` - (Optional) The preshared key of the first VPN tunnel.
 * `tunnel2_preshared_key` - (Optional) The preshared key of the second VPN tunnel.
+~> **Note:** The preshared key must be between 8 and 64 characters in length and cannot start with zero (0). Allowed characters are alphanumeric characters, periods (.), and underscores (_).
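+
+As a brief illustration of a key that satisfies these constraints (the referenced gateway resources are placeholders):
+
+```hcl
+resource "aws_vpn_connection" "example" {
+  vpn_gateway_id      = "${aws_vpn_gateway.example.id}"
+  customer_gateway_id = "${aws_customer_gateway.example.id}"
+  type                = "ipsec.1"
+  static_routes_only  = true
+
+  # 8-64 characters, alphanumeric plus periods and underscores, not starting with zero.
+  tunnel1_preshared_key = "example_preshared_key.1"
+}
+```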
## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The amazon-assigned ID of the VPN connection. * `customer_gateway_configuration` - The configuration information for the VPN connection's customer gateway (in the native XML format). @@ -87,4 +88,4 @@ VPN Connections can be imported using the `vpn connection id`, e.g. ``` $ terraform import aws_vpn_connection.testvpnconnection vpn-40f41529 -``` \ No newline at end of file +``` diff --git a/website/docs/r/vpn_connection_route.html.markdown b/website/docs/r/vpn_connection_route.html.markdown index 74cef732d19..8c28067851f 100644 --- a/website/docs/r/vpn_connection_route.html.markdown +++ b/website/docs/r/vpn_connection_route.html.markdown @@ -49,7 +49,7 @@ The following arguments are supported: ## Attribute Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `destination_cidr_block` - The CIDR block associated with the local subnet of the customer network. * `vpn_connection_id` - The ID of the VPN connection. diff --git a/website/docs/r/vpn_gateway.html.markdown b/website/docs/r/vpn_gateway.html.markdown index 57561031e3e..fb6a0aee471 100644 --- a/website/docs/r/vpn_gateway.html.markdown +++ b/website/docs/r/vpn_gateway.html.markdown @@ -33,7 +33,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the VPN Gateway. diff --git a/website/docs/r/vpn_gateway_attachment.html.markdown b/website/docs/r/vpn_gateway_attachment.html.markdown index fbcfcafb648..714383fbf73 100644 --- a/website/docs/r/vpn_gateway_attachment.html.markdown +++ b/website/docs/r/vpn_gateway_attachment.html.markdown @@ -47,7 +47,7 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `vpc_id` - The ID of the VPC that Virtual Private Gateway is attached to. * `vpn_gateway_id` - The ID of the Virtual Private Gateway. diff --git a/website/docs/r/waf_byte_match_set.html.markdown b/website/docs/r/waf_byte_match_set.html.markdown index 0bf76b97c44..5feec1dd1c1 100644 --- a/website/docs/r/waf_byte_match_set.html.markdown +++ b/website/docs/r/waf_byte_match_set.html.markdown @@ -74,6 +74,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF Byte Match Set. diff --git a/website/docs/r/waf_geo_match_set.html.markdown b/website/docs/r/waf_geo_match_set.html.markdown index 2d943a5b9e7..e7768045714 100644 --- a/website/docs/r/waf_geo_match_set.html.markdown +++ b/website/docs/r/waf_geo_match_set.html.markdown @@ -50,6 +50,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF GeoMatchSet. 
diff --git a/website/docs/r/waf_ipset.html.markdown b/website/docs/r/waf_ipset.html.markdown index b75bfed561a..3b508d34d02 100644 --- a/website/docs/r/waf_ipset.html.markdown +++ b/website/docs/r/waf_ipset.html.markdown @@ -20,6 +20,11 @@ resource "aws_waf_ipset" "ipset" { type = "IPV4" value = "192.0.7.0/24" } + + ip_set_descriptors { + type = "IPV4" + value = "10.16.16.0/16" + } } ``` @@ -28,8 +33,7 @@ resource "aws_waf_ipset" "ipset" { The following arguments are supported: * `name` - (Required) The name or description of the IPSet. -* `ip_set_descriptors` - (Optional) Specifies the IP address type (IPV4 or IPV6) - and the IP address range (in CIDR format) that web requests originate from. +* `ip_set_descriptors` - (Optional) One or more pairs specifying the IP address type (IPV4 or IPV6) and the IP address range (in CIDR format) from which web requests originate. ## Nested Blocks @@ -41,10 +45,17 @@ The following arguments are supported: * `value` - (Required) An IPv4 or IPv6 address specified via CIDR notation. e.g. `192.0.2.44/32` or `1111:0000:0000:0000:0000:0000:0000:0000/64` -## Remarks - ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF IPSet. +* `arn` - The ARN of the WAF IPSet. + +## Import + +WAF IPSets can be imported using their ID, e.g. + +``` +$ terraform import aws_waf_ipset.example a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc +``` diff --git a/website/docs/r/waf_rate_based_rule.html.markdown b/website/docs/r/waf_rate_based_rule.html.markdown index 4f89e6c3eac..8a5baf7c173 100644 --- a/website/docs/r/waf_rate_based_rule.html.markdown +++ b/website/docs/r/waf_rate_based_rule.html.markdown @@ -27,7 +27,7 @@ resource "aws_waf_rate_based_rule" "wafrule" { name = "tfWAFRule" metric_name = "tfWAFRule" - rate_key = "IP" + rate_key = "IP" rate_limit = 2000 predicates { @@ -52,6 +52,8 @@ The following arguments are supported: ### `predicates` +See the [WAF Documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_Predicate.html) for more information. + #### Arguments * `negated` - (Required) Set this to `false` if you want to allow, block, or count requests @@ -59,12 +61,12 @@ The following arguments are supported: For example, if an IPSet includes the IP address `192.0.2.44`, AWS WAF will allow or block requests based on that IP address. If set to `true`, AWS WAF will allow, block, or count requests based on all IP addresses _except_ `192.0.2.44`. * `data_id` - (Required) A unique identifier for a predicate in the rule, such as Byte Match Set ID or IPSet ID. -* `type` - (Required) The type of predicate in a rule, such as `ByteMatchSet` or `IPSet` +* `type` - (Required) The type of predicate in a rule. Valid values: `ByteMatch`, `GeoMatch`, `IPMatch`, `RegexMatch`, `SizeConstraint`, `SqlInjectionMatch`, or `XssMatch`. ## Remarks ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF rule. diff --git a/website/docs/r/waf_regex_match_set.html.markdown b/website/docs/r/waf_regex_match_set.html.markdown new file mode 100644 index 00000000000..5f3292c03c0 --- /dev/null +++ b/website/docs/r/waf_regex_match_set.html.markdown @@ -0,0 +1,68 @@ +--- +layout: "aws" +page_title: "AWS: waf_regex_match_set" +sidebar_current: "docs-aws-resource-waf-regex-match-set" +description: |- + Provides a AWS WAF Regex Match Set resource. 
+--- + +# aws_waf_regex_match_set + +Provides a WAF Regex Match Set Resource + +## Example Usage + +```hcl +resource "aws_waf_regex_match_set" "example" { + name = "example" + + regex_match_tuple { + field_to_match { + data = "User-Agent" + type = "HEADER" + } + + regex_pattern_set_id = "${aws_waf_regex_pattern_set.example.id}" + text_transformation = "NONE" + } +} + +resource "aws_waf_regex_pattern_set" "example" { + name = "example" + regex_pattern_strings = ["one", "two"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name or description of the Regex Match Set. +* `regex_match_tuple` - (Required) The regular expression pattern that you want AWS WAF to search for in web requests, + the location in requests that you want AWS WAF to search, and other settings. See below. + +### Nested Arguments + +#### `regex_match_tuple` + + * `field_to_match` - (Required) The part of a web request that you want to search, such as a specified header or a query string. + * `regex_pattern_set_id` - (Required) The ID of a [Regex Pattern Set](/docs/providers/aws/r/waf_regex_pattern_set.html). + * `text_transformation` - (Required) Text transformations used to eliminate unusual formatting that attackers use in web requests in an effort to bypass AWS WAF. + e.g. `CMD_LINE`, `HTML_ENTITY_DECODE` or `NONE`. + See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_ByteMatchTuple.html#WAF-Type-ByteMatchTuple-TextTransformation) + for all supported values. + +#### `field_to_match` + +* `data` - (Optional) When `type` is `HEADER`, enter the name of the header that you want to search, e.g. `User-Agent` or `Referer`. + If `type` is any other value, omit this field. +* `type` - (Required) The part of the web request that you want AWS WAF to search for a specified string. + e.g. `HEADER`, `METHOD` or `BODY`. + See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) + for all supported values. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the WAF Regex Match Set. diff --git a/website/docs/r/waf_regex_pattern_set.html.markdown b/website/docs/r/waf_regex_pattern_set.html.markdown new file mode 100644 index 00000000000..dfa95d76a26 --- /dev/null +++ b/website/docs/r/waf_regex_pattern_set.html.markdown @@ -0,0 +1,33 @@ +--- +layout: "aws" +page_title: "AWS: waf_regex_pattern_set" +sidebar_current: "docs-aws-resource-waf-regex-pattern-set" +description: |- + Provides a AWS WAF Regex Pattern Set resource. +--- + +# aws_waf_regex_pattern_set + +Provides a WAF Regex Pattern Set Resource + +## Example Usage + +```hcl +resource "aws_waf_regex_pattern_set" "example" { + name = "tf_waf_regex_pattern_set" + regex_pattern_strings = ["one", "two"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name or description of the Regex Pattern Set. +* `regex_pattern_strings` - (Optional) A list of regular expression (regex) patterns that you want AWS WAF to search for, such as `B[a@]dB[o0]t`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the WAF Regex Pattern Set. 
diff --git a/website/docs/r/waf_rule.html.markdown b/website/docs/r/waf_rule.html.markdown index 8876b8c849b..aa12996a79d 100644 --- a/website/docs/r/waf_rule.html.markdown +++ b/website/docs/r/waf_rule.html.markdown @@ -39,7 +39,7 @@ resource "aws_waf_rule" "wafrule" { The following arguments are supported: -* `metric_name` - (Required) The name or description for the Amazon CloudWatch metric of this rule. +* `metric_name` - (Required) The name or description for the Amazon CloudWatch metric of this rule. The name can contain only alphanumeric characters (A-Z, a-z, 0-9); the name can't contain whitespace. * `name` - (Required) The name or description of the rule. * `predicates` - (Optional) One of ByteMatchSet, IPSet, SizeConstraintSet, SqlInjectionMatchSet, or XssMatchSet objects to include in a rule. @@ -47,20 +47,27 @@ The following arguments are supported: ### `predicates` +See the [WAF Documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_Predicate.html) for more information. + #### Arguments * `negated` - (Required) Set this to `false` if you want to allow, block, or count requests - based on the settings in the specified [waf_byte_match_set](/docs/providers/aws/r/waf_byte_match_set.html), [waf_ipset](/docs/providers/aws/r/waf_ipset.html), [aws_waf_size_constraint_set](/docs/providers/aws/r/aws_waf_size_constraint_set.html), [aws_waf_sql_injection_match_set](/docs/providers/aws/r/aws_waf_sql_injection_match_set.html) or [aws_waf_xss_match_set](/docs/providers/aws/r/aws_waf_xss_match_set.html). + based on the settings in the specified [waf_byte_match_set](/docs/providers/aws/r/waf_byte_match_set.html), [waf_ipset](/docs/providers/aws/r/waf_ipset.html), [aws_waf_size_constraint_set](/docs/providers/aws/r/waf_size_constraint_set.html), [aws_waf_sql_injection_match_set](/docs/providers/aws/r/waf_sql_injection_match_set.html) or [aws_waf_xss_match_set](/docs/providers/aws/r/waf_xss_match_set.html). For example, if an IPSet includes the IP address `192.0.2.44`, AWS WAF will allow or block requests based on that IP address. If set to `true`, AWS WAF will allow, block, or count requests based on all IP addresses _except_ `192.0.2.44`. * `data_id` - (Required) A unique identifier for a predicate in the rule, such as Byte Match Set ID or IPSet ID. -* `type` - (Required) The type of predicate in a rule. Valid value is one of `ByteMatch`, `IPMatch`, `SizeConstraint`, `SqlInjectionMatch`, or `XssMatch`. - See [docs](https://docs.aws.amazon.com/waf/latest/APIReference/API_Predicate.html) for all supported values. - -## Remarks +* `type` - (Required) The type of predicate in a rule. Valid values: `ByteMatch`, `GeoMatch`, `IPMatch`, `RegexMatch`, `SizeConstraint`, `SqlInjectionMatch`, or `XssMatch`. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF rule. + +## Import + +WAF rules can be imported using the id, e.g. + +``` +$ terraform import aws_waf_rule.example a1b2c3d4-d5f6-7777-8888-9999aaaabbbbcccc +``` diff --git a/website/docs/r/waf_rule_group.html.markdown b/website/docs/r/waf_rule_group.html.markdown new file mode 100644 index 00000000000..4df372fb5da --- /dev/null +++ b/website/docs/r/waf_rule_group.html.markdown @@ -0,0 +1,60 @@ +--- +layout: "aws" +page_title: "AWS: waf_rule_group" +sidebar_current: "docs-aws-resource-waf-rule-group" +description: |- + Provides a AWS WAF rule group resource. 
+---
+
+# aws_waf_rule_group
+
+Provides a WAF Rule Group Resource
+
+## Example Usage
+
+```hcl
+resource "aws_waf_rule" "example" {
+  name        = "example"
+  metric_name = "example"
+}
+
+resource "aws_waf_rule_group" "example" {
+  name        = "example"
+  metric_name = "example"
+
+  activated_rule {
+    action {
+      type = "COUNT"
+    }
+
+    priority = 50
+    rule_id  = "${aws_waf_rule.example.id}"
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) A friendly name of the rule group
+* `metric_name` - (Required) A friendly name for the metrics from the rule group
+* `activated_rule` - (Optional) A list of activated rules, see below
+
+## Nested Blocks
+
+### `activated_rule`
+
+#### Arguments
+
+* `action` - (Required) Specifies the action that CloudFront or AWS WAF takes when a web request matches the conditions in the rule.
+  * `type` - (Required) e.g. `BLOCK`, `ALLOW`, or `COUNT`
+* `priority` - (Required) Specifies the order in which the rules are evaluated. Rules with a lower value are evaluated before rules with a higher value.
+* `rule_id` - (Required) The ID of a [rule](/docs/providers/aws/r/waf_rule.html)
+* `type` - (Optional) The rule type, either [`REGULAR`](/docs/providers/aws/r/waf_rule.html), [`RATE_BASED`](/docs/providers/aws/r/waf_rate_based_rule.html), or `GROUP`. Defaults to `REGULAR`.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - The ID of the WAF rule group.
diff --git a/website/docs/r/waf_size_constraint_set.html.markdown b/website/docs/r/waf_size_constraint_set.html.markdown
index 68a37152e2f..54683c121cc 100644
--- a/website/docs/r/waf_size_constraint_set.html.markdown
+++ b/website/docs/r/waf_size_constraint_set.html.markdown
@@ -65,10 +65,8 @@ The following arguments are supported:
   See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html)
   for all supported values.
 
-## Remarks
-
 ## Attributes Reference
 
-The following attributes are exported:
+In addition to all arguments above, the following attributes are exported:
 
 * `id` - The ID of the WAF Size Constraint Set.
diff --git a/website/docs/r/waf_sql_injection_match_set.html.markdown b/website/docs/r/waf_sql_injection_match_set.html.markdown
index f7726f14177..9f6319cdf61 100644
--- a/website/docs/r/waf_sql_injection_match_set.html.markdown
+++ b/website/docs/r/waf_sql_injection_match_set.html.markdown
@@ -60,6 +60,6 @@ The following arguments are supported:
 
 ## Attributes Reference
 
-The following attributes are exported:
+In addition to all arguments above, the following attributes are exported:
 
 * `id` - The ID of the WAF SQL Injection Match Set.
diff --git a/website/docs/r/waf_web_acl.html.markdown b/website/docs/r/waf_web_acl.html.markdown
index 58e0fa97956..ca03b1b1496 100644
--- a/website/docs/r/waf_web_acl.html.markdown
+++ b/website/docs/r/waf_web_acl.html.markdown
@@ -79,15 +79,25 @@ See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_ActivatedRule.
 
 #### Arguments
 
-* `action` - (Required) The action that CloudFront or AWS WAF takes when a web request matches the conditions in the rule.
-  e.g. `ALLOW`, `BLOCK` or `COUNT`
+* `action` - (Optional) The action that CloudFront or AWS WAF takes when a web request matches the conditions in the rule. Not used if `type` is `GROUP`.
+ * `type` - (Required) valid values are: `BLOCK`, `ALLOW`, or `COUNT` +* `override_action` - (Optional) Override the action that a group requests CloudFront or AWS WAF takes when a web request matches the conditions in the rule. Only used if `type` is `GROUP`. + * `type` - (Required) valid values are: `NONE` or `COUNT` * `priority` - (Required) Specifies the order in which the rules in a WebACL are evaluated. Rules with a lower value are evaluated before rules with a higher value. -* `rule_id` - (Required) ID of the associated [rule](/docs/providers/aws/r/waf_rule.html) -* `type` - (Optional) The rule type, either `REGULAR`, as defined by [Rule](http://docs.aws.amazon.com/waf/latest/APIReference/API_Rule.html), or `RATE_BASED`, as defined by [RateBasedRule](http://docs.aws.amazon.com/waf/latest/APIReference/API_RateBasedRule.html). The default is REGULAR. If you add a RATE_BASED rule, you need to set `type` as `RATE_BASED`. +* `rule_id` - (Required) ID of the associated WAF (Global) rule (e.g. [`aws_waf_rule`](/docs/providers/aws/r/waf_rule.html)). WAF (Regional) rules cannot be used. +* `type` - (Optional) The rule type, either `REGULAR`, as defined by [Rule](http://docs.aws.amazon.com/waf/latest/APIReference/API_Rule.html), `RATE_BASED`, as defined by [RateBasedRule](http://docs.aws.amazon.com/waf/latest/APIReference/API_RateBasedRule.html), or `GROUP`, as defined by [RuleGroup](https://docs.aws.amazon.com/waf/latest/APIReference/API_RuleGroup.html). The default is REGULAR. If you add a RATE_BASED rule, you need to set `type` as `RATE_BASED`. If you add a GROUP rule, you need to set `type` as `GROUP`. ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF WebACL. + +## Import + +WAF Web ACL can be imported using the `id`, e.g. + +``` +$ terraform import aws_waf_web_acl.main 0c8e583e-18f3-4c13-9e2a-67c4805d2f94 +``` diff --git a/website/docs/r/waf_xss_match_set.html.markdown b/website/docs/r/waf_xss_match_set.html.markdown index cb7bc871182..5a2dd9a1ad3 100644 --- a/website/docs/r/waf_xss_match_set.html.markdown +++ b/website/docs/r/waf_xss_match_set.html.markdown @@ -68,6 +68,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF XssMatchSet. diff --git a/website/docs/r/wafregional_byte_match_set.html.markdown b/website/docs/r/wafregional_byte_match_set.html.markdown index efaf11ace92..64c2a04bc9b 100644 --- a/website/docs/r/wafregional_byte_match_set.html.markdown +++ b/website/docs/r/wafregional_byte_match_set.html.markdown @@ -15,10 +15,12 @@ Provides a WAF Regional Byte Match Set Resource for use with Application Load Ba ```hcl resource "aws_wafregional_byte_match_set" "byte_set" { name = "tf_waf_byte_match_set" - byte_match_tuple { - text_transformation = "NONE" - target_string = "badrefer1" + + byte_match_tuples { + text_transformation = "NONE" + target_string = "badrefer1" positional_constraint = "CONTAINS" + field_to_match { type = "HEADER" data = "referer" @@ -32,9 +34,11 @@ resource "aws_wafregional_byte_match_set" "byte_set" { The following arguments are supported: * `name` - (Required) The name or description of the ByteMatchSet. 
-* `byte_match_tuple` - (Optional)Settings for the ByteMatchSet, such as the bytes (typically a string that corresponds with ASCII characters) that you want AWS WAF to search for in web requests. ByteMatchTuple documented below.
+* `byte_match_tuple` - **Deprecated**, use `byte_match_tuples` instead.
+* `byte_match_tuples` - (Optional) Settings for the ByteMatchSet, such as the bytes (typically a string that corresponds with ASCII characters) that you want AWS WAF to search for in web requests. ByteMatchTuples documented below.
+
-ByteMatchTuple(byte_match_tuple) support the following:
+ByteMatchTuples (`byte_match_tuples`) support the following:
 
 * `field_to_match` - (Required) Settings for the ByteMatchTuple. FieldToMatch documented below.
 * `positional_constraint` - (Required) Within the portion of a web request that you want to search.
@@ -50,6 +54,6 @@ FieldToMatch(field_to_match) support following:
 
 ## Attributes Reference
 
-The following attributes are exported:
+In addition to all arguments above, the following attributes are exported:
 
 * `id` - The ID of the WAF ByteMatchSet.
diff --git a/website/docs/r/wafregional_geo_match_set.html.markdown b/website/docs/r/wafregional_geo_match_set.html.markdown
new file mode 100644
index 00000000000..35fac10dbb4
--- /dev/null
+++ b/website/docs/r/wafregional_geo_match_set.html.markdown
@@ -0,0 +1,53 @@
+---
+layout: "aws"
+page_title: "AWS: wafregional_geo_match_set"
+sidebar_current: "docs-aws-resource-wafregional-geo-match-set"
+description: |-
+  Provides an AWS WAF Regional Geo Match Set resource.
+---
+
+# aws_wafregional_geo_match_set
+
+Provides a WAF Regional Geo Match Set Resource
+
+## Example Usage
+
+```hcl
+resource "aws_wafregional_geo_match_set" "geo_match_set" {
+  name = "geo_match_set"
+
+  geo_match_constraint {
+    type  = "Country"
+    value = "US"
+  }
+
+  geo_match_constraint {
+    type  = "Country"
+    value = "CA"
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name or description of the Geo Match Set.
+* `geo_match_constraint` - (Optional) The Geo Match Constraint objects which contain the country that you want AWS WAF to search for.
+
+## Nested Blocks
+
+### `geo_match_constraint`
+
+#### Arguments
+
+* `type` - (Required) The type of geographical area you want AWS WAF to search for. Currently `Country` is the only valid value.
+* `value` - (Required) The country that you want AWS WAF to search for.
+  This is the two-letter country code, e.g. `US`, `CA`, `RU`, `CN`, etc.
+  See [docs](https://docs.aws.amazon.com/waf/latest/APIReference/API_GeoMatchConstraint.html) for all supported values.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - The ID of the WAF Regional Geo Match Set.
diff --git a/website/docs/r/wafregional_ipset.html.markdown b/website/docs/r/wafregional_ipset.html.markdown
index aa6b11985cb..f0e098aca0d 100644
--- a/website/docs/r/wafregional_ipset.html.markdown
+++ b/website/docs/r/wafregional_ipset.html.markdown
@@ -15,10 +15,16 @@ Provides a WAF Regional IPSet Resource for use with Application Load Balancer.
 ```hcl
 resource "aws_wafregional_ipset" "ipset" {
   name = "tfIPSet"
+
   ip_set_descriptor {
-    type = "IPV4"
+    type  = "IPV4"
     value = "192.0.7.0/24"
   }
+
+  ip_set_descriptor {
+    type  = "IPV4"
+    value = "10.16.16.0/16"
+  }
 }
 ```
 
@@ -27,9 +33,13 @@ resource "aws_wafregional_ipset" "ipset" {
 The following arguments are supported:
 
 * `name` - (Required) The name or description of the IPSet.
-* `ip_set_descriptor` - (Optional) The IP address type and IP address range (in CIDR notation) from which web requests originate. +* `ip_set_descriptor` - (Optional) One or more pairs specifying the IP address type (IPV4 or IPV6) and the IP address range (in CIDR notation) from which web requests originate. + +## Nested Blocks + +### `ip_set_descriptor` -IPSetDescriptor(ip_set_descriptor) support following: +#### Arguments * `type` - (Required) The string like IPV4 or IPV6. * `value` - (Required) The CIDR notation. @@ -39,6 +49,7 @@ IPSetDescriptor(ip_set_descriptor) support following: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF IPSet. +* `arn` - The ARN of the WAF IPSet. diff --git a/website/docs/r/wafregional_rate_based_rule.html.markdown b/website/docs/r/wafregional_rate_based_rule.html.markdown new file mode 100644 index 00000000000..15cea8f48b3 --- /dev/null +++ b/website/docs/r/wafregional_rate_based_rule.html.markdown @@ -0,0 +1,70 @@ +--- +layout: "aws" +page_title: "AWS: wafregional_rate_based_rule" +sidebar_current: "docs-aws-resource-wafregional-rate-based-rule" +description: |- + Provides a AWS WAF Regional rate based rule resource. +--- + +# aws_wafregional_rate_based_rule + +Provides a WAF Rate Based Rule Resource + +## Example Usage + +```hcl +resource "aws_wafregional_ipset" "ipset" { + name = "tfIPSet" + + ip_set_descriptors { + type = "IPV4" + value = "192.0.7.0/24" + } +} + +resource "aws_wafregional_rate_based_rule" "wafrule" { + depends_on = ["aws_wafregional_ipset.ipset"] + name = "tfWAFRule" + metric_name = "tfWAFRule" + + rate_key = "IP" + rate_limit = 2000 + + predicate { + data_id = "${aws_wafregional_ipset.ipset.id}" + negated = false + type = "IPMatch" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `metric_name` - (Required) The name or description for the Amazon CloudWatch metric of this rule. +* `name` - (Required) The name or description of the rule. +* `rate_key` - (Required) Valid value is IP. +* `rate_limit` - (Required) The maximum number of requests, which have an identical value in the field specified by the RateKey, allowed in a five-minute period. Minimum value is 2000. +* `predicate` - (Optional) One of ByteMatchSet, IPSet, SizeConstraintSet, SqlInjectionMatchSet, or XssMatchSet objects to include in a rule. + +## Nested Blocks + +### `predicate` + +See the [WAF Documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_Predicate.html) for more information. + +#### Arguments + +* `negated` - (Required) Set this to `false` if you want to allow, block, or count requests + based on the settings in the specified `ByteMatchSet`, `IPSet`, `SqlInjectionMatchSet`, `XssMatchSet`, or `SizeConstraintSet`. + For example, if an IPSet includes the IP address `192.0.2.44`, AWS WAF will allow or block requests based on that IP address. + If set to `true`, AWS WAF will allow, block, or count requests based on all IP addresses _except_ `192.0.2.44`. +* `data_id` - (Required) A unique identifier for a predicate in the rule, such as Byte Match Set ID or IPSet ID. +* `type` - (Required) The type of predicate in a rule. Valid values: `ByteMatch`, `GeoMatch`, `IPMatch`, `RegexMatch`, `SizeConstraint`, `SqlInjectionMatch`, or `XssMatch`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the WAF Regional rate based rule. 
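A rate based rule only affects traffic once it is referenced from a web ACL with `type = "RATE_BASED"`, as described on the `aws_wafregional_web_acl` page below. The following is a minimal illustrative sketch, not part of the upstream page, reusing the `aws_wafregional_rate_based_rule.wafrule` resource from the example above; the ACL name and metric name are placeholders.

```hcl
# Sketch only: attaches the rate based rule from the example above to a
# WAF Regional web ACL. Rate based rules require type = "RATE_BASED".
resource "aws_wafregional_web_acl" "wafacl" {
  name        = "tfWebACL"
  metric_name = "tfWebACL"

  default_action {
    type = "ALLOW"
  }

  rule {
    action {
      type = "BLOCK"
    }

    priority = 1
    rule_id  = "${aws_wafregional_rate_based_rule.wafrule.id}"
    type     = "RATE_BASED"
  }
}
```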
diff --git a/website/docs/r/wafregional_regex_match_set.html.markdown b/website/docs/r/wafregional_regex_match_set.html.markdown new file mode 100644 index 00000000000..9f53db11311 --- /dev/null +++ b/website/docs/r/wafregional_regex_match_set.html.markdown @@ -0,0 +1,68 @@ +--- +layout: "aws" +page_title: "AWS: wafregional_regex_match_set" +sidebar_current: "docs-aws-resource-wafregional-regex-match-set" +description: |- + Provides a AWS WAF Regional Regex Match Set resource. +--- + +# aws_wafregional_regex_match_set + +Provides a WAF Regional Regex Match Set Resource + +## Example Usage + +```hcl +resource "aws_wafregional_regex_match_set" "example" { + name = "example" + + regex_match_tuple { + field_to_match { + data = "User-Agent" + type = "HEADER" + } + + regex_pattern_set_id = "${aws_wafregional_regex_pattern_set.example.id}" + text_transformation = "NONE" + } +} + +resource "aws_wafregional_regex_pattern_set" "example" { + name = "example" + regex_pattern_strings = ["one", "two"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name or description of the Regex Match Set. +* `regex_match_tuple` - (Required) The regular expression pattern that you want AWS WAF to search for in web requests, + the location in requests that you want AWS WAF to search, and other settings. See below. + +### Nested Arguments + +#### `regex_match_tuple` + + * `field_to_match` - (Required) The part of a web request that you want to search, such as a specified header or a query string. + * `regex_pattern_set_id` - (Required) The ID of a [Regex Pattern Set](/docs/providers/aws/r/waf_regex_pattern_set.html). + * `text_transformation` - (Required) Text transformations used to eliminate unusual formatting that attackers use in web requests in an effort to bypass AWS WAF. + e.g. `CMD_LINE`, `HTML_ENTITY_DECODE` or `NONE`. + See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_ByteMatchTuple.html#WAF-Type-ByteMatchTuple-TextTransformation) + for all supported values. + +#### `field_to_match` + +* `data` - (Optional) When `type` is `HEADER`, enter the name of the header that you want to search, e.g. `User-Agent` or `Referer`. + If `type` is any other value, omit this field. +* `type` - (Required) The part of the web request that you want AWS WAF to search for a specified string. + e.g. `HEADER`, `METHOD` or `BODY`. + See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) + for all supported values. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the WAF Regional Regex Match Set. diff --git a/website/docs/r/wafregional_regex_pattern_set.html.markdown b/website/docs/r/wafregional_regex_pattern_set.html.markdown new file mode 100644 index 00000000000..f474cca8144 --- /dev/null +++ b/website/docs/r/wafregional_regex_pattern_set.html.markdown @@ -0,0 +1,33 @@ +--- +layout: "aws" +page_title: "AWS: wafregional_regex_pattern_set" +sidebar_current: "docs-aws-resource-wafregional-regex-pattern-set" +description: |- + Provides a AWS WAF Regional Regex Pattern Set resource. 
+--- + +# aws_wafregional_regex_pattern_set + +Provides a WAF Regional Regex Pattern Set Resource + +## Example Usage + +```hcl +resource "aws_wafregional_regex_pattern_set" "example" { + name = "example" + regex_pattern_strings = ["one", "two"] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name or description of the Regex Pattern Set. +* `regex_pattern_strings` - (Optional) A list of regular expression (regex) patterns that you want AWS WAF to search for, such as `B[a@]dB[o0]t`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the WAF Regional Regex Pattern Set. diff --git a/website/docs/r/wafregional_rule.html.markdown b/website/docs/r/wafregional_rule.html.markdown index e879cf89299..fd9e09b1c0d 100644 --- a/website/docs/r/wafregional_rule.html.markdown +++ b/website/docs/r/wafregional_rule.html.markdown @@ -6,7 +6,7 @@ description: |- Provides an AWS WAF Regional rule resource for use with ALB. --- -# aws\_wafregional\_rule +# aws_wafregional_rule Provides an WAF Regional Rule Resource for use with Application Load Balancer. @@ -15,7 +15,7 @@ Provides an WAF Regional Rule Resource for use with Application Load Balancer. ```hcl resource "aws_wafregional_ipset" "ipset" { name = "tfIPSet" - + ip_set_descriptor { type = "IPV4" value = "192.0.7.0/24" @@ -25,7 +25,7 @@ resource "aws_wafregional_ipset" "ipset" { resource "aws_wafregional_rule" "wafrule" { name = "tfWAFRule" metric_name = "tfWAFRule" - + predicate { type = "IPMatch" data_id = "${aws_wafregional_ipset.ipset.id}" @@ -40,24 +40,24 @@ The following arguments are supported: * `name` - (Required) The name or description of the rule. * `metric_name` - (Required) The name or description for the Amazon CloudWatch metric of this rule. -* `predicate` - (Optional) The `ByteMatchSet`, `IPSet`, `SizeConstraintSet`, `SqlInjectionMatchSet`, or `XssMatchSet` objects to include in a rule. +* `predicate` - (Optional) The objects to include in a rule. ## Nested Fields ### `predicate` -See [docs](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-wafregional-rule-predicates.html) +See the [WAF Documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_Predicate.html) for more information. #### Arguments -* `type` - (Required) The type of predicate in a rule, such as an IPSet (IPMatch) +* `type` - (Required) The type of predicate in a rule. Valid values: `ByteMatch`, `GeoMatch`, `IPMatch`, `RegexMatch`, `SizeConstraint`, `SqlInjectionMatch`, or `XssMatch` * `data_id` - (Required) The unique identifier of a predicate, such as the ID of a `ByteMatchSet` or `IPSet`. -* `negated` - (Required) Whether to use the settings or the negated settings that you specified in the `ByteMatchSet`, `IPSet`, `SizeConstraintSet`, `SqlInjectionMatchSet`, or `XssMatchSet` objects. +* `negated` - (Required) Whether to use the settings or the negated settings that you specified in the objects. ## Remarks ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF Regional Rule. 
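The same `predicate` block works for any of the match set types listed above, not only IPSets. The following is a minimal illustrative sketch, not part of the upstream page, assuming the `aws_wafregional_geo_match_set.geo_match_set` resource from the geo match set example above; the rule name and metric name are placeholders.

```hcl
# Sketch only: a WAF Regional rule that matches requests from the countries
# in the geo match set above, via a GeoMatch predicate.
resource "aws_wafregional_rule" "georule" {
  name        = "tfWAFGeoRule"
  metric_name = "tfWAFGeoRule"

  predicate {
    data_id = "${aws_wafregional_geo_match_set.geo_match_set.id}"
    negated = false
    type    = "GeoMatch"
  }
}
```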
diff --git a/website/docs/r/wafregional_rule_group.html.markdown b/website/docs/r/wafregional_rule_group.html.markdown
new file mode 100644
index 00000000000..eef2c289fc3
--- /dev/null
+++ b/website/docs/r/wafregional_rule_group.html.markdown
@@ -0,0 +1,60 @@
+---
+layout: "aws"
+page_title: "AWS: wafregional_rule_group"
+sidebar_current: "docs-aws-resource-wafregional-rule-group"
+description: |-
+  Provides an AWS WAF Regional Rule Group resource.
+---
+
+# aws_wafregional_rule_group
+
+Provides a WAF Regional Rule Group Resource
+
+## Example Usage
+
+```hcl
+resource "aws_wafregional_rule" "example" {
+  name        = "example"
+  metric_name = "example"
+}
+
+resource "aws_wafregional_rule_group" "example" {
+  name        = "example"
+  metric_name = "example"
+
+  activated_rule {
+    action {
+      type = "COUNT"
+    }
+
+    priority = 50
+    rule_id  = "${aws_wafregional_rule.example.id}"
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) A friendly name of the rule group
+* `metric_name` - (Required) A friendly name for the metrics from the rule group
+* `activated_rule` - (Optional) A list of activated rules, see below
+
+## Nested Blocks
+
+### `activated_rule`
+
+#### Arguments
+
+* `action` - (Required) Specifies the action that CloudFront or AWS WAF takes when a web request matches the conditions in the rule.
+  * `type` - (Required) e.g. `BLOCK`, `ALLOW`, or `COUNT`
+* `priority` - (Required) Specifies the order in which the rules are evaluated. Rules with a lower value are evaluated before rules with a higher value.
+* `rule_id` - (Required) The ID of a [rule](/docs/providers/aws/r/wafregional_rule.html)
+* `type` - (Optional) The rule type, either [`REGULAR`](/docs/providers/aws/r/wafregional_rule.html), [`RATE_BASED`](/docs/providers/aws/r/wafregional_rate_based_rule.html), or `GROUP`. Defaults to `REGULAR`.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - The ID of the WAF Regional Rule Group.
diff --git a/website/docs/r/wafregional_size_constraint_set.html.markdown b/website/docs/r/wafregional_size_constraint_set.html.markdown
new file mode 100644
index 00000000000..04be24e814b
--- /dev/null
+++ b/website/docs/r/wafregional_size_constraint_set.html.markdown
@@ -0,0 +1,72 @@
+---
+layout: "aws"
+page_title: "AWS: wafregional_size_constraint_set"
+sidebar_current: "docs-aws-resource-wafregional-size-constraint-set"
+description: |-
+  Provides an AWS WAF Regional Size Constraint Set resource for use with ALB.
+---
+
+# aws_wafregional_size_constraint_set
+
+Provides a WAF Regional Size Constraint Set Resource for use with Application Load Balancer.
+
+## Example Usage
+
+```hcl
+resource "aws_wafregional_size_constraint_set" "size_constraint_set" {
+  name = "tfsize_constraints"
+
+  size_constraints {
+    text_transformation = "NONE"
+    comparison_operator = "EQ"
+    size                = "4096"
+
+    field_to_match {
+      type = "BODY"
+    }
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name or description of the Size Constraint Set.
+* `size_constraints` - (Optional) Specifies the parts of web requests that you want to inspect the size of.
+
+## Nested Blocks
+
+### `size_constraints`
+
+#### Arguments
+
+* `field_to_match` - (Required) Specifies where in a web request to look for the size constraint.
+* `comparison_operator` - (Required) The type of comparison you want to perform.
+  e.g. `EQ`, `NE`, `LT`, `GT`.
+ See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_SizeConstraint.html#WAF-Type-SizeConstraint-ComparisonOperator) for all supported values. +* `size` - (Required) The size in bytes that you want to compare against the size of the specified `field_to_match`. + Valid values are between 0 - 21474836480 bytes (0 - 20 GB). +* `text_transformation` - (Required) Text transformations used to eliminate unusual formatting that attackers use in web requests in an effort to bypass AWS WAF. + If you specify a transformation, AWS WAF performs the transformation on `field_to_match` before inspecting a request for a match. + e.g. `CMD_LINE`, `HTML_ENTITY_DECODE` or `NONE`. + See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_SizeConstraint.html#WAF-Type-SizeConstraint-TextTransformation) + for all supported values. + **Note:** if you choose `BODY` as `type`, you must choose `NONE` because CloudFront forwards only the first 8192 bytes for inspection. + +### `field_to_match` + +#### Arguments + +* `data` - (Optional) When `type` is `HEADER`, enter the name of the header that you want to search, e.g. `User-Agent` or `Referer`. + If `type` is any other value, omit this field. +* `type` - (Required) The part of the web request that you want AWS WAF to search for a specified string. + e.g. `HEADER`, `METHOD` or `BODY`. + See [docs](http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) + for all supported values. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the WAF Size Constraint Set. diff --git a/website/docs/r/wafregional_sql_injection_match_set.html.markdown b/website/docs/r/wafregional_sql_injection_match_set.html.markdown index abb46dcf0c4..bb2198d62ac 100644 --- a/website/docs/r/wafregional_sql_injection_match_set.html.markdown +++ b/website/docs/r/wafregional_sql_injection_match_set.html.markdown @@ -15,8 +15,10 @@ Provides a WAF Regional SQL Injection Match Set Resource for use with Applicatio ```hcl resource "aws_wafregional_sql_injection_match_set" "sql_injection_match_set" { name = "tf-sql_injection_match_set" + sql_injection_match_tuple { text_transformation = "URL_DECODE" + field_to_match { type = "QUERY_STRING" } @@ -53,6 +55,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF SqlInjectionMatchSet. diff --git a/website/docs/r/wafregional_web_acl.html.markdown b/website/docs/r/wafregional_web_acl.html.markdown index 5df9aa91326..ef78a53d2c7 100644 --- a/website/docs/r/wafregional_web_acl.html.markdown +++ b/website/docs/r/wafregional_web_acl.html.markdown @@ -15,8 +15,8 @@ Provides a WAF Regional Web ACL Resource for use with Application Load Balancer. 
```hcl resource "aws_wafregional_ipset" "ipset" { name = "tfIPSet" - - ip_set_descriptors { + + ip_set_descriptor { type = "IPV4" value = "192.0.7.0/24" } @@ -25,8 +25,8 @@ resource "aws_wafregional_ipset" "ipset" { resource "aws_wafregional_rule" "wafrule" { name = "tfWAFRule" metric_name = "tfWAFRule" - - predicates { + + predicate { data_id = "${aws_wafregional_ipset.ipset.id}" negated = false type = "IPMatch" @@ -36,18 +36,19 @@ resource "aws_wafregional_rule" "wafrule" { resource "aws_wafregional_web_acl" "wafacl" { name = "tfWebACL" metric_name = "tfWebACL" - + default_action { type = "ALLOW" } - - rules { + + rule { action { - type = "BLOCK" + type = "BLOCK" } - + priority = 1 rule_id = "${aws_wafregional_rule.wafrule.id}" + type = "REGULAR" } } ``` @@ -69,10 +70,12 @@ See [docs](https://docs.aws.amazon.com/waf/latest/APIReference/API_regional_Acti #### Arguments -* `action` - (Required) The action that CloudFront or AWS WAF takes when a web request matches the conditions in the rule. +* `action` - (Required) The action that CloudFront or AWS WAF takes when a web request matches the conditions in the rule. Not used if `type` is `GROUP`. +* `override_action` - (Required) Override the action that a group requests CloudFront or AWS WAF takes when a web request matches the conditions in the rule. Only used if `type` is `GROUP`. * `priority` - (Required) Specifies the order in which the rules in a WebACL are evaluated. Rules with a lower value are evaluated before rules with a higher value. -* `rule_id` - (Required) ID of the associated [rule](/docs/providers/aws/r/wafregional_rule.html) +* `rule_id` - (Required) ID of the associated WAF (Regional) rule (e.g. [`aws_wafregional_rule`](/docs/providers/aws/r/wafregional_rule.html)). WAF (Global) rules cannot be used. +* `type` - (Optional) The rule type, either `REGULAR`, as defined by [Rule](http://docs.aws.amazon.com/waf/latest/APIReference/API_Rule.html), `RATE_BASED`, as defined by [RateBasedRule](http://docs.aws.amazon.com/waf/latest/APIReference/API_RateBasedRule.html), or `GROUP`, as defined by [RuleGroup](https://docs.aws.amazon.com/waf/latest/APIReference/API_RuleGroup.html). The default is REGULAR. If you add a RATE_BASED rule, you need to set `type` as `RATE_BASED`. If you add a GROUP rule, you need to set `type` as `GROUP`. ### `default_action` / `action` @@ -83,6 +86,6 @@ See [docs](https://docs.aws.amazon.com/waf/latest/APIReference/API_regional_Acti ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the WAF Regional WebACL. diff --git a/website/docs/r/wafregional_web_acl_association.html.markdown b/website/docs/r/wafregional_web_acl_association.html.markdown new file mode 100644 index 00000000000..9681b13effe --- /dev/null +++ b/website/docs/r/wafregional_web_acl_association.html.markdown @@ -0,0 +1,96 @@ +--- +layout: "aws" +page_title: "AWS: aws_wafregional_web_acl_association" +sidebar_current: "docs-aws-resource-wafregional-web-acl-association" +description: |- + Provides a resource to create an association between a WAF Regional WebACL and Application Load Balancer. +--- + +# aws_wafregional_web_acl_association + +Provides a resource to create an association between a WAF Regional WebACL and Application Load Balancer. + +-> **Note:** An Application Load Balancer can only be associated with one WAF Regional WebACL. 
+ +## Example Usage + +```hcl +resource "aws_wafregional_ipset" "ipset" { + name = "tfIPSet" + + ip_set_descriptor { + type = "IPV4" + value = "192.0.7.0/24" + } +} + +resource "aws_wafregional_rule" "foo" { + name = "tfWAFRule" + metric_name = "tfWAFRule" + + predicate { + data_id = "${aws_wafregional_ipset.ipset.id}" + negated = false + type = "IPMatch" + } +} + +resource "aws_wafregional_web_acl" "foo" { + name = "foo" + metric_name = "foo" + + default_action { + type = "ALLOW" + } + + rule { + action { + type = "BLOCK" + } + + priority = 1 + rule_id = "${aws_wafregional_rule.foo.id}" + } +} + +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" +} + +data "aws_availability_zones" "available" {} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "10.1.1.0/24" + availability_zone = "${data.aws_availability_zones.available.names[0]}" +} + +resource "aws_subnet" "bar" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "10.1.2.0/24" + availability_zone = "${data.aws_availability_zones.available.names[1]}" +} + +resource "aws_alb" "foo" { + internal = true + subnets = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"] +} + +resource "aws_wafregional_web_acl_association" "foo" { + resource_arn = "${aws_alb.foo.arn}" + web_acl_id = "${aws_wafregional_web_acl.foo.id}" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `web_acl_id` - (Required) The ID of the WAF Regional WebACL to create an association. +* `resource_arn` - (Required) Application Load Balancer ARN to associate with. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the association diff --git a/website/docs/r/wafregional_xss_match_set.html.markdown b/website/docs/r/wafregional_xss_match_set.html.markdown index e2bd193dd2b..b2fb7bfbd25 100644 --- a/website/docs/r/wafregional_xss_match_set.html.markdown +++ b/website/docs/r/wafregional_xss_match_set.html.markdown @@ -52,6 +52,6 @@ The following arguments are supported: ## Attributes Reference -The following attributes are exported: +In addition to all arguments above, the following attributes are exported: * `id` - The ID of the Regional WAF XSS Match Set.
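Neither web ACL example above shows the `GROUP` rule type that the `aws_waf_web_acl` and `aws_wafregional_web_acl` arguments describe. The following is a minimal illustrative sketch, not part of the upstream pages, reusing the `aws_wafregional_rule_group.example` resource from the rule group page above; the ACL name, metric name, and priority are placeholders.

```hcl
# Sketch only: attaches a rule group to a WAF Regional web ACL.
# GROUP rules use override_action instead of action and require type = "GROUP".
resource "aws_wafregional_web_acl" "group_acl" {
  name        = "tfWebACLGroup"
  metric_name = "tfWebACLGroup"

  default_action {
    type = "ALLOW"
  }

  rule {
    override_action {
      type = "NONE"
    }

    priority = 10
    rule_id  = "${aws_wafregional_rule_group.example.id}"
    type     = "GROUP"
  }
}
```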